Tianyang Hu
Assistant Professor, School of Data Science, The Chinese University of Hong Kong, Shenzhen.
Photo taken at SUFE in 2023. Special thanks to Yixuan and Taiyun :)
I was previously a Postdoctoral Fellow at the National University of Singapore, working with Prof. Kenji Kawaguchi, and a Research Scientist in the AI Theory Group of Huawei Noah’s Ark Lab. I received my Ph.D. in Statistics from Purdue University under the supervision of Prof. Guang Cheng, after earning an M.S. in Statistics from the University of Chicago and a B.S. in Mathematics from Tsinghua University.
My research lies at the intersection of Statistics and AI, aiming to deepen the theoretical foundations of AI and to design theory-inspired algorithms. I am particularly interested in:
- Statistical Machine Learning
- Representation Learning
- Deep Generative Modeling
I am actively seeking PhD/MPhil students and research assistants with a strong theoretical foundation and hands-on experience in AI to join my group. If you are interested, please send me an email with your CV.
selected publications
- SeedPrints: Fingerprints Can Even Tell Which Seed Your Large Language Model Was Trained From. International Conference on Learning Representations, 2026. Abridged in the NeurIPS 2025 Lock-LLM Workshop: Prevent Unauthorized Knowledge Use from Large Language Models.
- Prefix-Tuning+: Modernizing Prefix-Tuning by Decoupling the Prefix from Attention. International Conference on Learning Representations, 2026. Abridged in the ICML 2025 Workshop on Test-Time Adaptation: Putting Updates to the Test (PUT).
- Reward-Instruct: A Reward-Centric Approach to Fast Photo-Realistic Image Generation. Advances in Neural Information Processing Systems, 2025.
- Minimax Optimal Deep Neural Network Classifiers Under Smooth Decision Boundary. Journal of Machine Learning Research, 26 (136), 2025.
- Random Smoothing Regularization in Kernel Gradient Descent Learning. Journal of Machine Learning Research, 25 (284), 2024.
- Complexity Matters: Rethinking the Latent Space for Generative Modeling. Advances in Neural Information Processing Systems, 2023 (Spotlight).
- Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models. Advances in Neural Information Processing Systems, 2023.
- Your Contrastive Learning Is Secretly Doing Stochastic Neighbor Embedding. International Conference on Learning Representations, 2023.
- Understanding Square Loss in Training Overparametrized Neural Network Classifiers. Advances in Neural Information Processing Systems, 2022 (Spotlight).
- Regularization Matters: A Nonparametric Perspective on Overparametrized Neural Network. International Conference on Artificial Intelligence and Statistics, 2021.