Tianyang Hu
Assistant Professor, School of Data Science, The Chinese University of Hong Kong, Shenzhen.

Photo taken at SUFE in 2023. Special thanks to Yixuan and Taiyun :)
I was previously a Postdoctoral Fellow at the National University of Singapore, working with Prof. Kenji Kawaguchi, and a Research Scientist in the AI Theory Group of Huawei Noah’s Ark Lab. I received my Ph.D. in Statistics from Purdue University under the supervision of Prof. Guang Cheng, after earning an M.S. in Statistics from the University of Chicago and a B.S. in Mathematics from Tsinghua University.
My research focuses on the intersection of Statistics and AI, aiming to deepen the theoretical foundations of AI and design theory-inspired algorithms. I am particularly interested in:
- Statistical Machine Learning
- Representation Learning
- Deep Generative Modeling
I am actively seeking PhD/MPhil students and research assistants with a strong theoretical foundation and hands-on experience in AI to join my group. If you are interested, please send me an email with your CV.
Selected Publications
- [JMLR] Minimax Optimal Deep Neural Network Classifiers Under Smooth Decision Boundary. Journal of Machine Learning Research, to appear, 2025
- [ICML] Exact Conversion of In-Context Learning to Model Weights in Linearized-Attention Transformers. International Conference on Machine Learning, 2024
- [ICML] Referee Can Play: An Alternative Approach to Conditional Generation via Model Inversion. International Conference on Machine Learning, 2024
- [NeurIPS Spotlight] Complexity Matters: Rethinking the Latent Space for Generative Modeling. Advances in Neural Information Processing Systems, 2023
- [NeurIPS] Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models. Advances in Neural Information Processing Systems, 2023
- [JMLR] Random Smoothing Regularization in Kernel Gradient Descent Learning. Journal of Machine Learning Research, 25(284), 2024
- [ICLR] Your Contrastive Learning Is Secretly Doing Stochastic Neighbor Embedding. International Conference on Learning Representations, 2023
- [NeurIPS Spotlight] Understanding Square Loss in Training Overparametrized Neural Network Classifiers. Advances in Neural Information Processing Systems, 2022
- [AISTATS] Regularization Matters: A Nonparametric Perspective on Overparametrized Neural Network. International Conference on Artificial Intelligence and Statistics, 2021