Tianyang Hu

Assistant Professor, School of Data Science, The Chinese University of Hong Kong, Shenzhen.

Photo taken at SUFE in 2023. Special thanks to Yixuan and Taiyun :)

I was previously a Postdoctoral Fellow at the National University of Singapore, working with Prof. Kenji Kawaguchi, and a Research Scientist in the AI Theory Group of Huawei Noah’s Ark Lab. I received my Ph.D. in Statistics from Purdue University under the supervision of Prof. Guang Cheng, after earning an M.S. in Statistics from the University of Chicago and a B.S. in Mathematics from Tsinghua University.

My research focuses on the intersection of Statistics and AI, aiming to deepen the theoretical foundations of AI and design theory-inspired algorithms. I am particularly interested in:

  • Statistical Machine Learning
  • Representation Learning
  • Deep Generative Modeling

I am actively seeking PhD/MPhil students and research assistants with strong theoretical foundations and hands-on AI experience to join my group. If you are interested, please email me your CV.

selected publications

  1. ICLR
    SeedPrints: Fingerprints Can Even Tell Which Seed Your Large Language Model Was Trained From
    Yao Tong, Haonan Wang, Siquan Li, Kenji Kawaguchi, and Tianyang Hu
International Conference on Learning Representations. Abridged in the NeurIPS 2025 Lock-LLM Workshop: Prevent Unauthorized Knowledge Use from Large Language Models, 2026
  2. ICLR
    Prefix-Tuning+: Modernizing Prefix-Tuning by Decoupling the Prefix from Attention
    Haonan Wang*, Brian Chen*, Siquan Li*, Xinhe Liang, Hwee Kuan Lee, Kenji Kawaguchi, and Tianyang Hu
    International Conference on Learning Representations. Abridged in the ICML 2025 Workshop on Test-Time Adaptation: Putting Updates to the Test (PUT), 2026
  3. NeurIPS
    Reward-Instruct: A Reward-Centric Approach to Fast Photo-Realistic Image Generation
    Yihong Luo*, Tianyang Hu*, Weijian Luo, Kenji Kawaguchi, and Jing Tang
    Advances in Neural Information Processing Systems, 2025
  4. JMLR
    Minimax Optimal Deep Neural Network Classifiers Under Smooth Decision Boundary
    Tianyang Hu, Ruiqi Liu, Zuofeng Shang, and Guang Cheng
    Journal of Machine Learning Research, 26 (136), 2025
  5. JMLR
    Random Smoothing Regularization in Kernel Gradient Descent Learning
    (α-β) Liang Ding, Tianyang Hu, Jiahang Jiang, Donghao Li, Wenjia Wang, and Yuan Yao
    Journal of Machine Learning Research, 25 (284), 2024
  6. NeurIPS Spotlight
    Complexity Matters: Rethinking the Latent Space for Generative Modeling
    Tianyang Hu, Fei Chen, Haonan Wang, Jiawei Li, Wenjia Wang, Jiacheng Sun, and Zhenguo Li
    Advances in Neural Information Processing Systems, 2023
  7. NeurIPS
    Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models
    Weijian Luo, Tianyang Hu, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, and Zhihua Zhang
    Advances in Neural Information Processing Systems, 2023
  8. ICLR
    Your Contrastive Learning Is Secretly Doing Stochastic Neighbor Embedding
    Tianyang Hu, Zhili Liu, Fengwei Zhou, Wenjia Wang, and Weiran Huang
    International Conference on Learning Representations, 2023
  9. NeurIPS Spotlight
    Understanding Square Loss in Training Overparametrized Neural Network Classifiers
    Tianyang Hu*, Jun Wang*, Wenjia Wang*, and Zhenguo Li
    Advances in Neural Information Processing Systems, 2022
  10. AISTATS
    Regularization Matters: A Nonparametric Perspective on Overparametrized Neural Network
    Tianyang Hu*, Wenjia Wang*, Cong Lin, and Guang Cheng
International Conference on Artificial Intelligence and Statistics, 2021