Tianyang Hu

Postdoc at National University of Singapore.

I am a Postdoc at the National University of Singapore, working with Prof. Kenji Kawaguchi. I am currently on the job market for tenure-track positions starting in Fall 2025.

Previously, I was a research scientist in the AI Theory Group of Huawei Noah's Ark Lab. I earned my Ph.D. in Statistics from Purdue University under the supervision of Prof. Guang Cheng. Before that, I completed my M.S. in Statistics at the University of Chicago and my B.S. in Mathematics at Tsinghua University.

My research interests lie at the intersection of Statistics and AI, with the aim of advancing the theoretical understanding of AI and developing new, theory-inspired algorithms. I am particularly interested in:

  • Statistical Machine Learning
  • Representation Learning
  • Deep Generative Modeling
  • Understanding Large Language Models

news

Nov 11, 2024 I started a postdoctoral position at the National University of Singapore, working with Prof. Kenji Kawaguchi.
May 6, 2024 Four recent papers related to diffusion models were accepted: two on efficient sampling and two on training-free conditional guidance. I am now shifting toward understanding LLMs, and my first work, on the duality between prompting and finetuning, was accepted to ICML 2024!
Apr 10, 2024 Invited talk on the Duality Between Prompting and Weight Update in Transformers @ 2nd Workshop on ML Theory & Practice, Renmin University of China.
Nov 25, 2023 I will be hosting a session on AI4Math and Understanding LLMs at the 2023 X-AGI conference.
Sep 22, 2023 Two papers on generative modeling were accepted to NeurIPS 2023, including one Spotlight 🔦
Aug 25, 2023 Invited talk @ Conference on ML & Stat, East China Normal University.

selected publications

1. NeurIPS Spotlight
    Complexity Matters: Rethinking the Latent Space for Generative Modeling
    Tianyang Hu, Fei Chen, Haonan Wang, and 4 more authors
    Advances in Neural Information Processing Systems, 2023
  2. NeurIPS
    Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models
    Weijian Luo, Tianyang Hu, Shifeng Zhang, and 3 more authors
    Advances in Neural Information Processing Systems, 2023
  3. JMLR
    Random Smoothing Regularization in Kernel Gradient Descent Learning
Liang Ding, Tianyang Hu, Jiahang Jiang, and 3 more authors
    Journal of Machine Learning Research, 25(284), 2024
  4. UAI
    Exact Count of Boundary Pieces of ReLU Classifiers: Towards the Proper Complexity Measure for Classification
    Pawel Piwek, Adam Klukowski, and Tianyang Hu
    In Conference on Uncertainty in Artificial Intelligence, 2023
  5. ICLR
    Your Contrastive Learning Is Secretly Doing Stochastic Neighbor Embedding
    Tianyang Hu, Zhili Liu, Fengwei Zhou, and 2 more authors
    International Conference on Learning Representations, 2023
6. NeurIPS Spotlight
    Understanding Square Loss in Training Overparametrized Neural Network Classifiers
    Tianyang Hu*, Jun Wang*, Wenjia Wang*, and 1 more author
    Advances in Neural Information Processing Systems, 2022
  7. AISTATS
    Regularization Matters: A Nonparametric Perspective on Overparametrized Neural Network
    Tianyang Hu*, Wenjia Wang*, Cong Lin, and 1 more author
    In International Conference on Artificial Intelligence and Statistics, 2021
  8. JMLR
    Minimax Optimal Deep Neural Network Classifiers Under Smooth Decision Boundary
    Tianyang Hu, Ruiqi Liu, Zuofeng Shang, and 1 more author
Journal of Machine Learning Research, accepted after minor revision, 2024