Tianyu Chen

University of Texas at Austin; tianyuchen@utexas.edu

(Profile photo: White Sands, New Mexico)

I am a third-year PhD student at the University of Texas at Austin, supervised by Professor Mingyuan Zhou. Before joining UT, I obtained my master’s degree in Statistics from the University of Chicago, where I was supervised by Professor Jingshu Wang and collaborated closely with Kevin Bello, Bryon Aragam, Pradeep Ravikumar, and Francesco Locatello. I earned my bachelor’s degree in Statistics from Fudan University, where I spent some of the most memorable years of my life.

Research Interests: I have broad interests in statistical machine learning. Specifically, my research focuses on:

  • Generative Models – diffusion models, model distillation, and controllable image editing.
  • Reinforcement Learning for LLMs – post-training for both autoregressive (AR) and diffusion language models (dLLMs), focusing on reasoning enhancement and algorithm design.
  • Agentic AI – constructing autonomous agents for data generation and multi-turn evaluation.
  • Statistical Sampling & Inference – neural posterior sampling and hypothesis testing under uncertainty.

news

Apr 01, 2026 Excited to start as a Student Researcher at Google DeepMind!
Jan 01, 2026 Two papers accepted at ICLR 2026: EdiVal-Agent (object-centric automated image editing evaluation, project page) and Score Distillation Beyond Acceleration (generative modeling from corrupted data).
Nov 01, 2025 Excited to announce that two papers have been accepted to NeurIPS 2025! 🎉 CoLT (the Conditional Localization Test for assessing neural posterior estimates) was accepted as a Spotlight, and Improving Data Efficiency for LLM Reinforcement Fine-tuning introduces difficulty-targeted online data selection for more efficient LLM alignment. See you in San Diego!
Sep 01, 2025 Our new paper EdiVal-Agent introduces an object-centric VLM agent for automated, fine-grained evaluation of multi-turn image editing! Our benchmark reveals that SeedDream v4.0 > Nano Banana > GPT-Image-1 in multi-turn editing quality. The project received media coverage and is showcased on our project page.
Mar 01, 2025 Our new papers Denoising Score Distillation (DSD) and Restoration Score Distillation (RSD) propose a novel way to apply distillation methods to various types of corrupted data, including Fourier-space MRI data. We prove that distillation can further improve generative quality even when only corrupted data are available.

selected publications

  1. ICLR 2026
    EdiVal-Agent: An Object-Centric Framework for Automated, Scalable, Fine-Grained Evaluation of Multi-Turn Editing
    Tianyu Chen*, Yasi Zhang*, Zhi Zhang, and 13 more authors
    The 14th International Conference on Learning Representations, 2026
  2. ICLR 2026
    Score Distillation Beyond Acceleration: Generative Modeling from Corrupted Data
    Tianyu Chen*, Yasi Zhang*, Zhendong Wang, and 2 more authors
    The 14th International Conference on Learning Representations, 2026
  3. NeurIPS 2025
    CoLT: The Conditional Localization Test for Assessing the Accuracy of Neural Posterior Estimates
    Tianyu Chen, Vansh Bansal, and James G. Scott
    Advances in Neural Information Processing Systems, 2025
  4. NeurIPS 2024
    Diffusion Policies Creating a Trust Region for Offline Reinforcement Learning
    Tianyu Chen, Zhendong Wang, and Mingyuan Zhou
    Advances in Neural Information Processing Systems, 2024
  5. NeurIPS 2023
    iSCAN: Identifying Causal Mechanism Shifts Among Nonlinear Additive Noise Models
    Tianyu Chen, Kevin Bello, Bryon Aragam, and 1 more author
    Advances in Neural Information Processing Systems, 2023
  6. PNAS
    Model-Based Trajectory Inference for Single-Cell RNA Sequencing Using Deep Learning with a Mixture Prior
    Tianyu Chen*, Jin-Hong Du*, Ming Gao, and 1 more author
    Proceedings of the National Academy of Sciences, 2024