Sanghani Center Student Spotlight: Xinyue Zeng
February 20, 2026
Ph.D. student Xinyue Zeng recently learned that two of her papers, one on which she is lead author and the other co-lead author, have been accepted to the International Conference on Learning Representations (ICLR) 2026.
The two papers, to be presented in April at ICLR in Rio de Janeiro, Brazil, are:
· "HalluGuard: Demystifying Data-Driven and Reasoning-Driven Hallucinations in LLMs" -- proposing a principled decomposition of hallucinations and consistently outperforms strong baselines on math and symbolic reasoning benchmarks, especially in long-horizon settings
· "Plan and Budget: Effective and Efficient Test-Time Scaling on Reasoning Large Language Models" –- introducing budget-aware test-time scaling that achieves stronger reasoning performance than existing methods with significantly lower compute
Advised by Dawei Zhou, Zeng’s research focuses on understanding and improving the reliability of large language model (LLM) reasoning and inference, including inference-time behaviors shaped by reinforcement learning–based training and decision-making.
Zeng investigates how different sources of uncertainty arise during model training and multi-step reasoning, and how those uncertainties propagate and amplify during inference.
“I became interested in this area through my previous studies in statistics and biostatistics, where understanding uncertainty and inference is fundamental,” she said.
While working with real-world data and machine learning models, she observed that strong predictive performance alone was often insufficient; models frequently failed because their reasoning and inference processes were brittle or poorly characterized.
“As large language models began to perform increasingly complex, multi-step reasoning, these issues became more pronounced,” said Zeng. “This motivated me to focus on studying how LLM reasoning and inference behaviors emerge during training and deployment, why errors such as hallucinations arise, and how these failures propagate across multi-step decision-making processes.”
She earned both a bachelor of science degree (in statistics) and bachelor of arts degree (in English literature and linguistics, with honors) from Zhejiang University, China, and a master’s degree in biostatistics at the University of North Carolina at Chapel Hill.
When considering a Ph.D. program in computer science, Zeng said she was attracted to Virginia Tech because of its strong research culture. The Sanghani Center stood out to her as a place where interdisciplinary work is encouraged and where AI research is closely tied to questions of reliability, robustness, and real-world impact.
“What I value most is the research environment. The center brings together faculty and students who are interested in rigorous thinking about uncertainty, trustworthiness, and deployment of AI systems," she said. "There is strong support for pursuing long-term research questions that address real-world problems.”
In addition to Zeng’s two papers at the upcoming ICLR conference, her published work includes:
· "LENSLLM: Unveiling Fine-Tuning Dynamics for LLM Selection," at ICML 2025
· "Cell type-specific inference from bulk RNA-sequencing data by integrating single-cell reference profiles via EPIC unmix," in the journal Genome Biology, November 2025
After graduation, projected for May 2028, Zeng plans to pursue a research-focused role in industry, where she can continue working on foundational problems in large-scale AI systems.
"I am particularly interested in advancing reliable reasoning, inference, and reinforcement learning methods for real-world deployment," she said.