We study how to build language models that are reliable and safe in real-world use. Our goal is to make NLP and multimodal systems robust, fair, and trustworthy across diverse environments.
Bias & fairness / Robustness / Trustworthy AI / Multimodal reliability
We study AI systems that understand and generate code. Our goal is to enable AI to reason about programs and to support coding and data-analysis tasks.
Code generation / Program reasoning / Language–code interaction
We study user-centered recommender systems that capture how user behavior and preferences change over time. Our goal is to build AI systems that provide personalized, adaptive recommendations for real-world applications, including education.
Sequential recommendation / LLM-based recommendation / AI for education