Pei Zhou

firstname.lastname at microsoft dot com

I am a Senior Applied Scientist at the Microsoft Office of Applied Research, where I drive research on improving Copilot's reasoning over complex intents through learning from interaction, synthetic data generation, and UX innovation. I received my Ph.D. in Computer Science from the University of Southern California, where I worked in the USC-NLP Group. My research interests are large language model (LLM) reasoning, communicating agents, and human-AI symbiosis.

Previously, I received my undergraduate degree in Mathematics of Computation from the University of California, Los Angeles (UCLA). I have also done research at Google DeepMind (Gemini), the Allen Institute for Artificial Intelligence (AI2), and Amazon Alexa AI. My work has been featured in The Register, Science Daily, VentureBeat, Tech Xplore, and other outlets.

We are on the lookout for exceptional applied science talent! If you are passionate about improving LLM systems by learning from interaction and excited about the impact of applied research, feel free to drop me a line :)

[Full CV] [Microsoft Profile] [Twitter] [LinkedIn] [Github] [Google Scholar] [Semantic Scholar]


Recent News

  • [October 2024] Attending COLM at Penn. We are hiring interns and FTEs, come say hi!
  • [December 2023] Attending NeurIPS in New Orleans, come say hi!
  • [September 2023] Started my collaboration with Google DeepMind working on LLM meta-task reasoning!
  • [July 2023] Attending ACL in Toronto, come say hi! I'll be presenting our paper on a Dungeon Master-like dialogue agent in D&D with theory-of-mind and RL!
  • [May 2023] Started my internship at Google Bard working on theory-of-mind capabilities in LLMs!
  • [April 2023] Our Theory-of-Mind workshop was accepted at ICML 2023! Hope to see you in Honolulu in July!

Education

Aug 2019 - Apr 2024
Ph.D. in Computer Science, University of Southern California

Sep 2015 - Jun 2019
B.S. in Mathematics of Computation with Minor in Statistics, University of California, Los Angeles (UCLA)


Selected Publications

(For the full list, see Google Scholar)

  • SELF-DISCOVER: Large Language Models Self-Compose Reasoning Structures
    Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V. Le, Ed H. Chi, Denny Zhou, Swaroop Mishra, and Huaixiu Steven Zheng.
    In NeurIPS, 2024.
    [abstract] [media coverage 1] [media coverage 2] [media coverage 3]

  • How FaR Are Large Language Models From Agents with Theory-of-Mind?
    Pei Zhou, Aman Madaan, Srividya Pranavi Potharaju, Aditya Gupta, Kevin R. McKee, Ari Holtzman, Jay Pujara, Xiang Ren, Swaroop Mishra, Aida Nematzadeh, Shyam Upadhyay, and Manaal Faruqui.
    Preprint, 2023.
    [abstract]

  • SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization
    Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, and Yejin Choi.
    In EMNLP, 2023. Outstanding Paper Award
    [abstract]

  • I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons
    Pei Zhou, Andrew Zhu, Jennifer Hu, Jay Pujara, Xiang Ren, Chris Callison-Burch, Yejin Choi, and Prithviraj Ammanabrolu.
    In ACL, 2023.
    [abstract] [media coverage]

  • Reflect, Not Reflex: Inference-Based Common Ground Improves Dialogue Response Quality
    Pei Zhou, Hyundong Cho, Pegah Jandaghi, Dong-Ho Lee, Bill Yuchen Lin, Jay Pujara, and Xiang Ren.
    In EMNLP, 2022.
    [abstract] [project page] [dataset] [media coverage]

  • Think Before You Speak: Explicitly Generating Implicit Commonsense Knowledge for Response Generation
    Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, and Dilek Hakkani-Tur.
    In ACL, 2022.
    [abstract] [media coverage]

  • Probing Commonsense Explanation in Dialogue Response Generation
    Pei Zhou, Pegah Jandaghi, Hyundong Cho, Bill Yuchen Lin, Jay Pujara, and Xiang Ren.
    In EMNLP-Findings, 2021.
    [abstract]

  • Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources
    Ninareh Mehrabi*, Pei Zhou* (equal contribution), Fred Morstatter, Jay Pujara, Xiang Ren, and Aram Galstyan.
    In EMNLP, 2021.
    [abstract]

  • RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms
    Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel Ho, Jay Pujara, and Xiang Ren.
    In EMNLP, 2021.
    [abstract] [project page] [data]

  • Commonsense-Focused Dialogues for Response Generation: An Empirical Study
    Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, and Dilek Hakkani-Tur.
    In SIGDIAL, 2021.
    [abstract] [data] [Amazon blog]

  • CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning
    Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren.
    In EMNLP-Findings, 2020.
    [abstract] [project page] [data] [media coverage]

  • Examining Gender Bias in Languages with Grammatical Gender
    Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang.
    In EMNLP-IJCNLP, 2019.
    [abstract] [code]


Miscellany

  • I play the piano and am a keyboardist in the UCLA Acoustic Guitar Band called Parked in 4 East
  • I was born in Chengdu, a great city for vacation and spicy food lovers :)
  • Currently super into camping/glamping!
  • I'm into all kinds of RPGs, from tabletop to FFXIV
  • Thanks to Nelson Liu for sharing the source code of this website!