CSE AI Colloquium: Teach Language Models to Reason

Denny Zhou
Research Scientist
Google Brain
Location: Virtual
Organizer: Professor Yi Zhang

Join us on zoom: https://ucsc.zoom.us/j/97272421399?pwd=a1lzek9rZ2txUDY3MVdxeHlyVzlZZz09

Description: Denny Zhou will present his team's work on natural language reasoning, including chain-of-thought prompting, which Google CEO Sundar Pichai highlighted at Google I/O 2022. Combined with Google's newest large language model, chain-of-thought prompting with only a few examples outperforms SOTA results on many NLP tasks by a striking margin, even though the SOTA methods in the literature are trained or fine-tuned with 100x to 1000x more annotated examples. Moreover, the method is fully interpretable. Several notable examples: (1) numerical reasoning on GSM8K: 75% with only 8 examples vs. SOTA 55% (GPT-3 175B fine-tuned with 250x more data); (2) numerical reasoning on SVAMP: 87% with the same 8 GSM8K examples vs. SOTA 57%; (3) StrategyQA: 82% with 6 examples vs. SOTA 74% trained on the full training set; (4) compositional generalization on SCAN: 100% with 14 examples vs. SOTA 100% trained with 15,000 examples.
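For readers unfamiliar with the technique, the core idea of chain-of-thought prompting is that each few-shot exemplar shows intermediate reasoning steps before the final answer. The sketch below is a minimal illustration of how such a prompt might be assembled; the helper function name is hypothetical, and the exemplar follows the style popularized in the chain-of-thought work rather than the team's exact prompts.

```python
def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar whose answer spells out the intermediate
    reasoning steps, then append the new question for the model to answer.
    (Illustrative helper, not from the speaker's codebase.)"""
    exemplar = (
        "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
        "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
        "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
    )
    # The model is expected to imitate the step-by-step style when it
    # completes the final "A:".
    return exemplar + f"Q: {question}\nA:"

prompt = build_cot_prompt(
    "A juggler has 16 balls. Half of the balls are golf balls. "
    "How many golf balls are there?"
)
print(prompt)
```

The same handful of exemplars can be reused across tasks, which is how 8 GSM8K exemplars also transfer to SVAMP in the results above.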

Speaker bio: Denny Zhou is a research scientist at Google Brain, where he leads the Natural Language Reasoning Group. His innovations in machine reasoning research include chain-of-thought prompting, self-consistency decoding, and neural logic machines. He also led SpreadSheetCoder, which has been integrated into Google Sheets and received highly positive user feedback, and MobileBERT, which is widely used in various Google products, particularly mobile applications on Android. He received the Google Research Impact Award in 2022 and the WSDM 2022 Test of Time Award.