I hereby solicit suggestions for the video of the day. Please email me your ideas with explanations. Selected entries will win 5 homework points. If your video is played at the beginning of class, you must also briefly explain something about the video and something about yourself, in person.
Poll: https://pollev.com/slade. You may also download the app to your phone; use the "slade" poll id.
Speaker: Greg Durrett, University of Texas at Austin
Host: Arman Cohan
Title: Specializing LLMs for Reliability
Abstract:
Large language models (LLMs) have advanced the frontiers of AI reasoning: they can synthesize information from multiple sources, derive new conclusions, and explain those conclusions to their users. However, LLMs do not do this reliably. They hallucinate facts, convincingly state incorrect deductions, and exhibit logical fallacies like confirmation bias. In this talk, I will describe my lab’s work on making LLM systems reliable by introspecting their behavior. First, I will demonstrate that better understanding of LLMs helps us train them to be more reliable reasoners. Our work shows that model interpretation techniques can advance training methodology and dataset curation for reasoning models. Second, I will argue that automating fine-grained evaluation of LLM output provides a level of understanding necessary for further progress. I will describe the ingredients of effective automated evaluators and a state-of-the-art factuality evaluation system, MiniCheck, showing that analyzing the nature of hallucinations can help reduce them. Finally, I will describe how deeper understanding of LLMs will let us tackle their most fundamental limitations, such as their inconsistency when given different inputs. I will propose how these pieces might soon be combined to form reliable AI systems.
Bio:
Greg Durrett is an associate professor of Computer Science at UT Austin. His research is broadly in the areas of natural language processing and machine learning. His group develops techniques for reasoning about knowledge in text, verifying factuality of LLM generations, and specializing LLMs to make them more reliable. He is a 2023 Sloan Research Fellow and a recipient of a 2022 NSF CAREER award. His work has been recognized by paper awards at EMNLP 2024 and EMNLP 2013. He was a founding organizer of the Workshop on Natural Language Reasoning and Structured Explanations at ACL 2023 and ACL 2024 and is a current member of the NAACL board. He received his BS in Computer Science and Mathematics from MIT and his PhD in Computer Science from UC Berkeley, where he was advised by Dan Klein.
Website: https://www.cs.utexas.edu/~gdurrett/
Refreshments from Koffee Katering will be available.
Also, if your final exam grade is higher than your lower midterm grade, that lower grade will be replaced by your final exam grade. The quality of mercy is not strained.
You are invited to create a music video for this song. Here are the rules:
In class on February 3rd, I introduced Toch's Geographical Fugue (wiki + score), as well as my derived Internet Fugue.
You are invited to perform the Internet Fugue either on video, or (preferably) live in class. The rules and rewards are the same as above.
Review hw4. Use the case special form for all-vars and eval-in-env.
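As a hint on what that might look like: case dispatches on the head of an expression, which keeps both procedures flat instead of a chain of nested conditionals. The sketch below is only illustrative, assuming the hw4 expressions are ordinary Scheme lists with quote and if forms and that environments are association lists; the actual required forms and representations are whatever the assignment specifies.

```scheme
;; Minimal sketch (not the hw4 solution): dispatch on (car exp) with case.
;; The expression shapes and the alist environment are assumptions.

(define (all-vars exp)
  (cond ((symbol? exp) (list exp))          ; a variable reference
        ((pair? exp)
         (case (car exp)
           ((quote) '())                    ; quoted data contains no variables
           (else (apply append (map all-vars exp)))))
        (else '())))                        ; numbers, booleans, etc.

(define (eval-in-env exp env)
  (cond ((symbol? exp) (cdr (assq exp env)))   ; look up the binding
        ((pair? exp)
         (case (car exp)
           ((quote) (cadr exp))
           ((if) (if (eval-in-env (cadr exp) env)
                     (eval-in-env (caddr exp) env)
                     (eval-in-env (cadddr exp) env)))
           (else (apply (eval-in-env (car exp) env)
                        (map (lambda (e) (eval-in-env e env))
                             (cdr exp))))))
        (else exp)))                        ; self-evaluating literal
```

Note that case compares the head against each literal list with eqv?, so the quote and if branches fire only on those exact symbols, and everything else falls through to the application (or append) case.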