I hereby solicit suggestions for the video of the day. Please email me your ideas with explanations. Selected entries will win 5 homework points. If your video is played at the beginning of class, you must also briefly explain something about the video and something about yourself - in person.
Who is the author of the following three laws?
https://pollev.com/slade You may also download the app to your phone. Use the "slade" poll id.
Utilizing her decades-long technical training in sleight-of-hand magic, Jeanette Andrews creates interactive, surreal vignettes that explore the nature of perception and cognition. She invites audiences to co-create her illusory performances, which function as live thought experiments. Often utilizing refined yet common items, such as glassware, paper, plants, and fabric, her works investigate perceptual anomalies, expectation violation, and the nature of belief. She also works in sound, installation, film, and objects to bring her ideas about hidden worlds to life. Tamar Gendler, Dean of Arts and Sciences at Yale, calls Jeanette a "scholar of perception disguised as a magician," and Artnet says, "Andrews's avant-garde approach to magic transforms it into performance art." Jeanette is currently a visiting artist at MIT's Center for Art, Science and Technology (CAST).
Host: Arman Cohan
Title: Specializing LLMs for Reliability
Abstract:
Large language models (LLMs) have advanced the frontiers of AI reasoning: they can synthesize information from multiple sources, derive new conclusions, and explain those conclusions to their users. However, LLMs do not do this reliably. They hallucinate facts, convincingly state incorrect deductions, and exhibit logical fallacies like confirmation bias. In this talk, I will describe my lab’s work on making LLM systems reliable by introspecting their behavior. First, I will demonstrate that better understanding of LLMs helps us train them to be more reliable reasoners. Our work shows that model interpretation techniques can advance training methodology and dataset curation for reasoning models. Second, I will argue that automating fine-grained evaluation of LLM output provides a level of understanding necessary for further progress. I will describe the ingredients of effective automated evaluators and a state-of-the-art factuality evaluation system, MiniCheck, showing that analyzing the nature of hallucinations can help reduce them. Finally, I will describe how deeper understanding of LLMs will let us tackle their most fundamental limitations, such as their inconsistency when given different inputs. I will propose how these pieces might soon be combined to form reliable AI systems.
Bio:
Greg Durrett is an associate professor of Computer Science at UT Austin. His research is broadly in the areas of natural language processing and machine learning. His group develops techniques for reasoning about knowledge in text, verifying factuality of LLM generations, and specializing LLMs to make them more reliable. He is a 2023 Sloan Research Fellow and a recipient of a 2022 NSF CAREER award. His work has been recognized by paper awards at EMNLP 2024 and EMNLP 2013. He was a founding organizer of the Workshop on Natural Language Reasoning and Structured Explanations at ACL 2023 and ACL 2024 and is a current member of the NAACL board. He received his BS in Computer Science and Mathematics from MIT and his PhD in Computer Science from UC Berkeley, where he was advised by Dan Klein.
Website: https://www.cs.utexas.edu/~gdurrett/
Refreshments from Koffee Katering will be available.
The midterm will be Tuesday February 25th at 7pm in Davies Auditorium. It will be a 2 hour hand written exam. No computers. No notes. No books. No kidding. Students registered with Student Accessibility Services will take the exam at Becton C031 next door.
A Sample Midterm Exam is available (solutions). The midterm will not have a boolean function question; instead, it will have a struct question. The actual exam will also include UNIX questions (Principles 1 and 2): I will give you a transcript with some of the commands X'd out, and you will have to deduce those commands (solutions).
You should be familiar with the recursion and tail recursion examples from recursion.rkt and Recursion.html. For more details on the wonders of tail recursion, see TailRecursion.html and this tail recursion article.
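As a minimal illustration of the distinction (the function names here are my own, not necessarily those in recursion.rkt), compare a standard recursive factorial with a tail-recursive version that carries an accumulator:

```racket
;; Standard recursion: the multiplication happens *after* the
;; recursive call returns, so the stack grows with n.
(define (fact n)
  (if (zero? n)
      1
      (* n (fact (- n 1)))))

;; Tail recursion: the recursive call is the last expression
;; evaluated, so Racket reuses the stack frame (constant space).
(define (fact-iter n acc)
  (if (zero? n)
      acc
      (fact-iter (- n 1) (* n acc))))

(define (fact2 n) (fact-iter n 1))

(fact 5)   ; => 120
(fact2 5)  ; => 120
```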
The paper Music and Computation, discussed below, is also in scope, up to but not including Music. There will be true/false questions about binary encodings of numbers, text, images, and sound, but no questions about music.
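As a sketch of the kind of encoding facts involved (these are just standard Racket built-ins, not examples from the course files), numbers and characters both reduce to binary:

```racket
;; Numbers: 13 in binary is 1101.
(number->string 13 2)      ; => "1101"
(string->number "1101" 2)  ; => 13

;; Text: each character is stored as a number (its Unicode code point),
;; which is in turn stored in binary.
(char->integer #\A)        ; => 65
(integer->char 65)         ; => #\A
```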
See Point.html for a sample struct question.
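For quick reference, here is what a point struct and its accessors look like in Racket (a generic sketch only; the actual question in Point.html may differ):

```racket
;; Define a point struct with x and y fields.
;; #:transparent makes points print readably and compare with equal?.
(struct point (x y) #:transparent)

(define p (point 3 4))
(point-x p)   ; => 3
(point-y p)   ; => 4
(point? p)    ; => #t

;; Structs are immutable by default; "updating" builds a new one.
(define (move-right pt dx)
  (point (+ (point-x pt) dx) (point-y pt)))

(move-right p 2)  ; => (point 5 4)
```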
Owen Prem will hold a review session Sunday, February 23rd, from 1-3pm, at DL 220. Check Ed Discussions for details.
Structs.html (covered on Monday)
Hw3.html Problem 2.
Execute examples from tmcopy.rkt using the hw3 simulator.
(simulate tm1 (conf 'q1 '(b) 0 '(1 1 1)) 20)
(simulate tmcopy (conf 'q1 '(b) 0 '(1)) 200)
See the Turing Machine Notes for a detailed explanation of tmcopy.
Computability.html (jupyter) Part 2. What is computable?