CS 201 - Spring 2025. 2/26/2025.


[Home]

Welcome to CS 201!

Video of the Day

It's a UNIX System! Jurassic Park.

I hereby solicit suggestions for the video of the day. Please email me your ideas with explanations. Selected entries will win 5 homework points. If your video is played at the beginning of class, you must also briefly explain something about the video and something about yourself - in person.

Logical problem of the day

Batman, Nick from The Great Gatsby, Mr. Burns from The Simpsons. Name another member of this set.

https://pollev.com/slade You may also download the app to your phone. Use the "slade" poll id.

YaleFiction.html

Canvas Quiz of the Day (need daily password)

Most days, there will be a simple canvas quiz related to the lecture. You need a password to activate the quiz, which I will provide in class. These quizzes will count toward your class participation grade. The quiz is available only during class.

Click for today's quiz.

Lecture 19: Halting Problem and Boolean Functions.

  • I hold office hours Wednesdays from 4-6 pm on Zoom, meeting ID 459 434 2854.

  • I am available for lunch on Mondays at 1 pm in Morse.

  • ULA office hours are found at https://csofficehours.org/CS201/schedule. Sign up via the queue.

  • Homework assignments: [Assignments]. hw4 is now available.

    Announcements

  • If you have an upcoming performance or athletic event, I am happy to promote it during class. Just send me a note.

  • Information Society Project, Yale Law School: Weekly Events.

  • CS Colloquium, Thursday, February 27, 10:30am. DL 220.
    Speaker: Greg Durrett, University of Texas, Austin.

    Host: Arman Cohan

    Title: Specializing LLMs for Reliability

    Abstract:

    Large language models (LLMs) have advanced the frontiers of AI reasoning: they can synthesize information from multiple sources, derive new conclusions, and explain those conclusions to their users. However, LLMs do not do this reliably. They hallucinate facts, convincingly state incorrect deductions, and exhibit logical fallacies like confirmation bias. In this talk, I will describe my lab’s work on making LLM systems reliable by introspecting their behavior. First, I will demonstrate that better understanding of LLMs helps us train them to be more reliable reasoners. Our work shows that model interpretation techniques can advance training methodology and dataset curation for reasoning models. Second, I will argue that automating fine-grained evaluation of LLM output provides a level of understanding necessary for further progress. I will describe the ingredients of effective automated evaluators and a state-of-the-art factuality evaluation system, MiniCheck, showing that analyzing the nature of hallucinations can help reduce them. Finally, I will describe how deeper understanding of LLMs will let us tackle their most fundamental limitations, such as their inconsistency when given different inputs. I will propose how these pieces might soon be combined to form reliable AI systems.

    Bio:

    Greg Durrett is an associate professor of Computer Science at UT Austin. His research is broadly in the areas of natural language processing and machine learning. His group develops techniques for reasoning about knowledge in text, verifying factuality of LLM generations, and specializing LLMs to make them more reliable. He is a 2023 Sloan Research Fellow and a recipient of a 2022 NSF CAREER award. His work has been recognized by paper awards at EMNLP 2024 and EMNLP 2013. He was a founding organizer of the Workshop on Natural Language Reasoning and Structured Explanations at ACL 2023 and ACL 2024 and is a current member of the NAACL board. He received his BS in Computer Science and Mathematics from MIT and his PhD in Computer Science from UC Berkeley, where he was advised by Dan Klein.

    Website: https://www.cs.utexas.edu/~gdurrett/

    Refreshments from Koffee Katering will be available.

  • Benefit Concert for LA Fires: Sunday, March 2, 6 pm, SSS 114.

  • Davenport Pops Orchestra Concert, Saturday March 1, 3pm, Woolsey Hall. Submitted by Sophia Zhang. It is completely free and open to everyone, people just have to register at this link: https://yaleconnect.yale.edu/YCDPops/rsvp_boot?id=2293889. The program features songs from Wicked, Les Miserables, Africa by Toto, Kiki's Delivery Service, Undertale, and Laufey.

    Midterm and Grades

    At the end of the semester, I add up all the raw scores (problem sets, quizzes, exams, etc.). I then weight the scores, with homeworks and quizzes worth 1/3 and exams worth 2/3. I then sort the scores and apply a curve such that over half the class gets an A or A-. Note: this is consistent with the published grade distribution for the computer science department, which, surprisingly, is pretty GPA friendly, unlike, say, economics.

    Also, if your final exam grade is higher than your lower midterm grade, that lower grade will be replaced by your final exam grade. The quality of mercy is not strained.
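    The weighting and the lower-midterm replacement rule above can be sketched in a few lines of Python. The specific point totals below are invented for illustration; only the 1/3 and 2/3 weights come from the policy above:

    ```python
    def weighted_total(homework_and_quizzes, exams):
        """Combine raw scores: homeworks/quizzes count 1/3, exams count 2/3.

        Each argument is a (points_earned, points_possible) pair;
        the result is a score on a 0-100 scale.
        """
        hw_earned, hw_possible = homework_and_quizzes
        ex_earned, ex_possible = exams
        return 100 * ((1/3) * hw_earned / hw_possible
                      + (2/3) * ex_earned / ex_possible)

    def apply_mercy(midterms, final):
        """Replace the lower midterm grade with the final exam grade if higher."""
        midterms = sorted(midterms)
        if final > midterms[0]:
            midterms[0] = final
        return midterms

    # Invented numbers: 180/200 on homework and quizzes, 150/200 on exams.
    print(round(weighted_total((180, 200), (150, 200)), 1))  # -> 80.0
    print(apply_mercy([70, 85], 90))                         # -> [90, 85]
    ```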

  • Harvard Business School story
  • Alan Perlis story
  • Slade physics midterm story

    CS 201 Video Contest

    In the tradition of the Racket/Beat It song, we have a song for Turing machines: Would It Be Computable? to the tune of "Wouldn't It Be Loverly?" from My Fair Lady.

    You are invited to create a music video for this song. Here are the rules:

    Second song contest: The Internet Fugue.

    In class on February 3rd, I introduced Toch's Geographical Fugue (wiki + score) as well as my derived Internet Fugue.

    You are invited to perform the Internet Fugue either on video, or (preferably) live in class. The rules and rewards are the same as above.

    Lecture: Computability.

    Computability.html (jupyter): Definition of computability and the Halting Problem. "This statement is false." (Proof by contradiction.)
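    The self-referential structure of the proof can be sketched in a few lines. This is an informal illustration only: the hypothetical halts decider below cannot actually exist, which is exactly the point of the contradiction.

    ```python
    def paradox(halts):
        """Given a claimed halting decider, build the program that defeats it.

        halts(program, arg) is supposed to return True iff program(arg) halts.
        """
        def contrary(p):
            # Do the opposite of whatever the decider predicts about p run on itself.
            if halts(p, p):
                while True:        # loop forever when told we would halt
                    pass
            return "halted"        # halt when told we would loop

        return contrary

    # Feeding contrary to itself yields a contradiction either way:
    # if halts(contrary, contrary) is True, then contrary(contrary) loops forever;
    # if it is False, then contrary(contrary) halts. So no correct halts can exist.
    ```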

    Lecture: Boolean Functions.

    Boolean.html (jupyter)
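    A boolean function of n inputs is fully described by its truth table, and there are 2^(2^n) distinct such functions (16 for n = 2). A small sketch of truth-table generation in Python (the lecture notebook itself is not in Python, so treat this as a language-neutral illustration):

    ```python
    from itertools import product

    def truth_table(f, n):
        """Return the truth table of an n-input boolean function as (inputs, output) rows."""
        return [(args, f(*args)) for args in product([False, True], repeat=n)]

    # Example: two-input AND. Only the (True, True) row is True.
    for args, value in truth_table(lambda a, b: a and b, 2):
        print(args, value)
    ```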

    Review hw4. Use the case special form for all-vars and eval-in-env.
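    The hw4 helpers all-vars and eval-in-env are written in Racket, so the sketch below is only a rough Python analogue of what such functions might do, under an assumed expression representation (nested tuples like ('and', 'x', ('not', 'y'))) that is not the hw4 spec:

    ```python
    def all_vars(expr):
        """Collect the variable names appearing in a boolean expression.

        An expression is a variable name (str), a boolean constant, or a
        tuple (operator, operand, ...). This representation is assumed
        for illustration only.
        """
        if isinstance(expr, str):
            return {expr}
        if isinstance(expr, tuple):
            return set().union(*(all_vars(sub) for sub in expr[1:]))
        return set()

    def eval_in_env(expr, env):
        """Evaluate a boolean expression in an environment mapping variables to values."""
        if isinstance(expr, bool):
            return expr
        if isinstance(expr, str):
            return env[expr]
        op, *args = expr
        if op == 'not':
            return not eval_in_env(args[0], env)
        if op == 'and':
            return all(eval_in_env(a, env) for a in args)
        if op == 'or':
            return any(eval_in_env(a, env) for a in args)
        raise ValueError(f"unknown operator: {op}")

    expr = ('and', 'x', ('not', 'y'))
    print(sorted(all_vars(expr)))                      # -> ['x', 'y']
    print(eval_in_env(expr, {'x': True, 'y': False}))  # -> True
    ```

    Dispatching on the operator symbol is where Racket's case special form fits naturally; the chain of if tests above plays that role in Python.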

    Getting to know UNIX

    UNIX Introduction Principle 3.
    [Home]