I hereby solicit suggestions for the video of the day. Please email me your ideas with explanations. Selected entries will win 5 homework points. If your video is played at the beginning of class, you must also briefly explain something about the video and something about yourself - in person.
https://pollev.com/slade. You may also download the app to your phone. Use the "slade" poll id.
Here is a practice exam. (solutions to practice exam) Ignore problems 3, 4, 5(a), and 5(e). TC-201 is not in scope for this exam. However, tail recursion is.
There will be a UNIX question, as in the first midterm. sample UNIX transcript (solutions)
Review session: Saturday March 29th, 3pm. RTBA.
As before, I recommend using Yale's Clarity and Google experimental tutor as sources for review questions.
Tune in for a presentation from the Yale Office of Career Strategy. This talk will offer tips and advice on the internship search process for CS jobs. There will also be a panel of students who have obtained internships at Microsoft, Netflix, and Amazon, who will speak about their experiences and offer advice. Open to all students!
Title: Robust NLP: Can We Do Better Than Bigger?
Abstract: The robust text understanding and generation capabilities of today's NLP have been driven by the assumption that larger models and larger datasets improve predictive performance. However, this approach may not be sufficient to address real-world challenges, particularly in low-resource languages and domains.
In this talk, I will argue that robust NLP requires us to move beyond accuracy and perplexity as evaluation metrics. Towards this goal, I will present the idea of learning from knowledge rather than data, which exploits input and label meaning, in the presence of invariant domain knowledge. This approach generalizes traditional machine learning and has shown promise across multiple NLP problems. I will connect back to the theme of low-resource domains by presenting a text-based crisis counseling application. I will conclude by outlining future research directions around the theme of making inferences about text across domains despite limited data and compute resources.
Bio: Vivek Srikumar is an associate professor in the Kahlert School of Computing at the University of Utah. His research lies in the areas of artificial intelligence, natural language processing and machine learning, and has been primarily driven by questions arising from the need to efficiently reason about textual data with limited supervision. His research has been published at various AI, NLP and ML venues, and has been recognized by a paper award at EMNLP 2014, and honorable mentions from CoNLL 2019 and the IEEE Micro magazine. His work has been supported by research grants from NSF, US-Israel BSF, NIH, and awards from Google, Intel, Nvidia and Verisk. He has served as associate program chair of AAAI 2022 and the program co-chair of CoNLL 2022 and ACL 2024. Furthermore, he has organized several workshops hosted at the primary ML and NLP conferences around the theme of how learning and structured knowledge intersect. He was a post-doctoral scholar at Stanford University before moving to Utah, and prior to that, in 2013, he obtained his PhD from the University of Illinois at Urbana-Champaign.
Website: https://svivek.com
You are invited to create a music video for this song. Here are the rules:
In class on February 3rd, I introduced Toch's Geographical Fugue (wiki + score) as well as my derived Internet Fugue. Here is a sample recording of the first 32 bars by a guest artist using GarageBand. It took 10 minutes and it shows.
You are invited to perform the Internet Fugue either on video, or (preferably) live in class. The rules and rewards are the same as above.
Gates.html (jupyter): computer memory and sequential circuits.
hw5 review: next-value.
Racket hash tables. We use mutable hashes in this assignment.
make-hash hash-ref hash-keys hash-set! hash-copy hash-has-key?
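The procedures listed above can be sketched as follows. This is a minimal illustration of Racket's mutable hash API, not code from the assignment:

```racket
#lang racket

;; Create an empty mutable hash table.
(define h (make-hash))

;; hash-set! adds or overwrites a key/value pair (mutation).
(hash-set! h 'a 1)
(hash-set! h 'b 2)

;; hash-ref looks up a key; the optional third argument is a
;; default returned when the key is absent (otherwise it errors).
(hash-ref h 'a)             ; => 1
(hash-ref h 'z 'missing)    ; => 'missing

;; hash-has-key? tests for membership without retrieving a value.
(hash-has-key? h 'b)        ; => #t

;; hash-keys returns a list of the keys (order unspecified).
(hash-keys h)               ; e.g. '(a b)

;; hash-copy makes an independent mutable copy: mutating the
;; copy does not affect the original.
(define h2 (hash-copy h))
(hash-set! h2 'a 99)
(hash-ref h 'a)             ; still 1
(hash-ref h2 'a)            ; => 99
```

Note the defensive use of a default in `hash-ref`: without it, looking up a missing key raises an error, which is a common source of bugs in hash-based homework solutions.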