Discussion Topic: 1/28 Guest speaker comment
What did you learn from today's speaker, Dan Russell? What was the best question from the audience? Did you ask it?
Reply from Yassin Mudawi
I learned more about the evolution of AI over the years and how various machines implemented aspects of what we enjoy in LLMs today. For me, the most fascinating part was learning how vastly more complicated Google search is than I had previously assumed.
Reply from Joanne Zhao
My main takeaway from Dan Russell's talk was the importance of making AI systems easy to use and understandable for users. His example of the AI program he wrote to translate a Xerox task into concrete steps stuck with me: the perfect accuracy of his program did not matter because users couldn't follow those steps. I thought the best question from the audience was the one about Russell's thoughts on Waymo cars in SF and, more broadly, how to ensure technology isn't negatively disruptive to the communities it is deployed in. He had a unique perspective given that he lived in close proximity to Waymo HQ, so he was used to seeing Waymo cars on the road for many years. His response centered on the accuracy and safety of Waymo cars and the camera capabilities they have that human drivers don't. I wonder what other potentially negative impacts on communities there could be besides the safety concerns he focused on.
Reply from Gavin Onghai
I learned about the power of search and how so many people overestimate their search abilities. I also learned more about the importance of the user experience. One question that I thought was good was about how CS curricula would change in the future. Yassin Mudawi asked the question. Dan said he believes a lot of assembly programming will become outdated. I'm not sure I fully agree, since everything is ultimately based on assembly programming.
Reply from Lakxshanna Raveendran
I learned a lot about user experience from the beginning of this lecture. Some of my main takeaways were about the importance of user testing and not relying on "intuition," especially because if your UI is poor, nobody can access the true function of your product. The Xerox machine example was very helpful. I liked the question (which I did not ask) about how he saw CS curricula in colleges changing over the next few years. As a student and a CS peer mentor, I've already seen such significant differences in both coursework and how students approach their psets. Like Dan said, I hope for the sake of future students that learning assembly code is phased out of the core curriculum.
Reply from Xinyuan Zhu
He shared insights from his 30-year career as a "techno-cognitive anthropologist" studying how humans interact with search and AI. A major takeaway was that poor user interfaces can block even the best AI, which he illustrated through his failed copier planner and the social robot Jibo. He also highlighted a massive "Dunning-Kruger effect" in search: while most people feel confident in their abilities, most search less than once a day and often stop at the very first result without verifying information. The best audience question addressed the gap between industry and academia; Dan noted a staggering "$100 billion gap" in compute power alone, meaning universities cannot train models at the same scale as giants like Google or OpenAI. I did not ask any questions during the session.
Reply from Morgan Go
Dan's comment about unintuitive UI being a roadblock for AI really stuck with me, as someone who is passionate about UI/UX design. I think that perspective may shape how we choose to use AI (maybe as a test dummy to see how usable a product would be to a random person), and his experience with emerging and changing technologies also helped ease my anxiety about the future of computer science and what we should study next. His real-life examples of people trying to work around the system (the man with the cart who tried to prevent traffic in his neighborhood) put into perspective how important good design is. I also loved his response to the questions about Waymo and its recent popularity boom; I did not ask that question, but as someone who lives in a city with lots of Waymos, I was very grateful someone did.
Reply from Guangxing Cao
1. I learned that even very capable AI systems can fail if the user experience is weak, because people rely on the interface to build a correct mental model of what the system can and cannot do.
2. Best question: how do we make sure that the research we do is not overbearing? (It was the best because it forces researchers to think about ethics and user consent, not just whether a method works.)
Reply from Cixuan Zhang
We learned that the user experience (UX) of AI systems is critical and must be integrated from the very beginning, rather than added as an afterthought, to effectively support human reasoning and mental models. To me, the best question from the audience was whether he uses any voice-based LLM tools while driving. Dan said definitely not, because the UI is awful. Even though I wasn't the one who asked, his answer really resonated with what I've always thought, which is why that was my favorite part of the Q&A.
Reply from Olivia Ye
I learned about a lot of side projects that Google runs, and it was neat to hear from someone who ideated one himself and then pitched it. I also enjoyed hearing about people's mental models of Google, because I had never stopped to ask that question myself. I liked someone else's question about self-driving cars and their reliability; it was interesting to hear he would get in a Waymo but not a Tesla. I wish I had asked it myself.
Reply from Diana Shyshkova
From Dan Russell's presentation and his work studying user behavior during information retrieval, I took away one key conclusion: how important user interfaces are. Even the most powerful tools or models will not be of much use if people don't know how to use them effectively. His statement that more than 90% of users don't even know how to use such a basic function as Ctrl+F truly struck me and highlighted the huge gap between technical capabilities and actual user behavior. This clearly showed that good design and usability are just as important as the underlying technologies.
Reply from Tracy Chen
I learned that we should not blindly trust our intuitions when using AI systems. Even simple tools like spell-check can be context-sensitive, and small wording changes may lead to different outputs. He also emphasized that AI systems are often updated without notice, and if nothing goes wrong, users may not realize that the system's behavior has changed.
Reply from Esha Garg
I learned a lot from Dan's discussion last week, but I think my favorite part was his emphasis on what is not AI. Today, the term AI gets thrown around a lot, both in the computer science world and outside of it, such as in the business world.
It was cool to see that a seemingly simple Google Mail mechanism, checking whether you've attached a file, is simply a search for whether you've included the word "attach" rather than an "AI" methodology. I really liked learning more about how Google works on the inside.
Reply from Lucas Liu
People are somehow too confident about their knowledge, yet they don't even use Ctrl+F.
Reply from Yna Owusu
I learned a lot about being a better searcher. The best question was about the outlook of software engineering as we know it. No, I did not ask it.
Reply from Andy Ma
I was very surprised to learn that most people here at Yale are unusual in ways that I hadn't considered (knowing how to use Ctrl+F, Googling or using ChatGPT many times per day, etc.). The question that stuck with me the most was my own, about how the conflict between AI and news outlets will be resolved.
Reply from Weixing Zhang
I learned about the importance of detailing and understanding user needs. I also learned about the swipe-to-text feature. A question from the audience that I thought was good was about what kind of car he drives and his insight on self-driving cars like Waymo versus self-driving Teslas.
Reply from Isabelle Millman
The most interesting thing I learned from Dan Russell is how few people know how to use Find (Ctrl+F) to search pages for things. The best question from the audience was "What car do you drive?"
Reply from Zikang Chen
I learned a lot from his comments on AI, and the AI-written music in the last part really shocked me. The most fun question to me was the one about what car he drives; his insights on self-driving and new-energy cars were thought-provoking. I was not the one who asked this question.
Reply from Emmett Seto
From the guest lecture today, I learned a lot of fun facts. For example, he mentioned that his friend was the one who created the swipe effect in texting and earns a nickel every time it is used. Something else interesting I learned is that the first place Gen Z always looks is the comments section of a video. I thought his overall talk about HCI was interesting, along with the way our current generation interacts with certain commands. One of the questions asked was that currently a lot of startups are essentially GPT wrappers: if the only difference is the user interface, will these startups survive? I asked this question after class.
Reply from Thomas Chung
I thought the coolest part about Dan Russell coming in was hearing such an inside perspective from someone in the industry who is also extremely well connected. It was so funny when he asked us if we had ever used a technology and then casually dropped that he knew the person who made it. I think this ultimately built his credibility on the future of AI and its integration into human society. I thought the best question from the audience, which I did not ask, pertained to how we should utilize AI in our own futures, and how it should absolutely be used as a tool but not relied upon.
Reply from Hubert Wang
Dan's argument that AI should explicitly disclose its guardrails is compelling. In my own experience with image generation, I've often encountered unexpected or wrong outputs, wondering what internal constraints were at play. One of the questions asked was about Waymo and AI's application in autonomous driving. I followed up with Dan after the lecture to explore what the future holds as autonomous vehicles become more prevalent. Clearly this field still has room for improvement and needs a lot more testing, but it's on the right track.
Reply from Omar Espiricueta
My favorite thing I learned was the idea that we are not "normal." When designing user interface systems, we must remember that we are not the target audience, nor are many of our peers. To make a successful product, you must design with the masses in mind. Understanding people's behavior, tendencies, and desires is far more critical to a helpful, successful product than a "cool" interface.
Reply from Kemi Omoniyi
Finding information online is less about having "smarter AI" and more about designing a UX that supports how people actually think and ask questions. He emphasized that confidence often outpaces skill, so users may accept the first plausible answer, especially when the system isn't obviously wrong, and normalize subtle errors.
Reply from Alba Quintas Núñez
I think that Dan Russell was a very inspiring speaker. I believe that his perspective on the AI revolution in our lives was very unique and interesting. Sometimes we tend to believe that, when it comes to developing new technological tools, the key component of their success is strictly tied to how effective and sophisticated these technologies are. However, Russell raises a very important point: the user experience is fundamental. It does not matter how competent an AI system is if the average person does not understand or know how to use it. In the last part of his talk, he left us with the message that, as good as AI can be, we still need humans thinking behind it; we still need a person to develop that user experience. I think the best question from the audience (I did not ask it) was: What do you think is going to happen with AI in the CS curriculum of higher education in the future? Even though it is impossible to predict exactly what will happen, Russell recognised that this tool is already completely changing the subject, but he still encouraged us to never stop reasoning and thinking.
Reply from Tony Chang
I learned how important the connection between the users of AI and the AI model itself actually is for the user experience. For example, when the model behaves unexpectedly, it would be hugely beneficial to provide an explanation for the strange behavior. However, it also becomes a slippery slope as to how much should be disclosed to users, as sometimes volatile and unsafe information could leak. This is shown with ChatGPT, where users phrase their queries in "nice" language to get the dangerous information they desire. I think the best question from the audience was how he saw the CS curriculum changing in the coming years, as it has become a widely relevant topic with the release of AI. It was good to see the perspective of someone who has worked closely with both academia and industry, especially one acquainted with AI and user experience. I did not ask this question.
Reply from Felix Zou
The general overarching lesson is that UX is important. This may be somewhat common sense, but the examples of UX failures in Google apps (as well as the first printer one) do reinforce this point. Another point that Dan Russell was trying to drive home is how unaware the majority of people are of certain tools and shortcuts like control-F. This was new to me, as I have never thought of myself as being particularly keyboard-shortcut-savvy, yet I somehow rank in the top 10% just by knowing control-F. In any case, it speaks to how easy it is to miss the experience of the majority of users if one judges UI design by one's own familiarity.
Towards the end he also discussed AI trust, which is certainly relevant and timely, though he did not provide a useful guiding framework or solution. This is understandable, since getting symbolic human intelligence to trust purely statistical ML systems is never easy, and perhaps is not practically possible without deceiving the human. In terms of questions, I liked the first one about what car he drives. It is funny and simultaneously relevant to the topic. I did not ask the question.
Reply from Will Yang
Prof. Russell offered a great perspective on the human-machine interaction aspect. As computer science students, we typically focus on designing efficient and effective systems that calculate complex equations well or predict the next token perfectly; however, we often forget that these applications and programs are ultimately used by humans (at least in the near future). Because of that, it's incredibly important to also think about UX design and how people will actually use the program, and about whether it's intuitive and helpful enough for the target users. That's my central takeaway from the guest lecture. A very good question was asked about how advances in AI and LLMs might affect the future CS curriculum, which I think is especially relevant as the required skill set continues to evolve. I didn't ask that question, but I did ask another one about the gap between academia and industry, which I also care deeply about.
Reply from Yide Jin
From today's speaker, Dan Russell, I learned that the hardest part of useful AI is not only the model; it is the system design. A truly "connected" AI system should let tools communicate with each other, keep updating based on new information and user feedback, and try to understand the human goal over time. Just as important, it should explain its actions in plain language so users can trust it, catch mistakes, and steer it easily. The best audience question was about the gap between universities and industry, and how we turn strong academic ideas into real products and real impact. I did not ask it, but it resonated with me because this gap looks different across countries. In China, there are more structured industry-academia integration models. For example, Shandong University has school-affiliated enterprises such as Shanda Dareway Software Co., Ltd. (山大地纬软件股份有限公司), and its leadership includes university faculty: both the company's president and its chairman are described as Shandong University professors. This kind of close integration can reduce the "last mile" barrier and make it easier to build AI that is deployable, explainable, and user-friendly in real settings.
Reply from Sean Lee
I learned the importance of building intuitive and transparent UI/UX for AI systems. The user interface serves as a critical bridge between users and the underlying system, so it is essential that it accurately reflects what the system is actually doing. If the interface is unintuitive or opaque, users may fail to take full advantage of the system or even misunderstand its behavior. Dan Russell also emphasized the importance of testing UI with real users, since there is often a gap between how engineers expect users to interact with a system and how people actually use it in practice. I thought one of the best questions from the audience was about how university CS curricula might change over the next few years.
Because CS education plays a major role in shaping the next generation of engineers and programmers, I thought this question highlighted a potential shift in how we teach and prepare students, which could have a significant long-term impact.
Reply from Ayushi Das
I really appreciated how Dan described his work at the intersection of computer science, AI, and human behavior. His introduction of himself as a cyber-tribal-techno-cognitive anthropologist was especially interesting. I found it fascinating how he works with multiple datasets and makes thorough alterations to interfaces to make them more accessible and user-friendly. One of the most interesting moments for me was his response to the question about how he sees the AI and computer science curriculum changing over the next five years.
Reply from Yuwang Ma
I learned that the details of human-computer interaction (how results are explained, how evidence is highlighted, how filters and facets are presented, and how the system supports "backtracking" or "branching" lines of inquiry) can either help or hurt the user's ability to explore. I realized this is just as important as the algorithm itself. I remember a question along the lines of "How can a search interface make its 'reasoning' legible without overwhelming users, for example by showing why something ranked highly or what evidence supports a summary?" I didn't ask it, though.
Reply from Alex Lu
I learned that users don't have a good mental model of how AI works, partially because underlying models can change very quickly. I also learned that the average user searches less than once per day, and that a majority of internet users do not know Ctrl+F. The best question was someone else asking how San Francisco has changed since Waymo started operating there. It made me realize autonomous systems like robotaxis are maturing and being deployed in the world faster than I thought.
Reply from Omar Abdellall
I learned a lot about search and the deep research that went into it. There were some mind-boggling statistics about how people use search, and I was surprised by how overconfident we are in our search abilities. The best question asked what car he drove, then followed up to ask about Autopilot and AI. I did not ask it.
Reply from Thomas Luong
I learned about his thoughts on human-centered perspectives on AI. His work at Xerox PARC on the AI scheduler system project was interesting because he described how his team walked through the process of creating a UI that incorporated sample user input and research. The motivating question of his work is "How can we study and improve user experience with AI?" On the question of how we make sure the research we do is not overbearing, his answer was to continuously test these things and make sure that what you're doing is useful for the team and useful for humanity.
Reply from Arya Bhushan
One thing I learned from Dan Russell is how decision systems, especially in the modern age of AI, can be a two-edged sword. While they can be incredibly helpful for researching and consolidating information, they can also reinforce mistaken beliefs when people try to verify misinformation. This raises a crucial AI and UX design issue: developing responsible AI. My favorite question from the audience was "How do you think the CS curriculum in universities will change over the next 5 years?" Unfortunately, I did not ask that question.
Reply from Jiashu Huang
I learned that user experience is an often overlooked but very important part of AI software development.
The best question was "How would you predict the trend of future CS education in college?" I was not the one who asked it.
Reply from Bende Doernyei
I truly enjoyed both the lecture and the dinner with Dan Russell. He made me think outside the box and see how many different ways computer science and HCI tools could have evolved. It also made me wonder whether LLMs deserve the current hype, since they are overfunded compared to other, potentially more valuable AI developments (and still very much inaccurate; see the "Dutch composer" example). Over dinner, his comments on delivery robots and Waymo security versus people were especially interesting. I attended another talk about AI the day after, and the speaker distinguished AI technologies that substitute for smart humans, doing the same things smart humans could do, from AI technologies that do things even smart people could never do, like gene analysis or identifying the author of a book by looking at just the filler words. I think, consistent with the thinking of Dan Russell, the second kind has a brighter future, but the first can help us focus less on substitutable skills while learning greater ones.
Reply from Youxuan Ma
The most important thing I learned is that your AI product is only as good as your UX design: if people can't understand how to navigate or use it easily, no one is going to use it, and the product will eventually fail. Another interesting thing I learned is that we are really not "normal" people, and we have to put ourselves in other, "normal" people's shoes when designing our products. I think the best question asked was: "How do you see CS curricula changing/developing in 5 years?" I didn't ask it.
Reply from Annabelle Huang
Dan's friend invented the swipe-typing system, which is really cool. I learned that you can't really swipe-type low-frequency words like proper names, because they're not in the training set. I think the best question was: How do you see the CS curriculum changing in the next five years? Dan has taught the human-computer interaction AI course many times, and the course curriculum has changed a lot in the past few years. Dan thinks AI coding is getting better really fast, so it is open to interpretation whether, in the future, we will still need to learn skills like assembly, or whether that is something we can leave to AI.
Reply from Sam Meddin
I really enjoyed learning how much more complex the systems we use are than we realize, and yet some things are much simpler and just as popular, like typing with the swipe feature. I liked the question about autonomous vehicles, specifically Teslas versus Waymos, which I did not ask.