Spring 2024 Computer Science 458 Introduction. 2/12/2024


[Home]

Canvas Quiz of the Day (need daily password)

Most days, there will be a simple Canvas quiz related to the lecture. You need a password to activate the quiz, which I will provide in class. These quizzes count toward your class participation grade. The quiz is available only during class.

Click for today's quiz.

Also, you will earn class participation points for posting to Discussions (not Ed Discussions).

Administrivia

  • I have office hours Mondays and Wednesdays from 2-3 pm on Zoom, meeting ID 459 434 2854.

  • Complete the online student information sheet. Note: the previous form was not working. Please submit again. Thanks.

  • Yale Information Society Project Free lunch.
    Tuesday, February 13, 2024 - 12:10PM-1:30PM - SLB 128

    Regulating Algorithmic Harms

    Sylvia Lu, Faculty Fellow at the University of Michigan Law School

    In recent years, algorithmic harms—a host of harms to fundamental civil rights—have become a pressing problem for contemporary democracy. As machine-based systems promise to optimize our life with greater efficiency, they present a critical new array of civil rights concerns. For instance, a facial recognition system for improving criminal detection wrongly flagged innocent customers as shoplifters, a healthcare software designed to identify high-risk patients denied medical treatment to Black individuals with poor health conditions, and a social media algorithm intended to boost social engagement exacerbated addictive behavior and mental illness in teenagers. At the core of these problems is the varied, dynamic, and opaque nature of algorithmic harms.

    To confront these new challenges, policymakers worldwide are increasingly adopting regulatory measures directed at algorithmic harms. In December 2023, the European Union passed a landmark comprehensive AI Act, targeting the governance of AI applications through a risk-based approach. Nearly concurrently, the U.S. White House issued an Executive Order to commence initiatives to protect citizens from algorithmic harms. While algorithmic harm concerns are widely recognized in policy agendas, policymakers still struggle to articulate their nature and scope, impeding effective legislation and meaningful enforcement.

    This Article constructs a taxonomy of algorithmic harm that identifies four different interests at stake: privacy erosion, autonomy circumvention, equality diminishment, and safety risks. This taxonomy is informed by case studies of three AI harm mitigation frameworks. This comparative analysis examines the strengths and limitations of each framework, arguing for a shift toward alternative, harm-centered solutions. In doing so, this Article suggests a set of refined harm-based rules, modeled on recent proposals but modified to reflect a more comprehensive understanding of algorithmic harms, aligning the law more closely with the actual problems it intends to regulate.

    Sylvia Lu is a faculty fellow at the University of Michigan Law School. Her teaching and research interests lie in the interplay of law, innovation, and society.

    Lu writes about data privacy laws, artificial intelligence regulations, and comparative law, with a particular focus on the United States, the European Union, and China. She holds a Doctor of Science of Law and a Master of Laws degree from the University of California, Berkeley and earned a Master of Laws from National Tsinghua University in Taiwan.

    Thursday, February 15, 2024 - 12:00PM-1:30PM - Baker Hall 405

    Leveraging Procedural Justice to Shape Online Norms and Rule Following

    Matt Katsaros, Director of the Social Media Governance Initiative within the Justice Collaboratory

    Caroline Nobo, Executive Director of the Justice Collaboratory at Yale Law School

    Tom Tyler, Macklin Fleming Professor of Law and Professor of Psychology at Yale Law School

    Online platforms of all sizes spend considerable resources towards content moderation apparatuses aimed at enforcing rules and decreasing unwanted antisocial interactions. Five years ago, the Justice Collaboratory began the Social Media Governance Initiative to understand how we can translate the theories and ideas developed over decades in the criminal-legal arena on building trust and legitimacy into our online spaces. In this talk, we will share some of our research conducted during this time in collaboration with platforms like Facebook, Twitter, and Nextdoor looking at how to build community vitality in our online world by leveraging procedural justice theory and the social sciences more broadly.

    The presentation will be led by Tom Tyler, Caroline Nobo, and Matt Katsaros. Tom R. Tyler is the Macklin Fleming Professor of Law and Professor of Psychology at Yale Law School, as well as a Co-Founding Director of The Justice Collaboratory. Caroline Nobo is a Research Scholar in Law and Executive Director of the Justice Collaboratory at Yale Law School. Matt Katsaros is a researcher and the Director of the Social Media Governance Initiative within the Justice Collaboratory.

    Assignments

    hw 2 is now available.

    Guest lecture: Joanne Lipman and Rebecca Distler: Monday February 12th

    JOANNE LIPMAN
    Author & journalist who has served as Editor-in-Chief of USA Today, USA Today Network, Conde Nast Portfolio, and The Wall Street Journal’s Weekend Journal. Currently an on-air CNBC contributor and Yale University journalism lecturer.

    Bestselling author of That’s What She Said: What Men and Women Need to Know About Working Together and Next!: The Power of Reinvention in Life and Work. Yale BA ‘83.

    Joanne will speak on AI and the media: its perils (e.g., misinformation), its potential (e.g., rebuilding local news media), and the surprising results when she assigned her journalism students to use it in their reporting.

    REBECCA DISTLER
    Strategist for AI & Digital Health at the Patrick J. McGovern Foundation, a philanthropy focused on advancing AI for social impact. Served as Director of Global Health Initiatives at Element Inc, an AI digital identity company.

    Background in public health and technology, including work with organizations such as the WHO, Gavi, the Vaccine Alliance, and the Bill & Melinda Gates Foundation. Yale BA’12 & Yale MPH’13.

    Rebecca will discuss AI and decision-making in public health, with case studies.

    slides

    We will go out to dinner afterwards at Villa Lulu, 230 College Street. We will have an in-person lottery in class on Wednesday to select students coming to dinner.

    AI and Intentionality: The Chinese Room

    See Consciousness in Artificial Intelligence, John Searle's talk at Google. The discussion of cognitive science and the Sloan talks at Yale begins about 9 minutes in.

    See Minds, brains, and programs John R. Searle, The Behavioral and Brain Sciences (1980).

    The Yale AI Project: Cognitive Modelling

    See The Yale Artificial Intelligence Project: A Brief History Stephen Slade, AI Magazine, 1987.

    See Conceptual Dependency and Its Descendants Steven Lytinen, 1992.
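To make the reading concrete: Conceptual Dependency (CD) represents sentences by a small set of primitive acts (ATRANS for abstract transfer of possession, PTRANS for physical transfer, and so on) rather than by their surface words. The encoding below is our own illustrative sketch, not taken from the cited paper.

```python
# Hedged sketch: a Conceptual Dependency representation of
# "John gave Mary a book", using Schank's ATRANS primitive
# (abstract transfer of possession). The dict encoding is an
# illustration of the idea, not the paper's notation.

sentence_cd = {
    "primitive": "ATRANS",   # abstract transfer of possession
    "actor": "John",
    "object": "book",
    "from": "John",
    "to": "Mary",
}

def paraphrase(cd):
    """Generate an English paraphrase from the CD structure.

    CD's key claim: meaning, not surface form, drives processing.
    "John gave Mary a book" and "Mary received a book from John"
    share a single CD representation.
    """
    if cd["primitive"] == "ATRANS":
        return f'{cd["to"]} received a {cd["object"]} from {cd["from"]}'
    raise ValueError("unsupported primitive")

print(paraphrase(sentence_cd))  # Mary received a book from John
```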

    The Realm of Decisions

    For the next class and the coming weeks: give an example of an explanation you found interesting because it was especially good or bad. It can be personal or from the news. Use the Discussions section of Canvas (not Ed Discussion). You earn a quiz point by posting to Discussions.
  • What is a correct decision? See A Realistic Model of Rationality. This short paper provides a high-level introduction to the topics we will discuss in this course: goals, plans, resources, relationships, goal adoption, explanations, subjective decisions, emotions, advice, and persuasion. We contrast it with standard economic decision theory. We want to develop a theory that can be implemented in a computer program.
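The contrast with standard economic decision theory can be sketched in a few lines of code. Everything here is a hypothetical illustration of the course themes (goals, resources), not an implementation from the paper: a classical agent maximizes probability-weighted payoff, while a more "realistic" agent first respects resource limits and then prefers options that serve more of its active goals.

```python
# Illustrative sketch (all names and numbers are hypothetical):
# classical expected-utility choice vs. a goal- and resource-aware rule.

def expected_utility_choice(options):
    """Standard economic decision theory: pick the option with the
    highest probability-weighted payoff."""
    return max(options, key=lambda o: o["prob"] * o["payoff"])

def goal_based_choice(options, goals, budget):
    """A 'realistic' rule: discard options that exceed available
    resources, then prefer options serving more of the agent's active
    goals, breaking ties by expected payoff. Returns None if nothing
    is feasible."""
    feasible = [o for o in options if o["cost"] <= budget]
    if not feasible:
        return None
    return max(
        feasible,
        key=lambda o: (len(goals & o["serves"]), o["prob"] * o["payoff"]),
    )

options = [
    {"name": "gamble", "prob": 0.5, "payoff": 100, "cost": 60,
     "serves": {"wealth"}},
    {"name": "work", "prob": 0.9, "payoff": 40, "cost": 10,
     "serves": {"wealth", "reputation"}},
]

# Expected utility alone favors the gamble (0.5*100 = 50 > 0.9*40 = 36)...
print(expected_utility_choice(options)["name"])  # gamble
# ...but with limited resources and multiple goals, "work" wins.
print(goal_based_choice(options, {"wealth", "reputation"}, budget=50)["name"])  # work
```

The point of the toy example is that the two rules can disagree on the same options: constraints and goal structure, not just expected payoff, determine what counts as a correct decision.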

