Spring 2024 Computer Science 458. 2/19/2024



Canvas Quiz of the Day (daily password required)

Most days, there will be a short Canvas quiz related to the lecture. You need a password, which I will provide in class, to activate the quiz. These quizzes count toward your class participation grade. The quiz is available only during class.

Click for today's quiz.

Also, you will earn class participation points for posting to Discussions (not Ed Discussion).

Administrivia

  • I have office hours Mondays and Wednesdays from 2-3 pm on Zoom (meeting ID 459 434 2854).

  • Complete the online student information sheet. Note: the previous form was not working. Please submit again. Thanks.

  • Yale Information Society Project free lunch talks.
    Tuesday, February 20, 2024 - 12:10PM-1:30PM - SLB 128

    The Prediction Society: AI and the Problems of Forecasting the Future

    Hideyuki Matsumi, PhD candidate/researcher at the Research Group on Law, Science, Technology and Society (LSTS) as well as at the Health and Ageing Law Lab (HALL) of the Vrije Universiteit Brussel (VUB)

    and

    Daniel J. Solove, Eugene L. and Barbara A. Bernard Professor of Intellectual Property and Technology Law, George Washington University Law School

    Today’s predictions are produced by machine learning algorithms that analyze massive quantities of data, and increasingly, important decisions about people are being made based on these predictions. Algorithmic predictions are a type of inference, but they differ from other inferences and raise several unique problems. More broadly, the rise of algorithmic predictions raises an overarching concern: algorithmic predictions not only forecast the future but also have the power to create and control it. Data protection and privacy law does not adequately address these problems. Many laws lack a temporal dimension and do not distinguish between predictions about the future and inferences about the past or present. We argue that the use of algorithmic predictions is a distinct issue warranting different treatment from other types of inference.

    Click here to read the full paper.

    Hideyuki Matsumi, or Yuki, is a doctoral researcher at the Research Group on Law, Science, Technology and Society (LSTS) of the Vrije Universiteit Brussel (VUB). He is also a member of the Health and Ageing Law Lab (HALL), and works on the EU H2020 project Hospital Smart development based on AI (HosmartAI).

    Daniel J. Solove is the Eugene L. and Barbara A. Bernard Professor of Intellectual Property and Technology Law at the George Washington University Law School. He is also the founder of TeachPrivacy, a privacy and cybersecurity training company.


    Thursday, February 22, 2024 - 12:00PM-1:30PM - Baker Hall 405

    Less Discriminatory Algorithms

    Solon Barocas, Principal Researcher in the New York City lab of Microsoft Research

    Entities that use algorithmic systems in traditional civil rights domains like housing, employment, and credit should have a duty to search for and implement less discriminatory algorithms (LDAs). Why? Work in computer science has established that, contrary to conventional wisdom, for a given prediction problem there are almost always multiple possible models with equivalent performance—a phenomenon termed model multiplicity. Critically for our purposes, different models of equivalent performance can produce different predictions for the same individual and, in aggregate, exhibit different levels of impact across demographic groups. As a result, when an algorithmic system displays a disparate impact, model multiplicity suggests that developers may be able to discover an alternative model that performs equally well but has less discriminatory impact. Indeed, the promise of model multiplicity is that an equally accurate but less discriminatory alternative algorithm almost always exists. But without dedicated exploration, it is unlikely that developers will discover potential LDAs.

    Model multiplicity has profound ramifications for the legal response to discriminatory algorithms. Under disparate impact doctrine, it makes little sense to say that a given algorithmic system used by an employer, creditor, or housing provider is either “justified” or “necessary” if an equally accurate model that exhibits less disparate effect is available and possible to discover with reasonable effort. Indeed, the overarching purpose of our civil rights laws is to remove precisely these arbitrary barriers to full participation in the nation’s economic life, particularly for marginalized racial groups. As a result, the law should place a duty of reasonable search for LDAs on entities that develop and deploy predictive models in covered civil rights domains. The law should recognize this duty in at least two specific ways. First, under disparate impact doctrine, a defendant’s burden of justifying a model with discriminatory effects should be recognized to include showing that it made a reasonable search for LDAs before implementing the model. Second, new regulatory frameworks for the governance of algorithms should include a requirement that entities search for and implement LDAs as part of the model building process.
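
    To make model multiplicity concrete, here is a toy sketch (my illustration, not the speakers' method) of one way a search for an LDA might look: train several roughly equally accurate models, then prefer the one with the smallest gap in selection rates between groups. The data, model family, accuracy tolerance, and disparity measure are all invented for illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Hypothetical toy data: features X, labels y, binary group attribute g.
        rng = np.random.default_rng(0)
        n = 4000
        X = rng.normal(size=(n, 5))
        g = rng.integers(0, 2, size=n)
        y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
        X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(X, y, g, random_state=0)

        def gap(pred, group):
            # Illustrative disparity measure: difference in selection rates.
            return abs(pred[group == 0].mean() - pred[group == 1].mean())

        # A crude model search: vary regularization to obtain multiple models.
        candidates = []
        for c in (0.01, 0.1, 1.0, 10.0):
            model = LogisticRegression(C=c).fit(X_tr, y_tr)
            pred = model.predict(X_te)
            candidates.append((model, (pred == y_te).mean(), gap(pred, g_te)))

        # Model multiplicity: several models land within a small accuracy
        # tolerance of the best; among those, pick the least discriminatory.
        best_acc = max(acc for _, acc, _ in candidates)
        lda = min((m for m in candidates if m[1] >= best_acc - 0.01),
                  key=lambda m: m[2])
        print(f"accuracy={lda[1]:.3f}  selection-rate gap={lda[2]:.3f}")

    In a real system the candidate set would come from varying features, model families, and random seeds, and the choice of disparity measure would depend on the legal context.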

    Click here to read the full paper.

    Solon Barocas is a Principal Researcher in the New York City lab of Microsoft Research, where he is a member of the Fairness, Accountability, Transparency, and Ethics in AI (FATE) research group. He is also an Adjunct Assistant Professor in the Department of Information Science at Cornell University, where he is part of the initiative on Artificial Intelligence, Policy, and Practice (AIPP). His research explores ethical and policy issues in artificial intelligence, particularly fairness in machine learning, methods for bringing accountability to automated decision-making, and the privacy implications of inference. He is co-author of the forthcoming textbook "Fairness and Machine Learning: Limitations and Opportunities," and he co-founded the ACM conference on Fairness, Accountability, and Transparency (FAccT).

Assignments

Homework 2 is now available.

Upcoming Guest lecture: Luciano Floridi, Yale Digital Ethics Center: Monday, February 19th

AI Risks and Opportunities

In an age of digital disruption and uncertainty, it is essential to understand the new challenges facing us and to shape the right strategies. We need to be better at analysing the present and designing the future. This is particularly true of the risks and opportunities raised by AI, which is both a crucial element in the development of a fair and sustainable society and one of the most challenging aspects of a fast-paced digital transition.

slides

Luciano Floridi is the Founding Director of the Digital Ethics Center and Professor in the Cognitive Science Program at Yale University. He is world-renowned as one of the most authoritative voices of contemporary philosophy, the founder of the philosophy of information, and one of the major interpreters of the digital revolution. His most recent books are The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities (OUP, 2023) and The Green and The Blue: Naive Ideas to Improve Politics in the Digital Age (Wiley, 2023). His more than 300 works on the philosophy of information, digital ethics, the ethics of AI, and the philosophy of technology have been translated into many languages. In 2022 he was made Knight of the Grand Cross OMRI for his foundational work in philosophy.

We will go out to dinner afterwards at Villa Lulu, 230 College Street. Students coming to dinner:

Upcoming Guest lecture: Duke Dukellis, Google: Wednesday, February 21st

https://www.linkedin.com/in/dukellis/

This lecture will be virtual, probably over Zoom. Alas, there will be no dinner.

CACM History of AI

The Yale AI Project: Cognitive Modelling

See "The Yale Artificial Intelligence Project: A Brief History," Stephen Slade, AI Magazine, 1987.

See "Conceptual Dependency and Its Descendants," Steven Lytinen, 1992.

The Realm of Decisions

For the next class and the coming weeks: give an example of an explanation you found interesting because it was especially good or bad. It can be personal or from the news. Use the Discussions section of Canvas (not Ed Discussion). You earn a quiz point by posting to Discussions.

  • What is a correct decision? See A Realistic Model of Rationality. This short paper provides a high-level introduction to the topics we will discuss in this course: goals, plans, resources, relationships, goal adoption, explanations, subjective decisions, emotions, advice, and persuasion. We contrast it with standard economic decision theory; we want to develop a theory that can be implemented in a computer program (see the sketch below).
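
To make the contrast concrete, here is a minimal sketch of the standard expected-utility model that the paper argues is too narrow. The options, probabilities, and utilities are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        outcomes: list[tuple[float, float]]  # (probability, utility) pairs

        def expected_utility(self) -> float:
            return sum(p * u for p, u in self.outcomes)

    # Hypothetical choices with made-up numbers.
    options = [
        Option("take the job", [(0.7, 10.0), (0.3, -5.0)]),
        Option("stay in school", [(0.9, 4.0), (0.1, 0.0)]),
    ]

    # Classical economic decision theory: choose the option that maximizes
    # expected utility. The course asks what this model leaves out: goals,
    # plans, resources, relationships, emotions, and explanations.
    best = max(options, key=Option.expected_utility)
    print(best.name, round(best.expected_utility(), 2))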

