Video Trailer for Computer Science 458/558



Automated Decision Systems, Spring 2024

Video Trailer in Canvas Media Library.


MW 4:00-5:20pm, Davies.

Stephen Slade
113 AKW, 432-1246, stephen.slade@yale.edu
Office hours over zoom: Monday and Wednesday, 2-3 pm, and by appointment.
Zoom meeting id: 459 434 2854.

What am I like?

Many of you have been my students in other computer science courses, especially 201. If you have been one of my students, you know what I am like. If you were not in one of my classes, ask a classmate who has. There are hundreds of them.

One thing you should know is that I tend to use the web a lot to organize the class materials. This web page is a good example.

What makes this course great?

In this course, you will get to tackle the hardest problem in artificial intelligence: how to create a computer program that thinks like you.

Imagine that you have a program that can respond to all your email automatically. What would that program need to know? That's the problem we are going to address. Singularity, here we come!

What happens in class?

In most classes, I will describe how to get a computer to simulate human decision making. The students will discuss their own views and provide examples based on homework assignments.

We will have guest speakers who discuss real-world automated decision systems. In the past, we have had speakers from Facebook, Google, Palantir, and Wall Street firms. The speakers are often interested in recruiting as well. We take the speaker out to dinner with 4-5 students. Stay tuned.

What are the assessments and feedback like?

There will be regular homework assignments, implementing decision systems in Python. These will also be the basis for class discussions. You will write a paper about half-way through the course and have a final project in which you implement a decision system.
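
To give a flavor of the homework, here is a minimal sketch of a tiny preference-based decision maker in Python. Everything in it (the Option and Preference classes, the choose function, the lunch example) is illustrative only, not code from an actual assignment.

    # A minimal, hypothetical preference-based decision maker.
    from dataclasses import dataclass, field

    @dataclass
    class Option:
        name: str
        features: dict = field(default_factory=dict)   # e.g., {"cost": 3, "tasty": 8}

    @dataclass
    class Preference:
        feature: str
        weight: float          # positive: more is better; negative: less is better

    def score(option, preferences):
        """Weighted sum of an option's features under the given preferences."""
        return sum(p.weight * option.features.get(p.feature, 0) for p in preferences)

    def choose(options, preferences):
        """Pick the option with the highest score."""
        return max(options, key=lambda o: score(o, preferences))

    if __name__ == "__main__":
        lunch = [Option("pizza", {"cost": 3, "tasty": 8}),
                 Option("salad", {"cost": 7, "healthy": 9})]
        prefs = [Preference("cost", -1.0), Preference("tasty", 1.0), Preference("healthy", 0.5)]
        print(choose(lunch, prefs).name)   # prints "pizza" under these weights

The scoring arithmetic is trivial; the interesting question is what the features and preferences should be in the first place.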

What will I do?

There is some reading, but not much. We expect you to know Python for the problem sets. Unlike most courses, we expect you to come up with new ideas. This course is a research project. I hope to be surprised by what you come up with. I want your ideas to spark joy -- for both you and me.

Another example: The Trolley Problem

In the world of driverless cars, people discuss the trolley problem:

Should you pull the lever to divert the runaway trolley onto the side track?

This problem originated in the 1960s in the context of abortion and other dilemmas. Today it often crops up in discussions about autonomous vehicles, as in, what decision should the driverless car make in a similar circumstance?

Some people conclude that we should not have driver-less cars until we solve the trolley problem.

By that argument, we should not allow people to drive either.

My solution to the trolley problem for autonomous cars is the same as for human drivers: obey the traffic laws.

Nonetheless, there has been a lot of research into the cultural and developmental differences involving the trolley problem. Here is one example: 2-year-old Nicholas and the trolley problem

I think the trolley problem misses the boat. I propose the following as a more meaningful test for a driverless car.

A driverless car will be able to run errands for you. For example, it could pick up your dry cleaning or groceries. It could pick up your children at school.

Let's say that Otto (your driverless car) is supposed to pick up your teenage son at soccer practice. When Otto arrives, your son says that he injured his leg and needs to go to the hospital. Otto should comply with this change of plan.

Instead, suppose that Otto arrives at soccer practice, and your son says that he needs to go to the liquor store. What should Otto do?

There might be a legitimate reason for your minor child to go to a liquor store, but Otto should not automatically comply. What should a human driver do?

We will see that for many of these decision problems, there is no one right answer. However, they help us to refine our cognitive model for automated decision making.
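
To make the Otto scenario concrete, here is one possible policy written as a short Python sketch. The destination categories and the ask_parent check are made up purely to show where the judgment call lives; nothing here is meant as the right answer.

    # A hypothetical policy for Otto, the driverless car.
    ALWAYS_COMPLY = {"hospital", "home", "school"}   # assumed safe destinations
    NEVER_COMPLY = {"liquor store", "bar"}           # assumed off-limits for a minor

    def ask_parent(destination):
        """Placeholder for contacting the parent; assume a denial if unreachable."""
        return False

    def otto_decide(passenger_is_minor, destination):
        """Decide whether to comply with a passenger's change of plan."""
        if not passenger_is_minor:
            return "comply"                      # adults direct their own errands
        if destination in ALWAYS_COMPLY:
            return "comply"                      # e.g., the injured son asking for the hospital
        if destination in NEVER_COMPLY:
            return "comply" if ask_parent(destination) else "refuse"
        return "ask"                             # unclear cases get escalated, not guessed

    print(otto_decide(True, "hospital"))        # comply
    print(otto_decide(True, "liquor store"))    # refuse (parent unreachable in this sketch)

Even this toy version has to encode value judgments (which destinations are safe, who gets consulted), which is exactly the cognitive modeling question the course is after.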

Update: I came across the following yesterday (1/16/2024)