In the spring, I am teaching CPSC 4580/5580 Automated Decision Systems (ADS).
Pop quiz: which school has won the most college football national championships? It is not Alabama, though they have won 16. Another school tops them at 18. Coming in third, with 15 wins, is, wait for it, Princeton.
Just as most people are unaware of Yale's glorious football history, many do not know that Yale was a major AI research center 50 years ago. See The Yale Artificial Intelligence Project: A Brief History, Stephen Slade, AI Magazine, Volume 8, Number 4 (1987). Unlike today's large language models, the Yale AI Project's focus was on cognitive modeling involving a task orientation, psychological process models, and a canonical representation of knowledge.
We defined intelligence as what people can do. We wanted to find out how people do what they do, and implement those algorithms inside a machine. For us, people provided an existence proof for intelligence.
We wanted to focus on the purple. Never mind AI; we would like to understand human cognition. Also, AI research should concentrate on what AI models cannot do.
Broadly speaking, this perspective amounted to a cognitive science approach to AI, with active collaboration with colleagues in psychology and related disciplines. In fact, in the late 1970s, Yale was one of the main centers for the nascent field of cognitive science. The Yale AI Project received generous funding from the Sloan Foundation to help establish the discipline of cognitive science, including the journal Cognitive Science and the founding of the Cognitive Science Society.
Part of the early seeding of the field was Sloan's support for visiting scholars who would travel around to various research centers to share ideas. These academics came to be known as Sloan Rangers. One of the most prominent was the philosopher John Searle, who visited Yale at this time and subsequently published his famous Chinese Room article, Minds, brains, and programs, John R. Searle, Behavioral and Brain Sciences, 3 (1980).
Searle's argument dominated the field (and Searle's life) for many decades. Here is a talk he gave at Google in 2015 in which he discusses, among other things, coming to Yale: Consciousness in Artificial Intelligence, John Searle | Talks at Google, Dec 4, 2015.
The focus of Searle's argument was the claim, made by researchers at Yale and elsewhere, that computer programs could understand. Part of his argument is that understanding requires intentionality and that no computer possesses intentionality. Therefore, no computer can understand. QED.
What would it mean for a computer to have intentions? That question is one of the main themes of the ADS course in the spring. We propose a framework for creating computer programs that have goals and relationships, which permit them to make decisions and provide explanations. Moreover, we describe a model of emotions that can be naturally derived from this process of goal pursuit.
The main text is Goal-based Decision Making, Stephen Slade, Psychology Press (1993), 304 pages. An online copy is available through the Yale library.
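To make that concrete, here is a minimal sketch (in Python) of what a goal-based agent might look like: goals carry an importance, decisions are made by weighing how candidate actions affect those goals, explanations cite the goals served, and emotions fall out of whether important goals succeed or fail. The names (Goal, Agent, choose, emotion) and the numeric scoring are my own placeholders, not the model from the book or the course.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    importance: float                 # how much the agent cares about this goal

@dataclass
class Agent:
    name: str
    goals: list[Goal]
    # relationships: how much this agent cares about other agents' goals
    # (a stub here; interpersonal concerns are sketched later in this post)
    relationships: dict[str, float] = field(default_factory=dict)

    def value(self, effects: dict[str, float]) -> float:
        """Score an action by how much it advances this agent's goals,
        weighted by each goal's importance."""
        return sum(g.importance * effects.get(g.name, 0.0) for g in self.goals)

    def choose(self, actions: dict[str, dict[str, float]]) -> tuple[str, str]:
        """Pick the highest-valued action and explain it in terms of the
        goals it serves."""
        best = max(actions, key=lambda a: self.value(actions[a]))
        served = [g.name for g in self.goals if actions[best].get(g.name, 0.0) > 0]
        return best, f"{self.name} chose '{best}' because it advances: {', '.join(served)}"

    def emotion(self, goal: Goal, achieved: bool) -> str:
        """Derive a simple emotion from the outcome of goal pursuit."""
        if achieved:
            return "joy" if goal.importance > 0.5 else "satisfaction"
        return "distress" if goal.importance > 0.5 else "disappointment"

# Example: a student deciding how to spend the evening.
student = Agent("student", goals=[Goal("learn AI", 0.9), Goal("relax", 0.4)])
actions = {"read Searle": {"learn AI": 0.8},
           "watch TV":    {"relax": 0.7}}
choice, why = student.choose(actions)
print(why)                                               # ...because it advances: learn AI
print(student.emotion(student.goals[0], achieved=True))  # joy
```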
BTW, the goal-based model is complementary to the LLM approach. They can talk to each other.
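For example, the goal-based layer can make the decision and hand its structured reasons to an LLM to phrase as prose (or, going the other way, an LLM can propose candidate actions for the goal-based layer to evaluate). The sketch below assumes only a generic text-completion call; llm is a hypothetical stand-in, not any particular model's API.

```python
# The goal-based layer supplies the facts; the LLM only does the wording.
# `llm` is a placeholder for any text-completion interface, not a real API.

def llm(prompt: str) -> str:
    return f"[LLM response to: {prompt!r}]"

def explain_in_prose(agent_name: str, action: str, goals_served: list[str]) -> str:
    prompt = (f"Explain in one sentence why {agent_name} chose '{action}', "
              f"given that it advances these goals: {', '.join(goals_served)}.")
    return llm(prompt)

print(explain_in_prose("student", "read Searle", ["learn AI"]))
```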
"Ethical issues in AI include bias and discrimination from training data, privacy concerns regarding data collection and surveillance, and questions of accountability and responsibility for AI decisions. Other major concerns are the potential for misinformation and deepfakes, the impact on employment, and the need for transparency and explainability in how AI systems work." I got that quote from Gemini, Google's LLM.
Yale has its own Digital Ethics Center:
"At the Yale Digital Ethics Center (DEC), we research the governance, ethical, legal, and social implications (GELSI) of digital innovation and technologies and their human, societal, and environmental impact. Through our work, we seek to design a better information society: critical, equitable, just, open, pluralistic, sustainable, and tolerant. We aim to identify and enhance the benefits of digital innovation and technologies while mitigating their risks and shortcomings."
The Yale Digital Ethics Center is a leader in promoting guidelines for creating ethical AI -- rules that should govern responsible AI based on the principles derived from bioethics: beneficence, nonmaleficence, justice, and autonomy. AI requires a fifth principle: explicability.
We propose a complementary approach. In addition to creating rules that govern AI behavior, we suggest creating AI programs that actually know the difference between right and wrong: AI programs with a conscience.
In real life, we have laws and the police to enforce those laws, but we expect individuals to know the difference between right and wrong. In ADS we will explore how to achieve that goal for computer programs. The solution should build on the goal-based framework described above.
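One way to picture such a conscience inside a goal-based framework: before acting, the agent checks each candidate action's effect on goals that belong to other agents and refuses actions that set those goals back. The toy sketch below is only my illustration of that idea, not the solution developed in the course; every name in it is invented.

```python
# A toy "conscience": veto any action that harms a goal belonging to someone else.

def permissible(effects: dict[str, float], others_goals: set[str]) -> bool:
    """An action is permissible if it does not set back (negative effect)
    any goal that belongs to another agent."""
    return all(effects.get(goal, 0.0) >= 0 for goal in others_goals)

def choose_with_conscience(actions, score, others_goals):
    """Pick the best action for me among those that pass the conscience check,
    and report which candidates were rejected."""
    allowed = {a: fx for a, fx in actions.items() if permissible(fx, others_goals)}
    rejected = [a for a in actions if a not in allowed]
    best = max(allowed, key=lambda a: score(allowed[a]))
    return best, f"rejected {rejected}: they would harm another agent's goals"

def score(effects):
    return sum(effects.values())      # my own benefit, as in the earlier sketch

actions = {"copy the homework": {"finish fast": 0.9, "classmate's grade": -0.5},
           "do the homework":   {"finish fast": 0.3, "learn AI": 0.8}}
others = {"classmate's grade"}

print(choose_with_conscience(actions, score, others))
# ('do the homework', "rejected ['copy the homework']: they would harm another agent's goals")
```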
If that appeals to you, sign up for the course. We have an open enrollment policy.
Many people say that it is impossible for a computer to have intentions and ethics. (They said the same thing about speech recognition, chess playing, machine translation, and driverless cars.) That is a good reason to tackle the problem. If you fail, well, what did they expect? If you succeed, then so much the better. That's what makes a good research problem.
We also have guest speakers who discuss the real world of computers that make decisions. Many of them are from finance. We normally take the speakers out to dinner with several students.