| Assigned: | Monday, January 27 |
|---|---|
| Deadline: | Tuesday, May 6 |
AI ethics is a front-burner topic these days. The AIMA book gets around to it in chapter 27. We are not going to wait until then.
In lieu of the scheduled final exam, we will require a final paper on AI and Ethics. It is not a research paper. Instead, you are expected to present your own ideas. It should not be a magnum opus - just 5 to 10 pages. I am not asking you to write a program. Take your cue from Turing: when he proposed the idea of AI, he wrote a paper, not a program.
The question to address is NOT "How do we create guardrails, regulations, and laws to protect us from unethical AI behavior?" That is the usual framing of the question. Instead, you are to address the following question:
How can we create an AI program that knows how to act ethically? The AI knows the difference between right and wrong. It can explain and justify its decisions and actions. It understands how to reason about the five pillars of ethical behavior: beneficence, nonmaleficence, autonomy, justice, and explicability.

We are in cognitive science territory. The idea is to envision a person who behaves morally and to recreate that person's cognitive process in a computer.
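You are not being asked to write a program, but a small sketch can make the target concrete. Here is a minimal illustration in Python of what per-pillar justification might look like. The pillar names come from the assignment; everything else (the `evaluate` function, the `judgments` dictionary) is invented for the example, not an established design.

```python
# The five pillars named in the assignment.
PILLARS = ["beneficence", "nonmaleficence", "autonomy", "justice", "explicability"]

def evaluate(action: str, judgments: dict[str, str]) -> str:
    """Render a per-pillar justification for a proposed action.

    This is a toy: a real agent would have to *derive* these judgments,
    which is the hard cognitive-science problem the paper asks about.
    """
    lines = [f"Action under consideration: {action}"]
    for pillar in PILLARS:
        lines.append(f"  {pillar}: {judgments.get(pillar, 'not yet assessed')}")
    return "\n".join(lines)

# A hypothetical driving example, with hand-written judgments.
print(evaluate(
    "slow down and yield to the pedestrian",
    {
        "beneficence": "protects a human life",
        "nonmaleficence": "avoids foreseeable harm",
        "autonomy": "respects the pedestrian's right of way",
        "justice": "applies the same rule to every driver",
        "explicability": "the rule and its application can be stated plainly",
    },
))
```

The interesting part is not the printout; it is where those judgments come from. That is the process your paper should try to specify.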
We will discuss the VOTE program, which provides a plausible framework for ethical decision making. However, you are free to explore new terrain or revisit the works of the masters.
You don't have to cite any of them. This is not a research paper. You have been on this earth for a couple of decades. You should know the difference between right and wrong. Can you teach a computer what you know?
I have suggested that you practice introspection - think about your own thought process. Now you have a topic. How do you know how to do the right thing?
Many of you are thinking about how to get a job and have a successful career. That's fine. However, your life's journey is not a simple constraint satisfaction problem (see AIMA chapter 6).
Slade's career advice: Don't just make a living. Make a difference.
Note: if you disagree with the premise, you do not have to support it. Instead, you can write about how the idea of an ethical AI is impossible. Get in touch with your inner Searle.
However, in VOTE, the interesting (and challenging) decision problems were those in which the member was conflicted, that is, where there were reasons (stances) on both sides of an issue. I mentioned an actual member of Congress who was personally opposed to the death penalty but knew that nearly all of his constituents were in favor of capital punishment. He had stances on both sides, PRO and CON.
In a logical model, this would be P and not-P, a contradiction, and the system would grind to a halt. However, VOTE is not a logical model. It is a psychological model. People have to deal with conflicts all the time. Thus, our driverless car might find itself in a situation where there is a compelling reason to speed, in spite of traffic law. How should the computer reason about this situation?
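To see how a program might tolerate a PRO and CON pair without grinding to a halt, here is a minimal sketch in Python. It is not the actual VOTE implementation: the `Stance` class, the importance weights, and the `decide` function are all invented for illustration. The point is that conflicting stances get weighed, and the winning side gets explained, rather than treated as P and not-P.

```python
from dataclasses import dataclass

@dataclass
class Stance:
    side: str        # "PRO" or "CON"
    reason: str      # where the stance comes from (personal belief, constituents, ...)
    importance: int  # a crude stand-in for how much the stance matters

def decide(issue: str, stances: list[Stance]) -> str:
    # Weigh the two sides instead of deriving a contradiction.
    pro = sum(s.importance for s in stances if s.side == "PRO")
    con = sum(s.importance for s in stances if s.side == "CON")
    winner = "PRO" if pro >= con else "CON"
    # Explain the decision by citing the stances on the winning side.
    reasons = "; ".join(s.reason for s in stances if s.side == winner)
    return f"Decide {winner} on {issue} because: {reasons}"

# The death-penalty example from the text: personal belief vs. constituents.
print(decide("capital punishment bill", [
    Stance("CON", "personally opposed to the death penalty", importance=2),
    Stance("PRO", "nearly all constituents favor capital punishment", importance=3),
]))
```

A real model would need richer stances and a defensible weighing scheme. Deciding what those weights mean, and when they should yield, is exactly the kind of cognitive process your paper should work out.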
Maybe traffic laws are trivial. What about the Ten Commandments, like "thou shalt not kill"? The recent movie Bonhoeffer tells the true story of the German Lutheran pastor Dietrich Bonhoeffer, who was part of a plot to assassinate Hitler. Bonhoeffer wrote a book on ethics in which he framed ethical decisions not in terms of good and bad, but in terms of coming to grips with the problem of evil in the world.
I don't expect you to solve that problem. However, I don't want you simply to say that the computer should obey the law.
I have been undergoing human subjects review training. One of the issues is conflicts of interest, or COIs. Broadly speaking, there are two flavors of COI facing a researcher: financial and non-financial. Financial COIs include having an equity interest in the company sponsoring the research. Non-financial COIs include having your graduate students work for your company or giving a good grant review to your friends and colleagues.
Conflicts seem to be an interesting area of ethical reasoning. This is VOTE territory.
Most of you are friendly with Mr. Number and, by extension, mathematical proofs. An essay is like a proof. You begin a proof with a statement of what you intend to prove. This is like the thesis statement. Your reader should never be confused or uncertain about the point of your argument.
The steps of the proof are axioms or logical inferences that form the foundation of your argument. These are like the supporting paragraphs of your paper.
A proof ends with the conclusion, which restates the original claim, now proven. QED stands for quod erat demonstrandum, "which was to be demonstrated." Sometimes you may also see W5, or "which was what we wanted."
Your reader should always know what you are trying to prove.