CS 4580/5580 - Paper
| Assigned: | Monday February 23 |
| Deadline: | Monday March 30, 11:59pm |
As the syllabus states, you are required to write a short paper for
this course, which counts for 20% of your grade. This is that paper.
Your paper should be around 5 pages (1500 words). You should submit
it in Canvas, as an uploaded file in Word or PDF format.
For the paper, you should read the book
Goal-Based Decision Making (GBDM), by Stephen Slade.
Hardcover: 304 pages.
Publisher: Psychology Press (October 1, 1993).
It may also be available at the Yale Bookstore. (I have not looked.)
Online copy through the Yale Library
Online copy of the thesis from which the book was derived, at the Yale Library
Insofar as possible, you should incorporate ideas from the book into
your paper. Note: you do not have to agree with the book. You can
challenge or dispute it.
The Big Picture
By now, you should have realized that a premise of this course is that there
are many domains in life for which there are no right answers. For most of your
academic career, you have avoided those issues. That is, the educational system
has generally tested you on objective knowledge. You guys are really good at that.
Once you leave academia, you will find that the world is more
subjective. You will face decisions that are not so black and white.
The security analysis homework is an example. So is this paper. I am
not looking for a right answer. I am looking for an informed analysis
and cogent explanation.
Also, the GBDM framework provides a method for approaching ill-defined
questions for which there are multiple, subjective perspectives.
Topics
Below are the topics from which you may choose.
- The Turing Test. Take a stand. Is it good or bad as a measure
of artificial intelligence? Why?
- Searle and the Chinese Room. This paper originally appeared
in Behavioral and Brain Sciences, a peer commentary journal.
It was published along with a couple of dozen critiques. Write your
own critique.
- Cognitive Modelling: Benefit or Necessity? Discuss
the benefits of, and the need for, AI programs that mirror human cognitive processes.
- Chatting up ChatGPT, Gemini, et al. Siri and Alexa primarily
respond to questions. What would it take to have a long
conversation with a computer? I encourage you to take the
cognitive modelling perspective and explore what it takes for a
person to engage in a long conversation. What does ChatGPT tell us about
human cognition?
- Driverless cars: the next frontier. As suggested in
the video trailer for this course, the Trolley Problem is not the
best test for driverless vehicles. What should a driverless car know
in order to be more like a human chauffeur?
- Risk management. In the fintech world, risk management is
the practice of mitigating the downside risk of your investments.
That is, minimizing how much money you lose. As we learned, risk
is often quantified by the volatility of your portfolio. In life,
there are predictable risks which we can mitigate. In driving a
car, you can get a flat tire. You can mitigate this risk by
carrying a spare or belonging to a roadside assistance plan such
as AAA. You have the risk of getting sick. You mitigate this risk
in part by having health insurance (and getting vaccinated). In
computer programming, we are also concerned with mitigating risk.
In programming, risk is often equated with errors
or exceptions. That is, something happens that causes the
program to break. Many of these exceptional conditions are known
in advance, such as division by zero or missing files. Python has
an extensive list of built-in
exceptions. For this topic, I ask you to come up with a
similar list of exceptions for finance and investing, preferably
with risk mitigation plans. Examples include foreign
exchange risk and credit risk.
- Emotions. What are the benefits of a computer program
understanding emotions? How about a computer having
emotions? Propose how to do this. See GBDM pages 95-97, 104.
- Experience. There is a saying: good judgment comes
from experience and experience comes from bad judgment. Why
do many people become less idealistic over time? Possible
explanations include finding counterexamples to expectations,
adopting conflicting goals through relationships, making bad
decisions that need to be justified. Try to frame your discussion
using the goal-based model of interpersonal relationships. Thus,
how would a computer change over time through its experience and
relationships?
- Anthropic vs. The Pentagon. As you may know, last week
Anthropic refused to allow the Pentagon to use Claude for (a) massive
civilian surveillance, or (b) autonomous lethal weapons. What do
you think? (Note: OpenAI quickly raised its hand to volunteer
its services.)
- Digital Ethics. The five principles of digital ethics are
beneficence, nonmaleficence, autonomy, justice, and explicability.
In this class, we have emphasized the last point, namely that
computers should not be black boxes. They should be able to
explain themselves. GBDM demonstrates one framework for achieving
explicability. How can you use the GBDM approach to address the
other principles: doing good, not doing bad, deciding to decide,
and societal good? Can you create an ethical computer?
The GARP document ethics.html
gives a very good overview of ethics in general and AI ethics in particular.
Should you write about ethics, you may take the content of that
document as axiomatic. That is, you should not repeat any of its
material, but may build on top of it. I would welcome a paper in which you
use the GBDM framework to propose implementations to address one or
more of the ethical issues raised in the GARP document.
You get
an automatic upgrade if you tackle this question using that approach.
Also, you might consider implementing your ideas for the final project in the course. That is, write a paper describing how to implement an ethical program,
then implement it.
- Dealer's Choice. If you have a different idea for a paper,
you should run it by me. I am likely to approve it. If you are
going this route, you should get approval before spring break.
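The risk-management topic above compares financial risk to Python's built-in exceptions and asks for a similar list of exceptions for finance. As a minimal sketch of what such a finance-domain exception hierarchy might look like (all class and function names here are hypothetical illustrations, not part of any real library):

```python
# Hypothetical finance-domain exceptions, modeled on Python's
# built-in exception hierarchy. These names are illustrative only.

class FinanceError(Exception):
    """Base class for finance exceptions, analogous to Exception."""

class ForeignExchangeError(FinanceError):
    """Raised when a currency conversion cannot be completed."""

class CreditRiskError(FinanceError):
    """Raised when a counterparty may fail to meet its obligations."""

class InsufficientFundsError(FinanceError):
    """Raised when an account lacks the cash to settle a trade."""

def settle_trade(balance, cost):
    """Settle a trade, raising a domain exception rather than failing silently."""
    if cost > balance:
        raise InsufficientFundsError(f"need {cost}, have {balance}")
    return balance - cost

# Mitigation plan: catch the domain exception and fall back to a safe
# state, analogous to carrying a spare tire for a flat.
try:
    remaining = settle_trade(balance=100, cost=150)
except InsufficientFundsError:
    remaining = 100  # cancel the trade; keep the original balance
```

The point of the sketch is that each named risk (foreign exchange, credit, liquidity) gets its own exception type, and each `except` clause is a concrete mitigation plan.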
ChatGPT / Gemini etc.
Even if you don't write explicitly about ChatGPT and friends, you are
welcome to use them in developing your paper. You can use them as a
sounding board - that is, bounce ideas off them. You may also use them to
massage your prose. In any event, if you use an AI tool, you should
document it. That is, provide a transcript as an appendix to your
paper showing the AI exchanges.
How to Write a Paper
In previous years, I assumed that Yale students knew how to write a
paper. I was wrong. Your essay should have the following structure.
- An introductory paragraph concluding with a thesis
statement. The thesis statement is what you are going to
demonstrate. It is the point of the paper.
- Three or more paragraphs or sections supporting your thesis statement.
- A concluding paragraph that recapitulates your thesis statement.
Most of you are friendly with Mr. Number and, by extension,
mathematical proofs. An essay is like a proof. You begin a proof
with a statement of what you intend to prove. This is like the
thesis statement. Your reader should never be confused or uncertain
about the point of your argument.
The steps of the proof are axioms or logical inferences that form the
foundation of your argument. These are like the supporting
paragraphs of your paper.
A proof ends with the conclusion, which is the original statement,
albeit now you have proven it. QED stands for quod erat
demonstrandum, "Which was to be demonstrated." Sometimes you
may also see W5, or "which was what we wanted."
Your reader should always know what you are trying to prove.