In recent years, we have witnessed impressive advances in computer technology based on statistical models of machine learning. Techniques such as deep learning (based on neural networks), hidden Markov models, naive Bayesian models, decision trees (and forests, which are collections of trees), linear and logistic regression, and others, have been labelled collectively as big data or data science or machine learning or, spuriously, artificial intelligence. By and large, these are statistical techniques that have been around for decades. What has changed is that we now have access to enormous data sets and the concomitant computer capacity to process these data sets.
In the past, a key issue in statistics was how to select a representative sample from a population. People are familiar with opinion polling data that carry a plus or minus range of uncertainty. If the pollsters could speak with every single person, there would be no margin of error. Essentially, that is what now happens with big data. Instead of taking a sample of words from Shakespeare, DNA base pairs from an E. coli genome, Facebook likes, self-driving car left turns, or words and phrases that appear on web pages, researchers can examine the entire population. There is no margin of error.
Admittedly, the data - except for Shakespeare and maybe the E. coli genome - may change over time, so there is a need to update the results.
At a recent symposium at Yale on cyberwarfare, the topic of killer robots came up. There were many issues, including: should we allow a robot (or drone) to use lethal force without human intervention? Current practice requires that there be a human operator who actually pulls the trigger, as it were. Do we want to remove the human from the loop?
Former Defense Secretary Ash Carter stated that we could do so only if there was “transparent accountability.” If a human soldier uses lethal force outside the chain of command, there is usually a tribunal that reviews the circumstances. Carter was advocating a similar process for robots. The robot that killed someone would have to explain why it did so. Presumably, self-defense would not be a valid reason.
The other speakers included former Secretary of State Henry Kissinger and former Google CEO Eric Schmidt. No one questioned whether a robot or drone could exercise autonomous lethal force; that was taken for granted. The question was whether we should allow the robot to have that agency.
Early AI researchers knew that goals were an important part of human cognition. Hewitt's PLANNER programming language, which provided a way to combine knowledge representations with procedural (programming) knowledge, reasoned about goals.
You could give PLANNER a high-level goal and it would develop a plan to achieve that goal. It did so by treating the goal as a theorem, which it would then prove. The steps of the proof became the plan to achieve the goal. It was quite a brilliant approach.
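To make the flavor of that approach concrete, here is a tiny sketch in the spirit of PLANNER. It is emphatically not Hewitt's language, just an illustration with made-up rules: a goal is reduced to subgoals until known facts are reached, and the actions used along the way become the plan.

# Illustrative only: a toy backward chainer in the spirit of PLANNER.
# Each rule maps a goal to (subgoals, action that achieves the goal once the subgoals hold).
rules = {
    'have tea': (['have hot water', 'have tea bag'], 'steep tea'),
    'have hot water': (['have kettle'], 'boil water'),
}
facts = {'have kettle', 'have tea bag'}

def achieve(goal, plan):
    '''Reduce a goal to known facts, appending the actions used to the plan.'''
    if goal in facts:
        return True
    if goal in rules:
        subgoals, action = rules[goal]
        if all(achieve(g, plan) for g in subgoals):
            plan.append(action)
            return True
    return False

plan = []
achieve('have tea', plan)
plan   # ['boil water', 'steep tea']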
The Micro-PLANNER implementation was used by Terry Winograd in SHRDLU, his natural language understanding program set in the blocks world, which served as AI's equivalent of the E. coli experiments in biology.
Consider a driverless car: should it take a teenager to the liquor store? To the emergency room when he goes into diabetic shock? To the police station when the car has been hijacked? We will return to scenarios like these shortly.
In the world of driverless cars, people often discuss the trolley problem.
The problem originated in the 1960s in the context of abortion and other ethical dilemmas. Today it crops up in discussions of autonomous vehicles: what decision should the driverless car make in an analogous circumstance? In the classic formulation, a runaway trolley is bearing down on five people, and you can pull a lever to divert it onto a side track where it will kill only one.
Should you pull the lever to divert the runaway trolley onto the side track?
Some people conclude that we should not have driver-less cars until we solve the trolley problem.
By that argument, we should not allow people to drive either.
My solution to the trolley problem for autonomous cars is the same as for human drivers: obey the traffic laws.
Nonetheless, there has been a lot of research into the cultural and developmental differences involving the trolley problem. Here is one example: 2-year-old Nicholas and the trolley problem
I think the trolley problem misses the boat. I propose the following as a more meaningful test for a driverless car.
A driverless car will be able to run errands for you. For example, it could pick up your dry cleaning or groceries. It could pick up your children at school.
Let's say that Otto (your driverless car) is supposed to pick up your teenage son at soccer practice. When Otto arrives, your son says that he injured his leg and needs to go to the hospital. Otto should comply with this change of plan.
Instead, suppose that Otto arrives at soccer practice, and your son says that he needs to go to the liquor store. What should Otto do?
There might be a legitimate reason for your minor child to go to a liquor store, but Otto should not automatically comply. What should a human driver do?
We will see that for many of these decision problems, there is no one right answer. However, they help us to refine our cognitive model for automated decision making.
The term artificial intelligence means different things to different people. It also changes over time. Alan Turing, the British mathematician, breaker of the WWII German Enigma code, and father of both computer science and artificial intelligence, acknowledged that the very idea of machine intelligence was audacious. He knew that people would be hesitant to ascribe intelligence to a computer. He proposed an experiment in which a human subject would communicate via teletype to either another human or a computer program. If the human subject could not tell the difference, then we would conclude that the computer program was intelligent.
Over the years, there have been many milestones for computer intelligence. The first compilers, which converted statements in high-level languages like FORTRAN or ALGOL into machine code, were termed automatic programming. They were performing tasks that had previously required highly trained and skilled humans.
Following Turing, artificial intelligence got its start at an academic workshop at Dartmouth in the summer of 1956. The workshop was organized by John McCarthy (the inventor of the LISP programming language). Other prominent participants included Marvin Minsky, Allen Newell, Claude Shannon, Herbert Simon, Oliver Selfridge, John Nash, Arthur Samuel, and John Holland. Most of the major breakthroughs in the field for the following decades can be traced to this meeting. These advances included new programming languages, theorem proving, champion chess and checkers programs, machine learning, software engineering techniques, robotics, vision, and speech recognition.
As computer programs achieved each new milestone, many observers would claim that we had arrived at artificial intelligence, which, alas, was just not the case. Indeed, part of the perception problem was due to the hubris of the Dartmouth participants and their colleagues. They would promise that in 10 years or so they would solve the “vision problem” or the “natural language problem” or the “planning problem.” Their enthusiasm and confidence were similar to those of medical researchers who greet each new diagnostic or treatment advance as a major breakthrough that will rid mankind of disease. Antibiotics, anaesthesia, and x-rays are certainly salutary developments, but they do not represent a universal panacea.
Most people recognize that modern medicine, even with its impressive advances, has not conquered all disease. People will still die. For better or worse, artificial intelligence, for much of the world, continues to have the misleading reputation of, well, intelligence.
Computer programs do not display human intelligence any more than penicillin or morphine cures all disease. The predictions and fears surrounding artificial intelligence, often inspired by science fiction, seem to imply that robots can be functioning replacements for humans in any way imaginable.
One of the earliest papers discussing this delusional view of AI was Drew McDermott's “Artificial Intelligence Meets Natural Stupidity,” published in 1976, before Drew moved from MIT to Yale, where he remained until his recent retirement.
Philosophers have contemplated human intelligence for thousands of years. Once artificial intelligence reared its head, philosophers paid attention. In particular, John Searle wrote a famous paper “Minds, Brains, and Programs” in 1980.
One of Searle's key observations is that the computer cannot be intelligent because it lacks intentionality. The program that plays chess does not want to play chess. The self-driving car does not want to drive. It is not driving because it wants to go to school or to the movies.
These so-called intelligent programs lack intentionality. They have no motivation. We have a robot without a cause.
To some extent, the response to this observation is "so what." An airplane, unlike a bird, does not want to fly. Nevertheless, airplanes are extraordinarily useful and marvelous machines. The modern world is a much better place because of air travel.
However, Searle is not arguing that computer programs are not useful. He is saying that they are not intelligent, due to their lack of intentionality.
I believe that is a fair criticism. Computers do not have intentionality.
This raises the question: is it possible for computers to have intentionality? Can we create a computer that has beliefs, goals, and desires, and can act on them?
We will explore that question throughout this document.
For a computer to have goals, we need a goal data structure. Let's give it a try. We start with one flavor of goals: an issue.
We use standard Python object-oriented programming techniques. Nothing fancy here.
Unlike most academic papers, we will not limit our argument to theories, explanations, formulas, and examples. Thanks to the marvels of Jupyter Notebooks, we will include actual, honest to God, executable code (in Python). Like so many advances in the field of computer science, the idea of combining text and code can be traced to Donald Knuth. He pioneered literate programming back in 1984. Knuth named his implementation WEB, since this was one of the few three-letter English words that had not been applied to computing.
class issue:
    ''' Class for issues '''
    # keep track of each instance of an issue.
    count = 0    # how many issues have we created?
    issues = {}  # store the issues in a dictionary, aka, a hash table.

    def __init__(self, name):
        ''' This is the constructor for an issue. It is invoked with the class name and
        the name of the issue, e.g., issue("abortion").
        We use the Python string method upper() to convert the name to upper case.
        If the issue is already in the dictionary, we ignore this instance.
        Otherwise, we assign it the next sequential count, increment the class count,
        and add the new issue to the dictionary. '''
        self.name = name.upper()
        if self.name not in issue.issues:
            self.count = issue.count
            issue.count += 1
            issue.issues[self.name] = self

    def __repr__(self):
        ''' Print out a representation that evaluates to this issue.'''
        return f'issue({self.name!r})'

    def __str__(self):
        ''' Return string version of the issue that includes its name and count. '''
        return f"<issue ({self.count}): {self.name}>"

    def __eq__(self, other):
        ''' Overload == operator. Two issues must match issue name. '''
        return self.name == other.name
OK. This is pretty minimal. We will expand it later on.
As we slowly enlarge the definitions, the reader will no doubt raise objections that the model does not do this or the model cannot handle this case. Those objections will mostly be true. However, we urge patience. The model has to do something before it can do everything.
The author observes that such objections may fall prey to the fallacy of the counter-example. That is, the argument that a model is invalid because it fails to handle this case or that. For example, aspirin is of no use because it does not cure cancer.
Therefore, we advise patience, which Ambrose Bierce defined as “a minor form of despair, disguised as a virtue.”
We can now create a couple of issues.
i1 = issue('abortion rights')
i2 = issue('gun control')
i3 = issue('covid')
i4 = issue('equality')
i5 = issue('freedom of speech')
i1
issue('ABORTION RIGHTS')
print(i1)
<issue (0): ABORTION RIGHTS>
The value of the issue is rendered with the repr method. The print function calls the str method.
We can demonstrate the overloaded == and (implicitly defined) != operators.
i1 == i1
True
i1 == i2
False
i1 != i2
True
The issue.issues dictionary is useful.
issue.issues
{'ABORTION RIGHTS': issue('ABORTION RIGHTS'), 'GUN CONTROL': issue('GUN CONTROL'), 'COVID': issue('COVID'), 'EQUALITY': issue('EQUALITY'), 'FREEDOM OF SPEECH': issue('FREEDOM OF SPEECH')}
issue.issues['ABORTION RIGHTS']
issue('ABORTION RIGHTS')
issue.issues['DEMOCRACY']
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[11], line 1
----> 1 issue.issues['DEMOCRACY']

KeyError: 'DEMOCRACY'
You can catch this KeyError exception, or use the in operator.
'ABORTION RIGHTS' in issue.issues
True
'DEMOCRACY' in issue.issues
False
'abortion rights' in issue.issues
False
The issue name must be upper case to match.
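If that feels error-prone, a small helper - not part of the class above, just a hypothetical convenience - can normalize the case before looking up.

def find_issue(name):
    '''Hypothetical helper: case-insensitive lookup of an existing issue.
    Returns the issue instance, or None if no such issue exists.'''
    return issue.issues.get(name.upper())

find_issue('abortion rights')   # issue('ABORTION RIGHTS')
find_issue('democracy')         # None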
These issues are diverse. Some issues usually have a valence. For example, most people are either pro-abortion (or pro-choice) or anti-abortion (or pro-life). Similarly, there are gun control advocates and supporters of the Second Amendment.
We assume almost everyone is opposed to the coronavirus. However, there are people who are pro-vaccine and there are anti-vaxxers. There are people in favor of mask mandates and those opposed. We can add issues for those groups.
i6 = issue('covid vaccine')
i7 = issue('mask mandates')
We want to be able to represent the positions of supporting or opposing abortion or vaccines or gun control or mask mandates. We also want to capture the fact that even people who agree on a position may not be equally adamant. One person who opposes abortion may bomb a Planned Parenthood clinic. Another may merely vote for a conservative politician. We want to be able to represent the spectrum of intensity.
Our initial attempt is to create a stance data structure that comprises an issue, a side (pro or con), and an importance level of A, B, or C, where A is very important, B is moderate, and C is minimal.
Why not use numbers to represent importance? Good question. Numbers can indeed capture the variation of intensity or commitment. However, once you start using numbers, there is the temptation to use them in inappropriate ways, such as addition, multiplication, square roots, exponents, logarithms, and on and on. It is a slippery slope.
Economic decision theory originally did not require numbers - just ordinal ranking. We can achieve ordinal ranking with A, B, and C. It will serve to rein in any tendency toward numerical exuberance.
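Conveniently, Python compares strings in dictionary order, so the letters already behave as an ordinal scale: 'A' sorts before 'B', which sorts before 'C', and there is no arithmetic to misuse. A quick illustration:

importance_levels = ['C', 'A', 'B']
sorted(importance_levels)   # ['A', 'B', 'C'] - most important first
'A' < 'C'                   # True: an ordinal comparison, no numbers required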
Here is the stance class.
class stance:
    ''' Class for importance and side on a given issue.'''
    count = 0
    stances = []

    def __init__(self, issuename, side='pro', importance='A'):
        ''' Constructor for stance(). If the issuename is not already an issue,
        create a new issue. '''
        if not issuename.upper() in issue.issues:
            issue(issuename)
        self.issue = issue.issues[issuename.upper()]
        self.side = side.upper()
        self.importance = importance.upper()
        self.count = stance.count
        stance.count += 1
        stance.stances.append(self)

    def __repr__(self):
        ''' Print out code that evaluates to this stance.'''
        return f'stance({self.issue.name!r}, {self.side!r}, {self.importance!r})'

    def __str__(self):
        ''' Return string version of self. '''
        return f"<stance ({self.count}): {self.issue.name} [{self.side}:{self.importance}]>"

    def __eq__(self, other):
        ''' Overload == operator. Two stances must match issue and side,
        though not importance. '''
        return self.issue == other.issue and self.side == other.side

    def copy(self):
        ''' Clone a stance. New stance has same issue, side, and importance. '''
        return stance(self.issue.name, self.side, self.importance)

    def __hash__(self):
        ''' hash() function for stance.
        Need this for set() to remove duplicates.
        Note: do not need to include importance. Match is on issue and side only. '''
        return hash((self.issue.name, self.side))

    def __lt__(self, other):
        ''' Comparison operator < to allow sorting stances. '''
        return self.issue.name + self.side < other.issue.name + other.side
s1 = stance('abortion rights')
s2 = stance('abortion rights', 'con', 'b')
s3 = stance('free speech')
s1
stance('ABORTION RIGHTS', 'PRO', 'A')
print(s1)
<stance (0): ABORTION RIGHTS [PRO:A]>
s1 == s2
False
s1 != s2
True
s1 == stance('abortion rights', 'pro', 'c')
True
s3 == s3.copy()
True
sorted([s1,s2,s3])
[stance('ABORTION RIGHTS', 'CON', 'B'), stance('ABORTION RIGHTS', 'PRO', 'A'), stance('FREE SPEECH', 'PRO', 'A')]
An agent has goals - or stances - that reflect her desires and guide her choices.
Below is a first draft of an agent class. We will later expand this definition to include relationships with other agents.
class agent:
    '''Class for agents who have goals.'''
    count = 0
    agents = []

    def __init__(self, name, pronouns='he him his'):
        ''' Constructor for agent with name.'''
        self.name = name
        self.pronouns = pronouns
        self.goals = []
        self.count = agent.count
        agent.count += 1
        agent.agents.append(self)

    def __repr__(self):
        ''' Print out agent so that it can evaluate to itself.'''
        return f"agent({self.name!r})"

    def __str__(self):
        '''Return agent as a string.'''
        return f"<agent. name: {self.name} ({self.count})>"

    def add_goal(self, goal):
        '''Add goals (stances) without duplicates.'''
        if not goal in self.goals:
            self.goals.append(goal)

    def pp(self):
        '''Pretty print agent information.'''
        result = f"Name:\t{self.name}"
        if self.goals:
            result += f"\nGoals:\t{self.goals}"
        if self.pronouns:
            result += f"\nPronouns:\t{self.pronouns}"
        return result

    def __eq__(self, other):
        ''' Overload == operator. Are two agents equal by name and goals? '''
        return self.name == other.name and sorted(self.goals) == sorted(other.goals)

    def copy(self):
        ''' Clone the agent, including name and goals. '''
        newagent = agent(self.name)
        newagent.goals = self.goals[:]
        return newagent
Question for the reader: why does the copy() method use newagent.goals = self.goals[:] rather than simply newagent.goals = self.goals?
The sorted() calls in __eq__ are there because list equality is order-sensitive:
[1,2,3] == [3,2,1]
False
sorted([1,2,3]) == sorted([3,2,1])
True
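And a hint for the copy() question above: plain assignment binds a second name to the same list, while the slice [:] builds a new one.

original = [1, 2, 3]
alias = original      # both names refer to the same list object
clone = original[:]   # the slice creates a new list with the same elements
original.append(4)
alias   # [1, 2, 3, 4] - the alias sees the change
clone   # [1, 2, 3]    - the clone does not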
a1 = agent('James Bond')
a2 = agent('Mata Hari', 'she her her')
a1
agent('James Bond')
print(a2)
<agent. name: Mata Hari (1)>
a1.add_goal(s1)
a1.add_goal(s2)
a1.add_goal(s3)
a1.add_goal(s1)
a1.goals
[stance('ABORTION RIGHTS', 'PRO', 'A'), stance('ABORTION RIGHTS', 'CON', 'B'), stance('FREE SPEECH', 'PRO', 'A')]
a1.pp()
"Name:\tJames Bond\nGoals:\t[stance('ABORTION RIGHTS', 'PRO', 'A'), stance('ABORTION RIGHTS', 'CON', 'B'), stance('FREE SPEECH', 'PRO', 'A')]\nPronouns:\the him his"
print(a1.pp())
Name:	James Bond
Goals:	[stance('ABORTION RIGHTS', 'PRO', 'A'), stance('ABORTION RIGHTS', 'CON', 'B'), stance('FREE SPEECH', 'PRO', 'A')]
Pronouns:	he him his
a1 == a2
False
a1 == a1.copy()
True
We can now create an agent who has a collection of goals, or more precisely, stances.
These stances comprise preferences. An agent makes choices that reflect these preferences.
The first step is to create agents that can make very minimal decisions. We propose an initial stage in which an agent is given a state of the world and evaluates it. The decision function is simply the predicate likes(): does the agent like the outcome or not?
The next step is to give the agent a choice between two outcomes: A or B. There are four possibilities: the agent may prefer A to B, prefer B to A, be indifferent between the two, or want neither.
def likes(agt, obj):
    ''' First-draft decision predicate: compare the agent's goals against obj,
    a list of stances describing a state of the world.
    Returns True or False, or ("both", ...) when some goals match and others do not.
    Note that any goal that does not match a stance counts against that stance. '''
    proresult = []
    conresult = []
    for g in agt.goals:
        for s in obj:
            if g == s:
                proresult.append(s)
            if g != s:
                conresult.append(s)
    if proresult and conresult:
        return ("both", proresult, conresult)
    if proresult:
        return (True, proresult)
    if conresult:
        return (False, conresult)
    else:
        return False
print(a1.pp())
Name:	James Bond
Goals:	[stance('ABORTION RIGHTS', 'PRO', 'A'), stance('ABORTION RIGHTS', 'CON', 'B'), stance('FREE SPEECH', 'PRO', 'A')]
Pronouns:	he him his
likes(a1, [stance('abortion rights','con')])
('both', [stance('ABORTION RIGHTS', 'CON', 'A')], [stance('ABORTION RIGHTS', 'CON', 'A'), stance('ABORTION RIGHTS', 'CON', 'A')])
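The likes() predicate is admittedly crude: a goal that merely fails to match a stance lands in the con pile, which is why the result above comes back as “both”. Still, it is enough to sketch the two-outcome comparison described earlier. The prefers() function below is not part of the model so far, just one possible shape for that comparison; it counts how many of the agent's goals each outcome matches and ignores importance for now.

def prefers(agt, outcome_a, outcome_b):
    '''Illustrative sketch only: compare two outcomes (lists of stances)
    by counting how many of the agent's goals each one matches.
    Returns 'first', 'second', 'indifferent', or 'neither'.'''
    score_a = sum(1 for g in agt.goals for s in outcome_a if g == s)
    score_b = sum(1 for g in agt.goals for s in outcome_b if g == s)
    if score_a == 0 and score_b == 0:
        return 'neither'
    if score_a > score_b:
        return 'first'
    if score_b > score_a:
        return 'second'
    return 'indifferent'

prefers(a1, [stance('free speech')], [stance('gun control')])
'first'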
Here is some code to generate English versions of the stances.
import random

def english(agent, stance):
    ''' Generate an English sentence describing the agent's stance,
    dispatching on the side (PRO or CON) of the stance. '''
    side = stance.side
    match side:
        case "PRO":
            return english_pro(agent, stance)
        case "CON":
            return english_con(agent, stance)
        case _:
            return "default"
def english_pro(agent, stance):
    ''' Generate a sentence of support, with phrasing keyed to the
    importance level (A, B, or C) of the stance. '''
    imp = stance.importance
    iss = stance.issue.name.lower()
    phrases = {
        "A": [f"{verb(agent,'be','present')} unwavering in {pronoun(agent,'poss')} support of {iss}",
              f"{verb(agent,'stress','present')} {pronoun(agent,'poss')} long-standing support of {iss}",
              f"{verb(agent,'emphasize','present')} {pronoun(agent,'poss')} deep-rooted support of {iss}",
              f"unwaveringly {verb(agent,'endorse','present')} {iss}"],
        "B": [f"strongly {verb(agent,'support','present')} {iss}",
              f"readily {verb(agent,'endorse','present')} {iss}"],
        "C": [f"{verb(agent,'approve','present')} of {iss}",
              f"{verb(agent,'endorse','present')} {iss}"]
    }
    print(agent.name + " " + random.choice(phrases[imp]))
# Irregular verb conjugations.
irreg = {
    "be": "is"
}

def verb(agent, v, tense):
    ''' Return the third person singular form of the verb v.
    (The tense argument is not yet used.) '''
    if v in irreg.keys():
        return irreg[v]
    if v[-1] == 's':
        return v + 'es'
    return v + 's'

def pronoun(agent, case):
    ''' Return the agent's pronoun for the given case:
    subjective, objective, or possessive. '''
    pronouns = agent.pronouns.split()
    cases = {'subj': 0, 'obj': 1, 'poss': 2}
    return pronouns[cases[case]]
def english_con(agent, stance):
    ''' Generate a sentence of opposition, with phrasing keyed to the
    importance level (A, B, or C) of the stance. '''
    imp = stance.importance
    iss = stance.issue.name.lower()
    phrases = {
        "A": [f"{verb(agent,'be','present')} unwavering in {pronoun(agent,'poss')} opposition to {iss}",
              f"{verb(agent,'stress','present')} {pronoun(agent,'poss')} long-standing opposition to {iss}",
              f"{verb(agent,'emphasize','present')} {pronoun(agent,'poss')} deep-rooted opposition to {iss}",
              f"unwaveringly {verb(agent,'oppose','present')} {iss}"],
        "B": [f"strongly {verb(agent,'oppose','present')} {iss}",
              f"{verb(agent,'be', 'present')} strongly opposed to {iss}",
              f"firmly {verb(agent,'oppose','present')} {iss}",
              f"{verb(agent,'speak','present')} out against {iss}"],
        "C": [f"{verb(agent,'be','present')} opposed to {iss}",
              f"{verb(agent,'be','present')} against {iss}",
              f"{verb(agent,'object','present')} to {iss}",
              f"{verb(agent,'oppose','present')} {iss}"]
    }
    print(agent.name + " " + random.choice(phrases[imp]))
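The verb() and pronoun() helpers perform only very light morphology, and verb() ignores its tense argument for now. A few illustrative calls, with their values shown as comments:

verb(a1, 'be', 'present')        # 'is' (irregular)
verb(a1, 'stress', 'present')    # 'stresses'
verb(a2, 'oppose', 'present')    # 'opposes'
pronoun(a2, 'poss')              # 'her'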
s4 = stance('abortion rights', 'pro', 'b')
s5 = stance('abortion rights', 'pro', 'c')
s6 = stance('abortion rights', 'con', 'a')
s7 = stance('abortion rights', 'con', 'b')
s8 = stance('abortion rights', 'con', 'c')
english(a1,s1)
James Bond is unwavering in his support of abortion rights
english(a1,s2)
James Bond firmly opposes abortion rights
english(a1,s3)
James Bond is unwavering in his support of free speech
english(a1,s4)
James Bond readily endorses abortion rights
english(a1,s5)
James Bond endorses abortion rights
english(a1,s6)
James Bond stresses his long-standing opposition to abortion rights
english(a1,s7)
James Bond strongly opposes abortion rights
english(a1,s8)
James Bond is against abortion rights
Now try with female pronouns.
english(a2,s1)
Mata Hari emphasizes her deep-rooted support of abortion rights
english(a2,s6)
Mata Hari is unwavering in her opposition to abortion rights