Wendell Wallach: Ethics, Law, and the Governance of Robots
J. Blackmon
Introduction
Alan Turing (1950) and The Turing Test
IBM’s Deep Blue beats world champion Garry Kasparov in chess (1997).
IBM’s Watson beats two champions on Jeopardy!, interpreting natural language text and providing answers without live access to the Internet.
Ray Kurzweil popularizes the concept of “the singularity”.
Introduction
Roboethics
Machine Ethics
Introduction
Roboethics is human-centered and focuses on the ethical use of robots in society.
How should (and shouldn’t) we use them? What new harms might they introduce? What are their legal implications?
Introduction
Machine Ethics concerns the development of robots and AI capable of making moral decisions.
For example, how should Google’s driverless car resolve a forced choice: hit a school bus or drive off the bridge?
Roboethics
The core ethical issues are subsumed within…
Five Interrelated Themes
Safety
Appropriate Use
Capabilities
Privacy and Property Rights
Responsibility and Liability
Roboethics
Safety: Are robots safe?
WW: Regarding robots developed so far, current product liability laws sufficiently cover this question.
Robot safety is clearly the legal responsibility of the companies that produce the robots and of the end users who adapt them.
Roboethics
Appropriate Use: Are robots appropriate for the applications for which they are designed?
Robots as sex toys
Robots as pets
Robots as companions
Robots as nannies and caregivers
Roboethics
Appropriate Use: Are robots appropriate for the applications for which they are designed?
Adequately advanced robots could meet some of the preferences and needs of people without any cost to a providing human (or animal).
Roboethics
Appropriate Use: Are robots appropriate for the applications for which they are designed?
Preferences and Needs: entertainment, companionship, care
No cost to a providing human (or animal)
Robots won’t feel harmed, bored, disgusted, mistreated, or lonely. (Recall the “three Ds”: Dull, Dirty, Dangerous.)
Roboethics
Appropriate Use: Are robots appropriate for the applications for which they are designed?
Replacing humans (and animals)
Will we lose crucial sensibilities (virtues?) or lessons?
Would using robots as caregivers and nannies be abusive to those no longer cared for by humans?
Would infants and children be emotionally or intellectually stunted?
Roboethics
Appropriate Use: Are robots appropriate for the applications for which they are designed?
Is it inappropriate or wrong to violently or otherwise abuse a robot? If so, why?
Assuming these are robots incapable of suffering, what would be wrong with it?
Test Cases for Consideration: animals, plants, video game characters, toys.
Roboethics
Capabilities: Can robots live up to the task for which they have been designed?
We tend to anthropomorphize robots, expecting them to have capabilities they don’t have.
Roboethics
Capabilities: Can robots live up to the task for which they have been designed?
Marketers will exploit this tendency to anthropomorphize.
Thus, we may be systematically duped.
Roboethics
Capabilities: Can robots live up to the task for which they have been designed?
WW: We need a professional association or regulatory commission to certify robots for particular uses. Yes, this will be costly, and it will have to adapt as the field of robotics progresses.
Roboethics
Privacy and Property Rights: How will robots affect the alleged loss/diminution of these rights?
A robot’s ability to sense and store data is crucial to its performance; that data is also valuable to the owner, and to a technician trying to debug, fix, or upgrade the robot.
But if this robot is used in the home or other private settings, the data will also be a record of (traditionally) private activity.
Roboethics
Privacy and Property Rights: How will robots affect the alleged loss/diminution of these rights?
Such a record could be subpoenaed.
It would be accessible for various criminal purposes.
Also, not mentioned by Wallach, “function creep” will make much of the record (often legally but unknowingly) available to third parties.
Roboethics
Responsibility and Liability: How do we assign moral and legal responsibility for a robot’s actions?
Robots are the product of “many hands”, and as such, individual developers of a component may have only a limited understanding of how it will interact with others, potentially increasing risks.
Deadlines and limited funding also contribute to limited understanding and increased risk.
Roboethics
Responsibility and Liability: How do we assign moral and legal responsibility for a robot’s actions?
The possibility of unknown risks may lead a company to delay the release of a robot.
But should this be the default standard?
Too many delays weaken productivity and innovation, and consequently our economy.
We could lose our competitive advantage to other countries.
Roboethics
Responsibility and Liability: How do we assign moral and legal responsibility for a robot’s actions?
“When an intelligent system fails, manufacturers will try to dilute or mitigate liability by stressing an appreciation for the complexity of the system and the difficulties in establishing who is responsible for the failure.”
Roboethics
Responsibility and Liability: How do we assign moral and legal responsibility for a robot’s actions?
To address such concerns, Wallach proposes Five Rules.
Roboethics
Five Rules
1. “The people who design, develop, or deploy a computing artifact are morally responsible for that artifact, and for the foreseeable effects of that artifact. This responsibility is shared with other people who design, develop, deploy, or knowingly use the artifact as part of a sociotechnical system.”
All of the creators and users of a robot are morally responsible for it and its foreseeable effects.
Roboethics
Five Rules
2. “The shared responsibility of computing artifacts is not a zero-sum game. The responsibility of an individual is not reduced simply because more people become involved in designing, developing, deploying or using the artifact. Instead, a person’s responsibility includes being answerable for the behaviors of the artifact and for the artifact’s effects after deployment, to the degree to which these effects are reasonably foreseeable by that person.”
One’s moral responsibility is not diminished by the fact that others were involved in creating or using the robot.
Roboethics
Five Rules
3. “People who knowingly use a particular computing artifact are morally responsible for that use.”
This is intended to include a “no willful ignorance” clause.
Roboethics
Five Rules
4. “People who knowingly design, develop, deploy, or use a computing artifact can do so responsibly only when they make a reasonable effort to take into account the sociotechnical systems in which the artifact is embedded.”
Without such an effort, they would be using the artifact irresponsibly.
Roboethics
Five Rules
5. “People who design, develop, deploy, promote, or evaluate a computing artifact should not explicitly or implicitly deceive users about the artifact or its foreseeable effects, or about the sociotechnical systems in which the artifact is embedded.”
Among other things, this ameliorates the effects of our tendency to anthropomorphize.
Machine Ethics
Machine Ethics
Operational Morality
Technology is developing along two dimensions: autonomy and sensitivity (to ethical considerations).
Hammer
Fuel Gauge, Fire Alarm
Thermostat
Machine Ethics
Operational Morality
A system is operationally moral if it follows prescribed actions programmed in by designers for all types of situations it will encounter.
Machine Ethics
Operational Morality
Operational morality requires that designers make ethical decisions to cover all situations.
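Operational morality, so defined, can be pictured as a lookup table of designer-chosen responses. The following is a minimal sketch under that reading; the situation names and responses are hypothetical illustrations, not drawn from Wallach:

```python
# Operational morality as a lookup table: every situation the system can
# encounter is mapped to a response chosen in advance by the designers.
# Situation names and responses here are hypothetical.
PRESET_RESPONSES = {
    "smoke_detected": "sound_alarm",
    "fuel_low": "show_warning_light",
    "room_too_cold": "turn_on_heat",
}

def respond(situation):
    """Return the designer-chosen action for a known situation.

    The system makes no ethical judgment of its own; if the designers
    did not anticipate the situation, it has no answer at all.
    """
    if situation not in PRESET_RESPONSES:
        raise KeyError("unanticipated situation: " + situation)
    return PRESET_RESPONSES[situation]
```

The failure mode is visible in the sketch: the system is only as moral as the designers' foresight, which is why operational morality requires covering all situations in advance.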
Machine Ethics
Robonanny
Children can put themselves (and others) in danger.
They can abuse themselves (or others).
They can ignore the robonanny’s commands to stop.
Should the robonanny intervene?
Machine Ethics
Robonanny
As Wallach notes, parents will be comforted by the ability to preset responses. Perhaps the robonanny will have levels of reprimand.
Manufacturers can then protect themselves from liability.
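Parent-preset responses with escalating levels of reprimand might be sketched as follows; the level names are invented for illustration and are not from Wallach:

```python
# Hypothetical parent-chosen escalation levels for a robonanny.
REPRIMAND_LEVELS = ["verbal_reminder", "firm_warning", "alert_parent"]

def reprimand(ignored_commands):
    """Escalate through the parent-chosen responses as the child keeps
    ignoring the robonanny, capped at the highest preset level."""
    level = min(ignored_commands, len(REPRIMAND_LEVELS) - 1)
    return REPRIMAND_LEVELS[level]
```

Because every response is preset by the parents, responsibility for the choice of responses stays with them, which is how the manufacturer limits its own liability.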
Machine Ethics
Functional Morality
A system is functionally moral if it evaluates situations according to an array of considerations, then uses rules, principles, or procedures to make an explicit judgment.
Top-Down vs. Bottom-Up Decision-Making
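A functionally moral system, on this definition, might be sketched as a weighted evaluation over considerations followed by an explicit judgment. This is a minimal illustration; the consideration names, weights, and scores are hypothetical, not Wallach's:

```python
def judge(action_scores, weights):
    """Explicit judgment: score each candidate action over an array of
    ethical considerations and pick the best, rather than looking up a
    canned response as an operationally moral system would."""
    def total(scores):
        return sum(weights[c] * v for c, v in scores.items())
    return max(action_scores, key=lambda a: total(action_scores[a]))

# Hypothetical forced-choice for a driverless car.
weights = {"harm_avoided": 2.0, "rules_kept": 1.0}
options = {
    "brake_hard": {"harm_avoided": 0.9, "rules_kept": 0.5},  # total 2.3
    "swerve": {"harm_avoided": 0.4, "rules_kept": 1.0},      # total 1.8
}
```

The rules-and-weights half is top-down; where the scores come from (learning, subsystems) is the bottom-up half, which the deck turns to next.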
Machine Ethics
Laws of Robotics
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.
Machine Ethics
Laws of Robotics
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
But there are (plenty of) cases in which it is logically impossible to follow this law.
Forced Choice Scenarios
Machine Ethics
Laws of Robotics
In some cases, you will either harm a human or allow a human to be harmed.
Stopping a violent crime in progress often requires harming the attacker.
If you don’t harm the attacker, you will be allowing the victim to be harmed.
So, the First Law of Robotics fails in light of this simple consideration.
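The failure can be shown in a few lines: in a forced-choice scenario every available action violates the First Law, so the rule yields no permissible action. A minimal sketch, with a hypothetical encoding of the violent-crime case:

```python
def violates_first_law(harms_human, allows_harm):
    """Asimov's First Law is violated if the robot injures a human OR,
    through inaction, allows a human to come to harm."""
    return harms_human or allows_harm

# Stopping a violent crime: restraining the attacker harms one human;
# standing by allows the victim to be harmed.
options = {
    "restrain_attacker": violates_first_law(harms_human=True, allows_harm=False),
    "do_nothing": violates_first_law(harms_human=False, allows_harm=True),
}
# Every available action violates the law, so the rule gives no guidance.
```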
Machine Ethics
Laws of Robotics
In some cases, you will either harm a human or allow a human to be harmed.
Google’s driverless car: Hit a school bus or swerve off a bridge?
Even sitting there idling: Get hit by a large oncoming truck or drive into pedestrians who are in the way of your only escape?
Machine Ethics
Laws of Robotics
In some cases, you will either harm a human or allow a human to be harmed.
The Famous (Infamous) Trolley Problem
Machine Ethics
Laws of Robotics
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
Conflicting orders given by different humans are logically impossible to follow simultaneously.
Machine Ethics
Laws of Robotics
A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.
Even this law is made logically impossible by any scenario in which the robot’s existence is at stake but there is no action which would protect it.
Machine Ethics
As Wallach sees it, Asimov showed that “a simple rule-based system of ethics will be ineffective”.
We need a combination of bottom-up and top-down approaches.
Machine Ethics
Top-Down and Bottom-Up Approaches
Top-down approaches are broad.
But it’s hard to apply them to the vast array of specific challenges.
Bottom-up approaches can integrate input from discrete subsystems.
But it’s hard to define the ethical goal for such a system and hard to integrate them.
Machine Ethics
Top-Down and Bottom-Up Approaches
We need both: the dynamic and flexible morality of the bottom-up approach subjected to the evaluation of the top-down approach.
Machine Ethics
Top-Down and Bottom-Up Approaches
We need to find a computational method for doing this.
We need to set boundaries/standards for evaluating an AMA’s (artificial moral agent’s) moral reasoning.
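One minimal way to picture such a computational method (a hypothetical sketch, not Wallach's own proposal): bottom-up components propose scored actions, and a top-down rule set vetoes any that breach explicit principles.

```python
def choose(proposals, forbidden):
    """Hybrid decision: take the best-scoring bottom-up proposal that
    survives the top-down constraints; fail loudly if none does.

    `proposals` maps candidate actions to learned desirability scores
    (bottom-up); `forbidden` is the set of actions ruled out by explicit
    principles (top-down). All names here are invented for illustration.
    """
    allowed = {a: s for a, s in proposals.items() if a not in forbidden}
    if not allowed:
        raise ValueError("no proposal satisfies the top-down constraints")
    return max(allowed, key=allowed.get)
```

The veto-then-select structure keeps the bottom-up half dynamic and flexible while the top-down half supplies the evaluative boundary, which is exactly the division of labor the previous slide calls for.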
Machine Ethics
The Future of AMAs and Wallach’s Proposal
Many challenges remain.
Will AMAs need to emulate all human faculties in order to function as adequate moral agents?
How do we determine whether an AMA deserves rights or should be held responsible for its actions?
Should we control the ability of robots to reproduce?
How will we protect ourselves against threats from AMAs that are more intelligent than we are?
Machine Ethics
The Future of AMAs and Wallach’s Proposal
Some form of monitoring is required in order to address a wide array of issues.
Health and Safety
Environmental Risks
Funding for R&D
Intellectual Property Rights
Public Perception of Risks & Benefits
Government Oversight
Competition with Industries Internationally
Machine Ethics
The Future of AMAs and Wallach’s Proposal
Governance Coordination Committee
Role: to monitor development of AMAs and flag issues or gaps in the existing policies, to coordinate the activities of stakeholders, and to “modulate the pace of development”.
The GCC would be required to avoid regulation where possible, favoring “soft governance” and industry oversight.
End