Slide 1
Human-Robot Teams
Chris Atkeson, CMU
With input from Florian Jentsch, UCF, and Jean Oh, CMU (Robotics Collaborative Technology Alliance (RCTA)), and Katia Sycara, CMU.

Slide 2
DARPA Robotics Challenge

Slide 3
DARPA Robotics Challenge: Lessons Learned
www.cs.cmu.edu/~cga/drc
Operator errors dominated failure causes
Software must detect and handle operator errors.
Safety software is a major source of fatal bugs. Example: a fall false alarm triggers a “safe” fall when the robot need not have fallen at all.
Operators want control at all levels
“Nudging” at various levels and in various coordinate systems useful.
Operators not particularly interested in autonomy.
More important to design robot to be easy to drive than to design for autonomy or autonomous performance.
Need to protect robot from operator.
Design for operation with multiple subsystems not working.
Autonomy Valley: It gets worse before it gets better.

Slide 4
“Go to traffic barrel behind building”
Toward Mobile Robots Reasoning Like Humans, Oh et al., AAAI 2015

Slide 5
Human-Robot Team Interaction (HRTI): Now and 2020s

Now: Humans with tools
Carefully engineered, brittle, erratic
Gets stuck, repeats errors, crashes or fails catastrophically
Robotic interface
Robotic (inscrutable) reasoning
Literal command following
Isolated mission and individual learning: wheel reinvented every day
Object recognition
Vision, speech
Information flood; feedback and mind-sharing are distractions
Individual sensor-based perception, crude sensor fusion
Individual attention, plans, uncoordinated thinking
HRI on lab prototypes, non-working systems

2020s: Human-“animal” teams
General, robust, predictable behavior
Metacognition: is something about to go wrong, or going wrong?
Human-like interface, natural language
Human-like reasoning, transparency
Objective/intent achievement
Training/mission loop: life-long shared team learning
Affordance recognition
+ audition, whole-body tactile, smell
Minimal task-relevant feedback highlighting unexpected events
Team perception: multi-modal, multi-observer sensor fusion
Group attention, planning, mind
HRI on functional state-of-the-art systems, universally applied

Slide 6
Human-Robot Teams:
Research Issues for the 2020s
Flexible task allocation
Adaptive autonomy
Adjustable autonomy
Mixed Initiative
Mutual trust
Mutual monitoring and supporting behaviors
Mutual predictability and intent inferencing
Establishment and maintenance of common ground
Ability to redirect and co-adapt
Human-robot teamwork metrics

Slide 7
HULC
XOS3

Slide 8
Physical Human-Robot Interaction (PHRI)
Wearable robots/exoskeletons
Now and 2020s
Assist/Carry load
Failure of XOS (power)
Failure of HULC (tiring)
Small number of examples of successful assistance of typical subjects
Small # of actuated DOF
Human actuates most DOF (with approx. 700 skeletal muscles)
Get-out-of-the-way control/neutral gear to provide freedom of movement
Minimize limb encumbrance
Predictable, not smart
What is load? Armor, Supplies, (Active) sensors, Additional (somewhat autonomous?) arms and hands, Additional legs, Weapons
Blurring of distinction between on-body robots and disconnected robots and vehicles.
Iron Man suit comes after simple suit, if at all.

Slide 9
Modeling individuals and teams
Every soldier has a phone/part-of-uniform/assistive-device/wearable-robot from basic training onwards.
Capture all experiences (Lifelog, …)
Capture all training (Classroom 2000, …)
Physical and skill models: What does the user know? What can the user do? Current state (fatigue, conditioning, ...)
Cognitive models: Know? Do? Current state (distraction, attention, …)
Motivational models (energy/fatigue, …)
Can we more effectively coordinate teams if we have teammate models?
Can we transfer knowledge across teammates? Across teams? Across time? Across situations? Between humans? Between robots? Between humans and robots?
Can we advise and/or coach individuals and teams? Personal trainer? Team trainer?

Slide 10
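The teammate-model questions above can be made concrete with a small sketch. This is a hypothetical illustration, not an existing system: the `TeammateModel` fields and the `allocate` heuristic (assign the task to the least-fatigued capable teammate) are assumptions chosen to mirror the slide’s physical/skill, cognitive, and motivational categories.

```python
from dataclasses import dataclass, field

@dataclass
class TeammateModel:
    """Hypothetical per-teammate model (human or robot)."""
    name: str
    skills: set = field(default_factory=set)     # what the teammate can do
    knowledge: set = field(default_factory=set)  # what the teammate knows
    fatigue: float = 0.0     # 0 = fresh, 1 = exhausted
    attention: float = 1.0   # 0 = fully distracted, 1 = fully attentive

    def can_perform(self, task: str) -> bool:
        # Crude check: has the skill and enough physical/attentional capacity.
        return task in self.skills and self.fatigue < 0.8 and self.attention > 0.3

def allocate(task: str, team: list):
    """Assign the task to the least-fatigued capable teammate, if any."""
    capable = [m for m in team if m.can_perform(task)]
    return min(capable, key=lambda m: m.fatigue) if capable else None
```

Such models would let a coordinator answer the slide’s questions operationally: task allocation queries the model, and coaching amounts to updating it over time.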
Smart/Natural/Trustworthy or Simple/Predictable/Symbiotic
Physical Human-Robot Interaction:
There is no existing model for how a “natural” exoskeleton should work. The field is dominated by the dream of an “invisible” suit that gives us superpowers but otherwise does not restrict us in any way (Iron Man). We can’t build that suit with existing technology.
An alternative design goal is a limited exoskeleton that provides predictable behavior on the short time scale, and adapts to the user over longer time scales (“symbiotic”, such as an intelligent/adaptive bicycle, skateboard, windsurfer, …). The human operator learns to provide rich behavior on top of the underlying simple and predictable system.
This debate applies to informational interaction: natural interface vs. symbiotic. Maybe a simple game-like interface is better in some cases?
This debate applies to how smart a robot should be. Maybe stupid is better in some cases?
This debate applies to how trustworthy a robot should be. Maybe a robot is trustworthy because its behavior is simple and predictable, not because it autonomously does a complex job well.
Animal model: should goal be doglike behavior?
Should natural, smart, and trustworthy be goals for the 2020s, or are symbiotic, simple, and predictable more realistic?

Slide 11
One size does not fit all
What are the tasks?
What are the environmental characteristics?
What robot abilities are needed?
What human abilities (physical, cognitive, and attentional) are needed?

Slide 12
Research Questions for Multi-Robot HRI
How can people control multiple robot teams of increasing size?
What density of robots can a human (or team of humans) control?
What kinds of command are possible for a particular density?
Key Constraint: Human attention is limited; attention is the budget
How many “things” can a human manage?
Robots
Tasks
Other people
Sources of information

Slide 13
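One classic way to quantify this attention budget is the neglect-time / interaction-time “fan-out” model from the HRI literature. The sketch below assumes that simple formulation: after `interaction_time` seconds of servicing, a robot can run unattended for `neglect_time` seconds.

```python
def fan_out(neglect_time: float, interaction_time: float) -> float:
    """Rough estimate of how many robots one operator can keep busy.

    While the operator spends interaction_time seconds servicing one
    robot, each other robot runs unattended for up to neglect_time
    seconds, so roughly neglect_time / interaction_time + 1 robots
    can be kept busy at once.
    """
    return neglect_time / interaction_time + 1.0
```

For example, a robot that can be neglected for 60 s and needs 15 s of attention per servicing yields a fan-out of 5. Real fan-out is lower once task switching and monitoring costs are included.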
Many forms of interaction are needed
Robots
must be individually controllable
must function autonomously for long periods
must be commandable as cooperating teams
must adapt to absence of human attention
must incorporate humans in autonomous plans
Key Idea: Look at HRI from the viewpoint of the cognitive complexity of the operator’s commands.
This framework allows systematic study of human control of multi-robot systems.

Slide 14
As size grows, complexity of command dominates.
[Figure: command complexity (O(1), O(m), O(>m)) vs. # of robots, with a cognitive limit.]

Slide 15
HRI for Robot Teams
O(1): swarms or centralized control; autonomously coordinating; expressing a goal; recognizing its satisfaction
O(n): fan-out models; independent robots; individual automation; scheduling attention; organizing multiple operators
O(>n): assisted coordination (Playbook, Machinetta); initiate, recognize, and modify plans

Slide 16
Slide 17
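The three command-complexity regimes can be sketched as simple cost models. This is an illustrative assumption, not a measured result; in particular, the n·log n coordination overhead used for O(>n) is invented for the example.

```python
import math

def command_cost(n_robots: int, regime: str) -> float:
    """Illustrative operator effort to command a team under each regime."""
    if regime == "O(1)":
        # One broadcast goal serves the whole swarm or centralized controller.
        return 1.0
    if regime == "O(n)":
        # One command per independent robot (fan-out style).
        return float(n_robots)
    if regime == "O(>n)":
        # Per-robot commands plus coordination overhead
        # (n log n is an assumed, illustrative overhead term).
        return n_robots + n_robots * math.log2(max(n_robots, 2))
    raise ValueError(f"unknown regime: {regime}")
```

Under any such model, O(1) command stays below the cognitive limit as teams grow, while O(n) and especially O(>n) command cross it, which is why assisted coordination matters.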
“Go to the left of the building.”
Robot receives natural language command from human.
Robot reports status back in natural language.
Semantic perception
Language grounding & planning

Slide 18
Slide 19
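A toy sketch of the grounding step is below. This is not the RCTA pipeline: the landmark map, the spatial-relation offsets, and the keyword matching are all invented for illustration; a real system grounds language against semantic perception, not a hand-written table.

```python
# Hypothetical landmark positions (x, y) in a shared world frame.
LANDMARKS = {"building": (10.0, 5.0), "traffic barrel": (4.0, 2.0)}
# Hypothetical spatial relations as (dx, dy) offsets from a landmark.
RELATIONS = {"left of": (-2.0, 0.0), "behind": (0.0, -2.0)}

def ground(command: str):
    """Map a natural-language command to a goal position, or None."""
    text = command.lower()
    for relation, (dx, dy) in RELATIONS.items():
        if relation not in text:
            continue
        for name, (x, y) in LANDMARKS.items():
            if name in text:
                # Goal = landmark position offset by the spatial relation.
                return (x + dx, y + dy)
    return None
```

For example, "Go to the left of the building." grounds to a point offset to the left of the building’s perceived position, which the planner then navigates to and reports on.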
[Figure: Autonomy Valley. Ease of use vs. autonomy (none to full): systematic errors at low autonomy give way to low-frequency random errors at high autonomy, with a valley in between.]