HUMANOID ROBOTICS


AS ROBOTS INCREASINGLY BECOME part of our everyday lives, they will serve as caretakers for the elderly and disabled, assistants in surgery and rehabilitation, and educational toys. But for this to happen, programming and control must become far more accessible. These challenges are especially relevant to humanoid robots, which are highly difficult to control yet most natural for interaction in human environments. As this article shows, we have used biologically inspired notions of behavior-based control to address these challenges at the University of Southern California's Interaction Lab, part of the USC Robotics Research Labs. By endowing robots with the ability to imitate, we can program and interact with them through human demonstration, a natural human–humanoid interface.

The human ability to imitate, to observe and repeat behaviors, is a poorly understood but powerful form of skill learning. Two fundamental open problems in imitation involve interpreting and understanding the observed behavior and integrating the visual perception and movement control systems to reconstruct what was observed. Our research has a similarly twofold goal: we are developing methods for segmenting human behavior as well as methods for structuring the motor system to support movement and imitation-learning capabilities. Our approach brings these two pursuits together much as the evolutionary process appears to have done. We structure the motor system into a collection of movement primitives, which then serve both to generate the humanoid's movement repertoire and to provide prediction and classification capabilities for visual perception and interpretation of movement. This way, what the humanoid can do helps it understand what it sees, and vice versa: the more it sees, the more it learns to do.

Our work over the last 15 years has focused on developing distributed, behavior-based control for robots and, most recently, humanoids.
Behavior-based control involves the design of controllers composed of behaviors. Behaviors are real-time processes that take inputs from sensors (such as vision, sonar, or infrared) or other behaviors and send output commands to effectors (wheels, motors, or arms) or other system behaviors. The controller, then, is a distributed network of such communicating, concurrently executed behaviors, typically with excellent real-time and scaling properties. The interaction of the behaviors through the environment produces the desired overall system performance.

[Pull quote: This behavior-based approach to structuring and controlling complex robotic systems uses imitation for interaction and learning. The use of basis behaviors, or primitives, lets these humanoid robots dance the Macarena, do aerobics, and throw balls.]

[1094-7167/00/$10.00 © 2000 IEEE. IEEE Intelligent Systems, July/August 2000.]

The inspiration for behavior-based control comes from biology, where natural systems are believed to be similarly organized, from spinal reflex movements up to more complex behaviors such as flocking and foraging. We have focused on applying these principles to high-dimensional, behavior-based systems such as those involved in controlling individual and group behavior of mobile and humanoid robots. In all problem domains, we have used basis behaviors, or primitives, to structure the controller.

Basis behaviors and primitives. Several methods for principled behavior design exist. In 1992, we introduced basis behaviors, a small set of necessary and sufficient behaviors that can be combined (through sequencing and superposition) for handling controller complexity. Basis behaviors are the primitives that serve as a substrate for control, representation, and learning in our behavior-based systems. In first demonstrating their effectiveness on groups of mobile robots, we used a basis set consisting of avoidance, following, homing, aggregation, and dispersion, which served to demonstrate higher-level group behaviors including flocking, foraging and collection, and herding. We also demonstrated how, given such a basis behavior set, a learning algorithm could improve behavior selection over time. Collections of behaviors are a natural representation for controlling mobile robots.
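The behavior-based scheme just described can be sketched in a few lines. This is a minimal illustration, not the Interaction Lab's code: the behavior names (`avoid`, `home`), the gains, and the vector-sum combination are all invented for the example; the point is only that each behavior independently maps sensor input to an effector command, and the controller is the network that combines them.

```python
# Hypothetical sketch of a behavior-based controller: each behavior maps
# sensor readings to an effector command; the controller runs them
# concurrently (here, in one loop) and combines their outputs.

def avoid(obstacle_vec, gain=1.0):
    """Push away from a nearby obstacle (vector from robot to obstacle)."""
    ox, oy = obstacle_vec
    dist_sq = ox * ox + oy * oy
    if dist_sq == 0:
        return (0.0, 0.0)
    return (-gain * ox / dist_sq, -gain * oy / dist_sq)

def home(goal_vec, gain=0.5):
    """Head toward the goal (vector from robot to goal)."""
    return (gain * goal_vec[0], gain * goal_vec[1])

def controller(sensors):
    """Combine the active behaviors' outputs into one effector command."""
    outputs = [avoid(sensors["obstacle"]), home(sensors["goal"])]
    vx = sum(o[0] for o in outputs)
    vy = sum(o[1] for o in outputs)
    return (vx, vy)

cmd = controller({"obstacle": (1.0, 0.0), "goal": (4.0, 3.0)})
```

Note that neither behavior knows about the other; the "desired overall performance" (reaching the goal without collisions) emerges from their interaction through the environment.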
But how can we use the same idea in the humanoid control domain, where the body's individual degrees of freedom are more coupled and constrained? For this, we have combined the notion of primitives with another line of evidence from neuroscience, mirror neurons, to structure perception and motor control into a general and robust system capable of a variety of skills and learning.

Humanoid control and imitation. Humanoid control is a complex problem, involving sensory and effector limitations and various forms of uncertainty. The more complex the system to be controlled, the more we must modularize the approach to make control viable and efficient. Humanoid agents and robots are highly complex; a human arm has seven degrees of freedom (DOF), the hand has 23, and the control of an actuated human spine is beyond current consideration. Yet humans display complex dynamic behaviors in real time and learn various motor skills throughout life, often through imitation.

Reinforcement learning, which lets a robot improve its behavior based on trial-and-error feedback, is very popular. However, reinforcement learning is slow, as the robot must repeatedly try various behaviors in different situations. It can also jeopardize the robot. In contrast, learning by demonstration lets the designer specify entire behaviors by demonstration, instead of using low-level programming.

In biological systems, imitation appears to be a complex learning mechanism that involves an intricate interaction between visual perception and motor control, both of which are complex in themselves. Although various animals demonstrate simple mimicry, only a very few species, including humans, chimps, and dolphins, are capable of so-called true imitation, which involves the ability to learn arbitrary new skills by observation. Imitation is thus a complex mechanism, one that has been implicated in social learning and even language evolution. The origins of such a mechanism appear to lie in an evolutionarily older neural structure, that is, the mirror system. Evidence from animals points to two neural structures we find of key relevance to imitation: spinal fields and mirror neurons.
Spinal fields, found in frogs and rats so far, code for complete primitive movements (or behaviors). Interestingly, they are additive: when multiple fields are stimulated, the resulting movement is a combination of the individual ones. Because the spine encodes a finite number of such fields, they represent a basis set of primitives; they were precisely the inspiration for our work on basis behaviors.

Investigators recently found neurons with so-called mirror properties in monkeys and humans. They appear to directly connect the visual structures mapping observed behaviors, such as reaching and grasping, to the motor structures that produce them. How many such movements the mirror system directly maps is still unknown, but the basic idea serves as rich inspiration.

We combine these two lines of evidence, spinal basis fields and mirror neurons, into a more sophisticated notion of behaviors, or perceptual-motor primitives. These let a complex system, such as a humanoid, recognize, reproduce, and learn motor skills. The primitives serve as the basis set for generating movements, but also as a vocabulary for classifying observed movements into executable ones. Thus, primitives can classify, predict, and act.

In our approach to imitation, the vision system continually matches any observed human movements onto its own set of motor primitives. The primitive, or combination of primitives, that best approximates the observed input also provides the best predictor of what the robot expects to observe next. This expectation facilitates visual segmentation and interpretation of the observed movement. Imitation, then, is a process of matching, classification, and prediction. Learning by imitation, in turn, creates new skills as novel sequences and superpositions of the matched and classified primitives. This approach lets the robot initially observe and imitate a skill, then perfect it through repetition, until the skill itself becomes a new primitive.
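The additive spinal-field evidence described above can be sketched as a superposition of force fields. This is an illustrative model only, under the assumption of simple linear spring-like fields; the equilibrium points, stiffnesses, and activation weights are invented, not taken from the biological data.

```python
# Sketch of the additive "spinal field" idea: each primitive is a force
# field pulling the limb toward an equilibrium point, and stimulating
# several fields at once sums the (weighted) fields.

def make_field(equilibrium, stiffness):
    """A primitive: force proportional to distance from its equilibrium."""
    ex, ey = equilibrium
    def field(x, y):
        return (stiffness * (ex - x), stiffness * (ey - y))
    return field

def superpose(fields, weights, x, y):
    """Co-stimulated fields combine additively, weighted by activation."""
    fx = sum(w * f(x, y)[0] for f, w in zip(fields, weights))
    fy = sum(w * f(x, y)[1] for f, w in zip(fields, weights))
    return (fx, fy)

reach_up = make_field((0.0, 1.0), 1.0)
reach_out = make_field((1.0, 0.0), 1.0)

# Equal activation of both fields yields a force toward a point between
# the two equilibria, analogous to co-stimulating two spinal fields.
force = superpose([reach_up, reach_out], [0.5, 0.5], 0.0, 0.0)
```

Because a finite set of such fields can be mixed with continuous weights, a small basis spans a large space of movements, which is the property the basis-behavior work borrows.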
In this way, the set of primitives can adapt over time, allowing for learning arbitrary new skills, that is, for "true" imitation.

Figure 1 shows the structure of our imitation architecture, including the visual perception and attention module, the classification module, and the motor primitives. A learning component allows adaptation both at the classification level, for finding a closer match to the observed behavior, and at the movement generation level, for optimizing the reproduced movement.

[Figure 1. Our imitation architecture, structured around perceptual-motor primitives: visual tracking of movement, perception, classification, and selection of action primitives.]

Movement primitives or behaviors are the foundation of both perception and motor control in our approach, and their design is driven by several constraints. On the one hand, the body imposes bottom-up limitations, based on its kinematic and dynamic properties. It also provides top-down constraints from the type of movements the system is expected to perform, because the primitives must be sufficient for the robot's entire movement repertoire. On the other hand, the visual system's structure and inputs influence the choice of primitives for mapping the various observed movements onto the robot's own executable repertoire.

To serve as a general and parsimonious basis set, the primitives encode groups or classes of stereotypical movements, invariant to exact position, rate of motion, size, and perspective. Thus, they represent the generic building blocks of motion. As an example, consider a primitive for reaching. Its most important parameter is the goal of the end point, that is, the hand or held object. It might be further parametrized by a default posture for the entire arm. Such a primitive lets a robot reach toward various goals within a multitude of tasks, from grasping objects and tools, to dancing, to writing and drawing. We used just such a reaching primitive in our experiments to reconstruct the popular dance, the Macarena. This stands in sharp contrast to the explicit planning approach, in which trajectories are computed whenever they are needed.
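A reaching primitive of the kind just described might look like the following sketch. It is hypothetical (the interpolation scheme, step count, and `posture` tag are assumptions, and a real system would pass the end-point trajectory through inverse kinematics), but it shows the key property: one generic primitive, re-parameterized per task by its goal.

```python
# Hedged sketch of a reaching primitive: parameterized by the end-point
# goal and a default arm posture, it emits a straight-line end-point
# trajectory rather than planning a trajectory from scratch each time.

def reach(start, goal, steps=4, posture="relaxed"):
    """Interpolate the end point from start to goal in `steps` steps."""
    sx, sy, sz = start
    gx, gy, gz = goal
    traj = []
    for i in range(1, steps + 1):
        t = i / steps
        traj.append((sx + t * (gx - sx),
                     sy + t * (gy - sy),
                     sz + t * (gz - sz)))
    return {"posture": posture, "trajectory": traj}

# The same primitive serves many tasks purely by re-parameterization:
grasp = reach((0, 0, 0), (0.4, 0.0, 0.2))       # reach to an object
dance_step = reach((0, 0, 0), (0.0, 0.3, 0.5))  # reach in a dance move
```

Looking up and parameterizing such a stereotypical trajectory is the alternative to the on-demand planning mentioned above.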
While fully general, on-demand trajectory generation is computationally expensive and potentially slow. In our approach, instead of computing trajectories over and over again, stereotypical trajectories are built in as well as learned, then looked up and parameterized for the specific task at hand. The notion of primitives capitalizes on the fact that it is simpler to learn and reuse an approximation of the inverse kinematics for specific areas in the workspace, or for a specific trajectory, than it is to compute them anew each time.

What constitutes a good set of primitives? We have experimented with two types: innate and learned. Innate primitives are user-selected and preprogrammed. We have demonstrated the effectiveness of a basis set consisting of

• discrete straight-line movements of subsets of degrees of freedom, accounting for reaching-type motions;
• continuous oscillatory movements of subsets of DOFs, accounting for repetitive motions; and
• postures, accounting for large subsets of the body's DOFs.

We also learn primitives directly from human movement data. We gather different types of such data, using the following methods: visual markers (using a vision-based motion tracking system), magnetic markers on the arm (using the FastTrak system), and full-body joint angles (using the Sarcos Sensuit). We first reduce the dimensionality of the movement data by employing principal components analysis, wavelet compression, and correlation across multiple DOFs. Next, we use clustering techniques to extract patterns of similar movements in the data. These clusters or patterns form the basis for the primitives; the movements in the clusters are generalized and parameterized, to result in primitives for producing a variety of similar movements.

Visual classification into primitives. Visual attention is driven by the primitives and is a key component of the imitation architecture. Because attention is limited, the system must select the visual features that are most relevant to the given imitation task. Deciding what is relevant in a given demonstration is a challenging problem. Our previous work showed that people watching videos of arm movements displayed no difference in attention whether they were just watching or intending to subsequently imitate.
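The pipeline described above for learning primitives from movement data, dimensionality reduction followed by clustering, can be sketched as follows. This is a toy stand-in, not the published method: it uses plain PCA via SVD and a minimal k-means on synthetic data, whereas the actual system also employs wavelet compression and cross-DOF correlation.

```python
import numpy as np

def pca_reduce(X, k):
    """Project movement vectors onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(Z, k, iters=20, seed=0):
    """Tiny k-means; each cluster is a candidate primitive prototype."""
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(axis=0)
    return labels, centers

# Toy "movement" data: two groups of short flattened trajectories.
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (10, 6)),
               np.random.default_rng(2).normal(1.0, 0.1, (10, 6))])
Z = pca_reduce(X, 2)           # dimensionality reduction
labels, protos = kmeans(Z, 2)  # clusters become primitive prototypes
```

In the real system, each cluster's movements would then be generalized and parameterized to yield an executable primitive, as the text describes.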
In both cases, they fixated at the end point, the hand. However, segmenting arbitrary observed movement into a useful set of task-independent categories or primitives is impossible. Fortunately, this is not necessary, because the imitation system is targeted at mapping movements observed in bodies similar to the observer's onto the observer's own motor repertoire. Accordingly, the mirror system is sensitive to the biological motion of similar bodies (for example, monkeys respond to monkey and human movements). Furthermore, although human(oid) bodies are potentially capable of vast movement repertoires, the typical, everyday movement spectrum is not nearly as large.

Consequently, we can effectively bias the visual perception mechanism toward recognizing movements that it can execute, especially those movements it performs most frequently. The motor control system's structure, and its underlying set of movement primitives, provides key constraints for visual movement recognition and classification. Our primitive classifier uses the primitives' descriptions to segment a given motion based on the movement data. In the experiments described below, we used end-point data for both arms as input for the vector quantization-based classifier. Again, a key issue in classification is representing the primitives such that they account for significant invariances, such as position, rotation, and scaling.

Our classification approach forms the original motion into a vector of relative end-point movements between successive frames, then smoothes and normalizes it. At the classification level, we ignore all other information about the movement, such as global position and arm configuration, enabling a small set of high-level primitive representations instead of a potentially prohibitively large set of specific movements. The information needed for correct imitation serves for parameterizing the selected primitives at the level of movement reconstruction and execution. To simplify matching, our approach describes the primitives themselves in the same normalized form.
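The preprocessing just described, relative end-point movements, then smoothing, then normalization, can be sketched as follows. The particular smoothing window and length-normalization are illustrative assumptions; the point is that global position drops out in the differencing step and overall size drops out in the normalization, giving the invariances the classifier needs.

```python
# Sketch of the classifier's preprocessing: a hand trajectory becomes a
# vector of frame-to-frame relative movements, smoothed and normalized.

def to_relative(traj):
    """Frame-to-frame displacement vectors (position invariance)."""
    return [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(traj, traj[1:])]

def smooth(deltas):
    """Average each displacement with its immediate neighbors."""
    out = []
    for i in range(len(deltas)):
        window = deltas[max(0, i - 1):i + 2]
        out.append((sum(d[0] for d in window) / len(window),
                    sum(d[1] for d in window) / len(window)))
    return out

def normalize(deltas):
    """Scale so total path length is 1 (size invariance)."""
    length = sum((dx * dx + dy * dy) ** 0.5 for dx, dy in deltas)
    return [(dx / length, dy / length) for dx, dy in deltas] if length else deltas

# Two reaches differing only in start position and size map to the
# same normalized representation.
a = normalize(smooth(to_relative([(0, 0), (1, 0), (2, 0)])))
b = normalize(smooth(to_relative([(5, 5), (7, 5), (9, 5)])))
```

Describing the primitives themselves in this same normalized form reduces matching to a direct comparison of such vectors.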
For each time step of the observed motion, we compare a fixed-horizon window to every primitive and select the best match. Adjacent windows with identical classifications connect to form continuous segments. For any segments that fail to match the given primitives, our approach uses the reaching primitive to move the end point along the observed trajectory. Because the window is of fixed size, the perception of a distinct match of a primitive applies only for the given timescale. We are currently working on addressing classification at multiple timescales.

To validate our approach, we implemented various examples of imitation, including reaching, ball throwing, aerobics moves, and dance, all on humanoid testbeds, taking several types of human movement data as input.

Using human movement data. We used several types of human movement data as input to test our approach to imitation, including visual motion tracking, Cartesian magnetic markers, and joint angle data. For vision-based motion tracking, we developed a motion-tracking system that extracts features from a video stream. The system is simple, since it is not intended as a complete commercial version. It relies on a constrained view of the body being imitated. This approach greatly simplifies finding the initial match between the features in the visual image and the observed body. The match enables tracking of the body over time, allowing for fast computation and updating of current limb position, as well as simple prediction of future position, used in the tracking process. Future work will address how the primitives themselves also provide further predictive capability for the visual tracking. Figure 2 shows a snapshot of the system's output.

We also used 3D magnetic marker data from the human arm, gathered from subjects imitating videos of arm movements while wearing FastTrak markers for position tracking. (These data were gathered at the Center for the Study of Neural Models of Behavior at the University of Rochester.) We used four markers: near the shoulder, the elbow, the wrist, and the start of the middle finger.
The movement data resulting from this experiment serve as input into our imitation system, as well as for automatically learning the primitives.

Finally, we used full-body joint angle data gathered with the Sarcos Sensuit, a wearable exoskeleton that simultaneously records the joint positions of 35 DOFs: the shoulders, elbows, wrists, hips, knees, ankles, and waist. (These data were gathered in collaboration with the ATR Dynamic Brain Project in Kyoto, Japan.) We are currently focusing on reproducing the upper-body movements from those data on our testbeds, described next.

To properly validate our approach to humanoid motor control and imitation, we use different experimental testbeds. Most of our work so far has been done on Adonis (developed at the Georgia Institute of Technology Animation Lab), a 3D rigid-body dynamic simulation built using human density estimates and the motion solver SD/Fast. The simulation acts under gravity and accepts external forces from the environment. The static ground contact removes the need for explicit balance control. We have added joint limits and self-collision detection. Adonis consists of eight rigid links connected with revolute joints of one and three DOFs, totaling 20 DOFs (Figure 3), but most of our experiments use the 13 DOFs of Adonis's upper body and currently do not control the lower body.

[Figure 2. A snapshot of the output of our vision-based motion-tracking system.]

[Figure 3. The Adonis 20-degree-of-freedom full-body dynamic simulation.]

We also use a pair of Cosimir 3D humanoid avatars (developed at the University of Dortmund), which provide graphical simulations with 65 DOFs each (Figure 4). Unlike Adonis, these avatars have actuated fingers, allowing experiments with fine motion control. However, the avatars are not directly equipped with dynamics, so we can experiment with different models of mechanics, muscle models, and so forth. As with Adonis, we do not address balance with these avatars, but anchor them at the hips.
The avatars serve to imitate each other or an external demonstrator.

As the project progresses, we plan to use physical humanoid robots as testbeds for evaluating our imitation architecture. The NASA Robonaut is one candidate, through collaboration with the Johnson Space Center. The Sarcos full-body humanoid robot is another, through collaboration with the ATR Dynamic Brain Project. Both robots are highly complex, built to approximate human body structure as faithfully as practically possible, and feature many degrees of freedom.

[Figure 4. The Cosimir 65-degree-of-freedom full-body avatars.]

[Figure 5. Snapshots of Adonis dancing the Macarena.]

OUR APPROACH TO HUMANOID motor control is based on the use of a set of movement primitives. We have experimented with different types of such primitives on different humanoid simulation testbeds. Specifically, we have implemented two versions of the spinal fields found in frogs. The first controlled individual joints of Adonis's arms; the second used a biologically inspired approach, impedance control, which operates in the external coordinate frame, in our case that of each of Adonis's hands, pulling the arm along. We also used a default posture for the arm, which provided natural-appearing, whole-arm movements.

We tested both types of primitives on a sequential motor task, dancing the Macarena (Figure 5). Both proved effective, but each had limitations for particular types of movements. This led us to propose and explore a combination approach, where multiple types of primitives can be sequenced and combined. For example, we constructed a basis behavior set consisting of three types of primitives:

• discrete straight-line movements;
• continuous oscillatory movements, using piece-wise linear segments and impedance control; and
• postures, using PD-servos to directly control the joints.

We also added a fourth type of primitive, for avoidance, implemented as a repulsive vector field. The continuously active fourth primitive combined with whatever other primitive was executing to prevent any collisions between body parts.
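The always-on avoidance primitive can be sketched as a repulsive vector field summed with the active primitive's command. The inverse-square falloff, the gain, and the 2D setting are illustrative assumptions, not the published controller; what matters is that avoidance is never selected or deselected, it is simply always added in.

```python
# Sketch of the continuously active avoidance primitive: a repulsive
# field around each body part, summed with the currently executing
# movement primitive's command.

def repulse(point, obstacle, gain=0.01):
    """Repulsive field pushing `point` away from a nearby body part."""
    dx, dy = point[0] - obstacle[0], point[1] - obstacle[1]
    d2 = dx * dx + dy * dy
    if d2 == 0:
        return (0.0, 0.0)
    return (gain * dx / d2, gain * dy / d2)

def combined_command(primitive_cmd, hand, body_parts):
    """Active primitive's command plus all repulsive contributions."""
    cx, cy = primitive_cmd
    for part in body_parts:
        rx, ry = repulse(hand, part)
        cx, cy = cx + rx, cy + ry
    return (cx, cy)

# A cross-body reach: the raw primitive command would drive the hand
# toward the torso; the repulsive field deflects and slows it nearby.
cmd = combined_command((-0.2, 0.0), hand=(0.1, 0.0), body_parts=[(0.0, 0.0)])
```

This is the same additive-field superposition used for the movement primitives themselves, applied with a repulsive rather than attractive field.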
In the Macarena, for example, this is necessary for arm movements that cross the body.

These results validated the notion of primitives and served as a basis for our imitation architecture. Primitives provide higher-level movement descriptions; metric information serves to parameterize those descriptions for generating executable movement. This allows a small number of general primitives to represent a large class of different movements, such as reaches to various destinations on and around the body.

Our goal is not to achieve perfect, completely precise, high-fidelity imitation. While that might be possible through the use of exact quantitative measurements of the observed movement using signal-processing techniques, it is not what happens in imitation in nature, nor is it most useful for our main goals: natural interaction and programming of robots. For those purposes, we aim for an approximation of the observed behavior, one that allows any necessary freedom of execution but achieves the task and effectively communicates the behavior. This differs from task-level imitation, which only achieves the demonstration's goal but does not imitate the behaviors involved. That problem has been studied in assembly robotics, where investigators used a robotic arm to record, segment, interpret, and reproduce a demonstrated assembly. While that work focused on visual segmentation into subgoals, we aim for a more biologically motivated model of imitation: a system capable of imitating the observed behavior, that is, the process that brings the subgoals about, with sufficient, but not perfect, accuracy.

Because our current movement primitives represent whole-arm movements, our visual processing does not require higher-fidelity data that include the position of the elbow and other markers. In the experiments described here, the position of the hand over time sufficed for effective imitation. The system segments the hand trajectory over time using the classifier, which, at each point in time, matches the expected output of each of the primitives with the observed input and selects the best match. Consecutive matches of the same primitive indicate a higher confidence in the match.
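The segmentation step just described, matching each window of the observed motion against every primitive, merging consecutive identical matches, and falling back to reaching for unmatched stretches, can be sketched as follows. The template-matching score, threshold, and primitive models here are invented stand-ins; the actual classifier is vector-quantization based.

```python
# Sketch of the fixed-horizon window classifier and segment merging.

def classify_window(window, primitives, threshold=0.5):
    """Return the best-matching primitive name, or None if all fit badly."""
    best, best_err = None, float("inf")
    for name, template in primitives.items():
        err = sum(abs(w - t) for w, t in zip(window, template)) / len(window)
        if err < best_err:
            best, best_err = name, err
    return best if best_err <= threshold else None

def segment(motion, primitives, horizon=3):
    """Slide a fixed-size window, then merge identical adjacent labels."""
    labels = [classify_window(motion[i:i + horizon], primitives)
              for i in range(0, len(motion) - horizon + 1, horizon)]
    segments = []
    for lab in labels:
        lab = lab or "reach"          # fallback for unmatched windows
        if segments and segments[-1][0] == lab:
            segments[-1][1] += 1      # longer run -> higher confidence
        else:
            segments.append([lab, 1])
    return segments

primitives = {"up": [1.0, 1.0, 1.0], "still": [0.0, 0.0, 0.0]}
motion = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
segs = segment(motion, primitives)
```

The run length attached to each label plays the role of the match confidence mentioned above, and the fixed window size is why each match holds only at one timescale.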
The classification output is a sequence of primitives and their associated parameters. These then go to the motor control system and activate the primitives in turn (and avoidance in parallel) to reconstruct the observed behavior.

In validating the imitation architecture, we selected tasks from athletics, dancing, and everyday movement, performed by a human demonstrator in front of the vision system. Figure 6 shows an example of the system performing three imitated movements: a throw, a twist, and a raising of the arms. We are currently pursuing methods for evaluating the quality of the resulting imitation, both in terms of the produced end-point trajectories and the overall qualitative appearance.

[Figure 6. A selection of imitation results showing (a) a throw, (b) twists, and (c) arm raises, with plots over time of the classified segments, labeled by Circle and Arc primitives.]

Acknowledgments
The work described here is funded by NSF Career Grant IRI-962423. The author thanks the members of her imitation team: Chad Jenkins, Aude Billard, Stefan Weber, and Ajo Fod.

References
1. M.J. Mataric, "Sensory-Motor Primitives as a Basis for Imitation: Linking Perception to Action," Imitation in Animals and Artifacts, C. Nehaniv and K. Dautenhahn, eds., MIT Press, Cambridge, Mass., 2000.
2. M.J. Mataric, V.B. Zordan, and M. Williamson, "Making Complex Articulated Agents Dance: An Analysis of Control Methods Drawn from Robotics, Animation, and Biology," Autonomous Agents and Multiagent Systems, Vol. 2, No. 1, Mar. 1999, pp. 23–44.
3. R.C. Arkin, Behavior-Based Robotics, MIT Press, Cambridge, Mass., 1998.
4. E. Bizzi, F.A. Mussa-Ivaldi, and S. Giszter, "Computations Underlying the Execution of Movement: A Biological Perspective," Science, Vol. 253, July 1991, pp. 287–291.
5. M.J. Mataric, "Designing and Understanding Adaptive Group Behavior," Adaptive Behavior, Vol. 4, No. 1, Dec. 1995, pp. 50–81.
6. G. Rizzolatti et al., "Premotor Cortex and the Recognition of Motor Actions," Cognitive Brain Research, Vol. 3, 1996, pp. 131–141.
7. R.W. Byrne and A.E. Russon, "Learning by Imitation: A Hierarchical Approach," Behavioral and Brain Sciences, Vol. 6, No. 3, 1998.
8. M. Arbib, "The Mirror System, Imitation, and the Evolution of Language," Imitation in Natural and Artificial Systems, K. Dautenhahn and C. Nehaniv, eds., MIT Press, Cambridge, Mass., 2000.
9. A. Billard and M.J. Mataric, "Learning Motor Skills by Imitation: A Biologically Inspired Robotic Model," Proc. Autonomous Agents, ACM Press, New York, 2000.
10. M.J. Mataric and M. Pomplun, "Fixation Behavior in Observation and Imitation of Human Movement," Cognitive Brain Research, Vol. 7, No. 2, 1998, pp. 191–202.
11. S. Arya and D.M. Mount, "Algorithms for Fast Vector Quantization," Proc. Data Compression Conf., IEEE Press, Piscataway, N.J., 1993, pp. 381–390.
12. S. Weber, M.J. Mataric, and O.C. Jenkins, "…Motor Primitives," Proc. Autonomous Agents, ACM Press, New York, 2000, pp. 136–137.
13. O.C. Jenkins and M.J. Mataric, "Primitive-Based Movement Classification for Humanoid Imitation," Tech. Report IRIS-00-385, Univ. of Southern California, Inst. for Robotics and Intelligent Systems, 2000.
14. N. Hogan, "Impedance Control: An Approach to Manipulation," J. Dynamic Systems, Measurement, and Control, Vol. 107, Mar. 1985.
15. Y. Kuniyoshi, M. Inaba, and H. Inoue, "Learning by Watching: Extracting Reusable Task Knowledge from Visual Observation of Human Performance," IEEE Trans. Robotics and Automation, Vol. 10, No. 6, 1994, pp. 799–822.

Maja J. Mataric is a faculty member in the Computer Science Program at the University of Southern California, the director of the USC Robotics Research Labs, and an associate director of the Institute for Robotics and Intelligent Systems. She performs research into behavior-based control and learning. She received her BS in computer science from the University of Kansas, her MS in computer science from MIT, and her PhD in computer science and artificial intelligence, also from MIT.
She has received the NSF Career Award, the MIT Technology Review TR100 Innovation Award, and the IEEE Early Academic Career Award in Robotics and Automation. Contact her at the Computer Science Dept., Univ. of Southern California, 941 W. 37th Place, SAL 228, Mailcode 0781, Los Angeles, CA.