Cascaded Pose Regression
Piotr Dollár, Peter Welinder
Abstract. We present a fast and accurate algorithm for computing the 2D pose of objects in images, called cascaded pose regression (CPR). CPR progressively refines a loosely specified initial guess, where each refinement is carried out by a [...]


Figure 2. Pose-indexed features. Left: Mice described by a 1-part pose model. Right: 3-part pose model of zebrafish. The yellow crosses represent the coordinate system defined by the current estimate of the pose of the object (which does not have to be centered on the object). The colored arrows show control points defined relative to the pose coordinates. The weakly pose-invariant features used in this paper were defined as the difference in pixel values at two control points relative to the pose.

...depends only on the object o and the relative pose θ_1^{-1} ∘ θ_2 between the input pose θ_1 and the true pose θ_2. In other words, h is weakly invariant if its output is constant given a consistent (not necessarily correct) estimate of the pose. Composing weakly invariant features using standard operations results in features that are themselves weakly invariant.

Note that invariance as defined above is a much weaker requirement than general pose invariance, which could be stated as follows: h(G(o, θ_1)) = h(G(o, θ_2)). Designing invariant functions that satisfy the latter definition is exceedingly difficult, while our definition requires invariance only when given a consistent estimate of the pose.

It is also worth comparing (2) to the `stationarity assumption' introduced in [16]. Using the notation defined here, a non-probabilistic form of the stationarity assumption can be written as h(θ_1, G(o, θ_1)) = h(θ_2, G(o, θ_2)). In other words, it states that h(θ, G(o, θ)) is constant regardless of the value of θ. Observe that (2) is a very natural, albeit stronger, generalization of the stationarity assumption. Our weak invariance assumption justifies the derivations that follow and allows us to prove strong convergence rates for the resulting algorithm. Under ideal conditions we can prove (2) holds; however, as in [16], we observe that in practice (2) will frequently be violated. Nevertheless, as we shall demonstrate, the algorithm derived using the weak invariance assumption is very effective in practice.

Pose-Indexed Control Point Features. In all our experiments we use the extremely simple and fast to compute control point features [22, 24]. In our implementation, each control point feature is computed as the difference of two image pixels at predefined image locations. More specifically, each feature h_{p1,p2} is defined by two image locations p1 and p2 and is evaluated by computing h_{p1,p2}(I) = I(p1) - I(p2), where I(p) denotes the grayscale value of image I at location p.

Input: Image I, initial pose θ_0
 1: for t = 1 to T do
 2:   x = h_t(θ_{t-1}, I)     // compute features
 3:   δ = R_t(x)              // evaluate regressor
 4:   θ_t = θ_{t-1} ∘ δ       // update pose
 5: end for
 6: Output θ_T
Figure 3. Evaluation of Cascaded Pose Regression.

Aside from their speed and surprising effectiveness in real applications [24], the advantage of the above features is that they are straightforward to index by pose. For example, suppose object pose is specified by a translation, rotation, scale and aspect ratio (or some subset of these parameters). For each pose θ we can define an associated 3x3 homography matrix H_θ, express p in homogeneous coordinates, and define h_{p1,p2}(θ, I) = I(H_θ p1) - I(H_θ p2). We can easily extend this approach to articulated objects, where each part has a rotation, scale and aspect ratio, by associating a separate homography matrix with each part. See Figure 2.

In the appendix (available on the project website) we show that h(θ, I) defined in the manner above is weakly invariant under certain assumptions. In general, however, designing weakly invariant pose-indexed features can be quite challenging and requires careful consideration when applying our proposed framework to novel problems.

2.2. Cascaded Pose Regression

We now describe the evaluation and training procedures for a cascaded pose regressor R = (R_1, ..., R_T), shown in Figures 3 and 4, respectively. We will train a cascaded regressor R = (R_1, ..., R_T) such that, given an input pose θ_0, R(θ_0, I) is evaluated by computing:

  θ_t = θ_{t-1} ∘ R_t(h_t(θ_{t-1}, I)),   (3)

for t = 1 ... T, and finally outputting θ_T (see Figure 3). Each component regressor R_t is trained to attempt to minimize the difference between the true pose and the pose computed by the previous components, using (pose-indexed) features h_t. Our goal is to optimize the following loss:

  L = Σ_{i=1}^N d(θ_{T,i}, θ_i).   (4)

We begin by computing θ̄_0 = argmin_θ Σ_i d(θ, θ_i) and setting θ_{0,i} = θ̄_0 for each i. θ̄_0 is the single pose estimate that gives the lowest training error without relying on any component regressors.
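The evaluation loop of Figure 3 can be sketched in Python as follows. This is an illustrative sketch, not the authors' Matlab code: the pose is treated as an opaque value, and the per-stage features h_t, regressors R_t, and the composition operator ∘ are passed in as plain callables (all names here are ours).

```python
def evaluate_cpr(image, theta0, regressors, features, compose):
    """Evaluation of CPR (Figure 3): starting from an initial pose theta0,
    each stage t computes pose-indexed features at the current estimate,
    applies its regressor, and composes the predicted update into the pose."""
    theta = theta0
    for R_t, h_t in zip(regressors, features):
        x = h_t(theta, image)          # compute pose-indexed features
        delta = R_t(x)                 # evaluate stage regressor
        theta = compose(theta, delta)  # theta_t = theta_{t-1} o delta
    return theta
```

For instance, with a toy 1-D translational "pose" and regressors that predict half the remaining offset, the estimate moves geometrically toward the target, mirroring the cascade's exponential convergence.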
We now describe the procedure for training R_t given R_1, ..., R_{t-1}.

Input: Data (I_i, θ_i) for i = 1 ... N
 1: θ̄_0 = argmin_θ Σ_i d(θ, θ_i)
 2: θ_{0,i} = θ̄_0 for i = 1 ... N
 3: for t = 1 to T do
 4:   x_i = h_t(θ_{t-1,i}, I_i)
 5:   e_i = θ_{t-1,i}^{-1} ∘ θ_i
 6:   R_t = argmin_R Σ_i d(R(x_i), e_i)
 7:   θ_{t,i} = θ_{t-1,i} ∘ R_t(x_i)
 8:   ε_t = Σ_i d(θ_{t,i}, θ_i) / Σ_i d(θ_{t-1,i}, θ_i)
 9:   if ε_t ≥ 1 stop
10: end for
11: Output R = (R_1, ..., R_T)
Figure 4. Training for Cascaded Pose Regression.

In each phase t, we begin training by randomly generating the pose-indexed features h_t and computing x_i = h_t(θ_{t-1,i}, I_i) for each training example I_i with the previous pose estimate θ_{t-1,i}. Our goal is to learn a regressor R_t such that θ_{t,i} = θ_{t-1,i} ∘ R_t(x_i) minimizes the loss in (4). After some manipulation, we can write this as:

  R_t = argmin_R Σ_i d(R(x_i), e_i),   (5)

where e_i = θ_{t-1,i}^{-1} ∘ θ_i. We can solve for R_t using standard regression techniques; moreover, since R_t needs to only slightly reduce the error, we can train separate single-variate regressors for each coordinate of e and simply keep the best one. In this work we rely on random regression ferns, described below. After training R_t, we apply (3) to compute θ_{t,i} for use in the next phase of training. If the regressor R_t was unable to reduce the error, training stops. Let:

  ε_t = Σ_i d(θ_{t,i}, θ_i) / Σ_i d(θ_{t-1,i}, θ_i).   (6)

If ε_t ≥ 1, training stops; otherwise we continue training for T phases or until the error drops below a certain target value. The full training procedure is given in Figure 4.

Random Fern Regressors. Encouraged by the success of random ferns for classification [24] and random forests for regression [6], we train a random fern regressor at each stage in the cascade. A fern regressor takes an input vector x_i ∈ R^F and produces an output y_i ∈ R. It is created by randomly picking S elements from the F-dimensional feature vector with replacement, and then sampling S thresholds randomly. The jth selected element of x_i is compared to the jth threshold to create a binary signature of length S. Thus, each x_i ends up in one of 2^S bins. The y prediction for a bin is the mean of the y_i's of the training examples that fall into the bin. At each stage in the cascade, the best fern in terms of training error is picked from a pool of R randomly generated ferns.

Pose Clustering. Depending on the initialization of the pose before applying CPR, the algorithm sometimes fails to estimate the correct pose. However, more often than not, just re-running CPR with a different initial pose yields a reasonable estimate. Thus, we used a simple "pose clustering" heuristic to improve the performance of the algorithm. For each image, CPR was run K times with different random initial poses. Then, after all K runs, the pose in the highest density region of pose-space was picked as the final prediction of the algorithm. We used a simple Parzen window approach with a Gaussian kernel of width 1 (using the normalized distances described in Section 3) to estimate the density at each pose as compared to the other K-1 poses.

Convergence Rate. We prove that our iterative scheme will converge under fairly weak assumptions and, furthermore, show that the rate of convergence is exponential in the weak errors of the component regressors. The proof is similar in spirit to the proofs for convergence of boosted regressors [17, 10], but requires a weaker notion of weak learnability. Here we highlight the main findings (see the appendix on the project web page for the full proof).

Let h represent a set of standard (not pose-indexed) features. We define the relative error of a regressor R on a dataset (I_i, θ_i) as ε = Σ_i d(R(h(I_i)), θ_i) / Σ_i d(θ̄, θ_i), for the θ̄ which minimizes the denominator. Thus, if R performs better than returning the single uniform prediction θ̄, then ε < 1. Fairly straightforward convergence proofs for both CPR and boosted regression [17] require that we have access to a weak learner that, given a dataset (I_i, θ_i), can output a regressor with relative error ε ≤ γ for some γ < 1. Under these conditions, the rate of convergence for both CPR and boosted regression is given by ε_T ≤ γ^T.

The primary difference between the convergence rates of CPR and boosted regression lies in the strength of the weak learnability assumption. Let I_i = G(o_i, θ_{0,i}) for some unknown o_i. In CPR, we need access to a weak learner that can output a regressor with ε ≤ γ on a training set (I_i, θ_i) only if θ_i = θ_{0,i}. For boosted regression, we need access to a weak learner that can output a regressor with ε ≤ γ on arbitrary training sets (I_i, θ_i), where θ_i need not equal θ_{0,i}. Thus, although both CPR and boosted regression converge exponentially at some rate γ^T as long as the weak learnability assumption is satisfied, in practice the base of the exponent γ is much lower for CPR.
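The random fern regressor described above can be sketched as follows. This is a minimal illustration under our own assumptions: feature values are taken to be normalized to [-1, 1] so thresholds are drawn uniformly from that range, a dict of bin means stands in for the 2^S lookup table, and empty bins predict 0.

```python
import random

class FernRegressor:
    """Random fern: S (feature index, threshold) pairs map each input to
    one of 2^S bins; each bin predicts the mean target of the training
    examples that fell into it."""

    def __init__(self, n_features, depth, rng=random):
        # S feature indices picked with replacement, S random thresholds
        # (uniform in [-1, 1]: assumes normalized feature values).
        self.idx = [rng.randrange(n_features) for _ in range(depth)]
        self.thr = [rng.uniform(-1.0, 1.0) for _ in range(depth)]
        self.bins = {}

    def _code(self, x):
        # Binary signature of length S, read as an integer bin index.
        code = 0
        for j, t in zip(self.idx, self.thr):
            code = (code << 1) | (x[j] > t)
        return code

    def fit(self, X, y):
        sums, counts = {}, {}
        for xi, yi in zip(X, y):
            c = self._code(xi)
            sums[c] = sums.get(c, 0.0) + yi
            counts[c] = counts.get(c, 0) + 1
        self.bins = {c: sums[c] / counts[c] for c in sums}
        return self

    def predict(self, x):
        return self.bins.get(self._code(x), 0.0)  # empty bin: predict 0
```

In the paper's setup, a pool of R such ferns is generated per stage and the one with the lowest training error is kept.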
2.3. Data Augmentation

Utilizing pose-indexed features, we can artificially simulate a large amount of data from the N training samples by simply using different initial estimates for the pose. This allows us to avoid the combinatorial explosion of data that would be required if we needed to observe every object in every pose. Suppose we are given training samples (I_i, θ_i) for i = 1 ... N and wish to simulate additional data. Recall that each image I_i is generated using G(o_i, θ_i), where o_i is the unknown object appearance. Using the training data, we can estimate the distribution D of the poses θ_i (or use their empirical distribution); we would like to use D to augment our training set by sampling θ_j ~ D and generating new training images I_ij = G(o_i, θ_j). Although we do not have access to either G or o_i, we can actually achieve an identical effect by taking advantage of our pose-indexed features being weakly invariant. We formalize this below.

When training with the un-augmented data we optimize the following loss: L(R) = Σ_{i=1}^N d(R(h(θ̄, I_i)), θ_i), where θ̄ is the initial pose. Suppose we could explicitly generate additional images using G by sampling θ_j ~ D and generating novel images I_ij = G(o_i, θ_j). In effect, we could optimize

  L(R) = Σ_i E_D[ d(R(h(θ̄, G(o_i, θ_j))), θ_j) ],   (7)

where E_D denotes expectation over D. Of course, in practice we do not have access to G. However, we can achieve an identical effect without needing to explicitly compute G. Below we prove that θ_j^{-1} ∘ R(θ̄, G(o, θ_j)) = θ_i^{-1} ∘ R(θ̄', G(o, θ_i)), where θ̄' = θ_i ∘ θ_j^{-1} ∘ θ̄. Plugging into (7) and re-arranging gives:

  L(R) = Σ_i E_D[ d(R(h(θ_i ∘ θ_j^{-1} ∘ θ̄, I_i)), θ_i) ].   (8)

In other words, training R with the original I_i but with an initial pose estimate θ̄' = θ_i ∘ θ_j^{-1} ∘ θ̄ is exactly equivalent to training with explicitly generated novel images I_ij. In practice, we approximate the loss in (8) by sampling a finite set of initial poses for each training example.

To complete the above derivation, we prove that for all θ_1, θ_2, θ'_1, θ'_2, if θ_1^{-1} ∘ θ'_1 = θ_2^{-1} ∘ θ'_2, then the following holds:

  θ_1^{-1} ∘ R(h(θ'_1, G(o, θ_1))) = θ_2^{-1} ∘ R(h(θ'_2, G(o, θ_2))).   (9)

We prove by induction that θ_1^{-1} ∘ θ_{t,1} = θ_2^{-1} ∘ θ_{t,2} for every t, where θ_{t,1} and θ_{t,2} are defined as in (3), starting from the initial estimates θ'_1 and θ'_2. The base case (t = 0) holds by definition. We now show that if the claim holds for t-1 ≥ 0, it also holds for t. Proof:

  θ_1^{-1} ∘ θ_{t,1} = θ_1^{-1} ∘ θ_{t-1,1} ∘ R_t(h(θ_{t-1,1}, G(o, θ_1)))               (10)
                     = θ_1^{-1} ∘ θ_{t-1,1} ∘ R_t(h(θ_1^{-1} ∘ θ_{t-1,1}, G(o, e)))      (11)
  θ_2^{-1} ∘ θ_{t,2} = θ_2^{-1} ∘ θ_{t-1,2} ∘ R_t(h(θ_{t-1,2}, G(o, θ_2)))               (12)
                     = θ_2^{-1} ∘ θ_{t-1,2} ∘ R_t(h(θ_2^{-1} ∘ θ_{t-1,2}, G(o, e)))      (13)

Here e denotes the identity pose; (11) and (13) follow from weak invariance. Therefore, if θ_1^{-1} ∘ θ_{t-1,1} = θ_2^{-1} ∘ θ_{t-1,2}, then θ_1^{-1} ∘ θ_{t,1} = θ_2^{-1} ∘ θ_{t,2}, thus completing the proof.
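In practice, the recipe above reduces to re-using each training image with several sampled initial poses instead of rendering new images. A rough sketch follows; our simplification is that initial poses are drawn directly from the empirical pose distribution rather than composing sampled pose offsets with θ̄, and all names are illustrative.

```python
import random

def augment_training_set(samples, n_total, rng=None):
    """Simulate extra CPR training data without generating new images:
    each original (image, true_pose) pair is reused with initial pose
    estimates drawn from the empirical distribution of training poses.
    Returns (image, initial_pose, true_pose) triples."""
    rng = rng or random.Random()
    poses = [pose for _, pose in samples]
    augmented = []
    while len(augmented) < n_total:
        image, true_pose = rng.choice(samples)
        init_pose = rng.choice(poses)  # stand-in for sampling theta_j ~ D
        augmented.append((image, init_pose, true_pose))
    return augmented
```

This mirrors the paper's observation that N annotated examples can be stretched to a much larger effective training set (e.g., N' = 4000 in the experiments) at essentially no annotation cost.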
3. Human Annotations and Ground Truth

We obtained three pose-labeled datasets: Mice, Fish and Faces. The Mice dataset consisted of 3000 black mice labeled in top-view images (with 1-3 mice per image).

Figure 5. Pose labels provided by human annotators. Top-left: Annotations for the Mice and Fish datasets provided by different annotators (color denotes annotator). Top-right: Parameterization of the poses. The mouse pose is an ellipse at location (x, y) with an orientation, scale s_1 and aspect ratio s_1/s_2. The fish pose is a 3-part model where the body (middle) part is centered at location (x, y) with orientation φ_b, and the tail and head parts have angles φ_t and φ_h respectively w.r.t. the body part. The scale s is the length of the parts. Bottom: Distributions of differences in human-provided pose labels for the location and orientation of the mice (three annotators: S1, S2, D), and the tail and head angles for the fish (two annotators). The estimated mean and standard deviation of each distribution are denoted by μ and σ respectively.

Pose was specified by the location, orientation, scale, and aspect ratio of an ellipse fitted around each mouse; see Fig. 5 (top). The Fish dataset consisted of 38 top-view images of zebrafish swimming in an aquarium, with 5 fish per image. With reflection, this gives a total of 380 examples. The pose of each fish was annotated by fitting a 3-part model specified by the location, orientation and scale of a central body part, and the angles of the tail and head with respect to the body; see Fig. 5 (middle). For the Faces dataset we used Caltech 10,000 Web Faces [3], where each face was labeled by four points (eyes, mouth, nose). We used the first three coordinates (the nose was somewhat inconsistent) to define a pose with the same parameterization as an ellipse.

In addition to training and evaluating CPR, we used redundantly annotated images for measuring human performance and defining a perceptually meaningful distance measure between poses. Three annotators provided redundant labels for 750 mice, and two annotators provided labels for all the fish; see Fig. 5 (top) for some examples. To quantify annotator consistency, for each pose parameter we computed the distributions of pose-parameter differences, as shown in Fig. 5 (bottom). Observe that the annotations are consistent and unbiased (μ ≈ 0) and that the differences in individual pose parameters are normally distributed. The latter motivates the use of the standard deviation as normalization in a perceptually meaningful distance measure.

Figure 6. Performance vs. the number of phases T in the regression cascade. Overall median error plotted in black; error of individual pose parameters plotted in color. Mice: Initially only the translation parameters are refined, next CPR begins predicting orientation, and finally after 128 iterations CPR also begins refining scale and aspect ratio. Fish: Due to the elongated 3-part pose model, CPR determines the orientation of the model before concentrating on the position, scale and part angles. No overfitting is observed with increasing T.

Figure 7. Effect of data augmentation (Sec. 2.3). Each curve shows the effect of augmenting N actual training examples to different total sizes N' ≥ N. Mice: Performance is reasonable with just N = 125 training examples augmented to N' = 4000 total examples (and maximized as soon as N = 250). Fish: Performance is maximized when N = 64 training examples are used (again augmented to N' = 4000 total examples); in fact, since the training images were mirrored, this corresponds to 32 actual annotations. Thus, although the total amount of augmented data CPR requires is massive, the amount of actual annotated data needed is very small.

In order to weigh errors in estimating the different pose parameters equally against each other, we define the distance between two poses as d(θ_1, θ_2) = sqrt( (1/D) Σ_{i=1}^D (θ_1^i - θ_2^i)^2 / σ_i^2 ), where D is the number of pose parameters. Here σ_i^2 denotes the variance of the differences between annotations of the ith pose parameter, estimated from the difference distributions shown in Fig. 5. These weights were used both for evaluation and for training CPR. The above normalization assumes that the pose parameters are uncorrelated. The expected squared distance between two human annotators is 1. We defined a pose estimate with a distance from the ground truth greater than 2.5 to be a failure, as we observed that with 99% probability two human annotations of the same object were within a normalized distance of 2.5.
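The normalized distance and the failure criterion from Section 3 translate directly into code. A small sketch; `sigma` holds the per-parameter standard deviations σ_i of the annotator differences, and the function names are ours.

```python
import math

def pose_distance(theta1, theta2, sigma):
    """d(theta1, theta2) = sqrt((1/D) * sum_i ((theta1_i - theta2_i) / sigma_i)^2):
    per-parameter errors are normalized by annotator disagreement, so the
    expected squared distance between two human annotators is 1."""
    D = len(theta1)
    return math.sqrt(sum(((a - b) / s) ** 2
                         for a, b, s in zip(theta1, theta2, sigma)) / D)

def is_failure(theta1, theta2, sigma, threshold=2.5):
    """A pose estimate more than 2.5 normalized units from ground truth
    is counted as a failure (99% of human annotation pairs fall within 2.5)."""
    return pose_distance(theta1, theta2, sigma) > threshold
```

Note the RMS form: a single badly estimated parameter can dominate the distance, which is consistent with treating all normalized parameters as equally important.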
4. Experiments

We performed experiments on the three datasets described above: Mice, Faces and Fish. We divided the Mice and Faces datasets into training, validation and test sets of 1000 images each. We divided the Fish dataset into a training set of 250 images and a test set of 130 images (but no validation set). In addition, we applied an 8x8 median filter to the fish images to remove salt-and-pepper noise from the background. All reported experiments were averaged over 25 trials, each performed with a different random seed.

We measured the error of a single annotation using the perceptual distance d defined in Section 3. If the error is above 2.5 we define the pose estimate to be a failure. We report overall error in terms of the percent failure rate and the mean error of the remaining examples. Alternatively, we report median error when we wish to describe performance using a single curve.

The main parameters of CPR are the amount of training data N, the total amount of data after data augmentation N', the number of phases in the cascade T, the fern depth S, the number of ferns R and features F generated at each stage of training, and the number of restarts K. Using the Mice and Faces validation sets we found (N' = 4000, T = 512, S = 5, R = 64, F = 64, K = 16) to be a good tradeoff between performance and speed on both datasets. All subsequent experiments, on all datasets, used the above parameter settings unless otherwise noted.

Cascade depth: The influence of the number of phases T on the error is shown in Fig. 6. The algorithm converges after 512 stages for all datasets, including the faces (not shown), and does not overfit. The lack of overfitting is in stark contrast to standard boosted regression algorithms, which tend to strongly overfit and require careful tuning of a "learning rate" parameter [17].

Data augmentation: Fig. 7 shows the advantage of the data augmentation scheme described in Section 2.3. With only 250 training examples it is possible to match the performance achieved using all 1000 training examples on the Mice dataset. On the Fish dataset the benefit of data augmentation was even more striking: just 32 pose labels were sufficient to achieve human performance.

Figure 8. Performance as a function of the uncertainty in the initial position r of a (simulated) detector (see text for details). Left: Mean error as a function of r increases gradually, except for the faces, where more textured backgrounds likely make pose estimation more challenging with larger offsets. Right: The failure rate increases smoothly but rapidly as detector uncertainty increases.

Figure 9. Mice dataset: Human agreement on this dataset is high, with a mean error μ ≈ 1 and a failure rate f ≈ 1%. Boosted-Reg fails on almost all test examples. Choosing the best of 16 random poses (Rand-16-Best) results in performance slightly better than Boosted-Reg, but much worse than CPR. The distribution of errors of CPR-1 is bimodal with 39.6% failures; however, the mean (excluding failures) is μ = 1.13, which is not far off from human performance. Running CPR with clustering (CPR-16-Clust) reduces failures substantially. Having an oracle pick the best of 16 pose predictions (CPR-16-Best) removes most remaining failures, a property that can be exploited in tracking systems where dynamic information is available. The images in the bottom row show examples of pose estimates (green ellipse) from CPR-16-Clust at different distances from the ground truth (blue ellipse). We observed that many of the failure cases were caused by orientation errors of 180°, as in the right-most image.

Translational uncertainty: We expect a detector/tracker to provide a center position estimate x̃ to CPR. We assume the detector always returns an estimate x̃ that is within a maximum distance r·w of the true object position x, i.e., ||x̃ - x||_2 ≤ r·w, where w denotes the object width measured at the root part and r defines the uncertainty of the detector. We simulate such a detector by sampling initial estimates x̃_i uniformly from a circular region of radius r·w centered on x_i for each example i. Results are shown in Fig. 8. Mean performance (excluding failures) degrades gradually as the uncertainty r increases; however, the failure rate increases more quickly. Throughout all experiments we use a simulated detector with r = 0.5, as we expect most detection systems to return a position estimate that is within half an object width of the true object position.
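The simulated detector described above amounts to sampling an initial position uniformly from a disc of radius r·w around the true position. A sketch; the square-root radial draw, which makes the samples uniform over the disc's area rather than over its radius, is our implementation choice.

```python
import math
import random

def simulate_detector(x, y, width, r=0.5, rng=None):
    """Return an initial position estimate within distance r * width of
    the true position (x, y), sampled uniformly over the disc."""
    rng = rng or random.Random()
    radius = r * width * math.sqrt(rng.random())  # sqrt: uniform in area
    angle = rng.uniform(0.0, 2.0 * math.pi)
    return x + radius * math.cos(angle), y + radius * math.sin(angle)
```

Every sampled estimate therefore satisfies the paper's constraint ||x̃ - x||_2 ≤ r·w by construction.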
Performance: Results on the fish, mice and faces are shown in Figures 9-11. We plot the full distribution of errors, and list both the mean errors μ (excluding the failures) and the failure rates f. For each dataset, we compare the following approaches:

Human: human versus human performance.
Boosted-Reg: boosted regression [17] using the same features as CPR.
Rand-16-Best: oracle selects the best of 16 random poses.
CPR-1: CPR with a single (K = 1) starting pose.
CPR-16-Clust: CPR with 16 starting poses followed by clustering.
CPR-16-Best: CPR with 16 starting poses, oracle selects best.

CPR-16-Clust is the basic approach described in this paper, while CPR-16-Best shows performance when an outside source of information is available to select the best of 16 poses computed by CPR (e.g., temporal consistency information).

Figure 10. Fish dataset: Results are qualitatively similar to the mice data (see Fig. 9). Failures are again primarily 180° orientation errors (which can be removed by clustering or by incorporating dynamic information). Interestingly, except for the few failure cases, CPR outperforms the human annotator that redundantly labeled the same data after being given identical labeling instructions. Neither set of annotations looks sloppy; rather, there apparently must be some slight bias between the annotators, whereas the algorithm learns to mimic the first annotator quite closely.

Overall, the performance of CPR-16-Clust is close to that of human annotators, except for a 5%-25% failure rate depending on the dataset. However, many failure cases in the Mice and Fish datasets are due to orientation errors of 180°, a problem that can be alleviated by dynamic constraints from a tracking system. Indeed, CPR-16-Best tends to have a very low failure rate of 1%-5%. For comparison, Boosted-Reg and Rand-16-Best perform very poorly.

Implementation: Using a Matlab implementation of CPR, it takes about 3 minutes to train the entire system on N = 1000 image/pose pairs using the default parameters. Testing is also very fast, averaging 2-3 ms per image with default parameters and K = 1 starting poses. We will post all code (which is fairly small) on the project website.

5. Discussion and Conclusions

We presented a new algorithm, cascaded pose regression (CPR), to compute the 2D pose of an object from a rough initial estimate. The key to CPR's success appears to be the use of pose-indexed features, whose values depend on the current estimate of the pose. Training CPR takes only a few minutes, and computing pose takes a few milliseconds on a standard machine. CPR is insensitive to exact parameter settings; indeed, identical parameters were used for all three datasets. Moreover, CPR can learn effective models with very little training data (~100 training samples).

Experiments carried out on three datasets (faces, mice, fish) with different object and background statistics demonstrated that CPR can learn diverse models of pose with a median error comparable to that of a skilled human annotator. While this is very encouraging, we will need to experiment with more categories before we can claim general applicability. The failure rate is higher than that of human