
TRACKING MOTION-BLURRED TARGETS IN VIDEO

Shengyang Dai, Ming Yang, Ying Wu, Aggelos K. Katsaggelos
Department of EECS, Northwestern University
2145 Sheridan Road, Evanston, IL 60208
{sda690, mya671, yingwu, aggk}@ece.northwestern.edu

ABSTRACT

Many emerging applications require tracking targets in video. Most existing visual tracking methods do not work well when the target is motion-blurred (especially due to fast motion), because the imperfectness of the target's appearances invalidates the image matching model (or the measurement model) in tracking. This paper presents a novel method to track motion-blurred targets by taking advantage of the blurs without performing image restoration. Unlike the global blur induced by camera motion, this paper is concerned with the local blurs that are due to the target's motion. This is a challenging task because the blurs need to be identified blindly. The proposed method addresses this difficulty by integrating signal processing and statistical learning techniques. The estimated blurs are used to reduce the search range by providing strong motion predictions, and to localize the best match accurately by modifying the measurement models.

1. INTRODUCTION

Many emerging applications require tracking targets in video. In real situations, motion blurs are not uncommon in captured video data. They are undesirable for image analysis because the imperfectness of the target's appearances is likely to jeopardize image features, such as image gradients, sum-of-square-differences (SSD), and color histograms, and thus invalidates the image matching model (or the measurement model) in tracking. Although deblurring methods that improve the image quality have been widely investigated in the literature [1, 2], these studies are often based on the assumption that the entire image is subject to the same global motion blur (this is the case when there is camera motion). Unfortunately, in real applications, we observe more complicated and challenging situations where motion blurs are only present in parts of the image. These local motion blurs can be produced by 1) fast movements of the targets, or 2) insufficient lighting that reduces the shutter speed of auto-exposure cameras.

Studies on motion blur detection, or estimation of the point spread function (PSF), can be traced back to the 1970s. For example, motion blurs and out-of-focus blurs may be identified by locating the zero-crossings of the cepstrum [3] and the bispectrum [4]. In more recent studies, blurs can be estimated by mapping the low frequency components to the high frequencies based on a learned VQ codebook [5], or by employing the discrete periodic Radon transform for efficient computation [6]. Although motion blur degrades image quality, we can also take advantage of it. If the motion can be estimated from the blur, it can facilitate many motion-related tasks, such as optical flow computation [7], motion segmentation [8], and tracking [9]. Note that in the aforementioned works, a global motion blur is assumed instead of a local one.

This paper presents a novel method to deal with local motion blurs in visual tracking without deblurring, by integrating signal processing and statistical learning techniques. This learning-based scheme differentiates motion-blurred image patches from non-blurred patches and estimates the local motion blur mask to facilitate tracking. A support vector machine (SVM) classifier is employed to discriminate blurs after aligning the patches by steerable filters. Then the identified blurred patches are clustered to obtain a more accurate estimation of the motion direction. During the tracking process, blurred templates of the target are synthesized according to the blur direction and used for matching.

2. THE PROBLEM

It is reasonable to assume that the target's motion is linear between two successive frames, so the PSF h(x, y) can be modelled as a rectangular pulse whose orientation and width correspond to the direction and severity of the blur. Regardless of the orientation, the PSF is

    h(x, y) = 1/l   if y = 0 and -l/2 <= x <= l/2,
    h(x, y) = 0     otherwise,                          (1)

where l represents the strength of the blur. The blurred image I_blur(x, y) is generated by convolving the PSF with the original image I(x, y), that is,

    I_blur(x, y) = I(x, y) * h(x, y) + n(x, y),         (2)

where n(x, y) represents the additive noise. In the frequency domain, the rectangular PSF corresponds to a sinc function, which provides valuable information for blur estimation. It is very difficult to detect the zero-crossings associated with the sinc function.
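As a minimal illustration of the blur model in Eqs. (1)-(2), the sketch below (a NumPy-only toy; the function names are ours, not the paper's) synthesizes a horizontal motion blur with the unit-area rectangular PSF and verifies its frequency-domain sinc signature, whose magnitude vanishes at multiples of N/l in the ideal noise-free 1-D case:

```python
import numpy as np

def motion_blur_psf(l):
    """Rectangular PSF of Eq. (1): a 1-D box of width l with unit area."""
    return np.ones(l) / l

def blur_image(img, l, noise_sigma=0.0, seed=0):
    """Eq. (2) for a horizontal blur direction: I_blur = I * h + n."""
    h = motion_blur_psf(l)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, img)
    rng = np.random.default_rng(seed)
    return blurred + noise_sigma * rng.standard_normal(img.shape)

# A constant image is preserved by the unit-area PSF (away from the borders).
out = blur_image(np.full((16, 16), 100.0), l=5)
print(np.allclose(out[:, 4:-4], 100.0))  # True

# In the frequency domain the rectangular PSF is a sinc: the magnitude of its
# zero-padded DFT vanishes at multiples of N/l -- the cue used for blur estimation.
H = np.abs(np.fft.fft(motion_blur_psf(8), 64))
print(H[np.arange(8, 64, 8)].max() < 1e-9)  # True
```

In a real video patch these clean zeros are buried in noise, which is exactly the difficulty the paper's learning-based detector addresses.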
However, the associated strip patterns that indicate the possible presence of motion blurs are generally adequately detectable. Thus an important issue here is to locate the strip patterns, or simply the dominant directions in the frequency domain.

To identify global blurs, images can be divided into small patches so as to suppress the noise by averaging the estimated blurs. However, the situation is more complicated when detecting and identifying local motion blurs, because some parts of the image are non-blurred regions, so we cannot simply apply the averaging scheme. Thus, a critical issue here is to determine whether a small image patch is motion blurred or not. We need to answer some important questions: (1) is the information in an individual image patch sufficient to determine if it is blurred? (2) how can we suppress the influence of noise and other factors such as texture? (3) how do we detect and estimate the blur? and (4) how can the blur be used for tracking?

To answer the first question, one small image patch by itself may not be able to determine the blur. Since it can provide some rough estimates, the final decision can be made by integrating a number of patches. For the second question, there are many factors that may influence the estimation result, such as the additive noise, texture regions, and block boundary effects due to compression. Since it is difficult to model these factors, we employ a statistical learning approach (a support vector machine) to discriminate blurs. For the third question, we employ steerable filters to estimate the dominant direction in the frequency domain.

When the target is blurred due to fast motion, the predetermined measurement model of the tracker is generally unable to obtain good matches and is likely to fail the tracker. At this point, once the blur is detected and identified, we can synthesize various blurred templates of the target with different blur strengths, and search for the best matches of the blurred templates so as to obtain more accurate tracking results.

3. PROPOSED APPROACH

The proposed approach to tracking motion-blurred targets is illustrated in Fig. 1. The target of interest T is initialized in the first frame. For each image frame f_k, the mean-shift tracker [10] that matches the target's color histograms is applied first. If its matching score is low, we assume that blur occurs and we need to estimate the blur parameters (direction theta_b and strength l_b). The blur identification procedure consists of three steps: blur mask detection with SVM, blur direction estimation with steerable filters, and blur strength estimation. Then, the mean-shift algorithm is applied again with synthesized blurred templates T(theta_b, l_b).

Fig. 1. The proposed approach.

3.1. Steerable filters

Steerable filters [11] can extract the dominant direction of a small image patch. They are much more computationally efficient than the Radon transform. The filter in any given direction can be expressed as a linear combination of a small set of basis filters. So we only need to compute the convolution with a small number of filters, and infer the responses in other directions by a simple linear combination. The filter in direction theta can be expressed as

    f_theta(x, y) = sum_j k_j(theta) f_j(x, y),         (3)
    k_j(theta) = (1/3) [1 + 2 cos(2(theta - theta_j))], (4)

where theta_j = j*pi/3, j = 0, 1, 2, and f_j is the basis filter in direction theta_j. After scaling of x and y, the basis filters can be taken as rotated versions of the following even and odd filters, respectively:

    f_e(x, y) = 0.9213 (2x^2 - 1) e^{-(x^2 + y^2)},         (5)
    f_o(x, y) = (-2.205x + 0.9780x^3) e^{-(x^2 + y^2)}.     (6)

To extract the strip direction in the frequency domain, only the even filter is required due to the symmetry property. To extract the dominant direction of an image patch in the image domain, both even and odd filters are needed, since both a strong image edge and a strip may produce a dominant direction. The final direction can be chosen as

    theta_b = argmax_theta [ (f_theta^e(x, y))^2 + (f_theta^o(x, y))^2 ],   (7)

where f_theta^e and f_theta^o denote the even and odd filter responses in direction theta.

3.2. Blur detection based on SVM

A support vector machine (SVM) [12] is a powerful statistical learning algorithm. It can learn a non-linear discrimination function from a set of labelled training data for pattern classification or data regression. This function minimizes the training error and, at the same time, guarantees the generalization ability of the classifier by maximizing the margin between the so-called support vectors.

In our approach, an SVM is trained to classify blurred and non-blurred small image patches in order to implicitly handle the noise and other influences that are difficult to model explicitly. This approach is completely different from most existing blur estimation methods, which are based on spectral analysis and are thus limited by noise. Images with a linear motion blur generally show strip patterns in the frequency domain. In theory, the blur direction and strength can be estimated by extracting the direction and width of the strips.

Fig. 2. 2-D color histograms for the same object without and with motion blur (frames #1 and #70 in the sequence of Fig. 5(a)).

However, there are two difficulties here: (1) the strip patterns are not apparent when the patches are small, such that the noise may override the strip patterns, and (2) the window effect produces edges in the frequency domain that generate undesirable patterns. Although it is difficult to model these influences, they can be learned from training examples.

To train an SVM classifier, we collect negative (or non-blurred) training data that are patches from non-blurred images, and positive (or blurred) training data that are synthesized motion-blurred image patches. These data are aligned by rotating the patches according to their dominant directions extracted by steerable filters at a fixed scale (4 in our experiments). The direction saliency of a patch p is defined as

    S(p) = max_theta [ (f_theta^e(x, y))^2 + (f_theta^o(x, y))^2 ] / min_theta [ (f_theta^e(x, y))^2 + (f_theta^o(x, y))^2 ].   (8)

Patches with S(p) < S_thr are not included, because they usually represent homogeneous regions and are ambiguous to classify. We set S_thr = 2. The size of each patch is 16x16 pixels; 5000 non-blurred patches are randomly collected from the Corel database, while 5000 blurred patches are synthesized from random patches selected in the same database. The blur strengths are randomly chosen from 5 to 32 pixels. Each patch is represented by its 256 DFT coefficients. The average classification accuracy of 5-fold cross validation is 90.5%, which is sufficient for direction estimation in our experiments.

3.3. Blur estimation for tracking

As mentioned in Sec. 1, motion blurs influence most low-level image features and thus fail many tracking algorithms. The mean-shift tracker [10], which searches for the best match of the kernel-weighted color histograms of the target by gradient descent, is used in this work. When motion blur is present, adjacent color pixels are blended and generate new colors, such that the color appearance of the target may be greatly changed (as shown in Fig. 2), causing the mean-shift tracker to fail. Fortunately, once we can estimate the local motion blur direction, we can synthesize various motion-blurred templates of the target with different blur strengths along the estimated motion direction. Then we can utilize the color histograms of the blurred templates to obtain the matches by mean-shift.

In our experiments, the blurred patches are identified in a region obtained by enlarging the tracking result in the previous frame by 50% in length. We use morphological operations to obtain a spatially consistent blur mask, and steerable filters to estimate the blur direction of this region. Without the blur identification module, the direction estimation result is likely to be distracted by nearby strong edges.

Fig. 3. Local motion blur detection and identification.

Fig. 4. Matching score comparison with mean-shift for the sequence of Fig. 5(a): Bhattacharyya matching score versus frame number, for our approach and for plain mean-shift.

Once the blur direction theta_b is estimated, for a given blur strength l, a new template T(theta_b, l) can be synthesized (the template size is reduced to exclude the influence from nearby regions). The mean-shift tracker finds the best match for T(theta_b, l) very efficiently, and provides a similarity measurement denoted by M_T(theta_b, l). The blur strength l_b is chosen by a line search (the search step in our experiments is set to 4 pixels):

    l_b = argmax_l M_T(theta_b, l).   (9)

4. EXPERIMENTS

4.1. Local motion blur detection and identification

Fig. 3 shows the result of local blur detection and identification on a synthesized image. The left image contains two regions blurred with different parameters. The right image shows the confidence of the SVM classifier (darker color means lower confidence). The ground truth is theta = 0 degrees, l = 10 and theta = 90 degrees, l = 15 for the two regions, respectively. The estimated blur directions of the two regions are theta = 5 degrees and theta = 84 degrees, respectively, which are very close to the ground truth.

4.2. Tracking with blur estimation

We incorporate the local motion blur detection in a mean-shift tracker. The color histogram has 64x64 bins in the hue-saturation plane. As the object moves quickly or the illumination is insufficient, the motion blurs are severe and result in the blending of the adjacent colors.
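The blur-strength line search of Eq. (9) can be sketched as follows. This is a deliberately simplified, self-contained toy (NumPy only, function names ours): it assumes a horizontal blur direction, scores plain intensity histograms with the Bhattacharyya coefficient, and steps the strength by 4 pixels as in our experiments, whereas the actual tracker uses kernel-weighted hue-saturation histograms inside a mean-shift search.

```python
import numpy as np

def blur_template(template, l):
    """Synthesize T(theta_b, l) for a horizontal blur direction (illustrative)."""
    h = np.ones(l) / l
    return np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, template)

def hist_1d(patch, bins=16):
    """Normalized intensity histogram (stand-in for the tracker's color histograms)."""
    counts, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return counts / max(counts.sum(), 1)

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between normalized histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def best_blur_strength(template, observed, strengths=range(5, 33, 4)):
    """Eq. (9): l_b = argmax_l M_T(theta_b, l), via a line search with step 4."""
    q = hist_1d(observed)
    scores = {l: bhattacharyya(hist_1d(blur_template(template, l)), q) for l in strengths}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
T = rng.random((32, 32))      # sharp target template
obs = blur_template(T, 13)    # observation blurred with an "unknown" strength
print(best_blur_strength(T, obs))  # should recover a strength near 13
```

The key design point carries over to the real tracker: rather than deblurring the frame, the sharp template is blurred to match the observation, so the comparison happens in the degraded domain where the measurement model is valid.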
Fig. 5. Tracking and comparison. Two examples are shown, where the top/bottom rows in (a) and (b) are the tracking results without/with motion blur estimation (frame # for (a): 1, 69, 70, 105, 183, 190; frame # for (b): 1, 63, 64, 135, 177, 190).

The target is manually initialized in the first frame. As shown in Fig. 5, the yellow pixels in the book title region are blended with the adjacent red pixels, and the purple and green ones on the box are merged to generate some yellow pixels. As shown in the first rows, the mean-shift tracker is unable to find an exact match from frames 69 and 63, and thus loses track. With the help of local motion blur estimation, our approach achieves much more accurate tracking results, as shown in the bottom rows of Fig. 5(a) and (b). For sequence (a), Fig. 4 shows that our method achieves higher matching scores in the presence of motion blur, and thus provides a more accurate tracking result.

5. CONCLUSION

This paper addresses the challenges in tracking motion-blurred targets and proposes a novel learning-based method to estimate local motion blurs for tracking. This method uses an SVM classifier to differentiate blurred and non-blurred image patches to cope with the influence of noise and other non-parametric factors. The proposed method yields promising results on detecting local motion blurs and achieves encouraging results on tracking blurred targets compared to a state-of-the-art tracking algorithm. Future work includes the incorporation of more complex motion blurs and the reduction of the complexity.

6. REFERENCES

[1] Deepa Kundur and Dimitrios Hatzinakos, "Blind image deconvolution," IEEE Signal Processing Mag., vol. 13, no. 3, pp. 43-64, May 1996.
[2] Mark R. Banham and Aggelos K. Katsaggelos, "Digital image restoration," IEEE Signal Processing Mag., vol. 14, no. 2, pp. 24-41, Mar. 1997.
[3] Michael Cannon, "Blind deconvolution of spatially invariant image blurs with phase," IEEE Trans. Acoust., Speech, Signal Processing, vol. 24, no. 1, pp. 58-63, Feb. 1976.
[4] M. Michael Chang, Murat Tekalp, and A. Tanju Erdem, "Blur identification using the bispectrum," IEEE Trans. Signal Processing, vol. 39, no. 10, pp. 2323-2325, Oct. 1991.
[5] Ryo Nakagaki and Aggelos K. Katsaggelos, "A VQ-based blind image restoration algorithm," IEEE Trans. Image Processing, vol. 12, no. 9, pp. 1044-1053, Sept. 2003.
[6] Daniel P. K. Lun, Tommy C. L. Chan, Tai-Chiu Hsung, David D. Feng, and Yuk-Hee Chan, "Efficient blind image restoration using discrete periodic Radon transform," IEEE Trans. Image Processing, vol. 13, no. 2, pp. 188-200, Feb. 2004.
[7] Ioannis M. Rekleitis, "Steerable filters and cepstral analysis for optical flow calculation from a single blurred image," in VI '96, Toronto, May 1996, pp. 159-166.
[8] Paolo Favaro and Stefano Soatto, "A variational approach to scene reconstruction and image segmentation from motion-blur cues," in CVPR '04, Washington, DC, Jun. 27-Jul. 2, 2004, vol. 1, pp. 631-637.
[9] Hailin Jin, Paolo Favaro, and Roberto Cipolla, "Visual tracking in the presence of motion blur," in CVPR '05, San Diego, CA, June 20-26, 2005, vol. 2, pp. 10-25.
[10] Dorin Comaniciu, Visvanathan Ramesh, and Peter Meer, "Kernel-based object tracking," IEEE Trans. Pattern Anal. Machine Intell., vol. 25, no. 5, pp. 564-577, May 2003.
[11] William T. Freeman and Edward H. Adelson, "The design and use of steerable filters," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 9, pp. 891-906, 1991.
[12] Vladimir N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.