INVITED PAPER

Computational Methods for Sparse Solution of Linear Inverse Problems

Joel A. Tropp, Member IEEE, and Stephen J. Wright

In many engineering areas, such as signal processing, practical results can be obtained by identifying approaches that yield the greatest quality improvement, or by selecting the most suitable computation methods.

ABSTRACT | The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.

KEYWORDS | Compressed sensing; convex optimization; matching pursuit; sparse approximation

I. INTRODUCTION

Linear inverse problems arise throughout engineering and the mathematical sciences. In most applications, these problems are ill-conditioned or underdetermined, so one must apply additional regularizing constraints in order to obtain interesting or useful solutions. Over the last two decades, sparsity constraints have emerged as a fundamental type of regularizer. This approach seeks an approximate solution to a linear system while requiring that the unknown has few nonzero entries relative to its dimension:

$$\text{Find sparse } x \text{ such that } \Phi x \approx u,$$

where u is a target signal and Φ is a known matrix. Generically, this formulation is referred to as sparse approximation [1]. These problems arise in many areas, including statistics, signal processing, machine learning, coding theory, and approximation theory. Compressive sampling refers to a specific type of sparse approximation problem first studied in [2] and [3].

Tykhonov regularization, the classical device for solving linear inverse problems, controls the energy (i.e., the Euclidean norm) of the unknown vector. This approach leads to a linear least squares problem whose solution is generally nonsparse. To obtain sparse solutions, we must develop more sophisticated algorithms and often commit more computational resources. The effort pays off. Recent research has demonstrated that, in many cases of interest, there are algorithms that can find good solutions to large sparse approximation problems in reasonable time.

In this paper, we give an overview of algorithms for sparse approximation, describing their computational requirements and the relationships between them. We also discuss the types of problems for which each method is most effective in practice. Finally, we sketch the theoretical results that justify the application of these algorithms. Although low-rank regularization also falls within the sparse approximation framework, the algorithms we describe do not apply directly to this class of problems.

Section I-A describes idealized formulations of sparse approximation problems and some common features of algorithms that attempt to solve these problems. Section II provides additional detail about greedy pursuit methods. Section III presents formulations based on convex programming and algorithms for solving these optimization problems.

[Manuscript received March 16, 2009; revised December 10, 2009; accepted February 11, 2010. Date of publication April 29, 2010; date of current version May 19, 2010. The work of J. A. Tropp was supported by the Office of Naval Research (ONR) under Grant N00014-08-1-2065. The work of S. J. Wright was supported by the National Science Foundation (NSF) under Grants CCF-0430504, DMS-0427689, CTS-0456694, CNS-0540147, and DMS-0914524. J. A. Tropp is with Applied and Computational Mathematics, Firestone Laboratories MC 217-50, California Institute of Technology, Pasadena, CA 91125-5000 USA (e-mail: jtropp@acm.caltech.edu). S. J. Wright is with the Computer Sciences Department, University of Wisconsin, Madison, WI 53706 USA (e-mail: swright@cs.wisc.edu). Digital Object Identifier: 10.1109/JPROC.2010.2044010]
A. Formulations

Suppose that Φ is a real m × N matrix whose columns φ_j have unit Euclidean norm: ∥φ_j∥₂ = 1 for j = 1, ..., N. (The normalization does not compromise generality.) This matrix is often referred to as a dictionary. The columns of the matrix are entries in the dictionary, and a column submatrix is called a subdictionary.

The counting function ∥·∥₀ returns the number of nonzero components in its argument. We say that a vector x is s-sparse when ∥x∥₀ ≤ s. When u = Φx, we refer to x as a representation of the signal u with respect to the dictionary.

In practice, signals tend to be compressible, rather than sparse. Mathematically, a compressible signal has a representation whose entries decay rapidly when sorted in order of decreasing magnitude. Compressible signals are well approximated by sparse signals, so the sparse approximation framework applies to this class. In practice, it is usually more challenging to identify approximate representations of compressible signals than of sparse signals.

The most basic problem we consider is to produce a maximally sparse representation of an observed signal u:

$$\min_x \|x\|_0 \quad \text{subject to} \quad \Phi x = u. \tag{1}$$

One natural variation is to relax the equality constraint to allow some error tolerance ε ≥ 0, in case the observed signal is contaminated with noise:

$$\min_x \|x\|_0 \quad \text{subject to} \quad \|\Phi x - u\|_2 \le \varepsilon. \tag{2}$$

It is most common to measure the prediction–observation discrepancy with the Euclidean norm, but other loss functions may also be appropriate.

The elements of (2) can be combined in several ways to obtain related problems. For example, we can seek the minimal error possible at a given level of sparsity s:

$$\min_x \|\Phi x - u\|_2 \quad \text{subject to} \quad \|x\|_0 \le s. \tag{3}$$

We can also use a parameter λ > 0 to balance the twin objectives of minimizing both error and sparsity:

$$\min_x \; \tfrac{1}{2}\|\Phi x - u\|_2^2 + \lambda \|x\|_0. \tag{4}$$

If there are no restrictions on the dictionary Φ and the signal u, then sparse approximation is at least as hard as a general constraint satisfaction problem. Indeed, for a fixed approximation factor, it is NP-hard to produce a sparse approximation whose error lies within that factor of the minimal s-term approximation error [4, Sec. 0.8.2]. Nevertheless, over the past decade, researchers have identified many interesting classes of sparse approximation problems that submit to computationally tractable algorithms. These striking results help to explain why sparse approximation has been such an important and popular topic of research in recent years.

In practice, sparse approximation algorithms tend to be slow unless the dictionary admits a fast matrix–vector multiply. Let us mention two classes of sparse approximation problems where this property holds. First, many naturally occurring signals are compressible with respect to dictionaries constructed using principles of harmonic analysis [5] (e.g., wavelet coefficients of natural images). This type of structured dictionary often comes with a fast transformation algorithm. Second, in compressive sampling, we typically view Φ as the product of a random observation matrix and a fixed orthogonal matrix that determines a basis in which the signal is sparse. Again, fast multiplication is possible when both the observation matrix and sparsity basis are structured.

Recently, there have been substantial efforts to incorporate more sophisticated signal constraints into sparsity models. In particular, Baraniuk et al. have studied model-based compressive sampling algorithms, which use additional information such as the tree structure of wavelet coefficients to guide reconstruction of signals [6].
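The idealized programs (1)-(3) can be made concrete on a toy instance. The following minimal Python/NumPy sketch (not from the paper; the dictionary, dimensions, and function name are invented for illustration) solves the error-constrained problem (2) by exhaustive enumeration of supports, a brute-force strategy that is viable only for tiny N:

```python
import itertools
import numpy as np

def l0_brute_force(Phi, u, eps):
    """Solve min ||x||_0 subject to ||Phi x - u||_2 <= eps by support enumeration.

    The search is exponential in N, so this is practical only for toy problems.
    """
    m, N = Phi.shape
    for s in range(N + 1):                          # sparsity levels 0, 1, ..., N
        for support in itertools.combinations(range(N), s):
            cols = list(support)
            x = np.zeros(N)
            if cols:
                coef, *_ = np.linalg.lstsq(Phi[:, cols], u, rcond=None)
                x[cols] = coef                      # least squares fit on this subdictionary
            if np.linalg.norm(Phi @ x - u) <= eps:
                return x                            # first feasible x is maximally sparse
    return None

rng = np.random.default_rng(0)
Phi = rng.standard_normal((6, 10))
Phi /= np.linalg.norm(Phi, axis=0)                  # unit-norm columns, as in Section I-A
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -0.8]
x_hat = l0_brute_force(Phi, Phi @ x_true, eps=1e-8)
print(np.flatnonzero(x_hat))                        # expect the support {2, 7}
```

The exponential cost of this enumeration is precisely why the tractable algorithms surveyed in Sections II and III matter.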
B. Major Algorithmic Approaches

There are at least five major classes of computational techniques for solving sparse approximation problems.

- Greedy pursuit. Iteratively refine a sparse solution by successively identifying one or more components that yield the greatest improvement in quality [7].
- Convex relaxation. Replace the combinatorial problem with a convex optimization problem. Solve the convex program with algorithms that exploit the problem structure [1].
- Bayesian framework. Assume a prior distribution for the unknown coefficients that favors sparsity. Develop a maximum a posteriori estimator that incorporates the observation. Identify a region of significant posterior mass [8] or average over most-probable models [9].
- Nonconvex optimization. Relax the ℓ0 problem to a related nonconvex problem and attempt to identify a stationary point [10].
- Brute force. Search through all possible support sets, possibly using cutting-plane methods to reduce the number of possibilities [11, Sec. 3.7–3.8].

This paper focuses on greedy pursuits and convex optimization. These two approaches are computationally practical and lead to provably correct solutions under well-defined conditions. Bayesian methods and nonconvex optimization are based on sound principles, but they do not currently offer theoretical guarantees. Brute force is, of course, algorithmically correct, but it remains plausible only for small-scale problems. Recently, we have also seen interest in heuristic algorithms based on belief-propagation and message-passing techniques developed in the graphical models and coding theory communities [12], [13].

C. Verifying Correctness

Researchers have identified several tools that can be used to prove that sparse approximation algorithms produce optimal solutions to sparse approximation problems. These tools also provide insight into the efficiency of computational algorithms, so the theoretical background merits a summary.

The uniqueness of sparse representations is equivalent to an algebraic condition on submatrices of Φ. Suppose a signal u has two different s-sparse representations x₁ and x₂. Clearly Φ(x₁ − x₂) = 0. In other words, Φ maps a nontrivial 2s-sparse signal to zero. It follows that each s-sparse representation is unique if and only if each 2s-column subdictionary of Φ is injective.

To ensure that sparse approximation is computationally tractable, we need stronger assumptions on Φ. Not only should sparse signals be uniquely determined, but they should be stably determined. Consider a signal perturbation Δu and an s-sparse coefficient perturbation Δx related by Δu = Φ(Δx). Stability requires that ∥Δu∥₂ and ∥Δx∥₂ are comparable. This property is commonly imposed by fiat. We say that the matrix Φ satisfies the restricted isometry property (RIP) of order s with constant δ_s < 1 if

$$(1 - \delta_s)\,\|x\|_2^2 \;\le\; \|\Phi x\|_2^2 \;\le\; (1 + \delta_s)\,\|x\|_2^2 \quad \text{whenever } \|x\|_0 \le s. \tag{5}$$

For sparse approximation, we hope (5) holds for large s. This concept was introduced in the important paper [14]; some refinements appear in [15].

The RIP can be verified using the coherence statistic μ of the matrix Φ, which is defined as

$$\mu = \max_{j \ne k} \;|\langle \varphi_j, \varphi_k \rangle|.$$

An elementary argument [16] via Gershgorin's circle theorem establishes that the RIP constant δ_s ≤ (s − 1)μ. In signal processing applications, it is common that μ ≈ m^(−1/2), so we have nontrivial RIP bounds for s on the order of √m. Unfortunately, no known deterministic matrix yields a substantially better RIP. Early references for coherence include [7] and [17].

Certain random matrices, however, satisfy much stronger RIP bounds with high probability. For Gaussian and Bernoulli matrices, RIP holds when m is proportional to s log(N/s). For more structured matrices, such as a random section of a discrete Fourier transform, RIP often holds when m is proportional to s log^p N for a small integer p. This fact explains the benefit of randomness in compressive sampling. Establishing the RIP for a random matrix requires techniques more sophisticated than the simple coherence arguments; see [14] for discussion.

Recently, researchers have observed that sparse matrices may satisfy a related property, called RIP-1, even when they do not satisfy (5). RIP-1 can also be used to analyze sparse approximation algorithms. Details are given in [18].
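For a small explicit dictionary, the coherence statistic and the Gershgorin bound δ_s ≤ (s − 1)μ are easy to compute directly. A minimal sketch (the Gaussian dictionary and sizes are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
m, N = 64, 256
Phi = rng.standard_normal((m, N))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm columns

# Coherence: largest inner product between distinct columns.
G = Phi.T @ Phi                               # Gram matrix
mu = np.max(np.abs(G - np.eye(N)))
print(f"coherence mu = {mu:.3f}")             # roughly 1/sqrt(m), up to log factors

# Gershgorin bound on the RIP constant: delta_s <= (s - 1) * mu.
for s in (2, 4, 8):
    print(f"s = {s}: delta_s <= {(s - 1) * mu:.3f}")
```

The printed bounds become vacuous once (s − 1)μ ≥ 1, which illustrates why coherence arguments top out near s ≈ √m, as noted above.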
D. Cross-Cutting Issues

Structural properties of the matrix Φ have a substantial impact on the implementation of sparse approximation algorithms. In most applications of interest, the large size or lack of sparseness in Φ makes it impossible to store this matrix (or any substantial submatrix) explicitly in computer memory. Often, however, matrix–vector products involving Φ can be performed efficiently. For example, the cost of these products is O(N log N) when Φ is constructed from Fourier or wavelet bases. For algorithms that solve least squares problems, a fast multiply is particularly important because it allows us to use iterative methods such as LSQR or conjugate gradient (CG). In fact, all the algorithms discussed below can be implemented in a way that requires access to Φ only through matrix–vector products.

Spectral properties of subdictionaries, such as those encapsulated in (5), have additional implications for the computational cost of sparse approximation algorithms. Some methods exhibit fast linear asymptotic convergence because the RIP ensures that the subdictionaries encountered during execution have superb conditioning. Other approaches (for example, interior-point methods) are less sensitive to spectral properties, so they become more competitive when the RIP is less pronounced or the target signal is not particularly sparse.

It is worth mentioning here that most algorithmic papers in sparse reconstruction present computational results only on synthetic test problems. Test problem collections representative of sparse approximation problems encountered in practice are crucial to guiding further development of algorithms. A significant effort in this direction is Sparco [19], a Matlab environment for interfacing algorithms and constructing test problems that also includes a variety of problems gathered from the literature.

II. PURSUIT METHODS

A pursuit method for sparse approximation is a greedy approach that iteratively refines the current estimate for the coefficient vector x by modifying one or several coefficients chosen to yield a substantial improvement in approximating the signal. We begin by describing the simplest effective greedy algorithm, orthogonal matching pursuit (OMP), and summarizing its theoretical guarantees. Afterward, we outline a more sophisticated class of modern pursuit techniques that has shown promise for compressive sampling problems. We briefly discuss iterative thresholding methods, and conclude with some general comments about the role of greedy algorithms in sparse approximation.

A. Orthogonal Matching Pursuit

OMP is one of the earliest methods for sparse approximation. Basic references for this method in the signal processing literature are [20] and [21], but the idea can be traced to 1950s work on variable selection in regression [11]. Fig. 1 contains a mathematical description of OMP; the symbol Φ_S denotes the subdictionary indexed by a set S of columns.

[Fig. 1. Orthogonal matching pursuit.]

In a typical implementation of OMP, the identification step is the most expensive part of the computation. The most direct approach computes the maximum inner product via the matrix–vector multiplication Φ*r_k, which costs O(mN) for an unstructured dense matrix. Some authors have proposed using nearest neighbor data structures to perform the identification query more efficiently [22]. In certain applications, such as projection pursuit regression, the columns of Φ are indexed by a continuous parameter, and identification can be posed as a low-dimensional optimization problem [23].

The estimation step requires the solution of a least squares problem. The most common technique is to maintain a QR factorization of the current subdictionary, which has a marginal cost of O(mk) in the kth iteration. The new residual is a by-product of the least squares problem, so it requires no extra computation.

There are several natural stopping criteria.

- Halt after a fixed number of iterations: k = s.
- Halt when the residual has small magnitude: ∥r_k∥₂ ≤ ε.
- Halt when no column explains a significant amount of energy in the residual: ∥Φ*r_k∥_∞ ≤ ε.

These criteria can all be implemented at minimal cost.

Many related greedy pursuit algorithms have been proposed in the literature; we cannot do them all justice here. Some particularly noteworthy variants include matching pursuit [7], the relaxed greedy algorithm [24], and the ℓ1-penalized greedy algorithm [25].
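Fig. 1's pseudocode does not survive in this transcript, but OMP is compact enough to sketch directly. A minimal NumPy version under the setup above (dense unstructured Φ, identification by a full matrix–vector product, estimation by an exact least squares solve rather than an updated QR factorization; the test data and names are invented):

```python
import numpy as np

def omp(Phi, u, s):
    """Orthogonal matching pursuit: greedily select s columns of Phi.

    Identification: pick the column most correlated with the residual.
    Estimation: re-fit by least squares over all selected columns.
    """
    m, N = Phi.shape
    support = []
    r = u.copy()                                   # initial residual
    for _ in range(s):
        j = int(np.argmax(np.abs(Phi.T @ r)))      # identification step, O(mN)
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], u, rcond=None)
        r = u - Phi[:, support] @ coef             # residual is a least squares by-product
    x = np.zeros(N)
    x[support] = coef
    return x

rng = np.random.default_rng(2)
m, N, s = 50, 200, 5
Phi = rng.standard_normal((m, N))
Phi /= np.linalg.norm(Phi, axis=0)
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
print(np.linalg.norm(omp(Phi, Phi @ x_true, s) - x_true))   # near zero when recovery succeeds
```

A production implementation would replace the repeated lstsq call with the incremental QR update described above.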
B. Guarantees for Simple Pursuits

Run long enough, OMP drives the residual to zero (provided that the dictionary can represent the signal exactly), but the resulting representation hardly qualifies as sparse. Classical analyses of greedy pursuit focus instead on the rate of convergence. Greedy pursuits often converge linearly with a rate that depends on how well the dictionary covers the sphere [7]. For example, OMP offers a geometric decay estimate for the residual whose rate is governed by a covering parameter of the dictionary; see [21, Sec. 3] for details. Unfortunately, the covering parameter is typically minuscule unless the number of atoms is huge, so this estimate has limited interest.

A second type of result demonstrates that the rate of convergence depends on how well the dictionary expresses the signal of interest [24, eq. (1.9)]. For example, OMP offers the estimate

$$\|u - \Phi x_k\|_2 \;\le\; \|u\|_{\mathcal{D}}\; k^{-1/2},$$

where the dictionary norm ∥·∥_D is typically small when its argument has a good sparse approximation. For further improvements on this estimate, see [26]. This bound is usually superior to the exponential rate estimate above, but it can be disappointing for signals with excellent sparse approximations.

Subsequent work established that greedy pursuit produces near-optimal sparse approximations with respect to incoherent dictionaries [22], [27]. For example, if μs ≤ 1/3, then

$$\|u - \Phi x_s\|_2 \;\le\; \sqrt{1 + 6s}\;\|u - u_s\|_2,$$

where u_s denotes the best approximation of u as a linear combination of s columns from Φ. See [28]–[30] for related results. Finally, when Φ is sufficiently random, OMP provably recovers s-sparse signals when m and the other parameters are sufficiently large [31], [32].

C. Contemporary Pursuit Methods

For many applications, OMP does not offer adequate performance, so researchers have developed more sophisticated pursuit methods that work better in practice and yield essentially optimal theoretical guarantees. These techniques depend on several enhancements to the basic greedy framework: 1) selecting multiple columns per iteration; 2) pruning the set of active columns at each step; 3) solving the least squares problems iteratively; and 4) theoretical analysis using the RIP bound (5). Although modern pursuit methods were developed specifically for compressive sampling problems, they also offer attractive guarantees for sparse approximation.

There are many early algorithms that incorporate some of these features. For example, stagewise orthogonal matching pursuit (StOMP) [33] selects multiple columns at each step. The regularized orthogonal matching pursuit algorithm [34], [35] was the first greedy technique whose analysis was supported by a RIP bound (5). For historical details, we refer the reader to the discussion in [36, Sec. 7].

Compressive sampling matching pursuit (CoSaMP) [36] was the first algorithm to assemble these ideas to obtain essentially optimal performance guarantees. Dai and Milenkovic describe a similar algorithm, called subspace pursuit, with equivalent guarantees [37]. Other natural variants are described in [38, App. A.2]. Because of space constraints, we focus on the CoSaMP approach.

Fig. 2 describes the basic CoSaMP procedure. The operator [·]_s denotes the restriction of a vector to its s components largest in magnitude (ties broken lexicographically), while supp(·) denotes the support of the vector, i.e., the set of nonzero components. The natural value for the tuning parameter is 1, but empirical refinement may be valuable in applications [39].

[Fig. 2. Compressive sampling matching pursuit.]

Both the practical performance and theoretical analysis of CoSaMP require the dictionary to satisfy the RIP (5) of order 2s with a sufficiently small constant. Of course, these methods can be applied without the RIP, but the behavior is unpredictable. A heuristic for identifying the maximum sparsity level is to require that s be on the order of m / log N.

Under the RIP hypothesis, each iteration of CoSaMP reduces the approximation error by a constant factor until it approaches its minimal value. To be specific, suppose that the signal satisfies

$$u = \Phi x + e \tag{6}$$

for an unknown coefficient vector x and a noise term e. If we run the algorithm for a sufficient number of iterations, the output x̂ satisfies an error bound of the form

$$\|x - \hat{x}\|_2 \;\le\; C \left[ s^{-1/2}\, \|x - x_s\|_1 + \|e\|_2 \right], \tag{7}$$

where x_s is the best s-sparse approximation of x and C is a constant. The form of this error bound is optimal [40]. Stopping criteria are tailored to the signals of interest. For example, when the coefficient vector is compressible, the algorithm requires only a logarithmic number of iterations.

Under the RIP hypothesis, each iteration requires a constant number of multiplications with Φ and Φ* to solve the least squares problem. Thus, for a structured dictionary and a compressible signal, the total running time is dominated by a logarithmic number of fast matrix–vector multiplies. In practice, CoSaMP is faster and more effective than OMP for compressive sampling problems, except perhaps in the ultrasparse regime where the number of nonzeros in the representation is very small. CoSaMP is faster but usually less effective than algorithms based on convex programming.
D. Iterative Thresholding

Modern pursuit methods are closely related to iterative thresholding algorithms, which have been studied extensively over the last decade. (See [39] for a current bibliography.) Section III-D describes additional connections with optimization-based approaches.

Among thresholding approaches, iterative hard thresholding (IHT) is the simplest. It seeks an s-sparse representation of a signal via the iteration

$$x^{k+1} = H_s\!\left( x^k + \Phi^*(u - \Phi x^k) \right),$$

where H_s retains the s largest-magnitude components of its argument and zeros out the rest. Blumensath and Davies [41] have established that IHT admits an error guarantee of the form (7) under a RIP hypothesis. For related results on IHT, see [42]. Garg and Khandekar [43] describe a similar method, gradient descent with sparsification, and present an elegant analysis, which is further simplified in [44].

There is empirical evidence that thresholding is reasonably effective for solving sparse approximation problems in practice; see, e.g., [45]. On the other hand, some simulations indicate that simple thresholding techniques behave poorly in the presence of noise [41, Sec. 8].

Very recently, Donoho and Maleki have proposed a more elaborate method, called two-stage thresholding (TST) [39]. They describe this approach as a hybrid of CoSaMP and thresholding, modified with extra tuning parameters. Their work includes extensive simulations meant to identify optimal parameter settings for TST. By construction, these optimally tuned algorithms dominate related approaches with fewer parameters. The discussion in [39] focuses on perfectly sparse, random signals, so the applicability of the approach to signals that are compressible, noisy, or deterministic is unclear.
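A minimal sketch of the IHT iteration above. The damped step τ and the iteration budget are pragmatic assumptions for an unverified random matrix; the cited analyses use a unit step under an RIP hypothesis:

```python
import numpy as np

def hard_threshold(z, s):
    """Keep the s largest-magnitude entries of z and zero out the rest."""
    x = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-s:]
    x[keep] = z[keep]
    return x

def iht(Phi, u, s, tau=0.5, iters=500):
    """IHT iteration: x <- H_s(x + tau * Phi^T (u - Phi x))."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + tau * (Phi.T @ (u - Phi @ x)), s)
    return x

rng = np.random.default_rng(4)
m, N, s = 128, 256, 8
Phi = rng.standard_normal((m, N)) / np.sqrt(m)    # columns have unit norm on average
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
print(np.linalg.norm(iht(Phi, Phi @ x_true, s) - x_true))   # small when RIP-type conditions hold
```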
E. Commentary

Greedy pursuit methods have often been considered naive, in part because there are contrived examples where the approach fails spectacularly; see [1, Sec. 2.3.2]. However, recent research has clarified that greedy pursuits succeed empirically and theoretically in many situations where convex relaxation works. In fact, the boundary between greedy methods and convex relaxation methods is somewhat blurry. The greedy selection technique is closely related to dual coordinate-ascent algorithms, while certain methods for convex relaxation, such as least-angle regression [46] and homotopy [47], use a type of greedy selection at each iteration. We can make certain general observations, however. Greedy pursuits, thresholding, and related methods (such as homotopy) can be quite fast, especially in the ultrasparse regime. Convex relaxation algorithms are more effective at solving sparse approximation problems in a wider variety of settings, such as those in which the signal is not very sparse and heavy observational noise is present.

Greedy techniques have several additional advantages that are important to recognize. First, when the dictionary contains a continuum of elements (as in projection pursuit regression), convex relaxation may lead to an infinite-dimensional primal problem, while the greedy approach reduces sparse approximation to a sequence of simple 1-D optimization problems. Second, greedy techniques can incorporate constraints that do not fit naturally into convex programming formulations. For example, the data stream community has proposed efficient greedy algorithms for computing near-optimal histograms and wavelet-packet approximations from compressive samples [4]. More recently, it has been shown that CoSaMP can be modified to enforce tree-like constraints on wavelet coefficients. Extensions to simultaneous sparse approximation problems have also been developed [6]. This is an exciting and important line of work.

At this point, it is not fully clear what role greedy pursuit algorithms will ultimately play in practice. Nevertheless, this strand of research has led to new tools and insights for analyzing other types of algorithms for sparse approximation, including the iterative thresholding and model-based approaches above.

III. OPTIMIZATION

Another fundamental approach to sparse approximation replaces the combinatorial ℓ0 function in the mathematical programs from Section I-A with the ℓ1-norm, yielding optimization problems that admit tractable algorithms. In a concrete sense [48], the ℓ1-norm is the closest convex function to the ℓ0 function, so this relaxation is quite natural. The convex form of the equality-constrained problem (1) is

$$\min_x \|x\|_1 \quad \text{subject to} \quad \Phi x = u, \tag{8}$$

while the mixed formulation (4) becomes

$$\min_x \; \tfrac{1}{2}\|\Phi x - u\|_2^2 + \lambda \|x\|_1. \tag{9}$$

Here, λ > 0 is a regularization parameter whose value governs the sparsity of the solution: large values typically produce sparser results. It may be difficult to select an appropriate value for λ in advance, since it controls the sparsity indirectly. As a consequence, we often need to solve (9) repeatedly for different choices of this parameter, or to trace systematically the path of solutions as λ decreases toward zero. When λ ≥ ∥Φ*u∥_∞, the solution of (9) is x = 0.

Another variant is the least absolute shrinkage and selection operator (LASSO) formulation [49], which first arose in the context of variable selection:

$$\min_x \|\Phi x - u\|_2 \quad \text{subject to} \quad \|x\|_1 \le \tau. \tag{10}$$

The LASSO is equivalent to (9) in the sense that the path of solutions to (10), parameterized by positive τ, matches the solution path for (9) as λ varies. Finally, we note another common formulation

$$\min_x \|x\|_1 \quad \text{subject to} \quad \|\Phi x - u\|_2 \le \varepsilon \tag{11}$$

that explicitly parameterizes the error norm.
chstep,themethodupdatesordowndatesafactorizationofthesubmatrixofcorrespondstothenonzerocomponentsof.Asimilarmethod[46]isimplementedasSolveLassointheSparseLabtoolbox.Relatedapproachescanbedevelopedfortheformulation(9).Ifwelimitourattentiontovaluesofforwhichfewnonzeros,theactive-set/pivotingapproachisefficient.Thehomotopymethodrequiresabout2matrix–vectormultiplicationsby,toidentifynonzerosintogetherwithoperationsforupdatingthefactor-izationandperformingotherlinearalgebraoperations.ThiscostiscomparablewithOMP.OMPandhomotopyarequitesimilarinthatthesolu-tionisalteredbysystematicallyaddingnonzerocompo-nentstoandupdatingthesolutionofareducedlinearleastsquaresproblem.Ineachcase,thecriterionforselectingcomponentsinvolvestheinnerproductsbetweeninactivecolumnsofandtheresidual.Onenotabledifferenceisthathomotopyoccasionallyallowsfornonzerocomponentsoftoreturntozerostatus.See[46]and[61]forothercomparisons.C.Interior-PointMethodsInterior-pointmethodswereamongthefirstap-proachesdevelopedforsolvingsparseapproximationprob-lemsbyconvexoptimization.Theearlyalgorithms[1],[62]applyaprimal–dualinterior-pointframeworkwheretheinnermostsubproblemsareformulatedaslinearleastsquaresproblemsthatcanbesolvedwithiterativemethods,thusallowingthesemethodstotakeadvantageTroppandWright:ComputationalMethodsforSparseSolutionofLinearInverseProblemsProceedingsoftheIEEE|Vol.98,No.6,June2010 offastmatrix–vectormultiplicationsinvolvingAnimplementationisavailableaspdcotheSparseLabtoolbox.Otherinterior-pointmethodshavebeenproposedexpresslyforcompressivesamplingproblems.Thepaper[63]describesaprimallog-barrierapproachforaquadraticprogrammingreformulationof(9):min subjecttoThetechniquereliesonaspecializedpreconditionerthatallowstheinternalNewtoniterationstobecompletedefficientlywithCG.Themethodisimplementedasthe.The-magicpackage[64]containsaprimallog-barriercodeforthesecond-orderconeformu-lation(11),whichincludestheoptionofsolvingtheinnermostlinearsystemwithCG.Ingeneral,interior-pointmethodsarenotcompetitivewiththegradientmethodsofSectionIII-Donproblemswithverysparsesolutions.Ontheotherhand,theirper-formanceisinsensitivetothesparsityofthesolutionorthevalueoftheregularizationparameter.Interior-pointmethodscanberobustinthesensethattherearenotmanycasesofveryslowperformanceoroutrightfailure,whichsometimesoccurswithotherapproaches.D.GradientMethodsGradient-descentmethods,alsoknownasfirst-ordermethods,areiterativealgorithmsforsolving(9)inwhichthemajoroperationateachiterationistoformthegrad-ientoftheleastsquarestermatthecurrentiterate,viz.,.Manyofthesemethodscomputethenextiterateusingtherulesargmin (12a)(12b)forsomechoiceofscalarparameters.Alter-natively,wecanwritesubproblem(12a)asargmin 12zxk 1k%ð%xkuÞ22þ 
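The piecewise-linear solution path that these pivoting methods trace can be inspected with an off-the-shelf LARS/homotopy implementation. A sketch assuming scikit-learn is installed; its lars_path routine is in the spirit of [46] and [47], not the authors' own code, and the data here are invented:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(6)
m, N, s = 60, 100, 5
Phi = rng.standard_normal((m, N))
Phi /= np.linalg.norm(Phi, axis=0)
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
u = Phi @ x_true

# Trace the entire piecewise-linear LASSO path as the penalty decreases.
alphas, active, coefs = lars_path(Phi, u, method="lasso")
print("breakpoints on the path:", len(alphas))
print("active set at the end of the path:", active)
# Columns of `coefs` are the solutions at successive breakpoints; on this
# noiseless instance, the endpoint should essentially match x_true.
print("endpoint error:", np.linalg.norm(coefs[:, -1] - x_true))
```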
Algorithmsthatcomputestepsofthistypeareknownbysuchlabelsasoperatorsplitting[65],iterativesplittingandthresholding(IST)[66],fixed-pointiteration[67],andsparsereconstructionviaseparableapproximation(SpaRSA)[68].Fig.3showstheframeworkforthisclassofmethods.Standardconvergenceresultsforthesemethods,e.g.,[65,Th.3.4],requirethat2,atightrestrictionthatleadstoslowconvergenceinpractice.Themorepracticalvariantsdescribedin[68]admitsmallervaluesof,providedthatasufficientdecreaseintheobjectivein(9)occursoveraspanofsuccessiveiterations.SomevariantsuseBarzilai–Borweinformulasthatselectvaluesoflyinginthespectrumof.WhentheacceptancetestinStep2,theparameterisincreased(repeatedly,asnecessary)byaconstantfactor.Steplengths1areusedin[67]and[68].Theiteratedhardshrinkagemethodof[69]sets0in(12)andchoosestodoaconditionalminimizationalongthesearchdirection.RelatedapproachesincludeTwIST[70],avariantofISTthatissignificantlyfasterinpractice,andwhichdeviatesfromtheframeworkofFig.3inthatthepreviousiteratealsoentersintothestepcalculation(inthemannerofsuccessiveover-relaxationapproachesforlinearequations).GPSR[71]issimplyagradient-projectionalgorithmfortheconvexquadraticprogramobtainedbysplittingintopositiveandnegativeparts.TheapproachesabovetendtoworkwellonsparsesignalswhenthedictionarysatisfiestheRIP.Often,thenonzerocomponentsofareidentifiedquickly,afterwhichthemethodreducesessentiallytoaniterativemethodforthereducedlinearleastsquaresprobleminthesecomponents.BecauseoftheRIP,theactivesub-matrixiswellconditioned,sothesefinaliteratesconvergequickly.Infact,thesestepsarequitesimilartotheestimationstepofCoSaMP.Thesemethodsbenefitfromwarmstarting,thatis,theworkrequiredtoidentifyasolutioncanbereduceddramaticallywhentheinitialestimateinStep1isclosetothesolution.Thispropertycanbeusedtoamelioratethe Fig.3.Gradient-descentframework.TroppandWright:ComputationalMethodsforSparseSolutionofLinearInverseProblemsVol.98,No.6,June2010|ProceedingsoftheIEEE 
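Subproblem (12a) has a closed-form solution by componentwise soft thresholding, so each iteration costs little beyond the two matrix–vector products in g^k. A minimal iterative soft-thresholding sketch with a constant, conservatively chosen α_k (the safeguards and adaptive step rules of the methods cited just below are deliberately omitted, and the data are invented):

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise solution of min_x 0.5*(x - z)^2 + t*|x|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ist(Phi, u, lam, alpha, iters=1000):
    """Iterative soft thresholding for (9): min 0.5*||Phi x - u||^2 + lam*||x||_1."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = Phi.T @ (Phi @ x - u)                   # gradient of the least squares term
        x = soft_threshold(x - g / alpha, lam / alpha)
    return x

rng = np.random.default_rng(7)
m, N, s = 80, 200, 6
Phi = rng.standard_normal((m, N))
Phi /= np.linalg.norm(Phi, axis=0)
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
u = Phi @ x_true + 0.01 * rng.standard_normal(m)    # mildly noisy observations
alpha = np.linalg.norm(Phi, 2) ** 2                 # safe constant: squared spectral norm
x_hat = ist(Phi, u, lam=0.05, alpha=alpha)
print("support found:", np.flatnonzero(np.abs(x_hat) > 1e-3))  # roughly the true support
```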
oftenpoorperformanceofthesemethodsonproblemsforwhich(9)isnotparticularlysparseortheregularizationparameterissmall.Continuationstrategieshavebeenproposedforsuchcases,inwhichwesolve(9)forade-creasingsequenceofvaluesof,usingtheapproximatesolutionforeachvalueasthestartingpointforthenextsubproblem.Continuationcanbeviewedasacoarse-grained,approximatevariantofthepivotingstrategiesofSectionIII-B,whichtrackindividualchangesintheactivecomponentsofexplicitly.Somecontinuationmethodsaredescribedin[67]and[68].Thoughadaptivestrategiesforchoosingthedecreasingsequenceofvalueshavebeenproposed,thedesignofarobust,practical,andtheoret-icallyeffectivecontinuationalgorithmremainsaninter-estingopenquestion.E.ExtensionsofGradientMethodsSecond-orderinformationcanbeusedtoenhancegrad-ientprojectionapproachesbytakingapproximatereducedNewtonstepsinthesubsetofcomponentsofthatappearstobenonzero.Insomeapproaches[68],[71],thisen-hancementismadeonlyafterthefirst-orderalgorithmisterminatedasameansofremovingthebiasinthefor-mulation(9)introducedbytheregularizationterm.Othermethods[72]applythistechniqueatintermediatestepsofthealgorithm.(Asimilarapproachwasproposedfortherelatedproblemof-regularizedlogisticregressionin[73].)Iterativemethodssuchasconjugategradientcanbeusedtofindapproximatesolutionstothereducedlinearleastsquaresproblems.Thesesubproblemsare,ofcourse,closelyrelatedtotheonesthatariseinthegreedypursuitalgorithmsofSectionII.TheSPGmethodof[74,Sec.4]appliesadifferenttypeofgradientprojectiontotheformulation(10).Thisap-proachtakesstepsalongthenegativegradientoftheleastsquaresobjectivein(10),withsteplengthchosenbyaBarzilai–Borweinformula(withbacktrackingtoenforcesufficientdecreaseoverareferencefunctionvalue),andprojectstheresultingvectorontotheconstraintset.Sincetheultimategoalin[74]istosolve(11)foragivenvalueof,theapproachaboveisembeddedintoascalarequationsolverthatidentifiesthevalueofforwhichthesolutionof(10)coincideswiththesolutionof(11).Animportantrecentlineofworkhasinvolvedapplyingoptimalgradientmethodsforconvexminimization[75]–[77]totheformulations(9)and(11).Thesemethodshavemanyvariants,buttheysharethegoaloffindinganap-proximatesolutionthatisascloseaspossibletotheopti-malset(asmeasuredbynorm-distanceorbyobjectivevalue)inagivenbudgetofiterations.(Bycontrast,mostiterativemethodsforoptimizationaimtomakesignificantprogressduringeachindividualiteration.)Optimalgrad-ientmethodstypicallygenerateseveralconcurrentse-quencesofiterates,andtheyhavecomplexsteplengthrulesthatdependonsomepriorknowledge,suchastheLipschitzconstantofthegradient.Specificworksthatapplyoptimalgradientmethodstosparseapproximationinclude[78]–[80].Thesemethodsmayperformbetterthansimplegradientmethodswhenappliedtocompressiblesignals.Weconcludethissectionbymentioningthedualformulationof(9) 
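As a complement to the simple gradient iteration sketched earlier, here is a minimal accelerated variant in the spirit of the optimal gradient methods discussed above; the momentum recursion follows [79], while the step choice, data, and iteration budget are illustrative assumptions:

```python
import numpy as np

def fista(Phi, u, lam, iters=300):
    """Accelerated soft-thresholding in the spirit of [79] for problem (9)."""
    L = np.linalg.norm(Phi, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(iters):
        w = z - Phi.T @ (Phi @ z - u) / L         # gradient step at the auxiliary point
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(8)
m, N, s = 80, 200, 6
Phi = rng.standard_normal((m, N))
Phi /= np.linalg.norm(Phi, axis=0)
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
print(np.flatnonzero(np.abs(fista(Phi, Phi @ x_true, lam=0.02)) > 1e-3))
```

The extrapolation step is the only change from plain IST, yet it improves the worst-case objective decrease from O(1/k) to O(1/k²).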
REFERENCES

[1] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Rev., vol. 43, no. 1, pp. 129–159, 2001.
[2] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete Fourier information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
[3] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
[4] S. Muthukrishnan, Data Streams: Algorithms and Applications. Boston, MA: Now Publishers, 2005.
[5] D. L. Donoho, M. Vetterli, R. A. DeVore, and I. Daubechies, "Data compression and harmonic analysis," IEEE Trans. Inf. Theory, vol. 44, no. 6, pp. 2433–2452, Oct. 1998.
[6] R. G. Baraniuk, V. Cevher, M. Duarte, and C. Hegde, "Model-based compressive sensing," 2008, submitted for publication.
[7] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397–3415, Dec. 1993.
[8] D. Wipf and B. Rao, "Sparse Bayesian learning for basis selection," IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2153–2164, Aug. 2004.
[9] P. Schniter, L. C. Potter, and J. Ziniel, "Fast Bayesian matching pursuit: Model uncertainty and parameter estimation for sparse linear models," IEEE Trans. Signal Process., 2008, submitted for publication.
[10] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Process. Lett., vol. 14, no. 10, pp. 707–710, Oct. 2007.
[11] A. J. Miller, Subset Selection in Regression, 2nd ed. London, U.K.: Chapman and Hall, 2002.
[12] D. Baron, S. Sarvotham, and R. G. Baraniuk, "Bayesian compressive sensing via belief propagation," IEEE Trans. Signal Process., vol. 58, no. 1, pp. 269–280, Jan. 2010.
[13] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proc. Nat. Acad. Sci., vol. 106, no. 45, pp. 18914–18919, 2009.
[14] E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, Dec. 2006.
[15] S. Foucart and M.-J. Lai, "Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1," Appl. Comput. Harmonic Anal., vol. 26, no. 3, pp. 395–407, 2009.
[16] D. L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization," Proc. Nat. Acad. Sci., vol. 100, pp. 2197–2202, Mar. 2003.
[17] D. L. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition," IEEE Trans. Inf. Theory, vol. 47, no. 7, pp. 2845–2862, Nov. 2001.
[18] R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, and M. Strauss, "Combining geometry and combinatorics: A unified approach to sparse signal recovery," in Proc. 46th Annu. Allerton Conf. Commun. Control Comput., 2008, pp. 798–805.
[19] E. van den Berg, M. P. Friedlander, G. Hennenfent, F. Herrmann, R. Saab, and O. Yilmaz, "Sparco: A testing framework for sparse reconstruction," ACM Trans. Math. Softw., vol. 35, no. 4, pp. 1–16, 2009.
[20] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proc. 27th Annu. Asilomar Conf. Signals Syst. Comput., Nov. 1993, vol. 1, pp. 40–44.
[21] G. Davis, S. Mallat, and M. Avellaneda, "Adaptive greedy approximation," J. Constr. Approx., vol. 13, pp. 57–98, 1997.
[22] A. C. Gilbert, M. Muthukrishnan, and M. J. Strauss, "Approximation of functions over redundant dictionaries using coherence," in Proc. 14th Annu. ACM-SIAM Symp. Discrete Algorithms, Jan. 2003.
[23] J. H. Friedman and W. Stuetzle, "Projection pursuit regression," J. Amer. Stat. Assoc., vol. 76, no. 376, pp. 817–823, Dec. 1981.
[24] R. DeVore and V. N. Temlyakov, "Some remarks on greedy algorithms," Adv. Comput. Math., vol. 5, pp. 173–187, 1996.
[25] C. Huang, G. Cheang, and A. R. Barron, "Risk of penalized least-squares, greedy selection, and ℓ1-penalization for flexible function libraries," Ann. Stat., 2008, submitted for publication.
[26] A. R. Barron, A. Cohen, R. A. DeVore, and W. Dahmen, "Approximation and learning by greedy algorithms," Ann. Stat., vol. 36, no. 1, pp. 64–94, 2008.
[27] J. A. Tropp, "Greed is good: Algorithmic results for sparse approximation," IEEE Trans. Inf. Theory, vol. 50, no. 10, pp. 2231–2242, Oct. 2004.
[28] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit," J. Signal Process., vol. 86, Special Issue on Sparse Approximations in Signal and Image Processing, pp. 572–588, Apr. 2006.
[29] D. L. Donoho, M. Elad, and V. N. Temlyakov, "Stable recovery of sparse overcomplete representations in the presence of noise," IEEE Trans. Inf. Theory, vol. 52, no. 1, pp. 6–18, Jan. 2006.
[30] T. Zhang, "On the consistency of feature selection using greedy least squares regression," J. Mach. Learning Res., vol. 10, pp. 555–568, 2009.
[31] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
[32] A. Fletcher and S. Rangan, "Orthogonal matching pursuit from noisy random measurements: A new analysis," in Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2009, p. 23.
[33] D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, "Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit (StOMP)," Stanford Univ., Palo Alto, CA, Stat. Dept. Tech. Rep. 2006-02, Mar. 2006.
[34] D. Needell and R. Vershynin, "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit," Found. Comput. Math., vol. 9, no. 3, pp. 317–334, 2009.
[35] D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit," IEEE J. Sel. Topics Signal Process., vol. 4, no. 2, pp. 310–316, Apr. 2009.
[36] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Appl. Comput. Harmonic Anal., vol. 26, no. 3, pp. 301–321, 2009.
[37] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing: Closing the gap between performance and complexity," IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230–2249, May 2009.
[38] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," California Inst. Technol., Pasadena, CA, ACM Rep. 2008-01, 2008.
[39] A. Maleki and D. Donoho, "Optimally tuned iterative reconstruction algorithms for compressed sensing," Sep. 2009. [Online]. Available: arXiv:0909.0777
[40] A. Cohen, W. Dahmen, and R. DeVore, "Compressed sensing and best k-term approximation," J. Amer. Math. Soc., vol. 22, no. 1, pp. 211–231, 2009.
[41] T. Blumensath and M. Davies, "Iterative hard thresholding for compressed sensing," Appl. Comput. Harmonic Anal., vol. 27, no. 3, pp. 265–274, 2009.
[42] A. Cohen, R. A. DeVore, and W. Dahmen, "Instance-optimal decoding by thresholding in compressed sensing," 2008.
[43] R. Garg and R. Khandekar, "Gradient descent with sparsification: An iterative algorithm for sparse recovery with restricted isometry property," in Proc. 26th Annu. Int. Conf. Mach. Learn., Montreal, QC, Canada, Jun. 2009.
[44] R. Meka, P. Jain, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," [Online]. Available: arXiv:0909.5457
[45] J.-L. Starck, M. Elad, and D. L. Donoho, "Redundant multiscale transforms and their application for morphological component analysis," J. Adv. Imaging Electron Phys., vol. 132, pp. 287–348, 2004.
[46] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," Ann. Stat., vol. 32, no. 2, pp. 407–499, 2004.
[47] M. R. Osborne, B. Presnell, and B. Turlach, "A new approach to variable selection in least squares problems," IMA J. Numer. Anal., vol. 20, pp. 389–403, 2000.
[48] R. Gribonval and M. Nielsen, "Highly sparse representations from dictionaries are unique and independent of the sparseness measure," Aalborg Univ., Aalborg, Denmark, Tech. Rep., Oct. 2003.
[49] R. Tibshirani, "Regression shrinkage and selection via the LASSO," J. R. Stat. Soc. B, vol. 58, pp. 267–288, 1996.
[50] J.-J. Fuchs, "On sparse representations in arbitrary redundant bases," IEEE Trans. Inf. Theory, vol. 50, no. 6, pp. 1341–1344, Jun. 2004.
[51] J. A. Tropp, "Just relax: Convex programming methods for identifying sparse signals in noise," IEEE Trans. Inf. Theory, vol. 52, no. 3, pp. 1030–1051, Mar. 2006.
[52] J. A. Tropp, "On the conditioning of random subdictionaries," Appl. Comput. Harmonic Anal., vol. 25, pp. 1–24, 2008.
[53] E. J. Candès and Y. Plan, "Near-ideal model selection by ℓ1 minimization," Ann. Stat., vol. 37, no. 5A, pp. 2145–2177, 2009.
[54] J. A. Tropp, "Norms of random submatrices and sparse approximation," C. R. Acad. Sci. Paris Ser. I Math., vol. 346, pp. 1271–1274, 2008.
[55] E. J. Candès, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Commun. Pure Appl. Math., vol. 59, pp. 1207–1223, 2006.
[56] R. Gribonval and M. Nielsen, "Sparse representations in unions of bases," IEEE Trans. Inf. Theory, vol. 49, no. 12, pp. 3320–3325, Dec. 2003.
[57] M. Rudelson and R. Vershynin, "On sparse reconstruction from Fourier and Gaussian measurements," Commun. Pure Appl. Math., vol. 61, no. 8, pp. 1025–1045, 2008.
[58] Y. Zhang, "On the theory of compressive sensing by ℓ1 minimization: Simple derivations and extensions," Rice Univ., Houston, TX, CAAM Tech. Rep. TR08-11, 2008.
[59] D. Donoho and J. Tanner, "Counting faces of randomly projected polytopes when the projection radically lowers dimensions," J. Amer. Math. Soc., vol. 22, no. 1, pp. 1–53, 2009.
[60] B. Hassibi and W. Xu, "On sharp performance bounds for robust sparse signal recovery," in Proc. IEEE Symp. Inf. Theory, Seoul, Korea, 2009, pp. 493–497.
[61] D. L. Donoho and Y. Tsaig, "Fast solution of ℓ1-norm minimization problems when the solution may be sparse," IEEE Trans. Inf. Theory, vol. 54, no. 11, pp. 4789–4812, Nov. 2008.
[62] M. A. Saunders, "PDCO: Primal-dual interior-point method for convex objectives," Syst. Optim. Lab., Stanford Univ., Stanford, CA, Nov. 2002.
[63] S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, "An interior-point method for large-scale ℓ1-regularized least squares," IEEE J. Sel. Topics Signal Process., vol. 1, no. 4, pp. 606–617, Dec. 2007.
[64] E. Candès and J. Romberg, "ℓ1-MAGIC: Recovery of sparse signals via convex programming," California Inst. Technol., Pasadena, CA, Tech. Rep., Oct. 2005.
[65] P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," Multiscale Model. Simul., vol. 4, no. 4, pp. 1168–1200, 2005.
[66] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Commun. Pure Appl. Math., vol. LVII, pp. 1413–1457, 2004.
[67] E. T. Hale, W. Yin, and Y. Zhang, "A fixed-point continuation method for ℓ1-minimization: Methodology and convergence," SIAM J. Optim., vol. 19, pp. 1107–1130, 2008.
[68] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, "Sparse reconstruction by separable approximation," IEEE Trans. Signal Process., vol. 57, no. 8, pp. 2479–2493, Aug. 2009.
[69] K. Bredies and D. A. Lorenz, "Linear convergence of iterative soft-thresholding," SIAM J. Sci. Comput., vol. 30, no. 2, pp. 657–683, 2008.
CombettesandV.R.Wajs,Signalrecoverybyproximalforward-backwardsplitting,g,MultiscaleModel.Simul.vol.4,no.4,pp.1168–1200,2005.[66]I.Daubechies,M.Defriese,andC.DeMol,Aniterativethresholdingalgorithmforlinearinverseproblemswithasparsityconstraint,,Commun.PureAppl.Math.,vol.LVII,pp.1413–1457,2004.[67]E.T.Hale,W.Yin,andY.Zhang,Afixed-pointcontinuationmethod-minimization:Methodologyandconvergence,ce,SIAMJ.Optim.,vol.19,pp.1107–1130,2008.[68]S.J.Wright,R.D.Nowak,andM.A.T.Figueiredo,Sparsereconstructionbyseparableapproximation,on,IEEETrans.SignalProcess.,vol.57,no.8,pp.2479–2493,Aug.2009.[69]K.BrediesandD.A.Lorenz,LinearconvergenceofiterativeTroppandWright:ComputationalMethodsforSparseSolutionofLinearInverseProblemsVol.98,No.6,June2010|ProceedingsoftheIEEE soft-thresholding,g,SIAMJ.Sci.Comput.vol.30,no.2,pp.657–683,2008.[70]J.M.Bioucas-DiasandM.A.T.Figueiredo,AnewTwIST:Two-stepiterativeshrinking/thresholdingalgorithmsforimagerestoration,n,IEEETrans.ImageProcess.vol.16,no.12,pp.2992–3004,Dec.2007.[71]M.A.T.Figueiredo,R.D.Nowak,andS.J.Wright,Gradientprojectionforsparsereconstruction:Applicationtocompressedsensingandotherinverseproblems,,IEEEJ.Sel.TopicsSignalProcess.,vol.1,no.4,pp.586–597,Dec.2007.[72]Z.Wen,W.Yin,D.Goldfarb,andY.Zhang,Afastalgorithmsforsparsereconstructionbasedonshrinkage,subspaceoptimization,andcontinuation,n,RiceUniv.,Houston,TX,CAAMTech.Rep.09-01,Jan.2009.[73]W.Shi,G.Wahba,S.J.Wright,K.Lee,R.Klein,andB.Klein,LASSO-Patternsearchalgorithmwithapplicationtoopthalmologydata,a,Stat.Interface,vol.1,pp.137–153,Jan.2008.[74]E.vandenBergandM.P.Friedlander,ProbingtheParetofrontierforbasispursuitsolutions,ons,SIAMJ.Sci.Comput.vol.31,no.2,pp.890–912,2008.[75]Y.Nesterov,AmethodforunconstrainedconvexproblemwiththerateofconvergenceceDokladyANSSSR,vol.269,pp.543–547,1983.[76]Y.Nesterov,IntroductoryLecturesonConvexOptimization:ABasicCourse.Norwell,MA:Kluwer,2004.[77]A.NemirovskiandD.B.Yudin,ProblemComplexityandMethodEfficiencyinOptimization.NewYork:Wiley,1983.[78]Y.Nesterov,v,Gradientmethodsforminimizingcompositeobjectivefunction,,CatholicUniv.Louvain,Louvain,Belgium,COREDiscussionPaper2007/76,Sep.2007.[79]A.BeckandM.Teboulle,Afastiterativeshrinkage-thresholdalgorithmforlinearinverseproblems,,SIAMJ.ImagingSci.vol.2,pp.183–202,2009.[80]S.Becker,J.Bobin,andE.J.CandeNESTA:Afastandaccuratefirst-ordermethodforsparserecovery,y,Apr.2009.[Online].Available:arXiv:0904.3367[81]M.P.FriedlanderandM.A.Saunders,Active-setapproachestobasispursuitdenoising,,inTalkSIAMOptim.Meeting,Boston,MA,May2008.ABOUTTHEAUTHORS JoelA.Tropp(Member,IEEE)receivedtheB.A.degreeinPlanIILiberalArtsHonorsandtheB.S.degreeinmathematicsfromtheUniversityofTexasatAustinin1999.HecontinuedhisgraduatestudiesincomputationalandappliedmathematicsatUT-AustinwherehereceivedtheM.S.degreein2001andthePh.D.degreein2004.HejoinedtheMathematicsDepartment,Uni-versityofMichigan,AnnArbor,asaResearchAssistantProfessorin2004,andhewasappointedT.H.HildebrandtResearchAssistantProfessorin2005.SinceAugust2007,hehasbeenanAssistantProfessorofApplied&ComputationalMathematicsattheCaliforniaInstituteofTechnology,Pasadena.Dr.Tropp’sresearchhasbeensupportedbytheNationalScienceFoundation(NSF)GraduateFellowship,theNSFMathematicalSciencesPostdoctoralResearchFellowship,andtheOfficeofNavalResearch(ONR)YoungInvestigatorAward.Heisalsoarecipientofthe2008PresidentialEarlyCareerAwardforScientistsandEngineers(PECASE),andthe2010AlfredP.SloanResearchFellowship. 
Stephen J. Wright received the B.Sc. (honors) and Ph.D. degrees from the University of Queensland, Brisbane, Qld., Australia, in 1981 and 1984, respectively.

After holding positions at North Carolina State University, Argonne National Laboratory, and the University of Chicago, he joined the Computer Sciences Department, University of Wisconsin–Madison, as a Professor in 2001. His research interests include theory, algorithms, and applications of computational optimization.

Dr. Wright is Chair of the Mathematical Programming Society and serves on the Board of Trustees of the Society for Industrial and Applied Mathematics (SIAM). He has served on the editorial boards of the SIAM Journal on Optimization, the SIAM Journal on Scientific Computing, SIAM Review, and Mathematical Programming, Series A and B (the last as Editor-in-Chief).