
13 Sparse Codes and Spikes

Bruno A. Olshausen

Introduction

In order to make progress toward understanding the sensory coding strategies employed by the cortex, it will be necessary to draw upon guiding principles that pro- [...] -sentation that reflects the underlying causal structure of the images (Barlow, 1961; 1989).¹ I shall show here that when a sparse, independent code is sought for time-varying natural images, the basis functions that emerge resemble the receptive field properties of cortical simple-cells in both space and time. Moreover, the model yields a representation of time-varying images in terms of sparse, spike-like events. It is suggested that the spike trains of sensory neurons essentially serve as a sparse code in time, which in turn forms a more efficient and meaningful representation of image structure. Thus, a single principle may be able to account for both the receptive field properties of neurons and the spiking nature of neural activity.

¹ Although it is not possible in general to achieve complete independence with the simple linear model we propose here, we can nevertheless seek to reduce statistical dependencies as much as possible over both space (i.e., across neurons) and time.

The first part of this chapter presents the basic generative image model for static images, and discusses how to relate the basis functions and sparse activities of the model to neural receptive fields and activities. The second part applies the model to time-varying images and shows how space-time receptive fields and spike-like representations emerge from this process. Finally, I shall discuss how the model may be tested and how it would need to be further modified in order to be regarded as a fully neurobiologically plausible model.

Sparse Coding of Static Images

Image model

In previous work (Olshausen & Field, 1997), we described a model of V1 simple-cells in terms of a linear generative model of images (figure 13.1a). According to this model, images are described in terms of a linear superposition of basis functions plus noise:

    I(x,y) = \sum_i a_i \phi_i(x,y) + \nu(x,y)    (13.1)

An image is thus represented by a set of coefficient values, a_i, which are taken to be analogous to the activities of V1 neurons. Importantly, the basis set is overcomplete, meaning that there are more basis functions (and hence more a_i's) than effective dimensions in the images. Overcompleteness in the representation is important because it allows for the joint space of position, orientation, and spatial-frequency to be tiled smoothly without artifacts (Simoncelli et al., 1992). More generally though, it allows for a greater degree of flexibility in the representation, as there is no reason to believe a priori that the number of causes for images is less than or equal to the number of pixels (Lewicki & Sejnowski, 2000).

Figure 13.1. Image model. a, Images of the environment are modeled as a linear superposition of basis functions, \phi_i, whose amplitudes are given by the coefficients a_i. b, The prior probability distribution over the coefficients is peaked at zero with heavy tails as compared to a Gaussian of the same variance (overlaid as dashed line). Such a distribution would result from a sparse activity distribution over the coefficients, as depicted in c.

With non-zero noise, \nu, the correspondence between images and coefficient values is probabilistic; that is, some solutions are more probable than others. Moreover, when the basis set is overcomplete, there are an infinite number of solutions for the coefficients in equation 13.1 (even with zero noise), all of which describe the image with equal probability. This degeneracy in the representation is resolved by imposing a prior probability distribution over the coefficients. The particular form of the prior imposed in our model is one that favors an interpretation of images in terms of sparse, independent events:

    P(\mathbf{a}) = \prod_i P(a_i)    (13.2)

    P(a_i) \propto e^{-S(a_i)}    (13.3)

where S is a non-convex function that shapes P(a_i) so as to have the requisite "sparse" form, i.e., peaked at zero with heavy tails, or positive kurtosis, as shown in figure 13.1b.
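To make the generative model concrete, here is a minimal numerical sketch of equation (13.1) and the heavy-tailed prior of figure 13.1. All sizes, the 10% activity level, and the noise scale are illustrative assumptions, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the chapter): 64-pixel patches and 128 basis
# functions, i.e., a 2x overcomplete basis.
n_pixels, n_basis = 64, 128
Phi = rng.standard_normal((n_pixels, n_basis))   # columns are basis functions phi_i
Phi /= np.linalg.norm(Phi, axis=0)               # unit L2 norm per column

# Sparse coefficients: most a_i exactly zero, a few large (figure 13.1c).
a = rng.standard_normal(n_basis)
a[rng.random(n_basis) > 0.1] = 0.0               # roughly 10% of units active

# Equation (13.1): image = linear superposition of basis functions plus noise.
nu = 0.01 * rng.standard_normal(n_pixels)
I = Phi @ a + nu

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

# The sparse prior is peaked at zero with heavy tails (positive kurtosis),
# unlike a Gaussian of the same variance (figure 13.1b).
sparse_sample = rng.standard_normal(100_000) * (rng.random(100_000) < 0.1)
sparse_k = excess_kurtosis(sparse_sample)
gauss_k = excess_kurtosis(rng.standard_normal(100_000))
```

The kurtosis comparison mirrors figure 13.1b: the mostly-zero coefficient distribution is strongly leptokurtic, while a Gaussian has excess kurtosis near zero.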
The posterior probability of the coefficients for a given image is then

    P(\mathbf{a}|\mathbf{I}, \theta) \propto P(\mathbf{I}|\mathbf{a}, \theta) \, P(\mathbf{a}|\theta)    (13.4)

    P(\mathbf{I}|\mathbf{a}, \theta) \propto e^{-\frac{\lambda_N}{2} |\mathbf{I} - \Phi \mathbf{a}|^2}    (13.5)

    P(\mathbf{a}|\theta) \propto e^{-\sum_i S(a_i)}    (13.6)

where \Phi is the basis function matrix with columns \phi_i, and \lambda_N is the inverse of the noise variance. \theta denotes the entire set of model parameters: \Phi, \lambda_N, and S.

Since the relation between images and coefficients is probabilistic, there is not a single unique solution for choosing the coefficients to represent a given image. One possibility, for example, is to choose the mean of the posterior distribution P(\mathbf{a}|\mathbf{I}, \theta). This is difficult to compute, though, since it requires some form of sampling from the posterior. The solution we propose here is to choose the coefficients that maximize the posterior distribution (MAP estimate)

    \hat{\mathbf{a}} = \arg\max_{\mathbf{a}} P(\mathbf{a}|\mathbf{I}, \theta)    (13.7)

which is accomplished via gradient ascent on the log-posterior:

    \dot{a}_i \propto \frac{\partial}{\partial a_i} \log P(\mathbf{a}|\mathbf{I}, \theta)    (13.8)

        = \lambda_N \, \phi_i \cdot \mathbf{e} - S'(a_i)    (13.9)

where \mathbf{e} is the residual error between the image and the model's reconstruction of the image, \mathbf{e} = \mathbf{I} - \Phi \mathbf{a}. When S is a non-convex function appropriate for encouraging sparseness, such as S(x) = |x| or S(x) = \log(1 + x^2), its derivative, S', provides a form of non-linear self-inhibition for coefficient values near zero. A recurrent neural network implementation of this differential equation (13.9) is shown in figure 13.2.

Figure 13.2. A simple network implementation of inference. The outputs a_i are driven by a sum of two terms. The first term takes a spatially weighted sum of the current residual image using the functions \phi_i as the weights. The second term applies a non-linear self-inhibition on the outputs according to the derivative of S, that differentially pushes activity towards zero. Shown at right is the derivative of the sparse cost function, S'(a_i).

Learning

The basis functions of the model are adapted by maximizing the average log-likelihood of the images under the model, which is equivalent to minimizing the model's estimate of code length, L:

    L = -\langle \log P(\mathbf{I}|\theta) \rangle    (13.10)

where

    P(\mathbf{I}|\theta) = \int P(\mathbf{I}|\mathbf{a}, \theta) \, P(\mathbf{a}|\theta) \, d\mathbf{a}    (13.11)

L provides an upper bound estimate of the entropy of the images, which in turn provides a lower bound estimate of code length. A learning rule for the basis functions may be obtained via gradient descent on L:

    \Delta \phi_i \propto -\frac{\partial L}{\partial \phi_i}    (13.12)

        = \lambda_N \left\langle \mathbf{e} \, a_i \right\rangle_{P(\mathbf{a}|\mathbf{I}, \theta)}    (13.13)

Thus, the basis functions are updated by a Hebbian learning rule, where the residual error \mathbf{e} constitutes the pre-synaptic input and the coefficients a_i constitute the post-synaptic outputs. Instead of sampling from the full posterior distribution, though, we utilize a simpler approximation in which a single sample is taken at the posterior maximum, and so we have

    \Delta \phi_i \propto \lambda_N \, \mathbf{e} \, \hat{a}_i    (13.14)

The price we pay for this approximation is that the basis functions will grow without bound, since the greater their norm, |\phi_i|, the smaller each a_i will become, thus decreasing the sparseness penalty in (13.8). This trivial solution is avoided by rescaling the basis functions after each learning step (13.14) so that their L2 norm, |\phi_i|, maintains an appropriate level of variance on each corresponding coefficient a_i:

    \phi_i \leftarrow \phi_i \left[ \frac{\langle a_i^2 \rangle}{\sigma^2} \right]^{1/2}    (13.15)

where \sigma is the scaling parameter used in the sparse cost function, S. This method, although an approximation to gradient descent on the true objective, has been shown to yield solutions similar to those obtained with more accurate techniques involving sampling (Olshausen & Millman, 2000).

Does V1 do sparse coding?

When the model is adapted to static, whitened² natural images, the basis functions that emerge resemble the Gabor-like spatial profiles of cortical simple-cell receptive fields (figure 13.3). That is, the functions become spatially localized, oriented, and bandpass (selective to structure at different spatial scales). Because all of these properties emerge purely from the objective of finding sparse, independent components for natural images, the results suggest that the receptive fields of V1 neurons have been designed according to a similar coding principle. The result is quite robust, and has been shown to emerge from other forms of independent components analysis (ICA). Some of these also make an explicit assumption of sparseness (Bell & Sejnowski, 1997; Lewicki & Olshausen, 1999) while others seek only independence among the coefficients, in which case sparseness emerges as part of the result (van Hateren & van der Schaaf, 1998; Olshausen & Millman, 2000).

We are comparing the basis functions to neural receptive fields³ here because they are the feedforward weighting functions used in computing the outputs of the model, a_i (see figure 13.2).
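A minimal numerical sketch of this recurrent computation and one learning step follows, assuming the cost S(a) = log(1 + a^2); the sizes, step parameters, and the unit-norm rescaling (a simplified stand-in for the variance-based rescaling of eq. 13.15) are illustrative assumptions, not the chapter's values.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pixels, n_basis = 64, 128
Phi = rng.standard_normal((n_pixels, n_basis))
Phi /= np.linalg.norm(Phi, axis=0)

# A patch generated from sparse coefficients, so a sparse explanation exists.
a_true = rng.standard_normal(n_basis) * (rng.random(n_basis) < 0.1)
I = Phi @ a_true

# MAP inference by gradient ascent on the log-posterior (eqs. 13.8-13.9),
# with S(a) = log(1 + a^2), so S'(a) = 2a / (1 + a^2).
lam_N = 100.0   # inverse noise variance (illustrative)
eta = 0.001     # integration step for the coefficient dynamics
a = np.zeros(n_basis)
for _ in range(2000):
    e = I - Phi @ a                                      # residual image
    a += eta * (lam_N * (Phi.T @ e) - 2 * a / (1 + a**2))

# For comparison: the dense minimum-norm solution, one of the infinitely many
# exact solutions of the overcomplete system (the degeneracy discussed above).
a_pinv = np.linalg.pinv(Phi) @ I

def l1_over_l2(x):
    return np.abs(x).sum() / np.linalg.norm(x)           # smaller = sparser

# One Hebbian learning step (eq. 13.14), then a unit-norm rescaling standing
# in for eq. (13.15) to keep the basis functions from growing without bound.
e = I - Phi @ a
Phi_new = Phi + 0.01 * lam_N * np.outer(e, a)
Phi_new /= np.linalg.norm(Phi_new, axis=0)
```

The MAP dynamics drive the residual toward zero while keeping most coefficients near zero, so the resulting code is markedly sparser than the minimum-norm solution.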
However, it is important to bear in mind that the outputs are not computed purely via this feedforward weighting function, but also via a non-linear, recurrent computation (13.9), the result of which is to sparsify neural activity. Thus, a neuron in our model would be expected to respond less often than one that simply computes the inner product between a spatial weighting function and the image, as shown in figure 13.4a.

² Whitening removes second-order correlations due to the power spectrum of natural images, and it approximates the type of filtering performed by the retina (see Atick & Redlich, 1992).

³ It should be noted that the term "receptive field" is not well-defined, even among physiologists. Oftentimes it is taken to mean the feedforward, linear weighting function of a neuron. But in reality, the measured receptive field of a neuron reflects the sum total of all dendritic non-linearities, output non-linearities, as well as recurrent computations due to horizontal connections and top-down feedback from other neurons.

Figure 13.3. Basis functions learned from static natural images. Shown is a set of 200 basis functions which were adapted to image patches according to equations (13.14) and (13.15). Initial conditions were completely random. The basis set is approximately 2x overcomplete, since the images occupy only about 3/4 of the dimensionality of the input space. (See Olshausen & Field, 1997, for simulation details.)

How could one tell if V1 neurons were actively sparsifying their activity according to the model? One possibility is to measure a neuron's receptive field via reverse correlation, using an artificial image ensemble such as white noise, and then use this measured receptive field to predict the response of the neuron to natural images via convolution. If neural activities were being sparsified as in the model, then one would expect the actual responses obtained with natural images to be non-linearly related to those predicted from convolution, as shown in figure 13.4c. The net effect of this non-linearity is that it tends to suppress responses where the basis function does not match well with the image, and it amplifies responses where the basis function does match well. This form of non-linearity is qualitatively consistent with the "expansive power-function" contrast response non-linearity observed in simple cells (Albrecht & Hamilton, 1982; Albrecht & Geisler, 1991). Note however that this response property emerges from the sparse prior in our model, rather than having been assumed as an explicit part of the response function. Whether or not this response characteristic is due to the kind of dynamics proposed in our model, as opposed to the application of a fixed pointwise non-linearity on the output of the neuron, would require more complicated tests to resolve.

Figure 13.4. Effect of sparsification. a, An example image and its encoding obtained by maximizing the posterior over the coefficients. The representation obtained by simply taking the inner-product of the image with the best linear predicting kernel for each basis function is not nearly as sparse by comparison. b, Shown is one of the learned basis functions (row 6, column 7 of figure 13.3) together with its corresponding "receptive field" as mapped out via reverse correlation with white noise (1440 trials). c, The response obtained by simply convolving this function with the image is non-linearly related to the actual output chosen by posterior maximization. Specifically, small values tend to get suppressed and large values amplified (the solid line passing through the diamonds depicts the mean of this relationship, while the error bars denote the standard deviation).

The above method assumes that the analog valued coefficients in the model (or positively rectified versions of these quantities) correspond to spike rate. However, recent studies have demonstrated that spike rates, which are typically averaged over epochs of 100 ms or more, tend to vastly underestimate the temporal information contained in neural spike trains (Rieke et al., 1997). In addition, we are faced with the fact that the image on the retina is constantly changing due to both self-motion (eye, head and body) and the motions of objects in the world. The model as we have currently formulated it is not well-suited to deal with such dynamics, since the procedure for maximizing the posterior over the coefficients requires a recurrent computation, and it is unlikely that this will complete before the input changes appreciably. In the next section, we show how these issues may be addressed, at least in part, by reformulating the model to deal directly with time-varying images.

Sparse Coding of Time-Varying Images

Image model

We can reformulate the sparse coding model to deal with time-varying images by explicitly modeling the image stream I(x,y,t) in terms of a superposition of space-time basis functions \phi_i(x,y,t). Here we shall assume shift-invariance in the representation over time, so that the same basis function may be used to model structure in the image sequence around any time t', with amplitude a_i(t'). Thus, the image model may be expressed as the convolution of a set of time-varying coefficients, a_i(t), with the basis functions:

    I(x,y,t) = \sum_i \sum_{t'} a_i(t') \, \phi_i(x, y, t - t') + \nu(x,y,t)    (13.16)

             = \sum_i a_i(t) * \phi_i(x,y,t) + \nu(x,y,t)    (13.17)

The model is illustrated schematically in figure 13.5.

Figure 13.5. Image model. A movie I(x,y,t) is modeled as a linear superposition of spatio-temporal basis functions, \phi_i(x,y,t), each of which is localized in time but may be applied at any time within the movie sequence.

The coefficients for a given image sequence are computed as before by maximizing the posterior distribution over the coefficients

    \hat{\mathbf{a}} = \arg\max_{\mathbf{a}} P(\mathbf{a}|\mathbf{I}, \theta)    (13.18)

which is again achieved by gradient ascent, leading to the following differential equation for determining the coefficients:

    \dot{a}_i(t) \propto \frac{\partial}{\partial a_i(t)} \log P(\mathbf{a}|\mathbf{I}, \theta)    (13.19)

        = \lambda_N \left[ \phi_i \star \mathbf{e} \right](t) - S'(a_i(t))    (13.20)

where \star denotes correlation and \mathbf{e} is again the residual between the image sequence and the model's reconstruction. Note however that in order to be considered a causal system, \phi_i(x,y,t) must be zero for t > 0. For now though we shall overlook the issue of causality, and in the discussion we shall consider some ways of dealing with this issue.

This model differs from the ICA (independent components analysis) model for time-varying images proposed earlier by van Hateren and Ruderman (1998) in an important respect: namely, the basis functions are applied to the image sequence in a shift-invariant manner, rather than in a blocked fashion. In van Hateren and Ruderman's ICA model, training data is obtained by extracting blocks of size 12x12 pixels and 12 samples in time from a larger movie, and a set of basis functions is sought that maximizes independence among the coefficients (by seeking extrema of kurtosis) averaged over many such blocks. An image block is described via

    \mathbf{I} = \sum_i a_i \, \phi_i    (13.21)

and the coefficients are computed by multiplying the rows of the pseudo-inverse of \Phi with each block extracted from the image stream (akin to convolution). By contrast, our model assumes shift-invariance among the basis functions, i.e., a basis function may be applied to describe structure occurring at any point in time in the image sequence. In addition, since the basis set is overcomplete, the coefficients may be sparsified, giving rise to a non-linear, spike-like representation that is qualitatively different from that obtained via linear convolution (see "Results from natural movie sequences").

Learning

The objective function for adapting the basis functions is again the code length

    L = -\langle \log P(\mathbf{I}|\theta) \rangle    (13.22)

    P(\mathbf{I}|\theta) = \int P(\mathbf{I}|\mathbf{a}, \theta) \, P(\mathbf{a}|\theta) \, d\mathbf{a}    (13.23)

where now the image likelihood and prior are defined as

    P(\mathbf{I}|\mathbf{a}, \theta) \propto e^{-\frac{\lambda_N}{2} \sum_{x,y,t} \left[ I(x,y,t) - \sum_i a_i(t) * \phi_i(x,y,t) \right]^2}    (13.24)

    P(\mathbf{a}|\theta) \propto e^{-\sum_{i,t} S(a_i(t))}    (13.25)

and \theta refers to the model parameters \phi_i, \lambda_N, and S. By using the same approximation to the true gradient of L discussed in the previous section, the update rule for the basis functions is then

    \Delta \phi_i(x,y,\tau) \propto \lambda_N \left\langle a_i(t) \, e(x, y, t + \tau) \right\rangle_t    (13.26)

Thus, the basis functions are adapted over space and time by Hebbian learning between the time-varying residual image and the time-varying coefficient activities.

Results from natural movie sequences

The model was trained on moving image sequences obtained from Hans van Hateren's natural movie database (http://hlab.phys.rug.nl/vidlib/viddb). The movies were first whitened by a filter that was derived from the inverse spatio-temporal amplitude spectrum, and lowpass filtered with a cutoff at 80% of the Nyquist frequency in space and time (see also Dong & Atick, 1995, for a similar whitening procedure). Training was done in batch mode by loading a 64-frame sequence into memory and randomly extracting a spatial subimage of the same temporal length. The coefficients were fitted to this sequence by maximizing the posterior distribution via eqs. (13.19) and (13.20). The statistics for learning were averaged over ten such subimage sequences, and the basis functions were then updated according to (13.26), again subject to rescaling (13.15). After several hours of training on a 450 MHz Pentium, the solution reached equilibrium.

The results for a set of 96 basis functions, each 8x8 pixels and of length 5 in time, are shown in figure 13.6. Spatially, they share many of the same characteristics of the basis functions obtained previously with static images (figure 13.3).
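The shift-invariant model (13.16)-(13.17) and the coefficient dynamics (13.19)-(13.20) can be sketched numerically as follows, using one spatial dimension for brevity; the sizes, step parameters, and the cost S(a) = log(1 + a^2) are illustrative assumptions, not the chapter's training configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes: 8 pixels (one spatial dimension), 64 frames,
# 4 space-time kernels of 5 frames each.
n_x, n_t, n_basis, k_t = 8, 64, 4, 5

phi = rng.standard_normal((n_basis, n_x, k_t))
phi /= np.sqrt((phi**2).sum(axis=(1, 2), keepdims=True))   # unit-norm kernels

def synthesize(a, phi):
    """Eq. (13.17): I(x,t) = sum_i a_i(t) * phi_i(x,t), temporal convolution."""
    n_basis, n_x, k_t = phi.shape
    I = np.zeros((n_x, a.shape[1] + k_t - 1))
    for i in range(n_basis):
        for x in range(n_x):
            I[x] += np.convolve(a[i], phi[i, x])
    return I

# Spike-like amplitude trains drive the movie (each a_i(t) is mostly zero).
a_true = rng.standard_normal((n_basis, n_t)) * (rng.random((n_basis, n_t)) < 0.05)
I = synthesize(a_true, phi)

# Coefficient dynamics (eqs. 13.19-13.20): the feedforward drive is the
# correlation of each kernel with the residual movie, minus self-inhibition.
lam_N, eta = 20.0, 0.002
a = np.zeros_like(a_true)
for _ in range(1500):
    e = I - synthesize(a, phi)
    for i in range(n_basis):
        drive = np.zeros(n_t)
        for x in range(n_x):
            drive += np.correlate(e[x], phi[i, x], mode="valid")  # [phi_i * e](t)
        a[i] += eta * (lam_N * drive - 2 * a[i] / (1 + a[i]**2))
```

The `mode="valid"` correlation aligns each drive sample with the coefficient a_i(t) whose kernel copy begins at frame t, matching the gradient of the reconstruction error under the full convolution in `synthesize`.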
The main difference is that they now also have a temporal characteristic, such that they tend to translate over time. Thus, the vast majority of the basis functions are direction selective (i.e., their coefficients will respond only to edges moving in one direction), with the high spatial-frequency functions biased toward lower velocities. These properties are typical of the space-time receptive fields of V1 simple-cells (Jones & Palmer, 1989; DeAngelis et al., 1995), and also of those obtained previously with ICA (van Hateren & Ruderman, 1998).

Figure 13.6. Space-time basis functions learned from time-varying natural images. Shown are a set of 96 basis functions arranged into six columns of 16 each. Each basis function is 8x8 pixels in space and 5 frames in time. Each row shows a different basis function, with time proceeding left to right. The translating character of the functions is best viewed as a movie, which may be viewed at http://redwood.ucdavis.edu/bruno/bfmovie/bfmovie.html.

Because the outputs of the model are sparsified over both space and time, the model yields a qualitatively different behavior than linear convolution, as in ICA. Figure 13.7 illustrates this difference by comparing the time-varying coefficients obtained by maximizing the posterior to those obtained by straightforward convolution (similar to the linear prediction discussed in the previous section). The difference is striking in that the sparsified representation is characterized by highly localized, punctate events. Although still analog, it bears a strong resemblance to the spiking nature of neural activity. At present though, this comparison is merely qualitative.

Figure 13.7. Coefficients computed by convolving the basis functions with the image sequence (left) vs. posterior maximization (right) for a 60 frame image sequence (bottom).

Discussion

We have shown in this chapter how both the spatial and temporal response properties of neurons may be understood in terms of a probabilistic model which attempts to describe images in terms of sparse, independent events. When the model is adapted to time-varying natural images, the basis functions converge upon a set of space-time functions which are spatially Gabor-like and translate with time. Moreover, the sparsified representation has a spike-like character, in that the coefficient signals are mostly zero and tend to concentrate their non-zero activity into brief, punctate events. These brief events represent longer spatiotemporal events in the image via the basis functions. The results suggest, then, that both the receptive fields and spiking activity of V1 neurons may be explained in terms of a single principle, that of sparse coding in time.

The interpretation of neural spike trains as a sparse code in time is not new. Most recently, Bialek and colleagues have shown that sensory neurons in the fly visual system, frog auditory system, and the cricket cercal system essentially employ about one spike per "correlation time" to encode time-varying signals in their environment (Rieke et al., 1997). In fact, the image model proposed here is identical to their linear stimulus reconstruction framework used for measuring the mutual information between neural activity and sensory signals. The main contribution of this paper, beyond this previous body of work, is in showing that the particular spatiotemporal receptive field structures of V1 neurons may actually be derived from such sparse, spike-like representations of natural images.
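The qualitative contrast of figure 13.7 can be reproduced in a toy 1-D setting. The chapter's model uses a smooth sparse cost with gradient dynamics; purely for illustration, the sketch below swaps in an L1 penalty solved by iterative soft-thresholding (ISTA), which drives most coefficients exactly to zero, and a hypothetical unit-norm kernel.

```python
import numpy as np

rng = np.random.default_rng(3)

# One hypothetical unit-norm temporal kernel and a signal driven by a few spikes.
k = np.hanning(9)
k /= np.linalg.norm(k)
n_t = 200
a_true = np.zeros(n_t)
a_true[rng.choice(n_t - 9, size=6, replace=False)] = rng.uniform(1.0, 2.0, size=6)
s = np.convolve(a_true, k) + 0.01 * rng.standard_normal(n_t + 8)

# Linear prediction: correlate the kernel with the signal (dense, smeared).
a_lin = np.correlate(s, k, mode="valid")

# Sparsified encoding: ISTA iterations a <- soft(a + eta*corr(k, residual), eta*lam).
lam, eta = 0.2, 0.1        # penalty weight and step size (illustrative)
a = np.zeros(n_t)
for _ in range(500):
    e = s - np.convolve(a, k)
    a = a + eta * np.correlate(e, k, mode="valid")
    a = np.sign(a) * np.maximum(np.abs(a) - eta * lam, 0.0)   # soft threshold
```

The linear prediction `a_lin` smears each underlying spike across the kernel's support, while the sparsified encoding concentrates its activity into a few punctate events, echoing the left vs. right panels of figure 13.7.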
This work also shares much in common with Lewicki's shift-invariant model of auditory signals, discussed in the preceding chapter in this book. The main difference is that Lewicki's model utilizes a much higher degree of overcompleteness, which allows for a more precise alignment of the basis functions with features occurring in natural sounds. Presumably, increasing the degree of overcompleteness in our model would yield even higher degrees of sparsity, and basis functions that are even more specialized for the spatio-temporal features occurring in images. But learning becomes problematic in this case because of the difficulties inherent in properly maximizing or sampling from the posterior distribution over the coefficients. The development of efficient methods for sampling from the posterior is thus an important goal of future work.

Another important yet unresolved issue in implementing the model is how to deal with causality. Currently, the coefficients are computed by taking into account information both in the past and in the future in order to determine their optimal state. But obviously any physical implementation would require that the outputs be computed based only on past information. The fact that the basis functions become two-sided in time (i.e., non-zero values for both negative and positive time) indicates that a coefficient at time t is making a statement about the image structure expected in the future (t' > t). This fact could possibly be exploited in order to make the model predictive. That is, by committing to respond at the present time, based only on what has happened in the past, a unit will be making a prediction about what is to happen a short time in the future. An additional challenge in learning, then, is to adapt an appropriate decision function for determining when a unit should become active, so that each unit serves as a good predictor of future image structure in addition to being sparse.

Acknowledgements

This work benefited from discussions with Mike Lewicki and was supported by NIMH grant R29-MH57921. I am also indebted to Hans van Hateren for making his natural movie database freely available.

References

[1] Albrecht DG, Hamilton DB (1982) Striate cortex of monkey and cat: Contrast response function. Journal of Neurophysiology, 48:217-237.
[2] Atick JJ, Redlich AN (1992) What does the retina know about natural scenes? Neural Computation, 4:196-210.
[3] Barlow HB (1961) Possible principles underlying the transformations of sensory messages. In: Sensory Communication, W.A. Rosenblith, ed., MIT Press, pp. 217-234.
[4] Barlow HB (1989) Unsupervised learning. Neural Computation, 1:295-311.
[5] Baum EB, Moody J, Wilczek F (1988) Internal representations for associative memory. Biological Cybernetics, 59:217-228.
[6] Bell AJ, Sejnowski TJ (1997) The independent components of natural images are edge filters. Vision Research, 37:3327-3338.
[7] DeAngelis GC, Ohzawa I, Freeman RD (1995) Receptive-field dynamics in the central visual pathways. Trends in Neurosciences, 18(10):451-458.
[8] Dong DW, Atick JJ (1995) Temporal decorrelation: a theory of lagged and nonlagged responses in the lateral geniculate nucleus. Network: Computation in Neural Systems, 6:159-178.
[9] Field DJ (1994) What is the goal of sensory coding? Neural Computation, 6:559-601.
[10] Foldiak P (1995) Sparse coding in the primate cortex. In: The Handbook of Brain Theory and Neural Networks, Arbib MA, ed., MIT Press, pp. 895-989.
[11] Lewicki MS, Olshausen BA (1999) Probabilistic framework for the adaptation and comparison of image codes. J. Opt. Soc. Am. A, 16(7):1587-1601.
[12] Lewicki MS, Sejnowski TJ (2000) Learning overcomplete representations. Neural Computation, 12:337-365.
[13] McLean J, Palmer LA (1989) Contribution of linear spatiotemporal receptive field structure to velocity selectivity of simple cells in area 17 of cat. Vision Research, 29(6):675-679.
[14] Mumford D (1994) Neuronal architectures for pattern-theoretic problems. In: Large Scale Neuronal Theories of the Brain, Koch C, Davis JL, eds., MIT Press, pp. 125-152.
[15] Olshausen BA, Field DJ (1997) Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311-3325.
[16] Olshausen BA, Millman KJ (2000) Learning sparse codes with a mixture-of-Gaussians prior. In: Advances in Neural Information Processing Systems, 12, S.A. Solla, T.K. Leen, K.R. Muller, eds., MIT Press, pp. 841-847.
[17] Rieke F, Warland D, de Ruyter van Steveninck R, Bialek W (1997) Spikes: Exploring the Neural Code. MIT Press.
[18] Simoncelli EP, Freeman WT, Adelson EH, Heeger DJ (1992) Shiftable multiscale transforms. IEEE Transactions on Information Theory, 38(2):587-607.
[19] Tadmor Y, Tolhurst DJ (1989) The effect of threshold on the relationship between the receptive field profile and the spatial-frequency tuning curve in simple cells of the cat's striate cortex. Visual Neuroscience, 3:445-454.
[20] van Hateren JH, van der Schaaf A (1998) Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. Royal Soc. Lond. B, 265:359-366.
[21] van Hateren JH, Ruderman DL (1998) Independent component analysis of natural image sequences yields spatio-temporal filters similar to simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:2315-2320.
