Approximate inference in continuous time Gaussian-Jump processes

Manfred Opper, Fakultät Elektrotechnik und Informatik, Technische Universität Berlin, Berlin, Germany, opperm@cs.tu-berlin.de
Andreas Ruttor, Fakultät Elektrotechnik und Informatik, Technische Universität Berlin, Berlin, Germany, andreas.ruttor@tu-berlin.de
Guido Sanguinetti, School of Informatics, University of Edinburgh, G.Sanguinetti@ed.ac.uk

Abstract

We present a novel approach to inference in conditionally Gaussian continuous time stochastic processes, where the latent process is a Markovian jump process. We first consider the case of jump-diffusion processes, where the drift of a linear stochastic differential equation can jump at arbitrary time points. We derive partial differential equations for exact inference and present a very efficient mean field approximation. By introducing a novel lower bound on the free energy, we then generalise our approach to Gaussian processes with arbitrary covariance, such as the non-Markovian RBF covariance. We present results on both simulated and real data, showing that the approach is very accurate in capturing latent dynamics and can be useful in a number of real data modelling tasks.

Introduction

Continuous time stochastic processes are receiving increasing attention within the statistical machine learning community, as they provide a convenient and physically realistic tool for modelling and inference in a variety of real world problems. Both continuous state space [1, 2] and discrete state space [3-5] systems have been considered, with applications ranging from systems biology [6] to modelling motion capture [7]. Within the machine learning community, Gaussian processes (GPs) [8] have proved particularly popular, due to their appealing properties, which allow the infinite dimensional smoothing problem to be reduced to a finite dimensional regression problem. While GPs are indubitably a very successful tool in many pattern recognition tasks, their use is restricted to processes with continuously varying temporal behaviour, which can be a limitation in applications that exhibit inherently non-stationary or discontinuous behaviour.

In this contribution, we consider the state inference and parameter estimation problems in a wider class of conditionally Gaussian (or Gaussian-Jump) processes, where the mean evolution of the GP is determined by the state of a latent (discrete) variable which evolves according to Markovian dynamics. We first consider the special, but important, case where the GP is a Markovian process, i.e. an Ornstein-Uhlenbeck (OU) process. In this case, exact inference can be derived by using a forward-backward procedure. This leads to partial differential equations, whose numerical solution can be computationally expensive; alternatively, a variational approximation leads to an iterative scheme involving only the numerical solution of ordinary differential equations, which is extremely efficient from a computational point of view. We then consider the case of general (non-Markov) GPs coupled to a Markovian latent variable. Inference in this case is intractable, but, by means of a Legendre transform, we can derive a lower bound on the exact free energy, which can be optimised using a saddle point procedure.

1 Conditionally Gaussian Markov Processes

We consider a continuous state stochastic system governed by a linear stochastic differential equation (SDE) with piecewise constant (in time) drift bias which can switch randomly with Markovian dynamics (see e.g. [9] for a good introduction to stochastic processes). For simplicity, we give the derivations for the case when there are only two states in the switching process (i.e. it is a random telegraph process) and the diffusion system is one dimensional; generalisation to more dimensions or more latent states is straightforward. The system can be written as

dx = (A\mu + b - \lambda x)\,dt + \sigma\,dw(t), \qquad \mu(t) \sim \mathrm{TP}(f_\pm),   (1)

where w is the Wiener process with variance \sigma^2 and \mu(t) is a random telegraph process with switching rates f_\pm. Our interest in this type of models is twofold: similar models have found applications in fields like systems biology, where the rapid transitions of regulatory proteins make a switching latent variable a plausible model [6]. At the same time, at least intuitively, model (1) could be considered as an approximation to more complex non-linear diffusion processes, where diffusion near local minima of the potential is approximated by linear diffusion.

Let us assume that we observe the process x at a finite number of time points with i.i.d. noise, giving values

y_i \sim \mathcal{N}(x(t_i), s^2), \qquad i = 1, \ldots, N.

For simplicity, we have assumed that the process itself is observed; nothing would change in what follows if we assumed that the variable y is linearly related to the process (except of course that we would have more parameters to estimate). The problem we wish to address is the inference of the joint posterior over both variables x and \mu at any time within a certain interval, as well as the determination of (a subset of) the parameters and hyperparameters involved in equation (1) and in the observation model.

1.1 Exact state inference

As the system described by equation (1) is a Markovian process, the marginal probability distribution q_\mu(x,t) for both state variables \mu \in \{0,1\} and x of the posterior process can be calculated using a smoothing algorithm similar to the one described in [6]. Based on the Markov property one can show that

q_\mu(x,t) = \frac{1}{Z}\, p_\mu(x,t)\, \psi_\mu(x,t).   (2)

Here p_\mu(x,t) denotes the marginal filtering distribution, while \psi_\mu(x,t) = p(\{y_i \mid t_i > t\} \mid x_t = x, \mu_t = \mu) is the likelihood of all observations after time t under the condition that the process has state (x, \mu) at time t (backward message). The time evolution of the backward message is described by the backward Chapman-Kolmogorov equation for \mu \in \{0,1\} [9]:

\frac{\partial \psi_\mu}{\partial t} + (A\mu + b - \lambda x)\,\frac{\partial \psi_\mu}{\partial x} + \frac{\sigma^2}{2}\,\frac{\partial^2 \psi_\mu}{\partial x^2} = f_{1-\mu}\,\big(\psi_\mu(x,t) - \psi_{1-\mu}(x,t)\big).   (3)

This PDE must be solved backward in time starting at the last observation y_N using the initial condition

\psi_\mu(x, t_N) = p(y_N \mid x(t_N) = x).   (4)

The other observations are taken into account by jump conditions

\psi_\mu(x, t_j^-) = \psi_\mu(x, t_j^+)\, p(y_j \mid x(t_j) = x),   (5)

where \psi_\mu(x, t_k^\pm) are the values of \psi_\mu(x,t) before and after the k-th observation and p(y_j \mid x(t_j) = x) is given by the noise model.

In order to calculate q_\mu(x,t) we need to calculate the filtering distribution p_\mu(x,t), too. Its time evolution is given by the forward Chapman-Kolmogorov equation [9]

\frac{\partial p_\mu}{\partial t} + \frac{\partial}{\partial x}\big[(A\mu + b - \lambda x)\, p_\mu(x,t)\big] - \frac{\sigma^2}{2}\,\frac{\partial^2 p_\mu}{\partial x^2} = f_\mu\, p_{1-\mu}(x,t) - f_{1-\mu}\, p_\mu(x,t).   (6)

We can show that the posterior process q_\mu(x,t) fulfils a similar PDE by calculating its time derivative and using both (3) and (6). By doing so we find

\frac{\partial q_\mu}{\partial t} + \frac{\partial}{\partial x}\big[(A\mu + b - \lambda x + c(x,t))\, q_\mu(x,t)\big] - \frac{\sigma^2}{2}\,\frac{\partial^2 q_\mu}{\partial x^2} = g_\mu(x,t)\, q_{1-\mu}(x,t) - g_{1-\mu}(x,t)\, q_\mu(x,t),   (7)

where

g_\mu(x,t) = \frac{\psi_\mu(x,t)}{\psi_{1-\mu}(x,t)}\, f_\mu   (8)

are time and state dependent posterior jump rates, while the drift

c(x,t) = \sigma^2\, \frac{\partial}{\partial x} \log \psi_\mu(x,t)   (9)

takes the observations into account. It is clearly visible that (7) is also a forward Chapman-Kolmogorov equation. Consequently, the only differences between prior and posterior process are the jump rates for the telegraph process and the drift of the diffusion process x.

1.2 Variational inference

The exact inference approach outlined above gives rise to PDEs which need to be solved numerically in order to estimate the relevant posteriors. For one dimensional GPs this is expensive, but in principle feasible; this work will be deferred to a further publication. Of course, numerical solutions become computationally prohibitive for higher dimensional problems, leading to a need for approximations. We describe here a variational approximation to the joint posterior over the switching process \mu(t) and the diffusion process x(t) which gives an upper bound on the true free energy; it is obtained by making a factorised approximation to the probability over paths (x_{0:T}, \mu_{0:T}) of the form

q(x_{0:T}, \mu_{0:T}) = q_x(x_{0:T})\, q_\mu(\mu_{0:T}),   (10)

where q_x is a pure diffusion process (which can easily be shown to be Gaussian) and q_\mu is a pure jump process. Considering the KL divergence between the original process (1) and the approximating process, and taking into account the conditional structure of the model and equation (10), we obtain the following expression for the Kullback-Leibler (KL) divergence between the true and approximating posteriors:

KL[q \| p] = K_0 - \sum_{i=1}^{N} \langle \log p(y_i \mid x(t_i)) \rangle_{q_x} + \langle KL[q_x \| p(x_{0:T} \mid \mu_{0:T})] \rangle_{q_\mu} + KL[q_\mu \| p(\mu_{0:T})].   (11)

By using the general formula for the KL divergence between two diffusion processes [1], we obtain the following form for the third term in equation (11):

\langle KL[q_x \| p(x_{0:T} \mid \mu_{0:T})] \rangle_{q_\mu} = \int dt\, \frac{1}{2\sigma^2} \Big\{ [\alpha(t)+\lambda]^2 \big[c^2(t) + m^2(t)\big] + [\beta(t)-b]^2 + 2[\alpha(t)+\lambda][\beta(t)-b]\, m(t) + \big(A^2 - 2A[\alpha(t)+\lambda]\, m(t) - 2A[\beta(t)-b]\big)\, q_1(t) \Big\}.   (12)

Here \alpha and \beta are the gain and bias (coefficients of the linear term and constant) of the drift of the approximating diffusion process, m and c^2 are the mean and variance of the approximating process, and q_1(t) is the marginal probability at time t of the switch being on (computed using the approximating jump process). So the KL is the sum of an initial condition part (which can be set to zero) and two other parts involving the KL between a Markovian Gaussian process and a Markovian Gaussian process observed linearly with noise (second and third terms) and the KL between two telegraph processes. The variational E-step iteratively minimises these two parts using recursions of the forward-backward type. Interleaved with this, variational M-steps can be carried out by optimising the variational free energy w.r.t. the parameters; the fixed point equations for this are easily derived and will be omitted here due to space constraints. Evaluation of the Hessian of the free energy w.r.t. the parameters can be used to provide a measure of the associated uncertainty.

1.2.1 Computation of the approximating diffusion process

Minimisation of the second and third term in equation (11) requires finding an approximating Gaussian process. By inspection of equation (12), we see that we are trying to compute the posterior process for a discretely observed Gaussian process with (prior) drift A q_1(t) + b - \lambda x, with the observations being i.i.d. with Gaussian noise. Due to the Markovian nature of the process, its single time marginals can be computed using the continuous time version of the well known forward-backward algorithm [10, 11]. The single time posterior marginal can be decomposed as

q(x(t)) = p(x(t) \mid y_1, \ldots, y_N) = \frac{1}{Z}\, \nu(x(t))\, \psi(x(t)),   (13)

where \nu is the filtered process or forward message, and \psi is the backward message, i.e. the likelihood of future observations conditioned on time t. The recursions are based on the following general ODEs linking mean \hat{m} and variance \hat{c}^2 of a general Gaussian diffusion process with system noise \sigma^2 to the drift coefficients \hat{\alpha} and \hat{\beta} of the respective SDE, which are a consequence of the Fokker-Planck equation for Gaussian processes:

\frac{d\hat{m}}{dt} = \hat{\alpha}\hat{m} + \hat{\beta}, \qquad \frac{d\hat{c}^2}{dt} = 2\hat{\alpha}\hat{c}^2 + \sigma^2.   (14)

The filtered process outside the observations satisfies the forward Fokker-Planck equation of the prior process, so its mean and variance can be propagated using equations (14) with prior drift coefficients \hat{\alpha} = -\lambda and \hat{\beta} = A q_1 + b. Observations are incorporated via the jump conditions

\lim_{t \to t_i^+} \nu(x(t)) \propto p(y_i \mid x(t_i)) \lim_{t \to t_i^-} \nu(x(t)),   (15)

whence the recursions on the means and variances easily follow. Notice that this is much simpler than (discrete time) Kalman filter recursions, as the prior gain is zero in continuous time. Computation of the backward message (smoothing) is analogous; the reader is referred to [10, 11] for further details.

1.2.2 Jump process smoothing

Having computed the approximating diffusion process, we now turn to the updates for the approximating jump process. The KL divergence in equation (11) involves the jump process in two terms: the last term is the KL divergence between the posterior jump process and the prior one, while the third term, which gives the expectation of the KL between the two diffusion processes under the posterior jump, also contains terms involving the jump posterior. The KL divergence between two telegraph processes was calculated in [4]; considering the jump terms coming from equation (12), and adding a Lagrange multiplier to take into account the Master equation fulfilled by the telegraph process, we end up with the following Lagrangian:

L[q_\mu, g, \psi] = KL[q_\mu \| p_{\mathrm{prior}}] + \int dt\, \frac{1}{2\sigma^2} \big(A^2 - 2A[\alpha+\lambda]\,m - 2A[\beta-b]\big)\, q_1(t) + \int dt\, \psi(t) \left[ \frac{dq_1}{dt} + (g_- + g_+)\, q_1 - g_+ \right].   (16)

Notice we use q_1(t) = q_\mu(\mu(t) = 1) to lighten the notation. Functional derivatives w.r.t. the posterior rates g_\pm allow us to eliminate them in favour of the Lagrange multiplier; inserting this into the functional derivatives w.r.t. the marginals q_1(t) gives ODEs involving the Lagrange multiplier and the prior rates only (as well as terms from the diffusion process), which can be solved backward in time from the condition \psi(T) = 0. This allows us to update the rates, and the posterior marginals can then be found in a forward propagation, in a manner similar to [4].

2 Conditionally Gaussian Processes: general case

In this section, we would like to generalise our model to processes of the form

dx = (-\lambda x + A\mu + b)\,dt + df(t),   (17)
where the white noise driving process dw(t) in (1) is replaced by an arbitrary GP df(t)¹. The application of our variational approximation (11) requires the KL divergence KL[q_x \| p(x_{0:T} \mid \mu_{0:T})] between a GP q_x and a GP with a shifted mean function p(x_{0:T} \mid \mu_{0:T}). Assuming the same covariance, this could in principle be computed using the Radon-Nikodym derivative between the two measures. Our preliminary results (based on the Cameron-Martin formula for GPs [12]) indicate that even in simple cases (like Ornstein-Uhlenbeck noise) the measures are not absolutely continuous and the KL divergence is infinite. Hence, we have resorted to a different variational approach, which is based on a lower bound to the free energy.

We use the fact that, conditioned on the path of the switching process \mu_{0:T}, the prior of x(t) is a GP with a covariance kernel K(t,t') and can be marginalised out exactly. The kernel K can be easily computed from the kernel of the driving noise process f(t) [2]. In the previous case of white noise, K is given by the (non-stationary) Ornstein-Uhlenbeck kernel

K_{\mathrm{OU}}(t,t') = \frac{\sigma^2}{2\lambda} \left\{ e^{-\lambda|t-t'|} - e^{-\lambda(t+t')} \right\}.

The mean function of the conditioned GP is obtained by solving the linear ODE (17) without noise, i.e. with f = 0. This yields

E_{\mathrm{GP}}[x(t) \mid \mu_{0:T}] = \int_0^t e^{-\lambda(t-s)} \big(A\mu(s) + b\big)\, ds.   (18)

Marginalising out the conditional GP, the negative log marginal probability of observations (free energy) F = -\ln p(D) is represented as

F = -\ln E_\mu\big[p(D \mid \mu_{0:T})\big] = -\ln E_\mu \exp\left\{ -\frac{1}{2} (y - \bar{x})^\top (K + s^2 I)^{-1} (y - \bar{x}) - \Lambda \right\}.   (19)

Here E_\mu denotes expectation over the prior switching process p_\mu, y is the vector of observations, and \bar{x} = E_{\mathrm{GP}}[(x(t_1), \ldots, x(t_N))^\top \mid \mu_{0:T}] is the vector of conditional means at observation times t_i. K is the kernel matrix and \Lambda = \frac{1}{2} \ln |2\pi (K + s^2 I)|. This intractable free energy contains a functional in the exponent which is bilinear in the switching process. In the spirit of other variational transformations [13, 14], this can be linearised through a Legendre transform (or convex duality). Applying

\frac{1}{2} z^\top A^{-1} z = \max_\eta \left[ \eta^\top z - \frac{1}{2} \eta^\top A \eta \right]

to the vector z = (y - \bar{x}) and the matrix A = (K + s^2 I), and exchanging the max operation with the expectation over \mu, leads to the lower bound

F \ge \Lambda + \max_\eta \left\{ -\frac{1}{2} \eta^\top (K + s^2 I)\, \eta - \ln E_\mu \exp\left[ -\eta^\top (y - \bar{x}) \right] \right\}.   (20)

A similar upper bound, which is however harder to evaluate computationally, will be presented elsewhere. It can be shown that the lower bound (20) neglects the variance of the \bar{x} process (intuitively, the two point expectations in (19) are dropped). The second term in the bracket looks like the free energy for a jump process model having a (pseudo) log likelihood of the data given by -\eta^\top (y - \bar{x}). This auxiliary free energy can again be rewritten in terms of the "standard variational" representation

-\ln E_\mu \exp\left[ -\eta^\top (y - \bar{x}) \right] = \min_{q_\mu} \left\{ KL[q_\mu \| p_{\mathrm{prior}}] + \eta^\top \big(y - E_{q_\mu}[\bar{x}]\big) \right\},   (21)

where in the second line we have introduced an arbitrary process q_\mu over the switching variable and used standard variational manipulations. Inserting (18) into the last term in (21), we see that this KL minimisation has the same structure as the one in equation (16), with a linear functional of q_\mu in the (pseudo) likelihood term. Therefore the minimiser q_\mu is an inhomogeneous Markov jump process, and we can use a backward and forward sweep to compute the marginals q_1(t) exactly for a fixed \eta! These marginals are used to compute the gradient of the lower bound, -(K + s^2 I)\, \eta + (y - E_{q_\mu}[\bar{x}]), and we iterate between gradient ascent steps and recomputations of E_{q_\mu}[\bar{x}]. Since the minimax problem defined by (20) and (21) is concave in \eta and convex in q_\mu, the solution must be unique. Upon convergence, we use the switching process marginals q_1 for prediction. Statistics of the smoothed x process can then be computed by summing the conditional GP statistics (obtained by exact GP regression) and the \bar{x} statistics, which can be computed using the same methods as in [6].
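The forward sweep of Section 1.2.1 — propagating the moment ODEs (14) with prior coefficients \hat{\alpha} = -\lambda, \hat{\beta} = A q_1(t) + b, and applying a Gaussian update at each observation as in the jump conditions (15) — can be sketched in a few lines. This is a minimal illustration only, not the authors' implementation: the function name and parameter values are hypothetical, the switch marginal q_1(t) is held fixed, and a simple Euler discretisation replaces exact integration.

```python
import numpy as np

def forward_moments(obs_t, obs_y, lam=1.0, A=3.0, b=1.0, sigma2=0.1,
                    s2=0.05, T=10.0, dt=1e-3, q1=lambda t: 0.5):
    """Propagate mean and variance of the filtered process via the moment
    ODEs dm/dt = alpha*m + beta, dc2/dt = 2*alpha*c2 + sigma2, with prior
    coefficients alpha = -lam, beta = A*q1(t) + b, and apply a Gaussian
    Bayes update at each observation time (the jump conditions)."""
    grid = np.arange(0.0, T, dt)
    obs = {int(round(t / dt)): y for t, y in zip(obs_t, obs_y)}
    m, c2 = b / lam, sigma2 / (2.0 * lam)   # stationary prior moments
    ms = np.empty_like(grid)
    c2s = np.empty_like(grid)
    for k, t in enumerate(grid):
        if k in obs:                         # jump condition at t_i
            gain = c2 / (c2 + s2)
            m = m + gain * (obs[k] - m)
            c2 = (1.0 - gain) * c2
        ms[k], c2s[k] = m, c2
        alpha, beta = -lam, A * q1(t) + b    # prior drift coefficients
        m += dt * (alpha * m + beta)         # Euler step of the moment ODEs
        c2 += dt * (2.0 * alpha * c2 + sigma2)
    return grid, ms, c2s
```

Note that, exactly as remarked after (15), no Kalman gain matrix recursion is needed: between observations the moments follow the prior flow, and each observation only rescales the variance and shifts the mean.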
¹ In case of a process with smooth sample paths, we can write df(t) = g(t)dt with an "ordinary" GP g.

Figure 1: Results on synthetic data. Variational Markovian Gaussian-Jump process on the left, approximate RBF Gaussian-Jump process on the right. Top row: inferred posterior jump means (solid line) and true jump profile (dotted black). Bottom row: inferred posterior mean x (solid) with confidence intervals (dotted red); data points are shown as red crosses, and the true sample profile is shown as black dots. Notice that the less confident jump prediction for the RBF process gives a much higher uncertainty in the x prediction (see text). The x axis units are the simulation time steps.

3 Results

3.1 Synthetic data

To evaluate the performance and identifiability of our model, we experimented first with a simple one-dimensional synthetic data set generated using a jump profile with only two jumps. A sample from the resulting conditional Gaussian process was then obtained by simulating the SDE using the Euler-Maruyama method, and ten equally spaced points were then taken from the sample path and corrupted with Gaussian noise. Inference was then carried out using two procedures: a Markovian Gaussian-Jump process as described in Section 1, using the variational algorithm, and an "RBF" Gaussian-Jump process with slowly varying covariance, as described in Section 2. The parameters s^2, \sigma^2 and f_\pm were kept fixed, while the A, b and \lambda hyperparameters were optimised using type II ML.

The inference results are shown in Figure 1: the left column gives the results of the variational smoothing, while the right column gives the results obtained by fitting an RBF Gaussian-Jump process. The top row shows the inferred posterior mean of the discrete state distribution, while the bottom row gives the conditionally Gaussian posterior. We notice that both approaches provide a good smoothing of the GP and the jump process, although the second jump is inferred as being slightly later than in the true path. Notice that the uncertainties associated with the RBF process are much higher than in the Markovian one, and are dominated by the uncertainty in the posterior mean caused by the uncertainty in the jump process, which is less confident than in the Markovian case (top right figure). This is probably due to the fact that the lower bound (20) ignores the contributions of the variance of the \bar{x} term in the free energy, which is due to the variance of the jump process, and hence removes the penalty for having intermediate jump posteriors. A similar behaviour was already noted in a related context in [14]. In terms of computational efficiency, the variational Markovian algorithm converged in approximately 0.1 seconds on a standard laptop, while the RBF process took approximately two minutes. As a baseline, we used a standard discrete time Switching Kalman Filter in the implementation of [15], but did not manage to obtain good results. It is not clear whether the problem resided in the short time series or in our application of the model.

Estimation of the parameters using the variational upper bound also gave very accurate results, with A = (3.1 ± 0.3) × 10⁻² (true value 3 × 10⁻²), b = 1.02 × 10⁻² (true value 1 × 10⁻²) and \lambda = (1.1 ± 0.1) × 10⁻² (true value 1 × 10⁻²). It is interesting to note that, if the system noise parameter \sigma^2 was set at a higher value, then the A parameter was always driven to zero, leading to a decoupling of the Gaussian and jump processes. In fact, it can be shown that the true free energy always has a local minimum for A = 0: heuristically, the GP is always a sufficiently flexible model to fit the data on its own. However, for small levels of system noise, the evidence of the data is such that the more complex model involving a jump process is favoured, giving a type of automated Occam razor, which is one of the main attractions of Bayesian modelling.

Figure 2: Results on double well diffusion. Left: inferred posterior switch mean; right: smoothed data, with confidence intervals. The x axis units are the simulation time steps.

3.2 Diffusion in a double-well potential

To illustrate the properties of the Gaussian-Jump process as an approximator for non-linear stochastic models, we considered the benchmark problem of smoothing data generated from an SDE with double-well potential drift and constant diffusion coefficient. Since the process we wish to approximate is a diffusion process, we use the variational upper bound method, which gave good results in the synthetic experiments. The data we use is the same as that used in [1], where a non-stationary Gaussian approximation to the non-linear SDE was proposed by means of a variational approximation. The results are shown in Figure 2: as is evident, the method both captures the transition time accurately and provides an excellent smoothing (very similar to the one reported in [1]); these results were obtained in 0.07 seconds, while the Gaussian process approximation of [1] involves gradient descent in a high dimensional space and takes approximately three to four orders of magnitude longer. Naturally, our method cannot be used in this case to estimate the parameters of the true (double well) prior drift, as it only models the linear behaviour near the bottom of each well; however, for smoothing purposes it provides a very accurate and efficient alternative method.

3.3 Regulation of competence in B. subtilis

Regulation of gene expression at the transcriptional level provides an important application, as well as motivation, for the class of models we have been considering. Transcription rates are modulated by the action of transcription factors (TFs), DNA binding proteins which can be activated fast in response to environmental signals. The activation state of a TF is a notoriously difficult quantity to measure experimentally; this has motivated a significant effort within the machine learning and systems biology community to provide models to infer TF activities from more easily measurable gene expression levels [2, 16, 17]. In this section, we apply our model to single cell fluorescence measurements of protein concentrations; the intrinsic stochasticity inherent in single cell data would make conditionally deterministic models such as [2, 6] an inappropriate tool, while our variational SDE model should be able to better capture the inherent fluctuations.

The data we use was obtained in [18] during a study of the genetic regulation of competence in B. subtilis: briefly, bacteria under food shortage can either enter a dormant stage (spore) or can continue to replicate their DNA without dividing (competence). Competence is essentially a bet that the food shortage will be short-lived: in that case, the competent cell can immediately divide into many daughter cells, giving an evolutionary advantage. The molecular mechanisms underpinning competence are quite complex, but the essential behaviour can be captured by a simple system involving only two components: the competence regulator ComK and the auxiliary protein ComS, which is controlled by ComK with a switch-like behaviour (Hill coefficient 5). In [18], ComK activity was indirectly estimated using a gene reporter system (using the ComG promoter). Here, we leave ComK as a latent switching variable, and use our model to smooth the ComS data.

Figure 3: Results on competence circuit. Left: inferred posterior switch mean (ComK activity profile); right: smoothed ComS data, with confidence intervals. The y axis units in the right hand panel are arbitrary fluorescence units.

The results are shown in Figure 3, showing a clear switch behaviour for ComK activity (as expected, and in agreement with the high Hill coefficient), and a good smoothing of the ComS data. Analysis of the optimal parameters is also instructive: while the A and b parameters are not so informative, due to the fact that fluorescence measurements are reported in arbitrary units, the ComS decay rate is estimated as 0.32 ± 0.06 h⁻¹, corresponding to a half life of approximately 3 hours, which is clearly plausible from the data. It should be pointed out that, in the simulations in the supplementary material of [18], a nominal value of 0.0014 s⁻¹ was used, corresponding to a half life of only 20 minutes! While the purpose of that simulation was to recreate the qualitative behaviour of the system, rather than to estimate its parameters, the use of such an implausible parameter value illustrates all too well the need for appropriate data-driven tools in modelling complex systems.

4 Discussion

In this contribution we proposed a novel inference methodology for continuous time conditionally Gaussian processes. As well as being interesting in its own right as a method for inference in jump-diffusion processes (to our knowledge the first to be proposed), these models find a powerful motivation in their relevance to fields such as systems biology, as well as in providing plausible approximations to non-linear diffusion processes. We presented both a method based on a variational upper bound in the case of Markovian processes, and a more general lower bound which holds also for non-Markovian Gaussian processes.

A natural question from the machine learning point of view is what the advantages of continuous time over discrete time approaches are. As well as providing a conceptually more correct description of the system, continuous time approaches have at least two significant advantages in our view: a computational advantage in the availability of more stable solvers (such as Runge-Kutta methods), and a communication advantage, as they are more immediately understandable to the large community of modellers who use differential equations but may not be familiar with statistical methods.

There are several possible extensions to the work we presented: a relatively simple task would be an extension to a factorial design, such as the one proposed for conditionally deterministic systems in [14]. A theoretical task of interest would be a thorough investigation of the relationship between the upper and lower bounds we presented. This is possible, at least for Markovian GPs, but will be presented in other work.
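As a concrete illustration of the generative model (1) used throughout, and of the Euler-Maruyama data generation described in Section 3.1, a simulation of a Gaussian-Jump sample path can be sketched as follows. This is an illustrative sketch, not the authors' code: the function name and parameter values are hypothetical, and the telegraph process is discretised to first order in dt.

```python
import numpy as np

def simulate_gaussian_jump(T=10.0, dt=1e-3, f_on=0.5, f_off=0.5,
                           lam=1.0, A=3.0, b=1.0, sigma=0.2, seed=0):
    """Euler-Maruyama simulation of model (1): a random telegraph process
    mu(t), switching 0 <-> 1 with rates f_on/f_off, drives the bias of a
    linear SDE, dx = (A*mu + b - lam*x) dt + sigma dw."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    mu = np.empty(n, dtype=np.int64)
    x = np.empty(n)
    mu[0], x[0] = 0, b / lam
    sqdt = np.sqrt(dt)
    for k in range(1, n):
        # telegraph process: flip with probability rate * dt (first order)
        rate = f_on if mu[k - 1] == 0 else f_off
        mu[k] = 1 - mu[k - 1] if rng.random() < rate * dt else mu[k - 1]
        # Euler-Maruyama step for the conditionally linear diffusion
        drift = A * mu[k - 1] + b - lam * x[k - 1]
        x[k] = x[k - 1] + drift * dt + sigma * sqdt * rng.standard_normal()
    return mu, x
```

Subsampling such a path at a few time points and adding Gaussian observation noise reproduces the kind of synthetic benchmark used in Section 3.1.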
References

[1] Cedric Archambeau, Dan Cornford, Manfred Opper, and John Shawe-Taylor. Gaussian process approximations of stochastic differential equations. Journal of Machine Learning Research Workshop and Conference Proceedings, 1(1):1-16, 2007.
[2] Neil D. Lawrence, Guido Sanguinetti, and Magnus Rattray. Modelling transcriptional regulation using Gaussian processes. In Advances in Neural Information Processing Systems 19, 2006.
[3] Uri Nodelman, Christian R. Shelton, and Daphne Koller. Continuous time Bayesian networks. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI), 2002.
[4] Manfred Opper and Guido Sanguinetti. Variational inference for Markov jump processes. In Advances in Neural Information Processing Systems 20, 2007.
[5] Ido Cohn, Tal El-Hay, Nir Friedman, and Raz Kupferman. Mean field variational approximation for continuous-time Bayesian networks. In Proceedings of the Twenty-fifth Conference on Uncertainty in Artificial Intelligence (UAI), 2009.
[6] Guido Sanguinetti, Andreas Ruttor, Manfred Opper, and Cedric Archambeau. Switching regulatory models of cellular stress response. Bioinformatics, 25(10):1280-1286, 2009.
[7] Mauricio Alvarez, David Luengo, and Neil D. Lawrence. Latent force models. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS), 2009.
[8] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2005.
[9] C. W. Gardiner. Handbook of Stochastic Methods. Springer, Berlin, second edition, 1996.
[10] Andreas Ruttor and Manfred Opper. Efficient statistical inference for stochastic reaction processes. Physical Review Letters, 103(23), 2009.
[11] Cedric Archambeau and Manfred Opper. Approximate inference for continuous-time Markov processes. In David Barber, Taylan Cemgil, and Silvia Chiappa, editors, Inference and Learning in Dynamic Models. Cambridge University Press, 2010.
[12] M. A. Lifshits. Gaussian Random Functions. Kluwer, Dordrecht, second edition, 1995.
[13] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37:183-233, 1999.
[14] Manfred Opper and Guido Sanguinetti. Learning combinatorial transcriptional dynamics from gene expression data. Bioinformatics, 26(13):1623-1629, 2010.
[15] David Barber. Expectation correction for smoothing in switching linear Gaussian state space models. Journal of Machine Learning Research, 7:2515-2540, 2006.
[16] James C. Liao, Riccardo Boscolo, Young-Lyeol Yang, Linh My Tran, Chiara Sabatti, and Vwani P. Roychowdhury. Network component analysis: reconstruction of regulatory signals in biological systems. Proceedings of the National Academy of Sciences USA, 100(26):15522-15527, 2003.
[17] Martino Barenco, Daniela Tomescu, David Brewer, Robin Callard, Jaroslav Stark, and Michael Hubank. Ranked prediction of p53 targets using hidden variable dynamical modelling. Genome Biology, 7(3), 2006.
[18] Gürol M. Süel, Jordi Garcia-Ojalvo, Louisa M. Liberman, and Michael B. Elowitz. An excitable gene regulatory circuit induces transient cellular differentiation. Nature, 440:545-550, 2006.