So what is generally done is to start a good local optimization algorithm at a "good" starting point and take the solution produced by the algorithm to be the MLE (if the algorithm converges to a solution). Technically, what is required of the starting point to be "good" is that it obeys the square root law: its estimation error goes to zero like a constant divided by the square root of the sample size. Generally, one just uses the best estimator one can calculate as the starting point.

1.3 Expected Fisher Information

Because the log function is monotone, maximizing the likelihood is the same as maximizing the log likelihood
$$
l_x(\theta) = \log L_x(\theta). \tag{3}
$$
For many reasons it is more convenient to use log likelihood rather than likelihood.

The derivatives of the log likelihood function (3) are very important in likelihood theory, and their moments satisfy some important identities. Note that if the likelihood is given by (2), then the log likelihood is given by
$$
l_x(\theta) = \log h(x) + \log f_\theta(x) \tag{4}
$$
and the derivatives (these are derivatives with respect to $\theta$) do not involve the $\log h(x)$ term because it does not contain $\theta$. So these derivatives are well defined and the same regardless of what $h(x)$ we use. Also note that the maximizer of (4), no matter how defined (local or global), does not depend on $h(x)$. Thus the MLE is the same (if defined) regardless of what $h(x)$ we use.

First, the first derivative has expectation zero,
$$
E_\theta\{\nabla l_x(\theta)\} = 0, \tag{5}
$$
where $\nabla f$ denotes the vector of partial derivatives of a scalar function $f$ of a vector variable, often called the gradient of $f$, and $\nabla f(x)$ denotes the value of the gradient at the point $x$.

Second, the variance of the first derivative is minus the expectation of the second,
$$
\operatorname{var}_\theta\{\nabla l_x(\theta)\} = -E_\theta\{\nabla^2 l_x(\theta)\}, \tag{6}
$$
where $\nabla^2 f$ denotes the matrix of second partial derivatives of a scalar function $f$ of a vector variable, often called the hessian of $f$, and $\nabla^2 f(x)$ denotes the value of the hessian at the point $x$.

Either side of (6) is called the expected Fisher information (or just "Fisher information" with no "expected" when it is clear what is meant) and is denoted $I(\theta)$.

We call $J_x(\theta) = -\nabla^2 l_x(\theta)$ the observed Fisher information and use it as another approximation to Fisher information. Of course, we still don't know $\theta$, so we need a second "plug-in," using $J_x(\hat\theta_x)$ instead of $J_x(\theta)$. The plug-in principle applies here too: the error made by approximating $I(\theta)$ by $J_x(\hat\theta_x)$ is negligible compared to the error made by approximating the actual sampling distribution of the MLE by $\text{Normal}\bigl(\theta, I(\theta)^{-1}\bigr)$.

1.6 Summary of Theory

The asymptotic approximation to the sampling distribution of the MLE $\hat\theta_x$ is multivariate normal with mean $\theta$ and variance approximated by either $I(\hat\theta_x)^{-1}$ or $J_x(\hat\theta_x)^{-1}$.

2 Maximum Likelihood Estimation in R

2.1 The Cauchy Location-Scale Family

The (standard) Cauchy distribution is the continuous univariate distribution having density
$$
f(x) = \frac{1}{\pi} \cdot \frac{1}{1 + x^2}, \qquad -\infty < x < \infty. \tag{7}
$$
The standard Cauchy distribution has no parameters, but it induces a two-parameter location-scale family having densities
$$
f_{\mu,\sigma}(x) = \frac{1}{\sigma} f\!\left(\frac{x - \mu}{\sigma}\right). \tag{8}
$$
If $f$ is any density having mean zero and variance one, then $f_{\mu,\sigma}$ has mean $\mu$ and variance $\sigma^2$. But the Cauchy distribution has neither mean nor variance. Thus we call $\mu$ the location parameter and $\sigma$ the scale parameter.

Since the standard Cauchy distribution is clearly symmetric about zero, the Cauchy($\mu$, $\sigma$) distribution is symmetric about $\mu$. Hence $\mu$ is the population median, and a "good" estimate of $\mu$ is the sample median.

A robust scale estimator analogous to the sample median is the interquartile range (IQR). The IQR of the standard Cauchy distribution is

> qcauchy(3/4) - qcauchy(1/4)
[1] 2

Thus the population IQR of the Cauchy($\mu$, $\sigma$) distribution is $2\sigma$, and hence a "good" estimate of $\sigma$ is the sample IQR divided by 2.

2.2 Maximum Likelihood

2.2.1 One Parameter

The R function nlm minimizes arbitrary functions written in R. So to maximize the likelihood, we hand nlm the negative of the log likelihood (for any function $f$, minimizing $-f$ maximizes $f$).
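The excerpt jumps at this point: the definition of the one-parameter minus-log-likelihood function mlogl and the sample size n used by the simulation below fall outside it. As a minimal sketch, written by analogy with the two-parameter mlogl3 of Section 2.2.3, such a function might look like this (the body of mlogl and the sample size shown are assumptions for illustration, not quoted from the notes):

> mlogl <- function(mu, x) {
+     # minus log likelihood for the Cauchy location model, scale fixed at 1
+     sum(-dcauchy(x, location = mu, log = TRUE))
+ }
> n <- 30                      # hypothetical sample size for illustration
> x <- rcauchy(n)              # hypothetical data so the call below runs
> nlm(mlogl, median(x), x = x)$estimate

Passing the data through nlm's "..." mechanism (the x = x argument) rather than relying on a global variable is what lets the simulation below call the same mlogl on each simulated sample.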
> nsim <- 100
> mu <- 0
> mu.hat <- double(nsim)
> mu.twiddle <- double(nsim)
> for (i in 1:nsim) {
+     xsim <- rcauchy(n, location = mu)
+     mu.start <- median(xsim)
+     out <- nlm(mlogl, mu.start, x = xsim)
+     mu.hat[i] <- out$estimate
+     mu.twiddle[i] <- mu.start
+ }
> mean((mu.hat - mu)^2)
[1] 0.06203118
> mean((mu.twiddle - mu)^2)
[1] 0.08242236

The two numbers reported from the simulation are the mean square errors (MSE) of the two estimators. Their ratio

> mean((mu.hat - mu)^2) / mean((mu.twiddle - mu)^2)
[1] 0.7526013

estimates the asymptotic relative efficiency (ARE) of the estimators. Now we see the MLE is more accurate, as theory says it must be.

2.2.3 Two Parameters

Minus the log likelihood for the two-parameter Cauchy can be written

> mlogl3 <- function(theta, x) {
+     sum(-dcauchy(x, location = theta[1], scale = theta[2], log = TRUE))
+ }

and the MLE calculated by

> theta.start <- c(median(x), IQR(x) / 2)
> theta.start
[1] -0.1955062  0.7125899
> out <- nlm(mlogl3, theta.start, x = x)
> theta.hat <- out$estimate
> theta.hat
[1] -0.1809299  0.7605561

2.3.3 Expected Fisher Information

R has a function deriv that does derivatives of R expressions, but it isn't very sophisticated and won't calculate the likelihood derivatives we need here. So let's do the derivatives by pencil and paper. First, the log likelihood itself (dropping the additive constant $-n \log \pi$, which does not affect derivatives) is
$$
l_x(\mu, \sigma) = n \log(\sigma) - \sum_{i=1}^{n} \log\bigl(\sigma^2 + (x_i - \mu)^2\bigr).
$$
The first derivatives are
$$
\frac{\partial l_x(\mu, \sigma)}{\partial \mu} = \sum_{i=1}^{n} \frac{2 (x_i - \mu)}{\sigma^2 + (x_i - \mu)^2}
$$
$$
\frac{\partial l_x(\mu, \sigma)}{\partial \sigma} = \frac{n}{\sigma} - \sum_{i=1}^{n} \frac{2 \sigma}{\sigma^2 + (x_i - \mu)^2}
$$
R doesn't do analytic integrals at all, but it does do numerical integrals, which is all we need to compute expected Fisher information: by (5) the score for a single observation has mean zero, so by (6) each entry of the per-observation Fisher information is the expectation of a product of score components, an integral against the Cauchy density.

> theta.hat
[1] -0.1809299  0.7605561
> mu <- theta.hat[1]
> sigma <- theta.hat[2]
> grad1 <- function(x) 2 * (x - mu) / (sigma^2 + (x - mu)^2)
> grad2 <- function(x) (1 / sigma - 2 * sigma / (sigma^2 + (x - mu)^2))
> fish.exact <- matrix(NA, 2, 2)
> fish.exact[1, 1] <- integrate(function(x) grad1(x)^2 * dcauchy(x,
+     mu, sigma), -Inf, Inf)$value
> fish.exact[2, 2] <- integrate(function(x) grad2(x)^2 * dcauchy(x,
+     mu, sigma), -Inf, Inf)$value
> fish.exact[1, 2] <- integrate(function(x) grad1(x) * grad2(x) *
+     dcauchy(x, mu, sigma), -Inf, Inf)$value
> fish.exact[2, 1] <- fish.exact[1, 2]
> round(n * fish.exact, 10)
         [,1]     [,2]
[1,] 25.93157  0.00000
[2,]  0.00000 25.93157

For comparison, here is the observed Fisher information fish computed earlier in the notes (its calculation is not part of this excerpt):

> fish
            [,1]        [,2]
[1,] 32.50733727  0.01480913
[2,]  0.01480913 19.34819096
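Since the computation of fish is not shown in this excerpt, here is a minimal sketch of one standard way such a matrix can be obtained, using the mlogl3, theta.start, and x of Section 2.2.3 (the hessian = TRUE route is an assumption about method, not quoted from the notes):

> out <- nlm(mlogl3, theta.start, x = x, hessian = TRUE)
> fish <- out$hessian       # hessian of minus log likelihood at the MLE,
>                           # i.e. the observed Fisher information J_x(theta.hat)
> sqrt(diag(solve(fish)))   # asymptotic standard errors, per Section 1.6

Both n * fish.exact and fish approximate the same quantity, and their inverses give the two variance approximations of Section 1.6. The visible discrepancy between their diagonals (about 25.9 versus 32.5 and 19.3) is finite-sample variation; as the plug-in discussion above notes, such differences are negligible compared to the error of the normal approximation itself.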