Making a 'Completely Blind' Image Quality Analyzer

Anish Mittal, Rajiv Soundararajan, and Alan C. Bovik, Fellow, IEEE

Abstract—An important aim of research on the blind image quality assessment (IQA) problem is to devise perceptual models that can predict the quality of distorted images with as little prior knowledge of the images or their distortions as possible. Current state-of-the-art 'general purpose' no reference (NR) IQA algorithms require knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores. However, we have recently derived a blind IQA model that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images, and, indeed, without any exposure to distorted images. Thus, it is 'completely blind.' The new IQA model, which we call the Natural Image Quality Evaluator (NIQE), is based on the construction of a 'quality aware' collection of statistical features based on a simple and successful space domain natural scene statistic (NSS) model. These features are derived from a corpus of natural, undistorted images. Experimental results show that the new index delivers performance comparable to top performing NR IQA models that require training on large databases of human opinions of distorted images. A software release is available at: http://live.ece.utexas.edu/research/quality/niqe_release.zip.

Index Terms—Completely blind, distortion free, no reference, image quality assessment.

[Footnote: Copyright (c) 2012 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org. A. Mittal and A. C. Bovik are with the Laboratory for Image and Video Engineering (LIVE), The University of Texas at Austin, Texas, USA. R. Soundararajan was with The University of Texas at Austin while most of this work was done; he is currently with Qualcomm Research India, Bangalore. Corresponding author's email address: mittal.anish@gmail.com.]

I. INTRODUCTION

Americans captured 80 billion digital photographs in 2011 and this number is increasing annually [1]. More than 250 million photographs are posted daily on Facebook. Consumers are drowning in digital visual content, and finding ways to review and control the quality of digital photographs is becoming quite challenging.

At the same time, camera manufacturers continue to provide improvements in photographic quality and resolution. The raw captured images pass through multiple post-processing steps in the camera pipeline, each requiring parameter tuning. A problem of great interest is to find ways to automatically evaluate and control the perceptual quality of the visual content as a function of these multiple parameters.

Objective image quality assessment refers to automatically predicting the quality of distorted images as it would be perceived by an average human. If a naturalistic reference image is supplied against which the quality of the distorted image can be compared, the model is called full reference (FR) [2]. Conversely, NR IQA models assume that only the distorted image whose quality is being assessed is available. Existing general purpose NR IQA algorithms are based on models that can learn to predict human judgments of image quality from databases of human-rated distorted images [3], [4], [5], [6], [7]. These kinds of IQA models are necessarily limited, since they can only assess quality degradations arising from the distortion types that they have been trained on.

However, it is also possible to contemplate subcategories of general-purpose NR IQA models having tighter conditions. A model is 'opinion-aware' (OA) if it has been trained on a database (or databases) of human-rated distorted images and associated subjective opinion scores. Thus algorithms like DIIVINE [4], CBIQ [6], LBIQ [7], BLIINDS [5] and BRISQUE [3] are OA IQA models. Given the impracticality of obtaining collections of distorted images with co-registered human scores, models that do not require training on databases of human judgments of distorted images, and hence are 'opinion unaware' (OU), are of great interest. One such effort was made in this direction by the authors of [8]. However, their model requires knowledge of the expected image distortions.
Likewise, among algorithms derived from OU models, distorted images may or may not be available during IQA model creation or training. For example, in highly unconstrained environments, such as a photograph upload site, the a priori nature of distortions may be very difficult to know. Thus a model may be formulated as 'distortion aware' (DA) by training on (and hence tuning to) specific distortions, or it may be 'distortion unaware' (DU), relying instead only on exposure to naturalistic source images or image models to guide the QA process. While this may seem an extreme paucity of information to guide design, it is worth observing that very successful FR IQA models (such as the structural similarity index (SSIM) [9]) are DU.

Our contribution in this direction is the development of an NSS-based modeling framework for OU-DU NR IQA design, resulting in a first-of-a-kind NSS-driven blind OU-DU IQA model which does not require exposure to distorted images a priori, nor any training on human opinion scores. The new NR OU-DU IQA quality index performs better than the popular FR peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index and delivers performance at par with top performing NR OA-DA IQA approaches.

II. NO REFERENCE OPINION-UNAWARE DISTORTION-UNAWARE IQA MODEL

Our new NR OU-DU IQA model is based on constructing a collection of 'quality aware' features and fitting them to a multivariate Gaussian (MVG) model. The quality aware features are derived from a simple but highly regular natural scene statistic (NSS) model. The quality of a given test image is then expressed as the distance between an MVG fit of the NSS features extracted from the test image and an MVG model of the quality aware features extracted from the corpus of natural images.

[Fig. 1: The marked blocks in images (a) and (b) depict instances of natural image patches selected using a local sharpness measure.]

A. Spatial Domain NSS

Our 'completely blind' IQA model is founded on perceptually relevant spatial domain NSS features extracted from local image patches that effectively capture the essential low-order statistics of natural images.

The classical spatial NSS model [10] that we use begins by preprocessing the image by local mean removal and divisive normalization:

    \hat{I}(i,j) = \frac{I(i,j) - \mu(i,j)}{\sigma(i,j) + 1}    (1)

where i \in \{1, 2, \ldots, M\}, j \in \{1, 2, \ldots, N\} are spatial indices, M and N are the image dimensions, and

    \mu(i,j) = \sum_{k=-K}^{K} \sum_{l=-L}^{L} w_{k,l} \, I(i+k, j+l)    (2)

    \sigma(i,j) = \sqrt{\sum_{k=-K}^{K} \sum_{l=-L}^{L} w_{k,l} \left[ I(i+k, j+l) - \mu(i,j) \right]^2}    (3)

estimate the local mean and contrast, respectively, where w = \{w_{k,l} \mid k = -K, \ldots, K;\ l = -L, \ldots, L\} is a 2D circularly-symmetric Gaussian weighting function sampled out to 3 standard deviations (K = L = 3) and rescaled to unit volume.
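As a concrete illustration of (1)-(3), the following minimal Python/NumPy sketch computes the normalized coefficients and the local deviation field from a grayscale image. It is not the authors' released implementation; the Gaussian width (chosen here to roughly mimic a 7x7 window sampled out to 3 standard deviations) and the function name are assumptions made for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7.0 / 6.0, C=1.0):
    """Sketch of the local mean removal and divisive normalization of Eqs. (1)-(3)."""
    image = np.asarray(image, dtype=np.float64)
    mu = gaussian_filter(image, sigma, truncate=3.0)                  # local mean, Eq. (2)
    var = gaussian_filter(image * image, sigma, truncate=3.0) - mu * mu
    sigma_field = np.sqrt(np.maximum(var, 0.0))                       # local contrast, Eq. (3)
    ihat = (image - mu) / (sigma_field + C)                           # normalized coefficients, Eq. (1)
    return ihat, sigma_field
```

The returned sigma_field corresponds to the deviation field of (3), which is reused below as the local sharpness measure for patch selection.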
The coefficients (1) have been observed to reliably follow a Gaussian distribution when computed from natural images that have suffered little or no apparent distortion [10]. This ideal model, however, is violated when the images do not derive from a natural source (e.g., computer graphics) or when natural images are subjected to unnatural distortions. The degree of modification can be indicative of perceptual distortion severity.

The NSS features used in the NIQE index are similar to those used in a prior OA-DA IQA model called BRISQUE [3]. However, NIQE only uses the NSS features from a corpus of natural images, while BRISQUE is trained on features obtained from both natural and distorted images and also on human judgments of the quality of these images. Therefore, BRISQUE is limited to the types of distortions it has been tuned to. By comparison, the NIQE index is not tied to any specific distortion type, yet, as will be shown, it delivers nearly comparable predictive power on the same distortions the BRISQUE index has been trained on, with a similarly low complexity.

B. Patch Selection

Once the image coefficients (1) are computed, the image is partitioned into P x P patches. Specific NSS features are then computed from the coefficients of each patch. However, only a subset of the patches is used, for the following reason. Every image is subject to some kind of limiting distortion [11]. For instance, there is a loss of resolution due to defocus blur in parts of most images owing to the limited depth of field (DOF) of any single-lens camera. Since humans appear to more heavily weight their judgments of image quality from the sharp image regions [12], more salient quality measurements can be made from sharp patches. Setting aside the question of the aesthetic appeal of having some parts of an image sharper than others, any defocus blur represents a potential loss of visual information.

We use a simple device to preferentially select, from amongst a collection of natural patches, those that are richest in information and less likely to have been subjected to a limiting distortion. This subset of patches is then used to construct a model of the statistics of natural image patches. The variance field (3) has been largely ignored in the past in NSS-based image analysis, but it is a rich source of structural image information that can be used to quantify local image sharpness. Letting the P x P sized patches be indexed b = 1, 2, \ldots, B, a direct approach is to compute the average local deviation field of each patch indexed b:

    \delta(b) = \sum_{(i,j) \in \text{patch } b} \sigma(i,j)    (4)

where \delta denotes local activity/sharpness. Once the sharpness of each patch is found, those having a suprathreshold sharpness \delta > T are selected. The threshold T is picked to be a fraction p of the peak patch sharpness over the image. In our experiments, we used the nominal value p = 0.75. Examples of this kind of patch selection are shown in Fig. 1. We have observed only small variations in performance when p is varied in the range [0.6, 0.9].
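A minimal sketch of this selection rule follows, assuming the sigma_field produced by the earlier snippet and non-overlapping P x P patches; the function name and the simple looping strategy are illustrative choices, not part of the paper.

```python
import numpy as np

def select_sharp_patches(sigma_field, patch_size=96, p=0.75):
    """Keep the P x P patches whose summed local deviation, Eq. (4),
    exceeds a fraction p of the sharpest patch in the image."""
    M, N = sigma_field.shape
    P = patch_size
    coords, sharpness = [], []
    for i in range(0, M - P + 1, P):
        for j in range(0, N - P + 1, P):
            coords.append((i, j))
            sharpness.append(sigma_field[i:i + P, j:j + P].sum())   # delta(b)
    sharpness = np.asarray(sharpness)
    T = p * sharpness.max()                                         # threshold on peak patch sharpness
    return [c for c, s in zip(coords, sharpness) if s > T]
```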
C. Characterizing Image Patches

Given a collection of natural image patches selected as above, their statistics are characterized by 'quality aware' NSS features computed from each selected patch [3]. Prior studies of NSS-based image quality have shown that the generalized Gaussian distribution effectively captures the behavior of the coefficients (1) of natural images and of distorted versions of them [13]. The generalized Gaussian distribution (GGD) with zero mean is given by:

    f(x; \alpha, \beta) = \frac{\alpha}{2 \beta \Gamma(1/\alpha)} \exp\left( -\left( \frac{|x|}{\beta} \right)^{\alpha} \right)    (5)

where \Gamma(\cdot) is the gamma function:

    \Gamma(a) = \int_{0}^{\infty} t^{a-1} e^{-t} \, dt, \quad a > 0.    (6)

The parameters of the GGD (\alpha, \beta) can be reliably estimated using the moment-matching based approach proposed in [14].

The signs of the transformed image coefficients (1) have been observed to follow a fairly regular structure. However, distortions disturb this correlation structure [3]. This deviation can be captured by analyzing the sample distribution of the products of pairs of adjacent coefficients computed along horizontal, vertical and diagonal orientations: \hat{I}(i,j)\hat{I}(i,j+1), \hat{I}(i,j)\hat{I}(i+1,j), \hat{I}(i,j)\hat{I}(i+1,j+1) and \hat{I}(i,j)\hat{I}(i+1,j-1) for i \in \{1, 2, \ldots, M\} and j \in \{1, 2, \ldots, N\} [3]. The products of neighboring coefficients are well modeled as following a zero mode asymmetric generalized Gaussian distribution (AGGD) [15]:

    f(x; \gamma, \beta_l, \beta_r) =
    \begin{cases}
      \dfrac{\gamma}{(\beta_l + \beta_r)\,\Gamma(1/\gamma)} \exp\left( -\left( \dfrac{-x}{\beta_l} \right)^{\gamma} \right), & x \le 0 \\[1ex]
      \dfrac{\gamma}{(\beta_l + \beta_r)\,\Gamma(1/\gamma)} \exp\left( -\left( \dfrac{x}{\beta_r} \right)^{\gamma} \right), & x \ge 0
    \end{cases}    (7)

The parameters of the AGGD (\gamma, \beta_l, \beta_r) can be efficiently estimated using the moment-matching based approach in [15]. The mean of the distribution is also useful:

    \eta = (\beta_r - \beta_l) \, \frac{\Gamma(2/\gamma)}{\Gamma(1/\gamma)}.    (8)

By extracting these estimates along the four orientations, 16 parameters are arrived at, yielding 18 overall. All features are computed at two scales to capture multiscale behavior, by low-pass filtering and downsampling by a factor of 2, yielding a set of 36 features.
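The moment-matching idea behind the GGD fit of (5) can be sketched as follows: for this parameterization, the ratio E[|x|]^2 / E[x^2] depends only on the shape \alpha, so \alpha can be recovered by a grid lookup and \beta from the second moment. This is a generic sketch in the spirit of [14], not the exact estimator used in the released code; the AGGD parameters of (7)-(8) are fitted with an analogous recipe using separate left and right second moments [15].

```python
import numpy as np
from scipy.special import gamma

def fit_ggd(x, alpha_grid=np.arange(0.2, 10.0, 0.001)):
    """Moment-matching fit of the zero-mean GGD of Eq. (5).

    Matches the empirical ratio E[|x|]^2 / E[x^2] against its closed form
    rho(alpha) = Gamma(2/alpha)^2 / (Gamma(1/alpha) * Gamma(3/alpha)).
    """
    x = np.asarray(x, dtype=np.float64).ravel()
    rho_hat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    rho = gamma(2.0 / alpha_grid) ** 2 / (gamma(1.0 / alpha_grid) * gamma(3.0 / alpha_grid))
    alpha = alpha_grid[np.argmin(np.abs(rho - rho_hat))]                       # shape via grid search
    beta = np.sqrt(np.mean(x ** 2) * gamma(1.0 / alpha) / gamma(3.0 / alpha))  # scale from 2nd moment
    return alpha, beta
```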
D. Multivariate Gaussian Model

A simple model of the NSS features computed from natural image patches can be obtained by fitting them with an MVG density, providing a rich representation of them:

    f_X(x_1, \ldots, x_k) = \frac{1}{(2\pi)^{k/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \nu)^T \Sigma^{-1} (x - \nu) \right)    (9)

where (x_1, \ldots, x_k) are the NSS features computed in (5)-(8), and \nu and \Sigma denote the mean vector and covariance matrix of the MVG model, which are estimated using a standard maximum likelihood estimation procedure [16]. We selected a varied set of 125 natural images with sizes ranging from 480 x 320 to 1280 x 720 to obtain the multivariate Gaussian model. Images were selected from copyright-free Flickr data and from the Berkeley image segmentation database [17], making sure that no overlap occurs with the test image content. The images may be viewed at http://live.ece.utexas.edu/research/quality/pristinedata.zip.

TABLE I. MEDIAN SPEARMAN RANK ORDERED CORRELATION COEFFICIENT (SROCC) ACROSS 1000 TRAIN-TEST COMBINATIONS ON THE LIVE IQA DATABASE. Italics indicate (OA/OU)-DA no-reference algorithms and boldface indicates the new OU-DU model algorithm.

                 JP2K    JPEG    WN      Blur    FF      All
    PSNR         0.8646  0.8831  0.9410  0.7515  0.8736  0.8636
    SSIM         0.9389  0.9466  0.9635  0.9046  0.9393  0.9129
    MS-SSIM      0.9627  0.9785  0.9773  0.9542  0.9386  0.9535
    CBIQ         0.8935  0.9418  0.9582  0.9324  0.8727  0.8954
    LBIQ         0.9040  0.9291  0.9702  0.8983  0.8222  0.9063
    BLIINDS-II   0.9323  0.9331  0.9463  0.8912  0.8519  0.9124
    DIIVINE      0.9123  0.9208  0.9818  0.9373  0.8694  0.9250
    BRISQUE      0.9139  0.9647  0.9786  0.9511  0.8768  0.9395
    TMIQ         0.8412  0.8734  0.8445  0.8712  0.7656  0.8010
    NIQE         0.9172  0.9382  0.9662  0.9341  0.8594  0.9135

E. NIQE Index

The new OU-DU IQA index, called NIQE, is applied by computing the 36 identical NSS features from patches of the same size P x P of the image to be quality analyzed, fitting them with the MVG model (9), then comparing its MVG fit to the natural MVG model. The sharpness criterion (4) is not applied to these patches, because loss of sharpness in distorted images is indicative of distortion, and neglecting such patches would lead to incorrect evaluation of the distortion severity. The patch size was set to 96 x 96 in our implementation; however, we observed stable performance across patch sizes ranging from 32 x 32 to 160 x 160.

Finally, the quality of the distorted image is expressed as the distance between the quality aware NSS feature model and the MVG fit to the features extracted from the distorted image:

    D(\nu_1, \nu_2, \Sigma_1, \Sigma_2) = \sqrt{ (\nu_1 - \nu_2)^T \left( \frac{\Sigma_1 + \Sigma_2}{2} \right)^{-1} (\nu_1 - \nu_2) }    (10)

where \nu_1, \nu_2 and \Sigma_1, \Sigma_2 are the mean vectors and covariance matrices of the natural MVG model and the distorted image's MVG model.
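Expressed in code, the model fit and the distance of (9)-(10) reduce to a sample mean, a sample covariance, and a Mahalanobis-like distance between the two fits. The sketch below uses a pseudo-inverse for numerical robustness, which is an implementation choice made here rather than something specified in the paper.

```python
import numpy as np

def fit_mvg(features):
    """ML fit of the MVG model of Eq. (9) to an (num_patches x 36) feature matrix."""
    features = np.asarray(features, dtype=np.float64)
    nu = features.mean(axis=0)                       # mean vector
    Sigma = np.cov(features, rowvar=False)           # covariance matrix
    return nu, Sigma

def niqe_distance(nu1, Sigma1, nu2, Sigma2):
    """Distance between the natural MVG model and the test image's MVG fit, Eq. (10)."""
    diff = nu1 - nu2
    pooled = (Sigma1 + Sigma2) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```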
III. PERFORMANCE EVALUATION

A. Correlation with Human Judgments of Visual Quality

To test the performance of the NIQE index, we used the LIVE IQA database [2] of 29 reference images and 779 distorted images spanning five different distortion categories: JPEG and JPEG2000 (JP2K) compression, additive white Gaussian noise (WN), Gaussian blur (Blur) and a Rayleigh fast fading channel distortion (FF). A difference mean opinion score (DMOS) associated with each image represents its subjective quality.

Since all of the OA IQA approaches that we compare NIQE to require a training procedure to calibrate the regressor module, we divided the LIVE database randomly into subsets for training and testing. Although our blind approach and the FR approaches do not require this procedure, to ensure a fair comparison across methods, the correlations of predicted scores with human judgments of visual quality are only reported on the test set. The dataset was divided into 80% training and 20% testing, taking care that no overlap occurs between train and test content. This train-test procedure was repeated 1000 times to ensure that there was no bias due to the spatial content used for training. We report the median performance across all iterations. We use Spearman's rank ordered correlation coefficient (SROCC) and Pearson's (linear) correlation coefficient (LCC) to test the model. The NIQE scores are passed through a logistic non-linearity [2], mapping them to DMOS space, before the LCC is computed.
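For reference, this evaluation protocol can be sketched with SciPy as below. The four-parameter logistic used for the monotonic mapping is one common choice in the IQA literature and is assumed here for illustration; see [2] for the exact form used by the authors.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr
from scipy.optimize import curve_fit

def evaluate(predicted, dmos):
    """SROCC on raw predicted scores; LCC after a monotonic logistic mapping to DMOS space."""
    predicted = np.asarray(predicted, dtype=np.float64)
    dmos = np.asarray(dmos, dtype=np.float64)
    srocc = spearmanr(predicted, dmos)[0]

    def logistic(x, b1, b2, b3, b4):                  # illustrative 4-parameter logistic
        return b2 + (b1 - b2) / (1.0 + np.exp(-(x - b3) / np.abs(b4)))

    p0 = [dmos.max(), dmos.min(), predicted.mean(), predicted.std() + 1e-6]
    params, _ = curve_fit(logistic, predicted, dmos, p0=p0, maxfev=20000)
    lcc = pearsonr(logistic(predicted, *params), dmos)[0]
    return srocc, lcc
```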
We compared NIQE with three FR indices: PSNR, SSIM [9] and multiscale SSIM (MS-SSIM) [18]; five general purpose OA-DA algorithms: CBIQ [6], LBIQ [7], BLIINDS-II [5], DIIVINE [4] and BRISQUE [3]; and the DA-OU approach TMIQ [8]. As can be seen from Tables I and II, NIQE performs better than the FR PSNR and SSIM and competes well with all of the top performing OA-DA NR IQA algorithms. This is a fairly remarkable demonstration of the relationship between quantified image naturalness and perceptual image quality.

TABLE II. MEDIAN LINEAR CORRELATION COEFFICIENT ACROSS 1000 TRAIN-TEST COMBINATIONS ON THE LIVE IQA DATABASE. Italics indicate (OA/OU)-DA no-reference algorithms and boldface indicates the new OU-DU model algorithm.

                 JP2K    JPEG    WN      Blur    FF      All
    PSNR         0.8762  0.9029  0.9173  0.7801  0.8795  0.8592
    SSIM         0.9405  0.9462  0.9824  0.9004  0.9514  0.9066
    MS-SSIM      0.9746  0.9793  0.9883  0.9645  0.9488  0.9511
    CBIQ         0.8898  0.9454  0.9533  0.9338  0.8951  0.8955
    LBIQ         0.9103  0.9345  0.9761  0.9104  0.8382  0.9087
    BLIINDS-II   0.9386  0.9426  0.9635  0.8994  0.8790  0.9164
    DIIVINE      0.9233  0.9347  0.9867  0.9370  0.8916  0.9270
    BRISQUE      0.9229  0.9734  0.9851  0.9506  0.9030  0.9424
    TMIQ         0.8730  0.8941  0.8816  0.8530  0.8234  0.7856
    NIQE         0.9370  0.9564  0.9773  0.9525  0.9128  0.9147

B. Number of Natural Images

We addressed the question: 'How many natural images are needed to obtain a stable model that can correctly predict image quality?' Such an analysis provides an idea of the quality prediction power of the NSS features and how well they generalize with respect to image content. To undertake this evaluation, we varied the number of natural images K from which patches are selected and used for model fitting. Figure 2 shows the performance against the number of images. An error band is drawn around each point to indicate the standard deviation in performance across 100 iterations of different sample sets of K images. It may be observed that a stable natural model can be obtained using a small set of images.

[Fig. 2: Variation of performance with the number of natural images K. Error bands around each point indicate the standard deviation in performance across 100 iterations, for 5 ≤ K ≤ 125.]

IV. CONCLUSION

We have created a first-of-a-kind blind IQA model that assesses image quality without knowledge of anticipated distortions or human opinions of them. The quality of the distorted image is expressed as a simple distance metric between the model statistics and those of the distorted image. The new model outperforms the FR PSNR and SSIM indices and competes with top performing NR IQA models trained on human judgments of known distorted images. Such a model has great potential to be applied in unconstrained environments.

ACKNOWLEDGMENT

This research was supported by Intel and Cisco Corporations under the VAWN program and by the National Science Foundation under grants CCF-0728748 and IIS-1116656.

REFERENCES

[1] "Image obsessed," National Geographic, vol. 221, p. 35, 2012.
[2] H. R. Sheikh, M. F. Sabir, and A. C. Bovik, "A statistical evaluation of recent full reference image quality assessment algorithms," IEEE Trans. Image Process., vol. 15, no. 11, pp. 3440-3451, 2006.
[3] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Trans. Image Process. (to appear), 2012.
[4] A. K. Moorthy and A. C. Bovik, "Blind image quality assessment: From natural scene statistics to perceptual quality," IEEE Trans. Image Process., vol. 20, no. 12, pp. 3350-3364, 2011.
[5] M. Saad, A. C. Bovik, and C. Charrier, "Blind image quality assessment: A natural scene statistics approach in the DCT domain," IEEE Trans. Image Process., vol. 21, no. 8, pp. 3339-3352, 2012.
[6] P. Ye and D. Doermann, "No-reference image quality assessment using visual codebook," in IEEE Int. Conf. Image Process., 2011.
[7] H. Tang, N. Joshi, and A. Kapoor, "Learning a blind measure of perceptual image quality," in Int. Conf. Comput. Vision Pattern Recog., 2011.
[8] A. Mittal, G. S. Muralidhar, J. Ghosh, and A. C. Bovik, "Blind image quality assessment without human training using latent quality factors," IEEE Signal Process. Lett., vol. 19, pp. 75-78, 2011.
[9] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, 2004.
[10] D. L. Ruderman, "The statistics of natural images," Network: Computation in Neural Syst., vol. 5, no. 4, pp. 517-548, 1994.
[11] A. C. Bovik, "Perceptual image processing: Seeing the future," Proc. IEEE, vol. 98, no. 11, pp. 1799-1803, 2010.
[12] R. Hassen, Z. Wang, and M. Salama, "No-reference image sharpness assessment based on local phase coherence measurement," in IEEE Int. Conf. Acoust. Speech Sig. Process., 2010, pp. 2434-2437.
[13] A. K. Moorthy and A. C. Bovik, "Statistics of natural image distortions," in IEEE Int. Conf. Acoust. Speech Sig. Process., pp. 962-965.
[14] K. Sharifi and A. Leon-Garcia, "Estimation of shape parameter for generalized Gaussian distributions in subband decompositions of video," IEEE Trans. Circ. Syst. Video Technol., vol. 5, no. 1, pp. 52-56, 1995.
[15] N. E. Lasmar, Y. Stitou, and Y. Berthoumieu, "Multiscale skewed heavy tailed model for texture analysis," in IEEE Int. Conf. Image Process., 2009, pp. 2281-2284.
[16] C. Bishop, Pattern Recognition and Machine Learning. Springer New York, 2006, vol. 4.
[17] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Int. Conf. Comput. Vision, vol. 2, 2001, pp. 416-423.
[18] Z. Wang, E. P. Simoncelli, and A. C. Bovik, "Multiscale structural similarity for image quality assessment," in Asilomar Conf. Sig., Syst. Comput., vol. 2, 2003, pp. 1398-1402.