Restoring An Image Taken Through a Window Covered with Dirt or Rain
David Eigen, Dilip Krishnan, Rob Fergus


Dept. of Computer Science, Courant Institute, New York University
{deigen,dilip,fergus}@cs.nyu.edu

Abstract
Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from ...


Figure 2. A subset of rain model network weights, sorted by l2-norm. Left: first-layer filters, which act as detectors for the raindrops. Right: top-layer filters used to reconstruct the clean patch.

The first layer uses a "valid" convolution, while the last layer uses a "full" one (these are the same for the middle layers, since their kernels have 1×1 support). In our system, the input kernels' support is p_1 = 16 and the output support is p_L = 8. We use two hidden layers (i.e. L = 3), each with 512 units. As stated earlier, the middle layer kernel has support p_2 = 1. Thus, W_1 applies 512 kernels of size 16×16×3, W_2 applies 512 kernels of size 1×1×512, and W_3 applies 3 kernels of size 8×8×512. Fig. 2 shows examples of weights learned for the rain data.

2.2. Training

We train the weights W_l and biases b_l by minimizing the mean squared error over a dataset D = {(x_i, y_i^*)} of corresponding noisy and clean image pairs. The loss is

    J(\theta) = \frac{1}{2|D|} \sum_{i \in D} \| F(x_i) - y_i^* \|^2

where \theta = (W_1, \dots, W_L, b_1, \dots, b_L) are the model parameters. The pairs in the dataset D are random 64×64 pixel subregions of training images with and without corruption (see Fig. 4 for samples). Because the input and output kernel sizes of our network differ, the network F produces a 56×56 pixel prediction, which is compared against the middle 56×56 pixels of the true clean subimage y_i^*.

We minimize the loss using Stochastic Gradient Descent (SGD). The update for a single step at time t is

    \theta_{t+1} \leftarrow \theta_t - \eta_t \, (F(x_i) - y_i^*)^T \frac{\partial F(x_i)}{\partial \theta}

where \eta_t is the learning rate hyper-parameter and i is a randomly drawn index from the training set. The gradient is further backpropagated through the network F.

We initialize the weights at all layers by randomly drawing from a normal distribution with mean 0 and standard deviation 0.001. The biases are initialized to 0. The learning rate is 0.001 with decay, so that \eta_t = 0.001 / (1 + 5t \cdot 10^{-7}). We use no momentum or weight regularization.

Figure 3. Denoising near a piece of noise. (a) shows a 64×64 image region with dirt occluders (top) and the target ground-truth clean image (bottom). (b) and (c) show the results obtained using non-convolutionally and convolutionally trained networks, respectively. The top row shows the full output after averaging. The bottom row shows the signed error of each individual patch prediction for all 8×8 patches obtained using a sliding window in the boxed area, displayed as a montage. The errors from the convolutionally trained network (c) are less correlated with one another compared to (b), and cancel to produce a better average.

2.3. Effect of Convolutional Architecture

A key improvement of our method over [2] is that we minimize the error of the final image prediction, whereas [2] minimizes the error only of individual patches. We found this difference to be crucial for obtaining good performance on the corruption we address.

Since the middle layer convolution in our network has 1×1 spatial support, the network can be viewed as first patchifying the input, applying a fully-connected neural network to each patch, and averaging the resulting output patches. More explicitly, we can split the input image x into stride-1 overlapping patches {x_p} = patchify(x), and predict a corresponding clean patch y_p = f(x_p) for each x_p using a fully-connected multilayer network f. We then form the predicted image y = depatchify({y_p}) by taking the average of the patch predictions at pixels where they overlap. In this context, the convolutional network F can be expressed in terms of the patch-level network f as F(x) = depatchify({f(x_p) : x_p ∈ patchify(x)}).

In contrast to [2], our method trains the full network F, including patchification and depatchification. This drives a decorrelation of the individual predictions, which helps both to remove occluders and to reduce blur in the final output. To see this, consider two adjacent patches y_1 and y_2 with overlap regions y_1^o and y_2^o, and desired output y^o. If we were to train according to the individual predictions, the loss would minimize (y_1^o - y^o)^2 + (y_2^o - y^o)^2, the sum of their errors. However, if we minimize the error of their average, the loss becomes

    \left( \frac{y_1^o + y_2^o}{2} - y^o \right)^2 = \frac{1}{4} \left[ (y_1^o - y^o)^2 + (y_2^o - y^o)^2 + 2 (y_1^o - y^o)(y_2^o - y^o) \right]
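The training recipe in Section 2.2 is specified by a handful of scalars, so it can be sketched compactly. The NumPy snippet below is a minimal illustration, not the authors' implementation: kernels are assumed to be stored as (out_channels, in_channels, height, width), and the gradient `grad` is assumed to come from backpropagation through F, which is omitted here.

import numpy as np

# Kernel shapes described above: 512 kernels of 16x16x3, 512 of 1x1x512,
# and 3 of 8x8x512, stored as (out_channels, in_channels, kh, kw).
LAYER_SHAPES = [(512, 3, 16, 16), (512, 512, 1, 1), (3, 512, 8, 8)]

def init_params(shapes=LAYER_SHAPES, std=0.001, seed=0):
    """Weights drawn from N(0, 0.001^2); biases initialized to 0."""
    rng = np.random.default_rng(seed)
    weights = [rng.normal(0.0, std, size=s) for s in shapes]
    biases = [np.zeros(s[0]) for s in shapes]
    return weights + biases          # theta = (W_1..W_L, b_1..b_L)

def learning_rate(t, eta0=0.001, decay=5e-7):
    """Decay schedule eta_t = 0.001 / (1 + 5t * 10^-7) from Section 2.2."""
    return eta0 / (1.0 + decay * t)

def sgd_step(theta, grad, t):
    """One plain SGD step: no momentum, no weight regularization."""
    eta = learning_rate(t)
    return [p - eta * g for p, g in zip(theta, grad)]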
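To make the patch-level view of F in Section 2.3 concrete, here is a small NumPy sketch of patchify and depatchify under the sizes used in the paper (16×16 input patches, 8×8 output patches, 64×64 training crops). The patch-level network f below is only a placeholder that crops the center of its input; it stands in for the fully-connected network and is not part of the method itself.

import numpy as np

def patchify(x, p=16):
    """All stride-1 overlapping p x p patches of an H x W x C image."""
    H, W = x.shape[:2]
    return [x[i:i + p, j:j + p] for i in range(H - p + 1)
                                for j in range(W - p + 1)]

def depatchify(patches, out_shape, p=8):
    """Average overlapping p x p patch predictions; the patch from input
    offset (i, j) is placed at output offset (i, j), mirroring the final
    "full" 8x8 convolution."""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    H, W = out_shape[:2]
    k = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            acc[i:i + p, j:j + p] += patches[k]
            cnt[i:i + p, j:j + p] += 1
            k += 1
    return acc / cnt

# Shape bookkeeping: a 64x64 input gives 64-16+1 = 49 patch positions per
# axis, and placing 8x8 outputs at those offsets covers 49+8-1 = 56 pixels,
# matching the 56x56 prediction compared against the clean target.
f = lambda xp: xp[4:12, 4:12]            # placeholder for the patch network
x = np.random.rand(64, 64, 3)
y = depatchify([f(xp) for xp in patchify(x)], (56, 56, 3))
assert y.shape == (56, 56, 3)

Training through depatchify, rather than on each f(x_p) separately, is what introduces the cross term in the averaged loss above and drives the decorrelation of neighboring patch predictions.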
Figure 5. Example image containing dirt, and the restoration produced by our network. Note the detail preserved in high-frequency areas like the branches. The nonconvolutional network leaves behind much of the noise, while the median filter causes substantial blurring.

5. Experiments

5.1. Dirt

We tested dirt removal by running our network on pictures of various scenes taken behind dirt-on-glass panes. Neither the scenes nor the glass panes were present in the training set, ensuring that the network did not simply memorize and match exact patterns. We tested restoration of both real and synthetic corruption. Although the training set was composed entirely of synthetic dirt, it was representative enough for the network to perform well in both cases.

The network was trained using 5.8 million examples of 64×64 image patches with synthetic dirt, paired with ground-truth clean patches. We trained only on examples where the variance of the clean 64×64 patch was at least 0.001, and also required that at least 1 pixel in the patch had a dirt-mask value of at least 0.03. To compare to [2], we trained a non-convolutional patch-based network with patch sizes corresponding to our convolution kernel sizes, using 20 million 16×16 patches.

5.1.1 Synthetic Dirt Results

We first measure quantitative performance using synthetic dirt. The results are shown in Table 1. Here, we generated test examples using images and dirt masks held out from the training set, using the process described in Section 3.1. Our convolutional network substantially outperforms its patch-based counterpart. Both neural networks are much better than the three baselines, which do not make use of the structure in the corruption that the networks learn.

PSNR       | Input | Ours  | Nonconv | Median | Bilateral | BM3D
-----------|-------|-------|---------|--------|-----------|------
Mean       | 28.93 | 35.43 | 34.52   | 31.47  | 29.97     | 29.99
Std. Dev.  |  0.93 |  1.24 |  1.04   |  1.45  |  1.18     |  0.96
Gain       |   -   |  6.50 |  5.59   |  2.53  |  1.04     |  1.06

Table 1. PSNR for our convolutional neural network, the nonconvolutional patch-based network, and baselines on a synthetically generated test set of 16 images (8 scenes with 2 different dirt masks). Our approach significantly outperforms the other methods.

We also applied our network to two types of artificial noise absent from the training set: synthetic "snow" made from small white line segments, and "scratches" of random cubic splines. An example region is shown in Fig. 6. In contrast to the gain of +6.50 dB for dirt, the network leaves these corruptions largely intact, producing near-zero PSNR gains of -0.10 and +0.30 dB, respectively, over the same set of images. This demonstrates that the network learns to remove dirt specifically.

5.1.2 Dirt Results

Fig. 5 shows a real test image along with our output and the output of the patch-based network and median filter. Because of illumination changes and movement in the scenes, we were not able to capture ground-truth images for quantitative evaluation. Our method is able to remove most of the corruption while retaining details in the image, particularly around the branches and shutters. The non-convolutional
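As a rough companion to the numbers above, the sketch below computes PSNR and applies the two training-set filters from Section 5.1 (clean-patch variance at least 0.001, and at least one dirt-mask pixel at least 0.03). It assumes images and masks are floating-point arrays scaled to [0, 1]; that scaling, and the helper names, are assumptions for illustration rather than anything specified in the text.

import numpy as np

def psnr(reference, image, max_val=1.0):
    """Peak signal-to-noise ratio in dB, assuming values in [0, max_val]."""
    mse = np.mean((reference - image) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def keep_training_example(clean_patch, dirt_mask,
                          var_thresh=0.001, mask_thresh=0.03):
    """Filters from Section 5.1: the clean 64x64 patch must have variance
    >= 0.001, and at least one dirt-mask pixel must be >= 0.03."""
    return clean_patch.var() >= var_thresh and dirt_mask.max() >= mask_thresh

# The "Gain" row of Table 1 is the improvement over the corrupted input:
#   gain = psnr(clean, restored) - psnr(clean, corrupted)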