The Role of Bright Pixels in Illumination Estimation
Hamid Reza Vaezi Joze, Mark S. Drew, Graham D. Finlayson, Petra Aurora Troncoso Rey
School of Computer Science, Simon Fraser University; School of Computer Sciences, The University of East Anglia
Slide1
November 2012
The Role of Bright Pixels in Illumination Estimation
Hamid Reza Vaezi Joze
Mark S. Drew
Graham D. Finlayson
Petra Aurora Troncoso Rey
School of Computer Science, Simon Fraser University
School of Computer Sciences, The University of East Anglia
Slide2
Outline
Motivation
Related research
Extending the white-patch hypothesis
The effect of bright pixels in well-known methods
The bright-pixels framework
Further experiments
Conclusion
Slide3
Motivation
White-Patch method: one of the first colour constancy methods.
It estimates the illuminant colour by the maximum response of the three channels.
Few researchers or commercial cameras use it now, but recent research has reconsidered white patch:
Local mean calculation as a preprocessing step can significantly improve it [Choudhury & Medioni (CRICV09)], [Funt & Li (CIC2010)].
Analytically, the geometric mean of bright (specular) pixels is the optimal estimate for the illuminant, based on the dichromatic model [Drew et al. (CPCV12)].
Slide4
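The White-Patch (max-RGB) estimator mentioned above can be sketched in a few lines. This is a minimal illustration, not the authors' code; the toy scene and function name are made up for the example.

```python
import numpy as np

def white_patch(image):
    """Estimate the illuminant as the per-channel maximum response.

    image: H x W x 3 array of linear RGB values.
    Returns a unit-norm RGB illuminant estimate.
    """
    estimate = image.reshape(-1, 3).max(axis=0)
    return estimate / np.linalg.norm(estimate)

# Toy scene lit by a reddish illuminant: the brightest (white) patch
# reflects the illuminant colour directly.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 0.5, size=(8, 8, 3))
scene[0, 0] = [1.0, 0.8, 0.6]   # white patch under reddish light
print(white_patch(scene))        # ~ [0.71, 0.57, 0.42]
```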
Bright Pixels
Light Source
Highlights
White surface
Just a bright surface
Slide5
Previous Research
White Patch
Local mean calculation as a preprocessing step for White Patch.
Using specular reflection:
The colour of specular reflection is the same as that of the illumination, under the Neutral Interface Reflection assumption.
It usually falls in the bright areas of the image.
Illumination estimation methods:
Intersection of dichromatic planes [Tominaga and Wandell (JOSA89)].
Intersection of the lines generated by the chromaticity values of the pixels of each surface in the CIE chromaticity diagram [Lee (JOSA86)].
Extensions of Lee's algorithm with constraints on the colours of the illumination.
Slide6
Grey-based illumination estimation
Grey-world: the average reflectance in the scene is achromatic.
Shades-of-grey: the Minkowski p-norm of the scene reflectances is achromatic.
Grey-edge: the average of the reflectance differences in a scene is achromatic.
Slide7
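The grey-based family above can be written as a single Minkowski-norm estimator: p = 1 reduces to Grey-World, and large p approaches White-Patch. A minimal sketch (the function name and toy image are mine):

```python
import numpy as np

def shades_of_grey(image, p=6):
    """Minkowski p-norm illuminant estimate.

    p = 1 reduces to Grey-World (channel means); as p grows the
    estimate approaches White-Patch (channel maxima).
    """
    pixels = image.reshape(-1, 3).astype(float)
    estimate = (pixels ** p).mean(axis=0) ** (1.0 / p)
    return estimate / np.linalg.norm(estimate)

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, size=(16, 16, 3))
print(shades_of_grey(img, p=1))   # Grey-World estimate
print(shades_of_grey(img, p=50))  # close to the White-Patch estimate
```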
Extending the White Patch Hypothesis
Let us extend the white-patch hypothesis: assume an image always includes at least one of a white patch, specularities, or a light source.
Consider the gamut of bright pixels, in contradistinction to the single maximum channel response of the White-Patch method; this gamut includes the brightest pixels in the image.
Remove clipped pixels (those exceeding 90% of the dynamic range).
Define bright pixels as the top T% by luminance, given by R+G+B.
What is the probability of having an image without strong highlights, a source of light, or a white surface in the real world?
Slide8
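The bright-pixel selection described above (drop clipped pixels, then take the top T% by R+G+B) could look like the following sketch. The 90% clipping cut-off follows the slide; the helper name and the assumption of a [0, 1]-normalised image are mine.

```python
import numpy as np

def bright_pixels(image, top_percent=5.0, clip_level=0.9):
    """Return the top T% brightest unclipped pixels as an N x 3 array.

    Clipped pixels are those whose maximum channel exceeds 90% of the
    dynamic range (here assumed normalised to [0, 1]).
    """
    pixels = image.reshape(-1, 3).astype(float)
    unclipped = pixels[pixels.max(axis=1) <= clip_level]
    luminance = unclipped.sum(axis=1)            # brightness = R + G + B
    n_keep = max(1, int(len(unclipped) * top_percent / 100.0))
    order = np.argsort(luminance)[::-1]          # brightest first
    return unclipped[order[:n_keep]]

rng = np.random.default_rng(2)
img = rng.uniform(0.0, 1.0, size=(32, 32, 3))
top = bright_pixels(img, top_percent=5.0)
print(top.shape)  # (n, 3): roughly 5% of the unclipped pixels
```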
Simple Experiment
Test whether or not the actual illuminant colour falls inside the 2D gamut of the top 5% brightest pixels.
SFU Laboratory dataset: 88.16%
ColorChecker: 74.47%
GreyBall: 66.02%
Example images: white surface; specularity; a failure case.
Slide9
The Effect of Bright Pixels on Grey-based Methods
ColorChecker Dataset
Experiment on the effect of bright pixels: run each grey-based method on only the top 20% brightest pixels in each image, and compare to using all image pixels.
Using one fifth of the pixels, performance is better than or equal to using all of them.
Slide10
The Effect of Bright Pixels on the Gamut Mapping Method
White-patch gamut and canonical white-patch gamut introduced in [Vaezi Joze & Drew (ICIP12)].
The white-patch gamut is the gamut of the top 5% brightest pixels in an image.
Adding new constraints based on the white-patch gamut to the standard Gamut Mapping constraints outperforms the Gamut Mapping method and its extensions.
Canonical gamut vs. WP canonical gamut
Slide11
The Bright-Pixels Framework
If these bright pixels represent highlights, a white surface, or a light source, they approximate the colour of the illuminant.
Try the mean, median, geometric mean, and p-norm (p = 2, p = 4) of the top T% brightest pixels.
Slide12
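The candidate estimators over a bright-pixel set can be compared in a few lines. A sketch under my own assumptions: the function name is invented, and a small epsilon guards the geometric mean against log(0).

```python
import numpy as np

def illuminant_estimates(bright):
    """Candidate illuminant estimates from an N x 3 bright-pixel set."""
    eps = 1e-6  # guard against log(0); implementation choice, not from the slides
    estimates = {
        "mean":    bright.mean(axis=0),
        "median":  np.median(bright, axis=0),
        "geomean": np.exp(np.log(bright + eps).mean(axis=0)),
        "p=2":     (bright ** 2).mean(axis=0) ** 0.5,
        "p=4":     (bright ** 4).mean(axis=0) ** 0.25,
    }
    # Normalise each to a unit-length RGB direction.
    return {k: v / np.linalg.norm(v) for k, v in estimates.items()}

rng = np.random.default_rng(3)
bright = rng.uniform(0.3, 1.0, size=(50, 3))
for name, est in illuminant_estimates(bright).items():
    print(name, est)
```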
The Bright-Pixels Framework
A local mean calculation can help:
Resizing to 64 × 64 pixels by bicubic interpolation
Median filtering
Gaussian blurring filter
It does not help much on these images.
ColorChecker Dataset
Slide13
Datasets
SFU Laboratory [Barnard & Funt (CRA02)]: 321 images under 11 different measured illuminants.
Reprocessed version of ColorChecker [Gehler et al. (CVPR08)]: 568 images, both indoor and outdoor.
GreyBall [Ciurea & Funt (CIC03)]: 11346 images extracted from video recorded under a wide variety of imaging conditions.
HDR dataset [Funt et al. (2010)]: 105 HDR images.
Slide14
The Bright-Pixels Method
Remove clipped pixels.
Apply a local mean step: {none, median, Gaussian, bicubic resize}.
Select the top T% brightest pixels, T = {.5%, 1%, 2%, 5%, 10%}.
Estimate the illuminant by the shades-of-grey equation, p = {1, 2, 4, 8}.
If the estimated illuminant is not in the possible-illuminant gamut, use grey-edge instead.
Slide15
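Putting the listed steps together, the pipeline could be sketched as below. This is an illustration under my own simplifications, not the authors' implementation: the image is assumed normalised to [0, 1], the local mean is a cheap block average standing in for the blur/resize options, clipped pixels are removed after the local mean rather than before, and the possible-illuminant gamut check and grey-edge fallback are omitted.

```python
import numpy as np

def block_mean(image, block=4):
    """Cheap local-mean preprocessing: average over block x block tiles
    (a stand-in for the slide's median/Gaussian/bicubic options)."""
    h, w, _ = image.shape
    h, w = h - h % block, w - w % block
    tiles = image[:h, :w].reshape(h // block, block, w // block, block, 3)
    return tiles.mean(axis=(1, 3))

def bright_pixels_estimate(image, top_percent=2.0, p=2, clip_level=0.9):
    """Bright-pixels illuminant estimate, following the slide's steps:
    remove clipped pixels, local mean, select top T% by R+G+B,
    then a shades-of-grey p-norm over the selected pixels."""
    smoothed = block_mean(image)
    pixels = smoothed.reshape(-1, 3)
    pixels = pixels[pixels.max(axis=1) <= clip_level]   # drop clipped pixels
    lum = pixels.sum(axis=1)                            # brightness = R + G + B
    n = max(1, int(len(pixels) * top_percent / 100.0))
    top = pixels[np.argsort(lum)[::-1][:n]]             # top T% brightest
    est = (top ** p).mean(axis=0) ** (1.0 / p)          # shades-of-grey on them
    return est / np.linalg.norm(est)

rng = np.random.default_rng(4)
img = rng.uniform(0.0, 1.0, size=(64, 64, 3)) * np.array([1.0, 0.9, 0.7])
print(bright_pixels_estimate(img))  # leans toward the reddish illuminant
```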
Further Experiment
Comparison with well-known colour constancy methods
Slide16
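Comparisons in this literature are conventionally reported as the angular error between the estimated and ground-truth illuminant directions; the slides do not state the metric, so treating it as angular error is an assumption.

```python
import numpy as np

def angular_error(estimate, truth):
    """Angle in degrees between estimated and true illuminant RGBs.
    Scale-invariant: both vectors are normalised before comparison."""
    e = np.asarray(estimate, float) / np.linalg.norm(estimate)
    t = np.asarray(truth, float) / np.linalg.norm(truth)
    cos = np.clip(np.dot(e, t), -1.0, 1.0)   # guard against rounding
    return np.degrees(np.arccos(cos))

print(angular_error([1.0, 1.0, 1.0], [2.0, 2.0, 2.0]))  # essentially 0
print(angular_error([1.0, 0.8, 0.6], [1.0, 1.0, 1.0]))  # a reddish-cast error
```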
Optimal parameters
Dataset                    p    T     Blurring
SFU Laboratory Dataset     2    .5%   none
ColorChecker Dataset       2    2%    Gaussian
GreyBall Dataset           2    1%    none
HDR Dataset                8    1%    Gaussian

Gaussian blurring is best for high-resolution images, and no blurring for lower-resolution images.
Even a .5% threshold is enough for in-laboratory images; for real images the threshold should be 1-2%.
Slide17
Conclusion
Based on the current datasets in the field, we saw that the simple idea of using the p-norm of bright pixels, after a local mean preprocessing step, can perform surprisingly competitively with complex methods.
Either the probability of encountering a real-world image without strong highlights, a light source, or a white surface is not great, or the current colour constancy datasets are not good indicators of performance on possible real-world images.
Slide18
Questions?
Thank you.