Single Exposure Enhancement and Reconstruction Some slides are from: J. Kosecka, Y. Chuang, A. Efros, C. B. Owen, W. Freeman 1
Reconstruction as an Inverse Problem Original image f → distortion & sampling h → + noise n → measurements g = h(f) + n → reconstruction algorithm → estimate f̂ 2
g = h(f) + n  ⇒  f̂ = h⁻¹(g − n) Typically: the distortion h is non-invertible or ill-posed, and the noise n is unknown; only its statistical properties can be learnt. 3
Typical Degradation Sources Low illumination Optical distortions (geometric, blurring) Sensor distortion (quantization; spatial, temporal, and spectral sampling; sensor noise) Atmospheric attenuation (haze, turbulence, ...) 4
Today's Topics Spatial and spectral sampling → de-mosaicing Contaminated noise → de-noising Geometrical distortion → geometrical rectification Illumination → white balancing 5
Key point: Prior knowledge of Natural Images 6
The Image Prior P(f): a probability distribution defined over the space of all images 7
Problem: P(image) is complicated to model Defined over a huge-dimensional space. Sparsely sampled. Known to be non-Gaussian. from Mumford & Huang, 2000 8
Spatial and Spectral Sampling Demosaicing CCD CMOS 9
Spatial/Spectral Sampling Possible Configurations light beam splitter 3 CCD 1 CCD 10
Color Filter Arrays (CFAs) Bayer pattern 11
Image Mosaicing 12
Inverse Problem: Image Demosaicing The CCD sensor in a digital camera acquires a single color component for each pixel. Problem: How to interpolate the missing components? 13
Demosaicing - Types of Solutions Exploiting spatial coherence: Non-adaptive: nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, etc. Adaptive: edge-sensing interpolation Exploiting color coherence: Non-adaptive / Adaptive 14
Nearest neighbor interpolation: Assumption: Neighboring pixels are similar. Conclusion: Take the value of a missing pixel from its nearest neighbor. Assumes a piecewise constant function. Problematic in piecewise linear areas and near edges. 15
Linear interpolation 1D: G(k) = (G(k−1) + G(k+1)) / 2 Assumes a piecewise linear function. Artifacts near edges. original input / linear interpolation 16
Bilinear interpolation 2D: 17
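As a minimal sketch of bilinear green interpolation on a Bayer mosaic (the function name, the scipy convolution, and the boundary mode are illustrative choices, not from the slides): each missing green value is the average of its four measured green neighbors.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_green(mosaic, green_mask):
    """Estimate missing green values as the average of the 4 neighbors.

    mosaic: 2D array of raw sensor values; green_mask: True where green
    was actually measured (a checkerboard in the Bayer pattern).
    """
    kernel = np.array([[0.0, 0.25, 0.0],
                       [0.25, 0.0, 0.25],
                       [0.0, 0.25, 0.0]])
    interp = convolve(mosaic * green_mask, kernel, mode='mirror')
    # keep measured samples, use the interpolated values elsewhere
    return np.where(green_mask, mosaic, interp)
```

The red and blue channels would be handled analogously with their own masks and (diagonal/axial) averaging kernels.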
Mosaic Image 18
Bilinear Interpolation 19
Bilinear Interpolation 20
Bilinear Interpolation 21
Adaptive Interpolation 2D (example 1): Assumption: Neighboring pixels are similar if there is no edge between them. Δh = |G45 − G43|, Δv = |G34 − G54| G44 = (G34 + G54)/2 if Δh ≫ Δv; (G43 + G45)/2 if Δv ≫ Δh; (G34 + G43 + G45 + G54)/4 otherwise. Problem: G44 does not exploit all available information. 22
Adaptive Interpolation 2D (example 2): Use soft weights from a monotonically descending profile ϕ (e.g. a Gaussian): w45 = ϕ(B46 − B44), w34 = ϕ(B24 − B44), w43 = ϕ(B42 − B44), w54 = ϕ(B64 − B44) G44 = (w34 G34 + w43 G43 + w45 G45 + w54 G54) / (w34 + w43 + w45 + w54) 23
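The gradient-test variant (example 1) can be sketched for a single missing-green location as follows; the threshold value T and the function name are assumptions for illustration.

```python
import numpy as np

def adaptive_green(G, i, j):
    """Edge-sensing green interpolation at one missing location (i, j).

    G holds the measured green samples of the mosaic; the four axial
    neighbors of (i, j) are assumed to be measured greens.
    """
    dh = abs(G[i, j + 1] - G[i, j - 1])  # horizontal gradient
    dv = abs(G[i - 1, j] - G[i + 1, j])  # vertical gradient
    T = 10.0  # "much larger" threshold (assumed value)
    if dh > dv + T:      # strong horizontal change: interpolate vertically
        return (G[i - 1, j] + G[i + 1, j]) / 2
    if dv > dh + T:      # strong vertical change: interpolate horizontally
        return (G[i, j - 1] + G[i, j + 1]) / 2
    return (G[i - 1, j] + G[i + 1, j] + G[i, j - 1] + G[i, j + 1]) / 4
```

Near a vertical edge the horizontal gradient dominates, so the value is taken from the vertical neighbors only, avoiding the cross-edge blur of bilinear interpolation.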
Color Coherence: Diffuse model: (R, G, B) = (K_R · L, K_G · L, K_B · L) In the log domain: (r, g, b) = log(R, G, B) = (log K_R, log K_G, log K_B) + ℓ, so r(x,y) = log K_R + ℓ(x,y), g(x,y) = log K_G + ℓ(x,y), b(x,y) = log K_B + ℓ(x,y) Most edges come from the luminance ℓ (common to all bands); edges in (K_R, K_G, K_B) are uncommon. Assumption: Color bands are highly correlated in high frequencies. 24
Color Coherence: Example R = R_L + R_H Across a luminance edge (common), R and G change together, so the difference R − G stays roughly constant. Across a chrominance edge (uncommon), R − G changes. 25
Outcomes: In diffuse regions the derivatives match: R_x(x) = G_x(x) = B_x(x), so R(x) − G(x) ≈ Δrg (a constant offset). Spatial coherence: G(k) = (G(k−1) + G(k+1))/2 Color coherence: Δrg = average{R(x) − G(x)}; R(k) = G(k) + Δrg Problem: assumes piecewise constant; fails near luminance and chrominance edges. Solution: assume piecewise linear + adaptive interpolation. 26
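The color-coherence rule, R(k) = G(k) + Δrg, can be sketched in 1D for a line where green is fully known and red is sampled sparsely (the function name and the use of a single global average offset are simplifying assumptions):

```python
import numpy as np

def interp_red_1d(row_g, row_r, r_mask):
    """Fill missing red samples on a line with a full green channel,
    assuming the chrominance offset R - G is locally constant."""
    diff = np.mean(row_r[r_mask] - row_g[r_mask])  # average R - G offset
    estimate = row_g + diff                        # R(k) = G(k) + offset
    return np.where(r_mask, row_r, estimate)       # keep measured samples
```

In practice the offset would be averaged over a small window rather than the whole line, and combined with adaptive weighting near edges, as the slides go on to do.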
Exploiting color coherence for green interpolation: Since R_x(x) = G_x(x), two estimates for the missing green at k are G̃_k = G_{k−1} + (R_k − R_{k−2})/2 and G̃_k = G_{k+1} + (R_k − R_{k+2})/2 These estimates for G_k can be improved using an adaptive scheme. 27
Edge-Sensing Interpolation using Color Coherence: G̃44 = (w34 G̃34 + w43 G̃43 + w45 G̃45 + w54 G̃54) / (w34 + w43 + w45 + w54) with color-corrected neighbor estimates G̃45 = G45 + (B44 − B46)/2, G̃34 = G34 + (B44 − B24)/2, G̃43 = G43 + (B44 − B42)/2, G̃54 = G54 + (B44 − B64)/2 Red is then interpolated analogously from the diagonal neighbors: R̃_ij = R_ij + (G44 − G̃_ij), R44 = (w33 R̃33 + w35 R̃35 + w53 R̃53 + w55 R̃55) / (w33 + w35 + w53 + w55) 28
Bilinear 29
Adaptive + color coherence 30
Bilinear 31
Adaptive + color coherence 32
Bilinear / Adaptive 33
Image Denoising 34
Noise Models Additive noise: g = f + n Multiplicative noise: g = f + f·n General case: P(g | f) 35
Example Noise additive noise multiplicative noise 36
Examples of independent and identically distributed (i.i.d.) noise Gaussian white noise: P(g − f) = (1 / (√(2π) σ)) · e^{−(g−f)² / (2σ²)} (plot: zero-mean Gaussian density, σ = 20) 37
Additive uniform noise on [a, b]: P(g − f) = 1/(b − a) if a ≤ g − f ≤ b, and 0 otherwise (plot: uniform density, b − a = 20) 38
Impulse noise (salt & pepper): P(g | f) = P_a for g = a; P_b for g = b; 1 − P_a − P_b for g = f Note: this noise is not additive! 39
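The three noise models above can be simulated directly; the image size, intensity values, and probabilities below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                    # clean constant image

g_gauss = f + rng.normal(0.0, 20.0, f.shape)    # additive Gaussian, sigma = 20
g_unif = f + rng.uniform(-10.0, 10.0, f.shape)  # additive uniform, b - a = 20

# impulse (salt & pepper): overwrite random pixels with extreme values
g_sp = f.copy()
u = rng.random(f.shape)
g_sp[u < 0.05] = 0.0      # pepper with probability P_a = 0.05
g_sp[u > 0.95] = 255.0    # salt with probability P_b = 0.05
```

Note how the impulse model replaces pixel values rather than adding to them, which is why it is not additive.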
Bayesian denoising Assume the noise component is i.i.d. Gaussian: g = f + n, where n ~ N(0, σ) A MAP estimate for the original f: f̂ = argmax_f P(f | g) = argmax_f P(g | f) P(f) / P(g) Using Bayes' rule this leads to: f̂ = argmin_f {(g − f)² + λ ψ(f)} where ψ(f) is a penalty for non-probable f. 40
First-try prior Similarity of neighboring pixels: ψ(f) = Σ_p Σ_{q∈N_p} W_s(p − q) (f_p − f_q)² W_s is a Gaussian profile giving less weight to distant pixels. 41
This leads to Gaussian smoothing filters: f̂_p = (1 − α) g_p + α · [Σ_{q∈N_p} W_s(p − q) g_q] / [Σ_{q∈N_p} W_s(p − q)] Reduces noise but blurs out edges. The parameter α depends on the noise variance. 42
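The Gaussian smoothing filter with its (1 − α) blend can be sketched in a few lines; the function name, the default parameter values, and the use of scipy's separable Gaussian are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_denoise(g, sigma_s=1.5, alpha=0.8):
    """Blend the noisy image with its Gaussian-smoothed version.

    A larger alpha (stronger smoothing) suits a larger noise variance,
    at the cost of more edge blur.
    """
    return (1.0 - alpha) * g + alpha * gaussian_filter(g, sigma_s)
```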
Noisy Image 43
Filtered Image 44
The role of the prior term: Gaussian smoothing f̂ = argmin_f Σ_p (g_p − f_p)² + λ Σ_p Σ_{q∈N_p} W_s(p − q) (f_p − f_q)² (plot: data term, prior term, and their sum as a function of f_p) 45
Example 2: Prior Term Edge-sensitive similarity: ψ(f) = Σ_p Σ_{q∈N_p} W_s(p − q) log(1 + (f_p − f_q)²) 46
This leads to the bilateral filter (edge-preserving smoothing): f̂_p = (1 − α) g_p + α · [Σ_{q∈N_p} W_S(p − q) W_R(g_p − g_q) g_q] / [Σ_{q∈N_p} W_S(p − q) W_R(g_p − g_q)] W_S is a monotonically descending spatial weight. W_R is a monotonically descending radiometric weight. 47
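The bilateral filter formula above can be implemented directly; this brute-force sketch (function name and default parameters are assumptions) is O(N·radius²) and meant only to make the weighting explicit.

```python
import numpy as np

def bilateral(g, sigma_s=2.0, sigma_r=30.0, radius=3):
    """Brute-force bilateral filter on a 2D grayscale image."""
    H, W = g.shape
    out = np.empty((H, W), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    Ws = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # spatial weight
    gp = np.pad(g, radius, mode='reflect').astype(float)
    for i in range(H):
        for j in range(W):
            patch = gp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # radiometric weight: down-weight pixels with different intensity
            Wr = np.exp(-(patch - g[i, j])**2 / (2 * sigma_r**2))
            w = Ws * Wr
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

Because W_R kills contributions from across a strong intensity edge, the edge survives while noise on either side is averaged away.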
Bilateral filter (examples with different parameter settings) from P. Milanfar. 48
Gaussian Smoothing W_S = e^{−‖p − q‖² / σ_s²} Smooths edges. slide from Darya Frolova and Denis Simakov 49
Bilateral Filtering W_S · W_R = e^{−(‖p − q‖² / σ_s² + (g_p − g_q)² / σ_R²)} Preserves discontinuities. slide from Darya Frolova and Denis Simakov 50
Noisy Image 51
Filtered Image 52
Edge preserving smoothing Gaussian smoothing 53
Bilateral Filter - Example 54
55
56
The role of the prior term: robust penalty f̂ = argmin_f Σ_p {(g_p − f_p)² + ψ_robust(f_p − f_q)} (plot: ψ_robust(v), a penalty that saturates for large v) 57
The role of the prior term: robust penalty f̂ = argmin_f Σ_p {(g_p − f_p)² + ψ_robust(f_p − f_q)} (plot: data term, prior term, and their sum) 58
Example 3: Prior Term Images are homogeneous; images are self-similar. 59
Example 3: Prior Term ψ(f) = Σ_p Σ_q W_N(N_p − N_q) (f_p − f_q)² N_p (and N_q) denotes the local neighborhood (patch) around p (and q). 60
Neighborhood weights: examples 61
This leads to the non-local means filter: f̂_p = (1 − α) g_p + α · [Σ_q W_N(N_p − N_q) g_q] / [Σ_q W_N(N_p − N_q)] W_N is a monotonically descending function, e.g. W_N(N_p − N_q) = e^{−‖N_p − N_q‖² / σ_N²} 62
Patch-based Denoising (NL-means) Each pixel is reconstructed as a linear combination of noisy patches v(N_i), weighted by w_1, w_2, ..., w_N according to patch similarity. 63
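A direct (unoptimized) NL-means sketch, following the formula two slides back; the patch/search-window sizes and the function name are assumptions, and real implementations restrict and accelerate the patch comparisons.

```python
import numpy as np

def nl_means(g, patch=3, search=5, sigma_n=10.0):
    """Non-local means: average pixels whose surrounding patches are similar."""
    r, s = patch // 2, search // 2
    gp = np.pad(g, r + s, mode='reflect').astype(float)
    H, W = g.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = i + r + s, j + r + s
            ref = gp[ci - r:ci + r + 1, cj - r:cj + r + 1]  # patch N_p
            wsum = vsum = 0.0
            for di in range(-s, s + 1):          # scan the search window
                for dj in range(-s, s + 1):
                    qi, qj = ci + di, cj + dj
                    cand = gp[qi - r:qi + r + 1, qj - r:qj + r + 1]  # N_q
                    w = np.exp(-((ref - cand)**2).sum() / sigma_n**2)
                    wsum += w
                    vsum += w * gp[qi, qj]
            out[i, j] = vsum / wsum
    return out
```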
Non-Local means - Example 64
Non-Local means - Example 65
White Balancing Also known as Illumination Estimation Color constancy Color correction 66
Experiment 1: Yellow illumination From David Brainard 67
Experiment 2: Blue illumination From David Brainard 68
Experiment 3: Blue illumination From David Brainard 69
Yellow illumination Blue illumination Blue illumination 70
Conclusions: The eye cares more about an object's intrinsic color than about the color of the light leaving the object. The HVS applies color adaptation. 71
Color under different illuminations from S. Alpert & D. Simakov 72
Experiment 2: which one looks more natural? 73
White Balance Problem The eye cares more about an object's intrinsic color than about the color of the light leaving the object. The HVS applies color adaptation. When watching a picture on screen or print, we adapt to the illuminant of the room, not that of the scene in the picture. We need to discount the color of the light source → white balancing. 74
White Balance & Film Different types of film for fluorescent, tungsten, and daylight illumination. Need to change film! Electronic & digital imaging are more flexible. 75
Image formation Image color: E_k = ∫ e(λ) s(λ) ρ_k(λ) dλ, k = R, G, B where e is the illumination, s the object reflectance, and ρ_k the sensor response. from S. Alpert & D. Simakov 76
White Balancing If Spectra of Light Source Changes Spectra of Reflected Light Changes The goal : Evaluate the surface color as if it was illuminated with white light (canonical) from S. Alpert & D. Simakov 77
Von Kries adaptation Multiply each channel by a gain factor: (R, G, B)ᵀ = diag(W_r, W_g, W_b) · (E_1, E_2, E_3)ᵀ Note that the light source could have a more complex effect: an arbitrary 3×3 matrix, or a more complex spectrum transformation. The diagonal model can be justified if the sensor spectral sensitivity functions are narrow. 78
Manual white balancing - white spot Take a picture of a neutral object (white or gray). Deduce the weight of each channel: if the object is recorded as (r_w, g_w, b_w), use weights (1/r_w, 1/g_w, 1/b_w). 79
Manual white balancing light selection Select type of light from a finite set. Correct accordingly. 80
Automatic white balancing - gray world assumption The average color in the image is gray. Use weights (W_R, W_G, W_B) = (1/R̄, 1/Ḡ, 1/B̄) where R̄, Ḡ, B̄ are the image means. Note that this also sets the exposure/brightness. Usually assumes 18% gray. 81
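The gray-world correction is a one-liner per channel; the function name and the optional brightness-preserving rescale (multiplying the 1/mean gains by the overall mean) are illustrative additions.

```python
import numpy as np

def gray_world(img, preserve_brightness=True):
    """Gray-world white balance on an (H, W, 3) float image."""
    means = img.reshape(-1, 3).mean(axis=0)  # channel means (R, G, B)
    gains = 1.0 / means                      # weights from the slide
    if preserve_brightness:
        gains *= means.mean()  # keep overall intensity roughly unchanged
    return img * gains
```

After correction all three channel means coincide, i.e. the average color is gray.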
Automatic white balancing Brightest pixel assumption Highlights usually have the color of the light source Apply white balancing by using the highlight pixels Problem: Highlight pixels are contaminated with diffuse components 82
How can we detect highlight pixels? Following G. J. Klinker, S. A. Shafer and T. Kanade. A Physical Approach to Color Image Understanding. International Journal of Computer Vision, 1990. Two reflectance components total = diffuse + specular = + from S. Alpert & D. Simakov 83
Reminder - Image Formation Diffuse reflection: I_diff = K(λ) e_p(λ) (N · L) Specular reflection: I_spec = K_s(λ) e_p(λ) (R · V)ⁿ e_p - the point light intensity. K, K_s ∈ [0,1] - the surface diffuse / specular reflectivity. N - the surface normal, L - the light direction, V - the viewing direction, R - the mirror reflection of L. 84
Diffuse object in RGB space Linear cluster in color space from S. Alpert & D. Simakov 85
Specular object in RGB space Linear cluster in the direction of the illuminant color from S. Alpert & D. Simakov 86
Combined reflectance in RGB space Skewed T from S. Alpert & D. Simakov 87
Combined reflectance of several objects Several T-clusters Specular lines are parallel (why?) from S. Alpert & D. Simakov 88
Step I - Group objects by region grouping Group together diffuse and specular image parts of the same object. Grow regions in the image domain so as to form clusters in the color domain. from S. Alpert & D. Simakov 89
Step II: Decompose into diffuse + specular Coordinate transform in color space, using the matte color C_matte, the specular color C_spec, and their cross product C_matte × C_spec as a basis: (matte, specular, noise)ᵀ = [C_matte  C_spec  C_matte × C_spec]⁻¹ (R, G, B)ᵀ from S. Alpert & D. Simakov 90
Decompose into matte + specular (2) Applying [C_matte  C_spec  C_matte × C_spec]⁻¹ in RGB space separates the two components. from S. Alpert & D. Simakov 91
Decompose into matte + specular (3) The image is the sum of a matte component along C_matte and a specular component along C_spec. from S. Alpert & D. Simakov 92
Final results: Reflectance Decomposition input image matte component specular component = + G. J. Klinker, S. A. Shafer and T. Kanade. A Physical Approach to Color Image Understanding. International Journal of Computer Vision, 1990. from S. Alpert & D. Simakov 93
Geometric Distortion 94
Geometric Distortion Correction No distortion / pincushion / barrel Radial distortion of the image is caused by imperfect lenses. Deviations are most noticeable for rays that pass through the edge of the lens. 95
General Geometric Transformation Operations depend on the pixel's coordinates, independent of pixel values: x′ = f_x(x, y), y′ = f_y(x, y) I(x, y) = I′(f_x(x, y), f_y(x, y)) 96
Radial Distortions No distortion / barrel distortion / pincushion distortion Distortion model (inverse mapping): r_u = r_d + a·r_d² + b·r_d³ r_u = undistorted radius, r_d = distorted radius 97
Geometric Rectification using Calibration target 98
Parameter Estimation Minimum least-squared-error solution: stack one equation r_u − r_d = a·r_d² + b·r_d³ per matched target point: M p = b, with rows M_i = ((r_d⁽ⁱ⁾)², (r_d⁽ⁱ⁾)³), p = (a, b)ᵀ, b_i = r_u⁽ⁱ⁾ − r_d⁽ⁱ⁾ p̂ = (MᵀM)⁻¹ Mᵀ b 99
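The least-squares estimate of the distortion coefficients can be sketched with numpy (the function name is an assumption; `lstsq` computes the same pseudo-inverse solution numerically):

```python
import numpy as np

def fit_radial(r_d, r_u):
    """Least-squares fit of r_u = r_d + a*r_d**2 + b*r_d**3
    from matched calibration-target radii (1D arrays)."""
    M = np.column_stack([r_d**2, r_d**3])  # one row per measured point
    rhs = r_u - r_d                        # right-hand side b
    p, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return p  # estimated (a, b)
```

With the coefficients in hand, the distorted image is resampled through the inverse mapping r_u(r_d) to produce the rectified image.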
Correcting radial distortion from Helmut Dersch 100
Correcting Radial Distortions Before After http://www.grasshopperonline.com/barrel_distortion_correction_software.html 101
Single Image Enhancement - Summary Demosaicing - Sensor correction Denoising - Sensor/Acquisition correction White Balance - Color correction Barrel/Pincushion Distortion - Geometric correction (Lens) 102
THE END 103