Super-Resolution. Shai Avidan, Tel-Aviv University


1 Super-Resolution. Shai Avidan, Tel-Aviv University
2 Slide Credits (partial list): Rick Szeliski, Steve Seitz, Alyosha Efros, Yacov Hel-Or, Yossi Rubner, Miki Elad, Marc Levoy, Bill Freeman, Fredo Durand, Sylvain Paris
3 Basic Super-Resolution Idea. Given: a set of low-quality images. Required: fusion of these images into a higher-resolution image. How? Comment: this is an actual super-resolution reconstruction result.
4 Example: Surveillance. 40 images, ratio 1:4.
5 Example: Enhance Mosaics.
6
7 Super-Resolution Agenda: The basic idea; the image formation process; formulation and solution; special cases and related problems; limitations of super-resolution; SR in time.
8 Intuition: For a given bandlimited image, the Nyquist sampling theorem states that if the uniform 2D sampling grid is fine enough, perfect reconstruction is possible.
9 Intuition: Due to our limited camera resolution, we sample using an insufficient 2D grid.
10 Intuition: However, if we take a second picture, shifting the camera slightly to the right, we obtain a second, shifted sampling grid.
11 Intuition: Similarly, by shifting down we get a third image.
12 Intuition: And finally, by shifting down and to the right we get the fourth image.
13 Intuition: By interleaving the four images we obtain sampling at the desired resolution, and thus perfect reconstruction is guaranteed.
14 Rotation/Scale/Disp.: What if the camera displacement is arbitrary? What if the camera rotates? Gets closer to the object (zoom)?
15 Rotation/Scale/Disp.: There is no sampling theorem covering this case.
16 Agenda: Modeling the Super-Resolution Problem (defining the relation between the given and the desired images); the Maximum-Likelihood Solution (a simple solution based on the measurements); Bayesian Super-Resolution Reconstruction (taking into account the behavior of images); Some Results and Variations (examples, robustifying, handling color); Super-Resolution: A Summary (the bottom line).
17 Chapter 1: Modeling the Super-Resolution Problem
18 The Model: high-resolution image X → geometric warp (F_k, F_1 = I) → blur (H_k) → decimation (D_k) → + additive noise (V_k) → low-resolution images Y_k. That is, Y_k = D_k H_k F_k X + V_k, k = 1..N, where the operators D_k, H_k, F_k are assumed known.
19 The Model as One Equation: [Y_1; Y_2; …; Y_N] = [D_1 H_1 F_1; D_2 H_2 F_2; …; D_N H_N F_N] X + [V_1; V_2; …; V_N], i.e. the N equations Y_k = D_k H_k F_k X + V_k stacked on top of each other.
20 A Rule of Thumb: In the noiseless case we have Y_k = D_k H_k F_k X. Clearly, this linear system of equations should have more equations than unknowns in order to make a unique least-squares solution possible. Example: assume that we have N images of 100-by-100 pixels, and we would like to produce an image of size 300-by-300. Then we should require N ≥ 9.
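The counting argument is simple arithmetic; a small sketch (the helper name is illustrative):

```python
# Counting equations vs. unknowns for the noiseless system Y_k = D_k H_k F_k X:
# each M-by-M low-res frame contributes M^2 equations, while the desired
# high-res image has (r*M)^2 unknowns, so at least r^2 frames are needed.
def min_frames(lr_size, hr_size):
    """Minimum number of low-res frames for a unique least-squares solution."""
    unknowns = hr_size * hr_size
    eqs_per_frame = lr_size * lr_size
    return -(-unknowns // eqs_per_frame)  # ceiling division

# The slide's example: 100x100 inputs, 300x300 output -> N >= 9.
assert min_frames(100, 300) == 9
```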
21 Chapter 2: The Maximum-Likelihood Solution
22 Super-Resolution Model: high-resolution image X → geometric warp (F_k, F_1 = I) → blur (H_k) → decimation (D_k) → + additive noise (V_k) → low-resolution exposures Y_k. Y_k = D_k H_k F_k X + V_k, V_k ~ N(0, σ_n²), k = 1..N.
23 Simplified Model: the same blur H and decimation D are assumed for all frames: Y_k = D H F_k X + V_k, V_k ~ N(0, σ_n²), k = 1..N.
24 The Super-Resolution Problem: Y_k = D H F_k X + V_k, V_k ~ N(0, σ_n²). Given: Y_k, the measured images (noisy, blurry, down-sampled…); H, the blur, can be extracted from the camera characteristics; D, the decimation, is dictated by the required resolution ratio; F_k, the warp, can be estimated using motion estimation; σ_n, the noise level, can be extracted from the camera / image. Recover the HR image X.
25 The Model as One Equation: Y = [Y_1; …; Y_N] = [D H F_1; …; D H F_N] X + [V_1; …; V_N] = G X + V. With r = resolution factor = 4, M-by-M = size of the frames, and N = number of frames = 10: size of Y = [10M² × 1], size of G = [10M² × 16M²], size of X = [16M² × 1]. The linear-algebra notation is intended only to develop the algorithm.
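As a sanity check on the bookkeeping, the shapes of the stacked system for the slide's numbers (r = 4, N = 10, M-by-M frames) can be computed directly; a sketch with illustrative names:

```python
# Shapes of the stacked system Y = G X + V for N frames of size M-by-M
# and resolution factor r (the slide uses r = 4, N = 10).
def stacked_shapes(M, N, r):
    y = (N * M * M, 1)               # N stacked low-res frames, as one vector
    g = (N * M * M, r * r * M * M)   # each D H F_k block is [M^2 x (rM)^2]
    x = (r * r * M * M, 1)           # high-res image as a vector
    return y, g, x

y, g, x = stacked_shapes(M=100, N=10, r=4)
assert y == (100000, 1) and g == (100000, 160000) and x == (160000, 1)
```

With M = 100 this confirms the slide's [10M² × 1], [10M² × 16M²], [16M² × 1] sizes, and also why G is never formed explicitly.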
26 SR Solutions: Maximum Likelihood (ML): X̂ = argmin_X Σ_{k=1..N} ‖D H F_k X − Y_k‖², often an ill-posed problem! Maximum A-posteriori Probability (MAP): X̂ = argmin_X Σ_{k=1..N} ‖D H F_k X − Y_k‖² + λ A(X), where A(X) is a smoothness constraint (regularization).
27 ML Reconstruction (LS). Minimize: ε²(X) = Σ_{k=1..N} ‖D H F_k X − Y_k‖². Thus, require: Σ_{k=1..N} F_k^T H^T D^T (D H F_k X̂ − Y_k) = 0, i.e. [Σ_k F_k^T H^T D^T D H F_k] X̂ = Σ_k F_k^T H^T D^T Y_k; writing this as A X̂ = B gives X̂ = A⁻¹ B.
28 LS Iterative Solution: steepest descent, X̂_{n+1} = X̂_n − β Σ_{k=1..N} F_k^T H^T D^T (D H F_k X̂_n − Y_k). The term (D H F_k X̂_n − Y_k) is the simulated error; multiplying it by F_k^T H^T D^T is the back projection. All the above operations can be interpreted as operations performed on images; there is no actual need to use the matrix-vector notation shown here.
29 LS Iterative Solution: for k = 1..N, apply to X̂_n the geometric warp (F_k), convolve with H, down-sample (D), and subtract Y_k; then up-sample (D^T), convolve with H^T, and apply the inverse geometric warp (F_k^T). Sum over k, scale by β, and subtract from X̂_n to obtain X̂_{n+1}.
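The loop above can be sketched in numpy. This is a toy illustration, not the lecture's code: F_k is taken as an integer circular shift, H as a 3x3 box blur (so H^T = H), and D as 2x subsampling; all function names and the synthetic scene are assumptions.

```python
import numpy as np

def shift(img, dy, dx):                     # F_k: integer geometric warp
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def blur(img):                              # H: 3x3 box blur (symmetric, H^T = H)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += shift(img, dy, dx)
    return out / 9.0

def down(img, s=2):                         # D: keep every s-th sample
    return img[::s, ::s]

def up(img, s=2):                           # D^T: zero-filled upsampling
    out = np.zeros((img.shape[0] * s, img.shape[1] * s))
    out[::s, ::s] = img
    return out

def sr_step(x, ys, shifts, beta=0.25):
    """One update x - beta * sum_k F_k^T H^T D^T (D H F_k x - Y_k)."""
    grad = np.zeros_like(x)
    for y_k, (dy, dx) in zip(ys, shifts):
        err = down(blur(shift(x, dy, dx))) - y_k   # simulated error
        grad += shift(blur(up(err)), -dy, -dx)     # back projection
    return x - beta * grad

def misfit(x, ys, shifts):
    return sum(np.sum((down(blur(shift(x, dy, dx))) - y_k) ** 2)
               for y_k, (dy, dx) in zip(ys, shifts))

# Synthetic experiment: four shifted, blurred, decimated copies of a scene.
truth = np.outer(np.hanning(16), np.hanning(16))
frame_shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [down(blur(shift(truth, dy, dx))) for dy, dx in frame_shifts]
x = up(ys[0])                               # crude initial guess
e0 = misfit(x, ys, frame_shifts)
for _ in range(50):
    x = sr_step(x, ys, frame_shifts)
e1 = misfit(x, ys, frame_shifts)            # data misfit after descent
```

Since each of the four operators D H F_k has norm at most 1, a step size β = 0.25 keeps the quadratic misfit monotonically decreasing.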
30 Chapter 3: Bayesian Super-Resolution Reconstruction
31 The Model, A Statistical View: Y = G X + V, with G stacking the D H F_k as before. We assume that the noise vector V is Gaussian and white: Pr{V} = Const · exp{−V^T V / (2σ_v²)}. For a known X, Y is also Gaussian, with a shifted mean: Pr{Y|X} = Const · exp{−(Y − GX)^T (Y − GX) / (2σ_v²)}.
32 Maximum-Likelihood Again: the ML estimator is given by X̂_ML = ArgMax_X Pr{Y|X}, which means: find the image X such that the measurements are the most likely to have happened. In our case this leads to what we have seen before: X̂_ML = ArgMax_X Pr{Y|X} = ArgMin_X ‖GX − Y‖².
33 ML Often Sucks!!! For example, for the image denoising problem we get X̂_ML = ArgMin_X ‖X − Y‖², i.e. X̂ = Y. We got that the best ML estimate for a noisy image is the noisy image itself. The ML estimator is quite useless when we have insufficient information; a better approach is needed. The solution is the Bayesian approach.
34 Using The Posterior: instead of maximizing the likelihood function Pr{Y|X}, maximize the posterior probability function Pr{X|Y}. This is the Maximum-A-posteriori Probability (MAP) estimator: find the most probable X, given the measurements. A major conceptual change: X is assumed to be random.
35 Why Called Bayesian? Bayes' formula states that Pr{X|Y} = Pr{Y|X} Pr{X} / Pr{Y}, and thus the MAP estimate leads to X̂_MAP = ArgMax_X Pr{X|Y} = ArgMax_X Pr{Y|X} Pr{X}. The first factor, Pr{Y|X}, is already known; what shall the prior Pr{X} be?
36 Image Priors? Pr{X} = ? This is the probability law of images. How can we describe it in a relatively simple expression? Much of the progress made in image processing in recent decades (PDEs in image processing, wavelets, MRFs, advanced transforms, and more) can be attributed to the answers given to this question.
37 MAP Reconstruction: if we assume the Gibbs distribution with some energy function A(X) for the prior, we have Pr{X} = Const · exp{−A(X)}, and X̂_MAP = ArgMax_X Pr{Y|X} Pr{X} = ArgMin_X ‖GX − Y‖² + λ A(X). This additional term is also known as regularization.
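To see the effect of the regularizer in the simplest possible setting, consider denoising (the combined operator equal to identity) with the toy prior A(X) = ‖X‖²; the MAP cost then has a closed-form minimizer. A minimal sketch, where the prior choice is only for illustration:

```python
import numpy as np

# For denoising with the toy prior A(x) = ||x||^2, the MAP cost
# ||x - y||^2 + lam * ||x||^2 has the closed-form minimizer
# x = y / (1 + lam): unlike the ML answer x = y, the prior pulls
# the estimate toward zero.
def map_denoise(y, lam):
    return y / (1.0 + lam)

y = np.array([4.0, -2.0, 8.0])
assert np.allclose(map_denoise(y, 0.0), y)        # lam = 0 recovers ML
assert np.allclose(map_denoise(y, 1.0), y / 2.0)  # the prior shrinks the estimate
```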
38 MAP Choice of Regularization: ε²(X) = Σ_{k=1..N} ‖D H F_k X − Y_k‖² + λ A(X). Possible prior functions, examples: 1. A(X) = ‖SX‖², simple smoothness (Wiener filtering); 2. A(X) = X^T S^T W S X, spatially adaptive smoothing (W a spatial weight matrix); 3. A(X) = ρ(SX), M-estimator (robust functions); 4. the bilateral prior, the one used in our recent work: A(X) = Σ_{n=−P..P} Σ_{m=−P..P} α^{|m|+|n|} ρ(X − S_h^n S_v^m X); 5. other options: Total Variation, Beltrami flow, example-based, sparse representations.
39 MAP Regularization term: ε²(X) = Σ_{k=1..N} ‖D H F_k X − Y_k‖² + λ A(X). Tikhonov cost function: A_T(X) = ‖ΓX‖². Total variation: A_TV(X) = ‖∇X‖₁. Bilateral prior: A_B(X) = Σ_{l=−P..P} Σ_{m=−P..P} α^{|l|+|m|} ‖X − S_x^l S_y^m X‖₁.
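The bilateral prior is straightforward to evaluate with shifted copies of the image; a minimal numpy sketch (shifts wrap at the boundary via np.roll, and the defaults for P and α are illustrative):

```python
import numpy as np

# A_B(X) = sum over l,m in [-P, P] of alpha^(|l|+|m|) * ||X - S_x^l S_y^m X||_1,
# where S_x^l / S_y^m shift the image by l columns and m rows.
def bilateral_prior(x, P=2, alpha=0.6):
    total = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            shifted = np.roll(np.roll(x, l, axis=1), m, axis=0)
            total += alpha ** (abs(l) + abs(m)) * np.abs(x - shifted).sum()
    return total

# A constant image costs nothing under every shift pair; edges cost something.
assert bilateral_prior(np.ones((8, 8))) == 0.0
assert bilateral_prior(np.eye(8)) > 0.0
```

The L1 norm on the shift differences is what keeps this prior edge-preserving, in contrast to the quadratic Tikhonov term.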
40 Robust Estimation + Regularization. Minimize: ε(X) = Σ_{k=1..N} ‖D H F_k X − Y_k‖₁ + λ Σ_{l=−P..P} Σ_{m=−P..P} α^{|l|+|m|} ‖X − S_x^l S_y^m X‖₁. The resulting iteration: X̂_{n+1} = X̂_n − β { Σ_{k=1..N} F_k^T H^T D^T sign(D H F_k X̂_n − Y_k) + λ Σ_{l,m} α^{|l|+|m|} [I − S_y^{−m} S_x^{−l}] sign(X̂_n − S_x^l S_y^m X̂_n) }.
41 Robust Estimation + Regularization, as image operations. For k = 1..N: geometric warp, convolve with H, down-sample, subtract Y_k, take the sign, up-sample, convolve with H^T, inverse geometric warp. For l, m = −P..P: shift horizontally by l and vertically by m, subtract from X̂_n, take the sign, shift back (by −l, −m), and weight by λα^{|m|+|l|}. Sum both contributions, scale by β, and subtract from X̂_n. From Farsiu et al., IEEE Trans. on Image Processing, 2004.
42 Chapter 4: Some Results and Variations
43 Example 0, Sanity Check. Synthetic case: 9 images, no blur, 1:3 ratio. Shown: one of the low-resolution images, the higher-resolution original, and the reconstructed result.
44 Example 1, SR for Scanners: 16 scanned images, ratio 1:2. Shown: a crop taken from one of the given images and the same crop taken from the reconstructed result.
45 Example 2, SR for IR Imaging: 8 images*, ratio 1:4. * This data is courtesy of the US Air Force.
46 Example 3, Surveillance: 40 images, ratio 1:4.
47 MAP → Robust SR: ε²(X) = Σ_{k=1..N} ‖D H F_k X − Y_k‖² + λ A(X). Cases of measurement outliers: some of the images are irrelevant, errors in motion estimation, errors in the blur function, or general model mismatch. Replacing the L2 data term with L1 gives the robust version: ε(X) = Σ_{k=1..N} ‖D H F_k X − Y_k‖₁ + λ A(X).
48 Example 4, Robust SR (ratio 1:4): L2-norm based vs. L1-norm based reconstruction.
49 Example 5, Robust SR (ratio 1:4): L2-norm based vs. L1-norm based reconstruction.
50 Handling Color in SR: ε²(X) = Σ_{k=1..N} ‖D H F_k X − Y_k‖² + λ A(X). The classic approach is to convert the measurements to YCbCr, apply the SR on the Y channel, and use trivial interpolation on the Cb and Cr channels. Better treatment can be obtained if the statistical dependencies between the color layers are taken into account (i.e., forming a prior for color images). In the case of mosaiced measurements, demosaicing followed by SR is suboptimal; an algorithm that directly fuses the mosaic information into the SR is better.
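The classic luma-only recipe can be sketched as plumbing: convert to YCbCr, super-resolve Y, interpolate Cb/Cr. Here `super_resolve_luma` is a stand-in for any gray-scale SR routine (nearest-neighbor upscaling is used as a placeholder), and the BT.601 full-range conversion matrices are assumed:

```python
import numpy as np

_M = np.array([[0.299, 0.587, 0.114],          # ITU-R BT.601 full-range
               [-0.168736, -0.331264, 0.5],
               [0.5, -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    return rgb @ _M.T

def ycbcr_to_rgb(ycc):
    return ycc @ np.linalg.inv(_M).T

def upscale(ch, r):                            # placeholder interpolation
    return np.kron(ch, np.ones((r, r)))        # (nearest-neighbor)

def color_sr(rgb, r, super_resolve_luma=lambda y, r: upscale(y, r)):
    """SR on the luma only; chroma is plainly interpolated."""
    ycc = rgb_to_ycbcr(rgb)
    y_hr = super_resolve_luma(ycc[..., 0], r)  # the expensive SR happens here
    cb_hr = upscale(ycc[..., 1], r)
    cr_hr = upscale(ycc[..., 2], r)
    return ycbcr_to_rgb(np.stack([y_hr, cb_hr, cr_hr], axis=-1))

img = np.random.rand(4, 4, 3)
out = color_sr(img, r=2)
assert out.shape == (8, 8, 3)
```

Because the human eye is far less sensitive to chroma resolution, this works surprisingly well; the slide's point is that a joint color prior (and joint demosaicing) still does better.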
51 Example 6, SR for Full Color (ratio 1:4).
52 Example 7, SR + Demosaicing (ratio 1:4): mosaiced input; demosaicing and then SR; combined treatment.
53 Chapter 5: Example-based Super-Resolution
54 Example-based Super-Resolution
55 Failure
56 Markov Network Model
57 Single Pass
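A toy single-pass scheme in the spirit of the example-based approach: build (low-res patch → high-res patch) pairs from a training image, then replace each input patch with the high-res half of its nearest training pair. Patch size, 2x scale, non-overlapping patches, and brute-force nearest-neighbor search are all simplifying assumptions; the real method also enforces compatibility between neighboring patches via the Markov network.

```python
import numpy as np

def down2(img):                                # toy decimation: drop samples
    return img[::2, ::2]

def patches(img, p):
    """Non-overlapping p-by-p patches with their top-left coordinates."""
    h, w = img.shape
    return [(i, j, img[i:i+p, j:j+p])
            for i in range(0, h - p + 1, p)
            for j in range(0, w - p + 1, p)]

def train(hr_img, p=2):
    """Build (low-res patch, corresponding high-res patch) pairs."""
    lr_img = down2(hr_img)
    return [(lp.ravel(), hr_img[2*i:2*i+2*p, 2*j:2*j+2*p])
            for i, j, lp in patches(lr_img, p)]

def super_resolve(lr_img, pairs, p=2):
    """Single pass: paste the high-res half of the nearest training pair."""
    out = np.zeros((lr_img.shape[0] * 2, lr_img.shape[1] * 2))
    for i, j, lp in patches(lr_img, p):
        best = min(pairs, key=lambda pr: np.sum((pr[0] - lp.ravel()) ** 2))
        out[2*i:2*i+2*p, 2*j:2*j+2*p] = best[1]
    return out

# Sanity check: super-resolving the decimation of the training image itself
# finds every patch at distance zero and reproduces the image exactly.
train_hr = np.random.rand(16, 16)
pairs = train(train_hr)
rec = super_resolve(down2(train_hr), pairs)
assert np.allclose(rec, train_hr)
```

The failure slides show what happens when the training set is a poor match for the input: the pasted high-res patches hallucinate detail that was never in the scene.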
58 Super-Resolution Result: original 70x70; cubic spline; example-based (training: generic); true 280x280.
59 Results, MRF Network vs. One Pass: original; cubic spline; one pass.
60 Failure: original; cubic spline; one pass.
61 Chapter 6: Combining Example-based and Motion-based SR
62 Idea: classical multi-image SR vs. single-image multi-patch SR.
63 Why should it work? Image scales; all image patches; high-variance patches only (top 5%).
64 Putting everything together
65 Results: input; bicubic interpolation (×3); unified single-image SR (×3); ground-truth image.