Super-Resolution. Shai Avidan, Tel-Aviv University


Slide Credits (partial list): Rick Szeliski, Steve Seitz, Alyosha Efros, Yacov Hel-Or, Yossi Rubner, Miki Elad, Marc Levoy, Bill Freeman, Fredo Durand, Sylvain Paris

Basic Super-Resolution Idea. Given: a set of low-quality images. Required: fusion of these images into a higher-resolution image. How? Comment: this is an actual super-resolution reconstruction result.

Example: Surveillance. 40 images, ratio 1:4.

Example: Enhance Mosaics.


Super-Resolution - Agenda: the basic idea; image formation process; formulation and solution; special cases and related problems; limitations of super-resolution; SR in time.

Intuition: For a given band-limited image, the Nyquist sampling theorem states that if a uniform sampling is fine enough (a dense enough 2D grid), perfect reconstruction is possible.

Intuition: Due to our limited camera resolution, we sample using an insufficient 2D grid.

Intuition: However, if we take a second picture, shifting the camera slightly to the right, we obtain a second set of samples.

Intuition: Similarly, by shifting down we get a third image.

Intuition: And finally, by shifting down and to the right we get the fourth image.

Intuition: It is trivial to see that by interlacing the four images the desired resolution is obtained, and thus perfect reconstruction is guaranteed.
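To make the interlacing argument concrete, here is a minimal sketch (not from the slides) in Python/NumPy. It assumes ideal half-pixel camera shifts, so the four low-resolution sampling grids tile the high-resolution grid exactly:

```python
import numpy as np

def interlace(y00, y01, y10, y11):
    """Fuse four H x W low-res images into one 2H x 2W image."""
    H, W = y00.shape
    x = np.empty((2 * H, 2 * W), dtype=y00.dtype)
    x[0::2, 0::2] = y00  # original grid
    x[0::2, 1::2] = y01  # camera shifted right
    x[1::2, 0::2] = y10  # camera shifted down
    x[1::2, 1::2] = y11  # camera shifted down and right
    return x

# Sanity check: sampling a high-res image on the four offset grids and
# interlacing the results reproduces it exactly.
x = np.random.rand(8, 8)
y = interlace(x[0::2, 0::2], x[0::2, 1::2], x[1::2, 0::2], x[1::2, 1::2])
assert np.array_equal(x, y)
```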

Rotation/Scale/Displacement: What if the camera displacement is arbitrary? What if the camera rotates? Gets closer to the object (zoom)?

Rotation/Scale/Displacement: There is no sampling theorem covering this case.

Agenda:
Modeling the Super-Resolution Problem - defining the relation between the given and the desired images.
The Maximum-Likelihood Solution - a simple solution based on the measurements.
Bayesian Super-Resolution Reconstruction - taking into account the behavior of images.
Some Results and Variations - examples, robustifying, handling color.
Super-Resolution: A Summary - the bottom line.

Chapter 1: Modeling the Super-Resolution Problem

The Model: the high-resolution image X goes through a geometric warp F_k (F_1 = I), a blur H, and decimation D, and additive noise V_k is added, yielding the low-resolution images Y_k:

Y_k = D H F_k X + V_k,  k = 1..N

F_k, H, and D are assumed known.

The Model as One Equation: stacking the N frames,

[Y_1; Y_2; ...; Y_N] = [D H F_1; D H F_2; ...; D H F_N] X + [V_1; V_2; ...; V_N]

A Rule of Thumb: In the noiseless case we have Y_k = D H F_k X. Clearly, this linear system of equations should have more equations than unknowns in order to make a unique Least-Squares solution possible. Example: assume that we have N images of 100-by-100 pixels, and we would like to produce an image of size 300-by-300. Then we should require N ≥ 9.
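As an illustration of the model above, here is a minimal forward-model sketch in Python/NumPy/SciPy. It assumes purely translational warps, a Gaussian kernel for the blur H, and decimation by plain subsampling; the names simulate_lr, blur_sigma, and sigma_n are illustrative, not from the slides:

```python
import numpy as np
from scipy.ndimage import shift, gaussian_filter

def simulate_lr(x, dx, dy, r=2, blur_sigma=1.0, sigma_n=0.01, rng=None):
    """Generate one low-resolution frame Y_k = D H F_k X + V_k."""
    if rng is None:
        rng = np.random.default_rng(0)
    fx = shift(x, (dy, dx), order=1)      # F_k: geometric warp (translation)
    hx = gaussian_filter(fx, blur_sigma)  # H:   camera blur
    y = hx[::r, ::r]                      # D:   decimation by factor r
    return y + sigma_n * rng.standard_normal(y.shape)  # + V_k: additive noise

# Four frames with sub-pixel displacements, as in the intuition slides
x = np.random.rand(64, 64)
shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
frames = [simulate_lr(x, dx, dy) for dx, dy in shifts]
```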

Chapter 2: The Maximum-Likelihood Solution

Super-Resolution Model: the high-resolution image X passes through a geometric warp F_k (F_1 = I), blur H, and decimation D, with additive noise V_k, giving the low-resolution exposures Y_k:

Y_k = D H F_k X + V_k,  V_k ~ N(0, σ_n²),  k = 1..N

Simplified Model:

Y_k = D H F_k X + V_k,  V_k ~ N(0, σ_n²),  k = 1..N

The Super-Resolution Problem: Y_k = D H F_k X + V_k, V_k ~ N(0, σ_n²). Given:
Y_k - the measured images (noisy, blurry, down-sampled...),
H - the blur, which can be extracted from the camera characteristics,
D - the decimation, dictated by the required resolution ratio,
F_k - the warp, which can be estimated using motion estimation,
σ_n - the noise level, which can be extracted from the camera / image.
Recover: the high-resolution image X.

The Model as One Equation:

Y = [Y_1; ...; Y_N] = [D H F_1; ...; D H F_N] X + [V_1; ...; V_N] = G X + V

Example sizes: r = resolution factor = 4; M × M = size of the frames = 1,000,000 pixels; N = number of frames = 10. Then Y and V are of size [10M² × 1], G is of size [10M² × 16M²], and X is of size [16M² × 1]. The linear-algebra notation is intended only to develop the algorithm.

SR - Solutions. Maximum Likelihood (ML):

X̂ = argmin_X Σ_k ||D H F_k X - Y_k||²

Often an ill-posed problem! Maximum A-posteriori Probability (MAP):

X̂ = argmin_X Σ_k ||D H F_k X - Y_k||² + λ A{X}

The added term is a smoothness constraint - regularization.

ML Reconstruction (LS). Minimize:

ε²(X) = Σ_k ||D H F_k X - Y_k||²

Thus, require:

∂ε²/∂X = Σ_k F_k^T H^T D^T (D H F_k X̂ - Y_k) = 0

which gives the normal equations

[Σ_k F_k^T H^T D^T D H F_k] X̂ = Σ_k F_k^T H^T D^T Y_k,  i.e.  A X̂ = B.

LS - Iterative Solution: steepest descent,

X̂_{n+1} = X̂_n - β Σ_k F_k^T H^T D^T (D H F_k X̂_n - Y_k)

where (D H F_k X̂_n - Y_k) is the simulated error and F_k^T H^T D^T performs back-projection. All the above operations can be interpreted as operations performed on images. There is no actual need to use the matrix-vector notation shown here.

LS - Iterative Solution as image operations. For k = 1..N: take X̂_n, apply the geometric warp (F_k), convolve with H, and down-sample (D); subtract Y_k; then up-sample (D^T), convolve with H^T, and apply the inverse geometric warp (F_k^T). Sum over k, multiply by -β, and add to X̂_n to obtain X̂_{n+1}.
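A minimal sketch of this loop in Python, done entirely with image operations as the slide suggests. It assumes translational warps (so F_k^T is a shift by the negative displacement), a symmetric Gaussian blur (so H^T is the same convolution), and zero-fill up-sampling as D^T; frames and shifts are the hypothetical ones produced by the forward-model sketch above:

```python
import numpy as np
from scipy.ndimage import shift, gaussian_filter

def sr_steepest_descent(frames, shifts, r=2, blur_sigma=1.0, beta=0.2, iters=50):
    """Iterate x_{n+1} = x_n - beta * sum_k F_k^T H^T D^T (D H F_k x_n - y_k)."""
    x = np.kron(frames[0], np.ones((r, r)))  # crude init: pixel replication
    for _ in range(iters):
        grad = np.zeros_like(x)
        for y, (dx, dy) in zip(frames, shifts):
            # Forward model, then the simulated error
            err = gaussian_filter(shift(x, (dy, dx), order=1), blur_sigma)[::r, ::r] - y
            # Back-projection: D^T (zero-fill up-sample), H^T (same Gaussian,
            # symmetric kernel), F_k^T (shift back by -dx, -dy)
            up = np.zeros_like(x)
            up[::r, ::r] = err
            grad += shift(gaussian_filter(up, blur_sigma), (-dy, -dx), order=1)
        x = x - beta * grad
    return x

x_hat = sr_steepest_descent(frames, shifts)
```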

Chapter 3: Bayesian Super-Resolution Reconstruction

The Model - A Statistical View:

Y = [Y_1; ...; Y_N] = [D H F_1; ...; D H F_N] X + [V_1; ...; V_N] = H X + V

(writing H here for the combined stacked operator). We assume that the noise vector V is Gaussian and white:

Prob{V} = Const · exp{-V^T V / (2σ_v²)}

For a known X, Y is also Gaussian, with a shifted mean:

Prob{Y|X} = Const · exp{-(Y - H X)^T (Y - H X) / (2σ_v²)}

Maximum-Likelihood Again. The ML estimator is given by

X̂_ML = argmax_X Prob{Y|X}

which means: find the image X such that the measurements are the most likely to have happened. In our case this leads to what we have seen before:

X̂_ML = argmax_X Prob{Y|X} = argmin_X ||H X - Y||²

ML Often Sucks!!! For example, for the image denoising problem (where the operator is the identity, Y = X + V) we get

X̂_ML = argmin_X ||X - Y||²

whose minimizer is simply X̂ = Y: the best ML estimate for a noisy image is the noisy image itself. The ML estimator is quite useless when we have insufficient information. A better approach is needed. The solution is the Bayesian approach.

Using The Posterior: Instead of maximizing the likelihood function Prob{Y|X}, maximize the posterior probability function Prob{X|Y}. This is the Maximum A-posteriori Probability (MAP) estimator: find the most probable X, given the measurements. A major conceptual change: X is assumed to be random.

Why Called Bayesian? Bayes' formula states that

Prob{X|Y} = Prob{Y|X} · Prob{X} / Prob{Y}

and thus the MAP estimate leads to

X̂_MAP = argmax_X Prob{X|Y} = argmax_X Prob{Y|X} · Prob{X}

The first factor is already known; what shall the prior Prob{X} be?

Image Priors? Prob{X} = ? This is the probability law of images. How can we describe it in a relatively simple expression? Much of the progress made in image processing in the past 20 years (PDEs in image processing, wavelets, MRF, advanced transforms, and more) can be attributed to the answers given to this question.

MAP Reconstruction: If we assume the Gibbs distribution with some energy function A(X) for the prior, we have

Prob{X} = Const · exp{-A{X}}

and then

X̂_MAP = argmax_X Prob{Y|X} · Prob{X} = argmin_X ||H X - Y||² + λ A{X}

The additional term is also known as regularization.
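The step from the product of probabilities to the penalized least-squares form follows by taking the negative logarithm; here is the short derivation (standard, not verbatim from the slides) under the Gaussian-noise and Gibbs-prior assumptions above:

```latex
\hat{X}_{\mathrm{MAP}}
  = \arg\max_X \Pr\{Y \mid X\}\,\Pr\{X\}
  = \arg\min_X \bigl[ -\log \Pr\{Y \mid X\} - \log \Pr\{X\} \bigr]
  = \arg\min_X \frac{\|HX - Y\|^2}{2\sigma_v^2} + A\{X\}
  = \arg\min_X \|HX - Y\|^2 + \lambda A\{X\},
  \qquad \lambda = 2\sigma_v^2 .
```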

MAP - Choice of Regularization:

ε²(X) = Σ_k ||D H F_k X - Y_k||² + λ A{X}

Possible prior functions - examples:
1. A{X} = ||S X||² - simple smoothness (Wiener filtering),
2. A{X} = (S X)^T W(X_0) (S X) - spatially adaptive smoothing,
3. A{X} = ρ{S X} - M-estimator (robust functions),
4. The bilateral prior, the one used in our recent work: A{X} = Σ_{n=-P}^{P} Σ_{m=-P}^{P} α^{|n|+|m|} ρ(X - S_h^n S_v^m X).
Other options: Total Variation, Beltrami flow, example-based, sparse representations, ...

MAP Reconstruction. Regularization term in

ε²(X) = Σ_k ||D H F_k X - Y_k||² + λ A{X}

Tikhonov cost function: A_T{X} = ||Γ X||²
Total Variation: A_TV{X} = ||∇X||_1
Bilateral prior: A_B{X} = Σ_{l=-P}^{P} Σ_{m=-P}^{P} α^{|l|+|m|} ||X - S_x^l S_y^m X||_1
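For concreteness, minimal evaluations of these three regularizers on an image x, as a sketch (taking the discrete Laplacian for Γ is a common convention, an assumption here, as are the circular shifts in the bilateral term):

```python
import numpy as np
from scipy.ndimage import laplace

def tikhonov(x):
    return np.sum(laplace(x) ** 2)  # ||Gamma x||^2 with Gamma = Laplacian

def total_variation(x):
    gy, gx = np.gradient(x)
    return np.sum(np.abs(gx) + np.abs(gy))  # anisotropic ||grad x||_1

def bilateral_tv(x, P=2, alpha=0.6):
    val = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            shifted = np.roll(np.roll(x, l, axis=1), m, axis=0)  # S_x^l S_y^m x
            val += alpha ** (abs(l) + abs(m)) * np.sum(np.abs(x - shifted))
    return val
```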

Robust Estimation + Regularization. Minimize:

ε(X) = Σ_k ||D H F_k X - Y_k||_1 + λ Σ_{l=-P}^{P} Σ_{m=-P}^{P} α^{|l|+|m|} ||X - S_x^l S_y^m X||_1

Steepest descent:

X̂_{n+1} = X̂_n - β { Σ_k F_k^T H^T D^T sign(D H F_k X̂_n - Y_k) + λ Σ_{l=-P}^{P} Σ_{m=-P}^{P} α^{|l|+|m|} [I - S_y^{-m} S_x^{-l}] sign(X̂_n - S_x^l S_y^m X̂_n) }

Robust Estimation + Regularization, as image operations. Data term, for k = 1..N: apply the geometric warp to X̂_n, convolve with H, down-sample, subtract Y_k, and take the sign; then up-sample, convolve with H^T, and apply the inverse geometric warp. Prior term, for l,m = -P..P: shift horizontally by l and vertically by m, subtract from X̂_n, take the sign, shift back by -l,-m, subtract, and weight by λ α^{|l|+|m|}. Multiply the accumulated sum by -β and add to X̂_n to get X̂_{n+1}. From Farsiu et al., IEEE Trans. on Image Processing, 2004.
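A minimal sketch of one such update step in Python, in the spirit of the Farsiu et al. scheme, under the same illustrative assumptions as the earlier sketches (translational warps, symmetric Gaussian blur, circular shifts for S_x, S_y):

```python
import numpy as np
from scipy.ndimage import shift, gaussian_filter

def S(x, l, m):
    """Shift operator S_x^l S_y^m (circular, for simplicity)."""
    return np.roll(np.roll(x, l, axis=1), m, axis=0)

def robust_sr_step(x, frames, shifts, r=2, blur_sigma=1.0,
                   beta=0.1, lam=0.05, alpha=0.6, P=2):
    """One steepest-descent step on the L1 data term + bilateral-TV prior."""
    grad = np.zeros_like(x)
    # Data term: back-project the sign of the simulated error
    for y, (dx, dy) in zip(frames, shifts):
        err = gaussian_filter(shift(x, (dy, dx), order=1), blur_sigma)[::r, ::r] - y
        up = np.zeros_like(x)
        up[::r, ::r] = np.sign(err)
        grad += shift(gaussian_filter(up, blur_sigma), (-dy, -dx), order=1)
    # Prior term: lam * alpha^{|l|+|m|} * [I - S^{-l,-m}] sign(x - S^{l,m} x)
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            s = np.sign(x - S(x, l, m))
            grad += lam * alpha ** (abs(l) + abs(m)) * (s - S(s, -l, -m))
    return x - beta * grad
```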

Chapter 4: Some Results and Variations

Example 0: Sanity Check. Synthetic case: 9 images, no blur, 1:3 ratio. Shown: one of the low-resolution images, the higher-resolution original, and the reconstructed result.

Example 1: SR for Scanners. 16 scanned images, ratio 1:2. Shown: a region taken from one of the given images and the same region taken from the reconstructed result.

Example 2: SR for IR Imaging. 8 images*, ratio 1:4. (* This data is courtesy of the US Air Force.)

Example 3: Surveillance. 40 images, ratio 1:4.

Robust SR. The MAP cost,

ε²(X) = Σ_k ||D H F_k X - Y_k||² + λ A{X}

is sensitive to measurement outliers: some of the images may be irrelevant, there may be errors in motion estimation, errors in the blur function, or general model mismatch. A robust alternative replaces the squared data term with an L1 one:

ε(X) = Σ_k ||D H F_k X - Y_k||_1 + λ A{X}

Example 4: Robust SR. 20 images, ratio 1:4. L2-norm based vs. L1-norm based reconstruction.

Example 5: Robust SR. 20 images, ratio 1:4. L2-norm based vs. L1-norm based reconstruction.

MAP - Handling Color in SR:

ε²(X) = Σ_k ||D H F_k X - Y_k||² + λ A{X}

Handling color: the classic approach is to convert the measurements to YCbCr, apply the SR on the Y channel, and use trivial interpolation on the Cb and Cr channels. Better treatment can be obtained if the statistical dependencies between the color layers are taken into account (i.e., forming a prior for color images). In the case of mosaiced measurements, demosaicing followed by SR is sub-optimal. An algorithm that directly fuses the mosaic information into the SR is better.

Example 6: SR for Full Color. 20 images, ratio 1:4.

Example 7: SR + Demosaicing. 20 images, ratio 1:4. Shown: mosaiced input, demosaicing followed by SR, and the combined treatment.

Chapter 5: Example-Based Super-Resolution

Example-Based Super-Resolution

Failure

Markov Network Model

Single Pass

Super-Resolution Result. Shown: original 70x70; cubic spline; example-based (training: generic); true 280x280.

Results: MRF Network vs. One Pass. Shown: original, cubic spline, one pass.

Failure. Shown: original, cubic spline, one pass.

Chapter 6: Combining Example-Based and Motion-Based SR

Idea: Classical Multi-Image SR vs. Single-Image Multi-Patch SR.

Why should it work? Image scales. All image patches vs. high-variance patches only (top 5%).

Putting everything together

Results. Shown: input, bicubic interpolation (x3), unified single-image SR (x3), ground-truth image. http://www.wisdom.weizmann.ac.il/~vision/singleimagesr.html