C. A. Bouman: Digital Image Processing - January 29, 2013


Image Restoration

Problem: You want to know some image X, but you only have a corrupted version Y. How do you determine X from Y?

Corruption may result from:
- Additive noise
- Nonadditive noise
- Linear distortion
- Nonlinear distortion

Optimum Linear FIR Filter

Find an optimum linear filter to compute X from Y.
- The filter uses an input window of Y to estimate each output pixel $X_s$.
- The filter can be designed to minimize mean squared error (MMSE).
- The estimate of $X_s$ is denoted by $\hat{X}_s$.
- $W(s)$ denotes the window about $s$.
- The estimate $\hat{X}_s$ is a function of $Y_{W(s)}$.

Application of Optimum Filter

$$\hat{X}_s = f(Y_{W(s)})$$

[Figure: a window $W(s)$ about pixel $s$ in the measured image Y is mapped to the restored image $\hat{X}(Y)$.]

The function $f(Y_{W(s)})$ is designed to produce an MMSE estimate of X. If $f(Y_{W(s)})$ is:
- Linear ⇒ linear space-invariant filter.
- Nonlinear ⇒ nonlinear space-invariant filter.

This filter can reduce the effects of all types of corruption.

Optimality Properties of Linear Filter

If both images are jointly Gaussian, then the MMSE filter is linear:
$$\hat{X}_s = E[X_s \mid Y_{W(s)}] = A Y_{W(s)} + b$$

If the images are not jointly Gaussian, then the MMSE filter is generally not linear:
$$\hat{X}_s = E[X_s \mid Y_{W(s)}] = f(Y_{W(s)})$$

However, the MMSE linear filter can still be very effective!

Formulation of MMSE Linear Filter: Definitions

- $W(s)$ - window about the pixel $s$.
- $p$ - number of pixels in $W(s)$.
- $z_s$ - row vector containing the pixels of $Y_{W(s)}$.
- $\theta$ - column vector containing the filter parameters.

Detailed definitions:

Definition of $W(s)$:
$$W(s) = [s, s+r_1, \ldots, s+r_{p-1}]$$
where $r_1, \ldots, r_{p-1}$ index the neighbors of $s$.

Definition of $z_s$:
$$z_s = [y_s, y_{s+r_1}, \ldots, y_{s+r_{p-1}}]$$

Definition of $\theta$:
$$\theta = [\theta_0, \ldots, \theta_{p-1}]^t$$
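To make these definitions concrete, here is a minimal NumPy sketch of extracting the window vector $z_s$, assuming a grayscale image array `y` and a 3x3 window; the names `window_vector` and `offsets` are illustrative, not from the lecture, and `offsets` plays the role of $[0, r_1, \ldots, r_{p-1}]$.

```python
import numpy as np

def window_vector(y, s, offsets):
    """Return z_s = [y_s, y_{s+r_1}, ..., y_{s+r_{p-1}}] for pixel s."""
    i, j = s
    return np.array([y[i + di, j + dj] for (di, dj) in offsets])

# 3x3 window about s, with the center pixel first (offset 0).
offsets = [(0, 0), (-1, -1), (-1, 0), (-1, 1),
           (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

y = np.arange(25, dtype=float).reshape(5, 5)  # toy measured image Y
z_s = window_vector(y, (2, 2), offsets)       # row vector with p = 9 entries
```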

Formulation of MMSE Linear Filter: Objectives

The linear filter is given by
$$\hat{x}_s = z_s \theta$$

The mean squared error is given by
$$MSE = E\left[|x_s - \hat{x}_s|^2\right] = E\left[|x_s - z_s\theta|^2\right]$$

The MMSE filter parameters are given by
$$\theta = \arg\min_\theta E\left[|x_s - z_s\theta|^2\right]$$

How do we solve this problem?

More Matrix Notation

Define the subset $S_0$ of image pixels:
1. $S_0 \subset S$
2. $S_0$ contains $N_0 < N$ pixels
3. $S_0$ usually does not contain pixels on the boundary of the image.
4. $S_0 = [s_1, \ldots, s_{N_0}]$

Define the $N_0 \times p$ matrix $Z$:
$$Z = \begin{bmatrix} z_{s_1} \\ z_{s_2} \\ \vdots \\ z_{s_{N_0}} \end{bmatrix}$$

Define the $N_0 \times 1$ column vectors $X$ and $\hat{X}$:
$$X = \begin{bmatrix} x_{s_1} \\ x_{s_2} \\ \vdots \\ x_{s_{N_0}} \end{bmatrix} \quad \text{and} \quad \hat{X} = \begin{bmatrix} \hat{x}_{s_1} \\ \hat{x}_{s_2} \\ \vdots \\ \hat{x}_{s_{N_0}} \end{bmatrix}$$

Then $\hat{X} = Z\theta$.
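As a hedged sketch of this notation, the following builds $Z$ and $X$ from a clean/corrupted training pair by visiting only the interior pixels (a simple choice of $S_0$ so every window fits inside the image); the helper name `build_design_matrices` and the toy data are illustrative assumptions.

```python
import numpy as np

def build_design_matrices(x, y, offsets):
    """Stack the z_s into the N_0 x p matrix Z and the x_s into X."""
    rows, cols = y.shape
    Z, X = [], []
    # S_0: interior pixels only, so every 3x3 window fits in the image.
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            Z.append([y[i + di, j + dj] for (di, dj) in offsets])
            X.append(x[i, j])
    return np.array(Z), np.array(X)

offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
rng = np.random.default_rng(0)
x = rng.random((32, 32))                      # toy clean image X
y = x + 0.1 * rng.standard_normal((32, 32))   # toy corrupted image Y
Z, X = build_design_matrices(x, y, offsets)   # Z: N_0 x p, X: length N_0
```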

Least Squares Linear Filter

We expect that
$$MSE = E\left[|x_s - z_s\theta|^2\right] \approx \frac{1}{N_0} \sum_{s \in S_0} |x_s - z_s\theta|^2 = \frac{1}{N_0} \|X - Z\theta\|^2$$

So we may solve the equation
$$\theta = \arg\min_\theta \|X - Z\theta\|^2$$

The solution is the least squares estimate of $\theta$, and the estimate $\hat{X} = Z\theta$ is known as the least squares filter.
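In practice this minimization can be handed directly to a linear algebra routine; a minimal sketch with stand-in data, where `np.linalg.lstsq` computes $\arg\min_\theta \|X - Z\theta\|^2$ without forming the normal equations explicitly (the slides that follow derive the closed-form solution instead):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((900, 9))  # stand-in for the N_0 x p window matrix
X = Z @ rng.standard_normal(9)     # stand-in for the clean pixel vector

theta, *_ = np.linalg.lstsq(Z, X, rcond=None)  # argmin ||X - Z theta||^2
X_hat = Z @ theta                              # least squares filter output
```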

Computing Least Squares Linear Filter

$$
\begin{aligned}
\theta &= \arg\min_\theta \frac{1}{N_0}\|X - Z\theta\|^2 \\
&= \arg\min_\theta \frac{1}{N_0}(X - Z\theta)^t (X - Z\theta) \\
&= \arg\min_\theta \frac{1}{N_0}\left(X^t X - 2\theta^t Z^t X + \theta^t Z^t Z\,\theta\right) \\
&= \arg\min_\theta \left(\frac{X^t X}{N_0} - 2\theta^t \frac{Z^t X}{N_0} + \theta^t \frac{Z^t Z}{N_0}\,\theta\right)
\end{aligned}
$$

So
$$\theta = \arg\min_\theta \left(\theta^t \frac{Z^t Z}{N_0}\,\theta - 2\theta^t \frac{Z^t X}{N_0}\right)$$

Covariance Estimates

Define the $p \times p$ matrix
$$\hat{R}_{zz} = \frac{Z^t Z}{N_0} = \frac{1}{N_0}\left[z_{s_1}^t, z_{s_2}^t, \ldots, z_{s_{N_0}}^t\right] \begin{bmatrix} z_{s_1} \\ z_{s_2} \\ \vdots \\ z_{s_{N_0}} \end{bmatrix} = \frac{1}{N_0}\sum_{i=1}^{N_0} z_{s_i}^t z_{s_i}$$

Define the $p \times 1$ vector
$$\hat{r}_{zx} = \frac{Z^t X}{N_0} = \frac{1}{N_0}\left[z_{s_1}^t, z_{s_2}^t, \ldots, z_{s_{N_0}}^t\right] \begin{bmatrix} x_{s_1} \\ x_{s_2} \\ \vdots \\ x_{s_{N_0}} \end{bmatrix} = \frac{1}{N_0}\sum_{i=1}^{N_0} z_{s_i}^t x_{s_i}$$

So
$$\theta = \arg\min_\theta \left(\theta^t \hat{R}_{zz}\,\theta - 2\theta^t \hat{r}_{zx}\right)$$
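A short NumPy sketch of these estimates, again with stand-in data for $Z$ and $X$; the matrix products reproduce the sums over $S_0$ above.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((900, 9))   # stand-in N_0 x p window matrix
X = rng.standard_normal(900)        # stand-in clean pixel vector

N0 = Z.shape[0]
R_zz = (Z.T @ Z) / N0   # p x p estimate of R_zz = E[z_s^t z_s]
r_zx = (Z.T @ X) / N0   # length-p estimate of r_zx = E[z_s^t x_s]
```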

Interpretation of $\hat{R}_{zz}$ and $\hat{r}_{zx}$

$\hat{R}_{zz}$ is an estimate of the covariance of $z_s$:
$$E\left[\hat{R}_{zz}\right] = E\left[\frac{1}{N_0}\sum_{s=1}^{N_0} z_s^t z_s\right] = E\left[z_s^t z_s\right] = R_{zz}$$

$\hat{r}_{zx}$ is an estimate of the cross correlation between $z_s$ and $x_s$:
$$E\left[\hat{r}_{zx}\right] = E\left[\frac{1}{N_0}\sum_{s=1}^{N_0} z_s^t x_s\right] = E\left[z_s^t x_s\right] = r_{zx}$$

Solution to Least Squares Linear Filter

We need
$$\theta = \arg\min_\theta \frac{1}{N_0}\|X - Z\theta\|^2$$

We have shown this is equivalent to
$$\theta = \arg\min_\theta \left(\theta^t \hat{R}_{zz}\,\theta - 2\theta^t \hat{r}_{zx}\right)$$

Taking the gradient of the cost functional,
$$0 = \nabla_\theta \left(\theta^t \hat{R}_{zz}\,\theta - 2\theta^t \hat{r}_{zx}\right) = 2\hat{R}_{zz}\theta - 2\hat{r}_{zx}$$

Solving for $\theta$ yields
$$\theta = \left(\hat{R}_{zz}\right)^{-1} \hat{r}_{zx}$$

Summary of Solution to Least Squares Linear Filter

First compute
$$\hat{R}_{zz} = \frac{1}{N_0}\sum_{s=1}^{N_0} z_s^t z_s$$

Then compute
$$\hat{r}_{zx} = \frac{1}{N_0}\sum_{s=1}^{N_0} z_s^t x_s$$
$$\theta = \left(\hat{R}_{zz}\right)^{-1} \hat{r}_{zx}$$

The vector $\theta$ then contains the values of the filter coefficients.
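Putting the recipe together, a hedged sketch: estimate $\hat{R}_{zz}$ and $\hat{r}_{zx}$, then solve the linear system rather than forming the explicit inverse (`np.linalg.solve` is the numerically preferable way to apply $(\hat{R}_{zz})^{-1}\hat{r}_{zx}$). The helper name `train_ls_filter` and the stand-in data are illustrative.

```python
import numpy as np

def train_ls_filter(Z, X):
    """Estimate R_zz and r_zx, then solve R_zz theta = r_zx for theta."""
    N0 = Z.shape[0]
    R_zz = (Z.T @ Z) / N0
    r_zx = (Z.T @ X) / N0
    return np.linalg.solve(R_zz, r_zx)  # avoids an explicit matrix inverse

rng = np.random.default_rng(0)
Z = rng.standard_normal((900, 9))   # stand-in window matrix
X = Z @ rng.standard_normal(9)      # stand-in clean pixel vector
theta = train_ls_filter(Z, X)
X_hat = Z @ theta                   # restored pixel estimates
```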

Training

$\theta$ is usually estimated from training data.

Training data:
- Generally consists of image pairs (X, Y), where Y is the measured data and X is the undistorted image.
- Should be typical of what you might expect.
- Can often be difficult to obtain.

Testing data:
- Also consists of image pairs (X, Y).
- Is used to evaluate the effectiveness of the filters.
- Should never be taken from the training data set.

Training versus testing:
- Performance on training data is always better than performance on testing data.
- As the amount of training data increases, the performance on training and testing data both approach the best achievable performance.
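A small sketch of this protocol with synthetic stand-in data: fit $\theta$ on training windows, then report MSE on held-out test windows, which is typically the larger of the two.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in training and testing window matrices and clean pixel vectors.
true_filter = rng.standard_normal(9)
Z_train = rng.standard_normal((900, 9))
X_train = Z_train @ true_filter + 0.1 * rng.standard_normal(900)
Z_test = rng.standard_normal((900, 9))
X_test = Z_test @ true_filter + 0.1 * rng.standard_normal(900)

theta, *_ = np.linalg.lstsq(Z_train, X_train, rcond=None)
mse_train = np.mean((X_train - Z_train @ theta) ** 2)
mse_test = np.mean((X_test - Z_test @ theta) ** 2)  # typically >= mse_train
```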

Comments

- The Wiener filter is the MMSE linear filter.
- The Wiener filter may be optimal, but it isn't always good:
  - Linear filters blur edges.
  - Linear filters work poorly with non-Gaussian noise.
- Nonlinear filters can be designed using the same methodologies.

Is MMSE a Good Quality Criterion for Images?

In general, NO!... But sometimes it is OK.

For achromatic images, it is best to choose X and Y in $L^*$ or gamma-corrected coordinates.

Let H be a filter that implements the CSF (contrast sensitivity function) of the human visual system. Then a better metric of error is
$$HVSE = \|H(X - \hat{X})\|^2 = (X - \hat{X})^t H^t H (X - \hat{X}) = \|X - \hat{X}\|_B^2$$
where $B = H^t H$. $\|X - \hat{X}\|_B^2$ is a quadratic norm.

What is the minimum HVSE estimate $\hat{X}$?

Answer

The answer is $\hat{X} = E[X \mid Y]$. This is the same as for mean squared error!
- The conditional expectation minimizes any quadratic norm of the error.
- This is also true for non-Gaussian images.

Let $\hat{X} = A Y_{W(s)} + b$ be the MMSE linear filter.
- This filter is also the minimum HVSE linear filter.
- This is also true for non-Gaussian images.

Proof

Define $V = HX$ and $B = H^t H$. Then

$$
\begin{aligned}
\min_{\hat{X}} E\left[\|X - \hat{X}\|_B^2\right]
&= \min_{\hat{X}} E\left[\|H(X - \hat{X})\|^2\right] \\
&= \min_{\hat{V}} E\left[\|V - \hat{V}\|^2\right] \\
&= E\left[\|V - E[V \mid Y]\|^2\right] \\
&= E\left[\|HX - E[HX \mid Y]\|^2\right] \\
&= E\left[\|H(X - E[X \mid Y])\|^2\right] \\
&= E\left[\|X - E[X \mid Y]\|_B^2\right]
\end{aligned}
$$

So $\hat{X} = E[X \mid Y]$ minimizes the error measure $HVSE = \|X - \hat{X}\|_B^2$.
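To make the HVSE metric concrete, here is a minimal sketch in which a Gaussian low-pass filter stands in for the CSF filter H; this choice of H is purely an assumption, since the lecture does not specify it.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hvse(x, x_hat, sigma=1.0):
    """HVSE = ||H(X - X_hat)||^2 with a Gaussian blur standing in for H."""
    err = x - x_hat
    return float(np.sum(gaussian_filter(err, sigma) ** 2))

rng = np.random.default_rng(0)
x = rng.random((32, 32))                          # toy reference image
x_hat = x + 0.05 * rng.standard_normal((32, 32))  # toy restored image
print(hvse(x, x_hat))
```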