C. A. Bouman: Digital Image Processing - January 29, 2013

Image Restoration

Problem: You want to know some image X, but you only have a corrupted version Y. How do you determine X from Y?

Corruption may result from:
- Additive noise
- Nonadditive noise
- Linear distortion
- Nonlinear distortion
Optimum Linear FIR Filter

Find an optimum linear filter to compute X from Y.
- The filter uses an input window of Y to estimate each output pixel x_s.
- The filter can be designed to minimize mean squared error (MMSE).
- The estimate of X_s is denoted by \hat{X}_s.
- W(s) denotes the window about the pixel s.
- The estimate \hat{X}_s is a function of Y_{W(s)}.
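The windowing above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the notes: the function name `window_vector` and the choice of a 3x3 window are assumptions made here for concreteness.

```python
import numpy as np

def window_vector(y, s, offsets):
    # z_s: the pixels of Y in the window W(s) = [s, s + r_1, ..., s + r_{p-1}],
    # collected into a row vector; offsets are the (row, col) displacements r_k.
    return np.array([y[s[0] + dr, s[1] + dc] for dr, dc in offsets])

# A 3x3 window about s, including s itself (offset (0, 0))
offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

y = np.arange(25, dtype=float).reshape(5, 5)
z = window_vector(y, (2, 2), offsets)
print(z)   # the 9 pixels of the 3x3 neighborhood of (2, 2)
```

Each interior pixel s of the measured image contributes one such row vector z_s.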
Application of Optimum Filter

[Figure: the measured image Y, a window W(s) about pixel s, and the restored image \hat{X}(Y), where \hat{X}_s = f(Y_{W(s)}).]

The function f(Y_{W(s)}) is designed to produce an MMSE estimate of X. If f(Y_{W(s)}) is:
- Linear => a linear space-invariant filter.
- Nonlinear => a nonlinear space-invariant filter.

This filter can reduce the effects of all types of corruption.
Optimality Properties of Linear Filter

If both images are jointly Gaussian, then the MMSE filter is linear:

\hat{X}_s = E[X_s | Y_{W(s)}] = A Y_{W(s)} + b

If the images are not jointly Gaussian, then the MMSE filter is generally not linear:

\hat{X}_s = E[X_s | Y_{W(s)}] = f(Y_{W(s)})

However, the MMSE linear filter can still be very effective!
Formulation of MMSE Linear Filter: Definitions

- W(s) - window about the pixel s
- p - number of pixels in W(s)
- z_s - row vector containing the pixels of Y_{W(s)}
- \theta - column vector containing the filter parameters

Detailed definitions:
- Definition of W(s):
  W(s) = [s, s + r_1, ..., s + r_{p-1}]
  where r_1, ..., r_{p-1} index the neighbors of s.
- Definition of z_s:
  z_s = [y_s, y_{s+r_1}, ..., y_{s+r_{p-1}}]
- Definition of \theta:
  \theta = [\theta_0, ..., \theta_{p-1}]^t
Formulation of MMSE Linear Filter: Objectives

The linear filter is given by

\hat{x}_s = z_s \theta

The mean squared error is given by

MSE = E[ |x_s - \hat{x}_s|^2 ] = E[ |x_s - z_s \theta|^2 ]

The MMSE filter parameters are given by

\theta^* = \arg\min_\theta E[ |x_s - z_s \theta|^2 ]

How do we solve this problem?
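The objective above has a direct sample version. As a hedged sketch (the name `empirical_mse` and the synthetic data are illustrative, not from the notes):

```python
import numpy as np

def empirical_mse(X, Z, theta):
    # Sample estimate of E[|x_s - z_s theta|^2]: (1/N_0) ||X - Z theta||^2,
    # where the rows of Z are the window vectors z_s.
    r = X - Z @ theta
    return float(r @ r) / len(X)

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 5))               # 100 window vectors, p = 5
theta = np.array([0.2, 0.1, 0.4, 0.1, 0.2])
X = Z @ theta                               # noiseless "clean" pixels
print(empirical_mse(X, Z, theta))           # exactly 0.0 for noiseless data
```

Minimizing this sample average over theta is exactly the least squares problem developed on the following slides.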
More Matrix Notation

Define the subset S_0 of image pixels:
1. S_0 \subset S
2. S_0 contains N_0 < N pixels
3. S_0 usually does not contain pixels on the boundary of the image.
4. S_0 = [s_1, ..., s_{N_0}]

Define the N_0 x p matrix Z whose rows are the window vectors:

Z = [ z_{s_1} ; z_{s_2} ; ... ; z_{s_{N_0}} ]

Define the N_0 x 1 column vectors X and \hat{X}:

X = [ x_{s_1} ; x_{s_2} ; ... ; x_{s_{N_0}} ]   and   \hat{X} = [ \hat{x}_{s_1} ; \hat{x}_{s_2} ; ... ; \hat{x}_{s_{N_0}} ]

Then \hat{X} = Z \theta.
Least Squares Linear Filter

We expect that

MSE = E[ |x_s - z_s \theta|^2 ] \approx (1/N_0) \sum_{s \in S_0} |x_s - z_s \theta|^2 = (1/N_0) ||X - Z \theta||^2

So we may solve the equation

\theta^* = \arg\min_\theta ||X - Z \theta||^2

The solution \theta^* is the least squares estimate of \theta, and the estimate \hat{X} = Z \theta^* is known as the least squares filter.
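The least squares problem above can be solved directly with a standard solver. A minimal sketch on synthetic data (the specific dimensions, noise level, and variable names are assumptions made here, not values from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
N0, p = 500, 9
Z = rng.normal(size=(N0, p))                      # window vectors z_s as rows
theta_true = rng.normal(size=p)
X = Z @ theta_true + 0.01 * rng.normal(size=N0)   # clean pixels + small noise

# theta* = argmin_theta ||X - Z theta||^2
theta_star, *_ = np.linalg.lstsq(Z, X, rcond=None)
print(np.abs(theta_star - theta_true).max())      # small: on the order of the noise
```

With enough samples the recovered coefficients match the true filter up to the noise level.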
Computing Least Squares Linear Filter

\theta^* = \arg\min_\theta (1/N_0) ||X - Z \theta||^2
        = \arg\min_\theta (1/N_0) (X - Z\theta)^t (X - Z\theta)
        = \arg\min_\theta (1/N_0) ( X^t X - 2 \theta^t Z^t X + \theta^t Z^t Z \theta )
        = \arg\min_\theta ( X^t X / N_0 - 2 \theta^t (Z^t X / N_0) + \theta^t (Z^t Z / N_0) \theta )
        = \arg\min_\theta ( \theta^t (Z^t Z / N_0) \theta - 2 \theta^t (Z^t X / N_0) )

where the term X^t X / N_0 may be dropped because it does not depend on \theta.
Covariance Estimates

Define the p x p matrix

\hat{R}_{zz} = Z^t Z / N_0 = (1/N_0) [ z_{s_1}^t, z_{s_2}^t, ..., z_{s_{N_0}}^t ] [ z_{s_1} ; z_{s_2} ; ... ; z_{s_{N_0}} ] = (1/N_0) \sum_{i=1}^{N_0} z_{s_i}^t z_{s_i}

Define the p x 1 vector

\hat{r}_{zx} = Z^t X / N_0 = (1/N_0) [ z_{s_1}^t, z_{s_2}^t, ..., z_{s_{N_0}}^t ] [ x_{s_1} ; x_{s_2} ; ... ; x_{s_{N_0}} ] = (1/N_0) \sum_{i=1}^{N_0} z_{s_i}^t x_{s_i}

So

\theta^* = \arg\min_\theta ( \theta^t \hat{R}_{zz} \theta - 2 \theta^t \hat{r}_{zx} )
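The quadratic rewriting of the cost can be checked numerically: for any theta, (1/N_0)||X - Z theta||^2 equals theta^t R_zz theta - 2 theta^t r_zx once the constant X^t X / N_0 is restored. A small verification sketch (random data and names chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N0, p = 200, 4
Z = rng.normal(size=(N0, p))
X = rng.normal(size=N0)
theta = rng.normal(size=p)

Rzz = Z.T @ Z / N0        # p x p sample autocorrelation of z_s
rzx = Z.T @ X / N0        # length-p sample cross-correlation of z_s and x_s

lhs = np.sum((X - Z @ theta) ** 2) / N0
rhs = theta @ Rzz @ theta - 2 * theta @ rzx + X @ X / N0   # constant term restored
print(np.isclose(lhs, rhs))   # True
```

The constant term does not affect the argmin, which is why it is dropped on the slide.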
Interpretation of \hat{R}_{zz} and \hat{r}_{zx}

\hat{R}_{zz} is an estimate of the covariance of z_s:

E[ \hat{R}_{zz} ] = E[ (1/N_0) \sum_{i=1}^{N_0} z_{s_i}^t z_{s_i} ] = E[ z_s^t z_s ] = R_{zz}

\hat{r}_{zx} is an estimate of the cross correlation between z_s and x_s:

E[ \hat{r}_{zx} ] = E[ (1/N_0) \sum_{i=1}^{N_0} z_{s_i}^t x_{s_i} ] = E[ z_s^t x_s ] = r_{zx}
Solution to Least Squares Linear Filter

We need

\theta^* = \arg\min_\theta (1/N_0) ||X - Z \theta||^2

We have shown this is equivalent to

\theta^* = \arg\min_\theta ( \theta^t \hat{R}_{zz} \theta - 2 \theta^t \hat{r}_{zx} )

Taking the gradient of the cost functional and setting it to zero at \theta = \theta^*:

0 = \nabla_\theta ( \theta^t \hat{R}_{zz} \theta - 2 \theta^t \hat{r}_{zx} ) |_{\theta = \theta^*} = 2 \hat{R}_{zz} \theta^* - 2 \hat{r}_{zx}

Solving for \theta^* yields

\theta^* = ( \hat{R}_{zz} )^{-1} \hat{r}_{zx}
Summary of Solution to Least Squares Linear Filter

First compute

\hat{R}_{zz} = (1/N_0) \sum_{i=1}^{N_0} z_{s_i}^t z_{s_i}

Then compute

\hat{r}_{zx} = (1/N_0) \sum_{i=1}^{N_0} z_{s_i}^t x_{s_i}

\theta^* = ( \hat{R}_{zz} )^{-1} \hat{r}_{zx}

The vector \theta^* then contains the values of the filter coefficients.
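The three-step summary above translates almost line for line into NumPy. This is a sketch under the stated assumptions (the function name `train_lsq_filter` and the synthetic data are illustrative); in practice \hat{R}_{zz}^{-1} \hat{r}_{zx} is computed with a linear solve rather than an explicit inverse:

```python
import numpy as np

def train_lsq_filter(Z, X):
    # Z: N0 x p matrix whose rows are window vectors z_s; X: length-N0 clean pixels.
    N0 = Z.shape[0]
    Rzz = Z.T @ Z / N0                 # hat{R}_zz = (1/N0) sum z_s^t z_s
    rzx = Z.T @ X / N0                 # hat{r}_zx = (1/N0) sum z_s^t x_s
    return np.linalg.solve(Rzz, rzx)   # theta* = hat{R}_zz^{-1} hat{r}_zx

rng = np.random.default_rng(2)
Z = rng.normal(size=(300, 6))
X = rng.normal(size=300)
theta_star = train_lsq_filter(Z, X)
theta_ref, *_ = np.linalg.lstsq(Z, X, rcond=None)   # same minimizer
print(np.allclose(theta_star, theta_ref))
```

Solving the normal equations this way gives the same coefficients as the direct least squares solve, since both minimize ||X - Z theta||^2.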
Training

\theta is usually estimated from training data.

Training data:
- Generally consists of image pairs (X, Y), where Y is the measured data and X is the undistorted image.
- Should be typical of what you might expect.
- Can often be difficult to obtain.

Testing data:
- Also consists of image pairs (X, Y).
- Is used to evaluate the effectiveness of the filters.
- Should never be taken from the training data set.

Training versus testing:
- Performance on training data is always better than performance on testing data.
- As the amount of training data increases, the performance on training and testing data both approach the best achievable performance.
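The train/test discipline above can be demonstrated on synthetic data. This sketch assumes a made-up generating model (the helper `make_pair`, the noise level, and the dimensions are all illustrative): fit theta on one pair, then evaluate the MSE on a held-out pair.

```python
import numpy as np

rng = np.random.default_rng(3)
p, sigma = 9, 0.1
theta_true = np.linspace(0.1, 0.9, p)

def make_pair(n):
    # Synthetic stand-in for an (X, Y) image pair: window vectors Z, clean pixels X
    Z = rng.normal(size=(n, p))
    X = Z @ theta_true + sigma * rng.normal(size=n)
    return Z, X

Z_tr, X_tr = make_pair(200)   # training pair
Z_te, X_te = make_pair(200)   # testing pair -- never drawn from the training set

theta, *_ = np.linalg.lstsq(Z_tr, X_tr, rcond=None)
mse_tr = np.mean((X_tr - Z_tr @ theta) ** 2)
mse_te = np.mean((X_te - Z_te @ theta) ** 2)
print(mse_tr, mse_te)   # training MSE is typically the smaller of the two
```

Both errors hover near the noise floor sigma^2; the gap between them shrinks as the amount of training data grows, matching the last bullet above.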
Comments

- The Wiener filter is the MMSE linear filter.
- The Wiener filter may be optimal, but it isn't always good:
  - Linear filters blur edges.
  - Linear filters work poorly with non-Gaussian noise.
- Nonlinear filters can be designed using the same methodologies.
Is MMSE a Good Quality Criterion for Images?

In general, NO! ... But sometimes it is OK.

For achromatic images, it is best to choose X and Y in L* or gamma-corrected coordinates.

Let H be a filter that implements the contrast sensitivity function (CSF) of the human visual system. Then a better metric of error is

HVSE = ||H(X - \hat{X})||^2 = (X - \hat{X})^t H^t H (X - \hat{X}) = ||X - \hat{X}||_B^2

where B = H^t H. Here ||X - \hat{X}||_B^2 is a quadratic norm.

What is the minimum HVSE estimate \hat{X}?
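The identity ||H e||^2 = e^t B e with B = H^t H is easy to verify numerically. In this sketch a random matrix stands in for the CSF filter H (a real H would be a fixed perceptual weighting, not random):

```python
import numpy as np

def hvse(x, xhat, H):
    # ||H (x - xhat)||^2  =  ||x - xhat||_B^2  with  B = H^t H
    e = H @ (x - xhat)
    return float(e @ e)

rng = np.random.default_rng(4)
H = rng.normal(size=(8, 8))        # stand-in for a CSF filter matrix
x = rng.normal(size=8)
xhat = rng.normal(size=8)

B = H.T @ H
print(np.isclose(hvse(x, xhat, H), (x - xhat) @ B @ (x - xhat)))  # True
```

So HVSE is an ordinary squared error computed after filtering the error image by H, which is exactly the quadratic-norm form used on the next slides.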
Answer

The answer is \hat{X} = E[X | Y]. This is the same as for mean squared error!
- The conditional expectation minimizes any quadratic norm of the error.
- This is also true for non-Gaussian images.

Let \hat{X}_s = A Y_{W(s)} + b be the MMSE linear filter.
- This filter is also the minimum HVSE linear filter.
- This is also true for non-Gaussian images.
Proof

Define V = HX and B = H^t H. Then

min_{\hat{X}} E[ ||X - \hat{X}||_B^2 ] = min_{\hat{X}} E[ ||H(X - \hat{X})||^2 ]
                                      = min_{\hat{V}} E[ ||V - \hat{V}||^2 ]
                                      = E[ ||V - E[V|Y]||^2 ]
                                      = E[ ||HX - E[HX|Y]||^2 ]
                                      = E[ ||H(X - E[X|Y])||^2 ]
                                      = E[ ||X - E[X|Y]||_B^2 ]

So \hat{X} = E[X|Y] minimizes the error measure HVSE = ||X - \hat{X}||_B^2.