A NO-REFERENCE SHARPNESS METRIC SENSITIVE TO BLUR AND NOISE. Xiang Zhu and Peyman Milanfar


A NO-REFERENCE SHARPNESS METRIC SENSITIVE TO BLUR AND NOISE

Xiang Zhu and Peyman Milanfar
Electrical Engineering Department, University of California at Santa Cruz, CA 95064
xzhu@soe.ucsc.edu

ABSTRACT

A no-reference objective sharpness metric that detects both blur and noise is proposed in this paper. The metric is based on the local gradients of the image and does not require any edge detection. Its value drops either when the test image becomes blurred or when it is corrupted by random noise, and it can therefore be thought of as an indicator of the signal-to-noise ratio of the image. Experiments using synthetic, natural, and compressed images are presented to demonstrate the effectiveness and robustness of this metric. Its statistical properties are also provided.

Index Terms: Sharpness metric, blur, noise, singular value, gradient, covariance

1. INTRODUCTION

A no-reference image quality metric can play a significant role in image processing, as it can be used in many applications such as algorithm performance analysis, algorithm parameter optimization, and video compression and communication. Various methods focusing on sharpness or blur measurement have been proposed. Some of them are based on pixel derivatives [1], kurtosis [2], or DCT coefficients [3], and a large group of metrics analyze the spread of the edges in an image [4], [5]. Although these metrics are often applied to assess image quality, most neglect the effects of other possible degradation sources, such as random noise. Some sharpness metrics perform well in detecting blur in the presence of white Gaussian noise (WGN) [6], [7]; an example is the Riemannian tensor based metric [6], whose value drops as the image becomes increasingly blurred. However, the value of this measure rises if the variance of the noise is increased, which means that it cannot distinguish a decay in image quality from high-frequency behavior due to noise.

In this paper we propose a new sharpness metric that can quantify the amount of both blur and random noise. The value of this metric drops either when the image becomes blurred or when it becomes noisy, so it can be used to capture the change of visual quality in many image processing applications. The metric is based on the singular value decomposition of the local image gradient matrix. We first introduce the gradient matrix and its largest singular value s_1 for various types of local image regions, and characterize the behavior of s_1 in the presence of blur and noise. Then we give the definition of our sharpness metric. Several experiments are provided to highlight the performance of the metric. Conclusions and further research directions based on the metric are provided in the final section. (This work was supported in part by AFOSR grant FA9550-07-1-0365.)

2. SINGULAR VALUES OF THE IMAGE GRADIENT MATRIX

Image structure can be measured effectively by using the differences between pixels (or image gradients). Consider an image of interest g(x, y). The gradient matrix of a region within an N x N local analysis window w_i is defined as

G = \begin{bmatrix} \vdots & \vdots \\ g_x(k) & g_y(k) \\ \vdots & \vdots \end{bmatrix}, \quad k \in w_i,   (1)

where [g_x(k), g_y(k)] denotes the gradient of the image at the point (x_k, y_k). The corresponding gradient covariance matrix is

C = G^T G = \begin{bmatrix} \sum_{k \in w_i} g_x^2(k) & \sum_{k \in w_i} g_x(k) g_y(k) \\ \sum_{k \in w_i} g_x(k) g_y(k) & \sum_{k \in w_i} g_y^2(k) \end{bmatrix}.   (2)

Not surprisingly, some interesting information about the content of the image patch w_i can be derived from the gradient matrix G or the gradient covariance matrix C. One example is to calculate the local dominant orientation by computing the (compact) singular value decomposition (SVD) of G [8]:
G = U S V^T = U \begin{bmatrix} s_1 & 0 \\ 0 & s_2 \end{bmatrix} \begin{bmatrix} v_1 & v_2 \end{bmatrix}^T,   (3)

where both U and V are orthonormal matrices. The column vector v_1 represents the dominant orientation of the local gradient field. Correspondingly, the second singular vector v_2 (which is orthogonal to v_1) describes the dominant orientation of the patch itself. The singular values s_1 >= s_2 >= 0 represent the energy in the directions v_1 and v_2, respectively.
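For concreteness, the following minimal NumPy sketch forms G for a patch and reads s_1, s_2 and v_1 off its SVD. The use of np.gradient as the discrete derivative is an implementation choice, not the paper's prescription.

```python
import numpy as np

def gradient_svd(patch):
    """Form the gradient matrix G of Eq. (1) and return (s1, s2, v1) from its SVD, Eq. (3)."""
    gy, gx = np.gradient(patch.astype(float))         # discrete derivatives (one possible choice)
    G = np.column_stack([gx.ravel(), gy.ravel()])      # one row [g_x(k), g_y(k)] per pixel k
    U, s, Vt = np.linalg.svd(G, full_matrices=False)   # compact SVD: G = U S V^T
    return s[0], s[1], Vt[0]                           # s1 >= s2; v1 = dominant gradient orientation

# Example: an oriented linear ramp has s2 close to 0 and v1 aligned with the gradient direction.
x, y = np.meshgrid(np.arange(8), np.arange(8))
print(gradient_svd(0.3 * x + 0.1 * y))
```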

Fig. 1. Types of patches used in the experiments throughout this paper: (a) flat, (b) linear, (c) quadratic, (d) edged.

The above quantities can equivalently be measured using the eigenvectors of C, because

C = V S^T S V^T = V \begin{bmatrix} s_1^2 & 0 \\ 0 & s_2^2 \end{bmatrix} V^T.   (4)

As we will describe below, since the singular values reflect the strength of the gradients along the dominant direction and its perpendicular direction, they are sensitive to blurring and may therefore be used to define a sharpness metric.

Before we proceed with the definition of our sharpness metric, we analyze the behavior of s_1 and s_2 on several types of idealized patches: flat, linear, quadratic, and edged patches (shown in Fig. 1).

In the flat case, all points within the N x N patch share a common intensity value:

g(x_k, y_k) = c.   (5)

Both g_x(k) and g_y(k) are equal to 0 for k = 1, 2, ..., N^2, and s_1 = s_2 = 0. Naturally, ignoring boundary effects arising from the finite nature of the window, a flat patch remains unchanged after being blurred. In what follows, we will apply a space-invariant Gaussian blur iteratively to the canonical regions shown in Fig. 1 and observe how the singular values behave. In this sense, the flat region can be thought of as the final result as the number of iterations (or, equivalently, the strength of the blur) is made arbitrarily large.

In the linear patch, the gray value of each point can be modeled as

g(x_k, y_k) = a (x_k \cos\theta + y_k \sin\theta) + b,   (6)

where a determines the slope, \theta the orientation, and b the bias. It can be deduced that s_1 and s_2 take the values

s_1 = aN, \quad s_2 = 0.   (7)

Both s_1 and s_2 are independent of the orientation; s_1 is proportional to the slope for a fixed patch size, while s_2 remains at zero.

For the sake of simplicity, the quadratic patch is modeled as

g(x_k, y_k) = a_1 (x_k - x_c)^2 + a_2 (y_k - y_c)^2,   (8)

where (x_c, y_c) is the center point. The singular values of its gradient matrix are

s_1 = a_{\max} N \sqrt{\frac{(N-1)(N+1)}{3}}, \quad s_2 = a_{\min} N \sqrt{\frac{(N-1)(N+1)}{3}},   (9)

where

a_{\max} = \max(a_1, a_2), \quad a_{\min} = \min(a_1, a_2).   (10)

Here s_1 and s_2 reflect the values of a_1 and a_2, which determine the slope at each point and thus determine the sharpness of the region.

Another type of image region that is very sensitive to blurring is the ideal edged patch. Again, in the interest of convenience we just look at an ideal vertical edge:

g(x_k, y_k) = \begin{cases} b + c, & x_k > x_c \\ b, & \text{otherwise,} \end{cases}   (11)

where, without loss of generality, c is a positive value. The corresponding singular values are

s_1 = c\sqrt{N}, \quad s_2 = 0.   (12)

Only s_1 here reflects the value of the parameter c, which gives the intensity difference between the two sides of the edge.

In general, regardless of the patch type, rotating an image patch by an arbitrary angle \theta will not change the singular values of the gradient matrix. The relationship between the rotated gradient matrix G_\theta and the unrotated G is

G_\theta = G R_\theta^T,   (13)

where R_\theta is the (orthonormal) rotation matrix

R_\theta = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}.   (14)

Therefore, the SVD of G_\theta becomes

G_\theta = U S (R_\theta V)^T.   (15)

This equation says that the directions v_1 and v_2 are correspondingly rotated, but the singular values s_1 and s_2 remain unchanged.

It is observed through the above analysis that the singular value s_1 is quite intimately related to the sharpness of the local region. This is valid not only in regions with strong direction and contrast (the edged patch), but also in regions which may be isotropic (the quadratic patch, where a_1 = a_2) or very smooth (the linear patch). To verify the usefulness of s_1 in the presence of blur, we applied a Gaussian blur (of size 5 x 5 and variance 0.25) iteratively to the above patch types and recorded the resulting s_1 values, which are shown in Fig. 2. The sizes of the patches and of the central analysis window are given in the caption of Fig. 2.
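The following sketch reproduces a version of this experiment under assumed parameters (13 x 13 synthetic patches, a 7 x 7 central analysis window, and repeated passes of a sigma_G = 0.5 Gaussian kernel):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def s1_of(patch):
    gy, gx = np.gradient(patch.astype(float))
    G = np.column_stack([gx.ravel(), gy.ravel()])
    return np.linalg.svd(G, compute_uv=False)[0]

size, win = 13, 7                                    # assumed patch and analysis-window sizes
c, r = size // 2, 7 // 2
x, y = np.meshgrid(np.arange(size) - c, np.arange(size) - c)
patches = {
    "flat":      np.zeros((size, size)),
    "linear":    0.05 * x,
    "quadratic": 0.01 * (x ** 2 + y ** 2),
    "edged":     0.5 * (x > 0).astype(float),
}
for name, p in patches.items():
    trace = []
    for _ in range(10):                              # ten passes of the sigma_G = 0.5 kernel
        trace.append(s1_of(p[c - r:c + r + 1, c - r:c + r + 1]))
        p = gaussian_filter(p, sigma=0.5)
    print(f"{name:9s}", np.round(trace, 3))          # s1 stays 0 for the flat patch and
                                                     # drifts downward for the non-flat ones
```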

Fig. 2. Plots of s_1 during the blurring process for the different patches: (a) flat, (b) linear, (c) quadratic, (d) edged. Patch size N = 7. The smoothing kernel is Gaussian with standard deviation sigma_G = 0.5. We apply this kernel iteratively to make each patch more and more blurred, and we analyze only the 7 x 7 window in the center to avoid border effects.

It is observed that as the blurring goes on, s_1 for all the non-flat patches drops steadily, as expected.

Next, we take noise into account. A good sharpness metric should react reasonably to both blur and random noise, so the next question we address is what happens to s_1 if the image (or patch) is corrupted by white Gaussian noise (WGN). Assume that we have an N x N noise patch denoted in column-stacked vector format as an N^2 x 1 vector n, which contains i.i.d. samples of zero-mean WGN with variance \sigma^2. In practice the statistics of the noise gradient matrix G_n (defined below) depend upon the way we calculate the discrete derivatives. For example, the gradient of n in the x and y directions can be produced by applying the filters

\begin{bmatrix} -1 & 1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} -1 \\ 1 \end{bmatrix},   (16)

or the filter masks

\frac{1}{8}\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \quad \text{and} \quad \frac{1}{8}\begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}.   (17)

The gradient matrix G_n can be calculated as

G_n = \begin{bmatrix} D_x n & D_y n \end{bmatrix},   (18)

where the matrices D_x and D_y are derived from filters such as the above. Because the noise is zero-mean, the expected value of G_n is

E(G_n) = 0,   (19)

and the expected gradient covariance matrix becomes

E(C_n) = E(G_n^T G_n) = E\begin{bmatrix} n^T D_x^T D_x n & n^T D_x^T D_y n \\ n^T D_y^T D_x n & n^T D_y^T D_y n \end{bmatrix},   (20)

where the first entry can be deduced as

E(C_n)_{1,1} = E\left( n^T D_x^T D_x n \right) = E\left( \mathrm{tr}\left( D_x n n^T D_x^T \right) \right) = \sigma^2 \, \mathrm{tr}(D_x D_x^T),   (21)

and similarly we have

E(C_n)_{1,2} = \sigma^2 \, \mathrm{tr}(D_y D_x^T), \quad E(C_n)_{2,1} = \sigma^2 \, \mathrm{tr}(D_x D_y^T), \quad E(C_n)_{2,2} = \sigma^2 \, \mathrm{tr}(D_y D_y^T).   (22)

The value of the trace \mathrm{tr}(D D^T) depends upon the specific filter used in (18). It can be shown that if we choose (16) or (17), the expected C_n will have the form

E(C_n) = \sigma^2 \begin{bmatrix} \xi N^2 & 0 \\ 0 & \xi N^2 \end{bmatrix},   (23)

where \xi = 2 if we use the filters in (16), and \xi = 3/16 for the filter masks in (17).

Now consider how the value of s_1 changes when a clean image g is corrupted by the noise image denoted by n. The gradient matrix of the noisy image \hat{g} becomes

\hat{G} = G + G_n.   (24)

Since G is deterministic, the expected \hat{C} has the form

E(\hat{C}) = E(\hat{G}^T \hat{G}) = G^T G + E(G_n^T G_n) + G^T E(G_n) + E(G_n^T) G = V \begin{bmatrix} s_1^2 + \xi N^2 \sigma^2 & 0 \\ 0 & s_2^2 + \xi N^2 \sigma^2 \end{bmatrix} V^T.   (25)

So on average the largest singular value \hat{s}_1 of the noisy image can approximately be written as

\hat{s}_1 \approx \sqrt{s_1^2 + \xi N^2 \sigma^2}.   (26)

This equation tells us that \hat{s}_1 is determined by both s_1 and \sigma. Given a fixed \sigma, the value of \hat{s}_1 drops as s_1 decreases, that is, when the image g is more blurry. Unfortunately, \hat{s}_1 is also monotonically increasing with the noise variance \sigma^2, so our definition of a sharpness metric must be a modification of \hat{s}_1, as described below. It is useful to note that if \sigma is sufficiently large, \hat{s}_1 becomes approximately proportional to the standard deviation \sigma:

\hat{s}_1 \approx \sqrt{\xi}\, N \sigma.   (27)
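The constant \xi and the large-\sigma behavior of \hat{s}_1 in (27) can be checked numerically. The sketch below computes \xi as the sum of squared filter coefficients and estimates the mean \hat{s}_1 of pure-noise patches by simulation; circular boundary handling is assumed so that tr(D_x D_x^T) equals \xi N^2 exactly.

```python
import numpy as np
from scipy.signal import convolve2d

# xi = sum of squared derivative-filter coefficients (= tr(D D^T) / N^2 with circular boundaries)
kx = np.array([[-1.0, 1.0]])                                  # filter pair (16)
sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0     # first mask of (17)
print("xi for (16):", np.sum(kx ** 2))                        # -> 2.0
print("xi for (17):", np.sum(sx ** 2))                        # -> 0.1875 = 3/16

# Monte-Carlo check of Eq. (27): for pure WGN patches, E[s1_hat] is roughly sqrt(xi) * N * sigma.
rng = np.random.default_rng(0)
N, sigma, xi = 16, 0.05, 2.0
s1_hat = []
for _ in range(500):
    n = rng.normal(0.0, sigma, (N, N))
    gx = convolve2d(n, kx, mode="same", boundary="wrap")
    gy = convolve2d(n, kx.T, mode="same", boundary="wrap")
    G = np.column_stack([gx.ravel(), gy.ravel()])
    s1_hat.append(np.linalg.svd(G, compute_uv=False)[0])
print("mean s1_hat      :", round(float(np.mean(s1_hat)), 3))
print("sqrt(xi)*N*sigma :", round(np.sqrt(xi) * N * sigma, 3))
```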

3. SHARPNESS METRIC

The above analysis demonstrates that \hat{s}_1 itself cannot be used directly as a sharpness metric in the presence of both noise and blur. To alleviate this problem, we define the sharpness metric as

H = \frac{\hat{s}_1}{\varepsilon + \sigma^2},   (28)

where \varepsilon is a fixed, small positive constant. Here we assume that the noise variance \sigma^2 is known, or at least can be estimated. For a fixed \sigma, the behavior of H is basically the same as that of \hat{s}_1. As \sigma rises, however, the value of H drops to zero, as desired.

Monte-Carlo simulations were carried out to examine the behavior of H on the different image patches. We add a realization of WGN with \sigma ranging from 0 to 0.8 to the test patch and calculate the metric H. For each \sigma, this process is repeated over independent noise realizations to get the average sharpness metric. \varepsilon is set to 5 x 10^{-4}, and the true standard deviation \sigma is used in calculating H. The plots are shown in Fig. 3, which shows that the sharpness metric decreases as the variance of the noise rises. In Section 4 we present experimental validation of H using more general natural images with noise, blur, and compression artifacts. We note that \hat{s}_1 = s_1 and H = s_1 / \varepsilon if no noise is present.

Fig. 3. Plots of the average H versus the noise standard deviation \sigma in Monte-Carlo simulations for the different patches: (a) flat, (b) linear, (c) quadratic, (d) edged. Here \varepsilon = 5 x 10^{-4} and the pixel intensity range is [0, 1]. Independent noise realizations were used for each \sigma to get the average. The patch size is N = 7.

Before describing further experimental results, we pause to note that when noise is present, H is of course a random variable. If the variance of the noise is assumed to be given, then a particular measured value of H can be used to decide how much true content the image contains as compared to noise. Said another way, H can be thought of as a rough indicator of the signal-to-noise ratio. In order for this concept to be useful, it is necessary to know the actual statistical distribution of H. Unfortunately, this distribution is in general quite complicated. Happily, however, it can be derived in closed form when the image is pure WGN (or approximately when the noise overwhelms the signal); this is still useful as a baseline value for the statistic. More specifically, based on results from [9] the density can be derived in closed form in terms of the lower incomplete gamma function

\gamma(\alpha, x) = \int_0^x t^{\alpha - 1} e^{-t}\, dt,   (29)

with parameters determined by the window size N, the noise level \sigma, the filter constant \xi, and \varepsilon. Plots of this density for different window sizes and different noise variances are shown in Fig. 4. (Due to lack of space, we do not describe the detailed derivation in this paper.)

Fig. 4. The probability density function of the sharpness metric H for white Gaussian noise: (a) density functions for different window sizes N with fixed \sigma and \varepsilon; (b) density functions for different noise standard deviations \sigma with fixed N and \varepsilon.

4. EXPERIMENTS

In this section we present further experimental results for the proposed no-reference sharpness metric. The experimental measurement strategy we take is the same for all the experiments. Namely, the image is first divided into M non-overlapping blocks, and H_i is calculated for each block i. Finally we use the average

\bar{H} = \frac{1}{M} \sum_{i=1}^{M} H_i

as the metric for the whole image. It is worth mentioning that in practice, when calculating H_i, if the standard deviation of the noise is not known, it too can be estimated using a variety of methods. In the experiments here we employ the median absolute deviation (MAD) method [10] to measure the standard deviation \sigma of the Gaussian noise globally, and we assume that this value is unchanged across all the blocks.
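A compact sketch of this pipeline is given below. The 16 x 16 block size and \varepsilon = 5 x 10^{-4} follow the values quoted in Sections 3 and 4.1, the Haar-based MAD rule is an assumed stand-in for the estimator referenced in [10], and np.gradient stands in for the derivative filters of (16)-(17).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_sigma_mad(img):
    """Global noise std via a MAD rule on finest-scale diagonal (Haar HH) details.
    This is an assumed stand-in for the MAD estimator referenced in [10]."""
    im = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    hh = (im[0::2, 0::2] - im[0::2, 1::2] - im[1::2, 0::2] + im[1::2, 1::2]) / 2.0
    return float(np.median(np.abs(hh)) / 0.6745)

def sharpness_metric(img, sigma=None, block=16, eps=5e-4):
    """Average of H_i = s1_hat / (eps + sigma^2) over non-overlapping blocks, Eq. (28)."""
    img = img.astype(float)
    if sigma is None:
        sigma = estimate_sigma_mad(img)
    h = []
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            gy, gx = np.gradient(img[r:r + block, c:c + block])
            G = np.column_stack([gx.ravel(), gy.ravel()])
            h.append(np.linalg.svd(G, compute_uv=False)[0] / (eps + sigma ** 2))
    return float(np.mean(h))

# H drops both when the test image is blurred and when WGN is added (cf. Figs. 5 and 6).
x, y = np.meshgrid(np.arange(128), np.arange(128))
clean = 0.5 + 0.5 * np.sin(x / 2.0) * np.cos(y / 3.0)          # synthetic test image in [0, 1]
rng = np.random.default_rng(1)
print("clean  :", sharpness_metric(clean, sigma=0.0))
print("blurred:", sharpness_metric(gaussian_filter(clean, 1.5), sigma=0.0))
print("noisy  :", sharpness_metric(np.clip(clean + rng.normal(0, 0.05, clean.shape), 0, 1)))
```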

4.1. Simulations

First, we examine the validity of H in measuring blur for natural images with random noise. The test gray image is Lena (512 x 512). A Gaussian blur kernel, whose standard deviation \sigma_b is varied over a range of values, is applied to the image, and WGN with variance \sigma^2 is added after each blurring step. The resulting plots of H are shown in Fig. 5. The block size is N = 16, and \varepsilon is a fixed small constant.

Fig. 5. Plots of H versus the smoothing kernel standard deviation \sigma_b. WGN at three different noise levels \sigma was added to the blurred test image Lena. In calculating the local H_i, the window size N = 16, a fixed small \varepsilon, and the noise standard deviation estimated through the MAD method were used.

Next, we keep the blur kernel fixed and increase the variance of the WGN. The results are shown in Fig. 6. It is observed that both blur and noise decrease the value of the proposed sharpness metric, as desired.

Fig. 6. Plots of H versus the Gaussian noise level. The test image is Lena, blurred with a fixed smoothing kernel of width \sigma_b. In calculating the local H_i, the window size N = 16 was used.

4.2. JPEG experiments

As an extension of the application of our metric, here we use it to measure the quality of JPEG compressed images from the LIVE database [11]. The specific images used are shown in Fig. 7. H is calculated from the luminance channel of the images. All the parameters for H are the same as in Section 4.1 above, and we assume that no random noise is involved. (The value of \varepsilon is higher than in the previous experiments, because the pixel intensity range here is [0, 255].) The recently introduced and popularized image quality metric SSIM [12] is also calculated for comparison. From the results shown in Fig. 8, it can be seen that the rate-distortion curves obtained using H (that is, \bar{H} computed across the whole image) and SSIM as the distortion measure behave in a similar fashion. This may indicate that the no-reference metric H is also able to capture the visual quality of the images in the presence of compression artifacts. In addition, it is observed that H has a larger range of values than SSIM (which essentially computes fidelity and always lies between 0 and 1). This is because H indicates the signal-to-noise ratio, as described before: images with more detail receive a higher value of H. By looking at the estimated H of the different test images at the same bit rate, we can see that the value for the Buildings image, which is more detailed, is higher than for Lighthouse, and that Monarch, whose background is blurry, is always the smallest.

Fig. 7. Images from the LIVE database used for the JPEG test: (a) Buildings, (b) Lighthouse, (c) Monarch. Their size is 512 x 768.
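A self-contained sketch of this rate comparison is shown below. Pillow performs the JPEG coding, scikit-image's structural_similarity supplies SSIM, and \sigma = 0 is assumed as in the experiments above; the image file name, the quality settings, and the larger \varepsilon are placeholders.

```python
import io
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def sharpness_metric(img, sigma=0.0, block=16, eps=0.1):    # eps = 0.1 is a placeholder for the
    img = img.astype(float)                                  # larger epsilon used with [0, 255] data
    h = []
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            gy, gx = np.gradient(img[r:r + block, c:c + block])
            G = np.column_stack([gx.ravel(), gy.ravel()])
            h.append(np.linalg.svd(G, compute_uv=False)[0] / (eps + sigma ** 2))
    return float(np.mean(h))

ref = np.array(Image.open("buildings.png").convert("L"))    # placeholder file name
for quality in (10, 30, 50, 75, 95):                         # assumed JPEG quality settings
    buf = io.BytesIO()
    Image.fromarray(ref).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    dec = np.array(Image.open(buf))
    bpp = 8.0 * buf.getbuffer().nbytes / ref.size            # bits per pixel of the coded image
    print(f"q={quality:3d}  bpp={bpp:.2f}  "
          f"SSIM={ssim(ref, dec, data_range=255):.3f}  H={sharpness_metric(dec):.1f}")
```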

Fig. 8. Plots of mean SSIM and H versus bit rate (bpp) for the JPEG compressed images Buildings, Lighthouse, and Monarch: (a)-(c) SSIM, (d)-(f) H. In calculating the local H_i, N = 16 and \sigma = 0 were used.

5. CONCLUSION

In this paper we proposed a no-reference sharpness metric based on image gradients. Experiments show that it can quantify the amount of both blur and random noise, and thus can capture the change of visual quality in many image processing applications. As described in Section 3, the metric actually serves as an indicator of the signal-to-noise ratio. However, it requires a prior estimate of the noise variance. Further study is already under way to devise an image content metric based on an extension of H, which does not need any prior knowledge and can be utilized in solving the parameter optimization problem for image restoration algorithms, such as denoising [13].

6. REFERENCES

[1] C. F. Batten, "Autofocusing and astigmatism correction in the scanning electron microscope," M.Phil. thesis, University of Cambridge, August 2000.
[2] J. Caviedes and F. Oberti, "A new sharpness metric based on local kurtosis, edge and energy information," Signal Processing: Image Communication, vol. 19, pp. 147-161, 2004.
[3] X. Marichal, W.-Y. Ma, and H. J. Zhang, "Blur determination in the compressed domain using DCT information," in Proceedings of the International Conference on Image Processing, Kobe, Japan, October 1999, vol. 2, pp. 386-390.
[4] P. Marziliano, F. Dufaux, S. Winkler, and T. Ebrahimi, "A no-reference perceptual blur metric," in Proceedings of the International Conference on Image Processing, Rochester, NY, 2002, vol. 3, pp. 57-60.
[5] S. Varadarajan and L. J. Karam, "An improved perception-based no-reference objective image sharpness metric using iterative edge refinement," in Proceedings of the 15th IEEE International Conference on Image Processing, October 2008, pp. 401-404.
[6] R. Ferzli and L. J. Karam, "A no-reference objective sharpness metric using Riemannian tensor," in Third International Workshop on Video Processing and Quality Metrics for Consumer Electronics, Scottsdale, Arizona, January 2007.
[7] R. Ferzli and L. J. Karam, "No-reference objective wavelet-based noise immune image sharpness metric," in IEEE International Conference on Image Processing, September 2005, pp. 405-408.
[8] X. Feng and P. Milanfar, "Multiscale principal components analysis for image local orientation estimation," in Proceedings of the 36th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, November 2002, vol. 1, pp. 478-482.
[9] A. Edelman, "Eigenvalues and condition numbers of random matrices," SIAM Journal on Matrix Analysis and Applications, vol. 9, no. 4, pp. 543-560, October 1988.
[10] F. R. Hampel, "The influence curve and its role in robust estimation," Journal of the American Statistical Association, vol. 69, pp. 383-393, 1974.
[11] H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, "LIVE image quality assessment database," http://live.ece.utexas.edu/research/quality.
[12] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004.
[13] X. Zhu and P. Milanfar, "A no-reference metric of true image content and its application in denoising," in preparation.