Image enhancement. Why image enhancement? Example of artifacts caused by image encoding

Transcription:

Image enhancement
Computer Vision, Lecture 14
Michael Felsberg, Computer Vision Laboratory, Department of Electrical Engineering

Why image enhancement? Example of artifacts caused by image encoding. Example of motion blur. Example of an image with sensor noise: an ultrasound image of a beating heart.

Why image enhancement? IR image: fixed-pattern noise = spatial variations in gain and offset, possibly even variations over time; hot/dead pixels. A digital camera with short exposure time: shot noise (photon noise).

Methods for image enhancement:
- Inverse filtering: the distortion process is modeled and estimated (e.g. motion blur), and the inverse process is applied to the image.
- Image restoration: an objective quality measure (e.g. sharpness) is estimated in the image, and the image is modified to increase that quality.
- Image enhancement: the image is modified to improve the visual quality, often with a subjective criterion.

Additive noise. Some types of image distortion can be described as noise added to each pixel intensity, where the noise has an identical distribution and is independent at each pixel (i.i.d.). Not all types of image distortion are of this kind: multiplicative noise, data-dependent noise, position-dependent noise. What about pixel shot noise? The methods discussed here assume additive i.i.d. noise.

Removing additive noise. Image noise typically contains higher frequencies than images generally do ⇒ a low-pass filter can reduce the noise. BUT: we also remove high-frequency signal components, e.g. at edges and lines. HOWEVER: a low-pass filter works well in regions without edges and lines (ergodicity).
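As a concrete illustration of the low-pass approach, here is a minimal NumPy sketch (not from the lecture): a separable Gaussian filter applied to a noisy step image; the kernel construction and the test image are my own choices.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Truncated 1D Gaussian kernel, normalized to sum 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_lp(image, sigma):
    """Separable Gaussian low-pass filtering of a 2D image."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, image)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, rows)

# noisy step edge: additive i.i.d. Gaussian noise on every pixel
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
smoothed = gaussian_lp(noisy, sigma=2.0)
flat = (slice(8, 24), slice(8, 24))   # a region without edges
```

The noise in the flat region drops markedly, but the same filter also blurs the edge, which is exactly the motivation for the space-variant filters discussed next.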

Example: LP filter. Image with some noise; Gaussian filter with σ = 1; Gaussian filter with σ = 2.

Basic idea. The problem with low-pass filters is that we apply the same filter on the whole image. We need a filter that locally adapts to the image structures: a space-variant filter.

Ordinary filtering / convolution. Ordinary filtering can be described as a convolution of the signal f and the filter g:

h(x) = ∫ f(x − y) g(y) dy

For each x, we compute the integral between the filter g and a shifted signal f.

Adaptive filtering. If we apply an adaptive (or position-dependent, or space-variant) filter g_x, the operation can no longer be expressed as a convolution, but instead as

h(x) = ∫ f(x − y) g_x(y) dy

For each x, we compute the integral between a shifted signal f and the filter g_x, where the filter depends on x.
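The difference between ordinary and space-variant filtering can be sketched in 1D. This is an illustrative example, not the lecture's filter: the local kernel width sigma_of_x is chosen by hand from the known edge position, whereas a real method would estimate it from the local structure.

```python
import numpy as np

def gauss(sigma, radius=4):
    """Truncated, normalized Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def convolve_same(f, g):
    """Ordinary filtering: the same kernel g at every position x."""
    return np.convolve(f, g, mode='same')

def adaptive_filter(f, sigma_of_x, radius=4):
    """Space-variant filtering: at each x, integrate the shifted
    signal against a kernel g_x that depends on x."""
    fp = np.pad(f, radius, mode='edge')
    h = np.empty(len(f))
    for x in range(len(f)):
        g_x = gauss(sigma_of_x[x], radius)           # locally adapted kernel
        h[x] = fp[x:x + 2 * radius + 1] @ g_x[::-1]  # one output sample
    return h

# 1D step edge with mild noise; narrow kernels near the edge, wide elsewhere
rng = np.random.default_rng(0)
f = (np.arange(64) >= 32).astype(float) + 0.05 * rng.standard_normal(64)
sigma_of_x = np.where(np.abs(np.arange(64) - 32) <= 2, 0.5, 2.0)
h_adaptive = adaptive_filter(f, sigma_of_x)
h_uniform = convolve_same(f, gauss(2.0))
```

The adaptive result keeps the step much sharper than the uniform filter while still averaging the noise in the flat parts.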

Orientation-selective g_x. If the signal is i1D (intrinsically one-dimensional), the filter can maintain the signal while reducing the frequency components orthogonal to the local structure. The human visual system is less sensitive to noise along linear structures than to noise in the orthogonal direction, so this results in a good subjective improvement of image quality.

Oriented noise. White noise in the image domain. White noise in the Fourier domain. Oriented white noise in the Fourier domain.

Oriented noise. Oriented white noise in the image domain. Edges and lines: A. without noise, B. with oriented noise along the structure, C. with isotropic noise, D. with oriented noise across the structure.

Local structure information. We compute the local orientation tensor T(x) at all points x to control / steer g_x. At a point x that lies in a locally i1D region, we obtain T(x) = A ê êᵀ, where ê is normal to the linear structure.

Scale space recap (from lecture 2). The linear Gaussian scale space related to the image f is a family of images L(x,y;s), parameterized by the scale parameter s, where L(x,y;s) = (g_s * f)(x,y) and g_s is a Gaussian LP filter with σ² = s (the convolution is over (x,y) only). Note: g_s(x,y) = δ(x,y) for s = 0.
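A minimal sketch of estimating the local orientation (structure) tensor with NumPy; the gradient operator (central differences) and the moving-average window are my own simple choices, not the lecture's filters.

```python
import numpy as np

def smooth(a, r=2):
    """Separable moving-average smoothing (the tensor's local window)."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    t = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, a)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, t)

def structure_tensor(img, r=2):
    """Local orientation tensor: outer products of the gradient,
    averaged over a neighborhood."""
    gy, gx = np.gradient(img)   # derivatives along rows (y) and columns (x)
    return smooth(gx * gx, r), smooth(gx * gy, r), smooth(gy * gy, r)

# an i1D pattern: intensity varies along x only
x = np.arange(64)
img = np.sin(0.4 * x)[None, :] * np.ones((64, 1))
Txx, Txy, Tyy = structure_tensor(img)
# the dominant eigenvector (normal to the linear structure) is the x axis
```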

Scale space recap (from lecture 2). L(x,y;s) can also be seen as the solution to the PDE

∂L/∂s = ½ ∇²L

with boundary condition L(x,y;0) = f(x,y). This is the diffusion equation; example: L = temperature, s = time.

Repetition: vector analysis. The nabla operator ∇ = (∂/∂x, ∂/∂y). On a scalar function L it gives the gradient ∇L; on a vector field v it gives the divergence ∇·v. The Laplace operator: ∇²L = ∇·(∇L) = ∂²L/∂x² + ∂²L/∂y².

Enhancement based on linear diffusion. This means that L(x,y;s) is an LP-filtered version of f(x,y) for s > 0; the larger s is, the more LP-filtered f is. High-frequency noise will be removed for larger s, but also high-frequency image components (e.g. edges). We need to control the diffusion process such that edges remain. How?

Step 1. Modify the PDE by introducing a parameter μ:

∂L/∂s = ½ μ ∇²L

This PDE is solved by the same Gaussian scale space as before, with the slightly different scaling σ² = μs. μ can be seen as a diffusion speed: with small μ the diffusion process is slow as s increases; with large μ it is fast.
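The linear diffusion view can be sketched with an explicit finite-difference scheme; each Euler step low-pass filters the image a little more. This sketch absorbs the constant ½ into the step length τ, and the stencil and parameters are illustrative choices.

```python
import numpy as np

def laplacian(L):
    """5-point finite-difference Laplacian, replicated borders."""
    Lp = np.pad(L, 1, mode='edge')
    return (Lp[:-2, 1:-1] + Lp[2:, 1:-1] +
            Lp[1:-1, :-2] + Lp[1:-1, 2:] - 4 * L)

def linear_diffusion(f, steps, tau=0.2):
    """Explicit Euler steps for dL/ds = lap(L); stable for tau <= 0.25."""
    L = f.astype(float).copy()
    for _ in range(steps):
        L = L + tau * laplacian(L)
    return L

rng = np.random.default_rng(1)
f = rng.standard_normal((64, 64))   # a pure high-frequency "noise image"
L = linear_diffusion(f, steps=20)
```

The mean intensity is preserved (the discrete scheme with replicated borders conserves the sum), while the high-frequency content is strongly attenuated.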

Step 2. We want the image content to control μ: in flat regions, fast diffusion (large μ); in non-flat regions, slow diffusion (small μ). We need space-variant diffusion, i.e., μ becomes a function of position (x,y). (In adaptive filtering, we will instead introduce a space-variant filter g_x.)

Inhomogeneous diffusion. Perona & Malik suggested using (absorbing the constant factor into μ)

∂L/∂s = ∇·(μ(x,y) ∇L), e.g. with μ = 1 / (1 + |∇f|²/λ²)

where ∇f is the image gradient at (x,y) and λ is a fixed parameter. Close to edges, |∇f| is large ⇒ μ is small; in flat regions, |∇f| is small ⇒ μ is large.

Inhomogeneous diffusion. Noise is effectively removed in flat regions, and edges are preserved, but noise is also preserved close to edges. We want to be able to LP-filter along but not across edges, the same as for adaptive filtering.
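Perona-Malik inhomogeneous diffusion can be sketched in a few lines. The diffusivity μ = 1/(1 + |∇L|²/λ²) is one of the standard Perona-Malik choices; the step length, iteration count, and λ here are illustrative, not the lecture's values.

```python
import numpy as np

def perona_malik(f, steps=40, tau=0.2, lam=0.1):
    """Explicit scheme for dL/ds = div(mu * grad L), with the
    Perona-Malik diffusivity mu = 1 / (1 + |grad L|^2 / lam^2)."""
    L = f.astype(float).copy()
    for _ in range(steps):
        gy, gx = np.gradient(L)
        mu = 1.0 / (1.0 + (gx**2 + gy**2) / lam**2)
        jy, jx = mu * gy, mu * gx                  # the diffusion flux
        L = L + tau * (np.gradient(jy, axis=0) + np.gradient(jx, axis=1))
    return L

rng = np.random.default_rng(2)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0    # a vertical edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = perona_malik(noisy)
flat = (slice(8, 24), slice(8, 24))
```

Noise in the flat region is reduced while the edge contrast survives; exactly as the slide notes, noise right next to the edge is treated least.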

Step 3. The previous PDEs are all isotropic ⇒ the resulting filter g is isotropic. The last PDE can be written

∂L/∂s = ∇·(μ ∇L)

where ∇L, the gradient of L, is a 2D vector field, and the divergence ∇·( ) maps a 2D vector field to a scalar field.

Step 3 (continued). Change μ from a scalar to a 2×2 symmetric matrix D:

∂L/∂s = ∇·(D ∇L)

For constant D, the solution is again given by convolution with a Gaussian, now an anisotropic one shaped by D (same structure as before).

Anisotropic diffusion. The filter g is now anisotropic, i.e., not necessarily circularly symmetric; the shape of g depends on D. D is called a diffusion tensor and can be given a physical interpretation, e.g. anisotropic heat diffusion.

The diffusion tensor. Since D is symmetric 2×2:

D = α_1 e_1 e_1ᵀ + α_2 e_2 e_2ᵀ

where α_1, α_2 are the eigenvalues of D, and e_1, e_2 are the corresponding eigenvectors; e_1 and e_2 form an ON basis.

The filter g. The corresponding shape of g is given by its iso-curves: ellipses with principal axes along e_1 and e_2, where the width of the filter in direction e_k is given by α_k.

Step 4. We want g to be narrow across edges and wide along edges. This means that D should depend on (x,y): a space-variant anisotropic diffusion. This is what is referred to as anisotropic diffusion in the literature, introduced by Weickert.

Anisotropic diffusion. Information about edges and their orientation can be provided by an orientation tensor, e.g. the structure tensor T, in terms of its eigenvalues λ_1, λ_2. However, we want α_k to be close to 0 when λ_k is large, and α_k to be close to 1 when λ_k is close to 0.

From T to D. The diffusion tensor D is obtained from the orientation tensor T by modifying the eigenvalues and keeping the eigenvectors, for example

α_k = e^(−λ_k / m)

where m is a control parameter.
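The T-to-D mapping (keep the eigenvectors, remap the eigenvalues) can be sketched for a single 2×2 tensor; the map α = exp(−λ/m) is one possible choice satisfying the two requirements above, not necessarily the lecture's exact function.

```python
import numpy as np

def diffusion_tensor(Txx, Txy, Tyy, m=1.0):
    """Keep T's eigenvectors, remap each eigenvalue lam -> exp(-lam/m):
    close to 1 for lam ~ 0 (flat), close to 0 for large lam (edge)."""
    T = np.array([[Txx, Txy], [Txy, Tyy]])
    lam, E = np.linalg.eigh(T)            # columns of E are the eigenvectors
    alpha = np.exp(-lam / m)
    return E @ np.diag(alpha) @ E.T

# i1D structure with the gradient along x (e.g. a vertical edge):
# large eigenvalue for e_x -> little diffusion across, full diffusion along
D = diffusion_tensor(Txx=4.0, Txy=0.0, Tyy=0.0)
```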

Anisotropic diffusion: summary.
1. At all points: compute a local orientation tensor T(x), and compute D(x) from T(x).
2. Apply anisotropic diffusion to the image by locally iterating

L(x,y;s+Δs) = L(x,y;s) + Δs ∇·(D ∇L)

The left-hand side, the change in L at (x,y) between s and s+Δs, defines how scale-space level L(x,y;s+Δs) is generated from L(x,y;s); the right-hand side can be computed locally at each point (x,y).

Implementation aspects. The anisotropic diffusion iterations can be done with a constant diffusion tensor field D(x), computed once from the original image (faster). Alternatively, D(x) can be re-computed between every iteration (slower).

Simplification. We assume D to have a slow variation with respect to x (cf. adaptive filtering). This means (see [EDUPACK ORIENTATION (22)])

∇·(D ∇L) ≈ trace(D H_L)

where H_L, the Hessian of L, contains the second-order derivatives of L.

Numerical implementation. Several numerical schemes for implementing anisotropic diffusion exist. The simplest one: replace all partial derivatives with finite differences. The Hessian of L can be approximated by convolving L with the stencils [1, −2, 1] (for ∂²/∂x²), its transpose (for ∂²/∂y²), and a 3×3 stencil with ±1/4 in the corners (for ∂²/∂x∂y).

Algorithm outline.
1. Set the parameters, e.g. the eigenvalue mapping parameter, the step length Δs, and the number of iterations.
2. Iterate:
   1. Compute the orientation tensor T.
   2. Modify its eigenvalues ⇒ D.
   3. Compute the Hessian H_L.
   4. Update L according to L ← L + Δs · trace(D H_L).

Comparison: inhomogeneous diffusion vs. anisotropic diffusion.

A note. The image f is never convolved with the space-variant anisotropic filter g; instead, the effect of g is generated incrementally based on the diffusion equation. In adaptive filtering we never convolve f with g_x either; instead, several fixed filters are applied to f and their results are combined in a non-linear way.

How to choose g_x? According to the discussion in the introduction, we choose g_x such that it contains a low-pass component that maintains the local image mean intensity (independent of x), and a high-pass component that depends on the local signal structure (dependent on x). Also, the resulting operation for computing h should be simple to implement and computationally efficient.
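The outline above can be sketched end to end. Everything specific here is an illustrative assumption: moving-average tensor smoothing, the eigenvalue map exp(−λ/m), a constant D(x) computed once from the original image (the faster variant), and the simplified update L ← L + Δs·trace(D H_L).

```python
import numpy as np

def smooth(a, r=2):
    """Separable moving-average smoothing (tensor averaging window)."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    t = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, a)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, t)

def anisotropic_diffusion(f, steps=40, ds=0.15, m=0.01):
    L = f.astype(float).copy()
    # 1. orientation tensor T(x) from the original image
    gy, gx = np.gradient(L)
    Txx, Txy, Tyy = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
    # 2. eigenvalues of the symmetric 2x2 tensor field, closed form
    half_tr = (Txx + Tyy) / 2
    disc = np.sqrt(np.maximum(half_tr**2 - (Txx * Tyy - Txy**2), 0.0))
    l1, l2 = half_tr + disc, half_tr - disc          # l1 >= l2
    a1, a2 = np.exp(-l1 / m), np.exp(-l2 / m)        # eigenvalue map
    # eigenvector e1 for l1 (normalized, with a fallback where degenerate)
    ex, ey = l1 - Tyy, Txy
    n = np.maximum(np.hypot(ex, ey), 1e-12)
    ex, ey = np.where(n > 1e-9, ex / n, 0.0), np.where(n > 1e-9, ey / n, 1.0)
    # D = a1 e1 e1^T + a2 e2 e2^T with e2 perpendicular to e1
    Dxx = a1 * ex**2 + a2 * ey**2
    Dyy = a1 * ey**2 + a2 * ex**2
    Dxy = (a1 - a2) * ex * ey
    # 3. iterate L <- L + ds * trace(D H_L) with finite-difference Hessian
    for _ in range(steps):
        Lp = np.pad(L, 1, mode='edge')
        Lxx = Lp[1:-1, 2:] - 2 * L + Lp[1:-1, :-2]
        Lyy = Lp[2:, 1:-1] - 2 * L + Lp[:-2, 1:-1]
        Lxy = (Lp[2:, 2:] + Lp[:-2, :-2] - Lp[2:, :-2] - Lp[:-2, 2:]) / 4
        L = L + ds * (Dxx * Lxx + 2 * Dxy * Lxy + Dyy * Lyy)
    return L

rng = np.random.default_rng(3)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
out = anisotropic_diffusion(noisy)
flat = (slice(8, 24), slice(8, 24))
```

Compared with the inhomogeneous (Perona-Malik) scheme, the diffusion near the edge still smooths along it, since only the across-edge eigenvalue of D is suppressed.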

Ansatz for g_x. We apply a filter that is given in the Fourier domain as G(u) = G_LP(‖u‖) + G_HP(u), where G_HP is polar separable: G_HP(u) = G(‖u‖)(ûᵀê)². It attenuates frequency components that are orthogonal to ê and maintains all frequency components that are parallel to ê.

How to implement g_x? We know [EDUPACK ORIENTATION (20)] that (ûᵀê)² = ûᵀ T û, where T(x) = ê êᵀ (assume A = 1!). Using an N-D tensor basis N_k = n̂_k n̂_kᵀ and its dual Ñ_k, we obtain T = Σ_k ⟨T, Ñ_k⟩ N_k.

How to implement g_x? Plug this into the expression for G_HP:

G_HP(u) = Σ_k ⟨T, Ñ_k⟩ G(‖u‖)(ûᵀn̂_k)²

where each factor G(‖u‖)(ûᵀn̂_k)² depends on u but not on ê, and each scalar ⟨T, Ñ_k⟩ depends on ê but not on u.

How to implement g_x? Consequently, the filter G_HP is a linear combination of N fixed filters, where each filter has the Fourier transform G_HP,k(u) = G(‖u‖)(ûᵀn̂_k)², independent of x, weighted by N scalars ⟨T(x), Ñ_k⟩ that depend on x.

How to implement g_x? Summarizing, the adaptive filter can be written as one fixed LP filter, N fixed HP filters, and N position-dependent scalars. If the filter is applied to a signal, we obtain

h(x) = (g_LP * f)(x) + Σ_k ⟨T(x), Ñ_k⟩ (g_HP,k * f)(x)

The LP and HP filter responses are standard convolutions; only the scalars are position dependent.

Outline: Adaptive filtering v.1.
1. Estimate the orientation tensor T(x) at each point x.
2. Apply a number of fixed filters to the image: one LP filter g_LP and the N HP filters g_HP,k.
3. At each point x: (1) compute the N scalars; (2) form the linear combination of the N HP filter responses and the N scalars, and add the LP filter response.
4. At each point x, the result is the filter response h(x) of the locally adapted filter g_x.

The filter g_x is also called a steerable filter.
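A much-simplified sketch of adaptive filtering v.1. Instead of the lecture's rotated high-pass filter set and tensor basis N_k, it uses N = 2 axis-aligned high-pass components and weights from the normalized structure tensor; it demonstrates the mechanics (fixed convolutions plus position-dependent scalars), not the exact filter set.

```python
import numpy as np

def smooth_axis(a, axis, r=3):
    """Moving-average smoothing along one axis."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), axis, a)

def adaptive_filter(f, r=3, eps=1e-9):
    # fixed filter responses, computed once by standard convolution
    lp = smooth_axis(smooth_axis(f, 0, r), 1, r)   # the LP component
    hp_x = smooth_axis(f, 0, r) - lp   # high-pass in x, low-pass in y
    hp_y = smooth_axis(f, 1, r) - lp   # high-pass in y, low-pass in x
    # position-dependent scalars from the local structure tensor
    gy, gx = np.gradient(f)
    Txx = smooth_axis(smooth_axis(gx * gx, 0, r), 1, r)
    Tyy = smooth_axis(smooth_axis(gy * gy, 0, r), 1, r)
    cx = Txx / (Txx + Tyy + eps)       # keep x-variation where it dominates
    cy = Tyy / (Txx + Tyy + eps)
    # linear combination: LP response plus weighted HP responses
    return lp + cx * hp_x + cy * hp_y

# i1D test pattern varying along x, plus noise
x = np.arange(64)
clean = np.sin(0.8 * x)[None, :] * np.ones((64, 1))
rng = np.random.default_rng(4)
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
h = adaptive_filter(noisy)
lp_only = smooth_axis(smooth_axis(noisy, 0), 1)
inner = (slice(10, 54), slice(10, 54))
```

On the i1D pattern the weights select the along-structure smoothing, so the pattern survives almost intact while the isotropic LP filter alone nearly flattens it.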

Observation. T can be estimated for any image dimension, and the filters g_LP and g_HP,k can be formulated for any image dimension ⇒ the method can be implemented for any dimension of the signal (2D, 3D, 4D, ...).

Remaining questions. 1. What happens in regions that are not i1D, i.e., if T does not have rank 1? 2. What happens if A ≠ 1? 3. How to choose the radial function G?

Non-i1D signals. The tensor's eigenvectors with non-zero eigenvalues span the subspace of the Fourier domain that contains the signal's energy. Equivalently: for a given local region with orientation tensor T, let û define an arbitrary orientation; the product ûᵀTû is a measure of how much energy the region contains in this orientation. This means that the adaptive filtering should work in general, even if the signal is non-i1D.

How about A ≠ 1? Previously we assumed A = 1, but normally A depends on the local amplitude of the signal, i.e., on x. In order to achieve A = 1, T must be pre-processed; the resulting tensor is called the control tensor C. Replace T with C in all previous expressions!

Pre-processing of T. The filter g_x is supposed to vary slowly with x, but T contains high-frequency noise that stems from the image noise. This noise can be reduced by an initial LP-filtering of T (i.e., of its elements); the result is denoted T_LP.

Pre-processing of T: modification of the eigenvalues. T_LP must be normalized (similarly to the diffusion tensor D): compute the eigen-decomposition of T_LP and form C with the same eigenvectors as T_LP but with modified eigenvalues.

Modification of the eigenvalues: the radial function G. G should mainly be equal to 1, and should tend to 0 toward the band limit. Together with the LP filter g_LP this yields an all-pass filter: the radial part of G_LP plus the radial part of G_HP equals 1.

The adaptive filter in 2D: examples of G(u) for different C(x).

Outline: Adaptive filtering v.2.
1. Estimate the local tensor at each image point: T(x).
2. LP-filter the tensor: T_LP(x).
3. At each image point:
   1. Compute the eigenvalues and eigenvectors of T_LP(x).
   2. Map the eigenvalues λ_k to γ_k.
   3. Re-combine γ_k and the eigenvectors to form the control tensor C.
   4. Compute the scalars ⟨C, Ñ_k⟩ for all k = 1, ..., N.
4. Filter the image with g_LP and the N HP filters g_HP,k.
5. At each image point: form the linear combination of the filter responses and the scalars.

Example. (Result images for adaptive filtering.)

An iterative method. Adaptive filtering can be iterated to reduce the noise further. If the filter size is reduced at the same time, a close-to-continuous transition (an evolution) is achieved. This is closely related to the previous method for image enhancement: anisotropic diffusion.

www.liu.se
Michael Felsberg, michael.felsberg@liu.se