Mathematical Beer Goggles or The Mathematics of Image Processing


Mathematical Beer Goggles, or The Mathematics of Image Processing. Department of Mathematical Sciences, University of Bath. Postgraduate Seminar Series, University of Bath, 12th February 2008.

Grayscale. A grayscale image is a two-dimensional matrix, e.g.

    X = [ 0 0 0 0 0 0 0 0 0
          0 8 8 0 4 4 0 2 0
          0 8 8 0 4 4 0 2 0
          0 8 8 0 4 4 0 2 0
          0 8 8 0 4 4 0 2 0
          0 0 0 0 0 0 0 0 0 ]

displayed with imagesc(X), colormap(gray).

MATLAB image. Figure: the matrix X rendered as a grayscale image.

RGB. A colour image is a three-dimensional matrix, one two-dimensional slice per colour channel:

    X(:,:,1) = [ 0 0 0 0 0 0 0 0 0        X(:,:,2) = [ 0 0 0 0 0 0 0 0 0        X(:,:,3) = [ 0 0 0 0 0 0 0 0 0
                 0 1 1 0 1 1 0 0 0                     0 0 0 0 1 1 0 0 0                     0 0 0 0 0 0 0 1 0
                 0 1 1 0 1 1 0 0 0                     0 0 0 0 1 1 0 0 0                     0 0 0 0 0 0 0 1 0
                 0 0 0 0 0 0 0 0 0 ]                   0 0 0 0 0 0 0 0 0 ]                   0 0 0 0 0 0 0 0 0 ]

displayed with imagesc(X).
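A minimal MATLAB sketch (my addition, not from the talk) that builds this 4-by-9-by-3 array and displays it; image() renders a double array with entries in [0,1] directly as RGB:

    X = zeros(4, 9, 3);
    X(2:3, 2:3, 1) = 1;        % red block: channel 1 only
    X(2:3, 5:6, 1) = 1;        % red plus ...
    X(2:3, 5:6, 2) = 1;        % ... green makes a yellow block
    X(2:3, 8, 3) = 1;          % blue pixels: channel 3 only
    image(X); axis image off   % render the 3-D array as an RGB picture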

MATLAB image input/output: X = imread('pic.jpg'), imwrite(X, 'pic.jpg').


The singular value decomposition (SVD): existence and uniqueness. Let X ∈ C^{m×n} with m ≥ n. Then

    X [v_1 v_2 ... v_n] = [u_1 u_2 ... u_m] Σ,   i.e.   X = U Σ V^T,

where U^T U = I (the columns of U are called the left singular vectors), V^T V = I (the columns of V are the right singular vectors), and Σ = diag(σ_1, ..., σ_n) holds the singular values, ordered so that σ_1 ≥ σ_2 ≥ ... ≥ σ_n ≥ 0.
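A minimal MATLAB sketch (my addition, not from the talk) checking these properties numerically:

    X = rand(6, 4);            % any m-by-n matrix with m >= n
    [U, S, V] = svd(X);        % X = U*S*V'
    norm(X - U*S*V')           % reconstruction error ~ machine precision
    norm(U'*U - eye(6))        % columns of U are orthonormal
    diag(S)'                   % singular values, in decreasing order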

Low-rank approximations.

Theorem (the rank of a matrix). The rank of X is r, the number of nonzero singular values in

    X = U Σ V^T = U diag(σ_1, σ_2, ..., σ_r, 0, ..., 0) V^T.

Theorem (another representation). X is the sum of r rank-one matrices:

    X = Σ_{j=1}^r σ_j u_j v_j^T.

Theorem (best rank-ν approximation). For any ν with 0 ≤ ν ≤ r, define

    X_ν = Σ_{j=1}^ν σ_j u_j v_j^T.

Then

    ||X − X_ν||_2 = inf_{B ∈ C^{m×n}, rank(B) ≤ ν} ||X − B||_2 = σ_{ν+1}.

Proof (for m = n). First,

    ||X − X_ν||_2 = || Σ_{j=ν+1}^r σ_j u_j v_j^T ||_2 = || U diag(0, ..., 0, σ_{ν+1}, ..., σ_n) V^T ||_2 = σ_{ν+1}.

It remains to show that there is no closer rank-ν matrix to X. Let B have rank ν. The null space of B has dimension n − ν and span{v_1, ..., v_{ν+1}} has dimension ν + 1, so the two subspaces intersect nontrivially. Let h be a unit vector in their intersection. Since h lies in the span of v_1, ..., v_{ν+1}, the vector V^T h is supported on its first ν + 1 entries, where the singular values are at least σ_{ν+1}, and ||V^T h||_2 = 1. Hence

    ||X − B||_2^2 ≥ ||(X − B)h||_2^2 = ||Xh||_2^2 = ||U Σ V^T h||_2^2 = ||Σ (V^T h)||_2^2 ≥ σ_{ν+1}^2 ||V^T h||_2^2 = σ_{ν+1}^2.
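The theorem is easy to check numerically; a small sketch (my addition):

    X = rand(8, 5);
    [U, S, V] = svd(X);
    nu = 2;                                      % target rank
    Xnu = U(:,1:nu) * S(1:nu,1:nu) * V(:,1:nu)'; % truncated SVD
    [norm(X - Xnu), S(nu+1, nu+1)]               % the two values agree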

Example: m = 604, n = 453, so mn = 273612 pixels. Storing the rank-ν approximation X_ν requires only the ν vectors u_j and v_j, of lengths m and n, giving the compression ratio

    c := (m + n)ν / (mn).

Rank-ν approximations of the example image (m + n = 1057):

    ν      c = (m + n)ν / (mn)
    1      3.8631e-03
    2      7.7263e-03
    3      1.1589e-02
    4      1.5453e-02
    5      1.9316e-02
    10     3.8631e-02
    20     7.7263e-02
    30     0.11589
    40     0.15453
    60     0.23179
    80     0.30905

Figures: the rank-ν reconstructions of the test image for each ν.
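A sketch of this experiment (my addition; 'pic.jpg' is an illustrative file name, and the image is assumed to be single-channel grayscale):

    X = double(imread('pic.jpg'));       % assumed grayscale
    [m, n] = size(X);
    [U, S, V] = svd(X);
    for nu = [1 2 3 4 5 10 20 30 40 60 80]
        Xnu = U(:,1:nu) * S(1:nu,1:nu) * V(:,1:nu)';
        fprintf('rank %2d: c = %.4e\n', nu, (m+n)*nu/(m*n));
        imagesc(Xnu), colormap(gray), drawnow
    end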


Blurred and exact. Let X be the exact image and B the blurred image. If the blurring of the columns is independent of the blurring of the rows, then

    A_c X A_r^T = B,    A_c ∈ R^{m×m},  A_r ∈ R^{n×n}.

First attempt at deblurring:

    X_Naive = A_c^{-1} B A_r^{-T}.
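A minimal sketch of this model (my addition; the Gaussian Toeplitz blur is an illustrative choice, not the talk's test problem):

    m = 64; n = 64; s = 2;
    Ac = toeplitz(exp(-((0:m-1).^2) / (2*s^2)));  % column-blur matrix
    Ar = toeplitz(exp(-((0:n-1).^2) / (2*s^2)));  % row-blur matrix
    X = zeros(m, n); X(20:44, 26:38) = 1;         % simple exact image
    B = Ac * X * Ar';                             % blurred image
    Xnaive = Ac \ B / Ar';   % X_Naive = Ac^{-1} B Ar^{-T}; recovers X in exact arithmetic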

First attempt at deblurring. Figures: the blurred image and the naive reconstruction X_Naive = A_c^{-1} B A_r^{-T}.

What is the problem? A noisy blurred image:

    B = B_exact + E = A_c X A_r^T + E,

and therefore

    X_Naive = X + A_c^{-1} E A_r^{-T}.

Error. The naive solution satisfies

    ||X_Naive − X||_F / ||X||_F ≤ cond(A_c) cond(A_r) ||E||_F / ||B||_F.
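Continuing the sketch above (my addition): even tiny noise, and indeed even rounding error, is amplified catastrophically, because cond(A_c) cond(A_r) is huge for a smooth blur:

    E = 1e-8 * randn(size(B));                 % tiny additive noise
    Xnaive = Ac \ (B + E) / Ar';
    cond(Ac) * cond(Ar)                        % enormous amplification factor
    norm(Xnaive - X, 'fro') / norm(X, 'fro')   % relative error blows up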

Deblurring using a general model: the blurring process as a linear model. We assume the blurring process is linear, i.e.

    x = vec(X) = [x_1; ...; x_N] ∈ R^N,    b = vec(B) = [b_1; ...; b_N] ∈ R^N,    N = mn,

are related by the linear model Ax = b. With noisy data b = b_exact + e,

    x_Naive = A^{-1} b = A^{-1} b_exact + A^{-1} e = x + A^{-1} e.

Separable two-dimensional blurs: the Kronecker product. If horizontal and vertical blurring can be separated, then

    Ax = b   ⇔   A vec(X) = vec(B) = vec(A_c X A_r^T),

and since (A_r ⊗ A_c) vec(X) = vec(A_c X A_r^T), the blurring matrix is A = A_r ⊗ A_c, where

    A_r ⊗ A_c = [ a^r_{11} A_c  ...  a^r_{1n} A_c
                       ...
                  a^r_{n1} A_c  ...  a^r_{nn} A_c ].

The SVD also factors over the Kronecker product:

    (U_r Σ_r V_r^T) ⊗ (U_c Σ_c V_c^T) = (U_r ⊗ U_c)(Σ_r ⊗ Σ_c)(V_r ⊗ V_c)^T.
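A quick numerical check of the vec identity (my addition):

    Ac = rand(3); Ar = rand(4); X = rand(3, 4);
    lhs = reshape(Ac * X * Ar', [], 1);   % vec(Ac X Ar^T)
    rhs = kron(Ar, Ac) * X(:);            % (Ar kron Ac) vec(X)
    norm(lhs - rhs)                       % ~ machine precision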

Figure: the naive solution x_Naive = x + A^{-1} e.


Taking bad pictures.

Sources of bad pictures:
- defocus of the camera lens (limitations in the optical system)
- motion blur
- air turbulence
- atmospheric blurring

Noise E:
- background photons from natural and artificial sources
- signal represented by a finite number of bits (quantisation error)

Modelling the blurring matrix A. A single bright pixel x = e_i gives A e_i = column i of A.

Figure: point source (single bright pixel). Figure: point spread function (PSF).

Modelling the blurring matrix A. Figure: motion blur. Figure: out-of-focus blur, with PSF

    p_{ij} = 1/(π r^2)   if (i − k)^2 + (j − l)^2 ≤ r^2,
             0           otherwise,

centred at pixel (k, l).

Modelling the blurring matrix A. Figure: atmospheric turbulence blur, modelled by a Gaussian PSF

    p_{ij} = exp( −(1/2) [i − k, j − l] [s_1^2 ρ^2; ρ^2 s_2^2]^{-1} [i − k, j − l]^T )

or by the heavier-tailed variant

    p_{ij} = ( 1 + [i − k, j − l] [s_1^2 ρ^2; ρ^2 s_2^2]^{-1} [i − k, j − l]^T )^{−β}.
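A minimal sketch constructing these PSFs on a pixel grid (my addition; grid size, centre and parameters are illustrative):

    [J, I] = meshgrid(1:32, 1:32); k = 16; l = 16;   % pixel grid, centre (k,l)
    r = 5;                                           % out-of-focus radius
    Pdisc = ((I-k).^2 + (J-l).^2 <= r^2) / (pi*r^2); % disc PSF
    s1 = 3; s2 = 2; rho = 1;                         % Gaussian parameters
    C = inv([s1^2 rho^2; rho^2 s2^2]);
    Q = C(1,1)*(I-k).^2 + 2*C(1,2)*(I-k).*(J-l) + C(2,2)*(J-l).^2;
    Pgauss = exp(-Q/2); Pgauss = Pgauss / sum(Pgauss(:));  % normalised Gaussian PSF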

Boundary conditions and structured matrix computations. The matrix A obtained from the PSF P by convolution inherits its structure from the boundary conditions:
- zero boundary conditions → block Toeplitz matrix
- periodic boundary conditions → block circulant matrix
- reflexive boundary conditions → sum of block Toeplitz and block Hankel matrices with Toeplitz-plus-Hankel blocks
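With periodic boundary conditions the matrix is diagonalised by the FFT, which is what makes these structures computationally attractive; a 1-D analogue (my addition, not from the talk):

    n = 256; s = 2;
    d = min(0:n-1, n:-1:1);                      % circular distances
    a = exp(-d'.^2 / (2*s^2)); a = a / sum(a);   % first column of circulant A
    x = zeros(n, 1); x(100:160) = 1;             % test signal
    b = real(ifft(fft(a) .* fft(x)));            % A*x via circular convolution
    xrec = real(ifft(fft(b) ./ fft(a)));         % naive deblurring in O(n log n)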

Spectral filtering: the SVD again. With

    A = U Σ V^T = [u_1 u_2 ... u_N] diag(σ_1, ..., σ_N) [v_1 ... v_N]^T

we have

    x_Naive = A^{-1} b = V Σ^{-1} U^T b = Σ_{i=1}^N (u_i^T b / σ_i) v_i
            = Σ_{i=1}^N (u_i^T b_exact / σ_i) v_i + Σ_{i=1}^N (u_i^T e / σ_i) v_i.

Spectral filtering. Behaviour of the singular values:
- σ_i → 0 as i grows
- the more blurry the image, the faster the decay rate
- cond(A) = σ_1 / σ_N

The regularised solution. Introduce filter factors Φ_i:

    x_filt = Σ_{i=1}^N Φ_i (u_i^T b / σ_i) v_i.

Two methods.

TSVD:

    Φ_i = 1 for i = 1, ..., k;    Φ_i = 0 for i = k + 1, ..., N.

Tikhonov regularisation:

    Φ_i = σ_i^2 / (σ_i^2 + α^2),

where α > 0 is a regularisation parameter. This choice of filter factor yields the solution of the minimisation problem

    min_x { ||b − Ax||_2^2 + α^2 ||x||_2^2 }.
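A minimal sketch of both filters via the SVD (my addition; the 1-D Gaussian Toeplitz blur is an illustrative stand-in for A):

    n = 128; s = 2;
    A = toeplitz(exp(-((0:n-1).^2) / (2*s^2)));   % illustrative blur matrix
    x = zeros(n, 1); x(40:80) = 1;                % exact signal
    e = 1e-4 * randn(n, 1); b = A*x + e;          % noisy blurred data
    [U, S, V] = svd(A); sig = diag(S); beta = U' * b;
    k = 30;                                       % TSVD truncation index
    x_tsvd = V(:,1:k) * (beta(1:k) ./ sig(1:k));  % keep k largest sigma_i
    alpha = 1e-3;                                 % Tikhonov parameter
    phi = sig.^2 ./ (sig.^2 + alpha^2);           % Tikhonov filter factors
    x_tik = V * (phi .* beta ./ sig);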

Regularisation and perturbation errors. The regularised solution satisfies

    x_filt = V Φ Σ^{-1} U^T b = V Φ Σ^{-1} U^T A x_exact + V Φ Σ^{-1} U^T e
           = V Φ V^T x_exact + V Φ Σ^{-1} U^T e,

so that

    x_exact − x_filt = (I − V Φ V^T) x_exact − V Φ Σ^{-1} U^T e,

where the first term is the regularisation error and the second the perturbation error.

Oversmoothing and undersmoothing:
- small regularisation error, large perturbation error: undersmoothed solution
- large regularisation error, small perturbation error: oversmoothed solution

Parameter choice methods:
- Discrepancy Principle
- Generalised Cross Validation
- L-Curve Criterion
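As one example, a sketch of the discrepancy principle (my addition, continuing the Tikhonov sketch above; it assumes the noise level ||e|| is known or estimated): choose the largest α whose residual matches the noise level, with a safety factor τ slightly above 1.

    tau = 1.1;
    for alpha = logspace(0, -8, 100)         % scan alpha from large to small
        phi = sig.^2 ./ (sig.^2 + alpha^2);
        xa = V * (phi .* beta ./ sig);
        if norm(b - A*xa) <= tau * norm(e), break, end
    end
    alpha                                    % chosen regularisation parameter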

A second attempt at deblurring. Figure: the naive reconstruction X_Naive = A_c^{-1} B A_r^{-T}, for comparison with the filtered solutions below.

Filtered solution using TSVD. Figure: k = 4801, N = 83000. Figure: k = 6630, N = 238650.

Tikhonov regularisation. Figure: α = 0.0276. Figure: α = 0.0137.

P. C. Hansen, J. G. Nagy, and D. P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, SIAM, Philadelphia, 1st ed., 2006.