Mathematical Beer Goggles, or The Mathematics of Image Processing
Department of Mathematical Sciences, University of Bath
Postgraduate Seminar Series, University of Bath, 12th February 2008
Outline
Grayscale: a two-dimensional matrix, one intensity value per pixel

X =
[0 0 0 0 0 0 0 0 0
 0 8 8 0 4 4 0 2 0
 0 8 8 0 4 4 0 2 0
 0 8 8 0 4 4 0 2 0
 0 8 8 0 4 4 0 2 0
 0 0 0 0 0 0 0 0 0]

Displayed with imagesc(X), colormap(gray)
MATLAB image. Figure: the matrix X rendered as a grayscale image.
RGB: a three-dimensional array, one matrix (layer) per colour channel

X(:, :, 1) =
[0 0 0 0 0 0 0 0 0
 0 1 1 0 1 1 0 0 0
 0 1 1 0 1 1 0 0 0
 0 0 0 0 0 0 0 0 0]

X(:, :, 2) =
[0 0 0 0 0 0 0 0 0
 0 0 0 0 1 1 0 0 0
 0 0 0 0 1 1 0 0 0
 0 0 0 0 0 0 0 0 0]

X(:, :, 3) =
[0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 1 0
 0 0 0 0 0 0 0 1 0
 0 0 0 0 0 0 0 0 0]

Displayed with imagesc(X)
MATLAB images are read and written with X = imread('pic.jpg') and imwrite(X, 'pic.jpg').
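The slides use MATLAB; as a minimal NumPy sketch of the same idea, the grayscale example above is just a 2-D array, and an RGB image adds a third axis (the example matrix is taken from the slide; the small red block in the RGB array is illustrative):

```python
import numpy as np

# The 6x9 grayscale matrix from the slide: one intensity value per pixel.
X = np.array([
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 8, 8, 0, 4, 4, 0, 2, 0],
    [0, 8, 8, 0, 4, 4, 0, 2, 0],
    [0, 8, 8, 0, 4, 4, 0, 2, 0],
    [0, 8, 8, 0, 4, 4, 0, 2, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
])
print(X.shape)        # (6, 9)

# An RGB image adds a third axis: one 2-D layer per colour channel.
rgb = np.zeros((6, 9, 3))
rgb[1:5, 1:3, 0] = 1  # a red block (illustrative values)
print(rgb.shape)      # (6, 9, 3)
```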
How 1 2 How 3 4 5
The singular value decomposition (SVD)

Existence and Uniqueness. Let X ∈ C^{m×n}, m ≥ n. Then

X [v_1 v_2 … v_n] = [u_1 u_2 … u_m] [diag(σ_1, σ_2, …, σ_n); 0],  or  X = UΣV^T,

where U^T U = I (the columns of U are called the left singular vectors), V^T V = I (the columns of V are the right singular vectors), and Σ = diag(σ_1, …, σ_n) contains the singular values, ordered so that σ_1 ≥ σ_2 ≥ … ≥ σ_n ≥ 0.
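The stated properties can be checked numerically; here is a small NumPy sketch (random test matrix, thin SVD) verifying orthonormal singular vectors and ordered singular values:

```python
import numpy as np

# Sketch: SVD of a random matrix with m >= n, checking the properties above.
rng = np.random.default_rng(0)
m, n = 8, 5
X = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # thin SVD: U is m x n
assert np.allclose(U.T @ U, np.eye(n))            # orthonormal left singular vectors
assert np.allclose(Vt @ Vt.T, np.eye(n))          # orthonormal right singular vectors
assert np.all(s[:-1] >= s[1:]) and s[-1] >= 0     # sigma_1 >= ... >= sigma_n >= 0
assert np.allclose(U @ np.diag(s) @ Vt, X)        # X = U Sigma V^T
```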
Low-rank approximations

Theorem (The rank of a matrix). The rank of X is r, the number of nonzero singular values:

X = UΣV^T = U diag(σ_1, σ_2, …, σ_r, 0, …, 0) V^T.

Theorem (Another representation). X is the sum of r rank-one matrices:

X = Σ_{j=1}^r σ_j u_j v_j^T.
Low-rank approximations

Theorem. For any ν with 0 ≤ ν ≤ r, define

X_ν = Σ_{j=1}^ν σ_j u_j v_j^T.

Then

‖X − X_ν‖_2 = inf_{B ∈ C^{m×n}, rank(B) ≤ ν} ‖X − B‖_2 = σ_{ν+1}.
Low-rank approximations

Proof (m = n).

‖X − X_ν‖_2 = ‖ Σ_{j=ν+1}^r σ_j u_j v_j^T ‖_2 = ‖ U diag(0, …, 0, σ_{ν+1}, …, σ_n) V^T ‖_2 = σ_{ν+1}.

It remains to show that there is no closer rank-ν matrix to X. Let B have rank ≤ ν. The null space of B has dimension n − ν, and span{v_1, …, v_{ν+1}} has dimension ν + 1, so the two subspaces intersect nontrivially. Let h be a unit vector in their intersection. Then

‖X − B‖_2² ≥ ‖(X − B)h‖_2² = ‖Xh‖_2² = ‖UΣV^T h‖_2² = ‖Σ(V^T h)‖_2² ≥ σ_{ν+1}² ‖V^T h‖_2² = σ_{ν+1}².
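The theorem is easy to verify numerically: truncating the SVD after ν terms gives a 2-norm error of exactly σ_{ν+1}. A small NumPy sketch (random test matrix, ν = 3 chosen arbitrarily):

```python
import numpy as np

# Sketch: the rank-nu truncated SVD is the best rank-nu approximation
# in the 2-norm, with error sigma_{nu+1}.
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
nu = 3
X_nu = U[:, :nu] @ np.diag(s[:nu]) @ Vt[:nu, :]   # sum of nu rank-one terms
err = np.linalg.norm(X - X_nu, 2)                 # matrix 2-norm of the residual
assert np.allclose(err, s[nu])                    # ||X - X_nu||_2 = sigma_{nu+1}
```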
Example. m = 604, n = 453, so storing X requires mn = 273612 entries, while a rank-ν approximation needs only ν(m + n) numbers. Compression ratio: c := (m + n)ν / (mn).
Rank-ν approximations (m = 604, n = 453, m + n = 1057), with compression ratio c := (m + n)ν / (mn):

ν = 1:  c = 3.8631e−03
ν = 2:  c = 7.7263e−03
ν = 3:  c = 1.1589e−02
ν = 4:  c = 1.5453e−02
ν = 5:  c = 1.9316e−02
ν = 10: c = 3.8631e−02
ν = 20: c = 7.7263e−02
ν = 30: c = 0.11589
ν = 40: c = 0.15453
ν = 60: c = 0.23179
ν = 80: c = 0.30905

(Figures: the reconstructed image at each rank.)
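The quoted ratios follow directly from the definition; a short Python check for the 604 × 453 image:

```python
# Sketch: reproduce the compression ratios c = (m + n) * nu / (m * n)
# quoted on the slides for the 604 x 453 image.
m, n = 604, 453
def c(nu):
    return (m + n) * nu / (m * n)

for nu in (1, 2, 3, 10, 80):
    print(nu, c(nu))

assert abs(c(1) - 3.8631e-03) < 1e-6
assert abs(c(80) - 0.30905) < 1e-5
```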
Blurred and exact images

Let X be the exact image and B the blurred image. If the blurring of the columns is independent of the blurring of the rows, then

A_c X A_r^T = B,  A_c ∈ R^{m×m},  A_r ∈ R^{n×n}.

First attempt at deblurring: X_Naive = A_c^{-1} B A_r^{-T}.
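As a minimal NumPy sketch of this separable model (the tridiagonal blur matrices below are an assumed toy example, not the deck's actual PSF), the naive inverse recovers X exactly when B is noise-free:

```python
import numpy as np

# Sketch of the separable blur model A_c X A_r^T = B and its naive inverse,
# using small, well-conditioned 1-D blur matrices (assumed, for illustration).
def blur_matrix(k, w=0.5):
    # simple 1-D blur: each pixel mixed with its immediate neighbours
    A = np.eye(k)
    for i in range(k - 1):
        A[i, i + 1] = A[i + 1, i] = w
    return A

m, n = 6, 5
rng = np.random.default_rng(2)
X = rng.random((m, n))
Ac, Ar = blur_matrix(m), blur_matrix(n)
B = Ac @ X @ Ar.T                                    # blur columns, then rows
X_naive = np.linalg.solve(Ac, B) @ np.linalg.inv(Ar).T
assert np.allclose(X_naive, X)                       # exact recovery: B is noise-free
```

With real images the catch, as the next slides show, is that B is never noise-free and A_c, A_r are badly conditioned.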
First attempt at deblurring. Figure: X_Naive = A_c^{-1} B A_r^{-T}.
What is the problem? A noisy blurred image:

B = B_exact + E = A_c X A_r^T + E,

and therefore

X_Naive = X + A_c^{-1} E A_r^{-T}.

Error. The naive solution satisfies

‖X_Naive − X‖_F / ‖X‖_F ≤ cond(A_c) cond(A_r) ‖E‖_F / ‖B‖_F.
Deblurring using a general model

The blurring process as a linear model. We assume the blurring process is linear, i.e.

x = vec(X) = [x_1; …; x_N] ∈ R^N,  b = vec(B) = [b_1; …; b_N] ∈ R^N,  N = mn,

are related by the linear model Ax = b. With noise, b = b_exact + e, and

x_Naive = A^{-1} b = A^{-1} b_exact + A^{-1} e = x + A^{-1} e.
Separable two-dimensional blurs: the Kronecker product

If the horizontal and vertical blurring can be separated, then Ax = b becomes

A vec(X) = vec(B) = vec(A_c X A_r^T),

and since (A_r ⊗ A_c) vec(X) = vec(A_c X A_r^T), the blurring matrix is the Kronecker product

A = A_r ⊗ A_c = [a^r_{11} A_c … a^r_{1n} A_c; … ; a^r_{n1} A_c … a^r_{nn} A_c].

Moreover, the SVDs combine:

(U_r Σ_r V_r^T) ⊗ (U_c Σ_c V_c^T) = (U_r ⊗ U_c)(Σ_r ⊗ Σ_c)(V_r ⊗ V_c)^T.
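The key Kronecker identity above can be checked directly in NumPy (random test matrices; note that vec stacks columns, i.e. Fortran order):

```python
import numpy as np

# Sketch: verify (A_r kron A_c) vec(X) = vec(A_c X A_r^T),
# where vec(.) stacks the columns of its argument.
rng = np.random.default_rng(3)
m, n = 4, 3
Ac, Ar = rng.random((m, m)), rng.random((n, n))
X = rng.random((m, n))

def vec(M):
    return M.flatten(order="F")          # column stacking

lhs = np.kron(Ar, Ac) @ vec(X)
rhs = vec(Ac @ X @ Ar.T)
assert np.allclose(lhs, rhs)
```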
Figure: the naive solution x_Naive = x + A^{-1} e.
Taking bad pictures

Sources of blur:
- defocus of the camera lens (limitations in the optical system)
- motion blur
- air turbulence
- atmospheric blurring

Sources of noise E:
- background photons from natural or artificial sources
- the signal being represented by a finite number of bits (quantisation error)
Modelling the blurring matrix A

A single bright pixel x = e_i gives Ae_i = column i of A.

Figure: Point source (single bright pixel). Figure: Point spread function (PSF).
Modelling the blurring matrix A

Figure: Motion blur. Figure: Out-of-focus blur, modelled by

p_{ij} = 1/(πr²) if (i − k)² + (j − l)² ≤ r²,  and  p_{ij} = 0 otherwise.
Modelling the blurring matrix A

Figure: Atmospheric turbulence blur, modelled by the Gaussian PSF

p_{ij} = exp( −½ [i − k, j − l] [s_1² ρ²; ρ² s_2²]^{-1} [i − k, j − l]^T ),

or by the related PSF

p_{ij} = ( 1 + [i − k, j − l] [s_1² ρ²; ρ² s_2²]^{-1} [i − k, j − l]^T )^{−β}.
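A NumPy sketch of the Gaussian PSF above; the parameters s1, s2, the choice ρ = 0 (axis-aligned blur), and the normalisation to unit sum are assumptions for illustration:

```python
import numpy as np

# Sketch: build the Gaussian (atmospheric turbulence) PSF from the slide.
def gaussian_psf(size, center, s1, s2, rho=0.0):
    k, l = center
    Cinv = np.linalg.inv(np.array([[s1**2, rho**2], [rho**2, s2**2]]))
    i, j = np.mgrid[0:size, 0:size]
    d = np.stack([i - k, j - l], axis=-1)           # offsets (i - k, j - l)
    q = np.einsum("...a,ab,...b->...", d, Cinv, d)  # quadratic form per pixel
    P = np.exp(-0.5 * q)
    return P / P.sum()                              # normalise to unit sum (assumption)

P = gaussian_psf(9, (4, 4), s1=1.5, s2=1.5)
assert np.allclose(P, P.T)          # symmetric when s1 = s2 and rho = 0
assert P.argmax() == 4 * 9 + 4      # brightest pixel at the centre
```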
Boundary conditions and structured matrix computations

Boundary conditions: zero, periodic, or reflexive. The matrix A obtained from the PSF P by convolution is then, respectively:
- zero boundary conditions: a block Toeplitz matrix
- periodic boundary conditions: a block circulant matrix
- reflexive boundary conditions: a sum of block Toeplitz and block Hankel matrices with Toeplitz-plus-Hankel blocks
Spectral filtering

The SVD again. With

A = UΣV^T = [u_1 u_2 … u_N] diag(σ_1, σ_2, …, σ_N) [v_1 v_2 … v_N]^T

we have

x_Naive = A^{-1} b = V Σ^{-1} U^T b = Σ_{i=1}^N (u_i^T b / σ_i) v_i = Σ_{i=1}^N (u_i^T b_exact / σ_i) v_i + Σ_{i=1}^N (u_i^T e / σ_i) v_i.
Spectral filtering

Behaviour of the singular values:
- σ_i → 0 as i grows
- the more blurry the image, the faster the decay rate
- cond(A) = σ_1/σ_N is therefore very large

The regularised solution. Introduce filter factors Φ_i:

x_filt = Σ_{i=1}^N Φ_i (u_i^T b / σ_i) v_i.
Two methods

TSVD: Φ_i = 1 for i = 1, …, k and Φ_i = 0 for i = k + 1, …, N.

Tikhonov regularisation: Φ_i = σ_i² / (σ_i² + α²), where α > 0 is a regularisation parameter. This choice of filter factors yields the solution of the minimisation

min_x { ‖b − Ax‖_2² + α² ‖x‖_2² }.
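Both filters can be sketched in a few lines of NumPy on a toy ill-conditioned system (the constructed singular values, noise level, k, and α below are assumptions for illustration, not values from the slides):

```python
import numpy as np

# Sketch: TSVD and Tikhonov spectral filters vs. the naive inverse
# on a toy ill-conditioned system.
rng = np.random.default_rng(4)
N = 10
Q1, _ = np.linalg.qr(rng.standard_normal((N, N)))
Q2, _ = np.linalg.qr(rng.standard_normal((N, N)))
sigma = 10.0 ** -np.arange(N)                 # rapidly decaying singular values
A = Q1 @ np.diag(sigma) @ Q2.T
x_exact = rng.random(N)
e = 1e-4 * Q1.sum(axis=1)                     # noise with |u_i^T e| = 1e-4 for every i
b = A @ x_exact + e

U, s, Vt = np.linalg.svd(A)

def filtered(phi):
    # x_filt = sum_i phi_i * (u_i^T b / sigma_i) * v_i
    return Vt.T @ (phi * (U.T @ b) / s)

k = 6
phi_tsvd = (np.arange(N) < k).astype(float)   # TSVD: keep the first k components
alpha = 1e-2
phi_tik = s**2 / (s**2 + alpha**2)            # Tikhonov filter factors

err_naive = np.linalg.norm(filtered(np.ones(N)) - x_exact)
err_tsvd = np.linalg.norm(filtered(phi_tsvd) - x_exact)
err_tik = np.linalg.norm(filtered(phi_tik) - x_exact)
assert err_tsvd < err_naive and err_tik < err_naive   # filtering beats the naive inverse
```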
Regularisation and perturbation errors

Regularised solution:

x_filt = V Φ Σ^{-1} U^T b = V Φ Σ^{-1} U^T A x_exact + V Φ Σ^{-1} U^T e = V Φ V^T x_exact + V Φ Σ^{-1} U^T e,

so

x_exact − x_filt = (I − V Φ V^T) x_exact − V Φ Σ^{-1} U^T e,

where the first term is the regularisation error and the second the perturbation error.

Oversmoothing and undersmoothing:
- a small regularisation error with a large perturbation error leads to an undersmoothed solution
- a large regularisation error with a small perturbation error leads to an oversmoothed solution
Regularisation and perturbation errors

Parameter choice methods: the Discrepancy Principle, Generalised Cross Validation, and the L-Curve Criterion.
A second attempt at deblurring. Figure: X_Naive = A_c^{-1} B A_r^{-T}.
Filtered solution using TSVD. Figure: k = 4801, N = 83000. Figure: k = 6630, N = 238650.
Tikhonov regularisation. Figure: α = 0.0276. Figure: α = 0.0137.
P. C. Hansen, J. G. Nagy, and D. P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, SIAM, Philadelphia, 1st ed., 2006.