Advanced Numerical Linear Algebra: Inverse Problems


1 Advanced Numerical Linear Algebra: Inverse Problems
Rosemary Renaut, Spring 2013

2 Some Background on Inverse Problems; Constructing PSF Matrices; The DFT
Rosemary Renaut, February 4, 2013

3 References
Deblurring Images: Matrices, Spectra, and Filtering, Hansen, Nagy, and O'Leary, SIAM, 2006
Computational Methods for Inverse Problems, Vogel, SIAM, 2002
Rank-Deficient and Discrete Ill-Posed Problems, Hansen, SIAM, 1998
Discrete Inverse Problems: Insight and Algorithms, Hansen, SIAM, 2010
Parameter Estimation and Inverse Problems, Aster, Borchers, and Thurber

4 Illustrative Example: Fredholm first kind integral equation
g(x) = ∫_0^1 h(x, x′) f(x′) dx′,  0 < x, x′ < 1
This is a one dimensional example of signal blur, similar to the two dimensional image blur problem: f is the light source intensity, g is the measured image intensity, and h is the kernel characterizing the blurring effects.
When h(x, x′) = h(x − x′) the kernel is spatially invariant.
Typical choices of h(x) in 1D:
Gaussian: h(x) = (1/(σ√(2π))) exp(−x²/(2σ²)), σ > 0
Out of focus: h(x) = C for ℓ ≤ x ≤ u, and 0 otherwise

5 Forward Problem: Given f, h calculate g
Suppose that the integral is approximated using numerical quadrature, yielding a matrix A. Let dx = 1/n, x_i = i·dx, 1 ≤ i ≤ n. Then
A_ij = w_j h_ij = w_j h(x_i, x_j), or w_j h((i − j)dx),
where w_j is the weight for the quadrature, e.g. w_j = dx. f is sampled at f(x_j); g(x) is sampled at x_i. The matrix equation g = Af describes the linear relationship.
The requirement ∫_a^b h(x) dx = 1 (for a PSF no energy is introduced, the signal is only spread) leads to the normalization Σ_j w_j h(x_j) = 1.
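The quadrature discretization above can be sketched in a few lines of NumPy (Python rather than the Matlab used for the course examples; the Gaussian width σ = 0.05 and the box-shaped test signal are hypothetical choices):

```python
import numpy as np

def psf_matrix(n, sigma=0.05):
    """Midpoint-rule discretization of the Gaussian blur kernel on (0, 1).

    A[i, j] = w_j * h(x_i - x_j) with w_j = dx, rows normalized so that
    sum_j w_j h(x_i - x_j) = 1 (the PSF spreads, but does not add, energy).
    """
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx            # quadrature nodes x_i
    diff = x[:, None] - x[None, :]           # x_i - x_j
    h = np.exp(-diff**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    A = dx * h
    return A / A.sum(axis=1, keepdims=True)  # enforce the normalization

n = 200
A = psf_matrix(n)
f = (np.abs(np.linspace(0, 1, n) - 0.5) < 0.2).astype(float)  # box signal
g = A @ f                                    # blurred (forward) data
print(g.max() <= f.max() + 1e-12)            # blur cannot exceed the peak
```

Because each row of A sums to 1 and is nonnegative, every entry of g is a convex combination of the values of f: blurring spreads the signal but never amplifies it.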

6 Example of two Blurs, 1D
Figure: Notice that without restoration of the data it will be difficult to extract image features.

7 Matrices, 1D
Figure: The matrices A for the two problem sizes N; notice the structure.

8 Inverse Problem: solve g ≈ Af when the problem is ill-posed
The solution depends on the conditioning of A, on the method, and on the noise in the measurements g.
Figure: no noise, inverse crime with backslash; no noise, direct inverse; with noise.

9 Inverse Problem: solve g ≈ Af when the problem is ill-posed: larger N
The solution depends on the conditioning of A, on the method, and on the noise in the measurements g.
Figure: no noise, inverse crime with backslash; no noise, direct inverse; with noise.

10 Inverse Problem: solve g ≈ Af for a problem that is spatially variant
Figure: model, data, solutions.

11 Inverse Problem: solve g ≈ Af for a problem that is spatially variant
Figure: model, data, solutions, and with noise.

12 Inverse Problem: solve g ≈ Af for a problem that is spatially variant
Figure: model, data, solutions.

13 Inverse Problem: solve g ≈ Af for a problem that is spatially variant
Figure: model, data, solutions.

14 Two Dimensional Problem
The model is the same, just extended to two dimensions:
G(x, y)_exact = ∫∫ H(x, x′, y, y′) X(x′, y′) dx′ dy′
H(x, x′, y, y′) is often smooth; it is the impulse response or point spread function (PSF) of the imaging system. X(x, y) is the original exact image, or light source.
When H is invariant in space and isotropic, H(x, x′, y, y′) = H(x − x′, y − y′).
H might represent a projection, as for example with PET images.
Typically a noisy image is obtained:
G(x, y) = ∫∫ H(x, x′, y, y′) X(x′, y′) dx′ dy′ + E(x, y)
Often the noise distribution E(x, y) is known; the distribution of E should be used in finding a solution.

15 Example of restoration of 2D image with noise
Figure: Reconstruction with noise N(0, 10^(-5)).

16 Actual PET Images are Obtained from Projected Data Figure: Sinogram Data

17 Noisy Sinogram

18 Typical PET Frames: noisy, dependent on the time window of data collection

19 1D Formulation for the Convolution
Blurred signal: g(x) = ∫ h(x − x′) f(x′) dx′
Limits: take h(x) = 0 for |x| > r; then the integration is limited:
g(x) = ∫_{x−r}^{x+r} h(x − x′) f(x′) dx′
Description:
1. Flip the PSF function to obtain h(−x′)
2. Shift by x to give h(x − x′)
3. Then we see that g is a weighted local average of f, with weights from h

20 Flip and Translate
f(t) = 1, 0 < t < 2π;  h(t) = exp(−t), 0 < t < 2π
Convolution: g(x) = ∫ h(x − t) f(t) dt, 0 < x < 2π
g(x) = ∫_{max{0, x−2π}}^{min{x, 2π}} h(x − t) dt = ∫_0^x exp(−(x − t)) dt for 0 < x < 2π
     = exp(−x)(exp(x) − 1) = 1 − exp(−x)
Figure: h(x), h(−x), h(π − x), f(x), g(x)

21 True Convolution
h(t) = 0.5, −1 < t < 1;  f(t) = exp(−t), 0 < t < 2π
g(x) = (1/2) ∫_{max{0, x−1}}^{x+1} f(t) dt = −(1/2) exp(−t) |_{max{0, x−1}}^{x+1}
     = { exp(−x) sinh(1)           for x > 1
         (1/2)(1 − exp(−(x + 1)))  for x ≤ 1
Figure: h(−x), f(x), g(x)

22 Discrete Convolution of two sequences
Discrete convolution, assuming normalization so that a constant weighting in the PSF h can be ignored:
g_i = Σ_j h_{i−j} f_j
Assume f ∈ R^n, h ∈ R^m and n > m:
g_i = (f ⋆ h)_i = Σ_{j ∈ Ω(i)} f_j h_{i−j+1}
Ω(i) is the set of integers for the sum; it may depend on i, depending on how the extent of h outside the defined range of the signal is implemented. Equivalently, we suppose that f is given only for a limited range: a full convolution will require elements of f beyond the defined range. Elements of f and h are only defined for positive indices.

23 The equations
g_1 = h_1 f_1
g_2 = h_2 f_1 + h_1 f_2
⋮
g_m = h_m f_1 + h_{m−1} f_2 + … + h_1 f_m
⋮
g_n = h_m f_{n−m+1} + h_{m−1} f_{n−m+2} + … + h_1 f_n
⋮
g_{m+n−1} = h_m f_n
Notice that |Ω(i)| = m for i = m : n, with Ω(i) = {max(1, i − m + 1), …, min(i, n)}. For other i not all values of h appear, because the relevant indices of f are not defined.

24 Matrix Formulation
(g_1, g_2, …, g_m, g_{m+1}, …, g_n, …, g_{m+n−1})^T = T (f_1, f_2, …, f_n)^T
where T is the (m + n − 1) × n Toeplitz matrix whose first column is (h_1, h_2, …, h_m, 0, …, 0)^T, each further column shifted down by one:
T = [ h_1
      h_2  h_1
      ⋮    ⋱   ⋱
      h_m  …   h_1
           ⋱        ⋱
           h_m  …   h_1
                ⋱    ⋮
                     h_m ]
Notice that the middle block contains the terms which can be calculated using all values of h. In this formulation the lack of use of h amounts to padding f by zeros outside the valid domain.
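The banded matrix above is easy to build and check against a library convolution; a NumPy sketch (the random PSF and signal values are purely illustrative):

```python
import numpy as np

def conv_matrix(h, n):
    """(m+n-1) x n Toeplitz matrix T with g = T f the full convolution;
    the first column is (h_1, ..., h_m, 0, ..., 0)^T."""
    m = len(h)
    T = np.zeros((m + n - 1, n))
    for j in range(n):
        T[j:j + m, j] = h          # each column is a shifted copy of h
    return T

rng = np.random.default_rng(1)
h = rng.random(4)                  # PSF of length m = 4
f = rng.random(10)                 # signal of length n = 10
g = conv_matrix(h, len(f)) @ f
print(np.allclose(g, np.convolve(h, f)))   # matches the full convolution
```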

25 The complete Toeplitz System
(g_1, …, g_m, …, g_n, …, g_{m+n−1})^T = [banded matrix with each full row (h_m, …, h_1)] (w, f_1, …, f_n, y)^T
w_l, y_l correspond to padding of f_i; the total number of padded entries is m − 1, so the padded f ∈ R^{m+n−1}. Pad h to length m + n − 1 similarly. (Note: if only n entries of g are wanted, use the middle n entries.)
w_l = y_l = 0 corresponds to using zero boundary conditions on f.

26 Other Boundary Conditions: Periodic on f
Suppose that the original values of f are assumed to be periodic. Then we fill in the empty corners of the Toeplitz matrix with the wrapped-around entries of h, so that each row (column) is a periodic shift of the previous row (column), e.g. successive rows
(h_{m−1}, h_{m−2}, …, h_1, 0, …, 0, h_m), (h_m, h_{m−1}, …, h_2, h_1, 0, …, 0), …
This is called a CIRCULANT matrix.

27 Reflexive Boundary Conditions
Assume that the function f is reflected at the boundary. For example, look at the equations for g_m and g_{n+1}, yielding
g_m = h_m f_0 + h_{m−1} f_1 + h_{m−2} f_2 + … + h_1 f_m
g_{n+1} = h_m f_{n−m+2} + h_{m−1} f_{n−m+3} + … + h_2 f_n + h_1 f_{n+1}
With the reflections f_0 = f_1 and f_{n+1} = f_n:
g_m = (h_m + h_{m−1}) f_1 + h_{m−2} f_2 + … + h_1 f_m
g_{n+1} = h_m f_{n−m+2} + h_{m−1} f_{n−m+3} + … + (h_2 + h_1) f_n
A matrix which has constant entries on each antidiagonal is a Hankel matrix. Hence the resulting matrix is Toeplitz + Hankel.
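For a symmetric 3-point PSF (h_1, h_0, h_1) the three boundary-condition variants differ only in the corner entries; a small NumPy sketch (the values h_0 = 0.5, h_1 = 0.25 and n = 6 are hypothetical):

```python
import numpy as np

h0, h1 = 0.5, 0.25                 # symmetric 3-point PSF, normalized
n = 6

# Zero BC: plain (symmetric) Toeplitz matrix
T = np.diag(np.full(n, h0)) + h1 * (np.eye(n, k=1) + np.eye(n, k=-1))

C = T.copy()                       # periodic BC: wrap the corners -> circulant
C[0, -1] += h1
C[-1, 0] += h1

R = T.copy()                       # reflexive BC: fold back at the boundary
R[0, 0] += h1                      # f_0 = f_1
R[-1, -1] += h1                    # f_{n+1} = f_n

# Each circulant row is a cyclic shift of the previous row
print(all(np.allclose(np.roll(C[i], 1), C[i + 1]) for i in range(n - 1)))
# The reflexive correction R - T is exactly the two corner entries (a
# degenerate Hankel part for this short PSF)
print(np.allclose(R - T, np.diag([h1] + [0] * (n - 2) + [h1])))
```

For a longer symmetric PSF the correction R − T fills in full antidiagonal (Hankel) blocks in the two corners, as derived above.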

28 Example of Forward Operation, Different Boundary Conditions
Figure: Convolution with different boundary conditions (global domain FFT; truncated domain FFT; truncated with periodic, zero, and reflexive BCs) for Gaussian and motion blur data.

29 Example of Inversion, Different Boundary Conditions
Figure: Deblurred Gauss and deblurred step data with zero, periodic, and reflexive boundary conditions.

30 Forward and Inversion, Different Boundary Conditions: Small N
Figure: Convolution and deblurring (Gauss and step data) with zero, periodic, and reflexive boundary conditions.

31 1D example code
rosie/classes/spring23_files/onedexample_pub.html
rosie/classes/spring23_files/htmlfeb4/oned_basis_example_pub_v2.html

32 Extension to 2D: In Matrix Notation
Let x = vec(X), b = vec(G) be the matrices X, G with entries stacked as a vector, i.e. we take the entries of X ∈ R^{m×n} column-wise and set N = mn:
vec(X) = (x_1; x_2; …; x_n) ∈ R^N, where x_j is column j of X
Then the convolution operator is rewritten as Ax = b, where A represents the PSF H; the entries of A depend on how the boundary conditions are implemented. The conditioning of the problem can still be investigated naïvely for the linear system. This requires the generation of the matrix A for a two dimensional problem.

33 Descriptive Version of 2D Convolution
1. Regard the PSF as a mask with elements in an array
2. Rotate the PSF array by 180°
3. To compute the convolution for a certain pixel, line up the central point of the rotated mask on that pixel
4. Multiply the corresponding elements within the mask
5. Sum the elements within the mask
6. Take account of boundary conditions, as for the 1D case

34 Simple Example
X = [x_11 x_12 x_13; x_21 x_22 x_23; x_31 x_32 x_33],  H = [h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33],  G = [g_11 g_12 g_13; g_21 g_22 g_23; g_31 g_32 g_33]
Rotate H, align with x_22, and multiply:
Hflip = [h_33 h_32 h_31; h_23 h_22 h_21; h_13 h_12 h_11]
X ⊙ Hflip = [x_11 h_33 x_12 h_32 x_13 h_31; x_21 h_23 x_22 h_22 x_23 h_21; x_31 h_13 x_32 h_12 x_33 h_11]
where ⊙ denotes element-wise multiplication. We then sum all the entries. In the vec notation we have g_22 = vec(Hflip)^T vec(X) as an inner product. All entries are obtained as inner products.

35 Calculating all entries for the case of zero boundary conditions
vec(G) = A vec(X), with vec(X) = (x_11, x_21, x_31, x_12, x_22, x_32, x_13, x_23, x_33)^T and
A = [T_2 T_1 0; T_3 T_2 T_1; 0 T_3 T_2],  T_j = [h_2j h_1j 0; h_3j h_2j h_1j; 0 h_3j h_2j]
Note that each block T_j is of Toeplitz form and that, as a block matrix, A is also Toeplitz: the matrix is block Toeplitz with Toeplitz blocks (BTTB).

36 Constructing matrix A: Boundary Conditions
The PSF is usually of finite extent, but for a finite section of an image, convolution with the PSF uses values outside the specific image. If this is not handled correctly, boundary effects will contaminate the image, leading to artifacts. This is true for both the forward and the inverse model.
Suggestions:
Zero padding: embed the image in a larger image, with sufficient zeros to handle the extent of the PSF
O O O
O X O
O O O
Introduces edges and discontinuities; leads to ringing and artificial black borders. Matrix is block Toeplitz with Toeplitz blocks (BTTB).

37 Constructing matrix A
Periodicity: embed the image periodically in a larger image
X X X
X X X
X X X
Introduces edges and discontinuities; leads to ringing. Matrix is block circulant with circulant blocks (BCCB).

38 Constructing matrix A
Periodicity: embed the image periodically in a larger image
X X X
X X X
X X X
Introduces edges and discontinuities; leads to ringing. Matrix is block circulant with circulant blocks (BCCB).
Reflexive: reflect and embed
X_lrud X_ud X_lrud
X_lr   X    X_lr
X_lrud X_ud X_lrud
Obvious reflections about the central block: no edges. Matrix is a sum of BTTB, BTHB, BHTB, BHHB; H for a Hankel block.

39 Separable PSF: H = c r^T
c: coefficients of blur across the columns of the image; r: coefficients of blur across the rows
c = (c_1, c_2, …, c_m)^T, r = (r_1, r_2, …, r_n)^T
The PSF array is a rank one matrix, H_kj = c_k r_j:
H = [c_1 r_1 c_1 r_2 c_1 r_3; c_2 r_1 c_2 r_2 c_2 r_3; c_3 r_1 c_3 r_2 c_3 r_3]
Then let
A_r = [r_2 r_1 0; r_3 r_2 r_1; 0 r_3 r_2],  A_c = [c_2 c_1 0; c_3 c_2 c_1; 0 c_3 c_2]
A_r and A_c inherit the structure of the 1D matrices, dependent on how the boundary conditions are applied. We obtain the overall blur matrix A = A_r ⊗ A_c.

40 Kronecker Product
To obtain the two dimensional PSF matrix A, which acts on the image in vec form, we map A_r and A_c using the special structure:
A_r ⊗ A_c = [a_11 A_c  a_12 A_c  …  a_1n A_c;  a_21 A_c  a_22 A_c  …  a_2n A_c;  …;  a_m1 A_c  a_m2 A_c  …  a_mn A_c]
where a_ij are the entries of A_r = (a_ij).

41 Evaluating using the Kronecker Product
Assume A_c ∈ R^{m×m} and A_r ∈ R^{n×n}, for an image of size m × n. Then, for the exact blurred image,
G = A_c X A_r^T
Left multiplication by A_c applies the PSF to each column of X: the vertical blur. Left multiplication of X^T by A_r applies the PSF to each column of X^T, which are the rows of X; hence the horizontal blur is applied as (A_r X^T)^T = X A_r^T.
Let A = A_r ⊗ A_c be the Kronecker product of size mn × mn; the ij block of A is (A_r)_ij A_c. Then G = A_c X A_r^T in vector form is b = Ax.
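The identity (A_r ⊗ A_c) vec(X) = vec(A_c X A_r^T), with vec stacking column-wise, can be verified directly in NumPy (the random matrices are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
Ac = rng.random((m, m))            # column (vertical) blur
Ar = rng.random((n, n))            # row (horizontal) blur
X = rng.random((m, n))             # image

vec = lambda M: M.flatten(order="F")   # column-wise stacking, as in the notes

A = np.kron(Ar, Ac)                # ij block of A is (Ar)_ij * Ac
b = A @ vec(X)
print(np.allclose(b, vec(Ac @ X @ Ar.T)))  # (Ar ⊗ Ac) vec(X) = vec(Ac X Ar^T)
```

Note the order: because vec stacks columns, the row blur A_r appears as the left factor of the Kronecker product.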

42 Summary: Constructing matrix A
Typically we only see a finite region of an image that extends in all directions, so we have to be careful in constructing the matrix A.
Remark: The blurring matrix A depends on two ingredients. The PSF defines the blur, but the boundary conditions imposed on the image, and hence on A, specify assumptions on the scene outside the image, which then impact the reconstruction of the image.
Remark: Practically, we look for algorithms that do not form A explicitly.

43 Practical Implementation: Fourier Transform
Discrete convolution of two equal length sequences is accomplished by pointwise multiplication in the Fourier domain. Suppose f is a sequence of length n with components f_j. The unitary DFT of f is f̂ with component k
f̂_k = (1/√n) Σ_{j=1}^n f_j e^{−2πi(j−1)(k−1)/n}
The inverse DFT is given by
f_j = (1/√n) Σ_{k=1}^n f̂_k e^{2πi(j−1)(k−1)/n}
Notice that the sequences are periodic with period n: f_{j+n} = f_j. Extending to length m + n − 1 can be done by periodicity, or reflection at the boundaries.

44 Fourier transform as a Matrix Operator
It may be useful to note that the Fourier transform can be implemented (thought of) as a matrix operation:
f̂ = F f,  F_kj = (1/√n) e^{−2πi(k−1)(j−1)/n}
f = F* f̂,  (F*)_kj = (1/√n) e^{2πi(k−1)(j−1)/n}
using the fact that the matrix F is unitary, F*F = I_n.
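A quick NumPy check of the unitary DFT matrix (n = 8 is an arbitrary choice); note that np.fft.fft implements the unnormalized transform √n F:

```python
import numpy as np

n = 8
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)  # unitary DFT matrix

f = np.random.default_rng(2).random(n)
print(np.allclose(F.conj().T @ F, np.eye(n)))          # F* F = I_n (unitary)
print(np.allclose(F @ f, np.fft.fft(f) / np.sqrt(n)))  # matches scaled FFT
print(np.allclose(F.conj().T @ (F @ f), f))            # inverse recovers f
```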

45 Summary: Implementation of the 1D Convolution by FFT
g_i = (f ⋆ h)_i = Σ_{j ∈ Ω(i)} f_j h_{i−j+1}, i = 1 : m + n − 1
1. Pad both sequences with zeros to length m + n − 1; possibly pad further to a power of two
2. Transform both sequences to the frequency domain by the DFT
3. Multiply the DFTs pointwise: ĝ_i = f̂_i ĥ_i
4. Inverse transform ĝ to yield g sampled at x_i, i = 1 : m + n − 1
5. For real data ignore the imaginary part of the FFT, which occurs due to floating point accuracy
6. Extract the needed components of g
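The six steps translate almost line for line into NumPy (np.fft.fft(x, L) performs the zero-padding of step 1; the random test sequences are arbitrary):

```python
import numpy as np

def fft_convolve(f, h):
    """Linear convolution via the FFT: pad to length m+n-1, transform,
    multiply pointwise, invert, and take the real part."""
    L = len(f) + len(h) - 1
    fhat = np.fft.fft(f, L)        # steps 1-2: zero-pad and transform
    hhat = np.fft.fft(h, L)
    g = np.fft.ifft(fhat * hhat)   # steps 3-4: pointwise product, invert
    return g.real                  # step 5: imaginary part is rounding noise

rng = np.random.default_rng(3)
f, h = rng.random(50), rng.random(7)
print(np.allclose(fft_convolve(f, h), np.convolve(f, h)))
```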

46 Constructing Ax for structured A; spectral decompositions; solutions in terms of spectral decompositions
Rosemary Renaut, January 2013

47 Circulant Matrix
Definition: Toeplitz A is circulant if its rows / columns are circular shifts:
A = (a_ij) = (h_{i−j+1}) = [h_1 h_0 … h_{−(n−3)} h_{−(n−2)}; h_2 h_1 … h_{−(n−4)} h_{−(n−3)}; …; h_n h_{n−1} … h_2 h_1]
Write A = toeplitz(h_ext), where h_ext is the extension of h of length 2n − 1:
h_ext = (h_{−(n−2)}, h_{−(n−3)}, …, h_0, h_1, h_2, …, h_n)
For a circulant matrix h_ext is periodic with period n, and A = circulant(h_1, h_2, …, h_n)
Important: h is the first column of A

48 Relating circulant matrices and the DFT
Definition (Circulant Right Shift Matrix)
R = [0 0 … 0 1; 1 0 … 0 0; 0 1 ⋱ 0 0; …; 0 … 0 1 0], or R_kj = δ([j − k + 1]_n)
Notation: δ(j) = 1 only for j = 0
Note the right shift property: R (h_1, h_2, h_3, …, h_n)^T = (h_n, h_1, …, h_{n−1})^T
Induction: R^j (h_1, h_2, h_3, …, h_n)^T = (h_{n−j+1}, h_{n−j+2}, …, h_{n−j})^T; R^j is the circulant matrix with first column e_{j+1}

49 Expressing the Circulant Matrix
A = [h_1 h_n … h_3 h_2; h_2 h_1 … h_4 h_3; …; h_n h_{n−1} … h_2 h_1] = Σ_{j=0}^{n−1} h_{j+1} R^j (periodicity h_{n+1} = h_1)
Let ω = exp(−2πi/n); then √n F_kj = ω^{(k−1)(j−1)}, √n (F*)_kj = ω^{−(k−1)(j−1)}, and use
Σ_{j=1}^n ω^{(j−1)(k−1)} = n δ([k − 1]_n),  F*F = FF* = I_n
Then it is immediate to show
(i) R = F* diag(1, ω, ω², …, ω^{n−1}) F = F* diag(ω^{l−1}) F
(ii) R F*_l = ω^{l−1} F*_l,  R^j = F* diag(ω^{l−1})^j F
Here F*_l is the l-th column of F*, l = 1 : n

50 The Spectral Decomposition
We apply the spectral decomposition results for R to show
A = Σ_{j=0}^{n−1} h_{j+1} R^j = F* diag(Σ_{j=0}^{n−1} h_{j+1} ω^{(l−1)j}) F = √n F* diag(ĥ) F = F* Λ F,  Λ = √n diag(ĥ_1, ĥ_2, …, ĥ_n)
(√n ĥ_l, F*_l) are the eigenvalue - eigenvector pairs of A
To find the eigenvalues of A observe F A = Λ F and F_{:,1} = (1/√n)(1, 1, …, 1)^T, hence (√n F a_1)_l = λ_l, where a_1 is the first column of A

51 Computing Ax
If the system is nonsingular, A^{−1} = F* Λ^{−1} F
Note also that A is normal: AA* = F* Λ F F* Λ̄ F = F* |Λ|² F = A*A
Ax is calculated via Ax = F* diag(√n ĥ) F x:
transform:   x̂ = F x
convolution: ŷ = √n (ĥ ∘ x̂)
inverse:     y = F* ŷ
Practically, if Λ is known, we write Ax = y where y = F*(λ ∘ x̂), with ∘ the elementwise product.
To calculate the product Ax when A is Toeplitz, embed A in a larger circulant matrix (see Vogel, Chapter 5); a similar spectral decomposition is obtained.
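Both facts, that the eigenvalues of a circulant matrix are the DFT of its first column with the conjugate DFT vectors as eigenvectors, and that Ax reduces to two FFTs plus a pointwise product, can be checked in NumPy (using the unnormalized fft, so the √n factors cancel; the random first column is arbitrary):

```python
import numpy as np
from numpy.fft import fft, ifft

rng = np.random.default_rng(4)
n = 16
col = rng.random(n)                    # first column h of the circulant A

# Column j of a circulant matrix is the first column shifted down by j
A = np.column_stack([np.roll(col, j) for j in range(n)])

lam = fft(col)                         # eigenvalues = DFT of the first column
k = np.arange(n)
Fstar = np.exp(2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)  # conj DFT matrix
print(np.allclose(A @ Fstar, Fstar * lam))   # A F*_l = lambda_l F*_l

x = rng.random(n)
print(np.allclose(ifft(lam * fft(x)).real, A @ x))  # Ax in O(n log n)
```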

52 The Two Dimensional BCCB Case: Simple
Recall the earlier case with a 3 × 3 PSF; with periodic boundary conditions the 9 × 9 matrix becomes
A = [H_2 H_1 H_3; H_3 H_2 H_1; H_1 H_3 H_2]
H_1 = circulant(h_11, h_21, h_31) = circulant(h_{:,1})
H_2 = circulant(h_12, h_22, h_32) = circulant(h_{:,2})
H_3 = circulant(h_13, h_23, h_33) = circulant(h_{:,3})
and the entire PSF can be obtained from the first column of A.

53 The general case for BCCB
A(h) = [H_1 H_{n_y} … H_3 H_2; H_2 H_1 … H_4 H_3; H_3 H_2 H_1 … H_4; …; H_{n_y} … H_3 H_2 H_1]
Each block is circulant, H_j = circulant(h_{:,j}), h of size n_x × n_y
A is of size n_x n_y × n_x n_y, n = n_x n_y; the first column defines h
Again A is normal and we have a spectral decomposition
A = F* diag(√n vec(ĥ)) F = F* Λ F
Here F = F_r ⊗ F_c, and ĥ is the two dimensional FFT of h of size n_x × n_y: vec(ĥ) = F(h) = (F_r ⊗ F_c) vec(h)
Calculation scheme: A vec(X) = vec(F*(λ ∘ vec(F X)))

54 Calculating the Spectral Decomposition Λ for the 2D BCCB
As for the 1D case, to find the eigenvalues of A observe F A = Λ F and F_{:,1} = (1/√n)(1, 1, …, 1)^T (look at the Kronecker product). Hence (√n F a_1)_l = λ_l.
For the BCCB matrix, the first column a_1 of A is given by h. Suppose the center point of h in the spatial domain is at index center = (r, c). Matlab can be used to find this column, and F acting on a_1 is accomplished as the 2D transform
fft2(circshift(h, 1 - center))
i.e. circshift performs the necessary circular shift of the PSF array. Note that fft2 in Matlab implements √n F, while ifft2 implements (1/√n) F*.
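A NumPy version of this recipe (a hypothetical 3 × 3 averaging PSF embedded in a 16 × 16 periodic grid; np.roll plays the role of circshift, and shifting by −center in 0-based indexing corresponds to 1 − center in Matlab's 1-based indexing):

```python
import numpy as np
from numpy.fft import fft2, ifft2

# Hypothetical 3x3 PSF with center at index (1, 1), normalized to sum 1
psf = np.array([[1., 2., 1.],
                [2., 4., 2.],
                [1., 2., 1.]]) / 16.0
n = 16
h = np.zeros((n, n))
h[:3, :3] = psf
center = (1, 1)
h = np.roll(h, (-center[0], -center[1]), axis=(0, 1))  # circshift(h, -center)

lam = fft2(h)                       # spectrum of the BCCB matrix

def blur(X):                        # A vec(X) without ever forming A
    return ifft2(lam * fft2(X)).real

# An impulse placed at the PSF center is mapped back to the PSF itself
X = np.zeros((n, n)); X[center] = 1.0
G = blur(X)
print(np.allclose(G[:3, :3], psf))
```

Since the PSF sums to one, the operator also preserves the total intensity of the image.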

55 Spectral Decomposition Λ for Reflexive: Hansen (2010)
Assume the underlying Toeplitz matrix is symmetric, i.e. the PSF is doubly symmetric about the central point: fliplr(h) = flipud(h) = h. (The flip operators in Matlab flip the columns (rows) of an array.)
As in the 1D circulant case use an expansion A = Σ_{j=1}^n h_j S_j for basis matrices S_j:
(S_j)_lk = 1 if |l − k| = j − 1, or l + k = j, or l + k = 2n + 2 − j; 0 otherwise
To find the decomposition use the discrete cosine transform (DCT): the dct vectors w_l, l = 1 : n, are eigenvectors of each of the basis matrices S_j
w_l = c_l (cos((l − 1)θ_1), cos((l − 1)θ_2), …, cos((l − 1)θ_n))^T, l = 1 : n
θ_k = (2k − 1)π/(2n), k = 1 : n;  c_l = √(2/n) for l > 1, c_1 = √(1/n)
The matrix W_n defined with columns w_l is orthogonal. Using the eigenvector decomposition for each basis matrix, the spectrum for A can be obtained:
a_1 = A e_1 = W_n Λ W_n^T e_1  so that  W_n^T a_1 = Λ W_n^T e_1
Eigenvalues are λ_l = [dct(a_1)]_l / [dct(e_1)]_l, because dct(x) = W_n^T x

56 DCT analysis extends to the 2D case with reflexive BCs
A = W^T Λ W, where W is the orthogonal two-dimensional DCT matrix. The 2D DCT is calculated for an array X by computing the DCT of each column, followed by the DCT of each row. The inverse is calculated equivalently: A^{−1} = W^T Λ^{−1} W.
Use the Matlab functions dct2 and idct2 in the Image Processing Toolbox; alternatively, the toolbox for the HNO book. The spectrum is obtained from
dct2(dctshift(h, center)) ./ dct2(e_1)
The DCT implementation uses real arithmetic rather than complex arithmetic as for the FFT; the cost is equivalent to the FFT cost.
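For the 1D doubly symmetric case the recipe above can be verified directly: build the reflexive-BC matrix for a symmetric 3-point PSF (hypothetical values h_0 = 0.5, h_1 = 0.25), form the orthogonal matrix W from the formula for w_l, and check that W^T A W is diagonal with eigenvalues dct(a_1) / dct(e_1):

```python
import numpy as np

n = 8
h0, h1 = 0.5, 0.25                     # symmetric 3-point PSF (hypothetical)

# Reflexive-BC blur matrix: symmetric Toeplitz plus Hankel corner corrections
A = np.diag(np.full(n, h0)) + h1 * (np.eye(n, k=1) + np.eye(n, k=-1))
A[0, 0] += h1
A[-1, -1] += h1

# Orthogonal DCT matrix W with columns w_l as on the slide (0-based l here)
k = np.arange(n)
theta = (2 * k + 1) * np.pi / (2 * n)          # theta_k
W = np.cos(np.outer(theta, k)) * np.sqrt(2.0 / n)
W[:, 0] /= np.sqrt(2.0)                        # c_1 = sqrt(1/n)

dct = lambda x: W.T @ x                        # dct(x) = W^T x
print(np.allclose(W.T @ W, np.eye(n)))         # W is orthogonal
lam = dct(A[:, 0]) / dct(np.eye(n)[:, 0])      # lambda = dct(a_1) / dct(e_1)
print(np.allclose(W.T @ A @ W, np.diag(lam)))  # W^T A W = Lambda
```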

57 Summary: Spectral Decomposition for (Most) PSF Matrices: BTTB, BCCB, Doubly Symmetric Reflexive
Given the PSF array h, with center = (r, c) the central index of the PSF:
Eigenvalues of A are obtained by a transform applied to the shifted h.
A spectral decomposition exists, A = U* Λ U, which can be used for forward and inverse operations.
All forward and inverse operations use transforms and convolution.
We do not know anything about the ordering of the eigenvalues in Λ in terms of size.

58 The Singular Value Decomposition (SVD): General A
Suppose, here, A ∈ R^{N×N}: A = U Σ V^T, U and V orthogonal, U^T U = I_N = V^T V
σ_i are the singular values of A: Σ = diag(σ_i), σ_1 ≥ σ_2 ≥ … ≥ σ_N ≥ 0
u_i (columns of U) are the left singular vectors of A; v_i (columns of V) are the right singular vectors of A
Assume σ_N > 0, so that A is nonsingular; then
A^{−1} = V Σ^{−1} U^T = Σ_{i=1}^N v_i u_i^T / σ_i,  A = Σ_{i=1}^N σ_i u_i v_i^T
The solution expressed with the SVD components is a weighted linear combination of the basis vectors v_i:
x = Σ_{i=1}^N (u_i^T b / σ_i) v_i

59 The Two Dimensional Result
We note that x = vec(X). Likewise we can write V_i = array(v_i), where array is the reverse of vec, converting a vector to an array. Then
X = Σ_{i=1}^N (u_i^T b / σ_i) array(v_i) = Σ_{i=1}^N (u_i^T b / σ_i) V_i
This is a decomposition in the basis images V_i.

60 Kronecker Product Relations
1. (A ⊗ B) vec(X) = vec(B X A^T)
2. Transpose is distributive: (A ⊗ B)^T = A^T ⊗ B^T
3. When both matrices are invertible: (A ⊗ B)^{−1} = A^{−1} ⊗ B^{−1}
4. Mixed product property, for matrices of appropriate dimensions: (A ⊗ B)(C ⊗ D) = (AC ⊗ BD)
5. The Kronecker product is associative, but not commutative

61 Spectral Expansion for DFT: BCCB
A = F* Λ F. Use F = F_r ⊗ F_c, the one dimensional FT matrices:
x = F* Λ^{−1} F b = (F_r ⊗ F_c)* Λ^{−1} vec(F_c G F_r^T)
Notice vec(F_c G F_r^T) is the two dimensional FT of the blurred image G. Let vec(G̃) = Λ^{−1} vec(F_c G F_r^T):
X = F_c* G̃ (F_r*)^T = conj(F_c) G̃ conj(F_r^T)
X = Σ_i Σ_j G̃_ij conj(f_{c,i} f_{r,j}^T)
f_{c,i} and f_{r,j} are the i-th and j-th columns of F_c and F_r. This is a decomposition in the basis images f_{c,i} f_{r,j}^T.

62 Spectral Expansion for the Reflexive Case
A = W^T Λ W. Use W = W_r ⊗ W_c, the one dimensional DCT matrices:
x = W^T Λ^{−1} W b = (W_r ⊗ W_c)^T Λ^{−1} vec(W_c G W_r^T)
vec(W_c G W_r^T) is the two dimensional DCT of the blurred image G. Let vec(G̃) = Λ^{−1} vec(W_c G W_r^T):
X = W_c^T G̃ W_r
X = Σ_i Σ_j G̃_ij (w_{c,i} w_{r,j}^T)
w_{c,i} and w_{r,j} are the i-th and j-th columns of W_c and W_r. This is a decomposition in the basis images w_{c,i} w_{r,j}^T.

63 The Separable PSF: H = c r^T
Matrix A is obtained as the Kronecker product A = A_r ⊗ A_c, Ax = vec(A_c X A_r^T)
SVD of each matrix: A_r = U_r Σ_r V_r^T, A_c = U_c Σ_c V_c^T
SVD of the Kronecker product:
(U_r Σ_r V_r^T) ⊗ (U_c Σ_c V_c^T) = (U_r ⊗ U_c)(Σ_r ⊗ Σ_c)(V_r ⊗ V_c)^T
Extend the one dimensional SVD solution for b = Ax:
X = V_c Σ_c^{−1} U_c^T G U_r Σ_r^{−1} V_r^T
vec(U_c^T G U_r) = (U_r ⊗ U_c)^T vec(G) represents the spectral transform of the blurred image G. Let G̃ = Σ_c^{−1} U_c^T G U_r Σ_r^{−1}; then
X = V_c G̃ V_r^T = Σ_i Σ_j G̃_ij (v_{c,i} v_{r,j}^T)
v_{c,i} and v_{r,j} are the i-th and j-th columns of V_c and V_r. This is a decomposition in the basis images v_{c,i} v_{r,j}^T.

64 Summary: Spectral Decomposition of the Solution
For all cases we have a decomposition of the solution as a sum of basis images. For the Fourier case the order of the spectral components is not defined, as it is for the SVD. We can examine the solution by examining the spectral components in each case; the G̃_ij are equivalent to the weights u_i^T b / σ_i of the SVD.
We note that the DFT and DCT provide feasible approaches for finding the defining spectrum; the large scale SVD may be too expensive for practical problems.

65 The Noise
Practically the measured data is contaminated by noise, which we denote by e, or array(e) = E. The spectral decomposition acts on the noise term in the same way it acts on the exact right hand side b, e.g.
x = Σ_{i=1}^r (u_i^T (b_exact + e) / σ_i) v_i = x_exact + Σ_{i=1}^r (u_i^T e / σ_i) v_i
where r is the rank of A. If e is uniform, we expect u_i^T e to be of similar magnitude for all i.
Note cond(A) = σ_1/σ_r. Typically we expect the σ_i to decay to zero; hence cond(A) may be large. Small σ_i represents a high frequency component, in the sense that u_i, v_i have more sign changes as i increases.
(u_i^T e / σ_i) is the coefficient of v_i in the error image, where v_i is associated with a spatial frequency. When 1/σ_i is large, the contribution of the high frequency error is magnified.

66 Example of singular values dependent on the width of the Gaussian PSF and two noise levels
Example taken from Hansen, Nagy and O'Leary, Figure 5.5

67 Example of basis images for the middle PSF in Figure 5.5, using the SVD
Example taken from Hansen, Nagy and O'Leary, Figure 5.7

68 Example of basis images: Periodic Boundary Conditions
Example taken from Hansen, Nagy and O'Leary, Figure 5.8. Notice the periodicity effect on the basis images.

69 Example of basis images: Reflexive Boundary Conditions
Example taken from Hansen, Nagy and O'Leary, Figure 5.9. Notice the reflexive effect on the basis images.

70 Observations
σ_i depend on the matrix A, which depends on
the width of the PSF: σ_i decay faster for a wide PSF than for a narrow PSF
the implemented boundary conditions: in particular, the basis images satisfy the boundary conditions
Moreover u_i^T b depends on the image, and on the noise level in the image. Thus (u_i^T b)/σ_i depends on A, e and b. When σ_i is small relative to u_i^T e, the solution may be contaminated.

71 Example of basis images: Periodic BCCB using DFT Example taken from Hansen, Nagy and O Leary, Figure 5 Notice that DFT basis images are quite different

72 Example of basis images: Doubly Symmetric PSF using DCT Example taken from Hansen, Nagy and O Leary, Figure 5 Notice that DCT basis images are again also quite different

73 Comparing magnitudes of Spectral Components in the DFT, DCT and SVD Bases: separable Gaussian blur
Example taken from Hansen, Nagy and O'Leary, Figure 5.12. Ordered by magnitude: we see that the FFT and DCT have fewer large spectral values.

74 Summary: The Spectral Decomposition or SVD
The SVD assists with understanding the impact of noise on the solution process. The spectral decomposition shows how the solution is composed of a series of basis images. Practically the SVD is not useful for large scale problems. Practically, when A arises from image deblurring, the FFT and DCT can be used to find the spectral decomposition and to efficiently implement the forward operations Ay, A^T y and A^{−1} y for any y. The spectral decomposition suggests truncation to remove the components which lead to contamination of the solution.

75 Truncating the SVD expansion
The SVD expansion shows the impact of the noise on the calculation of
x = Σ_{i=1}^r (u_i^T b / σ_i) v_i
Therefore it seems reasonable to consider the truncated solution
x_k = Σ_{i=1}^k (u_i^T b / σ_i) v_i = A_k† b
with A_k† the pseudo-inverse of the rank k matrix A_k. The use of the truncated expansion is feasible only if we can first calculate the SVD of A efficiently. The limit k on the sum is regarded as a regularization parameter; we can change k and obtain different images. The choice of k controls the degree of low pass filtering which is applied, i.e. controls the attenuation of the high frequency components. Look at the example: the image is over or under smoothed dependent on k.
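A sketch of the TSVD filter on the 1D Gaussian blur example (all sizes, the noise level 10^(-3), and the truncation index k = 12 are hypothetical choices):

```python
import numpy as np

# Ill-conditioned Gaussian blur matrix (zero BC), as in the 1D examples
n, sigma = 64, 0.05
x = (np.arange(n) + 0.5) / n
A = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))
A /= A.sum(axis=1, keepdims=True)

f = (np.abs(x - 0.5) < 0.2).astype(float)        # true signal
rng = np.random.default_rng(5)
b = A @ f + 1e-3 * rng.standard_normal(n)        # noisy blurred data

U, s, Vt = np.linalg.svd(A)
beta = U.T @ b                                   # coefficients u_i^T b

def tsvd(k):
    """x_k = sum_{i=1}^k (u_i^T b / sigma_i) v_i."""
    return Vt[:k].T @ (beta[:k] / s[:k])

naive = tsvd(n)                                  # full sum = A^{-1} b
reg = tsvd(12)                                   # truncated solution
print(np.linalg.norm(reg - f) < np.linalg.norm(naive - f))
```

With the full sum the inverted noise terms u_i^T e / σ_i dominate the reconstruction; truncating at small k discards exactly those magnified high frequency components, at the cost of some smoothing of the true signal.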

76 Example of Truncated SVD
Example from Hansen, Nagy and O'Leary. Notice the under- or over-smoothing dependent on the choice of k in the TSVD.


More information

Dealing with edge effects in least-squares image deconvolution problems

Dealing with edge effects in least-squares image deconvolution problems Astronomy & Astrophysics manuscript no. bc May 11, 05 (DOI: will be inserted by hand later) Dealing with edge effects in least-squares image deconvolution problems R. Vio 1 J. Bardsley 2, M. Donatelli

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Newton s Method for Estimating the Regularization Parameter for Least Squares: Using the Chi-curve

Newton s Method for Estimating the Regularization Parameter for Least Squares: Using the Chi-curve Newton s Method for Estimating the Regularization Parameter for Least Squares: Using the Chi-curve Rosemary Renaut, Jodi Mead Arizona State and Boise State Copper Mountain Conference on Iterative Methods

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T. Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J. Hefferon E-mail: kapitula@math.unm.edu Prof. Kapitula,

More information

5. Orthogonal matrices

5. Orthogonal matrices L Vandenberghe EE133A (Spring 2017) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 5-1 Orthonormal

More information

arxiv: v1 [math.na] 15 Jun 2009

arxiv: v1 [math.na] 15 Jun 2009 Noname manuscript No. (will be inserted by the editor) Fast transforms for high order boundary conditions Marco Donatelli arxiv:0906.2704v1 [math.na] 15 Jun 2009 the date of receipt and acceptance should

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Contents. Acknowledgments

Contents. Acknowledgments Table of Preface Acknowledgments Notation page xii xx xxi 1 Signals and systems 1 1.1 Continuous and discrete signals 1 1.2 Unit step and nascent delta functions 4 1.3 Relationship between complex exponentials

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS

A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS SILVIA NOSCHESE AND LOTHAR REICHEL Abstract. Truncated singular value decomposition (TSVD) is a popular method for solving linear discrete ill-posed

More information

6 The SVD Applied to Signal and Image Deblurring

6 The SVD Applied to Signal and Image Deblurring 6 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

1 Linearity and Linear Systems

1 Linearity and Linear Systems Mathematical Tools for Neuroscience (NEU 34) Princeton University, Spring 26 Jonathan Pillow Lecture 7-8 notes: Linear systems & SVD Linearity and Linear Systems Linear system is a kind of mapping f( x)

More information

AMS classification scheme numbers: 65F10, 65F15, 65Y20

AMS classification scheme numbers: 65F10, 65F15, 65Y20 Improved image deblurring with anti-reflective boundary conditions and re-blurring (This is a preprint of an article published in Inverse Problems, 22 (06) pp. 35-53.) M. Donatelli, C. Estatico, A. Martinelli,

More information

arxiv: v1 [math.na] 3 Jan 2019

arxiv: v1 [math.na] 3 Jan 2019 STRUCTURED FISTA FOR IMAGE RESTORATION ZIXUAN CHEN, JAMES G. NAGY, YUANZHE XI, AND BO YU arxiv:9.93v [math.na] 3 Jan 29 Abstract. In this paper, we propose an efficient numerical scheme for solving some

More information

EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6)

EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement to the material discussed in

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Numerical Linear Algebra Background Cho-Jui Hsieh UC Davis May 15, 2018 Linear Algebra Background Vectors A vector has a direction and a magnitude

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

COMPUTATIONAL ISSUES RELATING TO INVERSION OF PRACTICAL DATA: WHERE IS THE UNCERTAINTY? CAN WE SOLVE Ax = b?

COMPUTATIONAL ISSUES RELATING TO INVERSION OF PRACTICAL DATA: WHERE IS THE UNCERTAINTY? CAN WE SOLVE Ax = b? COMPUTATIONAL ISSUES RELATING TO INVERSION OF PRACTICAL DATA: WHERE IS THE UNCERTAINTY? CAN WE SOLVE Ax = b? Rosemary Renaut http://math.asu.edu/ rosie BRIDGING THE GAP? OCT 2, 2012 Discussion Yuen: Solve

More information

Empirical Mean and Variance!

Empirical Mean and Variance! Global Image Properties! Global image properties refer to an image as a whole rather than components. Computation of global image properties is often required for image enhancement, preceding image analysis.!

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration

Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration D. Calvetti a, B. Lewis b and L. Reichel c a Department of Mathematics, Case Western Reserve University,

More information

Mathematics and Computer Science

Mathematics and Computer Science Technical Report TR-2004-012 Kronecker Product Approximation for Three-Dimensional Imaging Applications by MIsha Kilmer, James Nagy Mathematics and Computer Science EMORY UNIVERSITY Kronecker Product Approximation

More information

Linear Operators and Fourier Transform

Linear Operators and Fourier Transform Linear Operators and Fourier Transform DD2423 Image Analysis and Computer Vision Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 13, 2013

More information

Linear Algebra Review (Course Notes for Math 308H - Spring 2016)

Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Dr. Michael S. Pilant February 12, 2016 1 Background: We begin with one of the most fundamental notions in R 2, distance. Letting (x 1,

More information

Singular Value Decomposition in Image Noise Filtering and Reconstruction

Singular Value Decomposition in Image Noise Filtering and Reconstruction Georgia State University ScholarWorks @ Georgia State University Mathematics Theses Department of Mathematics and Statistics 4-22-2008 Singular Value Decomposition in Image Noise Filtering and Reconstruction

More information

We use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write

We use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write 1 MATH FACTS 11 Vectors 111 Definition We use the overhead arrow to denote a column vector, ie, a number with a direction For example, in three-space, we write The elements of a vector have a graphical

More information

Lecture Notes 5: Multiresolution Analysis

Lecture Notes 5: Multiresolution Analysis Optimization-based data analysis Fall 2017 Lecture Notes 5: Multiresolution Analysis 1 Frames A frame is a generalization of an orthonormal basis. The inner products between the vectors in a frame and

More information

Downloaded 06/11/15 to Redistribution subject to SIAM license or copyright; see

Downloaded 06/11/15 to Redistribution subject to SIAM license or copyright; see SIAM J. SCI. COMPUT. Vol. 37, No. 2, pp. B332 B359 c 2015 Society for Industrial and Applied Mathematics A FRAMEWORK FOR REGULARIZATION VIA OPERATOR APPROXIMATION JULIANNE M. CHUNG, MISHA E. KILMER, AND

More information

Inverse problem and optimization

Inverse problem and optimization Inverse problem and optimization Laurent Condat, Nelly Pustelnik CNRS, Gipsa-lab CNRS, Laboratoire de Physique de l ENS de Lyon Decembre, 15th 2016 Inverse problem and optimization 2/36 Plan 1. Examples

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J Hefferon E-mail: kapitula@mathunmedu Prof Kapitula, Spring

More information

Efficient Computational Approaches for Multiframe Blind Deconvolution

Efficient Computational Approaches for Multiframe Blind Deconvolution Master s Thesis Efficient Computational Approaches for Multiframe Blind Deconvolution Helen Schomburg Advisors Prof. Dr. Bernd Fischer Institute of Mathematics and Image Computing University of Lübeck

More information

Newton s method for obtaining the Regularization Parameter for Regularized Least Squares

Newton s method for obtaining the Regularization Parameter for Regularized Least Squares Newton s method for obtaining the Regularization Parameter for Regularized Least Squares Rosemary Renaut, Jodi Mead Arizona State and Boise State International Linear Algebra Society, Cancun Renaut and

More information

Linear Least-Squares Data Fitting

Linear Least-Squares Data Fitting CHAPTER 6 Linear Least-Squares Data Fitting 61 Introduction Recall that in chapter 3 we were discussing linear systems of equations, written in shorthand in the form Ax = b In chapter 3, we just considered

More information

The Global Krylov subspace methods and Tikhonov regularization for image restoration

The Global Krylov subspace methods and Tikhonov regularization for image restoration The Global Krylov subspace methods and Tikhonov regularization for image restoration Abderrahman BOUHAMIDI (joint work with Khalide Jbilou) Université du Littoral Côte d Opale LMPA, CALAIS-FRANCE bouhamidi@lmpa.univ-littoral.fr

More information

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},

More information

Lecture 7 January 26, 2016

Lecture 7 January 26, 2016 MATH 262/CME 372: Applied Fourier Analysis and Winter 26 Elements of Modern Signal Processing Lecture 7 January 26, 26 Prof Emmanuel Candes Scribe: Carlos A Sing-Long, Edited by E Bates Outline Agenda:

More information

Digital Image Processing

Digital Image Processing Digital Image Processing 2D SYSTEMS & PRELIMINARIES Hamid R. Rabiee Fall 2015 Outline 2 Two Dimensional Fourier & Z-transform Toeplitz & Circulant Matrices Orthogonal & Unitary Matrices Block Matrices

More information

σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2

σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2 HE SINGULAR VALUE DECOMPOSIION he SVD existence - properties. Pseudo-inverses and the SVD Use of SVD for least-squares problems Applications of the SVD he Singular Value Decomposition (SVD) heorem For

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 3 Review of Matrix Algebra Vectors and matrices are essential for modern analysis of systems of equations algebrai, differential, functional, etc In this

More information

Hands-on Matrix Algebra Using R

Hands-on Matrix Algebra Using R Preface vii 1. R Preliminaries 1 1.1 Matrix Defined, Deeper Understanding Using Software.. 1 1.2 Introduction, Why R?.................... 2 1.3 Obtaining R.......................... 4 1.4 Reference Manuals

More information

Linear Least Squares. Using SVD Decomposition.

Linear Least Squares. Using SVD Decomposition. Linear Least Squares. Using SVD Decomposition. Dmitriy Leykekhman Spring 2011 Goals SVD-decomposition. Solving LLS with SVD-decomposition. D. Leykekhman Linear Least Squares 1 SVD Decomposition. For any

More information

Linear Algebra and Matrix Inversion

Linear Algebra and Matrix Inversion Jim Lambers MAT 46/56 Spring Semester 29- Lecture 2 Notes These notes correspond to Section 63 in the text Linear Algebra and Matrix Inversion Vector Spaces and Linear Transformations Matrices are much

More information

Background Mathematics (2/2) 1. David Barber

Background Mathematics (2/2) 1. David Barber Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and

More information

Tensor-Tensor Product Toolbox

Tensor-Tensor Product Toolbox Tensor-Tensor Product Toolbox 1 version 10 Canyi Lu canyilu@gmailcom Carnegie Mellon University https://githubcom/canyilu/tproduct June, 018 1 INTRODUCTION Tensors are higher-order extensions of matrices

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Lecture 5: Numerical Linear Algebra Cho-Jui Hsieh UC Davis April 20, 2017 Linear Algebra Background Vectors A vector has a direction and a magnitude

More information

Statistically-Based Regularization Parameter Estimation for Large Scale Problems

Statistically-Based Regularization Parameter Estimation for Large Scale Problems Statistically-Based Regularization Parameter Estimation for Large Scale Problems Rosemary Renaut Joint work with Jodi Mead and Iveta Hnetynkova March 1, 2010 National Science Foundation: Division of Computational

More information

ECG782: Multidimensional Digital Signal Processing

ECG782: Multidimensional Digital Signal Processing Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Filtering in the Frequency Domain http://www.ee.unlv.edu/~b1morris/ecg782/ 2 Outline Background

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

Name: INSERT YOUR NAME HERE. Due to dropbox by 6pm PDT, Wednesday, December 14, 2011

Name: INSERT YOUR NAME HERE. Due to dropbox by 6pm PDT, Wednesday, December 14, 2011 AMath 584 Name: INSERT YOUR NAME HERE Take-home Final UWNetID: INSERT YOUR NETID Due to dropbox by 6pm PDT, Wednesday, December 14, 2011 The main part of the assignment (Problems 1 3) is worth 80 points.

More information

Matrix & Linear Algebra

Matrix & Linear Algebra Matrix & Linear Algebra Jamie Monogan University of Georgia For more information: http://monogan.myweb.uga.edu/teaching/mm/ Jamie Monogan (UGA) Matrix & Linear Algebra 1 / 84 Vectors Vectors Vector: A

More information

Linear Inverse Problems. A MATLAB Tutorial Presented by Johnny Samuels

Linear Inverse Problems. A MATLAB Tutorial Presented by Johnny Samuels Linear Inverse Problems A MATLAB Tutorial Presented by Johnny Samuels What do we want to do? We want to develop a method to determine the best fit to a set of data: e.g. The Plan Review pertinent linear

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

Scientific Computing: Dense Linear Systems

Scientific Computing: Dense Linear Systems Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)

More information

Linear Algebra Review. Fei-Fei Li

Linear Algebra Review. Fei-Fei Li Linear Algebra Review Fei-Fei Li 1 / 51 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector

More information

Tikhonov Regularization of Large Symmetric Problems

Tikhonov Regularization of Large Symmetric Problems NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 11 [Version: 2000/03/22 v1.0] Tihonov Regularization of Large Symmetric Problems D. Calvetti 1, L. Reichel 2 and A. Shuibi

More information

Singular Value Decomposition (SVD)

Singular Value Decomposition (SVD) School of Computing National University of Singapore CS CS524 Theoretical Foundations of Multimedia More Linear Algebra Singular Value Decomposition (SVD) The highpoint of linear algebra Gilbert Strang

More information

Blind Image Deconvolution Using The Sylvester Matrix

Blind Image Deconvolution Using The Sylvester Matrix Blind Image Deconvolution Using The Sylvester Matrix by Nora Abdulla Alkhaldi A thesis submitted to the Department of Computer Science in conformity with the requirements for the degree of PhD Sheffield

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Logistics Notes for 2016-08-26 1. Our enrollment is at 50, and there are still a few students who want to get in. We only have 50 seats in the room, and I cannot increase the cap further. So if you are

More information

Filtering and Edge Detection

Filtering and Edge Detection Filtering and Edge Detection Local Neighborhoods Hard to tell anything from a single pixel Example: you see a reddish pixel. Is this the object s color? Illumination? Noise? The next step in order of complexity

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

1. Introduction. We consider linear discrete ill-posed problems of the form

1. Introduction. We consider linear discrete ill-posed problems of the form AN EXTRAPOLATED TSVD METHOD FOR LINEAR DISCRETE ILL-POSED PROBLEMS WITH KRONECKER STRUCTURE A. BOUHAMIDI, K. JBILOU, L. REICHEL, AND H. SADOK Abstract. This paper describes a new numerical method for the

More information

Introduction. Chapter One

Introduction. Chapter One Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and

More information

COMPUTATIONAL METHODS IN MRI: MATHEMATICS

COMPUTATIONAL METHODS IN MRI: MATHEMATICS COMPUTATIONAL METHODS IN MATHEMATICS Imaging Sciences-KCL November 20, 2008 OUTLINE 1 MATRICES AND LINEAR TRANSFORMS: FORWARD OUTLINE 1 MATRICES AND LINEAR TRANSFORMS: FORWARD 2 LINEAR SYSTEMS: INVERSE

More information

Notes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T.

Notes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T. Notes on singular value decomposition for Math 54 Recall that if A is a symmetric n n matrix, then A has real eigenvalues λ 1,, λ n (possibly repeated), and R n has an orthonormal basis v 1,, v n, where

More information

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding

More information

Tutorial on Principal Component Analysis

Tutorial on Principal Component Analysis Tutorial on Principal Component Analysis Copyright c 1997, 2003 Javier R. Movellan. This is an open source document. Permission is granted to copy, distribute and/or modify this document under the terms

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice 3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is

More information

Singular Value Decompsition

Singular Value Decompsition Singular Value Decompsition Massoud Malek One of the most useful results from linear algebra, is a matrix decomposition known as the singular value decomposition It has many useful applications in almost

More information

Main matrix factorizations

Main matrix factorizations Main matrix factorizations A P L U P permutation matrix, L lower triangular, U upper triangular Key use: Solve square linear system Ax b. A Q R Q unitary, R upper triangular Key use: Solve square or overdetrmined

More information

MATH36001 Generalized Inverses and the SVD 2015

MATH36001 Generalized Inverses and the SVD 2015 MATH36001 Generalized Inverses and the SVD 201 1 Generalized Inverses of Matrices A matrix has an inverse only if it is square and nonsingular. However there are theoretical and practical applications

More information

Digital Image Processing. Filtering in the Frequency Domain

Digital Image Processing. Filtering in the Frequency Domain 2D Linear Systems 2D Fourier Transform and its Properties The Basics of Filtering in Frequency Domain Image Smoothing Image Sharpening Selective Filtering Implementation Tips 1 General Definition: System

More information

A fast algorithm of two-level banded Toeplitz systems of linear equations with application to image restoration

A fast algorithm of two-level banded Toeplitz systems of linear equations with application to image restoration NTMSCI 5, No. 2, 277-283 (2017) 277 New Trends in Mathematical Sciences http://dx.doi.org/ A fast algorithm of two-level banded Toeplitz systems of linear equations with application to image restoration

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Statistics 202: Data Mining. c Jonathan Taylor. Week 2 Based in part on slides from textbook, slides of Susan Holmes. October 3, / 1

Statistics 202: Data Mining. c Jonathan Taylor. Week 2 Based in part on slides from textbook, slides of Susan Holmes. October 3, / 1 Week 2 Based in part on slides from textbook, slides of Susan Holmes October 3, 2012 1 / 1 Part I Other datatypes, preprocessing 2 / 1 Other datatypes Document data You might start with a collection of

More information

Chapter 8 Structured Low Rank Approximation

Chapter 8 Structured Low Rank Approximation Chapter 8 Structured Low Rank Approximation Overview Low Rank Toeplitz Approximation Low Rank Circulant Approximation Low Rank Covariance Approximation Eculidean Distance Matrix Approximation Approximate

More information