Mathematical Beer Goggles or The Mathematics of Image Processing


1 Mathematical Beer Goggles or The Mathematics of Image Processing. Department of Mathematical Sciences, University of Bath. Postgraduate Seminar Series, University of Bath, 12th February 2008.

2 Outline

3 Outline

4 Outline

5 Grayscale. A grayscale image is stored as a single matrix X of pixel intensities; in MATLAB it is displayed with imagesc(X), colormap(gray).

6 MATLAB image (figure).

7 RGB. A colour image is stored as a three-dimensional array whose slices X(:, :, 1), X(:, :, 2), X(:, :, 3) hold the red, green and blue channels.

8 MATLAB image input/output: X = imread('pic.jpg'), imwrite(X, 'pic.jpg').
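A minimal MATLAB sketch of this read/display/write workflow (not part of the original slides); the file name pic.jpg is a placeholder, and rgb2gray assumes the Image Processing Toolbox is available.

% Read an image, convert it to a grayscale matrix of doubles, display it, save it.
X = imread('pic.jpg');          % m-by-n-by-3 uint8 array for an RGB image
Xgray = double(rgb2gray(X));    % single m-by-n matrix of grayscale intensities
imagesc(Xgray), colormap(gray), axis image
imwrite(uint8(Xgray), 'pic_gray.jpg');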

9 Outline

10 The singular value decomposition (SVD). Theorem (Existence and Uniqueness). Let X ∈ C^{m×n} with m ≥ n. Then

X [v_1 v_2 ... v_n] = [u_1 u_2 ... u_m] Σ,  i.e.  X = U Σ V^T,

where U^T U = I (the columns u_1, ..., u_m of U are the left singular vectors), V^T V = I (the columns v_1, ..., v_n of V are the right singular vectors), and Σ ∈ R^{m×n} consists of the diagonal block diag(σ_1, ..., σ_n) stacked on a zero block; the σ_i are the singular values, ordered so that σ_1 ≥ σ_2 ≥ ... ≥ σ_n ≥ 0.

11 Low-rank approximations. Theorem (The rank of a matrix). The rank of X is r, the number of nonzero singular values, in X = U Σ V^T = U diag(σ_1, σ_2, ..., σ_r, 0, ..., 0) V^T.

12 Low-rank approximations. Theorem (The rank of a matrix). The rank of X is r, the number of nonzero singular values, in X = U Σ V^T = U diag(σ_1, σ_2, ..., σ_r, 0, ..., 0) V^T. Theorem (Another representation). X is the sum of r rank-one matrices:

X = ∑_{j=1}^{r} σ_j u_j v_j^T.

13 Low-rank approximations. Theorem. For any ν with 0 ≤ ν ≤ r, define

X_ν = ∑_{j=1}^{ν} σ_j u_j v_j^T.

Then

‖X − X_ν‖_2 = inf_{B ∈ C^{m×n}, rank(B) ≤ ν} ‖X − B‖_2 = σ_{ν+1}.
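A minimal MATLAB sketch (not from the slides) of the rank-ν approximation X_ν and of the 2-norm error claimed by the theorem; the random matrix stands in for an image.

% Build the rank-nu approximation from the truncated SVD and check the error.
m = 60; n = 45; nu = 5;
X = rand(m, n);                                  % stand-in for an image matrix
[U, S, V] = svd(X);                              % X = U*S*V'
Xnu = U(:, 1:nu) * S(1:nu, 1:nu) * V(:, 1:nu)';  % sum of the first nu rank-one terms
fprintf('||X - X_nu||_2 = %g, sigma_(nu+1) = %g\n', norm(X - Xnu, 2), S(nu+1, nu+1));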

14 Low-rank approximations. Proof (m = n).

‖X − X_ν‖_2 = ‖∑_{j=ν+1}^{r} σ_j u_j v_j^T‖_2 = ‖U diag(0, ..., 0, σ_{ν+1}, ..., σ_n) V^T‖_2 = σ_{ν+1}.

It remains to show that there is no closer rank-ν matrix to X.

15 Low-rank approximations. Proof (m = n).

‖X − X_ν‖_2 = ‖∑_{j=ν+1}^{r} σ_j u_j v_j^T‖_2 = ‖U diag(0, ..., 0, σ_{ν+1}, ..., σ_n) V^T‖_2 = σ_{ν+1}.

It remains to show that there is no closer rank-ν matrix to X. Let B have rank ν. The null space of B has dimension n − ν and span{v_1, ..., v_{ν+1}} has dimension ν + 1, so the two subspaces intersect nontrivially. Let h be a unit vector in their intersection:

‖X − B‖_2^2 ≥ ‖(X − B)h‖_2^2 = ‖Xh‖_2^2 = ‖U Σ V^T h‖_2^2 = ‖Σ (V^T h)‖_2^2 ≥ σ_{ν+1}^2 ‖V^T h‖_2^2 = σ_{ν+1}^2.

16 Example (figure). m = 604, n = 453, mn = 273612. Compression ratio c := (m + n)ν / (mn).
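The compression ratios quoted on the following slides can be reproduced with this short MATLAB computation (a sketch, not from the slides): a rank-ν approximation stores roughly ν(m + n) numbers instead of mn.

% Compression ratio c = (m + n)*nu / (m*n) for the ranks used below.
m = 604; n = 453;
nu = [1 2 3 4 5 10 20 30 40 60 80];
c = (m + n) * nu / (m * n);
disp([nu; c])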

17 Rank-1 approximation (figure). m = 604, n = 453, m + n = 1057. Compression ratio c ≈ 3.86e-03.

18 Rank-2 approximation (figure). Compression ratio c ≈ 7.73e-03.

19 Rank-3 approximation (figure). Compression ratio c ≈ 1.16e-02.

20 Rank-4 approximation (figure). Compression ratio c ≈ 1.55e-02.

21 Rank-5 approximation (figure). Compression ratio c ≈ 1.93e-02.

22 Rank-10 approximation (figure). Compression ratio c ≈ 3.86e-02.

23 Rank-20 approximation (figure). Compression ratio c ≈ 7.73e-02.

24 Rank-30 approximation (figure). Compression ratio c ≈ 0.116.

25 Rank-40 approximation (figure). Compression ratio c ≈ 0.155.

26 Rank-60 approximation (figure). Compression ratio c ≈ 0.232.

27 Rank-80 approximation (figure). Compression ratio c ≈ 0.309.

28 Outline

29 Blurred and exact. Let X be the exact image and let B be the blurred image.

30 Blurred and exact. Let X be the exact image and let B be the blurred image. If the blurring of the columns is independent of the blurring of the rows, then

A_c X A_r^T = B,  A_c ∈ R^{m×m},  A_r ∈ R^{n×n}.

31 Blurred and exact. Let X be the exact image and let B be the blurred image. If the blurring of the columns is independent of the blurring of the rows, then

A_c X A_r^T = B,  A_c ∈ R^{m×m},  A_r ∈ R^{n×n}.

First attempt at recovering X: X_Naive = A_c^{-1} B A_r^{-T}.
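A minimal MATLAB sketch of this naive attempt (not from the slides); the Gaussian blurring matrices, the toy image and the noise level are illustrative assumptions, not the data used in the talk.

% Separable blur, then the naive inversion X_Naive = Ac^{-1} * B * Ar^{-T}.
m = 64; n = 64; s = 2;                   % image size and blur width (assumptions)
[i, j] = meshgrid(1:m, 1:m);
Ac = exp(-(i - j).^2 / (2*s^2));         % blur down the columns
Ac = diag(1 ./ sum(Ac, 2)) * Ac;         % rows sum to one
[i, j] = meshgrid(1:n, 1:n);
Ar = exp(-(i - j).^2 / (2*s^2));         % blur along the rows
Ar = diag(1 ./ sum(Ar, 2)) * Ar;
X = double(rand(m, n) > 0.9);            % a toy "exact" image
B = Ac * X * Ar' + 1e-3 * randn(m, n);   % blurred image plus a little noise E
Xnaive = Ac \ B / Ar';                   % X_Naive = Ac^{-1} B Ar^{-T}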

32 First attempt at deblurring (figure).

33 X_Naive = A_c^{-1} B A_r^{-T} (figure).

34 What is the problem? A noisy blurred image:

B = B_exact + E = A_c X A_r^T + E,

and therefore

X_Naive = X + A_c^{-1} E A_r^{-T}.

35 What is the problem? A noisy blurred image:

B = B_exact + E = A_c X A_r^T + E,

and therefore

X_Naive = X + A_c^{-1} E A_r^{-T}.

Error. The naive solution satisfies

‖X_Naive − X‖_F / ‖X‖_F ≤ cond(A_c) cond(A_r) ‖E‖_F / ‖B‖_F.

36 Deblurring using a general model. The blurring process as a linear model: we assume the blurring process is linear, i.e. that

x = vec(X) = (x_1, ..., x_N)^T ∈ R^N  and  b = vec(B) = (b_1, ..., b_N)^T ∈ R^N,  N = mn,

are related by the linear model Ax = b.

37 Deblurring using a general model. The blurring process as a linear model: we assume the blurring process is linear, i.e. that

x = vec(X) = (x_1, ..., x_N)^T ∈ R^N  and  b = vec(B) = (b_1, ..., b_N)^T ∈ R^N,  N = mn,

are related by the linear model Ax = b. With b = b_exact + e,

x_Naive = A^{-1} b = A^{-1} b_exact + A^{-1} e = x + A^{-1} e.

38 Separable two-dimensional blurs: the Kronecker product. If the horizontal and vertical blurring can be separated, then

Ax = b  becomes  A vec(X) = vec(B) = vec(A_c X A_r^T),  with  (A_r ⊗ A_c) vec(X) = vec(A_c X A_r^T),

where

A = A_r ⊗ A_c = [a^r_{11} A_c ... a^r_{1n} A_c ; ... ; a^r_{n1} A_c ... a^r_{nn} A_c],

and the SVD inherits the Kronecker structure:

A = (U_r Σ_r V_r^T) ⊗ (U_c Σ_c V_c^T) = (U_r ⊗ U_c)(Σ_r ⊗ Σ_c)(V_r ⊗ V_c)^T.
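A small MATLAB check of the Kronecker identity above (a sketch, not from the slides); the matrices are random placeholders.

% Verify that (Ar kron Ac) * vec(X) equals vec(Ac * X * Ar').
m = 5; n = 4;
Ac = rand(m, m); Ar = rand(n, n); X = rand(m, n);
lhs = kron(Ar, Ac) * X(:);             % X(:) is vec(X) in MATLAB
rhs = reshape(Ac * X * Ar', [], 1);    % vec(Ac X Ar^T)
norm(lhs - rhs)                        % essentially zero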

39 x_Naive = x + A^{-1} e (figure).

40 Outline

41 Taking bad pictures. Sources of bad pictures: defocus of the camera lens (limitations in the optical system), motion blur, air turbulence, atmospheric blurring.

42 Taking bad pictures. Sources of bad pictures: defocus of the camera lens (limitations in the optical system), motion blur, air turbulence, atmospheric blurring. Noise E: background photons from natural and artificial sources, and a signal represented by a finite number of bits (quantisation error).

43 Modelling the blurring matrix A. A single bright pixel x = e_i gives Ae_i = column i of A. Figure: Point source (single bright pixel). Figure: Point spread function (PSF).

44 Modelling the blurring matrix A. Figure: Motion blur. Figure: Out-of-focus blur, with PSF

p_{ij} = 1/(πr^2)  if (i − k)^2 + (j − l)^2 ≤ r^2,  and  p_{ij} = 0 otherwise,

where (k, l) is the centre of the PSF and r its radius.

45 Modelling the blurring matrix A. Figure: Atmospheric turbulence blur, modelled by the Gaussian PSF

p_{ij} = exp( −(1/2) [i − k, j − l] [s_1^2 ρ^2 ; ρ^2 s_2^2]^{-1} [i − k, j − l]^T )

and the related Moffat PSF

p_{ij} = ( 1 + [i − k, j − l] [s_1^2 ρ^2 ; ρ^2 s_2^2]^{-1} [i − k, j − l]^T )^{−β},

where (k, l) is the centre of the PSF.
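A minimal MATLAB sketch of the Gaussian PSF above (not from the slides); the parameters s1, s2, ρ, the grid size and the centre are illustrative assumptions.

% Evaluate the Gaussian PSF on a pixel grid and normalise it to sum to one.
s1 = 2; s2 = 3; rho = 1;              % assumed parameters
C = [s1^2 rho^2; rho^2 s2^2];         % the matrix from the formula above
k = 16; l = 16;                       % assumed centre of the PSF
[J, I] = meshgrid(1:31, 1:31);        % I = row index i, J = column index j
P = zeros(31);
for idx = 1:numel(P)
    d = [I(idx) - k; J(idx) - l];
    P(idx) = exp(-0.5 * d' * (C \ d));
end
P = P / sum(P(:));                    % PSF sums to one
imagesc(P), axis image, colormap(gray)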

46 Boundary conditions and structured matrix computations. The structure of the matrix A obtained from the PSF P by convolution depends on the boundary conditions:

Zero boundary conditions: block Toeplitz matrix.
Periodic boundary conditions: block circulant matrix.
Reflexive boundary conditions: a sum of block Toeplitz and block Hankel matrices with Toeplitz-plus-Hankel blocks.

47 Spectral filtering: the SVD again. With

A = U Σ V^T = [u_1 u_2 ... u_N] diag(σ_1, σ_2, ..., σ_N) [v_1 v_2 ... v_N]^T

we have

x_Naive = A^{-1} b = V Σ^{-1} U^T b = ∑_{i=1}^{N} (u_i^T b / σ_i) v_i = ∑_{i=1}^{N} (u_i^T b_exact / σ_i) v_i + ∑_{i=1}^{N} (u_i^T e / σ_i) v_i.

48 Spectral filtering. Behaviour of the singular values: σ_i → 0 as i grows; the more blurry the image, the faster the decay rate; cond(A) = σ_1/σ_N.

49 Spectral filtering. Behaviour of the singular values: σ_i → 0 as i grows; the more blurry the image, the faster the decay rate; cond(A) = σ_1/σ_N. The regularised solution: introduce filter factors Φ_i and set

x_filt = ∑_{i=1}^{N} Φ_i (u_i^T b / σ_i) v_i.

50 Two methods. TSVD:

Φ_i = 1 for i = 1, ..., k  and  Φ_i = 0 for i = k + 1, ..., N.

51 Two methods. TSVD:

Φ_i = 1 for i = 1, ..., k  and  Φ_i = 0 for i = k + 1, ..., N.

Tikhonov regularisation:

Φ_i = σ_i^2 / (σ_i^2 + α^2),

where α > 0 is a regularisation parameter. This choice of filter factors yields the solution of the minimisation problem

min_x { ‖b − Ax‖_2^2 + α^2 ‖x‖_2^2 }.
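A minimal MATLAB sketch of the two filters above on a 1-D deblurring test problem (not from the slides); the blur matrix, signal, noise level, truncation index k and parameter α are illustrative assumptions.

% Build a 1-D Gaussian blurring problem and apply TSVD and Tikhonov filtering.
N = 200; s = 3;
[i, j] = meshgrid(1:N, 1:N);
A = exp(-(i - j).^2 / (2*s^2));             % 1-D Gaussian blurring matrix
A = diag(1 ./ sum(A, 2)) * A;
xexact = double(abs((1:N)' - 100) < 30);    % a simple "exact" signal
b = A * xexact + 1e-4 * randn(N, 1);        % blurred, noisy data
[U, S, V] = svd(A); sig = diag(S);
coeff = (U' * b) ./ sig;                    % u_i' * b / sigma_i
k = 40; alpha = 1e-3;
Phi_tsvd = [ones(k, 1); zeros(N - k, 1)];   % TSVD filter factors
Phi_tikh = sig.^2 ./ (sig.^2 + alpha^2);    % Tikhonov filter factors
x_tsvd = V * (Phi_tsvd .* coeff);           % filtered solutions
x_tikh = V * (Phi_tikh .* coeff);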

52 Regularisation and perturbation errors. Regularised solution:

x_filt = V Φ Σ^{-1} U^T b = V Φ Σ^{-1} U^T A x_exact + V Φ Σ^{-1} U^T e = V Φ V^T x_exact + V Φ Σ^{-1} U^T e,

so

x_exact − x_filt = (I − V Φ V^T) x_exact − V Φ Σ^{-1} U^T e,

where the first term is the regularisation error and the second is the perturbation error.

53 Regularisation and perturbation errors. Regularised solution:

x_filt = V Φ Σ^{-1} U^T b = V Φ Σ^{-1} U^T A x_exact + V Φ Σ^{-1} U^T e = V Φ V^T x_exact + V Φ Σ^{-1} U^T e,

so

x_exact − x_filt = (I − V Φ V^T) x_exact − V Φ Σ^{-1} U^T e,

where the first term is the regularisation error and the second is the perturbation error. Oversmoothing and undersmoothing: a small regularisation error with a large perturbation error leads to an undersmoothed solution; a large regularisation error with a small perturbation error leads to an oversmoothed solution.

54 Regularisation and perturbation errors. Parameter choice methods: the discrepancy principle, generalised cross validation, the L-curve criterion.
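As one illustration of a parameter choice rule, here is a minimal sketch of the discrepancy principle for Tikhonov regularisation, continuing the 1-D MATLAB sketch above (A, b, sig, coeff, V, N); the noise-level estimate and the safety factor are assumptions, not part of the talk.

% Discrepancy principle (a sketch): pick the largest alpha whose residual
% satisfies ||b - A*x_alpha||_2 <= tau * ||e||_2, assuming a known noise level.
tau = 1.1;                            % safety factor (an assumption)
noise_norm = 1e-4 * sqrt(N);          % estimate of ||e||_2 for the example above
alpha_dp = NaN;
for a = logspace(-6, 0, 100)
    Phi = sig.^2 ./ (sig.^2 + a^2);
    x_a = V * (Phi .* coeff);
    if norm(b - A * x_a) <= tau * noise_norm
        alpha_dp = a;                 % logspace is increasing, so this keeps the largest feasible alpha
    end
end
alpha_dp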

55 A second attempt at deblurring (figure).

56 X_Naive = A_c^{-1} B A_r^{-T} (figure).

57 Filtered solution using TSVD. Figure: k = 4801. Figure: k = 6630.

58 Tikhonov regularisation. Figures: filtered solutions for two values of α.

59 P. C. Hansen, J. G. Nagy, and D. P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, SIAM, Philadelphia, 1st ed., 2006.
