
ALBERT-LUDWIGS-UNIVERSITÄT FREIBURG
INSTITUT FÜR INFORMATIK
Lehrstuhl für Mustererkennung und Bildverarbeitung
Prof. Dr.-Ing. Hans Burkhardt
Georges-Köhler-Allee Geb. 05, Zi 01-09
D-79110 Freiburg
Tel. 0761-03 - 860

Algorithmen zur digitalen Bildverarbeitung I

Pseudoinverse: A draft tutorial

Author: D. Katsoulas

1 Singular Value Decomposition

1.1 Definition

For every real matrix A of dimensions m × n there exist two orthogonal matrices U = [u_1, ..., u_m] ∈ R^{m×m} and V = [v_1, ..., v_n] ∈ R^{n×n} such that

    Σ = U^T A V = diag(σ_1, σ_2, ..., σ_p) ∈ R^{m×n},   (1)

where p = min{m, n} and σ_1 ≥ σ_2 ≥ ... ≥ σ_p ≥ 0, or equivalently (due to the orthogonality of U and V):

    A = U Σ V^T.   (2)

The σ_i, i = 1..p, are the singular values of A, and this way of decomposing A according to eq. (2) is widely known as the Singular Value Decomposition (or SVD) of the matrix A.

1.2 Properties

The SVD reveals a lot about the internal structure of a matrix A. If we define σ_r as the smallest non-zero diagonal element of Σ, that is,

    σ_1 ≥ σ_2 ≥ ... ≥ σ_r > σ_{r+1} = ... = σ_p = 0,   (3)

then the following hold:

    rank(A) = r,   (4)

    N(A) = span{v_{r+1}, ..., v_n},   (5)

    R(A) = span{u_1, ..., u_r},   (6)

where N(A) and R(A) denote the null space and the range of the matrix A, respectively.
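The relations in eqs. (4)-(6) can be read off directly from a computed SVD. The following is a minimal sketch (not part of the original tutorial) using NumPy; the example matrix and the rank tolerance are illustrative choices only.

```python
import numpy as np

# Example matrix of rank 2 (the third row is the sum of the first two);
# chosen only for illustration.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])

# Full SVD as in eq. (1): U (m x m) and Vt = V^T (n x n) are orthogonal,
# s holds the singular values sigma_1 >= ... >= sigma_p.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Eq. (4): rank(A) = r = number of singular values above a small tolerance.
tol = max(A.shape) * np.finfo(float).eps * s[0]
r = int(np.sum(s > tol))

# Eq. (5): N(A) = span{v_{r+1}, ..., v_n}  (last n - r right singular vectors).
null_basis = Vt[r:].T
# Eq. (6): R(A) = span{u_1, ..., u_r}  (first r left singular vectors).
range_basis = U[:, :r]

print("rank(A):", r)
print("A @ null_basis ~ 0:", np.allclose(A @ null_basis, 0.0))
print("columns of A lie in span(range_basis):",
      np.allclose(range_basis @ (range_basis.T @ A), A))
```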

The most appealing property of the SVD is expressed by the following theorem:

Theorem: Given the linear system of equations

    Ax = y,   (7)

and writing b for the right-hand side y, the vector x_LS ∈ R^n given by

    x_LS = Σ_{i=1}^{r} (u_i^T b / σ_i) v_i   (8)

satisfies the following:

    ρ_LS = ||A x_LS - b||_2 = min_x ||Ax - b||_2,   (9)

    ρ_LS^2 = ||A x_LS - b||_2^2 = Σ_{i=r+1}^{m} (u_i^T b)^2,   (10)

    for every x_1 ∈ R^n with ||A x_1 - b||_2 = min_x ||Ax - b||_2:  ||x_LS||_2 ≤ ||x_1||_2.   (11)

Proof: Due to the orthogonality of U and V, we have that ||U^T z||_2 = ||z||_2 for every vector z, and V V^T = I. If so, then:

    ||Ax - b||_2 = ||U^T (A (V V^T) x - b)||_2 = ||(U^T A V) V^T x - U^T b||_2.   (12)

Now setting α = V^T x and using eq. (1), eq. (12) becomes:

    ||Ax - b||_2 = ||Σ α - U^T b||_2,   (13)

but since

    Σ α - U^T b = (σ_1 α_1 - u_1^T b, σ_2 α_2 - u_2^T b, ..., σ_r α_r - u_r^T b, -u_{r+1}^T b, ..., -u_m^T b)^T ∈ R^{m×1},   (14)

we can write the squared 2-norm of the vector Ax - b as

    ||Ax - b||_2^2 = Σ_{i=1}^{r} (σ_i α_i - u_i^T b)^2 + Σ_{i=r+1}^{m} (u_i^T b)^2.   (15)

Our task is to find the vector x which minimizes ||Ax - b||_2. Since x = V α, we can equivalently look for the α minimizing ||Ax - b||_2. From eq. (15) we see that ||Ax - b||_2 is minimized if α_i = u_i^T b / σ_i for i = 1..r. The elements α_i for i = r+1..n do not affect the minimization. However, if we set them to zero, we get the vector α which minimizes ||Ax - b||_2 on one hand, and which has the smallest norm among all minimizers on the other. Since x = V α and ||x||_2 = ||V α||_2 = ||α||_2, the solution x_LS shown in eq. (8) minimizes ρ_LS = ||Ax - b||_2, so that eq. (9) is satisfied, and it has the smallest 2-norm of all minimizers of ρ_LS, which means that eq. (11) is satisfied as well. By substituting the α_i computed above into eq. (15), we get the residual value appearing in eq. (10).
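As a quick numerical illustration of the theorem (a sketch, not part of the original text), the following compares x_LS computed from eq. (8) with NumPy's np.linalg.lstsq, which also returns the minimum-norm least-squares solution, and checks the residual formula (10). The rank-deficient test matrix and right-hand side are arbitrary.

```python
import numpy as np

# Rank-deficient example (m = 4, n = 3, rank 2) and an arbitrary
# right-hand side b that does not lie in R(A).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 0.0, 1.0])

U, s, Vt = np.linalg.svd(A, full_matrices=True)
tol = max(A.shape) * np.finfo(float).eps * s[0]
r = int(np.sum(s > tol))

# Eq. (8): x_LS = sum_{i=1}^{r} (u_i^T b / sigma_i) v_i
x_ls = sum((U[:, i] @ b) / s[i] * Vt[i] for i in range(r))

# Eqs. (9), (11): np.linalg.lstsq also returns the minimum-norm least-squares
# solution, so the two vectors should agree.
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print("matches lstsq:", np.allclose(x_ls, x_ref))

# Eq. (10): squared residual = sum_{i=r+1}^{m} (u_i^T b)^2
res_sq = np.linalg.norm(A @ x_ls - b) ** 2
res_formula = sum((U[:, i] @ b) ** 2 for i in range(r, A.shape[0]))
print("residual formula holds:", np.isclose(res_sq, res_formula))
```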

2 The Pseudoinverse matrix

2.1 Definition

Given the matrix A, its pseudoinverse A^+ is the matrix of dimensions n × m given by

    A^+ = V Σ^+ U^T.   (16)

In the preceding equation, the matrix Σ^+ is a diagonal matrix of the form

    Σ^+ = diag(1/σ_1, 1/σ_2, ..., 1/σ_r, 0, ..., 0) ∈ R^{n×m}.   (17)

If so, then:

    A^+ b = (V Σ^+ U^T) b = Σ_{i=1}^{r} (u_i^T b / σ_i) v_i = x_LS.   (18)

(A numerical sketch of this construction is given after the properties in Section 2.3.)

2.2 Insight

[Figure 1: Geometric interpretation of pseudoinverse functionality. The figure shows the X space with the subspaces N(A) and N(A)^⊥ and the solution line X_s, the Y space with the subspaces R(A) and R(A)^⊥, and the vectors x_1 = A^+ b, x_3, x_4 = A^+ d = A^+ A x_3, c = A A^+ b, b, d and e referred to in the text below.]

Let us suppose we are interested in solving the linear system (7). To ease the illustration of the problem, we suppose that A is a 2 × 2 matrix. We also assume that dim(N(A)) = 1. If X is the vector space to which x belongs, then, since dim(X) = dim(R(A)) + dim(N(A)), we deduce that dim(R(A)) = 1 for our example. We suppose as well that Y is the vector space to which the vector y belongs. The two-dimensional vector spaces X and Y are depicted in Fig. 1. The one-dimensional subspaces N(A) and R(A) of X and Y, respectively, are depicted as lines in the figure. Note that in the figure, N(A)^⊥ and R(A)^⊥ denote the orthogonal complements of N(A) and R(A), respectively. For solving (7), we consider the following two cases:

y = d and d ∈ R(A): The system has an infinite number of solutions. If x_3 is a solution of the system, then x_s = x_3 + x_N, where x_N ∈ N(A), is a solution as well. All those solutions belong to a one-dimensional translate of N(A), illustrated by the line X_s in the figure. This line passes through x_3 and is parallel to the line defined by N(A). As implied by equations (18) and (11), of all the solutions of the system, the solution delivered by the pseudoinverse, x_4 = A^+ d, is the one with the minimum norm, that is, the minimum distance from the origin of the X space. The distance of x_4 to the origin is minimized only if x_4 belongs to the orthogonal complement of N(A), that is, x_4 ∈ N(A)^⊥. Note that since x_3 is a solution of Ax = d, we have Ax_3 = d. In addition, x_4 = A^+ d. Thereby, x_4 = A^+ A x_3. This means that the matrix A^+ A projects vectors of the X space onto N(A)^⊥. This projection is orthogonal, since N(A)^⊥ is orthogonal to the line X_s.

y = b and b ∉ R(A): In this case the system has no solution. However, as implied by equations (18) and (9), the vector delivered by the pseudoinverse, x_1 = A^+ b, when mapped via A to the Y space, gives a vector c = A x_1 on R(A) such that the distance of c to b is minimized. Since c = A x_1 and x_1 = A^+ b, we get c = A A^+ b. Since the vector c is the element of R(A) with minimum distance to b, it is the orthogonal projection of b onto R(A). The matrix A A^+ therefore defines an orthogonal projection of a vector of the Y space onto R(A).

Note that in every case the matrix A^+ takes vectors of the Y space and maps them to vectors of N(A)^⊥. This implies that R(A^+) = N(A)^⊥. Those vectors of N(A)^⊥ solve the system Ax = y_R, where y_R is the orthogonal projection of y onto R(A), on one hand, and have minimum distance from the origin of the X space on the other. Let us now consider the case Ax = e with e ∈ R(A)^⊥. The orthogonal projection of e onto R(A) is the null vector in this case. The vector which solves the system Ax = 0 with minimum norm is the null vector. In other words, the pseudoinverse maps vectors of R(A)^⊥ to the null vector. That is, N(A^+) = R(A)^⊥.

Based on these observations, we can conclude the following regarding the properties of the pseudoinverse matrix:

2.3 Properties

1. R(A^+) = N(A)^⊥. Since N(A)^⊥ = R(A^T) (why?), we finally get R(A^+) = N(A)^⊥ = R(A^T).

2. N(A^+) = R(A)^⊥. Since R(A)^⊥ = N(A^T) (why?), we finally get N(A^+) = R(A)^⊥ = N(A^T).

3. A^+ A defines the orthogonal projection of an arbitrary vector of the X space onto N(A)^⊥. This has the following consequences:

   (a) (A^+ A)^T = A^+ A (why?)

   (b) The matrix I - A^+ A defines the orthogonal projection of the vector onto N(A).

   (c) Since (I - A^+ A)x ∈ N(A) for every x, we have A(I - A^+ A) = 0, or A A^+ A = A.

4. A A^+ defines the orthogonal projection of an arbitrary vector of the Y space onto R(A). This has the following consequences:

   (a) (A A^+)^T = A A^+ (why?)

   (b) The matrix I - A A^+ defines the orthogonal projection of the vector onto R(A)^⊥.

   (c) We have already shown that R(A)^⊥ = N(A^+). Since (I - A A^+)y ∈ R(A)^⊥ for every y, it follows that A^+(I - A A^+) = 0, or A^+ A A^+ = A^+.
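The construction in eqs. (16)-(18) and the projection properties above can be verified numerically. Below is a minimal NumPy sketch (illustrative only; the test matrix is an arbitrary rank-deficient example, and np.linalg.pinv serves as the reference implementation):

```python
import numpy as np

# Arbitrary rank-deficient test matrix (m = 4, n = 3, rank 2).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])
m, n = A.shape

U, s, Vt = np.linalg.svd(A, full_matrices=True)
tol = max(m, n) * np.finfo(float).eps * s[0]
r = int(np.sum(s > tol))

# Eq. (17): Sigma^+ is n x m with 1/sigma_i on the first r diagonal entries.
Sigma_plus = np.zeros((n, m))
Sigma_plus[:r, :r] = np.diag(1.0 / s[:r])

# Eq. (16): A^+ = V Sigma^+ U^T; compared against np.linalg.pinv.
A_plus = Vt.T @ Sigma_plus @ U.T
print("matches np.linalg.pinv:", np.allclose(A_plus, np.linalg.pinv(A)))

# Properties 3 and 4: A^+ A and A A^+ are symmetric (orthogonal projectors
# onto N(A)^perp and R(A), respectively), and consequences 3(c), 4(c) hold.
P_x = A_plus @ A      # projects the X space onto N(A)^perp
P_y = A @ A_plus      # projects the Y space onto R(A)
print("(A^+ A)^T = A^+ A:", np.allclose(P_x, P_x.T))
print("(A A^+)^T = A A^+:", np.allclose(P_y, P_y.T))
print("A A^+ A = A:",       np.allclose(A @ A_plus @ A, A))
print("A^+ A A^+ = A^+:",   np.allclose(A_plus @ A @ A_plus, A_plus))
```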

3 Exercises

Based on what was discussed in the previous sections, show that:

1. (A^T)^+ = (A^+)^T

2. (A^+)^+ = A

3. A^T A A^+ = A^T

4. A^+ A A^T = A^T

5. If r = n, then A^+ = (A^T A)^{-1} A^T

6. If r = m, then A^+ = A^T (A A^T)^{-1}
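These identities can be sanity-checked numerically before proving them (a check, not a proof; this sketch is not part of the original tutorial). It uses random test matrices; exercises 5 and 6 assume full column rank and full row rank, which random Gaussian matrices have with probability one.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))          # generic test matrix
A_plus = np.linalg.pinv(A)

print("1. (A^T)^+ = (A^+)^T:", np.allclose(np.linalg.pinv(A.T), A_plus.T))
print("2. (A^+)^+ = A:",       np.allclose(np.linalg.pinv(A_plus), A))
print("3. A^T A A^+ = A^T:",   np.allclose(A.T @ A @ A_plus, A.T))
print("4. A^+ A A^T = A^T:",   np.allclose(A_plus @ A @ A.T, A.T))

# Exercises 5 and 6: full column rank (r = n) and full row rank (r = m).
B = rng.standard_normal((5, 3))
print("5. r = n:", np.allclose(np.linalg.pinv(B), np.linalg.inv(B.T @ B) @ B.T))
C = rng.standard_normal((3, 5))
print("6. r = m:", np.allclose(np.linalg.pinv(C), C.T @ np.linalg.inv(C @ C.T)))
```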