
1 Least Squares. Tom Lyche, Centre of Mathematics for Applications, Department of Informatics, University of Oslo. October 26, 2010.

2 Linear system

Consider the linear system $Ax = b$, with $A \in \mathbb{C}^{m,n}$, $b \in \mathbb{C}^m$, $x \in \mathbb{C}^n$.
- under-determined ($m < n$): no solution, or an infinite number of solutions;
- square ($m = n$): a unique solution if $A$ is nonsingular; otherwise either no solution or an infinite number of solutions;
- over-determined ($m > n$): either no solution, a unique solution, or an infinite number of solutions. But an over-determined system usually has no solution.

3 The Least Squares Problem

Definition. Given $A \in \mathbb{C}^{m,n}$ and $b \in \mathbb{C}^m$, we call an $x \in \mathbb{C}^n$ which minimizes $\|r(x)\|_2^2 = \|Ax - b\|_2^2$ a least squares solution of $Ax = b$. We set $E(x) := \|Ax - b\|_2^2 = \|r(x)\|_2^2$. To find an $x$ which minimizes $E(x)$ is called the Least Squares Problem (LSQ). Since the square root function is monotone, minimizing $E(x)$ or $\sqrt{E(x)}$ is equivalent.

4 Example 1

Consider the three equations $x_1 = 1$, $x_1 = 1$, $x_1 = 2$, i.e.,
$$A = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \quad x = [x_1], \quad b = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}, \qquad E(x) = \|Ax - b\|_2^2 = (x_1 - 1)^2 + (x_1 - 1)^2 + (x_1 - 2)^2.$$
Setting the first derivative with respect to $x_1$ equal to zero we obtain $2(x_1 - 1) + 2(x_1 - 1) + 2(x_1 - 2) = 0$, so $x_1 = 4/3$, the average of $b_1, b_2, b_3$. The second derivative is positive, so $x_1 = 4/3$ is a global minimum.

5 The Normal Equations

$$E(x) = \|Ax - b\|_2^2 = x^* B x - 2\,\mathrm{Re}(c^* x) + \beta, \qquad B = A^* A, \quad c = A^* b, \quad \beta = b^* b.$$
In the real case the gradient is
$$\nabla E(x) := \Big[ \frac{\partial E}{\partial x_1}, \ldots, \frac{\partial E}{\partial x_n} \Big]^T = 2(Bx - c).$$
Hence $x$ a least squares solution $\Rightarrow Bx - c = 0 \Rightarrow A^* A x = A^* b$ (the Normal Equations). We will show the converse: $A^* A x = A^* b \Rightarrow x$ is a least squares solution.
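
A minimal NumPy sketch of Example 1 solved through the normal equations (NumPy is my choice of language; the slides contain no code):

```python
import numpy as np

# Example 1 as a least squares problem: the equations x1 = 1, x1 = 1, x1 = 2.
A = np.array([[1.0], [1.0], [1.0]])
b = np.array([1.0, 1.0, 2.0])

B = A.T @ A                      # normal equations matrix A^T A, here [[3]]
c = A.T @ b                      # right hand side A^T b, here [4]
x = np.linalg.solve(B, c)        # solve B x = c

print(x)                                       # [1.3333...] = 4/3, the average of b
print(np.linalg.lstsq(A, b, rcond=None)[0])    # a library solver agrees
```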

6 Example 2

We choose $m$ abscissas $t_1, \ldots, t_m$, $n$ functions $\phi_1, \phi_2, \ldots, \phi_n$ defined for $t \in \{t_1, t_2, \ldots, t_m\}$, and $m$ positive numbers $w_1, \ldots, w_m$. We want to find $x = [x_1, x_2, \ldots, x_n]^T$ such that
$$E(x) := \sum_{k=1}^m w_k \Big[ \sum_{j=1}^n x_j \phi_j(t_k) - y_k \Big]^2$$
is as small as possible. Typical examples of functions are polynomials, trigonometric functions, exponential functions, or splines. The numbers $w_k$ are called weights.

7 Example 2 is a Least Squares Problem

With $a_{kj} := w_k^{1/2} \phi_j(t_k)$ and $b_k := w_k^{1/2} y_k$ we get
$$E(x) = \sum_{k=1}^m w_k \Big[ \sum_{j=1}^n x_j \phi_j(t_k) - y_k \Big]^2 = \|Ax - b\|_2^2,$$
$$B = A^T A = \Big[ \sum_{k=1}^m w_k \phi_i(t_k) \phi_j(t_k) \Big]_{i,j=1}^n \in \mathbb{R}^{n,n}, \qquad c = A^T b = \Big[ \sum_{k=1}^m w_k \phi_i(t_k)\, y_k \Big]_{i=1}^n \in \mathbb{R}^n.$$
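
A sketch of how Example 2 maps onto $\min_x \|Ax - b\|_2$ in NumPy; the data, weights, and two-function basis below are made-up illustrations, not from the slides:

```python
import numpy as np

# Weighted fit y_k ~ x1*phi_1(t_k) + x2*phi_2(t_k) with phi_1(t) = 1, phi_2(t) = t.
# Absorb the weights as on the slide: A[k,j] = sqrt(w_k)*phi_j(t_k), b[k] = sqrt(w_k)*y_k,
# so that ||Ax - b||_2^2 equals the weighted sum of squares E(x).
t = np.array([0.0, 1.0, 2.0, 3.0])   # abscissas t_1..t_m (made-up)
y = np.array([0.9, 2.1, 2.9, 4.2])   # data y_1..y_m (made-up)
w = np.array([1.0, 1.0, 4.0, 1.0])   # positive weights w_1..w_m (made-up)

phi = [lambda s: np.ones_like(s), lambda s: s]   # the basis functions
A = np.sqrt(w)[:, None] * np.column_stack([p(t) for p in phi])
b = np.sqrt(w) * y

x = np.linalg.lstsq(A, b, rcond=None)[0]
print(x)    # coefficients minimizing sum_k w_k (x1 + x2*t_k - y_k)^2
```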

8 With $w_i = 1$, $i = 1, \ldots, m$, and $\phi_1(t) = 1$, $\phi_2(t) = t$, ..., $\phi_n(t) = t^{n-1}$, the normal equations for $n = 3$ read
$$\begin{bmatrix} m & \sum t_k & \sum t_k^2 \\ \sum t_k & \sum t_k^2 & \sum t_k^3 \\ \sum t_k^2 & \sum t_k^3 & \sum t_k^4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} \sum y_k \\ \sum t_k y_k \\ \sum t_k^2 y_k \end{bmatrix}.$$
$B_n$ is symmetric positive definite if at least $n$ of the $t_k$ are distinct.

9 Hilbert Matrix

Take $t_i = (i-1)/(m-1)$, $i = 1, \ldots, m$. Then $\frac{1}{m} B_n \approx H_n$, since
$$\frac{1}{m} \sum_{k=1}^m t_k^{\,i+j-2} = \frac{1}{m} \sum_{k=1}^m \Big( \frac{k-1}{m-1} \Big)^{i+j-2} \approx \int_0^1 x^{\,i+j-2}\,dx = \frac{1}{i+j-1}.$$
Here $H_n = \big[ \frac{1}{i+j-1} \big]_{i,j=1}^n$ is the Hilbert matrix, e.g.
$$H_3 = \begin{bmatrix} 1 & 1/2 & 1/3 \\ 1/2 & 1/3 & 1/4 \\ 1/3 & 1/4 & 1/5 \end{bmatrix},$$
and $K_1(H_6)$ is already of order $10^7$. The normal equations are extremely ill-conditioned even for moderate $n$. Remedy: use a different basis for the polynomials.
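
A small NumPy sketch of this blow-up (my choice of $m$ and of the tested $n$ values):

```python
import numpy as np

# Conditioning of the scaled normal-equations matrix (1/m) B_n, which approaches
# the Hilbert matrix H_n for the abscissas t_i = (i-1)/(m-1).
m = 200
t = np.arange(m) / (m - 1.0)
for n in (4, 6, 8, 10):
    A = np.vander(t, n, increasing=True)   # columns 1, t, ..., t^(n-1)
    B = (A.T @ A) / m                      # approximately H_n
    print(n, np.linalg.cond(B))            # condition number blows up with n
```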

10 What is next?
- The Pseudoinverse
- Orthogonal Projections
- LSQ: Existence and Uniqueness
- Numerical Solution Methods
- Perturbation Theory

11 Recall SVD, SVF, and outer product form

$$A = U \Sigma V^* = U_1 \Sigma_1 V_1^* = \sigma_1 u_1 v_1^* + \cdots + \sigma_r u_r v_r^*.$$
Figure: Three forms of the SVD: decomposition ($U$ is $m \times m$, $\Sigma$ is $m \times n$, $V^*$ is $n \times n$), factorization ($U_1$ is $m \times r$, $\Sigma_1$ is $r \times r$, $V_1^*$ is $r \times n$), and outer product form.

12 The Pseudoinverse

Suppose $A = U_1 \Sigma_1 V_1^* \in \mathbb{C}^{m,n}$ is a singular value factorization of $A$. The matrix $A^\dagger := V_1 \Sigma_1^{-1} U_1^*$ is called the pseudo-inverse of $A$. If $A \in \mathbb{C}^{m,n}$ then $A^\dagger \in \mathbb{C}^{n,m}$.
If (1) $ABA = A$, (2) $BAB = B$, (3) $(BA)^* = BA$, and (4) $(AB)^* = AB$, then $B = A^\dagger$. In particular, $A^\dagger$ is independent of the factorization chosen to represent it.
If $A$ is square and nonsingular then $A^\dagger A = A A^\dagger = I$ and $A^\dagger$ is the usual inverse of $A$. Any matrix has a pseudo-inverse, and so $A^\dagger$ is a generalization of the usual inverse.

13 Example

$$A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{bmatrix} \,[2]\, \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix},$$
$$B := A^\dagger = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix} \,[1/2]\, \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} & 0 \end{bmatrix} = \begin{bmatrix} 1/4 & 1/4 & 0 \\ 1/4 & 1/4 & 0 \end{bmatrix}.$$
If we guess a candidate $B$ for $A^\dagger$ and (1) $ABA = A$, (2) $BAB = B$, (3) $(BA)^* = BA$, and (4) $(AB)^* = AB$ hold, then $B = A^\dagger$.
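
A NumPy sketch of this example: build $A^\dagger$ from a singular value factorization and check the four Penrose conditions (the tolerance 1e-12 is my choice):

```python
import numpy as np

# Pseudo-inverse of the 3x2 matrix from the example, via A = U1 Sigma1 V1^*.
A = np.array([[1.0, 1.0], [1.0, 1.0], [0.0, 0.0]])

U, s, Vh = np.linalg.svd(A)                  # full SVD; s = [2, 0]
r = int(np.sum(s > 1e-12))                   # numerical rank, here r = 1
U1, S1, V1h = U[:, :r], np.diag(s[:r]), Vh[:r, :]
Apinv = V1h.T @ np.linalg.inv(S1) @ U1.T     # A^+ = V1 Sigma1^{-1} U1^*

print(Apinv)                                 # [[0.25 0.25 0], [0.25 0.25 0]]
print(np.allclose(Apinv, np.linalg.pinv(A)))   # True: matches the library routine

# The four Penrose conditions characterize A^+:
B = Apinv
print(np.allclose(A @ B @ A, A), np.allclose(B @ A @ B, B),
      np.allclose((B @ A).T, B @ A), np.allclose((A @ B).T, A @ B))
```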

14 What is next?
- The Pseudoinverse
- Orthogonal Projections
- LSQ: Existence and Uniqueness
- Numerical Solution Methods
- Perturbation Theory

15 Recall direct sum and orthogonal sum

Suppose $S$ and $T$ are subspaces of $\mathbb{R}^n$ or $\mathbb{C}^n$. We define:
- Sum: $X := S + T := \{ s + t : s \in S \text{ and } t \in T \}$.
- Direct sum: if $S \cap T = \{0\}$, then $S \oplus T := S + T$.
- Orthogonal sum: suppose $\langle \cdot, \cdot \rangle$ is an inner product on $\mathbb{R}^n$ or $\mathbb{C}^n$; $S \oplus T$ is an orthogonal sum if $\langle s, t \rangle = 0$ for all $s \in S$ and all $t \in T$.
- Orthogonal complement: $T = S^\perp := \{ x \in X : \langle s, x \rangle = 0 \text{ for all } s \in S \}$.
$\mathrm{span}(A) \oplus \ker(A^*)$ is an orthogonal sum with respect to the usual inner product $\langle s, t \rangle := s^* t$. For if $y = Ax \in \mathrm{span}(A)$ and $z \in \ker(A^*)$ then $y^* z = (Ax)^* z = x^* (A^* z) = 0$.

16 Basic facts

Suppose $S$ and $T$ are subspaces of $\mathbb{R}^n$ or $\mathbb{C}^n$. Then:
- $S + T = T + S$, and $S + T$ is a subspace of $\mathbb{R}^n$ or $\mathbb{C}^n$.
- $\dim(S + T) = \dim S + \dim T - \dim(S \cap T)$.
- $\dim(S \oplus T) = \dim S + \dim T$.
- $\mathbb{C}^m = \mathrm{span}(A) \oplus \ker(A^*)$.
- Every $v \in S \oplus T$ can be decomposed uniquely as $v = s + t$, where $s \in S$ and $t \in T$. If $S \oplus T$ is an orthogonal sum then $s$ is called the orthogonal projection of $v$ into $S$.

17 Pythagoras

If $\langle s, t \rangle = 0$ then $\|s + t\|^2 = \|s\|^2 + \|t\|^2$. Here $\|v\| := \sqrt{\langle v, v \rangle}$.
Figure: $v = s + t$ with $s \in S$ and $t \perp S$.

18 Orthogonal bases for $\mathrm{span}(A)$ and $\ker(A^*)$

$$A = U \Sigma V^*, \quad A^* = V \Sigma^T U^*, \qquad AV = U\Sigma, \quad A^*U = V\Sigma^T,$$
$$A \,[V_1\; V_2] = [U_1\; U_2] \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad A^* [U_1\; U_2] = [V_1\; V_2] \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix},$$
$$AV_1 = U_1 \Sigma_1, \quad AV_2 = 0, \quad A^*U_1 = V_1 \Sigma_1, \quad A^*U_2 = 0.$$
The columns of $U_1$ form an orthonormal basis for $\mathrm{span}(A)$; the columns of $U_2$ form an orthonormal basis for $\ker(A^*)$.

19 Orthogonal Projections and SVD

$$A = [U_1\; U_2] \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} [V_1\; V_2]^*,$$
where the columns of $U_1$ form an orthonormal basis for $\mathrm{span}(A)$ and the columns of $U_2$ form an orthonormal basis for $\ker(A^*)$. Let $b \in \mathbb{C}^m$. Then
$$b = U U^* b = [U_1\; U_2] \begin{bmatrix} U_1^* \\ U_2^* \end{bmatrix} b = U_1 (U_1^* b) + U_2 (U_2^* b).$$
$b_1 := U_1 U_1^* b \in \mathrm{span}(A)$ is the orthogonal projection of $b$ into $\mathrm{span}(A)$; $b_2 := U_2 U_2^* b \in \ker(A^*)$ is the orthogonal projection into $\ker(A^*)$.

20 Orthogonal Projections and SVD (cont.)

$$b_1 = A A^\dagger b, \qquad b_2 = (I - A A^\dagger) b.$$
Figure: $b$ decomposed into $b_1 \in \mathrm{span}(A)$ and $b_2 \perp \mathrm{span}(A)$.

21 Example

The singular value decomposition of $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$ is $A = I_3 A I_2$, so
$$A^\dagger = I_2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},$$
$$A A^\dagger = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad I_3 - A A^\dagger = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
If $b = [b_1, b_2, b_3]^T$, then $b_1 = A A^\dagger b = [b_1, b_2, 0]^T$ and $b_2 = (I_3 - A A^\dagger) b = [0, 0, b_3]^T$.
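
A NumPy sketch of these projections on the same matrix (the test vector $b$ is made-up):

```python
import numpy as np

# Orthogonal projections of b onto span(A) and ker(A^*) via the pseudo-inverse:
# b1 = A A^+ b and b2 = (I - A A^+) b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
b = np.array([3.0, -2.0, 5.0])

P = A @ np.linalg.pinv(A)       # projector onto span(A)
b1 = P @ b                      # [3, -2, 0]
b2 = b - b1                     # [0, 0, 5]
print(b1, b2, np.dot(b1, b2))   # the two pieces are orthogonal
```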

22 What is next?
- The Pseudoinverse
- Orthogonal Projections
- LSQ: Existence and Uniqueness
- Numerical Solution Methods
- Perturbation Theory

23 Existence, Uniqueness, Characterization

Theorem. The least squares problem always has a solution. The solution is unique if and only if $A$ has linearly independent columns. Moreover, the following are equivalent:
1. $x$ is a solution of the least squares problem.
2. $A^* A x = A^* b$.
3. $x = A^\dagger b + z$ for some $z \in \ker(A)$, where $A^\dagger$ is the pseudo-inverse of $A$.
We have $\|x\|_2 \ge \|A^\dagger b\|_2$ for all solutions $x$ of the least squares problem.
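
A quick NumPy check of the equivalence 1 ⟺ 2 (the random full-rank data is my choice): the residual of a least squares solution is orthogonal to the columns of $A$.

```python
import numpy as np

# A least squares solution satisfies the normal equations A^* A x = A^* b,
# i.e. the residual Ax - b is orthogonal to span(A).
rng = np.random.default_rng(2)
A = rng.standard_normal((7, 3))      # full column rank with probability 1
b = rng.standard_normal(7)

x = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(A.T @ (A @ x - b), 0.0))   # True
```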

24 Proof: Existence

Let $b = b_1 + b_2$, where $b_1 \in \mathrm{span}(A)$ and $b_2 \in \ker(A^*)$ are the orthogonal projections of $b$ into $\mathrm{span}(A)$ and $\ker(A^*)$, respectively. Since $b_2^* v = 0$ for any $v \in \mathrm{span}(A)$, we have $b_2^*(b_1 - Ax) = 0$ for any $x \in \mathbb{C}^n$. Therefore, for $x \in \mathbb{C}^n$,
$$\|b - Ax\|_2^2 = \|(b_1 - Ax) + b_2\|_2^2 = \|b_1 - Ax\|_2^2 + \|b_2\|_2^2 \ge \|b_2\|_2^2,$$
with equality if and only if $Ax = b_1$. Since $b_1 \in \mathrm{span}(A)$ we can always find such an $x$, and existence follows.

25 Proof: 1, 2, 3 and uniqueness

1 ⟺ 2: By what we have shown, $x$ solves the least squares problem if and only if $Ax = b_1$, so that $b - Ax = b_1 + b_2 - Ax = b_2 \in \ker(A^*)$, i.e. $A^*(b - Ax) = 0$, i.e. $A^* A x = A^* b$.
1 ⟹ 3: Suppose $Ax = b_1$ and define $z := x - A^\dagger b$. Then $Az = Ax - A A^\dagger b = b_1 - b_1 = 0$, so $z \in \ker(A)$.
3 ⟹ 1: If $x = A^\dagger b + z$ with $z \in \ker(A)$, then $Ax = A A^\dagger b + Az = b_1$.
If $A$ has linearly independent columns then $\ker(A) = \{0\}$ and $x = A^\dagger b$ is the unique solution.

26 Proof: Minimum norm property

Suppose $x = A^\dagger b + z$, with $z \in \ker(A)$, is a solution. Partition the right singular vectors of $A$ as $[v_1, \ldots, v_r, v_{r+1}, \ldots, v_n] = [V_1, V_2]$; the columns of $V_2$ form a basis for $\ker(A)$ and $V_2^* V_1 = 0$. If $A^\dagger = V_1 \Sigma_1^{-1} U_1^*$ and $z \in \ker(A)$, then $z = V_2 y$ for some $y \in \mathbb{C}^{n-r}$ and $z^* A^\dagger b = y^* V_2^* V_1 \Sigma_1^{-1} U_1^* b = 0$. Thus $z$ and $A^\dagger b$ are orthogonal, so that
$$\|x\|_2^2 = \|A^\dagger b + z\|_2^2 = \|A^\dagger b\|_2^2 + \|z\|_2^2 \ge \|A^\dagger b\|_2^2.$$

27 What is next?
- The Pseudoinverse
- Orthogonal Projections
- LSQ: Existence and Uniqueness
- Numerical Solution Methods
  - Normal Equations
  - QR Decomposition and Factorization
  - SVD and SVF
- Perturbation Theory

28 Numerical Solution: Normal Equations

Assume $A \in \mathbb{R}^{m,n}$, $b \in \mathbb{R}^m$, and that $A$ has linearly independent columns ($m \ge n$). Then $B := A^T A$ is symmetric positive definite, and we can use its Cholesky factorization $B = R^T R$.

29 Computing $A^T A$

Write $A = [a_{:1}, \ldots, a_{:n}]$ in terms of its columns, and let $a_{1:}^T, \ldots, a_{m:}^T$ be its rows.
1. $(A^T A)_{ij} = a_{:i}^T a_{:j}$ and $(A^T b)_i = a_{:i}^T b$ (inner product form).
2. $A^T A = \sum_{i=1}^m a_{i:} a_{i:}^T$ and $A^T b = \sum_{i=1}^m b_i a_{i:}$ (outer product form).
The outer product form is suitable for large problems since it uses only one pass through the data, importing one row of $A$ at a time from some separate storage.

30 Complexity

Consider the number of operations to compute $B := A^T A$. We need $2m$ flops to find each inner product $a_{:i}^T a_{:j}$. Since $B$ is symmetric we only need to compute $n(n+1)/2$ such inner products, so $B$ can be computed in $O(mn^2)$ flops. The computation of $B$ using outer products can also be done in $O(mn^2)$ flops by computing only one half of each symmetric outer product. In conclusion, the number of operations is $O(mn^2)$ to find $B$, $2mn$ to find $A^T b$, $O(n^3/3)$ to find $R$, $O(n^2)$ to solve $R^T y = c$, and $O(n^2)$ to solve $Rx = y$. Since $m \ge n$, the bulk of the work is to find $B$.
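
A NumPy sketch of the whole pipeline with random data (my choice; a production code would use a dedicated triangular solver for the two half-systems):

```python
import numpy as np

# Solve the least squares problem via the normal equations and a Cholesky
# factorization B = L L^T (the slides write B = R^T R with R = L^T).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))     # full column rank with probability 1
b = rng.standard_normal(100)

B = A.T @ A                           # O(mn^2) flops, the dominant cost
c = A.T @ b                           # 2mn flops
L = np.linalg.cholesky(B)             # O(n^3/3) flops
y = np.linalg.solve(L, c)             # forward substitution, O(n^2)
x = np.linalg.solve(L.T, y)           # back substitution, O(n^2)

print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True
```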

31 Condition Number Issue: Squaring the Trouble

A problem with the normal equation approach is that the linear system can be poorly conditioned: the 2-norm condition number of $B := A^T A$ is the square of the condition number of $A$,
$$K_2(B) = (\sigma_1/\sigma_n)^2 = K_2(A)^2.$$
Another difficulty is that, in floating point arithmetic, the computed $A^T A$ might not be positive definite.
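
A one-line numerical illustration of the squaring (the scaled random matrix is a made-up example):

```python
import numpy as np

# K2(A^T A) = K2(A)^2: forming the normal equations squares the condition number.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 4)) @ np.diag([1.0, 1.0, 1e-3, 1e-5])
print(np.linalg.cond(A), np.linalg.cond(A.T @ A), np.linalg.cond(A)**2)
```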

32 Numerical Solution using the QR Factorization

Assume $A \in \mathbb{R}^{m,n}$, $b \in \mathbb{R}^m$, and that $A$ has linearly independent columns ($m \ge n$). Suppose $A = Q_1 R_1$ is a QR factorization of $A$. Then
$$A^T A = R_1^T Q_1^T Q_1 R_1 = R_1^T R_1, \qquad A^T b = R_1^T Q_1^T b.$$
Since $A$ has rank $n$, the matrix $R_1^T$ is nonsingular and can be canceled. Thus
$$A^T A x = A^T b \iff R_1 x = c_1, \quad c_1 := Q_1^T b.$$

33 $R_1 x = c_1$, $c_1 := Q_1^T b$. We can use Householder transformations or Givens rotations to find $R_1$ and $c_1$. Consider using the Householder triangulation algorithm: we find $R = Q^T A$ and $c = Q^T b$, where $A = QR$ is the QR decomposition of $A$. The matrices $R_1$ and $c_1$ are located in the first $n$ rows of $R$ and $c$.

34 Example

Consider the least squares problem with
$$A = \begin{bmatrix} 1 & 3 & 7 \\ 1 & 3 & 1 \\ 1 & -1 & 2 \\ 1 & -1 & -4 \end{bmatrix} \quad \text{and} \quad b = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}.$$
A QR decomposition $A = QR$ is
$$A = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \\ 0 & 0 & 0 \end{bmatrix}.$$

35 Example (cont.)

A QR factorization $A = Q_1 R_1$ is obtained by dropping the last column of $Q$ and the last row of $R$, so that
$$A = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -1 \\ 1 & -1 & 1 \\ 1 & -1 & -1 \end{bmatrix} \begin{bmatrix} 2 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix} = Q_1 R_1.$$
The least squares solution $x$ is found by solving the triangular system $R_1 x = c_1 = Q_1^T b$:
$$\begin{bmatrix} 2 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix},$$
and we find $x = [1, 0, 0]^T$.
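
The same computation as a NumPy sketch (the library QR may differ from the factorization above by column signs, but the solution is the same):

```python
import numpy as np

# Least squares via QR: factor A = Q1 R1, then back-substitute R1 x = Q1^T b.
A = np.array([[1.0, 3.0, 7.0],
              [1.0, 3.0, 1.0],
              [1.0, -1.0, 2.0],
              [1.0, -1.0, -4.0]])
b = np.array([1.0, 1.0, 1.0, 1.0])

Q1, R1 = np.linalg.qr(A)      # "reduced" factorization: Q1 is 4x3, R1 is 3x3
c1 = Q1.T @ b
x = np.linalg.solve(R1, c1)   # the 3x3 triangular system
print(x)                      # [1, 0, 0] as in the example
```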

36 Complexity

The leading term in the number of flops to compute a QR decomposition is approximately $2mn^2 - 2n^3/3$. The number of flops needed to form the normal equations, taking advantage of symmetry, is approximately $mn^2$. Thus for $m$ much larger than $n$, Householder triangulation requires twice as many flops as an approach based on the normal equations. Also, Householder triangulation has problems taking advantage of the structure in sparse problems.

37 Condition Number: Less Trouble

The 2-norm condition number for the system $R_1 x = c_1$ is
$$K_2(R_1) = K_2(Q_1 R_1) = K_2(A) = \sqrt{K_2(A^T A)},$$
the square root of the condition number for the normal equations. Thus if $A$ is mildly ill-conditioned, the normal equations can be quite ill-conditioned, and solving the normal equations can give inaccurate results. The QR factorization approach is quite stable.

38 Numerical Solution using the Singular Value Factorization

This method can be used even if $A$ does not have full rank. It requires the pseudo-inverse of $A$: $x = A^\dagger b + z$ is a least squares solution for any $z \in \ker(A)$. When $\mathrm{rank}(A)$ is less than the number of columns of $A$, then $\ker(A) \ne \{0\}$ and we have a choice of $z$. One possible choice is $z = 0$, giving the minimal norm solution $A^\dagger b$.

39 Example

Find all least squares solutions of $Ax = b$ for $A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \\ 0 & 0 \end{bmatrix}$. The pseudo-inverse of $A$ is $A^\dagger = \frac{1}{4} \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \end{bmatrix}$ (computed in the pseudo-inverse example above), and $[-1, 1]^T$ is a basis for $\ker(A)$. If $b = [b_1, b_2, b_3]^T$, then for any $z \in \mathbb{R}$ the vector
$$x = \frac{1}{4} \begin{bmatrix} b_1 + b_2 \\ b_1 + b_2 \end{bmatrix} + z \begin{bmatrix} -1 \\ 1 \end{bmatrix}$$
is a solution of $\min_x \|Ax - b\|_2$, and this gives all solutions. $z = 0$ gives the minimal norm solution.
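
A NumPy sketch of this family of solutions (the vector $b$ is made-up): all choices of $z$ attain the same residual, and $z = 0$ gives the shortest $x$; NumPy's lstsq returns that minimal-norm solution for rank-deficient problems.

```python
import numpy as np

# All least squares solutions of the rank-deficient example: x = A^+ b + z*[-1, 1].
A = np.array([[1.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
b = np.array([2.0, 4.0, 6.0])

x_min = np.linalg.pinv(A) @ b                   # [1.5, 1.5] = (b1+b2)/4 in each entry
print(x_min, np.linalg.lstsq(A, b, rcond=None)[0])   # lstsq returns the same vector

k = np.array([-1.0, 1.0])                       # basis for ker(A)
for z in (0.0, 0.5, -2.0):
    x = x_min + z * k
    print(np.linalg.norm(A @ x - b), np.linalg.norm(x))
# residuals agree; the norm of x is smallest at z = 0
```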

40 Matlab

x = lscov(A,b) returns the ordinary least squares solution to the linear system of equations A*x = b; b can also be an m-by-k matrix, and lscov returns one solution for each column of b. When rank(A) < n, lscov sets the maximum possible number of elements of x to zero to obtain a basic solution. This is not the same as the minimal norm solution. lscov uses a QR factorization.

41 What is next?
- The Pseudoinverse
- Orthogonal Projections
- LSQ Theory: Existence and Uniqueness
- Numerical Solution Methods
- Perturbation Theory
  - Perturbing the right hand side

42 Perturbing the right hand side

Theorem. Suppose $A \in \mathbb{C}^{m,n}$ has linearly independent columns, and let $b, e \in \mathbb{C}^m$. Let $x, y \in \mathbb{C}^n$ be the solutions of $\min_x \|Ax - b\|_2$ and $\min_y \|Ay - (b + e)\|_2$. Finally, let $b_1, e_1$ be the orthogonal projections of $b$ and $e$ on $\mathrm{span}(A)$. If $b_1 \ne 0$, we have for any operator norm
$$\frac{1}{K(A)} \frac{\|e_1\|}{\|b_1\|} \le \frac{\|y - x\|}{\|x\|} \le K(A) \frac{\|e_1\|}{\|b_1\|}, \qquad K(A) = \|A\| \|A^\dagger\|.$$
Figure: $b$ and the perturbation $e$ with their projections $b_1$ and $e_1$ onto $\mathrm{span}(A)$; the remaining component $b_2$ lies in $\ker(A^*)$.

43 Proof

Subtract $x = A^\dagger b$ from $y = A^\dagger b + A^\dagger e$:
$$y - x = A^\dagger e = A^\dagger e_1, \quad \text{since } A^\dagger e = A^\dagger e_1.$$
Thus $\|y - x\| = \|A^\dagger e_1\| \le \|A^\dagger\| \|e_1\|$. Moreover, $b_1 = Ax$, so $\|b_1\| \le \|A\| \|x\|$. Therefore
$$\frac{\|y - x\|}{\|x\|} \le \|A\| \|A^\dagger\| \frac{\|e_1\|}{\|b_1\|},$$
proving the rightmost inequality. From $A(y - x) = e_1$ and $x = A^\dagger b_1$ we obtain the leftmost inequality.
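
A numerical check of both inequalities in the 2-norm, on made-up random data (my choice; the theorem itself is norm-independent):

```python
import numpy as np

# Verify (1/K) ||e1||/||b1|| <= ||y - x||/||x|| <= K ||e1||/||b1||, K = K2(A).
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 4))       # linearly independent columns
b = rng.standard_normal(30)
e = 1e-6 * rng.standard_normal(30)     # a small perturbation of b

P = A @ np.linalg.pinv(A)              # projector onto span(A)
b1, e1 = P @ b, P @ e                  # projections of b and e

x = np.linalg.lstsq(A, b, rcond=None)[0]
y = np.linalg.lstsq(A, b + e, rcond=None)[0]

K = np.linalg.cond(A)
rel = np.linalg.norm(y - x) / np.linalg.norm(x)
ratio = np.linalg.norm(e1) / np.linalg.norm(b1)
print(ratio / K <= rel <= K * ratio)   # True
```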
