
Chapter 5

The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA)

5.1 Basics of SVD

5.1.1 Review of Key Concepts

We review some key definitions and results about matrices that will be used in this section.

The transpose of a matrix A, denoted A^T, is the matrix obtained from A by switching its rows and columns. In other words, if A = (a_ij) then A^T = (a_ji).

The conjugate transpose of a matrix A, denoted A*, is obtained from A by switching its rows and columns and taking the complex conjugate of its entries. In other words, if A = (a_ij) then A* = (ā_ji).

A matrix A is said to be symmetric if A = A^T. Symmetric matrices have the following properties:

- Their eigenvalues are always real.
- They are always diagonalizable.
- Eigenvectors corresponding to distinct eigenvalues are orthogonal.
- A is orthogonally diagonalizable, that is, there exists an orthogonal matrix P such that P^{-1}AP is diagonal.

A matrix A is said to be Hermitian if A = A*. For matrices with real entries, being Hermitian is the same as being symmetric.

An n×n matrix A is said to be normal if A*A = AA*. Hermitian matrices are obviously also normal.

A matrix A is said to be unitary if AA* = A*A = I. Unitary matrices have the following properties:

- They preserve the dot product, that is, ⟨Ax, Ay⟩ = ⟨x, y⟩.
- Their columns and rows are orthonormal.
- They are always diagonalizable.
- |det A| = 1.
- A^{-1} = A*.

A matrix A is said to be orthogonal if AA^T = A^TA = I. Orthogonal matrices have the following properties:

- They preserve the dot product, that is, ⟨Ax, Ay⟩ = ⟨x, y⟩.
- Their columns and rows are orthonormal.
- det A = ±1.
- A^{-1} = A^T.

A quadratic form on R^n is a function Q defined on R^n by Q(x) = x^T A x for some n×n matrix A. Here are a few important facts about quadratic forms:

- In the case A is symmetric, there exists a change of variable x = Py that transforms x^T A x into y^T D y, where D is a diagonal matrix.
- In the case A is symmetric, the maximum value of x^T A x subject to ‖x‖ = 1 is the largest eigenvalue λ1 of A, and it is attained in the direction of u1, the corresponding eigenvector.
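Several of these review facts are easy to check numerically. Below is a minimal MATLAB sketch, added here as an illustration, which builds a random symmetric matrix and verifies orthogonal diagonalizability and the quadratic form fact:

    B = randn(4); A = (B + B')/2;   % a random symmetric test matrix
    [P, D] = eig(A);                % columns of P are eigenvectors of A

    norm(P'*P - eye(4))             % ~0: P is orthogonal
    norm(A - P*D*P')                % ~0: A = P*D*P', i.e., P^(-1)*A*P = D

    % The max of the quadratic form x'*A*x over unit vectors x is the
    % largest eigenvalue, attained at the corresponding eigenvector:
    [lmax, imax] = max(diag(D));
    u1 = P(:, imax);
    u1'*A*u1 - lmax                 % ~0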

5.1.2 Introduction to the Singular Value Decomposition (SVD) of an m×n Matrix

Recall that an n×n matrix is diagonalizable if and only if it has n linearly independent eigenvectors. In particular, symmetric matrices are diagonalizable. If P is a matrix whose columns are n linearly independent eigenvectors of A, then P^{-1}AP = D, where D is the n×n diagonal matrix whose diagonal entries are the corresponding eigenvalues of A and whose other entries are 0. This means that when A is diagonalizable, we can write A = PDP^{-1}. This factorization, as we know, is not possible for all matrices. However, a factorization of the form A = QDP^{-1} is always possible for any m×n matrix A. A special factorization of this type, the singular value decomposition (SVD), is a very powerful tool in applied linear algebra. We discuss this factorization in this section.

The SVD is based on the following property of diagonalization, which can be imitated for rectangular matrices. If A is symmetric, then its eigenvalues are real. Moreover, if Ax = λx and ‖x‖ = 1, then ‖Ax‖ = ‖λx‖ = |λ| ‖x‖ = |λ|. Hence, |λ| measures the amount by which A stretches (or shrinks) vectors in the direction of the corresponding eigenvector. If λ1 is the eigenvalue with the largest magnitude and v1 is a corresponding unit eigenvector, then v1 gives the direction in which the stretching effect of A is the greatest. It is this property of eigenvalues we will use to derive the SVD. We begin with an example.

Example. Let

    A = [ 4  11  14 ]
        [ 8   7  -2 ]

and consider the linear transformation T : R^3 → R^2, x ↦ Ax. Find a unit vector x at which the length ‖Ax‖ is maximized, and compute this maximum length.

The problem here is that A is not a square matrix, so we cannot use what we said above: we cannot find its eigenvalues. However, we note that ‖Ax‖ is maximized at the same x that maximizes ‖Ax‖². Also note that

    ‖Ax‖² = (Ax)^T (Ax) = x^T A^T A x = x^T (A^T A) x.

This is a quadratic form defined by the matrix A^T A. Note that A^T A is symmetric since (A^T A)^T = A^T (A^T)^T = A^T A. We know the largest value of this quadratic form over unit vectors is the largest eigenvalue of its matrix. Since

    A^T A = [  80  100   40 ]
            [ 100  170  140 ]
            [  40  140  200 ],

its eigenvalues are 360, 90, 0 and corresponding unit eigenvectors are

    v1 = (1/3, 2/3, 2/3),   v2 = (-2/3, -1/3, 2/3),   v3 = (2/3, -2/3, 1/3).

In conclusion, we see that for ‖x‖ = 1, the largest value of ‖Ax‖² is 360, attained at x = v1, so the largest value of ‖Ax‖ is √360 = 6√10.

Remark 5.1. In this example, we note the importance of the matrix A^T A. To find the direction in which Ax stretches (or shrinks) the most, we computed the eigenvalues and eigenvectors of A^T A. That direction is the direction of the eigenvector corresponding to the eigenvalue of A^T A with the largest absolute value.
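The numbers in this example are easy to reproduce in MATLAB; here is a minimal check, added as an illustration:

    A = [4 11 14; 8 7 -2];

    [V, D] = eig(A'*A);                        % eigenpairs of A'A
    [lambda, idx] = sort(diag(D), 'descend');  % sort eigenvalues: 360, 90, 0
    V = V(:, idx);

    lambda'            % 360  90  0  (up to roundoff)
    v1 = V(:, 1)       % +/-(1/3, 2/3, 2/3)'; eig may flip the sign
    norm(A*v1)         % 6*sqrt(10), the maximum of ||Ax|| over unit x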

5.1.3 The Singular Values of an m×n Matrix

Let A be an m×n matrix. Then, as noted in the example, A^T A is an n×n symmetric matrix, hence orthogonally diagonalizable. Let {v1, v2, ..., vn} be an orthonormal basis for R^n consisting of eigenvectors of A^T A, and let λ1, λ2, ..., λn be the corresponding eigenvalues of A^T A. Then, for 1 ≤ i ≤ n, we have:

    ‖Av_i‖² = (Av_i)^T (Av_i)
            = v_i^T A^T A v_i
            = v_i^T (λ_i v_i)     since λ_i is an eigenvalue of A^T A
            = λ_i v_i^T v_i
            = λ_i                 since v_i^T v_i = ‖v_i‖² = 1 (v_i is a unit vector)

Thus, we see that all the eigenvalues of A^T A are nonnegative. By renumbering, we may assume that λ1 ≥ λ2 ≥ ... ≥ λn ≥ 0. We give this as a theorem, as it is an important result.

Theorem 5.1. Let A be an m×n matrix and let λ_i, 1 ≤ i ≤ n, be the eigenvalues of A^T A. Then λ_i ≥ 0 for 1 ≤ i ≤ n.

Definition. The singular values of A, denoted σ_i, are the square roots of the eigenvalues λ_i of A^T A, arranged in decreasing order. In other words, σ_i = √λ_i.

Since ‖Av_i‖² = λ_i, we see that σ_i is the length of the vector Av_i, where v_i is the corresponding eigenvector of A^T A.

Example. Find the singular values of A, where A is as in the example of section 5.1.2. Recall that we found λ1 = 360, λ2 = 90 and λ3 = 0, hence σ1 = √360 = 6√10, σ2 = √90 = 3√10 and σ3 = 0.

We have the following important theorem.

Theorem. Suppose that {v1, v2, ..., vn} is an orthonormal basis for R^n consisting of eigenvectors of A^T A, arranged so that the corresponding eigenvalues of A^T A satisfy λ1 ≥ λ2 ≥ ... ≥ λn, and suppose that A has r nonzero singular values. Then {Av1, Av2, ..., Avr} is an orthogonal basis for col A (the space spanned by the columns of A), and rank A = r.

Proof. There are three things to prove.

1. {Av1, Av2, ..., Avr} is an orthogonal set of nonzero vectors. For i ≠ j we have v_i ⊥ v_j, hence v_i ⊥ λ_j v_j, thus v_i^T (λ_j v_j) = 0. It follows that for i ≠ j,

    (Av_i)^T (Av_j) = v_i^T A^T A v_j = v_i^T (λ_j v_j) = 0.

Thus {Av1, Av2, ..., Avn} is an orthogonal set. Since A has exactly r nonzero singular values and the lengths of the vectors Av_i are these singular values, we have Av_i ≠ 0 for 1 ≤ i ≤ r. So {Av1, Av2, ..., Avr} is an orthogonal set of nonzero vectors.

2. {Av1, Av2, ..., Avr} is linearly independent. Since these vectors are orthogonal and nonzero, they are linearly independent.

3. {Av1, Av2, ..., Avr} spans col A. If y ∈ col A, then y = Ax for some vector x. Since {v1, v2, ..., vn} is an orthonormal basis for R^n, we can write x = c1 v1 + c2 v2 + ... + cn vn, thus

    y = c1 Av1 + ... + cn Avn = c1 Av1 + ... + cr Avr,   since Av_i = 0 for i > r.

Thus {Av1, Av2, ..., Avr} spans col A. It also follows that rank A = dim col A = r.
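The definition σ_i = √λ_i translates directly into MATLAB. The sketch below, added as an illustration, compares the two routes for the example matrix (note that the built-in svd does not form A'A internally; computing singular values through A'A is less accurate for ill-conditioned matrices):

    A = [4 11 14; 8 7 -2];

    lambda = sort(eig(A'*A), 'descend');   % eigenvalues of A'A: 360, 90, 0
    sigma  = sqrt(max(lambda, 0));         % clip tiny negative roundoff

    sigma'      % 6*sqrt(10), 3*sqrt(10), 0
    svd(A)'     % svd returns min(m,n) = 2 values here: 6*sqrt(10), 3*sqrt(10)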

5.1.4 The Singular Value Decomposition of an m×n Matrix

Let A be an m×n matrix with r nonzero singular values, where r ≤ min(m, n). Define D to be the r×r diagonal matrix consisting of these r nonzero singular values of A, with σ1 ≥ σ2 ≥ ... ≥ σr, and let

    Σ = [ D  0 ]                                                        (5.1)
        [ 0  0 ]

be an m×n matrix (the zero blocks have whatever sizes make Σ m×n). The SVD of A will involve Σ. More specifically, we have the following theorem.

Theorem. Let A be an m×n matrix with rank r. Then there exist an m×n matrix Σ as in (5.1), an m×m orthogonal matrix U and an n×n orthogonal matrix V such that

    A = UΣV^T.

Proof. We outline the proof by constructing the various matrices involved and showing that they satisfy the requirements of the theorem. Let λ_i and v_i be as in the previous theorem. Then {Av1, Av2, ..., Avr} is an orthogonal basis for col A. We normalize each Av_i to obtain an orthonormal set {u1, u2, ..., ur}, where

    u_i = Av_i / ‖Av_i‖ = Av_i / σ_i,   thus   Av_i = σ_i u_i   (1 ≤ i ≤ r).    (5.2)

Next, we extend {u1, u2, ..., ur} to an orthonormal basis {u1, u2, ..., um} of R^m, and let U = [u1 u2 ... um] and V = [v1 v2 ... vn]. By construction, both U and V are orthogonal matrices. Also, from (5.2),

    AV = [Av1 Av2 ... Avr 0 ... 0] = [σ1u1 σ2u2 ... σrur 0 ... 0].

With D and Σ as above,

    UΣ = [u1 u2 ... um] Σ = [σ1u1 σ2u2 ... σrur 0 ... 0] = AV.

Therefore, UΣV^T = AV V^T = A, since V is orthogonal (V V^T = I).
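The identity AV = UΣ established in this proof, equivalently Av_i = σ_i u_i, is easy to check numerically against MATLAB's built-in svd. A quick sanity check, added as an illustration (any matrix will do):

    A = randn(5, 3);          % arbitrary test matrix
    [U, S, V] = svd(A);
    norm(A*V - U*S)           % ~0: the columns satisfy A*v_i = sigma_i*u_i
    norm(A - U*S*V')          % ~0: the full factorization A = U*S*V'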

Definition. Any decomposition A = UΣV^T, with U and V orthogonal, Σ as in (5.1) and positive diagonal entries in D, is called a singular value decomposition (SVD) of A. The matrices U and V are not uniquely determined, but the diagonal entries of Σ are necessarily the singular values of A. The columns of U are called the left singular vectors of A and the columns of V are called the right singular vectors of A.

Example. Find an SVD of the matrix A of the example in section 5.1.2. This example is designed to give the reader an idea of where the various components come from. Any algorithm developed for a computer would proceed differently. The construction can be divided into three steps. Recall that

    A = [ 4  11  14 ]
        [ 8   7  -2 ].

Step 1: Find an orthogonal diagonalization of A^T A. For small matrices, this computation can be done by hand. For larger matrices, it will be done with a computer. Since A is 2×3, A^T A is 3×3. Its eigenvalues, computed earlier, are 360, 90 and 0.

Step 2: Set up V and Σ. Σ is m×n, that is 2×3 in our case. V is n×n, that is 3×3 in our case. The unit eigenvectors corresponding to the eigenvalues of A^T A, written in decreasing order of the eigenvalues, are

    [v1 v2 v3] = [ 1/3  -2/3   2/3 ]
                 [ 2/3  -1/3  -2/3 ]
                 [ 2/3   2/3   1/3 ]

hence V is this matrix. The singular values are σ1 = 6√10, σ2 = 3√10 and σ3 = 0, thus

    D = [ 6√10    0  ]        Σ = [ 6√10    0    0 ]
        [   0   3√10 ]  and       [   0   3√10   0 ].

Step 3: Construct U. U is m×m, that is 2×2 in our case. The columns of U = [u1 u2] are (1/σ_i) Av_i for i = 1, 2, thus

    u1 = (1/(6√10)) Av1 = [ 3/√10 ]       u2 = (1/(3√10)) Av2 = [  1/√10 ]
                          [ 1/√10 ]  and                        [ -3/√10 ].

Note that {u1, u2} is already an orthonormal basis for R^2, so we do not need to extend it. Hence

    A = UΣV^T = [ 3/√10   1/√10 ] [ 6√10    0    0 ] [  1/3   2/3   2/3 ]
                [ 1/√10  -3/√10 ] [   0   3√10   0 ] [ -2/3  -1/3   2/3 ]
                                                     [  2/3  -2/3   1/3 ].

5.1.5 Summary

If A is m×n with rank A = r, then A = UΣV^T, where:

- V = [v1 v2 ... vn] is an n×n matrix whose columns are unit eigenvectors of A^T A, arranged so that the corresponding eigenvalues of A^T A satisfy λ1 ≥ λ2 ≥ ... ≥ λn ≥ 0.
- Σ = [ D 0; 0 0 ] is an m×n matrix, where D = diag(σ1, σ2, ..., σr) is the r×r diagonal matrix with σ_i = √λ_i.
- U is an m×m matrix whose first r columns are the normalized vectors (1/σ_i)Av_i, extended to an orthonormal basis of R^m.

This decomposition gives us information about the directions in which the stretching (shrinking) of A is the largest, second largest, third largest, ..., as well as the actual largest, second largest, third largest, ... amounts of this stretching (shrinking).

From this, we can simplify A (make it smaller) by eliminating the directions in which the stretching (shrinking) is small, below a fixed threshold. This is what we will do in the applications. You should recognize the similarity between this approach and what we have done before. To reduce the size of the information we have, we:

1. express that information in terms of "essential components";
2. identify and eliminate the components which only play a small role in the data.
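The three steps of this summary can be translated almost line by line into MATLAB. The function below is a pedagogical sketch added here (the built-in svd uses a different, numerically stable algorithm); for simplicity it assumes rank A = m ≤ n, as in our 2×3 example, so the columns of U need no extension:

    function [U, S, V] = svd_by_hand(A)
    % Pedagogical SVD via the eigenvectors of A'A (assumes rank(A) = m <= n).
    [m, n] = size(A);

    % Step 1: orthogonal diagonalization of A'A
    [V, D] = eig(A'*A);
    [lambda, idx] = sort(diag(D), 'descend');   % decreasing eigenvalues
    V = V(:, idx);

    % Step 2: Sigma from the singular values sigma_i = sqrt(lambda_i)
    sigma = sqrt(max(lambda, 0));
    S = zeros(m, n);
    S(1:m, 1:m) = diag(sigma(1:m));

    % Step 3: columns of U are the normalized vectors A*v_i / sigma_i
    U = zeros(m, m);
    for i = 1:m
        U(:, i) = A*V(:, i) / sigma(i);
    end
    end

For A = [4 11 14; 8 7 -2], norm(A - U*S*V') is on the order of machine precision, and U, S, V agree with the hand computation above up to the signs of the singular vector pairs.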

5.1.6 The SVD and MATLAB

The MATLAB command is svd. Two useful formats for this command are:

1. s = svd(A) returns a vector s containing the singular values of A.
2. [U,S,V] = svd(X) produces a diagonal matrix S of the same dimension as X, with nonnegative diagonal elements in decreasing order, and unitary matrices U and V, so that X = U*S*V'.

5.1.7 An Application: Image Compression

Suppose that A is the m×n matrix which represents an image. We describe a technique which compresses the image using the SVD of A. Using the notation of this section, we write the SVD of A as A = UΣV^T, where U is m×m, Σ is m×n and V is n×n. Σ consists of zeros except for the elements of its diagonal, which contain the singular values of A in decreasing order. The number of nonzero diagonal elements of Σ is the rank of A. Let r be the rank of A; then r ≤ min(m, n).

The idea behind the compression technique is that the larger a singular value of A is, the more it contributes to A, hence to the image. Singular values which are very small can therefore be omitted and the image reconstructed without them. More specifically, suppose we decide to keep k of the r singular values of A and set the other ones to 0. We note the following:

1. Σ then consists of zeros and a block at the upper left corner. This block is a k×k diagonal matrix containing the k singular values of A we kept, in decreasing order. For storage purposes, Σ amounts to a k×1 vector (only the kept singular values need to be stored).

2. When we form the product ΣV^T, we only use the first k rows of V^T, hence the first k columns of V. So V^T, for storage purposes, is k×n.

3. The result of the product ΣV^T is a matrix in which only the first k rows contain nonzero entries; the remaining rows contain only zeros. Hence, for storage purposes, ΣV^T is k×n.

4. When we form the product UΣV^T, we only use the first k columns of U, hence for storage purposes, U is m×k.

We conclude that the total storage needed for UΣV^T, if we only keep k singular values of A, is mk + k + kn = k(m + n + 1). The size of the original image was mn. So we even have a formula for the compression ratio as a function of m, n and k:

    compression ratio = (size of original image) / (size of compressed image)
                      = mn / (k(m + n + 1)).                                (5.3)
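In MATLAB, the whole scheme takes only a few lines. Below is a minimal sketch, added as an illustration; cameraman.tif is one of MATLAB's standard demo images and stands in for any grayscale image (the image used in the figures below is not reproduced here):

    A = double(imread('cameraman.tif'));        % grayscale image as m-by-n matrix
    [m, n] = size(A);

    [U, S, V] = svd(A);

    k = 20;                                     % number of singular values kept
    Ak = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';  % rank-k approximation of A

    ratio = m*n / (k*(m + n + 1));              % compression ratio, formula (5.3)
    fprintf('k = %d, compression ratio = %.1f\n', k, ratio);
    imshow(uint8(Ak))   % imshow needs the Image Processing Toolbox;
                        % imagesc(Ak), colormap gray also works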

We look at a specific example. Consider the image of Figure 5.1. The rank of the corresponding matrix is 399. Below are the results of compressing this image by reducing the number of singular values we keep. For each image, we give how many singular values we kept (the rank), the percentage of the original rank this represents, as well as the compression ratio. The code which performs this can be downloaded here. A nice website which explains this compression, and where I got this information, can be found here.

Figure 5.1: Full Rank Image (399)

Figure 5.2: Rank 299 Image (75%) - Compression 1.2x

Figure 5.3: Rank 199 Image (50%) - Compression 1.9x

Figure 5.4: Rank 159 Image (40%) - Compression 2.3x

Figure 5.5: Rank 99 Image (25%) - Compression 3.7x

Figure 5.6: Rank 79 Image (20%) - Compression 4.7x

Figure 5.7: Rank 39 Image (10%) - Compression 9.5x

Figure 5.8: Rank 19 Image (5%) - Compression 19.5x

Figure 5.9: Rank 7 Image (2%) - Compression 52.9x

Figure 5.10: Rank 3 Image (1%) - Compression 123.5x

5.1.8 Exercises

1. Using your old linear algebra book and/or doing some research in other books or on the internet, prove the following results.

   (a) If A is a symmetric matrix, then the eigenvalues of A are real.
   (b) If A is orthogonal, then det A = ±1.

2. Suppose the factorization below is an SVD of a matrix A, with the entries in U and V rounded to two decimal places.

   A =

   (a) What is the rank of A?
   (b) Use this decomposition of A to write, without calculations, a basis for col A.
   (c) What are the singular values of A?
   (d) What are the eigenvalues of A^T A?
   (e) What is the largest value ‖Ax‖ can have for any unit vector x?

3. Suppose that A is an n×n square and invertible matrix. Find an SVD for A^{-1}. (Hint: if A is invertible, then rank A = n; this also gives information about Σ.)

4. Find an SVD for A = [ ] by following the steps of the example in section 5.1.4. You cannot use the MATLAB svd function, but you can use MATLAB for the intermediate computations. Show all the intermediate steps.


More information