
Norwegian University of Science and Technology
Department of Mathematical Sciences

TMA4145 Linear Methods
Fall 2017

Exercise set

Please justify your answers! The most important part is how you arrive at an answer, not the answer itself.

1. Given the matrix
$$A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \\ 2 & 2 \end{pmatrix}.$$

a) Compute the singular value decomposition of $A$.

b) Use the result of a) to find:

1. Bases for the following vector spaces: $\ker(A)$, $\ker(A^*)$, $\operatorname{ran}(A)$, $\operatorname{ran}(A^*)$.
2. The pseudo-inverse of $A$.
3. The minimal norm solution of $Ax = b$ for $b = (1, 1, 2)^T$.

Solution.

a) We need to find matrices $U$, $V$, $\Sigma$ with certain properties such that $A = U \Sigma V^*$. If we look back to the proof of the SVD in the lecture notes, we can deduce how to construct $U$, $V$ and $\Sigma$. Note that since $A$ is a $3 \times 2$ matrix with rank $2$, we will find that $V$ is a $2 \times 2$ matrix, $\Sigma$ a $3 \times 2$ matrix and $U$ a $3 \times 3$ matrix.

1. $V$ is picked as the matrix that diagonalizes $A^*A$, and therefore the columns of $V$ are the normalized eigenvectors of $A^*A$.

2. $\Sigma$ is the matrix with the positive singular values of $A$ (the square roots of the positive eigenvalues of $A^*A$) along the diagonal, and zeros elsewhere.

3. The first two columns of $U$ will be the vectors $\frac{1}{\sigma_1} A v_1$ and $\frac{1}{\sigma_2} A v_2$, where $v_1, v_2$ are the columns of $V$. Since we need $U$ to be a $3 \times 3$ matrix, we need one more column $u_3$, which we find by picking a normalized vector that is orthogonal to the first two columns.

If we did not have enough singular values to fill the diagonal of $\Sigma$, we would just put zeros in the rest of the diagonal of $\Sigma$. This will not be an issue in this example.
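Before carrying out the computation by hand, the three-step recipe can be checked numerically. The following is a minimal NumPy sketch (an illustration of the recipe, not part of the solution; completing $U$ by a cross product is just one convenient choice in this $3 \times 3$ case):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [2.0, 2.0]])

# Step 1: the columns of V are normalized eigenvectors of A*A.
eigvals, V = np.linalg.eigh(A.T @ A)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]      # reorder so that sigma_1 >= sigma_2
eigvals, V = eigvals[order], V[:, order]

# Step 2: the singular values are the square roots of these eigenvalues.
sigma = np.sqrt(eigvals)               # approximately [sqrt(17), 1]
Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(sigma)

# Step 3: u_i = (1/sigma_i) A v_i, completed to an orthonormal basis.
u1 = A @ V[:, 0] / sigma[0]
u2 = A @ V[:, 1] / sigma[1]
u3 = np.cross(u1, u2)                  # unit vector orthogonal to u1 and u2
U = np.column_stack([u1, u2, u3])

assert np.allclose(U @ Sigma @ V.T, A)  # A = U Sigma V^T
```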

Let us now find these matrices.

1. $A^*A = \begin{pmatrix} 9 & 8 \\ 8 & 9 \end{pmatrix}$ has characteristic polynomial $x^2 - 18x + 17 = (x - 17)(x - 1)$. Hence the eigenvalues of $A^*A$ are $\lambda_1 = 17$ and $\lambda_2 = 1$. The corresponding normalized eigenvectors are $v_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $v_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}$. Consequently, we have
$$V = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$

2. The singular values of $A$ are $\sigma_1 = \sqrt{17}$ and $\sigma_2 = 1$. Thus
$$\Sigma = \begin{pmatrix} \sqrt{17} & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}.$$

3. The first two columns of $U$ are given by
$$u_1 = \frac{1}{\sigma_1} A v_1 = \frac{1}{\sqrt{17}} \cdot \frac{1}{\sqrt{2}} \begin{pmatrix} 3 \\ 3 \\ 4 \end{pmatrix} = \frac{1}{\sqrt{34}} \begin{pmatrix} 3 \\ 3 \\ 4 \end{pmatrix}$$
and by
$$u_2 = \frac{1}{\sigma_2} A v_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}.$$
Consequently, $U$ has the form
$$U = \begin{pmatrix} 3/\sqrt{34} & -1/\sqrt{2} & x_1 \\ 3/\sqrt{34} & 1/\sqrt{2} & x_2 \\ 4/\sqrt{34} & 0 & x_3 \end{pmatrix}.$$
The last column is determined by the requirement that it be normalized and orthogonal to the first two columns. The choice
$$u_3 = \frac{1}{\sqrt{17}} \begin{pmatrix} -2 \\ -2 \\ 3 \end{pmatrix}$$
satisfies these conditions, but there are many other ways to complete the first two columns to an orthonormal basis for $\mathbb{C}^3$. With this choice,
$$U = \begin{pmatrix} 3/\sqrt{34} & -1/\sqrt{2} & -2/\sqrt{17} \\ 3/\sqrt{34} & 1/\sqrt{2} & -2/\sqrt{17} \\ 4/\sqrt{34} & 0 & 3/\sqrt{17} \end{pmatrix}.$$
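The matrices found by hand can be verified directly; this sketch (using the explicit entries derived above) checks that the columns of $U$ and $V$ are orthonormal and that the factorization reproduces $A$:

```python
import numpy as np

s2, s17, s34 = np.sqrt(2), np.sqrt(17), np.sqrt(34)
A = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
U = np.array([[3/s34, -1/s2, -2/s17],
              [3/s34,  1/s2, -2/s17],
              [4/s34,   0.0,  3/s17]])
Sigma = np.array([[s17, 0.0], [0.0, 1.0], [0.0, 0.0]])
V = np.array([[1/s2,  1/s2],
              [1/s2, -1/s2]])

assert np.allclose(U.T @ U, np.eye(3))   # columns of U are orthonormal
assert np.allclose(V.T @ V, np.eye(2))   # columns of V are orthonormal
assert np.allclose(U @ Sigma @ V.T, A)   # A = U Sigma V^*
```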

4. The SVD of $A$ is
$$A = U \Sigma V^* = \begin{pmatrix} 3/\sqrt{34} & -1/\sqrt{2} & -2/\sqrt{17} \\ 3/\sqrt{34} & 1/\sqrt{2} & -2/\sqrt{17} \\ 4/\sqrt{34} & 0 & 3/\sqrt{17} \end{pmatrix} \begin{pmatrix} \sqrt{17} & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix} \cdot \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$

b)

1. This follows from Proposition 7.4 in the notes (recall that $r = 2$ in our case, by inspection):
$$\ker(A) = \{0\}, \quad \ker(A^*) = \operatorname{span}(u_3), \quad \operatorname{ran}(A) = \operatorname{span}(u_1, u_2), \quad \operatorname{ran}(A^*) = \operatorname{span}(v_1, v_2).$$

2. By the discussion of the pseudo-inverse in the notes, $A^+ = V \Sigma^+ U^*$, where $\Sigma^+$ is the matrix obtained from $\Sigma$ by replacing the singular values $\sigma_i$ with $\sigma_i^{-1}$ and taking the transpose. Hence
$$A^+ = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 1/\sqrt{17} & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 3/\sqrt{34} & 3/\sqrt{34} & 4/\sqrt{34} \\ -1/\sqrt{2} & 1/\sqrt{2} & 0 \\ -2/\sqrt{17} & -2/\sqrt{17} & 3/\sqrt{17} \end{pmatrix} = \frac{1}{17} \begin{pmatrix} -7 & 10 & 2 \\ 10 & -7 & 2 \end{pmatrix}.$$

3. By the same discussion, the minimal norm least squares solution is given by
$$A^+ b = \frac{1}{17} \begin{pmatrix} -7 & 10 & 2 \\ 10 & -7 & 2 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} = \frac{1}{17} \begin{pmatrix} 7 \\ 7 \end{pmatrix}.$$
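Part b) can be checked in the same spirit. NumPy's `np.linalg.pinv` also computes the pseudo-inverse from the SVD, so it should agree with the matrix above up to rounding; applying $A^+$ to $b$ then gives the minimal norm least squares solution:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
A_plus = np.array([[-7.0, 10.0, 2.0],
                   [10.0, -7.0, 2.0]]) / 17.0

assert np.allclose(np.linalg.pinv(A), A_plus)

b = np.array([1.0, 1.0, 2.0])   # right-hand side from part 3
x = A_plus @ b                  # minimal norm least squares solution
print(x)                        # [7/17, 7/17], approximately [0.4118, 0.4118]
```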

2. Let $U$ be an $n \times n$ matrix with columns $u_1, \ldots, u_n$. Show that the following statements are equivalent:

1. $U$ is unitary.
2. $\{u_1, \ldots, u_n\}$ is an orthonormal basis of $\mathbb{C}^n$.

Solution. Note that the column $u_i$ is of the form $u_i = (u_{1,i}, u_{2,i}, \ldots, u_{n,i})^T$, and the inner product between two columns is given by
$$\langle u_i, u_j \rangle = \sum_{k=1}^{n} u_{k,i} \overline{u_{k,j}} = u_j^* u_i,$$
where $u_j^* u_i$ is the usual dot product for vectors in $\mathbb{C}^n$ and
$$u_j = \begin{pmatrix} u_{1,j} \\ u_{2,j} \\ \vdots \\ u_{n,j} \end{pmatrix}.$$

Assume that $U^*U = I$. Recall that element $(i,j)$ of the matrix product $U^*U$ is the dot product of row $i$ of $U^*$ with column $j$ of $U$. Since row $i$ of $U^*$ is $u_i^*$, this means that element $(i,j)$ of $U^*U$ is $u_i^* u_j$. Furthermore $U^*U = I$, which implies that $u_i^* u_j = \delta_{i,j}$ for $i, j = 1, \ldots, n$. Then we have
$$\langle u_j, u_i \rangle = u_i^* u_j = \delta_{i,j},$$
hence $(u_1, u_2, \ldots, u_n)$ is an orthonormal system of vectors in $\mathbb{C}^n$. To show that it is a basis for $\mathbb{C}^n$ it is enough to note that $\mathbb{C}^n$ has dimension $n$ and the system consists of $n$ vectors: an orthonormal system is always linearly independent, so the columns form a linearly independent subset of $n$ vectors in an $n$-dimensional space, and it follows that the columns form a basis.

Conversely, assume that the columns $u_1, u_2, \ldots, u_n$ of $U$ are an orthonormal basis of $\mathbb{C}^n$, i.e. $\langle u_i, u_j \rangle = \delta_{i,j}$ for $i, j = 1, \ldots, n$. Then we have $u_i^* u_j = \langle u_j, u_i \rangle = \delta_{i,j}$, hence $U^*U = I$. One gets that $UU^* = I$ from exactly the same argument, or by knowing that a left inverse of a matrix is also a two-sided inverse.
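As a numerical illustration of this equivalence (the random test matrix and the use of a QR factorization to produce a unitary matrix are choices made here, not part of the exercise):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(Z)            # Q is unitary

# U*U = I is exactly the statement <u_i, u_j> = delta_ij for the columns:
G = Q.conj().T @ Q                # Gram matrix of the columns of Q
assert np.allclose(G, np.eye(n))
# A left inverse is also a right inverse in finite dimensions:
assert np.allclose(Q @ Q.conj().T, np.eye(n))
```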

3. Let $T$ be the shift operator on $\ell^2$ defined by
$$T(x_1, x_2, \ldots) = (0, x_1, x_2, \ldots).$$

1. Show that $T$ has no eigenvalues.
2. Does $T^*$ have any eigenvalues?

Solution.

1. Assume that $\lambda$ is an eigenvalue of $T$ with eigenvector $y = (y_1, y_2, \ldots)$. Then $Ty = \lambda y$, and writing out both sides we find
$$(0, y_1, y_2, \ldots) = (\lambda y_1, \lambda y_2, \lambda y_3, \ldots). \quad (1)$$
In particular $\lambda y_1 = 0$, which implies that either $y_1 = 0$ or $\lambda = 0$. If $\lambda = 0$, then equality (1) becomes
$$(0, y_1, y_2, \ldots) = (0, 0, \ldots),$$
which shows that $y = 0$, hence $y$ is not an eigenvector. We may therefore assume that $y_1 = 0$ and $\lambda \neq 0$. In this case the equality (1) becomes
$$(0, 0, y_2, \ldots) = (0, \lambda y_2, \lambda y_3, \ldots), \quad (2)$$
which implies that $y_2 = 0$. Inserting this back into the equation, we find
$$(0, 0, 0, \ldots) = (0, 0, \lambda y_3, \ldots), \quad (3)$$
hence $y_3 = 0$. We may clearly continue like this to show that $y$ is $0$ in all coordinates, hence $y = 0$ and $y$ is not an eigenvector.

2. We know from the lectures (and it is not difficult to show) that
$$T^*(x_1, x_2, \ldots) = (x_2, x_3, \ldots).$$
This operator has eigenvalues. For instance, let
$$y = \left(1, \frac{1}{2}, \frac{1}{2^2}, \ldots, \frac{1}{2^p}, \ldots\right).$$
Then
$$T^* y = \left(\frac{1}{2}, \frac{1}{2^2}, \ldots, \frac{1}{2^p}, \ldots\right) = \frac{1}{2} y,$$
hence $y$ is an eigenvector with eigenvalue $\frac{1}{2}$. Note that $y \in \ell^2$, which we need since we defined $T$ on $\ell^2$.

4. Let $X$ be a finite-dimensional vector space and $T : X \to X$ a linear transformation on $X$. Show that
$$X = \ker(T) \oplus \operatorname{ran}(T^*),$$
where $\oplus$ denotes the direct sum of vector spaces.

Hint: Use that we know that $\ker(T)^\perp = \operatorname{ran}(T^*)$.

Solution. First note that any finite-dimensional vector space can be made into a Hilbert space: if $\{e_i\}_{i=1}^n$ is a basis for $X$ and $x, y \in X$, then we may write
$$x = \sum_{i=1}^{n} x_i e_i, \qquad y = \sum_{i=1}^{n} y_i e_i$$
for coefficients $\{x_i\}_{i=1}^n$ and $\{y_i\}_{i=1}^n$. If we define the inner product
$$\langle x, y \rangle = \sum_{i=1}^{n} x_i \overline{y_i},$$
then $X$ becomes a Hilbert space. Any linear transformation on a finite-dimensional normed space is bounded, and by a previous exercise we then know that $\ker(T)$ is a closed subspace of $X$. By the projection theorem we get that
$$X = \ker(T) \oplus \ker(T)^{\perp},$$
and since $\ker(T)^{\perp} = \operatorname{ran}(T^*)$ we have proved the result.

Note: If we look at the dimensions in $X = \ker(T) \oplus \operatorname{ran}(T^*)$, we see that the dimension of $X$ must be the sum of the dimensions of $\ker(T)$ and $\operatorname{ran}(T^*)$. If we pick a basis to write $T = (T_{ij})$ as an $n \times n$ matrix, this says that
$$n = \dim(\ker(T)) + \dim(\operatorname{ran}(T^*)) = \operatorname{nullity}(T) + \operatorname{rank}(T),$$
which is the familiar rank-nullity theorem (assuming that we know that $\dim(\operatorname{ran}(T^*)) = \dim(\operatorname{ran}(T))$, which is the statement that the row space and column space of a matrix have the same dimension).
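For matrices, the decomposition in Problem 4 and the rank-nullity count can be seen concretely via the SVD: the right singular vectors with positive singular values span $\operatorname{ran}(T^*)$ (the row space), and the remaining ones span $\ker(T)$. A small sketch with an arbitrary example matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
T = rng.standard_normal((n, n))
T[:, 2] = T[:, 0] + T[:, 1]      # dependent columns force a nontrivial kernel

U, s, Vt = np.linalg.svd(T)
r = int(np.sum(s > 1e-10))       # numerical rank of T
ran_T_star = Vt[:r].T            # orthonormal basis for ran(T^*)
ker_T = Vt[r:].T                 # orthonormal basis for ker(T)

# dim ker(T) + dim ran(T^*) = n, and the two subspaces are orthogonal:
assert ker_T.shape[1] + ran_T_star.shape[1] == n
assert np.allclose(ker_T.T @ ran_T_star, 0.0)
```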