MATH 5524 MATRIX THEORY Problem Set 4


Posted Tuesday 28 March 2017. Due Tuesday 4 April 2017. [Corrected 3 April 2017.] [Late work is due on Wednesday 5 April.] Complete any four problems, 25 points each.

1. Let $A \in \mathbb{C}^{m\times n}$ be a full rank matrix, with $m > n$. In general, $Ax \ne b$ for all $x \in \mathbb{C}^n$. The least squares problem amounts to finding the optimal approximation to $b \in \mathbb{C}^m$ from $R(A)$:
$$\min_{x\in\mathbb{C}^n} \|b - Ax\|_2 = \min_{\widehat{b} \in R(A)} \|b - \widehat{b}\|_2.$$
In other words, the standard least squares problem seeks the smallest perturbation $\delta b$ such that there exists some $x$ for which $Ax = b + \delta b$. Implicitly, we are thus assuming that the matrix $A$ is exact, but the data $b$ has some errors.

An alternative approach, called total least squares, allows for errors in both $A$ and $b$. Now we look for the smallest $\delta A$ and $\delta b$ such that there exists some $x$ for which $(A + \delta A)x = b + \delta b$, i.e.,
$$[\,A + \delta A \;\;\; b + \delta b\,] \begin{bmatrix} x \\ -1 \end{bmatrix} = 0. \qquad (\star)$$
This equation implies that the matrix $[\,A + \delta A \;\;\; b + \delta b\,] \in \mathbb{C}^{m\times(n+1)}$ has rank less than $n+1$. (Recall that $m > n$.)

(a) Use the singular value decomposition of the matrix $[\,A \;\; b\,]$ to describe how to compute the matrix $[\,\delta A \;\; \delta b\,]$ that makes $[\,A + \delta A \;\;\; b + \delta b\,]$ rank-deficient and minimizes $\|[\,\delta A \;\; \delta b\,]\|_2$.

(b) Use the optimal $[\,\delta A \;\; \delta b\,]$ from (a) to write a simple formula for the solution $x$ in $(\star)$ in terms of appropriate singular values and/or vectors of $[\,A \;\; b\,]$. (You might note when this construction breaks down, as a unique solution need not always exist.)

(c) Explain why $\min_{x\in\mathbb{C}^n} \|Ax - b\|_2$ cannot be smaller than the smallest singular value of $[\,A \;\; b\,]$.

(d) Compute (in MATLAB) the solution $x$ produced by (i) standard least squares and (ii) total least squares for
$$A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \\ 1 & 2 \end{bmatrix}, \qquad b = \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}.$$
For (i), also report $\delta b = Ax - b$ and $\|\delta b\|$; for (ii), report $\|\delta A\|$, $\|\delta b\|$, and $\|[\,\delta A \;\; \delta b\,]\|$ as in part (b).
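
One possible MATLAB sketch for part (d), using the SVD construction that parts (a) and (b) ask you to derive; the variable names are ours, and the data assumes the matrix and vector as reconstructed above:

% Sketch for 1(d); A and b as given above
A = [1 2; 2 1; 1 2];  b = [0; 1; 2];

% (i) standard least squares via backslash
x_ls  = A \ b;
db_ls = A*x_ls - b;                   % perturbation to b
fprintf('LS:  ||db|| = %.4e\n', norm(db_ls));

% (ii) total least squares via the SVD of [A b]: the smallest singular
% triplet gives the minimal rank-reducing perturbation [dA db]
[U,S,V] = svd([A b]);
v = V(:,end);                         % right singular vector for s_min
x_tls = -v(1:end-1) / v(end);         % breaks down if v(end) = 0
E  = -S(end,end) * U(:,end) * v';     % optimal [dA db] (rank one)
dA = E(:,1:end-1);  db = E(:,end);
fprintf('TLS: ||dA|| = %.4e, ||db|| = %.4e, ||[dA db]|| = %.4e\n', ...
        norm(dA), norm(db), norm(E));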

2. (a) Let $A \in \mathbb{C}^{m\times n}$ be a rank-$r$ matrix whose singular value decomposition can be expressed as
$$A = \sum_{j=1}^{r} s_j u_j v_j^*.$$
We defined the pseudoinverse of $A$ to be
$$A^+ = \sum_{j=1}^{r} \frac{1}{s_j} v_j u_j^*.$$
Show that $X = A^+$ satisfies the four Penrose conditions: (i) $AXA = A$; (ii) $XAX = X$; (iii) $(AX)^* = AX$; (iv) $(XA)^* = XA$.

(b) Conditions (iii) and (iv) in part (a) might seem trivial, but they are important. Suppose we write the full singular value decomposition of the rank-$r$ matrix $A$ as
$$A = U \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix} V^*,$$
where $\Sigma_r = \mathrm{diag}(s_1, \ldots, s_r) \in \mathbb{C}^{r\times r}$, the zero blocks have appropriate dimension (e.g., the $(1,2)$ block is in $\mathbb{C}^{r\times(n-r)}$), and $U \in \mathbb{C}^{m\times m}$ and $V \in \mathbb{C}^{n\times n}$ are unitary. Show that
$$X = V \begin{bmatrix} \Sigma_r^{-1} & K \\ L & L\Sigma_r K \end{bmatrix} U^*$$
satisfies Penrose conditions (i) and (ii) for any $K \in \mathbb{C}^{r\times(m-r)}$ and $L \in \mathbb{C}^{(n-r)\times r}$. Does $X$ satisfy (iii) and (iv) when $L$ and $K$ are nonzero?

(c) For arbitrary $A \in \mathbb{C}^{m\times n}$, show
$$A^+ = \lim_{t\to 0^+} (A^*A + tI)^{-1} A^*.$$

(d) For arbitrary $A \in \mathbb{C}^{m\times n}$, show
$$A^+ = \int_0^\infty e^{-A^*A\,t} A^* \, dt.$$

(e) [optional] For arbitrary $A \in \mathbb{C}^{m\times n}$: let $\Gamma$ be a closed contour in the complex plane that encloses all nonzero eigenvalues of $A^*A$ but does not enclose the origin. Show that
$$A^+ = \frac{1}{2\pi i} \int_\Gamma \frac{1}{z} (zI - A^*A)^{-1} A^* \, dz.$$
[Stewart; Campbell & Meyer]

3. (a) Given the matrix
$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{bmatrix},$$
construct a cubic polynomial $p$ such that $p(A) = (I + A)^{-1}$.

(b) Given the matrix
$$A = \begin{bmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix},$$
construct a cubic polynomial $p$ such that $p(A) = e^A$.

(c) The Drazin inverse is an alternative to the pseudoinverse for square matrices; it is defined as follows. Suppose the Jordan canonical form of $A \in \mathbb{C}^{n\times n}$ can be written as
$$A = [\,V_\lambda \;\; V_0\,] \begin{bmatrix} J_\lambda & 0 \\ 0 & J_0 \end{bmatrix} [\,V_\lambda \;\; V_0\,]^{-1},$$
where $0 \notin \sigma(J_\lambda)$ and $\sigma(J_0) = \{0\}$. Then the Drazin inverse can be written as
$$A^D = [\,V_\lambda \;\; V_0\,] \begin{bmatrix} J_\lambda^{-1} & 0 \\ 0 & 0 \end{bmatrix} [\,V_\lambda \;\; V_0\,]^{-1}.$$
Construct a degree $n-1$ polynomial $p$ such that $A^D = p(A)$. (The Drazin inverse plays an important role in the solution of differential algebraic equations. Stewart and Sun write, "The clear winner in the generalized inverse sweepstakes is the pseudoinverse applied to full rank problems.... A distant second is the Drazin generalized inverse.")
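
A quick numerical sanity check of the limit formula in 2(c) can be run before attempting the proof; this sketch uses an arbitrary rank-deficient test matrix and compares against MATLAB's pinv:

% Sketch: check A^+ = lim_{t->0+} (A^*A + tI)^{-1} A^*  against pinv
rng(1);                              % arbitrary reproducible test matrix
A = randn(6,2) * randn(2,4);         % 6x4 with rank 2
for t = 10.^(-2:-2:-8)
    X = (A'*A + t*eye(4)) \ A';      % (A^*A + tI)^{-1} A^*
    fprintf('t = %8.1e:  ||X - A^+|| = %.2e\n', t, norm(X - pinv(A)));
end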

4. In class we will claim that $e^A e^B \ne e^{A+B}$ in general. This question investigates some of the subtleties involved in this statement.

(a) Prove that if $A$ and $B$ commute ($AB = BA$), then $e^A e^B = e^{A+B}$.

(b) Consider the matrices
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 2\pi i \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & 0 \\ 0 & 2\pi i \end{bmatrix}.$$
Show (by hand) that $A$ and $B$ do not commute, yet $e^A = e^B = e^{A+B} = I$. (Hence $e^A e^B = e^{A+B}$.) [Horn and Johnson]

(c) Consider the matrices
$$A = \begin{bmatrix} \pi i & 0 \\ 0 & -\pi i \end{bmatrix}, \qquad B = \begin{bmatrix} -\pi i & 1 \\ 0 & \pi i \end{bmatrix}.$$
Show (by hand) that $A$ and $B$ do not commute, but $e^A e^B = e^B e^A \ne e^{A+B}$. [Horn and Johnson]

5. This problem concerns the matrix sign function. For scalar $z$, define
$$\mathrm{sign}(z) = \begin{cases} -1, & \mathrm{Re}\, z < 0; \\ \phantom{-}1, & \mathrm{Re}\, z > 0 \end{cases}$$
($\mathrm{sign}(z)$ is not defined for $z$ on the imaginary axis). The matrix sign function $\mathrm{sign}(A)$ is a useful tool in control theory and quantum chromodynamics.

(a) Let $A = VJV^{-1}$ denote the Jordan canonical form of a matrix $A$ with no purely imaginary eigenvalues. Suppose that $V$ and $J$ are partitioned in the form
$$V = [\,V_1 \;\; V_2\,], \qquad J = \begin{bmatrix} J_1 & 0 \\ 0 & J_2 \end{bmatrix},$$
where all eigenvalues associated with the Jordan blocks in $J_1$ are in the left half of the complex plane, while those associated with the blocks in $J_2$ are in the right half plane. Use one of our usual approaches for defining $f(A)$ to write down a concise expression for $\mathrm{sign}(A)$ in terms of the Jordan form, and confirm that $\mathrm{sign}(A)^2 = I$.

(b) Consider a generic matrix-valued function $F : \mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n}$. Newton's method attempts to compute a solution of the equation $F(X) = 0$ via the iteration
$$X_{k+1} = X_k - G(X_k)^{-1} F(X_k),$$
where $G(X_k) : \mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n}$ denotes the Fréchet derivative of $F$ evaluated at $X_k$; often this is easier to view as solving $G(X_k)(X_k - X_{k+1}) = F(X_k)$.

To compute the matrix sign function, we will show how this method works for $F(X) = X^2 - I$. We can compute $G(X)E$, the Fréchet derivative of $F$ applied to the matrix $E$, as the linear term (in $E$) in the expansion
$$F(X + E) = (X + E)^2 - I = (X^2 - I) + (XE + EX) + E^2,$$
i.e., $G(X)E = XE + EX$. Newton's method seeks the $E$ that makes $F(X + E) = 0$, neglecting the quadratic $E^2$ term, i.e., $XE + EX = I - X^2$.
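
Before doing 4(b) by hand, the claims can be checked numerically with expm; this sketch assumes the matrices as reconstructed above:

% Sketch: numerical check of the claims in 4(b)
A = [0 1; 0 2i*pi];
B = [0 0; 0 2i*pi];
disp(norm(A*B - B*A))            % nonzero: A and B do not commute
disp(norm(expm(A)   - eye(2)))   % e^A = I
disp(norm(expm(B)   - eye(2)))   % e^B = I
disp(norm(expm(A+B) - eye(2)))   % e^{A+B} = I as well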

Since we neglected $E^2$, we do not exactly have $F(X + E) = 0$, so instead we iterate. Given $X_k$, solve
$$X_k E_k + E_k X_k = I - X_k^2 \qquad (\dagger)$$
for $E_k$, then set $X_{k+1} = X_k + E_k$, and repeat. All you need to do for part (b) of this problem is to show that $E_k = \frac{1}{2}(X_k^{-1} - X_k)$ satisfies $(\dagger)$, and hence Newton's method yields the iteration
$$X_{k+1} = \tfrac{1}{2}(X_k + X_k^{-1}).$$

(c) Write a MATLAB code to implement the iteration in part (b), using $X_0 = A$. Test your code out on the matrix in part (e) below. (Higham observes that this is one of the rare circumstances in numerical analysis where explicit computation of a matrix inverse is required. So, use inv with abandon!)

(d) Suppose $A$ is Hermitian, and that you have a black box for computing $\mathrm{sign}(A)$. Describe a (not necessarily efficient!) numerical algorithm for computing all the eigenvalues of $A$.

(e) Implement the eigenvalue algorithm in part (d), and test it on the tridiagonal matrix
$$A = \begin{bmatrix} 0 & 1 & & \\ 1 & 0 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & 0 \end{bmatrix} \in \mathbb{C}^{16\times 16}.$$
(Within your algorithm, use your Newton-based algorithm from part (c) to compute $\mathrm{sign}(A)$.) [adapted from Higham]

6. Recall that our earlier block diagonalization of a square matrix required the solution of a Sylvester equation of the form $AX + XB = C$. The same equation arises in control theory, where it is common for $A \in \mathbb{C}^{n\times n}$ and $B \in \mathbb{C}^{m\times m}$ to both be stable, meaning that all of their eigenvalues have negative real part. Make that assumption about $A$ and $B$ throughout this problem.

(a) Show that
$$X = -\int_0^\infty e^{tA}\, C\, e^{tB} \, dt$$
solves the equation $AX + XB = C$.

(b) Let $\mu \in \mathbb{R}$ be positive. Manipulate $AX + XB = C$ to show that
$$X = -(A - \mu I)^{-1} X (B + \mu I) + (A - \mu I)^{-1} C$$
and
$$X = -(A + \mu I)\, X\, (B - \mu I)^{-1} + C (B - \mu I)^{-1},$$
and hence conclude that $X$ satisfies the Stein equation
$$X = A_\mu X B_\mu + C_\mu,$$
for
$$A_\mu := (A + \mu I)(A - \mu I)^{-1}, \qquad B_\mu := (B + \mu I)(B - \mu I)^{-1}, \qquad C_\mu := -2\mu\, (A - \mu I)^{-1} C (B - \mu I)^{-1}.$$
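
A minimal sketch of the iteration that 5(c) asks for, written as a MATLAB function; the stopping tolerance and iteration cap are arbitrary choices, not part of the problem statement:

function S = signm_newton(A)
% Matrix sign function via the Newton iteration X_{k+1} = (X_k + X_k^{-1})/2.
X = A;                                   % X_0 = A
for k = 1:100                            % arbitrary iteration cap
    Xnew = (X + inv(X)) / 2;             % explicit inverse, per Higham's remark
    if norm(Xnew - X, 1) <= 1e-12*norm(Xnew, 1)
        X = Xnew;
        break
    end
    X = Xnew;
end
S = X;
end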

(c) Explain why the series
$$X = \sum_{j=0}^{\infty} A_\mu^j\, C_\mu\, B_\mu^j$$
converges, and show that it solves the Stein equation $X = A_\mu X B_\mu + C_\mu$.

(d) In many control theory applications, $C$ has low rank. Suppose that $C$ has rank 1 and that $A$ and $B$ are diagonalizable: $A = V\Lambda V^{-1}$ and $B = Y\Phi Y^{-1}$. Use the partial sum
$$X_k = \sum_{j=0}^{k} A_\mu^j\, C_\mu\, B_\mu^j$$
to develop an upper bound on the singular values of $X$:
$$s_{k+1}(X) \le \gamma \rho^k$$
for constants $\gamma \ge 1$ and $\rho \in (0, 1)$ that you should specify.

By constructing the partial sum $X_k$, you have an algorithm for constructing the solution $X$, called Smith's method or the Alternating Direction Implicit (ADI) method. The bound on $s_{k+1}(X)$ proves that the singular values of $X$ decay exponentially at the rate $\rho$, a fact with deep implications.

7. Suppose the columns of $A \in \mathbb{C}^{m\times n}$ for $m \ge n$ are approximately orthonormal. In many situations we must assess the departure of these columns from orthonormality. The quantity $\|A^*A - I\|$ provides one natural way to measure this departure. Another approach comes from the polar decomposition $A = ZR$, where $Z \in \mathbb{C}^{m\times n}$ is a subunitary matrix (i.e., $Z^*Z = I$) with $R(A) = R(Z)$. Then one could use $\|A - Z\|$ to gauge the departure of $A$ from orthonormality. Show that
$$\frac{\|A^*A - I\|}{1 + s_1(A)} \le \|A - Z\| \le \frac{\|A^*A - I\|}{1 + s_n(A)},$$
where $s_1(A)$ and $s_n(A)$ denote the largest and smallest singular values of $A$. (This bound implies that these two measures of the departure from orthonormality are essentially equivalent.) [Higham]
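
To see the singular value decay that 6(d) predicts, one can run Smith's method on a toy stable problem; everything in this sketch (the test matrices, the choice of $\mu$, the iteration count) is an arbitrary illustrative choice:

% Sketch: Smith's method (partial sums of the Stein series) with rank-1 C
n = 50;  mu = 1;  rng(2);
A = -eye(n) + 0.3*triu(randn(n),1);      % upper triangular, eigenvalues -1: stable
B = A';                                  % also stable
C = randn(n,1) * randn(1,n);             % rank-1 right-hand side
I = eye(n);
Amu = (A + mu*I) / (A - mu*I);           % (A + mu I)(A - mu I)^{-1}
Bmu = (B + mu*I) / (B - mu*I);           % (B + mu I)(B - mu I)^{-1}
Cmu = -2*mu * ((A - mu*I) \ C) / (B - mu*I);
X = Cmu;  T = Cmu;
for j = 1:40                             % partial sum X_40
    T = Amu * T * Bmu;                   % A_mu^j C_mu B_mu^j
    X = X + T;
end
semilogy(svd(X), '.-'), xlabel('k'), ylabel('s_k(X)')   % exponential decay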