MATH 5524 MATRIX THEORY Problem Set 5


Posted Tuesday 11 April 2017. Due Tuesday 18 April 2017. [Late work is due on Wednesday 19 April 2017.] Complete any four problems, 25 points each.

Recall the definitions of the numerical range (field of values) of A ∈ C^{n×n} and of the ε-pseudospectrum for ε > 0:

    W(A) = { (x*Ax)/(x*x) : x ∈ C^n, x ≠ 0 },

    σ_ε(A) = { z ∈ C : ‖(zI − A)^{−1}‖ > 1/ε }
           = { z ∈ C : z ∈ σ(A + E) for some E ∈ C^{n×n} with ‖E‖ < ε }.

1. The derivative of a distinct eigenvalue λ_k of a generic matrix A(t) = A_0 + tE is given by

    λ_k′(0) = (w_k* E v_k)/(w_k* v_k),

where v_k and w_k are the corresponding right and left eigenvectors of A_0.

(a) Suppose A ∈ C^{n×n} has singular value decomposition A = Σ_{j=1}^{n} s_j u_j v_j*, and consider the matrix

    H = [ 0    A ]
        [ A*   0 ]  ∈ C^{2n×2n}.

Show that σ(H) = {±s_1, …, ±s_n}. What are the corresponding right and left eigenvectors of H?

(b) Use the result stated at the beginning of this problem to derive a formula for the derivative of a distinct nonzero singular value of A + tE.

This result forms the basis of curve-tracing algorithms for computing σ_ε(A): given some z on the boundary of σ_ε(A), one seeks to follow the contour in C on which s_n(zI − A) = 1/‖(zI − A)^{−1}‖ = ε.

2. Let A ∈ C^{n×n} be a matrix with distinct eigenvalues λ_1, …, λ_n and corresponding right eigenvectors v_1, …, v_n and left eigenvectors w_1, …, w_n. The derivative of the eigenvalue λ_k(t) of A + tE with respect to t (for ‖E‖ = 1) at t = 0 is bounded in magnitude by the eigenvalue condition number

    κ(λ_k) = (‖v_k‖ ‖w_k‖)/|w_k* v_k|.

Consider the matrix

    A = [ −1    α ]
        [  0   −2 ],

where α is a real constant.

(a) Compute the eigenvalue condition numbers κ(λ_1) and κ(λ_2) for the eigenvalues λ_1 = −1 and λ_2 = −2.

(b) Compute the matrix exponential e^{tA}.
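As a numerical sanity check (not part of the assignment), the eigenvalue-derivative formula stated at the start of Problem 1 can be compared against a finite difference. A minimal NumPy sketch, using a random A_0 and E; the rows of inv(V) serve as the left eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A0 = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))

# Right eigenvectors: columns of V.  The kth row of W = inv(V) is a left
# eigenvector of A0 (W A0 = diag(lam) W).
lam, V = np.linalg.eig(A0)
W = np.linalg.inv(V)

k = 0
predicted = (W[k, :] @ E @ V[:, k]) / (W[k, :] @ V[:, k])

# Finite-difference derivative of lambda_k(t) at t = 0.
h = 1e-6
lam_h = np.linalg.eigvals(A0 + h * E)
fd = (lam_h[np.argmin(np.abs(lam_h - lam[k]))] - lam[k]) / h

print(abs(predicted - fd))  # should be tiny
```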

(c) What is lim_{t→∞} e^{tA}? (You do not need to explicitly compute the norm to answer this question.)

(d) For fixed α > 4, find the value of t ≥ 0 that maximizes the largest entry of e^{tA}.

(e) Compute a formula (dependent on α) for (d/dt)‖e^{tA}‖ at t = 0. For what values of α is this quantity positive?

(f) What do your solutions to (c), (d), and (e) suggest about the general behavior of ‖e^{tA}‖ for t ∈ (0, ∞) with α > 4? How does max_{t≥0} ‖e^{tA}‖ depend on α? (Again, you do not need to explicitly compute any matrix norms; just describe the general behavior at small t, intermediate t, and large t.)

3. Consider the matrix

    A = [ 0   α ]
        [ β   0 ]

for real numbers α, β ≥ 0.

(a) For every z ∈ W(A), one can show that there exists a unit vector of the form

    x = [ t ; e^{iθ} √(1 − t²) ]

with t ∈ [0, 1] and θ ∈ [0, 2π) such that z = x*Ax. (You do not need to prove this; up to a factor of a complex sign e^{iφ}, this form for x is just a generic unit vector in C².) Show that

    x*Ax = t √(1 − t²) ( (α + β) cos θ + i(α − β) sin θ ),

and explain why this implies that W(A) contains all the points on an ellipse in the complex plane.

(b) What is the center of this ellipse? What are the major and minor axes?

(c) For which pairs of α and β does this ellipse degenerate into a point or a line segment? (Describe all such cases.)

(d) Use this result to plot (in MATLAB, etc.) the boundary of W(A) and the eigenvalues of A for (i) α = 1 and β = 0; (ii) α = 2 and β = 1; (iii) α = 1 and β = 1.

This problem is the first step in proving that W(A) is convex for any A ∈ C^{n×n}. One can reduce the n × n problem down to a 2 × 2 problem, then reduce a generic 2 × 2 problem to this form using shifts and unitary transformations. For the complete proof, see Horn and Johnson.

4. This exercise is designed to give you some intuition for transient growth in a dynamical system. The Leslie matrix arises in models for the (female) population of a given species with fixed birth rates and survivability levels.
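Before setting up the Leslie model, the identity claimed in Problem 3(a) is easy to sanity-check numerically. A minimal NumPy sketch with illustrative values α = 2, β = 1 (these values are my choice, not prescribed by the problem):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 2.0, 1.0                       # illustrative values only
A = np.array([[0.0, alpha], [beta, 0.0]])

max_dev = 0.0
for _ in range(200):
    t = rng.uniform(0.0, 1.0)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    x = np.array([t, np.exp(1j * theta) * np.sqrt(1.0 - t**2)])  # unit vector
    z = np.conj(x) @ A @ x                   # Rayleigh quotient x*Ax
    w = t * np.sqrt(1.0 - t**2) * ((alpha + beta) * np.cos(theta)
                                   + 1j * (alpha - beta) * np.sin(theta))
    max_dev = max(max_dev, abs(z - w))

print(max_dev)  # agreement to machine precision
```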
The population is divided into n brackets of y years each; an average member of bracket k gives birth to b_k ≥ 0 females in the next y years, and has probability s_k ∈ [0, 1] of surviving the next y years. Letting p_k^{(j)} denote the population in the kth bracket in the jth generation, we see that the population evolves according to the matrix equation (e.g., for n = 5)

    [ p_1^{(j+1)} ]   [ b_1  b_2  b_3  b_4  b_5 ] [ p_1^{(j)} ]
    [ p_2^{(j+1)} ]   [ s_1                     ] [ p_2^{(j)} ]
    [ p_3^{(j+1)} ] = [      s_2                ] [ p_3^{(j)} ],
    [ p_4^{(j+1)} ]   [           s_3           ] [ p_4^{(j)} ]
    [ p_5^{(j+1)} ]   [                s_4      ] [ p_5^{(j)} ]

with unspecified entries equal to zero. (We presume the mortality of the last age bracket.) Denote the matrix as A, so that p^{(j+1)} = A p^{(j)}, and hence p^{(j)} = A^j p^{(0)}. Earlier on this problem set, we considered how transient growth in matrix powers is linked to the sensitivity of eigenvalues to perturbations in the matrix entries. This problem is designed to reinforce this connection in the context of physically meaningful transient behavior.

(a) Design a set of parameters b_1, …, b_5 > 0 and s_1, …, s_4 > 0 (for n = 5) so that the population will eventually decay in size to zero (A^j → 0 as j → ∞), but this will be preceded by a period of significant transient growth in the population, where ‖p^{(j)}‖ ≫ ‖p^{(0)}‖. (Think about what kind of birth and survivability values might suggest this demographic pattern.)

(b) Plot your population for a number of generations to demonstrate the transient growth and eventual decay. (You may modify pop.m from the class website.)

(c) Show that this transient growth coincides with sensitivity of the eigenvalues of your matrix. You may choose to do this in several different ways: compute the condition numbers of the eigenvalues; show that the numerical range of A contains points of significantly larger magnitude than the spectral radius; or download EigTool for MATLAB (see the link on the class website) and use it to plot the pseudospectra of A.

5. Johnson's algorithm for approximating the boundary of W(A) proceeds as follows.

1. Pick m angles 0 = θ_1 < θ_2 < ⋯ < θ_m = 2π.
2. For k = 1, …, m:
   a. Define H_k := (e^{iθ_k} A + e^{−iθ_k} A*)/2.
   b. Compute the extreme eigenvalues λ_min^{(k)} and λ_max^{(k)} of H_k.
   c. Plot the lines {e^{−iθ_k}(λ_min^{(k)} + iγ) : γ ∈ R} and {e^{−iθ_k}(λ_max^{(k)} + iγ) : γ ∈ R}, which bound W(A).

For example, the figure below shows these lines in C for 20 values of θ for the matrix

    A = [ 0   i ]
        [ ·   i ].

The numerical range is contained between each pair of these lines (i.e., in the empty region in the center of the plot).

[Figure: the bounding lines produced by Johnson's algorithm for this matrix.]
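For Problem 4, the kind of parameter set the problem hints at can be prototyped quickly. A NumPy sketch (pop.m is the course's MATLAB script; the parameter values below are made up purely for illustration): concentrating heavy reproduction in the last bracket, with low survival rates, keeps the spectral radius below 1 yet produces a large transient spike.

```python
import numpy as np

# Made-up illustrative parameters: meager early birth rates, low survival,
# but heavy reproduction in the final bracket.
b = np.array([0.01, 0.01, 0.01, 0.01, 5000.0])   # birth rates b_1..b_5
s = np.array([0.1, 0.1, 0.1, 0.1])               # survival probs s_1..s_4

A = np.zeros((5, 5))
A[0, :] = b                                      # first row: births
A[np.arange(1, 5), np.arange(4)] = s             # subdiagonal: survival

rho = max(abs(np.linalg.eigvals(A)))
print(rho)                                       # < 1, so A^j -> 0

p = np.zeros(5)
p[4] = 1.0                                       # start in the fertile bracket
norms = [np.linalg.norm(p)]
for _ in range(100):
    p = A @ p
    norms.append(np.linalg.norm(p))

print(max(norms), norms[-1])   # large transient spike, then decay toward zero
```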

(a) Implement Johnson's algorithm.

(b) Demonstrate Johnson's algorithm (with m = 20, say) by producing plots for the following:
(i) the matrix A shown above;
(ii) A = diag(ones(5,1),1) (a 6 × 6 Jordan block);
(iii) A = diag(ones(5,1),1) + diag(1,−5) (a circulant shift);
(iv) A = randn(64)/8 (a random 64 × 64 matrix).

(c) Confirm the accuracy of your bounds by including in each of your plots 100 random points in W(A). For example, let x = orth(randn(n,1) + 1i*randn(n,1)) (to get a random complex unit vector) and plot z = x'*A*x; repeat this for 99 more random complex unit vectors.

6. Prove the following two results that bound the ε-pseudospectrum of A. Roughly speaking, they can be interpreted to mean: the ε-pseudospectrum cannot be significantly larger than the field of values or the Gerschgorin disks, beyond a factor of order ε.

(a) Prove that for all A ∈ C^{n×n},

    σ_ε(A) ⊆ W(A) + D(ε),

where D(ε) = {z ∈ C : |z| < ε} is the open disk of radius ε. (Addition of sets is defined in the usual way: given sets S_1 and S_2, the sum S_1 + S_2 = {s_1 + s_2 : s_1 ∈ S_1, s_2 ∈ S_2} is the set of all pairwise sums of elements of S_1 and S_2.)

(b) Use Gerschgorin's theorem to show that for all A ∈ C^{n×n},

    σ_ε(A) ⊆ ⋃_{j=1}^{n} D(a_{j,j}, r_j + εn),

where a_{j,k} denotes the (j,k) entry of A, r_j denotes the absolute sum of the off-diagonal entries in the jth row of A,

    r_j := Σ_{k=1, k≠j}^{n} |a_{j,k}|,

and D(c, r) = {z ∈ C : |z − c| < r} is the open disk in C centered at c ∈ C with radius r > 0. (In fact, one can prove a stronger bound with D(a_{j,j}, r_j + εn) replaced by D(a_{j,j}, r_j + ε√n), but you are not required to obtain this sharper result.)

7. We have often discussed the linear, constant-coefficient dynamical system x′(t) = Ax(t), whose solutions x(t) decay to zero as t → ∞ provided all eigenvalues of A have negative real part. Is the same true for variable-coefficient problems? Suppose x′(t) = A(t)x(t), and that all eigenvalues of the matrix A(t) ∈ C^{n×n} have negative real part for all t ≥ 0. Is this enough to guarantee that x(t) → 0 as t → ∞?
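The computational core of Johnson's algorithm (without the plotting) can be sketched in NumPy; the function name johnson_bounds is mine. As a built-in accuracy check in the spirit of part (c), every Rayleigh quotient z = x*Ax must satisfy λ_min^{(k)} ≤ Re(e^{iθ_k} z) ≤ λ_max^{(k)} for every angle:

```python
import numpy as np

def johnson_bounds(A, m=20):
    """For m angles theta, return (theta, lam_min, lam_max) for the
    Hermitian matrix H = (e^{i theta} A + e^{-i theta} A*)/2."""
    bounds = []
    for theta in np.linspace(0.0, 2.0 * np.pi, m):
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2.0
        ev = np.linalg.eigvalsh(H)           # real, ascending
        bounds.append((theta, ev[0], ev[-1]))
    return bounds

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
bounds = johnson_bounds(A)

violation = -np.inf
for _ in range(100):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)                   # random complex unit vector
    z = np.conj(x) @ A @ x                   # a point of W(A)
    for theta, lo, hi in bounds:
        r = (np.exp(1j * theta) * z).real    # equals x* H_k x
        violation = max(violation, lo - r, r - hi)

print(violation)  # <= 0 up to rounding: every sample respects the bounds
```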
This problem asks you to explore this possibility.

(a) Consider the matrix

    U(t) = [  cos(γt)   sin(γt) ]
           [ −sin(γt)   cos(γt) ].

Show that U(t) is unitary for any fixed real values of γ and t.
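A quick numerical check of part (a), and a preview of part (d), assuming the convention U(t) = [cos(γt), sin(γt); −sin(γt), cos(γt)] (the helper names and the value of γ are mine): U(t) is orthogonal for every t, and a finite-difference estimate of (U(t)*)′U(t) returns the same matrix at different times.

```python
import numpy as np

gamma = 3.0   # illustrative value

def U(t):
    c, s = np.cos(gamma * t), np.sin(gamma * t)
    return np.array([[c, s], [-s, c]])

# (a) U(t) is unitary (here real orthogonal): U(t)* U(t) = I.
for t in (0.0, 0.3, 1.7):
    assert np.allclose(U(t).T @ U(t), np.eye(2))

# (d) Finite-difference estimate of (U(t)*)' U(t).
h = 1e-6
def dUstar_U(t):
    return (U(t + h).T - U(t - h).T) / (2.0 * h) @ U(t)

print(dUstar_U(0.3))
print(dUstar_U(1.7))   # the same matrix: it does not depend on t
```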

(b) Now consider the matrix A(t) ∈ C^{n×n} defined by

    A(t) = U(t) A_0 U(t)*,    A_0 = [ −1    α ]
                                    [  0   −2 ].

(Notice that A_0 is the matrix that featured in Problem 2.) Explain why σ(A(t)) = σ(A_0), W(A(t)) = W(A_0), and σ_ε(A(t)) = σ_ε(A_0) for all real t. (In other words, show that the spectrum, numerical range, and ε-pseudospectra are identical for all t.)

(c) Now we wish to investigate the behavior of the dynamical system

    x′(t) = A(t) x(t).    (∗)

Define y(t) = U(t)* x(t). Explain why equation (∗) implies that

    y′(t) = ( A_0 + (U(t)*)′ U(t) ) y(t).    (∗∗)

(Here (U(t)*)′ ∈ C^{n×n} denotes the t-derivative of the conjugate transpose of U(t).)

(d) Compute (U(t)*)′ U(t). Does this matrix vary with t?

(e) Define the matrix Â := A_0 + (U(t)*)′ U(t). Fix α = 7. Plot the (real) eigenvalues of Â (e.g., in MATLAB) as a function of γ ∈ [0, 7]. Do the eigenvalues of Â fall in the left half of the complex plane for all γ?

(f) Calculate the eigenvalues of Â for γ = 1 and α = 7. What can be said of solutions y(t) of the system (∗∗) as t → ∞ for these α, γ values? What then can be said of x(t) = U(t)y(t), where x(t) solves (∗), as t → ∞? How does this compare to the similar constant-coefficient problem x′(t) = A_0 x(t) (where we have seen that A_0 has the same spectrum, numerical range, and pseudospectra as A(t) for all t)? [Examples of this sort were perhaps first constructed by Vinograd; see Dekker and Verwer, or Lambert.]

8. Prove or disprove Crouzeix's conjecture: for any matrix A ∈ C^{n×n} and any function f analytic on W(A),

    ‖f(A)‖ ≤ 2 max_{z ∈ W(A)} |f(z)|.

[This is a challenge problem: a solution has not yet been discovered!]
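The phenomenon Problem 7 is driving at can be previewed numerically. A sketch, again assuming the rotation convention U(t) = [cos(γt), sin(γt); −sin(γt), cos(γt)], under which a direct computation gives (U(t)*)′U(t) = γ[0, −1; 1, 0]; the matrix exponential is formed by eigendecomposition to keep the sketch NumPy-only:

```python
import numpy as np

alpha, gamma = 7.0, 1.0
A0 = np.array([[-1.0, alpha], [0.0, -2.0]])

def U(t):
    c, s = np.cos(gamma * t), np.sin(gamma * t)
    return np.array([[c, s], [-s, c]])

# A(t) = U(t) A0 U(t)* has eigenvalues -1 and -2 (negative real parts) for all t...
for t in (0.0, 0.5, 2.0):
    At = U(t) @ A0 @ U(t).T
    assert np.all(np.linalg.eigvals(At).real < 0)

# ...yet y(t) = U(t)* x(t) obeys the constant-coefficient system y' = Ahat y.
Ahat = A0 + gamma * np.array([[0.0, -1.0], [1.0, 0.0]])
lam, V = np.linalg.eig(Ahat)
print(np.sort(lam.real))         # one eigenvalue lies in the right half-plane

# y(T) = e^{T Ahat} y(0) via eigendecomposition; then x(T) = U(T) y(T).
T = 10.0
expTA = (V * np.exp(T * lam)) @ np.linalg.inv(V)
y0 = np.array([1.0, 1.0])
xT = U(T) @ (expTA @ y0)
print(np.linalg.norm(xT))        # grows enormously despite sigma(A(t)) in the LHP
```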