EE5120 Linear Algebra: Tutorial 7, July-Dec. Covers sec 5.3 (only powers of a matrix part), 5.5, 5.6 of GS.

1. Prove that the eigenvectors corresponding to different eigenvalues are orthonormal for unitary matrices.
Hint: Use properties of unitary matrices. See pg. 87 of GS.

2. Suppose

A = [ 1  b
      0  c ],

where b and c are some non-zero finite real numbers with c ≠ 1. Compute the eigenvalues and eigenvectors of A. Further, if

X = [ A  O
      O  A ]

is a block diagonal matrix and O is the all-zero matrix, then what are its eigenvalues and eigenvectors?
Hint: Use the basic procedure for finding eigenvalues and eigenvectors.

A is upper triangular, so its diagonal entries are its eigenvalues, i.e., λ_A = 1, c. The corresponding eigenvectors are [1 0]^T and [b c−1]^T respectively. X is block diagonal, and since A is upper triangular, so is X. Thus the eigenvalues of X are λ_X = 1, 1, c, c, and the corresponding eigenvectors are [1 0 0 0]^T, [0 0 1 0]^T, [b c−1 0 0]^T and [0 0 b c−1]^T respectively.
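As a quick numerical sanity check of problem 2 (an addition, not part of the original tutorial), the numpy sketch below builds A and the block matrix X for the arbitrary sample values b = 2, c = 3 and confirms the eigenvalues and eigenvectors derived above.

import numpy as np

b, c = 2.0, 3.0                      # arbitrary non-zero values with c != 1
A = np.array([[1.0, b],
              [0.0, c]])

# Eigenvalues of an upper-triangular matrix are its diagonal entries.
eigvals, eigvecs = np.linalg.eig(A)
print(sorted(eigvals))               # -> [1.0, 3.0], i.e. {1, c}

# Derived eigenvectors [1, 0]^T and [b, c-1]^T (up to scaling).
v1 = np.array([1.0, 0.0])
v2 = np.array([b, c - 1.0])
print(np.allclose(A @ v1, 1.0 * v1), np.allclose(A @ v2, c * v2))  # True True

# Block-diagonal X = [[A, O], [O, A]] has eigenvalues {1, 1, c, c}.
O = np.zeros((2, 2))
X = np.block([[A, O],
              [O, A]])
print(sorted(np.linalg.eig(X)[0].real))   # -> [1.0, 1.0, 3.0, 3.0]

Note that numpy returns unit-norm eigenvectors, so they agree with the hand-derived ones only up to scaling.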

3. Let A be an n × n hermitian matrix with n distinct positive eigenvalues, and for any column vector x let x(k) denote the k-th entry in x.
(a) Prove that if A = UΛU^H is the eigen decomposition of A, then (i) the entries of Λ are real, and (ii) U is a unitary matrix.
Hint: First prove that (i) y^H A y is real for any y, (ii) the columns of U can be chosen orthonormal.
(b) Let u_1 = u_0 − a A u_0 and v_i = U^H u_i, i = 0, 1, where a is a real positive number. If |v_1(k)| < |v_0(k)| for all k, then prove that a < 2/λ_max, where λ_max is the maximum eigenvalue of A.
Hint: Replace A in terms of Λ and U, and then proceed.

(a) For any non-zero vector y of appropriate dimension, note that y^H A y is a scalar. Now, (y^H A y)^* = (y^H)^* A^* y^* = y^T A^* y^* = (y^T A^* y^*)^T = y^H A^H y = y^H A y, where the third equality is due to the fact that y^T A^* y^* is a scalar (a scalar equals its own transpose) and the last uses A^H = A. Since (y^H A y)^* = y^H A y, y^H A y is a real number. Now let p be an eigenvector of A corresponding to eigenvalue λ. Then p^H A p is real by the above argument, and p^H A p = p^H λ p = λ ||p||^2. Since the LHS is real and ||p||^2 is real and positive, λ must also be real. Hence all eigenvalues of a hermitian matrix are real. Further, let λ_1 and λ_2 be two eigenvalues of A with eigenvectors p_1 and p_2 respectively. Since λ_1 and λ_2 are real and distinct, we have λ_1 p_1^H p_2 = (λ_1 p_1)^H p_2 = (A p_1)^H p_2 = p_1^H A^H p_2 = p_1^H A p_2 = λ_2 p_1^H p_2. Since the eigenvalues are distinct, this relation can hold only if p_1^H p_2 = 0. Normalizing, (p_1/||p_1||)^H (p_2/||p_2||) = 0, so the eigenvectors are orthonormal, which implies U is unitary.

(b) Now, u_1 = u_0 − a A u_0 = u_0 − a UΛU^H u_0 = (UU^H − a UΛU^H) u_0 = U(I − aΛ)U^H u_0. Thus U^H u_1 = (I − aΛ)U^H u_0, i.e., v_1 = (I − aΛ)v_0. Let the k-th diagonal entry of Λ be λ_k. Then v_1(k) = (1 − aλ_k) v_0(k) for all k. If we want |v_1(k)| < |v_0(k)| for all k, then we must have |1 − aλ_k| < 1 for all k, i.e., −1 < 1 − aλ_k < 1, i.e., 0 < a < 2/λ_k. Since this must hold for every k, we have 0 < a < 2/λ_max, where λ_max is the maximum eigenvalue of A.
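A minimal numerical illustration of 3(b), assuming a randomly generated Hermitian positive-definite test matrix and a random starting vector (both assumptions for illustration, not from the tutorial): a step size below 2/λ_max shrinks every coordinate of v = U^H u, while a step size above that bound does not.

import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random Hermitian positive-definite matrix (distinct eigenvalues almost surely).
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M.conj().T @ M + np.eye(n)

lam, U = np.linalg.eigh(A)            # A = U diag(lam) U^H, with U unitary
lam_max = lam.max()

u0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def coords_shrink(a):
    """Check |v_1(k)| < |v_0(k)| for all k, where v_i = U^H u_i."""
    u1 = u0 - a * (A @ u0)
    v0, v1 = U.conj().T @ u0, U.conj().T @ u1
    return np.all(np.abs(v1) < np.abs(v0))

print(coords_shrink(0.9 * 2 / lam_max))   # True:  0 < a < 2/lam_max
print(coords_shrink(1.1 * 2 / lam_max))   # False: a exceeds the bound

eigh is used here because it is tailored to Hermitian matrices and returns the orthonormal eigenvector matrix U directly.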

4. Find a third column so that U is unitary. How much freedom in column 3?

U = [ 1/√3   i/√2   x
      1/√3    0     y
      i/√3   1/√2   z ]

Hint: Use the definition of a unitary matrix.

Since U must be unitary, UU^H = I. On equating the entries of the LHS and RHS, we get 9 equations of which 4 are redundant. The other 5 equations are given below:

1/3 + 1/2 + x x* = 1        (1)
1/3 + y y* = 1              (2)
1/3 + 1/2 + z z* = 1        (3)
1/3 + x y* = 0              (4)
−i/3 + i/2 + x z* = 0       (5)

From equations (1), (2) and (3), x = (1/√6) exp(iθ_x), y = √(2/3) exp(iθ_y) and z = (1/√6) exp(iθ_z), respectively. From equations (4) and (5), θ_x − θ_y = (2n + 1)π and θ_x − θ_z = (2k + 3/2)π for integers n, k. So y = √(2/3) exp(i(θ_x − π)) = −√(2/3) exp(iθ_x) and z = (1/√6) exp(i(θ_x + π/2)) = (i/√6) exp(iθ_x).

The third column is therefore exp(iθ_x) · [1/√6, −√(2/3), i/√6]^T, so apart from the common phase factor exp(iθ_x) there is no further freedom in column 3.

5. Suppose each Gibonacci number G_{k+2} is the average of the two previous numbers G_{k+1} and G_k. Then G_{k+2} = (1/2)(G_{k+1} + G_k):

[ G_{k+2} ]       [ G_{k+1} ]                 [ 1/2  1/2 ]
[ G_{k+1} ]  =  A [ G_k     ],   where   A =  [  1    0  ].

(a) Find the eigenvalues and eigenvectors of A.
Hint: Use the basic procedure for finding eigenvalues and eigenvectors.
(b) Find the limit as n → ∞ of the matrices A^n = SΛ^n S^{-1}.
Hint: Compute Λ^∞.
(c) If G_0 = 0 and G_1 = 1, show that the Gibonacci numbers approach 2/3.
Hint: Compute A^∞ [G_1 G_0]^T.

(a) A = [ 1/2 1/2; 1 0 ]. The eigenvalues (λ such that det(A − λI) = 0) are 1 and −1/2, with eigenvectors (v_i such that A v_i = λ_i v_i) [1 1]^T and [1 −2]^T.

(b) S = [ 1 1; 1 −2 ], S^{-1} = (1/3)[ 2 1; 1 −1 ], and Λ^n = [ 1 0; 0 (−1/2)^n ] → [ 1 0; 0 0 ] as n → ∞. Hence

A^∞ = S [ 1 0; 0 0 ] S^{-1} = [ 2/3  1/3
                                2/3  1/3 ].

(c) A^∞ [ G_1 G_0 ]^T = [ 2/3 1/3; 2/3 1/3 ][ 1 0 ]^T = [ 2/3 2/3 ]^T, so the Gibonacci numbers approach 2/3.
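A short numpy check of problem 5 (an addition, not part of GS): raising A to a large power reproduces the limit (1/3)[2 1; 2 1], and starting from G_0 = 0, G_1 = 1 the sequence settles at 2/3.

import numpy as np

A = np.array([[0.5, 0.5],
              [1.0, 0.0]])

# Eigenvalues of A are 1 and -0.5.
lam, S = np.linalg.eig(A)
print(sorted(lam))                                          # -> [-0.5, 1.0]

# Powers of A converge to S diag(1, 0) S^{-1} = [[2/3, 1/3], [2/3, 1/3]].
A_inf = np.linalg.matrix_power(A, 60)
print(np.allclose(A_inf, np.array([[2, 1], [2, 1]]) / 3))   # True

# Gibonacci numbers with G0 = 0, G1 = 1 approach 2/3.
g = np.array([1.0, 0.0])                                    # [G1, G0]
for _ in range(60):
    g = A @ g
print(g)                                                    # -> approx [0.667, 0.667]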

6. Suppose there is an epidemic in which every month half of those who are well become sick, and a quarter of those who are sick become dead. Find the steady state for the corresponding Markov process:

[ d_{k+1} ]   [ 1  1/4   0  ] [ d_k ]
[ s_{k+1} ] = [ 0  3/4  1/2 ] [ s_k ]
[ w_{k+1} ]   [ 0   0   1/2 ] [ w_k ]

Hint: The steady state is the eigenvector corresponding to the eigenvalue 1.

Let us denote the given matrix equation as u_{k+1} = A u_k, where u_{k+1} = [d_{k+1} s_{k+1} w_{k+1}]^T, u_k = [d_k s_k w_k]^T and A is the 3 × 3 matrix above. The steady state u_∞ is given by A u_∞ = u_∞, i.e., u_∞ is the eigenvector corresponding to the eigenvalue 1 (a Markov matrix always has 1 as an eigenvalue, and the remaining eigenvalues have magnitude at most 1). To find it, we solve (A − I) u_∞ = 0:

[ 0   1/4    0  ] [ d_∞ ]   [ 0 ]
[ 0  −1/4   1/2 ] [ s_∞ ] = [ 0 ]
[ 0    0   −1/2 ] [ w_∞ ]   [ 0 ]

This gives u_∞ = [d_∞ s_∞ w_∞]^T = [1 0 0]^T; in other words, everyone will die from the epidemic.

7. Show that every number is an eigenvalue for the transformation T(f(x)) = df/dx, but the transformation T(f(x)) = ∫_0^x f(t) dt has no eigenvalues (here 0 < x < 1).
Hint: Take f(x) = e^{ax} for any real number a and solve.

If f(x) = e^{ax} for any real number a, then for the first T,

T(f(x)) = df/dx = a e^{ax} = a f(x).

Hence any number a is an eigenvalue of T. [Note: the eigenvalues of a linear transformation do not depend on the basis, so if we have found them in one basis that is good enough. Why? Any change of basis is given by a similarity transform, and it is easy to prove that similar matrices have the same eigenvalues. In this case we chose f(x) = e^{ax} because it is an easy-to-guess eigenfunction. Since the question is about eigenvalues, it is important to choose only those f that are eigenfunctions, not arbitrary functions.]

For the second T, suppose T(f(x)) = a f(x) for some number a and some function f, i.e.,

∫_0^x f(t) dt = a f(x).

Now differentiate both sides to get f(x) = a f'(x). Solving for f (assuming a ≠ 0 so that we may divide by it),

∫ f'(x)/f(x) dx = ∫ (1/a) dx.

This gives ln|f(x)| = x/a + C, or |f(x)| = e^{x/a + C} = e^C e^{x/a}. We can get rid of the absolute value signs by substituting A for e^C (allowing A to possibly be negative):

f(x) = A e^{x/a}.

Therefore we have

T(f(x)) = ∫_0^x f(t) dt = ∫_0^x A e^{t/a} dt = a A e^{t/a} |_0^x = a A e^{x/a} − a A = a (A e^{x/a} − A) = a (f(x) − A).

But our initial assumption was that T(f(x)) = a f(x). For both to hold, either a = 0 or A = 0. If A = 0, then f ≡ 0; if a = 0, then ∫_0^x f(t) dt = 0 for all x, which again forces f ≡ 0. Either way f is not a valid eigenvector, hence T has no eigenvalues.
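As a check of problem 6 (added here, not in the original), the sketch below extracts the eigenvector of the Markov matrix for eigenvalue 1 and also power-iterates from an arbitrary starting population, recovering the steady state (1, 0, 0) both ways.

import numpy as np

# State order (dead, sick, well); each column sums to 1 (Markov matrix).
A = np.array([[1.0, 0.25, 0.0],
              [0.0, 0.75, 0.5],
              [0.0, 0.0,  0.5]])

# Steady state = eigenvector for eigenvalue 1, i.e. (A - I) u = 0.
lam, V = np.linalg.eig(A)
u = V[:, np.argmin(np.abs(lam - 1.0))].real
print(u / u.sum())                       # -> [1. 0. 0.]

# Power iteration from an arbitrary population (d, s, w) = (0, 0.2, 0.8).
u_k = np.array([0.0, 0.2, 0.8])
for _ in range(200):
    u_k = A @ u_k
print(u_k)                               # -> approx [1. 0. 0.]: everyone dies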

8. Let A be a square matrix. Define B_1 = (1/2)(A + A^H) and B_2 = (1/(2i))(A − A^H).
(a) Prove that B_1, B_2 are hermitian and A = B_1 + iB_2.
(b) Suppose that A = C_1 + iC_2, where C_1 and C_2 are hermitian; prove that C_1 = B_1 and C_2 = B_2.
(c) What conditions on B_1 and B_2 make A normal?
Hint: Use the definitions.

(a) B_1^H = (1/2)(A^H + A) = B_1; similarly, B_2^H = −(1/(2i))(A^H − A) = (1/(2i))(A − A^H) = B_2. Also, B_1 + iB_2 = (1/2)(A + A^H) + (1/2)(A − A^H) = A.

(b) A = C_1 + iC_2, so A^H = C_1^H + (iC_2)^H = C_1 − iC_2. Using the expressions for A and A^H, we get C_1 = (1/2)(A + A^H) = B_1 and C_2 = (1/(2i))(A − A^H) = B_2.

(c) A A^H = A^H A requires (B_1 + iB_2)(B_1 − iB_2) = (B_1 − iB_2)(B_1 + iB_2); simplification yields B_1 B_2 = B_2 B_1, i.e., A is normal exactly when B_1 and B_2 commute.

9. Let A be a normal matrix. Prove the following:
(a) ||Ax|| = ||A^H x|| for every x in C^n.
(b) A − cI is a normal operator for every c in C.
(c) If x is an eigenvector of A with eigenvalue λ, then x is also an eigenvector of A^H with eigenvalue λ* (λ* is the complex conjugate of λ).
Hint: Use part (a) and part (b) to solve part (c).

(a) ||Ax||^2 = (Ax)^H Ax = x^H A^H A x = x^H A A^H x = (A^H x)^H A^H x = ||A^H x||^2.

(b) (A − cI)(A − cI)^H = A A^H − c* A − c A^H + |c|^2 I = A^H A − c* A − c A^H + |c|^2 I = (A − cI)^H (A − cI); therefore A − cI is normal.

(c) Let x be an eigenvector of A with eigenvalue λ, so Ax = λx. Define U = A − λI. From part (b), U is normal. From part (a), ||Ux|| = ||U^H x||. Since Ux = 0, this gives 0 = ||U^H x|| = ||(A − λI)^H x|| = ||A^H x − λ* x||, hence A^H x = λ* x.
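To illustrate problems 8 and 9 numerically (an added sketch using randomly generated test matrices, not part of the tutorial): B_1 and B_2 are Hermitian with A = B_1 + iB_2, and for a matrix constructed to be normal the two parts commute and ||Ax|| = ||A^H x||.

import numpy as np

rng = np.random.default_rng(1)
n = 4

# Arbitrary complex square matrix.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

B1 = (A + A.conj().T) / 2            # Hermitian part
B2 = (A - A.conj().T) / (2j)         # also Hermitian

print(np.allclose(B1, B1.conj().T), np.allclose(B2, B2.conj().T))  # True True
print(np.allclose(A, B1 + 1j * B2))                                # True

# A normal matrix: A_n = U diag(d) U^H with unitary U and complex diagonal d.
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A_n = U @ np.diag(d) @ U.conj().T

C1 = (A_n + A_n.conj().T) / 2
C2 = (A_n - A_n.conj().T) / (2j)
print(np.allclose(C1 @ C2, C2 @ C1))      # True: normal <=> the parts commute

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.isclose(np.linalg.norm(A_n @ x), np.linalg.norm(A_n.conj().T @ x)))  # True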

10. If the transformation T is a reflection across the 45° line in the plane, find its matrix with respect to the standard basis v_1 = (1, 0), v_2 = (0, 1), and also with respect to V_1 = (1, 1), V_2 = (1, −1). Show that those matrices are similar.
Hint: Use change of basis, i.e., the similarity transformation B = M^{-1} A M.

Let T be the transformation matrix for reflection across the line at angle θ, given by

T = [ 2c^2 − 1     2cs
        2cs      2s^2 − 1 ],   where c = cos θ and s = sin θ.

Here θ = 45°. Substituting this value, we get the transformation matrix with respect to the standard basis v_1, v_2. Call this matrix A, where

A = [ 0  1
      1  0 ].

Now we change the basis to V_1 = (1, 1), V_2 = (1, −1) and find the reflection transformation matrix (call it B). Observe where the new basis vectors land after reflection through the given line: T V_1 = V_1 and T V_2 = −V_2. Thus, the transformation matrix with respect to V_1, V_2 is

B = [ 1   0
      0  −1 ].

Next, to show that A and B are similar matrices, we need to find a matrix M such that B = M^{-1} A M. M is constructed by writing the new basis vectors V_1, V_2 as linear combinations of the standard basis vectors v_1, v_2. Since (1, 1) = (1, 0) + (0, 1) and (1, −1) = (1, 0) − (0, 1), we get

M = [ 1   1
      1  −1 ].

Using this M, it can be verified that B = M^{-1} A M. Note that (1, 1) and (1, −1) are also the eigenvectors of A (or T).
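A brief numerical confirmation of problem 10 (added, using numpy): with A the reflection matrix in the standard basis and M the change-of-basis matrix whose columns are V_1 and V_2, the product M^{-1} A M indeed equals B = diag(1, −1).

import numpy as np

A = np.array([[0, 1],
              [1, 0]])           # reflection across the 45-degree line, standard basis
M = np.array([[1,  1],
              [1, -1]])          # columns are V1 = (1, 1) and V2 = (1, -1)

B = np.linalg.inv(M) @ A @ M
print(B)                         # -> [[ 1.  0.] [ 0. -1.]]

# V1 and V2 are eigenvectors of A with eigenvalues 1 and -1.
print(A @ M[:, 0], A @ M[:, 1])  # -> [1 1] [-1  1]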