ACM106a - Homework 4 Solutions

prepared by Svitlana Vyetrenko
November 17, 2006

1. Problem 1:

(a) Let A be a normal triangular matrix. Without loss of generality assume that A is upper triangular, i.e.

    $$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}.$$

Compare the $(1,1)$ elements of $AA^*$ and $A^*A$: the $(1,1)$ element of $AA^*$ equals $|a_{11}|^2 + |a_{12}|^2 + \cdots + |a_{1n}|^2$ (the first row of $A$ dotted with itself), while the $(1,1)$ element of $A^*A$ equals $|a_{11}|^2$ (the first column of $A$ contains only $a_{11}$). Since $A$ is normal, $|a_{11}|^2 + |a_{12}|^2 + \cdots + |a_{1n}|^2 = |a_{11}|^2$, and therefore $a_{1j} = 0$ for $j = 2, \dots, n$. Next, using the fact just proved that $a_{12} = 0$, comparing the $(2,2)$ entries of $A^*A$ and $AA^*$ shows that $a_{2j} = 0$ for $j = 3, \dots, n$. Proceeding row by row and comparing the diagonal entries of $A^*A$ and $AA^*$ in the same way, we conclude that for an upper triangular $A$ to be normal it must be diagonal.

(b) Any $n \times n$ matrix $A$ has a Schur decomposition $A = UTU^*$, where $T$ is upper triangular and $U$ is unitary. Assume that $A$ is normal. Then $A^*A = AA^*$ gives $UT^*TU^* = UTT^*U^*$, and thus $T^*T = TT^*$, i.e. $T$ is itself normal. By part (a), a normal upper triangular matrix is diagonal. Since the diagonal entries of the Schur form $T$ are the eigenvalues of $A$ and $T$ is diagonal, $A = UTU^*$ is an eigenvalue decomposition of $A$ with a unitary eigenvector matrix $U$. Thus $A$ has $n$ orthogonal eigenvectors.

Conversely, suppose $A$ has $n$ orthogonal eigenvectors, and let $U$ be the unitary matrix whose columns are these eigenvectors (normalized). Then $A = UDU^*$, where $D$ is diagonal. Since any diagonal matrix is normal ($D^*D = DD^*$), we have

    $$A^*A = UD^*U^*UDU^* = UD^*DU^* = UDD^*U^* = UDU^*UD^*U^* = AA^*.$$

Thus $A$ is normal.
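A minimal numerical sanity check of part (b) in MATLAB (an illustrative sketch with a made-up random 5 x 5 example, not part of the solution itself): a matrix built from a unitary eigenvector matrix and a diagonal matrix is normal, and its Schur form is diagonal.

    % Sanity check for Problem 1(b): build A = U*D*U' with U unitary and
    % D diagonal; A should be normal, and its Schur form should be diagonal.
    [U, R] = qr(randn(5) + 1i*randn(5));   % QR of a random matrix gives a unitary U
    D = diag(randn(5,1) + 1i*randn(5,1));  % random diagonal D
    A = U*D*U';
    norm(A'*A - A*A')                      % ~1e-14: A is normal
    [Q, T] = schur(A, 'complex');          % complex Schur form A = Q*T*Q'
    norm(T - diag(diag(T)))                % ~1e-14: T is (numerically) diagonal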

2. Chapter 25, problem 25.3:

(a) Form (a) can be obtained by a sequence of left multiplications by matrices $Q_j$, but not by a sequence of left- and right-multiplications by matrices $Q_j$ (the examples on pp. 196-197 illustrate this).

(b) Form (b) can be obtained both by a sequence of left multiplications and by a sequence of left- and right-multiplications by matrices $Q_j$ (again, see pp. 196-197 for an example).

(c) Form (c) can be obtained by a sequence of left multiplications by matrices $Q_j$ (for example, consider a $3 \times 3$ matrix of rank 2).

3. Chapter 27, problem 27.5:

Since $A$ is symmetric, it has a basis of orthonormal eigenvectors $q_1, q_2, \dots, q_m$; let $\lambda_1, \lambda_2, \dots, \lambda_m$ be the corresponding eigenvalues. Assume that $\lambda_1$ is the eigenvalue closest to the shift $\mu$, i.e. $|\lambda_1 - \mu|$ is much smaller than $|\lambda_j - \mu|$ for $j \neq 1$. The goal of the problem is to investigate what happens when $\lambda_1$ is very close to $\mu$, so that $A - \mu I$ is very ill-conditioned. Let $\mu = \lambda_1 + \epsilon$, where $\epsilon$ is very small. Then, using the computations on p. 95 (the solve is backward stable with backward error $\delta A$), after $k$ inverse iterations we have

    $$(A - \mu I + \delta A)\, w_{k+1} = v_k,$$

that is,

    $$(A - \lambda_1 I - \epsilon I + \delta A)\, w_{k+1} = v_k. \qquad (1)$$

Divide both sides by $\|w_{k+1}\|$ and let $v_{k+1} = w_{k+1}/\|w_{k+1}\|$:

    $$(A - \lambda_1 I - \epsilon I + \delta A)\, v_{k+1} = \frac{v_k}{\|w_{k+1}\|},$$

so that

    $$(A - \lambda_1 I)\, v_{k+1} = (\epsilon I - \delta A)\, v_{k+1} + \frac{v_k}{\|w_{k+1}\|}.$$

Because $\|v_{k+1}\| = \|v_k\| = 1$, we have

    $$\|(A - \lambda_1 I)\, v_{k+1}\| \le (\epsilon + \|\delta A\|) + \frac{1}{\|w_{k+1}\|}.$$

Therefore, if $\|w_{k+1}\|$ is large, then $v_{k+1} = w_{k+1}/\|w_{k+1}\|$ nearly satisfies $(A - \lambda_1 I)v = 0$ and hence gives a good approximation of the eigenvector $q_1$.

Because the $q_i$ form a basis of $\mathbb{R}^m$, there exist coefficients $\alpha_i$ and $\beta_i$ such that

    $$v_k = \sum_i \alpha_i q_i, \qquad w_{k+1} = \sum_i \beta_i q_i.$$

Plugging these into (1) gives

    $$(A - \lambda_1 I)\sum_i \beta_i q_i - \epsilon \sum_i \beta_i q_i + \delta A \sum_i \beta_i q_i = \sum_i \alpha_i q_i. \qquad (2)$$

Because $(A - \lambda_1 I)\sum_i \beta_i q_i = \sum_i (\lambda_i - \lambda_1)\beta_i q_i$ has no component along $q_1$, multiplying (2) by $q_1^T$ on the left yields

    $$-\epsilon \beta_1 + q_1^T\, \delta A\, w_{k+1} = \alpha_1.$$

Therefore

    $$|\alpha_1| \le \epsilon |\beta_1| + \|\delta A\|\, \|w_{k+1}\| \le (\epsilon + \|\delta A\|)\, \|w_{k+1}\|,$$

and thus

    $$\|w_{k+1}\| \ge \frac{|\alpha_1|}{\epsilon + \|\delta A\|}. \qquad (3)$$

Since $\epsilon$ and $\|\delta A\|$ are both tiny, (3) shows that $\|w_{k+1}\|$ is large (provided the component $\alpha_1$ of $v_k$ along $q_1$ is not negligible). Hence, despite the ill-conditioning of $A - \mu I$, inverse iteration produces an accurate solution.

Note: for this problem I used Wilkinson, The Algebraic Eigenvalue Problem, 1965.
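To see this conclusion numerically, here is a minimal MATLAB sketch (the 4 x 4 test matrix, shift, and starting vector are made up for the illustration): a single inverse-iteration step with a shift only $\epsilon$ away from an eigenvalue already produces a huge $\|w\|$ and an accurate eigenvector.

    % Inverse iteration with a nearly singular shifted matrix.
    % A - mu*I has condition number ~1e13, yet one step recovers q_1.
    A = diag([1 2 6 30]);            % symmetric test matrix, q_1 = e_1
    mu = 1 - 1e-12;                  % shift with eps = 1e-12
    v = ones(4,1)/2;                 % unit starting vector, alpha_1 = 1/2
    w = (A - mu*eye(4))\v;           % ill-conditioned solve
    norm(w)                          % ~5e11, i.e. ~|alpha_1|/eps as in (3)
    v1 = w/norm(w);                  % normalized iterate
    norm(v1 - [1; 0; 0; 0])          % ~1e-12: v1 approximates q_1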

4. Problem 4:

(a)

    function [lambda, v] = rayleigh_quotient(A, num_iter, starting_vector)
    n = size(A, 1);
    v = starting_vector;
    v = v/norm(v);
    lambda = v'*A*v;
    for i = 1:num_iter
        w = (A - lambda*eye(n))\v;   % shifted solve with the current Rayleigh quotient
        v = w/norm(w);
        lambda = v'*A*v;
    end

(b) Let $S = U\Sigma V^*$ be the singular value decomposition of $S$. We know that $\mathrm{cond}(S) = \sigma_{\max}(S)/\sigma_{\min}(S)$. Therefore, if we fix the ratio of the maximum to the minimum singular value of $S$ to be 20, and generate the intermediate singular values and the orthogonal matrices $U$ and $V$ randomly (for example, by running the QR factorization on randomly generated $4 \times 4$ matrices to ensure orthogonality), then $\mathrm{cond}(S) = 20$. Example of the code:

    function S = gen_s
    % generate the intermediate singular values in the range 2..19
    rand1 = rand(1)*17 + 2;
    rand2 = rand(1)*17 + 2;
    Sigma = diag([20 rand1 rand2 1]);
    U_rand = rand(4,4);
    V_rand = rand(4,4);
    [U_orth, R1] = qr(U_rand);
    [V_orth, R2] = qr(V_rand);
    S = U_orth*Sigma*V_orth';
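A quick usage check of the construction (a small sketch added here, not part of the solution):

    % Verify that gen_s indeed produces matrices with condition number 20.
    S = gen_s;
    cond(S)      % 20.0000 up to roundoff
    svd(S)       % singular values: 20, two random values in (2, 19), and 1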

(c)

    function speed_of_conv
    % S obtained by S = gen_s;
    S = [ -1.62886147322250    4.24469694941438   -4.08799056407617   -4.90119487824613;
          -6.78153604613332    8.55038958204748    3.55067404752499   -5.55946396709622;
           5.19316051425327   15.54586417118852   -8.43124975651081    0.25890869393220;
          -7.76317650562957    1.71798066224471    0.30131439216199   -6.48956716145272 ];
    A = S*diag([1 2 6 30])*inv(S);   % eigenvalues 1, 2, 6, 30; eigenvectors are the columns of S
    F = S;

    % for lambda = 1
    starting_vector = [0.32708034490656; 0.47620926031277; 0.22598334525969; 0.96200619568934];
    for num_iter = 1:10
        [lambda, v] = rayleigh_quotient(A, num_iter, starting_vector);
        error_1_lambda(num_iter) = abs(1 - lambda);
        if F(1,1)*v(1) < 0, v = -v; end     % fix the sign of the computed eigenvector
        error_1_v(num_iter) = norm(v - F(:,1)/norm(F(:,1)));
    end

    % for lambda = 2
    starting_vector = [0.30645822520195; 0.77892558525256; 0.89534244333904; 0.25372331901581];
    for num_iter = 1:10
        [lambda, v] = rayleigh_quotient(A, num_iter, starting_vector);
        error_2_lambda(num_iter) = abs(2 - lambda);
        if F(1,2)*v(1) < 0, v = -v; end
        error_2_v(num_iter) = norm(v - F(:,2)/norm(F(:,2)));
    end

    % for lambda = 6
    starting_vector = [0.54423875980860; 0.55844540041460; 0.66968372236257; 0.58392692252473];
    for num_iter = 1:10
        [lambda, v] = rayleigh_quotient(A, num_iter, starting_vector);
        error_3_lambda(num_iter) = abs(6 - lambda);
        if F(1,3)*v(1) < 0, v = -v; end
        error_3_v(num_iter) = norm(v - F(:,3)/norm(F(:,3)));
    end

    % for lambda = 30
    starting_vector = [0.43192792375934; 0.43609798620903; 0.22430472608039; 0.01318434025798];
    for num_iter = 1:10
        [lambda, v] = rayleigh_quotient(A, num_iter, starting_vector);
        error_4_lambda(num_iter) = abs(30 - lambda);
        if F(1,4)*v(1) < 0, v = -v; end
        error_4_v(num_iter) = norm(v - F(:,4)/norm(F(:,4)));
    end

    plot(1:10, log10(error_1_lambda), 1:10, log10(error_2_lambda), ...
         1:10, log10(error_3_lambda), 1:10, log10(error_4_lambda));
    figure;
    plot(1:10, log10(error_1_v), 1:10, log10(error_2_v), ...
         1:10, log10(error_3_v), 1:10, log10(error_4_v));

[Figure: "Approximation of the eigenvalues"; log of the error vs. number of iterations (1 to 10) for the four eigenvalues]

[Figure: "Approximation of the eigenvectors"; log of the error vs. number of iterations (1 to 10) for the four eigenvectors]

(d) We can deduce from the graphs that in the non-symmetric case the speed of convergence of both the eigenvalues and the eigenvectors is roughly quadratic.
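As a supplementary check of the observed rate (a sketch added here, assuming one of the error vectors from speed_of_conv, e.g. error_1_lambda, is available in the workspace; entries that have already flattened at machine precision should be ignored):

    % If e(k+1) ~ C*e(k)^p, then p ~ log(e(k+1)/e(k)) / log(e(k)/e(k-1)).
    e = error_1_lambda;
    p = log(e(3:end)./e(2:end-1)) ./ log(e(2:end-1)./e(1:end-2))
    % entries of p near 2 are consistent with roughly quadratic convergence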