Problems of Eigenvalues/Eigenvectors


5 Problems of Eigenvalues/Eigenvectors
- Review of Eigenvalues and Eigenvectors
- Gershgorin's Disk Theorem
- Power and Inverse Power Methods
- Jacobi Transform for Symmetric Matrices
- Singular Value Decomposition with Applications
- QR Iterations for Computing Eigenvalues
- Other Topics with Applications

Definition and Examples

Let A ∈ R^{n×n}. If v ≠ 0 is such that Av = λv, then λ is called an eigenvalue of the matrix A, and v is called an eigenvector corresponding to (or belonging to) the eigenvalue λ. Note that if v is an eigenvector, then αv is also an eigenvector for every α ≠ 0. We define Eigenspace(λ) as the vector space spanned by all of the eigenvectors corresponding to the eigenvalue λ.

[Six small examples with their eigenpairs were listed here; most entries were lost in transcription. The last two are reused on the next page: B with eigenpairs (3, u1) and (1, u2), and C with eigenpairs (4, v1) and (2, v2). One example has complex eigenvalues λ1 = j, λ2 = −j, where j = √(−1).]

Ax = λx ⟺ (λI − A)x = 0, x ≠ 0 ⟺ det(λI − A) = P(λ) = 0.
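A quick MATLAB check of the definition (a minimal sketch; the symmetric test matrix is the one reused in the Jacobi example later in these notes):

% Verify A*v = lambda*v for one eigenpair returned by eig
A = [4 2 0; 2 3 1; 0 1 2];
[V, D] = eig(A);            % columns of V: eigenvectors; diag(D): eigenvalues
v = V(:,1);  lambda = D(1,1);
norm(A*v - lambda*v)        % ~1e-16: v is an eigenvector for lambda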

Gershgorin's Disk Theorem

For the matrices B and C of the previous examples, note that ‖u_i‖_2 = 1 and ‖v_i‖_2 = 1 for i = 1, 2. Denote U = [u1, u2] and V = [v1, v2]; then

U^{-1}BU = [3 0; 0 1],  V^{-1}CV = [4 0; 0 2].

Note that V^t = V^{-1}, but U^t ≠ U^{-1}.

Let A ∈ R^{n×n}; then det(λI − A) is called the characteristic polynomial of the matrix A.

Fundamental Theorem of Algebra: A real polynomial P(λ) = λ^n + a_{n−1}λ^{n−1} + ⋯ + a_0 of degree n has n roots {λ_i} such that

P(λ) = (λ − λ1)(λ − λ2)⋯(λ − λn) = λ^n − (Σ_{i=1}^n λ_i) λ^{n−1} + ⋯ + (−1)^n Π_{i=1}^n λ_i

Hence, for the characteristic polynomial, Σ_{i=1}^n λ_i = Σ_{i=1}^n a_ii = tr(A) and Π_{i=1}^n λ_i = det(A).

Gershgorin's Disk/Circle Theorem: Every eigenvalue of a matrix A ∈ R^{n×n} lies in at least one of the disks

D_i = {x : |x − a_ii| ≤ Σ_{j≠i} |a_ij|},  1 ≤ i ≤ n

Example: B = [3 1 1; 0 4 1; 2 2 5], so λ1, λ2, λ3 ∈ S1 ∪ S2 ∪ S3, where S1 = {z : |z − 3| ≤ 2}, S2 = {z : |z − 4| ≤ 1}, S3 = {z : |z − 5| ≤ 4}. Note that λ1 = 6.5616, λ2 = 3.0000, λ3 = 2.4383.

A matrix is said to be diagonally dominant if Σ_{j≠i} |a_ij| < |a_ii| for 1 ≤ i ≤ n. A diagonally dominant matrix is invertible.
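The disks of the example can be checked in MATLAB (a sketch using the matrix B above):

% Gershgorin disks: centers a_ii, radii = off-diagonal absolute row sums
B = [3 1 1; 0 4 1; 2 2 5];
centers = diag(B);
radii = sum(abs(B), 2) - abs(centers);
[centers radii]             % disks |z-3|<=2, |z-4|<=1, |z-5|<=4
eig(B)                      % 6.5616, 3.0000, 2.4383 all lie in the union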

Theorem: Let A, P ∈ R^{n×n} with P nonsingular. Then λ is an eigenvalue of A with eigenvector x iff λ is an eigenvalue of P^{-1}AP with eigenvector P^{-1}x.

Theorem: Let A ∈ R^{n×n} and let λ be an eigenvalue of A with eigenvector x. Then
(a) αλ is an eigenvalue of the matrix αA with eigenvector x;
(b) λ − μ is an eigenvalue of the matrix A − μI with eigenvector x;
(c) if A is nonsingular, then λ ≠ 0 and λ^{-1} is an eigenvalue of A^{-1} with eigenvector x.

Definition: A matrix A is similar to B, denoted by A ∼ B, iff there exists an invertible matrix U such that U^{-1}AU = B. Furthermore, a matrix A is orthogonally similar to B iff there exists an orthogonal matrix Q such that Q^t AQ = B.

Theorem: Two similar matrices have the same eigenvalues, i.e., A ∼ B ⟹ λ(A) = λ(B).
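A one-line experiment confirms that similarity preserves the spectrum (a sketch; P is an arbitrary nonsingular matrix, not one from the slides):

% Similar matrices share eigenvalues
A = [3 1 1; 0 4 1; 2 2 5];
P = [1 2 0; 0 1 1; 1 0 1];          % any nonsingular P works
[sort(eig(A)) sort(eig(P\(A*P)))]   % identical columns up to roundoff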

Diagonalization of Matrices

Theorem: Suppose A ∈ R^{n×n} has n linearly independent eigenvectors v1, v2, ..., vn corresponding to eigenvalues λ1, λ2, ..., λn. Let V = [v1, v2, ..., vn]; then V^{-1}AV = diag[λ1, λ2, ..., λn].

If A ∈ R^{n×n} has n distinct eigenvalues, then the corresponding eigenvectors are linearly independent. Thus, any matrix with distinct eigenvalues can be diagonalized. Not all matrices have distinct eigenvalues; therefore not all matrices are diagonalizable.

Spectrum Decomposition Theorem: Every real symmetric matrix can be diagonalized.

Nondiagonalizable matrices:
A = [2 1 0; 0 2 1; 0 0 2],  B = [1 0 0; 2 1 0; 3 5 2]

Diagonalizable matrices: C, D, E, K [entries lost in transcription].

Theorem: Let {(λ_i, v_i), 1 ≤ i ≤ n} be eigenvalues/eigenvectors of the matrix A ∈ R^{n×n}; then A^k v_j = λ_j^k v_j for all k ≥ 1. Moreover, if the {v_i} are linearly independent, then any y ∈ R^n can be written in the form

y = c1 v1 + c2 v2 + ⋯ + cn vn,  so that  A^k y = λ1^k c1 v1 + λ2^k c2 v2 + ⋯ + λn^k cn vn.

If |λ1| > |λ_i| for 2 ≤ i ≤ n, and c1 ≠ 0, then A^k y tends to the direction of v1 (A^k y → α v1) as k → ∞.
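The diagonalization V^{-1}AV = Λ can be verified numerically (a sketch; the test matrix is the Gershgorin example above, which has distinct eigenvalues):

% V^{-1} A V = diag(lambda_i) when the eigenvectors are independent
A = [3 1 1; 0 4 1; 2 2 5];
[V, D] = eig(A);
norm(V\(A*V) - D)           % ~1e-15: A is diagonalized by its eigenvectors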

A Markov Process

Suppose that 10% of the people outside Taiwan move in, and 20% of the people inside Taiwan move out, in each year. Let y_k and z_k be the populations at the end of the k-th year, outside Taiwan and inside Taiwan, respectively. Then we have

[y_k; z_k] = [0.9 0.2; 0.1 0.8] [y_{k−1}; z_{k−1}],  with λ1 = 1.0, λ2 = 0.7

[y_k; z_k] = [0.9 0.2; 0.1 0.8]^k [y_0; z_0] = (1/3) [2 1; 1 −1] [1^k 0; 0 (0.7)^k] [1 1; 1 −2] [y_0; z_0]

A Markov matrix A is nonnegative with each column adding to 1.
(a) λ1 = 1 is an eigenvalue with a nonnegative eigenvector x1.
(b) The other eigenvalues satisfy |λ_i| ≤ 1.
(c) If some power of A has all positive entries, then the other |λ_i| < 1, and A^k u_0 approaches the steady state u_∞, which is a multiple of x1, as long as the projection of u_0 on x1 is not zero.

Check the Perron–Frobenius theorem in Strang's book.
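Iterating the model shows the convergence to the steady state (a minimal sketch; the initial split is an arbitrary choice):

% Markov iteration: A^k p0 approaches the eigenvector for lambda = 1
A = [0.9 0.2; 0.1 0.8];
p = [1; 0];                 % all of the population outside Taiwan initially
for k = 1:50
    p = A*p;
end
p                           % approaches [2/3; 1/3], a multiple of [2; 1]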

Differential Equations and e^A

e^A = I + A/1! + A^2/2! + ⋯ + A^m/m! + ⋯

du/dt = λu ⟹ u(t) = e^{λt} u(0);  du/dt = Au ⟹ u(t) = e^{tA} u(0)

If A = UΛU^t for an orthogonal matrix U, then e^A = U e^Λ U^t = U diag[e^{λ1}, e^{λ2}, ..., e^{λn}] U^t.

Solve x''' − 3x'' + 2x' = 0. Let y = x', z = y' = x'', and let u = [x, y, z]^t. The problem is reduced to solving

u' = Au = [0 1 0; 0 0 1; 0 −2 3] u

Then u(t) = e^{tA} u(0) = V diag[e^{2t}, e^t, 1] V^{-1} u(0), where the columns of V are eigenvectors of A for λ = 2, 1, 0. [The numerical factors of this product were lost in transcription.]
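MATLAB's expm evaluates the matrix exponential directly, so the third-order equation can be solved without forming eigenvectors by hand (a sketch; the initial condition is an arbitrary choice):

% u(t) = expm(t*A)*u(0) for x''' - 3x'' + 2x' = 0
A = [0 1 0; 0 0 1; 0 -2 3];    % companion matrix from the reduction above
u0 = [1; 0; 0];                % u = [x; x'; x''] at t = 0
t = 1;
u = expm(t*A)*u0               % [x(1); x'(1); x''(1)]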

Similarity Transformation and Triangularization

Schur's Theorem: For every A ∈ R^{n×n}, there exists an orthogonal matrix U such that U^t AU = T is upper-triangular. The eigenvalues must be shared by the similar matrix T and appear along its main diagonal.

Hint: By induction. Suppose that the theorem has been proved for all matrices of order n−1, and consider A ∈ R^{n×n} with Ax = λx and ‖x‖_2 = 1. Then there is a Householder matrix H1 such that H1 x = β e1, e.g., β = ±‖x‖_2; hence

H1 A H1^t e1 = H1 A (H1^t e1) = H1 A (β^{-1} x) = β^{-1} H1 (Ax) = β^{-1} λ (H1 x) = β^{-1} λ (β e1) = λ e1

Thus

H1 A H1^t = [λ * ; O A1]   (1)

Spectrum Decomposition Theorem: Every real symmetric matrix can be diagonalized by an orthogonal matrix: Q^t AQ = Λ, or A = QΛQ^t = Σ_{i=1}^n λ_i q_i q_i^t.

Definition: A symmetric matrix A ∈ R^{n×n} is nonnegative definite if x^t Ax ≥ 0 for all x ∈ R^n, x ≠ 0.

Definition: A symmetric matrix A ∈ R^{n×n} is positive definite if x^t Ax > 0 for all x ∈ R^n, x ≠ 0.

Singular Value Decomposition Theorem: Each matrix A ∈ R^{m×n} can be decomposed as A = UΣV^t, where both U ∈ R^{m×m} and V ∈ R^{n×n} are orthogonal. Moreover, Σ ∈ R^{m×n} = diag[σ1, σ2, ..., σk, 0, ..., 0] is essentially diagonal with the singular values satisfying σ1 ≥ σ2 ≥ ⋯ ≥ σk > 0, and

A = UΣV^t = Σ_{i=1}^k σ_i u_i v_i^t

Example: [three small matrices A, B, C for SVD practice; entries lost in transcription].
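The outer-product expansion of the SVD is easy to demonstrate (a sketch; the 2-by-2 matrix is an arbitrary example, since the slide's entries were lost):

% A = U*S*V' and the rank-1 truncation sigma_1*u_1*v_1'
A = [1 2; 2 1];
[U, S, V] = svd(A);
norm(A - U*S*V')               % ~1e-16: exact reconstruction
A1 = S(1,1)*U(:,1)*V(:,1)'     % best rank-1 approximation in the 2-norm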

A Jacobi Transform (Givens Rotation)

J(i, k; θ) is the identity matrix except for four entries:
J_hh = 1 if h ≠ i and h ≠ k (where i < k)
J_ii = J_kk = c = cos θ
J_ik = s = sin θ,  J_ki = −s = −sin θ

Let x, y ∈ R^n; then y = J(i, k; θ) x implies
y_i = c x_i + s x_k
y_k = −s x_i + c x_k

Choosing c = x_i / √(x_i^2 + x_k^2) and s = x_k / √(x_i^2 + x_k^2) annihilates y_k. For example, with x = [1, 2, 3, 4]^t and (cos θ, sin θ) = (1/√5, 2/√5),

J(2, 4; θ) x = [1, √20, 3, 0]^t
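The worked example can be reproduced directly (a minimal sketch of the rotation above):

% J(2,4;theta) with c = 1/sqrt(5), s = 2/sqrt(5) annihilates x(4)
x = [1; 2; 3; 4];
i = 2;  k = 4;
c = x(i)/hypot(x(i), x(k));    % 1/sqrt(5)
s = x(k)/hypot(x(i), x(k));    % 2/sqrt(5)
J = eye(4);
J(i,i) = c;  J(k,k) = c;  J(i,k) = s;  J(k,i) = -s;
J*x                            % = [1; sqrt(20); 3; 0]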

Jacobi Transforms (Givens Rotations)

The Jacobi method consists of a sequence of orthogonal similarity transformations such that

J_K^t J_{K−1}^t ⋯ J_2^t J_1^t A J_1 J_2 ⋯ J_{K−1} J_K = Λ

where each J_i is orthogonal, and so is Q = J_1 J_2 ⋯ J_{K−1} J_K. Each Jacobi transform (Givens rotation) is just a plane rotation designed to annihilate one of the off-diagonal matrix elements. Let A = (a_ij) be symmetric; then B = J^t(p, q, θ) A J(p, q, θ), where

b_rp = c a_rp − s a_rq   for r ≠ p, r ≠ q
b_rq = s a_rp + c a_rq   for r ≠ p, r ≠ q
b_pp = c^2 a_pp + s^2 a_qq − 2sc a_pq
b_qq = s^2 a_pp + c^2 a_qq + 2sc a_pq
b_pq = (c^2 − s^2) a_pq + sc (a_pp − a_qq)

To set b_pq = 0, we choose c, s such that

α = cot(2θ) = (c^2 − s^2)/(2sc) = (a_qq − a_pp)/(2 a_pq)   (1)

For computational convenience, let t = s/c; then t^2 + 2αt − 1 = 0, whose smaller root (in the absolute sense) can be computed by

t = sgn(α)/(|α| + √(α^2 + 1)),  and  c = 1/√(t^2 + 1),  s = ct,  τ = s/(1 + c)   (2)

Remark:
b_pp = a_pp − t a_pq
b_qq = a_qq + t a_pq
b_rp = a_rp − s(a_rq + τ a_rp)
b_rq = a_rq + s(a_rp − τ a_rq)

Algorithm of Jacobi Transforms to Diagonalize A

A^(0) ← A
for k = 0, 1, 2, ... until convergence
    Let |a_pq^(k)| = max_{i<j} |a_ij^(k)|
    Compute α_k = (a_qq^(k) − a_pp^(k)) / (2 a_pq^(k)), i.e., solve cot(2θ_k) = α_k for θ_k
    t = sgn(α_k)/(|α_k| + √(α_k^2 + 1)),  c = 1/√(t^2 + 1),  s = ct,  τ = s/(1 + c)
    A^(k+1) ← J_k^t A^(k) J_k, where J_k = J(p, q, θ_k)
end for
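A direct MATLAB transcription of this loop (a minimal sketch; the convergence threshold and the α = 0 branch are implementation choices):

% Classical Jacobi iteration for a symmetric matrix
A = [4 2 0; 2 3 1; 0 1 2];
n = size(A, 1);
for k = 1:100
    Off = abs(A - diag(diag(A)));
    [amax, idx] = max(Off(:));            % largest off-diagonal element
    if amax < 1e-12, break; end
    [p, q] = ind2sub([n n], idx);
    if p > q, tmp = p; p = q; q = tmp; end
    alpha = (A(q,q) - A(p,p)) / (2*A(p,q));
    if alpha == 0
        t = 1;                            % theta = pi/4 when cot(2*theta) = 0
    else
        t = sign(alpha)/(abs(alpha) + sqrt(alpha^2 + 1));
    end
    c = 1/sqrt(t^2 + 1);  s = c*t;
    J = eye(n);
    J(p,p) = c;  J(q,q) = c;  J(p,q) = s;  J(q,p) = -s;
    A = J'*A*J;                           % annihilates A(p,q)
end
diag(A)'                                  % eigenvalues of the original A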

Convergence of the Jacobi Algorithm to Diagonalize A

Proof: Since |a_pq^(k)| ≥ |a_ij^(k)| for all i ≠ j, we have (a_pq^(k))^2 ≥ off(A^(k))/(2N), where N = n(n−1)/2 and

off(A^(k)) = Σ_{i≠j} (a_ij^(k))^2,

the sum of squared off-diagonal elements of A^(k). Furthermore,

off(A^(k+1)) = off(A^(k)) − 2 (a_pq^(k))^2 + 2 (a_pq^(k+1))^2
             = off(A^(k)) − 2 (a_pq^(k))^2, since a_pq^(k+1) = 0
             ≤ off(A^(k)) (1 − 1/N), since (a_pq^(k))^2 ≥ off(A^(k))/(2N)

Thus off(A^(k+1)) ≤ (1 − 1/N)^{k+1} off(A^(0)) → 0 as k → ∞.

Example:
A = [4 2 0; 2 3 1; 0 1 2],  J(1, 2; θ) = [c s 0; −s c 0; 0 0 1]

Then

A^(1) = J^t(1, 2; θ) A J(1, 2; θ) =
[4c^2 − 4cs + 3s^2   2c^2 + cs − 2s^2   −s
 2c^2 + cs − 2s^2    3c^2 + 4cs + 4s^2   c
 −s                  c                   2]

With c, s chosen to annihilate the (1,2) entry, note that off(A^(1)) = 2 < 10 = off(A^(0)) = off(A).

Example for Convergence of the Jacobi Algorithm

A^(0) =
[1.0000 0.5000 0.2500 0.1250
 0.5000 1.0000 0.5000 0.2500
 0.2500 0.5000 1.0000 0.5000
 0.1250 0.2500 0.5000 1.0000]

A^(1) =
[1.5000 0.0000 0.5303 0.2652
 0.0000 0.5000 0.1768 0.0884
 0.5303 0.1768 1.0000 0.5000
 0.2652 0.0884 0.5000 1.0000]

A^(2) =
[1.8363 0.0947 0.0000 0.4917
 0.0947 0.5000 0.1493 0.0884
 0.0000 0.1493 0.6637 0.2803
 0.4917 0.0884 0.2803 1.0000]

A^(3) =
[2.0636 0.1230 0.1176 0.0000
 0.1230 0.5000 0.1493 0.0405
 0.1176 0.1493 0.6637 0.2544
 0.0000 0.0405 0.2544 0.7727]

A^(4) =
[2.0636 0.1230 0.0915 0.0739
 0.1230 0.5000 0.0906 0.1254
 0.0915 0.0906 0.4580 0.0000
 0.0739 0.1254 0.0000 0.9783]

A^(5) =
[2.0636 0.1018 0.0915 0.1012
 0.1018 0.4691 0.0880 0.0000
 0.0915 0.0880 0.4580 0.0217
 0.1012 0.0000 0.0217 1.0092]

A^(6) =
[2.0701 0.0000 0.0969 0.1010
 0.0000 0.4627 0.0820 0.0064
 0.0969 0.0820 0.4580 0.0217
 0.1010 0.0064 0.0217 1.0092]

...

A^(15) =
[2.0856 0.0000 0.0000 0.0000
 0.0000 0.5394 0.0000 0.0000
 0.0000 0.0000 0.3750 0.0000
 0.0000 0.0000 0.0000 1.0000]

Powers of a Matrix and Its Eigenvalues

Theorem: Let λ1, λ2, ..., λn be eigenvalues of A ∈ R^{n×n}. Then λ1^k, λ2^k, ..., λn^k are eigenvalues of A^k ∈ R^{n×n} with the same corresponding eigenvectors of A. That is,

A v_i = λ_i v_i ⟹ A^k v_i = λ_i^k v_i,  1 ≤ i ≤ n

Suppose that the matrix A ∈ R^{n×n} has n linearly independent eigenvectors v1, v2, ..., vn corresponding to eigenvalues λ1, λ2, ..., λn. Then any x ∈ R^n can be written as

x = c1 v1 + c2 v2 + ⋯ + cn vn,  so that  A^k x = λ1^k c1 v1 + λ2^k c2 v2 + ⋯ + λn^k cn vn

In particular, if |λ1| > |λ_j| for 2 ≤ j ≤ n and c1 ≠ 0, then A^k x will tend to lie in the direction of v1 when k is large enough.

Power Method for Computing the Largest Eigenvalue

Suppose that the matrix A ∈ R^{n×n} is diagonalizable, that U^{-1}AU = diag(λ1, λ2, ..., λn) with U = [v1, v2, ..., vn], and that |λ1| > |λ2| ≥ |λ3| ≥ ⋯ ≥ |λn|. Given u^(0) ∈ R^n, the power method produces a sequence of vectors u^(k) as follows:

for k = 1, 2, ...
    z^(k) = A u^(k−1)
    r^(k) = z_m^(k), where |z_m^(k)| = ‖z^(k)‖_∞, for some 1 ≤ m ≤ n
    u^(k) = z^(k) / r^(k)
end for

λ1 must be real, since the complex eigenvalues of a real matrix appear in conjugate pairs (and hence with equal modulus).

Example:
A = [1 2; 2 1],  λ1 = 3 with v1 = [1; 1],  λ2 = −1 with v2 = [1; −1]

Let u^(0) = [1; 0]; then u^(5) = [0.9918; 1.0000] and r^(5) = 2.9756.
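The five iterations of the example take a few lines of MATLAB (a sketch of the loop above):

% Power method with infinity-norm scaling
A = [1 2; 2 1];
u = [1; 0];
for k = 1:5
    z = A*u;
    [~, m] = max(abs(z));     % entry of largest magnitude
    r = z(m);
    u = z/r;
end
r, u                          % r = 2.9756, u = [0.9918; 1.0000]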

QR Iterations for Computing Eigenvalues

%
% Script File: shiftqr.m
% Solving eigenvalues by shift-QR factorization
%
Nrun = 5;
fin = fopen('datamatrix.txt');
fgetl(fin);                    % read off the header line
n = fscanf(fin, '%d', 1);
A = fscanf(fin, '%f', [n n]);
A = A';
SaveA = A;
for k = 1:Nrun,
    s = A(n,n);                % shift by the last diagonal entry
    A = A - s*eye(n);
    [Q, R] = qr(A);
    A = R*Q + s*eye(n);
end
eig(SaveA)

%
% datamatrix.txt
% Matrices for computing eigenvalues by QR factorization or shift-QR
%
5
1.0    0.5    0.25   0.125  0.0625
0.5    1.0    0.5    0.25   0.125
0.25   0.5    1.0    0.5    0.25
0.125  0.25   0.5    1.0    0.5
0.0625 0.125  0.25   0.5    1.0
4    (for shift-QR studies)
2.9766  0.3945  0.4198  1.1159
0.3945  2.7328 -0.3097  0.1129
0.4198 -0.3097  2.5675  0.6079
1.1159  0.1129  0.6079  1.7231

Norms of Vectors and Matrices

Definition: A vector norm on R^n is a function τ : R^n → R_+ = {x ∈ R : x ≥ 0} that satisfies
(1) τ(x) > 0 for all x ≠ 0, and τ(0) = 0
(2) τ(cx) = |c| τ(x) for all c ∈ R and x ∈ R^n
(3) τ(x + y) ≤ τ(x) + τ(y) for all x, y ∈ R^n

Hölder norm (p-norm): ‖x‖_p = (Σ_{i=1}^n |x_i|^p)^{1/p} for p ≥ 1.
(p = 1) ‖x‖_1 = Σ_{i=1}^n |x_i| (Manhattan or city-block distance)
(p = 2) ‖x‖_2 = (Σ_{i=1}^n |x_i|^2)^{1/2} (Euclidean distance)
(p = ∞) ‖x‖_∞ = max_{1≤i≤n} |x_i| (∞-norm)
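The three norms for a sample vector (a minimal check; the vector is an arbitrary choice):

% Hoelder norms of x = [3; -4; 12]
x = [3; -4; 12];
[norm(x,1) norm(x,2) norm(x,Inf)]   % 19, 13, 12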

Definition: A matrix norm on R^{m×n} is a function τ : R^{m×n} → R_+ = {x ∈ R : x ≥ 0} that satisfies
(1) τ(A) > 0 for all A ≠ O, and τ(O) = 0
(2) τ(cA) = |c| τ(A) for all c ∈ R and A ∈ R^{m×n}
(3) τ(A + B) ≤ τ(A) + τ(B) for all A, B ∈ R^{m×n}

Consistency property: τ(AB) ≤ τ(A) τ(B) for all compatible A, B.

Examples:
(a) τ(A) = max{|a_ij| : 1 ≤ i ≤ m, 1 ≤ j ≤ n}
(b) ‖A‖_F = [Σ_{i=1}^m Σ_{j=1}^n a_ij^2]^{1/2} (Frobenius norm)

Subordinate matrix norm: ‖A‖ = max_{x≠0} {‖Ax‖ / ‖x‖}
(1) If A ∈ R^{m×n}, then ‖A‖_1 = max_{1≤j≤n} (Σ_{i=1}^m |a_ij|)
(2) If A ∈ R^{m×n}, then ‖A‖_∞ = max_{1≤i≤m} (Σ_{j=1}^n |a_ij|)
(3) If A ∈ R^{n×n} is real symmetric, then ‖A‖_2 = max_{1≤i≤n} |λ_i|, where λ_i ∈ λ(A)
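MATLAB's norm covers the subordinate and Frobenius norms above (a sketch on an arbitrary 2-by-2 example):

% Column-sum, row-sum, and Frobenius norms
A = [1 -2; 3 4];
[norm(A,1) norm(A,Inf) norm(A,'fro')]   % 6 (max col sum), 7 (max row sum), sqrt(30)
norm(A,2)                               % largest singular value of A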

Theorem: Let x ∈ R^n and let A = (a_ij) ∈ R^{n×n}. Define ‖A‖_1 = sup_{‖u‖_1=1} {‖Au‖_1}. Then ‖A‖_1 = max_{1≤j≤n} Σ_{i=1}^n |a_ij|.

Proof: For ‖u‖_1 = 1,

‖Au‖_1 = Σ_{i=1}^n |Σ_{j=1}^n a_ij u_j| ≤ Σ_{j=1}^n |u_j| Σ_{i=1}^n |a_ij|

Then

‖Au‖_1 ≤ max_{1≤j≤n} {Σ_{i=1}^n |a_ij|} Σ_{j=1}^n |u_j| = max_{1≤j≤n} {Σ_{i=1}^n |a_ij|}

On the other hand, let Σ_{i=1}^n |a_ik| = max_{1≤j≤n} {Σ_{i=1}^n |a_ij|} and choose u = e_k, which completes the proof.

Theorem: Let A = [a_ij] ∈ R^{m×n}, and define ‖A‖_∞ = max_{‖u‖_∞=1} {‖Au‖_∞}. Show that ‖A‖_∞ = max_{1≤i≤m} Σ_{j=1}^n |a_ij|.

Proof: Let Σ_{j=1}^n |a_Kj| = max_{1≤i≤m} Σ_{j=1}^n |a_ij|. For any x ∈ R^n with ‖x‖_∞ = 1, we have

‖Ax‖_∞ = max_{1≤i≤m} {|Σ_{j=1}^n a_ij x_j|} ≤ max_{1≤i≤m} {Σ_{j=1}^n |a_ij| |x_j|} ≤ max_{1≤i≤m} {Σ_{j=1}^n |a_ij|} ‖x‖_∞ = Σ_{j=1}^n |a_Kj|

In particular, if we pick y ∈ R^n such that y_j = sign(a_Kj), 1 ≤ j ≤ n, then ‖y‖_∞ = 1 and ‖Ay‖_∞ = Σ_{j=1}^n |a_Kj|, which completes the proof.

Theorem: Let A = [a_ij] ∈ R^{n×n}, and define ‖A‖_2 = max_{‖x‖_2=1} {‖Ax‖_2}. Show that

‖A‖_2 = √(ρ(A^t A)) = the square root of the maximum eigenvalue of A^t A (the spectral radius)

Proof: Let λ1, λ2, ..., λn be the eigenvalues and u1, u2, ..., un the corresponding unit eigenvectors of the matrix A^t A, that is,

(A^t A) u_i = λ_i u_i and ‖u_i‖_2 = 1, 1 ≤ i ≤ n (note λ_i ≥ 0 since A^t A is nonnegative definite).

Since u1, u2, ..., un form an orthonormal basis by the spectrum decomposition theorem, any x ∈ R^n with ‖x‖_2 = 1 can be written as x = Σ_{i=1}^n c_i u_i with Σ_i c_i^2 = 1. Then

‖A‖_2^2 = max_{‖x‖_2=1} {‖Ax‖_2^2} = max_{‖x‖_2=1} {x^t A^t A x} = max {Σ_{i=1}^n λ_i c_i^2} = max_{1≤j≤n} {λ_j}

so ‖A‖_2 = √(max_j λ_j) = √(ρ(A^t A)).
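The identity ‖A‖_2 = √(ρ(A^t A)) is easy to confirm numerically (a sketch on a random rectangular matrix):

% ||A||_2 equals the square root of the largest eigenvalue of A'A
A = randn(4, 3);
abs(norm(A,2) - sqrt(max(eig(A'*A))))   % ~1e-15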

Problems Solved by Matlab

Let A, B, H, x, y, u, b be the matrices and vectors defined below, with H = I − 2uu^t and

u = [1/2, 1/2, 1/2, 1/2]^t

[the entries of A, B, b, x, and y were lost in transcription].

1. Let A = LU = QR; find L, U, Q, R.
2. Find the determinants and inverses of the matrices A, B, and H.
3. Solve Ax = b. How do we find the number of floating-point operations required?
4. Find the ranks of the matrices A, B, and H.
5. Find the characteristic polynomials of the matrices A and B.
6. Find the 1-norm, 2-norm, and ∞-norm of the matrices A, B, and H.
7. Find the eigenvalues/eigenvectors of the matrices A and B.
8. Find matrices U and V such that U^{-1}AU and V^{-1}BV are diagonal matrices.
9. Find the singular values and singular vectors of the matrices A and B.
10. Randomly generate a 4 × 4 matrix C with 0 ≤ C(i, j) ≤ 9.
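The corresponding MATLAB commands, one per problem (a sketch; since several entries above were lost in transcription, any small test matrices of matching sizes can be substituted for A, B, and b):

[L, U] = lu(A);  [Q, R] = qr(A);     % 1
det(A), inv(A)                       % 2
x = A\b;                             % 3
rank(A)                              % 4
poly(A)                              % 5: characteristic polynomial coefficients
norm(A,1), norm(A,2), norm(A,Inf)    % 6
[V, D] = eig(A);                     % 7 and 8: A = V*D*inv(V)
[U1, S, V1] = svd(A);                % 9
C = randi([0 9], 4, 4)               % 10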