
Homework
Haimanot Kassa, Jeremy Morris & Isaac Ben Jeppsen
October 7, 2004

Exercise 1: We can say that
$$\|x\| = \|x - y + y\| \le \|x - y\| + \|y\|,$$
so that $\|x\| - \|y\| \le \|x - y\|$. Likewise,
$$\|y\| = \|y - x + x\| \le \|y - x\| + \|x\|,$$
so that $\|y\| - \|x\| \le \|y - x\| = \|x - y\|$. Combining the two bounds, we get
$$\big|\,\|x\| - \|y\|\,\big| \le \|x - y\|.$$

Exercise 2: (An Induced Matrix Norm.) Let $A$ be a real $n \times n$ matrix. Show that
$$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}|.$$
Hint: use the definition of the induced matrix norm,
$$\|A\|_1 = \max_{\|x\|_1 = 1} \|Ax\|_1.$$
Answer: Let $\|x\|_1 = \sum_{j=1}^{n} |x_j|$.

Then
$$\|Ax\|_1 = \sum_{i=1}^{n} \Big|\sum_{j=1}^{n} a_{ij} x_j\Big| \le \sum_{i=1}^{n} \sum_{j=1}^{n} |a_{ij}|\,|x_j|.$$
Changing the order of summation, we can separate the summands,
$$\|Ax\|_1 \le \sum_{j=1}^{n} \Big(\sum_{i=1}^{n} |a_{ij}|\Big) |x_j|.$$
Let
$$c = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}|. \qquad (1)$$
Then $\|Ax\|_1 \le c\,\|x\|_1$, and thus $\|A\|_1 \le c$. To show this as an equality, we demonstrate an $x$ for which $\|Ax\|_1 / \|x\|_1 = c$. Let $k$ be the column index for which the maximum in (1) is attained, and let $x = e_k$, the $k$-th unit vector. Then $\|x\|_1 = 1$ and
$$\|Ax\|_1 = \sum_{i=1}^{n} \Big|\sum_{j=1}^{n} a_{ij} x_j\Big| = \sum_{i=1}^{n} |a_{ik}| = c.$$
This proves that for the vector norm $\|\cdot\|_1$, the operator norm is
$$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}|.$$
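As a quick numerical sanity check of this result (an illustration added here, not part of the original assignment; numpy is assumed), the column-sum formula can be compared against the value attained at the maximizing unit vector $e_k$ used in the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

# Closed form proved above: the induced 1-norm is the maximum absolute column sum.
k = np.abs(A).sum(axis=0).argmax()          # column attaining the maximum in (1)
col_sum_norm = np.abs(A).sum(axis=0).max()

# The proof's maximizer: x = e_k, the k-th unit vector (||x||_1 = 1).
x = np.zeros(n)
x[k] = 1.0
attained = np.abs(A @ x).sum()              # equals the k-th absolute column sum

print(col_sum_norm, attained, np.linalg.norm(A, 1))   # all three values agree
```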

Exercise 3: An induced norm is defined as
$$\|A\| = \max_{\|x\| = 1} \|Ax\|.$$
If we let $A = I_n$, then
$$\|A\|_F = \|I\|_F = \sqrt{\sum_{i,j} a_{ij}^2} = \sqrt{1 + 1 + \dots + 1} = \sqrt{n},$$
but
$$\|A\| = \|I\| = \max_{\|x\| = 1} \|Ix\| = \max_{\|x\| = 1} \|x\| = 1.$$
Since $\max_{\|x\|=1} \|Ax\| \ne \|A\|_F$, the Frobenius norm is not an induced norm.

Exercise 4: Does the spectral radius itself define a norm? Why or why not?
Answer: As we have seen in class, for an arbitrary square matrix $A$,
$$\rho(A) \le \|A\|. \qquad (2)$$
Moreover, if $\varepsilon > 0$ is given, then there is an operator matrix norm $\|\cdot\|_{\varepsilon}$ for which
$$\|A\|_{\varepsilon} \le \rho(A) + \varepsilon. \qquad (3)$$
This shows that $\rho(A)$ is almost a matrix norm. But notice that it does not satisfy all the norm properties. For example, $\|A\| = 0 \iff A = 0$, but this is not necessarily true for the spectral radius of a matrix. Take this example:
$$A = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 2 & 3 & 0 \end{pmatrix}. \qquad (4)$$
The spectral radius of $A$ is 0, but $A$ is not the zero matrix.

Exercise 5: Derivation: Given $Ax = b$, we want to obtain an approximation $\tilde{x}$ to $x$, $\tilde{x} \ne x$. This leads to the error $e = \tilde{x} - x$ with $\|e\| = \|\tilde{x} - x\|$ and the relative error $\|e\| / \|x\|$. Writing $r = A\tilde{x} - b$ for the residual (so that $Ae = r$), this leads to the following set of relations (a quick numerical check of these four bounds follows below):
$$Ax = b \;\Rightarrow\; \|b\| \le \|A\|\,\|x\| \qquad (1)$$
$$A^{-1}b = x \;\Rightarrow\; \|x\| \le \|A^{-1}\|\,\|b\| \qquad (2)$$
$$Ae = r \;\Rightarrow\; \|r\| \le \|A\|\,\|e\| \qquad (3)$$
$$A^{-1}r = e \;\Rightarrow\; \|e\| \le \|A^{-1}\|\,\|r\| \qquad (4)$$
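Before combining these bounds, they can be checked numerically; the following sketch (an added illustration, assuming numpy) verifies (1)–(4) in the 2-norm for a random system and a perturbed solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
b = A @ x

x_tilde = x + 1e-6 * rng.standard_normal(n)   # a perturbed "approximate solution"
e = x_tilde - x
r = A @ x_tilde - b                            # residual, so that A e = r

nA = np.linalg.norm(A, 2)
nAinv = np.linalg.norm(np.linalg.inv(A), 2)

assert np.linalg.norm(b) <= nA * np.linalg.norm(x) + 1e-12        # (1)
assert np.linalg.norm(x) <= nAinv * np.linalg.norm(b) + 1e-12     # (2)
assert np.linalg.norm(r) <= nA * np.linalg.norm(e) + 1e-12        # (3)
assert np.linalg.norm(e) <= nAinv * np.linalg.norm(r) + 1e-12     # (4)
print("all four bounds hold")
```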

Dividing the smaller side of (4) by the larger side of (1), and the larger side of (4) by the smaller side of (1), gives
$$\frac{\|e\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\,\frac{\|r\|}{\|b\|}.$$
Similarly with (2) and (3):
$$\frac{\|r\|}{\|A^{-1}\|\,\|b\|} \le \frac{\|A\|\,\|e\|}{\|x\|},$$
which leads to
$$\frac{1}{\|A\|\,\|A^{-1}\|}\,\frac{\|r\|}{\|b\|} \;\le\; \frac{\|e\|}{\|x\|} \;\le\; \|A\|\,\|A^{-1}\|\,\frac{\|r\|}{\|b\|}. \qquad (i)$$
In general this inequality can be shown to be sharp trivially, by definition. That is, given
$$\|A\| = \max_{x \ne 0} \frac{\|Ax\|}{\|x\|},$$
there is, by definition, some $x$ for which $\|b\| = \|Ax\| = \|A\|\,\|x\|$, and similarly for equations (2), (3), (4). Therefore there is some $x$ such that on the right-hand side we have
$$\frac{\|e\|}{\|A\|\,\|x\|} = \|A^{-1}\|\,\frac{\|r\|}{\|b\|}, \quad\text{i.e.}\quad \frac{\|e\|}{\|x\|} = \|A\|\,\|A^{-1}\|\,\frac{\|r\|}{\|b\|},$$
and on the left-hand side we have
$$\frac{\|r\|}{\|A^{-1}\|\,\|b\|} = \frac{\|A\|\,\|e\|}{\|x\|}, \quad\text{i.e.}\quad \frac{1}{\|A\|\,\|A^{-1}\|}\,\frac{\|r\|}{\|b\|} = \frac{\|e\|}{\|x\|}.$$

a) For the more specific case of $\|\cdot\|_2$: using the SVD $A = U\Sigma V^T$, the relation $\|Ax\|_2 = \|b\|_2$ becomes the following. Let $x = v_1$; then
$$\|Ax\|_2^2 = \|U\Sigma V^T v_1\|_2^2 = (U\Sigma V^T v_1)^T (U\Sigma V^T v_1) = v_1^T V \Sigma^T U^T U \Sigma V^T v_1.$$

Since $U$ and $V$ are orthogonal, this reduces to $\|\Sigma V^T v_1\|_2^2$. By definition of the induced matrix norm this becomes
$$\|\Sigma\|_2 = \max_{\|x\|_2 = 1} \|\Sigma x\|_2 = \max_{\|x\|_2 = 1} \sqrt{\sum_{i=1}^{n} \sigma_i^2 x_i^2}.$$
Since by definition $\sigma_1 = \max_i(\sigma_i)$, and since $\|x\|_2 = 1$,
$$\max_{\|x\|_2 = 1} \sqrt{\sum_{i=1}^{n} \sigma_i^2 x_i^2} = \sigma_1,$$
attained at $x = v_1$. Similarly for $\|A^{-1} r\|_2 = \|e\|_2$: letting $r = u_n$ yields
$$\|A^{-1} r\|_2 = \|V \Sigma^{-1} U^T u_n\|_2 = \sigma_n^{-1} = \frac{1}{\sigma_n}.$$
Similar steps for equations (2) and (3) yield
$$Ax = b \;\Rightarrow\; \|b\|_2 = \|Ax\|_2 = \sigma_1 \|x\|_2 \qquad (1.a)$$
$$A^{-1}b = x \;\Rightarrow\; \|x\|_2 = \|A^{-1}b\|_2 = \tfrac{1}{\sigma_n}\|b\|_2 \qquad (2.a)$$
$$Ae = r \;\Rightarrow\; \|r\|_2 = \|Ae\|_2 = \sigma_1 \|e\|_2 \qquad (3.a)$$
$$A^{-1}r = e \;\Rightarrow\; \|e\|_2 = \|A^{-1}r\|_2 = \tfrac{1}{\sigma_n}\|r\|_2 \qquad (4.a)$$
This gives for the right-hand side
$$\frac{\|e\|_2}{\|x\|_2} = \frac{\sigma_1}{\sigma_n}\,\frac{\|r\|_2}{\|b\|_2},$$
and for the left-hand side
$$\frac{\sigma_n}{\sigma_1}\,\frac{\|r\|_2}{\|b\|_2} = \frac{\|e\|_2}{\|x\|_2}.$$
Therefore the inequality is sharp when $A$ is a diagonal matrix.
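To make the sharpness claim concrete, here is a small numerical illustration (added here, not part of the original write-up): for a diagonal matrix, choosing $x$ along $v_1$ and the residual along $u_n$ attains the upper bound $\|e\|_2/\|x\|_2 = \kappa_2(A)\,\|r\|_2/\|b\|_2$.

```python
import numpy as np

# Diagonal A: the singular values are just the diagonal entries.
sigma = np.array([4.0, 2.0, 0.5])
A = np.diag(sigma)
kappa = sigma.max() / sigma.min()            # kappa_2(A) = sigma_1 / sigma_n

x = np.array([1.0, 0.0, 0.0])                # x = v_1 (direction of sigma_1)
b = A @ x
r = np.array([0.0, 0.0, 1.0]) * 1e-3         # residual along u_n (direction of sigma_n)
e = np.linalg.solve(A, r)                    # e = A^{-1} r

lhs = np.linalg.norm(e) / np.linalg.norm(x)
rhs = kappa * np.linalg.norm(r) / np.linalg.norm(b)
print(lhs, rhs)                              # equal: the upper bound in (i) is attained
```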

b) For $\|\cdot\|_\infty$: by definition,
$$\|A\|_\infty = \max_{\|x\|_\infty = 1} \|Ax\|_\infty = \max_i \Big|\sum_j a_{ij} x_j\Big| \le \max_i \sum_j |a_{ij}| = \sum_j |a_{kj}|.$$
Supposing that the max row sum is obtained in the $k$-th row, pick $x$ to be $\pm 1$ based on the signs of this $k$-th row,
$$x = \begin{pmatrix} \pm 1 \\ \pm 1 \\ \vdots \\ \pm 1 \end{pmatrix}.$$
Given this $x$, the inequality (i) becomes an equality, since each of the relations (1)–(4) can be made an equality in the $\infty$-norm:
$$Ax = b \;\Rightarrow\; \|b\|_\infty = \|Ax\|_\infty = \sum_j |a_{kj}|\,\|x\|_\infty = \|A\|_\infty \|x\|_\infty \qquad (1.a)$$
$$A^{-1}b = x \;\Rightarrow\; \|x\|_\infty = \|A^{-1}b\|_\infty = \|A^{-1}\|_\infty \|b\|_\infty \qquad (2.a)$$
$$Ae = r \;\Rightarrow\; \|r\|_\infty = \|Ae\|_\infty = \|A\|_\infty \|e\|_\infty \qquad (3.a)$$
$$A^{-1}r = e \;\Rightarrow\; \|e\|_\infty = \|A^{-1}r\|_\infty = \|A^{-1}\|_\infty \|r\|_\infty \qquad (4.a)$$
And thus, for the right-hand side,
$$\frac{\|e\|_\infty}{\|x\|_\infty} = \|A\|_\infty \|A^{-1}\|_\infty\,\frac{\|r\|_\infty}{\|b\|_\infty},$$
and for the left-hand side,
$$\frac{1}{\|A\|_\infty \|A^{-1}\|_\infty}\,\frac{\|r\|_\infty}{\|b\|_\infty} = \frac{\|e\|_\infty}{\|x\|_\infty}.$$

Exercise 6: General case: $A$ is Hermitian ($A^* = A$, the complex analogue of a symmetric matrix).
i) If $A^* = A$, then
$$\langle Ax, x\rangle = \langle x, A^*x\rangle = \langle x, Ax\rangle = \overline{\langle Ax, x\rangle},$$
so $\langle Ax, x\rangle$ is real.
ii) If $Ax = \lambda x$, then $\langle Ax, x\rangle = \langle \lambda x, x\rangle = \lambda \langle x, x\rangle$. Since $\langle Ax, x\rangle$ and $\langle x, x\rangle$ are real, $\lambda$ must be real.
It follows directly that $A = A^T$ (real symmetric) is a special case of this. Therefore, the eigenvalues of $A$ are real.
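A short numerical check of this fact (an added illustration, not part of the original solution): the eigenvalues of a random Hermitian matrix come out real up to round-off.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                     # force A to be Hermitian: A* = A

eigvals = np.linalg.eigvals(A)
print(eigvals)
print(np.max(np.abs(eigvals.imag)))          # ~1e-16: eigenvalues are real up to round-off
```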

Exercise 7: Let $\lambda$ be an eigenvalue of $A \in \mathbb{R}^{n \times n}$. Show that there exists an $i \in \{1, 2, \dots, n\}$ such that
$$|a_{ii} - \lambda| \le \sum_{j \ne i} |a_{ij}|.$$
Proof: Let $x_i$ be the component of largest magnitude of one of the eigenvectors of $A$ belonging to $\lambda$. From the $i$-th equation of the system $(A - \lambda I)x = 0$, we have
$$(a_{ii} - \lambda)x_i = -\sum_{j \ne i} a_{ij} x_j,$$
which leads to
$$|a_{ii} - \lambda| \le \sum_{j \ne i} |a_{ij}|\,\frac{|x_j|}{|x_i|} \le \sum_{j \ne i} |a_{ij}|.$$
Let $S$ be a set that is the union of $k \le n$ Gershgorin circles such that the intersection of $S$ with all other Gershgorin circles is empty. Show that $S$ contains precisely $k$ eigenvalues of $A$ (counting multiplicities). Let $k = n$; this implies that $A$ is a diagonal matrix, $A = D$, which yields $n$ circles centered about the diagonal entries $a_{ii}$ with radius 0.
Give an example showing that, on the other hand, there may be Gershgorin circles that contain no eigenvalues of $A$ at all. Let $A$ be a $2 \times 2$ matrix whose first row is $(1, \tfrac{1}{2})$, so that its first Gershgorin circle is centered at 1 with radius 0.5. The eigenvalues for this matrix are $\lambda = 1.908\ldots,\ 0.596\ldots$, which are both outside the circle of radius 0.5 centered at 1.
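Both claims are easy to check by script. The matrix below is an illustrative stand-in (not necessarily the one used in the assignment) whose first Gershgorin circle, centered at 1 with radius 0.5, contains no eigenvalue, while every eigenvalue still lies in some circle:

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [4.0, 1.0]])                    # stand-in example; first disc: center 1, radius 0.5

eigs = np.linalg.eigvals(A)                   # 1 + sqrt(2), 1 - sqrt(2)
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)

# Gershgorin: every eigenvalue lies in at least one disc.
for lam in eigs:
    assert np.any(np.abs(lam - centers) <= radii + 1e-12)

# ...but an individual disc may contain no eigenvalue at all:
in_first_disc = np.abs(eigs - centers[0]) <= radii[0]
print(eigs, in_first_disc)                    # both False: disc 1 holds no eigenvalue
```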

NOTE: Olga Taussky Todd was famous for using this theorem; she worked with the flutter group to help build more efficient aircraft during WWII. The flutter speed is extremely important to an aircraft's design and must be carefully calculated for it to be able to get off the ground. Olga found a simpler way to do these calculations, locating the eigenvalues and eigenvectors using the Gershgorin theorem. "Still, matrix theory reached me only slowly," Taussky noted in a 1988 article in the American Mathematical Monthly. "Since my main subject was number theory, I did not look for matrix theory. It somehow looked for me." (Ivars Peterson's MathTrek)

Exercise 8: Let $A \in \mathbb{R}^{m \times l}$ and $B \in \mathbb{R}^{l \times n}$, and show that if we use the partitioned matrices
$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix},$$
where $A_{ij} \in \mathbb{R}^{m_i \times l_j}$ and $B_{ij} \in \mathbb{R}^{l_i \times n_j}$, then the matrix product can be computed using
$$AB = \begin{pmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{pmatrix}.$$
To show this, we take the statement for the upper left block of $AB$ and expand it:
$$(AB)_{ij} = \sum_{k=1}^{l_1} (a_{11})_{ik}(b_{11})_{kj} + \sum_{k=1}^{l_2} (a_{12})_{ik}(b_{21})_{kj}, \qquad (5)$$
which can be written in terms of the original matrices $A$ and $B$ as
$$\sum_{k=1}^{l_1} a_{ik} b_{kj} + \sum_{k=l_1+1}^{l_1+l_2} a_{ik} b_{kj}, \qquad (6)$$
and we can combine these sums by writing
$$(AB)_{ij} = \sum_{k=1}^{l_1+l_2} a_{ik} b_{kj}. \qquad (7)$$
In general we can write
$$(AB)_{\hat{i}\hat{j}} = \sum_{k=1}^{n} A_{\hat{i}k} B_{k\hat{j}}, \qquad (8)$$
where $n$ is the number of blocks in each block row of $A$, which equals the number of blocks in each block column of $B$.
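The blockwise product formula is easy to verify numerically; the following sketch (an added illustration, assuming numpy) partitions random matrices, assembles the product block by block, and compares it with the ordinary product:

```python
import numpy as np

rng = np.random.default_rng(3)
m1, m2, l1, l2, n1, n2 = 2, 3, 4, 2, 3, 2
A = rng.standard_normal((m1 + m2, l1 + l2))
B = rng.standard_normal((l1 + l2, n1 + n2))

# Partition A and B as in the exercise.
A11, A12 = A[:m1, :l1], A[:m1, l1:]
A21, A22 = A[m1:, :l1], A[m1:, l1:]
B11, B12 = B[:l1, :n1], B[:l1, n1:]
B21, B22 = B[l1:, :n1], B[l1:, n1:]

# Assemble the product blockwise and compare with the ordinary product.
AB_blocked = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                       [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
print(np.allclose(AB_blocked, A @ B))        # True
```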

More generally, if $A \in \mathbb{R}^{m \times l}$ with sub-matrices $A_{ij} \in \mathbb{R}^{m_i \times l_j}$, where $l_1 + l_2 + \dots + l_n = l$, the indices $\hat{i}$, $\hat{j}$ refer to the appropriate blocks.

Exercise 9:
a. False. By definition, the determinant is the signed sum of all possible products where we pick one element from each row and each column of the matrix. This means that if we take a matrix $A \in \mathbb{R}^{4 \times 4}$, we should have $4! = 24$ products to sum. However, by this method we only get 8:
$$\det(A) = \det(A_{11})\det(A_{22}) - \det(A_{12})\det(A_{21})$$
$$= a_{11}a_{22}a_{33}a_{44} - a_{11}a_{22}a_{34}a_{43} - a_{12}a_{21}a_{33}a_{44} + a_{12}a_{21}a_{34}a_{43}$$
$$\quad - a_{13}a_{24}a_{31}a_{42} + a_{13}a_{24}a_{32}a_{41} + a_{14}a_{23}a_{31}a_{42} - a_{14}a_{23}a_{32}a_{41}.$$
Notice that there are only two terms with the coefficient $a_{11}$; if we compute the determinant using the minor (cofactor) expansion, we should have six such terms.
b. False. Take a matrix $A$ partitioned into sub-matrices $B$, $C$ and $D$ with 0-1 entries chosen so that $\operatorname{rank}(B) + \operatorname{rank}(D) \ne \operatorname{rank}(A)$. This statement could be true if $\operatorname{rank}(B) = \operatorname{rank}(C)$.
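For part a, a two-line experiment (added here as an illustration) shows that the 2-by-2-style rule $\det(A_{11})\det(A_{22}) - \det(A_{12})\det(A_{21})$ generally disagrees with the true determinant of a 4-by-4 matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

blockwise = np.linalg.det(A11) * np.linalg.det(A22) - np.linalg.det(A12) * np.linalg.det(A21)
print(np.linalg.det(A), blockwise)           # generally different: the blockwise rule is false
```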

Exercise 10: To count the number of multiplications and divisions in the Cholesky decomposition, we use the equations derived in the book for finding the individual elements of the matrix $L$:
$$l_{ij} = \frac{a_{ij} - \sum_{k=1}^{j-1} l_{ik} l_{jk}}{l_{jj}}, \qquad l_{ii} = \Big[\, a_{ii} - \sum_{k=1}^{i-1} l_{ik}^2 \,\Big]^{1/2}.$$
The number of divisions can be seen from the first equation: there is a division for every element below the diagonal, which gives
$$\frac{n^2 - n}{2}.$$
To get the number of multiplications we notice that we need the double sum
$$\sum_{i=1}^{n-1} \sum_{j=i}^{n-1} (n - j).$$
Using a method similar to the one used in the notes, we approximate this expression with integrals to get the number of multiplications
$$\int_{1}^{n-1} \int_{i}^{n-1} (n - j)\, dj\, di = \frac{1}{6}n^3 - \frac{1}{2}n^2 + \frac{2}{3}.$$
For the case when $i = j$, we use the second equation to get the sum $\sum_{i}(n - i)$, which we approximate by
$$\int_{1}^{n} (n - i)\, di = \frac{n^2}{2} - n + \frac{1}{2}.$$
When we approximate using integrals, the leading term is the only one that is important. Therefore, we see that the Cholesky decomposition is of order $\frac{n^3}{6}$.
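To close, here is a small script (an added illustration, assuming numpy) that performs the Cholesky decomposition from the formulas above while counting multiplications and divisions, and compares the counts with the $n^3/6$ and $(n^2 - n)/2$ estimates:

```python
import numpy as np

def cholesky_with_counts(A):
    """Return L with A = L L^T, plus the number of multiplications and divisions used."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    mults = divs = 0
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i, k] * L[j, k] for k in range(j))
            mults += j                              # j multiplications in the inner sum
            if i == j:
                L[i, j] = np.sqrt(A[i, i] - s)
            else:
                L[i, j] = (A[i, j] - s) / L[j, j]
                divs += 1                           # one division per subdiagonal element
    return L, mults, divs

n = 60
M = np.random.default_rng(5).standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                         # symmetric positive definite test matrix
L, mults, divs = cholesky_with_counts(A)

print(np.allclose(L @ L.T, A))                      # True: the factorization is correct
print(mults, n**3 / 6)                              # multiplication count ~ n^3 / 6
print(divs, (n**2 - n) / 2)                         # division count = (n^2 - n) / 2
```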