1 Foundations of Matrix Analysis

In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text. For most of the proofs, as well as for the details, the reader is referred to [Bra75], [Nob69], [Hal58]. Further results on eigenvalues can be found in [Hou75] and [Wil65].

1.1 Vector Spaces

Definition 1.1 A vector space over the numeric field K (K = R or K = C) is a nonempty set V, whose elements are called vectors and in which two operations are defined, called addition and scalar multiplication, that enjoy the following properties:

1. addition is commutative and associative;
2. there exists an element 0 ∈ V (the zero vector or null vector) such that v + 0 = v for each v ∈ V;
3. 0·v = 0 and 1·v = v for each v ∈ V, where 0 and 1 are respectively the zero and the unity of K;
4. for each element v ∈ V there exists its opposite, −v, in V such that v + (−v) = 0;
5. the following distributive properties hold: for all α ∈ K and all v, w ∈ V, α(v + w) = αv + αw; for all α, β ∈ K and all v ∈ V, (α + β)v = αv + βv;
6. the following associative property holds: for all α, β ∈ K and all v ∈ V, (αβ)v = α(βv).

Example 1.1 Remarkable instances of vector spaces are:
- V = R^n (respectively V = C^n): the set of the n-tuples of real (respectively complex) numbers, n ≥ 1;
- V = P_n: the set of polynomials p_n(x) = Σ_{k=0}^{n} a_k x^k with real (or complex) coefficients a_k, having degree less than or equal to n, n ≥ 0;
- V = C^p([a, b]): the set of real (or complex)-valued functions which are continuous on [a, b] up to their p-th derivative, 0 ≤ p < ∞.

Definition 1.2 We say that a nonempty part W of V is a vector subspace of V iff W is a vector space over K.

Example 1.2 The vector space P_n is a vector subspace of C^∞(R), the space of infinitely differentiable functions on the real line. A trivial subspace of any vector space is the one containing only the zero vector.

In particular, the set W of the linear combinations of a system of p vectors of V, {v_1, …, v_p}, is a vector subspace of V, called the generated subspace or span of the vector system, and is denoted by

W = span{v_1, …, v_p} = {v = α_1 v_1 + ⋯ + α_p v_p with α_i ∈ K, i = 1, …, p}.

The system {v_1, …, v_p} is called a system of generators for W.

If W_1, …, W_m are vector subspaces of V, then the set

S = {w : w = v_1 + ⋯ + v_m with v_i ∈ W_i, i = 1, …, m}

is also a vector subspace of V. We say that S is the direct sum of the subspaces W_i if any element s ∈ S admits a unique representation of the form s = v_1 + ⋯ + v_m with v_i ∈ W_i and i = 1, …, m. In such a case, we shall write S = W_1 ⊕ ⋯ ⊕ W_m.

Definition 1.3 A system of vectors {v_1, …, v_m} of a vector space V is called linearly independent if the relation

α_1 v_1 + α_2 v_2 + ⋯ + α_m v_m = 0, with α_1, α_2, …, α_m ∈ K,

implies that α_1 = α_2 = ⋯ = α_m = 0. Otherwise, the system will be called linearly dependent.

We call a basis of V any system of linearly independent generators of V. If {u_1, …, u_n} is a basis of V, the expression v = v_1 u_1 + ⋯ + v_n u_n is called the decomposition of v with respect to the basis, and the scalars v_1, …, v_n ∈ K are the components of v with respect to the given basis.
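Linear independence can be tested in practice by collecting the vectors as columns of a matrix and comparing the rank of that matrix with the number of vectors. A minimal MATLAB sketch, using the built-in rank (the three vectors below are arbitrary illustrations):

% Columns of V are candidate vectors v1, v2, v3 in R^3
V = [1 0 1;
     0 1 1;
     0 0 0];
r = rank(V);   % here r = 2 < 3: the system {v1, v2, v3} is linearly dependent
% Indeed v3 = v1 + v2, so span{v1, v2, v3} = span{v1, v2} has dimension 2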

Moreover, the following property holds.

Property 1.1 Let V be a vector space which admits a basis of n vectors. Then every system of linearly independent vectors of V has at most n elements and any other basis of V has n elements. The number n is called the dimension of V and we write dim(V) = n. If, instead, for any n there always exist n linearly independent vectors of V, the vector space is called infinite dimensional.

Example 1.3 For any integer p the space C^p([a, b]) is infinite dimensional. The spaces R^n and C^n have dimension equal to n. The usual basis for R^n is the set of unit vectors {e_1, …, e_n}, where (e_i)_j = δ_ij for i, j = 1, …, n, and where δ_ij denotes the Kronecker symbol, equal to 1 if i = j and 0 if i ≠ j. This choice is of course not the only one that is possible (see Exercise 2).

1.2 Matrices

Let m and n be two positive integers. We call a matrix having m rows and n columns, or a matrix m × n, or a matrix (m, n), with elements in K, a set of mn scalars a_ij ∈ K, with i = 1, …, m and j = 1, …, n, represented in the following rectangular array

\[
A = \begin{bmatrix}
a_{11} & a_{12} & \dots & a_{1n} \\
a_{21} & a_{22} & \dots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \dots & a_{mn}
\end{bmatrix}. \tag{1.1}
\]

When K = R or K = C we shall respectively write A ∈ R^{m×n} or A ∈ C^{m×n}, to explicitly outline the numerical field to which the elements of A belong. Capital letters will be used to denote matrices, while the corresponding lower case letters will denote their entries.

We shall abbreviate (1.1) as A = (a_ij) with i = 1, …, m and j = 1, …, n. The index i is called the row index, while j is the column index. The set (a_i1, a_i2, …, a_in) is called the i-th row of A; likewise, (a_1j, a_2j, …, a_mj) is the j-th column of A.

If n = m the matrix is called square, or of order n, and the set of the entries (a_11, a_22, …, a_nn) is called its main diagonal.

A matrix having one row or one column is called a row vector or column vector, respectively. Unless otherwise specified, we shall always assume that a vector is a column vector. In the case n = m = 1, the matrix will simply denote a scalar of K.
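The row, column and diagonal terminology maps directly onto MATLAB indexing; a small sketch (the entries below are arbitrary):

A = [1 2 3;
     4 5 6;
     7 8 9];     % a square matrix of order 3
row2 = A(2,:);   % the 2nd row (a_21, a_22, a_23)
col3 = A(:,3);   % the 3rd column (a_13, a_23, a_33)
d    = diag(A);  % the main diagonal (a_11, a_22, a_33)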

Sometimes it turns out to be useful to distinguish within a matrix the set made up of specified rows and columns. This prompts us to introduce the following definition.

Definition 1.4 Let A be a matrix m × n. Let 1 ≤ i_1 < i_2 < ⋯ < i_k ≤ m and 1 ≤ j_1 < j_2 < ⋯ < j_l ≤ n be two sets of contiguous indexes. The matrix S (k × l) of entries s_pq = a_{i_p j_q}, with p = 1, …, k and q = 1, …, l, is called a submatrix of A. If k = l and i_r = j_r for r = 1, …, k, S is called a principal submatrix of A.

Definition 1.5 A matrix A (m × n) is called block partitioned, or said to be partitioned into submatrices, if

\[
A = \begin{bmatrix}
A_{11} & A_{12} & \dots & A_{1l} \\
A_{21} & A_{22} & \dots & A_{2l} \\
\vdots & \vdots & & \vdots \\
A_{k1} & A_{k2} & \dots & A_{kl}
\end{bmatrix},
\]

where the A_ij are submatrices of A.

Among the possible partitions of A, we recall in particular the partition by columns, A = (a_1, a_2, …, a_n), a_i being the i-th column vector of A. In a similar way the partition by rows of A can be defined. To fix the notation, if A is a matrix m × n, we shall denote by

A(i_1 : i_2, j_1 : j_2) = (a_ij), i_1 ≤ i ≤ i_2, j_1 ≤ j ≤ j_2,

the submatrix of A of size (i_2 − i_1 + 1) × (j_2 − j_1 + 1) that lies between the rows i_1 and i_2 and the columns j_1 and j_2. Likewise, if v is a vector of size n, we shall denote by v(i_1 : i_2) the vector of size i_2 − i_1 + 1 made up of the i_1-th to the i_2-th components of v. These notations are convenient in view of programming the algorithms that will be presented throughout the volume in the MATLAB language.

1.3 Operations with Matrices

Let A = (a_ij) and B = (b_ij) be two matrices m × n over K. We say that A is equal to B if a_ij = b_ij for i = 1, …, m, j = 1, …, n. Moreover, we define the following operations:

- matrix sum: the matrix sum is the matrix A + B = (a_ij + b_ij). The neutral element in the matrix sum is the null matrix, still denoted by 0 and made up only of null entries;
- matrix multiplication by a scalar: the multiplication of A by λ ∈ K is the matrix λA = (λ a_ij);
- matrix product: the product of two matrices A and B of sizes (m, p) and (p, n) respectively, is the matrix C (m, n) whose entries are

\[
c_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj}, \quad i = 1, \dots, m, \quad j = 1, \dots, n.
\]
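Both the colon notation A(i_1 : i_2, j_1 : j_2) and the matrix product carry over verbatim to MATLAB; a brief sketch (magic(4) is just a convenient built-in test matrix):

A = magic(4);      % a 4-by-4 test matrix
S = A(2:3, 1:2);   % the 2-by-2 submatrix between rows 2-3 and columns 1-2
B = ones(4, 2);    % sizes (4,4) and (4,2): the product C is (4,2)
C = A * B;         % entries c_ij = sum over k of a_ik * b_kj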

The matrix product is associative and distributive with respect to the matrix sum, but it is not in general commutative. Square matrices for which the property AB = BA holds will be called commutative.

In the case of square matrices, the neutral element in the matrix product is a square matrix of order n called the unit matrix of order n or, more frequently, the identity matrix, given by I_n = (δ_ij). The identity matrix is, by definition, the only matrix n × n such that A I_n = I_n A = A for all square matrices A. In the following we shall omit the subscript n unless it is strictly necessary. The identity matrix is a special instance of a diagonal matrix of order n, that is, a square matrix of the type D = (d_ii δ_ij). We will use in the following the notation D = diag(d_11, d_22, …, d_nn).

Finally, if A is a square matrix of order n and p is an integer, we define A^p as the product of A with itself iterated p times. We let A^0 = I.

Let us now address the so-called elementary row operations that can be performed on a matrix. They consist of:

- multiplying the i-th row of a matrix by a scalar α; this operation is equivalent to pre-multiplying A by the matrix D = diag(1, …, 1, α, 1, …, 1), where α occupies the i-th position;
- exchanging the i-th and j-th rows of a matrix; this can be done by pre-multiplying A by the matrix P^{(i,j)} of elements

\[
p^{(i,j)}_{rs} =
\begin{cases}
1 & \text{if } r = s = 1, \dots, i-1, i+1, \dots, j-1, j+1, \dots, n, \\
1 & \text{if } r = j, s = i \text{ or } r = i, s = j, \\
0 & \text{otherwise.}
\end{cases} \tag{1.2}
\]

Matrices like (1.2) are called elementary permutation matrices. The product of elementary permutation matrices is called a permutation matrix, and it performs the row exchanges associated with each elementary permutation matrix. In practice, a permutation matrix is a reordering by rows of the identity matrix;
- adding α times the j-th row of a matrix to its i-th row; this operation can also be performed by pre-multiplying A by the matrix I + N^{(i,j)}_α, where N^{(i,j)}_α is a matrix having null entries except the one in position i, j, whose value is α.
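Each of the three elementary row operations can be checked as a pre-multiplication; a MATLAB sketch for n = 3 (the test matrix and the scalars are arbitrary choices):

A  = [1 2 3; 4 5 6; 7 8 9];
D  = diag([1 2 1]);                      % alpha = 2 in the 2nd position
A1 = D * A;                              % multiplies the 2nd row by 2
P  = eye(3); P([1 3], :) = P([3 1], :);  % elementary permutation matrix P^(1,3)
A2 = P * A;                              % exchanges the 1st and 3rd rows
N  = zeros(3); N(1,2) = 5;               % N_alpha^(1,2) with alpha = 5
A3 = (eye(3) + N) * A;                   % adds 5 times the 2nd row to the 1st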

1.3.1 Inverse of a Matrix

Definition 1.6 A square matrix A of order n is called invertible (or regular or nonsingular) if there exists a square matrix B of order n such that A B = B A = I. B is called the inverse matrix of A and is denoted by A^{-1}. A matrix which is not invertible is called singular.

If A is invertible its inverse is also invertible, with (A^{-1})^{-1} = A. Moreover, if A and B are two invertible matrices of order n, their product AB is also invertible, with (AB)^{-1} = B^{-1} A^{-1}. The following property holds.

Property 1.2 A square matrix is invertible iff its column vectors are linearly independent.

Definition 1.7 We call the transpose of a matrix A ∈ R^{m×n} the matrix n × m, denoted by A^T, that is obtained by exchanging the rows of A with the columns of A.

Clearly, (A^T)^T = A, (A + B)^T = A^T + B^T, (AB)^T = B^T A^T and (αA)^T = α A^T for all α ∈ R. If A is invertible, then also (A^T)^{-1} = (A^{-1})^T = A^{-T}.

Definition 1.8 Let A ∈ C^{m×n}; the matrix B = A^H ∈ C^{n×m} is called the conjugate transpose (or adjoint) of A if b_ij = ā_ji, where ā_ji is the complex conjugate of a_ji.

In analogy with the case of real matrices, it turns out that (A + B)^H = A^H + B^H, (AB)^H = B^H A^H and (αA)^H = ᾱ A^H for all α ∈ C.

Definition 1.9 A matrix A ∈ R^{n×n} is called symmetric if A = A^T, while it is antisymmetric if A = −A^T. Finally, it is called orthogonal if A^T A = A A^T = I, that is, A^{-1} = A^T.

Permutation matrices are orthogonal, and the same is true for their products.

Definition 1.10 A matrix A ∈ C^{n×n} is called hermitian or self-adjoint if A^T = Ā, that is, if A^H = A, while it is called unitary if A^H A = A A^H = I. Finally, if A A^H = A^H A, A is called normal.

As a consequence, a unitary matrix is one such that A^{-1} = A^H. Of course, a unitary matrix is also normal, but it is not in general hermitian. For instance, the matrix of Example 1.4 below is unitary, although not symmetric (if s ≠ 0). We finally notice that the diagonal entries of a hermitian matrix must necessarily be real (see also Exercise 5).
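The identities (AB)^{-1} = B^{-1} A^{-1} and (A^T)^{-1} = (A^{-1})^T are easy to verify numerically; a quick MATLAB sketch (the two matrices are arbitrary nonsingular examples, and the residuals vanish only up to rounding errors):

A = [4 1; 2 3];  B = [1 2; 0 1];      % two nonsingular matrices of order 2
e1 = norm(inv(A*B) - inv(B)*inv(A));  % checks (AB)^(-1) = B^(-1) A^(-1)
e2 = norm(inv(A') - inv(A)');         % checks (A^T)^(-1) = (A^(-1))^T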

1.3.2 Matrices and Linear Mappings

Definition 1.11 A linear map from C^n into C^m is a function f : C^n → C^m such that f(αx + βy) = αf(x) + βf(y) for all α, β ∈ K and for all x, y ∈ C^n.

The following result links matrices and linear maps.

Property 1.3 Let f : C^n → C^m be a linear map. Then, there exists a unique matrix A_f ∈ C^{m×n} such that

f(x) = A_f x for all x ∈ C^n. (1.3)

Conversely, if A_f ∈ C^{m×n}, then the function defined in (1.3) is a linear map from C^n into C^m.

Example 1.4 An important example of a linear map is the counterclockwise rotation by an angle ϑ in the plane (x_1, x_2). The matrix associated with such a map is given by

\[
G(\vartheta) = \begin{bmatrix} c & -s \\ s & c \end{bmatrix}, \quad c = \cos(\vartheta), \; s = \sin(\vartheta),
\]

and it is called a rotation matrix.

1.3.3 Operations with Block-Partitioned Matrices

All the operations that have been previously introduced can be extended to the case of a block-partitioned matrix A, provided that the size of each single block is such that any single matrix operation is well-defined. Indeed, the following result can be shown (see, e.g., [Ste73]).

Property 1.4 Let A and B be the block matrices

\[
A = \begin{bmatrix} A_{11} & \dots & A_{1l} \\ \vdots & & \vdots \\ A_{k1} & \dots & A_{kl} \end{bmatrix}, \qquad
B = \begin{bmatrix} B_{11} & \dots & B_{1n} \\ \vdots & & \vdots \\ B_{m1} & \dots & B_{mn} \end{bmatrix},
\]

where A_ij and B_ij are matrices of sizes (k_i × l_j) and (m_i × n_j). Then we have:

1. for λ ∈ C,
\[
\lambda A = \begin{bmatrix} \lambda A_{11} & \dots & \lambda A_{1l} \\ \vdots & & \vdots \\ \lambda A_{k1} & \dots & \lambda A_{kl} \end{bmatrix}, \qquad
A^T = \begin{bmatrix} A_{11}^T & \dots & A_{k1}^T \\ \vdots & & \vdots \\ A_{1l}^T & \dots & A_{kl}^T \end{bmatrix};
\]

2. if k = m, l = n, m_i = k_i and n_j = l_j, then
\[
A + B = \begin{bmatrix} A_{11} + B_{11} & \dots & A_{1l} + B_{1l} \\ \vdots & & \vdots \\ A_{k1} + B_{k1} & \dots & A_{kl} + B_{kl} \end{bmatrix};
\]

3. if l = m, l_i = m_i and k_i = n_i, then, letting C_{ij} = \sum_{s=1}^{m} A_{is} B_{sj},
\[
AB = \begin{bmatrix} C_{11} & \dots & C_{1l} \\ \vdots & & \vdots \\ C_{k1} & \dots & C_{kl} \end{bmatrix}.
\]
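Item 1 of Property 1.4 can be illustrated directly in MATLAB, which assembles block matrices with the same bracket syntax used for scalar entries (the blocks below are arbitrary choices):

A11 = eye(2);     A12 = zeros(2,3);
A21 = ones(3,2);  A22 = magic(3);
A = [A11 A12; A21 A22];      % a 5-by-5 block-partitioned matrix
B = [A11' A21'; A12' A22'];  % the transpose assembled block by block
err = norm(A' - B);          % zero: A^T has the blocks (A_ji)^T, as in item 1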

1.4 Trace and Determinant of a Matrix

Let us consider a square matrix A of order n. The trace of the matrix is the sum of the diagonal entries of A, that is, tr(A) = Σ_{i=1}^{n} a_ii.

We call the determinant of A the scalar defined through the following formula

\[
\det(A) = \sum_{\pi \in P} \operatorname{sign}(\pi)\, a_{1\pi_1} a_{2\pi_2} \cdots a_{n\pi_n},
\]

where P = {π = (π_1, …, π_n)^T} is the set of the n! vectors that are obtained by permuting the index vector i = (1, …, n)^T, and sign(π) is equal to 1 (respectively, −1) if an even (respectively, odd) number of exchanges is needed to obtain π from i.

The following properties hold:

\[
\det(A) = \det(A^T), \quad \det(AB) = \det(A)\det(B), \quad \det(A^{-1}) = 1/\det(A),
\]
\[
\det(A^H) = \overline{\det(A)}, \quad \det(\alpha A) = \alpha^n \det(A), \quad \forall \alpha \in K.
\]

Moreover, if two rows or columns of a matrix coincide, the determinant vanishes, while exchanging two rows (or two columns) produces a change of sign in the determinant. Of course, the determinant of a diagonal matrix is the product of the diagonal entries.

Denoting by A_ij the matrix of order n − 1 obtained from A by eliminating the i-th row and the j-th column, we call the complementary minor associated with the entry a_ij the determinant of the matrix A_ij. We call the k-th principal (dominating) minor of A, d_k, the determinant of the principal submatrix of order k, A_k = A(1 : k, 1 : k). If we denote by Δ_ij = (−1)^{i+j} det(A_ij) the cofactor of the entry a_ij, the actual computation of the determinant of A can be performed using the following recursive relation

\[
\det(A) =
\begin{cases}
a_{11} & \text{if } n = 1, \\
\displaystyle \sum_{j=1}^{n} \Delta_{ij} a_{ij} & \text{if } n > 1,
\end{cases} \tag{1.4}
\]

which is known as the Laplace rule. If A is a square invertible matrix of order n, then

\[
A^{-1} = \frac{1}{\det(A)} C,
\]

where C is the matrix having entries Δ_ji, i, j = 1, …, n.
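The Laplace rule (1.4) translates directly into a recursive routine. The sketch below is a didactic MATLAB function file of our own (the name laplace_det is an arbitrary choice), expanding along the first row; its factorial cost makes it unsuitable for all but tiny matrices, where the built-in det should be used instead:

function d = laplace_det(A)
% LAPLACE_DET  Determinant via the Laplace rule (1.4), first-row expansion.
n = size(A, 1);
if n == 1
    d = A(1,1);
else
    d = 0;
    for j = 1:n
        A1j = A(2:n, [1:j-1, j+1:n]);  % eliminate the 1st row and j-th column
        d = d + (-1)^(1+j) * A(1,j) * laplace_det(A1j);  % cofactor times entry
    end
end
end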

As a consequence, a square matrix is invertible iff its determinant is non-vanishing. In the case of nonsingular diagonal matrices the inverse is still a diagonal matrix having entries given by the reciprocals of the diagonal entries of the matrix. Every orthogonal matrix is invertible, its inverse being given by A^T; moreover, det(A) = ±1.

1.5 Rank and Kernel of a Matrix

Let A be a rectangular matrix m × n. We call the determinant of order q (with q ≥ 1) extracted from matrix A, the determinant of any square matrix of order q obtained from A by eliminating m − q rows and n − q columns.

Definition 1.12 The rank of A (denoted by rank(A)) is the maximum order of the nonvanishing determinants extracted from A. A matrix has complete or full rank if rank(A) = min(m, n).

Notice that the rank of A represents the maximum number of linearly independent column vectors of A, that is, the dimension of the range of A, defined as

range(A) = {y ∈ R^m : y = Ax for x ∈ R^n}. (1.5)

Rigorously speaking, one should distinguish between the column rank of A and the row rank of A, the latter being the maximum number of linearly independent row vectors of A. Nevertheless, it can be shown that the row rank and the column rank do actually coincide.

The kernel of A is defined as the subspace

ker(A) = {x ∈ R^n : Ax = 0}.

The following relations hold:

1. rank(A) = rank(A^T) (if A ∈ C^{m×n}, rank(A) = rank(A^H));
2. rank(A) + dim(ker(A)) = n.

In general, dim(ker(A)) ≠ dim(ker(A^T)). If A is a nonsingular square matrix, then rank(A) = n and dim(ker(A)) = 0.

Example 1.5 Let

\[
A = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix}.
\]

Then rank(A) = 2, dim(ker(A)) = 1 and dim(ker(A^T)) = 0.
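Relations 1 and 2 above can be checked on the matrix of Example 1.5 using MATLAB's built-in rank and null:

A  = [1 1 0;
      1 1 1];
r  = rank(A);               % r = 2 = min(m,n): A has full rank
k  = size(A,2) - r;         % dim(ker(A)) = n - rank(A) = 1
Z  = null(A);               % one column: an orthonormal basis of ker(A)
kT = size(A,1) - rank(A');  % dim(ker(A^T)) = 0: only the zero vector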

We finally notice that, for a matrix A ∈ C^{n×n}, the following properties are equivalent:

1. A is nonsingular;
2. det(A) ≠ 0;
3. ker(A) = {0};
4. rank(A) = n;
5. A has linearly independent rows and columns.

1.6 Special Matrices

1.6.1 Block Diagonal Matrices

These are matrices of the form D = diag(D_1, …, D_n), where the D_i, i = 1, …, n, are square matrices. Clearly, each single diagonal block can be of different size. We shall say that a block diagonal matrix has size n if n is the number of its diagonal blocks. The determinant of a block diagonal matrix is given by the product of the determinants of the single diagonal blocks.

1.6.2 Trapezoidal and Triangular Matrices

A matrix A (m × n) is called upper trapezoidal if a_ij = 0 for i > j, while it is lower trapezoidal if a_ij = 0 for i < j. The name is due to the fact that, in the case of upper trapezoidal matrices with m < n, the nonzero entries of the matrix form a trapezoid. A triangular matrix is a square trapezoidal matrix of order n of the form

\[
L = \begin{bmatrix}
l_{11} & 0 & \dots & 0 \\
l_{21} & l_{22} & \dots & 0 \\
\vdots & \vdots & & \vdots \\
l_{n1} & l_{n2} & \dots & l_{nn}
\end{bmatrix}
\quad \text{or} \quad
U = \begin{bmatrix}
u_{11} & u_{12} & \dots & u_{1n} \\
0 & u_{22} & \dots & u_{2n} \\
\vdots & \vdots & & \vdots \\
0 & 0 & \dots & u_{nn}
\end{bmatrix}.
\]

The matrix L is called lower triangular while U is upper triangular.

Let us recall some algebraic properties of triangular matrices that are easy to check (the first and third are verified numerically in the sketch after this list):

- the determinant of a triangular matrix is the product of the diagonal entries;
- the inverse of a lower (respectively, upper) triangular matrix is still lower (respectively, upper) triangular;
- the product of two lower triangular (respectively, upper triangular) matrices is still lower triangular (respectively, upper triangular);
- if we call unit triangular a triangular matrix having diagonal entries equal to 1, then the product of lower (respectively, upper) unit triangular matrices is still lower (respectively, upper) unit triangular.
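A MATLAB sketch checking two of the properties above (the test matrices are arbitrary; the built-in tril extracts the lower triangular part of its argument):

L1 = tril(magic(3));  L2 = tril(ones(3));  % two lower triangular matrices
e  = det(L1) - prod(diag(L1));  % zero (up to rounding): det = diagonal product
P  = L1 * L2;                   % the product of lower triangular matrices
ok = isequal(P, tril(P));       % true: P is again lower triangular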

1.6.3 Banded Matrices

The matrices introduced in the previous section are a special instance of banded matrices. Indeed, we say that a matrix A ∈ R^{m×n} (or in C^{m×n}) has lower band p if a_ij = 0 when i > j + p, and upper band q if a_ij = 0 when j > i + q. Diagonal matrices are banded matrices for which p = q = 0, while trapezoidal matrices have p = m − 1, q = 0 (lower trapezoidal) or p = 0, q = n − 1 (upper trapezoidal).

Other banded matrices of relevant interest are the tridiagonal matrices, for which p = q = 1, and the upper bidiagonal (p = 0, q = 1) or lower bidiagonal (p = 1, q = 0) matrices. In the following, tridiag_n(b, d, c) will denote the tridiagonal matrix of size n having respectively on the lower and upper principal diagonals the vectors b = (b_1, …, b_{n−1})^T and c = (c_1, …, c_{n−1})^T, and on the principal diagonal the vector d = (d_1, …, d_n)^T. If b_i = β, d_i = δ and c_i = γ, with β, δ and γ given constants, the matrix will be denoted by tridiag_n(β, δ, γ).

We also mention the so-called lower Hessenberg matrices (p = m − 1, q = 1) and upper Hessenberg matrices (p = 1, q = n − 1), which have the following structure

\[
H = \begin{bmatrix}
h_{11} & h_{12} & & 0 \\
h_{21} & h_{22} & \ddots & \\
\vdots & & \ddots & h_{m-1,n} \\
h_{m1} & \dots & \dots & h_{mn}
\end{bmatrix}
\quad \text{or} \quad
H = \begin{bmatrix}
h_{11} & h_{12} & \dots & h_{1n} \\
h_{21} & h_{22} & \dots & h_{2n} \\
 & \ddots & \ddots & \vdots \\
0 & & h_{m,n-1} & h_{mn}
\end{bmatrix}.
\]

Matrices of similar shape can obviously be set up in the block-like format.
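The matrix tridiag_n(β, δ, γ) can be assembled in one MATLAB statement with the built-in diag; a sketch for n = 5 with β = γ = −1 and δ = 2 (arbitrary sample values):

n = 5;  beta = -1;  delta = 2;  gamma = -1;
T = diag(delta*ones(n,1)) ...
  + diag(beta *ones(n-1,1), -1) ...  % lower principal diagonal
  + diag(gamma*ones(n-1,1), +1);     % upper principal diagonal
% General vectors work the same way: T = diag(d) + diag(b,-1) + diag(c,+1)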

1.7 Eigenvalues and Eigenvectors

Let A be a square matrix of order n with real or complex entries; the number λ ∈ C is called an eigenvalue of A if there exists a nonnull vector x ∈ C^n such that Ax = λx. The vector x is the eigenvector associated with the eigenvalue λ, and the set of the eigenvalues of A is called the spectrum of A, denoted by σ(A). We say that x and y are respectively a right eigenvector and a left eigenvector of A, associated with the eigenvalue λ, if Ax = λx and y^H A = λ y^H.

The eigenvalue λ corresponding to the eigenvector x can be determined by computing the Rayleigh quotient λ = x^H A x / (x^H x). The number λ is the solution of the characteristic equation

p_A(λ) = det(A − λI) = 0,

where p_A(λ) is the characteristic polynomial. Since the latter is a polynomial of degree n with respect to λ, there certainly exist n eigenvalues of A, not necessarily distinct. The following properties can be proved:

\[
\det(A) = \prod_{i=1}^{n} \lambda_i, \qquad \operatorname{tr}(A) = \sum_{i=1}^{n} \lambda_i, \tag{1.6}
\]

and since det(A^T − λI) = det((A − λI)^T) = det(A − λI), one concludes that σ(A) = σ(A^T) and, in an analogous way, that σ(A^H) = σ(Ā).

From the first relation in (1.6) it can be concluded that a matrix is singular iff it has at least one null eigenvalue, since p_A(0) = det(A) = Π_{i=1}^{n} λ_i. Secondly, if A has real entries, p_A(λ) turns out to be a real-coefficient polynomial, so that complex eigenvalues of A necessarily occur in complex conjugate pairs. Finally, due to the Cayley-Hamilton Theorem, if p_A(λ) is the characteristic polynomial of A, then p_A(A) = 0, where p_A(A) denotes a matrix polynomial (for the proof see, e.g., [Axe94], p. 51).

The maximum modulus of the eigenvalues of A is called the spectral radius of A and is denoted by

\[
\rho(A) = \max_{\lambda \in \sigma(A)} |\lambda|. \tag{1.7}
\]

Characterizing the eigenvalues of a matrix as the roots of a polynomial implies in particular that λ is an eigenvalue of A ∈ C^{n×n} iff λ̄ is an eigenvalue of A^H. An immediate consequence is that ρ(A) = ρ(A^H). Moreover, for any A ∈ C^{n×n} and α ∈ C, ρ(αA) = |α| ρ(A), and ρ(A^k) = [ρ(A)]^k for all k ∈ N.

Finally, assume that A is a block triangular matrix

\[
A = \begin{bmatrix}
A_{11} & A_{12} & \dots & A_{1k} \\
0 & A_{22} & \dots & A_{2k} \\
\vdots & & \ddots & \vdots \\
0 & \dots & 0 & A_{kk}
\end{bmatrix}.
\]

As p_A(λ) = p_{A_11}(λ) p_{A_22}(λ) ⋯ p_{A_kk}(λ), the spectrum of A is given by the union of the spectra of each single diagonal block. As a consequence, if A is triangular, the eigenvalues of A are its diagonal entries.

For each eigenvalue λ of a matrix A, the set of the eigenvectors associated with λ, together with the null vector, identifies a subspace of C^n which is called the eigenspace associated with λ and corresponds by definition to ker(A − λI). The dimension of the eigenspace is

dim[ker(A − λI)] = n − rank(A − λI),

and is called the geometric multiplicity of the eigenvalue λ. It can never be greater than the algebraic multiplicity of λ, which is the multiplicity of λ as a root of the characteristic polynomial. Eigenvalues having geometric multiplicity strictly less than the algebraic one are called defective. A matrix having at least one defective eigenvalue is called defective.

The eigenspace associated with an eigenvalue of a matrix A is invariant with respect to A in the sense of the following definition.
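The spectral quantities introduced in this section can all be computed directly; a closing MATLAB sketch (the matrix is an arbitrary symmetric example, and x' realizes x^H):

A = [2 1; 1 3];
[X, D] = eig(A);                    % columns of X: eigenvectors; diag(D): eigenvalues
x = X(:,1);
lambda = (x' * A * x) / (x' * x);   % Rayleigh quotient: recovers D(1,1)
rho = max(abs(eig(A)));             % spectral radius rho(A), as in (1.7)
% Consistency with (1.6): det(A) = prod(eig(A)) and trace(A) = sum(eig(A)),
% both up to rounding errors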