Chap 3. Linear Algebra
Outline
1. Introduction
2. Basis, Representation, and Orthonormalization
3. Linear Algebraic Equations
4. Similarity Transformation
5. Diagonal Form and Jordan Form
6. Functions of a Square Matrix
7. Lyapunov Equation
8. Some Useful Formulas
9. Quadratic Form and Positive Definiteness
10. Singular-Value Decomposition
11. Norms of Matrices
1. Introduction
Linear algebra is essential in linear system theory. Only real numbers are considered.
Notation: A, B, C, and D are n-by-m, m-by-r, l-by-n, and r-by-p real matrices; a_i is the ith column of A; b_j is the jth row of B.
2. Basis, Representation, and Orthonormalization
n-dimensional real linear space, R^n
Vector
Linearly dependent and linearly independent
Dimension: the dimension of a linear space is the maximum number of linearly independent vectors in the space.
2. Basis, Representation, and Orthonormalization
Basis: a set of linearly independent vectors in R^n is called a basis if every vector in R^n can be expressed as a unique linear combination of the set. In R^n, any set of n linearly independent vectors can be used as a basis.
Representation: let {q_1, q_2, ..., q_n} be such a set.
2. Basis, Representation, and Orthonormalization
Representation
Norms of vectors: a norm is a generalization of length or magnitude. Any real-valued function of x, denoted by ||x||, can be defined as a norm if:
1. ||x|| >= 0 for every x, and ||x|| = 0 if and only if x = 0
2. ||ax|| = |a| ||x|| for any real scalar a
3. ||x_1 + x_2|| <= ||x_1|| + ||x_2|| (triangle inequality)
2. Basis, Representation, and Orthonormalization
Norms of vectors. Common norms: the 1-norm ||x||_1 = sum_i |x_i|, the 2-norm (Euclidean norm) ||x||_2 = (x'x)^{1/2}, and the infinity-norm ||x||_inf = max_i |x_i|.
Orthonormalization:
- a vector x is normalized if x'x = 1
- two vectors x_1 and x_2 are orthogonal if x_1'x_2 = x_2'x_1 = 0
- a set of vectors {x_i} is orthonormal if x_i'x_j = 0 for i != j and x_i'x_i = 1
2. Basis, Representation, and Orthonormalization
Orthonormalization: the Schmidt orthonormalization procedure transforms a set of linearly independent vectors e_1, e_2, ..., e_m into an orthonormal set q_1, q_2, ..., q_m.
If all columns of A are orthonormal, then A'A = I.
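The Schmidt procedure above can be sketched numerically; a minimal illustration, assuming NumPy is available (the matrix E is a made-up example):

```python
import numpy as np

def gram_schmidt(E):
    """Schmidt orthonormalization: turn the columns of E into
    orthonormal vectors q_1, ..., q_m (classical Gram-Schmidt)."""
    qs = []
    for e in E.T:
        u = e.astype(float)
        for q in qs:
            u = u - (q @ e) * q          # remove the component along q
        norm = np.linalg.norm(u)
        if norm > 1e-12:                 # skip (nearly) dependent columns
            qs.append(u / norm)
    return np.column_stack(qs)

E = np.array([[1.0, 1.0],
              [0.0, 1.0]])              # linearly independent columns
Q = gram_schmidt(E)
```

If the columns of E are independent, the resulting Q satisfies Q'Q = I, matching the slide's remark about matrices with orthonormal columns.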
3. Linear Algebraic Equations
Range space: all possible linear combinations of all columns of A
Rank: the dimension of the range space, i.e., the number of linearly independent columns; rho(A) denotes the rank of A
Null vector of A: any x satisfying Ax = 0
Null space of A: consists of all null vectors of A
Nullity: the maximum number of linearly independent null vectors
3. Linear Algebraic Equations
Theorem 3.1: given an m-by-n matrix A and an m-by-1 vector y, a solution x of Ax = y exists if and only if y lies in the range space of A, or equivalently rho(A) = rho([A y]).
Theorem 3.2 (parameterization of solutions): given A and y, let x_p be a solution of Ax = y and let k := n - rho(A) be the nullity of A. If k = 0, the solution x_p is unique; if k > 0, then for any real alpha_i, x = x_p + alpha_1 n_1 + ... + alpha_k n_k is a solution, where {n_1, ..., n_k} is a basis of the null space of A.
3. Linear Algebraic Equations
Corollary
Matlab commands: the solution of Ax = y is x = A\y; the solution of xA = y is x = y/A.
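As a small counterpart of the Matlab commands above, the same computations in NumPy (the 2-by-2 system is a made-up example; np.linalg.solve plays the role of A\y):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
y = np.array([3.0, 4.0])

x = np.linalg.solve(A, y)        # solution of A x = y (unique: A nonsingular)

# for the row equation x A = y, solve the transposed system (Matlab's y/A)
x_row = np.linalg.solve(A.T, y)
```

For singular or non-square A, np.linalg.lstsq returns one particular solution in the least-squares sense, consistent with Theorem 3.2's parameterization.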
3. Linear Algebraic Equations
Determinant and inverse of square matrices
- Determinant
- Inverse
- Theorem
4. Similarity Transformation
A and A_bar are n-by-n square matrices, and Q is an n-by-n nonsingular matrix; A_bar = Q^{-1} A Q defines a similarity transformation.
5. Diagonal Form and Jordan Form
A square matrix A has different representations with respect to different sets of basis vectors.
Eigenvalue: a real or complex number lambda is an eigenvalue of the n-by-n real matrix A if there exists a nonzero vector x such that Ax = lambda x; such a nonzero x is an eigenvector.
Characteristic polynomial of A: Delta(lambda) = det(lambda I - A). Since Delta(lambda) has degree n, A has n eigenvalues (not necessarily all distinct).
5. Diagonal Form and Jordan Form
Eigenvalues are all distinct: the eigenvectors {q_1, q_2, ..., q_n} are linearly independent and can be used as a basis.
5. Diagonal Form and Jordan Form
Eigenvalues are all distinct: define Q = [q_1, q_2, ..., q_n]. Since Aq_i = lambda_i q_i, the representation Q^{-1} A Q is diagonal with the eigenvalues on the diagonal.
Matlab command: eig(A)
Complex eigenvalues: a complex linear space and complex scalars are considered; transpose is replaced by complex-conjugate transpose.
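A numerical counterpart of the diagonalization above, assuming NumPy (the matrix A is a made-up example with distinct eigenvalues -1 and -2):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])           # eigenvalues -1 and -2, distinct

evals, Q = np.linalg.eig(A)            # columns of Q are eigenvectors q_i
A_hat = np.linalg.inv(Q) @ A @ Q       # Q^{-1} A Q: the diagonal form
```

Because the eigenvalues are distinct, the eigenvectors are independent, Q is nonsingular, and A_hat is diagonal with the eigenvalues on the diagonal.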
5. Diagonal Form and Jordan Form
Eigenvalues are not all distinct
- Multiplicity = 1: simple eigenvalue
- Multiplicity > 1: repeated eigenvalue
An illustrating example: let the 4-by-4 matrix A have only one eigenvalue lambda (multiplicity = 4). If (A - lambda I) has rank 3, i.e., nullity 1, then (A - lambda I)q = 0 has 1 independent solution, and 3 more linearly independent vectors are needed.
5. Diagonal Form and Jordan Form
Eigenvalues are not all distinct. An illustrating example: v is a generalized eigenvector of grade n if (A - lambda I)^n v = 0 and (A - lambda I)^{n-1} v != 0. With respect to the chain of generalized eigenvectors, A is represented by a Jordan block: lambda on the diagonal and 1 on the superdiagonal.
5. Diagonal Form and Jordan Form
Eigenvalues are not all distinct. An illustrating example: if (A - lambda I) has rank n-2, i.e., nullity 2, then (A - lambda I)q = 0 has 2 linearly independent solutions, and n-2 more generalized eigenvectors are needed.
Jordan form
Matlab command: jordan
Property: A is nonsingular if and only if it has no zero eigenvalue.
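Like Matlab's jordan, SymPy can compute a Jordan form symbolically; a minimal sketch, assuming SymPy is available (the 3-by-3 matrix is a made-up example with one eigenvalue of multiplicity 3 and a single independent eigenvector):

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 1],
               [0, 0, 2]])       # eigenvalue 2, multiplicity 3, nullity of (A - 2I) = 1

P, J = A.jordan_form()           # A = P * J * P^{-1}; J is a single 3x3 Jordan block
```

Numeric Jordan-form computation is ill-conditioned, which is why a symbolic routine is used here.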
6. Functions of a Square Matrix
Polynomials of a square matrix
Some properties: A^k A^l = A^{k+l}; any two polynomials of A commute, f(A)g(A) = g(A)f(A); if A_bar = Q^{-1} A Q, then A_bar^k = Q^{-1} A^k Q, and hence f(A_bar) = Q^{-1} f(A) Q.
6. Functions of a Square Matrix
Polynomials of a square matrix
Theorem 3.4 (Cayley-Hamilton theorem): let Delta(lambda) = det(lambda I - A) be the characteristic polynomial of A; then Delta(A) = 0.
Corollary: A^n, A^{n+1}, ... can be expressed as linear combinations of {I, A, ..., A^{n-1}}; consequently, every polynomial of A can be expressed as a linear combination of {I, A, ..., A^{n-1}}.
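The Cayley-Hamilton theorem can be checked numerically; a small sketch assuming NumPy (A is a made-up 2-by-2 example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# characteristic polynomial Delta(lambda) = lambda^2 - 5*lambda - 2
c = np.poly(A)                        # coefficients [1, -5, -2]

# Cayley-Hamilton: Delta(A) = A^2 - 5A - 2I = 0
Delta_A = c[0] * (A @ A) + c[1] * A + c[2] * np.eye(2)

# hence A^2 is a linear combination of {I, A}: A^2 = 5A + 2I
```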
6. Functions of a Square Matrix
Polynomials of a square matrix
Theorem 3.5: let h(lambda) be a polynomial of degree n-1 with n unknown coefficients. If f(lambda) and h(lambda) take the same values on the spectrum of A (equal values of f and h, and of their derivatives up to the multiplicity, at each eigenvalue), then f(A) = h(A).
6. Functions of a Square Matrix
Polynomials of a square matrix using power series: define the function of A as an infinite power series f(A) = sum_i beta_i A^i. If f(lambda) = sum_i beta_i lambda^i with radius of convergence rho, and if all eigenvalues of A have magnitudes less than rho, the series converges and defines f(A).
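For instance, e^A can be obtained from the truncated power series and compared against SciPy's expm; a sketch assuming NumPy/SciPy (the nilpotent A is a made-up example, chosen so the series terminates):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])       # A^2 = 0, so the series ends after two terms

# truncated power series sum_i A^i / i!
S = np.zeros((2, 2))
term = np.eye(2)
for k in range(1, 20):
    S = S + term
    term = term @ A / k

E = expm(A)                      # closed-form routine for comparison
```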
7. Lyapunov Equation ' At At M e Ne dt 0 ' d ' ' e AM MA e e Me e Ne dt ' A t At At At At At ' ' At At At At e Me e Ne dt 0 0 ' At At At At e Me 0 M M e Ne dt 0 0 '
7. Lyapunov Equation
General version: AM + MB = C, where A and B are n-by-n and m-by-m, and M and C are n-by-m.
An illustrating example
7. Lyapunov Equation
Define the map M -> AM + MB, a mapping from the nm-dimensional linear space to itself. If lambda_i and mu_j are eigenvalues of A and B, then eta = lambda_i + mu_j are the eigenvalues of this map, so AM + MB = C has a unique solution if and only if lambda_i + mu_j != 0 for all i, j.
Why: if Aq = lambda_i q and B'u = mu_j u, then M = qu' satisfies AM + MB = (lambda_i + mu_j) M.
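SciPy has direct solvers for both versions; a minimal sketch assuming SciPy (A, B, C, N are made-up examples with A and B stable, so lambda_i + mu_j != 0):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester

A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
N = np.eye(2)

# A'M + M A = -N  (pass A' as the first argument)
M = solve_continuous_lyapunov(A.T, -N)

# general version A M + M B = C
B = np.array([[-3.0]])
C = np.array([[1.0],
              [1.0]])
M2 = solve_sylvester(A, B, C)    # unique since lambda_i + mu_j != 0
```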
8. Some Useful Formulas
Rank: if C and D are n-by-n and m-by-m nonsingular matrices and A is m-by-n, then rho(AC) = rho(A) = rho(DA); multiplying by a nonsingular matrix does not change the rank.
Determinant: for an m-by-n matrix A and an n-by-m matrix B, det(I_m + AB) = det(I_n + BA).
9. Quadratic Form and Positive Definiteness
Symmetric: a matrix whose transpose equals itself, M = M'.
Quadratic form: x'Mx, where x is a vector and M is a symmetric matrix.
For a complex matrix, transpose is replaced by complex-conjugate transpose. Consequently, all eigenvalues of a symmetric matrix are real.
9. Quadratic Form and Positive Definiteness
Every symmetric matrix can be diagonalized using a similarity transformation, even if it has repeated eigenvalues: M = QDQ^{-1}, where Q is nonsingular and D is diagonal with the real eigenvalues of M.
Orthogonal matrix: Q can be chosen nonsingular and orthogonal, because from Q'Q = I we have Q^{-1} = Q'.
9. Quadratic Form and Positive Definiteness
Theorem 3.6: for every real symmetric matrix M, there exists an orthogonal matrix Q such that M = QDQ' = QDQ^{-1}, where D is a diagonal matrix with the real eigenvalues of M on the diagonal.
Positive definite: a symmetric M is positive definite (M > 0) if x'Mx > 0 for every nonzero x.
Positive semidefinite: a symmetric M is positive semidefinite (M >= 0) if x'Mx >= 0 for every nonzero x.
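These definitions can be checked numerically; a sketch assuming NumPy (M is a made-up symmetric example with eigenvalues 1 and 3):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # symmetric: M == M'

evals = np.linalg.eigvalsh(M)          # eigenvalues of a symmetric matrix are real
is_pos_def = bool(np.all(evals > 0))   # M > 0 iff x'Mx > 0 for all x != 0

# a Cholesky factorization M = L L' exists exactly when M > 0
L = np.linalg.cholesky(M)
```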
9. Quadratic Form and Positive Definiteness
Positive definite
Theorem 3.7: a symmetric n-by-n matrix M is positive definite if and only if any one of the following holds: (1) every eigenvalue of M is positive; (2) all the leading principal minors of M are positive; (3) there exists a nonsingular matrix N such that M = N'N.
9. Quadratic Form and Positive Definiteness
The symmetric matrix H'H is always positive semidefinite, and is positive definite if H'H is nonsingular.
Theorem 3.8: an m-by-n matrix H with m >= n has rank n if and only if the n-by-n matrix H'H is nonsingular.
Eigenvalues: H'H and HH' have n and m eigenvalues, respectively; they have the same nonzero eigenvalues but may have different numbers of zero eigenvalues.
10. Singular-Value Decomposition
Singular values: H is an m-by-n real matrix; the singular values of H are the nonnegative square roots of the eigenvalues of M = H'H.
Theorem 3.9 (singular-value decomposition): every m-by-n real matrix H can be written as H = RSQ', where R (m-by-m) and Q (n-by-n) are orthogonal and S is m-by-n with the singular values of H on its diagonal.
10. Singular-Value Decomposition Example (Matlab)
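The same decomposition in NumPy looks like this (H is a made-up 3-by-2 example with singular values 4 and 3):

```python
import numpy as np

H = np.array([[4.0, 0.0],
              [0.0, 3.0],
              [0.0, 0.0]])             # 3-by-2, rank 2

R, s, Qt = np.linalg.svd(H)            # H = R S Q' with R, Q orthogonal
S = np.zeros_like(H)
S[:2, :2] = np.diag(s)                 # embed singular values in a 3-by-2 S

# singular values are the square roots of the eigenvalues of H'H
```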
11. Norms of Matrices
Definition: induced norms are based on vector norms, ||A|| = sup over nonzero x of ||Ax|| / ||x||.
Commonly used induced norms:
- ||A||_1 = largest column absolute sum
- ||A||_2 = largest singular value of A
- ||A||_inf = largest row absolute sum
11. Norms of Matrices
Example
Properties: ||Ax|| <= ||A|| ||x||; ||A + B|| <= ||A|| + ||B||; ||AB|| <= ||A|| ||B||.
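The commonly used induced norms and the product property, checked in NumPy (A and B are made-up examples):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 0.0]])

n1 = np.linalg.norm(A, 1)              # largest column absolute sum: 4
n2 = np.linalg.norm(A, 2)              # largest singular value of A
ninf = np.linalg.norm(A, np.inf)       # largest row absolute sum: 5

# submultiplicative property ||AB|| <= ||A|| ||B||
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lhs = np.linalg.norm(A @ B, 2)
rhs = n2 * np.linalg.norm(B, 2)
```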