Linear System Theory


Wonhee Kim
Lecture 4, Apr. 4, 2018

Recap
- Vector space (linear space, linear vector space)
- Subspace
- Linear independence and dependence
- Dimension, basis, change of basis

Linear Maps (Linear Operators)

Recall:
- Function f : X → Y: for each x ∈ X, f assigns a unique f(x) = y ∈ Y
- X: domain; Y: codomain; {f(x) : x ∈ X}: range
- "Function", "operator", "map", and "transformation" are used interchangeably
- A function may be injective, surjective, or bijective

Linear Maps (Linear Operators)

Linear mapping: Let (V, F) and (W, F) be (finite-dimensional) linear vector spaces over the SAME FIELD. Let A be a map from V to W, i.e., A : V → W with A(v) = w, v ∈ V and w ∈ W. Then A is said to be a linear mapping (equivalently, a linear transformation, linear operator, or linear map) if and only if

A(α_1 v_1 + α_2 v_2) = α_1 A(v_1) + α_2 A(v_2) for all v_1, v_2 ∈ V and all α_1, α_2 ∈ F

We will see that if A is linear, then A can be represented by a matrix. In other words, we will see that any linear mapping between finite-dimensional linear spaces can be represented as matrix multiplication.

Linear Maps (Linear Operators)

Example: Consider the following mapping on the set of polynomials of degree at most 2:

A : a s^2 + b s + c ↦ c s^2 + b s + a, where a, b, c ∈ R

Is this a linear map? Let v_1 = a_1 s^2 + b_1 s + c_1 and v_2 = a_2 s^2 + b_2 s + c_2. Then for any α_1, α_2 ∈ R,

A(α_1 v_1 + α_2 v_2) = A(α_1(a_1 s^2 + b_1 s + c_1) + α_2(a_2 s^2 + b_2 s + c_2))
                     = (α_1 c_1 + α_2 c_2) s^2 + (α_1 b_1 + α_2 b_2) s + (α_1 a_1 + α_2 a_2)
                     = α_1 A(v_1) + α_2 A(v_2)

So the map is linear.
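A quick numerical sanity check of this linearity (a sketch; representing each polynomial by its coefficient triple (a, b, c) is an illustration convention, not something fixed by the slides):

```python
import numpy as np

def A(v):
    # The map a*s^2 + b*s + c -> c*s^2 + b*s + a, acting on coefficient triples.
    a, b, c = v
    return np.array([c, b, a])

rng = np.random.default_rng(0)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)
a1, a2 = 2.0, -3.0
# Linearity: A(a1*v1 + a2*v2) == a1*A(v1) + a2*A(v2)
assert np.allclose(A(a1 * v1 + a2 * v2), a1 * A(v1) + a2 * A(v2))
```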

Linear Maps (Linear Operators)

Example: The mapping A(v) = Av, where

w = Av = [ 1  2  3
           0  1  7
           7  0  16 ] v

Multiplication by a fixed matrix is always a linear map.

Linear Maps (Linear Operators)

Theorem: Let (V, F) have a basis {v_1, ..., v_n}, and let (W, F) have a basis {w_1, ..., w_m}. Then, w.r.t. these bases, the linear map A is represented by an m × n matrix A: y = A(x) = Ax.

Proof: For any x ∈ V, there exist unique α_1, ..., α_n ∈ F such that

x = Σ_{j=1}^n α_j v_j = [v]α

By linearity,

y = A(x) = α_1 A(v_1) + ⋯ + α_n A(v_n) = Σ_{j=1}^n α_j A(v_j)

Note that A : V → W; hence each A(v_j) ∈ W has a unique representation w.r.t. {w_1, ..., w_m}:

A(v_j) = a_{1j} w_1 + ⋯ + a_{mj} w_m,  j = 1, 2, ..., n

Linear Maps (Linear Operators)

Hence,

(A(v_1) A(v_2) ⋯ A(v_n)) = (w_1 w_2 ⋯ w_m) [ a_11 a_12 ... a_1n
                                              a_21 a_22 ... a_2n
                                              ...
                                              a_m1 a_m2 ... a_mn ] = [w]A

Note that A is an m × n matrix, and the jth column of A is the representation of A(v_j) w.r.t. the basis of W. Now, for any y = Ax ∈ W, there exist unique β_1, ..., β_m ∈ F w.r.t. the basis {w_1, ..., w_m} such that

y = β_1 w_1 + ⋯ + β_m w_m = [w]β

and also

y = α_1 A(v_1) + ⋯ + α_n A(v_n) = [A(v)]α = [w]Aα

Hence β = Aα. Since α and β are unique, we have the desired result.
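The proof is constructive: the jth column of the matrix is the coordinate vector of A(v_j) in the basis {w_1, ..., w_m}. A minimal numpy sketch of this construction (the map T and both bases are made-up for illustration, not from the slides):

```python
import numpy as np

# Made-up linear map on R^2: swap the two coordinates.
T = lambda x: np.array([x[1], x[0]])

# Bases for domain and codomain (columns are the basis vectors).
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # domain basis {v1, v2}
W = np.array([[2.0, 0.0],
              [0.0, 1.0]])      # codomain basis {w1, w2}

# Column j of A = coordinates of T(v_j) w.r.t. {w1, w2}.
A = np.column_stack([np.linalg.solve(W, T(V[:, j])) for j in range(2)])

# Check: if x has coordinates alpha, then beta = A @ alpha represents T(x).
alpha = np.array([3.0, -1.0])
assert np.allclose(W @ (A @ alpha), T(V @ alpha))
```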

Linear Maps (Linear Operators)

We will now write A instead of A(·) for a given basis.
i) The matrix A gives the relation between the representations (coordinates) α and β, not between x and y themselves.
ii) A depends on the bases: with different bases, we obtain different representations of the same operator A. In this sense the matrix A is not unique.
The most important case is V = W, in which case we can use the same basis for the domain and codomain. For a different basis, we may obtain Ā, which is a different representation of the same A.

Similarity Transformation

Suppose that v = {v_1, ..., v_n} and e = {e_1, ..., e_n} are bases for V. Then the linear operator A can have different representations: A w.r.t. v, or Ā w.r.t. e. We now find the relationship between A and Ā. Recall the change of basis: given two bases, we can change the coordinates (representation) of a vector using a transformation matrix. Since the domain and the codomain are the same space here, we can use the same transformation matrix on both sides.

Similarity Transformation

[diagram slide: the coordinate changes ᾱ = Pα and β̄ = Pβ relating the two representations]

Similarity Transformation

With ᾱ = Pα and β̄ = Pβ, we have

β̄ = Āᾱ  and  β̄ = Pβ = PAα = PAP⁻¹ᾱ

This implies Ā = PAP⁻¹. Similarly, with Q = P⁻¹, Ā = Q⁻¹AQ. Moreover,

A = P⁻¹ĀP = QĀQ⁻¹

Similar matrices: A and Ā are similar if there exists a nonsingular matrix P (or Q) satisfying the above relation. If P (or Q) exists, the above transformation is called a similarity transformation.
Note: all the matrix representations (w.r.t. different bases) of the same operator are similar.
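A small numerical illustration (a sketch with a random matrix and a random change of coordinates; np.poly(A) returns the coefficients of det(λI − A)): similar matrices share the same characteristic polynomial, as expected for two representations of one operator.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))        # representation w.r.t. basis v
P = rng.standard_normal((3, 3))        # coordinate change (nonsingular w.p. 1)
Abar = P @ A @ np.linalg.inv(P)        # representation w.r.t. basis e

# Same characteristic polynomial coefficients => same eigenvalues.
assert np.allclose(np.poly(A), np.poly(Abar))
```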

Similarity Transformation

Example: second-order differential equation with x_1 = x and x_2 = ẋ = ẋ_1:

ẍ + 3ẋ + 2x = u  ⟹  ẋ = [ 0 1 ; −2 −3 ] x + [ 0 ; 1 ] u = Ax + Bu

Eigenvalues and eigenvectors of A: det(sI − A) = 0 gives s = −1, −2, with eigenvectors e_1 = (1, −1)^T and e_2 = (1, −2)^T:

Q = (e_1 e_2) = [ 1 1 ; −1 −2 ],  Q⁻¹ = P = [ 2 1 ; −1 −1 ]

Let z = Q⁻¹x = Px, i.e., x = Qz. Then ż = Q⁻¹ẋ, so

ż = Q⁻¹AQ z + Q⁻¹B u = [ −1 0 ; 0 −2 ] z + [ 1 ; −1 ] u

(If unit-norm eigenvectors are used instead, the entries of Q⁻¹B have magnitudes √2 ≈ 1.41 and √5 ≈ 2.24; the diagonal part is unchanged.)
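The same computation in numpy (note that numpy returns unit-norm eigenvectors, which is why the entries of Q⁻¹B come out with magnitudes √2 and √5 rather than the integer values above):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

lam, Q = np.linalg.eig(A)          # eigenvalues -1 and -2 (order may vary)
print(np.linalg.inv(Q) @ A @ Q)    # diag(-1, -2) up to rounding
print(np.linalg.inv(Q) @ B)        # entries of magnitude sqrt(2), sqrt(5)
```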

Range and Null Spaces

Definition (Range space): Suppose that (V, F) and (W, F) are n- and m-dimensional vector spaces, respectively. The range of a linear operator A from V to W is the set R(A) defined by

R(A) = {y ∈ W : y = Ax for some x ∈ V}

Theorem: The range of a linear operator A is a subspace of (W, F).
Proof:
1) We have R(A) ⊆ W.
2) For any y_1, y_2 ∈ R(A) and α_1, α_2 ∈ F, there exist x_1, x_2 ∈ V with y_1 = Ax_1 and y_2 = Ax_2, so by linearity

α_1 y_1 + α_2 y_2 = α_1 Ax_1 + α_2 Ax_2 = A(α_1 x_1 + α_2 x_2)

Hence, by the definition of R(A), α_1 y_1 + α_2 y_2 ∈ R(A).

Range and Null Spaces

R(A) ⊆ W, and R(A) is a linear space. Let x = (x_1, ..., x_n)^T and A = (A_1, ..., A_n), where A_i is the ith column of A. Then Ax = y can be written as

y = A_1 x_1 + ⋯ + A_n x_n

So R(A) is the set of all possible linear combinations of the columns of A. Since R(A) is a linear subspace of (W, F), dim(R(A)) is the maximum number of linearly independent columns of A.

Range and Null Spaces

Definition (Null space): Suppose that (V, F) and (W, F) are n- and m-dimensional vector spaces, respectively. The null space of a linear operator A from V to W is the set N(A) defined by

N(A) = {x ∈ V : Ax = 0}

The null space of a linear operator A is a subspace of (V, F). Note that the null space is a subspace of the domain, whereas the range space is a subspace of the codomain (W, F).

Range and Null Spaces

Rank and nullity:
- The dimension of R(A) is denoted by rank(A)
- The dimension of N(A) is denoted by nullity(A)

Theorem (Dimension Theorem, or Rank-Nullity Theorem): Suppose that (V, F) and (W, F) are n- and m-dimensional vector spaces, respectively. Then for a linear operator A from V to W, we have

rank(A) + nullity(A) = dim(V) = n

Range and Null Spaces

R(A^T) = {x ∈ V : x = A^T y for some y ∈ W} (row space)
N(A^T) = {y ∈ W : A^T y = 0}
rank(A^T) + nullity(A^T) = m
rank(A) = number of linearly independent columns of A
rank(A^T) = number of linearly independent rows of A
rank(A) = rank(A^T)
A square matrix A is nonsingular if and only if all of its rows and columns are linearly independent.

Range and Null Spaces

Example:

A_1 = [ 1 2 3 ; 4 8 12 ],   A_2 = [ 1 3 ; 2 6 ; 3 9 ]

We can check

x_1 [1;4] + x_2 [2;8] + x_3 [3;12] = [0;0] for x = (1, 1, −1)^T and x = (2, −1, 0)^T
v_1 [1;2;3] + v_2 [3;6;9] = [0;0;0] for v = (3, −1)^T

Hence, nullity(A_1) = 2, rank(A_1) = 1, nullity(A_2) = 1, rank(A_2) = 1.
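These ranks and nullities can be checked numerically (a sketch; scipy.linalg.null_space returns an orthonormal basis of N(A), so its column count is the nullity):

```python
import numpy as np
from scipy.linalg import null_space

A1 = np.array([[1, 2, 3], [4, 8, 12]])
A2 = np.array([[1, 3], [2, 6], [3, 9]])

for A in (A1, A2):
    n = A.shape[1]                       # dimension of the domain
    r = np.linalg.matrix_rank(A)
    nullity = null_space(A).shape[1]
    print(r, nullity, r + nullity == n)  # rank-nullity: 1 2 True / 1 1 True
```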

Invariant Subspace

Definition (Invariant subspace): Let A be a linear operator of (V, F) into itself. A subspace Y of V is said to be an invariant subspace of V under A, or an A-invariant subspace of V, if

A(Y) ⊆ Y, i.e., Ay ∈ Y for all y ∈ Y

Trivial example: for A : R^n → R^n, R^n itself is an invariant subspace. Less trivially, the span of any eigenvector v of A is A-invariant, since Av = λv.

Eigenvalues and Eigenvectors

Definition (Eigenvalues and eigenvectors): Let A be a linear operator that maps (C^n, C) into itself. Then a scalar λ ∈ C is called an eigenvalue of A if there exists a nonzero x ∈ C^n such that Ax = λx. Any nonzero x satisfying Ax = λx is called an eigenvector of A associated with the eigenvalue λ.

Equivalently, (A − λI)x = 0, or (λI − A)x = 0. Such a nonzero x exists, i.e., λ is an eigenvalue of A, if and only if det(λI − A) = 0 (equivalently, det(A − λI) = 0).

det(λI − A) is called the characteristic polynomial of A. Its degree is n; hence the n × n matrix A has n eigenvalues (counted with multiplicity).

Eigenvalues and Eigenvectors

Question: For an n × n square matrix A, are all the eigenvalues of A distinct?
Answer: Not necessarily. Some eigenvalues may repeat, i.e., λ_i = λ_j for some i ≠ j, i, j = 1, 2, ..., n.

Case I: all the eigenvalues of A are distinct.

Theorem: Let λ_1, ..., λ_n be distinct eigenvalues of A, and let v_1, ..., v_n be associated eigenvectors. Then the set {v_1, ..., v_n} is linearly independent, and the n × n matrix V = [v] = (v_1, ..., v_n) has rank(V) = n.

Eigenvalues and Eigenvectors

We prove the first part. Assume that the set {v_1, ..., v_n} is linearly dependent. Then there exist α_1, ..., α_n ∈ C, not all zero, such that

α_1 v_1 + ⋯ + α_n v_n = 0

Without loss of generality, assume α_1 ≠ 0. Apply the operator (A − λ_2 I)(A − λ_3 I) ⋯ (A − λ_n I) to both sides:

(A − λ_2 I)(A − λ_3 I) ⋯ (A − λ_n I) Σ_{i=1}^n α_i v_i = 0

Using (A − λ_j I)v_i = (λ_i − λ_j)v_i for j ≠ i and (A − λ_i I)v_i = 0, every term with i ≥ 2 vanishes, leaving

α_1 (λ_1 − λ_2)(λ_1 − λ_3) ⋯ (λ_1 − λ_n) v_1 = 0

Since all the eigenvalues are distinct, Π_{i=2}^n (λ_1 − λ_i) ≠ 0, and v_1 ≠ 0, which implies α_1 = 0. This is a contradiction. Hence {v_1, ..., v_n} is linearly independent.

Eigenvalues and Eigenvectors

Theorem: A is diagonalizable if and only if it has n linearly independent eigenvectors.

A consequence of diagonalization Λ = Q⁻¹AQ, i.e., A = QΛQ⁻¹:

det(λI − A) = det(λI − QΛQ⁻¹) = det(Q(λI − Λ)Q⁻¹) = det(λI − Λ)

so A and Λ have the same characteristic polynomial.

Example (revisited): second-order differential equation with x_1 = x and x_2 = ẋ = ẋ_1:

ẍ + 3ẋ + 2x = u  ⟹  ẋ = [ 0 1 ; −2 −3 ] x + [ 0 ; 1 ] u

det(sI − A) = 0 gives s = −1, −2, and

Q = (e_1 e_2) = [ 1 1 ; −1 −2 ],  Q⁻¹ = P = [ 2 1 ; −1 −1 ]

With z = Q⁻¹x,

ż = Q⁻¹AQ z + Q⁻¹B u = [ −1 0 ; 0 −2 ] z + [ 1 ; −1 ] u

Eigenvalues and Eigenvectors

Case II: the eigenvalues of A are not all distinct.

Example I: A has repeated eigenvalues but still has n linearly independent eigenvectors, so A is diagonalizable (by the theorem) and [v_1 v_2 v_3] forms a basis:

A = [ 1  0  −1
      0  1   0
      0  0   2 ],  det(λI − A) = 0, λ = 1, 1, 2

v_1 = (1, 1, 0)^T, v_2 = (0, 1, 0)^T, v_3 = (−1, 0, 1)^T

Example II: A has repeated eigenvalues and linearly dependent eigenvectors:

A = [ 1  1  2
      0  1  3
      0  0  2 ],  det(λI − A) = 0, λ = 1, 1, 2

v_1 = (1, 0, 0)^T = v_2, v_3 = (5, 3, 1)^T

[v_1 v_2 v_3] is not sufficient to form a basis.

Jordan Form

As we have seen, when A has repeated eigenvalues, it may or may not be diagonalizable, depending on the corresponding eigenvectors. Assume that A has an eigenvalue λ_i with multiplicity m_i. Then the number of linearly independent eigenvectors associated with λ_i is equal to the dimension of N(λ_i I − A). Hence, we have

q_i = n − rank(λ_i I − A) = nullity(λ_i I − A)

Previous examples, λ_i = 1 with m_i = 2 (a numerical check follows below):

Example I: A − I = [ 0 0 −1 ; 0 0 0 ; 0 0 1 ], rank 1, so q = 2
Example II: A − I = [ 0 1 2 ; 0 0 3 ; 0 0 1 ], rank 2, so q = 1
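A short check of q_i for the two examples (a sketch, with the matrices as reconstructed above):

```python
import numpy as np

A_I  = np.array([[1, 0, -1], [0, 1, 0], [0, 0, 2]])   # Example I
A_II = np.array([[1, 1,  2], [0, 1, 3], [0, 0, 2]])   # Example II

for A in (A_I, A_II):
    n = A.shape[0]
    q = n - np.linalg.matrix_rank(np.eye(n) - A)      # lambda_i = 1
    print(q)    # 2 for Example I (diagonalizable), 1 for Example II (not)
```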

Jordan Form

We want to "diagonalize" A as far as possible when A has repeated eigenvalues; to do so, we need to find some additional vectors beyond the eigenvectors of A.

Generalized eigenvectors: A vector v is said to be a generalized eigenvector of grade k ≥ 1 if and only if

(A − λI)^k v = 0 and (A − λI)^{k−1} v ≠ 0

For k = 1, v is an ordinary eigenvector of A.

Example II:

A = [ 1  1  2
      0  1  3
      0  0  2 ],  det(λI − A) = 0, λ = 1, 1, 2

v_1 = (1, 0, 0)^T = v_2, v_3 = (5, 3, 1)^T

We compute a generalized eigenvector of A associated with λ = 1.

Jordan Form 0 0 5 B := (A I ) 2 = 0 0 3, Bv 2 = 0, v 2 = (0, 1, 0) T 0 0 1 Note that (A I )v 0. Then 1 0 5 1 1 0 Q = (v 1, v 2, v 3 ) = 0 1 3, J = Q 1 AQ = 0 1 0 0 0 1 0 0 2 J: Jordan matrix N := J diag{1, 1, 2} = 0 1 0 0 0 0 0 0 0 N: Nilpotent 28 / 40

Jordan Form

Assume that A is a 5 × 5 matrix with eigenvalue λ_1 of multiplicity 4 and eigenvalue λ_2 of multiplicity 1 (simple). Writing J_k(λ) for the Jordan block of order k with eigenvalue λ, we may have the following Jordan forms:

A_1 = blockdiag{J_4(λ_1), J_1(λ_2)} = [ λ_1  1   0   0   0
                                        0   λ_1  1   0   0
                                        0   0   λ_1  1   0
                                        0   0   0   λ_1  0
                                        0   0   0   0   λ_2 ]

A_2 = blockdiag{J_3(λ_1), J_1(λ_1), J_1(λ_2)}  (blocks of order 3 and 1 for λ_1)
A_3 = blockdiag{J_2(λ_1), J_2(λ_1), J_1(λ_2)}  (two blocks of order 2)
A_4 = blockdiag{J_2(λ_1), J_1(λ_1), J_1(λ_1), J_1(λ_2)}  (one block of order 2, two of order 1)
A_5 = diag{λ_1, λ_1, λ_1, λ_1, λ_2}  (four blocks of order 1)

Jordan Form

Theorem: Suppose that A ∈ C^{n×n}. Then there exist a nonsingular matrix T ∈ C^{n×n} and an integer 1 ≤ p ≤ n such that

T⁻¹AT = J = blockdiag{J_1, J_2, ..., J_p}

where each J_k is a Jordan matrix (or Jordan block) of order n_k, with n_1 + ⋯ + n_p = n.

Cayley-Hamilton Theorem

Cayley-Hamilton Theorem: Define the characteristic polynomial of A,

Δ(λ) = det(λI − A) = λ^n + α_1 λ^{n−1} + α_2 λ^{n−2} + ⋯ + α_{n−1} λ + α_n

Then

Δ(A) = A^n + α_1 A^{n−1} + ⋯ + α_{n−1} A + α_n I = 0

Simple proof for the case when A is diagonalizable (general proof: HW). The similarity transformation gives

Λ = Q⁻¹AQ,  A = QΛQ⁻¹

(Note also det(A) = det(Q) det(Λ) det(Q⁻¹) = det(Λ).) Then A^2 = QΛ^2Q⁻¹, ..., A^k = QΛ^kQ⁻¹, so

Δ(A) = Q[Λ^n + α_1 Λ^{n−1} + ⋯ + α_{n−1} Λ + α_n I]Q⁻¹ = Q Δ(Λ) Q⁻¹

and Δ(Λ) = diag{Δ(λ_1), ..., Δ(λ_n)} = 0, since every eigenvalue λ_i is a root of Δ.
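A numerical check of the theorem on a random matrix (a sketch; np.poly(A) returns the coefficients [1, α_1, ..., α_n] of det(λI − A)):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
alpha = np.poly(A)      # characteristic polynomial coefficients, leading 1

# Delta(A) = A^n + a1*A^(n-1) + ... + an*I, evaluated as a sum of matrix powers.
n = A.shape[0]
Delta_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(alpha))
print(np.max(np.abs(Delta_A)))    # numerically zero
```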

Cayley-Hamilton Theorem

Note that

A^n = −α_1 A^{n−1} − ⋯ − α_n I

i.e., A^n is a linear combination of {A^{n−1}, ..., A, I}. Similarly,

A^{n+1} = −α_1 A^n − ⋯ − α_n A

which, after substituting for A^n, is again a linear combination of {A^{n−1}, ..., A, I}; by induction, so is every higher power of A. Consequently, given any polynomial f(λ), f(A) can be expressed as

f(A) = β_0 I + β_1 A + ⋯ + β_{n−1} A^{n−1}

We can extend this from polynomials to general functions f(λ).

Cayley-Hamilton Theorem

For an n × n matrix A, let λ_1, ..., λ_m be the distinct eigenvalues with multiplicities n_1, ..., n_m, respectively (note that Σ_{i=1}^m n_i = n).

Theorem (Theorem 3.5 of the textbook): Given f(λ) and an n × n matrix A with

Δ(λ) = Π_{i=1}^m (λ − λ_i)^{n_i}

define

h(λ) = β_0 + β_1 λ + ⋯ + β_{n−1} λ^{n−1}

where the coefficients {β_0, ..., β_{n−1}} are solved from the following set of n equations:

d^l f(λ)/dλ^l |_{λ=λ_i} = d^l h(λ)/dλ^l |_{λ=λ_i},  l = 0, 1, ..., n_i − 1,  i = 1, 2, ..., m

Then f(A) = h(A).

Cayley-Hamilton Theorem

Example:

A = [ 0 1 ; −1 −2 ]

We want to compute A^100. det(λI − A) = λ^2 + 2λ + 1 = (λ + 1)^2. We let h(λ) = β_0 + β_1 λ and f(λ) = λ^100. We have

f(−1) = h(−1): (−1)^100 = β_0 − β_1
f′(−1) = h′(−1): 100(−1)^99 = β_1

so β_1 = −100, β_0 = 1 + β_1 = −99, and h(λ) = −99 − 100λ. Hence

f(A) = A^100 = h(A) = −99I − 100A = [ −99 −100 ; 100 101 ]
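Checking the result directly (the entries of A^100 stay small here because A = −I + M with M^2 = 0):

```python
import numpy as np

A = np.array([[0, 1], [-1, -2]])
assert np.allclose(np.linalg.matrix_power(A, 100), -99 * np.eye(2) - 100 * A)
```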

Cayley-Hamilton Theorem

Example:

A = [ 0  0  −2
      0  1   0
      1  0   3 ]

We want to compute e^{At}. det(λI − A) = (λ − 1)^2(λ − 2). Let h(λ) = β_0 + β_1 λ + β_2 λ^2 and f(λ) = e^{λt}. Then

f(1) = h(1): e^t = β_0 + β_1 + β_2
f′(1) = h′(1): t e^t = β_1 + 2β_2
f(2) = h(2): e^{2t} = β_0 + 2β_1 + 4β_2

Solving,

β_0 = −2t e^t + e^{2t},  β_1 = 3t e^t + 2e^t − 2e^{2t},  β_2 = e^{2t} − e^t − t e^t

and

f(A) = e^{At} = h(A) = β_0 I + β_1 A + β_2 A^2
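The claim can be verified against scipy's matrix exponential at any fixed t (a sketch at t = 0.5):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 0.0, -2.0], [0.0, 1.0, 0.0], [1.0, 0.0, 3.0]])
t = 0.5
b0 = -2 * t * np.exp(t) + np.exp(2 * t)
b1 = 3 * t * np.exp(t) + 2 * np.exp(t) - 2 * np.exp(2 * t)
b2 = np.exp(2 * t) - np.exp(t) - t * np.exp(t)
assert np.allclose(expm(A * t), b0 * np.eye(3) + b1 * A + b2 * (A @ A))
```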

Positive (Semi-)Definite Matrices

A matrix A is symmetric if A = A^T. For any n × n matrix A and x ∈ R^n, x^T A x is called a quadratic form.

Definition: A symmetric matrix A is said to be
- positive definite if x^T A x > 0 for all x ≠ 0
- positive semi-definite if x^T A x ≥ 0 for all x

Theorem: A symmetric matrix A is positive definite (positive semi-definite) if and only if all eigenvalues of A are positive (nonnegative).
- A symmetric positive definite matrix is invertible
- For a symmetric positive definite matrix A, det(A) > 0
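Positive definiteness is easy to check numerically (a sketch; np.linalg.cholesky succeeds exactly when the symmetric matrix is positive definite):

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])      # symmetric
eigs = np.linalg.eigvalsh(A)                  # eigenvalues of a symmetric matrix
print(eigs, np.all(eigs > 0))                 # [1. 3.] True -> positive definite

L = np.linalg.cholesky(A)                     # raises LinAlgError if not PD
assert np.allclose(L @ L.T, A)
```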

Norm and Inner Product

Normed vector space (normed linear space): a norm measures the length of a vector. Let (V, F) be an n-dimensional vector space. A function ‖·‖ : V → R is said to be a norm if the following properties hold:
- ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0 (separates points)
- ‖αx‖ = |α| ‖x‖ (absolute homogeneity)
- ‖x + y‖ ≤ ‖x‖ + ‖y‖ (triangle inequality)

Example: on (R^n, R), the norm can be chosen as

‖x‖_1 := Σ_{i=1}^n |x_i|,  ‖x‖_2 := (Σ_{i=1}^n |x_i|^2)^{1/2},  ‖x‖_∞ := max_i |x_i|

Example: signal norm for a real-valued continuous function f(t):

‖f‖_p = (∫_0^t |f(τ)|^p dτ)^{1/p},  where 1 ≤ p < ∞
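The three vector norms above are available directly in numpy (a quick illustration):

```python
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x, 1))        # 7.0  (sum of absolute values)
print(np.linalg.norm(x, 2))        # 5.0  (Euclidean length)
print(np.linalg.norm(x, np.inf))   # 4.0  (largest absolute entry)
```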

Norm and Inner Product

Inner product: measures the angle between two vectors. An inner product on the vector space (V, C) is a function ⟨·,·⟩ : V × V → C such that the following properties hold:
- ⟨x, y⟩ = complex conjugate of ⟨y, x⟩
- ⟨x, α_1 y_1 + α_2 y_2⟩ = α_1 ⟨x, y_1⟩ + α_2 ⟨x, y_2⟩ for all x, y_1, y_2 ∈ V and α_1, α_2 ∈ C
- ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0

Example: on (R^n, R), the inner product is

⟨x, y⟩ = x^T y = Σ_{i=1}^n x_i y_i,  ‖x‖_2^2 = ⟨x, x⟩ = Σ_{i=1}^n |x_i|^2

Example: signal inner product for real-valued continuous functions f(t), g(t):

⟨f, g⟩ = ∫_0^t f(τ)g(τ) dτ,  ‖f‖_2^2 = ∫_0^t |f(τ)|^2 dτ

Orthogonal and Orthonormal Vectors

Two vectors x and y are said to be orthogonal if and only if ⟨x, y⟩ = 0. If ⟨x, y⟩ = x^T y, then x and y are orthogonal if and only if x^T y = 0. Note that if x^T y = 0, then the angle between the two vectors is 90°.

Fact: Let A be a symmetric matrix, that is, A = A^T. If A has distinct eigenvalues, then the corresponding eigenvectors are orthogonal.

Example:

A_1 = [ 2 1 ; 1 2 ]

det(λI − A_1) = 0 gives λ = 1, 3, with eigenvectors

(v_1, v_2) = [ 1 1 ; −1 1 ],  v_1^T v_2 = 0, orthogonal
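For symmetric matrices, np.linalg.eigh returns the eigenvectors as orthonormal columns, which illustrates the fact above:

```python
import numpy as np

A1 = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, V = np.linalg.eigh(A1)      # symmetric input: orthonormal eigenvectors
print(lam)                       # [1. 3.]
print(np.round(V.T @ V, 12))     # identity: the eigenvectors are orthonormal
```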

Orthogonal and Orthonormal Vectors

A set of vectors x_1, x_2, ..., x_m is said to be orthonormal if

x_i^T x_j = 0 if i ≠ j, and x_i^T x_j = 1 if i = j