Solution for Homework 5


ME243A/ECE230A, Fall 2007

Exercise 1

The computation of the reachable subspace in continuous time can be handled easily by introducing the concepts of inner product, orthogonal complement of a subspace, and adjoint operator.

Given a vector space $V$ over a field $K$, an inner product between the vectors $v$ and $w$ in $V$, which we denote by the symbol $\langle v, w \rangle$, is any operation from $V \times V$ to $K$ such that the following three properties are satisfied for every $u, v, w \in V$ and for every $\alpha \in K$:

(positivity) $\langle v, v \rangle \ge 0$, and $\langle v, v \rangle = 0$ if and only if $v = 0$;

(sesquilinearity) $\langle \alpha v, w \rangle = \alpha^* \langle v, w \rangle$ and $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$;

(conjugate symmetry) $\langle v, w \rangle = \langle w, v \rangle^*$.

Here, and in the rest of this document, the symbol $^*$ denotes conjugation or, when applied to a matrix, conjugate transposition. A vector space with an inner product is called an inner product space.

The most common example of an inner product space is $\mathbb{R}^n$ with the dot product:
$$\langle x, y \rangle = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n = x^* y.$$
It is easy to check that the three properties above hold for the dot product.

If our vector space is instead $C^1\{[0, t]\}$, the space of all continuously differentiable, real, bounded functions defined on the interval $[0, t]$, we may wonder what is a convenient way of defining an inner product on such a space. The natural choice is the integral analogue of the dot product: for $f$ and $g$ in $C^1\{[0, t]\}$ we define
$$\langle f, g \rangle = \int_0^t f(\tau)^* g(\tau)\, d\tau.$$
The fact that this satisfies the properties of inner products follows immediately from the properties of integrals.
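As a quick sanity check of these definitions, here is a small numerical sketch (assuming NumPy is available; the vectors, the scalar alpha, and the functions f and g are made up for illustration). It verifies the three properties for the complex dot product $\langle x, y \rangle = x^* y$ and approximates the function-space inner product on a grid:

import numpy as np

rng = np.random.default_rng(0)
x, y, u = (rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(3))
alpha = 2.0 - 1.0j

def inner(a, b):
    # <a, b> = a* b; np.vdot conjugates its first argument.
    return np.vdot(a, b)

assert inner(x, x).real >= 0 and np.isclose(inner(x, x).imag, 0.0)    # positivity
assert np.isclose(inner(alpha * x, y), np.conj(alpha) * inner(x, y))  # sesquilinearity
assert np.isclose(inner(u + x, y), inner(u, y) + inner(x, y))         # additivity
assert np.isclose(inner(x, y), np.conj(inner(y, x)))                  # conjugate symmetry

# Function-space inner product <f, g> = integral_0^t f(tau)* g(tau) dtau,
# approximated by a Riemann sum on a uniform grid.
t, m = 1.0, 10000
tau = np.linspace(0.0, t, m, endpoint=False)
f, g = np.sin(2 * np.pi * tau), np.exp(-tau)
fg = float(np.sum(f * g) * (t / m))  # scalar value of <f, g>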

The main purpose of introducing inner products is to measure the angle between two vectors. In particular, two vectors of an inner product space are orthogonal if and only if their inner product is zero. If $V$ is an inner product space and $W$ is a subspace of $V$, we define the orthogonal complement of $W$, denoted $W^\perp$, as the set of all vectors of $V$ that are orthogonal to every vector of $W$:
$$W^\perp = \{ v \in V : \langle v, w \rangle = 0 \ \ \forall w \in W \}.$$
If $V$ is a finite-dimensional inner product space, then $V = W \oplus W^\perp$.

The following two facts will be needed in the later proof.

Lemma 1. For every subspace $W$, $(W^\perp)^\perp = W$.

Lemma 2. For every matrix $M$, $\ker M^* = (\operatorname{Im} M)^\perp$, where $\ker M$ is the null space of $M$ (also called the kernel) and $\operatorname{Im} M$ is the range of $M$ (also called the image).

Let $V$ and $W$ be inner product spaces, and let $T : V \to W$ be a linear operator. We call the null space (or kernel) of $T$, written $\ker T$, the set of all vectors of $V$ that are mapped by $T$ into the zero vector of $W$:
$$\ker T = \{ v \in V : T(v) = 0 \}.$$
We call the range (or image) of $T$, written $\operatorname{Im} T$, the set of all vectors of $W$ that can be written as $T(v)$ for some $v \in V$:
$$\operatorname{Im} T = \{ w \in W : w = T(v) \text{ for some } v \in V \}.$$
We define the adjoint operator of $T$ to be another linear operator $T^* : W \to V$ such that the following property holds for every $v \in V$ and $w \in W$:
$$\langle T(v), w \rangle = \langle v, T^*(w) \rangle,$$
where the inner product on the left-hand side is the inner product of $W$ and the inner product on the right-hand side is the inner product of $V$.

In the following, we will be using the key properties of adjoint operators stated below.

Lemma 3. For an operator $T$ with adjoint $T^*$, $\operatorname{Im} T = \operatorname{Im}(T T^*)$.
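Lemma 2 is easy to illustrate numerically. The sketch below (assuming NumPy and SciPy; the rank-deficient matrix M is made up) checks that an orthonormal basis of $\ker M^*$ is orthogonal to an orthonormal basis of $\operatorname{Im} M$, and that the two dimensions add up to the ambient dimension:

import numpy as np
from scipy.linalg import null_space, orth

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5x4 matrix of rank 3

N = null_space(M.T)  # orthonormal basis of ker M* (M is real, so M* = M^T)
R = orth(M)          # orthonormal basis of Im M

assert np.allclose(N.T @ R, 0.0)              # every pair of basis vectors is orthogonal
assert N.shape[1] + R.shape[1] == M.shape[0]  # dim ker M* + dim Im M = 5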

Lemma 4 For adjoint operators T and T Im T = ker(t ) With the algebraic tools we reviewed so far, we are now ready to compute the set of reachable states in continuous time In other words, we are asking what are the states to which we can drive our system under the action of an input u we are free to choose We know that the state response of the system is given by: x(t) = e A(t τ) Bu(τ)dτ (1) Equation (1) can be viewed as the definition of a linear operator x = T (u) that takes an input function from the space C 1 {[, t]} and maps it into the state x in R n, reached at time t Within this framework, computing the set of reachable states amounts to computing the image of the operator T In order to do so, let s first of all show that T (x) = B e A (t τ) x We have: ( T (u), x = e Bu(τ)dτ) A(t τ) x = = B e A (t τ) u(τ)dτx = u(τ)b e A (t τ) xdτ = u, T (x) Now, let s compute the null space of T (x), which is the set of all vectors x R n such that: B e A (t τ) x = Expanding the matrix exponential in Taylor series, we get: B x + B A(t τ)x + B 2 (t τ)2 A x + B 3 (t τ)3 A x + =, 2 3! which by the principle of polynomials identity can be satisfied for every (t τ) if and only if: B x = B Ax = B A 2 x = The Cayley-Hamilton theorem tells us that of these infinite equations only the first n are linearly independent, so we can reduce our system to: B B A B A 2 x = (2) B A n 1 3

What (2) shows is that $x$ is a vector of the null space of $T^*$ if and only if $x$ is a vector of the null space of the matrix $U^*$, where $U = [\,B \ \ AB \ \ A^2 B \ \ \cdots \ \ A^{n-1} B\,]$ is the controllability matrix. Now, using Lemma 1, Lemma 2 and Lemma 4, we can write:
$$\operatorname{Im} T = (\ker T^*)^\perp = (\ker U^*)^\perp = ((\operatorname{Im} U)^\perp)^\perp = \operatorname{Im} U.$$
This computation proves the equivalence $\mathcal{R} = \operatorname{Im} U$, that is, the second equivalence in part (c). All the others are consequences of this main fact.

In particular, note that the proof is completely independent of the time $t$ we pick as the final integration time in (1). This proves part (a), and means that the set of reachable states in continuous time is independent of time.

Once we observe that $T T^* = W_{t_f}$, the reachability Gramian, the first equivalence in part (c) is a direct consequence of Lemma 3. The chain of implications in part (b) can be stated as the set of equivalences:
$$\operatorname{Im} W_{t_f} = (\ker T^*)^\perp = (\ker U^*)^\perp = \mathcal{R},$$
all of which have been proved already.
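The equivalence $\mathcal{R} = \operatorname{Im} U = \operatorname{Im} W_{t_f}$ is easy to verify numerically on a small example. In the sketch below (assuming NumPy and SciPy; the matrices A and B are made up, not the ones from the homework), the Gramian is approximated by a Riemann sum and its rank is compared with that of the controllability matrix:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical example system
B = np.array([[0.0], [1.0]])
n = A.shape[0]

# Controllability matrix U = [B  AB  ...  A^(n-1)B].
U = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Reachability Gramian W_t = integral_0^t exp(A(t-tau)) B B* exp(A*(t-tau)) dtau,
# approximated by a Riemann sum.
t, m = 1.0, 500
taus = np.linspace(0.0, t, m, endpoint=False)
W = sum(expm(A * (t - s)) @ B @ B.T @ expm(A.T * (t - s)) for s in taus) * (t / m)

# Both matrices span the reachable subspace, so their ranks agree.
assert np.linalg.matrix_rank(U) == np.linalg.matrix_rank(W) == n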

Exercise 2

The proof needs to be carried out only for values of $\lambda$ that are eigenvalues of $A$: for any other $\lambda$, the matrix $\lambda I - A$ is itself full rank, so $[\,\lambda I - A \ \ B\,]$ automatically has rank $n$. The proof goes by contradiction. Suppose that for the eigenvalue $\lambda$ we have:
$$\operatorname{rank} [\,\lambda I - A \ \ B\,] < n.$$
Then there exists a non-zero vector $v$ such that $v^T [\,\lambda I - A \ \ B\,] = 0$. This condition can be split into:
$$v^T A = \lambda v^T, \qquad (3)$$
$$v^T B = 0. \qquad (4)$$
Equation (3) shows that $v$ is a left eigenvector of $A$, so we can write:
$$v^T A^k = \lambda^k v^T \quad \text{for } k = 1, 2, \ldots, n-1. \qquad (5)$$
Post-multiplying equations (5) by $B$, and considering (4), we get $v^T A^k B = \lambda^k v^T B = 0$ and, finally:
$$v^T [\,B \ \ AB \ \ A^2 B \ \ \cdots \ \ A^{n-1} B\,] = v^T U = 0.$$
So, since $v \ne 0$, the controllability matrix $U$ cannot have full rank, and the system is not controllable.

Let us now suppose that $U$ is not full rank. Then there exists a non-zero vector $v$ such that:
$$v^T [\,B \ \ AB \ \ A^2 B \ \ \cdots \ \ A^{n-1} B\,] = 0.$$
Let $p(s)$ be the characteristic polynomial of $A$ (and hence of $A^T$). By the Cayley-Hamilton theorem $p(A) = 0$, so $p(A^T) v = 0$ and $v^T p(A) = 0$. If $\lambda$ is a zero of $p(s)$, meaning that it is an eigenvalue of $A$, then we can factor $p(s)$ in the following way:
$$p(s) = g(s)(s - \lambda),$$
with $\deg g = n - 1$, and we have:
$$0 = v^T p(A) = \underbrace{v^T g(A)}_{w^T} (A - \lambda I).$$
The vector $w^T$ can be taken non-zero: if necessary, replace $p$ by the lowest-degree monic polynomial $h$ such that $v^T h(A) = 0$ (such an $h$ divides $p$, so its roots are still eigenvalues of $A$) and factor out one of its roots; minimality of $h$ then guarantees $v^T g(A) \ne 0$. The vector $w^T$ satisfies:
$$w^T (\lambda I - A) = 0. \qquad (6)$$
Moreover, setting $g(s) = \alpha_0 + \alpha_1 s + \cdots + \alpha_{n-1} s^{n-1}$, the vector $w^T$ also satisfies:
$$w^T B = v^T g(A) B = v^T (\alpha_0 B + \alpha_1 A B + \cdots + \alpha_{n-1} A^{n-1} B) = 0. \qquad (7)$$
Equations (6) and (7) show that $w^T [\,\lambda I - A \ \ B\,] = 0$ for a non-zero vector $w$, so the matrix $[\,\lambda I - A \ \ B\,]$ is not full rank.

Exercise 3

For the given system, the controllability matrix $U = [\,B \ \ AB \ \ A^2 B\,]$ and the observability matrix $V$ (with rows $C$, $CA$, $CA^2$) both turn out to have rank 3. Since $\operatorname{rank} U = \operatorname{rank} V = 3 = n$, the system is both controllable and observable.
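The rank condition of Exercise 2 is the PBH (Popov-Belevitch-Hautus) test, and it translates directly into a few lines of code. In the sketch below (assuming NumPy; the matrices A and B are made up, not those of Exercise 3), the test is evaluated at every eigenvalue of A and compared against the controllability-matrix criterion:

import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])  # hypothetical example system
B = np.array([[0.0], [0.0], [1.0]])

def pbh_controllable(A, B):
    # (A, B) is controllable iff rank [lambda*I - A, B] = n
    # for every eigenvalue lambda of A.
    n = A.shape[0]
    return all(
        np.linalg.matrix_rank(np.hstack([lam * np.eye(n) - A, B])) == n
        for lam in np.linalg.eigvals(A)
    )

n = A.shape[0]
U = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
# The PBH test agrees with the rank test on the controllability matrix U.
assert pbh_controllable(A, B) == (np.linalg.matrix_rank(U) == n)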

Exercise 4

Using long division on each entry, we can rewrite $G(s)$ as the sum of a constant matrix and a strictly proper part:
$$G(s) = G(\infty) + \hat G(s), \qquad \hat G(s) = \begin{bmatrix} \dfrac{1}{s+1} & \dfrac{s+2}{s^2-1} & \dfrac{1}{s} \\[4pt] \dfrac{1}{s^2} & \dfrac{2}{s-1} & \dfrac{1}{s-1} \end{bmatrix}.$$

Let $\psi(s)$ be the least common multiple of all the denominators of the entries of $\hat G(s)$; then:
$$\psi(s) = s^2 (s+1)(s-1) = s^4 - s^2 = a_0 + a_1 s + a_2 s^2 + a_3 s^3 + s^4,$$
so that $a_0 = a_1 = a_3 = 0$ and $a_2 = -1$. The matrix $\psi(s) \hat G(s)$ is polynomial and can be written as:
$$\psi(s) \hat G(s) = \begin{bmatrix} s^3 - s^2 & s^3 + 2s^2 & s^3 - s \\ s^2 - 1 & 2s^3 + 2s^2 & s^3 + s^2 \end{bmatrix} = \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix}}_{C_0} + s \underbrace{\begin{bmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix}}_{C_1} + s^2 \underbrace{\begin{bmatrix} -1 & 2 & 0 \\ 1 & 2 & 1 \end{bmatrix}}_{C_2} + s^3 \underbrace{\begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \end{bmatrix}}_{C_3}.$$

A controllable state-space realization of $G(s)$ is the following:
$$A = \begin{bmatrix} 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \\ 0 & 0 & 0 & I \\ -a_0 I & -a_1 I & -a_2 I & -a_3 I \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 0 \\ 0 \\ I \end{bmatrix}, \qquad C = [\,C_0 \ \ C_1 \ \ C_2 \ \ C_3\,], \qquad D = G(\infty),$$
where $I$ is the $3 \times 3$ identity (one block per input). Note, however, that this is not the controllable canonical form of the system. This realization solves parts (a) and (b).
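The realization can be double-checked numerically: for any $s$ that is not a pole, $C(sI - A)^{-1}B$ must reproduce the strictly proper part $\hat G(s)$ entry by entry. A sketch of this check (assuming NumPy; the test point s is arbitrary):

import numpy as np

# Coefficients of psi(s) = s^4 - s^2 and the matrices C0..C3 found above.
a = [0.0, 0.0, -1.0, 0.0]
C0 = np.array([[0.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
C1 = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, 0.0]])
C2 = np.array([[-1.0, 2.0, 0.0], [1.0, 2.0, 1.0]])
C3 = np.array([[1.0, 1.0, 1.0], [0.0, 2.0, 1.0]])

I3, Z = np.eye(3), np.zeros((3, 3))
A = np.block([[Z, I3, Z, Z],
              [Z, Z, I3, Z],
              [Z, Z, Z, I3],
              [-a[0] * I3, -a[1] * I3, -a[2] * I3, -a[3] * I3]])
B = np.vstack([Z, Z, Z, I3])
C = np.hstack([C0, C1, C2, C3])

s = 2.0 + 1.0j  # arbitrary test point, away from the poles 0, 1, -1
Ghat = np.array([[1/(s + 1), (s + 2)/(s**2 - 1), 1/s],
                 [1/s**2, 2/(s - 1), 1/(s - 1)]])
# C (sI - A)^{-1} B equals the strictly proper part of G(s).
assert np.allclose(C @ np.linalg.solve(s * np.eye(12) - A, B), Ghat)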

has full rank Calling this matrix T 1, the similarity transformation: Ā = T 1 AT, B = T 1 B, C = CT, puts the system into the following form: ] [Ā11 Ā =, Ā 21 Ā 22 [ ] B1 B =, B 2 C = [ C1 ] The system { Ā 11, B 1, C 1, D } is the controllable realization we are looking for, which solves part (c) This procedure, called Kalman decomposition with respect to observability, can be used in general to spot the observable subsystem in a larger system that is not observable Exercise 5 Inserting a dummy state in the minimal realization in canonical controllable form, together with suitable zeros in the B and C matrices, we get: 1 A = 1 3 B = 1 C = [ 1 1 ], 1 which is not controllable, nor observable Note the selection of a negative number for a 33 This is to avoid introducing a zero (or, even worse, positive) eigenvalue in A, which would make A no longer Hurwitz 7