Lecture 4 and 5 Controllability and Observability: Kalman decompositions


Lecture 4 and 5: Controllability and Observability: Kalman decompositions
Spring 2013, EE 194, Advanced Control (Prof. Khan)
January 30 (Wed.) and February 04 (Mon.), 2013

I. OBSERVABILITY OF DT LTI SYSTEMS

Consider the DT LTI dynamics again, now together with an observation model. We are interested in deriving the necessary conditions for optimal estimation, so the additive noise terms can be removed. The system to be considered is

    x_{k+1} = A x_k + B u_k,    (1)
    y_k = C x_k,    (2)

and, as before, we consider the worst case of one observation per time-step k, i.e., p = 1.

Definition 1 (Observability). A DT LTI system is said to be observable in n time-steps when the initial condition, x_0, can be recovered from a sequence of observations, y_0, ..., y_{n-1}, and inputs, u_0, ..., u_{n-1}.

The observability problem is to find x_0 from a sequence of observations, y_0, ..., y_{k-1}, and

The lecture material is largely based on: Fundamentals of Linear State Space Systems, John S. Bay, McGraw-Hill Series in Electrical Engineering, 1998.

inputs, u_0, ..., u_{k-1}. To this end, note that

    y_0 = C x_0,
    y_1 = C x_1 = C A x_0 + C B u_0,
    y_2 = C x_2 = C A^2 x_0 + C A B u_0 + C B u_1,
    ...
    y_{k-1} = C A^{k-1} x_0 + C A^{k-2} B u_0 + ... + C B u_{k-2}.

In matrix form, the above is given by

    [y_0; y_1; y_2; ...; y_{k-1}] = [C; C A; C A^2; ...; C A^{k-1}] x_0 + Υ [u_0; u_1; u_2; ...; u_{k-1}],

where Υ collects the (known) input-to-output terms. The above is a linear system of equations, compactly written as

    Ψ_{0:k-1} = O_{0:k-1} x_0,    O_{0:k-1} ∈ R^{k×n},    (3)

and hence, from the known inputs and known observations, the initial condition, x_0, can be recovered if and only if

    rank(O_{0:k-1}) = n.    (4)

Using the same arguments as for controllability, we note that rank(O_{0:k-1}) < n for k ≤ n - 1, as the observability matrix then has at most n - 1 rows. Similarly, adding another observation after the nth observation does not help, by Cayley-Hamilton arguments, as

    rank(O_{0:n-1}) = rank(O_{0:n}) = rank(O_{0:n+1}) = ....    (5)
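To make Eq. (3) concrete, here is a small numerical sketch of recovering x_0 from stacked observations; the matrices A and C and the initial condition are illustrative choices of mine, not from the lecture, and the input is taken as zero for brevity.

```python
import numpy as np

# Build the observability matrix O_{0:n-1} = [C; CA; ...; CA^{n-1}] and
# recover x_0 from n observations (zero input, u_k = 0, for brevity).
A = np.array([[0.5, 1.0],
              [0.0, 0.9]])
C = np.array([[1.0, 0.0]])            # p = 1: one observation per step
n = A.shape[0]

O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
rank_O = int(np.linalg.matrix_rank(O))   # = n here, so the system is observable

# Simulate y_0, ..., y_{n-1} from a "hidden" initial condition.
x0_true = np.array([2.0, -1.0])
x, ys = x0_true.copy(), []
for _ in range(n):
    ys.append(C @ x)
    x = A @ x
Psi = np.concatenate(ys)              # Psi_{0:n-1} = O x_0, as in Eq. (3)

# Invert the linear system of Eq. (3); lstsq also covers the case p > 1.
x0_hat = np.linalg.lstsq(O, Psi, rcond=None)[0]
```

Since rank(O) = n, the least-squares solution is exact and x0_hat equals the hidden x_0.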

Hence, the observability matrix is defined as

    O = [C; C A; C A^2; ...; C A^{n-1}],    (6)

without subscripts.

Remark 1. In the literature, rank(C) = n is often referred to as the system being (A, B) controllable. Similarly, rank(O) = n is often referred to as the system being (A, C) observable.

Remark 2. Show that

    rank(C) = n  ⟺  C C^T ≻ 0  ⟺  (C C^T)^{-1} exists,    (7)
    rank(O) = n  ⟺  O^T O ≻ 0  ⟺  (O^T O)^{-1} exists.    (8)

Remark 3. Show that

    (A, C) observable  ⟺  (A^T, C^T) controllable,    (9)
    (A, B) controllable  ⟺  (A^T, B^T) observable.    (10)
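The duality in Remark 3 is easy to check numerically: the observability matrix of (A, C) is exactly the transpose of the controllability matrix of the dual pair (A^T, C^T), so the two rank conditions coincide. A small sketch (the matrices are my own illustrative choices):

```python
import numpy as np

# Observability matrix of (A, C) vs. controllability matrix of (A^T, C^T).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
C = np.array([[1.0, 0.0, 1.0]])
n = A.shape[0]

O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
C_dual = np.hstack([np.linalg.matrix_power(A.T, i) @ C.T for i in range(n)])

# O equals C_dual^T, which is why Eq. (9) holds.
same_matrix = bool(np.allclose(O, C_dual.T))
rank_O = int(np.linalg.matrix_rank(O))
rank_C_dual = int(np.linalg.matrix_rank(C_dual))
```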

II. KALMAN DECOMPOSITIONS

Suppose it is known that a SISO (single-input single-output) system has rank(C) = n_c < n and rank(O) = n_o < n. In other words, the SISO system in question is neither controllable nor observable. We are interested in a transformation that rearranges the system modes (eigenvalues) into:

- modes that are both controllable and observable;
- modes that are neither controllable nor observable;
- modes that are controllable but not observable; and
- modes that are observable but not controllable.

Such a transformation is called a Kalman decomposition.

A. Decomposing controllable modes

First, consider separating only the controllable subspace from the rest of the state-space. To this end, consider a state transformation matrix, V, whose first n_c columns are a maximal set of linearly independent columns from the controllability matrix, C. The remaining n - n_c columns are chosen to be linearly independent from the first n_c columns (and linearly independent among themselves). The transformation operator (matrix) is n × n, defined as

    V = [v_1 ... v_{n_c} | v_{n_c+1} ... v_n] = [V_1 V_2].    (11)

It is straightforward to show that the transformed system (x_k = V x̄_k) is

    x̄_{k+1} = V^{-1} A V x̄_k + V^{-1} B u_k,    (12)
    y_k = C V x̄_k.    (13)

Partition V^{-1} into its rows as

    V^{-1} = [V̄_1; V̄_2],    (14)

where V̄_1 is n_c × n and V̄_2 is (n - n_c) × n. Now note that

    I_n = V^{-1} V = [V̄_1; V̄_2] [V_1 V_2] = [V̄_1 V_1, V̄_1 V_2; V̄_2 V_1, V̄_2 V_2] = [I_{n_c}, 0; 0, I_{n-n_c}].    (15)

The above leads to the following¹:

    V̄_2 V_1 = 0  ⟹  every column of C lies in the null space of V̄_2,    (16)

further leading to

    V̄_2 B = 0, and V̄_2 A V_1 = 0.    (17)

The transformed system in Eq. (12) is thus given by

    Ā = V^{-1} A V = [V̄_1; V̄_2] A [V_1 V_2] = [V̄_1 A V_1, V̄_1 A V_2; V̄_2 A V_1, V̄_2 A V_2] = [A_c, A_cc̄; 0, A_c̄],    (18)
    B̄ = V^{-1} B = [V̄_1; V̄_2] B = [V̄_1 B; V̄_2 B] = [B_c; 0].    (19)

Now recall that a similarity transformation does not change the rank of the controllability matrix; denote by C̄ the controllability matrix of the transformed system. Mathematically,

    rank(C̄) = rank([V^{-1} B, (V^{-1} A V) V^{-1} B, ..., (V^{-1} A V)^{n-1} V^{-1} B])
             = rank([V^{-1} B, V^{-1} A B, ..., V^{-1} A^{n-1} B])
             = rank(V^{-1} [B, A B, ..., A^{n-1} B])
             = rank([B, A B, ..., A^{n-1} B])
             = rank(C)
             = n_c.

Furthermore,

    rank(C̄) = rank([B̄, Ā B̄, ..., Ā^{n-1} B̄])
             = rank([B_c, A_c B_c, ..., A_c^{n-1} B_c; 0, 0, ..., 0])
             = rank([B_c, A_c B_c, ..., A_c^{n_c-1} B_c; 0, 0, ..., 0])
             = n_c,

¹This is because all of the freedom in the linear independence of C is already captured in V_1.

where, to go from the second equation to the third, we use the Cayley-Hamilton theorem. All of this shows that the subsystem (A_c, B_c) is controllable.

Conclusions: Using the transformation matrix, V, defined in Eq. (11), we have transformed the original system, x_{k+1} = A x_k + B u_k, into

    x̄_{k+1} = Ā x̄_k + B̄ u_k,    (20)
    [x_c(k+1); x_c̄(k+1)] = [A_c, A_cc̄; 0, A_c̄] [x_c(k); x_c̄(k)] + [B_c; 0] u_k,    (21)

such that the transformed state-space is separated into a controllable subspace and an uncontrollable subspace. Consider the transformed system in Eq. (20) and partition the (transformed) state-vector, x̄(k), into an n_c × 1 component, x_c(k), and an (n - n_c) × 1 component, x_c̄(k). We get

    x_c(k+1) = A_c x_c(k) + A_cc̄ x_c̄(k) + B_c u_k,    (22)
    x_c̄(k+1) = A_c̄ x_c̄(k).    (23)

Clearly, the lower subsystem, x_c̄(k), is not controllable. In the transformed coordinates, x̄_k, we can only steer a portion, x_c(k), of the transformed states. The subspace of R^n that can be steered is denoted R(C), the range (column space) of C; R(C) is called the controllable subspace.

Finally, recall that the eigenvalues of similar matrices, A and V^{-1} A V, are the same. Recall the structure of V^{-1} A V from Eq. (18); since V^{-1} A V is block upper-triangular, the eigenvalues of A are given by the eigenvalues of A_c and A_c̄, called the controllable eigenvalues (modes) and uncontrollable eigenvalues (modes), respectively.

Example 1. Consider the CT LTI system

    ẋ = [2, 1, 1; 5, 3, 6; -5, -1, -4] x + [1; 0; 0] u,

whose controllability matrix is

    C = [1, 2, 4; 0, 5, -5; 0, -5, 5],

7 with rank(c) = 2 < 3. Defines V as V = 1 2 0 5 0 5 0 0, 1 and note that the requirement for choosing V are satisfied. Finally, verify that the transformed system matrices are V 1 AV = 0 6 1 1 1 0, V 1 B = 0 0 0 1 0. 0 Verify that the top-left subsystem is controllable. From this structure, we see that the twodimensional controllable system is grouped into the first two state variables and that the third state variable has neither an input signal nor a coupling from the other subsystem. It is therefore not controllable. B. Decomposing observable modes Now we can easily follow the same drill and come up with another transformation, x k = W x k, where W contains n o linearly independent rows of O and the remaining rows are chosen to be linearly independent of the first n o rows (and linearly independent among themselves). Finally, the transformed system, x k+1 = Ax k + Bu k, = A o 0 x k + y k = [ A oo C o 0 A o B o B o u k, (24) ] x k (25) is separated into an observable subspace and an unobservable subspace. In fact, we can now write this result as a theorem. Theorem 1. Consider the DT LTI system, x k+1 = Ax k, y k = Cx k,

where A ∈ R^{n×n} and C ∈ R^{p×n}. Let O be the observability matrix as defined in Eq. (6). If rank(O) = n_o < n, then there exists a non-singular matrix, W ∈ R^{n×n}, such that

    W^{-1} A W = [A_o, 0; A_ōo, A_ō],    C W = [C_o, 0],

where A_o ∈ R^{n_o × n_o}, C_o ∈ R^{p × n_o}, and the pair (A_o, C_o) is observable.

By carefully applying a sequence of such controllability and observability transformations, the complete Kalman decomposition results in a system of the following form:

    [x_co(k+1); x_cō(k+1); x_c̄o(k+1); x_c̄ō(k+1)]
        = [A_co, 0,    A_13, 0;
           A_21, A_cō, A_23, A_24;
           0,    0,    A_c̄o, 0;
           0,    0,    A_43, A_c̄ō] [x_co(k); x_cō(k); x_c̄o(k); x_c̄ō(k)] + [B_co; B_cō; 0; 0] u(k),    (26)

    y(k) = [C_co, 0, C_c̄o, 0] [x_co(k); x_cō(k); x_c̄o(k); x_c̄ō(k)].    (27)

C. Kalman decomposition theorem

Theorem 2. Given a DT (or CT) LTI system that is neither controllable nor observable, there exists a non-singular transformation, x = T x̄, such that the system dynamics can be transformed into the structure given by Eqs. (26)-(27).

Proof: The proof is left as an exercise and constitutes Homework 1.
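As a numerical illustration of the four-block structure in Eqs. (26)-(27), the dimensions of the four groups of modes can be computed from the controllable subspace R(C) and the unobservable subspace N(O), using dim(X ∩ Y) = dim X + dim Y - dim(X + Y). The diagonal system below is my own toy example (one mode per category), not from the lecture:

```python
import numpy as np

# Toy system: mode 1 is (c, o), mode 2 is (c, obar),
# mode 3 is (cbar, o), mode 4 is (cbar, obar).
A = np.diag([1.0, 2.0, 3.0, 4.0])
B = np.array([[1.0], [1.0], [0.0], [0.0]])   # input reaches modes 1 and 2
C = np.array([[1.0, 0.0, 1.0, 0.0]])         # output sees modes 1 and 3
n = A.shape[0]

Ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
Obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# Basis of the unobservable subspace N(Obsv) from the SVD.
_, s, Vt = np.linalg.svd(Obsv)
rank_O = int(np.sum(s > 1e-10))
N_unobs = Vt[rank_O:].T                      # columns span N(Obsv)

# dim(X ∩ Y) = dim X + dim Y - dim(X + Y) for subspaces X, Y.
dim_c = int(np.linalg.matrix_rank(Ctrb))
dim_unobs = n - rank_O
dim_join = int(np.linalg.matrix_rank(np.hstack([Ctrb, N_unobs])))
dim_c_unobs = dim_c + dim_unobs - dim_join   # controllable AND unobservable

dim_co = dim_c - dim_c_unobs                 # controllable and observable
dim_cbar_o = rank_O - dim_co                 # observable, not controllable
dim_cbar_obar = dim_unobs - dim_c_unobs      # neither
```

For this system each of the four blocks is one-dimensional, matching the construction of B and C above.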

APPENDIX A
SIMILARITY TRANSFORMS

Consider any invertible matrix, V ∈ R^{n×n}, and a state transformation, x_k = V z_k. The transformed DT-LTI system is given by

    x_{k+1} = A x_k + B u_k,
    V z_{k+1} = A V z_k + B u_k,
    z_{k+1} = (V^{-1} A V) z_k + (V^{-1} B) u_k = Ā z_k + B̄ u_k,
    y_k = (C V) z_k = C̄ z_k.

The new (transformed) system is now defined by the system matrices, Ā, B̄, and C̄. The two systems, in x_k and z_k, are called similar systems.

Remark 4. Each of the following can be proved.

- Given any DT-LTI system, there are infinitely many possible transformations.
- Similar systems have the same eigenvalues; the eigenvectors of the corresponding system matrices are different.
- Similar systems have the same transfer function; in other words, the transfer function of the DT-LTI dynamics is unique, with infinitely many possible state-space realizations.
- The ranks of the controllability and observability matrices are invariant under a similarity transformation.

APPENDIX B
TRIVIA: NULL SPACES

Consider a set of m vectors, x_i ∈ R^n, i = 1, ..., m.

Definition 2 (Span). The span of the m vectors, x_i ∈ R^n, i = 1, ..., m, is defined as the set of all vectors that can be written as a linear combination of the {x_i}'s, i.e.,

    span(x_1, ..., x_m) = { y : y = Σ_{i=1}^{m} α_i x_i },    (28)

for α_i ∈ R.
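A practical way to test membership in a span, consistent with Definition 2, is to solve the least-squares problem X a ≈ y, where X stacks the x_i as columns, and check whether the residual vanishes. The vectors below are my own example:

```python
import numpy as np

# y is in span(x_1, ..., x_m) iff the least-squares residual of X a = y
# is zero, where X = [x_1 ... x_m] stacks the vectors as columns.
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
X = np.column_stack([x1, x2])

def in_span(X, y, tol=1e-10):
    """Return True iff y is a linear combination of the columns of X."""
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return bool(np.linalg.norm(X @ a - y) < tol)

y_in = 2 * x1 - x2                  # in the span by construction
y_out = np.array([0.0, 0.0, 1.0])   # [0, 0, 1] is not a combination of x1, x2
```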

Definition 3 (Null space). The null space, N(A) ⊆ R^n, of an n × n matrix, A, is defined as the set of all vectors, y, such that

    A y = 0,    (29)

for y ∈ N(A).

Consider a real-valued matrix, A ∈ R^{n×n}, and let x_1 ≠ 0 and x_2 ≠ 0 be a maximal set of linearly independent vectors² such that

    A x_1 = 0,    A x_2 = 0,    (30)

which leads to

    A (Σ_{i=1}^{2} α_i x_i) = Σ_{i=1}^{2} α_i A x_i = 0.    (31)

Following the definitions of span and null space, we can say that span(x_1, x_2) lies in the null space of A and, by the maximality of {x_1, x_2}, that

    N(A) = span(x_1, x_2).    (32)

In this particular case, the dimension of N(A) is 2, and³ rank(A) = n - 2.

²In other words, there does not exist any other vector, x_3, that is linearly independent of x_1 and x_2 such that A x_3 = 0.
³This further leads to an alternate definition of rank as n - n_c, where n_c = dim(N(A)).
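Numerically, a basis for N(A) can be read off from the right singular vectors of A, and the rank-nullity relation in footnote 3 can be checked directly. The matrix below is an illustrative choice of mine with a one-dimensional null space:

```python
import numpy as np

# dim N(A) + rank(A) = n; N(A) is spanned by the right singular vectors
# belonging to (numerically) zero singular values.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
n = A.shape[0]

_, s, Vt = np.linalg.svd(A)
rank_A = int(np.sum(s > 1e-10))
null_basis = Vt[rank_A:].T          # columns span N(A)

dim_null = null_basis.shape[1]
residual = float(np.linalg.norm(A @ null_basis))   # should be ~0
```

Here the second row is twice the first, so rank(A) = 2 and N(A) is spanned by a single vector (proportional to [1, -2, 1]).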

Let us recall the uncontrollable case with rank(C) = n_c < n. Clearly, C has n_c linearly independent (l.i.) columns (vectors in R^n); let the matrix V_1 ∈ R^{n × n_c} consist of these n_c l.i. columns. On the other hand, let the matrix Ṽ_1 ∈ R^{n × (n - n_c)} consist of the remaining columns of C, and note that every column of Ṽ_1 ∈ span(columns of V_1); or, span(columns of V_1) contains every column of Ṽ_1.

The above leads to the following:

Remark 5. If there exists a matrix, V̄_2, such that V̄_2 V_1 = 0, then

- every column of V_1 ∈ N(V̄_2),
- span(columns of V_1) ⊆ N(V̄_2),
- every column of Ṽ_1 ∈ N(V̄_2).

A. Kalman decomposition, revisited

Construct a transformation matrix, V ∈ R^{n×n}, whose first n × n_c block is V_1. Choose the remaining n - n_c columns to be linearly independent from the first n_c columns (and linearly independent among themselves); call the second n × (n - n_c) block V_2:

    V = [V_1 V_2],    V^{-1} = [V̄_1; V̄_2],
    I_n = V^{-1} V = [V̄_1; V̄_2] [V_1 V_2] = [V̄_1 V_1, V̄_1 V_2; V̄_2 V_1, V̄_2 V_2] = [I_{n_c}, 0; 0, I_{n-n_c}].

From the above, note that V̄_2 V_1 = 0. Now consider the transformed system matrices, Ā and B̄:

    Ā = V^{-1} A V = [V̄_1; V̄_2] A [V_1 V_2] = [V̄_1 A V_1, V̄_1 A V_2; V̄_2 A V_1, V̄_2 A V_2],
    B̄ = V^{-1} B = [V̄_1; V̄_2] B = [V̄_1 B; V̄_2 B].

Note that every column of A V_1 ∈ span(columns of V_1) ⊆ N(V̄_2), and B ∈ N(V̄_2); hence V̄_2 A V_1 = 0 and V̄_2 B = 0, recovering the structure of Eqs. (18)-(19).
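The whole construction can be checked numerically. The sketch below uses a three-state system with rank(C) = 2, in the spirit of Example 1 (the specific entries are my reconstruction, so treat them as illustrative); V_1 holds the n_c independent columns of the controllability matrix, and V2bar denotes the last n - n_c rows of V^{-1}:

```python
import numpy as np

# Build V = [V1 V2] from independent columns of the controllability matrix
# and verify Remark 5: V2bar A V1 = 0 and V2bar B = 0, where V2bar holds
# the last n - n_c rows of V^{-1}.
A = np.array([[2.0, 1.0, 1.0],
              [5.0, 3.0, 6.0],
              [-5.0, -1.0, -4.0]])
B = np.array([[1.0], [0.0], [0.0]])
n = A.shape[0]

Ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
n_c = int(np.linalg.matrix_rank(Ctrb))   # = 2 < 3: not controllable

V1 = Ctrb[:, :n_c]                       # first n_c columns are independent here
V2 = np.array([[0.0], [0.0], [1.0]])     # completes V to an invertible matrix
V = np.hstack([V1, V2])
Vinv = np.linalg.inv(V)
V2bar = Vinv[n_c:, :]

Abar = Vinv @ A @ V                      # block form [[A_c, A_ccbar], [0, A_cbar]]
Bbar = Vinv @ B                          # block form [[B_c], [0]]
zero_AV1 = float(np.linalg.norm(V2bar @ A @ V1))
zero_B = float(np.linalg.norm(V2bar @ B))
```

The two norms come out (numerically) zero, confirming that the lower-left block of Abar and the lower block of Bbar vanish, exactly as Remark 5 predicts.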