
A-AE 567 Final Homework, Spring 2013. You will need Matlab and Simulink. Your work must be neat and easy to read. Clearly identify your answers in a box. You will lose points for poorly written work. You must work alone. The exam is due in a box outside my office, ARMS 321, by 12:00 pm Thursday May 2. NAME:

1 Problem

Consider the linear system

x(n+1) = A x(n)   and   y(n) = C x(n) + 2 v(n),

A = [ 0 1 0 ; 0 0 1 ; 0 0 0 ]   and   C = [ 1 0 0 ].

Assume that the initial condition x(0) = [ ξ_0  ξ_1  ξ_2 ]^tr is a mean zero Gaussian random vector satisfying E x(0) x(0)^tr = I, where I is the identity matrix. Moreover, {v(j)} is a mean zero, variance one, Gaussian white noise random process, which is independent of the initial condition x(0). Find the optimal estimate of the initial condition x(0) given {y(j)}_{j=0}^{10}, that is, find

ξ̂_k = E( ξ_k | {y(j)}_{j=0}^{10} ) = P_M ξ_k   (for k = 0, 1, 2),

where M = span{y(j)}_{j=0}^{10}.

2 Problem

Consider the joint density function f_{x,y}(x, y) determined by

f_{x,y}(x, y) = c x   if x^2 ≤ y ≤ 1 and 0 ≤ x ≤ 1,   and   f_{x,y}(x, y) = 0 otherwise,

where c is a constant. Find:

The constant c.

The conditional expectation ĝ(y) = E(x | y = y).

Compute the error E(x − ĝ(y))^2.

Find a polynomial p(y) = α + β y + γ y^2 of degree at most two which is the best approximation of x, that is,

E(x − p(y))^2 = inf{ E(x − q(y))^2 : q is a polynomial of degree at most 2 }.

Compute the error E(x − p(y))^2.

3 Problem

Let θ and φ be two independent uniform random variables over the interval [0, 2π]. Consider the random process y(n) defined by

y(n) = 2 sin(πn + θ) + 2 cos(πn/2 + φ),

where n is an integer.

(i) Show that R_y(n) = E y(n+m) y(m) is just a function of n, where n and m are integers, that is, compute R_y(n) = E y(n+m) y(m). In particular, this implies that E y(n) y(m) = R_y(n − m).

(ii) Consider the following prediction problem going back two steps:

σ_2^2 = min{ E| y(n) − α_1 y(n−1) − α_2 y(n−2) |^2 : α_1 and α_2 in R }.

The minimum is taken over all constants α_1 and α_2. Find the optimal constants α_1 and α_2 and the error σ_2 such that

σ_2^2 = E| y(n) − α_1 y(n−1) − α_2 y(n−2) |^2.

(iii) Consider the following prediction problem going back three steps:

σ_3^2 = min{ E| y(n) − α_1 y(n−1) − α_2 y(n−2) − α_3 y(n−3) |^2 : α_j in R }.

The minimum is taken over all constants {α_j}_{j=1}^{3}. Find the optimal constants α_1, α_2 and α_3 and the error σ_3 such that

σ_3^2 = E| y(n) − α_1 y(n−1) − α_2 y(n−2) − α_3 y(n−3) |^2.

What is your guess for the error σ_ν in the following prediction problem

σ_ν^2 = min{ E| y(n) − Σ_{j=1}^{ν} α_j y(n−j) |^2 }

when ν ≥ 4?

4 Problem

Consider the discrete time system

[ x_1(n+1) ; x_2(n+1) ] = [ e^{-n/5}  1 ; 1/2  (1/2) cos(n/5) ] [ x_1(n) ; x_2(n) ] + [ e^{-n/5} cos(n/5) ; 1 ] u(n),

y(n) = [ 1 + e^{-n/5}   2 + sin(n/5) ] [ x_1(n) ; x_2(n) ] + ( 1 + (1/2) sin(n/5) ) v(n),

where u and v are independent Gaussian white noise processes. The initial condition is x(0) = 0. To generate u and v in Matlab, set

rng(1); u = randn(1,20); rng(2); v = randn(1,20);

Let M_n = span{y(j)}_{j=0}^{n}. Find the following:

(i) P_{M_{n-1}} x_1(n) for n = 8, 9, 10.

(ii) P_{M_n} x_2(n) for n = 8, 9, 10.

(Note the indices on the state x_1(n) in Part (i), and x_2(n) in Part (ii).) Be careful: Matlab does not have a zero index. So, for example, in Matlab

A(0) = A{1} = [ 1  1 ; 1/2  1/2 ]   and   A(1) = A{2} = [ e^{-1/5}  1 ; 1/2  (1/2) cos(1/5) ],   etc.
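A minimal Matlab sketch of this one-based indexing convention, assuming the time-varying coefficients as written above (the cell-array names A, B, C, D are illustrative, not part of the assignment):

N = 20;                                   % number of time steps to prepare
A = cell(N,1); B = cell(N,1); C = cell(N,1); D = cell(N,1);
for k = 1:N
    n = k - 1;                            % Matlab index k stores time n = k - 1
    A{k} = [exp(-n/5), 1; 1/2, cos(n/5)/2];
    B{k} = [exp(-n/5)*cos(n/5); 1];
    C{k} = [1 + exp(-n/5), 2 + sin(n/5)];
    D{k} = 1 + sin(n/5)/2;
end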

5 Problem

Let x be a mean zero, variance one Gaussian random variable, and let {v_n} be a mean zero, variance one Gaussian white noise process which is independent of x. Consider the discrete time random process y_n defined by y_n = x + v_n where n is a positive integer.

(i) Find the best estimate of x given {y_j}_{j=0}^{n-1}, that is, find

x̂_n = E( x | y_0, y_1, ..., y_{n-1} ) = P_{M_{n-1}} x   where M_{n-1} = span{y_j}_{j=0}^{n-1}.

(ii) Find the error σ_n in your estimate, that is, σ_n^2 = E( x − x̂_n )^2.

6 Problem

Consider the second order state space system ẋ = Ax and y = Cx, where

A = [ 4  15 ; 2  6 ]   and   C = [ 2  7 ].

(i) Find the error σ, and the set S of all initial conditions ξ, that solve the following optimization problem:

σ^2 = ∫_0^∞ 25 e^{-3t} ‖C e^{At} ξ‖^2 dt = inf{ ∫_0^∞ 25 e^{-3t} ‖C e^{At} x‖^2 dt : x in C^2 }.

(ii) Find the initial condition x_o in S which solves this optimization problem and satisfies ‖x_o‖ ≤ ‖ξ‖ for all ξ in S. Is this x_o unique?
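For reference, weighted integrals of this form reduce to a Lyapunov equation: since 25 e^{-3t} ‖C e^{At} x‖^2 = ‖5 C e^{(A - 1.5 I)t} x‖^2, the cost equals x^tr W x, where W solves the Lyapunov equation below. A hedged Matlab sketch with illustrative data (not the matrices of this problem), valid whenever A - 1.5 I is stable so the integral converges:

A = [0 1; -2 -3]; C = [1 0];              % illustrative stable example only
As = A - 1.5*eye(2);                      % the shift absorbs the exp(-3t) weight
W  = lyap(As', 25*(C'*C));                % solves As'*W + W*As + 25*C'*C = 0
% then integral_0^inf 25*exp(-3*t)*norm(C*expm(A*t)*x)^2 dt = x'*W*x for any x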

7 Problem: The inverted pendulum problem

Part of this problem is taken from the web site Control tutorials for Matlab, http://www.engin.umich.edu/group/ctm/index.html. Consider the classical inverted pendulum problem, consisting of a cart of mass m_1 with an inverted pendulum with inertia J on the cart. The linearized equations of motion in state space form are given by

d/dt [ x ; ẋ ; φ ; φ̇ ] = [ 0  1  0  0 ; 0  -(J + m_2 l^2)ζ/q  m_2^2 g l^2/q  0 ; 0  0  0  1 ; 0  -m_2 l ζ/q  m_2 g l (m_1 + m_2)/q  0 ] [ x ; ẋ ; φ ; φ̇ ] + [ 0 ; (J + m_2 l^2)/q ; 0 ; m_2 l/q ] w + [ 0  0 ; 1/4  0 ; 0  0 ; 0  1 ] [ u_1 ; u_2 ],

y = [ 1  0  0  0 ; 0  0  1  0 ] [ x ; ẋ ; φ ; φ̇ ] + [ (1/4) v_1 ; (1/2) v_2 ],

q = J(m_1 + m_2) + m_1 m_2 l^2.

Here x is the position of the cart and φ is the angle of the inverted pendulum from the vertical position. The control force on the cart is denoted by w. Moreover, u_1, u_2, v_1 and v_2 are all independent white noise processes. Notice that u_1 is a random force on the cart and u_2 is a random torque on the pendulum. The notation is

m_1 = 0.5 kg is the mass of the cart
m_2 = 0.2 kg is the mass of the pendulum
ζ = 0.1 N/m/sec is the friction of the cart
l = 0.3 m is the distance to the pendulum's center of mass

J = 0.006 kg m^2 is the inertia of the pendulum
w is the force applied to the cart
x is the position of the cart
φ is the angle in radians of the pendulum from the vertical position.

Assume that all the initial conditions are zero. Design a feedback controller w = -K x̂ based on the steady state Kalman filter such that the cart moves no more than one meter from the origin (|x| ≤ 1) and the angle moves no more than ±2 degrees from the vertical position (|φ| ≤ 0.035). Your state feedback gain K = [k_1 k_2 k_3 k_4] must satisfy |k_j| ≤ 30 for j = 1, 2, 3, 4. Simulate your controller in Simulink for 30 seconds. Hand in the graphs from your Simulink program for:

(i) The position x and the estimated position of the cart x̂ on the same graph.
(ii) The velocity ẋ and the estimated velocity of the cart on the same graph.
(iii) The angle φ and the estimated angle of the pendulum φ̂ on the same graph.
(iv) The angular velocity φ̇ and the estimated angular velocity of the pendulum on the same graph.
(v) Hand in your gain K.

On the band limited white noise generators, set both the noise power and sample time equal to 0.1. The seeds for u_1, u_2, v_1 and v_2 are, respectively, 23341, 23342, 23343 and 23344.
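As a quick numerical check of the coefficients appearing in the linearized model above (a sketch only; g = 9.8 m/s^2 is assumed, since the problem statement does not list it):

m1 = 0.5; m2 = 0.2; zeta = 0.1; l = 0.3; J = 0.006; g = 9.8;
q   = J*(m1 + m2) + m1*m2*l^2;            % q = J(m1 + m2) + m1*m2*l^2
a22 = -(J + m2*l^2)*zeta/q;               % damping term in the cart acceleration row
a23 =  m2^2*g*l^2/q;                      % angle coupling into the cart acceleration row
a42 = -m2*l*zeta/q;                       % cart velocity coupling into the angular acceleration row
a43 =  m2*g*l*(m1 + m2)/q;                % pendulum term in the angular acceleration row
b2  = (J + m2*l^2)/q;  b4 = m2*l/q;       % coefficients multiplying the control force w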

8 A review of steady state Kalman filtering

Let us review the continuous time steady state Kalman filter. Throughout we assume that the system is time invariant, that is,

ẋ = Ax + Bu   and   y = Cx + Dv   (8.1)

where A is an operator on X and B maps U into X, while C maps X into Y and D maps V into Y. The operators {A, B, C, D} are all fixed and do not depend upon time t. We also assume that the operator D is onto Y, or equivalently, DD* is invertible. Let Q(t) be the solution to the Riccati differential equation

Q̇ = AQ + QA* + BB* − QC*(DD*)^{-1}CQ   subject to Q(0) = 0.   (8.2)

The following is a classical result in linear quadratic regulator theory and Riccati differential equations.

THEOREM 8.1 Consider the linear time invariant system {A, B, C, D} where D is onto. Then there exists a solution Q(t) to the Riccati differential equation in (8.2) for all t ≥ 0. In other words, the Riccati differential equation in (8.2) does not have a finite escape time. Moreover, the following holds.

(i) The solution {Q(t) : t ≥ 0} forms an increasing family of positive operators, that is, Q(σ) ≤ Q(t) for all σ ≤ t.

(ii) If the pair {C, A} is observable, then Q(t) converges to a positive operator P as t tends to infinity, that is,

P = lim_{t→∞} Q(t).   (8.3)

In this case, P is a positive solution of the algebraic Riccati equation

0 = AP + PA* + BB* − PC*(DD*)^{-1}CP.   (8.4)

(iii) If {A, B, C, D} is controllable and observable, then P is strictly positive.

(iv) If {A, B, C, D} is controllable and observable, then P is the only strictly positive solution to the algebraic Riccati equation (8.4). To be precise, if Ξ is a strictly positive solution to (8.4), then Ξ = P = lim_{t→∞} Q(t).

(v) If {A, B, C, D} is controllable and observable, then A − PC*(DD*)^{-1}C is a continuous time stable operator, that is, all the eigenvalues of A − PC*(DD*)^{-1}C are contained in the open left half plane.

8.1 The steady state Kalman filter with control

Assume that {A, B, C, D} is a controllable and observable system. Let P be the unique positive solution to the algebraic Riccati equation in (8.4). By passing limits in the Kalman filter, we arrive at the steady state Kalman filter defined by

dx̂/dt = (A − LC)x̂ + Ly   where   L = PC*(DD*)^{-1}.   (8.5)

Theorem 8.1 guarantees that A − LC is a continuous time stable operator. The operator L is called the Kalman gain, and (8.5) the Kalman observer. The steady state Kalman filter provides an estimate x̂(t) for the state x(t) given the past {y(σ) : σ < t}. The Kalman filter converges to the steady state Kalman filter; in other words, as t tends to infinity the optimal state estimator converges to the steady state Kalman filter.

Assume that {A, B, C, D} is a controllable and observable system. Consider the Kalman filtering problem with a control added to the state equation, that is,

ẋ = Ax + Bu + B_1 w   and   y = Cx + Dv.   (8.6)

Here w is a known input or control with values in W, and B_1 is an operator mapping W into X. As before, u and v are independent white noise processes, which are also independent of the initial condition x(0). In this case, the steady state Kalman filter

is given by

dx̂/dt = (A − LC)x̂ + Ly + B_1 w   where   L = PC*(DD*)^{-1}.   (8.7)

As expected, P is the unique positive solution to the algebraic Riccati equation (8.4) in Theorem 8.1. Finally, it is noted that (8.7) reduces to the steady state Kalman filter in (8.5) when B_1 = 0.

Figure 1: The Simulink block diagram for w(t) = sin(t). (The diagram consists of the state space plant, a state space block implementing the Kalman filter, two band limited white noise sources, a sine wave source, a matrix gain, and To Workspace blocks for y, x_1 and x_2.)
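For a concrete model, the steady state gain L in (8.5) and (8.7) comes from the algebraic Riccati equation (8.4); the Matlab sketch below uses the are command in the same way it is used later in these notes, and adds a crude Euler iteration only to illustrate the convergence Q(t) → P asserted in Theorem 8.1 (the data are those of the example in Section 9):

a = [0 1; -2 -3]; b = [0; 1]; c = [1 -1]; d = 1;   % data of the example (9.1) below
p = are(a', c'*c/(d^2), b*b');            % solves 0 = a*p + p*a' + b*b' - p*c'*(d*d')^(-1)*c*p
L = p*c'/(d^2);                           % steady state Kalman gain, equation (8.5)
q = zeros(2); h = 0.01;                   % Euler integration of the Riccati equation (8.2)
for k = 1:5000
    q = q + h*(a*q + q*a' + b*b' - q*c'*c*q/(d^2));
end
% norm(q - p) should be small, and eig(a - L*c) should lie in the open left half plane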

Figure 2: The plots of y, x_1, x̂_1, x_2 and x̂_2: (a) the output y fed into the Kalman filter; (b) the states x_1 and x̂_1; (c) the states x_2 and x̂_2.

9 An example of a sinusoid input

Consider the stable state space system given by

[ ẋ_1 ; ẋ_2 ] = [ 0  1 ; -2  -3 ] [ x_1 ; x_2 ] + [ 0 ; 1 ] u + [ 1 ; 2 ] sin(t),
y = x_1 − x_2 + v.   (9.1)
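The numbers reported in the next paragraph can be reproduced with a short Matlab script (a sketch; the last lines are only sanity checks on the filter):

a = [0 1; -2 -3]; b = [0; 1]; b1 = [1; 2]; c = [1 -1]; d = 1;   % the system (9.1)
p = are(a', c'*c/(d^2), b*b');            % algebraic Riccati equation (8.4)
L = p*c'/(d^2);                           % Kalman gain; should match the value reported below
eig(a - L*c)                              % both eigenvalues should have negative real part
% the steady state Kalman filter (8.7) is then d/dt xhat = (a - L*c)*xhat + L*y + b1*sin(t)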

Notice that {−1, −2} are the eigenvalues for the 2 × 2 state space matrix in (9.1). As expected, u and v are independent white noise processes. For simplicity we have assumed that all the initial conditions are zero. The known input is w = sin(t). Using

P = are(A', C'*C/(d^2), B*B')

in Matlab with d = 1, B = [0 1]^tr and C = [1 −1] (the transpose of a matrix is denoted by tr), we arrived at

P = [ 0.0818  0.0031 ; 0.0031  0.1605 ]   and   L = [ 0.0787 ; −0.1574 ].

The eigenvalues for P are {0.0817, 0.1606}. Since the eigenvalues for P are small, we expect the Kalman filter to perform rather well. The Simulink block diagram to implement this Kalman filter is given in Figure 1. In this Simulink program, we used 0.1 for both the noise power and sample time for our white noise generators. The seed for the band limited white noise generator 1 is 23342, while the seed for the band limited white noise generator 2 is 23341. The output y for the state space model (9.1) is given in Figure 2, Part (a). The graphs for x_1 and its state estimate x̂_1 are presented in Figure 2, Part (b). Finally, the graphs for x_2 and its corresponding state estimate x̂_2 are given in Figure 2, Part (c).

10 State feedback and the Kalman filter

As before, assume that {A, B, C, D} is a controllable and observable system. The state equations for the Kalman filtering problem with a control w are given by

ẋ = Ax + Bu + B_1 w   and   y = Cx + Dv.   (10.1)

Here w is a known input or control with values in W, and B_1 is an operator mapping W into X. As before, u and v are independent white noise processes. Recall that the

steady state Kalman filter is given by

dx̂/dt = (A − LC)x̂ + Ly + B_1 w,
L = PC*(DD*)^{-1},   (10.2)
0 = AP + PA* + BB* − PC*(DD*)^{-1}CP   where P > 0.

Now let us use the estimated state x̂ to design a state feedback controller K. To this end, compute a state feedback controller K mapping X into W such that A − B_1 K is stable. One can use pole placement or the linear quadratic regulator to determine a state feedback gain K satisfying a specified performance criterion. Since x̂ is the estimated state, the feedback control is given by w = −Kx̂. Substituting w = −Kx̂ into ẋ = Ax + Bu + B_1 w and (10.2) with y = Cx + Dv yields

d/dt [ x ; x̂ ] = [ A   −B_1 K ; LC   A − LC − B_1 K ] [ x ; x̂ ] + [ B   0 ; 0   LD ] [ u ; v ].   (10.3)

So the feedback system is precisely a state space system driven by the white noise process [u v]^tr.

Recall that two state space systems {A_2, B_2} and {A_3, B_3} are called similar if there exists an invertible transformation Ξ such that A_3 = Ξ A_2 Ξ^{-1} and B_3 = Ξ B_2. If {A_2, B_2} and {A_3, B_3} are similar, then A_2 and A_3 have the same eigenvalues. In particular, A_2 is stable if and only if A_3 is stable.

We claim that the state space system (10.3) is similar to the following state space system:

d/dt [ x ; x̃ ] = [ A − B_1 K   B_1 K ; 0   A − LC ] [ x ; x̃ ] + [ B   0 ; B   −LD ] [ u ; v ].   (10.4)

The error in estimation is denoted by x̃ = x − x̂. By construction A − B_1 K and A − LC are both stable. Since the state space matrix in (10.4) is upper triangular with A − B_1 K and A − LC on the diagonal, the state space system in (10.4) is stable. Because the two state space matrices in (10.3) and (10.4) are similar, the state space

operator

T = [ A   −B_1 K ; LC   A − LC − B_1 K ]   on   [ X ; X ]   (10.5)

in (10.3) must also be stable. In fact, the eigenvalues of T are precisely the eigenvalues of A − B_1 K union the eigenvalues of A − LC. To show that (10.3) and (10.4) are similar, consider the operator determined by

Ξ = [ I   0 ; I   −I ]   on   [ X ; X ].

Recall that x̃ = x − x̂ is the error in estimation. Now observe that

Ξ [ x ; x̂ ] = [ I   0 ; I   −I ] [ x ; x̂ ] = [ x ; x − x̂ ] = [ x ; x̃ ].

In other words, Ξ maps [x x̂]^tr into [x x̃]^tr. The operator Ξ is invertible. In fact, Ξ^{-1} = Ξ. To verify that (10.3) is similar to (10.4), notice that

Ξ T Ξ^{-1} = [ I  0 ; I  −I ] [ A  −B_1 K ; LC  A − LC − B_1 K ] [ I  0 ; I  −I ] = [ A − B_1 K   B_1 K ; 0   A − LC ],

Ξ [ B  0 ; 0  LD ] = [ I  0 ; I  −I ] [ B  0 ; 0  LD ] = [ B  0 ; B  −LD ].

This yields the state space system in (10.4) and proves our claim.

11 An example of state feedback control

Consider the state space system

[ ẋ_1 ; ẋ_2 ] = [ 0  1 ; -2  3 ] [ x_1 ; x_2 ] + [ 0 ; 1 ] u + [ 2 ; 1 ] w(t),
y = x_1 + x_2 + v.   (11.1)
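The gains and the closed loop matrix T reported in the next paragraph can be reproduced along the following lines (a Matlab sketch; the last line simply checks the eigenvalue claim made for T in Section 10):

a = [0 1; -2 3]; b = [0; 1]; b1 = [2; 1]; c = [1 1]; d = 1;     % the system (11.1)
p = are(a', c'*c/(d^2), b*b');            % filter Riccati equation (8.4)
L = p*c'/(d^2);                           % Kalman gain
k = lqr(a, b1, eye(2), 1/4);              % state feedback gain
T = [a, -b1*k; L*c, a - L*c - b1*k];      % closed loop matrix (10.5)
sort(eig(T)), sort([eig(a - b1*k); eig(a - L*c)])   % the two lists should agree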

The eigenvalues for the 2 × 2 state space matrix in (11.1) are {1, 2}. The state space system in (11.1) is unstable. Using P = are(A', C'*C/(d^2), B*B') in Matlab with d = 1, B = [0 1]^tr and C = [1 1], we arrived at

P = [ 0.5  0.5 ; 0.5  4.7361 ]   and   L = [ 1 ; 5.2361 ].

The eigenvalues for the matrix P are {0.4418, 4.7943}. By employing the linear quadratic regulator in Matlab, K = lqr(A, B_1, eye(2), 1/4) with B_1 = [2 1]^tr, we obtained

K = [ 17.1685   −24.201 ].

In this case, the matrix T in (10.5) is given by

T = [ A   −B_1 K ; LC   A − LC − B_1 K ] =
[ 0       1        −34.3371   48.4021 ;
 −2       3        −17.1685   24.201  ;
  1       1        −35.3371   48.4021 ;
  5.2361  5.2361   −24.4046   21.965  ].

The eigenvalues for T are {−1, −3.568 ± 0.484i, −2.2361}. As expected, T is stable. The Simulink block diagram to implement this state feedback control w = −Kx̂, with the Kalman filter used to estimate x̂, is given in Figure 3. In this Simulink program, we used 0.1 for both the noise power and sample time for our white noise generators. Moreover, the seed for the band limited white noise generator 1 is 23342, while the seed for the band limited white noise generator 2 is 23341. The output y for the state space model (11.1) is given in Figure 4, Part (a). Figure 4, Part (b) graphs x_1 and its state estimate x̂_1, while Figure 4, Part (c) graphs x_2 and its state estimate x̂_2. Finally, it is noted that our estimate x̂_1 for the state x_1 is very good, that is, the graphs for x_1 and x̂_1 are practically identical.
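For readers without Simulink, the closed loop of Figure 3 can be approximated directly in Matlab by an Euler discretization of (10.3) driven by sampled white noise (a rough sketch only; the step size, horizon and noise scaling are illustrative choices, not the settings of the original Simulink program):

a = [0 1; -2 3]; b = [0; 1]; b1 = [2; 1]; c = [1 1]; d = 1;
p = are(a', c'*c/(d^2), b*b'); L = p*c'/(d^2); k = lqr(a, b1, eye(2), 1/4);
T  = [a, -b1*k; L*c, a - L*c - b1*k];     % closed loop matrix from (10.3)
Bn = [b, zeros(2,1); zeros(2,1), L*d];    % noise input matrix from (10.3)
h = 0.01; N = 3000; z = zeros(4,1); X = zeros(4,N);
for j = 1:N
    z = z + h*(T*z + Bn*randn(2,1)/sqrt(h));   % 1/sqrt(h) mimics continuous white noise
    X(:,j) = z;
end
plot((1:N)*h, X(1,:), (1:N)*h, X(3,:))    % the state x_1 and its estimate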

Figure 3: The Simulink block diagram for w = −Kx̂. (The diagram consists of the state space plant, a state space block implementing the Kalman filter, two band limited white noise sources, a matrix gain, a feedback gain block, and To Workspace blocks for y, x_1 and x_2.)

Figure 4: The plots of y, x_1, x̂_1, x_2 and x̂_2: (a) the output y fed into the Kalman filter; (b) the states x_1 and x̂_1; (c) the states x_2 and x̂_2.

12 State feedback and the discrete Kalman filter

Assume that {A, B, C, D} is a controllable and observable discrete time system. The state equations for the discrete time Kalman filtering problem with a control w are given by

x(n+1) = Ax(n) + Bu(n) + B_1 w(n)   and   y(n) = Cx(n) + Dv(n).   (12.1)

Here w is a known input or control with values in W, and B_1 is an operator mapping W into X. As before, u and v are independent white noise processes. Recall that the steady state Kalman filter is given by

x̂(n+1) = (A − LC)x̂(n) + Ly(n) + B_1 w(n),
L = APC*(CPC* + DD*)^{-1},   (12.2)
P = APA* + BB* − APC*(CPC* + DD*)^{-1}CPA*   where P > 0.

Finally, A − LC is stable, that is, all the eigenvalues of A − LC are contained in the open unit disc.

Now let us use the estimated state x̂ to design a state feedback controller K. To this end, compute a state feedback controller K mapping X into W such that A − B_1 K is stable. One can use pole placement or the discrete time linear quadratic regulator to determine a state feedback gain K satisfying a specified performance criterion. Since x̂ is the steady state estimate, the feedback control is given by w = −Kx̂. Substituting w(n) = −Kx̂(n) into x(n+1) = Ax(n) + Bu(n) + B_1 w(n) and (12.2) with y = Cx + Dv yields

[ x(n+1) ; x̂(n+1) ] = [ A   −B_1 K ; LC   A − LC − B_1 K ] [ x(n) ; x̂(n) ] + [ B   0 ; 0   LD ] [ u(n) ; v(n) ].   (12.3)

So the feedback system is precisely a state space system driven by the white noise process [u v]^tr.

Recall that two state space systems {A_2, B_2} and {A_3, B_3} are called similar if there exists an invertible transformation Ξ such that A_3 = Ξ A_2 Ξ^{-1} and B_3 = Ξ B_2. If {A_2, B_2} and {A_3, B_3} are similar, then A_2 and A_3 have the same eigenvalues. In particular, A_2 is stable if and only if A_3 is stable.
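Before turning to the similarity argument, note that the discrete steady state gain in (12.2) can be computed with dare, mirroring the Matlab commands listed in Section 14 (a sketch using the data of the example (13.1) below):

a = [0 1; 2 3]; b = [0; 1]; c = [1 1]; d = 1;   % data of the example (13.1)
p = dare(a', c', b*b', d*d');             % discrete algebraic Riccati equation in (12.2)
L = a*p*c'*inv(c*p*c' + d*d');            % steady state Kalman gain
abs(eig(a - L*c))                         % all values should be less than one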

We claim that the state space system (12.3) is similar to the following state space system:

[ x(n+1) ; x̃(n+1) ] = [ A − B_1 K   B_1 K ; 0   A − LC ] [ x(n) ; x̃(n) ] + [ B   0 ; B   −LD ] [ u(n) ; v(n) ].   (12.4)

The error in estimation is denoted by x̃ = x − x̂. By construction A − B_1 K and A − LC are both stable. Since the state space matrix in (12.4) is upper triangular with A − B_1 K and A − LC on the diagonal, the state space system in (12.4) is stable. Because the two state space matrices in (12.3) and (12.4) are similar, the state space operator

T = [ A   −B_1 K ; LC   A − LC − B_1 K ]   on   [ X ; X ]   (12.5)

in (12.3) must also be stable. In fact, the eigenvalues of T are precisely the eigenvalues of A − B_1 K union the eigenvalues of A − LC. To show that (12.3) and (12.4) are similar, consider the operator determined by

Ξ = [ I   0 ; I   −I ]   on   [ X ; X ].

Recall that x̃ = x − x̂ is the error in estimation. Now observe that

Ξ [ x ; x̂ ] = [ I   0 ; I   −I ] [ x ; x̂ ] = [ x ; x − x̂ ] = [ x ; x̃ ].

In other words, Ξ maps [x x̂]^tr into [x x̃]^tr. The operator Ξ is invertible. In fact, Ξ^{-1} = Ξ. To verify that (12.3) is similar to (12.4), notice that

Ξ T Ξ^{-1} = [ I  0 ; I  −I ] [ A  −B_1 K ; LC  A − LC − B_1 K ] [ I  0 ; I  −I ] = [ A − B_1 K   B_1 K ; 0   A − LC ],

Ξ [ B  0 ; 0  LD ] = [ I  0 ; I  −I ] [ B  0 ; 0  LD ] = [ B  0 ; B  −LD ].

This yields the state space system in (12.4) and proves our claim.

13 An example of state feedback control

Consider the state space system

[ x_1(n+1) ; x_2(n+1) ] = [ 0  1 ; 2  3 ] [ x_1(n) ; x_2(n) ] + [ 0 ; 1 ] u(n) + [ 2 ; 1 ] w(n),
y(n) = x_1(n) + x_2(n) + v(n).   (13.1)

The eigenvalues for the 2 × 2 state space matrix in (13.1) are {−0.5616, 3.5616}. The discrete state space system in (13.1) is unstable. Using P = dare(A', C', B*B', d*d') in Matlab with d = 1, B = [0 1]^tr and C = [1 1], we arrived at

P = [ 0.649  2.118 ; 2.118  8.3052 ]   and   L = [ 0.7345 ; 2.5936 ].

The eigenvalues for the matrix P are {0.1021, 8.8521}. By employing the discrete time linear quadratic regulator in Matlab, K = dlqr(A, B_1, eye(2), 1/8) with B_1 = [2 1]^tr, we obtained

K = [ 0.7327   1.4145 ].

In this case, the matrix T in (12.5) is given by

T = [ A   −B_1 K ; LC   A − LC − B_1 K ] =
[ 0       1       −1.4653  −2.829  ;
  2       3       −0.7327  −1.4145 ;
  0.7345  0.7345  −2.1999  −2.5636 ;
  2.5936  2.5936  −1.3263  −1.0081 ].

The eigenvalues for T are {−0.5738, −0.0344, 0.1545, 0.2456}. As expected, T is stable. The Simulink block diagram to implement this state feedback control w = −Kx̂, with the steady state Kalman filter used to estimate x̂, is given in Figure 5. In this Simulink

program, we used 0.1 for the sample time of the white noise (random number) generators. Moreover, we set the seeds equal to 2 and 22. Furthermore, we presented a Simulink program based on the equations in (12.3). The output y for the state space model (13.1) is given in Figure 6, Part (a). Figure 6, Part (b) graphs x_1 and its steady state estimate x̂_1, while Figure 6, Part (c) graphs x_2 and its steady state estimate x̂_2. Finally, it is noted that our steady state estimate x̂_1 for the state x_1 is very good, that is, the graphs for x_1 and x̂_1 are practically identical.

Let M_n = span{y(j) : j ≤ n}. Recall that x̂(n) = P_{M_{n-1}} x(n) is the optimal state estimate given the past {y(j)}_{j=0}^{n-1}. Now consider the state estimate x̂(n|n) = P_{M_n} x(n). Clearly, x̂(n|n) is a better estimate of the state x(n) than x̂(n). According to the class notes, in steady state

x̂(n|n) ≈ ( I − PC*(CPC* + DD*)^{-1}C ) x̂(n) + PC*(CPC* + DD*)^{-1} y(n).

This steady state estimate of x̂(n|n) is also included in our Simulink program as the third variable going into the respective scopes. As expected, x̂(n|n) is a slightly better estimate of the state x(n) than x̂(n); see Figure 6.
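In Matlab the filtered estimate x̂(n|n) is one extra line once the dare solution is available (a sketch; xhat and yn are placeholders for x̂(n) and y(n)):

a = [0 1; 2 3]; b = [0; 1]; c = [1 1]; d = 1;
p = dare(a', c', b*b', d*d');
G = p*c'*inv(c*p*c' + d*d');              % gain applied to the current measurement
xhat = [0; 0]; yn = 0;                    % placeholders for x̂(n) and y(n)
xnn  = (eye(2) - G*c)*xhat + G*yn;        % x̂(n|n) in steady state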

Figure 5: The Simulink block diagram for w = −Kx̂. (The diagram consists of a discrete state space block, random number sources, gain blocks, and scopes for the output y and for the first and second states.)

Figure 6: The plots of y, x_1, x̂_1, x_2 and x̂_2: (a) the output y; (b) the state x_1, x̂_1 and x̂_1(n|n); (c) the state x_2, x̂_2 and x̂_2(n|n).

14 The Kalman filter and state feedback control

Assume that {A, B, C, D} is a controllable and observable discrete time system. The state equations for the discrete time Kalman filtering problem with a control w are given by

x(n+1) = Ax(n) + Bu(n) + B_1 w(n)   and   y(n) = Cx(n) + Dv(n).   (14.1)

Here w is a known input or control with values in W, and B_1 is an operator mapping W into X. As before, u and v are independent white noise processes. Recall that the Kalman filter is given by

x̂(n+1) = (A − L_n C)x̂(n) + L_n y(n) + B_1 w(n),
L_n = AQ_n C*(CQ_n C* + DD*)^{-1},   (14.2)
Q_{n+1} = AQ_n A* + BB* − AQ_n C*(CQ_n C* + DD*)^{-1} CQ_n A*.

The initial condition is Q_0 = E x(0) x(0)*. Finally, if Q_0 = 0, then Q_n is increasing (Q_n ≤ Q_{n+1} for all n), and Q_n converges to P, the unique positive solution to the algebraic Riccati equation in (12.2).

Now let us use the estimated state x̂ to design a state feedback controller K. To this end, compute a state feedback controller K mapping X into W such that A − B_1 K is stable. One can use pole placement or the discrete time linear quadratic regulator to determine a state feedback gain K satisfying a specified performance criterion. Since x̂ is the estimated state, the feedback control is given by w = −Kx̂. Substituting w(n) = −Kx̂(n) into x(n+1) = Ax(n) + Bu(n) + B_1 w(n) and (14.2) with y = Cx + Dv yields

[ x(n+1) ; x̂(n+1) ] = [ A   −B_1 K ; L_n C   A − L_n C − B_1 K ] [ x(n) ; x̂(n) ] + [ B   0 ; 0   L_n D ] [ u(n) ; v(n) ].   (14.3)

So the feedback system is precisely a state space system driven by the white noise process [u v]^tr. By repeating our previous arguments, with x̃ = x − x̂, we see that the system in (14.3) is similar to the following time varying state space system

[ x(n+1) ; x̃(n+1) ] = [ A − B_1 K   B_1 K ; 0   A − L_n C ] [ x(n) ; x̃(n) ] + [ B   0 ; B   −L_n D ] [ u(n) ; v(n) ].   (14.4)

The example in (13.1) revisited. Now let us apply the Kalman filter to the system in (13.1), that is,

[ x_1(n+1) ; x_2(n+1) ] = [ 0  1 ; 2  3 ] [ x_1(n) ; x_2(n) ] + [ 0 ; 1 ] u(n) + [ 2 ; 1 ] w(n),
y(n) = x_1(n) + x_2(n) + v(n).   (14.5)

The Matlab commands we used to simulate the state space system in (14.3) are given by

a = [0, 1; 2, 3]; b = [0; 1]; b1 = [2; 1]; c = [1, 1]; d = 1;
q{1} = zeros(2, 2);
for j = 1:100; q{j+1} = a*q{j}*a' + b*b' - a*q{j}*c'*inv(c*q{j}*c' + d*d')*c*q{j}*a'; end
for j = 1:100; L{j} = a*q{j}*c'*inv(c*q{j}*c' + d*d'); end
p = dare(a', c', b*b', d*d');
norm(p - q{101})
k = dlqr(a, b1, eye(2), 1/8)
for j = 1:100; t{j} = [a, -b1*k; L{j}*c, a - L{j}*c - b1*k]; B{j} = [b, [0; 0]; [0; 0], L{j}*d]; end
for j = 1:100; u1{j} = randn(2, 1); end
x{1} = zeros(4, 1);
for j = 1:100; x{j+1} = t{j}*x{j} + B{j}*u1{j}; end

for j = 1:100; x1(j) = x{j}(1); x2(j) = x{j}(2); x3(j) = x{j}(3); x4(j) = x{j}(4); end

Next we compute the steady state response in (12.3).

Ls = a*p*c'*inv(c*p*c' + d*d');
ts = [a, -b1*k; Ls*c, a - Ls*c - b1*k];
Bs = [b, [0; 0]; [0; 0], Ls*d];
xs{1} = zeros(4, 1);
for j = 1:100; xs{j+1} = ts*xs{j} + Bs*u1{j}; end
for j = 1:100; xs1(j) = xs{j}(1); xs2(j) = xs{j}(2); xs3(j) = xs{j}(3); xs4(j) = xs{j}(4); end

The stairs command was used to generate the graphs. Notice that we computed the state x, which comes from the system using Q_n, and the steady state x_s using P. In this example, P ≈ Q_{10} where Q_0 = 0. Figure 7, Parts (a) and (b), compare x_1 with the steady state x_{1s}, and, respectively, x_2 with the steady state x_{2s}. Because Q_n yields a more accurate estimate of the state than the steady state gain, in the beginning x_1 is smaller than x_{1s} and x_2 is smaller than x_{2s}. Since the feedback controller is trying to make the state as small as possible, using Q_n yields a better result for about 10 steps. Then the system converges to the steady state response. Finally, Figure 7, Parts (c) and (d), compare x and x̂. The estimates are quite good.
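The plots in Figure 7 can then be generated with the stairs command, using the variables computed in the script above, for example (a sketch; the subplot layout is an illustrative choice):

subplot(2,2,1); stairs(x1); hold on; stairs(xs1); hold off; title('The first state with Q_n and P');
subplot(2,2,2); stairs(x2); hold on; stairs(xs2); hold off; title('The second state with Q_n and P');
subplot(2,2,3); stairs(x1); hold on; stairs(x3); hold off; title('The first state and its estimate');
subplot(2,2,4); stairs(x2); hold on; stairs(x4); hold off; title('The second state and its estimate');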

Figure 7: (a) The first state with Q_n and P: the states x_1 and x_{1s}. (b) The second state with Q_n and P: the states x_2 and x_{2s}. (c) The first state x_1 and its estimate x̂_1. (d) The second state x_2 and its estimate x̂_2.