Steady State Kalman Filter
Infinite Horizon LQ Control:

ẋ = Ax + Bu

with R positive definite, Q = Q^T ≥ 0 (with square root Q^{1/2}), (A, B) stabilizable and (A, Q^{1/2}) detectable. Solve for the positive (semi-)definite P in the ARE:

A^T P + P A − P B R^{-1} B^T P + Q = 0.

Optimal feedback gain: u = −Kx with K = R^{-1} B^T P, and A − BK is stable.

Steady State Kalman Filter:

x̂˙ = A x̂ + Bu + L(y − C x̂)

Choose L so that A − LC (equivalently A^T − C^T L^T) is stable. So solve the LQ control problem for the dual system:

ẋ = A^T x + C^T u (60)

M.E. University of Minnesota 276
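The LQ control side above can be checked numerically. Below is a minimal sketch (assuming NumPy and SciPy are available; the double-integrator plant and the weights Q, R are illustrative choices, not from the notes) that solves the ARE and verifies that A − BK is stable.

```python
# Infinite-horizon LQ design via the algebraic Riccati equation (ARE).
# Plant and weights are illustrative, not from the notes.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # double integrator: position, velocity
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                # Q = Q^T >= 0
R = np.array([[1.0]])        # R > 0

# Solve A^T P + P A - P B R^{-1} B^T P + Q = 0 for P >= 0
P = solve_continuous_are(A, B, Q, R)

# Optimal feedback gain u = -K x with K = R^{-1} B^T P
K = np.linalg.solve(R, B.T @ P)

# A - B K must be Hurwitz (all eigenvalues in the open left half plane)
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(closed_loop_eigs.real)
```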
Transform the Algebraic Riccati Equation with the substitutions C^T → B, A^T → A:

A P + P A^T − P C^T R^{-1} C P + Q = 0.

The stabilizing feedback gain K for (60) is:

K = L^T = R^{-1} C P, i.e. L = P C^T R^{-1}.

This generates the observer:

x̂˙ = A x̂ + Bu + P C^T R^{-1} (y − C x̂)

with observer error dynamics:

x̃˙ = (A − P C^T R^{-1} C) x̃

which will be stable as long as:
1. (A^T, C^T) is stabilizable. This is the same as (A, C) being detectable.
2. (A^T, Q^{1/2}) is detectable. This is the same as (A, Q^{1/2}) being stabilizable.
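The duality argument above can be sketched numerically: calling a control-type ARE solver on the dual pair (A^T, C^T) yields P, and L = P C^T R^{-1} stabilizes the error dynamics. A minimal sketch, again assuming NumPy/SciPy; the second-order system and weights are illustrative.

```python
# Observer gain L = P C^T R^{-1} obtained by solving the control-type ARE
# for the dual pair (A^T, C^T). The numbers are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])   # (A, C) observable, hence detectable
Q = np.eye(2)                # process-noise weight
R = np.array([[0.1]])        # measurement-noise weight

# Control ARE for the dual system xdot = A^T x + C^T u gives
# A P + P A^T - P C^T R^{-1} C P + Q = 0
P = solve_continuous_are(A.T, C.T, Q, R)

# Observer gain; error dynamics xtilde_dot = (A - L C) xtilde
L = P @ C.T @ np.linalg.inv(R)
print(np.linalg.eigvals(A - L @ C).real)
```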
Questions we need to ask are: In what way is this observer optimal? How does finite time LQ extend to observer design?
Deterministic Kalman Filter

Problem statement: Suppose we are given the output y(τ) of a process over the interval [t_0, t_f]. It is anticipated that y(τ) is generated from a process of the form:

ẋ = A(t)x + B(t)u
y = C(t)x

To account for model uncertainty and measurement noise, the process model used is modified to:

ẋ(t) = A(t)x(t) + B(t)u + B_w(t)w
y = C(t)x(t) + v (61)

The state x̂(t|t_f) is the estimate of the state of the actual process at time t, given all the measurements up to time t_f. x̂˙(t|t_f) means the derivative w.r.t. t. The final time t_f is assumed to be fixed. In (61), w and v are the unknown process and measurement noise.
The initial state x̂(t_0|t_f) is unknown. The deterministic Kalman filter problem is to choose the initial state x̂(t_0|t_f) and the process and measurement noises w(τ|t_f), v(τ|t_f) for τ ∈ [t_0, t_f], so that:

1. The measured output is explained, i.e.

y(τ) = C(τ) x̂(τ|t_f) + v(τ|t_f) for all τ ∈ [t_0, t_f].

2. The following objective function is minimized:

J(y(·), x̂(t_0|t_f), w(·|t_f)) = (1/2) x̂^T(t_0|t_f) S^{-1} x̂(t_0|t_f) + (1/2) ∫_{t_0}^{t_f} [ w^T(τ|t_f) Q^{-1}(τ) w(τ|t_f) + v^T(τ|t_f) R^{-1}(τ) v(τ|t_f) ] dτ

where v(τ|t_f) = y(τ) − C(τ) x̂(τ|t_f).
Remarks:
- The goal is thus to use as little noise or initial condition as possible to explain the measured output y(τ).
- Set R^{-1} large when the measurement is accurate, i.e. assume little measurement noise v.
- Set Q^{-1} large when the process model is accurate, i.e. assume little process noise w.
- Set S^{-1} large when x̂(t_0|t_f) = 0 is a confident guess of the initial state.
- If a non-zero estimate (say x_0) for the initial state is desired, it is possible to modify the cost function so that (1/2) x̂^T(t_0|t_f) S^{-1} x̂(t_0|t_f) becomes (1/2) (x̂(t_0|t_f) − x_0)^T S^{-1} (x̂(t_0|t_f) − x_0). It is clear that the optimal estimate at t_0 given no measurements is x̂(t_0|t_0) = x_0.
In the observer problem, we are interested in x̂(t|t) where t is the current time. Thus, we need to solve for x̂(t_f|t_f) and then determine how this varies with t_f. Conceptually, we need to re-solve the problem each time t_f changes; in practice, this will turn out not to be necessary.

The filtering problem is to solve for x̂(t|t_f) where t < t_f. The prediction problem is to solve for x̂(t|t_f) where t > t_f.

When solving the problem, we drop the B(t)u term in ẋ = A(t)x + B(t)u for convenience. It can be carried through.
Optimal filtering / observer problem

The free variables in the optimal filtering / observer problem are x̂(t_0|t_f), the estimate of the initial state, and the process noise w(τ). Notice that once these have been chosen, v(τ|t_f) = y(τ) − C(τ) x̂(τ|t_f) is fixed.

Let the final time t_f be fixed. We omit the final time argument t_f and use x̂(τ) to denote x̂(τ|t_f), etc. The constraint of the system is:

x̂˙(τ) = A x̂(τ) + w(τ), τ ∈ [t_0, t_f].

We can convert this constrained optimal control problem into an unconstrained optimization problem using the Lagrange multiplier λ(τ) ∈ R^n. The cost function for the unconstrained optimization
problem is:

J̄ = (1/2) x̂^T(t_0) S^{-1} x̂(t_0) + (1/2) ∫_{t_0}^{t_f} [ w^T Q^{-1}(τ) w + v^T R^{-1}(τ) v + 2 λ^T(τ) ( x̂˙ − (A(τ) x̂ + w) ) ] dτ

Using integration by parts and considering variations in x̂(τ), w(τ), x̂(t_f) and x̂(t_0):

δJ̄ = ∫_{t_0}^{t_f} { [ w^T(τ) Q^{-1}(τ) − λ^T(τ) ] δw(τ) + [ −v^T(τ) R^{-1}(τ) C(τ) − λ̇^T(τ) − λ^T(τ) A(τ) ] δx̂(τ) } dτ + λ^T(t_f) δx̂(t_f) + [ x̂^T(t_0) S^{-1} − λ^T(t_0) ] δx̂(t_0).

Since δx̂(τ), δw(τ), δx̂(t_0) and δx̂(t_f) can be arbitrary, as long as x̂˙ = A(τ) x̂ + w, the necessary conditions for optimality are:

δw(τ): w(τ) = Q(τ) λ(τ)
δx̂(τ): λ̇(τ) = −A^T(τ) λ(τ) − C^T(τ) R^{-1}(τ) [ y(τ) − C(τ) x̂(τ) ]
So the differential equations are:

[ x̂˙ ; λ̇ ] = [ A , Q ; C^T R^{-1} C , −A^T ] [ x̂ ; λ ] + [ 0 ; −C^T R^{-1} y(τ) ] (62)

and the boundary conditions are:

δx̂(t_f): λ(t_f) = 0
δx̂(t_0): λ(t_0) = S^{-1} x̂(t_0)

This is a two point boundary value problem. Notice that there is some similarity to the Hamilton-Jacobi-Bellman equation we saw for the LQ control problem. We resolve the two point boundary value problem using a technique similar to the one used for LQ. Assume that:

x̂(t) = P(t) λ(t) + g(t) (63)

Here g(t) is used to account for the exogenous term C^T(τ) R^{-1}(τ) y(τ). For (63) to be consistent with (62) and the
boundary conditions, we need:

Ṗ = A(τ) P + P A^T(τ) + Q(τ) − P C^T(τ) R^{-1}(τ) C(τ) P (64)
ġ = A(τ) g − P(τ) C^T(τ) R^{-1}(τ) [ C(τ) g − y(τ) ] (65)

with initial conditions g(t_0) = 0 and P(t_0) = S. The first differential equation is the Filtering Riccati Differential Equation, which reduces to the CTRDE for LQ control under the substitutions A → A^T, C → B^T and τ → (t_f − τ).

Notice that all boundary conditions are assigned at τ = t_0. Thus, (64)-(65) can be integrated forward in time. Substituting (63) into (62),

λ̇ = −(A^T − C^T R^{-1} C P) λ + C^T R^{-1} [ C g(t) − y ] (66)

with boundary condition λ(t_f) = 0. λ(t) can be integrated backwards in time, using P(t) and g(t) obtained from (64)-(65).
Once λ(t) is obtained, the optimal estimate of the state at τ = t is given by:

x̂(t|t_f) = P(t|t_f) λ(t|t_f) + g(t|t_f).

Here we re-inserted the argument t_f to emphasize that the measurement y(τ) is available on the interval [t_0, t_f].
Det. Kalman Filter: Summary

Forward pass differential equations:

Ṗ = A(τ) P + P A^T(τ) + Q(τ) − P C^T(τ) R^{-1}(τ) C(τ) P
ġ = A(τ) g − P(τ) C^T(τ) R^{-1}(τ) [ C(τ) g(τ) − y(τ) ]

Backward pass differential equation:

λ̇ = −(A^T − C^T R^{-1} C P) λ + C^T R^{-1} [ C(τ) g(τ) − y(τ) ].

Boundary conditions:

g(t_0) = 0, P(t_0) = S, λ(t_f) = 0.

State estimate:

x̂(t|t_f) = P(t|t_f) λ(t|t_f) + g(t|t_f).

P(t) and g(t) are integrated first (forward in time). We will see that g(t) is an estimate of x̂(t|t). The backward integration of λ updates the estimate of x̂(t|t) using future information.
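The two-pass procedure in the summary can be sketched numerically with forward-Euler integration on a scalar, time-invariant system. The constants a, c, q, r, s and the noise-free measurement record are illustrative choices, not from the notes.

```python
# Minimal numerical sketch of the forward/backward two-pass estimator.
# Scalar system; the measured output is generated noise-free, so the
# smoothed estimate should track the true state after the initial transient.
import numpy as np

a, c, q, r, s = -1.0, 1.0, 0.5, 0.1, 1.0
dt, T = 1e-3, 2.0
N = int(T / dt)
t = np.linspace(0.0, T, N + 1)

x_true = np.exp(a * t)       # true state, x(0) = 1
y = c * x_true               # measurements on [t0, tf]

# Forward pass: Pdot = 2aP + q - c^2 P^2 / r, gdot = a g - P c/r (c g - y)
P = np.empty(N + 1); g = np.empty(N + 1)
P[0], g[0] = s, 0.0
for k in range(N):
    P[k + 1] = P[k] + dt * (2 * a * P[k] + q - c**2 * P[k]**2 / r)
    g[k + 1] = g[k] + dt * (a * g[k] - P[k] * c / r * (c * g[k] - y[k]))

# Backward pass: lamdot = -(a - c^2 P / r) lam + c/r (c g - y), lam(tf) = 0
lam = np.empty(N + 1)
lam[N] = 0.0
for k in range(N, 0, -1):
    lamdot = -(a - c**2 * P[k] / r) * lam[k] + c / r * (c * g[k] - y[k])
    lam[k - 1] = lam[k] - dt * lamdot

# Smoothed estimate xhat(t | tf) = P(t) lam(t) + g(t)
xhat = P * lam + g
print(abs(xhat[N // 2] - x_true[N // 2]))
```

Because the measurement record is consistent with the model, the optimal w and v are driven toward zero and x̂(t|t_f) converges to x(t) once the initial-condition transient dies out.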
Deterministic Optimal Observer

Let us denote by P(t|t_f) and g(t|t_f) the solutions to (64)-(65) with the measurements available up to t_f. Then the solution x̂(t|t) is given by taking t_f = t:

x̂(t|t) = P(t|t) λ(t|t) + g(t|t) = g(t|t)

since λ(t|t) = λ(t_f) = 0. Now, since the boundary conditions for (64)-(65) are given at t_0, for any ∆ > 0,

P(t|t_f) = P(t|t_f + ∆); g(t|t_f) = g(t|t_f + ∆)

Hence, as t_f increases, the solutions for P(t|t_f) and g(t|t_f) can be reused. In fact, the terminal time t_f does not enter into the solution of P(·) or g(·).
This means that the optimal observer is given by:

d/dt x̂(t|t) = ġ(t)
            = A(t) g(t) − P(t) C^T(t) R^{-1}(t) [ C(t) g(t) − y(t) ] (67)
            = A x̂(t|t) − L(t) [ C(t) x̂(t|t) − y(t) ] (68)

with observer gain L(t) = P(t) C^T(t) R^{-1}(t), initial condition x̂(t_0|t_0) = x_0 = 0, and

Ṗ = A(t) P + P A^T(t) + Q(t) − P C^T(t) R^{-1}(t) C(t) P; P(t_0) = S.

The Filtering Riccati Equation becomes the control Riccati Equation under the substitutions A^T → A, C^T → B, τ → (t_f − τ). The asymptotic solution P(t → ∞) for the Kalman filtering problem is analogous to P(t_0) as t_f − t_0 → ∞ for the LQ control problem. Asymptotic properties of the Kalman filter can be obtained using this analogy.
The ARE to be solved is obtained using the above substitutions:

A P + P A^T + Q − P C^T R^{-1} C P = 0.

In the time invariant case, P(t → ∞) converges if (A, C) is detectable. If (A^T, Q^{1/2}) is detectable, i.e. (A, Q^{1/2}) is stabilizable, then

A_c = A − P C^T R^{-1} C

is stable. Notice that although Q^{-1} is used in the performance function, the Riccati equation uses Q. So, it is possible to implement a Kalman filter with Q not invertible. In this case, process noise does not affect each state directly:

ẋ = A x + B_w w

The matrix Q^{-1} in the performance criterion can be modified accordingly.
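A numerical sketch of this limit (illustrative system, assuming NumPy): integrating the Filtering Riccati Differential Equation forward from P(t_0) = S, P(t) settles to a steady state that satisfies the ARE, and A_c = A − P C^T R^{-1} C comes out stable.

```python
# Forward integration of the filtering Riccati equation for a
# time-invariant system; P(t) settles to a steady state satisfying
# A P + P A^T + Q - P C^T R^{-1} C P = 0. All numbers are illustrative.
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)
Rinv = np.array([[1.0 / 0.2]])   # R^{-1}
S = np.eye(2)                    # P(t0) = S

P = S.copy()
dt = 1e-3
for _ in range(20000):           # integrate out to t = 20
    Pdot = A @ P + P @ A.T + Q - P @ C.T @ Rinv @ C @ P
    P = P + dt * Pdot

# Residual of the ARE at the settled P, and observer error dynamics
residual = A @ P + P @ A.T + Q - P @ C.T @ Rinv @ C @ P
A_c = A - P @ C.T @ Rinv @ C
print(np.linalg.norm(residual), np.linalg.eigvals(A_c).real)
```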
Stochastic Kalman Filter

Consider a stochastic process:

ẋ = A x + w; x(t_0) = x_0
y = C x + v (69)

Here w(t) and v(t) are random noise, and x_0 is a random variable, such that:

E[w(t) w^T(τ)] = Q(t) δ(t − τ) (70)
E[v(t) v^T(τ)] = R(t) δ(t − τ) (71)
E[v_i(t) w_j(τ)] = 0 (72)
E[x_0 x_0^T] = S (73)

where E(x) is the expected value (mean) of the random variable x. In other words, w(t) is unrelated to w(τ) whenever τ ≠ t; v(t) is unrelated to v(τ) whenever τ ≠ t; v(t) is not related to w(τ) at all.
Q(t) is the expected size of w(t) w^T(t); R(t) is the expected size of v(t) v^T(t); S is the expected size of x_0 x_0^T.

Assume that y(τ) is measured for τ ∈ [t_0, t]. If we apply the deterministic Kalman filter to this stochastic process to obtain x̂(t|t), and let x̃(t) = x̂(t|t) − x(t), then it turns out that this estimate minimizes:

E[ x̃(t) x̃^T(t) ]

Moreover, the solution to the Filtering Riccati Equation is:

P(t) = E[ x̃(t) x̃^T(t) ]

See the relevant chapter of Goodwin for the derivation of this result.
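The covariance interpretation can be checked with a small Monte Carlo sketch (scalar system, Euler-Maruyama discretization; all numbers are illustrative): the sample variance of the estimation error x̃ should approach the Riccati solution P(t).

```python
# Monte Carlo check that P(t) matches the error covariance E[xtilde^2].
# Scalar system with illustrative parameters.
import numpy as np

rng = np.random.default_rng(0)
a, c, q, r, s = -1.0, 1.0, 0.5, 0.1, 1.0
dt, N, trials = 1e-3, 2000, 4000

x = rng.normal(0.0, np.sqrt(s), size=trials)   # x0 with E[x0^2] = S
xhat = np.zeros(trials)                        # filter starts at 0
P = s
for _ in range(N):
    # Riccati equation for the error variance, and the filter gain
    P += dt * (2 * a * P + q - c**2 * P**2 / r)
    Lgain = P * c / r
    # White noise sampled at the Euler rate: variance q/dt and r/dt
    w = rng.normal(0.0, np.sqrt(q / dt), size=trials)
    v = rng.normal(0.0, np.sqrt(r / dt), size=trials)
    y = c * x + v
    x += dt * (a * x + w)
    xhat += dt * (a * xhat + Lgain * (y - c * xhat))

err = xhat - x
print(np.var(err), P)
```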
Pink Noise

Noise that does not uniformly cover a broad frequency spectrum, but instead is concentrated in some bands, is called pink. It can be modeled as the output of a bandpass filter driven by white noise:

w_pink(s) = F(s) w_white(s)

In state space,

ẋ_w = A_p x_w + B_p w_white
w_pink = C_p x_w + D_p w_white

where F(s) = C_p (sI − A_p)^{-1} B_p + D_p.

The state-space model is then incorporated into the plant equation via state augmentation, with white noise as the input. The procedure applies to both process noise and output measurement noise.
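The augmentation step can be sketched as follows. With plant ẋ = Ax + B_w w_pink and a first-order shaping filter (the matrices A_p, B_p, C_p, D_p below are illustrative values, not from the notes), the augmented state z = [x; x_w] is driven by white noise only:

```python
# State augmentation of a shaping (coloring) filter into the plant model.
# All numerical values are illustrative.
import numpy as np

# Plant: xdot = A x + Bw * w_pink
A  = np.array([[0.0, 1.0],
               [-1.0, -0.5]])
Bw = np.array([[0.0],
               [1.0]])

# Shaping filter: xw_dot = Ap xw + Bp w_white, w_pink = Cp xw + Dp w_white
Ap = np.array([[-5.0]])      # band edge of the noise model
Bp = np.array([[1.0]])
Cp = np.array([[5.0]])
Dp = np.array([[0.0]])

# Augmented state z = [x; xw], driven only by white noise:
# zdot = [[A, Bw Cp], [0, Ap]] z + [[Bw Dp], [Bp]] w_white
A_aug = np.block([[A, Bw @ Cp],
                  [np.zeros((1, 2)), Ap]])
B_aug = np.vstack([Bw @ Dp, Bp])
print(A_aug.shape, B_aug.shape)
```

The Kalman filter is then designed for (A_aug, B_aug) with white-noise input, which is exactly the setting the filter equations assume.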
Linear Quadratic Gaussian (LQG)

Consider the system driven by Gaussian noise:

ẋ = A x + B u + w
y = C x + v

where

E[w(t) w^T(τ)] = Q_w(t) δ(t − τ) (74)
E[v(t) v^T(τ)] = R_w(t) δ(t − τ) (75)
E[v_i(t) w_j(τ)] = 0 (76)
E[x_0 x_0^T] = S_w (77)

Moreover, assume that the probability distributions of w and v are zero mean and Gaussian. Suppose that one wants to minimize:

J = E { (1/T) ∫_{t_0}^{t_0+T} [ x^T Q x + u^T R u ] dt }

as T → ∞.
This is called the Linear Quadratic Gaussian (LQG) problem. It turns out that the optimal control in this sense is given by:

u = −K x̂

where K is the gain obtained from the LQR problem, and x̂ is the state estimate obtained from the Kalman filter.

From the separation theorem, we know that observer-based estimated state feedback generates a stabilizing controller such that the closed loop eigenvalues are the eigenvalues of the observer error dynamics together with the eigenvalues of the state feedback controller. The LQG result shows that the Kalman filter and LQ control together generate an optimal controller in the stochastic sense. The LQG controller is optimal over ALL causal, linear and nonlinear controllers!

M.E. University of Minnesota 296
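A minimal numerical sketch of this separation structure (assuming NumPy/SciPy; the plant and weights are illustrative choices): build K from the LQR ARE and L from the filter ARE, then check that the closed-loop eigenvalues are exactly the union of eig(A − BK) and eig(A − LC).

```python
# LQG controller u = -K xhat with a Kalman-filter estimate; the closed-loop
# spectrum is the union of the controller and observer spectra.
# System and weights are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[1.0]])        # LQR weights
Qw, Rv = np.eye(2), np.array([[0.1]])      # noise intensities

# LQR gain and Kalman-filter gain from the two dual AREs
K = np.linalg.solve(R, B.T @ solve_continuous_are(A, B, Q, R))
P = solve_continuous_are(A.T, C.T, Qw, Rv)
L = P @ C.T @ np.linalg.inv(Rv)

# Closed loop in (x, xhat) coordinates:
# xdot    = A x - B K xhat
# xhatdot = L C x + (A - B K - L C) xhat
Acl = np.block([[A, -B @ K],
                [L @ C, A - B @ K - L @ C]])
eigs = np.sort_complex(np.linalg.eigvals(Acl))
expected = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(A - B @ K), np.linalg.eigvals(A - L @ C)]))
print(np.max(np.abs(eigs - expected)))
```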
Topic #17 16.31 Feedback Control State-Space Systems Closed-loop control using estimators and regulators. Dynamics output feedback Back to reality Copyright 21 by Jonathan How. All Rights reserved 1 Fall
More informationDeterministic Dynamic Programming
Deterministic Dynamic Programming 1 Value Function Consider the following optimal control problem in Mayer s form: V (t 0, x 0 ) = inf u U J(t 1, x(t 1 )) (1) subject to ẋ(t) = f(t, x(t), u(t)), x(t 0
More informationModule 07 Controllability and Controller Design of Dynamical LTI Systems
Module 07 Controllability and Controller Design of Dynamical LTI Systems Ahmad F. Taha EE 5143: Linear Systems and Control Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ataha October
More informationDECENTRALIZED CONTROL OF INTERCONNECTED SINGULARLY PERTURBED SYSTEMS. A Dissertation by. Walid Sulaiman Alfuhaid
DECENTRALIZED CONTROL OF INTERCONNECTED SINGULARLY PERTURBED SYSTEMS A Dissertation by Walid Sulaiman Alfuhaid Master of Engineering, Arkansas State University, 2011 Bachelor of Engineering, King Fahd
More informationOPTIMAL CONTROL SYSTEMS
SYSTEMS MIN-MAX Harry G. Kwatny Department of Mechanical Engineering & Mechanics Drexel University OUTLINE MIN-MAX CONTROL Problem Definition HJB Equation Example GAME THEORY Differential Games Isaacs
More informationProblem 2 (Gaussian Elimination, Fundamental Spaces, Least Squares, Minimum Norm) Consider the following linear algebraic system of equations:
EEE58 Exam, Fall 6 AA Rodriguez Rules: Closed notes/books, No calculators permitted, open minds GWC 35, 965-37 Problem (Dynamic Augmentation: State Space Representation) Consider a dynamical system consisting
More informationObservability. It was the property in Lyapunov stability which allowed us to resolve that
Observability We have seen observability twice already It was the property which permitted us to retrieve the initial state from the initial data {u(0),y(0),u(1),y(1),...,u(n 1),y(n 1)} It was the property
More informationAdvanced Mechatronics Engineering
Advanced Mechatronics Engineering German University in Cairo 21 December, 2013 Outline Necessary conditions for optimal input Example Linear regulator problem Example Necessary conditions for optimal input
More informationObservability and state estimation
EE263 Autumn 2015 S Boyd and S Lall Observability and state estimation state estimation discrete-time observability observability controllability duality observers for noiseless case continuous-time observability
More informationControlled Diffusions and Hamilton-Jacobi Bellman Equations
Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter
More information1 Continuous-time Systems
Observability Completely controllable systems can be restructured by means of state feedback to have many desirable properties. But what if the state is not available for feedback? What if only the output
More informationLecture 2 and 3: Controllability of DT-LTI systems
1 Lecture 2 and 3: Controllability of DT-LTI systems Spring 2013 - EE 194, Advanced Control (Prof Khan) January 23 (Wed) and 28 (Mon), 2013 I LTI SYSTEMS Recall that continuous-time LTI systems can be
More informationLecture 19 Observability and state estimation
EE263 Autumn 2007-08 Stephen Boyd Lecture 19 Observability and state estimation state estimation discrete-time observability observability controllability duality observers for noiseless case continuous-time
More information(q 1)t. Control theory lends itself well such unification, as the structure and behavior of discrete control
My general research area is the study of differential and difference equations. Currently I am working in an emerging field in dynamical systems. I would describe my work as a cross between the theoretical
More informationOutline. 1 Linear Quadratic Problem. 2 Constraints. 3 Dynamic Programming Solution. 4 The Infinite Horizon LQ Problem.
Model Predictive Control Short Course Regulation James B. Rawlings Michael J. Risbeck Nishith R. Patel Department of Chemical and Biological Engineering Copyright c 217 by James B. Rawlings Outline 1 Linear
More informationLinear Quadratic Zero-Sum Two-Person Differential Games
Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard To cite this version: Pierre Bernhard. Linear Quadratic Zero-Sum Two-Person Differential Games. Encyclopaedia of Systems and Control,
More informationChapter 9 Observers, Model-based Controllers 9. Introduction In here we deal with the general case where only a subset of the states, or linear combin
Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter 9 Observers,
More informationLinear-Quadratic-Gaussian Controllers!
Linear-Quadratic-Gaussian Controllers! Robert Stengel! Optimal Control and Estimation MAE 546! Princeton University, 2017!! LTI dynamic system!! Certainty Equivalence and the Separation Theorem!! Asymptotic
More informationECEEN 5448 Fall 2011 Homework #4 Solutions
ECEEN 5448 Fall 2 Homework #4 Solutions Professor David G. Meyer Novemeber 29, 2. The state-space realization is A = [ [ ; b = ; c = [ which describes, of course, a free mass (in normalized units) with
More informationStochastic Tube MPC with State Estimation
Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems MTNS 2010 5 9 July, 2010 Budapest, Hungary Stochastic Tube MPC with State Estimation Mark Cannon, Qifeng Cheng,
More informationTopic # /31 Feedback Control Systems. Analysis of Nonlinear Systems Lyapunov Stability Analysis
Topic # 16.30/31 Feedback Control Systems Analysis of Nonlinear Systems Lyapunov Stability Analysis Fall 010 16.30/31 Lyapunov Stability Analysis Very general method to prove (or disprove) stability of
More informationEN Applied Optimal Control Lecture 8: Dynamic Programming October 10, 2018
EN530.603 Applied Optimal Control Lecture 8: Dynamic Programming October 0, 08 Lecturer: Marin Kobilarov Dynamic Programming (DP) is conerned with the computation of an optimal policy, i.e. an optimal
More information