Lecture 10 Linear Quadratic Stochastic Control with Partial State Observation

EE363 Winter, Lecture 10
Linear Quadratic Stochastic Control with Partial State Observation

- partially observed linear-quadratic stochastic control problem
- estimation-control separation principle
- solution via dynamic programming

Linear stochastic system

linear dynamical system, over a finite time horizon:

    x_{t+1} = A x_t + B u_t + w_t,   t = 0, ..., N-1

with state x_t, input u_t, and process noise w_t

linear noise-corrupted observations:

    y_t = C x_t + v_t,   t = 0, ..., N

y_t is the output, v_t is measurement noise

x_0 ~ N(0, X), w_t ~ N(0, W), v_t ~ N(0, V), all independent
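(Not part of the lecture.) A minimal NumPy sketch of simulating this system; the dimensions, the randomly generated A, B, C, and the zero input are illustrative assumptions, not data from the notes.

```python
# Minimal simulation sketch of x_{t+1} = A x_t + B u_t + w_t, y_t = C x_t + v_t.
# All matrices and dimensions below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, p, N = 5, 2, 3, 50                      # state, input, output dims and horizon
A = rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
X, W, V = np.eye(n), 0.5 * np.eye(n), 0.5 * np.eye(p)   # covariances of x_0, w_t, v_t

x = rng.multivariate_normal(np.zeros(n), X)   # x_0 ~ N(0, X)
xs, ys = [], []
for t in range(N):
    u = np.zeros(m)                           # open loop (u_t = 0) just for illustration
    xs.append(x)
    ys.append(C @ x + rng.multivariate_normal(np.zeros(p), V))   # y_t = C x_t + v_t
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(n), W)  # x_{t+1}
```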

Causal output feedback control policies

causal feedback policies: the input must be a function of past and present outputs (roughly speaking: the current state x_t is not known)

    u_t = φ_t(Y_t),   t = 0, ..., N-1

Y_t = (y_0, ..., y_t) is the output history at time t

φ_t : R^{p(t+1)} → R^m is called the control policy at time t

the closed-loop system is

    x_{t+1} = A x_t + B φ_t(Y_t) + w_t,   y_t = C x_t + v_t

x_0, ..., x_N, y_0, ..., y_N, u_0, ..., u_{N-1} are all random

Stochastic control with partial observations

objective:

    J = E( ∑_{t=0}^{N-1} (x_t^T Q x_t + u_t^T R u_t) + x_N^T Q x_N )

with Q ≥ 0, R > 0

partially observed linear quadratic stochastic control problem (a.k.a. LQG problem): choose output feedback policies φ_0, ..., φ_{N-1} to minimize J

Solution

optimal policies are

    φ_t(Y_t) = K_t E(x_t | Y_t)

- K_t is the optimal feedback gain matrix for the associated LQR problem
- E(x_t | Y_t) is the MMSE estimate of x_t given the measurements Y_t (can be computed using a Kalman filter)

called the separation principle: the optimal policy consists of
- estimating the state via MMSE (ignoring the control problem)
- using the estimated state as if it were the actual state, for purposes of control

LQR control gain computation

define P_N = Q, and for t = N, ..., 1,

    P_{t-1} = A^T P_t A + Q - A^T P_t B (R + B^T P_t B)^{-1} B^T P_t A

set

    K_t = -(R + B^T P_{t+1} B)^{-1} B^T P_{t+1} A,   t = 0, ..., N-1

K_t does not depend on the data C, X, W, V
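(Not part of the lecture.) The recursion can be coded directly; lqr_gains is a hypothetical helper name, and A, B, Q, R, N are assumed defined.

```python
# Sketch: backward Riccati recursion for P_t and the LQR gains K_t.
import numpy as np

def lqr_gains(A, B, Q, R, N):
    """Return lists P[0..N] and K[0..N-1] for the finite-horizon LQR problem."""
    P = [None] * (N + 1)
    P[N] = Q
    for t in range(N, 0, -1):                 # P_{t-1} from P_t
        S = R + B.T @ P[t] @ B
        P[t - 1] = A.T @ P[t] @ A + Q - A.T @ P[t] @ B @ np.linalg.solve(S, B.T @ P[t] @ A)
    K = [-np.linalg.solve(R + B.T @ P[t + 1] @ B, B.T @ P[t + 1] @ A) for t in range(N)]
    return P, K
```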

Kalman filter

define
- x̂_t = E(x_t | Y_t) (current state estimate)
- Σ_t = E(x_t - x̂_t)(x_t - x̂_t)^T (current state estimate covariance)
- Σ_{t+1|t} = A Σ_t A^T + W (next state estimate covariance)

start with Σ_{0|-1} = X; for t = 0, ..., N,

    Σ_t = Σ_{t|t-1} - Σ_{t|t-1} C^T (C Σ_{t|t-1} C^T + V)^{-1} C Σ_{t|t-1},   Σ_{t+1|t} = A Σ_t A^T + W

define

    L_t = Σ_{t|t-1} C^T (C Σ_{t|t-1} C^T + V)^{-1},   t = 0, ..., N

set x̂_0 = L_0 y_0; for t = 0, ..., N-1,

    x̂_{t+1} = A x̂_t + B u_t + L_{t+1} e_{t+1},   e_{t+1} = y_{t+1} - C(A x̂_t + B u_t)

e_{t+1} is the next output prediction error

e_{t+1} ~ N(0, C Σ_{t+1|t} C^T + V), independent of Y_t

the Kalman filter gains L_t do not depend on the data B, Q, R
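(Not part of the lecture.) A sketch of these recursions: the covariances and gains can be computed offline, the estimates online from the measurements. kalman_covariances and kalman_filter are hypothetical helper names; the system data are assumed defined as above.

```python
# Sketch: Kalman filter covariance/gain recursion and state-estimate update.
import numpy as np

def kalman_covariances(A, C, X, W, V, N):
    """Return Sigma_{t|t-1}, Sigma_t, and L_t for t = 0, ..., N."""
    Sig_pred, Sig, L = [None] * (N + 1), [None] * (N + 1), [None] * (N + 1)
    Sig_pred[0] = X                                        # Sigma_{0|-1} = X
    for t in range(N + 1):
        S = C @ Sig_pred[t] @ C.T + V
        L[t] = Sig_pred[t] @ C.T @ np.linalg.inv(S)        # gain L_t
        Sig[t] = Sig_pred[t] - L[t] @ C @ Sig_pred[t]      # measurement update
        if t < N:
            Sig_pred[t + 1] = A @ Sig[t] @ A.T + W         # time update
    return Sig_pred, Sig, L

def kalman_filter(A, B, C, L, y, u):
    """xhat_t = E(x_t | Y_t) given outputs y[0..N] and inputs u[0..N-1]."""
    xhat = [L[0] @ y[0]]                                   # xhat_0 = L_0 y_0
    for t in range(len(u)):
        pred = A @ xhat[t] + B @ u[t]                      # prediction of x_{t+1}
        xhat.append(pred + L[t + 1] @ (y[t + 1] - C @ pred))
    return xhat
```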

Solution via dynamic programming

let V_t(Y_t) be the optimal value of the LQG problem, from t on, conditioned on the output history Y_t:

    V_t(Y_t) = min_{φ_t, ..., φ_{N-1}} E( ∑_{τ=t}^{N-1} (x_τ^T Q x_τ + u_τ^T R u_τ) + x_N^T Q x_N | Y_t )

we'll show that V_t is a quadratic function plus a constant; in fact,

    V_t(Y_t) = x̂_t^T P_t x̂_t + q_t,   t = 0, ..., N

where P_t is the LQR cost-to-go matrix (x̂_t is a linear function of Y_t)

we have

    V_N(Y_N) = E(x_N^T Q x_N | Y_N) = x̂_N^T Q x̂_N + Tr(Q Σ_N)

(using x_N | Y_N ~ N(x̂_N, Σ_N)), so P_N = Q, q_N = Tr(Q Σ_N)

the dynamic programming (DP) equation is

    V_t(Y_t) = min_{u_t} E( x_t^T Q x_t + u_t^T R u_t + V_{t+1}(Y_{t+1}) | Y_t )

(and the argmin, which is a function of Y_t, is the optimal input)

with V_{t+1}(Y_{t+1}) = x̂_{t+1}^T P_{t+1} x̂_{t+1} + q_{t+1}, the DP equation becomes

    V_t(Y_t) = min_{u_t} E( x_t^T Q x_t + u_t^T R u_t + x̂_{t+1}^T P_{t+1} x̂_{t+1} + q_{t+1} | Y_t )
             = E(x_t^T Q x_t | Y_t) + q_{t+1} + min_{u_t} ( u_t^T R u_t + E(x̂_{t+1}^T P_{t+1} x̂_{t+1} | Y_t) )

using x_t | Y_t ~ N(x̂_t, Σ_t), the first term is

    E(x_t^T Q x_t | Y_t) = x̂_t^T Q x̂_t + Tr(Q Σ_t)

using x̂_{t+1} = A x̂_t + B u_t + L_{t+1} e_{t+1}, with e_{t+1} ~ N(0, C Σ_{t+1|t} C^T + V), independent of Y_t, we get

    E(x̂_{t+1}^T P_{t+1} x̂_{t+1} | Y_t) = x̂_t^T A^T P_{t+1} A x̂_t + u_t^T B^T P_{t+1} B u_t + 2 x̂_t^T A^T P_{t+1} B u_t
                                          + Tr( (L_{t+1}^T P_{t+1} L_{t+1}) (C Σ_{t+1|t} C^T + V) )

using L_{t+1} = Σ_{t+1|t} C^T (C Σ_{t+1|t} C^T + V)^{-1}, the last term becomes

    Tr( P_{t+1} Σ_{t+1|t} C^T (C Σ_{t+1|t} C^T + V)^{-1} C Σ_{t+1|t} ) = Tr P_{t+1} (Σ_{t+1|t} - Σ_{t+1})

combining all terms we get

    V_t(Y_t) = x̂_t^T (Q + A^T P_{t+1} A) x̂_t + q_{t+1} + Tr(Q Σ_t) + Tr P_{t+1} (Σ_{t+1|t} - Σ_{t+1})
               + min_{u_t} ( u_t^T (R + B^T P_{t+1} B) u_t + 2 x̂_t^T A^T P_{t+1} B u_t )

the minimization is the same as in the deterministic LQR problem; thus the optimal policy is φ_t(Y_t) = K_t x̂_t, with

    K_t = -(R + B^T P_{t+1} B)^{-1} B^T P_{t+1} A

plugging in the optimal u_t we get V_t(Y_t) = x̂_t^T P_t x̂_t + q_t, where

    P_t = A^T P_{t+1} A + Q - A^T P_{t+1} B (R + B^T P_{t+1} B)^{-1} B^T P_{t+1} A
    q_t = q_{t+1} + Tr(Q Σ_t) + Tr P_{t+1} (Σ_{t+1|t} - Σ_{t+1})

the recursion for P_t is exactly the same as for deterministic LQR

Optimal objective

the optimal LQG cost is

    J = E V_0(y_0) = q_0 + E x̂_0^T P_0 x̂_0 = q_0 + Tr P_0 (X - Σ_0)

using x̂_0 ~ N(0, X - Σ_0)

using q_N = Tr(Q Σ_N) and q_t = q_{t+1} + Tr(Q Σ_t) + Tr P_{t+1} (Σ_{t+1|t} - Σ_{t+1}) we get

    J = ∑_{t=0}^{N} Tr(Q Σ_t) + ∑_{t=0}^{N} Tr P_t (Σ_{t|t-1} - Σ_t)

using Σ_{0|-1} = X
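(Not part of the lecture.) The optimal cost can be evaluated directly from the two recursions; this sketch assumes P from the hypothetical lqr_gains helper and Sig_pred, Sig from kalman_covariances above.

```python
# Sketch: optimal LQG cost from the Riccati matrices P_t and the filter covariances.
import numpy as np

def lqg_cost(P, Sig_pred, Sig, Q, N):
    """J = sum_{t=0}^{N} [ Tr(Q Sigma_t) + Tr(P_t (Sigma_{t|t-1} - Sigma_t)) ]."""
    return sum(np.trace(Q @ Sig[t]) + np.trace(P[t] @ (Sig_pred[t] - Sig[t]))
               for t in range(N + 1))
```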

we can write this as

    J = ∑_{t=0}^{N} Tr(Q Σ_t) + ∑_{t=1}^{N} Tr P_t (A Σ_{t-1} A^T + W - Σ_t) + Tr P_0 (X - Σ_0)

which simplifies to J = J_lqr + J_est, where

    J_lqr = Tr(P_0 X) + ∑_{t=1}^{N} Tr(P_t W)
    J_est = Tr((Q - P_0) Σ_0) + ∑_{t=1}^{N} ( Tr((Q - P_t) Σ_t) + Tr(P_t A Σ_{t-1} A^T) )

J_lqr is the stochastic LQR cost, i.e., the optimal objective if you knew the state

J_est is the cost of not knowing (i.e., estimating) the state
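(Not part of the lecture.) The two pieces of the decomposition can be evaluated with the same quantities used above; lqg_cost_split is a hypothetical helper name.

```python
# Sketch: split the optimal cost into J_lqr and J_est as on this slide.
import numpy as np

def lqg_cost_split(P, Sig, Q, A, W, X, N):
    J_lqr = np.trace(P[0] @ X) + sum(np.trace(P[t] @ W) for t in range(1, N + 1))
    J_est = np.trace((Q - P[0]) @ Sig[0]) + sum(
        np.trace((Q - P[t]) @ Sig[t]) + np.trace(P[t] @ A @ Sig[t - 1] @ A.T)
        for t in range(1, N + 1))
    return J_lqr, J_est
```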

when state measurements are exact (C = I, V = 0), we have Σ_t = 0, so we get

    J = J_lqr = Tr(P_0 X) + ∑_{t=1}^{N} Tr(P_t W)

Infinite horizon LQG

choose policies to minimize the infinite horizon average stage cost

    J = lim_{N→∞} (1/N) E ∑_{t=0}^{N-1} (x_t^T Q x_t + u_t^T R u_t)

the optimal average stage cost is

    J = Tr(Q Σ) + Tr(P (Σ̂ - Σ))

where P and Σ̂ are PSD solutions of the AREs

    P = Q + A^T P A - A^T P B (R + B^T P B)^{-1} B^T P A
    Σ̂ = A Σ̂ A^T + W - A Σ̂ C^T (C Σ̂ C^T + V)^{-1} C Σ̂ A^T

and

    Σ = Σ̂ - Σ̂ C^T (C Σ̂ C^T + V)^{-1} C Σ̂

(Σ̂ and Σ are the steady-state values of Σ_{t|t-1} and Σ_t, respectively)
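(Not part of the lecture.) One simple way to obtain P, Σ̂, Σ is to run the corresponding Riccati recursions until they (approximately) converge, assuming the usual stabilizability/detectability conditions hold; scipy.linalg.solve_discrete_are could be used instead. lqg_steady_state is a hypothetical helper name.

```python
# Sketch: solve the two AREs by iterating the Riccati recursions to (approximate) convergence.
import numpy as np

def lqg_steady_state(A, B, C, Q, R, W, V, iters=1000):
    P, Sig_hat = Q.copy(), W.copy()           # crude initializations
    for _ in range(iters):
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        Sig_hat = (A @ Sig_hat @ A.T + W
                   - A @ Sig_hat @ C.T @ np.linalg.solve(C @ Sig_hat @ C.T + V,
                                                         C @ Sig_hat @ A.T))
    Sig = Sig_hat - Sig_hat @ C.T @ np.linalg.solve(C @ Sig_hat @ C.T + V, C @ Sig_hat)
    J = np.trace(Q @ Sig) + np.trace(P @ (Sig_hat - Sig))   # optimal average stage cost
    return P, Sig_hat, Sig, J
```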

the optimal average stage cost doesn't depend on X

(an) optimal policy is

    u_t = K x̂_t,   x̂_{t+1} = A x̂_t + B u_t + L (y_{t+1} - C(A x̂_t + B u_t))

where

    K = -(R + B^T P B)^{-1} B^T P A,   L = Σ̂ C^T (C Σ̂ C^T + V)^{-1}

K is the steady-state LQR feedback gain; L is the steady-state Kalman filter gain
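(Not part of the lecture.) A sketch of simulating this steady-state controller, assuming K, L and the problem data are computed as above; simulate_lqg is a hypothetical helper, and the zero initialization of the estimate is a simplification.

```python
# Sketch: closed-loop simulation of u_t = K xhat_t with the steady-state Kalman filter.
import numpy as np

def simulate_lqg(A, B, C, Q, R, K, L, W, V, X, T, seed=1):
    rng = np.random.default_rng(seed)
    n, p = A.shape[0], C.shape[0]
    x = rng.multivariate_normal(np.zeros(n), X)
    xhat = np.zeros(n)                              # simple (suboptimal) estimate initialization
    costs = []
    for t in range(T):
        u = K @ xhat
        costs.append(x @ Q @ x + u @ R @ u)         # stage cost x_t^T Q x_t + u_t^T R u_t
        x_next = A @ x + B @ u + rng.multivariate_normal(np.zeros(n), W)
        y_next = C @ x_next + rng.multivariate_normal(np.zeros(p), V)
        pred = A @ xhat + B @ u
        xhat = pred + L @ (y_next - C @ pred)       # xhat_{t+1}
        x = x_next
    return np.mean(costs)                           # empirical average stage cost
```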

Example

system with n = 5 states, m = 2 inputs, p = 3 outputs; infinite horizon

A, B, C chosen randomly; A scaled so that max_i |λ_i(A)| = 1

Q = I, R = I, X = I, W = 0.5 I, V = 0.5 I

we compare LQG with the case where the state is known (stochastic LQR)
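(Not part of the lecture.) A sketch of how such an example could be set up, using the hypothetical helpers defined above; the seed and simulation length are arbitrary, and Tr(PW) is used as the stochastic LQR average stage cost (the steady-state counterpart of the J_lqr formula above).

```python
# Sketch: build a random example like the one described and compare LQG with stochastic LQR.
import numpy as np

rng = np.random.default_rng(363)
n, m, p = 5, 2, 3
A = rng.standard_normal((n, n))
A /= np.max(np.abs(np.linalg.eigvals(A)))       # scale so that max_i |lambda_i(A)| = 1
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
Q, R, X = np.eye(n), np.eye(m), np.eye(n)
W, V = 0.5 * np.eye(n), 0.5 * np.eye(p)

P, Sig_hat, Sig, J_lqg = lqg_steady_state(A, B, C, Q, R, W, V)
K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
L = Sig_hat @ C.T @ np.linalg.inv(C @ Sig_hat @ C.T + V)
J_lqr = np.trace(P @ W)                          # average stage cost with exact state knowledge
print("LQG:", J_lqg, " stochastic LQR:", J_lqr,
      " simulated:", simulate_lqg(A, B, C, Q, R, K, L, W, V, X, T=5000))
```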

Sample trajectories

sample traces of (x_t)_1 and (u_t)_1 in steady state (blue: LQG, red: stochastic LQR)

[figure: plots of (x_t)_1 and (u_t)_1 versus t]

Cost histogram

histogram of stage costs for 5000 steps in steady state

[figure: histogram of stage costs, with the values J and J_lqr marked]
