
ECE 553, Spring 2008
Posted: May 2008

Problem Set #7 Solution

Solutions:

1. The optimal controller is still the one given in the solution to Problem 6 in Homework #5:

    u*(x, t) = -p(t) x - k(t).

The minimum expected cost follows by substituting this controller into J: it consists of m(0), the integral ∫_0^1 p(t) dt, and the terms in p(0) and k(0) coming from the initial condition, where

    m(0) = ∫_0^1 (2 k(t) + k(t)^2) dt.

Evaluating these integrals with the explicit p(t) and k(t) found in Homework #5 gives the numerical value of the minimum expected cost.

2. For the LQG problem, we have shown in class that J can be written equivalently as

    J = E{ ∫_{t_0}^{t_f} ‖u + R^{-1} B^T P x‖_R^2 dt } + ∫_{t_0}^{t_f} Tr[P D D^T] dt + (terms independent of u),

where P(·) satisfies

    Ṗ + A^T P + P A - P B R^{-1} B^T P + Q = 0,   P(t_f) = Q_f.

But now we do not have a perfect state measurement x(t) for all t; the state is available only at the sampling instants t_i. On the interval [t_i, t_{i+1}), decompose x into two parts, x = y + z, where

    ẏ = A y + B u,   y(t_i) = x(t_i),
    ż = A z + D w,   z(t_i) = 0.

Since u can only use the information in x(t_i) during the interval [t_i, t_{i+1}), conditioned on this value of x(t_i), u and z are independent for all t ∈ [t_i, t_{i+1}), and hence J is equivalent to

    ∑_i E{ ∫_{t_i}^{t_{i+1}} ‖u + R^{-1} B^T P y‖_R^2 dt } + ∑_i E{ ∫_{t_i}^{t_{i+1}} ‖R^{-1} B^T P z‖_R^2 dt } + ∫_{t_0}^{t_f} Tr[P D D^T] dt.

From this it readily follows that, on the interval [t_i, t_{i+1}), the unique optimal control is

    u*(t) = -R^{-1} B^T P y(t),   ẏ = (A - B R^{-1} B^T P) y,   y(t_i) = x(t_i).

Let Φ(t, τ) be the state transition matrix corresponding to Ã := A - B R^{-1} B^T P. Then

    u*(t) = -R^{-1} B^T P(t) Φ(t, t_i) x(t_i),   t_i ≤ t < t_{i+1}.
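
As a quick numerical illustration of this control law, the following Python sketch integrates the Riccati equation backward in time and then propagates y over one sampling interval. The scalar system data, the sampling instant t_i, and the sampled value x(t_i) are arbitrary illustrative choices, not data from this problem set.

    # Sampled-data LQG control on one interval [t_i, t_{i+1}) for a scalar example system.
    from scipy.integrate import solve_ivp

    A, B, Q, R, Qf = 1.0, 1.0, 1.0, 1.0, 1.0       # illustrative scalar data
    t0, tf = 0.0, 2.0

    # Riccati equation integrated backward from P(tf) = Qf (scalar form):
    #   Pdot = -(2 A P - B^2 P^2 / R + Q)
    sol_P = solve_ivp(lambda t, P: -(2*A*P - (B**2/R)*P**2 + Q),
                      [tf, t0], [Qf], dense_output=True, rtol=1e-8)
    P = lambda t: sol_P.sol(t)[0]

    # On [t_i, t_{i+1}) the optimal control is u*(t) = -R^{-1} B P(t) y(t) with
    #   ydot = (A - B^2 P / R) y,  y(t_i) = x(t_i),
    # i.e. u*(t) = -R^{-1} B P(t) Phi(t, t_i) x(t_i).
    ti, x_ti = 0.5, 1.3                            # illustrative sampling instant and sampled state
    sol_y = solve_ivp(lambda t, y: (A - (B**2/R)*P(t))*y,
                      [ti, tf], [x_ti], dense_output=True, rtol=1e-8)
    u_star = lambda t: -(B/R) * P(t) * sol_y.sol(t)[0]
    print(u_star(0.75), u_star(1.0))               # open-loop control values on the interval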

Note that the controller u*(t) = -R^{-1} B^T P(t) Φ(t, t_i) x(t_i) also obeys the separation principle between estimation and control, since it is exactly the controller that solves the optimal control problem for the deterministic case with sampled state measurements. The corresponding minimum value of J is

    J* = ∑_i ∫_{t_i}^{t_{i+1}} Tr[P B R^{-1} B^T P M_i(t)] dt + ∫_{t_0}^{t_f} Tr[P D D^T] dt,

where M_i satisfies

    Ṁ_i = A M_i + M_i A^T + D D^T,   M_i(t_i) = 0.

3. Since a is not time-varying, the state space model for this system is

    ẋ_1 = x_2,   x_1(0) = 1,
    ẋ_2 = x_3,   x_2(0) = 1,
    ẋ_3 = 0,     x_3(0) = a ~ N(0, ρ),

with the measurement equation being

    y = x_1 + v = [1, 0, 0] x + v.

Hence, in terms of the notation adopted in class, we have

    A = [0 1 0; 0 0 1; 0 0 0],   H = [1, 0, 0],   D = 0.

The error covariance equation is

    dΣ/dt = A Σ + Σ A^T - Σ H^T H Σ,   Σ(0) = diag(0, 0, ρ),

which can be solved explicitly to yield

    Σ(t) = ρ/(1 + ρ t^5/20) [ t^4/4  t^3/2  t^2/2 ;  t^3/2  t^2  t ;  t^2/2  t  1 ],   t ≥ 0.

The Kalman gain is

    K(t) = Σ(t) H^T = ρ/(1 + ρ t^5/20) [ t^4/4, t^3/2, t^2/2 ]^T,   t ≥ 0,

and the filter for the system is

    d/dt [x̂_1; x̂_2; x̂_3] = [0 1 0; 0 0 1; 0 0 0] [x̂_1; x̂_2; x̂_3] + K(t) [y - x̂_1],   [x̂_1(0); x̂_2(0); x̂_3(0)] = [1; 1; 0].

In the limit as t → ∞, Σ(t) → 0 and K(t) → 0. This means that the random variable a is identified exactly as t → ∞. Since x_1(0) and x_2(0) are both known, x_1 and x_2 also become exactly estimated as t → ∞. Hence, as t → ∞, the noisy measurements carry little extra information, and the Kalman gain also goes to zero. (This is possible largely because D = 0.)
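
The closed-form Σ(t) above can be checked numerically. The Python sketch below integrates the covariance equation from Σ(0) = diag(0, 0, ρ) and compares the result with the formula; the value of ρ is an arbitrary illustrative choice.

    # Check the closed-form Sigma(t) of Problem 3 against direct integration of the covariance ODE.
    import numpy as np
    from scipy.integrate import solve_ivp

    rho = 2.0                                              # illustrative prior variance of a
    A = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
    H = np.array([[1., 0., 0.]])

    def riccati(t, s):
        S = s.reshape(3, 3)
        return (A @ S + S @ A.T - S @ H.T @ H @ S).ravel()

    sol = solve_ivp(riccati, [0.0, 3.0], np.diag([0., 0., rho]).ravel(),
                    rtol=1e-10, atol=1e-12)

    t = sol.t[-1]
    S_num = sol.y[:, -1].reshape(3, 3)
    v = np.array([t**2/2, t, 1.0])                         # error in (x_1, x_2, x_3) per unit error in a
    S_formula = rho/(1 + rho*t**5/20) * np.outer(v, v)
    print(np.max(np.abs(S_num - S_formula)))               # small: the two expressions agree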

4. When the state equation and the expected cost function are, respectively,

    ẋ = A x + B u + D w + c,   x(t_0) = x_0,
    J(u) = E{ x(t_f)^T Q_f x(t_f) + ∫_{t_0}^{t_f} [x(t)^T Q(t) x(t) + u(t)^T R(t) u(t)] dt },

with R(·) > 0 and Q(·) ≥ 0, Q_f ≥ 0, we have already shown in class that there exists a unique optimal control strategy, given by

    u*(t) = -R^{-1} B^T [P(t) x(t) + k(t)],

where P(·) is the unique nonnegative definite solution of the RDE

    Ṗ + P A + A^T P - P B R^{-1} B^T P + Q = 0,   P(t_f) = Q_f,

and k(·) uniquely solves the ODE

    k̇ + A^T k - P B R^{-1} B^T k + P c = 0,   k(t_f) = 0.

(a) Specializing to the problem under consideration, A = 0, B = 1, c = t^2/2, D = 1, Q_f = 1, Q = 0, R = 1 yields

    Ṗ - P^2 = 0,   P(1) = 1,
    k̇ - P k + P c = 0,   k(1) = 0.

Solving this gives

    P(t) = 1/(2 - t),   k(t) = (1 - t^3)/(6(2 - t)).

Therefore, the optimal control is

    u*(t) = -(1/(2 - t)) [ x(t) + (1 - t^3)/6 ].
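
As a quick sanity check, the two ODEs in (a) can be integrated backward from t = 1 in Python and compared with these closed-form expressions:

    # Verify P(t) = 1/(2-t) and k(t) = (1-t^3)/(6(2-t)) for Problem 4(a).
    import numpy as np
    from scipy.integrate import solve_ivp

    def backward(t, z):
        P, k = z
        c = t**2/2
        return [P**2,            # from  Pdot - P^2 = 0
                P*k - P*c]       # from  kdot - P k + P c = 0

    sol = solve_ivp(backward, [1.0, 0.0], [1.0, 0.0], dense_output=True, rtol=1e-10)
    ts = np.linspace(0.0, 1.0, 5)
    P_num, k_num = sol.sol(ts)
    print(np.max(np.abs(P_num - 1.0/(2.0 - ts))),
          np.max(np.abs(k_num - (1.0 - ts**3)/(6.0*(2.0 - ts)))))    # both small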

(b) To obtain the open-loop optimal control, notice that this is very similar to the sampling case that we studied in Problem 2, except that here we have a deterministic drift term c = t^2/2, so we need to employ the solution to the more general LQG model, as in part (a). Similarly, we decompose the state x(t) into a deterministic part y(t) and an uncontrolled stochastic part z(t): x = y + z, where

    ẏ = A y + B u + c,   y(t_0) = x_0,
    ż = A z + D w,       z(t_0) = 0.

The open-loop control u is independent of z, and E[z(t)] = 0. Constructing the same P(t) and k(t) as in (a), J can be rewritten as

    J = E{ ∫_{t_0}^{t_f} ‖u + R^{-1} B^T (P y + k)‖_R^2 dt } + E{ ∫_{t_0}^{t_f} z(t)^T P B R^{-1} B^T P z(t) dt } + ∫_{t_0}^{t_f} Tr[P D D^T] dt
        + ∫_{t_0}^{t_f} [ 2 k^T(t) c(t) - k^T(t) B R^{-1} B^T k(t) ] dt + x_0^T P(t_0) x_0 + 2 x_0^T k(t_0).

Only the first term above involves the control u (the separation principle again), and this gives the optimal controller

    u*(t) = -R^{-1} B^T (P y*(t) + k(t)),   ẏ* = (A - B R^{-1} B^T P) y* - B R^{-1} B^T k + c,   y*(t_0) = x_0,

where P(t) and k(t) are as above. For the specific problem under consideration, we have

    ẏ* = -(1/(2 - t)) y* - (1 - t^3)/(6(2 - t)) + t^2/2,   y*(0) = 1.

This yields

    y*(t) = 1 - (7/12) t + t^3/6,   i.e.,   y*(t) + (1 - t^3)/6 = (7/12)(2 - t).

Hence the control is

    u*(t) = -(1/(2 - t)) [ y*(t) + (1 - t^3)/6 ] = -7/12.

Note that, interestingly, the open-loop control is independent of time.

(c) The difference between J_O and J_f is

    J_O - J_f = E{ ∫_0^1 z(t)^T P B R^{-1} B^T P z(t) dt } = ∫_0^1 Tr[ P B R^{-1} B^T P M ] dt,

where M(t) = E[z(t) z(t)^T] satisfies the Lyapunov differential equation

    Ṁ = A M + M A^T + D D^T,   M(0) = 0.

For the specific problem, M(t) = t, and this gives

    J_O - J_f = ∫_0^1 t/(2 - t)^2 dt = 1 - ln 2.
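
Both numerical conclusions in (b) and (c) are easy to confirm with a short Python computation:

    # Check that the open-loop control of Problem 4(b) is the constant -7/12 and that
    # the integral in Problem 4(c) equals 1 - ln 2.
    import numpy as np
    from scipy.integrate import solve_ivp, quad

    sol = solve_ivp(lambda t, y: -y/(2 - t) - (1 - t**3)/(6*(2 - t)) + t**2/2,
                    [0.0, 1.0], [1.0], dense_output=True, rtol=1e-10)
    ts = np.linspace(0.0, 1.0, 5)
    y = sol.sol(ts)[0]
    print(-(y + (1 - ts**3)/6)/(2 - ts))        # every entry is close to -7/12 = -0.5833...

    val, _ = quad(lambda t: t/(2 - t)**2, 0.0, 1.0)
    print(val, 1 - np.log(2))                   # both approximately 0.3069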

5. The LQG discounted-cost problem was discussed in class under perfect state feedback. With noisy state measurements, the controller part is the same as in that case (except that the state is now replaced by the filter estimate x̂), and the filter part clearly does not make use of the discount factor. The parameters are

    A = 0,   β = 2,   B = 1,   D = 1.5,   H = 1,   G = 1,   Q = R = 1.

Then A_β = A - β/2 = -1, and the system is obviously both controllable and observable. Hence the infinite-horizon problem is well-defined, i.e., there exists an optimal feedback controller that is stationary:

    u*(t) = -P x̂(t),   where   -2 P + 1 - P^2 = 0  ⟹  P = -1 + √2 ≈ 0.414,

and

    dx̂/dt = -P x̂ + K(y - x̂),   x̂(0) = 1,

where K is the Kalman gain. In our case,

    dΣ/dt = -Σ^2 + 9/4,   Σ(0) = 0.

As t → ∞, Σ(t) converges to the steady-state value 3/2, and hence K(t) → 3/2. The steady-state optimal control is therefore

    u*(t) = -0.414 x̂(t),   dx̂/dt = -0.414 x̂ + 1.5 [y - x̂],   x̂(0) = 1.

The corresponding value of J is

    J = Tr[P Σ(0)] + Tr[P x̂(0) x̂(0)^T] + (1/β) Tr[P D D^T] + ∫_0^∞ e^{-βt} Tr[Σ(t) P B R^{-1} B^T P] dt,

and the last term equals ∫_0^∞ e^{-2t} (0.414)^2 Σ(t) dt. Solving for Σ(t) yields

    Σ(t) = (3/2) (1 - e^{-3t}) / (1 + e^{-3t}).

Substituting this into the expression for J and carrying out the integration gives the optimal value of J. If we instead use the steady-state value Σ(t) ≡ 3/2 for all t, then the value is J = 1.8, which is slightly higher, as expected.

Please report to yima@uiuc.edu if you find any typos.
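
As a closing aside, the stationary control gain P = √2 - 1 and the filter covariance Σ(t) used in Problem 5 are easy to verify with a short Python computation:

    # Check the stationary control gain and the filter covariance of Problem 5.
    import numpy as np
    from scipy.integrate import solve_ivp

    # Control side: the positive root of -2 P + 1 - P^2 = 0 is sqrt(2) - 1.
    P = np.sqrt(2) - 1
    print(P, -2*P + 1 - P**2)                              # approx 0.414, residual approx 0

    # Filter side: Sigmadot = 9/4 - Sigma^2, Sigma(0) = 0, versus (3/2)(1 - e^{-3t})/(1 + e^{-3t}).
    sol = solve_ivp(lambda t, s: 9/4 - s**2, [0.0, 4.0], [0.0], dense_output=True, rtol=1e-10)
    ts = np.linspace(0.0, 4.0, 9)
    closed_form = 1.5*(1 - np.exp(-3*ts))/(1 + np.exp(-3*ts))
    print(np.max(np.abs(sol.sol(ts)[0] - closed_form)))    # small: the formulas agree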
