Topic # Feedback Control Systems

Topic # Feedback Control Systems

- Deterministic LQR
- Optimal control and the Riccati equation
- Weight Selection

Linear Quadratic Regulator (LQR)

- Have seen the solutions to the LQR problem, which results in linear full-state feedback control.
- Would like to demonstrate from first principles that this is the optimal form of the control.

Deterministic Linear Quadratic Regulator

- Plant:
    \dot{x}(t) = A(t) x(t) + B_u(t) u(t), \quad x(t_0) = x_0
    z(t) = C_z(t) x(t)
- Cost:
    J_{LQR} = \frac{1}{2} \int_{t_0}^{t_f} \left[ z^T(t) R_{zz}(t) z(t) + u^T(t) R_{uu}(t) u(t) \right] dt + \frac{1}{2} x^T(t_f) P_{t_f} x(t_f)
  where P_{t_f} \ge 0, R_{zz}(t) > 0, and R_{uu}(t) > 0. Define R_{xx}(t) = C_z^T(t) R_{zz}(t) C_z(t) \ge 0.
- A(t) is a continuous function of time.
- B_u(t), C_z(t), R_{zz}(t), R_{uu}(t) are piecewise continuous functions of time, and all are bounded.
- Problem Statement: Find the input u(t) for all t \in [t_0, t_f] to minimize J_{LQR}.
  - This is not necessarily specified to be a feedback controller.

- This is the most general form of the LQR problem; we rarely need this level of generality and often suppress the time dependence of the matrices.
  - The finite horizon problem is important for short duration control, such as landing an aircraft.
- The control design problem is a constrained optimization, with the constraints being the dynamics of the system.
- The standard way of handling the constraints in an optimization is to add them to the cost using a Lagrange multiplier.
  - Results in an unconstrained optimization.
- Example: min f(x, y) = x^2 + y^2 subject to the constraint that c(x, y) = x + y + 2 = 0.

Figure 1: Optimization results

- Clearly the unconstrained minimum is at x = y = 0.

- To find the constrained minimum, form the augmented cost function
    L \triangleq f(x, y) + \lambda c(x, y) = x^2 + y^2 + \lambda (x + y + 2)
  where \lambda is the Lagrange multiplier.
  - Note that if the constraint is satisfied, then L \equiv f.
- The solution approach without constraints is to find the stationary point of f(x, y) (\partial f/\partial x = \partial f/\partial y = 0). With constraints we find the stationary points of L:
    \frac{\partial L}{\partial x} = \frac{\partial L}{\partial y} = \frac{\partial L}{\partial \lambda} = 0
  which gives
    \frac{\partial L}{\partial x} = 2x + \lambda = 0
    \frac{\partial L}{\partial y} = 2y + \lambda = 0
    \frac{\partial L}{\partial \lambda} = x + y + 2 = 0
- This gives 3 equations in 3 unknowns; solve to find x = y = -1.
- The key point here is that, due to the constraint, the selections of x and y during the minimization are not independent.
  - The Lagrange multiplier captures this dependency.
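
A minimal numerical check of this example (not part of the notes): the three stationarity conditions above are linear, so they can be solved directly in Matlab.

% Check of the Lagrange-multiplier example: stationarity of
% L(x,y,lambda) = x^2 + y^2 + lambda*(x + y + 2) gives a linear system.
A = [2 0 1; 0 2 1; 1 1 0];
b = [0; 0; -2];
sol = A\b;          % expect x = y = -1, lambda = 2
fprintf('x = %g, y = %g, lambda = %g\n', sol(1), sol(2), sol(3));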

- LQR optimization follows the same path, but it is greatly complicated by the fact that the cost involves an integration over time.
- To optimize the cost, we follow the procedure of augmenting the constraints in the problem (the system dynamics) to the cost (integrand) to form the Hamiltonian:
    H = \frac{1}{2} \left( x^T(t) R_{xx} x(t) + u^T(t) R_{uu} u(t) \right) + p^T(t) \left( A x(t) + B_u u(t) \right)
  - p(t) \in \mathbb{R}^{n \times 1} is called the adjoint variable or costate.
  - It is the Lagrange multiplier in the problem.
- The necessary conditions (see notes online) for optimality are that:
  1. \dot{x}(t) = \left( \partial H / \partial p \right)^T = A x(t) + B_u u(t), with x(t_0) = x_0
  2. \dot{p}(t) = -\left( \partial H / \partial x \right)^T = -R_{xx} x(t) - A^T p(t), with p(t_f) = P_{t_f} x(t_f)
  3. \partial H / \partial u = 0 \Rightarrow R_{uu} u + B_u^T p(t) = 0, so u = -R_{uu}^{-1} B_u^T p(t)
- Can check for a minimum by looking at \partial^2 H / \partial u^2 \ge 0 (need to check that R_{uu} > 0).

- Key point is that we now have
    \dot{x}(t) = A x(t) + B_u u(t) = A x(t) - B_u R_{uu}^{-1} B_u^T p(t)
  which can be combined with the equation for the adjoint variable
    \dot{p}(t) = -R_{xx} x(t) - A^T p(t) = -C_z^T R_{zz} C_z x(t) - A^T p(t)
  to give
    \begin{bmatrix} \dot{x}(t) \\ \dot{p}(t) \end{bmatrix} =
    \begin{bmatrix} A & -B_u R_{uu}^{-1} B_u^T \\ -C_z^T R_{zz} C_z & -A^T \end{bmatrix}
    \begin{bmatrix} x(t) \\ p(t) \end{bmatrix}
  which is called the Hamiltonian matrix.
- This matrix describes coupled closed-loop dynamics for both x(t) and p(t).
- Dynamics of x(t) and p(t) are coupled, but x(t) is known initially and p(t) is known at the terminal time, since p(t_f) = P_{t_f} x(t_f).
  - Two-point boundary value problems are typically hard to solve.
- However, in this case, we can introduce a new matrix variable P(t) and it is relatively easy to show that:
  1. p(t) = P(t) x(t)
  2. It is relatively easy to find P(t).
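
As a quick illustration (a sketch with a hypothetical two-state plant, not from the notes), the Hamiltonian matrix can be formed numerically; its eigenvalues come in (lambda, -lambda) pairs, and the n stable ones turn out to be the closed-loop LQR poles.

% Sketch: form the 2n x 2n Hamiltonian matrix for a hypothetical plant and
% inspect its eigenvalues, which are symmetric about the imaginary axis.
A   = [0 1; -1 -0.5];          % hypothetical 2-state plant (assumed)
Bu  = [0; 1];
Cz  = [1 0];
Rzz = 1;  Ruu = 0.1;
Rxx = Cz'*Rzz*Cz;
H = [A, -Bu*(Ruu\Bu'); -Rxx, -A'];
eig(H)                          % eigenvalues come in (lambda, -lambda) pairs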

How to proceed?

1. For the 2n-dimensional system
    \begin{bmatrix} \dot{x}(t) \\ \dot{p}(t) \end{bmatrix} =
    \begin{bmatrix} A & -B_u R_{uu}^{-1} B_u^T \\ -C_z^T R_{zz} C_z & -A^T \end{bmatrix}
    \begin{bmatrix} x(t) \\ p(t) \end{bmatrix}
   define a transition matrix
    F(t_1, t_0) = \begin{bmatrix} F_{11}(t_1, t_0) & F_{12}(t_1, t_0) \\ F_{21}(t_1, t_0) & F_{22}(t_1, t_0) \end{bmatrix}
   and use this to relate x(t) to x(t_f) and p(t_f):
    \begin{bmatrix} x(t) \\ p(t) \end{bmatrix} =
    \begin{bmatrix} F_{11}(t, t_f) & F_{12}(t, t_f) \\ F_{21}(t, t_f) & F_{22}(t, t_f) \end{bmatrix}
    \begin{bmatrix} x(t_f) \\ p(t_f) \end{bmatrix}
   so
    x(t) = F_{11}(t, t_f) x(t_f) + F_{12}(t, t_f) p(t_f) = \left[ F_{11}(t, t_f) + F_{12}(t, t_f) P_{t_f} \right] x(t_f)

2. Now find p(t) in terms of x(t_f):
    p(t) = \left[ F_{21}(t, t_f) + F_{22}(t, t_f) P_{t_f} \right] x(t_f)

3. Eliminate x(t_f) to get:
    p(t) = \left[ F_{21}(t, t_f) + F_{22}(t, t_f) P_{t_f} \right] \left[ F_{11}(t, t_f) + F_{12}(t, t_f) P_{t_f} \right]^{-1} x(t) \triangleq P(t) x(t)
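
For the LTI case the transition matrix is just a matrix exponential, so this sweep construction can be checked numerically; a minimal sketch (hypothetical plant and weights assumed) is:

% Sketch: with constant A, Bu, Cz the transition matrix of the Hamiltonian
% system is F(t,tf) = expm(H*(t-tf)), and P(t) follows from its n x n blocks.
A   = [0 1; -1 -0.5];  Bu = [0; 1];  Cz = [1 0];   % hypothetical plant
Rzz = 1;  Ruu = 0.1;   Rxx = Cz'*Rzz*Cz;
H   = [A, -Bu*(Ruu\Bu'); -Rxx, -A'];
n   = size(A,1);  tfin = 5;  Ptf = zeros(n);  t = 0;
F   = expm(H*(t - tfin));
F11 = F(1:n,1:n);      F12 = F(1:n,n+1:end);
F21 = F(n+1:end,1:n);  F22 = F(n+1:end,n+1:end);
Pt  = (F21 + F22*Ptf)/(F11 + F12*Ptf)              % P(t) at t = 0

As t_f - t grows, this P(t) approaches the steady-state Riccati solution discussed below.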

- Now have p(t) = P(t) x(t); must find the equation for P(t).
    \dot{p}(t) = -C_z^T R_{zz} C_z x(t) - A^T p(t) = \dot{P}(t) x(t) + P(t) \dot{x}(t)
    \Rightarrow -\dot{P}(t) x(t) = C_z^T R_{zz} C_z x(t) + A^T p(t) + P(t) \dot{x}(t)
        = C_z^T R_{zz} C_z x(t) + A^T p(t) + P(t) \left( A x(t) - B_u R_{uu}^{-1} B_u^T p(t) \right)
        = \left( C_z^T R_{zz} C_z + P(t) A \right) x(t) + \left( A^T - P(t) B_u R_{uu}^{-1} B_u^T \right) p(t)
        = \left( C_z^T R_{zz} C_z + P(t) A \right) x(t) + \left( A^T - P(t) B_u R_{uu}^{-1} B_u^T \right) P(t) x(t)
        = \left[ A^T P(t) + P(t) A + C_z^T R_{zz} C_z - P(t) B_u R_{uu}^{-1} B_u^T P(t) \right] x(t)
- This must be true for arbitrary x(t), so P(t) must satisfy
    -\dot{P}(t) = A^T P(t) + P(t) A + C_z^T R_{zz} C_z - P(t) B_u R_{uu}^{-1} B_u^T P(t)
  which is the matrix differential Riccati equation.
- Note that the optimal value of P(t) must be found by solving this equation backwards in time from t_f with P(t_f) = P_{t_f}.
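
A minimal sketch of the backward solution (hypothetical plant and weights assumed): running time in reverse, tau = t_f - t, flips the sign of the derivative, so ode45 can integrate the Riccati equation forward in tau from P(t_f).

% Sketch: integrate -dP/dt = A'P + PA + Rxx - P*Bu*Ruu^-1*Bu'*P backwards
% from P(tf) = Ptf by using the reversed time variable tau = tf - t.
A   = [0 1; -1 -0.5];  Bu = [0; 1];  Cz = [1 0];   % hypothetical plant
Rzz = 1;  Ruu = 0.1;   Rxx = Cz'*Rzz*Cz;
n = size(A,1);  tfin = 5;  Ptf = zeros(n);
riccati = @(tau,Pv) reshape( ...
    A'*reshape(Pv,n,n) + reshape(Pv,n,n)*A + Rxx ...
    - reshape(Pv,n,n)*Bu*(Ruu\(Bu'*reshape(Pv,n,n))), n*n, 1);
[tau,Pv] = ode45(riccati, [0 tfin], reshape(Ptf,n*n,1));
P0 = reshape(Pv(end,:), n, n)     % P at t = 0; approaches the CARE solution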

- The control gains are then
    u_{opt}(t) = -R_{uu}^{-1} B_u^T p(t) = -R_{uu}^{-1} B_u^T P(t) x(t) = -K(t) x(t)
- So the optimal control inputs can in fact be computed using linear feedback on the full system state.
  - Key point: this controller works equally well for MISO and MIMO regulator designs.
- Normally we are interested in problems with t_0 = 0 and t_f = \infty, in which case we can just use the steady-state value P_{ss}.
- If we assume LTI dynamics and let t_f \to \infty, then at any finite time t we would expect the differential Riccati equation to settle down to a steady-state value (requires that (A, B_u) be stabilizable and (A, C_z) detectable), which is the solution of
    A^T P + P A + R_{xx} - P B_u R_{uu}^{-1} B_u^T P = 0
  called the (Control) Algebraic Riccati Equation (CARE).
- Find the optimal steady-state feedback gains in Matlab, with u(t) = -K_{ss} x(t), from
    K_{ss} = lqr(A, B_u, C_z^T R_{zz} C_z, R_{uu})
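
A quick numerical check of this connection (hypothetical plant and weights assumed): lqr also returns the CARE solution, so the residual and the gain formula K = R_uu^{-1} B_u^T P can be verified directly.

% Sketch: verify that the lqr() result satisfies the CARE.
A   = [0 1; -1 -0.5];  Bu = [0; 1];  Cz = [1 0];   % hypothetical plant
Rzz = 1;  Ruu = 0.1;   Rxx = Cz'*Rzz*Cz;
[Kss,Pss] = lqr(A, Bu, Rxx, Ruu);                  % second output is the CARE solution
norm(A'*Pss + Pss*A + Rxx - Pss*Bu*(Ruu\(Bu'*Pss)))   % CARE residual ~ 0
norm(Kss - Ruu\(Bu'*Pss))                             % K = Ruu^-1 * Bu' * P
eig(A - Bu*Kss)                                       % stable closed-loop poles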

LQR Stability Margins

- The LQR approach selects closed-loop poles that balance between system errors and the control effort.
  - Easy design iteration using R_{uu}.
  - Sometimes difficult to relate the desired transient response to the LQR cost function.
- Particularly nice thing about the LQR approach is that the designer is focused on system performance issues.
- Turns out that the news is even better than that, because LQR exhibits very good stability margins.
- Consider the LQR stability robustness setup:
    \dot{x} = A x + B u, \quad z = C_z x, \quad R_{xx} = C_z^T C_z
    J = \int_0^\infty \left( z^T z + \rho\, u^T u \right) dt

(Block diagram: feedback gain K closing the loop u -> B(sI - A)^{-1} -> x, with performance output z = C_z x.)

- Study robustness in the frequency domain.
  - Loop transfer function L(s) = K(sI - A)^{-1} B
  - Cost transfer function C(s) = C_z (sI - A)^{-1} B

- Can develop a relationship between the open-loop cost C(s) and the closed-loop return difference I + L(s), called the Kalman Frequency Domain Equality:
    [I + L(-s)]^T [I + L(s)] = I + \frac{1}{\rho} C^T(-s) C(s)
- Sketch of proof:
  - Start with u = -K x, K = \frac{1}{\rho} B^T P, where
      0 = -A^T P - P A - R_{xx} + \frac{1}{\rho} P B B^T P
  - Introduce the Laplace variable s using \pm s P:
      0 = (-sI - A^T) P + P (sI - A) - R_{xx} + \frac{1}{\rho} P B B^T P
  - Pre-multiply by B^T(-sI - A^T)^{-1}, post-multiply by (sI - A)^{-1} B
  - Complete the square ...
      [I + L(-s)]^T [I + L(s)] = I + \frac{1}{\rho} C^T(-s) C(s)
- Can handle the MIMO case, but look at the SISO case (s = j\omega) to develop further insights:
    [I + L(-j\omega)][I + L(j\omega)] = (1 + L_r(\omega) - j L_i(\omega))(1 + L_r(\omega) + j L_i(\omega)) = |1 + L(j\omega)|^2
  and
    C^T(-j\omega) C(j\omega) = C_r^2 + C_i^2 = |C(j\omega)|^2 \ge 0

- Thus the KFE becomes
    |1 + L(j\omega)|^2 = 1 + \frac{1}{\rho} |C(j\omega)|^2 \ge 1
- Implications: the Nyquist plot of L(j\omega) will always be outside the unit circle centered at (-1, 0).

(Figure: Nyquist plot of L_N(j\omega) showing the return difference 1 + L_N(j\omega) staying outside the unit circle centered at (-1, 0).)

- Great, but why is this so significant? Recall the SISO form of the Nyquist Stability Theorem:
  - If the loop transfer function L(s) has P poles in the RHP of the s-plane (and \lim_{s \to \infty} L(s) is a constant), then for closed-loop stability the locus of L(j\omega) for \omega \in (-\infty, \infty) must encircle the critical point (-1, 0) P times in the counterclockwise direction (Ogata, p. 528).
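
A minimal numerical check of this inequality (hypothetical plant assumed, not from the notes):

% Sketch: check |1 + L(jw)| >= 1 for the LQR loop L(s) = K(sI-A)^-1 B.
A   = [0 1; -1 -0.5];  Bu = [0; 1];  Cz = [1 0];   % hypothetical plant
rho = 0.1;
K   = lqr(A, Bu, Cz'*Cz, rho);
L   = ss(A, Bu, K, 0);             % loop transfer function K(sI-A)^-1 B
w   = logspace(-2, 3, 400);
Ljw = squeeze(freqresp(L, w));
min(abs(1 + Ljw))                  % should be >= 1 by the KFE
nyquist(L)                         % locus stays outside the unit disk at (-1,0)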

- So we can directly prove stability from the Nyquist plot of L(s). But what if the model is wrong and it turns out that the actual loop transfer function L_A(s) is given by
    L_A(s) = L_N(s) [1 + \Delta(s)], \quad |\Delta(j\omega)| \le 1, \ \forall \omega
- We need to determine whether these perturbations to the loop TF will change the decision about closed-loop stability.
  - Can do this directly by determining whether it is possible to change the number of encirclements of the critical point.

Figure 2: Perturbation to the LTF causing a change in the number of encirclements

- Claim is that since the LTF L(j\omega) is guaranteed to be far from the critical point for all frequencies, LQR is VERY robust.
- Can study this by introducing a modification to the system, where nominally \beta = 1, but we would like to consider:
  - The gain \beta \in \mathbb{R}
  - The phase \beta = e^{j\phi}

(Block diagram: loop K(sI - A)^{-1} B with the gain/phase perturbation \beta inserted at the plant input.)

- In fact, it can be shown that:
  - If the open-loop system is stable, then any \beta \in (0, \infty) yields a stable closed-loop system.
  - For an unstable system, any \beta \in (1/2, \infty) yields a stable closed-loop system, so the gain margins are (1/2, \infty).
  - Phase margins of at least \pm 60^\circ,
  which are both huge.
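
These margins can be checked numerically for a given design; a sketch with a hypothetical plant (not from the notes) is below.

% Sketch: classical margins of the LQR loop broken at the plant input.
% LQR guarantees gain margins covering (1/2, inf) and phase margin >= 60 deg.
A   = [0 1; -1 -0.5];  Bu = [0; 1];  Cz = [1 0];   % hypothetical plant
K   = lqr(A, Bu, Cz'*Cz, 0.1);
L   = ss(A, Bu, K, 0);             % loop broken at the plant input
S   = allmargin(L);                % all gain/phase margins of the loop
S.GainMargin, S.PhaseMargin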

Figure 3: Example of LTF for an open-loop stable system

Figure 4: Example loop transfer functions for an open-loop unstable system

Weighting Matrix Selection

- A good rule of thumb when selecting the weighting matrices R_{xx} and R_{uu} is to normalize the signals:
    R_{xx} = \begin{bmatrix} \frac{\alpha_1^2}{(x_1)_{max}^2} & & & \\ & \frac{\alpha_2^2}{(x_2)_{max}^2} & & \\ & & \ddots & \\ & & & \frac{\alpha_n^2}{(x_n)_{max}^2} \end{bmatrix}
    R_{uu} = \rho \begin{bmatrix} \frac{\beta_1^2}{(u_1)_{max}^2} & & & \\ & \frac{\beta_2^2}{(u_2)_{max}^2} & & \\ & & \ddots & \\ & & & \frac{\beta_m^2}{(u_m)_{max}^2} \end{bmatrix}
- The (x_i)_{max} and (u_i)_{max} represent the largest desired response/control input for that component of the state/actuator signal.
- The \alpha_i with \sum_i \alpha_i^2 = 1 and \beta_i with \sum_i \beta_i^2 = 1 are used to add an additional relative weighting on the various components of the state/control.
- \rho is used as the last relative weighting between the control and state penalties.
- Gives us a relatively concrete way to discuss the relative size of R_{xx} and R_{uu} and their ratio R_{xx}/R_{uu}.
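
A short sketch of this rule of thumb in Matlab (the limits, weights, and plant below are assumed for illustration):

% Sketch: build Rxx and Ruu by normalizing each state and input by its
% largest acceptable value, with alpha/beta weights and a single rho.
xmax  = [0.1; 1.0];            % largest desired state excursions (assumed)
umax  = 2.0;                   % largest desired control input (assumed)
alpha = [1; 1]/sqrt(2);        % sum(alpha.^2) = 1
beta  = 1;                     % sum(beta.^2)  = 1
rho   = 1;                     % final knob trading control vs state penalty
Rxx = diag(alpha.^2 ./ xmax.^2);
Ruu = rho * diag(beta.^2 ./ umax.^2);
K = lqr([0 1; -1 -0.5], [0; 1], Rxx, Ruu);   % hypothetical 2-state plant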

LQR Summary

- While we have large margins, be careful, because changes to some of the parameters in A or B_u can produce a very large change in L(s).
- Similar statements hold for the MIMO case, but it requires singular value analysis tools.

LQ Servo

- Using the reference scaling \bar{N} we can achieve zero steady-state error, but the approach is sensitive to accurate knowledge of all plant parameters.
- Can modify the LQ formulation to ensure that zero steady-state error is robustly achieved in response to constant reference commands.
- Done in the LQ formulation by augmenting integrators to the system output and then including a penalty on the integrated output in the cost.
- If the relevant system output is y(t) = C_y x(t) and the reference is r(t), add extra states x_I(t), where
    \dot{x}_I(t) = e(t) = r(t) - y(t)
  Then penalize both x(t) and x_I(t) in the cost.
- If the state of the original system is x(t), then the dynamics are modified to be
    \begin{bmatrix} \dot{x}(t) \\ \dot{x}_I(t) \end{bmatrix} =
    \begin{bmatrix} A & 0 \\ -C_y & 0 \end{bmatrix}
    \begin{bmatrix} x(t) \\ x_I(t) \end{bmatrix} +
    \begin{bmatrix} B_u \\ 0 \end{bmatrix} u(t) +
    \begin{bmatrix} 0 \\ I \end{bmatrix} r(t)
  and define \bar{x}(t) = [\, x^T(t) \ \ x_I^T(t) \,]^T.
- The optimal feedback for the cost
    J = \int_0^\infty \left[ \bar{x}^T R_{xx} \bar{x} + u^T R_{uu} u \right] dt
  is of the form
    u(t) = -\begin{bmatrix} K & K_I \end{bmatrix} \begin{bmatrix} x \\ x_I \end{bmatrix} = -\bar{K} \bar{x}(t)
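
A minimal sketch of this augmentation for a hypothetical two-state plant (weights assumed); the full design used in the example appears in lqservo.m below.

% Sketch: augment an output integrator and design [K KI] with lqr.
A  = [0 1; -1 -0.5];  Bu = [0; 1];  Cy = [1 0];    % hypothetical plant
Abar = [A, zeros(2,1); -Cy, 0];                    % plant + output integrator
Bbar = [Bu; 0];
Rxx  = blkdiag(Cy'*Cy, 10);                        % penalize y and the integrator state
Ruu  = 1;
Kbar = lqr(Abar, Bbar, Rxx, Ruu);
K  = Kbar(1:2);  KI = Kbar(3);                     % u = -K*x - KI*xI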

- Once we have used LQR to design the control gains K and K_I, we typically implement the controller using the architecture in Figure 5 (the error r - y drives an integrator 1/s whose state x_I is fed back through K_I, while the state x is fed back through K to form u into G(s)).

Figure 5: Typical implementation of the LQ servo

- For the DOFB problem, the estimator and integrator give the compensator
    \dot{\hat{x}}(t) = A \hat{x}(t) + B u(t) + L (y(t) - \hat{y}(t)) = (A - LC) \hat{x}(t) + B u(t) + L y(t)
    \dot{x}_I(t) = r(t) - y(t)
    u(t) = -\begin{bmatrix} K & K_I \end{bmatrix} \begin{bmatrix} \hat{x} \\ x_I \end{bmatrix}
  so with the compensator state x_c(t) = [\, \hat{x}^T(t) \ \ x_I^T(t) \,]^T we have
    \dot{x}_c(t) = \begin{bmatrix} A - LC - BK & -BK_I \\ 0 & 0 \end{bmatrix} x_c(t) + \begin{bmatrix} L \\ -I \end{bmatrix} y(t) + \begin{bmatrix} 0 \\ I \end{bmatrix} r(t)
    u(t) = -\bar{K} x_c(t)
  which gives the closed-loop system
    \begin{bmatrix} \dot{x}(t) \\ \dot{x}_c(t) \end{bmatrix} =
    \begin{bmatrix} A & -B\bar{K} \\ \begin{bmatrix} L \\ -I \end{bmatrix} C & \begin{bmatrix} A - LC - BK & -BK_I \\ 0 & 0 \end{bmatrix} \end{bmatrix}
    \begin{bmatrix} x(t) \\ x_c(t) \end{bmatrix} +
    \begin{bmatrix} 0 \\ \begin{bmatrix} 0 \\ I \end{bmatrix} \end{bmatrix} r(t)
    y(t) = \begin{bmatrix} C & 0 \end{bmatrix} \begin{bmatrix} x(t) \\ x_c(t) \end{bmatrix}
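
A sketch assembling this closed loop for a hypothetical two-state plant (gains, weights, and plant are assumed) and confirming the zero steady-state error property before looking at the full example code:

% Sketch: build the DOFB LQ servo closed loop and check dcgain(r -> y) = 1.
A = [0 1; -1 -0.5];  B = [0; 1];  C = [1 0];       % hypothetical plant
Kbar = lqr([A zeros(2,1); -C 0], [B; 0], blkdiag(C'*C, 10), 1);
K = Kbar(1:2);  KI = Kbar(3);
L = lqr(A', C', B*B', 0.001);  L = L';             % estimator gain via the dual LQR
Ac  = [A-L*C-B*K, -B*KI; zeros(1,2), 0];           % compensator dynamics
Acl = [A, -B*Kbar; [L; -1]*C, Ac];                 % closed-loop dynamics
Bcl = [zeros(2,1); 0; 0; 1];                       % reference enters the integrator
Ccl = [C, zeros(1,3)];
dcgain(ss(Acl, Bcl, Ccl, 0))                       % = 1: zero steady-state error
eig(Acl)                                           % all eigenvalues in the LHP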

Example: G(s) = (s + 2) / ((s + 1)(s^2 + s + 1))

- Use LQR and its dual to find K and L using the dynamics augmented with an integrator.
- Implement as given before to get the following results.

Figure 6: LQ servo root locus, showing closed-loop poles, open-loop poles, compensator poles, and compensator zeros

Figure 7: Step response of the LQ servo

LQ Servo Code (lqservo.m)

% LQ servo
G=tf([1 2],conv(conv([1 1],[1 1 1]),[1]));
[a,b,c,d]=ssdata(G); ns=size(a,1);
klqr=lqr(a,b,c'*c,1);
z=zeros(ns,1);

% augmented-state weights: heavy vs light penalty on the integrator state
Rxx1=[c'*c z;z' 10];
Rxx2=[c'*c z;z' 0.1];

abar=[a z;-c 0]; bbar=[b;0];                  % plant augmented with output integrator
kbar1=lqr(abar,bbar,Rxx1,1);
kbar2=lqr(abar,bbar,Rxx2,1);

l=lqr(a',c',b*b',.001); l=l';                 % estimator gain via the dual LQR

ac1=[a-l*c-b*kbar1(1:ns) -b*kbar1(end);z' 0]; % compensator dynamics
ac2=[a-l*c-b*kbar2(1:ns) -b*kbar2(end);z' 0];
by=[l;-1]; br=[l*0;1];
cc1=-kbar1; cc2=-kbar2;
Gc_uy=tf(ss(ac1,by,cc1,0)); [nn,dd]=tfdata(Gc_uy,'v');
Gc_ur=tf(ss(ac1,br,cc1,0));

acl1=[a -b*kbar1;[l;-1]*c ac1];               % closed-loop dynamics
acl2=[a -b*kbar2;[l;-1]*c ac2];
bcl=[b*0;br];
ccl=[c c*0 0];

figure(1);
[y2,t2]=step(ss(acl2,bcl,ccl,0));
[y1,t1]=step(ss(acl1,bcl,ccl,0),t2);
plot(t1,y1,t2,y2,'LineWidth',2)
legend('Int Penalty 10','Int Penalty 0.1','Location','SouthEast')
print -dpng -r300 lqservo1.png

figure(2);
rlocus(-G*Gc_uy)                              % is a positive feedback loop now
hold on; plot(eig(acl1)+j*eps,'bd','MarkerFaceColor','b'); hold off
hold on; plot(eig(a)+j*eps,'mv','MarkerFaceColor','m'); hold off
hold on; plot(roots(nn)+j*eps,'ro','MarkerFaceColor','r'); hold off
hold on; plot(roots(dd)+j*eps,'rs','MarkerFaceColor','r'); hold off
% axis([ ])                                   % axis limits omitted in the original listing
print -dpng -r300 lqservo2.png
