Exam in Automatic Control II Reglerteknik II 5hp (1RT495)

Date: August 24, 2018. Venue: Bergsbrunnagatan 15, sal 2. Responsible teacher: Hans Rosth.

Aiding material: Calculator, mathematical handbooks, textbooks by Glad & Ljung (Reglerteori/Control theory & Reglerteknik). Additional notes in the textbooks are allowed.

Preliminary grades: 23p for grade 3, 33p for grade 4, 43p for grade 5.

Use separate sheets for each problem, i.e. no more than one problem per sheet. Write your exam code on every sheet.

Important: Your solutions should be well motivated unless otherwise stated in the problem formulation! Vague or lacking motivations may lead to a reduced number of points.

Problem 6 is an alternative to the homework assignments from the spring semester 2018. (In case you choose to hand in a solution to Problem 6, you will be credited with the better of your performance on the homework assignments and on Problem 6.)

Good luck!

Problem 1

(a) A continuous-time system has the state space representation

    ẋ(t) = [-2 1; -2 0] x(t) + [a; b] u(t),   y(t) = [1 0] x(t).      (1)

Show that (1) is a minimal realisation for all values of a and b, except for a = b = 0.

(b) For which sampling periods h > 0 is the zero-order-hold sampled, discrete-time version of (1) observable?

(c) A discrete-time system is controlled by proportional feedback:

    y(k) = (q + 2)/(q(q - 1)) u(k),   u(k) = K(r(k) - y(k)).

For which K ∈ R is the closed loop system stable?

(d) When designing a sampling controller for a continuous-time system you can either perform the design in continuous time or in discrete time. In the former case you obtain a continuous-time controller, which then must be discretized so that it can be implemented as a sampling controller. There are several possible ways to perform this discretization. One way is to use Tustin's approximation, in which the time derivative is approximated in the following way:

    p ≈ (2/h)·(q - 1)/(q + 1).      (2)

Here h > 0 is the sampling period, p is the differentiation operator (i.e. py(t) = ẏ(t)) and q is the forward shift operator (i.e. qy(kh) = y(kh + h)). Show that Tustin's approximation (2) is equivalent to the trapezoidal rule for numerical integration.

Problem 2

The block diagram below represents a stationary continuous-time stochastic process: the white noise w drives the block 2/(p + 3), whose output is z, and z in turn drives the block G(p), whose output is y. The transfer operator G(p) is minimum phase (with G(0) ≠ 0), and w is zero mean white noise with spectrum Φ_w(ω) = 4. The output spectrum is

    Φ_y(ω) = (4ω² + 8)/(ω⁴ + 25ω² + 144).

(a) Determine the spectrum for z.

(b) Determine the variance, Ez², of z.

(c) Determine the transfer operator G(p). (4p)
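The equivalence asked for in Problem 1(d) can also be seen numerically: if the Tustin derivative estimate D is computed recursively from samples of y, then accumulating D with the trapezoidal rule returns y exactly, since the two recursions are algebraically identical. A minimal sketch, assuming Python with numpy (not part of the exam text):

# Tustin's approximation (2) versus the trapezoidal rule (minimal sketch).
import numpy as np

h = 0.1                                  # sampling period
t = np.arange(0, 5 + h, h)
y = np.sin(t)                            # sampled signal y(kh)

# Tustin derivative estimate D:  (q + 1) h D = 2 (q - 1) y
D = np.empty_like(y)
D[0] = np.cos(t[0])                      # initialise with the true derivative
for k in range(len(t) - 1):
    D[k + 1] = -D[k] + (2.0 / h) * (y[k + 1] - y[k])

# Trapezoidal integration of D gives back y (same recursion rearranged).
y_rec = np.empty_like(y)
y_rec[0] = y[0]
for k in range(len(t) - 1):
    y_rec[k + 1] = y_rec[k] + 0.5 * h * (D[k] + D[k + 1])

print("max |y_rec - y| :", np.max(np.abs(y_rec - y)))      # machine precision
print("max |D - cos(t)|:", np.max(np.abs(D - np.cos(t))))  # small discretisation error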

Problem 3

A stock market analyst has the following model for the daily variations of the price for a certain share:

    x(k + 1) = x(k) + w(k),
    y(k) = x(k) + e(k),                                   (3)
    Ew(k) = 0, Φ_w(ω) = 2,   Ee(k) = 0, Φ_e(ω) = 4.

(a) Initially it is assumed that w and e are uncorrelated. Determine the Kalman filter that gives the optimal prediction x̂(k + 1 | k) for the model (3) under this assumption. (3p)

(b) What is the covariance of the estimation error x̃ = x − x̂ for the Kalman filter of (3) under the assumption in (a)? That is, determine E x̃². (1p)

(c) A deeper analysis of the stock market reveals that w and e are not uncorrelated, but that

    w(k) = v(k) − 0.5e(k),  where  Ev(k) = 0, Φ_v(ω) = 1,      (4)

and v(k) and e(k) are uncorrelated. (For e(k), (3) still holds.) Assume that the Kalman filter in (a) would still be used (as an observer); what would E x̃² then be for the model (3)? (3p)

(d) What is the smallest possible value of E x̃² given the model (3), and under the circumstances given in (c)? (3p)

Problem 4

Specify for each of the following statements whether it is true or false. No motivations required; only the answers true / false are considered!

(a) A Kalman filter is an observer.
(b) For a Kalman filter, based on a correct model, the output innovations are white noise.
(c) MPC is typically implemented as PID controllers.
(d) In MPC the prediction/output horizon is typically longer than the control/input horizon.
(e) An advantage with MPC is that it can account for bounds and constraints, for example of the type |u| ≤ U_max.
(f) Stability is always preserved under zero-order-hold sampling.
(g) Observability is always preserved under zero-order-hold sampling.

Each correct answer scores +1, each incorrect answer scores −1, and omitted answers score 0 points. (Minimal total score is 0 points.) (7p)
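Model (3) and statement (b) of Problem 4 can also be illustrated by simulation. The sketch below is only a sanity check, assuming Python with numpy and Gaussian noise (the exam only specifies the spectra); the stationary gain K = 0.5 is the one derived in the solutions further down.

# Simulation sketch for model (3): stationary one-step predictor
# xhat(k+1|k) = xhat(k|k-1) + K*(y(k) - xhat(k|k-1)), with K = 0.5.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
w = rng.normal(scale=np.sqrt(2.0), size=N)   # R1 = 2
e = rng.normal(scale=np.sqrt(4.0), size=N)   # R2 = 4

x = np.zeros(N)
for k in range(N - 1):
    x[k + 1] = x[k] + w[k]
y = x + e

K = 0.5
xhat = np.zeros(N)                           # xhat[k] = xhat(k | k-1)
innov = np.zeros(N)
for k in range(N - 1):
    innov[k] = y[k] - xhat[k]
    xhat[k + 1] = xhat[k] + K * innov[k]

err = x - xhat
print("E xtilde^2 ~", np.var(err[1000:]))    # close to 4
r = innov[1000:-1]                           # innovations should be white
print("lag-1 innovation corr ~", np.corrcoef(r[:-1], r[1:])[0, 1])  # close to 0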

Problem 5

The block diagram below shows a version of a continuous-time double integrator: the input u is integrated (1/s) to give x₁, and x₁ plus the process disturbance w is integrated (1/s) to give x₂. A state space model of the system, with state vector x = [x₁ x₂]^T, is

    ẋ(t) = [0 0; 1 0] x(t) + [1; 0] u(t) + [0; 1] w(t),
    y(t) = x(t).                                          (5)

Here w is a process disturbance. The double integrator should be stabilized, and since the full state vector is measured (with no measurement noise) pure state feedback can be used, i.e. the control law u(t) = −Lx(t) can be applied.

(a) Initially it was assumed that the process disturbance is negligible, and it was set to w ≡ 0 so that the model (5) became purely deterministic. The feedback gain L was then computed by solving the LQ problem where the criterion function

    V_a = ∫₀^∞ { y^T(t) Q₁ y(t) + u²(t) } dt,   Q₁ ∈ R^{2×2},

is minimized. It turned out that the solution of the associated Riccati equation is

    S = [8 25; 25 199].

How was the weighting matrix Q₁ chosen? Determine the value of Q₁ (a matrix) such that S above solves the associated Riccati equation. (3p)

(b) Determine the poles of the closed loop system when the LQ controller in (a) is used.

(c) Later it turned out that the process disturbance is not negligible, but that it can be modeled as

    w(t) = (2/(p + 0.9)) v(t),   Ev(t) = 0,  Φ_v(ω) = 1.      (6)

Combine (5) and (6) into a state space model on standard form, with u as input, y as output and x̄ = [x₁ x₂ w]^T as state vector. (3p)

(d) It also turned out that w can be measured (without measurement noise), so that the pure state feedback u(t) = −L̄x̄(t) is applicable. State the equations needed to find the feedback gain L̄ ∈ R^{1×3} that minimizes

    V_d = E{ y^T(t) Q₁ y(t) + u²(t) },

with the same Q₁ as in (a). (You need/should not solve this LQ problem.)

Problem 6

The HW bonus points (from the spring 2018) are exchangeable for this problem.

A discrete-time system with one input and two outputs is described by the difference equations

    y₁(k + 1) + 0.2y₁(k) − y₂(k) = 2u(k + 1) − u(k),
    y₂(k + 1) − 0.7y₂(k) + y₁(k) = 2u(k).

(a) Give the transfer operator G(q) in the following model of the system:

    [y₁(k); y₂(k)] = G(q) u(k).      (3p)

(b) Give a state space representation for the system. (3p)

(c) Is your state space model in (b) observable from y₂? (1p)

Solutions to the exam in Automatic Control II, 2018-08-24:

1. (a) We notice that (1) is on observer canonical form, and hence observable. Check for controllability:

    S = [B  AB] = [a  −2a + b; b  −2a],   det S = −2a² + 2ab − b² = −((a − b)² + a²) ≠ 0

for all a and b except for a = b = 0. Both controllable and observable ⇒ minimal realisation.

(b) Theorem 4.1 ⇒ the ZOH sampled system is qx = Fx + Gu, y = Cx, where F = e^{Ah} and G = ∫₀^h e^{At}B dt. To check for observability we need F and C. Use the Laplace transform to compute e^{At}: e^{At} = L⁻¹[(sI − A)⁻¹]. We get

    (sI − A)⁻¹ = [s + 2  −1; 2  s]⁻¹
               = [((s+1) − 1)/((s+1)² + 1)   1/((s+1)² + 1);  −2/((s+1)² + 1)   ((s+1) + 1)/((s+1)² + 1)],

and by inverse Laplace transformation

    e^{At} = [e^{−t}(cos t − sin t)   e^{−t} sin t;  −2e^{−t} sin t   e^{−t}(cos t + sin t)].

Set t = h to get F, and plug it into the observability matrix:

    O = [C; CF] = [1  0;  e^{−h}(cos h − sin h)  e^{−h} sin h],   det O = e^{−h} sin h.

Thus, the ZOH sampled system is not observable for sin h = 0, i.e. it is observable for h ≠ nπ, n = 1, 2, ....

(c) The closed loop poles are given by

    0 = 1 + KG(q) = 1 + K(q + 2)/(q(q − 1))   ⇔   0 = q(q − 1) + K(q + 2) = q² + (K − 1)q + 2K.

The zeros of the polynomial z² + αz + β lie inside the unit circle if and only if |α| − 1 < β < 1:

    β < 1:        2K < 1  ⇔  K < 0.5,
    α − 1 < β:    K − 1 − 1 < 2K  ⇔  −2 < K,
    −α − 1 < β:   −K + 1 − 1 < 2K  ⇔  0 < K.

Hence, the closed loop system is stable for 0 < K < 0.5.

(d) Let D(kh) denote the Tustin approximation of the time derivative of y(kh) (i.e. D(kh) ≈ ẏ(kh)). Then

    D(kh) = (2(q − 1))/(h(q + 1)) y(kh)   ⇔   h(D(kh + h) + D(kh)) = 2(y(kh + h) − y(kh)).

This is a difference equation that shows how D(kh + h) can be computed (recursively) from D(kh), y(kh) and y(kh + h), where the latter two can be regarded as inputs. Now observe that integration is the inverse operation of differentiation. Thus, if we instead regard D(t) as input and y(t) as output, we can see y(t) as the integral of D(t). Particularly, in the time interval kh ≤ t ≤ kh + h we can see the difference y(kh + h) − y(kh) as the integral ∫_{kh}^{kh+h} D(t) dt. Numerical integration by use of the trapezoid rule then gives

    ∫_{kh}^{kh+h} D(t) dt ≈ (D(kh) + D(kh + h))/2 · (kh + h − kh) = (h/2)(D(kh) + D(kh + h)).

By equating this with the difference above we get

    y(kh + h) − y(kh) = (h/2)(D(kh) + D(kh + h)),

which is exactly the same difference equation as the one for Tustin's approximation.
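The determinant formula in 1(b) is easy to cross-check numerically; a minimal sketch, assuming Python with scipy (the input matrix plays no role for observability and is omitted):

# Cross-check of 1(b): det O of the ZOH-sampled system equals e^{-h} sin h.
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0],
              [-2.0, 0.0]])
C = np.array([[1.0, 0.0]])

for h in (0.5, 1.0, np.pi, 2 * np.pi):
    F = expm(A * h)                     # F = e^{Ah}
    O = np.vstack([C, C @ F])           # observability matrix of (F, C)
    print("h =", round(h, 3), " det O =", np.linalg.det(O),
          " e^-h sin h =", np.exp(-h) * np.sin(h))
# det O vanishes exactly when h is a multiple of pi, as derived above.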

2. (a) We have

    Φ_z(ω) = |2/(iω + 3)|² Φ_w(ω) = (2/(iω + 3))(2/(−iω + 3)) · 4 = 16/(ω² + 9).

(b) Set up a state space representation for z:

    z = (2/(p + 3)) w   ⇔   ż = −3z + 2w.

Now Π_z = Ez² solves the continuous-time Lyapunov equation

    0 = AΠ_z + Π_z A^T + N R_w N^T = 2(−3)Π_z + 2² · 4 = −6Π_z + 16   ⇒   Π_z = 16/6 = 8/3.

(c) Again we can use that Φ_y(ω) = |G(iω)|² Φ_z(ω), and from (a) we have Φ_z(ω). Thus,

    |G(iω)|² = Φ_y(ω)/Φ_z(ω) = [(4ω² + 8)/(ω⁴ + 25ω² + 144)] / [16/(ω² + 9)]
             = (4ω² + 8)(ω² + 9)/(16(ω⁴ + 25ω² + 144))
             = (4ω² + 8)(ω² + 9)/(16(ω² + 9)(ω² + 16))
             = (0.25ω² + 0.5)/(ω² + 16).

Based on the degrees of ω in the numerator and denominator we try

    G(p) = (b₁p + b₂)/(p + a)   ⇒   |G(iω)|² = (ib₁ω + b₂)(−ib₁ω + b₂)/((iω + a)(−iω + a)) = (b₁²ω² + b₂²)/(ω² + a²).

Comparison with the expression above gives

    b₁² = 0.25,  b₁ = 0.5,   b₂² = 0.5,  b₂ = √0.5,   a² = 16,  a = 4,   so   G(p) = (0.5p + √0.5)/(p + 4).

The negative roots are omitted since stationarity/stability ⇒ a > 0, and minimum phase together with G(0) ≠ 0 ⇒ b₁, b₂ > 0.
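Both the variance in 2(b) and the spectral factor in 2(c) can be verified numerically; a sketch, assuming Python with numpy/scipy and using the spectra given in the problem together with the G(p) derived above:

# Cross-check of solution 2: Lyapunov equation and spectral factorization.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# z-dot = -3 z + 2 w, with white-noise intensity R_w = 4:
A, N, Rw = np.array([[-3.0]]), np.array([[2.0]]), 4.0
Pi_z = solve_continuous_lyapunov(A, -(N @ N.T) * Rw)   # solves A X + X A^T = -N Rw N^T
print("E z^2 =", Pi_z[0, 0], "(exact: 8/3 =", 8 / 3, ")")

# |G(iw)|^2 * Phi_z(w) should reproduce Phi_y(w).
w = np.linspace(0.0, 20.0, 500)
Phi_z = 16.0 / (w**2 + 9.0)
Phi_y = (4.0 * w**2 + 8.0) / (w**4 + 25.0 * w**2 + 144.0)
G = (0.5 * 1j * w + np.sqrt(0.5)) / (1j * w + 4.0)     # G(p) = (0.5 p + sqrt(0.5)) / (p + 4)
print("max factorization error:", np.max(np.abs(np.abs(G)**2 * Phi_z - Phi_y)))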

3. (a) Theorem 5.6: The Kalman filter is qx̂ = Fx̂ + Gu + K(y − Hx̂) with

    K = (FPH^T + NR₁₂)(HPH^T + R₂)⁻¹,

where P = P^T ≥ 0 solves the DARE

    P = FPF^T + NR₁N^T − (FPH^T + NR₁₂)(HPH^T + R₂)⁻¹(FPH^T + NR₁₂)^T.

Here F = N = H = 1, R₁ = 2, R₂ = 4 and R₁₂ = 0. The DARE then is

    P = P + 2 − P²/(P + 4)   ⇔   P² − 2P − 8 = 0   ⇒   P = 1 + √(1 + 8) = 4.

Thus, K = P/(P + R₂) = 4/8 = 0.5 and the Kalman filter is

    qx̂ = x̂ + 0.5(y − x̂) = 0.5x̂ + 0.5y.

(b) Theorem 5.6 ⇒ E x̃x̃^T = P = 4.

(c) For any observer

    qx̃ = (F − KH)x̃ + Nv₁ − Kv₂

holds. Then E x̃x̃^T = Π_x̃ solves the discrete-time Lyapunov equation

    Π_x̃ = (F − KH)Π_x̃(F − KH)^T + [N  −K] R_ν [N  −K]^T,   where R_ν = Eνν^T, ν = [v₁^T  v₂^T]^T.

Here we have F − KH = 0.5, N = 1, K = 0.5 and w = v − 0.5e, so

    qx̃ = 0.5x̃ + w − 0.5e = 0.5x̃ + v − 0.5e − 0.5e = 0.5x̃ + v − e,

and the Lyapunov equation becomes (since v and e are uncorrelated)

    Π_x̃ = 0.5²Π_x̃ + 1 + 4   ⇔   0.75Π_x̃ = 5   ⇒   Π_x̃ = 20/3 ≈ 6.67.

(d) The minimal covariance is obtained for the Kalman filter and, as stated in (a) and (b), it is the P that solves the DARE. Here the circumstances differ from those in (a): we still have F = N = H = 1 and R₂ = Ee² = 4. However,

    R₁ = Ew² = E(v − 0.5e)² = Ev² − Eve + 0.5²Ee² = 1 + 0.5² · 4 = 2,
    R₁₂ = Ewe = E(v − 0.5e)e = Eve − 0.5Ee² = −0.5 · 4 = −2.

The DARE becomes

    P = P + 2 − (P − 2)²/(P + 4)   ⇔   P² − 6P − 4 = 0   ⇒   P = 3 + √(9 + 4) = 3 + √13 ≈ 6.606.

4. (a) True; (b) True (Theorem 5.5); (c) False (MPC requires numerical optimization); (d) True; (e) True (this is handled by the numerical optimization); (f) True (the left half plane is mapped onto the unit disc); (g) False (for some systems observability is lost for certain sampling intervals).
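The covariances in 3(a)-(d) can be cross-checked numerically; a minimal sketch, assuming Python with numpy/scipy (the filtering DARE in (a) is solved through the dual ARE, and the correlated-noise case in (d) is reduced to the scalar quadratic derived above):

# Cross-check of solution 3.
import numpy as np
from scipy.linalg import solve_discrete_are

F = H = np.array([[1.0]])
R1, R2 = np.array([[2.0]]), np.array([[4.0]])

# (a)/(b): filtering DARE via the dual ARE (a = F^T, b = H^T, q = R1, r = R2).
P = solve_discrete_are(F.T, H.T, R1, R2)
K = (F @ P @ H.T) @ np.linalg.inv(H @ P @ H.T + R2)
print("P =", P[0, 0], " K =", K[0, 0])            # P = 4, K = 0.5

# (c): gain K = 0.5 kept although w = v - 0.5e; the error obeys
#      xtilde(k+1) = 0.5 xtilde(k) + v(k) - e(k), so in stationarity:
Pi = (1.0 + 4.0) / (1.0 - 0.5**2)
print("Pi =", Pi, " (= 20/3)")

# (d): optimal filter with R12 = Ewe = -2; the scalar DARE reduces to
#      P^2 - 6P - 4 = 0 (see above); take the positive root.
Pd = np.max(np.roots([1.0, -6.0, -4.0]).real)
print("P_d =", Pd, " exact: 3 + sqrt(13) =", 3 + np.sqrt(13))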

5. (a) The associated Riccati equation for the LQ problem is the CARE

    0 = A^T S + SA + M^T Q₁ M − SBQ₂⁻¹B^T S.

Here we notice that z = y = x ⇒ M = I, and that B = [1  0]^T and Q₂ = 1. Thus we get

    Q₁ = −A^T S − SA + SBB^T S
       = −[0 1; 0 0][8 25; 25 199] − [8 25; 25 199][0 0; 1 0] + [8 25; 25 199][1; 0][1  0][8 25; 25 199]
       = [14  1; 1  625].

(b) The feedback gain is L = Q₂⁻¹B^T S = [8  25]. The poles are then given by

    0 = det(sI − A + BL) = det([s 0; 0 s] − [0 0; 1 0] + [1; 0][8  25]) = det([s + 8  25; −1  s]) = s² + 8s + 25.

The poles are s = −4 ± √(4² − 25) = −4 ± i3.

(c) From (6) we get pw = −0.9w + 2v. Combining this with (5) gives

    px₁ = u,
    px₂ = x₁ + w,
    pw = −0.9w + 2v,
    y₁ = x₁,
    y₂ = x₂,

i.e. ẋ̄ = Āx̄ + B̄u + Ñv, y = C̄x̄, with

    Ā = [0 0 0; 1 0 1; 0 0 −0.9],   B̄ = [1; 0; 0],   Ñ = [0; 0; 2],   C̄ = [1 0 0; 0 1 0].

(d) Now z = y = C̄x̄ ⇒ M = C̄. Then L̄ = B̄^T S̄, where S̄ = S̄^T ≥ 0 solves the CARE

    0 = Ā^T S̄ + S̄Ā + C̄^T Q₁ C̄ − S̄B̄B̄^T S̄   (since Q₂ = 1),

with the matrices given in (c). (See Theorem 9.1.)

6. (a) Rewrite the difference equations with the shift operator q:

    (q + 0.2)y₁ − y₂ = (2q − 1)u,
    y₁ + (q − 0.7)y₂ = 2u,
  ⇔
    [q + 0.2  −1; 1  q − 0.7][y₁; y₂] = [2q − 1; 2] u.

Inverting the polynomial matrix gives

    [y₁; y₂] = [q + 0.2  −1; 1  q − 0.7]⁻¹ [2q − 1; 2] u
             = [ (2q² − 2.4q + 2.7)/(q² − 0.5q + 0.86) ;  1.4/(q² − 0.5q + 0.86) ] u,

i.e.

    y₁ = 2u + ((−1.4q + 0.98)/(q² − 0.5q + 0.86)) u,   y₂ = (1.4/(q² − 0.5q + 0.86)) u.

(b) One input ⇒ controller canonical form works. Notice that y₁ requires a direct term:

    qx = [0.5  −0.86; 1  0] x + [1; 0] u,
    [y₁; y₂] = [−1.4  0.98; 0  1.4] x + [2; 0] u.
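The matrix algebra in 5(a)-(b) is easy to verify numerically; a sketch, assuming Python with numpy/scipy (the CARE solver should return the given S when fed the recovered Q₁):

# Cross-check of solution 5(a)-(b).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 0.0],
              [1.0, 0.0]])
B = np.array([[1.0],
              [0.0]])
S = np.array([[8.0, 25.0],
              [25.0, 199.0]])

# (a): Q1 such that S solves  0 = A^T S + S A + Q1 - S B B^T S   (M = I, Q2 = 1).
Q1 = -(A.T @ S + S @ A) + S @ B @ B.T @ S
print("Q1 =\n", Q1)                                   # [[14, 1], [1, 625]]

# Consistency: the CARE with this Q1 should give back S.
print("CARE solution =\n", solve_continuous_are(A, B, Q1, np.array([[1.0]])))

# (b): L = B^T S and the closed-loop poles of A - B L.
L = B.T @ S
print("L =", L, " closed-loop poles:", np.linalg.eigvals(A - B @ L))  # -4 +/- 3i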

(c) Observability from y₂ ⇔ check whether or not the observability matrix O = [H; HF] has full rank, where H = [0  1.4] is the y₂-row of the output matrix and F is the state matrix in (b):

    O = [0  1.4; 1.4  0]   ⇒   full rank ⇒ observable from y₂.
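A final numerical cross-check of Problem 6; a sketch, assuming Python with numpy: the state-space model in (b) reproduces the transfer operator in (a), and the observability matrix in (c) has full rank.

# Cross-check of solution 6.
import numpy as np

A = np.array([[0.5, -0.86],
              [1.0,  0.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[-1.4, 0.98],
              [ 0.0, 1.40]])
D = np.array([[2.0],
              [0.0]])

# (a): evaluate C (zI - A)^(-1) B + D at a test point and compare with G(q).
z = 1.7 + 0.3j
G_ss = C @ np.linalg.inv(z * np.eye(2) - A) @ B + D
den = z**2 - 0.5 * z + 0.86
G_tf = np.array([[(2 * z**2 - 2.4 * z + 2.7) / den],
                 [1.4 / den]])
print("max |G_ss - G_tf| =", np.max(np.abs(G_ss - G_tf)))   # machine precision

# (c): observability from y2 alone.
C2 = C[1:2, :]
O = np.vstack([C2, C2 @ A])
print("O =\n", O, "\nrank =", np.linalg.matrix_rank(O))     # rank 2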