FRTN15 Predictive Control

Department of AUTOMATIC CONTROL

FRTN15 Predictive Control
Final Exam March 4, 2017, 8am - 3pm

General Instructions
This is an open-book exam. You may use any book you want, including the slides from the lectures, but no exercises, exams, or solution manuals are allowed. Solutions and answers to the problems should be well motivated.

The exam consists of 6 problems. The credit for each problem is indicated in the problem. The total number of credits is 25 points.

Preliminary grade limits:
Grade 3: 12-16 points
Grade 4: 17-21 points
Grade 5: 22-25 points

Results
The results of the exam will be presented in LADOK by March 24.

1. Choose suitable controller designs for the following control problems (a-c). Provide clear and concise justification for your design choices along with the necessary adjustments according to the stated specifications. No calculations are needed.

a. Design a controller for the MIMO system

$$\begin{pmatrix} x_1(k+1) \\ x_2(k+1) \end{pmatrix} = A \begin{pmatrix} x_1(k) \\ x_2(k) \end{pmatrix} + B \begin{pmatrix} u_1(k) \\ u_2(k) \end{pmatrix}, \qquad \begin{pmatrix} y_1(k) \\ y_2(k) \end{pmatrix} = C \begin{pmatrix} x_1(k) \\ x_2(k) \end{pmatrix}$$

so that the constraints

$$\begin{pmatrix} l_{x_1} \\ l_{x_2} \end{pmatrix} \le \begin{pmatrix} x_1(k) \\ x_2(k) \end{pmatrix} \le \begin{pmatrix} d_{x_1} \\ d_{x_2} \end{pmatrix}, \qquad \begin{pmatrix} l_{u_1} \\ l_{u_2} \end{pmatrix} \le \begin{pmatrix} u_1(k) \\ u_2(k) \end{pmatrix} \le \begin{pmatrix} d_{u_1} \\ d_{u_2} \end{pmatrix}, \qquad \forall k$$

are fulfilled. The controller should provide zero offset. The controller should also account for the fact that variations in $u_2$ are much more expensive than variations in $u_1$, and that the control performance of $y_1$ has higher priority than that of $y_2$. (2 p)

b. Design a controller for the system

$$y_{k+2} = -a_1 y_{k+1} - a_2 y_k + b_0 u_{k+1} + b_1 u_k$$

so that the closed-loop dynamics from set-point $u_c$ to output $y$ are given by

$$y_{k+2} = -a_{m1} y_{k+1} - a_{m2} y_k + b_{m0} u^c_{k+1} + b_{m1} u^c_k$$

The system parameters $a_1$, $a_2$, $b_0$ and $b_1$ are slowly time varying and unknown, and the system is expected to be non-minimum phase. The controller should have integral action. (2 p)

c. Design a controller for the system $y_k = H(q)u_k$, where $y$ should follow a known and repetitive signal $u_c$ in a manufacturing application. The system $H(q)$ is not easily modeled but is known to be stable. (1 p)

Solution

a. MPC is a suitable control approach for systems with constraints, since the constraints are included in the optimization problem solved by the MPC. In order to obtain zero offset, disturbance states acting on the outputs can be introduced; these states can then be estimated using a Kalman filter. Due to the higher cost of using $u_2$, the control-action weight matrix $R$ should be chosen so that $R(1,1) \ll R(2,2)$. Due to the importance of controlling $y_1$, the output weight matrix $Q$ should be chosen so that $Q(1,1) \gg Q(2,2)$.
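As an illustration of this kind of design (not part of the original exam solution), the minimal sketch below sets up one constrained MPC step with the weighting discussed above, using the cvxpy package. The plant matrices, bounds, and horizon are invented example values, and the offset-free disturbance model and Kalman filter are omitted.

```python
# Hypothetical MPC sketch for Problem 1a: constrained MIMO MPC with
# Q(1,1) >> Q(2,2) and R(1,1) << R(2,2). All numerical values are made up.
import numpy as np
import cvxpy as cp

A = np.array([[0.9, 0.1], [0.0, 0.8]])      # assumed plant matrices
B = np.array([[1.0, 0.0], [0.0, 0.5]])
C = np.eye(2)
Q = np.diag([10.0, 1.0])                    # y1 prioritized over y2
R = np.diag([0.1, 10.0])                    # u2 variations penalized harder
N = 20                                      # prediction horizon (assumed)
x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
u_lo, u_hi = np.array([-0.5, -0.5]), np.array([0.5, 0.5])

def mpc_step(x0, r):
    """Solve one finite-horizon MPC problem and return the first control move."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((2, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        y = C @ x[:, k]
        cost += cp.quad_form(y - r, Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 x_lo <= x[:, k + 1], x[:, k + 1] <= x_hi,
                 u_lo <= u[:, k], u[:, k] <= u_hi]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u[:, 0].value

print(mpc_step(np.array([0.5, -0.2]), np.array([0.3, 0.0])))
```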

b. A direct or indirect self-tuning regulator is suitable for this type of control problem. Here we adopt the indirect approach. The controller is on the form

$$R(q)u_k = -S(q)y_k + T(q)u^c_k$$

where $R$, $S$ and $T$ depend on $H$ and the closed-loop specifications. The model parameters are estimated using the RLS algorithm with a forgetting factor $\lambda$, since the parameters are time varying; $\lambda$ can be set close to 1 since the parameter variation is known to be slow. With the estimated model parameters the closed-loop poles are placed according to the closed-loop specifications. This is done by solving a Diophantine equation

$$AR + BS = A_o A_m B^+$$

which gives expressions for $R$ and $S$. $T$ is obtained from the equation

$$\frac{BT}{AR + BS} = \frac{BT}{A_o A_m B^+} = \frac{B_m}{A_m}$$

In order to fulfill the specifications we have to include a factor $(q - 1)$ in the controller denominator $R$. Since $B$ is non-minimum phase it is not possible to cancel $B$; therefore $B$ has to be included in $B_m$. With these specifications we have to introduce an observer polynomial $A_o$ of second order.

c. Iterative learning control is suitable since the tracking task is repetitive. After each repetition of $u_c$, the input sequence is updated according to

$$u_{k+1}(t) = Q(q)\bigl(u_k(t) + L(q)(u_c(t) - y_k(t))\bigr)$$

where $Q$ is a low-pass filter and $L(q)$ is a corrective filter. The filters have to be chosen so that the ILC iterations are stable.
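To make the update law concrete, here is a small sketch (not from the exam) of the ILC iteration in Python. The plant, the gain in the corrective filter L(q) = kappa*q, and the zero-phase low-pass Q are illustrative assumptions only.

```python
# Hypothetical ILC sketch for Problem 1c: u_{k+1} = Q( u_k + L*e_k ).
# Plant, filters and signals are assumptions chosen for illustration only.
import numpy as np
from scipy.signal import lfilter, filtfilt

T = 200
t = np.arange(T)
uc = np.sin(2 * np.pi * t / 50)             # known repetitive reference

def plant(u):
    # example stable plant: y_k = 0.8*y_{k-1} + 0.5*u_{k-1}
    return lfilter([0.0, 0.5], [1.0, -0.8], u)

def L(e, kappa=0.3):
    # L(q) = kappa*q : one-step forward (non-causal) shift of the error
    return kappa * np.append(e[1:], 0.0)

def Q(u):
    # zero-phase low-pass filter (allowed since filtering is done off-line)
    b = np.ones(5) / 5
    return filtfilt(b, [1.0], u)

u = np.zeros(T)
for _ in range(30):                         # ILC iterations
    e = uc - plant(u)
    u = Q(u + L(e))
print("final RMS tracking error:", np.sqrt(np.mean((uc - plant(u))**2)))
```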

2. a. Compute the controller parameters for an indirect self-tuning regulator with respect to the system $y_k = H(q)u_k + w_k$, where $w$ is unit-variance white Gaussian noise and

$$H(q) = \frac{b_0 q + b_1}{q^2 + a_1 q + a_2}$$

so that the transfer function from set-point $u_c$ to output $y$ is given by

$$H_m(q) = \frac{b_{m0} q + b_{m1}}{q^2 + a_{m1} q + a_{m2}}$$

The model parameters are unknown and constant. The zero of the system $H$ should be cancelled in the design and the controller should have integral action. (2 p)

b. Modify the indirect self-tuning regulator so that it becomes direct. This time the controller should not have integral action. (2 p)

c. It was found that the pole cancellation led to an oscillating $u$. Suggest a minor modification of the direct self-tuning regulator that solves the problem. (1 p)

Solution

a. First we have to estimate the model parameters. With an indirect STR, this is done by applying the RLS algorithm to the linear-regression model

$$y_k = \varphi_k^T \theta + w_k, \qquad \theta = \begin{pmatrix} a_1 & a_2 & b_0 & b_1 \end{pmatrix}^T, \qquad \varphi_k = \begin{pmatrix} -y_{k-1} & -y_{k-2} & u_{k-1} & u_{k-2} \end{pmatrix}^T$$

A forgetting factor is not needed since the parameters are known to be constant. The controller is given by $Ru = -Sy + Tu_c$, which gives the closed-loop transfer function

$$y = \frac{BT}{AR + BS} u_c$$

Pole placement with zero cancellation gives the equation

$$A(q - 1)\bar R + B^- S = A_o A_m$$

where $R = B^+(q - 1)\bar R$, $B^- = b_0$ and $B^+ = q + b_1/b_0$; the factor $(q - 1)$ is introduced to obtain integral action. The admissibility condition gives that $S$ should be of second order in order to fulfill $\deg S < \deg A + 1$. This gives $\bar R = 1$, i.e. $R = B^+(q - 1)$, since $\deg R = \deg S = \deg T$, and an observer polynomial $A_o = q + a_o$ of first order. Now, the equation is given by

$$(q^2 + a_1 q + a_2)(q - 1) + b_0(s_0 q^2 + s_1 q + s_2) = (q + a_o)(q^2 + a_{m1} q + a_{m2})$$

from which we obtain the coefficients in $S$:

$$a_1 - 1 + b_0 s_0 = a_o + a_{m1}$$
$$a_2 - a_1 + b_0 s_1 = a_{m1} a_o + a_{m2}$$
$$-a_2 + b_0 s_2 = a_{m2} a_o$$

This gives the controller

$$R = B^+(q - 1), \qquad S = s_0 q^2 + s_1 q + s_2, \qquad T = B_m A_o / b_0$$

with

$$s_0 = (a_o + a_{m1} - a_1 + 1)/b_0$$
$$s_1 = (a_{m1} a_o + a_{m2} - a_2 + a_1)/b_0$$
$$s_2 = (a_{m2} a_o + a_2)/b_0$$

where $T$ is obtained by matching

$$\frac{BT}{AR + BS} = \frac{B_m}{A_m}$$
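A quick numerical sanity check of these formulas (not part of the exam): the snippet below computes R, S and T from assumed parameter estimates and verifies that the coefficient-matched equation holds. All numerical values are invented for the example.

```python
# Hypothetical numerical check of the indirect STR design in Problem 2a.
# The plant estimates and the specifications are example values only.
import numpy as np

a1, a2, b0, b1 = -1.5, 0.7, 1.0, 0.5        # estimated plant parameters
am1, am2 = -1.2, 0.36                       # A_m = q^2 + am1*q + am2
ao = -0.2                                   # A_o = q + ao
bm0, bm1 = 0.16, 0.0                        # B_m chosen for unit static gain

# Controller coefficients from the solved Diophantine equation
s0 = (ao + am1 - a1 + 1) / b0
s1 = (am1 * ao + am2 - a2 + a1) / b0
s2 = (am2 * ao + a2) / b0
S = np.array([s0, s1, s2])
R = np.polymul([1, b1 / b0], [1, -1])       # R = B+ (q - 1)
T = np.polymul([1, ao], [bm0, bm1]) / b0    # T = A_o * B_m / b0

# Verify A(q)(q - 1) + b0*S = A_o*A_m (the equation matched coefficient-wise)
lhs = np.polyadd(np.polymul([1, a1, a2], [1, -1]), b0 * S)
rhs = np.polymul([1, ao], [1, am1, am2])
print("R =", R, "\nS =", S, "\nT =", T)
print("Diophantine residual:", np.max(np.abs(lhs - rhs)))
```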

b. In a direct self-tuning regulator we want to formulate a regression problem from which the controller parameters can be estimated directly. This can be achieved by letting the Diophantine equation $A\bar R + B^- S = A_o A_m$ operate on $y(t)$. This gives

$$A\bar R\, y(t) + B^- S\, y(t) = A_o A_m\, y(t)$$
$$B\bar R\, u(t) + B^- S\, y(t) = A_o A_m\, y(t)$$
$$B^- R\, u(t) + B^- S\, y(t) = A_o A_m\, y(t)$$

where the second equality uses $Ay = Bu$ and the third uses $B = B^+ B^-$ with $R = B^+\bar R$. Written in terms of filtered and delayed signals this is

$$y(t) = \mathcal{R}\, u_f(t - d_0) + \mathcal{S}\, y_f(t - d_0) \qquad (1)$$

where

$$y_f(t) = \frac{1}{A_o(q)A_m(q)}\, y(t), \qquad u_f(t) = \frac{1}{A_o(q)A_m(q)}\, u(t)$$

$\mathcal{R} = b_0 R$, $\mathcal{S} = b_0 S$, and $d_0$ is the pole excess of $H$. This is now a linear-regression problem from which $\mathcal{R}$ and $\mathcal{S}$ can be estimated. $T$ is computed in the same way as before. This time $\deg \mathcal{R} = \deg \mathcal{S} = \deg T = 1$ and $\deg A_o = 0$, since we no longer demand integral action.

c. Zero cancellation can be avoided by increasing $d_0$ from 1 to 2 in Eq. (1), see the lecture notes p. 67.
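As a sketch (not from the exam) of how the filtered regression in Eq. (1) can be used, the snippet below filters $u$ and $y$ through $1/(A_o A_m)$ and estimates the controller parameters by batch least squares; a recursive estimator would be used in practice. The simulated plant, the specification $A_m$, and the excitation are assumptions.

```python
# Hypothetical sketch of the direct self-tuner regression (Problem 2b).
# The simulated plant and the specification A_m are example values only.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 2000
u = np.sign(np.sin(0.05 * np.arange(N))) + 0.1 * rng.standard_normal(N)

# "true" plant: y_k = 1.5 y_{k-1} - 0.7 y_{k-2} + 1.0 u_{k-1} + 0.5 u_{k-2}
y = lfilter([0.0, 1.0, 0.5], [1.0, -1.5, 0.7], u)

Am = [1.0, -1.2, 0.36]                      # A_m, with A_o = 1 (degree 0)
uf = lfilter([1.0], Am, u)                  # u_f = u / (A_o A_m)
yf = lfilter([1.0], Am, y)                  # y_f = y / (A_o A_m)

d0 = 1                                      # pole excess of H
# regression y(t) = r0*uf(t-d0) + r1*uf(t-d0-1) + s0*yf(t-d0) + s1*yf(t-d0-1)
t = np.arange(d0 + 1, N)
Phi = np.column_stack([uf[t - d0], uf[t - d0 - 1], yf[t - d0], yf[t - d0 - 1]])
theta = np.linalg.lstsq(Phi, y[t], rcond=None)[0]
print("estimated [r0, r1, s0, s1] =", theta)
```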

3. a. Design a minimum-variance 2-step-ahead predictor for the system

$$y_{k+2} = 1.5 y_{k+1} - 0.5 y_k + 2 w_{k+2} - 2.4 w_{k+1} + 1.2 w_k$$

and compute its prediction-error variance. Here $w$ is unit-variance white Gaussian noise. (2 p)

b. Which of the following statements concerning minimum-variance control (MVC) are true/false? Motivate your answers.

1. The MVC can be thought of as consisting of two parts: one predictor which predicts the effect of the disturbance on the output, and one deadbeat regulator which computes the control signal required to make the predicted output equal to the desired value.
2. MVC is good at handling poorly damped zeros.
3. MVC is insensitive to parameter variations. (1.5 p)

Solution

a. The Diophantine equation to be solved is given by

$$C(q^{-1}) = A(q^{-1})F(q^{-1}) + q^{-2}G(q^{-1})$$

where $F$ and $G$ are of order 1:

$$2 - 2.4q^{-1} + 1.2q^{-2} = (1 - 1.5q^{-1} + 0.5q^{-2})(f_0 + f_1 q^{-1}) + q^{-2}(g_0 + g_1 q^{-1})$$
$$= f_0 + (f_1 - 1.5 f_0)q^{-1} + (0.5 f_0 - 1.5 f_1 + g_0)q^{-2} + (g_1 + 0.5 f_1)q^{-3}$$

Matching the coefficients gives the system of equations

$$2 = f_0, \qquad -2.4 = f_1 - 1.5 f_0, \qquad 1.2 = 0.5 f_0 - 1.5 f_1 + g_0, \qquad 0 = g_1 + 0.5 f_1$$

with the solution

$$f_0 = 2, \qquad f_1 = 0.6, \qquad g_0 = 1.1, \qquad g_1 = -0.3$$

This gives the predictor

$$\hat y_{k+2} = \frac{G(q^{-1})}{C(q^{-1})} y_k$$

with the prediction-error variance

$$E\{(\hat y_{k+2} - y_{k+2})^2\} = (f_0^2 + f_1^2)\sigma_w^2 = 4.36$$
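The Diophantine solution and the variance can be checked numerically; the short sketch below (not part of the exam) verifies the polynomial identity and runs a Monte Carlo estimate of the 2-step prediction-error variance. The simulation length and seed are arbitrary choices.

```python
# Hypothetical check of the 2-step minimum-variance predictor (Problem 3a).
import numpy as np
from scipy.signal import lfilter

A = np.array([1.0, -1.5, 0.5])              # A(q^-1)
C = np.array([2.0, -2.4, 1.2])              # C(q^-1)
F = np.array([2.0, 0.6])                    # F(q^-1) from the Diophantine
G = np.array([1.1, -0.3])                   # G(q^-1) from the Diophantine

# Verify C = A*F + q^-2 * G
res = np.concatenate((C, [0.0])) - (np.polymul(A, F) + np.concatenate(([0.0, 0.0], G)))
print("Diophantine residual:", np.max(np.abs(res)))

# Monte Carlo check of the prediction-error variance (should be close to 4.36)
rng = np.random.default_rng(1)
N = 200000
w = rng.standard_normal(N)
y = lfilter(C, A, w)                        # y_k = (C/A) w_k
yhat = lfilter(G, C, y)                     # yhat[k] predicts y[k+2] at time k
err = y[2:] - yhat[:-2]
print("theoretical:", float(F @ F), " simulated:", err.var())
```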

b.

1. True. In MVC we have that

$$\frac{G(q^{-1})}{C(q^{-1})} y_k + \frac{B(q^{-1})F(q^{-1})}{C(q^{-1})} u_k = 0$$

The left-hand side of this expression is the optimal $d$-step-ahead prediction of $y_{k+d}$; $u_k$ is then chosen so that this prediction is 0.

2. False. MVC introduces zero cancellation, and the system zero polynomial $B$ ends up as poles in the controller. Poorly damped zeros thus lead to poorly damped control signals.

3. False. It is well known that an optimal solution under special circumstances may be sensitive to parameter variations. A good example is the MVC controller in Lab 2; in that case the sensitivity was reduced by using an adaptive MVC controller.

4. Figure 1 presents the estimation performance of an RLS algorithm for different initial covariance matrices $P_0$ and different input sequences $u$ for the system

$$y_k = a y_{k-1} + b u_{k-1} + w_k, \qquad w_k \in N(0, \sigma_w^2)$$

Match the RLS experiments in Figure 1 with the following RLS settings:

1. $u$: square wave with amplitude 2, $\sigma_w^2 = 0.5$, $P_0 = 0.5I$.
2. $u = 0$, $\sigma_w^2 = 0.5$, $P_0 = I$.
3. $u = 2\sin(t)$, $\sigma_w^2 = 25$, $P_0 = I$.
4. $u$: square wave with amplitude 2, $\sigma_w^2 = 0.5$, $P_0 = I$. (4 p)

Figure 1 RLS experiments. Each of the four panels A-D shows the estimates $\hat a$ and $\hat b$ as functions of the sample number.

Solution

D shows the noisiest estimates; D can thus be paired with setting 3, where the noise variance is high. C does not converge to the correct estimates, which can be explained by the insufficiently exciting input signal in setting 2, where $u$ is constant. Two cases remain, with the same input and different $P_0$. A small $P_0$ gives a slower initial transient; A therefore belongs to setting 4 and B belongs to setting 1.
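For intuition (not part of the exam), the sketch below runs a standard RLS estimator on data simulated from the model above, so that the effect of the excitation, the noise variance, and the initial covariance $P_0$ can be reproduced. The true parameters, signal lengths, and the particular $P_0$ values compared are assumptions.

```python
# Hypothetical RLS simulation in the spirit of Problem 4.
# Model: y_k = a*y_{k-1} + b*u_{k-1} + w_k; parameters and settings assumed.
import numpy as np

def rls(u, y, P0, lam=1.0):
    """Recursive least squares for theta = [a, b] with forgetting factor lam."""
    theta = np.zeros(2)
    P = P0.copy()
    for k in range(1, len(y)):
        phi = np.array([y[k - 1], u[k - 1]])            # regressor
        K = P @ phi / (lam + phi @ P @ phi)             # gain
        theta = theta + K * (y[k] - phi @ theta)        # update estimate
        P = (P - np.outer(K, phi) @ P) / lam            # update covariance
    return theta

rng = np.random.default_rng(0)
a, b, N = 0.9, 1.0, 200
u = 2.0 * np.sign(np.sin(2 * np.pi * np.arange(N) / 50))  # square wave, amplitude 2
w = np.sqrt(0.5) * rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a * y[k - 1] + b * u[k - 1] + w[k]

for P0 in (0.5 * np.eye(2), np.eye(2)):                    # compare initial covariances
    print("final estimate", rls(u, y, P0), "with P0 =", P0[0, 0], "* I")
```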

5. The system

$$G(q) = \frac{q + 1}{q(q + 0.6)}$$

is to be controlled using iterative learning control (ILC) according to the block diagram in Figure 2. The control signal is updated according to

$$u_{k+1}(t) = Q(q)\bigl(u_k(t) + L(q)e_k(t)\bigr)$$

a. Explain the roles of the filters $Q(q)$ and $L(q)$ and why they do not have to be causal. (1.5 p)

b. Choose $Q = 1$ and $L = \kappa q(q + 0.6)$ and find a $\kappa$ such that the ILC iterations are stable. (2 p)

Figure 2 An ILC feedback system.

Solution

a. $L$ is the corrective filter that updates $u$ so that the error is counteracted, whilst $Q$ is a low-pass filter that is introduced for improved robustness and noise insensitivity. The filters do not have to be causal since the filtering is done in between (and not during) iterations, when the whole error signal is available.

b. A sufficient condition for stability is given by

$$\sup_\omega \left| 1 - L(e^{i\omega h})G(e^{i\omega h}) \right| < 1$$

Here $L(q)G(q) = \kappa(q + 1)$, so

$$\sup_\omega \left| 1 - L(e^{i\omega h})G(e^{i\omega h}) \right| = \sup_\omega \left| 1 - \kappa\cos(\omega h) - i\kappa\sin(\omega h) - \kappa \right|$$
$$= \sup_\omega \sqrt{(1 - \kappa\cos(\omega h) - \kappa)^2 + \kappa^2\sin^2(\omega h)}$$
$$= \sup_\omega \sqrt{(1 - \kappa)^2 - 2(1 - \kappa)\kappa\cos(\omega h) + \kappa^2}$$

Here we can see that, e.g., $\kappa = 0.1$ gives

$$\sqrt{(1 - \kappa)^2 - 2(1 - \kappa)\kappa\cos(\omega h) + \kappa^2} < 1$$

for all frequencies below the Nyquist frequency (the expression reaches 1 only at $\omega h = \pi$, where $L(e^{i\omega h})G(e^{i\omega h}) = 0$), so the ILC iterations are stable.
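A quick numerical check of this bound (not part of the exam): the snippet evaluates $|1 - L(e^{i\omega h})G(e^{i\omega h})|$ on a frequency grid for a few values of $\kappa$. The grid density and the $\kappa$ values are arbitrary choices.

```python
# Hypothetical numerical check of the ILC stability bound in Problem 5b.
import numpy as np

def contraction(kappa, n=2000):
    """Max over a frequency grid of |1 - L G| with L*G = kappa*(q + 1)."""
    wh = np.linspace(0, np.pi, n, endpoint=False)   # grid below the Nyquist point
    q = np.exp(1j * wh)
    LG = kappa * (q + 1)                            # L = kappa*q*(q+0.6), G = (q+1)/(q(q+0.6))
    return np.max(np.abs(1 - LG))

for kappa in (0.1, 0.5, 1.0, 1.5):
    print(f"kappa = {kappa}: sup |1 - LG| ~ {contraction(kappa):.3f}")
```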

6. a. Give an intuitive explanation of what a Lyapunov function is. Why can it be used to determine the stability of an arbitrary system? (1 p)

b. Consider the MRAS in Figure 3. Use passivity theory to show that the system interconnection is stable if $G(s)$ is the strictly positive real (SPR) system

$$G(s) = \frac{1}{s + 1}$$

(3 p)

Figure 3 A model-reference adaptive system where the feedforward gain $\theta$ is adapted.

Solution

a. A Lyapunov function $V(x)$ describes a quantity in the system that decreases with time. We can think of it as an energy function: if the energy stored in the system decreases with time, the stability of the system is ensured. If $V(x, u)$ contains a controlled variable (control input) $u$, we can construct a control law that shapes $V(x, u)$ such that it fulfills the requirements of a Lyapunov function. The formal requirements on a Lyapunov function are:

- It increases radially with the state vector, i.e., when the state vector is large, the Lyapunov function is large.
- Its time derivative $\dot V$ is non-positive, i.e., the function does not grow over time.
- It is zero at the origin and positive elsewhere.
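As a small illustrative example (not part of the original solution): for the scalar system $\dot x = -x^3$, the function $V(x) = \tfrac{1}{2}x^2$ fulfills all three requirements, since it is radially increasing, zero only at the origin, and

$$\dot V = x\dot x = -x^4 \le 0$$

so the origin is stable. This is concluded without solving the differential equation, which is why the Lyapunov argument works also for arbitrary nonlinear systems.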

Figure 4 Feedback interconnection.

b. In Figure 3 we have a feedback interconnection between the systems $G$ and $H$ of the form displayed in Figure 4. If we can show that one of the systems is strictly passive and the other is passive, then the interconnection is stable. Since $G(s)$ is SPR, and therefore strictly passive, we only have to show that $H$, with input $e$ and output $u_c\theta$, is passive. We have that

$$\int_0^T yu \, dt = \int_0^T u_c\theta \cdot e \, dt = \int_0^T u_c\, \frac{\gamma}{p}(u_c e)\, e \, dt = [w = u_c e] = \int_0^T w\, \frac{\gamma}{p}(w) \, dt$$

where $p$ is the derivative operator. This expression is non-negative since $\gamma/s$ is positive real; $H$ is therefore passive and the interconnection is stable.
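To see explicitly why the last integral is non-negative (a step left implicit in the solution above): with $v(t) = \int_0^t w(\tau)\, d\tau$, so that $\frac{\gamma}{p}(w) = \gamma v$ and $\dot v = w$,

$$\int_0^T w\, \frac{\gamma}{p}(w)\, dt = \gamma \int_0^T \dot v\, v\, dt = \frac{\gamma}{2}\bigl(v(T)^2 - v(0)^2\bigr) = \frac{\gamma}{2} v(T)^2 \ge 0$$

for any adaptation gain $\gamma > 0$, which is the passivity of the integrator $\gamma/s$ used in the argument.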