
EECE 574 - Adaptive Control
Recursive Identification in Closed-Loop and Adaptive Control

Guy Dumont
Department of Electrical and Computer Engineering
University of British Columbia
January 2010

Tracking Time-Varying Parameters

All previous methods use the least-squares criterion
$$V(t) = \frac{1}{t}\sum_{i=1}^{t}\left[y(i) - x^T(i)\hat{\theta}\right]^2$$
and thus identify the average behaviour of the process. For standard RLS, the estimation gain eventually converges to zero and adaptation stops.

Forgetting Factor

When the parameters are time-varying, it is desirable to base the identification on the most recent data rather than on old data that are no longer representative of the process. This can be achieved by exponential discounting of old data, using the criterion
$$V(t) = \frac{1}{t}\sum_{i=1}^{t}\lambda^{t-i}\left[y(i) - x^T(i)\hat{\theta}\right]^2$$
where 0 < λ ≤ 1 is called the forgetting factor.

Forgetting Factor

The new criterion can also be written
$$V(t) = \lambda V(t-1) + \left[y(t) - x^T(t)\hat{\theta}\right]^2$$
Then, it can be shown (Goodwin and Payne, 1977) that the RLS scheme becomes:

RLS with Forgetting
$$\hat{\theta}(t+1) = \hat{\theta}(t) + K(t+1)\left[y(t+1) - x^T(t+1)\hat{\theta}(t)\right]$$
$$K(t+1) = \frac{P(t)x(t+1)}{\lambda + x^T(t+1)P(t)x(t+1)}$$
$$P(t+1) = \frac{1}{\lambda}\left[P(t) - \frac{P(t)x(t+1)x^T(t+1)P(t)}{\lambda + x^T(t+1)P(t)x(t+1)}\right]$$

In choosing λ, one has to compromise between fast tracking and long-term quality of the estimates. The use of the forgetting factor may give rise to problems.
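To make the update concrete, here is a minimal NumPy sketch of RLS with a forgetting factor. The function name and the simulated first-order ARX example (parameter values, noise level, jump in a) are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def rls_forgetting_step(theta, P, x, y, lam=0.95):
    """One RLS update with forgetting factor lam (0 < lam <= 1)."""
    x = x.reshape(-1, 1)
    denom = lam + float(x.T @ P @ x)
    K = (P @ x) / denom                        # gain K(t+1)
    eps = y - float(x.T @ theta)               # prediction error
    theta = theta + K * eps                    # parameter update
    P = (P - (P @ x @ x.T @ P) / denom) / lam  # covariance update
    return theta, P

# Illustrative use on y(t) + a*y(t-1) = b*u(t-1) + e(t) with a jump in 'a'
rng = np.random.default_rng(0)
N = 500
u = rng.choice([-1.0, 1.0], size=N)            # rough binary excitation
y = np.zeros(N)
for t in range(1, N):
    a_t = -0.8 + 0.3 * (t > N // 2)            # parameter jump halfway through
    y[t] = -a_t * y[t-1] + 0.5 * u[t-1] + 0.05 * rng.standard_normal()

theta = np.zeros((2, 1))
P = 100.0 * np.eye(2)
for t in range(1, N):
    x = np.array([-y[t-1], u[t-1]])            # regressor for theta = (a, b)
    theta, P = rls_forgetting_step(theta, P, x, y[t], lam=0.95)
print(theta.ravel())   # should be close to the post-jump values (-0.5, 0.5)
```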

Forgetting Factor

The smaller λ is, the faster the algorithm can track, but the more the estimates will vary, even if the true parameters are time-invariant. A small λ may also cause blowup of the covariance matrix P: in the absence of excitation, the covariance update equation essentially becomes
$$P(t+1) = \frac{1}{\lambda}P(t)$$
in which case P grows exponentially, leading to wild fluctuations in the parameter estimates.

Variable Forgetting Factor

One way around this is to vary the forgetting factor according to the prediction error ε, as in
$$\lambda(t) = 1 - k\,\varepsilon^2(t)$$
Then, in case of low excitation, ε will be small and λ will be close to 1. In case of large prediction errors, λ will decrease.
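As a sketch, this rule can be plugged into the forgetting-factor RLS update above by recomputing λ at every step from the latest prediction error; the gain k and the lower clip lam_min are illustrative safeguards, not values given on the slide.

```python
def variable_lambda(eps, k=0.01, lam_min=0.9):
    """Variable forgetting factor lambda(t) = 1 - k*eps(t)**2, clipped below at lam_min."""
    return max(lam_min, 1.0 - k * eps ** 2)
```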

Exponential Forgetting and Resetting Algorithm (EFRA)

The following scheme [1] is recommended:

EFRA Algorithm
$$\varepsilon(t+1) = y(t+1) - x^T(t+1)\hat{\theta}(t)$$
$$\hat{\theta}(t+1) = \hat{\theta}(t) + \frac{\alpha P(t)x(t+1)}{\lambda + x^T(t+1)P(t)x(t+1)}\,\varepsilon(t+1)$$
$$P(t+1) = \frac{1}{\lambda}\left[P(t) - \frac{P(t)x(t+1)x^T(t+1)P(t)}{\lambda + x^T(t+1)P(t)x(t+1)}\right] + \beta I - \gamma P(t)^2$$
where I is the identity matrix, and α, β and γ are constants.

[1] M.E. Salgado, G.C. Goodwin, and R.H. Middleton, "Exponential Forgetting and Resetting", International Journal of Control, vol. 47, no. 2, pp. 477-485, 1988.
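A minimal sketch of one EFRA step, following the update equations as reconstructed above; the default constants mirror those quoted on the next slide, and the function name is illustrative.

```python
import numpy as np

def efra_step(theta, P, x, y, alpha=0.5, beta=0.005, gamma=0.005, lam=0.95):
    """One step of the Exponential Forgetting and Resetting Algorithm (EFRA),
    per Salgado, Goodwin and Middleton (1988), as written on the slide."""
    x = x.reshape(-1, 1)
    eps = y - float(x.T @ theta)                       # prediction error
    denom = lam + float(x.T @ P @ x)
    theta = theta + alpha * (P @ x) * eps / denom      # parameter update
    P = (P - (P @ x @ x.T @ P) / denom) / lam \
        + beta * np.eye(P.shape[0]) - gamma * (P @ P)  # bounded covariance update
    return theta, P
```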

Exponential Forgetting and Resetting Algorithm (EFRA)

With the EFRA, the covariance matrix is bounded on both sides:
$$\sigma_{\min} I \leq P(t) \leq \sigma_{\max} I \quad \forall t$$
where
$$\sigma_{\min} \approx \frac{\beta}{\alpha + \eta}, \qquad \sigma_{\max} \approx \frac{\eta}{\gamma} + \frac{\beta}{\eta}, \qquad \eta = \frac{1-\lambda}{\lambda}$$
With α = 0.5, β = γ = 0.005 and λ = 0.95, σ_min ≈ 0.01 and σ_max ≈ 10.
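A quick numeric check of these bound expressions, as reconstructed above, using the constants quoted on the slide:

```python
alpha, beta, gamma, lam = 0.5, 0.005, 0.005, 0.95
eta = (1 - lam) / lam                     # about 0.053
sigma_min = beta / (alpha + eta)          # about 0.009, i.e. roughly 0.01
sigma_max = eta / gamma + beta / eta      # about 10.6, i.e. roughly 10
print(eta, sigma_min, sigma_max)
```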

Identification in Closed Loop: The Identifiability Problem

Let the system be described by
$$y(t) + a\,y(t-1) = b\,u(t-1) + e(t)$$
with the feedback
$$u(t) = g\,y(t)$$
Let â and b̂ be closed-loop estimates of a and b. The closed-loop system can then be written as:
$$y(t) + (\hat{a} - \hat{b}g)\,y(t-1) = e(t)$$

The Identifiability Problem

Hence any estimates â and b̂ that satisfy
$$\hat{a} - \hat{b}g = a - bg$$
will give the same value of the identification criterion. In particular, all estimates of the form
$$\hat{a} = a + k\,g, \qquad \hat{b} = b + k$$
for arbitrary k will give an equally good description of the process, since then â − b̂g = a + kg − (b + k)g = a − bg.

The Identifiability Problem

If the identification is performed using two feedback gains g₁ and g₂, or if the parameter a is fixed, then the system becomes identifiable, because we then have as many equations as unknowns.
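A short simulation of this example, illustrating the problem numerically: with u(t) = g·y(t) the regressors −y(t−1) and u(t−1) are perfectly collinear, so the least-squares normal matrix is singular and a and b cannot be separated. The numerical values a = −0.8, b = 0.5, g = 0.2 are illustrative assumptions, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, g = -0.8, 0.5, 0.2                 # assumed illustrative values
N = 1000
y = np.zeros(N)
u = np.zeros(N)
for t in range(1, N):
    y[t] = -a * y[t-1] + b * u[t-1] + rng.standard_normal()
    u[t] = g * y[t]                      # pure proportional feedback, no excitation

# Least-squares regressors for y(t) = -a*y(t-1) + b*u(t-1) + e(t)
X = np.column_stack([-y[:-1], u[:-1]])
print(np.linalg.matrix_rank(X.T @ X))    # 1: the normal matrix is singular
```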

Definitions

Let the discrete plant be described by
$$y(t) = G_D(q^{-1})u(t) + G_N(q^{-1})e(t)$$
where G_D and G_N are linear rational transfer functions in the backward shift operator q⁻¹ (i.e. q⁻¹y(t) = y(t−1)) that can be parameterized by a vector θ, and {e(t)} ~ N(0, σ). Let S denote this true system. Assume that the identification is performed with the feedback controller R such that u(t) = R y(t). The problem is then to find θ̂, an estimate of θ, such that the model M(θ̂) given by
$$y(t) = \hat{G}_D(q^{-1})u(t) + \hat{G}_N(q^{-1})e(t)$$
where Ĝ_D(q⁻¹) and Ĝ_N(q⁻¹) are parameterized by θ̂, describes the system S. Assume that an identification method denoted I is used to obtain the estimate θ̂.

Definitions

A loose and intuitive definition of identifiability is that M(θ̂) describes S as the number of measurements N tends to infinity. Let us define
$$D_T(S,M) = \left\{\hat{\theta} \;\middle|\; \hat{G}_D(q^{-1}) \equiv G_D(q^{-1}) \text{ and } \hat{G}_N(q^{-1}) \equiv G_N(q^{-1}) \;\;\forall q\right\}$$
This is the set of desired estimates, which corresponds to models M(θ̂) with the same plant and noise transfer functions as the actual system S(θ). Note that this set depends neither on the regulator R nor on the identification method I. Note also that the orders of the model transfer functions can be greater than those of the system, in which case there exist some pole-zero cancellations. The actual estimates θ̂ depend on the number of measurements, the system and its model, the regulator and the identification method, and can be written as θ̂(N;S,M,I,R).

Definitions

Definition 1: The system S is said to be system identifiable under M, I, R, i.e. SI(M,I,R), if θ̂(N;S,M,I,R) → D_T(S,M) w.p. 1 as N → ∞.

Definition 2: The system S is said to be strongly system identifiable under I and R, i.e. SSI(I,R), if it is SI(M,I,R) for all M such that D_T(S,M) ≠ ∅, where ∅ denotes the empty set.

Definition 3: The system S is said to be parameter identifiable under M, I and R, i.e. PI(M,I,R), if it is SI(M,I,R) and D_T(S,M) consists of only one element.

According to these definitions, the first-order system of the earlier identifiability example is neither SI nor PI for that class of models. Moreover, since D_T(S,M) ≠ ∅ for that model class, the system is not SSI either. However, when two regulators are used, the system becomes PI. It is also important to note that when a system is SSI(I,R), the fact that the identification is performed in closed loop is irrelevant, and the identification can then be performed as if the system were operating in open loop.

Example

Consider the system
$$A(q^{-1})y(t) = B(q^{-1})q^{-k}u(t) + C(q^{-1})e(t)$$
with the feedback
$$F(q^{-1})u(t) = G(q^{-1})y(t)$$
The closed-loop system can then be described as
$$\left(AF - q^{-k}BG\right)y(t) = CF\,e(t)$$
It is then obvious that any estimates Â and B̂ such that
$$\hat{A}F - q^{-k}\hat{B}G = AF - q^{-k}BG$$
will describe the above system. Hence, if L(q⁻¹) is an arbitrary polynomial, any Â and B̂ such that
$$\hat{A} = A + LG, \qquad \hat{B} = B + q^{k}LF$$
are possible estimates, since substitution gives ÂF − q⁻ᵏB̂G = AF + LGF − q⁻ᵏBG − LGF = AF − q⁻ᵏBG. This means that the true order of the system cannot be established from this type of closed-loop experiment and thus has to be known a priori.

Identifiability Conditions

Let the system S be described by the following ARMAX process:
$$A(q^{-1})y(t) = q^{-k}B(q^{-1})u(t) + C(q^{-1})e(t)$$
where
$$A(q^{-1}) = 1 + a_1 q^{-1} + \dots + a_{n_a} q^{-n_a}$$
$$B(q^{-1}) = b_1 q^{-1} + \dots + b_{n_b} q^{-n_b}$$
$$C(q^{-1}) = 1 + c_1 q^{-1} + \dots + c_{n_c} q^{-n_c}$$
with k ≥ 0, n_a ≥ 0, n_b ≥ 1, n_c ≥ 0. A, B and C are assumed to be coprime, i.e. to have no common factors.

Identifiability Conditions

Let the regulator R be given by:
$$F(q^{-1})u(t) = G(q^{-1})y(t)$$
where
$$F(q^{-1}) = 1 + f_1 q^{-1} + \dots + f_{n_f} q^{-n_f}$$
$$G(q^{-1}) = g_0 + g_1 q^{-1} + \dots + g_{n_g} q^{-n_g}$$
F and G are coprime, and G may contain a delay, i.e. some of its leading coefficients may be zero. Let the class of models M(θ̂) be
$$\hat{A}(q^{-1})y(t) = q^{-\hat{k}}\hat{B}(q^{-1})u(t) + \hat{C}(q^{-1})e(t)$$
with n̂_a ≥ 0, n̂_b ≥ 1, n̂_c ≥ 0, k̂ ≥ 0.

Identifiability Conditions

Introduce
$$n = \min\left[(\hat{n}_a - n_a),\;(\hat{n}_b + \hat{k} - n_b - k),\;(\hat{n}_c - n_c)\right]$$
The system with its feedback can be written as
$$y(t) = \frac{CF}{AF - q^{-k}BG}\,e(t)$$
Let n_p be the number of common factors between C and (AF − q⁻ᵏBG), and let the order of the polynomial AF − q⁻ᵏBG be
$$\max\left[(n_a + n_f),\;(k + n_b + n_g)\right] - r, \qquad r \geq 0$$
Note that r > 0 means n_a + n_f = k + n_b + n_g and that some of the highest-order coefficients are zero. Assume that a direct identification, using the sequences {u(t)} and {y(t)}, is employed to estimate θ̂ such that M(θ̂) describes S(θ).

Identifiability Conditions

Theorem (Söderström, 1973): A necessary and sufficient condition for the set of desired estimates D_T(S,M) to be non-empty is
$$n \geq 0 \quad \text{and} \quad \hat{k} \leq k$$
From the definition of n, it is seen that the condition simply means that the true system is contained in the model set, i.e. that the model order is sufficient and that the dead time is not overestimated.

Identifiability Conditions

A related well-known sufficient identifiability condition is that the order of the feedback law be greater than or equal to the order of the forward path; see Goodwin and Payne (1977). The presence of dead time k > 0 helps identifiability. This can be explained by the fact that the feedback signal then contains a component more or less independent of the current output y(t). The previous conditions unfortunately involve the true order of the system. If the true order is not known a priori, then these conditions cannot be checked a posteriori, since the order cannot be determined in closed-loop identification. These conditions are thus of little use in a practical application. The following theorem provides a way to guarantee SSI without knowledge of the system's order.

Identifiability in Closed Loop

Theorem (Söderström et al., 1975): Consider the system S:
$$y(t) = G_D(q^{-1})u(t) + G_N(q^{-1})e(t)$$
where dim y = n_y, dim u = n_u, dim e = n_y, e ~ N(0, Λ), and G_N(q⁻¹) has all zeros outside the unit circle. Let the control signal be given by
$$u(t) = F_i(q^{-1})y(t) + K_i(q^{-1})v(t), \qquad i = 1,\dots,r$$
where {v} is an external signal with dim v = n_v, and F_i and K_i are rational functions. Assume, without loss of generality, that there is a dead time either in the process or in the regulator, i.e. G_D(0)F_i(0) = 0. Then the system S is identifiable if and only if
$$\operatorname{rank}\begin{bmatrix} K_1(z) & \cdots & K_r(z) & F_1(z) & \cdots & F_r(z) \\ 0 & \cdots & 0 & I & \cdots & I \end{bmatrix} = n_y + n_u \quad \forall z$$
where each K_i block has n_v columns and each F_i block has n_y columns.

Practical Conditions for Closed-Loop Identifiability

A necessary condition for identifiability is
$$r(n_v + n_y) \geq n_y + n_u \quad \text{or} \quad r \geq \frac{n_u + n_y}{n_v + n_y}$$
If n_v = n_u, then closed-loop identifiability is always satisfied (r = 1 suffices). Without an external signal (n_v = 0),
$$r \geq 1 + \frac{n_u}{n_y}$$
which, when n_y = n_u, becomes r ≥ 2.

Consequences for Adaptive Control

Under time-varying feedback, identifiability should be guaranteed. Because the controller parameters are constantly updated in adaptive control, identifiability should be guaranteed (at least during the transient phase). Nonlinearities in the controller should also help identifiability.

Identification for Control Relevant Loops

Effect of Under-modelling: Prediction Error Identification (PEM and Undermodelling, Gevers, 2005)

Effect of Under-modelling: Properties of the Identified Model (PEM and Undermodelling, Gevers, 2005)

Bias and Variance Errors

Ljung (1985) was the first to formalize the situation where the true plant does not belong to the model set. The modelling error consists of two distinct components:
- The bias error arises when the model structure is unable to represent the true system.
- The variance error is caused by the noise and the finiteness of the data set.
The variance error goes asymptotically to zero as the number of data points goes to infinity. If the model is not exact, then its quality should reflect its intended use.

Bias and Variance Errors (Gevers, 2005)

Bias and Variance Errors

Introduce θ* as the convergence point of the prediction error estimate:
$$\theta^* = \lim_{N\to\infty}\hat{\theta}_N = \arg\min_{\theta\in D_\theta} V(\theta)$$
As defined by Ljung (1985), the bias and variance errors of the transfer function estimate are:

Definition: Bias and Variance Errors
$$G_0(e^{j\omega}) - \hat{G}(e^{j\omega},\hat{\theta}_N) = \underbrace{G_0(e^{j\omega}) - \hat{G}(e^{j\omega},\theta^*)}_{\text{Bias}} + \underbrace{\hat{G}(e^{j\omega},\theta^*) - \hat{G}(e^{j\omega},\hat{\theta}_N)}_{\text{Variance}}$$

Quantifying the Variance

Define
$$\mathrm{Var}\left(\hat{G}_N(e^{j\omega})\right) = E\left[\left|\hat{G}(e^{j\omega},\hat{\theta}_N) - E\left[\hat{G}(e^{j\omega},\hat{\theta}_N)\right]\right|^2\right]$$
With Φ_u and Φ_v the power spectral densities of u and of the noise H₀e₀, Ljung (1985) showed that
$$\mathrm{Var}\left(\hat{G}_N(e^{j\omega})\right) \approx \frac{n}{N}\,\frac{\Phi_v(\omega)}{\Phi_u(\omega)} \quad \text{as } n, N \to \infty$$
where n is the model order. More recently, exact expressions for finite-order models have been derived.

Pseudo-Random Binary Sequences (PRBS)
(Landau, Lozano, M'Saad, Adaptive Control, Springer-Verlag, 1998, pp. 179-182)

A PRBS is a sequence of rectangular pulses of constant magnitude, modulated in width, that approximates white noise and is thus rich in frequencies. A PRBS of length 2^N − 1 is generated by means of a shift register of N stages, as in the sketch below.
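A minimal sketch of PRBS generation with an N-stage linear feedback shift register. The specific feedback taps (stages 5 and 3, corresponding to the primitive polynomial x⁵ + x² + 1 and hence a maximal-length sequence of period 31) are a standard illustrative choice, not taken from the slides.

```python
import numpy as np

def prbs(N=5, taps=(5, 3), length=None):
    """Generate a PRBS (+/-1 values) from an N-stage shift register.
    The feedback bit is the XOR of the tapped stages (1-indexed from the input)."""
    if length is None:
        length = 2**N - 1
    reg = [1] * N                       # any nonzero initial state
    out = []
    for _ in range(length):
        out.append(1.0 if reg[-1] else -1.0)
        fb = 0
        for tap in taps:
            fb ^= reg[tap - 1]          # XOR of the tapped stages
        reg = [fb] + reg[:-1]           # shift, feed back into the first stage
    return np.array(out)

u = prbs(N=5)                           # maximal-length sequence of 31 values
print(len(u), u[:10])
```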

Pseudo-Random Binary Sequences (PRBS)

The maximum duration of a PRBS rectangular pulse is d_max = N·T_s. For estimation of the static gain, the duration of at least one pulse must be greater than the rise time t_R of the process:
$$d_{\max} = N T_s > t_R$$
Typically, the duration L of the test is taken such that L = (2^N − 1)T_s. The condition N·T_s > t_R might result in too long an experiment: for instance, with T_s = 1 s and t_R = 20 s one would need N ≥ 21, and hence L = (2^21 − 1)T_s, roughly two million samples.

Pseudo-Random Binary Sequences (PRBS)

By choosing the PRBS clock frequency such that
$$f_{PRBS} = \frac{f_s}{p}, \qquad p = 1, 2, 3, \dots$$
the condition becomes d_max = p·N·T_s > t_R. If p is the PRBS frequency divider, then
$$d_{\max} = p N T_s, \qquad L' = pL, \qquad p = 1, 2, 3, \dots$$
If instead N is increased by p − 1, then
$$d_{\max} = (N + p - 1) T_s, \qquad L' = 2^{\,p-1} L, \qquad p = 1, 2, 3, \dots$$

Pseudo-Random Binary Sequences (PRBS)

Dividing the PRBS clock frequency reduces the high-frequency content while augmenting the low-frequency content of the excitation. It is recommended to choose p < 4. For example, for N = 8 and p = 1, 2, 3.

Optimal Input Design

Optimal design requires knowledge of the true system! This advocates an iterative method: first use a PRBS; then, using the estimated model, calculate an input that is optimal for that model.

Persistent Excitation

A signal u is called persistently exciting (PE) of order n if the matrix C_n defined below is positive definite:
$$C_n = \begin{bmatrix} c(0) & c(1) & \cdots & c(n-1) \\ c(1) & c(0) & \cdots & c(n-2) \\ \vdots & & \ddots & \vdots \\ c(n-1) & c(n-2) & \cdots & c(0) \end{bmatrix}, \qquad c(k) = \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^{N} u(t)\,u(t-k)$$

Persistent Excitation

A step is PE of order 1:
$$(q - 1)u(t) = 0$$
A sinusoid is PE of order 2:
$$(q^2 - 2q\cos\omega h + 1)u(t) = 0$$
To identify a transfer function of order n, one needs n sinusoids.
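A small numerical check of these statements, estimating the empirical autocovariance matrix C_n from data and testing its positive definiteness. The helper name, data length and relative threshold are illustrative choices, not prescribed by the slides.

```python
import numpy as np

def pe_order(u, n_max=6):
    """Largest n <= n_max for which the empirical autocovariance matrix
    C_n = [c(|i-j|)] is (numerically) positive definite."""
    N = len(u)
    c = np.array([np.dot(u[k:], u[:N - k]) / N for k in range(n_max)])
    order = 0
    for n in range(1, n_max + 1):
        Cn = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])
        if np.min(np.linalg.eigvalsh(Cn)) > 0.01 * c[0]:   # relative threshold
            order = n
        else:
            break
    return order

t = np.arange(20000)
print(pe_order(np.ones_like(t, dtype=float)))   # step: PE of order 1
print(pe_order(np.sin(0.3 * t)))                # single sinusoid: PE of order 2
```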

Examples: Example 1

Example 1: Excitation

Example 1: Feedback

Example 1: Forgetting Factor

Example 2: Model Structure

Consider the process
$$y(t) - 0.8\,y(t-1) = 0.5\,u(t-1) + e(t) - 0.5\,e(t-1)$$
First, use the estimation model
$$y(t) + a\,y(t-1) = b\,u(t-1) + e(t)$$
with RLS. Then, use the estimation model
$$y(t) + a\,y(t-1) = b\,u(t-1) + e(t) + c\,e(t-1)$$
with RELS (recursive extended least squares).
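A sketch of this comparison, simulating the process above with a binary excitation and estimating the parameters with plain RLS and with RELS, which uses the posterior residual as a substitute for the unmeasurable e(t). The input choice, noise level and data length are illustrative assumptions. RLS is biased here because the ARMA noise is correlated with the regressor y(t−1), while RELS also estimates c and removes that bias.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
u = rng.choice([-1.0, 1.0], size=N)               # rough binary excitation
e = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t-1] + 0.5 * u[t-1] + e[t] - 0.5 * e[t-1]

def recursive_ls(build_regressor, n_par, lam=1.0):
    """Generic recursive least squares; RELS reuses it with an extended regressor."""
    theta = np.zeros(n_par)
    P = 1000.0 * np.eye(n_par)
    res = np.zeros(N)                              # posterior residuals (for RELS)
    for t in range(1, N):
        x = build_regressor(t, res)
        denom = lam + x @ P @ x
        K = P @ x / denom
        theta = theta + K * (y[t] - x @ theta)     # update with prior error
        P = (P - np.outer(P @ x, x @ P) / denom) / lam
        res[t] = y[t] - x @ theta                  # posterior residual
    return theta

# RLS: regressor [-y(t-1), u(t-1)], parameters (a, b)
theta_rls = recursive_ls(lambda t, r: np.array([-y[t-1], u[t-1]]), 2)
# RELS: regressor [-y(t-1), u(t-1), res(t-1)], parameters (a, b, c)
theta_rels = recursive_ls(lambda t, r: np.array([-y[t-1], u[t-1], r[t-1]]), 3)
print("RLS  (a, b):   ", theta_rls)                # biased estimate of a
print("RELS (a, b, c):", theta_rels)               # should approach (-0.8, 0.5, -0.5)
```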

Example 2: Model Structure (results)