
EECE 574 - Adaptive Control
Basics of System Identification

Guy Dumont
Department of Electrical and Computer Engineering
University of British Columbia
January 2010

Identification for Adaptive Control

- Introduction
- Models for identification
- Identification methods
- Recursive identification
- Tracking of time-varying parameters
- Pseudo-random binary sequences (PRBS)
- Identification in closed loop

Recommended textbook on identification: L. Ljung, System Identification: Theory for the User, 2nd edition, Prentice Hall, 1999.

Introduction: System Identification

System identification is the field of building a dynamic model of a process from experimental data. It consists of the following steps:

- Experimental planning
- Selection of model structure
- Parameter estimation
- Validation

Introduction: Experimental Design

- Experiments are difficult and costly
- The input signal should excite all modes of the process
- This is problematic in adaptive control with no active learning
- Closed-loop identifiability is helped by:
  - Nonlinear or time-varying feedback
  - Setpoint changes

Models for Identification

- Models are really approximations of the real system
- There is NO unique model to describe a process
- The structure is derived from prior knowledge and from the purpose of the model
- It is natural to use a general linear system, called a black-box model:
  - Impulse response model
  - Step response model
  - Transfer function model
  - General stochastic model
  - State-space model
  - Laguerre model

Models for Identification

A few words of caution:

- Do NOT fall in love with your model...
- "When map disagrees with Nature, TRUST Nature" (Swedish Army Manual)

Models for Identification: The Impulse Response Model

One of the simplest models:

y(k) = \sum_{j=0}^{\infty} h_j u(k-j-1)

where h_j is the j-th element of the impulse response of the process. Assuming that the impulse response asymptotically goes to zero, it may be truncated after the n_h-th element:

y(k) = \sum_{j=0}^{n_h} h_j u(k-j-1)

This is called a Finite Impulse Response (FIR) model and can only be used for stable processes. It is used mainly in adaptive filtering.
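As a concrete illustration (not part of the original slides), the truncated FIR model can be evaluated directly as a finite convolution. A minimal Python sketch, with illustrative names `fir_output`, `h` and `u`, and inputs before time 0 taken as zero:

```python
def fir_output(h, u, k):
    """Output of the FIR model y(k) = sum_j h[j] * u(k - j - 1).

    h : truncated impulse response [h_0, ..., h_nh]
    u : input sequence u(0), u(1), ...
    Inputs before time 0 are taken as zero.
    """
    return sum(h[j] * u[k - j - 1]
               for j in range(len(h))
               if 0 <= k - j - 1 < len(u))

# A unit pulse at k = 0 reproduces the impulse response, delayed one step
h = [1.0, 0.5, 0.25]
u = [1.0, 0.0, 0.0, 0.0, 0.0]
y = [fir_output(h, u, k) for k in range(5)]
# y = [0.0, 1.0, 0.5, 0.25, 0.0]
```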

Models for Identification: The Step Response Model

Also one of the simplest:

y(k) = \sum_{j=0}^{n_s} s_j \Delta u(k-j-1)

where s_j is the j-th element of the step response of the process and \Delta is the differencing operator:

\Delta u(k) = u(k) - u(k-1) = (1 - q^{-1}) u(k)

Here q is the forward-shift operator, i.e. q y(k) = y(k+1); conversely, q^{-1} y(k) = y(k-1).

This is the Finite Step Response (FSR) model used in Dynamic Matrix Control (DMC). It can only be used for stable processes.

Models for Identification: The Transfer Function Model

This is the classical discrete-time transfer function model:

y(k) = \frac{B(q^{-1})}{A(q^{-1})} q^{-d} u(k-1)

where d is the time delay of the process in sampling intervals (d \ge 0) and the polynomials A and B are given by:

A(q^{-1}) = 1 + a_1 q^{-1} + \cdots + a_{n_A} q^{-n_A}
B(q^{-1}) = b_0 + b_1 q^{-1} + \cdots + b_{n_B} q^{-n_B}

This model can be used for both stable and unstable processes. Note that both FIR and FSR models can be seen as subsets of the transfer function model. It requires fewer parameters than FIR or FSR, but assumptions must be made about the orders n_A and n_B and the time delay d.
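The transfer function model can be simulated by running its difference equation. Below is a minimal sketch (not from the slides), assuming zero initial conditions; `simulate_tf` and its arguments are illustrative names:

```python
def simulate_tf(a, b, d, u):
    """Simulate y(k) = -sum_i a[i] y(k-1-i) + sum_j b[j] u(k-d-1-j).

    a : [a_1, ..., a_nA]   (A(q^-1) = 1 + a_1 q^-1 + ...)
    b : [b_0, ..., b_nB]
    d : extra time delay in samples
    Signals before time 0 are taken as zero.
    """
    y = []
    for k in range(len(u)):
        yk = -sum(a[i] * y[k - 1 - i]
                  for i in range(len(a)) if k - 1 - i >= 0)
        yk += sum(b[j] * u[k - d - 1 - j]
                  for j in range(len(b)) if 0 <= k - d - 1 - j)
        y.append(yk)
    return y

# First-order example: y(k) = 0.5 y(k-1) + u(k-1)
y = simulate_tf(a=[-0.5], b=[1.0], d=0, u=[1.0, 0.0, 0.0, 0.0])
# y = [0.0, 1.0, 0.5, 0.25]
```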

Models for Identification: The General Stochastic Model

This is better known as the Box-Jenkins model, and describes both the process and the disturbance:

y(k) = \frac{B(q^{-1})}{A(q^{-1})} q^{-d} u(k-1) + \frac{C(q^{-1})}{D(q^{-1})} e(k)

where e(k) is a discrete white noise sequence with zero mean and variance \sigma_e^2. This general form is usually overkill for practical applications.

Models for Identification: The General Stochastic Model

The form most used in adaptive control is the so-called CARIMA, or ARIMAX, model:

y(k) = \frac{B(q^{-1})}{A(q^{-1})} q^{-d} u(k-1) + \frac{C(q^{-1})}{A(q^{-1}) \Delta} e(k)

Putting \Delta in the denominator of the noise model means that the noise is nonstationary (e(t)/\Delta is known as a random walk). This forces an integration in the controller. This is the model used in Generalized Predictive Control (GPC).

Models for Identification: The State-Space Model

Not used very often in adaptive control:

x(k) = A x(k-1) + B u(k-1)
y(k) = c^T x(k)

The ARMAX model can also be written in state-space form. One advantage of the state-space representation is that it simplifies prediction. However, system identification is more complex for a state-space model than for a transfer function model.

Models for Identification: The Laguerre Model

The use of this model in adaptive control will be developed in detail later on. In this representation, the process output is represented as a truncated weighted sum of Laguerre functions l_i(k):

y(k) = \sum_{i=0}^{N} c_i l_i(k)

This representation is much more parsimonious than the FIR or FSR ones, and contrary to the transfer function model it does not require assumptions about the order and time delay of the process. Although it is implemented in a state-space form, it is straightforward to use in system identification.

Least-Squares Identification

- Developed by Carl Friedrich Gauss in his study of the motion of celestial bodies: "...the unknown parameters are chosen so that the sum of the squares of the differences between the observed and the computed values, multiplied by a number that measures the degree of precision, is a minimum"
- Applied to a variety of problems in mathematics, statistics, physics, economics, signal processing and control
- The basic technique for parameter estimation

Least-Squares Estimation: Example

Assume that data are generated by

y = b x + n

where n is white measurement noise. The problem is to estimate b from k data pairs (x, y). The least-squares performance index is

J = \frac{1}{2} \sum_{i=1}^{k} [y(i) - x(i) b]^2

Least-Squares Estimation

Defining the measurement vector

y = [y(1), \ldots, y(k)]^T

and the regressor

x = [x(1), \ldots, x(k)]^T

one can write

J = \frac{1}{2} [y - bx]^T [y - bx]

Least-Squares Estimation

The value that minimizes J is found from

\frac{dJ}{db} = -x^T [y - bx] = 0

The least-squares estimate \hat{b} is then given by the solution of the normal equation

-x^T y + x^T x \hat{b} = 0

which, if [x^T x] is invertible, is

\hat{b} = [x^T x]^{-1} x^T y

[x^T x]^{-1} x^T is known as the (Moore-Penrose) pseudoinverse. This idea can easily be extended to dynamic systems.
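For the scalar example above, the normal equation reduces to a ratio of two sums. A small sketch (illustrative data, not from the slides):

```python
def scalar_ls(x, y):
    """Least-squares estimate b_hat = (x^T y) / (x^T x) for y = b x + n."""
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return sxy / sxx

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2 x plus measurement noise
b_hat = scalar_ls(x, y)    # b_hat = 59.7 / 30 = 1.99
```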

Least-Squares Identification

Let a discretized dynamic system be described by

y(t) = -\sum_{i=1}^{n} a_i y(t-i) + \sum_{i=1}^{n} b_i u(t-i) + w(t)

where u(t) and y(t) are respectively the input and output of the plant and w(t) is the process noise. Defining

\theta = [a_1 \cdots a_n \; b_1 \cdots b_n]^T
x(t) = [-y(t-1) \cdots -y(t-n) \; u(t-1) \cdots u(t-n)]^T

the system above can be written as

y(t) = x^T(t) \theta + w(t)

Least-Squares Identification

With N observations, the data can be put in the following compact form:

Y = X \theta + W

where

Y^T = [y(1) \cdots y(N)]
W^T = [w(1) \cdots w(N)]
X = \begin{bmatrix} x^T(1) \\ \vdots \\ x^T(N) \end{bmatrix}

Least-Squares Identification

Defining the modelling error as

\epsilon(t) = y(t) - x^T(t) \theta

the least-squares performance index to be minimized is

J = \frac{1}{2} \sum_{t=1}^{N} \epsilon^2(t) = \frac{1}{2} [Y - X\theta]^T [Y - X\theta]

Differentiating J with respect to \theta and equating to zero yields \hat{\theta}, the least-squares estimate of \theta, which, if [X^T X] is invertible, is

\hat{\theta} = [X^T X]^{-1} X^T Y

(Footnote: I am now tired of having to underline vectors, so I will stop doing so from now on.)
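The same estimate can be sketched for the dynamic model by stacking the regressors x^T(t) into X and solving the normal equations. A minimal NumPy sketch on simulated noise-free first-order data; the function name and the test system are illustrative:

```python
import numpy as np

def arx_ls(y, u, n):
    """theta_hat = [X^T X]^{-1} X^T Y for the model
    y(t) = -a_1 y(t-1) - ... - a_n y(t-n) + b_1 u(t-1) + ... + b_n u(t-n) + w(t)."""
    X = np.array([[-y[t - i] for i in range(1, n + 1)] +
                  [u[t - i] for i in range(1, n + 1)]
                  for t in range(n, len(y))])
    Y = np.asarray(y[n:])
    return np.linalg.solve(X.T @ X, X.T @ Y)

# Noise-free first-order data: y(t) = 0.7 y(t-1) + 0.5 u(t-1),
# i.e. true parameters a_1 = -0.7, b_1 = 0.5
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 1]
theta = arx_ls(y, u, n=1)   # theta is close to [-0.7, 0.5]
```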

Properties of the Least-Squares Estimates

\hat{\theta} = [X^T X]^{-1} X^T [X\theta + W]

or

\hat{\theta} = \theta + [X^T X]^{-1} X^T W

Taking the expected value gives

E(\hat{\theta}) = \theta + E\{[X^T X]^{-1} X^T W\}

The second term in this expression represents a bias in the estimate.

Properties of the Least-Squares Estimates

Note that the second term on the right-hand side is generally non-zero, except when:

1. {w(t)} is zero mean and uncorrelated, or
2. {w(t)} is independent of {u(t)} and y does not appear in the regressor, as in:
   - FIR and step response models
   - the Laguerre model
   - output-error models

Only in these cases is the least-squares estimate unbiased. Furthermore, if this holds as the number of observations increases to infinity, we say that the method gives consistent estimates.
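The bias can be illustrated by simulation. The sketch below (illustrative system and names, not from the slides) fits the same first-order plant by least squares under white equation noise, where LS lands near the true parameters, and under coloured (MA) equation noise, where the estimate of a_1 is visibly biased:

```python
import numpy as np

def arx_ls1(y, u):
    """First-order least squares for y(t) = -a1 y(t-1) + b1 u(t-1) + w(t)."""
    X = np.column_stack([-y[:-1], u[:-1]])
    Y = y[1:]
    return np.linalg.solve(X.T @ X, X.T @ Y)

def simulate(colored, N=5000, seed=1):
    """Simulate y(t) = 0.7 y(t-1) + 0.5 u(t-1) + w(t), true a1 = -0.7."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(N)
    e = rng.standard_normal(N)
    y = np.zeros(N)
    for t in range(1, N):
        w = e[t] + (0.9 * e[t - 1] if colored else 0.0)  # MA(1) vs white
        y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 1] + w
    return arx_ls1(y, u)

a_white = simulate(colored=False)[0]   # close to the true a1 = -0.7
a_col = simulate(colored=True)[0]      # noticeably biased away from -0.7
```

With coloured noise the regressor y(t-1) is correlated with w(t) through e(t-1), which is exactly the non-zero second term above.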

Properties of the Least-Squares Estimates

The estimate covariance is given by

cov \hat{\theta} = E\{[X^T X]^{-1} X^T W W^T X [X^T X]^{-1}\}

which, if {w(t)} is white, zero-mean Gaussian with covariance \sigma_w^2, becomes

cov \hat{\theta} = \sigma_w^2 [X^T X]^{-1}

The matrix [X^T X]^{-1} X^T is called the pseudo-inverse of X. Because of this, [X^T X]^{-1} is sometimes called the covariance matrix. The condition that X^T X be invertible is called an excitation condition.

Alternatives to Least-Squares

We need a method that gives consistent estimates in the presence of coloured noise:

- Generalized least squares
- Instrumental variable method
- Maximum likelihood identification

Generalized Least Squares

Let the noise model be

w(t) = \frac{e(t)}{C(q^{-1})}

where {e(t)} is zero-mean white Gaussian noise and C(q^{-1}) is monic, of degree n. The system is described as

A(q^{-1}) y(t) = B(q^{-1}) u(t) + \frac{1}{C(q^{-1})} e(t)

Defining the filtered sequences

\tilde{y}(t) = C(q^{-1}) y(t)
\tilde{u}(t) = C(q^{-1}) u(t)

the system becomes

A(q^{-1}) \tilde{y}(t) = B(q^{-1}) \tilde{u}(t) + e(t)

Generalized Least Squares

- If C(q^{-1}) were known, least squares would give consistent estimates of A and B, given \tilde{y} and \tilde{u}
- The problem, however, is that in practice C(q^{-1}) is not known and {\tilde{y}} and {\tilde{u}} cannot be obtained
- An iterative method proposed by Clarke (1967) solves that problem

Generalized Least Squares

1. Set \hat{C}(q^{-1}) = 1
2. Compute {\tilde{y}(t)} and {\tilde{u}(t)} for t = 1, ..., N
3. Use least squares to estimate \hat{A} and \hat{B} from \tilde{y} and \tilde{u}
4. Compute the residuals \hat{w}(t) = \hat{A}(q^{-1}) y(t) - \hat{B}(q^{-1}) u(t)
5. Use the least-squares method to estimate \hat{C} from \hat{C}(q^{-1}) \hat{w}(t) = \varepsilon(t), i.e.
   \hat{w}(t) = -c_1 \hat{w}(t-1) - c_2 \hat{w}(t-2) - \cdots + \varepsilon(t)
   where {\varepsilon(t)} is white
6. If converged, stop; otherwise repeat from step 2
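The iteration above can be sketched for a first-order model. This is a simplified illustration of Clarke's scheme, not a production implementation; the function names and the test system are illustrative:

```python
import numpy as np

def filt(c, x):
    """Apply the monic polynomial 1 + c[0] q^-1 + c[1] q^-2 + ... (zero ICs)."""
    out = np.array(x, dtype=float)
    for j, cj in enumerate(c, start=1):
        out[j:] += cj * x[:-j]
    return out

def ls(X, Y):
    return np.linalg.solve(X.T @ X, X.T @ Y)

def gls_first_order(y, u, iters=5):
    """Iterative GLS for (1 + a1 q^-1) y = b1 q^-1 u + e / (1 + c1 q^-1)."""
    c1 = 0.0                                    # step 1: C_hat = 1
    for _ in range(iters):
        yf, uf = filt([c1], y), filt([c1], u)   # step 2: filtered data
        X = np.column_stack([-yf[:-1], uf[:-1]])
        a1, b1 = ls(X, yf[1:])                  # step 3: LS for A_hat, B_hat
        # step 4: residuals w_hat(t) = y(t) + a1 y(t-1) - b1 u(t-1)
        w = np.array(y, dtype=float)
        w[1:] += a1 * y[:-1] - b1 * u[:-1]
        # step 5: LS fit of C from w_hat(t) = -c1 w_hat(t-1) + eps(t)
        c1 = ls((-w[:-1]).reshape(-1, 1), w[1:])[0]
    return a1, b1, c1

# Data: y(t) = 0.7 y(t-1) + 0.5 u(t-1) + w(t) with AR(1) noise
# w(t) = -0.5 w(t-1) + e(t), i.e. true (a1, b1, c1) = (-0.7, 0.5, 0.5)
rng = np.random.default_rng(3)
N = 5000
u = rng.standard_normal(N)
e = 0.3 * rng.standard_normal(N)
y = np.zeros(N)
w = 0.0
for t in range(1, N):
    w = -0.5 * w + e[t]
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 1] + w
a1, b1, c1 = gls_first_order(y, u)   # roughly (-0.7, 0.5, 0.5)
```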

Generalized Least Squares

- For convergence, the loss function and/or the whiteness of the residuals can be tested
- An advantage of GLS is that it not only may give consistent estimates of the deterministic part of the system, but also gives a representation of the noise, which the LS method does not
- The consistency of GLS depends on the signal-to-noise ratio, the probability of consistent estimation increasing with the S/N
- There is, however, no guarantee of obtaining consistent estimates

Instrumental Variable Methods (IV)

Reference: T. Söderström and P.G. Stoica, Instrumental Variable Methods for System Identification, Springer-Verlag, Berlin, 1983.

As seen previously, the LS estimate

\hat{\theta} = [X^T X]^{-1} X^T Y

is unbiased if W is independent of X. Assume that a matrix V is available which is correlated with X but not with W, and such that V^T X is positive definite, i.e.

E[V^T X] is nonsingular
E[V^T W] = 0

Instrumental Variable Methods (IV)

Then

V^T Y = V^T X \theta + V^T W

and \theta is estimated by

\hat{\theta} = [V^T X]^{-1} V^T Y

V is called the instrumental variable matrix.

Instrumental Variable Methods (IV)

Ideally, V is the noise-free process output, and the IV estimate \hat{\theta} is consistent. There are many possible ways to construct the instrumental variable. For instance, it may be built using an initial least-squares estimate:

\hat{A}_1 \hat{y}(t) = \hat{B}_1 u(t)

and the k-th row of V is given by

v_k^T = [-\hat{y}(k-1), \ldots, -\hat{y}(k-n), u(k-1), \ldots, u(k-n)]
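A sketch of this construction for a first-order model, with the instruments built from the simulated output of an initial least-squares fit (illustrative system and names, not from the slides):

```python
import numpy as np

def ls(X, Y):
    return np.linalg.solve(X.T @ X, X.T @ Y)

def iv_first_order(y, u):
    """IV estimate for y(t) = -a1 y(t-1) + b1 u(t-1) + w(t):
    instruments from the simulated output of an initial LS model."""
    X = np.column_stack([-y[:-1], u[:-1]])
    Y = y[1:]
    a1, b1 = ls(X, Y)                  # initial (possibly biased) LS fit
    yhat = np.zeros(len(y))            # noise-free model output, driven by u only
    for t in range(1, len(y)):
        yhat[t] = -a1 * yhat[t - 1] + b1 * u[t - 1]
    V = np.column_stack([-yhat[:-1], u[:-1]])
    return np.linalg.solve(V.T @ X, V.T @ Y)   # [V^T X]^{-1} V^T Y

# Coloured equation noise biases LS, but the IV estimate stays consistent
rng = np.random.default_rng(5)
N = 5000
u = rng.standard_normal(N)
e = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 1] + e[t] + 0.9 * e[t - 1]
theta_iv = iv_first_order(y, u)   # roughly [-0.7, 0.5]
```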

Instrumental Variable Methods (IV)

- Consistent estimation cannot be guaranteed in general
- Being a two-tier method in its off-line version, the IV method is more useful in its recursive form

Use of instrumental variables in closed loop: often the instrumental variable is constructed from the input sequence. This cannot be done in closed loop, as the input is formed from old outputs and hence is correlated with the noise w, unless w is white. In that situation, the following choices are available.

Instrumental Variable Methods (IV)

Instrumental variables in closed loop:

- Delayed inputs and outputs. If the noise w is assumed to be a moving average of order n, then choosing v(t) = x(t-d) with d > n gives an instrument uncorrelated with the noise. This, however, will only work with a time-varying regulator.
- Reference signals. Building the instruments from the setpoint will satisfy the noise independence condition. However, the setpoint must be a sufficiently rich signal for the estimates to converge.
- External signal. This in effect relates to the closed-loop identifiability condition (covered a bit later). A typical external signal is a white noise independent of w(t).

Maximum-Likelihood Identification

The maximum-likelihood method considers the ARMAX model below, where u is the input, y the output and e is zero-mean white noise with standard deviation \sigma:

A(q^{-1}) y(t) = B(q^{-1}) u(t-k) + C(q^{-1}) e(t)

where

A(q^{-1}) = 1 + a_1 q^{-1} + \cdots + a_n q^{-n}
B(q^{-1}) = b_1 q^{-1} + \cdots + b_n q^{-n}
C(q^{-1}) = 1 + c_1 q^{-1} + \cdots + c_n q^{-n}

The parameters of A, B, C as well as \sigma are unknown.

Maximum-Likelihood Identification

Defining

\theta^T = [a_1 \cdots a_n \; b_1 \cdots b_n \; c_1 \cdots c_n]
x^T(t) = [-y(t-1) \cdots -y(t-n) \; u(t-k-1) \cdots u(t-k-n) \; e(t-1) \cdots e(t-n)]

the ARMAX model can be written as

y(t) = x^T(t) \theta + e(t)

Unfortunately, one cannot use the least-squares method on this model, since the sequence e(t) is unknown. In the case of known parameters, the past values of e(t) can be reconstructed exactly from the sequence

\epsilon(t) = [A(q^{-1}) y(t) - B(q^{-1}) u(t-k)] / C(q^{-1})
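That the residual filter recovers e(t) exactly when the parameters are known can be checked numerically. A small sketch with an illustrative first-order ARMAX system (k = 0) and zero initial conditions throughout:

```python
import random

# True first-order ARMAX: y(t) = -a1 y(t-1) + b1 u(t-1) + e(t) + c1 e(t-1)
a1, b1, c1 = -0.7, 0.5, 0.4
random.seed(0)
N = 50
u = [random.gauss(0, 1) for _ in range(N)]
e = [random.gauss(0, 1) for _ in range(N)]
y = [0.0] * N
y[0] = e[0]                       # y(-1) = u(-1) = e(-1) = 0
for t in range(1, N):
    y[t] = -a1 * y[t - 1] + b1 * u[t - 1] + e[t] + c1 * e[t - 1]

# Reconstruct the noise: eps(t) = y(t) + a1 y(t-1) - b1 u(t-1) - c1 eps(t-1),
# i.e. eps = [A y - B u] / C with the same zero initial conditions
eps = [0.0] * N
eps[0] = y[0]
for t in range(1, N):
    eps[t] = y[t] + a1 * y[t - 1] - b1 * u[t - 1] - c1 * eps[t - 1]

err = max(abs(eps[t] - e[t]) for t in range(N))   # essentially zero
```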

Maximum-Likelihood Identification

V = \frac{1}{2} \sum_{t=1}^{N} \epsilon^2(t)

Minimize V with respect to \hat{\theta}, using for instance a Newton-Raphson algorithm. Note that \epsilon is linear in the parameters of A and B but not in those of C, so an iterative procedure must be used:

\hat{\theta}_{i+1} = \hat{\theta}_i - \alpha_i [V''(\hat{\theta}_i)]^{-1} V'(\hat{\theta}_i)

The initial estimate \hat{\theta}_0 is usually obtained from a least-squares estimate. The noise variance is estimated as

\hat{\sigma}^2 = \frac{2}{N} V(\hat{\theta})

Properties of the Maximum-Likelihood Estimate (MLE)

- If the model order is sufficient, the MLE is consistent, i.e. \hat{\theta} \to \theta as N \to \infty
- The MLE is asymptotically normal with mean \theta and standard deviation \sigma_\theta
- The MLE is asymptotically efficient, i.e. there is no other unbiased estimator giving a smaller \sigma_\theta

Properties of the Maximum-Likelihood Estimate (MLE)

The Cramér-Rao inequality says that there is a lower limit on the precision of an unbiased estimate, given by

cov \hat{\theta} \ge M_\theta^{-1}

where M_\theta is the Fisher information matrix

M_\theta = -E[(\log L)_{\theta\theta}]

For the MLE, if \sigma is estimated, then

\sigma_\theta^2 = M_\theta^{-1} = \sigma^2 V_{\theta\theta}^{-1}
\sigma_\theta^2 = \frac{2V}{N} V_{\theta\theta}^{-1}

Identification in Practice

1. Specify a model structure
2. Compute the best model in this structure
3. Evaluate the properties of this model
4. Test a new structure, and go back to step 1
5. Stop when a satisfactory model is obtained

MATLAB System Identification Toolbox

- The most widely used package
- Graphical user interface that automates all the steps; easy to use
- Familiarize yourself with it by running the examples