UNIVERSITY OF CALGARY. Nonlinear Closed-Loop System Identification In The Presence Of. Non-stationary Noise Sources. Ibrahim Aljamaan A THESIS

Size: px
Start display at page:

Download "UNIVERSITY OF CALGARY. Nonlinear Closed-Loop System Identification In The Presence Of. Non-stationary Noise Sources. Ibrahim Aljamaan A THESIS"

Transcription

1 UNIVERSITY OF CALGARY Nonlinear Closed-Loop System Identification In The Presence Of Non-stationary Noise Sources by Ibrahim Aljamaan A THESIS SUBMITTED TO THE FACULTY OF GRADUATE STUDIES IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY GRADUATE PROGRAM IN ELECTRICAL AND COMPUTER ENGINEERING CALGARY, ALBERTA June, 216 Ibrahim Aljamaan 216

2 Abstract In this dissertation, nonlinear identification approaches are presented that construct Wiener and Hammerstein models. These are block-oriented models consisting of a memoryless nonlinearity either preceded or followed by a linear filter, respectively. The algorithms were developed to handle several practical challenges common in chemical process control applications. These challenges include systems running in closed-loop, incorporating non-stationary process disturbances, and with possibly unstable plant dynamics. Identification methods based on the prediction error method are developed to address these challenges. One of the main factors required for successful application of PEM algorithms is having a good initial estimate of the system under study. In this work, Instrumental Variable scheme is used to initialize the Hammerstein models, and a non-iterative overparameterized algorithm is developed to initialize the Wiener models. In all cases, the algorithms are developed theoretically, and then validated using Monte Carlo simulations. The closed-loop Hammerstein identification algorithms are validated using data from differential equation based simulation of a continuous stirred tank reactor. ii

3 Acknowledgements All praises and thanks be to ALLAH, the Most Gracious, the Most Merciful. Peace and blessing be upon the last and final prophet and messenger Muhammed, his family and companions, and all those who follow in their footsteps until the day of judgment. I praise and thank almighty ALLAH for all the blessings he bestows upon me among which was the ability and strength to undertake and complete this thesis. I would like to thank my supervisor Dr. David Westwick for his excellent teaching, continuous supporting, helpful suggesting, dedication toward his students. Thanks and acknowledgement are due to the Higher Ministry of Education in Saudi Arabia and the Saudi Cultural Bureau in Canada for the financial support. Finally, a special thanks to my parents, siblings, uncles, aunts and friends for their love support, sacrifices, supplications and inspiration. iii

4 Dedication To my parents, friends, siblings, nieces and nephews. iv

5 Table of Contents Abstract ii Acknowledgements iii Dedication iv Table of Contents v List of Tables viii List of Figures ix List of Symbols xi 1 INTRODUCTION Problem Statement and Contribution Overview LITERATURE REVIEW Introduction System Identification Identification of Linear Systems Noise Models Equation Error Models Output Error Models Models of Nonlinear Systems Volterra Series Expansion Nonlinear Block-structured Models Common Nonlinearities Identification Methods Prediction Error Method Ordinary Least Squares (OLS) method Iterative Optimization Methods Separable Least Square (SLS) Method Instrumental Variable (IV) Methods Exciting Signals Gaussian White Noise Signal Pseudo-random Binary Signal Multi-sine Signal Periodic Signal Nonlinear Open-loop Identification Common Iden. Appr. for Hammer. Models Common Iden. Appr. For Wiener Models Linear Closed-loop Identification Direct Methods Indirect Methods The Joint Input-Output Approach Nonlinear Closed-loop Identification Model Evaluation and Validation Methods Conclusion v

6 3 PEM IDENTIFICATION OF NL MODELS WITH ARIMA NOISE Introduction Problem Description Assumptions and condition for identifiability Identification Details of Algorithm Initialization Model Recovering Simulation Additional Simulation Examples Conclusion CLOSED-LOOP IDENTIFICATION OF CSTR WITH ARIMA NOISE Introduction Formulation and illustration of CSTR Model Description And Assumptions Assumptions and condition for identifiability Identification Direct identification method Open-Loop Unstable Process Dynamics Separable Least Squares Algorithm CSTR Simulation Examples Example Using Gaussian Input Example Using Sequence of Bump Tests Input Conclusion AN OVER-PARAMETRIZATION IDENTIFICATION APPROACH OF WIENER SYSTEMS Introduction Problem Formulation Identification of FIR Wiener Models Kernel Basis Expansion System Iden. of long memory Wiener Sys. Using Orth. Poly Non-polynomial nonlinearity Simulation Examples First Simulation Example Second Simulation Example Third Simulation Example Conclusion C-L ID. OF WIENER MODELS Problem Description Assumptions and condition for identifiability Iterative Optimization Methods Examples and Results Conclusion WIENER IDENTIFICATION WITH ARIMA PROCESS NOISE Introduction vi

7 7.2 Theory Problem Description Static nonlinearity Assumptions and conditions for identifiability Identification Prediction Error Minimization Initial Estimate Approach Simulation Study Conclusion CONCLUSION Summary of Contribution Future Work and Recommendations Bibliography vii

8 List of Tables 3.1 The norm. true & est. par. of Hammer. C-L model The percentage of parameter change The percentage of parameter change using low pass noise model The percentage of parameter change using ARI noise model The percentage of parameter change The percentage of parameter change The norm. true and est. linear subsystem s par. using poly The norm. true and est. parameters Sample ave. and SD of the MSE Standard errors in est. linear subsystem par. Gauss Standard errors in the est. linear subsystem par. Uniform The norm. true & the est. par. of the Wiener C-L model The true & the est. par. of the Wiener model True, initial and final est. par. of Wiener model viii

9 List of Figures and Illustrations 1.1 General Open-loop System General Closed-loop System Block Diagram of a General Form of a Time Series Model Block Diagram of an Autoregressive (AR) Model Block Diagram of a MA Model Block Diagram of an ARMA Model Block Diagram of an ARX Model Block Diagram of an ARMAX Model Block Diagram of an OE Model Block Diagram of BJ Model Block Diagram of a Hammerstein (NL) Model Block Diagram of a Wiener (LN) Model Examples of Polynomial, Hermite and Tchebyshev Nonlinearities of orders to 5. Left Column: polynomials over domain [-1 1]. Middle column: Hermite polynomials over domain [-3 3] as this is the practical extent of the unit variance Gaussian distribution used in the orthogonalization. Right Column: Tchebyshev polynomials over domain [-1 1] because in practice the data are scaled to the range between -1 and 1 before fitting Tchebyshev polynomials Hyperbolic Tangent Nonlinearity Saturation Nonlinearity Relay Nonlinearity Dead-zone Nonlinearity White Gaussian Noise Signal Pseudo-random Binary Signal Multi-sign Signal Examples of Two Periodic Signals Conver. Non-lin. SISO into MIMO Lin. Sys Conver. Non-lin. SISO into MISO Lin. Sys A general form closed-loop system Block diag. of Hammer. model with ARIMA noise BJ Hammer., input, innov. and dif. output Measured and pred. output Nonlinearity of est. and original models Poles and Zeros of the plant model, where the exact values are in black, the estimated values using the Differenced method in red and estimated values using the undifferenced method in cyan color Poles and Zeros of the noise model, where the exact values are in black, the estimated values using the Differenced method in red and estimated values using the undifferenced method in cyan color Six Higher order valid. tests using diff. method ix

10 3.8 Six Higher order valid. tests using undiff. method Schematic of a Non-isothermal CSTR Simulink CSTR Block diagram Block diag. of Hammer. model with ARIMA noise Dynamic Response of State Var. of Exothermic CSTR Asequence of bump tests and the Gaussian input Sim. and pred. output using Gauss. input The error using Gauss. input The measured input to the system The exact and est. output Averaged Error The averaged nonlinearity Block diagram of a Wiener sys The linear part of the system x(t) Multi-index block diag. illustrat. LN output Wiener model output expanded onto GBF Imp. responses of 3rd Butterworth filter Imp. response for exact and est. values Imp. response for est. and exact value The 1 runs est. nonlinearities order Butter. and fitted Laguerre filter using imp. inp Exact and est. nonlinearity Est. imp. of linear part and exact for Hermit Gauss. scaled Condition no. against MSE val C-L Wiener Sys. including process noise model The average cost of 1 runs Measured and Predicted Output Exact and Est. nonlinearities Histogram of the process parameters Histogram of the noise elements Histogram of the polynomial coefficients Six high order Valid. Tests using Diff. method O-L Wiener model includ. ARIMA process disturb ID. of O-L LN model with ARIMA proc. noise Histogram of plant elements Histogram of noise elements Exact and Est. y(t) The cost function convergence Ave. exact and est. nonlinearities Six high order valid. Tests Using diff. method Six high order valid. Tests Using Undiff. method x

11 List of Symbols, Abbreviations and Nomenclature Symbol AR ARIMA ARMA ARMAX ARX BJ C-L CSTR db DCS FIR G-N GOBF H IFAC iid IIR I/O IV J L L-M LN Definition Auto Regressive Model Auto Regressive Integral Moving Ave. Model Auto Regressive Moving Average Model Autoregres Moving Ave. with Exog. Inp. Model Autoregressive with Exogenous Input Model Box-Jenkins model Closed Loop Model Continuous Stirred Tank Reactor Decibel Distributed Control System Finite Impulse Response Gauss-Newton Global Orthogonal Basis Function Approximated Hessian International Federation of Automatic Control Independent identical distribution Infinite Impulse Response Input/Output Instrumental Variable Method Jacobian Linear Dynamic element Levenberg-Marquardt Wiener Model xi

12 LNL LS LTI MA MIMO ML MSE N NL NLN NMSE ODE OE O-L PDF PE PEM PRBS PV RBS SISO SLS SNR SS SVD φ Wiener-Hammerstein Model Least Squares Method Linear Time Invariant Moving Average Model Multi Input Multi Output Maximum Likelihood Mean Squares Error Static Non-linearity Hammerstein Model Hammerstein-Wiener Model Normalized Mean Squares Error Ordinary Differential Equation Output Error Model Open Loop Model Probability Density Function Persistently Exciting Prediction Error Method Pseudo-Random Binary Signal Photovoltaics Random Binary Signal Single Input Single Output Separable Least Squares Method Signal to Noise Ratio State Space Singular Value Decomposition Model structure xii

13 θ Parameter vector µ t System u (t) y (t) ŷ(t, θ) N G(z 1 ) H(z 1 ) e(t) v(t) t y(t) z 1 B(z 1 ), A(z 1 ) C(z 1 ), D(z 1 ) Φ v (ω) H n [u(t)] m(.) γ (i) Q ɛ(t) X κ M T ŷ(t t 1, θ) Input to the system Output of the system Estimated output Number of data Process transfer function Noise transfer function innovation input Noise model output Time The output of the system Backward shift operator Num. and den. of process filter na and nb Num. and den. of noise filter nc and nd Spectrum The n th-order Volterra operator Static nonlinearity The coefficient i of the nonlinearity The order of the Volterra Kernel The prediction error a regression matrix Scaled constant Hermite polynomial Tchebyshev Polynomial One step ahead prediction for output signal xiii

14 V N k Cost function Iteration number µ (k) The k th step size d (k) J Ĥ I M δ (k) θ l θ n P l Q l J s ζ Step direction The Jacobian The approximate Hessian Identity matrix The regularization parameter The linear parameter vector The nonlinear parameter vector Projection Complementary projection The Jacobian of the separable The instrumental variable µ x The mean σ x φ xx C xx M z d (t) G(z 1 ) A B F and F out C A and C A out The variance The autocorrelation The auto covariance Maximum degree of polynomial Differenced output Process Model with 1 extra numerator order partial derivative Component A Component B Inlet and outlet flow rate, respectively Inlet and outlet concentration of A xiv

15 T and T out V and V out E R λ ρ ρ J A H U C p C J F J F a (z 1 ) F s (z 1 ) Inlet and outlet temperature Inlet and outlet volumes in reactor vessel Activation energy for the reaction The ideal gas constant The heat of reaction A The density of the tank The density of cooling water The heat transfer area The overall heat transfer coefficient The heat capacity of the process liquid The heat capacity of the cooling water The manipulated cooling water flow rate Polynomial with roots outside unit circle Polynomial with roots inside unit circle F a (z 1 ) The stable inverse of the 1 F a(q 1 ) h (sym) p The p th order symmetric Volterra kernel α! α! The no. of redundant symmetric kernel values D n+1 p The number of regressors G(z 1 ) r(t) ỹ(t) δ i,k Known controller The reference input Output inverse nonlin. applied to sys. out. Tuning par. controls convergence rate A Kronecker delta xv

16 Chapter 1 INTRODUCTION During the science and technology revolution in the last century, our ability to deal with life s problems and to modify our environment have changed. Examples include the ability to predict the daily weather, to build new high-rise buildings, to design new robots for medical surgery, to test new strategies to increase production, to optimize the size of electronic chips to minimize energy loss, and to select the best photovoltaic (PV) materials to increase the efficiency of a solar panel. In order to make a successful engineering decision, the problem is fitted into a system: it could be any object that interacts with variables of different kinds and generates measurable information outputs. A system is usually excited by one or more manipulated signal(s) called input(s) and may also be affected by measured and unmeasured disturbances. A system becomes dynamic if the current output value relies on the current external stimuli and their previous values [63]. A model, however, defines a rule or relationship that explains how the variables interact with each other within the system. This describes the system and captures its essential behaviour. Humans have long used qualitative approaches to describe simple real-life systems, such as the ball will roll downhill, it seems it will rain today, the sky is too dark, and if I switch on the air-conditioning, the temperature will go down. In science and engineering applications, these simple qualitative models are augmented with quantitative, mathematical models, such as Newton s laws of motion, or Maxwell s equations. Using mathematical tools and methods to construct an appropriate model of a dynamical system from measurements of its inputs and outputs is called system identification [76]. This quantitative approach has 1

17 four steps or processes: recording of data and collection of information about the system; selection of a model structure that represents the system and a suitable parameter estimation method for that model structure; determination of the best parameters for the model; and, finally validation of the selected model [85]. System identification has produced models used to develop the design of systems and their controllers, such as channel equalization and compensation for amplifier nonlinearities in mobile phones or fly-by-wire systems in modern jets, to generate better predictions, including climate forecasting and stock market predictions, to simulate experiments that are either too hazardous or too expensive to achieve, such as trying new operation strategies, and to detect faults in complex systems, such the detection of fraudulent credit card transactions [85]. System identification methods can be divided into two approaches: parametric and nonparametric techniques. Parametric models describe the system using a well defined model structure and a small number of variables, or parameters, e.g. a transfer function, or a differential or difference equation. The prediction error method (PEM), which uses iterative optimization techniques to minimize a function of prediction errors, known as the loss function, over the space defined by the parameter vector, is a commonly used parametric identification method [63]. On the other hand, non-parametric models provide estimates that are represented by a graph or curve, such as an impulse, step or frequency response [63,76,85,93]. In the literature, most of the common approaches have been developed for open-loop systems, as shown in Figure 1.1, where there is no feedback. However, some situations require that the identification be performed in a closed-loop form, as illustrated in Figure 1.2. In a closed-loop system, there is feedback from the output to the input, because the system may not be stable if the feedback loop is opened, or the controller cannot be removed for safety, financial or production reasons. Moreover, closed-loop system identification may give better performance than applying an open-loop identification in some limited applications, such as those described in [57]. 2

18 Figure 1.1: General Open-loop System Figure 1.2: General Closed-loop System 3

19 In spite of the benefits of closed-loop identification, it is usually more difficult to implement than open-loop techniques. The additional difficulties that arise in closed-loop are due to the causal relationship between the disturbance and/or measurement noise and the plant input that is produced by the feedback path [25,58]. Also, there has to be an external perturbation to be able to identify the system. Another challenge, the controller keeps the signal inside the loop within a limited range that might not rich enough to excite the system. Moreover, there is no direct control over the input except by changing the controller set-point [88] or adding an external reference input. If there is no external input or change in the set point, it is impossible to differentiate between the plant model and the inverse of the controller. Systems may be either linear or nonlinear. Unlike linear systems, a nonlinear system does not obey the superposition and scaling principles. Saturation in power amplifiers, backlash in gears and hysteresis in magnetic materials are examples of some of the most familiar nonlinearities [52, 84]. A wide variety of nonlinear systems can be represented by nonlinear block oriented models. Nonlinear block structures consist of linear dynamic elements, L, and static (i.e. memoryless) nonlinearities, N, interconnected in different arrangements. The Hammerstein (NL), Wiener (LN), Wiener-Hammerstein (LNL) and Hammerstein-Wiener (NLN) models are the most common nonlinear block structures [39, 4, 7, 93]. Nonlinear system identification is important, given the prevalence of nonlinear systems in real-life applications. Nonlinear system identification is more challenging than linear identification, due to the uniqueness of each nonlinear process, in that there are not many common properties. Consequently, different methods are required, depending on the type or position of the nonlinearity. A major attribute for any system identification method is the potential for describing a large class of different system structures. Given these challenges, many researchers working in several different fields have been attracted to nonlinear system identification. Examples 4

20 of books that have focussed on this field are [36,39,4,71,93]. Moreover, many papers have been published in this research field, such as [5, 47, 59, 8, 83, 92, 96, 97]. There are three main approaches presented in the literature for closed-loop identification. The first approach is the direct method, where a noise model is applied to reduce the unmeasured input to white noise, which then breaks the correlation between it and any current or past values of the plant input, provided there is a delay somewhere in the feedback loop. The feedback can then be ignored, and open-loop methods can be used. The second approach is the indirect method, where the closed-loop transfer function is identified from the data assuming that the controller is exactly known. Knowledge of that controller is then used to extract the open-loop parameters from the estimated closed-loop model. The third approach is called a joint input-output method. In this method, an extra input or set point and disturbance is assumed to be driving the input and output of the system, which are jointly considered as outputs from this augmented system. Blind system identification methods, which identify a system s dynamics based on output measurements only, assuming a Gaussian, white noise input, are used to identify the augmented model. An appropriate method can then be used to find the process and noise parameters from an approximation of the augmented system [3, 63]. As a result of the researchers interests are shifted from the linear to the nonlinear identification, the importance of the nonlinear closed-loop identification is increasing. For the same reason, scientists were attracted towards the linear closed-loop system identifications before around four decades [25], some scientists are fascinated by the nonlinear closed-loop system identification. These reasons are usually critical because they may increase the safety, financially, efficiency or stability level of the identified system. Practical applications where the nonlinear closed-loop identification may be employed include: process chemical applications (e.g. distillation column as in [82] and continuous stirred tank reactor (CSTR) as in [87]), clean power generation plants (e.g. wind energy systems as in [26]) and medical technologies 5

21 (e.g. heart rate baroreflex control [91]). Little work has been done on the closed-loop identification of nonlinear systems; however, direct methods appear to hold the most promise, since one does not need to derive the relationship between the open-loop and closed-loop systems. In most direct approaches, an iterative optimization method is applied to reduce a cost function (generally the mean of squared prediction errors). They are among the common optimization methods, where the gradient, as the derivative of the cost function, is computed analytically with respect to the parameter vector. The main objective of this method is minimization of cost function by iteratively updating the parameter vector applying the local gradient and curvature of the cost function to calculate a proper parameter vector update. Since the outputs of many models are linear functions of one or more of the model parameters, the optimization may often be simplified using a Separable Least Squares (SLS) approach, which is discussed in [8, 83, 92]. These methods have been applied extensively in the identification of blockstructured nonlinear systems working in open-loop form. In SLS methods, the identification is simplified by splitting the parameters into linear parameters, which are calculated after each iteration by applying a simple least square (LS) method, and nonlinear coefficients, are approximated by using an iterative optimization method. I have shown that SLS methods for Wiener and Hammerstein models could be extended to include a linear noise model, which results in a direct method for the closed-loop identification of Wiener and Hammerstein models [4]. However, the resulting optimization is not convex, so a good initial estimate is required [4]. This estimate is a major factor in successful identification by increasing the chance of converging to the global minimum. Methods, such as instrumental variable (IV), have been presented to initiate linear closed-loop models and simple nonlinear models, such as Hammerstein models. Moreover, Gilson et al. have presented a method based on the iterative instrumental variable (IV) method developed for mainly identifying Hammerstein closed-loop systems [58, 59]. However, no such initialization exists for the Wiener model. 6

22 A common approach in the system identification of general model structures is to assume that the noise can be described as the output of an auto regressive moving average (ARMA) model [63, 85, 93]. This assumption is added to be able to apply the spectral factorization theorem, which guaranteed if an appropriate function as a spectrum is given, a stable, and inversely stable filter can be computed when driven by an i.i.d. sequence will generate a noise sequence with the measured spectrum. However, in many chemical process control applications the disturbances are modelled as random steps, which are non-stationary, and cannot be described by an ARMA model. One possibility is to use an auto regressive integral moving average (ARIMA) structure [5, 65, 66] for the noise model. The integral term transforms the innovation input into a series of random steps, which are then filtered by an ARMA model. The resulting signal is non-stationary as its variance increases linearly with time [75]. Another common challenge in real applications such as exothermic reactions in chemical processes, control of robot arms and high-performance aircraft that have unstable linear dynamic plant. As a result, it is necessary to generate a predictor for the unstable plant model [3]. Note that the unstable system is usually sitting inside a stabilizing control loop, one of the main reasons to apply system identification in closed-loop form, and creates a stable predictor (even though the open-loop plant is unstable). In this dissertation, direct closed-loop identification algorithms for complex block-structured nonlinear systems that represent practical applications are developed based on second-order gradient decent optimization methods, including techniques to deal with non-stationary noise models and unstable systems. Practical simulation models are included to validate the methods. 7

23 1.1 Problem Statement and Contribution The main purpose of this research was the development and application of nonlinear closedloop system identification techniques for practical applications (e.g. chemical process control applications), including developing approaches for two main nonlinear block structures Hammerstein and Wiener models which required solutions to the following issues: Having a good initial estimate is an important factor, as it helps to ensure that the iterative optimization converges to the global minimum and hence produces a successful identification. The Instrumental Variable (IV) method is commonly applied in the literature for this purpose. Several variants of this method for linear models appear in the literatures e.g [34,35,86,95]. Few approaches extended the IV method to estimate the initial guess for nonlinear Hammerstein model for open-loop and closed-loop forms as in [58,59] and in my MSc thesis [4]. On the other hand, the equivalent problem for Wiener systems is still an open problem. This dissertation will develop the first practical method for non-iteratively estimating the elements of Wiener models operating in both open-loop and closed-loop. This contribution is described in Chapters 5 and 6. Also, another approach to initiate Wiener model with process noise added before the nonlinearity will be presented in Chapter 7. A Common approach in the system identification of general model structures is to assume that the noise can be described as the output of an Auto Regressive Moving Average (ARMA) model [63, 85, 93]. However, in many chemical process control applications the disturbances are modelled as random steps, which are non-stationary, and cannot be described by an ARMA model. One possibility is to use an Auto Regressive Integrated Moving Average (ARIMA) structure [5, 65, 66] for the noise model. The integral term transforms the innovation input into a series of random steps, which are then filtered by an 8

24 ARMA model. The resulting signal is a known as a random walk, and is nonstationary as its variance increases linearly with time [75]. These methods are mainly applied to linear models, which is not representing the reality. In this work, methods for identifying Hammerstein model with nonstationary disturbances working in both open-loop and closed-loop will be presented in Chapter 3 and 4. Also, Development of methods to handle Wiener models, including a process noise model added between the linear dynamic block and nonlinearity, as in [41, 96, 97]. This structure is more practical in chemical precess applications. Consequently, a larger gain effect is influenced by the disturbance. An identification method developed for this type of Wiener model is presented in Chapter 7. In many chemical process applications such as continuous stirred tank reactor (CSTR), the cooling-water to reaction temperature dynamics are unstable, due to the exothermic nature of the chemical reaction. As a results, it is necessary to generate a predictor for the unstable plant model. Ljung and Forssell presented a method to deal with this challenge in [29] designed for linear models. In this dissertation, the method will applied to Hammerstein nonlinear model working in closed-loop form in Chapter 4. There are many system identification approaches applied to CSTR system in the literature. Only few papers deal with the nonlinear identification of CSTR models. In [87], Su and Ma presented a fuzzy Hammerstein CSTR model based on predictive control. On the other hand, Jianzhong and Qingchao in [49] established a method based on least squares support vector machines LS-SVD. Hammerstein identification method using direct closed-loop identification will be applied in Chapter 4 on data from a closed-loop simulation of a CSTR with a non-stationary ARIMA disturbance. 9

25 1.2 Overview The organization of the thesis is as following: Chapter 2 begins with a discussion of the models used throughout the thesis. These models are divided into linear and nonlinear models. Prediction error methods and their implementation using iterative optimization methods are discussed. A review of the literature on the identification of open-loop and closed-loop models, which includes brief descriptions of the common approaches to closed-loop identification, is presented. Finally, different model evaluation methods and validation using correlation tests are reviewed. In Chapter 3, system identification methods are developed for a Hammerstein system, comprising a static nonlinearity followed by a linear time-invariant system, that is subject to disturbances from a non-stationary noise model. A differencing based approach is used to eliminate the integrator from the noise model, resulting in a Box-Jenkins Hammerstein structure, as previously considered in [4]. Simulation examples, results and validations are presented at the end of this chapter. In Chapter Four, direct closed-loop system identification methods are applied to the Box-Jenkins Hammerstein models with a non-stationary noise models and unstable process models. The proposed identification method is tested on data from a continuous-time simulation of a continuous stirred tank reactor (CSTR) in which an exothermic reaction is taking place. A validation test based on a high-order cross-correlation method is used to detect deficiencies in the noise model. Finally, conclusions and suggested solutions are presented. A two-step method for generating an initial estimate of the Wiener model is provided in Chapter 5. This over-parameterized method is based on a multi- 1

26 index technique and is an extension of the method proposed in [56]. Global orthogonal basis functions and orthogonal Hermite polynomials are used as expansion bases for the linear subsystem and the nonlinearity, respectively, to allow for the robust identification of systems with long memories, and highorder nonlinearities. Monte-Carlo simulation and cross-validation are used to demonstrate the method and validate the results. Finally, short summary of the chapter is provided. The over-parametrized method that presented in Chapter 5 is then used in the closed-loop identification of Wiener models using the direct method. MATLAB simulation example is demonstrated. Finally, Validation tests and a discussion of the results are included. Chapter 7 presents an identification method for the Wiener models in the presence of ARIMA process noise that is added between the process subsystem and nonlinearity. The method is tested on a simulation example. Validation test and results discussion are included. A summary of the research contributions, limitations of the proposed approaches and suggestions for future work are provided in Chapter 8. 11

27 Chapter 2 LITERATURE REVIEW 2.1 Introduction System identification involves building models of dynamic systems from input/output measurements. These mathematical models may be used to answer scientific questions, as a mathematical model may help to extract the essential information from complicated data and allow one to make meaningful, quantitative conclusions. As a result, modelling increases the understanding of some mechanisms by finding the connection between the observations related to it. Halley s Comet, named after the British astronomer Edmond Halley, is the only short-period comet clearly visible to the naked eye from Earth, and might appear twice in a human lifetime (observed from the earth every 75 to 76 years). A mathematical model of its orbit allows astronomers to estimate its period, and hence to predict when the comet will next appear [1, 68]. Prediction is another motive for dynamical modelling. Hydrologists have a range of techniques for predicting river flow from measurements of rainfall and the flow of soil moisture [51, 53]. Mathematical models are essential for state estimation, which is used to track the variables that determine the behaviour of dynamical systems. Identification is essential in the diagnosis of faults and inadequacies. For example, Norton and Smith [74] tested one reactor tube for a pilot-scale gas-heated catalytic reactor on a practical computer control schemes to study the reactor performance and its economic feasibility. Moreover, Brown et al. [17] tested the methionine tolerance of humans. As a result of these tests, liver diseases or diabetes may be detected, as shown in the paper, by abnormalities in the tolerance to changes in the oral dose of methionine. Furthermore, mathematical models help in simulation, operator training and testing in hazardous, difficult 12

28 or expensive situations. Common examples of this include the extensive use of aircraft and space-vehicle simulators in pilot or astronaut training and in accident investigation [73]. The first system identification method known in the literature was proposed over 2 years ago, when Gauss developed the least squares method (LS) in Widespread application was not possible until the advent of digital computers. The Italian astronomer Piazzi discovered the asteroid Ceres after six years of searching, and tracked it for 4 days, collecting data regarding its orbit. Keppler invented a nonlinear equation to determine the motion of planets, but astronomers tried to avoid this model due to its complexity. It was not until 1919, that Von Zach was able to predict the same results by performing the LS method, as originally proposed by Gauss [2]. The system identification process starts by exciting the system with some sort of input. This signal plays a key factor in successful identification: it has to be able to excite the system, but in a limited manner. System identification can be divided into four steps as follows: 1. Collection of data inputs u (t) and outputs y (t) : u (t) = [u(1) u(2) u(t)] T (2.1) y (t) = [y(1) y(2) y(t)] T (2.2) 2. Testing of different model structures and parameter approximation approaches. In this step, different model structures are fitted to the measured inputs/outputs (I/O) using different statistical methods, usually iterative-based methods, to estimate the unknown parameters. 3. Selection of the best model structure and estimation method from step Validation of the selected model and method to judge whether the results are good enough or not. The tested model can then be delivered to the end user. 13

29 Assume that I/O data are given in: µ t = [u (t) y (t) ] where u (t) and y (t) are formulated in (2.1) and (2.2) respectively. The system is represented by a model structure φ that in turn relies on a vector, θ, which contains a list of parameters. The current output of the system given the past observation µ (t 1) is given by: y(t) = φ(µ t 1, θ) + v(t) where v(t) encompasses the effects of additive noise and/or disturbances, which cannot be measured directly, and θ represents the true values of the unknown parameters. This model is disturbed with the innovation signal v(t). Note that the first term can be viewed as a one-step ahead prediction of the output, as it relies only on data from past samples of the input and output. Let ŷ(t, θ) = φ(µ t 1, θ) be the one-step ahead prediction of the output resulting from the model structure φ with an arbitrary parameter vector θ. The main goal of the system identification, given the observed data from time 1 to N, µ N, is to search for the suitable model structure φ and corresponding parameter vector ˆθ that minimizes some measure of the difference between the observed and predicted outputs, y(t) ŷ(t, θ). These methods are called Prediction Error Methods (PEM), as they search for the model that minimizes some measure of the prediction errors. The tool that measures the size of this difference, and hence the quality of the estimated model, is called the cost function. The most common such cost function is the sum of squared errors. The arrangement of this chapter is as follows: Section 2.2 presents time-domain models of linear time-invariant systems. Models of nonlinear systems are described in Section 2.3, including material on the representation of nonlinear systems using Volterra kernels, a short summary of the most common nonlinear block structures and common nonlinearities. Common identification approaches based on prediction error method are described in Section

30 Then, common identification excitation signals are discussed in Section 2.5 A summary of the most common open-loop identification approaches is provided in Section 2.6. Section 2.7 describes three well-known methods for identifying closed-loop models of linear time-invariant systems available in the literature. Nonlinear closed-loop identification is discussed in Section 2.8. Finally, common validation and evaluation methods are reviewed in Section System Identification Identification methods are divided into parametric and non-parametric methods. Each of these divisions can be split it into frequency and time domain types. This thesis focuses on time-domain parametric methods [63, 73, 93] Identification of Linear Systems A signal that changes with time is called a time series. This term can be applied to continuous and discrete time signals; however, this work focuses only on discrete signals. For example, currency exchange rates, joblessness rate, population numbers, rain-fall quantity at a particular place, or temperature measurements recorded in a distributed control system (DCS) can all be treated as time series [71]. There is another type of signal, the frequency series; however, it is not addressed in this dissertation. The general form of the linear time series model is illustrated in Figure 2.1. Although this model form is often impractical, all linear models can be derived from it. It consists of two linear digital filters: the process transfer function G(z 1 ), which the measured input u(t) passes through producing the process model output x(t); and, the noise filter H(z 1 ), where unmeasured white noise(i.e. innovation input e(t)) is filtered to produce the noise model output v(t). The time series, w(t), is the output at time t and results from the sum of v(t) and x(t). Note that G(z 1 ) and H(z 1 ) are transfer functions, where the numerator 15

31 and denominator are referred to as B(z 1 ) and A(z 1 ) for the process, respectively, and C(z 1 ) and D(z 1 ), respectively, for the disturbance filter, where z 1 is the backward shift operator. Note that z can be used in both time and frequency domain expressions. In the time domain, z 1 refers to the backward shift operator, and in the frequency domain, z is the discrete frequency variable. Figure 2.1: Block Diagram of a General Form of a Time Series Model The time series models used in this thesis are as follows: Noise Models Noise models can be used to model systems where the input is unknown (such as the weather or stock market), as well as more classic applications where they really do model noise. Autoregressive Model An autoregressive (AR) model comes in the form of difference equation and has the innovation input e(t), which is; a white noise signal, pass through a digital filter 1 A(z 1 ) 16 to produce

32 output y(t). The AR model is shown in Figure 2.2. This type of model is very common, because it does not have a leakage issue in the frequency range discretization and it permits model oscillation with few constraints. The AR model is preferred when the order of the filter is low [71]. The output equation of the model is as follows: Figure 2.2: Block Diagram of an Autoregressive (AR) Model where y(t) = 1 e(t) (2.3) A(z 1 ) The AR difference equation becomes: A(z 1 ) = 1 + a 1 z a na z na (2.4) y(t) = a 1 y(t 1) a na (t na) + e(t) (2.5) This model is called autoregressive because the current output is a linear regression onto its previous values. 17

33 The one step ahead prediction of the output is given by: ŷ(t t 1) = a 1 y(t 1) a na y(t na) Moving Average Model Figure 2.3 illustrates the moving average (MA) model. This model is not as common as the AR model. Model oscillation with few coefficients is not permitted with the MA model as it is in the AR model [71]. Output y(t) from the MA model is generated by filtering the innovation input e(t) by a numerator polynomial B(z 1 ) as explained below: Figure 2.3: Block Diagram of a Moving Average (MA) Model y(t) = B(z 1 )e(t) (2.6) where B(z 1 ) is defined analogously to A(z 1 ) in Equation 2.4. The MA difference equation is: y(t) = e(t) + b 1 e(t 1) + + b nb e(t nb) (2.7) 18

34 Thus, the current output is a weighted average of the current and past nb samples of e(t). The one step ahead prediction for the MA model is: ŷ(t t 1) = (1 B 1 (z 1 ))y(t) Unlike fitting AR model, fitting MA model is more challenging because it contains are unobservable lagged error terms. As a results, iterative non-linear fitting method is required in place of linear LS method. Autoregressive Moving Average Model The autoregressive moving average (ARMA) model is, as the name suggests, a combination of the AR and MA models. It is used as widely in practical applications as the AR model. The ARMA model is shown in Figure 2.4 and formulated as follows: Figure 2.4: Block Diagram of an Autoregressive Moving Average (ARMA) Model y(t) = B(z 1 ) e(t) (2.8) A(z 1 ) 19

35 where B(z 1 ) and A(z 1 ) are polynomials in the backward shift operator, z 1, of degrees nb and na, respectively, and the leading term of A(z 1 ) and B(z 1 ) are a = b = 1. The difference equation of the ARMA model is defined as follows: y(t) = a 1 y(t 1) a na (t na) + e(t) +b 1 e(t 1) + + b nb e(t nb) The ARMA one step ahead predictor is formulated as follows: ŷ(t t 1) = (1 A(z 1 ) B(z 1 ) )y(t) It has been shown that any rational spectrum can be reproduced by an infinite impulse response (IIR) filter driven by an i.i.d. noise source based on the spectral factorization theorem [63], which is stated in Theorem 1: Theorem 1. Suppose that Φ v (ω) > and is a rational function of cos(ω) (or of e iω ), then there exists a monic rational function of z, R(z), with no poles and no zeros on or outside of the unit circle, such that: Φ v (ω) = λ R(e iω ) 2. Other ways of stating the theorem, if an appropriate function as a spectrum is given, a stable, and inversely stable filter can be computed when driven by an i.i.d. sequence will generate a noise sequence with the measured spectrum Equation Error Models Equation error models add a deterministic input to the noise models in the previous subsections, resulting in plant and noise models that share the same denominator polynomials. Autoregressive with Exogenous Input Model The autoregressive with exogenous input (ARX) model is the most common linear model, due to the computational simplicity of its model parameters. Two external signals are applied to the ARX model: a measured signal, u(t), and an unmeasured white noise signal, 2

On Input Design for System Identification

On Input Design for System Identification On Input Design for System Identification Input Design Using Markov Chains CHIARA BRIGHENTI Masters Degree Project Stockholm, Sweden March 2009 XR-EE-RT 2009:002 Abstract When system identification methods

More information

EECE Adaptive Control

EECE Adaptive Control EECE 574 - Adaptive Control Basics of System Identification Guy Dumont Department of Electrical and Computer Engineering University of British Columbia January 2010 Guy Dumont (UBC) EECE574 - Basics of

More information

Identification of ARX, OE, FIR models with the least squares method

Identification of ARX, OE, FIR models with the least squares method Identification of ARX, OE, FIR models with the least squares method CHEM-E7145 Advanced Process Control Methods Lecture 2 Contents Identification of ARX model with the least squares minimizing the equation

More information

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Prediction Error Methods - Torsten Söderström

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Prediction Error Methods - Torsten Söderström PREDICTIO ERROR METHODS Torsten Söderström Department of Systems and Control, Information Technology, Uppsala University, Uppsala, Sweden Keywords: prediction error method, optimal prediction, identifiability,

More information

Lecture 7: Discrete-time Models. Modeling of Physical Systems. Preprocessing Experimental Data.

Lecture 7: Discrete-time Models. Modeling of Physical Systems. Preprocessing Experimental Data. ISS0031 Modeling and Identification Lecture 7: Discrete-time Models. Modeling of Physical Systems. Preprocessing Experimental Data. Aleksei Tepljakov, Ph.D. October 21, 2015 Discrete-time Transfer Functions

More information

TIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M.

TIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M. TIME SERIES ANALYSIS Forecasting and Control Fifth Edition GEORGE E. P. BOX GWILYM M. JENKINS GREGORY C. REINSEL GRETA M. LJUNG Wiley CONTENTS PREFACE TO THE FIFTH EDITION PREFACE TO THE FOURTH EDITION

More information

6.435, System Identification

6.435, System Identification SET 6 System Identification 6.435 Parametrized model structures One-step predictor Identifiability Munther A. Dahleh 1 Models of LTI Systems A complete model u = input y = output e = noise (with PDF).

More information

A Mathematica Toolbox for Signals, Models and Identification

A Mathematica Toolbox for Signals, Models and Identification The International Federation of Automatic Control A Mathematica Toolbox for Signals, Models and Identification Håkan Hjalmarsson Jonas Sjöberg ACCESS Linnaeus Center, Electrical Engineering, KTH Royal

More information

Rozwiązanie zagadnienia odwrotnego wyznaczania sił obciąŝających konstrukcje w czasie eksploatacji

Rozwiązanie zagadnienia odwrotnego wyznaczania sił obciąŝających konstrukcje w czasie eksploatacji Rozwiązanie zagadnienia odwrotnego wyznaczania sił obciąŝających konstrukcje w czasie eksploatacji Tadeusz Uhl Piotr Czop Krzysztof Mendrok Faculty of Mechanical Engineering and Robotics Department of

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference

More information

Computer Exercise 1 Estimation and Model Validation

Computer Exercise 1 Estimation and Model Validation Lund University Time Series Analysis Mathematical Statistics Fall 2018 Centre for Mathematical Sciences Computer Exercise 1 Estimation and Model Validation This computer exercise treats identification,

More information

On Identification of Cascade Systems 1

On Identification of Cascade Systems 1 On Identification of Cascade Systems 1 Bo Wahlberg Håkan Hjalmarsson Jonas Mårtensson Automatic Control and ACCESS, School of Electrical Engineering, KTH, SE-100 44 Stockholm, Sweden. (bo.wahlberg@ee.kth.se

More information

Lessons in Estimation Theory for Signal Processing, Communications, and Control

Lessons in Estimation Theory for Signal Processing, Communications, and Control Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL

More information

EL1820 Modeling of Dynamical Systems

EL1820 Modeling of Dynamical Systems EL1820 Modeling of Dynamical Systems Lecture 10 - System identification as a model building tool Experiment design Examination and prefiltering of data Model structure selection Model validation Lecture

More information

Identification in closed-loop, MISO identification, practical issues of identification

Identification in closed-loop, MISO identification, practical issues of identification Identification in closed-loop, MISO identification, practical issues of identification CHEM-E7145 Advanced Process Control Methods Lecture 4 Contents Identification in practice Identification in closed-loop

More information

Using Neural Networks for Identification and Control of Systems

Using Neural Networks for Identification and Control of Systems Using Neural Networks for Identification and Control of Systems Jhonatam Cordeiro Department of Industrial and Systems Engineering North Carolina A&T State University, Greensboro, NC 27411 jcrodrig@aggies.ncat.edu

More information

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω ECO 513 Spring 2015 TAKEHOME FINAL EXAM (1) Suppose the univariate stochastic process y is ARMA(2,2) of the following form: y t = 1.6974y t 1.9604y t 2 + ε t 1.6628ε t 1 +.9216ε t 2, (1) where ε is i.i.d.

More information

Stochastic Processes

Stochastic Processes Stochastic Processes Stochastic Process Non Formal Definition: Non formal: A stochastic process (random process) is the opposite of a deterministic process such as one defined by a differential equation.

More information

Statistical and Adaptive Signal Processing

Statistical and Adaptive Signal Processing r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory

More information

2 Introduction of Discrete-Time Systems

2 Introduction of Discrete-Time Systems 2 Introduction of Discrete-Time Systems This chapter concerns an important subclass of discrete-time systems, which are the linear and time-invariant systems excited by Gaussian distributed stochastic

More information

Advanced Process Control Tutorial Problem Set 2 Development of Control Relevant Models through System Identification

Advanced Process Control Tutorial Problem Set 2 Development of Control Relevant Models through System Identification Advanced Process Control Tutorial Problem Set 2 Development of Control Relevant Models through System Identification 1. Consider the time series x(k) = β 1 + β 2 k + w(k) where β 1 and β 2 are known constants

More information

Analysis and Synthesis of Single-Input Single-Output Control Systems

Analysis and Synthesis of Single-Input Single-Output Control Systems Lino Guzzella Analysis and Synthesis of Single-Input Single-Output Control Systems l+kja» \Uja>)W2(ja»\ um Contents 1 Definitions and Problem Formulations 1 1.1 Introduction 1 1.2 Definitions 1 1.2.1 Systems

More information

A Guide to Modern Econometric:

A Guide to Modern Econometric: A Guide to Modern Econometric: 4th edition Marno Verbeek Rotterdam School of Management, Erasmus University, Rotterdam B 379887 )WILEY A John Wiley & Sons, Ltd., Publication Contents Preface xiii 1 Introduction

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY
