Process Dynamics & Control LECTURE 1: INTRODUCTION TO MODEL PREDICTIVE CONTROL A Multivariable Control Technique for the Process Industry
1 Process Dynamics & Control LECTURE 1: INTRODUCTION TO MODEL PREDICTIVE CONTROL A Multivariable Control Technique for the Process Industry Jong Min Lee Chemical and Biological Engineering Seoul National University
2 What is MPC? (block diagram) A computer-based controller built around a dynamic process model, an objective with constraints, and a simulation/optimization package. It takes in up-to-date process information from the low-level PID loops (sensor, actuator, process Gp) and from the plant database/information system, and sends back optimal process adjustments. JM Lee
3 Main Algorithm (figure) Past measurements up to time t; projected output and target (with limit y_max) over the prediction horizon t+1, ..., t+p; future input moves (with limit u_max) over the input horizon t, ..., t+m-1.
4 Some Key Features Computer based: sampled-data control. Model based: requires a dynamic process model. Predictive: makes explicit predictions of the future behaviour of the CVs within a chosen window. Optimization based: performs optimization (numerical search) online for the optimal control adjustments. No explicit form of control law: only the model, objective, and constraints are specified. Integrated: combines constraint handling and economic optimization with regulatory and servo control. Receding horizon control: repeats the prediction and optimization at each sample time step to update the optimal input trajectory after a feedback update.
5 Exemplary Algorithm (block diagram: reference → Optimizer → Plant → Observer, with the state estimate X̂ fed back at t = k) Receding horizon control: only the adjustment for the current sample time is implemented, and the rest are re-optimized at the next sample time step after a new feedback update.

Prediction model: Ŷ(t) = f(X̂_t, U(t))

Control objective: min over U(t) of ℓ1[Error(τ)] + ℓ2[Input(τ)], subject to U(τ) ∈ U and Y(τ) ∈ Y over the window t, ..., t+p.
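The receding-horizon loop described above can be sketched in a few lines. This is a toy illustration with a made-up scalar plant and an analytic one-move "optimizer"; a real MPC solves a multi-step constrained QP at every sample. All numbers and names here are illustrative assumptions, not part of the lecture.

```python
import numpy as np

# Minimal receding-horizon sketch for a scalar plant y[t+1] = a*y[t] + b*u[t].
a, b = 0.9, 0.5          # assumed plant/model parameters
y, ref = 2.0, 0.0        # initial output and setpoint
u_min, u_max = -1.0, 1.0 # actuator limits
trajectory = []
for t in range(30):
    # "optimize": pick u minimizing (a*y + b*u - ref)^2, then clip to limits
    u = np.clip((ref - a * y) / b, u_min, u_max)
    # implement only the current move, then take a new measurement (feedback)
    y = a * y + b * u
    trajectory.append(y)
```

The loop implements exactly the receding-horizon idea: optimize, apply only the first move, measure, and re-optimize.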
6 Analogy (figure)
7 Industrial Use of MPC Initiated at Shell Oil and other refineries during the late 1970s and early 1980s. Various commercial software: DMCplus (AspenTech), RMPCT (Honeywell), and a dozen-plus other players (e.g., 3DMPC from ABB). More than 3000 worldwide installations. Predominantly in the oil and petrochemical industries, but the range of applications is expanding. Models used are predominantly empirical models developed through plant testing. The technology is used not only for multivariable control, but for the most economic operation within constraint boundaries.
8 Survey Result (1) Applications by 5 major MPC vendors in North America / Europe (Badgwell and Qin, 2003) JM Lee
9 Survey Result (2) - Japan (Oshima, 1995) JM Lee
10 Reason for Popularity (1) MPC provides a systematic, consistent, and integrated solution to process control problems with complex features: delays, inverse responses, and other complex dynamics; strong interactions (e.g., large RGA); constraints (e.g., actuator limits, output limits). It replaces much of traditional supervisory control: selectors, switches, delay compensations, antiwindups, decouplers, etc. More and more optimization is performed at the MPC level (hierarchy: process optimization → advanced multivariable control (MPC) → low-level PID loops).
11 Example 1: Blending System Control u1, u2, u3: valve positions for stock, additive A, and additive B. Blending system model: r_A = ratio of additive A to stock, r_B = ratio of additive B to stock, q = total blend flow. Objectives: control r_A and r_B; control q if possible; flowrates of the additives are limited.
12 Classical Solution (diagram) A patchwork of single-loop elements: flow controllers with ratio setpoints for additives A and B (driven by the measured stock flow), a valve-position controller (VPC, 95% setpoint) acting through a low selector on the stock flow controller, and a high selector.
13 MPC: at each time k (p: size of the prediction window)

min over u1(j), u2(j), u3(j), j = k, ..., k+p-1 of
Σ_{i=1}^{p} [ (r_A(k+i|k) - r_A*)² + (r_B(k+i|k) - r_B*)² + γ (q(k+i|k) - q*)² ]

subject to (u_i)_min ≤ u_i(j) ≤ (u_i)_max, i = 1, ..., 3, with weight γ ≪ 1.
14 Advantages of MPC over Traditional APC Integrated solution: automatic constraint handling; feedforward/feedback; no need for decoupling or delay compensation; efficient utilization of degrees of freedom; can handle nonsquare systems (e.g., more MVs than CVs); assignable priorities and ideal resting values for MVs. Consistent, systematic methodology. Realized benefits: higher online times, cheaper implementation, easier maintenance.
15 Reason for Popularity (2) Emerging popularity of online optimization: process optimization and control are often conflicting objectives; optimization pushes the process to the boundary of constraints; quality of control determines how close one can push the process to that boundary. Implications for process control: high-performance control is needed to realize online optimization; constraint handling is a must; the appropriate tradeoff between optimization and control is time-varying and is best handled within a single framework: Model Predictive Control.
16 Bi-Level Optimization Used in MPC Steady-State Optimization (LP): economics-based objective (maximum profit or throughput, minimum utility); control-based constraints; computes optimal settling values for the inputs and outputs (setpoints). Dynamic Optimization (QP): steady-state prediction model; new measurements (feedback update); minimization of error (setpoint minus output) with a penalty on input movement; constraints on actuator limits and safety-sensitive variables. The result is adjustments to setpoints of low-level loops or control valves.
17 New Operational Hierarchy and Role of MPC Customer → Strategic Planning (month to year) → Production Planning / Plant Scheduling (week to month) → Real-Time Optimizer (minute to day) → Model Predictive Control → Low-Level Control (seconds). Role of MPC: move the plant to the current optimal condition fast and smoothly without violating constraints (local optimization + control).
18 An Exemplary Application: Ethylene Plant Primary Quench Fractionator Tower Demethanizer Deethanizer Ethylene Fractionator Hydrogen Furnaces Methane Ethylene Charge Gas Compressor Chilling Ethane Propylene Propane Feedstock Naphtha Light H-C Fuel Oil Depropanizer B - B Gasoline Propylene Fractionator Debutanizer JM Lee
19 Importance of Modeling/Sys-ID The model is the element of MPC that is most critical and varies the most from application to application. Almost all models used in MPC are empirical models identified through plant tests rather than first-principles models: step responses and pulse responses from plant tests, or transfer function models fitted to plant test data. Up to 80% of the time and expense involved in designing and installing an MPC is attributed to modeling/system identification. Keep in mind that the obtained models are imperfect (both in structure and in parameters); hence the importance of the feedback update to correct the model prediction or model parameters/states, and of penalizing excessive input movements.
20 Design Effort for Two Approaches (figure comparing traditional control and MPC across process analysis, design and tuning of the controller, modeling & identification, and control specification)
21 Challenges for MPC Efficient identification of a control-relevant model. Managing the sometimes exorbitant online computational load: nonlinear models lead to nonlinear programs (NLPs); hybrid system models (continuous dynamics + discrete events or switches, e.g., pressure swing adsorption) lead to mixed integer programs (MIPs); these are difficult to solve reliably online for large-scale problems. How do we design the model, estimator (of model parameters and state), and optimization algorithm as an integrated system that is simultaneously optimized, rather than as disparate components? Long-term maintenance of the control system.
22 Control Relevant Modeling Coupling between Modeling and Control Model Quality (Error or Uncertainty ) System ID Test signal characteristics Model structure Data filtering Parameter fitting Model MPC Design Choice of objective function and constraints Choice of horizon sizes Choice of online estimator Sensitivity of Control Performance to Model Errors JM Lee
23 Iterative Model/Controller Refinement Identification & Controller Design Closed-loop Data New Controller Gc Gc Gp Closed-loop Operation and Testing JM Lee
24 Comparison of Computational Load Classical Optimal Control MPC Offline Analysis and Computation Explicit Control Law u = f(x) Online Computation Model; Obj Fcn; Constraints min f(x) g(x) > 0 Offline Computation Online Computation (Estimation, Prediction, & Optimization) Limited by the ability to derive the explicit control law analytically or with reasonable offline computation JM Lee Limited by available online computational power and numerical methods to solve online optimization reliably 24
25 Coupling between Online Estimation and Control Calculation Modeling (System ID) Quality of Information for Estimation Online Estimation of State & Model Parameters Model w/ parameters & states to estimate Uncertainty in Prediction = Risk Prediction Real-Time Adjustment Online Optimal Control Calculation JM Lee
26 Integrated MPC, Performance Monitoring, and Closed-Loop Identification Adjustments MPC Online Model Identification Measurements Detection and Diagnosis of Abnormal Situation Operation shifts: model parameter changes Abnormal disturbances (size & pattern) Instrumentation/Equipment Faults, Poisoning, etc Process Monitoring JM Lee
27 Conclusion MPC is the established advanced multivariable control technique for the process industry It is already an indispensable tool and its importance is continuing to grow It can be formulated to perform some economic optimization and can also be interfaced with a larger-scale (eg, plant-wide) optimization scheme Obtaining an accurate model and having reliable sensors for key parameters are key bottlenecks A number of challenges remain to improve its use and performance JM Lee
28 45804 Process Dynamics & Control Lecture 3: Dynamic Matrix Control (DMC) Jong Min Lee School of Chemical and Biological Engineering Seoul National University
29 In this lecture, we will discuss Process representation: step response model Prediction (perfect model) Incorporation of feedback Optimization: unconstrained and constrained QP Implementation 2
30 Dynamic Matrix Control First appeared in the open literature in 1979 (Cutler and Ramaker; Prett and Gillette), with notable success on several Shell processes for many years. Reformulated as a quadratic program by Garcia and Morshedi in 1986: Quadratic Dynamic Matrix Control (QDMC). AspenTech: DMCplus. The prototype of the commercial algorithms presently used in the process industry.
31 Process representation Stable, SISO (figure: unit step in u, sampled response y at multiples of T_s). The step-response coefficients satisfy S_0 = 0 and S_n = S_{n+1} = S_{n+2} = ... = S_∞ for a stable process. Unit step-response function: S = [S_1, S_2, S_3, ..., S_n]^T. A complete description of the process requires n step-response coefficients.
32 Principle of superposition (figure: an arbitrary input decomposed into steps of size Δu(0), Δu(1), Δu(2), ...)

y(1) = y(0) + S_1 Δu(0)
y(2) = y(0) + S_1 Δu(1) + S_2 Δu(0)
...
y(k+1) = y(0) + Σ_{i=1}^{n-1} S_i Δu(k-i+1) + S_n {Δu(k-n+1) + Δu(k-n) + ... + Δu(0)}
       = y(0) + Σ_{i=1}^{n-1} S_i Δu(k-i+1) + S_n u(k-n+1)

where Δu(k-i+1) = u(k-i+1) - u(k-i).
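The superposition formula can be checked numerically: for a linear system, summing step responses scaled by the input moves Δu reproduces the output exactly. The first-order plant below is an illustrative assumption, not from the slides.

```python
import numpy as np

# Assumed discrete model y[k+1] = a*y[k] + b*u[k]; its step-response
# coefficients are S_i = b*(1 - a**i)/(1 - a).
a, b = 0.8, 0.2
n = 40                               # step-response truncation length
S = np.array([b * (1 - a**i) / (1 - a) for i in range(1, n + 1)])  # S_1..S_n

u = np.array([1.0, 1.0, 0.5, 0.5, 2.0, 2.0, 2.0, 0.0])
du = np.diff(np.concatenate(([0.0], u)))   # Δu(k) = u(k) - u(k-1), u(-1) = 0

# superposition: y(k+1) = Σ_i S_i Δu(k-i+1)  (zero initial condition)
y_super = [sum(S[i] * du[k - i] for i in range(k + 1)) for k in range(len(u))]

# direct simulation of the same model for comparison
y_sim, y = [], 0.0
for uk in u:
    y = a * y + b * uk
    y_sim.append(y)
```

`y_super` and `y_sim` agree to machine precision, which is exactly the superposition principle the slide states.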
33 Elements of DMC (figure) Past and future at time k: corrected predictions ỹ(k), ỹ(k+1), ỹ(k+2), ... approaching the target over the prediction horizon k+1, ..., k+p; input moves Δu(k), Δu(k+1), ... over the input horizon k, ..., k+m-1.
34 Predictions
35 1. Prediction (stable, SISO) At time k we know y(k) and need to compute Δu(k), which we don't know yet. ŷ(k+1) is the prediction of y(k+1) made at time k. Assume y(0) = 0:

ŷ(k+1) = Σ_{i=1}^{n-1} S_i Δu(k-i+1) + S_n u(k-n+1)
       = S_1 Δu(k) [effect of current control action] + Σ_{i=2}^{n-1} S_i Δu(k-i+1) + S_n u(k-n+1) [effect of past control actions]

Substituting k → k+1:

ŷ(k+2) = S_1 Δu(k+1) [effect of future control action] + S_2 Δu(k) [effect of current control action] + Σ_{i=3}^{n-1} S_i Δu(k-i+2) + S_n u(k-n+2) [effect of past control actions]
36 j-step ahead prediction

ŷ(k+j) = Σ_{i=1}^{j} S_i Δu(k+j-i) [effect of current and future control actions] + Σ_{i=j+1}^{n-1} S_i Δu(k+j-i) + S_n u(k+j-n) [effect of past control actions]

Let ŷ0(k+j) = Σ_{i=j+1}^{n-1} S_i Δu(k+j-i) + S_n u(k+j-n). This is referred to as the predicted unforced response, computed with past inputs only: U = [..., u(k-2), u(k-1), 0, 0, 0, ...]^T. Then, for j ≥ 1,

ŷ(k+j) = Σ_{i=1}^{j} S_i Δu(k+j-i) + ŷ0(k+j)
37 Multiple predictions

Ŷ(k+1) = [ŷ(k+1), ŷ(k+2), ..., ŷ(k+p)]^T
Ŷ0(k+1) = [ŷ0(k+1), ŷ0(k+2), ..., ŷ0(k+p)]^T
ΔU(k) = [Δu(k), Δu(k+1), ..., Δu(k+m-1)]^T

p: prediction horizon, m: control horizon, m ≤ p ≤ n + m. In matrix form:

Ŷ(k+1) = S ΔU(k) + Ŷ0(k+1)

where S is the p-by-m dynamic matrix

S = [ S_1      0        0        ...  0
      S_2      S_1      0        ...  0
      S_3      S_2      S_1      ...  0
      ...
      S_m      S_{m-1}  S_{m-2}  ...  S_1
      S_{m+1}  S_m      S_{m-1}  ...  S_2
      ...
      S_p      S_{p-1}  S_{p-2}  ...  S_{p-m+1} ]
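Building the dynamic matrix is a simple indexing exercise; the helper below constructs it from a step-response vector. The function name and the sample coefficients are illustrative, not from the slides.

```python
import numpy as np

def dynamic_matrix(S, p, m):
    """Build the p-by-m DMC dynamic matrix from step-response
    coefficients S = [S_1, ..., S_n] (0-indexed here)."""
    G = np.zeros((p, m))
    for i in range(p):          # row i corresponds to prediction k+i+1
        for j in range(m):      # column j corresponds to move Δu(k+j)
            if i - j >= 0:
                G[i, j] = S[i - j]   # entry S_{i-j+1} in the notes' indexing
    return G

S = np.array([0.5, 0.8, 0.95, 1.0, 1.0])   # illustrative step response
G = dynamic_matrix(S, p=4, m=2)
# first column is S_1..S_p; each later column is shifted down by one sample
```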
38 Output feedback and bias correction So far we have not utilized the latest observation, y(k). The fact is that there is no perfect model. Correct the prediction by adding a constant bias term:

ỹ(k+j) = ŷ(k+j) + b(k+j), with b(k+j) = y(k) - ŷ(k)

where ŷ(k) is the one-step-ahead prediction made at the previous time instance, k-1. Hence ỹ(k+j) = ŷ(k+j) + [y(k) - ŷ(k)], and in vector form

Ỹ(k+1) = S ΔU(k) + Ŷ0(k+1) + [y(k) - ŷ(k)] 1

with Ỹ(k+1) = [ỹ(k+1), ỹ(k+2), ..., ỹ(k+p)]^T and 1 = [1, 1, ..., 1]^T.
39 Recursive update of unforced response For stable models, one can update the predicted unforced response after Δu(k) is computed. It works like a state; hence you need n entries, not p:

Ŷ0_n(k+1) = M Ŷ0_n(k) + S* Δu(k)

or, written out,

[ŷ0(k+1), ŷ0(k+2), ..., ŷ0(k+n)]^T = M [ŷ0(k), ŷ0(k+1), ..., ŷ0(k+n-1)]^T + [S_1, S_2, ..., S_n]^T Δu(k)

where M is the n-by-n shift matrix (ones on the superdiagonal, with the last entry repeated since the stable model has settled).
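The shift-and-add update can be written directly with a shift matrix. This is a sketch of the recursion above with made-up numbers; in a real controller `S` would be the identified step response.

```python
import numpy as np

# Recursive update of the unforced-response "memory" vector:
#   Ŷ0(k+1) = M Ŷ0(k) + S* Δu(k)
n = 5
S = np.array([0.5, 0.8, 0.95, 1.0, 1.0])   # illustrative, settled at S_n
M = np.eye(n, k=1)                          # shift-up-by-one matrix
M[-1, -1] = 1.0                             # repeat last entry (stable model)

Y0 = np.zeros(n)                            # unforced response, plant at rest
Y0 = M @ Y0 + S * 0.2                       # after a move Δu(k) = 0.2
Y0 = M @ Y0 + S * 0.0                       # no move: pure shift
```

After the first update `Y0` is `0.2*S`; the second update just shifts it up one sample, repeating the settled last entry.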
40 Why? To achieve good control performance, Ỹ(k+1) should be close to the true open-loop output. This requires that n, the number of coefficient matrices in S*, is chosen so that S_n = S_{n+1} (i.e., the plant should be stable); otherwise MŶ0 will be in error. It also requires that the feedback term stays approximately constant (step disturbance).
41 1 Prediction (stable, MIMO) 14
42 2-by-2 system

ŷ1(k+1) = Σ_{i=1}^{n-1} S11,i Δu1(k-i+1) + S11,n u1(k-n+1) + Σ_{i=1}^{n-1} S12,i Δu2(k-i+1) + S12,n u2(k-n+1)

ŷ2(k+1) = Σ_{i=1}^{n-1} S21,i Δu1(k-i+1) + S21,n u1(k-n+1) + Σ_{i=1}^{n-1} S22,i Δu2(k-i+1) + S22,n u2(k-n+1)
43 Vector notation With output vector y = [y1, ..., y_m]^T and input vector u = [u1, ..., u_r]^T:

Ỹ(k+1) = [ỹ(k+1), ỹ(k+2), ..., ỹ(k+p)]^T (mp-by-1)
Ŷ0(k+1) = [ŷ0(k+1), ŷ0(k+2), ..., ŷ0(k+p)]^T
ΔU(k) = [Δu(k), Δu(k+1), ..., Δu(k+m-1)]^T

Ỹ(k+1) = S ΔU(k) + Ŷ0(k+1) + I_p [y(k) - ŷ(k)]

where S is the block dynamic matrix (use up to p only out of n coefficient matrices):

S = [ S_1 0 ... 0; S_2 S_1 ... 0; ...; S_m S_{m-1} ... S_1; S_{m+1} S_m ... S_2; ...; S_p S_{p-1} ... S_{p-m+1} ]

with S_i = [S11,i ... S1r,i; S21,i ... S2r,i; ...; Sm1,i ... Smr,i], and I_p = [I; I; ...; I] (p·ny-by-ny).
44 Recursive update of unforced response (MIMO)

Ŷ0_n(k+1) = M Ŷ0_n(k) + S* Δu(k)

or, written out: the stacked vector [ŷ0(k+1); ŷ0(k+2); ...; ŷ0(k+n)] is obtained from [ŷ0(k); ŷ0(k+1); ...; ŷ0(k+n-1)] by shifting up one block (M has identity blocks I_m on the superdiagonal and in the last block-diagonal entry) and adding [S_1; S_2; ...; S_n] Δu(k).
45 Control calculations 18
46 Objective function At time k, minimize the predicted deviation of the output from the setpoint, with some penalty on the input movement size, measured in the quadratic norm:

min over ΔU(k) of Σ_{i=1}^{p} (y_r(k+i) - ỹ(k+i))^T Q (y_r(k+i) - ỹ(k+i)) + Σ_{ℓ=0}^{m-1} Δu^T(k+ℓ) R Δu(k+ℓ)

Q, R: weighting matrices (diagonal).
47 Constraints (figure: projected output vs. target with limit y_max and future input with limit u_max, over the horizons t+1, ..., t+p and t, ..., t+m-1)

Input magnitude: u_min ≤ u(k+ℓ) ≤ u_max
Input rate: |Δu(k+ℓ)| ≤ Δu_max
Output magnitude: y_min ≤ ỹ(k+i) ≤ y_max
48 Solve: quadratic program

min over ΔU(k) of (1/2) ΔU^T(k) H ΔU(k) + f^T ΔU(k)
subject to A ΔU(k) ≤ b

H: Hessian matrix; f: gradient vector; A: constraint matrix; b: constraint vector; ΔU(k): decision variable. We need to convert the MPC objective and constraints to this standard QP form.
49 Unconstrained problem

min over ΔU(k) of (1/2) ΔU^T(k) H ΔU(k) + f^T ΔU(k)

Take the gradient with respect to the input: H ΔU(k) + f = 0, hence ΔU(k) = -H^{-1} f.
50 Objective function in quadratic form The objective

Σ_{i=1}^{p} (y_r(k+i) - ỹ(k+i))^T Q (y_r(k+i) - ỹ(k+i)) + Σ_{ℓ=0}^{m-1} Δu^T(k+ℓ) R Δu(k+ℓ)

can be stacked as

(Y_r(k+1) - Ỹ(k+1))^T Q (Y_r(k+1) - Ỹ(k+1)) + ΔU^T(k) R ΔU(k)

where, with a slight abuse of notation, Q and R now denote the block-diagonal matrices diag(Q, ..., Q) (p blocks) and diag(R, ..., R) (m blocks).
51 Not done yet! Substituting Ỹ(k+1) = S ΔU(k) + Ŷ0(k+1) + I_p [y(k) - ŷ(k)] into

(Y_r(k+1) - Ỹ(k+1))^T Q (Y_r(k+1) - Ỹ(k+1)) + ΔU^T(k) R ΔU(k)

yields

ε^T(k+1) Q ε(k+1) - 2 ε^T(k+1) Q S ΔU(k) + ΔU^T(k) (S^T Q S + R) ΔU(k)

where ε(k+1) = Y_r(k+1) - Ŷ0(k+1) - I_p [y(k) - ŷ(k)] is a known term. Hence the Hessian (a constant matrix) is H = S^T Q S + R, and the gradient vector (which must be updated at each time step) is f^T = -ε^T(k+1) Q S, up to the constant factor of 2 that does not change the minimizer.
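The unconstrained move computation that follows from these formulas, ΔU = (SᵀQS + R)⁻¹ SᵀQ ε, is a few lines of linear algebra. The horizon sizes, step response, and weights below are illustrative assumptions.

```python
import numpy as np

# Unconstrained DMC move: ΔU(k) = -H⁻¹ f = (Sᵀ Q S + R)⁻¹ Sᵀ Q ε(k+1)
p, m = 6, 2
step = np.array([0.5, 0.8, 0.95, 1.0, 1.0, 1.0])   # S_1..S_p (illustrative)
S = np.zeros((p, m))
for i in range(p):
    for j in range(m):
        if i >= j:
            S[i, j] = step[i - j]

Q = np.eye(p)               # output weight
R = 0.1 * np.eye(m)         # move-suppression weight
eps = np.ones(p)            # ε(k+1): setpoint minus unforced prediction

H = S.T @ Q @ S + R
f = -S.T @ Q @ eps
dU = np.linalg.solve(H, -f)  # ΔU(k) = -H⁻¹ f
```

Because the move-suppression term keeps H positive definite, the solve always succeeds, and the computed moves strictly reduce the predicted error.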
52 Constraints in linear inequality form Convert

u_min ≤ u(k+ℓ) ≤ u_max, ℓ = 0, ..., m-1
|Δu(k+ℓ)| ≤ Δu_max, ℓ = 0, ..., m-1
y_min ≤ ỹ(k+i) ≤ y_max, i = 1, ..., p

into the standard form A ΔU(k) ≤ b.
53 Input magnitude constraint Since u(k+ℓ) = u(k-1) + Σ_{i=0}^{ℓ} Δu(k+i), the constraint u_min ≤ u(k+ℓ) ≤ u_max for ℓ = 0, ..., m-1 becomes

-u(k-1) - Σ_{i=0}^{ℓ} Δu(k+i) ≤ -u_min
u(k-1) + Σ_{i=0}^{ℓ} Δu(k+i) ≤ u_max

i.e., stacking rows,

[-I_L; I_L] ΔU(k) ≤ [u(k-1) - u_min (repeated m times); u_max - u(k-1) (repeated m times)]

where I_L is the m-by-m lower-triangular matrix of (identity) blocks.
54 Input rate constraints |Δu(k+ℓ)| ≤ Δu_max for ℓ = 0, ..., m-1, i.e., -Δu_max ≤ Δu(k+ℓ) ≤ Δu_max, becomes

[-I; I] ΔU(k) ≤ [Δu_max, ..., Δu_max]^T

where I is the m-by-m (block) identity.
55 Output magnitude constraints y_min ≤ ỹ(k+i) ≤ y_max for i = 1, ..., p. With Ỹ(k+1) = S ΔU(k) + Ŷ0(k+1) + I_p (y(k) - ŷ(k)), Y_max = [y_max, ..., y_max]^T, and Y_min = [y_min, ..., y_min]^T:

S ΔU(k) ≤ Y_max - Ŷ0(k+1) - I_p (y(k) - ŷ(k))
-S ΔU(k) ≤ -Y_min + Ŷ0(k+1) + I_p (y(k) - ŷ(k))
56 In summary,

[-I_L; I_L; -I; I; S; -S] ΔU(k) ≤
[u(k-1) - u_min (m rows);
 u_max - u(k-1) (m rows);
 Δu_max (m rows);
 Δu_max (m rows);
 Y_max - Ŷ0(k+1) - I_p (y(k) - ŷ(k));
 -Y_min + Ŷ0(k+1) + I_p (y(k) - ŷ(k))]

i.e., A ΔU(k) ≤ b.
57 Solving QP A quadratic program is the minimization of a quadratic function subject to linear inequality constraints. QPs are convex and therefore fundamentally tractable. Off-the-shelf solvers (e.g., QPSOL, QUADPROG) are available, but further customization is desirable (to exploit the structure in the Hessian and constraint matrices). The cost of solving a QP depends on the dimension and structure of the Hessian, as well as on the number of constraints.
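For the special case where the constraints are simple bounds, even a projected-gradient iteration solves the QP; this is only a sketch to show the problem is a small convex program, not what production MPC codes (active-set or interior-point solvers) do. All numbers are made up.

```python
import numpy as np

# Minimal projected-gradient solver for a box-constrained QP:
#   min ½ΔUᵀHΔU + fᵀΔU   s.t.  lo ≤ ΔU ≤ hi
H = np.array([[2.0, 0.5], [0.5, 1.0]])
f = np.array([-3.0, -1.0])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

x = np.zeros(2)
alpha = 1.0 / np.linalg.norm(H, 2)        # step size 1/L from largest eigenvalue
for _ in range(500):
    x = np.clip(x - alpha * (H @ x + f), lo, hi)   # gradient step + projection
dU = x
```

Here the unconstrained minimizer violates the upper bound on the first move, so the solver clips it to the constraint and re-optimizes the second move, converging to ΔU = [1.0, 0.5].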
58 Active set method Interior point method - Barrier function 31
59 Real-time implementation 1. Initialization: initialize the memory vector Ŷ0(0) and the reference vector; set k = 0. 2. Memory update: Ŷ0(k+1) = M Ŷ0(k) + S* Δu(k). 3. Reference vector update. 4. Measurement intake: take in the new measurements y(k) and d(k). 5. Calculation of the gradient vector and constraint vector. 6. Solve the QP. 7. Implementation of the input: u(k) = u(k-1) + Δu(k). 8. Go back to step 2 after setting k = k+1.
60 45804 Process Dynamics & Control Lecture 4: Sampling and Representation of Sampled Signals Jong Min Lee Chemical & Biomolecular Engineering Seoul National University April 1
61 Overview (figure: u_k → D/A with ZOH → system → A/D → y_k) A computer-oriented mathematical model (or discrete-time model): relates u_k to y_k; does not give information on intersample behaviour; can be described using a difference equation or a pulse transfer function.
62 Input-Output Model An input-output model describes a relationship between the input u_k and the output y_k. Generally it takes the form of the following difference equation:

y_k = -a_1 y_{k-1} - ... - a_n y_{k-n} + b_1 u_{k-1} + ... + b_m u_{k-m}

With some abuse of notation, the above is written as

(1 + a_1 q^{-1} + ... + a_n q^{-n}) y_k = (b_1 q^{-1} + ... + b_m q^{-m}) u_k

and the z-transform gives

Y(z)/U(z) = (b_1 z^{-1} + b_2 z^{-2} + ... + b_m z^{-m}) / (1 + a_1 z^{-1} + ... + a_n z^{-n})

The order of the transfer function is determined by max(n, m). Denominator: autoregressive terms. Numerator: moving-average terms.
63 Discrete-Time Pole Consider the first-order system y_k = a y_{k-1} + u_{k-1}:

Y(z)/U(z) = z^{-1} / (1 - a z^{-1})

One can expand the above as a power series of z^{-1} around z^{-1} = 0:

Y(z)/U(z) = 1/z + a/z² + a²/z³ + ... + a^{N-1}/z^N + truncation error

The obvious convergence (stability) condition is |a| < 1. Note that a is the pole of Y(z)/U(z).
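The power-series expansion is exactly the system's pulse response, which can be verified by simulating the difference equation (a small check, with an arbitrarily chosen pole):

```python
import numpy as np

# Pulse response of y_k = a*y_{k-1} + u_{k-1}: for |a| < 1 it decays, and it
# equals a**k term by term, matching the power-series expansion above.
def pulse_response(a, N=50):
    y, out = 0.0, []
    u_prev = 1.0                # unit pulse at k = 0
    for k in range(N):
        y = a * y + u_prev
        out.append(y)
        u_prev = 0.0
    return np.array(out)

stable = pulse_response(0.8)    # decays toward zero since |a| < 1
```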
64 State-Space Model A model can also be given in terms of the following matrix difference equation:

x_{k+1} = Φ x_k + Γ u_k
y_k = C x_k + D u_k

x_k is called a state vector and stores the effect of past inputs (u_{k-1}, u_{k-2}, ...) on the current and future output. The state variables may or may not have physical meanings. An equivalent input-output representation can easily be derived by applying the z-transform to the above:

z X(z) = Φ X(z) + Γ U(z), Y(z) = C X(z) + D U(z) ⟹ Y(z)/U(z) = C (zI - Φ)^{-1} Γ + D
65 Input-Output Models from Discretization of Continuous TF (Optional) (figure: u_k → ZOH D/A → G(s) → A/D → y_k, giving G(z)) The z-transform describes a discrete signal as an ``impulse train" when viewed in continuous time. A zero-order hold converts the sampled signal to a piecewise-constant signal (a train of pulses). Hence we need to derive a pulse transfer function for the zero-order hold. The basic idea is

G(z) = Y(z)/U(z) = Z{ L^{-1}{ G(s) (1 - e^{-sh})/s } }
66 Input-Output Models from Identification Suppose one is interested in fitting an n-th order transfer function model

Y(z)/U(z) = (b_1 z^{-1} + b_2 z^{-2} + ... + b_m z^{-m}) / (1 + a_1 z^{-1} + a_2 z^{-2} + ... + a_n z^{-n})

In the time domain, this corresponds to

y(k) = -a_1 y(k-1) - a_2 y(k-2) - ... - a_n y(k-n) + b_1 u(k-1) + b_2 u(k-2) + ... + b_m u(k-m)

Notice that there is at least one time delay between the input and output due to the presence of the ZOH element. The above is a linear regression model y(k) = ϕ^T(k) θ, where

ϕ^T(k) = [-y(k-1) ... -y(k-n) u(k-1) ... u(k-m)], θ^T = [a_1 ... a_n b_1 ... b_m]
67 Estimating θ from N sample data points Given N input-output samples, the solution to the least-squares problem is

θ̂_N = (Φ_N^T Φ_N)^{-1} Φ_N^T Y_N, where Φ_N = [ϕ^T(1); ...; ϕ^T(N)]

Example: given the transfer function

G(z) = (b_1 z^{-1} + b_3 z^{-3}) / (1 + a_1 z^{-1} + a_2 z^{-2})

write the difference equation corresponding to G(z) and also form the Φ matrix suitable to estimate the parameters using the LS method.
68 Linear Regression Solution

y(k) + a_1 y(k-1) + a_2 y(k-2) = b_1 u(k-1) + b_3 u(k-3)

The Φ_N matrix corresponding to any difference equation is formed by first forming the Y_N vector, which is constructed by looking at the term with the largest sample distance between y(k) and any of y(k-n) and u(k-m); in this example that term is u(k-3). This is done to avoid negative sample indices while writing the Φ matrix. Thus,

Y_N = [y(4); y(5); ...; y(N)]
Φ_N = [-y(3) -y(2) u(3) u(1);
       -y(4) -y(3) u(4) u(2);
       ...
       -y(N-1) -y(N-2) u(N-1) u(N-3)]
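The construction above can be exercised end to end: simulate data from known (illustrative) parameters, build Y_N and Φ_N starting at the largest lag, and recover θ by least squares. With noise-free data the recovery is exact.

```python
import numpy as np

# Fit y(k) = -a1 y(k-1) - a2 y(k-2) + b1 u(k-1) + b3 u(k-3) by least squares.
rng = np.random.default_rng(0)
a1, a2, b1, b3 = -1.2, 0.4, 0.5, 0.3      # "true" illustrative parameters
N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(3, N):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + b1 * u[k - 1] + b3 * u[k - 3]

rows, Y = [], []
for k in range(3, N):     # start where the largest lag u(k-3) is available
    rows.append([-y[k - 1], -y[k - 2], u[k - 1], u[k - 3]])
    Y.append(y[k])
Phi = np.array(rows)
theta, *_ = np.linalg.lstsq(Phi, np.array(Y), rcond=None)
# theta recovers [a1, a2, b1, b3] since the data here are noise-free
```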
69 State-Space Models from Discretization Suppose we are given a model described by a system of linear differential equations:

dx/dt = A x + B u, y = C x + D u

where x is an n-dimensional vector. Suppose that (1) a zero-order hold is used and (2) sampling is synchronized for all inputs and outputs. Then, treating t = kh as the initial time and x_k = x(kh) as the initial condition, we have

x(t) = e^{A(t-kh)} x_k + ∫_{kh}^{t} e^{A(t-τ)} B u(τ) dτ
70 Discretization of Continuous SS Model Evaluating the above at t = kh + h, with u(t) = u_k for kh ≤ t < kh + h (the zero-order-hold assumption), we obtain

x_{k+1} = e^{Ah} x_k + ∫_{kh}^{kh+h} e^{A(kh+h-τ)} B u(τ) dτ = e^{Ah} x_k + (∫_0^h e^{As} ds) B u_k   [s = kh + h - τ]

Now we can write the propagation of variables from one sample time to the next as

x_{k+1} = Φ x_k + Γ u_k, y_k = C x_k + D u_k, where Φ = e^{Ah} and Γ = (∫_0^h e^{As} ds) B
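For a scalar first-order example the formulas Φ = e^{Ah} and Γ = (∫₀ʰ e^{As} ds)B have closed forms, and the result can be cross-checked against a fine-grained simulation over one sample interval. The τ, K, h values are arbitrary assumptions for illustration.

```python
import numpy as np

# ZOH discretization of dx/dt = A x + B u with A = -1/τ, B = K/τ:
#   Φ = e^{Ah},  Γ = (e^{Ah} - 1)/A · B   (scalar A ≠ 0)
tau, K, h = 2.0, 1.5, 0.5
A, B = -1.0 / tau, K / tau
Phi = np.exp(A * h)
Gamma = (Phi - 1.0) / A * B

# cross-check: fine Euler simulation over one sample with a held input
x0, u = 1.0, 0.7
steps = 50_000
dt = h / steps
x = x0
for _ in range(steps):
    x += dt * (A * x + B * u)   # u held constant by the zero-order hold
x_exact = Phi * x0 + Gamma * u
```

The simulated state after one sample agrees with the exact ZOH update to within the Euler discretization error.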
71 Delays Can Be Easily Incorporated into the Discrete Model Case I: 0 < θ ≤ h. Recall dx/dt = A x + B u(t - θ). Note that

x(kh+h) = e^{Ah} x(kh) + ∫_{kh}^{kh+h} e^{A(kh+h-τ)} B u(τ-θ) dτ

u(τ-θ) = u_k for kh + θ ≤ τ < kh + h, and u_{k-1} for kh ≤ τ < kh + θ

Substituting the above and making the change of variable s = kh + h - τ:

x_{k+1} = e^{Ah} x_k + (e^{A(h-θ)} ∫_0^θ e^{As} ds B) u_{k-1} + (∫_0^{h-θ} e^{As} ds B) u_k = Φ x_k + Γ_1 u_{k-1} + Γ_0 u_k
72 Discrete SS Model with Delays (Cont'd) We can put the above in the standard form as follows:

[x_{k+1}; u_k] = [Φ Γ_1; 0 0] [x_k; u_{k-1}] + [Γ_0; I] u_k
y_k = C x_k + D u_k = [C 0] [x_k; u_{k-1}] + D u_k

Hence the state vector at the k-th time consists of x_k and u_{k-1}. This makes sense: when we have a delay (≤ h), the effect of u_{k-1} has not been fully stored in x_k. Case II: θ = (d-1)h + θ′ where 0 < θ′ ≤ h and d ≥ 1 (for d = 1 we recover the previous case). As before,

x(kh+h) = e^{Ah} x(kh) + ∫_{kh}^{kh+h} e^{A(kh+h-τ)} B u(τ-θ) dτ
73 But this time

u(τ-θ) = u_{k-d+1} for kh + θ′ ≤ τ < kh + h, and u_{k-d} for kh ≤ τ < kh + θ′

Hence,

x_{k+1} = e^{Ah} x_k + (e^{A(h-θ′)} ∫_0^{θ′} e^{As} ds B) u_{k-d} + (∫_0^{h-θ′} e^{As} ds B) u_{k-d+1} = Φ x_k + Γ_1 u_{k-d} + Γ_0 u_{k-d+1}
74 We can put the above in the standard form as follows:

[x_{k+1}; u_{k-d+1}; ...; u_{k-1}; u_k] = [Φ Γ_1 Γ_0 ... 0; 0 0 I ... 0; ...; 0 0 0 ... I; 0 0 0 ... 0] [x_k; u_{k-d}; ...; u_{k-2}; u_{k-1}] + [0; 0; ...; 0; I] u_k

y_k = [C 0 ... 0] [x_k; u_{k-d}; ...; u_{k-2}; u_{k-1}] + D u_k

Note that the state vector at the k-th time must include u_{k-1}, ..., u_{k-d}, since the effect of the past d inputs has not been stored in x_k.
75 State-Space Models from Identification One can also obtain a discrete state-space model from data This can be done by Using methods called subspace ID that directly gives a model in the discrete state-space form Identifying a transfer function model and then performing a ``realization" on it (which means finding an I/O-wise equivalent state-space model representation) 1 / 1
76 45804 Process Dynamics & Control Lecture 5: System Identification: Introduction Jong Min Lee Chemical & Biomolecular Engineering Seoul National University April 20, / 1
77 References L. Ljung, System Identification: Theory for the User, Prentice Hall. T. Söderström and P. Stoica, System Identification, Prentice Hall. G. E. P. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, 19
78 First-Principles Modeling Usually involves fewer measurements; requires experimentation only for the estimation of unknown parameters Provides information about internal state of the process Promotes fundamental understanding of the internal workings of the process Requires fairly accurate and complete process knowledge Not useful for poorly understood and/or complex processes Naturally produces both linear and nonlinear models System Identification Requires extensive measurements Provides information only about the portion of the process Treats the process like a ``black box" Requires no such detailed knowledge Quite often proves to be the only alternative for poorly understood/complex processes Requires special methods for nonlinear models 3 / 1
79 Objective of Sys ID From the I/O data set {y(k), u(k), k = 1, ..., N}, identify

y(k) = G(q) u(k) + H(q) ε(k)

where G(q) is the plant transfer function (deterministic part), H(q) is the disturbance transfer function (stochastic, noise part), and ε is white noise; or, in state-space form,

x(k+1) = A x(k) + B u(k) + K ε(k)
y(k) = C x(k) + ε(k)

System identification at a more general level includes other tasks such as data generation, data pretreatment, and model validation.
80 Plant vs Noise Model (figure: u(k) → Plant Model G(q) → y(k)) According to the figure above, the output can be exactly calculated once the input is known. In most cases this is unrealistic: there are always signals beyond our control that also affect the system. Assume that such effects can be lumped into an additive term w(k) at the output:

y(k) = Σ_{τ=1}^{∞} g(τ) u(k-τ) + w(k)

Note: g(τ) is the impulse response, which is obtained by the unit pulse input.
81 Then, we have w(k) u(k) Plant Model + G(q) y(k) The value of disturbance (w(k)) is not known beforehand So, we employ a probabilistic framework to describe future disturbances We assume that w(k) is driven by a white noise sequence ε(k) for simplicity / 1
82 ε(k) Disturbance Model H(q) u(k) Plant Model + G(q) y(k) ε(k): white noise / 1
83 Parametric vs Nonparametric Methods 1 Parametric Methods Select the best one among a confined set of possible models Finite dimensional parameters Ex) transfer function (matrix) of given order, ``Finite" impulse response identification 2 Nonparametric Methods Time domain: Step response, Impulse response, Correlation analysis Frequency domain: Fourier analysis, Spectral analysis End Objective: Obtain a model providing a good (multi-step) prediction with the intended feedback control loop in place 8 / 1
84 Model Structure for Parametric Identification Standard form (SISO): y(k) = G(q, θ) u(k) + H(q, θ) ε(k), where ε(k) is a white noise sequence and H(q) is a stable and stably invertible transfer function. Differenced form: if the process mean shifts continuously or from time to time, use Δy(k) = G(q, θ) Δu(k) + H(q, θ) ε(k).
85 ARX (AutoRegressive with eXogenous input)

y(k) + a_1 y(k-1) + ... + a_n y(k-n) = b_1 u(k-1) + ... + b_m u(k-m) + ε(k)

G(q, θ) = B(q)/A(q) = (b_1 q^{-1} + ... + b_m q^{-m}) / (1 + a_1 q^{-1} + ... + a_n q^{-n})
H(q, θ) = 1/A(q)

For the ARX structure, use of a very high order model is often necessary.
86 ARMAX (AutoRegressive Moving Average with eXogenous input)

y(k) + a_1 y(k-1) + ... + a_n y(k-n) = b_1 u(k-1) + ... + b_m u(k-m) + ε(k) + c_1 ε(k-1) + ... + c_l ε(k-l)

H(q) = C(q)/A(q) = (1 + c_1 q^{-1} + ... + c_l q^{-l}) / (1 + a_1 q^{-1} + ... + a_n q^{-n})
87 OE (Output Error) Structure

ỹ(k) + a_1 ỹ(k-1) + ... + a_n ỹ(k-n) = b_1 u(k-1) + ... + b_m u(k-m)

where ỹ is the deterministic output and y(k) = ỹ(k) + ε(k), so G(q, θ) = B(q)/A(q) and H(q) = 1. The OE structure can also encompass the case where the noise model is set a priori: y(k) = G(q, θ) u(k) + H(q) ε(k) becomes

(1/H(q)) y(k) = G(q, θ) (1/H(q)) u(k) + ε(k)
88 Orthogonal Expansion Model A special kind of OE structure where

G(q) = Σ_{i=1}^{n} b_i B_i(q)

where the B_i(q) are orthogonal basis functions. For example, B_i(q) = q^{-i} gives the Finite Impulse Response model, and

B_i(q) = (√(1 - α²)/(q - α)) ((1 - αq)/(q - α))^{i-1}

gives the Laguerre model.
89 Other Structures Box-Jenkins structure:

y(k) = (B(q)/A(q)) u(k) + (C(q)/D(q)) ε(k)

ARIMAX structure (AutoRegressive Integrated Moving Average with eXogenous input):

y(k) = (B(q)/A(q)) u(k) + (C(q)/((1 - q^{-1}) A(q))) ε(k)
90 Nonparametric Model: Impulse Response u u = {1, 0, 0, } y = {0, H 1, H 2,, H n, H n+1, 0, } y 1 h Time H 1 H 2 H 3 Time 15 / 1
91 Nonparametric Model: Step Response u u = {1, 1, 1, } y = {0, S 1, S 2, S 3, } y 1 h Time S 1 S 2 S 3 Time 1 / 1
92 Major Steps Gathering of data through a plant test Data conditioning and pretreatment Transition of data to a model: model structure selection and parameterization plus parameter estimation Validation 1 / 1
93 The System Identification Loop Experiment Design Prior Knowledge Data Choose Model Set Choose Criterion of Fit Calculate Model Validate Model Not OK: Revise OK: Use It 18 / 1
94 Step Testing Procedure 1. Assume operation at steady state, with controlled variable (CV) y(t) = y_0 and manipulated variable (MV) u(t) = u_0 for t < t_0. 2. Make a step change in u of a specified magnitude Δu: u(t) = u_0 + Δu for t ≥ t_0. 3. Measure y(t) at regular intervals: y(k) = y(t_0 + kh) for k = 1, 2, ..., N, where h is the sampling interval and Nh is the approximate time required to reach steady state. 4. Calculate the step-response coefficients from the data: S(k) = (y(k) - y_0)/Δu for k = 1, 2, ..., N.
95 Discussions 1. Choice of sampling period h: for modelling, the best h is one such that N ≈ 40 samples cover the settling time. Ex: if G(s) = K e^{-θs}/(τs + 1), then the settling time is about 4τ + θ; therefore

h ≈ (4τ + θ)/N = (4τ + θ)/40 = 0.1τ + θ/40

This may be adjusted depending on control/operation objectives. 2. Choice of step size Δu: too small, and the test may not produce enough output change (low signal-to-noise ratio); too big, and it may shift the process to an undesirable condition or induce nonlinearity. Trial and error is needed to determine the optimum step size.
96 Discussions on Step Testing (Cont'd) 3. Choice of number of experiments: averaging the results of multiple experiments reduces the impact of disturbances on the calculated S(k)'s; multiple experiments can also be used to check model accuracy by cross-validation (data sets for identification ≠ data set for validation). 4. An appropriate method to detect steady state is required. 5. While the steady-state (low-frequency) characteristics are accurately identified, high-frequency dynamics may be inaccurately characterized.
97 Procedure for Pulse Testing (Impulse Response) 1. Steady operation at y_0 and u_0. 2. Send a pulse of size δu lasting for one sampling period. 3. Calculate the pulse-response coefficients: H(k) = (y(k) - y_0)/δu for k = 1, ..., N.
98 Discussions on Pulse Testing 1. Select h and N as for step testing. 2. Usually need δu ≫ Δu for an adequate S/N ratio. 3. Multiple experiments are recommended, for the same reason as in step testing. 4. An appropriate method to detect steady state is required. 5. Theoretically, a pulse is a perfect (unbiased) excitation for linear systems.
99 Input Design Why use test inputs other than a step or pulse? Pure step tests or pulse tests usually take too long and are impossible for some inputs More system excitation produces more information Completely random inputs (eg, RBS, PRBS) excite all frequencies with equal energy 24 / 1
100 Types of Inputs
Random Binary Signal (RBS) or Pseudo-Random Binary Signal (PRBS); Random Noise
[plots: u vs. time for a binary signal and for random noise]
101 PRBS
Size of u(t) is fixed and switches between two levels
Choice of whether to switch or stay is random: flip a coin
Sequence design choices are:
levels to switch between
base length of time between switches (period)
duration of experiment
Trade-off between size of PRBS and duration of experiment:
larger size and longer duration give better estimates
The power of this signal is that you can use a small size (unnoticeable) for a long time to get a good result
Base switching period should reflect the process dynamics: set it from a "dominant" time constant
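The coin-flip construction above can be sketched in a few lines. This is a plain random binary signal rather than a shift-register PRBS; the length, period, and levels are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbs(n, period, levels=(-1.0, 1.0)):
    """Random binary signal: hold a coin-flip level for `period` samples."""
    n_flips = -(-n // period)                    # ceil(n / period) coin flips
    flips = rng.integers(0, 2, size=n_flips)     # 0 or 1 per switching instant
    return np.repeat(np.asarray(levels)[flips], period)[:n]

u = rbs(200, period=5)   # 200 samples, switching decision every 5 samples
```

A longer base period pushes the input energy toward low frequencies, which is why it is tied to the dominant time constant.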
102 PRBS: Distillation Column Example
Time to steady state is ... min (τ = 10-15 min)
Length of experiment (... hours): ... switches (not very many)
Levels: reflux rate, steam rate
Sequence design:
May want to start with a step of 3-4τ
Choose the starting level (+1 or -1)
At each subsequent switch time, flip a coin
103 Frequency Range of Input Excitation
1 Based on the step response, obtain τp
2 Calculate the corner frequency ω_CF = 1/τp [rad/time]
3 Choose a sampling interval h based on the earlier discussion
4 Nyquist frequency: 1/(2h) [cycles/time] = π/h [rad/time]
5 Choose the lower bound for the input frequencies as zero (in order to obtain a good estimate of the gain)
Choose the upper bound for the input frequencies as 2.5-3 ω_CF (≤ ω_N)
In MATLAB: u = idinput(2000, 'rbs', [0 0.01], [-1 1]);
104 Model Types and Transfer Function
Model Types
Output Error (least general), ARX, ARMAX, Box-Jenkins (most general)
Process Transfer Function
G_p(q^-1) = [B(q)/F(q)] q^-(d+1)
Zeros: roots of B(q)
Poles: roots of F(q)
Time delay: d -- note that an extra 1-step time delay is naturally introduced by the zero-order hold and sampling; d is the pure time delay
105 Disturbance Modelling: Stochastic Processes
Parametric
Autoregressive (denominator)
Moving average (numerator)
AutoRegressive and Moving Average (ARMA) Model
w(k) = [C(q)/D(q)] ε(k)
ARIMA (AutoRegressive Integrated Moving Average) Model
w(k) = [C(q)/D(q)] [1/(1 - q^-1)^d] ε(k)
106 Least Squares Identification
Recall (from Lecture 5) that the least squares estimate of the parameters is given as
θ̂_N = (Φ_N^T Φ_N)^-1 Φ_N^T Y_N, where Φ_N = [φ^T(1); ...; φ^T(N)]
for y(k) = φ^T(k) θ, where
φ^T(k) = [-y(k-1) ... -y(k-n) u(k-1) ... u(k-m)]
θ^T = [a1 ... an b1 ... bm]
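A minimal numerical sketch of this estimate. The system, its coefficients, and the sign convention below are illustration choices: the model is written directly as y(k) = a1 y(k-1) + b1 u(k-1) + e(k), so the regressor carries no minus signs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate y(k) = 0.8*y(k-1) + 0.5*u(k-1) + e(k)
N = 2000
u = rng.standard_normal(N)
e = 0.01 * rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + e[k]

# Stack the regressor rows phi^T(k) = [y(k-1), u(k-1)] into Phi and solve
# theta_hat = (Phi^T Phi)^{-1} Phi^T Y via a numerically safer least squares.
Phi = np.column_stack([y[:-1], u[:-1]])
Y = y[1:]
theta_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # close to [0.8, 0.5]
```

With low noise and plenty of data, the estimate recovers the true coefficients almost exactly.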
107 45804 Process Dynamics & Control
Lecture b: Disturbance Modelling
Jong Min Lee
Chemical & Biomolecular Engineering, Seoul National University
May 9
108 Disturbance Modelling: Why?
Predict disturbance effects on the output so that they can be eliminated
Deterministic vs. stochastic disturbances
steps, pulses, sinusoids -- deterministic
white noise, colored noise, integrated white noise, etc. -- random
Stochastic processes are convenient vehicles to describe them
Stochastic disturbances and noise are almost always present
Most disturbances, even deterministic ones, are unpredictable in terms of size, direction, and time of occurrence
109 Linear Stochastic Models
Important: in linear systems, it is not necessary to identify and model the actual physical disturbance sources
It is sufficient to model their overall effect on the output
w: physical disturbance variables, or signals representing the collective effect of disturbances on the output
Driven by white noise ε
Transfer function model: w(k) = H(q) ε(k)
[block diagram: ε(k) → stochastic model H(q) → w(k)]
110 Linear Stochastic Models: Examples
Ambient temperature / pressure at Edmonton International Airport
Power / water consumption in Edmonton
Stock market
Any "unknown" or "indescribable" disturbances of a process unit
A stochastic process may look like gross trends plus random behaviour around the trends
[plot: y vs. time]
111 General Structure
w(k) = [(1 + θ1 q^-1 + θ2 q^-2 + ... + θm q^-m) / (1 + ϕ1 q^-1 + ϕ2 q^-2 + ... + ϕn q^-n)] [1/(1 - q^-1)^d] ε(k)
numerator: moving average component; denominator: autoregressive component; 1/(1 - q^-1)^d: integrating component
Each part gives a different relationship between the current value of the stochastic output w(k) and its past values (w(k-1), w(k-2), ...) or the input itself
Our goal is to identify each part: identify how the output is related to itself and to the input ε
112 Time Series
Sequence of observations taken sequentially over time
If the variable has randomness, the sequence is a stochastic process
[plot: three realizations of w vs. time k for w(k) = w(k-1) + ε(k)]
113 Description of Stochastic Processes
Two relevant questions:
1 Does the probability of an outcome (or realization) of w(k + τ) depend on the outcome of w(k)?
Are w(k) and w(k + τ) independent?
2 Does the distribution of w(k), or the joint distribution of {w(k), w(k + τ)}, depend on k?
Does the mean change with time?
Are the covariances cov{w(1), w(5)} and cov{w(11), w(15)} different?
In the last lecture, we learned: autocovariance; weakly stationary process
114 In the Context of Our Applications
We assume "weakly stationary processes":
constant means
constant variances
autocovariances depend only on lags
Autocovariance: R_w(τ) = E{(w(k) - w̄)(w(k + τ) - w̄)^T}
115 Autocorrelation
Autocovariance has scale; the normalized quantity is the autocorrelation
ρ_w(τ) = cov{w(k), w(k + τ)} / sqrt(var(w(k + τ)) var(w(k))) = E{(w(k) - w̄)(w(k + τ) - w̄)^T} / σ_w^2 = R_w(τ) / R_w(0)
Note: because of stationarity, var(w(k + τ)) = var(w(k)) = σ_w^2
116 Autocorrelation & Autocovariance
1 Variance is simply the autocovariance at lag 0: σ_w^2 = R_w(0)
2 Autocorrelation and autocovariance are symmetric in the lag τ: R_w(τ) = R_w(-τ), ρ_w(τ) = ρ_w(-τ)
3 Autocorrelation is bounded and normalized: -1 ≤ ρ_w(τ) ≤ 1
4 Autocorrelation and autocovariance are parameters summarizing the probability behaviour of the stochastic process w(k)
Sample autocorrelation / autocovariance: computed from sample data
117 Disturbance Example 1
w(k) = ε(k) + 0.5 ε(k - 1), where ε ~ N(0, σ_ε^2)
Autocorrelations?
Lag 0: ρ_w(0) = 1 (the current output is always perfectly correlated with itself)
Lag 1: ρ_w(1) = 0.5 / (1 + 0.5^2) = 0.4
Lag > 1: ρ_w(τ > 1) = 0
[stem plot: ρ_w(τ) vs. τ]
118 Disturbance Example 1 (Cont'd)
Non-zero values up to lag 1
Lag-1 moving average disturbance = MA(1) disturbance
[time response plot: notice the local "trends"]
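The lag-1 value can be checked by simulation; a short sketch (the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
eps = rng.standard_normal(200_000)

# MA(1) disturbance w(k) = eps(k) + 0.5*eps(k-1)
w = eps[1:] + 0.5 * eps[:-1]

def acf(x, tau):
    """Sample autocorrelation at lag tau."""
    d = x - x.mean()
    return (d[:-tau] * d[tau:]).mean() / (d * d).mean()

rho1 = acf(w, 1)   # theory: 0.5 / (1 + 0.5**2) = 0.4
rho2 = acf(w, 2)   # theory: 0
```

The sample values sit within sampling error of the theoretical 0.4 and 0.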
119 Disturbance Example 2
w(k + 1) = ϕ w(k) + ε(k + 1), with |ϕ| < 1: dependence on the past output
Autocorrelations:
lag 0: ρ_w(0) = 1
lag 1: ρ_w(1) = ϕ
lag 2: ρ_w(2) = ϕ^2
lag k: ρ_w(k) = ϕ^k
[stem plot: ρ_w(τ) vs. τ]
120 Disturbance Example 2: Autoregressive
The disturbance w is weakly stationary
It is a sum of stationary stochastic processes: an infinite sum of the white noise sequence ε(k)'s
AR coefficient |ϕ| < 1, so the sum is convergent
Mean is zero and variance is constant
[time response plot: notice the local trends]
121 The two examples were
Example 1, a moving average disturbance: w(k) = ε(k) + 0.5 ε(k - 1) = {1 + θ1 q^-1} ε(k) with θ1 = 0.5
Example 2, an autoregressive disturbance: w(k) = ϕ1 w(k - 1) + ε(k) = [1/(1 - ϕ1 q^-1)] ε(k)
122 Detecting Model Structure from Data
Given time series data of a stochastic process, examine the autocorrelation plot:
If a sharp cut-off at lag k is detected, then the disturbance is a moving average disturbance of order k
If a gradual decline is observed, then the disturbance contains an autoregressive component
Long tails indicate either a higher-order autoregressive component or a pole near 1
If the autocorrelations alternate between positive and negative values, one or more of the roots is negative
123 Estimating Autocovariances from Data
Sample autocovariance function:
R̂_w(τ) = (1/N) Σ_{k=1}^{N-τ} (w(k) - w̄)(w(k + τ) - w̄)^T
N is the number of data points; R̂_w(0) is the sample variance of w(k)
When R̂_w(τ) is computed, confidence limits should be considered
Sample autocorrelation function:
ρ̂_w(τ) = R̂_w(τ) / R̂_w(0)
Confidence limits for the autocorrelation are derived by examining how variability propagates through the calculations
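These two estimators can be written directly from the formulas above; a sketch in Python, where the white-noise test signal and the ±2/√N limit (a common rule of thumb) are illustration choices:

```python
import numpy as np

def sample_autocov(w, tau):
    """R_hat(tau) = (1/N) * sum_{k=1..N-tau} (w(k)-wbar)(w(k+tau)-wbar)."""
    d = np.asarray(w, float) - np.mean(w)
    n = len(d)
    prod = d @ d if tau == 0 else d[: n - tau] @ d[tau:]
    return prod / n

def sample_autocorr(w, tau):
    return sample_autocov(w, tau) / sample_autocov(w, 0)

rng = np.random.default_rng(3)
w = rng.standard_normal(5000)        # white noise: rho(tau) ~ 0 for tau > 0
conf = 2 / np.sqrt(len(w))           # rule-of-thumb +/- 2/sqrt(N) limits
```

For white noise, the sample autocorrelations at nonzero lags should mostly fall inside the ±2/√N band.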
124 Example 1: Estimated Autocorrelation Plot
Sharp cut-off: moving average
[plot: estimated autocorrelation vs. lag for the moving average disturbance process]
125 Example 2: Estimated Autocorrelation Plot
Gradual decay: autoregressive
[plot: estimated autocorrelation vs. lag for the autoregressive disturbance process]
126 Partial Autocorrelation
It is difficult to identify the order of the AR component from the autocorrelation plot, due to the gradual decay of the AR structure
Q: How do we find the order of the AR component?
A: Partial autocorrelation: compute the autocorrelation between w(k) and w(k + τ) after taking into account the dependence on the values w(k + 1), w(k + 2), ..., w(k + τ - 1)
The partial autocorrelation at lag τ is the autocorrelation between w(k) and w(k + τ) that is not accounted for by lags 1 through τ - 1
The partial autocorrelation of an AR(τ) process is zero at lag τ + 1 and greater
If the sample autocorrelation plot indicates that an AR model is appropriate, the sample partial autocorrelation is examined to identify the order of the AR model
127 How to Use the Partial Autocorrelation
For an autoregressive process of order p, a sharp cut-off will be observed after lag p, beyond which the partial autocorrelations go to zero
No more explicit dependence beyond lag p
The partial autocorrelation plot for moving average processes will exhibit a decay
The autocorrelation and partial autocorrelation behaviours are dual for autoregressive and moving average processes
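One standard way to compute partial autocorrelations from the autocorrelations is the Durbin-Levinson recursion; a sketch, where the AR(1) coefficient 0.7 is an illustration value chosen so the expected cut-off after lag 1 is visible:

```python
import numpy as np

def pacf_from_acf(rho):
    """Durbin-Levinson recursion: partial autocorrelations phi_kk
    from the autocorrelations rho = [rho(1), ..., rho(m)]."""
    m = len(rho)
    pacf = np.zeros(m)
    phi = np.zeros(0)                              # phi_{k-1, 1..k-1}
    for k in range(1, m + 1):
        if k == 1:
            pkk = rho[0]
        else:
            num = rho[k - 1] - phi @ rho[k - 2::-1]
            den = 1.0 - phi @ rho[: k - 1]
            pkk = num / den
        phi = np.append(phi - pkk * phi[::-1], pkk)  # update phi_{k, 1..k}
        pacf[k - 1] = pkk
    return pacf

# AR(1) with coefficient 0.7: rho(k) = 0.7**k, so the PACF is 0.7 at lag 1
# and (up to rounding) zero afterwards -- the sharp cut-off after lag p = 1.
pvals = pacf_from_acf(0.7 ** np.arange(1, 6))
```

In practice the same recursion is applied to the sample autocorrelations, and the lag where the PACF cuts off suggests the AR order.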
128 SysID Example Jong Min Lee
129 Autocorrelation Function (ACF)
Explains the dependency between samples: the correlation between x(k) and x(k + τ)
What can we infer?
Impulse-type ACF: white noise
Sharp cut-off after n lags: MA(n)
Tails off over several lags: AR
Very slow decrease: integrated process, e.g., η(k + 1) = η(k) + e(k + 1)
130 Partial Autocorrelation Function (PACF)
Reveals what cannot be explained by the ACF
Useful for AR(n) processes
What we can infer:
Impulse-type PACF: white noise signal
Sharp cut-off after n lags: AR(n)
Tails off over several lags: MA
131 Identification of Linear I/O Models
[block diagram: e(k) (or η(k)) → H(q) → y_d(k); u(k) → G(q) → y_p(k); y(k) = y_p(k) + y_d(k)]
Given u(k) and y(k), determine a plant (deterministic) model G and a disturbance (noise, stochastic) model H
132 Box-Jenkins Structure
y(k) = [B(q)/F(q)] u(k - nk) + [C(q)/D(q)] e(k)
B(q) = b1 + b2 q^-1 + ... + b_nb q^-(nb-1)
F(q) = 1 + f1 q^-1 + ... + f_nf q^-nf
doc bj
C = 1, D = F: ARX
C = 1, D = 1: OE
133 Output Error (OE) Models
y(k) = [B(q)/F(q)] u(k - nk) + e(k)
Yields the best unbiased estimate of the plant model
Disturbance model is not considered
Useful for identifying the plant model in BJ
oe(iddata, [nb nf nk])
134 OE Models
Assumes that the disturbance (i.e., the prediction errors) is white
The best OE model is one that keeps the correlations between the residuals and the input within a confidence bound
ACF(residuals) need not be white, because the OE assumption may not be true
If ACF(res) is white, then the system possesses the OE structure
If ACF(res) is non-white, build an ARMA model of the residuals to obtain a disturbance model
135 General Procedure for Identification
ARX → ARMAX
OE (+ ARMA on the residuals) → BJ
136 Identification Example
137 Example
[plot: impulse response estimate from u1 to y1 vs. time lags]
Lab 2 example: I/O data, 4000 samples
Unknown process order, time delay, and structure
Detrending: detrend(data, 'constant')
Divide the data set into training and validation sets
cra: impulse response coefficients and time delay (nk = 2)
138 Step Response
>> z = iddata(y, u);
>> step(z)
[plot: step response from u1 to y1]
y starts increasing from t = 2
G(s) ≈ e^(-θs) / (35s + 1)
Discrete-time pole: exp(-(1/35)×1) ≈ 0.97
No imaginary poles: the discrete-time poles are real and positive
The step response is used to compare with that of the identified model
139 Fitting an ARX Model
A(q) y(k) = B(q) u(k - nk) + e(k)
model = arx(data, [na nb nk])
Fits the plant and disturbance models simultaneously
Use the identified time delay for nk: nk = pure delay + ZOH delay (1)
Start with low orders
Relevant functions: arxstruc, selstruc
140 ARX Models
resid(marx112, ztd); resid(marx442, ztd)
[plots for each model: correlation function of the residuals for output y1, and cross-correlation function between input u1 and the residuals from output y1, vs. lag]
141 ARX 442
[plot: measured output and simulated model output, marx442, fit: 89.8%]
[plot: process vs. ARX 442 model step responses]
142 OE for the Plant Model (G_p)
resid(oe112, ztd)
y(k) = [b1/(1 + f1 q^-1)] u(k - 2) + e(k)
[plot: correlation function of the residuals for output y1] OK: we can model the residuals later with disturbance modelling (AR, MA, ...)
[plot: cross-correlation function between input u1 and the residuals from output y1] Not acceptable: u and y are still correlated, so there is further room to improve
143 OE Model: [2 2 2]
If the correlation is not satisfactory, increase the numerator or denominator order
[nb nf nk]: [1 1 2] → [1 2 2], [2 1 2], [2 2 2]
[plot: correlation function of the residuals for output y1] The ACF of the residuals is almost white (not a concern here; however, there may then be no need to construct a disturbance model)
[plot: cross-correlation function between input u1 and the residuals from output y1] Acceptable!
144 OE222 Model
>> present(moe222)
Discrete-time IDPOLY model: y(t) = [B(q)/F(q)]u(t) + e(t)
B(q) = (...) q^-2 + (...) q^-3
F(q) = 1 + (... ± 0.012) q^-1 + (...) q^-2
Gain: 0.15 (G(1))
Poles: ... and 0.392 (>> roots([...]))
145 Comments
Use the training data for checking the residuals: resid
Use the validation data for checking prediction performance: compare
Compare the model's response on the validation data set:
compare(ztd, moe222)
step(ztd, moe222, 'r*-'); legend('process', 'OEmodel')
g = spa(ztd); bode(g, moe222, 'r-')
146 [plots: measured output and simulated model output (moe222, fit: 89.2%); step responses and Bode plots (amplitude and phase vs. frequency) of the process vs. the OE222 model]
147 OK, we have finished modelling the plant (G) using OE
Are we really done?
148 Disturbance Modelling: G d? y d = y y p (residuals from the OE222 model) 1 Plot ACF and PACF of the residuals (y d ) 2 Determine whether the disturbance process (G) is MA, AR, or ARMA 3 Use the MATLAB function ARMAX doc armax If data has no input channels and just one output channel (that is, it is a time series), then orders = [na nc] A ( q) y( k) = C( q) e( k) and armax calculates an ARMA model for the time series 21
149 Identifying G_d
[plots: auto-correlation function and partial-autocorrelation function of the residuals vs. lag]
MA(1) or AR(1)? AR is more general, so I will choose AR(1): [na nc] = [1 0]
>> erroe = pe(ztd, moe222);
>> autocf(erroe.y, 20, 0); pautocf(erroe.y, 20);
>> mdist = armax(erroe.y, [1 0]);
>> present(mdist)
Discrete-time IDPOLY model: A(q)y(t) = e(t)
A(q) = 1 + (...) q^-1
150 The residuals of y_d from G_d
errmdist = pe(mdist, erroe);
figure; [R3, CI3, NR3] = autocf(errmdist.y, 20, 0);
[plot: auto-correlation function of the residuals vs. lag]
151 Putting It Together: The BJ Model
You can refine G and H by putting them together in a BJ model (see the manual)
In this case, the OE and ARMA modelling steps provide an initial estimate for the BJ structure
One can also directly use BJ by specifying the model orders
152 45804 Process Dynamics & Control
Lecture: Linear Quadratic Control, Deterministic Case
Jong Min Lee
Chemical & Biomolecular Engineering, Seoul National University
May 19
153 Outline
Basic problem setup
Deterministic system
Stochastic system
154 Basic Problem Setup
Linear deterministic system:
x(k + 1) = Ax(k) + Bu(k)   (1)
y(k) = Cx(k)   (2)
We consider a time-invariant system for simplicity
For a linear state feedback controller
u(k) = -L(k)x(k)   (3)
the closed-loop response is:
x(k + 1) = (A - BL(k))x(k)
Stability: the state feedback controller (3) stabilizes the system if all the eigenvalues of (A - BL) lie within the unit disk
155 Objective of LQ
A system visits a sequence of states x(0), x(1), ..., x(p); there is a desired sequence of states x*(0), x*(1), ..., x*(p)
Without loss of generality, the desired trajectory x* can be set to the origin
Objective function:
min Σ_{k=0}^{p-1} [x^T(k) Q x(k) + u^T(k) R u(k)] + x^T(p) Q_t x(p)   (4)
Q and R are symmetric positive definite; Q_t is positive semi-definite
Q assigns relative importance to the errors in the various states
R accounts for the cost of implementing input moves
If p = ∞, it is an infinite-horizon problem
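Objective (4) with linear dynamics (1) is solved by the standard backward Riccati recursion. A minimal numerical sketch, in which the system matrices, weights, and horizon are made-up example values:

```python
import numpy as np

# Example data: a discretized double-integrator-like system, identity weights
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); Qt = np.eye(2)
p = 50                                   # horizon length

# Backward Riccati recursion: P(p) = Qt, then for k = p-1, ..., 0
# L(k) = (R + B'P(k+1)B)^{-1} B'P(k+1)A, and the control law is u(k) = -L(k)x(k)
P = Qt
gains = []
for _ in range(p):
    L = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ L
    gains.append(L)
gains.reverse()                          # gains[k] is the feedback gain L(k)

# Closed loop with the first gain: eigenvalues of A - B L(0) inside the unit disk
specrad = np.max(np.abs(np.linalg.eigvals(A - B @ gains[0])))
```

For a long horizon the gains near k = 0 settle to the stabilizing steady-state LQ gain, which ties this recursion to the stability condition on slide 154.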
156 Open-Loop Control vs. Feedback Control
Optimal open-loop control problem:
find the optimal sequence u(0), ..., u(k) for a given distribution of x(0) (i.e., as a function of it)
Optimal feedback control problem:
find the optimal feedback law u(k) = f(x(k)) or u(k) = f(y(k), y(k - 1), ...)
For completely deterministic systems, the two should provide the same performance
State feedback vs. output feedback:
u(k) = f(x(k)): state feedback
u(k) = F(y(k)): output feedback
F would be a dynamic operator in general
More informationIdentification of Linear Systems
Identification of Linear Systems Johan Schoukens http://homepages.vub.ac.be/~jschouk Vrije Universiteit Brussel Department INDI /67 Basic goal Built a parametric model for a linear dynamic system from
More informationIntermediate Process Control CHE576 Lecture Notes # 2
Intermediate Process Control CHE576 Lecture Notes # 2 B. Huang Department of Chemical & Materials Engineering University of Alberta, Edmonton, Alberta, Canada February 4, 2008 2 Chapter 2 Introduction
More informationReal-Time Optimization (RTO)
Real-Time Optimization (RTO) In previous chapters we have emphasized control system performance for disturbance and set-point changes. Now we will be concerned with how the set points are specified. In
More informationMATH4406 (Control Theory) Unit 6: The Linear Quadratic Regulator (LQR) and Model Predictive Control (MPC) Prepared by Yoni Nazarathy, Artem
MATH4406 (Control Theory) Unit 6: The Linear Quadratic Regulator (LQR) and Model Predictive Control (MPC) Prepared by Yoni Nazarathy, Artem Pulemotov, September 12, 2012 Unit Outline Goal 1: Outline linear
More information1 Linear Difference Equations
ARMA Handout Jialin Yu 1 Linear Difference Equations First order systems Let {ε t } t=1 denote an input sequence and {y t} t=1 sequence generated by denote an output y t = φy t 1 + ε t t = 1, 2,... with
More informationCautious Data Driven Fault Detection and Isolation applied to the Wind Turbine Benchmark
Driven Fault Detection and Isolation applied to the Wind Turbine Benchmark Prof. Michel Verhaegen Delft Center for Systems and Control Delft University of Technology the Netherlands November 28, 2011 Prof.
More informationEstimating trends using filters
Estimating trends using filters... contd. 3. Exponential smoothing of data to estimate the trend m[k] ˆm[k] = v[k]+(1 )ˆm[k 1], k =2,, n ˆm[1] = v[1] The choice of has to be fine tuned according to the
More informationPrashant Mhaskar, Nael H. El-Farra & Panagiotis D. Christofides. Department of Chemical Engineering University of California, Los Angeles
HYBRID PREDICTIVE OUTPUT FEEDBACK STABILIZATION OF CONSTRAINED LINEAR SYSTEMS Prashant Mhaskar, Nael H. El-Farra & Panagiotis D. Christofides Department of Chemical Engineering University of California,
More informationLearning Model Predictive Control for Iterative Tasks: A Computationally Efficient Approach for Linear System
Learning Model Predictive Control for Iterative Tasks: A Computationally Efficient Approach for Linear System Ugo Rosolia Francesco Borrelli University of California at Berkeley, Berkeley, CA 94701, USA
More informationf-domain expression for the limit model Combine: 5.12 Approximate Modelling What can be said about H(q, θ) G(q, θ ) H(q, θ ) with
5.2 Approximate Modelling What can be said about if S / M, and even G / G? G(q, ) H(q, ) f-domain expression for the limit model Combine: with ε(t, ) =H(q, ) [y(t) G(q, )u(t)] y(t) =G (q)u(t) v(t) We know
More informationTime domain identification, frequency domain identification. Equivalencies! Differences?
Time domain identification, frequency domain identification. Equivalencies! Differences? J. Schoukens, R. Pintelon, and Y. Rolain Vrije Universiteit Brussel, Department ELEC, Pleinlaan, B5 Brussels, Belgium
More informationIf we want to analyze experimental or simulated data we might encounter the following tasks:
Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction
More informationIn search of the unreachable setpoint
In search of the unreachable setpoint Adventures with Prof. Sten Bay Jørgensen James B. Rawlings Department of Chemical and Biological Engineering June 19, 2009 Seminar Honoring Prof. Sten Bay Jørgensen
More informationELEC4631 s Lecture 2: Dynamic Control Systems 7 March Overview of dynamic control systems
ELEC4631 s Lecture 2: Dynamic Control Systems 7 March 2011 Overview of dynamic control systems Goals of Controller design Autonomous dynamic systems Linear Multi-input multi-output (MIMO) systems Bat flight
More informationEECE Adaptive Control
EECE 574 - Adaptive Control Recursive Identification in Closed-Loop and Adaptive Control Guy Dumont Department of Electrical and Computer Engineering University of British Columbia January 2010 Guy Dumont
More informationShort-Term Load Forecasting Using ARIMA Model For Karnataka State Electrical Load
International Journal of Engineering Research and Development e-issn: 2278-67X, p-issn: 2278-8X, www.ijerd.com Volume 13, Issue 7 (July 217), PP.75-79 Short-Term Load Forecasting Using ARIMA Model For
More informationNumerical Methods for Model Predictive Control. Jing Yang
Numerical Methods for Model Predictive Control Jing Yang Kongens Lyngby February 26, 2008 Technical University of Denmark Informatics and Mathematical Modelling Building 321, DK-2800 Kongens Lyngby, Denmark
More informationD.E. Rivera 1 and M.E. Flores
BEYOND STEP TESTING AND PROCESS REACTION CURVES: INTRODUCING MEANINGFUL SYSTEM IDENTIFICATION CONCEPTS IN THE UNDERGRADUATE CHEMICAL ENGINEERING CURRICULUM D.E. Rivera 1 and M.E. Flores Department of Chemical,
More information4F3 - Predictive Control
4F3 Predictive Control - Lecture 2 p 1/23 4F3 - Predictive Control Lecture 2 - Unconstrained Predictive Control Jan Maciejowski jmm@engcamacuk 4F3 Predictive Control - Lecture 2 p 2/23 References Predictive
More informationState Estimation of Linear and Nonlinear Dynamic Systems
State Estimation of Linear and Nonlinear Dynamic Systems Part I: Linear Systems with Gaussian Noise James B. Rawlings and Fernando V. Lima Department of Chemical and Biological Engineering University of
More informationCHAPTER 2: QUADRATIC PROGRAMMING
CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,
More informationRoss Bettinger, Analytical Consultant, Seattle, WA
ABSTRACT DYNAMIC REGRESSION IN ARIMA MODELING Ross Bettinger, Analytical Consultant, Seattle, WA Box-Jenkins time series models that contain exogenous predictor variables are called dynamic regression
More informationUnivariate ARIMA Models
Univariate ARIMA Models ARIMA Model Building Steps: Identification: Using graphs, statistics, ACFs and PACFs, transformations, etc. to achieve stationary and tentatively identify patterns and model components.
More informationTime Series 2. Robert Almgren. Sept. 21, 2009
Time Series 2 Robert Almgren Sept. 21, 2009 This week we will talk about linear time series models: AR, MA, ARMA, ARIMA, etc. First we will talk about theory and after we will talk about fitting the models
More informationESC794: Special Topics: Model Predictive Control
ESC794: Special Topics: Model Predictive Control Discrete-Time Systems Hanz Richter, Professor Mechanical Engineering Department Cleveland State University Discrete-Time vs. Sampled-Data Systems A continuous-time
More informationExercises - Time series analysis
Descriptive analysis of a time series (1) Estimate the trend of the series of gasoline consumption in Spain using a straight line in the period from 1945 to 1995 and generate forecasts for 24 months. Compare
More informationAnalysis of Discrete-Time Systems
TU Berlin Discrete-Time Control Systems TU Berlin Discrete-Time Control Systems 2 Stability Definitions We define stability first with respect to changes in the initial conditions Analysis of Discrete-Time
More informationIntroduction to system identification
Introduction to system identification Jan Swevers July 2006 0-0 Introduction to system identification 1 Contents of this lecture What is system identification Time vs. frequency domain identification Discrete
More informationLecture 5: Recurrent Neural Networks
1/25 Lecture 5: Recurrent Neural Networks Nima Mohajerin University of Waterloo WAVE Lab nima.mohajerin@uwaterloo.ca July 4, 2017 2/25 Overview 1 Recap 2 RNN Architectures for Learning Long Term Dependencies
More informationOutline. 1 Linear Quadratic Problem. 2 Constraints. 3 Dynamic Programming Solution. 4 The Infinite Horizon LQ Problem.
Model Predictive Control Short Course Regulation James B. Rawlings Michael J. Risbeck Nishith R. Patel Department of Chemical and Biological Engineering Copyright c 217 by James B. Rawlings Outline 1 Linear
More informationControl Lab. Thermal Plant. Chriss Grimholt
Control Lab Thermal Plant Chriss Grimholt Process System Engineering Department of Chemical Engineering Norwegian University of Science and Technology October 3, 23 C. Grimholt (NTNU) Thermal Plant October
More information2 Introduction of Discrete-Time Systems
2 Introduction of Discrete-Time Systems This chapter concerns an important subclass of discrete-time systems, which are the linear and time-invariant systems excited by Gaussian distributed stochastic
More informationCONTROL OF DIGITAL SYSTEMS
AUTOMATIC CONTROL AND SYSTEM THEORY CONTROL OF DIGITAL SYSTEMS Gianluca Palli Dipartimento di Ingegneria dell Energia Elettrica e dell Informazione (DEI) Università di Bologna Email: gianluca.palli@unibo.it
More informationLinear Systems. Manfred Morari Melanie Zeilinger. Institut für Automatik, ETH Zürich Institute for Dynamic Systems and Control, ETH Zürich
Linear Systems Manfred Morari Melanie Zeilinger Institut für Automatik, ETH Zürich Institute for Dynamic Systems and Control, ETH Zürich Spring Semester 2016 Linear Systems M. Morari, M. Zeilinger - Spring
More informationNonlinear Identification of Backlash in Robot Transmissions
Nonlinear Identification of Backlash in Robot Transmissions G. Hovland, S. Hanssen, S. Moberg, T. Brogårdh, S. Gunnarsson, M. Isaksson ABB Corporate Research, Control Systems Group, Switzerland ABB Automation
More informationControl Systems I. Lecture 6: Poles and Zeros. Readings: Emilio Frazzoli. Institute for Dynamic Systems and Control D-MAVT ETH Zürich
Control Systems I Lecture 6: Poles and Zeros Readings: Emilio Frazzoli Institute for Dynamic Systems and Control D-MAVT ETH Zürich October 27, 2017 E. Frazzoli (ETH) Lecture 6: Control Systems I 27/10/2017
More informationCHAPTER 5 ROBUSTNESS ANALYSIS OF THE CONTROLLER
114 CHAPTER 5 ROBUSTNESS ANALYSIS OF THE CONTROLLER 5.1 INTRODUCTION Robust control is a branch of control theory that explicitly deals with uncertainty in its approach to controller design. It also refers
More information6.435, System Identification
SET 6 System Identification 6.435 Parametrized model structures One-step predictor Identifiability Munther A. Dahleh 1 Models of LTI Systems A complete model u = input y = output e = noise (with PDF).
More informationParis'09 ECCI Eduardo F. Camacho MPC Constraints 2. Paris'09 ECCI Eduardo F. Camacho MPC Constraints 4
Outline Constrained MPC Eduardo F. Camacho Univ. of Seville. Constraints in Process Control. Constraints and MPC 3. Formulation of Constrained MPC 4. Illustrative Examples 5. Feasibility. Constraint Management
More information