NEW DEVELOPMENTS IN PREDICTIVE CONTROL FOR NONLINEAR SYSTEMS


M. J. Grimble, A. Ordys, A. Dutka, P. Majecki
University of Strathclyde, Glasgow, Scotland, U.K.

Introduction
Model Predictive Control (MPC) is one of the most popular advanced control techniques. The MPC algorithms are well established for linear systems. Recent developments have extended this methodology to the control of nonlinear systems. Techniques developed at the University of Strathclyde are presented.

Linear Quadratic Gaussian Predictive Control (LQGPC)

Preview Control
The algorithm was first presented by Tomizuka M. and D. E. Whitney. The algorithm uses the LQG approach to optimisation and a stochastic model for the reference signal beyond the preview horizon.
State-space model: x(t+1) = A x(t) + B u(t) + G w(t),  y(t) = C x(t) + v(t)
Performance index: J(t) = E{ Σ_{j=0}^{N} [ (y(t+j) - r(t+j))^T (y(t+j) - r(t+j)) + λ u(t+j)^T u(t+j) ] }
Reference generator: R_N(t+1) = Θ_N R_N(t) + η_N ξ(t+N)

Preview Control
The plant state and the reference-generator state are stacked into an augmented state χ(t) = [x(t); R_N(t)]:
χ(t+1) = [ A  0 ; 0  Θ_N ] χ(t) + [ B ; 0 ] u(t) + [ G  0 ; 0  η_N ] [ w(t) ; ξ(t+N) ]
[ y(t) ; R_N(t) ] = [ C  0 ; 0  I ] χ(t) + [ v(t) ; 0 ]
A standard (infinite horizon) LQG approach can now be used.
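
As an illustration of the augmented LQG step above, the following sketch stacks a small plant with a shift-register reference generator and computes the infinite-horizon LQ gain for the augmented state. All numerical values, the shift-register form of Θ_N, and the weighting choice are illustrative assumptions, not the data used in the presentation.

import numpy as np
from scipy.linalg import solve_discrete_are

# Plant: x(t+1) = A x(t) + B u(t) + G w(t),  y(t) = C x(t) + v(t)
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])

# Reference generator over a preview horizon N: R_N(t+1) = Theta_N R_N(t) + eta_N xi(t+N)
# (here a simple shift register that moves the previewed reference samples forward).
N = 5
Theta_N = np.eye(N, k=1)
eta_N = np.zeros((N, 1)); eta_N[-1, 0] = 1.0

# Augmented state chi(t) = [x(t); R_N(t)]
A_aug = np.block([[A, np.zeros((2, N))],
                  [np.zeros((N, 2)), Theta_N]])
B_aug = np.vstack([B, np.zeros((N, 1))])

# Penalise the tracking error (C x minus the first previewed reference sample) and the control.
E = np.hstack([C, -np.eye(1, N)])
Q = E.T @ E
R = 0.1 * np.eye(1)

# Standard infinite-horizon LQ solution for the augmented system.
P = solve_discrete_are(A_aug, B_aug, Q, R)
K = np.linalg.solve(R + B_aug.T @ P @ B_aug, B_aug.T @ P @ A_aug)
print("Augmented LQ gain:", K)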

Linear Quadratic Gaussian Predictive Control (LQGPC)
The standard predictive control performance index:
J_t = [Y(t) - R(t)]^T Λ_e [Y(t) - R(t)] + U(t)^T Λ_u U(t)
Starting with the state-space model x(t+1) = A x(t) + B u(t) + G w(t), y(t) = D x(t) + v(t), re-formulate it to be compatible with the above index:
state equation: x(t+1) = A x(t) + β_N U(t) + Γ_N W(t)
output equation: Y(t) = Φ_N A x(t) + S_N U(t) + S̃_N W(t) + V(t)
The control vector U(t) contains all control actions within the horizon N into the future.

Linear Quadratic Gaussian Predictive Control (LQGPC)
Define the LQG-type performance index (finite or infinite horizon) as a sum of predictive performance indices:
J_DPC = E{ Σ_{l=t}^{t+T} J_GPC(l) }   or   J_DPC = E{ lim_{T→∞} (1/T) Σ_{l=t}^{t+T} J_GPC(l) }
Substituting the criterion from the previous slide gives:
J_GPC(l) = [Y(l) - R(l)]^T Λ_e [Y(l) - R(l)] + U(l)^T Λ_u U(l)
The solution can be obtained through Dynamic Programming, with two Riccati equations involved. The reference generator is used in a similar way as in Preview Control.

Solution of Quadratic Gaussian problem for nonlinear systems and Non-Linear Quadratic Gaussian Predictive Control

NLQG and NLQGPC
NLQG: an extension of the SDRE (State Dependent Riccati Equation) method.
The NLQGPC algorithm: a predictive extension of SDRE.

System representation
The model: x_{t+1} = f(x_t) + g(x_t) u_t + G ξ_t,  y_t = h(x_t)
Linear State Dependent form of the model: x_{t+1} = A(x_t) x_t + B(x_t) u_t + G ξ_t,  y_t = C(x_t) x_t
Assumption on the parameterisation of the model: the pair (A(x_t), B(x_t)) is controllable for all x, u.
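
A minimal sketch of this Linear State Dependent (SDC) factorisation for a simple pendulum-like discrete model; the system and the particular A(x), B(x) factorisation shown are illustrative assumptions, not the example used later in the presentation.

import numpy as np

dt = 0.05  # sampling interval for the illustrative Euler-discretised model

def A_of_x(x):
    # sin(x1) is rewritten as (sin(x1)/x1) * x1, so the nonlinearity is absorbed into A(x).
    sinc = np.sinc(x[0] / np.pi)          # numpy's sinc(z) = sin(pi z)/(pi z), so this is sin(x1)/x1
    return np.array([[1.0, dt],
                     [-dt * sinc, 1.0 - 0.1 * dt]])

def B_of_x(x):
    return np.array([[0.0],
                     [dt]])

def step(x, u):
    # x(t+1) = A(x(t)) x(t) + B(x(t)) u(t), matching the state-dependent form above.
    return A_of_x(x) @ x + (B_of_x(x) @ u).ravel()

x = np.array([0.5, 0.0])
print(step(x, np.array([0.1])))

The factorisation is not unique; different choices of A(x) lead to different Riccati-based designs, which is one reason the controllability assumption above is stated for the chosen parameterisation.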

System representation
Simplified notation: A_t = A(x_t), B_t = B(x_t), C_t = C(x_t)
Prediction of the trajectory: u_t, u_{t+1}, ..., u_{t+N} and x_{t+1}, x_{t+2}, ..., x_{t+N}
An assumption is made on the trajectory after the prediction horizon.

The NLQG algorithm
Estimate (or measure) the state x(t). Use the previous feedback gain K(t-1) to calculate a prediction of the current control u(t): u(t) = K(t-1) x(t). Use the current control prediction u(t) and the model re-calculated at time instant t (with the state x(t)) to obtain the future state prediction x(t+1).

The NLQG algorithm
The state prediction x(t+1), together with the state feedback gain K(t-1) from the previous iteration of the algorithm, is used to calculate the future control prediction u(t+1): u(t+1) = K(t-1) x(t+1). The model is once again re-calculated using the future state prediction and stored, and the sequence is repeated n times.

The NLQG algorithm
Use the model prediction for time instant t+n and solve the Algebraic Riccati Equation. The solution at time instant t+n is obtained: P(t+n, ∞).

The NLQG algorithm
Use P(t+n) = P(t+n, ∞) as a boundary condition for the iterations of the Riccati Difference Equation, and use the appropriate prediction of the model throughout the iterations of the Riccati equation. Use P(k) to calculate the feedback control gain and calculate the current control.

The NLQG algorithm
Use P(k) to calculate the feedback control gain and calculate the current control:
u(t) = K(t) x(t),  K(t) = function(A(t), B(t), P(t+1))
The calculated current control is used as the plant input signal.
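
The steps above can be summarised in a short sketch: predict the trajectory with the previous gain, freeze the model at t+n, solve the ARE for the boundary condition, iterate the Riccati Difference Equation backwards, and compute the current gain. This is a minimal illustration assuming the SDC matrices A(x), B(x) and constant weights Qc, Rc are available and the cross-weighting Mc is zero; it is not the authors' implementation.

import numpy as np
from scipy.linalg import solve_discrete_are

def nlqg_control(x, K_prev, A_of_x, B_of_x, Qc, Rc, n):
    # 1) Predict the state trajectory over n steps using the previous gain (u = -K x convention).
    A_seq, B_seq = [], []
    x_pred = x.copy()
    for _ in range(n):
        A_k, B_k = A_of_x(x_pred), B_of_x(x_pred)
        A_seq.append(A_k); B_seq.append(B_k)
        x_pred = A_k @ x_pred + B_k @ (-K_prev @ x_pred)
    # 2) Freeze the model at t+n and solve the Algebraic Riccati Equation: boundary condition P(t+n).
    A_n, B_n = A_of_x(x_pred), B_of_x(x_pred)
    P = solve_discrete_are(A_n, B_n, Qc, Rc)
    # 3) Iterate the Riccati Difference Equation backwards over the predicted models
    #    at k = t+n-1, ..., t+1, ending with P(t+1).
    for A_k, B_k in zip(reversed(A_seq[1:]), reversed(B_seq[1:])):
        S = Rc + B_k.T @ P @ B_k
        P = Qc + A_k.T @ P @ A_k - A_k.T @ P @ B_k @ np.linalg.solve(S, B_k.T @ P @ A_k)
    # 4) Current feedback gain K(t) = (Rc + B^T P(t+1) B)^{-1} B^T P(t+1) A and current control.
    A_0, B_0 = A_seq[0], B_seq[0]
    K = np.linalg.solve(Rc + B_0.T @ P @ B_0, B_0.T @ P @ A_0)
    return -K @ x, K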

The NLQG cost function
The following cost function is minimised:
J_t = E{ lim_{T_h→∞} (1/(2 T_h)) Σ_{k=t}^{t+T_h} [ x^T(k) Q_c(k) x(k) + u^T(k) R_c u(k) + u^T(k) M_c^T x(k) + x^T(k) M_c u(k) ] }
The cost function may be split in two parts:
J_t = E{ lim_{T_h→∞} (1/(2 T_h)) ( J_t^finite + J_t^infinite ) }

The NLQG cost function
The first part is an infinite cost function:
J_t^infinite = Σ_{k=t+n+1}^{t+T_h} [ x^T(k) Q_c(t+n) x(k) + u^T(k) R_c u(k) + u^T(k) M_c^T x(k) + x^T(k) M_c u(k) ]
and this part is minimised by the Algebraic Riccati Equation.
The second part is a finite cost function:
J_t^finite = Σ_{k=t}^{t+n} [ x^T(k) Q_c(k) x(k) + u^T(k) R_c u(k) + u^T(k) M_c^T x(k) + x^T(k) M_c u(k) ]
and is minimised by the Difference Riccati Equation, with the boundary condition given by the solution of the ARE.

Remarks
To obtain accurate results it should be assumed that the system will remain time invariant after the t+n time instant. Therefore, the closer the real behaviour of the system is to this assumption, the more accurate the results are.

Example

Example
Plant model: a second-order nonlinear state-space model x_p(t+1) = f_p(x_p(t)) + B_p u(t) + G_p ξ_p(t), containing an atan(x_{p,2}) nonlinearity, with output y(t) = h_p x_p(t).
Reference model: a linear state-space model x_r(t+1) = A_r x_r(t) + G_r ξ_r(t), with reference output r(t) = h_r x_r(t).

Results.6.4.2 output SDRE Proposed algorithm Constant gain feedback.8.6.4.2 -.2 2 3 4 5 6 7

Results
Control signals (figure): SDRE, the proposed algorithm, and constant-gain feedback are compared.

The NLQGPC control law derivation
Re-written state equation: x_{t+1} = A_t x_t + β_t U_{t,N} + G ξ_t, where β_t = [B_t, 0, 0, ..., 0]
State-space model with prediction-output equation:
x_{t+1} = A_t x_t + β_t U_{t,N} + G ξ_t
Y_{t,N} = Φ_{t,N} A_t x_t + S_{t,N} U_{t,N} + G_{t,N} Ξ_{t,N}
Note that the state equation is identical to the state equation of the controlled system.

The NLQGPC control law derivation
Reference signal model:
X^R_{t+1} = A^R X^R_t + G^R ξ^R_t
R_{t,N} = C^R X^R_t
Augmented system:
χ_{t+1} = Θ_t χ_t + Ω_t U_{t,N} + Γ_t ξ_t
Ψ_{t,N} = ϒ_t χ_t + S_{t,N} U_{t,N} + G_{t,N} Ξ_{t,N}
with
χ_t = [x_t ; X^R_t],  Θ_t = diag(A_t, A^R),  ξ_t (augmented) = [ξ_t ; ξ^R_t],  Ω_t = [β_t ; 0],  Γ_t = diag(G, G^R),
Ψ_{t,N} = Y_{t,N} - R_{t,N},  ϒ_t = [Φ_{t,N} A_t, -C^R]

The NLQGPC control law derivation
The cost function:
J_t = E{ lim_{T_h→∞} (1/(2 T_h)) Σ_{k=t}^{t+T_h} [ Σ_{i=1}^{N} (y_{k+i} - r_{k+i})^T Λ_E (y_{k+i} - r_{k+i}) + Σ_{i=1}^{N} u_{k+i}^T Λ_U u_{k+i} ] }
Introduce the notation:
Q_k = ϒ_k^T Λ_E ϒ_k,  M_k = ϒ_k^T Λ_E S_{k,N},  R_k = S_{k,N}^T Λ_E S_{k,N} + Λ_U
Final form:
J_t = E{ lim_{T_h→∞} (1/(2 T_h)) Σ_{k=t}^{t+T_h} [χ_k ; U_{k,N}]^T [ Q_k  M_k ; M_k^T  R_k ] [χ_k ; U_{k,N}] }

NLQGPC control law derivation - Algorithm 1
The control law minimising the cost function:
U_{t,N} = -(Ω_t^T P_{t,∞} Ω_t + R_t)^{-1} (Ω_t^T P_{t,∞} Θ_t + M_t^T) χ_t
where P_{t,∞} is the solution of the Algebraic Riccati Equation
P_{t,∞} = Q_t + Θ_t^T P_{t,∞} Θ_t - (M_t + Θ_t^T P_{t,∞} Ω_t)(R_t + Ω_t^T P_{t,∞} Ω_t)^{-1}(M_t^T + Ω_t^T P_{t,∞} Θ_t)
This Algebraic Riccati Equation contains the state-dependent matrices Q_t, M_t, R_t calculated at time t, which contain the prediction of future system behaviour.

NLQGPC control law derivation - Algorithm 2
A more accurate solution of the minimisation problem:
J_t = E{ lim_{T_h→∞} (1/(2 T_h)) [ Σ_{k=t}^{t+N-1} [χ_k ; U_{k,N}]^T [ Q_k  M_k ; M_k^T  R_k ] [χ_k ; U_{k,N}] + Σ_{k=t+N}^{t+T_h} [χ_k ; U_{k,N}]^T [ Q_k  M_k ; M_k^T  R_k ] [χ_k ; U_{k,N}] ] }
The cost function is split in two parts: a finite horizon part and an infinite horizon part.

NLQGPC control law derivation - Algorithm 2
Difference Riccati Equation:
P_k = Q_k + Θ_k^T P_{k+1} Θ_k - (M_k + Θ_k^T P_{k+1} Ω_k)(R_k + Ω_k^T P_{k+1} Ω_k)^{-1}(M_k^T + Ω_k^T P_{k+1} Θ_k)
with boundary condition P_{t+N} = P_{t+N,∞}, iterated backwards for k = t+N-1, t+N-2, ..., t.
The control vector minimising the cost function:
U_{t,N} = -(Ω_t^T P_{t+1} Ω_t + R_t)^{-1} (Ω_t^T P_{t+1} Θ_t + M_t^T) χ_t
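
A sketch of the backward recursion in Algorithm 2, assuming the augmented prediction matrices Θ_k, Ω_k and the weights Q_k, M_k, R_k have already been built for k = t, ..., t+N-1 and the ARE solution P_{t+N} is available. The coupling term and indexing follow the equations above; this is an illustration of the recursion only, not the authors' implementation.

import numpy as np

def nlqgpc_alg2(chi_t, Theta_seq, Omega_seq, Q_seq, M_seq, R_seq, P_boundary):
    # Sequences are ordered k = t, t+1, ..., t+N-1; P_boundary is P(t+N) from the ARE.
    P = P_boundary
    # Iterate the Riccati Difference Equation backwards for k = t+N-1, ..., t+1, ending with P(t+1).
    for Theta, Omega, Q, M, R in zip(reversed(Theta_seq[1:]), reversed(Omega_seq[1:]),
                                     reversed(Q_seq[1:]), reversed(M_seq[1:]), reversed(R_seq[1:])):
        S = R + Omega.T @ P @ Omega
        L = M + Theta.T @ P @ Omega
        P = Q + Theta.T @ P @ Theta - L @ np.linalg.solve(S, L.T)
    # Stacked control vector minimising the cost, using P(t+1) and the model at time t.
    Theta_t, Omega_t, M_t, R_t = Theta_seq[0], Omega_seq[0], M_seq[0], R_seq[0]
    U_tN = -np.linalg.solve(R_t + Omega_t.T @ P @ Omega_t,
                            Omega_t.T @ P @ Theta_t + M_t.T) @ chi_t
    return U_tN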

Example
The model is given by nonlinear state-space equations in the state ζ_t, with input υ_t, process noise ξ_t (scaled by g) and output y_t = ζ_1(t); the dynamics contain a sin(·) nonlinearity and a cubic term.
The model is then re-arranged into the Linear State Dependent form ζ_{t+1} = A(ζ_t) ζ_t + B(ζ_t) υ_t + G ξ_t, y_t = C ζ_t.

Example
The system is controllable, since the state-dependent controllability matrix [B(ζ), A(ζ) B(ζ)] has rank 2 for all ζ.
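
A quick numerical check of this state-dependent controllability condition for a second-order model; the A and B matrices below are illustrative placeholders for whatever SDC factorisation is in use.

import numpy as np

def sdc_controllable(A, B):
    # rank [B, A B] for a second-order state-dependent pair (A(zeta), B(zeta)).
    ctrb = np.hstack([B, A @ B])
    return np.linalg.matrix_rank(ctrb) == A.shape[0]

A = np.array([[1.0, 0.3],
              [-0.2, 1.0]])
B = np.array([[0.0],
              [1.0]])
print(sdc_controllable(A, B))   # True for this illustrative pair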

Example
The step response for the two NLQGPC algorithms is compared with SDRE at a fixed process-noise level g (figure). Reported costs for the two setpoint levels: Alg. 2: J = 2.578 and J = 23.6322; Alg. 1: J = 2.524 and J = 24.597; SDRE: J = 2.5799 and J = 27.29.

Example
Noise rejection is now compared for two levels of process noise g (figures). First noise level: Alg. 2: J = 23.562, Alg. 1: J = 23.9984, SDRE: J = 26.944. Second noise level: Alg. 2: J = 22.463, Alg. 1: J = 22.852, SDRE: J = 25.7435.

Advantages & Disadvantages of NLQGPC
Advantages:
1. Controls based on solutions to the NLQGPC problem have been shown to offer high performance.
2. Less computational burden than other non-linear predictive control techniques.
Disadvantages:
1. Since NLQGPC utilizes the Riccati equation, it is an unconstrained predictive control technique.
2. Like SDRE, NLQGPC does not guarantee closed-loop global stability.

Dealing with constraints in NLQGPC
The input constraints -α ≤ u ≤ α can be approximated by means of smooth limiting functions, and then included in the dynamics of the plant in a state-dependent state-space form.
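
One simple way to realise the smooth limiting idea is sketched below: the hard bound on u is replaced by a smooth saturation (a tanh is used here purely as an illustrative choice) so that the limited input becomes part of the state-dependent plant model seen by the unconstrained NLQGPC design.

import numpy as np

def smooth_sat(u, alpha):
    # Smooth, differentiable approximation of clipping u to the interval [-alpha, alpha].
    return alpha * np.tanh(u / alpha)

def plant_step_with_limit(A, B, x, u, alpha):
    # The smooth saturation is absorbed into the plant dynamics, so the controller
    # itself remains an unconstrained Riccati-based design.
    return A @ x + B @ np.atleast_1d(smooth_sat(u, alpha))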

Improving stability via Satisficing
Satisficing is based on a point-wise cost/benefit comparison of an action. The benefits are given by the selectability function P_s(u,x), while the costs are given by the rejectability function P_r(u,x). The satisficing set contains those options for which selectability exceeds rejectability, i.e.
S(x,b) = { u : P_s(u,x) ≥ b P_r(u,x) }

CLF-Based Satisficing Technique
The selectability criterion is defined as P_s(u,x) = -V_x (f + g u), the rate of decrease of the control Lyapunov function V along the dynamics.
The rejectability criterion is defined in terms of the running cost l(x) and a quadratic control penalty.
With b = 1, the satisficing set is S(x,1) = { u : -V_x (f + g u) ≥ P_r(u,x) }.

Augmenting NLQGPC with Satisficing By projecting the NLQGPC controller point-wise onto the satisficing set, the good properties of the NLQGPC approach are combined with the analytical properties of satisficing.
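
A rough sketch of this point-wise projection, assuming selectability and rejectability are supplied as functions Ps(u, x) and Pr(u, x) (for example the CLF-based choices of the previous slide); the nearest-point projection via a small constrained optimisation is one illustrative realisation, not necessarily the authors' construction.

import numpy as np
from scipy.optimize import minimize

def project_onto_satisficing(u_nlqgpc, x, Ps, Pr, b=1.0):
    # If the NLQGPC control is already satisficing, keep it.
    if Ps(u_nlqgpc, x) >= b * Pr(u_nlqgpc, x):
        return u_nlqgpc
    # Otherwise return the closest control (in the Euclidean sense) inside
    # the satisficing set S(x, b) = {u : Ps(u, x) >= b * Pr(u, x)}.
    cons = {"type": "ineq", "fun": lambda u: Ps(u, x) - b * Pr(u, x)}
    res = minimize(lambda u: np.sum((u - u_nlqgpc) ** 2), u_nlqgpc, constraints=cons)
    return res.x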

Example: Control of F-8 aircraft
The non-linear dynamical model of the F-8 fighter aircraft, with x_1 the angle of attack (rad), u the elevator deflection, and |u| ≤ 0.5236 rad:
x_1' = -0.877 x_1 + x_3 - 0.088 x_1 x_3 + 0.47 x_1^2 - 0.019 x_2^2 - x_1^2 x_3 + 3.846 x_1^3 - 0.215 u + 0.28 x_1^2 u + 0.47 x_1 u^2 + 0.63 u^3
x_2' = x_3
x_3' = -4.208 x_1 - 0.396 x_3 - 0.47 x_1^2 - 3.564 x_1^3 - 20.967 u + 6.265 x_1^2 u + 46 x_1 u^2 + 61.4 u^3

Angle of Attack (rad) versus time (figure): blue lines: unconstrained NLQGPC; black lines: constrained NLQGPC; magenta lines: constrained NLQGPC with guarantee of global asymptotic stability.

Elevator Deflection u (rad) versus time (figure), showing the input constraint |u| ≤ 0.5236 rad.

NLQGPC: high performance; deals with input constraints; low computational burden; guarantee of robustness & asymptotic stability.

Non-Linear Generalized Minimum Variance Control

Contents
Introduction; nonlinear GMV control problem and solution; relationship to the Smith Predictor; incorporating future information: feedforward and tracking; simulation example.

Introduction
1969: Åström introduces the Minimum Variance (MV) controller, assuming a linear minimum-phase plant. Successful applications in the pulp and paper industry.
1970s: Clarke and Hastings-James modify the MV control law by adding a control costing term. This is termed a Generalized Minimum Variance (GMV) control law and is the basis for their later self-tuning controller.
The GMV control law has similar characteristics to LQG design in some cases and is much simpler to implement. However, when the control weighting tends to zero the control law reverts to the initial algorithm of Åström, which is unstable for non-minimum phase processes.

Introduction
Aim: introduce a GMV controller for nonlinear, multivariable, possibly time-varying processes.
The structure of the system is defined so that a simple solution is obtained. When the system is linear, the results revert to those for the linear GMV controller.
There is some loss of generality in assuming that the reference and disturbance models are represented by linear subsystems. However, the plant model can be in a very general nonlinear operator form, which might involve state-space models, transfer operators, neural networks or even nonlinear function look-up tables.

Nonlinear system description
Generalized output: φ = P_c e + F_c u, with error weighting P_c, control weighting F_c, disturbance model W_d, reference model W_r, controller C and nonlinear plant W (block diagram).
Nonlinear plant model: (W u)(t) = z^{-k} (W_k u)(t)
Linear disturbance model: W_d = A^{-1} C_d
Linear reference model: W_r = A^{-1} E_r

Plant model
The nonlinear plant model can be given in a very general form, e.g. a state-space formulation, a neural network / neuro-fuzzy model, a look-up table, or Fortran/C code.
It can include both linear and nonlinear components, e.g. a Hammerstein model: a static input nonlinearity followed by a linear block and a transport delay z^{-k} (see the sketch below).
All that is needed is the ability to obtain the plant output for a given input signal.
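
A tiny Hammerstein-type plant of the kind mentioned above, written as a sketch: a static input nonlinearity followed by first-order linear dynamics and a k-step transport delay. The tanh nonlinearity and the numerical values are illustrative assumptions; the only property the NGMV design relies on is that the plant output can be computed for a given input.

import numpy as np
from collections import deque

class HammersteinPlant:
    def __init__(self, a=0.8, b=0.2, k=3):
        self.a, self.b = a, b                     # first-order linear dynamics: y <- a*y + b*v
        self.delay = deque([0.0] * k, maxlen=k)   # z^{-k} transport delay
        self.y = 0.0

    def step(self, u):
        v = np.tanh(u)                            # static input nonlinearity
        v_delayed = self.delay[0]                 # input applied k samples ago
        self.delay.append(v)
        self.y = self.a * self.y + self.b * v_delayed
        return self.y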

NGMV problem formulation
To minimize the variance of the generalized output φ(t):
J_NGMV = E[ φ(t)^2 ]
with φ(t) = (P_c e)(t) + (F_c u)(t), where P_c (with numerator P_cn and denominator P_cd) is a linear error weighting and (F_c u)(t) = z^{-k} (F_ck u)(t) is a control weighting (possibly nonlinear).
The control weighting is assumed invertible and potentially nonlinear, to compensate for plant nonlinearities in appropriate cases. The weighting selection is restricted by closed-loop stability.

NGMV problem solution
The approach is similar to the linear case:
φ(t) = P_c ( z^{-k} (W_k u)(t) + Y_f ε(t) ) + (F_c u)(t)
     = z^{-k} ((F_ck + P_c W_k) u)(t) + P_c Y_f ε(t)
     = F ε(t) + ((F_ck + P_c W_k) u)(t-k) + (R ε)(t-k)
Spectral factorization: Y_f Y_f* = W_d W_d* + W_r W_r*
Diophantine equation: P_c Y_f = F + z^{-k} R
ε(t) is white noise (a sequence of independent random variables), so the first term is statistically independent of the remaining terms.
Optimal control: u_NGMV(t) = -((F_ck + P_c W_k)^{-1} R ε)(t)
which requires a stable, causal nonlinear operator inverse.

Controller implementation
u_NGMV(t) = -[ (F_ck + F Y_f^{-1} W_k)^{-1} R Y_f^{-1} e ](t)
Block diagram: the error e is filtered by the linear block R Y_f^{-1}, the control is generated through F_ck^{-1}, and the internal feedback path contains the linear block F Y_f^{-1} together with the delay-free plant model W_k; the disturbance d acts on the plant W, which produces the output y.

Existence of a Stable Operator Inverse
Necessary condition for optimality: the operator (P_c W_k + F_ck) must have a stable inverse. For linear systems this means the operator must be strictly minimum-phase.
To show this is satisfied for a very wide class of systems, consider the case where F_ck is linear and negative, so that F_ck = -F_k. Then
(P_c W_k - F_k) u = F_k (F_k^{-1} P_c W_k - I) u
which involves the return-difference operator for a feedback system with a delay-free plant W_k and controller K_c = F_k^{-1} P_c.

PID-based initial design
Consider the delay-free plant W_k and assume a PID controller K_PID exists that stabilizes the closed-loop system. Then a starting point for the weighting choice that will ensure the operator (P_c W_k + F_ck) is stably invertible is
P_c = K_PID,  F_k = 1
To demonstrate that this selection is reasonable, consider the scalar case and let the controller be
K_c(z^{-1}) = k_p + k_i / (1 - z^{-1}) + k_d (1 - z^{-1}) = [ (k_p + k_i + k_d) - (k_p + 2 k_d) z^{-1} + k_d z^{-2} ] / (1 - z^{-1})
Assume the PID gains are positive numbers, with a small derivative gain. Then it is simple to confirm that, with F_k = 1, the P_cn term is minimum phase and has real zeros.
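
A quick numerical check of this claim under the stated assumptions (positive gains, small derivative gain); the gain values below are arbitrary test values, not taken from the presentation.

import numpy as np

kp, ki, kd = 2.0, 0.5, 0.05
# Numerator of the discrete PID above: Pcn(z^-1) = (kp+ki+kd) - (kp+2*kd) z^-1 + kd z^-2
num = [kp + ki + kd, -(kp + 2.0 * kd), kd]
zeros = np.roots(num)                 # zeros in the z-plane
print("zeros:", zeros, "| minimum phase:", bool(np.all(np.abs(zeros) < 1.0)))

For these values the zeros are real and well inside the unit circle, consistent with the claim on this slide.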

Relationship to the Smith Predictor
The optimal controller can be expressed in a similar form to that of a Smith Predictor. This provides a new nonlinear version of the Smith Predictor.
Block diagram: the compensator contains the linear blocks (A P_cd)^{-1} and Y_f^{-1} together with F_ck^{-1}, and an internal plant model (the delay-free model W_k followed by the delay D_k) is fed back, as in a Smith Predictor.

Smith Predictor form of NGMV controller
The system may be redrawn and the compensator rearranged as shown in the slide. This structure is essential if P_c includes an integrator.
Block diagram: the rearranged compensator contains the blocks A^{-1} G Y_f^{-1}, P_cd, P_cn and F_ck^{-1}, with the internal model W_k and the delay D_k in the local feedback path.

Comments
This last structure is intuitively reasonable. With no plant-model mismatch, the control is not due to feedback but involves an open-loop stable compensator.
The nonlinear inner loop has the weightings F_ck and P_c acting like an inner-loop controller. If the weightings are chosen to be of the usual form, this will represent a filtered PID controller. Such a choice of weightings is only a starting point, since it makes stability easier to achieve. However, the control weighting can have an additional lead term, and the high-frequency characteristics of the optimal controller will then have a more realistic roll-off.
Stability: under the given assumptions the resulting Smith system is stable. This follows because the plant is stable, the inner loop is stable and there are only stable terms in the input block.

Feedback, Feedforward and Tracking Control
Future reference information can be incorporated. The generalized output is φ(t) = (P_c e)(t) + (F_c u)(t).
Block diagram: setpoint and reference model W_r, error weighting P_c, control weighting F_c, controllers C_1 and C_2, nonlinear plant W, measurable and un-measurable disturbance models W_d, a white-noise driven disturbance model W_w, a scaling block and feedback gain/dynamics H_f.

Feedback, Tracking and Feedforward Control Signal Generation
Block diagram: the total control signal is generated by separate control modules driven by the measured disturbance (feedforward), the reference r (tracking, using future reference information), and the feedback error e; the modules are combined through F_ck^{-1}, with the delay-free plant model W_k and the gain/dynamics H_f in the feedback path. The plant, driven by the total disturbance d, produces the output y.

Simulated example
A 2-by-2 model is given in nonlinear state-space form, with a squared-state term in one state equation and an exponential nonlinearity in the other, and output y(t) = x(t).
Both outputs are followed by a transport delay of k = 6 samples, so the time-delay matrix is D_k = diag(z^{-6}, z^{-6}).
Models: a linear reference model W_r, a measurable-disturbance model W_d (including the z^{-6} delay), and an un-measurable disturbance model W_d given by a first-order linear transfer function.

Transient responses
Figure: outputs 1 and 2 and controls 1 and 2 for setpoint changes, comparing NGMV, NGMVFF (with feedforward action) and NGMVFFTR (with future reference information incorporated).

Stochastic performance
Figure: outputs and controls under stochastic disturbances for NGMV, NGMVFF and NGMVFFTR.
Controller       Var[e]   Var[u]   Var[φ]
NGMV FB          .6       8.43     2.45
NGMV FB+FF       .5       8.4      .82
NGMV FB+FF+TR    .48      8.37     .27

Concluding Remarks
State-dependent models give a useful structure for nonlinear predictive controllers.
NGMV has much potential for development, with multi-step predictive control an obvious extension.
The key to success in nonlinear control is to show that it works and is practical on real processes, which is why NGMV seems to have great potential.