Course on Model Predictive Control
Part II: Linear MPC design
Gabriele Pannocchia
Department of Chemical Engineering, University of Pisa, Italy
Email: g.pannocchia@diccism.unipi.it
Facoltà di Ingegneria, Pisa. July 16th, 2012, Aula Pacinotti
G. Pannocchia, Course on Model Predictive Control. Part II: Linear MPC theory
Outline
1. Estimator module design for offset-free tracking
2. Steady-state optimization module design
3. Dynamic optimization module design
4. Closed-loop implementation and receding horizon principle
5. Quick overview of numerical optimization
Estimator module
[Block diagram: the MPC controller comprises three modules, Estimator, Steady-state Optimization, and Dynamic Optimization, in closed loop with the Process. The Estimator receives u(k) and y(k) and produces x̂(k), d̂(k); the Steady-state Optimization receives d̂(k) and produces the targets x_s(k), u_s(k); the Dynamic Optimization produces u(k). Each module has its own tuning parameters. The Estimator module is highlighted.]
Estimator preliminaries
Basic model, inputs and outputs
Basic model: x⁺ = Ax + Bu + w,  y = Cx + v
Inputs at time k: measured output y(k) and predicted state x̂⁻(k)
Output at time k: updated (filtered) state estimate x̂(k)
State estimator for basic model
Choose any L such that (A − ALC) is strictly Hurwitz
Filtering: x̂(k) = x̂⁻(k) + L( y(k) − C x̂⁻(k) )
Key observation
The estimator is the only feedback module in an MPC. Any discrepancy between the true plant and the model should be corrected there.
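The filtering update above can be sketched numerically. This is a minimal NumPy example with hypothetical matrices (A, B, C, L are illustrative, not from the slides); it checks that A − ALC is strictly stable and that the predicted estimate converges to the true state:

```python
import numpy as np

# Minimal sketch of the filtering-form state estimator (hypothetical 2-state system).
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.3]])  # chosen so that A - A L C is strictly stable

# check the estimator error matrix has all eigenvalues inside the unit circle
err_mat = A - A @ L @ C
assert np.max(np.abs(np.linalg.eigvals(err_mat))) < 1.0

x = np.array([[1.0], [-1.0]])      # true (unknown) plant state
xhat_pred = np.zeros((2, 1))       # predicted estimate x̂⁻(k)
for k in range(50):
    u = np.array([[np.sin(0.1 * k)]])
    y = C @ x                                   # measurement (noise-free here)
    xhat = xhat_pred + L @ (y - C @ xhat_pred)  # filtering step
    x = A @ x + B @ u                           # plant evolves
    xhat_pred = A @ xhat + B @ u                # prediction step

print(float(np.linalg.norm(x - xhat_pred)))  # estimation error decays to ~0
```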
Augmented system: definition
Issue and solution approach
An MPC based on the previous estimator does not compensate for plant/model mismatch and persistent disturbances
As in [Davison and Smith, 1971, Kwakernaak and Sivan, 1972, Smith and Davison, 1972, Francis and Wonham, 1976], one should model and estimate the disturbance to be rejected
For offset-free control, an integrating disturbance is added
Augmented system [Muske and Badgwell, 2002, Pannocchia and Rawlings, 2003], with d ∈ R^{n_d}:
[x; d]⁺ = [A  B_d; 0  I] [x; d] + [B; 0] u + [w; w_d]
y = [C  C_d] [x; d] + v
Augmented system: observability
Observability of the augmented system
Assume that (A, C) is observable
Question: is ( [A  B_d; 0  I], [C  C_d] ) observable?
Answer: yes, if and only if (from the Hautus test)
rank [A − I  B_d; C  C_d] = n + n_d
Observation: the previous condition can be satisfied only if n_d ≤ p
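The Hautus-type rank condition is easy to check numerically. A small sketch with hypothetical matrices, using the output disturbance model (B_d = 0, C_d = I, n_d = p) that appears later in the slides:

```python
import numpy as np

# Rank test for observability of the augmented system (hypothetical example).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]

# output disturbance model: Bd = 0, Cd = I, nd = p
Bd = np.zeros((n, p))
Cd = np.eye(p)
nd = p

M = np.block([[A - np.eye(n), Bd], [C, Cd]])
print(np.linalg.matrix_rank(M), n + nd)  # augmented system observable iff equal
```

Since A is strictly stable here (no eigenvalue at 1), the augmented pair passes the test.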
Augmented system: controllability
Controllability of the augmented system
Assume that (A, B) is controllable
Question: is ( [A  B_d; 0  I], [B; 0] ) controllable?
Answer: no
Observation: the disturbance is not going to be controlled. Its effect is taken into account in the Steady-State Optimization and Dynamic Optimization modules.
Augmented estimator
General design
Set n_d = p (see [Pannocchia and Rawlings, 2003, Maeder et al., 2009] for issues on choosing n_d < p)
Choose (B_d, C_d) such that the augmented system is observable
Choose L = [L_x; L_d] such that
( [A  B_d; 0  I] − [A  B_d; 0  I] [L_x; L_d] [C  C_d] ) is strictly Hurwitz
Augmented estimator:
[x̂(k); d̂(k)] = [x̂⁻(k); d̂⁻(k)] + [L_x; L_d] ( y(k) − [C  C_d] [x̂⁻(k); d̂⁻(k)] )
The typical industrial design
Typical industrial design for stable systems (A strictly Hurwitz)
Output disturbance model: B_d = 0, C_d = I, L_x = 0, L_d = I
Any error y(k) − C x̂⁻(k) is assumed to be caused by a step (constant) disturbance acting on the output. In fact, the filtered disturbance estimate is:
d̂(k) = d̂⁻(k) + ( y(k) − C x̂⁻(k) − d̂⁻(k) ) = y(k) − C x̂⁻(k)
It is a deadbeat Kalman filter: Q = 0, Q_d = I, R → 0
It is simple and does the job [Rawlings et al., 1994]
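The deadbeat output disturbance update can be seen in action in a minimal sketch (scalar hypothetical system): a constant unmodeled output offset is absorbed entirely by d̂(k) = y(k) − C x̂⁻(k).

```python
import numpy as np

# Output disturbance model in action (hypothetical scalar system): a constant
# unmodeled output offset is picked up by the deadbeat disturbance estimate.
A, B, C = np.array([[0.8]]), np.array([[1.0]]), np.array([[1.0]])
d_true = 0.3                      # constant plant/model mismatch on the output

x = np.array([[0.0]])
xhat_pred, dhat_pred = np.array([[0.0]]), np.array([[0.0]])
for k in range(30):
    u = np.array([[1.0]])
    y = C @ x + d_true            # plant output includes the offset
    dhat = y - C @ xhat_pred      # Lx = 0, Ld = I: deadbeat disturbance update
    xhat = xhat_pred              # state estimate is not corrected (Lx = 0)
    x = A @ x + B @ u
    xhat_pred = A @ xhat + B @ u  # Bd = 0: disturbance does not enter the state
    dhat_pred = dhat

print(float(dhat))  # matches the true offset 0.3
```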
Comments on the industrial design
[Figure: rotation factor for integrating systems. The uncorrected output prediction is rotated and translated by an angle α so that it passes through the measurement y_k at the current time k.]
Limitations of the output disturbance model
The overall performance is often sluggish [Lundström et al., 1995, Muske and Badgwell, 2002, Pannocchia and Rawlings, 2003, Pannocchia, 2003]
A suitable estimator design can improve the closed-loop performance [Pannocchia, 2003, Pannocchia and Bemporad, 2007, Rajamani et al., 2009]
Often, a deadbeat input disturbance model works better: B_d = B, C_d = 0, Q = 0, Q_d = I, R → 0
Steady-state optimization module
[Block diagram: the same MPC structure (Estimator, Steady-state Optimization, and Dynamic Optimization modules in closed loop with the Process), here with the Steady-state Optimization module highlighted. It receives d̂(k) and produces the targets x_s(k), u_s(k).]
Steady-state optimization module: introduction
A trivial case
Square system (m = p) without constraints and with setpoints on all CVs
Solve the linear system:
x_s = A x_s + B u_s + B_d d̂
r_sp = C x_s + C_d d̂
Obtain:
[x_s; u_s] = [I − A  −B; C  0]⁻¹ [B_d d̂; r_sp − C_d d̂]
Observation
The steady-state target (x_s, u_s) may change at each decision time because of the disturbance estimate d̂
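For the square unconstrained case, the target calculation is a single linear solve. A sketch with hypothetical scalar data:

```python
import numpy as np

# Steady-state target for the square unconstrained case (hypothetical 1x1 data):
# solve [[I-A, -B], [C, 0]] [xs; us] = [Bd*dhat; rsp - Cd*dhat]
A, B, C = np.array([[0.8]]), np.array([[0.5]]), np.array([[1.0]])
Bd, Cd = np.zeros((1, 1)), np.eye(1)
dhat = np.array([[0.2]])
rsp = np.array([[1.0]])

n, m = A.shape[0], B.shape[1]
K = np.block([[np.eye(n) - A, -B], [C, np.zeros((C.shape[0], m))]])
rhs = np.vstack([Bd @ dhat, rsp - Cd @ dhat])
sol = np.linalg.solve(K, rhs)
xs, us = sol[:n], sol[n:]

# verify the equilibrium and setpoint conditions
assert np.allclose(A @ xs + B @ us + Bd @ dhat, xs)
assert np.allclose(C @ xs + Cd @ dhat, rsp)
print(float(xs), float(us))
```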
Objectives of the steady-state optimization module
Boundary conditions
In most cases the number of CVs differs from the number of MVs: p ≠ m
CVs and MVs have constraints to meet: y_min ≤ y(k) ≤ y_max,  u_min ≤ u(k) ≤ u_max
Only a small subset of CVs has fixed setpoints: H y(k) → r_sp
Objectives
Given the current disturbance estimate d̂, compute the equilibrium (x_s, u_s, y_s):
x_s = A x_s + B u_s + B_d d̂,  y_s = C x_s + C_d d̂
such that:
constraints on MVs and CVs are satisfied: y_min ≤ y_s ≤ y_max,  u_min ≤ u_s ≤ u_max
the subset of CVs tracks the setpoint: H y_s = r_sp
Examples of steady-state optimization feasible regions
Stable system with 2 MVs and 2 CVs (with bounds)
y_s = C(I − A)⁻¹B u_s + ( C(I − A)⁻¹B_d + C_d ) d̂ = G u_s + G_d d̂
[Figure: feasible region in the (u_s^(1), u_s^(2)) plane, bounded by the input limits u^(1)_min, u^(1)_max, u^(2)_min, u^(2)_max and by the output limits y^(1)_min, y^(1)_max, y^(2)_min, y^(2)_max mapped through G.]
Examples of steady-state optimization feasible regions
Stable system with 2 MVs and 2 CVs (one setpoint)
y_s = C(I − A)⁻¹B u_s + ( C(I − A)⁻¹B_d + C_d ) d̂ = G u_s + G_d d̂
[Figure: with one setpoint, y^(2)_min = y^(2)_max, the feasible region in the (u_s^(1), u_s^(2)) plane collapses onto a line segment within the remaining input and output bounds.]
Examples of steady-state optimization feasible regions
Stable system with 2 MVs and 2 CVs (two setpoints)
y_s = C(I − A)⁻¹B u_s + ( C(I − A)⁻¹B_d + C_d ) d̂ = G u_s + G_d d̂
[Figure: with two setpoints, y^(1)_min = y^(1)_max and y^(2)_min = y^(2)_max, the feasible region in the (u_s^(1), u_s^(2)) plane reduces to a single point.]
Examples of steady-state optimization feasible regions
Stable system with 2 MVs and 3 CVs (two setpoints)
y_s = C(I − A)⁻¹B u_s + ( C(I − A)⁻¹B_d + C_d ) d̂ = G u_s + G_d d̂
[Figure: with two setpoints, y^(1)_min = y^(1)_max and y^(2)_min = y^(2)_max, the feasible region is a single point, which must also respect the range bounds y^(3)_min ≤ y_s^(3) ≤ y^(3)_max on the third CV.]
Hard and soft constraints
Hard constraints
In the two optimization modules, constraints on MVs are regarded as hard, i.e., they cannot be violated
This choice comes from the possibility of satisfying them exactly
Soft constraints
In the two optimization modules, constraints on CVs are regarded as soft, i.e., they can be violated when necessary
The amount of violation is penalized in the objective function
This choice comes from the impossibility of satisfying them exactly
Steady-state optimization module: linear formulation
General formulation (LP)
min_{u_s, x_s, ε̲_s, ε̄_s}  q̄′ε̄_s + q̲′ε̲_s + r′u_s
s.t.  x_s = A x_s + B u_s + B_d d̂
      u_min ≤ u_s ≤ u_max
      y_min − ε̲_s ≤ C x_s + C_d d̂ ≤ y_max + ε̄_s
      ε̲_s ≥ 0,  ε̄_s ≥ 0
Extensions
Setpoints on some CVs can be specified either with an equality constraint or by setting identical minimum and maximum values
Often CVs are grouped by ranks
LP steady-state optimization module: tuning
Cost function
r ∈ R^m is the MV cost vector: a positive (negative) entry in r implies that the corresponding MV should be minimized (maximized)
q̄ ∈ R^p and q̲ ∈ R^p are the weights of upper and lower bound CV violations. Sometimes they are defined in terms of equal concern errors:
q̄_i = 1 / SSECE^U_i,  q̲_i = 1 / SSECE^L_i,  i = 1, ..., p
Constraints
Bounds u_max, u_min, y_max, y_min can be modified by the operators (within ranges defined by the MPC designer)
Sometimes, in order to avoid large target changes from time k − 1 to time k, a rate constraint is added:
−Δu_max ≤ u_s(k) − u_s(k−1) ≤ Δu_max
LP steady-state optimization module: numerical solution
Standard LP form: min_z c′z  s.t.  E z ≤ e,  F z = f, with
z = [x_s; u_s; ε̄_s; ε̲_s]
E = [0  I  0  0;  0  −I  0  0;  C  0  −I  0;  −C  0  0  −I;  0  0  −I  0;  0  0  0  −I]
e = [u_max; −u_min; y_max − C_d d̂; −(y_min − C_d d̂); 0; 0]
F = [I − A  −B  0  0],  f = B_d d̂
Solution algorithms [Nocedal and Wright, 2006]
Solution methods based on simplex or interior point algorithms
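The standard-form matrices above can be assembled and solved directly. This sketch uses hypothetical scalar data (A = 0.5, B = C = 1, B_d = 0, C_d = 1) and SciPy's `linprog`, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Standard LP form of the steady-state target problem (hypothetical 1-state,
# 1-MV, 1-CV example), solved with scipy.optimize.linprog.
A, B, C = 0.5, 1.0, 1.0
Cd, dhat = 1.0, 0.1
umin, umax = -1.0, 1.0
ymin, ymax = 0.0, 0.8
r, qbar, qund = 1.0, 100.0, 100.0   # MV cost and CV violation weights

# z = [xs, us, eps_bar, eps_und]
c = np.array([0.0, r, qbar, qund])
E = np.array([
    [0.0,  1.0,  0.0,  0.0],   # us <= umax
    [0.0, -1.0,  0.0,  0.0],   # -us <= -umin
    [C,    0.0, -1.0,  0.0],   # C xs - eps_bar <= ymax - Cd dhat
    [-C,   0.0,  0.0, -1.0],   # -C xs - eps_und <= -(ymin - Cd dhat)
    [0.0,  0.0, -1.0,  0.0],   # eps_bar >= 0
    [0.0,  0.0,  0.0, -1.0],   # eps_und >= 0
])
e = np.array([umax, -umin, ymax - Cd*dhat, -(ymin - Cd*dhat), 0.0, 0.0])
F = np.array([[1.0 - A, -B, 0.0, 0.0]])   # (I - A) xs - B us = Bd dhat
f = np.array([0.0])

res = linprog(c, A_ub=E, b_ub=e, A_eq=F, b_eq=f,
              bounds=(None, None), method="highs")
assert res.success
xs, us = res.x[0], res.x[1]
print(xs, us)
```

With r > 0 the LP pushes u_s down until the lower CV bound y_s ≥ y_min becomes active, without paying the (much larger) violation penalty.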
Steady-state optimization module: quadratic formulation
QP formulation
The LP formulation is quite intuitive, but its outcome may be too jumpy
Sometimes a QP formulation may be preferred:
min_{u_s, x_s, ε̲_s, ε̄_s}  ‖ε̄_s‖²_{Q̄_s} + ‖ε̲_s‖²_{Q̲_s} + ‖u_s − u_sp‖²_{R_s}
s.t.  x_s = A x_s + B u_s + B_d d̂
      u_min ≤ u_s ≤ u_max
      y_min − ε̲_s ≤ C x_s + C_d d̂ ≤ y_max + ε̄_s
where u_sp is the desired MV setpoint and ‖x‖²_Q = x′Qx
Tuning is slightly more complicated, but the outcome is usually smoother
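A minimal QP-flavored sketch of the same kind of target problem (hypothetical data; SciPy's SLSQP solver assumed available). The slack variables are eliminated by an equivalent exact quadratic penalty on the CV bound violations:

```python
import numpy as np
from scipy.optimize import minimize

# QP-style steady-state target (hypothetical data: A=0.5, B=C=1, Cd=1,
# dhat=0.1, usp=0.5). Soft CV bounds via a quadratic penalty, which is
# equivalent to the slack formulation with weights Qs.
A, B, C = 0.5, 1.0, 1.0
Cd, dhat, usp = 1.0, 0.1, 0.5
ymin, ymax, Qs = 0.0, 0.8, 100.0

def cost(v):
    xs, us = v
    ys = C*xs + Cd*dhat
    return (Qs*max(0.0, ys - ymax)**2      # upper violation, ~ ||eps_bar||^2
            + Qs*max(0.0, ymin - ys)**2    # lower violation, ~ ||eps_und||^2
            + (us - usp)**2)               # ||us - usp||^2_Rs

model = {"type": "eq", "fun": lambda v: (1 - A)*v[0] - B*v[1]}  # xs = Axs + Bus
res = minimize(cost, x0=np.zeros(2), method="SLSQP", constraints=[model])
assert res.success
print(res.x)
```

Unlike the LP, the quadratic cost trades a small CV violation against staying close to u_sp, which is why the QP outcome moves more smoothly as d̂ changes.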
Dynamic optimization module
[Block diagram: the same MPC structure (Estimator, Steady-state Optimization, and Dynamic Optimization modules in closed loop with the Process), here with the Dynamic Optimization module highlighted. It receives x̂(k), d̂(k) and the targets x_s(k), u_s(k), and produces u(k).]
Dynamic optimization module: introduction
Agenda
Make a finite-horizon prediction of the future CV evolution based on a sequence of MVs
Find the optimal MV sequence, minimizing a cost function that comprises:
  deviation of CVs (and MVs) from their targets
  rate of change of MVs
while respecting constraints on:
  MVs (always)
  CVs (possibly)
Dynamic optimization module: graphical interpretation
[Figure: past and predicted future trajectories of the output y over the prediction horizon and of the input u over the control horizon, separated at the current time between Past and Future.]
Dynamic optimization module: formulation
Cost function (Q, either R or S, Q̄, Q̲, P positive definite matrices):
V_N(x̂, u, ε̲, ε̄) = Σ_{j=0}^{N−1} [ ‖ŷ(j) − y_s‖²_Q + ‖u(j) − u_s‖²_R + ‖u(j) − u(j−1)‖²_S + ‖ε̄(j)‖²_{Q̄} + ‖ε̲(j)‖²_{Q̲} ] + ‖x(N) − x_s‖²_P
s.t.  x⁺ = Ax + Bu + B_d d̂,  ŷ = Cx + C_d d̂,  x(0) = x̂,  y_s = C x_s + C_d d̂
Control problem
min_{u, ε̲, ε̄} V_N(x̂, u, ε̲, ε̄)
s.t.  u_min ≤ u(j) ≤ u_max
      −Δu_max ≤ u(j) − u(j−1) ≤ Δu_max
      y_min − ε̲(j) ≤ ŷ(j) ≤ y_max + ε̄(j)
Dynamic optimization module: formulation
Main tuning parameters
Q: diagonal matrix of weights for CV deviations from target: q_ii = (1 / DECE^M_i)²
R: diagonal matrix of weights for MV deviations from target
S: diagonal matrix of weights for MV rates of change, often called move suppression factors
Q̄, Q̲: diagonal matrices of weights for CV constraint violations:
q̄_ii = (1 / DECE^U_i)²,  q̲_ii = (1 / DECE^L_i)²
u_max, u_min, y_max, y_min, Δu_max: constraint vectors
N: prediction horizon
Dynamic optimization module: rewriting
Deviation variables
Use the target (x_s, u_s): x̃(j) = x(j) − x_s,  ũ(j) = u(j) − u_s
Recall that: x_s = A x_s + B u_s + B_d d̂,  y_s = C x_s + C_d d̂
Cost function becomes:
V_N(x̂, ũ, ε̲, ε̄) = Σ_{j=0}^{N−1} [ ‖x̃(j)‖²_{C′QC} + ‖ũ(j)‖²_R + ‖ũ(j) − ũ(j−1)‖²_S + ‖ε̄(j)‖²_{Q̄} + ‖ε̲(j)‖²_{Q̲} ] + ‖x̃(N)‖²_P
s.t.  x̃⁺ = A x̃ + B ũ,  x̃(0) = x̂ − x_s
Dynamic optimization module: constrained LQR
Compact problem formulation
min_{ũ, ε̲, ε̄} V_N(x̂, ũ, ε̲, ε̄)
s.t.  u_min − u_s ≤ ũ(j) ≤ u_max − u_s
      −Δu_max ≤ ũ(j) − ũ(j−1) ≤ Δu_max
      y_min − y_s − ε̲(j) ≤ C x̃(j) ≤ y_max − y_s + ε̄(j)
Constrained LQR formulation
With suitable definitions (shown next), we obtain a constrained LQR formulation:
min_{u_a} V_N(x_a, u_a) = Σ_{j=0}^{N−1} [ ‖x_a(j)‖²_{Q_a} + ‖u_a(j)‖²_{R_a} + 2 x_a(j)′ M_a u_a(j) ] + ‖x_a(N)‖²_{P_a}
s.t.  x_a⁺ = A_a x_a + B_a u_a,  D_a u_a + E_a x_a ≤ e_a
Dynamic optimization module: constrained LQR (cont'd)
Augmented state and input
x_a(j) = [x̃(j); ũ(j−1)],  u_a(j) = [ũ(j); ε̄(j); ε̲(j)]
The state is augmented to write the terms ũ(j) − ũ(j−1) (in the objective function and/or in the constraints)
The input is augmented to write the soft output constraints
Matrices and vectors
A_a = [A  0; 0  0],  B_a = [B  0  0; I  0  0]
Q_a = [C′QC  0; 0  S],  R_a = blkdiag(R + S, Q̄, Q̲),  M_a = [0  0  0; −S  0  0]
D_a = [I  0  0; −I  0  0; I  0  0; −I  0  0; 0  −I  0; 0  0  −I],  E_a = [0  0; 0  0; 0  −I; 0  I; C  0; −C  0]
e_a = [u_max − u_s; u_s − u_min; Δu_max; Δu_max; y_max − y_s; y_s − y_min]
Dynamic optimization module: QP solution
Stacked vectors and prediction matrices:
x_a = [x_a(0); x_a(1); x_a(2); ...; x_a(N)],  u_a = [u_a(0); u_a(1); ...; u_a(N−1)]
𝒜 = [I; A_a; A_a²; ...; A_a^N]
ℬ = lower block-triangular, with block (i, j) equal to A_a^{i−1−j} B_a for j < i and zero otherwise
so that x_a = 𝒜 x_a(0) + ℬ u_a
𝒬 = blkdiag(Q_a, ..., Q_a, P_a),  ℛ = blkdiag(R_a, ..., R_a),  ℳ = [blkdiag(M_a, ..., M_a); 0]
𝒟 = blkdiag(D_a, ..., D_a),  ℰ = [blkdiag(E_a, ..., E_a)  0],  ℯ = [e_a; e_a; ...; e_a]
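As a self-check on this condensation, a NumPy sketch with a small hypothetical system (no rate-of-change or slack terms, so the cross term M is zero) verifies that the condensed quadratic form reproduces the stage-wise cost summed along a simulated trajectory:

```python
import numpy as np

# Condensation check: (1/2)u'Hu + u'q + const must equal the simulated
# finite-horizon cost for any input sequence (hypothetical 2-state system).
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]]); B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.eye(1); P = np.eye(2); N = 5
n, m = 2, 1

# x_stack = Acal x0 + Bcal u_stack, with x_stack = [x(0); ...; x(N)]
Acal = np.vstack([np.linalg.matrix_power(A, j) for j in range(N + 1)])
Bcal = np.zeros(((N + 1) * n, N * m))
for i in range(1, N + 1):
    for j in range(i):
        Bcal[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i-1-j) @ B

Qcal = np.kron(np.eye(N), Q)
Qcal = np.block([[Qcal, np.zeros((N*n, n))], [np.zeros((n, N*n)), P]])
Rcal = np.kron(np.eye(N), R)

H = Bcal.T @ Qcal @ Bcal + Rcal
x0 = np.array([[1.0], [-1.0]])
q = Bcal.T @ Qcal @ Acal @ x0

u = rng.standard_normal((N * m, 1))
Vqp = 0.5 * float(u.T @ H @ u + 2 * u.T @ q + x0.T @ Acal.T @ Qcal @ Acal @ x0)

# simulate the dynamics and sum the stage costs directly
x, Vsim = x0, 0.0
for j in range(N):
    uj = u[j*m:(j+1)*m]
    Vsim += 0.5 * float(x.T @ Q @ x + uj.T @ R @ uj)
    x = A @ x + B @ uj
Vsim += 0.5 * float(x.T @ P @ x)
print(abs(Vqp - Vsim))  # agreement up to rounding
```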
Dynamic optimization module: QP solution (cont'd)
QP problem
min_{u_a} (1/2) u_a′ H_a u_a + u_a′ q_a  s.t.  F_a u_a ≤ ℯ − G_a x_a(0)
where (with the stacked matrices of the previous slide)
H_a = ℬ′𝒬ℬ + ℛ + ℬ′ℳ + ℳ′ℬ
q_a = (ℬ′𝒬 + ℳ′) 𝒜 x_a(0)
F_a = 𝒟 + ℰℬ
G_a = ℰ𝒜
Observations
Both the linear penalty and the constraint RHS vary linearly with the current augmented state x_a(0), while all other terms are fixed
QP solvers are based on active-set methods or interior point methods
Feedback controller synthesis from open-loop controllers: the receding horizon principle
A quote from [Lee and Markus, 1967]
"One technique for obtaining a feedback controller synthesis from knowledge of open-loop controllers is to measure the current control process state and then compute very rapidly for the open-loop control function. The first portion of this function is then used during a short time interval, after which a new measurement of the process state is made and a new open-loop control function is computed for this new measurement. The procedure is then repeated."
Optimal control sequence and closed-loop implementation
The optimal control sequence
The optimal control sequence u_a⁰ has the form:
u_a⁰ = ( ũ⁰(0), ũ⁰(1), ..., ũ⁰(N−1), ε̄(0), ε̄(1), ..., ε̄(N−1), ε̲(0), ε̲(1), ..., ε̲(N−1) )
The variables ε̄ and ε̲ are present only when soft output constraints are used
Closed-loop implementation
Only the first element of the optimal control sequence is injected into the plant: u(k) = ũ⁰(0) + u_s(k)
The successor state is predicted using the estimator model:
x̂⁻(k+1) = A x̂(k) + B u(k) + B_d d̂(k)
d̂⁻(k+1) = d̂(k)
Linear MPC: summary
Overall algorithm
1. Given the predicted state and disturbance (x̂⁻(k), d̂⁻(k)) and the output measurement y(k), compute the filtered estimates:
x̂(k) = x̂⁻(k) + L_x ( y(k) − C x̂⁻(k) − C_d d̂⁻(k) )
d̂(k) = d̂⁻(k) + L_d ( y(k) − C x̂⁻(k) − C_d d̂⁻(k) )
2. Solve the Steady-State Optimization problem and compute the targets (x_s(k), u_s(k))
3. Define the deviation variables x̃(0) = x̂(k) − x_s(k), ũ(−1) = u(k−1) − u_s(k), and the initial regulator state x_a(0) = [x̃(0); ũ(−1)]. Solve the Dynamic Optimization problem to obtain ũ⁰
4. Inject the control action u(k) = ũ⁰(0) + u_s(k). Predict the successor state x̂⁻(k+1) = A x̂(k) + B u(k) + B_d d̂(k) and disturbance d̂⁻(k+1) = d̂(k). Set k ← k + 1 and go to 1
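The four steps above can be wired together in a compact sketch. This example uses a hypothetical scalar system with a constant output mismatch, the deadbeat output disturbance estimator, the square target calculation, and an unconstrained finite-horizon LQ regulator in place of the full constrained QP (first move only):

```python
import numpy as np

# End-to-end receding-horizon loop on a hypothetical scalar system:
# estimator -> targets -> first LQ move -> inject -> predict -> repeat.
A, B, C = 0.8, 1.0, 1.0
Q, R, P, N = 1.0, 1.0, 1.0, 10
r = 1.0
d_true = 0.2                       # constant plant/model output mismatch

# condensed unconstrained LQ: utilde* = -H^{-1} q(x̃0); keep the first move
Acal = np.array([[A**j] for j in range(N + 1)])
Bcal = np.zeros((N + 1, N))
for i in range(1, N + 1):
    for j in range(i):
        Bcal[i, j] = A**(i - 1 - j) * B
Qcal = np.diag([Q]*N + [P])
H = Bcal.T @ Qcal @ Bcal + np.eye(N) * R
K = np.linalg.solve(H, Bcal.T @ Qcal @ Acal)[0, 0]  # first-move feedback gain

x, xhat_pred, dhat_pred = 0.0, 0.0, 0.0
for k in range(40):
    y = C * x + d_true                               # measure
    dhat = dhat_pred + (y - C*xhat_pred - dhat_pred) # 1. filter (Ld = I)
    xhat = xhat_pred                                 #    (Lx = 0)
    xs = (r - dhat) / C                # 2. targets: C xs + dhat = r
    us = (1 - A) * xs / B              #    xs = A xs + B us
    u = us - K * (xhat - xs)           # 3.-4. first move plus target
    x = A*x + B*u                      # plant evolves
    xhat_pred = A*xhat + B*u           # predict successor state
    dhat_pred = dhat

print(y)  # tracks the setpoint r = 1 despite the unmodeled offset
```

The point of the sketch is the offset-free mechanism: the disturbance estimate shifts the targets until the measured output sits exactly on the setpoint.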
General formulation of an optimization problem
The three ingredients
x ∈ R^n: vector of variables
f: R^n → R: objective function
c: R^n → R^m: vector function of constraints that the variables must satisfy; m is the number of restrictions applied
The optimization problem
min_{x ∈ R^n} f(x)  subject to  c_i(x) = 0, i ∈ E;  c_i(x) ≥ 0, i ∈ I
E, I: sets of indices of equality and inequality constraints, respectively
Constrained optimization: example 1
Solve: min x_1 + x_2  s.t.  x_1² + x_2² − 2 = 0
Standard notation, feasibility region and solution
In standard notation: f(x) = x_1 + x_2, I = ∅, E = {1}, c_1(x) = x_1² + x_2² − 2
Feasibility region: circle of radius √2, only the border
Solution: x* = [−1, −1]′
[Figure: level lines of f and the feasible circle in the (x_1, x_2) plane, with ∇f drawn at the solution.]
Observation
∇f(x*) = [1; 1],  ∇c_1(x*) = [−2; −2]  ⇒  ∇f(x*) = −(1/2) ∇c_1(x*)
Constrained optimization: example 2
Solve: min x_1 + x_2  s.t.  2 − x_1² − x_2² ≥ 0
Standard notation, feasibility region and solution
In standard notation: f(x) = x_1 + x_2, I = {1}, E = ∅, c_1(x) = 2 − (x_1² + x_2²)
Feasibility region: circle of radius √2, including the interior
Solution: x* = [−1, −1]′
[Figure: level lines of f and the feasible disk in the (x_1, x_2) plane, with ∇f drawn at the solution.]
Observation
∇f(x*) = [1; 1],  ∇c_1(x*) = [2; 2]  ⇒  ∇f(x*) = (1/2) ∇c_1(x*)
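The gradient relations at x* = (−1, −1) in both examples can be verified numerically in a few lines:

```python
import numpy as np

# Check of the gradient relations at x* = (-1, -1) for both examples.
xstar = np.array([-1.0, -1.0])
grad_f = np.array([1.0, 1.0])            # gradient of f(x) = x1 + x2

# Example 1: c1(x) = x1^2 + x2^2 - 2 (equality), grad c1 = (2 x1, 2 x2)
grad_c1 = 2 * xstar
lam1 = -0.5
assert np.allclose(grad_f, lam1 * grad_c1)   # grad f = -1/2 grad c1

# Example 2: c1(x) = 2 - x1^2 - x2^2 (inequality), grad c1 = (-2 x1, -2 x2)
grad_c2 = -2 * xstar
lam2 = 0.5
assert np.allclose(grad_f, lam2 * grad_c2)   # multiplier 1/2 >= 0, as required
print("gradient conditions verified")
```

Note the sign difference: for the equality constraint the multiplier is free (here negative), while for the inequality the KKT conditions of the next slide require it to be nonnegative.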
Constrained optimality conditions (KKT)
Lagrangian function
L(x, λ) = f(x) − Σ_{i ∈ E ∪ I} λ_i c_i(x)
Karush-Kuhn-Tucker optimality conditions
If x* is a local solution to the standard problem, there exists a vector λ* ∈ R^m such that the following conditions hold:
∇_x L(x*, λ*) = 0
c_i(x*) = 0 for all i ∈ E
c_i(x*) ≥ 0 for all i ∈ I
λ*_i ≥ 0 for all i ∈ I
λ*_i c_i(x*) = 0 for all i ∈ E ∪ I
The components of λ* are called Lagrange multipliers
Notice that a multiplier is zero when the corresponding constraint is inactive
Linear programming (LP) problems
LP in standard form
min_x c′x  subject to  Ax = b,  x ≥ 0
where c ∈ R^n, x ∈ R^n, A ∈ R^{m×n}, b ∈ R^m, and rank(A) = m
Optimality conditions
Lagrangian function: L(x, π, s) = c′x − π′(Ax − b) − s′x
If x* is a solution of the linear program, then there exist π* and s* such that:
A′π* + s* = c
Ax* = b
x* ≥ 0
s* ≥ 0
x*_i s*_i = 0,  i = 1, 2, ..., n
LP: the simplex method
Basic points
A point x ∈ R^n is a basic point if: Ax = b and x ≥ 0; at most m components of x are nonzero; the columns of A corresponding to the nonzero components are linearly independent
Fundamental aspects of the simplex method
Basic points are vertices of the feasibility region
The solution is a basic point
The simplex method iterates from a basic point x_k to another one x_{k+1}, and stops when all components of s_k are nonnegative
When a component of s_k is negative, a new basic point x_{k+1} in which the corresponding element of x is nonzero is selected
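The fact that the solution is a basic point can be illustrated by brute force: enumerate every basis (independent column selection), compute the corresponding basic point, and keep the best feasible one. This is the vertex set the simplex method walks over (hypothetical data):

```python
import numpy as np
from itertools import combinations

# Enumerate the basic points of min c'x s.t. Ax = b, x >= 0 and pick the best.
c = np.array([-1.0, -2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])   # m = 1, n = 3 (x3 acts as a slack variable)
b = np.array([1.0])
m, n = A.shape

best, best_val = None, np.inf
for cols in combinations(range(n), m):
    Ab = A[:, cols]
    if np.linalg.matrix_rank(Ab) < m:
        continue                   # columns not independent: not a basis
    xb = np.linalg.solve(Ab, b)
    if np.any(xb < -1e-12):
        continue                   # basic point not feasible
    x = np.zeros(n); x[list(cols)] = xb
    if c @ x < best_val:
        best, best_val = x, c @ x

print(best, best_val)  # vertex (0, 1, 0) with objective -2
```

The simplex method reaches the same vertex without enumeration, by moving along edges guided by the signs of s_k.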
Quadratic programming (QP) problems (1/2)
Standard form
min_x (1/2) x′Gx + x′d
subject to:  a_i′x = b_i, i ∈ E;  a_i′x ≥ b_i, i ∈ I
where G ∈ R^{n×n} is symmetric (positive definite), d ∈ R^n, b ∈ R^m, and a_i ∈ R^n for all i ∈ E ∪ I
The active set
A(x*) = { i ∈ E ∪ I : a_i′x* = b_i }
Quadratic programming (QP) problems (2/2)
Lagrangian function
L(x, λ) = (1/2) x′Gx + x′d − Σ_{i ∈ E ∪ I} λ_i (a_i′x − b_i)
Optimality conditions (KKT)
G x* + d − Σ_{i ∈ A(x*)} λ*_i a_i = 0
a_i′x* = b_i for all i ∈ A(x*)
a_i′x* > b_i for all i ∈ I \ A(x*)
λ*_i ≥ 0 for all i ∈ I ∩ A(x*)
Active set methods for convex QP problems
Fundamental steps
1. Given a feasible x_k, evaluate its active set A(x_k) and build the matrix A whose rows are {a_i′}, i ∈ A(x_k)
2. Solve the KKT linear system
[G  A′; A  0] [−p_k; λ_k] = [d + G x_k; A x_k − b]
3. If ‖p_k‖ ≤ ρ (a small tolerance), check whether λ_i ≥ 0 for all i ∈ A(x_k) ∩ I. If so, stop.
4. If a multiplier λ_i < 0 for some i ∈ A(x_k), remove the i-th constraint from the active set.
5. If ‖p_k‖ > ρ, define x_{k+1} = x_k + α_k p_k, where α_k is the largest scalar in (0, 1] such that no inequality constraint is violated. When a blocking constraint is found, it is included in the new active set A(x_{k+1})
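One iteration of these steps can be traced on a toy QP (hypothetical data). At the current iterate the single inequality is active; the KKT solve returns a zero step with a negative multiplier, so step 4 applies and the constraint is dropped:

```python
import numpy as np

# One active-set iteration on a toy QP:
#   min 1/2 x'Gx + d'x   s.t.   x1 + x2 >= 1   (active at the current iterate)
G = np.eye(2)
d = np.array([-2.0, -2.0])        # unconstrained minimizer is (2, 2)
a, b = np.array([1.0, 1.0]), 1.0
xk = np.array([0.5, 0.5])         # feasible, constraint active: a'xk = b

A = a.reshape(1, 2)
KKT = np.block([[G, A.T], [A, np.zeros((1, 1))]])
rhs = np.concatenate([d + G @ xk, A @ xk - np.array([b])])
sol = np.linalg.solve(KKT, rhs)
p, lam = -sol[:2], sol[2]         # step and multiplier of the active constraint

assert np.allclose(p, 0)          # ||p_k|| <= rho: optimal on this face
assert lam < 0                    # negative multiplier: drop the constraint;
                                  # the next (unconstrained) step reaches (2, 2)
print(p, lam)
```

The negative multiplier signals that the objective can still decrease by moving off the constraint, which is exactly why step 4 removes it from the working set.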
Nonlinear programming (NLP) problems via SQP algorithms
Nonlinear programming (NLP) problems
min_x f(x)  s.t.  c_i(x) = 0, i ∈ E;  c_i(x) ≥ 0, i ∈ I
Sequential Quadratic Programming (SQP) approach
At each iterate x_k, solve the QP subproblem:
min_{p_k} (1/2) p_k′ W_k p_k + ∇f(x_k)′ p_k
s.t.  ∇c_i(x_k)′ p_k + c_i(x_k) = 0, i ∈ E
      ∇c_i(x_k)′ p_k + c_i(x_k) ≥ 0, i ∈ I
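A minimal SQP sketch on example 1 above (min x_1 + x_2 s.t. x_1² + x_2² − 2 = 0). Each iteration solves the equality-constrained QP subproblem in closed form via its KKT system; for simplicity W_k is fixed to the identity, which happens to be the exact Hessian of the Lagrangian at the solution:

```python
import numpy as np

# SQP sketch on: min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0.
def c(x):  return x[0]**2 + x[1]**2 - 2.0
def gc(x): return np.array([2*x[0], 2*x[1]])
gf = np.array([1.0, 1.0])                 # gradient of f is constant

x = np.array([-1.5, -0.5])
for _ in range(20):
    W, a = np.eye(2), gc(x)
    # KKT system of:  min 1/2 p'Wp + grad_f'p   s.t.  grad_c'p + c = 0
    KKT = np.block([[W, -a.reshape(2, 1)], [a.reshape(1, 2), np.zeros((1, 1))]])
    sol = np.linalg.solve(KKT, np.concatenate([-gf, [-c(x)]]))
    x = x + sol[:2]                       # take the full Newton-like step

print(x)  # converges to the constrained minimizer (-1, -1)
```

Each subproblem linearizes the constraint and keeps a quadratic model of the Lagrangian, which is the defining idea of SQP.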
References I
E. J. Davison and H. W. Smith. Pole assignment in linear time-invariant multivariable systems with constant disturbances. Automatica, 7:489–498, 1971.
B. A. Francis and W. M. Wonham. The internal model principle of control theory. Automatica, 12:457–465, 1976.
H. Kwakernaak and R. Sivan. Linear Optimal Control Systems. John Wiley & Sons, 1972.
E. B. Lee and L. Markus. Foundations of Optimal Control Theory. John Wiley & Sons, New York, 1967.
P. Lundström, J. H. Lee, M. Morari, and S. Skogestad. Limitations of dynamic matrix control. Comp. Chem. Eng., 19:409–421, 1995.
U. Maeder, F. Borrelli, and M. Morari. Linear offset-free model predictive control. Automatica, 45(10):2214–2222, 2009.
K. R. Muske and T. A. Badgwell. Disturbance modeling for offset-free linear model predictive control. J. Proc. Contr., 12:617–632, 2002.
J. Nocedal and S. J. Wright. Numerical Optimization. Springer, second edition, 2006.
G. Pannocchia. Robust disturbance modeling for model predictive control with application to multivariable ill-conditioned processes. J. Proc. Contr., 13:693–701, 2003.
G. Pannocchia and A. Bemporad. Combined design of disturbance model and observer for offset-free model predictive control. IEEE Trans. Auto. Contr., 52(6):1048–1053, 2007.
G. Pannocchia and J. B. Rawlings. Disturbance models for offset-free model predictive control. AIChE J., 49:426–437, 2003.
References II
M. R. Rajamani, J. B. Rawlings, and S. J. Qin. Achieving state estimation equivalence for misassigned disturbances in offset-free model predictive control. AIChE J., 55(2):396–407, 2009.
J. B. Rawlings, E. S. Meadows, and K. R. Muske. Nonlinear model predictive control: a tutorial and survey. In ADCHEM Conference, pages 203–214, Kyoto, Japan, 1994.
H. W. Smith and E. J. Davison. Design of industrial regulators. Integral feedback and feedforward control. Proc. IEE, 119(8):1210–1216, 1972.