Model Predictive Control Short Course Regulation


1 Model Predictive Control Short Course: Regulation. James B. Rawlings, Michael J. Risbeck, Nishith R. Patel. Department of Chemical and Biological Engineering. Copyright © 2017 by James B. Rawlings. Milwaukee, WI, August 28-29, 2017. JCI 2017 MPC short course, 61 slides.

2 Outline
1. Linear Quadratic Problem
2. Constraints
3. Dynamic Programming Solution
4. The Infinite Horizon LQ Problem
5. Controllability
6. Convergence of the Linear Quadratic Regulator
7. Constrained Regulation
8. Quadratic Programming

3 Linear quadratic regulation. We start by designing a controller to take the state of a deterministic, linear system to the origin. If the setpoint is not the origin, or we wish to track a time-varying setpoint trajectory, we will subsequently modify the zero-setpoint problem to account for that. The system model is

x(k+1) = A x(k) + B u(k),  y(k) = C x(k)   (1)

in which x is an n-vector, u is an m-vector, and y is a p-vector. We can compress the notation by denoting the model as

x+ = A x + B u,  y = C x   (2)
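The model (1) can be rolled forward in a few lines of code. A minimal sketch in Python (the course itself uses Octave/MATLAB; the A, B, C values here are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical 2-state, 1-input, 1-output system, for illustration only.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def simulate(A, B, C, x0, u_seq):
    """Roll x+ = A x + B u forward; return the outputs y(k) = C x(k)."""
    x = np.asarray(x0, dtype=float).reshape(-1, 1)
    ys = []
    for u in u_seq:
        ys.append((C @ x).item())
        x = A @ x + B * float(u)   # one step of the model
    return ys

y = simulate(A, B, C, [1.0, 0.0], [0.0] * 5)
```

With zero input, the output sequence is simply C A^k x(0).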

4 A point on dot and + notation. Just as in the continuous time model, where we use the dot notation to represent the time derivative,

dx/dt = A x + B u,  y = C x + D u,  i.e.,  xdot = A x + B u,  y = C x + D u

in discrete time we use the + notation to represent the state at the next sample time

x(k+1) = A x(k) + B u(k),  y(k) = C x(k) + D u(k),  i.e.,  x+ = A x + B u,  y = C x + D u

5 State feedback In this first problem, we assume that the state is measured, or C = I. We will handle the output measurement problem with state estimation in the next section. Using the model we can predict how the state evolves given any set of inputs we are considering. JCI 2017 MPC short course 5 / 61

6 Sequence of inputs (decision variables). Consider N time steps into the future and collect the input sequence into u,

u = (u(0), u(1), ..., u(N-1))

Constraints on the u sequence (e.g., valve saturations) are the main feature that distinguishes MPC from standard linear quadratic (LQ) control.

7 Constraints. The manipulated inputs (valve positions, voltages, torques, etc.) to most physical systems are bounded. We include these constraints as linear inequalities

E u(k) <= e,  k = 0, 1, ...

in which

E = [I; -I],  e = (u_max, -u_min)

are chosen to describe simple bounds such as

u_min <= u(k) <= u_max,  k = 0, 1, ...

We sometimes wish to impose constraints on states or outputs for reasons of safety, operability, product quality, etc. These can be stated as

F x(k) <= f,  k = 0, 1, ...

8 Rate of change constraints. Practitioners find it convenient in some applications to limit the rate of change of the input, u(k) - u(k-1). To maintain the state space form of the model, we may augment the state as

x~(k) = (x(k), u(k-1))

and the augmented system model becomes

x~+ = A~ x~ + B~ u,  y = C~ x~

in which

A~ = [A 0; 0 0],  B~ = [B; I],  C~ = [C 0]
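A small sketch of this augmentation in Python. The zero blocks in A~ were lost in the transcription above, so the structure used here (x~ = (x, u(k-1)) with x~+ = (Ax + Bu, u), i.e., A~ = [A 0; 0 0], B~ = [B; I]) is a reconstruction, not a verbatim copy of the slide:

```python
import numpy as np

def augment_for_rate(A, B, C):
    """Augment x~(k) = (x(k), u(k-1)) so rate-of-change constraints stay linear.
    Assumed structure: x~+ = [[A, 0], [0, 0]] x~ + [[B], [I]] u,  y = [C, 0] x~."""
    n, m = B.shape
    p = C.shape[0]
    At = np.block([[A, np.zeros((n, m))],
                   [np.zeros((m, n)), np.zeros((m, m))]])
    Bt = np.vstack([B, np.eye(m)])          # new state carries u(k) into u(k-1) slot
    Ct = np.hstack([C, np.zeros((p, m))])
    return At, Bt, Ct
```

After one step, the last m components of x~ equal the input just applied, which is exactly what a rate-of-change constraint needs to reference.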

9 Rate of change constraints. A rate of change constraint such as

du_min <= u(k) - u(k-1) <= du_max,  k = 0, 1, ...

is then stated as F~ x~(k) + E u(k) <= e with

F~ = [0 I; 0 -I],  E = [-I; I],  e = (-du_min, du_max)

To simplify analysis, it pays to maintain linear constraints when using linear dynamic models. So if we want to consider fairly general constraints for a linear system, we choose the form

F x(k) + E u(k) <= e,  k = 0, 1, ...

which subsumes all the forms listed previously.

10 Controller objective function. We first define an objective function V(.) to measure the deviation of the trajectory of x(k), u(k) from zero by summing the weighted squares

V(x(0), u) = (1/2) Sum_{k=0}^{N-1} [ x(k)'Q x(k) + u(k)'R u(k) ] + (1/2) x(N)'P_f x(N)

subject to

x+ = A x + B u

The objective function depends on the input sequence and state sequence. The initial state is available from the measurement. The remainder of the state trajectory, x(k), k = 1, ..., N, is determined by the model and the input sequence u. So we show the objective function's explicit dependence on the input sequence and initial state.
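The objective above is straightforward to evaluate for a candidate input sequence; a sketch (matrix names follow the slide, all numerical values in the check are illustrative):

```python
import numpy as np

def lq_cost(A, B, Q, R, Pf, x0, u_seq):
    """Evaluate V(x(0), u) = 1/2 sum_{k=0}^{N-1} (x'Qx + u'Ru) + 1/2 x(N)'Pf x(N),
    with the state sequence generated by x+ = A x + B u."""
    x = np.asarray(x0, float).reshape(-1, 1)
    V = 0.0
    for u in u_seq:
        u = np.atleast_2d(u).reshape(-1, 1)
        V += 0.5 * (x.T @ Q @ x + u.T @ R @ u).item()   # stage cost
        x = A @ x + B @ u                               # model step
    V += 0.5 * (x.T @ Pf @ x).item()                    # terminal penalty
    return V
```

This is the function the regulator minimizes over u; here we only evaluate it.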

11 Regulator tuning parameters. The tuning parameters in the controller are the matrices Q and R. Note that we are using the deviation of u from zero as the input penalty; we'll use a rate of change penalty S later. We allow the final state penalty to have a different weighting matrix, P_f, for generality. Large values of Q in comparison to R reflect the designer's intent to drive the state to the origin quickly at the expense of large control action. Penalizing the control action through large values of R relative to Q is the way to reduce the control action and slow down the rate at which the state approaches the origin. Choosing appropriate values of Q and R (i.e., tuning) is not always obvious, and this difficulty is one of the challenges faced by industrial practitioners of LQ control. Notice that MPC inherits this tuning challenge.

12 Regulation problem definition. We then formulate the following optimal MPC control problem

min_u V(x(0), u)   (3)

subject to

x+ = A x + B u
F x(k) + E u(k) <= e,  k = 0, 1, ..., N-1

The Q, P_f, and R matrices often are chosen to be diagonal, but we do not assume that here. We assume, however, that Q, P_f, and R are real and symmetric; Q and P_f are positive semidefinite; and R is positive definite. These assumptions guarantee that the solution to the optimal control problem exists and is unique.

13 Let's first examine a scalar system (n = 1) with horizon N = 1. We have only one move to optimize, u_0.

x_1 = a x_0 + b u_0

The objective is

V = (1/2) ( q x_0^2 + r u_0^2 + p_f x_1^2 )

Expand the x_1 term to see its dependence on the control u_0

V = (1/2) ( q x_0^2 + r u_0^2 + p_f (a x_0 + b u_0)^2 )
  = (1/2) ( q x_0^2 + r u_0^2 + p_f (a^2 x_0^2 + 2 a b x_0 u_0 + b^2 u_0^2) )
V = (1/2) ( (q + a^2 p_f) x_0^2 + 2 (b a p_f x_0) u_0 + (b^2 p_f + r) u_0^2 )

14 Take derivative dV/du_0, set to zero. Take the derivative and set it to zero

dV/du_0 = b p_f a x_0 + (b^2 p_f + r) u_0
0 = b p_f a x_0 + (b^2 p_f + r) u_0

Solve for the optimal control

u_0^0 = -( b p_f a / (b^2 p_f + r) ) x_0,  i.e.,  u_0^0 = k x_0,  k = -b p_f a / (b^2 p_f + r)

15 Back to vectors and matrices. Quadratic function in x_0 and u_0

V = (1/2) ( x_0'(Q + A'P_f A) x_0 + 2 u_0'B'P_f A x_0 + u_0'(B'P_f B + R) u_0 )

Take derivative, set to zero, solve

dV/du_0 = B'P_f A x_0 + (B'P_f B + R) u_0
u_0^0 = -(B'P_f B + R)^{-1} B'P_f A x_0

So we have the optimal linear control law

u_0^0 = K x_0,  K = -(B'P_f B + R)^{-1} B'P_f A
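The one-step gain formula is a single linear solve. A sketch (note the sign convention K = -(B'P_f B + R)^{-1} B'P_f A; the minus signs were lost in the transcription of the slides but follow from the derivative above):

```python
import numpy as np

def one_step_gain(A, B, R, Pf):
    """Optimal feedback for the N = 1 problem: K = -(B' Pf B + R)^{-1} B' Pf A."""
    # np.linalg.solve avoids forming the inverse explicitly
    return -np.linalg.solve(B.T @ Pf @ B + R, B.T @ Pf @ A)
```

In the scalar case this reduces to k = -b p_f a / (b^2 p_f + r); with a = b = p_f = r = 1 the gain is -1/2.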

16 Dynamic programming solution To handle N > 1, we can go to the end of the horizon and work our way backwards to the beginning of the horizon. This trick is known as (backwards) dynamic programming. Developed by Bellman (1957). Each problem that we solve going backwards is an N = 1 problem, which we already know how to solve! When we arrive at k = 0, we have the optimal control as a function of the initial state. This is our control law. We next examine the outcome of applying dynamic programming to the LQ problem. JCI 2017 MPC short course 16 / 61

17 State feedback solution for the last stage. If we have available x(N-1), which we denote by x: Optimal control and feedback gain

u^0_{N-1}(x) = K(N-1) x,  K(N-1) = -(B'P_f B + R)^{-1} B'P_f A

Optimal cost and Riccati equation

V^0_{N-1}(x) = (1/2) x'Pi(N-1) x,  Pi(N-1) = Q + A'P_f A - A'P_f B (B'P_f B + R)^{-1} B'P_f A

18 The optimal cost function. The function V^0_{N-1}(x) defines the optimal cost to go from state x for the last stage under the optimal control law u^0_{N-1}(x). Having this function allows us to move to the next stage of the DP recursion. For the next stage we solve the optimization

min_{u(N-2), x(N-1)}  l(x(N-2), u(N-2)) + V^0_{N-1}(x(N-1))

subject to

x(N-1) = A x(N-2) + B u(N-2)

Notice that this problem is identical in structure to the stage we just solved, and we can write out the solution by simply renaming variables.

19 Next step in the recursion. Now assume we have x(N-2) available, which we denote by x: Optimal control and feedback gain

u^0_{N-2}(x) = K(N-2) x,  K(N-2) = -(B'Pi(N-1) B + R)^{-1} B'Pi(N-1) A

Optimal cost and Riccati equation

V^0_{N-2}(x) = (1/2) x'Pi(N-2) x
Pi(N-2) = Q + A'Pi(N-1) A - A'Pi(N-1) B (B'Pi(N-1) B + R)^{-1} B'Pi(N-1) A

20 The regulator Riccati equation. The recursion from Pi(N-1) to Pi(N-2) is known as a backward Riccati iteration. To summarize, the backward Riccati iteration is defined as follows

Pi(k-1) = Q + A'Pi(k) A - A'Pi(k) B ( B'Pi(k) B + R )^{-1} B'Pi(k) A,  k = N, N-1, ..., 1   (4)

with terminal condition

Pi(N) = P_f   (5)

The terminal condition replaces the typical initial condition because the iteration is running backward.
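The backward iteration (4)-(5) can be sketched directly in Python. This is a plain transcription of the recursion, not the course's own code:

```python
import numpy as np

def riccati_backward(A, B, Q, R, Pf, N):
    """Backward Riccati iteration (4)-(5).
    Returns gains [K(0), ..., K(N-1)] and cost matrices [Pi(0), ..., Pi(N)]."""
    Pi = Pf
    Ks, Pis = [], [Pf]
    for _ in range(N):
        S = B.T @ Pi @ B + R
        K = -np.linalg.solve(S, B.T @ Pi @ A)       # gain uses Pi at the later time
        Pi = Q + A.T @ Pi @ A - A.T @ Pi @ B @ np.linalg.solve(S, B.T @ Pi @ A)
        Ks.append(K)
        Pis.append(Pi)
    Ks.reverse()
    Pis.reverse()
    return Ks, Pis
```

For N = 1 this reproduces the one-step result: with scalar a = b = q = r = p_f = 1, K(0) = -1/2 and Pi(0) = 3/2.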

21 The optimal control policy. The optimal control policy at each stage is

u^0_k(x) = K(k) x,  k = N-1, N-2, ..., 0   (6)

The optimal gain at time k is computed from the Riccati matrix at time k+1

K(k) = -( B'Pi(k+1) B + R )^{-1} B'Pi(k+1) A,  k = N-1, N-2, ..., 0   (7)

The optimal cost to go from time k to time N is

V^0_k(x) = (1/2) x'Pi(k) x,  k = N, N-1, ..., 0   (8)

22 Whew. OK, the controller is optimal, but is it useful? We now have set up and solved the (unconstrained) optimal control problem. Next we are going to study some of the closed-loop properties of the controlled system under this controller. One of these properties is closed-loop stability. Before diving into the controller stability discussion, let's take a breather and see what kinds of things we should guard against when using the optimal controller. We start with a simple scalar system with inverse response or, equivalently, a right-half-plane zero in the system transfer function

y(s) = g(s) u(s),  g(s) = k (s - a)/(s + b),  a, b > 0

23 Step response of system with right-half-plane zero. Convert the transfer function to state space (notice a nonzero D)

dx/dt = A x + B u,  y = C x + D u
A = -b,  B = (a + b),  C = -k,  D = k

Solve the ODE with u(t) = 1 and x(0) = 0.

Figure 1: Step response of a system with a right-half-plane zero (y versus time).

24 What input delivers a unit step change in the output? Say I make a unit step change in the setpoint of y. What u(t) (if any) can deliver a (perfect) unit step in y(t)? In the Laplace domain, y(s) = 1/s. Solve for u(s). Since y = g(s) u we have that

u(s) = y(s)/g(s) = (s + b) / ( k s (s - a) )

We can invert this signal back to the time domain (using partial fractions, for example) to obtain

u(t) = (1/(k a)) [ -b + (a + b) e^{at} ]

Note that the second term is a growing exponential for a > 0. Hmmm... How well will that work?
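This inversion can be checked numerically: feed u(t) above into the state space realization from the previous slide and confirm that y(t) holds at 1. A sketch with illustrative (assumed) values k = a = b = 1, integrating with classical RK4:

```python
import numpy as np

# Assumed example values, not from the slides: g(s) = (s - 1)/(s + 1)
k_g, a, b = 1.0, 1.0, 1.0

def u_exact(t):
    # u(t) = (1/(k a)) * (-b + (a+b) e^{a t}): the unit step pushed through 1/g(s)
    return (-b + (a + b) * np.exp(a * t)) / (k_g * a)

def f(t, x):
    # realization of g(s) = k (s-a)/(s+b): dx/dt = -b x + (a+b) u, y = -k x + k u
    return -b * x + (a + b) * u_exact(t)

dt, T = 1e-3, 2.0
x = 0.0
for i in range(int(T / dt)):          # classical fourth-order Runge-Kutta
    t = i * dt
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt * k1 / 2)
    k3 = f(t + dt / 2, x + dt * k2 / 2)
    k4 = f(t + dt, x + dt * k3)
    x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
y_final = -k_g * x + k_g * u_exact(T)   # should sit at the setpoint y = 1
```

The growing e^{at} term in u is exactly canceled by the plant zero, so y stays at 1 even as u blows up; the next slides show what happens when u saturates.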

25 Simulating the system with the chosen input.

Figure 2: Output response with an exponential input for a system with RHP zero (u and y versus time).

Sure enough, that input moves the output right to its setpoint!

26 Will this work in practice? So we can indeed achieve perfect tracking in y(t) with this growing input u(t). Because the system g(s) = k(s - a)/(s + b) has a zero at s = a, and u(s) has a 1/(s - a) term, they cancel and we see nice behavior rather than exponential growth in y(t). This cancellation is the so-called input blocking property of the transfer function zero. But what happens if u(t) hits a constraint, u(t) <= u_max? Let's simulate the system again, but with an input constraint at u_max = 20. We achieve the following result.

27 Output behavior when the exponential input saturates. The input saturation destroys the nice output behavior.

Figure 3: Output response with input saturation for a system with RHP zero (u and y versus time).

28 So how can we reliably use optimal control? Control theory to the rescue The optimal controller needs to know to avoid exploiting this exponential input. But how do we tell it? We cannot let it start down the exponential path and find out only later that this was a bad idea. We have to eliminate this option by the structure of the optimal control problem. More importantly, even if we fix this issue, how do we know that we have eliminated every other path to instability for every other kind of (large, multivariable) system? Addressing this kind of issue is where control theory is a lot more useful than brute force simulation. JCI 2017 MPC short course 28 / 61

29 The infinite horizon LQ problem. Let us motivate the infinite horizon problem by showing a weakness of the finite horizon problem. Kalman (1960, p.113) pointed out in his classic paper that optimality does not ensure stability: "In the engineering literature it is often assumed (tacitly and incorrectly) that a system with an optimal control law is necessarily stable." Assume that we use as our control law the first feedback gain of the N-stage finite horizon problem, K_N(0),

u(k) = K_N(0) x(k)

Then the stability of the closed-loop system is determined by the eigenvalues of A + B K_N(0). We now construct an example that shows choosing Q > 0, R > 0, and N >= 1 does not ensure stability. In fact, we can find reasonable values of these parameters such that the controller destabilizes a stable system.

30 An optimal controller that is closed-loop unstable. Let

A = [4/3 -2/3; 1 0],  B = [1; 0],  C = [-2/3 1]

This discrete time system has a transfer function zero at z = 3/2, i.e., an unstable zero (or a right-half-plane zero in continuous time). We now construct an LQ controller that inverts this zero and hence produces an unstable system. We would like to choose Q = C'C so that y itself is penalized, but that Q is only semidefinite. We add a small positive definite piece to C'C so that Q is positive definite, choose a small positive R penalty (to encourage the controller to misbehave), and set N = 5,

Q = C'C + 0.001 I,  R = 0.001,  N = 5

31 Compute the control law. We now iterate the Riccati equation four times starting from Pi = P_f = Q and compute K_N(0) for N = 5. Then we compute the eigenvalues of A + B K_N(0) and obtain (1)

eig(A + B K_5(0)) = {1.307, 0.001}

Using this controller the closed-loop system evolves as x(k) = (A + B K_5(0))^k x_0. Since an eigenvalue of A + B K_5(0) is greater than unity, x(k) diverges as k goes to infinity. In other words, the closed-loop system is unstable.

(1) Please check this answer with Octave or MATLAB.
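The footnote invites checking this in Octave or MATLAB; the same check is easy in Python. The 0.001 weights below are a reconstruction of the values garbled in the transcription (Q = C'C + 0.001 I, R = 0.001), so treat the specific numbers as an assumption; the qualitative conclusion (short horizon unstable, long horizon stable) is the point:

```python
import numpy as np

A = np.array([[4/3, -2/3], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[-2/3, 1.0]])
Q = C.T @ C + 0.001 * np.eye(2)   # assumed small regularization so Q > 0
R = np.array([[0.001]])           # assumed small input penalty

def horizon_gain(N):
    """Iterate the Riccati equation N-1 times from Pi = Pf = Q, then form K_N(0)."""
    Pi = Q.copy()
    for _ in range(N - 1):
        S = B.T @ Pi @ B + R
        Pi = Q + A.T @ Pi @ A - A.T @ Pi @ B @ np.linalg.solve(S, B.T @ Pi @ A)
    return -np.linalg.solve(B.T @ Pi @ B + R, B.T @ Pi @ A)

K5 = horizon_gain(5)
rho5 = np.max(np.abs(np.linalg.eigvals(A + B @ K5)))    # > 1: unstable for N = 5
K_long = horizon_gain(200)                              # long horizon ~ infinite horizon
rho_inf = np.max(np.abs(np.linalg.eigvals(A + B @ K_long)))  # < 1: stabilizing
```

Under these assumed weights, the N = 5 spectral radius exceeds one while the converged (long-horizon) gain is stabilizing, matching the slide's {1.307, ...} versus {0.664, ...} eigenvalues.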

32 Closed-loop eigenvalues for different N; Riccati iteration.

Figure 4: Closed-loop eigenvalues of A + B K_N(0) for different horizon lengths N (o); open-loop eigenvalues of A (x); real versus imaginary parts.

33 Why is the closed-loop unstable? A finite horizon objective function may not give a stable controller! How is this possible?

Figure (slides 33-38, progressive build): the open-loop forecast from x at times k and k+1 heads toward the origin in the (x_1, x_2) plane, but the closed-loop trajectory does not follow the forecast.

39 What happens if we make the horizon N larger? If we continue to iterate the Riccati equation, which corresponds to increasing the horizon in the controller, we obtain for N = 7

eig(A + B K_7(0)) = {0.989, 0.001}

and the controller is stabilizing. If we continue iterating the Riccati equation, we converge to the following steady-state closed-loop eigenvalues

eig(A + B K_inf(0)) = {0.664, 0.001}

This controller corresponds to an infinite horizon control law. Notice that it is stabilizing and has a reasonable stability margin. Nominal stability is a guaranteed property of infinite horizon controllers, as we prove in the next section.

40 The forecast versus the closed-loop behavior for N = 5.

Figure 5: The forecast versus the closed-loop behavior for N = 5 (y(k) versus k).

41 The forecast versus the closed-loop behavior for N = 7.

Figure 6: The forecast versus the closed-loop behavior for N = 7 (y(k) versus k).

42 The forecast versus the closed-loop behavior for N = 20.

Figure 7: The forecast versus the closed-loop behavior for N = 20 (y(k) versus k).

43 The infinite horizon regulator. With this motivation, we are led to consider directly the infinite horizon case

V(x(0), u) = (1/2) Sum_{k=0}^{inf} [ x(k)'Q x(k) + u(k)'R u(k) ]   (9)

in which x(k) is the solution at time k of x+ = A x + B u if the initial state is x(0) and the input sequence is u. If we are interested in a continuous process (i.e., no final time), then the natural cost function is an infinite horizon cost. If we were truly interested in a batch process (i.e., the process does stop at k = N), then stability is not a relevant property, and we naturally would use the finite horizon LQ controller and the time-varying control law u(k) = K(k) x(k), k = 0, 1, ..., N.

44 When can we compute an infinite horizon cost? In considering the infinite horizon problem, we first restrict attention to systems for which there exist input sequences that give bounded cost. Consider the case A = I and B = 0, for example. Regardless of the choice of input sequence, (9) is unbounded for x(0) != 0. It seems clear that we are not going to stabilize an unstable system (A = I) without any input (B = 0). This is an example of an uncontrollable system. In order to state the sharpest results on stabilization, we require the concepts of controllability, stabilizability, observability, and detectability. We shall define these concepts subsequently.

45 Controllability. A system is controllable if, for any pair of states x, z in the state space, z can be reached in finite time from x (or x controlled to z) (Sontag, 1998, p.83). A linear discrete time system x+ = A x + B u is therefore controllable if there exists a finite time N and a sequence of inputs (u(0), u(1), ..., u(N-1)) that can transfer the system from any x to any z, in which

z = A^N x + [B  AB  ...  A^{N-1}B] [u(N-1); u(N-2); ...; u(0)]

46 Checking with N = n moves is sufficient. For an unconstrained linear system, if we cannot reach z in n moves, we cannot reach z in any number of moves. The question of controllability of a linear time-invariant system is therefore a question of existence of solutions to linear equations for an arbitrary right-hand side

[B  AB  ...  A^{n-1}B] [u(n-1); u(n-2); ...; u(0)] = z - A^n x

The matrix appearing in this equation is known as the controllability matrix C

C = [B  AB  ...  A^{n-1}B]   (10)

47 Controllability test. From the fundamental theorem of linear algebra, we know a solution exists for all right-hand sides if and only if the rows of the n x nm controllability matrix are linearly independent. (2) Therefore, the system (A, B) is controllable if and only if

rank(C) = n

(2) See Section A.4 in Appendix A of Rawlings and Mayne (2009) or Strang (1980, pp. 87-88) for a review of this result.
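The rank test translates directly to code. A sketch of (10) and the rank condition:

```python
import numpy as np

def controllability_matrix(A, B):
    """Build C = [B, AB, ..., A^{n-1}B], an n x nm matrix."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])   # next block is A times the previous one
    return np.hstack(blocks)

def is_controllable(A, B):
    """(A, B) is controllable iff rank(C) = n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]
```

For example, the slide's uncontrollable pair A = I, B = 0 fails the test, while a single integrator chain with input at the last state passes it.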

48 Convergence of the linear quadratic regulator. We now show that the infinite horizon regulator asymptotically stabilizes the origin for the closed-loop system. Define the infinite horizon objective function

V(x, u) = (1/2) Sum_{k=0}^{inf} [ x(k)'Q x(k) + u(k)'R u(k) ]

subject to

x+ = A x + B u,  x(0) = x

with Q, R > 0.

49 Convergence of the linear quadratic regulator. If (A, B) is controllable, the solution to the optimization problem

min_u V(x, u)

exists and is unique for all x. We denote the optimal solution by u^0(x), and the first input in the optimal sequence by u^0(0; x). The feedback control law kappa_inf(.) for this infinite horizon case is then defined as u = kappa_inf(x), in which kappa_inf(x) = u^0(0; x).

50 LQR stability. As stated in the following lemma, this infinite horizon linear quadratic regulator (LQR) is stabilizing.

Lemma 1 (LQR convergence). For (A, B) controllable, the infinite horizon LQR with Q, R > 0 gives a convergent closed-loop system

x+ = A x + B kappa_inf(x)

51 Controllability gives finite cost. The cost of the infinite horizon objective is bounded above for all x(0) because (A, B) is controllable. Controllability implies that there exists a sequence of n inputs (u(0), u(1), ..., u(n-1)) that transfers the state from any x(0) to x(n) = 0. A zero control sequence (u(n), u(n+1), ...) after k = n generates zero cost for all terms in V after k = n, and the objective function for this infinite control sequence is therefore finite. The cost function is strictly convex in u because R > 0, so the solution to the optimization is unique.

52 Proof of closed-loop convergence. If we consider the sequence of costs to go along the closed-loop trajectory, we have for V_k = V^0(x(k))

V_{k+1} = V_k - (1/2) ( x(k)'Q x(k) + u(k)'R u(k) )

in which V_k = V^0(x(k)) is the cost at time k for state value x(k) and u(k) = u^0(x(k)) is the optimal control for state x(k). The cost along the closed-loop trajectory is nonincreasing and bounded below (by zero). Therefore, the sequence (V_k) converges and

x(k)'Q x(k) -> 0,  u(k)'R u(k) -> 0  as k -> inf

Since Q, R > 0, we have

x(k) -> 0,  u(k) -> 0  as k -> inf

and closed-loop convergence is established.

53 Connection to the Riccati equation. In fact we know more. From the previous sections, we know the optimal solution is found by iterating the Riccati equation, and the optimal infinite horizon control law and optimal cost are given by

u^0(x) = K x,  V^0(x) = (1/2) x'Pi x

in which

K = -(B'Pi B + R)^{-1} B'Pi A
Pi = Q + A'Pi A - A'Pi B (B'Pi B + R)^{-1} B'Pi A   (11)

Proving Lemma 1 has shown also that for (A, B) controllable and Q, R > 0, a positive definite solution to the discrete algebraic Riccati equation (DARE), (11), exists and the eigenvalues of A + BK are asymptotically stable for the K corresponding to this solution (Bertsekas, 1987, pp. 58-64).
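One simple way to find the fixed point of (11) is to iterate it until it stops changing, as the slides suggest; production codes would instead call a dedicated DARE solver. A sketch:

```python
import numpy as np

def solve_dare_by_iteration(A, B, Q, R, tol=1e-12, maxiter=10000):
    """Iterate the Riccati map (11) to a fixed point Pi; also return the gain K.
    A sketch only; dedicated DARE solvers are preferable for large problems."""
    Pi = Q.copy()
    for _ in range(maxiter):
        S = B.T @ Pi @ B + R
        Pi_new = Q + A.T @ Pi @ A - A.T @ Pi @ B @ np.linalg.solve(S, B.T @ Pi @ A)
        if np.max(np.abs(Pi_new - Pi)) < tol:   # converged to steady state
            Pi = Pi_new
            break
        Pi = Pi_new
    K = -np.linalg.solve(B.T @ Pi @ B + R, B.T @ Pi @ A)
    return Pi, K
```

As a check, for the scalar unstable system a = 2, b = q = r = 1, the DARE reduces to pi^2 - 4 pi - 1 = 0, so pi = 2 + sqrt(5) and a + b k lands inside the unit circle.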

54 Extensions to constrained systems. This basic approach to establishing regulator stability will be generalized to handle constrained (and nonlinear) systems. The optimal cost is a Lyapunov function for the closed-loop system. We also can strengthen the stability for linear systems from asymptotic stability to exponential stability based on the form of the Lyapunov function.

55 Enlarging the class of systems. The LQR convergence result in Lemma 1 is the simplest to establish, but we can enlarge the class of systems and penalties for which closed-loop stability is guaranteed. The system restriction can be weakened from controllability to stabilizability, which is discussed in the exercises (Exercise 1.19 among them). The restriction on the allowable state penalty Q can be weakened from Q > 0 to Q >= 0 and (A, Q) detectable, which is also discussed in the exercises. The restriction R > 0 is retained to ensure uniqueness of the control law. In applications, if one cares little about the cost of the control, then R is chosen to be small, but positive definite.

56 Constrained regulation. Now we add the constraints to the control problem.

Control objective.
V(x(0), u) = (1/2) Sum_{k=0}^{N-1} [ x(k)'Q x(k) + u(k)'R u(k) ] + (1/2) x(N)'P_f x(N)

Constraints.
F x(k) + E u(k) <= e,  k = 0, 1, ..., N-1

Optimization.
min_u V(x, u) subject to the model and constraints.

57 Adding constraints is a big change! We cannot solve the constrained problem in closed form using dynamic programming. Now we have a quadratic objective subject to linear constraints, which is known as a quadratic program (Nocedal and Wright, 1999). We must compute the solution for industrial-sized multivariable problems online. We must compute all the u(k), k = 0, 1, ..., N-1 simultaneously. That means a bigger online computation. But we have fast, reliable online computing available; why not use it?

58 The geometry of quadratic programming.

min_u V(u) = (1/2) u'H u + h'u subject to D u <= d

Figure: contours V(u) = constant in the (u_1, u_2) plane; the unconstrained (least squares) minimizer u^0 = -H^{-1} h versus the quadratic program solution on the constraint set D u <= d.
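To make the geometry concrete, here is a deliberately minimal QP solver for the special case of box constraints, using projected gradient steps. This is a sketch for intuition only; it is not how industrial MPC codes solve QPs (they use active-set or interior-point methods), and the numbers in the example are hypothetical:

```python
import numpy as np

def box_qp(H, h, lo, hi, iters=500):
    """Minimize (1/2) u'Hu + h'u subject to lo <= u <= hi by projected gradient.
    Assumes H is symmetric positive definite."""
    L = np.linalg.eigvalsh(H).max()          # Lipschitz constant of the gradient
    u = np.clip(np.zeros_like(h), lo, hi)    # feasible starting point
    for _ in range(iters):
        u = np.clip(u - (H @ u + h) / L, lo, hi)   # gradient step, then project
    return u

# Illustrative problem: unconstrained minimizer u^0 = -H^{-1} h = (1, 2);
# the box [-1, 1] x [-1, 1] clips it to (1, 1).
H = np.diag([2.0, 2.0])
h = np.array([-2.0, -4.0])
u_star = box_qp(H, h, lo=-1.0, hi=1.0)
```

When no constraint is active, the iteration converges to the least squares solution -H^{-1} h; when the box binds, the solution sits on its boundary, exactly the picture on the slide.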

59 The simplest possible constrained control law. The optimal control law for the regulation problem with an active input constraint remains linear (affine), u = K x + b, but the control laws are valid over only local polyhedral regions. The simplest example would be a SISO, first-order system with input saturation. There are three regions.

Figure 8: The optimal control law kappa_N(x) versus x for x+ = x + u, N = 2, Q = R = 1, u in [-1, 1].

60 The constrained control law can be more complex.

Figure 9: Explicit control law regions in the (x_1, x_2) plane for a system with n = 2, m = 1, and N = 10 (Rawlings and Mayne, 2009, p. 500).

The number of regions can increase exponentially with system order n, number of inputs m, and horizon length N. For even modest state dimension and horizon length, we prefer to solve the QP online rather than compute and store this complex control law.

61 Stability of the closed-loop system. We denote the solution to the QP, which depends on the initial state x, as

u^0(x) = kappa_N(x)

The constrained controller is now nonlinear even though the model is linear. The closed-loop system is then

x+ = A x + B kappa_N(x)

and we would like to design the controller (choose N, Q, R, P_f) so that the closed-loop system is stable.

62 Terminal constraint solution. Adding a terminal constraint x(N) = 0 ensures stability. It may cause infeasibility (the system cannot reach 0 in only N moves), and the open-loop predictions are not equal to the closed-loop behavior.

Figure (slides 62-66, progressive build): closed-loop trajectory in the (x_1, x_2) plane with cost decrease V_{k+1} <= V_k - l(x_k, u_k) and V_{k+2} <= V_{k+1} - l(x_{k+1}, u_{k+1}).

67 Infinite horizon solution. The infinite horizon ensures stability. Open-loop predictions are equal to the closed-loop behavior (desirable!), but it is a challenge to implement.

Figure (slides 67-71, progressive build): closed-loop trajectory in the (x_1, x_2) plane with exact cost decrease V_{k+1} = V_k - l(x_k, u_k) and V_{k+2} = V_{k+1} - l(x_{k+1}, u_{k+1}).

72 Implementation challenges. The theory is now in reasonably good shape. In 1990, there was no theory addressing the control problem with constraints. Numerical QP solution methods are efficient and robust. In the process industries, applications with hundreds of states and inputs are running today. But challenges remain:
Scale. Methods must scale well with horizon N, number of states n, and number of inputs m. The size of applications continues to grow.
Ill-conditioning. Detect and repair nearly collinear constraints and a badly conditioned Hessian matrix.
Model accuracy. How to adjust the models automatically as conditions in the plant change.

73 Lab Exercises
Exercise 1.25
Exercise 2.3
Exercise 2.19
Exercise 6.2

74 Further reading
R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, New Jersey, 1957.
D. P. Bertsekas. Dynamic Programming. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1987.
R. E. Kalman. Contributions to the theory of optimal control. Bull. Soc. Math. Mex., 5:102-119, 1960.
J. Nocedal and S. J. Wright. Numerical Optimization. Springer-Verlag, New York, 1999.
J. B. Rawlings and D. Q. Mayne. Model Predictive Control: Theory and Design. Nob Hill Publishing, Madison, WI, 2009.
E. D. Sontag. Mathematical Control Theory. Springer-Verlag, New York, second edition, 1998.
G. Strang. Linear Algebra and its Applications. Academic Press, New York, second edition, 1980.

75 1.6 Exercises (p. 69)

Exercise 1.22: Steady-state Riccati equation
Generate a random A and B for a system model for whatever n (>= 3) and m (>= 3) you wish. Choose a positive semidefinite Q and positive definite R of the appropriate sizes.
(a) Iterate the DARE by hand with Octave or MATLAB until Pi stops changing. Save this result. Now call the MATLAB or Octave function to solve the steady-state DARE. Do the solutions agree? Where in the complex plane are the eigenvalues of A + BK? Increase the size of Q relative to R. Where do the eigenvalues move?
(b) Repeat for a singular A matrix. What happens to the two solution techniques?
(c) Repeat for an unstable A matrix.

Exercise 1.23: Positive definite Riccati iteration
If Pi(k), Q, R > 0 in (1.11), show that Pi(k-1) > 0. Hint: apply (1.55) to the term (B'Pi(k)B + R)^{-1}.

Exercise 1.24: Existence and uniqueness of the solution to constrained least squares
Consider the least squares problem subject to a linear constraint

min_x (1/2) x'Q x subject to A x = b

in which x in R^n, b in R^p, Q in R^{n x n}, Q >= 0, A in R^{p x n}. Show that this problem has a solution for every b and the solution is unique if and only if

rank(A) = p,  rank([Q; A]) = n

Exercise 1.25: Rate-of-change penalty
Consider the generalized LQR problem with the cross term between x(k) and u(k)

V(x(0), u) = (1/2) Sum_{k=0}^{N-1} ( x(k)'Q x(k) + u(k)'R u(k) + 2 x(k)'M u(k) ) + (1/2) x(N)'P_f x(N)

(a) Solve this problem with backward DP and write out the Riccati iteration and feedback gain.
(b) Control engineers often wish to tune a regulator by penalizing the rate of change of the input rather than the absolute size of the input. Consider the additional positive definite penalty matrix S and the modified objective function

V(x(0), u) = (1/2) Sum_{k=0}^{N-1} ( x(k)'Q x(k) + u(k)'R u(k) + du(k)'S du(k) ) + (1/2) x(N)'P_f x(N)

in which du(k) = u(k) - u(k-1). Show that you can augment the state to include u(k-1) via

x~(k) = (x(k), u(k-1))

76 Getting Started with Model Predictive Control (p. 70)

and reduce this new problem to the standard LQR with the cross term. What are A~, B~, Q~, R~, and M~ for the augmented problem (Rao and Rawlings, 1999)?

Exercise 1.26: Existence, uniqueness and stability with the cross term
Consider the linear quadratic problem with system

x+ = A x + B u   (1.59)

and infinite horizon cost function

V(x(0), u) = (1/2) Sum_{k=0}^{inf} ( x(k)'Q x(k) + u(k)'R u(k) )

The existence, uniqueness and stability conditions for this problem are: (A, B) stabilizable, Q >= 0, (A, Q) detectable, and R > 0. Consider the modified objective function with the cross term

V = (1/2) Sum_{k=0}^{inf} ( x(k)'Q x(k) + u(k)'R u(k) + 2 x(k)'M u(k) )   (1.60)

(a) Consider reparameterizing the input as

v(k) = u(k) + T x(k)   (1.61)

Choose T such that the cost function in x and v does not have a cross term, and express the existence, uniqueness and stability conditions for the transformed system. Goodwin and Sin (1984, p. 251) discuss this procedure in the state estimation problem with nonzero covariance between state and output measurement noises.
(b) Translate and simplify these to obtain the existence, uniqueness and stability conditions for the original system with the cross term.

Exercise 1.27: Forecasting and variance increase or decrease
Given positive definite initial state variance P(0) and process disturbance variance Q, the variance after forecasting one sample time was shown to be

P(1) = A P(0) A' + Q

(a) If A is stable, is it true that A P(0) A' < P(0)? If so, prove it. If not, provide a counterexample.
(b) If A is unstable, is it true that A P(0) A' > P(0)? If so, prove it. If not, provide a counterexample.
(c) If the magnitudes of all the eigenvalues of A are greater than unity, is it true that A P(0) A' > P(0)? If so, prove it. If not, provide a counterexample.

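For Exercise 1.22, the by-hand Riccati iteration and the packaged DARE solver are straightforward to compare. A sketch in Python, with scipy standing in for the Octave/MATLAB call; the seed and the sizes n = 3, m = 2 are arbitrary choices of mine:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
Q = np.eye(n)  # positive semidefinite (here definite)
R = np.eye(m)  # positive definite

# Iterate the DARE until Pi stops changing.
Pi = Q.copy()
for _ in range(10000):
    K = np.linalg.solve(B.T @ Pi @ B + R, B.T @ Pi @ A)
    Pi_next = Q + A.T @ Pi @ (A - B @ K)  # DARE right-hand side
    if np.linalg.norm(Pi_next - Pi) < 1e-11:
        Pi = Pi_next
        break
    Pi = Pi_next

# Packaged steady-state solution for comparison.
Pi_ss = solve_discrete_are(A, B, Q, R)

# Closed-loop eigenvalues of A + BK should lie inside the unit circle.
K = -np.linalg.solve(B.T @ Pi @ B + R, B.T @ Pi @ A)
cl_eigs = np.linalg.eigvals(A + B @ K)
print(np.max(np.abs(cl_eigs)))
```

Parts (b) and (c) amount to rerunning the same sketch with a singular or unstable A substituted in.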
Model Predictive Control Regulation

2.12 Exercises

Exercise 2.1: Discontinuous MPC

Compute, for Example 2.8, U_3(x), V_3^0(x) and κ_3(x) at a few points on the unit circle.

Exercise 2.2: Boundedness of discrete time model

Complete the proof of Proposition 2.21 by showing that f(·) and f_Z^{−1}(·) are bounded on bounded sets.

Exercise 2.3: Destabilization with state constraints

Consider a state feedback regulation problem with the origin as the setpoint (Muske and Rawlings, 1993). Let the system be

    A = [ 4/3  −2/3 ]        B = [ 1 ]        C = [ −2/3  1 ]
        [  1     0  ]            [ 0 ]

and the controller objective function tuning matrices be

    Q = I        R = I        N = 5

(a) Plot the unconstrained regulator performance starting from initial condition x(0) = (3, 3).

(b) Add the output constraint |y(k)| ≤ 0.5. Plot the response of the constrained regulator (both input and output). Is this regulator stabilizing? Can you modify the tuning parameters Q, R to affect stability as in Section 1.3.4?

(c) Change the output constraint to |y(k)| ≤ 1 + ε, ε > 0. Plot the closed-loop response for a variety of ε. Are any of these regulators destabilizing?

(d) Set the output constraint back to |y(k)| ≤ 0.5 and add the terminal constraint x(N) = 0. What is the solution to the regulator problem in this case? Increase the horizon N. Does this problem eventually go away?

Exercise 2.4: Computing the projection of Z onto X_N

Given a polytope

    Z := {(x, u) ∈ R^n × R^m | Gx + Hu ≤ ψ}

write an Octave or MATLAB program to determine X, the projection of Z onto R^n

    X = {x ∈ R^n | there exists u ∈ R^m such that (x, u) ∈ Z}

Use Algorithms 3.1 and 3.2 in Keerthi and Gilbert (1987). To check your program, consider a system x⁺ = Ax + Bu subject to the constraints X = {x | |x_1| ≤ 2} and U = {u | −1 ≤ u ≤ 1}. Consider the MPC problem with N = 2, u = (u(0), u(1)), and the set Z given by

    Z = {(x, u) | x, φ(1; x, u), φ(2; x, u) ∈ X and u(0), u(1) ∈ U}
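A quick way to sanity-check a projection code for Exercise 2.4, before implementing the Keerthi-Gilbert algorithms, is to test membership in X pointwise by posing an LP feasibility problem in u. The A and B below are stand-ins of my own choosing, and the horizon-2 constraint set mirrors the Z above:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # stand-in system matrices
B = np.array([[0.0],
              [1.0]])

def in_X(x):
    """Is there u = (u(0), u(1)) with x, phi(1), phi(2) in X and u in U?

    X = {x : |x_1| <= 2}, U = {u : -1 <= u <= 1}; posed as an LP with
    zero objective over u, so success means the point is in the projection.
    """
    e1 = np.array([1.0, 0.0])
    rows, rhs = [], []
    # |x_1| <= 2 : no u dependence (zero rows, constant right-hand sides)
    rows += [[0.0, 0.0], [0.0, 0.0]]
    rhs += [2.0 - x[0], 2.0 + x[0]]
    # |phi(1)_1| <= 2, with phi(1) = A x + B u(0)
    a1 = (e1 @ A @ x).item(); b1 = (e1 @ B).item()
    rows += [[b1, 0.0], [-b1, 0.0]]
    rhs += [2.0 - a1, 2.0 + a1]
    # |phi(2)_1| <= 2, with phi(2) = A^2 x + A B u(0) + B u(1)
    a2 = (e1 @ A @ A @ x).item()
    c0 = (e1 @ A @ B).item(); c1 = (e1 @ B).item()
    rows += [[c0, c1], [-c0, -c1]]
    rhs += [2.0 - a2, 2.0 + a2]
    res = linprog(c=[0.0, 0.0], A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(-1.0, 1.0), (-1.0, 1.0)])
    return res.status == 0

print(in_X(np.array([0.0, 0.0])), in_X(np.array([10.0, 0.0])))
```

The origin should belong to X (take u = 0), while any x violating |x_1| ≤ 2 cannot, whatever u is; those two checks catch most sign and indexing mistakes in a projection implementation.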

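For part (a) of Exercise 2.3, the unconstrained regulator can be simulated directly. The sketch below uses the infinite-horizon gain (iterating the Riccati equation to convergence) rather than the exercise's finite horizon N = 5, purely to keep the code short; treat it as a starting point, not the exercise's exact controller:

```python
import numpy as np

A = np.array([[4/3, -2/3],
              [1.0,  0.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[-2/3, 1.0]])
Q, R = np.eye(2), np.eye(1)

# Iterate the Riccati equation to convergence for the steady-state gain.
Pi = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(B.T @ Pi @ B + R, B.T @ Pi @ A)
    Pi = Q + A.T @ Pi @ (A - B @ K)
K = -np.linalg.solve(B.T @ Pi @ B + R, B.T @ Pi @ A)  # u = K x

# Closed-loop simulation from x(0) = (3, 3).
x = np.array([3.0, 3.0])
ys = []
for _ in range(60):
    ys.append((C @ x).item())
    x = (A + B @ K) @ x
print(ys[0], np.linalg.norm(x))
```

Note that y(0) = 1 here, which is why the output constraints in parts (b) and (c) bite immediately; parts (b)-(d) then require adding the constraint handling to the regulator.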
Lemma 2.46 (Evolution in a compact set). Suppose x(0) = x lies in the set X_N. Then the state trajectory {x(i)}, where x(i) = φ_f(i; x) for each i, of the controlled system x⁺ = f(x) evolves in a compact set.

Exercise 2.19: MPC and multivariable, constrained systems

Consider a two-input, two-output process with 2×2 transfer function matrix G(s), whose entries are first-order transfer functions of the form 2/(10s + 1) and 2/(s + 1).

(a) Consider a unit setpoint change in the first output. Choose a reasonable sample time Δ. Simulate the behavior of an offset-free discrete time MPC controller with Q = I, S = I and large N.

(b) Add the constraint −1 ≤ u(k) ≤ 1 and simulate the response.

(c) Add the constraint −0.1 ≤ Δu/Δ ≤ 0.1 and simulate the response.

(d) Add significant noise to both output measurements (make the standard deviation in each output about 0.1). Retune the MPC controller to obtain good performance. Describe which controller parameters you changed and why.

Exercise 2.20: LQR versus LAR

We are now all experts on the linear quadratic regulator (LQR), which employs a linear model and quadratic performance measure. Let's consider the case of a linear model but absolute value performance measure, which we call the linear absolute regulator (LAR)⁹

    min_u Σ_{k=0}^{N−1} ( q|x(k)| + r|u(k)| ) + q(N)|x(N)|

For simplicity consider the following one-step controller, in which u and x are scalars

    min_{u(0)} V(x(0), u(0)) = |x(1)| + |u(0)|

subject to

    x(1) = Ax(0) + Bu(0)

Draw a sketch of x(1) versus u(0) (recall x(0) is a known parameter) and show the x-axis and y-axis intercepts on your plot. Now draw a sketch of V(x(0), u(0)) versus u(0) in order to see what kind of optimization problem you are solving. You may want to plot both terms in the objective function individually and then add them together to make your V plot. Label on your plot the places where the cost function V suffers discontinuities in slope. Where is the solution in your sketch? Does it exist for all A, B, x(0)? Is it unique for all A, B, x(0)?
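Before sketching V by hand, it can help to evaluate the one-step LAR cost numerically. A sketch with arbitrary scalar values A = 1.5, B = 2, x(0) = 1 (my choices, purely for illustration); the slope discontinuities of V sit at u(0) = 0 and at the u(0) that zeros x(1):

```python
import numpy as np

A, B, x0 = 1.5, 2.0, 1.0  # arbitrary scalar test values

def V(u):
    """One-step LAR cost |x(1)| + |u(0)| with x(1) = A x0 + B u."""
    return abs(A * x0 + B * u) + abs(u)

# The two kink (slope-discontinuity) candidates.
kinks = [0.0, -A * x0 / B]  # u = 0, and the u that makes x(1) = 0

u_grid = np.linspace(-3.0, 3.0, 60001)
vals = np.array([V(u) for u in u_grid])
u_best = u_grid[vals.argmin()]
print(u_best, vals.min())  # the minimum sits at one of the kinks
```

For these values the minimizer is u(0) = −Ax(0)/B = −0.75 with cost 0.75: since |B| > 1 here, driving x(1) to zero is cheaper than leaving the state alone. Changing the values of A, B, x(0) and rerunning illustrates the existence and uniqueness questions in the exercise.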
The motivation for this problem is to change the quadratic program (QP) of the LQR into a linear program (LP) in the LAR, because the computational burden for LPs is often smaller than for QPs. The absolute value terms can be converted into linear terms with a standard trick involving slack variables.

⁹ Laplace would love us for making this choice, but Gauss would not be happy.
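For the one-step scalar problem, the slack-variable trick reads: introduce s1 ≥ |x(1)| and s2 ≥ |u(0)| and minimize s1 + s2 subject to four linear inequalities. A sketch with scipy's LP solver, using arbitrary scalar values A = 1.5, B = 2, x(0) = 1 (my choices, not from the text):

```python
import numpy as np
from scipy.optimize import linprog

A, B, x0 = 1.5, 2.0, 1.0  # arbitrary scalar test values

# Decision vector z = (u, s1, s2); minimize s1 + s2.
c = np.array([0.0, 1.0, 1.0])

# s1 >= |A x0 + B u| and s2 >= |u|, written as A_ub z <= b_ub.
A_ub = np.array([[ B,  -1.0,  0.0],   #  (A x0 + B u) - s1 <= 0
                 [-B,  -1.0,  0.0],   # -(A x0 + B u) - s1 <= 0
                 [ 1.0, 0.0, -1.0],   #  u - s2 <= 0
                 [-1.0, 0.0, -1.0]])  # -u - s2 <= 0
b_ub = np.array([-A * x0, A * x0, 0.0, 0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (0, None), (0, None)])
print(res.x[0], res.fun)  # the LP recovers the LAR minimizer and cost
```

At the optimum the slacks are tight, s1 = |x(1)| and s2 = |u(0)|, so the LP value equals the LAR cost; the same construction scales to the full N-step problem with one slack per absolute value term.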

Distributed Model Predictive Control

Exercise 6.2: LQ as least squares

Consider the standard LQ problem

    min_u V = (1/2) Σ_{k=0}^{N−1} ( x(k)'Qx(k) + u(k)'Ru(k) ) + (1/2) x(N)'P_f x(N)

subject to

    x⁺ = Ax + Bu

(a) Set up the dense Hessian least squares problem for the LQ problem with a horizon of three, N = 3. Eliminate the state equations and write out the objective function in terms of only the decision variables u(0), u(1), u(2).

(b) What are the conditions for an optimum, i.e., what linear algebra problem do you solve to compute u(0), u(1), u(2)?

Exercise 6.3: Lagrange multiplier method

Consider the general least squares problem

    min_x V(x) = (1/2) x'Hx + const   subject to   Dx = d

(a) What is the Lagrangian L for this problem? What is the dimension of the Lagrange multiplier vector λ?

(b) What are necessary and sufficient conditions for a solution to the optimization problem?

(c) Apply this approach to the LQ problem of Exercise 6.2 using the equality constraints to represent the model equations. What are H, D, d for the LQ problem?

(d) Write out the linear algebra problem to be solved for the optimum.

(e) Contrast the two different linear algebra problems in these two approaches. Which do you want to use when N is large and why?

Exercise 6.4: Reparameterizing an unstable system

Consider again the LQR problem with cross term

    min_u V = (1/2) Σ_{k=0}^{N−1} ( x(k)'Qx(k) + u(k)'Ru(k) + 2x(k)'Mu(k) ) + (1/2) x(N)'P_f x(N)

subject to

    x⁺ = Ax + Bu

and the three approaches of Exercise 6.1:

1. Eliminating the state and solving the problem for the decision variable u = (u(0), u(1), ..., u(N − 1))
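Exercises 6.2 and 6.3 describe two routes to the same optimal input sequence, and checking them against each other is straightforward. A sketch for N = 3 with a small stand-in system (the A, B, Q, R, P_f, x(0) values are my own test case):

```python
import numpy as np

# Stand-in problem data.
A = np.array([[1.0, 0.2],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
n, m, N = 2, 1, 3
Q = np.eye(n); R = np.eye(m); Pf = np.eye(n)
x0 = np.array([1.0, -1.0])

# Prediction matrices: X = Gamma x0 + Omega U, with X = (x(1), x(2), x(3)).
Gamma = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
Omega = np.zeros((N * n, N * m))
for i in range(1, N + 1):      # block row for x(i)
    for j in range(i):         # contribution of u(j)
        Omega[(i - 1) * n:i * n, j * m:(j + 1) * m] = \
            np.linalg.matrix_power(A, i - 1 - j) @ B

Qbar = np.kron(np.eye(N), Q); Qbar[-n:, -n:] = Pf  # diag(Q, Q, Pf)
Rbar = np.kron(np.eye(N), R)

# Exercise 6.2: eliminate the states; optimality condition H U = -g.
H = Omega.T @ Qbar @ Omega + Rbar
g = Omega.T @ Qbar @ Gamma @ x0
U_dense = np.linalg.solve(H, -g)

# Exercise 6.3: keep z = (U, X), enforce X - Omega U = Gamma x0 with
# Lagrange multipliers, and solve the (indefinite) KKT system.
Ht = np.block([[Rbar, np.zeros((N * m, N * n))],
               [np.zeros((N * n, N * m)), Qbar]])
D = np.hstack([-Omega, np.eye(N * n)])
KKT = np.block([[Ht, D.T],
                [D, np.zeros((N * n, N * n))]])
rhs = np.concatenate([np.zeros(N * m + N * n), Gamma @ x0])
z = np.linalg.solve(KKT, rhs)
U_kkt = z[:N * m]
print(np.allclose(U_dense, U_kkt))
```

The dense route builds a small but fully dense Hessian, while the KKT route keeps a larger, sparse, banded system; that trade-off is exactly the point of part (e).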


More information

Course on Model Predictive Control Part III Stability and robustness

Course on Model Predictive Control Part III Stability and robustness Course on Model Predictive Control Part III Stability and robustness Gabriele Pannocchia Department of Chemical Engineering, University of Pisa, Italy Email: g.pannocchia@diccism.unipi.it Facoltà di Ingegneria,

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

Regional Solution of Constrained LQ Optimal Control

Regional Solution of Constrained LQ Optimal Control Regional Solution of Constrained LQ Optimal Control José DeDoná September 2004 Outline 1 Recap on the Solution for N = 2 2 Regional Explicit Solution Comparison with the Maximal Output Admissible Set 3

More information

Subject: Optimal Control Assignment-1 (Related to Lecture notes 1-10)

Subject: Optimal Control Assignment-1 (Related to Lecture notes 1-10) Subject: Optimal Control Assignment- (Related to Lecture notes -). Design a oil mug, shown in fig., to hold as much oil possible. The height and radius of the mug should not be more than 6cm. The mug must

More information

1 Steady State Error (30 pts)

1 Steady State Error (30 pts) Professor Fearing EECS C28/ME C34 Problem Set Fall 2 Steady State Error (3 pts) Given the following continuous time (CT) system ] ẋ = A x + B u = x + 2 7 ] u(t), y = ] x () a) Given error e(t) = r(t) y(t)

More information

CDS 110b: Lecture 2-1 Linear Quadratic Regulators

CDS 110b: Lecture 2-1 Linear Quadratic Regulators CDS 110b: Lecture 2-1 Linear Quadratic Regulators Richard M. Murray 11 January 2006 Goals: Derive the linear quadratic regulator and demonstrate its use Reading: Friedland, Chapter 9 (different derivation,

More information

Quis custodiet ipsos custodes?

Quis custodiet ipsos custodes? Quis custodiet ipsos custodes? James B. Rawlings, Megan Zagrobelny, Luo Ji Dept. of Chemical and Biological Engineering, Univ. of Wisconsin-Madison, WI, USA IFAC Conference on Nonlinear Model Predictive

More information

CONTROL DESIGN FOR SET POINT TRACKING

CONTROL DESIGN FOR SET POINT TRACKING Chapter 5 CONTROL DESIGN FOR SET POINT TRACKING In this chapter, we extend the pole placement, observer-based output feedback design to solve tracking problems. By tracking we mean that the output is commanded

More information

EE C128 / ME C134 Final Exam Fall 2014

EE C128 / ME C134 Final Exam Fall 2014 EE C128 / ME C134 Final Exam Fall 2014 December 19, 2014 Your PRINTED FULL NAME Your STUDENT ID NUMBER Number of additional sheets 1. No computers, no tablets, no connected device (phone etc.) 2. Pocket

More information

A FAST, EASILY TUNED, SISO, MODEL PREDICTIVE CONTROLLER. Gabriele Pannocchia,1 Nabil Laachi James B. Rawlings

A FAST, EASILY TUNED, SISO, MODEL PREDICTIVE CONTROLLER. Gabriele Pannocchia,1 Nabil Laachi James B. Rawlings A FAST, EASILY TUNED, SISO, MODEL PREDICTIVE CONTROLLER Gabriele Pannocchia, Nabil Laachi James B. Rawlings Department of Chemical Engineering Univ. of Pisa Via Diotisalvi 2, 5626 Pisa (Italy) Department

More information

Improved MPC Design based on Saturating Control Laws

Improved MPC Design based on Saturating Control Laws Improved MPC Design based on Saturating Control Laws D.Limon 1, J.M.Gomes da Silva Jr. 2, T.Alamo 1 and E.F.Camacho 1 1. Dpto. de Ingenieria de Sistemas y Automática. Universidad de Sevilla, Camino de

More information

Fall 線性系統 Linear Systems. Chapter 08 State Feedback & State Estimators (SISO) Feng-Li Lian. NTU-EE Sep07 Jan08

Fall 線性系統 Linear Systems. Chapter 08 State Feedback & State Estimators (SISO) Feng-Li Lian. NTU-EE Sep07 Jan08 Fall 2007 線性系統 Linear Systems Chapter 08 State Feedback & State Estimators (SISO) Feng-Li Lian NTU-EE Sep07 Jan08 Materials used in these lecture notes are adopted from Linear System Theory & Design, 3rd.

More information

Further results on Robust MPC using Linear Matrix Inequalities

Further results on Robust MPC using Linear Matrix Inequalities Further results on Robust MPC using Linear Matrix Inequalities M. Lazar, W.P.M.H. Heemels, D. Muñoz de la Peña, T. Alamo Eindhoven Univ. of Technology, P.O. Box 513, 5600 MB, Eindhoven, The Netherlands,

More information

Module 02 CPS Background: Linear Systems Preliminaries

Module 02 CPS Background: Linear Systems Preliminaries Module 02 CPS Background: Linear Systems Preliminaries Ahmad F. Taha EE 5243: Introduction to Cyber-Physical Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ taha/index.html August

More information

Penalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way.

Penalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way. AMSC 607 / CMSC 878o Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 3: Penalty and Barrier Methods Dianne P. O Leary c 2008 Reference: N&S Chapter 16 Penalty and Barrier

More information

OPTIMAL CONTROL WITH DISTURBANCE ESTIMATION

OPTIMAL CONTROL WITH DISTURBANCE ESTIMATION OPTIMAL CONTROL WITH DISTURBANCE ESTIMATION František Dušek, Daniel Honc, Rahul Sharma K. Department of Process control Faculty of Electrical Engineering and Informatics, University of Pardubice, Czech

More information

5.5 Quadratic programming

5.5 Quadratic programming 5.5 Quadratic programming Minimize a quadratic function subject to linear constraints: 1 min x t Qx + c t x 2 s.t. a t i x b i i I (P a t i x = b i i E x R n, where Q is an n n matrix, I and E are the

More information

Stability of Parameter Adaptation Algorithms. Big picture

Stability of Parameter Adaptation Algorithms. Big picture ME5895, UConn, Fall 215 Prof. Xu Chen Big picture For ˆθ (k + 1) = ˆθ (k) + [correction term] we haven t talked about whether ˆθ(k) will converge to the true value θ if k. We haven t even talked about

More information

MODEL PREDICTIVE CONTROL FUNDAMENTALS

MODEL PREDICTIVE CONTROL FUNDAMENTALS Nigerian Journal of Technology (NIJOTECH) Vol 31, No 2, July, 2012, pp 139 148 Copyright 2012 Faculty of Engineering, University of Nigeria ISSN 1115-8443 MODEL PREDICTIVE CONTROL FUNDAMENTALS PE Orukpe

More information