Nonlinear Model Predictive Control Tools (NMPC Tools)

Rishi Amrit (amrit@wisc.edu), James B. Rawlings (rawlings@engr.wisc.edu)

April 5, 2008

1 Formulation

We consider a control system composed of three parts [2]: the estimator, the target calculator, and the regulator.

[Figure: block diagram of the control system. The process $x_{k+1} = Ax_k + Bu_k + Gw_k$, driven by the input $u_k$, unmeasured disturbances, and sensor noise, is measured by the sensor $y_k = Cx_k + v_k$; the estimator produces the state estimate $\hat{x}_k$ and integrating disturbance $p_k$ from the measurements $y_k$; the target calculation converts the setpoints $y_{set}$, $u_{set}$ into the targets $x_t$, $u_t$ used by the regulator, which computes $u_k$.]

The first part, the state estimator, determines an approximate current state of the system, knowing the history of injected inputs and measured outputs.

The state estimator is also used to estimate the integrating disturbance state. The second part, the steady-state target calculation, adjusts the state and input targets to account for the integrated disturbance. The final part, the MPC regulator, is responsible for finding the best control profile given steady-state targets for the states and inputs. In some formulations, the regulator may instead be used to track a dynamic output trajectory.

1.1 Disturbance Model

We wish to provide the user with flexibility in defining the disturbance model. To that end, the disturbance model is defined by specifying two matrices, $B_d$ and $C_d$. The augmented model becomes

\[
\underbrace{\begin{bmatrix} x \\ p \end{bmatrix}_{k+1}}_{\hat{x}_{k+1}}
= \begin{bmatrix} A & B_d \\ 0 & I \end{bmatrix}
\underbrace{\begin{bmatrix} x \\ p \end{bmatrix}_{k}}_{\hat{x}_{k}}
+ \begin{bmatrix} B \\ 0 \end{bmatrix} u_k
+ \begin{bmatrix} G & 0 \\ 0 & \Phi \end{bmatrix}
\underbrace{\begin{bmatrix} w \\ \xi \end{bmatrix}_{k}}_{\hat{w}_{k}}
\]
\[
y_k = \begin{bmatrix} C & C_d \end{bmatrix} \begin{bmatrix} x \\ p \end{bmatrix}_{k} + v_k
\]

in which the index $k$ represents the current sampling time, $x_k$ is the state of the system, $u_k$ is the input, $w_k$ is the stochastic noise variable, and $t_k$ is the time. We assume that $w_k$ is normally distributed with zero mean. The term $B_d p_k$ is the integrated input disturbance. In cases of plant/model mismatch or nonzero-mean disturbances, this term is nonzero; however, in the nominal case in which the plant and model are identical, this term vanishes. $y_k$ is the measurement at time $t_k$. $\xi_k$ is a normally distributed zero-mean vector. $v_k$ is a stochastic Gaussian zero-mean noise term. $\hat{x}_k$ and $\hat{w}_k$ are the augmented state and disturbance vectors. Hence the model in terms of the augmented state and disturbance vectors is

\[
\hat{x}_{k+1} = \hat{A}\hat{x}_k + \hat{B}u_k + \hat{G}\hat{w}_k, \qquad
y_k = \hat{C}\hat{x}_k + v_k
\]
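Forming the augmented matrices is purely a matter of stacking blocks. The following numpy sketch illustrates this for hypothetical plant matrices and a pure-integrator disturbance ($\Phi = I$); it is an illustration only, not the toolbox's own code:

```python
import numpy as np

# Hypothetical plant matrices (2 states, 1 input, 1 output) and disturbance model.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
G = np.eye(2)
Bd = np.zeros((2, 1))   # example choice: output disturbance model (B_d = 0 ...)
Cd = np.eye(1)          # ... and C_d = I)
Phi = np.eye(1)         # integrating disturbance: p_{k+1} = p_k + xi_k

n, m = B.shape
nd = Bd.shape[1]

# Augmented model \hat{x} = [x; p]:
#   \hat{x}_{k+1} = A_hat \hat{x}_k + B_hat u_k + G_hat \hat{w}_k,   y_k = C_hat \hat{x}_k + v_k
A_hat = np.block([[A,                  Bd],
                  [np.zeros((nd, n)),  Phi]])
B_hat = np.vstack([B, np.zeros((nd, m))])
G_hat = np.block([[G,                            np.zeros((n, nd))],
                  [np.zeros((nd, G.shape[1])),   np.eye(nd)]])
C_hat = np.hstack([C, Cd])
```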

2 Moving Horizon Estimation

Modules: lmhe, nmhe

Linear MHE (Module: lmhe)

lmhe solves the following MHE problem. The model is linear and the user provides the model matrices.

\[
\min_{w_{\mathrm{aug}},\, v,\, x_{\mathrm{aug}}} \;
\sum_{k=0}^{N-1} \Big( \tfrac{1}{2}\big( w_k' Q_k w_k + v_k' R_k v_k + x_k' \Theta_k x_k + 2 x_k' L_k w_k + \epsilon_k' Z_k \epsilon_k + \xi_k' \Psi_k \xi_k \big)
+ q_k' w_k + r_k' v_k + \theta_k' x_k + z_k' \epsilon_k + \psi_k' \xi_k \Big)
+ \tfrac{1}{2} \rho' P_N \rho + p_N' \rho
\]

subject to

\[
\hat{x}_0 = \bar{x}_0 + \rho
\]
\[
\underbrace{\begin{bmatrix} x \\ p \end{bmatrix}_{k+1}}_{\hat{x}_{k+1}}
= \begin{bmatrix} A & B_d \\ 0 & I \end{bmatrix}
\underbrace{\begin{bmatrix} x \\ p \end{bmatrix}_{k}}_{\hat{x}_{k}}
+ \begin{bmatrix} B \\ 0 \end{bmatrix} u_k
+ \begin{bmatrix} G & 0 \\ 0 & I \end{bmatrix}
\underbrace{\begin{bmatrix} w \\ \xi \end{bmatrix}_{k}}_{\hat{w}_{k}}
\]
\[
y_k = \begin{bmatrix} C & C_d \end{bmatrix} \begin{bmatrix} x \\ p \end{bmatrix}_{k} + v_k
\]
\[
S_k w_k + E_{fk} \hat{x}_k \le s_k
\]
\[
H_k \hat{x}_k - \epsilon_k \le h_k, \qquad \epsilon_k \ge 0
\]
\[
\Gamma_k v_k - \xi_k \le \gamma_k, \qquad \xi_k \ge 0
\]
\[
S_\rho\, \rho \le s_\rho
\]

in which $\bar{x}_0$ is the a priori estimate of the initial state.

Optional constraints on the state and state noise (specified by the matrices $S_k$, $E_{fk}$, and $s_k$) are hard constraints. The state constraints (specified by $H_k$ and $h_k$) are softened. The module returns the estimated states, the optimal $\rho$, and the predicted state and measurement noise vectors.
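To make the structure of the lmhe problem concrete, the following cvxpy sketch sets up a reduced version with only the $Q$, $R$, and prior terms and no inequality constraints or slacks. The function name and arguments are placeholders, not the module's calling interface:

```python
import cvxpy as cp
import numpy as np

def linear_mhe(A, B, C, G, Q, R, P_N, xbar0, u_seq, y_seq):
    """Reduced linear MHE: min sum 1/2(w'Qw + v'Rv) + 1/2 rho'P_N rho."""
    N = len(y_seq)
    n = A.shape[0]
    x = cp.Variable((N + 1, n))       # state trajectory over the horizon
    w = cp.Variable((N, G.shape[1]))  # process noise sequence
    rho = cp.Variable(n)              # deviation from the prior estimate

    cost = 0.5 * cp.quad_form(rho, P_N)
    cons = [x[0] == xbar0 + rho]
    for k in range(N):
        v_k = y_seq[k] - C @ x[k]     # measurement residual plays the role of v_k
        cost += 0.5 * (cp.quad_form(w[k], Q) + cp.quad_form(v_k, R))
        cons += [x[k + 1] == A @ x[k] + B @ u_seq[k] + G @ w[k]]

    cp.Problem(cp.Minimize(cost), cons).solve()
    return x.value, w.value, rho.value
```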

2.1 Nonlinear MHE (Module: nmhe)

nmhe solves the following nonlinear MHE problem.

\[
\min_{w,\, v,\, x} \;
\sum_{k=0}^{N-1} \tfrac{1}{2}\big( w_k' Q_k w_k + v_k' R_k v_k \big)
+ \tfrac{1}{2} \rho' P_N \rho + p_N' \rho
\]

subject to

\[
x_0 = \bar{x}_0 + \rho
\]
\[
x_{k+1} = F(x_k, u_k) + G_k w_k + B_d p_k
\]
\[
y_k = g(x_k) + v_k + C_d p_k
\]
\[
p_{k+1} = p_k + \xi_k
\]
\[
H_k x_k \le h_k \quad \text{(constraint is softened)}
\]
\[
S_k w_k \le s_k \quad \text{(constraint is softened)}
\]
\[
G_k v_k \le g_k
\]

The user provides the system model as a system of ordinary differential equations

\[
\frac{dx}{dt} = f(x, u, t)
\]

This model is integrated to yield the function $F(x_k, u_k)$ (see Appendix A).

3 Linear MPC

3.1 Target calculation

Depending on the number of inputs and outputs, the setpoint of the system may or may not be reachable. For cases in which the system cannot achieve a steady state at the desired setpoint, a target steady state closest to the setpoint is estimated in a least-squares sense. Instead of solving separate problems to establish the target, a single formulation is preferred. The target-tracking problem is formulated as a single quadratic program [1] that achieves the output target, if possible, and relaxes the problem if the target is infeasible. For this purpose a soft constraint is formulated as follows:

\[
y_{sp} - Cx_s - X_y p \le \eta, \qquad
-(y_{sp} - Cx_s - X_y p) \le \eta, \qquad
\eta \ge 0
\]

The constraint $y_{sp} = Cx_s + X_y p$ is relaxed by using the slack variable $\eta$. By suitably penalizing $\eta$, we guarantee that the relaxed constraint is binding when it is feasible. The penalty is chosen to be a combination of a linear penalty $q_s' \eta$ and a quadratic penalty $\eta' Q_s \eta$, in which the elements of $q_s$ are strictly nonnegative and $Q_s$ is positive definite. By choosing a sufficiently large $q_s$, the soft constraint can be guaranteed to be exact.

Module: ltarget: solves the linear MPC target problem

\[
\min_{x_s,\, u_s,\, \eta} \; \tfrac{1}{2}\big( \eta' Q_s \eta + (u_s - u_{sp})' R_s (u_s - u_{sp}) \big) + q_s' \eta
\]

subject to

\[
(I - A) x_s = B u_s + B_d p
\]
\[
y_{sp} - Cx_s - C_d p \le \eta, \qquad
-(y_{sp} - Cx_s - C_d p) \le \eta, \qquad
\eta \ge 0
\]
\[
D u_s + H x_s \le d
\]

3.2 Regulator

In the regulation problem, we assume the stochastic variables $w_k$ and $v_k$ take on their mean values.

Module: lmpc: solves the linear MPC problem

\[
\min_{\tilde{x},\, \tilde{u},\, \epsilon} \;
\sum_{k=0}^{N-1} \Big( \tfrac{1}{2}\big( \tilde{x}_k' Q_k \tilde{x}_k + \tilde{u}_k' R_k \tilde{u}_k + 2 \tilde{x}_k' M_k \tilde{u}_k + \epsilon_k' Z_k \epsilon_k \big)
+ q_k' \tilde{x}_k + r_k' \tilde{u}_k + z_k' \epsilon_k \Big)
+ \tfrac{1}{2}\big( \tilde{x}_N' P \tilde{x}_N + \epsilon_N' Z_N \epsilon_N \big) + p' \tilde{x}_N + z_N' \epsilon_N
\]

subject to

\[
\tilde{x}_{k+1} = A_k \tilde{x}_k + B_k \tilde{u}_k + F_k, \qquad k = 0, \ldots, N-1
\]
\[
D_k \tilde{u}_k + G_k \tilde{x}_k \le d_k, \qquad k = 0, \ldots, N-1
\]
\[
H_k \tilde{x}_k - \epsilon_k \le h_k, \qquad k = 0, \ldots, N
\]
\[
\epsilon_k \ge 0, \qquad k = 0, \ldots, N
\]

In all the linear MPC modules, the user defines the problem matrices.
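To make the structure of the lmpc problem concrete, here is a reduced cvxpy sketch with time-invariant matrices and without the cross terms, slacks, or linear cost terms. The names are placeholders rather than the module's calling interface:

```python
import cvxpy as cp
import numpy as np

def lmpc_regulator(A, B, Q, R, P, D, G, d, x0, N):
    """Reduced linear MPC regulator QP (no cross terms, slacks, or linear costs)."""
    n, m = B.shape
    x = cp.Variable((N + 1, n))
    u = cp.Variable((N, m))

    cost = 0.5 * cp.quad_form(x[N], P)                     # terminal penalty
    cons = [x[0] == x0]
    for k in range(N):
        cost += 0.5 * (cp.quad_form(x[k], Q) + cp.quad_form(u[k], R))
        cons += [x[k + 1] == A @ x[k] + B @ u[k],          # dynamics
                 D @ u[k] + G @ x[k] <= d]                 # mixed input/state constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[0], x.value                             # first move and predicted states
```

In receding-horizon operation only the first input is injected; the problem is re-solved at the next sample with the new state estimate.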

3.3 Quadratic terminal constraint regulator

Module: lmpc_term: solves the linear MPC problem with a quadratic terminal constraint

\[
\min_{\tilde{x},\, \tilde{u},\, \epsilon} \;
\sum_{k=0}^{N-1} \Big( \tfrac{1}{2}\big( \tilde{x}_k' Q_k \tilde{x}_k + \tilde{u}_k' R_k \tilde{u}_k + 2 \tilde{x}_k' M_k \tilde{u}_k + \epsilon_k' Z_k \epsilon_k \big)
+ q_k' \tilde{x}_k + r_k' \tilde{u}_k + z_k' \epsilon_k \Big)
+ \tfrac{1}{2}\big( \tilde{x}_N' P \tilde{x}_N + \epsilon_N' Z_N \epsilon_N \big) + p' \tilde{x}_N + z_N' \epsilon_N
\]

subject to

\[
\tilde{x}_{k+1} = A_k \tilde{x}_k + B_k \tilde{u}_k + F_k, \qquad k = 0, \ldots, N-1
\]
\[
D_k \tilde{u}_k + G_k \tilde{x}_k \le d_k, \qquad k = 0, \ldots, N-1
\]
\[
H_k \tilde{x}_k - \epsilon_k \le h_k, \qquad k = 0, \ldots, N
\]
\[
\epsilon_k \ge 0, \qquad k = 0, \ldots, N
\]
\[
(\tilde{x}_N - a_T)' \pi (\tilde{x}_N - a_T) \le b_T \qquad (1)
\]

The quadratic constraint (1) ensures that the terminal state resulting from the regulation optimization lies within the specified region (elliptical in this case, as the constraint is quadratic) around the final target.

4 Nonlinear MPC

4.1 Disturbance model formulation

The toolbox provides a flexible interface so that a wide variety of models can be simulated. The nonlinear modules require the user to specify the process model as a set of differential equations, which they use for sensitivity calculations. This also allows the user to specify disturbance models of their choice. The user-defined differential equations are of the form

\[
\frac{dx}{dt} = f(x, u, t)
\]

The user has the choice of either pre-augmenting the integrating disturbance or specifying the linear disturbance multiplier in the following discrete-time representation of the model:

\[
x_{k+1} = F(x_k, u_k) + B_d p_k
\]

The relationship between $F(x_k, u_k)$ and $f(x, u, t)$ is given in Appendix A. To demonstrate the disturbance model formulation using the above methodology, we now show formulations of the two common disturbance models, namely the input disturbance model and the output disturbance model.

4.1.1 Output disturbance model

The parameter $B_d$ is 0 and $C_d$ is the identity. The user input takes the following form:

\[
\dot{x} = f(x, u, t), \qquad y = g(x) + C_d p
\]

When the module discretizes this model, the following discrete-time representation is obtained:

\[
x_{k+1} = F(x_k, u_k) + G w_k, \qquad p_{k+1} = p_k + \xi_k
\]

4.1.2 Input disturbance model

Augment the state and the disturbance and define the system as follows:

\[
\underbrace{\begin{bmatrix} \dot{x} \\ \dot{p} \end{bmatrix}}_{\dot{\bar{x}}}
= \underbrace{\begin{bmatrix} f(x, u + X_u p, t) \\ 0 \end{bmatrix}}_{\bar{f}(\bar{x}, u)}
\]

Note that the parameter $B_d$ is set to 0, as the disturbance has been augmented. When the module discretizes this model, the following discrete-time representation is obtained:

\[
\underbrace{\begin{bmatrix} x_{k+1} \\ p_{k+1} \end{bmatrix}}_{\bar{x}_{k+1}}
= \underbrace{\begin{bmatrix} F(x_k, u_k + X_u p_k) \\ p_k \end{bmatrix}}_{\bar{F}(\bar{x}_k, u_k)}
+ \underbrace{\begin{bmatrix} G & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} w_k \\ \xi_k \end{bmatrix}}_{\text{stochastic term}}
\]

4.2 Target calculation

The same idea applies for target calculation in nonlinear systems as explained for linear systems. The model in this case is a nonlinear function of the states and inputs.

Module: ntarget: solves the nonlinear MPC target problem

\[
\min_{x_s,\, u_s,\, \eta} \; \tfrac{1}{2}\big( \eta' Q_s \eta + (u_s - u_{sp})' R_s (u_s - u_{sp}) \big) + q_s' \eta
\]

subject to

\[
x_s = F(x_s, u_s)
\]
\[
y_{sp} - y(x_s) \le \eta, \qquad -(y_{sp} - y(x_s)) \le \eta, \qquad \eta \ge 0
\]
\[
D u_s + H x_s \le d
\]

The user provides the system model as a system of ordinary differential equations

\[
\frac{dx}{dt} = f(x, u, t)
\]

This model is integrated to yield the function $F(x_k, u_k)$ (see Appendix A). The user also provides the nonlinear function describing the dependence of the output on the states:

\[
y_k = g(x_k)
\]

4.3 Controller

Module: nmpc: solves the nonlinear MPC problem

\[
\min_{x,\, u} \;
\sum_{k=0}^{N-1} \tfrac{1}{2}\Big( (x_k - x_{set})' Q_k (x_k - x_{set}) + (u_k - u_{set})' R_k (u_k - u_{set})
+ (u_k - u_{k-1})' S (u_k - u_{k-1}) \Big)
+ \tfrac{1}{2} (x_N - x_{set})' P (x_N - x_{set})
\]

subject to

\[
x_{k+1} = F(x_k, u_k) + G_k w_k + B_d p_k
\]
\[
y_k = g(x_k) + v_k + C_d p_k
\]
\[
p_{k+1} = p_k + \xi_k
\]
\[
H_k x_k \le h_k \quad \text{(constraint is softened)}
\]
\[
E_k y_k \le e_k \quad \text{(constraint is softened)}
\]
\[
D_k u_k \le d_k
\]
\[
G_k (u_k - u_{k-1}) \le g_k
\]

The user provides the system model as a system of ordinary differential equations

\[
\frac{dx}{dt} = f(x, u, t)
\]

This model is integrated to yield the function $F(x_k, u_k)$ (see Appendix A). The user also provides the nonlinear function describing the dependence of the output on the states:

\[
y_k = g(x_k)
\]

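A stripped-down, nominal-case (zero noise, no disturbance, no inequality constraints) sketch of this problem using single shooting over the input sequence is given below; it is meant only to illustrate the objective structure and is not the nmpc module itself. The helper that produces $F(x_k, u_k)$ integrates the user's ODE over one sample time, as described in Appendix A:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def discretize(f, x, u, delta):
    """F(x_k, u_k): integrate dx/dt = f(x, u, t) over one sample time delta (Appendix A)."""
    sol = solve_ivp(lambda t, xx: f(xx, u, t), (0.0, delta), x)
    return sol.y[:, -1]

def nmpc_cost(u_flat, f, x0, u_prev, x_set, u_set, Q, R, S, P, N, m, delta):
    """Nominal NMPC objective evaluated by single shooting over the input sequence."""
    u_seq = u_flat.reshape(N, m)
    cost, x, u_last = 0.0, x0, u_prev
    for k in range(N):
        du = u_seq[k] - u_last
        cost += 0.5 * ((x - x_set) @ Q @ (x - x_set)
                       + (u_seq[k] - u_set) @ R @ (u_seq[k] - u_set)
                       + du @ S @ du)
        x = discretize(f, x, u_seq[k], delta)   # propagate the state one sample
        u_last = u_seq[k]
    return cost + 0.5 * (x - x_set) @ P @ (x - x_set)

# Usage sketch:
#   res = minimize(nmpc_cost, u_guess.ravel(),
#                  args=(f, x0, u_prev, x_set, u_set, Q, R, S, P, N, m, delta))
# The first move res.x[:m] is injected and the horizon is shifted at the next sample.
```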
A Discrete-time model from a system of ODEs

In the nonlinear modules of the package, the user interface allows the user to provide the system model as a set of ordinary differential equations

\[
\frac{dx}{dt} = f(x, u), \qquad x(t_0) = x_0
\]

Let $S(t, x_0, u_c)$ be the solution to the model with input $u(t) = u_c$, where $u_c$ is constant on the interval $[0, t]$. Then the model function in discrete time is given by

\[
F(x_k, u_k) = S(\Delta, x_k, u_k)
\]

where $\Delta$ is the sampling time.

B Quadratic terminal constraint problem

B.1 Problem formulation

The linear regulator solves a quadratic objective subject to linear constraints. To account for a quadratic terminal constraint, we add a terminal cost to the objective and increase it until the terminal constraint is satisfied. The problem formulation becomes

\[
\min_{\tilde{x},\, \tilde{u},\, \epsilon} \;
\sum_{k=0}^{N-1} \Big( \tfrac{1}{2}\big( \tilde{x}_k' Q_k \tilde{x}_k + \tilde{u}_k' R_k \tilde{u}_k + 2 \tilde{x}_k' M_k \tilde{u}_k + \epsilon_k' Z_k \epsilon_k \big)
+ q_k' \tilde{x}_k + r_k' \tilde{u}_k + z_k' \epsilon_k \Big)
+ \tfrac{1}{2}\big( \tilde{x}_N' P \tilde{x}_N + \epsilon_N' Z_N \epsilon_N \big) + p' \tilde{x}_N + z_N' \epsilon_N
+ \lambda (\tilde{x}_N - a_T)' \pi_N (\tilde{x}_N - a_T)
\]

subject to

\[
\tilde{x}_{k+1} = A_k \tilde{x}_k + B_k \tilde{u}_k + F_k, \qquad k = 0, \ldots, N-1
\]
\[
D_k \tilde{u}_k + G_k \tilde{x}_k \le d_k, \qquad k = 0, \ldots, N-1
\]
\[
H_k \tilde{x}_k - \epsilon_k \le h_k, \qquad k = 0, \ldots, N
\]
\[
\epsilon_k \ge 0, \qquad k = 0, \ldots, N
\]

Under appropriate convexity assumptions, $(\tilde{x}_N - a_T)' \pi_N (\tilde{x}_N - a_T)$ is a nonincreasing function of $\lambda$ for $\lambda \ge 0$. In the above objective, $\lambda$ is varied until the following constraint is satisfied:

\[
(\tilde{x}_N - a_T)' \pi_N (\tilde{x}_N - a_T) \le b_T
\]

B.2 Algorithm

If the solution corresponding to $\lambda = 0$ does not satisfy the constraint, the algorithm looks for a larger value of $\lambda$ for which the constraint is satisfied. The module provides a calling sequence in which the user can pass a value of $\lambda$ from the previous MPC iteration. A sufficiently large value of $\lambda$ is then found by doubling the value until the constraint is satisfied. The bisection method is then applied to estimate the value of $\lambda$ for which the solution satisfies the tolerances according to the following condition:

\[
\frac{b_T}{1 + \mathrm{tol}_A} \le (\tilde{x}_N - a_T)' \pi_N (\tilde{x}_N - a_T) \le b_T (1 + \mathrm{tol}_A)
\]

where $\mathrm{tol}_A$ is the tolerance specified by the user. The flow of the algorithm is given in Algorithm 1.

Algorithm 1: Quadratic terminal constraint regulator

  Evaluate $\tilde{x}$ for $\lambda = 0$
  if $(\tilde{x}_N - a_T)' \pi_N (\tilde{x}_N - a_T) > b_T$ then
    if $\lambda_g$ not provided by user then
      Initialize $\lambda_g$
    end if
    Evaluate $\tilde{x}$ for $\lambda = \lambda_g$
    while $(\tilde{x}_N - a_T)' \pi_N (\tilde{x}_N - a_T) > b_T$ do
      $\lambda_g \leftarrow 2\lambda_g$
      Evaluate $\tilde{x}$ for $\lambda = \lambda_g$
    end while
    $\lambda_L \leftarrow 0$, $\lambda_R \leftarrow \lambda_g$
    $\lambda_C \leftarrow (\lambda_L + \lambda_R)/2$
    Evaluate $\tilde{x}^C$ for $\lambda = \lambda_C$
    while $(\tilde{x}^C_N - a_T)' \pi_N (\tilde{x}^C_N - a_T) \notin \big[\tfrac{b_T}{1+\mathrm{tol}_A},\; b_T(1 + \mathrm{tol}_A)\big]$ do
      if $(\tilde{x}^C_N - a_T)' \pi_N (\tilde{x}^C_N - a_T) \le b_T$ then
        $\lambda_R \leftarrow \lambda_C$
      else
        $\lambda_L \leftarrow \lambda_C$
      end if
      $\lambda_C \leftarrow (\lambda_L + \lambda_R)/2$
      Evaluate $\tilde{x}^C$ for $\lambda = \lambda_C$
    end while
  else
    $\tilde{x}$ already satisfies the constraint
  end if
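Algorithm 1 can be transliterated directly. In the sketch below, solve_reg(lam) stands in for a call to the regulator with terminal penalty weight $\lambda$ and is assumed to return the optimal terminal state, and term_cost evaluates $(\tilde{x}_N - a_T)' \pi_N (\tilde{x}_N - a_T)$; both are placeholders, not the module's interface:

```python
def terminal_constraint_lambda(solve_reg, term_cost, b_T, tol_A, lam_guess=1.0):
    """Search for lambda so the terminal state satisfies the quadratic constraint.

    solve_reg(lam) -> x_N   : solve the regulator with terminal penalty weight lam
    term_cost(x_N) -> float : (x_N - a_T)' pi_N (x_N - a_T)
    """
    x_N = solve_reg(0.0)
    if term_cost(x_N) <= b_T:
        return 0.0, x_N                        # constraint already satisfied

    lam = lam_guess
    while term_cost(solve_reg(lam)) > b_T:     # double until the constraint is met
        lam *= 2.0

    lam_L, lam_R = 0.0, lam
    while True:                                # bisect to the specified tolerance
        lam_C = 0.5 * (lam_L + lam_R)
        x_N = solve_reg(lam_C)
        val = term_cost(x_N)
        if b_T / (1.0 + tol_A) <= val <= b_T * (1.0 + tol_A):
            return lam_C, x_N
        if val <= b_T:
            lam_R = lam_C                      # lambda too large; shrink
        else:
            lam_L = lam_C                      # lambda too small; grow
```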

B.3 Example

A two-state, one-input example was simulated to test the above algorithm:

\[
A = \begin{bmatrix} 4/3 & -2/3 \\ 1 & 0 \end{bmatrix}, \qquad
B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad
C = \begin{bmatrix} -2/3 & 1 \end{bmatrix} \qquad (2)
\]

The controller objective matrices were chosen as

\[
Q = I, \qquad R = I, \qquad N = 4 \qquad (3)
\]

The input was constrained by $|u| \le 1$, and the initial state was set to $[3, 3]'$. The following quadratic constraint matrices were chosen:

\[
\pi_T = \begin{bmatrix} 70 & 0 \\ 0 & 70 \end{bmatrix}, \qquad b_T = 0.2 \qquad (4)
\]

The following terminal cost profiles were obtained for the first 3 consecutive regulator optimizations. The optimal value of $\lambda$ corresponds to $\tilde{x}_N' \pi_N \tilde{x}_N = b_T = 0.2$.

[Figure: terminal cost $(\tilde{x}_N - a_T)' \pi_N (\tilde{x}_N - a_T)$ versus $\lambda \in [0, 2]$ for time steps 1, 2, and 3, with the bound $b_T$ marked.]

The optimal solution corresponding to $\lambda = 0$ satisfies the constraint in all the optimizations at the subsequent time steps, signifying that the state of the system was by then close enough to the origin for the terminal state to lie in the desired terminal region.

References

[1] James B. Rawlings. Tutorial: Model predictive control technology. In Proceedings of the American Control Conference, San Diego, CA, pages 662–676, 1999.

[2] M. Tenny. Computational Strategies for Nonlinear Model Predictive Control. PhD thesis, University of Wisconsin–Madison, 2002.