Scenario-Based Approach to Stochastic Linear Predictive Control


Scenario-Based Approach to Stochastic Linear Predictive Control

Jadranko Matuško and Francesco Borrelli

Abstract: In this paper we consider the problem of predictive control for linear systems subject to stochastic disturbances. We repeatedly solve a stochastic finite-time constrained optimal control problem by using the scenario-based approach. We address the conservatism of the approach by presenting a new technique for fast scenario removal based on mixed-integer quadratic optimization. Probabilistic bounds are derived which quantify the benefits of the proposed technique. The approach is illustrated through a numerical example.

I. INTRODUCTION

The main idea of Model Predictive Control (MPC) is to use a model of the plant to predict the future evolution of the system [15]. At each sampling time, an optimal control problem is solved over a finite horizon. The optimal command signal is applied to the process only during the following sampling interval. At the next time step a new optimal control problem, based on new measurements of the state, is solved over a shifted horizon. For complex constrained multivariable control problems, model predictive control has become the accepted standard in the process industry [18]: its success is largely due to its almost unique ability to simply and effectively handle hard constraints on controls and states.

A typical robust MPC strategy consists of solving a min-max problem to minimize worst-case performance while enforcing input and state constraints for all possible disturbances. Min-max robust Receding Horizon Control (RHC) was originally proposed by Witsenhausen [24]. In the context of robust MPC, the problem was addressed by Campo and Morari [16], and further developed in [1] for MIMO FIR plants. Kothare et al.
[12] optimize robust performance for polytopic/multi-model and linear fractional uncertainty, Scokaert and Mayne [20] for additive disturbances, and Lee and Yu [13] for linear time-varying and time-invariant state-space models depending on a vector of parameters θ ∈ Θ, where Θ is either an ellipsoid or a polyhedron.

In the aforementioned robust approaches, it is assumed that all uncertainty realizations are equally probable. Taking into account the stochastic properties of the uncertainties (i.e., their probability density functions) is a natural step towards a less conservative model predictive control design. The following stochastic optimization problem will be used to explain the main idea of the paper:

min_z E(f(z, w))
s.t. P(h(z, w) ≤ C) ≥ 1 − ε,    (1)

where E(·) is the expectation operator, P(·) is the probability function, z is the optimization vector and w is the disturbance. The constraints in (1) are usually referred to as joint chance constraints, since they involve multiple mutually dependent events. In problem (1) an average cost function is minimized, while the constraints are allowed to be violated with a small probability ε. Problem (1) is in general nonconvex and computationally intractable when used for real-time control. Thus, obtaining a computationally tractable approximation to this problem is a crucial step for the implementation of stochastic MPC. A classical approach to solving the chance constrained optimization problem (1) resorts to transforming it into a deterministic problem by replacing the probability function P with corresponding central moments (such as the mean value and variance) and using them to bound the tail of the probability distribution.

(J. Matuško is with the Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia, jadranko.matusko@fer.hr. F. Borrelli is with the Department of Mechanical Engineering, University of California, Berkeley, CA, USA, fborrelli@me.berkeley.edu.)
By using this approach for a linear system and a Gaussian distribution of the disturbance, a chance constrained optimization problem can be reformulated as a second order cone problem [22]. Following this idea, similar approaches have been recently proposed based on a state feedback law [21] or a disturbance feedback law [17], [11]. If the probability density function is not Gaussian, one can use tail bounding relations (e.g., the Chebyshev–Cantelli inequality), but this may lead to a conservative solution. Another approach, applicable to linear systems subject to finitely supported disturbances with an arbitrary distribution, is based on probabilistic tubes around the nominal system states, whose dynamics are calculated off-line using a discrete convolution [10].

This paper uses the sampling based methods proposed in [4], [7], [8] to solve problem (1). The approach can be applied to any arbitrary probability function and is based on generating a large number of stochastic samples (often called scenarios) according to the probability function of the stochastic variable. The original problem (1) is transformed into a deterministic one with a large number of deterministic constraints corresponding to the original constraints evaluated for every scenario. This approach was already used in the context of robust model predictive control in [5] and in [6]. In [19] the scenario-based approach was applied to solve a chance constrained MPC. The number of samples was selected to guarantee the feasibility of the solution, but the authors did not use any sample removal procedure. As will be shown later in this paper, this approach may lead to a very conservative solution. The main contribution of this paper is the development of a scenario removal technique which reduces the conservatism of the scenario-based approach and is fast and easy to implement. Since the proposed removal technique is suboptimal, we analyze its conservatism by deriving probabilistic bounds.

The rest of the paper is organized as follows. In Section II we review scenario-based optimization. In Section III we formally define the chance constrained MPC problem. The proposed approach is presented in Section IV, while in Section V the conservatism of the proposed approach is analyzed and probabilistic bounds are derived. In Section VI simulation results are presented.

II. SCENARIO-BASED OPTIMIZATION

This paper builds on the work presented in [4], [7], [8]. For the sake of completeness, in this section we present the idea of the scenario-based approach and recall some important results relevant for this work. Proofs and further results can be found in [4], [7], [8]. The scenario-based approach to the chance constrained problem (1) (CCP) is based on the generation of a large number of independent identically distributed (IID) disturbance samples w^(1), w^(2), ..., w^(N_s) (usually referred to as scenarios) according to the probability density function of the disturbance. Solving the optimization problem for a sufficiently large number of samples generated according to the probability density function guarantees that the resulting solution is a solution to the original CCP with high probability. The original CCP (1) is converted into the following deterministic problem:

min_z E(f(z, w))
s.t. h(z, w^(i)) ≤ C, i = 1, ..., N_s,    (2)

where N_s is the number of scenarios. The main issue is to establish how many scenarios one needs to generate in order to obtain a feasible solution to the original CCP (1). This issue is addressed in [9] and [7]. With a high probability 1 − β, the solution of the optimization problem (2) is also a feasible solution to the original CCP (1) if the number of samples N_s satisfies the inequality:

Σ_{j=0}^{d} C(N_s, j) ε^j (1 − ε)^{N_s − j} ≤ β,    (3)

where β is called the reliability parameter and d is the number of decision variables z.
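As a quick numerical companion to inequality (3), the smallest admissible N_s can be computed by bisection, since the left-hand side of (3) is decreasing in N_s. The following sketch (Python, not part of the paper; function names are ours) illustrates this:

```python
import math

def bound_holds(n_s, d, eps, beta):
    """Inequality (3): sum_{j=0}^{d} C(N_s, j) eps^j (1-eps)^(N_s-j) <= beta."""
    tail = sum(math.comb(n_s, j) * eps**j * (1 - eps)**(n_s - j)
               for j in range(d + 1))
    return tail <= beta

def min_scenarios(d, eps, beta):
    """Smallest N_s satisfying (3), found by doubling then bisection."""
    hi = 1
    while not bound_holds(hi, d, eps, beta):
        hi *= 2                      # grow until the bound holds
    lo = 1
    while lo < hi:                   # bisect for the smallest such N_s
        mid = (lo + hi) // 2
        if bound_holds(mid, d, eps, beta):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For instance, with d = 0 the bound reduces to (1 − ε)^{N_s} ≤ β, so `min_scenarios(0, 0.1, 0.5)` returns 7 (the smallest N_s with 0.9^{N_s} ≤ 0.5).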
However, such an approach introduces significant conservatism. In fact, for large N_s, a solution satisfying all the constraints is close to a worst-case robust constraint satisfaction solution, and the violation probability obtained with this approach is much less than ε. In order to reduce the conservatism of the scenario-based stochastic optimization approach, a number r of scenarios can be removed without significant loss of reliability β, resulting in the following link between the reliability of the solution and the numbers of generated and removed samples [9], [7]:

C(r + d − 1, r) Σ_{j=0}^{r+d−1} C(N_s, j) ε^j (1 − ε)^{N_s − j} ≤ β.    (4)

Although inequality (4) holds for any scenario removal algorithm, different removal algorithms will generally result in different levels of conservatism. A possible suboptimal removal strategy is based on iterative removal of the active constraints according to the values of their associated dual variables, interpreted as a measure of the cost function sensitivity to constraint perturbation. At each iteration, the active constraint with the highest associated dual variable is removed, and r deterministic optimization problems need to be solved in order to remove r constraints. The optimal removal strategy removes the r constraints so that the highest improvement of the cost function is obtained. The optimal constraint removal strategy presented in [9], [7] ensures that, with probability 1 − β₁, the achieved constraint violation (after removing r samples out of the N_s) will not be less than ε₁ if the following inequality holds:

Σ_{j=r+1}^{N_s} C(N_s, j) ε₁^j (1 − ε₁)^{N_s − j} ≤ β₁.    (5)

Inequalities (4) and (5) provide the link between N_s, r, the reliability and the violation probability range (ε₁, ε) obtained by selecting N_s scenarios and optimally removing r. For a fixed reliability, increasing N_s and r increases the lower bound ε₁.

III.
PROBLEM STATEMENT

We consider a linear time-invariant state-space model with additive disturbance:

x_{k+1} = Ax_k + Bu_k + w_k, x_0 = x(0),    (6)

where x ∈ R^{n_x} is the state, u ∈ R^{n_u} is the input, and w ∈ R^{n_w} is the disturbance vector. A ∈ R^{n_x×n_x} and B ∈ R^{n_x×n_u} are the system and input matrices, respectively. It is assumed that the probability density function of the disturbance, f(w), is known or estimated from historical data. We consider the following stochastic constrained finite-time optimal control problem:

min_u E( Σ_{k=0}^{N−1} (x_k^T Q x_k + u_k^T R u_k) )
s.t. P(Hx_k ≤ C, k = 1, ..., N) ≥ 1 − ε,    (7)

where N is the length of the prediction horizon, E(·) is the expectation operator and P(·) is the probability function. Q ∈ R^{n_x×n_x} and R ∈ R^{n_u×n_u} are positive semi-definite matrices, H ∈ R^{n_c×n_x} and C ∈ R^{n_c×1}. We choose an affine state feedback control policy u_k = Kx_k + ū_k, where the feedback gain matrix K is calculated offline (e.g., as a solution to the LQR problem) and ū_k is an optimization variable. The closed loop system can be written as:

x_{k+1} = (A + BK)x_k + Bū_k + w_k, x_0 = x(0).    (8)

The system dynamics over the prediction horizon can be written as:

x_k = (A + BK)^k x_0 + Σ_{i=0}^{k−1} (A + BK)^{k−i−1} Bū_i + Σ_{i=0}^{k−1} (A + BK)^{k−i−1} w_i,    (9)

or in more compact form as:

X = Ax_0 + BU + GW,    (10)

where X, U, W, A, B and G are vectors and matrices of appropriate dimensions. With this formalism the optimization problem (7) can be written in the compact form:

min_U E[X^T QX + U^T RU]
s.t. P(HX ≤ C) ≥ 1 − ε,    (11)

where Q = diag(Q, ..., Q), R = diag(R, ..., R), H = diag(H, ..., H) and C = [C, ..., C].

IV. SCENARIO-BASED FINITE TIME OPTIMAL CONTROL

In order to apply the scenario-based approach to solve the constrained finite-time optimal control problem (7), we consider each disturbance scenario as one realization of the disturbance over the prediction horizon:

W^{(i)} = [w_0^{(i)} w_1^{(i)} ... w_{N−1}^{(i)}], i = 1, ..., N_s,    (12)

where W^{(i)} is either generated from the known probability density function f(w) or constructed from historical data. The scenario-based approximation of the problem (11) can be written as:

min_U E[X^T QX + U^T RU]
s.t. H X̄ + HGW^{(i)} ≤ C, i = 1, ..., N_s,    (13)

where X̄ = Ax_0 + BU. Note that GW^{(i)}, i = 1, ..., N_s, represents a sample based approximation of the multivariate distribution of the disturbance over the prediction horizon. Optimal scenario removal should be carried out based on this multivariate distribution. This is very time-consuming for most real-time problems. For this reason we apply Boole's inequality to split the multivariate distribution into Nn_c univariate distributions over the prediction horizon, ignoring their mutual dependence. In terms of scenarios, this means considering that individual scenario components at each step of the prediction horizon are independent. By doing so we assume that the removal of an individual scenario component will correspond, with high probability, to the removal of its associated scenario. The result is a conservative solution to the original CCP but, as will be shown later, we can claim that, with high probability, the conservatism originating from Boole's inequality is not significant.
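The compact form (10) expresses the superposition of the initial-condition, input and disturbance responses in (9). A minimal pure-Python sketch (with illustrative matrices, not the paper's) can check this decomposition by simulating (8) directly:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(Mij * vj for Mij, vj in zip(row, v)) for row in M]

def vadd(a, b):
    return [ai + bi for ai, bi in zip(a, b)]

def predict(A_cl, B, x0, u_bar, w):
    """Simulate x_{k+1} = (A+BK) x_k + B u_bar_k + w_k over the horizon,
    returning [x_1, ..., x_N] as in (9)."""
    xs, x = [], list(x0)
    for uk, wk in zip(u_bar, w):
        x = vadd(vadd(matvec(A_cl, x), matvec(B, uk)), wk)
        xs.append(x)
    return xs

# Superposition check of (10): the response to (x0, U, W) equals the sum of
# the responses to (x0, 0, 0), (0, U, 0) and (0, 0, W).
A_cl = [[0.9, 0.2], [-0.1, 0.7]]   # illustrative closed-loop matrix A + BK
B = [[1.0], [0.5]]                 # illustrative input matrix (single input)
x0 = [1.0, -1.0]
U = [[0.3], [-0.2], [0.1]]         # u_bar_0 .. u_bar_2
W = [[0.05, 0.0], [0.0, -0.02], [0.01, 0.01]]
Z2, O1 = [0.0, 0.0], [[0.0]] * 3   # zero state and zero input sequence
full = predict(A_cl, B, x0, U, W)
parts = [predict(A_cl, B, x0, O1, [Z2] * 3),
         predict(A_cl, B, Z2, U, [Z2] * 3),
         predict(A_cl, B, Z2, O1, W)]
for k in range(3):
    for i in range(2):
        assert abs(full[k][i] - sum(p[k][i] for p in parts)) < 1e-9
```

The same recursion, stacked over k, yields the block matrices A, B and G of (10).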
For each individual constraint H_j x_k ≤ C_j, j = 1, ..., n_c, k = 1, ..., N, of problem (7), the disturbance sample generates a random offset g^{(i)} = H_j G_k W^{(i)} to the nominal constraint:

H_j x_k + g^{(i)} ≤ C_j, i = 1, ..., N_s,    (14)

where G_k is the k-th (block) row of the matrix G, and H_j and C_j are the j-th rows of the matrices H and C, respectively. We denote by r̃_{jk} the number of scenario components to be removed for the j-th constraint at the k-th time step. One can easily notice from (14) that the optimal removal strategy corresponds to the removal of the r̃_{jk} largest offsets (notice that all constraints in (14) are parallel). In general, the optimal number of scenario components to be removed depends on the initial system state. The numbers r̃_{jk} can be fixed off-line for every j and k, but this may lead to a conservative solution. In this paper we propose to optimally select the r̃_{jk} online (this is sometimes referred to as optimal risk allocation [2]) by treating them as optimization variables subject to the constraint:

Σ_{k=1}^{N} Σ_{j=1}^{n_c} r̃_{jk} ≤ r.    (15)

In conclusion, the optimization problem (11) can be reformulated as the mixed integer quadratic program (MIQP):

min_U E[X^T QX + U^T RU]
s.t. H X̄ + (HGW^{(i)})_sorted Z ≤ C,
Σ_{j=1}^{r} z_{j,i} = 1, i = 1, ..., Nn_c,
Σ_{k=1}^{Nn_c} L Z_k ≤ r,    (16)

where Z is an r × Nn_c matrix of binary variables, z_{j,i} are the elements of the matrix Z, Z_k is the k-th column of the matrix Z and L is a row vector defined as L = [0, 1, ..., r − 1]. The proposed scenario-based stochastic finite time optimal control (FTOC) algorithm is summarized in Algorithm 1. In an MPC implementation of Algorithm 1, steps 1-4 need to be executed only once, while step 5 is repeatedly executed in a receding horizon manner.

Algorithm 1 SCENARIO-BASED STOCHASTIC FTOC
1: For given ε, β, β₁ calculate N_s and r satisfying inequalities (4) and (5);
2: Generate N_s scenarios W^{(i)} ∈ R^{n_w×N}, i = 1, ..., N_s, according to the PDF of the disturbance;
3: Choose a linear feedback controller K and calculate the offsets g^{(i)};
4: Sort the offsets g^{(i)} for all j = 1, ..., n_c, k = 1, ..., N;
5: Solve the MIQP (16).

Remark 1: An interesting interpretation of the constraint removal strategy used in this paper comes from the theory

of order statistics [3]. Let {g^{(1:N_s)}, g^{(2:N_s)}, ..., g^{(N_s:N_s)}} denote the order statistics corresponding to the offsets {g^{(i)}}, obtained by sorting the offsets in ascending order. A fundamental property of the order statistics of a sample set is that they are equal in distribution to the inverse cumulative distribution function (CDF) evaluated at uniform order statistics:

{g^{(i:N_s)}} =_d {F_g^{−1}(U^{(i:N_s)})},    (17)

where {U^{(i:N_s)}} are the order statistics of a uniformly distributed sample set on the interval [0, 1] and F_g is the CDF of {g^{(i)}}. In this case, removing r̃ scenario components in the optimal way results in the following constraint:

H_j x_k + g^{((N_s−r̃):N_s)} ≤ C_j,    (18)

or, equivalently:

H_j x_k + F_g^{−1}(U^{((N_s−r̃):N_s)}) ≤ C_j.    (19)

Equation (19) has the same form as a chance constraint transformed into a deterministic one using the classical approach (e.g., [17], [11], [22]). There is a slight difference: the argument of the inverse cumulative distribution function F_g^{−1}(·) is a stochastic variable U^{((N_s−r̃):N_s)} and, as a consequence, the distribution F_g(x) is cut off at a random point g^{((N_s−r̃):N_s)}, with associated distribution function given as [3]:

F_g^{((N_s−r̃):N_s)}(x) = Σ_{i=N_s−r̃}^{N_s} C(N_s, i) F_g(x)^i (1 − F_g(x))^{N_s−i}.    (20)

By using 1 − ε instead of F_g(x) in (20) and considering F_g^{((N_s−r̃):N_s)}(x) as a reliability parameter β, a less conservative version of the assessment (4) is obtained. In fact, inequality (4) is valid for any removal algorithm satisfying a technical condition in [9], while the bound (20) is specific to our approach.

V. ASSESSMENT OF CONSERVATISM OF SCENARIO-BASED FTOC

In order to enable fast scenario removal, Boole's inequality has been used (N univariate distributions have been used instead of one multivariate distribution). This step introduces a certain amount of conservatism into the optimization process. In this section a probabilistic assessment of this conservatism will be given.
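The removal rule behind (18) and the order-statistic CDF (20) are easy to check numerically. The sketch below (Python; values are illustrative and not from the paper) computes the binding constraint bound after discarding the r̃ largest offsets for one constraint, and evaluates the order-statistic CDF at a point where the underlying CDF equals F:

```python
import math

def tightened_rhs(offsets, c_j, r_tilde):
    """After removing the r_tilde largest offsets, the binding constraint (18)
    is H_j x_k <= c_j - g_((N_s - r_tilde):N_s), i.e. the right-hand side is
    c_j minus the (r_tilde + 1)-th largest offset."""
    g_sorted = sorted(offsets)              # ascending order statistics
    return c_j - g_sorted[len(offsets) - r_tilde - 1]

def order_stat_cdf(n_s, k, F):
    """CDF of the k-th smallest of n_s IID samples, as in (20):
    sum_{i=k}^{n_s} C(n_s, i) F^i (1 - F)^(n_s - i)."""
    return sum(math.comb(n_s, i) * F**i * (1 - F)**(n_s - i)
               for i in range(k, n_s + 1))
```

As sanity checks, order_stat_cdf(n, n, F) reduces to F^n (the sample maximum) and order_stat_cdf(n, 1, F) to 1 − (1 − F)^n (the sample minimum).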
In this analysis we will consider the conservatism of the proposed approach in terms of removed constraints and translate it into violation probabilities afterwards. The results of the analysis are summarized in the following theorem.

Theorem 1: Generate N_s scenarios and remove r scenario components using Algorithm 1. For a given 0 ≤ α ≤ 1, with probability 1 − α at least r̲ scenarios will be removed, where:

r̲ = r − r²(N − 1)/(2NN_s) − sqrt( n_f (1 − α)/α ),    (21)

with:

n_f = k̄²(N_s − k̄)N(N − 1) / (2N_s(N_s − 1)) − (k̄³(N_s − k̄)/(N_s²(N_s − 1))) ((1/3)N³ − (1/2)N² + (1/6)N)    (22)

and k̄ = r/N.

Proof: In order to assess the conservatism of Boole's inequality we consider the worst case, i.e., the case where the same number of scenario components is removed at each time step of the prediction horizon: k̄_i = k̄ = r/N, i = 1, ..., N. Define I = {1, 2, ..., N_s} as the set of indices of the generated scenarios and I_i^R ⊆ I as the set of indices of the scenarios whose components are removed at the i-th time step of the prediction horizon. Additionally, we define the set I_F as:

I_F = ∪_{i,j} (I_i^R ∩ I_j^R), i = 1, ..., N − 1, j = i + 1, ..., N,    (23)

containing the indices of those scenarios with two or more components removed. Clearly the actual number of removed scenarios satisfies:

r̂ ≥ r − n(I_F),    (24)

where n(I_F) is the cardinality of the set I_F. Note that the cardinality of the set I_F is a stochastic variable that can be upper bounded by a sum of independent stochastic variables with hypergeometric probability functions as [23]:

n(I_F) ≤ Σ_{i=1}^{N−1} n_i^F, n_i^F ∼ H(N_s, i k̄, k̄, x),    (25)

where H(N_s, i k̄, k̄, x) is the hypergeometric probability mass function defined as:

H(N_s, i k̄, k̄, x) = C(i k̄, x) C(N_s − i k̄, k̄ − x) / C(N_s, k̄).    (26)

At each time step there are N_s scenario components, and among them there are i k̄ scenario components which belong to a scenario already removed. The probability mass function (26) describes the probability that, among the k̄ removed scenario components, there are x that belong to scenarios already removed in previous time steps.
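The bound (21)-(22) is a closed-form expression and can be transcribed directly. In the sketch below (Python, not from the paper), the values N_s = 5966 and r = 100 match the second case of Table I, while the horizon N = 10 is an assumption made here for illustration (it is not stated in this excerpt):

```python
import math

def r_lower(r, N, n_s, alpha):
    """Lower bound (21) on the number of distinct scenarios removed when r
    scenario components are removed (k_bar = r/N per step), valid with
    probability 1 - alpha."""
    k = r / N                                          # k_bar
    mean_overlap = r**2 * (N - 1) / (2 * N * n_s)      # E[n(I_F)], eq. (27)
    n_f = (k**2 * (n_s - k) * N * (N - 1) / (2 * n_s * (n_s - 1))
           - k**3 * (n_s - k) / (n_s**2 * (n_s - 1))
             * (N**3 / 3 - N**2 / 2 + N / 6))          # variance bound (22)
    return r - mean_overlap - math.sqrt(n_f * (1 - alpha) / alpha)
```

With alpha = 1 the Cantelli term vanishes and the bound reduces to r minus the expected overlap; smaller alpha (higher confidence) pushes r̲ further below r. Under the N = 10 assumption, r_lower(100, 10, 5966, 0.05) evaluates to roughly 95.5, which is consistent with the corresponding "minimum number of removed scenarios" entry of Table I.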
By applying the above arguments to every time step (in any order) we obtain the bound (25). Taking into account that E(n_i^F) = i k̄²/N_s, it follows that the expected value of the cardinality of the set I_F is bounded by:

E(n(I_F)) ≤ Σ_{i=1}^{N−1} E(n_i^F) = r²(N − 1)/(2NN_s).    (27)

In order to assess the upper tail of the variable n(I_F), the Chebyshev one-sided inequality (also known as the Chebyshev–Cantelli inequality) is used. This inequality depends on the

second central moment of the stochastic variable and is given as:

P( n(I_F) − E(n(I_F)) ≥ γ ) ≤ Var(n(I_F)) / (Var(n(I_F)) + γ²) = α.    (28)

Given 0 ≤ α ≤ 1, it follows that, with probability 1 − α, at most

n_α(I_F) = E(n(I_F)) + sqrt( Var(n(I_F)) (1 − α)/α )    (29)

removed components will share the same scenario. Taking into account that the variance of n_i^F is equal to:

Var(n_i^F) = (i k̄²/N_s) ((N_s − i k̄)/N_s) ((N_s − k̄)/(N_s − 1)),    (30)

as well as the additivity of the variance over the independent variables n_i^F, it follows:

Var(n(I_F)) = k̄²(N_s − k̄)N(N − 1) / (2N_s(N_s − 1)) − (k̄³(N_s − k̄)/(N_s²(N_s − 1))) ((1/3)N³ − (1/2)N² + (1/6)N).    (31)

By combining equations (31), (29) and (24), the minimal number of removed scenarios r̲, given by (21), can be obtained with confidence level 1 − α.

After r̲ has been computed, one can use (5) and (4) to assess the conservatism of the scenario-based CCP solution. Note that the assessment of the conservatism was done assuming only one constraint (n_c = 1), but it is straightforward to extend it to the more general case by replacing N with Nn_c, where n_c is the number of constraints.

VI. SIMULATION RESULTS

The proposed approach is tested on a second order LTI system. All simulation results shown in this section were obtained on a MacBook Pro 2.4 GHz running Matlab 7.13, CPLEX 12.3 and Yalmip [14]. We consider the second order linear time invariant system given by:

x_{k+1} = [1 0.5; 1 0] x_k + [1 0; 0 2] u_k + [1; 1] w_k.    (32)

The control policy used in this example is an affine state feedback law u_k = Kx_k + ū_k, with the feedback matrix K calculated offline, prior to the optimization process, as the optimal linear quadratic (LQR) controller:

K = [−0.9860 −0.4952; 0.0451 0.0001].    (33)

The optimization variable ū is obtained as a solution to the optimization problem (7) with:

Q = [50 0; 0 1], R = [1 0; 0 1], H = [1 1], C = 0.5.    (34)

During the simulation tests five different cases were considered, all with a target constraint violation probability of ε = 0.05 but using different numbers of generated and removed scenarios.
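One can quickly check that the gain (33) renders the closed loop (8) Schur stable. In the sketch below (Python, 2×2 eigenvalues via the characteristic polynomial), the negative signs in the first row of K are an assumption made here, since the transcribed signs are ambiguous; they are the choice that makes A + BK stable:

```python
import cmath

# System (32) and gain (33); the first-row signs of K are assumed negative here.
A = [[1.0, 0.5], [1.0, 0.0]]
B = [[1.0, 0.0], [0.0, 2.0]]
K = [[-0.9860, -0.4952], [0.0451, 0.0001]]

def closed_loop(A, B, K):
    """Compute A + B K for square matrices given as lists of rows."""
    n = len(A)
    BK = [[sum(B[i][k] * K[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[A[i][j] + BK[i][j] for j in range(n)] for i in range(n)]

def eig2(M):
    """Eigenvalues of a 2x2 matrix from trace and determinant."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

Acl = closed_loop(A, B, K)
assert all(abs(lam) < 1 for lam in eig2(Acl))  # A + BK is Schur stable
```

With the signs flipped to the literal transcription, the same check fails, since A itself is unstable.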
Reliability parameters β and β₁ were set to 10^−10 in all the simulations. After the optimization process, the system was simulated in open loop on a newly generated set of 100,000 different disturbance realizations in order to assess the achieved violation probability. The results of the simulations are summarized in Table I.

Fig. 1. Open loop responses of the stochastic FTOC (100,000 simulations).

The results shown in Table I indicate that the scenario-based approach without scenario removal is very conservative. The achieved violation probability was 0.02%, which is closer to that of a robust control problem (see Fig. 1). From the achieved violation probabilities it is clear that using more scenarios generally results in a less conservative approximation of the original CCP. However, the last two columns in Table I suggest that the conservatism of the scenario-based optimization approach is not only dependent on the number of generated scenarios but also on the generated sample set itself, i.e., on the particular realizations of the disturbance. Additionally, the lower bounds ε₁ on the solution to the optimization problem suggest that the conservatism of the proposed scenario removal is relatively small. Moreover, the actual number of removed scenarios is close to the desired number in all simulations. This was expected, taking into account that the bound (21) was derived for the worst case where the same number of scenario components is removed at each step of the prediction horizon. The violation probabilities obtained in the simulation tests are actually very close to the empirical probability r/N_s, which is substantially lower than the desired probability ε. The reason for this conservatism is twofold. Firstly, the desired violation probability ε bounds the tail of the solution to the optimization problem, while the expected value of the solution is much lower; secondly, the bounds (4) and (5) are not exact but obtained using the conservative Chernoff bound [7].
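The empirical violation assessment described above can be reproduced in a few lines. The sketch below (Python, single disturbance channel with zero ū for brevity; matrices and parameters passed in are placeholders, not the paper's data) counts the fraction of open-loop runs that violate Hx_k ≤ C at some step of the horizon:

```python
import random

def estimate_violation(A_cl, Bw, H, C, x0, N, n_sim, sigma, seed=0):
    """Monte Carlo estimate of P(H x_k > C for some k = 1..N) for the
    closed loop x_{k+1} = (A+BK) x_k + B_w w_k with w_k ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(n_sim):
        x, violated = list(x0), False
        for _ in range(N):
            w = rng.gauss(0.0, sigma)
            x = [sum(Ai[j] * x[j] for j in range(len(x))) + bwi * w
                 for Ai, bwi in zip(A_cl, Bw)]
            if sum(h * xi for h, xi in zip(H, x)) > C:
                violated = True
        violations += violated
    return violations / n_sim
```

The paper's assessment is the same procedure run on 100,000 realizations of the disturbance with the optimized input sequence ū applied.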

TABLE I
SIMULATION RESULTS WITH ε = 0.05 AND β = β₁ = 10^−10

Case (N_s, r):                         (5966, 0)   (5966, 100)   (14766, 350)   (19160, 600)   (31028, 1000)
Average violation                      0.02%       1.8%          2.3%           3.17%          3.12%
Average time to solve MIQP             0.1 s       0.37 s        0.47 s         0.57 s         0.71 s
Optimal cost function J                1164.25     1061.5        1059.99        1059.98        1059.985
Min. removed scenarios (α = 0.05)      0           95            338            579            969
Removed scenarios                      0           99            349            598            995
Worst lower bound ε₁ (α = 0.05)        0           0.77%         1.57%          2.3%           2.53%
Best lower bound ε₁                    0           0.83%         1.64%          2.4%           2.63%

VII. CONCLUSION

In this paper the problem of stochastic constrained finite-time optimal control has been solved using a scenario-based approach. In order to reduce the conservatism of the scenario-based approach, a new technique for scenario removal is proposed which resorts to the removal of scenario components instead of entire scenarios. The conservatism of this approach is analyzed and probabilistic bounds are derived. Simulation results suggest that the proposed approach could be a promising solution for real-time implementation of scenario-based stochastic predictive control. Additionally, the simulation results indicate that the conservatism of the proposed approach for scenario removal is not significant.

VIII. ACKNOWLEDGEMENT

This material is based upon work supported by Siemens and the National Science Foundation under Grant No. 0844456. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Siemens or the National Science Foundation.

REFERENCES

[1] J.C. Allwright and G.C. Papavasiliou. On linear programming and robust model-predictive control using impulse-responses. Systems & Control Letters, 18(2):159-164, 1992.
[2] Lars Blackmore and Masahiro Ono. Convex chance constrained predictive control without sampling. In Proceedings of the AIAA Guidance, Navigation and Control Conference, 2009.
[3] B. C. Arnold, N. Balakrishnan, and H. N. Nagaraja. A First Course in Order Statistics. Society for Industrial and Applied Mathematics, 2008.
[4] G. C. Calafiore and M. C. Campi. The scenario approach to robust control design. IEEE Transactions on Automatic Control, 51(5):742–753, May 2006.
[5] G. C. Calafiore and L. Fagiano. Robust model predictive control: The random convex programming approach. In 2011 IEEE International Symposium on Computer-Aided Control System Design (CACSD), pages 222–227, Sept. 2011.
[6] G. C. Calafiore and L. Fagiano. Robust model predictive control via random convex programming. In 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, USA, pages 1910–1915, Dec. 2011.
[7] G. C. Calafiore. Random convex programs. SIAM Journal on Optimization, 20(6):3427–3464, 2010.
[8] M. C. Campi and S. Garatti. A sampling-and-discarding approach to chance-constrained optimization: Feasibility and optimality. Journal of Optimization Theory and Applications, 148(2):257–280, 2011.
[9] M. C. Campi and S. Garatti. The exact feasibility of randomized solutions of uncertain convex programs. SIAM Journal on Optimization, 19(4):1211–1230, 2008.
[10] M. Cannon, B. Kouvaritakis, S. V. Raković, and Q. Cheng. Stochastic tubes in model predictive control with probabilistic constraints. IEEE Transactions on Automatic Control, 56(1):194–200, Jan. 2011.
[11] P. Hokayem, D. Chatterjee, and J. Lygeros. On stochastic receding horizon control with bounded control inputs. In Proceedings of the 48th IEEE Conference on Decision and Control, held jointly with the 28th Chinese Control Conference (CDC/CCC 2009), pages 6359–6364, Dec. 2009.
[12] M. V. Kothare, V. Balakrishnan, and M. Morari. Robust constrained model predictive control using linear matrix inequalities. Automatica, 32(10):1361–1379, 1996.
[13] J. H. Lee and Z. Yu. Worst-case formulations of model predictive control for systems with bounded parameters. Automatica, 33(5):763–781, 1997.
[14] J. Löfberg. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, Taipei, Taiwan, 2004.
[15] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36:789–814, 2000.
[16] M. Morari and P. J. Campo. Robust model predictive control. In American Control Conference, pages 1021–1026, 1987.
[17] F. Oldewurtel, C. N. Jones, and M. Morari. A tractable approximation of chance constrained stochastic MPC based on affine disturbance feedback. In 47th IEEE Conference on Decision and Control (CDC 2008), pages 4731–4736, Dec. 2008.
[18] S. J. Qin and T. A. Badgwell. An overview of industrial model predictive control technology. In Chemical Process Control V, CACHE, AIChE, pages 232–256, 1997.
[19] G. Schildbach, G. Calafiore, L. Fagiano, and M. Morari. Randomized model predictive control for stochastic linear systems. In American Control Conference, accepted for publication, 2012.
[20] P. O. M. Scokaert and D. Q. Mayne. Min-max feedback model predictive control for constrained linear systems. IEEE Transactions on Automatic Control, 43(8):1136–1142, Aug. 1998.
[21] J. Skaf and S. P. Boyd. Design of affine controllers via convex optimization. IEEE Transactions on Automatic Control, 55(11):2476–2487, Nov. 2010.
[22] D. H. van Hessem and O. H. Bosgra. A conic reformulation of model predictive control including bounded and stochastic disturbances under state and input constraints. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 4, pages 4643–4648, 2002.
[23] R. E. Walpole, R. H. Myers, S. L. Myers, and K. Ye. Probability & Statistics for Engineers and Scientists. Pearson Education, Upper Saddle River, 8th edition, 2007.
[24] H. Witsenhausen. A minimax control problem for sampled linear systems. IEEE Transactions on Automatic Control, 13(1):5–21, Feb. 1968.