The Decision Rule Approach to Optimisation under Uncertainty: Methodology and Applications in Operations Management


Angelos Georghiou, Wolfram Wiesemann, Daniel Kuhn
Department of Computing, Imperial College London
180 Queen's Gate, London SW7 2AZ, United Kingdom.

Abstract

Decision-making under uncertainty has a long and distinguished history in operations research. However, most of the existing solution techniques suffer from the curse of dimensionality, which restricts their application to small and medium-sized problems, or they rely on simplifying modelling assumptions (e.g. absence of recourse actions). Recently, a new solution technique has been proposed, which we refer to as the decision rule approach. By approximating the feasible region of the decision problem, the decision rule approach aims to achieve tractability without changing the fundamental structure of the problem. In this paper, we survey the major theoretical results relating to this approach, and we investigate its potential in operations management.

Keywords. Robust Optimisation, Decision Rules, Optimisation under Uncertainty.

1 Introduction

Operations managers frequently take decisions whose consequences extend well into the future. Inevitably, the outcomes of such choices are affected by significant uncertainty: changes in customer taste, technological advances and unforeseen stakeholder actions all have a bearing on the suitability of these decisions. It is well documented in theory and practice that disregarding this uncertainty often results in severely suboptimal decisions, which can in turn lead to the underperformance or complete breakdown of production processes.

Yet, researchers and practitioners frequently neglect uncertainty and instead focus on the expected or most likely market developments. We argue in the following that this is caused by the inherent limitations of the mainstream approaches to decision-making under uncertainty. While our argument applies to other methods as well, we will restrict our discussion to stochastic programming and robust optimisation. The remainder of the paper advocates the decision rule approach as a promising candidate to remedy some of these shortcomings.

Stochastic programming models the uncertain parameters of a decision problem as a random vector that follows a known distribution. The two major types of stochastic programs are recourse problems and chance-constrained problems. In the classical two-stage recourse problem, the decision maker selects a here-and-now decision before the realisations of the uncertain parameters are known. This decision is complemented by a wait-and-see action that is chosen after the parameter realisations are observed. The goal is to select here-and-now and wait-and-see decisions that optimise a risk functional, such as the expected value or the variance of a cost function. For example, a production planning problem may ask for a production schedule that minimises the expected inventory holding and backlogging costs when customer demands are uncertain. A two-stage recourse formulation of this problem could involve a here-and-now manufacturing decision that is complemented by a sales decision once the customer demands are observed (a small numerical sketch of this two-stage structure is given below). Two-stage recourse problems find their natural generalisation in multi-stage recourse problems, where a sequence of uncertain parameters is observed over time. In this setting, the decision maker chooses a recourse action whenever some of the parameter realisations are observed. For instance, a multi-stage formulation of our production planning problem could accommodate a weekly revision of the production schedule to avoid inventory depletions or overruns.

Although two-stage and multi-stage recourse problems are very similar, their computational complexity differs considerably. While the exact solution of linear two-stage recourse problems is #P-hard [24], approximate solutions can be found quite efficiently via Monte Carlo sampling techniques [13, 54]. Multi-stage recourse problems, on the other hand, are believed to be computationally intractable already when medium-accuracy solutions are sought [57]. For this reason, the majority of recourse problems studied in the operations management literature are based on two-stage formulations.
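The following minimal sketch illustrates the two-stage recourse structure described above on a toy production planning instance with a finite set of demand scenarios. All numbers (costs, demands, probabilities) are hypothetical, and the scenario-based linear program is only meant to make the split between the here-and-now production quantity and the wait-and-see sales decisions concrete; it is not the formulation used later in the paper.

```python
# A minimal sketch (illustrative data, not from the paper) of a two-stage recourse
# problem: a here-and-now production quantity p is fixed before demand is known,
# and a wait-and-see sales decision s_w is taken for each demand scenario.
import numpy as np
from scipy.optimize import linprog

c_prod, c_hold, c_back = 1.0, 0.2, 3.0          # unit production / holding / backlog costs
demand = np.array([80.0, 100.0, 130.0])         # demand scenarios
prob = np.array([0.3, 0.4, 0.3])                # scenario probabilities
K = len(demand)

# Decision vector x = [p, s_1, ..., s_K]; the expected cost
#   c_prod*p + sum_w prob_w*(c_hold*(p - s_w) + c_back*(demand_w - s_w))
# expands to a linear objective plus the constant sum_w prob_w*c_back*demand_w.
obj = np.concatenate(([c_prod + c_hold], -prob * (c_hold + c_back)))

# Recourse constraints s_w <= p, written as s_w - p <= 0.
A_ub = np.hstack((-np.ones((K, 1)), np.eye(K)))
b_ub = np.zeros(K)
bounds = [(0, None)] + [(0, d) for d in demand]  # sales cannot exceed demand

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
expected_cost = res.fun + np.dot(prob, c_back * demand)
print("here-and-now production:", res.x[0], " expected cost:", expected_cost)
```

The single production variable is shared by all scenarios, while one sales variable per scenario plays the role of the wait-and-see decision; a multi-stage variant would add such recourse variables for every stage and observation history.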

Contrary to recourse formulations, chance-constrained problems traditionally only involve here-and-now decisions. The goal is to optimise a deterministic objective, subject to the requirement that constraints involving uncertain parameters are satisfied with at least a prespecified probability. Going back to our production planning problem, a chance-constrained model could ask for a minimum cost production plan that satisfies the uncertain demands with a probability of at least 95%. Unfortunately, apart from some benign special cases, chance-constrained problems are notoriously difficult to solve [52]. Indeed, verifying the satisfaction of a chance constraint requires multidimensional integration, which becomes computationally demanding if there are more than a few uncertain problem parameters. As a result, one typically resorts to sampling approximations [14, 49] or reformulations using conditional value-at-risk constraints [53]. Chance-constrained problems involving wait-and-see decisions are studied in [26, 47]. A more detailed review of stochastic programming can be found in [13, 54].

Classical robust optimisation can be viewed as a limiting case of chance-constrained problems where the constraints must be satisfied with probability one. Thus, a robust optimisation problem asks for a here-and-now decision that optimises a deterministic objective and at the same time satisfies a constraint set for every possible realisation of the uncertain parameters. In our production planning example, a robust formulation may be appropriate if lost sales should be avoided at all costs, or if the decision maker is unable to assign a probability distribution to the uncertain customer demands. Robust optimisation has gained wide popularity in recent years due to its favourable computational complexity. Many classes of optimisation problems, including linear programs, conic-quadratic programs and mixed-integer linear programs, allow for robust formulations that can be solved with essentially the same effort as the corresponding deterministic problems [4, 10]. Robust optimisation has been extended to accommodate chance constraints [16] and various measures of risk [46]. Similar to chance-constrained stochastic programming, however, robust optimisation traditionally only accounts for here-and-now decisions. The robust optimisation literature is surveyed in [4, 11].

The above-mentioned approaches to decision-making under uncertainty have in common that they either do not account for recourse actions or that they are computationally demanding. Neglecting recourse possibilities can lead to overly conservative decisions, as operations managers typically have the option to revise plans once new information becomes available. Likewise, solution techniques must scale gracefully with the size of the problem to be useful for practitioners.

To date, this dichotomy between tractability and modelling accuracy has proved to be a major obstacle to the successful application of stochastic programming and robust optimisation to real-life operations management problems.

In this paper, we make a case for the decision rule approach to decision-making under uncertainty. The decision rule approach allows us to approximately solve two-stage and multi-stage recourse problems, chance-constrained problems and robust optimisation problems in a computationally efficient way. The approach inherits the tractability of robust optimisation, but it permits a faithful modelling of the dynamic aspects of a decision problem.

The purpose of this paper is twofold. Firstly, we survey the theoretical findings relating to the decision rule approach. To date, these results are scattered over several papers, and due to their technical terminology they have attracted little attention outside the mathematical optimisation community. We aim to present the key findings of this field in a comprehensible and unified framework, and we extend the methodology to problems with integer here-and-now decisions. The second purpose of the paper is to critically examine the potential of the decision rule approach in operations management. To this end, we apply the approach to two stylised case studies, and we compare decision rules with alternative methods for decision-making under uncertainty.

Stochastic programming techniques have a long history of applications in operations management. Recourse problems and chance-constrained problems have been developed for the design of supply chains [33, 43, 55], the planning of facility layouts [22, 39, 58], production planning and inventory control [3, 45], production scheduling [50, 62] and project management [21, 34]. There are much fewer applications of robust optimisation techniques to operations management problems. The paper [12] studies the optimal control of a multi-echelon supply chain that is defined on a network. A robust inventory control problem with ambiguous demands is studied in [56]. Extensions to simultaneous inventory control and dynamic pricing, as well as to two-echelon supply chains with flexible commitment contracts, can be found in [2, 5]. The papers [35, 40] develop robust formulations of well-known deterministic production scheduling problems, and robust project management problems are studied in [19, 63].

Besides stochastic programming and robust optimisation, decision problems with uncertain parameters are often formulated as stochastic dynamic programs.

For textbook introductions to stochastic dynamic programming, as well as its tractable approximation via neuro-dynamic programming, the reader is referred to [6, 51].

The remainder of the paper is structured as follows. In Section 2, we formulate the decision problems that we are interested in. We introduce linear decision rules in Section 3, and we generalise the approach to nonlinear decision rules in Section 4. Section 5 describes an extension to integer here-and-now decisions. We close with two operations management case studies in Section 6.

Notation For a square matrix $A \in \mathbb{R}^{n \times n}$ we denote by $\mathrm{Tr}(A)$ the trace of $A$, that is, the sum of its diagonal entries. For $A, B \in \mathbb{R}^{m \times n}$ the inequalities $A \leq B$ and $A \geq B$ are understood to hold component-wise, and $A^\top$ denotes the transpose of $A$. For any real number $c$ we define $c^+ = \max\{c, 0\}$. We denote by $\mathrm{conv}\, X$ the convex hull of a set $X$.

2 Decision-Making under Uncertainty

2.1 Problem Formulation

We study dynamic decision problems under uncertainty of the following general structure. A decision maker first observes an uncertain parameter $\xi_1 \in \mathbb{R}^{k_1}$ and then takes a decision $x_1(\xi_1) \in \mathbb{R}^{n_1}$. Subsequently, a second uncertain parameter $\xi_2 \in \mathbb{R}^{k_2}$ is revealed, in response to which the decision maker takes a second decision $x_2(\xi_1, \xi_2) \in \mathbb{R}^{n_2}$. This sequence of alternating observations and decisions extends over $T$ stages, where at any stage $t = 1, \dots, T$ the decision maker observes an uncertain parameter $\xi_t$ and selects a decision $x_t(\xi_1, \dots, \xi_t)$. We emphasise that a decision taken at stage $t$ depends on the whole history of past observations $\xi_1, \dots, \xi_t$, but it may not depend on the future observations $\xi_{t+1}, \dots, \xi_T$. This feature reflects the non-anticipative nature of the dynamic decision problem and ensures its causality. To simplify notation, we define the history of observations up to time $t$ as $\xi^t = (\xi_1, \dots, \xi_t) \in \mathbb{R}^{k^t}$, where $k^t = \sum_{s=1}^t k_s$. Moreover, we let $\xi = (\xi_1, \dots, \xi_T) \in \mathbb{R}^k$ denote the vector concatenation of all uncertain parameters, where $k = k^T$.

We assume that the decision taken at stage $t$ incurs a linear cost $c_t(\xi^t)^\top x_t(\xi^t)$, where the vector of cost coefficients depends linearly on the observation history, that is, $c_t(\xi^t) = C_t \xi^t$ for some matrix $C_t \in \mathbb{R}^{n_t \times k^t}$. We also assume that the decisions are required to satisfy a set of linear inequality constraints to be detailed below. The decision maker's objective is to select the functions or decision rules $x_1(\cdot), \dots, x_T(\cdot)$, which map observation histories to decisions, such that the expected total cost is minimised while all inequality constraints are satisfied. Formally, this decision problem can be represented as follows.

$$
\begin{array}{lll}
\text{minimise} & \mathbb{E}_\xi\Big(\sum_{t=1}^T c_t(\xi^t)^\top x_t(\xi^t)\Big) & \\[1ex]
\text{subject to} & \mathbb{E}_\xi\Big(\sum_{s=1}^T A_{ts}\, x_s(\xi^s) \,\Big|\, \xi^t\Big) \geq b_t(\xi^t) & \forall \xi \in \Xi,\ t = 1, \dots, T \\[1ex]
& x_t(\xi^t) \geq 0 &
\end{array}
\tag{P}
$$

Here, $\mathbb{E}_\xi(\cdot)$ denotes expectation with respect to the random parameter $\xi$, while $\Xi$ stands for the range of all possible values that $\xi$ can adopt. Below, we will refer to $\Xi$ as the uncertainty set and to any $\xi \in \Xi$ as a scenario. We will henceforth assume that $\Xi$ is a bounded polyhedron of the form $\Xi = \{\xi \in \mathbb{R}^k : W\xi \geq h\}$ for some $W \in \mathbb{R}^{l \times k}$ and $h \in \mathbb{R}^l$. Note that the stage-$t$ decisions $x_t(\xi^t)$ in problem $P$ are parameterised in $\xi^t$, that is, every observation history $\xi^t$ corresponding to some scenario $\xi \in \Xi$ gives rise to $n_t$ ordinary decision variables. Since the polyhedron $\Xi$ typically contains infinitely many scenarios, problem $P$ in fact accommodates infinitely many decision variables.

The inequality constraints in problem $P$ are expressed in terms of deterministic constraint matrices $A_{ts} \in \mathbb{R}^{m_t \times n_s}$ and uncertainty-affected right-hand side vectors $b_t(\xi^t) \in \mathbb{R}^{m_t}$. We assume that $b_t(\xi^t) = B_t \xi^t$ for some matrices $B_t \in \mathbb{R}^{m_t \times k^t}$. Note that the stage-$t$ constraints are conditioned on the stage-$t$ observation history $\xi^t$, where $\mathbb{E}_\xi(\cdot \mid \xi^t)$ denotes conditional expectation with respect to $\xi$ given $\xi^t$. Hence, $\mathbb{E}_\xi(\cdot \mid \xi^t)$ treats $\xi_1, \dots, \xi_t$ as deterministic variables and takes the expectation only with respect to the future observations $\xi_{t+1}, \dots, \xi_T$. This implies that the stage-$t$ constraints are parameterised in $\xi^t$ in a similar fashion to the stage-$t$ decisions. Indeed, every $\xi^t$ corresponding to some scenario $\xi \in \Xi$ gives rise to $m_t$ ordinary linear constraints. Since $\Xi$ typically has infinite cardinality, problem $P$ thus accommodates infinitely many constraints. Intuitively, problem $P$ can therefore be viewed as an infinite-dimensional generalisation of the standard linear program.
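The following small sketch (illustrative assumptions, not from the paper) shows how a simple box uncertainty set can be encoded in the polyhedral form $W\xi \geq h$ used above, together with a membership check for a candidate scenario.

```python
# A small illustrative sketch: encoding a box uncertainty set
# Xi = {xi : lower <= xi <= upper} in the polyhedral form W xi >= h, and
# checking whether a given scenario belongs to it.  All bounds are hypothetical.
import numpy as np

lower = np.array([1.0, -1.0, -1.0])   # the first component is fixed to 1
upper = np.array([1.0,  1.0,  1.0])   # (see Remark 2.1 below)
k = len(lower)

# xi >= lower  and  -xi >= -upper, stacked into a single system W xi >= h.
W = np.vstack((np.eye(k), -np.eye(k)))
h = np.concatenate((lower, -upper))

def in_uncertainty_set(xi, tol=1e-9):
    """Return True if the scenario xi satisfies W xi >= h (up to tol)."""
    return bool(np.all(W @ xi >= h - tol))

print(in_uncertainty_set(np.array([1.0, 0.5, -0.2])))   # True
print(in_uncertainty_set(np.array([1.0, 1.5,  0.0])))   # False
```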

2.2 Expressiveness of Problem P

The general decision problem under uncertainty described in Section 2.1 provides considerable modelling flexibility. Indeed, as we will show in this section, problem $P$ encapsulates conventional deterministic and stochastic linear programs, robust optimisation problems and tight convex approximations of chance-constrained programs as special cases.

Deterministic linear programs If the uncertainty set contains only one single scenario, that is, if $\Xi = \{\xi^*\}$, then problem $P$ reduces to a deterministic linear program. In this case only the decisions and constraints corresponding to $\xi = \xi^*$ are relevant, while all (conditional and unconditional) expectations become redundant and can thus be eliminated. Introducing the finite problem data

$$
A = \begin{pmatrix} A_{11} & \cdots & A_{1T} \\ \vdots & \ddots & \vdots \\ A_{T1} & \cdots & A_{TT} \end{pmatrix}, \qquad
b = \begin{pmatrix} b_1(\xi^{*1}) \\ \vdots \\ b_T(\xi^{*T}) \end{pmatrix}
\qquad \text{and} \qquad
c = \begin{pmatrix} c_1(\xi^{*1}) \\ \vdots \\ c_T(\xi^{*T}) \end{pmatrix},
$$

we can reformulate $P$ as the standard linear program

$$
\text{minimise} \quad c^\top x \qquad \text{subject to} \quad Ax \geq b, \ x \geq 0,
\tag{LP}
$$

whose finite-dimensional decision vector can be identified with $(x_1(\xi^{*1}), \dots, x_T(\xi^{*T}))$. (A small numerical sketch of this single-scenario special case follows Remark 2.1 below.)

Remark 2.1 (Deterministic Decisions and Constraints). Deterministic decisions and constraints can conveniently be incorporated into the general problem $P$ by requiring that $\xi_1$ is equal to 1 for all $\xi \in \Xi$. This can always be enforced by appending the equation $\xi_1 = 1$ to the definition of $\Xi$ and has the effect that, in the first stage, only the decisions and constraints corresponding to the scenario $\xi_1 = 1$ have physical relevance. From now on, we will always assume that $k_1 = 1$ and that any $\xi = (\xi_1, \dots, \xi_T) \in \Xi$ satisfies $\xi_1 = 1$.
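The sketch below makes the single-scenario special case concrete: it solves a tiny instance of the standard linear program (LP) with an off-the-shelf LP solver. The two-stage constraint matrix and right-hand side are hypothetical illustrative data.

```python
# A minimal numerical sketch of the single-scenario case: when Xi = {xi*}, problem P
# collapses to the standard linear program  min c'x  s.t.  Ax >= b, x >= 0.
# The matrices below are hypothetical two-stage data, not taken from the paper.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],     # stage-1 constraint
              [1.0, 1.0]])    # stage-2 constraint coupling both decisions
b = np.array([2.0, 5.0])
c = np.array([1.0, 3.0])

# linprog expects "<=" constraints, so  Ax >= b  is passed as  -Ax <= -b.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * len(c), method="highs")
print("x* =", res.x, " optimal cost =", res.fun)
```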

Stochastic programs Problem $P$ can be specialised to a standard linear multistage stochastic program with recourse if we ensure that the stage-$t$ constraints are not affected by the future decisions $x_{t+1}(\xi^{t+1}), \dots, x_T(\xi^T)$. This is achieved by setting $A_{ts} = 0$ for all $t < s$, which has the effect that the term inside the conditional expectation of the stage-$t$ constraint becomes independent of $\xi_{t+1}, \dots, \xi_T$. Since $\mathbb{E}_\xi(\cdot \mid \xi^t)$ treats $\xi^t$ as a constant, the conditional expectation thus becomes redundant and can be omitted. Therefore, problem $P$ reduces to the following multistage stochastic program in standard form [13, 36].

$$
\begin{array}{lll}
\text{minimise} & \mathbb{E}_\xi\Big(\sum_{t=1}^T c_t(\xi^t)^\top x_t(\xi^t)\Big) & \\[1ex]
\text{subject to} & \displaystyle\sum_{s=1}^t A_{ts}\, x_s(\xi^s) \geq b_t(\xi^t) & \forall \xi \in \Xi,\ t = 1, \dots, T \\[1ex]
& x_t(\xi^t) \geq 0 &
\end{array}
\tag{SP}
$$

Robust optimisation problems If the distribution governing the uncertainty $\xi$ is unknown or if the decision maker is very risk-averse, then it is not possible or unreasonable to minimise expected costs. In these situations a rational decision maker will minimise the worst-case costs, where the worst case (maximum) is evaluated with respect to all possible scenarios $\xi \in \Xi$; see e.g. [30] for a formal justification. As we have discussed in the introduction, such worst-case (robust) optimisation problems traditionally only involve here-and-now decisions [4, 11]. Problem $P$ allows us to formulate a multi-stage generalisation of robust optimisation problems as follows.

$$
\begin{array}{lll}
\text{minimise} & \displaystyle\max_{\xi \in \Xi}\ \sum_{t=1}^T c_t(\xi^t)^\top x_t(\xi^t) & \\[1ex]
\text{subject to} & \displaystyle\sum_{s=1}^t A_{ts}\, x_s(\xi^s) \geq b_t(\xi^t) & \forall \xi \in \Xi,\ t = 1, \dots, T \\[1ex]
& x_t(\xi^t) \geq 0 &
\end{array}
\tag{RO}
$$

In order to see that RO is a special case of $P$, we consider an epigraph reformulation of the worst-case objective,

$$
\max_{\xi \in \Xi}\ \sum_{t=1}^T c_t(\xi^t)^\top x_t(\xi^t)
= \min_{\tau \in \mathbb{R}} \Big\{ \tau : \max_{\xi \in \Xi}\ \sum_{t=1}^T c_t(\xi^t)^\top x_t(\xi^t) \leq \tau \Big\}
= \min_{\tau \in \mathbb{R}} \Big\{ \tau : \sum_{t=1}^T c_t(\xi^t)^\top x_t(\xi^t) \leq \tau \ \ \forall \xi \in \Xi \Big\},
\tag{2.1}
$$

where $\tau \in \mathbb{R}$ represents an auxiliary (deterministic) decision variable. Replacing the worst-case objective in RO with (2.1) transforms the robust optimisation problem RO into a variant of the stochastic programming problem SP with a particularly simple objective function (given by $\tau$). As RO is a special case of SP and SP is a special case of $P$, we conclude that RO is indeed a special case of $P$.

Chance-constrained programs Let $\mathbb{P}_\xi$ be the distribution of $\xi$. We can then formulate a multi-stage generalisation of chance-constrained programs as

$$
\begin{array}{lll}
\text{minimise} & \mathbb{E}_\xi\Big(\sum_{t=1}^T c_t(\xi^t)^\top x_t(\xi^t)\Big) & \\[1ex]
\text{subject to} & \mathbb{P}_\xi\Big(\sum_{t=1}^T a_{it}^\top x_t(\xi^t) \geq b_i(\xi)\Big) \geq 1 - \epsilon_i & i = 1, \dots, I, \\[1ex]
& x_t(\xi^t) \geq 0 & \forall \xi \in \Xi,\ t = 1, \dots, T,
\end{array}
\tag{CC}
$$

where $a_{it} \in \mathbb{R}^{n_t}$, $b_i(\xi) \in \mathbb{R}$ and $\epsilon_i \in (0, 1]$. Here, the $i$th constraint requires that the inequality $\sum_{t=1}^T a_{it}^\top x_t(\xi^t) \geq b_i(\xi)$ be satisfied with probability at least $1 - \epsilon_i$. Chance constraints of this type are useful for modelling risk preferences and safety constraints in engineering applications. Note that in the limit $\epsilon_i \downarrow 0$ a chance constraint reduces to a robust constraint that must hold for all $\xi \in \Xi$. Therefore, chance constraints with $\epsilon_i > 0$ can be viewed as soft versions of the corresponding robust constraints.

We now demonstrate that CC has a tight conservative approximation of the form $P$. To this end, we introduce the loss functions $L_i(\xi) = b_i(\xi) - \sum_{t=1}^T a_{it}^\top x_t(\xi^t)$. The $i$th chance constraint is therefore equivalent to the requirement that the smallest $(1-\epsilon_i)$-quantile of the loss distribution, which we denote by $\mathrm{VaR}_{\epsilon_i}(L_i(\xi))$, is nonpositive. To obtain a conservative approximation for the chance constraint, we introduce the conditional value-at-risk (CVaR) of $L_i(\xi)$ at level $\epsilon_i$, which is defined as

$$
\mathrm{CVaR}_{\epsilon_i}\big(L_i(\xi)\big) = \min_{\beta_i} \Big\{ \beta_i + \tfrac{1}{\epsilon_i}\, \mathbb{E}_\xi\big([L_i(\xi) - \beta_i]^+\big) \Big\}.
$$

Due to its favourable theoretical and computational properties, CVaR has become a popular risk measure in finance. Rockafellar and Uryasev [53] have shown that the optimal $\beta_i$ which solves the minimisation problem in the definition of CVaR coincides with $\mathrm{VaR}_{\epsilon_i}(L_i(\xi))$, and that the CVaR at level $\epsilon_i$ coincides with the conditional expectation of the right tail of the loss distribution above $\mathrm{VaR}_{\epsilon_i}(L_i(\xi))$. Thus, the following implication holds; see also Figure 1.

$$
\mathrm{CVaR}_{\epsilon_i}\big(L_i(\xi)\big) \leq 0 \ \Longrightarrow\ \mathrm{VaR}_{\epsilon_i}\big(L_i(\xi)\big) \leq 0 \ \Longleftrightarrow\ \mathbb{P}_\xi\big(L_i(\xi) \leq 0\big) \geq 1 - \epsilon_i
$$

As pointed out by Nemirovski and Shapiro [48], the CVaR constraint on the left-hand side represents the tightest convex approximation of the chance constraint on the right-hand side of the above expression. By linearising the term $[L_i(\xi) - \beta_i]^+$ in the definition of CVaR, the $i$th CVaR constraint can be re-expressed as the following system of linear inequalities,

$$
\beta_i + \tfrac{1}{\epsilon_i}\, \mathbb{E}_\xi\big(z_i(\xi)\big) \leq 0, \qquad
z_i(\xi) \geq b_i(\xi) - \sum_{t=1}^T a_{it}^\top x_t(\xi^t) - \beta_i, \qquad
z_i(\xi) \geq 0,
\tag{2.2}
$$

which involve the deterministic (first-stage) variable $\beta_i \in \mathbb{R}$ and a new stochastic (stage-$T$) variable $z_i(\xi) \in \mathbb{R}$. Replacing each chance constraint in CC with the corresponding system (2.2) of linear inequalities thus results in a problem of type $P$ with expectation constraints. Therefore, chance-constrained problems of the type CC have tight conservative approximations within the class of problems $P$.
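The sketch below illustrates the Rockafellar–Uryasev construction used above on sampled losses; all data are synthetic and the estimator is only meant to show how the inner minimisation over $\beta$ can be evaluated in practice.

```python
# A small sketch (illustrative, not from the paper) of the Rockafellar-Uryasev
# formula  CVaR_eps(L) = min_beta { beta + E[(L - beta)^+] / eps },  estimated
# from samples of the loss L(xi).  Checking CVaR <= 0 then gives a conservative
# certificate for the chance constraint P(L(xi) <= 0) >= 1 - eps.
import numpy as np

def cvar_estimate(losses, eps):
    """Sample-average CVaR at level eps via the Rockafellar-Uryasev formula.
    The objective is convex piecewise linear in beta with kinks at the sample
    values, so evaluating it at every sample value yields the minimum."""
    losses = np.asarray(losses, dtype=float)
    objective = lambda beta: beta + np.mean(np.maximum(losses - beta, 0.0)) / eps
    return min(objective(beta) for beta in losses)

rng = np.random.default_rng(0)
losses = rng.normal(loc=-1.0, scale=0.5, size=2_000)   # hypothetical loss samples
eps = 0.05
print("CVaR_0.05 estimate:", cvar_estimate(losses, eps))
print("empirical P(L <= 0):", np.mean(losses <= 0.0))  # compare with 1 - eps
```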

Figure 1: Relationship between $\mathrm{VaR}_{\epsilon_i}(L_i(\xi))$ and $\mathrm{CVaR}_{\epsilon_i}(L_i(\xi))$ for each constraint $i$, at level $\epsilon_i$.

We remark that the general decision problem $P$ is flexible enough to also cover hybrid models which combine various aspects of deterministic, stochastic, robust and chance-constrained programs in the same model.

3 The Decision Rule Approach

In this section, we derive a tractable approximation to the decision problem $P$ by restricting the space of the decision rules $x_t(\cdot)$, $t = 1, \dots, T$, to those that exhibit a linear dependence on the observed problem parameters $\xi^t$. The second part of the section explains how we can efficiently measure the optimality gap that we incur through this simplification.

3.1 Determining the Best Linear Decision Rule

Problem $P$ generalises a number of difficult optimisation problems, including multi-stage stochastic programs. It is therefore clear that problem $P$ is severely computationally intractable itself.

A simple but effective approach to improve the tractability of problem $P$ is to restrict the space of the decision rules $x_t(\cdot)$, $t = 1, \dots, T$, to those that exhibit a linear dependence on the observation history $\xi^t$. Remember that we stipulated in Remark 2.1 that $k_1 = 1$ and $\xi_1 = 1$ for all $\xi \in \Xi$. This implies that we actually optimise over all affine (i.e., linear plus a constant) decision functions of the non-degenerate uncertain parameters $\xi_2, \dots, \xi_T$ if we optimise over all linear functions of $\xi = (\xi_1, \dots, \xi_T)$.

In the rest of the paper we will assume that the conditional expectations $\mathbb{E}_\xi(\xi \mid \xi^t)$ are linear in the sense that there exist matrices $M_t \in \mathbb{R}^{k \times k^t}$ such that $\mathbb{E}_\xi(\xi \mid \xi^t) = M_t \xi^t$ for all $\xi \in \Xi$. This assumption is non-restrictive. It is automatically satisfied, for instance, if the random parameters $\xi_t$ are mutually independent. In this case the conditional expectations reduce to simpler unconditional expectations, and thus we find $\mathbb{E}_\xi(\xi \mid \xi^t) = (\xi_1, \dots, \xi_t, \mu_{t+1}, \dots, \mu_T)$, where $\mu_t$ denotes the unconditional mean value of $\xi_t$. As $\xi_1 = 1$ for all $\xi \in \Xi$, we thus have

$$
\mathbb{E}_\xi(\xi \mid \xi^t) = (\xi_1, \dots, \xi_t, \mu_{t+1}\xi_1, \dots, \mu_T\xi_1) \qquad \forall \xi \in \Xi.
$$

The last expression is manifestly linear in $\xi^t$. It is easy to verify that the conditional expectations remain linear when the process of the random parameters $\xi_t$ belongs to the large class of autoregressive moving-average models.

For the further argumentation, we define the truncation operator $P_t : \mathbb{R}^k \to \mathbb{R}^{k^t}$ through $P_t \xi = \xi^t$. Thus, $P_t$ maps any scenario $\xi$ to the corresponding observation history $\xi^t$ up to stage $t$. If we model the decision rule $x_t(\xi^t)$ as a linear function of $\xi^t$, it can thus be expressed as $x_t(\xi^t) = X_t \xi^t = X_t P_t \xi$ for some matrix $X_t \in \mathbb{R}^{n_t \times k^t}$. Substituting these linear decision rules into $P$ yields the following approximate problem.

$$
\begin{array}{lll}
\text{minimise} & \mathbb{E}_\xi\Big(\sum_{t=1}^T c_t(\xi^t)^\top X_t P_t \xi\Big) & \\[1ex]
\text{subject to} & \mathbb{E}_\xi\Big(\sum_{s=1}^T A_{ts}\, X_s P_s \xi \,\Big|\, \xi^t\Big) \geq b_t(\xi^t) & \forall \xi \in \Xi,\ t = 1, \dots, T \\[1ex]
& X_t P_t\, \xi \geq 0 &
\end{array}
\tag{$P_u$}
$$
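The sketch below (illustrative stage dimensions, not from the paper) constructs the truncation operators $P_t$ as selection matrices and evaluates a linear decision rule $x_t(\xi^t) = X_t P_t \xi$ on a sampled scenario, making the non-anticipativity restriction explicit.

```python
# An illustrative sketch of the objects just introduced: the truncation operators
# P_t that map a full scenario xi to the history xi^t, and a linear decision rule
# x_t(xi^t) = X_t P_t xi.  Dimensions and coefficients are hypothetical.
import numpy as np

k_stage = [1, 2, 2]                    # hypothetical k_t for T = 3 stages (k_1 = 1)
k = sum(k_stage)                       # total dimension of xi
offsets = np.cumsum([0] + k_stage)     # start index of each stage block in xi

def truncation(t):
    """P_t in matrix form: selects the first k^t = k_1 + ... + k_t components."""
    kt = offsets[t]                    # k^t
    return np.hstack((np.eye(kt), np.zeros((kt, k - kt))))

rng = np.random.default_rng(1)
xi = np.concatenate(([1.0], rng.uniform(-1, 1, k - 1)))   # a scenario with xi_1 = 1

n_t = 2                                   # hypothetical number of stage-2 decisions
X2 = rng.normal(size=(n_t, offsets[2]))   # coefficient matrix of the stage-2 rule
x2 = X2 @ truncation(2) @ xi              # depends only on xi_1 and xi_2, not xi_3
print("history xi^2:", truncation(2) @ xi)
print("stage-2 decision x_2(xi^2):", x2)
```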

The objective function of $P_u$ can be simplified and re-expressed in terms of the second-order moment matrix $M = \mathbb{E}_\xi(\xi\xi^\top)$ of the random parameters. Interchanging summation and expectation and using the cyclicity property of the trace operator, we obtain

$$
\mathbb{E}_\xi\Big(\sum_{t=1}^T c_t(\xi^t)^\top X_t P_t \xi\Big)
= \sum_{t=1}^T \mathbb{E}_\xi\big(\xi^\top P_t^\top C_t^\top X_t P_t \xi\big)
= \sum_{t=1}^T \mathbb{E}_\xi\big(\mathrm{Tr}\big[P_t \xi\xi^\top P_t^\top C_t^\top X_t\big]\big)
= \sum_{t=1}^T \mathrm{Tr}\big(P_t M P_t^\top C_t^\top X_t\big).
$$

Similarly, we can reformulate the conditional expectation terms in the constraints of $P_u$ as

$$
\mathbb{E}_\xi\Big(\sum_{s=1}^T A_{ts}\, X_s P_s \xi \,\Big|\, \xi^t\Big) = \sum_{s=1}^T A_{ts}\, X_s P_s M_t P_t\, \xi.
$$

Thus, the linear decision rule problem $P_u$ is equivalent to

$$
\begin{array}{ll}
\text{minimise} & \sum_{t=1}^T \mathrm{Tr}\big(P_t M P_t^\top C_t^\top X_t\big) \\[1ex]
\text{subject to} & \Big(\sum_{s=1}^T A_{ts}\, X_s P_s M_t P_t - B_t P_t\Big)\xi \geq 0 \\[1ex]
& X_t P_t\, \xi \geq 0 \qquad \forall \xi \in \Xi,\ t = 1, \dots, T.
\end{array}
\tag{3.3}
$$

Although problem (3.3) has only finitely many decision variables, that is, the coefficients of the matrices $X_1, \dots, X_T$ encoding the linear decision rules, it is still not suitable for numerical solution as it involves infinitely many constraints parameterised by $\xi \in \Xi$. The following proposition, which captures the essence of robust optimisation, provides the tools for reformulating the $\xi$-dependent constraints in (3.3) in terms of a finite number of linear constraints [4, 11].

Proposition 3.1. For any $p \in \mathbb{N}$ and $Z \in \mathbb{R}^{p \times k}$, the following statements are equivalent.

(i) $Z\xi \geq 0$ for all $\xi \in \Xi$;

(ii) $\exists \Lambda \in \mathbb{R}^{p \times l}$ with $\Lambda \geq 0$, $\Lambda W = Z$, $\Lambda h \geq 0$.

Proof. We denote by $Z_\pi$ the $\pi$th row of the matrix $Z$. Then, statement (i) is equivalent to

$$
\begin{array}{ll}
& Z\xi \geq 0 \ \text{ for all } \xi \ \text{ subject to } \ W\xi \geq h \\[0.5ex]
\Longleftrightarrow & 0 \leq \min_{\xi}\big\{ Z_\pi \xi : W\xi \geq h \big\} \qquad \pi = 1, \dots, p \\[0.5ex]
\Longleftrightarrow & 0 \leq \max_{\Lambda_\pi}\big\{ h^\top \Lambda_\pi : W^\top \Lambda_\pi = Z_\pi^\top,\ \Lambda_\pi \geq 0 \big\} \qquad \pi = 1, \dots, p \\[0.5ex]
\Longleftrightarrow & \exists \Lambda_\pi \ \text{ with } \ W^\top \Lambda_\pi = Z_\pi^\top,\ h^\top \Lambda_\pi \geq 0,\ \Lambda_\pi \geq 0 \qquad \pi = 1, \dots, p
\end{array}
\tag{3.4}
$$

The equivalence in the third line follows from linear programming duality. Interpreting $\Lambda_\pi^\top$ as the $\pi$th row of a new matrix $\Lambda \in \mathbb{R}^{p \times l}$ shows that the last line in (3.4) is equivalent to assertion (ii). Thus, the claim follows.
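The following numerical sanity check (hypothetical data, not from the paper) illustrates Proposition 3.1: for each row of a candidate matrix $Z$ it solves the worst-case LP over $\Xi$ and the corresponding dual multiplier LP, whose optimal values coincide by linear programming duality.

```python
# A numerical check (illustrative data) of Proposition 3.1: Z xi >= 0 for all
# xi in Xi = {xi : W xi >= h} holds iff each row of Z admits a nonnegative
# multiplier vector Lambda_pi with W' Lambda_pi = Z_pi' and h' Lambda_pi >= 0.
import numpy as np
from scipy.optimize import linprog

# Hypothetical uncertainty set: xi_1 = 1, xi_2 in [-1, 1], written as W xi >= h.
W = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
h = np.array([1.0, -1.0, -1.0, -1.0])
Z = np.array([[1.0, 0.5], [2.0, -1.0]])     # constraint matrix to be certified

free = [(None, None)] * W.shape[1]
for pi, z_row in enumerate(Z):
    # (i) worst case of z_row' xi over Xi (xi is a free variable here).
    primal = linprog(z_row, A_ub=-W, b_ub=-h, bounds=free, method="highs")
    # (ii) dual certificate: max h' lam  s.t.  W' lam = z_row, lam >= 0.
    dual = linprog(-h, A_eq=W.T, b_eq=z_row, bounds=[(0, None)] * len(h),
                   method="highs")
    # The two values coincide by LP duality; the row is feasible for every
    # scenario exactly when they are nonnegative, in line with Proposition 3.1.
    print(f"row {pi}:  min_xi z'xi = {primal.fun:.4f},  max_lam h'lam = {-dual.fun:.4f}")
```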

Using Proposition 3.1, one can reformulate the inequality constraints in (3.3) to obtain

$$
\begin{array}{ll}
\text{minimise} & \sum_{t=1}^T \mathrm{Tr}\big(P_t M P_t^\top C_t^\top X_t\big) \\[1ex]
\text{subject to} & \sum_{s=1}^T A_{ts}\, X_s P_s M_t P_t - B_t P_t = \Lambda_t W, \quad \Lambda_t h \geq 0, \quad \Lambda_t \geq 0 \\[1ex]
& X_t P_t = \Gamma_t W, \quad \Gamma_t h \geq 0, \quad \Gamma_t \geq 0 \qquad t = 1, \dots, T.
\end{array}
\tag{$\tilde P_u$}
$$

The decision variables in $\tilde P_u$ are the entries of the matrices $X_t \in \mathbb{R}^{n_t \times k^t}$, $\Lambda_t \in \mathbb{R}^{m_t \times l}$ and $\Gamma_t \in \mathbb{R}^{n_t \times l}$ for $t = 1, \dots, T$; the second-order moment matrix $M$ enters as problem data (a small sketch of how $M$ can be estimated numerically is given after Remark 3.2 below). Note that the objective function as well as all constraints are linear in these decision variables. Thus, $\tilde P_u$ constitutes a finite linear program, which can be solved efficiently with off-the-shelf solvers such as IBM ILOG CPLEX [1].

A major benefit of using linear decision rules is that the size of the approximating linear program $\tilde P_u$ grows only moderately with the number of time stages. Indeed, the number of variables and constraints is quadratic in $k$, $l$, $m = \sum_{t=1}^T m_t$ and $n = \sum_{t=1}^T n_t$. Note that these numbers usually scale linearly with $T$, and hence the size of $\tilde P_u$ typically grows only quadratically with the number of decision stages.

We close this section with two remarks about alternative approximation methods to convert problem $P$ to a finite linear program that is amenable to numerical solution.

Remark 3.2 (Scenario tree approximation). Instead of approximating the functional form of the decision rules $x_t(\cdot)$, $t = 1, \dots, T$, we can improve the tractability of problem $P$ by replacing the underlying process $\xi_1, \dots, \xi_T$ of random parameters with a discrete stochastic process. The resulting process can be visualised as a scenario tree, which ramifies at all time points at which new problem data is observed (Figure 2). Scenario tree approaches to stochastic programming have been studied extensively over the past decades; see e.g. the survey paper [23], which accounts for earlier developments, while more recent contributions are listed in the official stochastic programming bibliography [60]. In contrast to the decision rule approach, scenario tree methods typically scale exponentially with the number of decision stages. Figure 3 compares the scenario tree and the decision rule approximations.

Figure 2: Scenario tree with samples $\hat\xi^{s_1}, \dots, \hat\xi^{s_9}$.
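As noted above, $\tilde P_u$ takes the second-order moment matrix $M = \mathbb{E}_\xi(\xi\xi^\top)$ as input. The small sketch below (illustrative distribution assumptions) compares a closed-form $M$ for independent uniform components against a Monte Carlo estimate; in applications, $M$ would be computed from whatever distribution is assumed for $\xi$.

```python
# A small sketch of the moment matrix M = E[xi xi'] needed by the finite LP above.
# Assumption: xi_1 = 1 and xi_2, xi_3 independent and uniform on [-1, 1], so M is
# diagonal with entries (1, 1/3, 1/3); we check this against a sampled estimate.
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
xi = np.column_stack((np.ones(N), rng.uniform(-1, 1, (N, 2))))   # samples of xi

M_sampled = xi.T @ xi / N
M_exact = np.diag([1.0, 1 / 3, 1 / 3])   # E[xi_j^2] = 1/3, all cross terms vanish
print(np.round(M_sampled, 3))
print("max deviation from closed form:", np.abs(M_sampled - M_exact).max())
```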

Remark 3.3 (Sample-Based Optimisation). We derived a tractable approximation for problem $P$ in two steps. First, we restricted the decision rules $x_t(\cdot)$, $t = 1, \dots, T$, to be linear functions of the observation histories $\xi^t$. Afterwards, we used linear programming duality to obtain a finite problem. We can derive a different approximation for problem $P$ if we enforce the semi-infinite constraints in $P_u$ only over a finite subset of samples $\{\hat\xi^{s_1}, \dots, \hat\xi^{s_K}\} \subseteq \Xi$. It has been shown in [14, 61] that a modest number $K$ of samples suffices to satisfy the semi-infinite constraints in $P_u$ with high probability. The advantage of such sampling-based approaches is that they allow us to model more general dependencies between the problem data $(A, b, c)$ and the random parameters. However, we are not aware of any methods to measure the optimality gap that we incur with sample-based methods.

3.2 Suboptimality of the Best Linear Decision Rule

The price that we have to pay for the favourable scaling properties of the linear decision rule approximation is a potential loss of optimality. Indeed, the best linear decision rule can result in a substantially higher objective value than the best general decision rule (which is typically nonlinear). The difference $\Delta_u = \min P_u - \min P$ between the optimal values of the approximate and the original decision problem can be interpreted as the approximation error associated with the linear decision rule approximation. As $P_u$ is a restriction of the minimisation problem $P$, $\Delta_u$ is necessarily nonnegative. Modellers should estimate $\Delta_u$ in order to assess the appropriateness of the linear decision rule approximation for a particular problem instance: a small $\Delta_u$ indicates that implementing the solution of $\tilde P_u$ will incur a negligible loss of optimality, while a large $\Delta_u$ may prompt us to be more cautious and to try to improve the approximation quality (e.g. by using more flexible piecewise linear decision rules; see Section 4).

Figure 3: Comparison of the scenario tree (left) and the decision rule approximation (right). Scenario trees replace the process $\xi_1, \dots, \xi_T$ of random parameters with a discrete stochastic process. The decision rule approach retains the original stochastic process, but it restricts the functional form of the decision rules $x_t(\cdot)$, $t = 1, \dots, T$.

Generally speaking, there are two ways to measure the approximation error $\Delta_u$. We can derive generic a priori bounds on the maximum value of $\Delta_u$ that can be incurred over a class of instances, or we can measure $\Delta_u$ a posteriori for a specific problem instance. A priori bounds on $\Delta_u$ have a long history. In particular, linear decision rules have been proven to optimally solve the linear quadratic regulator problem [6], while piecewise linear decision rules optimally solve two-stage stochastic programs [28]. More recently, linear decision rules have been shown to optimally solve a class of one-dimensional robust control problems [9] and two-stage robust vehicle routing problems [32]. On the other hand, the worst-case approximation ratio for linear decision rules applied to two-stage robust optimisation problems with $m$ linear constraints has been shown to be of the order $O(\sqrt{m})$, see [7]. Similar results have been derived for two-stage stochastic programs in [8]. Given their scarcity and their somewhat limited scope, it seems fair to say that a priori bounds on $\Delta_u$ are at most indicative of the expressive power of linear and piecewise linear decision rules.

It thus seems natural to consider a posteriori bounds on $\Delta_u$ that exploit the specific structure of individual instances of the problem $P$. Unfortunately, the direct computation of $\Delta_u$ for a specific instance of $P$ would require the solution of $P$ itself, which is intractable. In this section we demonstrate, however, that an upper bound on $\Delta_u$ can be obtained efficiently by studying a dual decision problem associated with $P$.

It is well known that any primal linear program $\min_x \{c^\top x : Ax \geq b,\ x \geq 0\}$ has an associated dual linear program $\max_y \{b^\top y : A^\top y \leq c,\ y \geq 0\}$, which is based on the same problem data $(A, b, c)$, such that the following hold: the minimum of the primal is never smaller than the maximum of the dual (weak duality), and if both the primal and the dual are feasible, then the minimum of the primal coincides with the maximum of the dual (strong duality) [18]. There is a duality theory for decision problems of the type $P$ which is strikingly reminiscent of the duality theory for ordinary linear programs. Following Eisner and Olsen [25], the dual problem corresponding to $P$ can be defined as

$$
\begin{array}{lll}
\text{maximise} & \mathbb{E}_\xi\Big(\sum_{t=1}^T b_t(\xi^t)^\top y_t(\xi^t)\Big) & \\[1ex]
\text{subject to} & \mathbb{E}_\xi\Big(\sum_{s=1}^T A_{st}^\top\, y_s(\xi^s) \,\Big|\, \xi^t\Big) \leq c_t(\xi^t) & \forall \xi \in \Xi,\ t = 1, \dots, T. \\[1ex]
& y_t(\xi^t) \geq 0 &
\end{array}
\tag{D}
$$

Note that the dual maximisation problem $D$ is stated in terms of the same problem data as the primal minimisation problem $P$. As for ordinary linear programs, dualisation transposes the constraint matrices and swaps the roles of the objective function and right-hand side coefficients. Dualisation also reverses the temporal coupling of the decision stages in the sense that the sums in the constraints of $D$ now run over the first index of the constraint matrices. Thus, even if the primal stage-$t$ constraint is independent of $\xi_{t+1}, \dots, \xi_T$, implying that the conditional expectation $\mathbb{E}_\xi(\cdot \mid \xi^t)$ becomes redundant in $P$, the dual stage-$t$ constraint usually still depends on $\xi_{t+1}, \dots, \xi_T$, implying that the conditional expectation $\mathbb{E}_\xi(\cdot \mid \xi^t)$ has a non-trivial effect in $D$. Therefore, in hindsight we realise that the inclusion of conditional expectation constraints in $P$ was necessary to preserve the symmetry of the employed duality scheme.

As in the case of ordinary linear programming, there exist weak and strong duality results for problems $P$ and $D$ [25]. In particular, the minimum of $P$ is never smaller than the maximum of $D$ (weak duality), and if some technical regularity conditions hold, then the minimum of $P$ coincides with the maximum of $D$ (strong duality).

The symmetry between $P$ and $D$ enables us to solve $D$ with the linear decision rule approach that was originally designed for $P$. Indeed, if we model the dual decision rule $y_t(\xi^t)$ as a linear function of $\xi^t$, then it can be expressed as $y_t(\xi^t) = Y_t \xi^t$ for some matrix $Y_t \in \mathbb{R}^{m_t \times k^t}$. Substituting these dual linear decision rules into $D$ yields an approximate problem $D_l$, which can be shown to be equivalent to the following tractable linear program.

$$
\begin{array}{ll}
\text{maximise} & \sum_{t=1}^T \mathrm{Tr}\big(P_t M P_t^\top B_t^\top Y_t\big) \\[1ex]
\text{subject to} & C_t P_t - \sum_{s=1}^T A_{st}^\top\, Y_s P_s M_t P_t = \Phi_t W, \quad \Phi_t h \geq 0, \quad \Phi_t \geq 0 \\[1ex]
& Y_t P_t = \Psi_t W, \quad \Psi_t h \geq 0, \quad \Psi_t \geq 0 \qquad t = 1, \dots, T.
\end{array}
\tag{$\tilde D_l$}
$$

The decision variables in $\tilde D_l$ are the entries of the matrices $Y_t \in \mathbb{R}^{m_t \times k^t}$, $\Phi_t \in \mathbb{R}^{n_t \times l}$ and $\Psi_t \in \mathbb{R}^{m_t \times l}$ for $t = 1, \dots, T$.

In analogy to the primal approximation error $\Delta_u$, the dual approximation error can be defined as $\Delta_l = \max D - \max D_l$. As $D_l$ is a restriction of the maximisation problem $D$, $\Delta_l$ is necessarily nonnegative. It quantifies the loss of optimality of the best linear dual decision rule with respect to the best general dual decision rule. Unfortunately, $\Delta_l$ is usually unknown as its computation would require the solution of the original dual decision problem $D$. However, the joint primal and dual approximation error $\Delta = \min P_u - \max D_l$ is efficiently computable; it merely requires the solution of two tractable finite linear programs. Note that $\Delta$ indeed constitutes an upper bound on both $\Delta_u$ and $\Delta_l$ since

$$
\Delta = \min P_u - \max D_l
= \underbrace{\min P_u - \min P}_{=\,\Delta_u} + \underbrace{\min P - \max D}_{\geq\, 0} + \underbrace{\max D - \max D_l}_{=\,\Delta_l}
\geq \Delta_u + \Delta_l,
$$

where the last inequality follows from weak duality. We conclude that for any decision problem of the type $P$ the best primal and dual linear decision rules can be computed efficiently by solving tractable finite linear programs. The corresponding approximation errors are bounded by $\Delta$, which can also be computed efficiently. Moreover, the optimal values of the approximate problems $\tilde P_u$ and $\tilde D_l$ provide upper and lower bounds on the optimal value of the original problem $P$, respectively.
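In practice, once the two bounding linear programs have been solved, the gap $\Delta$ is obtained by a simple subtraction; the small sketch below (with placeholder optimal values) shows one way to turn it into a relative figure and a rule of thumb for escalating to richer decision rules, with the tolerance being an arbitrary assumption.

```python
# A tiny sketch of how the bound Delta = min P_u - max D_l can be used once the two
# finite LPs have been solved.  The optimal values and the tolerance are placeholders.
def assess_ldr_gap(upper_bound, lower_bound, tolerance=0.05):
    """Return the relative optimality gap of the linear decision rule approximation
    and whether a richer decision rule class (Section 4) seems warranted."""
    gap = upper_bound - lower_bound                       # Delta >= Delta_u + Delta_l
    rel = gap / max(abs(lower_bound), 1e-12)
    return rel, rel > tolerance

rel_gap, escalate = assess_ldr_gap(upper_bound=105.3, lower_bound=101.8)
print(f"relative gap {100 * rel_gap:.1f}%, refine decision rules: {escalate}")
```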

4 Nonlinear Decision Rules

A large value of $\Delta$ indicates that either the primal or the dual approximation (or both) are inadequate. If the loss of optimality of linear decision rules is unacceptably high, modellers will endeavour to find a less conservative (but typically more computationally demanding) approximation. Ideally, one would choose a richer class of decision rules over which to optimise. In this section we show that the techniques developed for linear decision rules can also be used to optimise efficiently over more flexible classes of nonlinear decision rules. The underlying theory has been developed in a series of recent publications [4, 17, 29, 31]. We will motivate the general approach first through an example.

Example 4.1. Assume that a two-dimensional random vector $\xi = (\xi_1, \xi_2)$ is uniformly distributed on $\Xi = \{1\} \times [-1, 1]$. This choice of $\Xi$ satisfies the standard assumption that $\xi_1 = 1$ for all $\xi \in \Xi$. Any scalar linear decision rule is thus representable as $x(\xi) = X_1 \xi_1 + X_2 \xi_2$, where $X_1$ denotes a constant offset (since $\xi_1$ is equal to 1 with certainty), while $X_2$ characterises the sensitivity of the decision with respect to $\xi_2$; see Figure 4 (left). To improve flexibility, one may introduce a breakpoint at $\xi_2 = 0$ and consider piecewise linear continuous decision rules that are linear in $\xi_2$ on the subintervals $[-1, 0]$ and $[0, 1]$, respectively. These decision rules are representable as

$$
x(\xi) = X_1 \xi_1 + X_2 \min\{\xi_2, 0\} + X_3 \max\{\xi_2, 0\},
\tag{4.5}
$$

where $X_1$ again denotes a constant offset, while $X_2$ and $X_3$ characterise the sensitivities of the decision with respect to $\xi_2$ on the subintervals $[-1, 0]$ and $[0, 1]$, respectively; see Figure 4 (centre). We can now define a new set of random variables $\xi'_1 = \xi_1$, $\xi'_2 = \min\{\xi_2, 0\}$ and $\xi'_3 = \max\{\xi_2, 0\}$, which are completely determined by $\xi$. In particular, the support of $\xi' = (\xi'_1, \xi'_2, \xi'_3)$ is given by $\Xi' = \{\xi' \in \mathbb{R}^3 : \xi'_1 = 1,\ \xi'_2 \in [-1, 0],\ \xi'_3 \in [0, 1],\ \xi'_2\, \xi'_3 = 0\}$. Note also that $\xi'$ is uniformly distributed on $\Xi'$. We will henceforth refer to $\xi'$ as the lifted random vector as it ranges over a higher-dimensional lifted space.
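The sketch below replays Example 4.1 in code (with hypothetical coefficients): it maps a scenario to the new variables $(\xi'_1, \xi'_2, \xi'_3)$, recovers $\xi$ through the natural linear map $\xi_2 = \xi'_2 + \xi'_3$, and verifies that the piecewise linear rule (4.5) coincides with a linear rule in the new variables.

```python
# A small sketch of Example 4.1: the map xi -> (xi_1, min(xi_2,0), max(xi_2,0)),
# its linear inverse on the image, and the equivalence between the piecewise
# linear rule (4.5) in xi and a linear rule in the new variables.  The rule
# coefficients below are arbitrary illustrative numbers.
import numpy as np

def lift(xi):
    """Map xi = (xi_1, xi_2) to (xi_1, min(xi_2, 0), max(xi_2, 0))."""
    return np.array([xi[0], min(xi[1], 0.0), max(xi[1], 0.0)])

def retract(xi_lifted):
    """Linear map recovering xi, since min(xi_2,0) + max(xi_2,0) = xi_2."""
    return np.array([xi_lifted[0], xi_lifted[1] + xi_lifted[2]])

X = np.array([0.5, -1.0, 2.0])          # coefficients (X_1, X_2, X_3) of rule (4.5)

def piecewise_rule(xi):
    """Decision rule (4.5) evaluated directly in the original space."""
    return X[0] * xi[0] + X[1] * min(xi[1], 0.0) + X[2] * max(xi[1], 0.0)

for xi2 in (-1.0, -0.25, 0.0, 0.6):
    xi = np.array([1.0, xi2])
    assert np.isclose(piecewise_rule(xi), X @ lift(xi))   # linear rule in xi'
    assert np.allclose(retract(lift(xi)), xi)             # xi recovered linearly
print("piecewise linear rule in xi  ==  linear rule in the lifted variables")
```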

Figure 4: Illustration of the linear and piecewise linear decision rules in the original and the lifted space. Note that the set $\Xi' = L(\Xi)$ is non-convex, represented by the thick line in the right diagram. Since the decision rule $x'(\xi')$ is a linear function of the random parameters $\xi'$, however, it is nonnegative over $\Xi'$ if and only if it is nonnegative over the convex hull of $\Xi'$, which is given by the dark shaded region.

Moreover, the function $L(\xi) = (L_1(\xi), L_2(\xi), L_3(\xi)) = (\xi_1, \min\{\xi_2, 0\}, \max\{\xi_2, 0\})$, which maps $\xi$ to $\xi'$, will be referred to as a lifting. By construction, the piecewise linear decision rule (4.5) in the original space is equivalent to the linear decision rule $x'(\xi') = X_1 \xi'_1 + X_2 \xi'_2 + X_3 \xi'_3$ in the lifted space; see Figure 4 (right). Moreover, due to the linearity of $x'(\xi')$ in $\xi'$, the decision rule $x'(\xi')$ is nonnegative over the nonconvex set $\Xi'$ if and only if $x'(\xi')$ is nonnegative over the convex hull of $\Xi'$. We can therefore replace the nonconvex support $\Xi'$ in the lifted space with its (polyhedral) convex hull, which can be represented as an intersection of halfspaces as required in Section 2.1. Hence, all techniques developed for linear decision rules can also be used for piecewise linear decision rules of the form (4.5).

To solve a general decision problem of the type $P$ in nonlinear decision rules, we define a lifting operator $L(\xi) = (L_1(\xi_1), \dots, L_T(\xi_T))$, where each $L_t(\xi_t)$ represents a continuous function from $\mathbb{R}^{k_t}$ to $\mathbb{R}^{k'_t}$ for some $k'_t \geq k_t$. Using the lifting operator, we can construct a lifted random vector $\xi' = (\xi'_1, \dots, \xi'_T)$, where $\xi'_t = L_t(\xi_t) \in \mathbb{R}^{k'_t}$, $t = 1, \dots, T$, and $k' = k'_1 + \dots + k'_T$. The distribution of the lifted random vector $\xi'$ is completely determined by that of the primitive random vector $\xi$, and the support of $\xi'$ can be defined as $\Xi' = L(\Xi)$.

As for the primitive uncertainties, it proves useful to define observation histories $\xi'^t = (\xi'_1, \dots, \xi'_t)$, $k'^t = k'_1 + \dots + k'_t$, and truncation operators $P'_t : \mathbb{R}^{k'} \to \mathbb{R}^{k'^t}$ which map $\xi'$ to $\xi'^t$, respectively. Our goal is to solve problem $P$ in nonlinear decision rules of the form $x_t(\xi^t) = X'_t P'_t L(\xi)$, which constitute linear combinations of the component functions of the lifting operator. The matrices $X'_t \in \mathbb{R}^{n_t \times k'^t}$ contain the coefficients of these linear combinations, while the truncation operators $P'_t$ eliminate those components of $L(\xi)$ that depend on the future uncertainties $\xi_{t+1}, \dots, \xi_T$, thereby ensuring non-anticipativity. By construction, the nonlinear decision rules $x_t(\xi^t) = X'_t P'_t L(\xi)$ depending on the primitive uncertainties are equivalent to linear decision rules $x'_t(\xi'^t) = X'_t \xi'^t$ depending on the lifted uncertainties. Note that ordinary linear decision rules in the primitive uncertainties can be recovered by choosing a trivial lifting operator $L(\xi) = \xi$.

For the further argumentation we require that the lifting preserves the degeneracy of the first random parameter, that is, $\xi'_1 = 1$ for all $\xi' \in \Xi'$. Moreover, we assume that there is a linear retraction operator $R(\xi') = (R_1(\xi'_1), \dots, R_T(\xi'_T))$, which allows us to express the primitive random vector $\xi$ as a linear function of the lifted random vector $\xi'$. To this end, we assume that each $R_t$ represents a linear function from $\mathbb{R}^{k'_t}$ to $\mathbb{R}^{k_t}$. The best nonlinear decision rule can then be computed by solving the following optimisation problem.

$$
\begin{array}{lll}
\text{minimise} & \mathbb{E}_{\xi'}\Big(\sum_{t=1}^T c_t\big(P_t R(\xi')\big)^\top X'_t P'_t \xi'\Big) & \\[1ex]
\text{subject to} & \mathbb{E}_{\xi'}\Big(\sum_{s=1}^T A_{ts}\, X'_s P'_s \xi' \,\Big|\, \xi'^t\Big) \geq b_t\big(P_t R(\xi')\big) & \forall \xi' \in \Xi',\ t = 1, \dots, T \\[1ex]
& X'_t P'_t\, \xi' \geq 0 &
\end{array}
\tag{$P'_u$}
$$

Note that the observation histories in the objective function and in the right-hand side coefficients have been expressed as $\xi^t = P_t \xi = P_t R(\xi')$. The optimisation variables in $P'_u$ are the entries of the matrices $X'_t \in \mathbb{R}^{n_t \times k'^t}$. The key observation is that the approximate problems $P_u$ and $P'_u$ have exactly the same structure. The only difference is that $\Xi' = L(\Xi)$ is typically not a polyhedron because of the nonlinearity of the lifting operator. In the following, we assume that an exact representation (or outer approximation) of the convex hull of $\Xi'$ is available in the form of $l'$ inequality constraints:

$$
\hat\Xi' := \big\{\xi' \in \mathbb{R}^{k'} : W'\xi' \geq h'\big\},
$$

where $\mathrm{conv}\,\Xi' = \hat\Xi'$ (exact representation) or $\mathrm{conv}\,\Xi' \subseteq \hat\Xi'$ (outer approximation). Such representations can be determined efficiently for polyhedral supports, see [29]. The nonlinear decision rule problem $P'_u$ can then be transformed into a tractable linear program in the same way as the linear decision rule problem $P_u$ was converted to $\tilde P_u$, see Section 3.

$$
\begin{array}{ll}
\text{minimise} & \sum_{t=1}^T \mathrm{Tr}\big(P'_t M' R^\top P_t^\top C_t^\top X'_t\big) \\[1ex]
\text{subject to} & \sum_{s=1}^T A_{ts}\, X'_s P'_s M'_t P'_t - B_t P_t R = \Lambda_t W', \quad \Lambda_t h' \geq 0, \quad \Lambda_t \geq 0 \\[1ex]
& X'_t P'_t = \Gamma_t W', \quad \Gamma_t h' \geq 0, \quad \Gamma_t \geq 0 \qquad t = 1, \dots, T
\end{array}
\tag{$\tilde P'_u$}
$$

Here, the matrix $R \in \mathbb{R}^{k \times k'}$ is defined through $R\xi' = R(\xi')$ for all $\xi' \in \Xi'$, $M' = \mathbb{E}_{\xi'}(\xi'\xi'^\top) \in \mathbb{R}^{k' \times k'}$ denotes the second-order moment matrix associated with $\xi'$, and the conditional expectations are assumed to satisfy $\mathbb{E}_{\xi'}(\xi' \mid \xi'^t) = M'_t \xi'^t$ for some matrices $M'_t \in \mathbb{R}^{k' \times k'^t}$ and all $\xi' \in \Xi'$. The decision variables in $\tilde P'_u$ are the entries of the matrices $X'_t \in \mathbb{R}^{n_t \times k'^t}$, $\Lambda_t \in \mathbb{R}^{m_t \times l'}$ and $\Gamma_t \in \mathbb{R}^{n_t \times l'}$ for $t = 1, \dots, T$. Similarly, we can measure the suboptimality of nonlinear decision rules by solving the following dual approximate problem (see Section 3.2).

$$
\begin{array}{ll}
\text{maximise} & \sum_{t=1}^T \mathrm{Tr}\big(P'_t M' R^\top P_t^\top B_t^\top Y'_t\big) \\[1ex]
\text{subject to} & C_t P_t R - \sum_{s=1}^T A_{st}^\top\, Y'_s P'_s M'_t P'_t = \Phi_t W', \quad \Phi_t h' \geq 0, \quad \Phi_t \geq 0 \\[1ex]
& Y'_t P'_t = \Psi_t W', \quad \Psi_t h' \geq 0, \quad \Psi_t \geq 0 \qquad t = 1, \dots, T
\end{array}
\tag{$\tilde D'_l$}
$$

The decision variables of this problem are the entries of the matrices $Y'_t \in \mathbb{R}^{m_t \times k'^t}$, $\Phi_t \in \mathbb{R}^{n_t \times l'}$ and $\Psi_t \in \mathbb{R}^{m_t \times l'}$ for $t = 1, \dots, T$.

One can show that the finite-dimensional primal approximation $\tilde P'_u$ is equivalent to the semi-infinite primal problem $P'_u$ if $\hat\Xi'$ coincides with $\mathrm{conv}\,\Xi'$, and $\tilde P'_u$ provides an upper bound on the optimal value of $P'_u$ if $\hat\Xi'$ is an outer approximation of $\mathrm{conv}\,\Xi'$. Likewise, the finite-dimensional dual approximation $\tilde D'_l$ is equivalent to the semi-infinite dual problem $D'_l$ (not shown here) if $\hat\Xi'$ coincides with $\mathrm{conv}\,\Xi'$, and $\tilde D'_l$ provides a lower bound on the optimal value of $D'_l$ if $\hat\Xi'$ is an outer approximation of $\mathrm{conv}\,\Xi'$. In particular, the finite-dimensional approximate problems $\tilde P'_u$ and $\tilde D'_l$ still bracket the optimal value of the problem $P$ in nonlinear decision rules if we employ an outer approximation $\hat\Xi'$ of the convex hull of $\Xi'$. The situation is illustrated in Figure 5.

Figure 5: Relationship between the primal bounds $\tilde P_u^{\,i}$ and the dual bounds $\tilde D_l^{\,i}$ for an exact representation of $\mathrm{conv}\,\Xi'$ ($i = 1$) and an outer approximation of $\mathrm{conv}\,\Xi'$ ($i = 2$).

5 Incorporating Integer Decisions

Optimisation problems often involve decisions that are modelled through integer variables. These problems are still amenable to the decision rule techniques described in Sections 3 and 4 if the integer variables do not depend on the uncertain problem parameters $\xi$. In order to substantiate this claim, we consider a variant of problem $P$ in which the right-hand side vectors of the constraints may depend on a vector of integer variables $z \in \mathbb{Z}^d$.

$$
\begin{array}{lll}
\text{minimise} & \mathbb{E}_\xi\Big(\sum_{t=1}^T c_t(\xi^t)^\top x_t(\xi^t)\Big) & \\[1ex]
\text{subject to} & z \in Z & \\[0.5ex]
& \mathbb{E}_\xi\Big(\sum_{s=1}^T A_{ts}\, x_s(\xi^s) \,\Big|\, \xi^t\Big) \geq b_t(z, \xi^t) & \forall \xi \in \Xi,\ t = 1, \dots, T \\[1ex]
& x_t(\xi^t) \geq 0 &
\end{array}
\tag{$P^{\mathrm{MI}}$}
$$

As usual, we assume that $b_t(z, \xi^t)$ depends linearly on $\xi^t$, that is, $b_t(z, \xi^t) = B_t(z)\xi^t$ for some matrix $B_t(z) \in \mathbb{R}^{m_t \times k^t}$. We further assume that $B_t(z)$ depends linearly on $z$ and that $Z \subseteq \mathbb{R}^d$ results from the intersection of $\mathbb{Z}^d$ with a convex compact polytope.

In order to apply the decision rule techniques from Sections 3 and 4 to $P^{\mathrm{MI}}$, we study the parametric program $P(z)$, which is obtained from $P^{\mathrm{MI}}$ by fixing the integer variables $z$. By construction, $P(z)$ is a decision problem of the type $P$, which is bounded above and below by the linear programs $\tilde P_u(z)$ and $\tilde D_l(z)$ associated with the primal and dual linear decision rule approximations, respectively. Thus, an upper bound on $P^{\mathrm{MI}}$ is obtained by minimising the optimal value of $\tilde P_u(z)$ over all $z \in Z$. The resulting optimisation problem, which we denote by $\tilde P_u^{\mathrm{MI}}$, represents a mixed-integer linear program (MILP).

$$
\begin{array}{ll}
\text{minimise} & \sum_{t=1}^T \mathrm{Tr}\big(P_t M P_t^\top C_t^\top X_t\big) \\[1ex]
\text{subject to} & z \in Z \\[0.5ex]
& \sum_{s=1}^T A_{ts}\, X_s P_s M_t P_t - B_t(z) P_t = \Lambda_t W, \quad \Lambda_t h \geq 0, \quad \Lambda_t \geq 0 \\[1ex]
& X_t P_t = \Gamma_t W, \quad \Gamma_t h \geq 0, \quad \Gamma_t \geq 0 \qquad t = 1, \dots, T
\end{array}
\tag{$\tilde P_u^{\mathrm{MI}}$}
$$

Similarly, a lower bound on $P^{\mathrm{MI}}$ is obtained by minimising the optimal value of $\tilde D_l(z)$ over all $z \in Z$. The resulting min-max problem has a bilinear objective function that is linear in the integer variables $z$ and in the coefficients of the dual decision rules $Y_t$. In order to convert this problem to an MILP, we follow the exposition in [38] and dualise $\tilde D_l(z)$. One can show that strong duality holds whenever the original problem $P^{\mathrm{MI}}$ is feasible. By construction, we thus obtain a lower bound on $P^{\mathrm{MI}}$ by minimising the dual of $\tilde D_l(z)$ over all $z \in Z$. The resulting optimisation problem, which we denote by $\tilde P_l^{\mathrm{MI}}$, again represents an MILP.

$$
\begin{array}{ll}
\text{minimise} & \sum_{t=1}^T \mathrm{Tr}\big(P_t M P_t^\top C_t^\top X_t\big) \\[1ex]
\text{subject to} & z \in Z \\[0.5ex]
& \sum_{s=1}^T A_{ts}\, X_s P_s M_t P_t + S_t P_t = B_t(z) P_t \\[1ex]
& \big(W - h e_1^\top\big)\, M P_t^\top X_t^\top \geq 0 \\[1ex]
& \big(W - h e_1^\top\big)\, M P_t^\top S_t^\top \geq 0 \qquad t = 1, \dots, T
\end{array}
\tag{$\tilde P_l^{\mathrm{MI}}$}
$$

Here, $e_1$ denotes the first standard basis vector. The optimisation variables in $\tilde P_l^{\mathrm{MI}}$ are the matrices $X_t \in \mathbb{R}^{n_t \times k^t}$ and $S_t \in \mathbb{R}^{m_t \times k^t}$, as well as the binary variables $z \in Z$. Problem $P^{\mathrm{MI}}$ is also amenable to the refined approximation methods based on piecewise linear decision rules as discussed in Section 4. Further details can be found in [29].

Remark 5.1. We assumed that the integer decisions $z$ in problem $P^{\mathrm{MI}}$ do not impact the coefficients $c_t(\cdot)$ and $A$ of the objective function and the constraints, respectively. It is straightforward to apply the techniques presented in this section to a generalised problem where the objective function and constraint coefficients depend linearly on $z$. Using a Big-M reformulation, the resulting primal and dual approximate problems can again be cast as MILPs.

6 Case Studies

In the following, we apply the decision rule approach to two well-known operations management problems, and we compare the method with alternative approaches to account for data uncertainty. All of our numerical results are obtained using the IBM ILOG CPLEX 12 optimisation package on a dual-core 2.4GHz machine with 4GB RAM [1].

6.1 Production Planning

Our first case study concerns a medium-term production planning problem for a multi-product plant with uncertain customer demands and backlogging. We assume that the plant consists of a single processing unit that is capable of manufacturing different products in a continuous single-stage process. We first elaborate a formulation that disregards changeovers, and we afterwards extend the model to sequence-dependent changeover times and costs.

We wish to determine a production plan that maximises the expected profit for a set of products $\mathcal{I}$ and a weekly planning horizon $\mathcal{T} = \{1, \dots, T\}$. To this end, we denote by $p_{ti}$ the amount of product $i \in \mathcal{I}$ that is produced during week $t \in \mathcal{T} \setminus \{T\}$. The processing unit can manufacture $r_i$ units of product $i$ per hour, and it has an uptime of $R$ hours per week. At the beginning of each week $t \in \mathcal{T}$, we observe the demand $\xi_{ti}$ that arises for product $i$ during week $t$. We assume that the demands $\xi_{1i}$ in the first week are deterministic, while the other demands $\xi_{ti}$, $t > 1$, are stochastic. Having observed the demands $\xi_{ti}$, we then decide on the quantity $s_{ti}$ of product $i$ that we sell during week $t$ at a unit price $P_{ti}$. We also determine the orders $b_{ti}$ for product $i$ that we backlog during week $t$ at a unit cost $CB_{ti}$. We assume that the sales $s_{ti}$ in week $t$ must be served from the stock produced in week $t-1$ or before. Once the sales decisions $s_{ti}$ for week $t$ have been made, the inventory level $I_{ti}$ for product $i$ during week $t$ is known. Each unit of product $i$ held during period $t$ leads to inventory holding costs $CI_{ti}$, and we require that the inventory levels $I_{ti}$ satisfy the lower and upper inventory bounds $\underline{I}_{ti}$ and $\overline{I}_{ti}$, respectively. Deterministic versions of this problem have been studied in [15, 41]. For literature surveys on related production planning problems, we refer to [27, 62]. The temporal structure of the problem is illustrated in Figure 6.

Figure 6: Temporal structure of the production planning model. The decisions with subscript $t$ may depend on all demands realised in weeks $1, \dots, t$.

We can formulate the production planning problem as follows.

$$
\begin{array}{lll}
\text{maximise} & \mathbb{E}\Big[\displaystyle\sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} \Big( P_{ti}\, s_{ti}(\xi^t) - CB_{ti}\, b_{ti}(\xi^t) - CI_{ti}\, I_{ti}(\xi^t) \Big)\Big] & \\[2ex]
\text{subject to} & \displaystyle\sum_{i \in \mathcal{I}} p_{ti}(\xi^t)/r_i \leq R, \quad p_{ti}(\xi^t) \geq 0 & t \in \mathcal{T} \setminus \{T\},\ i \in \mathcal{I} \\[1.5ex]
& b_{ti}(\xi^t) = b_{t-1,i}(\xi^{t-1}) + \xi_{ti} - s_{ti}(\xi^t) & \\[0.5ex]
& I_{ti}(\xi^t) = I_{t-1,i}(\xi^{t-1}) + p_{t-1,i}(\xi^{t-1}) - s_{ti}(\xi^t) & t \in \mathcal{T} \setminus \{1\},\ i \in \mathcal{I} \\[1ex]
& b_{1i}(\xi^1) = b_{0i} + \xi_{1i} - s_{1i}(\xi^1), \quad I_{1i}(\xi^1) = I_{0i} - s_{1i}(\xi^1) & i \in \mathcal{I} \\[1ex]
& b_{ti}(\xi^t) \geq 0, \quad s_{ti}(\xi^t) \geq 0, \quad \underline{I}_{ti} \leq I_{ti}(\xi^t) \leq \overline{I}_{ti} & t \in \mathcal{T},\ i \in \mathcal{I}
\end{array}
$$

We require the constraints to be satisfied for all realisations $\xi \in \Xi$ of the uncertain customer demands. The parameters $b_{0i}$ and $I_{0i}$ specify the initial backlog and inventory, respectively. One can easily show that the production planning problem is an instance of the problem $P$ studied in Section 2.1. The same applies to variations of the problem where the prices and/or costs are uncertain, as well as variations where the product demand is only known at the end of each week. For the sake of brevity, we disregard these variants here.

So far, our production planning problem does not account for changeovers between consecutively manufactured products. Frequent changeovers are undesirable as the involved clean-up, set-up and start-up activities result in both delays and costs. To incorporate changeovers, we follow the approach presented in [41] and introduce binary variables $c_{tij}$, $t \in \mathcal{T} \setminus \{T\}$ and $i, j \in \mathcal{I}$, that indicate whether a changeover from product $i$ to product $j$ occurs in week $t$. Likewise, we introduce binary variables, $t \in \mathcal{T} \setminus \{T\}$ and $i, j \in \mathcal{I}$, that …
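The following hedged sketch solves a deliberately simplified instance of the model formulated above, before the changeover extension: a single product, deterministic demands and illustrative cost and capacity numbers, assembled as one linear program with scipy rather than the CPLEX setup used in the paper. Handling uncertain demands would require the decision rule machinery of Section 3 on top of this nominal model.

```python
# A hedged sketch (single product, deterministic demands, illustrative numbers) of
# the nominal production planning model: weekly production, sales, backlog and
# inventory with the balance equations and bounds stated above.
import numpy as np
from scipy.optimize import linprog

T = 4                                   # weeks
d = np.array([60.0, 80.0, 70.0, 90.0])  # deterministic demand per week
price, cb, ci = 10.0, 4.0, 1.0          # unit price, backlog cost, holding cost
rate, uptime = 2.0, 60.0                # units per hour, hours of uptime per week
b0, I0 = 0.0, 100.0                     # initial backlog and inventory
I_lo, I_hi = 0.0, 200.0                 # inventory bounds

# Variable layout: x = [s_1..s_T, b_1..b_T, I_1..I_T, p_1..p_{T-1}].
nS, nB, nI, nP = T, T, T, T - 1
n = nS + nB + nI + nP
s_of = lambda t: t                      # index helpers (t = 0, ..., T-1)
b_of = lambda t: nS + t
I_of = lambda t: nS + nB + t
p_of = lambda t: nS + nB + nI + t       # production in week t+1

A_eq, b_eq = [], []
for t in range(T):
    # Backlog balance:  b_t + s_t - b_{t-1} = demand_t   (b_0 given).
    row = np.zeros(n); row[b_of(t)] = 1.0; row[s_of(t)] = 1.0
    rhs = d[t] + (b0 if t == 0 else 0.0)
    if t > 0: row[b_of(t - 1)] = -1.0
    A_eq.append(row); b_eq.append(rhs)
    # Inventory balance:  I_t + s_t - I_{t-1} - p_{t-1} = 0   (I_0 given, no p_0).
    row = np.zeros(n); row[I_of(t)] = 1.0; row[s_of(t)] = 1.0
    rhs = I0 if t == 0 else 0.0
    if t > 0:
        row[I_of(t - 1)] = -1.0
        row[p_of(t - 1)] = -1.0
    A_eq.append(row); b_eq.append(rhs)

# Maximise revenue minus backlog and holding costs  <=>  minimise the negation.
c = np.zeros(n)
c[:nS] = -price; c[nS:nS + nB] = cb; c[nS + nB:nS + nB + nI] = ci

bounds = ([(0, None)] * nS + [(0, None)] * nB
          + [(I_lo, I_hi)] * nI + [(0, rate * uptime)] * nP)  # capacity p <= r*R
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds,
              method="highs")
print("weekly production:", np.round(res.x[-nP:], 1), " profit:", round(-res.fun, 1))
```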


More information

On deterministic reformulations of distributionally robust joint chance constrained optimization problems

On deterministic reformulations of distributionally robust joint chance constrained optimization problems On deterministic reformulations of distributionally robust joint chance constrained optimization problems Weijun Xie and Shabbir Ahmed School of Industrial & Systems Engineering Georgia Institute of Technology,

More information

Computational Integer Programming. Lecture 2: Modeling and Formulation. Dr. Ted Ralphs

Computational Integer Programming. Lecture 2: Modeling and Formulation. Dr. Ted Ralphs Computational Integer Programming Lecture 2: Modeling and Formulation Dr. Ted Ralphs Computational MILP Lecture 2 1 Reading for This Lecture N&W Sections I.1.1-I.1.6 Wolsey Chapter 1 CCZ Chapter 2 Computational

More information

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Weijun Xie 1, Shabbir Ahmed 2, Ruiwei Jiang 3 1 Department of Industrial and Systems Engineering, Virginia Tech,

More information

MULTIPLE CHOICE QUESTIONS DECISION SCIENCE

MULTIPLE CHOICE QUESTIONS DECISION SCIENCE MULTIPLE CHOICE QUESTIONS DECISION SCIENCE 1. Decision Science approach is a. Multi-disciplinary b. Scientific c. Intuitive 2. For analyzing a problem, decision-makers should study a. Its qualitative aspects

More information

A Principled Approach to Mixed Integer/Linear Problem Formulation

A Principled Approach to Mixed Integer/Linear Problem Formulation A Principled Approach to Mixed Integer/Linear Problem Formulation J N Hooker September 9, 2008 Abstract We view mixed integer/linear problem formulation as a process of identifying disjunctive and knapsack

More information

Robust Multi-Stage Decision Making

Robust Multi-Stage Decision Making INFORMS 2015 c 2015 INFORMS isbn 978-0-9843378-8-0 Robust Multi-Stage Decision Making Erick Delage HEC Montréal, Department of Decision Sciences, Montréal, Québec, Canada, erick.delage@hec.ca Dan A. Iancu

More information

Distributionally Robust Discrete Optimization with Entropic Value-at-Risk

Distributionally Robust Discrete Optimization with Entropic Value-at-Risk Distributionally Robust Discrete Optimization with Entropic Value-at-Risk Daniel Zhuoyu Long Department of SEEM, The Chinese University of Hong Kong, zylong@se.cuhk.edu.hk Jin Qi NUS Business School, National

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization

A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization Dimitris Bertsimas Sloan School of Management and Operations Research Center, Massachusetts

More information

Robust goal programming

Robust goal programming Control and Cybernetics vol. 33 (2004) No. 3 Robust goal programming by Dorota Kuchta Institute of Industrial Engineering Wroclaw University of Technology Smoluchowskiego 25, 50-371 Wroc law, Poland Abstract:

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

Linear Programming in Matrix Form

Linear Programming in Matrix Form Linear Programming in Matrix Form Appendix B We first introduce matrix concepts in linear programming by developing a variation of the simplex method called the revised simplex method. This algorithm,

More information

Ambiguous Joint Chance Constraints under Mean and Dispersion Information

Ambiguous Joint Chance Constraints under Mean and Dispersion Information Ambiguous Joint Chance Constraints under Mean and Dispersion Information Grani A. Hanasusanto 1, Vladimir Roitch 2, Daniel Kuhn 3, and Wolfram Wiesemann 4 1 Graduate Program in Operations Research and

More information

On the Approximate Linear Programming Approach for Network Revenue Management Problems

On the Approximate Linear Programming Approach for Network Revenue Management Problems On the Approximate Linear Programming Approach for Network Revenue Management Problems Chaoxu Tong School of Operations Research and Information Engineering, Cornell University, Ithaca, New York 14853,

More information

THE EXISTENCE AND USEFULNESS OF EQUALITY CUTS IN THE MULTI-DEMAND MULTIDIMENSIONAL KNAPSACK PROBLEM LEVI DELISSA. B.S., Kansas State University, 2014

THE EXISTENCE AND USEFULNESS OF EQUALITY CUTS IN THE MULTI-DEMAND MULTIDIMENSIONAL KNAPSACK PROBLEM LEVI DELISSA. B.S., Kansas State University, 2014 THE EXISTENCE AND USEFULNESS OF EQUALITY CUTS IN THE MULTI-DEMAND MULTIDIMENSIONAL KNAPSACK PROBLEM by LEVI DELISSA B.S., Kansas State University, 2014 A THESIS submitted in partial fulfillment of the

More information

Birgit Rudloff Operations Research and Financial Engineering, Princeton University

Birgit Rudloff Operations Research and Financial Engineering, Princeton University TIME CONSISTENT RISK AVERSE DYNAMIC DECISION MODELS: AN ECONOMIC INTERPRETATION Birgit Rudloff Operations Research and Financial Engineering, Princeton University brudloff@princeton.edu Alexandre Street

More information

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and homework.

More information

Chapter 1. Preliminaries

Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

Distributionally Robust Optimization with ROME (part 1)

Distributionally Robust Optimization with ROME (part 1) Distributionally Robust Optimization with ROME (part 1) Joel Goh Melvyn Sim Department of Decision Sciences NUS Business School, Singapore 18 Jun 2009 NUS Business School Guest Lecture J. Goh, M. Sim (NUS)

More information

An introductory example

An introductory example CS1 Lecture 9 An introductory example Suppose that a company that produces three products wishes to decide the level of production of each so as to maximize profits. Let x 1 be the amount of Product 1

More information

Handout 8: Dealing with Data Uncertainty

Handout 8: Dealing with Data Uncertainty MFE 5100: Optimization 2015 16 First Term Handout 8: Dealing with Data Uncertainty Instructor: Anthony Man Cho So December 1, 2015 1 Introduction Conic linear programming CLP, and in particular, semidefinite

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Robust conic quadratic programming with ellipsoidal uncertainties

Robust conic quadratic programming with ellipsoidal uncertainties Robust conic quadratic programming with ellipsoidal uncertainties Roland Hildebrand (LJK Grenoble 1 / CNRS, Grenoble) KTH, Stockholm; November 13, 2008 1 Uncertain conic programs min x c, x : Ax + b K

More information

An Integer Cutting-Plane Procedure for the Dantzig-Wolfe Decomposition: Theory

An Integer Cutting-Plane Procedure for the Dantzig-Wolfe Decomposition: Theory An Integer Cutting-Plane Procedure for the Dantzig-Wolfe Decomposition: Theory by Troels Martin Range Discussion Papers on Business and Economics No. 10/2006 FURTHER INFORMATION Department of Business

More information

SOME HISTORY OF STOCHASTIC PROGRAMMING

SOME HISTORY OF STOCHASTIC PROGRAMMING SOME HISTORY OF STOCHASTIC PROGRAMMING Early 1950 s: in applications of Linear Programming unknown values of coefficients: demands, technological coefficients, yields, etc. QUOTATION Dantzig, Interfaces

More information

Distributionally robust optimization techniques in batch bayesian optimisation

Distributionally robust optimization techniques in batch bayesian optimisation Distributionally robust optimization techniques in batch bayesian optimisation Nikitas Rontsis June 13, 2016 1 Introduction This report is concerned with performing batch bayesian optimization of an unknown

More information

Operations Research Letters

Operations Research Letters Operations Research Letters 37 (2009) 1 6 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl Duality in robust optimization: Primal worst

More information

Stochastic Programming Models in Design OUTLINE

Stochastic Programming Models in Design OUTLINE Stochastic Programming Models in Design John R. Birge University of Michigan OUTLINE Models General - Farming Structural design Design portfolio General Approximations Solutions Revisions Decision: European

More information

A Stochastic-Oriented NLP Relaxation for Integer Programming

A Stochastic-Oriented NLP Relaxation for Integer Programming A Stochastic-Oriented NLP Relaxation for Integer Programming John Birge University of Chicago (With Mihai Anitescu (ANL/U of C), Cosmin Petra (ANL)) Motivation: The control of energy systems, particularly

More information

Robust Markov Decision Processes

Robust Markov Decision Processes Robust Markov Decision Processes Wolfram Wiesemann, Daniel Kuhn and Berç Rustem February 9, 2012 Abstract Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments.

More information

Robust combinatorial optimization with variable budgeted uncertainty

Robust combinatorial optimization with variable budgeted uncertainty Noname manuscript No. (will be inserted by the editor) Robust combinatorial optimization with variable budgeted uncertainty Michael Poss Received: date / Accepted: date Abstract We introduce a new model

More information

Examples of linear systems and explanation of the term linear. is also a solution to this equation.

Examples of linear systems and explanation of the term linear. is also a solution to this equation. . Linear systems Examples of linear systems and explanation of the term linear. () ax b () a x + a x +... + a x b n n Illustration by another example: The equation x x + 5x 7 has one solution as x 4, x

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

Graph Coloring Inequalities from All-different Systems

Graph Coloring Inequalities from All-different Systems Constraints manuscript No (will be inserted by the editor) Graph Coloring Inequalities from All-different Systems David Bergman J N Hooker Received: date / Accepted: date Abstract We explore the idea of

More information

arxiv: v2 [math.oc] 10 May 2017

arxiv: v2 [math.oc] 10 May 2017 Conic Programming Reformulations of Two-Stage Distributionally Robust Linear Programs over Wasserstein Balls arxiv:1609.07505v2 [math.oc] 10 May 2017 Grani A. Hanasusanto 1 and Daniel Kuhn 2 1 Graduate

More information

CHAPTER 11 Integer Programming, Goal Programming, and Nonlinear Programming

CHAPTER 11 Integer Programming, Goal Programming, and Nonlinear Programming Integer Programming, Goal Programming, and Nonlinear Programming CHAPTER 11 253 CHAPTER 11 Integer Programming, Goal Programming, and Nonlinear Programming TRUE/FALSE 11.1 If conditions require that all

More information

SOME RESOURCE ALLOCATION PROBLEMS

SOME RESOURCE ALLOCATION PROBLEMS SOME RESOURCE ALLOCATION PROBLEMS A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

More information

arxiv: v1 [cs.cc] 5 Dec 2018

arxiv: v1 [cs.cc] 5 Dec 2018 Consistency for 0 1 Programming Danial Davarnia 1 and J. N. Hooker 2 1 Iowa state University davarnia@iastate.edu 2 Carnegie Mellon University jh38@andrew.cmu.edu arxiv:1812.02215v1 [cs.cc] 5 Dec 2018

More information

Mixed Integer Linear Programming Formulations for Probabilistic Constraints

Mixed Integer Linear Programming Formulations for Probabilistic Constraints Mixed Integer Linear Programming Formulations for Probabilistic Constraints J. P. Vielma a,, S. Ahmed b, G. Nemhauser b a Department of Industrial Engineering, University of Pittsburgh 1048 Benedum Hall,

More information

Robust Combinatorial Optimization under Convex and Discrete Cost Uncertainty

Robust Combinatorial Optimization under Convex and Discrete Cost Uncertainty EURO Journal on Computational Optimization manuscript No. (will be inserted by the editor) Robust Combinatorial Optimization under Convex and Discrete Cost Uncertainty Christoph Buchheim Jannis Kurtz Received:

More information

ON MIXING SETS ARISING IN CHANCE-CONSTRAINED PROGRAMMING

ON MIXING SETS ARISING IN CHANCE-CONSTRAINED PROGRAMMING ON MIXING SETS ARISING IN CHANCE-CONSTRAINED PROGRAMMING Abstract. The mixing set with a knapsack constraint arises in deterministic equivalent of chance-constrained programming problems with finite discrete

More information

The Value of Adaptability

The Value of Adaptability The Value of Adaptability Dimitris Bertsimas Constantine Caramanis September 30, 2005 Abstract We consider linear optimization problems with deterministic parameter uncertainty. We consider a departure

More information

OPTIMISATION /09 EXAM PREPARATION GUIDELINES

OPTIMISATION /09 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2 2008/09 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and

More information

Homework 3. Convex Optimization /36-725

Homework 3. Convex Optimization /36-725 Homework 3 Convex Optimization 10-725/36-725 Due Friday October 14 at 5:30pm submitted to Christoph Dann in Gates 8013 (Remember to a submit separate writeup for each problem, with your name at the top)

More information

Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark.

Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. DUALITY THEORY Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. Keywords: Duality, Saddle point, Complementary

More information

Set-based Min-max and Min-min Robustness for Multi-objective Robust Optimization

Set-based Min-max and Min-min Robustness for Multi-objective Robust Optimization Proceedings of the 2017 Industrial and Systems Engineering Research Conference K. Coperich, E. Cudney, H. Nembhard, eds. Set-based Min-max and Min-min Robustness for Multi-objective Robust Optimization

More information

Indicator Constraints in Mixed-Integer Programming

Indicator Constraints in Mixed-Integer Programming Indicator Constraints in Mixed-Integer Programming Andrea Lodi University of Bologna, Italy - andrea.lodi@unibo.it Amaya Nogales-Gómez, Universidad de Sevilla, Spain Pietro Belotti, FICO, UK Matteo Fischetti,

More information

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009 LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix

More information

Boolean Algebras. Chapter 2

Boolean Algebras. Chapter 2 Chapter 2 Boolean Algebras Let X be an arbitrary set and let P(X) be the class of all subsets of X (the power set of X). Three natural set-theoretic operations on P(X) are the binary operations of union

More information

Włodzimierz Ogryczak. Warsaw University of Technology, ICCE ON ROBUST SOLUTIONS TO MULTI-OBJECTIVE LINEAR PROGRAMS. Introduction. Abstract.

Włodzimierz Ogryczak. Warsaw University of Technology, ICCE ON ROBUST SOLUTIONS TO MULTI-OBJECTIVE LINEAR PROGRAMS. Introduction. Abstract. Włodzimierz Ogryczak Warsaw University of Technology, ICCE ON ROBUST SOLUTIONS TO MULTI-OBJECTIVE LINEAR PROGRAMS Abstract In multiple criteria linear programming (MOLP) any efficient solution can be found

More information

Robust Convex Quadratically Constrained Quadratic Programming with Mixed-Integer Uncertainty

Robust Convex Quadratically Constrained Quadratic Programming with Mixed-Integer Uncertainty Robust Convex Quadratically Constrained Quadratic Programming with Mixed-Integer Uncertainty Can Gokalp, Areesh Mittal, and Grani A. Hanasusanto Graduate Program in Operations Research and Industrial Engineering,

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

2.1 Definition and graphical representation for games with up to three players

2.1 Definition and graphical representation for games with up to three players Lecture 2 The Core Let us assume that we have a TU game (N, v) and that we want to form the grand coalition. We model cooperation between all the agents in N and we focus on the sharing problem: how to

More information

1 Review Session. 1.1 Lecture 2

1 Review Session. 1.1 Lecture 2 1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions

More information

Robustness in Stochastic Programs with Risk Constraints

Robustness in Stochastic Programs with Risk Constraints Robustness in Stochastic Programs with Risk Constraints Dept. of Probability and Mathematical Statistics, Faculty of Mathematics and Physics Charles University, Prague, Czech Republic www.karlin.mff.cuni.cz/~kopa

More information

MILP reformulation of the multi-echelon stochastic inventory system with uncertain demands

MILP reformulation of the multi-echelon stochastic inventory system with uncertain demands MILP reformulation of the multi-echelon stochastic inventory system with uncertain demands Axel Nyberg Åbo Aademi University Ignacio E. Grossmann Dept. of Chemical Engineering, Carnegie Mellon University,

More information

A Benders Algorithm for Two-Stage Stochastic Optimization Problems With Mixed Integer Recourse

A Benders Algorithm for Two-Stage Stochastic Optimization Problems With Mixed Integer Recourse A Benders Algorithm for Two-Stage Stochastic Optimization Problems With Mixed Integer Recourse Ted Ralphs 1 Joint work with Menal Güzelsoy 2 and Anahita Hassanzadeh 1 1 COR@L Lab, Department of Industrial

More information

Practical Tips for Modelling Lot-Sizing and Scheduling Problems. Waldemar Kaczmarczyk

Practical Tips for Modelling Lot-Sizing and Scheduling Problems. Waldemar Kaczmarczyk Decision Making in Manufacturing and Services Vol. 3 2009 No. 1 2 pp. 37 48 Practical Tips for Modelling Lot-Sizing and Scheduling Problems Waldemar Kaczmarczyk Abstract. This paper presents some important

More information

CO759: Algorithmic Game Theory Spring 2015

CO759: Algorithmic Game Theory Spring 2015 CO759: Algorithmic Game Theory Spring 2015 Instructor: Chaitanya Swamy Assignment 1 Due: By Jun 25, 2015 You may use anything proved in class directly. I will maintain a FAQ about the assignment on the

More information

Designing the Distribution Network for an Integrated Supply Chain

Designing the Distribution Network for an Integrated Supply Chain Designing the Distribution Network for an Integrated Supply Chain Jia Shu and Jie Sun Abstract We consider an integrated distribution network design problem in which all the retailers face uncertain demand.

More information

Production planning of fish processed product under uncertainty

Production planning of fish processed product under uncertainty ANZIAM J. 51 (EMAC2009) pp.c784 C802, 2010 C784 Production planning of fish processed product under uncertainty Herman Mawengkang 1 (Received 14 March 2010; revised 17 September 2010) Abstract Marine fisheries

More information

DRAFT Formulation and Analysis of Linear Programs

DRAFT Formulation and Analysis of Linear Programs DRAFT Formulation and Analysis of Linear Programs Benjamin Van Roy and Kahn Mason c Benjamin Van Roy and Kahn Mason September 26, 2005 1 2 Contents 1 Introduction 7 1.1 Linear Algebra..........................

More information

Stochastic programs with binary distributions: Structural properties of scenario trees and algorithms

Stochastic programs with binary distributions: Structural properties of scenario trees and algorithms INSTITUTT FOR FORETAKSØKONOMI DEPARTMENT OF BUSINESS AND MANAGEMENT SCIENCE FOR 12 2017 ISSN: 1500-4066 October 2017 Discussion paper Stochastic programs with binary distributions: Structural properties

More information

Pareto Efficiency in Robust Optimization

Pareto Efficiency in Robust Optimization Pareto Efficiency in Robust Optimization Dan Iancu Graduate School of Business Stanford University joint work with Nikolaos Trichakis (HBS) 1/26 Classical Robust Optimization Typical linear optimization

More information

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions Angelia Nedić and Asuman Ozdaglar April 15, 2006 Abstract We provide a unifying geometric framework for the

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Introduction to Large-Scale Linear Programming and Applications Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Daniel J. Epstein Department of Industrial and Systems Engineering, University of

More information

Section Notes 8. Integer Programming II. Applied Math 121. Week of April 5, expand your knowledge of big M s and logical constraints.

Section Notes 8. Integer Programming II. Applied Math 121. Week of April 5, expand your knowledge of big M s and logical constraints. Section Notes 8 Integer Programming II Applied Math 121 Week of April 5, 2010 Goals for the week understand IP relaxations be able to determine the relative strength of formulations understand the branch

More information

Structure of Valid Inequalities for Mixed Integer Conic Programs

Structure of Valid Inequalities for Mixed Integer Conic Programs Structure of Valid Inequalities for Mixed Integer Conic Programs Fatma Kılınç-Karzan Tepper School of Business Carnegie Mellon University 18 th Combinatorial Optimization Workshop Aussois, France January

More information

Absolute Value Programming

Absolute Value Programming O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

Math 123, Week 2: Matrix Operations, Inverses

Math 123, Week 2: Matrix Operations, Inverses Math 23, Week 2: Matrix Operations, Inverses Section : Matrices We have introduced ourselves to the grid-like coefficient matrix when performing Gaussian elimination We now formally define general matrices

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

Common Knowledge and Sequential Team Problems

Common Knowledge and Sequential Team Problems Common Knowledge and Sequential Team Problems Authors: Ashutosh Nayyar and Demosthenis Teneketzis Computer Engineering Technical Report Number CENG-2018-02 Ming Hsieh Department of Electrical Engineering

More information

Computational Integer Programming Universidad de los Andes. Lecture 1. Dr. Ted Ralphs

Computational Integer Programming Universidad de los Andes. Lecture 1. Dr. Ted Ralphs Computational Integer Programming Universidad de los Andes Lecture 1 Dr. Ted Ralphs MIP Lecture 1 1 Quick Introduction Bio Course web site Course structure http://coral.ie.lehigh.edu/ ted/teaching/mip

More information