Complexity of two and multi-stage stochastic programming problems


A. Shapiro
School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA

The concept of two-stage (linear) stochastic programming problem with recourse:

$$\min_{x \in X} \; c^\top x + \mathbb{E}[Q(x,\xi)], \tag{1}$$

where $X = \{x : Ax = b,\; x \ge 0\}$, $c^\top x$ denotes the scalar product of vectors $c, x \in \mathbb{R}^n$, and $Q(x,\xi)$ is the optimal value of the second-stage problem

$$\min_{y} \; q^\top y \quad \text{s.t.} \quad Tx + Wy = h,\; y \ge 0, \tag{2}$$

with $\xi = (q, T, W, h)$. (By bold script like $\boldsymbol{\xi}$ we sometimes denote random variables to distinguish them from a particular realization $\xi$.) The feasible set $X$ can be finite, i.e., the first-stage problem can be an integer program. Both stages can be integer (mixed-integer) problems. Suppose that the probability distribution $P$ of $\xi$ has a finite support, i.e., $\xi$ can take values $\xi^1, \dots, \xi^K$ (called scenarios) with respective probabilities $p_1, \dots, p_K$. In that case

$$\mathbb{E}_P[Q(x,\xi)] = \sum_{k=1}^K p_k\, Q(x,\xi^k), \qquad Q(x,\xi^k) = \inf\big\{\, q_k^\top y_k : T_k x + W_k y_k = h_k,\; y_k \ge 0 \,\big\}.$$

It follows that we can write problem (1)-(2) as one large linear program:

$$\begin{array}{ll}
\min\limits_{x,\,y_1,\dots,y_K} & c^\top x + \sum_{k=1}^K p_k\,(q_k^\top y_k) \\[4pt]
\text{subject to} & Ax = b, \\
& T_k x + W_k y_k = h_k,\; k = 1,\dots,K, \\
& x \ge 0,\; y_k \ge 0,\; k = 1,\dots,K.
\end{array}$$

Multistage models

Let $\xi_t$ be a random process. Denote $\xi_{[1,t]} = (\xi_1,\dots,\xi_t)$ the history of the process $\xi_t$ up to time $t$. The values of the decision vector $x_t$, chosen at stage $t$, may depend on the information $\xi_{[1,t]}$ available up to time $t$, but not on the future observations. The decision process has the form

$$\text{decision}(x_0) \rightarrow \text{observation}(\xi_1) \rightarrow \text{decision}(x_1) \rightarrow \cdots \rightarrow \text{observation}(\xi_T) \rightarrow \text{decision}(x_T).$$
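
As a concrete illustration of the two-stage extensive form above (the "one large linear program"), the following sketch builds and solves such a program for a tiny instance with scipy.optimize.linprog. All data ($A$, $b$, $c$ and the scenario tuples) are hypothetical and chosen only for illustration; the variable layout is $(x, y_1, \dots, y_K)$ exactly as in the formulation above.

```python
# A minimal sketch (hypothetical data) of the deterministic equivalent
#   min c'x + sum_k p_k q_k' y_k
#   s.t. Ax = b,  T_k x + W_k y_k = h_k,  x, y_k >= 0.
import numpy as np
from scipy.optimize import linprog

A, b, c = np.array([[1.0, 1.0]]), np.array([1.0]), np.array([1.0, 2.0])
scenarios = [  # (probability, q_k, T_k, W_k, h_k)
    (0.5, np.array([3.0]), np.array([[1.0, 0.0]]), np.array([[1.0]]), np.array([2.0])),
    (0.5, np.array([1.0]), np.array([[0.0, 1.0]]), np.array([[1.0]]), np.array([1.5])),
]

n, K = len(c), len(scenarios)
m2 = scenarios[0][4].size                              # rows of each second-stage block
obj = np.concatenate([c] + [p * q for p, q, *_ in scenarios])
A_eq = np.zeros((A.shape[0] + K * m2, n + sum(q.size for _, q, *_ in scenarios)))
b_eq = np.concatenate([b] + [h for *_, h in scenarios])
A_eq[:A.shape[0], :n] = A
col = n
for k, (p, q, T, W, h) in enumerate(scenarios):
    rows = slice(A.shape[0] + k * m2, A.shape[0] + (k + 1) * m2)
    A_eq[rows, :n] = T                                 # technology matrix T_k acts on x
    A_eq[rows, col:col + q.size] = W                   # recourse matrix W_k acts on y_k
    col += q.size

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("first-stage x:", res.x[:n], "optimal value:", res.fun)
```

The number of columns and rows of this program grows linearly with the number of scenarios $K$, which is why the extensive form becomes impractical once $K$ explodes.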

There are several ways in which this decision process can be made precise. Nested formulation of a linear multistage stochastic programming problem with recourse:

$$\min_{\substack{A_1 x_1 = b_1 \\ x_1 \ge 0}} c_1^\top x_1 + \mathbb{E}\Bigg[ \min_{\substack{B_2 x_1 + A_2 x_2 = b_2 \\ x_2 \ge 0}} c_2^\top x_2 + \mathbb{E}\Big[ \cdots + \mathbb{E}\Big[ \min_{\substack{B_T x_{T-1} + A_T x_T = b_T \\ x_T \ge 0}} c_T^\top x_T \Big] \Big] \Bigg].$$

Here $\xi_t = (c_t, B_t, A_t, b_t)$, $t = 2,\dots,T$, is considered as a random process, $\xi_1 = (c_1, A_1, b_1)$ is supposed to be known, and $x_t = x_t(\xi_{[1,t]})$, $t = 2,\dots,T$, are supposed to be functions of the history of the process up to time $t$. If the number of realizations (scenarios) of the process $\xi_t$ is finite, then the above problem can be written as one large linear programming problem. In that respect it is convenient to represent the random process in the form of a scenario tree.

Dynamic programming equations. Consider the last-stage linear program of the multistage problem:

$$\min_{x_T} \; c_T^\top x_T \quad \text{s.t.} \quad B_T x_{T-1} + A_T x_T = b_T,\; x_T \ge 0.$$

The optimal value of this problem depends on $x_{T-1}$ and data $\xi_T = (c_T, B_T, A_T, b_T)$, and is denoted $Q_T(x_{T-1}, \xi_T)$. At stage $t = T-1,\dots,2$ we consider the optimal value $Q_t(x_{t-1}, \xi_{[1,t]})$ of the problem

$$\min_{x_t} \; c_t^\top x_t + \mathbb{E}\big[\, Q_{t+1}(x_t, \xi_{[1,t+1]}) \,\big|\, \xi_{[1,t]} \,\big] \quad \text{s.t.} \quad B_t x_{t-1} + A_t x_t = b_t,\; x_t \ge 0. \tag{3}$$

At the first stage we obtain the problem:

$$\min_{x_1} \; c_1^\top x_1 + \mathbb{E}\big[ Q_2(x_1, \xi_2) \big] \quad \text{s.t.} \quad A_1 x_1 = b_1,\; x_1 \ge 0. \tag{4}$$

The expectation in (3) is conditional on the history of the process $\xi_t$ up to time $t$. If the process is Markovian, then this expectation depends only on $\xi_t$, and hence the cost-to-go function $Q_t(x_{t-1}, \xi_t)$ is a function of $x_{t-1}$ and $\xi_t$.
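
When the number of realizations is finite, the recursion (3)-(4) can be evaluated directly. As a small illustration, the sketch below (hypothetical stage data, not from the text) computes the last-stage cost-to-go value $Q_T(x_{T-1}, \xi_T)$ by solving the corresponding linear program with scipy.optimize.linprog; at the earlier stages one would add the conditional expectation of such values, a probability-weighted sum over the finitely many realizations of $\xi_{t+1}$, to the stage objective, exactly as in (3).

```python
# A minimal sketch (hypothetical data) of the last-stage cost-to-go function
# Q_T(x_{T-1}, xi_T): for fixed x_{T-1} and realization xi_T = (c_T, B_T, A_T, b_T)
# it is just the optimal value of a linear program in x_T.
import numpy as np
from scipy.optimize import linprog

def Q_last(x_prev, c_T, B_T, A_T, b_T):
    """Optimal value of  min c_T'x_T  s.t.  B_T x_{T-1} + A_T x_T = b_T, x_T >= 0."""
    res = linprog(c_T, A_eq=A_T, b_eq=b_T - B_T @ x_prev, bounds=(0, None))
    return res.fun if res.success else np.inf          # infeasible => value +infinity

# hypothetical stage-T data and a few candidate values of x_{T-1}
c_T = np.array([1.0, 4.0])
B_T = np.array([[-1.0], [0.0]])
A_T = np.array([[1.0, 1.0], [0.0, 1.0]])
b_T = np.array([3.0, 1.0])
for x_prev in (np.array([0.0]), np.array([1.0]), np.array([2.0])):
    print(x_prev, Q_last(x_prev, c_T, B_T, A_T, b_T))
```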

How difficult is it to solve two-stage problems? What about multistage problems?

Consider the stochastic optimization problem:

$$\min_{x \in X} \Big\{ f(x) := \mathbb{E}_P[F(x,\xi)] \Big\},$$

where $\xi$ is a random vector having probability distribution $P$, $F(x,\xi)$ is a real valued function and $X \subset \mathbb{R}^n$. If $P$ has a finite support (i.e., a finite number $\xi^1,\dots,\xi^K$ of scenarios), then

$$\mathbb{E}_P[F(x,\xi)] = \sum_{k=1}^K p_k\, F(x,\xi^k).$$

However, the number $K$ of scenarios grows exponentially with the dimension of the data $\xi$. For example, if $d$ random components of $\xi$ are independent, each having just 3 possible realizations, then the total number of scenarios is $K = 3^d$. No computer in the foreseeable future will be able to handle calculations involving that many scenarios already for moderate values of $d$.

Monte Carlo sampling approach

Let $\xi^1,\dots,\xi^N$ be a generated (iid) random sample drawn from $P$ and let

$$\hat f_N(x) := N^{-1} \sum_{j=1}^N F(x,\xi^j)$$

be the corresponding sample average function. By the Law of Large Numbers, for a given $x \in X$, we have $\hat f_N(x) \to f(x) = \mathbb{E}_P[F(x,\xi)]$ w.p.1 as $N \to \infty$. The convergence is notoriously slow, of order $O_p(N^{-1/2})$. By the Central Limit Theorem,

$$N^{1/2}\big[\hat f_N(x) - f(x)\big] \Rightarrow N\big(0, \sigma^2(x)\big), \quad \text{where } \sigma^2(x) := \mathrm{Var}[F(x,\xi)].$$

In order to improve the accuracy by one digit the sample size should be increased 100 times. Good news: the rate of convergence does not depend on the number of scenarios, only on the variance $\sigma^2(x)$. The accuracy can be improved by variance reduction techniques. However, the square-root-of-$N$ rate (of Monte Carlo sampling estimation) cannot be changed.
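
For a fixed $x$, the sample average estimator and its CLT-based error estimate are straightforward to compute. A minimal sketch, with a hypothetical integrand $F$ and distribution $P$ chosen purely for illustration:

```python
# A minimal sketch (hypothetical F and distribution) of the sample average
# estimate of f(x) = E[F(x, xi)] together with a CLT-based confidence interval.
import numpy as np

def F(x, xi):                                 # hypothetical integrand
    return np.maximum(xi - x, 0.0) + 0.5 * x

rng = np.random.default_rng(0)
x, N = 1.2, 10_000
xi = rng.exponential(scale=1.0, size=N)       # sample xi^1, ..., xi^N iid from P
vals = F(x, xi)
f_hat = vals.mean()                           # \hat f_N(x)
half_width = 1.96 * vals.std(ddof=1) / np.sqrt(N)   # ~95% CI half-width, shrinks like N^{-1/2}
print(f"f_N(x) = {f_hat:.4f} +/- {half_width:.4f}")
```

Increasing $N$ by a factor of 100 shrinks the reported half-width by a factor of 10, which is the "one digit of accuracy" remark above.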

Monte Carlo sampling optimization

Two basic philosophies: interior and exterior Monte Carlo sampling. In interior sampling methods, sampling is performed inside a chosen algorithm, with new (independent) samples generated in the process of iterations (e.g., Higle and Sen (stochastic decomposition), Infanger (statistical L-shaped method), Norkin, Pflug and Ruszczynski (stochastic branch and bound method)).

In the exterior sampling approach the true problem is approximated by the sample average approximation problem:

$$\text{(SAA)} \qquad \min_{x \in X} \; \hat f_N(x) := N^{-1} \sum_{j=1}^N F(x,\xi^j).$$

Once the sample $\xi^1,\dots,\xi^N \sim P$ is generated, the SAA problem becomes a deterministic optimization problem and can be solved by an appropriate algorithm. It is difficult to point out an exact origin of this method; variants of this approach were suggested by a number of authors under different names.

Advantages of the SAA method:
- Ease of numerical implementation. Often one can use existing software.
- Good convergence properties.
- Well developed statistical inference: validation and error analysis, stopping rules.
- Easily amenable to variance reduction techniques.
- Ideal for parallel computations.

The idea of common random numbers generation. Suppose that $X = \{x_1, x_2\}$. Then the variance of $N^{1/2}\big[\hat f_N(x_1) - \hat f_N(x_2)\big]$ is

$$\mathrm{Var}[F(x_1,\xi)] + \mathrm{Var}[F(x_2,\xi)] - 2\,\mathrm{Cov}\big[F(x_1,\xi), F(x_2,\xi)\big].$$

It can be much smaller than $\mathrm{Var}[F(x_1,\xi)] + \mathrm{Var}[F(x_2,\xi)]$, the variance obtained when the samples at $x_1$ and $x_2$ are independent.
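
The variance formula above is easy to see numerically. The sketch below (hypothetical integrand and distribution) compares the variance of the estimated difference $\hat f_N(x_1) - \hat f_N(x_2)$ when the two sample averages share a common sample versus when they use independent samples:

```python
# A minimal sketch (hypothetical F) of the common-random-numbers effect: the
# difference of the two sample averages has much smaller variance when both
# are computed from the same sample than from independent samples.
import numpy as np

def F(x, xi):
    return (x - xi) ** 2        # hypothetical integrand; F(x1,.) and F(x2,.) are highly correlated

rng = np.random.default_rng(1)
x1, x2, N, reps = 0.9, 1.0, 500, 2000
crn, indep = [], []
for _ in range(reps):
    xi = rng.normal(size=N)
    crn.append(F(x1, xi).mean() - F(x2, xi).mean())                     # same sample for both points
    indep.append(F(x1, xi).mean() - F(x2, rng.normal(size=N)).mean())   # fresh sample for x2
print("variance with common random numbers:", np.var(crn))
print("variance with independent samples:  ", np.var(indep))
```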

Notation
- $v_0$ is the optimal value of the true problem
- $S_0$ is the set of optimal solutions of the true problem
- $S^\varepsilon$ is the set of $\varepsilon$-optimal solutions of the true problem
- $\hat v_N$ is the optimal value of the SAA problem
- $\hat S_N^\varepsilon$ is the set of $\varepsilon$-optimal solutions of the SAA problem
- $\hat x_N$ is an optimal solution of the SAA problem

Convergence properties

There is a vast literature on statistical properties of the SAA estimators $\hat v_N$ and $\hat x_N$.

Consistency. By the Law of Large Numbers, $\hat f_N(x)$ converges (pointwise) to $f(x)$ w.p.1. Under mild additional conditions, this implies that $\hat v_N \to v_0$ and $\mathrm{dist}(\hat x_N, S_0) \to 0$ w.p.1 as $N \to \infty$. In particular, $\hat x_N \to x_0$ w.p.1 if $S_0 = \{x_0\}$. (Cf. consistency of Maximum Likelihood estimators, Wald (1949).)

Central Limit Theorem type results.

$$\hat v_N = \min_{x \in S_0} \hat f_N(x) + o_p(N^{-1/2}).$$

In particular, if $S_0 = \{x_0\}$, then $N^{1/2}\big[\hat v_N - v_0\big] \Rightarrow N\big(0, \sigma^2(x_0)\big)$.

These results suggest that the optimal value of the SAA problem converges at a rate of $O_p(N^{-1/2})$. In particular, if $S_0 = \{x_0\}$, then $\hat v_N$ converges to $v_0$ at the same rate as $\hat f_N(x_0)$ converges to $f(x_0)$.

If $S_0 = \{x_0\}$, then under certain regularity conditions $N^{1/2}(\hat x_N - x_0)$ converges in distribution. (Asymptotic normality of M-estimators, Huber (1967).) The required regularity conditions are that the expected value function $f(x)$ is smooth (twice differentiable) at $x_0$ and the Hessian matrix $\nabla^2 f(x_0)$ is positive definite. This typically happens if the probability distribution $P$ is continuous. In such cases $\hat x_N$ converges to $x_0$ at the same rate as the stochastic approximation iterates calculated with the optimal step sizes (Shapiro, 1996).

Large Deviations type bounds. Let $Y_1,\dots,Y_N$ be an iid sequence of random variables, and $Z_N := N^{-1}\sum_{j=1}^N Y_j$. Then (the upper bound of Cramér's LD theorem)

$$P(Z_N \ge z) \le \exp\{-N I(z)\} \quad \text{for } z > \mu,$$

where $\mu := \mathbb{E}[Y]$ and

$$I(z) := \sup_{t \in \mathbb{R}} \big\{ tz - \Lambda(t) \big\},$$

$\Lambda(t) := \log M(t)$ and $M(t) := \mathbb{E}[e^{tY}]$ is the moment generating function. Assume that $M(t)$ is finite valued for all $t$ in a neighborhood of zero (if $M(t) = +\infty$ for all $t \ne 0$, then $I(z) \equiv 0$ and hence the above upper bound trivially holds). Note that $\Lambda(t)$ is convex, $\Lambda'(0) = \mu$ and $\Lambda''(0) = \sigma^2$, where $\sigma^2 := \mathrm{Var}[Y]$. The rate function $I(z)$ is convex, $I(\mu) = 0$, $I''(\mu) = \sigma^{-2}$, and $I(z)$ attains its minimum at $z = \mu$. Consequently,

$$I(z) = \frac{(z-\mu)^2}{2\sigma^2} + o\big(|z-\mu|^2\big),$$

and $I(z) > 0$ for any $z \ne \mu$. Note that if $Y \sim N(\mu, \sigma^2)$, then $M(t) = \exp\{\mu t + \sigma^2 t^2/2\}$ and hence $I(z) = \frac{(z-\mu)^2}{2\sigma^2}$.
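
The rate function is a one-dimensional Legendre transform and is easy to evaluate numerically. A minimal sketch for a Bernoulli variable (chosen here only because its moment generating function has a simple closed form), compared with the quadratic approximation $(z-\mu)^2/(2\sigma^2)$:

```python
# A minimal sketch computing the Cramer rate function I(z) = sup_t {t z - log M(t)}
# numerically for a Bernoulli(p) variable, and comparing it with the quadratic
# approximation (z - mu)^2 / (2 sigma^2) derived above.
import numpy as np
from scipy.optimize import minimize_scalar

p = 0.3
mu, sigma2 = p, p * (1 - p)
log_M = lambda t: np.log(1 - p + p * np.exp(t))      # log moment generating function

def rate(z):
    # I(z) = sup_t [t z - Lambda(t)]; maximize by minimizing the negative over a wide interval
    res = minimize_scalar(lambda t: log_M(t) - t * z, bounds=(-50, 50), method="bounded")
    return -res.fun

for z in (0.35, 0.4, 0.5):
    print(z, rate(z), (z - mu) ** 2 / (2 * sigma2))
```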

If $M(t) \le \exp\{\mu t + \sigma^2 t^2/2\}$, then $I(z) \ge \frac{(z-\mu)^2}{2\sigma^2}$ and hence for $z \ge \mu$,

$$P(Z_N \ge z) \le \exp\left\{ -\frac{N(z-\mu)^2}{2\sigma^2} \right\}.$$

If the distribution of $Y$ is supported on a bounded interval $[a,b]$, then for $z \ge \mu$ (Hoeffding's inequality):

$$P(Z_N \ge z) \le \exp\left\{ -\frac{2N(z-\mu)^2}{(b-a)^2} \right\}.$$

Uniform LD bounds. Let $X$ be a subset of $\mathbb{R}^n$ of finite diameter $D := \sup_{x,x' \in X} \|x - x'\|$, and let $\xi^1,\dots,\xi^N$ be an iid sample from a distribution supported on $\Xi \subset \mathbb{R}^d$. Suppose that: (i) there is a constant $\sigma > 0$ such that $M_x(t) \le \exp\{\sigma^2 t^2/2\}$ for all $t \in \mathbb{R}$ and all $x \in X$, where $M_x(t)$ is the moment generating function of the random variable $F(x,\xi) - f(x)$; (ii) there is a constant $L > 0$ such that

$$|F(x',\xi) - F(x,\xi)| \le L\|x' - x\|, \quad \forall\, \xi \in \Xi,\; \forall\, x', x \in X.$$

Then for any $\gamma > 0$,

$$P\Big\{ \sup_{x \in X} \big|\hat f_N(x) - f(x)\big| \ge \gamma \Big\} \le \left( \frac{O(1) D L}{\gamma} \right)^{\!n} \exp\left\{ -\frac{N\gamma^2}{16\sigma^2} \right\}.$$

Note that if $\hat x_N$ is a $\delta$-optimal solution of the SAA problem and $\sup_{x \in X} |\hat f_N(x) - f(x)| < \gamma$, then $\hat x_N$ is an $\varepsilon$-optimal solution of the true problem with $\varepsilon = \gamma + \delta$. Now choose $\varepsilon > 0$, $\delta \in [0,\varepsilon)$ and $\alpha \in (0,1)$. Then, for $\gamma := \varepsilon - \delta$, by solving

$$\left( \frac{O(1) D L}{\gamma} \right)^{\!n} \exp\left\{ -\frac{N\gamma^2}{16\sigma^2} \right\} \le \alpha,$$

we obtain that for sample size

$$N \ge \frac{O(1)\sigma^2}{(\varepsilon-\delta)^2} \left[ n \log\left( \frac{O(1) D L}{\varepsilon-\delta} \right) + \log\left( \frac{1}{\alpha} \right) \right]$$

we are guaranteed that with probability at least $1-\alpha$ any $\delta$-optimal solution of the SAA problem is an $\varepsilon$-optimal solution of the true problem.
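
To get a feel for this estimate, the sketch below evaluates the right-hand side of the sample-size bound with the unspecified $O(1)$ constants simply set to 1 (they are absolute constants, so only the qualitative behaviour is meaningful):

```python
# A minimal sketch evaluating the sample-size estimate
#   N >= O(1) sigma^2/(eps-delta)^2 [ n log(O(1) D L/(eps-delta)) + log(1/alpha) ],
# with the O(1) constants set to 1 purely for illustration.
import numpy as np

def sample_size(sigma, D, L, n, eps, delta, alpha, c1=1.0, c2=1.0):
    gamma = eps - delta
    return c1 * sigma**2 / gamma**2 * (n * np.log(c2 * D * L / gamma) + np.log(1.0 / alpha))

for eps in (0.1, 0.05, 0.01):
    print(eps, sample_size(sigma=1.0, D=1.0, L=10.0, n=50, eps=eps, delta=0.0, alpha=0.01))
```

The qualitative message: the required $N$ grows linearly in the dimension $n$, only logarithmically in $1/\alpha$ and in $DL$, but essentially quadratically (up to a logarithmic factor) in $1/(\varepsilon-\delta)$.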

If the set $X$ is finite, then the corresponding estimate of the sample size becomes (under assumption (i)):

$$N \ge \frac{2\sigma^2}{(\varepsilon-\delta)^2} \log\left( \frac{|X|}{\alpha} \right).$$

Example. Let $F(x,\xi) := \|x\|^{2k} - 2k\,\xi^\top x$, where $k \in \mathbb{N}$, and $X := \{x \in \mathbb{R}^n : \|x\| \le 1\}$. Suppose that $\xi \sim N(0, \sigma^2 I_n)$. Then $f(x) = \|x\|^{2k}$, and for $\varepsilon \in [0,1]$ the set of $\varepsilon$-optimal solutions of the true problem is $\{x : \|x\|^{2k} \le \varepsilon\}$. Let $\bar\xi_N := (\xi^1 + \dots + \xi^N)/N$. The corresponding sample average function is $\hat f_N(x) = \|x\|^{2k} - 2k\,\bar\xi_N^\top x$, and $\hat x_N = \|\bar\xi_N\|^{-\gamma}\,\bar\xi_N$, where $\gamma := \frac{2k-2}{2k-1}$ if $\|\bar\xi_N\| \le 1$, and $\gamma = 1$ if $\|\bar\xi_N\| > 1$. Therefore, for $\varepsilon \in (0,1)$, the optimal solution of the SAA problem is an $\varepsilon$-optimal solution of the true problem iff $\|\bar\xi_N\|^{\nu} \le \varepsilon$, where $\nu := \frac{2k}{2k-1}$.

We have that $\bar\xi_N \sim N(0, \sigma^2 N^{-1} I_n)$, and hence $N\|\bar\xi_N\|^2/\sigma^2$ has the chi-square distribution with $n$ degrees of freedom. Consequently, the probability that $\|\bar\xi_N\|^{\nu} > \varepsilon$ is equal to the probability $P\big(\chi^2_n > N\varepsilon^{2/\nu}/\sigma^2\big)$. Moreover, $\mathbb{E}[\chi^2_n] = n$ and the probability $P(\chi^2_n > n)$ increases and tends to $1/2$ as $n$ increases. Consequently, for $\alpha \in (0, 0.3)$ and $\varepsilon \in (0,1)$, for example, the sample size $N$ should satisfy

$$N > \frac{n\sigma^2}{\varepsilon^{2/\nu}} \tag{5}$$

in order to have the property: with probability $1-\alpha$ an (exact) optimal solution of the SAA problem is an $\varepsilon$-optimal solution of the true problem. Note that $\nu \to 1$ as $k \to \infty$.
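
Since $N\|\bar\xi_N\|^2/\sigma^2 \sim \chi^2_n$ exactly, the sample size required for a given $\alpha$ can also be computed from the chi-square quantile and compared with the simpler bound (5). A minimal sketch, with illustrative parameter values:

```python
# A minimal sketch (for the example above) of the sample size needed so that
# P(||bar xi_N||^nu > eps) <= alpha, using the chi-square quantile, compared with bound (5).
import numpy as np
from scipy.stats import chi2

def exact_N(n, sigma, eps, k, alpha):
    nu = 2 * k / (2 * k - 1)
    # need N * eps^{2/nu} / sigma^2 >= (1 - alpha)-quantile of chi^2_n
    return chi2.ppf(1 - alpha, n) * sigma**2 / eps**(2 / nu)

n, sigma, eps, k, alpha = 100, 1.0, 0.1, 2, 0.05
nu = 2 * k / (2 * k - 1)
print("quantile-based N:", exact_N(n, sigma, eps, k, alpha))
print("bound (5):       ", n * sigma**2 / eps**(2 / nu))
```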

Suppose that the true problem is convex piecewise linear. That is: (i) the distribution $P$ has a finite support, (ii) for almost every $\xi$ the function $F(\cdot,\xi)$ is piecewise linear and convex, (iii) the feasible set $X$ is polyhedral (i.e., is defined by a finite number of linear constraints). Suppose also that the set $S_0$ of optimal solutions of the true problem is nonempty and bounded. Then:

(1) W.p.1 for $N$ large enough, $\hat x_N$ is an exact optimal solution of the true problem. More precisely, w.p.1 for $N$ large enough, the set $\hat S_N$ of optimal solutions of the SAA problem is nonempty and forms a face of the (polyhedral) set $S_0$.

(2) The probability of the event $\{\hat S_N \subset S_0\}$ tends to one exponentially fast. That is, there exists a constant $\gamma > 0$ such that

$$\lim_{N \to \infty} \frac{1}{N} \log\big[ 1 - P(\hat S_N \subset S_0) \big] = -\gamma.$$

(Shapiro & Homem-de-Mello, 2000)

The idea of repeated solutions. Solve the SAA problem $M$ times using $M$ independent samples, each of size $N$. Let $\hat v_N^{(1)},\dots,\hat v_N^{(M)}$ be the optimal values and $\hat x_N^{(1)},\dots,\hat x_N^{(M)}$ be optimal solutions of the corresponding SAA problems. The probability that at least one of $\hat x_N^{(i)}$, $i = 1,\dots,M$, is an optimal solution of the true problem is $1 - p_N^M$, where

$$p_N := P(\hat x_N \ne x_0) \le C N^{-1/2} e^{-N\gamma},$$

and hence

$$p_N^M \le \big(C N^{-1/2}\big)^M e^{-NM\gamma}.$$

Cutting plane (Benders cuts, L-shaped) type algorithms. Empirical observation: on average the number of iterations (cuts) does not grow, or grows slowly, with increase of the sample size $N$. From a theoretical point of view it converges to the respective number for the true problem.

Validation analysis

How can one evaluate the quality of a given feasible solution $\hat x \in X$? Two basic approaches: (1) evaluate the gap $f(\hat x) - v_0$; (2) verify the KKT optimality conditions at $\hat x$.

Statistical test based on estimation of $f(\hat x) - v_0$ (Norkin, Pflug & Ruszczynski 98, Mak, Morton & Wood 99):

(i) Estimate $f(\hat x)$ by the sample average $\hat f_{N'}(\hat x)$, using a sample of a large size $N'$.

(ii) Solve the SAA problem $M$ times using $M$ independent samples, each of size $N$. Let $\hat v_N^{(1)},\dots,\hat v_N^{(M)}$ be the optimal values of the corresponding SAA problems. Estimate $\mathbb{E}[\hat v_N]$ by the average $M^{-1}\sum_{j=1}^M \hat v_N^{(j)}$.

Note that

$$\mathbb{E}\Big[ \hat f_{N'}(\hat x) - M^{-1}\textstyle\sum_{j=1}^M \hat v_N^{(j)} \Big] = \big( f(\hat x) - v_0 \big) + \big( v_0 - \mathbb{E}[\hat v_N] \big),$$

and that $v_0 - \mathbb{E}[\hat v_N] > 0$. For ill-conditioned problems the bias $v_0 - \mathbb{E}[\hat v_N]$ can be large.
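
A minimal sketch of this gap test on a toy problem where the SAA problem has a closed-form solution (for $F(x,\xi) = (x-\xi)^2$ the SAA minimizer is the sample mean); the distribution, the sample sizes and the choice $M = 30$ are illustrative only:

```python
# A minimal sketch (hypothetical problem F(x, xi) = (x - xi)^2) of the statistical gap test:
# an upper statistical bound on f(xhat) from one large evaluation sample, and a lower
# statistical bound on v_0 from the average of M independent SAA optimal values.
import numpy as np

rng = np.random.default_rng(2)
sample = lambda size: rng.gamma(shape=2.0, scale=1.0, size=size)   # hypothetical distribution P

F = lambda x, xi: (x - xi) ** 2
saa_solve = lambda xi: (xi.mean(), F(xi.mean(), xi).mean())        # SAA minimizer is the sample mean

x_hat, _ = saa_solve(sample(200))          # candidate solution from a small SAA problem

# (i) upper bound: evaluate f(x_hat) with a large independent sample
big = F(x_hat, sample(200_000))
upper = big.mean() + 1.96 * big.std(ddof=1) / np.sqrt(big.size)

# (ii) lower bound: average optimal value of M independent SAA problems of size N
M, N = 30, 200
vals = np.array([saa_solve(sample(N))[1] for _ in range(M)])
lower = vals.mean() - 1.96 * vals.std(ddof=1) / np.sqrt(M)

print("estimated optimality gap <= %.4f (with ~95%% confidence)" % (upper - lower))
```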

The bias $v_0 - \mathbb{E}[\hat v_N]$ is positive and (under mild regularity conditions)

$$\lim_{N \to \infty} N^{1/2}\big( v_0 - \mathbb{E}[\hat v_N] \big) = \mathbb{E}\Big[ \max_{x \in S_0} Y(x) \Big],$$

where $(Y(x_1),\dots,Y(x_k))$ has a multivariate normal distribution with zero mean vector and covariance matrix given by the covariance matrix of the random vector $(F(x_1,\xi),\dots,F(x_k,\xi))$. For ill-conditioned problems this bias is of order $O(N^{-1/2})$ and can be large if the $\varepsilon$-optimal solution set $S^\varepsilon$ is large for some small $\varepsilon \ge 0$.

Common random numbers variant: generate a sample (of size $N$) and calculate the gap $\hat f_N(\hat x) - \inf_{x \in X} \hat f_N(x)$. Repeat this procedure $M$ times (with independent samples), and calculate the average of the above gaps. This procedure works well for well conditioned problems, but does not improve the bias problem.

KKT statistical test

Let $X := \{x \in \mathbb{R}^n : c_i(x) = 0,\; i \in I,\; c_i(x) \ge 0,\; i \in J\}$. Suppose that the probability distribution is continuous. Then $F(\cdot,\xi)$ is differentiable at $\hat x$ w.p.1 and $\nabla f(\hat x) = \mathbb{E}_P[\nabla_x F(\hat x,\xi)]$. The KKT optimality conditions at an optimal solution $x_0 \in S_0$ can be written as follows:

$$\nabla f(x_0) \in C(x_0), \quad \text{where} \quad C(x) := \Big\{ y = \sum_{i \in I \cup J(x)} \lambda_i \nabla c_i(x) : \lambda_i \ge 0,\; i \in J(x) \Big\}$$

and $J(x) := \{ i \in J : c_i(x) = 0 \}$. The idea of the KKT test is to estimate the distance

$$\delta(\hat x) := \mathrm{dist}\big( \nabla f(\hat x),\, C(\hat x) \big)$$

by using the sample estimator $\hat\delta_N(\hat x) := \mathrm{dist}\big( \nabla \hat f_N(\hat x),\, C(\hat x) \big)$. The covariance matrix of $\nabla \hat f_N(\hat x)$ can be estimated (from the same sample), and hence a confidence region for $\nabla f(\hat x)$ can be constructed. This allows a statistical validation of the KKT conditions. (Shapiro & Homem-de-Mello, 1998)

Complexity of multistage stochastic programming

Consider the following $T$-stage stochastic programming problem

$$\min_{x_1 \in G_1} F_1(x_1) + \mathbb{E}\bigg[ \inf_{x_2 \in G_2(x_1,\xi_2)} F_2(x_2,\xi_2) + \mathbb{E}\Big[ \cdots + \mathbb{E}\Big[ \inf_{x_T \in G_T(x_{T-1},\xi_T)} F_T(x_T,\xi_T) \Big] \Big] \bigg]$$

driven by the random data process $\xi_2,\dots,\xi_T$. Here $x_t \in \mathbb{R}^{n_t}$, $t = 1,\dots,T$, are decision variables, $F_t : \mathbb{R}^{n_t} \times \mathbb{R}^{d_t} \to \mathbb{R}$ are continuous functions and $G_t : \mathbb{R}^{n_{t-1}} \times \mathbb{R}^{d_t} \rightrightarrows \mathbb{R}^{n_t}$, $t = 2,\dots,T$, are measurable multifunctions; the function $F_1 : \mathbb{R}^{n_1} \to \mathbb{R}$ and the set $G_1 \subset \mathbb{R}^{n_1}$ are deterministic. We assume that the set $G_1$ is nonempty. For example, in the linear case

$$F_t(x_t,\xi_t) := \langle c_t, x_t \rangle, \quad G_1 := \{x_1 : A_1 x_1 = b_1,\; x_1 \ge 0\}, \quad G_t(x_{t-1},\xi_t) := \{x_t : B_t x_{t-1} + A_t x_t = b_t,\; x_t \ge 0\},$$

$\xi_1 := (c_1, A_1, b_1)$ is known at the first stage (and hence is nonrandom), and $\xi_t := (c_t, B_t, A_t, b_t)$, $t = 2,\dots,T$, are data vectors some (all) elements of which can be random.

Conditional sampling: let $\xi_2^i$, $i = 1,\dots,N_1$, be an iid random sample of $\xi_2$. Conditional on $\xi_2 = \xi_2^i$, a random sample $\xi_3^{ij}$, $j = 1,\dots,N_2$, is generated, and so on. The obtained scenario tree is considered as a sample average approximation of the true problem. Note that the total number of scenarios is $N = \prod_{t=1}^{T-1} N_t$.

For $T = 3$, under certain regularity conditions, for $\varepsilon > 0$ and $\alpha \in (0,1)$, and sample sizes $N_1$ and $N_2$ satisfying

$$O(1)\left[ \left( \frac{D_1 L_1}{\varepsilon} \right)^{\!n_1} \exp\left\{ -\frac{O(1) N_1 \varepsilon^2}{\sigma_1^2} \right\} + \left( \frac{D_1 L_3}{\varepsilon} \right)^{\!n_1} \left( \frac{D_2 L_2}{\varepsilon} \right)^{\!n_2} \exp\left\{ -\frac{O(1) N_2 \varepsilon^2}{\sigma_2^2} \right\} \right] \le \alpha,$$

we have that any $\varepsilon/2$-optimal solution of the SAA problem is an $\varepsilon$-optimal solution of the true problem with probability at least $1-\alpha$. In particular, suppose that $N_1 = N_2$. Then the required sample size $N_1 = N_2$ satisfies

$$N_1 \ge \frac{O(1)\sigma^2}{\varepsilon^2} \left[ (n_1 + n_2) \log\left( \frac{DL}{\varepsilon} \right) + \log\left( \frac{O(1)}{\alpha} \right) \right].$$
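
The practical consequence is the growth of the scenario tree: if each conditional sample has roughly the size suggested by the bound, the total number of scenarios is that per-stage size raised to the power $T-1$. The sketch below evaluates this with the $O(1)$ constants set to 1 and, purely as a heuristic (the displayed bound is stated for $T = 3$), extrapolates the same per-stage size to more stages:

```python
# A minimal sketch of the qualitative message of the conditional-sampling estimate:
# with a per-stage sample of the suggested size, the total number of scenarios
# grows roughly like that per-stage size to the power T - 1.
import numpy as np

def per_stage_size(sigma, D, L, n1, n2, eps, alpha, c=1.0):
    return c * sigma**2 / eps**2 * ((n1 + n2) * np.log(D * L / eps) + np.log(1.0 / alpha))

N1 = per_stage_size(sigma=1.0, D=1.0, L=10.0, n1=20, n2=20, eps=0.05, alpha=0.01)
print("per-stage sample size ~ %.3g" % N1)
for T in (2, 3, 4, 5):
    print("T =", T, " total scenarios ~ %.3g" % (N1 ** (T - 1)))
```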

Example of financial planning. Suppose that we can rebalance our portfolio at several, say $T$, periods of time. That is, at the beginning we choose values $x_{i0}$ of our assets subject to the budget constraint $\sum_{i=1}^n x_{i0} = W_0$. At the period $t = 1,\dots,T$, our wealth is

$$W_t = \sum_{i=1}^n \xi_{it}\, x_{i,t-1},$$

where $\xi_{it} = (1 + R_{it})$ and $R_{it}$ is the return of the $i$-th asset at the period $t$. Our objective is to maximize the expected utility

$$\max\; \mathbb{E}\big[ U(W_T) \big]$$

at the end of the considered period, subject to the balance constraints $\sum_{i=1}^n x_{it} = W_t$ and $x_t \ge 0$, $t = 0,\dots,T-1$. The values of the decision vector $x_t = (x_{1t},\dots,x_{nt})$, chosen at stage $t$, may depend on the information $\xi_{[1,t]}$ available up to time $t$, but not on the future observations.

The decision process has the form:

$$\text{decision}(x_0) \rightarrow \text{observation}(\xi_1) \rightarrow \text{decision}(x_1) \rightarrow \cdots \rightarrow \text{observation}(\xi_T) \rightarrow \text{decision}(x_T).$$

Dynamic programming equations for this problem can be derived as follows. At the last stage $t = T-1$ we have to solve the problem

$$\max_{x_{T-1} \ge 0}\; \mathbb{E}\Big[ U\Big( \textstyle\sum_{i=1}^n \xi_{iT}\, x_{i,T-1} \Big) \,\Big|\, \xi_{[1,T-1]} \Big] \quad \text{s.t.} \quad \sum_{i=1}^n x_{i,T-1} = W_{T-1}.$$

Its optimal value $Q_{T-1}\big(W_{T-1}, \xi_{[1,T-1]}\big)$ is a function of $W_{T-1}$ and $\xi_{[1,T-1]}$. At stage $t = T-2,\dots,0$ we solve

$$\max_{x_t \ge 0}\; \mathbb{E}\Big[ Q_{t+1}\Big( \textstyle\sum_{i=1}^n \xi_{i,t+1}\, x_{it},\; \xi_{[1,t+1]} \Big) \,\Big|\, \xi_{[1,t]} \Big] \quad \text{s.t.} \quad \sum_{i=1}^n x_{it} = W_t.$$

Its optimal value is $Q_t\big(W_t, \xi_{[1,t]}\big)$. Suppose now that the random process $\xi_t$ is between stages independent. Then the cost-to-go function $Q_t(W_t)$ does not depend on $\xi_{[1,t]}$.

Consider the power utility function $U(z) := z^\gamma$, with $\gamma \le 1$. Then

$$Q_{T-1}(W_{T-1}) = W_{T-1}^\gamma\, Q_{T-1}(1),$$

and by induction

$$Q_1(W_1) = W_1^\gamma \prod_{t=1}^{T-1} Q_t(1).$$

Consequently, the first stage optimal solution is obtained in a myopic way by solving the problem:

$$\max_{x_0 \ge 0}\; \mathbb{E}\Big[ \Big( \textstyle\sum_{i=1}^n \xi_{i1}\, x_{i0} \Big)^{\!\gamma} \Big] \quad \text{s.t.} \quad \sum_{i=1}^n x_{i0} = W_0.$$

The optimal value of the corresponding multistage problem is

$$v_0 = W_0^\gamma \prod_{t=0}^{T-1} Q_t(1). \tag{6}$$

Suppose that we apply the SAA method to this problem using conditional sampling of size $N_t$ at stage $t = 1,\dots,T$.

In accordance with (6), the optimal value $\hat v_N$ of the SAA problem can be written in the form

$$\hat v_N = W_0^\gamma \prod_{t=0}^{T-1} \hat Q_{N_{t+1}}(1), \tag{7}$$

where $\hat Q_{N_{t+1}}(W_t)$ is the optimal value of the SAA counterpart of the corresponding (true) stage problem. Note that since the conditional sample is generated here in the between-stages independent way, the random variables $\hat Q_{N_{t+1}}(1)$ are mutually independent. Therefore

$$\mathbb{E}[\hat v_N] = W_0^\gamma \prod_{t=0}^{T-1} \mathbb{E}\big[ \hat Q_{N_{t+1}}(1) \big] = v_0 \prod_{t=0}^{T-1} (1 + \beta_t),$$

where

$$\beta_t := \frac{\mathbb{E}\big[ \hat Q_{N_{t+1}}(1) \big] - Q_t(1)}{Q_t(1)}$$

is the relative bias of the optimal value of the corresponding $t$-th stage SAA problem. This indicates that the bias of the optimal value of the SAA problem grows exponentially with the number of stages.
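
The compounding of the per-stage bias can be observed numerically. The sketch below (hypothetical two-asset returns: one riskless, one lognormal) estimates the relative bias $\beta$ of the SAA value of a single myopic stage problem and shows how a factor $(1+\beta)^{T-1}$ would accumulate over $T$ stages under between-stages independence; all data and sample sizes are illustrative:

```python
# A minimal sketch illustrating the upward bias of the per-stage SAA value Qhat_N(1)
# of the myopic problem  max E[(xi_1 x_1 + xi_2 x_2)^gamma]  over the simplex, and how a
# per-stage relative bias beta would compound multiplicatively across T stages.
import numpy as np

rng = np.random.default_rng(3)
gamma = 0.5
grid = np.linspace(0.0, 1.0, 51)                         # weight w in asset 1, so x = (w, 1-w)

def sample(N):                                           # hypothetical returns xi = 1 + R
    xi1 = np.full(N, 1.03)                               # riskless asset
    xi2 = rng.lognormal(mean=0.0, sigma=0.5, size=N)     # risky asset
    return xi1, xi2

def stage_value(xi1, xi2):                               # SAA value of the one-period problem
    wealth = np.outer(xi1, grid) + np.outer(xi2, 1.0 - grid)
    return (wealth ** gamma).mean(axis=0).max()

true_value = stage_value(*sample(100_000))               # near-exact Q_t(1) from a huge sample
N, reps = 10, 500
beta = np.mean([stage_value(*sample(N)) for _ in range(reps)]) / true_value - 1.0
print("per-stage relative bias beta ~ %.4f" % beta)
for T in (2, 5, 10, 20):
    print("T =", T, " multiplicative bias factor ~ %.3f" % ((1.0 + beta) ** (T - 1)))
```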
