Sample Average Approximation (SAA) for Stochastic Programs
Sample Average Approximation (SAA) for Stochastic Programs, with an eye towards computation. Dave Morton, Industrial Engineering & Management Sciences, Northwestern University.
Outline
- SAA
- Results for Monte Carlo estimators: no optimization
- What results should we want for SAA?
- Results for SAA: 1. Bias, 2. Consistency, 3. Central limit theorem (CLT)
- SAA Algorithm: a basic algorithm; a sequential algorithm
- Multi-Stage Problems
- What We Didn't Discuss
Stochastic Programming Models

z* = min_{x ∈ X} E f(x, ξ)

Such problems arise in statistics, simulation, and mathematical programming. Our focus: mathematical programming with X deterministic. We'll assume:
(A1) X is nonempty and compact
(A2) Ef(·, ξ) is lower semicontinuous
(A3) E sup_{x ∈ X} f²(x, ξ) < ∞
Here ξ is a random vector, and its distribution P_ξ does not depend on x. We can evaluate f(x, ξ(ω)) for a fixed x and realization ξ(ω). The choice of f determines the problem class.
Sample Average Approximation

True or population problem, with optimal solution x*:
z* = min_{x ∈ X} E f(x, ξ)    (SP)

SAA problem, with optimal solution x*_n:
z*_n = min_{x ∈ X} (1/n) Σ_{j=1}^n f(x, ξ^j) ≡ min_{x ∈ X} f̄_n(x)    (SP_n)

Here ξ^1, ξ^2, ..., ξ^n are iid as ξ, or sampled another way. View z*_n as an estimator of z* and x*_n as an estimator of x*. Want names? External sampling method, sample-path optimization, stochastic counterpart, retrospective optimization, non-recursive method, and sample average approximation.
Let's start in a simpler setting, momentarily putting aside optimization...
Monte Carlo Sampling

Suppressing the (fixed) decision x: let z = Ef(ξ), σ² = var f(ξ) < ∞, and let ξ^1, ξ^2, ..., ξ^n be iid as ξ. Let z_n = (1/n) Σ_{i=1}^n f(ξ^i) be the sample mean estimator of z.

FACT 1. E z_n = z: z_n is an unbiased estimator of z.
FACT 2. z_n → z, wp1 (strong LLN): z_n is a strongly consistent estimator of z.
FACT 3. √n (z_n − z) ⇒ N(0, σ²) (CLT): the rate of convergence is 1/√n, and the scaled difference is normally distributed.
FACTS 4, 5, ...: law of the iterated logarithm, concentration inequalities, ...
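These facts are easy to check numerically. A minimal sketch; the choice f(ξ) = ξ² with ξ ∼ N(0, 1), so that z = 1 and σ² = 2, is an illustrative assumption, not from the slides:

```python
import math
import random
import statistics

random.seed(1)

def f(xi):
    return xi * xi  # E f(xi) = 1 and var f(xi) = 2 for xi ~ N(0, 1)

def sample_mean(n):
    return sum(f(random.gauss(0.0, 1.0)) for _ in range(n)) / n

# FACTS 1-2: the sample mean is unbiased and strongly consistent,
# so with n large it should land close to z = 1.
z_hat = sample_mean(200_000)
print(z_hat)

# FACT 3 (CLT): sqrt(n) * (z_n - z) is approximately N(0, sigma^2) with
# sigma^2 = 2; check the standard deviation across replications.
n = 400
scaled = [math.sqrt(n) * (sample_mean(n) - 1.0) for _ in range(2000)]
print(statistics.stdev(scaled))  # should be near sqrt(2) ~ 1.414
```

The sample sizes and tolerances here are arbitrary; they are chosen loose enough that sampling error does not swamp the effect being illustrated.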
Do such results carry over to SAA?
SAA

Population problem, with optimal solution x*:
z* = min_{x ∈ X} E f(x, ξ)    (SP)

SAA problem, with optimal solution x*_n:
z*_n = min_{x ∈ X} (1/n) Σ_{j=1}^n f(x, ξ^j) ≡ min_{x ∈ X} f̄_n(x)    (SP_n)

View z*_n as an estimator of z* and x*_n as an estimator of x*. What can we say about z*_n and x*_n as n → ∞? What should we want to say about z*_n and x*_n as n → ∞?
SAA: Possible Goals¹

1. x*_n → x*, wp1, and √n (x*_n − x*) ⇒ N(0, Σ)
2. z*_n → z*, wp1, and √n (z*_n − z*) ⇒ N(0, σ²)
3. Ef(x*_n, ξ) → z*, wp1
4. lim_{n→∞} P( Ef(x*_n, ξ) − z* ≤ ε_n ) ≥ 1 − α, where ε_n → 0

Modeling issues: If (SP_n) is for maximum-likelihood estimation, then goal 1 could be appropriate. If (SP) is to price a financial option, then goal 2 could be appropriate. When (SP) is a decision-making model, 1 may be more than we need and 2 is of secondary interest; goals 3 and 4 arguably suffice.

Technical issues: In general, we shouldn't expect {x*_n}_{n=1}^∞ to converge when (SP) has multiple optimal solutions. In this case, we want: limit points of {x*_n}_{n=1}^∞ solve (SP). If we achieve the limit-points result, X is compact, and Ef(·, ξ) is continuous, then we obtain goal 3. The limiting distributions may not be normal.

¹ Again, these goals aren't true in general; i.e., they may be impossible goals.
1. Bias  2. Consistency  3. CLT
SAA: Example

z* = min_{−1 ≤ x ≤ 1} E f(x, ξ), with f(x, ξ) = ξx and ξ ∼ N(0, 1). Every feasible solution x ∈ [−1, 1] is optimal, and z* = 0.

z*_n = min_{−1 ≤ x ≤ 1} ( (1/n) Σ_{j=1}^n ξ^j ) x, so x*_n = ±1 and z*_n = −|ξ̄_n|, where ξ̄_n ∼ N(0, 1/n).

Observations:
1. E z*_n < z*, ∀n (negative bias)
2. E z*_n < E z*_{n+1}, ∀n (monotonically shrinking bias)
3. z*_n → z*, wp1 (strongly consistent)
4. √n (z*_n − z*) = −|N(0, 1)| (non-normal errors)
5. b(z*_n) ≡ E z*_n − z* = −a/√n, a > 0 (O(n^{−1/2}) bias)

So, optimization changes the nature of sample-mean estimators.

Note: What if x ∈ [−1, 1] is replaced by x ∈ R? SAA fails, spectacularly.
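The observations can be verified by simulation. A small sketch of the example; the sample sizes are arbitrary choices:

```python
import math
import random

random.seed(0)

def saa_value(n):
    # SAA of min_{-1 <= x <= 1} E[xi * x]: the sample objective is
    # xibar_n * x, minimized at x = -sign(xibar_n), so z*_n = -|xibar_n|.
    xibar = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    return -abs(xibar)

n, reps = 25, 20_000
mean_zn = sum(saa_value(n) for _ in range(reps)) / reps

# Observations 1 and 5: E z*_n = -E|N(0, 1/n)| = -sqrt(2/(pi*n)) < 0 = z*.
print(mean_zn)                           # estimated E z*_n: negative
print(-math.sqrt(2.0 / (math.pi * n)))   # theoretical value
```

The two printed numbers should agree up to Monte Carlo error, making the O(n^{−1/2}) negative bias concrete.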
1. Bias  2. Consistency  3. CLT
1. Bias

All you need to know:
min_{x ∈ X} [ f(x) + g(x) ] ≥ min_{x ∈ X} f(x) + min_{x ∈ X} g(x)
SAA: Bias

Theorem. Assume (A1), (A2), and E f̄_n(x) = Ef(x, ξ), ∀x ∈ X. Then, E z*_n ≤ z*. If, in addition, ξ^1, ξ^2, ..., ξ^n are iid, then E z*_n ≤ E z*_{n+1}.

Notes: The first result does not require iid realizations, just an unbiased estimator. The hypothesis can be relaxed to E f̄_n(x) ≤ Ef(x, ξ), ∀x ∈ X. The iid hypothesis can be relaxed to: ξ^1, ξ^2, ..., ξ^n are exchangeable random variables.
Proof of Bias Result

E (1/n) Σ_{j=1}^n f(x, ξ^j) = E f(x, ξ), and so

min_{x ∈ X} E (1/n) Σ_{j=1}^n f(x, ξ^j) = min_{x ∈ X} E f(x, ξ) = z*.

Hence we obtain

E z*_n = E min_{x ∈ X} (1/n) Σ_{j=1}^n f(x, ξ^j) ≤ min_{x ∈ X} E (1/n) Σ_{j=1}^n f(x, ξ^j) = z*.

Aside: a simple example when n = 1:
E min_{x ∈ X} f(x, ξ) ≤ min_{x ∈ X} E f(x, ξ)
Interpretation: We'll do better if we wait and see ξ's realization before choosing x.

Next, we show the bias decreases monotonically: E z*_n ≤ E z*_{n+1}. Intuition...
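The n = 1 aside can be made concrete with the running example f(x, ξ) = ξx on X = [−1, 1], an assumption carried over from the example slide: min_x Ef(x, ξ) = 0, while the wait-and-see value is E min_x f(x, ξ) = E(−|ξ|) = −√(2/π).

```python
import math
import random

random.seed(2)

# Wait-and-see with n = 1 and f(x, xi) = xi * x on X = [-1, 1]:
# observing xi first, we pick x = -sign(xi) and collect -|xi|, so
# E min_x f(x, xi) = -E|xi| = -sqrt(2/pi) < 0 = min_x E f(x, xi).
reps = 50_000
wait_and_see = sum(-abs(random.gauss(0.0, 1.0)) for _ in range(reps)) / reps
print(wait_and_see)                # near -0.798, strictly below 0
print(-math.sqrt(2.0 / math.pi))   # theoretical wait-and-see value
```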
Proof of Bias Monotonicity Result

E z*_{n+1} = E min_{x ∈ X} (1/(n+1)) Σ_{j=1}^{n+1} f(x, ξ^j)
          = E min_{x ∈ X} (1/(n+1)) Σ_{i=1}^{n+1} [ (1/n) Σ_{j=1, j≠i}^{n+1} f(x, ξ^j) ]
          ≥ E (1/(n+1)) Σ_{i=1}^{n+1} min_{x ∈ X} (1/n) Σ_{j=1, j≠i}^{n+1} f(x, ξ^j)
          = (1/(n+1)) Σ_{i=1}^{n+1} E min_{x ∈ X} (1/n) Σ_{j=1, j≠i}^{n+1} f(x, ξ^j)
          = E z*_n
1. Bias ✓  2. Consistency: z*_n and x*_n  3. CLT
2. Consistency of z*_n

All you need to know: Ef(x*, ξ) ≤ Ef(x*_n, ξ) and f̄_n(x*_n) ≤ f̄_n(x*)
SAA: Consistency of z*_n

Theorem. Assume (A1), (A2), and the USLLN:
lim_{n→∞} sup_{x ∈ X} | f̄_n(x) − Ef(x, ξ) | = 0, wp1.
Then, z*_n → z*, wp1.

Notes: Does not assume ξ^1, ξ^2, ..., ξ^n are iid; instead, assumes the uniform strong law of large numbers (USLLN). Important to realize:
lim_{n→∞} sup_{x ∈ X} | f̄_n(x) − Ef(x, ξ) | = 0, wp1  ⇒  lim_{n→∞} | f̄_n(x) − Ef(x, ξ) | = 0, wp1, ∀x ∈ X.
But the converse is false. Think of our example: f̄_n(x) = ξ̄_n x and X = R.
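The counterexample in the last note can be seen numerically. A sketch; the particular evaluation points are arbitrary:

```python
import random

random.seed(6)

def xibar(n):
    # fbar_n(x) = xibar_n * x estimates Ef(x, xi) = 0 for f(x, xi) = xi * x
    return sum(random.gauss(0.0, 1.0) for _ in range(n)) / n

# Pointwise LLN: for each fixed x, |fbar_n(x) - Ef(x, xi)| = |xibar_n|*|x| -> 0.
x = 3.0
err = abs(xibar(10_000) * x)
print(err)  # small for a fixed x once n is large

# But on X = R the convergence is not uniform: for every n,
# sup_x |fbar_n(x) - Ef(x, xi)| = sup_x |xibar_n|*|x| = +infinity
# whenever xibar_n != 0.
b = xibar(10_000)
print([abs(b * x) for x in (1e2, 1e4, 1e6)])  # grows without bound in |x|
```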
Proof of consistency of z*_n

| z*_n − z* | = | f̄_n(x*_n) − Ef(x*, ξ) |
            = max{ f̄_n(x*_n) − Ef(x*, ξ),  Ef(x*, ξ) − f̄_n(x*_n) }
            ≤ max{ f̄_n(x*) − Ef(x*, ξ),  Ef(x*_n, ξ) − f̄_n(x*_n) }
            ≤ max{ | f̄_n(x*) − Ef(x*, ξ) |,  | f̄_n(x*_n) − Ef(x*_n, ξ) | }
            ≤ sup_{x ∈ X} | f̄_n(x) − Ef(x, ξ) |

Taking n → ∞ completes the proof.
2. Consistency of x*_n

All you need to know: If g is continuous and lim_{k→∞} x_k = x̂, then lim_{k→∞} g(x_k) = g(x̂).
SAA: Consistency of x*_n

Theorem. Assume (A1), (A2), Ef(·, ξ) is continuous, and the USLLN:
lim_{n→∞} sup_{x ∈ X} | f̄_n(x) − Ef(x, ξ) | = 0, wp1.
Then, every limit point of {x*_n} solves (SP), wp1.

Notes: Assumes the USLLN rather than assuming ξ^1, ξ^2, ..., ξ^n are iid, and assumes continuity of Ef(·, ξ). The result doesn't say lim_{n→∞} x*_n = x*, wp1. Why not?
Proof of consistency of x*_n

Let x̂ be a limit point of {x*_n}_{n=1}^∞ and let n ∈ N index a convergent subsequence. (Such a limit point exists, and x̂ ∈ X, because X is compact.) By the USLLN,
lim_{n→∞, n∈N} f̄_n(x*_n) = z*, wp1,   where f̄_n(x*_n) = z*_n,
and
| f̄_n(x*_n) − Ef(x̂, ξ) | = | f̄_n(x*_n) − Ef(x*_n, ξ) + Ef(x*_n, ξ) − Ef(x̂, ξ) |
                        ≤ | f̄_n(x*_n) − Ef(x*_n, ξ) | + | Ef(x*_n, ξ) − Ef(x̂, ξ) |.
Taking n → ∞ for n ∈ N: the first term goes to zero by the USLLN, and the second goes to zero by continuity of Ef(·, ξ). Thus, Ef(x̂, ξ) = z*.
1. Bias ✓  2. Consistency: z*_n and x*_n ✓  3. CLT
Bias ✓  Consistency: z*_n and x*_n ✓
When does the USLLN hold? And what if we have a stochastic MIP, in which continuity doesn't make sense?
Sufficient Conditions for the USLLN

Fact.² Assume X is compact, and assume:
- f(·, ξ) is continuous, wp1, on X
- ∃ g(ξ) satisfying sup_{x ∈ X} | f(x, ξ) | ≤ g(ξ), wp1, with Eg(ξ) < ∞
- ξ^1, ξ^2, ..., ξ^n are iid as ξ.
Then, the USLLN holds.

² Facts are theorems that we won't prove.
Sufficient Conditions for the USLLN

Fact. Let X be compact and convex, and assume:
- f(·, ξ) is convex and continuous, wp1, on X
- the LLN holds pointwise: lim_{n→∞} | f̄_n(x) − Ef(x, ξ) | = 0, wp1, ∀x ∈ X.
Then, the USLLN holds.
SAA: Consistency of z*_n and x*_n under Finite X

Fact. Assume X is finite, and assume lim_{n→∞} | f̄_n(x) − Ef(x, ξ) | = 0, wp1, ∀x ∈ X. Then, the USLLN holds, z*_n → z*, and every limit point of {x*_n} solves (SP), wp1.

Notes: Ef(·, ξ) need not be continuous (continuity would be unnatural since the domain X is finite). Assumes the pointwise LLN rather than the USLLN: here, the pointwise LLN plus X finite implies
lim_{n→∞} | f̄_n(x) − Ef(x, ξ) | = 0, wp1, ∀x ∈ X  ⇒  lim_{n→∞} sup_{x ∈ X} | f̄_n(x) − Ef(x, ξ) | = 0, wp1.
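A tiny illustration of the finite-X fact. The toy problem, f(x, ξ) = |x − ξ| with ξ ∼ U(0, 1) over a five-point feasible set, is an assumption for illustration, not from the slides:

```python
import random

random.seed(3)

X = [0.0, 0.25, 0.5, 0.75, 1.0]  # finite feasible set

def f(x, xi):
    # For xi ~ U(0,1), E f(x, xi) = x^2/2 + (1-x)^2/2, minimized on X at
    # x* = 0.5 with z* = 0.25; for finite X the pointwise LLN suffices.
    return abs(x - xi)

def solve_saa(n):
    xis = [random.random() for _ in range(n)]
    fbar = {x: sum(f(x, xi) for xi in xis) / n for x in X}
    x_n = min(X, key=fbar.get)  # SAA optimal solution over the finite set
    return x_n, fbar[x_n]

x_n, z_n = solve_saa(50_000)
print(x_n, z_n)  # should approach x* = 0.5 and z* = 0.25
```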
SAA: Consistency of z*_n and x*_n under LSC f(·, ξ)

Fact. Assume:
- ξ^1, ξ^2, ..., ξ^n are iid as ξ
- f(·, ξ) is lower semicontinuous on X, ∀ξ
- ∃ g(ξ) satisfying inf_{x ∈ X} f(x, ξ) ≥ g(ξ), wp1, where E|g(ξ)| < ∞.
Then, z*_n → z*, wp1, and every limit point of {x*_n} solves (SP), wp1.

The proof relies on epi-convergence of f̄_n(x) to Ef(x, ξ). Epi-convergence provides a theory for approximation in optimization beyond SAA. For f̄_n(x) convex and continuous on compact, convex X, the USLLN yields epi-convergence, but epi-convergence provides a more general framework in the non-convex setting. Epi-convergence can be viewed as precisely the relaxation of uniform convergence that yields the desired convergence results.
[Embedded article first page: Peter Kall, "Approximation to Optimization Problems: An Elementary Review," Mathematics of Operations Research, Vol. 11, No. 1, February 1986. The review shows in an elementary way how closely the arguments in the epi-convergence approach are related to those of the classical theory of convergence of functions: a problem inf{φ(x) : x ∈ Γ} is replaced by a sequence of "approximating" problems inf{φ_ν(x) : x ∈ Γ_ν} that are supposed to be easier to solve, as in cutting plane methods, penalty methods, and solution methods for stochastic programming.]
1. Bias ✓  2. Consistency: z*_n and x*_n ✓  3. CLT
3. One-sided CLT for z*_n

All you need to know: the CLT for iid random variables, and f̄_n(x*_n) ≤ f̄_n(x), ∀x ∈ X.
SAA: Towards a CLT for z*_n

We have conditions under which z*_n − z* shrinks to zero. Is √n the correct scaling factor, so that √n (z*_n − z*) converges to something nontrivial?

Notation:
f̄_n(x) = (1/n) Σ_{j=1}^n f(x, ξ^j)
σ²(x) = var[f(x, ξ)]
s²_n(x) = (1/(n−1)) Σ_{j=1}^n [ f(x, ξ^j) − f̄_n(x) ]²
X* is the set of optimal solutions to (SP)
z_α satisfies P(N(0, 1) ≤ z_α) = 1 − α
SAA: Towards a CLT for z*_n

z*_n = f̄_n(x*_n) ≤ f̄_n(x), wp1, ∀x ∈ X, and so

(z*_n − z*) / (σ(x)/√n) ≤ (f̄_n(x) − z*) / (σ(x)/√n), wp1.

Let x* ∈ X* ⊆ X. Then,

P( (z*_n − z*) / (σ(x*)/√n) ≤ z_α ) ≥ P( (f̄_n(x*) − z*) / (σ(x*)/√n) ≤ z_α ).

By the CLT for iid random variables,

lim_{n→∞} P( (f̄_n(x*) − z*) / (σ(x*)/√n) ≤ z_α ) = 1 − α.

Thus...
SAA: One-sided CLT for z*_n

Theorem. Assume a pointwise CLT:
lim_{n→∞} P( (f̄_n(x) − Ef(x, ξ)) / (σ(x)/√n) ≤ u ) = P(N(0, 1) ≤ u), ∀x ∈ X.
Let x* ∈ X*. Then,
lim inf_{n→∞} P( (z*_n − z*) / (σ(x*)/√n) ≤ z_α ) ≥ 1 − α.

Notes: (A3) and ξ^1, ξ^2, ..., ξ^n iid as ξ suffice for the pointwise CLT; there are other possibilities, too. For sufficiently large n, we infer that P{ z*_n − z_α σ(x*)/√n ≤ z* } ⪆ 1 − α. Of course, we don't know σ(x*), and so this is practically useless. But...
SAA: Towards (a better) CLT for z*_n

z*_n = f̄_n(x*_n) ≤ f̄_n(x), wp1, ∀x ∈ X, and so

(z*_n − z*) / (s_n(x*_n)/√n) ≤ (f̄_n(x) − z*) / (s_n(x*_n)/√n), wp1.

Let x* = x*_min ∈ arg min_{x ∈ X*} σ²(x). Then,

P( (z*_n − z*) / (s_n(x*_n)/√n) ≤ z_α )
≥ P( (f̄_n(x*_min) − z*) / (s_n(x*_n)/√n) ≤ z_α )
= P( (f̄_n(x*_min) − z*) / (σ(x*_min)/√n) ≤ z_α [ s_n(x*_n) / σ(x*_min) ] ).

If z_α > 0 and lim inf_{n→∞} s_n(x*_n) ≥ inf_{x ∈ X*} σ(x), then

lim inf_{n→∞} P( (z*_n − z*) / (s_n(x*_n)/√n) ≤ z_α ) ≥ 1 − α.
SAA: One-sided CLT for z*_n

Theorem. Assume:
- (A1)-(A3)
- ξ^1, ξ^2, ..., ξ^n are iid as ξ
- inf_{x ∈ X*} σ²(x) ≤ lim inf_{n→∞} s²_n(x*_n) ≤ lim sup_{n→∞} s²_n(x*_n) ≤ sup_{x ∈ X*} σ²(x), wp1.
Then, given 0 < α < 1,
lim inf_{n→∞} P( (z*_n − z*) / (s_n(x*_n)/√n) ≤ z_α ) ≥ 1 − α.

Notes: Could instead have assumed a pointwise CLT. For sufficiently large n, we infer that P{ z*_n − z_α s_n(x*_n)/√n ≤ z* } ⪆ 1 − α. How does this relate to the bias result E z*_n ≤ z*?
Bias ✓  Consistency: z*_n and x*_n ✓  CLT for z*_n ✓  Two-sided CLT for z*_n?
Two-sided CLT for z*_n

Fact. Assume:
- (A1)-(A3)
- ξ^1, ξ^2, ..., ξ^n are iid as ξ
- | f(x_1, ξ) − f(x_2, ξ) | ≤ g(ξ) ‖x_1 − x_2‖, ∀x_1, x_2 ∈ X, where Eg²(ξ) < ∞.
If (SP) has a unique optimal solution x*, then:
√n (z*_n − z*) ⇒ N(0, σ²(x*)).

Note: But there are frequently multiple optimal solutions...
Two-sided CLT for z*_n

Fact. Assume:
- (A1)-(A3)
- ξ^1, ξ^2, ..., ξ^n are iid as ξ
- | f(x_1, ξ) − f(x_2, ξ) | ≤ g(ξ) ‖x_1 − x_2‖, ∀x_1, x_2 ∈ X, where Eg²(ξ) < ∞.
Then,
√n (z*_n − z*) ⇒ inf_{x ∈ X*} N(0, σ²(x)).

Notes: What is inf_{x ∈ X*} N(0, σ²(x))? For each x, √n ( f̄_n(x) − Ef(x, ξ) ) ⇒ N(0, σ²(x)), and {N(0, σ²(x))} is a family of correlated normal random variables; the infimum is over that family. Recall our example: inf_{x ∈ X*} N(0, σ²(x)) = −|N(0, 1)|. How does inf_{x ∈ X*} N(0, σ²(x)) relate to the bias result E z*_n ≤ z*?
Bias ✓  Consistency: z*_n and x*_n ✓  CLT for z*_n ✓  3. CLT for x*_n
SAA: CLT for x*_n

Fact. Assume:
- (A1)-(A3)
- f(·, ξ) is convex and twice continuously differentiable
- X = {x : Ax ≤ b}
- (SP) has a unique optimal solution x*
- (x_1 − x_2)ᵀ H (x_1 − x_2) > 0, ∀x_1, x_2 ∈ X, x_1 ≠ x_2, where H = E ∇²_x f(x*, ξ)
- ∇_x f(x, ξ) satisfies ‖∇_x f(x_1, ξ) − ∇_x f(x_2, ξ)‖ ≤ g(ξ) ‖x_1 − x_2‖, ∀x_1, x_2 ∈ X, where Eg²(ξ) < ∞ for some real-valued function g.
Then, √n (x*_n − x*) ⇒ u*, where u* solves the random QP:
min_u  (1/2) uᵀ H u + cᵀ u
s.t.   A_i u ≤ 0, i ∈ {i : A_i x* = b_i}
       uᵀ E ∇_x f(x*, ξ) = 0,
and c is multivariate normal with mean 0 and covariance matrix Σ, where Σ_ij = cov( ∂f(x*, ξ)/∂x_i, ∂f(x*, ξ)/∂x_j ).
Bias: z*_n ✓  Consistency: z*_n and x*_n ✓  CLT: z*_n and x*_n ✓
SAA: Revisiting Possible Goals

1. x*_n → x*, wp1, and √n (x*_n − x*) ⇒ u*, where u* solves a random QP
2. z*_n → z*, wp1, and √n (z*_n − z*) ⇒ inf_{x ∈ X*} N(0, σ²(x))
3. Ef(x*_n, ξ) → z*, wp1
4. lim_{n→∞} P( Ef(x*_n, ξ) − z* ≤ ε_n ) ≥ 1 − α, where ε_n → 0

We now have conditions under which variants of 1-3 hold. Let's next start by aiming for a more modest version of 4: given x̂ ∈ X and α, find a (random) CI width ε with
P( Ef(x̂, ξ) − z* ≤ ε ) ⪆ 1 − α.
An SAA Algorithm
Assessing Solution Quality: Towards an SAA Algorithm

z* = min_{x ∈ X} E f(x, ξ)

Goal: Given x̂ ∈ X and α, find a (random) CI width ε with P( Ef(x̂, ξ) − z* ≤ ε ) ⪆ 1 − α.

Using the bias result,
E [ (1/n) Σ_{j=1}^n f(x̂, ξ^j) − min_{x ∈ X} (1/n) Σ_{j=1}^n f(x, ξ^j) ] ≥ Ef(x̂, ξ) − z*,
where the bracketed quantity is G_n(x̂).

Remarks:
- Anticipate var G_n(x̂) ⪅ var[ (1/n) Σ_{j=1}^n f(x̂, ξ^j) ] + var z*_n
- G_n(x̂) ≥ 0, but it is not asymptotically normal (what to do?)
- Not much of an algorithm if the solution, x̂, comes as input!
An SAA Algorithm

Input: CI level 1 − α, sample sizes n_x and n, replication size n_g.
Output: Solution x*_{n_x} and approximate (1 − α)-level CI on Ef(x*_{n_x}, ξ) − z*.

0. Sample iid observations ξ^1, ξ^2, ..., ξ^{n_x}, and solve (SP_{n_x}) to obtain x*_{n_x}.
1. For k = 1, 2, ..., n_g:
   1.1. Sample iid observations ξ^{k1}, ξ^{k2}, ..., ξ^{kn} from the distribution of ξ.
   1.2. Solve (SP_n) using ξ^{k1}, ξ^{k2}, ..., ξ^{kn} to obtain x^k_n.
   1.3. Calculate G^k_n(x*_{n_x}) = (1/n) Σ_{j=1}^n f(x*_{n_x}, ξ^{kj}) − (1/n) Σ_{j=1}^n f(x^k_n, ξ^{kj}).
2. Calculate the gap estimate and sample variance:
   Ḡ_n(n_g) = (1/n_g) Σ_{k=1}^{n_g} G^k_n(x*_{n_x})  and  s²_G(n_g) = (1/(n_g − 1)) Σ_{k=1}^{n_g} ( G^k_n(x*_{n_x}) − Ḡ_n(n_g) )².
3. Let ε_g = t_{n_g−1,α} s_G(n_g)/√n_g, and output x*_{n_x} and the one-sided CI [ 0, Ḡ_n(n_g) + ε_g ].
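For concreteness, here is a hedged sketch of the whole procedure on the earlier toy problem min_{−1≤x≤1} E[ξx] (z* = 0). The grid "solver", the sample sizes, and the hard-coded one-sided t-quantile t_{14,0.05} ≈ 1.761 are illustrative assumptions, not part of the slides:

```python
import math
import random

random.seed(4)

GRID = [i / 10.0 for i in range(-10, 11)]  # crude stand-in for solving (SP_n) on X = [-1, 1]

def f(x, xi):
    return xi * x  # toy objective; z* = 0 and every x in X is optimal

def solve_sp(sample):
    fbar = {x: sum(f(x, xi) for xi in sample) / len(sample) for x in GRID}
    return min(GRID, key=fbar.get)

def saa_mrp(n_x=2000, n=500, n_g=15, t_quantile=1.761):  # t_{14,0.05} ~ 1.761
    # Step 0: candidate solution from an SAA with sample size n_x.
    x_hat = solve_sp([random.gauss(0.0, 1.0) for _ in range(n_x)])
    # Step 1: n_g independent gap estimates, common random numbers in step 1.3.
    gaps = []
    for _ in range(n_g):
        sample = [random.gauss(0.0, 1.0) for _ in range(n)]
        x_k = solve_sp(sample)
        gaps.append(sum(f(x_hat, xi) - f(x_k, xi) for xi in sample) / n)
    # Steps 2-3: sample mean, sample variance, one-sided CI on the gap.
    gbar = sum(gaps) / n_g
    s2 = sum((g - gbar) ** 2 for g in gaps) / (n_g - 1)
    eps = t_quantile * math.sqrt(s2 / n_g)
    return x_hat, (0.0, gbar + eps)

x_hat, (lo, hi) = saa_mrp()
print(x_hat, lo, hi)  # true gap is 0 here, so the CI should cover it
```

Since x_hat lies on the grid, each G^k is nonnegative by construction, matching the nonnegativity of G_n(x̂) noted earlier.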
An SAA Algorithm

Input: CI level 1 − α, sample sizes n_x and n, replication size n_g.
- Fix α = 0.05 and n_g = 15 (say)
- Choose n_x and n based on what is computationally reasonable
- Choose n_x > n, perhaps n_x ≫ n

Then: for fixed n and n_x, we can justify the algorithm with n_g → ∞; for fixed n_g, we can justify the algorithm with n → ∞. We can even use n_g = 1, albeit with a different variance estimator.
An SAA Algorithm

Output: Solution x*_{n_x} and approximate (1 − α)-level CI on Ef(x*_{n_x}, ξ) − z*.
- x*_{n_x} is the decision we will make
- The confidence interval is on x*_{n_x}'s optimality gap, Ef(x*_{n_x}, ξ) − z*
- Here, Ef(x*_{n_x}, ξ) = E_ξ [ f(x*_{n_x}, ξ) | x*_{n_x} ]
- So, this is a posterior assessment, given the decision we will make
An SAA Algorithm

Step 0: Sample observations ξ^1, ξ^2, ..., ξ^{n_x}, and solve (SP_{n_x}) to obtain x*_{n_x}.
- ξ^1, ξ^2, ..., ξ^{n_x} need not be iid
- Agnostic to the algorithm used to solve (SP_{n_x})
An SAA Algorithm

Step 1 (k = 1, 2, ..., n_g): sample ξ^{k1}, ..., ξ^{kn}, solve (SP_n) to obtain x^k_n, and calculate G^k_n(x*_{n_x}).
- ξ^{k1}, ξ^{k2}, ..., ξ^{kn} need not be iid, but should satisfy E f̄_n(x) = Ef(x, ξ) (could use Latin hypercube sampling or randomized quasi-Monte Carlo sampling)
- The batches (ξ^{k1}, ξ^{k2}, ..., ξ^{kn}), k = 1, 2, ..., n_g, should be iid
- Agnostic to the algorithm used to solve (SP_n)
- Can solve a relaxation of (SP_n) if a lower bound is used in the second term of step 1.3 (recall the E f̄_n(x) ≤ Ef(x, ξ) relaxation in the bias result)
- Can also use independent samples and different sample sizes, n_u and n_l, for the upper- and lower-bound estimators in step 1.3
An SAA Algorithm

Steps 2-3: calculate the gap estimate Ḡ_n(n_g) and sample variance s²_G(n_g), let ε_g = t_{n_g−1,α} s_G(n_g)/√n_g, and output the one-sided CI [ 0, Ḡ_n(n_g) + ε_g ].
- Standard calculation of sample mean and sample variance
- Standard calculation of a one-sided confidence interval for a nonnegative parameter
- Again, here the parameter is Ef(x*_{n_x}, ξ) − z*
- The SAA Algorithm tends to be conservative, i.e., to exhibit over-coverage. Why?
SAA Algorithm Applied to a Few Two-Stage SLPs

[Table: for problems DB, WRPM, 20TERM, and SSN — the sample size n_x in (SP_{n_x}) used for x*_{n_x}; the optimality-gap sample sizes n and n_g; CI widths of 0.2%, 0.08%, 0.5%, and 8% of z*, respectively; and the variance reduction with respect to the algorithm that estimates the upper and lower bounds defining G with independent, rather than common, random number streams. Numeric sample sizes lost in transcription.]
SAA Algorithm: Network Capacity Expansion Model (z* ≈ 8.3) (Higle & Sen)

[Figure slides: a five-node network (A, B, C, D, E) with first-stage capacity decisions; plots of the estimated optimality gap and sampling error, as a percentage of z*, versus n = n_x; and plots of the upper and lower bounds on z* versus n. Note: n = n_x. Plot data lost in transcription.]
SAA Algorithm: Network Capacity Expansion Model (z* ≈ 8.3) (Higle & Sen)

[Figure slide: log-log plot of the gap versus n.] If EG_n(x*_n) = a n^{−p}, then log[EG_n(x*_n)] = log[a] − p log[n]. From these four points, p and R² are estimated (fitted values lost in transcription).
SAA Algorithm: Network Capacity Expansion Model (z* ≈ 8.3) (Higle & Sen)

Enforce symmetry constraints: x_1 = x_6, x_2 = x_7, x_3 = x_5.
SAA Algorithm: Network Capacity Expansion Model (z* ≈ 8.3) (Higle & Sen)

[Figure slides, "no extra constraints" versus "with symmetry constraints", side by side: plots of the estimated optimality gap and sampling error, as a percentage of z*, versus n = n_x; and plots of the upper and lower bounds on z* versus n. Plot data lost in transcription.]
SAA Algorithm: Network Capacity Expansion Model (z* ≈ 8.3) (Higle & Sen)

[Figure slides: log-log gap plots, "no extra constraints" versus "with symmetry constraints", each fit by EG_n(x*_n) = a n^{−p}; fitted p and R² values lost in transcription.] With the symmetry constraints, the rate p is worse, but the constant a is better.
If you are happy with your results from the SAA Algorithm, then stop now!
Why Are You Unhappy?

1. The computational effort to solve n_g = 15 instances of (SP_n) is prohibitive;
2. The bias of z*_n is large;
3. The sampling error, ε_g, is large; or,
4. The solution x*_{n_x} is far from optimal to (SP).

Remedy 1: A single-replication procedure: n_g = 1
Remedy 2: LHS, randomized QMC, an adaptive jackknife estimator
Remedy 3: CRNs reduce variance. Other ideas help: LHS and randomized QMC
Remedy 4: A sequential SAA algorithm
129 A Sequential SAA Algorithm
130 A Sequential SAA Algorithm
Step 1: Generate a candidate solution
Step 2: Check stopping criterion. If satisfied, stop. Else, go to Step 1
Instead of a single candidate solution x̂ = x_n ∈ X, we have a sequence {x̂_k} with each x̂_k ∈ X
Stopping criterion rooted in the above procedure (with n_g = 1):
G_k ≡ G_{n_k}(x̂_k) = (1/n_k) Σ_{j=1}^{n_k} [ f(x̂_k, ξ^j) − f(x*_{n_k}, ξ^j) ]
and
s_k² ≡ s²_{n_k}(x*_{n_k}) = (1/(n_k − 1)) Σ_{j=1}^{n_k} [ (f(x̂_k, ξ^j) − f(x*_{n_k}, ξ^j)) − (f̄_{n_k}(x̂_k) − f̄_{n_k}(x*_{n_k})) ]²
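With n_g = 1 and common random numbers, G_k and s_k² are just the sample mean and sample variance of the paired per-scenario differences. A generic sketch (the quadratic f used in the test is a hypothetical stand-in for the real recourse function):

```python
import statistics

def gap_and_variance(f, x_hat, x_star, sample):
    """Point estimate G of the optimality gap of x_hat relative to x_star, and
    the sample variance s^2 of the paired differences, both computed on the
    SAME sample so that common random numbers cancel much of the noise.
    f(x, xi) is the per-scenario cost; sample holds xi_1, ..., xi_n."""
    d = [f(x_hat, xi) - f(x_star, xi) for xi in sample]
    G = statistics.mean(d)        # G_{n}(x_hat)
    s2 = statistics.variance(d)   # (1/(n-1)) * sum (d_j - mean)^2
    return G, s2
```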
131 A Sequential SAA Algorithm
Stopping criterion:
T = inf { k ≥ 1 : G_k ≤ h s_k }   (1)
Sample-size criterion:
n_k ≥ (1/h)² ( c_{q,α} + 2q ln² k )   (2)
Fact. Consider the sequential sampling procedure in which the sample size is increased according to (2) and the procedure stops at iteration T according to (1). Then, under some regularity assumptions (including uniform integrability of a moment generating function),
lim inf_{h ↓ 0} P( E f(x̂_T, ξ) − z* ≤ h s_T ) ≥ 1 − α
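A toy rendition of the loop: grow n_k according to a rule of the form (2), recompute the gap estimate and its standard deviation, and stop the first time G_k ≤ h s_k. The constants c_{q,α} and q, and the 1-D quadratic mock problem, are placeholders for illustration, not the values or model from the theory:

```python
import math
import random
import statistics

def sample_size(k, h, c_q_alpha=2.0, q=0.5):
    # Rule of the form (2): n_k >= (1/h)^2 * (c_{q,alpha} + 2 q ln^2 k).
    # The constants here are illustrative placeholders.
    return int(math.ceil((c_q_alpha + 2.0 * q * math.log(k) ** 2) / h ** 2))

def sequential_saa(h=0.5, seed=42, max_iter=50):
    """Mock problem: min_x E (x - xi)^2 with xi ~ N(0, 1). The SAA optimum on
    a sample is its mean; a deliberately cruder candidate uses only a prefix
    of the sample, so G_k = (x_hat - x_star)^2 >= 0 by construction."""
    rng = random.Random(seed)
    for k in range(1, max_iter + 1):
        n_k = sample_size(k, h)
        xs = [rng.gauss(0.0, 1.0) for _ in range(n_k)]
        x_star = statistics.mean(xs)                     # SAA optimal solution
        x_hat = statistics.mean(xs[: max(2, n_k // 4)])  # candidate solution
        d = [(x_hat - xi) ** 2 - (x_star - xi) ** 2 for xi in xs]
        G_k = statistics.mean(d)                         # gap estimate
        s_k = statistics.stdev(d)
        if G_k <= h * s_k:                               # stopping rule (1)
            return k, x_hat, G_k
    return max_iter, x_hat, G_k
```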
132 A Word (well, pictures) About Multi-Stage Stochastic Programming
133 What Does Solution Mean? In multistage setting, assessing solution quality means assessing policy quality
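One way to make "policy quality" concrete: fix the policy, simulate it forward over independent scenario paths, and report a CLT-based confidence interval on its expected cost, which serves as an upper-bound estimate for a minimization problem. A sketch with an invented base-stock inventory policy (all parameters and the demand model are hypothetical):

```python
import random
import statistics

def evaluate_policy(policy, stages=3, reps=1000, seed=3):
    """Estimate the expected multistage cost of a FIXED policy by simulating
    it over independent scenario paths; returns the mean cost and an
    approximate 95% confidence interval. policy(t, state, demand) must
    return (stage_cost, next_state)."""
    rng = random.Random(seed)
    costs = []
    for _ in range(reps):
        state, total = 0.0, 0.0
        for t in range(stages):
            demand = rng.uniform(0.0, 10.0)   # hypothetical demand model
            cost, state = policy(t, state, demand)
            total += cost
        costs.append(total)
    mean = statistics.mean(costs)
    half = 1.96 * statistics.stdev(costs) / reps ** 0.5
    return mean, (mean - half, mean + half)

def base_stock(level=8.0, order_cost=1.0, backlog_cost=4.0):
    """A simple order-up-to policy: each stage, order up to `level`, pay for
    the order, and pay a penalty on any unmet demand."""
    def policy(t, inventory, demand):
        order = max(0.0, level - inventory)
        after = inventory + order - demand
        cost = order_cost * order + backlog_cost * max(0.0, -after)
        return cost, max(0.0, after)
    return policy
```

In the risk-neutral SDDP setting sketched next, the same forward simulation doubles as the algorithm's forward pass and as the statistical upper bound on the policy's value.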
134 One Family of Algorithms & SAA Assume interstage independence, or dependence with special structure. Stochastic dual dynamic programming (SDDP): (a) forward pass, (b) backward pass
135 Small Sampling of Things We Didn't Talk About
Non-iid sampling (well, we did a bit)
Bias and variance reduction techniques (some brief allusion)
Multi-stage SAA (in any detail)
Large-deviation results, concentration-inequality results, finite-sample guarantees
More generally, results with coefficients that are difficult to estimate
SAA for expected-value constraints, including chance constraints
SAA for other models, such as those with equilibrium constraints
Results that exploit more specific special structure of f, ξ, and/or X
Results that study the interaction between an optimization algorithm and SAA: stochastic approximation, stochastic gradient descent, stochastic mirror descent, stochastic cutting-plane methods, stochastic dual dynamic programming...
Statistical testing of optimality conditions
Results for risk measures not expressed as expected (dis)utility
Decision-dependent probability distributions
Distributionally robust data-driven variants of SAA
136 Summary: SAA SAA Results for Monte Carlo estimators: no optimization What results should we want for SAA? Results for SAA 1. Bias 2. Consistency 3. CLT SAA Algorithm A basic algorithm A sequential algorithm Multi-Stage Problems What We Didn't Discuss
137 Small Sampling of References
Lagrange, Bernoulli, Euler, Laplace, Gauss, Edgeworth, Hotelling, Fisher... (leading to maximum likelihood)
H. Robbins and S. Monro, A stochastic approximation method, Annals of Mathematical Statistics 22, 400-407, 1951.
G. Dantzig and A. Madansky, On the solution of two-stage linear programs under uncertainty, Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, 1961.
Overviews and Tutorials
A. Shapiro, A. Ruszczyński, D. Dentcheva, Lectures on Stochastic Programming: Modeling and Theory (Chapter 5, Statistical Inference), 2009.
A. Shapiro, Monte Carlo sampling methods. In A. Ruszczyński and A. Shapiro (editors), Stochastic Programming: Handbooks in Operations Research and Management Science, 2003.
S. Kim, R. Pasupathy and S. Henderson, A guide to sample-average approximation. In Handbook of Simulation Optimization, edited by M. Fu, 2015.
T. Homem-de-Mello and G. Bayraksan, Monte Carlo sampling-based methods for stochastic optimization, Surveys in Operations Research and Management Science 19, 56-85, 2014.
G. Bayraksan and D.P. Morton, Assessing solution quality in stochastic programs via sampling, Tutorials in Operations Research, M.R. Oskoorouchi (ed.), INFORMS, 2009.
Further References
G. Bayraksan and D.P. Morton, Assessing solution quality in stochastic programs, Mathematical Programming 108, 2006.
G. Bayraksan and D.P. Morton, A sequential sampling procedure for stochastic programming, Operations Research 59, 2011.
J. Dupačová and R. Wets, Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems, The Annals of Statistics 16, 1517-1549, 1988.
M. Freimer, J. Linderoth and D. Thomas, The impact of sampling methods on bias and variance in stochastic linear programs, Computational Optimization and Applications 51, 51-75, 2012.
P. Glynn and G. Infanger, Simulation-based confidence bounds for two-stage stochastic programs, Mathematical Programming 138, 15-42, 2013.
J. Higle and S. Sen, Stochastic decomposition: an algorithm for two-stage linear programs with recourse, Mathematics of Operations Research 16, 650-669, 1991.
139 Sorry for All the Acronyms (SAA)
CI: Confidence Interval
CLT: Central Limit Theorem
CRN: Common Random Numbers
DB: Donohue-Birge test instance
iid: independent and identically distributed
iidrvs: iid random variables
LHS: Latin Hypercube Sampling
LLN: Law of Large Numbers
LSC: Lower Semi-Continuous
MIP: Mixed Integer Program
QMC: Quasi-Monte Carlo
QP: Quadratic Program
SAA: Sample Average Approximation
SDDP: Stochastic Dual Dynamic Programming
SLP: Stochastic Linear Program
SSN: SONET Switched Network test instance. Or, Suvrajeet Sen's Network
SONET: Synchronous Optical Networking
USLLN: Uniform Strong LLN
wp1: with probability one
WRPM: West-coast Regional Planning Model
20TERM: 20 TERMinal test instance
More informationScenario-Free Stochastic Programming
Scenario-Free Stochastic Programming Wolfram Wiesemann, Angelos Georghiou, and Daniel Kuhn Department of Computing Imperial College London London SW7 2AZ, United Kingdom December 17, 2010 Outline 1 Deterministic
More informationc 2004 Society for Industrial and Applied Mathematics
SIAM J. OPTIM. Vol. 14, No. 4, pp. 1237 1249 c 2004 Society for Industrial and Applied Mathematics ON A CLASS OF MINIMAX STOCHASTIC PROGRAMS ALEXANDER SHAPIRO AND SHABBIR AHMED Abstract. For a particular
More informationMustafa H. Tongarlak Bruce E. Ankenman Barry L. Nelson
Proceedings of the 0 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelspach, K. P. White, and M. Fu, eds. RELATIVE ERROR STOCHASTIC KRIGING Mustafa H. Tongarlak Bruce E. Ankenman Barry L.
More informationMarch 1, Florida State University. Concentration Inequalities: Martingale. Approach and Entropy Method. Lizhe Sun and Boning Yang.
Florida State University March 1, 2018 Framework 1. (Lizhe) Basic inequalities Chernoff bounding Review for STA 6448 2. (Lizhe) Discrete-time martingales inequalities via martingale approach 3. (Boning)
More informationBayesian Inference for DSGE Models. Lawrence J. Christiano
Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.
More information