Chance-constrained optimization with tight confidence bounds

1 Chance-constrained optimization with tight confidence bounds Mark Cannon University of Oxford 25 April

2 Outline
1. Problem definition and motivation
2. Confidence bounds for randomised sample discarding
3. Posterior confidence bounds using empirical violation estimates
4. Randomized algorithm with prior and posterior confidence bounds
5. Conclusions and future directions

3 Chance-constrained optimization
Chance constraints limit the probability of constraint violation in optimization problems with stochastic parameters.
Applications are numerous and diverse: building control, chemical process design, finance, portfolio management, production planning, supply chain management, sustainable development, telecommunications networks...
Long history of stochastic programming methods, e.g. Charnes & Cooper, Management Sci., 1959; Prekopa, SIAM J. Control, 1966, but exact handling of probabilistic constraints is generally intractable.
This talk: approximate methods based on random samples.

4 Motivation: Energy management in PHEVs
Battery: charged before trip, empty at end of trip.
Electric motor: shifts the engine load point to improve fuel efficiency or emissions.
Control problem: optimize power flows to maximize performance (efficiency or emissions) over the driving cycle.

5 Motivation: Energy management in PHEVs
[Block diagram: fuel power $P_f$ into engine ($P_{eng}$); battery ($P_s$, losses) through circuit ($P_b$) to electric motor ($P_{em}$); engine and motor combined via clutch and gearbox to drive the vehicle ($P_{drv}$)]
Parallel configuration with common engine and electric motor speed:
$P_{drv} = P_{eng} + P_{em}$, $\omega_{eng} = \omega_{em} = \omega$ (if $\omega \ge \omega_{stall}$)
Engine fuel consumption: $P_f = P_f(P_{eng}, \omega)$

6 Motivation: Energy management in PHEVs
[Block diagram as on the previous slide]
Electric motor electrical power $P_b$ $\leftrightarrow$ mechanical power $P_{em}$: $P_b = P_b(P_{em}, \omega)$
Battery stored energy flow rate $P_s$ $\leftrightarrow$ $P_b$: $P_s = P_s(P_b)$; stored energy $E$ $\leftrightarrow$ $P_s$: $\dot E = P_s$

7 Motivation: Energy management in PHEVs
Nominal optimization, assuming known $P_{drv}(t), \omega(t)$ for $t \in [0, T]$:
minimize over $P_{eng}(\tau), P_{em}(\tau)$, $\tau \in [t, T]$:  $\int_0^T P_f(P_{eng}(\tau), \omega(\tau))\, d\tau$
subject to  $\dot E(\tau) = P_s(\tau)$, $\tau \in [t, T]$;  $E(t) = E_{now}$;  $E(T) \ge E_{end}$
with $P_{drv} = P_{eng} + P_{em}$, $P_b = P_b(P_{em}, \omega)$, $P_s = P_s(P_b)$
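To make the nominal problem concrete, the following is a rough illustrative sketch, not from the talk, of a discretized power-split optimization with an assumed convex quadratic fuel map and a lossless battery model; all numerical data are invented.

```python
# Illustrative sketch only: discretized nominal energy management with an
# assumed quadratic fuel map P_f = a*P_eng^2 + b*P_eng and a lossless battery
# (P_s = P_b = P_em).  None of the data below comes from the talk.
import cvxpy as cp
import numpy as np

Tn, dt = 60, 1.0                                  # horizon steps, step [s]
P_drv = 10 + 5 * np.sin(np.linspace(0, 6, Tn))    # assumed demand profile [kW]
a, b = 0.02, 1.1                                  # assumed fuel-map coefficients
E_now, E_end, E_max = 2.0, 0.5, 3.0               # battery energy levels [kWh]

P_eng = cp.Variable(Tn, nonneg=True)              # engine power
P_em = cp.Variable(Tn)                            # electric motor power (+/-)
E = cp.Variable(Tn + 1)                           # stored energy trajectory

constraints = [
    P_eng + P_em == P_drv,                        # power split meets demand
    E[0] == E_now,
    E[1:] == E[:-1] - (dt / 3600) * P_em,         # energy balance (sign convention assumed)
    E >= 0, E <= E_max,
    E[Tn] >= E_end,                               # terminal energy constraint
]
fuel = cp.sum(a * cp.square(P_eng) + b * P_eng) * dt
cp.Problem(cp.Minimize(fuel), constraints).solve()
print("fuel proxy:", fuel.value, " final energy:", E.value[-1])
```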

8 Motivation: Energy management in PHEVs
$P_{drv}(t), \omega(t)$ depend on uncertain traffic flow and driver behaviour.
[Figure: battery state of energy vs. time (sec)]
Introduce feedback by using a receding horizon implementation, but how likely is it that we can meet constraints?

9 Motivation: Energy management in PHEVs
$P_{drv}(t), \omega(t)$ depend on uncertain traffic flow and driver behaviour.
[Figure: real-time traffic flow data]

10 Motivation: Energy management in PHEVs
Reformulate as a chance-constrained optimization, assuming known distributions for $P_{drv}(t), \omega(t)$:
minimize over $P_{eng}(\tau), P_{em}(\tau)$, $\tau \in [t, T]$:  $\int_0^T P_f(P_{eng}(\tau), \omega(\tau))\, d\tau$
subject to  $\dot E(\tau) = P_s(\tau)$, $\tau \in [t, T]$;  $E(t) = E_{now}$;  $\mathbb{P}\{E(T) < E_{end}\} \le \epsilon$
for a specified probability $\epsilon$.

11 Chance-constrained optimization
Define the chance-constrained problem
CCP($\epsilon$):  minimize$_{x \in \mathcal{X}}$ $c^\top x$  subject to  $\mathbb{P}\{f(x,\delta) > 0\} \le \epsilon$
$\delta \in \Delta \subseteq \mathbb{R}^d$: stochastic parameter;  $\epsilon \in [0,1]$: maximum allowable violation probability
Assume:
- $\mathcal{X} \subset \mathbb{R}^n$ compact and convex
- $f(\cdot,\delta): \mathbb{R}^n \to \mathbb{R}$ convex and lower-semicontinuous for any $\delta \in \Delta$
- CCP($\epsilon$) is feasible, i.e. $\mathbb{P}\{f(x,\delta) > 0\} \le \epsilon$ for some $x \in \mathcal{X}$.

12 Chance-constrained optimization
Computational difficulties with chance constraints:
1. Prohibitive computation to determine feasibility of a given $x$:
$\mathbb{P}\{f(x,\delta) > 0\} = \int_\Delta 1_{\{f(x,\delta)>0\}}(\delta)\, F(d\delta)$
A closed-form expression is available only in special cases, e.g. if $\delta \sim \mathcal{N}(0,I)$ Gaussian and $f(x,\delta) = b^\top\delta - 1 + (d + D^\top\delta)^\top x$ affine, then
$\mathbb{P}\{f(x,\delta) > 0\} \le \epsilon \iff \Phi_{\mathcal{N}}^{-1}(1-\epsilon)\,\|b + Dx\| \le 1 - d^\top x$
In general multidimensional integration is needed (e.g. Monte Carlo methods).
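As a minimal sketch of the Monte Carlo route, the code below estimates $\mathbb{P}\{f(x,\delta)>0\}$ by sampling for the affine Gaussian case and compares it with the closed-form expression above; the data $b$, $D$, $d$, $x$ are arbitrary illustrative values.

```python
# Sketch: Monte Carlo estimate of P{f(x, delta) > 0} for the affine Gaussian
# example, f(x, delta) = delta'(b + D x) + d'x - 1 with delta ~ N(0, I).
# All numerical data (b, D, d, x) are made-up for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
b = np.array([0.3, -0.2, 0.1])
D = np.array([[0.2, 0.0], [0.1, -0.3], [0.0, 0.4]])
d = np.array([0.4, 0.1])
x = np.array([0.5, 0.3])

N = 200_000
deltas = rng.standard_normal((N, b.size))
f_vals = deltas @ (b + D @ x) + d @ x - 1.0
violation_mc = np.mean(f_vals > 0)

# Closed form: P{f > 0} = 1 - Phi_N((1 - d'x) / ||b + D x||)
violation_exact = 1.0 - norm.cdf((1.0 - d @ x) / np.linalg.norm(b + D @ x))
print(violation_mc, violation_exact)
```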

13 Chance-constrained optimization
Computational difficulties with chance constraints:
2. The feasible set may be nonconvex even if $f(\cdot,\delta)$ is convex for all $\delta \in \Delta$, e.g.
$\Phi_{\mathcal{N}}^{-1}(1-\epsilon)\,\|b + Dx\| \le 1 - d^\top x$ is convex if $\epsilon \le 0.5$ and nonconvex if $\epsilon > 0.5$.
[Figure: feasible sets in $x$ for $\epsilon < 0.5$ (convex) and $\epsilon > 0.5$ (nonconvex)]

14 Sample approximation
Define the multisample $\omega_m = \{\delta^{(1)},\ldots,\delta^{(m)}\}$ for i.i.d. $\delta^{(i)} \in \Delta$.
Define the sample approximation of CCP as
SP:  minimize$_{x \in \mathcal{X},\ \omega \subseteq \omega_m}$ $c^\top x$  subject to  $f(x,\delta) \le 0$ for all $\delta \in \omega$,  $|\omega| \ge q$
($m - q$ discarded constraints, $m \ge q \ge n$)
$x^*(\omega^*)$: optimal solution for $x$;  $\omega^*$: optimal solution for $\omega$

15 Sample approximation
- Solutions of SP converge to the solution of CCP($\epsilon$) as $m \to \infty$ if $q = m(1-\epsilon)$
- Approximation accuracy is characterized via bounds on: the probability that the solution of SP is feasible for CCP($\epsilon$), and the probability that the optimal objective of SP exceeds that of CCP($\epsilon$)
  (Luedtke & Ahmed, SIAM J. Optim., 2008; Calafiore, SIAM J. Optim., 2010; Campi & Garatti, J. Optim. Theory Appl., 2011)
- SP can be formulated as a mixed integer program with a convex continuous relaxation

16 Example: minimal bounding circle
Smallest circle containing a given probability mass:
CCP($\epsilon$):  minimize$_{c,R}$ $R$  subject to  $\mathbb{P}\{\|c - \delta\| > R\} \le \epsilon$
SP:  minimize$_{c,R,\ \omega \subseteq \omega_m}$ $R$  subject to  $\|c - \delta\| \le R$ for all $\delta \in \omega$,  $|\omega| \ge q$
e.g. $\epsilon = 0.5$, $m = 1000$, $q = 500$:  centre $c$ for CCP $(0, 0)$, for SP $(-0.003, 0.002)$
(SP solved for one realisation $\omega_{1000}$, $\delta \sim \mathcal{N}(0,I)$)
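A minimal sketch of the sampled problem for this example, keeping all sampled constraints (i.e. $q = m$; the discarded-constraint version would need a mixed-integer or heuristic formulation), assuming cvxpy is available:

```python
# Sketch (assumed setup, not the authors' code): the sampled bounding-circle
# problem with all constraints kept, solved as a second-order cone program.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
omega_m = rng.standard_normal((100, 2))           # multisample, delta ~ N(0, I)

c = cp.Variable(2)                                # circle centre
R = cp.Variable()                                 # circle radius
constraints = [cp.norm(c - delta) <= R for delta in omega_m]
cp.Problem(cp.Minimize(R), constraints).solve()
print("centre:", c.value, " radius:", float(R.value))
```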

17 Example: minimal bounding circle
Bounds on the confidence that the SP solution $x^*(\omega^*)$ with $m = 1000$, $q = 500$ is feasible for CCP($\epsilon$), from Calafiore (2010).
[Figure: confidence vs. violation probability $\epsilon$]
$\Rightarrow$ with 90% probability, the violation probability of $x^*(\omega^*)$ lies between 0.47 and 0.58.

18 Example: minimal bounding circle
Smallest circle containing a given probability mass: observations.
- SP computation grows rapidly with $m$.
- Bounds on the confidence of feasibility of $x^*(\omega^*)$ for CCP($\epsilon$) are not tight (unless $q = m$ or the probability distribution is a specific worst case), because the SP solution has poor generalization properties, e.g. $m = 20$, $q = 10$.
- Randomized sample discarding improves generalization.

19 Outline
1. Problem definition and motivation
2. Confidence bounds for randomised sample discarding
3. Posterior confidence bounds using empirical violation estimates
4. Randomized algorithm with prior and posterior confidence bounds
5. Conclusions and future directions

20 Level-q subsets of a multisample
Define the sampled problem for given $\omega \subseteq \omega_m$ as
SP($\omega$):  minimize$_{x \in \mathcal{X}}$ $c^\top x$  subject to  $f(x,\delta) \le 0$ for all $\delta \in \omega$
$x^*(\omega)$: optimal solution of SP($\omega$)
Assumptions:
(i) SP($\omega$) is feasible for any $\omega \subseteq \omega_m$ with probability (w.p.) 1
(ii) $f(x^*(\omega), \delta) \ne 0$ for all $\delta \in \omega_m \setminus \omega$ w.p. 1

21 Level-q subsets of a multisample
Define the set of level-q subsets of $\omega_m$, for $q \le m$, as
$\Omega_q(\omega_m) = \{\omega \subseteq \omega_m : |\omega| = q$ and $f(x^*(\omega), \delta) > 0$ for all $\delta \in \omega_m \setminus \omega\}$
$x^*(\omega)$ for $\omega \in \Omega_q(\omega_m)$ is called a level-q solution.
e.g. minimal bounding circle problem, $m = 20$, $q = 10$: $\omega \in \Omega_{10}(\omega_{20})$
[Figure: examples of level-10 subsets of a 20-sample multisample for the bounding circle]

27 Essential constraint set
$\mathrm{Es}(\omega) = \{\delta^{(i_1)},\ldots,\delta^{(i_k)}\} \subseteq \omega$ is an essential set of SP($\omega$) if
(i) $x^*(\{\delta^{(i_1)},\ldots,\delta^{(i_k)}\}) = x^*(\omega)$,
(ii) $x^*(\omega \setminus \delta) \ne x^*(\omega)$ for all $\delta \in \{\delta^{(i_1)},\ldots,\delta^{(i_k)}\}$.
e.g. minimal bounding circle problem
[Figure: bounding circle with its essential constraints highlighted]

28 Support dimension
Define:
maximum support dimension: $\bar\zeta = \operatorname{ess\,sup}_{\omega \in \Delta^m} |\mathrm{Es}(\omega)|$
minimum support dimension: $\underline\zeta = \operatorname{ess\,inf}_{\omega \in \Delta^m} |\mathrm{Es}(\omega)|$
then $\bar\zeta \le \dim(x) = n$ and $\underline\zeta \ge 1$.
e.g. minimal bounding circle problem: $|\mathrm{Es}(\omega)| = 2 = \underline\zeta$ or $|\mathrm{Es}(\omega)| = 3 = \bar\zeta$
[Figures: bounding circles supported by 2 and by 3 samples]

30 Confidence bounds for randomly selected level-q subsets
Theorem 1: For any $\epsilon \in [0,1]$ and $q \in [\bar\zeta, m]$,
$\mathbb{P}^m\{V(x^*(\omega)) \le \epsilon \mid \omega \in \Omega_q(\omega_m)\} \ge \Phi(q - \bar\zeta;\, m,\, 1-\epsilon)$.
For any $\epsilon \in [0,1]$ and $q \in [\underline\zeta, m]$,
$\mathbb{P}^m\{V(x^*(\omega)) \le \epsilon \mid \omega \in \Omega_q(\omega_m)\} \le \Phi(q - \underline\zeta;\, m,\, 1-\epsilon)$.
where violation probability: $V(x) = \mathbb{P}\{f(x,\delta) > 0\}$
binomial c.d.f.: $\Phi(n; N, p) = \sum_{i=0}^{n} \binom{N}{i} p^i (1-p)^{N-i}$
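The bounds of Theorem 1 are straightforward to evaluate numerically; a small sketch using scipy's binomial c.d.f., with $m$, $q$ and the support dimensions taken from the bounding-circle example later in the talk:

```python
# Sketch: evaluating the Theorem 1 bounds with the binomial c.d.f.
# Values correspond to the later bounding-circle example: m=100, q=80,
# support dimension between 2 and 3.
from scipy.stats import binom

m, q = 100, 80
zeta_min, zeta_max = 2, 3
for eps in (0.20, 0.25, 0.30):
    lower = binom.cdf(q - zeta_max, m, 1 - eps)   # Phi(q - zeta_max; m, 1-eps)
    upper = binom.cdf(q - zeta_min, m, 1 - eps)   # Phi(q - zeta_min; m, 1-eps)
    print(f"eps={eps:.2f}: {lower:.3f} <= confidence <= {upper:.3f}")
```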

31 Confidence bounds for randomly selected level-q subsets
The proof of Thm. 1 relies on...
Lemma 1: For any $v \in [0,1]$ and $k \in [\underline\zeta, \bar\zeta]$,
$\mathbb{P}^m\{V(x^*(\omega_k)) \le v \mid |\mathrm{Es}(\omega_k)| = k\} = F(v) = v^k$  (e.g. Calafiore, 2010)
Proof (sketch): for all $q \in [k, m]$,
$\mathbb{P}^m\{\omega_k = \mathrm{Es}(\omega_q) \ \cap\ V(x^*(\omega_k)) = v \mid |\mathrm{Es}(\omega_k)| = k\} = (1-v)^{q-k}\, dF(v)$
$\Rightarrow \mathbb{P}^m\{\omega_k = \mathrm{Es}(\omega_q) \mid |\mathrm{Es}(\omega_k)| = k\} = \int_0^1 (1-v)^{q-k}\, dF(v)$
hence $\int_0^1 (1-v)^{q-k}\, dF(v) = \binom{q}{k}^{-1}$ for all $q \in [k, m]$, and $F(v) = v^k$ follows from the Hausdorff moment problem.

33 Confidence bounds for randomly selected level-q subsets
...and
Lemma 2: For any $k \in [\underline\zeta, \bar\zeta]$ and $q \in [k, m]$,
$\mathbb{P}^m\{\omega_k = \mathrm{Es}(\omega)$ for some $\omega \in \Omega_q(\omega_m) \mid |\mathrm{Es}(\omega_k)| = k\} = \binom{m-k}{q-k}\, k\, B(m-q+k,\ q-k+1)$
($B(\cdot,\cdot)$ = beta function)
Note: $\omega_k = \{\delta^{(1)},\ldots,\delta^{(k)}\}$ is statistically equivalent to a randomly selected $k$-subset of $\omega_m$.
$\omega_k = \mathrm{Es}(\omega)$ for some $\omega \in \Omega_q(\omega_m)$, given $|\mathrm{Es}(\omega_k)| = k$, occurs iff $q-k$ of the samples in $\omega_m \setminus \omega_k$ satisfy $f(x^*(\omega_k), \delta) \le 0$ and the remaining $m-q$ samples satisfy $f(x^*(\omega_k), \delta) > 0$;
hence Lem. 2 follows from Lem. 1 and the law of total probability.

35 Confidence bounds for randomly selected level-q subsets
Proof of Thm. 1 (sketch):
- $\mathbb{P}^m\{\omega_k = \mathrm{Es}(\omega)$ for some $\omega \in \Omega_q(\omega_m) \ \cap\ V(x^*(\omega_k)) \le \epsilon \mid |\mathrm{Es}(\omega_k)| = k\} = \binom{m-k}{q-k}\, k\, B(\epsilon;\ m-q+k,\ q-k+1)$
  ($B(\cdot,\cdot,\cdot)$ = incomplete beta function)
- By Bayes' law (dividing by the probability in Lem. 2),
  $\mathbb{P}^m\{V(x^*(\omega_k)) \le \epsilon \mid \omega_k = \mathrm{Es}(\omega)$ for some $\omega \in \Omega_q(\omega_m),\ |\mathrm{Es}(\omega_k)| = k\} = \Phi(q-k;\, m,\, 1-\epsilon)$
- Hence $\mathbb{P}^m\{V(x^*(\omega)) \le \epsilon \mid \omega \in \Omega_q(\omega_m),\ |\mathrm{Es}(\omega)| = k\} = \Phi(q-k;\, m,\, 1-\epsilon)$
Thm. 1 follows from the law of total probability, since, for all $k \in [\underline\zeta, \bar\zeta]$,
$\Phi(q - \bar\zeta;\, m,\, 1-\epsilon) \le \Phi(q - k;\, m,\, 1-\epsilon) \le \Phi(q - \underline\zeta;\, m,\, 1-\epsilon)$.

38 Confidence bounds for randomly selected level-q subsets
Consider the relationship between the optimal costs: $J^o(\epsilon)$ of CCP($\epsilon$) and $J^*(\omega)$ of SP($\omega$).
Prop. 1: $\mathbb{P}^m\{J^*(\omega) \ge J^o(\epsilon) \mid \omega \in \Omega_q(\omega_m)\} \ge \Phi(q - \bar\zeta;\, m,\, 1-\epsilon)$
Proof: $J^*(\omega) \ge J^o(\epsilon)$ if the solution $x^*(\omega)$ of SP($\omega$) is feasible for CCP($\epsilon$), so
$\mathbb{P}^m\{V(x^*(\omega)) \le \epsilon \mid \omega \in \Omega_q(\omega_m)\} \le \mathbb{P}^m\{J^*(\omega) \ge J^o(\epsilon) \mid \omega \in \Omega_q(\omega_m)\}$
Prop. 2: $\mathbb{P}^m\{J^*(\omega) > J^o(\epsilon) \mid |\omega| = q\} \le \Phi(q-1;\, q,\, 1-\epsilon)$
Proof: $J^*(\omega) \le J^o(\epsilon)$ if the solution $x^o(\epsilon)$ of CCP($\epsilon$) is feasible for SP($\omega$), i.e. if $f(x^o(\epsilon), \delta) \le 0$ for all $\delta \in \omega$, so
$\mathbb{P}^m\{J^*(\omega) \le J^o(\epsilon)\} \ge \big[\mathbb{P}\{f(x^o(\epsilon), \delta) \le 0\}\big]^q \ge (1-\epsilon)^q = 1 - \Phi(q-1;\, q,\, 1-\epsilon)$

41 Comparison with deterministic discarding strategies
For optimal sample discarding (i.e. SP) or a greedy heuristic we have
$\mathbb{P}^m\{V(x^*(\omega^*)) \le \epsilon\} \ge \Psi(q, \bar\zeta;\, m,\, \epsilon)$, where $\Psi(q, \bar\zeta;\, m,\, \epsilon) := 1 - \binom{m-q+\bar\zeta-1}{m-q}\,\Phi(m-q+\bar\zeta-1;\, m,\, \epsilon)$
(Calafiore 2010, Campi & Garatti 2011)
This bound is looser than the lower bound for random sample discarding (i.e. SP($\omega$), $\omega \in \Omega_q(\omega_m)$), since
$1 - \Psi(q, \bar\zeta;\, m,\, \epsilon) = \binom{m-q+\bar\zeta-1}{m-q}\,\big[1 - \Phi(q-\bar\zeta;\, m,\, 1-\epsilon)\big]$
implies $\Psi(q, \bar\zeta;\, m,\, \epsilon) \le \Phi(q-\bar\zeta;\, m,\, 1-\epsilon)$,
with $\Psi(q, \bar\zeta;\, m,\, \epsilon) = \Phi(q-\bar\zeta;\, m,\, 1-\epsilon)$ only if $\bar\zeta = 1$ or $q = m$ (in which case $|\Omega_q(\omega_m)| = 1$).

43 Comparison with deterministic discarding strategies
The lower confidence bound for optimal sample discarding corresponds to the worst-case assumption that $\mathbb{P}^m\{V(x^*(\omega)) > \epsilon\} = 0$ for all $\omega \in \Omega_q(\omega_m)$ such that $\omega \ne \omega^*$.
- Thm. 1 implies $\mathbb{E}^m\Big\{\frac{1}{|\Omega_q(\omega_m)|}\sum_{\omega \in \Omega_q(\omega_m)} \mathbb{P}^m\{V(x^*(\omega)) > \epsilon\}\Big\} \le 1 - \Phi(q - \bar\zeta;\, m,\, 1-\epsilon)$
- Lem. 2 implies $\mathbb{E}^m\{|\Omega_q(\omega_m)|\} \le \binom{m-q+\bar\zeta-1}{m-q}$

44 Comparison with deterministic discarding strategies
Confidence bounds for $m = 500$ with $q = \lceil 0.75m \rceil = 375$, $\underline\zeta = 1$, $\bar\zeta = 10$.
[Figure: confidence vs. violation probability $\epsilon$, showing $\Phi(q - \underline\zeta; m, 1-\epsilon)$, $\Phi(q - \bar\zeta; m, 1-\epsilon)$ and $\Psi(q, \bar\zeta; m, \epsilon)$]
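A small sketch comparing the two bounds numerically for the setting above ($m=500$, $q=375$, $\bar\zeta=10$), using the formulas as reconstructed here; values of $\Psi$ below zero simply mean the deterministic bound is vacuous at that $\epsilon$:

```python
# Sketch: Phi(q - zeta_max; m, 1-eps) vs. the deterministic-discarding bound
# Psi(q, zeta_max; m, eps) = 1 - C(m-q+zeta_max-1, m-q)*Phi(m-q+zeta_max-1; m, eps),
# as reconstructed above, for m=500, q=375, zeta_max=10.
from math import comb
from scipy.stats import binom

m, q, zeta_max = 500, 375, 10
for eps in (0.30, 0.35, 0.40, 0.45):
    phi = binom.cdf(q - zeta_max, m, 1 - eps)
    psi = 1 - comb(m - q + zeta_max - 1, m - q) * binom.cdf(m - q + zeta_max - 1, m, eps)
    print(f"eps={eps:.2f}:  Phi={phi:.4f}  Psi={max(psi, 0.0):.4f}")
```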

45 Comparison with deterministic discarding strategies
Values of $\epsilon$ lying on the 5% and 95% confidence bounds for varying $m$, with $q = \lceil 0.75m \rceil$, $\underline\zeta = 1$, $\bar\zeta = 10$.
[Figure: violation probabilities $\epsilon_5$, $\epsilon_{95}$, $\epsilon^o_{95}$ vs. sample size $m$]
The probabilities $\epsilon_5$, $\epsilon_{95}$ and $\epsilon^o_{95}$ are defined by
$0.05 = \Phi(q - \underline\zeta;\, m,\, 1-\epsilon_5)$,  $0.95 = \Phi(q - \bar\zeta;\, m,\, 1-\epsilon_{95})$,  $0.95 = \Psi(q, \bar\zeta;\, m,\, \epsilon^o_{95})$
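Assuming the pairing of bounds reconstructed above, $\epsilon_5$ and $\epsilon_{95}$ can be computed by root-finding on the binomial c.d.f.; a short sketch:

```python
# Sketch: eps_5 and eps_95 by root-finding on the binomial c.d.f., with
# q = ceil(0.75 m), zeta_min=1, zeta_max=10 (pairing of bounds assumed as above).
from math import ceil
from scipy.optimize import brentq
from scipy.stats import binom

zeta_min, zeta_max = 1, 10
for m in (100, 500, 2000):
    q = ceil(0.75 * m)
    eps5 = brentq(lambda e: binom.cdf(q - zeta_min, m, 1 - e) - 0.05, 1e-6, 1 - 1e-6)
    eps95 = brentq(lambda e: binom.cdf(q - zeta_max, m, 1 - e) - 0.95, 1e-6, 1 - 1e-6)
    print(f"m={m}: eps_5={eps5:.3f}  eps_95={eps95:.3f}")
```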

46 Example: minimal bounding circle
Empirical distribution of the violation probability for the smallest bounding circle with $\delta \sim \mathcal{N}(0,I)$, $q = 80$, $m = 100$, and 500 realisations of $\omega_m$.
[Figure: probability of $V(x) \le \epsilon$ vs. $\epsilon$, for $x = x^*(\omega)$ with $\omega \in \Omega_q(\omega_m)$ and for $x = x^*(\omega^*)$, compared with $\Phi(78; 100, 1-\epsilon)$, $\Phi(77; 100, 1-\epsilon)$ and $\Psi(80, 3; 100, \epsilon)$]

47 Example: minimal bounding circle
Empirical cost distributions for the smallest bounding circle with $\delta \sim \mathcal{N}(0,I)$, $q = 80$, $m = 100$, and 500 realisations of $\omega_m$.
[Figure: probability of $J^*(\omega) \ge J^o(\epsilon)$ vs. $\epsilon$, for $\omega \in \Omega_q(\omega_m)$ and $\omega = \omega^*$, compared with $\Phi(99; 100, 1-\epsilon)$, $\Phi(77; 100, 1-\epsilon)$ and $\Psi(80, 3; 100, \epsilon)$]

48 Outline
1. Problem definition and motivation
2. Confidence bounds for randomised sample discarding
3. Posterior confidence bounds using empirical violation estimates
4. Randomized algorithm with prior and posterior confidence bounds
5. Conclusions and future directions

49 Random constraint selection strategy
How can we select $\omega \in \Omega_q(\omega_m)$ at random?
Summary of approach: for given $\omega_m$, let $\omega_r = \{\delta^{(1)},\ldots,\delta^{(r)}\}$, and
- solve SP($\omega_r$) for $x^*(\omega_r)$
- count $q = |\{\delta \in \omega_m : f(x^*(\omega_r), \delta) \le 0\}|$
then $\{\delta \in \omega_m : f(x^*(\omega_r), \delta) \le 0\} \in \Omega_q(\omega_m)$.
Choose $r$ to maximize the probability that $q$ lies in the desired range,
e.g. $r = 10$, $m = 50$, $q = 13 + 10 = 23$. A sketch of this selection step follows.
[Figure: bounding circle fitted to $\omega_r$ and the resulting level-q subset of $\omega_m$]
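A minimal sketch of the selection step for the bounding-circle problem (assumed implementation, using cvxpy; sample sizes are illustrative):

```python
# Sketch of the random level-q selection step (assumed implementation): solve
# SP(omega_r) on the first r samples, then count how many of all m samples the
# solution satisfies; the satisfied samples form a level-q subset of omega_m.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
m, r = 50, 10
omega_m = rng.standard_normal((m, 2))             # full multisample
omega_r = omega_m[:r]                             # omega_r: first r samples

c, R = cp.Variable(2), cp.Variable()
cp.Problem(cp.Minimize(R),
           [cp.norm(c - delta) <= R for delta in omega_r]).solve()

satisfied = np.linalg.norm(omega_m - c.value, axis=1) <= R.value + 1e-9
q = int(satisfied.sum())                          # q = #{delta : f(x*(omega_r), delta) <= 0}
print("q =", q)
```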

53 Posterior confidence bounds
Define $\nu_r(\omega_m) = |\{\delta \in \omega_m : f(x^*(\omega_r), \delta) \le 0\}|$.
Theorem 2: For any $\epsilon \in [0,1]$ and $\bar\zeta \le r \le q \le m$,
$\mathbb{P}^m\{V(x^*(\omega_r)) \le \epsilon \mid \nu_r(\omega_m) = q\} \ge \Phi(q - \bar\zeta;\, m,\, 1-\epsilon)$.
For any $\epsilon \in [0,1]$ and $\underline\zeta \le r \le q \le m$,
$\mathbb{P}^m\{V(x^*(\omega_r)) \le \epsilon \mid \nu_r(\omega_m) = q\} \le \Phi(q - \underline\zeta;\, m,\, 1-\epsilon)$.
- $\nu_r(\omega_m)$ provides an empirical estimate of the violation probability
- Thm. 2 gives a posteriori confidence bounds (i.e. for given $\nu_r(\omega_m)$)
- These bounds are tight for general probability distributions, i.e. they cannot be improved if $|\mathrm{Es}(\omega)|$ can take any value in $[\underline\zeta, \bar\zeta]$; they are exact for problems with $\underline\zeta = \bar\zeta$

55 Probability of selecting a level-q subset of $\omega_m$
Selection of $r$ is based on
Lemma 3: For any $\bar\zeta \le r \le q \le m$,
$\mathbb{P}^m\{\nu_r(\omega_m) = q\} \ge \binom{m-r}{q-r}\, \min_{k \in [\underline\zeta, \bar\zeta]} \dfrac{B(m-q+k,\ q-k+1)}{B(k,\ r-k+1)}$
$\mathbb{P}^m\{\nu_r(\omega_m) = q\} \le \binom{m-r}{q-r}\, \max_{k \in [\underline\zeta, \bar\zeta]} \dfrac{B(m-q+k,\ q-k+1)}{B(k,\ r-k+1)}$
Proof (sketch):
1. From Lem. 1 & Lem. 2, for $k \in [\underline\zeta, \bar\zeta]$ we have $\mathbb{P}^m\{V(x^*(\omega_r)) = v \mid |\mathrm{Es}(\omega_r)| = k\} = \dfrac{(1-v)^{r-k}\, v^{k-1}}{B(k,\ r-k+1)}\, dv$
2. $\nu_r(\omega_m) = q$ iff $q-r$ of the $m-r$ samples in $\omega_m \setminus \omega_r$ satisfy $f(x^*(\omega_r), \delta) \le 0$ and the remaining $m-q$ samples satisfy $f(x^*(\omega_r), \delta) > 0$
Hence Lem. 3 follows from 1 and the law of total probability.
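A numerical sketch of the Lemma 3 lower bound and the resulting choice of $r$, using log-gamma evaluations for the beta and binomial terms; the problem sizes are illustrative (much smaller than the $m = 10^5$ used later), and the formula is as reconstructed above:

```python
# Sketch: the Lemma 3 lower bound on P{nu_r(omega_m) = q} and the resulting
# p_trial as a function of r, using the formula as reconstructed above.
# Problem sizes are illustrative (smaller than the talk's m = 1e5).
from math import exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_comb(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def p_q_lower(m, r, q, zeta_min, zeta_max):
    # lower bound on P{nu_r = q}: C(m-r, q-r) * min_k B(m-q+k, q-k+1)/B(k, r-k+1)
    return exp(min(log_comb(m - r, q - r) + log_beta(m - q + k, q - k + 1)
                   - log_beta(k, r - k + 1)
                   for k in range(zeta_min, zeta_max + 1)))

m, zeta_min, zeta_max = 2000, 2, 5
q_lo, q_hi = 1560, 1640                            # assumed target range for q

def p_trial(r):
    return sum(p_q_lower(m, r, q, zeta_min, zeta_max) for q in range(q_lo, q_hi + 1))

r_star = max(range(zeta_max, 200), key=p_trial)
print("r* =", r_star, " p_trial =", round(p_trial(r_star), 4))
```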

57 Posterior confidence bounds (contd.)
Proof of Thm. 2 (sketch):
- $\mathbb{P}^m\{\nu_r(\omega_m) = q \ \cap\ V(x^*(\omega_r)) \le \epsilon \mid |\mathrm{Es}(\omega_r)| = k\} = \binom{m-r}{q-r}\, \dfrac{B(\epsilon;\ m-q+k,\ q-k+1)}{B(k,\ r-k+1)}$
- Hence, from Lem. 3 and Bayes' law,
$\mathbb{P}^m\{V(x^*(\omega_r)) \le \epsilon \mid \nu_r(\omega_m) = q,\ |\mathrm{Es}(\omega_r)| = k\} = \Phi(q-k;\, m,\, 1-\epsilon)$
so the law of total probability implies
$\mathbb{P}^m\{V(x^*(\omega_r)) \le \epsilon \mid \nu_r(\omega_m) = q\} \ge \Phi(q - \bar\zeta;\, m,\, 1-\epsilon) \sum_{k=\underline\zeta}^{\bar\zeta} \mathbb{P}^m\{|\mathrm{Es}(\omega_r)| = k\}$
$\mathbb{P}^m\{V(x^*(\omega_r)) \le \epsilon \mid \nu_r(\omega_m) = q\} \le \Phi(q - \underline\zeta;\, m,\, 1-\epsilon) \sum_{k=\underline\zeta}^{\min\{r, \bar\zeta\}} \mathbb{P}^m\{|\mathrm{Es}(\omega_r)| = k\}$

60 Outline
1. Problem definition and motivation
2. Confidence bounds for randomised sample discarding
3. Posterior confidence bounds using empirical violation estimates
4. Randomized algorithm with prior and posterior confidence bounds
5. Conclusions and future directions

61 Parameters and target confidence bounds
Consider the design of an algorithm to generate
$\hat x$: an approximate CCP solution
$\hat q$: the number of constraints satisfied out of $m$ sampled constraints
Impose prior confidence bounds: $V(\hat x) \in (\underline\epsilon, \bar\epsilon]$ with probability $p_{prior}$.
Impose posterior confidence bounds, given the value of $\hat q$: $V(\hat x)$ lies within $\beta$ of the empirical estimate $1 - \hat q/m$ with probability $p_{post}$.

62 Parameters and target confidence bounds
$\hat q$ cannot be specified directly, but, for given $m$ and $\underline q$, $\bar q$:
- Lem. 3 gives the probability, $p_{trial}$, of $\nu_r(\omega_m) \in [\underline q, \bar q]$, for given $r$
- hence SP($\omega_r$) must be solved $N_{trial}$ times in order to ensure that $\hat q \in [\underline q, \bar q]$ with a given prior probability
- choose $r = r^*$, the maximizer of $p_{trial}$ (possibly subject to $r \le r_{max}$), in order to minimize $N_{trial}$
Selection of $m$ and $\underline q$, $\bar q$:
- choose $m$ so that the posterior confidence bounds are met:
$\Phi\big(m(1-\epsilon) - \bar\zeta;\ m,\ 1-\epsilon-\beta/2\big) \ge \tfrac12(1 + p_{post})$
$\Phi\big(m(1-\epsilon) - \underline\zeta;\ m,\ 1-\epsilon+\beta/2\big) \le \tfrac12(1 - p_{post})$
- Thm. 2 allows $\underline q$, $\bar q$ to be chosen so that the prior confidence bounds hold

63 Randomized algorithm with tight confidence bounds
Algorithm:
(i) Given bounds $\underline\epsilon$, $\bar\epsilon$, probabilities $p_{prior} < p_{post}$, and sample size $m$, determine
$\underline q = \min q$ subject to $\Phi(q - \bar\zeta;\ m,\ 1-\bar\epsilon) \ge \tfrac12(1 + p_{post})$
$\bar q = \max q$ subject to $\Phi(q - \underline\zeta;\ m,\ 1-\underline\epsilon) \le \tfrac12(1 - p_{post})$
$r^* = \arg\max_r \sum_{q=\underline q}^{\bar q} \binom{m-r}{q-r} \min_{k \in [\underline\zeta, \bar\zeta]} \dfrac{B(m-q+k,\ q-k+1)}{B(k,\ r-k+1)}$
$p_{trial} = \sum_{q=\underline q}^{\bar q} \binom{m-r^*}{q-r^*} \min_{k \in [\underline\zeta, \bar\zeta]} \dfrac{B(m-q+k,\ q-k+1)}{B(k,\ r^*-k+1)}$
$N_{trial} = \dfrac{\ln(1 - p_{prior}/p_{post})}{\ln(1 - p_{trial})}$
A numerical sketch of step (i) follows.
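A sketch of step (i) with scipy, using the hypersphere example's parameters; the printed $\underline q$, $\bar q$, $N_{trial}$ should be close to the values quoted later, up to rounding of $p_{trial}$ (which is taken from the slide rather than recomputed here):

```python
# Sketch of step (i) for the hypersphere example (eps in (0.19, 0.21],
# p_prior=0.9, p_post=0.95, m=1e5, support dimension in [2, 5]).
import numpy as np
from math import ceil, log
from scipy.stats import binom

m, eps_lo, eps_up = 100_000, 0.19, 0.21
zeta_min, zeta_max = 2, 5
p_prior, p_post = 0.9, 0.95

qs = np.arange(zeta_max, m + 1)
# q_low = min q : Phi(q - zeta_max; m, 1 - eps_up) >= (1 + p_post)/2
ok_lo = binom.cdf(qs - zeta_max, m, 1 - eps_up) >= 0.5 * (1 + p_post)
q_low = int(qs[ok_lo][0])
# q_bar = max q : Phi(q - zeta_min; m, 1 - eps_lo) <= (1 - p_post)/2
ok_hi = binom.cdf(qs - zeta_min, m, 1 - eps_lo) <= 0.5 * (1 - p_post)
q_bar = int(qs[ok_hi][-1])

p_trial = 0.0365                                   # value quoted on the slides
N_trial = ceil(log(1 - p_prior / p_post) / log(1 - p_trial))
print("q_low =", q_low, " q_bar =", q_bar, " N_trial =", N_trial)
```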

64 Randomized algorithm with tight confidence bounds
Algorithm (cont.):
(ii) Draw $N_{trial}$ multisamples $\omega_m^{(i)}$, and for each $i = 1,\ldots,N_{trial}$:
- compute $x^*(\omega_{r^*}^{(i)})$
- count $\nu_{r^*}(\omega_m^{(i)}) = |\{\delta \in \omega_m^{(i)} : f(x^*(\omega_{r^*}^{(i)}), \delta) \le 0\}|$
(iii) Determine $i^* = \arg\min_{i \in \{1,\ldots,N_{trial}\}} \big| \nu_{r^*}(\omega_m^{(i)}) - \tfrac12(\underline q + \bar q) \big|$
and return
- the solution estimate $\hat x = x^*(\omega_{r^*}^{(i^*)})$
- the number of satisfied constraints $\hat q = \nu_{r^*}(\omega_m^{(i^*)})$
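A compact sketch of steps (ii)-(iii) for the bounding-circle problem, with illustrative (not the talk's) parameter values:

```python
# Compact sketch of steps (ii)-(iii) for the bounding-circle problem, with
# illustrative parameters (not the talk's).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
m, r, N_trial, q_low, q_bar = 2000, 40, 20, 1560, 1640

def one_trial():
    omega_m = rng.standard_normal((m, 2))
    c, R = cp.Variable(2), cp.Variable()
    cp.Problem(cp.Minimize(R),
               [cp.norm(c - delta) <= R for delta in omega_m[:r]]).solve()
    q = int(np.sum(np.linalg.norm(omega_m - c.value, axis=1) <= R.value + 1e-9))
    return (c.value, float(R.value)), q

trials = [one_trial() for _ in range(N_trial)]
# step (iii): keep the trial whose count is closest to the middle of [q_low, q_bar]
x_hat, q_hat = min(trials, key=lambda t: abs(t[1] - 0.5 * (q_low + q_bar)))
print("q_hat =", q_hat, " in target range:", q_low <= q_hat <= q_bar)
```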

65 Randomized algorithm with tight confidence bounds
Proposition 3: $\hat x$, $\hat q$ satisfy the prior confidence bounds
$\mathbb{P}^{N_{trial} m}\{V(\hat x) \in (\underline\epsilon, \bar\epsilon]\} \ge p_{prior}$
and the posterior confidence bounds
$\Phi(q - \bar\zeta;\, m,\, 1-\epsilon) \le \mathbb{P}^{N_{trial} m}\{V(\hat x) \le \epsilon \mid \hat q = q\} \le \Phi(q - \underline\zeta;\, m,\, 1-\epsilon)$.
- The posterior bounds follow directly from Thm. 2.
- The choice of $\underline q$, $\bar q$ gives $\mathbb{P}^m\{V(\hat x) \in (\underline\epsilon, \bar\epsilon] \mid \hat q = q\} \ge p_{post}$ for all $q \in [\underline q, \bar q]$,
$\Rightarrow \mathbb{P}^{N_{trial} m}\{V(\hat x) \in (\underline\epsilon, \bar\epsilon]\} \ge p_{post}\, \mathbb{P}^{N_{trial} m}\{\hat q \in [\underline q, \bar q]\}$
- but the definition of $i^*$ implies $\mathbb{P}^{N_{trial} m}\{\hat q \in [\underline q, \bar q]\} \ge 1 - (1 - p_{trial})^{N_{trial}}$
$\Rightarrow \mathbb{P}^{N_{trial} m}\{V(\hat x) \in (\underline\epsilon, \bar\epsilon]\} \ge p_{post}\,\big[1 - (1 - p_{trial})^{N_{trial}}\big]$
so the choice of $N_{trial}$ ensures the prior bound.

66 Randomized algorithm with tight confidence bounds
- Large $m$ is computationally cheap (hence small $\beta$ and $p_{post} \approx 1$)
- $p_{trial}$ is a tight bound on the probability of $\hat q \in [\underline q, \bar q]$ and is exact if $\underline\zeta = \bar\zeta$
- The posterior confidence bounds are tight for all distributions and are exact if $\underline\zeta = \bar\zeta$
- Repeated trials and empirical violation tests have been used before, e.g. Calafiore (2017), Chamanbaz et al. (2016)
- The lower posterior confidence bound derived here coincides with Calafiore (2017) for fully supported problems ($\underline\zeta = \bar\zeta = n$), but our approach shows that this bound is exact for fully supported problems and provides tight bounds in all other cases

67 Randomized algorithm with tight confidence bounds
Variation of $r^*$ and $N_{trial}$ with $(\underline\zeta, \bar\zeta)$, for $m = 10^5$, $\underline\epsilon = 0.19$, $\bar\epsilon = 0.21$, $\beta = 0.005$, and $p_{post} = 0.5(1 + p_{prior})$:
[Table: $r^*$ and $N_{trial}$ for $(\underline\zeta, \bar\zeta) \in \{(2,5), (7,10), (17,20), (47,50), (97,100), (1,2), (1,5), (1,10)\}$ and $p_{prior} \in \{p_1, p_2, p_3, p_4\}$, where $p_1 = 0.9$, $p_2 = 0.95$, $p_3 = 0.99$]

68 Example I: minimal bounding hypersphere
Find the smallest hypersphere in $\mathbb{R}^4$ containing $\delta \sim \mathcal{N}(0,I)$ with probability 0.8.
Support dimension: $\underline\zeta = 2$, $\bar\zeta = 5$
Prior bounds: $V(\hat x) \in (0.19, 0.21]$ with probability $p_{prior} = 0.9$
Posterior bounds: $V(\hat x)$ within 0.005 of $1 - \hat q/m$ with probability $p_{post} = 0.95$
Parameters: $m = 10^5$, $\underline q = 79257$, $r^* = 15$, $p_{trial} = 0.0365$, $N_{trial} = 84$
Compare SP solved approximately using greedy discarding (assuming one constraint discarded per iteration), with $m = 420$, $q = 336$: the corresponding bound is $V(x^*(\omega^*)) \in (0.173, 0.332]$.

70 Example I: minimal bounding hypersphere
Posterior confidence bounds given $\hat q \in [\underline q, \bar q]$: $\mathbb{P}\{V(\hat x) \le \epsilon \mid \hat q = \bar q\}$ (blue) and $\mathbb{P}\{V(\hat x) \le \epsilon \mid \hat q = \underline q\}$ (red).
[Figure: probability vs. violation probability $\epsilon$, curves $\Phi(\bar q - \zeta; m, 1-\epsilon)$ and $\Phi(\underline q - \zeta; m, 1-\epsilon)$]

71 Example I: minimal bounding hypersphere
[Figure: empirical probability distribution of $\hat q$ (blue, observed frequency) for 1000 repetitions, against the worst-case bound $F$ (red), plotted over the proportion of sampled constraints satisfied $\hat q/m$]
$\mathbb{P}\{|\hat q/m - 0.8| \le t\} \ge F(0.8 + t) - F(0.8 - t)$

72 Example II: finite horizon robust control design
Control problem (from Calafiore 2017): determine a control sequence to drive the system state to a target over a finite horizon with high probability, despite model uncertainty.
Discrete-time uncertain linear system:
$z(t+1) = A(\delta) z(t) + B u(t)$,  $A(\delta) = A_0 + \sum_{i,j=1}^{n_z} \delta_{i,j}\, e_i e_j^\top$,  $\delta_{i,j}$ uniformly distributed on a given interval

74 Example II: finite horizon robust control design
Control objective:
CCP($\epsilon$):  minimize$_{\gamma, u}$ $\gamma$  subject to  $\mathbb{P}\{\|z_T - z(T,\delta)\|^2 + \|u\|^2 > \gamma\} \le \epsilon$
with
$u = \{u(0),\ldots,u(T-1)\}$: predicted control sequence
$z(T,\delta) := A(\delta)^T z(0) + [A(\delta)^{T-1}B \ \cdots\ B]\, u$
$z_T$: target state
[Figure: control input $u(t)$ vs. time step $t$]
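A small sketch of how the sampled constraint function can be evaluated for this example: sample $A(\delta)$, propagate the state over the horizon, and estimate the violation probability of a fixed candidate $(u, \gamma)$; the system data and uncertainty level are made-up here.

```python
# Sketch: sample A(delta), propagate the state, and estimate the violation
# probability of a fixed candidate (u, gamma).  System data and the
# uncertainty level rho are made-up; the talk's example has n_z=6, T=10.
import numpy as np

rng = np.random.default_rng(4)
n_z, T, rho = 6, 10, 1e-3
A0 = np.eye(n_z) + 0.05 * rng.standard_normal((n_z, n_z))
B = rng.standard_normal((n_z, 1))
z0 = rng.standard_normal(n_z)
z_target = np.zeros(n_z)
u = 0.1 * rng.standard_normal(T)                   # candidate control sequence
gamma = 5.0                                        # candidate objective level

def f(u, gamma, delta):
    # f = ||z_T - z(T, delta)||^2 + ||u||^2 - gamma for A(delta) = A0 + delta
    A = A0 + delta
    z = z0.copy()
    for t in range(T):
        z = A @ z + (B[:, 0] * u[t])
    return np.linalg.norm(z_target - z) ** 2 + np.linalg.norm(u) ** 2 - gamma

deltas = rng.uniform(-rho, rho, size=(5000, n_z, n_z))
violation = np.mean([f(u, gamma, d) > 0 for d in deltas])
print("empirical violation probability:", violation)
```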

75 Example II: finite horizon robust control design
Problem dimensions: $n_z = 6$, $T = 10$, $n = 11$, $\underline\zeta = 1$, $\bar\zeta = 3$; parameters: uncertainty bound $10^{-3}$, $\bar\epsilon = 0.005$.
Single-sided prior probability bounds with $\bar\epsilon = 0.005$: $\mathbb{P}\{V(x) \in (0, \bar\epsilon]\} \ge p_{prior} = 0.99$
Posterior uncertainty $\beta = 0.003$ and probability $p_{post}$, which together determine $m$.
Choose the number of sampled constraints in each trial to maximize $p_{trial}$: $r^* = 2922$, $p_{trial} = 0.482$, $N_{trial} = 9$;
if $r$ is limited to 2000, we get $p_{trial} = 0.414$, $N_{trial} = 9$.

76 Example II: finite horizon robust control design
Compare the number of trials needed for $\hat q \in [\underline q, \bar q]$:
expected using the bounds of Calafiore (2017): 9.6
expected using the proposed approach ($1/p_{trial}$): 2.4
observed: 1.4
Residual conservatism is due to the use of the bounds $\underline\zeta = 1$, $\bar\zeta = 3$ rather than the actual distribution of the support dimension:
support dimensions 1, 2, 3 observed with frequencies 5%, 59%, 36%.

77 Future directions
- Alternative methods for randomly selecting level-q solutions
- Methods of checking feasibility of suboptimal points
- Estimation of violation probability over a set of parameters; application to computing invariant sets for constrained dynamics
- Application to chance-constrained energy management in PHEVs

78 Conclusions
- Solutions of sampled convex optimization problems with randomised sample selection strategies have tighter confidence bounds than optimal sample discarding strategies
- Randomised sample selection can be performed by solving a robust sampled optimization and counting violations of additional sampled constraints
- We propose a randomised algorithm for chance-constrained convex programming with tight a priori and a posteriori confidence bounds

79 Thank you for your attention!
Acknowledgements: James Fleming, Matthias Lorenzen, Johannes Buerger, Rainer Manuel Schaich...

80 Research directions
- Optimal control of uncertain systems: chance-constrained optimization, stochastic predictive control
- Robust control design: invariant sets and uncertainty parameterization
- Applications: energy management in hybrid electric vehicles (BMW), ecosystem-based fisheries management (BAS)


More information

Jitka Dupačová and scenario reduction

Jitka Dupačová and scenario reduction Jitka Dupačová and scenario reduction W. Römisch Humboldt-University Berlin Institute of Mathematics http://www.math.hu-berlin.de/~romisch Session in honor of Jitka Dupačová ICSP 2016, Buzios (Brazil),

More information

Stochastic Optimization with Risk Measures

Stochastic Optimization with Risk Measures Stochastic Optimization with Risk Measures IMA New Directions Short Course on Mathematical Optimization Jim Luedtke Department of Industrial and Systems Engineering University of Wisconsin-Madison August

More information

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA Contents in latter part Linear Dynamical Systems What is different from HMM? Kalman filter Its strength and limitation Particle Filter

More information

9 Multi-Model State Estimation

9 Multi-Model State Estimation Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 9 Multi-Model State

More information

Bayesian Methods for Machine Learning

Bayesian Methods for Machine Learning Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),

More information

Handout 6: Some Applications of Conic Linear Programming

Handout 6: Some Applications of Conic Linear Programming ENGG 550: Foundations of Optimization 08 9 First Term Handout 6: Some Applications of Conic Linear Programming Instructor: Anthony Man Cho So November, 08 Introduction Conic linear programming CLP, and

More information

The Scenario Approach to Robust Control Design

The Scenario Approach to Robust Control Design The Scenario Approach to Robust Control Design GC Calafiore and MC Campi Abstract This paper proposes a new probabilistic solution framework for robust control analysis and synthesis problems that can

More information

EE C128 / ME C134 Feedback Control Systems

EE C128 / ME C134 Feedback Control Systems EE C128 / ME C134 Feedback Control Systems Lecture Additional Material Introduction to Model Predictive Control Maximilian Balandat Department of Electrical Engineering & Computer Science University of

More information

A Probabilistic Framework for solving Inverse Problems. Lambros S. Katafygiotis, Ph.D.

A Probabilistic Framework for solving Inverse Problems. Lambros S. Katafygiotis, Ph.D. A Probabilistic Framework for solving Inverse Problems Lambros S. Katafygiotis, Ph.D. OUTLINE Introduction to basic concepts of Bayesian Statistics Inverse Problems in Civil Engineering Probabilistic Model

More information

Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1)

Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1) Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1) Detection problems can usually be casted as binary or M-ary hypothesis testing problems. Applications: This chapter: Simple hypothesis

More information

Elements of probability theory

Elements of probability theory The role of probability theory in statistics We collect data so as to provide evidentiary support for answers we give to our many questions about the world (and in our particular case, about the business

More information

Randomized Load Balancing:The Power of 2 Choices

Randomized Load Balancing:The Power of 2 Choices Randomized Load Balancing: The Power of 2 Choices June 3, 2010 Balls and Bins Problem We have m balls that are thrown into n bins, the location of each ball chosen independently and uniformly at random

More information

Bayesian Inference and MCMC

Bayesian Inference and MCMC Bayesian Inference and MCMC Aryan Arbabi Partly based on MCMC slides from CSC412 Fall 2018 1 / 18 Bayesian Inference - Motivation Consider we have a data set D = {x 1,..., x n }. E.g each x i can be the

More information

Basics of Uncertainty Analysis

Basics of Uncertainty Analysis Basics of Uncertainty Analysis Chapter Six Basics of Uncertainty Analysis 6.1 Introduction As shown in Fig. 6.1, analysis models are used to predict the performances or behaviors of a product under design.

More information

Probability Distributions for Continuous Variables. Probability Distributions for Continuous Variables

Probability Distributions for Continuous Variables. Probability Distributions for Continuous Variables Probability Distributions for Continuous Variables Probability Distributions for Continuous Variables Let X = lake depth at a randomly chosen point on lake surface If we draw the histogram so that the

More information