Monte Carlo Methods for Stochastic Programming


Slide 1: IE 495 Lecture 16. Monte Carlo Methods for Stochastic Programming. Prof. Jeff Linderoth. March 17, 2003.

Slide 2: Outline
- Review
  - Jensen's inequality
  - Edmundson-Madansky inequality
  - Partitioning
  - Using bounds in algorithms
- Monte Carlo methods
- Estimating the optimal objective function value
  - Lower bounds
  - Upper bounds
- Examples

Slide 3: Review
- Why do we care about bounds in stochastic programming?
- What is Jensen's inequality?
- How is Jensen's inequality used in stochastic programming?
- What is the Edmundson-Madansky inequality?
- How is the Edmundson-Madansky inequality used in stochastic programming?

Slide 4: Jensen's Inequality in Stochastic Programming
If $\phi$ is a convex function of a random variable $\omega$ over its support $\Omega$, then
$$E_\omega[\phi(\omega)] \ge \phi(E_\omega[\omega]).$$
Applied to the recourse function:
$$E_\omega[Q(\hat{x}, \omega)] \ge Q(\hat{x}, E_\omega[\omega]).$$
Let $S = \{\Omega_l,\ l = 1, 2, \ldots, v\}$ be some partition of $\Omega$. Then
$$E_\omega[Q(\hat{x}, \omega)] \ge \sum_{l=1}^{v} P(\omega \in \Omega_l)\, Q\big(\hat{x},\, E_\omega[\omega \mid \omega \in \Omega_l]\big).$$
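A quick numerical check of Jensen's bound, as a minimal sketch: $\phi$ below is a hypothetical convex stand-in for $Q(\hat{x}, \cdot)$, not the lecture's recourse function, and the uniform distribution is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(omega):
    # Convex stand-in for Q(x_hat, .): a piecewise-linear shortfall cost.
    return np.maximum(0.0, omega - 2.5)

omega = rng.uniform(1.0, 4.0, size=100_000)   # omega ~ U[1, 4]
lhs = phi(omega).mean()     # Monte Carlo estimate of E[phi(omega)]
rhs = phi(omega.mean())     # phi(E[omega]), approximately phi(2.5) = 0
print(f"E[phi(omega)] ~= {lhs:.4f} >= phi(E[omega]) ~= {rhs:.4f}")
```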

Slide 5: Edmundson-Madansky Inequality
If $\Omega$ has a (finite) set of extreme points $\mathrm{ext}(\Omega)$, then any $\omega \in \Omega$ is a convex combination of extreme points, so there is some probability measure $p_e$ (convex multipliers) with
$$\sum_{e \in \mathrm{ext}(\Omega)} p_e\, Q(\hat{x}, e) \ \ge\ Q(\hat{x}, \omega).$$
If $\Omega = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_d, b_d]$ and the random variables in the $d$ dimensions are independent, then
$$U(\hat{x}) = \sum_{e \in \mathrm{ext}(\Omega)} \prod_{i=1}^{d} p_i(e_i)\, Q(\hat{x}, e) \ \ge\ E_\omega[Q(\hat{x}, \omega)] = \mathcal{Q}(\hat{x}),$$
where $p_i(a_i) = \dfrac{b_i - E[\omega_i]}{b_i - a_i}$ and $p_i(b_i) = \dfrac{E[\omega_i] - a_i}{b_i - a_i}$.
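In one dimension ($d = 1$) the Edmundson-Madansky bound uses just the two interval endpoints. A minimal sketch, reusing the same hypothetical convex $\phi$ as above so that both bounds can be compared against a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(omega):
    # Convex stand-in for Q(x_hat, .).
    return np.maximum(0.0, omega - 2.5)

a, b = 1.0, 4.0
omega = rng.uniform(a, b, size=100_000)
mu = omega.mean()

# Edmundson-Madansky weights on the two extreme points of [a, b].
p_a = (b - mu) / (b - a)
p_b = (mu - a) / (b - a)

upper = p_a * phi(a) + p_b * phi(b)   # E-M upper bound
lower = phi(mu)                       # Jensen lower bound
estimate = phi(omega).mean()          # Monte Carlo estimate of E[phi(omega)]
print(f"Jensen {lower:.4f} <= E[phi] ~= {estimate:.4f} <= E-M {upper:.4f}")
```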

Slide 6: Bounds in the L-Shaped Method
- To use bounds, use $\theta$ to underestimate a lower-bounding function $Q_L(x)$. (We called this $E_\omega[L(x, \omega)]$ on Monday.)
- Use the L-Shaped method to optimize the problem using $Q_L(x)$. Only the $v$ scenarios induced by the partition need be included.
- When optimized with respect to $Q_L(x)$, compare to $Q_U(x)$ (what we called $E_\omega[U(x, \omega)]$).
- If $Q_U(x) - Q_L(x)$ is sufficiently small, stop. Otherwise, refine the partition (which improves the bounds) and repeat.

Slide 7: Monte Carlo Methods
$$\min_{x \in S} \left\{ f(x) \equiv E_P[g(x; \xi)] = \int_\Omega g(x; \xi)\, dP(\xi) \right\} \qquad (*)$$
Most of the theory presented holds for $(*)$, a very general SP problem. Naturally it holds for our favorite SP problem:
$$S \equiv \{x : Ax = b,\ x \ge 0\}, \qquad f(x) \equiv c^T x + \mathcal{Q}(x), \qquad \mathcal{Q}(x) \equiv E[Q(x, \omega)],$$
$$Q(x, \omega) \equiv \min_{y \ge 0} \left\{ q(\omega)^T y : W y = h(\omega) - T(\omega)x \right\}.$$

Slide 8: Sampling
Instead of solving $(*)$, we solve an approximating problem. Let $\xi^1, \ldots, \xi^N$ be $N$ realizations of the random variable $\xi$:
$$\min_{x \in S} \left\{ \hat{f}_N(x) \equiv N^{-1} \sum_{j=1}^{N} g(x, \xi^j) \right\}.$$
$\hat{f}_N(x)$ is just the sample average function. Since the $\xi^j$ are drawn from $P$, $\hat{f}_N(x)$ is an unbiased estimator of $f(x)$:
$$E[\hat{f}_N(x)] = f(x).$$

Slide 9: Sample Variance
Since the $\xi^j$ are independent, we can estimate $\mathrm{Var}(\hat{f}_N(x))$ with the sample variance:
$$\hat{\sigma}^2(x) = \frac{1}{N(N-1)} \sum_{j=1}^{N} \left[ g(x, \xi^j) - \hat{f}_N(x) \right]^2.$$
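A short numpy sketch of the sample average function and this variance estimator. The cost $g$ here is a hypothetical one-dimensional stand-in for the recourse problem, not the lecture's example:

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x, xi):
    # Hypothetical convex cost, standing in for the recourse problem.
    return 0.5 * x + np.maximum(0.0, xi - x)

x_hat = 2.0
N = 1_000
xi = rng.uniform(1.0, 4.0, size=N)   # xi^1, ..., xi^N drawn i.i.d. from P

values = g(x_hat, xi)
f_N = values.mean()                  # sample average function at x_hat
var_f_N = values.var(ddof=1) / N     # equals sum(...)^2 / (N (N - 1))
print(f"f_N(x_hat) = {f_N:.4f}, estimated Var(f_N) = {var_f_N:.2e}")
```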

Slide 10: Statistics Break
Let $\chi_1, \chi_2, \ldots, \chi_n$ be independent, identically distributed (i.i.d.) random variables. Let $S_n = \sum_{i=1}^{n} \chi_i$, and assume $\mu \equiv E[\chi_i] < \infty$.
Weak Law of Large Numbers:
$$\lim_{n \to \infty} P\left( \left| \frac{S_n}{n} - \mu \right| \ge \delta \right) = 0 \qquad \forall \delta > 0.$$

Slide 11: Strong Law of Large Numbers
$$\lim_{n \to \infty} \frac{S_n}{n} = \mu \quad \text{almost surely.}$$
"Almost surely" is equivalent to "with probability 1", or
$$P\left( \lim_{n \to \infty} \frac{S_n}{n} \ne \mu \right) = 0.$$

Slide 12: Central Limit Theorem
Further assume that $\chi_1, \chi_2, \ldots, \chi_n$ have finite nonzero variance $\sigma^2$:
$$\lim_{n \to \infty} P\left( \frac{S_n - n\mu}{\sigma \sqrt{n}} \le x \right) = P\big(\mathcal{N}(0, 1) \le x\big).$$
$\mathcal{N}(\mu, \sigma^2)$: normally distributed random variable with mean $\mu$ and variance $\sigma^2$.
This is an amazing theorem.

Slide 13: A More Convenient Form of the CLT
$$\frac{S_n - n\mu}{\sigma \sqrt{n}} \Rightarrow \mathcal{N}(0, 1)$$
$$\frac{\sqrt{n}(\bar{\chi} - \mu)}{\sigma} \Rightarrow \mathcal{N}(0, 1)$$
$$\sqrt{n}(\bar{\chi} - \mu) \Rightarrow \mathcal{N}(0, \sigma^2)$$
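A small simulation of the last form, as a sanity check: replicate $\sqrt{n}(\bar{\chi} - \mu)$ many times and compare its moments to $\mathcal{N}(0, \sigma^2)$. The uniform population here is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 500, 20_000
mu, sigma2 = 0.5, 1.0 / 12.0   # mean and variance of U[0, 1]

samples = rng.uniform(0.0, 1.0, size=(reps, n))
z = np.sqrt(n) * (samples.mean(axis=1) - mu)   # sqrt(n) (chi_bar - mu)

# By the CLT, z should be approximately N(0, sigma^2).
print(f"mean {z.mean():+.4f} (vs 0), variance {z.var():.5f} (vs {sigma2:.5f})")
```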

Slide 14: Sampling Methods
- Interior sampling methods: sample during the course of the algorithm.
  - L-Shaped method (Dantzig and Infanger)
  - Stochastic decomposition (Higle and Sen)
  - Stochastic quasi-gradient methods (Ermoliev)
- Exterior sampling methods: sample first, then solve the approximating problem.
- Can we get (statistical) bounds on key solution quantities?

Slide 15: Lower Bound on the Optimal Objective Function Value
$$v = \min_{x \in S} \left\{ f(x) \equiv E_P[g(x, \xi)] = \int_\Omega g(x; \xi)\, p(\xi)\, d\xi \right\}$$
For some sample $\xi^1, \ldots, \xi^N$, let
$$\hat{v}_N = \min_{x \in S} \left\{ \hat{f}_N(x) \equiv N^{-1} \sum_{j=1}^{N} g(x, \xi^j) \right\}.$$
Thm:
$$E[\hat{v}_N] \le v.$$

Slide 16: Proof
$$v = \min_{x \in X} E_P[g(x; \xi)] = \min_{x \in X} E\left[ N^{-1} \sum_{j=1}^{N} g(x, \xi^j) \right].$$
For any fixed $\bar{x} \in X$,
$$\min_{x \in X}\ N^{-1} \sum_{j=1}^{N} g(x, \xi^j) \ \le\ N^{-1} \sum_{j=1}^{N} g(\bar{x}, \xi^j),$$
so taking expectations,
$$E[\hat{v}_N] = E\left[ \min_{x \in X}\ N^{-1} \sum_{j=1}^{N} g(x, \xi^j) \right] \ \le\ E\left[ N^{-1} \sum_{j=1}^{N} g(\bar{x}, \xi^j) \right] \qquad \forall \bar{x} \in X.$$
Minimizing the right-hand side over $\bar{x} \in X$ gives
$$E[\hat{v}_N] \ \le\ \min_{\bar{x} \in X} E\left[ N^{-1} \sum_{j=1}^{N} g(\bar{x}, \xi^j) \right] = v.$$

Slide 17: Next?
Now we need to somehow estimate $E[\hat{v}_N]$. It can be estimated as follows. Generate $M$ independent batches $\xi^{1,j}, \ldots, \xi^{N,j}$, $j = 1, \ldots, M$, each of size $N$, and solve the corresponding SAA problems
$$\min_{x \in X} \left\{ \hat{f}_N^j(x) := N^{-1} \sum_{i=1}^{N} g(x, \xi^{i,j}) \right\} \qquad (1)$$
for each $j = 1, \ldots, M$. Let $\hat{v}_N^j$ be the optimal value of problem (1), and compute
$$L_{N,M} \equiv \frac{1}{M} \sum_{j=1}^{M} \hat{v}_N^j.$$

Slide 18: Lower Bounds
The estimate $L_{N,M}$ is an unbiased estimate of $E[\hat{v}_N]$. By our last theorem, it provides a statistical lower bound for the true optimal value $v$. When the $M$ batches $\xi^{1,j}, \xi^{2,j}, \ldots, \xi^{N,j}$, $j = 1, \ldots, M$, are i.i.d. (although the elements within each batch do not need to be i.i.d.), we have by the Central Limit Theorem that
$$\sqrt{M}\left[ L_{N,M} - E[\hat{v}_N] \right] \Rightarrow \mathcal{N}(0, \sigma_L^2).$$

Slide 19: Confidence Intervals
The sample variance estimator of $\sigma_L^2$ is
$$s_L^2(M) \equiv \frac{1}{M-1} \sum_{j=1}^{M} \left( \hat{v}_N^j - L_{N,M} \right)^2.$$
Defining $z_\alpha$ to satisfy $P\{\mathcal{N}(0,1) \le z_\alpha\} = 1 - \alpha$, and replacing $\sigma_L$ by $s_L(M)$, we can obtain an approximate $(1-\alpha)$-confidence interval for $E[\hat{v}_N]$:
$$\left[ L_{N,M} - \frac{z_\alpha\, s_L(M)}{\sqrt{M}},\ \ L_{N,M} + \frac{z_\alpha\, s_L(M)}{\sqrt{M}} \right].$$
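The whole batching procedure of slides 17 through 19 fits in a few lines. A sketch, where solve_saa and sample are assumed to be supplied by the caller (a concrete solve_saa for the example on the next slide follows below):

```python
import numpy as np

def lower_bound_ci(solve_saa, sample, N, M, z_alpha=1.645, seed=0):
    # solve_saa(batch) -> optimal value of one SAA problem (assumed given);
    # sample(rng, N)   -> one batch of N scenarios (assumed given).
    rng = np.random.default_rng(seed)
    v = np.array([solve_saa(sample(rng, N)) for _ in range(M)])
    L = v.mean()                        # L_{N,M}, estimates E[v_hat_N]
    s_L = v.std(ddof=1)                 # s_L(M)
    half = z_alpha * s_L / np.sqrt(M)   # half-width (z_alpha=1.645: ~90% CI)
    return L, (L - half, L + half)
```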

Slide 20: Let's Do an Example (Time Permitting)
$$\begin{aligned}
\text{minimize} \quad & Q(x_1, x_2) = x_1 + x_2 + \int_{\omega_1=1}^{4} \int_{\omega_2=1/3}^{2/3} \big( y_1(\omega_1, \omega_2) + y_2(\omega_1, \omega_2) \big)\, d\omega_2\, d\omega_1 \\
\text{subject to} \quad & \omega_1 x_1 + x_2 + y_1(\omega_1, \omega_2) \ge 7 \\
& \omega_2 x_1 + x_2 + y_2(\omega_1, \omega_2) \ge 4 \\
& x_1 \ge 0, \quad x_2 \ge 0, \quad y_1(\omega_1, \omega_2) \ge 0, \quad y_2(\omega_1, \omega_2) \ge 0
\end{aligned}$$
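Reading the example with $\omega_1 \sim U[1, 4]$ and $\omega_2 \sim U[1/3, 2/3]$ independent, and the $y_i$ as shortfall variables, one SAA instance is a single LP with two first-stage and $2N$ recourse variables. A sketch using scipy's HiGHS-based linprog; the sample size and seed are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

def sample(rng, N):
    # omega_1 ~ U[1, 4] and omega_2 ~ U[1/3, 2/3], independent.
    return np.column_stack((rng.uniform(1.0, 4.0, N),
                            rng.uniform(1.0 / 3.0, 2.0 / 3.0, N)))

def solve_saa(batch):
    # Variables: [x1, x2, y1_1..y1_N, y2_1..y2_N], all >= 0.
    N = batch.shape[0]
    c = np.concatenate(([1.0, 1.0], np.full(2 * N, 1.0 / N)))
    A = np.zeros((2 * N, 2 + 2 * N))
    b = np.empty(2 * N)
    for j in range(N):
        # omega_1^j x1 + x2 + y1_j >= 7  (negated for linprog's A_ub x <= b_ub)
        A[j, 0], A[j, 1], A[j, 2 + j] = -batch[j, 0], -1.0, -1.0
        b[j] = -7.0
        # omega_2^j x1 + x2 + y2_j >= 4
        A[N + j, 0], A[N + j, 1], A[N + j, 2 + N + j] = -batch[j, 1], -1.0, -1.0
        b[N + j] = -4.0
    res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
    return res.fun   # v_hat_N for this batch

rng = np.random.default_rng(3)
print(solve_saa(sample(rng, 50)))   # one SAA optimal value
```

Plugged into the previous sketch, lower_bound_ci(solve_saa, sample, N=50, M=10) returns $L_{N,M}$ and its approximate confidence interval.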

Slide 21: Upper Bounds
$$v = \min_{x \in S} \left\{ f(x) \equiv E_P[g(x; \xi)] = \int_\Omega g(x; \xi)\, p(\xi)\, d\xi \right\}$$
Quick, someone prove:
$$f(\hat{x}) \ge v \qquad \forall \hat{x} \in X.$$
How can we estimate $f(\hat{x})$?

Slide 22: Estimating f(x̂)
Generate $T$ independent batches of samples of size $N$, denoted by $\xi^{1,j}, \xi^{2,j}, \ldots, \xi^{N,j}$, $j = 1, 2, \ldots, T$, where each batch has the unbiased property, namely
$$E\left[ \hat{f}_N^j(x) := N^{-1} \sum_{i=1}^{N} g(x, \xi^{i,j}) \right] = f(x), \qquad \text{for all } x \in X.$$
We can then use the average value, defined by
$$U_{N,T}(\hat{x}) := T^{-1} \sum_{j=1}^{T} \hat{f}_N^j(\hat{x}),$$
as an estimate of $f(\hat{x})$.

Slide 23: More Confidence Intervals
By applying the Central Limit Theorem again, we have that
$$\sqrt{T}\left[ U_{N,T}(\hat{x}) - f(\hat{x}) \right] \Rightarrow \mathcal{N}(0, \sigma_U^2(\hat{x})), \qquad \text{as } T \to \infty,$$
where $\sigma_U^2(\hat{x}) := \mathrm{Var}[\hat{f}_N(\hat{x})]$. We can estimate $\sigma_U^2(\hat{x})$ by the sample variance estimator
$$s_U^2(\hat{x}; T) := \frac{1}{T-1} \sum_{j=1}^{T} \left[ \hat{f}_N^j(\hat{x}) - U_{N,T}(\hat{x}) \right]^2.$$
By replacing $\sigma_U^2(\hat{x})$ with $s_U^2(\hat{x}; T)$, we can proceed as above to obtain a $(1-\alpha)$-confidence interval for $f(\hat{x})$:
$$\left[ U_{N,T}(\hat{x}) - \frac{z_\alpha\, s_U(\hat{x}; T)}{\sqrt{T}},\ \ U_{N,T}(\hat{x}) + \frac{z_\alpha\, s_U(\hat{x}; T)}{\sqrt{T}} \right].$$
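For the slide-20 example, evaluating a fixed candidate $\hat{x}$ needs no LP at all, since each $y_i$ simply covers the remaining shortfall in its constraint. A sketch of $U_{N,T}(\hat{x})$ and its interval; the candidate point, $N$, and $T$ are arbitrary choices:

```python
import numpy as np

def f_hat(x, batch):
    # Closed-form recourse of the slide-20 example: y_i covers the shortfall.
    q1 = np.maximum(0.0, 7.0 - batch[:, 0] * x[0] - x[1])
    q2 = np.maximum(0.0, 4.0 - batch[:, 1] * x[0] - x[1])
    return x[0] + x[1] + (q1 + q2).mean()

def upper_bound_ci(x, N, T, z_alpha=1.645, seed=4):
    rng = np.random.default_rng(seed)
    fj = np.empty(T)
    for j in range(T):
        batch = np.column_stack((rng.uniform(1.0, 4.0, N),
                                 rng.uniform(1.0 / 3.0, 2.0 / 3.0, N)))
        fj[j] = f_hat(x, batch)
    U = fj.mean()                                  # U_{N,T}(x_hat)
    half = z_alpha * fj.std(ddof=1) / np.sqrt(T)
    return U, (U - half, U + half)

print(upper_bound_ci(x=(1.5, 3.0), N=1_000, T=30))   # x_hat chosen arbitrarily
```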

Slide 24: Putting It All Together
- $\hat{f}_N(x)$ is the sample average function: draw $\omega^1, \ldots, \omega^N$ from $P$ and set $\hat{f}_N(x) \equiv N^{-1} \sum_{j=1}^{N} g(x, \omega^j)$. For a stochastic LP with recourse, evaluating it means solving $N$ LPs.
- $\hat{v}_N$ is the optimal solution value for the sample average function:
$$\hat{v}_N \equiv \min_{x \in S} \left\{ \hat{f}_N(x) := N^{-1} \sum_{j=1}^{N} g(x, \omega^j) \right\}.$$
- Estimate $E[\hat{v}_N]$ as $\hat{E}[\hat{v}_N] = L_{N,M} = M^{-1} \sum_{j=1}^{M} \hat{v}_N^j$: solve $M$ stochastic LPs, each of sample size $N$.

Slide 25: Recapping Theorems
- Thm: $E[\hat{v}_N] \le v$.
- Thm: $f(x) \ge v$ for all $x$.
- $\hat{f}_N(\hat{x}) - \hat{E}[\hat{v}_N] \to f(\hat{x}) - v$ as $N$, $M$, and the evaluation sample size go to $\infty$.
- We are mostly interested in estimating the quality of a given solution $\hat{x}$. This is $f(\hat{x}) - v$.
- $\hat{f}_N(\hat{x})$ is computed by solving $N$ independent LPs.
- $\hat{E}[\hat{v}_N]$ is computed by solving $M$ independent stochastic LPs.
- Independent means no synchronization, which is good for the Grid.
- Independent means we can construct confidence intervals around the estimates.

Slide 26: An Experiment
- $M$ times: solve a stochastic sampled approximation of size $N$ (thus obtaining an estimate of $E[\hat{v}_N]$).
- For each of the $M$ solutions $x^1, \ldots, x^M$, estimate $f(\hat{x})$ by solving $N$ LPs.

Test instances:

Name     Application               |Omega|   (m1, n1)    (m2, n2)
LandS    HydroPower Planning       10^6      (2, 4)      (7, 12)
gbd      ?                         ?         (?, ?)      (?, ?)
storm    Cargo Flight Scheduling   ?         (185, 121)  (?, 1291)
20term   Vehicle Assignment        ?         (1, 5)      (71, 102)
ssn      Telecom. Network Design   ?         (1, 89)     (175, 706)

Slide 27: Convergence of Optimal Solution Value ($9 \le M \le 12$, evaluation sample size $10^6$)
[Table: lower-bound estimates for the instances 20term, gbd, LandS, storm, and ssn at $N$ = 50, 100, 500, 1000, and 5000, once under Monte Carlo sampling and once under Latin hypercube sampling. The numeric entries did not survive the transcription.]

Slide 28: 20term convergence, Monte Carlo sampling. [Plot: lower- and upper-bound estimates versus N.]

Slide 29: 20term convergence, Latin hypercube sampling. [Plot: lower- and upper-bound estimates versus N.]

Slide 30: ssn convergence, Monte Carlo sampling. [Plot: lower- and upper-bound estimates versus N.]

Slide 31: ssn convergence, Latin hypercube sampling. [Plot: lower- and upper-bound estimates versus N.]

Slide 32: storm convergence, Monte Carlo sampling. [Plot: lower- and upper-bound estimates versus N; objective values on the order of 1.55e+06.]

Slide 33: storm convergence, Latin hypercube sampling. [Plot: lower- and upper-bound estimates versus N; objective values on the order of 1.55e+06.]

Slide 34: gbd convergence, Monte Carlo sampling. [Plot: lower- and upper-bound estimates versus N; objective values on the order of 1800.]

Slide 35: gbd convergence, Latin hypercube sampling. [Plot: lower- and upper-bound estimates versus N.]

Slide 36: The Gap
The gap estimate $\hat{f}_N(\hat{x}) - \hat{E}[\hat{v}_N]$ overestimates the true gap $f(\hat{x}) - v$ in expectation, since
$$f(\hat{x}) \ \ge\ v \ \ge\ E[\hat{v}_N].$$
Of most concern is the bias $v - E[\hat{v}_N]$. How fast can we make this go down in $N$?

Slide 37: A Biased Discussion
- Some problems are ill-conditioned: it takes a large sample to get an accurate estimate of the solution.
- Variance reduction can help reduce the bias: you get the "right" small sample.

Slide 38: Next Time
- Go over homeworks
- Convergence of optimal solution values
- Stochastic decomposition
