The Lovasz-Vempala algorithm for computing the volume of a convex body in O*(n^4) - Theory and Implementation


1 The Lovász-Vempala algorithm for computing the volume of a convex body in O*(n^4) - Theory and Implementation
Mittagsseminar, 20 Dec 2011
Christian L. Müller, MOSAIC group, Institute of Theoretical Computer Science, ETH Zürich

2 Papers
- László Lovász, Santosh Vempala: Simulated annealing in convex bodies and an O*(n^4) volume algorithm. Journal of Computer and System Sciences 72 (2006).
- L. Lovász, I. Deák: Computational results of an O*(n^4) volume algorithm. European Journal of Operational Research 216 (2012).

3 Problem statement
Given a convex body K ⊆ R^n with B(0,1) ⊆ K ⊆ B(0,R), accessible via a membership oracle, compute the volume vol(K) with relative error ε.

Not possible in polynomial time (in n, 1/ε, log R) for deterministic algorithms!

Theorem 1 (Bárány and Füredi). For every deterministic algorithm that uses at most n^a membership queries and, given a convex body K with B_n ⊆ K ⊆ n B_n, outputs two numbers A, B such that A ≤ vol(K) ≤ B, there exists a body K for which the ratio B/A is at least (cn / (a log n))^n, where c is an absolute constant.

6 The breakthrough: Randomized algorithms
There is a polynomial-time randomized algorithm that computes the volume with high probability 1 - δ and with arbitrarily small relative error ε.

The breakthrough paper by Dyer, Frieze, and Kannan (J. ACM, 1991) describes a randomized algorithm with O*(n^23) complexity. It was improved in a long sequence of papers:

- Dyer-Frieze-Kannan [36]: grid walk + Minkowski sum (the breakthrough), n^23
- Lovász-Simonovits [78]: grid walk, Localization Lemma, n^16
- Applegate-Kannan [8]: grid walk, Metropolis, two cubes, n^10
- Lovász [76]: ball walk, n^8
- Dyer-Frieze [35]: grid walk, two cubes, n^8
- Lovász-Simonovits [80]: ball walk, randomized Metropolis, n^7
- Kannan-Lovász-Simonovits [60]: ball walk, isotropic position, interlaced, n^5
- Lovász [77]: Hit-and-Run, warm start, n^5
- Lovász-Vempala [84]: Hit-and-Run, simulated annealing, n^4

Theorem 1.1 (Lovász and Vempala). The volume of a convex body K, given by a membership oracle and a parameter R such that B ⊆ K ⊆ RB, can be approximated to within a relative error of ε with probability 1 - δ using
O( (n^4 / ε^2) log^9 (n/(εδ)) + n^4 log^8 (n/δ) log R ) = O*(n^4)
oracle calls.

9 Talk overview
Part 1: Tools for volume computation
- Membership oracle and boundary oracles
- MCMC methods and the Hit-and-Run sampler
- Classical volume computation
- Simulated Annealing and log-concave functions

Part 2: The Lovász-Vempala algorithm
- Key algorithmic steps
- The pencil construction
- Multi-phase Monte Carlo sampling
- Computational difficulties and solutions
- Numerical results for hypercubes

11 Membership oracle
Input: a point x = (x_1, ..., x_n) ∈ R^n.
Output: Yes if x ∈ K, No otherwise.
[Figure: a query point x tested against the convex body K]

12 Boundary oracles for line samplers
Boundary oracle: given an interior point of the body and an arbitrary direction, return the upper and lower intersection points of the body along that direction.

Boundary oracles are known for linear constraints (Ax ≤ b), ellipsoidal constraints (x^T A x ≤ 1), and semidefinite constraints (A_0 + x_1 A_1 + ... + x_n A_n ⪰ 0) (Polyak et al. 2006).

Boundary oracles can also be constructed from membership oracles using bisection.
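
The bisection construction mentioned on this slide can be sketched as follows (a minimal illustration under my own naming; `boundary_oracle`, `t_max`, and `tol` are hypothetical, and the membership oracle is assumed to be a Python predicate on lists of coordinates):

```python
def boundary_oracle(membership, x, u, t_max=1e6, tol=1e-9):
    """Given a membership oracle for a convex body K, an interior point x,
    and a direction u, return (t_lo, t_hi) such that x + t*u lies in K
    exactly for t in [t_lo, t_hi] (up to tolerance), via bisection."""
    def extent(direction):
        lo, hi = 0.0, 1.0
        # Grow hi until x + hi*direction leaves K (K is assumed bounded).
        while membership([xi + hi * di for xi, di in zip(x, direction)]):
            lo, hi = hi, 2.0 * hi
            if hi > t_max:
                raise ValueError("body appears unbounded in this direction")
        # Bisect: lo is inside, hi is outside.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if membership([xi + mid * di for xi, di in zip(x, direction)]):
                lo = mid
            else:
                hi = mid
        return lo

    t_hi = extent(u)
    t_lo = -extent([-di for di in u])
    return t_lo, t_hi
```

Each call costs O(log(1/tol)) membership queries per direction, which is why a cheap direct boundary oracle (as for Ax ≤ b) is preferable when available.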

13 MCMC sampling
General objective: draw unbiased samples from a continuous target probability distribution that is given in black-box form (and most often only up to a normalizing constant).

15 MCMC sampling
Evaluate the density at the new sample and calculate the Metropolis criterion:
α(x^(g), x^(g+1)) = min( 1, f(x^(g+1)) / f(x^(g)) )
Draw a random number u from U(0,1) and accept x^(g+1) if u < α.

16 MCMC sampling
Markov chain methods iteratively sample from a fixed proposal distribution (often a Gaussian) and evaluate the target density there.
Random Walk Metropolis algorithm (Metropolis et al., 1953)
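
The Random Walk Metropolis step described above can be sketched in a few lines (a minimal illustration, not the slides' code; the name `rw_metropolis` and all parameters are my own):

```python
import math, random

def rw_metropolis(f, x0, steps, sigma=0.5, rng=random):
    """Random Walk Metropolis: propose y = x + N(0, sigma^2) per coordinate,
    accept with probability alpha = min(1, f(y)/f(x))."""
    x = list(x0)
    fx = f(x)
    samples = []
    for _ in range(steps):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        # Metropolis criterion; f only needs to be known up to a constant.
        if fx == 0.0 or rng.random() < min(1.0, fy / fx):
            x, fx = y, fy
        samples.append(list(x))
    return samples
```

Note that only ratios f(y)/f(x) appear, which is exactly why black-box densities known up to a normalizing constant suffice.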

18 MCMC and the Hit-and-Run sampler (R. L. Smith, 1984)
[Figure: hit-and-run inside a convex body K: pick a uniformly random direction through the current point, compute the chord of K in that direction, and move to a uniformly random point on the chord]

Lovász showed in 1999 that Hit-and-Run mixes in O*(n^2 R^2) steps for the uniform distribution. He also showed that this bound is best possible in terms of R and n.
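
Hit-and-Run can be sketched from a membership oracle alone (an illustrative implementation with my own names; the chord is found by bisection, as on slide 12):

```python
import math, random

def hit_and_run(membership, x0, steps, rng=random, tol=1e-9):
    """Hit-and-Run (R. L. Smith, 1984): uniform sampling from a convex body
    given only by a membership oracle.  Each step picks a uniformly random
    direction, finds the chord through the current point by bisection, and
    jumps to a uniformly random point on that chord."""
    x = list(x0)
    n = len(x)
    samples = []
    for _ in range(steps):
        # Uniformly random direction on the unit sphere.
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        nrm = math.sqrt(sum(di * di for di in d))
        d = [di / nrm for di in d]

        def extent(sign):
            # Largest t >= 0 with x + sign*t*d still inside the body.
            lo, hi = 0.0, 1.0
            while membership([xi + sign * hi * di for xi, di in zip(x, d)]):
                lo, hi = hi, 2.0 * hi
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if membership([xi + sign * mid * di for xi, di in zip(x, d)]):
                    lo = mid
                else:
                    hi = mid
            return lo

        # Uniform point on the chord through x in direction d.
        t = rng.uniform(-extent(-1.0), extent(1.0))
        x = [xi + t * di for xi, di in zip(x, d)]
        samples.append(list(x))
    return samples
```

Each step costs O(log(1/tol)) membership calls; the O*(n^2 R^2) bound counts such steps until the chain is close to uniform.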

26 Naive volume algorithm
Sample a set S of uniform random points from an enclosing ball B and count how many fall into K:
vol(K) ≈ (|S ∩ K| / |S|) · vol(B)
Needs a sample S of exponential size.
[Figure: random points scattered over B, only a few of which hit K]
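
The naive scheme can be sketched directly (illustrative only; `naive_volume` is my own name, and uniform sampling in the ball uses the standard Gaussian-direction trick):

```python
import math, random

def naive_volume(membership, radius, n, samples, rng=random):
    """Naive Monte Carlo: sample uniformly from the enclosing ball
    B(0, radius) and return (hits/samples) * vol(B)."""
    hits = 0
    for _ in range(samples):
        # Uniform point in the ball: Gaussian direction, radius ~ U^(1/n).
        g = [rng.gauss(0.0, 1.0) for _ in range(n)]
        nrm = math.sqrt(sum(c * c for c in g))
        r = radius * rng.random() ** (1.0 / n)
        p = [r * c / nrm for c in g]
        hits += membership(p)
    vol_ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1) * radius ** n
    return hits / samples * vol_ball
```

The catch is the hit probability vol(K)/vol(B): already for the unit ball inside its circumscribed ball of a cube it decays exponentially in n, so |S| must grow exponentially for a fixed relative error.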

27 Classical volume algorithm
B_i = 2^{i/n} B_0, i = 1, ..., m
K_0 = B_0, K_i = K ∩ B_i, K_m = K

vol(K) = (vol(K_m)/vol(K_{m-1})) · (vol(K_{m-1})/vol(K_{m-2})) · ... · (vol(K_1)/vol(K_0)) · vol(K_0)

29 Classical volume algorithm
It can be shown that the number of phases must be m = Ω(n) and that Ω(n) samples are needed per phase. The body must be sandwiched between the unit ball and an outer ball with radius R = O(√n); this rounding can be achieved in O*(n^4) steps. Using Hit-and-Run to generate the samples from the convex body then results in an O*(n^5) algorithm. This is best possible for this type of multi-phase Monte Carlo scheme.
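
The multi-phase scheme above can be sketched as follows. For clarity this illustration replaces Hit-and-Run by plain rejection sampling from the ball B_i, which is only workable in low dimension; all names are my own:

```python
import math, random

def classical_volume(membership, n, R, samples_per_phase=20000, rng=random):
    """Classical multi-phase scheme: B_i = 2^(i/n) B_0, K_i = K ∩ B_i,
    vol(K) = vol(K_0) * prod_i vol(K_i)/vol(K_{i-1}).
    Assumes B(0,1) ⊆ K ⊆ B(0,R).  Each ratio is estimated by sampling
    uniformly from K_i (here by rejection from B_i) and counting the
    fraction of points that also lie in K_{i-1}."""
    m = math.ceil(n * math.log2(R))          # smallest m with 2^(m/n) >= R
    vol = math.pi ** (n / 2) / math.gamma(n / 2 + 1)   # vol(K_0) = vol(B(0,1))
    for i in range(1, m + 1):
        r_i, r_prev = 2.0 ** (i / n), 2.0 ** ((i - 1) / n)
        inside_prev = total = 0
        while total < samples_per_phase:
            # Uniform point in B(0, r_i).
            g = [rng.gauss(0.0, 1.0) for _ in range(n)]
            nrm = math.sqrt(sum(c * c for c in g))
            r = r_i * rng.random() ** (1.0 / n)
            p = [r * c / nrm for c in g]
            if membership(p):                 # p is uniform in K_i = K ∩ B_i
                total += 1
                inside_prev += (r <= r_prev)  # p also lies in K_{i-1}
        vol /= inside_prev / total            # multiply by vol(K_i)/vol(K_{i-1})
    return vol
```

The point of the 2^{1/n} radius schedule is that every ratio vol(K_{i-1})/vol(K_i) is at least 1/2, so each factor can be estimated with small relative error from few samples; in high dimension the rejection step must of course be replaced by a geometric random walk such as Hit-and-Run.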

30 Simulated annealing
Simulated annealing was introduced by Kirkpatrick, Gelatt, and Vecchi in 1983 and by Černý in 1985. It is a general-purpose heuristic optimizer for discrete and continuous problems f(x). It is identical to the Metropolis algorithm, except that the acceptance ratio α is parametrized by a temperature T_i that is continuously lowered (annealed) during the optimization.
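
A minimal sketch of simulated annealing in the sense described above, i.e. the Metropolis acceptance rule with a decaying temperature (function name and the geometric cooling schedule are my own choices):

```python
import math, random

def simulated_annealing(f, x0, steps=5000, T0=2.0, cooling=0.999,
                        sigma=0.5, rng=random):
    """Minimize a 1-D function f: accept a worse point with probability
    exp(-(f(y)-f(x))/T); the temperature T is lowered every step."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        y = x + rng.gauss(0.0, sigma)
        fy = f(y)
        # Metropolis rule at temperature T (downhill moves always accepted).
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling   # annealing schedule
    return best, fbest
```

At high T the walk explores almost freely; as T → 0 the acceptance rule degenerates to pure descent, which is exactly the temperature-parametrized α mentioned on the slide.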

31 Log-concave functions
For log-concave functions f : R^n → R_+ the following relationship holds:
f(θx + (1 − θ)y) ≥ f(x)^θ f(y)^{1−θ}
Examples are the indicator function of a convex body and the Gaussian function. Hit-and-Run also mixes fast for log-concave densities.
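
The defining inequality can be spot-checked numerically, here for the Gaussian f(x) = exp(−‖x‖²) and, as a contrast, a bimodal mixture that is not log-concave (helper names are my own):

```python
import math, random

def is_log_concave_on_samples(f, dim, trials=1000, rng=random):
    """Spot-check f(t*x + (1-t)*y) >= f(x)**t * f(y)**(1-t)
    on random pairs of points; a small slack absorbs float noise."""
    for _ in range(trials):
        x = [rng.uniform(-2.0, 2.0) for _ in range(dim)]
        y = [rng.uniform(-2.0, 2.0) for _ in range(dim)]
        t = rng.random()
        z = [t * a + (1 - t) * b for a, b in zip(x, y)]
        if f(z) < f(x) ** t * f(y) ** (1 - t) - 1e-12:
            return False
    return True

# The Gaussian is log-concave (||x||^2 is convex)...
gaussian = lambda p: math.exp(-sum(c * c for c in p))
# ...while a well-separated mixture of two Gaussians is not.
bimodal = lambda p: math.exp(-p[0] ** 2) + math.exp(-(p[0] - 3.0) ** 2)
```

A random check of course only falsifies; it cannot prove log-concavity, but it makes the two examples on the slide concrete.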

32 Further reading
- Miklós Simonovits: How to compute the volume in high dimension? Math. Program., Ser. B 97 (2003). An introduction to and survey of the field: approximating the volume of a convex n-dimensional body given by an oracle is an area where randomized algorithms are provably more efficient than deterministic ones; the Dyer-Frieze-Kannan algorithm was improved in a sequence of papers.
- Santosh Vempala: Geometric Random Walks: A Survey. Combinatorial and Computational Geometry, MSRI Publications, Volume 52, 2005. Outlines the theory of geometric random walks: general methods for estimating the mixing rate, isoperimetric inequalities in R^n, and sampling-based algorithms for volume computation and convex optimization.

33 Talk overview
Part 1: Tools for volume computation
- Membership oracle and boundary oracles
- MCMC methods and the Hit-and-Run sampler
- Classical volume computation
- Simulated Annealing and log-concave functions

Part 2: The Lovász-Vempala algorithm
- Key algorithmic steps
- The pencil construction
- Multi-phase Monte Carlo integration
- Computational difficulties and solutions
- Numerical results for hypercubes

34 Key algorithmic steps
Preprocessing: bring the body into isotropic position, so that it is well-rounded (Rudelson, 1998).
We apply the so-called pencil construction in n+1 dimensions.
We define a sequence of log-concave densities over the pencil, inspired by inverse simulated annealing.
We generate random points from these densities using Hit-and-Run sampling.
We approximate the volume by a product of ratios of integrals of consecutive densities.

35 The pencil construction
C = { x ∈ R^{n+1} : x_0 ≥ 0, Σ_{i=1}^n x_i^2 ≤ x_0^2 }
K′ = C ∩ ( [0, 2R] × K )
[Figure: the pencil K′, the cone C truncated by the cylinder over K, with the x_0-axis running from 0 to 2R]
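
Given a membership oracle for K, a membership oracle for the pencil is a direct conjunction of the conditions above (a sketch under the cone definition as reconstructed here; `pencil_membership` is my own name):

```python
def pencil_membership(membership_K, R):
    """Build a membership oracle for the pencil
    K' = C ∩ ([0, 2R] × K),  C = {x0 >= 0, x1^2 + ... + xn^2 <= x0^2},
    from a membership oracle for K ⊆ B(0, R).  Points are lists
    (x0, x1, ..., xn) in R^(n+1)."""
    def inside(x):
        x0, rest = x[0], x[1:]
        in_cone = x0 >= 0.0 and sum(c * c for c in rest) <= x0 * x0
        in_cylinder = 0.0 <= x0 <= 2.0 * R and membership_K(rest)
        return in_cone and in_cylinder
    return inside
```

This is the oracle the Hit-and-Run sampler actually queries in the later phases: one extra dimension, two extra inequality checks.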

37 A parametrized integral over the pencil
Log-concave functions
f_i(x) = exp(−a_i x_0), i = 1, ..., m, for x = (x_0, ..., x_n) ∈ K′

a_i plays the role of an inverse temperature, a_i ~ 1/T_i, with a_0 ≥ a_1 ≥ ... ≥ a_m (inverse annealing):
a_0 ≈ 6n in the implementation (the theoretical algorithm below uses a_0 = 2n), with
a_i = a_0 (1 − 1/√n)^i, i = 1, ..., m

Parametrized integral
Z(a) = ∫_{K′} exp(−a x_0) dx

Endpoints:
Z(a_0) = ∫_{K′} f_0(x) dx ≈ ∫_C exp(−a_0 x_0) dx = n! π_n a_0^{−(n+1)}   (π_n = volume of the unit n-ball)
Z(a_m) = ∫_{K′} f_m(x) dx ≈ vol(K′)
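
The closed form ∫_C exp(−a x_0) dx = n! π_n a^{−(n+1)} can be checked by one-dimensional quadrature over cross-sections, since the cross-section of C at height t is an n-ball of radius t (illustrative code, my own names):

```python
import math

def cone_integral_quadrature(n, a, t_max=60.0, steps=200000):
    """Numerically integrate exp(-a*x0) over the cone
    C = {x0 >= 0, x1^2+...+xn^2 <= x0^2}: the cross-section at height t
    is an n-ball of radius t with volume pi_n * t^n (midpoint rule)."""
    pi_n = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    h = t_max / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += pi_n * t ** n * math.exp(-a * t) * h
    return total

def cone_integral_closed_form(n, a):
    """n! * pi_n * a^-(n+1), the value used for Z(a_0)."""
    pi_n = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    return math.factorial(n) * pi_n / a ** (n + 1)
```

The reduction to one dimension is just ∫_0^∞ π_n t^n e^{−at} dt = n! π_n a^{−(n+1)}, the Gamma integral.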

43 A parametrized integral over the pencil
f_i(x) = exp(−a_i x_0), i = 1, ..., m, x = (x_0, ..., x_n) ∈ K′
[Figure: the pencil over [0, 2R]; f_0 is concentrated near the tip at x_0 = 0, while f_m is nearly flat over the whole pencil]

47 Computing the pencil volume in each phase
Z(a) = ∫_{K′} exp(−a x_0) dx

Product of ratios
Z(a_m) = Z(a_0) · Π_{i=0}^{m−1} Z(a_{i+1}) / Z(a_i)

Monte Carlo approximation W_i of the ratio Z(a_{i+1})/Z(a_i):
W_i = (1/k) Σ_{j=1}^{k} exp( (a_i − a_{i+1}) x_0^{(j)} )
where the points x^{(j)} are drawn with density proportional to f_i, and k is the number of threads (sample points per phase).

A final acceptance/rejection step yields the volume V of K from the pencil volume V′.
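
The estimator W_i can be exercised in one dimension, where points with density proportional to exp(−a x_0) on an interval can be drawn exactly by inversion, so W_i can be compared against the exact ratio Z(a_{i+1})/Z(a_i) (a sketch with my own names):

```python
import math, random

def sample_truncated_exp(a, L, rng):
    """Inverse-CDF sample from the density ∝ exp(-a*x) on [0, L]."""
    u = rng.random()
    return -math.log(1.0 - u * (1.0 - math.exp(-a * L))) / a

def ratio_estimate(a_i, a_next, L, k, rng=random):
    """W_i = (1/k) * sum_j exp((a_i - a_next) * x0_j) with x0_j drawn
    with density ∝ exp(-a_i * x0); E[W_i] = Z(a_next)/Z(a_i)."""
    return sum(math.exp((a_i - a_next) * sample_truncated_exp(a_i, L, rng))
               for _ in range(k)) / k

def z_exact(a, L):
    """Z(a) = ∫_0^L exp(-a*x) dx in the 1-D toy model."""
    return (1.0 - math.exp(-a * L)) / a
```

The identity behind it: E_{f_i}[e^{(a_i − a_{i+1}) x_0}] = ∫ e^{−a_{i+1} x_0} dx / Z(a_i) = Z(a_{i+1})/Z(a_i). Because a_{i+1} is close to a_i, the integrand e^{(a_i − a_{i+1}) x_0} is nearly constant, which keeps the variance of W_i small.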

51 The algorithm in a nutshell
Volume algorithm:
V1. Set m = ⌈2√n ln(n/ε)⌉, k = ⌈(512/ε^2) √n ln(n/ε)⌉, δ = ε^2 n^{−10}, and a_i = 2n (1 − 1/√n)^i for i = 1, ..., m.
V2. For i = 1, ..., m, do the following. Run the sampler k times for the convex body T_i K′, with vector (a_i/γ_i, 0, ..., 0) (i.e., exponential function e^{−a_i x_0/γ_i}), error parameter δ, and (for i > 0) starting points T_i X^1_{i−1}, ..., T_i X^k_{i−1}. Apply T_i^{−1} to the resulting points to get points X^1_i, ..., X^k_i. Using these points, compute
W_i = (1/k) Σ_{j=1}^{k} e^{(a_i − a_{i+1}) (X^j_i)_0}.
V3. Return Z = n! π_n (2n)^{−(n+1)} W_1 ... W_m as the estimate of the volume of the pencil K′.
(The annotated version of this slide replaces the constant 2n by 6n, the value used in the computational paper.)
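
A quick sanity check of the cooling schedule as reconstructed above (assuming a_0 = 2n and the factor 1 − 1/√n; `lv_schedule` is my own name): m = O(√n log(n/ε)) phases drive a_m down to the order of ε²/n, while consecutive a_i stay within a constant factor, which is what keeps every W_i low-variance.

```python
import math

def lv_schedule(n, eps):
    """Reconstructed LV cooling schedule: a_i = 2n (1 - 1/sqrt(n))^i
    for i = 0..m, with m = ceil(2 sqrt(n) ln(n/eps)) phases."""
    m = math.ceil(2.0 * math.sqrt(n) * math.log(n / eps))
    a = [2.0 * n * (1.0 - 1.0 / math.sqrt(n)) ** i for i in range(m + 1)]
    return m, a
```

Compared with the classical scheme, the number of phases drops from Ω(n) to O*(√n), which (together with O*(√n) samples per phase and O*(n^3) Hit-and-Run steps per sample) is exactly where the O*(n^4) total comes from.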

53 Computational issues
The computational paper gives all details needed for an implementation; FORTRAN code is also available from the second author.
- The algorithm is numerically extremely unstable.
- There are large size differences between the first and the last volume.
- The ratios are initially large and then abruptly approach 1.
- The pre-factor in the dimension dependence of the sample size is not clear for Hit-and-Run (when does it reach stationarity?).
- The sample size in each phase is constant, but should perhaps be variable.

54 Computational enhancements
- Special treatment of the first integral using stratified sampling
- Extra sampling between phases to reach stationarity faster
- Double-point estimator as a variance-reduction technique in the integral computation: 2n+1 ortho-normalized points around a random point are used

55 Numerical results on hypercubes, n = 2 and n = 9
[Table 1: Estimated values of the ratios W_i, their standard deviations D(W_i), and partial errors d_i = Z_0 · Z · D(W_i)/W_i, for the example cubes C_2 (dimension n+1 = 3, sample size s = 100, Z = 2.27E+3) and C_9 (dimension n+1 = 10, sample size s = 1000, Z_0 = 0.56E−11, Z ≈ 4.0E+14). The sample size is s in each thread and in each phase; the number of threads is k_thr = 25.]

56 Numerical results on hypercubes (continued)
[Table 5: Results for three cube-pencils P_3, P_5, P_9 (cubes C_2, C_5, C_8), reporting the dimension n, the number of threads k_thr, the estimated volume V_e, the relative error r_V, the exact volume V, and the running time in seconds.]

57 Concluding remarks
- Theoretically, it might be possible to further bound the mixing time of Hit-and-Run.
- There is room for improvement in almost all aspects of the implementation.
- Sequential Monte Carlo is an alternative paradigm for the integral approximation.
- Parallelization is possible (independent threads).
I will try to implement the algorithm next year. Any help or suggestions are welcome...

58 Further reading

59 Thanks, and Merry Christmas and happy holidays!

60 Figures used in the presentation
I took the pencil construction image and the sandwiching-balls image from L. Lovász's 2004 slides on volume computation.


Practical Numerical Methods in Physics and Astronomy. Lecture 5 Optimisation and Search Techniques

Practical Numerical Methods in Physics and Astronomy. Lecture 5 Optimisation and Search Techniques Practical Numerical Methods in Physics and Astronomy Lecture 5 Optimisation and Search Techniques Pat Scott Department of Physics, McGill University January 30, 2013 Slides available from http://www.physics.mcgill.ca/

More information

The centroid body: algorithms and statistical estimation for heavy-tailed distributions

The centroid body: algorithms and statistical estimation for heavy-tailed distributions The centroid body: algorithms and statistical estimation for heavy-tailed distributions Luis Rademacher Athens GA, March 2016 Joint work with Joseph Anderson, Navin Goyal and Anupama Nandi Challenge. Covariance

More information

Approximating the volume of unions and intersections of high-dimensional geometric objects

Approximating the volume of unions and intersections of high-dimensional geometric objects Approximating the volume of unions and intersections of high-dimensional geometric objects Karl Bringmann 1 and Tobias Friedrich 2,3 1 Universität des Saarlandes, Saarbrücken, Germany 2 Max-Planck-Institut

More information

Bayesian Estimation with Sparse Grids

Bayesian Estimation with Sparse Grids Bayesian Estimation with Sparse Grids Kenneth L. Judd and Thomas M. Mertens Institute on Computational Economics August 7, 27 / 48 Outline Introduction 2 Sparse grids Construction Integration with sparse

More information

Multimodal Nested Sampling

Multimodal Nested Sampling Multimodal Nested Sampling Farhan Feroz Astrophysics Group, Cavendish Lab, Cambridge Inverse Problems & Cosmology Most obvious example: standard CMB data analysis pipeline But many others: object detection,

More information

Markov Chain Monte Carlo Methods for Stochastic Optimization

Markov Chain Monte Carlo Methods for Stochastic Optimization Markov Chain Monte Carlo Methods for Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U of Toronto, MIE,

More information

Convex Optimization CMU-10725

Convex Optimization CMU-10725 Convex Optimization CMU-10725 Simulated Annealing Barnabás Póczos & Ryan Tibshirani Andrey Markov Markov Chains 2 Markov Chains Markov chain: Homogen Markov chain: 3 Markov Chains Assume that the state

More information

A quick introduction to Markov chains and Markov chain Monte Carlo (revised version)

A quick introduction to Markov chains and Markov chain Monte Carlo (revised version) A quick introduction to Markov chains and Markov chain Monte Carlo (revised version) Rasmus Waagepetersen Institute of Mathematical Sciences Aalborg University 1 Introduction These notes are intended to

More information

CS675: Convex and Combinatorial Optimization Spring 2018 The Ellipsoid Algorithm. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Spring 2018 The Ellipsoid Algorithm. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Spring 2018 The Ellipsoid Algorithm Instructor: Shaddin Dughmi History and Basics Originally developed in the mid 70s by Iudin, Nemirovski, and Shor for use

More information

Asymptotics and Simulation of Heavy-Tailed Processes

Asymptotics and Simulation of Heavy-Tailed Processes Asymptotics and Simulation of Heavy-Tailed Processes Department of Mathematics Stockholm, Sweden Workshop on Heavy-tailed Distributions and Extreme Value Theory ISI Kolkata January 14-17, 2013 Outline

More information

Session 3A: Markov chain Monte Carlo (MCMC)

Session 3A: Markov chain Monte Carlo (MCMC) Session 3A: Markov chain Monte Carlo (MCMC) John Geweke Bayesian Econometrics and its Applications August 15, 2012 ohn Geweke Bayesian Econometrics and its Session Applications 3A: Markov () chain Monte

More information

ON THE COMPUTATIONAL COMPLEXITY OF MCMC-BASED ESTIMATORS IN LARGE SAMPLES. By Alexandre Belloni Duke University and IBM Watson Research Center and

ON THE COMPUTATIONAL COMPLEXITY OF MCMC-BASED ESTIMATORS IN LARGE SAMPLES. By Alexandre Belloni Duke University and IBM Watson Research Center and Submitted to the Annals of Statistics ON THE COMPUTATIONAL COMPLEXITY OF MCMC-BASED ESTIMATORS IN LARGE SAMPLES By Alexandre Belloni Duke University and IBM Watson Research Center and By Victor Chernozhukov

More information

CSC Linear Programming and Combinatorial Optimization Lecture 8: Ellipsoid Algorithm

CSC Linear Programming and Combinatorial Optimization Lecture 8: Ellipsoid Algorithm CSC2411 - Linear Programming and Combinatorial Optimization Lecture 8: Ellipsoid Algorithm Notes taken by Shizhong Li March 15, 2005 Summary: In the spring of 1979, the Soviet mathematician L.G.Khachian

More information

Hastings-within-Gibbs Algorithm: Introduction and Application on Hierarchical Model

Hastings-within-Gibbs Algorithm: Introduction and Application on Hierarchical Model UNIVERSITY OF TEXAS AT SAN ANTONIO Hastings-within-Gibbs Algorithm: Introduction and Application on Hierarchical Model Liang Jing April 2010 1 1 ABSTRACT In this paper, common MCMC algorithms are introduced

More information

16 : Markov Chain Monte Carlo (MCMC)

16 : Markov Chain Monte Carlo (MCMC) 10-708: Probabilistic Graphical Models 10-708, Spring 2014 16 : Markov Chain Monte Carlo MCMC Lecturer: Matthew Gormley Scribes: Yining Wang, Renato Negrinho 1 Sampling from low-dimensional distributions

More information

Applicability of Quasi-Monte Carlo for lattice systems

Applicability of Quasi-Monte Carlo for lattice systems Applicability of Quasi-Monte Carlo for lattice systems Andreas Ammon 1,2, Tobias Hartung 1,2,Karl Jansen 2, Hernan Leovey 3, Andreas Griewank 3, Michael Müller-Preussker 1 1 Humboldt-University Berlin,

More information

Markov Chain Monte Carlo Methods for Stochastic

Markov Chain Monte Carlo Methods for Stochastic Markov Chain Monte Carlo Methods for Stochastic Optimization i John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U Florida, Nov 2013

More information

Numerical Methods with Lévy Processes

Numerical Methods with Lévy Processes Numerical Methods with Lévy Processes 1 Objective: i) Find models of asset returns, etc ii) Get numbers out of them. Why? VaR and risk management Valuing and hedging derivatives Why not? Usual assumption:

More information

Convex inequalities, isoperimetry and spectral gap III

Convex inequalities, isoperimetry and spectral gap III Convex inequalities, isoperimetry and spectral gap III Jesús Bastero (Universidad de Zaragoza) CIDAMA Antequera, September 11, 2014 Part III. K-L-S spectral gap conjecture KLS estimate, through Milman's

More information

SELECTIVELY BALANCING UNIT VECTORS AART BLOKHUIS AND HAO CHEN

SELECTIVELY BALANCING UNIT VECTORS AART BLOKHUIS AND HAO CHEN SELECTIVELY BALANCING UNIT VECTORS AART BLOKHUIS AND HAO CHEN Abstract. A set U of unit vectors is selectively balancing if one can find two disjoint subsets U + and U, not both empty, such that the Euclidean

More information

Monte Carlo Methods. Leon Gu CSD, CMU

Monte Carlo Methods. Leon Gu CSD, CMU Monte Carlo Methods Leon Gu CSD, CMU Approximate Inference EM: y-observed variables; x-hidden variables; θ-parameters; E-step: q(x) = p(x y, θ t 1 ) M-step: θ t = arg max E q(x) [log p(y, x θ)] θ Monte

More information

Markov Chain Monte Carlo Using the Ratio-of-Uniforms Transformation. Luke Tierney Department of Statistics & Actuarial Science University of Iowa

Markov Chain Monte Carlo Using the Ratio-of-Uniforms Transformation. Luke Tierney Department of Statistics & Actuarial Science University of Iowa Markov Chain Monte Carlo Using the Ratio-of-Uniforms Transformation Luke Tierney Department of Statistics & Actuarial Science University of Iowa Basic Ratio of Uniforms Method Introduced by Kinderman and

More information

MCMC Sampling for Bayesian Inference using L1-type Priors

MCMC Sampling for Bayesian Inference using L1-type Priors MÜNSTER MCMC Sampling for Bayesian Inference using L1-type Priors (what I do whenever the ill-posedness of EEG/MEG is just not frustrating enough!) AG Imaging Seminar Felix Lucka 26.06.2012 , MÜNSTER Sampling

More information

The Geometry of Logconcave Functions and Sampling Algorithms

The Geometry of Logconcave Functions and Sampling Algorithms The Geometry of Logconcave Functions and Sampling Algorithms László Lovász Santosh Vempala April 23, revised March 25 Contents Introduction 2 2 Results 3 2. Preliminaries............................. 3

More information

Random Matrices: Invertibility, Structure, and Applications

Random Matrices: Invertibility, Structure, and Applications Random Matrices: Invertibility, Structure, and Applications Roman Vershynin University of Michigan Colloquium, October 11, 2011 Roman Vershynin (University of Michigan) Random Matrices Colloquium 1 / 37

More information

Introduction to Machine Learning CMU-10701

Introduction to Machine Learning CMU-10701 Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos Contents Markov Chain Monte Carlo Methods Sampling Rejection Importance Hastings-Metropolis Gibbs Markov Chains

More information

2 2 + x =

2 2 + x = Lecture 30: Power series A Power Series is a series of the form c n = c 0 + c 1 x + c x + c 3 x 3 +... where x is a variable, the c n s are constants called the coefficients of the series. n = 1 + x +

More information

Simple Techniques for Improving SGD. CS6787 Lecture 2 Fall 2017

Simple Techniques for Improving SGD. CS6787 Lecture 2 Fall 2017 Simple Techniques for Improving SGD CS6787 Lecture 2 Fall 2017 Step Sizes and Convergence Where we left off Stochastic gradient descent x t+1 = x t rf(x t ; yĩt ) Much faster per iteration than gradient

More information

The Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel

The Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel The Bias-Variance dilemma of the Monte Carlo method Zlochin Mark 1 and Yoram Baram 1 Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel fzmark,baramg@cs.technion.ac.il Abstract.

More information

Lecture 2: Ellipsoid Method and Reductions Between Convex Oracles

Lecture 2: Ellipsoid Method and Reductions Between Convex Oracles CSE 599: Interplay between Convex Optimization and Geometry Winter 2018 Lecture 2: Ellipsoid Method and Reductions Between Convex Oracles Lecturer: Yin Tat Lee Disclaimer: Please tell me any mistake you

More information

Today: Fundamentals of Monte Carlo

Today: Fundamentals of Monte Carlo Today: Fundamentals of Monte Carlo What is Monte Carlo? Named at Los Alamos in 940 s after the casino. Any method which uses (pseudo)random numbers as an essential part of the algorithm. Stochastic - not

More information

Approximating Volume ---Randomized vs. Deterministic

Approximating Volume ---Randomized vs. Deterministic JHSDM2017@Budapest May 22, 2017 Approximating Volume ---Randomized vs. Deterministic Fukuoka ( 福岡 ) Shuji Kijima 1,2 1. Kyushu Univ., Fukuoka, Japan 2. JST PRESTO, Fukuoka, Japan This talk was/is partly

More information

Math Review for Exam Answer each of the following questions as either True or False. Circle the correct answer.

Math Review for Exam Answer each of the following questions as either True or False. Circle the correct answer. Math 22 - Review for Exam 3. Answer each of the following questions as either True or False. Circle the correct answer. (a) True/False: If a n > 0 and a n 0, the series a n converges. Soln: False: Let

More information

New Conjectures in the Geometry of Numbers

New Conjectures in the Geometry of Numbers New Conjectures in the Geometry of Numbers Daniel Dadush Centrum Wiskunde & Informatica (CWI) Oded Regev New York University Talk Outline 1. A Reverse Minkowski Inequality & its conjectured Strengthening.

More information

Integer Programming ISE 418. Lecture 12. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 12. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 12 Dr. Ted Ralphs ISE 418 Lecture 12 1 Reading for This Lecture Nemhauser and Wolsey Sections II.2.1 Wolsey Chapter 9 ISE 418 Lecture 12 2 Generating Stronger Valid

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2018

Cheng Soon Ong & Christian Walder. Canberra February June 2018 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 Outlines Overview Introduction Linear Algebra Probability Linear Regression

More information

Lecture 3: Semidefinite Programming

Lecture 3: Semidefinite Programming Lecture 3: Semidefinite Programming Lecture Outline Part I: Semidefinite programming, examples, canonical form, and duality Part II: Strong Duality Failure Examples Part III: Conditions for strong duality

More information

PROJECTIVE PRE-CONDITIONERS FOR IMPROVING THE BEHAVIOR OF A HOMOGENEOUS CONIC LINEAR SYSTEM

PROJECTIVE PRE-CONDITIONERS FOR IMPROVING THE BEHAVIOR OF A HOMOGENEOUS CONIC LINEAR SYSTEM Mathematical Programming manuscript No. (will be inserted by the editor) Alexandre Belloni Robert M. Freund PROJECTIVE PRE-CONDITIONERS FOR IMPROVING THE BEHAVIOR OF A HOMOGENEOUS CONIC LINEAR SYSTEM Draft,

More information

Chapter 12 PAWL-Forced Simulated Tempering

Chapter 12 PAWL-Forced Simulated Tempering Chapter 12 PAWL-Forced Simulated Tempering Luke Bornn Abstract In this short note, we show how the parallel adaptive Wang Landau (PAWL) algorithm of Bornn et al. (J Comput Graph Stat, to appear) can be

More information

Today: Fundamentals of Monte Carlo

Today: Fundamentals of Monte Carlo Today: Fundamentals of Monte Carlo What is Monte Carlo? Named at Los Alamos in 1940 s after the casino. Any method which uses (pseudo)random numbers as an essential part of the algorithm. Stochastic -

More information

MCMC and Gibbs Sampling. Kayhan Batmanghelich

MCMC and Gibbs Sampling. Kayhan Batmanghelich MCMC and Gibbs Sampling Kayhan Batmanghelich 1 Approaches to inference l Exact inference algorithms l l l The elimination algorithm Message-passing algorithm (sum-product, belief propagation) The junction

More information

Online convex optimization in the bandit setting: gradient descent without a gradient

Online convex optimization in the bandit setting: gradient descent without a gradient Online convex optimization in the bandit setting: gradient descent without a gradient Abraham D. Flaxman Adam Tauman Kalai H. Brendan McMahan November 30, 2004 Abstract We study a general online convex

More information

Approximating a single component of the solution to a linear system

Approximating a single component of the solution to a linear system Approximating a single component of the solution to a linear system Christina E. Lee, Asuman Ozdaglar, Devavrat Shah celee@mit.edu asuman@mit.edu devavrat@mit.edu MIT LIDS 1 How do I compare to my competitors?

More information

Computer Vision Group Prof. Daniel Cremers. 11. Sampling Methods: Markov Chain Monte Carlo

Computer Vision Group Prof. Daniel Cremers. 11. Sampling Methods: Markov Chain Monte Carlo Group Prof. Daniel Cremers 11. Sampling Methods: Markov Chain Monte Carlo Markov Chain Monte Carlo In high-dimensional spaces, rejection sampling and importance sampling are very inefficient An alternative

More information

Math Camp II. Calculus. Yiqing Xu. August 27, 2014 MIT

Math Camp II. Calculus. Yiqing Xu. August 27, 2014 MIT Math Camp II Calculus Yiqing Xu MIT August 27, 2014 1 Sequence and Limit 2 Derivatives 3 OLS Asymptotics 4 Integrals Sequence Definition A sequence {y n } = {y 1, y 2, y 3,..., y n } is an ordered set

More information

Tutorial: PART 2. Online Convex Optimization, A Game- Theoretic Approach to Learning

Tutorial: PART 2. Online Convex Optimization, A Game- Theoretic Approach to Learning Tutorial: PART 2 Online Convex Optimization, A Game- Theoretic Approach to Learning Elad Hazan Princeton University Satyen Kale Yahoo Research Exploiting curvature: logarithmic regret Logarithmic regret

More information

satisfying ( i ; j ) = ij Here ij = if i = j and 0 otherwise The idea to use lattices is the following Suppose we are given a lattice L and a point ~x

satisfying ( i ; j ) = ij Here ij = if i = j and 0 otherwise The idea to use lattices is the following Suppose we are given a lattice L and a point ~x Dual Vectors and Lower Bounds for the Nearest Lattice Point Problem Johan Hastad* MIT Abstract: We prove that given a point ~z outside a given lattice L then there is a dual vector which gives a fairly

More information

Markov chain Monte Carlo methods for visual tracking

Markov chain Monte Carlo methods for visual tracking Markov chain Monte Carlo methods for visual tracking Ray Luo rluo@cory.eecs.berkeley.edu Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720

More information

Some Sieving Algorithms for Lattice Problems

Some Sieving Algorithms for Lattice Problems Foundations of Software Technology and Theoretical Computer Science (Bangalore) 2008. Editors: R. Hariharan, M. Mukund, V. Vinay; pp - Some Sieving Algorithms for Lattice Problems V. Arvind and Pushkar

More information

An introduction to Bayesian statistics and model calibration and a host of related topics

An introduction to Bayesian statistics and model calibration and a host of related topics An introduction to Bayesian statistics and model calibration and a host of related topics Derek Bingham Statistics and Actuarial Science Simon Fraser University Cast of thousands have participated in the

More information