Sequential optimization and application to the design of experiments
1 Sequential optimization and application to the design of experiments. Nicolas Vayatis. Séminaire Aristote, École Polytechnique, 23 October 2014
2 Joint work with Emile Contal (computer scientist, PhD student) and: David Buffoni, computer scientist, postdoctoral researcher; Vianney Perchet, mathematician, maître de conférences, Université Paris-Diderot; Alexandre Robicquet, undergraduate student in applied mathematics; Themistoklis Stefanakis, civil engineer, PhD in fluid mechanics at CMLA; and tsunami experts: Frédéric Dias, professor, School of Mathematical Sciences, University College Dublin; Costas Synolakis, professor, Department of Civil and Environmental Engineering, University of California San Diego
3 Tsunamis, Amplification Phenomena. Numerical simulations of tsunami amplification generated by a conical island
4 Setup: sequential and batch-sequential optimization Gaussian Process setup Two novel algorithms for sequential optimization Regret bounds Numerical experiments
5 Problem statement. Optimization of an unknown function: parameter space $\mathcal X \subset \mathbb R^d$, compact and convex; unknown objective function $f(x) \in \mathbb R$ for all $x \in \mathcal X$; noisy measurements $y = f(x) + \epsilon$, where $\epsilon \overset{\mathrm{iid}}{\sim} \mathcal N(0, \eta^2)$; find the parameter vector $x^\star$ maximizing $f$. Sequential setup and performance metric: queries $x_1, x_2, \dots$ and feedback $y_1, y_2, \dots$; goal: minimize the cumulative regret after $T$ iterations: $R_T = \sum_{t=1}^{T} \big(f(x^\star) - f(x_t)\big)$
6 Problem statement. Optimization of an unknown function: parameter space $\mathcal X \subset \mathbb R^d$, compact and convex; unknown objective function $f(x) \in \mathbb R$ for all $x \in \mathcal X$; noisy measurements $y = f(x) + \epsilon$, where $\epsilon \overset{\mathrm{iid}}{\sim} \mathcal N(0, \eta^2)$; find the parameter vector $x^\star$ maximizing $f$. Batch-sequential setup and performance metric: batch queries $\{x_t^1, \dots, x_t^K\}$ and batch feedback $\{y_t^1, \dots, y_t^K\}$ at each $t$; goal: minimize the cumulative regret after $T$ iterations: $R_T^K = \sum_{t=1}^{T} \big(f(x^\star) - \max_{1 \le k \le K} f(x_t^k)\big)$
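To make the performance metric concrete, here is a minimal sketch in Python/NumPy of both regret definitions (the function names are ours, not from the talk):

```python
import numpy as np

def cumulative_regret(f_star, f_queries):
    """Sequential regret: R_T = sum_t (f(x*) - f(x_t))."""
    return float(np.sum(f_star - np.asarray(f_queries)))

def batch_cumulative_regret(f_star, f_batches):
    """Batch regret: R_T^K = sum_t (f(x*) - max_k f(x_t^k)).

    f_batches: iterable of length-K sequences of objective values, one per batch.
    """
    return float(sum(f_star - max(batch) for batch in f_batches))
```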
7 Constraints. Challenges: large number of parameters; high level of noise; expensive evaluations; coping with non-concave functions (exploration vs. exploitation). Example: tsunamis. 5 parameters; each simulation takes 2 hours of computation; a regular grid with 10 values per parameter needs $10^5$ points; a naive approach would therefore take $10^5 \times 2\,\mathrm{h} = 2 \times 10^5$ hours, about 23 years of computation
8 Sequential Optimization [figure: four noisy observations $(x_1, y_1), \dots, (x_4, y_4)$ of a 1-D objective; where should the next query $x_5$ go?]
9 Sequential Optimization [same figure, next animation step]
10 Batch-Sequential Optimization [figure: same observations; a batch of three queries $x_5^1, x_5^2, x_5^3$ is chosen at once]
11 Main Approaches to Query Selection Experimental design [Fedorov, 1972]... Bayesian optimization (BO) [Moore and Schneider, 1995][Srinivas et al., 2010]... Active learning [Carpentier et al., 2011] [Chen and Krause, 2013] Multiarmed bandits [Auer, 2002] [Audibert et al., 2011]...
12 Classical Strategies for Query Selection in BO Maximum Mean (MM) or PMAX [Moore and Schneider, 1995] Maximum Upper Interval (MUI) or IEMAX [Moore and Schneider, 1995] Maximum Probability of Improvement (MPI) [Mockus, 1989] Maximum Expected Improvement (MEI) [Jones et al., 1998] [Locatelli, 1997] Gaussian Process Upper Confidence Bound (GP-UCB) [Cox and John, 1997] [Auer, 2002], [Srinivas et al., 2010], [Desautels et al., 2012]
13 Setup: sequential and batch-sequential optimization Gaussian Process setup Two novel algorithms for sequential optimization Regret bounds Numerical experiments
14 Gaussian Processes Framework. Definition: $f \sim \mathcal{GP}(m, k)$, with mean function $m : \mathcal X \to \mathbb R$ and covariance function $k : \mathcal X \times \mathcal X \to \mathbb R^+$, when for all $x_1, \dots, x_n$, $\big(f(x_1), \dots, f(x_n)\big) \sim \mathcal N(\mu, C)$, with $\mu[x_i] = m(x_i)$ and $C[x_i, x_j] = k(x_i, x_j)$. Probabilistic smoothness assumption: nearby locations are highly correlated; large local variations have low probability
15 Typical Kernels. Polynomial with degree $\alpha \in \mathbb N$ and $c \in \mathbb R$: $\forall x_1, x_2,\ k(x_1, x_2) = (x_1^\top x_2 + c)^\alpha$. Radial Basis Function (RBF) with length-scale parameter $b > 0$: $\forall x_1, x_2,\ k(x_1, x_2) = \exp\big(-\frac{\|x_1 - x_2\|^2}{2b^2}\big)$. Matérn with length-scale $b > 0$ and order $\nu$: $\forall x_1, x_2,\ k(x_1, x_2) = \frac{2^{1-\nu}}{\Gamma(\nu)}\,\Phi_\nu\big(\frac{\sqrt{2\nu}\,\|x_1 - x_2\|}{b}\big)$, where $\Phi_\nu(z) = z^\nu K_\nu(z)$ and $K_\nu$ is a modified Bessel function of the second kind of order $\nu$
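As an illustration, a minimal Python/NumPy sketch of the RBF and Matérn kernels above (relying on scipy for the Gamma and modified Bessel functions; the function names are ours):

```python
import numpy as np
from scipy.special import gamma, kv   # kv: modified Bessel function of the second kind

def rbf_kernel(x1, x2, b=1.0):
    """RBF kernel with length-scale b."""
    r2 = np.sum((np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)) ** 2)
    return float(np.exp(-r2 / (2.0 * b ** 2)))

def matern_kernel(x1, x2, b=1.0, nu=2.5):
    """Matern kernel with length-scale b and order nu."""
    r = np.linalg.norm(np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float))
    if r == 0.0:
        return 1.0                                  # limit of the expression as r -> 0
    z = np.sqrt(2.0 * nu) * r / b
    return float(2.0 ** (1.0 - nu) / gamma(nu) * z ** nu * kv(nu, z))
```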
16 Gaussian Processes Examples 1D Gaussian Processes with different covariance functions
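Such figures can be reproduced by drawing sample paths from a zero-mean GP prior; a sketch, assuming one of the kernel functions defined above:

```python
import numpy as np

def sample_gp_prior(xs, k, n_samples=3, jitter=1e-8, seed=0):
    """Draw sample paths of a zero-mean GP with kernel k on a 1-D grid xs."""
    rng = np.random.default_rng(seed)
    K = np.array([[k(a, b) for b in xs] for a in xs])
    K += jitter * np.eye(len(xs))                 # stabilize the Cholesky factorization
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal((len(xs), n_samples))   # one sample path per column

# e.g. paths = sample_gp_prior(np.linspace(0, 1, 200), rbf_kernel)
```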
17 Gaussian Process Interpolation. Bayesian inference [Rasmussen and Williams, 2006]: at iteration $t$, with observations $Y_t$ at the query points $X_t$, the posterior mean and variance at any point $x$ of the search space are given by $\mu_t(x) = k_t(x)^\top C_t^{-1} Y_t$ (1) and $\sigma_t^2(x) = k(x, x) - k_t(x)^\top C_t^{-1} k_t(x)$ (2), where $C_t = K_t + \eta^2 I$, $k_t(x) = [k(x_\tau, x)]_{1 \le \tau \le t}$ and $K_t = [k(x_\tau, x_{\tau'})]_{1 \le \tau, \tau' \le t}$. Interpretation: the posterior mean $\mu_t$ is the prediction; the posterior variance $\sigma_t^2$ is the uncertainty
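A direct transcription of equations (1)–(2) in Python/NumPy (a sketch: production code would factor $C_t$ once with a Cholesky decomposition rather than calling a dense solve per prediction):

```python
import numpy as np

def gp_posterior(X_t, Y_t, x, k, eta=0.1):
    """Posterior mean and variance at x given noisy observations Y_t at X_t.

    k is the covariance function; eta is the noise standard deviation.
    """
    K_t = np.array([[k(xi, xj) for xj in X_t] for xi in X_t])
    C_t = K_t + eta ** 2 * np.eye(len(X_t))              # C_t = K_t + eta^2 I
    k_t = np.array([k(xi, x) for xi in X_t])             # k_t(x)
    mu = k_t @ np.linalg.solve(C_t, np.asarray(Y_t))     # equation (1)
    var = k(x, x) - k_t @ np.linalg.solve(C_t, k_t)      # equation (2)
    return float(mu), float(var)
```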
18 Example: Bayesian inference with 4 observations
19 Mutual Information, An Important Ingredient. Information gain: the information gain on $f$ at $X_T$ is the mutual information between $f$ and $Y_T$. For a GP distribution, with $K_T$ the kernel matrix of $X_T$: $I_T(X_T) = \frac{1}{2} \log\det(I + \eta^{-2} K_T)$. We define $\gamma_T = \max_{|X| = T} I_T(X)$, the maximum information gain achievable by a sequence of $T$ query points. Empirical lower bound [Srinivas et al., 2012]: for GPs with bounded variance, $\widehat\gamma_T = \sum_{t=1}^{T} \sigma_t^2(x_t) \le C\,\gamma_T$, where $C = 2/\log(1 + \eta^{-2})$
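Both quantities are cheap to evaluate for a given design; a sketch (using a log-determinant routine for numerical stability; names are ours):

```python
import numpy as np

def information_gain(K_T, eta=0.1):
    """I_T(X_T) = (1/2) log det(I + eta^{-2} K_T) for the kernel matrix K_T."""
    T = K_T.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(T) + K_T / eta ** 2)
    return 0.5 * logdet

def empirical_gain(posterior_vars):
    """gamma_hat_T = sum_t sigma_t^2(x_t), the empirical surrogate bounded by C * gamma_T."""
    return float(np.sum(posterior_vars))
```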
20 Mutual Information Examples. The parameter $\gamma_T$ is the maximum mutual information about $f$ obtainable from a sequence of $T$ queries. Linear kernel: $\gamma_T = O(d \log T)$. RBF kernel: $\gamma_T = O\big((\log T)^{d+1}\big)$. Matérn kernel: $\gamma_T = O(T^\alpha \log T)$, where $\alpha = \frac{d(d+1)}{2\nu + d(d+1)} \le 1$
21 Setup: sequential and batch-sequential optimization Gaussian Process setup Two novel algorithms for sequential optimization Regret bounds Numerical experiments
22 Upper and Lower Confidence Bounds. Definition: fix $0 < \delta < 1$ and consider the upper/lower confidence bounds on $f$ defined by $f_t^+(x) = \mu_t(x) + \sqrt{\beta_t\,\sigma_t^2(x)}$ and $f_t^-(x) = \mu_t(x) - \sqrt{\beta_t\,\sigma_t^2(x)}$. Property [Srinivas et al., 2012]: with the choice $\beta_t(\delta) = O\big(\log(t/\delta)\big)$, we have $\forall x \in \mathcal X,\ \forall t \ge 1,\ f(x) \in \big[f_t^-(x), f_t^+(x)\big]$, with probability at least $1 - \delta$
23 Relevant Region $\mathcal R_t$. Definition: let $y_t^\bullet = \max_{x \in \mathcal X} f_t^-(x)$; the relevant region is $\mathcal R_t = \big\{x \in \mathcal X \mid f_t^+(x) \ge y_t^\bullet\big\}$. Property: $x^\star \in \mathcal R_t$ with probability at least $1 - \delta$
24 Relevant Region R t
25 Upper Confidence Bound and Pure Exploration. UCB policy ($k = 1$): achieves the trade-off between exploitation and exploration ($\mu_t$ vs. $\sigma_t^2$): $x_{t+1}^1 \in \arg\max_{x \in \mathcal R_t^+} f_t^+(x)$, where $\mathcal R_t^+ = \big\{x \in \mathcal X \mid \mu_t(x) + 2\sqrt{\beta_t\,\sigma_t^2(x)} \ge y_t^\bullet\big\}$ is an enlarged relevant region. PE policy ($k = 2, \dots, K$): selects the most uncertain points inside the relevant region: $x_{t+1}^k \in \arg\max_{x \in \mathcal R_t^+} \sigma_t^{(k)}(x)$ for $2 \le k \le K$, where $\sigma_t^{(k)}(x)$ is the uncertainty updated as if $x_{t+1}^1, \dots, x_{t+1}^{k-1}$ had been observed
26 Algorithm 1: GP-UCB-PE
$\beta_t$ slowly increasing
for $t = 1, 2, \dots$ do
  Compute $\mu_t$ and $\sigma_t^2$ by Bayesian inference on $y_1^1, \dots, y_{t-1}^K$
  Compute $\mathcal R_t^+$
  $x_{t+1}^1 \leftarrow \arg\max_{x \in \mathcal R_t^+} f_t^+(x)$
  for $k = 2, \dots, K$ do
    Update $\sigma_t^{(k)}$
    $x_{t+1}^k \leftarrow \arg\max_{x \in \mathcal R_t^+} \sigma_t^{(k)}(x)$
  Query $x_{t+1}^1, \dots, x_{t+1}^K$; observe $y_{t+1}^1, \dots, y_{t+1}^K$
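A minimal sketch of one GP-UCB-PE batch on a finite candidate grid (our simplifications: fixed $\beta$, grid search instead of continuous optimization; `mu` and `cov` are the current posterior mean vector and covariance matrix on the grid):

```python
import numpy as np

def ucb_pe_batch(mu, cov, K=3, beta=2.0, eta=0.1):
    """Select a batch of K grid indices with the UCB + Pure Exploration rule."""
    var = np.diag(cov).copy()
    f_plus = mu + np.sqrt(beta * var)
    y_low = np.max(mu - np.sqrt(beta * var))            # y_t = max_x f_t^-(x)
    region = mu + 2.0 * np.sqrt(beta * var) >= y_low    # enlarged relevant region R_t^+
    batch = [int(np.argmax(np.where(region, f_plus, -np.inf)))]   # UCB point, k = 1
    for _ in range(1, K):                               # pure-exploration points, k = 2..K
        b = np.array(batch)
        C = cov[np.ix_(b, b)] + eta ** 2 * np.eye(len(b))
        k_xb = cov[:, b]                                # posterior cross-covariances
        # variance updated as if the batch points had already been observed
        var_upd = var - np.einsum('ij,ij->i', k_xb, np.linalg.solve(C, k_xb.T).T)
        batch.append(int(np.argmax(np.where(region, var_upd, -np.inf))))
    return batch
```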
27 The GP-UCB-PE algorithm [Contal et al., 2013] [figure: illustration of one iteration]
28 The GP-UCB-PE algorithm [Contal et al., 2013] [figure: next animation step]
29 GP-MI, A Novel Algorithm for Sequential Optimization
Algorithm 2: GP-MI
$\widehat\gamma_0 \leftarrow 0$, $\alpha$ fixed
for $t = 1, 2, \dots$ do
  Compute $\mu_t$ and $\sigma_t^2$ using Bayesian inference
  $\phi_t(x) \leftarrow \sqrt{\alpha}\big(\sqrt{\sigma_t^2(x) + \widehat\gamma_{t-1}} - \sqrt{\widehat\gamma_{t-1}}\big)$
  $x_t \leftarrow \arg\max_{x \in \mathcal X} \mu_t(x) + \phi_t(x)$
  $\widehat\gamma_t \leftarrow \widehat\gamma_{t-1} + \sigma_t^2(x_t)$
  Query at $x_t$ and observe $y_t$
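A sketch of one GP-MI step on a finite candidate grid (our simplification: grid search instead of continuous optimization; setting $\alpha$ from a confidence level, e.g. $\alpha = \log(2/\delta)$, is an illustrative choice):

```python
import numpy as np

def gp_mi_step(mu, var, gamma_hat, alpha=np.log(2 / 0.05)):
    """One GP-MI query: mu, var are the posterior mean and variance on the grid;
    gamma_hat accumulates the posterior variances at past queries."""
    phi = np.sqrt(alpha) * (np.sqrt(var + gamma_hat) - np.sqrt(gamma_hat))
    i = int(np.argmax(mu + phi))         # next query index
    return i, gamma_hat + var[i]         # updated empirical information gain
```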
30 Setup: sequential and batch-sequential optimization Gaussian Process setup Two novel algorithms for sequential optimization Regret bounds Numerical experiments
31 Regret bound on GP-UCB-PE. General result: consider $f \sim \mathcal{GP}(0, k)$ with $k(x, x) \le 1$ for all $x$; then, with probability at least $1 - \delta$: $R_T^K = O\big(\sqrt{\tfrac{T}{K}\,\gamma_{TK}\,\log T}\big)$. Specialized results. Linear kernel: $R_T^K = O\big(\sqrt{\log(TK)\,dT/K}\big)$. RBF kernel: $R_T^K = O\big(\sqrt{(T/K)\,(\log(TK))^{d+2}}\big)$. Matérn kernel: $R_T^K = O\big(\sqrt{\log(TK)\,T^{\alpha+1} K^{\alpha-1}}\big)$
32 Two Competitors for Batch Strategies. GP-BUCB = GP Batch UCB [Desautels et al., 2012]: batch estimation based on updates $\mu_t^k(x)$ of $\mu_t(x)$; regret bound with the RBF kernel, due to initialization: $O\big(\exp\big((2d/e)^d\big)\sqrt{(T/K)\log(TK)}\big)$. SM-UCB = Simulation Matching with UCB [Azimi et al., 2010]: selects a batch of points that matches the expected behavior of the sequential policy, based on a greedy K-medoid algorithm to screen out irrelevant data points; no regret bound available
33 Regret bound for GP-MI. General result: consider $f \sim \mathcal{GP}(0, k)$ with $k(x, x) \le 1$ for all $x$; then, with probability at least $1 - \delta$: $R_T \le 5\sqrt{C\,\gamma_T\,\log\tfrac{2}{\delta}} + 4\sqrt{\log\tfrac{2}{\delta}}$, where $C = 2/\log(1 + \eta^{-2})$. Specialized results. Linear kernel: $R_T = O\big(\sqrt{d\log T}\big)$. RBF kernel: $R_T = O\big(\sqrt{(\log T)^{d+1}}\big)$. Matérn kernel: $R_T = O\big(\sqrt{T^\alpha \log T}\big)$
34 Setup: sequential and batch-sequential optimization Gaussian Process setup Two novel algorithms for sequential optimization Regret bounds Numerical experiments
35 Experiments. Competitors for the batch-sequential setting: GP-BUCB and SM-UCB. Competitors for the sequential setting: GP-UCB and GP-EI. Assessment: synthetic problems and real-data benchmarks. [figures: (a) Himmelblau's function, (b) Gaussian mixture]
36 Numerical results for the batch-sequential strategy GP-UCB-PE. [figures: regret $r_t^K$ vs. iteration $t$ for GP-UCB-PE, GP-BUCB and SM-UCB on (a) Generated GP, (b) Himmelblau, (c) Gaussian mixture, (d) Mackey-Glass, (e) Tsunamis, (f) Abalone]
37 Numerical results for the sequential strategy GP-MI. [figures: average regret $R_T/T$ vs. iteration for GP-MI, GP-UCB and GP-EI on (g) Generated GP ($d = 4$), (h) Himmelblau, (i) Gaussian mixture, (j) Mackey-Glass, (k) Tsunamis, (l) Branin]
38 Conclusion 1/2. GP-UCB-PE and GP-MI: generic sequential optimization methods; good theoretical guarantees for the cumulative regret (what about the simple regret?); efficient in practice; easy to implement. Matlab source code online at:
39 Conclusion 2/2. Further developments. In progress: nonparametric approach (active learning); application to other fields and to multi-objective optimization (automotive industry, wind-power plants). Challenge: how to set physical priors in the design space?