Sampling versus optimization in very high dimensional parameter spaces
1 Sampling versus optimization in very high dimensional parameter spaces Grigor Aslanyan Berkeley Center for Cosmological Physics UC Berkeley Statistical Challenges in Modern Astronomy VI Carnegie Mellon University June 6, 2016
2 Collaborators Uroš Seljak Yu Feng Chirag Modi
3 Sampling: Hamiltonian Monte Carlo (HMC)
- Each parameter becomes a particle position; conjugate momentum variables are introduced.
- Particles follow Hamiltonian dynamics with potential U = -ln(posterior).
- Huge advantage over random walk: the information in the derivatives is used to move in the right direction, and the acceptance rate is 1 in theory.
- Each iteration requires integrating the equations of motion numerically with a staggered-leapfrog (or similar) scheme; typically ~10 integration steps are taken per iteration.
- Method of choice for sampling high-dimensional parameter spaces.
- Tuning: masses, integration step size, integration time.
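The leapfrog update described above can be sketched as follows. This is a minimal illustration with unit masses and fixed step size (both tuning choices in practice), not the talk's implementation:

```python
import numpy as np

def hmc_step(q, log_post, grad_log_post, step=0.1, n_leapfrog=10, rng=None):
    """One HMC iteration: draw momenta, integrate Hamiltonian dynamics
    with the staggered-leapfrog scheme, then Metropolis accept/reject."""
    if rng is None:
        rng = np.random.default_rng()
    p = rng.standard_normal(q.shape)            # unit-mass momenta
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog: half-step in momentum, alternate full steps, final half-step.
    p_new += 0.5 * step * grad_log_post(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new
        p_new += step * grad_log_post(q_new)
    q_new += step * p_new
    p_new += 0.5 * step * grad_log_post(q_new)
    # Total energy H = U + K with U = -ln(posterior); exact integration
    # would conserve H, so the acceptance rate would be 1.
    h_old = -log_post(q) + 0.5 * p @ p
    h_new = -log_post(q_new) + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < h_old - h_new:
        return q_new, True
    return q, False
```

Because the leapfrog integrator is symplectic, the energy error stays small and the empirical acceptance rate stays close to the theoretical value of 1 for well-chosen step sizes.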
4 Optimization: BFGS
- Quasi-Newton method: needs first derivatives, but not second derivatives.
- At each iteration the inverse Hessian is estimated from previous iterations (never stored explicitly); a search direction is deduced, followed by a line search.
- Line search: Moré-Thuente 1992.
- L-BFGS (limited-memory BFGS): store and use only a few previous iterations. Works almost as well as full BFGS!
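The talk's optimizer lives in cosmo++; purely as an illustration of the limited-memory idea, SciPy's `L-BFGS-B` (a stand-in, not the code used here) minimizes U = -ln(posterior) for a toy 10,000-dimensional Gaussian posterior while storing only ~10 correction pairs:

```python
import numpy as np
from scipy.optimize import minimize

# Toy posterior (an assumption for illustration): independent Gaussians
# with widely varying curvature, so U(x) = 0.5 * sum_i scales_i * x_i^2.
d = 10_000
scales = np.linspace(1.0, 100.0, d)

def neg_log_post(x):
    return 0.5 * np.sum(scales * x**2)

def grad(x):
    # L-BFGS needs only this first derivative; the inverse Hessian
    # is built internally from the stored correction pairs.
    return scales * x

x0 = np.ones(d)
res = minimize(neg_log_post, x0, jac=grad, method="L-BFGS-B",
               options={"maxcor": 10})  # keep only ~10 previous iterations
```

Even with only 10 stored iteration pairs in a million-scale problem class, the minimizer drives all 10,000 parameters to the posterior peak at the origin.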
5 Linear Model
Data: d = Rs + n, with signal covariance S = ⟨s s†⟩, noise covariance N = ⟨n n†⟩, and data covariance C = ⟨d d†⟩ = R S R† + N; R is the projection matrix.
Binning: S_l = {s_{m_l}} (m_l = 1, ..., M_l), Q_l = R_l R_l†, so that R S R† = Σ_l θ_l Q_l.
Likelihood: L(d|θ) = (2π)^{-N/2} det(C)^{-1/2} exp(-½ d† C⁻¹ d).
Minimum variance estimator (Wiener filter): ŝ = S R† C⁻¹ d.
For Gaussian fields this is the same as the maximum probability estimator!
U. Seljak, astro-ph/
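A tiny numerical version of the Wiener filter above, with randomly chosen toy covariances (the sizes and covariances are assumptions, not the talk's setup), also verifies the "maximum probability" claim: the minimum variance estimator ŝ = S R† C⁻¹ d agrees with the maximum a posteriori solution of the Gaussian model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_sig = 64, 32                          # toy sizes
R = rng.standard_normal((n_pix, n_sig))        # projection matrix
S = np.diag(rng.uniform(0.5, 2.0, n_sig))      # signal prior covariance
N = 0.1 * np.eye(n_pix)                        # noise covariance

# Draw a realization of the linear model d = R s + n.
s_true = rng.multivariate_normal(np.zeros(n_sig), S)
d = R @ s_true + rng.multivariate_normal(np.zeros(n_pix), N)

C = R @ S @ R.T + N                            # data covariance C = R S R^T + N
s_hat = S @ R.T @ np.linalg.solve(C, d)        # Wiener filter: s_hat = S R^T C^-1 d
```

By the matrix inversion lemma, S R† C⁻¹ = (S⁻¹ + R† N⁻¹ R)⁻¹ R† N⁻¹, which is exactly the maximizer of the Gaussian posterior for s given d.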
6 Linear Model
Fisher matrix: F_{ll'} = -⟨∂² ln L(d|θ) / ∂θ_l ∂θ_{l'}⟩. Its inverse is an estimate of the covariance matrix of the parameters: ⟨θ̂ θ̂†⟩ - ⟨θ̂⟩⟨θ̂⟩† = F⁻¹.
Calculation: F_{ll'} = ½ tr[Q_l C⁻¹ Q_{l'} C⁻¹].
Window: W_{ll'} = F_{ll'} / Σ_{l''} F_{l l''}.
Power spectrum quadratic estimator: θ̂_l = ½ Σ_{l'} F⁻¹_{ll'} [d† C⁻¹ Q_{l'} C⁻¹ d - b_{l'}], with noise bias b_l = tr[N C⁻¹ Q_l C⁻¹].
Mean: ⟨θ̂_l⟩ = Σ_{l'} W_{ll'} θ_{l'}.
U. Seljak, astro-ph/
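The trace formula and window matrix above translate directly into code. This sketch uses random symmetric positive-definite toy matrices for the band-power responses Q_l (an assumption purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, nbins = 48, 4
# Toy band-power response matrices Q_l (random symmetric PSD stand-ins).
Qs = []
for _ in range(nbins):
    A = rng.standard_normal((n, n))
    Qs.append(A @ A.T / n)
theta = np.ones(nbins)
N = 0.5 * np.eye(n)
C = sum(t * Q for t, Q in zip(theta, Qs)) + N   # C = sum_l theta_l Q_l + N

# Fisher matrix: F_{ll'} = 0.5 * tr[Q_l C^-1 Q_l' C^-1]
Cinv = np.linalg.inv(C)
F = np.empty((nbins, nbins))
for l in range(nbins):
    for lp in range(nbins):
        F[l, lp] = 0.5 * np.trace(Qs[l] @ Cinv @ Qs[lp] @ Cinv)

# Window matrix: rows of F normalized to unit sum, so <theta_hat> is a
# unit-normalized weighted average of the true band powers.
W = F / F.sum(axis=1, keepdims=True)
```

Note the direct loop costs O(n³) per entry; in the million-parameter regime this is exactly why the talk replaces explicit traces with the simulation-based estimates of the next slide.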
7 Estimating Noise Bias and Fisher Matrix
Noise bias: simulate noise realizations d_n; pass through the optimizer to get ŝ_n. The noise bias in bin l is estimated from the average reconstructed noise power, b_l ∝ ⟨Σ_{k_l} |ŝ_n(k_l)|²⟩.
Fisher matrix: simulate signal s_s; pass through the optimizer to get ŝ_s. For each bin l', simulate extra signal in that bin only, s_{l'} = s_s + Δs_{l'}, and pass through the optimizer to get ŝ_{l'}. The Fisher matrix follows from the response of the reconstructed power in bin l to the power injected in bin l':
F_{ll'} ≈ K_{l'} [Σ_{k_l} |ŝ_{l'}(k_l)|² - Σ_{k_l} |ŝ_s(k_l)|²] / Σ_{k_{l'}} |Δs_{l'}(k_{l'})|²,
where K_{l'} is the number of modes in bin l'.
Power spectrum: θ̂_l = Σ_{l'} W_{ll'} K⁻¹_{l'} [Σ_{k_{l'}} |ŝ(k_{l'})|² - Σ_{k_{l'}} |ŝ_n(k_{l'})|²] × Σ_{k_{l'}} |s_s(k_{l'})|² / Σ_{k_{l'}} |ŝ_s(k_{l'})|².
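The noise-bias step can be illustrated end to end in a diagonal toy model, where the "optimizer" reduces to a per-mode Wiener filter (all sizes, variances, and the diagonal assumption are illustrative, not the talk's pipeline): pure-noise simulations are filtered, their average power gives the bias, and that bias is subtracted from the filtered data power.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128
noise_var = 0.2
S = np.full(n, 1.0)                    # diagonal signal power (Fourier-space toy)
N = np.full(n, noise_var)              # diagonal noise power
wiener = S / (S + N)                   # per-mode Wiener filter: s_hat = wiener * d

# Monte Carlo noise bias: filter pure-noise simulations, average their power.
n_sims = 200
b = np.zeros(n)
for _ in range(n_sims):
    d_n = rng.normal(0.0, np.sqrt(noise_var), n)
    b += (wiener * d_n) ** 2
b /= n_sims

# Debias the reconstructed power of a signal+noise realization.
d = rng.normal(0.0, np.sqrt(S + N))
p_hat = (wiener * d) ** 2 - b          # still needs the transfer-function correction
```

In this diagonal case the Monte Carlo estimate converges to the analytic bias wiener² × noise_var per mode, which is the quantity the simulations stand in for when no closed form is available.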
8 Linear case: Weak Lensing. Toy example: 64×64 grid. [Figure: power spectrum]
9 Linear case: Weak Lensing. Toy example: 64×64 grid; add noise and mask. [Figure: observed data]
10 Results [Figure panels: Original, L-BFGS, Linear Algebra; Fisher matrix]
11 Linear case: Weak Lensing. 1024×1024 grid: ~a million parameters. [Figure: power spectrum]
12 Linear case: Weak Lensing. 1024×1024 grid: ~a million parameters.
13 Reconstruction [Figure panels: Original, L-BFGS in action] CPU time ~1 min.
14 L-BFGS vs. Conjugate Gradient
15 Power Spectrum Fiducial power is 50% larger than the true power. CPU time ~ 1 hour
16 Nonlinear Case: Large Scale Structure
Grid: 128³; box size: (750 Mpc/h)³. Nonlinear evolution: 2LPT (second-order Lagrangian perturbation theory). [Figure panels: Simulation, 2LPT, Noise]
Code used: fastpm (Yu Feng, Man-Yat Chu, Uroš Seljak, )
17 Reconstruction [Figure panels: nonlinear density — Original, L-BFGS in action] CPU time ~1 hour
18 L-BFGS vs. Conjugate Gradient
19 Power Spectrum PRELIMINARY Fiducial power has no wiggles. Power spectra are convolved with window. CPU time ~ 60 hours
20 Power Spectrum PRELIMINARY Showing ratio to fiducial (smooth) power. CPU time ~ 60 hours
21 Comparison with HMC
Previously: HMC: Jasche, Wandelt, ; Wang, Mo, Yang, van den Bosch, .
- HMC burn-in: ~500 iterations (~5,000 function/derivative calls). L-BFGS optimization: ~100 iterations (~100 function/derivative calls).
- HMC correlation length: ~200; an HMC sample of 10,000 costs ~100,000 function/derivative calls.
- L-BFGS full Fisher matrix / power spectrum estimation: ~5,000 function/derivative calls.
22 Summary
- L-BFGS is a fast optimizer for very high dimensional parameter spaces.
- Conjugate gradient works almost as well as L-BFGS in the linear case, but not in the nonlinear case.
- Our reconstruction method works well for both linear and nonlinear models with at least ~a million parameters.
- HMC is at least an order of magnitude more expensive. But if you really need a full sample, then HMC is the way to go. DO NOT use HMC for minimization!
- The optimizers (L-BFGS, conjugate gradient) and HMC will be publicly available (soon) as part of the cosmo++ package: cosmopp.com
More information