Overlapping Astronomical Sources: Utilizing Spectral Information
Slide 1: Overlapping Astronomical Sources: Utilizing Spectral Information
David Jones. Advisor: Xiao-Li Meng. Collaborators: Vinay Kashyap (CfA) and David van Dyk (Imperial College). CHASC Astrostatistics Group, April 15.
Slide 2: Introduction
X-ray telescope data: spatial coordinates of photon detections, and photon energy. Instrument error: diffraction in the telescope means recorded photon positions are spread out according to the point spread function (PSF).
Slide 3: Introduction
PSFs overlap for sources near each other. Aim: inference for the number of sources and their intensities, positions, and spectral distributions. Key points: (i) obtain the posterior distribution of the number of sources; (ii) use spectral information.
Slide 4: Basic Model and Notation
- $(x_i, y_i)$ = spatial coordinates of photon $i$
- $k$ = number of sources
- $\mu_j$ = centre of source $j$
- $s_i \in \{0, 1, \ldots, k\}$ = latent variable indicating which component photon $i$ is from
- $n_j = \sum_{i=1}^n 1\{s_i = j\}$ = number of photons detected from component $j$
- $(x_i, y_i) \mid s_i = j, \mu_j, k \sim \mathrm{PSF}_j$ centred at $\mu_j$, for $i = 1, \ldots, n$
- $(n_0, n_1, \ldots, n_k) \mid w, k \sim \mathrm{Mult}(n; (w_0, w_1, \ldots, w_k))$
- $(w_0, w_1, \ldots, w_k) \mid k \sim \mathrm{Dirichlet}(\lambda, \lambda, \ldots, \lambda)$
- $\mu_j \mid k \sim \mathrm{Uniform}$ over the image, for $j = 1, 2, \ldots, k$
- $k \sim \mathrm{Pois}(\theta)$
The component with label 0 is the background and its PSF is uniform over the image (so its centre is irrelevant). Results are reasonably insensitive to $\theta$, the prior mean number of sources.
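The hierarchy above can be read as a forward simulation. The sketch below is illustrative only: it uses a symmetric Gaussian as a stand-in for the PSF (the talk uses a King profile), and the values of theta, lam, the image size, and the PSF width are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spatial(theta=3.0, lam=1.0, n=1000, image=(0.0, 64.0), psf_sd=1.0):
    """Draw one dataset from the spatial model (Gaussian PSF stand-in)."""
    k = rng.poisson(theta)                       # k ~ Pois(theta)
    w = rng.dirichlet([lam] * (k + 1))           # (w_0, ..., w_k) ~ Dirichlet(lam, ..., lam)
    mu = rng.uniform(*image, size=(k, 2))        # source centres, uniform over the image
    s = rng.choice(k + 1, size=n, p=w)           # latent labels s_i (0 = background)
    xy = np.empty((n, 2))
    bg = s == 0
    xy[bg] = rng.uniform(*image, size=(bg.sum(), 2))   # background: uniform over image
    for j in range(1, k + 1):                          # sources: PSF centred at mu_j
        m = s == j
        xy[m] = mu[j - 1] + psf_sd * rng.standard_normal((m.sum(), 2))
    return xy, s, mu, w, k

xy, s, mu, w, k = simulate_spatial()
```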
Slide 5: 3rd Dimension: Spectral Data
We can distinguish the background from the sources better if we jointly model spatial and spectral information:
- $e_i \mid s_i = j, \alpha_j, \beta_j \sim \mathrm{Gamma}(\alpha_j, \beta_j)$ for $j \in \{1, \ldots, k\}$
- $e_i \mid s_i = 0 \sim \mathrm{Uniform}$ up to some maximum energy
- $\alpha_j \sim \mathrm{Gamma}(a_\alpha, b_\alpha)$, $\beta_j \sim \mathrm{Gamma}(a_\beta, b_\beta)$
Using a (correctly) informative prior on $\alpha_{s_i}$ and $\beta_{s_i}$ versus a diffuse prior made very little difference to the results.
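The spectral layer grafts onto the labels in the same style. Again a sketch: the hyperparameters a_alpha, b_alpha, a_beta, b_beta and the maximum background energy e_max are illustrative, not the talk's values. Note that NumPy's gamma sampler is parameterised by shape and scale, so the rate $\beta$ enters as $1/\beta$.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_energies(s, k, e_max=12.0, a_alpha=2.0, b_alpha=0.5,
                      a_beta=2.0, b_beta=0.5):
    """Attach a photon energy e_i to each latent label s_i."""
    alpha = rng.gamma(a_alpha, 1.0 / b_alpha, size=k)   # alpha_j ~ Gamma(a_alpha, b_alpha)
    beta = rng.gamma(a_beta, 1.0 / b_beta, size=k)      # beta_j  ~ Gamma(a_beta, b_beta)
    e = np.empty(len(s))
    bg = s == 0
    e[bg] = rng.uniform(0.0, e_max, size=bg.sum())      # background: uniform energies
    for j in range(1, k + 1):                           # sources: gamma spectra
        m = s == j
        e[m] = rng.gamma(alpha[j - 1], 1.0 / beta[j - 1], size=m.sum())
    return e, alpha, beta

s = rng.choice(3, size=500)        # toy labels: background plus two sources
e, alpha, beta = simulate_energies(s, k=2)
```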
Slide 6: Computation: RJMCMC
Similar to Richardson & Green (1997). Knowledge of the PSF makes things much easier. Results are insensitive to the prior $k \sim \mathrm{Pois}(\theta)$, e.g. the posterior when $k = 10$: (a) $\theta = 1$; (b) $\theta = 10$.
Figure: Average posterior probabilities of each value of $k$ across ten datasets.
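The reversible-jump sampler itself is far beyond a snippet, but the quantity shown in the figure, the posterior probability of each value of $k$, is just a tabulation of the sampler's draws of $k$. A minimal sketch (the draws below are invented for illustration):

```python
import numpy as np

def posterior_k(k_draws, k_max=15):
    """Posterior probabilities P(k = j | data), j = 0..k_max, from MCMC draws of k."""
    k_draws = np.asarray(k_draws)
    return np.bincount(k_draws, minlength=k_max + 1) / len(k_draws)

# e.g. draws concentrated on k = 2 with some mass on k = 3
p = posterior_k([2, 2, 3, 2, 2, 3, 2, 2, 2, 2])
# p[2] == 0.8, p[3] == 0.2
```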
Slide 7: Simulation Study: Example
100 datasets were simulated for each configuration and analysed with and without energy data. The posterior of $k$ is summarized by the posterior probability of two sources.
Slide 8: Simulation Study: PSF (King 1962)
The King density has Cauchy tails; a Gaussian PSF leads to over-fitting in real data.
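The tail contrast is easy to see numerically. A sketch with an illustrative core radius r0 and slope eta (not the talk's calibrated PSF): a few units from the centre, the King profile's power-law tails exceed a Gaussian of comparable core width by many orders of magnitude, which is why fitting a Gaussian PSF invents extra sources to soak up tail photons.

```python
import numpy as np

def king_density(r, r0=0.6, eta=1.5):
    """Unnormalised King (1962) profile: power-law (Cauchy-like) tails in r."""
    return (1.0 + (r / r0) ** 2) ** (-eta)

def gaussian_density(r, sd=0.6):
    """Unnormalised Gaussian comparison: tails decay much faster."""
    return np.exp(-0.5 * (r / sd) ** 2)

r = np.array([0.0, 1.0, 5.0, 10.0])
ratio = king_density(r) / gaussian_density(r)   # diverges at large r: far heavier tails
```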
Slide 9: Simulation Study: Data Generation
- Bright source: $n_1 \sim \mathrm{Pois}(m_1 = 1000)$
- Dim source: $n_2 \sim \mathrm{Pois}(m_2 = 1000/r)$, where $r = 1, 2, 10, 50$ gives the relative intensity
- Source region: the region where the PSF density is greater than 10% of its maximum (essentially a circle with radius 1)
- $d$ = the probability that a photon from a source falls within its own source region
Slide 10: Simulation Study: Data Generation
- Background per source region: $\mathrm{Pois}(b d m_2)$, where the relative background $b = 0.001, 0.01, 0.1, 1$
- Overall background: $n_0 \sim \mathrm{Pois}\left(\frac{\text{image area}}{\text{source region area}} \, b d m_2\right)$
- Separation: the distance between the sources; values 0.5, 1, 1.5, 2
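The count layer of this simulation design can be sketched directly. Here $d = 0.9$ and the $64 \times 64$ image area are assumed for illustration (the slides define $d$ but its value is not given); the source region radius of 1 gives the region area.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_counts(r=10, b=0.01, d=0.9, m1=1000.0,
                    image_area=64.0 ** 2, region_area=np.pi * 1.0 ** 2):
    """Photon counts for one simulation configuration (r, b)."""
    m2 = m1 / r                            # dim-source mean intensity
    n1 = rng.poisson(m1)                   # bright source: n1 ~ Pois(m1)
    n2 = rng.poisson(m2)                   # dim source:    n2 ~ Pois(m1 / r)
    # overall background: scale the per-region mean b*d*m2 up to the whole image
    n0 = rng.poisson((image_area / region_area) * b * d * m2)
    return n0, n1, n2

n0, n1, n2 = simulate_counts()
```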
Slide 11: Simulation Study: Data Generation
Note: energy units should be pulse invariant (PI) channels.
Slide 12: Median Posterior Probability at k = 2: No Energy
Slide 13: Median Posterior Probability at k = 2: Energy
Slide 14: Median SE of Dim Source Posterior Mean Position: No Energy
Slide 15: Median SE of Dim Source Posterior Mean Position: Energy
Slide 16: Strong Background Posterior Mean Positions: No Energy
Slide 17: Strong Background Posterior Mean Positions: Energy
Slide 18: XMM Data
Additional question: how do the spectral distributions of the sources compare?
Slide 19: Parameter Inference
Table: Posterior parameter estimates for FK Aqr and FL Aqr (using spectral data). Columns: $\mu_{11}$, $\mu_{12}$, $\mu_{21}$, $\mu_{22}$, $w_1$, $w_2$, $w_b$, $\alpha_1$, $\alpha_2$, $\beta_1$, $\beta_2$; rows: Mean, SD, SD/Mean.
Slide 20: Componentwise posterior spectral distributions
Slide 21: Posteriors of source spectral parameters
Slide 22: Chandra Data
Slide 23: Gamma Mixture Spectral Model
Slide 24: Chandra k Results
Slide 25: Locations: 90% credible regions
Slide 26: Chandra data: spectral model contribution
1. The spectral model places some constraints on the spectral distributions, helping us infer source properties more precisely:
   - Posterior standard deviations are smaller with the spectral model.
   - Without it, some fainter sources are occasionally not found.
2. It also offers some robustness to chance or systematic variations in the PSF and background:
   - Background is more easily distinguished from sources.
   - The spectral model plays a large role in the likelihood, so a strangely shaped source is unlikely to be split unless the spectral data also support two sources.
Slide 27: Summary and extensions
- A coherent method for dealing with overlapping sources that uses spectral as well as spatial information.
- Flexibility to include other phenomena. A temporal model? Flares and other activity change the intensity and spectral distribution of sources over time.
- An approximation to the full method could be desirable in some cases.
Slide 28: References
- S. Richardson and P. J. Green, "On Bayesian analysis of mixtures with an unknown number of components (with discussion)", J. R. Statist. Soc. B, 59 (1997), 731-792; corrigendum, 60 (1998), 661.
- I. King, "The structure of star clusters. I. An empirical density law", The Astronomical Journal, 67 (1962), 471.
- C. M. Bishop, Pattern Recognition and Machine Learning, New York: Springer, 2006.
- A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm", J. R. Statist. Soc. B, 39 (1977), 1-38.
- S. P. Brooks and A. Gelman, "General methods for monitoring convergence of iterative simulations", Journal of Computational and Graphical Statistics, 7 (1998), 434-455.
Slide 29: XMM data spectral distribution
Slide 30: Four models