Approximate Bayesian Computation
1 Approximate Bayesian Computation
Jessi Cisewski, Department of Statistics, Yale University
October 26, 2016, SAMSI Astrostatistics Course
2 Our goals: Approximate Bayesian Computation (ABC)
3 Background: Bayesian methods
Goal: the posterior distribution of the unknown parameter(s) θ.
π(θ | y) = f(y | θ) π(θ) / f(y) ∝ f(y | θ) π(θ)
where π(θ | y) is the posterior distribution, f(y | θ) the likelihood, π(θ) the prior, and y the data.
The prior distribution allows you to easily incorporate your beliefs about the parameter(s) of interest.
The posterior is a distribution on the parameter space given the observed data.
4 The posterior for θ given observed data x_obs:
π(θ | x_obs) = f(x_obs | θ) π(θ) / ∫ f(x_obs | θ) π(θ) dθ = f(x_obs | θ) π(θ) / f(x_obs)
Approximate Bayesian Computation
A likelihood-free approach to approximating π(θ | x_obs)
Proceeds via simulation of the forward process
Why would we not know f(x_obs | θ)?
1 Physical model too complex to put in analytical form
2 Strong dependency in data
3 Observational limitations
8 ABC for Astronomy
cosmoabc: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation (Ishida et al., 2015)
Approximate Bayesian Computation for Forward Modeling in Cosmology (Akeret et al., 2015)
Likelihood-Free Cosmological Inference with Type Ia Supernovae: Approximate Bayesian Computation for a Complete Treatment of Uncertainty (Weyant et al., 2013)
Likelihood-free inference in cosmology: potential for the estimation of luminosity functions (Schafer and Freeman, 2012)
Approximate Bayesian Computation for Astronomical Model Analysis: A case study in galaxy demographics and morphological transformation at high redshift (Cameron and Pettitt, 2012)
9 [Figure: from Weyant et al. (2013)]
10 Basic ABC algorithm
11 Basic ABC algorithm
For the observed data x_obs and prior π(θ):
1 Sample θ_prop from the prior π(θ)
2 Generate x_prop from the forward process f(x | θ_prop)
3 Accept θ_prop if x_obs = x_prop
4 Return to step 1
Generates a sample from an approximation of the posterior
Introduced in Pritchard et al. (1999) (population genetics)
12 Step 3: Accept θ_prop if x_obs = x_prop
Waiting for proposals such that x_obs = x_prop would be computationally prohibitive.
Instead, accept proposals with ρ(x_obs, x_prop) ≤ ɛ for some distance ρ and some tolerance threshold ɛ.
When x is high-dimensional, ɛ would have to be made too large to keep the acceptance probability reasonable. Instead, reduce the dimension by comparing summaries S(x_prop) and S(x_obs), as in the sketch below.
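For instance, the modified acceptance step can be written as a small R helper. This is a minimal sketch; the summary function S, the distance rho, and the tolerance value are illustrative choices of mine, not from the slides:

# Accept a proposal when the summaries of the simulated and observed data are close
S   <- function(x) c(mean(x), sd(x))            # example low-dimensional summary
rho <- function(s1, s2) sqrt(sum((s1 - s2)^2))  # Euclidean distance between summaries
epsilon <- 0.1                                  # tolerance threshold

accept <- function(x_prop, x_obs) rho(S(x_prop), S(x_obs)) <= epsilon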
13 Simple examples
14 ABC illustration: binomial distribution
Data: y_{1:n} iid Bern(θ), where n = sample size and θ = P(Y = 1)
Forward process: F(x | θ) = (n choose x) θ^x (1 − θ)^(n−x), where x = Σ_{i=1}^n x_i (in this case, we use the likelihood)
Distance function: ρ(y, x) = |ȳ − x̄|; hence ρ(y, x) = 0 if the generated dataset x has the same number of 1's as y
Tolerance: ɛ = 0
Prior: π(θ) = Uniform(0,1)
Reference: Turner and Van Zandt (2012)
15 Binomial illustration: R code

n <- 25                  # number of observations
N <- 1000                # generated sample size (value lost in transcription; 1000 assumed)
true.theta <- 0.75
data <- rbinom(n, 1, true.theta)
epsilon <- 0             # tolerance
theta <- numeric(N)
rho <- function(y, x) abs(sum(y) - sum(x)) / n   # distance function
for (i in 1:N) {
  d <- epsilon + 1
  while (d > epsilon) {
    proposed.theta <- rbeta(1, 1, 1)             # prior: Uniform(0,1) = Beta(1,1)
    x <- rbinom(n, 1, proposed.theta)            # forward process
    d <- rho(data, x)
  }
  theta[i] <- proposed.theta
}
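As a check on this code (my addition): with a Uniform(0,1) prior, Bernoulli data, and ɛ = 0, matching the sufficient statistic Σ xᵢ exactly means the ABC sample targets the exact conjugate posterior Beta(1 + Σ yᵢ, 1 + n − Σ yᵢ), which can be overlaid on the ABC histogram:

hist(theta, freq = FALSE, xlab = expression(theta), main = "ABC posterior")
curve(dbeta(x, 1 + sum(data), 1 + n - sum(data)), add = TRUE, lwd = 2)  # exact posterior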
16 Binomial illustration: ABC posterior
[Figure: histogram of the ABC posterior sample, built up across three slides]
19 Gaussian illustration
Data: x_obs consists of 25 iid draws from Normal(µ, 1)
Summary statistic: S(x) = x̄
Distance function: ρ(S(x_prop), S(x_obs)) = |x̄_prop − x̄_obs|
Tolerance: ɛ = 0.50 and 0.10
Prior: π(µ) = Normal(0, 10)
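A minimal R sketch of this illustration (my reconstruction, not the original course code; the input value µ = 1 and the reading of Normal(0, 10) as variance 10 are assumptions):

set.seed(1)
n <- 25; N <- 1000
mu.true <- 1                                # assumed input value
x.obs <- rnorm(n, mu.true, 1)
epsilon <- 0.1                              # also try 0.5
mu.abc <- numeric(N)
for (i in 1:N) {
  repeat {
    mu.prop <- rnorm(1, 0, sqrt(10))        # prior: Normal(0, 10) (variance 10 assumed)
    x.prop  <- rnorm(n, mu.prop, 1)         # forward process
    if (abs(mean(x.prop) - mean(x.obs)) <= epsilon) break  # summary: sample mean
  }
  mu.abc[i] <- mu.prop
}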
20 Gaussian illustration: posteriors for µ
[Figure: two panels (tolerance 0.5 and 0.1, N = 1000, n = 25) comparing the true posterior, the ABC posterior, and the input value of µ]
21 Some intuition about ABC
22 If one wants to generate a draw from the posterior:
1 Draw θ_prop from the prior π(θ).
2 Draw x_prop from the density f(x | θ_prop).
3 Accept θ_prop if x_prop = x_obs.
Why?
23 If one wants to generate a draw from the posterior:
1 Draw θ_prop from the prior π(θ).
2 Draw x_prop from the density f(x | θ_prop).
3 Accept θ_prop if x_prop = x_obs.
Why? Let θ_acc denote an accepted θ_prop. Then, for any θ,
P(θ_acc = θ) = P(θ_prop = θ | x_prop = x_obs)
= P(x_prop = x_obs | θ_prop = θ) P(θ_prop = θ) / P(x_prop = x_obs)
= f(x_obs | θ) π(θ) / f(x_obs)
= π(θ | x_obs)
in the case where θ is discrete. Illustration from Chad Schafer (CMU)
24 If one wants to generate a draw from the posterior:
1 Draw θ_prop from the prior π(θ).
2 Draw x_prop from the density f(x | θ_prop).
3 Accept θ_prop if x_prop = x_obs.
Why? Let θ_acc denote an accepted θ_prop. Then, for any T ⊆ ℝ,
P(θ_acc ∈ T) = P(θ_prop ∈ T | x_prop = x_obs)
= ∫_T P(x_prop = x_obs | θ_prop = θ) π(θ) dθ / P(x_prop = x_obs)
= ∫_T f(x_obs | θ) π(θ) dθ / f(x_obs)
= ∫_T π(θ | x_obs) dθ
in the case where θ is continuous.
25 If one wants to generate a draw from the posterior:
1 Draw θ_prop from the prior π(θ).
2 Draw x_prop from the density f(x | θ_prop).
3 Accept θ_prop if x_prop = x_obs.
Why? Let θ_acc denote an accepted θ_prop. Then, for any T ⊆ ℝ,
P(θ_acc ∈ T) = P(θ_prop ∈ T | x_prop = x_obs)
= K ∫_T P(x_prop = x_obs | θ_prop = θ) π(θ) dθ
= K ∫_T f(x_obs | θ) π(θ) dθ
= ∫_T π(θ | x_obs) dθ
in the case where θ is continuous.
26 If one wants to generate a draw from the posterior:
1 Draw θ_prop from the prior π(θ).
2 Draw x_prop from the density f(x | θ_prop).
3 Accept θ_prop if ???
Why? Let θ_acc denote an accepted θ_prop. Then, for any T ⊆ ℝ,
P(θ_acc ∈ T) = P(θ_prop ∈ T | Accept θ_prop)
= K ∫_T P(Accept θ_prop | θ_prop = θ) π(θ) dθ
?= K ∫_T f(x_obs | θ) π(θ) dθ
= ∫_T π(θ | x_obs) dθ
in the case where θ is continuous.
27 The Point: θ_acc is a draw from the posterior if
P(Accept θ_prop | θ_prop = θ) ∝ f(x_obs | θ) (the likelihood)
This creates a basis for assessing the quality of the approximation, irrespective of the prior.
To achieve this, we could accept θ_prop if x_prop = x_obs. Of course, this is not practical.
28 A clear alternative is to accept θ_prop if x_prop is close to x_obs using some chosen distance metric.
What is the price of this approximation?
Define:
φ_ɛ(x_prop, x_obs) = 1 if ρ(x_prop, x_obs) < ɛ, and 0 if ρ(x_prop, x_obs) ≥ ɛ
In other words, φ_ɛ(x_prop, x_obs) is an indicator as to whether or not x_prop is close to x_obs.
31 Hence,
P(Accept θ_prop | θ_prop = θ) = P(ρ(x_prop, x_obs) < ɛ | θ_prop = θ) = ∫ φ_ɛ(x, x_obs) f(x | θ) dx → K f(x_obs | θ) as ɛ → 0
Hence, for ɛ small, P(Accept θ_prop | θ_prop = θ) ≈ K f(x_obs | θ)
32 Toy Example: Assume that x_obs is Gaussian with mean θ and variance one.
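The comparisons on the next slides can be reproduced with a few lines of R. This is a sketch under the stated toy model: accepting when |x_prop − x_obs| < ɛ, the acceptance probability at θ is the N(θ, 1) mass within ɛ of x_obs:

x.obs <- 1; eps <- 0.1
theta <- seq(-3, 5, length.out = 400)
lik <- dnorm(x.obs, mean = theta, sd = 1)                           # likelihood f(x_obs | theta)
acc <- pnorm(x.obs + eps, theta, 1) - pnorm(x.obs - eps, theta, 1)  # P(Accept | theta)
plot(theta, lik / max(lik), type = "l", xlab = expression(theta), ylab = "scaled value")
lines(theta, acc / max(acc), lty = 2)   # nearly proportional to the likelihood for small eps
legend("topright", c("Likelihood", "Acceptance prob."), lty = 1:2)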
33 [Figure: the convolution ∫ φ_ɛ(x, x_obs) f(x | θ) dx = P(Accept θ_prop | θ_prop = θ) for x_obs = 1, θ = 0, ɛ = 0.1]
34 [Figure: the same convolution for x_obs = 1, θ = 1, ɛ = 0.1]
35 [Figure: the likelihood and the acceptance probability compared across all θ, for x_obs = 1, ɛ = 0.1]
36 [Figure: the convolution ∫ φ_ɛ(x, x_obs) f(x | θ) dx = P(Accept θ_prop | θ_prop = θ) for x_obs = 1, θ = 0, ɛ = 0.4]
37 [Figure: the same convolution for x_obs = 1, θ = 1, ɛ = 0.4]
38 [Figure: the likelihood and the acceptance probability compared across all θ, for x_obs = 1, ɛ = 0.4]
39 [Figure: the likelihood and the acceptance probability for x_obs = 1 at another tolerance; the ɛ value was not recovered in transcription]
40 Data summaries
Ideally, S_x is sufficient, i.e., f(x | S_x, θ) = f(x | S_x), as this implies that
f(x | θ) = f(x | θ, S_x) f(S_x | θ) ∝ f(S_x | θ)
Examples:
1 Ȳ = n⁻¹ Σ_{i=1}^n Y_i is a sufficient statistic for µ where Y_i are iid N(µ, 1), i = 1, ..., n
2 Y_(n) = max{Y_1, ..., Y_n} is a sufficient statistic for θ where Y_i are iid U(0, θ), i = 1, ..., n
A small numerical illustration of the second example follows.
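As noted above, here is a quick sketch of the second example (my own, with an assumed U(0, 10) prior): rejection ABC that matches only the sufficient summary max{Yᵢ} targets the same posterior as matching the full data would:

set.seed(2)
n <- 50; theta.true <- 3
y <- runif(n, 0, theta.true)
eps <- 0.01
theta.abc <- replicate(500, {
  repeat {
    th <- runif(1, 0, 10)                    # prior: U(0, 10) (assumed)
    x  <- runif(n, 0, th)
    if (abs(max(x) - max(y)) <= eps) break   # match the sufficient summary Y_(n)
  }
  th
})
# theta.abc approximates the posterior, which under this prior is
# proportional to theta^(-n) on (max(y), 10)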
41 In a nutshell
"The basic idea behind ABC is that using a representative (enough) summary statistic η coupled with a small (enough) tolerance ɛ should produce a good (enough) approximation to the posterior..." Marin et al. (2012)
How do we pick a tolerance ɛ?
43 Sequential ABC
44 Sequential ABC
Main idea: instead of restarting the ABC algorithm with a smaller tolerance ɛ, use the already sampled particle system as the proposal distribution rather than drawing from the prior distribution.
Particle system: (1) the retained sampled values, (2) their importance weights
Some references: Beaumont et al. (2009); Del Moral et al. (2011); Bonassi and West (2015)
45 Monte Carlo integration and importance sampling
46 MC Integration
General idea: Monte Carlo methods are a form of stochastic integration used to approximate expectations by invoking the law of large numbers.
I = ∫_a^b h(y) dy = ∫_a^b w(y) f(y) dy = E_f(w(Y))
where f(y) = 1/(b − a) is the pdf of a U(a, b) random variable and w(y) = h(y)/f(y) = h(y)(b − a)
By the LLN, if we take an iid sample of size N from U(a, b), we can estimate I as
Î = N⁻¹ Σ_{i=1}^N w(Y_i) → E(w(Y)) = I
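A small worked example of this estimator in R (my example): I = ∫₀^π sin(y) dy = 2:

set.seed(3)
a <- 0; b <- pi; N <- 1e5
h <- sin                    # integrand; the true integral over (0, pi) is 2
y <- runif(N, a, b)         # iid sample from U(a, b)
w <- h(y) * (b - a)         # w(y) = h(y) / f(y) with f(y) = 1 / (b - a)
mean(w)                     # Monte Carlo estimate of I (close to 2)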
47 MC Integration: Gaussian CDF example
Goal: estimate F_Y(y) = P(Y ≤ y) = E[I_(−∞,y)(Y)] where Y ~ N(0, 1):
F_Y(y) = ∫_{−∞}^y (2π)^(−1/2) e^(−t²/2) dt = ∫ h(t) (2π)^(−1/2) e^(−t²/2) dt
where h(t) = 1 if t < y and h(t) = 0 if t ≥ y
Draw an iid sample Y_1, ..., Y_N from a N(0, 1); then the estimator is
Î = N⁻¹ Σ_{i=1}^N h(Y_i) = (# draws < y) / N
Example 24.2 of Wasserman (2004)
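In R this is essentially a one-liner (my sketch, with y = 1.5 as an arbitrary example point):

set.seed(4)
N <- 1e5; y <- 1.5
mean(rnorm(N) < y)   # Monte Carlo estimate of P(Y < y)
pnorm(y)             # exact value, for comparison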
49 Importance Sampling: motivation
Standard Monte Carlo integration is great if you can sample from the target distribution (i.e., the desired distribution). But what if you can't sample from the target?
Idea of importance sampling: draw the sample from a proposal distribution and re-weight the integral using importance weights so that the correct distribution is targeted.
[Figure: three panels showing the target density, the proposal density, and the two overlaid]
51 MC Integration to Importance Sampling
I = ∫ h(y) f(y) dy
h is some function and f is the probability density function of Y. When the density f is difficult to sample from, importance sampling can be used.
Rather than sampling from f, you specify a different probability density function, g, as the proposal distribution:
I = ∫ h(y) f(y) dy = ∫ h(y) f(y) [g(y)/g(y)] dy = ∫ [h(y) f(y) / g(y)] g(y) dy
53 Importance Sampling
I = E_f[h(Y)] = ∫ [h(y) f(y) / g(y)] g(y) dy = E_g[h(Y) f(Y) / g(Y)]
Hence, given an iid sample Y_1, ..., Y_N from g, our estimator of I becomes
Î = N⁻¹ Σ_{i=1}^N h(Y_i) f(Y_i) / g(Y_i) → E_g[h(Y) f(Y) / g(Y)] = I
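A classic worked example (my own, not from the slides): estimating the rare-event probability P(Y > 4) for Y ~ N(0, 1). Plain Monte Carlo with 10⁵ draws will usually see no exceedances at all, while a shifted-exponential proposal supported on (4, ∞) gives an accurate answer:

set.seed(5)
N <- 1e5
# I = E_f[h(Y)] with h(y) = 1{y > 4} and f the standard normal density
y <- 4 + rexp(N)               # proposal g: Exp(1) shifted to (4, Inf)
w <- dnorm(y) / dexp(y - 4)    # importance weights f(y) / g(y); h(y) = 1 on all draws
mean(w)                        # IS estimate of P(Y > 4)
pnorm(4, lower.tail = FALSE)   # exact value, about 3.2e-05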
54 Importance sampling: illustration
Goal: estimate P(Y < 0.3) where Y ~ f
Try two proposal distributions: U(0, 1) and U(0, 4)
55 If we take 1000 samples of size 100 and find the IS estimates, we can compare the estimated expected values and variances under g₁: U(0, 1) and g₂: U(0, 4) against the truth. [Table values not recovered in transcription]
56 Extensions of Importance Sampling
Sequential Importance Sampling
Sequential Monte Carlo (Particle Filtering): see Doucet et al. (2001)
Approximate Bayesian Computation: see Turner and Van Zandt (2012) for a tutorial, and Cameron and Pettitt (2012); Weyant et al. (2013) for applications to astronomy
57 Importance sampling can be used in ABC to improve the computational efficiency.
58 Algorithm 1: ABC Population Monte Carlo algorithm
1: At iteration t = 1:
2: Run the basic ABC sampler with tolerance ɛ₁ to obtain {θ_1^(i)}_{i=1}^N
3: Set importance weights W_1^(i) = 1/N for i = 1, ..., N
4: for t = 2 to T do
5:   Set τ_t² = 2 var({θ_{t−1}^(i), W_{t−1}^(i)}_{i=1}^N)
6:   for i = 1 to N do
7:     while ρ(S(y_{1:n}), S(x_{1:n})) > ɛ_t do
8:       Draw θ₀ from {θ_{t−1}^(i)}_{i=1}^N with probabilities {W_{t−1}^(i)}_{i=1}^N
9:       Propose θ* ~ N(θ₀, τ_t²)
10:      Generate x_{1:n} from F(x | θ*)
11:      Calculate summary statistics {S_y, S_x}
12:    end while
13:    θ_t^(i) ← θ*
14:    W_t^(i) ← π(θ_t^(i)) / Σ_{j=1}^N W_{t−1}^(j) τ_t⁻¹ φ(τ_t⁻¹ (θ_t^(i) − θ_{t−1}^(j)))
15:  end for
16:  Normalize: {W_t^(i)}_{i=1}^N ← {W_t^(i)}_{i=1}^N / Σ_{i=1}^N W_t^(i)
17: end for
Decreasing tolerances ɛ₁ ≥ ... ≥ ɛ_T; φ(·) is the density function of a N(0, 1)
From Beaumont et al. (2009)
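A compact R translation of this pseudocode for the Gaussian-mean illustration. This is a sketch under assumptions of mine: input value µ = 1, prior Normal(0, 10) read as variance 10, and an assumed decreasing tolerance sequence:

set.seed(6)
n <- 25; N <- 1000
x.obs <- rnorm(n, 1, 1)                        # observed data (input value 1 assumed)
S.obs <- mean(x.obs)                           # summary statistic
eps <- seq(1, 0.1, length.out = 10)            # decreasing tolerances (assumed values)
prior.d <- function(th) dnorm(th, 0, sqrt(10)) # prior density (variance 10 assumed)

# t = 1: basic ABC sampler drawing from the prior
theta <- numeric(N); W <- rep(1 / N, N)
for (i in 1:N) {
  repeat {
    th <- rnorm(1, 0, sqrt(10))
    if (abs(mean(rnorm(n, th, 1)) - S.obs) <= eps[1]) break
  }
  theta[i] <- th
}

for (t in 2:length(eps)) {
  tau2 <- 2 * sum(W * (theta - sum(W * theta))^2)   # line 5: twice the weighted variance
  theta.new <- W.new <- numeric(N)
  for (i in 1:N) {
    repeat {
      th0 <- sample(theta, 1, prob = W)             # line 8: draw from the particle system
      th  <- rnorm(1, th0, sqrt(tau2))              # line 9: perturb the draw
      if (abs(mean(rnorm(n, th, 1)) - S.obs) <= eps[t]) break
    }
    theta.new[i] <- th
    W.new[i] <- prior.d(th) / sum(W * dnorm(th, theta, sqrt(tau2)))  # line 14
  }
  theta <- theta.new; W <- W.new / sum(W.new)       # line 16: normalize the weights
}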
59 Gaussian illustration: sequential posteriors
[Figure: ABC posteriors across the tolerance sequence ɛ_{1:10}, with the true posterior and the input value of µ; N = 1000, n = 25. The tolerance values were not recovered in transcription]
60 Stellar Initial Mass Function
61 Stellar Initial Mass Function: the distribution of star masses after a star formation event within a specified volume of space
Molecular cloud → Protostars → Stars
[Image; credit line not recovered]
62 Broken power law (Kroupa, 2001): Φ(M) ∝ M^(−α_i), M_{1i} ≤ M ≤ M_{2i}
α₁ = 0.3 for 0.01 ≤ M/M_Sun < 0.08 [sub-stellar]
α₂ = 1.3 for 0.08 ≤ M/M_Sun < 0.50
α₃ = 2.3 for 0.50 ≤ M/M_Sun ≤ M_max
Many other models, e.g., Salpeter (1955); Chabrier (2003)
1 M_Sun = 1 solar mass (the mass of our Sun)
63 ABC for the Stellar Initial Mass Function
[Figure: observed mass function, frequency vs. mass; image credit not recovered]
64 IMF Likelihood
Start with a power-law distribution: each star's mass is independently drawn from a power law with density
f(m) = [(1 − α) / (M_max^(1−α) − M_min^(1−α))] m^(−α), m ∈ (M_min, M_max)
Then the likelihood is
L(α | m_{1:n_tot}) = [(1 − α) / (M_max^(1−α) − M_min^(1−α))]^(n_tot) ∏_{i=1}^{n_tot} m_i^(−α)
n_tot = total number of stars in cluster
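The forward process needs draws from this truncated power law; the standard inverse-CDF sampler is a few lines of R (a sketch of mine, assuming α ≠ 1):

# Sample from f(m) proportional to m^(-alpha) on (Mmin, Mmax), for alpha != 1
rpowerlaw <- function(n, alpha, Mmin, Mmax) {
  u <- runif(n)
  (u * (Mmax^(1 - alpha) - Mmin^(1 - alpha)) + Mmin^(1 - alpha))^(1 / (1 - alpha))
}
m <- rpowerlaw(1000, alpha = 2.35, Mmin = 2, Mmax = 60)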
65 Observational limitations: aging
The lifecycle of a star depends on its mass: more massive stars die faster.
For a cluster age of τ Myr, we only observe stars with masses < T_age ≈ τ^(−2/5) 10^(8/5)
If age = 30 Myr, the aging cutoff is T_age ≈ 10 M_Sun
Then the likelihood is
L(α | m_{1:n_obs}, n_tot) = [(1 − α) / (T_age^(1−α) − M_min^(1−α))]^(n_obs) (∏_{i=1}^{n_obs} m_i^(−α)) P(M > T_age)^(n_tot − n_obs)
n_tot = # of stars in cluster; n_obs = # of stars observed in cluster
66 Observational limitations: completeness
Completeness function:
P(observing star | m) = 0 for m < C_min; (m − C_min)/(C_max − C_min) for m ∈ [C_min, C_max]; 1 for m > C_max
The probability of observing a particular star given its mass; depends on the flux limit, stellar crowding, etc.
Image: NASA, J. Trauger (JPL), J. Westphal (Caltech)
67 Observational limitations: measurement error
Incorporating log-normal measurement error gives our final likelihood:
L(α | m_{1:n_obs}, n_tot) =
( P(M > T_age) + [(1 − α)/(M_max^(1−α) − M_min^(1−α))] ∫_{C_min}^{C_max} (1 − (M − C_min)/(C_max − C_min)) M^(−α) dM )^(n_tot − n_obs)
× ∏_{i=1}^{n_obs} { [(1 − α)/(M_max^(1−α) − M_min^(1−α))] ∫_{M_min}^{T_age} (2πσ²)^(−1/2) m_i⁻¹ e^(−(log(m_i) − log(M))²/(2σ²)) [ I{M > C_max} + ((M − C_min)/(C_max − C_min)) I{C_min ≤ M ≤ C_max} ] M^(−α) dM }
68 IMF with aging, completeness, and error
[Figure: densities of the true and observed masses; sample size = 1000 stars, [C_min, C_max] = [2, 4]; the value of σ was not recovered]
69 Simulation Study: forward model
Draw from f(m) = [(1 − α)/(60^(1−α) − 2^(1−α))] m^(−α), m ∈ (2, 60)
Aged 30 Myrs
Observational completeness:
P(obs | m) = 0 for m < 2; (m − 2)/2 for m ∈ [2, 4]; 1 for m > 4
Uncertainty: log M = log m + η (with η ~ N(0, 1))
Prior: α ~ U[0, 6]
A sketch of this forward process follows.
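As noted above, a sketch of this forward model in R (my translation; it reuses the rpowerlaw helper sketched earlier and takes σ = 1 from the η ~ N(0, 1) statement on the slide):

forward.imf <- function(n.tot, alpha, Mmin = 2, Mmax = 60, T.age = 10,
                        Cmin = 2, Cmax = 4, sigma = 1) {
  m <- rpowerlaw(n.tot, alpha, Mmin, Mmax)             # draw the full cluster
  m <- m[m < T.age]                                    # aging: massive stars have died
  p.obs <- pmin(pmax((m - Cmin) / (Cmax - Cmin), 0), 1)
  m <- m[runif(length(m)) < p.obs]                     # completeness thinning
  exp(log(m) + sigma * rnorm(length(m)))               # log-normal measurement error
}
m.sim <- forward.imf(1000, alpha = 2.35)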
70 Simulation Study: summary statistics
We want to account for the following with our summary statistics and distance functions:
1. Shape of the observed mass function:
ρ₁(m_sim, m_obs) = [ ∫ ( f̂_{log m_sim}(x) − f̂_{log m_obs}(x) )² dx ]^(1/2)
71 2. Number of stars observed:
ρ₂(m_sim, m_obs) = |1 − n_sim / n_obs|
m_sim = masses of the stars simulated from the forward model
m_obs = masses of the observed stars
n_sim = number of stars simulated from the forward model
n_obs = number of observed stars
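These two distances can be computed with kernel density estimates of the log masses evaluated on a common grid (a sketch; the grid range and the default bandwidth are my choices):

rho1 <- function(m.sim, m.obs) {
  grid <- seq(log(1), log(70), length.out = 512)       # common evaluation grid
  f.sim <- density(log(m.sim), from = grid[1], to = grid[512], n = 512)$y
  f.obs <- density(log(m.obs), from = grid[1], to = grid[512], n = 512)$y
  sqrt(sum((f.sim - f.obs)^2) * diff(grid[1:2]))       # approximate integrated squared difference
}
rho2 <- function(m.sim, m.obs) abs(1 - length(m.sim) / length(m.obs))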
72 Simulation Study
1 Draw n = 10³ stars
2 IMF slope α = 2.35 with M_min = 2 and M_max = 60
3 N = 10³ particles
4 T = 30 sequential time steps
[Figure: observed mass function (density vs. mass)]
73 [Figure: densities of the IMF (N = 1000) and the observed mass function (N = 510); sample size = 1000 stars, [C_min, C_max] = [2, 4]; bandwidths and σ not recovered]
74 Simulation Study: results
1 Draw n = 10³ stars
2 IMF slope α = 2.35 with M_min = 2 and M_max = 60
3 N = 10³ particles
4 T = 30 sequential time steps
[Figure: (A) observed mass function with the posterior median and a 95% credible band; (B) initial, intermediate, and final ABC posteriors against the true posterior and the input value of α]
75 Summary
ABC can be a useful tool when data are too complex to define a reasonable likelihood
Selection of good summary statistics is crucial for the ABC posterior to be meaningful
76 THANK YOU!!!
77 Bibliography I
Akeret, J., Refregier, A., Amara, A., Seehars, S., and Hasner, C. (2015), "Approximate Bayesian computation for forward modeling in cosmology," Journal of Cosmology and Astroparticle Physics, 2015, 043.
Beaumont, M. A., Cornuet, J.-M., Marin, J.-M., and Robert, C. P. (2009), "Adaptive approximate Bayesian computation," Biometrika, 96.
Bonassi, F. V. and West, M. (2015), "Sequential Monte Carlo with Adaptive Weights for Approximate Bayesian Computation," Bayesian Analysis, 10.
Cameron, E. and Pettitt, A. N. (2012), "Approximate Bayesian Computation for Astronomical Model Analysis: A Case Study in Galaxy Demographics and Morphological Transformation at High Redshift," Monthly Notices of the Royal Astronomical Society, 425.
Chabrier, G. (2003), "The galactic disk mass function: Reconciliation of the Hubble Space Telescope and nearby determinations," The Astrophysical Journal Letters, 586, L133.
Del Moral, P., Doucet, A., and Jasra, A. (2011), "An adaptive sequential Monte Carlo method for approximate Bayesian computation," Statistics and Computing, 22.
Doucet, A., De Freitas, N., and Gordon, N. (2001), Sequential Monte Carlo Methods in Practice, Statistics for Engineering and Information Science, New York: Springer-Verlag.
Ishida, E. E. O., Vitenti, S. D. P., Penna-Lima, M., Cisewski, J., de Souza, R. S., Trindade, A. M. M., Cameron, E., and Busti, V. C. (2015), "cosmoabc: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation," Astronomy and Computing, 13.
Kroupa, P. (2001), "On the variation of the initial mass function," Monthly Notices of the Royal Astronomical Society, 322.
Marin, J.-M., Pudlo, P., Robert, C. P., and Ryder, R. J. (2012), "Approximate Bayesian computational methods," Statistics and Computing, 22.
Pritchard, J. K., Seielstad, M. T., and Perez-Lezaun, A. (1999), "Population Growth of Human Y Chromosomes: A Study of Y Chromosome Microsatellites," Molecular Biology and Evolution, 16.
Salpeter, E. E. (1955), "The luminosity function and stellar evolution," The Astrophysical Journal, 121.
78 Bibliography II
Schafer, C. M. and Freeman, P. E. (2012), Statistical Challenges in Modern Astronomy V, Springer, chap. 1, Lecture Notes in Statistics.
Turner, B. M. and Van Zandt, T. (2012), "A tutorial on approximate Bayesian computation," Journal of Mathematical Psychology, 56.
Wasserman, L. (2004), All of Statistics: A Concise Course in Statistical Inference, Springer.
Weyant, A., Schafer, C., and Wood-Vasey, W. M. (2013), "Likelihood-free cosmological inference with type Ia supernovae: approximate Bayesian computation for a complete treatment of uncertainty," The Astrophysical Journal, 764.