Bayesian inference for the location parameter of a Student-t density


Jean-François Angers

CRM-2642
February 2000

Dép. de mathématiques et de statistique, Université de Montréal, C.P. 6128, Succ. Centre-ville, Montréal, Québec, H3C 3J7; angers@dms.umontreal.ca

This research has been partially funded by NSERC, Canada.


Abstract

Student-t densities play an important role in Bayesian statistics. For example, when an estimator of the mean of a normal population with unknown variance is desired, the marginal posterior density of the mean is often a Student-t density. In this paper, estimation of the location parameter of a Student-t density is considered when its prior is also a Student-t density. It is shown that the posterior mean and variance can be written as ratios of finite sums when the numbers of degrees of freedom of both the likelihood function and the prior are odd. When one of them (or both) is even, approximations for the posterior mean and variance are given. The behavior of the posterior mean is also investigated in the presence of outlying observations. When robustness is achieved, second order approximations of the estimator and its posterior expected loss are given.

Mathematics Subject Classification: 62C10, 62F15, 62F35.

Keywords: robust estimator, Fourier transform, convolution of Student-t densities.


1 Introduction

Heavy-tailed priors play an important role in Bayesian statistics. They can be viewed as an alternative to noninformative priors, since they lead to estimators which are insensitive to misspecification of the prior parameters, while still allowing the use of prior information when it is available. Because of its heavier tails, the Student-t density is a robust alternative to the normal density when large observations are expected. (Here, robustness means that the prior information is ignored when it conflicts with the information contained in the data.) This density is also encountered when the data come from a normal population with unknown variance.

In this paper, the problem of estimating the location parameter of a Student-t density, under squared-error loss, is considered. To obtain an estimator which ignores the prior information when it conflicts with the likelihood information, the prior density proposed in this paper is another Student-t density, with fewer degrees of freedom than the likelihood. Consequently, the prior tails are heavier than those of the likelihood, resulting in an estimator which is insensitive to prior misspecification (cf. O'Hagan, 1979). This problem has been studied previously by Fan and Berger (1990), Angers and Berger (1991), Angers (1992) and Fan and Berger (1992). However, some conditions have to be imposed on the degrees of freedom in order to obtain an analytic expression for the estimator. A statistical motivation of the importance of this problem can be found in Fan and Berger (1990).

In Section 2 of this paper, it is assumed that the degrees of freedom of both the prior and the likelihood are odd. Using Angers (1996a), an alternative form for the estimator, which is sometimes easier to use (cf. Angers, 1996b), is also proposed in this section. In Section 3, it is shown that the effect of a large observation on the proposed estimator is limited. In the last section, using Saleh (1994), an approximation is considered for the case where the number of degrees of freedom of the likelihood function is even.

2 Development of the estimator: odd degrees of freedom

Let us consider the following model:

X | θ ~ T_{2k+1}(θ, σ),    θ ~ T_{2κ+1}(µ, τ),

where σ, µ and τ are known and both k and κ are in ℕ. The notation T_m(η, ν) denotes the Student-t density with m degrees of freedom and location and scale parameters respectively given by η and ν, that is

f_m(x \mid \eta, \nu) = \frac{\Gamma([m+1]/2)}{\nu \sqrt{m\pi}\, \Gamma(m/2)} \left[ 1 + \frac{(x-\eta)^2}{m\nu^2} \right]^{-[m+1]/2}.    (1)

Since the hyperparameters are assumed to be known, we suppose, without loss of generality, that µ = 0 and σ = 1. The general case can be obtained by replacing X by σX + µ and θ by θ + µ in Theorems 2 and 3. In Angers (1996a), the following theorem is proved.
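To make the parametrization in (1) concrete, here is a minimal sketch (not taken from the paper; the helper name t_pdf is ours) that evaluates f_m(x | η, ν) directly and checks it against scipy.stats.t, which uses the same location-scale convention.

```python
# Sketch: equation (1) written out and checked against scipy.stats.t.
import numpy as np
from scipy.special import gammaln
from scipy.stats import t as student_t

def t_pdf(x, m, eta=0.0, nu=1.0):
    """Student-t density f_m(x | eta, nu) of equation (1)."""
    logc = (gammaln((m + 1) / 2) - gammaln(m / 2)
            - 0.5 * np.log(m * np.pi) - np.log(nu))
    return np.exp(logc - (m + 1) / 2 * np.log1p((x - eta) ** 2 / (m * nu ** 2)))

x = np.linspace(-5.0, 5.0, 11)
assert np.allclose(t_pdf(x, 7, eta=1.0, nu=2.0),
                   student_t.pdf(x, 7, loc=1.0, scale=2.0))
```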

Theorem 1. If X | θ ~ g(x − θ) and if θ | τ ~ τ^{-1} h(θ/τ), then

m(x) = marginal density of X evaluated at x = I_0(x),    (2)

\hat\theta(x) = posterior expected mean of θ = x - i\,\frac{I_1(x)}{I_0(x)},    (3)

\rho(x) = posterior variance of θ = \left( \frac{I_1(x)}{I_0(x)} \right)^2 - \frac{I_2(x)}{I_0(x)},    (4)

where i = \sqrt{-1}, I_j(x) = F^{-1}\{\hat h(\tau s)\, \hat g^{(j)}(s); x\}, \hat h(s) denotes the Fourier transform of h(x), \hat g^{(j)}(s) the j-th derivative of the Fourier transform of g(x), and F^{-1}\{\hat f; x\} represents the inverse Fourier transform of \hat f evaluated at x.

Applications of this theorem to several models can be found in Leblanc and Angers (1995) and Angers (1996a, 1996b). In order to compute equations (2), (3) and (4), the Fourier transform of a Student-t density is needed. It is given, along with its first two derivatives, in the following proposition. Since the proof is mostly technical, it is omitted.

Proposition 1. If X ~ T_m(0, σ), then

\hat f_m(s) = \frac{(\sqrt{m}\,\sigma |s|)^{m/2}}{2^{[m-2]/2}\, \Gamma(m/2)}\, K_{m/2}(\sqrt{m}\,\sigma |s|),

\hat f_m'(s) = -\sqrt{m}\,\sigma\, \mathrm{sign}(s)\, \frac{(\sqrt{m}\,\sigma |s|)^{m/2}}{2^{[m-2]/2}\, \Gamma(m/2)}\, K_{[m-2]/2}(\sqrt{m}\,\sigma |s|),

\hat f_m''(s) = m\sigma^2\, \hat f_m(s) - m(m-1)\sigma^2\, \frac{(\sqrt{m}\,\sigma |s|)^{[m-2]/2}}{2^{[m-2]/2}\, \Gamma(m/2)}\, K_{[m-2]/2}(\sqrt{m}\,\sigma |s|),

where K_{m/2}(s) denotes the modified Bessel function of the second kind of order m/2.

Note that if m = 2k+1 with k ∈ ℕ, then, using Gradshteyn and Ryzhik (1980, equation 8.468), we have

K_{k+1/2}(\sqrt{2k+1}\,\sigma|s|) = \sqrt{\pi}\, e^{-\sqrt{2k+1}\,\sigma|s|}\, \big(2\sqrt{2k+1}\,\sigma|s|\big)^{-(k+1/2)} \sum_{p=0}^{k} \frac{(2k-p)!}{p!\,(k-p)!}\, \big(2\sqrt{2k+1}\,\sigma|s|\big)^{p}.    (5)

To obtain the marginal density of X, and the posterior mean and variance of θ, we need to compute F^{-1}\{\hat f_{2κ+1}(τs)\, \hat f^{(j)}_{2k+1}(s); x\} for j = 0, 1 and 2. Hence, the following two integrals need to be evaluated:

A_{k,l}(x) = \int_0^\infty \cos(|x|s)\, s^{k+\kappa-l+1}\, K_{k-l+1/2}(\sqrt{2k+1}\, s)\, K_{\kappa+1/2}(\sqrt{2\kappa+1}\,\tau s)\, ds,    (6)
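Proposition 1 lends itself to a direct numerical check. The sketch below (our code, with the closed form as reconstructed above; ft_numeric and ft_closed are illustrative names) compares the cosine transform of a T_m(0, σ) density, computed with QUADPACK's Fourier weighting, against the Bessel-K expression.

```python
# Numerical check of Proposition 1 as reconstructed above.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma
from scipy.stats import t as student_t

def ft_numeric(s, m, sigma):
    # The density is even, so f_hat(s) = 2 * int_0^inf f(x) cos(s x) dx.
    val, _ = quad(lambda x: 2 * student_t.pdf(x, m, scale=sigma),
                  0, np.inf, weight='cos', wvar=s)
    return val

def ft_closed(s, m, sigma):
    z = np.sqrt(m) * sigma * abs(s)
    return z ** (m / 2) * kv(m / 2, z) / (2 ** ((m - 2) / 2) * gamma(m / 2))

for m, sigma, s in [(3, 1.0, 0.7), (5, 2.0, 1.3)]:
    assert abs(ft_numeric(s, m, sigma) - ft_closed(s, m, sigma)) < 1e-6
```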

for l = 0 and 1, and

B_k(x) = \int_0^\infty \sin(|x|s)\, s^{k+\kappa+1}\, K_{k-1/2}(\sqrt{2k+1}\, s)\, K_{\kappa+1/2}(\sqrt{2\kappa+1}\,\tau s)\, ds.    (7)

Using Angers (1997), we can also show the following.

Theorem 2.

m_{2k+1}(x) = \frac{(2k+1)^{[2k+1]/4}\, (2\kappa+1)^{[2\kappa+1]/4}\, \tau^{[2\kappa+1]/2}}{\pi\, 2^{k+\kappa}\, \Gamma(k+1/2)\, \Gamma(\kappa+1/2)}\, A_{k,0}(x),

\hat\theta_{2k+1}(x) = x - \sqrt{2k+1}\, \mathrm{sign}(x)\, \frac{B_k(x)}{A_{k,0}(x)},

\rho_{2k+1}(x) = (2k+1) \left[ \frac{2k}{\sqrt{2k+1}}\, \frac{A_{k,1}(x)}{A_{k,0}(x)} - 1 - \left( \frac{B_k(x)}{A_{k,0}(x)} \right)^2 \right].

In order to compute equations (6) and (7), we need the following lemma, which can be proven using Gradshteyn and Ryzhik (1980, equations 3.944.5 and 3.944.6).

Lemma 1.

\int_0^\infty s^a \cos(xs)\, e^{-bs}\, ds = \frac{\Gamma(a+1)}{(b^2+x^2)^{[a+1]/2}}\, \cos\!\big([a+1] \tan^{-1}(x/b)\big),

\int_0^\infty s^a \sin(xs)\, e^{-bs}\, ds = \frac{\Gamma(a+1)}{(b^2+x^2)^{[a+1]/2}}\, \sin\!\big([a+1] \tan^{-1}(x/b)\big).

Using Lemma 1 and equation (5), the functions A_{k,l}(x) and B_k(x) can be evaluated easily; they are given in the following theorem.

Theorem 3.

A_{k,l}(x) = \frac{\pi}{2^{k-l+\kappa+1}\, (2k+1)^{[2(k-l)+1]/4}\, ([2\kappa+1]\tau^2)^{[2\kappa+1]/4}} \sum_{p=0}^{k-l} \sum_{q=0}^{\kappa} \frac{(2[k-l]-p)!}{p!\,(k-l-p)!}\, \frac{(2\kappa-q)!}{q!\,(\kappa-q)!}\, (p+q)!\, 2^{p+q}\, (2k+1)^{p/2}\, ([2\kappa+1]\tau^2)^{q/2}\, \frac{\cos\!\big([p+q+1] \tan^{-1}\!\big(|x| / [\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]\big)\big)}{\big([\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]^2 + x^2\big)^{[p+q+1]/2}},    (8)

B_k(x) = \frac{\pi}{2^{k+\kappa}\, (2k+1)^{[2k-1]/4}\, ([2\kappa+1]\tau^2)^{[2\kappa+1]/4}} \sum_{p=0}^{k-1} \sum_{q=0}^{\kappa} \frac{(2[k-1]-p)!}{p!\,(k-1-p)!}\, \frac{(2\kappa-q)!}{q!\,(\kappa-q)!}\, (p+q+1)!\, 2^{p+q}\, (2k+1)^{p/2}\, ([2\kappa+1]\tau^2)^{q/2}\, \frac{\sin\!\big([p+q+2] \tan^{-1}\!\big(|x| / [\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]\big)\big)}{\big([\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]^2 + x^2\big)^{[p+q+2]/2}}.    (9)
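As a sanity check on the double sum (8) as reconstructed here, the following sketch (our code; A_sum and A_quad are illustrative names) evaluates the closed form and compares it with direct numerical quadrature of the defining integral (6).

```python
# Closed-form sum (8), as reconstructed above, versus quadrature of (6).
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function K

def A_sum(x, k, l, kappa, tau):
    b = np.sqrt(2 * k + 1) + tau * np.sqrt(2 * kappa + 1)
    pref = np.pi / (2 ** (k - l + kappa + 1)
                    * (2 * k + 1) ** ((2 * (k - l) + 1) / 4)
                    * ((2 * kappa + 1) * tau ** 2) ** ((2 * kappa + 1) / 4))
    total = 0.0
    for p in range(k - l + 1):
        for q in range(kappa + 1):
            coef = (factorial(2 * (k - l) - p) * factorial(2 * kappa - q)
                    / (factorial(p) * factorial(k - l - p)
                       * factorial(q) * factorial(kappa - q)))
            coef *= (factorial(p + q) * 2 ** (p + q)
                     * (2 * k + 1) ** (p / 2)
                     * ((2 * kappa + 1) * tau ** 2) ** (q / 2))
            total += (coef * np.cos((p + q + 1) * np.arctan(abs(x) / b))
                      / (b * b + x * x) ** ((p + q + 1) / 2))
    return pref * total

def A_quad(x, k, l, kappa, tau):
    f = lambda s: (np.cos(abs(x) * s) * s ** (k + kappa - l + 1)
                   * kv(k - l + 0.5, np.sqrt(2 * k + 1) * s)
                   * kv(kappa + 0.5, np.sqrt(2 * kappa + 1) * tau * s))
    return quad(f, 0, np.inf)[0]

print(A_sum(2.0, 2, 0, 1, 1.5), A_quad(2.0, 2, 0, 1, 1.5))  # should agree
```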

Using Theorems 2 and 3, the posterior expected mean and the posterior variance can be computed using only a ratio of two finite sums. In Section 4, the case where the likelihood function is a Student-t density with an even number of degrees of freedom is considered. In this situation, the posterior quantities cannot be written using finite sums, although they can be expressed as the ratio of two infinite series (cf. Angers, 1997). However, using an approximation for the Student-t density (cf. Saleh, 1994), \hat\theta_{2k}(x) and \rho_{2k}(x) can be approximated accurately. Before doing so, we first discuss two limiting cases, that is, when |x| is large and when τ → ∞.

3 Special cases

The main advantage of using a heavy-tailed prior is that the resulting Bayes estimator, under the squared-error loss, is insensitive to the choice of prior when there is a conflict between the prior and the likelihood information. This situation is considered in the next subsection.

3.1 Behavior of \hat\theta_{2k+1}(x) for large |x|

In order to study the behavior of \hat\theta_{2k+1}(x) for large values of |x|, it should first be noted that

\cos\!\big(\tan^{-1}\!\big(x/[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]\big)\big) = \frac{\sqrt{2k+1}+\tau\sqrt{2\kappa+1}}{\sqrt{[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]^2 + x^2}},

\sin\!\big(\tan^{-1}\!\big(x/[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]\big)\big) = \frac{x}{\sqrt{[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]^2 + x^2}}.

Using these last equalities in equations (8) and (9), the following theorem can be proven.

Theorem 4.

A_{k,l}(x) = \frac{\pi}{2^{k-l+\kappa+1}\, (2k+1)^{[2(k-l)+1]/4}\, ([2\kappa+1]\tau^2)^{[2\kappa+1]/4}} \left[ \frac{c_l}{\big([\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]^2 + x^2\big)^2} + O(|x|^{-6}) \right],

B_k(x) = \frac{\pi}{2^{k+\kappa}\, (2k+1)^{[2k-1]/4}\, ([2\kappa+1]\tau^2)^{[2\kappa+1]/4}} \left[ \frac{4\, c_1\, x}{\big([\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]^2 + x^2\big)^3} + O(|x|^{-6}) \right],

where

c_l = \frac{(2[k-l])!\,(2\kappa)!}{(k-l)!\,\kappa!}\, \big[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}\big] \left\{ 2\big[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}\big] - \frac{2[\kappa-1]-3}{2\kappa(2\kappa+1)\tau^2}\, I_2(\kappa) + \frac{\tau\sqrt{2k+1}}{\sqrt{2\kappa+1}}\, I_1(\kappa)\, I_1(k-l) \right\} + \frac{2[k-l-1]+2}{2[k-l]}\, (2k+1)\, I_2(k-l),

and

I_a(b) = 1 if b ∈ {a, a+1, ...},  I_a(b) = 0 otherwise.
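The limiting behavior described in this subsection can also be observed by brute force. The sketch below (our code, with the illustrative choices k = 2, κ = 1, τ = 1) computes the posterior mean by quadrature and prints the gap x − θ̂(x), which shrinks as the observation moves away from the prior location.

```python
# Numerical illustration of robustness: with a heavier-tailed prior
# (kappa < k), the posterior mean approaches x as x grows.
import numpy as np
from scipy.integrate import quad
from scipy.stats import t as student_t

def post_mean(x, k=2, kappa=1, tau=1.0):
    joint = lambda th: (student_t.pdf(x - th, 2 * k + 1)
                        * student_t.pdf(th, 2 * kappa + 1, scale=tau))
    # Wide finite range, forcing nodes near both modes (0 and x).
    num = quad(lambda th: th * joint(th), -80, 80, points=[0.0, x], limit=200)[0]
    den = quad(joint, -80, 80, points=[0.0, x], limit=200)[0]
    return num / den

for x in [2.0, 5.0, 10.0, 20.0]:
    print(x, x - post_mean(x))  # the gap decays like O(1/x)
```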

Using Theorem 4, it can be shown that, as |x| → ∞,

\hat\theta_{2k+1}(x) = \left( 1 - \frac{8\, c_1\, (2k+1)}{c_0\, \big([\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]^2 + x^2\big)} \right) x + O(|x|^{-2}),

\rho_{2k+1}(x) = (2k+1) \left( \frac{4k\, c_1}{c_0} - 1 \right) + O(|x|^{-2}).

Note that, as expected, \hat\theta_{2k+1}(x) collapses to x when a conflict occurs between the prior and the likelihood information.

3.2 Behavior of \hat\theta_{2k+1}(x) for large τ

If the prior scale parameter is large, the resulting Bayes estimator should be close to the one obtained using a uniform prior on θ (i.e., π(θ) ∝ 1). In this subsection, the behavior of \hat\theta_{2k+1}(x) and \rho_{2k+1}(x) is considered when τ → ∞. If τ is large,

\tan^{-1}\!\big(x/[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]\big) = \frac{x}{\sqrt{2k+1}+\tau\sqrt{2\kappa+1}} + O(\tau^{-3}).

Consequently,

\cos\!\big([p+q+1] \tan^{-1}\!\big(x/[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]\big)\big) = 1 + O(\tau^{-2}),    (10)

\sin\!\big([p+q+2] \tan^{-1}\!\big(x/[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]\big)\big) = \frac{(p+q+2)\, x}{\sqrt{2k+1}+\tau\sqrt{2\kappa+1}} + O(\tau^{-3}).    (11)

Substituting equations (10) and (11) in equations (8) and (9), we obtain the following theorem.

Theorem 5.

A_{k,l}(x) = \frac{\pi\, (2[k-l])!}{2^{k-l+\kappa+1}\, (2k+1)^{[2(k-l)+1]/4}\, ([2\kappa+1]\tau^2)^{[2\kappa+1]/4}\, (k-l)!} \left[ \frac{S_0}{\sqrt{[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]^2 + x^2}} + O(\tau^{-2}) \right],

B_k(x) = \frac{\pi\, (2[k-1])!}{2^{k+\kappa}\, (2k+1)^{[2k-1]/4}\, ([2\kappa+1]\tau^2)^{[2\kappa+1]/4}\, (k-1)!} \left[ \frac{S_1\, x}{\tau\sqrt{2\kappa+1}\, \big([\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]^2 + x^2\big)} + O(\tau^{-4}) \right],

where

S_0 = \sum_{q=0}^{\kappa} \frac{(2\kappa-q)!}{(\kappa-q)!}\, 2^q,    S_1 = \sum_{q=0}^{\kappa} \frac{(2\kappa-q)!}{(\kappa-q)!}\, (q+1)(q+2)\, 2^q.
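The large-τ behavior can be illustrated the same way: as the prior scale grows, the posterior mean approaches x, the generalized Bayes estimate under π(θ) ∝ 1 for a symmetric likelihood. Again a sketch with illustrative parameter choices, not the paper's code.

```python
# Numerical illustration of Section 3.2: the posterior mean approaches x
# as the prior scale tau grows.
import numpy as np
from scipy.integrate import quad
from scipy.stats import t as student_t

def post_mean(x, k=2, kappa=1, tau=1.0):
    joint = lambda th: (student_t.pdf(x - th, 2 * k + 1)
                        * student_t.pdf(th, 2 * kappa + 1, scale=tau))
    num = quad(lambda th: th * joint(th), -np.inf, np.inf)[0]
    den = quad(joint, -np.inf, np.inf)[0]
    return num / den

x = 3.0
for tau in [1.0, 5.0, 25.0, 125.0]:
    print(tau, x - post_mean(x, tau=tau))  # the gap vanishes as tau grows
```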

[Table 1: maximum of |e_k(η)| and of |η e_k(η)|, together with the locations η* and η** where these maxima occur, for k = 1, 2, ..., 15.]

Using Theorem 5, it can be shown that

\hat\theta_{2k+1}(x) = \left( 1 - \frac{(2k+1)\, S_1}{(2k-1)\, S_0\, \tau\sqrt{2\kappa+1}\, \sqrt{[\sqrt{2k+1}+\tau\sqrt{2\kappa+1}]^2 + x^2}} \right) x + O(\tau^{-3}),

\rho_{2k+1}(x) = \frac{2k+1}{2k-1} + O(\tau^{-1}).

Hence, \hat\theta_{2k+1}(x) has the desired behavior. In the next section, approximations for \hat\theta_{2k}(x) and \rho_{2k}(x) are given.

4 Even number of degrees of freedom for the likelihood function

In Saleh (1994), it is shown that

f_{2k}(x) = \frac{2k-1}{4k}\, f_{2k-1}(x) + \frac{2k+1}{4k}\, f_{2k+1}(x) + e_k(x),    (12)

where f_m(x) is given by equation (1) with η = 0 and ν = 1. (Note that other approximations for the Student-t density are discussed in Saleh (1994).) The term e_k(x) represents an error term. Using Mathematica, the maximum of |e_k(η)| is found to be approximately proportional to 1/k. In Table 1, we computed max_η |e_k(η)| for k = 1 to 15, along with the values of η, denoted by η*, where the maximum occurs. It can be seen that the maximum error becomes negligible as k increases.
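The error term e_k of equation (12) is easy to explore numerically. The grid search below (our code, standing in for the Mathematica maximization; its output is not the paper's Table 1) computes max_η |e_k(η)| and max_η |η e_k(η)| for small k.

```python
# Error term e_k of equation (12): the T_{2k} density minus the mixture
# of T_{2k-1} and T_{2k+1}, maximized by grid search.
import numpy as np
from scipy.stats import t as student_t

def e_k(x, k):
    mix = ((2 * k - 1) * student_t.pdf(x, 2 * k - 1)
           + (2 * k + 1) * student_t.pdf(x, 2 * k + 1)) / (4 * k)
    return student_t.pdf(x, 2 * k) - mix

grid = np.linspace(0.0, 20.0, 20001)  # e_k is even, so search x >= 0 only
for k in (1, 2, 3, 4, 5):
    err = np.abs(e_k(grid, k))
    i = err.argmax()
    print(k, grid[i], err[i], np.abs(grid * e_k(grid, k)).max())
```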

Using equation (12), the Fourier transform of f_{2k}(x) given in Proposition 1 can also be approximated by

\hat f_{2k}(s) ≈ \frac{2k-1}{4k}\, \hat f_{2k-1}(s) + \frac{2k+1}{4k}\, \hat f_{2k+1}(s) = \frac{2k-1}{4k}\, \frac{(\sqrt{2k-1}\,|s|)^{k-1/2}}{2^{k-3/2}\, \Gamma(k-1/2)}\, K_{k-1/2}(\sqrt{2k-1}\,|s|) + \frac{2k+1}{4k}\, \frac{(\sqrt{2k+1}\,|s|)^{k+1/2}}{2^{k-1/2}\, \Gamma(k+1/2)}\, K_{k+1/2}(\sqrt{2k+1}\,|s|).

Using this approximation, the following theorem can be proved.

Theorem 6. If X | θ ~ T_{2k}(θ, 1) and θ ~ T_{2κ+1}(0, τ), then

\hat\theta_{2k}(x) ≈ w_k(x)\, \hat\theta_{2k-1}(x) + (1 - w_k(x))\, \hat\theta_{2k+1}(x),    (13)

\rho_{2k}(x) ≈ w_k(x)\, \rho_{2k-1}(x) + (1 - w_k(x))\, \rho_{2k+1}(x) + w_k(x)\,(1 - w_k(x))\, \big(\hat\theta_{2k+1}(x) - \hat\theta_{2k-1}(x)\big)^2,

where

w_k(x) = \frac{(2k-1)^{[2k+7]/4}\, A_{k-1,0}(x)}{(2k-1)^{[2k+7]/4}\, A_{k-1,0}(x) + (2k+1)^{[2k+5]/4}\, A_{k,0}(x)}.

In order to see whether the approximation given in Theorem 6 is accurate, let \hat\theta_{2k}(x) denote the exact Bayes estimator of θ (cf. Angers, 1997) and \tilde\theta_{2k}(x) its approximation using equation (13). Then, it can be shown that

\hat\theta_{2k}(x) - \tilde\theta_{2k}(x) = \frac{E_1(x) - E_0(x)\, (x - \tilde\theta_{2k}(x))}{\tilde m(x) + E_0(x)},

where E_i(x) = \int \eta^i\, e_k(\eta)\, \pi(x - \eta)\, d\eta for i = 0, 1 and \tilde m(x) represents the marginal density of x using the approximation given by equation (12). Using Mathematica, we also tabulated in Table 1 the value of max_η |η e_k(η)| for k = 1 to 15, along with the value of η, denoted by η**, for which the maximum occurs. Fitting a log-log regression model, we find that max_η |η e_k(η)| also decreases approximately like a power of k.

The approximation error (in absolute value), that is |\hat\theta_{2k}(x) - \tilde\theta_{2k}(x)|, is plotted in Figure 1 for 2k = 4, 6 and 10, κ = 0 and 0 ≤ x ≤ 10. (Note that \hat\theta_{2k}(x) has been computed using equation (3), with the I_l(x) integrals evaluated using a Monte Carlo integration technique.) The marginal density (10 m_3(x)) is also plotted in Figure 1 to indicate which values of x have a large likelihood. From Figure 1, it can be seen that the maximum error occurs around x = 5 and that it decreases as k increases. For small values of x (values for which the marginal of X is maximal), the error does not depend much on k. For large values of x, the approximation is better for larger k.

5 Conclusion

In this paper, we provide an exact (and closed form) solution for the estimation of a Student-t location parameter when the prior is also a Student-t density and both numbers of degrees of freedom are odd. This estimator is also shown to be insensitive to misspecification of the prior location and scale parameters. It also corresponds to the generalized Bayes estimator (based on π(θ) ∝ 1) when τ is large. When the number of degrees of freedom of the likelihood function is even, the previous estimator does not apply. However, based on this estimator, an approximation of \hat\theta_{2k}(x) is proposed. This approach can easily be generalized to the cases where either the prior or the likelihood, or both, are Student-t densities with an even number of degrees of freedom.

[Figure 1: Approximation error for 2k = 4 (top curve), 2k = 6 (middle curve) and 2k = 10 (bottom curve), with κ = 0.]

References

[1] Angers, J.-F. (1992). Use of Student-t prior for the estimation of normal means: A computational approach. In Bayesian Statistics IV (J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, eds.), Oxford University Press.

[2] Angers, J.-F. (1996a). Fourier transform and Bayes estimator of a location parameter. Statistics & Probability Letters 29.

[3] Angers, J.-F. (1996b). Protection against outliers using a symmetric stable law prior. IMS Lecture Notes - Monograph Series 29.

[4] Angers, J.-F. (1997). Bayesian estimator of the location parameter of a Student-t density. Technical Report 97-07, Nottingham University Statistics Group, University of Nottingham.

[5] Angers, J.-F. and J. O. Berger (1991). Robust hierarchical Bayes estimation of exchangeable means. The Canadian Journal of Statistics 19.

[6] Fan, T. H. and J. O. Berger (1990). Exact convolution of t distributions, with applications to Bayesian inference for a normal mean with t prior distributions. Journal of Statistical Computing and Simulation 36.

[7] Fan, T. H. and J. O. Berger (1992). Behaviour of the posterior distribution and inferences for a normal mean with t prior distributions. Statistics & Decisions 10.

[8] Gradshteyn, I. S. and I. M. Ryzhik (1980). Table of Integrals, Series and Products. New York: Academic Press.

[9] Leblanc, A. and J.-F. Angers (1995). Fast Fourier transforms and Bayesian estimation of location parameters. Technical Report DMS-380, Département de mathématiques et de statistique, Université de Montréal.

[10] O'Hagan, A. (1979). On outlier rejection phenomena in Bayes inference. Journal of the Royal Statistical Society, Ser. B 41.

[11] Saleh, A. A. (1994). Approximating the characteristic function of the Student's t distribution. The Egyptian Statistical Journal 39(2).
