Practical Bayesian Quantile Regression. Keming Yu, University of Plymouth, UK


1 Practical Bayesian Quantile Regression. Keming Yu, University of Plymouth, UK. A brief summary of some of our recent work (Keming Yu, Rana Moyeed and Julian Stander).

2 Summary. We develop a Bayesian framework for quantile regression, including Tobit quantile regression. We discuss the selection of the likelihood, and families of prior distributions on the quantile regression parameter vector that lead to proper posterior distributions with finite moments. We show how the posterior distribution can be sampled and summarized by Markov chain Monte Carlo (MCMC) methods. A method for quantile regression model choice is also developed. In an empirical comparison, our approach outperformed some common classical estimators.

3 1. Background. Linear regression models: Aim: estimate E(Y | X); Method: least-squares minimization; Suitability: Gaussian errors, symmetric distributions; Weaknesses: skewed conditional distributions, outliers, tail behaviour.

4 Quantile regression models (Koenker and Bassett, 1978): Aim: estimate the pth quantile of Y given X (0 < p < 1), and explore the complete relationship between Y and X; Special case: median regression (p = 0.5); Method: check function minimization; Check function: ρ_p(u) = u(p − I(u < 0)); Applications: reference charts in medicine (Cole and Green, 1992), survival analysis (Koenker and Geling, 2001), Value at Risk (Bassett and Chen, 2001), labour economics (Buchinsky, 1995), flood return periods (Yu et al., 2003).
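
As a concrete illustration, the check function and its minimization take only a few lines of R (the S-PLUS/R code mentioned later in the talk is not reproduced here; this is a toy sketch with simulated data and our own function names):

```r
# Check (pinball) loss: rho_p(u) = u * (p - I(u < 0))
rho <- function(u, p) u * (p - (u < 0))

# Classical quantile regression minimizes the summed check loss.
# Toy example with one covariate:
set.seed(1)
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100)
loss <- function(beta, p) sum(rho(y - beta[1] - beta[2] * x, p))
fit <- optim(c(0, 0), loss, p = 0.5)  # p = 0.5 gives median regression
fit$par                               # estimated intercept and slope
```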

5 2. Basic setting. Consider the standard linear model y_i = μ(x_i) + ε_i, where typically μ(x_i) = x_i'β for a vector of coefficients β. The pth (0 < p < 1) quantile of ε_i is the value q_p for which P(ε_i < q_p) = p. The pth conditional quantile of y_i given x_i is denoted by q_p(y_i | x_i) = x_i'β(p). (1)

6 To make inference for the parameter β(p), given p and observations on (X, Y), the posterior distribution of β(p), π(β | y), is given by π(β | y) ∝ L(y | β) π(β), (2) where π(β) is the prior distribution of β and L(y | β) is the likelihood function.

7 3. Selecting the likelihood function. There are several ways to select L(y | β) in the set-up above. One is to model the error distribution by a mixture distribution together with a Dirichlet process or Pólya tree prior (Walker et al., 1999; Kottas and Gelfand, 2001). However, as Richardson (1999) commented, this introduces extra parameters into the inference, complicates the associated computations, and makes the choice of the partition of the prior space difficult. Another is to use a substitute likelihood (Dunson et al., 2003, Biometrics), but this is not a proper likelihood.

8 A simple and natural likelihood is based on the asymmetric Laplace distribution, with probability density, for 0 < p < 1, f_p(u) = p(1 − p) exp{−ρ_p(u)}. (3) Link: minimizing the check-function loss is exactly equivalent to maximizing the likelihood below. Likelihood function: L(y | β) = p^n (1 − p)^n exp{−Σ_i ρ_p(y_i − x_i'β)}. (4) Feature: apart from the parameter β, no extra parameters need to be inferred, so we only need to set priors for β.
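
A sketch of this working log-likelihood in R (function and argument names are ours; X is the n x k design matrix, with a column of ones for the intercept):

```r
# log L(y | beta) = n log p + n log(1 - p) - sum_i rho_p(y_i - x_i' beta),
# i.e. the log of equation (4).
loglik_ald <- function(beta, y, X, p) {
  u <- y - as.vector(X %*% beta)
  length(y) * (log(p) + log(1 - p)) - sum(u * (p - (u < 0)))
}
```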

9 Once we have a prior for β(p) and a data set, we can make posterior inference for the parameters. How? As there is no conjugate prior, we use the popular MCMC techniques to sample from the posterior. Why Bayes? Classical methods rely on asymptotics, requiring either a large sample or the bootstrap. In contrast, MCMC sampling enables us to make exact inference for any sample size without resorting to asymptotic calculations.

10 For example, the asymptotic covariance matrices of the parameter estimators in these classical approaches to quantile regression depend on the error density of ε, and are therefore difficult to estimate reliably. In the Bayesian framework, variance estimates, as well as any other posterior summary, come out as a by-product of the MCMC sampler, and are therefore trivial to obtain once samples from the posterior distribution are available. Moreover, the Bayesian paradigm enables us to incorporate prior information in a natural way, whereas the frequentist paradigm does not, and it takes the uncertainty of the parameters fully into account.

11 4. Bayesian posterior computation via MCMC. An MCMC scheme constructs a Markov chain whose equilibrium distribution is the posterior π(β | y). After running the Markov chain for a burn-in period so that it can reach equilibrium, one obtains samples from π(β | y). One popular method for constructing such a Markov chain is the Metropolis-Hastings (MH) algorithm, in which a candidate is generated from an auxiliary distribution and then accepted or rejected with some probability. The candidate-generating distribution q(β, β_c) may depend on the current state β_c of the Markov chain.

12 A candidate β* is accepted with a certain acceptance probability α(β*, β_c), depending on the current state β_c, given by α(β*, β_c) = min{ [π(β*) L(y | β*) q(β_c, β*)] / [π(β_c) L(y | β_c) q(β*, β_c)], 1 }. If, for example, a simple random walk is used to generate β* from β_c, then the ratio q(β_c, β*)/q(β*, β_c) = 1. In general the steps of the MH algorithm are therefore:
Step 0: start with an arbitrary value β^(0).
For n from 1 to N:
Step n: generate β* from q(β*, β_c) and u from U(0, 1);
if u ≤ α(β*, β_c), set β^(n) = β* (acceptance);
if u > α(β*, β_c), set β^(n) = β^(n−1) (rejection).
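
For concreteness, here is a minimal random-walk MH sampler in R for this posterior under the flat prior π(β) ∝ 1 (so both the prior ratio and the q-ratio cancel in α); the step size and names are our own tuning choices, not prescriptions from the talk:

```r
mh_quantreg <- function(y, X, p, n_iter = 3000, step = 0.05) {
  k <- ncol(X)
  beta <- numeric(k)                     # Step 0: arbitrary starting value
  draws <- matrix(NA_real_, n_iter, k)
  ll_cur <- loglik_ald(beta, y, X, p)
  for (n in 1:n_iter) {
    cand <- beta + rnorm(k, sd = step)   # candidate from a simple random walk
    ll_cand <- loglik_ald(cand, y, X, p)
    if (log(runif(1)) <= ll_cand - ll_cur) {  # accept with probability alpha
      beta <- cand
      ll_cur <- ll_cand
    }
    draws[n, ] <- beta                   # on rejection, keep the current state
  }
  draws
}
# Usage: discard a burn-in, then summarize the remaining draws, e.g.
# out <- mh_quantreg(y, cbind(1, x), p = 0.5); colMeans(out[-(1:1000), ])
```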

13 As mentioned above, after running this procedure for a burn-in period, the samples obtained may be thought of as coming from the posterior distribution. Hence we can estimate posterior moments, standard deviations and credible intervals from this posterior sample. We found that convergence was very rapid. Remark: the same set-up works for Tobit quantile regression. Suppose that y and y* are random variables connected by the censoring relationship y = max{y^0, y*}, where y^0 is a known censoring point. In this case, we have found it simplifies the algorithm to fix the censoring point at zero, via a simple transformation of any non-zero censoring point.
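
The shift in question is elementary; a one-line R sketch, where y0 denotes the known (non-zero) censoring point:

```r
# With y = max(y0, y*), the shifted response z = y - y0 satisfies
# z = max(0, y* - y0), so the sampler can always assume censoring at zero.
# Fitted quantiles on the z scale are mapped back by adding y0.
shift_censoring <- function(y, y0) y - y0
```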

14 S-PLUS and R code implementing the algorithms is freely available.

15 5. Some theoretical results, including prior selection. As Richardson (1999) noted, popular priors tend to be those whose parameters can be set straightforwardly and which lead to posteriors of a relatively immediate form. Although no standard conjugate prior is available for the quantile regression formulation, MCMC methods may be used to draw samples from the posterior distribution. In principle, this allows us to use virtually any prior distribution. However, we should select priors that yield proper posteriors, i.e. choose the prior π(β) from a class of known distributions such that the posterior is proper.

16 First, the posterior is proper if and only if 0 < ∫_{R^{T+1}} π(β | y) dβ < ∞, (5) or, equivalently, if and only if 0 < ∫_{R^{T+1}} L(y | β) π(β) dβ < ∞. Moreover, we require that all posterior moments exist; that is, E[ ∏_{j=0}^{T} |β_j|^{r_j} | y ] < ∞, (6) where (r_0, ..., r_T) denotes the orders of the moments of β = (β_0, ..., β_T). We now establish a bound for the integral ∫_{R^{T+1}} ∏_{j=0}^{T} |β_j|^{r_j} L(y | β) π(β) dβ that allows us to obtain proper posterior moments.

17 Theorem 1 (basic lemma): Let g(t) = exp(−|t|), and let h_1(p) = min{p, 1 − p} and h_2(p) = max{p, 1 − p}. Then all posterior moments exist if and only if ∫_{R^{T+1}} ∏_{j=0}^{T} |β_j|^{r_j} ∏_{i=1}^{n} g{h_k(p)(y_i − x_i'β)} π(β) dβ is finite for both k = 1 and k = 2.

18 Theorem 2 establishes that, in the absence of any realistic prior information, we may legitimately use an improper uniform prior distribution for all the components of β. This choice is appealing because the resulting posterior distribution is proportional to the likelihood surface. Theorem 2: Assume the prior for β is improper and uniform, that is, π(β) ∝ 1; then all posterior moments of β exist.

19 Theorem 3: When the elements of β are a priori independent, with each π(β_i) ∝ exp(−|β_i − µ_i| / λ_i), a double-exponential with fixed µ_i and λ_i > 0, all posterior moments of β exist. Theorem 4: Assume the prior for β is multivariate normal N(µ, Σ) with fixed µ and Σ; then all posterior moments of β exist. In particular, when the elements of β are a priori independent and univariate normal, all posterior moments of β exist.
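
In R, each of these priors enters the sampler of Section 4 as a log-prior term added to loglik_ald() inside the acceptance ratio; a sketch with our own function names:

```r
logprior_flat <- function(beta) 0                  # Theorem 2: pi(beta) propto 1
logprior_dexp <- function(beta, mu, lambda) {      # Theorem 3: double exponential
  -sum(abs(beta - mu) / lambda)
}
logprior_normal <- function(beta, mu, Sigma) {     # Theorem 4: N(mu, Sigma)
  d <- beta - mu
  -0.5 * sum(d * solve(Sigma, d))                  # -(1/2) d' Sigma^{-1} d
}
```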

20 6. Empirical comparison. Buchinsky and Hahn (1998) performed Monte Carlo experiments to compare their estimator with the one proposed by Powell (1986) for Tobit quantile regression estimation. One of the models they used was given by y = max{−0.75, y*} and y* = 1 + x_1 + x_2 + ε, where the regressors x_1 and x_2 were each drawn from a standard normal distribution and the error term has multiplicative heteroskedasticity, obtained by taking ε = ξ·v(x) with ξ ~ N(0, 25) and v(x) = x_1 + x_1^2 + x_2 + x_2^2.
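
A short R sketch for generating data from this design; the exact forms of y* and v(x) written here follow our reading of the model above and should be checked against Buchinsky and Hahn (1998):

```r
sim_bh <- function(n) {
  x1 <- rnorm(n)
  x2 <- rnorm(n)
  v <- x1 + x1^2 + x2 + x2^2   # multiplicative heteroskedasticity term
  eps <- rnorm(n, sd = 5) * v  # xi ~ N(0, 25), i.e. standard deviation 5
  ystar <- 1 + x1 + x2 + eps   # latent response
  data.frame(y = pmax(-0.75, ystar), x1 = x1, x2 = x2)  # censor at -0.75
}
```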

21 For the median regression in this model, Table 1 summarizes the biases, root mean square errors (RMSE) and 95% credible intervals for β_0 and β_1 obtained from three approaches: the BH method (Buchinsky and Hahn, 1998), Powell's estimator (Powell, 1986) and the proposed Bayesian method with a uniform prior. The values for BH and Powell are as reported in Table 1 of Buchinsky and Hahn (1998). In particular, the BH method used log-likelihood cross-validated bandwidth selection for kernel estimation of the censoring probability, and the 95% confidence intervals for the BH and Powell estimators are based on their asymptotic normality theory. The Bayesian results are based on a burn-in of 1000 iterations followed by 2000 sampled values (see Figure).

22 [Figure]

23 Table 1. Bias, root mean square error (RMSE) and 95% credible intervals for the parameters β_0 and β_1 of the median regression. Samples were generated from the model considered by Buchinsky and Hahn (1998). Three approaches were used: the BH method (Buchinsky and Hahn, 1998), Powell's estimator (Powell, 1986) and the proposed Bayesian method with a uniform prior.

24 [Table 1 layout: columns β_0 and β_1, each with sub-columns BH, Powell and Bayes; for each sample size (100, ...), rows give Bias, RMSE and the 95% interval. The numerical entries were not preserved in this transcription.]

25 Clearly, the proposed Bayesian method outperformed the BH and Powell methods. It yields considerably lower biases, lower root mean square errors and much more precise credible intervals. S-PLUS code to implement the method for this comparison is available.

26 7. Reference chart for immunoglobulin-G. This data set gives the serum concentration (grams per litre) of immunoglobulin-G (IgG) in 298 children aged from 6 months to 6 years (Isaacs et al., 1983). The relationship of IgG with age is quite weak, with some visual evidence of positive skewness. We took the response variable Y to be the IgG concentration and fitted a quantile regression that is quadratic in age x: q_p(y | x) = β_0(p) + β_1(p) x + β_2(p) x^2, for 0 < p < 1. The figure shows the data together with the fitted quantile regression curves; each point on a curve is the mean of the posterior predictive distribution. Credible intervals around these curves can also be obtained from the MCMC samples of β(p).
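
Given MCMC draws of β(p) from a sampler like the one in Section 4 (run on the design matrix cbind(1, age, age^2)), the fitted curves are immediate; a sketch in R, with an illustrative age grid of our choosing:

```r
age_grid <- seq(0.5, 6, by = 0.1)  # ages in years, 6 months to 6 years
fitted_quantile_curve <- function(draws) {
  b <- colMeans(draws)             # posterior means of (beta0, beta1, beta2)
  b[1] + b[2] * age_grid + b[3] * age_grid^2
}
# Pointwise credible bands: evaluate the curve for every MCMC draw and take
# quantiles of the fitted values at each grid point.
```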

27 [Figure: IgG data with fitted quantile regression curves]

28 8. Model choice using marginal likelihoods and Bayes factors. Consider the problem of comparing a collection of models {M_1, ..., M_L} that reflect competing hypotheses about the regression form. Model choice can be dealt with by calculating Bayes factors. Under model M_k, suppose the pth quantile model is given by y | M_k = x^{(k)}'β_{p(k)} + ε_{p(k)}; then the marginal likelihood arising from estimating β_{p(k)} is defined as m(y | M_k) = ∫ L(y | M_k, β_{p(k)}) π(β_{p(k)} | M_k) dβ_{p(k)}, which is the normalizing constant of the posterior density. The calculation of the marginal likelihood has attracted considerable interest in the recent MCMC literature. In particular, Chib (1995) and Chib and Jeliazkov (2001) developed a simple approach to estimating the marginal likelihood using the output of the Gibbs sampler and the MH algorithm, respectively. Here we have log m(y | M_k) = log L(y | M_k, β*_{p(k)}) + log π(β*_{p(k)} | M_k) − log π(β*_{p(k)} | y, M_k), from which the marginal likelihood can be estimated by finding an estimate of the posterior ordinate π(β*_{p(k)} | y, M_k).

29 We denote this estimate by π̂(β*_{p(k)} | y, M_k). For estimation efficiency, β*_{p(k)} is generally taken to be a point of high density in the support of the posterior. Substituting this estimate into log m(y | M_k), we get log m̂(y | M_k) = n log(p(1 − p)) − Σ ρ_p(y − x^{(k)}'β*_{p(k)}) + log π(β*_{p(k)} | M_k) − log π̂(β*_{p(k)} | y, M_k), in which the first term n log(p(1 − p)) is constant and the sum runs over all data points. Once the posterior ordinate is estimated, we can estimate the Bayes factor of any two models M_k and M_l by B̂_{kl} = exp{log m̂(y | M_k) − log m̂(y | M_l)}. For an improper prior, a simulation-consistent estimate of π̂(β*_{p(k)} | y, M_k) is given by π̂(β* | y) = [G^{−1} Σ_{g=1}^{G} α(β^{(g)}, β*) q(β^{(g)}, β*)] / [J^{−1} Σ_{j=1}^{J} α(β*, β^{(j)})], in which α(β, β*) = min{1, [π(β*)/π(β)] L*(β, β*)},

30 and L*(β, β*) = exp{−(Σ ρ_p(y − x^{(k)}'β*) − Σ ρ_p(y − x^{(k)}'β))}, where {β^{(j)}} are samples drawn from q(β*, ·) and {β^{(g)}} are samples drawn from the posterior distribution. For a proper prior, π̂(β* | y) = ∫ α(β, β*) π(β | y) dβ / ∫ α*(β*, β) π(β) dβ, in which α*(β*, β) = min{1/π(β), [1/π(β*)] L*(β*, β)}. This implies that a simulation-consistent estimate of the posterior ordinate is given by π̂(β* | y) = [G^{−1} Σ_{g=1}^{G} α(β^{(g)}, β*)] / [J^{−1} Σ_{j=1}^{J} α*(β*, β^{(j)})], where {β^{(j)}} are samples drawn from the proper prior distribution and {β^{(g)}} are samples drawn from the posterior distribution.
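
A schematic R sketch of this posterior-ordinate calculation for the flat-prior, random-walk sampler above (so α reduces to the likelihood ratio, and the marginal likelihood is defined only up to the arbitrary constant in the improper prior); all names and the tuning constant are ours:

```r
log_marglik <- function(y, X, p, draws, beta_star, step = 0.05) {
  alpha <- function(from, to)   # MH acceptance probability under a flat prior
    min(1, exp(loglik_ald(to, y, X, p) - loglik_ald(from, y, X, p)))
  q_dens <- function(from, to)  # symmetric normal random-walk proposal density
    prod(dnorm(to - from, sd = step))
  # Numerator: average over posterior draws beta^(g)
  num <- mean(apply(draws, 1,
                    function(b) alpha(b, beta_star) * q_dens(b, beta_star)))
  # Denominator: average over draws beta^(j) from q(beta_star, .)
  prop <- sweep(matrix(rnorm(nrow(draws) * length(beta_star), sd = step),
                       nrow = nrow(draws)), 2, beta_star, "+")
  den <- mean(apply(prop, 1, function(b) alpha(beta_star, b)))
  # log m(y) = log L(y | beta_star) + log pi(beta_star) - log pi_hat(beta_star | y)
  loglik_ald(beta_star, y, X, p) - log(num / den)
}
```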

31 9. Inference with a scale parameter. One may be interested in introducing a scale parameter into the likelihood function L(y | β) for the proposed Bayesian inference. With scale parameter σ > 0, L(y | β, σ) = [p^n (1 − p)^n / σ^n] exp{−Σ_{i=1}^{n} ρ_p((y_i − x_i'β)/σ)}. The corresponding posterior distribution π(β, σ | y) can be written as π(β, σ | y) ∝ L(y | β, σ) π(β, σ), where π(β, σ) is the prior distribution of (β, σ) for a particular p. As what interests us is the regression parameter β, while σ is a nuisance parameter, we may integrate out σ and investigate only the marginal posterior π(β | y). For example, under the reference prior π(β, σ) ∝ 1/σ, we obtain π(β | y) ∝ (Σ_{i=1}^{n} ρ_p(y_i − x_i'β))^{−n}, or log π(β | y) = const − n log Σ_{i=1}^{n} ρ_p(y_i − x_i'β). Implementing the MCMC algorithm on this posterior form, we found the simulation results to be much the same as those obtained using the likelihood (4).
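
This marginal posterior is easy to code; a sketch in R (names are ours) that can replace loglik_ald() in the sampler of Section 4:

```r
# log pi(beta | y) = -n * log( sum_i rho_p(y_i - x_i' beta) ) + constant,
# after integrating sigma out under the reference prior pi(beta, sigma) propto 1/sigma.
logpost_marginal <- function(beta, y, X, p) {
  u <- y - as.vector(X %*% beta)
  -length(y) * log(sum(u * (p - (u < 0))))
}
```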

32 References.
Bilias, Y., Chen, S. and Ying, Z. (2000): Simple resampling methods for censored regression quantiles, Journal of Econometrics, 99.
Buchinsky, M. (1998): Recent advances in quantile regression models, Journal of Human Resources, 33.
Chib, S. (1992): Bayes inference in the Tobit censored regression model, Journal of Econometrics, 51.
Isaacs, D., Altman, D.G., Tidmarsh, C.E., Valman, H.B. and Webster, A.D.B. (1983): Serum immunoglobulin concentrations in preschool children measured by laser nephelometry: reference ranges for IgG, IgA, IgM, Journal of Clinical Pathology, 36.
Koenker, R. and Bassett, G.S. (1978): Regression quantiles, Econometrica, 46.
Hahn, J. (1995): Bootstrapping quantile regression estimators, Econometric Theory, 11.
Huang, H. (2001): Bayesian analysis of the SUR Tobit model, Applied Economics Letters, 8.
Kottas, A. and Gelfand, A.E. (2001): Bayesian semiparametric median regression model, Journal of the American Statistical Association, 91.

33 Powell, J. (1986a): Censored regression quantiles, Journal of Econometrics, 32.
Powell, J. (1986b): Symmetrically trimmed least squares estimation for Tobit models, Econometrica, 54.
Richardson, S. (1999): Contribution to the discussion of Walker et al., Bayesian nonparametric inference for random distribution and related functions, Journal of the Royal Statistical Society B, 61.
Walker, S.G., Damien, P., Laud, P.W. and Smith, A.F.M. (1999): Bayesian nonparametric inference for random distribution and related functions, Journal of the Royal Statistical Society B, 61.
Powell, J.L. (2001): Semiparametric estimation of censored selection models, in Hsiao, C., Morimune, K. and Powell, J.L. (eds), Nonlinear Statistical Modeling, Cambridge University Press.
Yu, K. and Moyeed, R.A. (2001): Bayesian quantile regression, Statistics and Probability Letters, 54.
Yu, K., Lu, Z. and Stander, J. (2003): Quantile regression: applications and current research areas, The Statistician, 52.
