Integer Valued AR Processes with Explanatory Variables


Integer Valued AR Processes with Explanatory Variables

V. Enciso-Mora, P. Neal & T. Subba Rao

First version: 2 December 2008

Research Report No. 20, 2008, Probability and Statistics Group, School of Mathematics, The University of Manchester

Integer valued AR processes with explanatory variables

Víctor Enciso-Mora (University of Manchester), Peter Neal (University of Manchester) and T. Subba Rao (University of Manchester)

November 7, 2008

Address: School of Mathematics, University of Manchester, Alan Turing Building, Oxford Rd, Manchester, M13 9PL, United Kingdom.

Abstract

Integer valued AR (INAR) processes are perfectly suited for modelling count data. We consider the inclusion of explanatory variables into the INAR model to extend the applicability of INAR models and give an alternative to Poisson regression models. An efficient MCMC algorithm is constructed to analyze the model and incorporates both explanatory variable and order selection. The methodology is illustrated by analyzing monthly polio incidences in the USA and claims from the logging industry to the British Columbia Workers Compensation Board.

Keywords: Integer valued time series, reversible jump MCMC, count data, explanatory variables.

AMS Primary: 62M10; AMS Secondary: 65C40.

1 Introduction

There has been considerable recent interest in developing models and statistical inference for integer valued time series. The modelling has primarily been divided into two approaches: Poisson regression models, see for example Davis et al. (2003) and Davis et al. (2005), and integer valued ARMA models, see for example McCabe and Martin (2005) and Jung and Tremayne (2006).

The author would like to thank CONACYT (Consejo Nacional de Ciencia y Tecnología, México) for funding this research.

Poisson regression models assume that an integer valued time series {X_t}, given explanatory variables w_t, is generated by X_t ~ Po(exp(w_t^T β)), where β is a vector of parameters. (Throughout this paper all vectors are taken to be column vectors.) This has been extended in Davis et al. (2000) to incorporate a more explicit dependence between the {X_t}, with X_t | W_t ~ Po(exp(w_t^T β + W_t)), where {W_t} is a real valued stationary process with E[exp(W_t)] = 1. This model is very flexible and statistical inference is usually relatively straightforward. In contrast, integer valued AR (INAR) models are discrete analogues of the (standard, real valued) AR model. The INAR(p) model is given by

X_t = Σ_{j=1}^p α_j ∘ X_{t-j} + Z_t,    (1.1)

where the {Z_t} are independent and identically distributed Poisson random variables and ∘ denotes a binomial operator. That is, for any 0 ≤ γ ≤ 1 and nonnegative integer random variable Y, γ ∘ Y =_D Bin(Y, γ). Statistical inference for INAR models is generally more complicated than for the Poisson regression models and most attention has been devoted to the INAR(1) model. However, recently in Neal and Subba Rao (2007) an efficient MCMC algorithm has been developed for INARMA processes of known AR and MA orders, with extensions to unknown AR and MA orders given in Enciso-Mora et al. (2008). The main benefit of the INARMA models over the Poisson regression models is their similarity to standard ARMA models. In fact, the stationarity and invertibility conditions of ARMA models hold for INARMA models, see Latour (1997) and Latour (1998).

The aim of this paper is to extend the analysis of INAR models to incorporate explanatory variables. That is, we look to increase the flexibility of INAR models whilst maintaining the AR structure of the model, thus seeking an alternative to Poisson regression models for integer valued time series.
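The recursion (1.1) is straightforward to simulate. The sketch below is not from the paper; it is a minimal stdlib-only Python illustration of the binomial thinning operator and an INAR(p) sample path (the function names are ours, and the Poisson draws use Knuth's product-of-uniforms method, which is adequate for small means).

```python
import math
import random

def thin(y, gamma, rng):
    # Binomial thinning: gamma o y ~ Bin(y, gamma) -- each of the y counts
    # survives independently with probability gamma.
    return sum(1 for _ in range(y) if rng.random() < gamma)

def poisson(mu, rng):
    # Poisson draw via Knuth's product-of-uniforms method (fine for small mu).
    L = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def simulate_inar(alpha, lam, n, burn=200, seed=1):
    # Simulate X_t = sum_{j=1}^p alpha_j o X_{t-j} + Z_t with Z_t ~ Po(lam),
    # discarding a burn-in so the returned stretch is close to stationarity.
    rng = random.Random(seed)
    p = len(alpha)
    x = [poisson(lam, rng) for _ in range(p)]
    for _ in range(burn + n):
        x.append(sum(thin(x[-j], a, rng) for j, a in enumerate(alpha, 1))
                 + poisson(lam, rng))
    return x[-n:]
```

For alpha = (0.3) and lam = 2 the stationary mean is lam / (1 - Σ alpha) ≈ 2.86, which a long simulated path should roughly reproduce.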
Moreover, the two approaches are complementary and are appropriate to different types of data, with the Poisson regression models being multiplicative in structure through e^{W_t}, whilst the INAR model has additive structure through the binomial operator. As with our earlier work, Neal and Subba Rao (2007) and Enciso-Mora et al. (2008), an MCMC algorithm will be introduced to perform statistical inference for the model. The only previous work, that we are aware of, which incorporates explanatory variables into an INAR model is Brännäs (1995).

The paper is structured as follows. The model is introduced in Section 2. In Section 3, the MCMC algorithm is constructed. The algorithm includes explanatory variable and AR order selection, which enables us to determine the most parsimonious model. Finally, in Section 4 the MCMC algorithm is applied to two real-life data sets, the total number of monthly cases of polio in the USA from January

1970 to December 1983 and the total number of monthly claims from the logging industry to the British Columbia Workers Compensation Board from January 1985 to December 1994.

2 Model

Suppose that for each time point there are r explanatory variables. For t ∈ Z and i = 1, 2, ..., r, let w_{t,i} denote the value of the i th explanatory variable at time t and let w_t = (w_{t,0}, w_{t,1}, ..., w_{t,r}), where w_{t,0} = 1 for all t ∈ Z. Let p denote the maximum AR order of the model and for j = 1, 2, ..., p let δ_j = (δ_{j,0}, δ_{j,1}, ..., δ_{j,r}). Let γ = (γ_0, γ_1, ..., γ_r). Then for t ∈ Z, the INAR(p) model with explanatory variables is given by

X_t = Σ_{j=1}^p α_{t,j} ∘ X_{t-j} + Z_t,    (2.1)

where Z_t ~ Po(λ_t),

α_{t,j} = 1 / {1 + exp(w_t^T δ_j)}    (2.2)

and

λ_t = exp(w_t^T γ).    (2.3)

The special case where p = 1 was considered in Brännäs (1995). In Brännäs (1995), separate explanatory variables were used for α_{t,1} and λ_t; this is covered by the current set up by fixing some of the components of δ_j and γ equal to 0. The binomial operator could be replaced by some generalised Steutel and van Harn operator (see Steutel and van Harn (1979) and Latour (1997)). Also, alternative forms for α_{t,j} and λ_t can be used so long as, for all t ∈ Z and j = 1, 2, ..., p, 0 ≤ α_{t,j} ≤ 1 and λ_t ≥ 0. However, for ease of exposition we restrict attention to the model defined by (2.1), (2.2) and (2.3).

The model defined by (2.1) is the full model. It will often be the case that a simpler model which does not include all the explanatory variables or all the AR terms will suffice. Therefore we assume that there exist R ⊆ {1, 2, ..., r} and A ⊆ {1, 2, ..., p} such that for i ∉ A, α_{t,i} = 0, and for j ∉ R, δ_{i,j} = 0 and γ_j = 0, where A and R are unknown and are parameters in the model to be estimated.

Finally, it is important to note that the explanatory variables can be used to model a linear trend or periodicity. Unlike standard AR processes, for INAR processes trends and periodicity can not easily be removed by transforming the original time series, since any transformation would need to preserve

the integer nature of the data. Therefore, if there are trends and periodicity in the data, these can be incorporated through explanatory variables.

Suppose that α_{t,j} = 0 for all t ∈ Z and j = 1, 2, ..., p. Then X_t ~ Po(exp(w_t^T γ)) and there is no dependence between successive time points. This is equivalent to the Poisson regression model with e^{W_t} = 1 for all t ∈ Z. Thus the Poisson regression model and the INAR model reduce to the same model in the absence of temporal dependence. Therefore the differences in modelling arise from the ways in which temporal dependence is included in the model: as a latent stationary process in Poisson regression models and as an observation driven binomial operator in INAR models.

3 MCMC algorithm

3.1 Likelihood of INAR(p) model with explanatory variables

For n ≥ 1, let x = (x_{1-p}, x_{2-p}, ..., x_n) denote observations from the integer-valued stochastic process {X_t}. As in Neal and Subba Rao (2007) and Enciso-Mora et al. (2008), statistical inference is greatly facilitated by the use of data augmentation. For t ∈ Z and j = 1, 2, ..., p, let Y_{t,j} = α_{t,j} ∘ X_{t-j}. For t ≥ 1, let y_t = (y_{t,1}, y_{t,2}, ..., y_{t,p}), y = (y_1, y_2, ..., y_n) and z_t = (z_1, z_2, ..., z_t). Note that z_t = x_t − Σ_{j=1}^p y_{t,j}. For t = 1, 2, ..., n, let x_t = (x_{1-p}, x_{2-p}, ..., x_t) and let w = (w_{1-p}, w_{2-p}, ..., w_n). Note that each of Y_{t,1}, Y_{t,2}, ..., Y_{t,p} and Z_t are independent given (γ, δ, x_{t-1}). Thus, writing C(m, k) for the binomial coefficient, for t = 1, 2, ..., n,

f_t(y_t, z_t | x, w, δ, γ) = (λ_t^{z_t} / z_t!) e^{−λ_t} Π_{i=1}^p C(x_{t-i}, y_{t,i}) α_{t,i}^{y_{t,i}} (1 − α_{t,i})^{x_{t-i} − y_{t,i}}   if z_t + Σ_{i=1}^p y_{t,i} = x_t, and 0 otherwise.    (3.1)

Therefore

f(y, z_n | x, w, δ, γ) = Π_{t=1}^n f_t(y_t, z_t | x, w, δ, γ) = Π_{t=1}^n (λ_t^{z_t} / z_t!) e^{−λ_t} Π_{i=1}^p { C(x_{t-i}, y_{t,i}) α_{t,i}^{y_{t,i}} (1 − α_{t,i})^{x_{t-i} − y_{t,i}} },    (3.2)

subject to the constraint z_t + Σ_{i=1}^p y_{t,i} = x_t for t = 1, 2, ..., n. For i = 0, 1, ..., r, γ_i and δ_{j,i} (j = 1, 2, ..., p) are real-valued. Therefore, in the absence of conjugate priors, we assign independent Gaussian priors to each of γ_i (i = 0, 1, ..., r) and δ_{j,i} (j = 1, 2, ..., p; i = 0, 1, ..., r).
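For concreteness, the augmented-data term (3.1) can be evaluated on the log scale. The helper below is our own illustrative Python sketch, not the authors' code; it assumes 0 < α_{t,i} < 1 and returns -inf whenever the constraint in (3.1) is violated.

```python
import math

def log_ft(x_t, x_lags, y_t, alphas, lam_t):
    # Log of f_t in eq. (3.1): a Poisson factor for z_t = x_t - sum_i y_{t,i}
    # times Binomial(x_{t-i}, alpha_{t,i}) factors for the y_{t,i}.
    z_t = x_t - sum(y_t)
    if z_t < 0 or any(y < 0 or y > xl for y, xl in zip(y_t, x_lags)):
        return float("-inf")  # constraint in (3.1) violated
    lp = z_t * math.log(lam_t) - lam_t - math.lgamma(z_t + 1)
    for xl, y, a in zip(x_lags, y_t, alphas):
        lp += (math.lgamma(xl + 1) - math.lgamma(y + 1) - math.lgamma(xl - y + 1)
               + y * math.log(a) + (xl - y) * math.log(1.0 - a))
    return lp
```

Summing log_ft over t = 1, ..., n gives the logarithm of the full augmented likelihood (3.2).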
For simplicity of exposition we take π(γ_i), π(δ_{j,i}) ~ N(µ, σ_P²) and throughout this paper take µ = 0 and σ_P = 1. As mentioned in Section 2, we assume that there is a maximum (fixed) model order p and an (unknown) subset A ⊆ {1, 2, ..., p} such that for j ∈ A, α_{t,j} ≠ 0 and for j ∉ A, α_{t,j} = 0, and hence, y_{t,j} = 0. Therefore we

shall make inference about A, with similar analysis for real-valued AR processes considered in Troughton and Godsill (1997). The prior chosen for A is

π(A) ∝ n^{−|A|},   A ⊆ {1, 2, ..., p}.    (3.3)

The prior on A penalises models with larger values of |A| and is motivated by the BIC-penalisation, Schwarz (1978), which has been used effectively in Enciso-Mora et al. (2008). Furthermore, we will not assume that all of the explanatory variables are necessary and will perform explanatory variable selection. That is, we assume that there exists R ⊆ {1, 2, ..., r} such that for i ∈ R, γ_i ≠ 0 and δ_{j,i} ≠ 0 (j ∈ A), and for i ∉ R, γ_i = 0 and δ_{j,i} = 0 (j ∈ A). Finally, we impose a BIC-penalisation prior on R,

π(R) ∝ n^{−|R|},   R ⊆ {1, 2, ..., r}.    (3.4)

Note that we assume that there is an intercept term, so γ_0 ≠ 0 and δ_{j,0} ≠ 0 (j ∈ A).

3.2 Details of the MCMC algorithm

The MCMC algorithm naturally separates into four parts: updating the parameters γ and δ, updating (y, z_n), updating R and updating A. We shall describe each part of the MCMC algorithm in turn, with the latter two parts requiring reversible jump MCMC, Green (1995). The algorithm is then run for the desired number of iterations to obtain a sample from the posterior distribution of (A, R, γ, δ).

For j ∈ A and i ∈ R, we update, one at a time, each γ_i and δ_{j,i} using random walk Metropolis. For example, we propose a new value for δ_{j,i}, δ'_{j,i} ~ N(δ_{j,i}, σ_{δ;j,i}²), where σ_{δ;j,i} is chosen, based upon pilot runs of the algorithm, such that approximately 40% of proposed moves are accepted, see Gelman et al. (1995). The acceptance probability is straightforward to calculate using (3.2) and the prior for δ_{j,i}.

For t = 1, 2, ..., n, we update one set of components (y_t, z_t) at a time. We use an efficient independence sampler which is essentially identical to that introduced in Neal and Subba Rao (2007). For j ∈ A, sample y'_{t,j} ~ Bin(x_{t-j}, α_{t,j}), resampling if x_t < Σ_{j∈A} y'_{t,j}. Set z'_t = x_t − Σ_{j∈A} y'_{t,j} and accept the proposed values (y'_t, z'_t) with probability

min { 1, (z_t! / z'_t!) λ_t^{z'_t − z_t} }.
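The (y_t, z_t) update just described can be sketched as follows. This is our own illustrative Python rendering (assumed function names, Bernoulli-trial binomial draws for a stdlib-only sketch), not the authors' implementation.

```python
import math
import random

def update_yz(x_t, x_lags, alphas, lam_t, y_cur, rng):
    # Independence sampler for (y_t, z_t): propose y'_{t,j} ~ Bin(x_{t-j}, alpha_{t,j})
    # for j in A, redrawing until sum_j y'_{t,j} <= x_t; set z'_t = x_t - sum_j y'_{t,j};
    # accept with probability min{1, (z_t!/z'_t!) * lam_t^(z'_t - z_t)}.
    z_cur = x_t - sum(y_cur)
    while True:
        y_prop = [sum(1 for _ in range(xl) if rng.random() < a)
                  for xl, a in zip(x_lags, alphas)]
        if sum(y_prop) <= x_t:
            break
    z_prop = x_t - sum(y_prop)
    log_acc = (math.lgamma(z_cur + 1) - math.lgamma(z_prop + 1)
               + (z_prop - z_cur) * math.log(lam_t))
    if rng.random() < math.exp(min(0.0, log_acc)):
        return y_prop, z_prop
    return y_cur, z_cur
```

The acceptance ratio only involves the z_t factors because the binomial proposal cancels against the binomial factors of (3.1).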
Explanatory variable selection is performed as follows. Select K uniformly at random from {1, 2, ..., r} and propose to change the inclusion/exclusion status of explanatory variable K. That is, if K ∉ R, we propose that K ∈ R, and if K ∈ R, we propose that K ∉ R. We shall describe the inclusion of an

explanatory variable move in detail, with the exclusion of an explanatory variable move simply being the reverse procedure. In order to use reversible jump MCMC it is necessary to define a bijection between the parameters in the current model and the parameters in the proposed model. This can be done by extending the state space of one or both of the models to include auxiliary variables. Therefore, for j ∈ A, let S_j ~ N(0, σ_R²) and let S ~ N(0, σ_R²), where σ_R is chosen to produce efficient between-model moves; σ_R = 1 is found to be more than adequate for the examples considered in this paper. We set δ'_{j,K} = S_j (j ∈ A) and γ'_K = S, with all other parameters remaining unchanged. Therefore the Jacobian for the transformation from (δ, γ, S, {S_j}) to (δ', γ') is 1. The proposed move affects {α_{t,j}} and {λ_t}, so the acceptance probability for the proposed move is the minimum of 1 and

(1/n) Π_{t=1}^n [ (λ'_t / λ_t)^{z_t} e^{λ_t − λ'_t} Π_{j∈A} (α'_{t,j} / α_{t,j})^{y_{t,j}} { (1 − α'_{t,j}) / (1 − α_{t,j}) }^{x_{t-j} − y_{t,j}} ]
× Π_{j∈A} (σ_R / σ_P) exp( (δ'_{j,K})² / (2σ_R²) − (δ'_{j,K})² / (2σ_P²) )
× (σ_R / σ_P) exp( (γ'_K)² / (2σ_R²) − (γ'_K)² / (2σ_P²) ).    (3.5)

The latter term is the ratio of the proposal and prior densities for the parameters added into the model as a result of including the explanatory variable, and the multiplier 1/n comes from the prior on R. For the case σ_R = σ_P = 1, the latter term is identical to 1.

Order determination for INAR models has been considered in detail in Enciso-Mora et al. (2008). The procedure used here is based upon Enciso-Mora et al. (2008), Section 3, adjusted to take account of the inclusion of explanatory variables and the form of A. It was noted that from equations (3.2) and (3.3) of Enciso-Mora et al. (2008) a certain function of the parameters, corresponding to E[X_t], was well estimated from the data, and this was used to devise an efficient order switching reversible jump MCMC algorithm. This idea is the motivation for the proposed moves in the intercept terms given below.
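As a small numerical check of the remark that the Gaussian correction in (3.5) disappears when σ_R = σ_P, the helper below (our own sketch, not from the paper) computes that correction for a set of newly added parameter values.

```python
import math

def rj_correction(vals, sigma_R, sigma_P):
    # Product over the newly added parameters (the delta'_{j,K} and gamma'_K)
    # of (sigma_R / sigma_P) * exp(v^2 / (2 sigma_R^2) - v^2 / (2 sigma_P^2)),
    # i.e. the Gaussian auxiliary-variable correction appearing in eq. (3.5).
    out = 1.0
    for v in vals:
        out *= (sigma_R / sigma_P) * math.exp(v * v / (2 * sigma_R ** 2)
                                              - v * v / (2 * sigma_P ** 2))
    return out
```

With sigma_R == sigma_P the factor for every parameter is exactly 1, so only the likelihood ratio and the 1/n prior penalty remain in (3.5).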
Select L uniformly at random from {1, 2, ..., p} and propose to change the inclusion/exclusion status of the lag L term in the AR process. That is, if L ∉ A, we propose that L ∈ A, and if L ∈ A, we propose that L ∉ A. We shall describe the inclusion of the lag L term move in detail, with the exclusion of the lag L term move simply being the reverse procedure. Select M uniformly from A; if A = ∅ an alternative scheme is required, which we will describe later. We propose to split the AR terms, so that the AR contribution from the lag M term in the current model is divided between the lag L and lag M terms in the proposed model. In Enciso-Mora et al. (2008), this is done by drawing U ~ U[0,1] and setting α'_L = Uα_M and α'_M = (1 − U)α_M. We propose δ'_{L,0} and δ'_{M,0} based upon the approach used in Enciso-Mora et al. (2008).

Specifically, draw U ~ U[0,1] and set

1 / (1 + exp(δ'_{L,0})) = U / (1 + exp(δ_{M,0}))    (3.6)

1 / (1 + exp(δ'_{M,0})) = (1 − U) / (1 + exp(δ_{M,0})).    (3.7)

For j ∈ R, let V_j ~ N(0, σ_O²), where σ_O can be tuned to produce efficient between-model moves (σ_O = 1 was found to be adequate for the examples in this paper), and propose δ'_{L,j} = V_j and δ'_{M,j} = δ_{M,j} − V_j. The above defines the proposed changes in parameter values, which necessitates changes to the augmented data. For t = 1, 2, ..., n, we propose

y'_{t,M} ~ Bin( y_{t,M}, α'_{t,M} / (α'_{t,M} + α'_{t,L}) )    (3.8)

y'_{t,L} = y_{t,M} − y'_{t,M},    (3.9)

which splits y_{t,M} based upon the proposed parameter values. This scheme is again similar to Enciso-Mora et al. (2008), Section 3.2. The transformation from (δ, y, U, V) to (δ', y') factorises, with only the transformation given by (3.6) and (3.7) having a Jacobian other than 1. The Jacobian, J, for the transformation from (δ_{M,0}, U) to (δ'_{M,0}, δ'_{L,0}) is given by

J = [ 1 / ((1 − U)U) ] exp(δ_{M,0}) {1 + exp(δ_{M,0})} / [ {U + exp(δ_{M,0})}{1 − U + exp(δ_{M,0})} ].    (3.10)

Increasing the AR order of the model by 1 leads to the ratio of the priors for the two models being 1/n. Therefore the acceptance probability for the proposed move is the minimum of 1 and

(1/n) J Π_{j∈R} (σ_O / σ_P) exp( (δ'_{L,j})² / (2σ_O²) − (δ'_{L,j})² / (2σ_P²) ) exp( { (δ_{M,j})² − (δ'_{M,j})² } / (2σ_P²) )
× Π_{t=1}^n [ C(x_{t-M}, y'_{t,M}) (α'_{t,M})^{y'_{t,M}} (1 − α'_{t,M})^{x_{t-M} − y'_{t,M}} C(x_{t-L}, y'_{t,L}) (α'_{t,L})^{y'_{t,L}} (1 − α'_{t,L})^{x_{t-L} − y'_{t,L}} ]
  / [ C(x_{t-M}, y_{t,M}) (α_{t,M})^{y_{t,M}} (1 − α_{t,M})^{x_{t-M} − y_{t,M}} C(y_{t,M}, y'_{t,M}) ω_t^{y'_{t,M}} (1 − ω_t)^{y_{t,M} − y'_{t,M}} ],    (3.11)

where C(m, k) denotes the binomial coefficient and ω_t = α'_{t,M} / (α'_{t,M} + α'_{t,L}). The reverse move, where the lag L term is excluded from the AR model, is dealt with by reversing the above procedure.

The inclusion of an AR lag term when A = ∅ requires a slightly different procedure. In this case we adjust the noise term to compensate for the inclusion of an AR term, see Enciso-Mora et al. (2008). Let U ~ U[0,1] and set 1 / (1 + exp(δ'_{L,0})) = U and exp(γ'_0) = (1 − U) exp(γ_0). For j ∈ R, let V_j ~ N(0, σ_O²) and propose δ'_{L,j} = V_j. Then change the data augmentation by y'_{t,L} ~ Bin(x_{t-L}, α'_{t,L}) and z'_t = x_t − y'_{t,L}.
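The Jacobian (3.10) can be checked numerically. The snippet below is our own verification sketch: it implements the map (3.6)-(3.7) from (δ_{M,0}, U) to (δ'_{L,0}, δ'_{M,0}) and compares the closed form against a finite-difference determinant.

```python
import math

def forward(delta, U):
    # The split move of eqs (3.6)-(3.7) on the link scale:
    # 1/(1+exp(dL')) = U/(1+exp(delta)),  1/(1+exp(dM')) = (1-U)/(1+exp(delta)).
    e = math.exp(delta)
    dL = math.log((1.0 - U + e) / U)
    dM = math.log((U + e) / (1.0 - U))
    return dL, dM

def jacobian_analytic(delta, U):
    # |J| of eq. (3.10).
    e = math.exp(delta)
    return e * (1.0 + e) / ((1.0 - U) * U * (U + e) * (1.0 - U + e))

def jacobian_numeric(delta, U, h=1e-6):
    # Central finite differences of the 2x2 Jacobian determinant of `forward`.
    ddL_dd = (forward(delta + h, U)[0] - forward(delta - h, U)[0]) / (2 * h)
    ddM_dd = (forward(delta + h, U)[1] - forward(delta - h, U)[1]) / (2 * h)
    ddL_dU = (forward(delta, U + h)[0] - forward(delta, U - h)[0]) / (2 * h)
    ddM_dU = (forward(delta, U + h)[1] - forward(delta, U - h)[1]) / (2 * h)
    return abs(ddL_dd * ddM_dU - ddL_dU * ddM_dd)
```

Agreement of the two functions over a grid of (δ_{M,0}, U) values confirms the expression in (3.10).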
Finally, note that the algorithm described above generally performs well. However, the performance of the algorithm can be improved by normalizing the explanatory variables, so that for i = 1, 2, ..., r,

(1/n) Σ_{t=1}^n w_{t,i} = 0 and (1/n) Σ_{t=1}^n w²_{t,i} = 1. For the examples considered in Section 4, normalization was not necessary. Since normalization is a linear operation, the algorithm can be used to obtain parameter estimates for the normalized explanatory variables, which can then be transformed into parameter estimates associated with the original explanatory variables.

4 Data analysis

We apply the MCMC algorithm of Section 3 to two real-life data sets. The first data set is taken from Zeger (1988) and consists of the monthly total number of polio cases in the USA from January 1970 to December 1983. The second data set is taken from Zhu and Joe (2006) and consists of the monthly number of claims of short-term disability benefits made by injured logging industry workers to the British Columbia Workers Compensation Board from January 1985 to December 1994.

4.1 Polio data

The polio data has been studied by Zeger (1988) and Davis et al. (2000) using Poisson regression models and therefore provides an ideal opportunity to compare INAR models with Poisson regression models. Therefore we follow these earlier papers in taking

w_t = ( 1, t'/1000, cos(2πt'/12), sin(2πt'/12), cos(2πt'/6), sin(2πt'/6) ),

where t' = t − 73. Thus there is assumed to be a linear trend (with intercept January 1976) and two periods, one of 6 months and the other of 12 months. Furthermore, for a full comparison with Davis et al. (2000), we assume that the model is an INAR(1) model and that all the explanatory variables are included in the model. Therefore for this data we do not consider either order or model selection.

We ran the MCMC algorithm to obtain a sample of size 100,000 from the posterior distribution of the parameters following a burn-in of 10,000 iterations. The results are presented in Table 1 below. It should be noted that the estimates of γ are similar to those obtained for the Poisson regression model in Davis et al. (2000) and in even closer agreement with those of Zeger (1988).
This is to be expected given the high value of the intercept for δ_{1,0}, which implies α_{t,1} is small for most t. That is, the observations are driven

by the explanatory variables and the Poisson innovations rather than the AR dependence.

Explanatory variable   γ             δ
Intercept              0.079 ( )     ( ) (0.776)
Trend                  3.242 ( )     ( ) (2.09)
cos(2πt'/12)           ( )           ( ) (0.846)
sin(2πt'/12)           ( )           ( ) (0.907)
cos(2πt'/6)            ( )           ( ) (0.850)
sin(2πt'/6)            ( )           ( ) (0.864)

Table 1. Means (standard deviations) of parameter coefficients for the INAR(1) model.

Finally, to test the model's appropriateness, we studied the predictive capabilities of the model. At each iteration of the MCMC algorithm we simulated a realization of the number of monthly cases for each month in 1983, given the parameter estimates and the true number of cases for the previous month. We then compared the sample means from the posterior distributions of the number of monthly cases for each month in 1983 with the actual monthly figures. For the Poisson regression model, Monte Carlo estimation of {W_t} was performed using the parameter estimates obtained in Davis et al. (2000), since the data is informative about the process {W_t}. Since the driving force is the explanatory variables rather than autoregressive dependence, the results are unsurprisingly rather similar for the two models.

              Jan  Feb  March  April  May  June  July  Aug  Sept  Oct  Nov  Dec
Actual
INAR(1)
Poisson reg.

Table 2. Actual monthly total number of cases for 1983 compared with mean predictions.

4.2 Benefit claims data

Exploratory data analysis gives support for an annual period being included in the model. Further analysis reveals that July has the highest number of claims and that the summer months (May to November) produce significantly more claims than the winter months (December to April). Finally, there is evidence to suggest that the number of claims might be decreasing over time. Therefore we take the following explanatory variable vector

w_t = ( 1, t/120, cos(2πt/12), sin(2πt/12), s_t ),    (4.1)

where s_t = 1 for the summer months (May to November inclusive) and s_t = 0 otherwise. The explanatory variables are normalized before applying the MCMC algorithm. Since the data consists of only 120 timepoints, we assume a maximum AR order of 3.

The MCMC algorithm was run to obtain a sample of size 100,000 from the posterior distribution of the parameters following a burn-in of 10,000 iterations. Every sample from the posterior distribution contained a summer effect and only 5 samples did not contain the first order AR term. There was also some support for the inclusion of the second order AR term (49,869 samples) and the linear trend (26,575 samples). Only 8,357 samples contained the third order AR term and fewer than 2,000 samples contained either the cosine or sine term. Therefore there is no evidence to support a periodic term in the model in addition to the summer effect. Only six of the models were obtained in more than 1,000 samples and their details are given in the table below.

AR orders   Explanatory variables   Number of samples
1,2         s_t                     35,3
1           s_t                     3,46
1           t, s_t                  4,32
1,2         t, s_t                  9,34
1,2,3       s_t                     3,973
1,3         s_t                     2,562

Table 3. Models with the highest posterior probabilities.

The contribution from the second and third order AR terms is small even when they are included in the model. Therefore we focus upon the parameter estimates for the noise term and the first order AR term only.

Explanatory variable   γ        δ
Intercept
Trend
Summer

Table 4. Mean (standard deviation) of parameter coefficients.

Thus the summer effect is only really present in the Poisson innovation term. This is confirmed by allowing the explanatory variables to be present in some terms but not in others. Interestingly, the linear trend is decreasing in the noise term, whilst increasing in the AR(1) term. This leads to similar size observations over the length of the period, but with higher correlation between later observations.
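The covariate construction (4.1) and the normalization described in Section 3 can be sketched as follows. This is our own illustrative Python (assumed function names; the paper gives no code), taking t = 1 to correspond to January 1985.

```python
import math

def claims_covariates(n=120):
    # Covariate vectors of eq. (4.1): w_t = (1, t/120, cos(2*pi*t/12),
    # sin(2*pi*t/12), s_t), with s_t = 1 in May-November, for
    # t = 1 (January 1985), ..., 120 (December 1994).
    rows = []
    for t in range(1, n + 1):
        month = (t - 1) % 12 + 1            # 1 = January, ..., 12 = December
        s_t = 1.0 if 5 <= month <= 11 else 0.0
        rows.append([1.0, t / 120.0,
                     math.cos(2 * math.pi * t / 12),
                     math.sin(2 * math.pi * t / 12), s_t])
    return rows

def normalize(rows):
    # Centre and scale each non-intercept column so that, as in Section 3,
    # (1/n) sum_t w_{t,i} = 0 and (1/n) sum_t w_{t,i}^2 = 1.
    n, r = len(rows), len(rows[0])
    out = [row[:] for row in rows]
    for i in range(1, r):
        mean = sum(row[i] for row in rows) / n
        scale = math.sqrt(sum((row[i] - mean) ** 2 for row in rows) / n)
        for row in out:
            row[i] = (row[i] - mean) / scale
    return out
```

Since normalization is linear, posterior estimates for the normalized covariates can be mapped back to the original scale, as noted at the end of Section 3.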

References

[1] Brännäs, K. (1995) Estimation and testing in integer-valued AR(1) models. Umeå Economic Studies 38.

[2] Davis, R. A., Dunsmuir, W. T. and Streett, S. B. (2003) Observation-driven models for Poisson counts. Biometrika 90.

[3] Davis, R. A., Dunsmuir, W. T. and Streett, S. (2005) Maximum likelihood estimation for an observation driven model for Poisson counts. Methodology and Computing in Applied Probability 7.

[4] Davis, R. A., Dunsmuir, W. T. and Wang, Y. (2000) On autocorrelation in a Poisson regression model. Biometrika 87.

[5] Enciso-Mora, V., Neal, P. and Subba Rao, T. (2008) Efficient order selection algorithms for integer valued ARMA processes. To appear in Journal of Time Series Analysis.

[6] Gelman, A., Roberts, G. O. and Gilks, W. R. (1995) Efficient Metropolis jumping rules. In Bayesian Statistics V (eds J. M. Bernardo et al.), Oxford University Press.

[7] Green, P. J. (1995) Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82.

[8] Jung, R. C. and Tremayne, A. R. (2006) Coherent forecasting in integer time series models. International Journal of Forecasting 22.

[9] Latour, A. (1997) The multivariate GINAR(p) process. Advances in Applied Probability 29.

[10] Latour, A. (1998) Existence and stochastic structure of a non-negative integer-valued autoregressive process. Journal of Time Series Analysis 19.

[11] McCabe, B. P. M. and Martin, G. M. (2005) Bayesian predictions of low count time series. International Journal of Forecasting 22.

[12] Neal, P. and Subba Rao, T. (2007) MCMC for integer-valued ARMA processes. Journal of Time Series Analysis 28.

[13] Schwarz, G. (1978) Estimating the dimension of a model. The Annals of Statistics 6.

[14] Steutel, F. and van Harn, K. (1979) Discrete analogues of self-decomposability and stability. Annals of Probability 7.

[15] Troughton, P. T. and Godsill, S. J. (1997) Bayesian model selection for time series using Markov chain Monte Carlo. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing 1997, vol. V.

[16] Zeger, S. (1988) A regression model for time series of counts. Biometrika 75.

[17] Zhu, R. and Joe, H. (2006) Modelling count data time series with Markov processes based on binomial thinning. Journal of Time Series Analysis 27.


More information

Computational statistics

Computational statistics Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated

More information

Bayesian Estimation of Input Output Tables for Russia

Bayesian Estimation of Input Output Tables for Russia Bayesian Estimation of Input Output Tables for Russia Oleg Lugovoy (EDF, RANE) Andrey Polbin (RANE) Vladimir Potashnikov (RANE) WIOD Conference April 24, 2012 Groningen Outline Motivation Objectives Bayesian

More information

Bayesian model selection in graphs by using BDgraph package

Bayesian model selection in graphs by using BDgraph package Bayesian model selection in graphs by using BDgraph package A. Mohammadi and E. Wit March 26, 2013 MOTIVATION Flow cytometry data with 11 proteins from Sachs et al. (2005) RESULT FOR CELL SIGNALING DATA

More information

Bayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence

Bayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence Bayesian Inference in GLMs Frequentists typically base inferences on MLEs, asymptotic confidence limits, and log-likelihood ratio tests Bayesians base inferences on the posterior distribution of the unknowns

More information

Non-homogeneous Markov Mixture of Periodic Autoregressions for the Analysis of Air Pollution in the Lagoon of Venice

Non-homogeneous Markov Mixture of Periodic Autoregressions for the Analysis of Air Pollution in the Lagoon of Venice Non-homogeneous Markov Mixture of Periodic Autoregressions for the Analysis of Air Pollution in the Lagoon of Venice Roberta Paroli 1, Silvia Pistollato, Maria Rosa, and Luigi Spezia 3 1 Istituto di Statistica

More information

Dynamic models. Dependent data The AR(p) model The MA(q) model Hidden Markov models. 6 Dynamic models

Dynamic models. Dependent data The AR(p) model The MA(q) model Hidden Markov models. 6 Dynamic models 6 Dependent data The AR(p) model The MA(q) model Hidden Markov models Dependent data Dependent data Huge portion of real-life data involving dependent datapoints Example (Capture-recapture) capture histories

More information

Switching Regime Estimation

Switching Regime Estimation Switching Regime Estimation Series de Tiempo BIrkbeck March 2013 Martin Sola (FE) Markov Switching models 01/13 1 / 52 The economy (the time series) often behaves very different in periods such as booms

More information

A Statistical Input Pruning Method for Artificial Neural Networks Used in Environmental Modelling

A Statistical Input Pruning Method for Artificial Neural Networks Used in Environmental Modelling A Statistical Input Pruning Method for Artificial Neural Networks Used in Environmental Modelling G. B. Kingston, H. R. Maier and M. F. Lambert Centre for Applied Modelling in Water Engineering, School

More information

Dynamic Matrix-Variate Graphical Models A Synopsis 1

Dynamic Matrix-Variate Graphical Models A Synopsis 1 Proc. Valencia / ISBA 8th World Meeting on Bayesian Statistics Benidorm (Alicante, Spain), June 1st 6th, 2006 Dynamic Matrix-Variate Graphical Models A Synopsis 1 Carlos M. Carvalho & Mike West ISDS, Duke

More information

Kernel adaptive Sequential Monte Carlo

Kernel adaptive Sequential Monte Carlo Kernel adaptive Sequential Monte Carlo Ingmar Schuster (Paris Dauphine) Heiko Strathmann (University College London) Brooks Paige (Oxford) Dino Sejdinovic (Oxford) December 7, 2015 1 / 36 Section 1 Outline

More information

Chapter 12 PAWL-Forced Simulated Tempering

Chapter 12 PAWL-Forced Simulated Tempering Chapter 12 PAWL-Forced Simulated Tempering Luke Bornn Abstract In this short note, we show how the parallel adaptive Wang Landau (PAWL) algorithm of Bornn et al. (J Comput Graph Stat, to appear) can be

More information

Probabilistic Graphical Models Lecture 17: Markov chain Monte Carlo

Probabilistic Graphical Models Lecture 17: Markov chain Monte Carlo Probabilistic Graphical Models Lecture 17: Markov chain Monte Carlo Andrew Gordon Wilson www.cs.cmu.edu/~andrewgw Carnegie Mellon University March 18, 2015 1 / 45 Resources and Attribution Image credits,

More information

Bayesian Nonparametric Regression for Diabetes Deaths

Bayesian Nonparametric Regression for Diabetes Deaths Bayesian Nonparametric Regression for Diabetes Deaths Brian M. Hartman PhD Student, 2010 Texas A&M University College Station, TX, USA David B. Dahl Assistant Professor Texas A&M University College Station,

More information

Spatio-temporal precipitation modeling based on time-varying regressions

Spatio-temporal precipitation modeling based on time-varying regressions Spatio-temporal precipitation modeling based on time-varying regressions Oleg Makhnin Department of Mathematics New Mexico Tech Socorro, NM 87801 January 19, 2007 1 Abstract: A time-varying regression

More information

Gaussian processes. Basic Properties VAG002-

Gaussian processes. Basic Properties VAG002- Gaussian processes The class of Gaussian processes is one of the most widely used families of stochastic processes for modeling dependent data observed over time, or space, or time and space. The popularity

More information

The Bayesian Approach to Multi-equation Econometric Model Estimation

The Bayesian Approach to Multi-equation Econometric Model Estimation Journal of Statistical and Econometric Methods, vol.3, no.1, 2014, 85-96 ISSN: 2241-0384 (print), 2241-0376 (online) Scienpress Ltd, 2014 The Bayesian Approach to Multi-equation Econometric Model Estimation

More information

Bayesian time series classification

Bayesian time series classification Bayesian time series classification Peter Sykacek Department of Engineering Science University of Oxford Oxford, OX 3PJ, UK psyk@robots.ox.ac.uk Stephen Roberts Department of Engineering Science University

More information

7. Forecasting with ARIMA models

7. Forecasting with ARIMA models 7. Forecasting with ARIMA models 309 Outline: Introduction The prediction equation of an ARIMA model Interpreting the predictions Variance of the predictions Forecast updating Measuring predictability

More information

Markov Chain Monte Carlo methods

Markov Chain Monte Carlo methods Markov Chain Monte Carlo methods Tomas McKelvey and Lennart Svensson Signal Processing Group Department of Signals and Systems Chalmers University of Technology, Sweden November 26, 2012 Today s learning

More information

Bayesian spatial hierarchical modeling for temperature extremes

Bayesian spatial hierarchical modeling for temperature extremes Bayesian spatial hierarchical modeling for temperature extremes Indriati Bisono Dr. Andrew Robinson Dr. Aloke Phatak Mathematics and Statistics Department The University of Melbourne Maths, Informatics

More information

BAYESIAN ANALYSIS OF ORDER UNCERTAINTY IN ARIMA MODELS

BAYESIAN ANALYSIS OF ORDER UNCERTAINTY IN ARIMA MODELS BAYESIAN ANALYSIS OF ORDER UNCERTAINTY IN ARIMA MODELS BY RICARDO S. EHLERS AND STEPHEN P. BROOKS Federal University of Paraná, Brazil and University of Cambridge, UK Abstract. In this paper we extend

More information

Creating Non-Gaussian Processes from Gaussian Processes by the Log-Sum-Exp Approach. Radford M. Neal, 28 February 2005

Creating Non-Gaussian Processes from Gaussian Processes by the Log-Sum-Exp Approach. Radford M. Neal, 28 February 2005 Creating Non-Gaussian Processes from Gaussian Processes by the Log-Sum-Exp Approach Radford M. Neal, 28 February 2005 A Very Brief Review of Gaussian Processes A Gaussian process is a distribution over

More information

Adaptive Rejection Sampling with fixed number of nodes

Adaptive Rejection Sampling with fixed number of nodes Adaptive Rejection Sampling with fixed number of nodes L. Martino, F. Louzada Institute of Mathematical Sciences and Computing, Universidade de São Paulo, Brazil. Abstract The adaptive rejection sampling

More information

ESSE Mid-Term Test 2017 Tuesday 17 October :30-09:45

ESSE Mid-Term Test 2017 Tuesday 17 October :30-09:45 ESSE 4020 3.0 - Mid-Term Test 207 Tuesday 7 October 207. 08:30-09:45 Symbols have their usual meanings. All questions are worth 0 marks, although some are more difficult than others. Answer as many questions

More information

MCMC 2: Lecture 3 SIR models - more topics. Phil O Neill Theo Kypraios School of Mathematical Sciences University of Nottingham

MCMC 2: Lecture 3 SIR models - more topics. Phil O Neill Theo Kypraios School of Mathematical Sciences University of Nottingham MCMC 2: Lecture 3 SIR models - more topics Phil O Neill Theo Kypraios School of Mathematical Sciences University of Nottingham Contents 1. What can be estimated? 2. Reparameterisation 3. Marginalisation

More information

Ch3. TRENDS. Time Series Analysis

Ch3. TRENDS. Time Series Analysis 3.1 Deterministic Versus Stochastic Trends The simulated random walk in Exhibit 2.1 shows a upward trend. However, it is caused by a strong correlation between the series at nearby time points. The true

More information

arxiv: v1 [stat.co] 2 Nov 2017

arxiv: v1 [stat.co] 2 Nov 2017 Binary Bouncy Particle Sampler arxiv:1711.922v1 [stat.co] 2 Nov 217 Ari Pakman Department of Statistics Center for Theoretical Neuroscience Grossman Center for the Statistics of Mind Columbia University

More information

TIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA

TIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA CHAPTER 6 TIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA 6.1. Introduction A time series is a sequence of observations ordered in time. A basic assumption in the time series analysis

More information

Afternoon Meeting on Bayesian Computation 2018 University of Reading

Afternoon Meeting on Bayesian Computation 2018 University of Reading Gabriele Abbati 1, Alessra Tosi 2, Seth Flaxman 3, Michael A Osborne 1 1 University of Oxford, 2 Mind Foundry Ltd, 3 Imperial College London Afternoon Meeting on Bayesian Computation 2018 University of

More information

Stat 451 Lecture Notes Markov Chain Monte Carlo. Ryan Martin UIC

Stat 451 Lecture Notes Markov Chain Monte Carlo. Ryan Martin UIC Stat 451 Lecture Notes 07 12 Markov Chain Monte Carlo Ryan Martin UIC www.math.uic.edu/~rgmartin 1 Based on Chapters 8 9 in Givens & Hoeting, Chapters 25 27 in Lange 2 Updated: April 4, 2016 1 / 42 Outline

More information

Bayesian Analysis of Order Uncertainty in ARIMA Models

Bayesian Analysis of Order Uncertainty in ARIMA Models Bayesian Analysis of Order Uncertainty in ARIMA Models R.S. Ehlers Federal University of Paraná, Brazil S.P. Brooks University of Cambridge, UK Summary. In this paper we extend the work of Brooks and Ehlers

More information

Markov Chain Monte Carlo

Markov Chain Monte Carlo 1 Motivation 1.1 Bayesian Learning Markov Chain Monte Carlo Yale Chang In Bayesian learning, given data X, we make assumptions on the generative process of X by introducing hidden variables Z: p(z): prior

More information

Marginal Specifications and a Gaussian Copula Estimation

Marginal Specifications and a Gaussian Copula Estimation Marginal Specifications and a Gaussian Copula Estimation Kazim Azam Abstract Multivariate analysis involving random variables of different type like count, continuous or mixture of both is frequently required

More information

CS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling

CS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling CS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling Professor Erik Sudderth Brown University Computer Science October 27, 2016 Some figures and materials courtesy

More information

Riemann Manifold Methods in Bayesian Statistics

Riemann Manifold Methods in Bayesian Statistics Ricardo Ehlers ehlers@icmc.usp.br Applied Maths and Stats University of São Paulo, Brazil Working Group in Statistical Learning University College Dublin September 2015 Bayesian inference is based on Bayes

More information

MCMC and Gibbs Sampling. Kayhan Batmanghelich

MCMC and Gibbs Sampling. Kayhan Batmanghelich MCMC and Gibbs Sampling Kayhan Batmanghelich 1 Approaches to inference l Exact inference algorithms l l l The elimination algorithm Message-passing algorithm (sum-product, belief propagation) The junction

More information

E cient Method of Moments Estimators for Integer Time Series Models. Vance L. Martin University of Melbourne

E cient Method of Moments Estimators for Integer Time Series Models. Vance L. Martin University of Melbourne E cient Method of Moments Estimators for Integer Time Series Models Vance L. Martin University of Melbourne A.R.Tremayne University of New South Wales and University of Liverpool Robert C. Jung Universität

More information

Appendix: Modeling Approach

Appendix: Modeling Approach AFFECTIVE PRIMACY IN INTRAORGANIZATIONAL TASK NETWORKS Appendix: Modeling Approach There is now a significant and developing literature on Bayesian methods in social network analysis. See, for instance,

More information

Vector Autoregressive Model. Vector Autoregressions II. Estimation of Vector Autoregressions II. Estimation of Vector Autoregressions I.

Vector Autoregressive Model. Vector Autoregressions II. Estimation of Vector Autoregressions II. Estimation of Vector Autoregressions I. Vector Autoregressive Model Vector Autoregressions II Empirical Macroeconomics - Lect 2 Dr. Ana Beatriz Galvao Queen Mary University of London January 2012 A VAR(p) model of the m 1 vector of time series

More information

Online appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US

Online appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US Online appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US Gerdie Everaert 1, Lorenzo Pozzi 2, and Ruben Schoonackers 3 1 Ghent University & SHERPPA 2 Erasmus

More information

Computer Vision Group Prof. Daniel Cremers. 10a. Markov Chain Monte Carlo

Computer Vision Group Prof. Daniel Cremers. 10a. Markov Chain Monte Carlo Group Prof. Daniel Cremers 10a. Markov Chain Monte Carlo Markov Chain Monte Carlo In high-dimensional spaces, rejection sampling and importance sampling are very inefficient An alternative is Markov Chain

More information

Time Series 3. Robert Almgren. Sept. 28, 2009

Time Series 3. Robert Almgren. Sept. 28, 2009 Time Series 3 Robert Almgren Sept. 28, 2009 Last time we discussed two main categories of linear models, and their combination. Here w t denotes a white noise: a stationary process with E w t ) = 0, E

More information

Fundamental Issues in Bayesian Functional Data Analysis. Dennis D. Cox Rice University

Fundamental Issues in Bayesian Functional Data Analysis. Dennis D. Cox Rice University Fundamental Issues in Bayesian Functional Data Analysis Dennis D. Cox Rice University 1 Introduction Question: What are functional data? Answer: Data that are functions of a continuous variable.... say

More information

Stat 516, Homework 1

Stat 516, Homework 1 Stat 516, Homework 1 Due date: October 7 1. Consider an urn with n distinct balls numbered 1,..., n. We sample balls from the urn with replacement. Let N be the number of draws until we encounter a ball

More information

SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, )

SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, ) Econometrica Supplementary Material SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, 653 710) BY SANGHAMITRA DAS, MARK ROBERTS, AND

More information

Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems

Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems Scott W. Linderman Matthew J. Johnson Andrew C. Miller Columbia University Harvard and Google Brain Harvard University Ryan

More information

The Particle Filter. PD Dr. Rudolph Triebel Computer Vision Group. Machine Learning for Computer Vision

The Particle Filter. PD Dr. Rudolph Triebel Computer Vision Group. Machine Learning for Computer Vision The Particle Filter Non-parametric implementation of Bayes filter Represents the belief (posterior) random state samples. by a set of This representation is approximate. Can represent distributions that

More information

E cient Method of Moments Estimators for Integer Time Series Models

E cient Method of Moments Estimators for Integer Time Series Models E cient Method of Moments Estimators for Integer Time Series Models Vance L. Martin University of Melbourne A.R.Tremayne University of New South Wales and University of Liverpool Robert C. Jung Universität

More information

Ch 9. FORECASTING. Time Series Analysis

Ch 9. FORECASTING. Time Series Analysis In this chapter, we assume the model is known exactly, and consider the calculation of forecasts and their properties for both deterministic trend models and ARIMA models. 9.1 Minimum Mean Square Error

More information

GAUSSIAN COPULA MODELLING FOR INTEGER-VALUED TIME SERIES

GAUSSIAN COPULA MODELLING FOR INTEGER-VALUED TIME SERIES GAUSSIAN COPULA MODELLING FOR INTEGER-VALUED TIME SERIES A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering and Physical Sciences 2016

More information

INDIAN INSTITUTE OF SCIENCE STOCHASTIC HYDROLOGY. Lecture -12 Course Instructor : Prof. P. P. MUJUMDAR Department of Civil Engg., IISc.

INDIAN INSTITUTE OF SCIENCE STOCHASTIC HYDROLOGY. Lecture -12 Course Instructor : Prof. P. P. MUJUMDAR Department of Civil Engg., IISc. INDIAN INSTITUTE OF SCIENCE STOCHASTIC HYDROLOGY Lecture -12 Course Instructor : Prof. P. P. MUJUMDAR Department of Civil Engg., IISc. Summary of the previous lecture Data Extension & Forecasting Moving

More information

A short introduction to INLA and R-INLA

A short introduction to INLA and R-INLA A short introduction to INLA and R-INLA Integrated Nested Laplace Approximation Thomas Opitz, BioSP, INRA Avignon Workshop: Theory and practice of INLA and SPDE November 7, 2018 2/21 Plan for this talk

More information

Bayesian inference & process convolution models Dave Higdon, Statistical Sciences Group, LANL

Bayesian inference & process convolution models Dave Higdon, Statistical Sciences Group, LANL 1 Bayesian inference & process convolution models Dave Higdon, Statistical Sciences Group, LANL 2 MOVING AVERAGE SPATIAL MODELS Kernel basis representation for spatial processes z(s) Define m basis functions

More information

Hmms with variable dimension structures and extensions

Hmms with variable dimension structures and extensions Hmm days/enst/january 21, 2002 1 Hmms with variable dimension structures and extensions Christian P. Robert Université Paris Dauphine www.ceremade.dauphine.fr/ xian Hmm days/enst/january 21, 2002 2 1 Estimating

More information

Forecasting with the age-period-cohort model and the extended chain-ladder model

Forecasting with the age-period-cohort model and the extended chain-ladder model Forecasting with the age-period-cohort model and the extended chain-ladder model By D. KUANG Department of Statistics, University of Oxford, Oxford OX1 3TG, U.K. di.kuang@some.ox.ac.uk B. Nielsen Nuffield

More information

Asymptotic distribution of the Yule-Walker estimator for INAR(p) processes

Asymptotic distribution of the Yule-Walker estimator for INAR(p) processes Asymptotic distribution of the Yule-Walker estimator for IAR(p) processes Isabel Silva a,b, M Eduarda Silva b,c a Departamento de Engenharia Civil, Faculdade de Engenharia - Universidade do Porto, Rua

More information

Part I State space models

Part I State space models Part I State space models 1 Introduction to state space time series analysis James Durbin Department of Statistics, London School of Economics and Political Science Abstract The paper presents a broad

More information

(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis

(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis Summarizing a posterior Given the data and prior the posterior is determined Summarizing the posterior gives parameter estimates, intervals, and hypothesis tests Most of these computations are integrals

More information

MH I. Metropolis-Hastings (MH) algorithm is the most popular method of getting dependent samples from a probability distribution

MH I. Metropolis-Hastings (MH) algorithm is the most popular method of getting dependent samples from a probability distribution MH I Metropolis-Hastings (MH) algorithm is the most popular method of getting dependent samples from a probability distribution a lot of Bayesian mehods rely on the use of MH algorithm and it s famous

More information

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 9: Markov Chain Monte Carlo

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 9: Markov Chain Monte Carlo Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 9: Markov Chain Monte Carlo 9.1 Markov Chain A Markov Chain Monte

More information

Multivariate Regression Model Results

Multivariate Regression Model Results Updated: August, 0 Page of Multivariate Regression Model Results 4 5 6 7 8 This exhibit provides the results of the load model forecast discussed in Schedule. Included is the forecast of short term system

More information

Monte Carlo in Bayesian Statistics

Monte Carlo in Bayesian Statistics Monte Carlo in Bayesian Statistics Matthew Thomas SAMBa - University of Bath m.l.thomas@bath.ac.uk December 4, 2014 Matthew Thomas (SAMBa) Monte Carlo in Bayesian Statistics December 4, 2014 1 / 16 Overview

More information

Additional Keplerian Signals in the HARPS data for Gliese 667C from a Bayesian re-analysis

Additional Keplerian Signals in the HARPS data for Gliese 667C from a Bayesian re-analysis Additional Keplerian Signals in the HARPS data for Gliese 667C from a Bayesian re-analysis Phil Gregory, Samantha Lawler, Brett Gladman Physics and Astronomy Univ. of British Columbia Abstract A re-analysis

More information

MCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17

MCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17 MCMC for big data Geir Storvik BigInsight lunch - May 2 2018 Geir Storvik MCMC for big data BigInsight lunch - May 2 2018 1 / 17 Outline Why ordinary MCMC is not scalable Different approaches for making

More information

Seasonality, Cycles and Unit Roots

Seasonality, Cycles and Unit Roots Seasonality, Cycles and Unit Roots Sune Karlsson Dept. of Economic Statistics Mickael Salabasis Dept. of Economic Statistics Stockholm School of Economics Abstract Inference on ordinary unit roots, seasonal

More information

Model selection and checking

Model selection and checking CHAPTER 6 Model selection and checking In the basic HMM with m states, increasing m always improves the fit of the model (as judged by the likelihood). But along with the improvement comes a quadratic

More information

ST 740: Markov Chain Monte Carlo

ST 740: Markov Chain Monte Carlo ST 740: Markov Chain Monte Carlo Alyson Wilson Department of Statistics North Carolina State University October 14, 2012 A. Wilson (NCSU Stsatistics) MCMC October 14, 2012 1 / 20 Convergence Diagnostics:

More information

STAT 425: Introduction to Bayesian Analysis

STAT 425: Introduction to Bayesian Analysis STAT 425: Introduction to Bayesian Analysis Marina Vannucci Rice University, USA Fall 2017 Marina Vannucci (Rice University, USA) Bayesian Analysis (Part 2) Fall 2017 1 / 19 Part 2: Markov chain Monte

More information

MCMC algorithms for fitting Bayesian models

MCMC algorithms for fitting Bayesian models MCMC algorithms for fitting Bayesian models p. 1/1 MCMC algorithms for fitting Bayesian models Sudipto Banerjee sudiptob@biostat.umn.edu University of Minnesota MCMC algorithms for fitting Bayesian models

More information

Bayesian Nonparametric Learning of Complex Dynamical Phenomena

Bayesian Nonparametric Learning of Complex Dynamical Phenomena Duke University Department of Statistical Science Bayesian Nonparametric Learning of Complex Dynamical Phenomena Emily Fox Joint work with Erik Sudderth (Brown University), Michael Jordan (UC Berkeley),

More information