PREDICTION AND NONGAUSSIAN AUTOREGRESSIVE STATIONARY SEQUENCES
Murray Rosenblatt, University of California, San Diego


Abstract

The object of this paper is to show that under certain auxiliary assumptions a stationary autoregressive sequence has a best predictor in mean square that is linear if and only if the sequence is minimum phase or is Gaussian when all moments are finite.

1. Introduction

We consider a stationary autoregressive sequence, that is, a stationary sequence $x_t$ satisfying the system of equations

(1.1)  $x_t - \phi_1 x_{t-1} - \cdots - \phi_p x_{t-p} = \xi_t, \qquad t = \ldots, -1, 0, 1, \ldots,$

with the $\xi_t$'s independent and identically distributed, the $\phi_i$'s real, and $E\xi_t \equiv 0$, $E\xi_t^2 = \sigma^2 > 0$. Let

(1.2)  $\phi(z) = 1 - \phi_1 z - \cdots - \phi_p z^p.$

The system of equations is satisfied by a strictly stationary sequence, which is uniquely determined, if and only if $\phi(z)$ has no roots of absolute value 1.

In [4] a simple result of the type considered in this paper was established for a first order autoregressive scheme $x_t$ satisfying

(1.3)  $x_t - \beta x_{t-1} = \xi_t, \qquad t = \ldots, -1, 0, 1, \ldots, \quad 0 < |\beta| < 1.$

Clearly the best one-step predictor of $x_{t+1}$ is the linear predictor $\beta x_t$. However, the best one-step predictor with time reversed for the process (1.3), $E(x_t \mid x_{t+1})$, is linear if and only if the distribution of $x_t$ is Gaussian. Let $G$ be the distribution function of $\xi_t$ and let $F$ be the distribution function of $x_t$. It is clear that $F$ satisfies the equation $F = G * F(\beta^{-1}\,\cdot)$, where the asterisk denotes the convolution operation. If $\varphi$ is the characteristic function of $\xi_t$ and $\eta$ the characteristic function of $x_t$, then

$\eta(\tau) = \prod_{j=0}^{\infty} \varphi(\beta^j \tau).$

1991 Mathematics Subject Classification: Primary 60G25, 62M20; secondary 60G10, 60J10.
Key words and phrases: Autoregressive sequence, nongaussian, nonminimum phase, nonlinear prediction.
Research supported by ONR Grant N J1371.
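The infinite-product formula for $\eta$ can be checked numerically. The sketch below (Python with numpy; the value $\beta = 0.5$ and the centered exponential innovation distribution are hypothetical choices for illustration, not from the paper) simulates the causal AR(1) scheme and compares the empirical characteristic function of $x_t$ against the truncated product $\prod_j \varphi(\beta^j \tau)$, using the exact characteristic function $\varphi(t) = e^{-it}/(1 - it)$ of a centered unit exponential.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, n = 0.5, 100_000

# centered exponential innovations: nongaussian, E xi = 0, E xi^2 = 1
xi = rng.exponential(1.0, size=n) - 1.0

# stationary causal AR(1): x_t = beta * x_{t-1} + xi_t
x = np.empty(n)
x[0] = xi[0]
for t in range(1, n):
    x[t] = beta * x[t - 1] + xi[t]
x = x[500:]  # drop a burn-in segment so the sample is close to stationarity

# exact characteristic function of xi_t: phi(t) = exp(-it) / (1 - it)
def phi(t):
    return np.exp(-1j * t) / (1 - 1j * t)

tau = 0.7
eta_product = np.prod([phi(beta**j * tau) for j in range(80)])  # eta(tau)
eta_empirical = np.mean(np.exp(1j * tau * x))
err = abs(eta_product - eta_empirical)
print(err)  # small; only Monte Carlo error remains
```

The truncation at 80 factors is harmless here since $\beta^j \tau \to 0$ geometrically and $\varphi(0) = 1$.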

Let $\phi(\tau_1, \tau_2) = E\exp(i\tau_1 x_1 + i\tau_2 x_0)$ be the joint characteristic function of $x_1$ and $x_0$. The following relation is satisfied:

(1.4)  $\phi_{\tau_1}(0, \tau_2) = \frac{1}{\beta}\,\eta'(\tau_2) - \frac{1}{\beta}\,\varphi'(\tau_2)\,\eta(\beta\tau_2) = \int \exp(i\tau_2 x)\, i\, E(x_1 \mid x_0 = x)\, dF(x).$

Let $\tilde G(x) = \int_{-\infty}^{x} u\, dG(u)$. The relation (1.4) implies that $E(x_1 \mid x_0 = x)$ is given by

$\frac{1}{\beta}\left[x - \frac{d\{\tilde G * F(\beta^{-1}\,\cdot)\}(x)}{dF(x)}\right].$

A related problem for heavy-tailed distributions is considered in [2].

Factor the polynomial (1.2), $\phi(z) = \phi_+(z)\phi_-(z)$, where

$\phi_+(z) = 1 - \theta_1 z - \cdots - \theta_r z^r \ne 0$ for $|z| \le 1$,
$\phi_-(z) = 1 - \theta_{r+1} z - \cdots - \theta_p z^s \ne 0$ for $|z| \ge 1$,

and $r, s \ge 0$, $r + s = p$. Given that $m_1, \ldots, m_r, m_{r+1}, \ldots, m_p$ are the $p$ zeros of $\phi(z)$, let $|m_i| > 1$, $i = 1, \ldots, r$, and $|m_i| < 1$, $i = r+1, \ldots, p$. Then

$\phi_+(z) = \prod_{i=1}^{r} (1 - m_i^{-1} z), \qquad \phi_-(z) = \prod_{i=r+1}^{p} (1 - m_i^{-1} z).$

The autoregressive sequence (1.1) is called minimum phase if $r = p$ and nonminimum phase otherwise. If the sequence is minimum phase, clearly one can write

(1.5)  $x_t = \sum_{j=0}^{\infty} \alpha_j \xi_{t-j},$

where

(1.6)  $\phi(z)^{-1} = \sum_{j=0}^{\infty} \alpha_j z^j$

and the $\alpha_j$'s decay to zero exponentially fast as $j \to \infty$. The relations (1.5) and (1.6) imply that the σ-algebras generated by $\{\xi_j,\ j \le t\}$ and $\{x_j,\ j \le t\}$ are the same. This implies that in the minimum phase case the best predictor in mean square of $x_{t+1}$ given $x_j$, $j \le t$, is linear and given by

$\hat x_{t+1} = \sum_{j=1}^{\infty} \alpha_j \xi_{t+1-j}.$

Our object is to show that in the nonminimum phase nongaussian case, the best predictor in mean square of $x_{t+1}$ given $x_j$, $j \le t$, is nonlinear if all moments of $\xi_t$ are finite and the roots $m_i$, $i = r+1, \ldots, p$, are distinct. In Section 2 we show
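The factorization of $\phi(z)$ into $\phi_+$ and $\phi_-$ can be carried out numerically from the roots. A minimal sketch (the polynomial $\phi(z) = 1 - 2.5z + z^2 = (1 - 0.5z)(1 - 2z)$ is a hypothetical nonminimum phase example with $r = s = 1$, not one taken from the paper):

```python
import numpy as np
from numpy.polynomial import Polynomial

# hypothetical example: phi(z) = 1 - 2.5 z + z^2 = (1 - 0.5 z)(1 - 2 z),
# with one zero m_1 = 2 outside the unit circle and one zero m_2 = 0.5 inside
phi = Polynomial([1.0, -2.5, 1.0])
roots = phi.roots()
outside = roots[np.abs(roots) > 1]  # m_1, ..., m_r
inside = roots[np.abs(roots) < 1]   # m_{r+1}, ..., m_p

def from_roots(ms):
    # build prod_i (1 - m_i^{-1} z), normalized so the constant term is 1
    p = Polynomial.fromroots(ms) if len(ms) else Polynomial([1.0])
    return p / p(0)

phi_plus = from_roots(outside)   # no zeros in |z| <= 1
phi_minus = from_roots(inside)   # no zeros in |z| >= 1

# the two factors multiply back to phi
assert np.allclose((phi_plus * phi_minus).coef, phi.coef)
print(phi_plus.coef, phi_minus.coef)  # coefficients of 1 - 0.5 z and 1 - 2 z
```

Whether $r = p$ (minimum phase) is then simply the question of whether `inside` is empty.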

that the stationary solution of (1.1) is $p$th order Markovian. This implies that the best one-step predictor in mean square in terms of the past is a function of the $p$ preceding variables. That the solution of (1.1) is $p$th order Markovian is obvious in the minimum phase case since the solution is causal, implying that $\xi_t$ is independent of the past of the $x$ process, that is, of $x_{t-1}, x_{t-2}, \ldots$. In the nonminimum phase case the $x_t$ process is noncausal and so $\xi_t$ is not independent of the past of the $x$ process. The Markovian property of the $x$ process is used in Section 3, where the principal result on the nonlinearity of the best predictor in mean square of $x_{t+1}$ given $x_j$, $j \le t$, is derived in the nonminimum phase nongaussian case when all the moments of the $\xi_t$ are finite and the roots $m_i$, $i = r+1, \ldots, p$, are distinct. One should note that the first order autoregressive scheme with time reversed discussed in this section is not minimum phase.

2. The Markov Property

Our object in this section is to show that the stationary autoregressive sequence is $p$th order Markovian, whether it is minimum phase or not. Part of the argument parallels one given in [1]. The argument is carried out in the case $r, s > 0$, since it is obvious otherwise. Introduce the causal and purely noncausal sequences

$U_t = \phi_-(B) x_t, \qquad V_t = \phi_+(B) x_t,$

with $B$ the one-step backshift symbol, that is, $B x_t = x_{t-1}$. We then have

$U_t = \sum_{j=0}^{\infty} \alpha_j \xi_{t-j}, \qquad V_t = \sum_{j=s}^{\infty} \beta_j \xi_{t+j},$

where

$\phi_+(z)^{-1} = \sum_{j=0}^{\infty} \alpha_j z^j, \qquad \phi_-(z)^{-1} = \sum_{j=s}^{\infty} \beta_j z^{-j}.$

Let us also note that we have

$x_t = \sum_{j=-\infty}^{\infty} \psi_j \xi_{t-j},$

where

(2.1)  $\phi(z)^{-1} = \sum_{j=-\infty}^{\infty} \psi_j z^j.$

We shall carry through the argument assuming the existence of positive density functions. However, essentially the same argument can be carried through without this assumption using a more elaborate notation. Let the density function of the $\xi$ random variables be $g$. The random variables $U_l$, $l \le t$, are independent of $V_l$, $l \ge t - s + 1$, and so the joint probability density function of $U_1, \ldots, U_n, V_{n-s+1}, \ldots, V_n$ is

$h_U(U_1, \ldots, U_r) \left\{\prod_{t=r+1}^{n} g(U_t - \theta_1 U_{t-1} - \cdots - \theta_r U_{t-r})\right\} h_V(V_{n-s+1}, \ldots, V_n).$

Here $h_U$ and $h_V$ are the joint probability density functions of $U_1, \ldots, U_r$ and $V_{n-s+1}, \ldots, V_n$ respectively. Consider the linear transformation $T_n$ given by

$(U_1, \ldots, U_s, U_{s+1}, \ldots, U_n, V_{n-s+1}, \ldots, V_n)' = T_n\,(U_1, \ldots, U_s, x_1, \ldots, x_n)',$

where

$U_l = x_l - \theta_{r+1} x_{l-1} - \cdots - \theta_p x_{l-s}, \qquad l = s+1, \ldots, n,$
$V_l = x_l - \theta_1 x_{l-1} - \cdots - \theta_r x_{l-r}, \qquad l = n-s+1, \ldots, n.$

Using this transformation one can see that the joint density of $U_1, \ldots, U_s, x_1, \ldots, x_n$ is

$h_U(\tilde U_1, \ldots, \tilde U_r) \left\{\prod_{t=r+1}^{p} g(\tilde U_t - \theta_1 \tilde U_{t-1} - \cdots - \theta_r \tilde U_{t-r})\right\} \left\{\prod_{t=p+1}^{n} g(x_t - \phi_1 x_{t-1} - \cdots - \phi_p x_{t-p})\right\} h_V(\phi_+(B) x_{n-s+1}, \ldots, \phi_+(B) x_n)\, |\det T_n|,$

where

$\tilde U_l = U_l$ if $l \le s$, and $\tilde U_l = x_l - \theta_{r+1} x_{l-1} - \cdots - \theta_p x_{l-s}$ if $l > s$.

If $s > 0$, $\ln |\det T_n| = (n - p) \ln |\theta_p| + O(1)$.

Let us compute the conditional density of $x_n, x_{n-1}, \ldots, x_{n-p}$ given $x_{n-d}, x_{n-d-1}, \ldots, x_1, U_s, \ldots, U_1$. The one-step ($d = 1$) conditional density is given by

$g(x_n - \phi_1 x_{n-1} - \cdots - \phi_p x_{n-p})\, \frac{h_V(\phi_+(B) x_{n-s+1}, \ldots, \phi_+(B) x_n)}{h_V(\phi_+(B) x_{n-s}, \ldots, \phi_+(B) x_{n-1})}\, \frac{|\det T_n|}{|\det T_{n-1}|},$

while if $1 < d \le p+1$, one obtains

$\left\{\prod_{u=0}^{d-1} g(x_{n-u} - \phi_1 x_{n-u-1} - \cdots - \phi_p x_{n-u-p})\right\} \frac{h_V(\phi_+(B) x_{n-s+1}, \ldots, \phi_+(B) x_n)}{h_V(\phi_+(B) x_{n-d-s+1}, \ldots, \phi_+(B) x_{n-d})}\, \frac{|\det T_n|}{|\det T_{n-d}|}.$

If $d > p+1$ the conditional probability density is

$\left\{\int \prod_{t=n-d+1}^{n} g(x_t - \phi_1 x_{t-1} - \cdots - \phi_p x_{t-p})\, dx_{n-d+1} \cdots dx_{n-p-1}\right\} \frac{h_V(\phi_+(B) x_{n-s+1}, \ldots, \phi_+(B) x_n)}{h_V(\phi_+(B) x_{n-d-s+1}, \ldots, \phi_+(B) x_{n-d})}\, \frac{|\det T_n|}{|\det T_{n-d}|}.$
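The determinant ratios $|\det T_n|/|\det T_{n-d}|$ entering the conditional densities can be made concrete numerically. In the sketch below the polynomial $\phi(z) = (1 - 0.5z)(1 - 2z)$, so that $p = 2$, $r = s = 1$, $\theta_1 = 0.5$, $\theta_2 = 2$, is a hypothetical example; $T_n$ maps $(U_1, x_1, \ldots, x_n)$ to $(U_1, U_2, \ldots, U_n, V_n)$, and consecutive determinants differ by the factor $|\theta_p| = 2$:

```python
import numpy as np

theta1, theta2 = 0.5, 2.0  # hypothetical: phi_+(z) = 1 - 0.5 z, phi_-(z) = 1 - 2 z

def T(n):
    # columns: (U_1, x_1, ..., x_n); rows: U_1 kept as is, then
    # U_l = x_l - theta2 * x_{l-1} for l = 2..n, then V_n = x_n - theta1 * x_{n-1}
    M = np.zeros((n + 1, n + 1))
    M[0, 0] = 1.0
    for l in range(2, n + 1):
        M[l - 1, l] = 1.0          # coefficient of x_l in U_l
        M[l - 1, l - 1] = -theta2  # coefficient of x_{l-1} in U_l
    M[n, n] = 1.0                  # coefficient of x_n in V_n
    M[n, n - 1] = -theta1          # coefficient of x_{n-1} in V_n
    return M

dets = [abs(np.linalg.det(T(n))) for n in range(2, 9)]
ratios = [b / a for a, b in zip(dets, dets[1:])]
print(ratios)  # each ratio is |theta_p| = 2, so log|det T_n| grows linearly in n
```

The constant in front of $|\theta_p|^{n}$ cancels in every ratio $|\det T_n|/|\det T_{n-d}|$, which is all the conditional-density formulas need.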

Notice that in all these cases the conditional probability density depends on $x_{n-d}, x_{n-d-1}, \ldots, x_1, U_s, \ldots, U_1$ only through $x_{n-d}, \ldots, x_{n-d-p+1}$. However, this implies that the conditional probability density of $x_n, x_{n-1}, \ldots, x_{n-p}$ given $x_{n-d}, x_{n-d-1}, \ldots, x_1$ is the same, by a standard argument using conditional expectations. The argument is that if $f$ is integrable and $\mathcal B$ and $\mathcal A$ are σ-algebras, then if $E(f \mid \mathcal B, \mathcal A) = h$ is $\mathcal B$-measurable, it follows that

$E(f \mid \mathcal B) = E(E(f \mid \mathcal B, \mathcal A) \mid \mathcal B) = E(h \mid \mathcal B) = h.$

Thus $\{x_n\}$ is a sequence that is $p$th order Markovian.

3. A Functional Equation for the Characteristic Function

The characteristic function of $x_k$ is clearly

$\eta(t) = \prod_{k=-\infty}^{\infty} \varphi(\psi_k t).$

The joint characteristic function of the random variables $x_{-s}, x_{-s+1}, \ldots, x_0$ is

$\eta(\tau_s, \tau_{s-1}, \ldots, \tau_0) = E\left\{\exp\left[i \sum_{l=0}^{s} \tau_l x_{-l}\right]\right\} = \prod_{k=-\infty}^{\infty} \varphi\left(\sum_{l=0}^{s} \tau_l \psi_{k-l}\right),$

while the joint characteristic function of $x_{-s}, x_{-s+1}, \ldots, x_{-1}$ is

$\eta(\tau_s, \ldots, \tau_1) = \prod_{k=-\infty}^{\infty} \varphi\left(\sum_{l=1}^{s} \tau_l \psi_{k-l}\right).$

It is clear that

$\eta_{\tau_0}(\tau_s, \ldots, \tau_1, 0) = i \int x_0 \exp\left(i \sum_{l=1}^{s} \tau_l x_{-l}\right) dF(x_{-s}, \ldots, x_{-1}, x_0) = i \int E(x_0 \mid x_{-1}, \ldots, x_{-s}) \exp\left(i \sum_{l=1}^{s} \tau_l x_{-l}\right) dF(x_{-s}, \ldots, x_{-1}),$

where $F(x_{-s}, \ldots, x_{-1})$ is the joint distribution function of $x_{-s}, \ldots, x_{-1}$. In the case of the $p$th order autoregressive sequence $x_t$, since the sequence is a Markov process of order $p$, it is sufficient in considering the best one-step predictor in mean square to consider $s = p$, since the one-step predictor of $x_0$ given the whole past will depend only on the $p$ immediately preceding random variables. Now

$\frac{\partial}{\partial \tau_0} \log \eta(\tau_p, \ldots, \tau_1, \tau_0)\Big|_{\tau_0 = 0} = \frac{\eta_{\tau_0}(\tau_p, \ldots, \tau_1, 0)}{\eta(\tau_p, \ldots, \tau_1)} = \sum_k \psi_k\, \varphi'\left(\sum_{l=1}^{p} \tau_l \psi_{k-l}\right) \Big/ \varphi\left(\sum_{l=1}^{p} \tau_l \psi_{k-l}\right),$

while

$\frac{\partial}{\partial \tau_j} \log \eta(\tau_p, \ldots, \tau_1) = \sum_k \psi_{k-j}\, \varphi'\left(\sum_{l=1}^{p} \tau_l \psi_{k-l}\right) \Big/ \varphi\left(\sum_{l=1}^{p} \tau_l \psi_{k-l}\right)$

for $\tau_p, \ldots, \tau_1$ in some neighborhood of the origin, $|\tau_p|, \ldots, |\tau_1| \le \varepsilon$, $\varepsilon > 0$. If the best predictor is linear we must have

(3.1)  $\eta_{\tau_0}(\tau_p, \ldots, \tau_1, 0) = \sum_{j=1}^{p} b_j\, \eta_{\tau_j}(\tau_p, \ldots, \tau_1),$

where the $b_j$'s are the coefficients of the best linear predictor $\hat x_0 = \sum_{j=1}^{p} b_j x_{-j}$ of $x_0$ in mean square. This is in turn equivalent to

(3.2)  $\sum_k \left(\psi_k - \sum_{l=1}^{p} b_l \psi_{k-l}\right) h\left(\sum_{j=1}^{p} \tau_j \psi_{k-j}\right) = 0,$

where $h(\tau) = \varphi'(\tau)/\varphi(\tau)$, for $\tau_1, \ldots, \tau_p$ such that $\eta(\tau_p, \ldots, \tau_1) \ne 0$. That (3.1) is equivalent to (3.2) follows from the fact that $|\eta_{\tau_j}(\tau_p, \ldots, \tau_0)| \le \{E x_{-j}^2\}^{1/2}$ and that the $\psi_k$ tend to zero exponentially as $|k| \to \infty$. The equation (3.2) is similar to the type of functional equation taken up in [3].

The $\psi_k$ are the coefficients in the Laurent expansion of $\phi(z)^{-1}$. The $b_l$ can be read off from the polynomial with positive constant coefficient, having the same absolute value as $\phi(z)$ when $z = e^{i\lambda}$, and with all its zeros outside the unit disc. Let

$\phi_-^*(z) = (-1)^s z^s \phi_-(z^{-1}).$

Notice that the roots of $\phi_-^*(z)$ are the inverses of the roots of $\phi_-(z)$. Thus the polynomial $\zeta(z) = \phi_+(z)\phi_-^*(z)$ has all its roots outside the unit disc and has the same absolute value as $\phi(z)$ on $|z| = 1$. Notice that the coefficients $\psi_k - \sum_{l=1}^{p} b_l \psi_{k-l}$ are those in the Laurent expansion of

$\zeta(z)\phi(z)^{-1} = \phi_-^*(z)\phi_-(z)^{-1}.$

If the sequence is minimum phase this function is 1, and so the coefficients are 0 for $k \ne 0$ and 1 when $k = 0$. Further, in the minimum phase case, $\psi_k = 0$ for $k < 0$. The relation (3.2) is then automatically satisfied in the minimum phase case whatever the distribution $G$, as long as $E\xi_t = 0$ and $E\xi_t^2 < \infty$. However, we had already seen this before using another argument.

Proposition 1. Consider the stationary solution $x_t$ of the system of equations (1.1), where the $\xi_t$ are independent, identically distributed with $E\xi_t \equiv 0$, $E\xi_t^2 = \sigma^2 < \infty$, and characteristic polynomial $\phi$ [clearly $\phi(z) \ne 0$ if $|z| = 1$]. Then the best one-step predictor for $x_t$ is linear if and only if (3.2) holds, where the $\psi_k$'s are given by (2.1) and the $b_l$'s are the coefficients of the best linear predictor.

It is of some interest to essentially characterize the sequence

$\gamma_k = \psi_k - \sum_{l=1}^{p} b_l \psi_{k-l}, \qquad k = \ldots, -1, 0, 1, \ldots.$
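The sequence $\gamma_k$ can be computed directly in a concrete case. A sketch for the hypothetical example $\phi(z) = (1 - 0.5z)(1 - 2z)$: here $\zeta(z) = \phi_+(z)\phi_-^*(z) = (1 - 0.5z)(2 - z)$, the best linear predictor coefficients come from $1 - b_1 z - b_2 z^2 = \zeta(z)/\zeta(0)$, and the two-sided $\psi_k$ have the closed form $\psi_k = -\tfrac13\, 0.5^k$ for $k \ge 0$, $\psi_{-k} = -\tfrac43\, 0.5^k$ for $k \ge 1$; one sees numerically that $\gamma_k$ vanishes for every $k > 0$:

```python
import numpy as np
from numpy.polynomial import Polynomial

# hypothetical example phi(z) = (1 - 0.5 z)(1 - 2 z); the root inside the
# unit circle is m = 0.5, so phi_-^*(z) = 2 - z and
# zeta(z) = phi_+(z) phi_-^*(z) = (1 - 0.5 z)(2 - z)
zeta = Polynomial([1.0, -0.5]) * Polynomial([2.0, -1.0])
pred_error = zeta / zeta(0)    # 1 - b_1 z - b_2 z^2 = 1 - z + 0.25 z^2
b = -pred_error.coef[1:]       # b_1 = 1, b_2 = -0.25

# two-sided psi_k from the Laurent expansion of 1/phi(z), in closed form
def psi(k):
    return -(1 / 3) * 0.5**k if k >= 0 else -(4 / 3) * 0.5 ** (-k)

K = 30
gamma = {k: psi(k) - sum(b[l - 1] * psi(k - l) for l in (1, 2))
         for k in range(-K, K)}

# gamma_k vanishes for every k > 0
assert all(abs(gamma[k]) < 1e-12 for k in range(1, K))
print(gamma[0])  # gamma_0 = 1/4 for this example (up to rounding)
```

This is exactly the one-sidedness of the Laurent expansion of $\phi_-^*(z)\phi_-(z)^{-1}$ in nonpositive powers of $z$.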

The generating function of the $\psi_k$ is $\phi(z)^{-1}$ [see (2.1)]. Since the coefficients $b_l$ correspond to the best linear one-step predictor in mean square,

$1 - \sum_{l=1}^{p} b_l z^l = c\, \phi(z) \prod_{i=r+1}^{p} \frac{(1 - m_i z)\, m_i^{-1}}{1 - m_i^{-1} z}$

with $c$ a nonzero constant. It then follows that

(3.3)  $\gamma(z) = \sum_k \gamma_k z^k = c \prod_{i=r+1}^{p} \frac{(1 - m_i z)\, m_i^{-1}}{1 - m_i^{-1} z}.$

Since

$\frac{(1 - m z)\, m^{-1}}{1 - m^{-1} z} = m + (m^2 - 1) \sum_{j=1}^{\infty} m^{j-1} z^{-j}$

when $|m| < 1$, it follows that $\gamma_k = 0$ for $k > 0$.

Theorem 1. Consider the stationary autoregressive sequence $\{x_t\}$ satisfying (1.1) and assume that the random variables $\xi_t$ have all moments finite. Further let the sequence be nonminimum phase with all the zeros $m_i$, $i = r+1, \ldots, p$, simple. Then if the best one-step predictor is linear, the $\xi$ distribution is Gaussian.

Suppose that the $\xi$ distribution is nongaussian. There are then an infinite number of nonzero cumulants $\mu_a \ne 0$, $a > 2$, of the $\xi$ distribution. Also

$\psi_{-k} = \sum_{j=r+1}^{p} \alpha_j m_j^{k}, \qquad k > 0,$

for some coefficients $\alpha_j \ne 0$, $j = r+1, \ldots, p$. If $\mu_{a+1} \ne 0$ for some $a \ge 2$, the relation (3.2) implies that

(3.4)  $\sum_{k=-\infty}^{0} \gamma_k\, \psi_{k-l_1} \cdots \psi_{k-l_a} = 0, \qquad l_1, \ldots, l_a = 1, \ldots, p.$

For the $a$th order partial derivative of the expression in (3.2) with respect to $\tau_{l_1}, \ldots, \tau_{l_a}$ at $\tau_{l_1} = \cdots = \tau_{l_a} = 0$ is $i^{a+1} \mu_{a+1}$ multiplied by the expression on the left of (3.4). The equations (3.4) can be rewritten

$\sum_{j_1, \ldots, j_a = r+1}^{p} \alpha_{j_1} \cdots \alpha_{j_a}\, m_{j_1}^{l_1} \cdots m_{j_a}^{l_a} \sum_{k=-\infty}^{0} \gamma_k (m_{j_1} \cdots m_{j_a})^{-k} = 0, \qquad l_1, \ldots, l_a = 1, \ldots, p.$

Consider the set of equations obtained by letting $l_1, \ldots, l_a = 1, \ldots, s$. The matrix of this set of equations is

$M = (M_{j,l}) = \left(\alpha_{j_1} \cdots \alpha_{j_a}\, m_{j_1}^{l_1} \cdots m_{j_a}^{l_a}\right),$

where $j = (j_1, \ldots, j_a)$, $l = (l_1, \ldots, l_a)$, $j_1, \ldots, j_a = r+1, \ldots, p$, $l_1, \ldots, l_a = 1, \ldots, s$. The determinant of this matrix is $\prod_{u=r+1}^{p} \alpha_u^{a}$ times the $a$th power of the Vandermonde determinant $\det(m_j^{l};\ j = r+1, \ldots, p,\ l = 1, \ldots, s)$.
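The nonvanishing of $\det M$ can be illustrated numerically. In the sketch below the zeros $m_j$ and coefficients $\alpha_j$ are hypothetical values (two simple zeros inside the unit circle, $a = 2$); over the multi-indices, the matrix of the system is the $a$-fold Kronecker power of the $s \times s$ scaled Vandermonde matrix $A = (\alpha_j m_j^l)$, so its determinant is nonzero precisely because the $m_j$ are distinct and the $\alpha_j$ are nonzero:

```python
import numpy as np

# hypothetical data: two simple zeros of phi_- inside the unit circle
# and nonzero coefficients alpha_j from psi_{-k} = sum_j alpha_j m_j^k
m = np.array([0.5, -0.4])
alpha = np.array([1.3, -0.7])
s = len(m)

# A[l-1, j] = alpha_j * m_j^l for l = 1..s -- a scaled Vandermonde matrix
A = np.array([[alpha[j] * m[j] ** l for j in range(s)]
              for l in range(1, s + 1)])

# for a = 2, the matrix over multi-indices (j_1, j_2) x (l_1, l_2) is A kron A
M = np.kron(A, A)

# det A = alpha_1 alpha_2 m_1 m_2 (m_2 - m_1) != 0, hence det M != 0
assert abs(np.linalg.det(A)) > 1e-12
assert abs(np.linalg.det(M)) > 1e-12
print(np.linalg.det(A))  # alpha_1 * alpha_2 * m_1 * m_2 * (m_2 - m_1)
```

Repeated zeros or a vanishing $\alpha_j$ would make `A` singular, which is why the theorem assumes the $m_i$ simple.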

Since the determinant is nonzero we must have

$\gamma\big((m_{j_1} \cdots m_{j_a})^{-1}\big) = \sum_{k=-\infty}^{0} \gamma_k (m_{j_1} \cdots m_{j_a})^{-k} = 0, \qquad j_1, \ldots, j_a = r+1, \ldots, p.$

These are too many zeros for the function $\gamma(z)$, and so we must have $\mu_{a+1} = 0$. Since this holds for any $a \ge 2$, the $\xi$ distribution must be Gaussian.

NonGaussian nonminimum phase autoregressive sequences arise naturally when considering transects of certain classes of random fields.

References

[1] F. Breidt, R. Davis, K. S. Lii and M. Rosenblatt, Maximum likelihood estimation for noncausal autoregressive processes, J. Mult. Anal. 36 (1991).
[2] S. Cambanis and I. Fakhre-Zaker, On prediction of heavy-tailed autoregressive sequences: regression versus best linear prediction, Technical Report No. 383, Center for Stochastic Processes, University of North Carolina.
[3] B. Ramchandran and Ka-Sing Lau, Functional Equations in Probability Theory, Academic Press, 1991.
[4] M. Rosenblatt, A note on prediction and an autoregressive sequence, in Stochastic Processes (S. Cambanis, J. Ghosh, R. Karandikar and P. Sen, eds.), 1993.

Department of Mathematics
University of California, San Diego
La Jolla, California 92093


More information

Chapter 11. Taylor Series. Josef Leydold Mathematical Methods WS 2018/19 11 Taylor Series 1 / 27

Chapter 11. Taylor Series. Josef Leydold Mathematical Methods WS 2018/19 11 Taylor Series 1 / 27 Chapter 11 Taylor Series Josef Leydold Mathematical Methods WS 2018/19 11 Taylor Series 1 / 27 First-Order Approximation We want to approximate function f by some simple function. Best possible approximation

More information

Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed

Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed 18.466 Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed 1. MLEs in exponential families Let f(x,θ) for x X and θ Θ be a likelihood function, that is, for present purposes,

More information

Testing Restrictions and Comparing Models

Testing Restrictions and Comparing Models Econ. 513, Time Series Econometrics Fall 00 Chris Sims Testing Restrictions and Comparing Models 1. THE PROBLEM We consider here the problem of comparing two parametric models for the data X, defined by

More information

in Bounded Domains Ariane Trescases CMLA, ENS Cachan

in Bounded Domains Ariane Trescases CMLA, ENS Cachan CMLA, ENS Cachan Joint work with Yan GUO, Chanwoo KIM and Daniela TONON International Conference on Nonlinear Analysis: Boundary Phenomena for Evolutionnary PDE Academia Sinica December 21, 214 Outline

More information

Chapter 1 Fundamental Concepts

Chapter 1 Fundamental Concepts Chapter 1 Fundamental Concepts 1 Signals A signal is a pattern of variation of a physical quantity, often as a function of time (but also space, distance, position, etc). These quantities are usually the

More information

July 31, 2009 / Ben Kedem Symposium

July 31, 2009 / Ben Kedem Symposium ing The s ing The Department of Statistics North Carolina State University July 31, 2009 / Ben Kedem Symposium Outline ing The s 1 2 s 3 4 5 Ben Kedem ing The s Ben has made many contributions to time

More information

Bickel Rosenblatt test

Bickel Rosenblatt test University of Latvia 28.05.2011. A classical Let X 1,..., X n be i.i.d. random variables with a continuous probability density function f. Consider a simple hypothesis H 0 : f = f 0 with a significance

More information

The Distribution of Mixing Times in Markov Chains

The Distribution of Mixing Times in Markov Chains The Distribution of Mixing Times in Markov Chains Jeffrey J. Hunter School of Computing & Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand December 2010 Abstract The distribution

More information

Translation Invariant Experiments with Independent Increments

Translation Invariant Experiments with Independent Increments Translation Invariant Statistical Experiments with Independent Increments (joint work with Nino Kordzakhia and Alex Novikov Steklov Mathematical Institute St.Petersburg, June 10, 2013 Outline 1 Introduction

More information

ECON 4160, Spring term Lecture 12

ECON 4160, Spring term Lecture 12 ECON 4160, Spring term 2013. Lecture 12 Non-stationarity and co-integration 2/2 Ragnar Nymoen Department of Economics 13 Nov 2013 1 / 53 Introduction I So far we have considered: Stationary VAR, with deterministic

More information

Upper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices 1

Upper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices 1 Upper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices 1 Feng Wei 2 University of Michigan July 29, 2016 1 This presentation is based a project under the supervision of M. Rudelson.

More information

Basic concepts and terminology: AR, MA and ARMA processes

Basic concepts and terminology: AR, MA and ARMA processes ECON 5101 ADVANCED ECONOMETRICS TIME SERIES Lecture note no. 1 (EB) Erik Biørn, Department of Economics Version of February 1, 2011 Basic concepts and terminology: AR, MA and ARMA processes This lecture

More information

Exact Maximum Likelihood Estimation for Non-Gaussian Non-invertible Moving Averages

Exact Maximum Likelihood Estimation for Non-Gaussian Non-invertible Moving Averages Exact Maximum Likelihood Estimation for Non-Gaussian Non-invertible Moving Averages Nan-Jung Hsu Institute of Statistics National Tsing-Hua University F Jay Breidt Department of Statistics Colorado State

More information

LogFeller et Ray Knight

LogFeller et Ray Knight LogFeller et Ray Knight Etienne Pardoux joint work with V. Le and A. Wakolbinger Etienne Pardoux (Marseille) MANEGE, 18/1/1 1 / 16 Feller s branching diffusion with logistic growth We consider the diffusion

More information

Time Series I Time Domain Methods

Time Series I Time Domain Methods Astrostatistics Summer School Penn State University University Park, PA 16802 May 21, 2007 Overview Filtering and the Likelihood Function Time series is the study of data consisting of a sequence of DEPENDENT

More information

where r n = dn+1 x(t)

where r n = dn+1 x(t) Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution

More information

Additive functionals of infinite-variance moving averages. Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535

Additive functionals of infinite-variance moving averages. Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535 Additive functionals of infinite-variance moving averages Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535 Departments of Statistics The University of Chicago Chicago, Illinois 60637 June

More information

Weakly dependent functional data. Piotr Kokoszka. Utah State University. Siegfried Hörmann. University of Utah

Weakly dependent functional data. Piotr Kokoszka. Utah State University. Siegfried Hörmann. University of Utah Weakly dependent functional data Piotr Kokoszka Utah State University Joint work with Siegfried Hörmann University of Utah Outline Examples of functional time series L 4 m approximability Convergence of

More information

Assessing the dependence of high-dimensional time series via sample autocovariances and correlations

Assessing the dependence of high-dimensional time series via sample autocovariances and correlations Assessing the dependence of high-dimensional time series via sample autocovariances and correlations Johannes Heiny University of Aarhus Joint work with Thomas Mikosch (Copenhagen), Richard Davis (Columbia),

More information

ECO 513 Fall 2009 C. Sims CONDITIONAL EXPECTATION; STOCHASTIC PROCESSES

ECO 513 Fall 2009 C. Sims CONDITIONAL EXPECTATION; STOCHASTIC PROCESSES ECO 513 Fall 2009 C. Sims CONDIIONAL EXPECAION; SOCHASIC PROCESSES 1. HREE EXAMPLES OF SOCHASIC PROCESSES (I) X t has three possible time paths. With probability.5 X t t, with probability.25 X t t, and

More information

Estimation of Dynamic Regression Models

Estimation of Dynamic Regression Models University of Pavia 2007 Estimation of Dynamic Regression Models Eduardo Rossi University of Pavia Factorization of the density DGP: D t (x t χ t 1, d t ; Ψ) x t represent all the variables in the economy.

More information

6.3 Forecasting ARMA processes

6.3 Forecasting ARMA processes 6.3. FORECASTING ARMA PROCESSES 123 6.3 Forecasting ARMA processes The purpose of forecasting is to predict future values of a TS based on the data collected to the present. In this section we will discuss

More information

Putzer s Algorithm. Norman Lebovitz. September 8, 2016

Putzer s Algorithm. Norman Lebovitz. September 8, 2016 Putzer s Algorithm Norman Lebovitz September 8, 2016 1 Putzer s algorithm The differential equation dx = Ax, (1) dt where A is an n n matrix of constants, possesses the fundamental matrix solution exp(at),

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

Mi-Hwa Ko. t=1 Z t is true. j=0

Mi-Hwa Ko. t=1 Z t is true. j=0 Commun. Korean Math. Soc. 21 (2006), No. 4, pp. 779 786 FUNCTIONAL CENTRAL LIMIT THEOREMS FOR MULTIVARIATE LINEAR PROCESSES GENERATED BY DEPENDENT RANDOM VECTORS Mi-Hwa Ko Abstract. Let X t be an m-dimensional

More information

NOTES AND PROBLEMS IMPULSE RESPONSES OF FRACTIONALLY INTEGRATED PROCESSES WITH LONG MEMORY

NOTES AND PROBLEMS IMPULSE RESPONSES OF FRACTIONALLY INTEGRATED PROCESSES WITH LONG MEMORY Econometric Theory, 26, 2010, 1855 1861. doi:10.1017/s0266466610000216 NOTES AND PROBLEMS IMPULSE RESPONSES OF FRACTIONALLY INTEGRATED PROCESSES WITH LONG MEMORY UWE HASSLER Goethe-Universität Frankfurt

More information

KUMMER S LEMMA KEITH CONRAD

KUMMER S LEMMA KEITH CONRAD KUMMER S LEMMA KEITH CONRAD Let p be an odd prime and ζ ζ p be a primitive pth root of unity In the ring Z[ζ], the pth power of every element is congruent to a rational integer mod p, since (c 0 + c 1

More information

Variational Principal Components

Variational Principal Components Variational Principal Components Christopher M. Bishop Microsoft Research 7 J. J. Thomson Avenue, Cambridge, CB3 0FB, U.K. cmbishop@microsoft.com http://research.microsoft.com/ cmbishop In Proceedings

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

Online appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US

Online appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US Online appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US Gerdie Everaert 1, Lorenzo Pozzi 2, and Ruben Schoonackers 3 1 Ghent University & SHERPPA 2 Erasmus

More information

Characterization of 2 n -Periodic Binary Sequences with Fixed 2-error or 3-error Linear Complexity

Characterization of 2 n -Periodic Binary Sequences with Fixed 2-error or 3-error Linear Complexity Characterization of n -Periodic Binary Sequences with Fixed -error or 3-error Linear Complexity Ramakanth Kavuluru Department of Computer Science, University of Kentucky, Lexington, KY 40506, USA. Abstract

More information

Outline Lecture 2 2(32)

Outline Lecture 2 2(32) Outline Lecture (3), Lecture Linear Regression and Classification it is our firm belief that an understanding of linear models is essential for understanding nonlinear ones Thomas Schön Division of Automatic

More information

These slides follow closely the (English) course textbook Pattern Recognition and Machine Learning by Christopher Bishop

These slides follow closely the (English) course textbook Pattern Recognition and Machine Learning by Christopher Bishop Music and Machine Learning (IFT68 Winter 8) Prof. Douglas Eck, Université de Montréal These slides follow closely the (English) course textbook Pattern Recognition and Machine Learning by Christopher Bishop

More information

Introduction to Information Entropy Adapted from Papoulis (1991)

Introduction to Information Entropy Adapted from Papoulis (1991) Introduction to Information Entropy Adapted from Papoulis (1991) Federico Lombardo Papoulis, A., Probability, Random Variables and Stochastic Processes, 3rd edition, McGraw ill, 1991. 1 1. INTRODUCTION

More information

High dimensional Ising model selection

High dimensional Ising model selection High dimensional Ising model selection Pradeep Ravikumar UT Austin (based on work with John Lafferty, Martin Wainwright) Sparse Ising model US Senate 109th Congress Banerjee et al, 2008 Estimate a sparse

More information

Switching Regime Estimation

Switching Regime Estimation Switching Regime Estimation Series de Tiempo BIrkbeck March 2013 Martin Sola (FE) Markov Switching models 01/13 1 / 52 The economy (the time series) often behaves very different in periods such as booms

More information

LECTURE 15: COMPLETENESS AND CONVEXITY

LECTURE 15: COMPLETENESS AND CONVEXITY LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other

More information

Multivariate ARMA Processes

Multivariate ARMA Processes LECTURE 8 Multivariate ARMA Processes A vector y(t) of n elements is said to follow an n-variate ARMA process of orders p and q if it satisfies the equation (1) A 0 y(t) + A 1 y(t 1) + + A p y(t p) = M

More information

Bayesian Estimation of DSGE Models 1 Chapter 3: A Crash Course in Bayesian Inference

Bayesian Estimation of DSGE Models 1 Chapter 3: A Crash Course in Bayesian Inference 1 The views expressed in this paper are those of the authors and do not necessarily reflect the views of the Federal Reserve Board of Governors or the Federal Reserve System. Bayesian Estimation of DSGE

More information

NONASYMPTOTIC BOUNDS FOR AUTOREGRESSIVE TIME SERIES MODELING. By Alexander Goldenshluger and Assaf Zeevi 1 University of Haifa and Stanford University

NONASYMPTOTIC BOUNDS FOR AUTOREGRESSIVE TIME SERIES MODELING. By Alexander Goldenshluger and Assaf Zeevi 1 University of Haifa and Stanford University The Annals of Statistics 2001, Vol. 29, No. 2, 417 444 NONASYMPTOTIC BOUNDS FOR AUTOREGRESSIVE TIME SERIES MODELING By Alexander Goldenshluger and Assaf Zeevi 1 University of Haifa and Stanford University

More information