Residual Bootstrap for estimation in autoregressive processes


Chapter 7: Residual Bootstrap for estimation in autoregressive processes

In Chapter 6 we considered the asymptotic sampling properties of several estimators, including the least squares estimator of the autoregressive parameters and the Gaussian maximum likelihood estimator used to estimate the parameters of an ARMA process. The asymptotic distributions are often used for statistical testing and for constructing confidence intervals. However, the results are asymptotic and only hold, approximately, when the sample size is relatively large. When the sample size is smaller, the normal approximation is not valid and better approximations are sought. Even in the case where we are willing to use the asymptotic distribution, we often need to obtain expressions for the variance or bias, and sometimes this is not possible, or only possible with excessive effort. The bootstrap is a powerful tool which allows one to approximate certain characteristics. To quote from Wikipedia: "Bootstrap is the practice of estimating properties of an estimator (such as its variance) by measuring those properties when sampling from an approximating distribution."

The bootstrap essentially samples from the sample. Each subsample is treated like a new sample from the population. Using these multiple realisations one can obtain approximate confidence intervals and variance estimates for the parameter estimates. Of course, in reality we do not have multiple realisations; we are sampling from the sample, so we do not gain more information as we subsample more. But we do gain some insight into the finite sample distribution. In this chapter we detail the residual bootstrap method, and then show that asymptotically the bootstrap distribution coincides with the asymptotic distribution.

The residual bootstrap method was first proposed by J.-P. Kreiss (Kreiss (1997) is a very nice review paper on the subject; see also Franke and Kreiss (1992), where an extension to AR(∞) processes is also given). One of the first theoretical papers on the bootstrap is Bickel and Freedman (1981). There are several other bootstrapping methods for time series; these include bootstrapping the periodogram, the block bootstrap, and bootstrapping the Kalman filter (Stoffer and Wall (1991), Stoffer and Wall (2004) and Shumway and Stoffer (2006)). These methods have been used not only for variance estimation but also for determining model orders, etc. Frequency domain approaches are considered in Dahlhaus and Janas (1996) and Franke and Härdle (1992), and a review of subsampling methods can be found in Politis et al. (1999).

7.1 The residual bootstrap

Suppose that the time series $\{X_t\}$ satisfies the stationary, causal AR($p$) process
\[
X_t = \sum_{j=1}^{p} \phi_j X_{t-j} + \varepsilon_t,
\]
where $\{\varepsilon_t\}$ are iid random variables with mean zero and variance one, and the roots of the characteristic polynomial have absolute value greater than $1+\delta$. We will suppose that the order $p$ is known.

The residual bootstrap for autoregressive processes:

(i) Let
\[
\hat{\Gamma}_p = \frac{1}{n}\sum_{t=p}^{n-1} \underline{X}_t \underline{X}_t' \quad \text{and} \quad \hat{\gamma}_p = \frac{1}{n}\sum_{t=p}^{n-1} \underline{X}_t X_{t+1}, \tag{7.1}
\]
where $\underline{X}_t = (X_t,\ldots,X_{t-p+1})'$. We use $\hat{\phi}_n = (\hat{\phi}_1,\ldots,\hat{\phi}_p)' = \hat{\Gamma}_p^{-1}\hat{\gamma}_p$ as an estimator of $\phi = (\phi_1,\ldots,\phi_p)'$.

(ii) We create the bootstrap sample by first estimating the residuals $\{\varepsilon_t\}$ and sampling from these residuals. Let
\[
\hat{\varepsilon}_t = X_t - \sum_{j=1}^{p} \hat{\phi}_j X_{t-j}, \qquad t = p+1,\ldots,n.
\]

(iii) Now create the empirical distribution function based on $\hat{\varepsilon}_t$. Let
\[
\hat{F}_n(x) = \frac{1}{n-p}\sum_{t=p+1}^{n} I_{(-\infty,\hat{\varepsilon}_t]}(x).
\]
We notice that sampling from the distribution $\hat{F}_n(x)$ means observing $\hat{\varepsilon}_t$ with probability $\frac{1}{n-p}$.

(iv) Sample independently from the distribution $\hat{F}_n(x)$ $n$ times. Label this sample $\{\varepsilon^+_k\}$.

(v) Let $X^+_k = \varepsilon^+_k$ for $1 \le k \le p$ and
\[
X^+_k = \sum_{j=1}^{p} \hat{\phi}_j X^+_{k-j} + \varepsilon^+_k, \qquad p < k \le n.
\]

(vi) We call $\{X^+_k\}$ the bootstrap sample. Repeating steps (iv) and (v) $N$ times gives us $N$ bootstrap samples. To distinguish the samples we label the $i$-th bootstrap sample as $\{(X^+_k)_i;\ k=1,\ldots,n\}$.

(vii) For each bootstrap sample we construct the bootstrap matrix, vector and estimator $(\Gamma^+_p)_i$, $(\gamma^+_p)_i$ and $(\hat{\phi}^+_n)_i = ((\Gamma^+_p)_i)^{-1}(\gamma^+_p)_i$.

(viii) Using $\{(\hat{\phi}^+_n)_i\}$ we can estimate the variance of $\hat{\phi}_n - \phi$ with
\[
\frac{1}{N}\sum_{i=1}^{N} \big((\hat{\phi}^+_n)_i - \hat{\phi}_n\big)\big((\hat{\phi}^+_n)_i - \hat{\phi}_n\big)',
\]
and likewise approximate the distribution function of $\hat{\phi}_n - \phi$.
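The scheme above is straightforward to implement. Below is a minimal NumPy sketch of steps (i) through (viii); the function names (fit_ar_ls, residual_bootstrap) are my own rather than from the notes, and centring the residuals before resampling is a common practical safeguard rather than part of the algorithm as stated.

```python
import numpy as np

def fit_ar_ls(x, p):
    """Step (i): least squares estimator phi_hat = Gamma_hat_p^{-1} gamma_hat_p
    of (7.1), together with the estimated residuals of step (ii)."""
    n = len(x)
    # row for time t holds (x[t-1], ..., x[t-p]), for t = p, ..., n-1
    X = np.column_stack([x[p - 1 - j:n - 1 - j] for j in range(p)])
    y = x[p:]
    phi_hat = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ phi_hat
    return phi_hat, resid

def residual_bootstrap(x, p, N=500, rng=None):
    """Steps (iii)-(vii): returns phi_hat and N bootstrap replicates of it."""
    rng = np.random.default_rng(rng)
    n = len(x)
    phi_hat, resid = fit_ar_ls(x, p)
    resid = resid - resid.mean()  # centre F_hat_n at zero (practical safeguard)
    phi_boot = np.empty((N, p))
    for i in range(N):
        eps = rng.choice(resid, size=n, replace=True)  # step (iv): draw from F_hat_n
        xb = np.empty(n)
        xb[:p] = eps[:p]                               # step (v): initial values
        for k in range(p, n):
            xb[k] = phi_hat @ xb[k - p:k][::-1] + eps[k]
        phi_boot[i] = fit_ar_ls(xb, p)[0]              # step (vii)
    return phi_hat, phi_boot
```

Step (viii) is then, for example, `np.cov(phi_boot, rowvar=False)` as an estimate of the variance of $\hat{\phi}_n - \phi$.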

7.2 The sampling properties of the residual bootstrap estimator

In this section we show that the distributions of $\sqrt{n}(\hat{\phi}^+_n - \hat{\phi}_n)$ and $\sqrt{n}(\hat{\phi}_n - \phi)$ asymptotically coincide. This means that using the bootstrap distribution is no worse than using the asymptotic normal approximation. However, it does not say that the bootstrap distribution better approximates the finite sample distribution of $(\hat{\phi}_n - \phi)$; to show this one would have to use Edgeworth expansion methods.

In order to show that the distribution of the bootstrap sample $\sqrt{n}(\hat{\phi}^+_n - \hat{\phi}_n)$ asymptotically coincides with the asymptotic distribution of $\sqrt{n}(\hat{\phi}_n - \phi)$, we will show convergence of the distributions under the following distance
\[
d_p(H, G) = \inf_{X \sim H,\ Y \sim G} \big\{ E|X - Y|^p \big\}^{1/p},
\]
where $p \ge 1$. Roughly speaking, if $d_p(F_n, G_n) \to 0$, then the limiting distributions of $F_n$ and $G_n$ are the same (see Bickel and Freedman (1981)). The case $p = 2$ is the most commonly used; for $p = 2$ this is called the Mallows distance. The Mallows distance between the distributions $H$ and $G$ is defined as
\[
d_2(H, G) = \inf_{X \sim H,\ Y \sim G} \big\{ E|X - Y|^2 \big\}^{1/2},
\]
and it is the Mallows distance that we will use. To reduce notation, rather than specify the distributions $H$ and $G$, we let $d_p(X, Y) = d_p(H, G)$, where the random variables $X$ and $Y$ have the marginal distributions $H$ and $G$, respectively. We mention that the distance $d_p$ satisfies the triangle inequality.

The main application of showing that $d_p(F_n, G_n) \to 0$ is stated in the following lemma, which is a version of Lemma 8.3 of Bickel and Freedman (1981).

Lemma 7.2.1 Let $\alpha$, $\alpha_n$ be probability measures. Then $d_p(\alpha_n, \alpha) \to 0$ if and only if
\[
E_{\alpha_n}(|X|^p) = \int |x|^p\, \alpha_n(dx) \;\longrightarrow\; E_{\alpha}(|X|^p) = \int |x|^p\, \alpha(dx) \quad \text{as } n \to \infty,
\]
and the distribution $\alpha_n$ converges weakly to the distribution $\alpha$.

Our aim is to show that $d_2\big(\sqrt{n}(\hat{\phi}^+_n - \hat{\phi}_n), \sqrt{n}(\hat{\phi}_n - \phi)\big) \to 0$, which implies that their distributions asymptotically coincide. To do this we use
\[
\sqrt{n}(\hat{\phi}_n - \phi) = \sqrt{n}\,\hat{\Gamma}_p^{-1}(\hat{\gamma}_p - \hat{\Gamma}_p\phi), \qquad
\sqrt{n}(\hat{\phi}^+_n - \hat{\phi}_n) = \sqrt{n}\,(\Gamma^+_p)^{-1}(\gamma^+_p - \Gamma^+_p\hat{\phi}_n).
\]
Studying how $\hat{\Gamma}_p$, $\hat{\gamma}_p$, $\Gamma^+_p$ and $\gamma^+_p$ are constructed, we see that as a starting point we need to show
\[
d_2(X^+_t, X_t) \to 0 \ \text{ as } t, n \to \infty, \qquad d_2(\varepsilon^+_t, \varepsilon_t) \to 0 \ \text{ as } n \to \infty.
\]
We start by showing that $d_2(\varepsilon^+_t, \varepsilon_t) \to 0$.
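Before turning to the proofs, a small numerical illustration of the Mallows distance may help. For two empirical distributions with the same number of equally weighted atoms, the infimum over joint distributions is attained by pairing the order statistics (the comonotone coupling), so $d_2$ can be computed by sorting. The snippet below is my own illustration, not part of the notes.

```python
import numpy as np

def mallows_d2(u, v):
    """d_2 between the empirical distributions of two samples of equal size.
    The optimal coupling pairs order statistics, so d_2 reduces to an L2
    distance between the sorted samples."""
    return np.sqrt(np.mean((np.sort(u) - np.sort(v)) ** 2))

rng = np.random.default_rng(0)
for n in (50, 500, 5000):
    u, v = rng.standard_normal(n), rng.standard_normal(n)
    print(n, mallows_d2(u, v))  # typically shrinks towards 0 as n grows
```

Both empirical distributions converge to the same limit, so the printed distances shrink as $n$ grows, in line with Lemma 7.2.1.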

Lemma 7.2.2 Suppose $\varepsilon^+_t$ are the bootstrap residuals and $\varepsilon_t$ are the true residuals. Define the discrete random variable $J \in \{p+1,\ldots,n\}$ with $P(J = k) = \frac{1}{n-p}$. Then
\[
E\big((\hat{\varepsilon}_J - \varepsilon_J)^2 \mid X_1,\ldots,X_n\big) = O_p\Big(\frac{1}{n}\Big) \tag{7.2}
\]
and
\[
d_2(\hat{F}_n, F) \le d_2(\hat{F}_n, F_n) + d_2(F_n, F) \to 0 \quad \text{as } n \to \infty, \tag{7.3}
\]
where
\[
F_n(x) = \frac{1}{n-p}\sum_{t=p+1}^{n} I_{(-\infty,\varepsilon_t]}(x), \qquad \hat{F}_n(x) = \frac{1}{n-p}\sum_{t=p+1}^{n} I_{(-\infty,\hat{\varepsilon}_t]}(x)
\]
are the empirical distribution functions based on the residuals $\{\varepsilon_t\}_{t=p+1}^{n}$ and the estimated residuals $\{\hat{\varepsilon}_t\}_{t=p+1}^{n}$, and $F$ is the distribution function of the residual $\varepsilon_t$.

PROOF. We first show (7.2). From the definition of $\hat{\varepsilon}_J$ and $\varepsilon_J$ we have
\[
E\big((\hat{\varepsilon}_J - \varepsilon_J)^2 \mid X_1,\ldots,X_n\big)
= \frac{1}{n-p}\sum_{t=p+1}^{n} (\hat{\varepsilon}_t - \varepsilon_t)^2
= \frac{1}{n-p}\sum_{t=p+1}^{n} \Big( \sum_{j=1}^{p} [\hat{\phi}_j - \phi_j] X_{t-j} \Big)^2
= \sum_{j_1,j_2=1}^{p} [\hat{\phi}_{j_1} - \phi_{j_1}][\hat{\phi}_{j_2} - \phi_{j_2}]\, \frac{1}{n-p}\sum_{t=p+1}^{n} X_{t-j_1} X_{t-j_2}.
\]
Now, by using (5.27) we have $\sup_{1\le j\le p} |\hat{\phi}_j - \phi_j| = O_p(n^{-1/2})$, therefore $E\big((\hat{\varepsilon}_J - \varepsilon_J)^2 \mid X_1,\ldots,X_n\big) = O_p(n^{-1})$.

We now prove (7.3). We first note that by the triangle inequality $d_2(\hat{F}_n, F) \le d_2(\hat{F}_n, F_n) + d_2(F_n, F)$. By using Lemma 8.4 of Bickel and Freedman (1981), we have $d_2(F_n, F) \to 0$. Therefore we need to show that $d_2(\hat{F}_n, F_n) \to 0$. It is clear by definition that $d_2(\hat{F}_n, F_n) = d_2(\varepsilon^+_t, \tilde{\varepsilon}_t)$, where $\varepsilon^+_t$ is sampled from $\hat{F}_n$ and $\tilde{\varepsilon}_t$ is sampled from $F_n$; hence $\varepsilon^+_t$ and $\tilde{\varepsilon}_t$ have the same distributions as $\hat{\varepsilon}_J$ and $\varepsilon_J$, respectively. To evaluate
\[
d_2(\varepsilon^+_t, \tilde{\varepsilon}_t) = \inf_{\varepsilon^+_t \sim \hat{F}_n,\ \tilde{\varepsilon}_t \sim F_n} \big\{ E|\varepsilon^+_t - \tilde{\varepsilon}_t|^2 \big\}^{1/2}
\]
we need the marginal distributions of $(\varepsilon^+_t, \tilde{\varepsilon}_t)$ to be $\hat{F}_n$ and $F_n$, but the infimum is over all joint distributions. It is best to choose a joint distribution which is highly dependent, because this minimises the distance between the two random variables. An ideal candidate is to suppose that $\varepsilon^+_t = \hat{\varepsilon}_J$ and $\tilde{\varepsilon}_t = \varepsilon_J$ (with the same index $J$), since these have the marginals $\hat{F}_n$ and $F_n$, respectively. Therefore
\[
d_2(\hat{F}_n, F_n)^2 = \inf_{\varepsilon^+_t \sim \hat{F}_n,\ \tilde{\varepsilon}_t \sim F_n} E|\varepsilon^+_t - \tilde{\varepsilon}_t|^2 \le E\big((\hat{\varepsilon}_J - \varepsilon_J)^2 \mid X_1,\ldots,X_n\big) = O_p\Big(\frac{1}{n}\Big),
\]
where the above rate comes from (7.2). This means that $d_2(\hat{F}_n, F_n) \overset{P}{\to} 0$, hence we obtain (7.3). $\Box$
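The rate in (7.2) is easy to see in simulation: the estimated residuals differ from the true innovations only through $\hat{\phi}_n - \phi = O_p(n^{-1/2})$, so the average squared difference decays like $1/n$. A small check, reusing the hypothetical fit_ar_ls helper from the sketch in Section 7.1 (the AR(2) coefficients are an arbitrary causal choice):

```python
import numpy as np

rng = np.random.default_rng(1)
phi = np.array([0.5, -0.3])  # causal: roots of 1 - 0.5z + 0.3z^2 lie outside the unit circle
p = len(phi)

for n in (100, 1000, 10000):
    eps = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(p, n):
        x[t] = phi @ x[t - p:t][::-1] + eps[t]
    _, resid = fit_ar_ls(x, p)              # estimated residuals eps_hat_t, t > p
    mse = np.mean((resid - eps[p:]) ** 2)   # estimates E[(eps_hat_J - eps_J)^2 | X]
    print(n, mse, n * mse)                  # n * mse stays bounded, matching (7.2)
```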

Corollary 7.2.1 Suppose $\varepsilon^+_t$ is the bootstrapped residual. Then we have
\[
E_{\hat{F}_n}\big((\varepsilon^+_t)^2 \mid X_1,\ldots,X_n\big) \overset{P}{\to} E_F(\varepsilon_t^2).
\]

PROOF. The proof follows from Lemma 7.2.1 and Lemma 7.2.2. $\Box$

We recall that since $X_t$ is a causal autoregressive process, there exist coefficients $\{a_j\}$ such that
\[
X_t = \sum_{j=0}^{\infty} a_j \varepsilon_{t-j},
\]
where $a_j = a_j(\phi) = [A(\phi)^j]_{1,1} = [A^j]_{1,1}$ and $A(\phi)$ is the companion matrix of the AR recursion (see Chapter 2). Similarly, using the estimated parameters $\hat{\phi}_n$ we can write $X^+_t$ as
\[
X^+_t = \sum_{j=0}^{\infty} a_j(\hat{\phi}_n)\, \varepsilon^+_{t-j},
\]
where $a_j(\hat{\phi}_n) = [A(\hat{\phi}_n)^j]_{1,1}$. We now show that $d_2(X^+_t, X_t) \to 0$ as $n \to \infty$ and $t \to \infty$.
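The coefficients $a_j = [A(\phi)^j]_{1,1}$ can be computed directly from powers of the companion matrix $A(\phi)$, and for a causal $\phi$ they decay geometrically, which is what drives the $K\rho^t$ bounds below. A short sketch (the function names are mine):

```python
import numpy as np

def companion(phi):
    """Companion matrix A(phi) of the AR(p) recursion."""
    p = len(phi)
    A = np.zeros((p, p))
    A[0, :] = phi
    A[1:, :-1] = np.eye(p - 1)  # shift the lagged values down
    return A

def ma_coefficients(phi, m):
    """a_j = [A(phi)^j]_{1,1} for j = 0, ..., m-1."""
    A, Aj = companion(phi), np.eye(len(phi))
    a = []
    for _ in range(m):
        a.append(Aj[0, 0])
        Aj = Aj @ A
    return np.array(a)

print(ma_coefficients(np.array([0.5, -0.3]), 8))  # magnitudes decay geometrically
```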

Lemma 7.2.3 Let $J_{p+1},\ldots,J_n$ be independent samples from $\{p+1,\ldots,n\}$ with $P(J_i = k) = \frac{1}{n-p}$. Define
\[
Y^+_t = \sum_{j=0}^{t-p-1} a_j(\hat{\phi}_n)\, \hat{\varepsilon}_{J_{t-j}}, \quad
\tilde{Y}^+_t = \sum_{j=0}^{t-p-1} a_j\, \hat{\varepsilon}_{J_{t-j}}, \quad
\tilde{Y}_t = \sum_{j=0}^{t-p-1} a_j\, \varepsilon_{J_{t-j}}, \quad
Y_t = \tilde{Y}_t + \sum_{j=t-p}^{\infty} a_j\, \varepsilon_{t-j},
\]
where $\varepsilon_{J_k}$ is a sample from $\{\varepsilon_{p+1},\ldots,\varepsilon_n\}$ and $\hat{\varepsilon}_{J_k}$ is a sample from $\{\hat{\varepsilon}_{p+1},\ldots,\hat{\varepsilon}_n\}$. Then we have
\[
E\big((Y^+_t - \tilde{Y}^+_t)^2 \mid X_1,\ldots,X_n\big) = O_p\Big(\frac{1}{n}\Big), \qquad d_2(Y^+_t, \tilde{Y}^+_t) \to 0 \ \text{ as } n \to \infty, \tag{7.4}
\]
\[
E\big((\tilde{Y}^+_t - \tilde{Y}_t)^2 \mid X_1,\ldots,X_n\big) = O_p\Big(\frac{1}{n}\Big), \qquad d_2(\tilde{Y}^+_t, \tilde{Y}_t) \to 0 \ \text{ as } n \to \infty, \tag{7.5}
\]
\[
E\big((\tilde{Y}_t - Y_t)^2 \mid X_1,\ldots,X_n\big) \le K\rho^t, \qquad d_2(\tilde{Y}_t, Y_t) \to 0 \ \text{ as } t \to \infty. \tag{7.6}
\]

PROOF. We first prove (7.4). It is clear from the definitions that
\[
E\big((Y^+_t - \tilde{Y}^+_t)^2 \mid X_1,\ldots,X_n\big) \le \sum_{j} \big([A(\phi)^j]_{1,1} - [A(\hat{\phi}_n)^j]_{1,1}\big)^2\, E\big((\varepsilon^+_j)^2 \mid X_1,\ldots,X_n\big). \tag{7.7}
\]
By Corollary 7.2.1, $E((\varepsilon^+_j)^2 \mid X_1,\ldots,X_n)$ is the same for all $j$ and $E((\varepsilon^+_j)^2 \mid X_1,\ldots,X_n) \overset{P}{\to} E(\varepsilon_t^2)$, hence we consider for now $\big([A(\phi)^j]_{1,1} - [A(\hat{\phi}_n)^j]_{1,1}\big)^2$. Using (5.27) we have $(\hat{\phi}_n - \phi) = O_p(n^{-1/2})$, therefore by the mean value theorem we have $A(\hat{\phi}_n) - A(\phi) = \frac{K}{\sqrt{n}} D$ (for some random matrix $D$). Hence
\[
A(\hat{\phi}_n)^j = \Big(A(\phi) + \frac{K}{\sqrt{n}} D\Big)^j
\]
(note these are heuristic bounds, and this argument needs to be made precise). Expanding the power we have
\[
\Big(A(\phi) + \frac{K}{\sqrt{n}} D\Big)^j = A(\phi)^j + \frac{K}{\sqrt{n}}\, D\, A(\phi)^j \Big(I + A(\phi)^{-1}\frac{K}{\sqrt{n}} B\Big)^j,
\]
where $B$ is such that $\|B\|_{\mathrm{spec}} \le \|\frac{K}{\sqrt{n}} D\|$. Altogether this gives
\[
\big|[A(\phi)^j - A(\hat{\phi}_n)^j]_{1,1}\big| \le \frac{K}{\sqrt{n}} \Big\| D\, A(\phi)^j \Big(I + A(\phi)^{-1}\frac{K}{\sqrt{n}} B\Big)^j \Big\|.
\]
Notice that for large enough $n$, $(I + A(\phi)^{-1}\frac{K}{\sqrt{n}} B)^j$ grows more slowly (as $n \to \infty$) than $A(\phi)^j$ contracts. Therefore, for large enough $n$ and any $\frac{1}{1+\delta} < \rho < 1$, we have
\[
\big|[A(\phi)^j - A(\hat{\phi}_n)^j]_{1,1}\big| \le \frac{K}{\sqrt{n}}\, \rho^j.
\]
Substituting this into (7.7) gives
\[
E\big((Y^+_t - \tilde{Y}^+_t)^2 \mid X_1,\ldots,X_n\big) \le \frac{K}{n}\, E\big((\varepsilon^+_t)^2 \mid X_1,\ldots,X_n\big) \sum_{j} \rho^{2j} = O_p\Big(\frac{1}{n}\Big) \to 0 \ \text{ as } n \to \infty,
\]
hence $d_2(\tilde{Y}^+_t, Y^+_t) \to 0$ as $n \to \infty$.

We now prove (7.5). We see that
\[
E\big((\tilde{Y}^+_t - \tilde{Y}_t)^2 \mid X_1,\ldots,X_n\big) = \sum_{j} a_j^2\, E(\hat{\varepsilon}_{J_{t-j}} - \varepsilon_{J_{t-j}})^2 = \Big(\sum_{j} a_j^2\Big) E(\hat{\varepsilon}_J - \varepsilon_J)^2. \tag{7.8}
\]
Now, by substituting (7.2) into the above we have $E(\tilde{Y}^+_t - \tilde{Y}_t)^2 = O_p(n^{-1})$, as required. This means that $d_2(\tilde{Y}^+_t, \tilde{Y}_t) \to 0$.

Finally we prove (7.6). We see that
\[
E\big((\tilde{Y}_t - Y_t)^2 \mid X_1,\ldots,X_n\big) = \sum_{j=t-p}^{\infty} a_j^2\, E(\varepsilon_t^2). \tag{7.9}
\]
Using (2.7) we have $E(\tilde{Y}_t - Y_t)^2 \le K\rho^t$, thus giving us (7.6). $\Box$

We can now almost prove the result. To do this we note that
\[
(\hat{\gamma}_p - \hat{\Gamma}_p\phi) = \frac{1}{n}\sum_{t=p}^{n-1} \varepsilon_{t+1}\, \underline{X}_t, \qquad (\gamma^+_p - \Gamma^+_p\hat{\phi}_n) = \frac{1}{n}\sum_{t=p}^{n-1} \varepsilon^+_{t+1}\, \underline{X}^+_t. \tag{7.10}
\]

Lemma 7.2.4 Let $Y_t$, $Y^+_t$, $\tilde{Y}^+_t$ and $\tilde{Y}_t$ be defined as in Lemma 7.2.3. Define $\tilde{\Gamma}_p$, $\tilde{\Gamma}^+_p$, $\tilde{\gamma}_p$ and $\tilde{\gamma}^+_p$ in the same way as $\hat{\Gamma}_p$ and $\hat{\gamma}_p$ in (7.1), but with $\{Y_t\}$ replacing $X_t$ in $\tilde{\Gamma}_p$ and $\tilde{\gamma}_p$ and $\{Y^+_t\}$ replacing $X^+_t$ in $\tilde{\Gamma}^+_p$ and $\tilde{\gamma}^+_p$. We have that
\[
d_2(Y_t, Y^+_t) \le \big\{ E(Y_t - Y^+_t)^2 \big\}^{1/2} = O_p\big(K(n^{-1/2} + \rho^t)\big), \tag{7.11}
\]
\[
d_2(Y_t, X_t) \to 0 \quad \text{as } n \to \infty, \tag{7.12}
\]
and
\[
d_2\big(\sqrt{n}(\tilde{\gamma}_p - \tilde{\Gamma}_p\phi),\ \sqrt{n}(\tilde{\gamma}^+_p - \tilde{\Gamma}^+_p\hat{\phi}_n)\big) \le \big\{ n\, E\big\| (\tilde{\gamma}_p - \tilde{\Gamma}_p\phi) - (\tilde{\gamma}^+_p - \tilde{\Gamma}^+_p\hat{\phi}_n) \big\|^2 \big\}^{1/2} \to 0 \quad \text{as } n \to \infty. \tag{7.13}
\]
Furthermore we have
\[
E\|\tilde{\Gamma}^+_p - \tilde{\Gamma}_p\| \to 0, \tag{7.14}
\]
\[
d_2\big((\tilde{\gamma}_p - \tilde{\Gamma}_p\phi),\ (\hat{\gamma}_p - \hat{\Gamma}_p\phi)\big) \to 0, \qquad E\|\tilde{\Gamma}_p - \hat{\Gamma}_p\| \to 0 \quad \text{as } n \to \infty. \tag{7.15}
\]

PROOF. We first prove (7.11). Using the triangle inequality we have
\[
\big\{ E\big((Y_t - Y^+_t)^2 \mid X_1,\ldots,X_n\big) \big\}^{1/2}
\le \big\{ E\big((Y_t - \tilde{Y}_t)^2 \mid X_1,\ldots,X_n\big) \big\}^{1/2}
+ \big\{ E\big((\tilde{Y}_t - \tilde{Y}^+_t)^2 \mid X_1,\ldots,X_n\big) \big\}^{1/2}
+ \big\{ E\big((\tilde{Y}^+_t - Y^+_t)^2 \mid X_1,\ldots,X_n\big) \big\}^{1/2}
= O_p(n^{-1/2} + \rho^t),
\]
where we use Lemma 7.2.3 to bound the three terms. Therefore, by the definition of $d_2(Y_t, Y^+_t)$, we have (7.11).

To prove (7.12) we note that the only difference between $Y_t$ and $X_t$ is that the $\{\varepsilon_{J_k}\}$ in $Y_t$ are sampled from $\{\varepsilon_{p+1},\ldots,\varepsilon_n\}$, hence sampled from $F_n$, whereas the $\{\varepsilon_t\}$ in $X_t$ are iid random variables with distribution $F$. Since $d_2(F_n, F) \to 0$ (Bickel and Freedman (1981), Lemma 8.4), it follows that $d_2(Y_t, X_t) \to 0$, thus proving (7.12).

To prove (7.13) we consider the difference $(\tilde{\gamma}_p - \tilde{\Gamma}_p\phi) - (\tilde{\gamma}^+_p - \tilde{\Gamma}^+_p\hat{\phi}_n)$ and use (7.10) to get
\[
\frac{1}{n}\sum_{t=p}^{n-1} \big\{ \varepsilon_{t+1}\underline{Y}_t - \varepsilon^+_{t+1}\underline{Y}^+_t \big\}
= \frac{1}{n}\sum_{t=p}^{n-1} \big\{ (\varepsilon_{t+1} - \varepsilon^+_{t+1})\underline{Y}_t + \varepsilon^+_{t+1}(\underline{Y}_t - \underline{Y}^+_t) \big\},
\]
where we note that $\underline{Y}^+_t = (Y^+_t,\ldots,Y^+_{t-p+1})'$ and $\underline{Y}_t = (Y_t,\ldots,Y_{t-p+1})'$. Using the above, taking conditional expectations with respect to $\{X_1,\ldots,X_n\}$, and noting that, conditioned on $\{X_1,\ldots,X_n\}$, $(\varepsilon_{t+1} - \varepsilon^+_{t+1})$ and $\varepsilon^+_{t+1}$ are independent of $\underline{Y}_t$ and $\underline{Y}^+_t$ and the summands are uncorrelated across $t$, we have
\[
\Big\{ n\, E\Big( \Big\| \frac{1}{n}\sum_{t=p}^{n-1} \big( \varepsilon_{t+1}\underline{Y}_t - \varepsilon^+_{t+1}\underline{Y}^+_t \big) \Big\|^2 \,\Big|\, X_1,\ldots,X_n \Big) \Big\}^{1/2} \le I + II,
\]
where
\[
I = \Big\{ \frac{1}{n}\sum_{t=p}^{n-1} E\big((\varepsilon_{t+1} - \varepsilon^+_{t+1})^2 \mid X_1,\ldots,X_n\big)\, E\big(\|\underline{Y}_t\|^2 \mid X_1,\ldots,X_n\big) \Big\}^{1/2}
= \big\{ E\big((\varepsilon_{t+1} - \varepsilon^+_{t+1})^2 \mid X_1,\ldots,X_n\big) \big\}^{1/2} \Big\{ \frac{1}{n}\sum_{t=p}^{n-1} E\big(\|\underline{Y}_t\|^2 \mid X_1,\ldots,X_n\big) \Big\}^{1/2}
\]
and
\[
II = \Big\{ \frac{1}{n}\sum_{t=p}^{n-1} E\big((\varepsilon^+_{t+1})^2 \mid X_1,\ldots,X_n\big)\, E\big(\|\underline{Y}_t - \underline{Y}^+_t\|^2 \mid X_1,\ldots,X_n\big) \Big\}^{1/2}
= \big\{ E\big((\varepsilon^+_{t+1})^2 \mid X_1,\ldots,X_n\big) \big\}^{1/2} \Big\{ \frac{1}{n}\sum_{t=p}^{n-1} E\big(\|\underline{Y}_t - \underline{Y}^+_t\|^2 \mid X_1,\ldots,X_n\big) \Big\}^{1/2}.
\]

Now, by using (7.2) we have $I \le K n^{-1/2}$, and by (7.11) and Corollary 7.2.1 we obtain $II \le K n^{-1/2}$; hence we have (7.13). Using a similar technique to that given above we can prove (7.14). Finally, (7.15) follows from (7.13), (7.14) and (7.12). $\Box$

Corollary 7.2.2 Let $\Gamma^+_p$, $\hat{\Gamma}_p$, $\hat{\gamma}_p$ and $\gamma^+_p$ be defined as in (7.1). Then we have
\[
d_2\big( \sqrt{n}(\hat{\gamma}_p - \hat{\Gamma}_p\phi),\ \sqrt{n}(\gamma^+_p - \Gamma^+_p\hat{\phi}_n) \big) \to 0 \tag{7.16}
\]
and
\[
d_1(\Gamma^+_p, \hat{\Gamma}_p) \to 0, \tag{7.17}
\]
as $n \to \infty$.

PROOF. We first prove (7.16). Using (7.13), (7.15) and the triangle inequality gives (7.16). To prove (7.17) we use (7.14) and (7.15) together with the triangle inequality, and (7.17) immediately follows. $\Box$

Now, by using (7.17) and Lemma 7.2.1 we have $\Gamma^+_p \overset{P}{\to} \Gamma_p$, and by using (7.16) the distribution of $\sqrt{n}(\gamma^+_p - \Gamma^+_p\hat{\phi}_n)$ converges weakly to the distribution of $\sqrt{n}(\hat{\gamma}_p - \hat{\Gamma}_p\phi)$; hence the distributions of $\sqrt{n}(\hat{\gamma}_p - \hat{\Gamma}_p\phi)$ and $\sqrt{n}(\gamma^+_p - \Gamma^+_p\hat{\phi}_n)$ asymptotically coincide. Therefore $\sqrt{n}(\hat{\phi}^+_n - \hat{\phi}_n) \overset{D}{\to} N(0, \sigma^2\Gamma_p^{-1})$. From (5.28) we have $\sqrt{n}(\hat{\phi}_n - \phi) \overset{D}{\to} N(0, \sigma^2\Gamma_p^{-1})$. Thus we see that the distributions of $\sqrt{n}(\hat{\phi}_n - \phi)$ and $\sqrt{n}(\hat{\phi}^+_n - \hat{\phi}_n)$ asymptotically coincide.
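As a closing illustration of this result, one can compare the bootstrap covariance of $\sqrt{n}(\hat{\phi}^+_n - \hat{\phi}_n)$ with the estimated asymptotic covariance $\sigma^2\hat{\Gamma}_p^{-1}$ (here $\sigma^2 = 1$). This is a sketch reusing the hypothetical helpers defined earlier in the chapter, not code from the notes.

```python
import numpy as np

rng = np.random.default_rng(2)
phi, p, n = np.array([0.5, -0.3]), 2, 400

# simulate one observed AR(2) sample
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(p, n):
    x[t] = phi @ x[t - p:t][::-1] + eps[t]

phi_hat, phi_boot = residual_bootstrap(x, p, N=2000, rng=rng)

# bootstrap covariance of sqrt(n) (phi_hat^+ - phi_hat)
boot_cov = np.cov(np.sqrt(n) * (phi_boot - phi_hat), rowvar=False)

# estimated asymptotic covariance sigma^2 Gamma_hat_p^{-1}, with sigma^2 = 1
X = np.column_stack([x[p - 1 - j:n - 1 - j] for j in range(p)])
asym_cov = np.linalg.inv(X.T @ X / n)

print(boot_cov)
print(asym_cov)  # the two matrices should be close
```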
