High quantile estimation for some Stochastic Volatility models


High quantile estimation for some Stochastic Volatility models

Ling Luo

Thesis submitted to the Faculty of Graduate and Postdoctoral Studies in partial fulfillment of the requirements for the degree of Master of Science in Mathematics

Department of Mathematics and Statistics
Faculty of Science
University of Ottawa

© Ling Luo, Ottawa, Canada, 2011

The M.Sc. program is a joint program with Carleton University, administered by the Ottawa-Carleton Institute of Mathematics and Statistics


Abstract

In this thesis we consider estimation of the tail index for heavy tailed stochastic volatility models with long memory. We prove a central limit theorem for a Hill estimator. In particular, it is shown that neither the rate of convergence nor the asymptotic variance is affected by long memory. The theoretical findings are verified by simulation studies.


Acknowledgements

I would like to thank my supervisors, Rafal Kulik and Mahmoud Zarepour, for their friendly guidance, inspiring discussions and unending patience throughout my research project. I would especially like to thank my husband, Hao, for his understanding and support, and my lovely son, Travis, for bringing happiness into my life.


Contents

List of Figures
List of Tables
Introduction
1 Time Series Models
  1.1 White noise
  1.2 ARMA models
  1.3 ARFIMA model
  1.4 Gaussian long memory sequences
    1.4.1 Definition of long memory
    1.4.2 Gaussian sequences and Hermite polynomials
  1.5 Stochastic Volatility models
    1.5.1 Long Memory in Stochastic Volatility
    1.5.2 Limit theorems under long memory
2 High quantile estimation for long memory stochastic volatility models
  2.1 Tail empirical process with deterministic levels
    2.1.1 Weak convergence of the martingale part M_n
    2.1.2 Weak convergence of the long memory part R_n
  2.2 Tail empirical process with random levels
  2.3 Asymptotic normality of the Hill estimator
3 Numerical experiments
  3.1 Estimators for the tail index α
    3.1.1 Peaks Over Threshold (POT) Maximum Likelihood (ML) estimator
    3.1.2 Hill estimator
    3.1.3 Pickands estimator
  3.2 Simulations: POT method for different choices of threshold u
  3.3 Simulations: Hill and Pickands estimators
  3.4 Hill plots
Appendix A - D[0, ∞) space
Appendix B - R codes
Bibliography


List of Figures

1 USD versus Swiss Franc Exchange Rates
2 Log-differenced USD vs. Swiss Franc Exchange Rates
1.1 White noise and its ACF
1.2 AR(1) and its ACF
1.3 MA(1) and its ACF
1.4 SV model and its ACF
3.1 Statistics for POT simulation. The graphs illustrate the empirical distribution of α̂_1, ..., α̂_M based on M = 1000 Monte Carlo runs
3.2 Hill plot for iid Pareto
3.3 Hill plots with different dependence parameters


List of Tables

3.1 Hill estimator simulation; standard deviation, β = 1
3.2 Hill estimator simulation; standard deviation, β = 11
3.3 Pickands estimator simulation; standard deviation, β = 1
3.4 Pickands estimator simulation; standard deviation, β = 11


Introduction

Motivation

Typical financial data exhibit the following behaviour: they are uncorrelated; they have strong dependence in squares; they are heavy-tailed. Notice that the autocovariance function may not be defined if the variance is infinite. To illustrate, we consider the daily exchange rates between the US Dollar and the Swiss Franc. Figure 1 shows that the time series {P_i, i = 1, ..., n} appears to be nonstationary; it has a decreasing trend. Consider the log-differences, i.e. S_i = log(P_i/P_{i-1}), i = 2, ..., n. The time series plot of S_i, i = 2, ..., n, appears stationary. There is no significant correlation at any lag, hence the log-difference series {S_i, i = 2, ..., n} is uncorrelated. The lower panel of Figure 2 shows that significant autocorrelations exist in S_i^2, i = 2, ..., n. Therefore the log-difference series is not independent. The QQ-plot shows that S_i has heavier tails than the normal distribution.
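These diagnostics take only a few lines of R (a minimal sketch, assuming P is a numeric vector containing the daily exchange rates; cf. Appendix B):

  S <- diff(log(P))                 # log-differences S_i = log(P_i / P_{i-1})
  par(mfrow = c(2, 2))
  plot(S, type = "l", main = "Time series plot for Si")
  acf(S, main = "Series Si")        # no significant autocorrelation
  acf(S^2, main = "Series Si^2")    # strong autocorrelation in the squares
  qqnorm(S); qqline(S)              # heavier tails than the normal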

Figure 1: Exchange rates

The natural question is how to model the time series so that the model captures the above features. Furthermore, we would like to estimate relevant parameters, in particular the so-called tail index, in such models.

Structure of the thesis

In Chapter 1 we give a brief overview of standard time series models, long memory and stochastic volatility (SV) models. We will illustrate that only SV models can capture all of the above mentioned properties. In particular, a pure SV process and an EGARCH (Exponential Generalized Autoregressive Conditionally Heteroscedastic) process can have both the dependence and the heavy-tailed features. These processes are defined in Section 1.5.1 and the appropriate tail and dependence properties are discussed. In particular, we introduce the notion of the tail index. Furthermore, in Section 1.5.2 we collect some limit theorems for long memory models. We use tools such as Hermite polynomials

to establish our results. In Chapter 2, we establish asymptotic theory for high quantile estimation of EGARCH and SV processes. The core problem resolved in this thesis is to study the asymptotic behavior of a tail empirical process with deterministic levels and to obtain the corresponding behavior of a tail empirical process with random levels. We conclude in Theorem 2.1.1 that long memory influences the behaviour of the tail empirical process with deterministic levels; however, this is not the case for random levels; see Section 2.2. We use the latter result to obtain asymptotic normality of the Hill estimator, the most popular estimator of the tail index. In Chapter 3, we study the numerical performance of three tail index estimators: the Peaks over Threshold (POT) estimator, the Hill estimator and the Pickands estimator. Note that the Hill estimator can be thought of as the POT estimator based on random levels. Therefore, the theoretical results in Chapter 2 suggest that the Hill estimator should be robust against long memory, contrary to the standard POT estimator, which is based on deterministic levels. We illustrate that under the SV model with different dependence parameters and tail indices, the dependence does not influence the performance of the Hill estimator.

Contribution

Convergence of the tail empirical process was studied in Rootzen (2009) in a weakly dependent case and in Kulik and Soulier (2011) for pure long memory stochastic volatility models. Our main contribution is the extension of the latter result to so-called EGARCH models. This extension requires a careful application of the martingale central limit theorem; see Section 2.1. Furthermore, we conduct simulations which confirm our theoretical findings. Similar work has been done in McElroy and

Jach (2011) or Mikosch and Rezapour (2011); in both cases for a pure stochastic volatility model.

Figure 2: Graph of log-differences: time series, autocovariance function, autocovariance function of squares, QQ-plot


Chapter 1

Time Series Models

Let {X_i, -∞ < i < ∞} be a stationary sequence with a finite variance. We will use the following notation:

γ_X(h) = Cov(X_i, X_{i+h}) = Cov(X_0, X_h),    ρ_X(h) = Cor(X_0, X_h) = γ_X(h)/Var(X_0).

1.1 White noise

Let {Z_i, -∞ < i < ∞} be a sequence of independent standard normal random variables. Of course, these random variables, as well as their squares, are uncorrelated. To illustrate this, we simulate 1000 observations from an i.i.d. N(0, 1) process. Figure 1.1 shows the 1000 simulated values; the sequence appears to be stationary. Furthermore, the figure shows the corresponding autocorrelation function: the values ρ_Z(h) are close to 0 for all h > 0, which indicates that the series {Z_i, -∞ < i < ∞} is uncorrelated. Also, the squares are uncorrelated. We will write {Z_i, -∞ < i < ∞} ~ IID(0, 1).
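Figure 1.1 can be reproduced with a short R sketch (the seed is an arbitrary choice):

  set.seed(1)
  Z <- rnorm(1000)                  # 1000 i.i.d. N(0,1) observations
  par(mfrow = c(2, 2))
  plot(Z, type = "l", main = "Time series plot for White Noise")
  acf(Z, main = "Series Zi")        # rho_Z(h) close to 0 for h > 0
  acf(Z^2, main = "Series Zi^2")    # squares also uncorrelated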

Figure 1.1: Simulated white noise and its ACF

1.2 ARMA models

Definition 1.2.1 We say that {X_i, -∞ < i < ∞} is an ARMA(p, q) process if {X_i, -∞ < i < ∞} is stationary and if for every i,

X_i - ϕ_1 X_{i-1} - ... - ϕ_p X_{i-p} = Z_i + θ_1 Z_{i-1} + ... + θ_q Z_{i-q},    (1.2.1)

where {Z_i, -∞ < i < ∞} ~ White Noise(0, σ^2) and the polynomials

ϕ_p(z) = 1 - ϕ_1 z - ... - ϕ_p z^p

and

θ_q(z) = 1 + θ_1 z + ... + θ_q z^q

have no common factors. We may write

ϕ_p(B) X_i = θ_q(B) Z_i,    (1.2.2)

where B is the backward shift operator (B^j X_i = X_{i-j}, j = 0, ±1, ...).

Figure 1.2: Simulated AR(1), ϕ = 0.5, and its ACF

We simulate 1000 observations from an AR(1) = ARMA(1,0) process; a sketch of the simulation is given below. Figure 1.2 shows that the AR(1) series is stationary. The ACF decays exponentially to zero.
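A minimal R sketch for Figures 1.2 and 1.3; the parameter value 0.5 matches the panel titles:

  X.ar <- arima.sim(model = list(ar = 0.5), n = 1000)   # AR(1), phi = 0.5
  X.ma <- arima.sim(model = list(ma = 0.5), n = 1000)   # MA(1), theta = 0.5
  par(mfrow = c(2, 2))
  acf(X.ar, main = "AR"); acf(X.ar^2, main = "AR^2")    # exponential decay
  acf(X.ma, main = "MA"); acf(X.ma^2, main = "MA^2")    # cut-off after lag 1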

There is correlation in the process at different lags h. Below we verify theoretically that the covariance function is not equal to 0.

Example 1.2.2 Consider the AR(1) model X_i = ϕ X_{i-1} + Z_i, where |ϕ| < 1 and Z_i, i = 1, ..., n, is a white noise (i.e. a sequence of uncorrelated random variables with mean zero and variance σ_Z^2). Then

X_i = ϕ X_{i-1} + Z_i = Σ_{j=0}^∞ ϕ^j Z_{i-j},

which is a linear process with ACF

γ_X(h) = Σ_{j=0}^∞ ϕ^j ϕ^{j+h} σ_Z^2 = σ_Z^2 ϕ^h / (1 - ϕ^2) = C ϕ^h,  h > 0.

This means that the AR(1) process is correlated.

We simulate 1000 observations from an MA(1) = ARMA(0,1) process. Figure 1.3 shows that the MA(1) series is stationary. The ACF is almost zero after lag 1, and it shows that the series is correlated. Again, the squares of the time series are correlated.

Example 1.2.3 Consider the MA(1) process X_i = Z_i + θ Z_{i-1}, where {Z_i, -∞ < i < ∞} ~ IID(0, σ^2). We have

γ_X(h) = E(X_i X_{i-h}) - 0 = E[(Z_i + θ Z_{i-1})(Z_{i-h} + θ Z_{i-h-1})]
       = E(Z_i Z_{i-h} + θ Z_i Z_{i-h-1} + θ Z_{i-1} Z_{i-h} + θ^2 Z_{i-1} Z_{i-h-1}).

When h = 0,

γ_X(0) = E(Z_i^2 + θ Z_i Z_{i-1} + θ Z_{i-1} Z_i + θ^2 Z_{i-1}^2) = σ^2 + θ^2 σ^2 = σ^2 (1 + θ^2).

Figure 1.3: Simulated MA(1), θ = 0.5, and its ACF

When h = 1,

γ_X(1) = E(Z_i Z_{i-1} + θ Z_i Z_{i-2} + θ Z_{i-1}^2 + θ^2 Z_{i-1} Z_{i-2}) = θ E(Z_{i-1}^2) = θ σ^2.

When h = -1,

γ_X(-1) = E(Z_i Z_{i+1} + θ Z_i^2 + θ Z_{i-1} Z_{i+1} + θ^2 Z_{i-1} Z_i) = θ E(Z_i^2) = θ σ^2.

When h = 2,

γ_X(2) = E(Z_i Z_{i-2} + θ Z_i Z_{i-3} + θ Z_{i-1} Z_{i-2} + θ^2 Z_{i-1} Z_{i-3}) = 0.

Similarly, γ_X(h) = 0 if |h| ≥ 2. Thus we have

γ_X(h) = σ^2 (1 + θ^2) if h = 0;  σ^2 θ if h = ±1;  0 if |h| > 1.

In summary, ARMA models cannot capture the following properties of financial data: the original time series is uncorrelated, but the squares are correlated.

1.3 ARFIMA model

Consider a process {X_i, -∞ < i < ∞} defined as

ϕ_p(B)(1 - B)^d X_i = θ_q(B) Z_i,  {Z_i, -∞ < i < ∞} ~ IID(0, σ_Z^2),    (1.3.1)

where d is a rational number and B is the backward shift operator;

ϕ_p(B) = 1 - ϕ_1 B - ϕ_2 B^2 - ... - ϕ_p B^p,
θ_q(B) = 1 + θ_1 B + θ_2 B^2 + ... + θ_q B^q

are respectively the autoregressive and the moving average characteristic polynomials, satisfying ϕ(z) ≠ 0 and θ(z) ≠ 0 for all z such that |z| ≤ 1. For simplicity, to find the range of d for the process (1.3.1), we consider the case where ϕ_p(B) = 1 and θ_q(B) = 1:

(1 - B)^d X_i = Z_i.    (1.3.2)

If the process (1.3.2) is stationary, then we may write it as (see Brockwell and Davis 2002)

X_i = (1 - B)^{-d} Z_i.    (1.3.3)

The operator (1 - B)^{-d} in equation (1.3.3) is defined by the binomial expansion

(1 - B)^{-d} = Σ_{j=0}^∞ C(-d, j) (-B)^j = Σ_{j=0}^∞ ψ_j B^j,

where ψ_j = (-1)^j C(-d, j) = Γ(j + d)/(Γ(j + 1) Γ(d)). Therefore X_i in equation (1.3.3) is

X_i = (1 - B)^{-d} Z_i = Σ_{j=0}^∞ ψ_j B^j Z_i = Σ_{j=0}^∞ ψ_j Z_{i-j}.    (1.3.4)

Lemma 1.3.1 We have

ψ_j ~ j^{d-1}/Γ(d).

Proof: Using Stirling's formula

Γ(x) ~ √(2π) e^{-x} x^{x-1/2} as x → ∞,

we have

ψ_j = Γ(j + d)/(Γ(j + 1) Γ(d))
    ~ [√(2π) e^{-(j+d)} (j + d)^{j+d-1/2}] / [√(2π) e^{-(j+1)} (j + 1)^{j+1/2} Γ(d)]
    = e^{1-d} (j + d)^{j+d-1/2} / [Γ(d) (j + 1)^{j+1/2}].

Now we prove that

lim_{j→∞} e^{1-d} (j + d)^{j+d-1/2} / [j^{d-1} (j + 1)^{j+1/2}] = 1,

which is equivalent to ψ_j ~ j^{d-1}/Γ(d).

Write

LHS = e^{1-d} lim_{j→∞} (j + d)^{j+d-1/2} / [j^{d-1} (j + 1)^{j+1/2}]
    = e^{1-d} lim_{j→∞} ((j + d)/(j + 1))^{j+1/2} ((j + d)/j)^{d-1}
    = e^{1-d} lim_{j→∞} ((j + d)/(j + 1))^{j+1/2},

since ((j + d)/j)^{d-1} → 1 as j → ∞. Moreover,

((j + d)/(j + 1))^{j+1/2} = (1 + (d - 1)/(j + 1))^{j+1/2} → e^{d-1} as j → ∞,

so LHS = e^{1-d} e^{d-1} = 1 = RHS.

Since ψ_j ~ j^{d-1}/Γ(d), we have

Σ_{j=0}^∞ ψ_j^2 ~ (Γ(d))^{-2} Σ_{j=0}^∞ j^{-2(1-d)},

which is convergent if 2(1 - d) > 1, i.e. d < 0.5. Therefore {ψ_j} is square summable if and only if d < 0.5, and in this thesis we are interested in d > 0.

The process in (1.3.1) with 0 < d < 0.5 is called the ARFIMA(p, d, q) (AutoRegressive Fractionally Integrated Moving Average) model, and the process X_i in (1.3.2) with 0 < d < 0.5 is called fractionally integrated noise. The linear representation (1.3.4) with the coefficients ψ_j ~ j^{d-1}/Γ(d) yields the following behaviour of the ACF (Beran 1994, p. 63):

Lemma 1.3.2 Consider the stationary ARFIMA(0, d, 0) process with d ∈ (0, 1/2). Then

ρ_X(k) ~ (Γ(1 - d)/Γ(d)) k^{2d-1},  k → ∞.
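The hyperbolic decay in Lemma 1.3.2 is easy to visualize. A minimal sketch using the fracdiff package (an assumption: any ARFIMA simulator would do):

  library(fracdiff)
  X <- fracdiff.sim(n = 10000, d = 0.2)$series   # ARFIMA(0, 0.2, 0) sample
  acf(X, lag.max = 200)                          # ACF decays like k^(2d-1) = k^(-0.6)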

1.4 Gaussian long memory sequences

1.4.1 Definition of long memory

There are different definitions of long memory. The most common form is given below (Beran 1994, p. 42):

Definition 1.4.1 A stationary process {X_i, -∞ < i < ∞} is said to be a short memory process if its autocorrelation function (ACF) ρ_X(k) is absolutely summable:

Σ_{k=0}^∞ |ρ_X(k)| < ∞.    (1.4.1)

On the other hand, a stationary process is said to have long memory if the ACF is not absolutely summable, i.e.

Σ_{k=1}^∞ |ρ_X(k)| = ∞.

Example 1.4.2 If

ρ_X(k) ~ c k^{2d-1}, as k → ∞,    (1.4.2)

where d ∈ (0, 1/2), then the ACF is not summable. This type of behaviour of the ACF is usually assumed for the purpose of limit theorems.

Example 1.4.3 Consider the AR(1) model of Example 1.2.2. If |ϕ| < 1, then

Σ_{h=0}^∞ |γ_X(h)| = C Σ_{h=0}^∞ |ϕ|^h < ∞.

Thus, the AR(1) model has short memory.

Example 1.4.4 Consider the ARFIMA(0, d, 0) model with d ∈ (0, 1/2). Then

Σ_{h=0}^∞ γ_X(h) = ∞.

Thus, the ARFIMA(0, d, 0) model has long memory.

1.4.2 Gaussian sequences and Hermite polynomials

Assume that {X_i, -∞ < i < ∞} is a stationary Gaussian process with unit variance and covariance

Cov(X_i, X_j) = γ_X(i - j) = |i - j|^{2d-1} l_0(|i - j|),    (1.4.3)

where d ∈ (0, 1/2) and l_0 is a slowly varying function at infinity. We will assume for simplicity l_0(i) ≡ 1. If d ∈ (0, 1/2), then

Σ_{j=0}^∞ Cov(X_1, X_j) = Σ_{j=0}^∞ γ_X(j) = Σ_{j=0}^∞ j^{2d-1} = +∞.

Thus, the sequence has long memory. A crucial tool in studying long memory Gaussian processes are Hermite polynomials.

Definition 1.4.5 For any i = 0, 1, 2, ..., the Hermite polynomials H_i are defined by the formula

H_i(x) = (-1)^i exp(x^2/2) (d^i/dx^i) exp(-x^2/2).

Note that {H_i, i ≥ 0} is an orthogonal basis for Gaussian random variables; that is, for a standard normal random variable X with density

ϕ(x) = (1/√(2π)) exp(-x^2/2),

we have

<H_i(X), H_j(X)> = ∫ H_i(x) H_j(x) ϕ(x) dx

= Cov(H_i(X), H_j(X)) = E(H_i(X) H_j(X)) = i! if i = j,    (1.4.4)

<H_i(X), H_j(X)> = 0 if i ≠ j,    (1.4.5)

where

<f, g> = ∫ f(x) g(x) ϕ(x) dx.

This also shows that Var(H_i(X)) = i! and that H_i(X), H_j(X) are uncorrelated if i ≠ j; however, H_i(X), H_j(X) are not independent. The first 6 Hermite polynomials are:

H_0(x) = 1, H_1(x) = x, H_2(x) = x^2 - 1, H_3(x) = x^3 - 3x, H_4(x) = x^4 - 6x^2 + 3, H_5(x) = x^5 - 10x^3 + 15x.

Lemma 1.4.6 (Arcones (1994)) Let a_1, ..., a_k be real numbers such that a_1^2 + ... + a_k^2 = 1. Then

H_q(Σ_{j=1}^k a_j x_j) = Σ_{q_1+...+q_k=q} (q!/(q_1! ... q_k!)) Π_{j=1}^k a_j^{q_j} H_{q_j}(x_j).

Lemma 1.4.7 Let X, Y be a pair of jointly standard normal random variables with covariance γ = Cov(X, Y) and suppose that f is a measurable transformation with Hermite expansion f(x) = Σ_{i=0}^∞ (a_i/i!) H_i(x), where the H_i are Hermite polynomials and

a_i = <f(X), H_i(X)> = E(f(X) H_i(X)) = ∫ f(x) H_i(x) ϕ(x) dx.

Then

Cov(H_i(X), H_i(Y)) = i! γ^i,    (1.4.6)
Cov(H_i(X), H_j(Y)) = 0, for i ≠ j,    (1.4.7)
Cov(f(X), f(Y)) = Σ_{i=1}^∞ (a_i^2/i!) γ^i.    (1.4.8)
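Before turning to the proof, identity (1.4.6) is easily checked by simulation; a minimal R sketch (γ = 0.6 and i = 3 are arbitrary illustrative choices):

  set.seed(1)
  gam <- 0.6
  X <- rnorm(1e5)
  Y <- gam * X + sqrt(1 - gam^2) * rnorm(1e5)    # Cov(X, Y) = gam
  H3 <- function(x) x^3 - 3 * x                  # third Hermite polynomial
  cov(H3(X), H3(Y))                              # approx 3! * gam^3 = 1.296
  factorial(3) * gam^3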

Proof of Lemma 1.4.7: We can write Y = γX + √(1 - γ^2) Z, where Z is N(0, 1) and is independent of X. Using Lemma 1.4.6,

Cov(H_i(X), H_i(Y)) = E(H_i(X) H_i(Y)) = E(H_i(X) H_i(γX + √(1 - γ^2) Z))
= E(H_i(X) Σ_{q_1+q_2=i} (i!/(q_1! q_2!)) γ^{q_1} (1 - γ^2)^{q_2/2} H_{q_1}(X) H_{q_2}(Z))
= Σ_{q_1+q_2=i} (i!/(q_1! q_2!)) γ^{q_1} (1 - γ^2)^{q_2/2} E(H_i(X) H_{q_1}(X)) E(H_{q_2}(Z))
= γ^i E(H_i(X) H_i(X)) = γ^i i!,

where the last equality follows from equations (1.4.4) and (1.4.5): we must have q_1 = i and q_2 = 0 in order to have q_1 + q_2 = i. Therefore (recall that Y = γX + √(1 - γ^2) Z),

E(f(X) f(Y)) = E(Σ_{i=0}^∞ (a_i/i!) H_i(X) Σ_{j=0}^∞ (a_j/j!) H_j(Y))
= Σ_{i=0}^∞ Σ_{j=0}^∞ (a_i a_j/(i! j!)) γ^j E(H_i(X) H_j(X)) = Σ_{i=0}^∞ (a_i^2/(i!)^2) γ^i i! = Σ_{i=0}^∞ (a_i^2/i!) γ^i,

and subtracting E(f(X)) E(f(Y)) = a_0^2 yields (1.4.8).

1.5 Stochastic Volatility models

Example 1.5.1 Let {Z_i, -∞ < i < ∞} be i.i.d. N(0, 1) and let {X_i, -∞ < i < ∞} be an AR(1) model, i.e. X_i = ϕ X_{i-1} + U_i, where {U_i, -∞ < i < ∞} ~ IID(0, σ_U^2). We assume that the sequences {X_i, -∞ < i < ∞} and {Z_i, -∞ < i < ∞} are mutually independent. We define

Y_i = X_i Z_i,  -∞ < i < ∞.

We simulate 1000 observations Y_i, i = 1, ..., 1000. Figure 1.4 shows that the composite series is stationary. The ACF is almost zero for all lags h > 0, which shows that the series is uncorrelated.

Figure 1.4: Simulated SV model (AR(1) and N(0,1) composite process) and its ACF

The squares are correlated; we conclude that the sequence {Y_i, -∞ < i < ∞} is not independent. Indeed, for h ≠ 0,

Cov(Y_i, Y_{i+h}) = E(Y_i Y_{i+h}) = E(X_i Z_i X_{i+h} Z_{i+h}) = E(X_i X_{i+h}) E(Z_i) E(Z_{i+h}) = 0,

which means that the series {Y_i, -∞ < i < ∞} is uncorrelated. On the other hand,

Cov(Y_i^2, Y_{i+h}^2) = E(Y_i^2 Y_{i+h}^2) - E(Y_i^2) E(Y_{i+h}^2)
= E(Z_i^2 X_i^2 Z_{i+h}^2 X_{i+h}^2) - E(Z_i^2 X_i^2) E(Z_{i+h}^2 X_{i+h}^2)

= E(Z_i^2 Z_{i+h}^2) E(X_i^2 X_{i+h}^2) - E(Z_i^2) E(X_i^2) E(Z_{i+h}^2) E(X_{i+h}^2)
= {E(Z_i^2) E(Z_{i+h}^2)} Cov(X_i^2, X_{i+h}^2) = (Var Z_1)^2 Cov(X_i^2, X_{i+h}^2),

where, in particular,

Cov(X_i^2, X_{i+1}^2) = Cov(X_i^2, (ϕ X_i + U_{i+1})^2)
= Cov(X_i^2, ϕ^2 X_i^2 + U_{i+1}^2 + 2ϕ X_i U_{i+1})
= ϕ^2 Cov(X_i^2, X_i^2) + Cov(X_i^2, U_{i+1}^2) + 2ϕ Cov(X_i^2, X_i U_{i+1})
= ϕ^2 Var(X_i^2) ≠ 0.

Above we used the fact that X_i and U_{i+1} are independent. A similar computation can be carried out for general h. We conclude that {X_i^2, -∞ < i < ∞} is correlated. Thus Cov(Y_i^2, Y_{i+h}^2) ≠ 0, which means that {Y_i^2, -∞ < i < ∞} is correlated as well. In conclusion,

Cov(Y_i, Y_{i+h}) = 0 for h ≠ 0 (Y_i uncorrelated),
Cov(Y_i^2, Y_{i+h}^2) ≠ 0 (Y_i^2 correlated),

so {Y_i, -∞ < i < ∞} is not an independent sequence and inherits its covariance structure from {X_i^2, i ≥ 0}.

The above example suggests the following definition of a stochastic volatility model.

Definition 1.5.2 Let σ(·) be a deterministic function; without loss of generality, we will assume that σ is positive. Let {X_i, -∞ < i < ∞} be a stationary sequence, and let {Z_i, -∞ < i < ∞} be a sequence of i.i.d. random variables such that E(Z_i) = 0. Define a stochastic volatility (SV) process {Y_i, -∞ < i < ∞} by

Y_i = σ(X_i) Z_i.    (1.5.1)

Remark 1.5.3 For theoretical purposes, there is no need to assume that σ is positive. However, in financial applications σ(X_i) plays the role of a conditional variance. Therefore, we keep this assumption throughout the thesis.
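Example 1.5.1 and Figure 1.4 can be reproduced as follows (a minimal sketch; ϕ = 0.5 and Var(U_i) = 1 are illustrative choices, not values fixed by the text):

  set.seed(1)
  X <- arima.sim(model = list(ar = 0.5), n = 1000)  # AR(1) factor X_i
  Z <- rnorm(1000)                                  # independent N(0,1) noise
  Y <- X * Z                                        # composite SV series Y_i = X_i Z_i
  par(mfrow = c(2, 2))
  plot(Y, type = "l", main = "AR(1) and N(0,1) composite process")
  acf(Y, main = "Series Yi")                        # uncorrelated
  acf(Y^2, main = "Series Yi^2")                    # squares correlated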

We note that in the above definition we do not impose any assumption about independence between the sequences {X_i, -∞ < i < ∞} and {Z_i, -∞ < i < ∞}. However, in practice we need to consider specific models in order to be able to work with such processes. We shall assume that {X_i, -∞ < i < ∞} is a Gaussian linear process (infinite order moving average)

X_i = Σ_{k=1}^∞ c_k η_{i-k},  -∞ < i < ∞,

where {η_i, -∞ < i < ∞} is a sequence of independent standard normal random variables and Σ_{k=1}^∞ c_k^2 = 1. In particular, the {X_i, -∞ < i < ∞} are dependent standard normal random variables. Two special cases which we are going to deal with are:

Pure Stochastic Volatility (SV) model, where {η_i, -∞ < i < ∞} and {Z_i, -∞ < i < ∞} are independent.

EGARCH model, where {(η_i, Z_i), -∞ < i < ∞} is a sequence of i.i.d. random vectors. Thus, for fixed i, Z_i and X_i are independent.

Of course, the EGARCH model includes SV. In what follows, we will write stochastic volatility (SV) model for all models described by Definition 1.5.2. We will write pure SV for the model described above, where we have independence between the sequences {X_i, -∞ < i < ∞} and {Z_i, -∞ < i < ∞}.

The pure SV and EGARCH models are flexible enough to model both long memory and heavy tails. Indeed, as we argued in Example 1.5.1, the squares of both models inherit the dependence structure of {X_i, -∞ < i < ∞}. At the same time, we can choose {Z_i, -∞ < i < ∞} to be heavy tailed. Let F_Z(x) = P(Z ≤ x) be the marginal cumulative distribution function of the noise sequence {Z_i, -∞ < i < ∞}.

Definition 1.5.4 A function L is called slowly varying at infinity if for all c > 0,

lim_{x→∞} L(cx)/L(x) = 1.

Example 1.5.5 If we let L(x) = x, then L(cx)/L(x) = cx/x = c; thus L(x) = x is not slowly varying.

Example 1.5.6 If we let L(x) = log x, then

log(cx)/log x = (log c + log x)/log x → 1, as x → ∞.

Thus, L(x) = log x is slowly varying.

We will assume that for some α ∈ (1, ∞), the tail of Z is

F̄_Z(x) = P(Z > x) = x^{-α} L(x),  x → ∞,    (1.5.2)

where L is a slowly varying function. The parameter α is called the tail index. In particular, for each c > 0,

lim_{x→∞} P(Z > cx)/P(Z > x) = lim_{x→∞} ((cx)^{-α} L(cx))/(x^{-α} L(x)) = c^{-α}.    (1.5.3)

In fact, F̄_Z(x) is called regularly varying at infinity with index -α (see Resnick 2007).

Lemma 1.5.7 (Breiman's Lemma; Resnick 2007, p. 23) Define Y_i = σ(X_i) Z_i, assume that for fixed i the random variables X_i and Z_i are independent and that (1.5.2) holds. If E[σ^{α+ε}(X_i)] < ∞ for some ε > 0, then

P(Y_i > x) ~ E[σ^α(X_i)] P(Z_i > x).

Breiman's Lemma shows that the tail behaviour of Z is inherited by Y.

1.5.1 Long Memory in Stochastic Volatility

Example 1.5.1 gives us some idea of how to introduce long memory into stochastic volatility. The following example shows how the long memory structure of σ^2(X_i) is inherited by Y_i^2.

Example 1.5.8 Let {Z_i, i ≥ 0} be IID(0, σ_Z^2), and let {X_i, -∞ < i < ∞} be a long memory Gaussian sequence as in Section 1.4. Assume that {Z_i, -∞ < i < ∞} and {X_i, -∞ < i < ∞} are independent. Define Y_i = σ(X_i) Z_i. Then for h ≠ 0,

E(Y_i) = E(Z_i σ(X_i)) = E(Z_i) E(σ(X_i)) = 0,
Cov(Y_i, Y_{i+h}) = E(Y_i Y_{i+h}) - E(Y_i) E(Y_{i+h}) = E(Y_i Y_{i+h}) - 0
= E(Z_i Z_{i+h}) E(σ(X_i) σ(X_{i+h})) = 0 · E(σ(X_i) σ(X_{i+h})) = 0,
Cov(Y_i^2, Y_{i+h}^2) = (Var Z_1)^2 Cov(σ^2(X_i), σ^2(X_{i+h})),

which shows that the SV model is uncorrelated, but the Y_i^2 are correlated; hence the sequence {Y_i, -∞ < i < ∞} is not independent and inherits its covariance structure from {σ^2(X_i), -∞ < i < ∞}.

Take in particular σ(x) = e^x. Then, using Lemma 1.4.7 with f(x) = σ^2(x) = e^{2x} and equation (1.4.3) for γ_X,

Cov(σ^2(X_0), σ^2(X_h)) = Cov(f(X_0), f(X_h)) = Σ_{i=1}^∞ (a_i^2/i!) γ_X^i(h)
= Σ_{i=1}^∞ (a_i^2/i!) (h^{-(1-2d)})^i ~ (a_1^2/1!) γ_X(h) ~ a_1^2 h^{-(1-2d)},    (1.5.4)

where a_1 = E(exp(2X) X) ≠ 0. In other words, the covariance of σ^2(X_i), up to a constant, is the same as that of {X_i, -∞ < i < ∞}.

Lemma 1.5.9 Let {X_i, -∞ < i < ∞} be a stationary LRD Gaussian process as defined in Section 1.4, with covariance defined in (1.4.3). If d ∈ (0, 1/2), then

Var(X̄_n) ~ (2 n^{2d-1})/(2d(2d + 1)) = C n^{-(1-2d)}.

The above result is standard in the long memory literature; see for example page 6 in Beran (1994). For a comparison with iid or short memory sequences, notice that in those cases Var(X̄_n) ~ C n^{-1}.

1.5.2 Limit theorems under long memory

Recall the classical central limit theorem: if {X_i, -∞ < i < ∞} are iid with mean zero and unit variance, then

n^{-1/2} Σ_{i=1}^n X_i →d N(0, 1).

The situation changes completely if long memory is involved; see Chapter 3 in Beran (1994).

Lemma 1.5.10 Let {X_i, -∞ < i < ∞} be a stationary LRD Gaussian process with mean zero and unit variance, as defined in Section 1.4, with covariance defined in (1.4.3). If d ∈ (0, 1/2), then

(√(d(2d+1))/n^{1/2+d}) Σ_{i=1}^n X_i →d N(0, 1).

From equation (1.5.4), Cov(f(X_0), f(X_i)) ~ a_1^2 γ_X(i) and

Var(Σ_{i=1}^n f(X_i)) ~ a_1^2 Var(Σ_{i=1}^n X_i).

This suggests the following lemma; for details see Chapter 3 in Beran (1994).

Lemma 1.5.11 If f has the Hermite expansion given in Lemma 1.4.7 with a_1 = E(f(X)X) ≠ 0, then the limiting behaviour of Σ_{i=1}^n {f(X_i) - E(f(X_i))} is the same as that of a_1 Σ_{i=1}^n X_i:

(√(d(2d+1))/n^{1/2+d}) Σ_{i=1}^n {f(X_i) - E(f(X_i))} →d a_1 N(0, 1) ~ N(0, a_1^2).

Note: If a_1 = 0, the limiting behaviour is more complicated and it is beyond the scope of this thesis.
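The nonstandard normalization n^{1/2+d} in Lemma 1.5.10 can be seen in simulations; a minimal sketch estimating the growth exponent of Var(Σ X_i) (fracdiff assumed as before; the constant depends on the exact ARFIMA covariance, so only the exponent is checked):

  library(fracdiff)
  d <- 0.2
  v <- sapply(c(1000, 4000), function(m)
    var(replicate(500, sum(fracdiff.sim(m, d = d)$series))))
  log(v[2] / v[1]) / log(4)        # approx 2d + 1 = 1.4; compare 1 for iid data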

Chapter 2

High quantile estimation for long memory stochastic volatility models

In this chapter we establish asymptotic theory for high quantile estimation of EGARCH and SV processes. Specifically, in Section 2.1 we establish the asymptotic behavior of a tail empirical process with deterministic levels. In Section 2.2 we obtain the corresponding behavior of a tail empirical process with random levels. The latter result is used in Section 2.3 to obtain asymptotic normality of the Hill estimator of the tail index.

Recall Definition 1.5.2 of the EGARCH process. Let u_n be a sequence of constants such that

u_n → ∞,  n F̄(u_n) → ∞.

The associated conditional tail distribution function is defined for x > 0 by

T_n(x) = P(Y > u_n(1 + x) | Y > u_n) = F̄(u_n(1 + x))/F̄(u_n).

Under the assumption (1.5.3), we have

T_n(x) → (1 + x)^{-α} = T(x) as u_n → ∞.    (2.0.1)

We also need the following assumption.

Assumption (A): If i ≠ j, then for any sets A and B,

P(Y_i ∈ u_n A, Y_j ∈ u_n B) ≤ C_{i,j} P(Y_i ∈ u_n A) P(Y_j ∈ u_n B), as n → ∞,

where u_n A = {u_n x : x ∈ A}. Also, sup_{i,j} C_{i,j} ≤ C < ∞.

This condition holds for example when Y_i is the pure SV model. It is also verified for the EGARCH model when Z_i = Z_i' Ψ_1(η_i) + Ψ_2(η_i), where {Z_i', i ≥ 1} is a sequence of i.i.d. random variables with the same distribution as Z_i and {Z_i', i ≥ 1} is independent of {η_i, -∞ < i < ∞}.

Example If Y_i = σ(X_i) Z_i is a pure SV model, then by using Breiman's Lemma 1.5.7 we have

P(Y_i > u_n y, Y_j > u_n y) = E[P(Y_i > u_n y, Y_j > u_n y | X_i, X_j)]
~ E[σ^α(X_i) σ^α(X_j)] P(Z_i > u_n y) P(Z_j > u_n y)
~ (E[σ^α(X_i) σ^α(X_j)]/(E[σ^α(X_i)] E[σ^α(X_j)])) P(Y_i > u_n y) P(Y_j > u_n y),

so C_{i,j} = E[σ^α(X_i) σ^α(X_j)]/(E[σ^α(X_i)] E[σ^α(X_j)]). In particular, if σ(x) = e^x, then using X_i + X_j ~ N(0, 2 + 2ρ_{j-i}),

C_{i,j} = E[e^{α(X_i + X_j)}]/(E[e^{α X_i}] E[e^{α X_j}]) = e^{(α^2/2)(2 + 2ρ_{j-i})}/e^{α^2}.

Since ρ ≤ 1, we have C_{i,j} ≤ e^{α^2} = C.

2.1 Tail empirical process with deterministic levels

Define the empirical tail distribution function

T̃_n(s) = (1/(n F̄(u_n))) Σ_{i=1}^n 1{Y_i > u_n(1+s)},    (2.1.1)

and the tail empirical process

e_n(s) = T̃_n(s) - T_n(s),  s > 0.

Note that

E[T̃_n(s)] = (n/(n F̄(u_n))) P(Y > u_n(1 + s)) = F̄(u_n(1 + s))/F̄(u_n) = T_n(s);

therefore T_n(s) is the proper centering. The main result of this section is the following theorem.

Theorem 2.1.1 Let Y_i = σ(X_i) Z_i be the EGARCH model as in Definition 1.5.2. Assume that the tail of Z is regularly varying with index α (see (1.5.2)) and E[σ^α(X)] < ∞. Furthermore, assume that X_i is a long memory Gaussian process with covariance given by (1.4.2). Assume that (A) holds.

If n^{2d} F̄(u_n) → 0, then

√(n F̄(u_n)) e_n(s) ⇒ W(T(s)),

where W is a standard Brownian motion. If n^{2d} F̄(u_n) → ∞, then

n^{1/2-d} √(d(2d+1)) e_n(s) ⇒ T(s) (E(X_i σ^α(X_i))/E(σ^α(X_i))) N(0, 1),

where ⇒ denotes weak convergence in D[0, ∞).

Remark 2.1.2 If n^{2d} F̄(u_n) → c ∈ (0, ∞), then the limiting process is the sum of the two processes which appear in Theorem 2.1.1; however, their joint distribution may be complicated.

Remark 2.1.3 We note that in the definition of T̃_n(·) we use the unknown quantity F̄(u_n). Consequently, the result of Theorem 2.1.1 is not applicable in practice. However, this result is used to conclude the limiting behaviour of a practical tail empirical process considered in the next section.

Remark 2.1.4 The space D[0, ∞) is defined in the Appendix.

The proof of this theorem will be divided into several steps. Decompose the tail empirical process into

e_n(s) = T̃_n(s) - T_n(s)
= (1/(n F̄(u_n))) Σ_{i=1}^n (1{Y_i > u_n(1+s)} - E[1{Y_i > u_n(1+s)} | F_{i-1}])
+ (1/(n F̄(u_n))) Σ_{i=1}^n E[1{Y_i > u_n(1+s)} | F_{i-1}] - T_n(s)
=: M_n(s) + R_n(s),

where F_i = σ(η_i, η_{i-1}, ..., Z_i, Z_{i-1}, ...). Note that X_i is F_{i-1}-measurable, since X_i = Σ_{k=1}^∞ c_k η_{i-k}, and Z_i is independent of F_{i-1}.

In what follows, we will show that

√(n F̄(u_n)) M_n(s) ⇒ W(T(s)).

This follows from Lemma 2.1.6 (finite dimensional convergence) and Lemma 2.1.8 (tightness). Furthermore, in Lemma 2.1.9 we will show that

n^{1/2-d} √(d(2d+1)) R_n(s) → T(s) (E(X_i σ^α(X_i))/E(σ^α(X_i))) N(0, 1).

From these two results, the theorem follows.

Note that M_n(s), n ≥ 1, is a martingale. Indeed, if we let V_i = 1{Y_i > u_n(1+s)} and M_i = (1/√(n F̄(u_n)))(V_i - E[V_i | F_{i-1}]), then

E(M_i | F_{i-1}) = (1/√(n F̄(u_n))) E{(V_i - E[V_i | F_{i-1}]) | F_{i-1}}
= (1/√(n F̄(u_n))) {E(V_i | F_{i-1}) - E[E(V_i | F_{i-1}) | F_{i-1}]}
= (1/√(n F̄(u_n))) {E(V_i | F_{i-1}) - E(V_i | F_{i-1})} = 0.

2.1.1 Weak convergence of the martingale part M_n

First, we will evaluate the variance of the martingale part (see Lemma 2.1.5). Then, we will prove a central limit theorem for the martingale part (see Lemma 2.1.6); that convergence is easily generalized to finite dimensional convergence. Furthermore, we will show that the martingale part is tight (see Lemma 2.1.8).

Lemma 2.1.5 Under the conditions of Theorem 2.1.1, we have

Var((1/√(n F̄(u_n))) Σ_{i=1}^n {1{Y_i > u_n(1+s)} - E(1{Y_i > u_n(1+s)} | F_{i-1})}) → (1 + s)^{-α};

equivalently, the variance of the unnormalized sum is asymptotic to (1 + s)^{-α} n F̄(u_n) ~ (1 + s)^{-α} n E[σ^α(X_1)] F̄_Z(u_n).

Proof: Let V_i = 1{Y_i > u_n(1+s)}. Using the martingale property (the differences are uncorrelated),

Var[Σ_{i=1}^n (V_i - E(V_i | F_{i-1}))] = Σ_{i=1}^n Var[V_i - E(V_i | F_{i-1})] = n Var[V_1 - E(V_1 | F_0)]
= n {E(V_1^2) + E[(E(V_1 | F_0))^2] - 2 E[V_1 E(V_1 | F_0)]}.    (2.1.2)

Thus, we split Var[V_i - E(V_i | F_{i-1})] into three terms: the first term A = E(V_i^2), the second one B = E[(E(V_i | F_{i-1}))^2] and the third term C = E[V_i E(V_i | F_{i-1})], so that the variance equals A + B - 2C.

For the first term, we have

A = E(V_i^2) = E(V_i) = E[1{Y_i > u_n(1+s)}] = P(Y_i > u_n(1+s)) = F̄(u_n(1+s)).

By Breiman's Lemma 1.5.7,

A ~ E[σ^α(X_i)] F̄_Z(u_n(1+s)) as u_n → ∞,

and by (1.5.3),

lim_{u_n→∞} P(Z > u_n(1+s))/P(Z > u_n) = (1+s)^{-α}.

Thus,

A ~ E[σ^α(X_i)] (1+s)^{-α} F̄_Z(u_n) ~ (1+s)^{-α} F̄(u_n) as u_n → ∞.    (2.1.3)

For the second term, using (1.5.3) we have

lim_{u_n→∞} E(V_i | F_{i-1})/P(Z > u_n) = lim_{u_n→∞} E(1{σ(X_i) Z_i > u_n(1+s)} | X_i)/P(Z > u_n)
= lim_{u_n→∞} P(Z_i > u_n(1+s)/σ(X_i) | X_i)/P(Z > u_n) = (1+s)^{-α} σ^α(X_i).    (2.1.4)

Thus,

lim_{u_n→∞} B/P^2(Z > u_n) = lim_{u_n→∞} E[(E(V_i | F_{i-1})/P(Z > u_n)) (E(V_i | F_{i-1})/P(Z > u_n))]
= E[lim_{u_n→∞} (E(V_i | F_{i-1})/P(Z > u_n)) lim_{u_n→∞} (E(V_i | F_{i-1})/P(Z > u_n))]
= E[(1+s)^{-2α} σ^{2α}(X_i)] = (1+s)^{-2α} E[σ^{2α}(X_i)].

Consequently,

B ~ (1+s)^{-2α} E[σ^{2α}(X_i)] F̄_Z^2(u_n) as u_n → ∞.    (2.1.5)

Here and in the sequel, we are allowed to interchange the limit with the integration since E[V_i | F_{i-1}] is bounded.

For the third term, using (1.5.3) and (2.1.4), we have

lim_{u_n→∞} C/P^2(Z > u_n) = lim_{u_n→∞} E[V_i E(V_i | F_{i-1})]/P^2(Z > u_n)
= lim_{u_n→∞} E[(V_i/P(Z > u_n)) (E(V_i | F_{i-1})/P(Z > u_n))]
= E[lim_{u_n→∞} (V_i/P(Z > u_n)) (1+s)^{-α} σ^α(X_i)]
= (1+s)^{-α} lim_{u_n→∞} E[(V_i/P(Z > u_n)) σ^α(X_i)].

Furthermore,

(1+s)^{-α} lim_{u_n→∞} E[(V_i/P(Z > u_n)) σ^α(X_i)]
= (1+s)^{-α} lim_{u_n→∞} E[σ^α(X_i) E(V_i/P(Z > u_n) | X_i)]
= (1+s)^{-α} lim_{u_n→∞} E[σ^α(X_i) E(1{σ(X_i) Z_i > u_n(1+s)} | X_i)/P(Z > u_n)]
= (1+s)^{-α} lim_{u_n→∞} E[σ^α(X_i) P(Z_i > u_n(1+s)/σ(X_i) | X_i)/P(Z > u_n)]
= (1+s)^{-α} E[σ^α(X_i) (1+s)^{-α} σ^α(X_i)] = (1+s)^{-2α} E[σ^{2α}(X_i)].

Thus,

C ~ (1+s)^{-2α} E[σ^{2α}(X_i)] F̄_Z^2(u_n) as u_n → ∞.    (2.1.6)

Comparing (2.1.3), (2.1.5) and (2.1.6), we find that A dominates B and C, since F̄_Z^2(u_n) = o(F̄_Z(u_n)); so the terms B and C are negligible. Consequently,

Var[V_i - E(V_i | F_{i-1})] ~ A ~ E[σ^α(X_i)] (1+s)^{-α} F̄_Z(u_n),    (2.1.7)

and

Var[Σ_{i=1}^n (V_i - E(V_i | F_{i-1}))] = Σ_{i=1}^n Var[V_i - E(V_i | F_{i-1})] = n Var[V_1 - E(V_1 | F_0)]
~ n E[σ^α(X_1)] (1+s)^{-α} F̄_Z(u_n) ~ n (1+s)^{-α} F̄(u_n).

By (2.1.2), (2.1.3) and (2.1.7), we get

Var((1/√(n F̄(u_n))) Σ_{i=1}^n {1{Y_i > u_n(1+s)} - E(1{Y_i > u_n(1+s)} | F_{i-1})}) → (1+s)^{-α}.

Since Σ_{i=1}^n [V_i - E(V_i | F_{i-1})] is a martingale, this suggests the following result.

Lemma 2.1.6 Under the conditions of Theorem 2.1.1 we have

√(n F̄(u_n)) M_n(s) = (1/√(n F̄(u_n))) Σ_{i=1}^n [V_i - E(V_i | F_{i-1})] →d N(0, (1+s)^{-α}).

To prove the above lemma, we recall the following martingale CLT (see Hall and Heyde (1980)).

Lemma 2.1.7 (Martingale CLT) Let ((X_{ni}, A_{n,i}) : n ≥ 1; i = 1, ..., k_n) be a martingale difference array; that is, for each fixed n, X_{ni} is a martingale difference sequence. If

Σ_{i=1}^{k_n} E(X_{ni}^2 | A_{n,i-1}) →p σ^2,

and

Σ_{i=1}^{k_n} E[X_{ni}^2 1(|X_{ni}| > ε) | A_{n,i-1}] →p 0 for each ε > 0,

then Z_n = Σ_{i=1}^{k_n} X_{ni} →d Z ~ N(0, σ^2).

We apply the above theorem to the scaled martingale part: Z_n = √(n F̄(u_n)) M_n(s), with X_{ni} = M_i = (1/√(n F̄(u_n)))[V_i - E(V_i | F_{i-1})] and A_{n,i} = F_i. Let M̃_i = V_i - E(V_i | F_{i-1}), so that

M_i^2 = (1/(n F̄(u_n)))[V_i - E(V_i | F_{i-1})]^2 = (1/(n F̄(u_n))) M̃_i^2.

Then

E(M̃_i^2 | F_{i-1}) = E[(V_i - E(V_i | F_{i-1}))^2 | F_{i-1}]
= E[(V_i^2 + (E(V_i | F_{i-1}))^2 - 2 V_i E(V_i | F_{i-1})) | F_{i-1}]
= E(V_i^2 | F_{i-1}) + E[(E(V_i | F_{i-1}))^2 | F_{i-1}] - 2 E[V_i E(V_i | F_{i-1}) | F_{i-1}],

where

E(V_i^2 | F_{i-1}) = E(V_i | F_{i-1}),
E[(E(V_i | F_{i-1}))^2 | F_{i-1}] = (E(V_i | F_{i-1}))^2,
E[V_i E(V_i | F_{i-1}) | F_{i-1}] = E(V_i | F_{i-1}) E(V_i | F_{i-1}) = (E(V_i | F_{i-1}))^2.

Thus,

E(M̃_i^2 | F_{i-1}) = E(V_i | F_{i-1}) - (E(V_i | F_{i-1}))^2,    (2.1.8)

and recall that

lim_{u_n→∞} E(V_i | F_{i-1})/P(Z > u_n) = lim_{u_n→∞} P(Z_i > u_n(1+s)/σ(X_i) | X_i)/P(Z > u_n) = (1+s)^{-α} σ^α(X_i).

We compute

E(M̃_i^2 | F_{i-1})/P(Z > u_n) = E(V_i | F_{i-1})/P(Z > u_n) - [E(V_i | F_{i-1})/P(Z > u_n)]^2 P(Z > u_n) → (1+s)^{-α} σ^α(X_i),

since the second term vanishes. Thus,

E(M̃_i^2 | F_{i-1}) ≈ (1+s)^{-α} σ^α(X_i) F̄_Z(u_n),    (2.1.9)

and consequently, using (2.1.9) and Breiman's Lemma 1.5.7,

Σ_{i=1}^n E(M_i^2 | F_{i-1}) = (1/(n F̄(u_n))) Σ_{i=1}^n E(M̃_i^2 | F_{i-1})
≈ (1+s)^{-α} (F̄_Z(u_n)/F̄(u_n)) (1/n) Σ_{i=1}^n σ^α(X_i)
~ (1+s)^{-α} (F̄_Z(u_n)/(E(σ^α(X_1)) F̄_Z(u_n))) (1/n) Σ_{i=1}^n σ^α(X_i) →P (1+s)^{-α},

where lim_{n→∞} (1/n) Σ_{i=1}^n σ^α(X_i) = E(σ^α(X_1)) follows directly from the Ergodic Theorem; thus the first condition in the martingale CLT is verified. We note that the sequence X_i is ergodic, since X_i = φ(ε_i, ε_{i-1}, ...), where the ε_i are i.i.d. normal random variables.

For the second condition,

E((M̃_i^2/(n F̄(u_n))) 1{|M̃_i| > ε √(n F̄(u_n))} | F_{i-1})
= (1/(n F̄(u_n))) E([V_i - E(V_i | F_{i-1})]^2 1{|M̃_i| > ε √(n F̄(u_n))} | F_{i-1}) = 0

for n sufficiently large, since |M̃_i| is bounded by 2 and √(n F̄(u_n)) → ∞ as n → ∞, so the indicator function eventually returns the value 0. Thus the second condition in the martingale CLT is verified:

Σ_{i=1}^n E(M_i^2 1{|M_i| > ε} | F_{i-1}) →P 0.

Recall that

M_n(s) = (1/(n F̄(u_n))) Σ_{i=1}^n (1{Y_i > u_n(1+s)} - E[1{Y_i > u_n(1+s)} | F_{i-1}]) = (1/(n F̄(u_n))) Σ_{i=1}^n (V_i(s) - E(V_i(s) | F_{i-1})).

Lemma 2.1.8 Define M̄_n(s) = √(n F̄(u_n)) M_n(s). Under the conditions of Theorem 2.1.1, the process M̄_n(s) is tight.

Proof: Using Theorem 13.5 in Billingsley (1968), in order to check tightness we have to verify the following condition: for s_1 ≤ s_2 ≤ s_3, we have to bound

E[|M̄_n(s_2) - M̄_n(s_1)|^2 |M̄_n(s_3) - M̄_n(s_2)|^2].    (2.1.10)

By Hölder's inequality,

E[|M̄_n(s_2) - M̄_n(s_1)|^2 |M̄_n(s_3) - M̄_n(s_2)|^2]
≤ [E(|M̄_n(s_2) - M̄_n(s_1)|^4)]^{1/2} [E(|M̄_n(s_3) - M̄_n(s_2)|^4)]^{1/2},

where

E(|M̄_n(s_2) - M̄_n(s_1)|^4) = (1/(n F̄(u_n))^2) E(Σ_{i=1}^n {[V_i(s_2) - V_i(s_1)] - [E(V_i(s_2) | F_{i-1}) - E(V_i(s_1) | F_{i-1})]})^4.

If we denote V_i(s) - E(V_i(s) | F_{i-1}) = X̃_i(s) and X̃_i(s_2) - X̃_i(s_1) = X̃_i, then

[V_i(s_2) - V_i(s_1)] - [E(V_i(s_2) | F_{i-1}) - E(V_i(s_1) | F_{i-1})] = X̃_i(s_2) - X̃_i(s_1) = X̃_i,

and

E(|M̄_n(s_2) - M̄_n(s_1)|^4) = (1/(n F̄(u_n))^2) E(Σ_{i=1}^n X̃_i)^4
≤ (1/(n F̄(u_n))^2) (Σ_{i=1}^n E X̃_i^4 + Σ_{i≠j} E(X̃_i^2 X̃_j^2))
= (1/(n F̄(u_n))^2) (n E(X̃_1^4) + Σ_{i≠j} E(X̃_i^2 X̃_j^2))  (stationarity).

The first inequality is a standard inequality for martingales (Hall and Heyde 1980). Furthermore,

E(X̃_1^4) ≤ C E(V_1(s_2) - V_1(s_1))^4 = C P(u_n(1+s_1) < Y ≤ u_n(1+s_2))

for some constant C, and note that

lim_{n→∞} P(u_n(1+s_1) < Y ≤ u_n(1+s_2))/F̄(u_n) = T(s_1) - T(s_2) ≤ T(s_1) - T(s_3).

Furthermore, if Assumption (A) holds,

E(X̃_i^2 X̃_j^2) ≤ C E([V_i(s_2) - V_i(s_1)]^2 [V_j(s_2) - V_j(s_1)]^2)

= C P(u_n(1+s_1) < Y_i ≤ u_n(1+s_2), u_n(1+s_1) < Y_j ≤ u_n(1+s_2))
≤ C C_{i,j} P(u_n(1+s_1) < Y_i ≤ u_n(1+s_2)) P(u_n(1+s_1) < Y_j ≤ u_n(1+s_2)),

and so

lim_{n→∞} C_{i,j} (P(u_n(1+s_1) < Y_i ≤ u_n(1+s_2))/F̄(u_n)) (P(u_n(1+s_1) < Y_j ≤ u_n(1+s_2))/F̄(u_n))
= C_{i,j} (T(s_1) - T(s_2))(T(s_1) - T(s_2)) = C_{i,j} (T(s_1) - T(s_2))^2 ≤ C_{i,j} (T(s_1) - T(s_3))^2.

Thus,

E(|M̄_n(s_2) - M̄_n(s_1)|^4) ≤ C [(1/(n F̄(u_n))) (T(s_1) - T(s_3)) + ((1/n^2) Σ_{i≠j} C_{i,j}) (T(s_1) - T(s_3))^2]
≤ C [(1/(n F̄(u_n))) (T(s_1) - T(s_3)) + (T(s_1) - T(s_3))^2].

Analogously,

E(|M̄_n(s_3) - M̄_n(s_2)|^4) ≤ C [(1/(n F̄(u_n))) (T(s_1) - T(s_3)) + (T(s_1) - T(s_3))^2].

Thus,

E[|M̄_n(s_1) - M̄_n(s_2)|^2 |M̄_n(s_2) - M̄_n(s_3)|^2] ≤ C [(1/(n F̄(u_n))) (T(s_1) - T(s_3)) + (T(s_1) - T(s_3))^2].

This guarantees tightness.

2.1.2 Weak convergence of the long memory part R_n

The term R_n is treated using Lemma 1.5.11. Note that

R_n(s) = (1/n) Σ_{i=1}^n Ḡ(X_i),

where

Ḡ(X_i) = G(X_i) - E(G(X_i))

and

G(X_i) = (1/F̄(u_n)) E[1{Y_i > u_n(1+s)} | F_{i-1}] = (1/F̄(u_n)) E[1{σ(X_i) Z_i > u_n(1+s)} | F_{i-1}].

Note that the function G depends on n and s. However, R_n(s) has a form similar to the expression in Lemma 1.5.11. We have to compute

a_1 = E(G(X_i) X_i) = E{(1/F̄(u_n)) E[1{σ(X_i) Z_i > u_n(1+s)} | F_{i-1}] X_i}
= (1/F̄(u_n)) E{E[1{σ(X_i) Z_i > u_n(1+s)} X_i | F_{i-1}]}
= (1/F̄(u_n)) E(1{σ(X_i) Z_i > u_n(1+s)} X_i)
= E(X_i E[1{σ(X_i) Z_i > u_n(1+s)} | X_i]/F̄(u_n))
= E(X_i P(Z_i > u_n(1+s)/σ(X_i) | X_i)/F̄(u_n))
→ E(X_i (1+s)^{-α} σ^α(X_i)/E(σ^α(X_i))) = (1+s)^{-α} E(X_i σ^α(X_i))/E(σ^α(X_i)).

This suggests the following limiting result. The proof is not given here; see Kulik and Soulier (2011). We note that their proof for the long memory part R_n(s) does not involve the SV independence assumption, and thus it holds for the EGARCH model as well.

Lemma 2.1.9 Under the conditions of Theorem 2.1.1,

n^{1/2-d} √(d(2d+1)) R_n(s) → T(s) (E(X_i σ^α(X_i))/E(σ^α(X_i))) N(0, 1).

We note that the limiting process is degenerate: it is a standard normal random variable multiplied by a deterministic function.

2.2 Tail empirical process with random levels

Recall the definition (2.1.1). We replace there n F̄(u_n) with k_n and u_n with Y_{n-k_n:n}, the (k_n+1)th largest observation, in order to define the following process:

T̂_n(s) = (1/k_n) Σ_{j=1}^n 1{Y_j > Y_{n-k_n:n}(1+s)},  T(s) = (1+s)^{-1/γ},  where γ = 1/α,

ê_n(s) = T̂_n(s) - T(s).

It is assumed that k_n → ∞ and k_n/n → 0 as n → ∞. The process ê_n(s) is the modification of the process e_n considered in Section 2.1: instead of the deterministic level u_n, we have a random level. We will show below that this process has different convergence properties than e_n. The statement of Theorem 2.1.1 can be re-written as

a_n e_n(s) = a_n (T̃_n(s) - T_n(s)) ⇒ Ψ(s),

where the scaling a_n and the limiting process Ψ(s) have two different forms. We will argue below that

a_n ê_n(s) ⇒ Ψ(s) - T(s) Ψ(0).

First, we have to replace the centering T_n(s) with T(s). This is allowed under the so-called second order regular variation condition (see Kulik and Soulier (2011) for details). In particular, if F̄(x) = x^{-α} + x^{-β}, x > 1, β > α, then second order regular variation holds.

Lemma 2.2.1 From Theorem 2.1.1 and under second order regular variation:

In the short memory case, with a_n = √(n F̄(u_n)) = √k_n and Ψ(s) = W(T(s)),

√k_n ê_n(s) ⇒ W(T(s)) - T(s) W(1) = B(T(s)),    (2.2.1)

where B(·) is the Brownian bridge. In the long memory case, with a_n = const · n^{1/2-d} and Ψ(s) = (E[X σ^α(X)]/E[σ^α(X)]) T(s) N(0, 1),

a_n ê_n(s) ⇒ (E[X σ^α(X)]/E[σ^α(X)]) T(s) N(0, 1) - (E[X σ^α(X)]/E[σ^α(X)]) T(s) N(0, 1) = 0.

Therefore, in the case of random levels, the long memory scaling is too big. It is possible to show that (2.2.1) holds for arbitrary d ∈ (0, 1/2); see Kulik and Soulier (2011). Generally, for any choice of d ∈ (0, 1/2), we will have

√k_n ê_n(s) ⇒ B(T(s)).    (2.2.2)

Lemma 2.2.2 (Vervaat's Lemma) Assume that s_n is a sequence of constants with s_n → ∞, that x_n(·), ρ(·) are monotone functions, and that x_n^{-1}(·), ρ^{-1}(·) are the generalized inverse functions. If s_n(x_n(s) - ρ(s)) ⇒ Y(s), then

s_n(x_n^{-1}(s) - ρ^{-1}(s)) ⇒ -(ρ^{-1}(s))' Y(ρ^{-1}(s)).

Proof of Lemma 2.2.1: Define δ_n = (Y_{n-k_n:n} - u_n)/u_n. We have T̃_n(s + δ_n(1+s)) = T̂_n(s), and

ê_n(s) = T̃_n(s + δ_n(1+s)) - T(s) = e_n(s + δ_n(1+s)) + T_n(s + δ_n(1+s)) - T(s).

Under second order regular variation we can replace T_n with T. Thus, heuristically,

ê_n(s) ≈ e_n(s + δ_n(1+s)) + T(s + δ_n(1+s)) - T(s).

The first term in the Taylor expansion of T(s + δ_n(1+s)) - T(s) is

T'(s) δ_n(1+s) = -α T(s) δ_n = T'(0) T(s) δ_n.

Thus, we expect

ê_n(s) ≈ e_n(s + δ_n(1+s)) + T'(0) T(s) δ_n.    (2.2.3)

Now we want to show that √k_n T'(0) δ_n →d W(1). Using Vervaat's Lemma 2.2.2, we take x_n(s) = T̃_n(s), ρ(s) = T(s) and s_n = √k_n = √(n F̄(u_n)), and we have from Theorem 2.1.1, for the weakly dependent case,

√k_n (T̃_n(s) - T(s)) ⇒ Ψ(s) = W(T(s)).

Then

√k_n (T̃_n^{-1}(s) - T^{-1}(s)) ⇒ -(T^{-1}(s))' W(T(T^{-1}(s))) = -(T^{-1}(s))' W(s).

Take s = 1; we have

√k_n (T̃_n^{-1}(1) - T^{-1}(1)) →d -(T^{-1}(1))' W(1).

Note that T^{-1}(s) = s^{-γ} - 1 and (T^{-1}(s))' = -γ s^{-γ-1}; thus (T^{-1}(1))' = -γ = -1/α. We conclude

√k_n (T̃_n^{-1}(1) - T^{-1}(1)) →d (1/α) W(1), and -α = T'(0). Equivalently,

√k_n T'(0)(T̃_n^{-1}(1) - T^{-1}(1)) →d -W(1) =d W(1).

Note also that

T̃_n^{-1}(1) = Y_{n-k_n:n}/u_n - 1 = δ_n

and T^{-1}(1) = 0. Thus, we conclude

√k_n T'(0) δ_n →d W(1),

which also ensures that δ_n → 0 as k_n → ∞. Therefore,

√k_n ê_n(s) ≈ √k_n e_n(s) + √k_n T'(0) T(s) δ_n ⇒ Ψ(s) - T(s) Ψ(0),

and the proof is complete.

2.3 Asymptotic normality of the Hill estimator

Using the results from Section 2.2, in this section we establish asymptotic normality of the Hill estimator based on a stationary EGARCH model. Let Y_{1:n} ≤ Y_{2:n} ≤ ... ≤ Y_{n-1:n} ≤ Y_{n:n} be the increasing order statistics of the stationary sequence Y_1, ..., Y_n. The Hill estimator is defined as

H_{k_n,n} = γ̂_n = (1/k_n) Σ_{j=1}^{k_n} log(Y_{n-j+1:n}/Y_{n-k_n:n}).

The tail empirical distribution T̂_n(s) and the tail empirical process ê_n(s) play a crucial role in the study of the Hill estimator. Indeed:

Lemma 2.3.1 We have

1/α = γ = ∫_0^∞ T(s)/(1+s) ds,    (2.3.1)

γ̂_n = ∫_0^∞ T̂_n(s)/(1+s) ds,    (2.3.2)

and

∫_0^∞ B(T(s))/(1+s) ds =d γ ∫_0^1 (B(t)/t) dt,    (2.3.3)

where B is a standard Brownian bridge.

Proof: For (2.3.1), notice that

∫_0^∞ T(s)/(1+s) ds = ∫_0^∞ (1+s)^{-1/γ} (1+s)^{-1} ds = ∫_0^∞ (1+s)^{-1/γ-1} ds
= [-γ(1+s)^{-1/γ}]_0^∞ = γ(1+0)^{-1/γ} = γ.

For (2.3.2), define T̂_n^*(s) = (1/k_n) Σ_{j=1}^n 1{Y_j > Y_{n-k_n:n} s}, where s ≥ 1. Then

∫_0^∞ T̂_n(s)/(1+s) ds = ∫_1^∞ T̂_n^*(s)/s ds = (1/k_n) Σ_{j=1}^n ∫_1^∞ 1{Y_j > s Y_{n-k_n:n}}/s ds.    (2.3.4)

Note that 1{Y_j > s Y_{n-k_n:n}} = 1{Y_j/Y_{n-k_n:n} > s} and, for s ≥ 1, Y_j must be bigger than Y_{n-k_n:n}. This means that Y_j = Y_{n-k_n+1:n} or Y_j = Y_{n-k_n+2:n}, ..., or Y_j = Y_{n:n}. Therefore the sum in (2.3.4) becomes

(1/k_n) Σ_{j=1}^{k_n} ∫_1^{Y_{n-j+1:n}/Y_{n-k_n:n}} (1/s) ds = (1/k_n) Σ_{j=1}^{k_n} log(Y_{n-j+1:n}/Y_{n-k_n:n}) = γ̂_n.

For (2.3.3), we use the substitution rule: letting t = T(s) = (1+s)^{-1/γ}, we get dt = -(1/γ)(1+s)^{-1/γ} (1+s)^{-1} ds, so ds/(1+s) = -γ t^{-1} dt. Therefore

∫_0^∞ B(T(s))/(1+s) ds = -γ ∫_1^0 (B(t)/t) dt = γ ∫_0^1 (B(t)/t) dt =d γ ∫_0^1 (B(t)/t) dt

(alternatively, one may use that the centered Gaussian process B is symmetric, -B =d B).

From (2.3.1), (2.3.2) and (2.2.2) we conclude

√k_n (γ̂_n - γ) = √k_n ∫_0^∞ ê_n(s)/(1+s) ds →d ∫_0^∞ B(T(s))/(1+s) ds =d γ ∫_0^1 (B(t)/t) dt.

We note that the latter integral ∫_0^1 (B(t)/t) dt is a standard normal random variable. Therefore, we obtain the following result.

Recall that k = k_n → ∞ as n → ∞. We note also that k_n is the user-chosen number of extreme observations.

Theorem 2.3.2 Under the assumptions of Theorem 2.1.1 and the appropriate second-order condition, √k_n (γ̂_n - γ) converges weakly to the centered Gaussian distribution with variance γ^2.
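Theorem 2.3.2 is easy to illustrate numerically; a minimal R sketch of the Hill estimator (k stands for k_n):

  hill <- function(Y, k) {
    Ys <- sort(Y, decreasing = TRUE)     # decreasing order statistics
    mean(log(Ys[1:k] / Ys[k + 1]))       # (1/k) sum log(Y_{n-j+1:n} / Y_{n-k:n})
  }
  set.seed(1)
  Z <- runif(2000)^(-1/2)                # iid Pareto, alpha = 2, gamma = 0.5
  hill(Z, k = 200)                       # approx 0.5; asymptotic sd gamma/sqrt(k)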

Chapter 3

Numerical experiments

3.1 Estimators for the tail index α

The tail index α dominates the asymptotic behaviour of a distribution and indicates the heaviness of the tail. There are many estimators of α; we focus in particular on the POT estimator, the Hill estimator and the Pickands estimator.

3.1.1 Peaks Over Threshold (POT) Maximum Likelihood (ML) estimator

The POT method analyzes exceedances over a given high threshold. Suppose {Y_i, -∞ < i < ∞} is an i.i.d. sequence and u is a threshold. Define the exceedance times {τ_r, r ≥ 1} by

τ_1 = inf{j ≥ 1 : Y_j > u},
τ_2 = inf{j > τ_1 : Y_j > u},
...
τ_r = inf{j > τ_{r-1} : Y_j > u}.

The random variables {Y_{τ_r}, r ≥ 1} are called the exceedances. If {Y_i, -∞ < i < ∞} is i.i.d. with common distribution F, then {Y_{τ_r}, r ≥ 1} is also i.i.d., and for F̄^{[u]}(x) = P(Y_{τ_r} > x) = P(Y > x | Y > u),

F̄^{[u]}(x) = P(Y > x, Y > u)/P(Y > u) = F̄(x)/F̄(u) if x > u;  1 if x ≤ u.

Assume that P(Y > x) = F̄_Y(x) = x^{-α} L(x). Then for x > 1,

P(Y > xu)/P(Y > u) = F̄_Y(xu)/F̄_Y(u) = ((xu)^{-α} L(ux))/(u^{-α} L(u)) → x^{-α} as u → ∞.

Assume that Y is Pareto; then we have

P(Y > x) = x^{-α}, x > 1,

and

F̄^{[u]}(x) = (x/u)^{-α} if x > u;  1 if x ≤ u.

The maximum-likelihood estimator of α can be found as follows. Consider the likelihood function of Y_{τ_1}, ..., Y_{τ_r}:

L = Π_{i=1}^r (α/u) (Y_{τ_i}/u)^{-(α+1)}.

Taking logarithms yields

log L = r log α - r log u - (α + 1) Σ_{i=1}^r log(Y_{τ_i}/u),

and maximizing with respect to α gives

1/α̂^{(POT)} = γ̂_{r,n} = (1/r) Σ_{i=1}^r log(Y_{τ_i}/u).    (3.1.1)

3.1.2 Hill estimator

Consider equation (3.1.1). If we choose u = Y_{(k_n+1)}, so that Y_{(1)} > u, Y_{(2)} > u, ..., Y_{(k_n)} > u (here Y_{(1)} ≥ Y_{(2)} ≥ ... denote the decreasing order statistics), then we obtain the Hill estimator of 1/α based on the k_n upper-order statistics:

γ̂^{(Hill)}_{k_n,n} = (1/k_n) Σ_{i=1}^{k_n} log(Y_{(i)}/Y_{(k_n+1)}).    (3.1.2)
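A minimal sketch of (3.1.1); as just noted, (3.1.2) is the special case u = Y_{(k+1)} (here Y is a data vector and k the number of upper-order statistics, both assumptions of the sketch):

  pot <- function(Y, u) {
    exc <- Y[Y > u]                      # exceedances over the threshold u
    1 / mean(log(exc / u))               # ML estimate alpha-hat of (3.1.1)
  }
  hill.gamma <- function(Y, k) {
    u <- sort(Y, decreasing = TRUE)[k + 1]
    1 / pot(Y, u)                        # gamma-hat of (3.1.2)
  }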

There is a theoretical formula which gives an optimal choice of k_n; however, that formula is useless for practical purposes. In practice, one looks at the stability region of the Hill plot and then picks the biggest possible k_n, in order to obtain the shortest confidence interval. For example, on Figure 3.3 I would choose the largest k_n within the stability region.

3.1.3 Pickands estimator

Suppose Z_i, i ≥ 1, is i.i.d. with common distribution F. The Pickands estimator of γ = 1/α uses differences of quantiles and is based on three upper-order statistics Z_{(k)}, Z_{(2k)}, Z_{(4k)} from a sample of size n. The estimator is defined as (recall that k = k_n depends on n)

γ̂^{(Pickands)}_{k,n} = (1/log 2) log((Z_{(k)} - Z_{(2k)})/(Z_{(2k)} - Z_{(4k)})).    (3.1.3)

Using the result in Resnick (2007), p. 93, with a_n = n^{1/α}, we have

Z_{([k/y])}/a_{n/k} →P y^γ,  0 < y < ∞,

and hence

(Z_{(k)} - Z_{(2k)})/(Z_{(2k)} - Z_{(4k)})
= ((Z_{(k)} - a_{n/k})/a_{n/k} - (Z_{(2k)} - a_{n/k})/a_{n/k}) / ((Z_{(2k)} - a_{n/k})/a_{n/k} - (Z_{(4k)} - a_{n/k})/a_{n/k})
→P (1^γ - (1/2)^γ)/((1/2)^γ - (1/4)^γ) = 2^γ.

By the continuous mapping theorem, taking logarithms yields

log((Z_{(k)} - Z_{(2k)})/(Z_{(2k)} - Z_{(4k)})) →P γ log 2,

and dividing by log 2,

(1/log 2) log((Z_{(k)} - Z_{(2k)})/(Z_{(2k)} - Z_{(4k)})) →P γ.
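A minimal sketch of (3.1.3):

  pickands <- function(Z, k) {
    Zs <- sort(Z, decreasing = TRUE)     # upper-order statistics Z_(1) >= Z_(2) >= ...
    log((Zs[k] - Zs[2 * k]) / (Zs[2 * k] - Zs[4 * k])) / log(2)
  }
  set.seed(1)
  pickands(runif(4000)^(-1/2), k = 100)  # Pareto(2) sample: approx gamma = 0.5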

3.2 Simulations: POT method for different choices of threshold u

We simulate Y_i = e^{0.2 X_i} Z_i, i = 1, ..., n, where the Z_i are independent Pareto with parameter α = 2 and X_i is ARFIMA with standard normal innovations and d = 0.2. For a sample of size n = 1000, we obtain Y_1, ..., Y_1000 and we estimate α using the maximum likelihood estimator based on exceedances over a threshold u,

1/α̂ = (1/r) Σ_{i=1}^r log(Y_{τ_i}/u),

where r is the number of exceedances over u. For the deterministic levels, we choose u = max(Y_1, ..., Y_1000)/2 based on the first sample, and then keep this value throughout the Monte Carlo simulation. Alternatively, in the simulation we could let u = √5, since we assumed α = 2. Indeed, in a sample of size n, we expect n P(Y > u) values which are bigger than u; setting

n P(Y > u) ≈ n u^{-2} = k_n

gives u = √(n/k_n). Consequently, for n = 1000 and k_n = 200, we obtain u = √5. We choose u = Y_{(n-k_n:n)} for the random levels. The following simulation results indicate that the estimator is better when u = Y_{(n-k_n:n)}:

Sample mean of 1/α̂ when u = max(Y_1, ..., Y_1000)/2: …
Sample mean of 1/α̂ when u = Y_{(n-k_n:n)}: …
Sample standard deviation of 1/α̂ when u = max(Y_1, ..., Y_1000)/2: …
Sample standard deviation of 1/α̂ when u = Y_{(n-k_n:n)}: …
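A sketch of a single Monte Carlo run; the estimates in Figure 3.1 are obtained by repeating it M = 1000 times (pot() is the function sketched in Section 3.1.1; fracdiff is again an assumption):

  library(fracdiff)
  n <- 1000; k <- 200
  X <- fracdiff.sim(n, d = 0.2)$series        # ARFIMA(0, 0.2, 0), N(0,1) innovations
  Z <- runif(n)^(-1/2)                        # Pareto noise, alpha = 2
  Y <- exp(0.2 * X) * Z                       # long memory SV sample
  u.det <- sqrt(n / k)                        # deterministic level u = sqrt(5)
  u.ran <- sort(Y, decreasing = TRUE)[k + 1]  # random level Y_(n-k:n)
  c(1 / pot(Y, u.det), 1 / pot(Y, u.ran))     # two estimates of gamma = 1/alpha = 0.5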

Figure 3.1: Statistics for POT simulation. The graphs illustrate the empirical distribution of α̂_1, ..., α̂_M based on M = 1000 Monte Carlo runs (histograms and normal QQ-plots for the deterministic and random levels).

Figure 3.1 illustrates the above results visually. The empirical distribution of the estimates 1/α̂_1, ..., 1/α̂_1000 at random levels is symmetric, with mean very close to the true value 0.5, and the QQ plot shows that it is approximately normally distributed. The histogram and QQ plot of the estimator at deterministic levels show that the normal approximation is not appropriate there.


More information

Modeling and testing long memory in random fields

Modeling and testing long memory in random fields Modeling and testing long memory in random fields Frédéric Lavancier lavancier@math.univ-lille1.fr Université Lille 1 LS-CREST Paris 24 janvier 6 1 Introduction Long memory random fields Motivations Previous

More information

Nonparametric regression with martingale increment errors

Nonparametric regression with martingale increment errors S. Gaïffas (LSTA - Paris 6) joint work with S. Delattre (LPMA - Paris 7) work in progress Motivations Some facts: Theoretical study of statistical algorithms requires stationary and ergodicity. Concentration

More information

Time Series 2. Robert Almgren. Sept. 21, 2009

Time Series 2. Robert Almgren. Sept. 21, 2009 Time Series 2 Robert Almgren Sept. 21, 2009 This week we will talk about linear time series models: AR, MA, ARMA, ARIMA, etc. First we will talk about theory and after we will talk about fitting the models

More information

6. The econometrics of Financial Markets: Empirical Analysis of Financial Time Series. MA6622, Ernesto Mordecki, CityU, HK, 2006.

6. The econometrics of Financial Markets: Empirical Analysis of Financial Time Series. MA6622, Ernesto Mordecki, CityU, HK, 2006. 6. The econometrics of Financial Markets: Empirical Analysis of Financial Time Series MA6622, Ernesto Mordecki, CityU, HK, 2006. References for Lecture 5: Quantitative Risk Management. A. McNeil, R. Frey,

More information

Max stable Processes & Random Fields: Representations, Models, and Prediction

Max stable Processes & Random Fields: Representations, Models, and Prediction Max stable Processes & Random Fields: Representations, Models, and Prediction Stilian Stoev University of Michigan, Ann Arbor March 2, 2011 Based on joint works with Yizao Wang and Murad S. Taqqu. 1 Preliminaries

More information

Applied time-series analysis

Applied time-series analysis Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna October 18, 2011 Outline Introduction and overview Econometric Time-Series Analysis In principle,

More information

Lecture 3: Autoregressive Moving Average (ARMA) Models and their Practical Applications

Lecture 3: Autoregressive Moving Average (ARMA) Models and their Practical Applications Lecture 3: Autoregressive Moving Average (ARMA) Models and their Practical Applications Prof. Massimo Guidolin 20192 Financial Econometrics Winter/Spring 2018 Overview Moving average processes Autoregressive

More information

Part III Example Sheet 1 - Solutions YC/Lent 2015 Comments and corrections should be ed to

Part III Example Sheet 1 - Solutions YC/Lent 2015 Comments and corrections should be  ed to TIME SERIES Part III Example Sheet 1 - Solutions YC/Lent 2015 Comments and corrections should be emailed to Y.Chen@statslab.cam.ac.uk. 1. Let {X t } be a weakly stationary process with mean zero and let

More information

Module 9: Stationary Processes

Module 9: Stationary Processes Module 9: Stationary Processes Lecture 1 Stationary Processes 1 Introduction A stationary process is a stochastic process whose joint probability distribution does not change when shifted in time or space.

More information

STA205 Probability: Week 8 R. Wolpert

STA205 Probability: Week 8 R. Wolpert INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and

More information

TIME SERIES AND FORECASTING. Luca Gambetti UAB, Barcelona GSE Master in Macroeconomic Policy and Financial Markets

TIME SERIES AND FORECASTING. Luca Gambetti UAB, Barcelona GSE Master in Macroeconomic Policy and Financial Markets TIME SERIES AND FORECASTING Luca Gambetti UAB, Barcelona GSE 2014-2015 Master in Macroeconomic Policy and Financial Markets 1 Contacts Prof.: Luca Gambetti Office: B3-1130 Edifici B Office hours: email:

More information

Assessing the dependence of high-dimensional time series via sample autocovariances and correlations

Assessing the dependence of high-dimensional time series via sample autocovariances and correlations Assessing the dependence of high-dimensional time series via sample autocovariances and correlations Johannes Heiny University of Aarhus Joint work with Thomas Mikosch (Copenhagen), Richard Davis (Columbia),

More information

1 Class Organization. 2 Introduction

1 Class Organization. 2 Introduction Time Series Analysis, Lecture 1, 2018 1 1 Class Organization Course Description Prerequisite Homework and Grading Readings and Lecture Notes Course Website: http://www.nanlifinance.org/teaching.html wechat

More information

Strictly Stationary Solutions of Autoregressive Moving Average Equations

Strictly Stationary Solutions of Autoregressive Moving Average Equations Strictly Stationary Solutions of Autoregressive Moving Average Equations Peter J. Brockwell Alexander Lindner Abstract Necessary and sufficient conditions for the existence of a strictly stationary solution

More information

Chapter 3 - Temporal processes

Chapter 3 - Temporal processes STK4150 - Intro 1 Chapter 3 - Temporal processes Odd Kolbjørnsen and Geir Storvik January 23 2017 STK4150 - Intro 2 Temporal processes Data collected over time Past, present, future, change Temporal aspect

More information

1 Linear Difference Equations

1 Linear Difference Equations ARMA Handout Jialin Yu 1 Linear Difference Equations First order systems Let {ε t } t=1 denote an input sequence and {y t} t=1 sequence generated by denote an output y t = φy t 1 + ε t t = 1, 2,... with

More information

Chapter 1. Basics. 1.1 Definition. A time series (or stochastic process) is a function Xpt, ωq such that for

Chapter 1. Basics. 1.1 Definition. A time series (or stochastic process) is a function Xpt, ωq such that for Chapter 1 Basics 1.1 Definition A time series (or stochastic process) is a function Xpt, ωq such that for each fixed t, Xpt, ωq is a random variable [denoted by X t pωq]. For a fixed ω, Xpt, ωq is simply

More information

STAT 248: EDA & Stationarity Handout 3

STAT 248: EDA & Stationarity Handout 3 STAT 248: EDA & Stationarity Handout 3 GSI: Gido van de Ven September 17th, 2010 1 Introduction Today s section we will deal with the following topics: the mean function, the auto- and crosscovariance

More information

Econ 424 Time Series Concepts

Econ 424 Time Series Concepts Econ 424 Time Series Concepts Eric Zivot January 20 2015 Time Series Processes Stochastic (Random) Process { 1 2 +1 } = { } = sequence of random variables indexed by time Observed time series of length

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

γ 0 = Var(X i ) = Var(φ 1 X i 1 +W i ) = φ 2 1γ 0 +σ 2, which implies that we must have φ 1 < 1, and γ 0 = σ2 . 1 φ 2 1 We may also calculate for j 1

γ 0 = Var(X i ) = Var(φ 1 X i 1 +W i ) = φ 2 1γ 0 +σ 2, which implies that we must have φ 1 < 1, and γ 0 = σ2 . 1 φ 2 1 We may also calculate for j 1 4.2 Autoregressive (AR) Moving average models are causal linear processes by definition. There is another class of models, based on a recursive formulation similar to the exponentially weighted moving

More information

Beyond the color of the noise: what is memory in random phenomena?

Beyond the color of the noise: what is memory in random phenomena? Beyond the color of the noise: what is memory in random phenomena? Gennady Samorodnitsky Cornell University September 19, 2014 Randomness means lack of pattern or predictability in events according to

More information

Quantile-quantile plots and the method of peaksover-threshold

Quantile-quantile plots and the method of peaksover-threshold Problems in SF2980 2009-11-09 12 6 4 2 0 2 4 6 0.15 0.10 0.05 0.00 0.05 0.10 0.15 Figure 2: qqplot of log-returns (x-axis) against quantiles of a standard t-distribution with 4 degrees of freedom (y-axis).

More information

GARCH processes continuous counterparts (Part 2)

GARCH processes continuous counterparts (Part 2) GARCH processes continuous counterparts (Part 2) Alexander Lindner Centre of Mathematical Sciences Technical University of Munich D 85747 Garching Germany lindner@ma.tum.de http://www-m1.ma.tum.de/m4/pers/lindner/

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 59 Classical case: n d. Asymptotic assumption: d is fixed and n. Basic tools: LLN and CLT. High-dimensional setting: n d, e.g. n/d

More information

NOTES AND PROBLEMS IMPULSE RESPONSES OF FRACTIONALLY INTEGRATED PROCESSES WITH LONG MEMORY

NOTES AND PROBLEMS IMPULSE RESPONSES OF FRACTIONALLY INTEGRATED PROCESSES WITH LONG MEMORY Econometric Theory, 26, 2010, 1855 1861. doi:10.1017/s0266466610000216 NOTES AND PROBLEMS IMPULSE RESPONSES OF FRACTIONALLY INTEGRATED PROCESSES WITH LONG MEMORY UWE HASSLER Goethe-Universität Frankfurt

More information

Differencing Revisited: I ARIMA(p,d,q) processes predicated on notion of dth order differencing of a time series {X t }: for d = 1 and 2, have X t

Differencing Revisited: I ARIMA(p,d,q) processes predicated on notion of dth order differencing of a time series {X t }: for d = 1 and 2, have X t Differencing Revisited: I ARIMA(p,d,q) processes predicated on notion of dth order differencing of a time series {X t }: for d = 1 and 2, have X t 2 X t def in general = (1 B)X t = X t X t 1 def = ( X

More information

Time Series Analysis -- An Introduction -- AMS 586

Time Series Analysis -- An Introduction -- AMS 586 Time Series Analysis -- An Introduction -- AMS 586 1 Objectives of time series analysis Data description Data interpretation Modeling Control Prediction & Forecasting 2 Time-Series Data Numerical data

More information

GARCH processes probabilistic properties (Part 1)

GARCH processes probabilistic properties (Part 1) GARCH processes probabilistic properties (Part 1) Alexander Lindner Centre of Mathematical Sciences Technical University of Munich D 85747 Garching Germany lindner@ma.tum.de http://www-m1.ma.tum.de/m4/pers/lindner/

More information

MAT 3379 (Winter 2016) FINAL EXAM (PRACTICE)

MAT 3379 (Winter 2016) FINAL EXAM (PRACTICE) MAT 3379 (Winter 2016) FINAL EXAM (PRACTICE) 15 April 2016 (180 minutes) Professor: R. Kulik Student Number: Name: This is closed book exam. You are allowed to use one double-sided A4 sheet of notes. Only

More information

Ch 5. Models for Nonstationary Time Series. Time Series Analysis

Ch 5. Models for Nonstationary Time Series. Time Series Analysis We have studied some deterministic and some stationary trend models. However, many time series data cannot be modeled in either way. Ex. The data set oil.price displays an increasing variation from the

More information

Part II. Time Series

Part II. Time Series Part II Time Series 12 Introduction This Part is mainly a summary of the book of Brockwell and Davis (2002). Additionally the textbook Shumway and Stoffer (2010) can be recommended. 1 Our purpose is to

More information

LECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity.

LECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity. LECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity. Important points of Lecture 1: A time series {X t } is a series of observations taken sequentially over time: x t is an observation

More information

STA 2201/442 Assignment 2

STA 2201/442 Assignment 2 STA 2201/442 Assignment 2 1. This is about how to simulate from a continuous univariate distribution. Let the random variable X have a continuous distribution with density f X (x) and cumulative distribution

More information

The Functional Central Limit Theorem and Testing for Time Varying Parameters

The Functional Central Limit Theorem and Testing for Time Varying Parameters NBER Summer Institute Minicourse What s New in Econometrics: ime Series Lecture : July 4, 008 he Functional Central Limit heorem and esting for ime Varying Parameters Lecture -, July, 008 Outline. FCL.

More information

ON THE CONVERGENCE OF FARIMA SEQUENCE TO FRACTIONAL GAUSSIAN NOISE. Joo-Mok Kim* 1. Introduction

ON THE CONVERGENCE OF FARIMA SEQUENCE TO FRACTIONAL GAUSSIAN NOISE. Joo-Mok Kim* 1. Introduction JOURNAL OF THE CHUNGCHEONG MATHEMATICAL SOCIETY Volume 26, No. 2, May 2013 ON THE CONVERGENCE OF FARIMA SEQUENCE TO FRACTIONAL GAUSSIAN NOISE Joo-Mok Kim* Abstract. We consider fractional Gussian noise

More information

Probability Background

Probability Background Probability Background Namrata Vaswani, Iowa State University August 24, 2015 Probability recap 1: EE 322 notes Quick test of concepts: Given random variables X 1, X 2,... X n. Compute the PDF of the second

More information

Stable Process. 2. Multivariate Stable Distributions. July, 2006

Stable Process. 2. Multivariate Stable Distributions. July, 2006 Stable Process 2. Multivariate Stable Distributions July, 2006 1. Stable random vectors. 2. Characteristic functions. 3. Strictly stable and symmetric stable random vectors. 4. Sub-Gaussian random vectors.

More information

LARGE DEVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILED DEPENDENT RANDOM VECTORS*

LARGE DEVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILED DEPENDENT RANDOM VECTORS* LARGE EVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILE EPENENT RANOM VECTORS* Adam Jakubowski Alexander V. Nagaev Alexander Zaigraev Nicholas Copernicus University Faculty of Mathematics and Computer Science

More information

LECTURE 10 LINEAR PROCESSES II: SPECTRAL DENSITY, LAG OPERATOR, ARMA. In this lecture, we continue to discuss covariance stationary processes.

LECTURE 10 LINEAR PROCESSES II: SPECTRAL DENSITY, LAG OPERATOR, ARMA. In this lecture, we continue to discuss covariance stationary processes. MAY, 0 LECTURE 0 LINEAR PROCESSES II: SPECTRAL DENSITY, LAG OPERATOR, ARMA In this lecture, we continue to discuss covariance stationary processes. Spectral density Gourieroux and Monfort 990), Ch. 5;

More information

The Slow Convergence of OLS Estimators of α, β and Portfolio. β and Portfolio Weights under Long Memory Stochastic Volatility

The Slow Convergence of OLS Estimators of α, β and Portfolio. β and Portfolio Weights under Long Memory Stochastic Volatility The Slow Convergence of OLS Estimators of α, β and Portfolio Weights under Long Memory Stochastic Volatility New York University Stern School of Business June 21, 2018 Introduction Bivariate long memory

More information

Stochastic Processes: I. consider bowl of worms model for oscilloscope experiment:

Stochastic Processes: I. consider bowl of worms model for oscilloscope experiment: Stochastic Processes: I consider bowl of worms model for oscilloscope experiment: SAPAscope 2.0 / 0 1 RESET SAPA2e 22, 23 II 1 stochastic process is: Stochastic Processes: II informally: bowl + drawing

More information

STA 6857 Autocorrelation and Cross-Correlation & Stationary Time Series ( 1.4, 1.5)

STA 6857 Autocorrelation and Cross-Correlation & Stationary Time Series ( 1.4, 1.5) STA 6857 Autocorrelation and Cross-Correlation & Stationary Time Series ( 1.4, 1.5) Outline 1 Announcements 2 Autocorrelation and Cross-Correlation 3 Stationary Time Series 4 Homework 1c Arthur Berg STA

More information

If g is also continuous and strictly increasing on J, we may apply the strictly increasing inverse function g 1 to this inequality to get

If g is also continuous and strictly increasing on J, we may apply the strictly increasing inverse function g 1 to this inequality to get 18:2 1/24/2 TOPIC. Inequalities; measures of spread. This lecture explores the implications of Jensen s inequality for g-means in general, and for harmonic, geometric, arithmetic, and related means in

More information

Asymptotic Statistics-III. Changliang Zou

Asymptotic Statistics-III. Changliang Zou Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Introduction to Algorithmic Trading Strategies Lecture 10

Introduction to Algorithmic Trading Strategies Lecture 10 Introduction to Algorithmic Trading Strategies Lecture 10 Risk Management Haksun Li haksun.li@numericalmethod.com www.numericalmethod.com Outline Value at Risk (VaR) Extreme Value Theory (EVT) References

More information

MAS113 Introduction to Probability and Statistics. Proofs of theorems

MAS113 Introduction to Probability and Statistics. Proofs of theorems MAS113 Introduction to Probability and Statistics Proofs of theorems Theorem 1 De Morgan s Laws) See MAS110 Theorem 2 M1 By definition, B and A \ B are disjoint, and their union is A So, because m is a

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t,

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t, CHAPTER 2 FUNDAMENTAL CONCEPTS This chapter describes the fundamental concepts in the theory of time series models. In particular, we introduce the concepts of stochastic processes, mean and covariance

More information

Review Session: Econometrics - CLEFIN (20192)

Review Session: Econometrics - CLEFIN (20192) Review Session: Econometrics - CLEFIN (20192) Part II: Univariate time series analysis Daniele Bianchi March 20, 2013 Fundamentals Stationarity A time series is a sequence of random variables x t, t =

More information

Topic 4 Unit Roots. Gerald P. Dwyer. February Clemson University

Topic 4 Unit Roots. Gerald P. Dwyer. February Clemson University Topic 4 Unit Roots Gerald P. Dwyer Clemson University February 2016 Outline 1 Unit Roots Introduction Trend and Difference Stationary Autocorrelations of Series That Have Deterministic or Stochastic Trends

More information

Stochastic Processes. Monday, November 14, 11

Stochastic Processes. Monday, November 14, 11 Stochastic Processes 1 Definition and Classification X(, t): stochastic process: X : T! R (, t) X(, t) where is a sample space and T is time. {X(, t) is a family of r.v. defined on {, A, P and indexed

More information

CONTAGION VERSUS FLIGHT TO QUALITY IN FINANCIAL MARKETS

CONTAGION VERSUS FLIGHT TO QUALITY IN FINANCIAL MARKETS EVA IV, CONTAGION VERSUS FLIGHT TO QUALITY IN FINANCIAL MARKETS Jose Olmo Department of Economics City University, London (joint work with Jesús Gonzalo, Universidad Carlos III de Madrid) 4th Conference

More information

Formulas for probability theory and linear models SF2941

Formulas for probability theory and linear models SF2941 Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms

More information

Covariance Stationary Time Series. Example: Independent White Noise (IWN(0,σ 2 )) Y t = ε t, ε t iid N(0,σ 2 )

Covariance Stationary Time Series. Example: Independent White Noise (IWN(0,σ 2 )) Y t = ε t, ε t iid N(0,σ 2 ) Covariance Stationary Time Series Stochastic Process: sequence of rv s ordered by time {Y t } {...,Y 1,Y 0,Y 1,...} Defn: {Y t } is covariance stationary if E[Y t ]μ for all t cov(y t,y t j )E[(Y t μ)(y

More information

Autoregressive Moving Average (ARMA) Models and their Practical Applications

Autoregressive Moving Average (ARMA) Models and their Practical Applications Autoregressive Moving Average (ARMA) Models and their Practical Applications Massimo Guidolin February 2018 1 Essential Concepts in Time Series Analysis 1.1 Time Series and Their Properties Time series:

More information

Statistical signal processing

Statistical signal processing Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable

More information

Asymptotic inference for a nonstationary double ar(1) model

Asymptotic inference for a nonstationary double ar(1) model Asymptotic inference for a nonstationary double ar() model By SHIQING LING and DONG LI Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong maling@ust.hk malidong@ust.hk

More information

LARGE SAMPLE BEHAVIOR OF SOME WELL-KNOWN ROBUST ESTIMATORS UNDER LONG-RANGE DEPENDENCE

LARGE SAMPLE BEHAVIOR OF SOME WELL-KNOWN ROBUST ESTIMATORS UNDER LONG-RANGE DEPENDENCE LARGE SAMPLE BEHAVIOR OF SOME WELL-KNOWN ROBUST ESTIMATORS UNDER LONG-RANGE DEPENDENCE C. LÉVY-LEDUC, H. BOISTARD, E. MOULINES, M. S. TAQQU, AND V. A. REISEN Abstract. The paper concerns robust location

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

Lecture 3 Stationary Processes and the Ergodic LLN (Reference Section 2.2, Hayashi)

Lecture 3 Stationary Processes and the Ergodic LLN (Reference Section 2.2, Hayashi) Lecture 3 Stationary Processes and the Ergodic LLN (Reference Section 2.2, Hayashi) Our immediate goal is to formulate an LLN and a CLT which can be applied to establish sufficient conditions for the consistency

More information

Difference equations. Definitions: A difference equation takes the general form. x t f x t 1,,x t m.

Difference equations. Definitions: A difference equation takes the general form. x t f x t 1,,x t m. Difference equations Definitions: A difference equation takes the general form x t fx t 1,x t 2, defining the current value of a variable x as a function of previously generated values. A finite order

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

5: MULTIVARATE STATIONARY PROCESSES

5: MULTIVARATE STATIONARY PROCESSES 5: MULTIVARATE STATIONARY PROCESSES 1 1 Some Preliminary Definitions and Concepts Random Vector: A vector X = (X 1,..., X n ) whose components are scalarvalued random variables on the same probability

More information

NANYANG TECHNOLOGICAL UNIVERSITY SEMESTER II EXAMINATION MAS451/MTH451 Time Series Analysis TIME ALLOWED: 2 HOURS

NANYANG TECHNOLOGICAL UNIVERSITY SEMESTER II EXAMINATION MAS451/MTH451 Time Series Analysis TIME ALLOWED: 2 HOURS NANYANG TECHNOLOGICAL UNIVERSITY SEMESTER II EXAMINATION 2012-2013 MAS451/MTH451 Time Series Analysis May 2013 TIME ALLOWED: 2 HOURS INSTRUCTIONS TO CANDIDATES 1. This examination paper contains FOUR (4)

More information