Joint Parameter Estimation of the Ornstein-Uhlenbeck SDE driven by Fractional Brownian Motion


Luis Barboza
October 23, 2012
Department of Statistics, Purdue University

Introduction

Main objective: to study the GMM (Generalized Method of Moments) joint estimator of the drift and memory parameters of the Ornstein-Uhlenbeck SDE driven by fBm.

Joint work with Prof. Frederi Viens (Department of Statistics, Purdue University).

Outline

1. Preliminaries
2. Joint estimation of Gaussian stationary processes
3. The fOU case
4. Simulation

Part 1: Preliminaries

Fractional Brownian Motion

The fractional Brownian motion (fBm) with Hurst parameter H ∈ (0, 1) is the centered Gaussian process B^H_t, continuous a.s., with

\mathrm{Cov}(B^H_t, B^H_s) = \frac{1}{2}\left( |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right), \quad t, s \in \mathbb{R}.

Some properties:
- Self-similarity: for each a > 0, B^H_{at} \stackrel{d}{=} a^H B^H_t.
- It admits an integral representation with respect to standard Brownian motion over a finite interval:

B^H_t = \int_0^t K_H(t, s)\, dW(s).

Fractional Gaussian noise (fGn)

Definition: N^H_t := B^H_t - B^H_{t-\alpha}, where α > 0.
- Gaussian and stationary process.
- Long-memory behavior of the increments when H > 1/2 (in the sense that \sum_n \rho(n) = \infty).
- Ergodicity.

Fractional Gaussian noise (fGn)

Autocovariance function:

\rho_\theta(t) = \frac{1}{2}\left[ |t+\alpha|^{2H} + |t-\alpha|^{2H} - 2|t|^{2H} \right].

Spectral density:

f_\theta(t) = 2 c_H (1 - \cos t)\, |t|^{-1-2H}, \quad \text{where } c_H = \frac{\sin(\pi H)\,\Gamma(2H+1)}{2\pi}

(Beran, 1994).
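As a quick numerical companion (our code, not part of the talk; the names rho_fgn and f_fgn are ours), the two displays translate directly into Python, and the partial sums of the autocovariance illustrate the non-summability for H > 1/2:

```python
import numpy as np
from scipy.special import gamma

def rho_fgn(t, H, alpha=1.0):
    """Autocovariance of fractional Gaussian noise with step alpha."""
    return 0.5 * (np.abs(t + alpha)**(2*H) + np.abs(t - alpha)**(2*H)
                  - 2 * np.abs(t)**(2*H))

def f_fgn(x, H):
    """Spectral density of fGn (leading term), with c_H as in Beran (1994)."""
    c_H = np.sin(np.pi * H) * gamma(2*H + 1) / (2 * np.pi)
    return 2 * c_H * (1 - np.cos(x)) * np.abs(x)**(-1 - 2*H)

# Long memory for H > 1/2: autocovariances are not summable, so this
# partial sum keeps growing as the cutoff increases.
print(sum(rho_fgn(n, H=0.7) for n in range(1, 10_000)))
```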

Estimation of H (fBm and fGn)

- Classical methods: R/S statistic (Hurst, 1951), variance plot (heuristic method).
- Robinson (1992, 1995): semiparametric Gaussian estimation (spectral information).
- MLE methods: Whittle's estimator.

Estimation of H

Methods using variations (filters):
- Coeurjolly (2001): consistent estimator of H ∈ (0, 1) based on the asymptotic behavior of discrete variations of the fBm; asymptotic normality for H < 3/4.
- Tudor and Viens (2008): proved consistency and asymptotics of the second-order variational estimator (convergence to a Rosenblatt random variable when H > 3/4) using Malliavin calculus.

Ornstein-Uhlenbeck SDE driven by fBm (Cheridito, 2003)

Take λ, σ > 0 and ζ an a.s. bounded random variable. The Langevin equation

X_t = \zeta - \lambda \int_0^t X_s\, ds + \sigma B^H_t, \quad t \ge 0,

has a unique strong solution, called the fractional Ornstein-Uhlenbeck process:

X_t = e^{-\lambda t}\left( \zeta + \sigma \int_0^t e^{\lambda u}\, dB^H_u \right), \quad 0 \le t \le T,

where the integral exists in the Riemann-Stieltjes sense.

Properties (Cheridito, 2003)

Stationary solution (fOU process):

X_t = \sigma \int_{-\infty}^{t} e^{-\lambda(t-u)}\, dB^H_u, \quad t > 0.

Autocovariance function (Pipiras and Taqqu, 2000):

\rho_\theta(t) = 2\sigma^2 c_H \int_0^\infty \frac{\cos(tx)\, x^{1-2H}}{\lambda^2 + x^2}\, dx, \quad \text{where } c_H = \frac{\Gamma(2H+1)\sin(\pi H)}{2\pi}.
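The talk later evaluates this oscillatory integral with Filon's method; as a stand-in (our sketch, not the speaker's code), SciPy's oscillatory-weight quadrature computes the same quantity:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def rho_fou(t, H, lam, sigma=1.0):
    """Stationary fOU autocovariance via its spectral integral.
    Adaptive oscillatory quadrature stands in for Filon's method here."""
    c_H = gamma(2*H + 1) * np.sin(np.pi * H) / (2 * np.pi)
    kernel = lambda x: x**(1 - 2*H) / (lam**2 + x**2)
    if t == 0.0:
        val, _ = quad(kernel, 0, np.inf)
    else:
        # QUADPACK's QAWF routine handles the cos(t*x) weight on [0, inf)
        val, _ = quad(kernel, 0, np.inf, weight='cos', wvar=abs(t))
    return 2 * sigma**2 * c_H * val

print(rho_fou(0.0, H=0.7, lam=1.0))  # process variance
print(rho_fou(5.0, H=0.7, lam=1.0))  # decays like a multiple of t^(2H-2)
```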

Properties

- X_t has long memory when H > 1/2, due to the following approximation for large x:

\rho_\theta(x) = \frac{H(2H-1)}{\lambda}\, x^{2H-2} + O(x^{2H-4}).

- X_t is ergodic.
- X_t is not self-similar, but it exhibits asymptotic self-similarity (Bonami and Estrade, 2003):

f_\theta(x) = c_H |x|^{-1-2H} + O(|x|^{-3-2H}) \quad \text{as } |x| \to \infty.

Estimation of λ given H (OU-fBm)

MLE estimators:
- Kleptsyna and Le Breton (2002): MLE based on the Girsanov formula for fBm; strong consistency when H > 1/2.
- Tudor and Viens (2006): extended the K&L result to more general drift conditions; strong consistency of the MLE when H < 1/2, using Malliavin calculus. They work with the non-stationary case.

Estimation of λ given H (least-squares methods)

Hu and Nualart (2010): an estimator of λ which is strongly consistent for H ≥ 1/2:

\hat\lambda_T = \left( \frac{1}{H\,\Gamma(2H)\, T} \int_0^T X_t^2\, dt \right)^{-1/(2H)}.

\hat\lambda_T is asymptotically normal if H ∈ (1/2, 3/4). The proofs of these results rely mostly on Malliavin calculus techniques.
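In discrete time the integral becomes a Riemann sum; a hedged sketch of this estimator (the discretization and the function name are ours), assuming a path X observed on a regular grid of mesh dt:

```python
import numpy as np
from scipy.special import gamma

def hu_nualart_lambda(X, dt, H):
    """Hu-Nualart-type estimate of lambda for the stationary fOU (sigma = 1),
    with the time integral int_0^T X_t^2 dt replaced by a Riemann sum."""
    T = len(X) * dt
    integral = np.sum(X**2) * dt
    return (integral / (H * gamma(2*H) * T)) ** (-1.0 / (2*H))
```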

Estimation of H and λ

Methods based on variations:
- Biermé et al. (2011): for fixed T, they use the results in Biermé and Richard (2006) to prove consistency and asymptotic normality of the joint estimator of (H, σ²) for any stationary Gaussian process with asymptotic self-similarity (infill-asymptotics case).
- Brouste and Iacus (2012): for T → ∞ and α → 0, they proved consistency and asymptotic normality when 1/2 < H < 3/4 for the pair (H, σ²) (non-stationary case).

Part 2: Joint estimation of Gaussian stationary processes

Preliminaries

- X_t: real-valued, centered, Gaussian stationary process with spectral density f_{θ_0}(x).
- f_θ(x): continuous with respect to x, continuously differentiable with respect to θ.
- θ belongs to a compact set Θ ⊂ R^p.
- Bochner's theorem:

\rho_\theta(s) := \mathrm{Cov}(X_{t+s}, X_t) = \int_{\mathbb{R}} \cos(sx)\, f_\theta(x)\, dx.

Preliminaries

If ρ_θ(s) is a continuous function of s, then the process X_t is ergodic.

Assumption 1. Take α > 0 and L a positive integer. Then there exists k ∈ {0, 1, ..., L} such that ρ_θ(αk) is an injective function of θ.

Preliminaries

Recall: a(l) := (a_0(l), ..., a_L(l)) is a discrete filter of length L+1 and order l, for L ∈ Z_+ and l ∈ {0, ..., L}, if

\sum_{k=0}^{L} a_k(l)\, k^p = 0 \quad \text{for } 0 \le p \le l-1,
\qquad
\sum_{k=0}^{L} a_k(l)\, k^p \ne 0 \quad \text{if } p = l.

Examples: finite-difference filters, Daubechies filters (wavelets).

Assume that we can choose L filters with orders l_i ∈ {1, ..., L} for i = 1, ..., L, and an extra filter with l_0 = 0.

Preliminaries

Define the filtered process of order l_i and step size f > 0 at t as

\varphi_i(X_t) := \sum_{q=0}^{L} a_q(l_i)\, X_{t - fq}.

Its second moment is

V_i(\theta_0) := E[\varphi_i(X_t)^2] = \sum_{k=0}^{L} b_k(l_i)\, \rho_{\theta_0}(fk).

Define the set of moment functions g(X_t, θ) := (g_0(X_t, θ), ..., g_L(X_t, θ))' by

g_i(X_t, \theta) = \varphi_i(X_t)^2 - V_i(\theta), \quad 0 \le i \le L.

(A numerical sketch of the filters and the filtered process follows.)
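A minimal sketch of these objects (function names and the finite-difference construction a_k = (-1)^k C(l, k) are ours; any filter family satisfying the order conditions would do):

```python
import numpy as np
from math import comb

def fd_filter(l):
    """Finite-difference filter of order l: a_k = (-1)^k * C(l, k)."""
    return np.array([(-1)**k * comb(l, k) for k in range(l + 1)])

def check_order(a, l):
    """Verify sum_k a_k k^p = 0 for p < l and != 0 for p = l."""
    k = np.arange(len(a))
    return all(abs(np.sum(a * k**p)) < 1e-10 for p in range(l)) and \
           abs(np.sum(a * k**l)) > 1e-10

def filtered_process(X, a, step=1):
    """phi(X_t) = sum_q a_q X_{t - step*q}, at every admissible time index."""
    L = len(a) - 1
    return sum(a[q] * X[(L - q)*step : len(X) - q*step] for q in range(L + 1))

a2 = fd_filter(2)           # [1, -2, 1], a second-order filter
print(check_order(a2, 2))   # True
```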

Preliminaries

Assume that we have observed the stationary process X_t at times 0 = t_0 < t_1 < ... < t_{N-1} < t_N = T, with α := t_i - t_{i-1} > 0 fixed. Assume there exists a sequence of symmetric, positive-definite random matrices \{\hat A_N\} such that \hat A_N \xrightarrow{p} A, with A > 0. Define:

\hat g_N(\theta) := \frac{1}{N - L + 1} \sum_{i=L}^{N} g(X_{t_i}, \theta) \quad \text{(sample moments)},

\hat Q_N(\theta) := \hat g_N(\theta)'\, \hat A_N\, \hat g_N(\theta),

Q_0(\theta) := E[g(X_t, \theta)]'\, A\, E[g(X_t, \theta)].

GMM estimation

Define the GMM estimator of θ_0:

\hat\theta_N := \operatorname*{argmin}_{\theta \in \Theta} \hat Q_N(\theta)
= \operatorname*{argmin}_{\theta \in \Theta} \left( \frac{1}{N}\sum_{i=1}^{N} g(X_{t_i}, \theta) \right)' \hat A_N \left( \frac{1}{N}\sum_{i=1}^{N} g(X_{t_i}, \theta) \right).
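Schematically (a sketch under our naming: phi_sq holds the squared filtered observations, model_V(theta) stands in for the model-implied variances V_i(θ) computed from ρ_θ, and the optimizer choice is ours):

```python
import numpy as np
from scipy.optimize import minimize

def gmm_estimate(phi_sq, model_V, theta0, A_hat):
    """Minimize g_bar(theta)' A_hat g_bar(theta), where
    g_bar_i(theta) = mean over t of phi_i(X_t)^2  -  V_i(theta).
    phi_sq: (n_times, L+1) array of squared filtered observations."""
    emp = phi_sq.mean(axis=0)            # empirical second moments
    def Q(theta):
        g = emp - model_V(theta)         # sample moment vector g_bar(theta)
        return g @ A_hat @ g
    res = minimize(Q, theta0, method='Nelder-Mead')
    return res.x
```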

Consistency

Lemma 1. Under the above assumptions:
(i) \sup_{\theta \in \Theta} |\hat Q_N(\theta) - Q_0(\theta)| \xrightarrow{a.s.} 0;
(ii) Q_0(\theta) = 0 if and only if θ = θ_0.

Consistency

Key aspects of the proof:
- Ergodicity of X_t.
- Continuity of ρ_θ(·) over the compact set Θ.
- Injectivity of

\vec\rho_\theta(\alpha) := \big( \rho_\theta(\alpha \cdot 0), \ldots, \rho_\theta(\alpha \cdot L) \big)'.

- Results of Newey and McFadden (1994) on GMM.

Consistency

We have all the conditions to apply:

Theorem 1 (Newey and McFadden, 1994). Under the above assumptions, \hat\theta_N \xrightarrow{a.s.} \theta_0.

Asymptotic normality of the sample moments

Denote

\hat G_N(\theta) := \nabla_\theta\, \hat g_N(\theta), \qquad G(\theta) := E[\nabla_\theta\, g(X_t, \theta)].

Assumption 2. For fixed α > 0, assume that the (L+1) × p matrix \nabla_\theta\, \vec\rho_\theta(\alpha) has full column rank for any θ ∈ Θ.

One can show that \hat G_N(\theta) = G(\theta) = -\nabla_\theta V(\theta), where V(θ) = (V_0(θ), ..., V_L(θ))'.

Asymptotic normality of the sample moments

Based on the article of Biermé et al. (2011), let us assume:

Assumption 3 (Biermé et al.'s condition). For any l, l' ∈ {l_0, ..., l_L},

R(u;\, l, l') := \min\!\left(1, |u|^{\,l + l'}\right) \sum_{p \in \mathbb{Z}} f_{\theta_0}\!\left( \frac{u + 2\pi p}{\alpha} \right) \in L^2((-\pi, \pi)).

Asymptotic normality of the sample moments

Lemma 2. Under Assumptions 1-3,

\sqrt{N}\, \hat g_N(\theta_0) \xrightarrow{d} N(0, \Omega),

where

\Omega_{ij} = 2\alpha^{-2} \int_{-\pi}^{\pi} P_{a_{l_i}}[\cos u]^2\, P_{a_{l_j}}[\cos u]^2\, \bar f_{\theta_0}(u/\alpha)^2\, du

and \bar f_{\theta_0}(u) := \sum_{p \in \mathbb{Z}} f_{\theta_0}(u + 2\pi p). Note: P_{a_l}(x) := \sum_{k=0}^{L} a_k(l)\, x^k.

Sketch of proof

Let V_D(\theta_0) := \mathrm{Diag}\big(1/V_j(\theta_0)\big)_{j \in \{0,\ldots,L\}}. We scale the vector g(X_t, θ_0) as follows:

\sqrt{N}\, V_D(\theta_0)\, \hat g_N(\theta_0) = \left( \frac{\sqrt{N}}{N-L+1} \sum_{i=L}^{N} H_2(Z_{l_j, t_i}) \right)_{j \in \{0,\ldots,L\}},

where Z_{l_j, t_i} := \varphi_j(X_{t_i}) / \sqrt{V_j(\theta_0)} and H_2(z) = z^2 - 1 is the second-order Hermite polynomial.

Use the vector-valued version of the Breuer-Major theorem with spectral-information conditions (Biermé et al., 2011) to deduce the asymptotic behavior of the previous sum.

Asymptotic behavior of the error

Let ε_N := \hat\theta_N - \theta_0. Using the mean value theorem:

\varepsilon_N = -\psi_N(\bar\theta_N, \hat\theta_N)\, \hat g_N(\theta_0),

where \psi_N(\bar\theta_N, \hat\theta_N) := [G(\hat\theta_N)'\, \hat A_N\, G(\bar\theta_N)]^{-1}\, G(\hat\theta_N)'\, \hat A_N.

Also note that

\psi_N(\bar\theta_N, \hat\theta_N) \xrightarrow{p} [G(\theta_0)'\, A\, G(\theta_0)]^{-1}\, G(\theta_0)'\, A,

so we can bound E[\|\psi_N(\bar\theta_N, \hat\theta_N)\|^{4p}] for any p > 0.

Asymptotic behavior of the error

X_{t_k} can be represented as a Wiener-Itô integral with respect to standard Brownian motion:

X_{t_k} = I_1(A_k(\cdot \mid \theta_0)), \quad \text{where } A_k(x \mid \theta_0) := \cos(\alpha k x)\, \sqrt{f_{\theta_0}(x)}.

Using the multiplication rule of Wiener integrals,

(\hat g_N(\theta))_i = I_2[B_i(\cdot \mid \theta)],

where B_i(\cdot \mid \theta) is a kernel depending on the filter a.

Asymptotic behavior of the error

We already proved a CLT for \hat g_N(\theta_0); hence, for all i,

E[|(\hat g_N(\theta_0))_i|^2] = O(N^{-1}).

For p > 0, we can use the equivalence of L^p-norms on a fixed Wiener chaos to get

E[\|\hat g_N(\theta_0)\|^{4p}] \le \tau_{p,L}\, N^{-2p}.

By Borel-Cantelli, we conclude:

Asymptotic behavior of the error

Corollary 1. Under Assumptions 1-3 we have:
(i) E[\|\hat\theta_N - \theta_0\|^2] = O(N^{-1});
(ii) N^{\gamma}\, \|\hat\theta_N - \theta_0\| \xrightarrow{a.s.} 0 for any γ < 1/2.

Asymptotic behavior of the error

The previous result generalizes as follows:

Theorem 2. Let \hat\theta_N be the GMM estimator of θ_0. Assume there exists a diagonal (L+1) × (L+1) matrix D_N(θ_0) such that

E\big[ D_N^{(i,i)}(\theta_0)\, |(\hat g_N(\theta_0))_i|^2 \big] = O(1) \quad \text{for all } i \in \{0, \ldots, L\}.

Then:
(i) E[\|\hat\theta_N - \theta_0\|^2] = O\left( \max_{0 \le i \le L} \big\{ D_N^{(i,i)}(\theta_0)^{-1} \big\} \right);
(ii) if \max_{0 \le i \le L} \{ D_N^{(i,i)}(\theta_0)^{-1} \} = f(N)\, N^{-\nu} with f(N) = o(N^{\nu}) and ν > 0, then N^{\gamma} \varepsilon_N \xrightarrow{a.s.} 0 for any γ < ν/2.

Asymptotic normality

Theorem 3. Let X_t be a Gaussian stationary process with parameter θ_0. If \hat\theta_N is the GMM estimator of θ_0, then under Assumptions 1-3,

\sqrt{N}\,(\hat\theta_N - \theta_0) \xrightarrow{d} N\big(0,\, C(\theta_0)\, \Omega\, C(\theta_0)'\big),

where C(θ_0) = [G(θ_0)' A G(θ_0)]^{-1} G(θ_0)' A.

Key ideas of the proof (Newey and McFadden, 1994; Hansen, 1982):
- Linear behavior of ε_N = \hat\theta_N - θ_0.
- Slutsky's theorem.

Efficiency

The GMM asymptotic variance is minimized by taking A = Ω^{-1}. There are numerical techniques to compute \hat A_N such that \hat A_N \xrightarrow{p} \Omega^{-1}, for example (Hansen et al., 1996):
- Two-step estimators.
- Iterative estimator.
- Continuous-updating estimator.
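For instance, the two-step scheme can be sketched as follows (our simplification: Ω is estimated by the plain sample covariance of the moment vector at a first-step estimate; for strongly dependent data a long-run, HAC-type estimate would be needed instead):

```python
import numpy as np

def two_step_weight(g_samples):
    """g_samples: (n, L+1) array of g(X_{t_i}, theta_hat_1) evaluated at a
    first-step estimate theta_hat_1. Returns A_hat ~ Omega^{-1} to be used
    in the second GMM minimization."""
    g_centered = g_samples - g_samples.mean(axis=0)
    Omega_hat = g_centered.T @ g_centered / len(g_samples)  # sample covariance
    return np.linalg.inv(Omega_hat)
```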

Part 3: The fOU case

Preliminaries

Assume that there exists a closed rectangle Θ ⊂ R² such that θ = (H, λ) ∈ Θ, and assume σ = 1. Then ρ_θ(t) and its partial derivatives are continuous functions of θ.

Assumptions 1 and 2 are difficult to confirm directly, due to the analytical complexity of the covariance function ρ_θ(t). We decided to check these two assumptions at least locally, by checking numerically that

\det \begin{pmatrix} \partial_H\, \rho_\theta(0) & \partial_\lambda\, \rho_\theta(0) \\ \partial_H\, \rho_\theta(\alpha) & \partial_\lambda\, \rho_\theta(\alpha) \end{pmatrix} \ne 0

for different values of θ and α. The injectivity approximately holds for 0 < λ < 5 and H > 0.3.
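A numerical version of this check (our finite-difference approximation, reusing the rho_fou function from the earlier sketch; the step eps is ours):

```python
import numpy as np

def injectivity_det(H, lam, alpha, eps=1e-5):
    """Determinant of the 2x2 Jacobian of (rho_theta(0), rho_theta(alpha))
    with respect to (H, lambda), by central finite differences."""
    def d_dH(t):
        return (rho_fou(t, H + eps, lam) - rho_fou(t, H - eps, lam)) / (2 * eps)
    def d_dlam(t):
        return (rho_fou(t, H, lam + eps) - rho_fou(t, H, lam - eps)) / (2 * eps)
    J = np.array([[d_dH(0.0),   d_dlam(0.0)],
                  [d_dH(alpha), d_dlam(alpha)]])
    return np.linalg.det(J)
```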

Preliminaries

Lemma 3. Let X_t be the stationary fOU process with parameters θ = (H, λ). Assumption 3 (Biermé et al.'s condition) holds in the following two cases:
- Case 1: if l + l' > 1, it holds for all H ∈ (0, 1).
- Case 2: if l + l' ≤ 1, it holds if H ∈ (0, 3/4).

Conclusion: Assumption 3 is not valid for the first component of \hat g_N if H ≥ 3/4.

Asymptotic normality of the sample moments

Using the previous lemma, for H < 3/4:

\sqrt{N}\, \hat g_N(\theta) \xrightarrow{d} N(0, \Lambda(\theta)),

where the covariance matrix Λ(θ) has entries

\Lambda_{ij}(\theta) = 2 c_H^2\, \alpha^{4H} \underbrace{\int_{-\pi}^{\pi} P_{a_{l_i}}[\cos u]^2\, P_{a_{l_j}}[\cos u]^2 \left( \sum_{p \in \mathbb{Z}} \frac{|u + 2\pi p|^{1-2H}}{(u + 2\pi p)^2 + (\lambda\alpha)^2} \right)^{2} du}_{K_{i,j}(\alpha)}.
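For intuition, K_{ij}(α) can be approximated numerically (our sketch: the truncation P of the sum over p and the grid size are ours). Note that after squaring, the p = 0 term behaves like |u|^{2-4H} near u = 0, which is integrable exactly when H < 3/4, consistent with the restriction above:

```python
import numpy as np

def K_entry(a_i, a_j, H, lam, alpha, P=200, n_u=2000):
    """Trapezoid-rule approximation of K_{ij}(alpha) with the p-sum
    truncated to |p| <= P. Even n_u keeps u = 0 off the grid, where the
    integrand blows up (integrably, for H < 3/4)."""
    u = np.linspace(-np.pi, np.pi, n_u)
    p = np.arange(-P, P + 1)[:, None]
    s = np.sum(np.abs(u + 2*np.pi*p)**(1 - 2*H)
               / ((u + 2*np.pi*p)**2 + (lam*alpha)**2), axis=0)
    Pi = np.polyval(a_i[::-1], np.cos(u))   # P_{a_l}(cos u) = sum_k a_k (cos u)^k
    Pj = np.polyval(a_j[::-1], np.cos(u))
    return np.trapz(Pi**2 * Pj**2 * s**2, u)
```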

Asymptotic normality of the sample moments

Lemma 4. Assume that X_t is a stationary fOU process with parameters θ = (H, λ), where H ≥ 3/4. Then:

(i) If H = 3/4:

\sqrt{\frac{N}{\log N}}\, (\hat g_N(\theta))_0 \xrightarrow{d} N\big(0,\, 2\alpha^{-1} c_\theta^2\big), \quad \text{where } c_\theta = \frac{H(2H-1)}{\lambda} = \frac{3}{8\lambda}.

(ii) If H > 3/4, (\hat g_N(\theta))_0 does not converge to a normal law, or even to a second-chaos law. However,

E\big[ |N^{2-2H}\, (\hat g_N(\theta))_0|^2 \big] = O(1).

Sketch of proof

Denote \hat g_{N,0}(\theta) := (\hat g_N(\theta))_0.

For H = 3/4, as N → ∞:

E\left[ \frac{N}{\log N}\, |\hat g_{N,0}(\theta)|^2 \right] \to 2\alpha^{-1} c_\theta^2 =: c_1, \quad \text{where } c_\theta := \frac{H(2H-1)}{\lambda}.

For H > 3/4:

E\big[ |N^{2-2H}\, \hat g_{N,0}(\theta)|^2 \big] \to \frac{2\alpha^{4H-4}\, c_\theta^2}{(2H-1)(4H-3)} =: c_2.

Sketch of proof

Define

F_N = \begin{cases} \sqrt{\dfrac{N}{c_1 \log N}}\; \hat g_{N,0}(\theta) & \text{if } H = 3/4, \\[1ex] \sqrt{\dfrac{N^{4-4H}}{c_2}}\; \hat g_{N,0}(\theta) & \text{if } H > 3/4. \end{cases}

Then we need to prove that

\|D F_N\|_{\mathcal{H}}^2 \xrightarrow{L^2(\Omega)} 2, \quad \text{where } \mathcal{H} = L^2((-\pi, \pi)).

This result is valid only if H = 3/4; hence we can use Nualart and Ortiz-Latorre (2008) to conclude asymptotic normality of F_N.

Sketch of proof

In the case H > 3/4 we can use the same criterion to conclude that F_N does not have a normal limit in law. Furthermore, F_N has the kernel

I_N(r, s) := \frac{N^{1-2H}}{\sqrt{c_2}} \sum_{j=1}^{N} (A_j \otimes A_j)(r, s).

It can be proved that I_N(r, s) is not a Cauchy sequence in \mathcal{H}^{\otimes 2}; hence F_N does not have a second-chaos limit either.

Asymptotic behavior of the error

Proposition 1. Let X_t be a fOU process with parameters θ = (H, λ). Then, as N → ∞, the GMM estimate \hat\theta_N satisfies:

(i) E[\|\hat\theta_N - \theta\|^2] = \begin{cases} O(N^{-1}) & \text{if } H \in (0, 3/4), \\ O\!\left(\frac{\log N}{N}\right) & \text{if } H = 3/4, \\ O(N^{4H-4}) & \text{if } H \in (3/4, 1). \end{cases}

(ii) N^{\gamma}\, \|\hat\theta_N - \theta\| \xrightarrow{a.s.} 0 for all γ < 1/2 if H ≤ 3/4, and for all γ < 2 - 2H if H > 3/4.

Asymptotics of \hat\theta_N

Proposition 2. Let X_t be the stationary fOU process with parameters θ = (H, λ). Then, for any positive-definite matrix A, the GMM estimator of θ is consistent for any H ∈ (0, 1), and:

If H ∈ (0, 3/4):

\sqrt{N}\,(\hat\theta_N - \theta) \xrightarrow{d} N\big(0,\, C(\theta)\, \Lambda\, C(\theta)'\big),

where C(θ) = [G(θ)' A G(θ)]^{-1} G(θ)' A and Λ = 2 c_H^2\, \alpha^{4H} K(\alpha).

If H = 3/4:

\sqrt{\frac{N}{\log N}}\,(\hat\theta_N - \theta) \xrightarrow{d} N\!\left(0,\; \frac{2 c_\theta^2}{\alpha}\, (C(\theta))_1 (C(\theta))_1' \right),

where (C(θ))_1 is the first column of C(θ).

Asymptotics of \hat\theta_N

If H > 3/4, the GMM estimator does not converge to a multivariate normal law, or even to a second-chaos law.

Part 4: Simulation

Numerical details

- Approximation of \rho_\theta(t) = 2\sigma^2 c_H \int_0^\infty \frac{\cos(tx)\, x^{1-2H}}{\lambda^2 + x^2}\, dx: Filon's method (integration of oscillatory integrands), 4-5 digits of precision.
- Simulation of fBm: Davies and Harte (1987).
- Approximation of the fOU process: the stationary fOU satisfies

X_{t_{i+1}} = e^{-\lambda(t_{i+1} - t_i)}\, X_{t_i} + \sigma \int_{t_i}^{t_{i+1}} e^{-\lambda(t_{i+1} - u)}\, dB^H_u,

with X_0 \sim N(0, \rho_\theta(0)).
- For comparison purposes we use the yuima R package of Brouste and Iacus (2012).
- Finite-difference filters.
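A minimal sketch of this pipeline (our code: Davies-Harte circulant embedding for the fGn, then a crude Euler step for the fOU in place of the exact recursion displayed above; the slide instead draws X_0 from N(0, ρ_θ(0)) and integrates the Wiener integral exactly):

```python
import numpy as np

def fgn_davies_harte(n, H, rng=None):
    """Unit-step fractional Gaussian noise by circulant embedding
    (Davies and Harte, 1987)."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(n + 1)
    r = 0.5 * ((k + 1)**(2*H) - 2 * k**(2*H) + np.abs(k - 1)**(2*H))
    circ = np.concatenate([r, r[-2:0:-1]])   # first row of circulant, length 2n
    eig = np.maximum(np.fft.fft(circ).real, 0.0)  # clip round-off negatives
    m = len(circ)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    return np.fft.fft(np.sqrt(eig / m) * z).real[:n]

def fou_euler(n, H, lam, sigma=1.0, dt=1.0, rng=None):
    """Euler-type scheme for dX = -lam*X dt + sigma*dB^H, started at 0
    (cruder than the exact recursion on the slide)."""
    dB = sigma * dt**H * fgn_davies_harte(n, H, rng)  # fBm increments at mesh dt
    X = np.zeros(n + 1)
    for i in range(n):
        X[i + 1] = X[i] * (1 - lam * dt) + dB[i]
    return X
```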

Asymptotic behavior of \sqrt{N}(\hat\theta_N - \theta)

[Figure: asymptotic variance of \sqrt{N}(\hat H_N - 0.62).]

Asymptotic behavior of \sqrt{N}(\hat\theta_N - \theta)

[Figure: asymptotic variance of \sqrt{N}(\hat\lambda_N - 0.8).]

Numerical details

- (H, λ) = (0.37, 1.45).
- N = 1000, α = 1.
- L = 3 (maximum filter order).
- Number of repetitions for the sampling distribution: 100.
- Efficient weighting-matrix estimation: continuous-updating estimator.

Sampling distribution of \hat H_N

[Figure.]

Sampling distribution of \hat\lambda_N

[Figure.]

Some statistics

Empirical MSE of \hat\theta_N: 0.000155
Asymptotic variance of \hat H_N (empirical): 3.11 × 10^{-5}
Asymptotic variance of \hat H_N (theoretical): 4.96 × 10^{-5}
Asymptotic variance of \hat\lambda_N (empirical): 0.000122
Asymptotic variance of \hat\lambda_N (theoretical): 0.000193

Sampling distribution of \hat H_N

(H, λ) = (0.8, 2), with 4 filters. [Figure.]

Sampling distribution of \hat\lambda_N

(H, λ) = (0.8, 2), with 4 filters. [Figure.]

Future work

- More accurate simulation of the fOU process?
- Asymptotic law of \hat\theta_N when H > 3/4 (fOU case)?

Thanks!