On Parameter Estimation of Hidden Telegraph Process


arXiv:1509.02704v1 [math.ST] 9 Sep 2015

R. Z. Khasminskii (Wayne State University, Detroit, USA; Institute for Information Transmission Problems, Moscow, Russia) and Yu. A. Kutoyants (University of Maine, Le Mans, France; National Research University MPEI, Moscow, Russia)

Abstract

The problem of parameter estimation from observations of a two-state telegraph process in the presence of white Gaussian noise is considered. The properties of the method of moments estimator are described in the asymptotics of large samples. This estimator is then used as a preliminary one to construct a one-step MLE-process, which provides asymptotically normal and asymptotically efficient estimation of the unknown parameters.

MSC Classification: 62M05, 62F10, 62F12.

Key words: telegraph process, method of moments estimator, one-step MLE, asymptotic efficiency.

1 Introduction

This work is devoted to the problem of parameter estimation from continuous-time observations $X^T = (X_t,\ 0\le t\le T)$ of the stochastic process

$$dX_t = Y(t)\,dt + dW_t,\qquad X_0,\quad 0\le t\le T, \qquad (1)$$

where $W_t$, $t\ge 0$, is a standard Wiener process, $X(0)=X_0$ is an initial value independent of the Wiener process, and $Y(t)$, $t\ge 0$, is a two-state stationary Markov process.

The process $Y(t)$ takes two values $y_1$ and $y_2$ and has the transition rate matrix

$$\begin{pmatrix} -\lambda & \lambda\\ \mu & -\mu \end{pmatrix}.$$

We suppose that the values $\lambda>0$ and $\mu>0$ are unknown, so that we have to estimate the two-dimensional parameter $\vartheta=(\lambda,\mu)\in\Theta$, where $\Theta=(c_0,c_1)\times(c_0,c_1)$ and $0<c_0<c_1$ are positive constants, from the observations $X^t=(X_s,\ 0\le s\le t)$, $0<t\le T$; that is, the estimator $\vartheta^\star_{t,T}$, $0<t\le T$, is a stochastic process. Therefore our goal is to construct an on-line estimator-process $\vartheta^\star=(\vartheta^\star_{t,T},\ 0<t\le T)$ which is sufficiently easy to evaluate and is asymptotically optimal in some sense as $T\to\infty$. This estimator-process is constructed in two steps. First we introduce a learning interval $[0,T^\delta]$, where $\delta\in(1/2,1)$, and propose a $T^{\delta/2}$-consistent preliminary estimator obtained by the method of moments. Then we improve it up to an asymptotically efficient one with the help of a slightly modified one-step MLE procedure.

Such a model of observations is called a Hidden Markov Model (HMM), or a partially observed system. There exists an extensive literature on HMMs of this type for discrete-time models of observations; see, for example, [5], [1], [2] and the references therein. For continuous-time observation models this problem is not so well studied. See, for example, Elliott et al. [5], where Part III, "Continuous time estimation", is devoted to such models of observations; one can find there a discussion of the problems of filtering and parameter estimation in the case of observations of a finite-state Markov process in the presence of white noise. The problem closest to our statement was studied by Chigansky [3], who considered parameter estimation for a hidden finite-state Markov process observed in continuous time. He showed the consistency, asymptotic normality and asymptotic efficiency of the MLE of a one-dimensional parameter. The case of the two-state hidden telegraph process studied in our work was presented there as an example, but under the assumption $\lambda=\mu$. The problem of parameter estimation for similar models, including the identification of partially observed linear processes (models of Kalman filtering), was studied by many authors; let us mention here [10], [4], [17], [11]. The problem of asymptotically efficient estimation for the model of the telegraph process observed at discrete times was studied in [6].

The one-step MLE-process proposed here is motivated by the work of Kamatani and Uchida [8], who introduced Newton–Raphson multi-step estimators in the problem of parameter estimation from discrete-time observations of a diffusion process. In particular, it was shown there that the multi-step Newton–Raphson procedure allows one to improve the rate of convergence of a preliminary estimator up to an asymptotically efficient one.
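For readers who wish to experiment with the model (1) numerically, the following minimal sketch (not part of the paper) simulates the hidden telegraph process and the observations on a regular grid; the rates, states and step size are illustrative choices, and the chain is advanced by a first-order approximation of its jump probabilities.

```python
# Minimal simulation sketch of model (1); all numerical values are illustrative.
import numpy as np

def simulate_hidden_telegraph(T=1000.0, dt=0.01, lam=1.0, mu=2.0,
                              y1=-1.0, y2=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    # Start Y in its stationary law: P(Y = y1) = mu / (lam + mu).
    state = 0 if rng.random() < mu / (lam + mu) else 1
    leave_rate = (lam, mu)              # rate of leaving y1 resp. y2
    values = (y1, y2)
    Y = np.empty(n)
    for i in range(n):
        Y[i] = values[state]
        # over a short step the chain jumps with probability ~ rate * dt
        if rng.random() < leave_rate[state] * dt:
            state = 1 - state
    # Euler scheme for dX = Y dt + dW
    dX = Y * dt + np.sqrt(dt) * rng.standard_normal(n)
    X = np.concatenate(([0.0], np.cumsum(dX)))
    return np.linspace(0.0, n * dt, n + 1), X, Y

t, X, Y = simulate_hidden_telegraph()
print(X[-1] / t[-1])   # close to (y1*mu + y2*lam)/(lam + mu) = -1/3 here
```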

Note that the preliminary estimator there is constructed from all the observations, and the object studied is an estimator of the unknown parameter rather than an estimator-process. The estimator-process applied in this work uses a preliminary estimator constructed from the observations on an initial learning interval and follows a construction similar to the one introduced in [13] (see as well [12]).

2 Problem statement and auxiliary results

We start with a description of the MLE for this model of observations. According to the innovation theorem (see [14], Theorem 7.12), the stochastic process (1) admits the representation

$$dX_t = m(t,\vartheta)\,dt + d\bar W_t,\qquad X_0,\quad 0\le t\le T,$$

where $m(t,\vartheta)$ is the conditional expectation

$$m(t,\vartheta)=E_\vartheta\bigl[Y(t)\mid \mathcal F^X_t\bigr]=y_1 P_\vartheta\bigl(Y(t)=y_1\mid\mathcal F^X_t\bigr)+y_2 P_\vartheta\bigl(Y(t)=y_2\mid\mathcal F^X_t\bigr).$$

Here $\mathcal F^X_t:=\sigma(X_s,\ 0\le s\le t)$ is the σ-algebra generated by the observations up to time $t$, and $\bar W_t$, $0\le t\le T$, is an innovation Wiener process. Let us denote

$$\pi(t,\vartheta)=P_\vartheta\bigl(Y(t)=y_1\mid\mathcal F^X_t\bigr),\qquad P_\vartheta\bigl(Y(t)=y_2\mid\mathcal F^X_t\bigr)=1-\pi(t,\vartheta).$$

Hence

$$m(t,\vartheta)=y_2+(y_1-y_2)\,\pi(t,\vartheta).$$

The random process $\pi(t,\vartheta)$, $0\le t\le T$, satisfies the following equation (see [14], Theorem 9.1 and the corresponding filtering equation there):

$$d\pi(t,\vartheta)=\bigl[\mu-(\lambda+\mu)\pi(t,\vartheta)+\pi(t,\vartheta)(1-\pi(t,\vartheta))(y_2-y_1)\bigl(y_2+(y_1-y_2)\pi(t,\vartheta)\bigr)\bigr]dt$$
$$\qquad\qquad+\pi(t,\vartheta)(1-\pi(t,\vartheta))(y_1-y_2)\,dX_t. \qquad (2)$$

Denote by $\{P^{(t)}_\vartheta,\ \vartheta\in\Theta\}$ the measures induced by the observations $X^t=(X_s,\ 0\le s\le t)$ of the stochastic process (1) with different $\vartheta$ in the space $\mathcal C[0,t]$ of functions continuous on $[0,t]$. These measures are equivalent, and the likelihood ratio function

$$L(\vartheta,X^t)=\frac{dP^{(t)}_\vartheta}{dP^{(t)}_0}(X^t)=\exp\Bigl\{\int_0^t m(s,\vartheta)\,dX_s-\frac12\int_0^t m(s,\vartheta)^2\,ds\Bigr\},\qquad \vartheta\in\Theta,\ 0<t\le T.$$
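Equation (2) lends itself to a simple Euler scheme driven by the observed increments. The sketch below is an illustration, not the authors' code; the clipping of the state to $(0,1)$ is a numerical safeguard we add for stability.

```python
# Euler scheme for the filtering equation (2); clipping p to (0, 1) is a
# numerical safeguard, not part of the paper.
import numpy as np

def wonham_filter(dX, dt, lam, mu, y1, y2, pi0=0.5):
    """pi[k] approximates pi(t_k, theta) = P(Y(t_k) = y1 | X_s, s <= t_k)."""
    b = y1 - y2
    pi = np.empty(len(dX) + 1)
    pi[0] = p = pi0
    for k, dx in enumerate(dX):
        m = y2 + b * p                                   # m(t, theta)
        drift = mu - (lam + mu) * p - p * (1 - p) * b * m
        p += drift * dt + p * (1 - p) * b * dx
        p = min(max(p, 1e-10), 1.0 - 1e-10)
        pi[k + 1] = p
    return pi
```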

Here $P^{(t)}_0$ is the measure corresponding to $X^t$ with $Y(s)\equiv 0$. The MLE-process $\hat\vartheta_{t,T}$, $0<t\le T$, is defined by the equation

$$L\bigl(\hat\vartheta_{t,T},X^t\bigr)=\sup_{\vartheta\in\Theta}L(\vartheta,X^t),\qquad 0<t\le T. \qquad (3)$$

It is known that in the one-dimensional case ($d=1$, $\lambda=\mu=\vartheta$) the MLE $\hat\vartheta_{T,T}=\hat\vartheta_T$ is consistent, asymptotically normal,

$$\sqrt T\,\bigl(\hat\vartheta_T-\vartheta\bigr)\Longrightarrow \mathcal N\bigl(0,\mathrm I(\vartheta)^{-1}\bigr),$$

and asymptotically efficient (see [3], [7]). Here $\mathrm I(\vartheta)$ is the Fisher information. Note that the construction of the MLE-process $(\hat\vartheta_{t,T},\ 0<t\le T)$ according to (3) and (2) is a computationally difficult problem, because we need to solve the family of equations (2) for all $\vartheta\in\Theta$ and (3) for all $t\in(0,T]$.

We propose the following construction. First we study an estimator $\hat\vartheta_T$ of the method of moments and show that this estimator is $\sqrt T$-consistent, i.e.,

$$E_\vartheta\bigl|\hat\vartheta_T-\vartheta\bigr|^2\le \frac CT.$$

Then, using the estimator $\hat\vartheta_{T^\delta}$ obtained from the observations on the learning interval $[0,T^\delta]$, where $1/2<\delta<1$, we introduce the one-step MLE-process

$$\vartheta^\star_{t,T}=\hat\vartheta_{T^\delta}+t^{-1/2}\,\mathrm I_t\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\Delta_t\bigl(\hat\vartheta_{T^\delta},X^t\bigr),\qquad T^\delta\le t\le T.$$

Here the empirical Fisher information matrix

$$\mathrm I_t(\vartheta)=\frac1t\int_{T^\delta}^t \dot m(\vartheta,s)\,\dot m(\vartheta,s)^*\,ds\longrightarrow \mathrm I(\vartheta),\qquad t\to\infty,\ \ T^\delta=o(t),$$

and the vector score-function process is

$$\Delta_t\bigl(\vartheta,X^t\bigr)=\frac1{\sqrt t}\int_{T^\delta}^t \dot m(\vartheta,s)\,\bigl[dX_s-m(\vartheta,s)\,ds\bigr].$$

Here and in the sequel the dot denotes differentiation with respect to the parameter $\vartheta$, so that $\dot m(\vartheta,t)$ is the column vector of the derivatives $\dot m_\lambda(\vartheta,t)$ and $\dot m_\mu(\vartheta,t)$.
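The one-step update is easy to implement once the filter is available. In the hedged sketch below the derivatives $\dot m_\lambda,\dot m_\mu$ are approximated by central finite differences of the filter rather than by the exact derivative equations used later in the paper, and the integrals start at $0$ instead of $T^\delta$ for brevity; `wonham_filter` is the sketch given above and `eps` is an illustrative step.

```python
# Hedged sketch of the one-step MLE: finite differences replace the exact
# derivative equations; integration starts at 0 rather than T^delta.
import numpy as np

def one_step_mle(dX, dt, theta_prelim, y1, y2, eps=1e-4):
    lam0, mu0 = theta_prelim
    b = y1 - y2

    def cond_mean(lam, mu):            # m(theta, t_k) on the grid
        return y2 + b * wonham_filter(dX, dt, lam, mu, y1, y2)[:-1]

    m0 = cond_mean(lam0, mu0)
    md = np.stack([                    # approximates m-dot = (m_lam, m_mu)
        (cond_mean(lam0 + eps, mu0) - cond_mean(lam0 - eps, mu0)) / (2 * eps),
        (cond_mean(lam0, mu0 + eps) - cond_mean(lam0, mu0 - eps)) / (2 * eps),
    ])
    t = len(dX) * dt
    info = (md @ md.T) * dt / t        # empirical Fisher matrix I_t
    score = md @ (dX - m0 * dt) / np.sqrt(t)
    return np.asarray(theta_prelim) + np.linalg.solve(info, score) / np.sqrt(t)
```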

The estimator $\vartheta^\star_{t,T}$ is in some sense asymptotically efficient. In particular, for $\vartheta^\star_T=\vartheta^\star_{T,T}$ we have

$$\sqrt T\,\bigl(\vartheta^\star_T-\vartheta\bigr)\Longrightarrow\mathcal N\bigl(0,\mathrm I(\vartheta)^{-1}\bigr),$$

i.e., it is asymptotically equivalent to the MLE. Note that the calculation of the estimator $\vartheta^\star_{t,T}$ for all $t\in[T^\delta,T]$ requires the solution of equation (2) for one value $\vartheta=\hat\vartheta_{T^\delta}$ only.

Recall as well the well-known properties of the (stationary) telegraph process $Y(t)$, $t\ge0$.

1. The stationary distribution of the process $Y(t)$ is

$$P_\vartheta\{Y(t)=y_1\}=\frac{\mu}{\lambda+\mu},\qquad P_\vartheta\{Y(t)=y_2\}=\frac{\lambda}{\lambda+\mu}. \qquad (4)$$

2. Let us denote $P_{ij}(t)=P_\vartheta\{Y(t)=y_j\mid Y(0)=y_i\}$; then, solving the Kolmogorov equations, we obtain

$$P_{11}(t)=\frac{\mu}{\lambda+\mu}+\frac{\lambda}{\lambda+\mu}e^{-(\lambda+\mu)t},\qquad P_{12}(t)=\frac{\lambda}{\lambda+\mu}-\frac{\lambda}{\lambda+\mu}e^{-(\lambda+\mu)t},$$
$$P_{21}(t)=\frac{\mu}{\lambda+\mu}-\frac{\mu}{\lambda+\mu}e^{-(\lambda+\mu)t},\qquad P_{22}(t)=\frac{\lambda}{\lambda+\mu}+\frac{\mu}{\lambda+\mu}e^{-(\lambda+\mu)t}. \qquad (5)$$

It follows from (4) and (5) that

$$K(s)=E_\vartheta\,Y(t)Y(t+s)=\frac{(y_1\mu+y_2\lambda)^2}{(\lambda+\mu)^2}+(y_2-y_1)^2\frac{\lambda\mu}{(\lambda+\mu)^2}\,e^{-(\lambda+\mu)s}=\bar Y^2+D\,e^{-(\lambda+\mu)s}, \qquad (6)$$

where

$$\bar Y=E_\vartheta\,Y(t)=\frac{y_1\mu+y_2\lambda}{\lambda+\mu},\qquad D=(y_2-y_1)^2\frac{\lambda\mu}{(\lambda+\mu)^2}. \qquad (7)$$

3. Let $\mathcal F^Y_t$, $t\ge0$, be the family of σ-algebras induced by the events $\{Y(s)=y_i,\ 0\le s\le t,\ i=1,2\}$. Then it follows from (5) that for some constant $K>0$, all $A<T$ and all $s>0$, $t>0$ the inequality

$$\Bigl|E_\vartheta\bigl\{Y(s+T)Y(t+T)\mid\mathcal F^Y_A\bigr\}-E_\vartheta\bigl[Y(s)Y(t)\bigr]\Bigr|\le K\,e^{-(\lambda+\mu)(T-A)} \qquad (8)$$

holds.
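The covariance formula (6) is easy to check by simulation. The snippet below (illustrative values, reusing `simulate_hidden_telegraph` sketched in the Introduction) compares the empirical covariance with $\bar Y^2+D\,e^{-(\lambda+\mu)s}$.

```python
# Numerical check of (6)-(7), reusing simulate_hidden_telegraph() from above.
import numpy as np

lam, mu, y1, y2, dt = 1.0, 2.0, -1.0, 1.0, 0.01
_, _, Y = simulate_hidden_telegraph(T=20000.0, dt=dt, lam=lam, mu=mu,
                                    y1=y1, y2=y2)
Ybar = (y1 * mu + y2 * lam) / (lam + mu)
D = (y2 - y1) ** 2 * lam * mu / (lam + mu) ** 2
for s in (0.1, 0.5, 1.0):
    k = int(round(s / dt))
    print(s, np.mean(Y[:-k] * Y[k:]), Ybar ** 2 + D * np.exp(-(lam + mu) * s))
```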

3 Method of moments estimator

Let us consider the problem of constructing $\sqrt T$-consistent estimators of the parameter $\vartheta$ by the method of moments. Recall that we observe in continuous time the stochastic process

$$dX_t=Y(t)\,dt+dW_t,\qquad X_0,\quad 0\le t\le T, \qquad (9)$$

where $W_t$, $t\ge0$, is a standard Wiener process, $X_0$ is an initial value independent of $(W_t)$, and $Y(t)=Y(t,\omega)$ is a stationary Markov process with the two states $y_1$ and $y_2$ and infinitesimal rate matrix

$$\begin{pmatrix}-\lambda&\lambda\\ \mu&-\mu\end{pmatrix}.$$

The processes $Y(t)$, $t\ge0$, and $W_t$, $t\ge0$, are independent. We suppose for simplicity that $T$ is an integer.

Introduce the condition

$$\lambda\in[c_0,c_1],\qquad \mu\in[c_0,c_1], \qquad (10)$$

where $c_0$ and $c_1$ are some positive constants.

To introduce the estimators we need the following notation. The function

$$\Phi(x)=\frac1x\Bigl(1-\frac1x\bigl(1-e^{-x}\bigr)\Bigr). \qquad (11)$$

The statistic

$$\zeta_T=\frac1T\sum_{i=0}^{T-1}\bigl[X_{i+1}-X_i\bigr]^2-1. \qquad (12)$$

The random variable $\alpha_T$, defined as a solution of the equation

$$\zeta_T=\Bigl(\frac{X_T}{T}\Bigr)^2+2\,\eta_T^2\,\Phi(\alpha_T), \qquad (13)$$

where

$$\eta_T^2=\Bigl(\frac{X_T}{T}-y_1\Bigr)\Bigl(y_2-\frac{X_T}{T}\Bigr). \qquad (14)$$

The event $\mathcal A_T$ that the equation (13) has a solution.
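In code, the quantities (11), (12) and (14) take only a few lines. The sketch below assumes, for convenience, that the observations have been recorded at the integer times $i=0,1,\dots,T$; the series branch of `Phi` is a numerical detail for very small arguments, not part of the paper.

```python
# The function (11) and the statistics (12), (14); X_unit[i] is X at time i.
import numpy as np

def Phi(x):
    if x < 1e-6:
        return 0.5 - x / 6.0          # series 1/2! - x/3! + ... (see Lemma 3)
    return (1.0 - (1.0 - np.exp(-x)) / x) / x

def zeta(X_unit):
    inc = np.diff(X_unit)             # unit increments X_{i+1} - X_i
    return np.mean(inc ** 2) - 1.0

def eta_sq(X_unit, T, y1, y2):
    xbar = X_unit[-1] / T
    return (xbar - y1) * (y2 - xbar)
```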

The random variable

$$\beta_T=\alpha_T\,\mathbb 1_{\{\mathcal A_T\}}+(c_0+c_1)\,\mathbb 1_{\{\mathcal A_T^c\}}. \qquad (15)$$

The method of moments estimator is $\hat\vartheta_T=(\hat\lambda_T,\hat\mu_T)$, where

$$\hat\lambda_T=\frac{X_T/T-y_1}{y_2-y_1}\,\beta_T,\qquad \hat\mu_T=\frac{y_2-X_T/T}{y_2-y_1}\,\beta_T. \qquad (16)$$

The properties of these estimators are given in the following theorem.

Theorem 1. Let the condition (10) hold. Then for the estimators (16) and some constant $C>0$ we have, for all $T>0$,

$$E_\vartheta\bigl[T\bigl(\hat\lambda_T-\lambda\bigr)^2\bigr]<C,\qquad E_\vartheta\bigl[T\bigl(\hat\mu_T-\mu\bigr)^2\bigr]<C. \qquad (17)$$

The proof proceeds in several steps.
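Putting the pieces together, here is a sketch of the estimator (16): equation (13) is solved for $\alpha_T$ by bisection, which is justified because $\Phi$ is strictly decreasing (Lemma 3 below), with the fallback value $c_0+c_1$ of (15) on the event $\mathcal A_T^c$. `Phi`, `zeta` and `eta_sq` are the helpers sketched above, and the values of `c0`, `c1` are illustrative.

```python
# Sketch of the method-of-moments estimator (16), using Phi, zeta, eta_sq
# from the previous sketch; c0, c1 are illustrative bounds as in (10).
import numpy as np

def moment_estimator(X_unit, T, y1, y2, c0=0.1, c1=5.0):
    xbar = X_unit[-1] / T
    e2 = max(eta_sq(X_unit, T, y1, y2), c0**2 / (8 * c1**2))   # cf. (21)
    target = (zeta(X_unit) - xbar**2) / (2.0 * e2)  # = Phi(alpha_T) by (13)
    lo, hi = 2 * c0, 2 * c1           # lambda + mu lies in this interval
    if Phi(hi) <= target <= Phi(lo):  # event A_T: a root exists
        for _ in range(60):           # bisection on the decreasing Phi
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if Phi(mid) > target else (lo, mid)
        beta = 0.5 * (lo + hi)
    else:                             # event A_T^c: fallback of (15)
        beta = c0 + c1
    lam_hat = (xbar - y1) / (y2 - y1) * beta
    mu_hat = (y2 - xbar) / (y2 - y1) * beta
    return lam_hat, mu_hat
```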

The next lemma gives a $\sqrt T$-consistent estimator of $\bar Y$ (see (7)).

Lemma 1. Let the condition (10) be fulfilled. Then the estimator $X_T/T$ is uniformly consistent for $\bar Y$ and for any $T>0$

$$E_\vartheta\Bigl(\frac{X_T}{T}-\bar Y\Bigr)^2\le\frac CT, \qquad (18)$$

where the constant $C>0$ does not depend on $\vartheta$.

Proof. Making use of (6) we obtain the relations

$$E_\vartheta\Bigl(\frac{X_T}{T}-\bar Y\Bigr)^2=E_\vartheta\Bigl(\frac1T\int_0^T\bigl[Y(t)-\bar Y\bigr]dt+\frac{W_T}T\Bigr)^2=E_\vartheta\Bigl(\frac1T\int_0^T\bigl[Y(t)-\bar Y\bigr]dt\Bigr)^2+\frac1T$$
$$\le\frac1T\Bigl(1+\frac{2\lambda\mu}{(\lambda+\mu)^3}\,(y_2-y_1)^2\Bigr)\le\frac1T\Bigl(1+\frac{c_1^2\,(y_2-y_1)^2}{4c_0^3}\Bigr).$$

Corollary. The existence of $\sqrt T$-consistent estimators for $\lambda/(\lambda+\mu)$ and $\mu/(\lambda+\mu)$ follows from (7) and Lemma 1. Indeed, from the equality

$$\bar Y=\frac{\lambda}{\lambda+\mu}\,y_2+\frac{\mu}{\lambda+\mu}\,y_1$$

and Lemma 1 we obtain

$$E_\vartheta\Bigl[T\Bigl(\frac{X_T/T-y_1}{y_2-y_1}-\frac{\lambda}{\lambda+\mu}\Bigr)^2\Bigr]<C,\qquad E_\vartheta\Bigl[T\Bigl(\frac{y_2-X_T/T}{y_2-y_1}-\frac{\mu}{\lambda+\mu}\Bigr)^2\Bigr]<C. \qquad (19)$$

The statistic

$$X_T=\int_0^TY(t)\,dt+W_T$$

is the sum of a bounded (a.s.) random variable and an independent Gaussian random variable with parameters $(0,T)$. Hence for $\eta_T^2$ defined in (14) we can write the estimate

$$E_\vartheta\bigl[T\bigl(\eta_T^2-D\bigr)^2\bigr]<C, \qquad (20)$$

where the constant $C>0$ does not depend on $T$ and $\vartheta$, and the constant $D$ is defined in (7). Note that from the condition (10) we have

$$\frac{\lambda\mu}{(\lambda+\mu)^2}\ge\frac{c_0^2}{4c_1^2},$$

and we easily obtain the estimate (20) as well for the truncated estimator

$$\tilde\eta_T^2=\max\Bigl\{\eta_T^2,\ \frac{c_0^2}{8c_1^2}\Bigr\}. \qquad (21)$$

Lemma 2. The following equality holds:

$$E_\vartheta\,\zeta_T=\bar Y^2+2D\,\Phi(\lambda+\mu), \qquad (22)$$

and under the condition (10) we have as well

$$E_\vartheta\bigl[T\bigl(\zeta_T-E_\vartheta\zeta_T\bigr)^2\bigr]<C. \qquad (23)$$

Proof. From the stationarity of the process $Y(t)$ and (6) we obtain

$$E_\vartheta\,\zeta_T=E_\vartheta\bigl[X_1-X_0\bigr]^2-1=E_\vartheta\int_0^1\!\!\int_0^1Y(s)Y(t)\,ds\,dt+1-1=\bar Y^2+2D\,\Phi(\lambda+\mu). \qquad (24)$$

Denote

$$\gamma_i=\int_i^{i+1}Y(t)\,dt,\qquad \Delta W(i)=W_{i+1}-W_i.$$

Further, from the equality

$$\zeta_T-E_\vartheta\zeta_T=\frac1T\sum_{i=0}^{T-1}\bigl(\gamma_i^2-E_\vartheta\gamma_i^2\bigr)+\frac2T\sum_{i=0}^{T-1}\gamma_i\,\Delta W(i)+\frac1T\sum_{i=0}^{T-1}\bigl(\Delta W(i)^2-1\bigr)$$

follows the estimate

$$E_\vartheta\bigl(\zeta_T-E_\vartheta\zeta_T\bigr)^2\le3\,E_\vartheta\Bigl(\frac1T\sum_{i=0}^{T-1}\bigl(\gamma_i^2-E_\vartheta\gamma_i^2\bigr)\Bigr)^2+12\,E_\vartheta\Bigl(\frac1T\sum_{i=0}^{T-1}\gamma_i\,\Delta W(i)\Bigr)^2+3\,E_\vartheta\Bigl(\frac1T\sum_{i=0}^{T-1}\bigl(\Delta W(i)^2-1\bigr)\Bigr)^2$$
$$:=3J_1+12J_2+3J_3. \qquad (25)$$

From the stationarity of $Y(t)$ we obtain

$$J_1=\frac1{T^2}E_\vartheta\Bigl(\sum_{i=0}^{T-1}\bigl(\gamma_i^2-E_\vartheta\gamma_i^2\bigr)\Bigr)^2=\frac1{T^2}\sum_{i,j=0}^{T-1}E_\vartheta\bigl\{\bigl(\gamma_i^2-E_\vartheta\gamma_i^2\bigr)\bigl(\gamma_j^2-E_\vartheta\gamma_j^2\bigr)\bigr\}, \qquad (26)$$

and each expectation here can be written as a fourfold integral over $[0,1]^4$ of

$$E_\vartheta\Bigl\{Y(s)Y(t)\Bigl(E_\vartheta\bigl[Y(|i-j|+s_1)Y(|i-j|+t_1)\mid\mathcal F^Y_1\bigr]-E_\vartheta\bigl[Y(s_1)Y(t_1)\bigr]\Bigr)\Bigr\}\,ds\,dt\,ds_1\,dt_1.$$

The estimate (8) allows us to write

$$\Bigl|E_\vartheta\bigl[Y(|i-j|+s_1)Y(|i-j|+t_1)\mid\mathcal F^Y_1\bigr]-E_\vartheta\,Y(s_1)Y(t_1)\Bigr|\le K\,e^{-(\lambda+\mu)|j-i|}.$$

From this estimate, (26) and (10) we obtain

$$J_1\le\frac K{T^2}\sum_{i,j=0}^{T-1}e^{-(\lambda+\mu)|j-i|}\le\frac{K_1}T.$$

The following estimates are evident:

$$J_2=\frac1{T^2}E_\vartheta\Bigl(\sum_{i=0}^{T-1}\gamma_i\,\Delta W(i)\Bigr)^2=\frac1{T^2}\sum_{i=0}^{T-1}E_\vartheta\gamma_i^2=\frac{E_\vartheta\gamma_0^2}T\le\frac KT,$$

$$J_3=\frac1{T^2}E_\vartheta\Bigl(\sum_{i=0}^{T-1}\bigl[\Delta W(i)^2-1\bigr]\Bigr)^2=\frac1T\,E_\vartheta\bigl[\Delta W(0)^2-1\bigr]^2\le\frac KT.$$

The second proposition of the lemma follows from these estimates and (25).

Lemma 3. The function $\Phi(x)$ (see (11)) has the following properties:

$$\lim_{x\to0+}\Phi(x)=\frac12, \qquad (27)$$
$$\lim_{x\to\infty}\Phi(x)=0, \qquad (28)$$
$$\Phi'(x)<0\quad\text{for }x>0. \qquad (29)$$

Proof. From (11) we obtain the representations

$$\Phi(x)=\frac1{2!}-\frac x{3!}+\frac{x^2}{4!}-\frac{x^3}{5!}+\dots,\qquad \Phi'(x)=-\frac1{3!}+\frac{2x}{4!}-\frac{3x^2}{5!}+\frac{4x^3}{6!}-\dots,$$

which allow us to verify the limits (27) and (28), as well as the estimate (29) for $x<2$. For $x\ge2$ this estimate follows from the explicit expression for the derivative

$$\Phi'(x)=-\frac1{x^2}+\frac2{x^3}-\Bigl(\frac1{x^2}+\frac2{x^3}\Bigr)e^{-x}.$$
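The properties of Lemma 3 are easy to confirm numerically; the following illustrative check compares the closed-form derivative with a central difference and verifies monotonicity on an arbitrary grid.

```python
# Illustrative numerical check of Lemma 3.
import numpy as np

def Phi(x):
    return (1.0 - (1.0 - np.exp(-x)) / x) / x

def dPhi(x):
    return -1.0 / x**2 + 2.0 / x**3 - (1.0 / x**2 + 2.0 / x**3) * np.exp(-x)

x = np.linspace(0.05, 20.0, 400)
h = 1e-6
assert np.all(np.diff(Phi(x)) < 0)                        # (29): decreasing
assert np.allclose(dPhi(x), (Phi(x + h) - Phi(x - h)) / (2 * h), atol=1e-4)
print(Phi(1e-4), Phi(100.0))                # ~1/2 and ~0, cf. (27) and (28)
```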

Let us consider the equation (13) for $\alpha_T$, where $\zeta_T$ and $\eta_T^2$ are defined in (12) and (14) respectively. Due to Lemma 3 this equation has at most one solution. Recall that $\mathcal A_T$ is the event that the equation (13) has a solution, and consider the statistic $\beta_T$ defined in (15), where $c_0,c_1$ are the constants from the condition (10).

Lemma 4. Under the condition (10) the statistic $\beta_T$ is a $\sqrt T$-consistent estimator of $\lambda+\mu$. Moreover, for some constant $C>0$ which does not depend on $T$ and $\vartheta$ we have

$$E_\vartheta\bigl[T\bigl(\beta_T-(\lambda+\mu)\bigr)^2\bigr]<C. \qquad (30)$$

Proof. It follows from Lemmas 1 and 2 that

$$\zeta_T=\bar Y^2+2D\,\Phi(\lambda+\mu)+\varepsilon_1(T). \qquad (31)$$

Here and below the quantities $\varepsilon_i(T)$, $i=1,\dots$, satisfy the estimates $E_\vartheta\bigl(\sqrt T\,\varepsilon_i(T)\bigr)^2<C$.

By Lemma 1, the estimates (20), (21) and the boundedness of $\Phi(x)$ we obtain as well the relation

$$\zeta_T=\bar Y^2+2\tilde\eta_T^2\,\Phi(\lambda+\mu)+\varepsilon_2(T). \qquad (32)$$

On the event $\mathcal A_T$ it follows from (13) and (32) that

$$\tilde\eta_T^2\,\Phi(\alpha_T)=\tilde\eta_T^2\,\Phi(\lambda+\mu)+\varepsilon_3(T).$$

This relation, Lemma 3 and the separation of $\tilde\eta_T^2$ from zero by a positive constant (see the Corollary to Lemma 1) yield, for $\omega\in\mathcal A_T$,

$$\Phi(\alpha_T)-\Phi(\lambda+\mu)=\varepsilon_4(T).$$

Therefore from Lemma 3 we obtain

$$E_\vartheta\bigl\{\mathbb 1_{\{\mathcal A_T\}}\,T\bigl(\beta_T-(\lambda+\mu)\bigr)^2\bigr\}<C. \qquad (33)$$

If $\omega\in\mathcal A_T^c$, then the equation

$$\bar Y^2+2D\,\Phi(\lambda+\mu)+\varepsilon_3(T)=\Bigl(\frac{X_T}T\Bigr)^2+2\tilde\eta_T^2\,\Phi(x) \qquad (34)$$

has no solution $x\in[2c_0,2c_1]$. It follows from (34), Lemma 1 and the Corollary that then the equation

$$\Phi(x)=\Phi(\lambda+\mu)+\varepsilon_4(T)$$

has no solution for $x\in[2c_0,2c_1]$. Hence we can write $\mathcal A_T^c\subset\{|\varepsilon_4(T)|>\alpha\}=:\mathcal B_T$ for some positive constant $\alpha$ which does not depend on $T$. This allows us to write

$$P\bigl(\mathcal A_T^c\bigr)\le P\bigl(\mathcal B_T\bigr)=P\bigl\{|\varepsilon_4(T)|>\alpha\bigr\}\le\frac{E_\vartheta\bigl(\sqrt T\,\varepsilon_4(T)\bigr)^2}{\alpha^2T}\le\frac C{\alpha^2T}.$$

This estimate and (33) prove Lemma 4.

Proof of Theorem 1. The obtained results allow us to prove that the estimators defined in (16) are $\sqrt T$-consistent. Indeed, from the obvious equality

$$\hat\lambda_T-\lambda=\beta_T\Bigl(\frac{X_T/T-y_1}{y_2-y_1}-\frac{\lambda}{\lambda+\mu}\Bigr)+\frac{\lambda}{\lambda+\mu}\bigl(\beta_T-(\lambda+\mu)\bigr)$$

and (19) we obtain by Lemma 4 the estimate

$$E_\vartheta\bigl[T\bigl(\hat\lambda_T-\lambda\bigr)^2\bigr]\le2E_\vartheta\Bigl[\beta_T^2\,T\Bigl(\frac{X_T/T-y_1}{y_2-y_1}-\frac{\lambda}{\lambda+\mu}\Bigr)^2\Bigr]+2\Bigl(\frac{\lambda}{\lambda+\mu}\Bigr)^2E_\vartheta\bigl[T\bigl(\beta_T-(\lambda+\mu)\bigr)^2\bigr]<C,$$

since $\beta_T$ is bounded. The second inequality in (17) can be proved in the same way. Therefore the estimator $\hat\vartheta_T=(\hat\lambda_T,\hat\mu_T)$ is $\sqrt T$-consistent.

4 One-step MLE

Our goal is to construct an asymptotically efficient estimator-process of the parameter $\vartheta=(\lambda,\mu)\in\Theta$. We do it in two steps. First we obtain from the observations $X^{T^\delta}=(X_t,\ 0\le t\le T^\delta)$ on the learning interval $[0,T^\delta]$ the method of moments estimator $\hat\vartheta_{T^\delta}=(\hat\lambda_{T^\delta},\hat\mu_{T^\delta})$ studied in the preceding section; here $\delta\in(1/2,1)$. By Theorem 1 this estimator satisfies the condition

$$\sup_{\vartheta\in\Theta}T^\delta\,E_\vartheta\bigl|\hat\vartheta_{T^\delta}-\vartheta\bigr|^2\le C,$$

where the constant $C>0$ does not depend on $T$ and $\vartheta\in\Theta$. Recall that $\Theta=(c_0,c_1)\times(c_0,c_1)$.

Introduce the additional condition

$(\mathcal M_N)$: for some $N\ge2$,

$$\frac{2c_0}{(y_1-y_2)^2}-\frac{N+9}4>0. \qquad (35)$$

Having the preliminary estimator $\hat\vartheta_{T^\delta}$, we propose the one-step MLE-process based on the following modification of the score-function:

$$\Delta_t\bigl(\vartheta,X^t\bigr)=\frac1{\sqrt t}\int_{T^\delta}^t\dot m(\vartheta,s)\bigl[dX_s-m(\vartheta,s)\,ds\bigr],$$

$$\vartheta^\star_{t,T}=\hat\vartheta_{T^\delta}+t^{-1/2}\,\mathrm I_t\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\Delta_t\bigl(\hat\vartheta_{T^\delta},X^t\bigr),\qquad t\in[T^\delta,T]. \qquad (36)$$

Here the vector

$$\dot m(\vartheta,s)=(y_1-y_2)\,\dot\pi(s,\vartheta),\qquad \dot\pi(t,\vartheta)=\Bigl(\frac{\partial\pi(t,\vartheta)}{\partial\lambda},\ \frac{\partial\pi(t,\vartheta)}{\partial\mu}\Bigr)^{\!*},$$

and the empirical Fisher information matrix $\mathrm I_t(\vartheta)$ is

$$\mathrm I_t(\vartheta)=\frac1t\int_{T^\delta}^t\dot m(\vartheta,s)\,\dot m(\vartheta,s)^*\,ds\longrightarrow\mathrm I(\vartheta)$$

as $t\to\infty$, by the law of large numbers. Here $\mathrm I(\vartheta)$ is the Fisher information matrix

$$\mathrm I(\vartheta)=(y_1-y_2)^2\,E_\vartheta\bigl[\dot\pi(s,\vartheta)\,\dot\pi(s,\vartheta)^*\bigr].$$

Let us change the variable $\tau=t/T\in[0,1]$ and introduce the random process $\vartheta^\star(\tau)$, $\tau_\delta\le\tau\le1$, where $\vartheta^\star(\tau)=\vartheta^\star_{\tau T,T}$ and $\tau_\delta=T^{\delta-1}$.

Theorem 2. Suppose that $\vartheta_0\in\Theta$, $\delta\in(1/2,1)$ and the condition $(\mathcal M_N)$ holds. Then the one-step MLE-process is consistent: for any $\nu>0$ and any $\tau\in(0,1]$

$$P_{\vartheta_0}\bigl\{\bigl|\vartheta^\star(\tau)-\vartheta_0\bigr|>\nu\bigr\}\longrightarrow0, \qquad (37)$$

and it is asymptotically normal:

$$\sqrt{\tau T}\,\bigl(\vartheta^\star(\tau)-\vartheta_0\bigr)\Longrightarrow\mathcal N\bigl(0,\mathrm I(\vartheta_0)^{-1}\bigr). \qquad (38)$$

Proof. Let us denote the partial derivatives

$$\dot\pi_\lambda(t,\vartheta)=\frac{\partial\pi(t,\vartheta)}{\partial\lambda},\qquad \dot\pi_\mu(t,\vartheta)=\frac{\partial\pi(t,\vartheta)}{\partial\mu},\qquad \ddot\pi_{\lambda\lambda}(t,\vartheta)=\frac{\partial^2\pi(t,\vartheta)}{\partial\lambda^2},$$

and so on.

Lemma 5. Suppose that $\vartheta\in\Theta$ and $N>1$. If the condition

$$\frac{2c_0}{(y_1-y_2)^2}>\frac{N+3}2 \qquad (39)$$

holds, then

$$\sup_{\vartheta\in\Theta}E_\vartheta\bigl(\bigl|\dot\pi_\lambda(t,\vartheta)\bigr|^N+\bigl|\dot\pi_\mu(t,\vartheta)\bigr|^N\bigr)<C_1, \qquad (40)$$

and if the condition

$$\frac{2c_0}{(y_1-y_2)^2}>\frac{N+9}4 \qquad (41)$$

holds, then

$$\sup_{\vartheta\in\Theta}E_\vartheta\bigl(\bigl|\ddot\pi_{\lambda\lambda}(t,\vartheta)\bigr|^N+\bigl|\ddot\pi_{\lambda\mu}(t,\vartheta)\bigr|^N+\bigl|\ddot\pi_{\mu\mu}(t,\vartheta)\bigr|^N\bigr)<C_2. \qquad (42)$$

Here the constants $C_1>0$, $C_2>0$ do not depend on $t$.

14 and if the condition holds, then c N +9 > y 1 y ) 4 41) supe ϑ π λ,λ t,ϑ) N + π λ,µ t,ϑ) N + π µ,µ t,ϑ) N) < C. 4) ϑ Θ Here the constants C 1 >,C > do not depend on t. Proof. For simplicity of exposition we write π λ t,ϑ) = π λ, π µ t,ϑ) = π µ, πt,ϑ) = π. By the formal differentiation of dπ = [µ λ+µ)π π1 π)y 1 y )y +y 1 y )π)]dt we obtain the equations +π1 π)y 1 y ) dx t. 43) d π λ = πdt π λ [λ+µ+1 π)y 1 y )[y +y 1 y )π] +π1 π)y 1 y ) ] dt+ π λ 1 π)y 1 y ) dx t), 44) d π µ =[1 π]dt π µ [λ+µ+1 π)y 1 y )[y +y 1 y )π] +π1 π)y 1 y ) ] dt+ π µ 1 π)y 1 y ) dx t). 45) If we denote the true value of the parameters by ϑ and πt,ϑ ) = π o etc., then these equations for ϑ = ϑ become [ d π λ o = πo dt π λ o λ +µ +π o 1 π o )y 1 y ) ] dt + π λ o 1 πo )y 1 y ) d W t), 46) here d π o µ =[1 π o ]dt π o µ[ λ +µ +π o 1 π o )y 1 y ) ] dt + π o µ1 π)y 1 y ) d W t), 47) dπ o = [µ λ +µ )π o ]dt+π o 1 π o )y 1 y ) d W t). 48) his system of linear for π λ and π µ equations can be re-written as follows x t = π o,y t = π o λ,z t = π o µ,a = λ +µ,b = y 1 y ) dx t = [µ ax t ]dt+bx t 1 x t )d W t, 49) dy t = x t dt [ a+b x t 1 x t ) ] y t dt+b1 x t )y t d W t, 5) dz t = [1 x t ]dt [ a+b x t 1 x t ) ] z t dt+b1 x t )z t d W t. 51) 14

Note that since $\lambda_0>0$ and $\mu_0>0$ the process $\pi(t,\vartheta_0)=x_t\in(0,1)$ is ergodic, with the two reflecting boundaries $0$ and $1$, and its invariant density is

$$f(\vartheta_0,x)=\frac{[x(1-x)]^{-2}}{G(\vartheta_0)}\Bigl(\frac x{1-x}\Bigr)^{\gamma(\mu_0-\lambda_0)}\exp\Bigl\{-\frac{\gamma\mu_0}x-\frac{\gamma\lambda_0}{1-x}\Bigr\},$$

where we denoted $\gamma=2(y_1-y_2)^{-2}$ and $G(\vartheta_0)$ is the normalizing constant

$$G(\vartheta_0)=\int_0^1[x(1-x)]^{-2}\Bigl(\frac x{1-x}\Bigr)^{\gamma(\mu_0-\lambda_0)}\exp\Bigl\{-\frac{\gamma\mu_0}x-\frac{\gamma\lambda_0}{1-x}\Bigr\}dx.$$

The processes $y_t$ and $z_t$ have the explicit expressions

$$y_t=-\int_0^t\exp\Bigl\{-\int_v^t\Bigl[a+b^2x_s(1-x_s)-\frac{b^2}2(1-2x_s)^2\Bigr]ds+b\int_v^t(1-2x_s)\,d\bar W_s\Bigr\}\,x_v\,dv, \qquad (52)$$

$$z_t=\int_0^t\exp\Bigl\{-\int_v^t\Bigl[a+b^2x_s(1-x_s)-\frac{b^2}2(1-2x_s)^2\Bigr]ds+b\int_v^t(1-2x_s)\,d\bar W_s\Bigr\}\,[1-x_v]\,dv. \qquad (53)$$

Let us put $\bar x_s=\tfrac12-x_s$. Then

$$x_s(1-x_s)-\frac12(1-2x_s)^2=\frac14-3\bar x_s^2,$$

and, since $|x_v|\le1$,

$$|y_t|\le\int_0^te^{-(a+b^2/4)(t-v)}\exp\Bigl\{3b^2\int_v^t\bar x_s^2\,ds+2b\int_v^t\bar x_s\,d\bar W_s\Bigr\}dv.$$
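The invariant density above can be evaluated directly. The sketch below (illustrative values) normalizes it by the trapezoidal rule, working on the log scale to avoid overflow, and checks that the stationary mean of the filter is $\mu_0/(\lambda_0+\mu_0)$, as it must be by (4).

```python
# Numerical evaluation of the invariant density of the filter (illustrative).
import numpy as np

def invariant_density(lam, mu, y1, y2, npts=20001):
    gamma = 2.0 / (y1 - y2) ** 2
    x = np.linspace(1e-6, 1.0 - 1e-6, npts)
    logf = (-2.0 * (np.log(x) + np.log1p(-x))
            + gamma * (mu - lam) * (np.log(x) - np.log1p(-x))
            - gamma * mu / x - gamma * lam / (1.0 - x))
    f = np.exp(logf - logf.max())     # stabilize before normalizing
    f /= np.trapz(f, x)               # divides out the constant G(theta_0)
    return x, f

x, f = invariant_density(1.0, 2.0, -1.0, 1.0)
print(np.trapz(x * f, x))             # stationary mean, ~ mu/(lam+mu) = 2/3
```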

To estimate the moments $E_{\vartheta_0}|y_t|^N$ we use the Hölder inequality

$$\Bigl(\int_0^tf(v)g(v)\,dv\Bigr)^N\le\Bigl(\int_0^tf(v)^{N/(N-1)}\,dv\Bigr)^{N-1}\int_0^tg(v)^N\,dv$$

with $f(v)=\exp\{-a(t-v)\varepsilon\}$ and

$$g(v)=\exp\Bigl\{-\Bigl(a(1-\varepsilon)+\frac{b^2}4\Bigr)(t-v)+3b^2\int_v^t\bar x_s^2\,ds+2b\int_v^t\bar x_s\,d\bar W_s\Bigr\},$$

where $\varepsilon>0$. This yields the estimate

$$E_{\vartheta_0}|y_t|^N\le C(N,\varepsilon)\int_0^te^{-N(a(1-\varepsilon)+b^2/4)(t-v)}\,E_{\vartheta_0}\,e^{3Nb^2\int_v^t\bar x_s^2\,ds+2Nb\int_v^t\bar x_s\,d\bar W_s}\,dv,$$

where the constant $C(N,\varepsilon)>0$ does not depend on $t$. Further, we can write

$$E_{\vartheta_0}\exp\Bigl\{3Nb^2\int_v^t\bar x_s^2\,ds+2Nb\int_v^t\bar x_s\,d\bar W_s\Bigr\}$$
$$=E_{\vartheta_0}\Bigl(\exp\Bigl\{2Nb\int_v^t\bar x_s\,d\bar W_s-2N^2b^2\int_v^t\bar x_s^2\,ds\Bigr\}\exp\Bigl\{Nb^2(2N+3)\int_v^t\bar x_s^2\,ds\Bigr\}\Bigr)\le\exp\Bigl\{\frac{Nb^2(2N+3)}4(t-v)\Bigr\},$$

because $\bar x_s^2\le\frac14$ and

$$E_{\vartheta_0}\exp\Bigl\{2Nb\int_v^t\bar x_s\,d\bar W_s-2N^2b^2\int_v^t\bar x_s^2\,ds\Bigr\}=1.$$

Therefore,

$$E_{\vartheta_0}|y_t|^N\le C(N,\varepsilon)\int_0^te^{-N\bigl(a(1-\varepsilon)+\frac{b^2}4-\frac{b^2(2N+3)}4\bigr)(t-v)}\,dv=C(N,\varepsilon)\int_0^te^{-N\bigl(a(1-\varepsilon)-\frac{b^2(N+1)}2\bigr)(t-v)}\,dv.$$

We see that if

$$\frac{\lambda_0+\mu_0}{(y_1-y_2)^2}>\frac12+\frac N2,$$

then $E_{\vartheta_0}|y_t|^N\le C$. In particular, if in the condition (39) we put $N=2$ and choose a sufficiently small $\varepsilon>0$, then we obtain the estimate

$$\sup_{\vartheta_0\in\Theta}E_{\vartheta_0}\Bigl|\frac{\partial\pi(t,\vartheta_0)}{\partial\lambda}\Bigr|^2\le C, \qquad (54)$$

where the constant $C>0$ does not depend on $t$. We need as well to estimate the derivatives (44), (45) for values $\vartheta\ne\vartheta_0$. The equation for $\dot\pi_\lambda$ becomes

$$d\dot\pi_\lambda=-\pi\,dt-\dot\pi_\lambda\bigl[\lambda+\mu+(1-2\pi)(y_1-y_2)^2(\pi-\pi^o)+\pi(1-\pi)(y_1-y_2)^2\bigr]dt$$
$$\qquad\qquad+\dot\pi_\lambda(1-2\pi)(y_1-y_2)\,d\bar W_t. \qquad (55)$$

Hence, if we put $a=\lambda+\mu$, $y_t=\dot\pi_\lambda$, $x_t=\pi$, $x^o_t=\pi^o$ and $b=y_1-y_2$, then we obtain the equation

$$dy_t=-x_t\,dt-\bigl[a+b^2(1-2x_t)(x_t-x^o_t)+b^2x_t(1-x_t)\bigr]y_t\,dt+b(1-2x_t)\,y_t\,d\bar W_t.$$

The solution of this equation can be written explicitly as in (52), but with the additional term $-b^2(1-2x_t)(x_t-x^o_t)$ in the exponent. This term satisfies the inequality

$$\bigl|(1-2x_t)(x_t-x^o_t)\bigr|\le1.$$

Hence, if we repeat the above evaluation of $E_\vartheta|y_t|^N$, then for its boundedness we obtain the condition

$$\frac{\lambda+\mu}{(y_1-y_2)^2}>\frac32+\frac N2.$$

For the second derivative $\ddot\pi=\ddot\pi_{\lambda\lambda}(t,\vartheta)$ we obtain similar estimates of the moments as follows. Differentiating the equation above once more in $\lambda$ gives an equation of the form

$$d\ddot\pi=A(t)\,dt+B(t)\,d\bar W_t-\ddot\pi\bigl[a+C(t)\bigr]dt+\ddot\pi\,D(t)\,d\bar W_t$$

in obvious notation, where the coefficients $A(t)$, $B(t)$ are polynomial in $y_t$, $x_t$, $x^o_t$, and

$$C(t)=b^2(1-2x_t)(x_t-x^o_t)+b^2x_t(1-x_t),\qquad D(t)=b(1-2x_t).$$

Hence the solution is

$$\ddot\pi(t)=\int_0^t\exp\Bigl\{-\int_v^t\Bigl[a+C(s)-\frac{D(s)^2}2\Bigr]ds+\int_v^tD(s)\,d\bar W_s\Bigr\}\bigl[A(v)\,dv+B(v)\,d\bar W_v\bigr],$$

and we have the corresponding estimate

$$C(s)-\frac{D(s)^2}2=b^2(1-2x_s)(x_s-x^o_s)+b^2x_s(1-x_s)-\frac{b^2}2(1-2x_s)^2\ge-b^2+b^2\Bigl(\frac14-3\bar x_s^2\Bigr)\ge-\frac{3b^2}2,$$

because $\bar x_s^2\le\frac14$. Therefore, if

$$a-\frac{3b^2}2-\frac{(N+3)\,b^2}4>0,\qquad\text{i.e.}\qquad \frac{\lambda+\mu}{(y_1-y_2)^2}>\frac{N+9}4,$$

then we obtain $E_\vartheta\bigl|\ddot\pi_{\lambda\lambda}(t,\vartheta)\bigr|^N<C$. Hence under the condition (41) we have

$$\sup_{\vartheta\in\Theta}E_\vartheta\bigl|\ddot\pi_{\lambda\lambda}(t,\vartheta)\bigr|^N<C.$$

Similar estimates can be obtained for the other derivatives. Lemma 5 is proven.

Lemma 6. The solutions $(x_t,y_t,z_t)$ of the equations (49)–(51) have ergodic properties. In particular, we have the following mean-square convergences:

$$\frac1T\int_0^T\dot m_\lambda(t,\vartheta_0)^2\,dt=\frac{b^2}T\int_0^Ty_t^2\,dt\longrightarrow\mathrm I_{11}(\vartheta_0),$$
$$\frac1T\int_0^T\dot m_\lambda(t,\vartheta_0)\,\dot m_\mu(t,\vartheta_0)\,dt=\frac{b^2}T\int_0^Ty_tz_t\,dt\longrightarrow\mathrm I_{12}(\vartheta_0),$$
$$\frac1T\int_0^T\dot m_\mu(t,\vartheta_0)^2\,dt=\frac{b^2}T\int_0^Tz_t^2\,dt\longrightarrow\mathrm I_{22}(\vartheta_0).$$

Proof. For the proof of the existence of the invariant measure see [3], Section 4.2. Note that the equations (49)–(51) do not coincide with those of [3], because there it is supposed that $\lambda=\mu$ and $y_1=1$, $y_2=0$; but the arguments given there apply directly to the system (49)–(51) too.

Recall that the strong mixing coefficient $\alpha(t)$ of the ergodic diffusion process (49) satisfies the estimate

$$\alpha(t)<e^{-ct}\qquad\text{with some }c>0.$$

For the proof see the corresponding theorem in [15]. To check the conditions of that theorem we change the variables in the equation (49):

$$\xi_t=g(x_t),\qquad g(x)=\int_{1/2}^x\frac{dv}{b\,v(1-v)},\quad x\in(0,1),$$

and obtain the stochastic differential equation

$$d\xi_t=A(\xi_t)\,dt+d\bar W_t,\qquad \xi_0=g(x_0),\quad t\ge0.$$

The process $\xi_t$, $t\ge0$, has ergodic properties, and the drift coefficient $A(\cdot)$ satisfies the conditions of that theorem. Now, to verify the mean-square convergence

$$E_{\vartheta_0}\Bigl(\frac1T\int_0^T\bigl[y_t^2-E_{\vartheta_0}y_t^2\bigr]dt\Bigr)^2\longrightarrow0, \qquad (56)$$

we can apply the result of the following lemma.

Lemma 7. Let $\{Y_t,\ t>0\}$ be a stochastic process with zero mean such that for some $m>2$ and $k\ge1$

$$E|Y_t|^{2mk}<C_1,\qquad \int_0^\infty t^{k-1}\bigl[\alpha(t)\bigr]^{(m-2)/m}\,dt<C_2,$$

where $\alpha(t)$ is the strong mixing coefficient. Then

$$E\Bigl|\int_0^TY_t\,dt\Bigr|^{2k}\le C_3\,T^k.$$

Proof. See Lemma 2.1 in [9].

Therefore, if we put $Y_t=y_t^2-E_{\vartheta_0}y_t^2$, $m=3$ and $k=1$, then we obtain the convergence (56).

Let us verify the consistency of the one-step MLE-process. We can write

$$P_{\vartheta_0}\bigl\{\bigl|\vartheta^\star(\tau)-\vartheta_0\bigr|>\nu\bigr\}\le P_{\vartheta_0}\Bigl\{\bigl|\hat\vartheta_{T^\delta}-\vartheta_0\bigr|>\frac\nu2\Bigr\}$$
$$+P_{\vartheta_0}\Bigl\{\Bigl|\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\tau T}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigl[dX_s-m\bigl(\hat\vartheta_{T^\delta},s\bigr)ds\bigr]\Bigr|>\frac\nu2\Bigr\}.$$

For the first probability, by Theorem 1 we have

$$P_{\vartheta_0}\Bigl\{\bigl|\hat\vartheta_{T^\delta}-\vartheta_0\bigr|>\frac\nu2\Bigr\}\le\frac4{\nu^2}\,E_{\vartheta_0}\bigl|\hat\vartheta_{T^\delta}-\vartheta_0\bigr|^2\le\frac C{\nu^2T^\delta}.$$

The second probability can be evaluated as follows:

$$P_{\vartheta_0}\Bigl\{\Bigl|\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\tau T}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigl[dX_s-m\bigl(\hat\vartheta_{T^\delta},s\bigr)ds\bigr]\Bigr|>\frac\nu2\Bigr\}$$
$$\le P_{\vartheta_0}\Bigl\{\Bigl|\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\tau T}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\,d\bar W_s\Bigr|>\frac\nu4\Bigr\}+P_{\vartheta_0}\Bigl\{\Bigl|\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\tau T}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\,\tilde m\bigl(\hat\vartheta_{T^\delta},s\bigr)\,ds\Bigr|>\frac\nu4\Bigr\},$$

where $\tilde m(\hat\vartheta_{T^\delta},s)=m(\vartheta_0,s)-m(\hat\vartheta_{T^\delta},s)$. For the first term on the right-hand side we can write

$$\Bigl|\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\tau T}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\,d\bar W_s\Bigr|\le\bigl|\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\bigr|\,\frac{T^{(\delta+\gamma)/2}}{\tau T}\cdot\frac1{T^{(\delta+\gamma)/2}}\Bigl|\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\,d\bar W_s\Bigr|,$$

where $\gamma$ is chosen so that $\delta+\gamma>1$ (and $\delta+\gamma<2$, so that the first factor tends to zero while $\mathrm I_{\tau T}(\hat\vartheta_{T^\delta})^{-1}$ is bounded in probability). Hence this term tends to zero in probability, because by the Chebyshev inequality

$$P_{\vartheta_0}\Bigl\{\frac1{T^{(\delta+\gamma)/2}}\Bigl|\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\,d\bar W_s\Bigr|>\nu\Bigr\}\le\frac1{\nu^2T^{\delta+\gamma}}\,E_{\vartheta_0}\int_{T^\delta}^{\tau T}\bigl|\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigr|^2ds\le\frac C{\nu^2\,T^{\delta+\gamma-1}}.$$

Recall that $\delta+\gamma-1>0$. Further,

$$P_{\vartheta_0}\Bigl\{\Bigl|\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\tau T}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigl[m(\vartheta_0,s)-m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigr]ds\Bigr|>\frac\nu4\Bigr\}$$
$$=P_{\vartheta_0}\Bigl\{\Bigl|\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\tau T}\int_{T^\delta}^{\tau T}\int_0^1\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\,\dot m(\vartheta_v,s)^*\,dv\,ds\,\bigl(\vartheta_0-\hat\vartheta_{T^\delta}\bigr)\Bigr|>\frac\nu4\Bigr\}\longrightarrow0$$

as $T\to\infty$, because $|\hat\vartheta_{T^\delta}-\vartheta_0|=O(T^{-\delta/2})$ and the other factors are bounded in probability. Here $\vartheta_v=\hat\vartheta_{T^\delta}+v\,(\vartheta_0-\hat\vartheta_{T^\delta})$.

To prove (38) we write

$$\sqrt{\tau T}\bigl(\vartheta^\star(\tau)-\vartheta_0\bigr)=\sqrt{\tau T}\bigl(\hat\vartheta_{T^\delta}-\vartheta_0\bigr)+\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\sqrt{\tau T}}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\,d\bar W_s$$
$$+\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\sqrt{\tau T}}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigl[m(\vartheta_0,s)-m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigr]ds.$$

We have the estimate

$$E_{\vartheta_0}\Bigl|\frac1{\sqrt{\tau T}}\int_{T^\delta}^{\tau T}\bigl[\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)-\dot m(\vartheta_0,s)\bigr]d\bar W_s\Bigr|^2=\frac1{\tau T}\,E_{\vartheta_0}\int_{T^\delta}^{\tau T}\bigl|\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)-\dot m(\vartheta_0,s)\bigr|^2ds\longrightarrow0$$

as $T\to\infty$, and by the central limit theorem we have the convergence in distribution

$$\frac1{\sqrt{\tau T}}\int_{T^\delta}^{\tau T}\dot m(\vartheta_0,s)\,d\bar W_s\Longrightarrow\mathcal N\bigl(0,\mathrm I(\vartheta_0)\bigr).$$

Further, let us denote $\hat v_\delta=\sqrt{\tau T}\,\bigl(\hat\vartheta_{T^\delta}-\vartheta_0\bigr)$; then we can write

$$\hat v_\delta+\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\sqrt{\tau T}}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigl[m(\vartheta_0,s)-m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigr]ds$$
$$=\Bigl(\mathbb I-\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\tau T}\int_{T^\delta}^{\tau T}\int_0^1\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\,\dot m(\vartheta_r,s)^*\,dr\,ds\Bigr)\hat v_\delta,$$

where $\vartheta_r=\hat\vartheta_{T^\delta}+r\,(\vartheta_0-\hat\vartheta_{T^\delta})$ and $\mathbb I$ is the identity matrix. The representation

$$\dot m(\vartheta_r,s)=\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)+\int_0^1\ddot m(\vartheta_q,s)\,dq\;r\,\bigl(\vartheta_0-\hat\vartheta_{T^\delta}\bigr)$$

and the equality

$$\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)=\frac1{\tau T}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\,\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)^*\,ds$$

allow us to write

$$\hat v_\delta+\mathrm I_{\tau T}\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\frac1{\sqrt{\tau T}}\int_{T^\delta}^{\tau T}\dot m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigl[m(\vartheta_0,s)-m\bigl(\hat\vartheta_{T^\delta},s\bigr)\bigr]ds=O\bigl(\bigl|\hat\vartheta_{T^\delta}-\vartheta_0\bigr|\bigr)\,O(1)\,\hat v_\delta=O\bigl(T^{(1-2\delta)/2}\bigr)O(1)\longrightarrow0$$

as $T\to\infty$, since $\delta>1/2$.

Let us now verify that the Fisher information matrix is non-degenerate. It is sufficient to show that the matrix

$$\mathrm J(\vartheta_0)=\begin{pmatrix}E_{\vartheta_0}\tilde y_t^2 & E_{\vartheta_0}\tilde y_t\tilde z_t\\[2pt] E_{\vartheta_0}\tilde y_t\tilde z_t & E_{\vartheta_0}\tilde z_t^2\end{pmatrix}$$

is non-degenerate. Here $\tilde y_t$, $\tilde z_t$ are the stationary solutions of (50) and (51) respectively. If this matrix is degenerate, then

$$E_{\vartheta_0}\tilde y_t^2\;E_{\vartheta_0}\tilde z_t^2=\bigl(E_{\vartheta_0}\tilde y_t\tilde z_t\bigr)^2. \qquad (57)$$

Recall that by the Cauchy–Schwarz inequality

$$\bigl(E_{\vartheta_0}\tilde y_t\tilde z_t\bigr)^2\le E_{\vartheta_0}\tilde y_t^2\;E_{\vartheta_0}\tilde z_t^2,$$

with equality if and only if $\tilde z_t=c\,\tilde y_t$ for some constant $c$. Therefore in the case of equality we have $E_{\vartheta_0}\bigl(c\tilde y_t-\tilde z_t\bigr)^2=0$. Introduce the process $\tilde v_t=c\tilde y_t-\tilde z_t$ as a solution of the equation

$$d\tilde v_t=\bigl[x_t(1-c)-1\bigr]dt-\bigl[a+b^2x_t(1-x_t)\bigr]\tilde v_t\,dt+b(1-2x_t)\,\tilde v_t\,d\bar W_t,$$

where $\tilde v_t$ and $x_t$ are stationary solutions. Further, following [3], Section 4, where a similar estimate was obtained, we write this solution as

$$\tilde v_t=\tilde v_0\,e^{-at}+\int_0^te^{-a(t-s)}\bigl[x_s(1-c)-1\bigr]ds-b^2\int_0^te^{-a(t-s)}x_s(1-x_s)\,\tilde v_s\,ds+b\int_0^te^{-a(t-s)}(1-2x_s)\,\tilde v_s\,d\bar W_s.$$

Hence

$$E_{\vartheta_0}\Bigl(\int_0^te^{-a(t-s)}\bigl[x_s(1-c)-1\bigr]ds\Bigr)^2\le4\bigl(1+e^{-2at}\bigr)E_{\vartheta_0}\tilde v_t^2+\frac{4b^4}a\int_0^te^{-a(t-s)}\frac1{16}E_{\vartheta_0}\tilde v_s^2\,ds+4b^2\int_0^te^{-2a(t-s)}E_{\vartheta_0}\tilde v_s^2\,ds\le C\,E_{\vartheta_0}\tilde v_t^2$$

with some constant $C>0$ which does not depend on $t$. Recall that $E_{\vartheta_0}\tilde v_t^2$ does not depend on $t$ either, because $\tilde v_t$ is a stationary solution. Therefore, if we show that for all $c$

$$\lim_{t\to\infty}E_{\vartheta_0}\Bigl(\int_0^te^{-a(t-s)}\bigl[x_s(1-c)-1\bigr]ds\Bigr)^2>0,$$

then the matrix $\mathrm J(\vartheta_0)$ is non-degenerate. The random process

$$\zeta_t=\int_0^te^{-a(t-s)}\bigl[x_s(1-c)-1\bigr]ds$$

is the solution of the equation

$$\frac{d\zeta_t}{dt}=-a\,\zeta_t+x_t(1-c)-1,\qquad \zeta_0=0.$$

Elementary calculations show that for all $\vartheta_0$ and $c$ the limit $\lim_{t\to\infty}E_{\vartheta_0}\zeta_t^2$ is positive: it is bounded from below by a quantity proportional to $E_{\vartheta_0}\bigl[\tilde\pi(1-c)-1\bigr]^2/a^2>0$, where $\tilde\pi$ denotes a random variable with the stationary distribution of the filter. Therefore the Fisher information matrix $\mathrm I(\vartheta_0)$ is non-degenerate for all $\vartheta_0\in\Theta$.

Note that by Theorem 2 the limit covariance matrix of the one-step MLE-process coincides with the covariance of the asymptotically efficient MLE [3]; therefore $\vartheta^\star(\tau)$ is asymptotically efficient too.

5 Discussion

The learning interval used in the one-step construction is $[0,T^\delta]$, where $\delta\in(1/2,1)$, i.e., it is negligible with respect to the whole observation time. It can be made even shorter if we use the two-step MLE-process approach, as it was proposed in [13].

That approach corresponds to the learning interval $[0,T^\delta]$ with $\delta\in(1/4,1/2]$. The procedure is as follows. First we obtain the preliminary estimator $\hat\vartheta_{T^\delta}$ as before. Then we introduce the second preliminary estimator

$$\bar\vartheta_{t,T}=\hat\vartheta_{T^\delta}+t^{-1/2}\,\mathrm I_t\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\Delta_t\bigl(\hat\vartheta_{T^\delta},X^t\bigr),\qquad t\in[T^\delta,T],$$

and then we define the two-step MLE-process

$$\vartheta^{\star\star}_{t,T}=\bar\vartheta_{t,T}+t^{-1/2}\,\mathrm I_t\bigl(\hat\vartheta_{T^\delta}\bigr)^{-1}\tilde\Delta_t\bigl(\hat\vartheta_{T^\delta},\bar\vartheta_{t,T},X^t\bigr),\qquad t\in[T^\delta,T],$$

where

$$\tilde\Delta_t\bigl(\vartheta_1,\vartheta_2,X^t\bigr)=\frac1{\sqrt t}\int_{T^\delta}^t\dot m(\vartheta_1,s)\bigl[dX_s-m(\vartheta_2,s)\,ds\bigr],\qquad t\in[T^\delta,T].$$

It can be shown that for all $\tau\in(0,1]$ and $t=\tau T$ we have the asymptotic normality of the estimator $\vartheta^{\star\star}(\tau)=\vartheta^{\star\star}_{\tau T,T}$:

$$\sqrt{\tau T}\,\bigl(\vartheta^{\star\star}(\tau)-\vartheta_0\bigr)\Longrightarrow\mathcal N\bigl(0,\mathrm I(\vartheta_0)^{-1}\bigr).$$

See the details in [13].

Note that it can also be shown that the one-step MLE-process converges in distribution to a limit Brownian motion. Let us introduce the random process

$$\eta_T(\tau)=\tau\sqrt T\,\mathrm I(\vartheta_0)^{1/2}\bigl(\vartheta^\star(\tau)-\vartheta_0\bigr),\qquad \tau_\delta\le\tau\le1,$$

where $\tau_\delta=T^{\delta-1}$. A more detailed analysis shows that the random process $\eta_T(\tau)$, $\tau_0\le\tau\le1$, converges to a two-dimensional standard Wiener process $W(\tau)$, $\tau_0\le\tau\le1$, for any $\tau_0\in(0,1]$. For the details see the proof of such convergence in a similar problem in [13].

Acknowledgment. This work was done under partial financial support (to the second author) of a grant of the RSF.

References

[1] Bickel, P.J., Ritov, Y. and Rydén, T. (1998) Asymptotic normality of the maximum likelihood estimator for general hidden Markov models. Ann. Statist., 26, 4, 1614–1635.

[2] Cappé, O., Moulines, E. and Rydén, T. (2005) Inference in Hidden Markov Models. Springer, N.Y.

[3] Chigansky, P. (2009) Maximum likelihood estimation for hidden Markov models in continuous time. Statist. Inference Stoch. Processes, 12, 2, 139–163.

[4] Dembo, A. and Zeitouni, O. (1986) Parameter estimation of partially observed continuous time processes via the EM algorithm. Stoch. Process. Appl., 23.

[5] Elliott, R.J., Aggoun, L. and Moore, J.B. (1995) Hidden Markov Models: Estimation and Control. Springer, N.Y.

[6] Iacus, S. and Yoshida, N. (2009) Estimation for the discretely observed telegraph process. Theory Probab. Math. Statist., 78.

[7] Ibragimov, I.A. and Khasminskii, R.Z. (1981) Statistical Estimation: Asymptotic Theory. Springer-Verlag, New York.

[8] Kamatani, K. and Uchida, M. (2015) Hybrid multi-step estimators for stochastic differential equations based on sampled data. Statist. Inference Stoch. Processes, 18, 2.

[9] Khasminskii, R.Z. (1966) On stochastic processes defined by differential equations with a small parameter. Theory Probab. Appl., 11.

[10] Kutoyants, Yu.A. (1984) Parameter Estimation for Stochastic Processes. Heldermann, Berlin.

[11] Kutoyants, Yu.A. (1994) Identification of Dynamical Systems with Small Noise. Kluwer, Dordrecht.

[12] Kutoyants, Yu.A. (2014) On approximation of the backward stochastic differential equation. Small noise, large samples and high frequency cases. Proceedings of the Steklov Institute of Mathematics, 287.

[13] Kutoyants, Yu.A. (2015) On multi-step MLE-processes for ergodic diffusion. Submitted.

[14] Liptser, R.S. and Shiryaev, A.N. (2001) Statistics of Random Processes, 2nd ed., vol. 1. Springer, N.Y.

[15] Veretennikov, A.Yu. (1987) Bounds for the mixing rate in the theory of stochastic equations. Theory Probab. Appl., 32, 2.

[16] Wonham, W.M. (1965) Some applications of stochastic differential equations to optimal non-linear filtering. SIAM J. Control, Ser. A, 2, 347–369.

[17] Zeitouni, O. and Dembo, A. (1988) Exact filters for the estimation of the number of transitions of finite-state continuous-time Markov processes. IEEE Trans. Inform. Theory, 34, 4.


More information

Generalized Gaussian Bridges of Prediction-Invertible Processes

Generalized Gaussian Bridges of Prediction-Invertible Processes Generalized Gaussian Bridges of Prediction-Invertible Processes Tommi Sottinen 1 and Adil Yazigi University of Vaasa, Finland Modern Stochastics: Theory and Applications III September 1, 212, Kyiv, Ukraine

More information

Stochastic Viral Dynamics with Beddington-DeAngelis Functional Response

Stochastic Viral Dynamics with Beddington-DeAngelis Functional Response Stochastic Viral Dynamics with Beddington-DeAngelis Functional Response Junyi Tu, Yuncheng You University of South Florida, USA you@mail.usf.edu IMA Workshop in Memory of George R. Sell June 016 Outline

More information

Optimal portfolio strategies under partial information with expert opinions

Optimal portfolio strategies under partial information with expert opinions 1 / 35 Optimal portfolio strategies under partial information with expert opinions Ralf Wunderlich Brandenburg University of Technology Cottbus, Germany Joint work with Rüdiger Frey Research Seminar WU

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

Weak convergence of Markov chain Monte Carlo II

Weak convergence of Markov chain Monte Carlo II Weak convergence of Markov chain Monte Carlo II KAMATANI, Kengo Mar 2011 at Le Mans Background Markov chain Monte Carlo (MCMC) method is widely used in Statistical Science. It is easy to use, but difficult

More information

Regular Variation and Extreme Events for Stochastic Processes

Regular Variation and Extreme Events for Stochastic Processes 1 Regular Variation and Extreme Events for Stochastic Processes FILIP LINDSKOG Royal Institute of Technology, Stockholm 2005 based on joint work with Henrik Hult www.math.kth.se/ lindskog 2 Extremes for

More information

Compensating operator of diffusion process with variable diffusion in semi-markov space

Compensating operator of diffusion process with variable diffusion in semi-markov space Compensating operator of diffusion process with variable diffusion in semi-markov space Rosa W., Chabanyuk Y., Khimka U. T. Zakopane, XLV KZM 2016 September 9, 2016 diffusion process September with variable

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets

Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Tim Leung 1, Qingshuo Song 2, and Jie Yang 3 1 Columbia University, New York, USA; leung@ieor.columbia.edu 2 City

More information

arxiv: v2 [math.pr] 27 Oct 2015

arxiv: v2 [math.pr] 27 Oct 2015 A brief note on the Karhunen-Loève expansion Alen Alexanderian arxiv:1509.07526v2 [math.pr] 27 Oct 2015 October 28, 2015 Abstract We provide a detailed derivation of the Karhunen Loève expansion of a stochastic

More information

A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand

A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand Carl Mueller 1 and Zhixin Wu Abstract We give a new representation of fractional

More information

Diffusion Approximation of Branching Processes

Diffusion Approximation of Branching Processes UDC 519.218.23 Diffusion Approximation of Branching Processes N. Limnios Sorbonne University, Université de technologie de Compiègne LMAC Laboratory of Applied Mathematics of Compiègne - CS 60319-60203

More information