Optimum thresholding using mean and conditional mean square error

Size: px
Start display at page:

Download "Optimum thresholding using mean and conditional mean square error"

Transcription

1 Optmum tresoldng usng mean and condtonal mean square error José E. Fgueroa-López Cecla Mancn! " #!$%&'! * * + *,+!! - +*,. + / /! *'%&%&

2 Optmum tresoldng usng mean and condtonal mean square error José E. Fgueroa-López and Cecla Mancn Marc 4, 7 Abstract We consder a unvarate semmartngale model for te logartm of an asset prce, contanng jumps avng possbly nfnte actvty IA. Te nonparametrc tresold estmator ˆ IV n of te ntegrated varance IV := T σ sds proposed n 6 s constructed usng observatons on a dscrete tme grd, and precsely t sums up te squared ncrements of te process wen tey are under a tresold, a determnstc functon of te observaton step and possbly of te coeffcents of X. All te tresold functons satsfyng gven condtons allow asymptotcally consstent estmates of IV, owever te fnte sample propertes of ˆ IV n can depend on te specfc coce of te tresold. We am ere at optmally selectng te tresold by mnmzng eter te estmaton mean square error MSE or te condtonal mean square error cmse. Te last crteron allows to reac a tresold wc s optmal not n mean but for te specfc pat at and. A parsmonous caracterzaton of te optmum s establsed, wc turns out to be asymptotcally proportonal to te Lévy s modulus of contnuty of te underlyng Brownan moton. Moreover, mnmzng te cmse enables us to propose a novel mplementaton sceme for te optmal tresold sequence. Monte Carlo smulatons llustrate te superor performance of te proposed metod. Keywords: Tresold estmator, ntegrated varance, Lévy jumps, mean square error, condtonal mean square error, modulus of contnuty of te Brownan moton pats, numercal sceme JEL classfcaton codes: C6, C3 Introducton We consder te model dx t = σ t dw t +dj t, were W s a standard Brownan moton, σ s a cadlag process, and J s a pure jump semmartngale SM process. Assume we ave at our dsposal a record {x,x t,..,x tn } of dscrete observatons of X spanned on te fxed tme nterval,t, defne Z or n Z te ncrement Z t Z t for any process Z, and defne tresold functon rσ, any determnstc non-negatve functon of te observaton step, and possbly of te coeffcents of X, suc tat for any value σ R rσ, lm rσ, =, lm log = +. We know tat ten te Tresold Realzed Varance or Truncated Realzed Varance ˆ IV n := X I { X rσ t, }, Department of Matematcs, Wasngton Unversty n St. Lous, MO, 633, USA fgueroa@mat.wustl.edu Department of Management and Economcs, Unversty of Florence, va delle Pandette 9, 57 cecla.mancn@unf.t

3 were := t t, gves a consstent estmator of te Integrated Varance IV := T as sup, as soon as σ s a.s. bounded away from zero on,t. In te case were te jump process J as fnte actvty FA and te observatons are evenly spaced, te estmator s also asymptotcally Gaussan. However te fnte sample propertes of IVn ˆ can depend on te specfc coce of te tresold TH. Te estmaton error s large wen eter te tresold s too small or wen t s too large. In te frst case too many ncrements are dscarded, ncluded te ncrements bearng only small and neglgble jumps, and TRV underestmates IV. In te second case too many ncrements are kept wtn TRV, ncluded many ncrements contanng jumps, leadng to an overestmaton of IV. In ts paper we look for an optmal tresold, by consderng te followng two optmalty crtera: mnmzaton of MSE, te expected quadratc error n te estmaton of IV; and mnmzaton of cmse, te expected quadratc error condtonal to te realzed pats of te jump process J and of te volatlty process σ s s. Assumng evenly spaced observatons, te two quanttes MSE and cmse are explct functons of te TH and under eac crteron t turns out tat for any semmartngale X, for wc te volatlty and te jump processes are ndependent on te underlyng Brownan moton, an optmal TH exsts, and s a soluton of an explctly gven equaton, te equaton beng dfferent under te two crtera. Furter, under eac crteron te optmal TH s unque, at least for gven classes of processes X. Te caracterzng equaton depends on te observaton step and so does ts soluton. Te optmal TH as to tend to as tends to zero and, under eac crteron, an asymptotc expanson wt respect to s possble for some terms wtn te equaton, wc n turn mples an asymptotc expanson of te optmal TH. Under te MSE crteron, wen X s Lévy and J as eter fnte actvty jumps or te actvty s nfnte but J s symmetrc strctly stable, te leadng term of te expanson s explct n, and n bot cases s proportonal to te modulus of contnuty of te Brownan moton pats and to te spot volatlty of X, te proportonalty constant beng Y, were Y s te jump actvty ndex of X Tus te ger te jump actvty s, te lower te optmal tresold as to be to dscard te ger nose represented by te jumps, n order to catc nformaton about IV. Te leadng term of te optmal TH does not satsfy te classcal assumptons under wc te truncaton metod as been sown n 6 to consstently estmate IV, owever at least n te fnte actvty jumps case t turns out tat te tresold estmator of IV constructed wt te optmal TH s stll consstent. Te assumptons needed for te cmse crteron are a lttle bt less restrctve, and we fnd tat, for constant σ and FA jumps, te leadng term of te optmal TH stll as to be proportonal to te modulus of contnuty of te Brownan moton pats and to σ. One of te man motvatons for consderng te cmse arses from a novel applcaton of ts to tuneup te tresold parameter. Te dea conssts n teratvely updatng te optmal TH and estmates of te ncrements of te contnuous and jump components Xt c = t σ sdw s and {J t } t, respectvely. We llustrate ts metod on smulated data. Mnmzaton of te condtonal mean square estmaton error n te presence of nfnte actvty jumps n X s object of furter researc. σ sds, An outlne of te paper s as follows. Secton deals wt te MSE: te exstence of an optmal tresold ε s establsed for a qute general SM X; for a Lévy process X, unqueness s also establsed Subsecton. and te asymptotc expanson for te optmal TH s found n Secton.3, n bot te cases of a fnte jump actvty Lévy X and of an nfnte actvty symmetrc strctly stable X. 
In Secton 3, for any fnte jump actvty SM X, consstency of IVn ˆ s verfed even wen te tresold functon conssts of te leadng term of te optmal tresold, wc does not satsfy te classcal ypotess. Secton 4 deals wt te cmse n te case were X s a SM wt constant volatlty and FA jumps: exstence of an optmal TH ε s establsed, ts asymptotc expanson s found, ten unqueness s obtaned. In Secton 5 te results of Secton 4 are used to construct a new metod for teratvely determne te optmal tresold value n fnte samples, and a relablty ceck s executed on smulated

4 data. Secton 6 concludes and Secton 7 contans te proofs not gven n te man text. Acknowledgements. José Fgueroa-López s researc was supported n part by te Natonal Scence Foundaton grants: DMS-4969 and DMS-636. Cecla Mancn s work as benefted from support by GNAMPA Italan Group for researc n Analyss, Probablty and ter Applcatons. It s a subunt of te INdAM group, te Itaan Group for researc n Hg Matematcs, wt ste n Rome and EIF Insttut Europlace de Fnance, subunt of te Insttut Lous Baceler n Pars. MEAN SQUARE ERROR: general results We compute and optmze te mean square error MSE of IVn ˆ passng troug te condtonal expectaton wt respect to te pats of σ and J: MSE := E IV ˆ n IV = E E IV ˆ n IV σ,j. Condtonng on σ, as well as assumng no drft n X, s standard n papers were MSE-optmalty s looked for, n te absence of jumps see e.g.. We assume evenly spaced observaton over a fxed tme orzon,t, so tat t = t,n = n, for any =...n, wt = n = T/n. Denote by ε te square root of a gven tresold functon: ε := rσ,. ˆ IVn and MSE are n fact functons of ε oter tan of, and we ndcate tem by ˆ IV n ε, MSEε. Note tat for ε = we ave ˆ IV n =, so MSEε = EIV ; as ε ncreases some squared ncrements X are ncluded wtn ˆ IV n, so ˆ IV n becomes closer to IV and MSEε decreases. However, f J, for ε + te quantty MSEε ncreases agan, snce ˆ IV n ncludes all te squared ncrements X and tus ˆ IVn estmates te global quadratc varaton IV + s T X s of X at tme T, and MSEε becomes close to E s T X s. We look for a tresold ε gvng MSEε = mn ε, MSEε. In ts secton we analyze te frst dervatve MSE ε and we fnd tat an optmal tresold exsts, n te general framework were X s a semmartngale satsfyng A below, and we furns an equaton to wc ε s a soluton, wle n Secton., we fnd tat ε s even unque. Te equaton as no explct soluton, but ε s a functon of and we can explctly caracterze te frst order term of ts asymptotc expanson n. Clearly we can always fnd an approxmaton of te optmal tresold wt arbtrary precson makng use of numercal metods. Let us denote X := XI { X ε }, σ := t t σ sds, m := J. To guarantee tat W remans a Brownan moton condtonally to σ and J, we need to assume te followng A. A.s. σ s > for all s, and σ, J are ndependent on W. Teorem. Under A and te fnteness of te expectaton of te terms below, for fxed and ε >, we ave MSE ε = ε Gε, were Gε := E a ε ε + ε m b j ε IV, a ε := e σ +e ε+m σ, σ j π b ε := E X σ,j = e ε m σ ε+m +e ε+m σ σ ε m + m +σ π m+ε σ π m ε σ e x dx. It clearly follows tat MSE ε > f and only f Gε > and, tus, to our am of fndng an optmal tresold, t suffces to study te sgn of Gε as ε vares. 3

5 Notaton. For brevty we sometmes omt to precse te dependence on ε of a ε and b ε. For a functon fε we sometmes use f+ for lm ε + fε. For two functons fx,gx of a non-negatve varable x wc tends to respectvely to +, by f g, or g f we mean tat f = og as x respectvely x +, by f g we mean tat bot f = Og and g = Of as x respectvely x +, wle by f g we mean tat f and g are asymptotcally equvalent.e. f/g. We denote φx = e x + π, Φx = φsds. x.o.t means ger order terms Proof of Teorem. Under A we ave tat condtonally to σ,j te ncrement X = t t σ s dw s + J s a Gaussan r.v. wt law Nm,σ, wc allows to compute te condtonal expectaton E IV ˆ n σ,j. We ave and E ˆ IV n σ,j = b ε = E ˆ IV n ε σ,j = = ε m + m +σ σ π e ε m E X 4 σ,j+ σ ε+m +e ε+m σ σ ε m π ε+m σ e t dt+ e t dt, E X j X σ,j j> e ε m σ σ ε 3 +m ε +m ε+m 3 +5m σ +3σε ε m σ + e t dt+ e ε+m σ σ ε 3 m ε +m ε m 3 5m σ +3σε ε+m σ e t dt m 4 +6m σ +3σ 4 + π avng used tat condtonally to σ and J, X and j X are ndependent. It follows tat MSEε = E e ε m σ σ ε 3 +m ε +m ε+m 3 +5m σ +3σε + e ε+m σ σ ε 3 m ε +m ε m 3 5m σ +3σε ε m σ + e t dt+ b εb j ε IV j> ε+m σ ε m + m +σ σ π e t dt m 4 +6m σ +3σ 4 π b b j, 3 j> e ε m σ ε+m +e ε+m σ σ ε m π ε+m σ e t dt+ e t dt +IV. MSEε s a dfferentable functons of ε, terefore to fnd te mnmum on,+ of MSEε we can study te sgn of ts frst dervatve MSE ε. Snce MSE ε = d dε E IV ˆ n ε IV d dε E IV ˆ n ε, we begn to compute d dε E IV ˆ n ε σ,j. Note tat d dε b ε = e ε m σ +e ε+m σ ε+m ε m σ π 4

6 e ε m σ +e ε+m σ σ + m +σ π σ π e ε m σ +e ε+m σ = ε e ε m σ +e ε+m σ σ π = ε a ε, so tat d dε E IV ˆ n n ε σ,j = ε a ε 4 s strctly greater tan zero for all values of ε >. As for d dε E IV ˆ n ε σ,j, note tat te term j> b b j n 3 can be wrtten as j b b j, so ts dervatve concdes wt j ε a b j +b ε a j, owever b a j = b a j b a j j = a b j a b = a b j so tat and j ε a b j +b ε a j = n Remark. If also J, we ave j j ε a b j, d dε E IV ˆ n ε σ,j = ε 4 n a ε+ b εb j ε 5 j> = ε 4 a +ε a b j d dε MSEε = ε j j E ε a +a b j IVa. j = ε E a ε + j b j IV = ε Gε. MSE = EIV > and, for small, lm MSEε >. ε + Corollary. Under te same assumptons of Teorem, even n te absence of jumps, an optmal tresold exsts and s soluton of Gε =. Proof. Note tat a ε and b ε are contnuously dfferentable functons of ε, and, wt fxed = T n, 6 a ε = σ 3 π m σ a = e, b =, σ π a + =, b + = E X σ,j = m +σ, σ ε m +e ε+m σ ε+m, b ε = ε a ε, e ε m so we fnd tat G = 4 σ π E e m σ IV <, and lm ε + Gε = +, so tere exsts ε + > : MSE ε > on ε +,+. On te compact set,ε + te contnuous functon MSE as necessarly absolute mnmum value MSE, and snce on ε +,+ MSE s ncreasng we ave tat on,+ te absolute mnmum s MSE. MSE ε s contnuous and assumes bot negatve and postve values, tus equaton Gε = as a soluton. Any mnmum pont of MSE on,+ as to be a statonary pont, so t as to solve te equaton. 5

7 Remark. In prncple MSEε could even ave many ponts ε were te absolute mnmum value MSE of MSE on,+ s reaced; MSE could even ave an nfnte number of local not absolute mnma. To determne te number of solutons to Gε =, we need to study te sgn of G ε correspondng to te convexty propertes of M SEε, but ts s not easy. Defne g ε := ε + j b j IV, so tat Gε = Ea εg ε. We can easly study te functons g, snce we know tat g = IV <, lm ε + g ε = + and g ε = ε+ε j a > for all ε >. However wtn te jont functon Gε te presence of te terms a ε makes t dffcult even to know weter a g s postve.. Wen X s Lévy Let us assume A. X s Lévy. We now ave tat σ > s constant and X are..d., so te equaton caracterzng MSE ε = s muc smpler: from 6, snce wtn a j b j te term m of a s ndependent on te terms m j of b j, we ave MSE ε = ε Gε = ε nea ε ε +n Eb ε IV. Teorem. If X s Lévy, equaton ε +n Eb ε IV = 7 as a unque soluton ε and, tus, tere exsts a unque optmal tresold, wc s ε. Proof. For ε > we ave MSE ε > f and only f Gε >, wc n turn s true f and only f were, settng m := m = J, we recall tat we ave Te sgn of gε s studed as follows: gε := ε +n Eb IV > σ Eb = E e ε m σ ε+m+e ε+m σ ε m π + m +σ ε m σ e t dt+ π g = σ T <, lm gε = +, ε + g ε = ε+n εea ε+m σ e t dt. so tat g ε > for all ε >, n >. Tat mples tat gε starts at ε = from a negatve value and strctly ncreases towards +, as ε ncreases, so tat tere exsts a unque ε suc tat gε < for ε,ε, gε = and gε > for ε ε,+. Tat mples n turn tat MSEε as a unque mnmum pont n ε, wc s ten te optmal tresold we were lookng for: ε s te unque soluton of equaton 7, correspondng to gε = Gε =. Te equaton n 7 as no explct soluton, owever we can gve some mportant ndcatons to approxmate ε. 6

8 . Asymptotc beavor of Eb ε For te rest of Secton we assume tat ε := ε = ε, even wen for brevty we omt to ndcate te dependence on. We stll are under A, so recall tat Eb ε = E σ n W + n J { σ n W+ n J ε}, s constant n. Note tat Eb ε s fnte for any Lévy process J, regardless of weter J as bounded frst moment or not. We consder two cases: te case were J s a fnte jump actvty process and te one were ts s a symmetrc strctly stable process. Te asymptotc caracterzaton of Eb ε wll be used n Subsecton.3 to deduce te asymptotc beavor of te optmal tresold ε. We antcpate tat n Subsecton.3 we wll also see tat an optmal tresold ε as to tend to as and n suc a way tat ε +... Fnte Jump Actvty Lévy process Teorem 3. Let X be a fnte jump actvty Lévy process wt jump sze densty f and wt jump ntensty λ. Suppose also tat te restrctons of f on, and, admt C extensons on, and,, respectvely. Ten, for any ε = ε suc tat ε and ε, as, we ave Eb ε = σ σε e ε σ +λ ε3 π 3 Cf+O +o ε e ε σ +o ε 3, were above Cf := f + +f. Proof. By defnton, Eb ε = E n X { n X <ε, n N=} +E n X { n X <ε, n N } =: G +L. 8 By Lemma S. and Lemma S.5 wt k = n 3, provded tat ε, we ave wc sows te result. L := E n X ε 3 { n X <ε, n N } λ Cf,, 9 3 G := σ σε e ε σ +O +o ε e ε σ, π.. Strctly stable symmetrc Lévy process Let us start by notng tat Eb ε = E σw +J { σw +J ε} = σ E W { σw +J ε} +σe W J { σw +J ε} +E J { σw +J ε} =: C ε+d ε+e ε. Te frst term above can be wrtten as were C ε = σ σ E W { σw +J >ε} = σ σ C + ε+c ε, C + ε = E W {W+σ / J >σ / ε}, C ε = E W {W+σ / J < σ ε}. / 7

9 By condtonng on J and usng te fact tat EW {W>x} = xφx+ Φx, for all x R, we ave φ + Φ. C ± ε = E ε σ J σ ε σ J σ ε σ J σ In wat follows, we determne te beavor of te above quanttes under te assumpton tat ε. Te proofs of te followng Lemma and Lemma are n an Appendx. Lemma. Suppose tat {J t } t s a symmetrc Y-stable process wt Y,. Ten, tere exst constants K and K suc tat: ε E φ σ J σ = e ε σ K ε Y 3 +.o.t. π E J φ ε J = K ε Y +.o.t.. Lemma. Suppose tat {J t } t s a symmetrc strctly stable process wt Lévy measure C x Y dx. Ten, te followng asymptotcs old: ε E Φ σ J σ = C Y ε Y +O ε Y ε +O E φ σ J σ, E J C { σw +J ε} = Y ε Y +O ε Y +O 4 Y +O Y. 3 We are ready to sow our man result n ts part: Teorem 4. Let X t = σw t +J t, were W s a Wener process and J s a symmetrc strctly stable Lévy process wt Lévy measure C x Y. Ten, for any ε = ε suc tat ε and ε, as, we ave Proof. From Lemmas and, Eb ε = σ σ π εe ε σ + C Y ε Y +.o.t.. ε C + ε = E σ J σ = ε σ ε = σ ε π e ε φ σ J σ e ε σ K ε Y 3 π σ K + Φ ε σ J σ σ σ / ε Y +.o.t., K ε Y + C Y ε Y +.o.t. were above we used tat ε Y / ε Y. Terefore, usng tat D = and Lemma, wt K 3 = C Eb ε = E σw +J { σw +J ε} = C ε+d ε+e ε = σ σ ε σ ε π e σ K σ / ε Y +K 3 ε Y +.o.t. were above we used tat ε Y 3/ ε Y. = σ σ π εe ε σ +K 3 ε Y +.o.t., Y,.3 Asymptotc beavor of ε We now assume A3. J and te support of any J t s R. We frstly see tat an optmal tresold ε = ε as to tend to as and n suc a way tat ε +. Ten we wll sow te asymptotc beavor of ε n more detal. 8

10 Remark 3. Note tat under A3, f ε mnmzes MSE, ten necessarly ε as. Indeed, f lmnfε = c >, ten on a sequence ε convergng to c we would ave IV ˆ n IV s T J si Js c n probablty, rater tan IV ˆ n IV ; snce P{ s T J si Js c > } >, te MSE could not be mnmzed. Lemma 3. Suppose X t = σw t +J t, were W s a Brownan moton and J s a pure-jump Lévy process of bounded varaton or, more generally, suc tat, for some Y,, /Y n J n P J, for a real-valued random varable J. Ten, ε n/ n, as n. Remark. If J as FA jumps and drft η, J t = ηt + N t k= γ k, ten we ave J P η and, tus, te above assumpton s satsfed wt Y =. ε Proof. We sow te result by contradcton. Suppose tat lmnf n n n <. For smplcty and wtout loss of ε generalty, we furter assume tat lm n n n =: L < as all te statements below are vald on a subsequence ε {n k } k. Let M, be suc tat sup n n n M. Also, for smplcty, let us wrte ε n for ε n and assume tat T = so tat n = /n. Consder te decomposton Eb ε = E σw +J { σw +J ε} = σ E W { σw +J ε} +σe W J { σw +J ε} +E J { σw +J ε} =: c ε+d ε+e ε. Note tat domnated convergence mples tat c n ε n = σ E W n { σw+ / n n J n / n ε n} σ E W { W L/σ} < σ, snce / n J n = Y n n /Y J n, n probablty. For d note tat σ W / n J n { σw+n / J n n / ε σ n} W +σ W n / ε n σ W +σ W M, terefore, agan by domnated convergence n d n ε n = σe Smlarly, snce / n J n { σw+ / W / n J n { σw+ / n J n / n ε n} n J n / n n. ε n} σ W + n ε n W +M, n e n ε n = E n / J n { σw+n / J n / n ε n} n. Fnally, let us wrte te equaton ε n +n Eb ε n n n σ = as ε n + n dn ε n + e n ε n = σ n c n ε n. 4 n n n n n Te rgt-and sde of te equaton converges to σ E W { W L/σ} >, wle te left and sde converges to and ts leads to a contradcton and terefore lm n n n ε =. We are now ready to sow more precsely te asymptotc beavor of ε. Te followng result covers te FA case. Proposton. Let J ave FA jumps and let ε = ε be te optmal tresold. Ten, ε σ ln, as. 9

11 Proof. For smplcty, n wat follows we take T = so tat = /n. Agan, recall tat ε s te soluton of ε +n Eb ε nσ =. Trougout, we sall use tat ε, as proved n te above lemma. For smplcty, we wrte ε nstead of ε. By te asymptotc beavor of Eb ε descrbed above, ε +n σ σε e ε σ +λ ε3 π 3 Cf+O +o ε e ε σ +o ε 3 nσ =, and, tus, usng tat = /n, ε σ 4 σ ε e ε σ +λ ε3 ε π 3 Cf+O+o e ε σ +o ε 3 =. 5 Now, snce = oε as assumed at te begnnng, we can wrte te prevous equaton as ε 4 σ ε e ε ε σ +o e ε σ +o ε =. π Dvdng by ε and rearrangng te terms, ε+o = 4 π σ e ε σ +o. 6 Ten, takng logartms of bot sdes and snce ln+o = o, lnε+o = ε σ 4σ ln+ln +o. 7 π wc can be wrtten as Defnng = ε /σ, we can wrte ε ln +o σ = ε 8 σ ln+ln +o π ln + ln ln π 8 o o = +. Terefore, makng and usng tat snce ε, ln Recallng tat = ε /σ, we conclude te result.. Te followng result specfes te asymptotc beavor of ε for symmetrc strctly stable processes. Proposton. Under te condtons of Teorem 4, te optmal tresold ε = ε s suc tat ε Yσ ln, as. Proof. For smplcty, we agan take T = so tat = /n and wrte ε nstead of ε. By te asymptotc beavor of Eb ε descrbed n Teorem 4, we can wrte ε +n Eb ε nσ = as ε +n σ σ ε e ε σ + C π Y ε Y +.o.t. nσ =, and, tus, usng tat = oε and ε = o ε Y, we ave 4C Y ε Y 4 σ ε e ε ε σ +o e ε σ +o ε Y =. 8 π

12 Dvdng by ε and rearrangng te terms, ε Y +o = Y C π σ e ε σ +o. Ten, takng logartms of bot sdes and snce ln+o = o, Ylnε+o = ε σ Yσ ln+ln C +o, π wc can be wrtten as Y ε ln σ + Y ln σ + Y Equvalently, wrtng = ε /σ and dvdng by, ln+o = ε σ ln+ln Yσ C π +o. Y ln + Yln K = + o. and usng tat snce ε, we get Yln. Recallng tat = ε /σ, we conclude te result. 3 Tresold crteron wen ε = Mlog Under te framework descrbed n 6, n te case of equally spaced observatons, te tresold crteron allows convergence of IV ˆ n := X I { X rσ t,} to IV T = T σ sds wen, for all =,...,n, we ave rσ t, = r and r s a determnstc functon of s.t. r,, as. Here we sow tat, under fnte actvty jumps, te same estmator s also r log consstent n te case rσ, = M log, were M are proper random numbers. Concretely, assume te followng A4. Let dx t = a t dt+σ t dw t +dj t, 9 were J t = N t γ for a non-explosve countng process N and real-valued random varables γ j, a,σ are càdlàg and a.s. σ := nf s,t σ s >. Recall tat a.s., te pats of a and of σ are bounded on,t. Defne σ := sup s,t σ s, ten, te followng Proposton and Corollary old true. Proposton 3. UnderA4,fwecooser = M log,wtanym ωsuctatm ω nf s t,t σ sω, σ, we ave: a.s. η >, for suffcently small : =,...,n, I { X +ηr } = I { N=}. Corollary. For all η >, we ave n X P I { X +ηr } IV, as. Proof of Proposton 3. In order to prove te proposton, we follow and modfy te proof of Teorem n 6, n tat we sow tat a.s., for all η >, for suffcently small, we ave =,...,n,i { N=} I { X +ηr }

13 =,...,n,i { N=} I { X +ηr }. Ten te tess follows. Call X = t t a s ds+ t t σ s dw s, ā = sup s,t a s, σ = sup s,t σ s and γω = mn l: Nl γ l ω, and note tat under our assumptons Pγ =. To sow a we use te followng key fact: sup sup {,...,n} B IVt B IVt IV log IV X M log sup sup IV log IV M log M ā + M log sup {,...,n} log log were B s a standard Brownan moton and we used te fact tat σ W s a tme canged Brownan moton 7, teorems.9 and., meanng tat we can represent σ W = B IVt B IVt. By te Paul Lévy law on te modulus of contnuty of te BM pats 5, teorem 9.5 and te monotoncty of te functon xln/x on,/e, t follows tat for suffcently small te frst two factors of te last lne of last dsplay are bounded above by, so tat sup X M log M := sup ā log M +sup M log log ā + log σ log + σ log wc tends to, as. Now, n order to sow, we defne {J} = { {,,...,n} : N }, and t s suffcent to prove tat for small enoug sup X {J} +η. Indeed, sup X r {J} = sup X r {J} sup X r {,..,n} M, r tus for all η > for suffcently small, t s ensured tat sup X {J} < +η, tat s: for all, f r N = ten necessarly we ave X < +η r, and follows. In order to sow we prove tat, for suffcently small, nf X {J} > +η. In fact frstly note tat for r suffcently small all te ncrements of N are eter or. It follows tat f N, ten N =, and J concdes wt te sze, say γ l, of a sngle jump J = γ l. Ten X γ l X and r r r + M, nf {J} X r γ σ log sup {J} X M log γ +η σ log and ts tends to + wen, tus nf X {J} > + η, meanng tat f r N ten necessarly X > r +η, as we needed. Proof of Corollary. Te proof of te Corollary s stragtforward, n tat a.s. we fx any η >, and for suffcently small we ave X I { X +ηr } = X I { N=} = X snce te last term tends to n probablty, as E n X I { N } N T O. X P I { N } IV T,

14 4 CONDITIONAL MEAN SQUARE ERROR: FA jumps case We now put ourselves under A. Te quantty of our nterest ere, cmseε =. E IV ˆ IV σ,j, s suc tat ω, cmse = IV and as soon as J ten cmse+ >, because IV ˆ ε + QV. Furter, from te proof of Teorem, we ave cmse ε = ε Fε, wt Fε =. g, g = ε a + b j IV. j We analyze te sgn of Fε: for n, fxed, σ and m also are fxed, and we ave F = IV n a <, snce b j =. Furter we ave F+ = + : to see t, frst note tat, from te expresson of b ε, b + = m +σ, ten g ε ε + j m j σ ε, as ε +. Moreover, eac a π / σ exp ε σ ε σ, tus, for for some suffcently large ε, F = n a g s a fnte sum of n postve terms a g Kπ / σ ε exp constant K and fxed σ, so Fε +, as ε +. Snce F s contnuous, t follows tat, even n te absence of jumps, an optmal tresold exsts and solves Fε =. We now assume also A3. Remark 4. As n Remark 3, f ε = ε mnmzes cmse, ten t as to be true tat ε, as. In wat follows we agan also fnd tat necessarly ε +. A4. We assume A4 wt a, constant σ > and n =. Wen consderng, we assume to ave a suffcently small so tat a.s. te number of jumps occurrng durng t,t s at most ; note tat for any t we ave m I t t,t J t, so wen consderng a jump tme t we assume tat s suffcently small so tat te sgn of any m I t t,t s te same as te one of J t, n partcular f J t ten te ncrements m approacng t are non-zero. 4. Asymptotc beavor of b ε and F Proposton 4. Under A, A3, A4, f ε = ε solves Fε = and ε = ε, ten ε +. Proof. ε s suc tat n a g =,.e. n a ε + j b j IV =. For smplcty let us rename ε by ε. If ε lmnf = L,+ we can fnd a subsequence tat we recall ε suc tat lm ε = L. Note tat.e. = Now we sow tat σ ε a + j b j σ n = ε ε n a j b j n a a + a b j σ n a, j n = n σ a n a j b j. = σ n n a j bj n tends to a strctly postve constant, wc n turn means tat equalty a s mpossble, snce on any sequence ε suc tat ε L te left term tends to L, wle te rgt one tends to +. Let us ten ceck tat σ n a j bj n a tends to a strctly postve constant. Snce J as FA, a.s. we only ave fntely many J t, and, for small, N T concdes wt n I m. Recallng te explct expresson of b j also reported below, we ave b j = j j,m j= b j + j,m j b j n N T σ ε εe ε σ +n N T σ σ π π ε σ e x dx 3

15 j,m j σ ε e ε m j π σ +e ε+ m j σ Now, te factors εe ε σ and ε postve, so j e ε m j + m j e ε m j σ + e ε+ m j σ ε b j n N T σ σ π ε σ σ e ε+ m j σ e x dx+ + m j e ε m j j,m j + j,m j σ e ε+ m j σ m j +σ mj+ε σ π were f ε L as ten te frst term of te rs above tends to d := σ m j ε σ L σ π L σ m j +σ mj+ε σ π e x dx, m j ε σ e x dx. of σ π are strctly e x dx < σ, wle eac term of te latter fnte sum tends to, snce mj, so te fnte sum tends to. It follows tat, for all, j b n n j d+o, were d < σ, so a j bj n d+o, and σ a j bj n σ d+o σ d >, a a as we wanted. We now ceck te asymptotc beavor of b and a wen ε = ε tends to as n suc a way tat ε +. To ts end, for fxed σ, we defne bε,m, := σ π e ε m σ ε+m+e ε+m σ ε m + m +σ m+ε σ π m ε σ e x / dx ε m aε,m, := e σ +e ε+m σ σ, π so tat b j ε = bε,m j, and a j ε = aε,m j,, and note tat, as, we ave see te Appendx for te smple proof, σ σ π ε e ε σ +.o.t., f m =, bε,m, = 3 σ m ε e m ε σ +.o.t., f m. π σ π e ε σ, f m =, aε,m, = 4 It follows tat g ε = ε + j = ε + π ε σ π b j ε IV = ε + j :m j e m ε σ +.o.t., f m, j :m j m σ j ε m j εe σ b j ε+ j :m j= j :m j= σe ε σ bj ε σ j :m j j :m j σ σ σ σ +.o.t.. 5 Gven any sequence ε = ε = ε, wc tends to as n suc a way tat ε +, we now sow tat Fε = F ε +Rε, were F ε s consttuted by te leadng terms of F, wle Rε gves te remander ger order terms. A soluton ε of F = non necessarly s suc tat F ε =, owever f wt te ε above we ave F ε ten te wole Fε, so t as to be true tat ε s close n a way tat wll become explct later to one of te solutons ε of F =. Proposton 5. Under A4, f ε as n suc a way tat ε + ten Fε = F ε +.o.t., were F ε := ε e ε σ ε ε e σ 4σ π σ π. 4

16 Proof. For smplcty, n wat follows, we omt te dependence on n te functons aε,m, and bε,m, defned n -. Let us recall tat, under Assumpton A4, N t s te number of jumps by tme t, {γ l } l are te consecutve jumps of J and {J} = {J} n := { : n N }. It follows tat, for s small enoug, Fε = aε,m bε,m j IV = / {J} + {J} ε + j aε,m ε + = n N T aε, aε,m ε + j :j {J} j :j {J} bε,m j + bε,m j + ε σ N T ++ NT j :j/ {J} j :j/ {J} bε,m j IV bε,m j IV bε,γ k +n N T bε, σ + N T + aε,γ l ε σ N T + k +n N T bε, σ k lbε,γ l= = n N T σ ε π e N T + l= σ k= ε σ N T + 4n N T σε π e ε σ N T + k= σ ε γ k e γ k ε σ π σ γ k ε π e σ ε σ N T 4n N T σε e ε σ π In wat follows we use te followng notaton: + k l σ ε e γ k ε σ γ k π v = ε, u l = v e γ l σ, s = e v π π Now, snce u l = s p l and p l, as, + +.o.t.. σ, p l = e γ l σ γl v. Fε = n N T N T s v σ N T ++σv s ε σ γ k p k n N T + k= + N T s p l v σ N T +σv s ε σ γ k p k n N T +.o.t. l= k l = n N T s v 4σv s n + N T s p l v 4σv s n +.o.t. σ σ l= = n N T + N T p l s v 4σv s n +.o.t. σ l= = n s v v 4σs n +.o.t. σ = ε e ε σ ε ε e σ 4σ π σ π. 6 5

17 Note tat v n, but s, so wc s te leadng term between v and ns depends on te coce of v. Remark 5. Te asymptotc beavor 6 also olds for any drft process {a t } t tat as almost surely locally bounded pats recall tat any cadlag a satsfes suc a requrement and tat s ndependent on W. Indeed, for nonzero drft, by condtonng also on a, we ave tat Fε = N T aε,ā ε σ N T ++ bε,γ k +ā k + bε,ā j σ + / {J} k= N T + aε,γ l +ā l ε σ N T + bε,γ k +ā k + k l l= j :j/ {J} j l :j/ {J} bε,ā j σ, were ā = t t a s ds/ and te ndces < < < NT are defned suc tat k J, wle J = for any oter / {,,..., NT }. Next, we can follow te same arguments as above usng te facts tat, f a as locally bounded pats, for any and k aε,ā = ε σ / φ +.o.t., aε,γ k +ā k = γk ε σ / φ e γ kā k σ +.o.t.. σ bε,ā = σ σε ε φ σ 4. Asymptotc beavor of ε Corollary 3. Under A, A3, A4 we ave tat +.o.t., bε,γ k +ā k = σ γ k ε φ ε Proof. In fact, from Proposton 4 and 6, we ave tat σ ln, as. F ε = σ n s v v n s 4σ +.o.t. =, σ γk ε σ e γ kā k σ +.o.t.. ε σ were v := ε / and s = e π. Tus, v n s 4σ +.o.t. =, 7 or, equvalently, ε σ ε e 4σ +.o.t. =, π wc s exactly te condton n 6, entalng tat ε σ ln, as. Now we am at approxmatng any optmal ε := ε, wc s suc tat F ε =, usng a sequence ε = v. To ts end, we am at makng Fε as quckly as possble, te only possble way beng renderng v and ns 6 of te same order. So we want to coose v suc tat wc s exactly te condton n 7. v = ns 4σ π +.o.t., 8 6

18 Remark 6. Tere exsts a determnstc functon w of suc tat w :,,+ and w + w 3 e w w π as. In fact, for example a functon of type w = ln lnln lny, wt any contnuous functon y tendng to π as, satsfes te 3 condtons. Ten v = σw satsfes 8. However te quckest convergence speed of F to would be reaced by coosng a functon w wc satsfes te followng tree more restrctve condtons, as, w + w 3 3 e w w π, were condton 3 means tat F ε. In fact suc a w exsts, snce te followng olds true. Teorem 5. Tere exsts a unque determnstc functon w of suc tat w :,,+ and te tree condtons, and 3 are satsfed. Suc a w turns out to be dfferentable and to satsfy also te ODE w = w, wc entals tat w +w w + log. 9 We fnally reac te unqueness of te optmal tresold ε as a consequence of te followng Proposton, wose proof s n Appendx. Proposton 6. Te frst dervatve d dε Fε of F s suc tat, wen evaluated at a functon ε of suc tat ε ε, +, and ε = 4σ s +.o.t., as, ten F ε = F ε +.o.t., as, were F ε = 4 σ π e Remark 7. Unquenessof ε. SnceF ε > foranyε,wereactatforsuffcentlysmallweave d dε Fε > on any sequence ε as n te above Proposton. Tat entals tat for any suffcently small te cmse optmal ε s unque. In fact f tere exsted two optmal ε < ε we would necessarly ave tat ε ε, + and ε = 4σ s π +.o.t., but ten, for small, on suc sequences F >, and ten on suc sequences F s strctly ncreasng, and tus F ε < F ε, wc s a contradcton, because n order to be optmal bot sequences ave to satsfy F ε =. ε σ ε 3. Remark 8. Te asymptotc beavor of te optmal tresold ε = ε for te cmse crteron s te same as te one of te optmal tresold ε for te MSE crteron under FA jumps. Ts s due to te fact tat ε solves F =, ε solves G =, F = F +.o.t., G = G +.o.t., and te leadng terms n F are te ones wt m =, wc do not depend on ω, tus tey are te same as for G. It follows tat, n te case of Lévy FA jumps, we ave F = F +.o.t. = EF +.o.t. = G +.o.t.. Also, an alternatve n eurstc justfcaton s tat we expect tat Fε = ag n n nea g, tus te asymptotc beavor of te ε satsfyng G = nea g = s te same as any ε satsfyng Fε =. Remark 9. Comparson wt te results n. In a FA jumps process X s consdered, eter of Lévy type, wt jumps szes avng dstrbuton densty satsfyng gven condtons, or of Itô SM type, wt determnstc absolutely We tank Andrey Sarycev for avng provded suc nce examples. We tank Salvatore Federco for avng provded a suc nce result. Te proof s avalable upon request. 7

19 contnuous local caracterstcs addtve process. Te estmators Ĵ n = XI { X >ε }, ˆNn = I { X >ε } are consdered, and, as, frstly t s sown tat te condton ε + s necessary and suffcent for te convergence to of bot MSE IV ˆ n IV stronger condton mplyng consstency of IVn ˆ and MSEĴn J T. Secondly, te autors sow tat ε σ MSE ˆN n N T e, ε meanng tat n order to ave L Ω,P convergence to of te estmaton error ˆN n N T a stronger condton on ε s needed, mplyng ε. Trdly, exstence and unqueness of an optmal tresold ˇε mnmzng E IV ˆ n IV + ˆN n N T for fxed s obtaned, and te asymptotc expanson n of ˇε as leadng term 3σ log. Te factor 3 s ger tan te factor of te leadng terms of ε and ε : tat s due to te fact tat te mnmzaton crteron for ˇε ncludes also te error on N T, wc requres tat ˇε s ger tan ε, and tus ˇε > ε s necessary. 5 A NEW METHOD FOR FINITE JUMP ACTIVITY PROCESSES In ts secton, we propose a new metod to tuneup te tresold parameter ε := rσ, of te Tresold Realzed Varance TRV ntroduced n. Ts s based on te condtonal mean square error cm SEε = E ˆ IV IV σ,j studed n Secton 4. We llustrate te metod for a drftless FA process wt constant volatlty σ. As proved teren, te optmal tresold ε s suc tat F ε = εg ε =, g ε = ε a + b j ε nσ, j were a ε and b ε are rewrtten ere for easy reference: ε m a ε := aε,m,σ := e σ +e ε+m σ σ, π b ε := bε,m,σ := σ π σ ε+m +e ε+m σ ε m e ε m + m +σ m+ε σ π For future reference we set m = m,...,m n and Fε;σ,m := aε,m,σ ε + bε,m j,σ nσ j m ε σ e x / dx. Te man ssue wt te optmal tresold ε les on te fact tat ts depends on σ and te ncrements m = m,...,m n of te jump process, wc we don t know. Note also tat, for small enoug, eac m wll be eter or one of te jumps of te process and a good proxy of m s actually n X { n X > ε}. Te dea s ten to teratvely estmatng ε, σ, and m as follows: 3. Start wt some ntal guesses of σ and m, wc we call ˆσ and ˆm. In te sequel, we obtan ˆσ by assumng tat tere s no jump; tat s, we set ˆm =,..., and ˆσ = T n n X. 3 To be consstent wt secton 4, I corrected ε ere wt ε and put ˇε for te optmal tresold of. Ceck weter you approve. 8

20 . Usng ˆσ and ˆm, we ten fnd an ntal estmate for te optmum ε tat we denote ε. Tus, under te no-jump ntal guess of te prevous tem, ε s suc tat F ε ;ˆσ, ˆm = or, more specfcally, ε solves te equaton: ε +n ˆσ e ε ε ˆσ ε+ ˆσ ˆσ e x / dx nˆσ =. 3 π π It s easy to see tat ε s of te form v nˆσ, were vn s te unque soluton of te equaton: vn +4n v n e v n vn + e x / dx n =. 3 π π ε ˆσ Fgure sows tat v n ranges from about 3 to 4 wen n ranges from to. 3. Once we ave an ntal estmate of ε, we can update our estmates of σ and m usng te estmators: ˆσ := ˆ IV n ε := X { X ε }, ˆm := n X { n X > ε },..., n nx { n n X > ε } We contnue ts procedure teratvely by settng ε k suc tat F ε k ;ˆσ k, ˆm k =, wc s ten used to get ˆσ k+ := X { X ε k }, ˆm k+ := n X { n X > ε k },..., n nx { n X > ε k }. 34 We stop wen te sequence of estmates ˆσ k+ stablzes e.g., wen ˆσ k+ ˆσ k tol, for some desred small tolerance tol. vn n, number of observatons Fgure : Te soluton v n of equaton 3 as a functon of n. Te prevous procedure resembles te one ntroduced n, wc s based on coosng te tresold ε so to mnmze E IV ˆ n IV + ˆN n N T, or equvalently te expected number of jumps mss-classfcatons: n Lossε := E { n X >ε, n N=} + { n X ε, n N>}. 35 It was proved teren tat, for a FA Lévy processes, te optmal tresold, denoted ˇε, s asymptotcally equvalent to 3σ ln/, as. Usng ts nformaton, an teratve metod was proposed, n wc, gven an ntal estmate ˇσ of σ, t was set ˇε k := 3ˇσ k ln, ˇσ k+ := X { X ˇε k }, k. 36 9

21 In te lgt of te procedure used n, we adopt ere also te followng smpler one, oter tan te procedure 3-34 descrbed above. Snce, as proved n Secton 4, te optmal tresold ε as te asymptotc beavor σ ln/, as, t s natural to consder te followng teratve metod to estmate ε: ε k := σ k ln, σ k+ := X { X ε k }, k, 37 startng agan from an ntal guess σ of σ. It can be proved tat f we take bot ˇσ and σ equal to te realzed quadratc varaton T n n X n bot 36 and 37, ten te sequences of estmates { σ k } k, {ˇσ k } k s nonncreasng and, tus, eventually ˇσ k = ˇσ k+ and σ k = σ k+, for some k. So, we can and wll set te tolerance tol to. 5. Smulaton results We now proceed to assess te metods ntroduced above. We take a Lévy Merton s log-normal model of te form: N t X t = at+σw t + γ j, were N s a Posson process wt ntensty λ and {γ } s an ndependent sequence of ndependent normally dstrbuted varables wt mean and standard devaton µ Jmp and σ Jmp, respectvely. We consder te followng estmators:. σ := T n n X ;. Te estmator ˆσ as defned n 33 wt ntal guesses ˆσ = T n n X and ˆm =,...,; 3. ˆσ k found wt te new metod descrbed by te teratve formulas 34. We stop wen ˆσ k ˆσ k tol = 5 ; 4. Te estmator ˇσ as n 36 wt k =, usng te tresold 3ˇσ log/ wt ˇσ = T n n X ; 5. Te estmator ˇσ k defned by 36 wt k suc tat ˇσ k = ˇσ k, k ; 6. Teestmator σ asn37wtk =,usngtetresold ε = σ log/wt σ = T n n X ; 7. Te estmator σ k defned by te teratve formulas 37 and wt k suc tat σ k = σ k, k ; 8. Tresold Realzed Varance usng te tresold ε = ω wt ω = Tresold Realzed Varance usng te tresold ε = ω wt ω =.495. Realzed Bpower Varaton BPV. Tresold Realzed Varance usng a tresold of te form 4 ω BPV/T wt ω =.49 ts s used n te recent work 4;. Te estmator ˆσ := n X { X ε } gven n 33 were ε s suc tat F ε ;ˆσ, ˆm =, but ts tme takng ˆσ = T n n X { n X ε } and ˆm := n X { n X > ε },..., n nx { n n X > ε } wt ε as defned n te tem 6 above. 3. Te estmator ˆσ k defned n 34, were ε k s suc tat F ε k ;ˆσ k, ˆm k =, were ε s gven as n te tem above, and k s suc tat ˆσ k ˆσ k tol = 5. j=

22 Te adopted unt of measure s year 5 days and we consder 5 mnute observatons over a mont tme orzon wt a 6.5 ours per day open market. For our frst smulaton, we use te followng parameters: σ =.4, σ Jmp = 3, µ Jmp =, λ =, = Te dependence of σ Jmp on σ Jmp on was done for easer comparson wt standard devaton of te ncrements of te contnuous component, wc s.4. So, te standard devaton of te jumps s about 7.5 tmes te standard devaton of te contnuous component ncrement. Te parameter values n 38 yeld an expected annualzed volatlty of.45, wc s reasonable. Table below sows te sample means and standard devatons based on smulatons below Loss equals te number of jump msclassfcatons as defned by 35, wle N s te number of teratons needed to fnd te estmator s value. As sown teren, te new proposed estmator tems 3, 3 performs te best, followed by te teratve metod 7 based on 37. It takes on average teratons to fns f we take as an ntal guess for te tresold te soluton of Eq. 3. However, f we take advantage of te asymptotc beavor of ε as n te metod above, one teraton suffces. Metod ˆσ stdˆσ Loss stdloss ε stdε N stdn Table : Estmaton of te volatlty σ =.4 for a log-normal Merton model based on smulatons of 5-mnute observatons over a mont tme orzon. Te jump parameters are λ =, σ Jmp = 3 and µ Jmp =. We now double te ntensty of jumps and consder te followng parameter settng: σ =.4, σ Jmp = 3, µ Jmp =, λ =, = 56.5, wc yelds an expected annualzed volatlty of.5. Te results are sown n Table. We agan notce tat te metods 3 and 3 outperforms all te oters, followed by metod 7 based on te asymptotc beavor ε σ ln/. Fnally, we consder a jump ntensty of jumps per year but we reduce σ and σ Jmp n order to obtan an expected annualzed volatlty of.39. Concretely, we set: σ =., σ Jmp =.5, µ Jmp =, λ =, = 56.5, TeresultsaresownnTable3. Inspteofbengatougsettng, tenewmetoddoesagoodjobandoutperforms all oters, except metod 7, wc s based on te asymptotcs ε σ ln/. Note tat n ts case t takes on average 5 teratons for te teratve metods to converge.

23 Est ˆσ stdˆσ Loss stdloss ε stdε N stdn Table : Estmaton of te volatlty σ =.4 for a log-normal Merton model based on smulatons of 5-mnute observatons over a mont tme orzon. Te jump parameters are λ =, σ Jmp = 3 and µ Jmp =. Est ˆσ stdˆσ Loss stdloss ε stdε N stdn Table 3: Estmaton of te volatlty σ =. for a log-normal Merton model based on smulatons of 5-mnute observatons over a mont tme orzon. Te jump parameters are λ =, σ Jmp =.5 and µ Jmp =. 6 Conclusons We consder te problem of estmatng te ntegrated varance IV of a semmartngale model X wt jumps for te log prce of a fnancal asset. In vew of adoptng te truncated realzed varance of X, we look for a teoretcal and practcal way to select an optmal tresold n fnte samples. We consder te followng two optmalty crtera: mnmzaton of MSE, te expected quadratc error n te estmaton of IV; and mnmzaton of cmse, te expected quadratc error condtonal to te realzed pats of te jump process J and of te volatlty process σ s s. Under gven assumptons, we fnd tat for eac crteron an optmal TH exsts, s unque and s a soluton of an explctly

24 gven equaton, te equaton beng dfferent under te two crtera. Also, under eac crteron, an asymptotc expanson wt respect to te step between te observatons s possble for te optmal TH. Te leadng terms of te two expansons turn out to be proportonal to te modulus of contnuty of te Brownan moton pats and to te spot volatlty of X, wt proportonalty constant Y, Y beng te jump actvty ndex of X. It turns out tat te tresold estmator of IV constructed wt te optmal TH s consstent, at least n te fnte actvty jumps case. Te results obtaned for te cmse crteron allow for a novel numercal way to tuneup te tresold parameter n fnte samples. We llustrate te superorty of te new metod on smulated data. Mnmzaton of te condtonal mean square estmaton error n te presence of nfnte actvty jumps n X s object of furter researc. 7 Appendx: addtonal proofs Proof of Lemma. Trougout, p t denotes te densty of J t and recall tat te caracterstc functon of J t s of te form E e ujt = e ct u Y. Let us also recall tat te Fourer transform and ts nverse are defned by Fgx = π R gze zx dz and F Gx = π R Gzezx dz. In wat follows, we set u := Let us start by notng tat ε E φ σ J σ = F φ σ ε σ u = π x φ σ ε σ e ux dx. x φ σ ε σ p xdx = Fxp xdx = ufp udu, were, snce J s a symmetrc stable process, Fp u = π / e c u Y. Terefore, we obtan te representaton ε E φ σ ± J σ = σ/ π e c u Y σ u +εu du. 39 In order to prove, let us make te cange of varables w = σ / u and, ten, expand n a Taylor s expanson exp cσ Y Y/ w Y as follows: π e cσ Y Y/ w Y w + ε σ /w dw = π e w + ε σ /w dw + I k,n, k= were I k,n := k! ck σ ky k Y/ π = k! ck σ ky k Y/ π w ky e w + ε σ /w dw w ky e w cos ε σ /w dw. Te frst term of s ten clear. For te subsequent terms, let us apply te formula for te cosne ntegral transformaton of w ky e w / as well as te asymptotcs for te generalzed ypergeometrc seres or Kummer s functon Ma,b,z: I k,n = { k! ck σ ky k Y/ π +ky Γ + ky = k! ck σ ky k Y/ π +ky Γ Γ ε Γ ky σ ky + + ky Γ Γ + ky M e ε σ + ky ; } ε ; σ ε σ ky +.o.t.. 3

25 In te asymptotc formula for te Kummer s functon above, te frst term respectvely, second term vanses f Γ ky/ respectvely, Γ/ + ky/ are nfnty. Ts appens wen ky/ or / +ky/ are nonpostve ntegers. It s now evdent tat tere exsts nonzero constants a k and b k suc tat Note tat a k I k,n = Γ ky ε ky k+ + b k Γ + ky e ε σ ε ky k Y +.o.t.. Terefore, ε Y + I k,n, for all k >. We now sow. Note tat ε E J φ σ J σ = were ε ky k+ ε k+y k++ ε /Y = ε /, ε Y + ε ky k+ e ε σ ε ky k Y. x φ σ ε σ xp xdx = Fxp xu = d du Fp u = d Y π du e c u Terefore, we ave te followng representaton: ε E J φ σ J σ = σ Yc 3/ π Furtermore, ε E J φ J = σ Yc π 3/ Y Yc = σ 3 Y π ufxp xudu, = π e c u Y Ysgnuc u Y. sgnu u Y e c u Y σ u +εu du. u Y e cuy σ u snεudu Next, we expand n a Taylor s expanson exp cσ Y Y/ w Y as follows: were π w Y e cσ Y Y/ w Y w sn σ ε / w dw. w Y e cσ Y Y/ w Y w sn σ ε / w dw = I k,n := k! ck σ Yk k Y/ π I k,n, k= w k+y e w sn ε / w dw. Ten, we agan apply te followng formula for te sne ntegral transformaton of w k+y e w / : I k,n = { k! ck σ Yk k Y/ π +k+y k +Y ε k +Y Γ + M + ; 3 } ; ε. Fnally, we use te relatonsp M k +Y + ; 3 ; ε = Γ 3 Γ k+y + Γ Γ 3 + k+y ε k+y k+y + e ε σ ε σ +.o.t., wc, n turn sows tat, We ten conclude te result of te Lemma. I k,n I,n ε Y. 4

26 Proof of Lemma. Let ε I n ± := E Φ σ J σ { ± ε σ J σ } For I n, + let us note tat for a constant K, Φz Kφz for all z and, tus, ε I n + KE φ σ J σ { } ε σ J = O E φ σ For te oter term, we decompose t as follows: In = φup ε R σ J σ,u ε σ J σ = φup ε du+ φup = P J Y ε + σ J σ φup du J Y ε σu Y ε σ J σ u ε σ J σ du.. Te frst term above s well-known to be P J /Y ε = Y C /Y ε Y + O ε Y. For te second term, let us frst recall tat tere exsts a constant K suc tat for all x >, Ex := PJ x C Y x Y Kx Y. 4 Terefore, For te frst term above, note tat ε Y φup J Y ε σu Y du = C Y + du φu Y Y ε σu Y du φue Y ε σu Y du. φu Y Y ε σu Y du = φu σuε / Y du, wc, by te domnated convergence teorem, converges to /, because ε /, as n. Smlarly, usng 4, we ave φue Y ε σu Y du K φu Y Y ε σu Y du = O ε Y. Terefore, we fnally conclude tat In = Y Cε Y +O ε Y, wc mples. We now sow 3. To ts end, let us frst consder E, ε := E J { σw +J ε,j,w } εσ = /Y = Y ε σ φx Y ε σ Y x ε φ w σ Y ε w u p ududx u p ududx. Let Eu := p u Cu Y and let us recall tat, for a constant K, Eu K u Y u Y Ku Y, for all u >. Next, E, ε = C Y + Y ε σ ε σ ε φ σ ε φ w σ Y ε w w u Y dudx Y ε w u Eududw 5

27 For te frst term above, note tat ε Y φ w Y Y Y ε w dw = Y Y σ We dvde te second term n two cases. If Y, ten ε Y ε w φ w u Eududx σ K Y ε Y Y Y Y Y Y K Y ε Y Y Y K Y ε Y ε φ σ σ ε w w Y dw. ε Y φ w σ Y ε w dw ε Y ε φ σ. ε σ w w Y dw Note tat te last lmt s vald provded tat w Y dw <, wc olds true wen Y. For Y >, let us frst observe tat z u u Y u Y du Y + z Y {z>} Y Y + Y. 4 Terefore, for a constant K, ε Y ε w φ w u Eududx σ K ε σ φ w dw K. σ ε We conclude tat Next, we consder E, ε = C Y ε Y +O ε Y +O Y. E, ε := E J { σw +J ε,j,w } = /Y = C /Y + /Y φx φx φx Y ε σ Y x σ Y x Y ε σ Y x σ Y x Y ε σ Y x σ Y x Te frst term on te rgt-and sde above can be wrtten as C Y Y Y /Y Y ε φx σ ε x σ ε x u p ududx u Y dudx u Eududx. Y dx C Y ε Y, were te last asymptotc relatonsp follows from domnated convergence teorem and te facts tat / /ε and x Y φxdx <. For te second term of E, ε, we ave two cases. For Y, we ave Y ε σ Y x Y ε σ Y x Y φx u Eududx σ Y x K Y φx u Y dudx σ Y x K Y Y Y = Y Y Y ε φx σ ε x σ ε x dx K ε Y, 6

28 were agan we used domnated convergence and use te fact tat φx x Y dx <. For Y >, we just use 4 to deduce tat Y φx Y ε σ Y x σ Y x u Eu dudx K Y φxdx, for a constant K. Fnally, we conclude tat Fnally, let us consder E, = C Y ε Y +O ε Y +O Y. E 3, ε := E J { σw +J ε,j,w } σ ε = /Y + /Y σ ε φx u p ududx σ Y x φx Y ε σ Y x σ Y x u p ududx. Usng te fact tat p u Ku Y for a constant K and all u >, te frst term above s suc tat σ ε /Y φx σ Y x Smlarly, te second term can be wrtten as /Y σ ε σ ε u p ududx K /Y = K Y σ Y K 4 Y Y σ Y x φx u p ududx K σ Y x Y ε Y Puttng togeter te prevous results, we obtan tat = o 4 Y φx σ Y x u Y dudx Y σ ε φxx Y dx φxx Y dx = o ε Y. Y σ Y = o ε Y. σ ε E ε = E J { σw +J ε} = E, ε+e, ε+e 3, ε = C Y ε Y +O ε Y +O 4 Y +O Y. φxx Y dx Proof of 3. Let and recall tat, for x >, Nx = x φzdz, Rx = Nx φx φx, Rx x x 3. x φzdz φx x 7

29 Ten, for fxed m > and small enoug suc tat ε < m, we ave bε,m, = σ m ε m φ σ m ε m σ m+ε m φ σ m+ε m σ m ε φ σ ε σ m+ε φ σ ε m ε m+ε +σ 3 3/ φ σ σ 3 3/ φ m ε σ ±m +σ m ε R m+ε σ = σ m ε φ m σ ε σ m+ε φ m σ ε σ m ε + φ mm ε σ ε 3 σ m+ε φ mm+ε σ ε 3 + σ3 m ε m ε 3/ φ σ σ3 m+ε m+ε 3/ φ σ ±m +σ m ε R σ It s now clear tat 3 olds true. We can smlarly deal wt te case m <. Te asymptotc beavor for aε,m, s drect. Proof of Proposton 6. Let us fx, and n =, ten d dε Fε = n a g +a g = σ 3 3 π e ε m σ ε m +e σ ε+ m g + ε+ m e ε m σ +e ε+ m σ σ π ε+ ε a j. j We now evaluate F ε at ε suc tat ε wt ε, as. Snce agan wen m we ave e ε m σ e ε+ m σ and ε m, ten F ε π = {J} σ e ε m σ m σ g +ε+ε j a j + Note tat wtn g n 5 we ave tat te fnte sum π = s ε σ j :j {J} m p j j sneglgblewrts snce ε j :j {J}. Terefore p j a.s. m j {J} ε σ ε e σ σ εe mj ε σ j :j {J} m j ε π j :j {J} e g σ ++ε a j +.o.t. j = j :j {J} σ m j εu j σ = n N T I { {J}} +n N T I { {J}} s, g = ε 4σ π εs n N T I { {J}} +n N T I { {J}} σ N T I { {J}} +N T +I { {J}} +.o.t.. Furter, N T n and ε, ten for all g = ε 4σ π εs +.o.t.. Moreover from 7 we reac tat j a j = j,j {J} s σ + u j j,j {J} σ +.o.t., and agan te second sum s neglgble wrt te frst one, tus, for all, ε j Now, usng 8, from {J} a g = NT σ s l= p l NT σ s l= p l a j = s ε σ n N TI {m } +n N T I {m=}+.o.t. = s ε σ +.o.t.. v 4σv s n v σ N T +σv s ε k l +.o.t. γ k p k n N T +.o.t. = 8

30 we reac tat {J} a g m σ = NT σ s l= p l v 4σv γ s n l σ +.o.t. and from {J} a g = n N T σ s v σ N T ++σv s s n N T σ v 4σv s n +.o.t. we reac tat {J} a gε σ = n N T σ s v 4σv ε s n σ +.o.t.. Tus ε N T k= γ k p k n N T = F π = v v N T γ l 4σs n u l σ σ n N T ε s σ σ l= +ε + s ε σ u σ + s σ +.o.t.. J J If now our sequence ε s suc tat v = 4σns +.o.t., and notng tat also j J p j γ j a.s. and tat nε = n v = v + ten F ε NT s π = v ons σ 3 p l γ l nε + εs σ + s ε σ n N T +.o.t. l= s = nεv ons σ 3 + 4nεs σ + s ε σ +.o.t. now v = 4σns +ov means also s = ε +oε, and tus s ε = ε +o ε +, terefore F ε π = ε ε ons ε s ε ns 8 s σ 3 + 4nεs σ σ σ +o+.o.t. = 8 σ s ε s ε +.o.t. = +.o.t.. References Band, F. & Russel, J.R. 7. Realzed covaraton, realzed beta, and mcrostructure nose. Workng paper of te Graduate Scool of Busness, Te Unversty of Ccago, January 5 Fgueroa-López J. & Nsen, J. 3: Optmally tresolded realzed power varatons for Lévy jump dffuson models. Stocastc Processes and ter Applcatons 37, Fgueroa-López J. & Nsen, J. 6: Second-order propertes of tresolded realzed power varatons of FJA addtve processes. Preprnt, avalable at ttps://pages.wustl.edu/fgueroa/publcatons. 4 Jacod, J.& Todorov, V.4. Effcent Estmaton of Integrated Volatlty n Presence of Infnte Varaton Jumps. Te Annals of Statstcs 43: Karatzas, I. & Sreve, S.E Brownan moton and stocastc calculus, Sprnger. 6 Mancn, C.: Non-parametrc tresold estmaton for models wt stocastc dffuson coeffcent and jumps. Scandnavan Journal of Statstcs, 36, Revuz, D. & Yor, M.. Contnuous martngales and Brownan moton, Sprnger. 9

338 A^VÇÚO 1n ò Lke n Mancn (211), we make te followng assumpton to control te beavour of small jumps. Assumpton 1.1 L s symmetrc α-stable, were α (,

338 A^VÇÚO 1n ò Lke n Mancn (211), we make te followng assumpton to control te beavour of small jumps. Assumpton 1.1 L s symmetrc α-stable, were α (, A^VÇÚO 1n ò 1oÏ 215c8 Cnese Journal of Appled Probablty and Statstcs Vol.31 No.4 Aug. 215 Te Speed of Convergence of te Tresold Verson of Bpower Varaton for Semmartngales Xao Xaoyong Yn Hongwe (Department

More information

Stanford University CS254: Computational Complexity Notes 7 Luca Trevisan January 29, Notes for Lecture 7

Stanford University CS254: Computational Complexity Notes 7 Luca Trevisan January 29, Notes for Lecture 7 Stanford Unversty CS54: Computatonal Complexty Notes 7 Luca Trevsan January 9, 014 Notes for Lecture 7 1 Approxmate Countng wt an N oracle We complete te proof of te followng result: Teorem 1 For every

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Lecture 17 : Stochastic Processes II

Lecture 17 : Stochastic Processes II : Stochastc Processes II 1 Contnuous-tme stochastc process So far we have studed dscrete-tme stochastc processes. We studed the concept of Makov chans and martngales, tme seres analyss, and regresson analyss

More information

Nonparametric tests for analyzing the fine structure of price fluctuations

Nonparametric tests for analyzing the fine structure of price fluctuations Nonparametrc tests for analyzng te fne structure of prce fluctuatons Rama CONT & Cecla MANCINI Columba Unversty Center for Fnancal Engneerng Fnancal Engneerng Report No. 27 13 November 27. Abstract We

More information

The Finite Element Method: A Short Introduction

The Finite Element Method: A Short Introduction Te Fnte Element Metod: A Sort ntroducton Wat s FEM? Te Fnte Element Metod (FEM) ntroduced by engneers n late 50 s and 60 s s a numercal tecnque for solvng problems wc are descrbed by Ordnary Dfferental

More information
