Convergence of option rewards for multivariate price processes


Mathematical Statistics
Stockholm University

Convergence of option rewards for multivariate price processes

Robin Lundgren
Dmitrii Silvestrov

Research Report 2009:10
ISSN

Postal address:
Mathematical Statistics
Dept. of Mathematics
Stockholm University
SE Stockholm
Sweden

Internet:


Convergence of option rewards for multivariate price processes

Robin Lundgren (Mälardalen University, School of Education, Culture and Communication, Applied Mathematics, Västerås; e-mail: robin.lundgren@mdh.se)
Dmitrii Silvestrov (Stockholm University, Department of Mathematics, Mathematical Statistics, Stockholm; e-mail: silvestrov@math.su.se)

December 12, 2009

Abstract

American type options with general payoff functions possessing a polynomial rate of growth are considered for multivariate Markov price processes. Convergence results are obtained for optimal reward functionals of American type options for perturbed multivariate Markov processes. These results are applied to approximation tree type algorithms for American type options for exponential diffusion type price processes. Applications to mean-reverse price processes used to model the stochastic dynamics of energy prices are presented. Also, an application to reselling of European options is given.

Keywords: American option; convergence of option rewards; binomial-trinomial tree approximation; optimal stopping; skeleton approximation; multivariate Markov price process.

1 Introduction

An American option gives the holder the right, at any moment before some future time $T$, to exercise the option and receive a payoff determined by a function $g(t, \bar s)$. We study the model with payoff functions admitting power type upper bounds and multivariate exponential Markov price processes $\bar S(t)$. Optimal stopping problems for American type options have also been studied in Jacka (1991), Kim (1990), Peskir and Shiryaev (2006), for models with stochastic

volatility by Zhang and Lim (2006), for American barrier options in Gau, Huang and Subrahmanyam (2000), and for generalized American knock out options in Lundgren (2007, 2010). Convergence of American option rewards has been studied in Amin and Khanna (1994), Coquet and Toldo (2007), Dupuis and Wang (2005), Jönsson (2001, 2005), Jönsson, Kukush and Silvestrov (2004, 2005), Kukush and Silvestrov (2000, 2001, 2004), Prigent (2003), Silvestrov, Galochkin and Sibirtsev (1999). For recent results concerning optimal stopping we refer to the works of Dayanik and Karatzas (2003), Ekström, Lindberg, Tysk and Wanntorp (2009), Henderson and Hobson (2008). We would especially like to mention the papers Silvestrov, Jönsson and Stenberg (2006, 2007, 2009) on convergence of optimal reward functionals for American type options for one-dimensional Markov price processes modulated by stochastic indices.

We consider the convergence of reward functionals for American type options in a multi-asset setting. In the first part of the paper, we generalize the results obtained in the papers mentioned above to the model of multivariate Markov price processes. Although the main steps of the analysis in the multivariate case are similar to those in the univariate case, the multivariate aspects complicate the corresponding proofs and do require separate consideration. In the second part of the paper, we apply the general convergence results to multivariate tree type approximation algorithms for American type options with general payoff functions. We present the corresponding approximations for multivariate geometric Brownian price processes and mean-reverse diffusion price processes. The latter model was used in Schwartz (1997) for the description of the stochastic dynamics of energy prices. A similar model was also studied in Cortazar, Gravet and Urzua (2008), where American options were studied using Monte Carlo simulation. Further applications presented in the paper relate to approximation tree type algorithms for the model of optimal reselling of European type options. We refer to the recent paper Lundgren, Silvestrov and Kukush (2008), where one can find additional details on the analysis of the reselling problem.

The paper consists of 8 sections. In Section 2, we introduce the model of multivariate price processes and American type options with general payoff functions. In Section 3, we present skeleton type approximations of reward functionals for continuous time price processes by similar functionals for simpler imbedded discrete time models. In Section 4, results concerning conditions for convergence of reward functionals in discrete time models are given. Section 5 presents general results on convergence of reward functionals for American type options for continuous time price processes. In Section 6, we illustrate our general convergence results by applying them to multivariate exponential price processes with independent increments. In Sections 7 and 8, we apply our general convergence results to approximation tree type algorithms for American type options for the exponential diffusion type price processes mentioned above.

2 Reward functionals for general American type options

In this section we introduce reward functionals for American type options with general payoff functions and formulate conditions which guarantee that these functionals are well defined, i.e., take finite values. The corresponding inequalities are also used in the subsequent proofs of our convergence results.

For every $\varepsilon \ge 0$, let $\bar Y^{(\varepsilon)}(t) = (Y_1^{(\varepsilon)}(t), \ldots, Y_k^{(\varepsilon)}(t))$, $t \ge 0$, be a càdlàg Markov process with phase space $\mathbb{R}^k$ and transition probabilities $P^{(\varepsilon)}(t, \bar y, t+s, A)$. We interpret $\bar Y^{(\varepsilon)}(t)$ as a vector log-price process. Now we define a vector price process $\bar S^{(\varepsilon)}(t) = (S_1^{(\varepsilon)}(t), \ldots, S_k^{(\varepsilon)}(t))$, $t \ge 0$, with phase space $\mathbb{R}^k_+ = \mathbb{R}_+ \times \cdots \times \mathbb{R}_+$, where $\mathbb{R}_+ = (0, \infty)$, by the relations

$$S_i^{(\varepsilon)}(t) = e^{Y_i^{(\varepsilon)}(t)}, \quad i = 1, \ldots, k, \ t \ge 0. \quad (2.1)$$

Due to the one-to-one mapping and continuity properties of the exponential function, $\bar S^{(\varepsilon)}(t)$ is also a càdlàg Markov process.

For every $\varepsilon \ge 0$, let $g^{(\varepsilon)}(t, \bar s)$, $(t, \bar s) \in [0, \infty) \times \mathbb{R}^k_+$, be a payoff function. We assume $g^{(\varepsilon)}(t, \bar s)$ to be a real-valued Borel measurable function. Note that we do not assume payoff functions to be non-negative.

Let $\mathcal{F}_t^{(\varepsilon)} = \sigma(\bar Y^{(\varepsilon)}(s), s \le t)$, $t \ge 0$, be the natural filtration of $\sigma$-fields associated with the vector log-price process $\bar Y^{(\varepsilon)}(t)$, $t \ge 0$. It is useful to note that this filtration coincides with the natural filtration generated by the price process $\bar S^{(\varepsilon)}(t)$, $t \ge 0$. We consider Markov moments $\tau_\varepsilon$ with respect to the filtration $\mathcal{F}_t^{(\varepsilon)}$, $t \ge 0$. This means that $\tau_\varepsilon$ is a random variable which takes values in $[0, \infty]$ and has the property $\{\omega : \tau_\varepsilon(\omega) \le t\} \in \mathcal{F}_t^{(\varepsilon)}$, $t \ge 0$.

Let $\mathcal{M}^{(\varepsilon)}_{\max,T}$ be the class of all Markov moments $\tau_\varepsilon \le T$, where $T > 0$, and consider a class of Markov moments $\mathcal{M}^{(\varepsilon)}_T \subseteq \mathcal{M}^{(\varepsilon)}_{\max,T}$. Below we impose conditions on price processes and payoff functions which guarantee that, for all $\varepsilon$ small enough,

$$\sup_{\tau_\varepsilon \in \mathcal{M}^{(\varepsilon)}_{\max,T}} E|g^{(\varepsilon)}(\tau_\varepsilon, \bar S^{(\varepsilon)}(\tau_\varepsilon))| < \infty. \quad (2.2)$$

The main object of our studies is the reward functional, that is, the maximal expected payoff over different classes of Markov moments $\mathcal{M}^{(\varepsilon)}_T$,

$$\Phi(\mathcal{M}^{(\varepsilon)}_T) = \sup_{\tau_\varepsilon \in \mathcal{M}^{(\varepsilon)}_T} E\, g^{(\varepsilon)}(\tau_\varepsilon, \bar S^{(\varepsilon)}(\tau_\varepsilon)). \quad (2.3)$$

We are interested in conditions of convergence for reward functionals for different classes of stopping times.
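To make the objects in (2.2)-(2.3) concrete: every Markov moment in a class $\mathcal{M}_T$ yields a lower bound for the reward functional, which can be estimated by Monte Carlo. The following minimal Python sketch is our illustration (not part of the original report); the call-type payoff, the geometric Brownian log-price dynamics, the hitting-type stopping rule, and all parameter values are assumptions.

```python
import numpy as np

# Illustrative payoff with power-type growth, |g(t,s)| <= K1 + K2*s**gamma
def g(t, s, strike=1.0):
    return np.maximum(s - strike, 0.0)

def expected_reward(threshold, T=1.0, steps=250, n_paths=20000,
                    y0=0.0, mu=0.05, sigma=0.3, seed=1):
    """Monte Carlo estimate of E g(tau, S(tau)) for the hitting-type Markov
    moment tau = min(t : S(t) >= threshold) ^ T (an assumed stopping rule)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    # log-price increments of an assumed Brownian motion with drift
    dY = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, steps))
    S = np.exp(y0 + np.cumsum(dY, axis=1))
    times = np.linspace(dt, T, steps)
    rewards = np.empty(n_paths)
    for p in range(n_paths):
        hit = np.nonzero(S[p] >= threshold)[0]
        j = hit[0] if hit.size else steps - 1   # stop at first hit, else at T
        rewards[p] = g(times[j], S[p, j])
    return rewards.mean()

# Sweeping the threshold gives lower bounds for the reward functional Phi:
print(max(expected_reward(b) for b in (1.05, 1.1, 1.2, 1.3)))
```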

In particular, we formulate conditions implying the following convergence relation:

$$\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) \to \Phi(\mathcal{M}^{(0)}_{\max,T}) \quad \text{as } \varepsilon \to 0. \quad (2.4)$$

The first condition assumes the absolute continuity of payoff functions and imposes power type upper bounds on their partial derivatives:

A1: There exists $\varepsilon_0 > 0$ such that for every $0 \le \varepsilon \le \varepsilon_0$:

(a) the function $g^{(\varepsilon)}(t, \bar s)$ is absolutely continuous in $t$ with respect to the Lebesgue measure on $[0, T]$ for every fixed $\bar s \in \mathbb{R}^k_+$, and in $s_i$ with respect to the Lebesgue measure on $\mathbb{R}^k_+$ for every fixed $t \in [0, T]$;

(b) for every $\bar s \in \mathbb{R}^k_+$, the partial derivative satisfies $|\partial g^{(\varepsilon)}(t, \bar s)/\partial t| \le K_1 + K_2 \sum_{j=1}^k s_j^{\gamma_0}$ for almost all $t \in [0, T]$ with respect to the Lebesgue measure on $[0, T]$, where $0 \le K_1, K_2 < \infty$ and $\gamma_0 \ge 0$;

(c) for every $t \in [0, T]$, the partial derivative satisfies $|\partial g^{(\varepsilon)}(t, \bar s)/\partial s_m| \le K_3 + K_4 \sum_{j=1}^k s_j^{\gamma_m}$ for almost all $\bar s \in \mathbb{R}^k_+$ with respect to the Lebesgue measure on $\mathbb{R}^k_+$, where $0 \le K_3, K_4 < \infty$ and $\gamma_1, \ldots, \gamma_k \ge 0$, $m = 1, \ldots, k$;

(d) for every $t \in [0, T]$ and $\bar s \in \mathbb{R}^k_+$, the upper limit $\overline{\lim}_{\bar s \to 0} |g^{(\varepsilon)}(t, \bar s)| \le K_5$, where $0 \le K_5 < \infty$.

Note that condition A1 implies that the function $g^{(\varepsilon)}(t, \bar s)$ is continuous in $(t, \bar s) \in [0, T] \times \mathbb{R}^k_+$.

Denote $e^{\bar y} = (e^{y_1}, \ldots, e^{y_k})$. Then condition A1 can be rewritten in an equivalent form in terms of the function $g^{(\varepsilon)}(t, e^{\bar y})$:

A1': There exists $\varepsilon_0 > 0$ such that for every $0 \le \varepsilon \le \varepsilon_0$:

(a) the function $g^{(\varepsilon)}(t, e^{\bar y})$ is absolutely continuous in $t$ with respect to the Lebesgue measure on $[0, T]$ for every fixed $\bar y \in \mathbb{R}^k$, and in $y_i$ with respect to the Lebesgue measure on $\mathbb{R}^k$ for every fixed $t \in [0, T]$;

(b) for every $\bar y \in \mathbb{R}^k$, the partial derivative in $t$ is bounded as $|\partial g^{(\varepsilon)}(t, e^{\bar y})/\partial t| \le K_1 + K_2 \sum_{j=1}^k e^{\gamma_0 y_j}$ for almost all $t \in [0, T]$ with respect to the Lebesgue measure on $[0, T]$, where $0 \le K_1, K_2 < \infty$ and $\gamma_0 \ge 0$;

(c) for every $t \in [0, T]$, the partial derivative in $y_m$ is bounded as $|\partial g^{(\varepsilon)}(t, e^{\bar y})/\partial y_m| \le (K_3 + K_4 \sum_{j=1}^k e^{\gamma_m y_j}) e^{y_m}$ for almost all $\bar y \in \mathbb{R}^k$ with respect to the Lebesgue measure on $\mathbb{R}^k$, where $0 \le K_3, K_4 < \infty$ and $\gamma_1, \ldots, \gamma_k \ge 0$, $m = 1, \ldots, k$;

(d) for every $t \in [0, T]$, the upper limit $\overline{\lim}_{y_i \to -\infty,\, i = 1, \ldots, k} |g^{(\varepsilon)}(t, e^{\bar y})| \le K_5$, where $0 \le K_5 < \infty$.

We use the notations $E_{\bar y, t}$ and $P_{\bar y, t}$ for expectation and probability calculated under the condition $\bar Y^{(\varepsilon)}(t) = \bar y$. For $\beta, c, T > 0$, $i = 1, \ldots, k$, define the exponential moment modulus of compactness for the càdlàg process $Y_i^{(\varepsilon)}(t)$, $t \ge 0$,

$$\Delta_\beta(Y_i^{(\varepsilon)}(\cdot), c, T) = \sup_{0 \le t \le t+u \le t+c \le T}\ \sup_{\bar y \in \mathbb{R}^k} E_{\bar y, t}\big(e^{\beta|Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t)|} - 1\big).$$

We use the following condition for the exponential moment modulus of compactness of the log-price processes:

C1: $\lim_{c \to 0} \overline{\lim}_{\varepsilon \to 0} \Delta_\beta(Y_i^{(\varepsilon)}(\cdot), c, T) = 0$, $i = 1, \ldots, k$, for some $\beta > \gamma$, where $\gamma = \max(\gamma_0, \gamma_1 + 1, \ldots, \gamma_k + 1)$ and $\gamma_0, \gamma_1, \ldots, \gamma_k$ are the parameters introduced in condition A1,

and also the following condition:

C2: $\overline{\lim}_{\varepsilon \to 0} E e^{\beta|Y_i^{(\varepsilon)}(0)|} < \infty$, $i = 1, \ldots, k$, where $\beta$ is the parameter introduced in condition C1.

The following lemma gives upper bounds for moments of the maximum of price processes which are asymptotically uniform with respect to the perturbation parameter.

Lemma 2.1. Let conditions C1 and C2 hold. Then there exist $0 < \varepsilon_1 \le \varepsilon_0$ and a constant $L_1 < \infty$ such that for every $\varepsilon \le \varepsilon_1$ and $i = 1, \ldots, k$,

$$E \exp\{\beta \sup_{0 \le u \le T} |Y_i^{(\varepsilon)}(u)|\} \le L_1. \quad (2.5)$$

Proof. The following equality holds for each $0 \le t \le T$ and $i = 1, \ldots, k$,

$$_\beta S_i^{(\varepsilon)}(t) = \exp\{\beta \sup_{0 \le u \le t} |Y_i^{(\varepsilon)}(u)|\} = \sup_{0 \le u \le t} \exp\{\beta |Y_i^{(\varepsilon)}(u)|\}. \quad (2.6)$$

Note also that, by definition, the random variable

$$_\beta S_i^{(\varepsilon)}(0) = \exp\{\beta |Y_i^{(\varepsilon)}(0)|\}. \quad (2.7)$$

Let us also introduce the random variables

$$_\beta W_i^{(\varepsilon)}[t', t''] = \sup_{t' \le t \le t''} \exp\{\beta |Y_i^{(\varepsilon)}(t) - Y_i^{(\varepsilon)}(t')|\}, \quad 0 \le t' \le t'' \le T.$$

Define a partition $\Pi_m = \{0 = v_0^{(m)} < \ldots < v_m^{(m)} = T\}$ of the interval $[0, T]$ by the points $v_n^{(m)} = nT/m$, $n = 0, \ldots, m$. Using equality (2.6) we get the following inequalities, for $n = 1, \ldots, m$ and $i = 1, \ldots, k$,

$$\begin{aligned}
{}_\beta S_i^{(\varepsilon)}(v_n^{(m)}) &\le {}_\beta S_i^{(\varepsilon)}(v_{n-1}^{(m)}) + \sup_{v_{n-1}^{(m)} \le u \le v_n^{(m)}} \exp\{\beta |Y_i^{(\varepsilon)}(u)|\} \\
&\le {}_\beta S_i^{(\varepsilon)}(v_{n-1}^{(m)}) + \exp\{\beta |Y_i^{(\varepsilon)}(v_{n-1}^{(m)})|\}\, {}_\beta W_i^{(\varepsilon)}[v_{n-1}^{(m)}, v_n^{(m)}] \\
&\le {}_\beta S_i^{(\varepsilon)}(v_{n-1}^{(m)})\big({}_\beta W_i^{(\varepsilon)}[v_{n-1}^{(m)}, v_n^{(m)}] + 1\big). \quad (2.8)
\end{aligned}$$

Condition C1 implies that for any constant $e^{-\beta} < L_5 < 1$ one can choose $c = c(L_5) > 0$ and then $\varepsilon_1 = \varepsilon_1(c) \le \varepsilon_0$ such that for $\varepsilon \le \varepsilon_1$ and $i = 1, \ldots, k$,

$$\Delta_\beta(Y_i^{(\varepsilon)}(\cdot), c, T) + 1 \le e^{\beta} L_5. \quad (2.9)$$

Moreover, condition C2 implies that $\varepsilon_1$ can be chosen in such a way that, for some constant $L_6 = L_6(\varepsilon_1) < \infty$, the following inequality holds for $\varepsilon \le \varepsilon_1$ and $i = 1, \ldots, k$,

$$E \exp\{\beta |Y_i^{(\varepsilon)}(0)|\} \le L_6. \quad (2.10)$$

We need to show that the following inequality holds for $\varepsilon \le \varepsilon_1$ and $i = 1, \ldots, k$,

$$\sup_{0 \le t' \le t'' \le t'+c \le T}\ \sup_{\bar y \in \mathbb{R}^k} E_{\bar y, t'}\, {}_\beta W_i^{(\varepsilon)}[t', t''] \le L_7, \quad (2.11)$$

where

$$L_7 = \frac{e^{\beta}(e^{\beta} - 1)L_5}{1 - L_5} < \infty. \quad (2.12)$$

Using condition C2, relations (2.8), (2.10)-(2.12), and the Markov property of the process $\bar Y^{(\varepsilon)}(t)$, we get, for $\varepsilon \le \varepsilon_1$, $i = 1, \ldots, k$, $m = [T/c] + 1$, and $n = 1, \ldots, m$,

$$E\big({}_\beta S_i^{(\varepsilon)}(v_n^{(m)})\big) \le E\big({}_\beta S_i^{(\varepsilon)}(v_{n-1}^{(m)})\, E\{{}_\beta W_i^{(\varepsilon)}[v_{n-1}^{(m)}, v_n^{(m)}] + 1 \,/\, \bar Y^{(\varepsilon)}(v_{n-1}^{(m)})\}\big) \le E\, {}_\beta S_i^{(\varepsilon)}(v_{n-1}^{(m)})(L_7 + 1) \le \cdots \le E\, {}_\beta S_i^{(\varepsilon)}(0)(L_7+1)^n \le L_6(L_7+1)^n.$$

Finally, for $\varepsilon \le \varepsilon_1$ and $i = 1, \ldots, k$, the following inequality holds

$$E\, {}_\beta S_i^{(\varepsilon)}(v_m^{(m)}) = E \exp\{\beta \sup_{0 \le u \le T} |Y_i^{(\varepsilon)}(u)|\} \le L_6(L_7+1)^m. \quad (2.13)$$

Relation (2.13) implies that inequality (2.5) holds, for $\varepsilon \le \varepsilon_1$, with the constant

$$L_1 = L_6(L_7+1)^m. \quad (2.14)$$

To show that relations (2.11) and (2.12) hold, we have from relation (2.9) that for every $\varepsilon \le \varepsilon_1$ and $i = 1, \ldots, k$,

$$\begin{aligned}
\sup_{0 \le t' \le t \le t'' \le t'+c \le T}\ \sup_{\bar y \in \mathbb{R}^k} P_{\bar y, t}\{|Y_i^{(\varepsilon)}(t'') - Y_i^{(\varepsilon)}(t)| \ge 1\}
&\le e^{-\beta} \sup_{0 \le t' \le t \le t'' \le t'+c \le T}\ \sup_{\bar y \in \mathbb{R}^k} E_{\bar y, t} \exp\{\beta|Y_i^{(\varepsilon)}(t'') - Y_i^{(\varepsilon)}(t)|\} \\
&\le e^{-\beta}\big(\Delta_\beta(Y_i^{(\varepsilon)}(\cdot), c, T) + 1\big) \le L_5 < 1. \quad (2.15)
\end{aligned}$$

The process $|Y_i^{(\varepsilon)}(t)|$ is not a Markov process. Despite this, an analogue of the Kolmogorov inequality can be obtained by a slight modification of its standard proof for Markov processes; see, for example, Gikhman and Skorokhod (1971). We formulate it in the form of a lemma.

Lemma 2.2. Let $a, b > 0$ and assume that the process $Y_i^{(\varepsilon)}(t)$ satisfies the condition

$$\sup_{\bar y \in \mathbb{R}^k} P_{\bar y, t}\{|Y_i^{(\varepsilon)}(t'') - Y_i^{(\varepsilon)}(t)| \ge a\} \le L < 1, \quad t' \le t \le t''.$$

Then, for any vector $\bar y \in \mathbb{R}^k$,

$$P_{\bar y, t'}\{\sup_{t' \le t \le t''} |Y_i^{(\varepsilon)}(t) - Y_i^{(\varepsilon)}(t')| \ge a + b\} \le \frac{1}{1 - L}\, P_{\bar y, t'}\{|Y_i^{(\varepsilon)}(t'') - Y_i^{(\varepsilon)}(t')| \ge b\}. \quad (2.16)$$

We refer to the report Silvestrov, Jönsson and Stenberg (2006) for the proof.

To shorten notations, denote the random variables

$$W_i^+ = \sup_{t' \le t \le t''} |Y_i^{(\varepsilon)}(t) - Y_i^{(\varepsilon)}(t')|, \quad W_i^- = |Y_i^{(\varepsilon)}(t'') - Y_i^{(\varepsilon)}(t')|.$$

Note that $e^{\beta W_i^+} = {}_\beta W_i^{(\varepsilon)}[t', t'']$. Using (2.15) and Lemma 2.2, we get for every $\varepsilon \le \varepsilon_1$, $i = 1, \ldots, k$, $0 \le t' \le t'' \le t' + c \le T$, $\bar y \in \mathbb{R}^k$, and $b > 0$,

$$P_{\bar y, t'}\{W_i^+ \ge 1 + b\} \le \frac{1}{1 - L_5}\, P_{\bar y, t'}\{W_i^- \ge b\}. \quad (2.17)$$

Relations (2.9) and (2.17) imply that for every $\varepsilon \le \varepsilon_1$, $i = 1, \ldots, k$, $0 \le t' \le t'' \le t' + c \le T$, and $\bar y \in \mathbb{R}^k$,

$$\begin{aligned}
E_{\bar y, t'} e^{\beta W_i^+} &= 1 + \beta \int_0^\infty e^{\beta b}\, P_{\bar y, t'}\{W_i^+ \ge b\}\, db \\
&\le 1 + \beta \int_0^1 e^{\beta b}\, db + \beta \int_1^\infty e^{\beta b}\, P_{\bar y, t'}\{W_i^+ \ge b\}\, db \\
&\le e^\beta + \beta \int_0^\infty e^{\beta(1+b)}\, P_{\bar y, t'}\{W_i^+ \ge 1 + b\}\, db \\
&\le e^\beta + \frac{\beta e^\beta}{1 - L_5} \int_0^\infty e^{\beta b}\, P_{\bar y, t'}\{W_i^- \ge b\}\, db \\
&= \frac{e^\beta}{1 - L_5}\big(E_{\bar y, t'} e^{\beta W_i^-} - L_5\big) \le \frac{e^\beta}{1 - L_5}\big(\Delta_\beta(Y_i^{(\varepsilon)}(\cdot), c, T) + 1 - L_5\big) \le \frac{e^\beta(e^\beta - 1)L_5}{1 - L_5} = L_7. \quad (2.18)
\end{aligned}$$

Since inequality (2.18) holds for every $\varepsilon \le \varepsilon_1$, $0 \le t' \le t'' \le t' + c \le T$, and $\bar y \in \mathbb{R}^k$, it implies relation (2.11). The proof is complete.

Lemma 2.3. Let conditions A1, C1, and C2 hold. Then there exists a constant $L_2 < \infty$ such that for every $\varepsilon \le \varepsilon_1$,

$$\sup_{\tau_\varepsilon \in \mathcal{M}^{(\varepsilon)}_{\max,T}} E|g^{(\varepsilon)}(\tau_\varepsilon, \bar S^{(\varepsilon)}(\tau_\varepsilon))|^{\beta/\gamma} \le E\big(\sup_{0 \le u \le T} |g^{(\varepsilon)}(u, \bar S^{(\varepsilon)}(u))|\big)^{\beta/\gamma} \le L_2. \quad (2.19)$$

Proof. Consider the vectors $\bar s' = (s_1', \ldots, s_k')$, $\bar s'' = (s_1'', \ldots, s_k'') \in \mathbb{R}^k_+$ and construct the vectors $\bar s_i = (s_1', \ldots, s_i', s_{i+1}'', \ldots, s_k'')$, $\bar s_i(v) = (s_1', \ldots, s_{i-1}', v, s_{i+1}'', \ldots, s_k'')$, $i = 1, \ldots, k$. By definition, $\bar s_k = \bar s'$ and $\bar s_0 = \bar s''$. Define also $s_i^+ = s_i' \vee s_i''$, $s_i^- = s_i' \wedge s_i''$, $i = 1, \ldots, k$. Using condition A1 we get the following estimates, for $\varepsilon \le \varepsilon_0$ and $\bar s', \bar s'' \in \mathbb{R}^k_+$,

$$|g^{(\varepsilon)}(u, \bar s') - g^{(\varepsilon)}(u, \bar s'')| \le \sum_{i=1}^k |g^{(\varepsilon)}(u, \bar s_i) - g^{(\varepsilon)}(u, \bar s_{i-1})|$$

$$\begin{aligned}
&\le \sum_{i=1}^k \int_{s_i^-}^{s_i^+} \Big|\frac{\partial g^{(\varepsilon)}(u, \bar s_i(v))}{\partial v}\Big|\, dv
\le \sum_{i=1}^k \int_{s_i^-}^{s_i^+} \Big(K_3 + K_4 \big(\sum_{j=1}^{i-1}(s_j')^{\gamma_i} + v^{\gamma_i} + \sum_{j=i+1}^k (s_j'')^{\gamma_i}\big)\Big)\, dv \\
&\le \sum_{i=1}^k \Big(K_3(s_i^+ - s_i^-) + K_4 \big(\sum_{j=1}^k (s_j^+)^{\gamma_i}\big)(s_i^+ - s_i^-)\Big)
\le \sum_{i=1}^k \Big(K_3 + K_4 \sum_{j=1}^k (s_j^+)^{\gamma_i}\Big)(s_i^+ - s_i^-). \quad (2.20)
\end{aligned}$$

By letting $\bar s' = \bar s = (s_1, \ldots, s_k) \in \mathbb{R}^k_+$ and $s_j'' \to 0$ in (2.20), we get the following inequality, for $\varepsilon \le \varepsilon_0$ and $\bar s \in \mathbb{R}^k_+$,

$$\begin{aligned}
|g^{(\varepsilon)}(u, \bar s)| &\le K_5 + K_3 \sum_{i=1}^k s_i + K_4 \sum_{i=1}^k \sum_{j=1}^k (s_j)^{\gamma_i} s_i \\
&\le K_5 + K_3 \sum_{i=1}^k (1 \vee s_i)^{\gamma} + K_4 \sum_{i=1}^k \sum_{j=1}^k (1 \vee s_j \vee s_i)^{\gamma} \\
&\le K_5 + K_3 \sum_{i=1}^k (1 + s_i^\gamma) + K_4 \sum_{i=1}^k \sum_{j=1}^k (1 + s_j^\gamma + s_i^\gamma)
= L_8 + L_9 \sum_{i=1}^k s_i^\gamma, \quad (2.21)
\end{aligned}$$

where $L_8 = K_5 + kK_3 + k^2 K_4$, $L_9 = K_3 + 2kK_4$.

Let us denote for the moment

$$S_i^* = \sup_{0 \le u \le T} S_i^{(\varepsilon)}(u), \quad Y_i^* = \sup_{0 \le u \le T} |Y_i^{(\varepsilon)}(u)|, \quad i = 1, \ldots, k.$$

Using inequality (2.21) we get

$$\sup_{0 \le u \le T} |g^{(\varepsilon)}(u, \bar S^{(\varepsilon)}(u))| \le L_8 + L_9 \sum_{i=1}^k (S_i^*)^\gamma. \quad (2.22)$$

Using relation (2.6) from the proof of Lemma 2.1 and inequality (2.21) we get

$$\begin{aligned}
\Big(\sup_{0 \le u \le T} |g^{(\varepsilon)}(u, \bar S^{(\varepsilon)}(u))|\Big)^{\beta/\gamma}
&\le (k+1)^{\beta/\gamma - 1}\Big(L_8^{\beta/\gamma} + L_9^{\beta/\gamma} \sum_{i=1}^k \big(1 + (S_i^*)^\beta\big)\Big) \\
&= L_{10} + L_{11} \sum_{i=1}^k (S_i^*)^\beta \le L_{10} + L_{11} \sum_{i=1}^k e^{\beta Y_i^*}, \quad (2.23)
\end{aligned}$$

where $L_{10} = (k+1)^{\beta/\gamma - 1}(L_8^{\beta/\gamma} + kL_9^{\beta/\gamma})$, $L_{11} = (k+1)^{\beta/\gamma - 1}L_9^{\beta/\gamma}$. Thus, by taking expectations in (2.23) and using Lemma 2.1, we get for $\varepsilon \le \varepsilon_1$,

$$E\Big(\sup_{0 \le u \le T} |g^{(\varepsilon)}(u, \bar S^{(\varepsilon)}(u))|\Big)^{\beta/\gamma} \le L_{10} + L_{11} \sum_{i=1}^k E e^{\beta Y_i^*} \le L_{10} + kL_{11}L_1 = L_2 < \infty. \quad (2.24)$$

The proof is complete.

Inequality (2.24) implies that, for $\varepsilon \le \varepsilon_1$,

$$\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) \le \sup_{\tau_\varepsilon \in \mathcal{M}^{(\varepsilon)}_{\max,T}} E|g^{(\varepsilon)}(\tau_\varepsilon, \bar S^{(\varepsilon)}(\tau_\varepsilon))| \le \Big(E \sup_{0 \le u \le T} |g^{(\varepsilon)}(u, \bar S^{(\varepsilon)}(u))|^{\beta/\gamma}\Big)^{\gamma/\beta} \le L_2^{\gamma/\beta} < \infty. \quad (2.25)$$

Thus, the reward functional $\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T})$ is well defined for $\varepsilon \le \varepsilon_1$.

3 Skeleton approximations for reward functionals

In this section we give explicit estimates, uniform with respect to the perturbation parameter, for so-called skeleton approximations, where reward functionals for continuous time price processes are approximated by the corresponding reward functionals for discrete time price processes. These approximations play the key role in the proofs of the corresponding convergence results. They also have their own value. For example, such approximations can be used to justify Monte Carlo type algorithms, where continuous time price processes should be approximated by the corresponding discrete time processes.
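The basic discretization device behind these skeleton approximations rounds a Markov moment up to the next partition point; it reappears as the moment $\tau_{\varepsilon,\delta}[\Pi]$ in the proof of Theorem 3.2 below. A minimal Python sketch of this rounding, with an assumed uniform partition (our illustration, not part of the report):

```python
import numpy as np

def discretize_stopping_time(tau, partition):
    """Round a stopping time up to the next partition point: the moment tau[Pi]
    satisfies tau <= tau[Pi] <= tau + d(Pi) and only takes values in Pi."""
    partition = np.asarray(partition)
    idx = np.searchsorted(partition, tau, side='right')   # first t_k > tau
    return partition[min(idx, len(partition) - 1)]        # tau = T stays at T

grid = np.linspace(0.0, 1.0, 11)                          # uniform partition, d(Pi) = 0.1
print(discretize_stopping_time(0.42, grid))               # -> 0.5
```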

Let $\Pi = \{0 = t_0 < t_1 < \ldots < t_N = T\}$ be a partition of the interval $[0, T]$ and $d(\Pi) = \max_{1 \le i \le N}(t_i - t_{i-1})$. Consider the class $\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}$ of all Markov moments from $\mathcal{M}^{(\varepsilon)}_{\max,T}$ which only take the values $t_0, t_1, \ldots, t_N$, and the class $\mathcal{M}^{(\varepsilon)}_{\Pi,T}$ of all Markov moments $\tau_\varepsilon$ from $\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}$ such that the event $\{\omega : \tau_\varepsilon(\omega) = t_j\} \in \sigma(\bar Y^{(\varepsilon)}(t_0), \ldots, \bar Y^{(\varepsilon)}(t_j))$ for $j = 0, \ldots, N$. By definition,

$$\mathcal{M}^{(\varepsilon)}_{\Pi,T} \subseteq \hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T} \subseteq \mathcal{M}^{(\varepsilon)}_{\max,T}. \quad (3.1)$$

Relations (3.1) imply that, under the conditions of Lemma 2.3, for $\varepsilon \le \varepsilon_1$,

$$-\infty < \Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T}) \le \Phi(\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}) \le \Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) < \infty. \quad (3.2)$$

The reward functionals $\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T})$, $\Phi(\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T})$, and $\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T})$ correspond to an American type option in continuous time, a Bermudan type option in continuous time, and an American type option in discrete time, respectively. In the first two cases, the underlying price process is a continuous time Markov type price process, while in the third case the corresponding price process is a discrete time Markov type process. The random variables $\bar Y^{(\varepsilon)}(t_0), \bar Y^{(\varepsilon)}(t_1), \ldots, \bar Y^{(\varepsilon)}(t_N)$ are connected in a discrete time inhomogeneous Markov chain with phase space $\mathbb{R}^k$, transition probabilities $P^{(\varepsilon)}(t_n, \bar y, t_{n+1}, A)$, and initial distribution $P^{(\varepsilon)}(A)$. Note that we have slightly modified the standard definition of a discrete time Markov chain by considering the moments $t_0, \ldots, t_N$ as the moments of jumps for the Markov chain $\bar Y^{(\varepsilon)}(t_n)$ instead of the moments $0, \ldots, N$. This is done in order to synchronize the discrete and continuous time models. Thus, the optimization problem (2.3) for the class $\mathcal{M}^{(\varepsilon)}_{\Pi,T}$ is really a problem of optimal expected reward for American type options in discrete time.

The following lemma establishes a useful equality between the reward functionals $\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T})$ and $\Phi(\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T})$.

Lemma 3.1. Let conditions A1, C1 and C2 hold. Then, for any partition $\Pi = \{0 = t_0 < t_1 < \ldots < t_N = T\}$ of the interval $[0, T]$ and $\varepsilon \le \varepsilon_1$, where $\varepsilon_1$ is defined in (2.9) and (2.10),

$$\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T}) = \Phi(\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}). \quad (3.3)$$

Proof. A similar result was given in Kukush and Silvestrov (2000, 2004), and we briefly present a modified version of the corresponding proof. The optimization problem (2.3) for the class $\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}$ can be considered as a problem of optimal expected reward for American type options in discrete time. To see this, let us add to the random variables $\bar Y^{(\varepsilon)}(t_n)$ additional components $\tilde Y_n^{(\varepsilon)} =$

$\{\bar Y^{(\varepsilon)}(t), t_{n-1} < t \le t_n\}$ with the corresponding phase space $\mathcal{Y}$ endowed with the corresponding $\sigma$-field. As $\tilde Y_0^{(\varepsilon)}$ we can take an arbitrary point in $\mathcal{Y}$. Consider the extended Markov chain $\tilde{\bar Y}_n^{(\varepsilon)} = (\bar Y^{(\varepsilon)}(t_n), \tilde Y_n^{(\varepsilon)})$ with phase space $\tilde{\mathcal{Y}} = \mathbb{R}^k \times \mathcal{Y}$. As above, we slightly modify the standard definition and count the moments $t_0, \ldots, t_N$ as the moments of jumps for this Markov chain instead of the moments $0, \ldots, N$. This is done in order to synchronize the discrete and continuous time models. Denote by $\tilde{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}$ the class of all Markov moments $\tau_\varepsilon \le t_N$ for the discrete time Markov chain $\tilde{\bar Y}_n^{(\varepsilon)}$ and consider the reward functional

$$\Phi(\tilde{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}) = \sup_{\tau_\varepsilon \in \tilde{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}} E\, g^{(\varepsilon)}(\tau_\varepsilon, \bar S^{(\varepsilon)}(\tau_\varepsilon)). \quad (3.4)$$

It is readily seen that the optimization problem (2.3) for the class $\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}$ is equivalent to the optimization problem (3.4), i.e.,

$$\Phi(\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}) = \Phi(\tilde{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}). \quad (3.5)$$

It is well known (see, for example, Peskir and Shiryaev (2006)) that the optimal stopping moment $\tilde\tau_\varepsilon$ exists in any discrete time Markov model, and the optimal decision $\{\tilde\tau_\varepsilon = t_n\}$ depends only on the value $\tilde{\bar Y}_n^{(\varepsilon)}$. Moreover, the optimal Markov moment has a first hitting time structure, i.e., it is the first moment the process enters some set, that is, $\tilde\tau_\varepsilon = \min(t_n : \tilde{\bar Y}_n^{(\varepsilon)} \in \tilde D_n)$, where $\tilde D_n$, $n = 0, \ldots, N$, are some measurable subsets of the phase space $\tilde{\mathcal{Y}}$. The optimal stopping domains are determined by the transition probabilities of the extended Markov chain $\tilde{\bar Y}_n^{(\varepsilon)}$. However, the extended Markov chain $\tilde{\bar Y}_n^{(\varepsilon)}$ has transition probabilities depending only on the values of the first component $\bar Y^{(\varepsilon)}(t_n)$. As was shown in Kukush and Silvestrov (2004), the optimal Markov moment has in this case the first hitting time structure of the form $\tilde\tau_\varepsilon = \min(t_n : \bar Y^{(\varepsilon)}(t_n) \in D_n)$, where $D_n$, $n = 0, \ldots, N$, are some measurable subsets of the phase space of the first component. Therefore, for the optimal stopping moment $\tilde\tau_\varepsilon$, the decision $\{\tilde\tau_\varepsilon = t_n\}$ depends only on the value $\bar Y^{(\varepsilon)}(t_n)$, and $\tilde\tau_\varepsilon \in \mathcal{M}^{(\varepsilon)}_{\Pi,T}$. Hence,

$$\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T}) \ge E\, g^{(\varepsilon)}(\tilde\tau_\varepsilon, \bar S^{(\varepsilon)}(\tilde\tau_\varepsilon)) = \Phi(\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}). \quad (3.6)$$

Inequalities (3.2) and (3.6) imply the equality (3.3), and the proof is complete.

The following theorem gives a skeleton approximation for the reward functionals $\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T})$ which is asymptotically uniform with respect to the perturbation parameter $\varepsilon$.

Theorem 3.2. Let conditions A1, C1, and C2 hold. Let also $\varepsilon \le \varepsilon_1$ and $d(\Pi) \le c$, where $\varepsilon_1$ and $c$ are defined in relations (2.9) and (2.10). Then there exist constants

$L_3, L_4 < \infty$ such that the following skeleton approximation inequality holds, for $\varepsilon \le \varepsilon_1$,

$$|\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) - \Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T})| \le L_3\, d(\Pi) + L_4 \sum_{i=1}^k \Delta_\beta(Y_i^{(\varepsilon)}(\cdot), d(\Pi), T)^{\frac{\beta-\gamma}{\beta}}. \quad (3.7)$$

Proof. Let us assume that $\varepsilon \le \varepsilon_1$ and $d(\Pi) \le c$. For any Markov moment $\tau_\varepsilon \in \mathcal{M}^{(\varepsilon)}_{\max,T}$ and a partition $\Pi = \{0 = t_0 < t_1 < \ldots < t_N = T\}$ one can define the discretization of this moment,

$$\tau_\varepsilon[\Pi] = \begin{cases} t_k, & \text{if } t_{k-1} \le \tau_\varepsilon < t_k, \ k = 1, \ldots, N, \\ T, & \text{if } \tau_\varepsilon = T. \end{cases}$$

Let $\tau_{\varepsilon,\delta}$ be a $\delta$-optimal Markov moment in the class $\mathcal{M}^{(\varepsilon)}_{\max,T}$, i.e.,

$$E\, g^{(\varepsilon)}(\tau_{\varepsilon,\delta}, \bar S^{(\varepsilon)}(\tau_{\varepsilon,\delta})) \ge \Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) - \delta. \quad (3.8)$$

Such a $\delta$-optimal Markov moment always exists for any $\delta > 0$, by the definition of the reward functional $\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T})$. By definition, the Markov moment $\tau_{\varepsilon,\delta}[\Pi] \in \hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}$. This fact and relation (3.3) given in Lemma 3.1 imply that

$$E\, g^{(\varepsilon)}(\tau_{\varepsilon,\delta}[\Pi], \bar S^{(\varepsilon)}(\tau_{\varepsilon,\delta}[\Pi])) \le \Phi(\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,T}) = \Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T}) \le \Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}). \quad (3.9)$$

By definition,

$$\tau_{\varepsilon,\delta} \le \tau_{\varepsilon,\delta}[\Pi] \le \tau_{\varepsilon,\delta} + d(\Pi). \quad (3.10)$$

Now inequalities (3.8) and (3.9) imply the following skeleton approximation inequality,

$$0 \le \Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) - \Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T}) \le \delta + E\big|g^{(\varepsilon)}(\tau_{\varepsilon,\delta}, \bar S^{(\varepsilon)}(\tau_{\varepsilon,\delta})) - g^{(\varepsilon)}(\tau_{\varepsilon,\delta}[\Pi], \bar S^{(\varepsilon)}(\tau_{\varepsilon,\delta}[\Pi]))\big|. \quad (3.11)$$

Now we need to improve relation (2.20). We use the same notations and let also $0 \le u', u'' \le T$ and $u^+ = u' \vee u''$, $u^- = u' \wedge u''$.

Using condition A1 we get the following estimates, for $\varepsilon \le \varepsilon_0$, $\bar s', \bar s'' \in \mathbb{R}^k_+$, and $0 \le u', u'' \le T$,

$$\begin{aligned}
|g^{(\varepsilon)}(u', \bar s') - g^{(\varepsilon)}(u'', \bar s'')| &\le |g^{(\varepsilon)}(u', \bar s') - g^{(\varepsilon)}(u'', \bar s')| + |g^{(\varepsilon)}(u'', \bar s') - g^{(\varepsilon)}(u'', \bar s'')| \\
&\le \int_{u^-}^{u^+} \Big|\frac{\partial g^{(\varepsilon)}(u, \bar s')}{\partial u}\Big|\, du + \sum_{i=1}^k \int_{s_i^-}^{s_i^+} \Big|\frac{\partial g^{(\varepsilon)}(u'', \bar s_i(v))}{\partial v}\Big|\, dv \\
&\le \int_{u^-}^{u^+} \Big(K_1 + K_2 \sum_{j=1}^k (s_j')^{\gamma_0}\Big)\, du + \sum_{i=1}^k \int_{s_i^-}^{s_i^+} \Big(K_3 + K_4 \big(\sum_{j=1}^{i-1}(s_j')^{\gamma_i} + v^{\gamma_i} + \sum_{j=i+1}^k (s_j'')^{\gamma_i}\big)\Big)\, dv \\
&\le \Big(K_1 + K_2 \sum_{j=1}^k (s_j^+)^{\gamma_0}\Big)|u' - u''| + \sum_{i=1}^k \Big(K_3 + K_4 \sum_{j=1}^k (s_j^+)^{\gamma_i}\Big)|s_i' - s_i''|. \quad (3.12)
\end{aligned}$$

In the case where the vectors $\bar s' = e^{\bar y'}$, $\bar s'' = e^{\bar y''}$, i.e., $s_i' = e^{y_i'}$, $s_i'' = e^{y_i''}$, $i = 1, \ldots, k$, and, therefore, $s_i^\pm = e^{y_i^\pm}$, $i = 1, \ldots, k$, where $y_i^+ = y_i' \vee y_i''$, $y_i^- = y_i' \wedge y_i''$, inequality (3.12) can be transformed, using the inequality $|e^{y'} - e^{y''}| \le e^{y^+}(y^+ - y^-) = e^{y^+}|y' - y''|$, to the following form,

$$|g^{(\varepsilon)}(u', e^{\bar y'}) - g^{(\varepsilon)}(u'', e^{\bar y''})| \le \Big(K_1 + K_2 \sum_{j=1}^k e^{\gamma_0 y_j^+}\Big)|u' - u''| + \sum_{i=1}^k \Big(K_3 + K_4 \sum_{j=1}^k e^{\gamma_i y_j^+}\Big) e^{y_i^+} |y_i' - y_i''|. \quad (3.13)$$

To shorten notations, denote the random variables

$$\tau' = \tau_{\varepsilon,\delta}, \quad \tau'' = \tau_{\varepsilon,\delta}[\Pi], \quad Y_i' = Y_i^{(\varepsilon)}(\tau'), \quad Y_i'' = Y_i^{(\varepsilon)}(\tau''), \quad i = 1, \ldots, k,$$

and the random vectors $\bar Y' = (Y_1', \ldots, Y_k')$, $\bar Y'' = (Y_1'', \ldots, Y_k'')$.

Also define the random variables $Y_i^+ = Y_i' \vee Y_i''$, $i = 1, \ldots, k$. Then

$$Y_i^+ \le Y_i^* = \sup_{0 \le u \le T} |Y_i^{(\varepsilon)}(u)|, \quad i = 1, \ldots, k. \quad (3.14)$$

Thus, by substituting the random variables introduced above in inequality (3.13) and then using (3.14), we readily get that the following estimate holds under condition A1, for $\varepsilon \le \varepsilon_0$,

$$\begin{aligned}
|g^{(\varepsilon)}(\tau', e^{\bar Y'}) - g^{(\varepsilon)}(\tau'', e^{\bar Y''})| &\le \Big(K_1 + K_2 \sum_{i=1}^k e^{\gamma_0 Y_i^+}\Big)(\tau'' - \tau') + \sum_{i=1}^k \Big(K_3 + K_4 \sum_{j=1}^k e^{\gamma_i Y_j^+}\Big) e^{Y_i^+}|Y_i' - Y_i''| \\
&\le \Big(K_1 + K_2 \sum_{i=1}^k e^{\gamma_0 Y_i^*}\Big)(\tau'' - \tau') + \sum_{i=1}^k \Big(K_3 + K_4 \sum_{j=1}^k e^{\gamma_i Y_j^*}\Big) e^{Y_i^*}|Y_i' - Y_i''|. \quad (3.15)
\end{aligned}$$

Now, applying the Hölder inequality (first with parameters $p = \beta/\gamma$ and $q = \beta/(\beta - \gamma)$, and then with parameters $p = \gamma/(\gamma - 1)$ and $q = \gamma$) to the corresponding products of random variables on the right hand side of (3.15), and using inequality (2.5) given in Lemma 2.1 and relation (3.10), we can write down the following estimate for the expectation on the right hand side of (3.11), for $\varepsilon \le \varepsilon_1$,

$$\begin{aligned}
&E\big|g^{(\varepsilon)}(\tau_{\varepsilon,\delta}, \bar S^{(\varepsilon)}(\tau_{\varepsilon,\delta})) - g^{(\varepsilon)}(\tau_{\varepsilon,\delta}[\Pi], \bar S^{(\varepsilon)}(\tau_{\varepsilon,\delta}[\Pi]))\big| = E|g^{(\varepsilon)}(\tau', e^{\bar Y'}) - g^{(\varepsilon)}(\tau'', e^{\bar Y''})| \\
&\quad \le \Big(K_1 + K_2 \sum_{i=1}^k E e^{\gamma_0 Y_i^*}\Big)\, d(\Pi) + K_3 \sum_{i=1}^k E\, e^{Y_i^*}|Y_i' - Y_i''| + K_4 \sum_{i=1}^k \sum_{j=1}^k E\, e^{\gamma_i Y_j^*} e^{Y_i^*}|Y_i' - Y_i''| \\
&\quad \le \Big(K_1 + K_2 \sum_{i=1}^k (E e^{\beta Y_i^*})^{\gamma_0/\beta}\Big)\, d(\Pi) + K_3 \sum_{i=1}^k \big(E e^{\frac{\beta}{\gamma} Y_i^*}\big)^{\gamma/\beta}\big(E|Y_i' - Y_i''|^{\frac{\beta}{\beta-\gamma}}\big)^{\frac{\beta-\gamma}{\beta}} \\
&\qquad + K_4 \sum_{i=1}^k \sum_{j=1}^k \big(E e^{\frac{\beta}{\gamma}(\gamma_i Y_j^* + Y_i^*)}\big)^{\gamma/\beta}\big(E|Y_i' - Y_i''|^{\frac{\beta}{\beta-\gamma}}\big)^{\frac{\beta-\gamma}{\beta}} \quad (3.16)
\end{aligned}$$

$$\begin{aligned}
&\quad \le \Big(K_1 + kK_2(L_1)^{\gamma/\beta}\Big)\, d(\Pi) + K_3 \sum_{i=1}^k (L_1)^{\gamma/\beta}\big(E|Y_i' - Y_i''|^{\frac{\beta}{\beta-\gamma}}\big)^{\frac{\beta-\gamma}{\beta}} \\
&\qquad + K_4 \sum_{i=1}^k \sum_{j=1}^k \big((E e^{\frac{\beta\gamma_i}{\gamma-1} Y_j^*})^{\frac{\gamma-1}{\gamma}}(E e^{\beta Y_i^*})^{\frac{1}{\gamma}}\big)^{\gamma/\beta}\big(E|Y_i' - Y_i''|^{\frac{\beta}{\beta-\gamma}}\big)^{\frac{\beta-\gamma}{\beta}} \\
&\quad \le \Big(K_1 + kK_2(L_1)^{\gamma/\beta}\Big)\, d(\Pi) + \Big(K_3 + kK_4(L_1)^{\gamma/\beta}\Big)\big((L_1)^{\frac{\gamma-1}{\gamma}}(L_1)^{\frac{1}{\gamma}}\big)^{\gamma/\beta} \sum_{i=1}^k \big(E|Y_i' - Y_i''|^{\frac{\beta}{\beta-\gamma}}\big)^{\frac{\beta-\gamma}{\beta}},
\end{aligned}$$

where we also used that $\gamma_i \le \gamma - 1$, $i = 1, \ldots, k$. The last step in the proof is to show that, for $\varepsilon \le \varepsilon_1$ and $i = 1, \ldots, k$,

$$E|Y_i' - Y_i''|^{\frac{\beta}{\beta-\gamma}} \le L_{12}\, \Delta_\beta(Y_i^{(\varepsilon)}(\cdot), d(\Pi), T), \quad (3.17)$$

where

$$L_{12} = \sup_{y \ge 0} \frac{y^{\frac{\beta}{\beta-\gamma}}}{e^{\beta y} - 1} < \infty. \quad (3.18)$$

Indeed, in this case (3.16) takes the form

$$E\big|g^{(\varepsilon)}(\tau_{\varepsilon,\delta}, \bar S^{(\varepsilon)}(\tau_{\varepsilon,\delta})) - g^{(\varepsilon)}(\tau_{\varepsilon,\delta}[\Pi], \bar S^{(\varepsilon)}(\tau_{\varepsilon,\delta}[\Pi]))\big| \le L_3\, d(\Pi) + L_4 \sum_{i=1}^k \Delta_\beta(Y_i^{(\varepsilon)}(\cdot), d(\Pi), T)^{\frac{\beta-\gamma}{\beta}}, \quad (3.19)$$

where

$$L_3 = K_1 + kK_2(L_1)^{\gamma/\beta}, \quad L_4 = (K_3 + kK_4(L_1)^{\gamma/\beta})(L_{12})^{\frac{\beta-\gamma}{\beta}}. \quad (3.20)$$

Note that the quantity on the right hand side of (3.19) does not depend on $\delta$. Thus, we can substitute it in (3.11) and then let $\delta$ tend to zero in this relation, which results in inequality (3.7) given in Theorem 3.2.

To get inequality (3.17), we employ the method from Silvestrov (1974) for estimation of moments of increments of stochastic processes stopped at Markov type stopping moments. Introduce the function $f_\Pi(t) = t_{j+1} - t$ for $t_j \le t < t_{j+1}$, $j = 0, \ldots, N-1$, and $f_\Pi(t_N) = 0$. The function $f_\Pi(t)$ is continuous from the right on the interval $[0, T]$ and $0 \le f_\Pi(t) \le d(\Pi)$.

It follows from the definition of the function $f_\Pi(t)$ that

$$\tau'' = \tau' + f_\Pi(\tau'),$$

and

$$\bar Y^{(\varepsilon)}(\tau') - \bar Y^{(\varepsilon)}(\tau_{\varepsilon,\delta}[\Pi]) = \bar Y^{(\varepsilon)}(\tau') - \bar Y^{(\varepsilon)}(\tau' + f_\Pi(\tau')).$$

Use again the partition $\tilde\Pi_m$ of the interval $[0, T]$ by the points $v_n^{(m)} = nT/m$, $n = 0, \ldots, m$, and consider the random variables

$$\tau'[\tilde\Pi_m] = \begin{cases} v_j^{(m)}, & \text{if } v_{j-1}^{(m)} \le \tau' < v_j^{(m)}, \ j = 1, \ldots, m, \\ T, & \text{if } \tau' = T. \end{cases}$$

Thus we have $\tau' \le \tau'[\tilde\Pi_m] \le \tau' + T/m$, and the random variables $\tau'[\tilde\Pi_m] \overset{a.s.}{\to} \tau'$ as $m \to \infty$. Since $\bar Y^{(\varepsilon)}(t)$ is a vector of càdlàg processes, we also get the following relation, for each $i = 1, \ldots, k$,

$$Q_{m,i} = \big|Y_i^{(\varepsilon)}(\tau'[\tilde\Pi_m]) - Y_i^{(\varepsilon)}(\tau'[\tilde\Pi_m] + f_\Pi(\tau'[\tilde\Pi_m]))\big|^{\frac{\beta}{\beta-\gamma}} \overset{a.s.}{\to} Q_i = \big|Y_i^{(\varepsilon)}(\tau') - Y_i^{(\varepsilon)}(\tau' + f_\Pi(\tau'))\big|^{\frac{\beta}{\beta-\gamma}} \quad \text{as } m \to \infty. \quad (3.21)$$

Note also that $Q_{m,i}$ are non-negative random variables, and the following estimate holds for any $m = 1, 2, \ldots$ and $i = 1, \ldots, k$,

$$\begin{aligned}
Q_{m,i} &\le \big(|Y_i^{(\varepsilon)}(\tau'[\tilde\Pi_m])| + |Y_i^{(\varepsilon)}(\tau'[\tilde\Pi_m] + f_\Pi(\tau'[\tilde\Pi_m]))|\big)^{\frac{\beta}{\beta-\gamma}} \\
&\le 2^{\frac{\beta}{\beta-\gamma} - 1}\big(|Y_i^{(\varepsilon)}(\tau'[\tilde\Pi_m])|^{\frac{\beta}{\beta-\gamma}} + |Y_i^{(\varepsilon)}(\tau'[\tilde\Pi_m] + f_\Pi(\tau'[\tilde\Pi_m]))|^{\frac{\beta}{\beta-\gamma}}\big) \\
&\le 2^{\frac{\beta}{\beta-\gamma}}\big(\sup_{0 \le u \le T} |Y_i^{(\varepsilon)}(u)|\big)^{\frac{\beta}{\beta-\gamma}} \le 2^{\frac{\beta}{\beta-\gamma}} L_{12} \exp\{\beta \sup_{0 \le u \le T} |Y_i^{(\varepsilon)}(u)|\}. \quad (3.22)
\end{aligned}$$

Inequality (2.5), given in Lemma 2.1, implies that the random variable on the right hand side of (3.22) has finite expectation. Together with relation (3.21), we get by the Lebesgue dominated convergence theorem that, for $\varepsilon \le \varepsilon_1$ and $i = 1, \ldots, k$,

$$E Q_{m,i} \to E Q_i \quad \text{as } m \to \infty. \quad (3.23)$$

Let us now find an estimate for $E Q_{m,i}$. To reduce notation, denote for the moment $Y'_{n+1,i} = Y_i^{(\varepsilon)}(v_{n+1}^{(m)})$ and $Y''_{n+1,i} = Y_i^{(\varepsilon)}(v_{n+1}^{(m)} + f_\Pi(v_{n+1}^{(m)}))$. Since $\tau'$ is a Markov moment for the Markov process $\bar Y^{(\varepsilon)}(t)$, the random variables $\chi(v_n^{(m)} \le \tau' < v_{n+1}^{(m)})$ and $|Y'_{n+1,i} - Y''_{n+1,i}|^{\frac{\beta}{\beta-\gamma}}$ are conditionally independent with respect to the random vector

$\bar Y^{(\varepsilon)}(v_{n+1}^{(m)})$. Using this fact and the inequality $f_\Pi(v_{n+1}^{(m)}) \le d(\Pi)$, we get, for $\varepsilon \le \varepsilon_1$ and $i = 1, \ldots, k$,

$$\begin{aligned}
E Q_{m,i} &= E\big|Y_i^{(\varepsilon)}(\tau'[\tilde\Pi_m]) - Y_i^{(\varepsilon)}(\tau'[\tilde\Pi_m] + f_\Pi(\tau'[\tilde\Pi_m]))\big|^{\frac{\beta}{\beta-\gamma}} = \sum_{n=0}^{m-1} E\, |Y'_{n+1,i} - Y''_{n+1,i}|^{\frac{\beta}{\beta-\gamma}}\, \chi(v_n^{(m)} \le \tau' < v_{n+1}^{(m)}) \\
&= \sum_{n=0}^{m-1} E\big\{\chi(v_n^{(m)} \le \tau' < v_{n+1}^{(m)})\, E\{|Y'_{n+1,i} - Y''_{n+1,i}|^{\frac{\beta}{\beta-\gamma}} \,/\, \bar Y^{(\varepsilon)}(v_{n+1}^{(m)})\}\big\} \\
&\le \sum_{n=0}^{m-1} \sup_{\bar y \in \mathbb{R}^k} E_{\bar y, v_{n+1}^{(m)}} |Y'_{n+1,i} - Y''_{n+1,i}|^{\frac{\beta}{\beta-\gamma}}\, P\{v_n^{(m)} \le \tau' < v_{n+1}^{(m)}\} \\
&\le \sum_{n=0}^{m-1} L_{12} \sup_{\bar y \in \mathbb{R}^k} E_{\bar y, v_{n+1}^{(m)}}\big(\exp\{\beta|Y'_{n+1,i} - Y''_{n+1,i}|\} - 1\big)\, P\{v_n^{(m)} \le \tau' < v_{n+1}^{(m)}\} \\
&\le \sum_{n=0}^{m-1} L_{12}\, \Delta_\beta(Y_i^{(\varepsilon)}(\cdot), d(\Pi), T)\, P\{v_n^{(m)} \le \tau' < v_{n+1}^{(m)}\} \le L_{12}\, \Delta_\beta(Y_i^{(\varepsilon)}(\cdot), d(\Pi), T). \quad (3.24)
\end{aligned}$$

Relations (3.23) and (3.24) imply that, for $\varepsilon \le \varepsilon_1$ and $i = 1, \ldots, k$,

$$E Q_i = E\big|Y_i^{(\varepsilon)}(\tau') - Y_i^{(\varepsilon)}(\tau' + f_\Pi(\tau'))\big|^{\frac{\beta}{\beta-\gamma}} \le L_{12}\, \Delta_\beta(Y_i^{(\varepsilon)}(\cdot), d(\Pi), T). \quad (3.25)$$

This inequality is equivalent to inequality (3.17). The proof is complete.

4 Convergence of rewards for discrete time price processes

In this section we present convergence results for reward functionals of American type options for discrete time price processes. These results have their own value and are also essentially used in the proofs of the corresponding convergence results for continuous time price processes.

Let us now formulate conditions of convergence for the discrete time reward functionals $\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T})$ for a given partition $\Pi = \{0 = t_0 < t_1 < \ldots < t_N = T\}$ of the interval $[0, T]$. In this case it is natural to use conditions based on the transition probabilities between the sequential moments of this partition and the values of the payoff functions at the moments of this partition. Condition A1 is replaced by a simpler condition:

A2: There exists $\varepsilon_2 > 0$ such that, for every $0 \le \varepsilon \le \varepsilon_2$, the function $|g^{(\varepsilon)}(t_n, \bar s)| \le K_6 + K_7 \sum_{i=1}^k s_i^\gamma$, for $n = 0, \ldots, N$ and $\bar s \in \mathbb{R}^k_+$, for some $\gamma \ge 1$ and constants $K_6, K_7 < \infty$.

Condition A2 can be rewritten in terms of the functions $g^{(\varepsilon)}(t, e^{\bar y})$:

A2': There exists $\varepsilon_2 > 0$ such that, for every $0 \le \varepsilon \le \varepsilon_2$, the function $|g^{(\varepsilon)}(t_n, e^{\bar y})| \le K_6 + K_7 \sum_{i=1}^k e^{\gamma y_i}$, for $n = 0, \ldots, N$ and $\bar y \in \mathbb{R}^k$, for some $\gamma \ge 1$ and constants $K_6, K_7 < \infty$.

We also need an assumption about convergence of the payoff functions. In the discrete time case, the derivatives of the payoff functions are not involved. In this case, the payoff functions can be discontinuous. This is compensated by a stronger assumption concerning the convergence of the payoff functions. We require locally uniform convergence of the payoff functions on some sets, which later will be assumed to have value 1 for the corresponding limit transition probabilities and the limit initial distribution:

A3: There exists a measurable set $\tilde{\mathbb{S}}_{t_n} \subseteq \mathbb{R}^k_+$ for every $n = 0, \ldots, N$, such that $g^{(\varepsilon)}(t_n, \bar s_\varepsilon) \to g^{(0)}(t_n, \bar s)$ as $\varepsilon \to 0$, for any $\bar s_\varepsilon \to \bar s \in \tilde{\mathbb{S}}_{t_n}$ and $n = 0, \ldots, N$.

Condition A3 can be rewritten in terms of the functions $g^{(\varepsilon)}(t, e^{\bar y})$:

A3': There exist measurable sets $\tilde{\mathbb{Y}}_{t_n} \subseteq \mathbb{R}^k$, $n = 0, \ldots, N$, such that the payoff functions $g^{(\varepsilon)}(t_n, e^{\bar y_\varepsilon}) \to g^{(0)}(t_n, e^{\bar y})$ as $\varepsilon \to 0$, for any $\bar y_\varepsilon \to \bar y \in \tilde{\mathbb{Y}}_{t_n}$ and $n = 0, \ldots, N$.

The sets $\tilde{\mathbb{S}}_{t_n}$ and $\tilde{\mathbb{Y}}_{t_n}$ in conditions A3 and A3' are connected by the following relation: $\tilde{\mathbb{S}}_{t_n} = \{\bar s = e^{\bar y} : \bar y \in \tilde{\mathbb{Y}}_{t_n}\}$, $n = 0, \ldots, N$.

Let us now formulate the conditions assumed for the transition probabilities and initial distributions of the process $\bar Y^{(\varepsilon)}(t)$. The first condition assumes weak convergence (denoted by the symbol $\Rightarrow$) of the transition probabilities, which should be locally uniform with respect to initial states from some sets, and also that the corresponding limit measures are concentrated on these sets:

B1: There exist measurable sets $\mathbb{Y}_{t_n} \subseteq \mathbb{R}^k$, $n = 0, \ldots, N$, such that:

(a) $P^{(\varepsilon)}(t_n, \bar y_\varepsilon, t_{n+1}, \cdot) \Rightarrow P^{(0)}(t_n, \bar y, t_{n+1}, \cdot)$ as $\varepsilon \to 0$, for any $\bar y_\varepsilon \to \bar y \in \mathbb{Y}_{t_n}$ as $\varepsilon \to 0$ and $n = 0, \ldots, N-1$;

(b) $P^{(0)}(t_n, \bar y, t_{n+1}, \tilde{\mathbb{Y}}_{t_{n+1}} \cap \mathbb{Y}_{t_{n+1}}) = 1$ for every $\bar y \in \mathbb{Y}_{t_n}$ and $n = 0, \ldots, N-1$.

A typical example is when the complements of the sets $\tilde{\mathbb{Y}}_{t_n}$ and $\mathbb{Y}_{t_n}$ are at most finite or countable sets. Then the assumption that the measures $P^{(0)}(t_n, \bar y, t_{n+1}, A)$, $A \in \mathcal{B}_{\mathbb{R}^k}$, have no atoms implies that condition B1(b) holds.

The second condition assumes weak convergence of the initial distributions to some distribution that is assumed to be concentrated on the sets of convergence for the corresponding transition probabilities:

B2: (a) $P^{(\varepsilon)}(\cdot) \Rightarrow P^{(0)}(\cdot)$ as $\varepsilon \to 0$; (b) $P^{(0)}(\tilde{\mathbb{Y}}_{t_0} \cap \mathbb{Y}_{t_0}) = 1$, where $\tilde{\mathbb{Y}}_{t_0}$ and $\mathbb{Y}_{t_0}$ are the sets introduced in conditions A3' and B1.

A typical example is when the complements of the sets $\tilde{\mathbb{Y}}_{t_0}$ and $\mathbb{Y}_{t_0}$ are at most finite or countable sets. Then the assumption that the measure $P^{(0)}(A)$ has no atoms implies that condition B2(b) holds.

We also weaken condition C1 by replacing it with a simpler condition, which is implied by condition C1:

C3: $\overline{\lim}_{\varepsilon \to 0} \sup_{\bar y \in \mathbb{R}^k} E_{\bar y, t_n}\big(e^{\beta|Y_i^{(\varepsilon)}(t_{n+1}) - Y_i^{(\varepsilon)}(t_n)|} - 1\big) < \infty$, $n = 0, \ldots, N-1$, $i = 1, \ldots, k$, for some $\beta > \gamma$, where $\gamma$ is the parameter introduced in condition A2.

Condition C2 does not change and takes the following form:

C4: $\overline{\lim}_{\varepsilon \to 0} E e^{\beta|Y_i^{(\varepsilon)}(t_0)|} < \infty$, $i = 1, \ldots, k$, where $\beta$ is the parameter introduced in condition C3.

Condition C3 implies that there exist a constant $L_{13} < \infty$ and $\varepsilon_3 \le \varepsilon_2$ such that for $n = 0, \ldots, N-1$, $\varepsilon \le \varepsilon_3$, and $i = 1, \ldots, k$,

$$\sup_{\bar y \in \mathbb{R}^k} E_{\bar y, t_n}\big(e^{\beta|Y_i^{(\varepsilon)}(t_{n+1}) - Y_i^{(\varepsilon)}(t_n)|} - 1\big) \le L_{13}. \quad (4.1)$$

Condition C4 implies that $\varepsilon_3$ can be chosen in such a way that, for some constant $L_{14} < \infty$, the following inequality holds for $\varepsilon \le \varepsilon_3$ and $i = 1, \ldots, k$,

$$E e^{\beta|Y_i^{(\varepsilon)}(0)|} \le L_{14}. \quad (4.2)$$

A discrete analogue of inequality (2.25) can be written as

$$\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T}) \le \sup_{\tau_\varepsilon \in \mathcal{M}^{(\varepsilon)}_{\Pi,T}} E|g^{(\varepsilon)}(\tau_\varepsilon, \bar S^{(\varepsilon)}(\tau_\varepsilon))| \le \Big(E \max_{0 \le i \le N} |g^{(\varepsilon)}(t_i, \bar S^{(\varepsilon)}(t_i))|^{\beta/\gamma}\Big)^{\gamma/\beta} \le c_1 < \infty. \quad (4.3)$$

The following theorem gives conditions of convergence for the reward functionals $\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T})$ for a given partition $\Pi$.

Theorem 4.1. Let conditions A2, A3, B1, B2, C3, and C4 hold. Then the following asymptotic relation holds for the partition $\Pi = \{0 = t_0 < t_1 < \ldots < t_N = T\}$ of the interval $[0, T]$,

$$\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T}) \to \Phi(\mathcal{M}^{(0)}_{\Pi,T}) \quad \text{as } \varepsilon \to 0. \quad (4.4)$$

Proof. Since $\varepsilon \to 0$ in (4.4), we can assume that $\varepsilon \le \varepsilon_2$. The reward functions are defined by the following recursive relations,

$$w^{(\varepsilon)}(t_N, \bar y) = g^{(\varepsilon)}(t_N, e^{\bar y}), \quad \bar y \in \mathbb{R}^k,$$

and, for $n = 0, \ldots, N-1$,

$$w^{(\varepsilon)}(t_n, \bar y) = \max\big(g^{(\varepsilon)}(t_n, e^{\bar y}),\, E_{\bar y, t_n} w^{(\varepsilon)}(t_{n+1}, \bar Y^{(\varepsilon)}(t_{n+1}))\big), \quad \bar y \in \mathbb{R}^k.$$

As follows from general results on optimal stopping for discrete time Markov processes (see Chow, Robbins and Siegmund (1971), Shiryaev (1976)), the reward functional

$$\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,T}) = E\, w^{(\varepsilon)}(t_0, \bar Y^{(\varepsilon)}(0)). \quad (4.5)$$

This recursion is also directly implementable; see the sketch below.
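The following Python sketch is our illustration of the backward induction (not part of the report): it computes the reward functions $w(t_n, y)$ for an assumed univariate Gaussian log-price chain, approximating the conditional expectation by Gauss-Hermite quadrature and linear interpolation on an assumed grid; payoff, grid, and parameters are illustrative.

```python
import numpy as np

def reward_functions(g, times, y_grid, mu=0.0, sigma=0.3, n_nodes=64):
    """Backward induction: w(t_N, y) = g(t_N, e^y) and
    w(t_n, y) = max(g(t_n, e^y), E_{y,t_n} w(t_{n+1}, Y(t_{n+1}))),
    for an assumed chain Y(t_{n+1}) = y + mu*dt + sigma*sqrt(dt)*Z, Z ~ N(0,1)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / np.sqrt(2.0 * np.pi)     # so sum(weights*f(nodes)) ~ E f(Z)
    w = g(times[-1], np.exp(y_grid))             # terminal reward function
    for n in range(len(times) - 2, -1, -1):
        dt = times[n + 1] - times[n]
        cont = np.array([np.sum(weights * np.interp(y + mu*dt + sigma*np.sqrt(dt)*nodes,
                                                    y_grid, w))
                         for y in y_grid])       # continuation value E_{y,t_n} w(t_{n+1}, .)
        w = np.maximum(g(times[n], np.exp(y_grid)), cont)
    return w                                     # w[i] ~ w(t_0, y_grid[i])

# usage: an American-put-type payoff; by (4.5), Phi = E w(t_0, Y(0))
g = lambda t, s: np.maximum(1.0 - s, 0.0)
w0 = reward_functions(g, np.linspace(0.0, 1.0, 51), np.linspace(-2.0, 2.0, 401))
print(w0[200])   # reward at y = 0, i.e., S(0) = 1
```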

Condition A2 directly implies that the following power type upper bound for the reward function $w^{(\varepsilon)}(t_N, \bar y)$ holds, for $\varepsilon \le \varepsilon_3$ and $\bar y \in \mathbb{R}^k$,

$$|w^{(\varepsilon)}(t_N, \bar y)| \le L_{1,N} + L_{2,N} \sum_{i=1}^k e^{\gamma y_i}, \quad (4.6)$$

where

$$L_{1,N} = K_6, \quad L_{2,N} = K_7 < \infty. \quad (4.7)$$

Also, according to condition A3', for an arbitrary $\bar y_\varepsilon \to \bar y^{(0)}$ as $\varepsilon \to 0$, where $\bar y^{(0)} \in \tilde{\mathbb{Y}}_{t_N} \cap \mathbb{Y}_{t_N}$,

$$w^{(\varepsilon)}(t_N, \bar y_\varepsilon) \to w^{(0)}(t_N, \bar y^{(0)}) \quad \text{as } \varepsilon \to 0. \quad (4.8)$$

Let us prove that relations similar to (4.6), (4.7), and (4.8) hold for the reward functions $w^{(\varepsilon)}(t_{N-1}, \bar y)$. Using relation (4.6), we get, for $\varepsilon \le \varepsilon_3$ and $\bar y \in \mathbb{R}^k$,

$$\begin{aligned}
E_{\bar y, t_{N-1}} |g^{(\varepsilon)}(t_N, e^{\bar Y^{(\varepsilon)}(t_N)})| &\le L_{1,N} + L_{2,N} \sum_{i=1}^k E_{\bar y, t_{N-1}} e^{\gamma Y_i^{(\varepsilon)}(t_N)} \\
&\le L_{1,N} + L_{2,N} \sum_{i=1}^k e^{\gamma y_i} E_{\bar y, t_{N-1}} e^{\gamma|Y_i^{(\varepsilon)}(t_N) - Y_i^{(\varepsilon)}(t_{N-1})|} \\
&\le L_{1,N} + L_{2,N} \sum_{i=1}^k e^{\gamma y_i} E_{\bar y, t_{N-1}} e^{\beta|Y_i^{(\varepsilon)}(t_N) - Y_i^{(\varepsilon)}(t_{N-1})|} \le L_{1,N} + L_{2,N}(L_{13}+1) \sum_{i=1}^k e^{\gamma y_i}. \quad (4.9)
\end{aligned}$$

Relation (4.9) implies that, for $\varepsilon \le \varepsilon_3$ and $\bar y \in \mathbb{R}^k$,

$$|w^{(\varepsilon)}(t_{N-1}, \bar y)| \le \max\big(|g^{(\varepsilon)}(t_{N-1}, e^{\bar y})|,\, E_{\bar y, t_{N-1}}|w^{(\varepsilon)}(t_N, \bar Y^{(\varepsilon)}(t_N))|\big) \le K_6 + L_{1,N} + (K_7 + L_{2,N}(L_{13}+1)) \sum_{i=1}^k e^{\gamma y_i} = L_{1,N-1} + L_{2,N-1} \sum_{i=1}^k e^{\gamma y_i}, \quad (4.10)$$

where

$$L_{1,N-1} = K_6 + L_{1,N}, \quad L_{2,N-1} = K_7 + L_{2,N}(L_{13}+1) < \infty. \quad (4.11)$$

Let us introduce, for every $n = 0, \ldots, N-1$ and $\bar y \in \mathbb{R}^k$, a random vector $\bar Y_n^{(\varepsilon)}(\bar y)$ such that $P\{\bar Y_n^{(\varepsilon)}(\bar y) \in A\} = P^{(\varepsilon)}(t_n, \bar y, t_{n+1}, A)$, $A \in \mathcal{B}_{\mathbb{R}^k}$. Let us prove that, for any $\bar y_\varepsilon \to \bar y^{(0)} \in \tilde{\mathbb{Y}}_{t_{N-1}} \cap \mathbb{Y}_{t_{N-1}}$ as $\varepsilon \to 0$, the following relation takes place,

$$w^{(\varepsilon)}(t_N, \bar Y_{N-1}^{(\varepsilon)}(\bar y_\varepsilon)) \Rightarrow w^{(0)}(t_N, \bar Y_{N-1}^{(0)}(\bar y^{(0)})) \quad \text{as } \varepsilon \to 0. \quad (4.12)$$

Take an arbitrary sequence $\varepsilon_r \to \varepsilon_0 = 0$ as $r \to \infty$. From condition B1 we have, for an arbitrary $\bar y^{(\varepsilon_r)} \to \bar y^{(\varepsilon_0)} \in \tilde{\mathbb{Y}}_{t_{N-1}} \cap \mathbb{Y}_{t_{N-1}}$ as $r \to \infty$,

$$\bar Y_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)}) \Rightarrow \bar Y_{N-1}^{(\varepsilon_0)}(\bar y^{(\varepsilon_0)}) \quad \text{as } r \to \infty, \quad (4.13)$$

and

$$P\{\bar Y_{N-1}^{(\varepsilon_0)}(\bar y^{(\varepsilon_0)}) \in \tilde{\mathbb{Y}}_{t_N} \cap \mathbb{Y}_{t_N}\} = 1. \quad (4.14)$$

Now, according to the Skorokhod representation theorem, one can construct random variables $\tilde{\bar Y}_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)})$, $r = 0, 1, \ldots$, on some probability space $(\Omega, \mathcal{F}, P)$ such that, for every $r = 0, 1, \ldots$ and $A \in \mathcal{B}_{\mathbb{R}^k}$,

$$P\{\tilde{\bar Y}_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)}) \in A\} = P\{\bar Y_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)}) \in A\}, \quad (4.15)$$

and

$$\tilde{\bar Y}_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)}) \overset{a.s.}{\to} \tilde{\bar Y}_{N-1}^{(\varepsilon_0)}(\bar y^{(\varepsilon_0)}) \quad \text{as } r \to \infty. \quad (4.16)$$

Denote

$$A_{N-1} = \{\omega \in \Omega : \tilde{\bar Y}_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)}, \omega) \to \tilde{\bar Y}_{N-1}^{(\varepsilon_0)}(\bar y^{(\varepsilon_0)}, \omega) \text{ as } r \to \infty\}$$

and

$$B_{N-1} = \{\omega \in \Omega : \tilde{\bar Y}_{N-1}^{(\varepsilon_0)}(\bar y^{(\varepsilon_0)}, \omega) \in \tilde{\mathbb{Y}}_{t_N} \cap \mathbb{Y}_{t_N}\}.$$

Relation (4.16) implies that $P(A_{N-1}) = 1$. Also, relations (4.14) and (4.15) imply that $P(B_{N-1}) = 1$. Thus $P(A_{N-1} \cap B_{N-1}) = 1$. By relation (4.8) and the definition of the sets $A_{N-1}$ and $B_{N-1}$, the sequence $w^{(\varepsilon_r)}(t_N, \tilde{\bar Y}_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)}, \omega)) \to w^{(\varepsilon_0)}(t_N, \tilde{\bar Y}_{N-1}^{(\varepsilon_0)}(\bar y^{(\varepsilon_0)}, \omega))$ as $r \to \infty$, for every $\omega \in A_{N-1} \cap B_{N-1}$, i.e., the random variables

$$w^{(\varepsilon_r)}(t_N, \tilde{\bar Y}_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)})) \overset{a.s.}{\to} w^{(\varepsilon_0)}(t_N, \tilde{\bar Y}_{N-1}^{(\varepsilon_0)}(\bar y^{(\varepsilon_0)})) \quad \text{as } r \to \infty. \quad (4.17)$$

Relation (4.15) implies that, for every $r = 0, 1, \ldots$ and $A \in \mathcal{B}_{\mathbb{R}^1}$,

$$P\{w^{(\varepsilon_r)}(t_N, \tilde{\bar Y}_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)})) \in A\} = P\{w^{(\varepsilon_r)}(t_N, \bar Y_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)})) \in A\}. \quad (4.18)$$

Relations (4.17) and (4.18) imply that the random variables

$$w^{(\varepsilon_r)}(t_N, \bar Y_{N-1}^{(\varepsilon_r)}(\bar y^{(\varepsilon_r)})) \Rightarrow w^{(\varepsilon_0)}(t_N, \bar Y_{N-1}^{(\varepsilon_0)}(\bar y^{(\varepsilon_0)})) \quad \text{as } r \to \infty. \quad (4.19)$$

Since the sequence $\varepsilon_r \to \varepsilon_0$ was chosen arbitrarily, relation (4.19) implies the relation of weak convergence (4.12).

Using inequalities (4.1) and (4.10) and condition C3, we get, for any sequence $\bar y_\varepsilon \to \bar y^{(0)} \in \tilde{\mathbb{Y}}_{t_{N-1}} \cap \mathbb{Y}_{t_{N-1}}$ as $\varepsilon \to 0$ and for $\varepsilon \le \varepsilon_3$,

$$\begin{aligned}
E|w^{(\varepsilon)}(t_N, \bar Y_{N-1}^{(\varepsilon)}(\bar y_\varepsilon))|^{\beta/\gamma} &= E_{\bar y_\varepsilon, t_{N-1}} |w^{(\varepsilon)}(t_N, \bar Y^{(\varepsilon)}(t_N))|^{\beta/\gamma} \le E_{\bar y_\varepsilon, t_{N-1}}\Big(L_{1,N} + L_{2,N} \sum_{i=1}^k e^{\gamma Y_i^{(\varepsilon)}(t_N)}\Big)^{\beta/\gamma} \\
&\le (k+1)^{\beta/\gamma - 1}\Big((L_{1,N})^{\beta/\gamma} + (L_{2,N})^{\beta/\gamma} \sum_{i=1}^k e^{\beta y_{\varepsilon,i}}\, E_{\bar y_\varepsilon, t_{N-1}} e^{\beta|Y_i^{(\varepsilon)}(t_N) - y_{\varepsilon,i}|}\Big) \quad (4.20)
\end{aligned}$$

$$\le (k+1)^{\beta/\gamma - 1}\Big((L_{1,N})^{\beta/\gamma} + (L_{2,N})^{\beta/\gamma}(L_{13}+1) \sum_{i=1}^k e^{\beta y_{\varepsilon,i}}\Big),$$

and, therefore,

$$\overline{\lim}_{\varepsilon \to 0} E|w^{(\varepsilon)}(t_N, \bar Y_{N-1}^{(\varepsilon)}(\bar y_\varepsilon))|^{\beta/\gamma} < \infty. \quad (4.21)$$

Relations (4.12) and (4.21) imply that, for any sequence $\bar y_\varepsilon \to \bar y_0 \in \tilde{\mathbb{Y}}_{t_{N-1}} \cap \mathbb{Y}_{t_{N-1}}$ as $\varepsilon \to 0$,

$$E_{\bar y_\varepsilon, t_{N-1}} w^{(\varepsilon)}(t_N, \bar Y^{(\varepsilon)}(t_N)) \to E_{\bar y_0, t_{N-1}} w^{(0)}(t_N, \bar Y^{(0)}(t_N)) \quad \text{as } \varepsilon \to 0, \quad (4.22)$$

since, by $\beta/\gamma > 1$, the moment bound (4.21) yields uniform integrability of the weakly convergent random variables in (4.12).

Relations (4.22), (2.1), and condition A3 imply that, for any sequence $\bar y_\varepsilon \to \bar y_0 \in \tilde{\mathbb{Y}}_{t_{N-1}} \cap \mathbb{Y}_{t_{N-1}}$ as $\varepsilon \to 0$,

$$w^{(\varepsilon)}(t_{N-1}, \bar y_\varepsilon) = \max\big(g^{(\varepsilon)}(t_{N-1}, e^{\bar y_\varepsilon}),\, E_{\bar y_\varepsilon, t_{N-1}} w^{(\varepsilon)}(t_N, \bar Y^{(\varepsilon)}(t_N))\big) \to w^{(0)}(t_{N-1}, \bar y^{(0)}) = \max\big(g^{(0)}(t_{N-1}, e^{\bar y^{(0)}}),\, E_{\bar y^{(0)}, t_{N-1}} w^{(0)}(t_N, \bar Y^{(0)}(t_N))\big) \quad \text{as } \varepsilon \to 0. \quad (4.23)$$

Relations (4.10), (4.11), and (4.23) are analogues of relations (4.6), (4.7), and (4.8). By repeating the recursive procedure described above, we finally get that, under conditions A2, A3, B1, and C3, for $\varepsilon \le \varepsilon_3$, $n = 0, 1, \ldots, N$, and $\bar y \in \mathbb{R}^k$,

$$|w^{(\varepsilon)}(t_n, \bar y)| \le L_{1,n} + L_{2,n} \sum_{i=1}^k e^{\gamma y_i}, \quad (4.24)$$

where the constants

$$L_{1,n}, L_{2,n} < \infty, \quad (4.25)$$

and that, for an arbitrary $\bar y_{\varepsilon,n} \to \bar y_n^{(0)}$ as $\varepsilon \to 0$, where $\bar y_n^{(0)} \in \tilde{\mathbb{Y}}_{t_n} \cap \mathbb{Y}_{t_n}$, and for every $n = 0, 1, \ldots, N$,

$$w^{(\varepsilon)}(t_n, \bar y_{\varepsilon,n}) \to w^{(0)}(t_n, \bar y_n^{(0)}) \quad \text{as } \varepsilon \to 0. \quad (4.26)$$

Let us take an arbitrary sequence $\varepsilon_r \to \varepsilon_0 = 0$ as $r \to \infty$. By condition B2, the random vectors

$$\bar Y^{(\varepsilon_r)}(0) \Rightarrow \bar Y^{(\varepsilon_0)}(0) \quad \text{as } r \to \infty, \quad (4.27)$$

and

$$P\{\bar Y^{(\varepsilon_0)}(0) \in \tilde{\mathbb{Y}}_{t_0} \cap \mathbb{Y}_{t_0}\} = 1. \quad (4.28)$$

According to the Skorokhod representation theorem, one can construct random variables

$\tilde{\bar Y}^{(\varepsilon_r)}(0)$, $r = 0, 1, \ldots$, on some probability space $(\Omega, \mathcal{F}, P)$ such that for every $r = 0, 1, \ldots$ and $A \in \mathcal{B}_{\mathbb{R}^k}$,

$$P\{\tilde{\bar Y}^{(\varepsilon_r)}(0) \in A\} = P\{\bar Y^{(\varepsilon_r)}(0) \in A\}, \quad (4.29)$$

and

$$\tilde{\bar Y}^{(\varepsilon_r)}(0) \overset{a.s.}{\to} \tilde{\bar Y}^{(\varepsilon_0)}(0) \quad \text{as } r \to \infty. \quad (4.30)$$

Let us denote

$$A = \{\omega \in \Omega : \tilde{\bar Y}^{(\varepsilon_r)}(0, \omega) \to \tilde{\bar Y}^{(\varepsilon_0)}(0, \omega) \text{ as } r \to \infty\}$$

and

$$B = \{\omega \in \Omega : \tilde{\bar Y}^{(\varepsilon_0)}(0, \omega) \in \tilde{\mathbb{Y}}_{t_0} \cap \mathbb{Y}_{t_0}\}.$$

Relation (4.30) implies that $P(A) = 1$. Relations (4.28) and (4.29) imply that $P(B) = 1$. Thus $P(A \cap B) = 1$. By relation (4.26) and the definition of the sets $A$ and $B$, the sequence $w^{(\varepsilon_r)}(t_0, \tilde{\bar Y}^{(\varepsilon_r)}(0, \omega)) \to w^{(\varepsilon_0)}(t_0, \tilde{\bar Y}^{(\varepsilon_0)}(0, \omega))$ as $r \to \infty$, for every $\omega \in A \cap B$, i.e., the random variables

$$w^{(\varepsilon_r)}(t_0, \tilde{\bar Y}^{(\varepsilon_r)}(0)) \overset{a.s.}{\to} w^{(\varepsilon_0)}(t_0, \tilde{\bar Y}^{(\varepsilon_0)}(0)) \quad \text{as } r \to \infty. \quad (4.31)$$

Relation (4.29) implies that for every $r = 0, 1, \ldots$ and $A \in \mathcal{B}_{\mathbb{R}^1}$,

$$P\{w^{(\varepsilon_r)}(t_0, \tilde{\bar Y}^{(\varepsilon_r)}(0)) \in A\} = P\{w^{(\varepsilon_r)}(t_0, \bar Y^{(\varepsilon_r)}(0)) \in A\}. \quad (4.32)$$

Relations (4.31) and (4.32) imply that the random variables

$$w^{(\varepsilon_r)}(t_0, \bar Y^{(\varepsilon_r)}(0)) \Rightarrow w^{(\varepsilon_0)}(t_0, \bar Y^{(\varepsilon_0)}(0)) \quad \text{as } r \to \infty. \quad (4.33)$$

Since the sequence $\varepsilon_r \to \varepsilon_0$ was chosen arbitrarily, relation (4.33) implies that

$$w^{(\varepsilon)}(t_0, \bar Y^{(\varepsilon)}(0)) \Rightarrow w^{(0)}(t_0, \bar Y^{(0)}(0)) \quad \text{as } \varepsilon \to 0. \quad (4.34)$$

Using inequalities (4.2) and (4.24) and condition C4, we get for $\varepsilon \le \varepsilon_3$,

$$\begin{aligned}
E|w^{(\varepsilon)}(t_0, \bar Y^{(\varepsilon)}(0))|^{\beta/\gamma} &\le E\Big(L_{1,0} + L_{2,0} \sum_{i=1}^k e^{\gamma|Y_i^{(\varepsilon)}(0)|}\Big)^{\beta/\gamma} \\
&\le (k+1)^{\beta/\gamma - 1}\Big((L_{1,0})^{\beta/\gamma} + (L_{2,0})^{\beta/\gamma} \sum_{i=1}^k E e^{\beta|Y_i^{(\varepsilon)}(0)|}\Big) \\
&\le (k+1)^{\beta/\gamma - 1}\big((L_{1,0})^{\beta/\gamma} + k(L_{2,0})^{\beta/\gamma} L_{14}\big), \quad (4.35)
\end{aligned}$$

and, therefore,

$$\overline{\lim}_{\varepsilon \to 0} E|w^{(\varepsilon)}(t_0, \bar Y^{(\varepsilon)}(0))|^{\beta/\gamma} < \infty. \quad (4.36)$$

Relations (4.34) and (4.36) imply that

$$E\, w^{(\varepsilon)}(t_0, \bar Y^{(\varepsilon)}(0)) \to E\, w^{(0)}(t_0, \bar Y^{(0)}(0)) \quad \text{as } \varepsilon \to 0. \quad (4.37)$$

Formula (4.5) and relation (4.37) imply relation (4.4) given in Theorem 4.1.

5 Convergence of rewards for continuous time price processes

We are now ready to formulate and prove our main convergence result for rewards of American type options for continuous time processes. We now formulate conditions of convergence for the reward functionals $\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T})$ for the continuous time model. As was mentioned above, in the discrete time case the payoff functions can be discontinuous. In the continuous time case, the derivatives of the payoff functions are involved in condition A1. The corresponding assumptions imply continuity of the payoff functions. This gives us the possibility to weaken the assumption concerning the convergence of the payoff functions and to require just their pointwise convergence:

A4: $g^{(\varepsilon)}(t, \bar s) \to g^{(0)}(t, \bar s)$ as $\varepsilon \to 0$, for every $(t, \bar s) \in [0, T] \times \mathbb{R}^k_+$.

Condition A4 can be rewritten in terms of the functions $g^{(\varepsilon)}(t, e^{\bar y})$:

A4': $g^{(\varepsilon)}(t, e^{\bar y}) \to g^{(0)}(t, e^{\bar y})$ as $\varepsilon \to 0$, for every $(t, \bar y) \in [0, T] \times \mathbb{R}^k$.

Remark 1. See, for example, the paper Kukush and Silvestrov (2004) for an example where a payoff function is approximated by a perturbed discrete payoff function.

Let us now formulate the conditions assumed for the transition probabilities and the initial distributions of the process $\bar Y^{(\varepsilon)}(t)$. The first condition assumes weak convergence of the transition probabilities, which should be locally uniform with respect to initial states from some sets, and also that the corresponding limit measures are concentrated on these sets:

B3: There exist measurable sets $\mathbb{Y}_t \subseteq \mathbb{R}^k$, $t \in [0, T]$, such that:

(a) $P^{(\varepsilon)}(t, \bar y_\varepsilon, t+u, \cdot) \Rightarrow P^{(0)}(t, \bar y, t+u, \cdot)$ as $\varepsilon \to 0$, for any $\bar y_\varepsilon \to \bar y \in \mathbb{Y}_t$ as $\varepsilon \to 0$ and $0 \le t < t+u \le T$;

(b) $P^{(0)}(t, \bar y, t+u, \mathbb{Y}_{t+u}) = 1$ for every $\bar y \in \mathbb{Y}_t$ and $0 \le t < t+u \le T$.

The second condition assumes weak convergence of the initial distributions to some distribution that is assumed to be concentrated on the sets of convergence for the corresponding transition probabilities:

B4: (a) $P^{(\varepsilon)}(\cdot) \Rightarrow P^{(0)}(\cdot)$ as $\varepsilon \to 0$; (b) $P^{(0)}(\mathbb{Y}_0) = 1$, where $\mathbb{Y}_0$ is the set introduced in condition B3.

The following theorem presents our main convergence result. It gives conditions of convergence for the reward functionals $\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T})$.

Theorem 5.1. Let conditions A1, A4, B3, B4, C1, and C2 hold. Then

$$\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) \to \Phi(\mathcal{M}^{(0)}_{\max,T}) \quad \text{as } \varepsilon \to 0. \quad (5.1)$$

Proof. Let $\Pi_N = \{0 = t_0 < t_1 < \ldots < t_N = T\}$ be a sequence of partitions of the interval $[0, T]$. Since we study the case when $\varepsilon \to 0$, we can assume that $d(\Pi_N) \le c$ and $\varepsilon \le \varepsilon_1$, where $c$ and $\varepsilon_1$ are defined in relations (2.9) and (2.10); this ensures that the corresponding reward functionals are finite. The following lemmas play the key role in the proof.

Lemma 5.2. Let conditions A1, C1, and C2 hold. Then the following relation holds for any sequence of partitions $\Pi_N$ such that $d(\Pi_N) \to 0$ as $N \to \infty$,

$$\lim_{N \to \infty} \overline{\lim}_{\varepsilon \to 0} \big(\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) - \Phi(\mathcal{M}^{(\varepsilon)}_{\Pi_N,T})\big) = 0. \quad (5.2)$$

Proof. This lemma is a direct corollary of Theorem 3.2, which implies that, under conditions A1, C1, and C2, there exist constants $L_3, L_4 < \infty$ such that the following skeleton approximation inequality holds for $\varepsilon \le \varepsilon_1$ and $N$ such that $d(\Pi_N) \le c$,

$$|\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) - \Phi(\mathcal{M}^{(\varepsilon)}_{\Pi_N,T})| \le L_3\, d(\Pi_N) + L_4 \sum_{i=1}^k \Delta_\beta(Y_i^{(\varepsilon)}(\cdot), d(\Pi_N), T)^{\frac{\beta-\gamma}{\beta}}. \quad (5.3)$$

This estimate directly implies relation (5.2).

Lemma 5.3. Let conditions A1, A4, B3, B4, C1, and C2 hold. Then the conditions of Theorem 4.1 hold for any partition $\Pi_N = \{0 = t_0 < t_1 < \ldots < t_N = T\}$ of the interval $[0, T]$ and, therefore, the following asymptotic relation holds,

$$\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi_N,T}) \to \Phi(\mathcal{M}^{(0)}_{\Pi_N,T}) \quad \text{as } \varepsilon \to 0. \quad (5.4)$$

Proof. Take an arbitrary partition $\Pi_N = \{0 = t_0 < t_1 < \ldots < t_N = T\}$ of the interval $[0, T]$. Inequality (2.21), given in the proof of Lemma 2.3, implies that, for every $n = 0, \ldots, N$, $\bar s \in \mathbb{R}^k_+$, and $\varepsilon \le \varepsilon_0$,

$$|g^{(\varepsilon)}(t_n, \bar s)| \le L_8 + L_9 \sum_{i=1}^k (s_i)^\gamma, \quad (5.5)$$

where $\gamma = \max(\gamma_0, \gamma_1 + 1, \ldots, \gamma_k + 1)$. Thus condition A2 holds for the partition $\Pi_N$ with the constants $K_6 = L_8$, $K_7 = L_9$ and the parameter $\gamma$ given above. Also, inequality (2.20), given in the proof of Lemma 2.3, implies that for every $n = 0, \ldots, N$, $\bar s', \bar s'' \in \mathbb{R}^k_+$, and $\varepsilon \le \varepsilon_0$,

$$|g^{(\varepsilon)}(t_n, \bar s') - g^{(\varepsilon)}(t_n, \bar s'')| \le \sum_{i=1}^k \Big(K_3 + K_4 \sum_{j=1}^k (s_j^+)^{\gamma_i}\Big)(s_i^+ - s_i^-), \quad (5.6)$$

and, therefore, for any $0 < u_- < u_+ < \infty$,

$$\sup_{u_- \le s_i', s_i'' \le u_+,\ |s_i' - s_i''| \le c,\ i = 1, \ldots, k} |g^{(\varepsilon)}(t_n, \bar s') - g^{(\varepsilon)}(t_n, \bar s'')| \le L_{15}\, c, \quad (5.7)$$

where $L_{15} = \sum_{i=1}^k (K_3 + K_4 \sum_{j=1}^k (u_+)^{\gamma_i})$.

Condition A4 and inequalities (5.5) and (5.7) imply that, for every $n = 0, \ldots, N$ and $0 < u_- < u_+ < \infty$, the conditions of the Ascoli-Arzelà theorem hold for the payoff functions $g^{(\varepsilon)}(t_n, \bar s)$, $u_- \le s_i \le u_+$, $i = 1, \ldots, k$. Thus, these functions converge uniformly, i.e.,

$$\sup_{u_- \le s_i \le u_+,\ i = 1, \ldots, k} |g^{(\varepsilon)}(t_n, \bar s) - g^{(0)}(t_n, \bar s)| \to 0 \quad \text{as } \varepsilon \to 0. \quad (5.8)$$

Relation (5.8) implies that the condition of locally uniform convergence A3 holds for the partition $\Pi_N$ with the corresponding sets $\tilde{\mathbb{S}}_{t_n} = \mathbb{R}^k_+$, for $n = 0, 1, \ldots, N$.

It remains to show that condition C1 implies condition C3. Condition C1 implies that for any constant $L_{16} < \infty$ one can choose $c = c(L_{16}) > 0$ and then $\varepsilon_4 = \varepsilon_4(c) \le \varepsilon_2$ such that for $\varepsilon \le \varepsilon_4$ and $i = 1, \ldots, k$,

$$\Delta_\beta(Y_i^{(\varepsilon)}(\cdot), c, T) \le L_{16}. \quad (5.9)$$

Take an arbitrary integer $0 \le n \le N$ and consider the uniform partition $t_n = u_0^{(m)} < \ldots < u_m^{(m)} = t_{n+1}$ of the interval $[t_n, t_{n+1}]$ by the points $u_j^{(m)} = t_n + (t_{n+1} - t_n)j/m$. Relation (5.9) and the Markov property of the processes $\bar Y^{(\varepsilon)}(t)$ imply that for $\varepsilon \le \varepsilon_4$, $m = [\frac{t_{n+1} - t_n}{c}] + 1$ (in this case $\frac{t_{n+1} - t_n}{m} \le c$), $\bar y \in \mathbb{R}^k$, $j = 1, \ldots, m$, and $i = 1, \ldots, k$,

$$\begin{aligned}
E_{\bar y, t_n}\big(e^{\beta|Y_i^{(\varepsilon)}(u_j^{(m)}) - Y_i^{(\varepsilon)}(u_0^{(m)})|} - 1\big)
&\le E_{\bar y, t_n}\big(e^{\beta|Y_i^{(\varepsilon)}(u_{j-1}^{(m)}) - Y_i^{(\varepsilon)}(u_0^{(m)})|}\, e^{\beta|Y_i^{(\varepsilon)}(u_j^{(m)}) - Y_i^{(\varepsilon)}(u_{j-1}^{(m)})|}\big) - 1 \\
&= E_{\bar y, t_n}\big\{e^{\beta|Y_i^{(\varepsilon)}(u_{j-1}^{(m)}) - Y_i^{(\varepsilon)}(u_0^{(m)})|}\, E\{e^{\beta|Y_i^{(\varepsilon)}(u_j^{(m)}) - Y_i^{(\varepsilon)}(u_{j-1}^{(m)})|} - 1 + 1 \,/\, \bar Y^{(\varepsilon)}(u_{j-1}^{(m)})\}\big\} - 1 \\
&\le E_{\bar y, t_n}\big(e^{\beta|Y_i^{(\varepsilon)}(u_{j-1}^{(m)}) - Y_i^{(\varepsilon)}(u_0^{(m)})|} - 1\big)(L_{16} + 1) + L_{16}. \quad (5.10)
\end{aligned}$$

Finally, we get, for $\varepsilon \le \varepsilon_4$, $\bar y \in \mathbb{R}^k$, and $i = 1, \ldots, k$,

$$E_{\bar y, t_n}\big(e^{\beta|Y_i^{(\varepsilon)}(t_{n+1}) - Y_i^{(\varepsilon)}(t_n)|} - 1\big) = E_{\bar y, t_n}\big(e^{\beta|Y_i^{(\varepsilon)}(u_m^{(m)}) - Y_i^{(\varepsilon)}(u_0^{(m)})|} - 1\big) \le (L_{16}+1)^m + L_{16} \sum_{j=0}^{m-1}(L_{16}+1)^j = 2(L_{16}+1)^m - 1 < \infty. \quad (5.11)$$

Thus condition C3 holds with the parameter $\beta$ given in condition C1. Finally, condition C2 is equivalent to condition C4. The proof of Lemma 5.3 is complete.

Lemmas 5.2 and 5.3 let us make the final step in the proof of Theorem 5.1. We employ the following inequality, which can be written down for any partition $\Pi_N$,

$$|\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) - \Phi(\mathcal{M}^{(0)}_{\max,T})| \le |\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) - \Phi(\mathcal{M}^{(\varepsilon)}_{\Pi_N,T})| + |\Phi(\mathcal{M}^{(0)}_{\max,T}) - \Phi(\mathcal{M}^{(0)}_{\Pi_N,T})| + |\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi_N,T}) - \Phi(\mathcal{M}^{(0)}_{\Pi_N,T})|. \quad (5.12)$$

Using this inequality and relation (5.4) given in Lemma 5.3, we get for any partition $\Pi_N$ with $d(\Pi_N) \le c$,

$$\overline{\lim}_{\varepsilon \to 0} |\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) - \Phi(\mathcal{M}^{(0)}_{\max,T})| \le \overline{\lim}_{\varepsilon \to 0} |\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) - \Phi(\mathcal{M}^{(\varepsilon)}_{\Pi_N,T})| + |\Phi(\mathcal{M}^{(0)}_{\max,T}) - \Phi(\mathcal{M}^{(0)}_{\Pi_N,T})|. \quad (5.13)$$

Finally, relation (5.2) given in Lemma 5.2 implies (note that the relation $\varepsilon \to 0$ also admits the case where $\varepsilon = 0$) that the expression on the right hand side of (5.13) can be forced to take a value less than any given $\delta > 0$ by choosing a partition $\Pi_N$ with diameter $d(\Pi_N)$ small enough. This proves the asymptotic relation (5.1), and thus the proof of Theorem 5.1 is complete.

In conclusion of this section, let us formulate some useful sufficient conditions for the important moment compactness condition C1. Let us introduce the modulus of J-compactness, for $h, c > 0$, $i = 1, \ldots, k$,

$$\Delta(Y_i^{(\varepsilon)}(\cdot), h, c, T) = \sup_{0 \le t \le t+u \le t+c \le T}\ \sup_{\bar y \in \mathbb{R}^k} P_{\bar y, t}\{|Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t)| \ge h\}.$$

The following condition of J-compactness plays the key role in functional limit theorems for Markov type càdlàg processes:

C5: $\lim_{c \to 0} \overline{\lim}_{\varepsilon \to 0} \Delta(Y_i^{(\varepsilon)}(\cdot), h, c, T) = 0$, $h > 0$, $i = 1, \ldots, k$.

Introduce also the quantity which represents the maximum of moment generating functions for increments of the log-price processes $Y_i^{(\varepsilon)}(t)$, $i = 1, \ldots, k$,

$$\Xi_\beta(Y_i^{(\varepsilon)}(\cdot), T) = \sup_{0 \le t \le t+u \le T}\ \sup_{\bar y \in \mathbb{R}^k} E_{\bar y, t}\, e^{\beta(Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t))}, \quad \beta \in \mathbb{R}^1.$$

The following condition, formulated in terms of these moment generating functions, can be effectively verified in many cases:

C6: $\overline{\lim}_{\varepsilon \to 0} \Xi_{\pm\beta'}(Y_i^{(\varepsilon)}(\cdot), T) < \infty$, $i = 1, \ldots, k$, for some $\beta' > \beta$, where $\beta$ is the parameter introduced in condition C1.

Lemma 5.4. Conditions C5 and C6 imply condition C1.

Proof. Using the Hölder inequality we get the following estimates, for every $0 \le t \le t+u \le T$, $\bar y \in \mathbb{R}^k$, and $i = 1, \ldots, k$,

$$\begin{aligned}
E_{\bar y, t}\, e^{\beta|Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t)|} - 1 &\le (e^{\beta h} - 1) + E_{\bar y, t}\, e^{\beta|Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t)|}\, \chi(|Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t)| \ge h) \\
&\le (e^{\beta h} - 1) + \big(E_{\bar y, t}\, e^{\beta'|Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t)|}\big)^{\beta/\beta'}\, P_{\bar y, t}\{|Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t)| \ge h\}^{\frac{\beta'-\beta}{\beta'}}. \quad (5.14)
\end{aligned}$$

The following inequality, which connects the exponential moment modulus of compactness with the modulus of J-compactness, follows from (5.14),

$$\Delta_\beta(Y_i^{(\varepsilon)}(\cdot), c, T) \le (e^{\beta h} - 1) + \big(\Delta_{\beta'}(Y_i^{(\varepsilon)}(\cdot), c, T) + 1\big)^{\beta/\beta'}\, \Delta(Y_i^{(\varepsilon)}(\cdot), h, c, T)^{\frac{\beta'-\beta}{\beta'}}. \quad (5.15)$$

Also, the following estimate takes place, for every $0 \le t \le t+u \le T$, $\bar y \in \mathbb{R}^k$, and $i = 1, \ldots, k$,

$$\begin{aligned}
E_{\bar y, t}\, e^{\beta'|Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t)|} &= E_{\bar y, t}\, e^{\beta'(Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t))}\, \chi(Y_i^{(\varepsilon)}(t+u) \ge Y_i^{(\varepsilon)}(t)) + E_{\bar y, t}\, e^{-\beta'(Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t))}\, \chi(Y_i^{(\varepsilon)}(t+u) < Y_i^{(\varepsilon)}(t)) \\
&\le E_{\bar y, t}\, e^{\beta'(Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t))} + E_{\bar y, t}\, e^{-\beta'(Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t))}. \quad (5.16)
\end{aligned}$$

This estimate implies the following inequality,

$$\Delta_{\beta'}(Y_i^{(\varepsilon)}(\cdot), c, T) + 1 \le \Xi_{\beta'}(Y_i^{(\varepsilon)}(\cdot), T) + \Xi_{-\beta'}(Y_i^{(\varepsilon)}(\cdot), T). \quad (5.17)$$

Relations (5.15) and (5.17) imply the statement of Lemma 5.4.

6 Multivariate exponential price processes with independent increments

In this section we apply the general convergence results to the important case of exponential price processes with independent increments. Let us consider the model where the log-price process $\bar Y^{(\varepsilon)}(t)$, $t \ge 0$, is a càdlàg process with independent increments. We also assume for simplicity that the initial state of the process $\bar Y^{(\varepsilon)}(0) = \bar y^{(\varepsilon)} = (y_i^{(\varepsilon)}, i = 1, \ldots, k)$ is a constant. The process $\bar Y^{(\varepsilon)}(t)$ is a càdlàg Markov process with transition probabilities which are connected with the distributions of increments of this process, $P^{(\varepsilon)}(t, t+u, A)$, by the following relation,

$$P^{(\varepsilon)}(t, \bar y, t+u, A) = P^{(\varepsilon)}(t, t+u, A - \bar y) = P\{\bar y + \bar Y^{(\varepsilon)}(t+u) - \bar Y^{(\varepsilon)}(t) \in A\}. \quad (6.1)$$

Let us assume the following standard condition of weak convergence for the distributions of increments of the log-price processes:

D1: $P^{(\varepsilon)}(t, t+u, \cdot) \Rightarrow P^{(0)}(t, t+u, \cdot)$ as $\varepsilon \to 0$, for $0 \le t \le t+u \le T$.

Representation (6.1) implies in an obvious way that condition B3 holds with the sets $\mathbb{Y}_t = \mathbb{R}^k$, $t \in [0, T]$, i.e., the distributions of increments of the processes $\bar Y^{(\varepsilon)}(t)$ converge weakly locally uniformly, if condition D1 holds. Thus, in the case of processes with independent increments, condition B3 with the sets $\mathbb{Y}_t = \mathbb{R}^k$ is, in fact, equivalent to the standard condition of weak convergence for such processes. In this case the J-compactness modulus $\Delta(Y_i^{(\varepsilon)}(\cdot), h, c, T)$ takes the following form:

$$\Delta(Y_i^{(\varepsilon)}(\cdot), h, c, T) = \sup_{0 \le t \le t+u \le t+c \le T} P\{|Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t)| \ge h\}.$$

Thus, condition C5 is reduced to the standard J-compactness condition for the log-price processes with independent increments:

D2: $\lim_{c \to 0} \overline{\lim}_{\varepsilon \to 0} \Delta(Y_i^{(\varepsilon)}(\cdot), h, c, T) = 0$, $h > 0$, $i = 1, \ldots, k$.

Note that conditions D1 and D2 imply J-convergence of the processes $\bar Y^{(\varepsilon)}(t)$, $t \in [0, T]$, to the process $\bar Y^{(0)}(t)$, $t \in [0, T]$, as $\varepsilon \to 0$, and stochastic continuity of the limit process. Also, the quantities $\Xi_\beta(Y_i^{(\varepsilon)}(\cdot), T)$, $i = 1, \ldots, k$, take a simplified form,

$$\Xi_\beta(Y_i^{(\varepsilon)}(\cdot), T) = \sup_{0 \le t \le t+u \le T} E\, e^{\beta(Y_i^{(\varepsilon)}(t+u) - Y_i^{(\varepsilon)}(t))}, \quad \beta \in \mathbb{R}^1.$$

Therefore, condition C6 takes the following form:

D3: $\overline{\lim}_{\varepsilon \to 0} \Xi_{\pm\beta'}(Y_i^{(\varepsilon)}(\cdot), T) < \infty$, $i = 1, \ldots, k$, for some $\beta' > \beta$, where $\beta$ is the parameter introduced in condition C1.

According to Lemma 5.4, conditions D2 and D3 imply condition C1. Condition B4 is reduced in this case to the following condition:

D4: $\bar y^{(\varepsilon)} \to \bar y^{(0)}$ as $\varepsilon \to 0$.

Note that $\bar y^{(0)}$ can be any vector with real-valued components, since the set $\mathbb{Y}_0 = \mathbb{R}^k$. Note also that condition D4 implies condition C4. The following theorem summarizes the remarks above.

Theorem 6.1. Let conditions A1, A4, and D1-D4 hold for the exponential price processes with independent increments $\bar S^{(\varepsilon)}(t)$. Then

$$\Phi(\mathcal{M}^{(\varepsilon)}_{\max,T}) \to \Phi(\mathcal{M}^{(0)}_{\max,T}) < \infty \quad \text{as } \varepsilon \to 0. \quad (6.2)$$
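For Gaussian log-price increments (the setting of the next section), the moment generating function of an increment is explicit, so conditions of type D3 can be checked in closed form. A small Python sketch of this check, under the Gaussian assumption (illustrative parameter values):

```python
import numpy as np

def xi(beta, mu, sigma, T):
    """Xi_beta(Y_i(.), T) = sup_{0 <= u <= T} E e^{beta (Y_i(t+u) - Y_i(t))} for
    Gaussian increments: E e^{beta X} = exp((beta*mu + beta**2*sigma**2/2) u),
    which is monotone in u, so the sup is attained at u = 0 or u = T."""
    rate = beta * mu + 0.5 * beta**2 * sigma**2
    return np.exp(max(0.0, rate * T))

# Condition D3 requires Xi_{+beta'} and Xi_{-beta'} finite for some beta' > beta:
beta_prime = 8.0
print(xi(+beta_prime, mu=0.05, sigma=0.3, T=1.0),
      xi(-beta_prime, mu=0.05, sigma=0.3, T=1.0))   # both finite
```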

The skeleton approximations $\bar Y^{(\varepsilon)}(t) = \bar Y^{(0)}([t/\varepsilon]\varepsilon)$, $t \ge 0$, of a stochastically continuous càdlàg log-price process $\bar Y^{(0)}(t)$, $t \ge 0$, with independent increments give a good example of the model introduced above. In this case, conditions D1 and D2 automatically hold. Condition D3 is implied by the following condition:

D5: $\Xi_{\pm\beta'}(Y_i^{(0)}(\cdot), T) < \infty$, $i = 1, \ldots, k$, for some $\beta' > \beta$, where $\beta$ is the parameter introduced in condition C1.

Thus, if conditions A1, A4, and D5 hold, then the statement of Theorem 6.1 holds for the exponential price processes $\bar S^{(\varepsilon)}(t) = e^{\bar Y^{(0)}([t/\varepsilon]\varepsilon)}$, $t \in [0, T]$. Note that, in this case, Theorem 3.2 yields a stronger result in the form of explicit estimates for the accuracy of the skeleton approximations of the reward functionals. Note also that the optimal expected rewards for the skeleton price processes $\bar S^{(\varepsilon)}(t) = e^{\bar Y^{(0)}([t/\varepsilon]\varepsilon)}$ can be estimated with the use of Monte Carlo simulation.

7 Binomial tree option reward approximations for multivariate Brownian motion

In order to illustrate the results presented above, let us consider the model where $k = 2$ and the bivariate geometric Brownian price process $\bar S^{(0)}(t) = e^{\bar Y^{(0)}(t)}$, $t \ge 0$, where the log-price process $\bar Y^{(0)}(t) = (Y_1^{(0)}(t), Y_2^{(0)}(t))$, $t \ge 0$, is a bivariate Brownian motion with components

$$Y_i^{(0)}(t) = y_i^{(0)} + \mu_i t + \sigma_i W_i(t), \quad t \ge 0, \ i = 1, 2,$$

which are correlated, i.e., $E\, W_1(t) W_2(t) = \rho t$, $t \ge 0$.

We approximate the process $\bar Y^{(0)}(t)$, $t \ge 0$, with a bivariate binomial sum-process $\bar Y^{(\varepsilon)}(t) = (Y_1^{(\varepsilon)}(t), Y_2^{(\varepsilon)}(t))$, $t \ge 0$, with components

$$Y_i^{(\varepsilon)}(t) = y_i^{(0)} + \sum_{n=1}^{[t/\varepsilon]} Y_{n,i}^{(\varepsilon)}, \quad t \ge 0, \ i = 1, 2.$$

Here $\bar Y_n^{(\varepsilon)} = (Y_{n,1}^{(\varepsilon)}, Y_{n,2}^{(\varepsilon)})$, $n = 1, 2, \ldots$, are, for every $\varepsilon > 0$, i.i.d. random vectors which have the following structure,

$$(Y_{n,1}^{(\varepsilon)}, Y_{n,2}^{(\varepsilon)}) = \begin{cases} (+u_1^{(\varepsilon)}, +u_2^{(\varepsilon)}) & \text{with probability } p_{++}^{(\varepsilon)}, \\ (+u_1^{(\varepsilon)}, -u_2^{(\varepsilon)}) & \text{with probability } p_{+-}^{(\varepsilon)}, \\ (-u_1^{(\varepsilon)}, +u_2^{(\varepsilon)}) & \text{with probability } p_{-+}^{(\varepsilon)}, \\ (-u_1^{(\varepsilon)}, -u_2^{(\varepsilon)}) & \text{with probability } p_{--}^{(\varepsilon)}. \end{cases} \quad (7.1)$$

Since we assumed that the initial state $\bar Y^{(\varepsilon)}(0) = \bar y^{(0)} = (y_1^{(0)}, y_2^{(0)})$ is a constant which does not depend on $\varepsilon$, conditions B4 and C4 automatically hold. In order to fit the bivariate binomial sum-processes defined above to the limit bivariate Brownian motion, we should fit the expectations, variances, and covariances.
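The fitting of $u_i^{(\varepsilon)}$ and $p_{\pm\pm}^{(\varepsilon)}$ can be done by matching the first two moments of one tree step to those of the Brownian increment over a time step $\varepsilon$. The following Python sketch uses one standard moment-matching parametrization, of Boyle-Evnine-Gibbs type (an illustrative assumption of ours, not necessarily the exact parametrization adopted in the report): $u_i^{(\varepsilon)} = \sigma_i\sqrt{\varepsilon}$ and $p_{ab}^{(\varepsilon)} = \frac{1}{4}(1 + a\sqrt{\varepsilon}\,\mu_1/\sigma_1 + b\sqrt{\varepsilon}\,\mu_2/\sigma_2 + ab\,\rho)$ for $a, b \in \{+1, -1\}$.

```python
import numpy as np

def fit_bivariate_tree(mu1, mu2, s1, s2, rho, eps):
    """Moment matching for the i.i.d. steps (Y_n1, Y_n2) in (7.1):
    with u_i = sigma_i*sqrt(eps), the probabilities below match the mean,
    variance and covariance of the Brownian increments to first order in eps."""
    u1, u2 = s1 * np.sqrt(eps), s2 * np.sqrt(eps)
    p = {}
    for a in (+1, -1):
        for b in (+1, -1):
            p[(a, b)] = 0.25 * (1 + a * np.sqrt(eps) * mu1 / s1
                                  + b * np.sqrt(eps) * mu2 / s2
                                  + a * b * rho)
    return u1, u2, p

u1, u2, p = fit_bivariate_tree(mu1=0.05, mu2=0.02, s1=0.3, s2=0.2, rho=0.5, eps=1e-3)
# sanity checks: probabilities sum to 1, moments match to O(eps)
assert abs(sum(p.values()) - 1.0) < 1e-12
m1  = sum(pr * a * u1 for (a, b), pr in p.items())            # ~ mu1*eps
cov = sum(pr * a * u1 * b * u2 for (a, b), pr in p.items())   # ~ rho*s1*s2*eps
print(m1, cov)
```

For small $\varepsilon$ and $|\rho| < 1$ these probabilities lie in $[0, 1]$, and the resulting tree process satisfies the weak convergence and compactness conditions D1-D4 discussed in Section 6.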


MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2) 1/16 MATH 829: Introducton to Data Mnng and Analyss The EM algorthm (part 2) Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 20, 2016 Recall 2/16 We are gven ndependent observatons

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP

FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP C O L L O Q U I U M M A T H E M A T I C U M VOL. 80 1999 NO. 1 FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP BY FLORIAN K A I N R A T H (GRAZ) Abstract. Let H be a Krull monod wth nfnte class

More information

Randomness and Computation

Randomness and Computation Randomness and Computaton or, Randomzed Algorthms Mary Cryan School of Informatcs Unversty of Ednburgh RC 208/9) Lecture 0 slde Balls n Bns m balls, n bns, and balls thrown unformly at random nto bns usually

More information

Random Walks on Digraphs

Random Walks on Digraphs Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected

More information

Curvature and isoperimetric inequality

Curvature and isoperimetric inequality urvature and sopermetrc nequalty Julà ufí, Agustí Reventós, arlos J Rodríguez Abstract We prove an nequalty nvolvng the length of a plane curve and the ntegral of ts radus of curvature, that has as a consequence

More information

Appendix B. The Finite Difference Scheme

Appendix B. The Finite Difference Scheme 140 APPENDIXES Appendx B. The Fnte Dfference Scheme In ths appendx we present numercal technques whch are used to approxmate solutons of system 3.1 3.3. A comprehensve treatment of theoretcal and mplementaton

More information

Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models

Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Maxmum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

Asymptotics of the Solution of a Boundary Value. Problem for One-Characteristic Differential. Equation Degenerating into a Parabolic Equation

Asymptotics of the Solution of a Boundary Value. Problem for One-Characteristic Differential. Equation Degenerating into a Parabolic Equation Nonl. Analyss and Dfferental Equatons, ol., 4, no., 5 - HIKARI Ltd, www.m-har.com http://dx.do.org/.988/nade.4.456 Asymptotcs of the Soluton of a Boundary alue Problem for One-Characterstc Dfferental Equaton

More information

An Analysis of a Least Squares Regression Method for American Option Pricing

An Analysis of a Least Squares Regression Method for American Option Pricing An Analyss of a Least Squares Regresson Method for Amercan Opton Prcng Emmanuelle Clément Damen Lamberton Phlp Protter Revsed verson, December 200 Abstract Recently, varous authors proposed Monte-Carlo

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

Hidden Markov Models & The Multivariate Gaussian (10/26/04)

Hidden Markov Models & The Multivariate Gaussian (10/26/04) CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models

More information

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora prnceton unv. F 13 cos 521: Advanced Algorthm Desgn Lecture 3: Large devatons bounds and applcatons Lecturer: Sanjeev Arora Scrbe: Today s topc s devaton bounds: what s the probablty that a random varable

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Canonical transformations

Canonical transformations Canoncal transformatons November 23, 2014 Recall that we have defned a symplectc transformaton to be any lnear transformaton M A B leavng the symplectc form nvarant, Ω AB M A CM B DΩ CD Coordnate transformatons,

More information

Applied Stochastic Processes

Applied Stochastic Processes STAT455/855 Fall 23 Appled Stochastc Processes Fnal Exam, Bref Solutons 1. (15 marks) (a) (7 marks) The dstrbuton of Y s gven by ( ) ( ) y 2 1 5 P (Y y) for y 2, 3,... The above follows because each of

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

THE WEIGHTED WEAK TYPE INEQUALITY FOR THE STRONG MAXIMAL FUNCTION

THE WEIGHTED WEAK TYPE INEQUALITY FOR THE STRONG MAXIMAL FUNCTION THE WEIGHTED WEAK TYPE INEQUALITY FO THE STONG MAXIMAL FUNCTION THEMIS MITSIS Abstract. We prove the natural Fefferman-Sten weak type nequalty for the strong maxmal functon n the plane, under the assumpton

More information

Prof. Dr. I. Nasser Phys 630, T Aug-15 One_dimensional_Ising_Model

Prof. Dr. I. Nasser Phys 630, T Aug-15 One_dimensional_Ising_Model EXACT OE-DIMESIOAL ISIG MODEL The one-dmensonal Isng model conssts of a chan of spns, each spn nteractng only wth ts two nearest neghbors. The smple Isng problem n one dmenson can be solved drectly n several

More information

Supplementary material: Margin based PU Learning. Matrix Concentration Inequalities

Supplementary material: Margin based PU Learning. Matrix Concentration Inequalities Supplementary materal: Margn based PU Learnng We gve the complete proofs of Theorem and n Secton We frst ntroduce the well-known concentraton nequalty, so the covarance estmator can be bounded Then we

More information

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES BÂRZĂ, Slvu Faculty of Mathematcs-Informatcs Spru Haret Unversty barza_slvu@yahoo.com Abstract Ths paper wants to contnue

More information

General viscosity iterative method for a sequence of quasi-nonexpansive mappings

General viscosity iterative method for a sequence of quasi-nonexpansive mappings Avalable onlne at www.tjnsa.com J. Nonlnear Sc. Appl. 9 (2016), 5672 5682 Research Artcle General vscosty teratve method for a sequence of quas-nonexpansve mappngs Cuje Zhang, Ynan Wang College of Scence,

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

arxiv: v1 [math.co] 12 Sep 2014

arxiv: v1 [math.co] 12 Sep 2014 arxv:1409.3707v1 [math.co] 12 Sep 2014 On the bnomal sums of Horadam sequence Nazmye Ylmaz and Necat Taskara Department of Mathematcs, Scence Faculty, Selcuk Unversty, 42075, Campus, Konya, Turkey March

More information

Convexity preserving interpolation by splines of arbitrary degree

Convexity preserving interpolation by splines of arbitrary degree Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

REAL ANALYSIS I HOMEWORK 1

REAL ANALYSIS I HOMEWORK 1 REAL ANALYSIS I HOMEWORK CİHAN BAHRAN The questons are from Tao s text. Exercse 0.0.. If (x α ) α A s a collecton of numbers x α [0, + ] such that x α

More information

Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models

Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Mamum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models for

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

k t+1 + c t A t k t, t=0

k t+1 + c t A t k t, t=0 Macro II (UC3M, MA/PhD Econ) Professor: Matthas Kredler Fnal Exam 6 May 208 You have 50 mnutes to complete the exam There are 80 ponts n total The exam has 4 pages If somethng n the queston s unclear,

More information

ON THE BURGERS EQUATION WITH A STOCHASTIC STEPPING STONE NOISY TERM

ON THE BURGERS EQUATION WITH A STOCHASTIC STEPPING STONE NOISY TERM O THE BURGERS EQUATIO WITH A STOCHASTIC STEPPIG STOE OISY TERM Eaterna T. Kolovsa Comuncacón Técnca o I-2-14/11-7-22 PE/CIMAT On the Burgers Equaton wth a stochastc steppng-stone nosy term Eaterna T. Kolovsa

More information

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,

More information

Continuous Time Markov Chain

Continuous Time Markov Chain Contnuous Tme Markov Chan Hu Jn Department of Electroncs and Communcaton Engneerng Hanyang Unversty ERICA Campus Contents Contnuous tme Markov Chan (CTMC) Propertes of sojourn tme Relatons Transton probablty

More information

Erratum: A Generalized Path Integral Control Approach to Reinforcement Learning

Erratum: A Generalized Path Integral Control Approach to Reinforcement Learning Journal of Machne Learnng Research 00-9 Submtted /0; Publshed 7/ Erratum: A Generalzed Path Integral Control Approach to Renforcement Learnng Evangelos ATheodorou Jonas Buchl Stefan Schaal Department of

More information

Convergence of random processes

Convergence of random processes DS-GA 12 Lecture notes 6 Fall 216 Convergence of random processes 1 Introducton In these notes we study convergence of dscrete random processes. Ths allows to characterze phenomena such as the law of large

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

Min Cut, Fast Cut, Polynomial Identities

Min Cut, Fast Cut, Polynomial Identities Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.

More information

Lecture 10: May 6, 2013

Lecture 10: May 6, 2013 TTIC/CMSC 31150 Mathematcal Toolkt Sprng 013 Madhur Tulsan Lecture 10: May 6, 013 Scrbe: Wenje Luo In today s lecture, we manly talked about random walk on graphs and ntroduce the concept of graph expander,

More information

11 Tail Inequalities Markov s Inequality. Lecture 11: Tail Inequalities [Fa 13]

11 Tail Inequalities Markov s Inequality. Lecture 11: Tail Inequalities [Fa 13] Algorthms Lecture 11: Tal Inequaltes [Fa 13] If you hold a cat by the tal you learn thngs you cannot learn any other way. Mark Twan 11 Tal Inequaltes The smple recursve structure of skp lsts made t relatvely

More information

Markov chains. Definition of a CTMC: [2, page 381] is a continuous time, discrete value random process such that for an infinitesimal

Markov chains. Definition of a CTMC: [2, page 381] is a continuous time, discrete value random process such that for an infinitesimal Markov chans M. Veeraraghavan; March 17, 2004 [Tp: Study the MC, QT, and Lttle s law lectures together: CTMC (MC lecture), M/M/1 queue (QT lecture), Lttle s law lecture (when dervng the mean response tme

More information

Lecture 3: Probability Distributions

Lecture 3: Probability Distributions Lecture 3: Probablty Dstrbutons Random Varables Let us begn by defnng a sample space as a set of outcomes from an experment. We denote ths by S. A random varable s a functon whch maps outcomes nto the

More information

Some basic inequalities. Definition. Let V be a vector space over the complex numbers. An inner product is given by a function, V V C

Some basic inequalities. Definition. Let V be a vector space over the complex numbers. An inner product is given by a function, V V C Some basc nequaltes Defnton. Let V be a vector space over the complex numbers. An nner product s gven by a functon, V V C (x, y) x, y satsfyng the followng propertes (for all x V, y V and c C) (1) x +

More information

Google PageRank with Stochastic Matrix

Google PageRank with Stochastic Matrix Google PageRank wth Stochastc Matrx Md. Sharq, Puranjt Sanyal, Samk Mtra (M.Sc. Applcatons of Mathematcs) Dscrete Tme Markov Chan Let S be a countable set (usually S s a subset of Z or Z d or R or R d

More information

Stanford University CS254: Computational Complexity Notes 7 Luca Trevisan January 29, Notes for Lecture 7

Stanford University CS254: Computational Complexity Notes 7 Luca Trevisan January 29, Notes for Lecture 7 Stanford Unversty CS54: Computatonal Complexty Notes 7 Luca Trevsan January 9, 014 Notes for Lecture 7 1 Approxmate Countng wt an N oracle We complete te proof of te followng result: Teorem 1 For every

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

j) = 1 (note sigma notation) ii. Continuous random variable (e.g. Normal distribution) 1. density function: f ( x) 0 and f ( x) dx = 1

j) = 1 (note sigma notation) ii. Continuous random variable (e.g. Normal distribution) 1. density function: f ( x) 0 and f ( x) dx = 1 Random varables Measure of central tendences and varablty (means and varances) Jont densty functons and ndependence Measures of assocaton (covarance and correlaton) Interestng result Condtonal dstrbutons

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14 APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce

More information

6. Stochastic processes (2)

6. Stochastic processes (2) 6. Stochastc processes () Lect6.ppt S-38.45 - Introducton to Teletraffc Theory Sprng 5 6. Stochastc processes () Contents Markov processes Brth-death processes 6. Stochastc processes () Markov process

More information

6. Stochastic processes (2)

6. Stochastic processes (2) Contents Markov processes Brth-death processes Lect6.ppt S-38.45 - Introducton to Teletraffc Theory Sprng 5 Markov process Consder a contnuous-tme and dscrete-state stochastc process X(t) wth state space

More information

Large Sample Properties of Matching Estimators for Average Treatment Effects by Alberto Abadie & Guido Imbens

Large Sample Properties of Matching Estimators for Average Treatment Effects by Alberto Abadie & Guido Imbens Addtonal Proofs for: Large Sample Propertes of atchng stmators for Average Treatment ffects by Alberto Abade & Gudo Imbens Remnder of Proof of Lemma : To get the result for U m U m, notce that U m U m

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

Module 2. Random Processes. Version 2 ECE IIT, Kharagpur

Module 2. Random Processes. Version 2 ECE IIT, Kharagpur Module Random Processes Lesson 6 Functons of Random Varables After readng ths lesson, ou wll learn about cdf of functon of a random varable. Formula for determnng the pdf of a random varable. Let, X be

More information

PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 125, Number 7, July 1997, Pages 2119{2125 S (97) THE STRONG OPEN SET CONDITION

PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 125, Number 7, July 1997, Pages 2119{2125 S (97) THE STRONG OPEN SET CONDITION PROCDINGS OF TH AMRICAN MATHMATICAL SOCITY Volume 125, Number 7, July 1997, Pages 2119{2125 S 0002-9939(97)03816-1 TH STRONG OPN ST CONDITION IN TH RANDOM CAS NORBRT PATZSCHK (Communcated by Palle. T.

More information

The lower and upper bounds on Perron root of nonnegative irreducible matrices

The lower and upper bounds on Perron root of nonnegative irreducible matrices Journal of Computatonal Appled Mathematcs 217 (2008) 259 267 wwwelsevercom/locate/cam The lower upper bounds on Perron root of nonnegatve rreducble matrces Guang-Xn Huang a,, Feng Yn b,keguo a a College

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Estimation: Part 2. Chapter GREG estimation

Estimation: Part 2. Chapter GREG estimation Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the

More information

THERE ARE INFINITELY MANY FIBONACCI COMPOSITES WITH PRIME SUBSCRIPTS

THERE ARE INFINITELY MANY FIBONACCI COMPOSITES WITH PRIME SUBSCRIPTS Research and Communcatons n Mathematcs and Mathematcal Scences Vol 10, Issue 2, 2018, Pages 123-140 ISSN 2319-6939 Publshed Onlne on November 19, 2018 2018 Jyot Academc Press http://jyotacademcpressorg

More information

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle

More information

E Tail Inequalities. E.1 Markov s Inequality. Non-Lecture E: Tail Inequalities

E Tail Inequalities. E.1 Markov s Inequality. Non-Lecture E: Tail Inequalities Algorthms Non-Lecture E: Tal Inequaltes If you hold a cat by the tal you learn thngs you cannot learn any other way. Mar Twan E Tal Inequaltes The smple recursve structure of sp lsts made t relatvely easy

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

Lecture 4: September 12

Lecture 4: September 12 36-755: Advanced Statstcal Theory Fall 016 Lecture 4: September 1 Lecturer: Alessandro Rnaldo Scrbe: Xao Hu Ta Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer: These notes have not been

More information

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k.

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k. THE CELLULAR METHOD In ths lecture, we ntroduce the cellular method as an approach to ncdence geometry theorems lke the Szemeréd-Trotter theorem. The method was ntroduced n the paper Combnatoral complexty

More information

CSCE 790S Background Results

CSCE 790S Background Results CSCE 790S Background Results Stephen A. Fenner September 8, 011 Abstract These results are background to the course CSCE 790S/CSCE 790B, Quantum Computaton and Informaton (Sprng 007 and Fall 011). Each

More information

Lecture 3. Ax x i a i. i i

Lecture 3. Ax x i a i. i i 18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

Approximate Smallest Enclosing Balls

Approximate Smallest Enclosing Balls Chapter 5 Approxmate Smallest Enclosng Balls 5. Boundng Volumes A boundng volume for a set S R d s a superset of S wth a smple shape, for example a box, a ball, or an ellpsod. Fgure 5.: Boundng boxes Q(P

More information

SELECTED PROOFS. DeMorgan s formulas: The first one is clear from Venn diagram, or the following truth table:

SELECTED PROOFS. DeMorgan s formulas: The first one is clear from Venn diagram, or the following truth table: SELECTED PROOFS DeMorgan s formulas: The frst one s clear from Venn dagram, or the followng truth table: A B A B A B Ā B Ā B T T T F F F F T F T F F T F F T T F T F F F F F T T T T The second one can be

More information