An Analysis of a Least Squares Regression Method for American Option Pricing


Emmanuelle Clément*, Damien Lamberton*, Philip Protter**

Revised version, December 2001

Abstract

Recently, various authors proposed Monte-Carlo methods for the computation of American option prices, based on least squares regression. The purpose of this paper is to analyze an algorithm due to Longstaff and Schwartz. This algorithm involves two types of approximation. Approximation one: replace the conditional expectations in the dynamic programming principle by projections on a finite set of functions. Approximation two: use Monte-Carlo simulations and least squares regression to compute the value function of approximation one. Under fairly general conditions, we prove the almost sure convergence of the complete algorithm. We also determine the rate of convergence of approximation two and prove that its normalized error is asymptotically Gaussian.

KEY WORDS: American options, optimal stopping, Monte-Carlo methods, least squares regression.

AMS Classification: 90A09, 93E20, 60G40.

1 Introduction

The computation of American option prices is a challenging problem, especially when several underlying assets are involved. The mathematical problem to solve is an optimal stopping

* Équipe d'Analyse et de mathématiques appliquées, Université de Marne-la-Vallée, 5 Bld Descartes, Champs-sur-Marne, Marne-la-Vallée Cedex 2, France.
** Operations Research and Industrial Engineering Department, Cornell University, Ithaca, NY, USA; supported in part by NSF Grant #DMS and NSA Grant #MDA.

problem. In classical diffusion models, this problem is associated with a variational inequality, for which, in higher dimensions, classical PDE methods are ineffective. Recently, various authors introduced numerical methods based on Monte-Carlo techniques (see, among others, [1, 2, 3, 4, 5, 9, 12]). The starting point of these methods is to replace the time interval of exercise dates by a finite subset. This amounts to approximating the American option by a so-called Bermudan option. A control of the error caused by this restriction to discrete stopping times is generally easy to obtain (see, for instance, [8], Remark 1.4). Throughout the paper, we concentrate on the discrete time problem.

The solution of the discrete optimal stopping problem reduces to an effective implementation of the dynamic programming principle. The conditional expectations involved in the iterations of dynamic programming cause the main difficulty for the development of Monte-Carlo techniques. One way of treating this problem is to use least squares regression on a finite set of functions as a proxy for conditional expectation. This idea, which already appeared in [5], is one of the main ingredients of two recent papers by Longstaff and Schwartz [9], and by Tsitsiklis and Van Roy [12]. The purpose of the present paper is to analyze the least squares regression method proposed by Longstaff and Schwartz [9], which seems to have become popular among practitioners. In fact, we will consider a variant of their approach (see Remark 2.1). In order to present our results more precisely, we will distinguish two types of approximation in their algorithm. Approximation one: replace conditional expectations in the dynamic programming principle by projections on a finite set of functions taken from a suitable basis. Approximation two: use Monte-Carlo simulations and least squares regression to compute the value function of the first approximation. Approximation two will be referred to as the Monte-Carlo procedure. In practice, one chooses the number of basis functions and runs the Monte-Carlo procedure.
We will prove that the value function of approximation one approaches with probability one the value function of the initial optimal stopping problem as the number $m$ of functions goes to infinity. We then prove that, for a fixed finite set of functions, we have almost sure convergence of the Monte-Carlo procedure to the value function of the first approximation. We also establish a type of central limit theorem for the rate of convergence of the Monte-Carlo procedure, thus providing the asymptotic normalized error. We note that partial convergence results are stated in [9], together with excellent empirical results, but with no

study of the rate of convergence. On the other hand, convergence (but not the rate nor the error distribution) is provided in [12] for a somewhat different algorithm. We also refer to [12] for a discussion of accumulation of errors as the number of possible exercise dates grows. We believe that our methods could be applied to analyze the rate of convergence of the Tsitsiklis-Van Roy approach, but we will concentrate on the Longstaff-Schwartz method.

Mathematically, the most technical part of our work concerns the Central Limit Theorem for the Monte-Carlo procedure. One might think that the methods developed for the analysis of asymptotic errors in statistical estimation based on stochastic optimization (see, for instance, [6, 7, 11]) are applicable to our problem. However, the algorithm does not seem to fit in this setting, for two reasons: the lack of regularity of the value function as a function of the parameters, and the recursive nature of dynamic programming.

The paper is organized as follows. In Section 2, a precise description of the least squares regression method is given and the notation is established. In Section 3, we prove the convergence of the algorithm. In Section 4, we study the rate of convergence of the Monte-Carlo procedure.

2 The algorithm and notations

2.1 Description of the algorithm

As mentioned in the introduction, the first step in all probabilistic approximation methods is to replace the original optimal stopping problem in continuous time by an optimal stopping problem in discrete time. Therefore, we will present the algorithm in the context of discrete optimal stopping. We will consider a probability space $(\Omega, \mathcal A, P)$, equipped with a discrete time filtration $(\mathcal F_j)_{j=0,\ldots,L}$. Here, the positive integer $L$ denotes the discrete time horizon. Given an adapted payoff process $(Z_j)_{j=0,\ldots,L}$, where $Z_0, Z_1, \ldots, Z_L$ are square integrable random variables, we are interested in computing
$$\sup_{\tau \in \mathcal T_{0,L}} E[Z_\tau],$$
where $\mathcal T_{j,L}$ denotes the set of all stopping times with values in $\{j, \ldots, L\}$. Following classical optimal stopping theory (for which we refer to [10], chapter 6), we

introduce the Snell envelope $(U_j)_{j=0,\ldots,L}$ of the payoff process $(Z_j)_{j=0,\ldots,L}$, defined by
$$U_j = \operatorname{ess\,sup}_{\tau \in \mathcal T_{j,L}} E\big[Z_\tau \mid \mathcal F_j\big], \quad j = 0, \ldots, L.$$
The dynamic programming principle can be written as follows:
$$U_L = Z_L, \qquad U_j = \max\big( Z_j,\ E[U_{j+1} \mid \mathcal F_j] \big), \quad 0 \le j \le L-1.$$
We also have $U_j = E[Z_{\tau_j} \mid \mathcal F_j]$, with
$$\tau_j = \min\{ k \ge j \mid U_k = Z_k \}.$$
In particular, $E[U_0] = \sup_{\tau \in \mathcal T_{0,L}} E[Z_\tau] = E[Z_{\tau_0}]$. The dynamic programming principle can be rewritten in terms of the optimal stopping times $\tau_j$ as follows:
$$\tau_L = L, \qquad \tau_j = j\, \mathbf 1_{\{Z_j \ge E[Z_{\tau_{j+1}} \mid \mathcal F_j]\}} + \tau_{j+1}\, \mathbf 1_{\{Z_j < E[Z_{\tau_{j+1}} \mid \mathcal F_j]\}}, \quad 0 \le j \le L-1.$$
This formulation in terms of stopping rules, rather than in terms of value functions, plays an essential role in the least squares regression method of Longstaff and Schwartz. The method also requires that the underlying model be a Markov chain. Therefore, we will assume that there is an $(\mathcal F_j)$-Markov chain $(X_j)_{j=0,\ldots,L}$ with state space $(E, \mathcal E)$ such that, for $j = 0, \ldots, L$, $Z_j = f(j, X_j)$, for some Borel function $f(j, \cdot)$. We then have $U_j = V(j, X_j)$ for some function $V$, and $E[Z_{\tau_{j+1}} \mid \mathcal F_j] = E[Z_{\tau_{j+1}} \mid X_j]$. We will also assume that the initial state $X_0 = x$ is deterministic, so that $U_0$ is also deterministic.

The first approximation consists of approximating the conditional expectation with respect to $X_j$ by the orthogonal projection on the space generated by a finite number of functions of $X_j$. Let us consider a sequence $(e_k(x))_{k \ge 1}$ of measurable real valued functions defined on $E$ and satisfying the following conditions:

A1: For $j = 1$ to $L-1$, the sequence $(e_k(X_j))_{k \ge 1}$ is total in $L^2(\sigma(X_j))$.

A2: For $j = 1$ to $L-1$ and $m \ge 1$, if $\sum_{k=1}^m \lambda_k e_k(X_j) = 0$ a.s., then $\lambda_k = 0$ for $k = 1$ to $m$.
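The dynamic programming recursion can be run exactly in any model where the conditional expectations are available in closed form. As a point of reference (our illustration, not part of the paper), the following Python sketch rolls the Snell envelope $U_j = \max(Z_j, E[U_{j+1} \mid \mathcal F_j])$ backward for a Bermudan-style put in a binomial model, where the conditional expectation is a two-point average; all model parameters are arbitrary demo values.

```python
import numpy as np

def bermudan_put_binomial(S0=100.0, K=100.0, r=0.02, u=1.1, d=0.9, L=10):
    """Exact backward induction for the Snell envelope on a binomial tree."""
    q = (np.exp(r) - d) / (u - d)        # risk-neutral up-probability
    disc = np.exp(-r)                    # one-period discount factor
    j = np.arange(L + 1)
    U = np.maximum(K - S0 * u**j * d**(L - j), 0.0)   # U_L = Z_L
    for i in range(L - 1, -1, -1):
        j = np.arange(i + 1)
        S = S0 * u**j * d**(i - j)                    # states at time i
        cont = disc * (q * U[1:] + (1 - q) * U[:-1])  # E[U_{i+1} | F_i]
        U = np.maximum(np.maximum(K - S, 0.0), cont)  # U_i = max(Z_i, cont)
    return float(U[0])
```

The least squares method analyzed in this paper replaces the exact two-point conditional expectation used here by a regression estimate, which is what makes high-dimensional problems tractable.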

For $j = 1$ to $L-1$, we denote by $P_j^m$ the orthogonal projection from $L^2(\Omega)$ onto the vector space generated by $\{e_1(X_j), \ldots, e_m(X_j)\}$, and we introduce the stopping times $\tau_j^{[m]}$:
$$\tau_L^{[m]} = L, \qquad \tau_j^{[m]} = j\, \mathbf 1_{\{Z_j \ge P_j^m Z_{\tau_{j+1}^{[m]}}\}} + \tau_{j+1}^{[m]}\, \mathbf 1_{\{Z_j < P_j^m Z_{\tau_{j+1}^{[m]}}\}}, \quad 1 \le j \le L-1.$$
From these stopping times, we obtain an approximation of the value function:
$$U_0^m = \max\big( Z_0,\ E[Z_{\tau_1^{[m]}}] \big). \tag{2.1}$$
Recall that $Z_0 = f(0, x)$ is deterministic. The second approximation is then to evaluate $E[Z_{\tau_1^{[m]}}]$ numerically by a Monte-Carlo procedure. We assume that we can simulate $N$ independent paths $X^{(1)}, \ldots, X^{(n)}, \ldots, X^{(N)}$ of the Markov chain $X$, and we denote by $Z_j^{(n)} = f(j, X_j^{(n)})$ the associated payoff, for $j = 0$ to $L$ and $n = 1$ to $N$. For each path $n$, we then estimate the stopping times $\tau_j^{[m]}$ recursively by:
$$\tau_L^{n,m,N} = L, \qquad \tau_j^{n,m,N} = j\, \mathbf 1_{\{Z_j^{(n)} \ge \alpha_j^{m,N} \cdot e^m(X_j^{(n)})\}} + \tau_{j+1}^{n,m,N}\, \mathbf 1_{\{Z_j^{(n)} < \alpha_j^{m,N} \cdot e^m(X_j^{(n)})\}}, \quad 1 \le j \le L-1.$$
Here, $x \cdot y$ denotes the usual inner product in $\mathbb R^m$, $e^m$ is the vector valued function $(e_1, \ldots, e_m)$, and $\alpha_j^{m,N}$ is the least squares estimator
$$\alpha_j^{m,N} = \arg\min_{a \in \mathbb R^m} \sum_{n=1}^N \Big( Z^{(n)}_{\tau_{j+1}^{n,m,N}} - a \cdot e^m(X_j^{(n)}) \Big)^2.$$
Remark that, for $j = 1$ to $L-1$, $\alpha_j^{m,N} \in \mathbb R^m$. Finally, from the variables $\tau_1^{n,m,N}$, we derive the following approximation for $U_0^m$:
$$U_0^{m,N} = \max\left( Z_0,\ \frac 1N \sum_{n=1}^N Z^{(n)}_{\tau_1^{n,m,N}} \right). \tag{2.2}$$
In the next section, we prove that, for any fixed $m$, $U_0^{m,N}$ converges almost surely to $U_0^m$ as $N$ goes to infinity, and that $U_0^m$ converges to $U_0$ as $m$ goes to infinity. Before stating these results, we devote a short section to notation. We also mention that the above algorithm is not exactly the Longstaff-Schwartz algorithm, as their regression involves only in-the-money paths (see Remark 2.1).
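For concreteness, here is a minimal end-to-end sketch of the procedure just described. It is our illustration, not the authors' code: the Markov chain is a simulated geometric Brownian motion, the payoff a discounted put, and the basis the monomials of degree below 3; every parameter value is an arbitrary choice for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L, m = 20_000, 10, 3
S0, K, r, sigma, dt = 100.0, 100.0, 0.02, 0.2, 0.1

# Simulated Markov chain X_j^(n) (columns j = 1..L) and discounted payoffs Z_j^(n)
incr = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((N, L))
X = S0 * np.exp(np.cumsum(incr, axis=1))
Z = np.maximum(K - X, 0.0) * np.exp(-r * dt * np.arange(1, L + 1))

# tau_L = L on every path: payoff holds Z at the current estimated stopping time
payoff = Z[:, L - 1].copy()
for j in range(L - 2, -1, -1):              # time indices L-1, ..., 1 in paper terms
    e = np.vander(X[:, j], m)               # basis vectors e^m(X_j^(n))
    alpha, *_ = np.linalg.lstsq(e, payoff, rcond=None)  # least squares estimator
    stop = Z[:, j] >= e @ alpha             # stopping rule for tau_j
    payoff = np.where(stop, Z[:, j], payoff)

U0 = max(np.maximum(K - S0, 0.0), payoff.mean())  # U_0^{m,N}, as in (2.2)
```

Note that, as in the paper's variant, the regression at each date is run over all simulated paths; the in-the-money restriction of the original Longstaff-Schwartz scheme is the subject of Remark 2.1.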

2.2 Notation

Throughout the paper, we denote by $|x|$ the Euclidean norm of a vector $x$ in $\mathbb R^d$. For $m \ge 1$, we denote by $e^m(x)$ the vector $(e_1(x), \ldots, e_m(x))$, and for $j = 1$ to $L-1$ we define $\alpha_j^m$ so that
$$P_j^m Z_{\tau_{j+1}^{[m]}} = \alpha_j^m \cdot e^m(X_j). \tag{2.3}$$
We remark that, under A2, the $m$-dimensional parameter $\alpha_j^m$ has the explicit expression:
$$\alpha_j^m = (A_j^m)^{-1}\, E\big[ Z_{\tau_{j+1}^{[m]}}\, e^m(X_j) \big], \quad j = 1, \ldots, L-1, \tag{2.4}$$
where $A_j^m$ is an $m \times m$ matrix, with coefficients given by
$$(A_j^m)_{k,l \le m} = E\big[ e_k(X_j)\, e_l(X_j) \big]. \tag{2.5}$$
Similarly, the estimators $\alpha_j^{m,N}$ are equal to
$$\alpha_j^{m,N} = (A_j^{m,N})^{-1}\, \frac 1N \sum_{n=1}^N Z^{(n)}_{\tau_{j+1}^{n,m,N}}\, e^m(X_j^{(n)}), \tag{2.6}$$
for $j = 1$ to $L-1$, where $A_j^{m,N}$ is an $m \times m$ matrix, with coefficients given by
$$(A_j^{m,N})_{k,l \le m} = \frac 1N \sum_{n=1}^N e_k(X_j^{(n)})\, e_l(X_j^{(n)}). \tag{2.7}$$
Note that $\lim_{N\to\infty} A_j^{m,N} = A_j^m$ almost surely. Therefore, under A2, the matrix $A_j^{m,N}$ is invertible for $N$ large enough. We also define $\alpha^m = (\alpha_1^m, \ldots, \alpha_{L-1}^m)$ and $\alpha^{m,N} = (\alpha_1^{m,N}, \ldots, \alpha_{L-1}^{m,N})$.

Given a parameter $a^m = (a_1^m, \ldots, a_{L-1}^m)$ in $\mathbb R^m \times \cdots \times \mathbb R^m$ and deterministic vectors $z = (z_1, \ldots, z_L) \in \mathbb R^L$ and $x = (x_1, \ldots, x_L) \in E^L$, we define a vector field $F = (F_1, \ldots, F_L)$ by:
$$F_L(a^m, z, x) = z_L, \qquad F_j(a^m, z, x) = z_j\, \mathbf 1_{\{z_j \ge a_j^m \cdot e^m(x_j)\}} + F_{j+1}(a^m, z, x)\, \mathbf 1_{\{z_j < a_j^m \cdot e^m(x_j)\}},$$
for $j = 1, \ldots, L-1$. We have
$$F_j(a^m, z, x) = z_j\, \mathbf 1_{B_j^c} + \sum_{i=j+1}^{L-1} z_i\, \mathbf 1_{B_j \cap \cdots \cap B_{i-1} \cap B_i^c} + z_L\, \mathbf 1_{B_j \cap \cdots \cap B_{L-1}},$$
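The explicit expressions (2.6)-(2.7) say that the least squares estimator solves the empirical normal equations. This can be checked numerically; in the sketch below (our illustration, with synthetic stand-ins for the regressand and the state, and a three-monomial basis), the normal-equations solution and a direct least squares fit coincide.

```python
import numpy as np

rng = np.random.default_rng(1)

N, m = 5_000, 3
X = rng.uniform(-1.0, 1.0, N)                       # stand-in for X_j^(n)
Znext = 1.0 + X + 0.5 * X**2 + rng.normal(0.0, 0.1, N)  # stand-in regressand

E = np.vander(X, m)                 # row n holds the basis vector e^m(X_j^(n))
A = (E.T @ E) / N                   # empirical matrix A_j^{m,N}, as in (2.7)
b = (E.T @ Znext) / N               # (1/N) sum of Z e^m(X), as in (2.6)
alpha_normal = np.linalg.solve(A, b)                     # estimator via (2.6)
alpha_lstsq, *_ = np.linalg.lstsq(E, Znext, rcond=None)  # direct least squares
```

In practice a QR- or SVD-based solver (as used by `lstsq`) is preferable to forming $A^{m,N}$ explicitly, since the normal equations square the condition number of the design matrix.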

with $B_i = \{ z_i < a_i^m \cdot e^m(x_i) \}$. We remark that $F_j(a^m, z, x)$ does not depend on $a_1^m, \ldots, a_{j-1}^m$, and that we have
$$F_j(\alpha^m, Z, X) = Z_{\tau_j^{[m]}}, \qquad F_j(\alpha^{m,N}, Z^{(n)}, X^{(n)}) = Z^{(n)}_{\tau_j^{n,m,N}}.$$
For $j = 2$ to $L$, we denote by $G_j$ the vector valued function
$$G_j(a^m, z, x) = F_j(a^m, z, x)\, e^m(x_{j-1}),$$
and we define the functions $\phi_j$ and $\psi_j$ by
$$\phi_j(a^m) = E\big[ F_j(a^m, Z, X) \big] \tag{2.8}$$
$$\psi_j(a^m) = E\big[ G_j(a^m, Z, X) \big]. \tag{2.9}$$
Observe that, with this notation, we have
$$\alpha_j^m = (A_j^m)^{-1}\, \psi_{j+1}(\alpha^m), \tag{2.10}$$
and similarly, for $j = 1$ to $L-1$,
$$\alpha_j^{m,N} = (A_j^{m,N})^{-1}\, \frac 1N \sum_{n=1}^N G_{j+1}(\alpha^{m,N}, Z^{(n)}, X^{(n)}). \tag{2.11}$$

Remark 2.1 In [9], the regression involves only in-the-money paths, i.e. samples for which $Z_j^{(n)} > 0$. This seems to be more efficient numerically. In order to stick to this type of regression, the above description of the algorithm should be modified as follows. Use, instead of $\tau_j^{[m]}$,
$$\hat\tau_L^{[m]} = L, \qquad \hat\tau_j^{[m]} = j\, \mathbf 1_{\{Z_j \ge \hat\alpha_j^m \cdot e(X_j)\} \cap \{Z_j > 0\}} + \hat\tau_{j+1}^{[m]}\, \mathbf 1_{\{Z_j < \hat\alpha_j^m \cdot e(X_j)\} \cup \{Z_j = 0\}}, \quad 1 \le j \le L-1,$$
with
$$\hat\alpha_j^m = \arg\min_{a \in \mathbb R^m} E\Big[ \mathbf 1_{\{Z_j > 0\}} \big( Z_{\hat\tau_{j+1}^{[m]}} - a \cdot e(X_j) \big)^2 \Big].$$
We analogously define $\hat\tau_j^{n,m,N}$, $\hat\alpha_j^{m,N}$, $\hat F_j$ and $\hat G_j$. The convergence results still hold for this version of the algorithm, with similar proofs, provided assumptions A1 and A2 are replaced by

Â1: For $j = 1$ to $L-1$, the sequence $(e_k(X_j))_{k \ge 1}$ is total in $L^2(\sigma(X_j), \mathbf 1_{\{Z_j > 0\}}\, dP)$.

Â2: For $j = 1$ to $L-1$ and $m \ge 1$, if $\mathbf 1_{\{Z_j > 0\}} \sum_{k=1}^m \lambda_k e_k(X_j) = 0$ a.s., then $\lambda_k = 0$ for $k \le m$.
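A single backward step of this in-the-money variant might look as follows in Python (our sketch; the state, payoff and basis below are synthetic placeholders, and the variable names are ours, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for one date j of a put-style problem
N, m, K = 10_000, 3, 100.0
X_j = rng.uniform(80.0, 120.0, N)                 # state at time j
Z_j = np.maximum(K - X_j, 0.0)                    # payoff at time j
payoff_next = np.maximum(K - X_j * rng.lognormal(0.0, 0.1, N), 0.0)  # Z at tau_{j+1}

itm = Z_j > 0.0                                   # in-the-money paths only
E = np.vander(X_j / 100.0, m)                     # scaled monomial basis
alpha_hat, *_ = np.linalg.lstsq(E[itm], payoff_next[itm], rcond=None)
cont = E @ alpha_hat                              # regression value on all paths
# exercise iff in the money AND the immediate payoff beats the regression value;
# out-of-the-money paths keep their previous stopping time
payoff = np.where(itm & (Z_j >= cont), Z_j, payoff_next)
```

The only change relative to the all-paths variant analyzed in the body of the paper is the restriction of the regression sample to `itm`, mirroring the indicator $\mathbf 1_{\{Z_j > 0\}}$ in the definition of $\hat\alpha_j^m$.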

3 Convergence

The convergence of $U_0^m$ to $U_0$ is a direct consequence of the following result.

Theorem 3.1 Assume that A1 is satisfied. Then, for $j = 1$ to $L$, we have
$$\lim_{m \to +\infty} E\big[ Z_{\tau_j^{[m]}} \mid \mathcal F_{j-1} \big] = E\big[ Z_{\tau_j} \mid \mathcal F_{j-1} \big], \quad \text{in } L^2.$$

Proof: We proceed by induction on $j$. The result is true for $j = L$. Let us prove that, if it holds for $j+1$, it is true for $j$, for $1 \le j \le L-1$. Since conditional expectation is an $L^2$-contraction, it suffices to show that $E[Z_{\tau_j^{[m]}} - Z_{\tau_j} \mid \mathcal F_j]$ converges to 0 in $L^2$. Since
$$Z_{\tau_j^{[m]}} = Z_j\, \mathbf 1_{\{Z_j \ge \alpha_j^m \cdot e^m(X_j)\}} + Z_{\tau_{j+1}^{[m]}}\, \mathbf 1_{\{Z_j < \alpha_j^m \cdot e^m(X_j)\}}, \quad 1 \le j \le L-1,$$
we have
$$E\big[ Z_{\tau_j^{[m]}} - Z_{\tau_j} \mid \mathcal F_j \big] = \big( Z_j - E[Z_{\tau_{j+1}} \mid \mathcal F_j] \big) \Big( \mathbf 1_{\{Z_j \ge \alpha_j^m \cdot e^m(X_j)\}} - \mathbf 1_{\{Z_j \ge E[Z_{\tau_{j+1}} \mid \mathcal F_j]\}} \Big) + E\big[ Z_{\tau_{j+1}^{[m]}} - Z_{\tau_{j+1}} \mid \mathcal F_j \big]\, \mathbf 1_{\{Z_j < \alpha_j^m \cdot e^m(X_j)\}}.$$
By the induction hypothesis, the second term of the right side of the equality converges to 0, and we just have to prove that $B_j^m$, defined by
$$B_j^m = \big( Z_j - E[Z_{\tau_{j+1}} \mid \mathcal F_j] \big) \Big( \mathbf 1_{\{Z_j \ge \alpha_j^m \cdot e^m(X_j)\}} - \mathbf 1_{\{Z_j \ge E[Z_{\tau_{j+1}} \mid \mathcal F_j]\}} \Big),$$
converges to 0 in $L^2$. Observe that
$$|B_j^m| = \big| Z_j - E[Z_{\tau_{j+1}} \mid \mathcal F_j] \big|\, \Big| \mathbf 1_{\{E[Z_{\tau_{j+1}} \mid \mathcal F_j] > Z_j \ge \alpha_j^m \cdot e^m(X_j)\}} - \mathbf 1_{\{\alpha_j^m \cdot e^m(X_j) > Z_j \ge E[Z_{\tau_{j+1}} \mid \mathcal F_j]\}} \Big| \le \big| Z_j - E[Z_{\tau_{j+1}} \mid \mathcal F_j] \big|\, \mathbf 1_{\{ |Z_j - E[Z_{\tau_{j+1}} \mid \mathcal F_j]| \le |\alpha_j^m \cdot e^m(X_j) - E[Z_{\tau_{j+1}} \mid \mathcal F_j]| \}} \le \big| \alpha_j^m \cdot e^m(X_j) - P_j^m E[Z_{\tau_{j+1}} \mid \mathcal F_j] \big| + \big| P_j^m E[Z_{\tau_{j+1}} \mid \mathcal F_j] - E[Z_{\tau_{j+1}} \mid \mathcal F_j] \big|.$$
But since
$$\alpha_j^m \cdot e^m(X_j) = P_j^m Z_{\tau_{j+1}^{[m]}} = P_j^m E\big[ Z_{\tau_{j+1}^{[m]}} \mid \mathcal F_j \big],$$
and $P_j^m$ is the orthogonal projection on a subspace of the space of $\mathcal F_j$-measurable random variables, we consequently have
$$\| B_j^m \|_2 \le \big\| E[Z_{\tau_{j+1}^{[m]}} \mid \mathcal F_j] - E[Z_{\tau_{j+1}} \mid \mathcal F_j] \big\|_2 + \big\| P_j^m E[Z_{\tau_{j+1}} \mid \mathcal F_j] - E[Z_{\tau_{j+1}} \mid \mathcal F_j] \big\|_2.$$

The first term of the right side of this inequality tends to 0 by the induction hypothesis, and the second one by A1.

In what follows, we fix the value $m$ and we study the properties of $U_0^{m,N}$ as the number of Monte-Carlo simulations $N$ goes to infinity. For notational simplicity, we drop the superscript $m$ throughout the rest of the paper.

Theorem 3.2 Assume that, for $j = 1$ to $L-1$, $P(\alpha_j \cdot e(X_j) = Z_j) = 0$. Then $U_0^{m,N}$ converges almost surely to $U_0^m$ as $N$ goes to infinity. We also have almost sure convergence of $\frac 1N \sum_{n=1}^N Z^{(n)}_{\tau_j^{n,N}}$ towards $E[Z_{\tau_j^{[m]}}]$ as $N$ goes to infinity, for $j = 1, \ldots, L$.

Note that, with the notation of the preceding section, we have to prove that
$$\lim_{N \to \infty} \frac 1N \sum_{n=1}^N F_j(\alpha^N, Z^{(n)}, X^{(n)}) = \phi_j(\alpha), \quad 1 \le j \le L. \tag{3.1}$$
The proof is based on the following lemmas.

Lemma 3.1 For $j = 1$ to $L$, we have:
$$\big| F_j(a, Z, X) - F_j(b, Z, X) \big| \le \left( \sum_{i=j}^{L} |Z_i| \right) \sum_{i=j}^{L-1} \mathbf 1_{\{ |Z_i - b_i \cdot e(X_i)| \le |(a_i - b_i) \cdot e(X_i)| \}}.$$

Proof: Let $B_i = \{ Z_i < a_i \cdot e(X_i) \}$ and $\tilde B_i = \{ Z_i < b_i \cdot e(X_i) \}$. We have:
$$F_j(a, Z, X) - F_j(b, Z, X) = Z_j \big( \mathbf 1_{B_j^c} - \mathbf 1_{\tilde B_j^c} \big) + \sum_{i=j+1}^{L-1} Z_i \big( \mathbf 1_{B_j \cdots B_{i-1} B_i^c} - \mathbf 1_{\tilde B_j \cdots \tilde B_{i-1} \tilde B_i^c} \big) + Z_L \big( \mathbf 1_{B_j \cdots B_{L-1}} - \mathbf 1_{\tilde B_j \cdots \tilde B_{L-1}} \big).$$
But
$$\mathbf 1_{B_i \Delta \tilde B_i} = \mathbf 1_{\{a_i \cdot e(X_i) \le Z_i < b_i \cdot e(X_i)\}} + \mathbf 1_{\{b_i \cdot e(X_i) \le Z_i < a_i \cdot e(X_i)\}} \le \mathbf 1_{\{ |Z_i - b_i \cdot e(X_i)| \le |(a_i - b_i) \cdot e(X_i)| \}}.$$
Moreover,
$$\big| \mathbf 1_{B_j \cdots B_{i-1} B_i^c} - \mathbf 1_{\tilde B_j \cdots \tilde B_{i-1} \tilde B_i^c} \big| \le \sum_{k=j}^{i-1} \mathbf 1_{B_k \Delta \tilde B_k} + \mathbf 1_{B_i^c \Delta \tilde B_i^c} = \sum_{k=j}^{i} \mathbf 1_{B_k \Delta \tilde B_k};
$$

this gives
$$\big| F_j(a, Z, X) - F_j(b, Z, X) \big| \le \left( \sum_{i=j}^{L} |Z_i| \right) \sum_{i=j}^{L-1} \mathbf 1_{B_i \Delta \tilde B_i}.$$
Combining these inequalities, we obtain the result of Lemma 3.1.

Lemma 3.2 Assume that, for $j = 1$ to $L-1$, $P(\alpha_j \cdot e(X_j) = Z_j) = 0$. Then $(\alpha_j^N)_{N \ge 1}$ converges almost surely to $\alpha_j$.

Proof: We proceed by induction on $j$. For $j = L-1$, the result is a direct consequence of the law of large numbers. Now, assume that the result is true for $j+1$ to $L-1$. We want to prove that it is true for $j$. We have
$$\alpha_j^N = (A_j^N)^{-1}\, \frac 1N \sum_{n=1}^N G_{j+1}(\alpha^N, Z^{(n)}, X^{(n)}).$$
By the law of large numbers, we know that $A_j^N$ converges almost surely to $A_j$, and it remains to prove that $\frac 1N \sum_{n=1}^N G_{j+1}(\alpha^N, Z^{(n)}, X^{(n)})$ converges to $\psi_{j+1}(\alpha)$. From the law of large numbers, we have the convergence of $\frac 1N \sum_{n=1}^N G_{j+1}(\alpha, Z^{(n)}, X^{(n)})$ to $\psi_{j+1}(\alpha)$, and it suffices to prove that:
$$\lim_{N \to +\infty} \frac 1N \sum_{n=1}^N \big| G_{j+1}(\alpha^N, Z^{(n)}, X^{(n)}) - G_{j+1}(\alpha, Z^{(n)}, X^{(n)}) \big| = 0.$$
We denote this average of increments by $\Delta G^N_{j+1}$. We have:
$$\Delta G^N_{j+1} \le \frac 1N \sum_{n=1}^N |e(X_j^{(n)})|\, \big| F_{j+1}(\alpha^N, Z^{(n)}, X^{(n)}) - F_{j+1}(\alpha, Z^{(n)}, X^{(n)}) \big| \le \frac 1N \sum_{n=1}^N |e(X_j^{(n)})| \left( \sum_{i=j+1}^{L} |Z_i^{(n)}| \right) \sum_{i=j+1}^{L-1} \mathbf 1_{\{ |Z_i^{(n)} - \alpha_i \cdot e(X_i^{(n)})| \le |(\alpha_i^N - \alpha_i) \cdot e(X_i^{(n)})| \}}.$$
Since, for $i = j+1$ to $L-1$, $\alpha_i^N$ converges almost surely to $\alpha_i$, we have, for each $\epsilon > 0$:
$$\limsup_N \Delta G^N_{j+1} \le \limsup_N \frac 1N \sum_{n=1}^N |e(X_j^{(n)})| \left( \sum_{i=j+1}^{L} |Z_i^{(n)}| \right) \sum_{i=j+1}^{L-1} \mathbf 1_{\{ |Z_i^{(n)} - \alpha_i \cdot e(X_i^{(n)})| \le \epsilon |e(X_i^{(n)})| \}} = E\left[ |e(X_j)| \left( \sum_{i=j+1}^{L} |Z_i| \right) \sum_{i=j+1}^{L-1} \mathbf 1_{\{ |Z_i - \alpha_i \cdot e(X_i)| \le \epsilon |e(X_i)| \}} \right],$$
where the last equality follows from the law of large numbers. Letting $\epsilon$ go to 0, we obtain the convergence to 0, since, for $i = 1$ to $L-1$, $P(\alpha_i \cdot e(X_i) = Z_i) = 0$.

The proof of Theorem 3.2 is similar to the proof of Lemma 3.2. Therefore, we omit it.

4 Rate of convergence of the Monte-Carlo procedure

4.1 Tightness

In this section, we are interested in the rate of convergence of $\frac 1N \sum_{n=1}^N Z^{(n)}_{\tau_j^{n,N}}$, for $j = 1$ to $L$. Recall that $m$ is fixed and that the $Z_j$ and $|e(X_j)|$ are square integrable variables. We assume that:

H1: For $j = 1, \ldots, L-1$,
$$\limsup_{\epsilon \to 0} \frac 1\epsilon\, E\big[ \bar Y\, \mathbf 1_{\{ |Z_j - \alpha_j \cdot e(X_j)| \le \epsilon |e(X_j)| \}} \big] < +\infty, \quad \text{where } \bar Y = 1 + \sum_{i=1}^{L} |Z_i| + \sum_{i=1}^{L-1} |e(X_i)|. \tag{4.1}$$
Note that H1 implies that $P(Z_j = \alpha_j \cdot e(X_j)) = 0$ and, consequently, under H1 we know from Section 3 that $\frac 1N \sum_{n=1}^N F_j(\alpha^N, Z^{(n)}, X^{(n)})$ converges almost surely to $\phi_j(\alpha)$. Remark too that H1 is satisfied if the random variable $Z_j - \alpha_j \cdot e(X_j)$ has a bounded density near zero and the variables $Z_j$ and $|e(X_j)|$ are bounded.

Theorem 4.1 Under H1, the sequences
$$\left( \frac{1}{\sqrt N} \sum_{n=1}^N \big( F_j(\alpha^N, Z^{(n)}, X^{(n)}) - \phi_j(\alpha) \big) \right)_{N \ge 1}, \quad j = 1, \ldots, L, \qquad \text{and} \qquad \Big( \sqrt N \big( \alpha_j^N - \alpha_j \big) \Big)_{N \ge 1}, \quad j = 1, \ldots, L-1,$$
are tight.

The proof of Theorem 4.1 is based on the following lemma.

Lemma 4.1 Let $(U_n, V_n, W_n)_{n \ge 1}$ be a sequence of identically distributed random variables with values in $[0, +\infty)^3$ such that
$$\limsup_{\epsilon \to 0} \frac{E\big[ W_1\, \mathbf 1_{\{U_1 \le \epsilon V_1\}} \big]}{\epsilon} < +\infty,$$
and let $(\theta_N)$ be a sequence of positive random variables such that $(\sqrt N\, \theta_N)$ is tight. Then the sequence
$$\frac{1}{\sqrt N} \sum_{n=1}^N W_n\, \mathbf 1_{\{U_n \le \theta_N V_n\}}$$
is tight.

Proof: Let $\sigma_N(\theta) = \frac{1}{\sqrt N} \sum_{n=1}^N W_n\, \mathbf 1_{\{U_n \le \theta V_n\}}$. Observe that $\sigma_N$ is a non-decreasing function

of $\theta$. Let $A > 0$ and $B > 0$; we have
$$P\big( \sigma_N(\theta_N) \ge A \big) \le P\big( \sigma_N(\theta_N) \ge A,\ \sqrt N\, \theta_N \le B \big) + P\big( \sqrt N\, \theta_N > B \big) \le P\big( \sigma_N(B/\sqrt N) \ge A \big) + P\big( \sqrt N\, \theta_N > B \big) \le \frac 1A\, E\big[ \sigma_N(B/\sqrt N) \big] + P\big( \sqrt N\, \theta_N > B \big) = \frac{\sqrt N}{A}\, E\big[ W_1\, \mathbf 1_{\{U_1 \le (B/\sqrt N) V_1\}} \big] + P\big( \sqrt N\, \theta_N > B \big).$$
From the assumption on $(U_n, V_n, W_n)$ and the tightness of $(\sqrt N\, \theta_N)$, we easily deduce the tightness of $(\sigma_N(\theta_N))$.

Proof of Theorem 4.1: We know from the classical Central Limit Theorem that the sequence $\frac{1}{\sqrt N} \sum_{n=1}^N ( F_j(\alpha, Z^{(n)}, X^{(n)}) - \phi_j(\alpha) )$ is tight, and it remains to prove the tightness of
$$\frac{1}{\sqrt N} \sum_{n=1}^N \big( F_j(\alpha^N, Z^{(n)}, X^{(n)}) - F_j(\alpha, Z^{(n)}, X^{(n)}) \big), \quad j = 1, \ldots, L.$$
Similarly, to prove the tightness of $\sqrt N (\alpha_j^N - \alpha_j)$, for $j = 1$ to $L-1$, we just have to prove the tightness of $\frac{1}{\sqrt N} \sum_{n=1}^N ( G_{j+1}(\alpha^N, Z^{(n)}, X^{(n)}) - G_{j+1}(\alpha, Z^{(n)}, X^{(n)}) )$ (see Section 2 for the notation). We proceed by induction on $j$. The tightness of $\frac{1}{\sqrt N} \sum_{n=1}^N ( F_L(\alpha^N, Z^{(n)}, X^{(n)}) - F_L(\alpha, Z^{(n)}, X^{(n)}) )$ is obvious, and that of $\sqrt N (\alpha_{L-1}^N - \alpha_{L-1})$ follows from the Central Limit Theorem for the sequence $( Z_L^{(n)} e(X_{L-1}^{(n)}) )_{n \ge 1}$ and the almost sure convergence of the sequence $(A_{L-1}^N)^{-1}$. Assume that the corresponding sequences are tight for $i = j+1$ to $L-1$. We set
$$\Delta F_j^N = \frac{1}{\sqrt N} \sum_{n=1}^N \big| F_j(\alpha^N, Z^{(n)}, X^{(n)}) - F_j(\alpha, Z^{(n)}, X^{(n)}) \big|.$$
Now, from Lemma 3.1, we have:
$$\Delta F_j^N \le \frac{1}{\sqrt N} \sum_{n=1}^N \bar Y^{(n)} \sum_{i=j}^{L-1} \mathbf 1_{\{ |Z_i^{(n)} - \alpha_i \cdot e(X_i^{(n)})| \le |\alpha_i^N - \alpha_i|\, |e(X_i^{(n)})| \}},$$
with
$$\bar Y^{(n)} = 1 + \sum_{i=1}^{L} |Z_i^{(n)}| + \sum_{i=1}^{L-1} |e(X_i^{(n)})|. \tag{4.2}$$
From Lemma 4.1 and by the induction hypothesis, we deduce that $(\Delta F_j^N)$ is tight. In the same way, we prove that $\sqrt N (\alpha_j^N - \alpha_j)$ is tight.

4.2 A central limit theorem

We prove in this section that, under some stronger assumptions than in Section 4.1, the vector $\big( \sqrt N ( \frac 1N \sum_{n=1}^N Z^{(n)}_{\tau_j^{n,N}} - E[Z_{\tau_j^{[m]}}] ) \big)_{j=1,\ldots,L}$ converges weakly to a Gaussian vector. With the preceding notation, we have
$$\frac 1N \sum_{n=1}^N Z^{(n)}_{\tau_j^{n,N}} - E\big[ Z_{\tau_j^{[m]}} \big] = \frac 1N \sum_{n=1}^N F_j(\alpha^N, Z^{(n)}, X^{(n)}) - \phi_j(\alpha). \tag{4.3}$$
In the following, we will denote by $Y$ the pair $(Z, X)$ and by $Y^{(n)}$ the pair $(Z^{(n)}, X^{(n)})$. We will also use $\bar Y$ and $\bar Y^{(n)}$ as defined in (4.1) and (4.2). We will need the following hypotheses:

H1': For $j = 1$ to $L-1$, there exist a neighborhood $V_j$ of $\alpha$, $\eta_j > 0$ and $k_j > 0$ such that, for $a \in V_j$ and for $\epsilon \in [0, \eta_j]$,
$$E\big[ \bar Y\, \mathbf 1_{\{ |Z_j - a_j \cdot e(X_j)| \le \epsilon |e(X_j)| \}} \big] \le \epsilon\, k_j.$$

H2: For $j = 1$ to $L$, $Z_j$ and $|e(X_j)|$ are in $L^p$ for all $p < +\infty$.

H3: For $j = 1$ to $L$, $\phi_j$ and $\psi_j$ are $C^1$ in a neighborhood of $\alpha$.

Observe that H1' is stronger than H1.

Theorem 4.2 Under H1', H2, H3, the vector
$$\left( \sqrt N \Big( \frac 1N \sum_{n=1}^N Z^{(n)}_{\tau_j^{n,N}} - E\big[ Z_{\tau_j^{[m]}} \big] \Big) \right)_{j=1,\ldots,L}, \qquad \Big( \sqrt N \big( \alpha_j^N - \alpha_j \big) \Big)_{j=1,\ldots,L-1}$$
converges in law to a Gaussian vector as $N$ goes to infinity.

For the proof of Theorem 4.2, we will use the following decomposition:
$$\frac{1}{\sqrt N} \sum_{n=1}^N \big( F_j(\alpha^N, Y^{(n)}) - \phi_j(\alpha) \big) = \frac{1}{\sqrt N} \sum_{n=1}^N \big( F_j(\alpha^N, Y^{(n)}) - F_j(\alpha, Y^{(n)}) - \phi_j(\alpha^N) + \phi_j(\alpha) \big) + \frac{1}{\sqrt N} \sum_{n=1}^N \big( F_j(\alpha, Y^{(n)}) - \phi_j(\alpha) \big) + \sqrt N \big( \phi_j(\alpha^N) - \phi_j(\alpha) \big).$$
From the classical Central Limit Theorem, we know that $\frac{1}{\sqrt N} \sum_{n=1}^N ( F_j(\alpha, Y^{(n)}) - \phi_j(\alpha) )$ converges in law to a Gaussian vector. Moreover, we have

$$\sqrt N \big( \alpha_j^N - \alpha_j \big) = (A_j^N)^{-1}\, \frac{1}{\sqrt N} \sum_{n=1}^N \big( G_{j+1}(\alpha^N, Y^{(n)}) - \psi_{j+1}(\alpha) \big) + (A_j^N)^{-1}\, \sqrt N \big( A_j - A_j^N \big)\, \alpha_j, \tag{4.4}$$
where $(A_j^N)^{-1}$ converges almost surely to $A_j^{-1}$ and $\sqrt N (A_j - A_j^N)\, \alpha_j$ converges in law. We have the similar decomposition
$$\frac{1}{\sqrt N} \sum_{n=1}^N \big( G_{j+1}(\alpha^N, Y^{(n)}) - \psi_{j+1}(\alpha) \big) = \frac{1}{\sqrt N} \sum_{n=1}^N \big( G_{j+1}(\alpha^N, Y^{(n)}) - G_{j+1}(\alpha, Y^{(n)}) - \psi_{j+1}(\alpha^N) + \psi_{j+1}(\alpha) \big) + \frac{1}{\sqrt N} \sum_{n=1}^N \big( G_{j+1}(\alpha, Y^{(n)}) - \psi_{j+1}(\alpha) \big) + \sqrt N \big( \psi_{j+1}(\alpha^N) - \psi_{j+1}(\alpha) \big).$$
Using these decompositions and Theorem 4.3 below, together with the differentiability of the functions $\phi_j$ and $\psi_j$, Theorem 4.2 can be proved by induction on $j$.

Theorem 4.3 Under H1', H2, H3, the variables
$$\frac{1}{\sqrt N} \sum_{n=1}^N \big( F_j(\alpha^N, Y^{(n)}) - F_j(\alpha, Y^{(n)}) \big) - \sqrt N \big( \phi_j(\alpha^N) - \phi_j(\alpha) \big)$$
and
$$\frac{1}{\sqrt N} \sum_{n=1}^N \big( G_{j+1}(\alpha^N, Y^{(n)}) - G_{j+1}(\alpha, Y^{(n)}) \big) - \sqrt N \big( \psi_{j+1}(\alpha^N) - \psi_{j+1}(\alpha) \big)$$
converge to 0 in $L^2$, for $j = 1$ to $L-1$.

Remark 4.1 If we try to compute the covariance matrix of the limiting distribution, we see from Theorem 4.3 and the above decompositions that it depends on the derivatives of the functions $\phi_j$ and $\psi_j$ at $\alpha$. This means that the estimation of this covariance matrix may prove difficult. Indeed, the derivation of an estimator for the derivative of a function is typically harder than for the function itself.

Remark 4.2 Theorem 4.3 can be viewed as a way of centering the $F_j$-increments (resp. the $G_{j+1}$-increments) between $\alpha^N$ and $\alpha$ by the $\phi_j$-increments (resp. the $\psi_{j+1}$-increments). One way to get some intuition for the proof of Theorem 4.3 is to observe that, if the sequence $(\alpha^N)_{N \ge 1}$ were independent of the $Y^{(n)}$'s, the convergence in $L^2$ would reduce to the convergence of $(\alpha^N)_{N \ge 1}$ to $\alpha$. Indeed, we would have to consider expectations of the type

$E\big[ ( \frac{1}{\sqrt N} \sum_{n=1}^N \xi_n )^2 \big]$ with variables $\xi_n$ which are centered and i.i.d., conditionally on $\alpha^N$. The main difficulty in the proof of Theorem 4.3 comes from the fact that $\alpha^N$ is not independent of the $Y^{(n)}$'s. On the other hand, we do have identically distributed random variables, and we will exploit symmetry arguments and the independence of $\alpha^{N-2}$ and $(Y^{(N-1)}, Y^{(N)})$.

For the proof of Theorem 4.3, we need to control the increments of $F_j$ and $G_{j+1}$. Lemma 4.2 relates these increments to indicator functions. Lemma 4.3 will enable us to localize $\alpha^N$ near $\alpha$. We will then develop recursive techniques adapted to dynamic programming (see Lemma 4.4 and Lemma 4.5). In the following, we denote by $I_i(Y, a, \epsilon)$ the function
$$I_i(Y, a, \epsilon) = \mathbf 1_{\{ |Z_i - a_i \cdot e(X_i)| \le \epsilon\, |e(X_i)| \}}.$$
Note that
$$I_i(Y, a, \epsilon) \le I_i\big( Y, b, \epsilon + |b_i - a_i| \big). \tag{4.5}$$
The following lemma is essentially a reformulation of Lemma 3.1.

Lemma 4.2 For $j = 1$ to $L-1$, and $a$, $b$ in $\mathbb R^m \times \cdots \times \mathbb R^m$, we have
$$\big| F_j(a, Y) - F_j(b, Y) \big| \le \bar Y \sum_{i=j}^{L-1} I_i\big( Y, a, |a_i - b_i| \big), \qquad \big| G_j(a, Y) - G_j(b, Y) \big| \le \bar Y \sum_{i=j}^{L-1} I_i\big( Y, a, |a_i - b_i| \big).$$

Lemma 4.3 Assume H1' and H2. Then, for $j = 1$ to $L-1$, there exists $C > 0$ such that, for all $\delta > 0$,
$$P\big( |\alpha_j^N - \alpha_j| \ge \delta \big) \le \frac{C}{\delta^4 N^2}.$$

Proof: Let us recall that, if $(U_n)_{n \ge 1}$ is a sequence of i.i.d. variables such that $E[U_1^4] < +\infty$, we have, for all $\delta > 0$,
$$P\left( \Big| \frac 1N \sum_{n=1}^N U_n - E[U_1] \Big| \ge \delta \right) \le \frac{C}{\delta^4 N^2}. \tag{4.6}$$

Observe that
$$\alpha_j^N - \alpha_j = (A_j^N)^{-1}\, \frac 1N \sum_{n=1}^N \big( G_{j+1}(\alpha^N, Y^{(n)}) - \psi_{j+1}(\alpha) \big) + \big( (A_j^N)^{-1} - A_j^{-1} \big)\, \psi_{j+1}(\alpha).$$
We set $\Omega_\epsilon^N = \{ \max_j \| A_j^N - A_j \| \le \epsilon \}$ and we choose $\epsilon$ such that $\| (A_j^N)^{-1} \| \le 2 \| A_j^{-1} \|$ on $\Omega_\epsilon^N$. From (4.6), we know that
$$P\big( (\Omega_\epsilon^N)^c \big) \le \frac{C}{\epsilon^4 N^2},$$
and that, on $\Omega_\epsilon^N$, for $j = 1$ to $L-1$,
$$\big| \big( (A_j^N)^{-1} - A_j^{-1} \big)\, \psi_{j+1}(\alpha) \big| \le K \epsilon.$$
Now, since $G_L(\alpha^N, Y^{(n)}) = Z_L^{(n)}\, e(X_{L-1}^{(n)})$, we deduce from (4.6), applied with this choice of $U_n$, that
$$P\big( |\alpha_{L-1}^N - \alpha_{L-1}| \ge \delta \big) \le \frac{C_L}{(\delta - K\epsilon)^4 N^2} + \frac{C_L}{\epsilon^4 N^2}.$$
Choosing $\epsilon = \rho\delta$ with $\rho$ small enough, we obtain:
$$P\big( |\alpha_{L-1}^N - \alpha_{L-1}| \ge \delta \big) \le \frac{C_L}{\delta^4 N^2}.$$
Assume now that the result of Lemma 4.3 is true for $j+1, \ldots, L-1$. We will prove that $P(|\alpha_j^N - \alpha_j| \ge \delta) \le C \delta^{-4} N^{-2}$. We have
$$\frac 1N \sum_{n=1}^N \big( G_{j+1}(\alpha^N, Y^{(n)}) - \psi_{j+1}(\alpha) \big) = \frac 1N \sum_{n=1}^N \big( G_{j+1}(\alpha^N, Y^{(n)}) - G_{j+1}(\alpha, Y^{(n)}) \big) + \frac 1N \sum_{n=1}^N \big( G_{j+1}(\alpha, Y^{(n)}) - \psi_{j+1}(\alpha) \big).$$
From Lemma 4.2, we obtain, on $\Omega_\epsilon^N$,
$$|\alpha_j^N - \alpha_j| \le K\epsilon + K\, \frac 1N \sum_{n=1}^N \bar Y^{(n)} \sum_{i=j+1}^{L-1} I_i\big( Y^{(n)}, \alpha, |\alpha_i^N - \alpha_i| \big) + K \left| \frac 1N \sum_{n=1}^N \big( G_{j+1}(\alpha, Y^{(n)}) - \psi_{j+1}(\alpha) \big) \right|.$$
The last term can be treated using (4.6). Therefore, it suffices to prove that, for all $\delta > 0$,
$$P( S_N \ge \delta ) \le \frac{C}{\delta^4 N^2},$$

where $S_N = \frac 1N \sum_{n=1}^N \bar Y^{(n)} \sum_{i=j+1}^{L-1} I_i( Y^{(n)}, \alpha, |\alpha_i^N - \alpha_i| )$. But
$$P( S_N \ge \delta ) \le P\left( \frac 1N \sum_{n=1}^N \bar Y^{(n)} \sum_{i=j+1}^{L-1} I_i( Y^{(n)}, \alpha, \epsilon ) \ge \delta \right) + \sum_{i=j+1}^{L-1} P\big( |\alpha_i^N - \alpha_i| \ge \epsilon \big).$$
By assumption, for $i = j+1$ to $L-1$, we have $P( |\alpha_i^N - \alpha_i| \ge \epsilon ) \le C \epsilon^{-4} N^{-2}$. Moreover, we know from H1' that
$$\sum_{i=j+1}^{L-1} E\big[ \bar Y\, I_i(Y, \alpha, \epsilon) \big] \le \epsilon k, \quad \text{with } k = \sum_{i=j+1}^{L-1} k_i,$$
and, using (4.6) again, we see that
$$P\left( \frac 1N \sum_{n=1}^N \bar Y^{(n)} \sum_{i=j+1}^{L-1} I_i( Y^{(n)}, \alpha, \epsilon ) \ge \delta \right) \le \frac{C}{(\delta - \epsilon k)^4 N^2}.$$
Choosing $\epsilon = \rho\delta$, with $\rho$ small enough, we obtain the result of Lemma 4.3.

Before stating other technical results in preparation for the proof of Theorem 4.3, we introduce the following notation. Given $k \in \{1, 2, \ldots, N\}$ and $\lambda, \mu \in \mathbb R_+$, we define a sequence of random vectors $U_j^k(\lambda, \mu)$, $j = 1, \ldots, L-1$, by the recursive relations
$$U_{L-1}^k(\lambda, \mu) = \frac{\lambda}{k}, \qquad U_j^k(\lambda, \mu) = \frac{\lambda}{k} + \frac{\mu}{k} \sum_{n=1}^{k} \bar Y^{(n)} \sum_{i=j+1}^{L-1} I_i\big( Y^{(n)}, \alpha^k, U_i^k(\lambda, \mu) \big), \quad j \le L-2,$$
where $\alpha^k$ denotes the least squares estimator computed from the first $k$ simulated paths. With this definition, $U_j^k(\lambda, \mu)$ is obviously $\sigma(Y^{(1)}, \ldots, Y^{(k)})$-measurable. We also observe that it is a symmetric function of $Y^{(1)}, \ldots, Y^{(k)}$, because $\alpha^k$ depends symmetrically on $Y^{(1)}, \ldots, Y^{(k)}$. The next lemma establishes a useful relation between $U^k$ and $U^{k-1}$.

Lemma 4.4 Assume H2. There exist positive constants $C$, $u$, $v$ such that, for each $N \ge 1$, one can find an event $\Omega_N$ with $P(\Omega_N^c) \le C/N^2$ and, on the set $\Omega_N$, we have, for $k \in \{2, \ldots, N\}$ and $j \in \{1, \ldots, L-1\}$,
$$U_j^k(\lambda, \mu) + |\alpha_j^k - \alpha_j^{k-1}| \le U_j^{k-1}\big( \lambda + (L\mu + u)\, \bar Y^{(k)},\ v + \mu \big).$$

Proof: We have
$$\alpha_j^k - \alpha_j^{k-1} = (A_j^k)^{-1}\, \frac 1k \sum_{n=1}^{k} G_{j+1}(\alpha^k, Y^{(n)}) - (A_j^{k-1})^{-1}\, \frac{1}{k-1} \sum_{n=1}^{k-1} G_{j+1}(\alpha^{k-1}, Y^{(n)}).$$
Since $A_j^k$ is the mean of i.i.d. random variables with moments of all orders and mean $A_j$, we can find $\Omega_N$, with $P(\Omega_N^c) = O(1/N^2)$, on which $\| (A_j^k)^{-1} \| \le 2 \| A_j^{-1} \|$, for $k = 2, \ldots, N$ and $j = 1, \ldots, L-1$. On this set, we have, for some positive constant $C$,
$$|\alpha_j^k - \alpha_j^{k-1}| \le \frac Ck \left( \bar Y^{(k)} + \frac{1}{k-1} \sum_{n=1}^{k-1} \bar Y^{(n)} \right) + \frac{C}{k-1} \sum_{n=1}^{k-1} \big| G_{j+1}(\alpha^k, Y^{(n)}) - G_{j+1}(\alpha^{k-1}, Y^{(n)}) \big|. \tag{4.7}$$
Here we have used the inequality $\| k A_j^k - (k-1) A_j^{k-1} \| \le C \bar Y^{(k)}$. We may choose $\Omega_N$ in such a way that $\frac{1}{k-1} \sum_{n=1}^{k-1} \bar Y^{(n)}$ remains bounded on $\Omega_N$. Note that, for $j = L-1$, the last sum in (4.7) vanishes. Using Lemma 4.2, for $j \le L-2$, we have, on $\Omega_N$,
$$|\alpha_j^k - \alpha_j^{k-1}| \le \frac uk\, \bar Y^{(k)} + \frac{v}{k-1} \sum_{n=1}^{k-1} \bar Y^{(n)} \sum_{i=j+1}^{L-1} I_i\big( Y^{(n)}, \alpha^k, |\alpha_i^k - \alpha_i^{k-1}| \big), \tag{4.8}$$
for some constants $u$ and $v$. To complete the proof of the lemma, we observe that, by (4.5),
$$I_i\big( Y^{(n)}, \alpha^k, U_i^k(\lambda, \mu) \big) \le I_i\big( Y^{(n)}, \alpha^{k-1}, U_i^k(\lambda, \mu) + |\alpha_i^k - \alpha_i^{k-1}| \big).$$
Now, for $j \le L-2$, by going back to the recursive definition of $U_j^k(\lambda, \mu)$ and separating the $k$-th term of the sum, we obtain
$$U_j^k(\lambda, \mu) \le \frac{\lambda}{k} + \frac{L\mu}{k}\, \bar Y^{(k)} + \frac{\mu}{k-1} \sum_{n=1}^{k-1} \bar Y^{(n)} \sum_{i=j+1}^{L-1} I_i\big( Y^{(n)}, \alpha^{k-1}, U_i^k(\lambda, \mu) + |\alpha_i^k - \alpha_i^{k-1}| \big). \tag{4.9}$$
Now let $V_i = U_i^k(\lambda, \mu) + |\alpha_i^k - \alpha_i^{k-1}|$. By combining (4.8) and (4.9), we get
$$V_j \le \frac{\lambda + (L\mu + u)\, \bar Y^{(k)}}{k-1} + \frac{\mu + v}{k-1} \sum_{n=1}^{k-1} \bar Y^{(n)} \sum_{i=j+1}^{L-1} I_i\big( Y^{(n)}, \alpha^{k-1}, V_i \big),$$
which is the announced inequality, by the recursive definition of $U_j^{k-1}$.

Lemma 4.5 Assume H1' and H2. For all $\varepsilon \in (0, 1]$ and for all $\mu \ge 0$, there exists a constant $C_{\varepsilon,\mu}$ such that, for all $\lambda \ge 0$ and $j \in \{1, \ldots, L-1\}$,
$$E\big[ U_j^{N-2}(\lambda, \mu) \big] \le C_{\varepsilon,\mu}\, (1 + \lambda)\, N^{\varepsilon - 1}.$$

Proof: We will prove by induction on $j = L-1, L-2, \ldots, 1$ that
$$E\big[ U_j^k(\lambda, \mu) \big] \le C_{\varepsilon,\mu}\, (1 + \lambda)\, k^{(L-1-j)\varepsilon - 1}, \quad 2 \le k \le N. \tag{4.10}$$
We obviously have (4.10) for $j = L-1$, since $U_{L-1}^k(\lambda, \mu) = \lambda/k$. We now assume that (4.10) holds for $j+1$ and will prove it for $j$. For $j \le L-2$, we have, using the symmetry of $U_j^k(\lambda, \mu)$ with respect to $Y^{(1)}, \ldots, Y^{(k)}$,
$$E\big[ U_j^k(\lambda, \mu) \big] = \frac{\lambda}{k} + \mu\, E\left[ \bar Y^{(k)} \sum_{i=j+1}^{L-1} I_i\big( Y^{(k)}, \alpha^k, U_i^k(\lambda, \mu) \big) \right].$$
For $i = j+1, \ldots, L-1$, we have, using (4.5), Lemma 4.4, and the notation $V_i^{k-1}(\lambda, \mu) = U_i^{k-1}( \lambda + (L\mu + u)\bar Y^{(k)},\ v + \mu )$,
$$E\big[ \bar Y^{(k)}\, I_i\big( Y^{(k)}, \alpha^k, U_i^k(\lambda, \mu) \big) \big] \le E\big[ \bar Y^{(k)}\, I_i\big( Y^{(k)}, \alpha^{k-1}, U_i^k(\lambda, \mu) + |\alpha_i^k - \alpha_i^{k-1}| \big) \big] \le E\big[ \bar Y^{(k)}\, I_i\big( Y^{(k)}, \alpha^{k-1}, V_i^{k-1}(\lambda, \mu) \big) \big] + E\big[ \bar Y^{(k)}\, \mathbf 1_{\Omega_N^c} \big].$$
Note that $E[\bar Y^{(k)} \mathbf 1_{\Omega_N^c}] \le \| \bar Y \|_{L^2}\, \sqrt{P(\Omega_N^c)} = O(1/N)$. At this point, we would like to use H1' and the induction hypothesis. However, we have to be careful, because $V_i^{k-1}(\lambda, \mu)$ depends on $\bar Y^{(k)}$. For $i \in \{j+1, \ldots, L-1\}$, we write
$$E\big[ \bar Y^{(k)}\, I_i\big( Y^{(k)}, \alpha^{k-1}, V_i^{k-1}(\lambda, \mu) \big) \big] = \sum_{l=1}^{+\infty} A_l, \quad \text{with} \quad A_l = E\big[ \bar Y^{(k)}\, \mathbf 1_{\{l-1 \le \bar Y^{(k)} < l\}}\, I_i\big( Y^{(k)}, \alpha^{k-1}, V_i^{k-1}(\lambda, \mu) \big) \big] \le l\, P\big( \bar Y^{(k)} \ge l-1 \big)^{1/p'}\, E\big[ \mathbf 1_{\{\bar Y^{(k)} < l\}}\, I_i\big( Y^{(k)}, \alpha^{k-1}, V_i^{k-1}(\lambda, \mu) \big) \big]^{1/p},$$

for all conjugate exponents $p, p' > 1$ (Hölder's inequality). Now,
$$E\big[ \mathbf 1_{\{\bar Y^{(k)} < l\}}\, I_i\big( Y^{(k)}, \alpha^{k-1}, V_i^{k-1}(\lambda, \mu) \big) \big] \le E\big[ I_i\big( Y^{(k)}, \alpha^{k-1}, U_i^{k-1}( \lambda + (L\mu + u)l,\ v + \mu ) \big) \big],$$
and we may condition with respect to $\sigma(Y^{(1)}, \ldots, Y^{(k-1)})$ and use Lemma 4.3 and H1' to obtain
$$E\big[ \mathbf 1_{\{\bar Y^{(k)} < l\}}\, I_i\big( Y^{(k)}, \alpha^{k-1}, V_i^{k-1}(\lambda, \mu) \big) \big] \le C\, E\big[ U_i^{k-1}( \lambda + (L\mu + u)l,\ v + \mu ) \big] + \frac{C}{N^2}.$$
We can now apply the induction hypothesis, and we easily deduce (4.10) for $j$, using the fact that $P(\bar Y^{(k)} \ge l) = o(1/l^r)$ for all $r \ge 1$.

Proof of Theorem 4.3: We prove that
$$\lim_{N \to \infty} \frac{1}{\sqrt N} \sum_{n=1}^N \big( F_j(\alpha^N, Y^{(n)}) - F_j(\alpha, Y^{(n)}) \big) - \sqrt N \big( \phi_j(\alpha^N) - \phi_j(\alpha) \big) = 0 \quad \text{in } L^2.$$
The proof is similar for the second term of the theorem. We introduce the notation
$$\Delta_j(a, b, Y) = F_j(a, Y) - F_j(b, Y) - \big( \phi_j(a) - \phi_j(b) \big).$$
We have to prove that
$$\lim_{N \to \infty} E\left[ \left( \frac{1}{\sqrt N} \sum_{n=1}^N \Delta_j(\alpha^N, \alpha, Y^{(n)}) \right)^2 \right] = 0.$$
Remark that, for $n = 1$ to $N$, the pairs $(\alpha^N, Y^{(n)})$ have the same law, and, for $n \ne n'$, the triples $(\alpha^N, Y^{(n)}, Y^{(n')})$ have the same distribution. So we obtain
$$E\left[ \left( \frac{1}{\sqrt N} \sum_{n=1}^N \Delta_j(\alpha^N, \alpha, Y^{(n)}) \right)^2 \right] = E\big[ \Delta_j^2(\alpha^N, \alpha, Y^{(1)}) \big] + (N-1)\, E\big[ \Delta_j(\alpha^N, \alpha, Y^{(1)})\, \Delta_j(\alpha^N, \alpha, Y^{(2)}) \big].$$
But $|\Delta_j(\alpha^N, \alpha, Y^{(1)})| \le 2 ( \bar Y^{(1)} + E[\bar Y] )$. Since the sequence $\alpha^N$ goes to $\alpha$ almost surely, and $P(Z_i = \alpha_i \cdot e(X_i)) = 0$ for $i = 1$ to $L-1$ by assumption, we deduce that $\Delta_j(\alpha^N, \alpha, Y^{(1)})$ goes to 0 almost surely. Consequently, we obtain that $E[ \Delta_j^2(\alpha^N, \alpha, Y^{(1)}) ]$ tends to 0. It remains to prove that
$$\lim_{N \to \infty} N\, E\big[ \Delta_j(\alpha^N, \alpha, Y^{(1)})\, \Delta_j(\alpha^N, \alpha, Y^{(2)}) \big] = 0. \tag{4.11}$$

By exchangeability, this expectation equals $E[ \Delta_j(\alpha^N, \alpha, Y^{(N-1)})\, \Delta_j(\alpha^N, \alpha, Y^{(N)}) ]$. We observe that
$$E\big[ \Delta_j(\alpha^{N-2}, \alpha, Y^{(N-1)}) \mid Y^{(1)}, \ldots, Y^{(N-2)} \big] = 0,$$
since $E[ F_j(\alpha^{N-2}, Y^{(N-1)}) \mid Y^{(1)}, \ldots, Y^{(N-2)} ] = \phi_j(\alpha^{N-2})$ almost surely. This gives
$$E\big[ \Delta_j(\alpha^{N-2}, \alpha, Y^{(N-1)})\, \Delta_j(\alpha^{N-2}, \alpha, Y^{(N)}) \big] = 0,$$
and we just have to prove that
$$\lim_{N \to \infty} N\, E\big[ \Delta_j(\alpha^N, \alpha, Y^{(N-1)})\, \Delta_j(\alpha^N, \alpha, Y^{(N)}) - \Delta_j(\alpha^{N-2}, \alpha, Y^{(N-1)})\, \Delta_j(\alpha^{N-2}, \alpha, Y^{(N)}) \big] = 0.$$
We have the equality
$$\Delta_j(\alpha^N, \alpha, Y^{(N-1)})\, \Delta_j(\alpha^N, \alpha, Y^{(N)}) - \Delta_j(\alpha^{N-2}, \alpha, Y^{(N-1)})\, \Delta_j(\alpha^{N-2}, \alpha, Y^{(N)}) = \Delta_j(\alpha^N, \alpha^{N-2}, Y^{(N-1)})\, \Delta_j(\alpha^N, \alpha, Y^{(N)}) + \Delta_j(\alpha^{N-2}, \alpha, Y^{(N-1)})\, \Delta_j(\alpha^N, \alpha^{N-2}, Y^{(N)}).$$
We want to prove that
$$\lim_{N \to \infty} N\, E\big[ \Delta_j(\alpha^N, \alpha^{N-2}, Y^{(N-1)})\, \Delta_j(\alpha^N, \alpha, Y^{(N)}) \big] = 0 \tag{4.12}$$
and
$$\lim_{N \to \infty} N\, E\big[ \Delta_j(\alpha^{N-2}, \alpha, Y^{(N-1)})\, \Delta_j(\alpha^N, \alpha^{N-2}, Y^{(N)}) \big] = 0. \tag{4.13}$$
Both equalities can be proved in a similar manner. We give the details for (4.13). First, note that, given any $\eta > 0$, we have, using H2 and Hölder's inequality,
$$E\big[ \big| \Delta_j(\alpha^{N-2}, \alpha, Y^{(N-1)})\, \Delta_j(\alpha^N, \alpha^{N-2}, Y^{(N)}) \big|\, \mathbf 1_{\{ |\alpha^{N-2} - \alpha| \ge N^{-\eta} \}} \big] \le C_p\, P\big( |\alpha^{N-2} - \alpha| \ge N^{-\eta} \big)^{1/p},$$
for all $p > 1$. We know from Lemma 4.3 that $P( |\alpha^{N-2} - \alpha| \ge N^{-\eta} ) \le C/N^{2-4\eta}$. Therefore, if $\eta < 1/4$, choosing $p$ close enough to 1,
$$\lim_{N \to \infty} N\, E\big[ \big| \Delta_j(\alpha^{N-2}, \alpha, Y^{(N-1)})\, \Delta_j(\alpha^N, \alpha^{N-2}, Y^{(N)}) \big|\, \mathbf 1_{\{ |\alpha^{N-2} - \alpha| \ge N^{-\eta} \}} \big] = 0.$$
On the other hand, we have, using Lemma 4.2,
$$\big| \Delta_j(\alpha^N, \alpha^{N-2}, Y^{(N)}) \big| \le \bar Y^{(N)} \sum_{i=1}^{L-1} I_i\big( Y^{(N)}, \alpha^{N-2}, |\alpha_i^N - \alpha_i^{N-2}| \big) + \big| \phi_j(\alpha^N) - \phi_j(\alpha^{N-2}) \big|,$$

and
$$\big| \Delta_j(\alpha^{N-2}, \alpha, Y^{(N-1)}) \big| \le \bar Y^{(N-1)} \sum_{l=1}^{L-1} I_l\big( Y^{(N-1)}, \alpha, |\alpha_l^{N-2} - \alpha_l| \big) + \big| \phi_j(\alpha^{N-2}) - \phi_j(\alpha) \big|.$$
By the same reasoning as in the proof of Lemma 4.4, there are positive constants $C$, $s$ and $t$ such that, for each $N$, one can find a set $\Omega_N$ with $P(\Omega_N^c) \le C/N^2$, on which
$$|\alpha_i^N - \alpha_i^{N-2}| \le U_i^{N-2}\big( s(\bar Y^{(N)} + \bar Y^{(N-1)}),\ t \big), \quad i = 1, \ldots, L-1.$$
Using Lemma 4.3 and H3, we may also assume that, on $\Omega_N$,
$$\big| \phi_j(\alpha^N) - \phi_j(\alpha^{N-2}) \big| \le K \sum_{i=1}^{L-1} |\alpha_i^N - \alpha_i^{N-2}| \quad \text{and} \quad \big| \phi_j(\alpha^{N-2}) - \phi_j(\alpha) \big| \le K \sum_{l=1}^{L-1} |\alpha_l^{N-2} - \alpha_l|,$$
for some positive constant $K$. We now have, for $\eta < 1/4$, on the set $\{ |\alpha^{N-2} - \alpha| \le N^{-\eta} \} \cap \Omega_N$,
$$\big| \Delta_j(\alpha^{N-2}, \alpha, Y^{(N-1)}) \big| \le \sum_{l=1}^{L-1} \big( \bar Y^{(N-1)}\, I_l( Y^{(N-1)}, \alpha, N^{-\eta} ) + K N^{-\eta} \big).$$
In order to prove (4.13), it now suffices to show that, for $1 \le i, l \le L-1$,
$$\lim_{N \to \infty} N\, E\left[ \Big( \bar Y^{(N)}\, I_i\big( Y^{(N)}, \alpha^{N-2}, V_i^{N-2} \big) + V_i^{N-2} \Big) \Big( \bar Y^{(N-1)}\, I_l\big( Y^{(N-1)}, \alpha, N^{-\eta} \big) + N^{-\eta} \Big) \right] = 0,$$
with the notation $V_i^{N-2} = U_i^{N-2}( s(\bar Y^{(N)} + \bar Y^{(N-1)}),\ t )$. We will only prove that
$$\lim_{N \to \infty} N\, E\left[ \bar Y^{(N)}\, I_i\big( Y^{(N)}, \alpha^{N-2}, V_i^{N-2} \big)\, \bar Y^{(N-1)}\, I_l\big( Y^{(N-1)}, \alpha, N^{-\eta} \big) \right] = 0,$$
since the other terms are easier to control. For $m \ge 1$, let
$$A_m = \big\{ m-1 \le \bar Y^{(N)} + \bar Y^{(N-1)} < m \big\}.$$

We have
$$E\left[ \bar Y^{(N)}\, I_i\big( Y^{(N)}, \alpha^{N-2}, V_i^{N-2} \big)\, \bar Y^{(N-1)}\, I_l\big( Y^{(N-1)}, \alpha, N^{-\eta} \big)\, \mathbf 1_{A_m} \right] \le E\left[ \bar Y^{(N)}\, I_i\big( Y^{(N)}, \alpha^{N-2}, U_i^{N-2}(sm, t) \big)\, \bar Y^{(N-1)}\, I_l\big( Y^{(N-1)}, \alpha, N^{-\eta} \big)\, \mathbf 1_{A_m} \right],$$
since $V_i^{N-2} \le U_i^{N-2}(sm, t)$ on $A_m$. Here we have used the fact that $Y^{(N-1)}$ is independent of $(Y^{(1)}, \ldots, Y^{(N-2)}, Y^{(N)})$, so that the factor $E[ \bar Y^{(N-1)} I_l( Y^{(N-1)}, \alpha, N^{-\eta} ) ]$, which is at most $k_l N^{-\eta}$ by H1', can be separated. We now condition with respect to $\sigma(Y^{(1)}, \ldots, Y^{(N-2)})$ in the first expectation and use H1' and Lemma 4.5 to obtain
$$E\left[ \bar Y^{(N)}\, I_i\big( Y^{(N)}, \alpha^{N-2}, V_i^{N-2} \big)\, \bar Y^{(N-1)}\, I_l\big( Y^{(N-1)}, \alpha, N^{-\eta} \big)\, \mathbf 1_{A_m} \right] \le \frac{C_\varepsilon\, m}{N^{1 - \varepsilon + \eta}}.$$
Here $\varepsilon$ is an arbitrary positive number and, by taking $\varepsilon < \eta < 1/4$, summing over $m$ and using Hölder's inequality together with the fast decay of $P(A_m)$ in $m$, we easily complete the proof.

Acknowledgement: The research on this paper was stimulated by the "projet Mathfi" seminar on the topic "Monte-Carlo methods for American options" during the fall. The talks given by P. Cohort, J.F. Delmas, B. Jourdain, and E. Temam were especially helpful.

References

[1] Bally, V. and G. Pagès, A quantization algorithm for solving multidimensional optimal stopping problems, Preprint, December.

[2] Broadie, M. and P. Glasserman, Pricing American-style securities using simulation, Journal of Economic Dynamics and Control 21, 1997.

[3] Broadie, M., P. Glasserman, and G. Jain, Enhanced Monte-Carlo estimates for American option prices, Journal of Derivatives 5, 25-44, 1997.

[4] Broadie, M. and P. Glasserman, A stochastic mesh method for pricing high-dimensional American options, Preprint, November 1997.

[5] Carrière, J., Valuation of the early-exercise price of options using simulations and nonparametric regression, Insurance: Mathematics and Economics 19, 19-30, 1996.

[6] Dupacova, J. and R. Wets, Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems, Ann. of Stat. 16, No. 4, 1988.

[7] King, A.J. and T.R. Rockafellar, Asymptotic theory for solutions in statistical estimation and stochastic programming, Math. of Operations Research 18, No. 1, 148-162, 1993.

[8] Lamberton, D., Brownian optimal stopping and random walks, to appear in Applied Math. and Optimization.

[9] Longstaff, F.A. and E.S. Schwartz, Valuing American options by simulation: a simple least-squares approach, Review of Financial Studies 14, 113-148, 2001.

[10] Neveu, J., Discrete-Parameter Martingales, North-Holland, Amsterdam, 1975.

[11] Shapiro, A., Asymptotic properties of statistical estimators in stochastic programming, Ann. of Stat. 17, No. 2, 1989.

[12] Tsitsiklis, J.N. and B. Van Roy, Regression methods for pricing complex American-style options, preprint, August. To appear in IEEE Transactions on Neural Networks.


Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Hidden Markov Models

Hidden Markov Models Hdden Markov Models Namrata Vaswan, Iowa State Unversty Aprl 24, 204 Hdden Markov Model Defntons and Examples Defntons:. A hdden Markov model (HMM) refers to a set of hdden states X 0, X,..., X t,...,

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES BÂRZĂ, Slvu Faculty of Mathematcs-Informatcs Spru Haret Unversty barza_slvu@yahoo.com Abstract Ths paper wants to contnue

More information

The Expectation-Maximization Algorithm

The Expectation-Maximization Algorithm The Expectaton-Maxmaton Algorthm Charles Elan elan@cs.ucsd.edu November 16, 2007 Ths chapter explans the EM algorthm at multple levels of generalty. Secton 1 gves the standard hgh-level verson of the algorthm.

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0 Bezer curves Mchael S. Floater August 25, 211 These notes provde an ntroducton to Bezer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of the

More information

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14 APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,

More information

Convergence of option rewards for multivariate price processes

Convergence of option rewards for multivariate price processes Mathematcal Statstcs Stockholm Unversty Convergence of opton rewards for multvarate prce processes Robn Lundgren Dmtr Slvestrov Research Report 2009:10 ISSN 1650-0377 Postal address: Mathematcal Statstcs

More information

Limited Dependent Variables

Limited Dependent Variables Lmted Dependent Varables. What f the left-hand sde varable s not a contnuous thng spread from mnus nfnty to plus nfnty? That s, gven a model = f (, β, ε, where a. s bounded below at zero, such as wages

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

Strong Markov property: Same assertion holds for stopping times τ.

Strong Markov property: Same assertion holds for stopping times τ. Brownan moton Let X ={X t : t R + } be a real-valued stochastc process: a famlty of real random varables all defned on the same probablty space. Defne F t = nformaton avalable by observng the process up

More information

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0 Bézer curves Mchael S. Floater September 1, 215 These notes provde an ntroducton to Bézer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

COS 521: Advanced Algorithms Game Theory and Linear Programming

COS 521: Advanced Algorithms Game Theory and Linear Programming COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

General viscosity iterative method for a sequence of quasi-nonexpansive mappings

General viscosity iterative method for a sequence of quasi-nonexpansive mappings Avalable onlne at www.tjnsa.com J. Nonlnear Sc. Appl. 9 (2016), 5672 5682 Research Artcle General vscosty teratve method for a sequence of quas-nonexpansve mappngs Cuje Zhang, Ynan Wang College of Scence,

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EQUATION

ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EQUATION Advanced Mathematcal Models & Applcatons Vol.3, No.3, 2018, pp.215-222 ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EUATION

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

Inexact Newton Methods for Inverse Eigenvalue Problems

Inexact Newton Methods for Inverse Eigenvalue Problems Inexact Newton Methods for Inverse Egenvalue Problems Zheng-jan Ba Abstract In ths paper, we survey some of the latest development n usng nexact Newton-lke methods for solvng nverse egenvalue problems.

More information

MATH 5707 HOMEWORK 4 SOLUTIONS 2. 2 i 2p i E(X i ) + E(Xi 2 ) ä i=1. i=1

MATH 5707 HOMEWORK 4 SOLUTIONS 2. 2 i 2p i E(X i ) + E(Xi 2 ) ä i=1. i=1 MATH 5707 HOMEWORK 4 SOLUTIONS CİHAN BAHRAN 1. Let v 1,..., v n R m, all lengths v are not larger than 1. Let p 1,..., p n [0, 1] be arbtrary and set w = p 1 v 1 + + p n v n. Then there exst ε 1,..., ε

More information

SUCCESSIVE MINIMA AND LATTICE POINTS (AFTER HENK, GILLET AND SOULÉ) M(B) := # ( B Z N)

SUCCESSIVE MINIMA AND LATTICE POINTS (AFTER HENK, GILLET AND SOULÉ) M(B) := # ( B Z N) SUCCESSIVE MINIMA AND LATTICE POINTS (AFTER HENK, GILLET AND SOULÉ) S.BOUCKSOM Abstract. The goal of ths note s to present a remarably smple proof, due to Hen, of a result prevously obtaned by Gllet-Soulé,

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

Deriving the X-Z Identity from Auxiliary Space Method

Deriving the X-Z Identity from Auxiliary Space Method Dervng the X-Z Identty from Auxlary Space Method Long Chen Department of Mathematcs, Unversty of Calforna at Irvne, Irvne, CA 92697 chenlong@math.uc.edu 1 Iteratve Methods In ths paper we dscuss teratve

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo

More information

Lecture 4 Hypothesis Testing

Lecture 4 Hypothesis Testing Lecture 4 Hypothess Testng We may wsh to test pror hypotheses about the coeffcents we estmate. We can use the estmates to test whether the data rejects our hypothess. An example mght be that we wsh to

More information

Canonical transformations

Canonical transformations Canoncal transformatons November 23, 2014 Recall that we have defned a symplectc transformaton to be any lnear transformaton M A B leavng the symplectc form nvarant, Ω AB M A CM B DΩ CD Coordnate transformatons,

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,

More information

k t+1 + c t A t k t, t=0

k t+1 + c t A t k t, t=0 Macro II (UC3M, MA/PhD Econ) Professor: Matthas Kredler Fnal Exam 6 May 208 You have 50 mnutes to complete the exam There are 80 ponts n total The exam has 4 pages If somethng n the queston s unclear,

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

Appendix B. Criterion of Riemann-Stieltjes Integrability

Appendix B. Criterion of Riemann-Stieltjes Integrability Appendx B. Crteron of Remann-Steltes Integrablty Ths note s complementary to [R, Ch. 6] and [T, Sec. 3.5]. The man result of ths note s Theorem B.3, whch provdes the necessary and suffcent condtons for

More information

Some modelling aspects for the Matlab implementation of MMA

Some modelling aspects for the Matlab implementation of MMA Some modellng aspects for the Matlab mplementaton of MMA Krster Svanberg krlle@math.kth.se Optmzaton and Systems Theory Department of Mathematcs KTH, SE 10044 Stockholm September 2004 1. Consdered optmzaton

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

Complete subgraphs in multipartite graphs

Complete subgraphs in multipartite graphs Complete subgraphs n multpartte graphs FLORIAN PFENDER Unverstät Rostock, Insttut für Mathematk D-18057 Rostock, Germany Floran.Pfender@un-rostock.de Abstract Turán s Theorem states that every graph G

More information

Supplementary material: Margin based PU Learning. Matrix Concentration Inequalities

Supplementary material: Margin based PU Learning. Matrix Concentration Inequalities Supplementary materal: Margn based PU Learnng We gve the complete proofs of Theorem and n Secton We frst ntroduce the well-known concentraton nequalty, so the covarance estmator can be bounded Then we

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

THE WEIGHTED WEAK TYPE INEQUALITY FOR THE STRONG MAXIMAL FUNCTION

THE WEIGHTED WEAK TYPE INEQUALITY FOR THE STRONG MAXIMAL FUNCTION THE WEIGHTED WEAK TYPE INEQUALITY FO THE STONG MAXIMAL FUNCTION THEMIS MITSIS Abstract. We prove the natural Fefferman-Sten weak type nequalty for the strong maxmal functon n the plane, under the assumpton

More information

TAIL BOUNDS FOR SUMS OF GEOMETRIC AND EXPONENTIAL VARIABLES

TAIL BOUNDS FOR SUMS OF GEOMETRIC AND EXPONENTIAL VARIABLES TAIL BOUNDS FOR SUMS OF GEOMETRIC AND EXPONENTIAL VARIABLES SVANTE JANSON Abstract. We gve explct bounds for the tal probabltes for sums of ndependent geometrc or exponental varables, possbly wth dfferent

More information

Lecture 4: September 12

Lecture 4: September 12 36-755: Advanced Statstcal Theory Fall 016 Lecture 4: September 1 Lecturer: Alessandro Rnaldo Scrbe: Xao Hu Ta Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer: These notes have not been

More information

Eigenvalues of Random Graphs

Eigenvalues of Random Graphs Spectral Graph Theory Lecture 2 Egenvalues of Random Graphs Danel A. Spelman November 4, 202 2. Introducton In ths lecture, we consder a random graph on n vertces n whch each edge s chosen to be n the

More information

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA 4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected

More information

Hidden Markov Models & The Multivariate Gaussian (10/26/04)

Hidden Markov Models & The Multivariate Gaussian (10/26/04) CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models

More information

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem. prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove

More information

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora prnceton unv. F 13 cos 521: Advanced Algorthm Desgn Lecture 3: Large devatons bounds and applcatons Lecturer: Sanjeev Arora Scrbe: Today s topc s devaton bounds: what s the probablty that a random varable

More information

Parametric fractional imputation for missing data analysis. Jae Kwang Kim Survey Working Group Seminar March 29, 2010

Parametric fractional imputation for missing data analysis. Jae Kwang Kim Survey Working Group Seminar March 29, 2010 Parametrc fractonal mputaton for mssng data analyss Jae Kwang Km Survey Workng Group Semnar March 29, 2010 1 Outlne Introducton Proposed method Fractonal mputaton Approxmaton Varance estmaton Multple mputaton

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

1 Matrix representations of canonical matrices

1 Matrix representations of canonical matrices 1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

Foundations of Arithmetic

Foundations of Arithmetic Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an

More information

First day August 1, Problems and Solutions

First day August 1, Problems and Solutions FOURTH INTERNATIONAL COMPETITION FOR UNIVERSITY STUDENTS IN MATHEMATICS July 30 August 4, 997, Plovdv, BULGARIA Frst day August, 997 Problems and Solutons Problem. Let {ε n } n= be a sequence of postve

More information

Convergence of random processes

Convergence of random processes DS-GA 12 Lecture notes 6 Fall 216 Convergence of random processes 1 Introducton In these notes we study convergence of dscrete random processes. Ths allows to characterze phenomena such as the law of large

More information

REAL ANALYSIS I HOMEWORK 1

REAL ANALYSIS I HOMEWORK 1 REAL ANALYSIS I HOMEWORK CİHAN BAHRAN The questons are from Tao s text. Exercse 0.0.. If (x α ) α A s a collecton of numbers x α [0, + ] such that x α

More information

The lower and upper bounds on Perron root of nonnegative irreducible matrices

The lower and upper bounds on Perron root of nonnegative irreducible matrices Journal of Computatonal Appled Mathematcs 217 (2008) 259 267 wwwelsevercom/locate/cam The lower upper bounds on Perron root of nonnegatve rreducble matrces Guang-Xn Huang a,, Feng Yn b,keguo a a College

More information

Econ Statistical Properties of the OLS estimator. Sanjaya DeSilva

Econ Statistical Properties of the OLS estimator. Sanjaya DeSilva Econ 39 - Statstcal Propertes of the OLS estmator Sanjaya DeSlva September, 008 1 Overvew Recall that the true regresson model s Y = β 0 + β 1 X + u (1) Applyng the OLS method to a sample of data, we estmate

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that

More information

Markov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement

Markov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement Markov Chan Monte Carlo MCMC, Gbbs Samplng, Metropols Algorthms, and Smulated Annealng 2001 Bonformatcs Course Supplement SNU Bontellgence Lab http://bsnuackr/ Outlne! Markov Chan Monte Carlo MCMC! Metropols-Hastngs

More information

A new Approach for Solving Linear Ordinary Differential Equations

A new Approach for Solving Linear Ordinary Differential Equations , ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of

More information

Solutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution.

Solutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution. Solutons HW #2 Dual of general LP. Fnd the dual functon of the LP mnmze subject to c T x Gx h Ax = b. Gve the dual problem, and make the mplct equalty constrants explct. Soluton. 1. The Lagrangan s L(x,

More information

Affine and Riemannian Connections

Affine and Riemannian Connections Affne and Remannan Connectons Semnar Remannan Geometry Summer Term 2015 Prof Dr Anna Wenhard and Dr Gye-Seon Lee Jakob Ullmann Notaton: X(M) space of smooth vector felds on M D(M) space of smooth functons

More information

Uniqueness of Weak Solutions to the 3D Ginzburg- Landau Model for Superconductivity

Uniqueness of Weak Solutions to the 3D Ginzburg- Landau Model for Superconductivity Int. Journal of Math. Analyss, Vol. 6, 212, no. 22, 195-114 Unqueness of Weak Solutons to the 3D Gnzburg- Landau Model for Superconductvty Jshan Fan Department of Appled Mathematcs Nanjng Forestry Unversty

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

and problem sheet 2

and problem sheet 2 -8 and 5-5 problem sheet Solutons to the followng seven exercses and optonal bonus problem are to be submtted through gradescope by :0PM on Wednesday th September 08. There are also some practce problems,

More information

Sample Average Approximation with Adaptive Importance Sampling

Sample Average Approximation with Adaptive Importance Sampling oname manuscrpt o. wll be nserted by the edtor) Sample Average Approxmaton wth Adaptve Importance Samplng Andreas Wächter Jeremy Staum Alvaro Maggar Mngbn Feng October 9, 07 Abstract We study sample average

More information

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2)

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2) 1/16 MATH 829: Introducton to Data Mnng and Analyss The EM algorthm (part 2) Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 20, 2016 Recall 2/16 We are gven ndependent observatons

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS

MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS These are nformal notes whch cover some of the materal whch s not n the course book. The man purpose s to gve a number of nontrval examples

More information

Lecture 3: Probability Distributions

Lecture 3: Probability Distributions Lecture 3: Probablty Dstrbutons Random Varables Let us begn by defnng a sample space as a set of outcomes from an experment. We denote ths by S. A random varable s a functon whch maps outcomes nto the

More information

Remarks on the Properties of a Quasi-Fibonacci-like Polynomial Sequence

Remarks on the Properties of a Quasi-Fibonacci-like Polynomial Sequence Remarks on the Propertes of a Quas-Fbonacc-lke Polynomal Sequence Brce Merwne LIU Brooklyn Ilan Wenschelbaum Wesleyan Unversty Abstract Consder the Quas-Fbonacc-lke Polynomal Sequence gven by F 0 = 1,

More information

Lecture 3. Ax x i a i. i i

Lecture 3. Ax x i a i. i i 18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest

More information

Supplement to Clustering with Statistical Error Control

Supplement to Clustering with Statistical Error Control Supplement to Clusterng wth Statstcal Error Control Mchael Vogt Unversty of Bonn Matthas Schmd Unversty of Bonn In ths supplement, we provde the proofs that are omtted n the paper. In partcular, we derve

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

Primer on High-Order Moment Estimators

Primer on High-Order Moment Estimators Prmer on Hgh-Order Moment Estmators Ton M. Whted July 2007 The Errors-n-Varables Model We wll start wth the classcal EIV for one msmeasured regressor. The general case s n Erckson and Whted Econometrc

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

DIFFERENTIAL FORMS BRIAN OSSERMAN

DIFFERENTIAL FORMS BRIAN OSSERMAN DIFFERENTIAL FORMS BRIAN OSSERMAN Dfferentals are an mportant topc n algebrac geometry, allowng the use of some classcal geometrc arguments n the context of varetes over any feld. We wll use them to defne

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information