INEXACT CUTS FOR DETERMINISTIC AND STOCHASTIC DUAL DYNAMIC PROGRAMMING APPLIED TO CONVEX NONLINEAR OPTIMIZATION PROBLEMS
Vincent Guigues
School of Applied Mathematics, FGV, Praia de Botafogo, Rio de Janeiro, Brazil

Abstract. We introduce an extension of Dual Dynamic Programming (DDP) to solve convex nonlinear dynamic programming equations. We call this extension Inexact DDP (IDDP); it applies to situations where some or all primal and dual subproblems to be solved along the iterations of the method are solved with a bounded error (inexactly). We show that any accumulation point of the sequence of decisions is an approximate solution to the dynamic programming equations. When these errors tend to zero as the number of iterations goes to infinity, we show that IDDP solves the dynamic programming equations. We extend the analysis to stochastic convex nonlinear dynamic programming equations, introducing Inexact Stochastic Dual Dynamic Programming (ISDDP), an inexact variant of SDDP corresponding to the situation where some or all problems to be solved in the forward and backward passes of SDDP are solved approximately. We also show the almost sure convergence of ISDDP for vanishing errors.

Keywords: Stochastic programming; Inexact cuts for value functions; Bounding ε-optimal dual solutions; SDDP; Inexact SDDP.

AMS subject classifications: 90C15, 90C.

1. Introduction

Stochastic Dual Dynamic Programming (SDDP) is a sampling-based extension of the nested decomposition method [2] to solve some T-stage stochastic programs, pioneered by [13]. Originally, in [13], it was presented to solve Multistage Stochastic Linear Programs (MSLPs). Since many real-life applications in, e.g., finance and engineering can be modelled by such problems, until recently most papers on SDDP and related decomposition methods, especially theory papers, focused on enhancements of the method for MSLPs. These enhancements include risk-averse SDDP [16], [8], [7], [14], [11], [17] and a convergence proof in [15]. However, SDDP can be applied to solve nonlinear stochastic convex dynamic programming equations.
For such problems, the convergence of the method was proved recently in [3] for risk-neutral problems, in [4] for risk-averse problems, and in [9] for a regularized variant implemented on a nonlinear dynamic portfolio model with market impact costs. To the best of our knowledge, all studies on SDDP rely on the assumption that all primal and dual subproblems solved in the forward and backward passes of the method are solved exactly. However, when these methods are applied to nonlinear problems, only approximate solutions are available for the subproblems solved in the forward and backward passes of the algorithm. In this context, the objective of this paper is to design variants of DDP (the deterministic counterpart of SDDP) and of SDDP that solve nonlinear convex dynamic programming equations while taking this fact into account. We call the corresponding variants Inexact DDP (IDDP) and Inexact SDDP (ISDDP). It should be mentioned, however, that there is another motivation for considering inexact variants of DDP and SDDP. Indeed, it is known (see for instance the numerical experiments in [6], [5]) that during the first iterations of the method, and especially for the first stages, the computed cuts can be quite distant from the corresponding recourse function in the neighborhood of the trial point at which each cut was computed, so these cuts are quickly dominated by other, more relevant cuts in this neighborhood. Therefore, it makes sense to solve more quickly and less accurately (inexactly) all subproblems of the forward and backward passes corresponding to the
first iterations, especially for the first stages, and to increase the precision of the computed solutions as the algorithm progresses. While the idea behind IDDP and ISDDP is simple and the motivations clear, the description and convergence analysis of IDDP and ISDDP require solving the following problems of convex analysis, interesting per se, which, to the best of our knowledge, had not been discussed so far in the literature:

SDDP for nonlinear programs relies on a formula for the subdifferential of the value function Q(x) of a convex optimization problem of the form

(1.1)   Q(x) = inf_{y ∈ R^n} { f(y, x) : y ∈ Y, Ay + Bx = b, g(y, x) ≤ 0 },

where Y ⊆ R^n is nonempty and convex, f : R^n × R^m → R ∪ {+∞} is convex, lower semicontinuous, and proper, and the components of g are convex lower semicontinuous functions. Formulas for the subdifferential ∂Q(x) are given in [4]. These formulas are based on the assumption that primal and dual solutions to (1.1) are available. When only approximate ε-optimal primal and dual solutions are available for (1.1) written with x = x̄, we derive formulas for affine lower bounding functions C for Q, which we call inexact cuts, such that the distance Q(x̄) − C(x̄) between the values of Q and of the cut at x̄ is bounded from above by a known function ε₀ of the problem parameters. Of course, we would like ε₀ to be as small as possible, and ε₀ = 0 when ε = 0. Two cases are considered:
(i) the case when the feasible set of (1.1) is Y, i.e., when the argument x of Q appears only in the objective function of (1.1). In this situation, formulas for inexact cuts are given in Proposition 2.2, with a refined bound on ε₀ given in Propositions 2.3 and 2.5 under an additional assumption;
(ii) the general case of a value function of the form (1.1). The corresponding inexact cuts are given in Propositions 2.7 and 2.8.
We also provide conditions ensuring that ε-optimal dual solutions to a convex nonlinear optimization problem are bounded. Proposition 3.1 gives an analytic formula for an upper bound on the norm of these ε-optimal dual solutions.
We show in Propositions 4.5 and 4.6 that if we compute inexact cuts for a sequence (Q_k) of value functions of the form (1.1) (with objective functions f_k of special structure) at a sequence of points (x_k), on the basis of ε_k-optimal primal and dual solutions with lim_{k→+∞} ε_k = 0, then the distance between the inexact cuts and the value functions at these points x_k converges to 0 too. This result is very natural (see Propositions 4.5 and 4.6), but some constraint qualification conditions are needed.

When optimization problem (1.1) is linear, i.e., when Q is the value function of a linear program, inexact cuts can easily be obtained from approximate dual solutions, since the dual objective is linear in this case. This observation was used in [18], where inexact cuts are combined with Benders decomposition [1] to solve two-stage stochastic linear programs. In this sense, our work can be seen as an extension of [18]: there, two-stage stochastic linear problems are considered, whereas ISDDP applies to multistage stochastic nonlinear problems. In integer programming, inexact master solutions are also commonly used in Benders-like methods [12], including in SDDiP, a variant of SDDP to solve multistage stochastic linear programs with integer variables introduced in [19].

The outline of the study is as follows. Section 2 provides analytic formulas for computing inexact cuts for the value function of an optimization problem of the form (1.1). In Section 3, we provide an explicit bound for the norm of ε-optimal dual solutions. Section 4 introduces and studies the IDDP method. The class of problems to which this method applies is described in Subsection 4.1. The detailed IDDP algorithm is given in the subsequent subsections, while Subsection 4.5 studies the convergence of IDDP.
For a problem with T periods, when the noises (error terms quantifying the inexactness) are bounded by, say, ε̄, we show in Theorem 4.7 and Corollary 4.8 that any accumulation point of the sequence of decisions is a (T(T+1)/2)(δ̄ + ε̄)-optimal solution to the problem, where δ̄ is an upper bound on the distance between the value of the (theoretical) exact cuts and the value of our inexact cuts at the trial points computed by the algorithm. It is interesting to see the quadratic dependence of the global error with respect to the number of periods and the linear dependence with respect to the noises. When the noises are vanishing, we prove that IDDP solves the nonlinear dynamic programming equations (see Theorem 4.7). Section 5 introduces and studies ISDDP. The class of problems to which ISDDP applies is given in Subsection 5.1. A detailed description of ISDDP is given
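The reconstructed error bound above is a simple closed form; as a quick numerical illustration (a sketch with made-up values of T, δ̄, ε̄, not taken from the paper's experiments):

```python
def global_error_bound(T, delta_bar, eps_bar):
    """Bound of Theorem 4.7 / Corollary 4.8 (as reconstructed above):
    accumulation points of the decision sequence are
    T(T+1)/2 * (delta_bar + eps_bar)-optimal."""
    return T * (T + 1) / 2.0 * (delta_bar + eps_bar)

# Quadratic growth in the horizon T, linear growth in the noise level:
b4 = global_error_bound(4, 1.0, 1.0)   # 10 * 2 = 20
b8 = global_error_bound(8, 1.0, 1.0)   # 36 * 2 = 72
```

Doubling the horizon roughly quadruples the worst-case suboptimality, which is why vanishing errors are needed for exact convergence.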
in Subsection 5.2 and its convergence is studied in Subsection 5.3. More precisely, Theorem 5.3 shows the convergence of the method when the noises vanish.

We use the following notation and terminology:
- The usual scalar product in R^n is denoted by ⟨x, y⟩ = xᵀy for x, y ∈ R^n. The corresponding norm is ‖x‖ = ‖x‖₂ = √⟨x, x⟩.
- ri(A) is the relative interior of the set A.
- B_n(x₀, r) = {x ∈ R^n : ‖x − x₀‖ ≤ r} for x₀ ∈ R^n, r ≥ 0.
- dom(f) is the domain of the function f.
- Diam(X) = max_{x,y∈X} ‖x − y‖ is the diameter of X.
- N_A(x) is the normal cone to A at x.
- X^ε := X + εB_n(0, 1) is the ε-fattening of the set X ⊆ R^n.
- C(X) is the set of continuous real-valued functions on X, equipped with the norm ‖f‖_X = sup_{x∈X} |f(x)|.
- C¹(X) is the set of real-valued continuously differentiable functions on X.
- span(X) is the linear span of the set of vectors X and Aff(X) is the affine span of X.

2. Computing inexact cuts for the value function of a convex optimization problem

Let Q : X → R be the value function given by

(2.2)   Q(x) = inf_{y ∈ R^n} { f(y, x) : y ∈ S(x) := {y ∈ Y : Ay + Bx = b, g(y, x) ≤ 0} }.

Here, X ⊂ R^m and Y ⊂ R^n are nonempty, compact, and convex sets, and A and B are respectively q×n and q×m real matrices. We will make the following assumptions, which imply, in particular, the convexity of Q given by (2.2):
(H1) f : R^n × R^m → R ∪ {+∞} is lower semicontinuous, proper, and convex.
(H2) For i = 1, ..., p, the i-th component of the function g(y, x) is a convex lower semicontinuous function g_i : R^n × R^m → R ∪ {+∞}.
In what follows, we say that C is a cut for Q if C is an affine function of x such that Q(x) ≥ C(x) for all x ∈ X. We say that the cut is exact at x̄ ∈ X if Q(x̄) = C(x̄); otherwise, the cut is said to be inexact. In this section, our basic goal is, given x̄ ∈ X and ε-optimal primal and dual solutions of (2.2) written for x = x̄, to derive an inexact cut C(x) for Q at x̄, i.e., an affine lower bounding function for Q such that the distance Q(x̄) − C(x̄) between the values of Q and of the cut at x̄ is bounded from above by a known function of the problem parameters. Of course, when ε = 0, we will check that Q(x̄) = C(x̄).
We first recall from [4] how to compute exact cuts for Q when optimal primal and dual solutions of (2.2) are available.

2.1. Formula for the subdifferential of the value function of a convex optimization problem. Consider for (2.2) the dual problem

(2.3)   sup_{(λ,µ) ∈ R^q × R^p₊} θ_x(λ, µ)

for the dual function

(2.4)   θ_x(λ, µ) = inf_{y∈Y} f(y, x) + λᵀ(Ay + Bx − b) + µᵀ g(y, x).

We denote by Λ(x) the set of optimal solutions of the dual problem (2.3) and we use the notation Sol(x) := {y ∈ S(x) : f(y, x) = Q(x)} to indicate the solution set of (2.2). The description of the subdifferential of Q is given in the following lemma:

Lemma 2.1. Consider the value function Q given by (2.2) and take x₀ ∈ X such that S(x₀) ≠ ∅. Let Assumptions (H1) and (H2) hold and assume the Slater-type constraint qualification condition: there exists (x̄, ȳ) ∈ X × ri(Y) such that Aȳ + Bx̄ = b and (ȳ, x̄) ∈ ri({g ≤ 0}).
We also assume that there exists ε > 0 such that Y × X^ε ⊂ dom(f). Then s ∈ ∂Q(x₀) if and only if

(2.5)   (0, s) ∈ ∂f(y₀, x₀) + { [Aᵀ; Bᵀ]λ : λ ∈ R^q } + { Σ_{i∈I(y₀,x₀)} µ_i ∂g_i(y₀, x₀) : µ_i ≥ 0 } + N_Y(y₀) × {0},

where y₀ is any element in the solution set Sol(x₀) and with

I(y₀, x₀) = { i ∈ {1, ..., p} : g_i(y₀, x₀) = 0 }.

Moreover, the set ∪_{x∈X} ∂Q(x) is bounded. In particular, if f and g are differentiable, then

∂Q(x₀) = { ∇_x f(y₀, x₀) + Bᵀλ + Σ_{i∈I(y₀,x₀)} µ_i ∇_x g_i(y₀, x₀) : (λ, µ) ∈ Λ(x₀) }.

Proof. See the proofs of Lemma 2.1 and Proposition 2.1 in [4]. □

Let us now discuss the computation of inexact cuts for Q given by (2.2). We start with the case where the argument x of the value function appears only in the objective function of (2.2).

2.2. Fixed feasible set. As a special case of problem (2.2), let Q : X → R be the value function given by

(2.6)   Q(x) = inf_{y ∈ R^n} { f(y, x) : y ∈ Y },

where X, Y are convex, compact, and nonempty sets. We pick x̄ ∈ X and denote by ȳ ∈ Y an optimal solution of (2.6) written for x = x̄:

(2.7)   Q(x̄) = f(ȳ, x̄).

Using Lemma 2.1, if f is differentiable, we have that ∇_x f(ȳ, x̄) ∈ ∂Q(x̄). If instead of an optimal solution ȳ of (2.6) we only have at hand an approximate ε-optimal solution ŷ(ε), it is natural to replace ∇_x f(ȳ, x̄) by ∇_x f(ŷ(ε), x̄). The inexact cut from Proposition 2.2 below will be expressed in terms of the function ℓ₁ : Y × X → R₊ given by

(2.8)   ℓ₁(ŷ, x̄) = − min_{y∈Y} ⟨∇_y f(ŷ, x̄), y − ŷ⟩ = max_{y∈Y} ⟨∇_y f(ŷ, x̄), ŷ − y⟩.

Proposition 2.2. Let x̄ ∈ X and let ŷ(ε) ∈ Y be an ε-optimal solution for problem (2.6) written for x = x̄ with optimal value Q(x̄), i.e., Q(x̄) ≥ f(ŷ(ε), x̄) − ε. Assume that f is differentiable and convex on Y × X. Then, setting η(ε) = ℓ₁(ŷ(ε), x̄), the affine function

(2.9)   C(x) := f(ŷ(ε), x̄) − η(ε) + ⟨∇_x f(ŷ(ε), x̄), x − x̄⟩

is a cut for Q at x̄, i.e., for every x ∈ X we have Q(x) ≥ C(x), and the quantity η(ε) is an upper bound for the distance Q(x̄) − C(x̄) between the values of Q and of the cut at x̄.

Proof. For every (x, y) ∈ X × Y, using the convexity of f we have

f(y, x) ≥ f(ŷ(ε), x̄) + ⟨∇_x f(ŷ(ε), x̄), x − x̄⟩ + ⟨∇_y f(ŷ(ε), x̄), y − ŷ(ε)⟩.
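To make the construction in Proposition 2.2 concrete, here is a small sketch on a toy instance of our own choosing (not from the paper): f(y, x) = ‖y − x‖² with feasible set Y = [0, 1]^n, so that Q(x) = dist(x, Y)² and ℓ₁ has a closed form over the box.

```python
import numpy as np

# Toy instance (our own illustration): f(y, x) = ||y - x||^2, Y = [0, 1]^n.

def grad_y_f(y, x):   # nabla_y f(y, x)
    return 2.0 * (y - x)

def grad_x_f(y, x):   # nabla_x f(y, x)
    return 2.0 * (x - y)

def ell_1(y_hat, x_bar):
    # ell_1(y_hat, x_bar) = max_{y in Y} <nabla_y f(y_hat, x_bar), y_hat - y>;
    # over a box the maximum is attained coordinate-wise at a bound of [0, 1].
    g = grad_y_f(y_hat, x_bar)
    y_star = np.where(g >= 0, 0.0, 1.0)   # minimizes <g, y> over [0, 1]^n
    return float(g @ (y_hat - y_star))

def inexact_cut(y_hat, x_bar):
    """Coefficients of the cut (2.9), C(x) = c0 + <beta, x - x_bar>, with
    c0 = f(y_hat, x_bar) - ell_1(y_hat, x_bar) and beta = nabla_x f(y_hat, x_bar)."""
    f_val = float(np.sum((y_hat - x_bar) ** 2))
    return f_val - ell_1(y_hat, x_bar), grad_x_f(y_hat, x_bar)

# Approximate minimizer y_hat for x_bar = (2, -1); the exact minimizer over
# [0, 1]^2 is (1, 0) and Q(x_bar) = 2. The cut value c0 at x_bar stays below
# Q(x_bar), with gap at most ell_1, as the proposition states.
c0, beta = inexact_cut(np.array([0.9, 0.1]), np.array([2.0, -1.0]))
```

Here c0 = 2.42 − 0.44 = 1.98 ≤ Q(x̄) = 2, so the inexact cut is valid and its gap at x̄ is within the bound η(ε) = ℓ₁.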
Minimizing over y ∈ Y on each side of the above inequality, we get for every x ∈ X

(2.10)   Q(x) ≥ C(x) = f(ŷ(ε), x̄) − ℓ₁(ŷ(ε), x̄) + ⟨∇_x f(ŷ(ε), x̄), x − x̄⟩,

which shows that C is a valid cut for Q. Finally, since ŷ(ε) ∈ Y, we have f(ŷ(ε), x̄) ≥ Q(x̄) and

(2.11)   C(x̄) − Q(x̄) = f(ŷ(ε), x̄) − ℓ₁(ŷ(ε), x̄) − Q(x̄) ≥ −ℓ₁(ŷ(ε), x̄). □

We now refine the bound ℓ₁(ŷ(ε), x̄) on Q(x̄) − C(x̄) given by Proposition 2.2, making the following assumption:
(H3) f is differentiable on Y × X and there exists M₁ > 0 such that for every x ∈ X and y₁, y₂ ∈ Y, we have ‖∇_y f(y₂, x) − ∇_y f(y₁, x)‖ ≤ M₁ ‖y₂ − y₁‖.
Proposition 2.3. Let x̄ ∈ X and let ŷ(ε) ∈ Y be an ε-optimal solution for problem (2.6) written for x = x̄ with optimal value Q(x̄), i.e., Q(x̄) ≥ f(ŷ(ε), x̄) − ε. Then, setting η(ε) = ℓ₁(ŷ(ε), x̄), if f is differentiable and convex on Y × X, the affine function C(x) given by (2.9) is a cut for Q at x̄. Moreover, if Assumption (H3) holds, then setting

(2.12)   ε₀ = { (ℓ₁(ŷ(ε), x̄) / (2M₁Diam(Y)²)) (2M₁Diam(Y)² − ℓ₁(ŷ(ε), x̄))   if ℓ₁(ŷ(ε), x̄) ≤ M₁Diam(Y)²,
                 ½ ℓ₁(ŷ(ε), x̄)   otherwise,

the distance Q(x̄) − C(x̄) between the values of Q and of the cut at x̄ is at most ε₀.

Proof. We already know from Proposition 2.2 that C is an inexact cut for Q. It remains to show that if Assumption (H3) holds then

(2.13)   C(x̄) − Q(x̄) = f(ŷ(ε), x̄) − ℓ₁(ŷ(ε), x̄) − Q(x̄) ≥ −ε₀.

Let y* ∈ Y be such that ℓ₁(ŷ(ε), x̄) = ⟨∇_y f(ŷ(ε), x̄), ŷ(ε) − y*⟩. Using (H3), for every 0 ≤ t ≤ 1 we have

f(ŷ(ε) + t(y* − ŷ(ε)), x̄) ≤ f(ŷ(ε), x̄) + t⟨y* − ŷ(ε), ∇_y f(ŷ(ε), x̄)⟩ + (M₁t²/2) ‖ŷ(ε) − y*‖²
   = f(ŷ(ε), x̄) − t ℓ₁(ŷ(ε), x̄) + (M₁t²/2) ‖ŷ(ε) − y*‖².

By convexity of Y, since ŷ(ε), y* ∈ Y, for every 0 ≤ t ≤ 1 we have ŷ(ε) + t(y* − ŷ(ε)) ∈ Y, and the above relation yields

Q(x̄) ≤ f(ŷ(ε), x̄) − max_{0≤t≤1} [ t ℓ₁(ŷ(ε), x̄) − (M₁Diam(Y)²/2) t² ].

If ℓ₁(ŷ(ε), x̄) ≤ M₁Diam(Y)², then max_{0≤t≤1} [ t ℓ₁(ŷ(ε), x̄) − (M₁Diam(Y)²/2) t² ] = ℓ₁(ŷ(ε), x̄)² / (2M₁Diam(Y)²) and

(2.14)   Q(x̄) ≤ f(ŷ(ε), x̄) − ℓ₁(ŷ(ε), x̄)² / (2M₁Diam(Y)²).

If ℓ₁(ŷ(ε), x̄) ≥ M₁Diam(Y)², then max_{0≤t≤1} [ t ℓ₁(ŷ(ε), x̄) − (M₁Diam(Y)²/2) t² ] = ℓ₁(ŷ(ε), x̄) − ½M₁Diam(Y)² ≥ ½ ℓ₁(ŷ(ε), x̄) and

(2.15)   Q(x̄) ≤ f(ŷ(ε), x̄) − ½ ℓ₁(ŷ(ε), x̄).

Combining (2.14) and (2.15) with (2.12) gives (2.13) and completes the proof. □

Remark 2.4. As expected, if ε = 0 then ŷ(ε) is an optimal solution of problem (2.6) written for x = x̄, and the first-order optimality conditions ensure that ℓ₁(ŷ(ε), x̄) = 0, meaning that the cut given by Proposition 2.2 is exact. Otherwise, it is inexact. Since ℓ₁(ŷ(ε), x̄) ≥ 0, we also observe that ε₀ given in Proposition 2.3 is nonnegative and smaller than ℓ₁(ŷ(ε), x̄), which shows that Proposition 2.3 improves the bound from Proposition 2.2 on Q(x̄) − C(x̄).
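The case analysis in (2.12) is straightforward to implement; a small sketch (variable names are ours):

```python
def eps0_refined(ell1, M1, diam_Y):
    """Refined bound (2.12) of Proposition 2.3 on Q(x_bar) - C(x_bar),
    given ell1 = ell_1(y_hat(eps), x_bar), the Lipschitz constant M1 of
    (H3), and diam_Y = Diam(Y)."""
    cap = M1 * diam_Y ** 2
    if ell1 <= cap:
        return ell1 * (2.0 * cap - ell1) / (2.0 * cap)
    return 0.5 * ell1

# The refined bound never exceeds the cruder bound ell1 of Proposition 2.2;
# e.g. with M1 = 1, Diam(Y) = 2, ell1 = 1, it equals 7/8 < 1.
b = eps0_refined(1.0, 1.0, 2.0)
```

As Remark 2.4 notes, the bound is 0 exactly when ℓ₁ = 0, i.e., when the primal solution is exact.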
In Propositions 2.2 and 2.3, if the optimization problem max_{y∈Y} ⟨∇_y f(ŷ(ε), x̄), ŷ(ε) − y⟩ with optimal value ℓ₁(ŷ(ε), x̄) is itself solved approximately, we obtain the cuts given by Proposition 2.5.

Proposition 2.5. Let x̄ ∈ X and let ŷ(ε₁) ∈ Y be an ε₁-optimal solution for problem (2.6) written for x = x̄ with optimal value Q(x̄), i.e., Q(x̄) ≥ f(ŷ(ε₁), x̄) − ε₁. Let also ỹ(ŷ(ε₁), x̄) ∈ Y be an approximate ε₂-optimal solution for the problem max_{y∈Y} ⟨∇_y f(ŷ(ε₁), x̄), ŷ(ε₁) − y⟩ with optimal value ℓ₁(ŷ(ε₁), x̄), i.e.,

ℓ₁(ŷ(ε₁), x̄) − ε₂ ≤ ⟨∇_y f(ŷ(ε₁), x̄), ŷ(ε₁) − ỹ(ŷ(ε₁), x̄)⟩.

Assume that f is convex and differentiable on Y × X. Then, setting

η(ε₁, ε₂) = ε₂ − ⟨ỹ(ŷ(ε₁), x̄) − ŷ(ε₁), ∇_y f(ŷ(ε₁), x̄)⟩   and   ℓ̂₁(ŷ(ε₁), x̄) = ⟨ŷ(ε₁) − ỹ(ŷ(ε₁), x̄), ∇_y f(ŷ(ε₁), x̄)⟩,

the affine function

C(x) := f(ŷ(ε₁), x̄) − η(ε₁, ε₂) + ⟨∇_x f(ŷ(ε₁), x̄), x − x̄⟩

is a cut for Q at x̄, i.e., for every x ∈ X we have Q(x) ≥ C(x), and the distance Q(x̄) − C(x̄) between the values of Q and of the cut at x̄ is at most ε₂ + ℓ̂₁(ŷ(ε₁), x̄). Moreover, if Assumption (H3) holds, setting

(2.16)   ε₀ = { ε₂ + ℓ̂₁(ŷ(ε₁), x̄)   if ℓ̂₁(ŷ(ε₁), x̄) ≤ 0,
                 ε₂ + (ℓ̂₁(ŷ(ε₁), x̄) / (2M₁Diam(Y)²)) (2M₁Diam(Y)² − ℓ̂₁(ŷ(ε₁), x̄))   if 0 < ℓ̂₁(ŷ(ε₁), x̄) ≤ M₁Diam(Y)²,
                 ε₂ + ½ ℓ̂₁(ŷ(ε₁), x̄)   otherwise,
the distance Q(x̄) − C(x̄) between the values of Q and of the cut at x̄ is at most ε₀.

Proof. We will use the short notation ŷ for ŷ(ε₁), ỹ for ỹ(ŷ(ε₁), x̄), and ℓ̂₁ for ℓ̂₁(ŷ(ε₁), x̄). Proceeding as in the proof of Proposition 2.2, we get for every x ∈ X

(2.17)   Q(x) ≥ f(ŷ, x̄) − ℓ₁(ŷ, x̄) + ⟨∇_x f(ŷ, x̄), x − x̄⟩ ≥ C(x) = f(ŷ, x̄) + ⟨ỹ − ŷ, ∇_y f(ŷ, x̄)⟩ − ε₂ + ⟨∇_x f(ŷ, x̄), x − x̄⟩,

which shows that C is a valid cut for Q. Now observe that

C(x̄) − Q(x̄) = f(ŷ, x̄) + ⟨ỹ − ŷ, ∇_y f(ŷ, x̄)⟩ − ε₂ − Q(x̄) ≥ −ε₂ − ℓ̂₁.

It remains to show that if Assumption (H3) holds then

(2.18)   f(ŷ, x̄) + ⟨ỹ − ŷ, ∇_y f(ŷ, x̄)⟩ − ε₂ − Q(x̄) ≥ −ε₀.

Using Assumption (H3), we have for every 0 ≤ t ≤ 1

f(ŷ + t(ỹ − ŷ), x̄) ≤ f(ŷ, x̄) + t⟨ỹ − ŷ, ∇_y f(ŷ, x̄)⟩ + (M₁t²/2) ‖ỹ − ŷ‖².

This yields

Q(x̄) ≤ f(ŷ, x̄) + min_{0≤t≤1} [ −t ℓ̂₁ + (M₁Diam(Y)²/2) t² ].

Three cases are possible: ℓ̂₁ ≤ 0 (Case A), 0 < ℓ̂₁ ≤ M₁Diam(Y)² (Case B), and ℓ̂₁ > M₁Diam(Y)² (Case C).
Case A. We have f(ŷ, x̄) + ⟨ỹ − ŷ, ∇_y f(ŷ, x̄)⟩ − ε₂ − Q(x̄) ≥ −ℓ̂₁ − ε₂ = −ε₀ and (2.18) holds.
Case B. We have min_{0≤t≤1} [ −t ℓ̂₁ + (M₁Diam(Y)²/2) t² ] = −ℓ̂₁² / (2M₁Diam(Y)²) and

(2.19)   Q(x̄) ≤ f(ŷ, x̄) − ℓ̂₁² / (2M₁Diam(Y)²).

Case C. We have min_{0≤t≤1} [ −t ℓ̂₁ + (M₁Diam(Y)²/2) t² ] = −ℓ̂₁ + ½M₁Diam(Y)² ≤ −½ ℓ̂₁, which gives

(2.20)   Q(x̄) ≤ f(ŷ, x̄) − ½ ℓ̂₁.

Combining (2.19) and (2.20) with (2.16) gives (2.18) for Cases B and C and completes the proof. □

Remark 2.6. If ε₁ = ε₂ = 0, then ŷ is an optimal solution of problem (2.6) written for x = x̄ and ε₀ = ε₁ = ε₂ = ℓ₁(ŷ, x̄) = ℓ̂₁(ŷ(ε₁), x̄) = 0, meaning that the cut given by Proposition 2.5 is exact. Also, if ε₂ = 0, then ℓ̂₁(ŷ(ε₁), x̄) = ℓ₁(ŷ(ε₁), x̄) ≥ 0. Therefore, when ε₂ = 0 and 0 < ℓ̂₁(ŷ(ε₁), x̄) ≤ M₁Diam(Y)² or ℓ̂₁(ŷ(ε₁), x̄) > M₁Diam(Y)², the inexact cuts from Proposition 2.5 correspond to the inexact cuts given in Proposition 2.3. For the case ℓ̂₁(ŷ(ε₁), x̄) ≤ 0 in Proposition 2.5, if ε₂ = 0 we get ℓ̂₁(ŷ(ε₁), x̄) = 0, which implies η(ε₁, ε₂) = 0 and the cut is exact, in accordance with ε₀ = ε₂ = 0.

2.3. Variable feasible set. Let us now discuss the computation of inexact cuts for Q given by (2.2).
For x ∈ X, let us introduce for problem (2.2) the Lagrangian function

L_x(y, λ, µ) = f(y, x) + λᵀ(Bx + Ay − b) + µᵀ g(y, x)

and the function ℓ₂ : Y × X × R^q × R^p₊ → R₊ given by

(2.21)   ℓ₂(ŷ, x̄, λ̂, µ̂) = − min_{y∈Y} ⟨∇_y L_x̄(ŷ, λ̂, µ̂), y − ŷ⟩ = max_{y∈Y} ⟨∇_y L_x̄(ŷ, λ̂, µ̂), ŷ − y⟩.

With this notation, the dual function (2.4) for problem (2.2) can be written

θ_x(λ, µ) = inf_{y∈Y} L_x(y, λ, µ).

We make the following assumption, which ensures no duality gap for (2.2) for any x ∈ X:
(H4) for every x ∈ X there exists y_x ∈ ri(Y) such that Bx + Ay_x = b and g(y_x, x) < 0.
The following proposition provides an inexact cut for Q given by (2.2):
Proposition 2.7. Let x̄ ∈ X, let ŷ(ε) be an ε-optimal feasible primal solution for problem (2.2) written for x = x̄, and let (λ̂(ε), µ̂(ε)) be an ε-optimal feasible solution of the corresponding dual problem, i.e., of problem (2.3) written for x = x̄. Let Assumptions (H1), (H2), and (H4) hold. If additionally f and g are differentiable on Y × X, then setting η(ε) = ℓ₂(ŷ(ε), x̄, λ̂(ε), µ̂(ε)), the affine function

(2.22)   C(x) := L_x̄(ŷ(ε), λ̂(ε), µ̂(ε)) − η(ε) + ⟨∇_x L_x̄(ŷ(ε), λ̂(ε), µ̂(ε)), x − x̄⟩

is a cut for Q at x̄, and the distance Q(x̄) − C(x̄) between the values of Q and of the cut at x̄ is at most ε + ℓ₂(ŷ(ε), x̄, λ̂(ε), µ̂(ε)).

Proof. To simplify notation, we use ŷ, λ̂, µ̂ for ŷ(ε), λ̂(ε), µ̂(ε), respectively. Consider primal problem (2.2) written for x = x̄. Due to Assumption (H4), the optimal value Q(x̄) of this problem is the optimal value of the corresponding dual problem, i.e., of problem (2.3) written for x = x̄. Using the fact that ŷ and (λ̂, µ̂) are respectively ε-optimal primal and dual solutions, it follows that

(2.23)   f(ŷ, x̄) ≤ Q(x̄) + ε   and   θ_x̄(λ̂, µ̂) ≥ Q(x̄) − ε.

Moreover, since the approximate primal and dual solutions are feasible, we have

(2.24)   ŷ ∈ Y, Bx̄ + Aŷ = b, g(ŷ, x̄) ≤ 0, µ̂ ≥ 0.

Using relation (2.23), the definition of the dual function θ_x̄, and the fact that ŷ ∈ Y, we get

(2.25)   L_x̄(ŷ, λ̂, µ̂) ≥ θ_x̄(λ̂, µ̂) ≥ Q(x̄) − ε.

Due to Assumptions (H1) and (H2), for any λ and µ ≥ 0 the function L(·, λ, µ), which associates the value L_x(y, λ, µ) to (x, y), is convex. It follows that for every x ∈ X and y ∈ Y we have

L_x(y, λ̂, µ̂) ≥ L_x̄(ŷ, λ̂, µ̂) + ⟨∇_x L_x̄(ŷ, λ̂, µ̂), x − x̄⟩ + ⟨∇_y L_x̄(ŷ, λ̂, µ̂), y − ŷ⟩.

Since (λ̂, µ̂) is dual feasible for dual problem (2.3), the Weak Duality Theorem gives Q(x) ≥ θ_x(λ̂, µ̂) = inf_{y∈Y} L_x(y, λ̂, µ̂) for every x ∈ X, and minimizing over y ∈ Y on each side of the above inequality we obtain

Q(x) ≥ C(x) = L_x̄(ŷ, λ̂, µ̂) − ℓ₂(ŷ, x̄, λ̂, µ̂) + ⟨∇_x L_x̄(ŷ, λ̂, µ̂), x − x̄⟩.

Finally, using relation (2.25), we get

Q(x̄) − C(x̄) = Q(x̄) − L_x̄(ŷ, λ̂, µ̂) + ℓ₂(ŷ, x̄, λ̂, µ̂) ≤ ε + ℓ₂(ŷ, x̄, λ̂, µ̂). □
We now refine the bound ε + ℓ₂(ŷ(ε), x̄, λ̂(ε), µ̂(ε)) on Q(x̄) − C(x̄) given by Proposition 2.7, making the following assumption:
(H5) g is differentiable on Y × X and there exists M₂ > 0 such that for every i = 1, ..., p, x ∈ X, and y₁, y₂ ∈ Y, we have ‖∇_y g_i(y₂, x) − ∇_y g_i(y₁, x)‖ ≤ M₂ ‖y₂ − y₁‖.

Proposition 2.8. Let x̄ ∈ X, let ŷ(ε) be an ε-optimal feasible primal solution for problem (2.2) written for x = x̄, and let (λ̂(ε), µ̂(ε)) be an ε-optimal feasible solution of the corresponding dual problem, i.e., of problem (2.3) written for x = x̄. Let also L_x̄ be any lower bound on Q(x̄). Let Assumptions (H1), (H2), (H3), (H4), and (H5) hold. Then C(x) given by (2.22) is a cut for Q at x̄ and, setting M₃ = M₁ + U_x̄ M₂ with

U_x̄ = ( f(y_x̄, x̄) − L_x̄ + ε ) / min( −g_i(y_x̄, x̄), i = 1, ..., p ),

the distance Q(x̄) − C(x̄) between the values of Q and of the cut at x̄ is at most

ε₀ = { ε + ℓ₂(ŷ(ε), x̄, λ̂(ε), µ̂(ε)) − ℓ₂(ŷ(ε), x̄, λ̂(ε), µ̂(ε))² / (2M₃Diam(Y)²)   if ℓ₂(ŷ(ε), x̄, λ̂(ε), µ̂(ε)) ≤ M₃Diam(Y)²,
       ε + ½ ℓ₂(ŷ(ε), x̄, λ̂(ε), µ̂(ε))   otherwise.
Proof. As before, we use the short notation ŷ, λ̂, µ̂ for ŷ(ε), λ̂(ε), µ̂(ε), respectively. We already know from Proposition 2.7 that C is a cut for Q. Let us show that ε₀ is an upper bound for Q(x̄) − C(x̄). We compute

∇_y L_x̄(y, λ, µ) = ∇_y f(y, x̄) + Aᵀλ + Σ_{i=1}^{p} µ_i ∇_y g_i(y, x̄).

Therefore, for every y₁, y₂ ∈ Y, using Assumptions (H3) and (H5), we have

(2.26)   ‖∇_y L_x̄(y₂, λ̂, µ̂) − ∇_y L_x̄(y₁, λ̂, µ̂)‖ ≤ (M₁ + ‖µ̂‖₁ M₂) ‖y₂ − y₁‖.

Next observe that

L_x̄ − ε ≤ Q(x̄) − ε ≤ θ_x̄(λ̂, µ̂) ≤ f(y_x̄, x̄) + λ̂ᵀ(Ay_x̄ + Bx̄ − b) + µ̂ᵀ g(y_x̄, x̄) ≤ f(y_x̄, x̄) + ‖µ̂‖₁ max_{i=1,...,p} g_i(y_x̄, x̄).

From the above relation we get ‖µ̂‖₁ ≤ U_x̄, which, plugged into (2.26), gives

(2.27)   ‖∇_y L_x̄(y₂, λ̂, µ̂) − ∇_y L_x̄(y₁, λ̂, µ̂)‖ ≤ M₃ ‖y₂ − y₁‖.

The computations are now similar to the proof of Proposition 2.3. More precisely, let y* ∈ Y be such that

ℓ₂(ŷ, x̄, λ̂, µ̂) = ⟨∇_y L_x̄(ŷ, λ̂, µ̂), ŷ − y*⟩.

Using relation (2.27), for every 0 ≤ t ≤ 1 we get

L_x̄(ŷ + t(y* − ŷ), λ̂, µ̂) ≤ L_x̄(ŷ, λ̂, µ̂) − t ℓ₂(ŷ, x̄, λ̂, µ̂) + (M₃t²/2) ‖y* − ŷ‖².

Since ŷ + t(y* − ŷ) ∈ Y, using the above relation and the definition of θ_x̄, we obtain

Q(x̄) − ε ≤ θ_x̄(λ̂, µ̂) ≤ L_x̄(ŷ, λ̂, µ̂) − t ℓ₂(ŷ, x̄, λ̂, µ̂) + (M₃t²/2) Diam(Y)².

Therefore

Q(x̄) − C(x̄) = Q(x̄) − L_x̄(ŷ, λ̂, µ̂) + ℓ₂(ŷ, x̄, λ̂, µ̂) ≤ ε + ℓ₂(ŷ, x̄, λ̂, µ̂) + min_{0≤t≤1} ( −t ℓ₂(ŷ, x̄, λ̂, µ̂) + (M₃t²/2) Diam(Y)² ),

and we easily conclude by computing min_{0≤t≤1} ( −t ℓ₂(ŷ, x̄, λ̂, µ̂) + (M₃t²/2) Diam(Y)² ). □

Remark 2.9. As was done in extending Proposition 2.2 to Proposition 2.5, we can extend Proposition 2.8 to the case when the optimization problem max_{y∈Y} ⟨∇_y L_x̄(ŷ, λ̂, µ̂), ŷ − y⟩ with optimal value ℓ₂(ŷ, x̄, λ̂, µ̂) is solved approximately.

3. Bounding the norm of ε-optimal solutions to the dual of a convex optimization problem

Consider the following convex optimization problem:

(3.28)   f* = min { f(y) : Ay = b, g(y) ≤ 0, y ∈ Y },

where
(i) Y ⊆ R^n is a closed convex set and A is a q×n matrix;
(ii) f : Y → R is convex Lipschitz continuous with Lipschitz constant L(f);
(iii) g : Y → R^p, where all components of g are convex Lipschitz continuous functions with Lipschitz constant L(g);
(iv) f is bounded from below on the feasible set.
We also assume the following Slater-type constraint qualification condition:

(3.29)   SL: there exist κ > 0 and y₀ ∈ ri(Y) such that g(y₀) ≤ −κe and Ay₀ = b,

where e is a vector of ones in R^p. Since SL holds, the optimal value f* of (3.28) can be written as the optimal value of the dual problem:

(3.30)   f* = max_{λ, µ≥0} { θ(λ, µ) := min_{y∈Y} f(y) + ⟨λ, Ay − b⟩ + ⟨µ, g(y)⟩ }.
Consider the vector space F = AAff(Y) − b (recall that 0 ∈ F). Clearly, for any y ∈ Y and every λ ⊥ F, we have λᵀ(Ay − b) = 0, and therefore, for every λ ∈ R^q, θ(λ, µ) = θ(Π_F(λ), µ), where Π_F(λ) is the orthogonal projection of λ onto F. It follows that if F^⊥ ≠ {0}, the set of ε-optimal dual solutions of dual problem (3.30) is not bounded, because from any ε-optimal dual solution (λ(ε), µ(ε)) we can build an ε-optimal dual solution (λ(ε) + λ, µ(ε)) with the same value of the dual function and arbitrarily large norm, taking λ in F^⊥ with sufficiently large norm. However, the optimal value of the dual (and primal) problem can be written equivalently as

(3.31)   f* = max_{λ,µ} { θ(λ, µ) : µ ≥ 0, λ = Ay − b, y ∈ Aff(Y) }.

In this section, our goal is to derive bounds on the norm of ε-optimal solutions to the dual of (3.28) written in the form (3.31).

From Assumption SL, we deduce that there exists r > 0 such that B_n(y₀, r) ∩ Aff(Y) ⊆ Y, and that there is some ball B_q(0, ρ*) of positive radius ρ* such that the intersection of this ball and of the set AAff(Y) − b is contained in the set A(B_n(y₀, r) ∩ Aff(Y)) − b. To define such a ρ*, let ρ : AAff(Y) − b → R₊ be given by

ρ(z) = max { t‖z‖ : t ≥ 0, tz ∈ A(B_n(y₀, r) ∩ Aff(Y)) − b }.

Since y₀ ∈ Y, we can write Aff(Y) = y₀ + V_Y, where V_Y is the vector space V_Y = {x − y : x, y ∈ Aff(Y)}. Therefore A(B_n(y₀, r) ∩ Aff(Y)) − b = A(B_n(0, r) ∩ V_Y), and ρ can be reformulated as

(3.32)   ρ(z) = max { t‖z‖ : t ≥ 0, tz ∈ A(B_n(0, r) ∩ V_Y) }.

Note that ρ is well defined and finite valued (we have 0 ≤ ρ(z) ≤ ‖A‖ r). Also, clearly ρ(0) = 0 and ρ(z) = ρ(λz) for every λ > 0 and z ≠ 0.
Therefore, if A = 0, then ρ* can be any positive real, for instance ρ* = 1, and if A ≠ 0 we define

(3.33)   ρ* = min { ρ(z) : z ≠ 0, z ∈ AAff(Y) − b } = min { ρ(z) : ‖z‖ = 1, z ∈ AAff(Y) − b } = min { ρ(z) : ‖z‖ = 1, z ∈ AV_Y },

which is well defined and positive, since ρ(z) > 0 for every z such that ‖z‖ = 1, z ∈ AAff(Y) − b. Indeed, if z ∈ AAff(Y) − b with ‖z‖ = 1, then z = Ay − b for some y ∈ Aff(Y), y ≠ y₀, and since

(r/‖y − y₀‖) z = A( y₀ + r (y − y₀)/‖y − y₀‖ ) − b ∈ A( B_n(y₀, r) ∩ Aff(Y) ) − b,

we have ρ(z) ≥ (r/‖y − y₀‖) ‖z‖ = r/‖y − y₀‖ > 0. We now claim that the parameter ρ* we have just defined satisfies our requirement, namely

(3.34)   B_q(0, ρ*) ∩ ( AAff(Y) − b ) ⊆ A( B_n(y₀, r) ∩ Aff(Y) ) − b.

This can be rewritten as

(3.35)   B_q(0, ρ*) ∩ AV_Y ⊆ A( B_n(0, r) ∩ V_Y ).

Indeed, let z ∈ B_q(0, ρ*) ∩ ( AAff(Y) − b ). If A = 0 or z = 0, then z ∈ A( B_n(y₀, r) ∩ Aff(Y) ) − b. Otherwise, by definition of ρ*, we have ρ(z) ≥ ρ* ≥ ‖z‖. Let t ≥ 0 be such that tz ∈ A(B_n(y₀, r) ∩ Aff(Y)) − b and ρ(z) = t‖z‖. The relations (t − 1)‖z‖ ≥ 0 and ‖z‖ ≠ 0 imply t ≥ 1. By definition of t, we can write tz = Ay − b, where y ∈ B_n(y₀, r) ∩ Aff(Y). It follows that z can be written

z = A( y₀ + (y − y₀)/t ) − b = Aȳ − b, where ȳ = y₀ + (y − y₀)/t ∈ Aff(Y) and ‖ȳ − y₀‖ = ‖y − y₀‖/t ≤ r

(because t ≥ 1 and y ∈ B_n(y₀, r)). This means that z ∈ A( B_n(y₀, r) ∩ Aff(Y) ) − b, which proves inclusion (3.34). We are now in a position to state the main result of this section:
Proposition 3.1. Consider the optimization problem (3.28) with optimal value f*. Let Assumptions (i)–(iv) and SL hold and let (λ(ε), µ(ε)) be an ε-optimal solution to the dual problem (3.31) with optimal value f*. Let

(3.36)   0 < r ≤ κ / (2L(g))

be such that the intersection of the ball B_n(y₀, r) and of Aff(Y) is contained in Y (this r exists because y₀ ∈ ri(Y)). If A = 0, let ρ* = 1; otherwise, let ρ* be given by (3.33) with ρ as in (3.32). Let L be any lower bound on the optimal value f* of (3.28). Then we have

‖(λ(ε), µ(ε))‖ ≤ ( f(y₀) − L + ε + L(f)r ) / min(ρ*, κ/2).

Proof. By definition of (λ(ε), µ(ε)) and of L, we have

(3.37)   L − ε ≤ f* − ε ≤ θ(λ(ε), µ(ε)).

Now define z(ε) = 0 if λ(ε) = 0 and z(ε) = −ρ* λ(ε)/‖λ(ε)‖ otherwise. Observing that z(ε) ∈ B_q(0, ρ*) ∩ (AAff(Y) − b) and using relation (3.34), we deduce that z(ε) ∈ A( B_n(y₀, r) ∩ Aff(Y) ) − b ⊆ AY − b. Therefore, we can write z(ε) = Aȳ − b for some ȳ ∈ B_n(y₀, r) ∩ Aff(Y) ⊆ Y. Next, using the definition of θ, we get

θ(λ(ε), µ(ε)) ≤ f(ȳ) + λ(ε)ᵀ(Aȳ − b) + µ(ε)ᵀ g(ȳ)   (since ȳ ∈ Y)
   ≤ f(y₀) + L(f)r + z(ε)ᵀλ(ε) + µ(ε)ᵀ g(y₀) + L(g) r ‖µ(ε)‖₁   (using (ii), (iii), and ȳ ∈ B_n(y₀, r))
   ≤ f(y₀) + L(f)r − ρ* ‖λ(ε)‖ − (κ/2) ‖µ(ε)‖₁   (using SL and (3.36)),

which can be rewritten as

(3.38)   ‖(λ(ε), µ(ε))‖ = √( ‖λ(ε)‖² + ‖µ(ε)‖² ) ≤ ‖λ(ε)‖ + ‖µ(ε)‖ ≤ ‖λ(ε)‖ + ‖µ(ε)‖₁ ≤ ( f(y₀) + L(f)r − θ(λ(ε), µ(ε)) ) / min(ρ*, κ/2).

Combining (3.37) with (3.38), we obtain the desired bound. □

Recalling that Aff(Y) = ỹ + span(Y − ỹ) for any ỹ ∈ Y, the constraint y ∈ Aff(Y) in (3.31) can be written y = ỹ + Σ_{i=1}^{k} α_i e_i in variables (α_i)_{i=1}^{k}, where (e₁, ..., e_k) is a basis of span(Y − ỹ) and ỹ is an arbitrary point chosen in Y. For instance, if Y − ỹ is a box, i.e., Y − ỹ = {y ∈ R^n : l ≤ y ≤ u} with l < 0 < u, then span(Y − ỹ) = R^n, and if Y − ỹ = {y ∈ R^n : l_i ≤ y_i ≤ u_i, i = 1, ..., n₀; y_i = 0, i > n₀} with l_i < 0 < u_i, then the first n₀ vectors of the canonical basis of R^n form a basis of span(Y − ỹ) = R^{n₀} × {0} × ... × {0} (n − n₀ times). We also have the following immediate corollary of Proposition 3.1:

Corollary 3.2.
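The bound of Proposition 3.1 is explicit and cheap to evaluate once ρ*, κ, r, and a lower bound L are at hand; a sketch with made-up constants (all numerical values below are hypothetical):

```python
def dual_norm_bound(f_y0, L, eps, lip_f, r, rho_star, kappa):
    """Bound of Proposition 3.1:
    ||(lambda(eps), mu(eps))|| <= (f(y0) - L + eps + L(f) r) / min(rho_star, kappa/2)."""
    return (f_y0 - L + eps + lip_f * r) / min(rho_star, 0.5 * kappa)

# Example with made-up constants: the bound grows linearly in eps and
# blows up as the Slater margin kappa (or the radius rho_star) shrinks.
b = dual_norm_bound(f_y0=10.0, L=0.0, eps=0.1, lip_f=2.0, r=0.5,
                    rho_star=1.0, kappa=1.0)   # (10 + 0.1 + 1) / 0.5
```

This is the quantity used later to control the dual multipliers appearing in the inexact cuts of the backward pass.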
Under the assumptions of Proposition 3.1, let f̄ be a convex upper bounding function for f on the feasible set of (3.28), Lipschitz continuous on R^n with Lipschitz constant L(f̄). Then we have for (λ(ε), µ(ε)) the bound

‖(λ(ε), µ(ε))‖ ≤ ( f̄(y₀) − L + ε + L(f̄)r ) / min(ρ*, κ/2).

4. Inexact Dual Dynamic Programming (IDDP)

4.1. Problem formulation and assumptions. Consider the optimization problem

(4.39)   inf_{x₁,...,x_T} Σ_{t=1}^{T} f_t(x_t, x_{t−1}),   x_t ∈ X_t(x_{t−1}), t = 1, ..., T,

for x₀ given, with the corresponding dynamic programming equations

Q_t(x_{t−1}) = inf_{x_t} { F_t(x_t, x_{t−1}) := f_t(x_t, x_{t−1}) + Q_{t+1}(x_t) : x_t ∈ X_t(x_{t−1}) }

for t = 1, ..., T, with Q_{T+1} ≡ 0. Observe that Q₁(x₀) is the optimal value of (4.39).
We will consider two structures for the sets X_t(x_{t−1}), t = 1, ..., T:
(S1) X_t(x_{t−1}) = X_t ⊆ R^{n_t} (in this case, for short, we say that X_t is of type S1);
(S2) X_t(x_{t−1}) = { x_t ∈ R^{n_t} : x_t ∈ X_t, g_t(x_t, x_{t−1}) ≤ 0, A_t x_t + B_t x_{t−1} = b_t } (in this case, for short, we say that X_t is of type S2).
Note that a mix of these types of constraints is allowed: for instance, we can have X₁ of type S1 and X₂ of type S2. Setting X₀ = {x₀}, we make the following assumptions (H1): for t = 1, ..., T,
(H1)-(a) X_t is nonempty, convex, and compact.
(H1)-(b) The function f_t(·, ·) is convex on X_t × X_{t−1} and belongs to C¹(X_t × X_{t−1}).
For t = 1, ..., T, if X_t is of type S2, we additionally assume that there exists ε_t > 0 such that (without loss of generality, we will assume in the sequel that ε_t = ε):
(H1)-(c) each component g_{ti}(·, ·), i = 1, ..., p_t, of the function g_t(·, ·) is convex on X_t × X_{t−1}^ε and belongs to C¹(X_t × X_{t−1});
(H1)-(d) for every x_{t−1} ∈ X_{t−1}^ε, the set X_t(x_{t−1}) ∩ ri(X_t) is nonempty;
(H1)-(e) if t ≥ 2, there exists (x̄_t, x̄_{t−1}) ∈ ri(X_t) × X_{t−1} such that A_t x̄_t + B_t x̄_{t−1} = b_t and g_t(x̄_t, x̄_{t−1}) < 0.
Assumptions (H1)-(a), (b), (c) ensure that the functions Q_t are convex. Assumption (H1)-(d) is used to bound the cut coefficients (see Proposition 4.4) and to show that the functions Q_t are Lipschitz continuous on X_{t−1}. Differentiability and Assumption (H1)-(e) are useful to derive inexact cuts; see in particular Lemma 4.1.
The Inexact Dual Dynamic Programming (IDDP) algorithm to be presented in the next section is a solution method for problem (4.39) that exploits the convexity of Q_t, t = 2, ..., T.

4.2. Inexact Dual Dynamic Programming: overview. Similarly to DDP, to solve problem (4.39), the Inexact Dual Dynamic Programming algorithm approximates, for each t = 2, ..., T + 1, the function Q_t by a polyhedral lower approximation Q_t^k at iteration k. We start at the first iteration with the lower approximation Q_t^0 ≡ −∞ for Q_t, t = 2, ..., T. At the beginning of iteration k, we have at hand the lower polyhedral approximations (computed at previous iterations) for Q_t, whose computations are detailed below. For convenience, for t = 1, ..., T and k ≥ 0, let

F_t^k(y, x) = f_t(y, x) + Q_{t+1}^k(y)

and let Q_t^k : X_{t−1} → R be given by

(4.40)   Q_t^k(x) = inf_{y ∈ R^{n_t}} { F_t^k(y, x) : y ∈ X_t(x) }.
Iteration k starts with a forward pass: for t = 1, ..., T, we compute an ε_t^k-optimal solution x_t^k of

(4.41)   Q_t^{k−1}(x_{t−1}^k) = inf_y { F_t^{k−1}(y, x_{t−1}^k) : y ∈ X_t(x_{t−1}^k) },

starting from x₀^k = x₀, where F_t^{k−1}(y, x_{t−1}^k) = f_t(y, x_{t−1}^k) + Q_{t+1}^{k−1}(y), and knowing that Q_{T+1}^{k−1} ≡ 0. Therefore, we have

(4.42)   Q_t^{k−1}(x_{t−1}^k) ≤ F_t^{k−1}(x_t^k, x_{t−1}^k) ≤ Q_t^{k−1}(x_{t−1}^k) + ε_t^k.

At iteration k, a backward pass then computes a cut C_t^k for Q_t at x_{t−1}^k for t = T + 1 down to t = 2. For t = T + 1, the cut is exact: C_{T+1}^k ≡ 0. For step t < T + 1, we compute an ε_t^k-optimal solution x_t^{Bk} ∈ X_t(x_{t−1}^k) of

(4.43)   Q_t^k(x_{t−1}^k) = inf_y { F_t^k(y, x_{t−1}^k) : y ∈ X_t(x_{t−1}^k) },

knowing Q_{t+1}^k. It follows that

(4.44)   x_t^{Bk} ∈ X_t(x_{t−1}^k)   and   Q_t^k(x_{t−1}^k) ≤ F_t^k(x_t^{Bk}, x_{t−1}^k) ≤ Q_t^k(x_{t−1}^k) + ε_t^k.
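The forward-pass recursion (4.41)–(4.42) has a simple generic shape; the sketch below fixes only the control flow, with `stage_solvers[t]` a hypothetical oracle (an assumption of this sketch, not an object defined in the paper) returning an ε-optimal stage decision:

```python
def forward_pass(x0, stage_solvers, eps_k):
    """One IDDP forward pass (sketch). stage_solvers[t](x_prev, eps) is assumed
    to return an eps-optimal solution of the stage problem (4.41), i.e. of
    inf_y { f_t(y, x_prev) + Q_{t+1}^{k-1}(y) : y in X_t(x_prev) }.
    Returns the trial trajectory (x_0^k, x_1^k, ..., x_T^k)."""
    trajectory = [x0]
    for solve_stage in stage_solvers:   # t = 1, ..., T
        trajectory.append(solve_stage(trajectory[-1], eps_k))
    return trajectory

# Toy check of the control flow only: "solvers" that just shift the state.
solvers = [lambda x, eps: x + 1 for _ in range(3)]   # T = 3 stages
traj = forward_pass(0, solvers, eps_k=0.1)
```

The backward pass then revisits the trajectory in reverse, solving (4.43) and its dual (4.45) at each trial point to build a new inexact cut per stage.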
If X_t is of type S2, we also compute an ε_t^k-optimal solution (λ^k, μ^k) of the dual problem

(4.45) sup_{λ,μ} h^k_{t,x_{t-1}^k}(λ, μ) subject to λ = A_t y + B_t x_{t-1}^k − b_t for some y ∈ Aff(X_t), μ ∈ R^p_+,

for the dual function

(4.46) h^k_{t,x_{t-1}^k}(λ, μ) = inf_{y ∈ X_t} F_t^k(y, x_{t-1}^k) + λ^T (A_t y + B_t x_{t-1}^k − b_t) + μ^T g_t(y, x_{t-1}^k).

We now check that Assumption (H1) implies that the following Slater-type constraint qualification condition holds for problem (4.43) (i.e., for all problems solved in the backward passes):

(4.47) there exists x_t^k ∈ ri(X_t) such that A_t x_t^k + B_t x_{t-1}^k = b_t and g_t(x_t^k, x_{t-1}^k) < 0.

The above constraint qualification condition is the analogue of (3.29) for problem (4.43).

Lemma 4.1. Let Assumption (H1) hold. Then for every k ∈ N, (4.47) holds.

Proof. If x_{t-1}^k = x̄_{t-1} then, recalling (H1)-(e), (4.47) holds with x_t^k = x̄_t. Otherwise, we define

x_{t-1}^{kε} = x_{t-1}^k + ε (x_{t-1}^k − x̄_{t-1}) / ‖x_{t-1}^k − x̄_{t-1}‖.

Observe that since x_{t-1}^k ∈ X_{t-1}, we have x_{t-1}^{kε} ∈ X_{t-1}^ε. Setting

X̃_t = {(x_t, x_{t-1}) ∈ ri(X_t) × X_{t-1}^ε : A_t x_t + B_t x_{t-1} = b_t, g_t(x_t, x_{t-1}) ≤ 0},

since x_{t-1}^{kε} ∈ X_{t-1}^ε, using (H1)-(d) there exists x_t^{kε} ∈ ri(X_t) such that (x_t^{kε}, x_{t-1}^{kε}) ∈ X̃_t. Now clearly, since X_t and X_{t-1} are convex, the set ri(X_t) × X_{t-1}^ε is convex too, and using (H1)-(c) we obtain that X̃_t is convex. Since (x̄_t, x̄_{t-1}) ∈ X̃_t (due to Assumption (H1)-(e)) and recalling that (x_t^{kε}, x_{t-1}^{kε}) ∈ X̃_t, we obtain that for every 0 < θ < 1 the point

(4.48) (x_t(θ), x_{t-1}(θ)) = (1 − θ)(x̄_t, x̄_{t-1}) + θ (x_t^{kε}, x_{t-1}^{kε}) ∈ X̃_t.

For

(4.49) 0 < θ = θ_0 = 1 / (1 + ε / ‖x_{t-1}^k − x̄_{t-1}‖) < 1,

we get x_{t-1}(θ_0) = x_{t-1}^k, x_t(θ_0) ∈ ri(X_t), A_t x_t(θ_0) + B_t x_{t-1}(θ_0) = A_t x_t(θ_0) + B_t x_{t-1}^k = b_t, and, since g_{ti}, i = 1, ..., p, are convex on X_t × X_{t-1}^ε (see Assumption (H1)-(c)) and therefore along the segment (4.48), we get

g_t(x_t(θ_0), x_{t-1}(θ_0)) = g_t(x_t(θ_0), x_{t-1}^k) ≤ (1 − θ_0) g_t(x̄_t, x̄_{t-1}) + θ_0 g_t(x_t^{kε}, x_{t-1}^{kε}) < 0,

since (1 − θ_0) > 0, g_t(x̄_t, x̄_{t-1}) < 0, θ_0 > 0, and g_t(x_t^{kε}, x_{t-1}^{kε}) ≤ 0. We have justified that (4.47) holds with x_t^k = x_t(θ_0).

From (4.47), we deduce that the optimal value Q_t^k(x_{t-1}^k) of primal problem (4.43) is the optimal value of dual problem (4.45), and therefore the ε_t^k-optimal dual solution (λ^k, μ^k) satisfies

(4.50) Q_t^k(x_{t-1}^k) − ε_t^k ≤ h^k_{t,x_{t-1}^k}(λ^k, μ^k) ≤ Q_t^k(x_{t-1}^k).
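The two-sided bound (4.50) is strong duality applied at an ε_t^k-optimal dual point. The toy sketch below (our own example, not from the paper; the names are hypothetical) checks it on a one-dimensional problem with a single equality constraint, where the dual function h(λ) = min_{y ∈ [−2,2]} y² + λ(y − 1) is available in closed form and there is no duality gap.

```python
# Toy check (ours, not the paper's) of the sandwich (4.50): for an
# eps-optimal dual solution lambda, we have Q - eps <= h(lambda) <= Q.
# Primal: Q = min { y^2 : y in [-2, 2], y - 1 = 0 } = 1 (attained at y = 1).

def h(lam):
    # dual function h(lam) = min over y in [-2,2] of y^2 + lam * (y - 1);
    # the unconstrained minimizer -lam/2 is clipped to the interval
    y = max(-2.0, min(2.0, -lam / 2.0))
    return y * y + lam * (y - 1.0)

Q = 1.0           # primal optimal value
lam_star = -2.0   # dual optimum: h(-2) = 1 = Q, so there is no duality gap
lam_eps = -1.9    # a perturbed multiplier; its dual value is 1 - 0.1**2 / 4

assert h(lam_star) == Q
assert Q - 1e-2 <= h(lam_eps) <= Q
```

Any multiplier whose dual value is within ε_t^k of the optimum can play the role of (λ^k, μ^k) in the backward pass; the cut formulas below use only the dual value and gradient information at that point.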
We now intend to use the results of Section 2 to derive an inexact cut C_t^k for Q_t at x_{t-1}^k. Since for all iterations k the relation Q_t^k ≤ Q_t is preserved, C_t^k will in fact be an inexact cut for Q_t^k and therefore for Q_t. To proceed, let us write the function Q_{t+1}^k, which is a maximum of k affine functions, in the form

Q_{t+1}^k(x_t) = max_{1 ≤ j ≤ k} ( C_{t+1}^j(x_t) := θ_{t+1}^j − η_{t+1}^j(ε_{t+1}^j) + ⟨β_{t+1}^j, x_t − x_t^j⟩ )
for some coefficients θ_{t+1}^j, η_{t+1}^j(ε_{t+1}^j), and β_{t+1}^j whose iterative computation is detailed below, with the convention that for t = T the coefficients θ_{T+1}^j, η_{T+1}^j(ε_{T+1}^j), β_{T+1}^j are all zero. Plugging this representation into (4.43), we get

(4.51) Q_t^k(x_{t-1}^k) = inf_{x_t, y_t} f_t(x_t, x_{t-1}^k) + y_t
subject to x_t ∈ X_t(x_{t-1}^k), y_t ≥ θ_{t+1}^j − η_{t+1}^j(ε_{t+1}^j) + ⟨β_{t+1}^j, x_t − x_t^j⟩, j = 1, ..., k,

which is of the form (2.2) with y = (x_t, y_t), x = x_{t-1}^k, f(y, x) = f_t(x_t, x) + y_t, Y = {y = [x_t; y_t] : x_t ∈ X_t, B_{t+1}^k y ≤ b_{t+1}^k}, and, for constraints of type S2, A = [A_t 0_{q×1}], B = B_t, b = b_t, g(y, x) = g_t(x_t, x), where the j-th row of matrix B_{t+1}^k is [(β_{t+1}^j)^T, −1] and the j-th component of b_{t+1}^k is −θ_{t+1}^j + η_{t+1}^j(ε_{t+1}^j) + ⟨β_{t+1}^j, x_t^j⟩. We can now use the results of Section 2 and consider several cases depending on the problem structure.

Computation of inexact cuts in the backward pass for constraints of type S1. Let us first consider the case where X_t is of type S1. Let (x_t^{Bk}, y_t^{Bk}) be an ε_t^k-optimal solution of

(4.52) Q_t^k(x_{t-1}^k) = inf_{x_t, y_t} { f_t(x_t, x_{t-1}^k) + y_t : x_t ∈ X_t, B_{t+1}^k [x_t; y_t] ≤ b_{t+1}^k }.

We compute

θ_t^k = f_t(x_t^{Bk}, x_{t-1}^k) + y_t^{Bk}, η_t^k(ε_t^k) = l_1^k(x_t^{Bk}, y_t^{Bk}, x_{t-1}^k), β_t^k = ∇_{x_{t-1}} f_t(x_t^{Bk}, x_{t-1}^k),

where

(4.53) l_1^k(x_t^{Bk}, y_t^{Bk}, x_{t-1}^k) = max_{x_t, y_t} { ⟨∇_{x_t} f_t(x_t^{Bk}, x_{t-1}^k), x_t^{Bk} − x_t⟩ + y_t^{Bk} − y_t : x_t ∈ X_t, B_{t+1}^k [x_t; y_t] ≤ b_{t+1}^k }.

Using Proposition 2.2, we have that C_t^k(x_{t-1}) = θ_t^k − η_t^k(ε_t^k) + ⟨β_t^k, x_{t-1} − x_{t-1}^k⟩ is an inexact cut for Q_t^k and therefore for Q_t. Moreover, the distance between Q_t^k(x_{t-1}^k) and C_t^k(x_{t-1}^k) is at most η_t^k(ε_t^k) = l_1^k(x_t^{Bk}, y_t^{Bk}, x_{t-1}^k).

Computation of inexact cuts in the backward pass for constraints of type S2. We now consider the case where X_t is of type S2. Let (x_t^{Bk}, y_t^{Bk}) be an ε_t^k-optimal solution of

(4.54) Q_t^k(x_{t-1}^k) = inf_{x_t, y_t} { f_t(x_t, x_{t-1}^k) + y_t : x_t ∈ X_t(x_{t-1}^k), B_{t+1}^k [x_t; y_t] ≤ b_{t+1}^k }.
Define for problem (4.54) the Lagrangian

L_{x_{t-1}^k}(x_t, y_t, λ, μ) = f_t(x_t, x_{t-1}^k) + y_t + λ^T (A_t x_t + B_t x_{t-1}^k − b_t) + μ^T g_t(x_t, x_{t-1}^k)

and

(4.55) l_2^k(x_t^{Bk}, y_t^{Bk}, x_{t-1}^k, λ, μ) = max_{x_t, y_t} { ⟨∇_{x_t} L_{x_{t-1}^k}(x_t^{Bk}, y_t^{Bk}, λ, μ), x_t^{Bk} − x_t⟩ + y_t^{Bk} − y_t : x_t ∈ X_t, B_{t+1}^k [x_t; y_t] ≤ b_{t+1}^k }.

With this notation, and recalling that (λ^k, μ^k) is an ε_t^k-optimal solution of (4.45), we put

(4.56) θ_t^k = L_{x_{t-1}^k}(x_t^{Bk}, y_t^{Bk}, λ^k, μ^k), η_t^k(ε_t^k) = l_2^k(x_t^{Bk}, y_t^{Bk}, x_{t-1}^k, λ^k, μ^k),
β_t^k = ∇_{x_{t-1}} f_t(x_t^{Bk}, x_{t-1}^k) + B_t^T λ^k + Σ_{i=1}^p μ^k(i) ∇_{x_{t-1}} g_{ti}(x_t^{Bk}, x_{t-1}^k).

Using Proposition 2.7, the affine function

C_t^k(x_{t-1}) = θ_t^k − η_t^k(ε_t^k) + ⟨β_t^k, x_{t-1} − x_{t-1}^k⟩
defines an inexact cut for Q_t. Moreover, the distance between Q_t^k(x_{t-1}^k) and C_t^k(x_{t-1}^k) is at most ε_t^k + l_2^k(x_t^{Bk}, y_t^{Bk}, x_{t-1}^k, λ^k, μ^k) = ε_t^k + η_t^k(ε_t^k).

For IDDP, we assume that nonlinear optimization problems (such as primal problems (4.52), (4.54) or dual problem (4.45)) are solved approximately, whereas linear optimization problems are solved exactly. Notice that we assumed that we can compute the optimal value l_1^k(x_t^{Bk}, y_t^{Bk}, x_{t-1}^k) of optimization problem (4.53) and the optimal value l_2^k(x_t^{Bk}, y_t^{Bk}, x_{t-1}^k, λ^k, μ^k) of optimization problem (4.55) written for (λ, μ) = (λ^k, μ^k). Since these optimization problems have a linear objective function, they are linear programs if and only if X_t is polyhedral. If this is not the case, then a) either we add components to g_t, pushing the nonlinear constraints in the representation of X_t into g_t, or b) we also solve (4.53) and (4.55) approximately. In Case b), we can still build an inexact cut C_t^k and study the convergence of the corresponding variant of IDDP along the lines of Section 4.5. More precisely, in this situation we obtain cut C_t^k using Proposition 2.5 instead of Proposition 2.2 if X_t is of type S1. If X_t is of type S2, we can use the extension of Proposition 2.7 obtained when (2.21) is solved approximately, exactly as was done for the extension of Proposition 2.2 corresponding to Proposition 2.5.

4.5. Convergence analysis. The main result of this section is Theorem 4.7, a convergence analysis of IDDP. We will use the following immediate observation:

Lemma 4.2. For t = 2, ..., T+1, the function Q_t is convex and Lipschitz continuous on X_{t-1}.

Proof. The proof is by backward induction on t. The result holds for t = T+1 by definition of Q_{T+1}. Let us now assume that Q_{t+1} is convex and Lipschitz continuous on X_t for some t ∈ {2, ..., T}. We consider two cases: X_t is of type S1 (Case A) and X_t is of type S2 (Case B).

Case A. Convexity of Q_t immediately follows from (H1)-(a),(b). (H1)-(b) implies that f_t is continuous on the compact set X_t × X_{t-1} and therefore takes finite values on X_t × X_{t-1}, but also on some neighborhood X_t × X_{t-1}^{ε_0} of X_t × X_{t-1} with ε_0 > 0.
Therefore, for every x_{t-1} ∈ X_{t-1}^{ε_0}, we have that x_t ↦ f_t(x_t, x_{t-1}) + Q_{t+1}(x_t) is finite-valued on X_t, and Q_t(x_{t-1}) is finite.

Case B. Convexity of Q_t immediately follows from (H1)-(a),(b),(c). As in Case A, f_t is finite-valued on X_t × X_{t-1}^{ε_0} for some ε_0 > 0. Combining this observation with (H1)-(d), for every x_{t-1} ∈ X_{t-1}^{min(ε_0, ε)} the function x_t ↦ f_t(x_t, x_{t-1}) + Q_{t+1}(x_t) is finite-valued on the nonempty set X_t(x_{t-1}), and therefore Q_t(x_{t-1}) is finite.

In both Cases A and B, we checked that X_{t-1} is contained in the interior of the domain of Q_t, which implies that the convex function Q_t is Lipschitz continuous on X_{t-1}.

In view of Lemma 4.2, we will denote by L(Q_t) a Lipschitz constant for Q_t, for t = 2, ..., T+1. A useful ingredient for the convergence analysis of IDDP is the boundedness of the sequences of approximate dual solutions (λ^k, μ^k). Recall that if X_t is of type S2, then Slater constraint qualification (4.47) holds. From Theorem 2.3.2, p. 312 in [10], we deduce that if the rows of A_t are independent, then the set of optimal dual solutions of problem (4.45) is bounded. Therefore, the level set of h^k_{t,x_{t-1}^k} associated with its optimal value is bounded, implying that the level set associated with this optimal value minus ε_t^k is bounded too (since, for a concave function, if one nonempty superlevel set is bounded then all superlevel sets are bounded). It follows that if the rows of A_t are independent, then for every k ∈ N the norm ‖(λ^k, μ^k)‖ is finite. To obtain an upper bound on the sequence (‖(λ^k, μ^k)‖)_k, we will use a slightly stronger assumption than (H1)-(e); namely, we will assume:

(H2) For t = 2, ..., T, there exist κ_t > 0 and r_t > 0 such that for every x_{t-1} ∈ X_{t-1} there exists x_t ∈ X_t such that B(x_t, r_t) ∩ Aff(X_t) ⊆ X_t, A_t x_t + B_t x_{t-1} = b_t, and, for every i = 1, ..., p, g_{ti}(x_t, x_{t-1}) ≤ −κ_t.

Remark 4.3. Of course, by definition of the relative interior, the condition B(x_t, r_t) ∩ Aff(X_t) ⊆ X_t implies that x_t ∈ ri(X_t).
However, we do not assume that the rows of A_t are independent. Using (H2) and Section 3, we can now show that the sequences of cut coefficients and of approximate dual solutions belong to compact sets:

Proposition 4.4. Assume that the noises (ε_t^k)_{k≥1} are bounded: for t = 2, ..., T, we have 0 ≤ ε_t^k ≤ ε̄ < +∞. If Assumptions (H1) and (H2) hold, then the sequences (θ_t^k)_{t,k}, (η_t^k(ε_t^k))_{t,k}, (β_t^k)_{t,k}, (λ_t^k)_{t,k}, (μ_t^k)_{t,k} generated by the IDDP algorithm are bounded: for t = 2, ..., T+1, there exists a compact set C_t such that the sequence (θ_t^k, η_t^k(ε_t^k), β_t^k)_{k≥1} belongs to C_t, and, for t = 2, ..., T, if X_t is of type S2, then there exists a compact set D_t such that the sequence (λ_t^k, μ_t^k)_{k≥1} belongs to D_t.

Proof. The proof is by backward induction on t. Our induction hypothesis H(t), for t ∈ {2, ..., T+1}, is that the sequence (θ_t^k, η_t^k(ε_t^k), β_t^k)_{k≥1} belongs to a compact set C_t. We have that H(T+1) holds because for t = T+1 the corresponding coefficients are all zero. Now assume that H(t+1) holds for some t ∈ {2, ..., T}. We want to show that H(t) holds and, if X_t is of type S2, that the sequence (λ_t^k, μ_t^k)_{k≥1} belongs to some compact set D_t. Since f_t and g_t belong to C^1(X_t × X_{t-1}), we can find finite m_t, M_1, M_2, M_3, M_4 such that for every x_t ∈ X_t, x_{t-1} ∈ X_{t-1}, and every i = 1, ..., p, we have

m_t ≤ f_t(x_t, x_{t-1}) ≤ M_1, ‖∇f_t(x_t, x_{t-1})‖ ≤ M_2, ‖∇g_{ti}(x_t, x_{t-1})‖ ≤ M_3, ‖g_t(x_t, x_{t-1})‖ ≤ M_4.

Also, since H(t+1) holds, the sequence (‖β_{t+1}^k‖)_{k≥1} is bounded from above by, say, L_{t+1}, which is a Lipschitz constant for all functions (Q_{t+1}^k)_{k≥1}. We now consider two cases: X_t is of type S1 (Case A) and X_t is of type S2 (Case B).

Case A. We have θ_t^k = f_t(x_t^{Bk}, x_{t-1}^k) + Q_{t+1}^k(x_t^{Bk}), which gives the bound

m_t + min_{x_t ∈ X_t} Q_{t+1}^1(x_t) ≤ θ_t^k ≤ M_1 + max_{x_t ∈ X_t} Q_{t+1}(x_t), k ≥ 1

(recall that, due to H(t+1) and Lemma 4.2, the minimum and maximum in the relation above are well defined because the functions Q_{t+1}^1 and Q_{t+1} are continuous on the compact set X_t).
Now, for η_t^k(ε_t^k) = l_1^k(x_t^{Bk}, y_t^{Bk}, x_{t-1}^k), and recalling definition (4.53) of l_1^k(x_t^{Bk}, y_t^{Bk}, x_{t-1}^k), we see that

(4.57) 0 ≤ η_t^k(ε_t^k) ≤ η̄_t := (M_2 + L_{t+1}) D(X_t), k ≥ 1,

where D(X_t) denotes the diameter of X_t; and of course the norm of β_t^k = ∇_{x_{t-1}} f_t(x_t^{Bk}, x_{t-1}^k) is, for all k ≥ 1, bounded from above by M_2. This shows H(t) in Case A.

Case B. We first obtain a bound on (λ_t^k, μ_t^k) using Proposition 3.1 and Corollary 3.2. Let us check that the assumptions of this corollary are satisfied for problem (4.54): (i) X_t is a closed convex set; (ii) the objective function F_t^k(·, x_{t-1}^k) is bounded from above by f̄(·) = f_t(·, x_{t-1}^k) + Q_{t+1}(·); since f_t is convex and finite in a neighborhood of X_t × X_{t-1}, it is Lipschitz continuous on X_t × X_{t-1} with Lipschitz constant, say, L(f_t), and therefore f̄ is Lipschitz continuous with Lipschitz constant L(f_t) + L(Q_{t+1}) on X_t; (iii) since all components of g_t are convex and finite in a neighborhood of X_t × X_{t-1}, they are Lipschitz continuous on X_t × X_{t-1}; (iv) the objective function is bounded from below on the feasible set by L_t := min_{x_{t-1} ∈ X_{t-1}} Q_t^1(x_{t-1}) (the minimum is well defined due to Assumption (H1)).

Due to Assumption (H2), we can find x̂_t^k ∈ ri(X_t) such that x̂_t^k ∈ X_t(x_{t-1}^k) and B_n(x̂_t^k, r_t) ∩ Aff(X_t) ⊆ X_t. Therefore, reproducing the reasoning of Section 3, we can find ρ_t such that

B_q(0, ρ_t) ∩ A_t V_{X_t} ⊆ A_t ( B_n(0, r_t) ∩ V_{X_t} ),

where V_{X_t} is the vector space V_{X_t} = {x − y : x, y ∈ Aff(X_t)} (this is relation (3.35) for problem (4.54)). Applying Corollary 3.2 to problem (4.54), we deduce that ‖(λ_t^k, μ_t^k)‖ ≤ U_t, where

U_t = [ (L(f_t) + L(Q_{t+1})) r_t + ε̄ + max_{x_t ∈ X_t, x_{t-1} ∈ X_{t-1}} ( f_t(x_t, x_{t-1}) + Q_{t+1}(x_t) ) − min_{x_{t-1} ∈ X_{t-1}} Q_t^1(x_{t-1}) ] / min(ρ_t, κ_t/2).

For θ_t^k = f_t(x_t^{Bk}, x_{t-1}^k) + Q_{t+1}^k(x_t^{Bk}) + ⟨μ_t^k, g_t(x_t^{Bk}, x_{t-1}^k)⟩, we get the bound

m_t − U_t M_4 + min_{x_t ∈ X_t} Q_{t+1}^1(x_t) ≤ θ_t^k ≤ M_1 + max_{x_t ∈ X_t} Q_{t+1}(x_t).
Note that η_t^k(ε_t^k) ≥ 0 and that the objective function of problem (4.55), written for (λ, μ) = (λ_t^k, μ_t^k), with optimal value η_t^k(ε_t^k), is bounded from above on the feasible set by

(4.58) η̄_t = ( M_2 + max(‖A_t^T‖, M_3 √p) U_t + L_{t+1} ) D(X_t),

and therefore the same upper bound holds for η_t^k(ε_t^k). Finally, recalling definition (4.56) of β_t^k, we have

(4.59) ‖β_t^k‖ ≤ M_2 + ‖B_t^T‖ ‖λ_t^k‖ + M_3 √p ‖μ_t^k‖ ≤ L_t := M_2 + max(‖B_t^T‖, M_3 √p) U_t,

which completes the proof and provides a Lipschitz constant L_t valid for the functions (Q_t^k)_k.

To show that the sequence of error terms (η_t^k(ε_t^k))_k converges to 0 when lim_{k→+∞} ε_t^k = 0, we will make use of Propositions 4.5 and 4.6, which follow:

Proposition 4.5. Let X ⊂ R^m and Y ⊂ R^n be two nonempty compact convex sets. Let f ∈ C^1(Y × X) be convex on Y × X. Let (Q^k)_{k≥1} be a sequence of convex L-Lipschitz continuous functions on Y satisfying Q_ℓ ≤ Q^k ≤ Q_u on Y, where Q_ℓ and Q_u are continuous on Y. Let (x^k)_{k≥1} be a sequence in X, let (ε^k)_{k≥1} be a sequence of nonnegative real numbers, and let y^k(ε^k) ∈ Y be an ε^k-optimal solution of

(4.60) inf { f(y, x^k) + Q^k(y) : y ∈ Y }.

Define

(4.61) η^k(ε^k) = max_{y ∈ Y} ⟨∇_y f(y^k(ε^k), x^k), y^k(ε^k) − y⟩ + Q^k(y^k(ε^k)) − Q^k(y).

Then, if lim_{k→+∞} ε^k = 0, we have

(4.62) lim_{k→+∞} η^k(ε^k) = 0.

Proof. In what follows, to simplify notation, we write y^k instead of y^k(ε^k). We show (4.62) by contradiction. Denoting by y_*^k ∈ Y an optimal solution of (4.60), we have for every k ≥ 1 that

(4.63) f(y_*^k, x^k) + Q^k(y_*^k) ≤ f(y^k, x^k) + Q^k(y^k) ≤ f(y_*^k, x^k) + Q^k(y_*^k) + ε^k.

Denoting by ỹ^k ∈ Y an optimal solution of optimization problem (4.61), we get

(4.64) η^k(ε^k) = ⟨∇_y f(y^k, x^k), y^k − ỹ^k⟩ + Q^k(y^k) − Q^k(ỹ^k).

Assume that (4.62) does not hold. Then, since η^k(ε^k) ≥ 0, there exist ε_0 > 0 and an increasing σ_1 : N → N such that for every k ∈ N we have

(4.65) η^{σ_1(k)}(ε^{σ_1(k)}) = ⟨∇_y f(y^{σ_1(k)}, x^{σ_1(k)}), y^{σ_1(k)} − ỹ^{σ_1(k)}⟩ + Q^{σ_1(k)}(y^{σ_1(k)}) − Q^{σ_1(k)}(ỹ^{σ_1(k)}) ≥ ε_0.
Now observe that the sequence (Q^{σ_1(k)})_k in C(Y):

(i) is bounded: for every k ≥ 1 and every y ∈ Y, we have −∞ < min_{y ∈ Y} Q_ℓ(y) ≤ Q^{σ_1(k)}(y) ≤ max_{y ∈ Y} Q_u(y) < +∞;

(ii) is equicontinuous, since the functions (Q^{σ_1(k)})_k are Lipschitz continuous with Lipschitz constant L.

Therefore, using the Arzelà-Ascoli theorem, this sequence has a uniformly convergent subsequence: there exist Q_∞ ∈ C(Y) and an increasing σ_2 : N → N such that, setting σ = σ_1 ∘ σ_2, we have lim_{k→+∞} ‖Q^{σ(k)} − Q_∞‖_Y = 0. Since (y_*^{σ(k)}, y^{σ(k)}, ỹ^{σ(k)}, x^{σ(k)})_{k≥1} is a sequence in the compact set Y × Y × Y × X, taking a further subsequence if needed, we can assume that (y_*^{σ(k)}, y^{σ(k)}, ỹ^{σ(k)}, x^{σ(k)}) converges to some (y_*, ȳ, ỹ_*, x_*) ∈ Y × Y × Y × X. By continuity arguments, for k sufficiently large, say k ≥ k_0, we have that

(4.66) | ⟨∇_y f(y^{σ(k)}, x^{σ(k)}), y^{σ(k)} − ỹ^{σ(k)}⟩ − ⟨∇_y f(ȳ, x_*), ȳ − ỹ^{σ(k)}⟩ | ≤ ε_0/4, ‖y^{σ(k)} − ȳ‖ ≤ ε_0/(8L), ‖Q^{σ(k)} − Q_∞‖_Y ≤ ε_0/16.
It follows that

(4.67) ⟨∇_y f(ȳ, x_*), ȳ − ỹ^{σ(k_0)}⟩ + Q_∞(ȳ) − Q_∞(ỹ^{σ(k_0)})
= ⟨∇_y f(y^{σ(k_0)}, x^{σ(k_0)}), y^{σ(k_0)} − ỹ^{σ(k_0)}⟩ + Q^{σ(k_0)}(y^{σ(k_0)}) − Q^{σ(k_0)}(ỹ^{σ(k_0)})
+ ⟨∇_y f(ȳ, x_*), ȳ − ỹ^{σ(k_0)}⟩ − ⟨∇_y f(y^{σ(k_0)}, x^{σ(k_0)}), y^{σ(k_0)} − ỹ^{σ(k_0)}⟩
+ [ Q_∞(ȳ) − Q^{σ(k_0)}(ȳ) + Q^{σ(k_0)}(ȳ) − Q^{σ(k_0)}(y^{σ(k_0)}) ] − [ Q_∞(ỹ^{σ(k_0)}) − Q^{σ(k_0)}(ỹ^{σ(k_0)}) ]
≥ ε_0 − ε_0/4 − 2 ‖Q_∞ − Q^{σ(k_0)}‖_Y − L ‖ȳ − y^{σ(k_0)}‖ ≥ ε_0/2 > 0,

where for the last two inequalities we have used (4.65) and (4.66). Recalling the definition of y_*^k, for every k ≥ 1 we have that y_*^{σ(k)} ∈ Y and

f(y_*^{σ(k)}, x^{σ(k)}) + Q^{σ(k)}(y_*^{σ(k)}) ≤ f(y, x^{σ(k)}) + Q^{σ(k)}(y), ∀ y ∈ Y.

Taking the limit as k → +∞ in the above inequality, we get (using the continuity of f)

f_* := f(y_*, x_*) + Q_∞(y_*) ≤ f(y, x_*) + Q_∞(y), ∀ y ∈ Y.

Since y_* ∈ Y, we have shown that y_* is an optimal solution of the optimization problem

(4.68) f_* = min { f(y, x_*) + Q_∞(y) : y ∈ Y }.

Replacing k by σ(k) in (4.63) and taking the limit as k → +∞, we obtain f_* = f(y_*, x_*) + Q_∞(y_*) = f(ȳ, x_*) + Q_∞(ȳ). Combining this observation with the fact that ȳ ∈ Y, we deduce that ȳ is also an optimal solution of (4.68). Next, since all functions (Q^{σ(k)})_k are convex on Y, the function Q_∞ is convex on Y too. Recalling Lemma 6.1, the optimality conditions at ȳ read

⟨∇_y f(ȳ, x_*), y − ȳ⟩ + Q_∞(y) − Q_∞(ȳ) ≥ 0, ∀ y ∈ Y.

Since ỹ^{σ(k_0)} ∈ Y, we have in particular

⟨∇_y f(ȳ, x_*), ỹ^{σ(k_0)} − ȳ⟩ + Q_∞(ỹ^{σ(k_0)}) − Q_∞(ȳ) ≥ 0.

However, from (4.67), the left-hand side of the above inequality is at most −ε_0/2 < 0, which yields the desired contradiction.

Proposition 4.6. Let Y ⊂ R^n and X ⊂ R^m be two nonempty compact convex sets. Let f ∈ C^1(Y × X) be convex on Y × X. Let (Q^k)_{k≥1} be a sequence of convex L-Lipschitz continuous functions on Y satisfying Q_ℓ ≤ Q^k ≤ Q_u on Y, where Q_ℓ and Q_u are continuous on Y. Let g ∈ C^1(Y × X) with components g_i, i = 1, ..., p, convex on Y × X^ε for some ε > 0. We also assume

(H): there exist κ > 0 and r > 0 such that for every x ∈ X there exists y ∈ Y with B_n(y, r) ∩ Aff(Y) ⊆ Y, Ay + Bx = b, and g(y, x) ≤ −κ e,

where e is a vector of ones of size p.
Let (x^k)_{k≥1} be a sequence in X, let (ε^k)_{k≥1} be a sequence of nonnegative real numbers, and let y^k(ε^k) be an ε^k-optimal and feasible solution of

(4.69) inf { f(y, x^k) + Q^k(y) : y ∈ Y, Ay + Bx^k = b, g(y, x^k) ≤ 0 }.

Let (λ^k(ε^k), μ^k(ε^k)) be an ε^k-optimal solution of the dual problem

(4.70) sup_{λ,μ} h_{x^k}^k(λ, μ) subject to λ = Ay + Bx^k − b for some y ∈ Aff(Y), μ ≥ 0,

where

h_{x^k}^k(λ, μ) = inf_{y ∈ Y} { f(y, x^k) + Q^k(y) + ⟨λ, Ay + Bx^k − b⟩ + ⟨μ, g(y, x^k)⟩ }.

Define η^k(ε^k) as the optimal value of the following optimization problem:

(4.71) max_{y ∈ Y} ⟨∇_y f(y^k(ε^k), x^k) + A^T λ^k(ε^k) + Σ_{i=1}^p μ^k(ε^k)(i) ∇_y g_i(y^k(ε^k), x^k), y^k(ε^k) − y⟩ + Q^k(y^k(ε^k)) − Q^k(y).
Then, if lim_{k→+∞} ε^k = 0, we have

(4.72) lim_{k→+∞} η^k(ε^k) = 0.

Proof. For simplicity, we write λ^k, μ^k, y^k instead of λ^k(ε^k), μ^k(ε^k), y^k(ε^k), and put Y(x) = {y ∈ Y : Ay + Bx = b, g(y, x) ≤ 0}. Denoting by y_*^k ∈ Y(x^k) an optimal solution of (4.69), we get

(4.73) f(y_*^k, x^k) + Q^k(y_*^k) ≤ f(y^k, x^k) + Q^k(y^k) ≤ f(y_*^k, x^k) + Q^k(y_*^k) + ε^k.

We prove (4.72) by contradiction. Let ỹ^k be an optimal solution of (4.71):

η^k(ε^k) = ⟨∇_y f(y^k, x^k) + A^T λ^k + Σ_{i=1}^p μ^k(i) ∇_y g_i(y^k, x^k), y^k − ỹ^k⟩ + Q^k(y^k) − Q^k(ỹ^k).

Assume that (4.72) does not hold. Then there exist ε_0 > 0 and an increasing σ_1 : N → N such that for every k ∈ N we have

(4.74) ⟨∇_y f(y^{σ_1(k)}, x^{σ_1(k)}) + A^T λ^{σ_1(k)} + Σ_{i=1}^p μ^{σ_1(k)}(i) ∇_y g_i(y^{σ_1(k)}, x^{σ_1(k)}), y^{σ_1(k)} − ỹ^{σ_1(k)}⟩ + Q^{σ_1(k)}(y^{σ_1(k)}) − Q^{σ_1(k)}(ỹ^{σ_1(k)}) ≥ ε_0.

Using Assumption (H) and Proposition 3.1, we obtain that (λ^{σ_1(k)}, μ^{σ_1(k)})_k is a sequence in a compact set, say D. Therefore, as in the proof of Proposition 4.5, we can find Q_∞ ∈ C(Y) and an increasing σ_2 : N → N such that, setting σ = σ_1 ∘ σ_2, we have lim_{k→+∞} ‖Q^{σ(k)} − Q_∞‖_Y = 0, and (y^{σ(k)}, y_*^{σ(k)}, ỹ^{σ(k)}, x^{σ(k)}, λ^{σ(k)}, μ^{σ(k)}) converges to some (ȳ, y_*, ỹ_*, x_*, λ_*, μ_*) ∈ Y × Y × Y × X × D. It follows that there is k_0 ∈ N such that for every k ≥ k_0:

(4.75) | ⟨∇_y f(y^{σ(k)}, x^{σ(k)}) + A^T λ^{σ(k)} + Σ_{i=1}^p μ^{σ(k)}(i) ∇_y g_i(y^{σ(k)}, x^{σ(k)}), y^{σ(k)} − ỹ^{σ(k)}⟩ − ⟨∇_y f(ȳ, x_*) + A^T λ_* + Σ_{i=1}^p μ_*(i) ∇_y g_i(ȳ, x_*), ȳ − ỹ^{σ(k)}⟩ | ≤ ε_0/4, ‖y^{σ(k)} − ȳ‖ ≤ ε_0/(8L), ‖Q^{σ(k)} − Q_∞‖_Y ≤ ε_0/16.

As in the proof of Proposition 4.5, we deduce from (4.74) and (4.75) that

(4.76) ⟨∇_y f(ȳ, x_*) + A^T λ_* + Σ_{i=1}^p μ_*(i) ∇_y g_i(ȳ, x_*), ȳ − ỹ^{σ(k_0)}⟩ + Q_∞(ȳ) − Q_∞(ỹ^{σ(k_0)}) ≥ ε_0/2 > 0.
Due to Assumption (H), primal problem (4.69) and dual problem (4.70) have the same optimal value, and for every y ∈ Y and k ≥ 1 we have:

f(y^{σ(k)}, x^{σ(k)}) + Q^{σ(k)}(y^{σ(k)}) + ⟨Ay^{σ(k)} + Bx^{σ(k)} − b, λ^{σ(k)}⟩ + ⟨μ^{σ(k)}, g(y^{σ(k)}, x^{σ(k)})⟩
≤ f(y^{σ(k)}, x^{σ(k)}) + Q^{σ(k)}(y^{σ(k)}), [since μ^{σ(k)} ≥ 0 and y^{σ(k)} ∈ Y(x^{σ(k)})],
≤ f(y_*^{σ(k)}, x^{σ(k)}) + Q^{σ(k)}(y_*^{σ(k)}) + ε^{σ(k)}, [by definition of y^{σ(k)} and y_*^{σ(k)}],
≤ h_{x^{σ(k)}}^{σ(k)}(λ^{σ(k)}, μ^{σ(k)}) + 2 ε^{σ(k)}, [(λ^{σ(k)}, μ^{σ(k)}) is an ε^{σ(k)}-optimal dual solution and there is no duality gap],
≤ f(y, x^{σ(k)}) + ⟨Ay + Bx^{σ(k)} − b, λ^{σ(k)}⟩ + ⟨μ^{σ(k)}, g(y, x^{σ(k)})⟩ + Q^{σ(k)}(y) + 2 ε^{σ(k)}, [by definition of h_{x^{σ(k)}}^{σ(k)}].

Taking the limit in the above relation as k → +∞, we get, for every y ∈ Y:

f(ȳ, x_*) + ⟨Aȳ + Bx_* − b, λ_*⟩ + ⟨μ_*, g(ȳ, x_*)⟩ + Q_∞(ȳ) ≤ f(y, x_*) + ⟨Ay + Bx_* − b, λ_*⟩ + ⟨μ_*, g(y, x_*)⟩ + Q_∞(y).

Recalling that ȳ ∈ Y, this shows that ȳ is an optimal solution of

(4.77) min { f(y, x_*) + Q_∞(y) + ⟨Ay + Bx_* − b, λ_*⟩ + ⟨μ_*, g(y, x_*)⟩ : y ∈ Y }.

Now recall that all functions (Q^{σ(k)})_k are convex on Y, and therefore the function Q_∞ is convex on Y too. Using Lemma 6.1, the first-order optimality conditions at ȳ can be written

(4.78) ⟨∇_y f(ȳ, x_*) + A^T λ_* + Σ_{i=1}^p μ_*(i) ∇_y g_i(ȳ, x_*), y − ȳ⟩ + Q_∞(y) − Q_∞(ȳ) ≥ 0

for all y ∈ Y. Specializing the above relation for y = ỹ^{σ(k_0)}, we get

⟨∇_y f(ȳ, x_*) + A^T λ_* + Σ_{i=1}^p μ_*(i) ∇_y g_i(ȳ, x_*), ỹ^{σ(k_0)} − ȳ⟩ + Q_∞(ỹ^{σ(k_0)}) − Q_∞(ȳ) ≥ 0,

which contradicts (4.76).
Lecure 5 Exponenial Families Exponenial families, also called Koopman-Darmois families, include a quie number of well known disribuions. Many nice properies enjoyed by exponenial families allow us o provide
More informationSOME MORE APPLICATIONS OF THE HAHN-BANACH THEOREM
SOME MORE APPLICATIONS OF THE HAHN-BANACH THEOREM FRANCISCO JAVIER GARCÍA-PACHECO, DANIELE PUGLISI, AND GUSTI VAN ZYL Absrac We give a new proof of he fac ha equivalen norms on subspaces can be exended
More information1 Solutions to selected problems
1 Soluions o seleced problems 1. Le A B R n. Show ha in A in B bu in general bd A bd B. Soluion. Le x in A. Then here is ɛ > 0 such ha B ɛ (x) A B. This shows x in B. If A = [0, 1] and B = [0, 2], hen
More informationarxiv: v1 [math.pr] 19 Feb 2011
A NOTE ON FELLER SEMIGROUPS AND RESOLVENTS VADIM KOSTRYKIN, JÜRGEN POTTHOFF, AND ROBERT SCHRADER ABSTRACT. Various equivalen condiions for a semigroup or a resolven generaed by a Markov process o be of
More informationSome Basic Information about M-S-D Systems
Some Basic Informaion abou M-S-D Sysems 1 Inroducion We wan o give some summary of he facs concerning unforced (homogeneous) and forced (non-homogeneous) models for linear oscillaors governed by second-order,
More informationApproximating positive solutions of nonlinear first order ordinary quadratic differential equations
Dhage & Dhage, Cogen Mahemaics (25, 2: 2367 hp://dx.doi.org/.8/233835.25.2367 APPLIED & INTERDISCIPLINARY MATHEMATICS RESEARCH ARTICLE Approximaing posiive soluions of nonlinear firs order ordinary quadraic
More informationLecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still.
Lecure - Kinemaics in One Dimension Displacemen, Velociy and Acceleraion Everyhing in he world is moving. Nohing says sill. Moion occurs a all scales of he universe, saring from he moion of elecrons in
More informationTHE GENERALIZED PASCAL MATRIX VIA THE GENERALIZED FIBONACCI MATRIX AND THE GENERALIZED PELL MATRIX
J Korean Mah Soc 45 008, No, pp 479 49 THE GENERALIZED PASCAL MATRIX VIA THE GENERALIZED FIBONACCI MATRIX AND THE GENERALIZED PELL MATRIX Gwang-yeon Lee and Seong-Hoon Cho Reprined from he Journal of he
More information11!Hí MATHEMATICS : ERDŐS AND ULAM PROC. N. A. S. of decomposiion, properly speaking) conradics he possibiliy of defining a counably addiive real-valu
ON EQUATIONS WITH SETS AS UNKNOWNS BY PAUL ERDŐS AND S. ULAM DEPARTMENT OF MATHEMATICS, UNIVERSITY OF COLORADO, BOULDER Communicaed May 27, 1968 We shall presen here a number of resuls in se heory concerning
More informationMath 2142 Exam 1 Review Problems. x 2 + f (0) 3! for the 3rd Taylor polynomial at x = 0. To calculate the various quantities:
Mah 4 Eam Review Problems Problem. Calculae he 3rd Taylor polynomial for arcsin a =. Soluion. Le f() = arcsin. For his problem, we use he formula f() + f () + f ()! + f () 3! for he 3rd Taylor polynomial
More informationdi Bernardo, M. (1995). A purely adaptive controller to synchronize and control chaotic systems.
di ernardo, M. (995). A purely adapive conroller o synchronize and conrol chaoic sysems. hps://doi.org/.6/375-96(96)8-x Early version, also known as pre-prin Link o published version (if available):.6/375-96(96)8-x
More informationAppendix to Online l 1 -Dictionary Learning with Application to Novel Document Detection
Appendix o Online l -Dicionary Learning wih Applicaion o Novel Documen Deecion Shiva Prasad Kasiviswanahan Huahua Wang Arindam Banerjee Prem Melville A Background abou ADMM In his secion, we give a brief
More informationLecture Notes 2. The Hilbert Space Approach to Time Series
Time Series Seven N. Durlauf Universiy of Wisconsin. Basic ideas Lecure Noes. The Hilber Space Approach o Time Series The Hilber space framework provides a very powerful language for discussing he relaionship
More informationEssential Maps and Coincidence Principles for General Classes of Maps
Filoma 31:11 (2017), 3553 3558 hps://doi.org/10.2298/fil1711553o Published by Faculy of Sciences Mahemaics, Universiy of Niš, Serbia Available a: hp://www.pmf.ni.ac.rs/filoma Essenial Maps Coincidence
More informationCONTRIBUTION TO IMPULSIVE EQUATIONS
European Scienific Journal Sepember 214 /SPECIAL/ ediion Vol.3 ISSN: 1857 7881 (Prin) e - ISSN 1857-7431 CONTRIBUTION TO IMPULSIVE EQUATIONS Berrabah Faima Zohra, MA Universiy of sidi bel abbes/ Algeria
More informationStochastic models and their distributions
Sochasic models and heir disribuions Couning cusomers Suppose ha n cusomers arrive a a grocery a imes, say T 1,, T n, each of which akes any real number in he inerval (, ) equally likely The values T 1,,
More informationOn R d -valued peacocks
On R d -valued peacocks Francis HIRSCH 1), Bernard ROYNETTE 2) July 26, 211 1) Laboraoire d Analyse e Probabiliés, Universié d Évry - Val d Essonne, Boulevard F. Mierrand, F-9125 Évry Cedex e-mail: francis.hirsch@univ-evry.fr
More informationBU Macro BU Macro Fall 2008, Lecture 4
Dynamic Programming BU Macro 2008 Lecure 4 1 Ouline 1. Cerainy opimizaion problem used o illusrae: a. Resricions on exogenous variables b. Value funcion c. Policy funcion d. The Bellman equaion and an
More information15. Vector Valued Functions
1. Vecor Valued Funcions Up o his poin, we have presened vecors wih consan componens, for example, 1, and,,4. However, we can allow he componens of a vecor o be funcions of a common variable. For example,
More information6. Stochastic calculus with jump processes
A) Trading sraegies (1/3) Marke wih d asses S = (S 1,, S d ) A rading sraegy can be modelled wih a vecor φ describing he quaniies invesed in each asse a each insan : φ = (φ 1,, φ d ) The value a of a porfolio
More informationOnline Appendix to Solution Methods for Models with Rare Disasters
Online Appendix o Soluion Mehods for Models wih Rare Disasers Jesús Fernández-Villaverde and Oren Levinal In his Online Appendix, we presen he Euler condiions of he model, we develop he pricing Calvo block,
More informationLECTURE 1: GENERALIZED RAY KNIGHT THEOREM FOR FINITE MARKOV CHAINS
LECTURE : GENERALIZED RAY KNIGHT THEOREM FOR FINITE MARKOV CHAINS We will work wih a coninuous ime reversible Markov chain X on a finie conneced sae space, wih generaor Lf(x = y q x,yf(y. (Recall ha q
More informationOn Measuring Pro-Poor Growth. 1. On Various Ways of Measuring Pro-Poor Growth: A Short Review of the Literature
On Measuring Pro-Poor Growh 1. On Various Ways of Measuring Pro-Poor Growh: A Shor eview of he Lieraure During he pas en years or so here have been various suggesions concerning he way one should check
More informationLogarithmic limit sets of real semi-algebraic sets
Ahead of Prin DOI 10.1515 / advgeom-2012-0020 Advances in Geomery c de Gruyer 20xx Logarihmic limi ses of real semi-algebraic ses Daniele Alessandrini (Communicaed by C. Scheiderer) Absrac. This paper
More informationEssential Microeconomics : OPTIMAL CONTROL 1. Consider the following class of optimization problems
Essenial Microeconomics -- 6.5: OPIMAL CONROL Consider he following class of opimizaion problems Max{ U( k, x) + U+ ( k+ ) k+ k F( k, x)}. { x, k+ } = In he language of conrol heory, he vecor k is he vecor
More informationHeat kernel and Harnack inequality on Riemannian manifolds
Hea kernel and Harnack inequaliy on Riemannian manifolds Alexander Grigor yan UHK 11/02/2014 onens 1 Laplace operaor and hea kernel 1 2 Uniform Faber-Krahn inequaliy 3 3 Gaussian upper bounds 4 4 ean-value
More informationMATH 4330/5330, Fourier Analysis Section 6, Proof of Fourier s Theorem for Pointwise Convergence
MATH 433/533, Fourier Analysis Secion 6, Proof of Fourier s Theorem for Poinwise Convergence Firs, some commens abou inegraing periodic funcions. If g is a periodic funcion, g(x + ) g(x) for all real x,
More informationOptimality and complexity for constrained optimization problems with nonconvex regularization
Opimaliy and complexiy for consrained opimizaion problems wih nonconvex regularizaion Wei Bian Deparmen of Mahemaics, Harbin Insiue of Technology, Harbin, China, bianweilvse520@163.com Xiaojun Chen Deparmen
More informationClarke s Generalized Gradient and Edalat s L-derivative
1 21 ISSN 1759-9008 1 Clarke s Generalized Gradien and Edala s L-derivaive PETER HERTLING Absrac: Clarke [2, 3, 4] inroduced a generalized gradien for real-valued Lipschiz coninuous funcions on Banach
More informationPENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD
PENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD HAN XIAO 1. Penalized Leas Squares Lasso solves he following opimizaion problem, ˆβ lasso = arg max β R p+1 1 N y i β 0 N x ij β j β j (1.1) for some 0.
More informationApplication of a Stochastic-Fuzzy Approach to Modeling Optimal Discrete Time Dynamical Systems by Using Large Scale Data Processing
Applicaion of a Sochasic-Fuzzy Approach o Modeling Opimal Discree Time Dynamical Sysems by Using Large Scale Daa Processing AA WALASZE-BABISZEWSA Deparmen of Compuer Engineering Opole Universiy of Technology
More informationSTABILITY OF PEXIDERIZED QUADRATIC FUNCTIONAL EQUATION IN NON-ARCHIMEDEAN FUZZY NORMED SPASES
Novi Sad J. Mah. Vol. 46, No. 1, 2016, 15-25 STABILITY OF PEXIDERIZED QUADRATIC FUNCTIONAL EQUATION IN NON-ARCHIMEDEAN FUZZY NORMED SPASES N. Eghbali 1 Absrac. We deermine some sabiliy resuls concerning
More informationLie Derivatives operator vector field flow push back Lie derivative of
Lie Derivaives The Lie derivaive is a mehod of compuing he direcional derivaive of a vecor field wih respec o anoher vecor field We already know how o make sense of a direcional derivaive of real valued
More informationOn the probabilistic stability of the monomial functional equation
Available online a www.jnsa.com J. Nonlinear Sci. Appl. 6 (013), 51 59 Research Aricle On he probabilisic sabiliy of he monomial funcional equaion Claudia Zaharia Wes Universiy of Timişoara, Deparmen of
More informationAn Introduction to Backward Stochastic Differential Equations (BSDEs) PIMS Summer School 2016 in Mathematical Finance.
1 An Inroducion o Backward Sochasic Differenial Equaions (BSDEs) PIMS Summer School 2016 in Mahemaical Finance June 25, 2016 Chrisoph Frei cfrei@ualbera.ca This inroducion is based on Touzi [14], Bouchard
More informationSystem of Linear Differential Equations
Sysem of Linear Differenial Equaions In "Ordinary Differenial Equaions" we've learned how o solve a differenial equaion for a variable, such as: y'k5$e K2$x =0 solve DE yx = K 5 2 ek2 x C_C1 2$y''C7$y
More information10. State Space Methods
. Sae Space Mehods. Inroducion Sae space modelling was briefly inroduced in chaper. Here more coverage is provided of sae space mehods before some of heir uses in conrol sysem design are covered in he
More informationEXERCISES FOR SECTION 1.5
1.5 Exisence and Uniqueness of Soluions 43 20. 1 v c 21. 1 v c 1 2 4 6 8 10 1 2 2 4 6 8 10 Graph of approximae soluion obained using Euler s mehod wih = 0.1. Graph of approximae soluion obained using Euler
More informationAn Introduction to Stochastic Programming: The Recourse Problem
An Inroducion o Sochasic Programming: he Recourse Problem George Danzig and Phil Wolfe Ellis Johnson, Roger Wes, Dick Cole, and Me John Birge Where o look in he ex pp. 6-7, Secion.2.: Inroducion o sochasic
More informationOnline Convex Optimization Example And Follow-The-Leader
CSE599s, Spring 2014, Online Learning Lecure 2-04/03/2014 Online Convex Opimizaion Example And Follow-The-Leader Lecurer: Brendan McMahan Scribe: Sephen Joe Jonany 1 Review of Online Convex Opimizaion
More informationGMM - Generalized Method of Moments
GMM - Generalized Mehod of Momens Conens GMM esimaion, shor inroducion 2 GMM inuiion: Maching momens 2 3 General overview of GMM esimaion. 3 3. Weighing marix...........................................
More informationPositive continuous solution of a quadratic integral equation of fractional orders
Mah. Sci. Le., No., 9-7 (3) 9 Mahemaical Sciences Leers An Inernaional Journal @ 3 NSP Naural Sciences Publishing Cor. Posiive coninuous soluion of a quadraic inegral equaion of fracional orders A. M.
More information2.7. Some common engineering functions. Introduction. Prerequisites. Learning Outcomes
Some common engineering funcions 2.7 Inroducion This secion provides a caalogue of some common funcions ofen used in Science and Engineering. These include polynomials, raional funcions, he modulus funcion
More informationdy dx = xey (a) y(0) = 2 (b) y(1) = 2.5 SOLUTION: See next page
Assignmen 1 MATH 2270 SOLUTION Please wrie ou complee soluions for each of he following 6 problems (one more will sill be added). You may, of course, consul wih your classmaes, he exbook or oher resources,
More informationarxiv: v1 [math.ca] 15 Nov 2016
arxiv:6.599v [mah.ca] 5 Nov 26 Counerexamples on Jumarie s hree basic fracional calculus formulae for non-differeniable coninuous funcions Cheng-shi Liu Deparmen of Mahemaics Norheas Peroleum Universiy
More information1. Introduction The present paper is concerned with the scalar conservation law in one space dimension: u t + (A(u)) x = 0 (1.0.1)
SINGLE ENTROPY CONDITION FOR BURGERS EQUATION VIA THE RELATIVE ENTROPY METHOD SAM G. KRUPA AND ALEXIS F. VASSEUR Absrac. This paper provides a new proof of uniqueness of soluions o scalar conservaion laws
More informationResearch Article Existence and Uniqueness of Periodic Solution for Nonlinear Second-Order Ordinary Differential Equations
Hindawi Publishing Corporaion Boundary Value Problems Volume 11, Aricle ID 19156, 11 pages doi:1.1155/11/19156 Research Aricle Exisence and Uniqueness of Periodic Soluion for Nonlinear Second-Order Ordinary
More informationGENERALIZATION OF THE FORMULA OF FAA DI BRUNO FOR A COMPOSITE FUNCTION WITH A VECTOR ARGUMENT
Inerna J Mah & Mah Sci Vol 4, No 7 000) 48 49 S0670000970 Hindawi Publishing Corp GENERALIZATION OF THE FORMULA OF FAA DI BRUNO FOR A COMPOSITE FUNCTION WITH A VECTOR ARGUMENT RUMEN L MISHKOV Received
More informationNotes for Lecture 17-18
U.C. Berkeley CS278: Compuaional Complexiy Handou N7-8 Professor Luca Trevisan April 3-8, 2008 Noes for Lecure 7-8 In hese wo lecures we prove he firs half of he PCP Theorem, he Amplificaion Lemma, up
More informationWe just finished the Erdős-Stone Theorem, and ex(n, F ) (1 1/(χ(F ) 1)) ( n
Lecure 3 - Kövari-Sós-Turán Theorem Jacques Versraëe jacques@ucsd.edu We jus finished he Erdős-Sone Theorem, and ex(n, F ) ( /(χ(f ) )) ( n 2). So we have asympoics when χ(f ) 3 bu no when χ(f ) = 2 i.e.
More informationHeavy Tails of Discounted Aggregate Claims in the Continuous-time Renewal Model
Heavy Tails of Discouned Aggregae Claims in he Coninuous-ime Renewal Model Qihe Tang Deparmen of Saisics and Acuarial Science The Universiy of Iowa 24 Schae er Hall, Iowa Ciy, IA 52242, USA E-mail: qang@sa.uiowa.edu
More information