On the Convergence Time of Dual Subgradient Methods for Strongly Convex Programs
IEEE Transactions on Automatic Control, 63(4), April 2018

On the Convergence Time of Dual Subgradient Methods for Strongly Convex Programs

Hao Yu and Michael J. Neely, University of Southern California

Abstract: This paper studies the convergence time of dual gradient methods for general (possibly non-differentiable) strongly convex programs. For general convex programs, the convergence time of dual subgradient/gradient methods with simple running averages (running averages started from iteration 0) is known to be O(1/ɛ²). This paper shows that the convergence time for general strongly convex programs is O(1/ɛ). This paper also considers a variation of the averaging scheme, called the sliding running averages, and shows that if the dual function of the strongly convex program is locally quadratic then the convergence time of the dual gradient method with sliding running averages is O(log(1/ɛ)). The convergence time analysis is further verified by numerical experiments.

I. INTRODUCTION

Consider the following strongly convex program:

    min  f(x)                                    (1)
    s.t. g_k(x) ≤ 0,  k ∈ {1, 2, ..., m}         (2)
         x ∈ X                                   (3)

where the set X ⊆ R^n is closed and convex; the function f(x) is continuous and strongly convex on X (strong convexity is defined in Section II-A); and the functions g_k(x), k ∈ {1, 2, ..., m}, are Lipschitz continuous and convex on X. Note that the functions f(x), g_1(x), ..., g_m(x) are not necessarily differentiable. It is assumed throughout that problem (1)-(3) has an optimal solution. Strong convexity of f implies the optimum is unique.

Convex program (1)-(3) arises often in control applications such as model predictive control (MPC) [2], decentralized multi-agent control [3], and network flow control [4], [5]. More specifically, the model predictive control problem is to solve problem (1)-(3) where f(x) is a quadratic function and each g_k(x) is a linear function [2].
In decentralized multi-agent control [3], our goal is to develop distributed algorithms to solve problem (1)-(3) where f(x) is the sum utility of individual agents and the constraints g_k(x) ≤ 0 capture the communication or resource allocation constraints among individual agents. Network flow control and the transmission control protocol (TCP) in computer networks can be interpreted as the dual subgradient algorithm for solving a problem of the form (1)-(3) [4], [5]. In particular, Section V-B shows that dual subgradient method based flow control rapidly converges to optimality when utilities are strongly convex.

Hao Yu and Michael J. Neely are with the Department of Electrical Engineering, University of Southern California, Los Angeles, CA, USA. This work was presented in part at the IEEE Conference on Decision and Control (CDC), Osaka, Japan, December 2015 [1]. This work is supported in part by an NSF CCF grant.

A. The ɛ-Approximate Solution

Definition 1. Let x* be the optimal solution to problem (1)-(3). For any ɛ > 0, a point x_ɛ ∈ X is said to be an ɛ-approximate solution if f(x_ɛ) ≤ f(x*) + ɛ and g_k(x_ɛ) ≤ ɛ for all k ∈ {1, ..., m}.

Definition 2. Let x(t), t ∈ {0, 1, ...}, be the solution sequence yielded by an iterative algorithm. The convergence time (to an ɛ-approximate solution) is the number of iterations required to achieve an ɛ-approximate solution. An algorithm is said to have convergence time O(h(ɛ)) if {x(t), t ≥ O(h(ɛ))} is a sequence of ɛ-approximate solutions for some function h(ɛ). Note that if x(t) satisfies f(x(t)) ≤ f(x*) + O(1/t) and g_k(x(t)) ≤ O(1/t) for all k ∈ {1, ..., m} and all t, then the error decays with time like O(1/t) and the convergence time is O(1/ɛ).

B. The Dual Subgradient/Gradient Method

The dual subgradient method is a conventional method to solve (1)-(3) [6], [7]. It is an iterative algorithm that, at every iteration, removes the inequality constraints (2) and chooses primal variables to minimize a function over the set X. This minimization can be decomposed into parallel smaller problems if the objective and constraint functions are separable.
The dual subgradient method can be interpreted as a subgradient/gradient method applied to the Lagrangian dual function of convex program (1)-(3) and allows for many different step size rules [7]. This paper focuses on a constant step size due to its simplicity for practical implementations. Note that by Danskin's theorem (Proposition B.25(a) in [7]), the Lagrangian dual function of a strongly convex program is differentiable; thus the dual subgradient method for strongly convex program (1)-(3) is in fact a dual gradient method. The constant step size dual subgradient/gradient method solves problem (1)-(3) as follows:

Algorithm 1 [The Dual Subgradient/Gradient Method]: Let c > 0 be a constant step size. Let λ(0) ≥ 0 be a given constant vector. At each iteration t, update x(t) and λ(t+1) as follows:

    x(t) = argmin_{x ∈ X} [ f(x) + Σ_{k=1}^{m} λ_k(t) g_k(x) ],
    λ_k(t+1) = max{ λ_k(t) + c g_k(x(t)), 0 },  ∀k.

If there exists z ∈ X such that g_k(z) ≤ −δ for all k ∈ {1, ..., m} and some δ > 0, one can convert an ɛ-approximate point x_ɛ into another point x̂ = θ x_ɛ + (1−θ) z, with θ = δ/(ɛ+δ), which satisfies all desired constraints and has objective value within O(ɛ) of optimality.
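The two-line update of Algorithm 1 can be sketched directly. The sketch below is a minimal illustration, not the authors' code; the helper names (`argmin_lagrangian`, `g`) and the toy instance (min x² subject to 1 − x ≤ 0, whose Lagrangian minimizer has the closed form x = λ/2) are our own assumptions for illustration. Here α = 2 and β = 1, so c = 0.5 satisfies the step size condition c ≤ α/β² used later in the paper.

```python
import numpy as np

def dual_gradient_method(argmin_lagrangian, g, lam0, c, T):
    """Algorithm 1 sketch: constant-step-size dual subgradient/gradient method.

    argmin_lagrangian(lam) returns x(t) = argmin_{x in X} [f(x) + lam^T g(x)],
    assumed available in closed form or via a cheap subroutine.
    """
    lam = np.array(lam0, dtype=float)
    xs = []
    for _ in range(T):
        x = argmin_lagrangian(lam)                          # primal update
        lam = np.maximum(lam + c * np.asarray(g(x)), 0.0)   # dual update on R^m_+
        xs.append(x)
    return np.array(xs), lam

# Toy instance: min x^2 s.t. 1 - x <= 0, so x* = 1 and lambda* = 2.
xs, lam = dual_gradient_method(
    argmin_lagrangian=lambda lam: np.array([lam[0] / 2.0]),
    g=lambda x: np.array([1.0 - x[0]]),
    lam0=[0.0], c=0.5, T=200)
xbar = xs.mean(axis=0)  # simple running average after T iterations
```

On this toy instance λ(t) converges geometrically to λ* = 2, and the running average approaches the optimum x* = 1.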
Rather than using x(t) from Algorithm 1 as the primal solutions, the following running average schemes are considered:

1) Simple Running Averages: Use x̄(t) = (1/t) Σ_{τ=0}^{t-1} x(τ) as the solution at each iteration t ∈ {1, 2, ...}.

2) Sliding Running Averages: Use x̃(1) = x(0) and, for t ≥ 2,

    x̃(t) = (2/t) Σ_{τ=t/2}^{t-1} x(τ)   if t is even,
    x̃(t) = x̃(t−1)                       if t is odd,

as the solution at each iteration t.

The simple running average sequence x̄(t) is also called the ergodic sequence in [8]. The idea of using the running average x̄(t) as the solution, rather than the original primal variables x(t), dates back to Shor [9] and has been further developed in [10] and [8]. The constant step size dual subgradient algorithm with simple running averages is also a special case of the drift-plus-penalty algorithm, which was originally developed to solve more general stochastic optimization [11] and was used for deterministic convex programs in [12]. (See Section I.C in [13] for more discussion.) This paper proposes a new running average scheme, called sliding running averages, and shows that the sliding running averages can have a better convergence time when the dual function of the convex program satisfies additional assumptions.

C. Related Work

A large body of literature focuses on the convergence time of dual subgradient methods to an ɛ-approximate solution. For general convex programs in the form of (1)-(3), where the objective function f(x) is convex but not necessarily strongly convex, the convergence time of the drift-plus-penalty algorithm is shown to be O(1/ɛ²) in [12], [13]. A similar O(1/ɛ²) convergence time of the dual subgradient algorithm with the averaged primal sequence is shown in [14]. A recent work [15] shows that the convergence time of the drift-plus-penalty algorithm is O(1/ɛ) if the dual function is locally polyhedral and O(1/ɛ^{3/2}) if the dual function is locally quadratic.
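The two averaging schemes can be written down in a few lines. The sliding rule below follows our reading of the (garbled) display above, so treat the exact window indices as an assumption:

```python
def simple_running_average(xs, t):
    """xbar(t) = (1/t) * sum_{tau=0}^{t-1} x(tau)."""
    return sum(xs[:t]) / t

def sliding_running_average(xs, t):
    """Sliding scheme: for even t, average the most recent t/2 iterates
    x(t/2), ..., x(t-1); for odd t, reuse the previous value."""
    if t <= 1:
        return xs[0]
    if t % 2 == 1:
        return sliding_running_average(xs, t - 1)
    return sum(xs[t // 2 : t]) / (t // 2)
```

The design intuition: the sliding average forgets the early transient iterates, which is what allows it to inherit the geometric decay of λ(t) established in Section IV, while the simple average always carries an O(1/t) contribution from the transient.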
For a special class of strongly convex programs in the form of (1)-(3), where f(x) is second-order differentiable and strongly convex and the g_k(x), k ∈ {1, 2, ..., m}, are second-order differentiable and have bounded Jacobians, the convergence time of the dual subgradient algorithm is shown to be O(1/ɛ) in [2]. Note that convex program (1)-(3) with second-order differentiable f(x) and g_k(x) in general can be solved via interior point methods with linear convergence time. However, to achieve fast convergence in practice, the barrier parameters must be scaled carefully and the computation complexity associated with each iteration is high. In contrast, the dual subgradient algorithm is a Lagrangian dual method and can yield distributed implementations with low computation complexity when the objective and constraint functions are separable.

This paper considers a class of strongly convex programs that is more general than those treated in [2]. Besides the strong convexity of f(x), we only require the constraint functions g_k(x) to be Lipschitz continuous. (Note that bounded Jacobians imply Lipschitz continuity. Work [2] also considers the effect of inaccurate solutions for the primal updates; the analysis in this paper can also deal with inaccurate updates, in which case there is an additional error term δ on the right-hand side of (6).) The functions f(x) and g_k(x) can even be non-differentiable, so this paper can deal with non-smooth optimization. For example, the l1 norm ||x||_1 is non-differentiable and often appears as part of the objective or constraint functions in machine learning, compressed sensing, and image processing applications. This paper shows that the convergence time of the dual subgradient method with simple running averages for general strongly convex programs is O(1/ɛ), and that the convergence time can be improved to O(log(1/ɛ)) by using sliding running averages when the dual function is locally quadratic.
A closely related recent work is [16], which considers strongly convex programs with strongly convex and second-order differentiable objective functions f(x) and conic constraints of the form Gx + h ∈ K, where K is a proper cone. The authors in [16] show that a hybrid algorithm using both dual subgradient and dual fast gradient methods can have convergence time O(1/ɛ^{2/3}), and that the dual subgradient method can have convergence time O(log(1/ɛ)) if the strongly convex program satisfies an error bound property. The results in the current paper were developed independently; they consider general nonlinear convex constraint functions, and show that the dual subgradient/gradient method with a different averaging scheme has an O(log(1/ɛ)) convergence time when the dual function is locally quadratic. Another parallel work [17] considers strongly convex programs with strongly convex and smooth objective functions f(x) and general constraint functions g(x) with bounded Jacobians. The authors in [17] show that the dual subgradient/gradient method with simple running averages has O(1/ɛ) convergence time. This paper and the independent parallel works [16], [17] obtain similar convergence times of the dual subgradient/gradient method with different averaging schemes for strongly convex programs under slightly different assumptions. However, the proof technique in this paper is fundamentally different from that used in [16] and [17]. Works [16], [17] and other previous works, e.g., [2], follow the classical optimization analysis approach based on the descent lemma, while this paper is based on the drift-plus-penalty analysis that was originally developed for stochastic optimization in dynamic queueing systems [18], [11]. Using the drift-plus-penalty technique, we further propose a new Lagrangian dual type algorithm with O(1/ɛ) convergence for general convex programs (possibly without strong convexity) in a follow-up work [19].

II. PRELIMINARIES AND BASIC ANALYSIS

A. Preliminaries and Assumptions

Definition 3 (Lipschitz Continuity). Let X ⊆ R^n be a convex set.
A function h : X → R^m is said to be Lipschitz continuous on X with modulus L if there exists L > 0 such that ||h(y) − h(x)|| ≤ L ||y − x|| for all x, y ∈ X. The norm || · || in the above definition can be a general norm; however, throughout this paper, || · || denotes the Euclidean norm.

Definition 4 (Strongly Convex Functions). Let X ⊆ R^n be a convex set. A function h is said to be strongly convex on X
with modulus α if there exists a constant α > 0 such that h(x) − (α/2)||x||² is convex on X.

Lemma 1 (See [20] or Corollary 1 in [19]). Let h(x) be strongly convex on a convex set X with modulus α. If x^opt is a global minimum of h over X, then h(x^opt) ≤ h(y) − (α/2)||y − x^opt||² for all y ∈ X.

Denote the stacked vector of the functions g_1(x), g_2(x), ..., g_m(x) as g(x) = [g_1(x), g_2(x), ..., g_m(x)]^T.

Assumption 1. In convex program (1)-(3), the function f(x) is strongly convex on X with modulus α, and the function g(x) is Lipschitz continuous on X with modulus β.

Assumption 2. There exists a Lagrange multiplier vector λ* = [λ*_1, λ*_2, ..., λ*_m]^T ≥ 0 such that q(λ*) = min_{x ∈ X} {f(x) : g_k(x) ≤ 0, k ∈ {1, 2, ..., m}}, where q(λ) = min_{x ∈ X} {f(x) + Σ_{k=1}^{m} λ_k g_k(x)} is the Lagrangian dual function of problem (1)-(3).

B. Properties of the Lyapunov Drift

Denote λ(t) = [λ_1(t), ..., λ_m(t)]^T. Define the Lyapunov function L(t) = (1/2)||λ(t)||² and the drift Δ(t) = L(t+1) − L(t).

Lemma 2. At each iteration t in Algorithm 1,

    (1/c) Δ(t) = λ^T(t+1) g(x(t)) − (1/(2c)) ||λ(t+1) − λ(t)||².    (4)

Proof: The update equations λ_k(t+1) = max{λ_k(t) + c g_k(x(t)), 0}, k ∈ {1, 2, ..., m}, can be rewritten as

    λ_k(t+1) = λ_k(t) + c g̃_k(x(t)),  k ∈ {1, 2, ..., m},    (5)

where

    g̃_k(x(t)) = g_k(x(t))      if λ_k(t) + c g_k(x(t)) ≥ 0,
    g̃_k(x(t)) = −λ_k(t)/c      else.

Fix k ∈ {1, 2, ..., m}. Squaring both sides of (5) and dividing by the factor 2 yields:

    (1/2)[λ_k(t+1)]² = (1/2)[λ_k(t)]² + (c²/2)[g̃_k(x(t))]² + c λ_k(t) g̃_k(x(t))
                     = (1/2)[λ_k(t)]² + (c²/2)[g̃_k(x(t))]² + c λ_k(t) g_k(x(t)) + c λ_k(t)[g̃_k(x(t)) − g_k(x(t))]
                 (a) = (1/2)[λ_k(t)]² + (c²/2)[g̃_k(x(t))]² + c λ_k(t) g_k(x(t)) − c² g̃_k(x(t))[g̃_k(x(t)) − g_k(x(t))]
                     = (1/2)[λ_k(t)]² − (c²/2)[g̃_k(x(t))]² + c [λ_k(t) + c g̃_k(x(t))] g_k(x(t))
                 (b) = (1/2)[λ_k(t)]² − (1/2)[λ_k(t+1) − λ_k(t)]² + c λ_k(t+1) g_k(x(t))

where (a) follows from λ_k(t)[g̃_k(x(t)) − g_k(x(t))] = −c g̃_k(x(t))[g̃_k(x(t)) − g_k(x(t))], which can be shown by separately considering the cases g̃_k(x(t)) = g_k(x(t)) and g̃_k(x(t)) ≠ g_k(x(t)); and (b) follows from the fact that λ_k(t+1) = λ_k(t) + c g̃_k(x(t)). Summing over k ∈ {1, 2, ..., m} and dividing both sides by the factor c yields the result.

III.
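Since (4) is an exact identity rather than an inequality, it can be sanity-checked numerically for random multipliers and constraint values, including the clipped case λ_k(t) + c g_k(x(t)) < 0. The script below is our own quick check of Lemma 2, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 0.1
for _ in range(1000):
    lam = rng.uniform(0.0, 2.0, size=4)        # lambda(t) >= 0
    gx = rng.uniform(-1.0, 1.0, size=4)        # g(x(t)), arbitrary signs
    lam_next = np.maximum(lam + c * gx, 0.0)   # dual update of Algorithm 1
    # Lemma 2, multiplied through by c:
    # L(t+1) - L(t) = c * lambda(t+1)^T g(x(t)) - (1/2)||lambda(t+1) - lambda(t)||^2
    drift = 0.5 * lam_next @ lam_next - 0.5 * lam @ lam
    rhs = c * (lam_next @ gx) - 0.5 * (lam_next - lam) @ (lam_next - lam)
    assert abs(drift - rhs) < 1e-9
```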
CONVERGENCE TIME ANALYSIS

This section analyzes the convergence time of x̄(t) for strongly convex program (1)-(3).

A. Objective Value Violations

Lemma 3. Let x* be the optimal solution to (1)-(3). Assume c ≤ α/β². At each iteration t in Algorithm 1, we have

    (1/c) Δ(t) + f(x(t)) ≤ f(x*),  ∀t ≥ 0.    (6)

Proof: Fix t ≥ 0. Since f(x) is strongly convex with modulus α and, for all k ∈ {1, ..., m}, the functions g_k(x) are convex and the scalars λ_k(t) are non-negative, the function f(x) + Σ_{k=1}^{m} λ_k(t) g_k(x) is also strongly convex with modulus α. Note that x(t) = argmin_{x ∈ X} [f(x) + Σ_{k=1}^{m} λ_k(t) g_k(x)]. By Lemma 1 with x^opt = x(t) and y = x*, we have

    f(x(t)) + Σ_{k=1}^{m} λ_k(t) g_k(x(t)) ≤ [f(x*) + Σ_{k=1}^{m} λ_k(t) g_k(x*)] − (α/2)||x(t) − x*||².

Adding this to equation (4) yields (1/c)Δ(t) + f(x(t)) ≤ f(x*) + B(t), where

    B(t) = −(1/(2c))||λ(t+1) − λ(t)||² − (α/2)||x(t) − x*||² + λ^T(t)[g(x*) − g(x(t))] + λ^T(t+1) g(x(t)).    (7)

It remains to show that B(t) ≤ 0. Since x* is the optimal solution to problem (1)-(3), we have g_k(x*) ≤ 0 for all k. Note that λ_k(t+1) ≥ 0 for all k. Thus, −λ^T(t+1) g(x*) ≥ 0. Adding this nonnegative quantity to the right-hand side of (7) gives:

    B(t) ≤ −(1/(2c))||λ(t+1) − λ(t)||² − (α/2)||x(t) − x*||² + λ^T(t)[g(x*) − g(x(t))] + λ^T(t+1) g(x(t)) − λ^T(t+1) g(x*)
         = −(1/(2c))||λ(t+1) − λ(t)||² − (α/2)||x(t) − x*||² + [λ^T(t) − λ^T(t+1)][g(x*) − g(x(t))]
     (a) ≤ −(1/(2c))||λ(t+1) − λ(t)||² − (α/2)||x(t) − x*||² + ||λ(t) − λ(t+1)|| ||g(x(t)) − g(x*)||
     (b) ≤ −(1/(2c))||λ(t+1) − λ(t)||² − (α/2)||x(t) − x*||² + β ||λ(t) − λ(t+1)|| ||x(t) − x*||
         = −(1/(2c)) ( ||λ(t+1) − λ(t)|| − cβ ||x(t) − x*|| )² − (1/2)(α − cβ²) ||x(t) − x*||²
     (c) ≤ 0

where (a) follows from the Cauchy-Schwarz inequality; (b) follows from Assumption 1; and (c) follows from c ≤ α/β².

Theorem 1 (Objective Value Violations). Let x* ∈ X be the optimal solution to problem (1)-(3). If c ≤ α/β² in Algorithm 1, then f(x̄(t)) ≤ f(x*) + ||λ(0)||²/(2ct) for all t ≥ 1.
4 Proof: Fix. Summing (6) over τ {0,,..., } yields c τ=0 (τ) + τ=0 f(x(τ)) f(x ). Dividing by faor and rearranging erms yields τ=0 f(x(τ)) f(x ) + L(0) L() = f(x ) + λ(0) λ() Finally, f(x()) B. Consrain Violaions Lemma 4. For any > 0,. τ=0 f(x(τ)) by Jensen s inequaliy. λ k ( ) λ k ( ) + c g k (x(τ)), k {,,..., m} τ= In paricular, λ k () λ k (0) + c τ=0 g k(x(τ)) for all and k {,,..., m}. Proof: Fix k {,,..., m}. Noe ha λ k ( + ) = max{λ k ( ) + cg k (x( )), 0} λ k ( ) + cg k (x( )). By induion, his lemma follows. Lemma 5. Le λ 0 be given in Assumpion. If c α β in Algorihm, hen λ() saisfies λ() λ(0) + λ + λ,. (8) Proof: Le x be he opimal soluion o problem ()- (3). Assumpion implies ha f(x ) = q(λ ) f(x(τ)) + m k= λ k g k(x(τ)), τ {0,,...}, where he inequaliy follows from he definiion of q(λ). Thus, we have f(x ) f(x(τ)) m k= λ k g k(x(τ)), τ {0,,...}. Summing over τ {0,,..., } yields = f(x ) f(x(τ)) c m k= λ k m k= τ=0 [ g k (x(τ)) τ=0 ] (a) τ=0 k= c m λ kg k (x(τ)) m λ k[λ k () λ k (0)] k= λ kλ k () (b) c λ λ() (9) where (a) follows from Lemma 4 and (b) follows from he Cauchy-Schwarz inequaliy. On he oher hand, summing (6) in Lemma 3 over τ {0,,..., } yields f(x ) f(x(τ)) τ=0 Combining (9) and (0) yields L() L(0) c = λ() λ(0) c (0) λ() λ(0) c c λ λ() ( λ() λ ) λ(0) + λ λ() λ(0) + λ + λ Theorem (Consrain Violaions). Le λ 0 be defined in Assumpion. If c α β in Algorihm, hen he consrain funions saisfy for all : λ(0) + λ g k (x()) + λ, k {,..., m} Proof: Fix and k {,,..., m}. Recall ha x() = τ=0 x(τ). Thus, g k(x()) (a) τ=0 g k(x(τ)) (b) λ k () λ k (0) λ k() λ() (c) λ(0) + λ + λ, where (a) follows from he convexiy of g k (x); (b) follows from Lemma 4; and (c) follows from Lemma 5. Theorems - show ha using he simple running average sequence x() ensures he objeive value and consrain error decay like O(/). A lower bound of f(x()) f(x ) O( ) easily follows from srong dualiy and Theorem. See full version [0] for more discussions. IV. 
EXTENSIONS

This section shows that the convergence time of the sliding running averages x̃(t) is O(log(1/ɛ)) when the dual function of problem (1)-(3) satisfies additional assumptions.

A. Smooth Dual Functions

Definition 5 (Smooth Functions). Let X ⊆ R^n and let h(x) be continuously differentiable on X. The function h(x) is said to be smooth on X with modulus L if ∇_x h(x) is Lipschitz continuous on X with modulus L.

Define q(λ) = min_{x ∈ X} {f(x) + λ^T g(x)} as the dual function of problem (1)-(3). Recall that f(x) is strongly convex with modulus α by Assumption 1. For fixed λ ∈ R^m_+, the function f(x) + λ^T g(x) is strongly convex in x ∈ X with modulus α. Define x(λ) = argmin_{x ∈ X} {f(x) + λ^T g(x)}. By Danskin's theorem (Proposition B.25 in [7]), q(λ) is differentiable with gradient ∇_λ q(λ) = g(x(λ)).

Lemma 6 (Smooth Dual Functions). The dual function q(λ) is smooth on R^m_+ with modulus γ = β²/α.

Proof: Fix λ, μ ∈ R^m_+. Let x(λ) = argmin_{x ∈ X} {f(x) + λ^T g(x)} and x(μ) = argmin_{x ∈ X} {f(x) + μ^T g(x)}. By Lemma 1, we have

    f(x(λ)) + λ^T g(x(λ)) ≤ f(x(μ)) + λ^T g(x(μ)) − (α/2)||x(λ) − x(μ)||²  and
    f(x(μ)) + μ^T g(x(μ)) ≤ f(x(λ)) + μ^T g(x(λ)) − (α/2)||x(λ) − x(μ)||².

Summing the above two inequalities and simplifying gives

    α ||x(λ) − x(μ)||² ≤ [μ − λ]^T [g(x(λ)) − g(x(μ))] (a)≤ ||μ − λ|| ||g(x(λ)) − g(x(μ))|| (b)≤ β ||μ − λ|| ||x(λ) − x(μ)||

where (a) follows from the Cauchy-Schwarz inequality and (b) follows because g(x) is Lipschitz continuous. This implies

    ||x(λ) − x(μ)|| ≤ (β/α) ||λ − μ||.    (11)
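Lemma 6 can be sanity-checked on a scalar example of our own (not from the paper): take f(x) = x², g(x) = 1 − x, X = R. Then α = 2, β = 1, the Lagrangian minimizer is x(λ) = λ/2, and ∇q(λ) = g(x(λ)) = 1 − λ/2, which is Lipschitz with modulus exactly β²/α = 1/2:

```python
# Toy check of Lemma 6: f(x) = x^2 (alpha = 2), g(x) = 1 - x (beta = 1).
# x(lambda) = argmin_x [x^2 + lambda*(1 - x)] = lambda/2, so
# grad q(lambda) = g(x(lambda)) = 1 - lambda/2.
def grad_q(lam):
    return 1.0 - lam / 2.0

gamma = 1.0 ** 2 / 2.0  # beta^2 / alpha
for lam, mu in [(0.0, 1.0), (0.3, 5.0), (2.0, 2.5)]:
    assert abs(grad_q(lam) - grad_q(mu)) <= gamma * abs(lam - mu) + 1e-12
```

Here the smoothness bound is tight: |∇q(λ) − ∇q(μ)| = (1/2)|λ − μ| exactly, so the modulus β²/α in Lemma 6 cannot be improved in general.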
Thus, we have

    ||∇q(λ) − ∇q(μ)|| (a)= ||g(x(λ)) − g(x(μ))|| (b)≤ β ||x(λ) − x(μ)|| (c)≤ (β²/α) ||λ − μ||

where (a) follows from ∇_λ q(λ) = g(x(λ)); (b) follows from the Lipschitz continuity of g(x); and (c) follows from (11). Thus, q(λ) is smooth on R^m_+ with modulus γ = β²/α.

Since ∇_λ q(λ(t)) = g(x(t)), the dynamics of λ(t) can be interpreted as the projected gradient method with step size c applied to max_{λ ∈ R^m_+} q(λ), where q(·) is a smooth function by Lemma 6. Thus, we have the next lemma.

Lemma 7. Assume problem (1)-(3) satisfies Assumptions 1-2. If c ≤ α/β², then q(λ*) − q(λ(t)) ≤ ||λ(0) − λ*||²/(2ct) for all t ≥ 1.

Proof: Recall that a projected gradient algorithm with step size c ≤ 1/γ maximizes a concave function with smoothness modulus γ with error decaying like O(1/t). Thus, this lemma follows. See [20] for the complete proof.

B. Problems with Locally Quadratic Dual Functions

In addition to Assumptions 1-2, this subsection further requires the next assumption.

Assumption 3 (Locally Quadratic Dual Functions). Let λ* be a Lagrange multiplier of problem (1)-(3) defined in Assumption 2. There exist D_q > 0 and L_q > 0, where the subscript q denotes locally quadratic, such that for all λ ∈ {λ ∈ R^m_+ : ||λ − λ*|| ≤ D_q}, the dual function q(λ) satisfies q(λ*) ≥ q(λ) + L_q ||λ − λ*||².

Lemma 8. Suppose problem (1)-(3) satisfies Assumptions 1-3. Let q(λ), λ*, D_q and L_q be given in Assumption 3.

1) If λ ∈ R^m_+ and q(λ*) − q(λ) ≤ L_q D_q², then ||λ − λ*|| ≤ D_q.
2) The λ* defined in Assumption 2 is unique.

Proof: See [20] for the proof.

Define

    T_q = ||λ(0) − λ*||² / (2 c L_q D_q²).    (12)

Lemma 9. Assume problem (1)-(3) satisfies Assumptions 1-3. If c ≤ α/β² in Algorithm 1, then ||λ(t) − λ*|| ≤ D_q for all t ≥ T_q, where T_q is defined in (12).

Proof: By Lemma 7 and Lemma 8, if ||λ(0) − λ*||²/(2ct) ≤ L_q D_q², then ||λ(t) − λ*|| ≤ D_q. Note that t ≥ T_q implies ||λ(0) − λ*||²/(2ct) ≤ L_q D_q².

Lemma 10. Assume problem (1)-(3) satisfies Assumptions 1-3. If c ≤ α/β² in Algorithm 1, then

1) ||λ(t) − λ*|| ≤ ||λ(0) − λ*||/√(2 c L_q t) for all t ≥ T_q, where T_q is defined in (12).
2) ||λ(t) − λ*|| ≤ (1/(1 + c L_q))^{(t − T_q)/2} ||λ(T_q) − λ*|| ≤ D_q (1/(1 + c L_q))^{(t − T_q)/2} for all t ≥ T_q, where T_q is defined in (12).

Proof: 1) By Lemma 7, q(λ*) − q(λ(t)) ≤ ||λ(0) − λ*||²/(2ct) for all t ≥ 1.
By Lemma 9 and Assumption 3, q(λ*) − q(λ(t)) ≥ L_q ||λ(t) − λ*||² for all t ≥ T_q. Thus, we have L_q ||λ(t) − λ*||² ≤ ||λ(0) − λ*||²/(2ct) for all t ≥ T_q, which implies that ||λ(t) − λ*|| ≤ ||λ(0) − λ*||/√(2 c L_q t) for all t ≥ T_q.

2) By part 1), we know ||λ(t) − λ*|| ≤ D_q for all t ≥ T_q. The second part essentially follows from Theorem 1 in [21], which shows that the projected gradient method for set-constrained smooth convex optimization converges geometrically if the objective function satisfies a quadratic growth condition. See [20] for the proof.

Corollary 1. Assume problem (1)-(3) satisfies Assumptions 1-3. If c ≤ α/β² in Algorithm 1, then for all even t ≥ 2T_q, where T_q is defined in (12),

    ||λ(t) − λ(t/2)|| ≤ 2 D_q (1/(1 + c L_q))^{(t/2 − T_q)/2}.

Proof:

    ||λ(t) − λ(t/2)|| ≤ ||λ(t) − λ*|| + ||λ(t/2) − λ*||
                  (a) ≤ D_q (1/(1 + c L_q))^{(t − T_q)/2} + D_q (1/(1 + c L_q))^{(t/2 − T_q)/2}
                  (b) ≤ 2 D_q (1/(1 + c L_q))^{(t/2 − T_q)/2},

where (a) follows from part 2) of Lemma 10; and (b) follows from 1/(1 + c L_q) < 1.

Theorem 3. Assume problem (1)-(3) satisfies Assumptions 1-3. Let x* be the optimal solution and let λ* be defined in Assumption 3. If c ≤ α/β² in Algorithm 1, then for all even t ≥ 2T_q, where T_q is defined in (12),

    f(x̃(t)) ≤ f(x*) + (η/t) (1/(1 + c L_q))^{(t/2 − T_q)/2},

where η = 4 D_q (||λ(0)|| + 2||λ*||)/c.

Proof: Fix an even t ≥ 2T_q. By Lemma 3, we have (1/c)Δ(τ) + f(x(τ)) ≤ f(x*) for all τ ∈ {0, 1, ...}. Summing over τ ∈ {t/2, t/2+1, ..., t−1} yields

    (1/c)[L(t) − L(t/2)] + Σ_{τ=t/2}^{t-1} f(x(τ)) ≤ (t/2) f(x*).

Dividing by the factor t/2 yields

    (2/t) Σ_{τ=t/2}^{t-1} f(x(τ)) ≤ f(x*) + (2/(ct)) [L(t/2) − L(t)] = f(x*) + (1/(ct)) [||λ(t/2)||² − ||λ(t)||²].    (13)
Thus, we have

    f(x̃(t)) (a)≤ (2/t) Σ_{τ=t/2}^{t-1} f(x(τ))
          (b)≤ f(x*) + (1/(ct)) [||λ(t/2)|| + ||λ(t)||] [||λ(t/2)|| − ||λ(t)||]
          (c)≤ f(x*) + (1/(ct)) [||λ(t/2)|| + ||λ(t)||] ||λ(t/2) − λ(t)||
          (d)≤ f(x*) + (2/(ct)) (||λ(0)|| + 2||λ*||) ||λ(t/2) − λ(t)||
          (e)≤ f(x*) + (4 D_q/(ct)) (||λ(0)|| + 2||λ*||) (1/(1 + c L_q))^{(t/2 − T_q)/2}
          (f)= f(x*) + (η/t) (1/(1 + c L_q))^{(t/2 − T_q)/2},

where (a) follows from x̃(t) = (2/t) Σ_{τ=t/2}^{t-1} x(τ) and the convexity of f(x); (b) follows from (13); (c) follows from the triangle inequality; (d) follows from Lemma 5; (e) follows from Corollary 1; and (f) follows from the definition of η.

Theorem 4. Assume problem (1)-(3) satisfies Assumptions 1-3. If c ≤ α/β² in Algorithm 1, then for all even t ≥ 2T_q, where T_q is defined in (12),

    g_k(x̃(t)) ≤ (4 D_q/(ct)) (1/(1 + c L_q))^{(t/2 − T_q)/2},  ∀k ∈ {1, 2, ..., m}.

Proof: Fix an even t ≥ 2T_q and k ∈ {1, 2, ..., m}. We have

    g_k(x̃(t)) (a)≤ (2/t) Σ_{τ=t/2}^{t-1} g_k(x(τ)) (b)≤ (2/(ct)) [λ_k(t) − λ_k(t/2)] ≤ (2/(ct)) ||λ(t) − λ(t/2)|| (c)≤ (4 D_q/(ct)) (1/(1 + c L_q))^{(t/2 − T_q)/2},

where (a) follows from the convexity of g_k(x); (b) follows from Lemma 4; and (c) follows from Corollary 1.

Under Assumptions 1-3, Theorems 3 and 4 show that if c ≤ α/β², then x̃(t) provides an ɛ-approximate solution with convergence time O(log(1/ɛ)).

C. Discussions

1) Practical Implementations: Assumption 3 in general is difficult to verify. However, we note that to ensure x̃(t) provides the better O(log(1/ɛ)) convergence time, we only require c ≤ α/β², which is independent of the parameters in Assumption 3. Namely, in practice, we can blindly apply Algorithm 1 to problem (1)-(3) with no need to verify Assumption 3. If problem (1)-(3) happens to satisfy Assumption 3, then x̃(t) enjoys the faster convergence time O(log(1/ɛ)). If not, then x̃(t) (or x̄(t)) at least has convergence time O(1/ɛ).

2) Local Assumption and Local Geometric Convergence: Since Assumption 3 only requires the quadratic property to be satisfied in a local radius D_q around λ*, the error of Algorithm 1 starts to decay geometrically, like O((1/(1 + c L_q))^{t/2}), only after λ(t) arrives at the D_q local radius after T_q iterations, where T_q is independent of the approximation requirement ɛ and hence is of order O(1). Thus, Algorithm 1 provides an ɛ-approximate solution with convergence time O(log(1/ɛ)). However, it is possible that T_q is relatively large if D_q is small. In fact, T_q > 0 because Assumption 3 only requires the dual function to have the quadratic property in a local radius.
Thus, the theory developed in this section can deal with a large class of problems. On the other hand, if the dual function has the quadratic property globally, i.e., for all λ ≥ 0, then T_q = 0 and the error of Algorithm 1 decays like O((1/(1 + c L_q))^{t/2}) for all t.

3) Locally Strongly Concave Dual Functions: The following assumption is stronger than Assumption 3 but can be easier to verify in certain cases.

Assumption 4 (Locally Strongly Concave Dual Functions). Let λ* be a Lagrange multiplier vector defined in Assumption 2. There exist D_c > 0 and L_c > 0, where the subscript c denotes locally strongly concave, such that the dual function q(λ) is strongly concave with modulus L_c over {λ ∈ R^m_+ : ||λ − λ*|| ≤ D_c}.

In fact, Assumption 4 implies Assumption 3.

Lemma 11. If problem (1)-(3) satisfies Assumption 4, then it also satisfies Assumption 3 with D_q = D_c and L_q = L_c/2.

Proof: See [20] for the proof.

Since Assumption 4 implies Assumption 3, by the results from the previous subsection, x̃(t) from Algorithm 1 provides an ɛ-approximate solution with convergence time O(log(1/ɛ)).

V. APPLICATIONS

A. Problems with Non-Degenerate Constraint Qualifications

Theorem 5. Consider strongly convex program (1)-(3) where f(x) and g_k(x), k ∈ {1, 2, ..., m}, are second-order continuously differentiable. Let x* be the unique solution.

1) Let K ⊆ {1, 2, ..., m} be the set of active constraints, i.e., g_k(x*) = 0 for k ∈ K, and denote the vector composed of g_k(x), k ∈ K, as g_K(x). If g(x) has a bounded Jacobian and rank(∇_x g_K(x*)^T) = |K|, then Assumptions 1-3 hold.
2) If g(x) has a bounded Jacobian and rank(∇_x g(x*)^T) = m, then Assumptions 1-4 hold.

Proof: See [20] for the proof.

Corollary 2. Consider min {f(x) : Ax ≤ b}, where f(x) is a second-order continuously differentiable and strongly convex function, and A is an m × n matrix.

1) Let x* be the optimal solution. Assume Ax* ≤ b has l rows that hold with equality, and let A' be the l × n submatrix of A corresponding to these active rows. If rank(A') = l, then Assumptions 1-3 hold.
2) If rank(A) = m, then Assumptions 1-4 hold.

B. Network Utility Maximization (NUM)

Consider a network with l links and n flow streams.
Let {b_1, b_2, ..., b_l} be the capacities of the links and {x_1, x_2, ..., x_n} be the rates of the flow streams. Let N(k) ⊆ {1, 2, ..., n}, 1 ≤ k ≤ l, be the set of flow streams that use link k. The problem is to maximize the utility function Σ_{i=1}^{n} w_i log(x_i), with constants w_i > 0 for 1 ≤ i ≤ n, which represents a measure of network fairness [22], subject to the capacity constraint of each link. This problem is known as
the network utility maximization (NUM) problem and can be formulated as follows:

    min  − Σ_{i=1}^{n} w_i log(x_i)    (14)
    s.t. Ax ≤ b                        (15)
         x ≥ 0                         (16)

where A = [a_1, ..., a_n] is a 0-1 matrix of size m × n such that a_ij = 1 if and only if flow x_j uses link i, and b > 0. Note that problem (14)-(16) satisfies Assumptions 1 and 2. By the results from Section III, x̄(t) has convergence time O(1/ɛ). The next theorem provides sufficient conditions under which x̃(t) has the better convergence time O(log(1/ɛ)).

Theorem 6. The NUM problem (14)-(16) satisfies:

1) Let b_max = max_{1 ≤ i ≤ m} b_i and choose x_max with entries x_i^max > b_max for all i ∈ {1, ..., n}. If we replace constraint (16) with 0 ≤ x ≤ x_max in problem (14)-(16), then we obtain an equivalent problem.
2) Let x* be the optimal solution. Assume Ax* ≤ b has m' rows that hold with equality, and let A' be the m' × n submatrix of A corresponding to these active rows. If rank(A') = m', then Assumptions 1-3 hold for the above equivalent problem.
3) If rank(A) = m, then Assumptions 1-4 hold for the above equivalent problem.

Proof: See [20] for the proof.

VI. NUMERICAL RESULTS

A. Network Utility Maximization Problems

Consider the simple NUM problem described in Figure 1. Let x_1, x_2 and x_3 be the data rates of streams 1, 2 and 3, and let the network utility objective be to minimize −log(x_1) − 2 log(x_2) − 3 log(x_3). It can be checked that all capacity constraints other than x_1 + x_2 + x_3 ≤ 10, x_1 + x_2 ≤ 8, and x_2 + x_3 ≤ 8 are redundant. By Theorem 6, the NUM problem can be formulated as follows:

    min  − log(x_1) − 2 log(x_2) − 3 log(x_3)
    s.t. Ax ≤ b,  0 ≤ x ≤ x_max

where A is the 0-1 matrix with rows [1 1 1], [1 1 0], [0 1 1], b = [10, 8, 8]^T, and x_max has entries exceeding b_max = 10. The optimal solution to this NUM problem is x*_1 = 2, x*_2 = 3.2, x*_3 = 4.8, and the optimal value is approximately −7.73. Since the objective function is separable, the dual subgradient/gradient method yields a distributed solution; this is why it is widely used to solve NUM problems [4]. The objective function is strongly convex with modulus α on X = {0 ≤ x ≤ x_max}, and g(·) is Lipschitz continuous with modulus β ≤ √6 on X.
Figure 2 verifies the convergence of x̄(t) with c = α/β² and λ(0) = 0. (Without loss of optimality, we define log(0) = −∞, so that log(·) is defined over R_+. Alternatively, we can replace the non-negative rate constraints with x_i ≥ x_i^min, i ∈ {1, 2, ..., n}, where the x_i^min are sufficiently small positive numbers.) Since λ(0) = 0, by Theorem 1 we have f(x̄(t)) ≤ f(x*) for all t > 0. To verify the convergence time of the constraint violations, Figure 3 plots the constraint values of x̄(t) together with 1/t, with both axes in log_10 scale. As observed in Figure 3, the curves for the two tight constraints (x_1 + x_2 + x_3 ≤ 10 and x_2 + x_3 ≤ 8) are parallel to the curve of 1/t for large t, while the loose constraint x_1 + x_2 ≤ 8 is satisfied early. Figure 3 shows that the error decays like O(1/t) and suggests that the convergence time is actually Θ(1/ɛ) for this NUM problem.

Fig. 1. A simple NUM problem with 3 flow streams (link capacities 10, 8, and 8).

Note that rank(A) = 3. By Theorem 6, this NUM problem satisfies Assumptions 1-4. Figure 4 verifies Theorem 6, showing that with c = α/β² the error of x̃(t) decays geometrically, so that its convergence time is O(log(1/ɛ)).

B. Large Scale Quadratic Programs

Consider the quadratic program min_{x ∈ R^N} {x^T Q x + d^T x : Ax ≤ b}, where Q, A ∈ R^{N×N} and d, b ∈ R^N. Here Q = U Σ U^H ∈ R^{N×N}, where U is the orthonormal basis of a random N × N zero mean and unit variance normal matrix and Σ is a diagonal matrix with entries drawn uniformly from [1, 3]; A is a random N × N zero mean and unit variance normal matrix; and d and b are random vectors with entries drawn uniformly from [0, 1]. On a PC with a 4-core 2.7GHz Intel i7 CPU and 16GB memory, we run both Algorithm 1 and quadprog from Matlab, which by default uses the interior point method, over randomly generated large scale quadratic programs with N = 400, 600, 800, 1000 and 1200. For each problem size N, the running time is averaged over 100 random quadratic programs and is plotted in Figure 5.
Fig. 2. The convergence of x̄(t) for the NUM problem: objective value f(x̄(t)) versus the optimal value (top) and constraint values g_1(x̄(t)), g_2(x̄(t)), g_3(x̄(t)) (bottom).
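The NUM experiment above is easy to reproduce in a few lines, because the primal update of Algorithm 1 has the closed form x_i = min{w_i/(AᵀΛ)_i, x_max}. The sketch below uses our own step size and iteration count (c = 0.05, T = 20000, x_max = 11), not necessarily the values used for the figures:

```python
import numpy as np

# NUM instance of Section VI-A: minimize -log(x1) - 2 log(x2) - 3 log(x3)
# subject to Ax <= b and 0 <= x <= xmax.
w = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0],   # x1 + x2 + x3 <= 10
              [1.0, 1.0, 0.0],   # x1 + x2      <= 8
              [0.0, 1.0, 1.0]])  # x2 + x3      <= 8
b = np.array([10.0, 8.0, 8.0])
xmax, c, T = 11.0, 0.05, 20000   # demo parameters (our assumption)

lam = np.zeros(3)
xbar = np.zeros(3)
for t in range(T):
    price = A.T @ lam
    # Closed-form primal update over [0, xmax]: x_i = min(w_i / price_i, xmax),
    # with x_i = xmax when the link price seen by flow i is zero.
    x = np.where(price > 1e-12, np.minimum(w / np.maximum(price, 1e-12), xmax), xmax)
    xbar += (x - xbar) / (t + 1)                   # simple running average
    lam = np.maximum(lam + c * (A @ x - b), 0.0)   # dual (link price) update
```

With these settings x̄(t) approaches the optimum x* = (2, 3.2, 4.8) and λ(t) converges to the multiplier (0.5, 0, 0.125); the multiplier of the loose constraint x_1 + x_2 ≤ 8 goes to zero, matching the behavior seen in Figure 3.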
Fig. 3. The convergence time of the constraint violations of x̄(t) versus O(1/t) for the NUM problem; the curves are parallel for large t.

Fig. 4. The convergence time of the constraint violations of x̃(t) versus O(0.998^t) for the NUM problem.

VII. CONCLUSIONS

This paper studies the dual gradient method for strongly convex programs and shows that the convergence time with simple running averages is O(1/ɛ). This paper also considers a variation of the primal averages, called the sliding running averages, and shows that if the dual function is locally quadratic then the convergence time is O(log(1/ɛ)).

VIII. ACKNOWLEDGEMENT

We thank an anonymous reviewer for bringing to our attention the independent parallel works [16], [17] on similar topics and the work [21] on linear convergence for unconstrained optimization. By using the results from [21], we improve the convergence time from O(1/ɛ^{2/3}) in our conference version [1] to O(log(1/ɛ)) when the dual function is locally quadratic.

REFERENCES

[1] H. Yu and M. J. Neely, "On the convergence time of the drift-plus-penalty algorithm for strongly convex programs," in Proceedings of IEEE Conference on Decision and Control (CDC), 2015.
[2] I. Necoara and V. Nedelcu, "Rate analysis of inexact dual first-order methods: application to dual decomposition," IEEE Transactions on Automatic Control, vol. 59, no. 5, pp. 1232-1243, May 2014.
[3] H. Terelius, U. Topcu, and R. M. Murray, "Decentralized multi-agent optimization via dual decomposition," in IFAC World Congress, 2011.
[4] S. H. Low and D. E. Lapsley, "Optimization flow control I: basic algorithm and convergence," IEEE/ACM Transactions on Networking, vol. 7, no. 6, pp. 861-874, 1999.
[5] S. H.
Low, "A duality model of TCP and queue management algorithms," IEEE/ACM Transactions on Networking, vol. 11, no. 4, pp. 525-536, 2003.
[6] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms. Wiley-Interscience, 2006.
[7] D. P. Bertsekas, Nonlinear Programming, 2nd ed. Athena Scientific, 1999.
[8] T. Larsson, M. Patriksson, and A.-B. Strömberg, "Ergodic, primal convergence in dual subgradient schemes for convex programming," Mathematical Programming, vol. 86, pp. 283-312, 1999.
[9] N. Z. Shor, Minimization Methods for Non-Differentiable Functions. Springer-Verlag, 1985.
[10] H. D. Sherali and G. Choi, "Recovery of primal solutions when using subgradient optimization methods to solve Lagrangian duals of linear programs," Operations Research Letters, vol. 19, no. 3, pp. 105-113, 1996.
[11] M. J. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems. Morgan & Claypool Publishers, 2010.
[12] M. J. Neely, "Distributed and secure computation of convex programs over a network of connected processors," in DCDIS International Conference on Engineering Applications and Computational Algorithms, 2005.
[13] M. J. Neely, "A simple convergence time analysis of drift-plus-penalty for stochastic optimization and convex programs," arXiv:1412.0791, 2014.
[14] A. Nedić and A. Ozdaglar, "Approximate primal solutions and rate analysis for dual subgradient methods," SIAM Journal on Optimization, vol. 19, no. 4, pp. 1757-1780, 2009.
[15] S. Supittayapornpong, L. Huang, and M. J. Neely, "Time-average optimization with nonconvex decision set and its convergence," in Proceedings of IEEE Conference on Decision and Control (CDC), 2014.
[16] I. Necoara and A. Patrascu, "Iteration complexity analysis of dual first-order methods for conic convex programming," Optimization Methods and Software, vol. 31, no. 3, 2016.
[17] I. Necoara, A. Patrascu, and A. Nedić, "Complexity certifications of first-order inexact Lagrangian methods for general convex programming: Application to real-time MPC," in Developments in Model-Based Optimization and Control. Springer, 2015.
[18] M. J.
Neely, Dynamic power allocaion and rouing for saellie and wireless neworks wih ime varying channels, Ph.D. disseraion, Massachuses Insiue of Technology, 003. [9] H. Yu and M. J. Neely, A simple parallel algorihm wih an O(/) convergence rae for general convex programs, SIAM Journal on Opimizaion, vol. 7, no., pp , 07. [0], On he convergence ime of dual subgradien mehods for srongly convex programs, arxiv: , 05. [] I. Necoara, Y. Neserov, and F. Glineur, Linear convergence of firs order mehods for non-srongly convex opimizaion, arxiv: , 05. [] F. P. Kelly, Charging and rae conrol for elasic raffic, European Transaions on Telecommunicaions, vol. 8, no., pp , 997. Fig. 5. The average running ime for large scale quadraic programs.
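As a concrete illustration of the dual subgradient method with simple running averages summarized in the conclusions, the following Python sketch solves a toy strongly convex program. The problem instance, step size, and horizon below are illustrative assumptions, not the paper's experiment: min x^2 subject to g(x) = 1 − x ≤ 0 over X = [−10, 10], whose unique optimum is x* = 1 with dual multiplier λ* = 2.

```python
# Illustrative sketch (not the paper's experiment): dual subgradient method
# with simple running averages on a toy strongly convex program
#   min x^2   s.t.  g(x) = 1 - x <= 0,   x in X = [-10, 10],
# whose optimum is x* = 1 with multiplier lambda* = 2.

def dual_subgradient(T=2000, step=0.5):
    lam = 0.0  # dual variable, started at 0
    avg = 0.0  # simple running average of the primal iterates
    for t in range(1, T + 1):
        # Primal step: x(t) minimizes x^2 + lam * (1 - x) over X,
        # i.e. x = lam / 2, projected onto X = [-10, 10].
        x = min(max(lam / 2.0, -10.0), 10.0)
        # Dual step: subgradient ascent on the dual, projected onto lam >= 0.
        lam = max(lam + step * (1.0 - x), 0.0)
        # Running average started from iteration 1.
        avg += (x - avg) / t
    return avg

xbar = dual_subgradient()
print(xbar)  # close to the optimum x* = 1
```

With these choices the averaged primal x̄(T) approaches x* with error on the order of 1/T, consistent with the O(1/ɛ) convergence-time bound for strongly convex programs; x̄(T) may slightly violate the constraint, which is exactly the ɛ-approximate notion used in the paper.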