Inexact Alternating Minimization Algorithm for Distributed Optimization with an Application to Distributed MPC


Ye Pu, Colin N. Jones and Melanie N. Zeilinger

Abstract: In this paper, we propose the inexact alternating minimization algorithm (inexact AMA), which allows inexact iterations in the algorithm, and its accelerated variant, called the inexact fast alternating minimization algorithm (inexact FAMA). We show that inexact AMA and inexact FAMA are equivalent to the inexact proximal-gradient method and its accelerated variant applied to the dual problem. Based on this equivalence, we derive complexity upper-bounds on the number of iterations for the inexact algorithms. We apply inexact AMA and inexact FAMA to distributed optimization problems, with an emphasis on distributed MPC applications, and show the convergence properties for this special case. By employing the complexity upper-bounds on the number of iterations, we provide sufficient conditions on the inexact iterations for the convergence of the algorithms. We further study the special case of quadratic local objectives in the distributed optimization problems, which is a standard form in distributed MPC. For this special case, we allow local computational errors at each iteration. By exploiting a warm-starting strategy and the sufficient conditions on the errors for convergence, we propose an approach to certify the number of iterations for solving local problems, which guarantees that the local computational errors satisfy the sufficient conditions and the inexact distributed optimization algorithm converges to the optimal solution.

I. INTRODUCTION

First-order optimization methods, see e.g. [5], [3] and [2], play a central role in large-scale convex optimization, since they offer simple iteration schemes that only require information of the function value and the gradient, and have shown good performance for solving large problems with moderate accuracy requirements in many fields, e.g. optimal control [20], signal processing [7] and machine learning [6]. In this paper, we study a sub-group of first-order methods, called splitting methods, and apply them to distributed optimization problems. Splitting methods, which are also known as alternating direction methods, are a powerful tool for general mathematical programming and optimization. A variety of different splitting methods exist, requiring different assumptions on the problem setup, while exhibiting different properties, see e.g. [2] and [7] for an overview. The main concept is to split a complex convex minimization problem into simple and small sub-problems and to solve them in an alternating manner. For a problem with multiple objectives, the main strategy is not to compute the descent direction of the sum of the objectives, but to take a combination of the descent directions of each objective. The property of minimizing the objectives in an alternating way provides an efficient technique for solving distributed optimization problems, which arise in many engineering fields [6]. By considering the local cost functions, as well as local constraints, as the multiple objectives of a distributed optimization problem, splitting methods allow us to split a global constrained optimization problem into sub-problems according to the structure of the network, and to solve them in a distributed manner.
The advantages of using distributed optimization algorithms include the following three points: first, in contrast to centralized methods, they do not require global, but only local communication, i.e., neighbour-to-neighbour communication; secondly, they parallelize the computational tasks and split the global problem into small sub-problems, which reduces the required computational power for each sub-system; thirdly, distributed optimization algorithms preserve the privacy of each sub-system, in the sense that each sub-system computes an optimal solution without sharing its local cost function and local constraint with all the entities in the network. In this paper, we consider a distributed Model Predictive Control problem as the application for distributed optimization, to demonstrate the proposed algorithms as well as the theoretical findings. Model Predictive Control (MPC) is a control technique that optimizes the control input over a finite time-horizon in the future and allows for constraints on the states and control inputs to be integrated into the controller design. However, for networked systems, implementing an MPC controller becomes challenging, since solving an MPC problem in a centralized way requires full communication to collect information from each sub-system, and the computational power to solve the global problem in one central entity.

Y. Pu and C.N. Jones are with the Automatic Control Laboratory, École Polytechnique Fédérale de Lausanne, EPFL-STI-IGM-LA, Station 9, CH-1015 Lausanne, Switzerland, e-mail: {y.pu,colin.jones}@epfl.ch. M.N. Zeilinger is with the Empirical Inference Department, Max Planck Institute for Intelligent Systems, Tübingen, Germany, e-mail: melanie.zeilinger@tuebingen.mpg.de. This work has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7)/ERC Grant Agreement. The research of M. N. Zeilinger has received funding from the EU FP7 under grant agreement no. PIOF-GA COGENT.

Distributed model predictive control [2] is a promising tool to overcome the limiting computational complexity and communication requirements associated with centralized control of large-scale networked systems. The research on distributed MPC has mainly focused on the impact of distributed optimization on system properties such as stability and feasibility, and on the development of efficient distributed optimization algorithms. However, a key challenge in practice is that distributed optimization algorithms, see e.g. [5], [6] and [11], may suffer from inexact local solutions and unreliable communications. The resulting inexact updates in the distributed optimization algorithms affect the convergence properties, and can even cause divergence of the algorithm. In this work, we study inexact splitting methods and aim at answering the questions of how these errors affect the algorithms and under which conditions convergence can still be guaranteed.

Seminal work on inexact optimization algorithms includes [14], [9], [13] and [22]. In [14], the authors studied the convergence rates of inexact dual first-order methods. In [9], the authors propose an inexact decomposition algorithm for solving distributed optimization problems by employing smoothing techniques and an excessive gap condition. In [13], the authors proposed an inexact optimization algorithm with an accelerating strategy. The algorithm permits inexact inner-loop solutions, and sufficient conditions on the inexact inner-loop solutions for convergence are shown under different assumptions on the optimization problem. In [22], an inexact proximal-gradient method, as well as its accelerated version, are introduced. The proximal gradient method, also known as the iterative shrinkage-thresholding algorithm (ISTA) [3], has two main steps: the first one is to compute the gradient of the smooth objective, and the second one is to solve the proximal minimization. The conceptual idea of the inexact proximal-gradient method is to allow errors in these two steps, i.e. an error in the calculation of the gradient and an error in the proximal minimization. The results in [22] show convergence properties of the inexact proximal-gradient method and provide conditions on the errors under which convergence of the algorithm can be guaranteed.

Building on the results in [22], we propose two new inexact splitting algorithms, the inexact Alternating Minimization Algorithm (inexact AMA) and its accelerated variant, the inexact Fast Alternating Minimization Algorithm (inexact FAMA). Inexact FAMA has been studied in [19], and is expanded on in this paper. The contributions of this work are the following:

- We propose the inexact AMA and inexact FAMA algorithms, which are inexact variants of the splitting methods AMA and FAMA in [23] and [12].
- We show that applying inexact AMA and inexact FAMA to the primal problem is equivalent to applying the inexact proximal-gradient method (inexact PGM) and the inexact accelerated proximal-gradient method (inexact APGM) in [22] to the dual problem. Based on this fact, we extend the results in [22] and show the convergence properties of inexact AMA and inexact FAMA. We derive complexity upper-bounds on the number of iterations to achieve a certain accuracy for the algorithms.
- By exploiting these complexity upper-bounds, we present sufficient conditions on the errors for convergence of the algorithms.
- We study the convergence of the algorithms under bounded errors that do not satisfy the sufficient conditions for convergence, and show complexity upper-bounds on the number of iterations for this special case.
- We apply inexact AMA and inexact FAMA for solving distributed optimization problems with local computational errors. We present the complexity upper-bounds of the algorithms for this special case, and show sufficient conditions on the local computational errors for convergence.
- We study the special case of quadratic local objective functions, relating to a standard form of distributed MPC problems. We show that if the local quadratic functions are positive definite, then the algorithms converge to the optimal solution with a linear rate. We propose to use the proximal gradient method to solve the local problems. By exploiting the sufficient condition on the local computational errors for convergence, together with a warm-starting strategy, we provide an approach to certify the number of iterations for the proximal gradient method to solve the local problems to the accuracy required for convergence of the distributed algorithm. The proposed on-line certification method only requires on-line local information.
- We demonstrate the performance and the theoretical results for the inexact algorithms by solving a randomly generated example of a distributed MPC problem with 40 sub-systems.

II. PRELIMINARIES

A. Notation

Let $v \in \mathbb{R}^{n_v}$ be a vector. $\|v\|$ denotes the $l_2$ norm of $v$. Let $\mathbb{C}$ be a subset of $\mathbb{R}^{n_v}$. The projection of any point $v \in \mathbb{R}^{n_v}$ onto the set $\mathbb{C}$ is denoted by $\mathrm{Proj}_{\mathbb{C}}(v) := \mathrm{argmin}_{w \in \mathbb{C}} \|w - v\|$. Let $f : \Theta \to \Omega$ be a function. The conjugate function of $f$ is defined as $f^\star(v) = \sup_{w \in \Theta}(v^T w - f(w))$. For a conjugate function, it holds that $q \in \partial f(p) \Leftrightarrow p \in \partial f^\star(q)$, where $\partial(\cdot)$ denotes the set of sub-gradients of a function at a given point. Let $f$ be a strongly convex function. $\sigma_f$ denotes the convexity modulus of $f$, i.e. $\langle p - q, v - w \rangle \geq \sigma_f \|v - w\|^2$, where $p \in \partial f(v)$ and $q \in \partial f(w)$, for all $v, w \in \Theta$. $L(f)$ denotes a Lipschitz constant of the function $f$, i.e. $\|f(v) - f(w)\| \leq L(f)\|v - w\|$ for all $v, w \in \Theta$. Let $C$ be a matrix. $\rho(C)$ denotes the $l_2$ norm of the matrix $C^T C$. The proximity operator is defined as

$$\mathrm{prox}_f(v) = \mathrm{argmin}_w \left\{ f(w) + \tfrac{1}{2}\|w - v\|^2 \right\}. \quad (1)$$

We note the following equivalence:

$$w^\star = \mathrm{prox}_f(v) \;\Leftrightarrow\; v - w^\star \in \partial f(w^\star). \quad (2)$$
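To make the proximity operator (1) concrete, here is a minimal Python sketch (illustrative only, not part of the paper) that evaluates $\mathrm{prox}_f$ in closed form for two standard cases used later: the $l_1$-norm, whose prox is soft-thresholding, and the indicator function of a box, whose prox reduces to the projection $\mathrm{Proj}_{\mathbb{C}}$; the test vector is hypothetical.

```python
import numpy as np

def prox_l1(v, mu):
    # prox_{mu*||.||_1}(v): soft-thresholding, the closed-form minimizer of
    # mu*||w||_1 + 0.5*||w - v||^2
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def prox_box(v, lo, hi):
    # prox of the indicator function of the box [lo, hi]^n is the projection
    return np.clip(v, lo, hi)

v = np.array([1.5, -0.2, 0.7])
print(prox_l1(v, 0.5))          # [ 1.  -0.   0.2]
print(prox_box(v, -0.4, 0.3))   # [ 0.3 -0.2  0.3]
```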

We refer to [4] and [2] for details on the definitions and properties above. In this paper, $\tilde{\cdot}$ is used to denote an inexact solution of an optimization problem. The proximity operator with an extra subscript $\epsilon$, i.e. $\tilde{w} = \mathrm{prox}_{f,\epsilon}(v)$, means that a maximum computation error $\epsilon$ is allowed in the proximal objective function:

$$f(\tilde{w}) + \tfrac{1}{2}\|\tilde{w} - v\|^2 \leq \epsilon + \min_w \left\{ f(w) + \tfrac{1}{2}\|w - v\|^2 \right\}. \quad (3)$$

B. Inexact Proximal-Gradient Method

In this section, we introduce the inexact proximal-gradient method (inexact PGM) proposed in [22]. It addresses optimization problems of the form given in Problem 2.1, and requires Assumption 2.2 for convergence and Assumption 2.3 for linear convergence. The algorithm is presented in Algorithm 1.

Problem 2.1:
$$\min_{w \in \mathbb{R}^{n_w}} \; \Phi(w) = \phi(w) + \psi(w).$$

Assumption 2.2: $\phi$ is a convex function with Lipschitz continuous gradient with Lipschitz constant $L(\nabla\phi)$, and $\psi$ is a lower semi-continuous convex function, not necessarily smooth.

Assumption 2.3: $\phi$ is a strongly convex function with convexity modulus $\sigma_\phi$ and Lipschitz continuous gradient with Lipschitz constant $L(\nabla\phi)$, and $\psi$ is a lower semi-continuous convex function, not necessarily smooth.

Algorithm 1 Inexact Proximal-Gradient Method
Require: $w^0 \in \mathbb{R}^{n_w}$ and $\tau < \frac{1}{L(\nabla\phi)}$
for $k = 1, 2, \dots$ do
  1: $w^k = \mathrm{prox}_{\tau\psi,\epsilon^k}\left( w^{k-1} - \tau(\nabla\phi(w^{k-1}) + e^k) \right)$
end for

Inexact PGM in Algorithm 1 allows two kinds of errors: $\{e^k\}$ represents the error in the gradient calculations of $\phi$, and $\{\epsilon^k\}$ represents the error in the computation of the proximal minimization in (3) at every iteration. The following propositions state the convergence properties of inexact PGM under different assumptions.

Proposition 2.4 (Proposition 1 in [22]): Let $\{w^k\}$ be generated by inexact PGM defined in Algorithm 1. If Assumption 2.2 holds, then for any $k \geq 1$ we have:

$$\Phi\left(\frac{1}{k}\sum_{p=1}^{k} w^p\right) - \Phi(w^\star) \leq \frac{L(\nabla\phi)}{2k}\left( \|w^0 - w^\star\| + 2\Gamma_k + \sqrt{2\Lambda_k} \right)^2,$$

where $\Phi(\cdot)$ is defined in Problem 2.1,

$$\Gamma_k = \sum_{p=1}^{k}\left( \frac{\|e^p\|}{L(\nabla\phi)} + \sqrt{\frac{2\epsilon^p}{L(\nabla\phi)}} \right), \qquad \Lambda_k = \sum_{p=1}^{k} \frac{\epsilon^p}{L(\nabla\phi)},$$

and $w^0$ and $w^\star$ denote the initial sequence of Algorithm 1 and the optimal solution of Problem 2.1, respectively.

As discussed in [22], the complexity upper-bound in Proposition 2.4 allows one to derive sufficient conditions on the error sequences $\{e^k\}$ and $\{\epsilon^k\}$ for the convergence of the algorithm to the optimal solution $w^\star$:
- The series $\{\|e^k\|\}$ and $\{\sqrt{\epsilon^k}\}$ are finitely summable, i.e., $\sum_{k=1}^{\infty}\|e^k\| < \infty$ and $\sum_{k=1}^{\infty}\sqrt{\epsilon^k} < \infty$.
- The sequences $\{\|e^k\|\}$ and $\{\sqrt{\epsilon^k}\}$ decrease at the rate $O(\frac{1}{k^{1+\kappa}})$ for any $\kappa > 0$.

Proposition 2.5 (Proposition 3 in [22]): Let $\{w^k\}$ be generated by inexact PGM defined in Algorithm 1. If Assumption 2.3 holds, then for any $k \geq 1$ we have:

$$\|w^k - w^\star\| \leq (1-\gamma)^k \left( \|w^0 - w^\star\| + \Gamma_k \right), \quad (4)$$

where $\gamma = \frac{\sigma_\phi}{L(\nabla\phi)}$,

$$\Gamma_k = \sum_{p=1}^{k} (1-\gamma)^{-p}\left( \frac{\|e^p\|}{L(\nabla\phi)} + \sqrt{\frac{2\epsilon^p}{L(\nabla\phi)}} \right),$$

and $w^0$ and $w^\star$ denote the initial sequence of Algorithm 1 and the optimal solution of Problem 2.1, respectively. From the discussion in [22], we can conclude that if the series $\{\|e^k\|\}$ and $\{\sqrt{\epsilon^k}\}$ decrease at a linear rate, then $w^k$ converges to the optimal solution.
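As an illustration of Algorithm 1 (a sketch of our own, not the paper's code), the following Python snippet runs inexact PGM on a toy $l_1$-regularized least-squares problem with randomly generated, hypothetical data. A synthetic gradient error with $\|e^k\| = O(\frac{1}{k^2})$ is injected, which is finitely summable and hence satisfies the sufficient conditions above; the proximal step is computed exactly, i.e. $\epsilon^k = 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
mu = 0.1
L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of grad of 0.5||Aw-b||^2
tau = 0.99 / L                          # step size tau < 1/L(grad phi)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

w = np.zeros(10)
for k in range(1, 201):
    grad = A.T @ (A @ w - b)
    # synthetic gradient error e^k with ||e^k|| = 1/k^2 (summable, kappa = 1)
    e = rng.standard_normal(10)
    e *= (1.0 / k**2) / np.linalg.norm(e)
    # inexact PGM step: prox of tau*mu*||.||_1 is soft-thresholding
    w = soft_threshold(w - tau * (grad + e), tau * mu)

print(0.5 * np.linalg.norm(A @ w - b)**2 + mu * np.abs(w).sum())
```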

C. Inexact Accelerated Proximal-Gradient Method

In this section, we introduce an accelerated variant of inexact PGM, named the inexact accelerated proximal-gradient method (inexact APGM), proposed in [22]. It addresses the same problem class in Problem 2.1 and similarly requires Assumption 2.2 for convergence.

Algorithm 2 Inexact Accelerated Proximal-Gradient Method
Require: Initialize $v^1 = w^0 \in \mathbb{R}^{n_w}$ and $\tau < \frac{1}{L(\nabla\phi)}$
for $k = 1, 2, \dots$ do
  1: $w^k = \mathrm{prox}_{\tau\psi,\epsilon^k}\left( v^k - \tau(\nabla\phi(v^k) + e^k) \right)$
  2: $v^{k+1} = w^k + \frac{k-1}{k+2}\left( w^k - w^{k-1} \right)$
end for

Differing from inexact PGM, inexact APGM involves one extra linear update in Algorithm 2. If Assumption 2.2 holds, it improves the convergence rate of the complexity upper-bound from $O(\frac{1}{k})$ to $O(\frac{1}{k^2})$. The following proposition states the convergence property of inexact APGM.

Proposition 2.6 (Proposition 2 in [22]): Let $\{w^k\}$ be generated by inexact APGM defined in Algorithm 2. If Assumption 2.2 holds, then for any $k \geq 1$ we have:

$$\Phi(w^k) - \Phi(w^\star) \leq \frac{2L(\nabla\phi)}{(k+1)^2}\left( \|w^0 - w^\star\| + 2\Gamma_k + \sqrt{2\Lambda_k} \right)^2,$$

where $\Phi(\cdot)$ is defined in Problem 2.1,

$$\Gamma_k = \sum_{p=1}^{k} p\left( \frac{\|e^p\|}{L(\nabla\phi)} + \sqrt{\frac{2\epsilon^p}{L(\nabla\phi)}} \right), \qquad \Lambda_k = \sum_{p=1}^{k} \frac{p^2 \epsilon^p}{L(\nabla\phi)},$$

and $w^0$ and $w^\star$ denote the starting sequence of Algorithm 2 and the optimal solution of Problem 2.1, respectively.

The complexity upper-bound in Proposition 2.6 provides similar sufficient conditions on the error sequences $\{e^k\}$ and $\{\epsilon^k\}$ for the convergence of Algorithm 2:
- The series $\{\|e^k\|\}$ and $\{\sqrt{\epsilon^k}\}$ are finitely summable.
- The sequences $\{\|e^k\|\}$ and $\{\sqrt{\epsilon^k}\}$ decrease at the rate $O(\frac{1}{k^{2+\kappa}})$ for any $\kappa > 0$.

III. INEXACT ALTERNATING MINIMIZATION ALGORITHM AND ITS ACCELERATED VARIANT

The inexact proximal gradient method, as well as its accelerated version, is limited to the case where both objectives are functions of the same variable. However, many optimization problems from engineering fields, e.g. optimal control and machine learning [6], are not of this problem type. In order to generalize the problem formulation, we employ the alternating minimization algorithm (AMA) and its accelerated variant in [23] and [12], which cover optimization problems of the form of Problem 3.1. In this section, we extend AMA and its accelerated variant to the inexact case and present the theoretical convergence properties.

Problem 3.1:
$$\min \; f(x) + g(z) \quad \text{s.t.} \quad Ax + Bz = c,$$
with variables $x \in \mathbb{R}^{n_x}$ and $z \in \mathbb{R}^{n_z}$, where $A \in \mathbb{R}^{n_c \times n_x}$, $B \in \mathbb{R}^{n_c \times n_z}$ and $c \in \mathbb{R}^{n_c}$. $f : \mathbb{R}^{n_x} \to \mathbb{R}$ and $g : \mathbb{R}^{n_z} \to \mathbb{R}$ are convex functions. The Lagrangian of Problem 3.1 is:

$$L(x, z, \lambda) = f(x) + g(z) - \lambda^T(Ax + Bz - c), \quad (5)$$

and the dual function is:

$$D(\lambda) = \inf_{x,z} L(x, z, \lambda) \quad (6a)$$
$$= -\sup_x \left\{ \lambda^T Ax - f(x) \right\} - \sup_z \left\{ \lambda^T Bz - g(z) \right\} + \lambda^T c = -f^\star(A^T\lambda) - g^\star(B^T\lambda) + \lambda^T c, \quad (6b)$$

where $f^\star$ and $g^\star$ are the conjugate functions of $f$ and $g$. The dual problem of Problem 3.1 is:

Problem 3.2:
$$\min_\lambda \; -D(\lambda) = \underbrace{f^\star(A^T\lambda)}_{\phi(\lambda)} + \underbrace{g^\star(B^T\lambda) - c^T\lambda}_{\psi(\lambda)}.$$
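The dual structure above is what links AMA to the proximal-gradient method: minimizing $f(x) - \langle \lambda, Ax \rangle$ over $x$ evaluates the gradient of $\phi$, since $\nabla\phi(\lambda) = A\nabla f^\star(A^T\lambda) = Ax^\star(\lambda)$ with $x^\star(\lambda) = \mathrm{argmin}_x\{f(x) - \lambda^T Ax\}$. The following Python sketch (randomly generated, hypothetical data; for intuition only) numerically checks this identity for a quadratic $f(x) = x^T H x + h^T x$, whose conjugate gives $\phi(\lambda) = \frac{1}{4}(A^T\lambda - h)^T H^{-1}(A^T\lambda - h)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4
Q = rng.standard_normal((n, n))
H = Q @ Q.T + n * np.eye(n)                 # H > 0, so f is strongly convex
h = rng.standard_normal(n)
A = rng.standard_normal((m, n))
lam = rng.standard_normal(m)

# x*(lam) = argmin_x x'Hx + h'x - lam'Ax  =>  2Hx = A'lam - h
x_star = np.linalg.solve(2 * H, A.T @ lam - h)

# gradient of phi(lam) = 0.25 (A'lam - h)' H^{-1} (A'lam - h)
grad_phi = 0.5 * A @ np.linalg.solve(H, A.T @ lam - h)

print(np.allclose(A @ x_star, grad_phi))    # True: the x-step is a dual gradient step
```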

A. Inexact alternating minimization algorithm (inexact AMA)

We propose the inexact alternating minimization algorithm (inexact AMA), presented in Algorithm 3, for solving Problem 3.1. The algorithm allows errors in Step 1 and Step 2, i.e. both minimization problems are solved inexactly with errors $\delta^k$ and $\theta^k$, respectively.

Algorithm 3 Inexact alternating minimization algorithm (Inexact AMA)
Require: Initialize $\lambda^0 \in \mathbb{R}^{n_c}$, and $\tau < \sigma_f/\rho(A)$
for $k = 1, 2, \dots$ do
  1: $\tilde{x}^k = \mathrm{argmin}_x \left\{ f(x) + \langle \lambda^{k-1}, -Ax \rangle \right\} + \delta^k$
  2: $\tilde{z}^k = \mathrm{argmin}_z \left\{ g(z) + \langle \lambda^{k-1}, -Bz \rangle + \frac{\tau}{2}\|c - A\tilde{x}^k - Bz\|^2 \right\} + \theta^k$
  3: $\lambda^k = \lambda^{k-1} + \tau(c - A\tilde{x}^k - B\tilde{z}^k)$
end for

We study the theoretical properties of inexact AMA under Assumption 3.3. If Assumption 3.3 holds, we show that inexact AMA in Algorithm 3 is equivalent to applying inexact PGM to the dual problem in Problem 3.2, with the following correspondence: the gradient computation error in Algorithm 1 is equal to $e^k = A\delta^k$, and the error of solving the proximal minimization is equal to $\epsilon^k = \tau^2 L(\psi)\|B\theta^k\| + \frac{\tau^2}{2}\|B\theta^k\|^2$. With this equivalence, the complexity bound in Proposition 2.4 can be extended to the inexact AMA algorithm in Theorem 3.5.

Assumption 3.3: We assume that $f$ is a strongly convex function with convexity modulus $\sigma_f$, and $g$ is a convex function, not necessarily smooth.

Lemma 3.4: If Assumption 3.3 is satisfied, and inexact AMA and inexact PGM are initialized with the same dual and primal starting sequence, then applying inexact AMA in Algorithm 3 to Problem 3.1 is equivalent to applying inexact PGM in Algorithm 1 to the dual problem defined in Problem 3.2 with the errors $e^k = A\delta^k$ and $\epsilon^k = \tau^2 L(\psi)\|B\theta^k\| + \frac{\tau^2}{2}\|B\theta^k\|^2$, where $L(\psi)$ denotes the Lipschitz constant of the function $\psi$.

The proof of Lemma 3.4 is provided in the appendix in Section VI-A. This proof is an extension of the proof of Theorem 2 in [12] and the proof in Section 3 in [23]. Based on the equivalence shown in Lemma 3.4, we can now derive an upper-bound on the difference of the dual function value of the sequence $\{\lambda^k\}$ from the optimal dual function value in Theorem 3.5.

Theorem 3.5: Let $\{\lambda^k\}$ be generated by inexact AMA in Algorithm 3. If Assumption 3.3 holds, then for any $k \geq 1$

$$D(\lambda^\star) - D\left(\frac{1}{k}\sum_{p=1}^{k}\lambda^p\right) \leq \frac{L}{2k}\left( \|\lambda^0 - \lambda^\star\| + 2\Gamma_k + \sqrt{2\Lambda_k} \right)^2, \quad (7)$$

where $L = \frac{\rho(A)}{\sigma_f}$,

$$\Gamma_k = \sum_{p=1}^{k}\left( \frac{\|A\delta^p\|}{L} + \tau\sqrt{\frac{2L(\psi)\|B\theta^p\| + \|B\theta^p\|^2}{L}} \right), \quad (8)$$

$$\Lambda_k = \sum_{p=1}^{k} \frac{\tau^2\left(2L(\psi)\|B\theta^p\| + \|B\theta^p\|^2\right)}{2L}, \quad (9)$$

and $\lambda^0$ and $\lambda^\star$ denote the initial sequence of Algorithm 3 and the optimal solution of Problem 3.1, respectively.

Proof: Lemma 3.4 shows the equivalence between Algorithm 3 and Algorithm 1 with $e^k = A\delta^k$ and $\epsilon^k = \tau^2 L(\psi)\|B\theta^k\| + \frac{\tau^2}{2}\|B\theta^k\|^2$. It remains to show that the dual problem defined in Problem 3.2 satisfies Assumption 2.2. $\phi(\lambda)$ and $\psi(\lambda)$ are both convex, since conjugate functions and linear functions, as well as their weighted sums, are always convex (the conjugate function is the point-wise supremum of a set of affine functions). Furthermore, since $f(x)$ is strongly convex with modulus $\sigma_f$ by Assumption 3.3, we know that $f^\star$ has a Lipschitz-continuous gradient with Lipschitz constant $L(\nabla f^\star) = \frac{1}{\sigma_f}$. It follows that the function $\phi$ has a Lipschitz-continuous gradient $\nabla\phi$ with Lipschitz constant $L = L(\nabla\phi) = \frac{\rho(A)}{\sigma_f}$. Hence, the functions $\phi$ and $\psi$ satisfy Assumption 2.2, and Proposition 2.4 completes the proof of the upper-bound in (7).

Using the complexity upper-bound in Theorem 3.5, we derive sufficient conditions on the error sequences for the convergence of inexact AMA in Corollary 3.6.

Corollary 3.6: Let $\{\lambda^k\}$ be generated by inexact AMA in Algorithm 3. If Assumption 3.3 holds, and the constant $L(\psi) < \infty$, the following sufficient conditions on the error sequences $\{\delta^k\}$ and $\{\theta^k\}$ guarantee the convergence of Algorithm 3:
- The sequences $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ are finitely summable, i.e., $\sum_{k=1}^{\infty}\|\delta^k\| < \infty$ and $\sum_{k=1}^{\infty}\|\theta^k\| < \infty$.
- The sequences $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ decrease at the rate $O(\frac{1}{k^{1+\kappa}})$ for any $\kappa > 0$.

Proof: By Assumption 3.3, the dual Problem 3.2 satisfies Assumption 2.2, and the complexity upper-bound in Proposition 2.4 holds. By extending the sufficient conditions on the error sequences for the convergence of the inexact proximal-gradient method discussed after Proposition 2.4, we can derive sufficient conditions on the error sequences for inexact AMA with the errors defined in Lemma 3.4, $e^k = A\delta^k$ and $\epsilon^k = \tau^2 L(\psi)\|B\theta^k\| + \frac{\tau^2}{2}\|B\theta^k\|^2$. Since $L(\psi) < \infty$, we have that if the error sequences $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ satisfy the conditions in Corollary 3.6, the complexity upper-bound in Theorem 3.5 converges to zero as the number of iterations goes to infinity, which further implies that the inexact AMA algorithm converges to the optimal solution.

Remark 3.7: If the function $\psi$ is an indicator function on a convex set, then the constant $L(\psi)$ is equal to infinity if, at any iteration, the inexact solution is infeasible with respect to the convex set. However, if we can guarantee that at every iteration the solutions are feasible with respect to the convex set, then the constant $L(\psi)$ is equal to zero.

1) Linear convergence of inexact AMA for a quadratic cost: In this section, we study the convergence properties of inexact AMA under a stronger assumption, i.e. the first objective $f$ is a quadratic function and the coupling matrix $A$ has full row rank. We show that with this stronger assumption, the convergence rate of inexact AMA is improved to be linear. Applications satisfying this assumption include least-squares problems and distributed MPC problems.

Assumption 3.8: We assume that $f$ is a quadratic function $f = x^T H x + h^T x$ with $H \succ 0$, and $A$ has full row rank.

Remark 3.9: If Assumption 3.8 holds, we know that the first objective $\phi(\lambda)$ in the dual problem in Problem 3.2 is equal to $\phi(\lambda) = \frac{1}{4}(A^T\lambda - h)^T H^{-1} (A^T\lambda - h)$. Then, a Lipschitz constant of $\nabla\phi$ is given by the largest eigenvalue of the matrix $\frac{1}{4}AH^{-1}A^T$, i.e., $L = \lambda_{\max}(\frac{1}{4}AH^{-1}A^T)$. In addition, the convexity modulus of $\phi(\lambda)$ is equal to the smallest eigenvalue, i.e., $\sigma_\phi = \lambda_{\min}(\frac{1}{4}AH^{-1}A^T)$.

Theorem 3.10: Let $\{\lambda^k\}$ be generated by inexact AMA in Algorithm 3. If Assumptions 3.3 and 3.8 hold, then for any $k \geq 1$

$$\|\lambda^k - \lambda^\star\| \leq (1-\gamma)^k\left( \|\lambda^0 - \lambda^\star\| + \Gamma_k \right), \quad (10)$$

with

$$\gamma = \frac{\lambda_{\min}(AH^{-1}A^T)}{\lambda_{\max}(AH^{-1}A^T)}, \qquad \Gamma_k = \sum_{p=1}^{k}(1-\gamma)^{-p}\left( \frac{\|A\delta^p\|}{L} + \tau\sqrt{\frac{2L(\psi)\|B\theta^p\| + \|B\theta^p\|^2}{L}} \right),$$

and $\lambda^0$ and $\lambda^\star$ denote the initial sequence of Algorithm 3 and the optimal solution of Problem 3.1, respectively.

Proof: By Assumptions 3.3 and 3.8, the dual problem in Problem 3.2 satisfies Assumption 2.3, and the complexity upper-bound in Proposition 2.5 holds for the dual problem. The proof of Theorem 3.10 follows directly from this fact.

By using the complexity upper-bound in Theorem 3.10, we derive sufficient conditions on the error sequences which guarantee the convergence of the inexact AMA algorithm.

Corollary 3.11: Let $\{\lambda^k\}$ be generated by inexact AMA in Algorithm 3. If Assumptions 3.3 and 3.8 hold, and the constant $L(\psi) < \infty$, the following sufficient conditions on the error sequences $\{\delta^k\}$ and $\{\theta^k\}$ guarantee the convergence of Algorithm 3:
- The sequences $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ are finitely summable, i.e., $\sum_{k=1}^{\infty}\|\delta^k\| < \infty$ and $\sum_{k=1}^{\infty}\|\theta^k\| < \infty$.
- The sequences $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ decrease at $O(\frac{1}{k^{1+\kappa}})$ for any $\kappa \in \mathbb{Z}_+$. For this case, the complexity upper-bound in (10) reduces to the same rate as the error sequences.
- The sequences $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ decrease at a linear rate.

Proof: By using the complexity upper-bound in Theorem 3.10, we can derive sufficient conditions on the error sequences for inexact AMA. Since $L(\psi) < \infty$, we have that if the error sequences $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ satisfy the first and third conditions in Corollary 3.11, the complexity upper-bound in Theorem 3.10 converges to zero as the number of iterations goes to infinity, which further implies that the inexact AMA algorithm converges to the optimal solution. For the second sufficient condition in Corollary 3.11, we provide Lemma 3.12 to prove that it also guarantees the convergence of the algorithm.

Lemma 3.12: Let $\alpha$ be a positive number with $0 < \alpha < 1$. The following series $S_k$ converges to zero as the index $k$ goes to infinity:

$$\lim_{k\to\infty} S_k := \lim_{k\to\infty} \, \alpha^k \sum_{p=1}^{k} \frac{\alpha^{-p}}{p} = 0.$$

Furthermore, the series $S_k$ converges at the rate $O(\frac{1}{k})$.
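As a quick numerical illustration of Lemma 3.12 (a sketch of our own, not part of the paper), the following Python snippet evaluates $S_k$ together with the product $k \cdot S_k$, which stays bounded, consistent with the claimed $O(\frac{1}{k})$ rate:

```python
import numpy as np

alpha = 0.7
for k in (10, 100, 1000, 10000):
    p = np.arange(1, k + 1)
    # S_k = alpha^k * sum_{p=1}^k alpha^{-p}/p, computed as sum_p alpha^(k-p)/p
    # to avoid overflow of alpha^{-p} for large p
    S_k = np.sum(alpha**(k - p) / p)
    print(k, S_k, k * S_k)   # k * S_k approaches a constant, so S_k = O(1/k)
```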

The proof of Lemma 3.12 is provided in the appendix in Section VI-B. Lemma 3.12 shows that if the sequences $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ decrease at $O(\frac{1}{k})$, the complexity upper-bound in (10) converges at the rate $O(\frac{1}{k})$. Note that this result can be extended to the case that $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ decrease at $O(\frac{1}{k^{1+\kappa}})$ for any $\kappa \in \mathbb{Z}_+$, by following a similar proof as for Lemma 3.12.

B. Inexact fast alternating minimization algorithm (inexact FAMA)

In this section, we present an accelerated variant of inexact AMA, named the inexact fast alternating minimization algorithm (inexact FAMA), which is presented in Algorithm 4. It addresses the same problem class as Problem 3.1 and requires the same Assumption 3.3 for convergence. Similar to inexact AMA, inexact FAMA allows computation errors in the two minimization steps of the algorithm. Differing from inexact AMA, inexact FAMA involves one extra linear update in Step 4 of Algorithm 4, which improves the optimal convergence rate of the complexity upper-bound of the algorithm from $O(\frac{1}{k})$ to $O(\frac{1}{k^2})$. This is similar to the relationship between inexact PGM and inexact APGM. By extending the result in Lemma 3.4, we show that inexact FAMA is equivalent to applying inexact APGM to the dual problem. With this equivalence, we further show a complexity upper-bound for inexact FAMA by using the result in Proposition 2.6 for inexact APGM. The main results for inexact FAMA have been presented in [19], and are restated in this section.

Algorithm 4 Inexact fast alternating minimization algorithm (Inexact FAMA)
Require: Initialize $\hat{\lambda}^0 = \lambda^0 \in \mathbb{R}^{n_c}$, and $\tau < \sigma_f/\rho(A)$
for $k = 1, 2, \dots$ do
  1: $\tilde{x}^k = \mathrm{argmin}_x \left\{ f(x) + \langle \hat{\lambda}^{k-1}, -Ax \rangle \right\} + \delta^k$
  2: $\tilde{z}^k = \mathrm{argmin}_z \left\{ g(z) + \langle \hat{\lambda}^{k-1}, -Bz \rangle + \frac{\tau}{2}\|c - A\tilde{x}^k - Bz\|^2 \right\} + \theta^k$
  3: $\lambda^k = \hat{\lambda}^{k-1} + \tau(c - A\tilde{x}^k - B\tilde{z}^k)$
  4: $\hat{\lambda}^k = \lambda^k + \frac{k-1}{k+2}(\lambda^k - \lambda^{k-1})$
end for

Lemma 3.13: If Assumption 3.3 is satisfied, and inexact FAMA and inexact APGM are initialized with the same dual and primal starting sequences, respectively, then applying inexact FAMA in Algorithm 4 to Problem 3.1 is equivalent to applying inexact APGM in Algorithm 2 to the dual problem defined in Problem 3.2 with the errors $e^k = A\delta^k$ and $\epsilon^k = \tau^2 L(\psi)\|B\theta^k\| + \frac{\tau^2}{2}\|B\theta^k\|^2$, where $L(\psi)$ denotes the Lipschitz constant of the function $\psi$.

Proof: The proof follows the same flow as the proof of Lemma 3.4, by replacing $\lambda^{k-1}$ by $\hat{\lambda}^{k-1}$ computed in Step 4 of Algorithm 4, and showing the following equality:

$$\lambda^k = \mathrm{prox}_{\tau\psi,\epsilon^k}\left( \hat{\lambda}^{k-1} - \tau(\nabla\phi(\hat{\lambda}^{k-1}) + e^k) \right). \quad (11)$$

Based on the equivalence shown in Lemma 3.13, we can now derive an upper-bound on the difference of the dual function value of the sequence $\{\lambda^k\}$ for inexact FAMA in Theorem 3.14.

Theorem 3.14 (Theorem III.5 in [19]): Let $\{\lambda^k\}$ be generated by inexact FAMA in Algorithm 4. If Assumption 3.3 holds, then for any $k \geq 1$

$$D(\lambda^\star) - D(\lambda^k) \leq \frac{2L}{(k+1)^2}\left( \|\lambda^0 - \lambda^\star\| + 2\Gamma_k + \sqrt{2\Lambda_k} \right)^2, \quad (12)$$

where

$$\Gamma_k = \sum_{p=1}^{k} p\left( \frac{\|A\delta^p\|}{L} + \tau\sqrt{\frac{2L(\psi)\|B\theta^p\| + \|B\theta^p\|^2}{L}} \right), \quad (13)$$

$$\Lambda_k = \sum_{p=1}^{k} \frac{p^2\tau^2\left(2L(\psi)\|B\theta^p\| + \|B\theta^p\|^2\right)}{2L}, \quad (14)$$

and $L = \frac{\rho(A)}{\sigma_f}$.

Proof: The proof is similar to the proof of Theorem 3.5. Lemma 3.13 shows the equivalence between Algorithm 4 and Algorithm 2, and Proposition 2.6 completes the proof of the upper-bound in inequality (12).

With the results in Theorem 3.14, the sufficient conditions on the errors for the convergence of inexact APGM presented in Section II-C can be extended to inexact FAMA with the errors defined in Lemma 3.13.

Corollary 3.15: Let $\{\lambda^k\}$ be generated by inexact FAMA in Algorithm 4. If Assumption 3.3 holds, and the constant $L(\psi) < \infty$, the following sufficient conditions on the error sequences $\{\delta^k\}$ and $\{\theta^k\}$ guarantee the convergence of Algorithm 4:
- The series $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ are finitely summable, i.e., $\sum_{k=1}^{\infty}\|\delta^k\| < \infty$ and $\sum_{k=1}^{\infty}\|\theta^k\| < \infty$.
- The sequences $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ decrease at the rate $O(\frac{1}{k^{2+\kappa}})$ for any $\kappa > 0$.

Proof: By Assumption 3.3, the dual Problem 3.2 satisfies Assumption 2.2, and the complexity upper-bound in Proposition 2.6 holds. By extending the sufficient conditions on the error sequences discussed after Proposition 2.6, we obtain sufficient conditions on the error sequences for inexact FAMA with the errors defined in Lemma 3.13. Since $L(\psi) < \infty$, we have that if the error sequences $\{\|\delta^k\|\}$ and $\{\|\theta^k\|\}$ satisfy the conditions in Corollary 3.15, the complexity upper-bound in Theorem 3.14 converges to zero as the number of iterations goes to infinity, which further implies that the inexact FAMA algorithm converges to the optimal solution.

C. Discussion: inexact AMA and inexact FAMA with bounded errors

In this section, we study the special case where the error sequences $\delta^k$ and $\theta^k$ are bounded by constants. This special case is of particular interest, as it appears in many engineering problems in practice, e.g. quantized distributed computation and distributed optimization with constant local computation errors. Previous work includes [14], where the authors studied complexity upper-bounds for a distributed optimization algorithm with bounded noise on the solutions of the local problems. In this section, we study errors satisfying Assumption 3.16 and derive the corresponding complexity upper-bounds for inexact AMA, as well as for inexact FAMA, under different assumptions. We show that if the problem satisfies the stronger Assumption 3.8, i.e. the cost function $f$ is a quadratic function, then the complexity bounds for the inexact algorithms with bounded errors converge to a finite positive value as $k$ increases. It is important to point out that if only the conditions in Assumption 3.3 are satisfied, convergence of the complexity upper-bound to a small constant cannot be shown. We present the complexity upper-bound of inexact FAMA in detail for this case, and the result can be easily extended to inexact AMA.

Assumption 3.16: We assume that the error sequences $\delta^k$ and $\theta^k$ are bounded by $\|\delta^k\| \leq \bar{\delta}$ and $\|\theta^k\| \leq \bar{\theta}$ for all $k \geq 0$, where $\bar{\delta}$ and $\bar{\theta}$ are positive constants.

Corollary 3.17: Let $\{\lambda^k\}$ be generated by inexact AMA in Algorithm 3. If Assumptions 3.3, 3.8 and 3.16 hold, then for any $k \geq 1$

$$\|\lambda^k - \lambda^\star\| \leq (1-\gamma)^k \|\lambda^0 - \lambda^\star\| + \Delta, \quad (15)$$

where

$$\Delta = \frac{1}{\gamma}\left( \frac{\|A\|\bar{\delta}}{L} + \tau\sqrt{\frac{2L(\psi)\|B\|\bar{\theta} + \|B\|^2\bar{\theta}^2}{L}} \right), \qquad \gamma = \frac{\lambda_{\min}(AH^{-1}A^T)}{\lambda_{\max}(AH^{-1}A^T)},$$

and $\lambda^0$ and $\lambda^\star$ denote the initial sequence of Algorithm 3 and the optimal solution of Problem 3.1, respectively.

Proof: Since Assumptions 3.3 and 3.8 are satisfied, the results in Theorem 3.10 hold. By Assumption 3.16, we know that the error sequences satisfy $\|\delta^k\| \leq \bar{\delta}$ and $\|\theta^k\| \leq \bar{\theta}$ for all $k \geq 0$. Then the error function $\Gamma_k$ in Theorem 3.10 is upper-bounded by

$$\Gamma_k \leq \sum_{p=1}^{k}(1-\gamma)^{-p}\left( \frac{\|A\|\bar{\delta}}{L} + \tau\sqrt{\frac{2L(\psi)\|B\|\bar{\theta} + \|B\|^2\bar{\theta}^2}{L}} \right).$$

Due to the fact that $0 < \gamma < 1$ and the properties of geometric series, we get

$$(1-\gamma)^k\,\Gamma_k \leq \left( \frac{\|A\|\bar{\delta}}{L} + \tau\sqrt{\frac{2L(\psi)\|B\|\bar{\theta} + \|B\|^2\bar{\theta}^2}{L}} \right)\sum_{p=1}^{k}(1-\gamma)^{k-p} \leq \frac{1}{\gamma}\left( \frac{\|A\|\bar{\delta}}{L} + \tau\sqrt{\frac{2L(\psi)\|B\|\bar{\theta} + \|B\|^2\bar{\theta}^2}{L}} \right).$$

Then the upper-bound in Theorem 3.10 implies the upper-bound in (15).

Remark 3.18: The inexact AMA algorithm with bounded errors satisfying Assumptions 3.3 and 3.8 has a constant term in the complexity upper-bound in (15). Hence, the complexity bound in (15) converges to a neighbourhood of the origin of size $\Delta$ as $k$ goes to infinity.

Remark 3.19: For inexact FAMA in Algorithm 4, if Assumption 3.3 and Assumption 3.16 hold, i.e. the cost is not necessarily quadratic, we can also derive the following complexity upper-bound:

$$D(\lambda^\star) - D(\lambda^k) \leq \frac{2L}{(k+1)^2}\left( \|\lambda^0 - \lambda^\star\| + k^2\Delta \right)^2, \quad (16)$$

with

$$\Delta = \frac{\|A\|\bar{\delta}}{L} + \sqrt{\frac{3\tau^2\left(2L(\psi)\|B\|\bar{\theta} + \|B\|^2\bar{\theta}^2\right)}{2L}}$$

and $L = \frac{\rho(A)}{\sigma_f}$. The proof follows the same flow as the proof of Corollary 3.17, replacing Theorem 3.10 by Theorem 3.14. Compared to the FAMA algorithm without errors, we see that inexact FAMA with bounded errors has one extra term in the complexity upper-bound in (16). Unfortunately, this term increases as $k$ increases. Hence, the complexity bound for inexact FAMA with bounded errors does not converge as $k$ goes to infinity.

IV. INEXACT AMA FOR DISTRIBUTED OPTIMIZATION WITH AN APPLICATION TO DISTRIBUTED MPC

A. Distributed optimization problem

In this section, we consider a distributed optimization problem on a network of $M$ sub-systems (nodes). The sub-systems communicate according to a fixed undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. The vertex set $\mathcal{V} = \{1, 2, \dots, M\}$ represents the sub-systems and the edge set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ specifies pairs of sub-systems that can communicate. If $(i, j) \in \mathcal{E}$, we say that sub-systems $i$ and $j$ are neighbours, and we denote by $\mathcal{N}_i = \{j \mid (i, j) \in \mathcal{E}\}$ the set of the neighbours of sub-system $i$. Note that $\mathcal{N}_i$ includes $i$. The cardinality of $\mathcal{N}_i$ is denoted by $|\mathcal{N}_i|$. The global optimization variable is denoted by $v$. The local variable of sub-system $i$, namely the $i$-th element of $v = [[v]_1^T, \dots, [v]_M^T]^T$, is denoted by $[v]_i$. The concatenation of the variable of sub-system $i$ and the variables of its neighbours is denoted by $z_i$. With the selection matrices $E_i$ and $F_{ij}$, the variables have the following relationship: $z_i = E_i v$ and $[v]_i = F_{ij} z_j$ for $j \in \mathcal{N}_i$, which implies the relation between the local variable $[v]_i$ and the global variable $v$, i.e. $[v]_i = F_{ij} E_j v$ for $j \in \mathcal{N}_i$. We consider the following distributed optimization problem:

Problem 4.1:
$$\min_{z_i, v} \; \sum_{i=1}^{M} f_i(z_i) \quad \text{s.t.} \quad z_i \in \mathbb{C}_i, \quad z_i = E_i v, \quad i = 1, 2, \dots, M,$$

where $f_i$ is the local cost function for sub-system $i$, and the constraint set $\mathbb{C}_i$ represents a convex local constraint on the concatenation $z_i$ of the variable of sub-system $i$ and the variables of its neighbours.

Assumption 4.2: Each local cost function $f_i$ in Problem 4.1 is a strongly convex function with convexity modulus $\sigma_{f_i}$ and has a Lipschitz continuous gradient with Lipschitz constant $L(\nabla f_i)$. The set $\mathbb{C}_i$ is a convex set, for all $i = 1, \dots, M$.

Remark 4.3: Recall the problem formulation of inexact AMA and FAMA defined in Problem 3.1. The two objectives are defined as $f(z) = \sum_{i=1}^{M} f_i(z_i)$ subject to $z_i \in \mathbb{C}_i$ for all $i = 1, \dots, M$, and $g = 0$. The matrices are $A = I$, $B = -[E_1^T, E_2^T, \dots, E_M^T]^T$ and $c = 0$. The first objective $f(z)$ consists of a strongly convex function of $z$ and convex constraints. The convex constraints can be considered as indicator functions, which are convex functions. Due to the fact that the sum of a strongly convex function and a convex function is strongly convex, the objective $f(z)$ is strongly convex with modulus $\sigma_f$, and Problem 4.1 satisfies Assumption 3.3.

B. Application: distributed model predictive control

In this section, we consider a distributed linear MPC problem with $M$ sub-systems, and show that it can be written in the form of Problem 4.1. The dynamics of the $i$-th agent are given by the discrete-time linear dynamics:

$$x_i(t+1) = \sum_{j \in \mathcal{N}_i} A_{ij} x_j(t) + B_{ij} u_j(t), \quad i = 1, 2, \dots, M, \quad (17)$$

where $A_{ij}$ and $B_{ij}$ are the dynamical matrices. The states and inputs of agent $i$ are subject to local convex constraints:

$$x_i(t) \in \mathbb{X}_i, \quad u_i(t) \in \mathbb{U}_i, \quad i = 1, 2, \dots, M. \quad (18)$$

The distributed MPC problem, as e.g. considered in [8], is given in Problem 4.4.

Problem 4.4:
$$\min_{x,u} \; \sum_{i=1}^{M} \left( \sum_{t=0}^{N-1} l_i(x_i(t), u_i(t)) + l_i^f(x_i(N)) \right)$$
$$\text{s.t.} \quad x_i(t+1) = \sum_{j \in \mathcal{N}_i} A_{ij} x_j(t) + B_{ij} u_j(t), \quad x_i(t) \in \mathbb{X}_i, \quad u_i(t) \in \mathbb{U}_i, \quad x_i(N) \in \mathbb{X}_i^f, \quad x_i(0) = \bar{x}_i, \quad i = 1, 2, \dots, M,$$

where $l_i(\cdot,\cdot)$ and $l_i^f(\cdot)$ are strictly convex stage cost functions and $N$ is the horizon of the MPC problem. The state and input sequences along the horizon of agent $i$ are denoted by $x_i = [x_i^T(0), x_i^T(1), \dots, x_i^T(N)]^T$ and $u_i = [u_i^T(0), u_i^T(1), \dots, u_i^T(N-1)]^T$.
We denote the concatenations of the state and input sequences of agent $i$ and its neighbours by $x_{\mathcal{N}_i}$ and $u_{\mathcal{N}_i}$. The corresponding constraints are $x_{\mathcal{N}_i} \in \mathbb{X}_{\mathcal{N}_i}$ and $u_{\mathcal{N}_i} \in \mathbb{U}_{\mathcal{N}_i}$. We define $v = [x_1^T, x_2^T, \dots, x_M^T, u_1^T, u_2^T, \dots, u_M^T]^T$ to be the global variable and $z_i = [x_{\mathcal{N}_i}, u_{\mathcal{N}_i}]$ to be the local variables. $\mathbb{Z}_{\mathcal{N}_i} = \mathbb{X}_{\mathcal{N}_i} \times \mathbb{U}_{\mathcal{N}_i}$ denotes the local constraint on $z_i$, and $H_i z_i = h_i$ denotes the dynamical constraint of sub-system $i$. Then, considering the distributed problem in Problem 4.1, we see that the local cost function $f_i$ for agent $i$ contains all the stage cost functions of the state and input sequences of agent $i$ and its neighbours. The constraint $\mathbb{C}_i$ includes the constraint $\mathbb{Z}_{\mathcal{N}_i}$ and the dynamical constraint $H_i z_i = h_i$. The $E_i$ are the matrices selecting the local variables from the global variable. The $i$-th component of $v$ is equal to $[v]_i = [x_i, u_i]$.

Remark 4.5: If the stage cost functions $l_i(\cdot,\cdot)$ and $l_i^f(\cdot)$ are strictly convex functions, and the state and input constraints $\mathbb{X}_i$ and $\mathbb{U}_i$ are convex sets, then the conditions in Assumption 4.2 are all satisfied. Furthermore, if the stage cost functions $l_i(\cdot,\cdot)$ and $l_i^f(\cdot)$ are set to be positive definite quadratic functions, then the distributed optimization problem originating from the distributed MPC problem further satisfies Assumption 3.8.

Remark 4.6: For the case that the distributed MPC problem has only input constraints and the state coupling matrices in the linear dynamics are $A_{ij} = 0$ for any $j \neq i$, we can eliminate all state variables in the distributed MPC problem and keep only the input variables as optimization variables. For this case, if the stage cost functions $l_i(\cdot,\cdot)$ and $l_i^f(\cdot)$ are strictly convex functions with respect to the input variables, and the local linear dynamical system $x_i(t+1) = A_{ii} x_i(t) + \sum_{j \in \mathcal{N}_i} B_{ij} u_j(t)$ is controllable, then the resulting distributed optimization problem satisfies Assumption 3.8. The details of this formulation can be found in [8].

C. Inexact AMA and inexact FAMA for distributed optimization

In this section, we apply inexact AMA and inexact FAMA to the distributed optimization problem in Problem 4.1, originating from the distributed MPC problem in Problem 4.4. The concept is to split the distributed optimization into small, local problems according to the physical couplings of the sub-systems. Algorithm 5 and Algorithm 6 present the algorithms. Note that Step 2 in inexact AMA and inexact FAMA, i.e. in Algorithm 3 and Algorithm 4, is simplified to a consensus step, Step 3 in Algorithm 5 and Algorithm 6, which requires only local communication. In the algorithms, $\delta_i^k$ represents the computational error of the local problems.

Algorithm 5 Inexact Alternating Minimization Algorithm for Distributed Optimization
Require: Initialize $\lambda_i^0 = 0$, and $\tau < \min_{i \in \mathbb{Z}_{[1,M]}}\{\sigma_{f_i}\}$
for $k = 1, 2, \dots$ do
  1: $\tilde{z}_i^k = \mathrm{argmin}_{z_i \in \mathbb{C}_i}\left\{f_i(z_i) + \langle \lambda_i^{k-1}, -z_i \rangle\right\} + \delta_i^k$
  2: Send $\tilde{z}_i^k$ to all the neighbours of agent $i$.
  3: $[\tilde{v}^k]_i = \frac{1}{|\mathcal{N}_i|}\sum_{j \in \mathcal{N}_i} [\tilde{z}_j^k]_i$
  4: Send $[\tilde{v}^k]_i$ to all the neighbours of agent $i$.
  5: $\lambda_i^k = \lambda_i^{k-1} + \tau(E_i\tilde{v}^k - \tilde{z}_i^k)$
end for

Algorithm 6 Inexact Fast Alternating Minimization Algorithm for Distributed Optimization
Require: Initialize $\lambda_i^0 = \hat{\lambda}_i^0$, and $\tau < \min_{i \in \mathbb{Z}_{[1,M]}}\{\sigma_{f_i}\}$
for $k = 1, 2, \dots$ do
  1: $\tilde{z}_i^k = \mathrm{argmin}_{z_i \in \mathbb{C}_i}\left\{f_i(z_i) + \langle \hat{\lambda}_i^{k-1}, -z_i \rangle\right\} + \delta_i^k$
  2: Send $\tilde{z}_i^k$ to all the neighbours of agent $i$.
  3: $[\tilde{v}^k]_i = \frac{1}{|\mathcal{N}_i|}\sum_{j \in \mathcal{N}_i} [\tilde{z}_j^k]_i$
  4: Send $[\tilde{v}^k]_i$ to all the neighbours of agent $i$.
  5: $\lambda_i^k = \hat{\lambda}_i^{k-1} + \tau(E_i\tilde{v}^k - \tilde{z}_i^k)$
  6: $\hat{\lambda}_i^k = \lambda_i^k + \frac{k-1}{k+2}(\lambda_i^k - \lambda_i^{k-1})$
end for

Remark 4.7: Note that at every iteration, Algorithms 5 and 6 only need local communication, and the computations can be performed in parallel for every sub-system.

We provide a lemma showing that, considering Algorithm 5, there exists a Lipschitz constant $L(\psi)$ equal to zero. The result can be easily extended to Algorithm 6. This result is required by the proofs of the complexity upper-bounds in Corollaries 4.9, 4.10 and 4.11.

Lemma 4.8: Let the sequence $\{\lambda^k\}$ be generated by Algorithm 5. For all $k \geq 0$ it holds that $E^T\lambda^k = 0$, and the Lipschitz constant $L(\psi)$ of the second objective in the dual problem of Problem 4.1 is equal to zero.

Proof: We first prove that for all $k \geq 0$, the sequence $\lambda^k$ satisfies $E^T\lambda^k = 0$, where $E = [E_1^T, \dots, E_M^T]^T$. We know that Step 3 in Algorithm 5 is equivalent to the following update:

$$\tilde{v}^k = M E^T \tilde{z}^k = M \sum_{i=1}^{M} E_i^T \tilde{z}_i^k,$$

with $M = \mathrm{blkdiag}(\frac{1}{|\mathcal{N}_1|}I_1, \dots, \frac{1}{|\mathcal{N}_M|}I_M) = (E^T E)^{-1}$, where $|\mathcal{N}_i|$ denotes the number of elements in the set $\mathcal{N}_i$, and $I_i$ denotes an identity matrix with the dimension of the $i$-th component of $v$, denoted $[v]_i$. From Step 5 in Algorithm 5, for all $k \geq 1$ we have that

$$\lambda^k = \lambda^{k-1} + \tau(E\tilde{v}^k - \tilde{z}^k).$$

By multiplying both sides with $E^T$, we have

$$E^T\lambda^k = E^T\lambda^{k-1} + \tau(E^T E\tilde{v}^k - E^T\tilde{z}^k) = E^T\lambda^{k-1} + \tau(E^T E M E^T\tilde{z}^k - E^T\tilde{z}^k).$$

Since $M = (E^T E)^{-1}$, the above equality becomes

$$E^T\lambda^k = E^T\lambda^{k-1} + \tau(E^T\tilde{z}^k - E^T\tilde{z}^k) = E^T\lambda^{k-1}.$$

From the initialization in Algorithm 5, we know $E^T\lambda^0 = E^T 0 = 0$. Then, by induction, we can immediately prove that for all $k \geq 0$ it holds that $E^T\lambda^k = 0$.

We can now show that for all $\lambda$ with $E^T\lambda = 0$, a Lipschitz constant of the second objective in the dual problem in Problem 3.2, $L(\psi)$, is equal to zero. Since $g = 0$, $B = -E$ and $c = 0$, the second objective in the dual problem is equal to

$$\psi(\lambda) = g^\star(B^T\lambda) - c^T\lambda = g^\star(-E^T\lambda) = \sup_v \left( -v^T E^T\lambda - 0 \right) = \begin{cases} 0 & \text{if } E^T\lambda = 0, \\ \infty & \text{if } E^T\lambda \neq 0. \end{cases}$$

The function $\psi(\lambda)$ is an indicator function on the nullspace of the matrix $E^T$. For all $\lambda$ satisfying $E^T\lambda = 0$, the function $\psi(\lambda)$ is equal to zero. Hence, zero is a Lipschitz constant of the function $\psi(\lambda)$ for all $E^T\lambda = 0$.

After proving Lemma 4.8, we are ready to show the main theoretical properties of Algorithms 5 and 6.

Corollary 4.9: Let $\{\lambda^k = [\lambda_1^{kT}, \dots, \lambda_M^{kT}]^T\}$ be generated by Algorithm 5. If Assumption 4.2 is satisfied and the inexact solutions $\tilde{z}_i^k$ are feasible for all $k$, i.e. $\tilde{z}_i^k \in \mathbb{C}_i$, then for any $k \geq 1$

$$D(\lambda^\star) - D\left(\frac{1}{k}\sum_{p=1}^{k}\lambda^p\right) \leq \frac{L}{2k}\left( \|\lambda^0 - \lambda^\star\| + \frac{2}{L}\sum_{p=1}^{k}\|\delta^p\| \right)^2, \quad (19)$$

where $D(\cdot)$ is the dual function of Problem 4.1, $\lambda^0 = [\lambda_1^{0T}, \dots, \lambda_M^{0T}]^T$ and $\lambda^\star$ are the starting sequence and the optimal sequence of the Lagrangian multipliers, respectively, and $\delta^p = [\delta_1^{pT}, \dots, \delta_M^{pT}]^T$ denotes the global error sequence. The Lipschitz constant is equal to $L = \frac{1}{\sigma_f}$, with $\sigma_f = \min\{\sigma_{f_1}, \dots, \sigma_{f_M}\}$.

Proof: As stated in Remark 4.3, Problem 4.1 is split as follows: $f = \sum_{i=1}^{M} f_i(z_i)$ with the constraints $z_i \in \mathbb{C}_i$ for all $i = 1, \dots, M$, and $g = 0$. The matrices are $A = I$, $B = -E$ and $c = 0$. If Assumption 4.2 holds, then this splitting problem satisfies Assumption 3.3 with convexity modulus $\sigma_f$. From Theorem 3.5, we know that the sequence $\{\lambda^k\}$ generated by inexact AMA in Algorithm 5 satisfies the complexity upper-bound in (7), with $\Gamma_k$ and $\Lambda_k$ in (8) and (9), $\delta^k = [\delta_1^{kT}, \dots, \delta_M^{kT}]^T$ and $\theta^k = 0$. By Lemma 4.8, it follows that the constant $L(\psi)$ in $\Lambda_k$ is equal to zero. The Lipschitz constant of the gradient of the dual objective is equal to $L = \frac{\rho(A)}{\sigma_f} = \frac{1}{\sigma_f}$, with $\sigma_f = \min\{\sigma_{f_1}, \dots, \sigma_{f_M}\}$. Hence, we can simplify the complexity upper-bound in (7) for Algorithm 5 to inequality (19).

As discussed in Remark 4.5, if the stage cost functions $l_i(\cdot,\cdot)$ and $l_i^f(\cdot)$ in the distributed MPC problem are strictly positive quadratic functions, then the distributed optimization problem originating from the distributed MPC problem satisfies Assumption 3.8, which, according to Theorem 3.10, implies the linearly decreasing upper-bound given in Corollary 4.10.

Corollary 4.10: Let $\{\lambda^k = [\lambda_1^{kT}, \dots, \lambda_M^{kT}]^T\}$ be generated by Algorithm 5. If Assumption 4.2 is satisfied, the local cost functions $f_i$ are strictly positive quadratic functions, and the inexact solutions $\tilde{z}_i^k$ are feasible for all $k$, i.e. $\tilde{z}_i^k \in \mathbb{C}_i$, then for any $k \geq 1$

$$\|\lambda^k - \lambda^\star\| \leq (1-\gamma)^k\left( \|\lambda^0 - \lambda^\star\| + \sum_{p=1}^{k}(1-\gamma)^{-p}\frac{\|\delta^p\|}{L} \right), \quad (20)$$

where $\gamma = \frac{\lambda_{\min}(H)}{\lambda_{\max}(H)}$, and $\lambda^0$ and $\lambda^\star$ are the starting sequence and the optimal sequence of the Lagrangian multipliers, respectively. The Lipschitz constant is equal to $L = \frac{1}{\sigma_f}$, where $\sigma_f = \min\{\sigma_{f_1}, \dots, \sigma_{f_M}\}$.

Proof: In Algorithm 6, the variable $\hat{\lambda}_i^k$ is a linear function of $\lambda_i^k$ and $\lambda_i^{k-1}$. This preserves all properties shown in Lemma 4.8 for Algorithm 6. Corollary 4.10 can then be easily proven by following the same steps as in the proof of Corollary 4.9, replacing Theorem 3.5 by Theorem 3.10.

Corollary 4.11: Let $\{\lambda^k = [\lambda_1^{kT}, \dots, \lambda_M^{kT}]^T\}$ be generated by Algorithm 6. If Assumption 4.2 is satisfied and the inexact solutions $\tilde{z}_i^k$ are feasible for all $k$, i.e. $\tilde{z}_i^k \in \mathbb{C}_i$, then for any $k \geq 1$

$$D(\lambda^\star) - D(\lambda^k) \leq \frac{2L}{(k+1)^2}\left( \|\lambda^0 - \lambda^\star\| + \frac{2}{L}\sum_{p=1}^{k} p\|\delta^p\| \right)^2, \quad (21)$$

where $D(\cdot)$ is the dual function of Problem 4.1, and $\lambda^0$ and $\lambda^\star$ are the starting sequence and the optimal sequence of the Lagrangian multipliers, respectively. The Lipschitz constant is equal to $L = \frac{1}{\sigma_f}$, where $\sigma_f = \min\{\sigma_{f_1}, \dots, \sigma_{f_M}\}$.

Proof: It follows from the same proof as for Corollary 4.9, replacing Theorem 3.5 by Theorem 3.14.

Remark 4.12: For the case that all the local problems are solved exactly, i.e. $\delta_i^k = 0$, Algorithm 5 and Algorithm 6 reduce to standard AMA and FAMA, and converge to the optimal solution at the rate of the complexity upper-bounds.

Remark 4.13: The sufficient conditions on the errors for convergence given in Corollaries 3.6, 3.11 and 3.15 can be directly extended to the error sequence $\{\delta^k\}$.

D. Certification of the number of local iterations for convergence

We have shown that the inexact distributed optimization algorithms in Algorithms 5 and 6 allow one to solve the local problems, i.e. Step 1 in Algorithms 5 and 6, inexactly. In this section, we address two questions: which algorithms are suitable for solving the local problems, and what termination conditions for the local algorithms guarantee that the computational error of the local solution satisfies the sufficient conditions on the errors for the global distributed optimization algorithms. We apply the proximal gradient method for solving the local problems in Step 1 in Algorithms 5 and 6, and propose an approach to certify the number of iterations for their solution, by employing a warm-starting strategy and the complexity upper-bounds of the proximal gradient method. The approach guarantees that the local computational errors $\delta_i^k$ decrease with a given rate that satisfies the sufficient conditions derived from Corollaries 4.9, 4.10 and 4.11, ensuring convergence of the inexact distributed optimization algorithm to the optimal solution. We define a decrease function $\alpha_k$ satisfying the sufficient conditions, for example $\alpha_k = \frac{\alpha_0}{k^2}$, where $\alpha_0$ is a positive number.

1) Gradient method: The local problems in Step 1 in Algorithms 5 and 6 are optimization problems with strongly convex cost functions and convex constraints. From Corollaries 4.9, 4.10 and 4.11, we know that the inexact solution $\tilde{z}_i^k$ needs to be a feasible solution with respect to the local constraint $\mathbb{C}_i$, i.e., $\tilde{z}_i^k \in \mathbb{C}_i$ for all $k > 0$. Therefore, a good candidate algorithm for solving the local problems should have the following three properties: the algorithm can solve convex optimization problems efficiently; if the algorithm is stopped early, i.e., only a small number of iterations are implemented, the sub-optimal solution is feasible with respect to the local constraint $\mathbb{C}_i$; and there exists a certificate on the number of iterations to achieve a given accuracy of the sub-optimal solution. Gradient methods satisfy these requirements, have simple and efficient implementations, and offer complexity upper-bounds on the number of iterations [13]. These methods have been studied in the context of MPC in [20], [11] and [17]. We apply the proximal gradient method in Algorithm 7 for solving the local problems in Step 1 in Algorithms 5 and 6. The local optimization problems at iteration $k$ are parametric optimization problems with the parameter $\lambda_i^{k-1}$. We denote the optimal function as

$$z_i^\star(\lambda_i) := \mathrm{argmin}_{z_i \in \mathbb{C}_i}\left\{f_i(z_i) + \langle \lambda_i, -z_i \rangle\right\}. \quad (22)$$

The solution of the optimal function at $\lambda_i^{k-1}$ is denoted as $z_i^{k,\star} := z_i^\star(\lambda_i^{k-1})$. The function $z_i^\star(\cdot)$ has a Lipschitz constant $L(z_i^\star)$ satisfying $\|z_i^\star(\lambda^1) - z_i^\star(\lambda^2)\| \leq L(z_i^\star)\|\lambda^1 - \lambda^2\|$ for any $\lambda^1$ and $\lambda^2$. Motivated by the fact that the difference between the parameters $\lambda_i^{k-1}$ and $\lambda_i^{k-2}$ is limited and measurable for each $k$, i.e. $\beta_k = \|\lambda_i^{k-1} - \lambda_i^{k-2}\| = \|\tau(E_i\tilde{v}^{k-1} - \tilde{z}_i^{k-1})\|$, we use a warm-starting strategy to initialize the local problems, i.e. we use the solution $\tilde{z}_i^{k-1}$ from the previous step as the initial solution for Algorithm 7 at step $k$.

Algorithm 7 Gradient method for solving Step 1 in Algorithm 5 at iteration $k$
Require: $\alpha_k = \frac{\alpha_0}{k^2}$, $\beta_k = \|\tau(E_i\tilde{v}^{k-1} - \tilde{z}_i^{k-1})\|$, $\lambda_i^{k-1}$, $z_i^{k,0} = \tilde{z}_i^{k-1}$ and $\tau_i < \frac{1}{L(\nabla f_i)}$. Compute $J_k$ satisfying (24).
for $j = 1, 2, \dots, J_k$ do
  $z_i^{k,j} = \mathrm{Proj}_{\mathbb{C}_i}\left( z_i^{k,j-1} - \tau_i(\nabla f_i(z_i^{k,j-1}) - \lambda_i^{k-1}) \right)$
end for
$\tilde{z}_i^k \leftarrow z_i^{k,J_k}$

Note that we initialize the vectors $\tilde{v}^0$, $\tilde{z}^0$ and $\tilde{z}_i^0$ for $k = 1$ in Algorithm 5 to be zero vectors.

Proposition 4.14 (Proposition 3 in [22]): Let $z_i^{k,j}$ be generated by Algorithm 7. If Assumption 4.2 holds, then for any $j \geq 0$ we have:

$$\|z_i^{k,j} - z_i^{k,\star}\| \leq (1-\gamma_i)^j \|z_i^{k,0} - z_i^{k,\star}\|, \quad (23)$$

where $\gamma_i = \frac{\sigma_{f_i}}{L(\nabla f_i)}$, and $z_i^{k,0}$ and $z_i^{k,\star}$ denote the initial sequence of Algorithm 7 and the optimal solution of the local problem in Step 1 in Algorithm 5 at iteration $k$, respectively.
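The following Python sketch (a construction of our own, not the paper's code) shows Algorithm 7 with warm-starting, where the number of inner iterations $J_k$ is certified from the linear rate (23) via the termination condition (24) introduced in the next subsection. The quadratic local cost, the box constraint and the multiplier values are hypothetical, and $L(z_i^\star)$ is taken as $\frac{1}{\lambda_{\min}(H_i)}$ following Lemma 4.18 below.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
Q = rng.standard_normal((n, n))
H = Q @ Q.T + np.eye(n)                    # local cost f_i(z) = 0.5 z'Hz + h'z
h = rng.standard_normal(n)
L_grad = np.linalg.eigvalsh(H).max()       # L(grad f_i)
sigma_f = np.linalg.eigvalsh(H).min()      # sigma_{f_i}
gamma_i = sigma_f / L_grad
tau_i = 0.99 / L_grad                      # tau_i < 1/L(grad f_i)
L_zstar = 1.0 / sigma_f                    # L(z_i*) for quadratic f_i (Lemma 4.18)
alpha = lambda k: 1.0 / k**2               # decrease function alpha_k = alpha_0/k^2

def local_solve(lam, z0, J):
    # Algorithm 7: projected gradient on f_i(z) - <lam, z> over the box [-1, 1]^n
    z = z0.copy()
    for _ in range(J):
        z = np.clip(z - tau_i * (H @ z + h - lam), -1.0, 1.0)
    return z

# hypothetical consecutive multipliers lambda_i^{k-2} and lambda_i^{k-1}
lam_old = np.zeros(n)
lam_new = 0.1 * rng.standard_normal(n)
z_warm = local_solve(lam_old, np.zeros(n), 500)   # warm start, plays ztilde_i^{k-1}

k = 10
beta_k = np.linalg.norm(lam_new - lam_old)
# condition (24): J_k >= log(alpha_k / (alpha_{k-1} + L(z*)*beta_k)) / log(1-gamma_i)
J_k = int(np.ceil(np.log(alpha(k) / (alpha(k - 1) + L_zstar * beta_k))
                  / np.log(1.0 - gamma_i)))
z_tilde = local_solve(lam_new, z_warm, max(J_k, 0))
print("certified number of local iterations J_k =", J_k)
```

Every quantity used to compute $J_k$ here (the previous accuracy $\alpha_{k-1}$, the multiplier difference $\beta_k$, and the local constants $\gamma_i$ and $L(z_i^\star)$) is available locally, which is the point of the certification method.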

2) Termination condition on the number of iterations for solving the local problems: Methods for bounding the number of iterations to reach a given accuracy have been studied e.g. in [10], [16] and [9]. In [10] and [16], the authors proposed dual-decomposition-based optimization methods for solving quadratic programming problems and presented termination conditions to guarantee a pre-specified accuracy. However, these methods do not directly guarantee feasibility of the sub-optimal solution. One approach is to tighten constraints to ensure feasibility, which can be conservative in practice. In [9], the authors propose an inexact decomposition algorithm for solving distributed optimization problems by employing smoothing techniques and an excessive gap condition as the termination condition on the number of iterations to achieve a given accuracy. To certify the termination condition, this method requires measuring the values of the global primal and dual functions on-line, which requires full communication in the network and is not satisfied in our distributed framework. In addition, this method does not provide any algorithms for solving the local problems. By employing the complexity upper-bound in Proposition 4.14 for Algorithm 7, we propose a termination condition in (24) to find the number of iterations $J_k$ which guarantees that the local computational error is upper-bounded by the predefined decrease function $\alpha_k$, i.e. $\|\delta_i^k\| \leq \alpha_k$.

Lemma 4.15: If the number of iterations $J_k$ in Algorithm 7 satisfies

$$J_k \geq \log_{(1-\gamma_i)}\left( \frac{\alpha_k}{\alpha_{k-1} + L(z_i^\star)\beta_k} \right) \quad (24)$$

for all $k \geq 1$, then the computational error of the local problem in Step 1 in Algorithm 5, $\delta_i^k$, satisfies $\|\delta_i^k\| \leq \alpha_k$.

Proof: We prove Lemma 4.15 by induction. Base case: For $k = 1$, the vectors $\tilde{v}^0$, $\tilde{z}^0$ and $\tilde{z}_i^0$ are initialized as zero vectors. By Proposition 4.14 and the fact $z_i^{1,0} = \tilde{z}_i^0 = 0$, we know

$$\|z_i^{1,J_1} - z_i^{1,\star}\| \leq \|z_i^{1,0} - z_i^{1,\star}\|(1-\gamma_i)^{J_1} = \|z_i^{1,\star}\|(1-\gamma_i)^{J_1}.$$

Due to the definition of the function $\alpha_k$, it follows that the term above is upper-bounded by $\alpha_0(1-\gamma_i)^{J_1}$. Using the fact that $\beta_1 = \|\tau(E_i\tilde{v}^0 - \tilde{z}_i^0)\| = 0$ and that $J_1$ satisfies (24), it is further upper-bounded by $\alpha_1$:

$$\|\delta_i^1\| = \|\tilde{z}_i^1 - z_i^{1,\star}\| = \|z_i^{1,J_1} - z_i^{1,\star}\| \leq \alpha_1.$$

Induction step: Let $l \geq 1$ be given and suppose that $\|\delta_i^l\| \leq \alpha_l$. We prove that $\|\delta_i^{l+1}\| \leq \alpha_{l+1}$. By Proposition 4.14 and the warm-starting strategy, i.e. $z_i^{l+1,0} = \tilde{z}_i^l = z_i^{l,J_l}$, we know

$$\|\delta_i^{l+1}\| = \|z_i^{l+1,J_{l+1}} - z_i^{l+1,\star}\| \leq \|z_i^{l,J_l} - z_i^{l+1,\star}\|(1-\gamma_i)^{J_{l+1}} \leq \left( \|z_i^{l,J_l} - z_i^{l,\star}\| + \|z_i^{l,\star} - z_i^{l+1,\star}\| \right)(1-\gamma_i)^{J_{l+1}} \leq \left( \alpha_l + L(z_i^\star)\beta_{l+1} \right)(1-\gamma_i)^{J_{l+1}}.$$

Due to the induction assumption and the fact that $J_{l+1}$ satisfies (24), it follows that $\|\delta_i^{l+1}\| \leq \alpha_{l+1}$. We conclude that, by the principle of induction, it holds that $\|\delta_i^k\| \leq \alpha_k$ for all $k \geq 1$.

Corollary 4.16: If Assumption 4.2 holds and the decrease rate of the function $\alpha_k$ satisfies the corresponding sufficient conditions presented in Corollaries 3.6 and 3.15, then Algorithms 5 and 6 converge to the optimal solution, with Algorithm 7 solving the local problems in Step 1. Furthermore, if the local cost functions $f_i$ are strictly positive quadratic functions, and the decrease rate of the function $\alpha_k$ satisfies the sufficient conditions presented in Corollary 3.11, then Algorithm 5 converges to the optimal solution, with Algorithm 7 solving the local problems in Step 1.

Remark 4.17: All the information required by the proposed on-line certification method, i.e., by Algorithm 7 as well as the condition on $J_k$ in (24), can be obtained on-line and locally.

3) Computation of the Lipschitz constant $L(z_i^\star)$: In the proposed on-line certification method above, the Lipschitz constant of the optimal solution function $z_i^\star(\lambda_i)$, $L(z_i^\star)$, plays an important role. While it is generally difficult to compute this Lipschitz constant, it can be computed for special cases, such as positive quadratic functions.

Lemma 4.18: Let the local cost function be a quadratic function, i.e. $f_i(z_i) = \frac{1}{2}z_i^T H_i z_i + h_i^T z_i$ with $H_i \succ 0$. A Lipschitz constant of the function $z_i^\star(\lambda_i)$ defined in (22) is given by $\frac{1}{\lambda_{\min}(H_i)}$, i.e.

$$\|z_i^\star(\lambda^1) - z_i^\star(\lambda^2)\| \leq \frac{1}{\lambda_{\min}(H_i)}\|\lambda^1 - \lambda^2\|. \quad (25)$$

Proof: Since $H_i \succ 0$, we can write $H_i = D_i D_i^T$ with $D_i$ invertible, which implies

$$z_i^\star(\lambda_i) = \mathrm{argmin}_{z_i \in \mathbb{C}_i} \, \tfrac{1}{2}z_i^T H_i z_i + (h_i - \lambda_i)^T z_i = \mathrm{argmin}_{z_i \in \mathbb{C}_i} \, \tfrac{1}{2}\|D_i^T z_i + D_i^{-1}(h_i - \lambda_i)\|^2.$$

Let $v_i = D_i^T z_i$. The optimization problem above becomes

$$v_i^\star(\lambda_i) = \mathrm{argmin}_{D_i^{-T} v_i \in \mathbb{C}_i} \, \tfrac{1}{2}\|v_i + D_i^{-1}(h_i - \lambda_i)\|^2,$$

which can be seen as the projection of the point $-D_i^{-1}(h_i - \lambda_i)$ onto the set $\mathbb{C}_i' := \{v_i \mid D_i^{-T} v_i \in \mathbb{C}_i\}$. Since $\mathbb{C}_i$ is convex, $\mathbb{C}_i'$ is convex as well. It follows directly from Proposition 2.2.1 in [4] that

$$\|v_i^\star(\lambda^1) - v_i^\star(\lambda^2)\| \leq \|D_i^{-1}(\lambda^1 - \lambda^2)\|.$$

By $z_i = D_i^{-T} v_i$, we get

$$\|z_i^\star(\lambda^1) - z_i^\star(\lambda^2)\| \leq \|D_i^{-T} D_i^{-1}(\lambda^1 - \lambda^2)\| \leq \|D_i^{-1}\|^2 \|\lambda^1 - \lambda^2\| \leq \frac{1}{\lambda_{\min}(H_i)}\|\lambda^1 - \lambda^2\|.$$

V. NUMERICAL EXAMPLE

This section illustrates the theoretical findings of the paper and demonstrates the performance of inexact AMA by solving a randomly generated distributed MPC problem with 40 sub-systems. For this example, we assume that the sub-systems are coupled only in the control input:

$$x_i(t+1) = A_{ii} x_i(t) + \sum_{j \in \mathcal{N}_i} B_{ij} u_j(t), \quad i = 1, 2, \dots, M.$$

The input-coupled dynamics allow us to eliminate the states of the distributed MPC problem, such that the optimization variable in the distributed optimization problem is the control sequence $u = [u_1^T, \dots, u_M^T]^T$, with $u_i = [u_i^T(0), u_i^T(1), \dots, u_i^T(N-1)]^T$. Examples with this structure include systems sharing one resource, e.g. a water-tank system or an energy storage system. We randomly generate a connected network with 40 agents. Each sub-system has three states and two inputs. The dynamical matrices $A_{ii}$ and $B_{ij}$ are randomly generated, i.e. generally dense, and the local systems are controllable. The input constraint $\mathbb{U}_i$ for sub-system $i$ is set to be $\mathbb{U}_i = \{u_i(t) \mid -0.4 \leq u_i(t) \leq 0.3\}$. The horizon of the MPC problem is set to $N = 11$. The local cost functions are set to be quadratic functions, i.e. $l_i(x_i(t), u_i(t)) = x_i^T(t)Qx_i(t) + u_i^T(t)Ru_i(t)$ and $l_i^f(x_i(N)) = x_i^T(N)Px_i(N)$, where $Q$, $R$ and $P$ are identity matrices. Therefore, the distributed optimization problem resulting from the distributed MPC problem satisfies Assumption 4.2, the local cost functions $f_i$ are strictly positive quadratic functions, and the results in Corollary 4.10 hold. The initial states $\bar{x}_i$ are chosen such that more than 70% of the elements of the vector $u$ are at the constraints.

In Fig. 1, we demonstrate the convergence performance of inexact AMA for solving the distributed optimization problem in Problem 4.1, originating from the randomly generated distributed MPC problem, applying Algorithm 5. In this simulation, we compare the performance of inexact AMA with three different kinds of errors $\delta_i^k$ against exact AMA, for which the errors are equal to zero. Note that these errors are synthetically constructed to specify different error properties. We solve the local problems to optimality and then add errors with predefined decreasing rates to the local optimal solutions, ensuring that the solutions remain primal feasible. The black line shows the performance of exact AMA. The blue, red and green lines show the performance of inexact AMA, where the errors $\delta_i^k$ are set to be decreasing at the rates $O(\frac{1}{k})$, $O(\frac{1}{k^2})$ and $O(\frac{1}{k^3})$, respectively. Note that all three errors satisfy the sufficient conditions for convergence in Corollary 3.11. We can observe that as the number of iterations increases, the differences $\|u^k - u^\star\|$ decrease in all cases; however, the convergence speeds are quite different. For the exact AMA algorithm (black line), the difference decreases linearly, which supports the results in Corollary 4.10. For the three cases of inexact AMA (blue, red and green lines), we can see that the differences $\|u^k - u^\star\|$ decrease more slowly than for exact AMA, and the decrease rates correspond to the decrease rates of the errors, which supports the theoretical findings in Corollary 3.11.
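The flavour of this first simulation can be reproduced in a few lines. The Python sketch below (our own toy setup, not the paper's 40-agent MPC example) runs Algorithm 5 on a small consensus problem with strongly convex quadratic local costs over a complete graph, so that $E_i = I$ and the consensus step is a plain average. Feasible local errors decaying as $O(\frac{1}{k})$, $O(\frac{1}{k^2})$ and $O(\frac{1}{k^3})$ are injected, and the final accuracy tracks the injected error rate, as observed in Fig. 1.

```python
import numpy as np

rng = np.random.default_rng(2)
M, n = 5, 3
a = rng.standard_normal((M, n))
H = []
for i in range(M):
    Q = rng.standard_normal((n, n))
    H.append(Q @ Q.T + np.eye(n))       # H_i > 0 with lambda_min >= 1, so sigma_fi >= 1
# optimum of sum_i 0.5 z'H_i z - a_i'z under consensus z_1 = ... = z_M = v
v_star = np.linalg.solve(sum(H), a.sum(axis=0))
tau = 0.9                               # tau < min_i sigma_{f_i}

for rate in (1, 2, 3):                  # injected error decay O(1/k^rate)
    lam = np.zeros((M, n))
    v = np.zeros(n)
    for k in range(1, 301):
        # Step 1 (inexact): z_i = argmin f_i(z) - <lam_i, z>, plus a decaying error
        delta = rng.standard_normal((M, n))
        delta *= (1.0 / k**rate) / np.linalg.norm(delta)
        z = np.stack([np.linalg.solve(H[i], a[i] + lam[i]) for i in range(M)]) + delta
        # Step 3: consensus step (complete graph, E_i = I): average over neighbours
        v = z.mean(axis=0)
        # Step 5: dual update; preserves sum_i lam_i = 0 as in Lemma 4.8
        lam += tau * (v - z)
    print(f"error rate O(1/k^{rate}):  ||v - v*|| = {np.linalg.norm(v - v_star):.2e}")
```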
The second simulation illustrates the convergence properties of inexact AMA where the proximal gradient method in Algorithm 7 is applied to solve the local problems in Step 1 in Algorithm 5. In this experiment, Algorithm 7 is stopped after a number of iterations guaranteeing that the local computational error $\delta_i^k$ decreases at a certain rate. The error decrease rate is selected to be $O(\frac{1}{k^2})$, i.e., the decrease function $\alpha_k$ is set to be $\alpha_k = \frac{\alpha_0}{k^2}$, and thereby satisfies the second sufficient condition in Corollary 3.11. In order to ensure $\|\delta_i^k\| \leq \alpha_k$, the number of iterations $J_k$ for the proximal gradient method in Algorithm 7 is chosen according to the certification method presented in Section IV-D, such that condition (24) is satisfied. Note that we use a warm-starting strategy for the initialization of Algorithm 7. Fig. 2 shows the comparison of the performance of exact AMA and inexact AMA. We can observe that the black line (exact AMA) and the red line (inexact AMA with Algorithm 7 solving the local problems with the numbers of iterations $J_k$ satisfying (24)) basically overlap. Inexact AMA converges to the optimal solution as the iterations increase, and shows almost the same performance as exact AMA. Fig. 3 shows the corresponding local error sequence $\delta_i^k$, where the number of iterations $J_k$ for Algorithm 7 satisfies the condition in (24). We can observe that the global error sequence $\delta^k = [\delta_1^k, \dots, \delta_M^k]$ is upper-bounded by the decrease function $\alpha_k$. While $k$ is small, the upper-bound $\alpha_k$ is tight for the error sequence. As $k$ increases, the error decreases faster and the bound becomes loose.

Fig. 4 shows the comparison of the numbers of iterations for Algorithm 7, computed using two different approaches. Approach 1 uses the termination condition proposed in Section IV-D. In Approach 2, we first compute the optimal solution of the local problem $z_i^{k,\star}$, and then run the proximal gradient method to find the smallest number of iterations for which the difference of the local sub-optimal solution satisfies the decrease function $\alpha_k$, i.e. $\|z_i^{k,j} - z_i^{k,\star}\| \leq \alpha_k$. Approach 2 therefore gives the exact minimal number, whereas Approach 1 uses a bound on the minimal number. Note that the second approach guarantees $\|\delta_i^k\| \leq \alpha_k$ for all $k$; however, this method is not practically applicable, since the optimal solution $z_i^{k,\star}$ is unknown. Its purpose is merely to compare with the proposed certification method and to show how tight the theoretical bound in (24) is. For both techniques, we use a warm-starting strategy for the initialization of the proximal gradient method to solve the local problems for each $k$ in Algorithm 5. In Fig. 4, the green line and region result from the termination condition proposed in Section IV-D, and the pink line and region result from the second approach. The solid green and red lines show the average value of the numbers of iterations of the proximal gradient method for solving the local problems over the 40 sub-systems. The upper and lower boundaries of the regions show the maximal and minimal numbers of iterations, respectively. The maximal number of iterations for the proposed certification method (green region) is equal to 7, while for the second method (the red region) it is equal to 4. Fig. 4 shows that the certification approach in (24), which can be performed locally, is reasonably tight, and the provided number of iterations is close to the minimal number of iterations required to satisfy the desired error.

Figure 1: Comparison of the performance of AMA and inexact AMA (IAMA) with the errors decreasing at pre-defined rates.

VI. APPENDIX

A. Proof of Lemma 3.4

Proof: In order to show the equivalence, we prove that Steps 1, 2 and 3 in Algorithm 3 are equivalent to Step 1 in Algorithm 1, i.e. that the following equality holds:

$$\lambda^k = \mathrm{prox}_{\tau\psi,\epsilon^k}\left( \lambda^{k-1} - \tau(\nabla\phi(\lambda^{k-1}) + e^k) \right) \quad (26)$$

with $e^k = A\delta^k$ and $\epsilon^k = \tau^2 L(\psi)\|B\theta^k\| + \frac{\tau^2}{2}\|B\theta^k\|^2$. Step 2 in Algorithm 3 implies:

$$B^T\lambda^{k-1} + \tau B^T(c - A\tilde{x}^k - Bz^k) \in \partial g(z^k),$$

where $z^k = \mathrm{argmin}_z\left\{g(z) + \langle \lambda^{k-1}, -Bz \rangle + \frac{\tau}{2}\|c - A\tilde{x}^k - Bz\|^2\right\} = \tilde{z}^k - \theta^k$. From the property of the conjugate function, $p \in \partial f(q) \Leftrightarrow q \in \partial f^\star(p)$, it follows:

$$z^k \in \partial g^\star\left( B^T\lambda^{k-1} + \tau B^T(c - A\tilde{x}^k - Bz^k) \right).$$

Figure 2: Comparison of the performance of AMA and inexact AMA (IAMA) with the proximal-gradient method solving the local problems, where the number of iterations is chosen according to two approaches: Approach 1 uses a bound on the minimal number, i.e. the termination condition proposed in (24); Approach 2 computes the exact minimal number, which requires the optimal solution of the local problem $z_i^{k,\star}$ at each iteration.

Figure 3: Error sequence $\delta_i^k$ of inexact AMA using the proximal-gradient method for solving the local problems, with the numbers of iterations satisfying (24).

By multiplying with $B$ and subtracting $c$ on both sides, we obtain:

$$Bz^k - c \in B\,\partial g^\star\left( B^T\lambda^{k-1} + \tau B^T(c - A\tilde{x}^k - Bz^k) \right) - c.$$

By multiplying with $\tau$ and adding $\lambda^{k-1} + \tau(c - A\tilde{x}^k - Bz^k)$ on both sides, we get:

$$\lambda^{k-1} - \tau A\tilde{x}^k \in \tau B\,\partial g^\star\left( B^T\lambda^{k-1} + \tau B^T(c - A\tilde{x}^k - Bz^k) \right) - \tau c + \lambda^{k-1} + \tau(c - A\tilde{x}^k - Bz^k).$$

Since $\psi(\lambda) = g^\star(B^T\lambda) - c^T\lambda$, we have $\partial\psi(\lambda) = B\,\partial g^\star(B^T\lambda) - c$, which implies:

$$\lambda^{k-1} - \tau A\tilde{x}^k \in \tau\,\partial\psi\left( \lambda^{k-1} + \tau(c - A\tilde{x}^k - Bz^k) \right) + \lambda^{k-1} + \tau(c - A\tilde{x}^k - Bz^k).$$

Since $z^k = \tilde{z}^k - \theta^k$, it follows that:

$$\lambda^{k-1} - \tau A\tilde{x}^k \in \tau\,\partial\psi\left( \lambda^{k-1} + \tau(c - A\tilde{x}^k - B\tilde{z}^k + B\theta^k) \right) + \lambda^{k-1} + \tau(c - A\tilde{x}^k - B\tilde{z}^k + B\theta^k).$$

By Step 3 in Algorithm 3, the above expression results in:

$$\lambda^{k-1} - \tau A\tilde{x}^k \in \tau\,\partial\psi(\lambda^k + \tau B\theta^k) + \lambda^k + \tau B\theta^k.$$

From Step 1 in Algorithm 3 and the property of the conjugate function, $p \in \partial f(q) \Leftrightarrow q \in \partial f^\star(p)$, we obtain:

$$\lambda^{k-1} - \tau A\left( \nabla f^\star(A^T\lambda^{k-1}) + \delta^k \right) \in \tau\,\partial\psi(\lambda^k + \tau B\theta^k) + \lambda^k + \tau B\theta^k.$$

17 7 Fgure 4: Comparson of the numbers of teratons for Algorthm 7, usng two approaches: Approach uses a bound on the mnmal number,.e. the termnaton condton proposed n (24); and Approach 2 computes the exact mnmal number, whch requres the optmal soluton of the local problem z, at each teraton. By defnton of the functon φ, we get: whch s equvalent to: λ τ( φ(λ ) + Aδ ) τ ψ(λ + τbθ ) + λ + τbθ, λ = prox τψ (λ τ( φ(λ ) + e )) τbθ, wth e = Aδ. In order to complete the proof of equaton (26), we need to show that λ s an nexact soluton of the proxmal operator as defned n equaton (3) wth the error ɛ = τ 2 L(ψ) Bθ + τ 2 2 Bθ 2,.e. to prove: τψ(λ ) + { 2 λ v 2 ɛ + mn λ τψ(λ) + } 2 λ v 2, where ν = λ τ( φ(λ ) + Aδ ). Fnally, usng τψ(λ + τbθ ) + 2 λ + τbθ ν 2 τψ(λ ) 2 λ ν 2 τ(ψ(λ + τbθ ) ψ(λ )) + 2 τbθ 2 equaton (26) s proved. B. Proof of Lemma 3.2 Proof: We frst prove that there exsts an upper bound on the seres b = exsts a postve nteger such that 0 < α +) < α ( +) < α ( + b = p= τ 2 L(ψ) Bθ + τ 2 p= α p p. We can wrte the seres b as α p p + α p p p=. 2 Bθ 2 = ɛ,. Snce 0 < α <, there always Snce satsfes 0 < α + and 0 < α <, then we now that for any t the functon α t t s a non-decreasng functon wth respect to t. Due to the fact that for any non-decreasng functon f(t), the followng nequalty holds. x x f(p) = f( t )dt + f(x) f(t)dt + f(x) p Z:y p x y y
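The last step of this proof can be probed numerically for a concrete choice of $\psi$. The sketch below takes $\psi = \|\cdot\|_1$, whose proximal operator is soft-thresholding, perturbs the exact prox point by $\tau B\theta^k$ as in the proof, and compares the resulting suboptimality gap in the prox objective with the error level $\epsilon^k$ from (26). The dimension, step size and perturbation magnitude are arbitrary illustrative choices; $L(\psi) = \sqrt{d}$ is used as a valid Lipschitz constant for the $\ell_1$-norm.

```python
import numpy as np

# Numerical illustration of the inexact-prox characterization (26):
# lambda_k = prox_{tau psi}(nu) - tau*B*theta_k is an eps_k-inexact
# prox of nu in the sense of definition (3).  Here psi = ||.||_1,
# whose prox is soft-thresholding; all data are illustrative.

rng = np.random.default_rng(1)
d, tau = 6, 0.2
L_psi = np.sqrt(d)                       # Lipschitz constant of the l1-norm

def psi(x):
    return np.abs(x).sum()

def prox_tau_psi(v):
    # soft-thresholding: the exact prox of tau*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

for trial in range(5):
    nu = rng.standard_normal(d)
    Btheta = 0.3 * rng.standard_normal(d)     # plays the role of B @ theta_k

    lam_exact = prox_tau_psi(nu)              # exact prox point
    lam = lam_exact - tau * Btheta            # inexact point from the proof

    # suboptimality gap D of lam in the prox objective ...
    obj = lambda x: tau * psi(x) + 0.5 * np.linalg.norm(x - nu) ** 2
    D = obj(lam) - obj(lam_exact)
    # ... versus the error level eps_k from equation (26)
    eps_k = tau**2 * L_psi * np.linalg.norm(Btheta) \
            + 0.5 * tau**2 * np.linalg.norm(Btheta) ** 2
    print(f"trial {trial}: gap D = {D:.4e}, eps_k = {eps_k:.4e}, D <= eps_k: {D <= eps_k}")
```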

B. Proof of Lemma 3.2

Proof: We first prove that there exists an upper bound on the series $b_k = \sum_{p=1}^{k} \frac{\alpha^{-p}}{p}$. We can write the series $b_k$ as
$$b_k = \sum_{p=1}^{\bar k} \frac{\alpha^{-p}}{p} + \sum_{p=\bar k+1}^{k} \frac{\alpha^{-p}}{p}.$$
Since $0 < \alpha < 1$, there always exists a positive integer $\bar k$ such that $0 < \alpha \le \frac{\bar k}{\bar k+1}$. Since $\bar k$ satisfies $0 < \alpha \le \frac{\bar k}{\bar k+1}$ and $0 < \alpha < 1$, we know that for any $t \ge \bar k$ the function $\frac{\alpha^{-t}}{t}$ is a non-decreasing function with respect to $t$. Due to the fact that for any non-decreasing function $f(t)$ the following inequality holds,
$$\sum_{p\in\mathbb{Z}:\, y \le p \le x} f(p) = \int_{y}^{x} f(\lfloor t\rfloor)\,dt + f(x) \le \int_{y}^{x} f(t)\,dt + f(x),$$
where $\lfloor\cdot\rfloor$ denotes the floor operator, the series $b_k$ can be upper-bounded by
$$b_k \le \sum_{p=1}^{\bar k} \frac{\alpha^{-p}}{p} + \int_{\bar k}^{k} \frac{\alpha^{-t}}{t}\,dt + \frac{\alpha^{-k}}{k}.$$
We know that the integral of the function $\frac{\alpha^{-t}}{t}$ is equal to $\mathrm{Ei}(-t\log(\alpha))$, where $\mathrm{Ei}(\cdot)$ denotes the exponential integral function. By using the fact that $\mathrm{Ei}(-x) = -E_1(x)$, where $E_1(x) := \int_x^\infty \frac{e^{-t}}{t}\,dt$, and inequality (5.1.20) in [1], it follows that the exponential integral function $E_1(x)$ satisfies
$$\frac{1}{2}\log\left(1 + \frac{2}{x}\right) < e^x E_1(x) < \log\left(1 + \frac{1}{x}\right).$$
Since $e^{-x} > 0$, we can rewrite the inequality as
$$\frac{1}{2}e^{-x}\log\left(1 + \frac{2}{x}\right) < E_1(x) < e^{-x}\log\left(1 + \frac{1}{x}\right).$$
Hence, the series $b_k$ can be further upper-bounded by
$$b_k < \sum_{p=1}^{\bar k} \frac{\alpha^{-p}}{p} + \frac{\alpha^{-k}}{k} + \mathrm{Ei}(-k\log(\alpha)) - \mathrm{Ei}(-\bar k\log(\alpha)),$$
and we can now find the upper bound for the series $s_k$ as
$$s_k = \alpha^k b_k < \alpha^k\sum_{p=1}^{\bar k} \frac{\alpha^{-p}}{p} + \frac{1}{k} + \frac{1}{2}\log\left(1 + \frac{2}{k\log(\alpha)}\right) + \alpha^{k-\bar k}\log\left(1 + \frac{1}{\bar k\log(\alpha)}\right).$$
Since $0 < \alpha < 1$ and the integer $\bar k$ is a constant for a given $\alpha$, the upper bound above converges to zero as $k$ goes to infinity. In addition, we know that the two terms $\alpha^k\sum_{p=1}^{\bar k}\frac{\alpha^{-p}}{p}$ and $\alpha^{k-\bar k}\log(1 + \frac{1}{\bar k\log(\alpha)})$ converge to zero linearly with the constant $\alpha$. From a Taylor series expansion, we know that the term $\frac{1}{2}\log(1 + \frac{2}{k\log(\alpha)})$ converges to zero at the rate $O(\frac{1}{k})$. Note that since $0 < \alpha < 1$, the term $\frac{1}{2}\log(1 + \frac{2}{k\log(\alpha)})$ is always negative for all $k > 0$. To summarize, the upper bound above converges to zero with the rate $O(\frac{1}{k})$. Therefore, we conclude that the series $s_k$ converges to zero as $k$ goes to infinity, and that the convergence rate is $O(\frac{1}{k})$.
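Both ingredients of this proof are easy to check numerically: inequality (5.1.20) in [1] can be compared against scipy's implementation of $E_1$, and the $O(1/k)$ rate of $s_k = \sum_{p=1}^{k}\alpha^{k-p}/p$ can be observed by monitoring $k\,s_k$, which approaches the constant $1/(1-\alpha)$. The sketch below does both; the values of $\alpha$, $x$ and $k$ are arbitrary illustrative choices.

```python
import numpy as np
from scipy.special import exp1

# Two quick numerical checks on the ingredients of Lemma 3.2.

# 1) Inequality (5.1.20) in [1]:
#    0.5*exp(-x)*log(1 + 2/x) < E1(x) < exp(-x)*log(1 + 1/x)  for x > 0
for x in [0.1, 1.0, 5.0, 20.0]:
    lower = 0.5 * np.exp(-x) * np.log(1 + 2 / x)
    upper = np.exp(-x) * np.log(1 + 1 / x)
    assert lower < exp1(x) < upper

# 2) The series s_k = sum_{p=1}^{k} alpha^(k-p)/p decays like O(1/k):
#    k * s_k should approach the constant 1/(1 - alpha).
alpha = 0.8
for k in [10, 100, 1000, 10000]:
    p = np.arange(1, k + 1)
    s_k = np.sum(alpha ** (k - p) / p)
    print(f"k = {k:6d}   k*s_k = {k * s_k:.4f}   (1/(1-alpha) = {1/(1-alpha):.4f})")
```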
REFERENCES

[1] M. Abramowitz and I. A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables. Dover Publications, Incorporated, 1974.
[2] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
[3] A. Beck and M. Teboulle. A fast iterative shrinkage thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[4] D. P. Bertsekas, A. Nedic, and A. E. Ozdaglar. Convex Analysis and Optimization. Athena Scientific, Belmont, 2003.
[5] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont, Massachusetts, 1997.
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3:1–122, 2011.
[7] P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer Optimization and Its Applications, pages 185–212. Springer New York, 2011.
[8] C. Conte, N. R. Voellmy, M. N. Zeilinger, M. Morari, and C. N. Jones. Distributed synthesis and control of constrained linear systems. In American Control Conference, 2012.
[9] Q. T. Dinh, I. Necoara, and M. Diehl. Fast inexact decomposition algorithms for large-scale separable convex optimization. arXiv preprint arXiv:1212.4275, 2012.
[10] P. Giselsson. Execution time certification for gradient-based optimization in model predictive control. In 51st IEEE Conference on Decision and Control, December 2012.
[11] P. Giselsson, M. D. Doan, T. Keviczky, B. De Schutter, and A. Rantzer. Accelerated gradient methods and dual decomposition in distributed model predictive control. Automatica, 49(3):829–833, 2013.
[12] T. Goldstein, B. O'Donoghue, and S. Setzer. Fast alternating direction optimization methods. CAM report, pages 12–35, 2012.
[13] H. Lin, J. Mairal, and Z. Harchaoui. A universal catalyst for first-order optimization. In Advances in Neural Information Processing Systems, 2015.
[14] I. Necoara and V. Nedelcu. Rate analysis of inexact dual first order methods: Application to distributed MPC for network systems. arXiv preprint [math], February 2013.
[15] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). In Soviet Mathematics Doklady, volume 27, pages 372–376, 1983.

[16] P. Patrinos and A. Bemporad. An accelerated dual gradient-projection algorithm for embedded linear model predictive control. IEEE Transactions on Automatic Control, 59:18–33, 2014.
[17] Y. Pu, M. N. Zeilinger, and C. N. Jones. Fast alternating minimization algorithm for model predictive control. In 19th World Congress of the International Federation of Automatic Control, 2014.
[18] Y. Pu, M. N. Zeilinger, and C. N. Jones. Quantization design for distributed optimization with time-varying parameters. In 54th IEEE Conference on Decision and Control, 2015.
[19] Y. Pu, M. N. Zeilinger, and C. N. Jones. Inexact fast alternating minimization algorithm for distributed model predictive control. In 53rd IEEE Conference on Decision and Control, 2014.
[20] S. Richter, C. N. Jones, and M. Morari. Computational complexity certification for real-time MPC with input constraints based on the fast gradient method. IEEE Transactions on Automatic Control, 57(6):1391–1403, 2012.
[21] R. Scattolini. Architectures for distributed and hierarchical model predictive control – a review. Journal of Process Control, 19(5):723–731, 2009.
[22] M. Schmidt, N. Le Roux, and F. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In 25th Annual Conference on Neural Information Processing Systems, 2011.
[23] P. Tseng. Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM Journal on Control and Optimization, 29:119–138, 1991.

Ye Pu received the B.S. degree from the School of Electronic Information and Electrical Engineering at Shanghai Jiao Tong University, China, in 2008, and the M.S. degree from the Department of Electrical Engineering and Computer Sciences at the Technical University Berlin, Germany, in 2011. Since February 2012, she has been a Ph.D. student in the Automatic Control Laboratory at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. Her research interests are in the area of fast and distributed predictive control and optimization, and distributed algorithms with communication limitations.

Colin N. Jones received the Bachelor's degree in Electrical Engineering and the Master's degree in Mathematics from the University of British Columbia, Vancouver, BC, Canada, and the Ph.D. degree from the University of Cambridge, Cambridge, U.K., in 2005. He is an Assistant Professor in the Automatic Control Laboratory at the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland. He was a Senior Researcher at the Automatic Control Laboratory of the Swiss Federal Institute of Technology Zurich until 2010. His current research interests are in the areas of high-speed predictive control and optimisation, as well as green energy generation, distribution and management.

Melanie N. Zeilinger received the Diploma degree in engineering cybernetics from the University of Stuttgart, Germany, in 2006, and the Ph.D. degree (with honors) in electrical engineering from ETH Zurich, Switzerland, in 2011. She is an Assistant Professor at the Department of Mechanical and Process Engineering at ETH Zurich, Switzerland. She was a Marie Curie fellow and Postdoctoral Researcher with the Max Planck Institute for Intelligent Systems, Tübingen, Germany, until 2015, and with the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley, CA, USA, from 2012 to 2014. From 2011 to 2012 she was a Postdoctoral Fellow with the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. Her current research interests include distributed control and optimization, as well as safe learning-based control, with applications to energy distribution systems and human-in-the-loop control.
