Accelerated Distributed Nesterov Gradient Descent for Convex and Smooth Functions

Size: px
Start display at page:

Download "Accelerated Distributed Nesterov Gradient Descent for Convex and Smooth Functions"

Transcription

1 07 IEEE 56h Annual Conference on Decision and Conrol (CDC) December -5, 07, Melbourne, Ausralia Acceleraed Disribued Neserov Gradien Descen for Convex and Smooh Funcions Guannan Qu, Na Li Absrac This paper considers he disribued opimizaion problem over a nework, where he objecive is o opimize a global funcion formed by an average of local funcions, using only local compuaion and communicaion. We develop an Acceleraed Disribued Neserov Gradien Descen (Acc- DNGD) mehod for convex and smooh objecive funcions. We show ha i achieves a O(/.4 ɛ ) ( ɛ (0,.4)) convergence rae when a vanishing sep size is used. The convergence rae can be improved o O(/ ) when we use a fixed sep size and he objecive funcions saisfy a special propery. To he bes of our knowledge, Acc-DNGD is he fases among all disribued gradien-based algorihms ha have been proposed so far. I. INTRODUCTION Given a se of agens N = {,,..., n}, each of which has a local convex cos funcion f i (x) : R N R, he objecive of disribued opimizaion is o find x ha minimizes he average of all he funcions, min x R N f(x) n n f i (x) i= using local communicaion and local compuaion. The local communicaion is defined hrough a conneced and undireced communicaion graph G = (V, E), where he nodes V = N and edges E V V. This problem has found various applicaions in muli-agen conrol, disribued sae esimaion over sensor neworks, large scale compuaion in machine learning, ec [ [3. There exis many sudies on developing disribued algorihms for his problem, e.g., [4 [5, mos of which are disribued gradien descen algorihms. Each ieraion is composed of a consensus sep and a gradien descen sep. These mehods have achieved sublinear convergence raes (usually O( ) for convex funcions. When he funcions are nonsmooh, he sublinear convergence raes mach Cenralized Gradien Descen (CGD). More recen work have improved hese resuls for smooh funcions, by adding a correcion erm [6 [8, or using a gradien esimaion sequence [9 [5. Wih hese echniques, paper [6, [ can achieve a O( ) convergence rae for smooh funcions, maching he rae of CGD. Addiionally, if srong convexiy is furher assumed, paper [6 [8, [ [5 can achieve a linear convergence rae, maching he rae of CGD as well. I is known ha among all cenralized gradien based algorihms, Cenralized Neserov Gradien Descen (CNGD) [6 achieves he opimal convergence rae in erms of firs-order oracle complexiy. For µ-srongly convex and L- smooh funcions, i achieves a O(( µ/l) ) convergence rae; for convex and L-smooh funcions, i achieves Guannan Qu and Na Li are affiliaed wih John A. Paulson School of Engineering and Applied Sciences a Harvard Universiy. gqu@g.harvard.edu, nali@seas.harvard.edu. This work is suppored under NSF ECCS and NSF CAREER a O(/ ) convergence rae. The nice convergence raes lead o he quesion of his paper: how o decenralize he Neserov Gradien mehod o achieve similar convergence raes? Our recen work [7 has sudied he µ-srongly convex and L-smooh case. This paper will focus on he convex and L-smooh case (wihou he srongly convex assumpion). Previous work in his line includes [8 ha develops Disribued Neserov Gradien (D-NG) mehod and shows ha i has a convergence rae of O( log ), which is no faser han he rae of CGD (O( )). In his paper, we propose an Acceleraed Disribued Neserov Gradien Descen (Acc-DNGD) mehod. We show ha i achieves a O(/.4 ɛ ) (for any ɛ (0,.4)) convergence rae when a vanishing sep size is used. 
We furher show ha he convergence rae can be improved o O(/ ) when we use a fixed sep size and he objecive funcion is a composiion of a linear map and a srongly-convex and smooh funcion. Boh raes are faser han wha CGD and CGD-based disribued mehods can achieve (O(/)). To he bes of he auhors knowledge, he O(/.4 ɛ ) rae is he fases among all disribued gradien-based algorihms being proposed so far. Our algorihm is a combinaion of CNGD and a gradien esimaion scheme. The gradien esimaion scheme has been sudied under various conexs in [9 [5. As [ has poined ou, when combining he gradien esimaion scheme wih a cenralized algorihm, he resuling disribued algorihm could poenially mach he convergence rae of he cenralized algorihm. The resuls in his paper show ha, alhough combining he scheme wih CNGD will no give a convergence rae (O(/.4 ɛ )) maching ha of CNGD (O(/ )), i does improve over previously known CGDbased disribued algorihms (O(/)). In he res of he paper, Secion II formally defines he problem and presens our algorihm and resuls. Secion III proves he convergence raes. Secion IV provides numerical simulaions and Secion V concludes he paper. Noaions. In his paper, n is he number of agens, and N is he dimension of he domain of he f i s. Noaion denoes -norm for vecors, and Frobenius norm for marices. Noaion denoes specral norm for marices. Noaion, denoes inner produc for vecors. Noaion ρ( ) denoes specral radius for square marices, and denoes a n- dimensional all one column vecor. All vecors, when having dimension N (he dimension of he domain of he f i s), will Reference [8 also sudies an algorihm ha uses muliple consensus seps per ieraion, and achieves a O(/ ) convergence rae. In his paper, we focus on algorihms ha only use one or a consan number of consensus seps per ieraion. We only include algorihms ha are gradien based (wihou exra informaion like Hessian), and use one (or a consan number of) sep(s) of consensus afer each gradien evaluaion /7/$ IEEE 60

2 be regarded as row vecors. As a special case, all gradiens, f i (x) and f(x) are inerpreed as N-dimensional row vecors. Noaion, when applied o vecors of he same dimension, denoes elemen wise less han or equal o. II. PROBLEM AND ALGORITHM A. Problem Formulaion Consider n agens, N = {,,..., n}, each of which has a funcion f i : R N R. The objecive of disribued opimizaion is o find x o minimize he average of all he funcions, i.e. min f(x) n f i (x) () x R N n i= using local communicaion and local compuaion. The local communicaion is defined hrough a conneced undireced communicaion graph G = (V, E), where he nodes V = N and he edges E V V. Agen i and j can send informaion o each oher if and only if (i, j) E. The local compuaion means ha each agen can only make is decision based on he local funcion f i and he informaion obained from is neighbors. Throughou he paper, we assume ha f has a minimizer x wih opimal value f. We will use he following assumpions in he res of he paper. Assumpion. i N, f i is convex. As a resul, f is also convex. Assumpion. i N, f i is L-smooh, ha is, f i is differeniable and he gradien is L-Lipschiz coninuous, i.e., x, y R N, f i (x) f i (y) L x y. As a resul, f is L-smooh. Assumpion 3. The se of minimizers of f is compac. B. Cenralized Neserov Gradien Descen (CNGD) We briefly inroduce a version of cenralized Neserov Gradien Descen (CNGD) ha is derived from Secion. of [6. CNGD keeps updaing hree variables x(), v(), y() R N, saring from an iniial poin x(0) = v(0) = y(0) R N, and he updae equaion is given by x( + ) = y() η f(y()) (a) v( + ) = v() η α f(y()) y( + ) = ( α + )x( + ) + α + v( + ), (c) where (α ) =0 is defined by an arbirarily chosen α 0 (0, ) and he updae equaion α + = ( α + )α, where α + always akes he unique soluion in (0, ). The following heorem (adaped from [6, Thm.., Lem...4) gives he convergence rae of CNGD. Theorem. In CNGD (), under Assumpion and, when 0 < η L, we have f(x()) f = O( ). C. Our Algorihm: Acceleraed Disribued Neserov Gradien Descen (Acc-DNGD) We design our algorihm based on a consensus marix W = [w ij R n n. Here w ij sands for how much agen i weighs is neighbor j s informaion. W saisfies he following properies: (a) (i, j) E, w ij > 0. i, w ii > 0. w ij = 0 elsewhere. Marix W is doubly sochasic, i.e. i w i j = j w ij = for all i, j N. As a resul, σ (0, ) which depends on he specrum of W, such ha for any ω R n, we have he averaging propery, W ω ω σ ω ω where ω = n T ω (he average of he enries in ω) [9. How o selec a consensus marix o saisfy hese properies has been inensely sudied, e.g. [9, [30. In our algorihm Acc-DNGD, each agen keeps a copy of he hree variables in CNGD, x i (), v i (), y i () and in addiion s i () which serves as a gradien esimaor. The iniial condiion is x i (0) = v i (0) = y i (0) = 0 and s i (0) = f(0), 3 and he algorihm updaes as follows: x i ( + ) = j w ij y j () s i () (3a) v i ( + ) = w ij v j () s i () α j (3b) y i ( + ) = ( α + )x i ( + ) + α + v i ( + ) (3c) s i ( + ) = j w ij s j () + f i (y i ( + )) f i (y i ()) (3d) where [w ij n n are he consensus weighs and (0, L ) are he sep sizes. Sequence (α ) 0 is generaed by, firs selecing α 0 = η 0 L (0, ), hen given α (0, ), selecing α + o be he unique soluion in (0, ) of he following equaion, 4 α + = + ( α + )α. We will consider wo varians of he algorihm wih he following wo sep size rules. Vanishing sep size rule: = η (+ 0) for some η β (0, L ), β (0, ) and 0. Fixed sep size rule: = η > 0. 
Because w ij = 0 when (i, j) / E, each node i only needs o send x i (), v i (), y i () and s i () o is neighbors. Therefore, he algorihm can be operaed in a fully disribued fashion wih only local communicaion. The addiional erm s i () allows each agen o obain an esimae on he global gradien n j f j(y j ()) (for more deails, see Secion II-D). Compared wih disribued algorihms wihou his esimaion erm, i helps improve he convergence speed. As a resul, we call his mehod as Acceleraed Disribued Neserov Gradien Descen (Acc-DNGD) mehod. D. Inuiion Behind our Algorihm Here we briefly explain how he algorihm works. Firs we noe ha Eq. (3a)-(3c) is similar o Eq. (), excep he weighed average erms ( j w ijy j (), j w ijv j ()) and he new erm s i () ha replaces he gradien erms. 3 We noe ha he iniial condiion s i (0) = f(0) requires he agens o conduc an iniial run of consensus. We impose his iniial condiion for echnical reasons, while we expec he resuls of his paper o hold for a relaxed iniial condiion, s i (0) = f i (0) which does no need iniial coordinaion. We use he relaxed iniial condiion in numerical simulaions. 4 Wihou causing any confusion wih he α in (), in he res of he paper we abuse he noaion of α. 6

3 We have he following circular argumens ha explain why algorihm (3) should work. Argumen : Assuming s i () n j= f j(y j ()), hen he algorihm converges. To see his, noice he weighed average erms ( j w ijy j (), j w ijv j ()) ensure ha differen agens reach consensus, i.e. j, x i () x j (), y i () y j () and v i () v j () (as a resul, j w ijy j () y i (), j w ijv j () v i ()). If we furher assume ha s i () n j f j(y j ()), hen since y j () y i (), we have s i () n j f j(y i ()) = f(y i ()). Hence (3a)-(3c) can be rewrien as x i ( + ) y i () f(y i ()) (4a) v i ( + ) v i () f(y i ()) (4b) α y i ( + ) ( α + )x i ( + ) + α + v i ( + ),(4c) which is exacly () (excep he sep size rule), and hence we expec convergence. Argumen : Assuming he algorihm converges, hen s i () n j= f j(y j ()). To see his, from (3d) and he fac ha i w i j =, we have s() := n j= s j() = n j= f j(y j ()). Assuming he convergence of he algorihm, we will have he inpu o (3d), f i (y i ( + )) f i (y i ()) L y i ( + ) y i () 0. Because of he vanishing inpu, and he aking weighed average of neighbor sep ( j w ijs j ()) in (3d), we can expac ha evenually s i () s() = n n j= f j(y j ()). Though Argumen and only form a circular argumen, hey provide a high-level guideline for he rigorous proof in Secion III. To give he rigorous proof, i urns ou ha we need o use a vanishing sep size in (3) insead of a fixed sep size as in () (we can sill use a fixed sep size if f i has special srucures, cf. Theorem 3). This slows down he convergence rae of our algorihm (/.4 ɛ ) compared o CNGD (/ ) (cf. Theorem ). An observaion from he above circular argumen is ha, s i () acs as a gradien esimaor ha esimaes he average gradien n j f j(y j ()). This observaion can be used o devise sopping crierion of he algorihm (e.g. he algorihm sops when s i () is sufficienly close o 0). E. Convergence of he Algorihm To sae he convergence resuls, we need o define he average sequence, x() = n i= x i() R N. We summarize our convergence resuls below. Theorem. Suppose Assumpion, and 3 are rue and wihou loss of generaliy we assume v(0) x. Le he sep η size be = (+ 0) wih β = ɛ where ɛ (0,.4). β Suppose he following condiions are me. (i) 0 >. min(( σ+3 3 σ+ 4 )σ/(8β), ( 6 ) β ) 5+σ (ii) η < min( σ ( σ)4, 9 3 L 36866L ). (iii) ( ) 3/ D(β, 0)(β 0.6)( σ) η < 96( 0 + ) β L /3 [4 + R / v(0) x where D(β, 0 ) = and R is he diameer ( 0+3) e 6+ β 6 of he (f( x(0)) f + L v(0) x )-level se of f. 5 Then, f( x()) f = O( )..4 ɛ In Theorem, condiion (i) inends o make /+ close o which is required in Lemma 6 (iii), and condiions (ii)(iii) inend o make close o 0 which is required in Lemma 6 (ii). While he condiions are needed for he proof, we expec he same resul will hold if we simply le 0 = and η = L, which is wha we choose in he simulaions in Secion IV. The reason is ha, regardless of he value of η and 0, we have 0 and /+, and hence for large enough, and /+ will auomaically be close o 0 and respecively. While in Theorem we require β > 0.6, we conjecure ha he algorihm will sill converge even if β [0, 0.6 and he convergence rae will be O( β ). We noe ha β = 0 corresponds o he case of a fixed sep size. In Secion IV we will use numerical mehods o es his conjecure. In he nex heorem, we provide a O( ) convergence resul when a fixed sep size is used and he objecive funcions belong o a special class. Theorem 3. Assume each f i (x) can be wrien as f i (x) = h i (xa i ), where A i is a non-zero N M i marix, and h i (x) : R Mi R is a µ 0 -srongly convex and L 0 -smooh funcion. 
Suppose we use he fixed sep size rule = η, wih 0 < η < min( σ 9 3 L, µ.5 ( σ) 3 L ) where L = L 0 ν wih ν = max i A i (where A i means he specral norm of A i ); and µ = µ 0 γ wih γ being he smalles non-zero eigenvalue of marix A = n i= A ia T i. Then, we have f( x()) f = O( ). An imporan example of he ype of funcion f i (x) in Theorem 3 is he square error for linear regression when he sample size is less han he parameer dimension. Remark. All he sep size condiions used in his secion are conservaive. This is because we have used coarse specral bounds in he proofs (see Lemma 0,, ), in order o simplify mahemaical calculaions. In numerical simulaions, we show ha large sep sizes can be used. When applying he algorihm in pracice, his may require rial and error o pre-une he sep size. III. CONVERGENCE ANALYSIS In his secion, we will provide he proof of he convergence resuls. We will firs provide a proof overview in Secion III-A and hen defer he deailed proof o he res of he secion. Due o space limi, we omi some proofs, which can be found in he full version of his paper [3. A. Proof Overview We inroduce marix noaions x(), v(), y(), s(), () R n N o simplify he mahemaical expressions, 6 x() = [x () T, x () T,..., x n () T T 5 Here we have used he fac ha by Assumpion and 3, all level ses of f are bounded. See Proposiion B.9 of [3. 6 Wihou causing any confusion wih noaions in (), in his secion we abuse he use of noaion x(), v(), y(). 6

4 v() = [v () T, v () T,..., v n () T T y() = [y () T, y () T,..., y n () T T s() = [s () T, s () T,..., s n () T T () = [ f (y ()) T, f (y ()) T,..., f n (y n ()) T T. Now our algorihm in (3) can be wrien as x( + ) = W y() s() (5a) v( + ) = W v() s() (5b) α y( + ) = ( α + )x( + ) + α + v( + ) (5c) s( + ) = W s() + ( + ) (). (5d) Apar from he average sequence x() = n i= x i() R N ha we have defined, we also define several oher average sequences, v() = n i= v i(), ȳ() = n i= y i(), s() = n i= s i(), and g() = n i= f i(y i ()). Overview of he Proof. We derive a series of lemmas (Lemma 4, 5, 6 and 7) ha will work for boh he vanishing and he fixed sep size case. We firsly derive he updae formula for he average sequences (Lemma 4). Then, we show ha he updae rule for he average sequences is in fac cenralized Neserov Gradien Descen (CNGD) wih inexac gradiens [33, and he inexacness is characerized by consensus error y() ȳ() (Lemma 5). The consensus error is bounded in Lemma 6. Then, we apply he proof of CNGD (see e.g. [6) o he average sequences in spie of he consensus error, and derive an inermediae resul in Lemma 7. Lasly, we finish he proof of Theorem in Secion III-C. The proof of Theorem 3 can be found in Appendix-F in [3. Lemma 4. The following equaliies hold. x( + ) = ȳ() g() (6a) v( + ) = v() g() (6b) α ȳ( + ) = ( α + ) x( + ) + α + v( + ) (6c) s( + ) = s() + g( + ) g() = g( + ) (6d) Proof: We omi he proof since hese equaliies can be easily derived using he fac ha W is doubly sochasic. For (6d) we also need o use he fac ha s(0) = g(0). From (6a)-(6c) we see ha he sequences x(), v() and ȳ() follow a updae rule similar o he CNGD in (). The only difference is ha he g() in (6a)-(6c) is no he exac gradien f(ȳ()) in CNGD. In he following Lemma, we show ha g() is an inexac gradien. 7 Lemma 5. Under Assumpion,,, g() is an inexac gradien of f a ȳ() wih error O( y() ȳ() ) in he sense ha, ω R N, f(ω) ˆf() + g(), ω ȳ() (7) f(ω) ˆf() + g(), ω ȳ() + L ω ȳ() + L n y() ȳ(), (8) where ˆf() = n i= [f i(y i ())+ f i (y i ()), ȳ() y i (). 7 For more informaion regarding why (7) (8) define an inexac gradien, we refer he readers o [33. Proof: We omi he proof and refer he readers o [7, Lem. 4. The consensus error y() ȳ() in he previous lemma is bounded by he following lemma whose proof is given in Secion III-B. Lemma 6. Suppose he sep sizes saisfy (i) + > 0, (ii) η 0 < min( σ 9 3 L, ( σ)3 644L ), η (iii) sup 0 + min(( σ+3 3 σ+ 4 )σ/8 6, Then, under Assumpion, we have, 5+σ ). y() ȳ() κ [ nχ ( ) L ȳ() x() + 8 σ L g() where χ : R R is a funcion saisfying 0 < χ ( ) η /3 L /3, and κ = 6 ( σ). We nex provide he following inermediae resul. The proof roughly follows he same seps of [6, Lemma..3, and can be found in Appendix-D of [3. α 0 Lemma 7. Define γ 0 = η = L 0( α 0) α 0. We define a series of funcions (Φ : R N R) 0, wih Φ 0 (ω) = f( x(0)) + γ 0 ω v(0) and Φ + (ω) = ( α )Φ (ω)+α [ ˆf()+ g(), ω ȳ(). (9) Then, under Assumpion and, he following holds. (i) We have, Φ (ω) f(ω) + λ (Φ 0 (ω) f(ω)) (0) where λ is defined hrough λ 0 =, and λ + = ( α )λ. (ii) Funcion Φ (ω) can be wrien as Φ (ω) = φ + γ ω v() () where γ is defined hrough γ + = γ ( α ), and φ is some real number ha saisfies φ 0 = f( x(0)), and φ + = ( α )φ + α ˆf() g() + α g(), v() ȳ(). () B. Proof of he Bounded Consensus Error (Lemma 6) We will frequenly use he following lemmas, whose proofs can be found in Appendix-A of [3. Lemma 8. The following equaliies are rue. 
[α + ȳ(+) ȳ() = α + ( v() ȳ()) + α + g() α (3) v( + ) ȳ( + ) = ( α + )( v() ȳ()) + ( α + )( α )g() (4) Lemma 9. Under Assumpion, he following are rue. ( + ) () L y( + ) y() (5) g() f(ȳ()) L n y() ȳ() (6) 63

5 Proof of Lemma 6: Overview of he proof. The proof is divided ino hree seps. In sep, we rea he algorihm (5) as a linear sysem and derive a linear sysem inequaliy (7). In sep, we analyze he sae ransiion marix in (7) and prove a few specral properies. In sep 3, we furher analyze he linear sysem (7) and bound he sae by he inpu, from which he conclusion of he lemma follows. Throughou he proof, we will frequenly use an easy-o-check fac: α is a decreasing sequence. Sep : A Linear Sysem Inequaliy. Define z() = [α v() v(), y() ȳ(), s() g() T R 3, b() = [0, 0, na() T R 3 where a() α L v() ȳ() + λl g() in which λ 4 σ >. The desired inequaliy is (7). The proof of (7) is similar o ha of [7, Eq. (8). Due o space limi, i is omied and can be found in Appendix-B of [3. G( ) [ { σ }} { 0 η z( + ) σ σ z() + b() (7) L L σ + L Sep : Specral Properies of G( ). When η is posiive, G(η) is a nonnegaive marix and G(η) is a posiive marix. By Perron-Frobenius Theorem [34, Thm G(η) has a unique larges (in magniude) eigenvalue ha is a posiive real wih mulipliciy, and he eigenvalue is associaed wih an eigenvecor wih posiive enries. We le he unique larges eigenvalue be θ(η) = ρ(g(η)) and le is eigenvecor be χ(η) = [χ (η), χ (η), χ 3 (η) T, normalized by χ 3 (η) =. We give bounds on he eigenvalue and he eigenvecor in he following lemmas, whose proofs can be found in Appendix- C of [3. Lemma 0. When 0 < ηl <, we have σ < θ(η) < σ + 4(ηL) /3, and χ (η) η /3. L /3 Lemma. When η (0, χ (η) < η (σηl) /3. σ L ), θ(η) σ +(σηl)/3 and Lemma. When ζ, ζ (0, σ 9 3 L ), hen χ (ζ ) χ (ζ ) max(( ζ ζ ) 6/σ, ( ζ ζ ) 6/σ χ ) and (ζ ) χ (ζ ) max(( ζ ζ ) 8/σ, ( ζ ζ ) 8/σ ). I is easy o check ha, under our sep size condiion (ii), all he condiions of Lemma 0,, are saisfied. Sep 3: Bound he sae by he inpu. Wih he above preparaions, now we prove, by inducion, he following saemen, z() na()κχ( ) (8) where κ = 6 σ. Equaion (8) is rue for = 0, since he lef hand side is zero when = 0. Assume (8) holds for. We now show (8) is rue for +. We divide he res of he proof ino wo sub-seps. Briefly speaking, sep 3. proves ha he inpu o he sysem (7), a( + ) does no decrease oo much compared o a() (a( + ) σ+3 4 a()); while sep 3. shows ha he sae z( + ), compared o z(), decreases enough for (8) o hold for +. Sep 3.: We prove ha a( + ) σ+3 4 a(). By (4), a( + ) = α +L ( α +)( v() ȳ()) + ( α +)( α )g() + λ+l g( + ) α +( α +)L v() ȳ() α+ α ( α +)( α )L g() + λ+l g() λ+l g( + ) g(). Therefore, we have a() a( + ) [ α α +( α +) L v() ȳ() [ α+ + ( α +)( α )L α + λl λ+l g() + λ+l g( + ) g() [ α α +( α +) L v() ȳ() + ( + λ( +))L g() + λ+l g( + ) g() max( α+ + α + η η+, + )a() α α λ + λ+l g( + ) g() (9) where in he las inequaliy, we have used he elemenary fac ha for four posiive numbers a, a, a 3, a 4 and x, y 0, we have a x+a y = a a 3 a 3 x+ a a 4 a 4 y max( a a 3, a a 4 )(a 3 x+a 4 y) Nex, we expand g( + ) g(), g( + ) g() g( + ) f(ȳ( + )) + g() f(ȳ()) + f(ȳ( + )) f(ȳ()) (a) L n y( + ) ȳ( + ) + L n y() ȳ() + L ȳ( + ) ȳ() L n σα v() v() + L n y() ȳ() + L n s() g() + a() (c) Lσκχ ()a() + Lκχ ()a() + Lκχ 3()a() + a() (d) a() { Lσκ (σl) η/3 + Lκ /3 L /3 + Lηκ + } (e) 8κa(). (0) Here (a) is due o (6); is due o he second row of (7) and he fac ha a() L ȳ( + ) ȳ() ; (c) is due o he inducion assumpion (8). In (d), we have used he bound on χ ( ) (Lemma ), χ ( ) (Lemma 0), and χ 3 ( ) =. In (e), we have used L <, σ < and κ >. Combining (0) wih (9), we have a() a( + ) max( α+ + α +, α α + 6κλ+La() λ η η+ + )a() [ max( η+ + α σ η η+ +, + ) 8 64

6 + 384 ( σ) η0l a() where in he las inequaliy, we have used he fac ha α+ + α + < α + + α α α α + = η+ ( α +) + α + < η+ + α σ, and hence 6. By he sep size condiion (ii), α + < σ 6. Combining a(). Hence a( + By he sep size condiion (iii), η+ σ α 0 = η 0 L σ 6, and η 0L 384 ( σ) he above, we have a() a( + ) σ ) 3+σ 4 a(). Sep 3.: Finishing he inducion. We have, z( + ) (a) G()z() + b() G() na()κχ() + na()χ() (c) = θ() na()κχ() + na()χ() = na()χ()(κθ() + ) (d) na( + )χ(η σ + 4 +)(κ + ) 3 + σ max( χ(η) χ, χ () (+) χ, ) (+) (e) = na( + )χ(η σ + +) κ 4 3 σ + 3 max( χ(η) χ, χ () (+) χ, ) (+) (f) na( + )κχ(+) () where (a) is due o (7), and is due o inducion assumpion (8), and (c) is because θ( ) is an eigenvalue of G( ) wih eigenvecor χ( ), and (d) is due o sep 3., and θ( ) < σ + 4(η 0 L) /3 < +σ (by sep size condiion (ii) and Lemma 0), and in (e), we have used by he definiion of κ, κ σ+ + = σ+ 3 κ. For (f), we have used ha by Lemma and sep size condiion (iii), max( χ(η) χ, χ () η, ) ( ) 8/σ σ + 3 (+) χ (+) + σ Now, (8) is proven for +, and hence is rue for all. Therefore, we have y() ȳ() κ na()χ ( ). Noice ha a() = α L v() ȳ() + λl g() L x() ȳ() + 8 σ L g(). The saemen of he lemma follows. C. Proof of Theorem We firs inroduce Lemma 3 regarding he asympoic behavior of α and λ. The proof can be found in Appendix- E of [3. Lemma 3. When he vanishing sep size is used ( = η (+ 0), β 0, β (0, )), and η 0 < 4L (equivalenly α 0 < ), we have (i) α +. (ii) λ = O( ). β D(β,0) (+ 0) β (iii) λ where D(β, 0 ) is some consan ha only depends on β and 0, given by D(β, 0 ) =. ( 0+3) e 6+ 6 β Now we proceed o prove Theorem. Proof of Theorem : I is easy o check ha all he condiions of Lemma 6 and 3 are saisfied, hence he conclusions of Lemma 6 and 3 hold. The major sep of proving he heorem is o show he following inequaliy, λ (Φ 0 (x ) f ) + φ f( x()). () If () is rue, by () and (0), we have f( x()) φ + λ (Φ 0(x ) f ) Φ (x ) + λ (Φ 0(x ) f ) f + λ (Φ 0(x ) f ). Hence f( x()) f = O(λ ) = O( β ), i.e. he desired resul of he heorem follows. Now we use inducion o prove (). Firsly, () is rue for = 0, since φ 0 = f( x(0)) and Φ 0 (x ) > f. Suppose i s rue for 0,,,...,. For 0 k, by (0), Φ k (x ) f + λ k (Φ 0 (x ) f ). Hence φ k + γ k x v(k) f + λ k (Φ 0 (x ) f ). Using he inducion assumpion, we ge f( x(k))+ γ k x v(k) f +λ k (Φ 0 (x ) f ). (3) Since f( x(k)) f and γ k = λ k γ 0, we have x v(k) 4 γ 0 (Φ 0 (x ) f ). Since v(k) = α k (ȳ(k) x(k)) + x(k), we have v(k) x = α k (ȳ(k) x(k)) + x(k) x ȳ(k) x(k) x(k) x. By (3), α k f( x(k)) Φ 0 (x ) f = f( x(0)) f +γ 0 v(0) x. Also since γ 0 = L α 0 < L, we have x(k) lies wihin he (f( x(0)) f + L v(0) x )-level se of f. By Assumpion 3 and Proposiion B.9 of [3, we have he level se is compac. Hence we have x(k) x R where R is he diameer of ha level se. Combining he above argumens, we ge ȳ(k) x(k) α k( v(k) x + x(k) x ) α k[r + 4 γ 0 (f( x(0)) f ) + v(0) x α k [R + 4 v(0) x }{{} C (4) where C is a consan ha does no depend on η. Nex, we consider (), φ + f( x( + )) = ( α )(φ f( x())) + ( α )f( x()) + α ˆf() η g() + α g(), v() ȳ() f( x( + )) (a) ( α )(φ f( x())) + α ˆf() η g() + ( α ){ ˆf() + g(), x() ȳ() } + α g(), v() ȳ() f( x( + )) = ( α )(φ f( x())) + ˆf() 65

7 η g() f( x( + )) (5) where (a) is due o Lemma 5 and is due o α ( v() ȳ()) + ( α )( x() ȳ()) = 0. By Lemma 5 and Lemma 6, f( x( + )) ˆf() ( Lη ) g() + Lκ χ () (L x() ȳ() + Combining he above wih (5), we ge, φ + f( x( + )) ( α )(φ f( x())) 64 ( σ) L η g() ). (6) + ( η Lη 4608L3 χ () η ( σ) 4 ) g() κ χ () L 3 x() ȳ() ( α )(φ f( x())) κ χ () L 3 x() ȳ() (7) where we have used he fac ha by sep size condiion (ii), η Lη 4608L3 χ () η ( σ) 4 η Lη 4608L3 η 4η /3 ( σ) 4 (/ 8433Lη ( σ) 4 ) > 0. L 4/3 Hence, expanding (7) recursively, we ge φ + f( x( + )) κ χ (η k ) L 3 x(k) ȳ(k) l=k+ ( α l ). Therefore o finish he inducion, we need o show κ χ (η k ) L 3 x(k) ȳ(k) (Φ 0(x ) f )λ +. l=k+ ( α l ) Noice ha κ χ (η k ) L 3 x(k) ȳ(k) l=k+ ( α l) (Φ 0(x ) f )λ + (a) = 4κ L 3 L v(0) x χ(η k) x(k) ȳ(k) λ k+ 4(6/( σ)) L ( v(0) x L 5L /3 C ( σ) v(0) x }{{ } C η/3 /3 k η /3 k ) C α k αk λ k+ λ k+ where C is a cosan ha does no depend on η, and in (a) we have used Φ 0 (x ) f L v(0) x > 0, and in, we have used he bound on χ (η k ) (Lemma 6) and he bound on x(k) ȳ(k) (equaion (4)). Now by Lemma 3, we ge, η /3 k α k λ k+ η /3 4 (k + + 0) β (k + 0) 3 β (k + ) D(β, 0) (a) η /3 4( 0 + ) β D(β, 0) (k + ) 5 3 β η /3 4( 0 + ) β D(β, 0) β 0.6 where in (a) we have used, k + 0 k +, k ( 0 + )(k + ); in we have used 5 3β >. So, we have κ χ (η k ) L 3 x(k) ȳ(k) l=k+ ( α l) (Φ 0(x ) f )λ + η /3 8( 0 + ) β C D(β, 0)(β 0.6) < where in he las inequaliy, we have simply required η /3 < D(β, 0)(β 0.6) 8( 0+) β C (i.e. sep size condiion (iii)), which is possible since he consans C and D(β, 0 ) do no depend on η. So he inducion is complee and we have () is rue. IV. NUMERICAL EXPERIMENTS We simulae our algorihm and compare i wih oher algorihms. We choose n = 00 agens and he graph is generaed using he Erdos-Renyi model [35 wih conneciviy probabiliy 0.3. The weigh marix W is chosen using he Laplacian mehod [6, Sec..4. We will compare our algorihm Acc-DNGD wih Disribued Gradien Descen (DGD) in [6 wih a vanishing sep size, he EXTRA algorihm in [6 (wih W = W +I ), he algorihm sudied in [9 [5 (we name i Acc-DGD ), he D-NG mehod in [8. We will also compare wih wo cenralized mehods ha direcly opimize f: Cenralized Gradien Descen (CGD) and Cenralized Neserov Gradien Descen (CNGD ()). Each elemen of he iniial poin x i (0) is drawn from i.i.d. Gaussian wih mean 0 and variance 5. The objecive funcions are given by, { f i (x) = m a i, x m + b i, x if a i, x, a i, x m m + b i, x if a i, x >, where m =, a i, b i R N (N = 4) are vecors whose enries are i.i.d. Gaussian wih mean 0 and variance, wih he excepion ha b n is se o be b n = i= b i s.. i b i = 0. I is easy o check ha f i is convex and smooh, bu no srongly convex (around he minimizer). The selecion of he objecive funcions is inended o es he sublinear convergence rae (β > 0.6) of our β algorihm Acc-DNGD (3) and he conjecure ha he rae sill holds even if β [0, 0.6 (cf. Theorem and β he commens following i). Therefore, we do wo runs of our algorihm Acc-DNGD, one wih β = 0.6 and he oher wih β = 0. The resuls are shown in Figure, where he x-axis is he ieraion, and he y-axis is he average objecive error n f(xi ()) f for disribued mehods, or objecive error f(x()) f for cenralized mehods. Noice ha Figure is a double log plo. I shows ha Acc-DNGD wih β = 0.6 performs faser han /.39, while D-NG, CGD and CGDbased disribued mehods (DGD, Acc-DGD, EXTRA) are slower han /.39. Furher, boh Acc-DNGD wih β = 0 and CNGD are faser han. 66

8 Fig. : Simulaion resuls. Seps sizes: Acc-DNGD wih β = 0.6: = (+), α = 0.707; Acc-DNGD wih β = 0: = , α 0 = 0.707; D-NG: = ; DGD: = ; EXTRA: η = 0.009; Acc-DGD: η = ; CGD: η = 0.009; CNGD: η = 0.009, α 0 = 0.5. V. CONCLUSION In his paper we propose an Acceleraed Disribued Neserov Gradien Descen algorihm for disribued opimizaion of convex and smooh funcions. We show a general O( ) ( ɛ (0,.4)) convergence rae, and an improved.4 ɛ O( ) convergence rae when he objecive funcions saisfy an addiional propery. Fuure work includes giving igher analysis of he convergence raes. REFERENCES [ B. Johansson, On disribued opimizaion in neworked sysems, 008. [ J. A. Bazerque and G. B. Giannakis, Disribued specrum sensing for cogniive radio neworks by exploiing sparsiy, IEEE Transacions on Signal Processing, vol. 58, no. 3, pp , 00. [3 P. A. Forero, A. Cano, and G. B. Giannakis, Consensus-based disribued suppor vecor machines, Journal of Machine Learning Research, vol., no. May, pp , 00. [4 J. N. Tsisiklis, D. P. Bersekas, and M. Ahans, Disribued asynchronous deerminisic and sochasic gradien opimizaion algorihms, in 984 American Conrol Conference, 984, pp [5 D. P. Bersekas and J. N. Tsisiklis, Parallel and disribued compuaion: numerical mehods. Prenice hall Englewood Cliffs, NJ, 989, vol. 3. [6 A. Nedić and A. Ozdaglar, Disribued subgradien mehods for muli-agen opimizaion, Auomaic Conrol, IEEE Transacions on, vol. 54, no., pp. 48 6, 009. [7 I. Lobel and A. Ozdaglar, Convergence analysis of disribued subgradien mehods over random neworks, in Communicaion, Conrol, and Compuing, h Annual Alleron Conference on. IEEE, 008, pp [8 J. C. Duchi, A. Agarwal, and M. J. Wainwrigh, Dual averaging for disribued opimizaion: convergence analysis and nework scaling, Auomaic conrol, IEEE Transacions on, vol. 57, no. 3, pp , 0. [9 S. S. Ram, A. Nedić, and V. V. Veeravalli, Disribued sochasic subgradien projecion algorihms for convex opimizaion, Journal of opimizaion heory and applicaions, vol. 47, no. 3, pp , 00. [0 A. Nedic and A. Olshevsky, Sochasic gradien-push for srongly convex funcions on ime-varying direced graphs, arxiv preprin arxiv: , 04. [, Disribued opimizaion over ime-varying direced graphs, Auomaic Conrol, IEEE Transacions on, vol. 60, no. 3, pp , 05. [ I. Maei and J. S. Baras, Performance evaluaion of he consensusbased disribued subgradien mehod under random communicaion opologies, Seleced Topics in Signal Processing, IEEE Journal of, vol. 5, no. 4, pp , 0. [3 A. Olshevsky, Linear ime average consensus on fixed graphs and implicaions for decenralized opimizaion and muli-agen conrol, arxiv preprin arxiv:4.486, 04. [4 M. Zhu and S. Marínez, On disribued convex opimizaion under inequaliy and equaliy consrains, Auomaic Conrol, IEEE Transacions on, vol. 57, no., pp. 5 64, 0. [5 I. Lobel, A. Ozdaglar, and D. Feijer, Disribued muli-agen opimizaion wih sae-dependen communicaion, Mahemaical Programming, vol. 9, no., pp , 0. [6 W. Shi, Q. Ling, G. Wu, and W. Yin, Exra: An exac firs-order algorihm for decenralized consensus opimizaion, SIAM Journal on Opimizaion, vol. 5, no., pp , 05. [7 C. Xi and U. A. Khan, On he linear convergence of disribued opimizaion over direced graphs, arxiv preprin arxiv:50.049, 05. [8 J. Zeng and W. Yin, Exrapush for convex smooh decenralized opimizaion over direced neworks, arxiv preprin arxiv:5.094, 05. [9 J. Xu, S. Zhu, Y. C. Soh, and L. Xie, Augmened disribued gradien mehods for muli-agen opimizaion under uncoordinaed consan sepsizes, in 05 54h IEEE Conference on Decision and Conrol (CDC). 
IEEE, 05, pp [0 P. Di Lorenzo and G. Scuari, Disribued nonconvex opimizaion over neworks, in Compuaional Advances in Muli-Sensor Adapive Processing (CAMSAP), 05 IEEE 6h Inernaional Workshop on. IEEE, 05, pp [ P. Di Lorenzo and G. Scuari, Nex: In-nework nonconvex opimizaion, IEEE Transacions on Signal and Informaion Processing over Neworks, vol., no., pp. 0 36, 06. [ G. Qu and N. Li, Harnessing smoohness o accelerae disribued opimizaion, arxiv preprin arxiv:605.07, 06. [3 A. Nedich, A. Olshevsky, and W. Shi, Achieving geomeric convergence for disribued opimizaion over ime-varying graphs, arxiv preprin arxiv: , 06. [4 A. Nedic, A. Olshevsky, W. Shi, and C. A. Uribe, Geomerically convergen disribued opimizaion wih uncoordinaed sep-sizes, arxiv preprin arxiv: , 06. [5 C. Xi and U. A. Khan, Add-op: Acceleraed disribued direced opimizaion, arxiv preprin arxiv: , 06. [6 Y. Neserov, Inroducory lecures on convex opimizaion: A basic course. Springer Science & Business Media, 03, vol. 87. [7 G. Qu and N. Li, Acceleraed disribued neserov gradien descen for smooh and srongly convex funcions, in Communicaion, Conrol, and Compuing (Alleron), 06 54h Annual Alleron Conference on. IEEE, 06, pp [8 D. Jakoveic, J. Xavier, and J. M. Moura, Fas disribued gradien mehods, Auomaic Conrol, IEEE Transacions on, vol. 59, no. 5, pp. 3 46, 04. [9 A. Olshevsky and J. N. Tsisiklis, Convergence speed in disribued consensus and averaging, SIAM Journal on Conrol and Opimizaion, vol. 48, no., pp , 009. [30 R. Olfai-Saber, J. A. Fax, and R. M. Murray, Consensus and cooperaion in neworked muli-agen sysems, Proceedings of he IEEE, vol. 95, no., pp. 5 33, Jan 007. [3 D. P. Bersekas, Nonlinear programming, 999. [3 N. L. Guannan Qu. (07) Acceleraed disribued neserov gradien descen for convex and smooh funcions. [Online. Available: hp://scholar.harvard.edu/files/gqu/files/cdc07fullversion.pdf [33 O. Devolder, F. Glineur, and Y. Neserov, Firs-order mehods of smooh convex opimizaion wih inexac oracle, Mahemaical Programming, vol. 46, no. -, pp , 04. [34 R. A. Horn and C. R. Johnson, Marix analysis. Cambridge universiy press, 0. [35 P. Erdos and A. Renyi, On random graphs i, Publ. Mah. Debrecen, vol. 6, pp ,

A Primal-Dual Type Algorithm with the O(1/t) Convergence Rate for Large Scale Constrained Convex Programs

A Primal-Dual Type Algorithm with the O(1/t) Convergence Rate for Large Scale Constrained Convex Programs PROC. IEEE CONFERENCE ON DECISION AND CONTROL, 06 A Primal-Dual Type Algorihm wih he O(/) Convergence Rae for Large Scale Consrained Convex Programs Hao Yu and Michael J. Neely Absrac This paper considers

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Noes for EE7C Spring 018: Convex Opimizaion and Approximaion Insrucor: Moriz Hard Email: hard+ee7c@berkeley.edu Graduae Insrucor: Max Simchowiz Email: msimchow+ee7c@berkeley.edu Ocober 15, 018 3

More information

Lecture 9: September 25

Lecture 9: September 25 0-725: Opimizaion Fall 202 Lecure 9: Sepember 25 Lecurer: Geoff Gordon/Ryan Tibshirani Scribes: Xuezhi Wang, Subhodeep Moira, Abhimanu Kumar Noe: LaTeX emplae couresy of UC Berkeley EECS dep. Disclaimer:

More information

Lecture 20: Riccati Equations and Least Squares Feedback Control

Lecture 20: Riccati Equations and Least Squares Feedback Control 34-5 LINEAR SYSTEMS Lecure : Riccai Equaions and Leas Squares Feedback Conrol 5.6.4 Sae Feedback via Riccai Equaions A recursive approach in generaing he marix-valued funcion W ( ) equaion for i for he

More information

Online Convex Optimization Example And Follow-The-Leader

Online Convex Optimization Example And Follow-The-Leader CSE599s, Spring 2014, Online Learning Lecure 2-04/03/2014 Online Convex Opimizaion Example And Follow-The-Leader Lecurer: Brendan McMahan Scribe: Sephen Joe Jonany 1 Review of Online Convex Opimizaion

More information

Supplement for Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence

Supplement for Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence Supplemen for Sochasic Convex Opimizaion: Faser Local Growh Implies Faser Global Convergence Yi Xu Qihang Lin ianbao Yang Proof of heorem heorem Suppose Assumpion holds and F (w) obeys he LGC (6) Given

More information

An introduction to the theory of SDDP algorithm

An introduction to the theory of SDDP algorithm An inroducion o he heory of SDDP algorihm V. Leclère (ENPC) Augus 1, 2014 V. Leclère Inroducion o SDDP Augus 1, 2014 1 / 21 Inroducion Large scale sochasic problem are hard o solve. Two ways of aacking

More information

MATH 5720: Gradient Methods Hung Phan, UMass Lowell October 4, 2018

MATH 5720: Gradient Methods Hung Phan, UMass Lowell October 4, 2018 MATH 5720: Gradien Mehods Hung Phan, UMass Lowell Ocober 4, 208 Descen Direcion Mehods Consider he problem min { f(x) x R n}. The general descen direcions mehod is x k+ = x k + k d k where x k is he curren

More information

Random Walk with Anti-Correlated Steps

Random Walk with Anti-Correlated Steps Random Walk wih Ani-Correlaed Seps John Noga Dirk Wagner 2 Absrac We conjecure he expeced value of random walks wih ani-correlaed seps o be exacly. We suppor his conjecure wih 2 plausibiliy argumens and

More information

A Decentralized Second-Order Method with Exact Linear Convergence Rate for Consensus Optimization

A Decentralized Second-Order Method with Exact Linear Convergence Rate for Consensus Optimization 1 A Decenralized Second-Order Mehod wih Exac Linear Convergence Rae for Consensus Opimizaion Aryan Mokhari, Wei Shi, Qing Ling, and Alejandro Ribeiro Absrac This paper considers decenralized consensus

More information

Hamilton- J acobi Equation: Explicit Formulas In this lecture we try to apply the method of characteristics to the Hamilton-Jacobi equation: u t

Hamilton- J acobi Equation: Explicit Formulas In this lecture we try to apply the method of characteristics to the Hamilton-Jacobi equation: u t M ah 5 2 7 Fall 2 0 0 9 L ecure 1 0 O c. 7, 2 0 0 9 Hamilon- J acobi Equaion: Explici Formulas In his lecure we ry o apply he mehod of characerisics o he Hamilon-Jacobi equaion: u + H D u, x = 0 in R n

More information

Adaptation and Synchronization over a Network: stabilization without a reference model

Adaptation and Synchronization over a Network: stabilization without a reference model Adapaion and Synchronizaion over a Nework: sabilizaion wihou a reference model Travis E. Gibson (gibson@mi.edu) Harvard Medical School Deparmen of Pahology, Brigham and Women s Hospial 55 h Conference

More information

Appendix to Online l 1 -Dictionary Learning with Application to Novel Document Detection

Appendix to Online l 1 -Dictionary Learning with Application to Novel Document Detection Appendix o Online l -Dicionary Learning wih Applicaion o Novel Documen Deecion Shiva Prasad Kasiviswanahan Huahua Wang Arindam Banerjee Prem Melville A Background abou ADMM In his secion, we give a brief

More information

Convergence of the Neumann series in higher norms

Convergence of the Neumann series in higher norms Convergence of he Neumann series in higher norms Charles L. Epsein Deparmen of Mahemaics, Universiy of Pennsylvania Version 1.0 Augus 1, 003 Absrac Naural condiions on an operaor A are given so ha he Neumann

More information

Stability and Bifurcation in a Neural Network Model with Two Delays

Stability and Bifurcation in a Neural Network Model with Two Delays Inernaional Mahemaical Forum, Vol. 6, 11, no. 35, 175-1731 Sabiliy and Bifurcaion in a Neural Nework Model wih Two Delays GuangPing Hu and XiaoLing Li School of Mahemaics and Physics, Nanjing Universiy

More information

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model Modal idenificaion of srucures from roving inpu daa by means of maximum likelihood esimaion of he sae space model J. Cara, J. Juan, E. Alarcón Absrac The usual way o perform a forced vibraion es is o fix

More information

Matrix Versions of Some Refinements of the Arithmetic-Geometric Mean Inequality

Matrix Versions of Some Refinements of the Arithmetic-Geometric Mean Inequality Marix Versions of Some Refinemens of he Arihmeic-Geomeric Mean Inequaliy Bao Qi Feng and Andrew Tonge Absrac. We esablish marix versions of refinemens due o Alzer ], Carwrigh and Field 4], and Mercer 5]

More information

Network Newton Distributed Optimization Methods

Network Newton Distributed Optimization Methods Nework Newon Disribued Opimizaion Mehods Aryan Mokhari, Qing Ling, and Alejandro Ribeiro Absrac We sudy he problem of minimizing a sum of convex objecive funcions where he componens of he objecive are

More information

Notes for Lecture 17-18

Notes for Lecture 17-18 U.C. Berkeley CS278: Compuaional Complexiy Handou N7-8 Professor Luca Trevisan April 3-8, 2008 Noes for Lecure 7-8 In hese wo lecures we prove he firs half of he PCP Theorem, he Amplificaion Lemma, up

More information

An Introduction to Malliavin calculus and its applications

An Introduction to Malliavin calculus and its applications An Inroducion o Malliavin calculus and is applicaions Lecure 5: Smoohness of he densiy and Hörmander s heorem David Nualar Deparmen of Mahemaics Kansas Universiy Universiy of Wyoming Summer School 214

More information

The Asymptotic Behavior of Nonoscillatory Solutions of Some Nonlinear Dynamic Equations on Time Scales

The Asymptotic Behavior of Nonoscillatory Solutions of Some Nonlinear Dynamic Equations on Time Scales Advances in Dynamical Sysems and Applicaions. ISSN 0973-5321 Volume 1 Number 1 (2006, pp. 103 112 c Research India Publicaions hp://www.ripublicaion.com/adsa.hm The Asympoic Behavior of Nonoscillaory Soluions

More information

1 Review of Zero-Sum Games

1 Review of Zero-Sum Games COS 5: heoreical Machine Learning Lecurer: Rob Schapire Lecure #23 Scribe: Eugene Brevdo April 30, 2008 Review of Zero-Sum Games Las ime we inroduced a mahemaical model for wo player zero-sum games. Any

More information

Notes on Kalman Filtering

Notes on Kalman Filtering Noes on Kalman Filering Brian Borchers and Rick Aser November 7, Inroducion Daa Assimilaion is he problem of merging model predicions wih acual measuremens of a sysem o produce an opimal esimae of he curren

More information

t is a basis for the solution space to this system, then the matrix having these solutions as columns, t x 1 t, x 2 t,... x n t x 2 t...

t is a basis for the solution space to this system, then the matrix having these solutions as columns, t x 1 t, x 2 t,... x n t x 2 t... Mah 228- Fri Mar 24 5.6 Marix exponenials and linear sysems: The analogy beween firs order sysems of linear differenial equaions (Chaper 5) and scalar linear differenial equaions (Chaper ) is much sronger

More information

Application of a Stochastic-Fuzzy Approach to Modeling Optimal Discrete Time Dynamical Systems by Using Large Scale Data Processing

Application of a Stochastic-Fuzzy Approach to Modeling Optimal Discrete Time Dynamical Systems by Using Large Scale Data Processing Applicaion of a Sochasic-Fuzzy Approach o Modeling Opimal Discree Time Dynamical Sysems by Using Large Scale Daa Processing AA WALASZE-BABISZEWSA Deparmen of Compuer Engineering Opole Universiy of Technology

More information

Chapter 2. First Order Scalar Equations

Chapter 2. First Order Scalar Equations Chaper. Firs Order Scalar Equaions We sar our sudy of differenial equaions in he same way he pioneers in his field did. We show paricular echniques o solve paricular ypes of firs order differenial equaions.

More information

PENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD

PENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD PENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD HAN XIAO 1. Penalized Leas Squares Lasso solves he following opimizaion problem, ˆβ lasso = arg max β R p+1 1 N y i β 0 N x ij β j β j (1.1) for some 0.

More information

Two Coupled Oscillators / Normal Modes

Two Coupled Oscillators / Normal Modes Lecure 3 Phys 3750 Two Coupled Oscillaors / Normal Modes Overview and Moivaion: Today we ake a small, bu significan, sep owards wave moion. We will no ye observe waves, bu his sep is imporan in is own

More information

Chapter 3 Boundary Value Problem

Chapter 3 Boundary Value Problem Chaper 3 Boundary Value Problem A boundary value problem (BVP) is a problem, ypically an ODE or a PDE, which has values assigned on he physical boundary of he domain in which he problem is specified. Le

More information

Finish reading Chapter 2 of Spivak, rereading earlier sections as necessary. handout and fill in some missing details!

Finish reading Chapter 2 of Spivak, rereading earlier sections as necessary. handout and fill in some missing details! MAT 257, Handou 6: Ocober 7-2, 20. I. Assignmen. Finish reading Chaper 2 of Spiva, rereading earlier secions as necessary. handou and fill in some missing deails! II. Higher derivaives. Also, read his

More information

ELE 538B: Large-Scale Optimization for Data Science. Quasi-Newton methods. Yuxin Chen Princeton University, Spring 2018

ELE 538B: Large-Scale Optimization for Data Science. Quasi-Newton methods. Yuxin Chen Princeton University, Spring 2018 ELE 538B: Large-Scale Opimizaion for Daa Science Quasi-Newon mehods Yuxin Chen Princeon Universiy, Spring 208 00 op ff(x (x)(k)) f p 2 L µ f 05 k f (xk ) k f (xk ) =) f op ieraions converges in only 5

More information

Notes on online convex optimization

Notes on online convex optimization Noes on online convex opimizaion Karl Sraos Online convex opimizaion (OCO) is a principled framework for online learning: OnlineConvexOpimizaion Inpu: convex se S, number of seps T For =, 2,..., T : Selec

More information

GMM - Generalized Method of Moments

GMM - Generalized Method of Moments GMM - Generalized Mehod of Momens Conens GMM esimaion, shor inroducion 2 GMM inuiion: Maching momens 2 3 General overview of GMM esimaion. 3 3. Weighing marix...........................................

More information

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Simulaion-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Week Descripion Reading Maerial 2 Compuer Simulaion of Dynamic Models Finie Difference, coninuous saes, discree ime Simple Mehods Euler Trapezoid

More information

Vehicle Arrival Models : Headway

Vehicle Arrival Models : Headway Chaper 12 Vehicle Arrival Models : Headway 12.1 Inroducion Modelling arrival of vehicle a secion of road is an imporan sep in raffic flow modelling. I has imporan applicaion in raffic flow simulaion where

More information

Recursive Least-Squares Fixed-Interval Smoother Using Covariance Information based on Innovation Approach in Linear Continuous Stochastic Systems

Recursive Least-Squares Fixed-Interval Smoother Using Covariance Information based on Innovation Approach in Linear Continuous Stochastic Systems 8 Froniers in Signal Processing, Vol. 1, No. 1, July 217 hps://dx.doi.org/1.2266/fsp.217.112 Recursive Leas-Squares Fixed-Inerval Smooher Using Covariance Informaion based on Innovaion Approach in Linear

More information

Physics 235 Chapter 2. Chapter 2 Newtonian Mechanics Single Particle

Physics 235 Chapter 2. Chapter 2 Newtonian Mechanics Single Particle Chaper 2 Newonian Mechanics Single Paricle In his Chaper we will review wha Newon s laws of mechanics ell us abou he moion of a single paricle. Newon s laws are only valid in suiable reference frames,

More information

ODEs II, Lecture 1: Homogeneous Linear Systems - I. Mike Raugh 1. March 8, 2004

ODEs II, Lecture 1: Homogeneous Linear Systems - I. Mike Raugh 1. March 8, 2004 ODEs II, Lecure : Homogeneous Linear Sysems - I Mike Raugh March 8, 4 Inroducion. In he firs lecure we discussed a sysem of linear ODEs for modeling he excreion of lead from he human body, saw how o ransform

More information

Lecture 2 October ε-approximation of 2-player zero-sum games

Lecture 2 October ε-approximation of 2-player zero-sum games Opimizaion II Winer 009/10 Lecurer: Khaled Elbassioni Lecure Ocober 19 1 ε-approximaion of -player zero-sum games In his lecure we give a randomized ficiious play algorihm for obaining an approximae soluion

More information

CONTROL SYSTEMS, ROBOTICS AND AUTOMATION Vol. XI Control of Stochastic Systems - P.R. Kumar

CONTROL SYSTEMS, ROBOTICS AND AUTOMATION Vol. XI Control of Stochastic Systems - P.R. Kumar CONROL OF SOCHASIC SYSEMS P.R. Kumar Deparmen of Elecrical and Compuer Engineering, and Coordinaed Science Laboraory, Universiy of Illinois, Urbana-Champaign, USA. Keywords: Markov chains, ransiion probabiliies,

More information

Optimality Conditions for Unconstrained Problems

Optimality Conditions for Unconstrained Problems 62 CHAPTER 6 Opimaliy Condiions for Unconsrained Problems 1 Unconsrained Opimizaion 11 Exisence Consider he problem of minimizing he funcion f : R n R where f is coninuous on all of R n : P min f(x) x

More information

Robust estimation based on the first- and third-moment restrictions of the power transformation model

Robust estimation based on the first- and third-moment restrictions of the power transformation model h Inernaional Congress on Modelling and Simulaion, Adelaide, Ausralia, 6 December 3 www.mssanz.org.au/modsim3 Robus esimaion based on he firs- and hird-momen resricions of he power ransformaion Nawaa,

More information

Georey E. Hinton. University oftoronto. Technical Report CRG-TR February 22, Abstract

Georey E. Hinton. University oftoronto.   Technical Report CRG-TR February 22, Abstract Parameer Esimaion for Linear Dynamical Sysems Zoubin Ghahramani Georey E. Hinon Deparmen of Compuer Science Universiy oftorono 6 King's College Road Torono, Canada M5S A4 Email: zoubin@cs.orono.edu Technical

More information

T L. t=1. Proof of Lemma 1. Using the marginal cost accounting in Equation(4) and standard arguments. t )+Π RB. t )+K 1(Q RB

T L. t=1. Proof of Lemma 1. Using the marginal cost accounting in Equation(4) and standard arguments. t )+Π RB. t )+K 1(Q RB Elecronic Companion EC.1. Proofs of Technical Lemmas and Theorems LEMMA 1. Le C(RB) be he oal cos incurred by he RB policy. Then we have, T L E[C(RB)] 3 E[Z RB ]. (EC.1) Proof of Lemma 1. Using he marginal

More information

Inventory Analysis and Management. Multi-Period Stochastic Models: Optimality of (s, S) Policy for K-Convex Objective Functions

Inventory Analysis and Management. Multi-Period Stochastic Models: Optimality of (s, S) Policy for K-Convex Objective Functions Muli-Period Sochasic Models: Opimali of (s, S) Polic for -Convex Objecive Funcions Consider a seing similar o he N-sage newsvendor problem excep ha now here is a fixed re-ordering cos (> 0) for each (re-)order.

More information

Mean-square Stability Control for Networked Systems with Stochastic Time Delay

Mean-square Stability Control for Networked Systems with Stochastic Time Delay JOURNAL OF SIMULAION VOL. 5 NO. May 7 Mean-square Sabiliy Conrol for Newored Sysems wih Sochasic ime Delay YAO Hejun YUAN Fushun School of Mahemaics and Saisics Anyang Normal Universiy Anyang Henan. 455

More information

On Boundedness of Q-Learning Iterates for Stochastic Shortest Path Problems

On Boundedness of Q-Learning Iterates for Stochastic Shortest Path Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. 38, No. 2, May 2013, pp. 209 227 ISSN 0364-765X (prin) ISSN 1526-5471 (online) hp://dx.doi.org/10.1287/moor.1120.0562 2013 INFORMS On Boundedness of Q-Learning Ieraes

More information

Differential Harnack Estimates for Parabolic Equations

Differential Harnack Estimates for Parabolic Equations Differenial Harnack Esimaes for Parabolic Equaions Xiaodong Cao and Zhou Zhang Absrac Le M,g be a soluion o he Ricci flow on a closed Riemannian manifold In his paper, we prove differenial Harnack inequaliies

More information

STATE-SPACE MODELLING. A mass balance across the tank gives:

STATE-SPACE MODELLING. A mass balance across the tank gives: B. Lennox and N.F. Thornhill, 9, Sae Space Modelling, IChemE Process Managemen and Conrol Subjec Group Newsleer STE-SPACE MODELLING Inroducion: Over he pas decade or so here has been an ever increasing

More information

Aryan Mokhtari, Wei Shi, Qing Ling, and Alejandro Ribeiro. cost function n
