A Primal-Dual Type Algorithm with the O(1/t) Convergence Rate for Large Scale Constrained Convex Programs

PROC. IEEE CONFERENCE ON DECISION AND CONTROL, 2016

A Primal-Dual Type Algorithm with the O(1/t) Convergence Rate for Large Scale Constrained Convex Programs

Hao Yu and Michael J. Neely

(The authors are with the Electrical Engineering department at the University of Southern California, Los Angeles, CA.)

Abstract — This paper considers large scale constrained convex programs. These are often difficult to solve by interior point methods or other Newton-type methods due to the prohibitive computation and storage complexity for Hessians or matrix inversions. Instead, large scale constrained convex programs are often solved by gradient based methods or decomposition based methods. The conventional primal-dual subgradient method, also known as the Arrow-Hurwicz-Uzawa subgradient method, is a low complexity algorithm with the O(1/√t) convergence rate, where t is the number of iterations. If the objective and constraint functions are separable, a Lagrangian dual type method can decompose a large scale convex program into multiple parallel small scale convex programs. The classical dual gradient algorithm is an example of Lagrangian dual type methods and has convergence rate O(1/√t). Recently, the authors of the current paper proposed a new Lagrangian dual type algorithm with faster O(1/t) convergence. However, if the objective or constraint functions are not separable, each iteration requires solving a large scale unconstrained convex program, which can have huge complexity. This paper proposes a new primal-dual type algorithm, which only involves simple gradient updates at each iteration and has O(1/t) convergence.

I. INTRODUCTION

Fix positive integers n and m, which are typically large. Consider the general constrained convex program:

minimize: f(x)   (1)
such that: g_k(x) ≤ 0, ∀k ∈ {1, 2, ..., m}   (2)
x ∈ X   (3)

where the set X ⊆ R^n is a compact convex set; the function f(x) is convex and smooth on X; and the functions g_k(x), k ∈ {1, 2, ..., m}, are convex, smooth and Lipschitz continuous on X. Denote the stacked vector of the functions g_1(x), g_2(x), ..., g_m(x) as g(x) = [g_1(x), g_2(x), ..., g_m(x)]^T. The Lipschitz continuity of each g_k(x) implies that g(x) is Lipschitz continuous on X. Throughout this paper, we use ‖·‖ to represent the Euclidean norm and have the following assumptions on convex program (1)-(3):

Assumption 1 (Basic Assumptions):
- There exists a (possibly non-unique) optimal solution x* ∈ X that solves convex program (1)-(3).
- There exists L_f ≥ 0 such that ‖∇f(x) − ∇f(y)‖ ≤ L_f ‖x − y‖ for all x, y ∈ X, i.e., f(x) is smooth with modulus L_f. For each k ∈ {1, 2, ..., m}, there exists L_{g_k} ≥ 0 such that ‖∇g_k(x) − ∇g_k(y)‖ ≤ L_{g_k} ‖x − y‖ for all x, y ∈ X, i.e., g_k(x) is smooth with modulus L_{g_k}. Denote L_g = [L_{g_1}, ..., L_{g_m}]^T.
- There exists β ≥ 0 such that ‖g(x) − g(y)‖ ≤ β ‖x − y‖ for all x, y ∈ X, i.e., g(x) is Lipschitz continuous with modulus β.
- There exists C ≥ 0 such that ‖g(x)‖ ≤ C for all x ∈ X.
- There exists R ≥ 0 such that ‖x − y‖ ≤ R for all x, y ∈ X.

Note that the existence of C follows from the continuity of g(x) and the compactness of the set X. The existence of R follows from the compactness of the set X.

Assumption 2 (Existence of Lagrange multipliers): There exists a Lagrange multiplier vector λ* = [λ*_1, λ*_2, ..., λ*_m] ≥ 0 attaining strong duality for problem (1)-(3), i.e.,

q(λ*) = min_{x∈X} {f(x) : g_k(x) ≤ 0, ∀k ∈ {1, 2, ..., m}},

where q(λ) = min_{x∈X} {f(x) + Σ_{k=1}^m λ_k g_k(x)} is the Lagrangian dual function of problem (1)-(3).

Assumption 2 is a mild condition. For example, it is implied by the Slater condition for convex programs.
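To make problem (1)-(3) concrete, the following Python snippet (ours, not from the paper; all names such as A, b, Cmat, d and r are illustrative) sets up a small instance with a smooth objective, linear inequality constraints, and a Euclidean-ball set X, together with the projection P_X. The later sketches in this transcription reuse these definitions.

```python
import numpy as np

# A toy instance of problem (1)-(3): f(x) = ||A x - b||^2,
# g(x) = Cmat @ x - d <= 0, and X = {x : ||x|| <= r} (compact and convex).
rng = np.random.default_rng(0)
n, m = 20, 5
A = rng.standard_normal((40, n))
b = rng.standard_normal(40)
Cmat = rng.standard_normal((m, n))
d = np.ones(m)
r = 5.0

f = lambda x: float(np.sum((A @ x - b) ** 2))
grad_f = lambda x: 2.0 * A.T @ (A @ x - b)
g = lambda x: Cmat @ x - d        # stacked constraint vector g(x)
grad_g = lambda x: Cmat           # row k is the gradient of g_k

def proj_X(x):
    """Euclidean projection onto the ball X = {x : ||x|| <= r}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x
```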
A. Large Scale Convex Programs

In general, convex program (1)-(3) can be solved via interior point methods (or other Newton type methods), which involve the computation of Hessians and matrix inversions at each iteration. The associated computation complexity and memory space complexity at each iteration is between O(n^2) and O(n^3), which is prohibitive when n is extremely large. For example, if n = 10^5 and each floating point number uses 4 bytes, then 40 Gbytes of memory is required even to save the Hessian at each iteration. Thus, large scale convex programs are usually solved by gradient based methods or decomposition based methods.

B. The Primal-Dual Subgradient Method

The primal-dual subgradient method, also known as the Arrow-Hurwicz-Uzawa subgradient method [2], applied to convex program (1)-(3) is described in Algorithm 1. The updates of x(t) and λ(t) only involve the computation of gradients and simple projection operations, which are much simpler than the computation of Hessians and matrix inversions for extremely large n. Thus, compared with the interior point methods, the primal-dual subgradient algorithm has lower complexity computations at each iteration and hence is more suitable to large scale convex programs. However, the convergence rate of Algorithm 1 is only O(1/√t), where t is the number of iterations.

Algorithm 1 The Primal-Dual Subgradient Algorithm
Let c > 0 be a constant step size. Choose any x(0) ∈ X. Initialize the Lagrangian multipliers λ_k(0) = 0, ∀k ∈ {1, 2, ..., m}. At each iteration t ∈ {1, 2, ...}, observe x(t−1) and λ(t−1) and do the following:
- Choose x(t) = P_X [x(t−1) − c(∇f(x(t−1)) + Σ_{k=1}^m λ_k(t−1) ∇g_k(x(t−1)))], where P_X[·] is the projection onto the convex set X.
- Update the Lagrangian multipliers λ_k(t) = [λ_k(t−1) + c g_k(x(t−1))]_0^{λ_k^max}, ∀k ∈ {1, 2, ..., m}, where λ_k^max > λ_k* and [·]_0^{λ_k^max} is the projection onto the interval [0, λ_k^max].
- Update the running averages x̄(t+1) = (1/(t+1)) Σ_{τ=0}^{t} x(τ) = x̄(t) · t/(t+1) + x(t) · 1/(t+1).
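A minimal sketch of Algorithm 1 in Python, assuming the toy instance and proj_X defined earlier; the bound lam_max and the step size c are illustrative choices (the algorithm requires λ_k^max > λ_k*, which we do not verify here).

```python
def primal_dual_subgradient(x0, c=1e-3, lam_max=100.0, T=5000):
    """Sketch of Algorithm 1 (Arrow-Hurwicz-Uzawa subgradient method)."""
    x, lam = x0.copy(), np.zeros(m)
    x_avg = np.zeros_like(x0)
    for t in range(T):
        # Primal step: projected subgradient of the Lagrangian at (x, lam).
        x_new = proj_X(x - c * (grad_f(x) + grad_g(x).T @ lam))
        # Dual step: project the multipliers onto [0, lam_max].
        lam = np.clip(lam + c * g(x), 0.0, lam_max)
        x = x_new
        x_avg += (x - x_avg) / (t + 1)   # running average of the iterates
    return x_avg
```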

C. Lagrangian Dual Type Methods

The classical dual subgradient algorithm is a Lagrangian dual type iterative method that approaches optimality for strictly convex programs [3]. A modification of the classical dual subgradient algorithm that averages the resulting sequence of primal estimates can solve general convex programs and has an O(1/√t) convergence rate [4], [5], [6]. The dual subgradient algorithm with primal averaging is suitable to large scale convex programs because the updates of each component x_i(t) are independent and parallel if the functions f(x) and g_k(x) in convex program (1)-(3) are separable with respect to each component (or block) of x, e.g., f(x) = Σ_{i=1}^n f_i(x_i) and g_k(x) = Σ_{i=1}^n g_{k,i}(x_i).

Recently, a new Lagrangian dual type algorithm with convergence rate O(1/t) for general convex programs was proposed in [7]. This algorithm can solve convex program (1)-(3) following the steps described in Algorithm 2. Similar to the dual subgradient algorithm with primal averaging, Algorithm 2 can decompose the updates of x(t) into smaller independent subproblems if the functions f(x) and g_k(x) are separable. Moreover, Algorithm 2 has O(1/t) convergence, which is faster than the primal-dual subgradient algorithm or the dual subgradient algorithm with primal averaging. However, if f(x) or the g_k(x) are not separable, each update of x(t) requires solving a set constrained convex program; a sketch of such an update appears after Algorithm 2 below. If the dimension n is large, such a set constrained convex program should be solved via a gradient based method instead of a Newton method. However, the gradient based method for set constrained convex programs is an iterative technique and involves at least one projection operation at each iteration.

(Footnote: In this paper, we say that the primal-dual subgradient algorithm and the dual subgradient algorithm have an O(1/√t) convergence rate in the sense that they achieve an ε-approximate solution with O(1/ε^2) iterations by using an O(ε) step size. The error of those algorithms does not necessarily continue to decay after the ε-approximate solution is reached. In contrast, the algorithm in the current paper has a faster O(1/t) convergence, and this holds for all time, so that the error goes to zero as the number of iterations increases.)

Algorithm 2 (Algorithm 1 in [7])
Let α > 0 be a constant parameter. Choose any x(−1) ∈ X. Initialize the virtual queues Q_k(0) = max{0, −g_k(x(−1))}, ∀k ∈ {1, 2, ..., m}. At each iteration t ∈ {0, 1, 2, ...}, observe x(t−1) and Q(t) and do the following:
- Choose x(t) = argmin_{x∈X} {f(x) + [Q(t) + g(x(t−1))]^T g(x) + α ‖x − x(t−1)‖^2}.
- Update the virtual queue vector Q(t) via Q_k(t+1) = max{−g_k(x(t)), Q_k(t) + g_k(x(t))}, ∀k ∈ {1, 2, ..., m}.
- Update the running averages via x̄(t+1) = (1/(t+1)) Σ_{τ=0}^{t} x(τ) = x̄(t) · t/(t+1) + x(t) · 1/(t+1).
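To illustrate why the x(t)-update of Algorithm 2 can be expensive when f or the g_k are not separable, here is a sketch (ours, using the toy instance above) that approximates the argmin with an inner projected-gradient loop; the choices of α, the inner step size and the inner iteration count are illustrative, and the paper itself only stipulates that some gradient based inner method is used.

```python
def algorithm2_x_update(x_prev, Q, alpha=10.0, inner_steps=200, eta=1e-4):
    """One x(t)-update of Algorithm 2, solved approximately by an inner loop:
    x(t) = argmin_{x in X} f(x) + (Q + g(x_prev))^T g(x) + alpha*||x - x_prev||^2.
    """
    w = Q + g(x_prev)                 # componentwise nonnegative weights
    x = x_prev.copy()
    for _ in range(inner_steps):      # inner solver: projected gradient descent
        grad = grad_f(x) + grad_g(x).T @ w + 2.0 * alpha * (x - x_prev)
        x = proj_X(x - eta * grad)
    return x
```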
D. New Algorithm

Consider large scale convex programs with non-separable f(x) or g_k(x), e.g., f(x) = ‖Ax − b‖^2. In this case, Algorithm 1 has convergence rate O(1/√t) using low complexity iterations, while Algorithm 2 has convergence rate O(1/t) using high complexity iterations. This paper proposes a new algorithm, described in Algorithm 3, which combines the advantages of Algorithm 1 and Algorithm 2. The new algorithm modifies Algorithm 2 by changing the update of x(t) from a minimization problem to a simple projection. Meanwhile, the O(1/t) convergence rate of Algorithm 2 is preserved in the new algorithm.

Algorithm 3 New Algorithm
Let γ > 0 be a constant step size. Choose any x(−1) ∈ X. Initialize the virtual queues Q_k(0) = max{0, −g_k(x(−1))}, ∀k ∈ {1, 2, ..., m}. At each iteration t ∈ {0, 1, 2, ...}, observe x(t−1) and Q(t) and do the following:
- Define d(t) = ∇f(x(t−1)) + Σ_{k=1}^m [Q_k(t) + g_k(x(t−1))] ∇g_k(x(t−1)), which is the gradient of the function φ(x) = f(x) + [Q(t) + g(x(t−1))]^T g(x) at the point x = x(t−1).
- Choose x(t) = P_X [x(t−1) − γ d(t)], where P_X[·] is the projection onto the convex set X.
- Update the virtual queue vector Q(t) via Q_k(t+1) = max{−g_k(x(t)), Q_k(t) + g_k(x(t))}, ∀k ∈ {1, 2, ..., m}.
- Update the running averages x̄(t+1) = (1/(t+1)) Σ_{τ=0}^{t} x(τ) = x̄(t) · t/(t+1) + x(t) · 1/(t+1).
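A minimal sketch of Algorithm 3 on the toy instance above (our code, not the authors'): each iteration costs one gradient evaluation, one projection, and one virtual queue update.

```python
def algorithm3(x_init, gamma, T=5000):
    """Sketch of Algorithm 3: projected gradient step on phi + queue update."""
    x_prev = x_init.copy()
    Q = np.maximum(0.0, -g(x_prev))        # Q_k(0) = max{0, -g_k(x(-1))}
    x_avg = np.zeros_like(x_prev)
    for t in range(T):
        w = Q + g(x_prev)
        dvec = grad_f(x_prev) + grad_g(x_prev).T @ w   # d(t) = grad phi(x(t-1))
        x = proj_X(x_prev - gamma * dvec)
        gx = g(x)
        Q = np.maximum(-gx, Q + gx)        # virtual queue update
        x_avg += (x - x_avg) / (t + 1)     # running average xbar(t+1)
        x_prev = x
    return x_avg, Q
```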

II. PRELIMINARIES AND BASIC ANALYSIS

This section presents useful preliminaries on convex analysis and important facts about Algorithm 3.

A. Preliminaries

Definition 1 (Lipschitz Continuity): Let X ⊆ R^n be a convex set. A function h : X → R^m is said to be Lipschitz continuous on X with modulus L if there exists L > 0 such that ‖h(y) − h(x)‖ ≤ L ‖y − x‖ for all x, y ∈ X.

Definition 2 (Smooth Functions): Let X ⊆ R^n and let the function h(x) be continuously differentiable on X. The function h(x) is said to be smooth on X with modulus L if ∇h(x) is Lipschitz continuous on X with modulus L.

Note that the linear function h(x) = a^T x is smooth with modulus 0. If a function h(x) is smooth with modulus L, then ch(x) is smooth with modulus cL for any c > 0.

Lemma 1 (Descent Lemma, Proposition A.24 in [3]): If h is smooth on X with modulus L, then h(y) ≤ h(x) + ∇h(x)^T (y − x) + (L/2) ‖y − x‖^2 for all x, y ∈ X.

Definition 3 (Strongly Convex Functions): Let X ⊆ R^n be a convex set. A function h is said to be strongly convex on X with modulus α if there exists a constant α > 0 such that h(x) − (α/2) ‖x‖^2 is convex on X.

If h(x) is convex and α > 0, then h(x) + α ‖x − x_0‖^2 is strongly convex with modulus 2α for any constant x_0.

Lemma 2: Let X ⊆ R^n be a convex set. Let the function h be strongly convex on X with modulus α and let x^opt be a global minimum of h on X. Then, h(x^opt) ≤ h(x) − (α/2) ‖x^opt − x‖^2 for all x ∈ X.

Proof: A special case, when h is differentiable and X = R^n, is Theorem 2.1.8 in [8]. The proof for a general strongly convex function h and a general convex set X is in [7].

B. Basic Properties

This subsection presents preliminary results related to the virtual queue update (Lemmas 3-6) that are proven for Algorithm 2 in [7].

Lemma 3 (Lemma 3 in [7]): In Algorithm 3, we have:
1) At each iteration t ∈ {0, 1, 2, ...}, Q_k(t) ≥ 0 for all k ∈ {1, 2, ..., m}.
2) At each iteration t ∈ {0, 1, 2, ...}, Q_k(t) + g_k(x(t−1)) ≥ 0 for all k ∈ {1, ..., m}.
3) At iteration t = 0, ‖Q(0)‖ ≤ ‖g(x(−1))‖. At each iteration t ∈ {1, 2, ...}, ‖Q(t)‖ ≥ ‖g(x(t−1))‖.

Lemma 4 (Lemma 7 in [7]): Let Q(t), t ∈ {0, 1, ...}, be the sequence generated by Algorithm 3. For any t ≥ 1,

Q_k(t) ≥ Σ_{τ=0}^{t−1} g_k(x(τ)), ∀k ∈ {1, 2, ..., m}.

Let Q(t) = [Q_1(t), ..., Q_m(t)]^T be the vector of virtual queue backlogs. Define L(t) = (1/2) ‖Q(t)‖^2. The function L(t) shall be called a Lyapunov function. Define the Lyapunov drift as

Δ(t) = L(t+1) − L(t) = (1/2) [‖Q(t+1)‖^2 − ‖Q(t)‖^2].

Lemma 5 (Lemma 4 in [7]): At each iteration t ∈ {0, 1, 2, ...} in Algorithm 3, an upper bound of the Lyapunov drift is given by

Δ(t) ≤ Q^T(t) g(x(t)) + ‖g(x(t))‖^2.   (4)

Lemma 6 (Lemma 8 in [7]): Let x* be an optimal solution and let λ* be defined in Assumption 2. Let x(t), Q(t), t ∈ {0, 1, ...}, be sequences generated by Algorithm 3. Then,

Σ_{τ=0}^{t−1} f(x(τ)) ≥ t f(x*) − ‖λ*‖ ‖Q(t)‖ for all t ≥ 1.
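The queue properties in Lemma 3 and the drift bound (4) in Lemma 5 can be spot-checked numerically; the following sketch (ours, reusing numpy from the earlier setup) does so on random data.

```python
# Sanity check of Lemma 3 (parts 1 and 2) and the drift bound (4) of Lemma 5.
rng_chk = np.random.default_rng(1)
Q_t = np.abs(rng_chk.standard_normal(m))     # any Q(t) >= 0
gx_t = rng_chk.standard_normal(m)            # plays the role of g(x(t))
Q_next = np.maximum(-gx_t, Q_t + gx_t)       # virtual queue update
assert np.all(Q_next >= 0)                   # Lemma 3, part 1
assert np.all(Q_next + gx_t >= 0)            # Lemma 3, part 2 (at t+1)
drift = 0.5 * (Q_next @ Q_next - Q_t @ Q_t)  # Delta(t)
assert drift <= Q_t @ gx_t + gx_t @ gx_t + 1e-12   # inequality (4)
```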
III. CONVERGENCE RATE ANALYSIS OF ALGORITHM 3

This section analyzes the convergence rate of Algorithm 3 for problem (1)-(3).

A. Upper Bounds of the Drift-Plus-Penalty Expression

Lemma 7: Let x* be an optimal solution. For all t ≥ 0 in Algorithm 3, we have

Δ(t) + f(x(t)) ≤ f(x*) + (1/(2γ)) [‖x* − x(t−1)‖^2 − ‖x* − x(t)‖^2] + (1/2) [‖g(x(t))‖^2 − ‖g(x(t−1))‖^2] + (1/2) [β^2 + L_f + ‖Q(t)‖ ‖L_g‖ + C ‖L_g‖ − 1/γ] ‖x(t) − x(t−1)‖^2,

where β, L_f, L_g and C are defined in Assumption 1.

Proof: Fix t ≥ 0. Recall that φ(x) = f(x) + [Q(t) + g(x(t−1))]^T g(x) as defined in Algorithm 3. Note that part 2 in Lemma 3 implies that Q(t) + g(x(t−1)) is component-wise nonnegative. Hence, φ(x) is convex. Since d(t) = ∇φ(x(t−1)), the projection operator in Algorithm 3 can be reinterpreted as an optimization problem:

x(t) = P_X [x(t−1) − γ d(t)]
     =(a) argmin_{x∈X} {φ(x(t−1)) + ∇^T φ(x(t−1)) [x − x(t−1)] + (1/(2γ)) ‖x − x(t−1)‖^2},   (5)

where (a) follows by removing the constant term φ(x(t−1)) in the minimization, completing the square, and using the fact that the projection of a point onto a set is equivalent to the minimization of the Euclidean distance to this point over the same set. (See [9] for the detailed proof.)

Since (1/(2γ)) ‖x − x(t−1)‖^2 is strongly convex with respect to x with modulus 1/γ, it follows that φ(x(t−1)) + ∇^T φ(x(t−1)) [x − x(t−1)] + (1/(2γ)) ‖x − x(t−1)‖^2 is strongly convex with respect to x with modulus 1/γ. Since x(t) is chosen to minimize the above strongly convex function, by Lemma 2, we have

φ(x(t−1)) + ∇^T φ(x(t−1)) [x(t) − x(t−1)] + (1/(2γ)) ‖x(t) − x(t−1)‖^2
≤ φ(x(t−1)) + ∇^T φ(x(t−1)) [x* − x(t−1)] + (1/(2γ)) ‖x* − x(t−1)‖^2 − (1/(2γ)) ‖x* − x(t)‖^2
≤(a) φ(x*) + (1/(2γ)) [‖x* − x(t−1)‖^2 − ‖x* − x(t)‖^2]
=(b) f(x*) + [Q(t) + g(x(t−1))]^T g(x*) + (1/(2γ)) [‖x* − x(t−1)‖^2 − ‖x* − x(t)‖^2]
≤(c) f(x*) + (1/(2γ)) [‖x* − x(t−1)‖^2 − ‖x* − x(t)‖^2],   (6)

where (a) follows from the convexity of φ(x); (b) follows from the definition of φ(x); and (c) follows by using the facts that g_k(x*) ≤ 0 and Q_k(t) + g_k(x(t−1)) ≥ 0 (i.e., part 2 in Lemma 3) for all k ∈ {1, 2, ..., m} to eliminate the term [Q(t) + g(x(t−1))]^T g(x*) ≤ 0.

Recall that f(x) is smooth on X with modulus L_f by Assumption 1. By Lemma 1, we have

f(x(t)) ≤ f(x(t−1)) + ∇^T f(x(t−1)) [x(t) − x(t−1)] + (L_f/2) ‖x(t) − x(t−1)‖^2.   (7)

Recall that each g_k(x) is smooth on X with modulus L_{g_k} by Assumption 1. Thus, [Q_k(t) + g_k(x(t−1))] g_k(x) is smooth with modulus [Q_k(t) + g_k(x(t−1))] L_{g_k}. By Lemma 1, we have

[Q_k(t) + g_k(x(t−1))] g_k(x(t)) ≤ [Q_k(t) + g_k(x(t−1))] g_k(x(t−1)) + [Q_k(t) + g_k(x(t−1))] ∇^T g_k(x(t−1)) [x(t) − x(t−1)] + ([Q_k(t) + g_k(x(t−1))] L_{g_k}/2) ‖x(t) − x(t−1)‖^2.

Summing this inequality over k ∈ {1, 2, ..., m} yields

[Q(t) + g(x(t−1))]^T g(x(t)) ≤ [Q(t) + g(x(t−1))]^T g(x(t−1)) + Σ_{k=1}^m [Q_k(t) + g_k(x(t−1))] ∇^T g_k(x(t−1)) [x(t) − x(t−1)] + ([Q(t) + g(x(t−1))]^T L_g / 2) ‖x(t) − x(t−1)‖^2.   (8)

Summing up (7) and (8) together yields

f(x(t)) + [Q(t) + g(x(t−1))]^T g(x(t)) ≤(a) φ(x(t−1)) + ∇^T φ(x(t−1)) [x(t) − x(t−1)] + ((L_f + [Q(t) + g(x(t−1))]^T L_g)/2) ‖x(t) − x(t−1)‖^2,   (9)

where (a) follows from the definition of φ(x). Substituting (6) into (9) yields

f(x(t)) + [Q(t) + g(x(t−1))]^T g(x(t)) ≤ f(x*) + (1/(2γ)) [‖x* − x(t−1)‖^2 − ‖x* − x(t)‖^2] + ((L_f + [Q(t) + g(x(t−1))]^T L_g)/2 − 1/(2γ)) ‖x(t) − x(t−1)‖^2.   (10)

Note that u_1^T u_2 = (1/2) [‖u_1‖^2 + ‖u_2‖^2 − ‖u_1 − u_2‖^2] for any u_1, u_2 ∈ R^m. Thus, we have

g(x(t−1))^T g(x(t)) = (1/2) [‖g(x(t−1))‖^2 + ‖g(x(t))‖^2 − ‖g(x(t−1)) − g(x(t))‖^2].

Substituting this into (10) and rearranging terms yields

f(x(t)) + Q^T(t) g(x(t)) ≤ f(x*) + (1/(2γ)) [‖x* − x(t−1)‖^2 − ‖x* − x(t)‖^2] + ((L_f + [Q(t) + g(x(t−1))]^T L_g)/2 − 1/(2γ)) ‖x(t) − x(t−1)‖^2 + (1/2) ‖g(x(t−1)) − g(x(t))‖^2 − (1/2) [‖g(x(t−1))‖^2 + ‖g(x(t))‖^2]
≤(a) f(x*) + (1/(2γ)) [‖x* − x(t−1)‖^2 − ‖x* − x(t)‖^2] + ((β^2 + L_f + [Q(t) + g(x(t−1))]^T L_g)/2 − 1/(2γ)) ‖x(t) − x(t−1)‖^2 − (1/2) [‖g(x(t−1))‖^2 + ‖g(x(t))‖^2],

where (a) follows from ‖g(x(t−1)) − g(x(t))‖ ≤ β ‖x(t) − x(t−1)‖, which in turn follows from the assumption that g(x) is Lipschitz continuous with modulus β. Summing (4) with this inequality yields

Δ(t) + f(x(t)) ≤ f(x*) + (1/(2γ)) [‖x* − x(t−1)‖^2 − ‖x* − x(t)‖^2] + (1/2) [‖g(x(t))‖^2 − ‖g(x(t−1))‖^2] + ((β^2 + L_f + [Q(t) + g(x(t−1))]^T L_g)/2 − 1/(2γ)) ‖x(t) − x(t−1)‖^2
≤(b) f(x*) + (1/(2γ)) [‖x* − x(t−1)‖^2 − ‖x* − x(t)‖^2] + (1/2) [‖g(x(t))‖^2 − ‖g(x(t−1))‖^2] + (1/2) [β^2 + L_f + ‖Q(t)‖ ‖L_g‖ + C ‖L_g‖ − 1/γ] ‖x(t) − x(t−1)‖^2,

where (b) follows from [Q(t) + g(x(t−1))]^T L_g ≤ ‖Q(t) + g(x(t−1))‖ ‖L_g‖ ≤ (‖Q(t)‖ + ‖g(x(t−1))‖) ‖L_g‖ ≤ ‖Q(t)‖ ‖L_g‖ + C ‖L_g‖; here the first step follows from the Cauchy-Schwarz inequality, the second follows from the triangle inequality, and the third follows from ‖g(x)‖ ≤ C for all x ∈ X, i.e., Assumption 1. ∎

Lemma 8: Let x* be an optimal solution and let λ* be defined in Assumption 2. Define D = β^2 + L_f + 2 ‖λ*‖ ‖L_g‖ + 2C ‖L_g‖, where β, L_f, L_g and C are defined in Assumption 1. If γ > 0 in Algorithm 3 satisfies

D + ‖L_g‖ R / √γ − 1/γ ≤ 0,   (11)

where R is defined in Assumption 1, e.g.,

0 < γ ≤ 1 / (‖L_g‖ R + √D)^2,   (12)

then at each iteration t ∈ {0, 1, 2, ...}, we have:
1) ‖Q(t)‖ ≤ 2 ‖λ*‖ + R/√γ + C.
2) Δ(t) + f(x(t)) ≤ f(x*) + (1/(2γ)) [‖x* − x(t−1)‖^2 − ‖x* − x(t)‖^2] + (1/2) [‖g(x(t))‖^2 − ‖g(x(t−1))‖^2].

Proof: Before the main proof, we verify that γ given by (12) satisfies (11). We need to choose γ > 0 such that

D + ‖L_g‖ R / √γ − 1/γ ≤ 0 ⟺ D γ + ‖L_g‖ R √γ − 1 ≤ 0 ⟺ √γ ≤ (−‖L_g‖ R + √(‖L_g‖^2 R^2 + 4D)) / (2D) = 2 / (‖L_g‖ R + √(‖L_g‖^2 R^2 + 4D)).

Note that ‖L_g‖ R + √(‖L_g‖^2 R^2 + 4D) ≤(a) 2 (‖L_g‖ R + √D), where (a) follows from √(a + b) ≤ √a + √b for all a, b ≥ 0. Thus, if √γ ≤ 1/(‖L_g‖ R + √D), i.e., 0 < γ ≤ 1/(‖L_g‖ R + √D)^2, then inequality (11) holds.

Next, we prove this lemma by induction. Consider t = 0. The bound ‖Q(0)‖ ≤ 2 ‖λ*‖ + R/√γ + C follows from the fact that ‖Q(0)‖ ≤(a) ‖g(x(−1))‖ ≤(b) C, where (a) follows from part 3 in Lemma 3 and (b) follows

from Assumption 1. Thus, the first part of this lemma holds at iteration t = 0. Note that

β^2 + L_f + ‖Q(0)‖ ‖L_g‖ + C ‖L_g‖ − 1/γ ≤(a) β^2 + L_f + (2 ‖λ*‖ + R/√γ + C) ‖L_g‖ + C ‖L_g‖ − 1/γ =(b) D + ‖L_g‖ R / √γ − 1/γ ≤(c) 0,   (13)

where (a) follows from ‖Q(0)‖ ≤ 2 ‖λ*‖ + R/√γ + C; (b) follows from the definition of D; and (c) follows from (11), i.e., the selection rule of γ. Applying Lemma 7 at iteration t = 0 yields

Δ(0) + f(x(0)) ≤ f(x*) + (1/(2γ)) [‖x* − x(−1)‖^2 − ‖x* − x(0)‖^2] + (1/2) [‖g(x(0))‖^2 − ‖g(x(−1))‖^2] + (1/2) [β^2 + L_f + ‖Q(0)‖ ‖L_g‖ + C ‖L_g‖ − 1/γ] ‖x(0) − x(−1)‖^2
≤(a) f(x*) + (1/(2γ)) [‖x* − x(−1)‖^2 − ‖x* − x(0)‖^2] + (1/2) [‖g(x(0))‖^2 − ‖g(x(−1))‖^2],

where (a) follows from (13). Thus, the second part of this lemma holds at iteration t = 0.

Assume that

Δ(τ) + f(x(τ)) ≤ f(x*) + (1/(2γ)) [‖x* − x(τ−1)‖^2 − ‖x* − x(τ)‖^2] + (1/2) [‖g(x(τ))‖^2 − ‖g(x(τ−1))‖^2]

holds for all 0 ≤ τ ≤ t, and consider iteration t+1. Summing this inequality over τ ∈ {0, 1, ..., t} yields

Σ_{τ=0}^{t} Δ(τ) + Σ_{τ=0}^{t} f(x(τ)) ≤ (t+1) f(x*) + (1/(2γ)) Σ_{τ=0}^{t} [‖x* − x(τ−1)‖^2 − ‖x* − x(τ)‖^2] + (1/2) Σ_{τ=0}^{t} [‖g(x(τ))‖^2 − ‖g(x(τ−1))‖^2].

Recalling that Δ(τ) = L(τ+1) − L(τ) and simplifying the telescoping summations yields

L(t+1) − L(0) + Σ_{τ=0}^{t} f(x(τ)) ≤ (t+1) f(x*) + (1/(2γ)) ‖x* − x(−1)‖^2 − (1/(2γ)) ‖x* − x(t)‖^2 + (1/2) [‖g(x(t))‖^2 − ‖g(x(−1))‖^2]
≤ (t+1) f(x*) + (1/(2γ)) ‖x* − x(−1)‖^2 + (1/2) [‖g(x(t))‖^2 − ‖g(x(−1))‖^2].

Rearranging terms yields

Σ_{τ=0}^{t} f(x(τ)) ≤ (t+1) f(x*) + (1/(2γ)) ‖x* − x(−1)‖^2 + (1/2) [‖g(x(t))‖^2 − ‖g(x(−1))‖^2] + L(0) − L(t+1)
=(a) (t+1) f(x*) + (1/(2γ)) ‖x* − x(−1)‖^2 + (1/2) [‖g(x(t))‖^2 − ‖g(x(−1))‖^2] + (1/2) ‖Q(0)‖^2 − (1/2) ‖Q(t+1)‖^2
≤(b) (t+1) f(x*) + R^2/(2γ) + (1/2) C^2 − (1/2) ‖Q(t+1)‖^2,   (14)

where (a) follows from L(0) = (1/2) ‖Q(0)‖^2 and L(t+1) = (1/2) ‖Q(t+1)‖^2; and (b) follows from ‖x − y‖ ≤ R for all x, y ∈ X (Assumption 1), ‖g(x(t))‖ ≤ C (Assumption 1), and ‖Q(0)‖ ≤ ‖g(x(−1))‖ (part 3 in Lemma 3).

Applying Lemma 6 at iteration t+1 yields Σ_{τ=0}^{t} f(x(τ)) ≥ (t+1) f(x*) − ‖λ*‖ ‖Q(t+1)‖. Combining this inequality with (14) and cancelling the common term (t+1) f(x*) on both sides yields

(1/2) ‖Q(t+1)‖^2 − ‖λ*‖ ‖Q(t+1)‖ − R^2/(2γ) − (1/2) C^2 ≤ 0
⟹ (‖Q(t+1)‖ − ‖λ*‖)^2 ≤ ‖λ*‖^2 + R^2/γ + C^2
⟹ ‖Q(t+1)‖ ≤ ‖λ*‖ + √(‖λ*‖^2 + R^2/γ + C^2) ≤(a) 2 ‖λ*‖ + R/√γ + C,

where (a) follows from the basic inequality √(a + b + c) ≤ √a + √b + √c for any a, b, c ≥ 0. Thus, the first part of this lemma holds at iteration t+1. Note that

β^2 + L_f + ‖Q(t+1)‖ ‖L_g‖ + C ‖L_g‖ − 1/γ ≤(a) β^2 + L_f + (2 ‖λ*‖ + R/√γ + C) ‖L_g‖ + C ‖L_g‖ − 1/γ =(b) D + ‖L_g‖ R / √γ − 1/γ ≤(c) 0,   (15)

where (a) follows from ‖Q(t+1)‖ ≤ 2 ‖λ*‖ + R/√γ + C; (b) follows from the definition of D; and (c) follows from (11), i.e., the selection rule of γ. Applying Lemma 7 at iteration t+1 yields

Δ(t+1) + f(x(t+1)) ≤ f(x*) + (1/(2γ)) [‖x* − x(t)‖^2 − ‖x* − x(t+1)‖^2] + (1/2) [‖g(x(t+1))‖^2 − ‖g(x(t))‖^2] + (1/2) [β^2 + L_f + ‖Q(t+1)‖ ‖L_g‖ + C ‖L_g‖ − 1/γ] ‖x(t+1) − x(t)‖^2
≤(a) f(x*) + (1/(2γ)) [‖x* − x(t)‖^2 − ‖x* − x(t+1)‖^2] + (1/2) [‖g(x(t+1))‖^2 − ‖g(x(t))‖^2],

where (a) follows from (15). Thus, the second part of this lemma holds at iteration t+1. Both parts of this lemma then follow by induction. ∎

Remark 1: Recall that if each g_k(x) is a linear function, then L_{g_k} = 0 for all k ∈ {1, 2, ..., m}. In this case, equation (12) reduces to 0 < γ ≤ 1/(β^2 + L_f).
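In code, the step size rule (12) is a one-liner once the constants of Assumption 1 and an upper bound on ‖λ*‖ are available; this helper (ours, with assumed argument names) also recovers Remark 1 when ‖L_g‖ = 0.

```python
def step_size_rule_12(beta, L_f, L_g_norm, C, R, lam_star_bound):
    """Largest gamma allowed by (12), given the constants of Assumption 1
    and an upper bound lam_star_bound on ||lambda*|| (see Section III-D)."""
    D = beta**2 + L_f + 2.0 * lam_star_bound * L_g_norm + 2.0 * C * L_g_norm
    return 1.0 / (L_g_norm * R + np.sqrt(D)) ** 2

# With linear constraints, L_g_norm = 0 and this returns 1/(beta^2 + L_f),
# matching Remark 1.
```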

B. Objective Value Violations

Theorem 1 (Objective Value Violations): Let x* be an optimal solution. If we choose γ according to (12) in Algorithm 3, then for all t ≥ 1, we have

f(x̄(t)) ≤ f(x*) + R^2/(2γt),

where R is defined in Assumption 1.

Proof: Fix t ≥ 1. By part 2 in Lemma 8, we have

Δ(τ) + f(x(τ)) ≤ f(x*) + (1/(2γ)) [‖x* − x(τ−1)‖^2 − ‖x* − x(τ)‖^2] + (1/2) [‖g(x(τ))‖^2 − ‖g(x(τ−1))‖^2]

for all τ ∈ {0, 1, 2, ...}. Summing over τ ∈ {0, 1, ..., t−1} yields

Σ_{τ=0}^{t−1} Δ(τ) + Σ_{τ=0}^{t−1} f(x(τ)) ≤ t f(x*) + (1/(2γ)) Σ_{τ=0}^{t−1} [‖x* − x(τ−1)‖^2 − ‖x* − x(τ)‖^2] + (1/2) Σ_{τ=0}^{t−1} [‖g(x(τ))‖^2 − ‖g(x(τ−1))‖^2].

Recalling that Δ(τ) = L(τ+1) − L(τ) and simplifying the telescoping summations yields

L(t) − L(0) + Σ_{τ=0}^{t−1} f(x(τ)) ≤ t f(x*) + (1/(2γ)) ‖x* − x(−1)‖^2 − (1/(2γ)) ‖x* − x(t−1)‖^2 + (1/2) [‖g(x(t−1))‖^2 − ‖g(x(−1))‖^2]
≤ t f(x*) + (1/(2γ)) ‖x* − x(−1)‖^2 + (1/2) [‖g(x(t−1))‖^2 − ‖g(x(−1))‖^2].

Rearranging terms yields

Σ_{τ=0}^{t−1} f(x(τ)) ≤ t f(x*) + (1/(2γ)) ‖x* − x(−1)‖^2 + (1/2) [‖g(x(t−1))‖^2 − ‖g(x(−1))‖^2] + L(0) − L(t)
=(a) t f(x*) + (1/(2γ)) ‖x* − x(−1)‖^2 + (1/2) [‖g(x(t−1))‖^2 − ‖g(x(−1))‖^2] + (1/2) ‖Q(0)‖^2 − (1/2) ‖Q(t)‖^2
≤(b) t f(x*) + (1/(2γ)) ‖x* − x(−1)‖^2
≤(c) t f(x*) + R^2/(2γ),

where (a) follows from the definitions L(0) = (1/2) ‖Q(0)‖^2 and L(t) = (1/2) ‖Q(t)‖^2; (b) follows from the facts that ‖Q(0)‖ ≤ ‖g(x(−1))‖ and ‖Q(t)‖ ≥ ‖g(x(t−1))‖ for t ≥ 1, i.e., part 3 in Lemma 3; and (c) follows from the fact that ‖x − y‖ ≤ R for all x, y ∈ X, i.e., Assumption 1.

Dividing both sides by the factor t yields (1/t) Σ_{τ=0}^{t−1} f(x(τ)) ≤ f(x*) + R^2/(2γt). Finally, since x̄(t) = (1/t) Σ_{τ=0}^{t−1} x(τ) and f(x) is convex, by Jensen's inequality it follows that f(x̄(t)) ≤ (1/t) Σ_{τ=0}^{t−1} f(x(τ)). ∎

C. Constraint Violations

Theorem 2 (Constraint Violations): Let x* be an optimal solution and let λ* be defined in Assumption 2. If we choose γ according to (12) in Algorithm 3, then for all t ≥ 1, the constraints satisfy

g_k(x̄(t)) ≤ (1/t) (2 ‖λ*‖ + R/√γ + C), ∀k ∈ {1, 2, ..., m},

where R and C are defined in Assumption 1.

Proof: Fix t ≥ 1 and k ∈ {1, 2, ..., m}. Recall that x̄(t) = (1/t) Σ_{τ=0}^{t−1} x(τ). Thus,

g_k(x̄(t)) ≤(a) (1/t) Σ_{τ=0}^{t−1} g_k(x(τ)) ≤(b) Q_k(t)/t ≤ ‖Q(t)‖/t ≤(c) (1/t) (2 ‖λ*‖ + R/√γ + C),

where (a) follows from the convexity of g_k(x) and Jensen's inequality; (b) follows from Lemma 4; and (c) follows from part 1 in Lemma 8. ∎

Theorems 1 and 2 show that Algorithm 3 ensures that the error decays like O(1/t) and provides an ε-approximate solution with convergence time O(1/ε).

D. Practical Implementations

By Theorems 1 and 2, it suffices to choose γ according to (12) to guarantee the O(1/t) convergence rate of Algorithm 3. If all constraint functions are linear, then (12) is independent of λ* by Remark 1. For general constraint functions, we need to know the value of ‖λ*‖, which is typically unknown, to select γ according to (12). However, it is easy to observe that an upper bound on ‖λ*‖ is sufficient for us to choose γ satisfying (12). To obtain an upper bound on ‖λ*‖, the next lemma is useful if problem (1)-(3) has an interior feasible point, i.e., the Slater condition is satisfied.

Lemma 9 (Lemma 1 in [5]): Consider the convex program min{f(x) : g_k(x) ≤ 0, ∀k ∈ {1, 2, ..., m}, x ∈ X ⊆ R^n} and define the Lagrangian dual function as q(λ) = inf_{x∈X} {f(x) + λ^T g(x)}. If the Slater condition holds, i.e., there exists x̂ ∈ X such that g_j(x̂) < 0 for all j ∈ {1, 2, ..., m}, then the level sets V_λ̂ = {λ ≥ 0 : q(λ) ≥ q(λ̂)} are bounded for any λ̂ ≥ 0. In particular, we have

max_{λ ∈ V_λ̂} ‖λ‖ ≤ (1 / min_{1≤j≤m} {−g_j(x̂)}) (f(x̂) − q(λ̂)).

By Lemma 9, if convex program (1)-(3) has a feasible point x̂ ∈ X such that g_k(x̂) < 0 for all k ∈ {1, 2, ..., m}, then we can take an arbitrary λ̂ ≥ 0, obtain the value q(λ̂) = inf_{x∈X} {f(x) + λ̂^T g(x)}, and conclude that ‖λ*‖ ≤ (1 / min_{1≤j≤m} {−g_j(x̂)}) (f(x̂) − q(λ̂)). Since f(x) is continuous and X is a compact set, there exists a constant F > 0 such that |f(x)| ≤ F for all x ∈ X. Thus, we can take λ̂ = 0 such that q(λ̂) = min_{x∈X} {f(x)} ≥ −F. It follows from Lemma 9 that

‖λ*‖ ≤ (1 / min_{1≤j≤m} {−g_j(x̂)}) (f(x̂) − q(λ̂)) ≤ 2F / min_{1≤j≤m} {−g_j(x̂)}.
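Putting the pieces together on the toy instance from the introduction (this demo is ours, not part of the paper): the constraints there are linear, so Remark 1 gives a multiplier-free step size, and running the Algorithm 3 sketch exhibits the behavior guaranteed by Theorems 1 and 2. The smoothness and Lipschitz moduli below are computed for that instance.

```python
# End-to-end demo using the toy instance and the algorithm3 sketch above.
beta_toy = np.linalg.norm(Cmat, 2)          # Lipschitz modulus of g (spectral norm)
L_f_toy = 2.0 * np.linalg.norm(A.T @ A, 2)  # smoothness modulus of f
gamma_toy = 1.0 / (beta_toy**2 + L_f_toy)   # Remark 1: linear constraints

x_bar, Q_fin = algorithm3(np.zeros(n), gamma_toy, T=20000)
print("objective at averaged iterate:", f(x_bar))
print("max constraint value:", np.max(g(x_bar)))  # decays like O(1/t) by Theorem 2
```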
REFERENCES

[1] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[2] A. Nedić and A. Ozdaglar, "Subgradient methods for saddle-point problems," Journal of Optimization Theory and Applications, vol. 142, no. 1, pp. 205-228, 2009.
[3] D. P. Bertsekas, Nonlinear Programming, 2nd ed. Athena Scientific, 1999.
[4] M. J. Neely, "Distributed and secure computation of convex programs over a network of connected processors," in DCDIS Conference, Guelph, July 2005.
[5] A. Nedić and A. Ozdaglar, "Approximate primal solutions and rate analysis for dual subgradient methods," SIAM Journal on Optimization, vol. 19, no. 4, pp. 1757-1780, 2009.
[6] M. J. Neely, "A simple convergence time analysis of drift-plus-penalty for stochastic optimization and convex programs," arXiv:1412.0791, 2014.
[7] H. Yu and M. J. Neely, "A simple parallel algorithm with an O(1/t) convergence rate for general convex programs," arXiv:1512.08371, 2015.
[8] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course. Springer Science & Business Media, 2004.
[9] H. Yu and M. J. Neely, "A primal-dual type algorithm with the O(1/t) convergence rate for large scale constrained convex programs," arXiv:1604.02216, 2016.
