A Forward-Backward Splitting Method with Component-wise Lazy Evaluation for Online Structured Convex Optimization


Yukihiro Togari and Nobuo Yamashita

March 28, 2016

Abstract: We consider large-scale optimization problems whose objective functions are expressed as sums of convex functions. Such problems appear in various fields, such as statistics and machine learning. One method for solving such problems is the online gradient method, which exploits the gradients of only a few of the functions in the objective at each iteration. In this paper, we focus on the sparsity of two vectors, namely a solution and the gradient of each function in the objective. In order to obtain a sparse solution, we usually add an L1 regularization term to the objective function. Then, because the L1 regularization term involves all decision variables, the usual online gradient methods update all variables even if each gradient is sparse. To reduce the number of computations required for all variables at each iteration, we can employ a lazy evaluation. This only updates the variables corresponding to nonzero components of the gradient, by ignoring the L1 regularization terms, and evaluates the ignored terms later. Because a lazy evaluation does not exploit the information related to all terms at every iteration, the online algorithm using lazy evaluation may converge slowly. In this paper, we describe a forward-backward splitting type method, which exploits the L1 terms corresponding to the nonzero components of the gradient at each iteration. We show that its regret bound is O(√T), where T is the number of functions in the objective function. Moreover, we present some numerical experiments using the proposed method, which demonstrate that the proposed method gives promising results.

Keywords: online optimization; regret analysis; text classification

1 Introduction

In this paper, we consider the following problem:

    min_x  Σ_{t=1}^T f_t(x) + r(x),    (1)

Technical Report, March 27, 2016. Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Kyoto, Japan. nobuo@i.kyoto-u.ac.jp

where f_t : R^n → R (t = 1, ..., T) are differentiable convex functions, and r : R^n → R is a continuous convex function that is not necessarily differentiable. Typical examples of this problem appear in machine learning and signal processing. In these examples, f_t is regarded as a loss function, such as a logistic loss function f_t(x) = log(1 + exp(−b_t ⟨a_t, x⟩)) [2, 7] or a hinge loss function f_t(x) = [−b_t ⟨a_t, x⟩ + 1]_+ [4]. Here, (a_t, b_t) ∈ R^{n+1} is a pair from the t-th sample data. Furthermore, r is a regularization function to avoid over-fitting, such as the L1 regularization term r(x) = λ Σ_{i=1}^n |x^(i)| or the L2 regularization term r(x) = (λ/2)||x||^2. For some applications, r may be an indicator function of the feasible set Ω, which is defined by

    r_Ω(x) = 0 if x ∈ Ω, and r_Ω(x) = +∞ otherwise.

In this paper, we consider problem (1) under the following circumstances:

(i) Either the upper limit of the summation T is very large, or the functions f_t are given one by one at the t-th iteration; that is, the number of functions f_t increases with t.

(ii) The gradient ∇f_t(x) is sparse; that is, |I_t| ≪ n, where I_t = { i | ∇_i f_t(x) ≠ 0 }.

(iii) The function r is decomposable.

In machine learning and statistics, T corresponds to the number of training samples, and f_t is a loss function for the t-th sample. Therefore, if we have a large number of samples, T becomes large. When f_t is the logistic function given above and the sample data a_t in f_t is sparse, then the gradient of f_t is also sparse. Further, if r is the L1 or L2 regularization term, then r can be decomposed as

    r(x) = Σ_{i=1}^n r^i(x^(i)),    (2)

where r^i : R → R and x^(i) denotes the i-th component of the decision variable x. Throughout this paper, we assume that r is expressed as (2). We can easily extend the subsequent discussions to the case where r is decomposable with respect to blocks of variables, such as the group lasso.

For problems satisfying (i)-(iii), the online gradient method is useful. This generates a sequence of reasonable solutions using only the gradient of f_t at the t-th iteration [15, 16]. Note that this is essentially the same as the incremental gradient method [1] and the stochastic gradient method [5, 6]. In this paper, we adopt the online gradient method for problem (1) satisfying (i)-(iii). We usually evaluate the online gradient method using a regret, which is defined by

    R(T) = Σ_{t=1}^T { f_t(x_t) + r(x_t) } − min_x Σ_{t=1}^T { f_t(x) + r(x) },    (3)

where {x_t} is a sequence generated by the online gradient method. The regret R(T) is the residual between the sum of the values of f_t + r at x_t and the optimum value of (1). When R(T) satisfies lim_{T→∞} R(T)/T = 0, we conclude that the online gradient method converges to an optimal solution.
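To make this setting concrete, the following is a minimal sketch, assuming a synthetic L1-regularized logistic-loss instance (the data, variable names, and the helper functions are illustrative, not taken from the paper), of the composite objective in (1) and the regret (3) for a given sequence of iterates. Here x_star is simply a reference point supplied by the caller; in the definition of (3) it would be a minimizer of (1).

    import numpy as np

    def logistic_loss(x, a, b):
        # f_t(x) = log(1 + exp(-b <a, x>)) for one sample (a, b)
        return np.log1p(np.exp(-b * a.dot(x)))

    def l1_reg(x, lam):
        # r(x) = lambda * sum_i |x^(i)|, decomposable as in (2)
        return lam * np.abs(x).sum()

    def regret(iterates, A, labels, lam, x_star):
        # R(T) = sum_t [f_t(x_t) + r(x_t)] - sum_t [f_t(x*) + r(x*)], as in (3)
        total, best = 0.0, 0.0
        for t, (a, b) in enumerate(zip(A, labels)):
            total += logistic_loss(iterates[t], a, b) + l1_reg(iterates[t], lam)
            best += logistic_loss(x_star, a, b) + l1_reg(x_star, lam)
        return total - best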

Various online gradient methods have been proposed for problem (1) [14]. When the objective function contains non-differentiable functions, such as the L1 regularization term, the online subgradient method for problem (1) updates a sequence as follows [5]:

    x_{t+1} = x_t − α_t ( ∇f_t(x_t) + η_t ),

where η_t ∈ ∂r(x_t) and α_t is the stepsize at the t-th iteration. If the f_t are convex, then the regret of the online subgradient method is O(√T). Furthermore, if the f_t are strongly convex, then it reduces to O(log T) [14]. However, this algorithm does not fully exploit the structure of r, and it may converge slowly.

The forward-backward splitting (FOBOS) method [8, 9] uses the structure of r. This consists of the gradient method for f_t and the proximal point method for r; that is, a sequence is generated as

    x_{t+1} = argmin_x { ⟨∇f_t(x_t), x⟩ + (1/(2α_t)) ||x − x_t||^2 + r(x) }.    (4)

For a given ∇f_t(x_t), we can compute (4) in O(|I_t|) operations when r^i ≡ 0 for all i. Therefore, when ∇f_t(x_t) is sparse as described in (ii), so that |I_t| ≪ n, we can obtain x_{t+1} quickly. However, when r^i ≢ 0 for all i, we require a computation time of O(n) for (4), even if |I_t| ≪ n.

In order to overcome this drawback, Langford et al. [10] proposed the lazy evaluation method, which defers the evaluation of r in (4) to a later round at most iterations, and exploits the information relating to each deferred r all together after a number of iterations. This technique enables us to obtain x_{t+1} using (4) with only a few computations when ∇f_t(x_t) is sparse. However, this method does not fully exploit the information of r at all iterations, and hence its behavior can be inferior. Furthermore, even if we employ the L1 regularization term as r, x_t cannot be sparse except at the iterations where the deferred evaluations of r are performed.

In this paper, we propose a novel algorithm that conducts the lazy evaluation for r^i with respect to the indices i such that ∇_i f_t(x_t) ≠ 0 at each iteration. The usual lazy evaluation process ignores r for almost all iterations, and evaluates r at a few iterations while updating all components of x. In contrast, the proposed method conducts the lazy evaluation for the updated variables x^{(i)}_{t+1} (i ∈ I_t) at each iteration. Thus, the proposed method can exploit the full information given by r^i (i ∈ I_t) at each iteration, and hence the FOBOS method is expected to converge faster with this technique than with the usual lazy evaluation [10].

The remainder of this paper is organized as follows. In Section 2, we introduce some online gradient methods and the existing lazy evaluation procedure. In Section 3, we propose a FOBOS method with component-wise lazy evaluation. In Section 4, we prove that the regret of the proposed method is O(√T). In Section 5, we present some numerical experiments using the proposed method. Finally, we provide some concluding remarks in Section 6.

Throughout this paper, we employ the following notation. ||x|| and ||x||_p denote the Euclidean norm and the p-norm of x, respectively. For a differentiable function f : R^n → R, ∇_i f(x) denotes the i-th component of ∇f(x). Furthermore, ∂f denotes the subdifferential of f. The function [·]_+ denotes the max function; that is, [a]_+ = max{a, 0}. We are aware that Lipton and Elkan [11] quite recently presented a similar idea for the problem. The idea of the proposed method in this paper was presented in the bachelor thesis of one of the authors [12]. Further, the regret analysis and numerical experiments were presented at the Spring Meeting of the Operations Research Society of Japan [13], held in March 2015.

2 Preliminaries

In this section, we introduce the existing online gradient methods, which are the basis of the method that will be proposed in Section 3.

2.1 Forward-Backward Splitting Method

In this subsection, we introduce some well-known online gradient methods for solving problem (1). The most fundamental algorithm for (1) is the online subgradient method [5], given as follows.

Algorithm 1 Online Subgradient Method
Step 0: Let x_1 ∈ R^n and t = 1.
Step 1: Compute a subgradient s_t ∈ ∂( f_t + r )(x_t).
Step 2: Update the sequence by x_{t+1} = argmin_x { ⟨s_t, x⟩ + (1/(2α_t)) ||x − x_t||^2 }.
Step 3: Update the stepsize. Set t ← t + 1, and return to Step 1.

When all of the loss functions f_t are convex and α_t = c/√t, the regret (3) of the online subgradient method is O(√T) [16], where c is a positive constant. Moreover, when the functions f_t are strongly convex, the regret reduces to O(log T) [14]. However, this algorithm is inefficient when r is a non-differentiable function, such as the L1 regularization term. The forward-backward splitting (FOBOS) method [8] is an efficient algorithm for problems where r has a special structure.

Algorithm 2 Forward-Backward Splitting Method (FOBOS)
Step 0: Let x_1 ∈ R^n and t = 1.
Step 1: Set I_t = { i | ∇_i f_t(x_t) ≠ 0 } and update the sequence using the gradient method:

    x̂^{(i)}_{t+1} = argmin_y { α_t ∇_i f_t(x_t) y + (1/2)(y − x^{(i)}_t)^2 }  if i ∈ I_t,  and  x̂^{(i)}_{t+1} = x^{(i)}_t  otherwise.    (5)

Step 2: Evaluate r^i by the proximal point method:

    x^{(i)}_{t+1} = argmin_y { r^i(y) + (1/(2α_t))(y − x̂^{(i)}_{t+1})^2 }  for i = 1, ..., n.    (6)

Set t ← t + 1, and return to Step 1.

When r^i(x^{(i)}) = λ|x^{(i)}| for a positive constant λ, (6) is rewritten as

    x^{(i)}_{t+1} = sgn( x^{(i)}_t − α_t ∇_i f_t(x_t) ) max{ |x^{(i)}_t − α_t ∇_i f_t(x_t)| − λα_t, 0 }  for i = 1, ..., n,

where sgn(·) denotes the sign function; that is, sgn(a) = +1 if a > 0, sgn(a) = 0 if a = 0, and sgn(a) = −1 otherwise. Moreover, when r^i(x^{(i)}) = (λ/2)(x^{(i)})^2, (6) is given by

    x^{(i)}_{t+1} = ( x^{(i)}_t − α_t ∇_i f_t(x_t) ) / ( 1 + λα_t )  for i = 1, ..., n.

The regret obtained by FOBOS has been demonstrated to be O(√T). We will assume that condition (ii) from the introduction holds. Then, we can note that x̂^{(i)}_{t+1} = x^{(i)}_t holds in Step 1 for every i such that ∇_i f_t(x_t) = 0. Therefore, Step 1 only updates x^{(i)} for i ∈ I_t. However, even if ∇f_t(x_t) is sparse, Step 2 has to update all components when r^i ≢ 0 for all i.
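The following is a minimal sketch, assuming dense NumPy vectors and the L1 regularizer (the variable names are illustrative), of one FOBOS iteration: the gradient step (5) followed by the closed-form soft-thresholding solution of (6).

    import numpy as np

    def fobos_step_l1(x, grad, alpha, lam):
        """One FOBOS iteration for r(x) = lam * ||x||_1.

        Forward step: x_hat = x - alpha * grad, as in (5).
        Backward step: componentwise soft-thresholding, the closed form of (6).
        """
        x_hat = x - alpha * grad
        return np.sign(x_hat) * np.maximum(np.abs(x_hat) - lam * alpha, 0.0)

    # toy usage: a single iteration with an artificial sparse gradient
    x = np.array([0.5, -0.2, 0.0, 1.0])
    grad = np.array([0.3, 0.0, 0.0, -0.1])   # only components 0 and 3 are nonzero
    print(fobos_step_l1(x, grad, alpha=0.1, lam=0.05))

Note that even though grad is sparse, the soft-thresholding touches every component, which is exactly the O(n) cost per iteration that the lazy evaluations discussed next try to avoid.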

2.2 Lazy Evaluations

In this subsection, we introduce the lazy evaluation technique [10], which skips (6) in FOBOS for some consecutive iterations. We carry out the ignored updates (6) all together after every K iterations.

Algorithm 3 FOBOS with Lazy Evaluation (L-FOBOS)
Step 0: Let K be a natural number, x_1 ∈ R^n, and t = 1.
Step 1: Set I_t = { i | ∇_i f_t(x_t) ≠ 0 } and update using the gradient method:

    x^{(i)}_{t+1} = argmin_y { α_t ∇_i f_t(x_t) y + (1/2)(y − x^{(i)}_t)^2 }  if i ∈ I_t,  and  x^{(i)}_{t+1} = x^{(i)}_t  otherwise.

If t mod K ≠ 0, then set t ← t + 1 and return to Step 1. Otherwise, go to Step 2.
Step 2: Conduct the lazy evaluation for each r^i. Let y_1 = x_{t+1} and j = 1.

    y^{(i)}_{j+1} = argmin_y { r^i(y) + (1/(2α_{t−K+j}))(y − y^{(i)}_j)^2 }  for i = 1, ..., n.    (7)

Set j ← j + 1. If j ≤ K, then repeat Step 2. Otherwise, set x_{t+1} = y_{K+1}, set t ← t + 1, and go to Step 1.

In this paper, we refer to the above algorithm as L-FOBOS. Step 1 of L-FOBOS updates only x^{(i)}_{t+1} for i ∈ I_t using the gradient method. After K consecutive iterations of Step 1, Step 2 evaluates the deferred r. Here, Step 2 employs the same stepsizes as the most recent K iterations in Step 1. If Step 2 directly calculates (7) K times, then O(Kn) computations are required. However, if r satisfies the following property, then the number of computations required in Step 2 can be shown to be O(n).

Property 1. Let x_0 ∈ R^n, and let {x_t} be the sequence obtained by the following formula for t = 0, 1, ..., K − 1:

    x_{t+1} = argmin_x { (1/2)||x − x_t||^2 + α_{t+1} r(x) }.

Moreover, let x̄ be the optimal solution of the following optimization problem:

    x̄ = argmin_x { (1/2)||x − x_0||^2 + ( Σ_{t=1}^K α_t ) r(x) }.

Then, x_K = x̄ holds for any x_0.

Note that the L1 and L2 regularization terms and the indicator function for box constraints satisfy the above property [8, Proposition 11]. When r satisfies Property 1, we can aggregate all of the computations needed for the K calculations of (7) into the following O(n) formula:

    y^{(i)}_{K+1} = argmin_y { r^i(y) + (1/(2ᾱ_t))(y − x^{(i)}_{t+1})^2 }  (i = 1, ..., n),

where ᾱ_t is the sum of the stepsizes of the last K iterations at the t-th iteration; that is, ᾱ_t = Σ_{j=t−K+1}^{t} α_j, with α_j = 0 for j ≤ 0.
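As a concrete illustration, the following is a minimal sketch, assuming the L1 regularizer and NumPy arrays (the names are illustrative), of how Property 1 lets L-FOBOS collapse the K deferred proximal steps of (7) into a single soft-thresholding with the summed stepsize.

    import numpy as np

    def soft_threshold(v, tau):
        # proximal operator of tau * |.|, applied componentwise
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def deferred_prox_l1(x, alphas, lam):
        """Apply K deferred L1 proximal steps at once (Property 1).

        Sequentially applying prox with stepsizes alpha_1, ..., alpha_K equals a
        single prox with stepsize sum(alphas), so the cost is O(n) instead of O(Kn).
        """
        return soft_threshold(x, lam * np.sum(alphas))

    # sanity check: sequential application agrees with the aggregated one
    x = np.array([0.9, -0.4, 0.05])
    alphas = [0.1, 0.2, 0.15]
    seq = x.copy()
    for a in alphas:
        seq = soft_threshold(seq, 0.05 * a)
    print(np.allclose(seq, deferred_prox_l1(x, alphas, lam=0.05)))  # True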

By using L-FOBOS, we can update the sequence efficiently with respect to the computational complexity. However, the convergence of L-FOBOS might be slower than that of FOBOS for many problems, because we cannot sufficiently exploit the information given by r. Moreover, we cannot obtain a sparse sequence, even if we set r to be the L1 regularization term. Hence, in the next section we will present a novel lazy evaluation technique that can exploit the information given by r.

3 FOBOS with Component-wise Lazy Evaluation

As discussed in Section 2, applying the proximal point method to the components i ∉ I_t is inefficient for a problem with sparse gradients. On the other hand, it may not be useful to update the variables while ignoring r, as in the L-FOBOS method. Therefore, we propose the following method, which exploits both the sparsity of gradients and the decomposability of r. Note that the proposed algorithm adopts the lazy evaluation of r^i for x^{(i)} (i ∈ I_t) after the gradient method has been used to update x^{(i)}.

Algorithm 4 FOBOS with Component-wise Lazy Evaluation (CL-FOBOS)
Step 0: Let x_1 ∈ R^n and t = 1.
Step 1: Set I_t = { i | ∇_i f_t(x_t) ≠ 0 } and perform the gradient method:

    x̂^{(i)}_{t+1} = argmin_y { α_t ∇_i f_t(x_t) y + (1/2)(y − x^{(i)}_t)^2 }  if i ∈ I_t,  and  x̂^{(i)}_{t+1} = x^{(i)}_t  otherwise.    (8)

Step 2: Conduct the lazy evaluation of r^i for i ∈ I_t. Let y^{(i)}_1 = x̂^{(i)}_{t+1} and j^{(i)} = 1 for i ∈ I_t. For each i ∈ I_t, compute

    y^{(i)}_{j^{(i)}+1} = argmin_y { r^i(y) + (1/(2α_{τ_t(i)+j^{(i)}}))(y − y^{(i)}_{j^{(i)}})^2 },

and set j^{(i)} ← j^{(i)} + 1. If j^{(i)} ≤ k^{(i)}_t, repeat this computation for i. Once j^{(i)} > k^{(i)}_t for all i ∈ I_t, set x^{(i)}_{t+1} = y^{(i)}_{j^{(i)}} for i ∈ I_t and x^{(i)}_{t+1} = x^{(i)}_t for i ∉ I_t, set t ← t + 1, and go to Step 1.

Algorithm 4 is hereafter referred to as CL-FOBOS. Here, τ_t(i) denotes the most recent iteration at which x^{(i)} was updated prior to the t-th iteration, and k^{(i)}_t = t − τ_t(i) denotes the number of iterations that have elapsed since that iteration. Note that L-FOBOS only updates x using the information of r once every K iterations [10], while CL-FOBOS exploits r^i (i ∈ I_t) at each iteration. Step 2 of CL-FOBOS requires k^{(i)}_t proximal steps for each i ∈ I_t. However, we can aggregate all of the necessary computations in Step 2 if r satisfies Property 1 from Section 2. From this point on, we assume that r satisfies the following assumption.

Assumption 1. (i) The function r is decomposable with respect to each component of the decision variable x. (ii) The function r satisfies Property 1.

Under Assumption 1, Step 2 can be rewritten as

    x^{(i)}_{t+1} = argmin_y { (1/2)(y − x̂^{(i)}_{t+1})^2 + (α̌_{t+1} − α^{(i)}_t) r^i(y) }  if i ∈ I_t,  and  x^{(i)}_{t+1} = x̂^{(i)}_{t+1}  otherwise,
    α̌_{t+1} = α̌_t + α_t,
    α^{(i)}_{t+1} = α̌_{t+1} for i ∈ I_t,  and  α^{(i)}_{t+1} = α^{(i)}_t otherwise,    (9)

where α̌_t denotes the accumulated sum of the stepsizes, with α̌_0 = 0 and α^{(i)}_0 = 0. Then, α̌_{t+1} − α^{(i)}_t is the sum of the stepsizes from the iteration at which x^{(i)} was most recently updated up to the current iteration t. We see that the computation of the α^{(i)}_t only requires O(|I_t|) operations at each iteration. Hence, after we calculate ∇f_t(x_t), we can update x_t in O(|I_t|) operations.
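The following is a minimal sketch, assuming the L1 regularizer and sparse gradients given as dicts of nonzero components (the data structures and names are illustrative, not the authors' implementation), of the CL-FOBOS update (8)-(9): only the coordinates in I_t are touched, and each pays the L1 penalty accumulated since its last update.

    import numpy as np

    def cl_fobos_l1(x, sparse_grads, alphas, lam):
        """Run CL-FOBOS with r(x) = lam * ||x||_1.

        sparse_grads[t] is a dict {i: g_i} with the nonzero components of grad f_t(x_t);
        alphas[t] is the stepsize alpha_t. Per-iteration work is O(|I_t|).
        """
        cum = 0.0                       # accumulated stepsize (alpha-check in (9))
        last_cum = np.zeros(x.size)     # accumulated stepsize at each coordinate's last update
        for grad, alpha in zip(sparse_grads, alphas):
            cum_next = cum + alpha
            for i, g_i in grad.items():                 # i in I_t only
                x_hat = x[i] - alpha * g_i              # forward step (8)
                tau = lam * (cum_next - last_cum[i])    # deferred + current L1 weight, as in (9)
                x[i] = np.sign(x_hat) * max(abs(x_hat) - tau, 0.0)
                last_cum[i] = cum_next
            cum = cum_next
        return x

A usage example would pass, say, sparse_grads = [{0: 0.3, 3: -0.1}, {1: 0.2}] with a matching list of stepsizes and a float NumPy array x of appropriate length.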

4 Regret Bound of CL-FOBOS

In this section, we derive the regret bound of CL-FOBOS. We can rewrite (8) and (9) as

    x̂^{(i)}_{t+1} = argmin_y { α_t ∇_i f_t(x_t) y + (1/2)(y − x^{(i)}_t)^2 }  if i ∈ I_t,  and  x̂^{(i)}_{t+1} = x^{(i)}_t  otherwise,
    x^{(i)}_{t+1} = argmin_{y ∈ R} { (1/2)(y − x̂^{(i)}_{t+1})^2 + ( Σ_{j=τ_t(i)+1}^{t} α_j ) r^i(y) }  if i ∈ I_t,  and  x^{(i)}_{t+1} = x̂^{(i)}_{t+1}  otherwise,    (10)

where, as before, τ_t(i) denotes the most recent iteration at which x^{(i)} was updated prior to the t-th iteration. The regret of CL-FOBOS, R_cl(T), can then be written as

    R_cl(T) = Σ_{t=1}^T { f_t(x_t) + r(x_t) } − Σ_{t=1}^T { f_t(x*) + r(x*) },    (11)

where x* denotes an optimal solution of problem (1). We will analyze the upper bound of R_cl(T), and will show that lim_{T→∞} R_cl(T)/T = 0 holds under the following assumption.

Assumption 2.
(i) k^{(i)}_t ≤ K holds, where K is a positive constant.
(ii) There exists a positive constant D such that ||x_t − x*|| ≤ D for t = 1, ..., T.
(iii) There exists a positive constant G such that sup_{v_f ∈ ∂f_t(x_t)} ||v_f|| ≤ G and sup_{v_r ∈ ∂r(x_t)} ||v_r|| ≤ G for t = 1, ..., T.
(iv) The stepsize α_t is given as α_t = α_0/√t, where α_0 is a positive constant.

Assumption 2 (i) ensures that all of the components of {x_t} are updated at least once in every K iterations. The conditions (ii), (iii), and (iv) of Assumption 2 are the ones assumed in the regret analysis of FOBOS [8]. In particular, Assumption 2 (ii) implies the boundedness of {x_t}, and thus {r(x_t)} and {r^i(x^{(i)}_t)} are also bounded. Hence, there exist positive constants R and S such that

    r(x_t) ≤ R and r(x*) ≤ R  for t = 1, ..., T,    (12)
    r^i(x^{(i)}_t) ≤ S and r^i(x*^{(i)}) ≤ S  for i = 1, ..., n and t = 1, ..., T.    (13)

Finally, the stepsize given in Assumption 2 (iv) is often employed for online gradient methods. Under Assumptions 1 and 2, we now provide a regret bound for CL-FOBOS.

Theorem 1. Let {x_t} be a sequence generated by CL-FOBOS, and suppose that Assumptions 1 and 2 hold. Then, the regret of CL-FOBOS is bounded as

    R_cl(T) = Σ_{t=1}^T { f_t(x_t) + r(x_t) } − Σ_{t=1}^T { f_t(x*) + r(x*) } ≤ O(log T) + O(√T).

To prove Theorem 1, we use the existing result on the regret bound of FOBOS. Specifically, we regard the sequence {x_t} given by CL-FOBOS (10) as one generated by FOBOS for a certain convex optimization problem. We first see that (10) is equivalent to

    x^{(i)}_{t+1} = argmin_y { α_t ∇_i f_t(x_t) y + α_t r̃^i_t(y) + (1/2)(y − x^{(i)}_t)^2 },    (14)

where r̃^i_t : R → R is defined as

    r̃^i_t(y) = c^{(i)}_t r^i(y),   with   c^{(i)}_t = (1/α_t) Σ_{j=τ_t(i)+1}^{t} α_j  if i ∈ I_t,  and  c^{(i)}_t = 0  otherwise.

Note that (14) generates the same sequence as (10). The update (14) constitutes the FOBOS procedure for the following problem, with the loss f_t and the regularizer r̃_t = Σ_{i=1}^n r̃^i_t:

    min_x  Σ_{t=1}^T f_t(x) + Σ_{i=1}^n r̃^i_t(x^{(i)}).    (15)

Note that r̃^i_t in problem (15) depends on t, whereas r(x) of problem (1) does not. Hereafter, we refer to (14) as Algorithm γ. Because Algorithm γ constitutes the FOBOS procedure for problem (15), [8] demonstrates that its regret (16) is bounded as

    R_γ(T) = Σ_{t=1}^T {( f_t(x_t) + Σ_{i=1}^n c^{(i)}_t r^i(x^{(i)}_t) ) − ( f_t(x*) + Σ_{i=1}^n c^{(i)}_t r^i(x*^{(i)}) )} ≤ U_γ(T),    (16)

where U_γ(T) = 2GD + ( D^2/(2α_0) + 7G^2 α_0 ) √T. Note that R_cl(T) is different from R_γ(T), whereas the sequence {x_t} in R_cl(T) is the same as that in R_γ(T). In fact, the coefficients c^{(i)}_t in r̃^i_t influence R_γ(T). We will first evaluate the upper bound of R_cl(T) by using R_γ(T). From a simple rearranging, we obtain

    R_cl(T) = Σ_{t=1}^T {( f_t(x_t) + Σ_{i=1}^n c^{(i)}_t r^i(x^{(i)}_t) ) − ( f_t(x*) + Σ_{i=1}^n c^{(i)}_t r^i(x*^{(i)}) )}
              + Σ_{t=1}^T Σ_{i=1}^n (1 − c^{(i)}_t)( r^i(x^{(i)}_t) − r^i(x*^{(i)}) )
            = R_γ(T) + Σ_{t=1}^T Σ_{i=1}^n (1 − c^{(i)}_t)( r^i(x^{(i)}_t) − r^i(x*^{(i)}) ).    (17)

Throughout the subsequent discussions, let k^{(i)}_t and c̄^{(i)}_t be given by

    k^{(i)}_t = t − τ_t(i),    (18)
    c̄^{(i)}_t = c^{(i)}_t / k^{(i)}_t  for i ∈ I_t.    (19)

Using these notations, we have the following lemma.

Lemma 1. Let {x_t} be a sequence generated by CL-FOBOS, and suppose that Assumptions 1 and 2 hold. Then, the following inequality holds:

    R_cl(T) ≤ U_γ(T) + Σ_{t=1}^T Σ_{i∈I_t} k^{(i)}_t |1 − c̄^{(i)}_t| ( r^i(x^{(i)}_t) + r^i(x*^{(i)}) ) + Σ_{i=1}^n Σ_{t=1}^T (1 − κ^{(i)}_t) r^i(x^{(i)}_t) + nKS,

where κ^{(i)}_t = k^{(i)}_t if i ∈ I_t and κ^{(i)}_t = 0 otherwise.

Proof. By (17), it suffices to evaluate the second term of the right-hand side in (17); that is,

    Σ_{t=1}^T Σ_{i=1}^n (1 − c^{(i)}_t)( r^i(x^{(i)}_t) − r^i(x*^{(i)}) ).    (20)

We split 1 − c^{(i)}_t = (1 − κ^{(i)}_t) + (κ^{(i)}_t − c^{(i)}_t). First, because r^i(x*^{(i)}) is independent of t, and because Σ_{t=1}^T κ^{(i)}_t = T^{(i)}, where T^{(i)} denotes the most recent iteration at which x^{(i)} was updated prior to the T-th iteration, we obtain

    −Σ_{t=1}^T Σ_{i=1}^n (1 − κ^{(i)}_t) r^i(x*^{(i)}) = −Σ_{i=1}^n (T − T^{(i)}) r^i(x*^{(i)}),    (21)

and from condition (i) of Assumption 2, (13), and (18), the magnitude of the right-hand side of (21) is at most

    Σ_{i=1}^n (T − T^{(i)}) r^i(x*^{(i)}) ≤ nKS.

Moreover, because c^{(i)}_t = κ^{(i)}_t = 0 for i ∉ I_t, and because |κ^{(i)}_t − c^{(i)}_t| = k^{(i)}_t |1 − c̄^{(i)}_t| by (19), we obtain

    Σ_{t=1}^T Σ_{i=1}^n (κ^{(i)}_t − c^{(i)}_t)( r^i(x^{(i)}_t) − r^i(x*^{(i)}) ) ≤ Σ_{t=1}^T Σ_{i∈I_t} k^{(i)}_t |1 − c̄^{(i)}_t| ( r^i(x^{(i)}_t) + r^i(x*^{(i)}) ),    (22)

where we also used the fact that each r^i is nonnegative for the regularizers considered here.

By substituting (21) and (22) into (20), together with the remaining term Σ_{t=1}^T Σ_{i=1}^n (1 − κ^{(i)}_t) r^i(x^{(i)}_t), we have

    R_cl(T) ≤ R_γ(T) + Σ_{t=1}^T Σ_{i∈I_t} k^{(i)}_t |1 − c̄^{(i)}_t| ( r^i(x^{(i)}_t) + r^i(x*^{(i)}) ) + Σ_{i=1}^n Σ_{t=1}^T (1 − κ^{(i)}_t) r^i(x^{(i)}_t) + nKS
            ≤ U_γ(T) + Σ_{t=1}^T Σ_{i∈I_t} k^{(i)}_t |1 − c̄^{(i)}_t| ( r^i(x^{(i)}_t) + r^i(x*^{(i)}) ) + Σ_{i=1}^n Σ_{t=1}^T (1 − κ^{(i)}_t) r^i(x^{(i)}_t) + nKS,    (23)

where the last inequality of (23) follows from (16). This proves the lemma.

From Lemma 1, we can now write the upper bound of the regret R_cl(T) as

    R_cl(T) ≤ U_γ(T) + P(T) + Q(T) + nKS,    (24)

where P(T) and Q(T) are defined by

    P(T) = Σ_{t=1}^T Σ_{i∈I_t} k^{(i)}_t |1 − c̄^{(i)}_t| ( r^i(x^{(i)}_t) + r^i(x*^{(i)}) ),    (25)
    Q(T) = Σ_{i=1}^n Σ_{t=1}^T (1 − κ^{(i)}_t) r^i(x^{(i)}_t),    (26)

respectively. We will now evaluate the upper bounds of P(T) and Q(T). For this purpose, let us first derive an upper bound for the term |1 − c̄^{(i)}_t|.

Lemma 2. Suppose that Assumption 2 holds. Then, we have

    |1 − c̄^{(i)}_t| ≤ δ(t)

for i ∈ I_t, where δ(t) is given by

    δ(t) = √t − 1  if t ≤ K − 1,  and  δ(t) = (K − 1)/(2(t − K + 1))  otherwise.

12 for i I, where δ() is given by δ() { 1 if K 1 K 1 2( K+1) oerwise. Proof. From he definiions of c (i) and c (i), we have for each i I, 1 c (i) 1 c(i) 1 j (i) α +1 α j. (27) Noe ha α j α holds for j, because i follows from Assumpion 2 (iv) ha α α 0. Then, i follows from (18) ha α j (i) +1 j α α j (i) +1 α ( (i) )α α 1. (28) Moreover, from (27) and (28), we obain ha 1 j (i) α +1 α j α j (i) +1 j α j (i) 1 +1(α j α ) α. (29) Nex, le us consider he upper bound of α j in (29). When > K 1, we have ha 0 < K + 1 (i) + 1, from Assumpion 2 (i). Then, we have ha α j α K+1 holds for 12

j = τ_t(i)+1, ..., t, which follows from Assumption 2 (iv). Hence, (29) can be bounded as

    |1 − c̄^{(i)}_t| ≤ ( k^{(i)}_t (α_{t−K+1} − α_t) ) / ( k^{(i)}_t α_t ) = (α_{t−K+1} − α_t)/α_t
                    = √t/√(t−K+1) − 1
                    = (K − 1) / ( √(t−K+1) ( √t + √(t−K+1) ) )
                    ≤ (K − 1) / ( 2(t − K + 1) ).    (30)

On the other hand, when t ≤ K − 1, α_j ≤ α_1 = α_0 holds for any j. Then it follows from (18) that (29) can be evaluated as

    |1 − c̄^{(i)}_t| ≤ ( k^{(i)}_t (α_0 − α_t) ) / ( k^{(i)}_t α_t ) = (α_0 − α_t)/α_t = √t − 1.    (31)

Consequently, we have from (27), (30), and (31) that |1 − c̄^{(i)}_t| ≤ δ(t).

Using Lemma 2, we can give an upper bound for P(T) as follows.

Lemma 3. Suppose that Assumptions 1 and 2 hold. Then, the following inequality holds:

    P(T) ≤ 2KR( (2/3)K√K − K + 1 ) + KR(K − 1)( 1 + log(T − K + 1) ).

Proof. By applying Assumption 2 (i) and Lemma 2 to the definition (25) of P(T), we get

    P(T) ≤ K Σ_{t=1}^T Σ_{i∈I_t} |1 − c̄^{(i)}_t| ( r^i(x^{(i)}_t) + r^i(x*^{(i)}) )
         ≤ K Σ_{t=1}^{K−1} δ(t) Σ_{i∈I_t} ( r^i(x^{(i)}_t) + r^i(x*^{(i)}) ) + K Σ_{t=K}^{T} δ(t) Σ_{i∈I_t} ( r^i(x^{(i)}_t) + r^i(x*^{(i)}) ),

where the first inequality follows from Assumption 2 (i). Moreover, by using (12) we have

    P(T) ≤ 2KR Σ_{t=1}^{K−1} (√t − 1) + 2KR Σ_{t=K}^{T} (K − 1)/(2(t − K + 1)).    (32)

The first term of the right-hand side in (32) can be evaluated as

    Σ_{t=1}^{K−1} (√t − 1) ≤ ∫_1^K (√t − 1) dt < (2/3)K√K − K + 1.

Moreover, the second term of the right-hand side in (32) is bounded above, as

    Σ_{t=K}^{T} 1/(t − K + 1) = Σ_{m=1}^{T−K+1} 1/m ≤ 1 + log(T − K + 1).

By substituting these two inequalities into (32), we obtain the desired inequality.

Finally, we now derive the upper bound of Q(T) in (24).

Lemma 4. Let {x_t} be a sequence generated by CL-FOBOS, and suppose that Assumptions 1 and 2 hold. Then, it holds that Q(T) ≤ nKS.

Proof. To simplify, let Q^{(i)}(T) and Q̄^{(i)}(T) be given by

    Q^{(i)}(T) = Σ_{t=1}^{T^{(i)}} (1 − κ^{(i)}_t) r^i(x^{(i)}_t),
    Q̄^{(i)}(T) = Σ_{t=T^{(i)}+1}^{T} (1 − κ^{(i)}_t) r^i(x^{(i)}_t),

respectively, where T^{(i)} denotes the most recent iteration at which x^{(i)} was updated prior to the T-th iteration, and recall that each κ^{(i)}_t is given by

    κ^{(i)}_t = k^{(i)}_t  if i ∈ I_t,  and  κ^{(i)}_t = 0  otherwise.

First, it follows directly from these definitions and (26) that

    Q(T) = Σ_{i=1}^n { Q^{(i)}(T) + Q̄^{(i)}(T) }.    (33)

From this identity, we may evaluate Q^{(i)}(T) and Q̄^{(i)}(T) separately to obtain the upper bound of Q(T). Next, we define some notation to be employed in these evaluations. Let l(i, m) be the iteration at which x^{(i)} has been updated m times, with l(i, 0) = 0. Furthermore, let M^{(i)}_T denote the number of times that x^{(i)} has been updated by the T-th iteration. Note that M^{(i)}_T satisfies

    l(i, M^{(i)}_T) = T^{(i)},    (34)

because the most recent iteration at which x^{(i)} has been updated before the T-th iteration is the same as that at which x^{(i)} has been updated M^{(i)}_T times. We now consider the properties of the sequence {x^{(i)}_t} with respect to l(i, m) and M^{(i)}_T. Because {x^{(i)}_t} is the sequence generated by (10), x^{(i)} is only updated when i ∈ I_t. Therefore, for t = 1, ..., T^{(i)},

    i ∈ I_t  if and only if  t = l(i, m) for some m ∈ {1, ..., M^{(i)}_T},    (35)
    i ∉ I_t  if and only if  t ∈ {l(i, m)+1, ..., l(i, m+1)−1} for some m ∈ {0, ..., M^{(i)}_T − 1}.    (36)

In particular, x^{(i)}_{t+1} = x^{(i)}_t holds at each iteration t such that i ∉ I_t. Then, it follows from (36) that

    x^{(i)}_t = x^{(i)}_{l(i,m)+1}   for t = l(i, m)+1, ..., l(i, m+1).    (37)

Now, using these notations, we evaluate Q^{(i)}(T) and Q̄^{(i)}(T). First, we can use (35) and (36) to rewrite Q^{(i)}(T) as

    Q^{(i)}(T) = Σ_{m=0}^{M^{(i)}_T − 1} [ Σ_{t=l(i,m)+1}^{l(i,m+1)−1} r^i(x^{(i)}_t) − ( k^{(i)}_{l(i,m+1)} − 1 ) r^i(x^{(i)}_{l(i,m+1)}) ].

Here, note that

    ( k^{(i)}_{l(i,m+1)} − 1 ) r^i(x^{(i)}_{l(i,m+1)}) = Σ_{t=l(i,m)+1}^{l(i,m+1)−1} r^i(x^{(i)}_{l(i,m+1)}),

because k^{(i)}_{l(i,m+1)} = l(i, m+1) − l(i, m). Therefore, we have that

    Q^{(i)}(T) = Σ_{m=0}^{M^{(i)}_T − 1} Σ_{t=l(i,m)+1}^{l(i,m+1)−1} ( r^i(x^{(i)}_t) − r^i(x^{(i)}_{l(i,m+1)}) ) = 0,    (38)

where the last equation follows from (37). On the other hand, because i ∉ I_t, and hence κ^{(i)}_t = 0, for t = T^{(i)}+1, ..., T, we have from conditions (i) of Assumption 2 and (13) that

    Q̄^{(i)}(T) = Σ_{t=T^{(i)}+1}^{T} r^i(x^{(i)}_t) ≤ KS.    (39)

Furthermore, from (33), (38), and (39), we obtain Q(T) ≤ nKS.

Proof of Theorem 1. Theorem 1 follows immediately from Lemma 3, Lemma 4, (16), and (24).

Theorem 1 yields that lim_{T→∞} R_cl(T)/T = 0. We can regard CL-FOBOS not only as an online gradient method, but also as a stochastic gradient method for the following stochastic programming problem:

    min_x  E_z[ θ(x, z) ],    (40)

where z is a random variable, and θ is the loss function with respect to z and a parameter x. Problem (40) is approximately equivalent to the following problem:

    min_x  ϕ(x) = (1/T) Σ_{t=1}^T θ(x, z_t),    (41)

where z_t represents a realization of the random variable for each sample. Now, we obtain the following theorem in a way similar to [15].

Theorem 2. Let {x_t} be the sequence generated by CL-FOBOS for problem (41), and suppose that Assumptions 1 and 2 hold. Then, we have

    lim_{T→∞} E[ ϕ( (1/T) Σ_{t=1}^T x_t ) ] − ϕ* = 0,

where ϕ* is the optimal value of problem (40).

5 Numerical Experiments

In this section, we present some numerical results using CL-FOBOS in order to verify its efficiency. The experiments are carried out on a machine with a 1.7 GHz Intel Core i7 CPU and 8 GB memory, and we implement all codes in C++. We set the initial point x_1 to 0 in all experiments. We compare the performance of CL-FOBOS with L-FOBOS [10] for binary classification problems. We use the logistic function f_t(x) = log( 1 + exp(−b_t ⟨a_t, x⟩) ) and the L1 regularization term r(x) = λ Σ_{i=1}^n |x^(i)|, where λ is a positive constant that controls the sparsity of solutions, and (a_t, b_t) denotes the t-th sample data.

We use an Amazon review dataset, called the Multi-Domain Sentiment Dataset [3], for the experiments. This dataset contains many merchandise reviews from Amazon.com. We attempt to divide the reviews into two classes, those expressing positive impressions of the merchandise and those that are negative. The data for the t-th review is composed of a vector a_t ∈ R^n and a label b_t ∈ {+1, −1}. The vector a_t ∈ R^n is constructed by the so-called Bag-of-Words process, and a_t^(i) represents the number of times that the word represented by index i appears in the t-th review. Hence, a_t is regarded as a feature vector of the t-th review. Meanwhile, the label b_t indicates the evaluation of the merchandise given by the author of the t-th review, where +1 represents a positive impression and −1 represents a negative one. In the experiments, we only use the Book domain from the dataset. The Book domain is comprised of 4,465 samples, and the total number of words in all samples is 332,440. Each feature vector a_t contains at least 11 and at most 5,412 nonzero elements.
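To illustrate why the gradients in this experiment are sparse, the following is a minimal sketch, assuming a bag-of-words sample stored as a dict of word counts (the representation and names are illustrative), of the logistic-loss gradient for one sample: its nonzero components are exactly the nonzero components of a_t, so I_t contains at most as many indices as the review has distinct words.

    import math

    def logistic_grad_sparse(x, a_sparse, b):
        """Gradient of f_t(x) = log(1 + exp(-b <a, x>)) for one sparse sample.

        a_sparse is a dict {word_index: count}; the returned dict holds only the
        nonzero gradient components, i.e. the index set I_t used by CL-FOBOS.
        """
        inner = sum(v * x.get(i, 0.0) for i, v in a_sparse.items())  # <a_t, x>
        sigma = 1.0 / (1.0 + math.exp(b * inner))
        return {i: -b * sigma * v for i, v in a_sparse.items()}

    # toy usage: a three-word review with label +1
    x = {0: 0.2, 5: -0.1}
    print(logistic_grad_sparse(x, {0: 2, 3: 1, 7: 1}, b=+1))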

The number of samples, 4,465, seems small for comparing online gradient methods. Therefore, we constructed a large dataset using the 4,465 samples. That is, we randomly chose 100,000 samples from the original 4,465, and we employed this as the dataset for the experiments.

We compare CL-FOBOS with L-FOBOS in the following four aspects: the average of the total loss R̄(T) defined in (42), the rate of correct estimations, the number of nonzero components in {x_t}, and the run time.

    R̄(T) = (1/T) Σ_{t=1}^T { f_t(x_t) + Σ_{i=1}^n r^i(x^{(i)}_t) }.    (42)

Note that R̄(T) can be viewed as an averaged counterpart of the regret R(T). The rate of correct estimations is defined as follows. We estimate the class of a sample at each iteration using ⟨a_t, x_t⟩, and then count the number of correct estimations; that is, the number of samples such that b_t ⟨a_t, x_t⟩ > 0. The rate of correct estimations is then the number of correct estimations divided by the number of samples that have been observed.

We tested both methods with α_t = α_0/√t, where α_0 denotes the initial stepsize, chosen from a grid of values up to 1.0. We present the results achieved by each method with the best choice of α_0; that is, we choose the α_0 for which the smallest R̄(T) is obtained.

Tables 1-6 show the run time, R̄(T), and the rate of correct estimations for both methods. All of the numbers in the tables indicate an average taken over 10 trials for 10 randomly generated datasets. We can see from Tables 1-6 that the run time of L-FOBOS is shortened as K becomes larger. This is because, for large K, L-FOBOS rarely carries out Step 2, which requires O(n) computations. We also observe that the rate of correct estimations for CL-FOBOS is equal to or better than that of L-FOBOS for moderately large values of λ. This shows that CL-FOBOS effectively exploits the information given by r^i when λ is sufficiently large. However, CL-FOBOS achieved a remarkably low accuracy rate when λ is very large. The main reason for this is that most of the components of x_t become close to zero at each iteration in CL-FOBOS, whereas L-FOBOS only considers the loss functions, ignoring r, at iterations such that t mod K ≠ 0. Thus, this result does not imply that CL-FOBOS is inferior to L-FOBOS.

Next, we consider the relation between R̄(T) and the run time. Figure 1 presents a summary of Tables 1-6 in terms of R̄(T) and the run time. From this figure, we can confirm the tradeoff of L-FOBOS as we change K when λ is relatively large. Furthermore, there exists no parameter K that enables L-FOBOS to outperform CL-FOBOS in terms of both the run time and R̄(T). These results indicate that, to a certain degree, CL-FOBOS is more efficient when λ is large. In addition, CL-FOBOS does not require tuning of the parameter K, which is an advantage of the proposed method. On the other hand, when λ is sufficiently small, such a tradeoff does not occur, because the influence of the L1 regularization term is very small.

Finally, we examine the sparsity of the solutions {x_t} generated by both methods. Figure 2 illustrates the percentage of nonzero components in {x_t} for two values of λ (a larger value and λ = 10^{-5}). The red lines indicate the performance of CL-FOBOS, and the green lines indicate that of L-FOBOS. For the larger value of λ, we employ K = 400 for L-FOBOS; for λ = 10^{-5}, we choose K = 750 for this method. These candidates for K and the corresponding α_0 are chosen because L-FOBOS achieves the same run time with them as CL-FOBOS.

Table 1: Results for each algorithm, λ = 10^{-1}. Columns: Algorithm Name, α_0, Run Time, R̄(T), Rate of Correct Estimations. Rows: L-FOBOS (K = 100), L-FOBOS (K = 500), L-FOBOS (K = 1000), CL-FOBOS.

Table 2: Results for each algorithm, λ = 10^{-2}. Same columns and rows as Table 1.

Table 3: Results for each algorithm. Same columns and rows as Table 1.

Table 4: Results for each algorithm. Same columns and rows as Table 1.

Table 5: Results for each algorithm, λ = 10^{-5}. Same columns and rows as Table 1.

Table 6: Results for each algorithm, λ = 10^{-7}. Same columns and rows as Table 1.

Figure 1: Relation between R̄(T) and the run time. Panels (a)-(f) correspond to the six values of λ in Tables 1-6, including (a) λ = 10^{-1}, (b) λ = 10^{-2}, (e) λ = 10^{-5}, and (f) λ = 10^{-7}.

Figure 2: The number of nonzero components in {x_t}; panel (a) shows the larger value of λ, and panel (b) shows λ = 10^{-5}.

In this figure, we observe that the green lines are jagged. This is because x_t becomes radically sparse when t mod K = 0, on account of the lazy evaluation. In particular, for the larger value of λ, the solution x_t obtained by L-FOBOS becomes approximately zero at such iterations. Meanwhile, when we set λ = 10^{-5}, we see that both methods generate denser solutions, because the L1 regularization is almost ignored. In addition, we can observe the results of classification for the original 4,465 samples using several solutions, such as x_399, x_400, x_9999, x_10000, x_99999, and x_100000. The rate and number of correct estimations for each solution are summarized in Table 7. Moreover, Figure 3 illustrates the results.

Table 7: Rate of correct estimations. The numbers in parentheses denote the number of correct estimations.

    Solution           | x_399         | x_400         | x_9999        | x_10000       | x_99999       | x_100000
    L-FOBOS (K = 400)  | 68.7% (3,066) | 57.3% (2,953) | 78.4% (3,501) | 77.9% (3,476) | 79.2% (3,537) | 79.0% (3,529)
    CL-FOBOS           | 67.5% (3,015) | 67.4% (3,020) | 78.5% (3,504) | 78.5% (3,506) | 79.1% (3,533) | 79.0% (3,530)

Figure 3: Number of correct estimations.

We observe from Table 7 and Figure 3 that the rate of correct estimations decreases in L-FOBOS following its lazy evaluation.

This means that L-FOBOS cannot achieve the two aims of generating sparse solutions and minimizing the total loss simultaneously. On the other hand, CL-FOBOS is well-balanced. That is, it always creates sparse solutions, and its accuracy improves as it observes samples. In conclusion, we confirm that the proposed method, CL-FOBOS, is more efficient for problems that are strongly influenced by the function r.

6 Conclusion

In this paper, we have proposed a novel algorithm, CL-FOBOS, for large-scale problems with sparse gradients. Furthermore, we have derived a bound on the regret of CL-FOBOS. Finally, we confirmed the efficiency of the proposed method using numerical experiments. As a direction for future work, it would be worth investigating a reasonable stepsize rule to be determined before CL-FOBOS starts.

References

[1] Bertsekas, D. P.: Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey, Lab. for Information and Decision Systems Report LIDS-P-2848, MIT, pp. 1-38, 2010.

[2] Bishop, C.: Pattern Recognition and Machine Learning, Springer-Verlag New York, 2006.

[3] Blitzer, J., Dredze, M., Pereira, F.: Domain Adaptation for Sentiment Classification, In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pp. 440-447, 2007.

[4] Blondel, M., Seki, K., Uehara, K.: Block Coordinate Descent Algorithms for Large-scale Sparse Multiclass Classification, Machine Learning, volume 93, pp. 31-52, 2013.

[5] Bottou, L.: Online Algorithms and Stochastic Approximations, Online Learning and Neural Networks, pp. 9-42, 1998.

[6] Cesa-Bianchi, N.: Analysis of Two Gradient-based Algorithms for On-line Regression, Journal of Computer and System Sciences, volume 59, pp. 392-411, 1999.

[7] Dredze, M., Crammer, K., Pereira, F.: Confidence-weighted Linear Classification, In Proceedings of the International Conference on Machine Learning, pp. 264-271, 2008.

[8] Duchi, J., Singer, Y.: Efficient Online and Batch Learning Using Forward Backward Splitting, The Journal of Machine Learning Research, volume 10, pp. 2899-2934, 2009.

[9] Duchi, J., Shalev-Shwartz, S., Singer, Y., Tewari, A.: Composite Objective Mirror Descent, In Conference on Learning Theory (COLT), pp. 14-26, 2010.

[10] Langford, J., Li, L., Zhang, T.: Sparse Online Learning via Truncated Gradient, The Journal of Machine Learning Research, volume 10, pp. 777-801, 2009.

[11] Lipton, Z., Elkan, C.: Efficient Elastic Net Regularization for Sparse Linear Models, arXiv preprint, 24 May 2015.

[12] Togari, Y., Yamashita, N.: Regret Analysis of an Online Optimization Algorithm that Performs Lazy Evaluation, Bachelor thesis, Kyoto University, March 2014, http://www-optima.amp.i.kyoto-u.ac.jp/papers/bachelor/2014_bachelor_togari.pdf [in Japanese].

[13] Togari, Y., Yamashita, N.: Regret Analysis of a Forward-Backward Splitting Method with Component-wise Lazy Evaluation, In Proceedings of the Spring Meeting of the Operations Research Society of Japan, 2015 [in Japanese].

[14] Shalev-Shwartz, S.: Online Learning and Online Convex Optimization, Foundations and Trends in Machine Learning, volume 4, no. 2, pp. 107-194, 2012.

[15] Xiao, L.: Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization, The Journal of Machine Learning Research, volume 11, pp. 2543-2596, 2010.

[16] Zinkevich, M.: Online Convex Programming and Generalized Infinitesimal Gradient Ascent, In Proceedings of the Twentieth International Conference on Machine Learning (ICML), pp. 928-936, 2003.


More information

Chapter 6. Systems of First Order Linear Differential Equations

Chapter 6. Systems of First Order Linear Differential Equations Chaper 6 Sysems of Firs Order Linear Differenial Equaions We will only discuss firs order sysems However higher order sysems may be made ino firs order sysems by a rick shown below We will have a sligh

More information

Class Meeting # 10: Introduction to the Wave Equation

Class Meeting # 10: Introduction to the Wave Equation MATH 8.5 COURSE NOTES - CLASS MEETING # 0 8.5 Inroducion o PDEs, Fall 0 Professor: Jared Speck Class Meeing # 0: Inroducion o he Wave Equaion. Wha is he wave equaion? The sandard wave equaion for a funcion

More information

Existence of positive solution for a third-order three-point BVP with sign-changing Green s function

Existence of positive solution for a third-order three-point BVP with sign-changing Green s function Elecronic Journal of Qualiaive Theory of Differenial Equaions 13, No. 3, 1-11; hp://www.mah.u-szeged.hu/ejqde/ Exisence of posiive soluion for a hird-order hree-poin BVP wih sign-changing Green s funcion

More information

Designing Information Devices and Systems I Spring 2019 Lecture Notes Note 17

Designing Information Devices and Systems I Spring 2019 Lecture Notes Note 17 EES 16A Designing Informaion Devices and Sysems I Spring 019 Lecure Noes Noe 17 17.1 apaciive ouchscreen In he las noe, we saw ha a capacior consiss of wo pieces on conducive maerial separaed by a nonconducive

More information

Christos Papadimitriou & Luca Trevisan November 22, 2016

Christos Papadimitriou & Luca Trevisan November 22, 2016 U.C. Bereley CS170: Algorihms Handou LN-11-22 Chrisos Papadimiriou & Luca Trevisan November 22, 2016 Sreaming algorihms In his lecure and he nex one we sudy memory-efficien algorihms ha process a sream

More information

Single-Pass-Based Heuristic Algorithms for Group Flexible Flow-shop Scheduling Problems

Single-Pass-Based Heuristic Algorithms for Group Flexible Flow-shop Scheduling Problems Single-Pass-Based Heurisic Algorihms for Group Flexible Flow-shop Scheduling Problems PEI-YING HUANG, TZUNG-PEI HONG 2 and CHENG-YAN KAO, 3 Deparmen of Compuer Science and Informaion Engineering Naional

More information

Chapter 3 Boundary Value Problem

Chapter 3 Boundary Value Problem Chaper 3 Boundary Value Problem A boundary value problem (BVP) is a problem, ypically an ODE or a PDE, which has values assigned on he physical boundary of he domain in which he problem is specified. Le

More information

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model Modal idenificaion of srucures from roving inpu daa by means of maximum likelihood esimaion of he sae space model J. Cara, J. Juan, E. Alarcón Absrac The usual way o perform a forced vibraion es is o fix

More information

Aryan Mokhtari, Wei Shi, Qing Ling, and Alejandro Ribeiro. cost function n

Aryan Mokhtari, Wei Shi, Qing Ling, and Alejandro Ribeiro. cost function n IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, VOL. 2, NO. 4, DECEMBER 2016 507 A Decenralized Second-Order Mehod wih Exac Linear Convergence Rae for Consensus Opimizaion Aryan Mokhari,

More information

Boosting with Online Binary Learners for the Multiclass Bandit Problem

Boosting with Online Binary Learners for the Multiclass Bandit Problem Shang-Tse Chen School of Compuer Science, Georgia Insiue of Technology, Alana, GA Hsuan-Tien Lin Deparmen of Compuer Science and Informaion Engineering Naional Taiwan Universiy, Taipei, Taiwan Chi-Jen

More information

Module 2 F c i k c s la l w a s o s f dif di fusi s o i n

Module 2 F c i k c s la l w a s o s f dif di fusi s o i n Module Fick s laws of diffusion Fick s laws of diffusion and hin film soluion Adolf Fick (1855) proposed: d J α d d d J (mole/m s) flu (m /s) diffusion coefficien and (mole/m 3 ) concenraion of ions, aoms

More information

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still.

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still. Lecure - Kinemaics in One Dimension Displacemen, Velociy and Acceleraion Everyhing in he world is moving. Nohing says sill. Moion occurs a all scales of he universe, saring from he moion of elecrons in

More information

Georey E. Hinton. University oftoronto. Technical Report CRG-TR February 22, Abstract

Georey E. Hinton. University oftoronto.   Technical Report CRG-TR February 22, Abstract Parameer Esimaion for Linear Dynamical Sysems Zoubin Ghahramani Georey E. Hinon Deparmen of Compuer Science Universiy oftorono 6 King's College Road Torono, Canada M5S A4 Email: zoubin@cs.orono.edu Technical

More information

RL Lecture 7: Eligibility Traces. R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction 1

RL Lecture 7: Eligibility Traces. R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction 1 RL Lecure 7: Eligibiliy Traces R. S. Suon and A. G. Baro: Reinforcemen Learning: An Inroducion 1 N-sep TD Predicion Idea: Look farher ino he fuure when you do TD backup (1, 2, 3,, n seps) R. S. Suon and

More information

Deep Learning: Theory, Techniques & Applications - Recurrent Neural Networks -

Deep Learning: Theory, Techniques & Applications - Recurrent Neural Networks - Deep Learning: Theory, Techniques & Applicaions - Recurren Neural Neworks - Prof. Maeo Maeucci maeo.maeucci@polimi.i Deparmen of Elecronics, Informaion and Bioengineering Arificial Inelligence and Roboics

More information

Online Learning with Partial Feedback. 1 Online Mirror Descent with Estimated Gradient

Online Learning with Partial Feedback. 1 Online Mirror Descent with Estimated Gradient Avance Course in Machine Learning Spring 2010 Online Learning wih Parial Feeback Hanous are joinly prepare by Shie Mannor an Shai Shalev-Shwarz In previous lecures we alke abou he general framework of

More information

A Specification Test for Linear Dynamic Stochastic General Equilibrium Models

A Specification Test for Linear Dynamic Stochastic General Equilibrium Models Journal of Saisical and Economeric Mehods, vol.1, no.2, 2012, 65-70 ISSN: 2241-0384 (prin), 2241-0376 (online) Scienpress Ld, 2012 A Specificaion Tes for Linear Dynamic Sochasic General Equilibrium Models

More information

Applying Genetic Algorithms for Inventory Lot-Sizing Problem with Supplier Selection under Storage Capacity Constraints

Applying Genetic Algorithms for Inventory Lot-Sizing Problem with Supplier Selection under Storage Capacity Constraints IJCSI Inernaional Journal of Compuer Science Issues, Vol 9, Issue 1, No 1, January 2012 wwwijcsiorg 18 Applying Geneic Algorihms for Invenory Lo-Sizing Problem wih Supplier Selecion under Sorage Capaciy

More information

Multi-scale 2D acoustic full waveform inversion with high frequency impulsive source

Multi-scale 2D acoustic full waveform inversion with high frequency impulsive source Muli-scale D acousic full waveform inversion wih high frequency impulsive source Vladimir N Zubov*, Universiy of Calgary, Calgary AB vzubov@ucalgaryca and Michael P Lamoureux, Universiy of Calgary, Calgary

More information

Types of Exponential Smoothing Methods. Simple Exponential Smoothing. Simple Exponential Smoothing

Types of Exponential Smoothing Methods. Simple Exponential Smoothing. Simple Exponential Smoothing M Business Forecasing Mehods Exponenial moohing Mehods ecurer : Dr Iris Yeung Room No : P79 Tel No : 788 8 Types of Exponenial moohing Mehods imple Exponenial moohing Double Exponenial moohing Brown s

More information

BU Macro BU Macro Fall 2008, Lecture 4

BU Macro BU Macro Fall 2008, Lecture 4 Dynamic Programming BU Macro 2008 Lecure 4 1 Ouline 1. Cerainy opimizaion problem used o illusrae: a. Resricions on exogenous variables b. Value funcion c. Policy funcion d. The Bellman equaion and an

More information

Robotics I. April 11, The kinematics of a 3R spatial robot is specified by the Denavit-Hartenberg parameters in Tab. 1.

Robotics I. April 11, The kinematics of a 3R spatial robot is specified by the Denavit-Hartenberg parameters in Tab. 1. Roboics I April 11, 017 Exercise 1 he kinemaics of a 3R spaial robo is specified by he Denavi-Harenberg parameers in ab 1 i α i d i a i θ i 1 π/ L 1 0 1 0 0 L 3 0 0 L 3 3 able 1: able of DH parameers of

More information

Lecture 4: November 13

Lecture 4: November 13 Compuaional Learning Theory Fall Semeser, 2017/18 Lecure 4: November 13 Lecurer: Yishay Mansour Scribe: Guy Dolinsky, Yogev Bar-On, Yuval Lewi 4.1 Fenchel-Conjugae 4.1.1 Moivaion Unil his lecure we saw

More information

KEY. Math 334 Midterm III Winter 2008 section 002 Instructor: Scott Glasgow

KEY. Math 334 Midterm III Winter 2008 section 002 Instructor: Scott Glasgow KEY Mah 334 Miderm III Winer 008 secion 00 Insrucor: Sco Glasgow Please do NOT wrie on his exam. No credi will be given for such work. Raher wrie in a blue book, or on your own paper, preferably engineering

More information

The Optimal Stopping Time for Selling an Asset When It Is Uncertain Whether the Price Process Is Increasing or Decreasing When the Horizon Is Infinite

The Optimal Stopping Time for Selling an Asset When It Is Uncertain Whether the Price Process Is Increasing or Decreasing When the Horizon Is Infinite American Journal of Operaions Research, 08, 8, 8-9 hp://wwwscirporg/journal/ajor ISSN Online: 60-8849 ISSN Prin: 60-8830 The Opimal Sopping Time for Selling an Asse When I Is Uncerain Wheher he Price Process

More information

WEEK-3 Recitation PHYS 131. of the projectile s velocity remains constant throughout the motion, since the acceleration a x

WEEK-3 Recitation PHYS 131. of the projectile s velocity remains constant throughout the motion, since the acceleration a x WEEK-3 Reciaion PHYS 131 Ch. 3: FOC 1, 3, 4, 6, 14. Problems 9, 37, 41 & 71 and Ch. 4: FOC 1, 3, 5, 8. Problems 3, 5 & 16. Feb 8, 018 Ch. 3: FOC 1, 3, 4, 6, 14. 1. (a) The horizonal componen of he projecile

More information

References are appeared in the last slide. Last update: (1393/08/19)

References are appeared in the last slide. Last update: (1393/08/19) SYSEM IDEIFICAIO Ali Karimpour Associae Professor Ferdowsi Universi of Mashhad References are appeared in he las slide. Las updae: 0..204 393/08/9 Lecure 5 lecure 5 Parameer Esimaion Mehods opics o be

More information

Decentralized Stochastic Control with Partial History Sharing: A Common Information Approach

Decentralized Stochastic Control with Partial History Sharing: A Common Information Approach 1 Decenralized Sochasic Conrol wih Parial Hisory Sharing: A Common Informaion Approach Ashuosh Nayyar, Adiya Mahajan and Demoshenis Tenekezis arxiv:1209.1695v1 [cs.sy] 8 Sep 2012 Absrac A general model

More information