Convergence rates of proximal gradient methods via the convex conjugate
Convergence rates of proximal gradient methods via the convex conjugate

David H. Gutman        Javier F. Peña

January 8, 2018

Abstract

We give a novel proof of the O(1/k) and O(1/k²) convergence rates of the proximal gradient and accelerated proximal gradient methods for composite convex minimization. The crux of the new proof is an upper bound constructed via the convex conjugate of the objective function.

1 Introduction

The development of accelerated versions of first-order methods has had a profound influence in convex optimization. In his seminal paper [9] Nesterov devised a first-order algorithm with optimal O(1/k²) rate of convergence for unconstrained convex optimization via a modification of the standard gradient descent algorithm that includes momentum steps. A later breakthrough was the acceleration of the proximal gradient method independently developed by Beck and Teboulle [2] and by Nesterov [11]. The proximal gradient method, also known as the forward-backward method [8], is an extension of the gradient descent method to solve the composite minimization problem

    min_{x ∈ R^n} φ(x) + ψ(x)                                          (1)

where φ: R^n → R is differentiable and ψ: R^n → R ∪ {∞} is a closed convex function such that for t > 0 the proximal map

    Prox_t(x) := argmin_{y ∈ R^n} { ψ(y) + (1/(2t))‖x − y‖² }          (2)

is computable. The significance of Nesterov's and Beck and Teboulle's breakthroughs has prompted interest in new approaches to explain how acceleration is achieved in first-order methods [1, 3–5, 7, 12, 13]. Some of these approaches are based on geometric [3, 4], control [7], and differential

Department of Mathematical Sciences, Carnegie Mellon University, USA, dgutman@andrew.cmu.edu
Tepper School of Business, Carnegie Mellon University, USA, jfp@andrew.cmu.edu
equations [13] techniques. The recent article [12] relies on the convex conjugate to give a unified and succinct derivation of the O(1/√k), O(1/k), and O(1/k²) convergence rates of the subgradient, gradient, and accelerated gradient methods for unconstrained convex minimization. The crux of the approach in [12] is a generic upper bound on the iterates generated by the subgradient, gradient, and accelerated gradient algorithms, constructed via the convex conjugate of the objective function.

We extend the main construction in [12] to give a unified derivation of the convergence rates of the proximal gradient and accelerated proximal gradient algorithms for the composite convex minimization problem (1). As in [12], the central result of this paper (Theorem 1) is an upper bound on the iterates generated by both the non-accelerated and the accelerated proximal gradient methods. This bound is constructed via the convex conjugate of the objective function. Theorem 1 readily yields the widely known O(1/k) and O(1/k²) convergence rates of the proximal gradient and accelerated proximal gradient algorithms for (1) when the smooth component φ has Lipschitz gradient and the step sizes are chosen judiciously. Theorem 1 highlights some key similarities and differences between the non-accelerated and the accelerated algorithms. It is noteworthy that Theorem 1 and its variant, Theorem 2, hold under certain conditions on the step sizes and momentum used in the algorithm but do not require any Lipschitz assumption.

The convex conjugate approach underlying Theorem 1 also extends to a proximal subgradient algorithm when the component φ is merely convex but not necessarily smooth (see Algorithm 2 and Proposition 1). This extension automatically yields a novel derivation of both classical [10, Theorem 3.2.2] as well as modern convergence rates [6, Theorem 5] for the projected subgradient algorithm. The latter derivations are similar to the derivation of the convergence rates for the proximal gradient and accelerated proximal gradient algorithms.

Throughout the paper we assume that R^n is endowed with an inner product ⟨·,·⟩ and that ‖·‖ denotes the
corresponding Euclidean norm.

2 Proximal gradient and accelerated proximal gradient methods

Let φ: R^n → R be a differentiable convex function and ψ: R^n → R ∪ {∞} be a closed convex function such that the proximal map (2) is computable. Let f := φ + ψ and consider the problem (1), which can be rewritten as

    min_{x ∈ R^n} f(x).                                                (3)

Algorithm 1 describes a template of a proximal gradient algorithm for (3). Step 7 of Algorithm 1 incorporates a momentum step. The (non-accelerated) proximal gradient method is obtained by choosing θ_{k+1} = 1 in Step 6. In this case Step 7 simply sets y_{k+1} = x_{k+1} and does not incorporate any momentum. Other choices of θ_{k+1} ∈ (0, 1] yield accelerated versions of the proximal gradient method. In particular, the FISTA algorithm in [2] is obtained by choosing θ_{k+1} ∈ (0, 1] via the rule θ_{k+1}² = θ_k²(1 − θ_{k+1}). In this case θ_k ∈ (0, 1) for k ≥ 1 and there is a non-trivial momentum term in Step 7.
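To make the proximal map (2) concrete, consider the common choice ψ = λ‖·‖₁ (an illustration of ours, not a choice made in the paper). The minimization in (2) then separates across coordinates and Prox_t is the soft-thresholding operator. A minimal Python sketch, with illustrative names:

```python
import numpy as np

def prox_l1(x, t, lam=1.0):
    """Prox_t(x) in (2) for psi = lam * ||.||_1, i.e.
    argmin_y { lam*||y||_1 + ||x - y||^2 / (2t) }.

    The problem separates per coordinate; each coordinate is shrunk
    toward zero by t*lam and clipped at zero (soft-thresholding).
    """
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)
```

For example, prox_l1(np.array([1.0, -0.2, 0.6]), t=1.0, lam=0.5) shrinks each coordinate by 0.5 in absolute value, returning approximately (0.5, 0.0, 0.1).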
Algorithm 1 Template for proximal gradient method
1: input: x_0 ∈ R^n
2: y_0 := x_0; θ_0 := 1
3: for k = 0, 1, 2, ... do
4:    pick t_k > 0
5:    x_{k+1} := Prox_{t_k}(y_k − t_k ∇φ(y_k))
6:    pick θ_{k+1} ∈ (0, 1]
7:    y_{k+1} := x_{k+1} + (θ_{k+1}(1 − θ_k)/θ_k)(x_{k+1} − x_k)
8: end for

The main result in this paper is Theorem 1 below, which subsumes the widely known convergence rates O(1/k) and O(1/k²) of the proximal gradient and accelerated proximal gradient algorithms under suitable choices of t_k, θ_k, k = 0, 1, .... Theorem 1 relies on a suitably constructed sequence z_k ∈ R^n, k = 1, 2, .... The construction of z_k in turn is motivated by the identity (5) below. Consider Step 5 in Algorithm 1, namely

    x_{k+1} = Prox_{t_k}(y_k − t_k ∇φ(y_k)).                           (4)

The optimality conditions for (4) imply that

    x_{k+1} = y_k − t_k g_k

where g_k := g_k^φ + g_k^ψ for g_k^φ := ∇φ(y_k) and for some g_k^ψ ∈ ∂ψ(x_{k+1}). Step 5 and Step 7 of Algorithm 1 imply that for k = 0, 1, ...

    (y_{k+1} − (1 − θ_{k+1})x_{k+1})/θ_{k+1} = (x_{k+1} − (1 − θ_k)x_k)/θ_k = (y_k − (1 − θ_k)x_k)/θ_k − (t_k/θ_k) g_k.

Since θ_0 = 1 and y_0 = x_0, it follows that for k = 1, 2, ...

    (y_k − (1 − θ_k)x_k)/θ_k = x_0 − Σ_{i=0}^{k−1} (t_i/θ_i) g_i.      (5)

As is customary, we will assume that the step sizes t_k chosen at Step 4 in Algorithm 1 satisfy the following decrease condition

    f(x_{k+1}) ≤ min_{x ∈ R^n} { φ(y_k) + ⟨∇φ(y_k), x − y_k⟩ + (1/(2t_k))‖x − y_k‖² + ψ(x) }
              = φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ − (t_k/2)‖g_k‖².          (6)

The condition (6) holds in particular when ∇φ is Lipschitz and t_k, k = 0, 1, ... are chosen via a standard backtracking procedure. Observe that (6) implies f(x_{k+1}) ≤ f(y_k).

Theorem 1 also relies on the convex conjugate function. Recall that if h: R^n → R ∪ {∞} is a convex function then its convex conjugate h*: R^n → R ∪ {∞} is defined as

    h*(z) = sup_{x ∈ R^n} { ⟨z, x⟩ − h(x) }.
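Algorithm 1 translates almost line for line into code. The sketch below is our illustration, not code from the paper: grad_phi and prox_psi are assumed to be supplied by the caller, the step size t is held fixed, and θ_{k+1} is chosen either as 1 (no momentum) or by the FISTA rule θ_{k+1}² = θ_k²(1 − θ_{k+1}).

```python
import math
import numpy as np

def proximal_gradient(grad_phi, prox_psi, x0, t, num_iters, accelerated=False):
    """Template of Algorithm 1 with a fixed step size t.

    grad_phi(y) returns the gradient of phi at y, and prox_psi(v, t)
    returns Prox_t(v) for the nonsmooth component psi.
    """
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    theta = 1.0                                    # Step 2: theta_0 := 1
    for _ in range(num_iters):
        x_next = prox_psi(y - t * grad_phi(y), t)  # Step 5
        if accelerated:
            # positive root of theta_next^2 = theta^2 * (1 - theta_next)
            theta_next = theta * (math.sqrt(theta ** 2 + 4) - theta) / 2
        else:
            theta_next = 1.0                       # Step 7 reduces to y = x_next
        y = x_next + theta_next * (1 - theta) / theta * (x_next - x)  # Step 7
        x, theta = x_next, theta_next
    return x
```

With φ(x) = ½‖x − b‖² (so ∇φ(x) = x − b and L = 1) and ψ = λ‖·‖₁, both the plain and the accelerated variants converge to the soft-thresholded vector that solves (1).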
Theorem 1. Suppose θ_k ∈ (0, 1], k = 0, 1, ..., and the step sizes t_k > 0, k = 0, 1, ..., are such that (6) holds. Let x_k ∈ R^n, k = 1, 2, ..., be the iterates generated by Algorithm 1 and let z_k ∈ R^n, k = 1, 2, ... be as follows

    z_k := ( Σ_{i=0}^{k−1} (t_i/θ_i) g_i ) / ( Σ_{i=0}^{k−1} t_i/θ_i ).          (7)

Then

    LHS_k ≤ −f*(z_k) + ⟨z_k, x_0⟩ − ½( Σ_{i=0}^{k−1} t_i/θ_i )‖z_k‖²,            (8)

where LHS_k is as follows depending on the choice of θ_k ∈ (0, 1] and t_k > 0.

(a) When θ_k = 1, k = 0, 1, ..., let

    LHS_k := ( Σ_{i=0}^{k−1} t_i f(x_{i+1}) ) / ( Σ_{i=0}^{k−1} t_i ).

(b) When t_k > 0 and θ_k ∈ (0, 1], k = 0, 1, ... are such that

    Σ_{i=0}^{k−1} t_i/θ_i = (1 − θ_k) t_k/θ_k²,

let LHS_k := f(x_k).

Theorem 1 readily implies that in both case (a) and case (b)

    LHS_k ≤ min_{u ∈ R^n} { f(u) − ⟨z_k, u⟩ } + min_{u ∈ R^n} { ⟨z_k, u⟩ + ‖u − x_0‖² / (2 Σ_{i=0}^{k−1} t_i/θ_i) }
          ≤ min_{u ∈ R^n} { f(u) + ‖u − x_0‖² / (2 Σ_{i=0}^{k−1} t_i/θ_i) }
          ≤ f(x) + ‖x − x_0‖² / (2 Σ_{i=0}^{k−1} t_i/θ_i)

for all x ∈ R^n. Let f* and X* respectively denote the optimal value and set of optimal solutions to (3). If f* is finite and X* is nonempty then in both case (a) and case (b) of Theorem 1 we get

    f(x_k) − f* ≤ dist(x_0, X*)² / (2 Σ_{i=0}^{k−1} t_i/θ_i).          (9)

Suppose t_k ≥ 1/L, k = 0, 1, ..., for some constant L > 0. This holds in particular if ∇φ is L-Lipschitz and t_k is chosen via a standard backtracking procedure. Then inequality (9) yields the following known convergence bound for the proximal gradient method

    f(x_k) − f* ≤ L · dist(x_0, X*)² / (2k).
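The decrease condition (6) is precisely what a backtracking line search enforces: try a step size, and shrink it until the quadratic model of φ around y_k dominates φ at the trial point. The following sketch is a hypothetical implementation of such a procedure (the paper only refers to one, e.g. the variant in [2]); all names and the halving factor beta are our choices.

```python
import numpy as np

def backtracking_prox_step(phi, grad_phi, prox_psi, y, t_init=1.0, beta=0.5):
    """Shrink t until phi(x_plus) <= phi(y) + <grad phi(y), x_plus - y>
    + ||x_plus - y||^2 / (2t), which guarantees condition (6).

    When grad phi is L-Lipschitz this test passes as soon as t <= 1/L,
    so the loop terminates after finitely many halvings.
    """
    t = t_init
    g = grad_phi(y)
    while True:
        x_plus = prox_psi(y - t * g, t)
        d = x_plus - y
        if phi(x_plus) <= phi(y) + g @ d + (d @ d) / (2 * t):
            return x_plus, t
        t *= beta
```

For instance, with φ(x) = 2‖x‖² (gradient 4x, hence L = 4) and ψ = 0, starting from t_init = 1 the step is halved twice and the accepted step is t = 0.25 = 1/L.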
On the other hand, suppose t_k = 1/L, k = 0, 1, ..., for some constant L > 0 and θ_k, k = 0, 1, ..., are chosen via θ_0 = 1 and θ_{k+1}² = θ_k²(1 − θ_{k+1}). Then a straightforward induction shows that

    Σ_{i=0}^{k−1} t_i/θ_i = (1 − θ_k) t_k/θ_k² = 1/(L θ_{k−1}²) ≥ (k + 1)²/(4L).

Thus case (b) in Theorem 1 applies and inequality (9) yields the following known convergence bound for the accelerated proximal gradient method

    f(x_k) − f* ≤ 2L · dist(x_0, X*)²/(k + 1)².

Although Theorem 1 yields the iconic O(1/k²) convergence rate of the accelerated proximal gradient algorithm, it applies under the somewhat restrictive conditions stated in case (b) above. In particular, case (b) does not cover the more general case when t_k, k = 0, 1, ... are chosen via backtracking as in the FISTA with backtracking algorithm in [2]. The convergence rate in this case, namely [2, Theorem 4.4], is a consequence of Theorem 2 below. Theorem 2 is a variant of Theorem 1(b) that applies to more flexible choices of t_k, θ_k, k = 0, 1, .... In particular, Theorem 2 applies to the popular choice θ_k = 2/(k + 2), k = 0, 1, ....

Theorem 2. Suppose f* = min_{x ∈ R^n} f(x) is finite, θ_k ∈ (0, 1], k = 0, 1, ..., satisfy θ_0 = 1 and θ_{k+1}² ≥ θ_k²(1 − θ_{k+1}), and the step sizes t_k > 0, k = 0, 1, ..., are non-increasing and such that (6) holds. Let x_k ∈ R^n, k = 1, 2, ..., be the iterates generated by Algorithm 1 and let z_k ∈ R^n, k = 1, 2, ... be as follows

    z_k = (θ_{k−1}²/t_{k−1}) Σ_{i=0}^{k−1} (t_i/θ_i) g_i.

Then for k = 1, 2, ...

    f(x_k) − f* ≤ −(R_k(f − f*))*(z_k) + ⟨z_k, x_0⟩ − (t_{k−1}/(2θ_{k−1}²))‖z_k‖²,          (10)

where R_1 = 1 and R_{k+1} = ( t_{k−1}θ_k² / (t_k θ_{k−1}²(1 − θ_k)) ) R_k ≥ 1, k = 1, 2, .... In particular, if X* = {x ∈ R^n : f(x) = f*} is nonempty then

    f(x_k) − f* ≤ min_{u ∈ R^n} { R_k(f(u) − f*) + (θ_{k−1}²/(2t_{k−1}))‖u − x_0‖² } ≤ (θ_{k−1}²/(2t_{k−1})) dist(x_0, X*)².

Suppose the step sizes t_k, k = 0, 1, ..., are non-increasing, satisfy (6), and t_k ≥ 1/L, k = 0, 1, ..., for some constant L > 0. This holds in particular when ∇φ is Lipschitz and t_k is chosen via a suitable backtracking procedure such as the one in [2]. If θ_0 = 1 and θ_{k+1}² ≥ θ_k²(1 − θ_{k+1}), k = 0, 1, ... then Theorem 2 implies that

    f(x_k) − f* ≤ (L θ_{k−1}²/2) dist(x_0, X*)².

Hence if θ_{k+1}² = θ_k²(1 − θ_{k+1}) or θ_k = 2/(k + 2) for k = 0, 1, ... then

    f(x_k) − f* ≤ 2L · dist(x_0, X*)²/(k + 1)².
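The two facts invoked above — for constant step sizes the FISTA recursion telescopes to Σ_{i=0}^{k−1} 1/θ_i = 1/θ_{k−1}², and θ_k ≤ 2/(k + 2), which yields the (k + 1)²/4 lower bound on 1/θ_{k−1}² — are easy to sanity-check numerically. A short script (ours, purely illustrative):

```python
import math

def fista_thetas(n):
    """theta_0 = 1; theta_{k+1} is the positive root of
    theta_{k+1}^2 = theta_k^2 * (1 - theta_{k+1})."""
    thetas = [1.0]
    for _ in range(n):
        th = thetas[-1]
        thetas.append(th * (math.sqrt(th * th + 4) - th) / 2)
    return thetas

thetas = fista_thetas(50)

# the defining recursion holds to machine precision
recursion_ok = all(
    abs(thetas[k + 1] ** 2 - thetas[k] ** 2 * (1 - thetas[k + 1])) < 1e-12
    for k in range(50)
)
# theta_k <= 2/(k+2)
bound_ok = all(thetas[k] <= 2 / (k + 2) + 1e-12 for k in range(51))
# partial sums of 1/theta_i telescope to 1/theta_k^2
sums_ok = all(
    abs(sum(1 / th for th in thetas[: k + 1]) - 1 / thetas[k] ** 2) < 1e-6
    for k in range(51)
)
```

All three flags come out True, matching the induction used for the O(1/k²) bound.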
3 Proof of Theorem 1 and Theorem 2

We will use the following properties of the convex conjugate. Suppose h: R^n → R ∪ {∞} is a convex function. Then

    h*(z) + h(x) ≥ ⟨z, x⟩          (11)

for all z, x ∈ R^n, and equality holds if z ∈ ∂h(x). Suppose f, φ, ψ: R^n → R ∪ {∞} are convex functions and f = φ + ψ. Then

    f*(z_φ + z_ψ) ≤ φ*(z_φ) + ψ*(z_ψ) for all z_φ, z_ψ ∈ R^n.          (12)

Suppose f: R^n → R_+ ∪ {∞} is a convex function and R ≥ 1. Then

    (Rf)*(Rz) = R f*(z),          (13)

and

    (Rf)*(z) ≤ f*(z).          (14)

3.1 Proof of Theorem 1

We prove (8) by induction. To ease notation, let µ_k := 1/(Σ_{i=0}^{k−1} t_i/θ_i) throughout this proof. For k = 1 we have

    LHS_1 = f(x_1) ≤ φ(x_0) + ψ(x_1) + ⟨g_0^ψ, x_0 − x_1⟩ − (t_0/2)‖g_0‖²
          = −(⟨g_0^φ, x_0⟩ − φ(x_0)) − (⟨g_0^ψ, x_1⟩ − ψ(x_1)) + ⟨g_0, x_0⟩ − (t_0/2)‖g_0‖²
          = −φ*(g_0^φ) − ψ*(g_0^ψ) + ⟨g_0, x_0⟩ − (t_0/2)‖g_0‖²
          ≤ −f*(z_1) + ⟨z_1, x_0⟩ − ‖z_1‖²/(2µ_1).

The first step follows from (6). The third step follows from (11) and g_0^φ = ∇φ(x_0), g_0^ψ ∈ ∂ψ(x_1). The last step follows from (12) and the choice of z_1 = g_0 = g_0^φ + g_0^ψ and µ_1 = 1/t_0.

Suppose (8) holds for k and let

    γ_k = (t_k/θ_k) / ( Σ_{i=0}^{k} t_i/θ_i ).

The construction (7) implies that

    z_{k+1} = (1 − γ_k) z_k + γ_k g_k,      µ_{k+1} = (1 − γ_k) µ_k.

Therefore,

    ⟨z_{k+1}, x_0⟩ − ‖z_{k+1}‖²/(2µ_{k+1})
      = (1 − γ_k)( ⟨z_k, x_0⟩ − ‖z_k‖²/(2µ_k) ) + γ_k ⟨g_k, x_0 − z_k/µ_k⟩ − γ_k²‖g_k‖²/(2(1 − γ_k)µ_k).          (15)
In addition, the convexity of f*, properties (11), (12), and g_k^φ = ∇φ(y_k), g_k^ψ ∈ ∂ψ(x_{k+1}), g_k = g_k^φ + g_k^ψ imply

    f*(z_{k+1}) ≤ (1 − γ_k) f*(z_k) + γ_k f*(g_k)
               ≤ (1 − γ_k) f*(z_k) + γ_k( φ*(g_k^φ) + ψ*(g_k^ψ) )          (16)
               = (1 − γ_k) f*(z_k) + γ_k( ⟨g_k^φ, y_k⟩ − φ(y_k) + ⟨g_k^ψ, x_{k+1}⟩ − ψ(x_{k+1}) ).

Let RHS_k denote the right-hand side in (8). From (15) and (16) it follows that

    RHS_{k+1} ≥ (1 − γ_k) RHS_k + γ_k( ⟨g_k, x_0 − y_k − z_k/µ_k⟩ + φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ ) − γ_k²‖g_k‖²/(2(1 − γ_k)µ_k).          (17)

Hence to complete the proof of (8) by induction it suffices to show that

    LHS_{k+1} ≤ (1 − γ_k) LHS_k + γ_k( ⟨g_k, x_0 − y_k − z_k/µ_k⟩ + φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ ) − γ_k²‖g_k‖²/(2(1 − γ_k)µ_k).          (18)

To that end, we consider case (a) and case (b) separately.

Case (a). In this case γ_k = t_k / Σ_{i=0}^{k} t_i and y_k = x_k. Thus µ_k = 1/Σ_{i=0}^{k−1} t_i, x_0 − y_k − z_k/µ_k = 0, and γ_k²/((1 − γ_k)µ_k) = γ_k t_k, and so

    LHS_{k+1} − (1 − γ_k) LHS_k = γ_k f(x_{k+1})
      ≤ γ_k( φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ − (t_k/2)‖g_k‖² )
      = γ_k( ⟨g_k, x_0 − y_k − z_k/µ_k⟩ + φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ ) − γ_k²‖g_k‖²/(2(1 − γ_k)µ_k).

The second step follows from (6). The third step follows from x_0 − y_k − z_k/µ_k = 0 and γ_k²/((1 − γ_k)µ_k) = γ_k t_k. Thus (18) holds in case (a).

Case (b). In this case the condition in case (b) gives γ_k = θ_k and γ_k²/((1 − γ_k)µ_k) = t_k. Therefore

    LHS_{k+1} − (1 − γ_k) LHS_k = f(x_{k+1}) − (1 − θ_k) f(x_k)
      ≤ θ_k( φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ ) + (1 − θ_k)⟨g_k, y_k − x_k⟩ − (t_k/2)‖g_k‖²
      = γ_k( ⟨g_k, x_0 − y_k − z_k/µ_k⟩ + φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ ) − γ_k²‖g_k‖²/(2(1 − γ_k)µ_k).
The second step follows from (6) and the convexity of φ and ψ. The last step follows from θ_k = γ_k, equation (5), and γ_k²/((1 − γ_k)µ_k) = t_k. Thus (18) holds in case (b) as well.

3.2 Proof of Theorem 2

The proof of Theorem 2 is a modification of the proof of Theorem 1. Without loss of generality assume f* = 0, as otherwise we can work with f − f* in place of f. Again we prove (10) by induction. To ease notation, let µ_k := θ_{k−1}²/t_{k−1} throughout this proof. For k = 1 inequality (10) is identical to (8) since R_1 = 1 and θ_0 = 1. Hence this case follows from the proof of Theorem 1 for k = 1.

Suppose (10) holds for k. Observe that for

    ρ_k := R_{k+1}/R_k = t_{k−1}θ_k² / (t_k θ_{k−1}²(1 − θ_k)) = µ_{k+1}/(µ_k(1 − θ_k)) ≥ 1

we have

    z_{k+1} = ρ_k(1 − θ_k) z_k + θ_k g_k,      µ_{k+1} = ρ_k(1 − θ_k) µ_k.

First, proceed as in the proof of Theorem 1:

    ⟨z_{k+1}, x_0⟩ − ‖z_{k+1}‖²/(2µ_{k+1})
      = ρ_k(1 − θ_k)( ⟨z_k, x_0⟩ − ‖z_k‖²/(2µ_k) ) + θ_k ⟨g_k, x_0 − z_k/µ_k⟩ − (t_k/2)‖g_k‖².          (19)

Second, the convexity of (R_{k+1}f)* and the fact that f ≥ f* = 0 imply

    (R_{k+1}f)*(z_{k+1}) ≤ (1 − θ_k)(R_{k+1}f)*(ρ_k z_k) + θ_k (R_{k+1}f)*(g_k)
      ≤ (1 − θ_k)(ρ_k R_k f)*(ρ_k z_k) + θ_k f*(g_k)          (20)
      ≤ ρ_k(1 − θ_k)(R_k f)*(z_k) + θ_k( φ*(g_k^φ) + ψ*(g_k^ψ) )
      = ρ_k(1 − θ_k)(R_k f)*(z_k) + θ_k( ⟨g_k^φ, y_k⟩ − φ(y_k) + ⟨g_k^ψ, x_{k+1}⟩ − ψ(x_{k+1}) ).

The first step follows from the convexity of (R_{k+1}f)*. The second step follows from (14). The third step follows from (12) and (13). The last step follows from (11) and g_k^φ = ∇φ(y_k), g_k^ψ ∈ ∂ψ(x_{k+1}).

Let RHS_k denote the right-hand side in (10). The induction hypothesis implies that RHS_k ≥ f(x_k) ≥ 0. Thus from (19), (20), and ρ_k ≥ 1 it follows that

    RHS_{k+1} ≥ ρ_k(1 − θ_k) RHS_k + θ_k( ⟨g_k, x_0 − y_k − z_k/µ_k⟩ + φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ ) − (t_k/2)‖g_k‖²
             ≥ (1 − θ_k) RHS_k + θ_k( ⟨g_k, x_0 − y_k − z_k/µ_k⟩ + φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ ) − (t_k/2)‖g_k‖².          (21)
Finally, proceeding exactly as in case (b) in the proof of Theorem 1 we get

    f(x_{k+1}) − (1 − θ_k) f(x_k)
      ≤ θ_k( φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ ) + (1 − θ_k)⟨g_k, y_k − x_k⟩ − (t_k/2)‖g_k‖²
      = θ_k( ⟨g_k, x_0 − y_k − z_k/µ_k⟩ + φ(y_k) + ψ(x_{k+1}) + ⟨g_k^ψ, y_k − x_{k+1}⟩ ) − (t_k/2)‖g_k‖²
      ≤ RHS_{k+1} − (1 − θ_k) RHS_k.

The second step follows from (5). The third step follows from (21). Together with the induction hypothesis f(x_k) ≤ RHS_k, this yields f(x_{k+1}) ≤ RHS_{k+1}. This completes the proof by induction.

4 Proximal subgradient method

Algorithm 2 describes a variant of Algorithm 1 for the case when φ: R^n → R is merely convex.

Algorithm 2 Proximal subgradient method
1: input: x_0 ∈ R^n
2: for k = 0, 1, 2, ... do
3:    pick g_k^φ ∈ ∂φ(x_k) and t_k > 0
4:    x_{k+1} := Prox_{t_k}(x_k − t_k g_k^φ)
5: end for

When ψ is the indicator function I_C of a closed convex set C, Step 4 in Algorithm 2 can be rewritten as

    x_{k+1} = argmin_{x ∈ C} ‖x − (x_k − t_k g_k^φ)‖ = Π_C(x_k − t_k g_k^φ).

Hence when ψ = I_C Algorithm 2 becomes the projected subgradient method for

    min_{x ∈ C} φ(x).          (22)

The classical convergence rate for the projected subgradient method is an immediate consequence of Proposition 1, as we detail below. Proposition 1 in turn is obtained via a minor tweak on the construction and proof of Theorem 1. Observe that

    x_{k+1} = Prox_{t_k}(x_k − t_k g_k^φ)   ⟺   x_{k+1} = x_k − t_k g_k

where g_k = g_k^φ + g_k^ψ for some g_k^ψ ∈ ∂ψ(x_{k+1}). Next, let z_k ∈ R^n, k = 0, 1, ... be as follows

    z_k = ( Σ_{i=0}^{k} t_i g_i ) / ( Σ_{i=0}^{k} t_i ).          (23)

Proposition 1. Let x_k ∈ R^n, k = 0, 1, ..., be the sequence of iterates generated by Algorithm 2 and let z_k ∈ R^n, k = 0, 1, ... be defined by (23). Then for k = 0, 1, ...

    ( Σ_{i=0}^{k} t_i(φ(x_i) + ψ(x_{i+1})) − ½ Σ_{i=0}^{k} t_i²‖g_i^φ‖² ) / Σ_{i=0}^{k} t_i
      ≤ −f*(z_k) + ⟨z_k, x_0⟩ − ½( Σ_{i=0}^{k} t_i )‖z_k‖²          (24)
      ≤ min_{u ∈ R^n} { f(u) + ‖u − x_0‖² / (2 Σ_{i=0}^{k} t_i) }.
In particular, for all x ∈ R^n

    ( Σ_{i=0}^{k} t_i(φ(x_i) + ψ(x_{i+1})) − ½ Σ_{i=0}^{k} t_i²‖g_i^φ‖² ) / Σ_{i=0}^{k} t_i ≤ f(x) + ‖x_0 − x‖² / (2 Σ_{i=0}^{k} t_i).

Proof. Let LHS_k and RHS_k denote respectively the left-hand and right-hand sides in (24). We proceed by induction. For k = 0 we have

    LHS_0 = φ(x_0) + ψ(x_1) − (t_0/2)‖g_0^φ‖²
          = −( φ*(g_0^φ) − ⟨g_0^φ, x_0⟩ ) − ( ψ*(g_0^ψ) − ⟨g_0^ψ, x_1⟩ ) − (t_0/2)‖g_0^φ‖²
          ≤ −f*(g_0) + ⟨g_0, x_0⟩ − (t_0/2)‖g_0‖² = RHS_0.

The second step follows from (11) and g_0^φ ∈ ∂φ(x_0), g_0^ψ ∈ ∂ψ(x_1). The third step follows from (12) and g_0 = g_0^φ + g_0^ψ, x_1 = x_0 − t_0 g_0.

Next we show the main inductive step from k to k + 1. Observe that z_{k+1} = (1 − γ_k) z_k + γ_k g_{k+1} for k = 0, 1, ... where γ_k = t_{k+1}/( Σ_{i=0}^{k+1} t_i ) ∈ (0, 1). Proceeding exactly as in the proof of Theorem 1 we get

    RHS_{k+1} ≥ (1 − γ_k) RHS_k + γ_k( φ(x_{k+1}) + ψ(x_{k+2}) + ⟨g_{k+1}^ψ, x_{k+1} − x_{k+2}⟩ − (t_{k+1}/2)‖g_{k+1}‖² )
             = (1 − γ_k) RHS_k + γ_k( φ(x_{k+1}) + ψ(x_{k+2}) + (t_{k+1}/2)‖g_{k+1}^ψ‖² − (t_{k+1}/2)‖g_{k+1}^φ‖² ).

The second step follows because g_{k+1} = g_{k+1}^φ + g_{k+1}^ψ and x_{k+2} = x_{k+1} − t_{k+1} g_{k+1}. The proof is thus completed by observing that

    LHS_{k+1} − (1 − γ_k) LHS_k = γ_k( φ(x_{k+1}) + ψ(x_{k+2}) − (t_{k+1}/2)‖g_{k+1}^φ‖² )
      ≤ γ_k( φ(x_{k+1}) + ψ(x_{k+2}) + (t_{k+1}/2)‖g_{k+1}^ψ‖² − (t_{k+1}/2)‖g_{k+1}^φ‖² ).

Let C ⊆ R^n be a nonempty closed convex set and ψ = I_C. As noted above, in this case Algorithm 2 becomes the projected subgradient algorithm for problem (22). We next show that in this case Proposition 1 yields the classical convergence rates (26) and (27), as well as the modern and more general one (28) recently established by Grimmer [6, Theorem 5]. Suppose φ* = min_{x ∈ C} φ(x) is finite and X* := {x ∈ C : φ(x) = φ*} is nonempty. From Proposition 1 it follows that

    Σ_{i=0}^{k} t_i(φ(x_i) − φ*) ≤ ½( Σ_{i=0}^{k} t_i²‖g_i^φ‖² + dist(x_0, X*)² ).          (25)
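As a concrete instance of Algorithm 2 with ψ = I_C, here is a Python sketch of the projected subgradient method on a toy problem. The box C = [1, 2]², the objective φ(x) = ‖x − a‖₁ with a = (0.5, 2.5), the step sizes t_k = 0.5/√(k + 1), and all function names are our illustrative choices, not the paper's.

```python
import numpy as np

def projected_subgradient(phi, subgrad_phi, project, x0, step_sizes):
    """Algorithm 2 with psi = I_C: x_{k+1} = Pi_C(x_k - t_k * g_k^phi).
    Returns the final iterate and the best objective value seen."""
    x = np.asarray(x0, dtype=float)
    best_val = phi(x)
    for t in step_sizes:
        x = project(x - t * subgrad_phi(x))
        best_val = min(best_val, phi(x))
    return x, best_val

# Toy problem: minimize phi(x) = ||x - a||_1 over the box C = [1, 2]^2.
a = np.array([0.5, 2.5])
phi = lambda x: float(np.abs(x - a).sum())
subgrad_phi = lambda x: np.sign(x - a)          # a subgradient of phi at x
project = lambda x: np.clip(x, 1.0, 2.0)        # Pi_C for the box
steps = [0.5 / (k + 1) ** 0.5 for k in range(50)]

x_final, best_val = projected_subgradient(phi, subgrad_phi, project,
                                          np.array([2.0, 1.0]), steps)
```

Here the iterates reach the corner (1, 2) of the box, which is the constrained minimizer, after a few steps, and best_val equals φ* = 1.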
In particular, if ‖g‖ ≤ L for all x ∈ C and g ∈ ∂φ(x) then (25) implies

    min_{i=0,...,k} (φ(x_i) − φ*) ≤ ( L² Σ_{i=0}^{k} t_i² + dist(x_0, X*)² ) / ( 2 Σ_{i=0}^{k} t_i ).          (26)

Let α_k := t_k‖g_k^φ‖, k = 0, 1, .... Then Step 4 in Algorithm 2 can be rewritten as

    x_{k+1} = Π_C( x_k − α_k g_k^φ/‖g_k^φ‖ )

provided ‖g_k^φ‖ > 0, which occurs as long as x_k is not an optimal solution to (22). If ‖g_k^φ‖ > 0 for k = 0, 1, ... then (25) implies

    min_{i=0,...,k} (φ(x_i) − φ*) ≤ L( Σ_{i=0}^{k} α_i² + dist(x_0, X*)² ) / ( 2 Σ_{i=0}^{k} α_i ).          (27)

Let L: R_+ → R_+. Following Grimmer [6], the subgradient oracle for φ is L-steep on C if for all x ∈ C and g ∈ ∂φ(x)

    ‖g‖ ≤ L(φ(x) − φ*).

As discussed by Grimmer [6], L-steepness is a more general and weaker condition than the traditional bound ‖g‖ ≤ L for all x ∈ C and g ∈ ∂φ(x). Indeed, the latter bound is precisely L-steepness for the constant function L(t) = L and holds when φ is L-Lipschitz on C. Suppose the subgradient oracle for φ is L-steep for some L: R_+ → R_+. If α_k := t_k‖g_k^φ‖ > 0 for k = 0, 1, ... then (25) implies

    Σ_{i=0}^{k} α_i (φ(x_i) − φ*)/L(φ(x_i) − φ*) ≤ ½( Σ_{i=0}^{k} α_i² + dist(x_0, X*)² ),

and thus

    min_{i=0,...,k} (φ(x_i) − φ*) ≤ sup{ t : t/L(t) ≤ ( Σ_{i=0}^{k} α_i² + dist(x_0, X*)² ) / ( 2 Σ_{i=0}^{k} α_i ) }.          (28)

Acknowledgements

This research has been funded by NSF grant CMMI.

References

[1] Z. Allen-Zhu and L. Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. arXiv preprint, 2014.

[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.

[3] S. Bubeck, Y. Lee, and M. Singh. A geometric alternative to Nesterov's accelerated gradient descent. arXiv preprint, 2015.

[4] D. Drusvyatskiy, M. Fazel, and S. Roy. An optimal first order method based on optimal quadratic averaging. arXiv preprint.
[5] N. Flammarion and F. Bach. From averaging to acceleration, there is only a step-size. In COLT, 2015.

[6] B. Grimmer. Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity. arXiv preprint, 2017.

[7] L. Lessard, B. Recht, and A. Packard. Analysis and design of optimization algorithms via integral quadratic constraints. SIAM Journal on Optimization, 26(1):57–95, 2016.

[8] P. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM Journal on Numerical Analysis, 16(6), 1979.

[9] Y. Nesterov. A method for unconstrained convex minimization problem with rate of convergence O(1/k²). Doklady AN SSSR (in Russian; English translation in Soviet Math. Dokl.), 269, 1983.

[10] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization. Kluwer Academic Publishers, 2004.

[11] Y. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013.

[12] J. Peña. Convergence of first-order methods via the convex conjugate. Operations Research Letters, 45, 2017.

[13] W. Su, S. Boyd, and E. Candès. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. In Advances in Neural Information Processing Systems, 2014.
Fndng Dense Subgraphs n Gn, 1/ Atsh Das Sarma 1, Amt Deshpande, and Rav Kannan 1 Georga Insttute of Technology,atsh@cc.gatech.edu Mcrosoft Research-Bangalore,amtdesh,annan@mcrosoft.com Abstract. Fndng
More informationNumerical Heat and Mass Transfer
Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and
More informationn α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0
MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector
More informationarxiv: v1 [quant-ph] 6 Sep 2007
An Explct Constructon of Quantum Expanders Avraham Ben-Aroya Oded Schwartz Amnon Ta-Shma arxv:0709.0911v1 [quant-ph] 6 Sep 2007 Abstract Quantum expanders are a natural generalzaton of classcal expanders.
More informationConvex Optimization. Optimality conditions. (EE227BT: UC Berkeley) Lecture 9 (Optimality; Conic duality) 9/25/14. Laurent El Ghaoui.
Convex Optmzaton (EE227BT: UC Berkeley) Lecture 9 (Optmalty; Conc dualty) 9/25/14 Laurent El Ghaou Organsatonal Mdterm: 10/7/14 (1.5 hours, n class, double-sded cheat sheet allowed) Project: Intal proposal
More informationAffine transformations and convexity
Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/
More informationBOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS
BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all
More informationLecture 20: November 7
0-725/36-725: Convex Optmzaton Fall 205 Lecturer: Ryan Tbshran Lecture 20: November 7 Scrbes: Varsha Chnnaobreddy, Joon Sk Km, Lngyao Zhang Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer:
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationarxiv: v3 [math.na] 1 Jul 2017
Accelerated Alternatng Drecton Method of Multplers: an Optmal O/K Nonergodc Analyss Huan L Zhouchen Ln arxv:608.06366v3 [math.na] Jul 07 July, 07 Abstract The Alternatng Drecton Method of Multplers ADMM
More informationThe Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction
ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also
More informationWeek 5: Neural Networks
Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple
More informationThe Geometry of Logit and Probit
The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.
More informationTHE WEIGHTED WEAK TYPE INEQUALITY FOR THE STRONG MAXIMAL FUNCTION
THE WEIGHTED WEAK TYPE INEQUALITY FO THE STONG MAXIMAL FUNCTION THEMIS MITSIS Abstract. We prove the natural Fefferman-Sten weak type nequalty for the strong maxmal functon n the plane, under the assumpton
More informationA CHARACTERIZATION OF ADDITIVE DERIVATIONS ON VON NEUMANN ALGEBRAS
Journal of Mathematcal Scences: Advances and Applcatons Volume 25, 2014, Pages 1-12 A CHARACTERIZATION OF ADDITIVE DERIVATIONS ON VON NEUMANN ALGEBRAS JIA JI, WEN ZHANG and XIAOFEI QI Department of Mathematcs
More informationErrors for Linear Systems
Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch
More informationPerron Vectors of an Irreducible Nonnegative Interval Matrix
Perron Vectors of an Irreducble Nonnegatve Interval Matrx Jr Rohn August 4 2005 Abstract As s well known an rreducble nonnegatve matrx possesses a unquely determned Perron vector. As the man result of
More informationConvexity preserving interpolation by splines of arbitrary degree
Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete
More informationCIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M
CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute
More informationStructured Nonconvex and Nonsmooth Optimization: Algorithms and Iteration Complexity Analysis
Structured onconvex and onsmooth Optmzaton: Algorthms and Iteraton Complexty Analyss Bo Jang Tany Ln Shqan Ma Shuzhong Zhang ovember 13, 017 Abstract onconvex and nonsmooth optmzaton problems are frequently
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017
U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that
More informationVapnik-Chervonenkis theory
Vapnk-Chervonenks theory Rs Kondor June 13, 2008 For the purposes of ths lecture, we restrct ourselves to the bnary supervsed batch learnng settng. We assume that we have an nput space X, and an unknown
More informationform, and they present results of tests comparng the new algorthms wth other methods. Recently, Olschowka & Neumaer [7] ntroduced another dea for choo
Scalng and structural condton numbers Arnold Neumaer Insttut fur Mathematk, Unverstat Wen Strudlhofgasse 4, A-1090 Wen, Austra emal: neum@cma.unve.ac.at revsed, August 1996 Abstract. We ntroduce structural
More informationOn the Multicriteria Integer Network Flow Problem
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of
More informationResearch Article Global Sufficient Optimality Conditions for a Special Cubic Minimization Problem
Mathematcal Problems n Engneerng Volume 2012, Artcle ID 871741, 16 pages do:10.1155/2012/871741 Research Artcle Global Suffcent Optmalty Condtons for a Specal Cubc Mnmzaton Problem Xaome Zhang, 1 Yanjun
More informationInexact Alternating Minimization Algorithm for Distributed Optimization with an Application to Distributed MPC
Inexact Alternatng Mnmzaton Algorthm for Dstrbuted Optmzaton wth an Applcaton to Dstrbuted MPC Ye Pu, Coln N. Jones and Melane N. Zelnger arxv:608.0043v [math.oc] Aug 206 Abstract In ths paper, we propose
More informationSome modelling aspects for the Matlab implementation of MMA
Some modellng aspects for the Matlab mplementaton of MMA Krster Svanberg krlle@math.kth.se Optmzaton and Systems Theory Department of Mathematcs KTH, SE 10044 Stockholm September 2004 1. Consdered optmzaton
More informationInteractive Bi-Level Multi-Objective Integer. Non-linear Programming Problem
Appled Mathematcal Scences Vol 5 0 no 65 3 33 Interactve B-Level Mult-Objectve Integer Non-lnear Programmng Problem O E Emam Department of Informaton Systems aculty of Computer Scence and nformaton Helwan
More informationOn Finite Rank Perturbation of Diagonalizable Operators
Functonal Analyss, Approxmaton and Computaton 6 (1) (2014), 49 53 Publshed by Faculty of Scences and Mathematcs, Unversty of Nš, Serba Avalable at: http://wwwpmfnacrs/faac On Fnte Rank Perturbaton of Dagonalzable
More informationResearch Article. Almost Sure Convergence of Random Projected Proximal and Subgradient Algorithms for Distributed Nonsmooth Convex Optimization
To appear n Optmzaton Vol. 00, No. 00, Month 20XX, 1 27 Research Artcle Almost Sure Convergence of Random Projected Proxmal and Subgradent Algorthms for Dstrbuted Nonsmooth Convex Optmzaton Hdea Idua a
More informationLecture 13 APPROXIMATION OF SECOMD ORDER DERIVATIVES
COMPUTATIONAL FLUID DYNAMICS: FDM: Appromaton of Second Order Dervatves Lecture APPROXIMATION OF SECOMD ORDER DERIVATIVES. APPROXIMATION OF SECOND ORDER DERIVATIVES Second order dervatves appear n dffusve
More informationON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EQUATION
Advanced Mathematcal Models & Applcatons Vol.3, No.3, 2018, pp.215-222 ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EUATION
More informationDeriving the X-Z Identity from Auxiliary Space Method
Dervng the X-Z Identty from Auxlary Space Method Long Chen Department of Mathematcs, Unversty of Calforna at Irvne, Irvne, CA 92697 chenlong@math.uc.edu 1 Iteratve Methods In ths paper we dscuss teratve
More informationDECOUPLING THEORY HW2
8.8 DECOUPLIG THEORY HW2 DOGHAO WAG DATE:OCT. 3 207 Problem We shall start by reformulatng the problem. Denote by δ S n the delta functon that s evenly dstrbuted at the n ) dmensonal unt sphere. As a temporal
More informationIntroduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:
CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and
More informationThe Minimum Universal Cost Flow in an Infeasible Flow Network
Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran
More informationEdge Isoperimetric Inequalities
November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary
More informationRandić Energy and Randić Estrada Index of a Graph
EUROPEAN JOURNAL OF PURE AND APPLIED MATHEMATICS Vol. 5, No., 202, 88-96 ISSN 307-5543 www.ejpam.com SPECIAL ISSUE FOR THE INTERNATIONAL CONFERENCE ON APPLIED ANALYSIS AND ALGEBRA 29 JUNE -02JULY 20, ISTANBUL
More informationDeterminants Containing Powers of Generalized Fibonacci Numbers
1 2 3 47 6 23 11 Journal of Integer Sequences, Vol 19 (2016), Artcle 1671 Determnants Contanng Powers of Generalzed Fbonacc Numbers Aram Tangboonduangjt and Thotsaporn Thanatpanonda Mahdol Unversty Internatonal
More informationarxiv: v1 [math.co] 12 Sep 2014
arxv:1409.3707v1 [math.co] 12 Sep 2014 On the bnomal sums of Horadam sequence Nazmye Ylmaz and Necat Taskara Department of Mathematcs, Scence Faculty, Selcuk Unversty, 42075, Campus, Konya, Turkey March
More informationAnti-van der Waerden numbers of 3-term arithmetic progressions.
Ant-van der Waerden numbers of 3-term arthmetc progressons. Zhanar Berkkyzy, Alex Schulte, and Mchael Young Aprl 24, 2016 Abstract The ant-van der Waerden number, denoted by aw([n], k), s the smallest
More informationOn the correction of the h-index for career length
1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat
More informationCollege of Computer & Information Science Fall 2009 Northeastern University 20 October 2009
College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:
More informationNorm Bounds for a Transformed Activity Level. Vector in Sraffian Systems: A Dual Exercise
ppled Mathematcal Scences, Vol. 4, 200, no. 60, 2955-296 Norm Bounds for a ransformed ctvty Level Vector n Sraffan Systems: Dual Exercse Nkolaos Rodousaks Department of Publc dmnstraton, Panteon Unversty
More informationMatrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD
Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo
More information