On the convergence of the block nonlinear Gauss–Seidel method under convex constraints


Operations Research Letters 26 (2000) 127–136

L. Grippo^a, M. Sciandrone^b,*

^a Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza", Via Buonarroti, Roma, Italy
^b Istituto di Analisi dei Sistemi ed Informatica del CNR, Viale Manzoni, Roma, Italy

Received 1 August 1998; received in revised form 1 September 1999

Abstract

We give new convergence results for the block Gauss–Seidel method for problems where the feasible set is the Cartesian product of m closed convex sets, under the assumption that the sequence generated by the method has limit points. We show that the method is globally convergent for m = 2 and that for m > 2 convergence can be established both when the objective function f is componentwise strictly quasiconvex with respect to m − 2 components and when f is pseudoconvex. Finally, we consider a proximal point modification of the method and we state convergence results without any convexity assumption on the objective function. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Nonlinear programming; Algorithms; Decomposition methods; Gauss–Seidel method

This research was partially supported by Agenzia Spaziale Italiana, Roma, Italy.
* Corresponding author. E-mail address: sciandro@iasi.rm.cnr.it (M. Sciandrone).

1. Introduction

Consider the problem

  minimize f(x)                                                              (1)
  subject to x ∈ X = X_1 × X_2 × ... × X_m ⊆ R^n,

where f: R^n → R is a continuously differentiable function and the feasible set X is the Cartesian product of closed, nonempty and convex subsets X_i ⊆ R^{n_i}, for i = 1, ..., m, with Σ_{i=1}^m n_i = n. If the vector x ∈ R^n is partitioned into m component vectors x_i ∈ R^{n_i}, then the minimization version of the block nonlinear Gauss–Seidel (GS) method for the solution of (1) is defined by the iteration

  x_i^{k+1} = arg min_{y_i ∈ X_i} f(x_1^{k+1}, ..., x_{i-1}^{k+1}, y_i, x_{i+1}^k, ..., x_m^k),

which updates in turn the components of x, starting from a given initial point x^0 ∈ X, and generates a sequence {x^k} with x^k = (x_1^k, ..., x_m^k). It is known that, in general, the GS method may not converge, in the sense that it may produce a sequence with limit points that are not critical points of the problem. Some well-known examples of this behavior have been described by Powell [12], with reference to the coordinate method for unconstrained problems, that is to the case m = n and X = R^n.
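
To make the iteration above concrete, the following is a minimal Python sketch of the block Gauss–Seidel loop; it is not taken from the paper. The representation of the blocks as index arrays, the choice of box sets X_i (so that the exact block minimization can be approximated by a bounded quasi-Newton solve), and the stopping test are illustrative assumptions.

# Minimal sketch of the block nonlinear Gauss-Seidel iteration (assumptions noted above).
import numpy as np
from scipy.optimize import minimize

def block_gauss_seidel(f, x0, blocks, bounds, max_iter=100, tol=1e-8):
    """blocks: list of index arrays partitioning range(n); bounds: per-variable (lo, hi)."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for idx in blocks:
            # Minimize f over the variables of one block, all other blocks held fixed.
            def f_block(y, idx=idx):
                z = x.copy()
                z[idx] = y
                return f(z)
            res = minimize(f_block, x[idx], method="L-BFGS-B",
                           bounds=[bounds[j] for j in idx])
            x[idx] = res.x                    # x_i^{k+1}: (approximate) block minimizer
        if np.linalg.norm(x - x_old) <= tol:  # heuristic stopping test, not from the paper
            break
    return x

# Example: a smooth function of two one-dimensional blocks over a box.
f = lambda z: (z[0] - 1.0) ** 2 + (z[1] + 2.0) ** 2 + 0.5 * z[0] * z[1]
print(block_gauss_seidel(f, x0=[0.0, 0.0],
                         blocks=[np.array([0]), np.array([1])],
                         bounds=[(-5.0, 5.0), (-5.0, 5.0)]))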

Convergence results for the block GS method have been given under suitable convexity assumptions, both in the unconstrained and in the constrained case, in a number of works (see e.g. [1,3,5,6,9–11,13,15]). In the present paper, by extending some of the results established in the unconstrained case, we prove new convergence results of the GS method when applied to constrained problems, under the assumptions that the GS method is well defined (in the sense that every subproblem has an optimal solution) and that the sequence {x^k} admits limit points.

More specifically, first we derive some general properties of the limit points of the partial updates generated by the GS method and we show that each of these points is a critical point at least with respect to two consecutive components in the given ordering. This is shown by proving that the global minimum value in a component subspace is lower than the function value obtainable through a convergent Armijo-type line search along a suitably defined feasible direction. As a consequence of these results, we get a simple proof of the fact that in case of a two-block decomposition every limit point of {x^k} is a critical point of problem (1), even in the absence of any convexity assumption on f. As an example, we illustrate an application of the two-block GS method to the computation of critical points of nonconvex quadratic programming problems via the solution of a sequence of convex programming subproblems.

Then we consider the convergence properties of the GS method for the general case of an m-block decomposition under generalized convexity assumptions on the objective function. We show that the limit points of the sequence generated by the GS method are critical points of the constrained problem both when (i) f is componentwise strictly quasiconvex with respect to m − 2 blocks and when (ii) f is pseudoconvex and has bounded level sets in the feasible region. In case (i) we get a generalization of well-known convergence results [10,5]; in case (ii) we extend to the constrained case the results given in [14] for the cyclic coordinate method and in [6] for the unconstrained block GS method. Using a constrained version of Powell's counterexample, we show also that nonconvergence of the GS method can be demonstrated for nonconvex functions, when m ≥ 3 and the preceding assumptions are not satisfied. Finally, in the general case of arbitrary decomposition, we extend a result of [1], by showing that the limit points of the sequence generated by a proximal point modification of the GS method are critical points of the constrained problem, without any convexity assumption on the objective function.

Notation. We suppose that the vector x ∈ R^n is partitioned into component vectors x_i ∈ R^{n_i}, as x = (x_1, x_2, ..., x_m). In correspondence to this partition, the function value f(x) is also indicated by f(x_1, x_2, ..., x_m) and, for i = 1, 2, ..., m, the partial gradient of f with respect to x_i, evaluated at x, is indicated by ∇_i f(x) = ∇_{x_i} f(x_1, x_2, ..., x_m) ∈ R^{n_i}. A critical point for Problem (1) is a point x̄ ∈ X such that

  ∇f(x̄)^T (y − x̄) ≥ 0  for every y ∈ X,

where ∇f(x̄) ∈ R^n denotes the gradient of f at x̄. If both x̄ and y are partitioned into component vectors, it is easily seen that x̄ ∈ X is a critical point for Problem (1) if and only if for all i = 1, ..., m we have

  ∇_i f(x̄)^T (y_i − x̄_i) ≥ 0  for every y_i ∈ X_i.

We denote by L^0_X the level set of f relative to X, corresponding to a given point x^0 ∈ X, that is L^0_X := {x ∈ X : f(x) ≤ f(x^0)}. Finally, we indicate by ‖·‖ the Euclidean norm (on the appropriate space).
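
The blockwise characterization of a critical point just given can be tested numerically in simple cases. The sketch below is an illustration, not part of the paper: it assumes each X_i is a box, for which the variational inequality ∇_i f(x̄)^T (y_i − x̄_i) ≥ 0 for all y_i ∈ X_i is equivalent to the projection of x̄_i − ∇_i f(x̄) onto X_i coinciding with x̄_i.

# Illustrative check of blockwise criticality for box sets X_i = [l, u] (an assumption,
# not the paper's general convex sets): x is critical iff, for every block i,
# x_i = P_{X_i}(x_i - grad_i f(x)), i.e. the projected step is zero.
import numpy as np

def is_blockwise_critical(grad, x, lower, upper, blocks, tol=1e-8):
    g = grad(x)
    for idx in blocks:
        step = np.clip(x[idx] - g[idx], lower[idx], upper[idx]) - x[idx]
        if np.linalg.norm(step) > tol:
            return False
    return True
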
2. A line search algorithm

In this section we recall some well-known properties of an Armijo-type line search algorithm along a feasible direction, which will be used in the sequel in our convergence proofs.

Let {z^k} be a given sequence in X, partitioned as z^k = (z_1^k, ..., z_m^k), with z_i^k ∈ X_i for i = 1, ..., m. Let us choose an index i ∈ {1, ..., m} and assume that for all k we can compute search directions

  d_i^k = w_i^k − z_i^k  with w_i^k ∈ X_i,                                    (2)

such that the following assumption holds.

Assumption 1. Let {d_i^k} be the sequence of search directions defined by (2). Then:
(i) there exists a number M > 0 such that ‖d_i^k‖ ≤ M for all k;
(ii) we have ∇_i f(z^k)^T d_i^k < 0 for all k.

An Armijo-type line search algorithm can be described as follows.

Line search algorithm (LS)

Data: δ ∈ (0, 1), γ ∈ (0, 1).
Compute

  α_i^k = max_{j=0,1,...} { δ^j : f(z_1^k, ..., z_i^k + δ^j d_i^k, ..., z_m^k) ≤ f(z^k) + γ δ^j ∇_i f(z^k)^T d_i^k }.   (3)

In the next proposition we state some well-known properties of Algorithm LS. It is important to observe that, in what follows, we assume that {z^k} is a given sequence that may not depend on Algorithm LS, in the sense that z^{k+1} may not be the result of a line search along d_i^k. However, this has no substantial effect on the convergence proof, which can be deduced easily from the known results (see e.g. [3]).

Proposition 1. Let {z^k} be a sequence of points in X and let {d_i^k} be a sequence of directions such that Assumption 1 is satisfied. Let α_i^k be computed by means of Algorithm LS. Then:
(i) there exists a finite integer j such that α_i^k = δ^j satisfies the acceptability condition (3);
(ii) if {z^k} converges to z̄ and

  lim_{k→∞} [ f(z^k) − f(z_1^k, ..., z_i^k + α_i^k d_i^k, ..., z_m^k) ] = 0,   (4)

then we have

  lim_{k→∞} ∇_i f(z^k)^T d_i^k = 0.                                           (5)
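
As an illustration of rule (3), here is a minimal Python sketch of the backtracking test restricted to the i-th block. The parameter names delta and gamma correspond to the two scalars in (0, 1) above (their original symbols were lost in the extraction), and the cap on the number of backtracking steps is a practical safeguard, not part of the algorithm.

# Sketch of the Armijo-type line search (LS): backtrack on alpha = delta**j until the
# sufficient-decrease test (3) for block i holds. Names and the backtrack cap are
# illustrative assumptions.
import numpy as np

def armijo_block_step(f, grad_i, z, i_idx, d_i, delta=0.5, gamma=1e-4, max_backtracks=50):
    fz = f(z)
    slope = float(np.dot(grad_i(z), d_i))   # negative under Assumption 1(ii)
    alpha = 1.0                              # corresponds to j = 0
    for _ in range(max_backtracks):
        z_trial = z.copy()
        z_trial[i_idx] = z[i_idx] + alpha * d_i
        if f(z_trial) <= fz + gamma * alpha * slope:
            return alpha
        alpha *= delta
    return alpha  # Proposition 1(i) guarantees acceptance after finitely many reductions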

3. Preliminary results

In this section we derive some properties of the GS method that are at the basis of some of our convergence results. First, we state the m-block GS method in the following form.

3.1. GS Method

Step 0: Given x^0 ∈ X, set k = 0.
Step 1: For i = 1, ..., m compute

  x_i^{k+1} = arg min_{y_i ∈ X_i} f(x_1^{k+1}, ..., x_{i-1}^{k+1}, y_i, x_{i+1}^k, ..., x_m^k).   (6)

Step 2: Set x^{k+1} = (x_1^{k+1}, ..., x_m^{k+1}), k = k + 1 and go to Step 1.

Unless otherwise specified, we assume in the sequel that the updating rule (6) is well defined, and hence that every subproblem has solutions. We consider, for all k, the partial updates introduced by the GS method by defining the following vectors belonging to X:

  w(k, 0) = x^k,
  w(k, i) = (x_1^{k+1}, ..., x_i^{k+1}, x_{i+1}^k, ..., x_m^k),  i = 1, ..., m − 1,
  w(k, m) = x^{k+1}.

For convenience we set also w(k, m + 1) = w(k + 1, 1). By construction, for each i ∈ {1, ..., m}, it follows from (6) that w(k, i) is the constrained global minimizer of f in the i-th component subspace, and therefore it satisfies the necessary optimality condition

  ∇_i f(w(k, i))^T (y_i − x_i^{k+1}) ≥ 0  for every y_i ∈ X_i.                (7)

We can state the following propositions.

Proposition 2. Suppose that for some i ∈ {0, ..., m} the sequence {w(k, i)} admits a limit point w̄. Then, for every j ∈ {0, ..., m} we have

  lim_{k→∞} f(w(k, j)) = f(w̄).

Proof. Let us consider an infinite subset K ⊆ {0, 1, ...} and an index i ∈ {0, ..., m} such that the subsequence {w(k, i)}_K converges to a point w̄. By the instructions of the algorithm we have

  f(w(k + 1, i)) ≤ f(w(k, i)).                                                (8)

Then, the continuity of f and the convergence of {w(k, i)}_K imply that the sequence {f(w(k, i))} has a subsequence converging to f(w̄). As {f(w(k, i))} is nonincreasing, this, in turn, implies that {f(w(k, i))} is bounded from below and converges to f(w̄). Then, the assertion follows immediately from the fact that

  f(w(k + 1, i)) ≤ f(w(k + 1, j)) ≤ f(w(k, i))  for 0 ≤ j ≤ i,

and

  f(w(k + 2, i)) ≤ f(w(k + 1, j)) ≤ f(w(k + 1, i))  for i ≤ j ≤ m.

Proposition 3. Suppose that for some i ∈ {1, ..., m} the sequence {w(k, i)} admits a limit point w̄. Then we have

  ∇_i f(w̄)^T (y_i − w̄_i) ≥ 0  for every y_i ∈ X_i                            (9)

and moreover

  ∇_ℓ f(w̄)^T (y_ℓ − w̄_ℓ) ≥ 0  for every y_ℓ ∈ X_ℓ,                           (10)

where ℓ = i (mod m) + 1.

Proof. Let {w(k, i)}_K be a subsequence converging to w̄. From (7), taking into account the continuity assumption on ∇f, we get immediately (9). In order to prove (10), suppose first i ∈ {1, ..., m − 1}, so that ℓ = i + 1. Reasoning by contradiction, let us assume that there exists a vector ỹ_{i+1} ∈ X_{i+1} such that

  ∇_{i+1} f(w̄)^T (ỹ_{i+1} − w̄_{i+1}) < 0.                                    (11)

Then, letting

  d_{i+1}^k = ỹ_{i+1} − w_{i+1}(k, i) = ỹ_{i+1} − x_{i+1}^k,

as {w(k, i)}_K is convergent, we have that the sequence {d_{i+1}^k}_K is bounded. Recalling (11) and taking into account the continuity assumption on ∇f, it follows that there exists a subset K_1 ⊆ K such that

  ∇_{i+1} f(w(k, i))^T d_{i+1}^k < 0  for all k ∈ K_1,

and therefore the sequences {w(k, i)}_{K_1} and {d_{i+1}^k}_{K_1} are such that Assumption 1 holds, provided that we identify {z^k} with {w(k, i)}_{K_1}. Now, for all k ∈ K_1, suppose that we compute α_{i+1}^k by means of Algorithm LS; then we have

  f(x_1^{k+1}, ..., x_i^{k+1}, x_{i+1}^k + α_{i+1}^k d_{i+1}^k, x_{i+2}^k, ..., x_m^k) ≤ f(w(k, i)).

Moreover, as x_{i+1}^k ∈ X_{i+1}, x_{i+1}^k + d_{i+1}^k ∈ X_{i+1}, α_{i+1}^k ∈ (0, 1], and X_{i+1} is convex, it follows that

  x_{i+1}^k + α_{i+1}^k d_{i+1}^k ∈ X_{i+1}.

Therefore, recalling that

  f(w(k, i + 1)) = min_{y_{i+1} ∈ X_{i+1}} f(x_1^{k+1}, ..., x_i^{k+1}, y_{i+1}, x_{i+2}^k, ..., x_m^k),

we can write

  f(w(k, i + 1)) ≤ f(x_1^{k+1}, ..., x_i^{k+1}, x_{i+1}^k + α_{i+1}^k d_{i+1}^k, x_{i+2}^k, ..., x_m^k) ≤ f(w(k, i)).   (12)

By Proposition 2 we have that the sequences {f(w(k, j))} are convergent to a unique limit for all j ∈ {0, ..., m}, and hence we obtain

  lim_{k→∞, k∈K_1} [ f(w(k, i)) − f(x_1^{k+1}, ..., x_i^{k+1}, x_{i+1}^k + α_{i+1}^k d_{i+1}^k, x_{i+2}^k, ..., x_m^k) ] = 0.

Then, invoking Proposition 1, where we identify {z^k} with {w(k, i)}_{K_1}, it follows that

  ∇_{i+1} f(w̄)^T (ỹ_{i+1} − w̄_{i+1}) = 0,

which contradicts (11), so that we have proved that (10) holds when i ∈ {1, ..., m − 1}. When i = m, so that ℓ = 1, we can repeat the same reasonings noting that w(k, m + 1) = w(k + 1, 1).

The preceding result implies, in particular, that every limit point of the sequence {x^k} generated by the GS method is a critical point with respect to the components x_1 and x_m in the prefixed ordering. This is formally stated below.

Corollary 1. Let {x^k} be the sequence generated by the GS method and suppose that there exists a limit point x̄. Then we have

  ∇_1 f(x̄)^T (y_1 − x̄_1) ≥ 0  for every y_1 ∈ X_1                            (13)

and

  ∇_m f(x̄)^T (y_m − x̄_m) ≥ 0  for every y_m ∈ X_m.                           (14)

4. The two-block GS method

Let us consider the problem

  minimize f(x) = f(x_1, x_2),  x ∈ X_1 × X_2,                                (15)

under the assumptions stated in Section 1.

We note that in many cases a two-block decomposition can be useful since it may allow us to employ parallel techniques for solving one subproblem. As an example, given a function of the form

  f(x) = φ_1(x_1) + Σ_{i=2}^N ψ_i(x_1) φ_i(x_i),

if we decompose the problem variables into the two blocks x_1 and (x_2, ..., x_N), then once x_1 is fixed, the objective function can be minimized in parallel with respect to the components x_i for i = 2, ..., N.

When m = 2, a convergence proof for the GS method (2-block GS method) in the unconstrained case was given in [6]. Here the extension to the constrained case is an immediate consequence of Corollary 1.

Corollary 2. Suppose that the sequence {x^k} generated by the 2-block GS method has limit points. Then every limit point x̄ of {x^k} is a critical point of Problem (15).

As an application of the preceding result we consider the problem of determining a critical point of a nonlinear programming problem where the objective function is a nonconvex quadratic function and we have disjoint constraints on two different blocks of variables. In some of these cases the use of the two-block GS method may allow us to determine a critical point via the solution of a sequence of convex programming problems of a special structure, and this may constitute a basic step in the context of cutting plane or branch and bound techniques for the computation of a global optimum.

As a first example, we consider a bilinear programming problem with disjoint constraints and we reobtain a slightly improved version of a result already established in [7] using different reasonings. Consider a bilinear programming problem of the form

  minimize  f(x_1, x_2) = x_1^T Q x_2 + c_1^T x_1 + c_2^T x_2
  subject to A_1 x_1 = b_1,  x_1 ≥ 0,                                         (16)
             A_2 x_2 = b_2,  x_2 ≥ 0,

where x_1 ∈ R^{n_1} and x_2 ∈ R^{n_2}. As shown in [8], problems of this form can be obtained, for instance, as an equivalent reformulation on an extended space of concave quadratic programming problems. Suppose that the following assumptions are satisfied:
(i) the sets X_1 = {x_1 ∈ R^{n_1} : A_1 x_1 = b_1, x_1 ≥ 0} and X_2 = {x_2 ∈ R^{n_2} : A_2 x_2 = b_2, x_2 ≥ 0} are nonempty;
(ii) f(x_1, x_2) is bounded below on X = X_1 × X_2.

Note that we do not assume, as in [7], that X_1 and X_2 are bounded. Starting from a given point (x_1^0, x_2^0) ∈ X, the two-block GS method consists in solving a sequence of two linear programming subproblems. In fact, given (x_1^k, x_2^k), we first obtain a solution x_1^{k+1} of the problem

  minimize  (Q x_2^k + c_1)^T x_1                                             (17)
  subject to A_1 x_1 = b_1,  x_1 ≥ 0,

and then we solve for x_2^{k+1} the problem

  minimize  (Q^T x_1^{k+1} + c_2)^T x_2                                       (18)
  subject to A_2 x_2 = b_2,  x_2 ≥ 0.

Under the assumptions stated, it is easily seen that problems (17) and (18) have optimal solutions and hence that the two-block GS method is well defined. In fact, reasoning by contradiction, assume that one subproblem, say (17), does not admit an optimal solution. As the feasible set X_1 is nonempty, this implies that the objective function is unbounded from below on X_1. Thus there exists a sequence of points z^j ∈ X_1 such that

  lim_{j→∞} (Q x_2^k + c_1)^T z^j = −∞,

and therefore, as x_2^k is fixed, we have also

  lim_{j→∞} f(z^j, x_2^k) = lim_{j→∞} [ (Q x_2^k + c_1)^T z^j + c_2^T x_2^k ] = −∞.

But this would contradict assumption (ii), since (z^j, x_2^k) is feasible for all j. We can also assume that x_1^{k+1} and x_2^{k+1} are vertex solutions, so that the sequence {(x_1^k, x_2^k)} remains in a finite set. Then, it follows from Corollary 2 that the two-block GS method must determine in a finite number of steps a critical point of problem (16).
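
One sweep of this two-block scheme therefore amounts to two LP solves per iteration. Below is a minimal Python sketch using scipy.optimize.linprog; the problem data (Q, c_1, c_2, A_1, b_1, A_2, b_2), the starting point and the stopping rule are placeholders, and no attempt is made to enforce vertex solutions.

# Sketch of the two-block GS method for the bilinear problem (16):
# alternately solve the linear programs (17) and (18).
import numpy as np
from scipy.optimize import linprog

def bilinear_gs(Q, c1, c2, A1, b1, A2, b2, x2_0, max_iter=50, tol=1e-9):
    x2 = np.asarray(x2_0, dtype=float)
    x1 = None
    for _ in range(max_iter):
        # (17): minimize (Q x2 + c1)^T x1  s.t.  A1 x1 = b1, x1 >= 0
        r1 = linprog(Q @ x2 + c1, A_eq=A1, b_eq=b1, bounds=(0, None))
        # (18): minimize (Q^T x1 + c2)^T x2  s.t.  A2 x2 = b2, x2 >= 0
        r2 = linprog(Q.T @ r1.x + c2, A_eq=A2, b_eq=b2, bounds=(0, None))
        if x1 is not None and np.linalg.norm(r1.x - x1) + np.linalg.norm(r2.x - x2) <= tol:
            return r1.x, r2.x
        x1, x2 = r1.x, r2.x
    return x1, x2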

As a second example, let us consider a (possibly nonconvex) problem of the form

  minimize  f(x_1, x_2) = (1/2) x_1^T A x_1 + (1/2) x_2^T B x_2 + x_1^T Q x_2 + c_1^T x_1 + c_2^T x_2   (19)
  subject to x_1 ∈ X_1,  x_2 ∈ X_2,

where X_1 ⊆ R^{n_1}, X_2 ⊆ R^{n_2} are nonempty closed convex sets, and the matrices A, B are symmetric and positive semidefinite. Suppose that one of the following assumptions is verified:
(i) X_1 and X_2 are bounded;
(ii) A is positive definite and X_2 is bounded.

Under either one of these assumptions, it is easily seen that the level set L^0_X is compact for every (x_1^0, x_2^0) ∈ X_1 × X_2. This implies that the two-block GS method is well defined and that the sequence {x^k} has limit points. Then, again by Corollary 2, we have that every limit point of this sequence is a critical point of problem (19).

5. The block GS method under generalized convexity assumptions

In this section we analyze the convergence properties of the block nonlinear Gauss–Seidel method in the case of arbitrary decomposition. In particular, we show that in this case the global convergence of the method can be ensured assuming the strict componentwise quasiconvexity of the objective function with respect to m − 2 components, or assuming that the objective function is pseudoconvex and has bounded level sets. We state formally the notion of strict componentwise quasiconvexity, which follows immediately from a known definition of strict quasiconvexity [10], which is sometimes also called strong quasiconvexity [2].

Definition 1. Let i ∈ {1, ..., m}; we say that f is strictly quasiconvex with respect to x_i ∈ X_i on X if for every x ∈ X and y_i ∈ X_i with y_i ≠ x_i we have

  f(x_1, ..., t x_i + (1 − t) y_i, ..., x_m) < max{ f(x), f(x_1, ..., y_i, ..., x_m) }  for all t ∈ (0, 1).

We can establish the following proposition, whose proof requires only minor adaptations of the arguments used, for instance, in [10,5].

Proposition 4. Suppose that f is a strictly quasiconvex function with respect to x_i ∈ X_i on X in the sense of Definition 1. Let {y^k} be a sequence of points in X converging to some ȳ ∈ X and let {v^k} be a sequence of vectors whose components are defined as follows:

  v_j^k = y_j^k                                              if j ≠ i,
  v_j^k = arg min_{u_i ∈ X_i} f(y_1^k, ..., u_i, ..., y_m^k)  if j = i.

Then, if

  lim_{k→∞} [ f(y^k) − f(v^k) ] = 0,

we have lim_{k→∞} ‖v^k − y^k‖ = 0.

Then, we can state the following proposition.

Proposition 5. Suppose that the function f is strictly quasiconvex with respect to x_i on X, for each i = 1, ..., m − 2, in the sense of Definition 1, and that the sequence {x^k} generated by the GS method has limit points. Then every limit point x̄ of {x^k} is a critical point of Problem (1).

Proof. Let us assume that there exists a subsequence {x^k}_K converging to a point x̄ ∈ X. From Corollary 1 we get

  ∇_m f(x̄)^T (y_m − x̄_m) ≥ 0  for every y_m ∈ X_m.                           (20)

Recalling Proposition 2 we can write

  lim_{k→∞} [ f(w(k, i)) − f(x^k) ] = 0  for i = 1, ..., m.

Using the strict quasiconvexity assumption on f and invoking Proposition 4, where we identify {y^k} with {x^k}_K and {v^k} with {w(k, 1)}_K, we obtain lim_{k→∞, k∈K} w(k, 1) = x̄. By repeated application of Proposition 4 to the sequences {w(k, i − 1)}_K and {w(k, i)}_K, for i = 1, ..., m − 2, we obtain

  lim_{k→∞, k∈K} w(k, i) = x̄  for i = 1, ..., m − 2.

Then, Proposition 3 implies

  ∇_i f(x̄)^T (y_i − x̄_i) ≥ 0  for every y_i ∈ X_i,  i = 1, ..., m − 1.        (21)

Hence, the assertion follows from (20) and (21).

In the next proposition we consider the case of a pseudoconvex objective function.

Proposition 6. Suppose that f is pseudoconvex on X and that L^0_X is compact. Then the sequence {x^k} generated by the GS method has limit points and every limit point x̄ of {x^k} is a global minimizer of f.

Proof. Consider the partial updates w(k, i), with i = 0, ..., m, defined in Section 3. By definition of w(k, i) we have f(x^{k+1}) ≤ f(w(k, i)) ≤ f(x^k) for i = 0, ..., m. Then, the points of the sequences {w(k, i)}, with i = 0, ..., m, belong to the compact set L^0_X. Therefore, if x̄ ∈ X is a limit point of {x^k}, we can construct a subsequence {x^k}_K such that

  lim_{k→∞, k∈K} x^k = x̄ = w̄^0,                                              (22)
  lim_{k→∞, k∈K} w(k, i) = w̄^i,  i = 1, ..., m,                               (23)

where w̄^i ∈ X for i = 1, ..., m. We can write

  w(k, i) = w(k, i − 1) + d(k, i)  for i = 1, ..., m,                          (24)

where the block components d_h(k, i) ∈ R^{n_h} of the vector d(k, i), with h ∈ {1, ..., m}, are such that d_h(k, i) = 0 if h ≠ i. Therefore, for i = 1, ..., m, from (22)–(24) we get

  w̄^i = w̄^{i−1} + d̄^i,                                                       (25)

where

  d̄^i = lim_{k→∞, k∈K} d(k, i)                                                (26)

and

  d̄_h^i = 0,  h ≠ i.                                                          (27)

By Proposition 2 we have

  f(x̄) = f(w̄^i)  for i = 1, ..., m.                                           (28)

From Proposition 3 it follows, for i = 1, ..., m,

  ∇_i f(w̄^i)^T (y_i − w̄_i^i) ≥ 0  for all y_i ∈ X_i,                          (29)

and

  ∇_ℓ f(w̄^i)^T (y_ℓ − w̄_ℓ^i) ≥ 0  for all y_ℓ ∈ X_ℓ,                          (30)

where ℓ = i (mod m) + 1. Now we prove that, given j, i ∈ {1, ..., m} such that

  ∇_i f(w̄^j)^T (y_i − w̄_i^j) ≥ 0  for all y_i ∈ X_i,                          (31)

then it follows

  ∇_i f(w̄^{j−1})^T (y_i − w̄_i^{j−1}) ≥ 0  for all y_i ∈ X_i.                  (32)

Obviously, (32) holds if i = j (see (30)). Therefore, let us assume i ≠ j. By (25)–(27) we have

  w̄^j = w̄^{j−1} + d̄^j,                                                        (33)

where d̄_h^j = 0 for h ≠ j. For any given vector u_i ∈ R^{n_i} such that w̄_i^{j−1} + u_i ∈ X_i, let us consider the feasible point

  z(u_i) = w̄^{j−1} + d(u_i),

where d_h(u_i) = 0 for h ≠ i and d_i(u_i) = u_i ∈ R^{n_i}. Then, from (29) and (31), observing that (33) and the fact that i ≠ j imply

  u_i = z_i(u_i) − w̄_i^{j−1} = z_i(u_i) − w̄_i^j,

we obtain

  ∇f(w̄^j)^T (z(u_i) − w̄^j)
    = ∇f(w̄^j)^T (w̄^{j−1} + d(u_i) − w̄^j)
    = ∇_j f(w̄^j)^T (w̄_j^{j−1} − w̄_j^j) + ∇_i f(w̄^j)^T u_i
    = ∇_j f(w̄^j)^T (w̄_j^{j−1} − w̄_j^j) + ∇_i f(w̄^j)^T (z_i(u_i) − w̄_i^j)
    ≥ 0.

It follows by the pseudoconvexity of f that

  f(z(u_i)) ≥ f(w̄^j)  for all u_i ∈ R^{n_i} such that w̄_i^{j−1} + u_i ∈ X_i.

On the other hand, f(w̄^j) = f(w̄^{j−1}), and therefore we have

  f(z(u_i)) ≥ f(w̄^{j−1})  for all u_i ∈ R^{n_i} such that w̄_i^{j−1} + u_i ∈ X_i,

which, recalling the definition of z(u_i), implies (32). Finally, taking into account (30), and using the fact that (31) implies (32), by induction we obtain

  ∇_j f(w̄^0)^T (y_j − w̄_j^0) = ∇_j f(x̄)^T (y_j − x̄_j) ≥ 0  for all y_j ∈ X_j.

Since this is true for every j ∈ {1, ..., m}, the thesis is proved.

As an example, let us consider a quadratic programming problem with disjoint constraints, of the form

  minimize  f(x) = Σ_{i=1}^m Σ_{j=1}^m x_i^T Q_{ij} x_j + Σ_{i=1}^m c_i^T x_i   (34)
  subject to A_i x_i ≤ b_i,  i = 1, ..., m,

where we assume that:
(i) the sets X_i = {x_i ∈ R^{n_i} : A_i x_i ≤ b_i}, for i = 1, ..., m, are nonempty and bounded;
(ii) the matrix

  Q = [ Q_{11} ... Q_{1m}
        ...          ...
        Q_{m1} ... Q_{mm} ]

is symmetric and positive semidefinite.

In this case the function f is convex and the level set L^0_X is compact for every feasible x^0, but the objective function may be not componentwise strictly quasiconvex. In spite of this, the m-block GS method is well defined and, as a result of Proposition 6, we can assert that it converges to an optimal solution of the constrained problem, through the solution of a sequence of convex quadratic programming subproblems of smaller dimension.
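
To illustrate the last remark, here is a minimal Python sketch of the m-block GS method applied to problem (34), in which every block update is a small convex QP in the variables of one block only. The data layout (lists of blocks Q[i][j], c[i], A[i], b[i]), the SLSQP subproblem solver and the stopping test are illustrative assumptions, not taken from the paper.

# Sketch of the m-block GS method for problem (34); each subproblem is the convex QP
#   minimize  y^T Q_ii y + (2 * sum_{j != i} Q_ij x_j + c_i)^T y   s.t.  A_i y <= b_i.
import numpy as np
from scipy.optimize import minimize

def block_gs_quadratic(Q, c, A, b, x0, max_iter=100, tol=1e-8):
    m = len(x0)
    x = [np.asarray(xi, dtype=float).copy() for xi in x0]
    for k in range(max_iter):
        x_old = [xi.copy() for xi in x]
        for i in range(m):
            # Linear coefficient of the i-th block subproblem (Q symmetric).
            q = 2.0 * sum(Q[i][j] @ x[j] for j in range(m) if j != i) + c[i]
            obj = lambda y, Qi=Q[i][i], q=q: float(y @ Qi @ y + q @ y)
            jac = lambda y, Qi=Q[i][i], q=q: 2.0 * (Qi @ y) + q
            cons = [{"type": "ineq", "fun": lambda y, Ai=A[i], bi=b[i]: bi - Ai @ y}]
            res = minimize(obj, x[i], jac=jac, method="SLSQP", constraints=cons)
            x[i] = res.x
        if sum(np.linalg.norm(x[i] - x_old[i]) for i in range(m)) <= tol:
            break
    return x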

6. A counterexample

In this section we consider a constrained version of a well-known counterexample due to Powell [12], which indicates that the results given in the preceding sections are tight in some sense. In fact, this example shows that the GS method may cycle indefinitely without converging to a critical point if the number m of blocks is equal to 3 and the objective f is a nonconvex function, which is componentwise convex but not strictly quasiconvex with respect to each component. The original counterexample of Powell consists in the unconstrained minimization of the function f: R^3 → R, defined by

  f(x) = −x_1 x_2 − x_2 x_3 − x_1 x_3 + (x_1 − 1)_+^2 + (−x_1 − 1)_+^2
         + (x_2 − 1)_+^2 + (−x_2 − 1)_+^2 + (x_3 − 1)_+^2 + (−x_3 − 1)_+^2,    (35)

where

  (t − c)_+^2 = 0 if t ≤ c,  (t − c)^2 if t > c.

Powell showed that, if the starting point x^0 is the point (−1 − ε, 1 + ε/2, −1 − ε/4), the steps of the GS method tend to cycle round six edges of the cube whose vertices are (±1, ±1, ±1), which are not stationary points of f. It can be easily verified that the level sets of the objective function (35) are not compact; in fact, setting x_2 = x_3 = x_1 we have that f(x) → −∞ as x_1 → ∞. However, the same behavior evidenced by Powell is obtained if we consider a constrained problem with the same objective function (35) and a compact feasible set, defined by the box constraints

  −M ≤ x_i ≤ M,  i = 1, ..., 3,

with M > 0 sufficiently large. In accordance with the results of Section 3, we may note that the limit points of the partial updates generated by the GS method are such that two gradient components are zero. Nonconvergence is due to the fact that the limit points associated to consecutive partial updates are distinct because of the fact that the function is not componentwise strictly quasiconvex; on the other hand, as the function is not pseudoconvex, the limit points of the sequence {x^k} are not critical points. Note that in the particular case of m = 3, by Proposition 5 we can ensure convergence by requiring only the strict quasiconvexity of f with respect to one component.

7. A proximal point modification of the GS method

In the preceding sections we have shown that the global convergence of the GS method can be ensured either under suitable convexity assumptions on the objective function or in the special case of a two-block decomposition. Now, for the general case of nonconvex objective function and arbitrary decomposition, we consider a proximal point modification of the Gauss–Seidel method. Proximal point versions of the GS method have been already considered in the literature (see e.g. [1,3,4]), but only under convexity assumptions on f. Here we show that these assumptions are not required if we are interested only in critical points. The algorithm can be described as follows.

PGS method

Step 0: Set k = 0, x^0 ∈ X, τ_i > 0 for i = 1, ..., m.
Step 1: For i = 1, ..., m set

  x_i^{k+1} = arg min_{y_i ∈ X_i} { f(x_1^{k+1}, ..., y_i, ..., x_m^k) + (1/2) τ_i ‖y_i − x_i^k‖^2 }.   (36)

Step 2: Set x^{k+1} = (x_1^{k+1}, ..., x_m^{k+1}), k = k + 1 and go to Step 1.
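
A minimal Python sketch of the PGS iteration (36) follows. As in the earlier sketches, the box sets X_i, the bounded quasi-Newton subproblem solver and the stopping test are illustrative assumptions; tau holds the proximal parameters τ_i of Step 0.

# Sketch of the PGS method: block GS with the proximal term (1/2)*tau_i*||y_i - x_i^k||^2
# added to each subproblem (36). Box sets and L-BFGS-B solves are assumptions.
import numpy as np
from scipy.optimize import minimize

def proximal_gauss_seidel(f, x0, blocks, bounds, tau, max_iter=100, tol=1e-8):
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i, idx in enumerate(blocks):
            xi_k = x[idx].copy()
            def sub(y, idx=idx, xi_k=xi_k, ti=tau[i]):
                z = x.copy()
                z[idx] = y
                return f(z) + 0.5 * ti * float(np.dot(y - xi_k, y - xi_k))
            res = minimize(sub, xi_k, method="L-BFGS-B",
                           bounds=[bounds[j] for j in idx])
            x[idx] = res.x
        if np.linalg.norm(x - x_old) <= tol:  # heuristic stopping test
            break
    return x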

The convergence properties of the method can be established by employing essentially the same arguments used in [1] in the convex case, and we can state the following proposition, whose proof is included here for completeness.

Proposition 7. Suppose that the PGS method is well defined and that the sequence {x^k} has limit points. Then every limit point x̄ of {x^k} is a critical point of Problem (1).

Proof. Let us assume that there exists a subsequence {x^k}_K converging to a point x̄ ∈ X. Define the vectors

  w(k, 0) = x^k,
  w(k, i) = (x_1^{k+1}, ..., x_i^{k+1}, x_{i+1}^k, ..., x_m^k)  for i = 1, ..., m.

Then we have

  f(w(k, i)) ≤ f(w(k, i − 1)) − (1/2) τ_i ‖w(k, i) − w(k, i − 1)‖^2,           (37)

from which it follows

  f(x^{k+1}) ≤ f(w(k, i)) ≤ f(w(k, i − 1)) ≤ f(x^k)  for i = 1, ..., m.        (38)

Reasoning as in Proposition 2 we obtain

  lim_{k→∞} [ f(x^{k+1}) − f(x^k) ] = 0,

and hence, taking limits in (37) for k → ∞, we have

  lim_{k→∞} ‖w(k, i) − w(k, i − 1)‖ = 0,  i = 1, ..., m,                       (39)

which implies

  lim_{k→∞, k∈K} w(k, i) = x̄,  i = 0, ..., m.                                 (40)

Now, for every j ∈ {1, ..., m}, as x_j^{k+1} is generated according to rule (36), the point w(k, j) = (x_1^{k+1}, ..., x_j^{k+1}, ..., x_m^k) satisfies the optimality condition

  [ ∇_j f(w(k, j)) + τ_j (w_j(k, j) − w_j(k, j − 1)) ]^T (y_j − w_j(k, j)) ≥ 0  for all y_j ∈ X_j.

Then, taking the limit for k → ∞, k ∈ K, recalling (39), (40) and the continuity assumption on ∇f, for every j ∈ {1, ..., m} we obtain

  ∇_j f(x̄)^T (y_j − x̄_j) ≥ 0  for all y_j ∈ X_j,

which proves our assertion.

Taking into account the results of Section 5, it follows that if the objective function f is strictly quasiconvex with respect to some component x_i, with i ∈ {1, ..., m}, then we can set τ_i = 0. Moreover, reasoning as in the proof of Proposition 5, we can obtain the same convergence results if we set τ_{m−1} = τ_m = 0.

As an application of Proposition 7, let us consider the quadratic problem (34) of Section 5. Suppose again that the sets X_i are nonempty and bounded, but now assume that Q is an arbitrary symmetric matrix. Then the objective function will not be pseudoconvex, in general, and possibly not componentwise strictly quasiconvex. In this case, the block GS method may not converge. However, the PGS method is well defined and, moreover, if we set τ_i > −2 λ_min(Q_{ii}), then the subproblems are strictly convex and the sequence generated by the PGS method has limit points that are critical points of the original problem.

Acknowledgements

The authors are indebted to the anonymous reviewer for the useful suggestions and for having drawn their attention to some relevant references.

References

[1] A. Auslender, Asymptotic properties of the Fenchel dual functional and applications to decomposition problems, J. Optim. Theory Appl. 73 (3) (1992).
[2] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming, Wiley, New York.
[3] D.P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont, MA.
[4] D.P. Bertsekas, P. Tseng, Partial proximal minimization algorithms for convex programming, SIAM J. Optim. 4 (3) (1994).
[5] D.P. Bertsekas, J.N. Tsitsiklis, Parallel and Distributed Computation, Prentice-Hall, Englewood Cliffs, NJ.
[6] L. Grippo, M. Sciandrone, Globally convergent block-coordinate techniques for unconstrained optimization, Optim. Methods Software 10 (1999).
[7] H. Konno, A cutting plane algorithm for solving bilinear programs, Math. Programming 11 (1976).
[8] H. Konno, Maximization of a convex quadratic function under linear constraints, Math. Programming 11 (1976).
[9] Z.Q. Luo, P. Tseng, On the convergence of the coordinate descent method for convex differentiable minimization, J. Optim. Theory Appl. 72 (1992).
[10] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York.
[11] M. Patriksson, Decomposition methods for differentiable optimization problems over Cartesian product sets, Comput. Optim. Appl. 9 (1) (1998).
[12] M.J.D. Powell, On search directions for minimization algorithms, Math. Programming 4 (1973).
[13] P. Tseng, Decomposition algorithm for convex differentiable minimization, J. Optim. Theory Appl. 70 (1991).
[14] N. Zadeh, A note on the cyclic coordinate ascent method, Management Sci. 16 (1970).
[15] W.I. Zangwill, Nonlinear Programming: A Unified Approach, Prentice-Hall, Englewood Cliffs, NJ, 1969.


More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Economics 101. Lecture 4 - Equilibrium and Efficiency

Economics 101. Lecture 4 - Equilibrium and Efficiency Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of

More information

Deriving the X-Z Identity from Auxiliary Space Method

Deriving the X-Z Identity from Auxiliary Space Method Dervng the X-Z Identty from Auxlary Space Method Long Chen Department of Mathematcs, Unversty of Calforna at Irvne, Irvne, CA 92697 chenlong@math.uc.edu 1 Iteratve Methods In ths paper we dscuss teratve

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

The Second Anti-Mathima on Game Theory

The Second Anti-Mathima on Game Theory The Second Ant-Mathma on Game Theory Ath. Kehagas December 1 2006 1 Introducton In ths note we wll examne the noton of game equlbrum for three types of games 1. 2-player 2-acton zero-sum games 2. 2-player

More information

Randić Energy and Randić Estrada Index of a Graph

Randić Energy and Randić Estrada Index of a Graph EUROPEAN JOURNAL OF PURE AND APPLIED MATHEMATICS Vol. 5, No., 202, 88-96 ISSN 307-5543 www.ejpam.com SPECIAL ISSUE FOR THE INTERNATIONAL CONFERENCE ON APPLIED ANALYSIS AND ALGEBRA 29 JUNE -02JULY 20, ISTANBUL

More information

Physics 5153 Classical Mechanics. D Alembert s Principle and The Lagrangian-1

Physics 5153 Classical Mechanics. D Alembert s Principle and The Lagrangian-1 P. Guterrez Physcs 5153 Classcal Mechancs D Alembert s Prncple and The Lagrangan 1 Introducton The prncple of vrtual work provdes a method of solvng problems of statc equlbrum wthout havng to consder the

More information

Structure and Drive Paul A. Jensen Copyright July 20, 2003

Structure and Drive Paul A. Jensen Copyright July 20, 2003 Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

Research Article Global Sufficient Optimality Conditions for a Special Cubic Minimization Problem

Research Article Global Sufficient Optimality Conditions for a Special Cubic Minimization Problem Mathematcal Problems n Engneerng Volume 2012, Artcle ID 871741, 16 pages do:10.1155/2012/871741 Research Artcle Global Suffcent Optmalty Condtons for a Specal Cubc Mnmzaton Problem Xaome Zhang, 1 Yanjun

More information

DIFFERENTIAL FORMS BRIAN OSSERMAN

DIFFERENTIAL FORMS BRIAN OSSERMAN DIFFERENTIAL FORMS BRIAN OSSERMAN Dfferentals are an mportant topc n algebrac geometry, allowng the use of some classcal geometrc arguments n the context of varetes over any feld. We wll use them to defne

More information

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2)

MATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2) 1/16 MATH 829: Introducton to Data Mnng and Analyss The EM algorthm (part 2) Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 20, 2016 Recall 2/16 We are gven ndependent observatons

More information