MATHEMATICAL ENGINEERING TECHNICAL REPORTS. Successive Lagrangian Relaxation Algorithm for Nonconvex Quadratic Optimization


MATHEMATICAL ENGINEERING TECHNICAL REPORTS

Successive Lagrangian Relaxation Algorithm for Nonconvex Quadratic Optimization

Shinji YAMADA and Akiko TAKEDA

METR, March 2017

DEPARTMENT OF MATHEMATICAL INFORMATICS
GRADUATE SCHOOL OF INFORMATION SCIENCE AND TECHNOLOGY
THE UNIVERSITY OF TOKYO
BUNKYO-KU, TOKYO, JAPAN

WWW page:

The METR technical reports are published as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Successive Lagrangian Relaxation Algorithm for Nonconvex Quadratic Optimization

Shinji Yamada, Akiko Takeda

March 31st, 2017

Abstract

Optimization problems whose objective function and constraints are quadratic polynomials are called quadratically constrained quadratic programs (QCQPs). QCQPs are NP-hard in general and are important in optimization theory and practice. There have been many studies on solving QCQPs approximately. Among them, semidefinite program (SDP) relaxation is a well-known convex relaxation method. In recent years, many researchers have tried to find better relaxed solutions by adding linear constraints as valid inequalities. On the other hand, SDP relaxation requires a long computation time, and it has high space complexity for large-scale problems in practice; therefore, SDP relaxation may not be useful for such problems. In this paper, we propose a new convex relaxation method that is weaker but faster than SDP relaxation methods. The proposed method transforms a QCQP into a Lagrangian dual optimization problem and successively solves subproblems while updating the Lagrange multipliers. The subproblem in our method is a QCQP with only one constraint, for which we propose an efficient algorithm. Numerical experiments confirm that our method can quickly find a relaxed solution with an appropriate termination condition.

1 Introduction

We consider the following quadratically constrained quadratic program (QCQP):

$$\begin{array}{ll} \min_{x \in \mathbb{R}^n} & x^\top Q_0 x + 2 q_0^\top x + \gamma_0 \\ \text{subject to} & x^\top Q_i x + 2 q_i^\top x + \gamma_i \le 0, \quad i = 1, \dots, m, \end{array} \tag{1}$$

(S. Yamada is with the Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan (email: shinji_yamada@mist.i.u-tokyo.ac.jp). A. Takeda is with the Department of Mathematical Analysis and Statistical Inference, The Institute of Statistical Mathematics, Tokyo, Japan (email: atakeda@ism.ac.jp).)

where each $Q_i$ is an $n \times n$ symmetric matrix; $Q_i = O$ means that the corresponding function is linear. We call a QCQP with $m$ constraints an $m$-QCQP. In the case $Q_i \succeq O$ for every $i = 0, \dots, m$, (1) is a convex program. However, in general, positive semidefiniteness is not assumed, and (1) is NP-hard [23]. QCQPs are important in optimization theory and in practice. QCQPs are fundamental nonlinear programming problems that appear in many applications such as max-cut problems [24] and binary quadratic optimizations. Some relaxation methods exist for finding a global solution of (1). A standard approach is the branch-and-bound (or cut) method, where a simple relaxation, e.g., a linear programming (LP) relaxation problem [2], is solved in each iteration. Audet, Hansen, Jaumard and Savard [5] proposed to introduce additional constraints, constructed using the reformulation-linearization technique (RLT), to the relaxation problem. Lagrangian bounds, i.e., bounds computed by Lagrangian relaxation, have also been used in branch-and-bound methods in order to reduce the duality gap [22, 27, 29]. The branch-and-bound (or cut) algorithm yields a global solution by solving many relaxation subproblems, which restricts the size of QCQPs it can handle. Another avenue of research has investigated tight relaxation problems for QCQPs. Among the many convex relaxation methods, semidefinite program (SDP) relaxation is well known and has been extensively studied [11, 19, 28]. It is known that an SDP relaxation can be viewed as a Lagrangian dual problem of the original QCQP. SDP relaxation has been applied to various QCQPs that appear in combinatorial optimization problems [12, 13] as well as in signal processing and communications [19]. SDP relaxation is powerful, and it gives the exact optimal value particularly when there is only one constraint, as in the trust region subproblem (TRS). Furthermore, recent studies such as [3] have proposed adding new valid inequalities (e.g., linear constraints on the matrix variable derived from the original upper and lower bound constraints) to SDP relaxation problems. In particular, Zheng, Sun, and Li [30, 31] proposed a decomposition-approximation scheme that generates an SDP relaxation at least as tight as the ordinary one. Jiang and Li [16] proposed second-order cone constraints as valid inequalities for the ordinary SDP relaxation. Such methods aim at obtaining a better relaxed solution even if they take more time to solve than the original SDP relaxation. However, SDP relaxation including additional valid inequalities increases the problem size, which leads to a longer computation time and often to memory shortage errors for large-scale problems. Kim and Kojima [17] proposed a second-order cone programming (SOCP) relaxation, where valid second-order cone constraints derived from the positive semidefinite inequality are added to the LP relaxation. Burer, Kim, and Kojima [9] proposed a weaker but faster method than SDP relaxation that uses a block matrix decomposition. Such faster relaxation methods are useful for large-scale problems, and they can be solved repeatedly in, e.g., a branch-and-bound method.

In this paper, we propose a faster convex relaxation method that is not stronger than SDP relaxation if valid constraints are not considered. Our method solves the Lagrangian dual problem of the original QCQP by using a subgradient method, though the dual problem can be reformulated as an SDP and is solvable with the interior point method. Indeed, various studies propose to solve Lagrangian dual problems for nonconvex problems, but most of them transform the dual problem into an SDP [6] or a more general cone problem [18]. Here, to resort to more easily solved problems, we divide the maximization of the objective function in the Lagrangian dual problem into two stages and iteratively solve the inner problem as a 1-QCQP, which can be solved exactly and quickly. There are mainly two approaches to solving a 1-QCQP: one is based on eigenvalue computation, the other on SDP relaxation. In particular, Moré and Sorensen [20] proposed to iteratively solve a symmetric positive-definite linear system for the TRS, while Adachi, Iwata, Nakatsukasa, and Takeda [1] proposed an accurate and efficient method that solves only one generalized eigenvalue problem. In this paper, we propose a new relaxation method that can solve a 1-QCQP exactly and quickly as a convex quadratic optimization problem. Furthermore, we prove that the convex quadratic problem constructs the convex hull of the feasible region of a 1-QCQP. Numerical experiments confirm that our convex quadratic relaxation method for solving 1-QCQPs is faster than SDP relaxation and eigenvalue methods. They also show that our method can quickly find a relaxed solution of an m-QCQP by iteratively solving a 1-QCQP with updated Lagrange multipliers. By adding valid constraints to our formulation, our method can sometimes find a better relaxed solution in a shorter computation time compared with the ordinary SDP relaxation. The relaxation technique can be embedded within a branch-and-bound framework to determine a global optimum of the original m-QCQP. The remainder of this paper is organized as follows. We introduce SDP relaxation and other related studies in Section 2. We describe our method and some of its properties in Sections 3 and 4. We present computational results in Section 5. We conclude the paper in Section 6. The Appendix contains proofs of the presented theorems. Throughout the paper, we denote matrices by uppercase letters such as $Q$, vectors by bold lowercase such as $q$, and scalars by normal lowercase such as $\gamma$. The notation $A \succ B$ or $A \succeq B$ implies that the matrix $A - B$ is positive definite or semidefinite, respectively. $e$ means the all-one vector.

2 Existing SDP relaxation methods for m-QCQP

2.1 SDP relaxation

An SDP relaxation can be expressed as a Lagrangian dual problem of the original problem (1) as follows:

$$\max_{\xi \ge 0} \ \phi(\xi). \tag{2}$$

Here, $\phi(\xi)$ is the optimal value function defined by

$$\phi(\xi) := \min_x \ x^\top\Big(Q_0 + \sum_{i=1}^m \xi_i Q_i\Big)x + 2\Big(q_0 + \sum_{i=1}^m \xi_i q_i\Big)^\top x + \gamma_0 + \sum_{i=1}^m \xi_i \gamma_i \tag{3}$$

$$= \begin{cases} -q(\xi)^\top Q(\xi)^\dagger q(\xi) + \gamma(\xi) & (\text{if } Q(\xi) \succeq O,\ q(\xi) \in \mathrm{Im}\, Q(\xi)), \\ -\infty & (\text{otherwise}), \end{cases} \tag{4}$$

where $Q(\xi) := Q_0 + \sum_{i=1}^m \xi_i Q_i$, $q(\xi) := q_0 + \sum_{i=1}^m \xi_i q_i$, $\gamma(\xi) := \gamma_0 + \sum_{i=1}^m \xi_i \gamma_i$, and $\dagger$ means the pseudo-inverse. Note that from (4), (2) is equivalent to

$$\max_{\xi \ge 0} \ \phi(\xi) \quad \text{s.t.} \quad Q_0 + \sum_{i=1}^m \xi_i Q_i \succeq O. \tag{5}$$

By considering $-q(\xi)^\top Q(\xi)^\dagger q(\xi) + \gamma(\xi)$ as a Schur complement of the block matrix $\begin{pmatrix} Q(\xi) & q(\xi) \\ q(\xi)^\top & \gamma(\xi) \end{pmatrix}$, we can equivalently rewrite the dual problem (5) as a semidefinite program (SDP)

$$\max_{\xi \ge 0,\ \tau} \ \tau \quad \text{s.t.} \quad \begin{pmatrix} Q(\xi) & q(\xi) \\ q(\xi)^\top & \gamma(\xi) - \tau \end{pmatrix} \succeq O, \tag{6}$$

which can be solved by using an interior point method. It should be noted that the dual of (6) is

$$\begin{array}{ll} \min_{x, X} & Q_0 \bullet X + 2 q_0^\top x + \gamma_0 \\ \text{s.t.} & Q_i \bullet X + 2 q_i^\top x + \gamma_i \le 0, \quad i = 1, \dots, m, \\ & X \succeq x x^\top, \end{array} \tag{7}$$

and (6) and (7) are equivalent under the Primal/Dual Slater condition.
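For concreteness, the following is a minimal sketch of the primal SDP relaxation (7), written with CVXPY in Python purely for illustration (the experiments in Section 5 use SeDuMi via MATLAB instead). The function name and the data layout, lists Q, q, gamma indexed by 0 for the objective and 1..m for the constraints, are our own conventions, not from the original report.

```python
import cvxpy as cp
import numpy as np

def sdp_relaxation(Q, q, gamma):
    """Shor SDP relaxation (7) of the m-QCQP (1)."""
    n = Q[0].shape[0]
    x = cp.Variable(n)
    X = cp.Variable((n, n), symmetric=True)
    # X >= x x^T, written as an (n+1)x(n+1) PSD block matrix (Schur complement).
    M = cp.bmat([[X, cp.reshape(x, (n, 1))],
                 [cp.reshape(x, (1, n)), np.ones((1, 1))]])
    cons = [M >> 0]
    # Q_i . X + 2 q_i^T x + gamma_i <= 0 for i = 1, ..., m
    cons += [cp.trace(Q[i] @ X) + 2 * q[i] @ x + gamma[i] <= 0
             for i in range(1, len(Q))]
    prob = cp.Problem(cp.Minimize(cp.trace(Q[0] @ X) + 2 * q[0] @ x + gamma[0]),
                      cons)
    prob.solve()
    return prob.value, x.value
```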

SDP relaxation is a popular approach to dealing with (1). Sturm and Zhang [26] proved that when there is one constraint (i.e., for a 1-QCQP), SDP relaxation always attains the exact optimal value. Goemans and Williamson showed an approximation bound of SDP relaxation for max-cut problems [13], and Goemans [12] applied SDP relaxation to various combinatorial problems. Their numerical experiments show that SDP relaxation can find a very tight relaxed solution for many kinds of problems. However, SDP relaxation has disadvantages in both computation time and space complexity because of the matrix variable; it cannot deal with large-scale problems because of shortage of memory. Although polynomial-time algorithms, such as interior point methods, have been established, they often take a long time to solve an SDP relaxation problem in practice.

2.2 Stronger SDP relaxation using RLT

To further strengthen the SDP relaxation, Anstreicher [3] proposed applying the reformulation-linearization technique (RLT). Moreover, [3] added new constraints and restricted the range of the new variables $X_{ij}$, $\forall i, j$. Here, one assumes the original problem (1) has box constraints, i.e., lower and upper bounds on each variable $x_j$ ($l_j$ and $u_j$, respectively). Note as well that even if there are no box constraints, we may be able to compute $l_j$ and $u_j$ by using the original constraints if the feasible region is bounded. The inequality $l_j \le x_j \le u_j$ (as a vector expression, $l \le x \le u$) leads to

$$(x_i - u_i)(x_j - u_j) \ge 0 \iff x_i x_j - u_i x_j - u_j x_i + u_i u_j \ge 0, \tag{8}$$
$$(x_i - u_i)(x_j - l_j) \le 0 \iff x_i x_j - u_i x_j - l_j x_i + u_i l_j \le 0, \tag{9}$$
$$(x_i - l_i)(x_j - l_j) \ge 0 \iff x_i x_j - l_i x_j - l_j x_i + l_i l_j \ge 0, \tag{10}$$

for $i, j = 1, \dots, n$. By replacing $x_i x_j$ with $X_{ij}$, we get

$$X_{ij} - u_i x_j - u_j x_i + u_i u_j \ge 0, \tag{11}$$
$$X_{ij} - u_i x_j - l_j x_i + u_i l_j \le 0, \tag{12}$$
$$X_{ij} - l_i x_j - l_j x_i + l_i l_j \ge 0. \tag{13}$$

(11)–(13) are linear inequalities that involve the matrix variables $X_{ij}$. Therefore, by adding these constraints, we can get a stronger relaxation. The disadvantage of RLT is that it increases computation time because of the increased number of variables $X_{ij}$ and the additional constraints (11)–(13). Many studies have aimed at strengthening the relaxation by adding valid inequalities other than (11)–(13) [16, 25, 30, 31]. These methods give very tight bounds, but they entail large amounts of computation time.
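As a sketch, the RLT cuts (11)–(13) can be appended to the relaxation above; the helper below is our own illustration, assuming numpy vectors l and u for the box bounds, and returns the cuts as CVXPY constraints on the variables x and X.

```python
import cvxpy as cp
import numpy as np

def rlt_cuts(x, X, l, u):
    """Matrix form of the RLT inequalities (11)-(13)."""
    n = l.size
    ux = np.reshape(u, (n, 1)) @ cp.reshape(x, (1, n))  # (i, j) entry: u_i x_j
    lx = np.reshape(l, (n, 1)) @ cp.reshape(x, (1, n))  # (i, j) entry: l_i x_j
    return [
        X - ux - ux.T + np.outer(u, u) >= 0,  # (11), from (x_i-u_i)(x_j-u_j) >= 0
        X - ux - lx.T + np.outer(u, l) <= 0,  # (12), from (x_i-u_i)(x_j-l_j) <= 0
        X - lx - lx.T + np.outer(l, l) >= 0,  # (13), from (x_i-l_i)(x_j-l_j) >= 0
    ]
```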

2.3 Weaker SDP relaxation method by block decomposition

Burer, Kim, and Kojima [9] aim to solve a relaxed problem faster than SDP relaxation can, although the relaxation is weaker; as such, their method shares a similar motivation with ours. First, [9] assumes that the original problem has $[0, 1]$ box constraints (i.e., $\forall i:\ 0 \le x_i \le 1$) in order to avoid a situation in which the optimal value diverges. Then, [9] proves that we can compute a block diagonal matrix $D_i$ which satisfies $Q_i + D_i \succeq O$ for each matrix $Q_i$ appearing in the objective function or the constraints. By using $D_i$, we can transform a quadratic polynomial as

$$x^\top Q_i x + 2 q_i^\top x + \gamma_i = -x^\top D_i x + x^\top (Q_i + D_i) x + 2 q_i^\top x + \gamma_i$$

and relax $-x^\top D_i x$ to $-D_i \bullet X$ with $X \succeq x x^\top$ as in SDP relaxation. As a whole, the relaxation problem is as follows:

$$\begin{array}{ll} \min_{x, X} & -D_0 \bullet X + x^\top (Q_0 + D_0) x + 2 q_0^\top x + \gamma_0 \\ \text{s.t.} & -D_i \bullet X + x^\top (Q_i + D_i) x + 2 q_i^\top x + \gamma_i \le 0, \quad i = 1, \dots, m, \\ & X_k \succeq x_k x_k^\top, \quad k = 1, \dots, r, \end{array}$$

where $r$ denotes the number of blocks of the $D_i$, and $X_k$ and $x_k$ denote the submatrix of $X$ and the subvector of $x$ corresponding to the $k$th block. Note that in a similar way to (12), we get new constraints $X_{ii} \le x_i$, $i = 1, \dots, n$, for the matrix variable $X$ from the box constraints. Since we relax only the quadratic form for each block part, the matrix $X$ has only block components. Therefore, we can impose the positive semidefinite constraint only on the block parts: $X_k \succeq x_k x_k^\top$. The number of variables related to the positive semidefinite constraint is reduced, and that is why the optimal value can be obtained so quickly. We call this method Block-SDP and use it in the numerical experiments in Section 5. In [9], it is proposed to divide the $D_i$ into blocks as evenly as possible, that is, by making the difference between the largest block size and the smallest block size at most one for a given $r$.

3 Proposed Method

3.1 Assumptions

Before we explain our method, we impose the following three assumptions.

Assumption 1.
(a) The feasible region of (1) has an interior point.
(b) There exists at least one matrix $Q_i$ ($i = 0, \dots, m$) such that $Q_i \succ O$.
(c) When $Q_0 \succeq O$, any optimal solution $x^* := -Q_0^\dagger q_0$ of the unconstrained optimization problem
$$\min_x \ x^\top Q_0 x + 2 q_0^\top x + \gamma_0 \tag{14}$$
is not feasible for the original QCQP (1).
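Assumption 1 (c) is straightforward to verify numerically, e.g., along the following lines (a small numpy sketch under our own naming conventions): solve (14) via the pseudo-inverse and test the solution against the constraints of (1).

```python
import numpy as np

def assumption_1c_holds(Q, q, gamma, tol=1e-9):
    """Check Assumption 1 (c) for Q0 >= O: x* = -Q0^+ q0 must be infeasible."""
    x_star = -np.linalg.pinv(Q[0]) @ q[0]   # optimal solution of (14)
    infeasible = any(x_star @ Q[i] @ x_star + 2 * q[i] @ x_star + gamma[i] > tol
                     for i in range(1, len(Q)))
    # If x* is feasible, it is already a global optimum of (1).
    return infeasible
```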

Assumption 1 (a) is the primal Slater condition, and (b) is a sufficient condition for the dual Slater condition of the original QCQP. Assumption 1 (c) is not a strong one, because if $x^*$ is feasible for (1), then it is a global optimal solution, and we can check this easily.

3.2 The Whole Algorithm

We further transform the Lagrangian dual problem (2) into

$$\max_{\lambda \in \Lambda_s} \ \max_{\mu \ge 0} \ \phi(\mu \lambda), \tag{15}$$

where $\Lambda_s := \{\lambda \ge 0 \mid e^\top \lambda = 1\}$ is a simplex. Now we define $\psi(\lambda)$ as the optimal value of the inner optimization problem of (15) for a given $\lambda \in \Lambda_s$:

$$\psi(\lambda) := \max_{\mu \ge 0} \ \phi(\mu \lambda). \tag{16}$$

Note that (16) is the Lagrangian dual problem of the following 1-QCQP:

$$\psi(\lambda) = \min_x \ x^\top Q_0 x + 2 q_0^\top x + \gamma_0 \quad \text{s.t.} \quad x^\top\Big(\sum_{i=1}^m \lambda_i Q_i\Big)x + 2\Big(\sum_{i=1}^m \lambda_i q_i\Big)^\top x + \sum_{i=1}^m \lambda_i \gamma_i \le 0. \tag{17}$$

There is no duality gap between the 1-QCQP (17) and its Lagrangian dual (16), since [26] proves that the SDP formulation of (16) has the same optimal value as the 1-QCQP (17). We show how to solve the 1-QCQP (17) exactly and quickly in Section 4.1. The SDP relaxation problem (2) can thus be written as

$$\max_{\lambda \in \Lambda_s} \ \psi(\lambda). \tag{18}$$

Here, we propose an algorithm which iteratively solves (17) with updated $\lambda \in \Lambda_s$ in order to find an optimal solution of the SDP relaxation problem. $\Lambda_s$ is a convex set, and $\psi(\lambda)$ is a quasi-concave function, as shown in Section 3.3. Therefore, we apply a standard gradient projection method to (18) for updating $\lambda$. The speed of convergence of gradient methods is slow in general, especially near optimal solutions, and therefore we obtain a relaxed solution by using an appropriate termination criterion. Algorithm 1 summarizes the proposed method. We divide the maximization of the Lagrangian function into two stages and iteratively solve 1-QCQPs. Note that our method solves the 1-QCQPs $\psi(\lambda)$ successively, so it is not stronger than SDP relaxation. We explain the method (especially the relationship between $(P_k)$ and (27) or (28)) in Section 4.

Algorithm 1 Successive Lagrangian Relaxation (SLR)

Given $Q_0, \dots, Q_m$ ($\exists i;\ Q_i \succ O$), $q_0, \dots, q_m$, $\gamma_0, \dots, \gamma_m$, a tolerance $\epsilon$, and a sufficiently small value $\psi(\lambda^{(-1)})$,

Step 1: Set $k = 0$ and define an initial point $\lambda^{(0)}$.

Step 2: Find an optimal solution $x^{(k)}$ and the optimal value $\psi(\lambda^{(k)})$ of

$$(P_k): \quad \psi(\lambda^{(k)}) = \min_x \ x^\top Q_0 x + 2 q_0^\top x + \gamma_0 \quad \text{s.t.} \quad x^\top\Big(\sum_{i=1}^m \lambda_i^{(k)} Q_i\Big)x + 2\Big(\sum_{i=1}^m \lambda_i^{(k)} q_i\Big)^\top x + \sum_{i=1}^m \lambda_i^{(k)} \gamma_i \le 0$$

by solving the convex problem (27) or (28).

Step 3: If $\dfrac{\psi(\lambda^{(k)}) - \psi(\lambda^{(k-1)})}{|\psi(\lambda^{(k-1)})|} < \epsilon$, then stop the algorithm. Otherwise, update $\lambda^{(k)}$ by Algorithm 2 shown in Section 4.2, set $k \leftarrow k + 1$, and go to Step 2.

3.3 Quasi-Concavity of ψ(λ)

Objective functions of Lagrangian dual problems are concave in the Lagrange multipliers (see, e.g., [7]). The function $\phi(\mu\lambda)$ for fixed $\lambda$ is hence concave in $\mu$, but $\psi(\lambda)$ is not necessarily concave in $\lambda$. However, we can prove that $\psi(\lambda)$ is a quasi-concave function and has some of the desirable properties that concave functions have. Before we prove the quasi-concavity of $\psi(\lambda)$, we define the set

$$\Lambda_+ = \{\lambda \in \Lambda_s \mid \psi(\lambda) > \phi(0)\} \tag{19}$$

in order to explain the properties of $\psi(\lambda)$. Note that $\phi(0)$ is the optimal value of the unconstrained problem (14) and that $\psi(\lambda) \ge \phi(0)$ holds for all $\lambda$. We can also see that $\Lambda_+$ is nonempty if and only if the SDP relaxation value is larger than $\phi(0)$, i.e., $\mathrm{OPT}_{\mathrm{SDP}} > \phi(0)$ holds. This statement is obvious from $\mathrm{OPT}_{\mathrm{SDP}} = \max\{\psi(\lambda) \mid \lambda \in \Lambda_s\}$ (see (18)) and (19). In other words, $\mathrm{OPT}_{\mathrm{SDP}} = \phi(0)$ means that for all $\lambda \in \Lambda_s$ there exists an optimal solution of (14) which is feasible for (17). From the definition of $\Lambda_+$, we can see that for $\lambda \in \Lambda_+$, the optimal solution $\mu_\lambda$ of (16) is positive. When $\lambda \notin \Lambda_+$ (i.e., $\psi(\lambda) = \phi(0)$), we can set $\mu_\lambda$ to zero without changing the optimal value and solution.

By using such $\mu_\lambda$, we can identify (19) with

$$\Lambda_+ = \{\lambda \in \Lambda_s \mid \mu_\lambda > 0\}. \tag{20}$$

Now let us prove the quasi-concavity of $\psi(\lambda)$ and some other properties.

Theorem 1. Let $\psi(\lambda)$ be the optimal value of (16) and $(x_\lambda, \mu_\lambda)$ be its optimal solution. Then, the following (i)–(iv) hold.

(i) The vector $g_\lambda$ defined by
$$(g_\lambda)_i = \mu_\lambda \big( x_\lambda^\top Q_i x_\lambda + 2 q_i^\top x_\lambda + \gamma_i \big), \quad i = 1, \dots, m, \tag{21}$$
is a subgradient of $\psi$ at $\lambda$, which is defined as a vector in the quasi-subdifferential (see, e.g., [14, 15])
$$\partial \psi(\lambda) := \{ s \mid s^\top (\nu - \lambda) \ge 0, \ \forall \nu;\ \psi(\nu) > \psi(\lambda) \}. \tag{22}$$

(ii) $\psi(\lambda)$ is a quasi-concave function on $\Lambda_s$.

(iii) $\Lambda_+$ is a convex set.

(iv) If $\psi(\lambda)$ has stationary points in $\Lambda_+$, all of them are global optimal solutions in $\Lambda_s$.

The proof is in the Appendix. Note that the set of global solutions of $\psi$ is convex because of the quasi-concavity of $\psi(\lambda)$. Property (iv) is similar to a property that concave functions have. Therefore, a simple subgradient method such as SLR, which searches for stationary points, works well. SLR is an algorithm for finding a stationary point in $\Lambda_+$, which Theorem 1 (iv) proves to be a global optimal solution in $\Lambda_s$. Figures 1 and 2 show images of $\psi(\lambda)$ for $m = 2$, where $\lambda \in \Lambda_s$ is expressed by one variable $\alpha \in [0, 1]$ as $\lambda = (\alpha, 1 - \alpha)^\top$. The vertical axis shows $\psi(\lambda)$, and the horizontal one shows $\alpha$, for a randomly generated 2-QCQP. These figures confirm that $\psi(\lambda)$ is quasi-concave. There are subgradient methods for maximizing a quasi-concave function. If the objective function satisfies some assumptions, the convergence of these algorithms is guaranteed. However, this may not be the case for the problem setting of (18). For example, in [15], $\psi(\lambda)$ must satisfy the Hölder condition of order $p > 0$ with modulus $\mu > 0$, that is,
$$\psi(\lambda) \ge \psi^* - \mu \, (\mathrm{dist}(\lambda, \Lambda^*))^p, \quad \forall \lambda \in \mathbb{R}^m,$$
where $\psi^*$ is the optimal value, $\Lambda^*$ is the set of optimal solutions, and $\mathrm{dist}(y, Y)$ denotes the Euclidean distance from a vector $y$ to a set $Y$. It is hard to check whether $\psi(\lambda)$ satisfies the Hölder condition. Ensuring the convergence of SLR thus seems difficult, but numerical experiments imply that SLR works well and often attains the same optimal value as SDP relaxation.
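Putting Algorithm 1 and Theorem 1 together, the outer loop of SLR can be sketched as follows (our own Python pseudocode; solve_Pk stands for the exact 1-QCQP solver via (27) or (28) from Section 4.1, and update_lambda for Algorithm 2).

```python
import numpy as np

def slr(Q, q, gamma, lam0, solve_Pk, update_lambda, eps=1e-4, max_iter=1000):
    """Outer loop of Successive Lagrangian Relaxation (Algorithm 1)."""
    lam, psi_prev = lam0, None
    for k in range(max_iter):
        x, psi = solve_Pk(Q, q, gamma, lam)           # Step 2: solve (P_k)
        if psi_prev is not None and \
           abs(psi - psi_prev) / abs(psi_prev) < eps:
            break                                     # Step 3: relative change < eps
        lam = update_lambda(Q, q, gamma, lam, x,
                            step=1.0 / np.sqrt(k + 1))  # diminishing step size
        psi_prev = psi
    return lam, x, psi
```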

Figure 1: $\psi(\lambda)$ and $\Lambda_+$ when $Q_0 \succeq O$ (outside $\Lambda_+$, $\mu_\lambda = 0$ and $\psi(\lambda) = \phi(0)$).
Figure 2: $\psi(\lambda)$ and $\Lambda_+$ when $Q_0 \not\succeq O$ (outside $\Lambda_+$, $\mu_\lambda = 0$ and $\psi(\lambda) = -\infty$).

4 Details of the Algorithm

4.1 1-QCQP as a Subproblem

SLR needs to solve the 1-QCQP $(P_k)$. Here, we describe a fast and exact solution method for a general 1-QCQP. First, we transform the original 1-QCQP, by using a new variable $t$, into the form

$$\min_{x, t} \ t \quad \text{s.t.} \quad x^\top Q_0 x + 2 q_0^\top x + \gamma_0 \le t, \quad x^\top Q_\lambda x + 2 q_\lambda^\top x + \gamma_\lambda \le 0, \tag{23}$$

where $Q_\lambda = \sum_{i=1}^m \lambda_i^{(k)} Q_i$, $q_\lambda = \sum_{i=1}^m \lambda_i^{(k)} q_i$, and $\gamma_\lambda = \sum_{i=1}^m \lambda_i^{(k)} \gamma_i$ in the SLR algorithm. Here, we assume that (23) satisfies the following primal and dual Slater conditions:

(Primal Slater condition) $\exists x \in \mathbb{R}^n$ s.t. $x^\top Q_\lambda x + 2 q_\lambda^\top x + \gamma_\lambda < 0$;
(Dual Slater condition) $\exists \sigma \ge 0$ s.t. $Q_0 + \sigma Q_\lambda \succ O$.

Note that $(P_k)$ satisfies the primal Slater condition because of Assumption 1 (a), and it also satisfies the dual Slater condition because either $Q_0$ or $Q_\lambda$ is positive definite by the updating rule of $\lambda^{(k)}$ explained in the next subsection. Here, we define

$$S := \{\sigma \ge 0 \mid Q_0 + \sigma Q_\lambda \succeq O\},$$

which is a one-dimensional convex set, that is, an interval. The dual Slater condition implies that $S$ is not a single point, and therefore $\underline{\sigma} < \bar{\sigma}$ holds for

$$\bar{\sigma} := \sup_{\sigma \in S} \sigma, \tag{24}$$
$$\underline{\sigma} := \inf_{\sigma \in S} \sigma. \tag{25}$$

We set $\bar{\sigma} = +\infty$ when $Q_\lambda \succeq O$ and $\underline{\sigma} = 0$ when $Q_0 \succeq O$. For (23), we construct the following relaxation problem using $\underline{\sigma}$ and $\bar{\sigma}$:

$$\begin{array}{ll} \min_{x, t} & t \\ \text{s.t.} & x^\top (Q_0 + \underline{\sigma} Q_\lambda) x + 2 (q_0 + \underline{\sigma} q_\lambda)^\top x + \gamma_0 + \underline{\sigma} \gamma_\lambda \le t, \\ & x^\top (Q_0 + \bar{\sigma} Q_\lambda) x + 2 (q_0 + \bar{\sigma} q_\lambda)^\top x + \gamma_0 + \bar{\sigma} \gamma_\lambda \le t. \end{array} \tag{26}$$

Note that in the SLR algorithm, we keep either $Q_0$ or $Q_\lambda$ positive semidefinite. When $\underline{\sigma} = 0$, (26) is equivalent to the following relaxed problem:

$$\begin{array}{ll} \min_{x, t} & t \\ \text{s.t.} & x^\top Q_0 x + 2 q_0^\top x + \gamma_0 \le t, \\ & x^\top (\hat{\sigma} Q_0 + Q_\lambda) x + 2 (\hat{\sigma} q_0 + q_\lambda)^\top x + (\hat{\sigma} \gamma_0 + \gamma_\lambda) \le \hat{\sigma} t, \end{array} \tag{27}$$

where $\hat{\sigma} = 1/\bar{\sigma}$, and $\hat{\sigma}$ can be easily calculated. On the other hand, when $\bar{\sigma} = +\infty$, (26) is equivalent to

$$\begin{array}{ll} \min_{x, t} & t \\ \text{s.t.} & x^\top (Q_0 + \underline{\sigma} Q_\lambda) x + 2 (q_0 + \underline{\sigma} q_\lambda)^\top x + \gamma_0 + \underline{\sigma} \gamma_\lambda \le t, \\ & x^\top Q_\lambda x + 2 q_\lambda^\top x + \gamma_\lambda \le 0. \end{array} \tag{28}$$

In both cases, the second constraint can be viewed as the second constraint of (26) divided by $\bar{\sigma}$ (in (28), in the limit $\bar{\sigma} \to +\infty$). The following theorem shows the equivalence of the proposed relaxation problem (26) and the original problem (23).

Theorem 2. Under the primal and dual Slater conditions, the feasible region $\mathcal{F}_{\mathrm{rel}}$ of the proposed relaxation problem (26) is the convex hull of the feasible region $\mathcal{F}$ of the original problem (23), i.e., $\mathcal{F}_{\mathrm{rel}} = \mathrm{conv}(\mathcal{F})$.

The proof is in the Appendix. Theorem 2 implies that (26) gives an exact optimal solution of the 1-QCQP (23), since the objective function is linear. The outline of the proof is as follows (see Figure 3). We choose an arbitrary point $(x^*, t^*)$ in $\mathcal{F}_{\mathrm{rel}}$ and show that there exist two points P and Q in $\mathcal{F}$ which express $(x^*, t^*)$ as a convex combination. We show in the Appendix how to obtain P and Q for an arbitrary point in $\mathcal{F}_{\mathrm{rel}}$. Using this technique, we can find an optimal solution of the 1-QCQP (23).
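A sketch of the form (27) of this relaxation as a convex quadratic program follows (CVXPY again, with our own naming conventions; sigma_hat is computed as described in the next subsection, and both quadratic forms are positive semidefinite by construction).

```python
import cvxpy as cp

def solve_cq1_form27(Q0, q0, g0, Qlam, qlam, glam, sigma_hat):
    """Solve (27), the case sigma_lo = 0 of (26), assuming Q0 > O."""
    x, t = cp.Variable(Q0.shape[0]), cp.Variable()
    cons = [cp.quad_form(x, Q0) + 2 * q0 @ x + g0 <= t,
            # second constraint of (26) divided by sigma_up:
            cp.quad_form(x, sigma_hat * Q0 + Qlam)
            + 2 * (sigma_hat * q0 + qlam) @ x
            + (sigma_hat * g0 + glam) <= sigma_hat * t]
    prob = cp.Problem(cp.Minimize(t), cons)
    prob.solve()
    return x.value, t.value
```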

Figure 3: Image of $\mathcal{F}$ and $\mathcal{F}_{\mathrm{rel}}$; an arbitrary point $(x^*, t^*) \in \mathcal{F}_{\mathrm{rel}}$ is a convex combination of two points P and Q in $\mathcal{F}$.

By comparison, SDP relaxation cannot always find a feasible solution of the 1-QCQP (though it can obtain the optimal value). (26) is a convex quadratic problem equivalent to the 1-QCQP, which we call CQ1. Note that CQ1 has only two constraints, and we can solve it very quickly. CQ1 can be constructed for general 1-QCQPs, including the trust region subproblem (TRS). The numerical experiments in Section 5 imply that CQ1 can be solved by a convex quadratic optimization solver faster than by an efficient method for solving a TRS, and hence that CQ1 can speed up SLR.

Now let us explain how to calculate $\hat{\sigma}$ or $\underline{\sigma}$, especially when either $Q_0$ or $Q_\lambda$ is positive definite, i.e., how to convexify a matrix which has some negative eigenvalues by using a positive definite matrix, for both cases $Q_0 \succ O$ in (27) and $Q_\lambda \succ O$ in (28). First, we calculate $\hat{\sigma}$ in (27) when $Q_0 \succ O$. Let $Q_0^{1/2}$ be the square root of the matrix $Q_0$ and $Q_0^{-1/2}$ be its inverse. Then,

$$\sigma Q_0 + Q_\lambda \succeq O \iff Q_0^{-1/2} (\sigma Q_0 + Q_\lambda) Q_0^{-1/2} \succeq O \iff \sigma I + Q_0^{-1/2} Q_\lambda Q_0^{-1/2} \succeq O$$

holds. Therefore, $\hat{\sigma}$ can be calculated as

$$\hat{\sigma} = -\min\{\sigma_{\min}(Q_0^{-1/2} Q_\lambda Q_0^{-1/2}),\ 0\},$$

where $\sigma_{\min}(X)$ is the minimum eigenvalue of $X$. Similarly, when $Q_\lambda \succ O$, we can calculate $\underline{\sigma}$ in (28) as

$$\underline{\sigma} = -\min\{\sigma_{\min}(Q_\lambda^{-1/2} Q_0 Q_\lambda^{-1/2}),\ 0\}.$$

It is true that (26) then gives us the exact optimal value of (23). When both $Q_0$ and $Q_\lambda$ have zero (or even negative) eigenvalues, $\hat{\sigma}$ and $\underline{\sigma}$ cannot be computed, but such cases can be ignored because a 1-QCQP (23) with such $Q_0$ and $Q_\lambda$ does not give an optimal solution of (18). Therefore, in Algorithm 1, we keep either $Q_0$ or $\sum_{i=1}^m \lambda_i Q_i$ positive definite.
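Following the whitening argument above, $\hat{\sigma}$ and $\underline{\sigma}$ each reduce to one symmetric eigenvalue computation; a numpy sketch (helper names are ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def sigma_hat(Q0, Qlam):
    """sigma_hat = -min{sigma_min(Q0^{-1/2} Qlam Q0^{-1/2}), 0}, for Q0 > O."""
    W = np.linalg.inv(np.real(sqrtm(Q0)))        # Q0^{-1/2}
    return -min(np.linalg.eigvalsh(W @ Qlam @ W).min(), 0.0)

def sigma_lo(Q0, Qlam):
    """sigma_lo = -min{sigma_min(Qlam^{-1/2} Q0 Qlam^{-1/2}), 0}, for Qlam > O."""
    W = np.linalg.inv(np.real(sqrtm(Qlam)))      # Qlam^{-1/2}
    return -min(np.linalg.eigvalsh(W @ Q0 @ W).min(), 0.0)
```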

4.2 Update Rule of λ

Now let us explain the update rule of $\lambda$, which is invoked in Step 3 of Algorithm 1. The update rule is constructed in a similar way to the gradient projection method for solving (18). Step 4 below is needed only when $Q_0 \not\succ O$ and $\sum_{i=1}^m \lambda_i^{(k+1)} Q_i \not\succ O$ hold. We update the Lagrange multipliers corresponding to convex constraints, whose index set is defined as $C := \{ i \mid 1 \le i \le m,\ Q_i \succeq O \}$. When $Q_0 \not\succ O$, Assumption 1 (b) assures that $C$ is non-empty.

Algorithm 2 Update rule of λ

Given a sufficiently small positive scalar $\delta$,

Step 1: Calculate the gradient vector $g^{(k)}$ as $g_i^{(k)} = x^{(k)\top} Q_i x^{(k)} + 2 q_i^\top x^{(k)} + \gamma_i$.

Step 2: Normalize $g^{(k)}$ as $g^{(k)} \leftarrow g^{(k)} / \|g^{(k)}\|$ and set the step size $h$. Update $\lambda^{(k+1)}$ as
$$\lambda^{(k+1)} = \mathrm{proj}_{\Lambda_s}(\lambda^{(k)} + h g^{(k)}), \tag{29}$$
where $\mathrm{proj}_{\Lambda_s}(a) := \arg\min_{b \in \Lambda_s} \|a - b\|^2$ is the projection onto $\Lambda_s$.

Step 3: If $Q_0 \succ O$ or $\sum_{i=1}^m \lambda_i^{(k+1)} Q_i \succ O$, terminate and return $\lambda^{(k+1)}$.

Step 4: Otherwise, find a minimum positive scalar $\alpha$ such that $\alpha \sum_{i \in C} Q_i + \sum_{i=1}^m \lambda_i^{(k+1)} Q_i \succeq O$ and update $\lambda_i^{(k+1)} \leftarrow \lambda_i^{(k+1)} + \alpha + \delta$ for $i \in C$. After normalizing $\lambda^{(k+1)} \leftarrow \lambda^{(k+1)} / \sum_{i=1}^m \lambda_i^{(k+1)}$, terminate and return $\lambda^{(k+1)}$.

Theorem 1 (i) shows that a subgradient vector of $\psi(\lambda)$ at $\lambda^{(k)}$ is $g_{\lambda^{(k)}} = \mu_{\lambda^{(k)}} g^{(k)}$, where $g_i^{(k)} = x^{(k)\top} Q_i x^{(k)} + 2 q_i^\top x^{(k)} + \gamma_i$, $\forall i$.
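A sketch of Algorithm 2 for the case where Step 4 is not needed (i.e., $Q_0 \succ O$, or the projected point already satisfies $\sum_i \lambda_i Q_i \succ O$); project_simplex is the standard sort-based Euclidean projection onto $\Lambda_s$, in the spirit of the method of Chen and Ye [10] used below.

```python
import numpy as np

def project_simplex(a):
    """Euclidean projection of a onto the simplex {b >= 0, sum(b) = 1}."""
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, a.size + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(a - theta, 0.0)

def update_lambda(Q, q, gamma, lam, x, step):
    """Steps 1-2 of Algorithm 2: normalized subgradient step and projection (29)."""
    g = np.array([x @ Qi @ x + 2 * qi @ x + gi
                  for Qi, qi, gi in zip(Q[1:], q[1:], gamma[1:])])
    g = g / np.linalg.norm(g)
    return project_simplex(lam + step * g)
```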

To find a larger function value of $\psi(\lambda)$ at the $k$th iteration, we use $g^{(k)}$ rather than $g_{\lambda^{(k)}}$ as the subgradient vector of $\psi(\lambda)$, for the following reasons. When $\mu_{\lambda^{(k)}} > 0$, we can use $g^{(k)}$ as a subgradient vector of $\psi(\lambda)$ at $\lambda^{(k)}$. When $\mu_{\lambda^{(k)}} = 0$ (i.e., $\lambda^{(k)} \notin \Lambda_+$), $Q_0$ must be positive semidefinite because of the constraint $Q_0 + \mu \sum_i \lambda_i Q_i \succeq O$, and the optimal value of (17) equals that of (14) ($= \phi(0)$). In this case, $\phi(0)$ is the smallest possible value, but it is not the optimal value of the original problem (1), because an optimal solution $x^*$ of the unconstrained problem (14) is not in the feasible region of (1) by Assumption 1 (c). Therefore, when $\mu_{\lambda^{(k)}} = 0$, the algorithm needs to move $\lambda^{(k)}$ toward $\Lambda_+$; precisely, $\lambda^{(k)}$ is moved in the direction of $g^{(k)}$, although $g_{\lambda^{(k)}}$ is the zero vector. It can easily be confirmed that by moving $\lambda^{(k)}$ sufficiently far in this direction, the left-hand side of the constraint of $(P_k)$ becomes positive at $x^*$, and $x^*$ moves out of the feasible region of $(P_k)$.

The whole algorithm updates $\lambda^{(k)}$ and $x^{(k)}$, $k = 1, 2, \dots$. In order for $(P_k)$ to have an optimal solution $x^{(k)}$, $\lambda^{(k)}$ needs to be set appropriately so as to satisfy $Q_0 + \mu \sum_{i=1}^m \lambda_i^{(k)} Q_i \succeq O$ for some $\mu \ge 0$. If the input $Q_0$ of the given problem satisfies $Q_0 \succeq O$, then $Q_0 + \mu \sum_{i=1}^m \lambda_i^{(k)} Q_i \succeq O$ holds with $\mu = 0$, which makes $(P_k)$ bounded. On the other hand, in the case of $Q_0 \not\succeq O$, the optimal value $\psi(\lambda^{(k)})$ of $(P_k)$ can become $-\infty$, and we cannot find an optimal solution. In such a case, we cannot calculate $g^{(k)}$ and the algorithm stops. To prevent this from happening, we define a subset of $\Lambda_s$ on which the optimal value does not become $-\infty$. When $Q_0 \not\succeq O$, $\Lambda_+$ can be rewritten as

$$\Lambda_+ = \{\lambda \ge 0 \mid e^\top \lambda = 1,\ \exists \mu \ge 0;\ Q_0 + \mu \sum_{i=1}^m \lambda_i Q_i \succeq O\},$$

that is, $\Lambda_+$ is the set of $\lambda$ for which $\psi(\lambda) > -\infty$. However, the above description of $\Lambda_+$ is complicated because of the positive semidefinite constraint. Furthermore, CQ1 requires that either $Q_0$ or $\sum_i \lambda_i Q_i$ be positive definite. Therefore, when $Q_0 \not\succeq O$, we approximate $\Lambda_+$ by

$$\tilde{\Lambda}_+ := \{\lambda \ge 0 \mid e^\top \lambda = 1,\ \sum_{i=1}^m \lambda_i Q_i \succ O\},$$

and keep $\lambda^{(k)}$ in $\tilde{\Lambda}_+$ by Step 4. It can easily be confirmed that $\tilde{\Lambda}_+$ is a convex set. By replacing the feasible region $\Lambda_s$ of (18) by $\tilde{\Lambda}_+$ ($\subseteq \Lambda_+$), the relaxation can become weaker, and the optimal value is no longer necessarily equal to the SDP relaxation value. Thus, when $Q_0 \not\succeq O$, SLR may perform worse than it does when $Q_0 \succeq O$.

Now let us explain how to choose the step size $h$. Gradient methods have various rules for determining an appropriate step size. Simple ones include a constant step size $h = c$ or a diminishing step size (e.g., $h = c/\sqrt{k}$), where $k$ is the number of iterations (see, e.g., [8]). A more complicated one is the backtracking line search (see, e.g., [4, 21]).

Although the backtracking line search has been shown to perform well in many cases, we use a diminishing step size $h = c/\sqrt{k}$ to save the computation time of SLR. The point of SLR is to obtain a relaxed solution quickly, so we choose the simpler rule. In Step 2, we compute $\lambda^{(k+1)}$ from $g^{(k)}$ and $h$ by using (29). We can easily compute the projection onto $\Lambda_s$ by using the method proposed by Chen and Ye [10]. Here, the condition $\exists \mu \ge 0;\ Q_0 + \mu \sum_i \lambda_i Q_i \succeq O$ in $\Lambda_+$ is ignored in the projection operation, but when $Q_0 \succeq O$, the resulting projected point $\lambda^{(k+1)}$ is in $\Lambda_+$. On the other hand, when $Q_0 \not\succeq O$, the vector $\lambda^{(k+1)}$ is not necessarily in $\Lambda_+$ or $\tilde{\Lambda}_+$. In such a case, $\lambda^{(k+1)}$ is modified in Step 4 so as to belong to $\tilde{\Lambda}_+$. Step 4 is a heuristic step; it is needed to keep $\lambda \in \tilde{\Lambda}_+$ when $Q_0 \not\succeq O$.

4.3 Setting the Initial Point

The number of iterations of SLR depends on how we choose the initial point. In this section, we propose two strategies for choosing it. Note that at an optimal solution $\lambda^*$, all elements $\lambda_i^*$ corresponding to convex constraints with $Q_i \succeq O$, $i \in C$, are expected to have positive weights. Hence, we give positive weights only to $\lambda_i$, $i \in C$ (if such indices exist). Here, we assume that $(Q_i, q_i, \gamma_i)$ in each constraint is appropriately scaled by a positive scalar as follows. When the matrix $Q_i$ has positive eigenvalues, $(Q_i, q_i, \gamma_i)$ is scaled so that the minimum positive eigenvalue of $Q_i$ is equal to one. If $Q_i$ has no positive eigenvalues, it is scaled so that the maximum negative eigenvalue is equal to $-1$.

The first approach is equal weights. It gives equal weights to the $\lambda_i^{(0)}$ with $Q_i \succeq O$, or, if there is no $Q_i \succeq O$ (which implies that $Q_0 \succ O$), it gives equal weights to all $\lambda_i^{(0)}$, as follows:

Equal weights rule. If the index set of convex constraints $C$ is nonempty, we define $\lambda^{(0)}$ as
$$\lambda_i^{(0)} = \begin{cases} 1/|C| & \text{if } Q_i \succeq O, \\ 0 & \text{otherwise}. \end{cases} \tag{30}$$
If $C = \emptyset$, we define $\lambda^{(0)}$ as
$$\lambda_i^{(0)} = 1/m, \quad i = 1, \dots, m. \tag{31}$$

The second approach uses the idea of the Schur complement. Note that this rule applies only when there are some $i$ ($\ge 1$) such that $Q_i \succ O$. For a constraint with $Q_i \succ O$, we have
$$x^\top Q_i x + 2 q_i^\top x + \gamma_i \le 0 \iff (x + Q_i^{-1} q_i)^\top Q_i (x + Q_i^{-1} q_i) \le q_i^\top Q_i^{-1} q_i - \gamma_i.$$
The right-hand side $\eta_i := q_i^\top Q_i^{-1} q_i - \gamma_i$ can be considered a measure of the volume of the ellipsoid. From Assumption 1 (a), the ellipsoid has positive volume, and we have $\eta_i > 0$. A numerical experiment shows that constraints having small positive $\eta_i$ tend to become active in SDP relaxation. Therefore, it seems reasonable to give large weights to constraints whose $\eta_i$ ($> 0$) is small. On the other hand, since we can write the constraint as
$$\begin{pmatrix} 1 \\ x \end{pmatrix}^\top \begin{pmatrix} \gamma_i & q_i^\top \\ q_i & Q_i \end{pmatrix} \begin{pmatrix} 1 \\ x \end{pmatrix} \le 0,$$
the value $-\eta_i$ can be viewed as the Schur complement of $\begin{pmatrix} \gamma_i & q_i^\top \\ q_i & Q_i \end{pmatrix}$. It is known that when $Q_i \succ O$, this matrix is positive semidefinite if and only if $\gamma_i - q_i^\top Q_i^{-1} q_i \ge 0$. Here, positive semidefiniteness does not hold, since $\eta_i > 0$, but we consider this value to be an indicator of convexity: we give large weights to the constraints whose Schur complement $-\eta_i$ ($< 0$) is close to zero, that is, whose reciprocal $1/\eta_i$ is large. Here, we consider the following rule:

Schur complement rule. For $i \in C$, calculate $s_i := 1/\sqrt{\eta_i}$. We define $\lambda^{(0)}$ as
$$\lambda_i^{(0)} = \begin{cases} s_i / \sum_{j=1}^m s_j & \text{if } Q_i \succ O, \\ 0 & \text{otherwise}. \end{cases} \tag{32}$$

Although the Schur complement rule has no theoretical guarantee either, numerical experiments show its usefulness, especially when $Q_0 \not\succeq O$.
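The two rules can be sketched as follows (numpy, our own helper names; the scaling of each $(Q_i, q_i, \gamma_i)$ described above is assumed to have been applied beforehand).

```python
import numpy as np

def equal_weights_rule(Q):
    """Initial lambda by (30)-(31)."""
    m = len(Q) - 1
    C = [i for i in range(1, m + 1) if np.linalg.eigvalsh(Q[i]).min() >= 0.0]
    lam = np.zeros(m)
    if not C:
        return np.full(m, 1.0 / m)          # (31): no convex constraint
    for i in C:
        lam[i - 1] = 1.0 / len(C)           # (30): equal weights on C
    return lam

def schur_complement_rule(Q, q, gamma):
    """Initial lambda by (32); applies when some Q_i > O."""
    m = len(Q) - 1
    s = np.zeros(m)
    for i in range(1, m + 1):
        if np.linalg.eigvalsh(Q[i]).min() > 0.0:
            eta = q[i] @ np.linalg.solve(Q[i], q[i]) - gamma[i]  # eta_i > 0
            s[i - 1] = 1.0 / np.sqrt(eta)   # s_i = 1 / sqrt(eta_i)
    return s / s.sum()
```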

4.4 RQT Constraints

We may be able to find a better optimal value than that of the SDP relaxation problem (2) by adding a redundant convex quadratic constraint, constructed similarly to RLT (discussed in Section 2.2), to (1) when there are box constraints, and by applying SLR to the resulting QCQP. Since (9) holds for $1 \le i = j \le n$, we have
$$x_i^2 - (u_i + l_i) x_i + u_i l_i \le 0, \quad i = 1, \dots, n. \tag{33}$$
The summation of (33) over $i = 1, \dots, n$ leads to
$$x^\top x - (u + l)^\top x + u^\top l \le 0. \tag{34}$$
We call this method the reformulation quadraticization technique (RQT). Since (34) is a convex quadratic constraint, it may be effective for making the SLR relaxation tighter.
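As a one-line illustration, (34) becomes the following CVXPY constraint (our own helper, assuming numpy box bounds l and u):

```python
import cvxpy as cp

def rqt_constraint(x, l, u):
    """RQT constraint (34): x^T x - (u + l)^T x + u^T l <= 0."""
    return cp.sum_squares(x) - (u + l) @ x + u @ l <= 0
```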

The numerical experiments in Section 5 show that by adding (34), we can in some cases get a tighter optimal value than SDP relaxation does. There are other ways of making new convex quadratic constraints. Furthermore, even nonconvex constraints (like (8) or (10)) are potentially effective for tightening SLR. However, in this study, we consider only (34), to save computation time.

5 Numerical Experiments

We implemented SLR, SDP relaxation, and Block-SDP (see Section 2.3) and compared their results. In [9], there is no rule for deciding the number of blocks $r$ of Block-SDP. In our experiments, we tried several values of $r$ and chose $r := 0.05n$, which seemed to work well. We used MATLAB (R2014b) for all the numerical experiments. We solved the SDP relaxation and Block-SDP by using SeDuMi 1.3 [32]. To solve the convex quadratic optimization problems (27) and (28) in the SLR algorithm, we used CPLEX. We used a computer with a 2.4 GHz CPU and 16 GB RAM.

5.1 Random m-QCQP

First, we checked the performance of SLR on random m-QCQPs generated in the way that Zheng, Sun, and Li [31] did. In Sections 5.1.1–5.1.4, we consider problems without box constraints and compare SLR (or CQ1) and SDP relaxation. In Section 5.1.5, we consider problems including box constraints and compare SLR, SDP relaxation, and Block-SDP.

5.1.1 Tolerance ϵ and Computation Time

We now investigate the computation time of SLR for given tolerance values $\epsilon$. We randomly generated 30 instances of a 10-QCQP, whose problem sizes were $n = 30$ and $m = 10$. Among the $m = 10$ constraints, five were convex. The objective functions of all instances were strictly convex, i.e., $Q_0 \succ O$. The relationship between the tolerance $\epsilon$ and the computation time is shown in Figure 4. The smaller $\epsilon$ becomes, the longer the computation takes. In this setting, SLR can solve the 10-QCQP faster than SDP relaxation can unless $\epsilon$ is very small. Hence, we set $\epsilon = 10^{-4}$ in what follows.

5.1.2 Effect of Initial Points

We compared the two strategies for choosing the initial points, (30) (or (31)) and (32), and checked the results for $Q_0 \succ O$ and $Q_0 \not\succ O$.

Figure 4: Average Computation Time versus Tolerance ϵ.

We only show results for $Q_0 \not\succ O$, because both initial point rules gave almost the same results when $Q_0 \succ O$. We randomly generated 30 instances for each setting, where $n = 100$ and $m = 10$, and varied the number of convex constraints from $|C| = 1$ to $9$. Note that SLR is not stronger than SDP relaxation, and we do not know the exact optimal value of each random m-QCQP. Therefore, we checked the performance of SLR by comparing its value with the optimal value of SDP relaxation. Here, we used the error ratio defined as
$$\mathrm{Ratio} := \frac{\mathrm{OPT}_{\mathrm{SLR}}}{\mathrm{OPT}_{\mathrm{SDPrelax}}}.$$
This indicator is used in all of the experiments described below. It is greater than or equal to one, since SLR is not stronger than SDP relaxation, and the performance of SLR is said to be good when the ratio is close to one. Figures 5 and 6 plot the number of iterations and the error ratio versus the number of convex constraints. When there is only one convex constraint, an optimal solution $\lambda$ for $\psi(\lambda)$ usually has only one positive element, corresponding to the convex constraint, and all the other elements are zero. In this case, SLR needs only a few iterations. When $Q_0 \not\succ O$, the Schur complement rule works well in terms of computation time and the error ratio as the number of convex constraints increases. This may be because an optimal solution of SDP relaxation has many non-zero components and the equal weights rule cannot represent each weight appropriately. On the basis of the above considerations, we decided to use the Schur complement rule in the remaining experiments.

Figure 5: Average Number of Iterations ($Q_0 \not\succ O$). Figure 6: Average Error Ratio ($Q_0 \not\succ O$).

5.1.3 Effect of the Number of Variables n on the Computation Time and Error

We checked the computation time of SLR by changing the number of variables $n$. Note that SLR works better when $Q_0 \succ O$ than when $Q_0 \not\succ O$, because we have to approximate the feasible region $\Lambda_+$ when $Q_0 \not\succ O$. In this experiment, $n$ was varied from 25 to 5000, and we set $m = 15$, of which 8 constraints were convex. We generated 30 instances when $n \le 250$, ten instances when $250 < n \le 1000$, and one instance when $n > 1000$. In this experiment, we set a larger tolerance $\epsilon$, because large problems take a very long time to solve.

Case 1. $Q_0 \succ O$. SLR performed well when $Q_0 \succ O$ (Figures 7 and 8). The computation time was almost one order of magnitude smaller than that of SDP relaxation, and the error ratio stayed very close to one. There were many instances in which SLR obtained the same optimal value as SDP relaxation. Furthermore, SLR was able to solve problems that SDP relaxation could not solve because it ran out of memory.

Case 2. $Q_0 \not\succ O$. We replaced the objective function of each instance used in Case 1 by a nonconvex quadratic function and conducted the same experiments. Figures 9 and 10 show the results for $Q_0 \not\succ O$. The performance of SLR deteriorated, but it was still faster than SDP relaxation, and the error ratio was about 1.1. Note that we conducted only one experiment for each of $n = 2500, 5000$ to shorten the time of the experiment.

Figure 7: Average Computation Time for n ($Q_0 \succ O$). Figure 8: Average Error Ratio for n ($Q_0 \succ O$).
Figure 9: Average Computation Time for n ($Q_0 \not\succ O$). Figure 10: Average Error Ratio for n ($Q_0 \not\succ O$).

Figure 11: Average Computation Time for m ($Q_0 \succ O$). Figure 12: Average Error Ratio for m ($Q_0 \succ O$).

5.1.4 Effect of the Number of Constraints m on the Computation Time and Error

We checked the computation time of SLR by varying the number of constraints $m$. For $n = 100$, the number of constraints $m$ was varied from 2 to 50. Half of the constraints (i.e., $\mathrm{ceil}(m/2)$) were convex. We generated 30 instances for each setting.

Case 1. $Q_0 \succ O$. Figures 11 and 12 show the results. As a whole, the error ratios stayed very close to one, and the computation time was about one order of magnitude smaller than that of SDP relaxation.

Case 2. $Q_0 \not\succ O$. Figures 13 and 14 show the results. SLR took longer than in Case 1, and the error ratio was larger. SLR performed worse when $Q_0 \not\succ O$ because we approximated the feasible region by $\tilde{\Lambda}_+$.

5.1.5 RQT Constraints

We randomly generated problems with box constraints and added the RQT constraint proposed in Section 4.4 to the problems. $n$ was varied from 30 to 500, and we set $m = 0.3n$, including $\mathrm{ceil}(m/2)$ convex constraints. We generated 30 instances when $n \le 100$ and 10 instances when $n > 100$. We added box constraints $\forall i;\ -1 \le x_i \le 1$ to all the instances. The following results are only for the case in which the objective function is nonconvex; when $Q_0 \succ O$, the RQT constraint did not affect the performance of our method by much. The results are shown in Figures 15 and 16. In Figure 16, the ratio is less than one. This implies that SLR can get better optimal values than SDP relaxation by adding RQT constraints.

Figure 13: Average Computation Time for m ($Q_0 \not\succ O$). Figure 14: Average Error Ratio for m ($Q_0 \not\succ O$).

SLR is thus stronger and faster than SDP relaxation. In this sense, Block-SDP is similar to SLR. The performance of Block-SDP depends on the number of blocks $r$, but in this setting, SLR is faster than Block-SDP, although its error ratio is worse than that of Block-SDP.

5.2 1-QCQP

We checked the performance of CQ1 in solving the 1-QCQP $(P_k)$. We compared CQ1, SDP relaxation, and an eigencomputation-based method on random 1-QCQPs. For a 1-QCQP with $Q_1 \succ O$, Adachi, Iwata, Nakatsukasa and Takeda [1] proposed an accurate and efficient method that solves a generalized eigenvalue problem only once; they call this method GEP. We ran their MATLAB code for solving a 1-QCQP with $Q_1 \succ O$. Note that all of the methods obtained the exact optimal value of the 1-QCQP. The computation time was plotted versus $n$. As described in Section 5.1, we generated the 1-QCQPs in the way [31] did. Figures 17 and 18 are double logarithmic charts of $n$ versus the average computation time over 30 instances. Figure 17 shows that CQ1 is about one order of magnitude faster than SDP relaxation for all $n$. Figure 18 shows that CQ1 is faster than GEP when $n$ is large. CQ1 and SLR are intended to be weaker but faster methods than SDP relaxation, and such methods are useful for large-scale problems.

5.3 Max-Cut Problems

Max-cut problems [24] can be viewed as an application of 1-QCQP. A graph Laplacian matrix $L$ can be obtained from a given undirected and weighted graph $G$.

Figure 15: Average Computation Time for n (additional RQT constraint). Figure 16: Average Error Ratio for n (additional RQT constraint).
Figure 17: Average Computation Time for 1-QCQP in the case of $Q_0 \succ O$. Figure 18: Average Computation Time for 1-QCQP in the case of $Q_1 \succ O$.

For the max-cut value of $G$, we solve the following nonconvex $\{-1, 1\}$ integer program:

$$\min_x \ -x^\top L x \quad \text{s.t.} \quad x_i^2 = 1, \ i = 1, \dots, n. \tag{35}$$

We relax (35) into

$$\min_x \ -x^\top L x \quad \text{s.t.} \quad -1 \le x_i \le 1, \ i = 1, \dots, n,$$

and then apply SDP relaxation and Block-SDP. For CQ1, we further relax the box constraints as follows:

$$\forall i;\ -1 \le x_i \le 1 \ \Longrightarrow \ x^\top x \le n,$$

because CQ1 needs at least one convex constraint. The resulting 1-QCQP is

$$\min_x \ -x^\top L x \quad \text{s.t.} \quad x^\top x \le n. \tag{36}$$

Note that (36) can be regarded as a simple minimum eigenvalue problem: an optimal solution is an eigenvector corresponding to the minimum eigenvalue of $-L$. However, our purpose is to check the computational results, and we use CQ1 for (36). We solved max-cut instances from [24]. Many randomly generated instances are shown in [24], and their optimal values are known. The results are in Table 1. In this table, the error is defined as
$$\mathrm{Error} := \frac{\mathrm{OPT}_{\mathrm{method}} - \mathrm{OPT}}{\mathrm{OPT}},$$
where $\mathrm{OPT}_{\mathrm{method}}$ is the optimal value of each method and $\mathrm{OPT}$ is the exact optimal value. In [24], the names of the instances indicate how they were generated as well as the number of variables. For example, in g05 80, 80 means the number of variables, and g05 means the density of edges and whether the weights of the graph are all positive or include negative values. The details are given in [24], and there are ten instances for each kind of problem. In Table 1, Time(s) means the average time over the ten instances, and the best methods among SDP relaxation, Block-SDP, and CQ1 in terms of either average computation time or average error are listed in bold. Table 1 shows that CQ1 is weaker but faster than SDP relaxation. Block-SDP is weaker and even slower than SDP relaxation in these problem settings.
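Because (36) is a minimum eigenvalue problem for $-L$, it can also be solved directly by one symmetric eigendecomposition; a numpy sketch for reference (our own naming, assuming $\lambda_{\max}(L) \ge 0$, which holds for graph Laplacians):

```python
import numpy as np

def solve_maxcut_relaxation(L):
    """Solve (36): min -x^T L x s.t. x^T x <= n."""
    n = L.shape[0]
    w, V = np.linalg.eigh(L)        # ascending eigenvalues of L
    x = np.sqrt(n) * V[:, -1]       # sqrt(n) times the leading eigenvector of L
    return x, -n * w[-1]            # optimal value is -n * lambda_max(L)
```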

Table 1: Time and Error for Max-cut. (Columns: Error and Time(s) for each of SDP relaxation, Block-SDP, CQ1, and Multiple CQ1; rows: instances of type g05, pm1d, pm1s, pw, and w from [24].)

CQ1 is much faster than SDP relaxation, so we can solve CQ1 many times in the same period of time it takes to solve the SDP relaxation once. Accordingly, we tried to strengthen CQ1 by iterating it with a certain rounding rule as follows. An optimal solution $x^*$ of CQ1 satisfies $x^{*\top} x^* = n$, because the objective function is nonconvex. Consequently, there exists a component of $x^*$ whose absolute value is more than one (otherwise, all the components are $\pm 1$, and $x^*$ is an exact optimal solution of (35)). We fix such a component to $\pm 1$ and solve the smaller problem recursively. If the objective function becomes positive (semi)definite by fixing some of the components and there no longer exists a component whose absolute value is more than one, we set the component which has the maximum absolute value to $1$ or $-1$. We perform this rounding until all the components are $\pm 1$. Therefore, we obtain a feasible solution of the original problem (35) and hence an upper bound on the original optimal value, while SDP relaxation, Block-SDP, and CQ1 find lower bounds. We call this rounding Multiple CQ1 and show the results in the right-most column of Table 1. The results indicate that Multiple CQ1 is still faster than SDP relaxation. Such a fast method is useful when we want to solve a problem repeatedly.
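A sketch of the Multiple CQ1 rounding loop follows (our own simplification: each iteration fixes the free coordinate of largest absolute value to its sign; solve_relaxed stands for an assumed helper that solves the relaxation restricted to the free coordinates, with the linear term induced by the already fixed entries).

```python
import numpy as np

def multiple_cq1(L, solve_relaxed):
    """Round a relaxed solution of (36) to a feasible +/-1 point of (35)."""
    n = L.shape[0]
    x = np.zeros(n)                           # 0 marks a still-free coordinate
    free = np.arange(n)
    while free.size > 0:
        y = solve_relaxed(L, x, free)         # relaxed values on the free coords
        j = int(np.argmax(np.abs(y)))
        x[free[j]] = 1.0 if y[j] >= 0 else -1.0   # fix it by its sign
        free = np.delete(free, j)
    return x                                  # feasible for (35): an upper bound
```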

6 Conclusions

In this paper, we proposed SLR, a new fast convex relaxation for QCQP. SLR is a method for solving the Lagrangian dual problem of a given QCQP. There have been many studies on constructing Lagrangian dual problems for nonconvex problems and reformulating them as semidefinite problems (SDPs). Instead of solving an SDP, our method divides the objective function of the Lagrangian dual problem into two parts and iteratively solves a 1-QCQP. We furthermore transform the 1-QCQP into a convex quadratic problem called CQ1 whose feasible region forms the convex hull of the feasible region of the original 1-QCQP. Hence, we can obtain the exact optimal value of the 1-QCQP by solving CQ1. SDP relaxation can also solve the 1-QCQP exactly, but CQ1 is much faster. Numerical experiments confirmed this advantage of CQ1: CQ1 performed well on randomly generated 1-QCQPs and max-cut problems.

In SLR, we successively solve a 1-QCQP with the Lagrange multiplier $\lambda$ updated by a gradient method. We proved that the objective function $\psi(\lambda)$ is quasi-concave and has the good property that all stationary points in $\Lambda_+$ are global optimal solutions, and thus simple gradient methods work well. SLR is a faster relaxation compared with the interior point method for SDP relaxation for large $n$ and $m$. Furthermore, by adding a new valid RQT constraint, we could obtain an even better optimal value than SDP relaxation for some m-QCQP instances.

Our method can be regarded as a subgradient method applied to a quasi-concave problem induced from the Lagrangian dual of an m-QCQP. To ensure convergence, the quasi-concave problem must satisfy certain conditions (e.g., those in [15]), but unfortunately, it is not easy to check whether our quasi-concave problem satisfies the Hölder condition. In the future, we would like to investigate the global convergence of our algorithm. When the objective function is nonconvex, we need to approximate the feasible region $\tilde{\Lambda}_+$ of the Lagrangian dual problem, and as a result, SLR performs worse than it does for an m-QCQP with a convex objective function. We would also like to improve the performance of SLR for instances having nonconvex objective functions.

References

[1] S. Adachi, S. Iwata, Y. Nakatsukasa and A. Takeda: Solving the trust region subproblem by a generalized eigenvalue problem. SIAM Journal on Optimization, 27: 269–291, 2017.

[2] F. A. Al-Khayyal, C. Larsen and T. V. Voorhis: A relaxation method for nonconvex quadratically constrained quadratic programs. Journal of Global Optimization, 6: 215–230, 1995.

[3] K. M. Anstreicher: Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming. Journal of Global Optimization, 43: 471–484, 2009.

[4] L. Armijo: Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16: 1–3, 1966.

[5] C. Audet, P. Hansen, B. Jaumard and G. Savard: A branch and cut algorithm for nonconvex quadratically constrained quadratic programming. Mathematical Programming, 87, 2000.

[6] L. Bai, J. E. Mitchell and J. S. Pang: Using quadratic convex reformulation to tighten the convex relaxation of a quadratic program with complementarity constraints. Optimization Letters, 8, 2014.

[7] S. Boyd and L. Vandenberghe: Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.

[8] S. Boyd, L. Xiao and A. Mutapcic: Subgradient Methods. Lecture notes of EE392o, Stanford University, Autumn Quarter 2003.

[9] S. Burer, S. Kim and M. Kojima: Faster, but weaker, relaxations for quadratically constrained quadratic programs. Computational Optimization and Applications, 59: 27–45, 2014.

[10] Y. Chen and X. Ye: Projection onto a simplex. arXiv preprint, v2, 2011.

[11] T. Fujie and M. Kojima: Semidefinite programming relaxation for nonconvex quadratic programs. Journal of Global Optimization, 10: 367–380, 1997.

[12] M. X. Goemans: Semidefinite programming in combinatorial optimization. Mathematical Programming, 79: 143–161, 1997.

[13] M. X. Goemans and D. P. Williamson: Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the Association for Computing Machinery, 42: 1115–1145, 1995.

[14] Y. Hu, C. Yu and C. Li: Stochastic subgradient method for quasiconvex optimization problems. Journal of Nonlinear and Convex Analysis, 17, 2016.

[15] Y. Hu, X. Yang and C. Sim: Inexact subgradient methods for quasiconvex optimization problems. European Journal of Operational Research, 240: 315–327, 2015.

[16] R. Jiang and D. Li: Convex relaxations with second order cone constraints for nonconvex quadratically constrained quadratic programming. arXiv preprint, v1.

[17] S. Kim and M. Kojima: Second order cone programming relaxation of nonconvex quadratic optimization problems. Optimization Methods and Software, 15: 201–224, 2001.

[18] C. Lu, S. Fang, Q. Jin, Z. Wang and W. Xing: KKT solution and conic relaxation for solving quadratically constrained quadratic programming problems. SIAM Journal on Optimization, 21, 2011.

[19] Z. Q. Luo, W. K. Ma, A. M. C. So, Y. Ye and S. Zhang: Semidefinite relaxation of quadratic optimization problems. IEEE Signal Processing Magazine, 27: 20–34, 2010.

[20] J. J. Moré and D. C. Sorensen: Computing a trust region step. SIAM Journal on Scientific and Statistical Computing, 4: 553–572, 1983.

[21] J. Nocedal and S. J. Wright: Numerical Optimization. Springer-Verlag, Berlin, 2006.

[22] I. Novak: Dual bounds and optimality cuts for all-quadratic programs with convex constraints. Journal of Global Optimization, 18: 337–356, 2000.

[23] P. M. Pardalos and S. A. Vavasis: Quadratic programming with one negative eigenvalue is NP-hard. Journal of Global Optimization, 1: 15–22, 1991.

[24] F. Rendl, G. Rinaldi and A. Wiegele: Solving max-cut to optimality by intersecting semidefinite and polyhedral relaxations. Mathematical Programming, Series A, 121: 307–335, 2010.

[25] H. D. Sherali and B. M. P. Fraticelli: Enhancing RLT relaxations via a new class of semidefinite cuts. Journal of Global Optimization, 22: 233–261, 2002.

[26] J. F. Sturm and S. Zhang: On cones of nonnegative quadratic functions. Mathematics of Operations Research, 28: 246–267, 2003.

[27] H. Tuy: On solving nonconvex optimization problems by reducing the duality gap. Journal of Global Optimization, 32, 2005.

[28] L. Vandenberghe and S. Boyd: Semidefinite programming. SIAM Review, 38: 49–95, 1996.

[29] T. V. Voorhis: A global optimization algorithm using Lagrangian underestimates and the interval Newton method. Journal of Global Optimization, 24, 2002.

[30] X. J. Zheng, X. L. Sun and D. Li: Convex relaxations for nonconvex quadratically constrained quadratic programming: matrix cone decomposition and polyhedral approximation. Mathematical Programming, 129: 301–329, 2011.

[31] X. J. Zheng, X. L. Sun and D. Li: Nonconvex quadratically constrained quadratic programming: best D.C. decompositions and their SDP representations. Journal of Global Optimization, 50: 695–712, 2011.

[32] SeDuMi: optimization over symmetric cones. http://sedumi.ie.lehigh.edu/.

A Proofs of Theorems

Proof of Theorem 1

Proof. The vector $g_\lambda$ of (21) can be found from

$$\psi(\lambda) = \phi(\mu_\lambda \lambda) = x_\lambda^\top \Big( Q_0 + \mu_\lambda \sum_{i=1}^m \lambda_i Q_i \Big) x_\lambda + 2 \Big( q_0 + \mu_\lambda \sum_{i=1}^m \lambda_i q_i \Big)^\top x_\lambda + \gamma_0 + \mu_\lambda \sum_{i=1}^m \lambda_i \gamma_i.$$

We prove that the vector $g_\lambda$ is in the quasi-subdifferential $\partial\psi$ defined by (22). Note that in [14, 15], the quasi-subdifferential is defined for a quasi-convex function, but $\psi$ is quasi-concave; therefore, (22) is modified from the original definition of $\partial\psi$ for a quasi-convex function. We further rewrite (22) as

$$\partial\psi(\lambda) = \{ s \mid \psi(\nu) \le \psi(\lambda),\ \forall \nu;\ s^\top (\nu - \lambda) < 0 \} = \{ s \mid \psi(\nu) \le \psi(\lambda),\ \forall \nu;\ s^\top \nu < s^\top \lambda \}. \tag{37}$$

Now we show that $g_\lambda$ is in (37). When $\mu_\lambda = 0$, $g_\lambda = 0$ satisfies (22), and $g_\lambda \in \partial\psi(\lambda)$ holds. When $\mu_\lambda > 0$, it is sufficient to consider the vector

$$\hat{g}_\lambda := \begin{pmatrix} x_\lambda^\top Q_1 x_\lambda + 2 q_1^\top x_\lambda + \gamma_1 \\ \vdots \\ x_\lambda^\top Q_m x_\lambda + 2 q_m^\top x_\lambda + \gamma_m \end{pmatrix}$$

instead of $g_\lambda$, because $\partial\psi(\lambda)$ forms a cone. Then, since $x_\lambda$ is feasible for $\lambda$, we have

$$x_\lambda^\top \Big( \sum_{i=1}^m \lambda_i Q_i \Big) x_\lambda + 2 \Big( \sum_{i=1}^m \lambda_i q_i \Big)^\top x_\lambda + \sum_{i=1}^m \lambda_i \gamma_i \le 0. \tag{38}$$


More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

On the Global Linear Convergence of the ADMM with Multi-Block Variables

On the Global Linear Convergence of the ADMM with Multi-Block Variables On the Global Lnear Convergence of the ADMM wth Mult-Block Varables Tany Ln Shqan Ma Shuzhong Zhang May 31, 01 Abstract The alternatng drecton method of multplers ADMM has been wdely used for solvng structured

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition Sngle-Faclty Schedulng over Long Tme Horzons by Logc-based Benders Decomposton Elvn Coban and J. N. Hooker Tepper School of Busness, Carnege Mellon Unversty ecoban@andrew.cmu.edu, john@hooker.tepper.cmu.edu

More information

Vector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.

Vector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence. Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

A Local Variational Problem of Second Order for a Class of Optimal Control Problems with Nonsmooth Objective Function

A Local Variational Problem of Second Order for a Class of Optimal Control Problems with Nonsmooth Objective Function A Local Varatonal Problem of Second Order for a Class of Optmal Control Problems wth Nonsmooth Objectve Functon Alexander P. Afanasev Insttute for Informaton Transmsson Problems, Russan Academy of Scences,

More information

Convex Optimization. Optimality conditions. (EE227BT: UC Berkeley) Lecture 9 (Optimality; Conic duality) 9/25/14. Laurent El Ghaoui.

Convex Optimization. Optimality conditions. (EE227BT: UC Berkeley) Lecture 9 (Optimality; Conic duality) 9/25/14. Laurent El Ghaoui. Convex Optmzaton (EE227BT: UC Berkeley) Lecture 9 (Optmalty; Conc dualty) 9/25/14 Laurent El Ghaou Organsatonal Mdterm: 10/7/14 (1.5 hours, n class, double-sded cheat sheet allowed) Project: Intal proposal

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

The Study of Teaching-learning-based Optimization Algorithm

The Study of Teaching-learning-based Optimization Algorithm Advanced Scence and Technology Letters Vol. (AST 06), pp.05- http://dx.do.org/0.57/astl.06. The Study of Teachng-learnng-based Optmzaton Algorthm u Sun, Yan fu, Lele Kong, Haolang Q,, Helongang Insttute

More information

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES BÂRZĂ, Slvu Faculty of Mathematcs-Informatcs Spru Haret Unversty barza_slvu@yahoo.com Abstract Ths paper wants to contnue

More information

Lecture 20: November 7

Lecture 20: November 7 0-725/36-725: Convex Optmzaton Fall 205 Lecturer: Ryan Tbshran Lecture 20: November 7 Scrbes: Varsha Chnnaobreddy, Joon Sk Km, Lngyao Zhang Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer:

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

On a direct solver for linear least squares problems

On a direct solver for linear least squares problems ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

Maximal Margin Classifier

Maximal Margin Classifier CS81B/Stat41B: Advanced Topcs n Learnng & Decson Makng Mamal Margn Classfer Lecturer: Mchael Jordan Scrbes: Jana van Greunen Corrected verson - /1/004 1 References/Recommended Readng 1.1 Webstes www.kernel-machnes.org

More information

ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EQUATION

ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EQUATION Advanced Mathematcal Models & Applcatons Vol.3, No.3, 2018, pp.215-222 ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EUATION

More information

A SEPARABLE APPROXIMATION DYNAMIC PROGRAMMING ALGORITHM FOR ECONOMIC DISPATCH WITH TRANSMISSION LOSSES. Pierre HANSEN, Nenad MLADENOVI]

A SEPARABLE APPROXIMATION DYNAMIC PROGRAMMING ALGORITHM FOR ECONOMIC DISPATCH WITH TRANSMISSION LOSSES. Pierre HANSEN, Nenad MLADENOVI] Yugoslav Journal of Operatons Research (00) umber 57-66 A SEPARABLE APPROXIMATIO DYAMIC PROGRAMMIG ALGORITHM FOR ECOOMIC DISPATCH WITH TRASMISSIO LOSSES Perre HASE enad MLADEOVI] GERAD and Ecole des Hautes

More information

SELECTED SOLUTIONS, SECTION (Weak duality) Prove that the primal and dual values p and d defined by equations (4.3.2) and (4.3.3) satisfy p d.

SELECTED SOLUTIONS, SECTION (Weak duality) Prove that the primal and dual values p and d defined by equations (4.3.2) and (4.3.3) satisfy p d. SELECTED SOLUTIONS, SECTION 4.3 1. Weak dualty Prove that the prmal and dual values p and d defned by equatons 4.3. and 4.3.3 satsfy p d. We consder an optmzaton problem of the form The Lagrangan for ths

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

An Admission Control Algorithm in Cloud Computing Systems

An Admission Control Algorithm in Cloud Computing Systems An Admsson Control Algorthm n Cloud Computng Systems Authors: Frank Yeong-Sung Ln Department of Informaton Management Natonal Tawan Unversty Tape, Tawan, R.O.C. ysln@m.ntu.edu.tw Yngje Lan Management Scence

More information

10-801: Advanced Optimization and Randomized Methods Lecture 2: Convex functions (Jan 15, 2014)

10-801: Advanced Optimization and Randomized Methods Lecture 2: Convex functions (Jan 15, 2014) 0-80: Advanced Optmzaton and Randomzed Methods Lecture : Convex functons (Jan 5, 04) Lecturer: Suvrt Sra Addr: Carnege Mellon Unversty, Sprng 04 Scrbes: Avnava Dubey, Ahmed Hefny Dsclamer: These notes

More information

Computing Correlated Equilibria in Multi-Player Games

Computing Correlated Equilibria in Multi-Player Games Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,

More information

NP-Completeness : Proofs

NP-Completeness : Proofs NP-Completeness : Proofs Proof Methods A method to show a decson problem Π NP-complete s as follows. (1) Show Π NP. (2) Choose an NP-complete problem Π. (3) Show Π Π. A method to show an optmzaton problem

More information

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

Some Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS)

Some Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS) Some Comments on Acceleratng Convergence of Iteratve Sequences Usng Drect Inverson of the Iteratve Subspace (DIIS) C. Davd Sherrll School of Chemstry and Bochemstry Georga Insttute of Technology May 1998

More information

6) Derivatives, gradients and Hessian matrices

6) Derivatives, gradients and Hessian matrices 30C00300 Mathematcal Methods for Economsts (6 cr) 6) Dervatves, gradents and Hessan matrces Smon & Blume chapters: 14, 15 Sldes by: Tmo Kuosmanen 1 Outlne Defnton of dervatve functon Dervatve notatons

More information

Foundations of Arithmetic

Foundations of Arithmetic Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an

More information

Large-scale packing of ellipsoids

Large-scale packing of ellipsoids Large-scale packng of ellpsods E. G. Brgn R. D. Lobato September 7, 017 Abstract The problem of packng ellpsods n the n-dmensonal space s consdered n the present work. The proposed approach combnes heurstc

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Support Vector Machines CS434

Support Vector Machines CS434 Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? + + + + + + + + + Intuton of Margn Consder ponts

More information

On the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros

On the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros Appled Mathematcal Scences, Vol. 5, 2011, no. 75, 3693-3706 On the Interval Zoro Symmetrc Sngle-step Procedure for Smultaneous Fndng of Polynomal Zeros S. F. M. Rusl, M. Mons, M. A. Hassan and W. J. Leong

More information

Lecture 17: Lee-Sidford Barrier

Lecture 17: Lee-Sidford Barrier CSE 599: Interplay between Convex Optmzaton and Geometry Wnter 2018 Lecturer: Yn Tat Lee Lecture 17: Lee-Sdford Barrer Dsclamer: Please tell me any mstake you notced. In ths lecture, we talk about the

More information

Lecture 10 Support Vector Machines. Oct

Lecture 10 Support Vector Machines. Oct Lecture 10 Support Vector Machnes Oct - 20-2008 Lnear Separators Whch of the lnear separators s optmal? Concept of Margn Recall that n Perceptron, we learned that the convergence rate of the Perceptron

More information

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars

More information

1 Convex Optimization

1 Convex Optimization Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,

More information

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence) /24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes

More information

Interactive Bi-Level Multi-Objective Integer. Non-linear Programming Problem

Interactive Bi-Level Multi-Objective Integer. Non-linear Programming Problem Appled Mathematcal Scences Vol 5 0 no 65 3 33 Interactve B-Level Mult-Objectve Integer Non-lnear Programmng Problem O E Emam Department of Informaton Systems aculty of Computer Scence and nformaton Helwan

More information

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k.

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k. THE CELLULAR METHOD In ths lecture, we ntroduce the cellular method as an approach to ncdence geometry theorems lke the Szemeréd-Trotter theorem. The method was ntroduced n the paper Combnatoral complexty

More information

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,

More information

Annexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances

Annexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances ec Annexes Ths Annex frst llustrates a cycle-based move n the dynamc-block generaton tabu search. It then dsplays the characterstcs of the nstance sets, followed by detaled results of the parametercalbraton

More information

1 GSW Iterative Techniques for y = Ax

1 GSW Iterative Techniques for y = Ax 1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

Support Vector Machines. Vibhav Gogate The University of Texas at dallas

Support Vector Machines. Vibhav Gogate The University of Texas at dallas Support Vector Machnes Vbhav Gogate he Unversty of exas at dallas What We have Learned So Far? 1. Decson rees. Naïve Bayes 3. Lnear Regresson 4. Logstc Regresson 5. Perceptron 6. Neural networks 7. K-Nearest

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

Lecture 11. minimize. c j x j. j=1. 1 x j 0 j. +, b R m + and c R n +

Lecture 11. minimize. c j x j. j=1. 1 x j 0 j. +, b R m + and c R n + Topcs n Theoretcal Computer Scence May 4, 2015 Lecturer: Ola Svensson Lecture 11 Scrbes: Vncent Eggerlng, Smon Rodrguez 1 Introducton In the last lecture we covered the ellpsod method and ts applcaton

More information

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

Communication Complexity 16:198: February Lecture 4. x ij y ij

Communication Complexity 16:198: February Lecture 4. x ij y ij Communcaton Complexty 16:198:671 09 February 2010 Lecture 4 Lecturer: Troy Lee Scrbe: Rajat Mttal 1 Homework problem : Trbes We wll solve the thrd queston n the homework. The goal s to show that the nondetermnstc

More information

Chapter Newton s Method

Chapter Newton s Method Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve

More information

NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS

NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS IJRRAS 8 (3 September 011 www.arpapress.com/volumes/vol8issue3/ijrras_8_3_08.pdf NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS H.O. Bakodah Dept. of Mathematc

More information

Solving the Quadratic Eigenvalue Complementarity Problem by DC Programming

Solving the Quadratic Eigenvalue Complementarity Problem by DC Programming Solvng the Quadratc Egenvalue Complementarty Problem by DC Programmng Y-Shua Nu 1, Joaqum Júdce, Le Th Hoa An 3 and Pham Dnh Tao 4 1 Shangha JaoTong Unversty, Maths Departement and SJTU-Parstech, Chna

More information

The lower and upper bounds on Perron root of nonnegative irreducible matrices

The lower and upper bounds on Perron root of nonnegative irreducible matrices Journal of Computatonal Appled Mathematcs 217 (2008) 259 267 wwwelsevercom/locate/cam The lower upper bounds on Perron root of nonnegatve rreducble matrces Guang-Xn Huang a,, Feng Yn b,keguo a a College

More information

FREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced,

FREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced, FREQUENCY DISTRIBUTIONS Page 1 of 6 I. Introducton 1. The dea of a frequency dstrbuton for sets of observatons wll be ntroduced, together wth some of the mechancs for constructng dstrbutons of data. Then

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

An Interactive Optimisation Tool for Allocation Problems

An Interactive Optimisation Tool for Allocation Problems An Interactve Optmsaton ool for Allocaton Problems Fredr Bonäs, Joam Westerlund and apo Westerlund Process Desgn Laboratory, Faculty of echnology, Åbo Aadem Unversty, uru 20500, Fnland hs paper presents

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information