Iteration-complexity of a Jacobi-type non-Euclidean ADMM for multi-block linearly constrained nonconvex programs


Jefferson G. Melo    Renato D.C. Monteiro

May 13, 2017

Abstract

This paper establishes the iteration-complexity of a Jacobi-type non-Euclidean proximal alternating direction method of multipliers (ADMM) for solving multi-block linearly constrained nonconvex programs. The subproblems of this ADMM variant can be solved in parallel and hence the method has great potential to solve large-scale multi-block linearly constrained nonconvex programs. Moreover, our analysis allows the Lagrange multiplier to be updated with a relaxation parameter in the interval $(0,2)$.

2000 Mathematics Subject Classification: 47J22, 49M27, 90C25, 90C26, 90C30, 90C60, 65K10.

Key words: Jacobi multiblock ADMM, nonconvex program, iteration-complexity, first-order methods, non-Euclidean distances.

1  Introduction

This paper considers the following linearly constrained optimization problem

$$\min\Big\{\sum_{i=1}^p f_i(x_i) \;:\; \sum_{i=1}^p A_i x_i = b,\ x_i\in\mathbb{R}^{n_i},\ i=1,\dots,p\Big\} \tag{1}$$

where $f_i:\mathbb{R}^{n_i}\to(-\infty,\infty]$, $i=1,\dots,p$, are proper lower semicontinuous functions, $A_i\in\mathbb{R}^{d\times n_i}$, $i=1,\dots,p$, and $b\in\mathbb{R}^d$. Optimization problems such as (1) appear in many important applications such as distributed matrix factorization, distributed clustering, sparse zero-variance discriminant analysis, tensor decomposition, matrix completion, and asset allocation (see, e.g., [1, 6, 24, 39, 40, 42]). Recently, some variants of the alternating direction method of multipliers (ADMM) have been successfully applied to solve some instances of the previous problem despite the lack of convexity.

Instituto de Matemática e Estatística, Universidade Federal de Goiás, Campus II, Caixa Postal 131, Goiânia-GO, Brazil. E-mail: jefferson@ufg.br. The work of this author was partially supported by CNPq grants.

School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA. E-mail: monteiro@isye.gatech.edu. The work of this author was partially supported by an NSF CMMI grant.

In this paper we analyze the Jacobi-type proximal ADMM for solving (1), which recursively computes a sequence $\{(x_1^k,\dots,x_p^k,\lambda^k)\}$ as

$$x_i^k = \underset{x_i}{\operatorname{argmin}}\Big\{L_\beta(x_1^{k-1},\dots,x_{i-1}^{k-1},x_i,x_{i+1}^{k-1},\dots,x_p^{k-1},\lambda^{k-1}) + dw_i(x_i;x_i^{k-1})\Big\},\quad i=1,\dots,p,$$
$$\lambda^k = \lambda^{k-1} - \theta\beta\Big(\sum_{i=1}^p A_i x_i^k - b\Big), \tag{2}$$

where $\beta>0$ is a penalty parameter, $\theta>0$ is a relaxation parameter, $dw_i$ is a Bregman distance, and

$$L_\beta(x_1,\dots,x_p,\lambda) := \sum_{i=1}^p f_i(x_i) - \Big\langle\lambda,\sum_{i=1}^p A_i x_i - b\Big\rangle + \frac{\beta}{2}\Big\|\sum_{i=1}^p A_i x_i - b\Big\|^2 \tag{3}$$

is the augmented Lagrangian function for problem (1). An important feature of this ADMM variant is that the subproblems can be solved in parallel, and hence the method has great potential to solve large-scale multi-block linearly constrained nonconvex programs. Under the assumption that $A_p$ is full row rank and $f_p:\mathbb{R}^{n_p}\to\mathbb{R}$ is a differentiable function whose gradient is Lipschitz continuous, we establish an $\mathcal{O}(\rho^{-2})$ iteration-complexity bound for the Jacobi-type ADMM to obtain $(x_1,\dots,x_p,\lambda,r_1,\dots,r_{p-1})$ satisfying

$$r_i \in \partial f_i(x_i) - A_i^*\lambda,\quad i=1,\dots,p-1, \tag{4}$$
$$\max\Big\{\Big\|\sum_{i=1}^p A_i x_i - b\Big\|,\ \|r_1\|,\dots,\|r_{p-1}\|,\ \|\nabla f_p(x_p) - A_p^*\lambda\|\Big\} \le \rho, \tag{5}$$

where $\partial f_i$ denotes the limiting subdifferential (see for example [32, 34]).

We briefly discuss in this paragraph the development of ADMM in the convex setting. The standard ADMM (i.e., where $p=2$, $w_i\equiv 0$ for $i=1,2$, and $x_2^k$ is obtained as above but with $x_1^{k-1}$ replaced by $x_1^k$) was introduced in [7, 8] and its complexity analysis was first carried out in [31]. Since then several papers have obtained iteration-complexity results for various ADMM variants (see for example [2, 3, 5, 9, 11, 12, 14, 16, 18, 25, 33]). Multiblock ADMM variants have also been extensively studied (see for example [4, 15, 17, 19, 23, 26, 27, 28, 37]). In particular, papers [4, 17, 23, 37] study the convergence and/or complexity of Jacobi-type ADMM variants. Recently, there has been a lot of interest in the study of ADMM variants for nonconvex problems (see, e.g., [10, 13, 20, 21, 22, 29, 30, 35, 36, 38, 41]).
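For concreteness, the Jacobi-type recursion (2) can be sketched in code on a toy convex instance of (1) with quadratic blocks $f_i(x_i)=\frac12\|x_i-c_i\|^2$ and Euclidean proximal terms $dw_i(x;x')=\frac{m}{2}\|x-x'\|^2$, so that every subproblem has a closed-form solution. This is only an illustrative sketch: the problem data and the parameter values (`beta`, `theta`, `m`) are arbitrary choices, not prescriptions from the paper's analysis.

```python
import numpy as np

# Toy instance of problem (1): p quadratic blocks f_i(x_i) = 0.5*||x_i - c_i||^2
# coupled by sum_i A_i x_i = b.  Convex data is used here only so the run is easy
# to sanity-check; the paper's analysis covers nonconvex f_i as well.
rng = np.random.default_rng(0)
p, d, n = 3, 4, 3                      # number of blocks, constraint rows, block size
A = [rng.standard_normal((d, n)) for _ in range(p)]
c = [rng.standard_normal(n) for _ in range(p)]
b = rng.standard_normal(d)

beta, theta, m = 1.0, 1.0, 40.0        # penalty, relaxation, proximal weight (illustrative)
x = [np.zeros(n) for _ in range(p)]
lam = np.zeros(d)

def residual(xs):
    """Feasibility residual sum_i A_i x_i - b of the linear constraint in (1)."""
    return sum(A[i] @ xs[i] for i in range(p)) - b

res_hist = [np.linalg.norm(residual(x))]
for _ in range(5000):
    x_old = [xi.copy() for xi in x]
    # Jacobi sweep: each subproblem reads only x_old, so the p solves are independent.
    for i in range(p):
        r_others = sum(A[j] @ x_old[j] for j in range(p) if j != i) - b
        # argmin_x 0.5*||x - c_i||^2 - <lam, A_i x> + (beta/2)*||A_i x + r_others||^2
        #          + (m/2)*||x - x_old_i||^2  has the closed form solved below.
        H = (1.0 + m) * np.eye(n) + beta * A[i].T @ A[i]
        g = c[i] + A[i].T @ lam - beta * A[i].T @ r_others + m * x_old[i]
        x[i] = np.linalg.solve(H, g)
    lam = lam - theta * beta * residual(x)     # relaxed multiplier update, theta in (0, 2)
    res_hist.append(np.linalg.norm(residual(x)))
```

Since each block update reads only the previous iterate, the loop over `i` could be distributed across workers, which is the parallelism feature emphasized above.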
Papers [13, 22, 35, 36, 38, 41] establish convergence of the generated sequence to a stationary point of (1) under conditions which guarantee that a certain potential function associated with the augmented Lagrangian (3) satisfies the Kurdyka-Łojasiewicz property. However, these papers do not study the iteration-complexity of the proximal ADMM, although their theoretical analyses are generally half-way towards accomplishing such a goal. Paper [20] analyzes the convergence of variants of the ADMM for solving nonconvex consensus and sharing problems, and establishes the iteration-complexity of ADMM for the consensus problem. Paper [21] studies the iteration-complexity of two linearized variants of the multiblock proximal ADMM applied to a more general problem than (1), where a coupling term is also present in its objective function. Paper [10] studies the iteration-complexity of a proximal ADMM for the two-block optimization problem, i.e., $p=2$, where the relaxation parameter $\theta$ is arbitrarily chosen in the interval $(0,2)$, contrary to the previous related literature where this parameter is taken to be one or at most $(\sqrt{5}+1)/2$. Paper [30] analyzes the iteration-complexity of a multi-block proximal ADMM via a general linearization scheme. Finally, while the authors were in the process of finalizing this paper, they learned of the recent paper [29], which studies the asymptotic convergence of a Jacobi-type linearized ADMM for solving nonconvex problems. The latter paper, though, does not deal with the issue of iteration-complexity and considers the case $\theta=1$ only.

Our paper is organized as follows. Subsection 1.1 contains some notation and basic results used in the paper. Section 2 describes our assumptions and contains two subsections. Subsection 2.1 introduces the concept of distance generating functions and their corresponding Bregman distances considered in this paper, and formally states the non-Euclidean Jacobi-type ADMM. Subsection 2.2 is devoted to the convergence rate analysis of the latter method. Our main convergence rate result is in this subsection (Theorem 2.11). The appendix contains proofs of some results stated in the paper.

1.1  Notation and basic results

The domain of a function $f:\mathbb{R}^s\to(-\infty,\infty]$ is the set $\operatorname{dom} f := \{x\in\mathbb{R}^s : f(x) < +\infty\}$. Moreover, $f$ is said to be proper if $f(x) < \infty$ for some $x\in\mathbb{R}^s$.

Lemma 1.1. Let $S\in\mathbb{R}^{n\times p}$ be a nonzero matrix and let $\sigma^+(S)$ denote the smallest positive eigenvalue of $SS^*$. Then, for every $u\in\mathbb{R}^p$, there holds

$$\|P_S(u)\|^2 \le \frac{1}{\sigma^+(S)}\|Su\|^2,$$

where $P_S$ denotes the orthogonal projection onto $\operatorname{Im}(S^*)$.

We next recall some definitions and results of subdifferential calculus [32, 34].

Definition 1.2. Let $h:\mathbb{R}^s\to(-\infty,\infty]$ be a proper lower semicontinuous function. The Fréchet subdifferential of $h$ at $x\in\operatorname{dom} h$, denoted by $\hat\partial h(x)$, is the set of all elements $u\in\mathbb{R}^s$ satisfying

$$\liminf_{\substack{y\to x,\ y\neq x}} \frac{h(y)-h(x)-\langle u, y-x\rangle}{\|y-x\|} \ge 0.$$

When $x\notin\operatorname{dom} h$, we set $\hat\partial h(x)=\emptyset$. The limiting subdifferential of $h$ at $x\in\operatorname{dom} h$, denoted by $\partial h(x)$, is defined as

$$\partial h(x) = \big\{u\in\mathbb{R}^s : \exists\, x^k\to x,\ h(x^k)\to h(x),\ u^k\in\hat\partial h(x^k),\ \text{with } u^k\to u\big\}.$$

A critical (or stationary) point of $h$ is a point $x\in\operatorname{dom} h$ satisfying $0\in\partial h(x)$.

The following result presents some properties of the limiting subdifferential.

Proposition 1.3. Let $h:\mathbb{R}^s\to(-\infty,\infty]$ be a proper lower semicontinuous function.
(a) If $x\in\mathbb{R}^s$ is a local minimizer of $h$, then $0\in\partial h(x)$;

(b) If $g:\mathbb{R}^s\to\mathbb{R}$ is a continuously differentiable function, then $\partial(h+g)(x) = \partial h(x) + \nabla g(x)$.
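The bound in Lemma 1.1 can be sanity-checked numerically. The sketch below (illustrative dimensions and random data) builds the orthogonal projection onto $\operatorname{Im}(S^*)$ from the pseudoinverse and compares both sides of the inequality on random vectors.

```python
import numpy as np

# Check Lemma 1.1: for a nonzero S in R^{n x p} and every u in R^p,
#   ||P_S(u)||^2 <= ||S u||^2 / sigma_plus(S),
# where P_S projects onto Im(S^*) and sigma_plus(S) is the smallest positive
# eigenvalue of S S^* (equivalently, the smallest positive eigenvalue of S^* S).
rng = np.random.default_rng(1)
n, p = 4, 6
S = rng.standard_normal((n, p))

P = np.linalg.pinv(S) @ S                        # projection onto the row space Im(S^*)
eigs = np.linalg.eigvalsh(S @ S.T)
sigma_plus = min(e for e in eigs if e > 1e-10)   # smallest positive eigenvalue

ok = True
for _ in range(100):
    u = rng.standard_normal(p)
    lhs = np.linalg.norm(P @ u) ** 2
    rhs = np.linalg.norm(S @ u) ** 2 / sigma_plus
    ok = ok and (lhs <= rhs + 1e-9)
```

The inequality holds because $Su = S\,P_S(u)$ (the kernel of $S$ is the orthogonal complement of $\operatorname{Im}(S^*)$) and $\|Sv\|^2 \ge \sigma^+(S)\|v\|^2$ for every $v\in\operatorname{Im}(S^*)$.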

2  Jacobi-type non-Euclidean proximal ADMM and its convergence rate

We start by recalling the definition of critical points of (1).

Definition 2.1. An element $(x_1,\dots,x_p,\lambda)\in\mathbb{R}^{n_1}\times\cdots\times\mathbb{R}^{n_p}\times\mathbb{R}^d$ is a critical point of problem (1) if

$$0\in\partial f_i(x_i) - A_i^*\lambda,\quad i=1,\dots,p,\qquad \sum_{i=1}^p A_i x_i = b.$$

Under some mild conditions, it can be shown that if $(x_1,\dots,x_p)$ is a global minimum of (1), then there exists $\lambda$ such that $(x_1,\dots,x_p,\lambda)$ is a critical point of (1). The augmented Lagrangian associated with problem (1) and with penalty parameter $\beta>0$ is defined as

$$L_\beta(x_1,\dots,x_p,\lambda) := \sum_{i=1}^p f_i(x_i) - \Big\langle\lambda,\sum_{i=1}^p A_i x_i - b\Big\rangle + \frac{\beta}{2}\Big\|\sum_{i=1}^p A_i x_i - b\Big\|^2. \tag{6}$$

We assume that problem (1) satisfies the following set of conditions:

(A0) the functions $f_i$, $i=1,\dots,p-1$, are proper lower semicontinuous;

(A1) $A_p\neq 0$ and $\operatorname{Im}(A_p) \supseteq \{b\}\cup\operatorname{Im}(A_1)\cup\cdots\cup\operatorname{Im}(A_{p-1})$;

(A2) $f_p:\mathbb{R}^{n_p}\to\mathbb{R}$ is differentiable with $\nabla f_p$ being $L_p$-Lipschitz continuous;

(A3) there exists $\bar\beta\ge0$ such that $v_{\bar\beta} > -\infty$, where

$$v_\beta := \inf_{x_1,\dots,x_p}\Big\{\sum_{i=1}^p f_i(x_i) + \frac{\beta}{2}\Big\|\sum_{i=1}^p A_i x_i - b\Big\|^2\Big\},\quad \beta\in\mathbb{R}.$$

2.1  The non-Euclidean proximal Jacobi ADMM

In this subsection, we introduce a class of distance generating functions and their corresponding Bregman distances which is suitable for our study. We also formally describe the non-Euclidean proximal Jacobi ADMM for solving problem (1).

Definition 2.2. For a given set $Z\subseteq\mathbb{R}^s$ and scalars $M\ge m$, let $\mathcal{D}_Z(m,M)$ denote the class of real-valued functions $w$ which are differentiable on $Z$ and satisfy

$$w(z') - w(z) - \langle\nabla w(z), z'-z\rangle \ge \frac{m}{2}\|z'-z\|^2\quad \forall\, z, z'\in Z, \tag{7}$$
$$\|\nabla w(z') - \nabla w(z)\| \le M\|z'-z\|\quad \forall\, z, z'\in Z. \tag{8}$$

A function $w\in\mathcal{D}_Z(m,M)$ with $m\ge0$ is referred to as a distance generating function, and its associated Bregman distance $dw:\mathbb{R}^s\times Z\to\mathbb{R}$ is defined as

$$dw(z';z) := w(z') - w(z) - \langle\nabla w(z), z'-z\rangle\quad \forall\,(z',z)\in\mathbb{R}^s\times Z. \tag{9}$$
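As a concrete illustration of Definition 2.2, take the quadratic distance generating function $w(z)=\frac12 z^\top Qz$ with $Q$ symmetric positive definite; then $dw(z';z)=\frac12(z'-z)^\top Q(z'-z)$, and (7)-(8) hold with $m=\lambda_{\min}(Q)$ and $M=\lambda_{\max}(Q)$. The snippet below (illustrative data) checks the resulting two-sided bound $\frac{m}{2}\|z'-z\|^2 \le dw(z';z) \le \frac{M}{2}\|z'-z\|^2$, the upper bound being the standard consequence of the Lipschitz-gradient condition (8).

```python
import numpy as np

# Quadratic distance generating function w(z) = 0.5 * z^T Q z with Q spd:
# grad w(z) = Q z, and the Bregman distance (9) is a Q-weighted half squared norm.
rng = np.random.default_rng(2)
s = 5
B = rng.standard_normal((s, s))
Q = B @ B.T + np.eye(s)                 # symmetric positive definite (illustrative)

def w(z):      return 0.5 * z @ Q @ z
def grad_w(z): return Q @ z
def dw(zp, z): return w(zp) - w(z) - grad_w(z) @ (zp - z)

m = np.linalg.eigvalsh(Q).min()         # strong-convexity modulus in (7)
M = np.linalg.eigvalsh(Q).max()         # gradient Lipschitz constant in (8)

ok = True
for _ in range(100):
    z, zp = rng.standard_normal(s), rng.standard_normal(s)
    dist2 = np.linalg.norm(zp - z) ** 2
    ok = ok and ((m / 2) * dist2 - 1e-9 <= dw(zp, z) <= (M / 2) * dist2 + 1e-9)
```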

For every $z\in Z$, the function $dw(\cdot;z)$ will be denoted by $dw_z$, so that $dw_z(z') = dw(z';z)$ for every $z'\in\mathbb{R}^s$. Clearly,

$$\nabla dw_z(z') = \nabla w(z') - \nabla w(z)\quad \forall\, z, z'\in Z. \tag{10}$$

We now state the non-Euclidean proximal Jacobi ADMM based on the class of distance generating functions introduced in Definition 2.2. In its statement and in some technical results, we denote the block of variables $(x_1,\dots,x_{i-1})$ simply by $x_{<i}$ and the block of variables $(x_{i+1},\dots,x_p)$ simply by $x_{>i}$. Hence, the whole vector $(x_1,\dots,x_p)$ can also be denoted by $(x_{<i},x_i,x_{>i})$ when there is a need to emphasize the $i$-th block. For convenience, we also extend the above notation to $i=1$ and $i=p$; hence, $(x_{<1},x_1,x_{>1}) = (x_{<p},x_p,x_{>p}) = (x_1,\dots,x_p)$.

Non-Euclidean Proximal Jacobi ADMM (NEPJ-ADMM)

(0) Define $Z_i := \operatorname{dom} f_i$ for $i=1,\dots,p$, and let $\bar\beta$ be as in (A3). Let an initial point $(x_1^0,\dots,x_p^0,\lambda^0)\in Z_1\times\cdots\times Z_p\times\mathbb{R}^d$ be given. Choose scalars $\alpha>0$, $\beta\ge\bar\beta$, $M_i\ge m_i>0$, $i=1,\dots,p$, and a stepsize parameter $\theta\in(0,2)$ such that

$$\delta_i := \frac{m_i}{4} - \Big[\frac{p-2+\alpha}{2} + \frac{2\gamma_\theta(p+1)\|A_p\|^2}{\sigma^+(A_p)}\Big]\beta\max_{1\le l\le p-1}\|A_l\|^2 > 0,\quad i=1,\dots,p-1,$$
$$\delta_p := \frac{m_p}{4} - \frac{(p-1)\beta\|A_p\|^2}{2\alpha} - \frac{2\gamma_\theta(p+1)(L_p^2+M_p^2)}{\beta\,\sigma^+(A_p)} > 0, \tag{11}$$

where $\sigma(A_p)$ (resp. $\sigma^+(A_p)$) denotes the smallest eigenvalue (resp. smallest positive eigenvalue) of $A_pA_p^*$, and $\gamma_\theta$ is given by

$$\gamma_\theta := \frac{\theta}{(1-|\theta-1|)^2}. \tag{12}$$

Set $k=1$ and go to step (1).

(1) For each $i=1,\dots,p$, choose $w_i^k\in\mathcal{D}_{Z_i}(m_i,M_i)$ and compute an optimal solution $x_i^k\in\mathbb{R}^{n_i}$ of

$$\min_{x_i\in\mathbb{R}^{n_i}}\Big\{L_\beta(x_{<i}^{k-1},x_i,x_{>i}^{k-1},\lambda^{k-1}) + dw_i^k(x_i;x_i^{k-1})\Big\}. \tag{13}$$

(2) Set

$$\lambda^k = \lambda^{k-1} - \theta\beta\Big(\sum_{i=1}^p A_i x_i^k - b\Big), \tag{14}$$

set $k\leftarrow k+1$, and go to step (1).

end

Some comments about the NEPJ-ADMM are in order. First, it is always possible to choose the constants $m_i$, $i=1,\dots,p$, sufficiently large so as to guarantee that the quantities $\delta_i$, $i=1,\dots,p$, are strictly positive. Second, one of the main features of the NEPJ-ADMM is that its subproblems (13) are completely independent of one another. As a result, they can all be solved in parallel, which shows the potential of the NEPJ-ADMM as a suitable ADMM variant to solve large instances of (1). Third, as in the papers [10, 30], the NEPJ-ADMM allows the choice of a relaxation parameter $\theta\in(0,2)$.

2.2  Convergence rate analysis of the NEPJ-ADMM

This subsection is dedicated to the convergence rate analysis of the NEPJ-ADMM. We first present some technical lemmas which are useful to prove our main result (Theorem 2.11). To simplify the notation, we denote by $x^k$ the vector $(x_1^k,\dots,x_p^k)$ generated by the NEPJ-ADMM.

Lemma 2.3. Consider the sequence $\{(x^k,\lambda^k)\}$ generated by the NEPJ-ADMM. For every $k\ge1$, define

$$\hat\lambda^k := \lambda^{k-1} - \beta\Big(\sum_{i=1}^p A_i x_i^k - b\Big) \tag{15}$$

and

$$R_i^k := -\sum_{j\neq i}\beta A_i^*A_j\,\Delta x_j^k + \Delta w_i^k,\quad i=1,\dots,p, \tag{16}$$

where

$$\Delta x_i^k := x_i^k - x_i^{k-1},\qquad \Delta w_i^k := \nabla w_i^k(x_i^k) - \nabla w_i^k(x_i^{k-1}),\quad i=1,\dots,p. \tag{17}$$

Then, for every $k\ge1$, we have

$$0\in\partial f_i(x_i^k) - A_i^*\hat\lambda^k + R_i^k,\quad i=1,\dots,p, \tag{18}$$
$$0 = \sum_{i=1}^p A_i x_i^k - b + \frac{1}{\theta\beta}\Delta\lambda^k, \tag{19}$$

where $\Delta\lambda^k := \lambda^k - \lambda^{k-1}$.

Proof. The optimality conditions (see Proposition 1.3) for (13) imply that

$$0\in\partial f_i(x_i^k) - A_i^*\Big[\lambda^{k-1} - \beta\Big(A_i x_i^k + \sum_{j\neq i}A_j x_j^{k-1} - b\Big)\Big] + \Delta w_i^k,\quad i=1,\dots,p.$$

This relation combined with (15) and (16) immediately yields (18). Relation (19) follows directly from (14).

The next result presents a recursive relation involving the displacements $\Delta\lambda^k$ and $\Delta\lambda^{k-1}$.

7 Lemma.4. Consder the sequence {x k, λ k } generated by the NEPJ-ADMM and defne Rp 0 = A pλ 0 f p x 0 p, λ 0 = 0. 0 Then, for every k 1, we have A p λ k = 1 θa p λ k 1 + θu k, 1 where u k := f k p + R k p, f k p := f k p x k p f k p x k 1 p, R k p := R k p R k 1 p k 1, λ k and R k p are as n Lemma.3. Proof. From 15 and 19, we obtan the followng relaton Usng ths relaton and 18 wth = p, we have λ k = 1 θλ k 1 + θˆλ k, k 1. A pλ k = 1 θa pλ k 1 + θ[ f p x k p + R k p], k 1. 3 Hence, n vew of, relaton 1 holds for every k. Now, note that 0 s equvalent to f p x 0 p + R 0 p = A pλ 0. Ths relaton combned wth and 3, both wth k = 1, yeld A p λ 1 = θa pλ 0 + θ [ f p y 1 + Rp 1 ] = θa pλ 0 + θ [ f p x 0 p + R 0 p + u 1] = θa pλ 0 + θa pλ 0 + θu 1 = θu 1. Hence, n vew of λ 0 = 0, relaton 1 also holds for k = 1. Next we consder an auxlary result to be used to compare consecutve terms of the sequence {L β x k, λ k }. See the comments mmedately before the NEPJ-ADMM about the notaton used hereafter. Lemma.5. For every y 0 = y 0 1,..., y0 p, y = y 1,..., y p dom f 1... dom f p, λ R d and =,..., p, we have L β y <, y, y 0 >, λ L β y <, y 0, y 0 >, λ = L β y 0 <, y, y 0 >, λ L β y 0 <, y 0, y 0 >, λ Proof. It s easy to see thatf the gradent of the functon 1 + β A y, A j y j. y < L β y <, y, y 0 > L β y <, y 0, y 0 > 4 s gven by β[a 1 A 1 ] A y 7

and its Hessian is equal to zero everywhere on $\operatorname{dom}f_1\times\cdots\times\operatorname{dom}f_{i-1}$. Hence, the function above is affine. The conclusion of the lemma now follows by noting that

$$\big\langle \beta[A_1\ \cdots\ A_{i-1}]^*A_i(y_i-y_i^0),\, y_{<i}-y_{<i}^0\big\rangle = \beta\Big\langle A_i(y_i-y_i^0),\sum_{j=1}^{i-1}A_j(y_j-y_j^0)\Big\rangle.$$

The next result compares consecutive terms of the sequence $\{L_\beta(x^k,\lambda^k)\}$.

Lemma 2.6. For every $k\ge1$, we have

$$L_\beta(x^k,\lambda^k) - L_\beta(x^{k-1},\lambda^{k-1}) \le \sum_{1\le j<i\le p}\beta\big\langle A_i\Delta x_i^k, A_j\Delta x_j^k\big\rangle - \sum_{i=1}^p\frac{m_i}{2}\|\Delta x_i^k\|^2 + \frac{1}{\theta\beta}\|\Delta\lambda^k\|^2. \tag{25}$$

Proof. First note that (13), together with the fact that $w_i^k\in\mathcal{D}_{Z_i}(m_i,M_i)$ and (7), implies that

$$L_\beta(x_{<i}^{k-1},x_i^k,x_{>i}^{k-1},\lambda^{k-1}) \le L_\beta(x^{k-1},\lambda^{k-1}) - \frac{m_i}{2}\|\Delta x_i^k\|^2,\quad i=1,\dots,p.$$

Hence, using Lemma 2.5 with $y^0=x^{k-1}$, $y=x^k$ and $\lambda=\lambda^{k-1}$ to telescope over the blocks, we see that

$$L_\beta(x^k,\lambda^{k-1}) - L_\beta(x^{k-1},\lambda^{k-1}) = \sum_{i=1}^p\Big[L_\beta(x_{<i}^{k-1},x_i^k,x_{>i}^{k-1},\lambda^{k-1}) - L_\beta(x^{k-1},\lambda^{k-1})\Big] + \sum_{1\le j<i\le p}\beta\big\langle A_i\Delta x_i^k, A_j\Delta x_j^k\big\rangle \le -\sum_{i=1}^p\frac{m_i}{2}\|\Delta x_i^k\|^2 + \sum_{1\le j<i\le p}\beta\big\langle A_i\Delta x_i^k, A_j\Delta x_j^k\big\rangle.$$

On the other hand, due to $\Delta\lambda^k = \lambda^k - \lambda^{k-1}$ and (14), we have

$$L_\beta(x^k,\lambda^k) - L_\beta(x^k,\lambda^{k-1}) = -\Big\langle\Delta\lambda^k, \sum_{i=1}^p A_i x_i^k - b\Big\rangle = \frac{1}{\beta\theta}\|\Delta\lambda^k\|^2.$$

To conclude the proof, just add the last relation and the previous estimate.

Lemma 2.6 is essential to show that a certain sequence $\{\hat L^k\}$ associated with $\{L_\beta(x^k,\lambda^k)\}$ is monotonically decreasing. This sequence is defined as

$$\hat L^k := L_\beta(x^k,\lambda^k) + \eta_k,\quad k\ge0, \tag{26}$$

where

$$\eta_0 := \frac{m_p}{4M_p^2}\big\|A_p^*\lambda^0 - \nabla f_p(x_p^0)\big\|^2, \tag{27}$$
$$\eta_k := \sum_{i=1}^p\frac{m_i}{4}\|\Delta x_i^k\|^2 + c_1\|A_p^*\Delta\lambda^k\|^2,\quad k\ge1, \tag{28}$$
$$c_1 := \frac{|\theta-1|}{\beta\theta\,(1-|\theta-1|)\,\sigma^+(A_p)}. \tag{29}$$

Before establishing the monotonicity property of the sequence $\{\hat L^k\}$, we first present an upper bound on $\hat L^k - \hat L^{k-1}$ in terms of some quantities related to $\Delta x_1^k,\dots,\Delta x_p^k$ and $\Delta\lambda^k$.

Lemma 2.7. For any $k\ge1$, there holds

$$\hat L^k - \hat L^{k-1} \le -\sum_{i=1}^{p-1}\Big[\frac{m_i}{4}\big(\|\Delta x_i^k\|^2+\|\Delta x_i^{k-1}\|^2\big) - \frac{(p-2+\alpha)\beta\|A_i\|^2}{2}\|\Delta x_i^k\|^2\Big] + \Theta_\lambda^k + \Theta_p^k, \tag{30}$$

where

$$\Theta_\lambda^k := \frac{1}{\beta\theta}\|\Delta\lambda^k\|^2 + c_1\big(\|A_p^*\Delta\lambda^k\|^2 - \|A_p^*\Delta\lambda^{k-1}\|^2\big), \tag{31}$$
$$\Theta_p^k := \frac{(p-1)\beta\|A_p\|^2}{2\alpha}\|\Delta x_p^k\|^2 - \frac{m_p}{4}\big(\|\Delta x_p^k\|^2+\|\Delta x_p^{k-1}\|^2\big), \tag{32}$$

and $\Delta\lambda^0 = 0$, $\Delta x_p^0 = R_p^0/M_p$ (see Lemma 2.4).

Proof. From Lemma 2.6 and the definitions of $\hat L^k$ and $\Theta_\lambda^k$, we obtain

$$\hat L^k - \hat L^{k-1} \le \sum_{1\le j<i\le p}\beta\big\langle A_i\Delta x_i^k, A_j\Delta x_j^k\big\rangle - \sum_{i=1}^p\frac{m_i}{4}\big(\|\Delta x_i^k\|^2+\|\Delta x_i^{k-1}\|^2\big) + \Theta_\lambda^k.$$

Moreover, bounding each inner product by means of the Cauchy-Schwarz inequality and the relation $s_1 s_2 \le (t/2)s_1^2 + (1/(2t))s_2^2$, $s_1,s_2\in\mathbb{R}$, with $t=1$ for the pairs with $i,j\le p-1$ and $t=\alpha$ for the pairs involving the $p$-th block, we obtain

$$\sum_{1\le j<i\le p}\beta\big\langle A_i\Delta x_i^k, A_j\Delta x_j^k\big\rangle \le \frac{(p-2+\alpha)\beta}{2}\sum_{i=1}^{p-1}\|A_i\|^2\|\Delta x_i^k\|^2 + \frac{(p-1)\beta\|A_p\|^2}{2\alpha}\|\Delta x_p^k\|^2.$$

Combining the two previous estimates with the definition of $\Theta_p^k$ yields (30).

The next result compares $\Theta_\lambda^k$ with $u^k$, defined in (31) and (22), respectively, and provides an upper bound for both quantities in terms of the displacements $\Delta x_1^k,\dots,\Delta x_p^k$ and $\Delta x_1^{k-1},\dots,\Delta x_p^{k-1}$.

Lemma 2.8. Consider $\Theta_\lambda^k$ as in (31) and $u^k$ as in (22). Then,

$$\Theta_\lambda^k \le \frac{\gamma_\theta}{\beta\,\sigma^+(A_p)}\|u^k\|^2 \le \frac{2\gamma_\theta(p+1)}{\beta\,\sigma^+(A_p)}\Big[\sum_{j=1}^{p-1}\beta^2\|A_p^*A_j\|^2\big(\|\Delta x_j^k\|^2+\|\Delta x_j^{k-1}\|^2\big) + \big(L_p^2+M_p^2\big)\big(\|\Delta x_p^k\|^2+\|\Delta x_p^{k-1}\|^2\big)\Big], \tag{38}$$

where $\gamma_\theta$ is as in (12), $\Delta x_i^0 := 0$, $i=1,\dots,p-1$, and $\Delta x_p^0 := R_p^0/M_p$ (see Lemma 2.4).

Proof. The proof of this lemma is given in Appendix A.

The next proposition shows, in particular, that the sequence $\{\hat L^k\}$ is decreasing and bounded below.

Proposition 2.9. Let $\Delta x_i^0 = 0$, $i=1,\dots,p-1$, and $\Delta x_p^0 = R_p^0/M_p$. Then, the following statements hold:

(a) for every $k\ge1$, $\hat L^k - \hat L^{k-1} \le -\sum_{i=1}^p\delta_i\big(\|\Delta x_i^k\|^2+\|\Delta x_i^{k-1}\|^2\big)$;

(b) the sequence $\{\hat L^k\}$ given in (26) satisfies $\hat L^k \ge v_\beta$ for every $k\ge0$;

(c) for every $k\ge1$,

$$\sum_{j=1}^k\sum_{i=1}^p\delta_i\big(\|\Delta x_i^j\|^2+\|\Delta x_i^{j-1}\|^2\big) \le \hat L^0 - v_\beta,$$

where $v_\beta$ and $\delta_i$ are as in (A3) and (11), respectively.

Proof. (a) It follows from (30), Lemma 2.8, the definition of $\Theta_p^k$ in (32), and the definition of $\delta_i$ in (11) (together with the bounds $\|A_i\| \le \max_{1\le l\le p-1}\|A_l\|$ and $\|A_p^*A_i\| \le \|A_p\|\max_{1\le l\le p-1}\|A_l\|$ for $i\le p-1$) that

$$\hat L^k - \hat L^{k-1} \le -\sum_{i=1}^p\delta_i\big(\|\Delta x_i^k\|^2+\|\Delta x_i^{k-1}\|^2\big),$$

proving (a). The proof of (b) is given in Appendix B. The proof of (c) follows immediately from (a) and (b).

The next proposition presents some convergence rate bounds for the displacements $\Delta x_i^k$, $i=1,\dots,p$, and $\Delta\lambda^k$ in terms of some initial parameters. Our main result will follow easily from this proposition, due to the fact that the residual generated by $(x^k,\hat\lambda^k)$ with respect to the optimality system of Definition 2.1 (see Lemma 2.3) can be controlled by these displacements.

Proposition 2.10. Let $\delta_i$, $i=1,\dots,p$, be as in (11) and define

$$\delta_\lambda := \Big[\frac{2\theta\gamma_\theta(p+1)}{\sigma^+(A_p)}\Big(\beta^2\|A_p\|^2\max_{1\le l\le p-1}\|A_l\|^2 + L_p^2 + M_p^2\Big)\Big]^{-1}\min_{1\le l\le p}\delta_l, \tag{39}$$

where $L_0 := \hat L^0 - v_\beta$ (see (26) and (A3)). Then, for every $k\ge1$, we have

$$\sum_{j=1}^k\Big[\sum_{i=1}^p\delta_i\big(\|\Delta x_i^j\|^2+\|\Delta x_i^{j-1}\|^2\big) + \delta_\lambda\|\Delta\lambda^j\|^2\Big] \le 2L_0, \tag{40}$$

and there exists $j\le k$ such that

$$\|\Delta x_i^j\| \le \sqrt{\frac{2L_0}{k\,\delta_i}},\quad i=1,\dots,p,\qquad \|\Delta\lambda^j\| \le \sqrt{\frac{2L_0}{k\,\delta_\lambda}}. \tag{41}$$

Proof. It follows from Proposition 2.9(c) that

$$\sum_{j=1}^k\sum_{i=1}^p\big(\|\Delta x_i^j\|^2+\|\Delta x_i^{j-1}\|^2\big) \le \frac{L_0}{\min_{1\le i\le p}\delta_i}, \tag{42}$$

and that, in order to prove (40), it suffices to show that

$$\delta_\lambda\sum_{j=1}^k\|\Delta\lambda^j\|^2 \le L_0. \tag{43}$$

In the remaining part of the proof we show that (43) holds. By rewriting (31), we have

$$\|\Delta\lambda^k\|^2 = \beta\theta\Big[c_1\big(\|A_p^*\Delta\lambda^{k-1}\|^2 - \|A_p^*\Delta\lambda^k\|^2\big) + \Theta_\lambda^k\Big],\quad k\ge1.$$

Hence, due to $\Delta\lambda^0 = 0$ and Lemma 2.8, we obtain

$$\sum_{j=1}^k\|\Delta\lambda^j\|^2 \le \beta\theta\sum_{j=1}^k\Theta_\lambda^j \le \frac{\theta\gamma_\theta}{\sigma^+(A_p)}\sum_{j=1}^k\|u^j\|^2 \le \frac{2\theta\gamma_\theta(p+1)}{\sigma^+(A_p)}\Big(\beta^2\|A_p\|^2\max_{1\le l\le p-1}\|A_l\|^2 + L_p^2 + M_p^2\Big)\sum_{j=1}^k\sum_{i=1}^p\big(\|\Delta x_i^j\|^2+\|\Delta x_i^{j-1}\|^2\big) \le \frac{2\theta\gamma_\theta(p+1)}{\sigma^+(A_p)}\Big(\beta^2\|A_p\|^2\max_{1\le l\le p-1}\|A_l\|^2 + L_p^2 + M_p^2\Big)\frac{L_0}{\min_{1\le i\le p}\delta_i},$$

where the last inequality is due to (42). It is now easy to verify that the previous estimate and (39) imply (43), which in turn implies (40). Finally, (41) follows from (40) by choosing $j\le k$ as the index that minimizes the $j$-th summand in (40).

We now present the main convergence rate result for the NEPJ-ADMM. Its main conclusion is that the NEPJ-ADMM generates an element $(x_1,\dots,x_p,\lambda)$ which satisfies the optimality conditions of Definition 2.1 within an error of $\mathcal{O}(1/\sqrt{k})$.

Theorem 2.11. Let $L_0 := L_\beta(x^0,\lambda^0) - v_\beta + \eta_0$, where $\eta_0$ and $v_\beta$ are as in (27) and (A3), respectively. Let $\hat\lambda^k$ and $R_i^k$, $i=1,\dots,p$, be as in (15) and (16), respectively. Consider $\delta_i$, $i=1,\dots,p$, as in (11) and let $\delta_\lambda$ be as in (39). Then, the following statements hold:

(a) $L_0 \ge 0$;

(b) for every $k\ge1$,

$$0\in\partial f_i(x_i^k) - A_i^*\hat\lambda^k + R_i^k,\quad i=1,\dots,p,$$

and there exists $j\le k$ such that

$$\|R_i^j\| \le \Big(\beta\sum_{l=1,\,l\neq i}^p\|A_i^*A_l\| + M_i\Big)\sqrt{\frac{2L_0}{k\,\min_{1\le l\le p}\delta_l}},\quad i=1,\dots,p,$$
$$\Big\|\sum_{i=1}^p A_i x_i^j - b\Big\| \le \frac{1}{\beta\theta}\sqrt{\frac{2L_0}{k\,\delta_\lambda}}.$$

Proof. Statement (a) holds due to Proposition 2.9(b). Lemma 2.3 shows that the first statement of (b) holds. Now, it follows from (16), (19) and the fact that $w_i^k\in\mathcal{D}_{Z_i}(m_i,M_i)$, $i=1,\dots,p$, that

$$\|R_i^k\| \le \beta\sum_{l=1,\,l\neq i}^p\|A_i^*A_l\|\,\|\Delta x_l^k\| + M_i\|\Delta x_i^k\|,\quad i=1,\dots,p,\qquad \Big\|\sum_{i=1}^p A_i x_i^k - b\Big\| = \frac{1}{\beta\theta}\|\Delta\lambda^k\|.$$

Hence, to end the proof, just combine the above relations with (41).

A  Proof of Lemma 2.8

Let us first prove the first inequality in (38). Assumption (A1) clearly implies that

$$\Delta\lambda^k = -\theta\beta\Big(\sum_{i=1}^p A_i x_i^k - b\Big) \in \operatorname{Im}(A_p).$$

Hence, it follows from Lemma 1.1 (applied with $S = A_p^*$, so that $P_S$ is the projection onto $\operatorname{Im}(A_p)$ and $\sigma^+(A_p^*) = \sigma^+(A_p)$) that

$$\|\Delta\lambda^k\|^2 = \|P_{A_p^*}(\Delta\lambda^k)\|^2 \le \frac{1}{\sigma^+(A_p)}\|A_p^*\Delta\lambda^k\|^2.$$

Thus, in view of (21) and (31), we have

$$\Theta_\lambda^k \le \frac{1}{\beta\theta\sigma^+(A_p)}\|A_p^*\Delta\lambda^k\|^2 + c_1\big(\|A_p^*\Delta\lambda^k\|^2 - \|A_p^*\Delta\lambda^{k-1}\|^2\big) = \Big(\frac{1}{\beta\theta\sigma^+(A_p)} + c_1\Big)\big\|(1-\theta)A_p^*\Delta\lambda^{k-1} + \theta u^k\big\|^2 - c_1\|A_p^*\Delta\lambda^{k-1}\|^2.$$

Note that if $\theta=1$, then (29) implies that $c_1=0$ and, since $\gamma_1=1$, the above inequality proves the first inequality of the lemma. We now prove the first inequality of the lemma in the case $\theta\neq1$. Set $a := |\theta-1|\in(0,1)$ and $t := 1/a - 1$, which is positive in view of the assumption that $\theta\in(0,2)$. The previous inequality together with the relation $\|s_1+s_2\|^2 \le (1+t)\|s_1\|^2 + (1+1/t)\|s_2\|^2$, which holds for every $s_1,s_2\in\mathbb{R}^{n_p}$ and $t>0$, yields

$$\Theta_\lambda^k \le \Big[\Big(\frac{1}{\beta\theta\sigma^+(A_p)}+c_1\Big)(1+t)\,a^2 - c_1\Big]\|A_p^*\Delta\lambda^{k-1}\|^2 + \Big(\frac{1}{\beta\theta\sigma^+(A_p)}+c_1\Big)\Big(1+\frac1t\Big)\theta^2\|u^k\|^2.$$

Since $1+t = 1/a$, $1+1/t = 1/(1-a)$ and, by (29), $\frac{1}{\beta\theta\sigma^+(A_p)}+c_1 = \frac{1}{\beta\theta\sigma^+(A_p)(1-a)}$, the coefficient of $\|A_p^*\Delta\lambda^{k-1}\|^2$ above equals $\frac{a}{\beta\theta\sigma^+(A_p)(1-a)} - c_1 = 0$, while the coefficient of $\|u^k\|^2$ equals $\frac{\theta^2}{\beta\theta\sigma^+(A_p)(1-a)^2} = \frac{\gamma_\theta}{\beta\sigma^+(A_p)}$ in view of (12). Hence, the first inequality of the lemma is proved.

We now prove the second inequality in (38). Due to $R_p^0 = M_p\Delta x_p^0$, $w_p^k\in\mathcal{D}_{Z_p}(m_p,M_p)$, assumption (A2) and relation (22), we obtain

$$\|u^k\| \le \|\Delta\nabla f_p^k\| + \|\Delta R_p^k\| \le L_p\|\Delta x_p^k\| + \sum_{j=1}^{p-1}\beta\|A_p^*A_j\|\big(\|\Delta x_j^k\|+\|\Delta x_j^{k-1}\|\big) + M_p\big(\|\Delta x_p^k\|+\|\Delta x_p^{k-1}\|\big),$$

where the inequalities follow from the triangle inequality for norms, the definition of $R_p^k$ in (16), and (8). Hence, applying the relation $\|\sum_{i=1}^l s_i\|^2 \le l\sum_{i=1}^l\|s_i\|^2$, $s_i\in\mathbb{R}^{n_p}$, with $l=p+1$ and then $(s_1+s_2)^2 \le 2(s_1^2+s_2^2)$, the second inequality in (38) follows, and the proof of Lemma 2.8 is complete.

B  Proof of Proposition 2.9(b)

Note that, due to (a), we just need to prove the statement of (b) for $k\ge1$. Hence, assume by contradiction that there exists an index $k_0\ge0$ such that $\hat L^{k_0+1} < v_\beta$. Since, by (a), $\{\hat L^k\}$ is decreasing, we obtain

$$\sum_{k=1}^j\big(\hat L^k - v_\beta\big) \le \sum_{k=1}^{k_0}\big(\hat L^k - v_\beta\big) + (j-k_0)\big(\hat L^{k_0+1} - v_\beta\big),\quad j>k_0,$$

which implies that $\lim_{j\to\infty}\sum_{k=1}^j(\hat L^k - v_\beta) = -\infty$. On the other hand, it follows from (26), (14), (6) and (A3) that

$$\hat L^k \ge L_\beta(x^k,\lambda^k) = \sum_{i=1}^p f_i(x_i^k) + \frac{\beta}{2}\Big\|\sum_{i=1}^p A_i x_i^k - b\Big\|^2 + \frac{1}{\beta\theta}\big\langle\Delta\lambda^k,\lambda^k\big\rangle \ge v_\beta + \frac{1}{2\beta\theta}\big(\|\lambda^k\|^2 - \|\lambda^{k-1}\|^2\big),$$

and hence that

$$\sum_{k=1}^j\big(\hat L^k - v_\beta\big) \ge \frac{1}{2\beta\theta}\big(\|\lambda^j\|^2 - \|\lambda^0\|^2\big) \ge -\frac{1}{2\beta\theta}\|\lambda^0\|^2,\quad j\ge1,$$

which yields the desired contradiction.

References

[1] B. P. W. Ames and M. Hong. Alternating direction method of multipliers for penalized zero-variance discriminant analysis. Comput. Optim. Appl., 64(3):725–754, 2016.

[2] Y. Cui, X. Li, D. Sun, and K. C. Toh. On the convergence properties of a majorized ADMM for linearly constrained convex optimization problems with coupled objective functions. J. Optim. Theory Appl., 169(3), 2016.

[3] W. Deng and W. Yin. On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput., pages 1–28, 2015.

[4] W. Deng, M.-J. Lai, Z. Peng, and W. Yin. Parallel multi-block ADMM with o(1/k) convergence. J. Sci. Comput., 71(2):712–736, 2017.

[5] E. X. Fang, B. He, H. Liu, and X. Yuan. Generalized alternating direction method of multipliers: new theoretical insights and applications. Math. Prog. Comp., 7, 2015.

[6] P. A. Forero, A. Cano, and G. B. Giannakis. Distributed clustering using wireless sensor networks. IEEE J. Selected Topics Signal Process., 5(4):707–724, 2011.

[7] D. Gabay and B. Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl., 2:17–40, 1976.

[8] R. Glowinski and A. Marroco. Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Revue Française d'Automatique, Informatique et Recherche Opérationnelle, 1975.

[9] M. L. N. Gonçalves, J. G. Melo, and R. D. C. Monteiro. Extending the ergodic convergence rate of the proximal ADMM. arXiv preprint.

[10] M. L. N. Gonçalves, J. G. Melo, and R. D. C. Monteiro. Convergence rate bounds for a proximal ADMM with over-relaxation stepsize parameter for solving nonconvex linearly constrained problems. arXiv preprint.

[11] M. L. N. Gonçalves, J. G. Melo, and R. D. C. Monteiro. Improved pointwise iteration-complexity of a regularized ADMM and of a regularized non-Euclidean HPE framework. SIAM J. Optim., 27(1), 2017.

[12] Y. Gu, B. Jiang, and D. Han. A semi-proximal-based strictly contractive Peaceman-Rachford splitting method. arXiv preprint.

[13] K. Guo, D. R. Han, and T. T. Wu. Convergence of alternating direction method for minimizing sum of two nonconvex functions with linear constraints. Int. J. Comput. Math., 2016.

[14] W. W. Hager, M. Yashtini, and H. Zhang. An O(1/k) convergence rate for the variable stepsize Bregman operator splitting algorithm. SIAM J. Numer. Anal., 54(3), 2016.

[15] D. Han and X. Yuan. A note on the alternating direction method of multipliers. J. Optim. Theory Appl., 155(1):227–238, 2012.

[16] B. He, F. Ma, and X. Yuan. On the step size of symmetric alternating directions method of multipliers. Preprint.

[17] B. He, H.-K. Xu, and X. Yuan. On the proximal Jacobian decomposition of ALM for multiple-block separable convex minimization problems and its relationship to ADMM. J. Sci. Comput., 66(3), 2016.

[18] B. He and X. Yuan. On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method. SIAM J. Numer. Anal., 50(2):700–709, 2012.

[19] M. Hong and Z.-Q. Luo. On the linear convergence of the alternating direction method of multipliers. Math. Programming, 162(1–2):165–199, 2017.

[20] M. Hong, Z.-Q. Luo, and M. Razaviyayn. Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems. SIAM J. Optim., 26(1):337–364, 2016.

[21] B. Jiang, T. Lin, S. Ma, and S. Zhang. Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis. arXiv preprint.

[22] G. Li and T. K. Pong. Global convergence of splitting methods for nonconvex composite optimization. SIAM J. Optim., 25(4):2434–2460, 2015.

[23] M. Li and X. Yuan. The augmented Lagrangian method with full Jacobian decomposition and logarithmic-quadratic proximal regularization for multiple-block separable convex programming. Preprint, 2015.

[24] A. P. Liavas and N. D. Sidiropoulos. Parallel algorithms for constrained tensor factorization via the alternating direction method of multipliers. arXiv preprint.

[25] T. Lin, S. Ma, and S. Zhang. An extragradient-based alternating direction method for convex minimization. Found. Comput. Math., pages 1–25, 2015.

[26] T. Lin, S. Ma, and S. Zhang. On the global linear convergence of the ADMM with multiblock variables. SIAM J. Optim., 25(3), 2015.

[27] T. Lin, S. Ma, and S. Zhang. On the sublinear convergence rate of multi-block ADMM. J. Oper. Res. Soc. China, 3(3):251–274, 2015.

[28] T. Lin, S. Ma, and S. Zhang. Iteration complexity analysis of multi-block ADMM for a family of convex minimization without strong convexity. J. Sci. Comput., 69(1):52–81, 2016.

[29] Q. Liu, X. Shen, and Y. Gu. Linearized ADMM for non-convex non-smooth optimization with convergence analysis. arXiv preprint.

[30] J. G. Melo and R. D. C. Monteiro. Iteration-complexity of a linearized proximal multiblock ADMM class for linearly constrained nonconvex optimization problems. Available online.

[31] R. D. C. Monteiro and B. F. Svaiter. Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers. SIAM J. Optim., 23(1):475–507, 2013.

[32] B. S. Mordukhovich. Variational analysis and generalized differentiation I: Basic theory. Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 2006.

[33] Y. Ouyang, Y. Chen, G. Lan, and E. Pasiliao Jr. An accelerated linearized alternating direction method of multipliers. SIAM J. Imaging Sci., 8(1), 2015.

[34] R. T. Rockafellar and R. J.-B. Wets. Variational analysis. Springer, Berlin, 1998.

[35] F. Wang, W. Cao, and Z. Xu. Convergence of multi-block Bregman ADMM for nonconvex composite problems. arXiv preprint.

[36] F. Wang, Z. Xu, and H. K. Xu. Convergence of Bregman alternating direction method with multipliers for nonconvex composite problems. arXiv preprint.

[37] H. Wang, A. Banerjee, and Z.-Q. Luo. Parallel direction method of multipliers. arXiv preprint.

[38] Y. Wang, W. Yin, and J. Zeng. Global convergence of ADMM in nonconvex nonsmooth optimization. arXiv preprint.

[39] Z. Wen, X. Peng, X. Liu, X. Sun, and X. Bai. Asset allocation under the Basel accord risk measures. arXiv preprint.

[40] Y. Xu, W. Yin, Z. Wen, and Y. Zhang. An alternating direction algorithm for matrix completion with nonnegative factors. Frontiers Math. China, 7(2), 2012.

[41] L. Yang, T. K. Pong, and X. Chen. Alternating direction method of multipliers for a class of nonconvex and nonsmooth problems with applications to background/foreground extraction. SIAM J. Imaging Sci., 10(1):74–110, 2017.

[42] R. Zhang and J. T. Kwok. Asynchronous distributed ADMM for consensus optimization. Proceedings of the 31st International Conference on Machine Learning, 2014.


Randić Energy and Randić Estrada Index of a Graph EUROPEAN JOURNAL OF PURE AND APPLIED MATHEMATICS Vol. 5, No., 202, 88-96 ISSN 307-5543 www.ejpam.com SPECIAL ISSUE FOR THE INTERNATIONAL CONFERENCE ON APPLIED ANALYSIS AND ALGEBRA 29 JUNE -02JULY 20, ISTANBUL

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

arxiv: v3 [math.na] 1 Jul 2017

arxiv: v3 [math.na] 1 Jul 2017 Accelerated Alternatng Drecton Method of Multplers: an Optmal O/K Nonergodc Analyss Huan L Zhouchen Ln arxv:608.06366v3 [math.na] Jul 07 July, 07 Abstract The Alternatng Drecton Method of Multplers ADMM

More information

Perron Vectors of an Irreducible Nonnegative Interval Matrix

Perron Vectors of an Irreducible Nonnegative Interval Matrix Perron Vectors of an Irreducble Nonnegatve Interval Matrx Jr Rohn August 4 2005 Abstract As s well known an rreducble nonnegatve matrx possesses a unquely determned Perron vector. As the man result of

More information

The Minimum Universal Cost Flow in an Infeasible Flow Network

The Minimum Universal Cost Flow in an Infeasible Flow Network Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

Solutions to exam in SF1811 Optimization, Jan 14, 2015

Solutions to exam in SF1811 Optimization, Jan 14, 2015 Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable

More information

Deriving the X-Z Identity from Auxiliary Space Method

Deriving the X-Z Identity from Auxiliary Space Method Dervng the X-Z Identty from Auxlary Space Method Long Chen Department of Mathematcs, Unversty of Calforna at Irvne, Irvne, CA 92697 chenlong@math.uc.edu 1 Iteratve Methods In ths paper we dscuss teratve

More information

Some basic inequalities. Definition. Let V be a vector space over the complex numbers. An inner product is given by a function, V V C

Some basic inequalities. Definition. Let V be a vector space over the complex numbers. An inner product is given by a function, V V C Some basc nequaltes Defnton. Let V be a vector space over the complex numbers. An nner product s gven by a functon, V V C (x, y) x, y satsfyng the followng propertes (for all x V, y V and c C) (1) x +

More information

A note on almost sure behavior of randomly weighted sums of φ-mixing random variables with φ-mixing weights

A note on almost sure behavior of randomly weighted sums of φ-mixing random variables with φ-mixing weights ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 7, Number 2, December 203 Avalable onlne at http://acutm.math.ut.ee A note on almost sure behavor of randomly weghted sums of φ-mxng

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

Vector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.

Vector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence. Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm

More information

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars

More information

Economics 101. Lecture 4 - Equilibrium and Efficiency

Economics 101. Lecture 4 - Equilibrium and Efficiency Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

Convergence rates of proximal gradient methods via the convex conjugate

Convergence rates of proximal gradient methods via the convex conjugate Convergence rates of proxmal gradent methods va the convex conjugate Davd H Gutman Javer F Peña January 8, 018 Abstract We gve a novel proof of the O(1/ and O(1/ convergence rates of the proxmal gradent

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

A Hybrid Variational Iteration Method for Blasius Equation

A Hybrid Variational Iteration Method for Blasius Equation Avalable at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Vol. 10, Issue 1 (June 2015), pp. 223-229 Applcatons and Appled Mathematcs: An Internatonal Journal (AAM) A Hybrd Varatonal Iteraton Method

More information

Research Article. Almost Sure Convergence of Random Projected Proximal and Subgradient Algorithms for Distributed Nonsmooth Convex Optimization

Research Article. Almost Sure Convergence of Random Projected Proximal and Subgradient Algorithms for Distributed Nonsmooth Convex Optimization To appear n Optmzaton Vol. 00, No. 00, Month 20XX, 1 27 Research Artcle Almost Sure Convergence of Random Projected Proxmal and Subgradent Algorthms for Dstrbuted Nonsmooth Convex Optmzaton Hdea Idua a

More information

The Two-scale Finite Element Errors Analysis for One Class of Thermoelastic Problem in Periodic Composites

The Two-scale Finite Element Errors Analysis for One Class of Thermoelastic Problem in Periodic Composites 7 Asa-Pacfc Engneerng Technology Conference (APETC 7) ISBN: 978--6595-443- The Two-scale Fnte Element Errors Analyss for One Class of Thermoelastc Problem n Perodc Compostes Xaoun Deng Mngxang Deng ABSTRACT

More information

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also

More information

On a direct solver for linear least squares problems

On a direct solver for linear least squares problems ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

System of implicit nonconvex variationl inequality problems: A projection method approach

System of implicit nonconvex variationl inequality problems: A projection method approach Avalable onlne at www.tjnsa.com J. Nonlnear Sc. Appl. 6 (203), 70 80 Research Artcle System of mplct nonconvex varatonl nequalty problems: A projecton method approach K.R. Kazm a,, N. Ahmad b, S.H. Rzv

More information

Appendix for Causal Interaction in Factorial Experiments: Application to Conjoint Analysis

Appendix for Causal Interaction in Factorial Experiments: Application to Conjoint Analysis A Appendx for Causal Interacton n Factoral Experments: Applcaton to Conjont Analyss Mathematcal Appendx: Proofs of Theorems A. Lemmas Below, we descrbe all the lemmas, whch are used to prove the man theorems

More information

Another converse of Jensen s inequality

Another converse of Jensen s inequality Another converse of Jensen s nequalty Slavko Smc Abstract. We gve the best possble global bounds for a form of dscrete Jensen s nequalty. By some examples ts frutfulness s shown. 1. Introducton Throughout

More information

SUCCESSIVE MINIMA AND LATTICE POINTS (AFTER HENK, GILLET AND SOULÉ) M(B) := # ( B Z N)

SUCCESSIVE MINIMA AND LATTICE POINTS (AFTER HENK, GILLET AND SOULÉ) M(B) := # ( B Z N) SUCCESSIVE MINIMA AND LATTICE POINTS (AFTER HENK, GILLET AND SOULÉ) S.BOUCKSOM Abstract. The goal of ths note s to present a remarably smple proof, due to Hen, of a result prevously obtaned by Gllet-Soulé,

More information

A CHARACTERIZATION OF ADDITIVE DERIVATIONS ON VON NEUMANN ALGEBRAS

A CHARACTERIZATION OF ADDITIVE DERIVATIONS ON VON NEUMANN ALGEBRAS Journal of Mathematcal Scences: Advances and Applcatons Volume 25, 2014, Pages 1-12 A CHARACTERIZATION OF ADDITIVE DERIVATIONS ON VON NEUMANN ALGEBRAS JIA JI, WEN ZHANG and XIAOFEI QI Department of Mathematcs

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that

More information

Perfect Competition and the Nash Bargaining Solution

Perfect Competition and the Nash Bargaining Solution Perfect Competton and the Nash Barganng Soluton Renhard John Department of Economcs Unversty of Bonn Adenauerallee 24-42 53113 Bonn, Germany emal: rohn@un-bonn.de May 2005 Abstract For a lnear exchange

More information

6) Derivatives, gradients and Hessian matrices

6) Derivatives, gradients and Hessian matrices 30C00300 Mathematcal Methods for Economsts (6 cr) 6) Dervatves, gradents and Hessan matrces Smon & Blume chapters: 14, 15 Sldes by: Tmo Kuosmanen 1 Outlne Defnton of dervatve functon Dervatve notatons

More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mt.edu 6.854J / 18.415J Advanced Algorthms Fall 2008 For nformaton about ctng these materals or our Terms of Use, vst: http://ocw.mt.edu/terms. 18.415/6.854 Advanced Algorthms

More information

Inexact Alternating Minimization Algorithm for Distributed Optimization with an Application to Distributed MPC

Inexact Alternating Minimization Algorithm for Distributed Optimization with an Application to Distributed MPC Inexact Alternatng Mnmzaton Algorthm for Dstrbuted Optmzaton wth an Applcaton to Dstrbuted MPC Ye Pu, Coln N. Jones and Melane N. Zelnger arxv:608.0043v [math.oc] Aug 206 Abstract In ths paper, we propose

More information

The lower and upper bounds on Perron root of nonnegative irreducible matrices

The lower and upper bounds on Perron root of nonnegative irreducible matrices Journal of Computatonal Appled Mathematcs 217 (2008) 259 267 wwwelsevercom/locate/cam The lower upper bounds on Perron root of nonnegatve rreducble matrces Guang-Xn Huang a,, Feng Yn b,keguo a a College

More information

Affine transformations and convexity

Affine transformations and convexity Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/

More information

Some modelling aspects for the Matlab implementation of MMA

Some modelling aspects for the Matlab implementation of MMA Some modellng aspects for the Matlab mplementaton of MMA Krster Svanberg krlle@math.kth.se Optmzaton and Systems Theory Department of Mathematcs KTH, SE 10044 Stockholm September 2004 1. Consdered optmzaton

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

e - c o m p a n i o n

e - c o m p a n i o n OPERATIONS RESEARCH http://dxdoorg/0287/opre007ec e - c o m p a n o n ONLY AVAILABLE IN ELECTRONIC FORM 202 INFORMS Electronc Companon Generalzed Quantty Competton for Multple Products and Loss of Effcency

More information

FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP

FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP C O L L O Q U I U M M A T H E M A T I C U M VOL. 80 1999 NO. 1 FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP BY FLORIAN K A I N R A T H (GRAZ) Abstract. Let H be a Krull monod wth nfnte class

More information

STEINHAUS PROPERTY IN BANACH LATTICES

STEINHAUS PROPERTY IN BANACH LATTICES DEPARTMENT OF MATHEMATICS TECHNICAL REPORT STEINHAUS PROPERTY IN BANACH LATTICES DAMIAN KUBIAK AND DAVID TIDWELL SPRING 2015 No. 2015-1 TENNESSEE TECHNOLOGICAL UNIVERSITY Cookevlle, TN 38505 STEINHAUS

More information

The Expectation-Maximization Algorithm

The Expectation-Maximization Algorithm The Expectaton-Maxmaton Algorthm Charles Elan elan@cs.ucsd.edu November 16, 2007 Ths chapter explans the EM algorthm at multple levels of generalty. Secton 1 gves the standard hgh-level verson of the algorthm.

More information

PHYS 705: Classical Mechanics. Calculus of Variations II

PHYS 705: Classical Mechanics. Calculus of Variations II 1 PHYS 705: Classcal Mechancs Calculus of Varatons II 2 Calculus of Varatons: Generalzaton (no constrant yet) Suppose now that F depends on several dependent varables : We need to fnd such that has a statonary

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed

More information

MAT 578 Functional Analysis

MAT 578 Functional Analysis MAT 578 Functonal Analyss John Qugg Fall 2008 Locally convex spaces revsed September 6, 2008 Ths secton establshes the fundamental propertes of locally convex spaces. Acknowledgment: although I wrote these

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence

More information

Surrogate Functional Based Subspace Correction Methods for Image Processing

Surrogate Functional Based Subspace Correction Methods for Image Processing Surrogate Functonal Based Subspace Correcton Methods for Image Processng Mchael Hntermüller and Andreas Langer Introducton Recently n [4, 5, 6] subspace correcton methods for non-smooth and non-addtve

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

Power law and dimension of the maximum value for belief distribution with the max Deng entropy

Power law and dimension of the maximum value for belief distribution with the max Deng entropy Power law and dmenson of the maxmum value for belef dstrbuton wth the max Deng entropy Bngy Kang a, a College of Informaton Engneerng, Northwest A&F Unversty, Yanglng, Shaanx, 712100, Chna. Abstract Deng

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Inexact Newton Methods for Inverse Eigenvalue Problems

Inexact Newton Methods for Inverse Eigenvalue Problems Inexact Newton Methods for Inverse Egenvalue Problems Zheng-jan Ba Abstract In ths paper, we survey some of the latest development n usng nexact Newton-lke methods for solvng nverse egenvalue problems.

More information

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem. prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove

More information

Integrals and Invariants of Euler-Lagrange Equations

Integrals and Invariants of Euler-Lagrange Equations Lecture 16 Integrals and Invarants of Euler-Lagrange Equatons ME 256 at the Indan Insttute of Scence, Bengaluru Varatonal Methods and Structural Optmzaton G. K. Ananthasuresh Professor, Mechancal Engneerng,

More information

Solving the Quadratic Eigenvalue Complementarity Problem by DC Programming

Solving the Quadratic Eigenvalue Complementarity Problem by DC Programming Solvng the Quadratc Egenvalue Complementarty Problem by DC Programmng Y-Shua Nu 1, Joaqum Júdce, Le Th Hoa An 3 and Pham Dnh Tao 4 1 Shangha JaoTong Unversty, Maths Departement and SJTU-Parstech, Chna

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

Online Classification: Perceptron and Winnow

Online Classification: Perceptron and Winnow E0 370 Statstcal Learnng Theory Lecture 18 Nov 8, 011 Onlne Classfcaton: Perceptron and Wnnow Lecturer: Shvan Agarwal Scrbe: Shvan Agarwal 1 Introducton In ths lecture we wll start to study the onlne learnng

More information

Appendix B. The Finite Difference Scheme

Appendix B. The Finite Difference Scheme 140 APPENDIXES Appendx B. The Fnte Dfference Scheme In ths appendx we present numercal technques whch are used to approxmate solutons of system 3.1 3.3. A comprehensve treatment of theoretcal and mplementaton

More information

On the Connectedness of the Solution Set for the Weak Vector Variational Inequality 1

On the Connectedness of the Solution Set for the Weak Vector Variational Inequality 1 Journal of Mathematcal Analyss and Alcatons 260, 15 2001 do:10.1006jmaa.2000.7389, avalable onlne at htt:.dealbrary.com on On the Connectedness of the Soluton Set for the Weak Vector Varatonal Inequalty

More information

Two Strong Convergence Theorems for a Proximal Method in Reflexive Banach Spaces

Two Strong Convergence Theorems for a Proximal Method in Reflexive Banach Spaces Two Strong Convergence Theorems for a Proxmal Method n Reflexve Banach Spaces Smeon Rech and Shoham Sabach Abstract. Two strong convergence theorems for a proxmal method for fndng common zeroes of maxmal

More information

A combinatorial proof of multiple angle formulas involving Fibonacci and Lucas numbers

A combinatorial proof of multiple angle formulas involving Fibonacci and Lucas numbers Notes on Number Theory and Dscrete Mathematcs ISSN 1310 5132 Vol. 20, 2014, No. 5, 35 39 A combnatoral proof of multple angle formulas nvolvng Fbonacc and Lucas numbers Fernando Córes 1 and Dego Marques

More information

On the Multicriteria Integer Network Flow Problem

On the Multicriteria Integer Network Flow Problem BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of

More information

Welfare Properties of General Equilibrium. What can be said about optimality properties of resource allocation implied by general equilibrium?

Welfare Properties of General Equilibrium. What can be said about optimality properties of resource allocation implied by general equilibrium? APPLIED WELFARE ECONOMICS AND POLICY ANALYSIS Welfare Propertes of General Equlbrum What can be sad about optmalty propertes of resource allocaton mpled by general equlbrum? Any crteron used to compare

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

Uniqueness of Weak Solutions to the 3D Ginzburg- Landau Model for Superconductivity

Uniqueness of Weak Solutions to the 3D Ginzburg- Landau Model for Superconductivity Int. Journal of Math. Analyss, Vol. 6, 212, no. 22, 195-114 Unqueness of Weak Solutons to the 3D Gnzburg- Landau Model for Superconductvty Jshan Fan Department of Appled Mathematcs Nanjng Forestry Unversty

More information

Lecture 10 Support Vector Machines. Oct

Lecture 10 Support Vector Machines. Oct Lecture 10 Support Vector Machnes Oct - 20-2008 Lnear Separators Whch of the lnear separators s optmal? Concept of Margn Recall that n Perceptron, we learned that the convergence rate of the Perceptron

More information

The proximal average for saddle functions and its symmetry properties with respect to partial and saddle conjugacy

The proximal average for saddle functions and its symmetry properties with respect to partial and saddle conjugacy The proxmal average for saddle functons and ts symmetry propertes wth respect to partal and saddle conjugacy Rafal Goebel December 3, 2009 Abstract The concept of the proxmal average for convex functons

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4) I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

Estimation: Part 2. Chapter GREG estimation

Estimation: Part 2. Chapter GREG estimation Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the

More information

Proseminar Optimierung II. Victor A. Kovtunenko SS 2012/2013: LV

Proseminar Optimierung II. Victor A. Kovtunenko SS 2012/2013: LV Prosemnar Optmerung II Vctor A. Kovtunenko Insttute for Mathematcs and Scentfc Computng, Karl-Franzens Unversty of Graz, Henrchstr. 36, 8010 Graz, Austra; Lavrent ev Insttute of Hydrodynamcs, Sberan Dvson

More information

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo

More information