On the Global Linear Convergence of the ADMM with Multi-Block Variables
Tianyi Lin, Shiqian Ma, Shuzhong Zhang

May 31, 2014

Abstract

The alternating direction method of multipliers (ADMM) has been widely used for solving structured convex optimization problems. In particular, the ADMM can solve convex programs that minimize the sum of N convex functions whose N-block variables are linked by some linear constraints. While the convergence of the ADMM for N = 2 was well established in the literature, it remained an open problem for a long time whether or not the ADMM for N ≥ 3 is still convergent. Recently, it was shown in [3] that without further conditions the ADMM for N ≥ 3 may actually fail to converge. In this paper, we show that under some easily verifiable and reasonable conditions the global linear convergence of the ADMM when N ≥ 3 can still be assured, which is important since the ADMM is a popular method for solving large-scale multi-block optimization models and is known to perform very well in practice even when N ≥ 3. Our study aims to offer an explanation for this phenomenon.

Keywords: Alternating Direction Method of Multipliers, Global Linear Convergence, Convex Optimization

1 Introduction

In this paper, we consider the global linear convergence of the standard alternating direction method of multipliers (ADMM) for solving convex minimization problems with N-block variables when N ≥ 3. The problem under consideration can be formulated as

  min  f_1(x_1) + f_2(x_2) + ⋯ + f_N(x_N)
  s.t. A_1x_1 + A_2x_2 + ⋯ + A_Nx_N = b,  x_i ∈ X_i, i = 1, …, N,   (1.1)

where A_i ∈ R^{p×n_i}, b ∈ R^p, X_i ⊂ R^{n_i} are closed convex sets, and f_i : R^{n_i} → R are closed convex functions. Note that the convex constraint x_i ∈ X_i can be incorporated into the objective using an

[Affiliations: Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China. Department of Industrial and Systems Engineering, University of Minnesota, Minneapolis, MN 55455, USA.]
indicator function, i.e., (1.1) can be rewritten as

  min  f̄_1(x_1) + f̄_2(x_2) + ⋯ + f̄_N(x_N)
  s.t. A_1x_1 + A_2x_2 + ⋯ + A_Nx_N = b,   (1.2)

where f̄_i(x_i) := f_i(x_i) + 1_{X_i}(x_i) and the indicator function is defined as

  1_X(x) := 0 if x ∈ X, and +∞ otherwise.

We thus consider the equivalent reformulation (1.2) throughout this paper for the ease of presentation. For given (x_2^k, …, x_N^k; λ^k), a typical iteration of the ADMM for solving (1.2) can be summarized as

  x_1^{k+1} := argmin_{x_1} L_γ(x_1, x_2^k, …, x_N^k; λ^k)
  x_2^{k+1} := argmin_{x_2} L_γ(x_1^{k+1}, x_2, x_3^k, …, x_N^k; λ^k)
  ⋮
  x_N^{k+1} := argmin_{x_N} L_γ(x_1^{k+1}, x_2^{k+1}, …, x_{N−1}^{k+1}, x_N; λ^k)
  λ^{k+1}  := λ^k − γ(Σ_{i=1}^N A_i x_i^{k+1} − b),   (1.3)

where

  L_γ(x_1, …, x_N; λ) := Σ_{i=1}^N f̄_i(x_i) − ⟨λ, Σ_{i=1}^N A_i x_i − b⟩ + (γ/2)‖Σ_{i=1}^N A_i x_i − b‖²

denotes the augmented Lagrangian function of (1.2), with λ being the Lagrange multiplier and γ > 0 being a penalty parameter. It is noted that in each iteration, the ADMM updates the primal variables x_1, …, x_N in a Gauss-Seidel manner.

When N = 2, the ADMM (1.3) was shown to be equivalent to the Douglas-Rachford operator splitting method, which dates back to the 1950s, for solving variational problems arising from PDEs [5, 9]. The convergence of the ADMM (1.3) when N = 2 was thus established in the context of operator splitting methods [16, 8]. Recently, the ADMM has been revisited due to its success in solving structured convex optimization problems arising from sparse and low-rank optimization and related problems; we refer the readers to some recent survey papers for more details, see, e.g., [2, 6]. In [16], Lions and Mercier showed that the Douglas-Rachford operator splitting method converges linearly under the assumption that an involved monotone operator is both coercive and Lipschitz. Eckstein and Bertsekas [7] showed the linear convergence of the ADMM (1.3) with N = 2 for solving linear programs, which depends on a bound on the largest iterate in the course of the algorithm. In a recent work by Deng and Yin [4], a generalized ADMM was proposed in which some proximal terms were added to the two subproblems in (1.3), and it was shown that this generalized ADMM converges linearly under certain assumptions on the strong convexity of the functions f_1 and f_2 and the rank of A_1 and A_2. For instance, one sufficient condition suggested in [4] that guarantees
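To make the iteration scheme (1.3) concrete, the following Python sketch (not part of the paper; the problem data and parameter choices are illustrative assumptions) runs a three-block ADMM on a toy instance with strongly convex quadratic objectives and identity coupling matrices A_i = I, so that each subproblem has a closed-form solution:

```python
import numpy as np

def admm_3block(a1, a2, a3, d, gamma=1.0, iters=200):
    """Gauss-Seidel 3-block ADMM, as in (1.3), for the toy problem
         min 0.5||x1-a1||^2 + 0.5||x2-a2||^2 + 0.5||x3-a3||^2
         s.t. x1 + x2 + x3 = d          (A_i = I, strongly convex f_i).
    Each subproblem minimizes f_i(x_i) - <lam, x_i> + (gamma/2)||x_i + rest - d||^2
    and is solved in closed form."""
    p = d.shape[0]
    x1, x2, x3, lam = np.zeros(p), np.zeros(p), np.zeros(p), np.zeros(p)
    for _ in range(iters):
        # block updates use the most recent values of the other blocks
        x1 = (a1 + lam - gamma * (x2 + x3 - d)) / (1.0 + gamma)
        x2 = (a2 + lam - gamma * (x1 + x3 - d)) / (1.0 + gamma)
        x3 = (a3 + lam - gamma * (x1 + x2 - d)) / (1.0 + gamma)
        # dual update with the full primal residual
        lam = lam - gamma * (x1 + x2 + x3 - d)
    return x1, x2, x3, lam

rng = np.random.default_rng(0)
a1, a2, a3, d = (rng.standard_normal(4) for _ in range(4))
```

With all σ_i = 1 and λ_max(A_i^⊤A_i) = 1, a small penalty such as γ = 0.1 falls in the strongly convex regime the paper analyzes, and the iterates converge to the KKT point x_i = a_i + λ, λ = (d − Σa_i)/3.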
the linear convergence of the generalized ADMM is that f_1 and f_2 are both strongly convex, ∇f_2 is Lipschitz continuous, and A_2 has full row rank. Han and Yuan [11] and Boley [1] both studied the local linear convergence of the ADMM (1.3) when N = 2 for solving quadratic programs. The result in [11] was based on an error bound condition [17], and the one given in [1] was obtained by first writing the ADMM as a matrix recurrence and then performing a spectral analysis on the recurrence. Moreover, it was shown that the ADMM (1.3) when N = 2 converges sublinearly under the simple convexity assumption, in both the ergodic and the non-ergodic sense [13, 18, 12]. It should be noted that all the convergence results on the ADMM (1.3) discussed above are for the case N = 2.

While the convergence properties of the ADMM when N = 2 have been well studied, its convergence when N ≥ 3 remained unclear for a very long time. The following summarizes some recent progress in this direction. In a recent work by Chen et al. [3], a counterexample was given which shows that without further conditions the ADMM for N ≥ 3 may actually fail to converge. Existing works that study sufficient conditions ensuring the convergence of the ADMM when N ≥ 3 are briefly summarized as follows. Han and Yuan [10] proved the global convergence of the ADMM (1.3) under the condition that f_1, …, f_N are all strongly convex and γ is restricted to a certain region. Hong and Luo [14] proposed to adopt a small step size when updating the Lagrange multiplier λ^k in (1.3), i.e., they suggested that the update for λ^k,

  λ^{k+1} := λ^k − γ(Σ_{i=1}^N A_i x_i^{k+1} − b),   (1.4)

be changed to

  λ^{k+1} := λ^k − αγ(Σ_{i=1}^N A_i x_i^{k+1} − b),   (1.5)

where α > 0 is a small step size. It was shown in [14] that this variant of the ADMM converges linearly under the assumption that a certain error bound condition holds and α is bounded by some constant related to the error bound condition. In a very recent work by Lin, Ma and Zhang [15], it was shown that the ADMM (1.3) possesses a sublinear convergence rate in both the ergodic and the non-ergodic sense under the conditions that f_2, …, f_N are strongly convex and γ is restricted to a certain region.

Our contribution. In this paper, we show the global linear convergence of the ADMM (1.3) when N ≥ 3. It should be noted that the linear convergence results in [16, 4, 11, 1] are for the case N = 2, while ours consider the case N ≥ 3. Moreover, compared with the local linear convergence results in [11] and [1] for N = 2, we prove global linear convergence for N ≥ 3. Furthermore, our result is for the original standard multi-block ADMM (1.3), while the one presented
in [14] is a variant of (1.3) which replaces (1.4) with (1.5). To the best of our knowledge, the results in this paper are the first global linear convergence results for the original standard multi-block ADMM (1.3) when N ≥ 3.

The rest of this paper is organized as follows. In Section 2, we provide some preliminaries and prove three technical lemmas for the subsequent analysis. In Section 3, we prove the global linear convergence of the ADMM (1.3) under three different scenarios. Finally, we conclude the paper in Section 4.

2 Preliminaries and Technical Lemmas

We use Ω ⊂ X_1 × X_2 × ⋯ × X_N × R^p to denote the set of primal-dual optimal solutions of (1.2). Note that according to the first-order optimality conditions for (1.2), solving (1.2) is equivalent to finding (x_1*, …, x_N*, λ*) ∈ Ω
such that the following hold:

  A_i^⊤ λ* ∈ ∂f̄_i(x_i*),  i = 1, 2, …, N,   (2.1)
  Σ_{i=1}^N A_i x_i* − b = 0.   (2.2)

We thus make the following assumption throughout this paper.

Assumption 2.1. The optimal set Ω for problem (1.2) is non-empty.

In our analysis, the following well-known identity is used frequently:

  ⟨w_1 − w_2, w_3 − w_4⟩ = (1/2)(‖w_1 − w_4‖² − ‖w_1 − w_3‖²) + (1/2)(‖w_2 − w_3‖² − ‖w_2 − w_4‖²).   (2.3)

Notation. We use g_i to denote a subgradient of f̄_i; λ_max(B) and λ_min(B) denote respectively the largest and smallest eigenvalues of a real symmetric matrix B; ‖x‖ denotes the Euclidean norm of x. We use σ_i ≥ 0 to denote the convexity parameter of f̄_i, i.e., the following inequalities hold for i = 1, …, N:

  ⟨x − y, g_i(x) − g_i(y)⟩ ≥ σ_i ‖x − y‖²,  for all x, y ∈ X_i,   (2.4)

where g_i(x) ∈ ∂f̄_i(x), the subdifferential of f̄_i. Note that f̄_i is strongly convex if and only if σ_i > 0, and if f̄_i is convex but not strongly convex, then σ_i = 0.

In this paper, we consider three scenarios that lead to the global linear convergence of the ADMM (1.3). The conditions of the three scenarios are listed in Table 1.

  scenario | strongly convex     | Lipschitz continuous  | full row rank | full column rank
  1        | f_2, …, f_N         | ∇f_N                  | A_N           |
  2        | f_1, …, f_N         | ∇f_1, …, ∇f_N         |               |
  3        | f_2, …, f_N         | ∇f_1, …, ∇f_N         |               | A_1

  Table 1: Three scenarios leading to global linear convergence.

We remark here that when N = 2, the three scenarios listed in Table 1 actually reduce to the same conditions considered by Deng and Yin (as scenarios 1, 2 and 3, respectively) in [4]. We also remark that since we have incorporated the indicator functions into the objective function in (1.2), scenario 1 actually requires that there be no constraint x_N ∈ X_N, while scenarios 2 and 3 require that there be no constraints x_i ∈ X_i, i = 1, …, N.

The first-order optimality conditions for the N subproblems in (1.3) are given by

  A_i^⊤ [λ^k − γ(Σ_{j=1}^{i} A_j x_j^{k+1} + Σ_{j=i+1}^{N} A_j x_j^k − b)] ∈ ∂f̄_i(x_i^{k+1}),  i = 1, 2, …, N,   (2.5)
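The identity (2.3) is elementary to verify by expanding both sides; a quick numerical check (illustrative only, not part of the paper) confirms the form stated above:

```python
import numpy as np

# Numerical check of the four-point identity (2.3):
#   <w1 - w2, w3 - w4> = (1/2)(||w1 - w4||^2 - ||w1 - w3||^2)
#                      + (1/2)(||w2 - w3||^2 - ||w2 - w4||^2)
rng = np.random.default_rng(1)
w1, w2, w3, w4 = rng.standard_normal((4, 5))   # four random vectors in R^5

lhs = np.dot(w1 - w2, w3 - w4)
rhs = 0.5 * (np.dot(w1 - w4, w1 - w4) - np.dot(w1 - w3, w1 - w3)) \
    + 0.5 * (np.dot(w2 - w3, w2 - w3) - np.dot(w2 - w4, w2 - w4))
```

Expanding the squared norms, every quadratic term cancels and only the cross term ⟨w_1 − w_2, w_3 − w_4⟩ survives, which is why the check holds for arbitrary vectors.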
where we have adopted the convention Σ_{j=N+1}^{N} a_j = 0. By combining with the updating formula (1.4) for λ^k, (2.5) can be rewritten as

  A_i^⊤ [λ^{k+1} − γ Σ_{j=i+1}^{N} A_j (x_j^k − x_j^{k+1})] ∈ ∂f̄_i(x_i^{k+1}),  i = 1, 2, …, N.   (2.6)

Before we present the linear convergence of the ADMM (1.3), we prove the following three technical lemmas, which will be used in the subsequent analysis.

Lemma 2.2. Let (x_1*, …, x_N*, λ*) ∈ Ω. The sequence {(x_1^k, x_2^k, …, x_N^k, λ^k)} generated by the ADMM (1.3) satisfies

  [γ Σ_{i=2}^N ‖A_i(x_i* − x_i^k)‖² + (1/γ)‖λ* − λ^k‖²] − [γ Σ_{i=2}^N ‖A_i(x_i* − x_i^{k+1})‖² + (1/γ)‖λ* − λ^{k+1}‖²]
  ≥ 2σ_1‖x_1^{k+1} − x_1*‖² + Σ_{i=2}^{N−1} [2σ_i − γ(N−i+1)λ_max(A_i^⊤A_i)] ‖x_i^{k+1} − x_i*‖²
    + [2σ_N − γ(N²−N)λ_max(A_N^⊤A_N)] ‖x_N^{k+1} − x_N*‖² + γ ‖A_1x_1^{k+1} + Σ_{i=2}^N A_ix_i^k − b‖².   (2.7)

Proof. Combining (2.6), (2.1) and (2.4) yields

  ⟨x_i^{k+1} − x_i*, A_i^⊤(λ^{k+1} − λ*) − γ A_i^⊤ Σ_{j=i+1}^N A_j(x_j^k − x_j^{k+1})⟩ ≥ σ_i ‖x_i^{k+1} − x_i*‖²,  i = 1, …, N.   (2.8)

From (1.4) and (2.2), it is easy to obtain

  Σ_{i=1}^N A_i (x_i^{k+1} − x_i*) = (1/γ)(λ^k − λ^{k+1}).   (2.9)

Summing (2.8) over i = 1, …, N and using (2.9), we get

  (1/γ)⟨λ^k − λ^{k+1}, λ^{k+1} − λ*⟩ + γ Σ_{i=1}^N ⟨x_i* − x_i^{k+1}, A_i^⊤ Σ_{j=i+1}^N A_j(x_j^k − x_j^{k+1})⟩ ≥ Σ_{i=1}^N σ_i ‖x_i^{k+1} − x_i*‖².   (2.10)
By adopting the convention Σ_{i=1}^{0} a_i = 0 and applying the identity (2.3) together with (2.9), the cross term in (2.10) can be rewritten as

  Σ_{i=1}^N ⟨A_i(x_i* − x_i^{k+1}), Σ_{j=i+1}^N A_j(x_j^k − x_j^{k+1})⟩
  = (1/2) Σ_{i=2}^N (‖A_i(x_i* − x_i^k)‖² − ‖A_i(x_i* − x_i^{k+1})‖²) + (1/(2γ²))‖λ^{k+1} − λ^k‖²
    − (1/2)‖A_1x_1^{k+1} + Σ_{i=2}^N A_ix_i^k − b‖² − Σ_{2≤i<j≤N} ⟨A_i(x_i^k − x_i^{k+1}), A_j(x_j* − x_j^k)⟩,   (2.11)

where the second term on the right-hand side follows from (2.9) and the last equality uses (2.2). Substituting (2.11) into (2.10) and rearranging gives

  (1/γ)⟨λ^k − λ^{k+1}, λ^{k+1} − λ*⟩ + (1/(2γ))‖λ^{k+1} − λ^k‖² + (γ/2) Σ_{i=2}^N (‖A_i(x_i* − x_i^k)‖² − ‖A_i(x_i* − x_i^{k+1})‖²)
  ≥ Σ_{i=1}^N σ_i ‖x_i^{k+1} − x_i*‖² + (γ/2)‖A_1x_1^{k+1} + Σ_{i=2}^N A_ix_i^k − b‖²
    + γ Σ_{2≤i<j≤N} ⟨A_i(x_i^k − x_i^{k+1}), A_j(x_j* − x_j^k)⟩.   (2.12)
Using (2.3) again, together with the convexity of ‖·‖², the remaining cross terms in (2.12) can be bounded by quantities of the form

  ‖A_i(x_i^{k+1} − x_i*)‖² ≤ λ_max(A_i^⊤A_i) ‖x_i^{k+1} − x_i*‖²,  i = 2, …, N,   (2.13)

which, after collecting terms, contribute the coefficients γ(N−i+1)λ_max(A_i^⊤A_i) for i = 2, …, N−1 and γ(N²−N)λ_max(A_N^⊤A_N) for i = N on the right-hand side. Combining this with (2.12) and using the identity

  (1/γ)⟨λ^k − λ^{k+1}, λ^{k+1} − λ*⟩ + (1/(2γ))‖λ^{k+1} − λ^k‖² = (1/(2γ))(‖λ* − λ^k‖² − ‖λ* − λ^{k+1}‖²),

we arrive at (2.7). □
Remark 2.3. We note here that (2.7) can be equivalently rearranged as

  γ Σ_{i=2}^N ‖A_i(x_i* − x_i^k)‖² + (1/γ)‖λ* − λ^k‖²
  ≥ 2σ_1‖x_1^{k+1} − x_1*‖² + Σ_{i=2}^{N−1} [2σ_i − γ(N−i+1)λ_max(A_i^⊤A_i)] ‖x_i^{k+1} − x_i*‖²
    + [2σ_N − γ(N²−N)λ_max(A_N^⊤A_N)] ‖x_N^{k+1} − x_N*‖² + γ ‖A_1x_1^{k+1} + Σ_{i=2}^N A_ix_i^k − b‖²
    + γ Σ_{i=2}^N ‖A_i(x_i* − x_i^{k+1})‖² + (1/γ)‖λ* − λ^{k+1}‖².   (2.14)

Both (2.7) and (2.14) will be used in the subsequent analysis. In scenario 1, we will use (2.7) to show that γ Σ_{i=2}^N ‖A_i(x_i* − x_i^k)‖² + (1/γ)‖λ* − λ^k‖² converges to zero linearly; in scenarios 2 and 3, we will use (2.14) to show the same.

The next lemma concerns the convergence of {(x_1^k, …, x_N^k, λ^k)} under the conditions listed in scenarios 2 and 3 of Table 1.

Lemma 2.4. Assume that the conditions listed in scenario 2 or scenario 3 of Table 1 hold. Moreover, assume that γ satisfies

  γ < min{ min_{i=2,…,N−1} 2σ_i / ((N−i+1)λ_max(A_i^⊤A_i)), 2σ_N / ((N²−N)λ_max(A_N^⊤A_N)) }.   (2.15)

Then (x_1^k, …, x_N^k, λ^k) generated by the ADMM (1.3) converges to some (x_1*, …, x_N*, λ*) ∈ Ω.

Proof. Note that the conditions listed in scenarios 2 and 3 of Table 1 both require that f_2, …, f_N be strongly convex. Denote the right-hand side of inequality (2.7) by ξ^k. It follows from (2.15) and (2.7) that ξ^k ≥ 0 and Σ_{k=0}^∞ ξ^k < ∞, which further implies ξ^k → 0. Hence, for any (x_1*, …, x_N*, λ*) ∈ Ω, we have ‖x_i^k − x_i*‖ → 0 for i = 2, …, N, and A_1x_1^{k+1} + Σ_{i=2}^N A_ix_i^k − b → 0, which also implies A_1x_1^k − A_1x_1* → 0. In scenario 2, it is assumed that f_1 is strongly convex; thus σ_1 > 0 and (2.7) implies
that ‖x_1^k − x_1*‖ → 0. In scenario 3, it is assumed that A_1 has full column rank; it thus follows from A_1x_1^k − A_1x_1* → 0 that ‖x_1^k − x_1*‖ → 0. Moreover, when (2.15) holds, it follows from (2.7) that γ Σ_{i=2}^N ‖A_i(x_i* − x_i^k)‖² + (1/γ)‖λ* − λ^k‖² is non-increasing and bounded. It thus follows that ‖λ* − λ^k‖ converges and {λ^k} is bounded. Therefore {λ^k} has a convergent subsequence {λ^{k_j}}. Let λ̂ = lim_j λ^{k_j}. By passing to the limit in (2.6), it holds that A_i^⊤ λ̂ = ∇f_i(x_i*) for i = 1, 2, …, N. Thus (x_1*, …, x_N*, λ̂) ∈ Ω and we can simply take λ* = λ̂. Since ‖λ* − λ^k‖ converges and λ^{k_j} → λ*, we conclude that λ^k → λ*. □

Before proceeding to the next lemma, we define a constant κ that will be used subsequently.

Definition 2.1. We define a constant κ as follows. If the matrix [A_1, …, A_N] has full row rank, then

  κ := λ_min^{-1}([A_1, …, A_N][A_1, …, A_N]^⊤) > 0.

Otherwise, assume rank([A_1, …, A_N]) = r < p. Without loss of generality, assuming that the first r rows of [A_1, …, A_N], denoted by [A_1^r, …, A_N^r], are linearly independent, we have

  [A_1, …, A_N] = [I; B] [A_1^r, …, A_N^r],   (2.16)

where I ∈ R^{r×r} is the identity matrix and B ∈ R^{(p−r)×r}. Let E := (I + B^⊤B)[A_1^r, …, A_N^r]. It is easy to see that E has full row rank. Then κ is defined as

  κ := λ_min^{-1}(EE^⊤) λ_max(I + B^⊤B) > 0.

The next lemma concerns bounding ‖λ^{k+1} − λ*‖² using terms related to ‖x_i^k − x_i*‖², i = 1, …, N.

Lemma 2.5. Let (x_1*, …, x_N*, λ*) ∈ Ω. Assume that the conditions listed in scenario 2 or scenario 3 of Table 1 hold, and γ satisfies (2.15). Suppose ∇f_i is Lipschitz continuous with constant L_i for i = 1, …, N, and the initial Lagrange multiplier λ^0 lies in the range space of [A_1, …, A_N] (note that letting λ^0 = 0 suffices). It holds that

  ‖λ^{k+1} − λ*‖² ≤ 2κ Σ_{i=1}^N L_i² ‖x_i^{k+1} − x_i*‖² + 4κγ² Σ_{i=2}^N λ_max(A_i^⊤A_i) (‖A_i(x_i^k − x_i*)‖² + ‖A_i(x_i^{k+1} − x_i*)‖²),   (2.17)

where κ > 0 is defined in Definition 2.1.
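The constant κ defined above can be computed directly from the concatenated matrix [A_1, …, A_N]. The helper below is an illustrative sketch (the function name and row-selection strategy are ours, not from the paper); in the rank-deficient case it assumes, as the definition does without loss of generality, that the first r rows are linearly independent:

```python
import numpy as np

def kappa(A):
    """Compute kappa for A = [A_1, ..., A_N] (blocks concatenated horizontally),
    mirroring the two cases of the definition. Illustrative sketch only."""
    p = A.shape[0]
    r = np.linalg.matrix_rank(A)
    if r == p:                                   # full row rank case
        return 1.0 / np.linalg.eigvalsh(A @ A.T)[0]
    Ar = A[:r, :]                                # first r rows, assumed independent
    # recover B from A[r:] = B @ Ar (exact, since A[r:] lies in the row space of Ar)
    B = np.linalg.lstsq(Ar.T, A[r:, :].T, rcond=None)[0].T
    M = np.eye(r) + B.T @ B
    E = M @ Ar                                   # E = (I + B^T B) A_r, full row rank
    return np.linalg.eigvalsh(M)[-1] / np.linalg.eigvalsh(E @ E.T)[0]
```

The point of κ is the bound ‖λ‖² ≤ κ ‖[A_1, …, A_N]^⊤ λ‖² for every λ in the range space of [A_1, …, A_N], which the rank-deficient branch achieves by passing through the reduced matrix E.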
10 In case, [A 1,, A N ] has full row rank, so 18 holds trvally Now we consder case By the updatng formula of λ k1 1 and, we know that f the ntal Lagrange multpler λ 0 s n the range space of [A 1,, A N ], then λ k, k = 1,,, always stay n the range space of [A 1,, A N ], so does λ Therefore, from 16, we can get λ k1 = [ I B ] [ λ k1 r, λ I = B ] λ r, A 1 A N λ k1 λ = A r 1 A r N I B Bλ k1 r λ r, where λ k1 r and λ r denote the frst r rows of λ k1 and λ, respectvely Snce E := IB B[A r 1,, Ar N ] has full row rank, t now follows that A 1 A N λ k1 λ whch mples 18 = E λ k1 r λ r λ mn EE λ k1 r λ r λ mnee λ max I B B λk1 λ, Usng the optmalty condtons 6, and the Lpschtz contnuty of f, = 1,, N, we have A 1 γa A 1 0 λk1 λ 0 A x k x k1 A = γa A Nx k N x k1 N N 0 0 = f x k1 f x L x k1 x, whch together wth 18 mples that λ k1 λ A 1 A κ λk1 λ A N γa κ A x k x k1 = γa A Nx k N x k1 N L x k1 x 0 0 κγ λ max A A A x k x k1 κ L x k1 x =1 κγ λ max A A A x k x A x k1 x κ L x k1 x =1 =1 10
3 Global Linear Convergence of the ADMM

In this section, we prove the global linear convergence of the ADMM (1.3) under the three scenarios listed in Table 1. We note the following inequality,

  ‖Σ_{i=2}^N A_i(x_i* − x_i^{k+1})‖² ≤ (N−1) Σ_{i=2}^N λ_max(A_i^⊤A_i) ‖x_i* − x_i^{k+1}‖²,   (3.1)

which follows from the convexity of ‖·‖². We shall use this inequality in our subsequent analysis.

3.1 Q-linear convergence under scenario 1

Theorem 3.1. Suppose that the conditions listed in scenario 1 of Table 1 hold. If γ satisfies (2.15), then it holds that

  γ Σ_{i=2}^N ‖A_i(x_i* − x_i^k)‖² + (1/γ)‖λ* − λ^k‖² ≥ (1 + δ_1) [γ Σ_{i=2}^N ‖A_i(x_i* − x_i^{k+1})‖² + (1/γ)‖λ* − λ^{k+1}‖²],   (3.2)

where

  δ_1 := min{ min_{i=2,…,N−1} [2σ_i − γ(N−i+1)λ_max(A_i^⊤A_i)] / [γ(N−i+1)λ_max(A_i^⊤A_i)],
              [2σ_N − γ(N²−N)λ_max(A_N^⊤A_N)] / [γN(N−1)λ_max(A_N^⊤A_N) + λ_min^{-1}(A_N A_N^⊤) L_N² / γ] }.   (3.3)

Note that it follows from (2.15) that δ_1 > 0. As a result of (3.2), we conclude that (A_2x_2^k, A_3x_3^k, …, A_Nx_N^k, λ^k) converges Q-linearly.

Proof. Because ∇f_N is Lipschitz continuous with constant L_N, by setting i = N in (2.6) and (2.1), we get

  A_N^⊤(λ^{k+1} − λ*) = ∇f_N(x_N^{k+1}) − ∇f_N(x_N*),  so  ‖A_N^⊤(λ^{k+1} − λ*)‖ ≤ L_N ‖x_N^{k+1} − x_N*‖,

which implies

  ‖λ^{k+1} − λ*‖² ≤ λ_min^{-1}(A_N A_N^⊤) L_N² ‖x_N^{k+1} − x_N*‖²,   (3.4)

due to the fact that A_N has full row rank.
By combining (2.7), (3.3), (3.1) and (3.4), it follows (note that we do not assume f_1 to be strongly convex, so σ_1 = 0 is allowed) that

  [γ Σ_{i=2}^N ‖A_i(x_i* − x_i^k)‖² + (1/γ)‖λ* − λ^k‖²] − [γ Σ_{i=2}^N ‖A_i(x_i* − x_i^{k+1})‖² + (1/γ)‖λ* − λ^{k+1}‖²]
  ≥ Σ_{i=2}^{N−1} [2σ_i − γ(N−i+1)λ_max(A_i^⊤A_i)] ‖x_i^{k+1} − x_i*‖² + [2σ_N − γ(N²−N)λ_max(A_N^⊤A_N)] ‖x_N^{k+1} − x_N*‖²
  ≥ δ_1 [γ Σ_{i=2}^N ‖A_i(x_i* − x_i^{k+1})‖² + (1/γ)‖λ* − λ^{k+1}‖²],

which further implies (3.2). □

3.2 Q-linear convergence under scenario 2

Theorem 3.2. Suppose that the conditions listed in scenario 2 of Table 1 hold. If γ satisfies

  γ < min{ min_{i=2,…,N−1} 2σ_i / (3(N−i+1)λ_max(A_i^⊤A_i)), 2σ_N / ((3N²−3N)λ_max(A_N^⊤A_N)) },   (3.5)

then it holds that

  γ Σ_{i=2}^N ‖A_i(x_i* − x_i^k)‖² + (1/γ)‖λ* − λ^k‖² ≥ (1 + δ_2) [γ Σ_{i=2}^N ‖A_i(x_i* − x_i^{k+1})‖² + (1/γ)‖λ* − λ^{k+1}‖²],   (3.6)

where

  δ_2 := min{ σ_1 γ / (2κL_1²), δ_3, δ_4, δ_5 },   (3.7)

and

  δ_3 := min_{i=2,…,N−1} [2σ_i γ − 3γ²(N−i+1)λ_max(A_i^⊤A_i)] / [γ²(N−i+1)λ_max(A_i^⊤A_i) + 2κL_i²],
  δ_4 := min_{i=2,…,N} 1 / (4κλ_max(A_i^⊤A_i)),
  δ_5 := [2σ_N γ − (3N²−3N)γ²λ_max(A_N^⊤A_N)] / [γ²N(N−1)λ_max(A_N^⊤A_N) + 2κL_N²],   (3.8)
where κ is defined in Definition 2.1. Note that it follows from (3.5) that δ_2 > 0. As a result of (3.6), we conclude that (A_2x_2^k, A_3x_3^k, …, A_Nx_N^k, λ^k) converges Q-linearly.

Proof. By (2.17), the term (1/γ)‖λ* − λ^{k+1}‖² is bounded by a weighted sum of ‖x_i^{k+1} − x_i*‖², ‖A_i(x_i^k − x_i*)‖² and ‖A_i(x_i^{k+1} − x_i*)‖², and by (3.1) the same holds for γ Σ_{i=2}^N ‖A_i(x_i* − x_i^{k+1})‖². The choice of δ_2 in (3.7) then guarantees that

  δ_2 [γ Σ_{i=2}^N ‖A_i(x_i* − x_i^{k+1})‖² + (1/γ)‖λ* − λ^{k+1}‖²]
  ≤ 2σ_1‖x_1^{k+1} − x_1*‖² + Σ_{i=2}^{N−1} [2σ_i − γ(N−i+1)λ_max(A_i^⊤A_i)] ‖x_i^{k+1} − x_i*‖²
    + [2σ_N − γ(N²−N)λ_max(A_N^⊤A_N)] ‖x_N^{k+1} − x_N*‖² + γ ‖A_1x_1^{k+1} + Σ_{i=2}^N A_ix_i^k − b‖²,   (3.9)

where the inequality follows from the definition of δ_2 in (3.7). Finally, combining (3.9) with (2.14) yields (3.6). □

3.3 Q-linear convergence under scenario 3

Theorem 3.3. Suppose that the conditions listed in scenario 3 of Table 1 hold. If γ satisfies (3.5), then it holds that

  γ Σ_{i=2}^N ‖A_i(x_i* − x_i^k)‖² + (1/γ)‖λ* − λ^k‖² ≥ (1 + δ_6) [γ Σ_{i=2}^N ‖A_i(x_i* − x_i^{k+1})‖² + (1/γ)‖λ* − λ^{k+1}‖²],   (3.10)
where

  δ_6 := min{ γ² / (2κγ²(N−1)λ_max(A_1^⊤A_1) + 2κL_1² λ_min^{-1}(A_1^⊤A_1)), δ_3, δ_4, δ_5 },   (3.11)

with δ_3, δ_4 and δ_5 defined in (3.8). Note that it follows from (3.5) that δ_6 > 0. As a result of (3.10), we conclude that (A_2x_2^k, A_3x_3^k, …, A_Nx_N^k, λ^k) converges Q-linearly.

Proof. Since A_1 has full column rank, it is easy to verify that

  λ_min(A_1^⊤A_1) ‖x_1^{k+1} − x_1*‖² ≤ ‖A_1(x_1^{k+1} − x_1*)‖² = ‖(A_1x_1^{k+1} + Σ_{i=2}^N A_ix_i^k − b) − Σ_{i=2}^N A_i(x_i^k − x_i*)‖²
  ≤ 2‖A_1x_1^{k+1} + Σ_{i=2}^N A_ix_i^k − b‖² + 2(N−1) Σ_{i=2}^N ‖A_i(x_i^k − x_i*)‖².   (3.12)

Combining (3.12) and (2.17) yields

  (1/γ)‖λ* − λ^{k+1}‖² ≤ (2κ/γ) Σ_{i=2}^N L_i² ‖x_i^{k+1} − x_i*‖²
    + (2κL_1² λ_min^{-1}(A_1^⊤A_1)/γ) [2‖A_1x_1^{k+1} + Σ_{i=2}^N A_ix_i^k − b‖² + 2(N−1) Σ_{i=2}^N ‖A_i(x_i^k − x_i*)‖²]
    + 4κγ Σ_{i=2}^N λ_max(A_i^⊤A_i) (‖A_i(x_i^k − x_i*)‖² + ‖A_i(x_i^{k+1} − x_i*)‖²).   (3.13)
Combining (3.13), (3.12) and (3.11), and using the definition of δ_6, we obtain

  δ_6 [γ Σ_{i=2}^N ‖A_i(x_i* − x_i^{k+1})‖² + (1/γ)‖λ* − λ^{k+1}‖²]
  ≤ 2σ_1‖x_1^{k+1} − x_1*‖² + Σ_{i=2}^{N−1} [2σ_i − γ(N−i+1)λ_max(A_i^⊤A_i)] ‖x_i^{k+1} − x_i*‖²
    + [2σ_N − γ(N²−N)λ_max(A_N^⊤A_N)] ‖x_N^{k+1} − x_N*‖² + γ ‖A_1x_1^{k+1} + Σ_{i=2}^N A_ix_i^k − b‖²,   (3.14)

which together with (2.14) implies (3.10). □

3.4 R-linear convergence

From the results in Theorems 3.1, 3.2 and 3.3, we have the following immediate corollary on the R-linear convergence of the ADMM (1.3).

Corollary 3.4. Under the same conditions as in Theorem 3.1, Theorem 3.2 or Theorem 3.3, x_N^k, λ^k and A_ix_i^k, i = 1, …, N−1, converge R-linearly. Moreover, if A_i, i = 1, 2, …, N−1, are further assumed to have full column rank, then x_i^k, i = 1, 2, …, N−1, converge R-linearly.

Proof. Note that under all three scenarios, we have shown that the sequence (A_2x_2^k, A_3x_3^k, …, A_Nx_N^k, λ^k) converges Q-linearly. It follows that λ^k and A_ix_i^k, i = 2, …, N, converge R-linearly, since any part of a Q-linearly convergent quantity converges R-linearly. It then follows from (2.9) that A_1x_1^k converges R-linearly. By setting i = N in (2.8), one obtains

  ⟨x_N^{k+1} − x_N*, A_N^⊤(λ^{k+1} − λ*)⟩ ≥ σ_N ‖x_N^{k+1} − x_N*‖²,

which implies

  ‖x_N^{k+1} − x_N*‖ · ‖A_N^⊤(λ^{k+1} − λ*)‖ ≥ σ_N ‖x_N^{k+1} − x_N*‖²,
i.e.,

  ‖x_N^{k+1} − x_N*‖ ≤ ‖A_N^⊤(λ^{k+1} − λ*)‖ / σ_N.

The R-linear convergence of x_N^k then follows from the fact that λ^k converges R-linearly. □

Now we make some remarks on the convergence results presented in this section.

Remark 3.5. If we incorporate the indicator function into the objective function in (1.2), then its subgradient cannot be Lipschitz continuous on the boundary of the constraint set. Therefore, scenarios 2 and 3 can only occur if the constraint sets X_i are actually the whole space. However, scenario 1 does allow most of the constraint sets to exist; essentially, it only requires that x_N be unconstrained, while all other blocks of variables can be constrained. It remains an interesting question whether the linear convergence rate still holds if all blocks of variables are constrained.

Remark 3.6. Finally, we remark that scenario 1 in Table 1 also gives rise to a linear convergence rate of the ADMM for convex optimization with inequality constraints:

  min  f_1(x_1) + f_2(x_2) + ⋯ + f_N(x_N)
  s.t. A_1x_1 + A_2x_2 + ⋯ + A_Nx_N ≤ b,  x_i ∈ X_i, i = 1, 2, …, N.

In that case, by introducing a slack variable x_0 with the constraint x_0 ∈ R_+^p, the corresponding ADMM becomes

  x_0^{k+1} := argmin_{x_0 ∈ R_+^p} L_γ(x_0, x_1^k, …, x_N^k; λ^k) = [b − Σ_{i=1}^N A_ix_i^k + λ^k/γ]_+,
  x_i^{k+1} := argmin_{x_i ∈ X_i} L_γ(x_0^{k+1}, x_1^{k+1}, …, x_{i−1}^{k+1}, x_i, x_{i+1}^k, …, x_N^k; λ^k),  i = 1, 2, …, N,
  λ^{k+1}  := λ^k − γ(x_0^{k+1} + Σ_{i=1}^N A_ix_i^{k+1} − b),

where

  L_γ(x_0, x_1, …, x_N; λ) := Σ_{i=1}^N f_i(x_i) − ⟨λ, x_0 + Σ_{i=1}^N A_ix_i − b⟩ + (γ/2)‖x_0 + Σ_{i=1}^N A_ix_i − b‖².

Suppose that the functions f_i, i = 2, …, N, are all strongly convex, ∇f_N is Lipschitz continuous, the constraint x_N ∈ X_N is not present, and A_N has full row rank. Then Theorem 3.1 assures that the above ADMM algorithm converges globally linearly.

4 Conclusions

In this paper we proved that the original ADMM for convex optimization with multi-block variables is linearly convergent under some conditions. In particular, we presented three scenarios under which a
linear convergence rate holds for the ADMM; these conditions can be considered as extensions of the ones discussed in [4] for the 2-block ADMM. Convergence and complexity analysis for the multi-block ADMM are important because the ADMM is widely used and acknowledged to be an efficient and effective practical solution method for large-scale convex optimization models arising from image processing, statistics, machine learning, and so on.

Acknowledgements

Research of Shiqian Ma was supported in part by the Hong Kong Research Grants Council (RGC) Early Career Scheme (ECS) (Project ID: CUHK). Research of Shuzhong Zhang was supported in part by the National Science Foundation under Grant Number CMMI-1161242.

References

[1] D. Boley. Local linear convergence of the alternating direction method of multipliers on quadratic or linear programs. SIAM Journal on Optimization, 23(4):2183–2207, 2013.

[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.

[3] C. Chen, B. He, Y. Ye, and X. Yuan. The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Preprint, 2013.

[4] W. Deng and W. Yin. On the global and linear convergence of the generalized alternating direction method of multipliers. Technical report, Rice University CAAM, 2012.

[5] J. Douglas and H. H. Rachford. On the numerical solution of the heat conduction problem in 2 and 3 space variables. Transactions of the American Mathematical Society, 82:421–439, 1956.

[6] J. Eckstein. Augmented Lagrangian and alternating direction methods for convex optimization: A tutorial and some illustrative computational results. Preprint, 2012.

[7] J. Eckstein and D. P. Bertsekas. An alternating direction method for linear programming. Technical report, MIT Laboratory for Information and Decision Systems, 1990.

[8] J. Eckstein and D. P. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming, 55:293–318, 1992.

[9] D. Gabay. Applications of the method of
multipliers to variational inequalities. In M. Fortin and R. Glowinski, editors, Augmented Lagrangian Methods: Applications to the Solution of Boundary Value Problems. North-Holland, Amsterdam, 1983.

[10] D. Han and X. Yuan. A note on the alternating direction method of multipliers. Journal of Optimization Theory and Applications, 155(1):227–238, 2012.
[11] D. Han and X. Yuan. Local linear convergence of the alternating direction method of multipliers for quadratic programs. SIAM Journal on Numerical Analysis, 51(6):3446–3457, 2013.

[12] B. He and X. Yuan. On nonergodic convergence rate of Douglas-Rachford alternating direction method of multipliers. Preprint, 2012.

[13] B. He and X. Yuan. On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method. SIAM Journal on Numerical Analysis, 50(2):700–709, 2012.

[14] M. Hong and Z. Luo. On the linear convergence of the alternating direction method of multipliers. Preprint, 2012.

[15] T. Lin, S. Ma, and S. Zhang. On the convergence rate of multi-block ADMM. Submitted, March 2014.

[16] P. L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM Journal on Numerical Analysis, 16(6):964–979, 1979.

[17] Z.-Q. Luo and P. Tseng. Error bounds and the convergence analysis of matrix splitting algorithms for the affine variational inequality problem. SIAM Journal on Optimization, 2(1):43–54, 1992.

[18] R. D. C. Monteiro and B. F. Svaiter. Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers. SIAM Journal on Optimization, 23(1):475–507, 2013.
More informationLeast squares cubic splines without B-splines S.K. Lucas
Least squares cubc splnes wthout B-splnes S.K. Lucas School of Mathematcs and Statstcs, Unversty of South Australa, Mawson Lakes SA 595 e-mal: stephen.lucas@unsa.edu.au Submtted to the Gazette of the Australan
More informationLecture 4. Instructor: Haipeng Luo
Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would
More informationStanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011
Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected
More informationAPPENDIX A Some Linear Algebra
APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,
More informationSingular Value Decomposition: Theory and Applications
Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real
More informationLecture 12: Discrete Laplacian
Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus
More informationarxiv: v1 [math.oc] 6 Jan 2016
arxv:1601.01174v1 [math.oc] 6 Jan 2016 THE SUPPORTING HALFSPACE - QUADRATIC PROGRAMMING STRATEGY FOR THE DUAL OF THE BEST APPROXIMATION PROBLEM C.H. JEFFREY PANG Abstract. We consder the best approxmaton
More informationInteractive Bi-Level Multi-Objective Integer. Non-linear Programming Problem
Appled Mathematcal Scences Vol 5 0 no 65 3 33 Interactve B-Level Mult-Objectve Integer Non-lnear Programmng Problem O E Emam Department of Informaton Systems aculty of Computer Scence and nformaton Helwan
More informationA Local Variational Problem of Second Order for a Class of Optimal Control Problems with Nonsmooth Objective Function
A Local Varatonal Problem of Second Order for a Class of Optmal Control Problems wth Nonsmooth Objectve Functon Alexander P. Afanasev Insttute for Informaton Transmsson Problems, Russan Academy of Scences,
More informationMAT 578 Functional Analysis
MAT 578 Functonal Analyss John Qugg Fall 2008 Locally convex spaces revsed September 6, 2008 Ths secton establshes the fundamental propertes of locally convex spaces. Acknowledgment: although I wrote these
More informationarxiv: v3 [math.na] 1 Jul 2017
Accelerated Alternatng Drecton Method of Multplers: an Optmal O/K Nonergodc Analyss Huan L Zhouchen Ln arxv:608.06366v3 [math.na] Jul 07 July, 07 Abstract The Alternatng Drecton Method of Multplers ADMM
More informationVector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.
Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm
More informationPower law and dimension of the maximum value for belief distribution with the max Deng entropy
Power law and dmenson of the maxmum value for belef dstrbuton wth the max Deng entropy Bngy Kang a, a College of Informaton Engneerng, Northwest A&F Unversty, Yanglng, Shaanx, 712100, Chna. Abstract Deng
More informationSolving the Quadratic Eigenvalue Complementarity Problem by DC Programming
Solvng the Quadratc Egenvalue Complementarty Problem by DC Programmng Y-Shua Nu 1, Joaqum Júdce, Le Th Hoa An 3 and Pham Dnh Tao 4 1 Shangha JaoTong Unversty, Maths Departement and SJTU-Parstech, Chna
More informatione - c o m p a n i o n
OPERATIONS RESEARCH http://dxdoorg/0287/opre007ec e - c o m p a n o n ONLY AVAILABLE IN ELECTRONIC FORM 202 INFORMS Electronc Companon Generalzed Quantty Competton for Multple Products and Loss of Effcency
More information10-801: Advanced Optimization and Randomized Methods Lecture 2: Convex functions (Jan 15, 2014)
0-80: Advanced Optmzaton and Randomzed Methods Lecture : Convex functons (Jan 5, 04) Lecturer: Suvrt Sra Addr: Carnege Mellon Unversty, Sprng 04 Scrbes: Avnava Dubey, Ahmed Hefny Dsclamer: These notes
More informationConvex Optimization. Optimality conditions. (EE227BT: UC Berkeley) Lecture 9 (Optimality; Conic duality) 9/25/14. Laurent El Ghaoui.
Convex Optmzaton (EE227BT: UC Berkeley) Lecture 9 (Optmalty; Conc dualty) 9/25/14 Laurent El Ghaou Organsatonal Mdterm: 10/7/14 (1.5 hours, n class, double-sded cheat sheet allowed) Project: Intal proposal
More informationResearch Article. Almost Sure Convergence of Random Projected Proximal and Subgradient Algorithms for Distributed Nonsmooth Convex Optimization
To appear n Optmzaton Vol. 00, No. 00, Month 20XX, 1 27 Research Artcle Almost Sure Convergence of Random Projected Proxmal and Subgradent Algorthms for Dstrbuted Nonsmooth Convex Optmzaton Hdea Idua a
More informationOn the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros
Appled Mathematcal Scences, Vol. 5, 2011, no. 75, 3693-3706 On the Interval Zoro Symmetrc Sngle-step Procedure for Smultaneous Fndng of Polynomal Zeros S. F. M. Rusl, M. Mons, M. A. Hassan and W. J. Leong
More informationOn the convergence of the block nonlinear Gauss Seidel method under convex constraints
Operatons Research Letters 26 (2000) 127 136 www.elsever.com/locate/orms On the convergence of the bloc nonlnear Gauss Sedel method under convex constrants L. Grppo a, M. Scandrone b; a Dpartmento d Informatca
More informationDeriving the X-Z Identity from Auxiliary Space Method
Dervng the X-Z Identty from Auxlary Space Method Long Chen Department of Mathematcs, Unversty of Calforna at Irvne, Irvne, CA 92697 chenlong@math.uc.edu 1 Iteratve Methods In ths paper we dscuss teratve
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.
More informationSELECTED SOLUTIONS, SECTION (Weak duality) Prove that the primal and dual values p and d defined by equations (4.3.2) and (4.3.3) satisfy p d.
SELECTED SOLUTIONS, SECTION 4.3 1. Weak dualty Prove that the prmal and dual values p and d defned by equatons 4.3. and 4.3.3 satsfy p d. We consder an optmzaton problem of the form The Lagrangan for ths
More information8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS
SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars
More informationFeature Selection: Part 1
CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?
More informationRandić Energy and Randić Estrada Index of a Graph
EUROPEAN JOURNAL OF PURE AND APPLIED MATHEMATICS Vol. 5, No., 202, 88-96 ISSN 307-5543 www.ejpam.com SPECIAL ISSUE FOR THE INTERNATIONAL CONFERENCE ON APPLIED ANALYSIS AND ALGEBRA 29 JUNE -02JULY 20, ISTANBUL
More informationPerron Vectors of an Irreducible Nonnegative Interval Matrix
Perron Vectors of an Irreducble Nonnegatve Interval Matrx Jr Rohn August 4 2005 Abstract As s well known an rreducble nonnegatve matrx possesses a unquely determned Perron vector. As the man result of
More informationLecture 21: Numerical methods for pricing American type derivatives
Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)
More informationOn the Multicriteria Integer Network Flow Problem
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of
More information1 GSW Iterative Techniques for y = Ax
1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn
More informationControl of Uncertain Bilinear Systems using Linear Controllers: Stability Region Estimation and Controller Design
Control of Uncertan Blnear Systems usng Lnear Controllers: Stablty Regon Estmaton Controller Desgn Shoudong Huang Department of Engneerng Australan Natonal Unversty Canberra, ACT 2, Australa shoudong.huang@anu.edu.au
More informationLagrange Multipliers Kernel Trick
Lagrange Multplers Kernel Trck Ncholas Ruozz Unversty of Texas at Dallas Based roughly on the sldes of Davd Sontag General Optmzaton A mathematcal detour, we ll come back to SVMs soon! subject to: f x
More informationEEE 241: Linear Systems
EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they
More informationFinding Primitive Roots Pseudo-Deterministically
Electronc Colloquum on Computatonal Complexty, Report No 207 (205) Fndng Prmtve Roots Pseudo-Determnstcally Ofer Grossman December 22, 205 Abstract Pseudo-determnstc algorthms are randomzed search algorthms
More informationThe Geometry of Logit and Probit
The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.
More informationAffine transformations and convexity
Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/
More informationLecture 17: Lee-Sidford Barrier
CSE 599: Interplay between Convex Optmzaton and Geometry Wnter 2018 Lecturer: Yn Tat Lee Lecture 17: Lee-Sdford Barrer Dsclamer: Please tell me any mstake you notced. In ths lecture, we talk about the
More informationGames of Threats. Elon Kohlberg Abraham Neyman. Working Paper
Games of Threats Elon Kohlberg Abraham Neyman Workng Paper 18-023 Games of Threats Elon Kohlberg Harvard Busness School Abraham Neyman The Hebrew Unversty of Jerusalem Workng Paper 18-023 Copyrght 2017
More informationCalculation of time complexity (3%)
Problem 1. (30%) Calculaton of tme complexty (3%) Gven n ctes, usng exhaust search to see every result takes O(n!). Calculaton of tme needed to solve the problem (2%) 40 ctes:40! dfferent tours 40 add
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More informationOn the correction of the h-index for career length
1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat
More informationLectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix
Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could
More informationON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EQUATION
Advanced Mathematcal Models & Applcatons Vol.3, No.3, 2018, pp.215-222 ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EUATION
More informationComputing Correlated Equilibria in Multi-Player Games
Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,
More informationMATHEMATICAL ENGINEERING TECHNICAL REPORTS. Successive Lagrangian Relaxation Algorithm for Nonconvex Quadratic Optimization
MATHEMATICAL ENGINEERING TECHNICAL REPORTS Successve Lagrangan Relaxaton Algorthm for Nonconvex Quadratc Optmzaton Shnj YAMADA and Akko TAKEDA METR 2017 08 March 2017 DEPARTMENT OF MATHEMATICAL INFORMATICS
More informationMore metrics on cartesian products
More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of
More informationSalmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2
Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to
More informationErrors for Linear Systems
Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch
More informationA Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach
A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland
More informationA note on almost sure behavior of randomly weighted sums of φ-mixing random variables with φ-mixing weights
ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 7, Number 2, December 203 Avalable onlne at http://acutm.math.ut.ee A note on almost sure behavor of randomly weghted sums of φ-mxng
More information3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X
Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number
More informationWhich Separator? Spring 1
Whch Separator? 6.034 - Sprng 1 Whch Separator? Mamze the margn to closest ponts 6.034 - Sprng Whch Separator? Mamze the margn to closest ponts 6.034 - Sprng 3 Margn of a pont " # y (w $ + b) proportonal
More informationPhysics 5153 Classical Mechanics. Principle of Virtual Work-1
P. Guterrez 1 Introducton Physcs 5153 Classcal Mechancs Prncple of Vrtual Work The frst varatonal prncple we encounter n mechancs s the prncple of vrtual work. It establshes the equlbrum condton of a mechancal
More informationTime-Varying Systems and Computations Lecture 6
Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More informationChapter 2 A Class of Robust Solution for Linear Bilevel Programming
Chapter 2 A Class of Robust Soluton for Lnear Blevel Programmng Bo Lu, Bo L and Yan L Abstract Under the way of the centralzed decson-makng, the lnear b-level programmng (BLP) whose coeffcents are supposed
More informationSystem of implicit nonconvex variationl inequality problems: A projection method approach
Avalable onlne at www.tjnsa.com J. Nonlnear Sc. Appl. 6 (203), 70 80 Research Artcle System of mplct nonconvex varatonl nequalty problems: A projecton method approach K.R. Kazm a,, N. Ahmad b, S.H. Rzv
More informationLecture 10 Support Vector Machines. Oct
Lecture 10 Support Vector Machnes Oct - 20-2008 Lnear Separators Whch of the lnear separators s optmal? Concept of Margn Recall that n Perceptron, we learned that the convergence rate of the Perceptron
More informationCoordinate friendly structures, algorithms and applications arxiv: v3 [math.oc] 14 Aug 2016
Coordnate frendly structures, algorthms and applcatons arxv:1601.00863v3 [math.oc] 14 Aug 2016 Zhmn Peng, Tanyu Wu, Yangyang Xu, Mng Yan, and Wotao Yn Ths paper focuses on coordnate update methods, whch
More information2.3 Nilpotent endomorphisms
s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms
More informationExercise Solutions to Real Analysis
xercse Solutons to Real Analyss Note: References refer to H. L. Royden, Real Analyss xersze 1. Gven any set A any ɛ > 0, there s an open set O such that A O m O m A + ɛ. Soluton 1. If m A =, then there
More informationInner Product. Euclidean Space. Orthonormal Basis. Orthogonal
Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,
More informationEcon107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)
I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes
More information