Faster and Simpler Width-Independent Parallel Algorithms for Positive Semidefinite Programming

Richard Peng    Kanat Tangwongsan
Carnegie Mellon University

ABSTRACT

This paper studies the problem of finding a (1+ε)-approximate solution to positive semidefinite programs. These are semidefinite programs in which all matrices in the constraints and objective are positive semidefinite and all scalars are non-negative. At FOCS '11, Jain and Yao gave an NC algorithm that requires O((1/ε³) log³ m log n) iterations on input n constraint matrices of dimension m-by-m, where each iteration performs at least Ω(m^ω) work since it involves computing the spectral decomposition. We present a simpler NC parallel algorithm that, on input with n constraint matrices, requires O((1/ε⁴) log⁴ n log(1/ε)) iterations, each of which involves only simple matrix operations and computing the trace of the product of a matrix exponential and a positive semidefinite matrix. Further, given a positive SDP in a factorized form, the total work of our algorithm is nearly linear in the number of non-zero entries in the factorization. Our algorithm can be viewed as a generalization of Young's algorithm and analysis techniques for positive linear programs (Young, FOCS '01) to the semidefinite programming setting.

Categories and Subject Descriptors: F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity
General Terms: Algorithms, Theory
Keywords: Parallel algorithms, semidefinite programming, covering semidefinite programs, approximation algorithms

1. INTRODUCTION

Semidefinite programming (SDP), alongside linear programming (LP), has been an important tool in approximation algorithms, optimization, and discrete mathematics. In the context of approximation algorithms alone, it has emerged as a key technique underlying a number of impressive results that substantially improve the best known approximation ratios.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SPAA '12, June 25–27, 2012, Pittsburgh, Pennsylvania, USA. Copyright 2012 ACM .../12/06...$10.00.

In order to solve a semidefinite program, algorithms from the linear programming literature such as the Ellipsoid or interior-point algorithms [GLS93] can be applied to derive near-exact solutions. But this can be very costly. As a result, finding efficient approximations to such problems is a critical step in bringing these results closer to practice. From a parallel algorithms standpoint, both LPs and SDPs are P-complete even to approximate to any constant accuracy, suggesting that it is unlikely that they have polylogarithmic-depth algorithms. For linear programs, however, the special case of positive linear programs, first studied by Luby and Nisan [LN93], has an algorithm that finds a (1+ε)-approximate solution in O(poly((1/ε) log n)) iterations. This weaker approximation guarantee is still sufficient for approximation algorithms (e.g., solutions to vertex cover and set cover via randomized rounding), spurring interest in studying these problems in both sequential and parallel contexts (see, e.g., [LN93, PST95, GK98, You01, KY07, KY09]).

The importance of problems such as MaxCut and Sparsest Cut has led to the identification and study of positive SDPs. The first definition of positive SDPs was due to Klein and Lu [KL96], who used it to characterize the MaxCut SDP. The MaxCut SDP can be viewed as a direct generalization of positive (packing) LPs. More recent work defines the notion of positive packing SDPs [IPS11], which captures problems such as MaxCut, sparse PCA, and coloring; and the notion of covering SDPs [IPS10], which captures the ARV relaxation of Sparsest Cut, among others.
The bulk of work in this area tends to focus on developing fast sequential algorithms for finding a (1+ε)-approximation, leading to a series of nice sequential algorithms (e.g., [AHK05, AK07, IPS11, IPS10]). The iteration count of these algorithms, however, depends on the so-called width parameter of the input program, or on some parameter of the spectrum of the input program. In some instances, the width parameter can be as large as Ω(n), making it a bottleneck in the depth of a direct parallelization. Most recently, Jain and Yao [JY11] studied a particular class of positive SDPs and gave the first positive SDP algorithm whose work and depth are independent of the width parameter (such algorithms are commonly known as width-independent algorithms).

Our Work. We present a simple algorithm that offers the same approximation guarantee as [JY11] but has lower work-depth complexity. Each iteration of our algorithm involves only simple matrix operations and computing the trace of the product of a matrix exponential and a positive semidefinite matrix. Furthermore, our proof only uses elementary linear-algebraic techniques and the now-standard Golden-Thompson inequality.

The input consists of an accuracy parameter ε > 0 and a positive semidefinite program (PSDP) in the following standard primal form:

    Minimize    C • Y
    Subject to: A_i • Y ≥ b_i   for i = 1, ..., n
                Y ⪰ 0,                                        (1.1)

where the matrices C, A_1, ..., A_n are m-by-m symmetric positive semidefinite matrices, • denotes the pointwise dot product between matrices (see Section 2), and the scalars b_1, ..., b_n are non-negative reals. This is a subclass of SDPs in which the matrices and scalars are positive in their respective settings. We also make the now-standard assumption that the SDP satisfies strong duality. Our main result is as follows:

Theorem 1.1 (Main Theorem) Given a primal positive SDP involving m × m matrices with n constraints and an accuracy parameter ε > 0, there is an algorithm approxpsdp that produces a (1+ε)-approximation in O((1/ε⁴) log⁴ n log(1/ε)) iterations, where each iteration involves computing matrix sums and a special primitive that computes exp(Φ) • A in the case when Φ and A are both positive semidefinite.

The theorem quantifies the cost of our algorithm in terms of the number of iterations. The work and depth bounds implied by this theorem vary with the format of the input and with how the matrix exponential is computed in each iteration. As we will discuss in Section 4, with input given in a suitable form, our algorithm runs in nearly-linear work and polylogarithmic depth. For comparison, the algorithm given in [JY11] requires O((1/ε³) log³ m log n) iterations, each of which involves computing spectral decompositions using at least Ω(m^ω) work.

Recently, in independent work, Jain and Yao [JY12] gave a similar algorithm for positive SDPs that is also based on Young's algorithm. Their algorithm solves a class of SDPs that contains both packing and diagonal covering constraints. Since matrix packing conditions between diagonal matrices are equivalent to pointwise conditions on the diagonal entries, these constraints are closer to a generalization of positive covering LP constraints.
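To make the standard primal form (1.1) concrete, the following toy instance checks a candidate solution against it. The 2×2 matrices and values here are illustrative, not taken from the paper; the PSD test uses the closed form for 2×2 matrices.

```python
# Toy instance of the standard primal form (1.1): all matrices PSD,
# all scalars non-negative. Data below is made up for illustration.

def dot(A, B):
    """Matrix dot product A • B = sum_{i,j} A[i][j] * B[i][j]."""
    m = len(A)
    return sum(A[i][j] * B[i][j] for i in range(m) for j in range(m))

def is_psd_2x2(A, tol=1e-12):
    """A 2x2 symmetric matrix is PSD iff its diagonal entries and its
    determinant are non-negative."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return A[0][0] >= -tol and A[1][1] >= -tol and det >= -tol

C  = [[2.0, 0.0], [0.0, 1.0]]            # objective matrix, PSD
As = [[[1.0, 0.0], [0.0, 0.0]],          # constraint matrices A_1, A_2, PSD
      [[0.5, 0.5], [0.5, 0.5]]]
bs = [1.0, 1.0]                          # non-negative right-hand sides

Y = [[1.0, 0.0], [0.0, 2.0]]             # a candidate primal solution

assert all(is_psd_2x2(A) for A in As + [C, Y])
feasible = all(dot(A, Y) >= b for A, b in zip(As, bs))
print("objective C • Y =", dot(C, Y), "feasible:", feasible)
```

Here Y is feasible (A_1 • Y = 1 and A_2 • Y = 1.5), with objective value C • Y = 4; an exact solver would of course search over all PSD Y rather than test one candidate.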
We believe that removing this restriction to diagonal packing matrices would greatly widen the class of problems captured by this class of SDPs; we discuss possibilities in this direction in Section 5.

1.1 Overview

All the parallel positive SDP algorithms to date can be seen as generalizations of previous work on positive linear programs (positive LPs). In the positive LP literature, Luby and Nisan (LN) were the first to give a parallel algorithm for approximately solving a positive LP [LN93]. This algorithm provided the foundations for the algorithm given in [JY11], which, like the LN algorithm, works directly on the primal program, using the dual as a guide. Their update step is intricate, as their analysis is based on carefully analyzing the eigenspaces of a particular matrix before and after each update. Each of these iterations involves computing a spectral decomposition.

Our algorithm follows a different approach, based on the algorithm of Young [You01] for positive LPs. At the core of our algorithm is an algorithm for solving the decision version of the dual program. We derive this core routine by generalizing the algorithm and analysis techniques of Young [You01], using the matrix multiplicative weights update (MMWU) mechanism in place of Young's soft min and max. This leads to a simple algorithm: besides standard operations on (sparse) matrices, the only special primitive needed is the matrix dot product exp(Φ) • A, where Φ and A are both positive semidefinite.

More specifically, our algorithm works with the normalized primal/dual programs shown in Figure 1. This is without loss of generality because any input program can be transformed into the normalized form by dividing through by C (see Appendix A). We solve a normalized SDP by resorting to an algorithm for its decision version and binary search. In particular, we design an algorithm with the property that, given a goal value ô, it either finds a dual solution x ∈ R₊ⁿ to (1.2-D) with objective at least (1−ε)ô, or a primal solution Y to (1.2-P) with objective at most ô. Furthermore, by scaling the A_i's, it suffices to only consider the case where ô = 1.
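The reduction from the optimization version to this decision version can be sketched as follows. The function `decide` stands in for the decision routine; in this sketch it is a toy oracle that simply knows the hidden optimum, just to drive the search, so the code illustrates only the binary-search shell, not the SDP machinery.

```python
# Sketch of the optimization-to-decision reduction described above:
# binary search on the goal value ô. `decide(g)` is a stand-in for the
# decision routine: True means "a dual solution with objective >= (1-eps)g
# exists", False means "a primal solution with objective <= g exists".

def solve_by_binary_search(decide, lo, hi, eps):
    """Shrink [lo, hi] until its endpoints are within a (1+eps) factor,
    querying the decision routine O(log((hi/lo)/eps)) times."""
    queries = 0
    while hi > (1 + eps) * lo:
        mid = (lo * hi) ** 0.5          # geometric midpoint
        if decide(mid):
            lo = mid                    # optimum is at least ~mid
        else:
            hi = mid                    # optimum is at most mid
        queries += 1
    return lo, hi, queries

OPT = 3.7                               # hidden optimum used by the toy oracle
lo, hi, q = solve_by_binary_search(lambda g: g <= OPT, 1.0, 64.0, 0.05)
assert lo <= OPT <= hi and hi <= (1 + 0.05) * lo
print(lo, hi, q)
```

The invariant lo ≤ OPT ≤ hi is maintained by every query, and the geometric midpoint halves log(hi/lo) each round, which is where the O(log(n/ε)) bound on the number of invocations comes from once lo and hi are set from crude bounds on the optimum.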
With this algorithm, the optimization version can be solved by binary searching on the objective, using a total of at most O(log(n/ε)) invocations.

Intuitions. For intuition about packing SDPs and the matrix multiplicative weights update method for finding approximate solutions, a useful analogy for the decision problem is that of packing a (fractional) amount of ellipses into the unit ball. Figure 2 provides an example involving 3 matrices (ellipses) in 2 dimensions. Note that A_1 and A_2 are axis-aligned, so their sum is also axis-aligned in this case. In fact, positive linear programs in this broader context correspond exactly to the restriction that all ellipsoids be axis-aligned. In this setting, the algorithm of [You01] can be viewed as creating a penalty function by weighting the lengths of the axes using an exponential function. Then, ellipsoids with sufficiently small penalty subject to this function have their weights increased. However, once we allow general ellipsoids such as A_3, the resulting sum will no longer be axis-aligned. In this setting, a natural extension is to take the exponential of the semi-major axes of the resulting ellipsoid instead. As we will see in later sections, the rest of Young's technique for achieving width-independence can also be adapted to this more general setting.

Work and Depth. We now discuss the work and depth bounds of our algorithm. The main cost of each iteration comes from computing the dot product between a matrix exponential and a PSD matrix. As in the sequential setting [AHK05, AK07], in each iteration we need to compute the products A_i • exp(Φ), where Φ is some PSD matrix. The cost of our algorithm therefore depends on how the input is specified. When the input is given pre-factored, that is, the m-by-m matrices A_i are given as A_i = Q_iᵀ Q_i and the matrix C^{-1/2} is given, then Theorem 4.1 can be used to compute the matrix exponentials in O((1/ε³)(m + q) log n log q log(1/ε)) work and O((1/ε) log n log q log(1/ε)) depth, where q is the number of nonzero entries across the Q_i's and C^{-1/2}. This is because the matrix Φ that we exponentiate has ‖Φ‖₂ ≤ O((1/ε) log n), as shown in Lemma 3.3.
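The identity that makes the pre-factored form useful is exp(Φ) • (QᵀQ) = ‖exp(Φ/2) Qᵀ‖_F², which Section 4 exploits. It can be checked numerically for a diagonal Φ, where the matrix exponential is just entrywise exp on the diagonal; this is a toy verification of the identity, not the paper's bigdotexp routine.

```python
import math

# Toy check of exp(Φ) • (QᵀQ) = ‖exp(Φ/2) Qᵀ‖_F² for a DIAGONAL PSD Φ,
# where exp(Φ) is exp applied to each diagonal entry. Data is illustrative.

phi = [0.5, 1.5]                        # diagonal of Φ (PSD: entries >= 0)
Q = [[1.0, 2.0],                        # A = QᵀQ is PSD by construction
     [0.0, 1.0]]

# Left side: since exp(Φ) has no off-diagonal entries here, only the
# diagonal entries (QᵀQ)_{jj} = Σ_r Q[r][j]² contribute to the dot product.
lhs = sum(math.exp(phi[j]) * sum(Q[r][j] ** 2 for r in range(2))
          for j in range(2))

# Right side: row j of Qᵀ (= column j of Q) gets scaled by exp(φ_j / 2);
# the squared Frobenius norm sums the squares of all scaled entries.
rhs = sum((math.exp(phi[j] / 2) * Q[r][j]) ** 2
          for j in range(2) for r in range(2))

assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)
```

Because the right-hand side is a sum of squared vector norms, it is exactly the kind of quantity that dimension reduction can approximate cheaply, which is how the nearly-linear work bound arises.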
Therefore, as a corollary to the main theorem, we have the following cost bounds:

Primal (1.2-P):
    Minimize    Tr Y
    Subject to: A_i • Y ≥ 1   for i = 1, ..., n
                Y ⪰ 0.

Dual (1.2-D):
    Maximize    Σ_{i=1}^n x_i
    Subject to: Σ_{i=1}^n x_i A_i ⪯ I
                x ≥ 0.                                        (1.2)

Figure 1. Normalized primal/dual positive SDPs. The symbol I represents the identity matrix.

Figure 2. An instance of a packing SDP in 2 dimensions, showing the ellipses A_1, A_2, A_3 and the sums (A_1 + A_2)/2 and (A_1 + A_2 + A_3)/2.

Corollary 1.2 The algorithm approxpsdp runs in Õ(n + m + q) work and O(log^{O(1)}(n + m + q)) depth.

If, however, the input program is not given in this form, we can add a preprocessing step that factors each A_i into Q_iᵀ Q_i, which is possible since A_i is positive semidefinite. In general, this preprocessing requires at most O(m⁴) work and O(log³ m) depth using standard parallel QR factorization [JáJ92]. Furthermore, these matrices often have structure that makes them easier to factor. Similarly, we can factor and invert C within the same cost bounds, and can do better if it, too, has specialized structure.

2. BACKGROUND AND NOTATION

We review notation and facts that will prove useful later in the paper. Throughout this paper, we use the notation Õ(f(n)) to mean O(f(n) polylog(f(n))).

Matrices and Positive Semidefiniteness. Unless otherwise stated, we deal with real symmetric matrices in R^{m×m}. A symmetric matrix A is positive semidefinite (PSD), denoted by A ⪰ 0 or 0 ⪯ A, if for all z ∈ R^m, zᵀAz ≥ 0. Equivalently, this means that all eigenvalues of A are non-negative and the matrix A can be written as A = Σ_i λ_i v_i v_iᵀ, where v_1, v_2, ..., v_m are the eigenvectors of A with eigenvalues λ_1 ≥ ... ≥ λ_m, respectively. We use λ_1(A), λ_2(A), ..., λ_m(A) to denote the eigenvalues of A in decreasing order, and λ_max(A) to denote λ_1(A). Notice that positive semidefiniteness induces a partial ordering on matrices: we write A ⪯ B if B − A ⪰ 0.

The trace of a matrix A, denoted Tr A, is the sum of the matrix's diagonal entries: Tr A = Σ_i A_{i,i}. Alternatively, the trace of a matrix can be expressed as the sum of its eigenvalues, so Tr A = Σ_i λ_i(A). Furthermore, we define the dot product A • B = Σ_{i,j} A_{i,j} B_{i,j} = Tr(AᵀB). It follows that A is positive semidefinite if and only if A • B ≥ 0 for all PSD B.

Matrix Exponential.
Given an m × m symmetric positive semidefinite matrix A and a function f : R → R, we define

    f(A) = Σ_{i=1}^m f(λ_i) v_i v_iᵀ,

where, again, v_i is the eigenvector corresponding to the eigenvalue λ_i. It is not difficult to check that for exp(A), this definition coincides with exp(A) = Σ_{i≥0} Aⁱ/i!.

Our algorithm relies on a matrix multiplicative weights (MMW) algorithm, which can be summarized as follows. For a fixed ε₀ ≤ 1/2 and W^(1) = I, we play a game a number of times, where in iteration t = 1, 2, ..., the following steps are performed:

1. Produce a probability matrix P^(t) = W^(t) / Tr W^(t);
2. Incur a gain matrix M^(t); and
3. Update the weight matrix as W^(t+1) = exp(ε₀ Σ_{t′ ≤ t} M^(t′)).

As in the standard setting of multiplicative weights algorithms, the gain matrix is chosen by an external party, possibly adversarially. In our algorithm, the gain matrix is chosen to reflect the step we make in the iteration. Arora and Kale [AK07] show that the MMW algorithm has the following guarantee (restated for our setting):

Theorem 2.1 ([AK07]) For ε₀ ≤ 1/2, if the M^(t)'s are all PSD and M^(t) ⪯ I, then after T iterations,

    (1 + ε₀) Σ_{t=1}^T M^(t) • P^(t) ≥ λ_max( Σ_{t=1}^T M^(t) ) − (ln n)/ε₀.    (2.1)

3. SOLVING POSITIVE SDPS

In this section, we describe a parallel algorithm for solving positive packing SDPs, inspired by Young's algorithm for positive LPs. As described earlier, by using binary search and appropriately scaling the input program, a positive packing SDP can be solved given an algorithm for the following decision problem:

Decision Problem: Find either an x ∈ R₊ⁿ (a dual solution) such that ‖x‖₁ ≥ 1 − ε and Σ_{i=1}^n x_i A_i ⪯ I, or a PSD matrix Y (a primal solution) such that Tr Y ≤ 1 and, for all i, A_i • Y ≥ 1.

The following theorem provides a solution to this problem:

Theorem 3.1 Let 0 < ε ≤ 1. There is an algorithm decisionpsdp that, given a positive SDP, solves the Decision Problem in O((1/ε⁴) log³ n) iterations.

Presented in Algorithm 3.1 is an algorithm that satisfies the theorem.

Algorithm 3.1 Parallel Packing SDP algorithm
    Let K = (1/ε)(1 + ln n). Let x_i^(0) = 1/(n Tr A_i). Initialize Ψ^(0) = Σ_{i=1}^n x_i^(0) A_i, t = 0.
    While ‖x^(t)‖₁ ≤ K:
      1. t = t + 1.
      2. Let W^(t) = exp(Ψ^(t−1)).
      3. Let p be such that (1+ε)^p < Tr W^(t) ≤ (1+ε)^{p+1}.
      4. Let B^(t) = { i ∈ [n] : W^(t) • A_i ≤ (1+ε)^{p+1} }.
      5. If B^(t) is empty, return Y = W^(t)/Tr W^(t) as a primal solution.
      6. For each i ∈ B^(t), let δ_i^(t) = α x_i^(t−1) (and δ_i^(t) = 0 otherwise), where α = min{ 1/‖x^(t−1)‖₁, ε/((1+10ε)K) }.
      7. Update x^(t) = x^(t−1) + δ^(t) and Ψ^(t) = Ψ^(t−1) + Σ_{i=1}^n δ_i^(t) A_i.
    Return x = x^(t)/((1+10ε)K) as a dual solution.

But before we go about proving the theorem, let us take a closer look at the algorithm. Fix an accuracy parameter ε > 0. We set K to (1/ε)(1 + ln n). The reason for choosing this value is technical, but the motivation is that it lets us absorb the (ln n)/ε term in Theorem 2.1 and account for the contribution of the starting solution x^(0).

The algorithm is a multiplicative weights update algorithm, which proceeds in rounds. Initially, we work with the starting solution x_i^(0) = 1/(n Tr A_i). This solution is chosen to be small enough that Σ_i x_i^(0) A_i ⪯ I, hence respecting the dual constraint, yet it contains enough mass that subsequent updates (each update is a multiple of the current solution) are guaranteed to make rapid progress. In each iteration, we compute W^(t) = exp(Ψ^(t−1)), where Ψ^(t−1) = Σ_i x_i^(t−1) A_i. (For some intuition about the W^(t) matrix, we refer the reader to [AK07, Kal07].)

The next two steps (Steps 3–4) of the algorithm are responsible for identifying which coordinates of x to update. For starters, it may help to think of them as follows: let P^(t) = W^(t)/Tr W^(t) and let B^(t) be { i ∈ [n] : P^(t) • A_i ≤ 1 + ε }. The actual algorithm discretizes Tr W^(t) to ensure certain monotonicity properties of B^(t). As we show later on in Lemma 3.2, the set B^(t) cannot be empty unless the system is infeasible. The final steps of the algorithm increment each coordinate i ∈ B^(t) of the solution by the amount δ_i^(t) = α x_i^(t−1). The choice of α may seem mysterious at this point; it is chosen to ensure that (1) Σ_i δ_i^(t) A_i ⪯ εI and (2) ‖δ^(t)‖₁ ≤ 1. Intuitively, these bounds prevent us from taking too big a step from the current solution. At a more technical level, the first requirement is needed to satisfy the MMW algorithm's condition, and the second requirement makes sure that we cannot overshoot by much when exiting the while loop.

3.1 Analysis

We will bound the approximation guarantees and analyze the cost of the algorithm. Before we start, we will need some notation and definitions. An easy induction gives that the quantities we track across the iterations of Algorithm 3.1 satisfy the following relationships:

    x^(t) = x^(0) + Σ_{τ=1}^t δ^(τ)                                        (3.1)
    W^(t) = exp(Ψ^(t−1))                                                    (3.2)
    P^(t) := W^(t) / Tr W^(t)                                               (3.3)
    Tr P^(t) = Tr W^(t) / Tr W^(t) = 1                                      (3.4)
    M^(t) := (1/ε)(Ψ^(t) − Ψ^(t−1)) = (1/ε) Σ_{i=1}^n δ_i^(t) A_i   for t ≥ 1   (3.5)
    (1/ε) Σ_{i=1}^n x_i^(t) A_i = Σ_{τ=0}^t M^(τ),   where M^(0) := (1/ε) Ψ^(0),   (3.6)

and Ψ^(t) = Σ_{i=1}^n x_i^(t) A_i throughout.

To bound the approximation guarantees and the cost of this algorithm, we reason about the spectrum of Ψ^(t) and the ℓ₁ norm of the vector x^(t) as the algorithm executes. Since the coordinates of our vector x^(t) are always non-negative, we note that ‖x^(t)‖₁ = Σ_i x_i^(t), and we use either notation as convenient. We begin the analysis by showing that B^(t) can never be empty unless the system is infeasible:

Lemma 3.2 (Feasibility) If there is an iteration t such that B^(t) is empty, then P^(t) = W^(t)/Tr W^(t) is a valid primal solution with objective value 1. Furthermore, by duality theory, there exists no dual solution x ∈ R₊ⁿ with objective value more than 1.

Proof. The fact that B^(t) is empty means that for all i = 1, ..., n, W^(t) • A_i > (1+ε)^{p+1}.
But we know that (1+ε)^p < Tr W^(t) ≤ (1+ε)^{p+1}, so

    P^(t) • A_i = (W^(t) • A_i) / Tr W^(t) > (1+ε)^{p+1} / (1+ε)^{p+1} = 1.

As noted above, Tr P^(t) = 1, so P^(t) is a valid primal solution with objective at most 1; hence, no dual solution with objective more than 1 exists. ∎

We proceed to analyze the vector x^(t) in the case that B^(t) never becomes empty. The main loop in Algorithm 3.1 terminates only if ‖x^(t)‖₁ > K, so the solution we produce satisfies

    ‖x‖₁ = ‖x^(T)‖₁ / ((1+10ε)K) > K / ((1+10ε)K) ≥ 1 − 10ε.    (3.7)

In order for this to be a dual solution, we still need to show that it satisfies Σ_i x_i A_i ⪯ I. In particular, it suffices to show that (1/((1+10ε)K)) Ψ^(T) ⪯ I, where T is the final iteration.

Spectrum Bounds. The rest of the analysis hinges on bounding the spectrum of Ψ^(t). More specifically, we prove the following lemma, which shows that the spectrum of every intermediate Ψ^(t) is bounded by (1 + O(ε))K:

Lemma 3.3 (Spectrum Bound) For t = 0, ..., T, where T is the final iteration,

    Ψ^(t) = Σ_{i=1}^n x_i^(t) A_i ⪯ (1+10ε)K · I.    (3.8)

We prove this lemma by resorting to properties of the MMW algorithm (Theorem 2.1), which relates the final spectral values to the gain derived at each intermediate step. For this, we will show a claim (Claim 3.5) that quantifies the gain we get in each step as a function of the ℓ₁-norm of the change we make in that step. But first, we analyze the initial matrix:

Claim 3.4 λ_max(Ψ^(0)) = λ_max( Σ_{i=1}^n x_i^(0) A_i ) ≤ 1.

Proof. Our choice of x^(0) guarantees that for all i = 1, ..., n,

    x_i^(0) A_i = A_i / (n Tr A_i) ⪯ (1/n) I.

Summing across i = 1, ..., n gives the desired bound. ∎

The following claim bounds the value of M^(t) • P^(t) in terms of the ℓ₁ norm of the change to the dual solution vector. This is precisely the quantity we track in Theorem 2.1:

Claim 3.5 For all t = 1, ..., T,

    M^(t) • P^(t) ≤ ((1+ε)²/ε) ‖δ^(t)‖₁.    (3.9)

Proof. Consider that

    M^(t) • P^(t) = (1/ε) ( Σ_{i=1}^n δ_i^(t) A_i ) • P^(t) = (1/ε) Σ_i δ_i^(t) (A_i • P^(t)).

Every i ∈ B^(t) has the property that A_i • W^(t) ≤ (1+ε)^{p+1}. So then, since P^(t) = W^(t)/Tr W^(t) and Tr W^(t) > (1+ε)^p, we have

    A_i • P^(t) ≤ (1+ε)^{p+1} / (1+ε)^p = (1+ε) ≤ (1+ε)²,

and thus

    M^(t) • P^(t) ≤ (1/ε) Σ_i δ_i^(t) (1+ε)² = ((1+ε)²/ε) ‖δ^(t)‖₁,

which proves the claim. ∎

We can then bound the total ℓ₁ norm of the change to the dual solution by bounding the ℓ₁ norm of x^(t).
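Before continuing the analysis, the main loop of Algorithm 3.1 can be made concrete in code. The sketch below specializes to diagonal constraint matrices, so that exp(Ψ) is an entrywise exponential and W • A_i is a weighted sum of diagonal entries; with diagonal matrices the SDP degenerates to a positive LP, but the steps follow the matrix algorithm exactly. The instance at the bottom is illustrative.

```python
import math

# Sketch of Algorithm 3.1 specialized to DIAGONAL constraint matrices A_i.
# As[i] holds the diagonal of PSD matrix A_i. Returns either
# ('dual', x) with sum(x) > 1/(1+10*eps) and sum_i x_i A_i <= I, or
# ('primal', P) with Tr P = 1 witnessing infeasibility.

def decision_packing_diag(As, eps):
    n, m = len(As), len(As[0])
    K = (1 + math.log(n)) / eps
    x = [1.0 / (n * sum(A)) for A in As]              # x_i^(0) = 1/(n Tr A_i)
    psi = [sum(x[i] * As[i][j] for i in range(n)) for j in range(m)]
    while sum(x) <= K:
        W = [math.exp(v) for v in psi]                # W = exp(Psi), entrywise here
        trW = sum(W)
        # p with (1+eps)^p < Tr W <= (1+eps)^{p+1} (up to float rounding)
        p = math.floor(math.log(trW) / math.log(1 + eps))
        thresh = (1 + eps) ** (p + 1)
        B = [i for i in range(n)
             if sum(W[j] * As[i][j] for j in range(m)) <= thresh]
        if not B:                                     # step 5: primal witness
            return 'primal', [w / trW for w in W]
        alpha = min(1.0 / sum(x), eps / ((1 + 10 * eps) * K))
        for i in B:                                   # steps 6-7: x_i <- (1+alpha) x_i
            d = alpha * x[i]
            x[i] += d
            for j in range(m):
                psi[j] += d * As[i][j]
    scale = (1 + 10 * eps) * K
    return 'dual', [v / scale for v in x]

# Feasible toy instance: entries well below 1, so a large dual exists.
As = [[0.2, 0.1], [0.1, 0.3], [0.05, 0.05]]
kind, sol = decision_packing_diag(As, 0.1)
assert kind == 'dual'
assert sum(sol) > 1.0 / (1 + 10 * 0.1) - 1e-9         # objective bound (3.7)
for j in range(2):                                    # dual feasibility check
    assert sum(sol[i] * As[i][j] for i in range(3)) <= 1 + 1e-9
```

Each loop iteration multiplies every coordinate in B by (1 + α), which is exactly the geometric growth that the phase-based cost analysis below charges against the bounding box of Claim 3.9.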
Specifically, we show that unless the algorithm terminates at the first iteration, ‖x^(T)‖₁ does not exceed K + 1 in ℓ₁ norm.

Claim 3.6 For t = 1, ..., T,   ‖x^(t)‖₁ ≤ (1+ε)K.

Proof. Since T is the iteration at which the algorithm terminates, and δ^(t) ∈ R₊ⁿ for all t ≥ 0, we have

    ‖x^(t)‖₁ ≤ ‖x^(t−1)‖₁ + ‖δ^(t)‖₁ ≤ K + ‖δ^(t)‖₁.

By our choice of α, we know that α ≤ 1/‖x^(t−1)‖₁, and therefore ‖δ^(t)‖₁ = α ‖x^(t−1)‖₁ ≤ 1. Substituting this into the inequality above gives ‖x^(t)‖₁ ≤ K + 1, and the claim follows from 1 ≤ εK. ∎

We are now ready to complete the proof of the spectrum bound lemma (Lemma 3.3):

Proof of Lemma 3.3. We can rewrite Ψ^(t) as

    Ψ^(t) = Σ_{i=1}^n x_i^(0) A_i + Σ_{τ=1}^t Σ_{i=1}^n δ_i^(τ) A_i = Σ_{i=1}^n x_i^(0) A_i + ε Σ_{τ=1}^t M^(τ),

so

    λ_max(Ψ^(t)) ≤ λ_max( Σ_{i=1}^n x_i^(0) A_i ) + ε λ_max( Σ_{τ=1}^t M^(τ) ),

since both sums yield positive semidefinite matrices. By Claim 3.4, we know that the first term is at most 1. To bound the second term, we again apply Theorem 2.1, which we restate below:

    (1 + ε) Σ_{τ=1}^t M^(τ) • P^(τ) ≥ λ_max( Σ_{τ=1}^t M^(τ) ) − (ln n)/ε.

Rearranging terms, we have

    λ_max( Σ_{τ=1}^t M^(τ) ) ≤ (1 + ε) Σ_{τ=1}^t M^(τ) • P^(τ) + (ln n)/ε.

We also want to make sure that each M^(τ) satisfies M^(τ) ⪯ I. With an easy induction on t, we can show that (3.8) holds for all τ ≤ t. This means that for τ = 1, ..., t, each M^(τ) satisfies

    M^(τ) = (1/ε) Σ_{i=1}^n δ_i^(τ) A_i ⪯ (α/ε) Σ_{i=1}^n x_i^(τ−1) A_i ⪯ (1/((1+10ε)K)) Ψ^(τ−1) ⪯ (1/((1+10ε)K)) (1+10ε)K · I = I,

since α ≤ ε/((1+10ε)K). It then follows from Claim 3.5 that

    λ_max( Σ_{τ=1}^t M^(τ) ) ≤ (1 + ε) Σ_{τ=1}^t ((1+ε)²/ε) ‖δ^(τ)‖₁ + (ln n)/ε ≤ (ln n)/ε + ((1+ε)³/ε) ‖x^(t)‖₁,

where the last step follows from x^(t) = x^(0) + Σ_τ δ^(τ) and each entry of x^(0) and δ^(τ) being non-negative. Applying the bound on ‖x^(t)‖₁ from Claim 3.6 then gives

    λ_max( Σ_{τ=1}^t M^(τ) ) ≤ (ln n)/ε + ((1+ε)⁴/ε) K.

Putting these together, we get

    λ_max(Ψ^(t)) ≤ 1 + ln n + (1+ε)⁴ K ≤ εK + (1+ε)⁴ K,

which allows us to conclude that Ψ^(t) ⪯ (1+10ε)K · I. ∎

The spectrum bound lemma says that at any point in the algorithm, the solution vector x^(t) satisfies Σ_i x_i^(t) A_i ⪯ (1+10ε)K · I. Together with Equation (3.7), we know that if the algorithm completes the while loop, the solution x that we return satisfies x ≥ 0 and

    Σ_i x_i A_i = (1/((1+10ε)K)) Σ_i x_i^(T) A_i ⪯ I.

Thus, x is indeed a dual solution with value at least 1 − 10ε. To piece everything together, we run the algorithm with ε/10 in place of ε: if there is an iteration in which B^(t) is empty, we produce a primal solution with value at most 1; otherwise, we return a dual solution with value at least 1 − ε. Hence, the algorithm has the promised approximation bounds. Next we analyze its cost.

Cost Analysis. Similar to Young's analysis, our analysis relies on the notion of phases, grouping together iterations with similar W^(t) matrices into a phase in a way that ensures the existence of a coordinate that is incremented (i.e., δ_i^(t) > 0) by a significant amount in every iteration of the phase. To this end, we say that an iteration t belongs to phase p if and only if (1+ε)^p < Tr W^(t) ≤ (1+ε)^{p+1}. A phase ends when the algorithm terminates or the next iteration belongs to a different phase. Almost immediate from this definition is a bound on the number of phases:

Lemma 3.7 The total number of phases is at most O(K/ε).

Proof. On the one hand, we have 0 ⪯ Ψ^(0), so Tr W^(1) ≥ n e⁰ = n. On the other hand, we know that Ψ^(T) ⪯ (1+O(ε))K · I, so Tr W^(T) ≤ n exp((1+O(ε))K).
This means that the total number of phases is at most

    log_{1+ε}( Tr W^(T) / Tr W^(1) ) ≤ log_{1+ε}( exp((1+O(ε))K) )    (3.10)
                                     = O( ((1+O(ε))K) / ε ) = O(K/ε).    (3.11)

∎

To bound the total number of steps, we analyze the number of iterations within a phase. For this, we need a couple of claims. The first claim shows that if a coordinate is incremented at the end of a phase, it must have been incremented at every iteration of this phase.

Claim 3.8 If i ∈ B^(t), then for all t′ < t belonging to the same phase as t, i ∈ B^(t′).

Proof. Suppose t belongs to phase p. As i ∈ B^(t), we know that W^(t) • A_i ≤ (1+ε)^{p+1}. Since t′ < t, we have

    Ψ^(t−1) − Ψ^(t′−1) = ε Σ_{t′ ≤ τ < t} M^(τ)    (3.12)
                       = Σ_{t′ ≤ τ < t} Σ_i δ_i^(τ) A_i ⪰ 0.    (3.13)

Therefore W^(t′) ⪯ W^(t), and so

    W^(t′) • A_i ≤ W^(t) • A_i ≤ (1+ε)^{p+1},    (3.14)

which means that i ∈ B^(t′), as desired. ∎

In the second claim, we place a bounding box around each coordinate x_i^(t) of our solution vectors. This turns out to be an important piece of machinery in bounding the number of iterations required by the algorithm.

Claim 3.9 (Bounding Box) For every index i, at any iteration t,

    x_i^(0) ≤ x_i^(t) ≤ (1+O(ε)) nK / Tr A_i = (1+O(ε)) n² K · x_i^(0).

Proof. Recall that x_i^(0) = 1/(n Tr A_i), and coordinates never decrease, so x_i^(t) ≥ x_i^(0). To argue the upper bound on x_i^(t), note that Lemma 3.3 gives

    Ψ^(t) = Σ_{j=1}^n x_j^(t) A_j ⪯ (1+O(ε)) K · I.    (3.15)

Since each A_j is positive semidefinite, taking traces gives

    x_i^(t) Tr A_i ≤ Tr Ψ^(t) ≤ (1+O(ε)) nK.    (3.16)

We conclude that x_i^(t) ≤ (1+O(ε)) nK / Tr A_i = (1+O(ε)) n² K · x_i^(0), as claimed. ∎

The final claim shows that each iteration makes significant progress in incrementing the solution.

Claim 3.10 In each iteration, either ‖δ^(t)‖₁ = 1 or α = Ω(ε/K).

Proof. We chose α to be min{ 1/‖x^(t−1)‖₁, ε/((1+10ε)K) }. If α = 1/‖x^(t−1)‖₁, then ‖δ^(t)‖₁ = 1 and we are done. Otherwise, we have α = ε/((1+10ε)K), which is Ω(ε/K). ∎

Combining these claims, we have the following bounds on the number of iterations:

Corollary 3.11 The number of iterations per phase is at most O((K/ε) ln(nK)), and the total number of iterations is at most O((1/ε⁴) log³ n).

Proof. Consider a phase of the algorithm, and let f be the final iteration of this phase. By Claim 3.10, each iteration t satisfies ‖δ^(t)‖₁ = 1 or α = Ω(ε/K). Since ‖x^(t)‖₁ ≤ K + 1 for all t ≤ T, the number of iterations in which ‖δ^(t)‖₁ = 1 can be at most O(K/ε). Now, let i ∈ B^(f) be a coordinate that is incremented in the final iteration of this phase. By Claim 3.8, this coordinate is incremented in every iteration of this phase. Therefore, the number of iterations within this phase where α = Ω(ε/K) is at most

    log_{1+Ω(ε/K)}( x_i^(f) / x_i^(0) ) ≤ log_{1+Ω(ε/K)}( (1+O(ε)) n² K ) = O( (K/ε) ln((1+O(ε)) nK) ).

Combining this with Lemma 3.7 and the setting of K = O((log n)/ε) gives the overall bound. ∎

4. MATRIX EXPONENTIAL EVALUATION

We describe a fast algorithm for computing the matrix dot product of a positive semidefinite matrix and the matrix exponential of another positive semidefinite matrix.

Theorem 4.1 There is an algorithm bigdotexp that, when given an m-by-m matrix Φ with p non-zero entries, κ ≥ max{1, ‖Φ‖₂}, and m-by-m matrices A_i in factorized form A_i = Q_iᵀ Q_i, where the total number of nonzeros across all the Q_i is q, computes (1 ± ε)-approximations to all exp(Φ) • A_i in O(κ log m log(1/ε)) depth and O((1/ε²)(κ log(2/ε) p + q) log m) work.

The idea behind Theorem 4.1 is to approximate the matrix exponential using a low-degree polynomial, because evaluating matrix exponentials exactly is costly. For this, we apply the following lemma, reproduced from Lemma 6 in [AK07]:

Lemma 4.2 ([AK07]) If B is a PSD matrix such that ‖B‖₂ ≤ κ, then the operator B̃ = Σ_{0 ≤ i < k} Bⁱ/i!, where k = max{e²κ, ln(2/ε)}, satisfies

    (1 − ε) exp(B) ⪯ B̃ ⪯ exp(B).

Proof of Theorem 4.1. The given factorization of each A_i allows us to write exp(Φ) • A_i as the squared norm of a matrix:

    exp(Φ) • A_i = Tr( exp(Φ) Q_iᵀ Q_i ) = Tr( Q_i exp(Φ/2) exp(Φ/2) Q_iᵀ ) = ‖exp(Φ/2) Q_iᵀ‖².

By Lemma 4.2, it suffices to evaluate ‖B̃ Q_iᵀ‖², where B̃ is a polynomial approximation to B = exp(Φ/2). To further reduce the work, we can apply the Johnson–Lindenstrauss transformation [DG03, IM98] to reduce the length of the vectors to O((1/ε²) log m); specifically, we pick an O((1/ε²) log m) × m Gaussian matrix Π and evaluate ‖Π B̃ Q_iᵀ‖². Since Π only has O((1/ε²) log m) rows, we can compute Π B̃ using O((1/ε²) log m) evaluations of B̃ against vectors. The work/depth bounds follow from carrying out each of the evaluations of Π_j B̃ (where Π_j denotes the j-th row of Π) and the matrix-vector multiplications involving Φ in parallel. ∎

5. CONCLUSION

We presented a simple NC parallel algorithm for packing SDPs that requires O((1/ε⁴) log⁴ n log(1/ε)) iterations, where each iteration involves only simple matrix operations and computing the trace of the product of a matrix exponential and a positive semidefinite matrix. When a positive SDP is given in a factorized form, we showed how the dot product with a matrix exponential can be implemented in nearly-linear work, leading to an algorithm with Õ(m + n + q) work, where n is the number of constraint matrices, m is the dimension of these matrices, and q is the total number of nonzero entries in the factorization.

Compared to the situation with positive LPs, the classification of positive SDPs is much richer because packing/covering constraints can take many forms, either as matrices (e.g., Σ_i x_i A_i ⪯ I for packing, Σ_i x_i A_i ⪰ I for covering) or as dot products between matrices (e.g., A_i • Y ≤ 1 for packing, A_i • Y ≥ 1 for covering). The positive SDPs studied in [JY11] and in this paper should be compared with the closely related notion of covering SDPs studied by Iyengar et al. [IPS10]; however, among the applications they examine, only the beamforming SDP relaxation discussed in Section 2.2 of [IPS10] falls completely within the framework of packing SDPs as defined in (1.2). Problems such as MaxCut and Sparsest Cut require additional matrix-based packing constraints.
We believe extending these algorithms to solve mixed packing/covering SDPs is an interesting direction for future work.

Acknowledgments

This work is partially supported by the National Science Foundation under grant numbers CCF-08463, CCF-0888, and CCF-, and by generous gifts from IBM, Intel, and Microsoft. Richard Peng is supported by a Microsoft Fellowship. We thank the SPAA reviewers for suggestions that helped improve this paper.

References

[AHK05] Sanjeev Arora, Elad Hazan, and Satyen Kale. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. In FOCS, 2005.
[AK07] Sanjeev Arora and Satyen Kale. A combinatorial, primal-dual approach to semidefinite programs. In STOC, 2007.
[DG03] Sanjoy Dasgupta and Anupam Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Struct. Algorithms, 22(1):60–65, 2003.
[GK98] Naveen Garg and Jochen Könemann. Faster and simpler algorithms for multicommodity flow and other fractional packing problems. In Proceedings of the 39th Symposium on the Foundations of Computer Science (FOCS), 1998.
[GLS93] Martin Grötschel, László Lovász, and Alexander Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, New York, 2nd edition, 1993.
[IM98] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Proceedings of the 30th ACM Symposium on the Theory of Computing (STOC), 1998.
[IPS10] Garud Iyengar, David J. Phillips, and Clifford Stein. Feasible and accurate algorithms for covering semidefinite programs. In SWAT, 2010.
[IPS11] Garud Iyengar, David J. Phillips, and Clifford Stein. Approximating semidefinite packing programs. SIAM Journal on Optimization, 21(1), 2011.
[JáJ92] Joseph JáJá. An Introduction to Parallel Algorithms. Addison-Wesley, 1992.
[JY11] Rahul Jain and Penghui Yao. A parallel approximation algorithm for positive semidefinite programming. In FOCS, 2011.
[JY12] Rahul Jain and Penghui Yao. A parallel approximation algorithm for mixed packing and covering semidefinite programs. CoRR, 2012.
[Kal07] Satyen Kale. Efficient Algorithms Using the Multiplicative Weights Update Method. PhD thesis, Princeton University, August 2007.
[KL96] Philip N. Klein and Hsueh-I Lu. Efficient approximation algorithms for semidefinite programs arising from MAX CUT and COLORING. In STOC, 1996.
[KY07] Christos Koufogiannakis and Neal E. Young. Beating simplex for fractional packing and covering linear programs. In FOCS, 2007.
[KY09] Christos Koufogiannakis and Neal E. Young. Distributed and parallel algorithms for weighted vertex cover and other covering problems. In PODC, 2009.
[LN93] Michael Luby and Noam Nisan. A parallel approximation algorithm for positive linear programming. In STOC, New York, NY, USA, 1993.
[PST95] Serge A. Plotkin, David B. Shmoys, and Éva Tardos. Fast approximation algorithms for fractional packing and covering problems. Math. Oper. Res., 20(2), 1995.
[You01] Neal E. Young. Sequential and parallel algorithms for mixed packing and covering. In FOCS, 2001.

APPENDIX

A. NORMALIZED POSITIVE SDPS

This is the same transformation that Jain and Yao presented in [JY11]; we present it here only for easy reference. Consider the primal program in (1.1). It suffices to show that it can be transformed into the following program without changing the optimal value:

    Minimize    Tr Z
    Subject to: Ā_i • Z ≥ 1   for i = 1, ..., n
                Z ⪰ 0.                                        (A.1)

We can make the following assumptions without loss of generality: First, b_i > 0 for all i = 1, ..., n, because if b_i were 0, we could simply discard the constraint. Second, all the A_i's are supported on the support of C; otherwise, we know that the corresponding dual variable must be set to 0 and the constraint can be removed right away. We may therefore treat C as having full rank, allowing us to define

    Ā_i := (1/b_i) C^{-1/2} A_i C^{-1/2}.

It is not hard to verify that the normalized program (A.1) has the same optimal value as the original SDP (1.1). Note that if we are given a factorization of A_i into Q_iᵀ Q_i, then Ā_i can also be factorized as

    Ā_i = (1/b_i) (C^{-1/2} Q_iᵀ)(Q_i C^{-1/2}).

Furthermore, it can be checked that the dual of the normalized program is the same as the dual in Equation (1.2).
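The normalization above can be checked numerically in the special case of diagonal matrices, where C^{-1/2} acts entrywise on the diagonal. The data below is illustrative; the check confirms that constraint values and objectives agree under the map Z = C^{1/2} Y C^{1/2}.

```python
import math

# Sketch of the Appendix A normalization for DIAGONAL matrices, so that
# C^{-1/2} is entrywise 1/sqrt on the diagonal and all products commute.
# Abar_i = (1/b_i) C^{-1/2} A_i C^{-1/2}, applied entrywise to diagonals.

C  = [4.0, 1.0]                          # diagonal of C (positive => full rank)
As = [[2.0, 3.0], [1.0, 0.5]]            # diagonals of A_1, A_2
bs = [2.0, 0.5]                          # positive right-hand sides b_i

Cinv_half = [1.0 / math.sqrt(c) for c in C]
Abars = [[A[j] * Cinv_half[j] ** 2 / b for j in range(2)]
         for A, b in zip(As, bs)]

# A point Y of the original program maps to Z = C^{1/2} Y C^{1/2} in the
# normalized one: A_i • Y >= b_i becomes Abar_i • Z >= 1, and Tr Z = C • Y.
Y = [1.0, 3.0]                           # diagonal of a candidate Y
Z = [C[j] * Y[j] for j in range(2)]      # diagonal case of C^{1/2} Y C^{1/2}

for A, b, Abar in zip(As, bs, Abars):
    lhs_orig = sum(A[j] * Y[j] for j in range(2)) / b
    lhs_norm = sum(Abar[j] * Z[j] for j in range(2))
    assert abs(lhs_orig - lhs_norm) < 1e-9
assert abs(sum(Z) - sum(C[j] * Y[j] for j in range(2))) < 1e-9
print("normalized constraints:", Abars)
```

For non-diagonal C, the same check goes through with matrix square roots in place of the entrywise ones; the diagonal restriction here is only to keep the sketch dependency-free.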

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

Lecture 17. Solving LPs/SDPs using Multiplicative Weights Multiplicative Weights

Lecture 17. Solving LPs/SDPs using Multiplicative Weights Multiplicative Weights Lecture 7 Solvng LPs/SDPs usng Multplcatve Weghts In the last lecture we saw the Multplcatve Weghts (MW) algorthm and how t could be used to effectvely solve the experts problem n whch we have many experts

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

COS 521: Advanced Algorithms Game Theory and Linear Programming

COS 521: Advanced Algorithms Game Theory and Linear Programming COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton

More information

The Minimum Universal Cost Flow in an Infeasible Flow Network

The Minimum Universal Cost Flow in an Infeasible Flow Network Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem. prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 ) Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016 CS 29-128: Algorthms and Uncertanty Lecture 17 Date: October 26, 2016 Instructor: Nkhl Bansal Scrbe: Mchael Denns 1 Introducton In ths lecture we wll be lookng nto the secretary problem, and an nterestng

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

Solutions to exam in SF1811 Optimization, Jan 14, 2015

Solutions to exam in SF1811 Optimization, Jan 14, 2015 Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Finding Dense Subgraphs in G(n, 1/2)

Finding Dense Subgraphs in G(n, 1/2) Fndng Dense Subgraphs n Gn, 1/ Atsh Das Sarma 1, Amt Deshpande, and Rav Kannan 1 Georga Insttute of Technology,atsh@cc.gatech.edu Mcrosoft Research-Bangalore,amtdesh,annan@mcrosoft.com Abstract. Fndng

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

Solutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution.

Solutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution. Solutons HW #2 Dual of general LP. Fnd the dual functon of the LP mnmze subject to c T x Gx h Ax = b. Gve the dual problem, and make the mplct equalty constrants explct. Soluton. 1. The Lagrangan s L(x,

More information

Eigenvalues of Random Graphs

Eigenvalues of Random Graphs Spectral Graph Theory Lecture 2 Egenvalues of Random Graphs Danel A. Spelman November 4, 202 2. Introducton In ths lecture, we consder a random graph on n vertces n whch each edge s chosen to be n the

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

U.C. Berkeley CS278: Computational Complexity Professor Luca Trevisan 2/21/2008. Notes for Lecture 8

U.C. Berkeley CS278: Computational Complexity Professor Luca Trevisan 2/21/2008. Notes for Lecture 8 U.C. Berkeley CS278: Computatonal Complexty Handout N8 Professor Luca Trevsan 2/21/2008 Notes for Lecture 8 1 Undrected Connectvty In the undrected s t connectvty problem (abbrevated ST-UCONN) we are gven

More information

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

Communication Complexity 16:198: February Lecture 4. x ij y ij

Communication Complexity 16:198: February Lecture 4. x ij y ij Communcaton Complexty 16:198:671 09 February 2010 Lecture 4 Lecturer: Troy Lee Scrbe: Rajat Mttal 1 Homework problem : Trbes We wll solve the thrd queston n the homework. The goal s to show that the nondetermnstc

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

Exercises. 18 Algorithms

Exercises. 18 Algorithms 18 Algorthms Exercses 0.1. In each of the followng stuatons, ndcate whether f = O(g), or f = Ω(g), or both (n whch case f = Θ(g)). f(n) g(n) (a) n 100 n 200 (b) n 1/2 n 2/3 (c) 100n + log n n + (log n)

More information

1 Convex Optimization

1 Convex Optimization Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,

More information

Computing Correlated Equilibria in Multi-Player Games

Computing Correlated Equilibria in Multi-Player Games Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,

More information

NP-Completeness : Proofs

NP-Completeness : Proofs NP-Completeness : Proofs Proof Methods A method to show a decson problem Π NP-complete s as follows. (1) Show Π NP. (2) Choose an NP-complete problem Π. (3) Show Π Π. A method to show an optmzaton problem

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

Remarks on the Properties of a Quasi-Fibonacci-like Polynomial Sequence

Remarks on the Properties of a Quasi-Fibonacci-like Polynomial Sequence Remarks on the Propertes of a Quas-Fbonacc-lke Polynomal Sequence Brce Merwne LIU Brooklyn Ilan Wenschelbaum Wesleyan Unversty Abstract Consder the Quas-Fbonacc-lke Polynomal Sequence gven by F 0 = 1,

More information

Affine transformations and convexity

Affine transformations and convexity Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/

More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mt.edu 6.854J / 18.415J Advanced Algorthms Fall 2008 For nformaton about ctng these materals or our Terms of Use, vst: http://ocw.mt.edu/terms. 18.415/6.854 Advanced Algorthms

More information

CSC 411 / CSC D11 / CSC C11

CSC 411 / CSC D11 / CSC C11 18 Boostng s a general strategy for learnng classfers by combnng smpler ones. The dea of boostng s to take a weak classfer that s, any classfer that wll do at least slghtly better than chance and use t

More information

Perron Vectors of an Irreducible Nonnegative Interval Matrix

Perron Vectors of an Irreducible Nonnegative Interval Matrix Perron Vectors of an Irreducble Nonnegatve Interval Matrx Jr Rohn August 4 2005 Abstract As s well known an rreducble nonnegatve matrx possesses a unquely determned Perron vector. As the man result of

More information

Lecture 11. minimize. c j x j. j=1. 1 x j 0 j. +, b R m + and c R n +

Lecture 11. minimize. c j x j. j=1. 1 x j 0 j. +, b R m + and c R n + Topcs n Theoretcal Computer Scence May 4, 2015 Lecturer: Ola Svensson Lecture 11 Scrbes: Vncent Eggerlng, Smon Rodrguez 1 Introducton In the last lecture we covered the ellpsod method and ts applcaton

More information

Lecture 2: Gram-Schmidt Vectors and the LLL Algorithm

Lecture 2: Gram-Schmidt Vectors and the LLL Algorithm NYU, Fall 2016 Lattces Mn Course Lecture 2: Gram-Schmdt Vectors and the LLL Algorthm Lecturer: Noah Stephens-Davdowtz 2.1 The Shortest Vector Problem In our last lecture, we consdered short solutons to

More information

Random Walks on Digraphs

Random Walks on Digraphs Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Inexact Newton Methods for Inverse Eigenvalue Problems

Inexact Newton Methods for Inverse Eigenvalue Problems Inexact Newton Methods for Inverse Egenvalue Problems Zheng-jan Ba Abstract In ths paper, we survey some of the latest development n usng nexact Newton-lke methods for solvng nverse egenvalue problems.

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,

More information

MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS

MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS These are nformal notes whch cover some of the materal whch s not n the course book. The man purpose s to gve a number of nontrval examples

More information

The Second Anti-Mathima on Game Theory

The Second Anti-Mathima on Game Theory The Second Ant-Mathma on Game Theory Ath. Kehagas December 1 2006 1 Introducton In ths note we wll examne the noton of game equlbrum for three types of games 1. 2-player 2-acton zero-sum games 2. 2-player

More information

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of

More information

Lecture 17: Lee-Sidford Barrier

Lecture 17: Lee-Sidford Barrier CSE 599: Interplay between Convex Optmzaton and Geometry Wnter 2018 Lecturer: Yn Tat Lee Lecture 17: Lee-Sdford Barrer Dsclamer: Please tell me any mstake you notced. In ths lecture, we talk about the

More information

Lecture 4: Universal Hash Functions/Streaming Cont d

Lecture 4: Universal Hash Functions/Streaming Cont d CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected

More information

Some Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS)

Some Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS) Some Comments on Acceleratng Convergence of Iteratve Sequences Usng Drect Inverson of the Iteratve Subspace (DIIS) C. Davd Sherrll School of Chemstry and Bochemstry Georga Insttute of Technology May 1998

More information

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1 C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

Finding Primitive Roots Pseudo-Deterministically

Finding Primitive Roots Pseudo-Deterministically Electronc Colloquum on Computatonal Complexty, Report No 207 (205) Fndng Prmtve Roots Pseudo-Determnstcally Ofer Grossman December 22, 205 Abstract Pseudo-determnstc algorthms are randomzed search algorthms

More information

Lecture 10: May 6, 2013

Lecture 10: May 6, 2013 TTIC/CMSC 31150 Mathematcal Toolkt Sprng 013 Madhur Tulsan Lecture 10: May 6, 013 Scrbe: Wenje Luo In today s lecture, we manly talked about random walk on graphs and ntroduce the concept of graph expander,

More information

Support Vector Machines. Vibhav Gogate The University of Texas at dallas

Support Vector Machines. Vibhav Gogate The University of Texas at dallas Support Vector Machnes Vbhav Gogate he Unversty of exas at dallas What We have Learned So Far? 1. Decson rees. Naïve Bayes 3. Lnear Regresson 4. Logstc Regresson 5. Perceptron 6. Neural networks 7. K-Nearest

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

a b a In case b 0, a being divisible by b is the same as to say that

a b a In case b 0, a being divisible by b is the same as to say that Secton 6.2 Dvsblty among the ntegers An nteger a ε s dvsble by b ε f there s an nteger c ε such that a = bc. Note that s dvsble by any nteger b, snce = b. On the other hand, a s dvsble by only f a = :

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

Lecture 4: Constant Time SVD Approximation

Lecture 4: Constant Time SVD Approximation Spectral Algorthms and Representatons eb. 17, Mar. 3 and 8, 005 Lecture 4: Constant Tme SVD Approxmaton Lecturer: Santosh Vempala Scrbe: Jangzhuo Chen Ths topc conssts of three lectures 0/17, 03/03, 03/08),

More information

Lecture 3. Ax x i a i. i i

Lecture 3. Ax x i a i. i i 18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest

More information

Lecture Space-Bounded Derandomization

Lecture Space-Bounded Derandomization Notes on Complexty Theory Last updated: October, 2008 Jonathan Katz Lecture Space-Bounded Derandomzaton 1 Space-Bounded Derandomzaton We now dscuss derandomzaton of space-bounded algorthms. Here non-trval

More information

Min Cut, Fast Cut, Polynomial Identities

Min Cut, Fast Cut, Polynomial Identities Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.

More information

Norm Bounds for a Transformed Activity Level. Vector in Sraffian Systems: A Dual Exercise

Norm Bounds for a Transformed Activity Level. Vector in Sraffian Systems: A Dual Exercise ppled Mathematcal Scences, Vol. 4, 200, no. 60, 2955-296 Norm Bounds for a ransformed ctvty Level Vector n Sraffan Systems: Dual Exercse Nkolaos Rodousaks Department of Publc dmnstraton, Panteon Unversty

More information

LECTURE 9 CANONICAL CORRELATION ANALYSIS

LECTURE 9 CANONICAL CORRELATION ANALYSIS LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

Vector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.

Vector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence. Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm

More information

Structure and Drive Paul A. Jensen Copyright July 20, 2003

Structure and Drive Paul A. Jensen Copyright July 20, 2003 Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.

More information

Lecture 5 September 17, 2015

Lecture 5 September 17, 2015 CS 229r: Algorthms for Bg Data Fall 205 Prof. Jelan Nelson Lecture 5 September 7, 205 Scrbe: Yakr Reshef Recap and overvew Last tme we dscussed the problem of norm estmaton for p-norms wth p > 2. We had

More information

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique Outlne and Readng Dynamc Programmng The General Technque ( 5.3.2) -1 Knapsac Problem ( 5.3.3) Matrx Chan-Product ( 5.3.1) Dynamc Programmng verson 1.4 1 Dynamc Programmng verson 1.4 2 Dynamc Programmng

More information

1 GSW Iterative Techniques for y = Ax

1 GSW Iterative Techniques for y = Ax 1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn

More information

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng

More information

Lecture 4. Instructor: Haipeng Luo

Lecture 4. Instructor: Haipeng Luo Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would

More information

Hidden Markov Models

Hidden Markov Models Hdden Markov Models Namrata Vaswan, Iowa State Unversty Aprl 24, 204 Hdden Markov Model Defntons and Examples Defntons:. A hdden Markov model (HMM) refers to a set of hdden states X 0, X,..., X t,...,

More information

On the Multicriteria Integer Network Flow Problem

On the Multicriteria Integer Network Flow Problem BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of

More information

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo

More information

Approximate Smallest Enclosing Balls

Approximate Smallest Enclosing Balls Chapter 5 Approxmate Smallest Enclosng Balls 5. Boundng Volumes A boundng volume for a set S R d s a superset of S wth a smple shape, for example a box, a ball, or an ellpsod. Fgure 5.: Boundng boxes Q(P

More information

Game Theory. Lecture Notes By Y. Narahari. Department of Computer Science and Automation Indian Institute of Science Bangalore, India February 2008

Game Theory. Lecture Notes By Y. Narahari. Department of Computer Science and Automation Indian Institute of Science Bangalore, India February 2008 Game Theory Lecture Notes By Y. Narahar Department of Computer Scence and Automaton Indan Insttute of Scence Bangalore, Inda February 2008 Chapter 10: Two Person Zero Sum Games Note: Ths s a only a draft

Games of Threats. Elon Kohlberg Abraham Neyman. Working Paper

Games of Threats. Elon Kohlberg, Harvard Business School; Abraham Neyman, The Hebrew University of Jerusalem. Working Paper 18-023. Copyright 2017

Online Classification: Perceptron and Winnow

E0 370 Statistical Learning Theory. Lecture 18, Nov 8, 2011. Online Classification: Perceptron and Winnow. Lecturer: Shivani Agarwal. Scribe: Shivani Agarwal. 1 Introduction. In this lecture we will start to study the online learning
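The classical Perceptron update the lecture title refers to can be sketched in a few lines: on a mistake (y * <w, x> <= 0), set w <- w + y * x. The toy data and epoch count below are illustrative assumptions.

```python
def perceptron(examples, epochs=10):
    """Online Perceptron over several passes: update the weight vector
    only on examples it currently misclassifies."""
    dim = len(examples[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in examples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:  # mistake
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy data with labels in {-1, +1}.
data = [((1.0, 1.0), 1), ((2.0, 0.5), 1),
        ((-1.0, -1.0), -1), ((-0.5, -2.0), -1)]
w = perceptron(data)
```

On separable data the mistake-driven updates stop once every example has positive margin.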

Perfect Competition and the Nash Bargaining Solution

Perfect Competition and the Nash Bargaining Solution. Reinhard John, Department of Economics, University of Bonn, Adenauerallee 24-42, 53113 Bonn, Germany. Email: rjohn@uni-bonn.de. May 2005. Abstract: For a linear exchange

THE SUMMATION NOTATION Ʃ

Single Subscript Notation. THE SUMMATION NOTATION Ʃ. Most of the calculations we perform in statistics are repetitive operations on lists of numbers. For example, we compute the sum of a set of numbers, or the sum of the
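The repetitive operation the snippet describes, Ʃ_i x_i, is just accumulation in a loop. A minimal sketch (the sample scores are made-up numbers):

```python
def sigma(xs):
    """Single-subscript summation: accumulate x_1 + x_2 + ... + x_n."""
    total = 0
    for x in xs:
        total += x
    return total

scores = [2, 4, 6, 8]
result = sigma(scores)  # -> 20, same as Python's built-in sum(scores)
```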

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS. DUPONT OLIVIER. Premia 14. Contents: Introduction 1. 1. Framework 1. 1.1. Basket options. 1.2. Asian options. 2. Computing the price 3. 3. Lower bound. 3.1. Closed formula for the price

1 The Mistake Bound Model

15-850: Advanced Algorithms, CMU, Spring 2017. Lecture #: Online Learning and Multiplicative Weights. February 7, 2017. Lecturer: Anupam Gupta. Scribes: Bryan Lee, Albert Gu, Eugene Cho. 1 The Mistake Bound Model. Suppose there
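A standard algorithm in the mistake bound model with multiplicative weights is Weighted Majority: predict with the weighted vote of the experts and multiply the weight of each erring expert by beta. A minimal sketch (the penalty `beta` and the toy prediction sequence are illustrative assumptions):

```python
def weighted_majority(predictions, outcomes, beta=0.5):
    """Weighted Majority over {0,1} predictions: take the weighted vote,
    then multiply the weight of every wrong expert by beta.
    Returns the number of mistakes the algorithm itself makes."""
    n = len(predictions[0])
    w = [1.0] * n
    mistakes = 0
    for preds, y in zip(predictions, outcomes):
        vote_one = sum(wi for wi, p in zip(w, preds) if p == 1)
        guess = 1 if vote_one >= sum(w) / 2 else 0
        if guess != y:
            mistakes += 1
        w = [wi * beta if p != y else wi for wi, p in zip(w, preds)]
    return mistakes

# Three experts over three rounds; expert 0 is always right.
preds = [(1, 0, 1), (0, 0, 1), (1, 1, 0)]
outcomes = [1, 0, 1]
m = weighted_majority(preds, outcomes)
```

Because one expert is perfect, the algorithm's mistake count stays within a logarithmic factor of that expert's (here it makes none).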

SL n (F ) Equals its Own Derived Group

International Journal of Algebra, Vol. 2, 2008, no. 12, 585-594. SL_n(F) Equals its Own Derived Group. Jorge Maciel, BMCC-The City University of New York, CUNY, 199 Chambers Street, New York, NY 10007, USA. maciel@cims.nyu.edu

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J, Fall 2013, Lecture 12, 10/21/2013. Martingale Concentration Inequalities and Applications. Content: 1. Exponential concentration for martingales with bounded increments.
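The standard exponential concentration bound for martingales with bounded increments, which the listed first topic refers to, is the Azuma-Hoeffding inequality:

```latex
% Azuma--Hoeffding: if (M_t) is a martingale with |M_t - M_{t-1}| \le c_t,
% then for every \lambda > 0,
\Pr\bigl(|M_n - M_0| \ge \lambda\bigr)
  \;\le\; 2\exp\!\left(-\frac{\lambda^2}{2\sum_{t=1}^{n} c_t^2}\right).
```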
