Lecture 17: Solving LPs/SDPs using Multiplicative Weights


(Lecturer: Anupam Gupta. Scribe: Tim Wilson.)

In the last lecture we saw the Multiplicative Weights (MW) algorithm and how it could be used to effectively solve the experts problem, in which we have many experts and wish to make predictions that are approximately as good as the predictions made by the best expert. In this lecture we will see how to apply the MW algorithm to efficiently approximate the optimal solution to LPs and SDPs.

17.1 Multiplicative Weights

Recall the following result from Lecture 16 about the Hedge algorithm:

Theorem 17.1. Suppose the cost vectors are m^(t) ∈ [-1, 1]^N. Then for any ε ≤ 1, and for any T, the Hedge algorithm guarantees that for all i ∈ [N],

    Σ_{t=1}^T p^(t) · m^(t) ≤ Σ_{t=1}^T m_i^(t) + εT + (ln N)/ε.

So the total cost paid by the algorithm is no more than an additive factor of εT + (ln N)/ε worse than the cost incurred by any individual component of the cost vector. Theorem 17.1 implies a similar result for the average cost incurred per round. (One can get a similar result for the MW algorithm, where instead of the update rule w_i^(t+1) ← w_i^(t) exp(-ε m_i^(t)) we use the rule w_i^(t+1) ← w_i^(t) (1 - ε m_i^(t)).)

Corollary 17.2. Suppose the cost vectors are m^(t) ∈ [-ρ, ρ]^N. Then for any ε ≤ 1/2, and for any T ≥ (4 ρ² ln N)/ε², the Hedge algorithm guarantees that for all i ∈ [N],

    (1/T) Σ_{t=1}^T p^(t) · m^(t) ≤ (1/T) Σ_{t=1}^T m_i^(t) + ε.
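The Hedge update rule above can be sketched in a few lines. This is a minimal illustration rather than the notes' own code; the function name and the use of NumPy are my choices, and the costs are taken as a precomputed T×N array rather than being revealed online.

```python
import numpy as np

def hedge(costs, eps):
    """Run Hedge on a T x N array of cost vectors m^(t) in [-1, 1]^N.

    Returns the T x N array of probability vectors p^(1), ..., p^(T) played.
    Update rule from Theorem 17.1: w_i <- w_i * exp(-eps * m_i^(t)).
    """
    T, N = costs.shape
    w = np.ones(N)                       # all experts start equally weighted
    ps = []
    for t in range(T):
        p = w / w.sum()                  # p^(t): normalized weights
        ps.append(p)
        w = w * np.exp(-eps * costs[t])  # penalize high-cost experts
    return np.array(ps)
```

The guarantee of Theorem 17.1 can then be checked numerically: the algorithm's total cost Σ_t p^(t)·m^(t) stays within εT + (ln N)/ε of every single expert's total cost.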

Note: We did not cover this in lecture, but one can show that if the cost vectors are in [0, ρ]^N, then using the MW algorithm the setting T ≥ (4 ρ ln N)/ε² suffices to get the same guarantee:

Lemma 17.3. Suppose the cost vectors are m^(t) ∈ [0, ρ]^N. Then for any ε ≤ 1/2, and for any T ≥ (4 ρ ln N)/ε², the MW algorithm guarantees that for all i ∈ [N],

    (1/T) Σ_{t=1}^T p^(t) · m^(t) ≤ (1/T) Σ_{t=1}^T m_i^(t) + ε.

A proof of this can be found in the Arora, Hazan, and Kale survey [AHK05].

17.2 Solving LPs with Multiplicative Weights

We will use the MW algorithm to help solve LPs with m constraints, of the form

    min c⊤x
    s.t. Ax ≥ b
         x ≥ 0.

Supposing that we know c⊤x = OPT (by binary search), we will aim to find an ε-approximate solution x̄ such that

    c⊤x̄ = OPT
    Ax̄ ≥ b - ε1
    x̄ ≥ 0,

or output "infeasible" if no solution exists. The runtime for this will be O((ρ² log m)/ε²) oracle calls, where ρ is the width of the LP, which will be defined shortly.

17.2.1 Simplifying the Constraints

Instead of searching for solutions x ∈ R^n, we will package together the easy constraints into the simple convex region

    K = {x ∈ R^n : x ≥ 0, c⊤x = OPT}.

Now we wish to solve Ax ≥ b such that x ∈ K. Note that this is particularly easy to solve if Ax ≥ b is only one constraint, i.e., we are trying to determine whether there exists x ∈ K such that α⊤x ≥ β for some α ∈ R^n, β ∈ R. For example, if c ≥ 0 and OPT · max_i (α_i/c_i) ≥ β, we can set x = (OPT/c_i) e_i for the maximizing index i, which will satisfy our constraints; else we could output "Infeasible". For general c we are essentially reduced to solving an LP over two constraints, which, while not as trivial as this, is still simple. We will henceforth assume we have an oracle that, given α ∈ R^n, β ∈ R, and K ⊆ R^n, either returns x ∈ K such that α⊤x ≥ β, or correctly asserts that there is no such x.
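For the special case c > 0 discussed above, the oracle over K = {x ≥ 0, c⊤x = OPT} can be sketched as follows. The function name is hypothetical, and the code assumes strictly positive c so that the vertex (OPT/c_i) e_i is well defined.

```python
import numpy as np

def oracle_K(alpha, beta, c, opt):
    """Oracle for K = {x >= 0, c.x = opt}, assuming strictly positive c.

    Maximizing alpha.x over K puts all the mass on the coordinate i
    maximizing alpha_i / c_i, at value opt * (alpha_i / c_i).  Return that
    vertex if it satisfies alpha.x >= beta, else None (no x in K works).
    """
    i = int(np.argmax(alpha / c))
    x = np.zeros(len(c))
    x[i] = opt / c[i]          # vertex of K maximizing alpha.x
    if alpha @ x >= beta:
        return x
    return None
```

Since the maximum of a linear function over the simplex-like region K is attained at a vertex, returning None here really does certify that no x ∈ K satisfies the constraint.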

17.2.2 Using Multiplicative Weights

We will use this oracle, which allows us to satisfy one constraint (α⊤x ≥ β) for x ∈ K, along with the MW algorithm, to get an algorithm satisfying all of the constraints Ax ≥ b for x ∈ K. Each of the constraints a_i⊤x ≥ b_i will be viewed as an expert, for a total of m experts. Each round we produce a vector p^(t) that gives us a convex combination of the constraints:

    α^(t)⊤ x ≥ β^(t),    where α^(t)⊤ := p^(t)⊤A and β^(t) := p^(t)⊤b.

Using our oracle, we can determine whether α^(t)⊤x ≥ β^(t) has some solution x^(t) ∈ K, or if no such solution exists. Clearly if no solution exists, then Ax ≥ b is infeasible over K, so our LP is infeasible. (It is easy to see the contrapositive: if there were a solution x to Ax ≥ b, x ∈ K, then this vector x would also satisfy α^(t)⊤x ≥ β^(t); here we use the fact that p^(t) ≥ 0.) Moreover, the vector p^(t) serves as proof of this infeasibility. Otherwise, we set our cost vector to m_i^(t) = a_i⊤x^(t) - b_i, update our weights, and proceed with the next round. If we have not determined the LP to be infeasible after T rounds, we terminate and return the solution x̄ = (1/T) Σ_{t=1}^T x^(t).

Why do we set our cost vectors this way? It almost seems like we should incur no cost when a_i⊤x^(t) - b_i ≥ 0 (i.e., when we satisfy this constraint), whereas here we incur a higher cost the more we satisfy it. The idea is that whenever a_i⊤x^(t) - b_i is positive, we have oversatisfied the constraint. Giving a positive cost to this constraint causes us to reduce its weight in the next round. This works analogously to the experts problem, where an expert who is wrong (has high cost) is given less credence (less weight) in future rounds. Similarly, for any constraint for which a_i⊤x^(t) - b_i is negative, we have failed the constraint. Giving a negative cost to this constraint causes us to increase its weight in the next round. Initially we set all of our weights equal, to express our ignorance; all constraints are equally hard.

Whenever we update our weights, we reduce the weights of constraints we oversatisfied, so we will cover them less in future rounds, and we increase the weights of constraints we did not satisfy, so we will cover them more in future rounds. Our hope is that over time this will converge to a solution where we satisfy all constraints to a roughly equal extent.

17.2.3 Analyzing Multiplicative Weights

Supposing that we do not discover our LP is infeasible, how many rounds should we run, and how good will our solution be? If we define

    ρ = max{1, max_{i, x∈K} |a_i⊤x - b_i|}
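The round structure described above can be sketched as follows. This is an illustrative sketch with names of my own choosing: the width-based stopping rule is omitted and the number of rounds T is passed in directly.

```python
import numpy as np

def mw_lp(A, b, oracle, eps, T):
    """MW-based epsilon-feasibility for Ax >= b over x in K.

    `oracle(alpha, beta)` must return some x in K with alpha.x >= beta,
    or None if no such x exists.  Returns the average iterate x_bar
    (approximately satisfying Ax >= b - eps), or None if some round
    certifies infeasibility.
    """
    m, n = A.shape
    w = np.ones(m)                      # one expert per constraint
    xs = []
    for t in range(T):
        p = w / w.sum()
        x = oracle(p @ A, p @ b)        # one combined constraint
        if x is None:
            return None                 # p proves Ax >= b infeasible over K
        xs.append(x)
        cost = A @ x - b                # m_i^(t) = a_i.x^(t) - b_i
        w = w * np.exp(-eps * cost)     # down-weight oversatisfied rows
    return np.mean(xs, axis=0)          # x_bar = average of the iterates
```

A toy usage, with K taken to be the probability simplex (so the oracle just returns the best coordinate vector): over many rounds the averaged iterate spreads its mass to approximately satisfy all constraints at once, exactly the "cover all constraints to a roughly equal extent" behavior described above.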

to be the maximum magnitude of any cost assigned to a constraint, then we may immediately apply Corollary 17.2 to find that after T = (4 ρ² ln m)/ε² rounds,

    (1/T) Σ_{t=1}^T p^(t) · m^(t) ≤ (1/T) Σ_{t=1}^T m_i^(t) + ε,

where ε ≤ 1/2, m_i^(t) = a_i⊤x^(t) - b_i ∈ [-ρ, ρ] for all i ∈ [m], and each x^(t) ∈ K. Note that we do not actually need to find ρ; it suffices to keep track of ρ_t = max{1, max_{i, t′≤t} |a_i⊤x^(t′) - b_i|}, the maximum cost seen so far, and run until t ≥ (4 ln m/ε²) ρ_t².

What guarantee do we get? On the left hand side of this inequality we have

    (1/T) Σ_{t=1}^T p^(t) · m^(t) = (1/T) Σ_{t=1}^T p^(t)⊤(Ax^(t) - b) = (1/T) Σ_{t=1}^T (p^(t)⊤A x^(t) - p^(t)⊤b) ≥ 0,

where the final inequality holds due to our oracle's properties. Therefore the left hand side is at least 0. And on the right hand side we have

    (1/T) Σ_{t=1}^T m_i^(t) = (1/T) Σ_{t=1}^T (a_i⊤x^(t) - b_i) = a_i⊤((1/T) Σ_{t=1}^T x^(t)) - b_i = a_i⊤x̄ - b_i.

Combining this with our inequality for the left hand side, we get

    0 ≤ a_i⊤x̄ - b_i + ε,    i.e.,    a_i⊤x̄ ≥ b_i - ε.

Therefore we can obtain an ε-feasible solution to Ax ≥ b, x ∈ K using O((ρ² log m)/ε²) oracle calls, where ρ = max{1, max_{i, x∈K} |a_i⊤x - b_i|} is the width of the LP.

17.2.4 Example: Minimum Set Cover

Recall the minimum fractional set cover problem, with m sets F = {S_1, S_2, ..., S_m} and n elements U. The goal is to pick fractions of sets in order to cover each element to an extent of 1, i.e., to solve the following LP:

    min Σ_S x_S
    s.t. Σ_{S ∋ e} x_S ≥ 1    for all e
         x_S ≥ 0              for all S.

Suppose we know OPT = L ∈ [1, m], so K = {x : Σ_S x_S = L, x_S ≥ 0}. We want to find x ∈ K such that Σ_{S ∋ e} x_S ≥ 1 for all elements e. Our oracle, given some p, must try to find x ∈ K such that

    Σ_e p_e Σ_{S ∋ e} x_S ≥ Σ_e p_e,

and the left hand side equals

    Σ_S x_S Σ_{e ∈ S} p_e = Σ_S x_S p(S),

where p(S) is the total weight of elements in S. This quantity is clearly maximized over K by concentrating on a set with the maximum weight, setting

    x_S = L for some S ∈ F maximizing p(S), and x_S = 0 for all other S.

Note that the width of this LP is at most

    max_{e, x∈K} |Σ_{S ∋ e} x_S - 1| ≤ L ≤ m.

How does the weight update step work? Initially we set w_e^(1) = 1 for all constraints. Whenever an element is overcovered, we reduce its weight, so we don't try as hard to cover it in the next step. Whenever an element is undercovered, we increase its weight, so we try harder to cover it in the next step. Now, after T = 4L² ln n/ε² steps, we will obtain an ε-approximate solution x̄ such that

    Σ_S x̄_S = L
    Σ_{S ∋ e} x̄_S ≥ 1 - ε    for all e
    x̄ ≥ 0.

Note that, in this case, the constraint matrix is completely nonnegative, and we can scale up our solution to get a feasible solution x̂ = x̄/(1 - ε), so that

    Σ_S x̂_S = L/(1 - ε) ≤ L(1 + 2ε)    (for ε ≤ 1/2)
    Σ_{S ∋ e} x̂_S ≥ 1    for all e
    x̂ ≥ 0.
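The set cover oracle above amounts to putting all L units on a set maximizing p(S). A sketch, with hypothetical names; `sets` is a list of element-index arrays, one per set.

```python
import numpy as np

def set_cover_oracle(p, sets, L):
    """Oracle for K = {sum_S x_S = L, x >= 0} from section 17.2.4.

    p is the weight vector over elements; p(S) = sum of p_e for e in S.
    Put all L units on a set maximizing p(S), zero everywhere else.
    """
    weights = [p[S].sum() for S in sets]   # p(S) for each set
    best = int(np.argmax(weights))
    x = np.zeros(len(sets))
    x[best] = L
    return x
```

Plugged into the MW loop, each round this oracle picks the single "most needed" set, and the averaged iterate x̄ is the fractional cover.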

Comments.

1. The scaling we used for minimum set cover to obtain a non-optimal, feasible solution can be applied to any LP where b_i > ε for all i: indeed, we could just multiply all the x values by max_i b_i/(b_i - ε). This is often useful, particularly when we are going to round this LP solution and incur further losses, so losing this factor may be insignificant.

2. If the constraint matrix A is all positive, the problem is said to be a covering problem (we are just interested in putting enough weight on x to cover every constraint). If the constraint matrix is all negative, or equivalently, if we have Ax ≤ b with an all-positive matrix A, the problem is said to be a packing problem (we are packing as much weight into x as possible without violating any constraint). In either case, we can use a similar scaling trick to get a non-optimal, feasible solution, and we can reduce the run-time further. Assume we have a covering problem min{c⊤x : Ax ≥ b, x ≥ 0}. By scaling, we can transform this into a problem of the form min{c⊤x : Ax ≥ 1, x ≥ 0}. The uniform values b_i = 1 allow us to set the cost vectors to m^(t) = Ax^(t) instead of m^(t) = Ax^(t) - 1; this translation does not change the algorithm. But the nonnegative cost vectors allow us to use Lemma 17.3 to reduce the runtime from O((log m/ε²) ρ²) to O((log m/ε²) ρ).

3. In general, the width of our LPs may not turn out to be as nice. For example, in the weighted minimum set cover problem

    min Σ_S c_S x_S
    s.t. Σ_{S ∋ e} x_S ≥ 1    for all e
         x_S ≥ 0              for all S,

the optimum, and hence the width, can increase to as much as m · max_S c_S / min_S c_S. An approach developed by Garg and Könemann [GK07] can be used to solve such problems without the width penalty.

4. The MW algorithm does not need a perfect oracle. Being able to determine, given α ∈ R^n and β ∈ R, either that there is no x ∈ K with α⊤x ≥ β, or else to return an x ∈ K such that α⊤x ≥ β - ε′, is sufficient for our purposes. This gives us solutions x ∈ K such that Ax ≥ b - (ε + ε′)1.

5. There was exactly one point where we used the fact that our constraints were linear: that was in concluding that

    (1/T) Σ_{t=1}^T (a_i⊤x^(t) - b_i) = a_i⊤x̄ - b_i.
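The rescaling in Comment 2, turning a covering system Ax ≥ b into Ax ≥ 1, is just a row-wise division. A tiny sketch (assuming A ≥ 0 and b > 0, with a name of my choosing):

```python
import numpy as np

def normalize_covering(A, b):
    """Rescale a covering system Ax >= b (A >= 0, b > 0) to A'x >= 1
    by dividing row i by b_i.  With uniform right-hand sides, the shifted
    nonnegative costs m^(t) = A'x^(t) can be used in place of A'x^(t) - 1,
    enabling the Lemma 17.3 runtime bound."""
    return A / b[:, None]
```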

However, we can make a similar claim for any set of convex constraints as well: suppose we wanted to find x ∈ K such that f_i(x) ≤ 0 for i ∈ [m], with the f_i's convex. Then, as long as we could implement the oracle, i.e., efficiently find x ∈ K with Σ_i p_i^(t) f_i(x) ≤ 0, the rest of the argument would go through. In particular, in the step where we used linearity, we could instead use convexity:

    (1/T) Σ_{t=1}^T f_i(x^(t)) ≥ f_i((1/T) Σ_{t=1}^T x^(t)) = f_i(x̄).

17.3 Solving SDPs with Multiplicative Weights

Suppose we now move to solving SDPs of the form

    min C • X
    s.t. A_i • X ≥ b_i
         X ⪰ 0.

Note that the first set of constraints are linear; it is only the psd-ness constraint that is non-linear, so we only need to modify our MW algorithm by absorbing the X ⪰ 0 constraint into the oracle. It will also be convenient to require the constraint tr(X) = 1: usually we can guess the trace of the solution X. (If the trace of the solution we seek is not 1 but R, we can scale the problem by R to get unit trace.) Then the oracle we must implement is this: letting K := {X : X ⪰ 0, tr(X) = 1}, given a symmetric matrix A ∈ R^{n×n} and β ∈ R, does there exist X ∈ K such that A • X ≥ β? (Again, A and β will be obtained in the algorithm by setting A^(t) := Σ_i p_i^(t) A_i and β^(t) := Σ_i p_i^(t) b_i.) But we know from Lecture 12 that this is equivalent to asking whether the maximum eigenvalue of the symmetric matrix A is at least β. Indeed, if this is so, and if λ_max is the maximum eigenvalue of A with unit eigenvector x, then

    A • (xx⊤) = tr(Axx⊤) = tr(λ_max xx⊤) = λ_max,

so our oracle should return X = xx⊤; else it should return "Infeasible". Moreover, using Observation #4 on the previous page, it suffices to return x such that x⊤Ax ≥ λ_max - ε. How fast this can be done depends on the particular structure of the matrix A; in the next section we see that for the max-cut problem the matrix A itself is psd, and hence we can find such an x relatively quickly.
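The eigenvalue oracle over K = {X ⪰ 0, tr(X) = 1} can be sketched directly. For clarity this sketch uses a full symmetric eigendecomposition rather than the power method the notes rely on; the function name is mine.

```python
import numpy as np

def sdp_oracle(A, beta):
    """Oracle over K = {X psd, tr(X) = 1}.

    max_{X in K} A.X equals lambda_max(A), attained at X = x x^T for a
    top unit eigenvector x.  Return that rank-one X if lambda_max >= beta,
    else None (certifying the max over K is below beta).
    """
    lam, vecs = np.linalg.eigh(A)        # eigenvalues in ascending order
    lam_max, x = lam[-1], vecs[:, -1]
    if lam_max >= beta:
        return np.outer(x, x)            # X in K with A.X = lambda_max
    return None
```

Note the returned X automatically lies in K: it is psd (rank one with nonnegative eigenvalue 1) and has unit trace since x is a unit vector.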

17.3.1 Example: Max Cut

This part is loosely based on the paper of Klein and Lu [KL96]. Recall the Max Cut SDP we derived in Lecture 12:

    max (1/4) L • X
    s.t. (e_i e_i⊤) • X = 1    for all i
         X ⪰ 0.

As usual, we will think of the edge weights as summing to 1; this means that tr(L) = Σ_i L_ii = Σ_{ij} w_ij = 1 (summing the weights over ordered pairs ij). If we let b = OPT and scale X by 1/n, we are looking for feasibility of the constraints:

    (n/(4b)) L • X ≥ 1
    n (e_i e_i⊤) • X = 1    for all i
    X ⪰ 0.

Finally, if we take K = {X : X ⪰ 0, tr(X) = 1}, the above SDP is equivalent to finding X ∈ K such that

    (n/(4b)) L • X ≥ 1
    n (e_i e_i⊤) • X ≥ 1    for all i.

(This is because tr(X) = 1 means Σ_i X_ii = 1. Since we have the constraints n (e_i e_i⊤) • X = n X_ii ≥ 1, this means X_ii = 1/n for all i.)

By the discussion of the previous section, our oracle will need to check whether there exists X ∈ K such that D^(t) • X ≥ 1, where

    D^(t) = p_0^(t) (n/(4b)) L + Σ_{i=1}^n p_i^(t) n (e_i e_i⊤).

And again, this is equivalent to checking whether λ_max(D^(t)) ≥ 1.

Implementing the oracle. It is useful to note that D^(t) is positive semidefinite: indeed, it is a nonnegative combination of the Laplacian (which is psd) and a bunch of matrices e_i e_i⊤ (which are psd).

Note: In Homework #6, you will show that for any psd matrix D, the power method starting with a random unit vector can find a unit vector x such that D • (xx⊤) ∈ [λ_max(D)/(1 + ε), λ_max(D)]. The algorithm succeeds with high probability, and runs in O(ε⁻¹ m log n) time, where m is the number of edges in G (and hence the number of non-zeroes in L).

So we can run this algorithm: if it answers with an x such that D^(t) • (xx⊤) is smaller than 1/(1 + ε), we answer saying λ_max(D^(t)) < 1. Else we return the vector x: this has the property that D^(t) • (xx⊤) ≥ 1/(1 + ε) ≥ 1 - ε. Now, using Observation #4 on the previous page, we know this will suffice to get a solution that has an O(ε) infeasibility.

Bounding the width. The width of our algorithm is the maximum possible magnitude of D^(t) • X for X ∈ K, i.e., the maximum possible eigenvalue of D^(t). Since D^(t) is positive

semidefinite, all of its eigenvalues are non-negative. Moreover, tr(L) = 1, and also tr(e_i e_i⊤) = 1. So

    λ_max(D^(t)) ≤ Σ_i λ_i(D^(t)) = tr(D^(t))
                 = tr( p_0^(t) (n/(4b)) L + Σ_{i=1}^n p_i^(t) n (e_i e_i⊤) )
                 = p_0^(t) (n/(4b)) tr(L) + Σ_{i=1}^n p_i^(t) n tr(e_i e_i⊤)
                 ≤ n(1 + 1/(4b)).

Finally, the max-cut values we are interested in lie between 1/2 (since the max cut is at least half the edge weight) and 1. So b ∈ [1/2, 1], and the width is O(n).

Running Time. Setting the width ρ = O(n) gives us a runtime of

    O((n² log n)/ε²) · T_oracle,

which we can reduce to

    O((n log n)/ε²) · T_oracle

using Lemma 17.3, since our cost vectors can be made all nonnegative. Finally, plugging in our oracle gives a final runtime of

    O((m n log² n)/ε³),

where m is the number of edges in our graph.

Note: We can now scale the average matrix X̄ by n to get a matrix X̂ satisfying:

    (1/4) L • X̂ ≥ b(1 - ε)
    X̂_ii ≥ 1 - ε
    tr(X̂) = n
    X̂ ⪰ 0.

The attentive reader will observe that this is not as nice as we would like. We would really want each X̂_ii ∈ [1 - ε, 1 + ε]; then we could transform this solution into one where X̂_ii = 1 and (1/4) L • X̂ ≥ b(1 - O(ε)). What we have only guarantees that X̂_ii ∈ [1 - ε, 1 + nε], and so we would need to set ε ≈ 1/n for any non-trivial guarantees. This would still give us a run-time of O(ε⁻³ m n⁴ polylog n), still polynomial (and useful to exemplify the technique), but it could be better. One can avoid this loss by defining K differently, in fact in a way that is similar to Section 17.2.1; the details can be found in [KL96]. One can do even better using matrix multiplicative weights algorithms: see, e.g., [AK07, Ste10].
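The power-method oracle invoked above (the Homework #6 routine) might look roughly as follows. This is a sketch under my own choices of iteration count and naming; the notes only state the guarantee and the O(ε⁻¹ m log n) running time, not an implementation.

```python
import numpy as np

def power_method(D, eps, iters=None, rng=None):
    """Power-method sketch for a psd matrix D.

    Starting from a random unit vector, repeatedly multiply by D and
    renormalize; with high probability the result x has x.D.x close to
    lambda_max(D).  The iteration count below is a heuristic stand-in
    for the O(eps^-1 log n) bound discussed in the notes.
    """
    n = D.shape[0]
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    if iters is None:
        iters = int(np.ceil(np.log(n + 1) / eps)) + 10
    for _ in range(iters):
        x = D @ x
        x /= np.linalg.norm(x)      # multiply-and-normalize step
    return x                        # x.D.x approximates lambda_max(D)
```

In the max-cut oracle one would run this on D^(t), answer λ_max(D^(t)) < 1 if D^(t) • (xx⊤) < 1/(1 + ε), and return x otherwise.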

Bibliography

[AHK05] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta algorithm and applications. Technical report, Princeton University, 2005.

[AK07] Sanjeev Arora and Satyen Kale. A combinatorial, primal-dual approach to semidefinite programs. In STOC, 2007.

[GK07] Naveen Garg and Jochen Könemann. Faster and simpler algorithms for multicommodity flow and other fractional packing problems. SIAM J. Comput., 37(2) (electronic), 2007.

[KL96] Philip Klein and Hsueh-I Lu. Efficient approximation algorithms for semidefinite programs arising from MAX CUT and COLORING. In Proceedings of the Twenty-eighth Annual ACM Symposium on the Theory of Computing (Philadelphia, PA, 1996), New York, 1996. ACM.

[Ste10] David Steurer. Fast SDP algorithms for constraint satisfaction problems. In SODA, 2010.


More information

Foundations of Arithmetic

Foundations of Arithmetic Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an

More information

Affine transformations and convexity

Affine transformations and convexity Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

ρ some λ THE INVERSE POWER METHOD (or INVERSE ITERATION) , for , or (more usually) to

ρ some λ THE INVERSE POWER METHOD (or INVERSE ITERATION) , for , or (more usually) to THE INVERSE POWER METHOD (or INVERSE ITERATION) -- applcaton of the Power method to A some fxed constant ρ (whch s called a shft), x λ ρ If the egenpars of A are { ( λ, x ) } ( ), or (more usually) to,

More information

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora prnceton unv. F 13 cos 521: Advanced Algorthm Desgn Lecture 3: Large devatons bounds and applcatons Lecturer: Sanjeev Arora Scrbe: Today s topc s devaton bounds: what s the probablty that a random varable

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 6 Luca Trevisan September 12, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 6 Luca Trevisan September 12, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 6 Luca Trevsan September, 07 Scrbed by Theo McKenze Lecture 6 In whch we study the spectrum of random graphs. Overvew When attemptng to fnd n polynomal

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms Desgn and Analyss of Algorthms CSE 53 Lecture 4 Dynamc Programmng Junzhou Huang, Ph.D. Department of Computer Scence and Engneerng CSE53 Desgn and Analyss of Algorthms The General Dynamc Programmng Technque

More information

Maximal Margin Classifier

Maximal Margin Classifier CS81B/Stat41B: Advanced Topcs n Learnng & Decson Makng Mamal Margn Classfer Lecturer: Mchael Jordan Scrbes: Jana van Greunen Corrected verson - /1/004 1 References/Recommended Readng 1.1 Webstes www.kernel-machnes.org

More information

Perron Vectors of an Irreducible Nonnegative Interval Matrix

Perron Vectors of an Irreducible Nonnegative Interval Matrix Perron Vectors of an Irreducble Nonnegatve Interval Matrx Jr Rohn August 4 2005 Abstract As s well known an rreducble nonnegatve matrx possesses a unquely determned Perron vector. As the man result of

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Online Classification: Perceptron and Winnow

Online Classification: Perceptron and Winnow E0 370 Statstcal Learnng Theory Lecture 18 Nov 8, 011 Onlne Classfcaton: Perceptron and Wnnow Lecturer: Shvan Agarwal Scrbe: Shvan Agarwal 1 Introducton In ths lecture we wll start to study the onlne learnng

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

p 1 c 2 + p 2 c 2 + p 3 c p m c 2

p 1 c 2 + p 2 c 2 + p 3 c p m c 2 Where to put a faclty? Gven locatons p 1,..., p m n R n of m houses, want to choose a locaton c n R n for the fre staton. Want c to be as close as possble to all the house. We know how to measure dstance

More information

How Strong Are Weak Patents? Joseph Farrell and Carl Shapiro. Supplementary Material Licensing Probabilistic Patents to Cournot Oligopolists *

How Strong Are Weak Patents? Joseph Farrell and Carl Shapiro. Supplementary Material Licensing Probabilistic Patents to Cournot Oligopolists * How Strong Are Weak Patents? Joseph Farrell and Carl Shapro Supplementary Materal Lcensng Probablstc Patents to Cournot Olgopolsts * September 007 We study here the specal case n whch downstream competton

More information

Lecture 5 September 17, 2015

Lecture 5 September 17, 2015 CS 229r: Algorthms for Bg Data Fall 205 Prof. Jelan Nelson Lecture 5 September 7, 205 Scrbe: Yakr Reshef Recap and overvew Last tme we dscussed the problem of norm estmaton for p-norms wth p > 2. We had

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

CHAPTER 17 Amortized Analysis

CHAPTER 17 Amortized Analysis CHAPTER 7 Amortzed Analyss In an amortzed analyss, the tme requred to perform a sequence of data structure operatons s averaged over all the operatons performed. It can be used to show that the average

More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mt.edu 6.854J / 18.415J Advanced Algorthms Fall 2008 For nformaton about ctng these materals or our Terms of Use, vst: http://ocw.mt.edu/terms. 18.415/6.854 Advanced Algorthms

More information

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14

APPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14 APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce

More information

Maximizing the number of nonnegative subsets

Maximizing the number of nonnegative subsets Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum

More information

Faster and Simpler Width-Independent Parallel Algorithms for Positive Semidefinite Programming

Faster and Simpler Width-Independent Parallel Algorithms for Positive Semidefinite Programming Faster and Smpler Wdth-Independent Parallel Algorthms for Postve Semdefnte Programmng Rchard Peng Kanat Tangwongsan Carnege Mellon Unversty {yangp, ktangwon}@cs.cmu.edu ASTRACT Ths paper studes the problem

More information

Eigenvalues of Random Graphs

Eigenvalues of Random Graphs Spectral Graph Theory Lecture 2 Egenvalues of Random Graphs Danel A. Spelman November 4, 202 2. Introducton In ths lecture, we consder a random graph on n vertces n whch each edge s chosen to be n the

More information

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 ) Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often

More information

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

CS286r Assign One. Answer Key

CS286r Assign One. Answer Key CS286r Assgn One Answer Key 1 Game theory 1.1 1.1.1 Let off-equlbrum strateges also be that people contnue to play n Nash equlbrum. Devatng from any Nash equlbrum s a weakly domnated strategy. That s,

More information

Lecture 4: November 17, Part 1 Single Buffer Management

Lecture 4: November 17, Part 1 Single Buffer Management Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input

More information

Tracking with Kalman Filter

Tracking with Kalman Filter Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,

More information

Randić Energy and Randić Estrada Index of a Graph

Randić Energy and Randić Estrada Index of a Graph EUROPEAN JOURNAL OF PURE AND APPLIED MATHEMATICS Vol. 5, No., 202, 88-96 ISSN 307-5543 www.ejpam.com SPECIAL ISSUE FOR THE INTERNATIONAL CONFERENCE ON APPLIED ANALYSIS AND ALGEBRA 29 JUNE -02JULY 20, ISTANBUL

More information

Exercises of Chapter 2

Exercises of Chapter 2 Exercses of Chapter Chuang-Cheh Ln Department of Computer Scence and Informaton Engneerng, Natonal Chung Cheng Unversty, Mng-Hsung, Chay 61, Tawan. Exercse.6. Suppose that we ndependently roll two standard

More information

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0 Bézer curves Mchael S. Floater September 1, 215 These notes provde an ntroducton to Bézer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of

More information

Computing Correlated Equilibria in Multi-Player Games

Computing Correlated Equilibria in Multi-Player Games Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,

More information

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1 C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned

More information

Norms, Condition Numbers, Eigenvalues and Eigenvectors

Norms, Condition Numbers, Eigenvalues and Eigenvectors Norms, Condton Numbers, Egenvalues and Egenvectors 1 Norms A norm s a measure of the sze of a matrx or a vector For vectors the common norms are: N a 2 = ( x 2 1/2 the Eucldean Norm (1a b 1 = =1 N x (1b

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Min Cut, Fast Cut, Polynomial Identities

Min Cut, Fast Cut, Polynomial Identities Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.

More information

Vapnik-Chervonenkis theory

Vapnik-Chervonenkis theory Vapnk-Chervonenks theory Rs Kondor June 13, 2008 For the purposes of ths lecture, we restrct ourselves to the bnary supervsed batch learnng settng. We assume that we have an nput space X, and an unknown

More information

Homework Notes Week 7

Homework Notes Week 7 Homework Notes Week 7 Math 4 Sprng 4 #4 (a Complete the proof n example 5 that s an nner product (the Frobenus nner product on M n n (F In the example propertes (a and (d have already been verfed so we

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13 CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

18.1 Introduction and Recap

18.1 Introduction and Recap CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng

More information

Spectral Graph Theory and its Applications September 16, Lecture 5

Spectral Graph Theory and its Applications September 16, Lecture 5 Spectral Graph Theory and ts Applcatons September 16, 2004 Lecturer: Danel A. Spelman Lecture 5 5.1 Introducton In ths lecture, we wll prove the followng theorem: Theorem 5.1.1. Let G be a planar graph

More information

On the correction of the h-index for career length

On the correction of the h-index for career length 1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat

More information

1 Gradient descent for convex functions: univariate case

1 Gradient descent for convex functions: univariate case prnceton unv. F 13 cos 51: Advanced Algorthm Desgn Lecture 19: Gong wth the slope: offlne, onlne, and randomly Lecturer: Sanjeev Arora Scrbe: hs lecture s about gradent descent, a popular method for contnuous

More information