How to Cope with NP-Completeness


Kurt Mehlhorn

June 12, 2013

NP-complete problems arise frequently in practice. In today's lecture, we discuss various approaches for coping with NP-complete problems. When NP-completeness was discovered, algorithm designers could take it as an excuse: "It is true that I have not found an efficient algorithm. However, nobody will ever find an efficient algorithm (unless P = NP)." Nowadays, NP-completeness is considered a challenge: one searches for exact algorithms with running time c^n for small c, one searches for approximation algorithms with a good approximation ratio, one tries to prove that the problem is hard to approximate unless P = NP, one searches for special cases that are easy to solve or easy to approximate, and one introduces parameters besides the input size to describe the problem and to analyze the running time.

Contents

1 Exact Algorithms
  1.1 Enumeration
  1.2 Branch-and-Bound-and-Cut
2 Approximation Algorithms
  2.1 The Knapsack Problem: A PTAS via Scaling and Dynamic Programming
  2.2 Set Cover: A Greedy Algorithm
  2.3 Maximum Satisfiability: A Randomized Algorithm
  2.4 Vertex Cover
  2.5 Hardness of Approximation
  2.6 The PCP-Theorem (Probabilistically Checkable Proofs)
  2.7 Max-3-SAT is hard to approximate
  2.8 From Max-3-SAT to Vertex Cover
  2.9 Approximation Algorithms for Problems in P
3 General Search Heuristics
4 Parameterized Complexity
  4.1 Parameterized Complexity for Problems in P

1 Exact Algorithms

1.1 Enumeration

The simplest exact algorithm is complete enumeration. For example, in order to find out whether a boolean formula over variables x_1 to x_n is satisfiable, we could simply iterate over all 2^n possible assignments. Many short-cuts are possible. For example, if the formula contains a unit-clause, i.e., a clause consisting of a single literal, this literal must be set to true and hence only half of the assignments have to be tried. Note that unit-clauses arise automatically when we systematically try assignments; e.g., if x_1 ∨ x_2 ∨ x_3 is a clause and we have set x_1 and x_2 to false, then x_3 must be set to true. There are many rules of this kind that can be used to speed up the search for a satisfying assignment. We refer the reader to [NOT06] for an in-depth discussion. There has been significant progress on speeding up SAT-solvers in recent years, and good open source implementations are available.

Local Search: There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)^n, where n is the number of variables, and decides 3-SAT with high probability [Sch99]. The algorithm works as follows (adapted from [Sch01]). We assume that all clauses have at most k literals.

    do T = 2 (2(k − 1)/k)^n ln(1/ε) times (this choice guarantees an error probability of ε):
        choose an assignment x ∈ {0,1}^n uniformly at random
        do C_k · n times (C_k is a (small) constant; see below):
            if x is satisfying, stop and accept (this answer is always correct)
            let C be a clause that is not satisfied
            choose a literal in C at random and flip the value of the literal in x
    declare the formula unsatisfiable (this is incorrect with probability at most ε)

The algorithm is surprisingly simple to analyze. The analysis determines the value of T as a function of the desired error probability. Suppose that the input formula F is satisfiable and let x* be some satisfying assignment. We split the set of all assignments into blocks: block B_i contains all assignments with Hamming distance exactly i to x*. Let x be our current assignment and let C be a clause that is not satisfied by x.
Then all literals of C are set to false by x, and l ≥ 1 literals of C are set to true by x*. If we flip the value of one of these l literals, the Hamming distance of x and x* decreases by one. If we flip the value of one of the other at most k − l literals of C, the Hamming distance increases by one. Thus, if x belongs to B_i, the next x belongs to B_{i−1} with probability at least 1/k and belongs to B_{i+1} with probability at most 1 − 1/k. The algorithm accepts at the latest when x ∈ B_0; it may accept earlier when x equals some other satisfying assignment.

(Figure: a random walk on states 0, 1, 2, ...; each state i ≥ 1 moves to i − 1 with probability ≥ 1/k and to i + 1 with probability ≤ 1 − 1/k.)
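As a concrete sketch of the algorithm above (not part of the original notes): clauses are represented as lists of signed integers, and the function name `schoening` is an illustrative choice. For readability, the walk recomputes the set of unsatisfied clauses in every step instead of maintaining it incrementally.

```python
import math
import random

def schoening(clauses, n, k=3, eps=0.01):
    """Schöning's local search for k-SAT (sketch).

    clauses: list of clauses; a clause is a list of non-zero ints,
    where literal +i (-i) means variable i occurs unnegated (negated).
    Returns a satisfying assignment {var: bool} or None; the answer
    None ("unsatisfiable") is wrong with probability at most eps.
    """
    def satisfies(x, clause):
        return any((lit > 0) == x[abs(lit)] for lit in clause)

    # number of restarts T, as determined by the analysis below
    T = math.ceil(2 * (2 * (k - 1) / k) ** n * math.log(1 / eps))
    C_k = (k * k - 2 * k + 2) / (k * (k - 2))  # walk-length constant
    for _ in range(T):
        x = {v: random.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(math.ceil(C_k * n)):
            unsat = [c for c in clauses if not satisfies(x, c)]
            if not unsat:
                return x  # always correct
            lit = random.choice(random.choice(unsat))
            x[abs(lit)] = not x[abs(lit)]  # flip a random literal of an unsatisfied clause
        if all(satisfies(x, c) for c in clauses):
            return x
    return None
```

For k = 3 the number of restarts is T = ⌈2 (4/3)^n ln(1/ε)⌉, which gives the (4/3)^n bound stated above.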

So in order to analyze the algorithm, it suffices to analyze the random walk with states 0, 1, 2, ..., transition probability 1/k from state i to state i − 1, and probability 1 − 1/k from state i to state i + 1. We want to know the probability of reaching state 0 when starting in state i and the expected length of a walk leading us from state i to state 0. What should we expect? The walk has a strong drift away from zero and hence the probability of ever reaching state 0 should decrease exponentially in the index of the starting state. For the same reason, most walks will never reach state 0. However, walks that reach state 0 should be relatively short; this is because long walks are more likely to diverge. The initial assignment x has some Hamming distance i to x*. Clearly, this distance is binomially distributed, i.e.,

    prob(Hamming distance = i) = (n choose i) 2^{−n}.

For this (infinite) Markov chain the following is known (see, for example, [GS92]), assuming k ≥ 3:

- The probability of reaching 0 from i decreases exponentially in i. More precisely,¹

      prob(absorbing state 0 is reached | process started in state i) = (1/(k − 1))^i.

- Walks starting in state i and reaching state 0 are relatively short.² More precisely,

      E[number of steps until state 0 is reached | process started in state i and the absorbing state 0 is reached] = C_k · i,  where C_k = (k² − 2k + 2)/(k(k − 2)).

  WARNING: this is NOT the constant given in Schöning's article. Schöning states C_k = 1 + 2/(k − 2) = k/(k − 2) without proof. My calculation yields C_k = (k² − 2k + 2)/(k(k − 2)).

Combining these facts with the initial probability for state i, we obtain

¹ Let p be the probability of reaching state 0 from state 1. Such a walk has the following form: the last transition is from 1 to 0 (probability 1/k). Before that, the walk performs t ≥ 0 loops, i.e., it goes from 1 to 2 and then returns with probability p back to 1 (probability ((k − 1)/k) p per loop). Thus

    p = (1/k) Σ_{t ≥ 0} ((k − 1) p / k)^t = (1/k) · 1/(1 − (k − 1) p / k) = 1/(k − (k − 1) p).

This equation has the solutions p = 1 and p = 1/(k − 1). We exclude the former for semantic reasons.
Thus the probability of reaching state 0 from state 1 is 1/(k − 1), and the probability of reaching it from state i is the i-th power of this probability.

² Let L be the expected length of a walk from 1 to 0. Such a walk has the following form: the last transition is from 1 to 0 (length 1). Before that, the walk performs exactly t ≥ 0 loops, i.e., it goes from 1 to 2 and then returns back to 1 (probability ((k − 1) p / k)^t (1/k), where p is as in the previous item, and length t(L + 1)). Thus, using p = 1/(k − 1), i.e., (k − 1) p / k = 1/k,

    L = 1 + Σ_{t ≥ 0} ((k − 1) p / k)^t (1/k) t (1 + L)
      = 1 + (L + 1) Σ_{t ≥ 1} t k^{−t−1}
      = 1 − (L + 1) Σ_{t ≥ 1} d/dk (k^{−t})
      = 1 − (L + 1) d/dk ( (1/k) / (1 − 1/k) )
      = 1 − (L + 1) d/dk ( (k − 1)^{−1} )
      = 1 + (L + 1) (k − 1)^{−2}.

This solves to L = (k² − 2k + 2)/(k(k − 2)). The expected length of a walk starting in state i and reaching 0 is i times this number.
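The two fixed-point equations in the footnotes can be checked mechanically with exact rational arithmetic. This verification sketch is not part of the original notes; it uses the equivalent one-step form p = 1/k + ((k − 1)/k) p² of the geometric-sum equation (step down immediately, or step up and return to 1, then reach 0 from 1 again).

```python
from fractions import Fraction

def footnote_equations_hold(k):
    """Check footnotes 1 and 2 for a given k >= 3, exactly."""
    # Footnote 1: p = 1/(k-1) satisfies p = 1/k + ((k-1)/k) p^2.
    p = Fraction(1, k - 1)
    ok_p = p == Fraction(1, k) + Fraction(k - 1, k) * p**2
    # Footnote 2: L = (k^2 - 2k + 2)/(k(k-2)) solves L = 1 + (L+1)/(k-1)^2.
    L = Fraction(k * k - 2 * k + 2, k * (k - 2))
    ok_L = L == 1 + (L + 1) * Fraction(1, (k - 1) ** 2)
    return ok_p and ok_L
```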

for the probability of reaching state 0:

    prob(the absorbing state 0 is reached) ≥ Σ_{0 ≤ i ≤ n} (n choose i) 2^{−n} (1/(k − 1))^i = 2^{−n} (1 + 1/(k − 1))^n = (k/(2(k − 1)))^n,

and for the expected number of steps of a walk reaching state 0:

    E[# of steps until 0 is reached | 0 is reached] ≤ C_k Σ_{0 ≤ i ≤ n} (n choose i) 2^{−n} i = C_k n 2^{−n} 2^{n−1} = C_k n / 2.

By Markov's inequality, the probability that a non-negative random variable is less than twice its expectation is at least 1/2. Thus

    prob(0 is reached in at most C_k n steps | 0 is reached) ≥ 1/2.

Combining this with the probability that 0 is reached, we obtain

    prob(after at most C_k n steps the state 0 is reached) ≥ (1/2) (k/(2(k − 1)))^n.

Let p = (1/2) (k/(2(k − 1)))^n. The probability that x* is reached from a random x is at least p. If we repeat the experiment T times, the probability that we never reach x* is bounded by (1 − p)^T = exp(T ln(1 − p)) ≤ exp(−T p) (recall that ln(1 − p) ≤ −p for 0 ≤ p < 1). In order to have error probability at most ε, it therefore suffices to choose T such that exp(−T p) ≤ ε, i.e.,

    T ≥ (1/p) ln(1/ε) = 2 (2(k − 1)/k)^n ln(1/ε).

Theorem 1 (Schöning) In time O(2 (2(k − 1)/k)^n ln(1/ε) poly(n, m)) one can decide satisfiability of formulas with n variables, m clauses, and at most k literals per clause with (one-sided) error ε.

A derandomization with essentially the same time bound was obtained recently by Moser and Scheder.

1.2 Branch-and-Bound-and-Cut

For optimization problems, the branch-and-bound and branch-and-bound-and-cut paradigms are very useful. We illustrate the latter for the Traveling Salesman Problem. Recall the subtour elimination LP for the traveling salesman problem in a graph G = (V, E). Let c_e be the cost of edge e. We have a variable x_e for each edge:

    min Σ_e c_e x_e
    subject to
        Σ_{e ∈ δ(v)} x_e = 2    for each vertex v
        Σ_{e ∈ δ(S)} x_e ≥ 2    for each set S ⊆ V with ∅ ≠ S ≠ V
        x_e ≥ 0                 for each edge e

We solve the LP above. If we are lucky, the solution is integral and we have found the optimal tour. In general, we will not be lucky. We select a fractional variable, say x_e, and branch on it, i.e., we generate the subproblems x_e = 0 and x_e = 1. We solve the LP for both subproblems. This gives us lower bounds on the cost of the optimal tour in both cases. We continue to work on the problem for which the lower bound is weaker (= lower). We may also want to introduce additional cuts: there are additional constraints known for the TSP, and there are also generic methods for introducing additional cuts. We refer the reader to [ABCC06] for a detailed computational study of the TSP.

2 Approximation Algorithms

Approximation algorithms construct provably good solutions and run in polynomial time. Algorithms that run in polynomial time but do not come with a guarantee on the quality of the solution found are called heuristics. Approximation algorithms may start out as heuristics; improved analysis turns the heuristic into an approximation algorithm. Problems vary widely in their approximability.

2.1 The Knapsack Problem: A PTAS via Scaling and Dynamic Programming

The Knapsack Problem is easily stated. We are given n items, each with a value and a weight. Let v_i and w_i be the value and the weight of the i-th item. We are also given a weight bound W. The goal is to determine a subset S ⊆ [n] of the items maximizing the value Σ_{i ∈ S} v_i subject to the weight constraint Σ_{i ∈ S} w_i ≤ W. We may assume w_i ≤ W for all i.

The Greedy Heuristic: We order the items by value per unit weight, i.e., by v_i/w_i. We start with the empty set S of items and then iterate over the items in decreasing order of value-per-unit-weight ratio. If an item fits, i.e., the current weight plus the weight of the item does not exceed the weight bound, we add it; otherwise we discard it. Let V_g be the value computed.

The greedy heuristic is simple, but not good. Consider the following example. We have two items: the first has value 1 and weight 1, the second has value 99 and weight 100. The weight bound is 100.
The greedy heuristic considers the first item first (since 1/1 > 99/100) and adds it to the knapsack. Having added the first item, the second item does not fit. Thus the heuristic produces a value of 1; the optimum, however, is 99. A small change turns the greedy heuristic into an approximation algorithm that guarantees half of the optimum value: return the maximum of V_g and max_i v_i.

Lemma 1 The modified greedy algorithm achieves a value of at least V_opt/2 and runs in time O(n log n).

Proof: Order the items by decreasing ratio of value to weight, i.e., v_1/w_1 ≥ v_2/w_2 ≥ ... ≥ v_n/w_n. Let k be minimal such that w_1 + ... + w_{k+1} > W. Then w_1 + ... + w_k ≤ W. The greedy algorithm packs the first k items and maybe some more. Thus V_g ≥ v_1 + ... + v_k.

The value of the optimal solution is certainly bounded by v_1 + ... + v_k + v_{k+1}. Thus

    V_opt ≤ v_1 + ... + v_k + v_{k+1} ≤ V_g + max_i v_i ≤ 2 max(V_g, max_i v_i).

The time bound is obvious: we sort the items according to decreasing ratio of value to weight and then iterate over the items.

Dynamic Programming: We assume that the values are integers. We show how to compute an optimal solution in time O(n² v_max). Clearly, the maximum value of the knapsack is bounded by n v_max. We fill a table B[0 .. n v_max] such that, at the end, B[s] is the minimum weight of a knapsack of value s, for 0 ≤ s ≤ n v_max. We fill the table in phases 1 to n and maintain the invariant that after the i-th phase

    B[s] = min { Σ_{1 ≤ j ≤ i} w_j x_j ; Σ_{1 ≤ j ≤ i} v_j x_j = s and x_j ∈ {0,1} }.

The minimum of the empty set is ∞. The entries of the table are readily determined. With no items, we can only obtain value 0, and we do so at a weight of zero. Consider now the i-th item. We can obtain a value of s in one of two ways: either by obtaining value s with the first i − 1 items, or by obtaining value s − v_i with the first i − 1 items and adding item i. We choose the better of the two alternatives.

    B[0] = 0 and B[s] = ∞ for s ≥ 1
    for i from 1 to n do
        for s from n v_max step −1 downto v_i do
            B[s] = min(B[s], B[s − v_i] + w_i)
    let s be maximal with B[s] ≤ W; return s

Lemma 2 The dynamic programming algorithm solves the knapsack problem in time O(n² v_max).

Proof: Obvious.

A small trick improves the running time to O(n V_opt). We maintain a value K which is the maximum value achievable with the items seen so far. We initialize K to zero and replace the inner loop by

    K = K + v_i
    for s from K step −1 downto v_i do
        B[s] = min(B[s], B[s − v_i] + w_i)

Observe that this running time is NOT polynomial, as v_max may be exponential in the size of the instance.

Scaling: We now improve the running time by scaling, at the cost of giving up optimality. We will see that we can stay arbitrarily close to optimality. Let S be an integer; we will fix it later. Consider the modified problem in which we scale the values by S, i.e., we set v̂_i = ⌊v_i/S⌋. We can compute the optimal solution to the scaled problem in time O(n² v_max/S).
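The pieces described so far (the value-indexed dynamic program with the K trick, the modified greedy, and the scaling step) can be sketched as follows. The function names are illustrative choices; as described below, the FPTAS uses the factor-2 greedy value in place of the unknown V_opt when choosing S.

```python
from math import inf

def knapsack_by_value(values, weights, W):
    """Exact DP over values: B[s] = minimum weight achieving value s.
    Uses the trick of filling B only up to the value K seen so far."""
    V = sum(values)
    B = [0] + [inf] * V
    K = 0
    for v, w in zip(values, weights):
        K += v
        for s in range(K, v - 1, -1):  # downwards: each item used at most once
            B[s] = min(B[s], B[s - v] + w)
    return max(s for s in range(V + 1) if B[s] <= W)

def modified_greedy(values, weights, W):
    """Pack items by decreasing value/weight ratio; returning
    max(V_g, max_i v_i) guarantees at least V_opt / 2 (Lemma 1)."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_v = total_w = 0
    for i in order:
        if total_w + weights[i] <= W:
            total_w += weights[i]
            total_v += values[i]
    return max(total_v, max(values))

def knapsack_fptas(values, weights, W, eps):
    """(1 - eps)-approximate value (Theorem 2 sketch): scale values by
    S chosen from the greedy estimate, solve the scaled instance
    exactly, and return S times the scaled optimum, which the analysis
    below shows is at least (1 - eps) * V_opt."""
    est = modified_greedy(values, weights, W)  # V_opt/2 <= est <= V_opt
    n = len(values)
    S = max(1, int(eps * est / n))
    return S * knapsack_by_value([v // S for v in values], weights, W)
```

On the two-item example above, the greedy modification already returns the optimum 99, and the FPTAS with ε = 0.1 stays within a factor 1 − ε of it.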
We will next show that an optimal solution to the scaled problem is an excellent solution to the original problem.

Let x = (x_1, ..., x_n) be an optimal solution of the original instance and let y = (y_1, ..., y_n) be an optimal solution of the scaled instance, i.e.,

    V_opt = Σ_{1 ≤ i ≤ n} v_i x_i = max { Σ_{1 ≤ i ≤ n} v_i z_i ; Σ_{1 ≤ i ≤ n} w_i z_i ≤ W and z_i ∈ {0,1} },  and

    Σ_{1 ≤ i ≤ n} v̂_i y_i = max { Σ_{1 ≤ i ≤ n} v̂_i z_i ; Σ_{1 ≤ i ≤ n} w_i z_i ≤ W and z_i ∈ {0,1} }.

Let V_approx = Σ_i v_i y_i be the value of the knapsack when filled according to the optimal solution of the scaled problem. Then

    V_approx = Σ_i v_i y_i
             ≥ Σ_i S ⌊v_i/S⌋ y_i
             ≥ Σ_i S ⌊v_i/S⌋ x_i          (since y is an optimal solution of the scaled instance)
             ≥ Σ_i S (v_i/S − 1) x_i
             = Σ_i v_i x_i − S Σ_i x_i
             ≥ V_opt − nS.                 (since x is an optimal solution of the original instance)

Thus

    V_approx ≥ (1 − nS/V_opt) V_opt.

It remains to choose S. Let ε > 0 be arbitrary and set S = max(1, ⌊ε V_opt/n⌋). Then V_approx ≥ (1 − ε) V_opt, since V_approx = V_opt if S = 1. The running time becomes O(min(n V_opt, n²/ε)).

This is nice, but the definition of S involves a quantity that we do not know. The modified greedy algorithm comes to the rescue: it determines V_opt up to a factor of two, and we may use this approximation instead of the true value in the definition of S. We obtain:

Theorem 2 Let ε > 0. In time O(n²/ε) one can compute a solution to the knapsack problem with V_approx ≥ (1 − ε) V_opt.

2.2 Set Cover: A Greedy Algorithm

We are given subsets S_1 to S_n of some ground set U. Each set comes with a cost c(S_i) ≥ 0. The goal is to find the cheapest way to cover U, i.e.,

    min Σ_{i ∈ I} c(S_i)    subject to    ∪_{i ∈ I} S_i = U.

The greedy strategy applies naturally to the set cover problem: we always pick the most cost-effective set, i.e., the set that covers uncovered elements at the smallest cost per newly covered element.

More formally, let C be the set of covered elements before an iteration; initially, C is the empty set. Define the cost-effectiveness of S_i as c(S_i)/|S_i \ C|, i.e., as the cost per newly covered element.

    C = ∅
    while C ≠ U do
        let i minimize c(S_i)/|S_i \ C|
        set the price of every e ∈ S_i \ C to price(e) = c(S_i)/|S_i \ C|
        C = C ∪ S_i
    output the sets picked

Let OPT be the cost of the optimum solution. For the analysis, number the elements of the ground set in the order in which they are covered by the algorithm above, and let e_1 to e_n be the elements in this numbering.

Lemma 3 For all k, price(e_k) ≤ OPT/(n − k + 1).

Proof: Consider the iteration in which e_k is covered. Clearly, the sets in the optimal solution cover the remaining elements at a cost of at most OPT, i.e., at a cost of at most OPT/|U \ C| ≤ OPT/(n − k + 1) per element. Thus there must be a set which covers elements at this cost or less. The algorithm picks the most cost-effective set and hence price(e_k) ≤ OPT/(n − k + 1).

Theorem 3 The greedy algorithm for the set cover problem produces a solution of cost at most H_n · OPT, where H_n = 1 + 1/2 + 1/3 + ... + 1/n is the n-th Harmonic number.

Proof: By the preceding Lemma, the cost of the solution produced by the greedy algorithm is bounded by

    Σ_{1 ≤ k ≤ n} OPT/(n − k + 1) = OPT · H_n.

No better approximation ratio can be proved for this algorithm, as the following example shows. The ground set has size n. We have the following sets: for each i, a singleton set covering element i at cost 1/i, and one set covering all n elements at cost 1 + ε. The optimal cover has cost 1 + ε. However, the greedy algorithm chooses the n singletons, at a total cost of H_n.

Hardness of Approximation: It is unlikely that there is a better algorithm for set cover than the greedy one.

Theorem 4 If there is an approximation algorithm for set cover with approximation ratio o(log n), then NP ⊆ ZTIME(n^{O(log log n)}). Here ZTIME(T(n)) is the class of problems which have a Las Vegas algorithm with running time T(n).
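The greedy loop above can be sketched directly; the representation (sets as Python sets, costs as a parallel list) and the function name are illustrative choices, not part of the original notes.

```python
def greedy_set_cover(universe, sets, cost):
    """Greedy set cover (sketch): repeatedly pick the set with the
    best cost-effectiveness, i.e., the smallest cost per newly
    covered element. Returns the indices of the picked sets."""
    covered = set()
    picked = []
    while covered != universe:
        # consider only sets that still cover something new
        best = min((i for i in range(len(sets)) if sets[i] - covered),
                   key=lambda i: cost[i] / len(sets[i] - covered))
        picked.append(best)
        covered |= sets[best]
    return picked
```

On the tight example from the proof with n = 4 (singletons of cost 1/i plus one set of cost 1.2 covering everything), the sketch picks all four singletons, paying H_4 instead of 1.2.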

2.3 Maximum Satisfiability: A Randomized Algorithm

We consider formulae in 3-CNF. Max-3-SAT is the problem of satisfying a maximum number of clauses in a given formula Φ.

Lemma 4 Let Φ be a formula in which every clause has exactly three distinct literals. There is a randomized algorithm that satisfies an expected fraction of 7/8 of the clauses.

Proof: Consider a random assignment x. For any clause C, the probability that x satisfies C is 7/8, because exactly one out of the eight possible assignments to the variables in C falsifies C. Thus the expected number of satisfied clauses is 7/8 times the number of clauses.

It is crucial for the argument above that every clause has exactly three literals and that these literals are distinct. Clauses of length 1 are satisfied only with probability 1/2 and clauses of length 2 only with probability 3/4. So the randomized algorithm does not do so well when there are short clauses. However, there is an algorithm that does well on short clauses; see the book by Vazirani or the book by Motwani and Raghavan.

A student pointed out to me that the algorithm above is easily derandomized. Let {C_1, ..., C_m} be a set of clauses (not necessarily of length 3). Let I_i be an indicator variable for the i-th clause: it is one if C_i is satisfied and zero otherwise. Then

    E[# of satisfied clauses] = E[I_1 + ... + I_m] = E[I_1] + ... + E[I_m] = Σ_{1 ≤ i ≤ m} prob(C_i is true).

A clause with k distinct literals is true with probability 1 − 2^{−k}, so these expectations are easy to compute. We can now use the method of conditional expectations. Note that

    E[# of satisfied clauses] = (E[# of satisfied clauses | x_1 = 0] + E[# of satisfied clauses | x_1 = 1])/2,

and hence one of the two expectations on the right is at least the expectation on the left. Since we can compute the expectations, we can select the better choice between x_1 = 0 and x_1 = 1, fix it, and continue with the remaining variables. In this way, we find deterministically an assignment that has the same guarantee as the randomized algorithm.

What else can one achieve deterministically? Consider the following two assignments: set all variables to TRUE, or set all variables to FALSE.
A clause is falsified by exactly one assignment to its literals (all literals false), and the two assignments above induce complementary values on the literals of any clause, so at most one of the two can falsify it. Hence every clause is satisfied by at least one of the two assignments, and one of them satisfies at least half of the clauses. Karloff and Zwick have shown how to satisfy at least 7/8 of the clauses of any satisfiable formula. Note that this is not the same as a 7/8-approximation algorithm.

2.4 Vertex Cover

Consider the vertex cover problem. We are given a graph G = (V, E). The goal is to find a smallest subset of the vertices that covers all the edges.

Lemma 5 A factor-2 approximation to the optimum vertex cover can be computed in polynomial time.

Proof: Compute a maximal matching M and output the endpoints of the edges in the matching. A maximal matching can be computed greedily: we iterate over the edges in arbitrary order, and if an edge is not yet covered, we add it to the matching. Clearly, the size of the cover is 2|M|. An optimal cover must contain at least one endpoint of every edge in M.

2.5 Hardness of Approximation

This section follows the book by Vazirani [Vaz03]. How does one show that certain problems do not have good approximation algorithms unless P = NP? The key are gap-introducing and gap-preserving reductions. Assume that we have a polynomial-time reduction from SAT to MAX-3-SAT with the following properties. It maps an instance Φ of SAT to an instance Ψ of MAX-3-SAT such that

- if Φ is satisfiable, Ψ is satisfiable;
- if Φ is not satisfiable, every assignment to the variables of Ψ satisfies a fraction of less than (1 − α) of the clauses of Ψ,

where α > 0 is a constant.

Theorem 5 In the situation above, there is no approximation algorithm for Max-3-SAT that achieves an approximation ratio of 1 − α, unless P = NP.

Proof: Assume otherwise. Let Φ be any formula and let Ψ be constructed by the reduction. Construct a maximizing assignment x for Ψ using the approximation algorithm, and let m be the number of clauses of Ψ. If x satisfies at least (1 − α)m clauses of Ψ, declare Φ satisfiable; otherwise, declare Φ unsatisfiable.

The correctness is almost immediate. If Φ is satisfiable, there is a satisfying assignment for Ψ, and hence the assignment produced by the approximation algorithm must satisfy at least (1 − α)m clauses of Ψ. Thus the algorithm declares Φ satisfiable. On the other hand, if Φ is not satisfiable, any assignment to the variables of Ψ satisfies less than (1 − α)m clauses of Ψ; in particular, the assignment returned by the approximation algorithm cannot do better. Thus Φ is declared unsatisfiable.

2.6 The PCP-Theorem (Probabilistically Checkable Proofs)

We are all familiar with the characterization of NP via witnesses (proofs) and polynomial-time verifiers.
A language L belongs to NP if and only if there is a deterministic polynomial-time Turing machine V (called the verifier) with the following property. On input x, V is provided with an additional input y, called the proof, such that:

- if x ∈ L, there is a y such that V accepts;
- if x ∉ L, there is no y such that V accepts.

In probabilistically checkable proofs, we make the verifier a probabilistic machine (this makes the verifier stronger), but allow it to read only part of the proof in any particular run. A language L belongs to PCP(log n, 1) if and only if there is a polynomial-time Turing machine V (called the verifier) and constants c and q with the following properties. On input x of length n, V is provided with x, a proof y, and a random string³ r of length c log n. It performs a computation querying at most q bits of y; the q bits probed depend on x and r, but not⁴ on y.

- If x ∈ L, then there is a proof y that makes V accept with probability 1.
- If x ∉ L, then for every proof y, V accepts with probability less than 1/2.

The probabilities are computed with respect to the random choices r.

Theorem 6 (Arora, Lund, Motwani, Sudan, and Szegedy) NP = PCP(log n, 1).

The direction PCP(log n, 1) ⊆ NP is easy: guess y and then run V for all random strings of length c log n; accept if V accepts for all random strings. The other direction is a deep theorem. The crux is bringing the error probability below 1/2; with a larger error bound, the statement is easy. Consider the following verifier for SAT. The verifier interprets y as an assignment and the random string as the selector of a clause C. It checks whether the assignment satisfies the clause. Clearly, if the input formula is satisfiable, the verifier always accepts (if provided with a satisfying assignment). If the input formula is not satisfiable, there is always at least one clause that is not satisfied. Therefore the verifier accepts with probability at most 1 − 1/m, where m is the number of clauses.

The PCP-theorem gives rise to an optimization problem which does not have a factor-1/2 approximation algorithm unless P = NP.

Maximize Accept Probability: Let V be a PCP(log n, 1)-verifier for SAT. On input Φ, find a proof y that maximizes the acceptance probability of V.

Theorem 7 Maximize Accept Probability has no factor-1/2 approximation algorithm unless P = NP.

Proof: Assume otherwise. Let Φ be any formula and let y be the proof returned by the approximation algorithm.
We run V with y and all random strings and compute the acceptance probability p. If p is less than 1/2, we declare Φ unsatisfiable; if p ≥ 1/2, we declare Φ satisfiable.

Why is this correct? If Φ is satisfiable, there is a proof with acceptance probability 1, and hence the approximation algorithm must return a proof with acceptance probability at least 1/2. If Φ is not satisfiable, there is no proof with acceptance probability at least 1/2; in particular, for the proof returned by the approximation algorithm, we have p < 1/2.

³ It is convenient to make V a deterministic Turing machine and to provide the randomness explicitly.

⁴ This requirement is not essential. Assume we would allow the bits read to depend on y. The first bit inspected is independent of y, the second bit may depend on the value of the first bit, and so on. So there are only 2^q − 1 potential bits that can be read.
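The naive clause-selecting verifier from Section 2.6 (error only 1 − 1/m, not below 1/2) is easy to sketch. The representation of clauses as signed integers and the function names are illustrative choices, not part of the original notes.

```python
def naive_verifier(clauses, proof, r):
    """Interpret the 'random string' r as a clause selector and check
    that the proof (an assignment {var: bool}) satisfies that clause.
    Only the bits of the proof at that clause's variables are read."""
    clause = clauses[r % len(clauses)]
    return any((lit > 0) == proof[abs(lit)] for lit in clause)

def acceptance_probability(clauses, proof):
    """Exact acceptance probability over all m choices of r."""
    m = len(clauses)
    return sum(naive_verifier(clauses, proof, r) for r in range(m)) / m
```

For an unsatisfiable formula, every proof leaves some clause unsatisfied, so the acceptance probability is at most 1 − 1/m; a satisfying assignment of a satisfiable formula is accepted with probability 1.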

2.7 Max-3-SAT is hard to approximate

We give a gap-introducing reduction from SAT to MAX-3-SAT. Let V be a PCP(log n, 1)-verifier for SAT. On an input Φ of length n, it uses a random string r of length c log n and inspects (at most) q positions of the proof. Thus there are at most q n^c bits of the proof that are inspected for some random string. Let B = B_1 ... B_{q n^c} be a bit-string that encodes these places of the proof; the other places of the proof are irrelevant. We will construct a formula Ψ over the variables B_1 to B_{q n^c} with the desired properties:

- If Φ is satisfiable, Ψ is satisfiable.
- If Φ is not satisfiable, then any assignment falsifies a constant fraction of the clauses of Ψ.

For any random string r, the verifier inspects a particular set of q bits, say the bits indexed by (r, 1) to (r, q). Let f_r(B_{(r,1)}, ..., B_{(r,q)}) be the boolean function of q variables that is one if and only if the verifier accepts with random string r and with the q bits of the proof as given by B_{(r,1)} to B_{(r,q)}.

Let us pause and see what we have achieved.

(A) If Φ is satisfiable, there is a proof y that makes V accept with probability one. Set B according to y. Then all functions f_r, r ∈ {0,1}^{c log n}, evaluate to true.

(B) If Φ is not satisfiable, every proof makes V accept with probability less than 1/2. Hence every B falsifies more than half of the functions f_r.

Each f_r is a function of q variables and hence has a conjunctive normal form Ψ_r with at most 2^q clauses, each clause being a disjunction of at most q literals. An assignment that falsifies f_r falsifies at least one clause of Ψ_r. Let Ψ′ be the conjunction of the Ψ_r, i.e., Ψ′ = ∧_r Ψ_r. We finally turn Ψ′ into a formula Ψ with exactly three literals per clause using the standard trick. Consider a clause C = x_1 ∨ x_2 ∨ ... ∨ x_q. Introduce new variables y_1 to y_{q−2} and consider

    C′ = (x_1 ∨ x_2 ∨ y_1) ∧ (¬y_1 ∨ x_3 ∨ y_2) ∧ ... ∧ (¬y_{q−2} ∨ x_{q−1} ∨ x_q).

An assignment satisfying C can be extended to an assignment satisfying C′. Conversely, an assignment falsifying C falsifies at least one clause of C′.

What have we achieved? Ψ consists of no more than n^c 2^q (q − 2) clauses.
Each clause has 3 literals.

(A) If Φ is satisfiable, there is a proof y that makes V accept with probability one. Set B according to y. Then all functions f_r, r ∈ {0,1}^{c log n}, evaluate to true and hence Ψ is satisfied.

(B) If Φ is not satisfiable, every proof makes V accept with probability less than 1/2. Hence every B falsifies more than half of the functions f_r, and hence falsifies more than n^c/2 clauses of Ψ.

Let α = (n^c/2)/(n^c 2^q (q − 2)) = 1/((q − 2) 2^{q+1}).

Theorem 8 The above is a gap-introducing reduction from SAT to MAX-3-SAT with α = 1/((q − 2) 2^{q+1}).

The theorem can be sharpened considerably: Håstad showed that it holds for α = 1/8 − ε, for any ε > 0.
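The "standard trick" used above to break a long clause into a chain of 3-literal clauses can be sketched as follows. The representation (signed integers for literals, fresh variables numbered after the largest existing variable) is an illustrative choice, not from the original notes.

```python
def to_3cnf(clause):
    """Replace a clause x_1 ∨ ... ∨ x_q (q > 3) by q - 2 clauses of
    three literals each, chained through q - 2 fresh variables
    y_1, ..., y_{q-2}:
    (x_1 ∨ x_2 ∨ y_1) ∧ (¬y_1 ∨ x_3 ∨ y_2) ∧ ... ∧ (¬y_{q-2} ∨ x_{q-1} ∨ x_q).
    Literals are non-zero ints; fresh variables start at max |literal| + 1."""
    q = len(clause)
    if q <= 3:
        return [clause]
    y = max(abs(lit) for lit in clause) + 1  # first fresh variable
    out = [[clause[0], clause[1], y]]
    for j in range(2, q - 2):
        out.append([-(y + j - 2), clause[j], y + j - 1])
    out.append([-(y + q - 4), clause[q - 2], clause[q - 1]])
    return out
```

A satisfying assignment of the original clause extends to the chain by setting each y_j to true exactly while no original literal has been satisfied yet; an assignment falsifying the original clause falsifies at least one clause of the chain, whatever the y_j are.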

2.8 From Max-3-SAT to Vertex Cover

Step 1: One shows that Max-3-SAT stays hard to approximate if one introduces the additional constraint that each variable appears at most 30 times.

Step 2: From Max-3-SAT(30) to vertex cover, one uses the standard reduction.

2.9 Approximation Algorithms for Problems in P

3 General Search Heuristics

4 Parameterized Complexity

4.1 Parameterized Complexity for Problems in P

References

[ABCC06] D. Applegate, R. Bixby, V. Chvátal, and W. Cook. The Traveling Salesman Problem: A Computational Study. Princeton University Press, 2006.

[GS92] G. R. Grimmett and D. R. Stirzaker. Probability and Random Processes. Oxford University Press, 1992.

[NOT06] R. Nieuwenhuis, A. Oliveras, and C. Tinelli. Solving SAT and SAT modulo theories: From an abstract Davis-Putnam-Logemann-Loveland procedure to DPLL(T). Journal of the ACM, 53(6), 2006.

[Sch99] U. Schöning. A probabilistic algorithm for k-SAT and constraint satisfaction problems. In FOCS, 1999.

[Sch01] U. Schöning. New algorithms for k-SAT based on the local search principle. In Mathematical Foundations of Computer Science 2001, volume 2136 of Lecture Notes in Computer Science. Springer, 2001.

[Vaz03] V. Vazirani. Approximation Algorithms. Springer, 2003.


More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

find (x): given element x, return the canonical element of the set containing x;

find (x): given element x, return the canonical element of the set containing x; COS 43 Sprng, 009 Dsjont Set Unon Problem: Mantan a collecton of dsjont sets. Two operatons: fnd the set contanng a gven element; unte two sets nto one (destructvely). Approach: Canoncal element method:

More information

Lecture 11. minimize. c j x j. j=1. 1 x j 0 j. +, b R m + and c R n +

Lecture 11. minimize. c j x j. j=1. 1 x j 0 j. +, b R m + and c R n + Topcs n Theoretcal Computer Scence May 4, 2015 Lecturer: Ola Svensson Lecture 11 Scrbes: Vncent Eggerlng, Smon Rodrguez 1 Introducton In the last lecture we covered the ellpsod method and ts applcaton

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

The Minimum Universal Cost Flow in an Infeasible Flow Network

The Minimum Universal Cost Flow in an Infeasible Flow Network Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

Chapter Newton s Method

Chapter Newton s Method Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 6 Luca Trevisan September 12, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 6 Luca Trevisan September 12, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 6 Luca Trevsan September, 07 Scrbed by Theo McKenze Lecture 6 In whch we study the spectrum of random graphs. Overvew When attemptng to fnd n polynomal

More information

Edge Isoperimetric Inequalities

Edge Isoperimetric Inequalities November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

Integrals and Invariants of Euler-Lagrange Equations

Integrals and Invariants of Euler-Lagrange Equations Lecture 16 Integrals and Invarants of Euler-Lagrange Equatons ME 256 at the Indan Insttute of Scence, Bengaluru Varatonal Methods and Structural Optmzaton G. K. Ananthasuresh Professor, Mechancal Engneerng,

More information

U.C. Berkeley CS278: Computational Complexity Professor Luca Trevisan 2/21/2008. Notes for Lecture 8

U.C. Berkeley CS278: Computational Complexity Professor Luca Trevisan 2/21/2008. Notes for Lecture 8 U.C. Berkeley CS278: Computatonal Complexty Handout N8 Professor Luca Trevsan 2/21/2008 Notes for Lecture 8 1 Undrected Connectvty In the undrected s t connectvty problem (abbrevated ST-UCONN) we are gven

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

Lecture 10: May 6, 2013

Lecture 10: May 6, 2013 TTIC/CMSC 31150 Mathematcal Toolkt Sprng 013 Madhur Tulsan Lecture 10: May 6, 013 Scrbe: Wenje Luo In today s lecture, we manly talked about random walk on graphs and ntroduce the concept of graph expander,

More information

Finding Primitive Roots Pseudo-Deterministically

Finding Primitive Roots Pseudo-Deterministically Electronc Colloquum on Computatonal Complexty, Report No 207 (205) Fndng Prmtve Roots Pseudo-Determnstcally Ofer Grossman December 22, 205 Abstract Pseudo-determnstc algorthms are randomzed search algorthms

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law: CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and

More information

VQ widely used in coding speech, image, and video

VQ widely used in coding speech, image, and video at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng

More information

Algorithmica 2002 Springer-Verlag New York Inc.

Algorithmica 2002 Springer-Verlag New York Inc. Algorthmca (2002 32: 615 623 DOI: 101007/s00453-001-0094-7 Algorthmca 2002 Sprnger-Verlag New Yor Inc A Probablstc Algorthm for -SAT Based on Lmted Local Search and Restart U Schönng 1 Abstract A smple

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

ECE559VV Project Report

ECE559VV Project Report ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate

More information

Lecture 2: Gram-Schmidt Vectors and the LLL Algorithm

Lecture 2: Gram-Schmidt Vectors and the LLL Algorithm NYU, Fall 2016 Lattces Mn Course Lecture 2: Gram-Schmdt Vectors and the LLL Algorthm Lecturer: Noah Stephens-Davdowtz 2.1 The Shortest Vector Problem In our last lecture, we consdered short solutons to

More information

Approximate Smallest Enclosing Balls

Approximate Smallest Enclosing Balls Chapter 5 Approxmate Smallest Enclosng Balls 5. Boundng Volumes A boundng volume for a set S R d s a superset of S wth a smple shape, for example a box, a ball, or an ellpsod. Fgure 5.: Boundng boxes Q(P

More information

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,

More information

A 2D Bounded Linear Program (H,c) 2D Linear Programming

A 2D Bounded Linear Program (H,c) 2D Linear Programming A 2D Bounded Lnear Program (H,c) h 3 v h 8 h 5 c h 4 h h 6 h 7 h 2 2D Lnear Programmng C s a polygonal regon, the ntersecton of n halfplanes. (H, c) s nfeasble, as C s empty. Feasble regon C s unbounded

More information

Lecture 4: November 17, Part 1 Single Buffer Management

Lecture 4: November 17, Part 1 Single Buffer Management Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

Combining Constraint Programming and Integer Programming

Combining Constraint Programming and Integer Programming Combnng Constrant Programmng and Integer Programmng GLOBAL CONSTRAINT OPTIMIZATION COMPONENT Specal Purpose Algorthm mn c T x +(x- 0 ) x( + ()) =1 x( - ()) =1 FILTERING ALGORITHM COST-BASED FILTERING ALGORITHM

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

EEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming

EEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming EEL 6266 Power System Operaton and Control Chapter 3 Economc Dspatch Usng Dynamc Programmng Pecewse Lnear Cost Functons Common practce many utltes prefer to represent ther generator cost functons as sngle-

More information

1 The Mistake Bound Model

1 The Mistake Bound Model 5-850: Advanced Algorthms CMU, Sprng 07 Lecture #: Onlne Learnng and Multplcatve Weghts February 7, 07 Lecturer: Anupam Gupta Scrbe: Bryan Lee,Albert Gu, Eugene Cho he Mstake Bound Model Suppose there

More information

Lecture 4: Universal Hash Functions/Streaming Cont d

Lecture 4: Universal Hash Functions/Streaming Cont d CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected

More information

Complete subgraphs in multipartite graphs

Complete subgraphs in multipartite graphs Complete subgraphs n multpartte graphs FLORIAN PFENDER Unverstät Rostock, Insttut für Mathematk D-18057 Rostock, Germany Floran.Pfender@un-rostock.de Abstract Turán s Theorem states that every graph G

More information

Structure and Drive Paul A. Jensen Copyright July 20, 2003

Structure and Drive Paul A. Jensen Copyright July 20, 2003 Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.

More information

Learning Theory: Lecture Notes

Learning Theory: Lecture Notes Learnng Theory: Lecture Notes Lecturer: Kamalka Chaudhur Scrbe: Qush Wang October 27, 2012 1 The Agnostc PAC Model Recall that one of the constrants of the PAC model s that the data dstrbuton has to be

More information

HMMT February 2016 February 20, 2016

HMMT February 2016 February 20, 2016 HMMT February 016 February 0, 016 Combnatorcs 1. For postve ntegers n, let S n be the set of ntegers x such that n dstnct lnes, no three concurrent, can dvde a plane nto x regons (for example, S = {3,

More information

Computing Correlated Equilibria in Multi-Player Games

Computing Correlated Equilibria in Multi-Player Games Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,

More information

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016 CS 29-128: Algorthms and Uncertanty Lecture 17 Date: October 26, 2016 Instructor: Nkhl Bansal Scrbe: Mchael Denns 1 Introducton In ths lecture we wll be lookng nto the secretary problem, and an nterestng

More information

6.842 Randomness and Computation February 18, Lecture 4

6.842 Randomness and Computation February 18, Lecture 4 6.842 Randomness and Computaton February 18, 2014 Lecture 4 Lecturer: Rontt Rubnfeld Scrbe: Amartya Shankha Bswas Topcs 2-Pont Samplng Interactve Proofs Publc cons vs Prvate cons 1 Two Pont Samplng 1.1

More information

Lecture 4. Instructor: Haipeng Luo

Lecture 4. Instructor: Haipeng Luo Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would

More information

O-line Temporary Tasks Assignment. Abstract. In this paper we consider the temporary tasks assignment

O-line Temporary Tasks Assignment. Abstract. In this paper we consider the temporary tasks assignment O-lne Temporary Tasks Assgnment Yoss Azar and Oded Regev Dept. of Computer Scence, Tel-Avv Unversty, Tel-Avv, 69978, Israel. azar@math.tau.ac.l??? Dept. of Computer Scence, Tel-Avv Unversty, Tel-Avv, 69978,

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

A new construction of 3-separable matrices via an improved decoding of Macula s construction

A new construction of 3-separable matrices via an improved decoding of Macula s construction Dscrete Optmzaton 5 008 700 704 Contents lsts avalable at ScenceDrect Dscrete Optmzaton journal homepage: wwwelsevercom/locate/dsopt A new constructon of 3-separable matrces va an mproved decodng of Macula

More information

11 Tail Inequalities Markov s Inequality. Lecture 11: Tail Inequalities [Fa 13]

11 Tail Inequalities Markov s Inequality. Lecture 11: Tail Inequalities [Fa 13] Algorthms Lecture 11: Tal Inequaltes [Fa 13] If you hold a cat by the tal you learn thngs you cannot learn any other way. Mark Twan 11 Tal Inequaltes The smple recursve structure of skp lsts made t relatvely

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor Taylor Enterprses, Inc. Control Lmts for P Charts Copyrght 2017 by Taylor Enterprses, Inc., All Rghts Reserved. Control Lmts for P Charts Dr. Wayne A. Taylor Abstract: P charts are used for count data

More information

COS 521: Advanced Algorithms Game Theory and Linear Programming

COS 521: Advanced Algorithms Game Theory and Linear Programming COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

Communication Complexity 16:198: February Lecture 4. x ij y ij

Communication Complexity 16:198: February Lecture 4. x ij y ij Communcaton Complexty 16:198:671 09 February 2010 Lecture 4 Lecturer: Troy Lee Scrbe: Rajat Mttal 1 Homework problem : Trbes We wll solve the thrd queston n the homework. The goal s to show that the nondetermnstc

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

Text S1: Detailed proofs for The time scale of evolutionary innovation

Text S1: Detailed proofs for The time scale of evolutionary innovation Text S: Detaled proofs for The tme scale of evolutonary nnovaton Krshnendu Chatterjee Andreas Pavloganns Ben Adlam Martn A. Nowak. Overvew and Organzaton We wll present detaled proofs of all our results.

More information

Math Review. CptS 223 Advanced Data Structures. Larry Holder School of Electrical Engineering and Computer Science Washington State University

Math Review. CptS 223 Advanced Data Structures. Larry Holder School of Electrical Engineering and Computer Science Washington State University Math Revew CptS 223 dvanced Data Structures Larry Holder School of Electrcal Engneerng and Computer Scence Washngton State Unversty 1 Why do we need math n a data structures course? nalyzng data structures

More information

10-701/ Machine Learning, Fall 2005 Homework 3

10-701/ Machine Learning, Fall 2005 Homework 3 10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40

More information

Expected Value and Variance

Expected Value and Variance MATH 38 Expected Value and Varance Dr. Neal, WKU We now shall dscuss how to fnd the average and standard devaton of a random varable X. Expected Value Defnton. The expected value (or average value, or

More information

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence) /24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes

More information

Lecture Randomized Load Balancing strategies and their analysis. Probability concepts include, counting, the union bound, and Chernoff bounds.

Lecture Randomized Load Balancing strategies and their analysis. Probability concepts include, counting, the union bound, and Chernoff bounds. U.C. Berkeley CS273: Parallel and Dstrbuted Theory Lecture 1 Professor Satsh Rao August 26, 2010 Lecturer: Satsh Rao Last revsed September 2, 2010 Lecture 1 1 Course Outlne We wll cover a samplng of the

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem. prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove

More information

Exercises of Chapter 2

Exercises of Chapter 2 Exercses of Chapter Chuang-Cheh Ln Department of Computer Scence and Informaton Engneerng, Natonal Chung Cheng Unversty, Mng-Hsung, Chay 61, Tawan. Exercse.6. Suppose that we ndependently roll two standard

More information

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of

More information

Convergence of random processes

Convergence of random processes DS-GA 12 Lecture notes 6 Fall 216 Convergence of random processes 1 Introducton In these notes we study convergence of dscrete random processes. Ths allows to characterze phenomena such as the law of large

More information

Lecture 10 Support Vector Machines. Oct

Lecture 10 Support Vector Machines. Oct Lecture 10 Support Vector Machnes Oct - 20-2008 Lnear Separators Whch of the lnear separators s optmal? Concept of Margn Recall that n Perceptron, we learned that the convergence rate of the Perceptron

More information

The Feynman path integral

The Feynman path integral The Feynman path ntegral Aprl 3, 205 Hesenberg and Schrödnger pctures The Schrödnger wave functon places the tme dependence of a physcal system n the state, ψ, t, where the state s a vector n Hlbert space

More information

Markov Chain Monte Carlo Lecture 6

Markov Chain Monte Carlo Lecture 6 where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways

More information

How to Cope with NP-Completeness

How to Cope with NP-Completeness 2 How to Cope with NP-Completeness Great Ideas in Theoretical Computer Science Saarland University, Summer 2014 NP-hard optimization problems appear everywhere in our life. Under the assumption that P

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms Desgn and Analyss of Algorthms CSE 53 Lecture 4 Dynamc Programmng Junzhou Huang, Ph.D. Department of Computer Scence and Engneerng CSE53 Desgn and Analyss of Algorthms The General Dynamc Programmng Technque

More information

Dynamic Programming! CSE 417: Algorithms and Computational Complexity!

Dynamic Programming! CSE 417: Algorithms and Computational Complexity! Dynamc Programmng CSE 417: Algorthms and Computatonal Complexty Wnter 2009 W. L. Ruzzo Dynamc Programmng, I:" Fbonacc & Stamps Outlne: General Prncples Easy Examples Fbonacc, Lckng Stamps Meater examples

More information

An Interactive Optimisation Tool for Allocation Problems

An Interactive Optimisation Tool for Allocation Problems An Interactve Optmsaton ool for Allocaton Problems Fredr Bonäs, Joam Westerlund and apo Westerlund Process Desgn Laboratory, Faculty of echnology, Åbo Aadem Unversty, uru 20500, Fnland hs paper presents

More information

First day August 1, Problems and Solutions

First day August 1, Problems and Solutions FOURTH INTERNATIONAL COMPETITION FOR UNIVERSITY STUDENTS IN MATHEMATICS July 30 August 4, 997, Plovdv, BULGARIA Frst day August, 997 Problems and Solutons Problem. Let {ε n } n= be a sequence of postve

More information

Turing Machines (intro)

Turing Machines (intro) CHAPTER 3 The Church-Turng Thess Contents Turng Machnes defntons, examples, Turng-recognzable and Turng-decdable languages Varants of Turng Machne Multtape Turng machnes, non-determnstc Turng Machnes,

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

Snce h( q^; q) = hq ~ and h( p^ ; p) = hp, one can wrte ~ h hq hp = hq ~hp ~ (7) the uncertanty relaton for an arbtrary state. The states that mnmze t

Snce h( q^; q) = hq ~ and h( p^ ; p) = hp, one can wrte ~ h hq hp = hq ~hp ~ (7) the uncertanty relaton for an arbtrary state. The states that mnmze t 8.5: Many-body phenomena n condensed matter and atomc physcs Last moded: September, 003 Lecture. Squeezed States In ths lecture we shall contnue the dscusson of coherent states, focusng on ther propertes

More information

Note on EM-training of IBM-model 1

Note on EM-training of IBM-model 1 Note on EM-tranng of IBM-model INF58 Language Technologcal Applcatons, Fall The sldes on ths subject (nf58 6.pdf) ncludng the example seem nsuffcent to gve a good grasp of what s gong on. Hence here are

More information

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition Sngle-Faclty Schedulng over Long Tme Horzons by Logc-based Benders Decomposton Elvn Coban and J. N. Hooker Tepper School of Busness, Carnege Mellon Unversty ecoban@andrew.cmu.edu, john@hooker.tepper.cmu.edu

More information