String Hashing for Linear Probing


Mikkel Thorup

Abstract

Linear probing is one of the most popular implementations of dynamic hash tables storing all keys in a single array. When we get a key, we first hash it to a location. Next we probe consecutive locations until the key or an empty location is found. At STOC'07, Pagh et al. presented data sets where the standard implementation of 2-universal hashing leads to an expected number of Ω(log n) probes. They also showed that with 5-universal hashing, the expected number of probes is constant. Unfortunately, we do not have 5-universal hashing for, say, variable length strings. When we want to do such complex hashing from a complex domain, the generic standard solution is that we first do collision free hashing (w.h.p.) into a simpler intermediate domain, and second do the complicated hash function on this intermediate domain. Our contribution is that for an expected constant number of linear probes, it suffices that each key has O(1) expected collisions with the first hash function, as long as the second hash function is 5-universal. This means that the intermediate domain can be n times smaller, and such a smaller intermediate domain typically means that the overall hash function can be made simpler and at least twice as fast. The same doubling of hashing speed for O(1) expected probes follows for most domains bigger than 32-bit integers, e.g., 64-bit integers and fixed length strings. In addition, we study how the overhead from linear probing diminishes as the array gets larger, and what happens if strings are stored directly as intervals of the array. These cases were not considered by Pagh et al.

1 Introduction

A hash table or dictionary is the most basic non-trivial data structure. We want to store a set S of keys from some universe U so that we can check membership, that is, whether some x ∈ U is in S, and if so, look up satellite information associated with x. Often we want the set S to be dynamic so that we can insert and delete keys.
We are not interested in the ordering of the elements of U, and that is why we can use hashing for efficient implementation of these tables. Hash tables for strings and other complex objects are central to the analysis of data, and they are directly built into high level programming languages such as Python and Perl.

(AT&T Labs Research, Shannon Laboratory, 180 Park Avenue, Florham Park, NJ 07932, USA. mthorup@research.att.com.)

Linear probing is one of the most popular implementations of hash tables in practice. We store all keys in a single array T, and have a hash function h mapping keys from U into array locations. When we get a key x, we first check location h(x) in T. If a different key is in T[h(x)], we scan the next locations sequentially until either x is found, or we get to an empty spot, concluding that key x is new. If x is to be inserted, we place it in this empty spot. To delete a key x from location i, we have to check if there is a later location j with a key y such that h(y) ≤ i, and in that case delete y from j and move it up to i. Recursively, we look for a later key z to move up to j. This deletion process terminates when we get to an empty spot. Thus, for each operation, we only consider locations from h(x) to the first empty location. Above, the successor to the last location in the array is the first location, but we will generally ignore this boundary case. It can also be avoided in the code if we leave some extra space at the end of the array that we do not hash to. If the keys are complex objects like variable length strings, we will typically not store them directly in the array T. Instead of storing a key x, we store its hash value h(x) and a pointer to x. Storing the hash value serves two purposes: (1) during deletions we can quickly identify keys to be moved up as described above, and (2) when looking for a key x, we only need to check keys with the same hash value. We will, however, also consider the option of storing strings directly as intervals of the array that we probe, and thus avoid the pointers.
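As a concrete illustration of the probe sequences just described, here is a minimal sketch in C for 64-bit keys. The table size, the reserved EMPTY value, and the fixed multiplier are illustrative assumptions of this sketch, not details from the paper; the paper's h would be a (composed) 5-universal function.

```c
#include <stdint.h>

#define TABLE_SIZE 16u                 /* illustrative; a power of two        */
#define EMPTY UINT64_MAX               /* reserved value marking a free slot  */

static uint64_t T[TABLE_SIZE];

/* Illustrative multiply-shift hash into [TABLE_SIZE] (top 4 bits). */
static unsigned hash(uint64_t x) {
    return (unsigned)((0x9E3779B97F4A7C15ull * x) >> 60);
}

static void lp_init(void) {
    for (unsigned i = 0; i < TABLE_SIZE; i++) T[i] = EMPTY;
}

/* Probe consecutive locations from h(x) until x or an empty slot is found. */
static void lp_insert(uint64_t x) {
    unsigned i = hash(x);
    while (T[i] != EMPTY && T[i] != x)
        i = (i + 1) & (TABLE_SIZE - 1);    /* wrap-around boundary case */
    T[i] = x;
}

/* Return 1 if x is present; an empty slot proves x is absent. */
static int lp_find(uint64_t x) {
    unsigned i = hash(x);
    while (T[i] != EMPTY) {
        if (T[i] == x) return 1;
        i = (i + 1) & (TABLE_SIZE - 1);
    }
    return 0;
}
```

Deletion, which moves later keys up to fill the gap as described above, is omitted for brevity.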
Practice. The practical use of linear probing dates back at least to 1954, to an assembly program by Samuel, Amdahl, and Boehme (c.f. [10]). It is one of the simplest schemes to implement in dynamic settings where keys can be inserted and deleted. Several recent experimental studies [2, 7, 14] have found linear probing to be the fastest hash table organization for moderate load factors (30-70%). While linear probing is known to require more instructions than other open addressing methods, the fact that we access an interval of array entries means that linear probing works very well with modern architectures for which sequential access is much faster than random access (assuming that the elements we are accessing are each significantly smaller than a cache line, or a disk block, etc.). However, the hash functions used to implement linear probing in practice are heuristics, and there is no known theoretical guarantee on their performance. Since linear probing is particularly sensitive to a bad choice of hash function, Heileman and Luo [7] advise against linear probing for general-purpose use.

Analysis. Linear probing was first analyzed by Knuth in a 1963 memorandum [9] now considered to be the birth of the area of analysis of algorithms [15]. Knuth's analysis, as well as most of the work that has since gone into understanding the properties of linear probing, is based on the assumption that h is a truly random function, mapping all keys independently. In 1977, Carter and Wegman's notion of universal hashing [3] initiated a new era in the design of hashing algorithms, where explicit and efficient ways of choosing provably good hash functions replaced the unrealistic assumption of complete randomness. They asked to extend the analysis to linear probing. Carter and Wegman [3] defined universal hashing as having low collision probability, that is, 1/t if we hash to a domain of size t. Later, in [20], they defined k-universal hashing as a function mapping any k keys independently and uniformly at random. Note that 2-universal hashing is stronger than universal hashing in that the identity is universal but not 2-universal. Often the uniformity does not have to be exact, but the independence is critical for the analysis. The first analysis of linear probing based on k-universal hashing was given by Siegel and Schmidt in [16, 17]. Specifically, they show that (log n)-universal hashing is sufficient to achieve essentially the same performance as in the fully random case. Here n denotes the number of keys inserted in the hash table. However, we do not have any practical implementation of (log n)-universal hashing. In 2007, Pagh et al. [12] studied the expected number of probes with worst-case data sets.
They showed that with the standard implementation of 2-universal hashing, the expected number of linear probes could be Ω(log n). The worst case is one or two intervals, something that could very well appear in practice, possibly explaining the experienced unreliability from [7]. It is interesting to contrast this with the result from [11] that simple hashing works if the input has high entropy. The situation is similar to the classic one for non-randomized quicksort, where we get into quadratic running time if the input is already sorted: something very unlikely for random data, but something that happens frequently in practice. Likewise, if we were hashing characters, then in practice it could happen that they were mostly letters and digits, and then they would be concentrated in two intervals. On the positive side, Pagh et al. [12] showed that with 5-universal hashing, the expected number of probes is O(1). From [19] we know that 5-universal hashing is fast for small domains like 32-bit integers.

Main contribution. Our main contribution is that for an expected constant number of linear probes, we can salvage 2-universal hashing and even universal hashing if we follow it by 5-universal hashing. The universal hashing collects keys with the same hash value in buckets. The subsequent 5-universal hashing shuffles the buckets. Clearly this is weaker hashing than if we applied the 5-universal hashing directly to all keys, so it may seem that the initial universal hashing is wasted work. Our motivation is to get efficient hashing for linear probing of complex objects like variable length strings. The point is that we do not have any reasonable 5-universal hashing for these domains. When we want complex hashing from a complex domain, the generic standard solution is that we first do collision free hashing (w.h.p.) into a simpler intermediate domain, and second do the complicated hash function on this intermediate domain. Our contribution is that for expected constant linear probes, it suffices that each key has O(1) expected collisions with the first hash function, as long as the second hash function is 5-universal.
This means that the intermediate domain can be n times smaller. Such a smaller intermediate domain typically means that both the first and the second hash function are simpler and run at least twice as fast. The same doubling of hashing speed for O(1) expected probes follows for most domains bigger than 32-bit integers, e.g., fixed length strings and even the important case of 64-bit integers which may be stored directly in the array. A different way of viewing our contribution is to consider the extra price paid for constant expected linear probing on worst-case input. For any efficient implementation of hash tables, we need a hash function h1 into, say, [2n] = {0, ..., 2n − 1} where each key has O(1) expected collisions. Our result is that for linear probing with constant expected cost, it suffices to compose h1 with a 5-universal hash function h2 : [2n] → [2n]. Using the fast implementation of h2 from [19], the extra price of h2 is much less than that of hashing a longer string or that of a single cache miss. Our general contribution here is to prove good performance for a specific problem for a low-complexity hash function followed by a high-complexity one, where the first hash function is not expected collision-free. We hope that this approach can be applied elsewhere to speed up hashing for specific problems.

Additional results. We extend our analysis to two scenarios not considered by Pagh et al. [12]. One is if the size of the array is large compared with the number of elements. The other is when we store strings directly as intervals of the array. Our results for these cases are thus also new in the case of a single direct 5-universal hash function from the original domain to locations in the array.

Contents. In Section 2 we state our formal result and in Sections 3-4 we prove it. In Section 5 we show how to apply our result to different domains and how the traditional solution with a first collision free hashing into a larger intermediate domain would be at least twice as slow. In Section 6 we consider the option of storing strings as intervals of the linear probing array.

2 The formal result

As described above, we use linear probing to place a dynamic set of keys S ⊆ U of size |S| ≤ n in an array T with t > |S| locations [t] = {0, ..., t − 1}. We will often refer to α = n/t as the fill. Each key x ∈ S is given a unique location i(x) ∈ [t]. Location arithmetic is done modulo t, so if i < j then [j, i] = {j, ..., t − 1, 0, ..., i}. If Q is a set of locations, then i + Q = {i + q | q ∈ Q}. The locations depend on a hash function h : U → [t]. For linear probing, the assignment of locations has to satisfy the following invariant:

Invariant 2.1. For every x ∈ S, the interval [h(x), i(x)] is full.

Invariant 2.1 implies that to check if x is in S, we can start in location h(x) and search consecutive locations until either x or an empty location i is found. In the latter case, we can add x, setting i(x) = i. Invariant 2.1 also implies that a location i is full if and only if there is an l such that |S ∩ h⁻¹([i − l, i])| > l. Here S ∩ h⁻¹([i − l, i]) = {y ∈ S | i − l ≤ h(y) ≤ i}. For each key x ∈ U, we let full_{h,S}(x) denote the number of locations from h(x) to the nearest empty location, or more precisely, the largest l such that h(x) + [l] is full. We can now bound the number of locations considered by the different operations:

insert_{h,S}(x) finds an empty location for x ∉ S in full_{h,S}(x) + 1 probes.
delete_{h,S}(x) considers full_{h,S}(x) + 1 locations, moving keys up to fill gaps created by the deletion.

find_{h,S}(x) uses at most full_{h,S\{x}}(x) + 1 probes. If x ∉ S, it finds an empty location. If x ∈ S, to get to x, it only passes locations that would be full without x in S.

Our hash function h : U → [t] will always be composed of two functions via some intermediate domain A; namely h1 : U → A and h2 : A → [t]. Then h(x) = h2(h1(x)). For each x ∈ U, let the bucket size b1(x) be the number of elements y ∈ S with h1(y) = h1(x). Note that b1(x) counts x if x ∈ S, and that Σ_{x∈S} b1(x) is the sum of the squares of the bucket sizes. Our main technical result can now be stated as follows:

Theorem 2.1. Let h1 be fixed arbitrarily, and let h2 be a 5-universal hash function from A to [t] where for every a ∈ A and i ∈ [t], the probability that h2(a) = i is at most α/n. Then, for any x0 ∈ U,

E[full_{h,S}(x0)] = O( b1(x0)/(1 − α) + (Σ_{x∈S} b1(x)) / ((1 − α)⁵ n) ).

Concerning the factor 1/(1 − α)⁵, we know from Knuth [9] that a factor 1/(1 − α)² is needed even when h1 is the identity and h = h2 is a truly random function. Applying Theorem 2.1 to a universal hash function h1, we get the corollary below, which is what was claimed in the introduction.

Corollary 2.1. Let h1 be a hash function from U to an intermediate domain A such that the expected bucket size b1(x) of any key x is constant. Let h2 be a 5-universal hash function from A to [t] where for every a ∈ A and i ∈ [t], the probability that h2(a) = i is at most α/n for some constant α < 1. Then for every x0 ∈ U, we have E[full_{h,S}(x0)] = O(1).

Small fill. We will also consider cases with fill α = o(1). It is then useful to consider

coll_{h,S}(x) = full_{h,S}(x) − [x ∈ S].

Here the Boolean expression [x ∈ S] has value 1 if x ∈ S; 0 otherwise. We can think of coll_{h,S}(x) as the number of elements from S \ {x} that x collides with in the longest full interval starting in h(x). Similarly, we define c1(x) = b1(x) − [x ∈ S], which is the standard number of collisions between x and S \ {x} under h1. Note that we may have full_{h,S\{x}}(x) < coll_{h,S}(x) because x may fill an empty gap between two full intervals.

Theorem 2.2. Let h1 be fixed arbitrarily, and let h2 be a 5-universal hash function from A to [t] where for every a ∈ A and i ∈ [t], the probability that h2(a) = i is at most α/n with α ≤ 0.9. Then, for any x0 ∈ U,

E[full_{h,S}(x0)] = O( b1(x0) + α^{1/3} (Σ_{x∈S} b1(x)) / n ),
E[coll_{h,S}(x0)] = O( c1(x0) + α (Σ_{x∈S} b1(x)) / n ).

Using the last bound of Theorem 2.2, we get

Corollary 2.2. Let n ≤ 0.9 t and t ≤ t′. Let h1 be a universal hash function from U to [t′] and let h2 be a 5-universal hash function from [t′] to [t]. Then for every x0 ∈ U, E[coll_{h,S}(x0)] = O(n/t).

Proof. Since h1 is universal, we get E[c1(x)] < n/t′ ≤ n/t = α for all x ∈ U.

If h1 is the identity and h = h2, then this is the scenario studied by Pagh et al. [12], except that we have subconstant fill α = n/t = o(1). One can think of coll_{h,S}(x) as the hashing overhead from not having each key mapped directly to a unique location in the array. Having E[coll_{h,S}(x)] = O(α) is clearly best possible within a constant factor.

3 Proof of Theorem 2.1

We will now start proving Theorem 2.1. Let [h(x0) − H, h(x0) + L) be the longest full interval containing h(x0). Then full_{h,S}(x0) ≤ L and we want to limit the expectation of L. As in Theorem 2.1 we have h = h2 ∘ h1 where h1 is fixed and h2 is 5-universal. Define S0 = {x ∈ S | h1(x) ≠ h1(x0)}. Then |S0| + b1(x0) ≥ |S|. Since [h(x0) − H, h(x0) + L) is exactly full, we have

H + L = |S ∩ h⁻¹([h(x0) − H, h(x0) + L))| ≤ b1(x0) + |S0 ∩ h⁻¹([h(x0) − H, h(x0)))| + |S0 ∩ h⁻¹([h(x0), h(x0) + L))|.

It follows that at least one of the following cases is satisfied:

1. b1(x0) ≥ (1 − α)/3 · L.
2. |S0 ∩ h⁻¹([h(x0), h(x0) + L))| ≥ (α + (1 − α)/3) L.
3. |S0 ∩ h⁻¹([h(x0) − H, h(x0)))| ≥ H − (1 − α)/3 · L.

We are going to pay for L in each of the above cases. In fact, in each case separately, we are going to seek the largest L_i satisfying the condition, and then we will use Σ_{i=1}^{3} L_i as a bound on L. In case 1, we have L1 ≤ 3 b1(x0)/(1 − α), corresponding to the first term in the bound of Theorem 2.1. To complete the proof, we need to bound the expectations of L2 and L3. We call L2 the tail because it only involves keys hashing to h(x0) and later, and we call L3 the head because it only involves keys hashing before h(x0).

3.1 A basic lemma. The following lemma will be used to bound the expectations of both L2 and L3.

Lemma 3.1. For any Q ⊆ [t] of size q,

(3.1) Pr[ |S0 ∩ h⁻¹(h(x0) + Q)|_{<D} ≥ αq + d ] ≤ αq (Σ_{x∈S0} [b1(x) < D] b1(x)³) / (d⁴ n) + 3α²q² (Σ_{x∈S0} [b1(x) < D] b1(x))² / (d⁴ n²),

where |·|_{<D} counts only keys x with b1(x) < D. Above, the Boolean expression [b1(x) < D] has value 1 if b1(x) < D; 0 otherwise. Before continuing, note that if h1 is the identity, that is, if we just used 5-universal hashing, then b1(x) ≤ 1 for all x ∈ S, and then the right hand side reduces to αq/d⁴ + 3q²α²/d⁴, which is what is proved in [12]. Here we need to handle different bucket sizes b1(x). For our later analysis, it will be crucial that we have introduced the cut-off parameter D.

Proof of Lemma 3.1. Apart from the cut-off parameter D, our proof is very similar to one used in [12]. Since h2 is 5-universal, it is 4-universal when we condition on any particular value of h(x0) = h2(h1(x0)), so we can think of Q0 = h(x0) + Q as a fixed set and h2 as 4-universal on h1(S0) = h1(S) \ {h1(x0)}. For each a ∈ A, set b_a = |{x ∈ S0 | h1(x) = a}| and b′_a = [b_a < D] b_a, observing that

|S0 ∩ h⁻¹(Q0)|_{<D} = Σ_{a∈A} [h2(a) ∈ Q0] b′_a.

Therefore, the event in (3.1) implies

(3.2) αq + d ≤ Σ_{a∈A} [h2(a) ∈ Q0] b′_a.

For each a ∈ A, let p_a = Pr[h2(a) ∈ Q0] ≤ αq/n, and consider the random variables

X_a = b′_a ([h2(a) ∈ Q0] − p_a).

Then E[X_a] = 0. Set X = Σ_a X_a. Then Σ_{a∈A} [h2(a) ∈ Q0] b′_a = X + Σ_{a∈A} p_a b′_a ≤ X + αq. Hence (3.2) implies

(3.3) d ≤ X.

To bound the probability of (3.3), we use the 4th moment inequality Pr[X ≥ d] ≤ E[X⁴]/d⁴. Since E[X_a] = 0 and the X_a are 4-wise independent, we get

E[X⁴] = Σ_{a1,a2,a3,a4∈A} E[X_{a1} X_{a2} X_{a3} X_{a4}] ≤ Σ_{a∈A} E[X_a⁴] + 3 (Σ_{a∈A} E[X_a²])².

Here

Σ_{a∈A} E[X_a⁴] ≤ Σ_{a∈A} p_a b′_a⁴ ≤ (αq/n) Σ_{x∈S0} [b1(x) < D] b1(x)³,
Σ_{a∈A} E[X_a²] ≤ Σ_{a∈A} p_a b′_a² ≤ (αq/n) Σ_{x∈S0} [b1(x) < D] b1(x).

For the probability bound of the lemma, we divide the total bound on E[X⁴] by d⁴.

3.2 The tail L2. We want to show that

(3.4) E[L2] = O( (Σ_{x∈S0} b1(x)) / ((1 − α)⁵ n) ).

Let W(l) = |S0 ∩ h⁻¹([h(x0), h(x0) + l))|. Then L2 is the largest value in [t] such that W(L2) ≥ (α + (1 − α)/3) L2. In [12] they have b1(x) = [x ∈ S] ≤ 1 for all x ∈ U, and they use that E[L2] = Σ_{i≥1} Pr[L2 ≥ i]. However, with some large b1(x), we have no good bound on Pr[L2 ≥ i]. This is why we introduced our cut-off parameter D in Lemma 3.1. Let l_0 = 27/(1 − α) and l_i = l_{i−1}/(1 − (1 − α)/9). Then l_{i−1} > (1 − 4(1 − α)/27) l_i. We say that i > 0 is good if W(l_i) ∈ [(α + (1 − α)/9) l_i, (α + (1 − α)/3) l_i).

Lemma 3.2. If l_{i−1} ≤ L2 < l_i, then i is good.

Proof. If L2 < l_i then W(l_i) < (α + (1 − α)/3) l_i. Also, we have W(l_i) ≥ W(L2) ≥ (α + (1 − α)/3) L2 ≥ (α + (1 − α)/3) l_{i−1} > (α + (1 − α)/3)(1 − 4(1 − α)/27) l_i > (α + (1 − α)/9) l_i.

Hence E[L2] < l_0 + Σ_{i≥1} Pr[i good] l_i.

Here l_0 satisfies (3.4). To bound Pr[i good] we use Lemma 3.1 with q = l_i, Q = [q], d = (1 − α)/9 · l_i, and D = l_i. Then

Pr[i good] = O( (Σ_{x∈S0} [b1(x) < l_i] b1(x)³) / ((1 − α)⁴ l_i³ n) + (Σ_{x∈S0} [b1(x) < l_i] b1(x))² / ((1 − α)⁴ l_i² n²) ).

Hence

Σ_{i≥1} Pr[i good] l_i
= O( Σ_{i≥1} (Σ_{x∈S0} [b1(x) < l_i] b1(x)³) / ((1 − α)⁴ l_i² n) + Σ_{i≥1} (Σ_{x∈S0} [b1(x) < l_i] b1(x))² / ((1 − α)⁴ l_i n²) )
= O( Σ_{x∈S0} Σ_{i: l_i > b1(x)} b1(x)³ / ((1 − α)⁴ l_i² n) + Σ_{x∈S0} Σ_{i: l_i > b1(x)} b1(x) (Σ_{y∈S0} b1(y)) / ((1 − α)⁴ l_i n²) )   (∗)
= O( (Σ_{x∈S0} b1(x)) / ((1 − α)⁵ n) ).

Above, (∗) marks the point at which we exploit our cut-off [b1(x) < l_i]: since the l_i grow geometrically with ratio 1 + Θ(1 − α), we have Σ_{i: l_i > b1(x)} 1/l_i² = O(1/((1 − α) b1(x)²)) and Σ_{i: l_i > b1(x)} 1/l_i = O(1/((1 − α) b1(x))), and the last step also uses |S0| ≤ n for the second term. This completes the proof of (3.4). Below we are going to need several similar calculations, but with different parameters, carefully chosen depending on the context. The presentation will be focused on the differences, only sketching the similar parts.

3.3 The head L3. We want to show that

(3.5) E[L3] = O( (Σ_{x∈S0} b1(x)) / ((1 − α)⁵ n) ).

Here L3 is the largest value such that for some H, we have |S0 ∩ h⁻¹([h(x0) − H, h(x0)))| ≥ H − (1 − α)/3 · L3. We want to bound the expectation of L3. The argument is basically a messy version of that in Section 3.2 for the tail L2. Let U(H) = |S0 ∩ h⁻¹([h(x0) − H, h(x0)))|.
Let H maximize D = U(H) − H; then L3 ≤ 3D/(1 − α), so we are really trying to bound D. Losing at most a factor 2 in D, it suffices to consider cases where U(H) ≤ 2H: if U(H) > 2H, one can instead take the largest H′ with U(H′ − 1) > 2(H′ − 1) and check that U(H′) − H′ ≥ D/2. Thus there is an H such that U(H) ≤ 2H and such that U(H) − H ≥ (1 − α)/6 · L3. Finally, define R such that U(H) = (α + R(1 − α))H. Then 1 ≤ R ≤ 2/(1 − α) and L3 ≤ 6(R − 1)H = O(RH). For I = 0, 1, ..., log₂(2/(1 − α)), let H_I be the largest value such that U(H_I) ≥ (α + 2^I (1 − α)) H_I. Let I be such that R/2 < 2^I ≤ R. Then H_I ≥ H, so 2^I H_I > R H/2. Thus we conclude that L3 = O(max_I {2^I H_I}) = O( Σ_{I=0}^{log₂(2/(1−α))} 2^I H_I ).

As in Section 3.2, let l_0 = 27/(1 − α), and l_i = l_{i−1}/(1 − (1 − α)/9). We say that i is I-good if U(l_i) ∈ [(α + 2^I (1 − α)/3) l_i, (α + 2^I (1 − α)) l_i). Similar to Lemma 3.2, we have that i is I-good if l_{i−1} ≤ H_I < l_i. Hence

E[L3] = O( Σ_{I=0}^{log₂(2/(1−α))} 2^I ( l_0 + Σ_{i≥1} Pr[i I-good] l_i ) ) = O( 1/(1 − α)² ) + O( Σ_{I=0}^{log₂(2/(1−α))} Σ_{i≥1} Pr[i I-good] 2^I l_i ).

The first term satisfies (3.5), so we only have to consider the double sum. To bound Pr[i I-good] we use Lemma 3.1 with q = l_i, Q = [q], d = 2^I (1 − α)/3 · l_i, and D = 2^I l_i. With calculations similar to those in Section 3.2, we get

Σ_{i≥1} Pr[i I-good] 2^I l_i = O( Σ_{i≥1} (Σ_{x∈S0} [b1(x) < 2^I l_i] b1(x)³) / ((1 − α)⁴ 2^{3I} l_i² n) + Σ_{i≥1} (Σ_{x∈S0} [b1(x) < 2^I l_i] b1(x))² / ((1 − α)⁴ 2^{3I} l_i n²) ) = O( (Σ_{x∈S0} b1(x)) / (2^I (1 − α)⁵ n) ).

It follows that

Σ_{I=0}^{log₂(2/(1−α))} Σ_{i≥1} Pr[i I-good] 2^I l_i = O( Σ_{I≥0} (Σ_{x∈S0} b1(x)) / (2^I (1 − α)⁵ n) ) = O( (Σ_{x∈S0} b1(x)) / ((1 − α)⁵ n) ).

This completes the proof of (3.5), hence of Theorem 2.1.

4 Proof of Theorem 2.2

We will now prove Theorem 2.2. The bound coincides with that of Theorem 2.1 when α < 1 is a constant, so we may assume α = o(1). We will use the same basic definitions as in the proof of Theorem 2.1. Recall that h = h2 ∘ h1 where h1 is fixed and h2 is 5-universal. Define S0 = {x ∈ S | h1(x) ≠ h1(x0)}. We let [h(x0) − H, h(x0) + L) be the longest full interval containing h(x0). Then full_{h,S}(x0) ≤ L and we want to limit the expectation of L. We introduce a parameter β, and note that at least one of the following cases is satisfied:

4. b1(x0) ≥ (1 − α)(1 − β) L.
5. |S0 ∩ h⁻¹([h(x0), h(x0) + L))| ≥ (α + (1 − α)β/2) L.
6. |S0 ∩ h⁻¹([h(x0) − H, h(x0)))| ≥ H − (1 − α)β/2 · L.

With β = 2/3, the above coincides with cases 1-3 from Section 3. However, in our analysis below, we will have α < β/8 and β ≤ 2/5.
In each case separately, we are going to seek the largest L_i satisfying the condition, and then we will use Σ_{i=4}^{6} L_i as a bound on L. In case 4, we trivially have

(4.6) L4 ≤ b1(x0)/((1 − α)(1 − β)).

We are going to bound L5 and L6 by

(4.7) E[L5 + L6] = O( (α/β²) (Σ_{x∈S} b1(x)) / n ).

We will now show how (4.6) and (4.7) imply Theorem 2.2. For the bound on full_{h,S}(x0) we set β = α^{1/3}. Then L4 ≤ b1(x0)/((1 − α)(1 − α^{1/3})) = O(b1(x0)), while E[L5 + L6] = O( α^{1/3} (Σ_{x∈S} b1(x)) / n ). Getting the bound on coll_{h,S}(x0) is a bit more subtle. We use β = 2/5 and α < 1/6. Then L4 ≤ b1(x0)/((1 − α)(1 − β)) < b1(x0)/((5/6)(3/5)) = 2 b1(x0). Since L4 and b1(x0) are integers, we get that L4 ≤ b1(x0) for b1(x0) ≤ 1, and L4 − 1 ≤ 3(b1(x0) − 1) for b1(x0) > 1. Since x0 ∈ S implies b1(x0) ≥ 1, we get

L4 − [x0 ∈ S] ≤ 3(b1(x0) − [x0 ∈ S]) = 3 c1(x0).

Hence

coll_{h,S}(x0) = full_{h,S}(x0) − [x0 ∈ S] ≤ L4 − [x0 ∈ S] + L5 + L6 ≤ 3 c1(x0) + O( α (Σ_{x∈S} b1(x)) / n ).

Thus Theorem 2.2 follows from (4.6) and (4.7). It remains to bound the expectations of L5 and L6 as in (4.7). We call them the small head and tail because they are much smaller now that α = o(1).

4.1 The small tail L5. We want to show that

(4.8) E[L5] = O( (α/β²) (Σ_{x∈S} b1(x)) / n ),

using α ≤ β < 1. The proof is very similar to that in Section 3.2. Let W(l) = |S0 ∩ h⁻¹([h(x0), h(x0) + l))|. Then L5 is the largest value in [t] such that W(L5) ≥ (α + (1 − α)β/2) L5 > (β/2) L5. This time, for i = 0, 1, ..., we define l_i = 2^i, which is a much faster growth than in Section 3.2. We say that i > 0 is good if W(l_i) ∈ [(β/4) l_i, β l_i). Similar to Lemma 3.2, we have that i is good if l_{i−1} ≤ L5 < l_i. Hence

E[L5] ≤ Σ_{i≥0} Pr[i good] l_i.

To bound Pr[i good] we use Lemma 3.1 with q = l_i, Q = [q], d = (β/4 − α) l_i ≥ (β/8) l_i for α ≤ β/8, and D = β l_i. With calculations similar to those in Section 3.2, we get

Σ_{i≥0} Pr[i good] l_i = O( Σ_{i≥0} α (Σ_{x∈S0} [b1(x) < β l_i] b1(x)³) / (β⁴ l_i² n) + Σ_{i≥0} α² (Σ_{x∈S0} [b1(x) < β l_i] b1(x))² / (β⁴ l_i n²) ) = O( (α/β²) (Σ_{x∈S0} b1(x)) / n ).

This completes the proof of (4.8).

4.2 The small head L6. We want to show that

(4.9) E[L6] = O( (α/β) (Σ_{x∈S} b1(x)) / n ).

Here L6 is the largest value such that for some H6, we have |S0 ∩ h⁻¹([h(x0) − H6, h(x0)))| ≥ H6 − (1 − α)β/2 · L6. Define U(H) = |S0 ∩ h⁻¹([h(x0) − H, h(x0)))|. We will look for the largest H* such that U(H*) ≥ H*. Note that U(H6) ≥ H6 − (1 − α)β/2 · L6. Hence H* ≥ H6 − (1 − α)β/2 · L6, and therefore L6 = O(H*/β). As in the previous section, we define l_i = 2^i. This time we say that i is good if U(l_i) ∈ [l_i/2, l_i). Similar to Lemma 3.2, we have that i is good if l_{i−1} ≤ H* < l_i. Hence

E[L6] = O( E[H*]/β ) = O( Σ_{i≥0} Pr[i good] l_i / β ).

To bound Pr[i good] we use Lemma 3.1 with q = l_i, Q = [q], d = (1/2 − α) l_i ≥ l_i/3 for α ≤ 1/6, and D = l_i. With calculations similar to those in Section 3.2, we get

Σ_{i≥0} Pr[i good] l_i / β = O( Σ_{i≥0} α (Σ_{x∈S0} [b1(x) < l_i] b1(x)³) / (β l_i² n) + Σ_{i≥0} α² (Σ_{x∈S0} [b1(x) < l_i] b1(x))² / (β l_i n²) ) = O( (α/β) (Σ_{x∈S0} b1(x)) / n ).

This completes the proof of (4.9), hence of Theorem 2.2.

5 Application

We will now show how to apply our result to different domains and how the traditional solution with a first collision free hashing into a larger intermediate domain would be at least twice as slow.
First we consider 64-bit integers, then fixed length strings, and finally variable length strings. We assume that the implementation is done in a programming language like C [8] or C++ [18], leading to efficient and portable code that can be inlined from many other programming languages. We note that this section in itself has no theoretical contribution. It is demonstrating how the theoretical result of the previous section plays together with the fastest existing techniques, yielding simpler code running at least twice as fast. For the reader less familiar with efficient hashing, this section may serve as a mini-survey of hashing that is both fast and theoretically good.

Recall the general situation: We are going to hash a subset S of size n of some domain U. The hash range should be indices of the linear probing array. For simplicity, the range will be [2n], and we assume that 2n is a power of two. Our hash function h : U → [2n] will always be composed of two functions via some intermediate domain [m]; namely h1 : U → [m] and h2 : [m] → [2n]. Then h(x) = h2(h1(x)). With the traditional method, we want h1 to be collision free w.h.p., which means m_old = ω(n²). From our Corollary 2.1, it follows that for an expected constant number of probes, it suffices with expected constant bucket size, and hence it suffices with m_new = n. Thus m_new = o(√m_old). As a concrete example, we will think of m_new = 2^30 and m_old = 2^60. We denote the corresponding bit-lengths l_new = 30 and l_old = 60.

5.1 5-universal hashing from the intermediate domain. Below we consider two different methods for 5-universal hashing.

Degree 4 polynomial. The classic method [20] for 5-universal hashing is to use a random degree 4 polynomial h_{a0,...,a4}(x) = Σ_{i=0}^{4} a_i x^i over Z_p. To get a fast mod p operation, Carter and Wegman [3] suggest using a Mersenne prime p such as 2^31 − 1, 2^61 − 1, 2^89 − 1, or 2^127 − 1. The point is that with p = 2^a − 1, we can exploit that y ≡ (y & p) + (y >> a) (mod p), where >> is right shift and & is bit-wise and. At the end, we extract the l least significant bits. A typical situation would be that m_new < p_new = 2^31 − 1 while 2^31 − 1 < m_old < p_old = 2^61 − 1. For multiplications in Z_{2^31−1}, we can use that 64-bit multiplication provides exact 32-bit multiplication. However, for multiplication in Z_{2^61−1}, we have the problem that 64-bit multiplication discards overflow, so we need 4 64-bit multiplications to get the full answer. For larger n, we might have p_new = 2^61 − 1 and a correspondingly larger p_old, but we still need more than twice as many 64-bit multiplications with the large old domain.

Tabulation. More recently, [19] suggested a tabulation based approach to 5-universal hashing which is roughly 5 times faster than the above approach.¹ The simplest case is that we divide a key x ∈ [m] into 2 characters x0 and x1. The hash function is computed as T0[x0] ⊕ T1[x1] ⊕ T2[x0 + x1].
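A sketch of the degree 4 polynomial scheme with the Mersenne prime p = 2^61 − 1 described earlier in this section. The use of the compiler extension `unsigned __int128` for the full 64x64-bit products is an implementation choice of this sketch, not prescribed by the paper; coefficients and keys are assumed to lie in [p].

```c
#include <stdint.h>

static const uint64_t P61 = (1ull << 61) - 1;   /* Mersenne prime 2^61 - 1 */

/* Reduce a value below 2^122 modulo p = 2^61 - 1 using the identity
   y mod p = (y & p) + (y >> 61) (mod p), plus conditional subtractions. */
static uint64_t mod61(unsigned __int128 y) {
    uint64_t r = (uint64_t)(y & P61) + (uint64_t)(y >> 61);
    if (r >= P61) r -= P61;
    if (r >= P61) r -= P61;
    return r;
}

/* h(x) = a4*x^4 + a3*x^3 + a2*x^2 + a1*x + a0 mod p by Horner's rule;
   at the end we extract the l least significant bits. */
static uint64_t poly5_hash(uint64_t x, const uint64_t a[5], unsigned l) {
    uint64_t r = a[4];
    for (int i = 3; i >= 0; i--)
        r = mod61((unsigned __int128)r * x + a[i]);
    return r & ((1ull << l) - 1);
}
```

In use, the five coefficients a0, ..., a4 are drawn uniformly at random from [p]; with fixed random coefficients the function is 5-universal.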
Here ⊕ is bit-wise xor and T0, T1, T2 are independently tabulated 5-universal hash functions. T0 and T1 have √m entries, and T2 has 2√m entries. This is fast if √m is small enough to fit in fast memory. The experiments in [19] found this to be the case for m = 2^32, gaining a factor 5 in speed over the polynomial-based method above. This is perfect for our new small intermediate domain m = m_new = 2^30, but not with the old large intermediate domain m = m_old = 2^60. The method in [19] is generalized so that we can divide the key into q characters, and then perform 2q − 1 look-ups in tables of size 2 m^{1/q}, but the method is more complicated for q > 2. Assuming that we want the same small table size with the old and the new method, this means that q_old ≥ 2 q_new, hence that the large old domain will require more than twice as many look-ups and twice as much space.

¹ The method is designed for 4-universal hashing, but as pointed out in [12] it works unchanged for 5-universal hashing.

5.2 64-bit integers. We only need universal hashing from 64-bit integers to l-bit integers, so we can use the method from [6]: We pick a random odd 64-bit integer a, and compute h_a(x) = (a · x) >> (64 − l). This method actually exploits that the overflow from a · x is discarded. The probability of collision between any two keys is at most 1/2^l. We use this method for h1, using l_new = 30 in the new approach and l_old = 60 in the old approach. The value of l does not affect the speed of h1, but h1 is very fast compared with the 5-universal h2. The overall hashing time is dominated by h2, which we above found more than twice as fast with our smaller intermediate domain.

5.3 Fixed length strings. We now consider the case that the input domain is a string of 2r 32-bit characters. For h1 we will use a straightforward combination of a trick for fast signatures in [1] with the 2-universal hashing from [4]. The combination is folklore [13]. Thus, our input is x = x0 ⋯ x_{2r−1}, where each x_i is a 32-bit integer. The hash function is defined in terms of 2r random 64-bit integers a0, ..., a_{2r−1}.
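The multiply-shift scheme from Section 5.2 is a one-liner in C; a minimal sketch follows. The specific multiplier in the usage note is an arbitrary odd constant for illustration; in actual use, a is drawn at random among the odd 64-bit integers.

```c
#include <stdint.h>

/* Universal hashing from 64-bit integers to l-bit integers [6]:
   multiply by a random odd 64-bit a and keep the top l bits.
   The overflow of a*x is discarded by C's modular 64-bit arithmetic,
   which is exactly what the scheme exploits. */
static uint64_t hash64(uint64_t x, uint64_t a, unsigned l) {
    return (a * x) >> (64 - l);
}
```

With l = l_new = 30, this produces the intermediate key that the tabulated or polynomial h2 then maps into [2n].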
The hash function is computed as

h_{a0,...,a_{2r−1}}(x0 ⋯ x_{2r−1}) = ( Σ_{i=0}^{r−1} (x_{2i} + a_{2i})(x_{2i+1} + a_{2i+1}) ) >> (64 − l).

This method only works if l ≤ 33, but then the collision probability for any two keys is 2^{−l}. The cost of this function is dominated by the r 64-bit multiplications. We can use the above method directly for our l_new = 30. However, for l_old = 60, the best we can do is to apply the above type of hash function twice, using a new set of random integers a′0, ..., a′_{2r−1} for the second application, and then concatenate the resulting hash values, thus getting an output of 2 l_new = l_old bits. Thus for the initial hashing h1 of fixed length strings, we gain a factor 2 in speed with our new smaller intermediate domain.

5.4 Variable length strings. For the initial hashing h1 of variable length strings x = x0 x1 ⋯ xv, we can use the method from [5]. We view x0, ..., xv as coefficients of a polynomial over Z_p, assuming x0, ..., xv ∈ [p]. We pick a single random a ∈ [p], and compute the hash function

h_a(x0 ⋯ xv) = Σ_{i=0}^{v} x_i a^i.

If y is another string that is no longer than x, then Pr[h_a(x) = h_a(y)] ≤ v/p. As usual, the computations are faster if we use Mersenne primes. In this case, we would probably use p = 2^61 − 1 and 32-bit characters x_i, regardless of the size of the intermediate domain. We will hash down to the right size using the 64-bit hashing from Section 5.2. As stated above, our smaller intermediate domain does not make any difference in speed, but if we want fast hashing of longer variable length strings, then the above method is too slow. For higher speed, we divide the input string into chunks of, say, ten 32-bit characters, and apply the fixed length hashing from Section 5.3 to each chunk, thus getting a reduced string of chunk hash values. If we have more than one chunk, we apply the above variable length string hashing to the string of chunk hash values. Note that two strings remain different under this reduction if there is a chunk in which they differ, and the hashing of that chunk preserves the difference. Hence we can apply exactly the same hash function to each chunk. The overall time bound is dominated by the length reducing chunk hashing, and here, with the old method, we should hash chunks to 64 bits while we with the new method can hash chunks to 32 bits. As discussed in Section 5.3, the new method gains a factor 2 in speed.

6 Storing variable length strings directly

Above we assumed that each string x had a single array entry containing its hash value and a pointer to x. Pagh [13] suggested analyzing the alternative where we store variable length strings directly as intervals of the array we probe. Each string is terminated by an end-of-string character '\0'.
If a strng x s n the array, t should be preceded by ether \0 or an empty locaton, and t should be located between hx andthefrstempty locaton after hx, as stated by Invarant 2.1. We nsert anewkeyx startng at the frst empty locaton after hx. We may have to push some later keys further back to make room, but all n all, we only consder locatons between hx and the frst empty locaton after hx n the state after x has been nserted. The advantage to the above drect representaton s that we do not need to follow a ponter to get to x. Deletons are mplemented usng the same dea. As n the prevous sectons, we let full h,s x denote the largest l such that hx+[l] s full. Then full h,s x+ 1 bounds the number of locatons consdered by any operaton assocated wth x. In ths measure we should nclude x n S f x s nserted or deleted. By a smple reducton to Theorem 2.2, we wll prove Theorem 6.1. Let S U be the set of varable length strngs x. Letlengthx be the length of x ncludng an end-of-strng character \0. Suppose the total length of the strngs n S s at most αt for some α<0.9. Leth 1 be a unversal hash functon from U to [t], andleth 2 be a 5-unversal hash functon from [t] to [t]. Then for every x 0 U, we have 6.10E [full h,s x 0 ] = lengthx α lengthx 0 + Moreover, when x 0 S, 6.11 E [full h,s x 0 ] = α length2 x lengthx length2 x lengthx To bound the number of cells consdered when nsertng or deletng x 0, we use 6.10 and get a bound of lengthx α lengthx 0 + length2 x lengthx. For fndng x 0 we can use the stronger bound from 6.11, yeldng 1+ α length2 x lengthx f x 0 S lengthx 0 + α length2 x lengthx f x 0 S Proof of Theorem 6.1. To apply Theorem 2.2, we smply thnk of all characters of a key x as hashng ndvdually to hx. The fllng of the array s ndependent of the order n whch elements are nserted, so as long as we satsfy Invarant 2.1, t doesn t matter for full that we want characters of x to land at consecutve locatons. 
More specifically, we let the hash function h_1 apply this way to the individual characters. Recall here that h_1 is arbitrary in Theorem 2.2. We get that

    E[full_{h,S}(x_0)] ≤ b_1(x_0) + (3/(1−α)) ( b_1(x_0) + Σ_{a∈[t]} b_1(a)² / t )
                       = b_1(x_0) + (3/(1−α)) ( b_1(x_0) + Σ_{x∈S} length(x) b_1(x) / t ).

Now b_1(x) is the total length of strings y ∈ S with h_1(y) = h_1(x). Since h_1 is universal, for each string x,

    E[b_1(x)] = [x ∈ S] length(x) + Σ_{y∈S\{x}} length(y)/t ≤ [x ∈ S] length(x) + α.

By linearity of expectation, (6.10) of the theorem follows. For the bound (6.11) when x_0 ∈ S, we use the bound on coll_{h,S}(x_0) from Theorem 2.2. When x_0 ∈ S, we have coll_{h,S}(x_0) = full_{h,S}(x_0) and c_1(x_0) = b_1(x_0) − length(x_0), so E[c_1(x_0)] ≤ α. We therefore get

    E[full_{h,S}(x_0)] = E[coll_{h,S}(x_0)] ≤ (3/(1−α)) E[ c_1(x_0) + Σ_{x∈S} length(x) b_1(x) / t ] = O( (α/(1−α)) Σ_{x∈S} length(x)² / Σ_{x∈S} length(x) ).

It is easy to see that the bound of Theorem 6.1 is tight within a constant factor even if a truly random hash function h is used. The first term is trivially needed. For the second term, the basic point in the square is that the probability of hitting y is proportional to its length, and so is the expected cost of hitting y. This direct storage of strings was not considered by Pagh et al. [12], so this is the first proof that limited randomness suffices in this case.

The bound of Theorem 6.1 gives a large penalty for large strings, and one may consider a hybrid approach where, for strings of length bigger than some parameter l, one stores a pointer to the suffix; that is, if the string does not end after l characters, then the next characters represent a pointer to a location with the remaining characters. Then it is only for longer strings that we need to follow a pointer, and the cost of that can be amortized over the work on the first l characters.

References

[1] J. Black, S. Halevi, H. Krawczyk, T. Krovetz, and P. Rogaway. UMAC: fast and secure message authentication. In Proc. 19th CRYPTO, 1999.
[2] J. R. Black, C. U. Martel, and H. Qi. Graph and hashing algorithms for modern architectures: Design and performance. In Proc. 2nd WAE, pages 37–48, 1998.
[3] J. Carter and M. Wegman. Universal classes of hash functions. J. Comp. Syst. Sci., 18:143–154, 1979. Announced at STOC'77.
[4] M. Dietzfelbinger. Universal hashing and k-wise independent random variables via integer arithmetic without primes. In Proc. 13th STACS, LNCS 1046, 1996.
[5] M. Dietzfelbinger, J. Gil, Y. Matias, and N. Pippenger. Polynomial hash functions are reliable (extended abstract). In Proc. 19th ICALP, LNCS 623, 1992.
[6] M. Dietzfelbinger, T. Hagerup, J. Katajainen, and M. Penttonen. A reliable randomized algorithm for the closest-pair problem. J. Algorithms, 25:19–51, 1997.
[7] G. L. Heileman and W. Luo. How caching affects hashing. In Proc. 7th ALENEX, 2005.
[8] B. Kernighan and D. Ritchie. The C Programming Language (2nd ed.). Prentice Hall, 1988.
[9] D. E. Knuth. Notes on open addressing, 1963. Unpublished memorandum.
[10] D. E. Knuth. The Art of Computer Programming, Volume III: Sorting and Searching. Addison-Wesley, 1973.
[11] M. Mitzenmacher and S. P. Vadhan. Why simple hash functions work: exploiting the entropy in a data stream. In Proc. 19th SODA, 2008.
[12] A. Pagh, R. Pagh, and M. Ruzic. Linear probing with constant independence. In Proc. 39th STOC, 2007.
[13] R. Pagh. Personal communication.
[14] R. Pagh and F. F. Rodler. Cuckoo hashing. J. Algorithms, 51:122–144, 2004.
[15] H. Prodinger and W. Szpankowski. Preface for special issue on average case analysis of algorithms. Algorithmica, 22(4), 1998.
[16] J. P. Schmidt and A. Siegel. The analysis of closed hashing under limited randomness. In Proc. 22nd STOC, 1990.
[17] A. Siegel and J. Schmidt. Closed hashing is computable and optimally randomizable with universal hash functions. Technical Report, Courant Institute.
[18] B. Stroustrup. The C++ Programming Language (Special Edition). Addison-Wesley, Reading, MA, 2000.
[19] M. Thorup and Y. Zhang. Tabulation based 4-universal hashing with applications to second moment estimation. In Proc. 15th SODA, 2004.
[20] M. Wegman and J. Carter. New hash functions and their use in authentication and set equality. J. Comp. Syst. Sci., 22:265–279, 1981.

Copyright © by SIAM.


More information

Exercises of Chapter 2

Exercises of Chapter 2 Exercses of Chapter Chuang-Cheh Ln Department of Computer Scence and Informaton Engneerng, Natonal Chung Cheng Unversty, Mng-Hsung, Chay 61, Tawan. Exercse.6. Suppose that we ndependently roll two standard

More information

Perfect Competition and the Nash Bargaining Solution

Perfect Competition and the Nash Bargaining Solution Perfect Competton and the Nash Barganng Soluton Renhard John Department of Economcs Unversty of Bonn Adenauerallee 24-42 53113 Bonn, Germany emal: rohn@un-bonn.de May 2005 Abstract For a lnear exchange

More information

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016 CS 29-128: Algorthms and Uncertanty Lecture 17 Date: October 26, 2016 Instructor: Nkhl Bansal Scrbe: Mchael Denns 1 Introducton In ths lecture we wll be lookng nto the secretary problem, and an nterestng

More information

find (x): given element x, return the canonical element of the set containing x;

find (x): given element x, return the canonical element of the set containing x; COS 43 Sprng, 009 Dsjont Set Unon Problem: Mantan a collecton of dsjont sets. Two operatons: fnd the set contanng a gven element; unte two sets nto one (destructvely). Approach: Canoncal element method:

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

Games of Threats. Elon Kohlberg Abraham Neyman. Working Paper

Games of Threats. Elon Kohlberg Abraham Neyman. Working Paper Games of Threats Elon Kohlberg Abraham Neyman Workng Paper 18-023 Games of Threats Elon Kohlberg Harvard Busness School Abraham Neyman The Hebrew Unversty of Jerusalem Workng Paper 18-023 Copyrght 2017

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Spectral Graph Theory and its Applications September 16, Lecture 5

Spectral Graph Theory and its Applications September 16, Lecture 5 Spectral Graph Theory and ts Applcatons September 16, 2004 Lecturer: Danel A. Spelman Lecture 5 5.1 Introducton In ths lecture, we wll prove the followng theorem: Theorem 5.1.1. Let G be a planar graph

More information

TOPICS MULTIPLIERLESS FILTER DESIGN ELEMENTARY SCHOOL ALGORITHM MULTIPLICATION

TOPICS MULTIPLIERLESS FILTER DESIGN ELEMENTARY SCHOOL ALGORITHM MULTIPLICATION 1 2 MULTIPLIERLESS FILTER DESIGN Realzaton of flters wthout full-fledged multplers Some sldes based on support materal by W. Wolf for hs book Modern VLSI Desgn, 3 rd edton. Partly based on followng papers:

More information

Math Review. CptS 223 Advanced Data Structures. Larry Holder School of Electrical Engineering and Computer Science Washington State University

Math Review. CptS 223 Advanced Data Structures. Larry Holder School of Electrical Engineering and Computer Science Washington State University Math Revew CptS 223 dvanced Data Structures Larry Holder School of Electrcal Engneerng and Computer Scence Washngton State Unversty 1 Why do we need math n a data structures course? nalyzng data structures

More information