Notes on Frequency Estimation in Data Streams


In (one of) the data streaming model(s), the data is a sequence of arrivals $a_1, a_2, \ldots, a_m$ of the form $a_j = (i, v)$, where $i$ is the identity of the item and belongs to the domain $\{1, \ldots, n\}$, and $v$ is the change in the frequency of the item: if $v \geq 1$ then the meaning is $v$ additions of item $i$, and if $v \leq -1$ then the meaning is $|v|$ deletions of item $i$. The goal is to compute some function while using space that is sublinear in the length of the stream. This is relevant both when the data is literally obtained as a long stream of signals, where the stream is too long to keep in memory, and when the data resides on some external device and reading it in one pass is much more efficient than allowing random access.

A natural special case is that $v = +1$ for every element. In this case the stream is simply a sequence of items (with repetitions) $a_j = i$ for $i \in \{1, \ldots, n\}$.

One of the first problems studied in this model (in the special case of single additions) is computing frequency moments. Namely, let $m_i = |\{j : a_j = i\}|$ denote the number of occurrences of $i$ in the stream. Then for each $k \geq 0$ we define

$$F_k = \sum_{i=1}^{n} (m_i)^k. \qquad (1)$$

In particular, $F_1$ equals $m$, the length of the sequence; $F_0$ is the number of distinct elements appearing in the sequence (since if $m_i > 0$ then $m_i^0 = 1$ and if $m_i = 0$ then $m_i^0 = 0$); and $F_2$ is the repeat rate, or Gini's index of homogeneity, needed in order to compute the surprise index of the sequence. Finally, for $k = \infty$ we define

$$F_\infty = \max_{1 \leq i \leq n} m_i. \qquad (2)$$

Given an approximation parameter $\epsilon$ and a confidence parameter $\delta$, the algorithm should compute an estimate $\hat{F}_k$ such that the probability that $|\hat{F}_k - F_k| > \epsilon F_k$ is at most $\delta$.
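As a quick illustration of the definitions, here is a small (non-streaming) reference implementation; it stores every count $m_i$ and therefore uses linear space, which is precisely what the algorithms below are designed to avoid. The function name is ours, for illustration only.

```python
from collections import Counter

def frequency_moment(stream, k):
    """Compute F_k = sum_i (m_i)^k exactly.
    Stores every count m_i, so this uses Theta(n) space."""
    counts = Counter(stream)              # m_i for each item i in the stream
    if k == float("inf"):
        return max(counts.values())       # F_infinity = max_i m_i
    return sum(m ** k for m in counts.values())

stream = [3, 1, 3, 2, 3, 1]               # m_1 = 2, m_2 = 1, m_3 = 3
assert frequency_moment(stream, 0) == 3   # number of distinct elements
assert frequency_moment(stream, 1) == len(stream)
assert frequency_moment(stream, 2) == 2**2 + 1**2 + 3**2
```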

What is known?

1. There is a lower bound of $n^{1-2/k}$ (for constant $\epsilon$ and $\delta$), which in particular means that for $k \geq 3$ the lower bound is of the form $n^\alpha$ for a constant $\alpha$ (that approaches 1 as $k$ increases).
2. There is a (recent) upper bound whose dependence on $n$ is $\tilde{O}(n^{1-2/k})$, so that it roughly matches the lower bound (the exact expression is $O\!\left(\frac{k^2 \log(1/\delta)}{\epsilon^{2+4/k}} \cdot n^{1-2/k} \log^2 m\,(\log m + \log n)\right)$).
3. For the special case of $k = 1$, clearly the exact value of $F_1 = m$ can be computed using space $\log m$. To get an estimate, $O(\log\log m + \log(1/\epsilon))$ bits suffice.
4. For the special case of $k = 0$ it is possible to compute an estimate that is within a factor $1/c$ and a factor $c$ of $F_0$ with probability at least $1 - 2/c$, where $c > 2$, using $O(\log n)$ bits.
5. For the special case of $k = 2$ it suffices to use $O\!\left(\frac{\log(1/\delta)}{\epsilon^2}(\log n + \log m)\right)$ bits.
6. Estimating $F_\infty$ requires space $\Omega(n)$, even for $m = O(n)$ and constant $\epsilon$ and $\delta$.
7. Randomness is crucial: for every $k \neq 1$, any deterministic algorithm that computes an estimate of $F_k$ with constant $\epsilon$ must use $\Omega(n)$ space.

We shall discuss the original result of Alon et al., whose dependence on $n$ is $\tilde{O}(n^{1-1/k})$ (to be precise: $O\!\left(\frac{k \log(1/\delta)}{\epsilon^2} \cdot n^{1-1/k} (\log n + \log m)\right)$). If time permits we will talk about some of the special cases.

Assume first that the length of the sequence, $m$, is known in advance. This assumption is removed later. Let $s_1 = \frac{8k}{\epsilon^2} \cdot n^{1-1/k}$ and $s_2 = 2\log(1/\delta)$. The algorithm computes $s_2$ random variables, $Y_1, \ldots, Y_{s_2}$, and outputs their median. (This is a standard technique for going from a constant probability of deviating by more than the allowed deviation to only a $\delta$ probability that this event occurs, so the interesting part is in defining and analyzing the behavior of the $Y_t$'s.) Each $Y_t$ is the average of $s_1$ random variables $X_{t,j}$, where $1 \leq j \leq s_1$. The $X_{t,j}$'s are independent, identically distributed random variables. In order to explain how each $X_{t,j} = X$ is distributed, we introduce some notation. For each $p \in \{1, \ldots, m\}$, let

$$r(p) = |\{q : q \geq p,\ a_q = a_p\}| \qquad (3)$$

denote the number of occurrences of $\ell = a_p$ among the elements in the sequence that follow $a_p$, including $a_p$ itself (so that $r(p) \geq 1$). Next define

$$R_k(p) = m\left((r(p))^k - (r(p) - 1)^k\right). \qquad (4)$$

Each variable $X_{t,j} = X$ is determined (independently) by selecting an index $p \in \{1, \ldots, m\}$ uniformly at random and letting $X = R_k(p)$. Note that in order to compute $r(p)$, and hence $X = R_k(p)$, it suffices to use $\log m$ bits to select $p$ and count up to $p$, and then to maintain the $\log n$ bits representing $a_p$ and the $\log m$ bits representing $r(p)$ (from which $R_k(p)$ is computed when the pass ends). By the definition of $X$ (recall that $m_i = |\{j : a_j = i\}|$),

$$\mathrm{Exp}[X] = \frac{1}{m} \sum_{p=1}^{m} R_k(p) \qquad (5)$$
$$= \sum_{p=1}^{m} \left((r(p))^k - (r(p)-1)^k\right) \qquad (6)$$
$$= \sum_{i=1}^{n} \left( \left((m_i)^k - (m_i - 1)^k\right) + \left((m_i-1)^k - (m_i-2)^k\right) + \cdots + (2^k - 1^k) + (1^k - 0^k) \right) \qquad (7)$$
$$= \sum_{i=1}^{n} (m_i)^k = F_k. \qquad (8)$$

(In (7) we grouped the positions $p$ according to the identity of $a_p$: for each item $i$, as $p$ ranges over the $m_i$ positions holding $i$, the value $r(p)$ takes each value in $\{1, \ldots, m_i\}$ exactly once, and the sum telescopes.) Thus we have an unbiased estimator of $F_k$.
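In code, a single draw of the basic estimator looks as follows. This is a minimal sketch with names of our choosing; it is written over an in-memory list for readability, whereas a true one-pass implementation would draw $p$ first and then simply count matches from position $p$ onward, as described above.

```python
import random

def ams_basic_estimate(stream, k):
    """One sample of the basic estimator X = m * (r^k - (r-1)^k):
    pick a position p uniformly at random, and let r = r(p) be the
    number of occurrences of a_p at or after position p."""
    m = len(stream)
    p = random.randrange(m)
    item = stream[p]
    r = sum(1 for q in range(p, m) if stream[q] == item)  # r(p) >= 1
    return m * (r ** k - (r - 1) ** k)
```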

What remains to be done is to bound the deviation of the average of the $X_{t,j}$'s from this correct expected value. (The $X_{t,j}$'s are independent, so we could apply Chernoff; however, their range is very big, so we wouldn't get a very good bound.) To this end we bound the variance $\mathrm{Var}[X] = \mathrm{Exp}[X^2] - \mathrm{Exp}^2[X]$ and apply Chebyshev:

$$\Pr\left[|X - \mathrm{Exp}[X]| \geq t \cdot \mathrm{Var}^{1/2}[X]\right] \leq \frac{1}{t^2}, \quad \text{so that} \quad \Pr\left[|X - \mathrm{Exp}[X]| \geq T\right] \leq \frac{\mathrm{Var}[X]}{T^2}.$$

In order to bound $\mathrm{Exp}[X^2]$ we shall use the following inequality, which holds for any pair of numbers $a > b > 0$:

$$a^k - b^k = (a - b)\left(a^{k-1} + a^{k-2}b + \cdots + ab^{k-2} + b^{k-1}\right) \qquad (9)$$
$$\leq (a - b)\, k\, a^{k-1}. \qquad (10)$$

(You may be familiar with the special case $a^2 - b^2 = (a-b)(a+b)$.) We use this inequality with $a = b + 1$, so that $a^k - (a-1)^k \leq k a^{k-1}$, and get:

$$\mathrm{Exp}[X^2] = \frac{1}{m} \sum_{p=1}^{m} (R_k(p))^2 = m \sum_{i=1}^{n} \sum_{r=1}^{m_i} \left(r^k - (r-1)^k\right)^2 \qquad (11)$$
$$\leq m \sum_{i=1}^{n} \sum_{r=1}^{m_i} k\, r^{k-1} \left(r^k - (r-1)^k\right) \qquad (12)$$
$$= k\, m \sum_{i=1}^{n} \left( (m_i)^{2k-1} - (m_i)^{k-1}(m_i - 1)^k + (m_i - 1)^{2k-1} - (m_i-1)^{k-1}(m_i-2)^k + \cdots + 2^{2k-1} - 2^{k-1} \cdot 1^k + 1^{2k-1} \right) \qquad (13)$$
$$\leq k\, m \sum_{i=1}^{n} (m_i)^{2k-1} \qquad (14)$$
$$= k\, m\, F_{2k-1} = k\, F_1 F_{2k-1}. \qquad (15)$$

(Step (14) holds because each pair $-r^{k-1}(r-1)^k + (r-1)^{2k-1}$ in (13) is non-positive: $(r-1)^{2k-1} = (r-1)^{k-1}(r-1)^k \leq r^{k-1}(r-1)^k$.)

It can be shown (and is given as an exercise) that

$$F_1 F_{2k-1} \leq n^{1-1/k} (F_k)^2, \qquad (16)$$

where the inequality $\left(\frac{1}{n}\sum_{i=1}^n m_i\right)^k \leq \frac{1}{n}\sum_{i=1}^n m_i^k$ is useful. Therefore,

$$\mathrm{Var}[X] \leq \mathrm{Exp}[X^2] \leq k\, F_1 F_{2k-1} \leq k\, n^{1-1/k} F_k^2, \qquad (17)$$

and so

$$\mathrm{Var}[Y_t] = \mathrm{Var}\left[\frac{1}{s_1}\sum_{j=1}^{s_1} X_{t,j}\right] = \frac{1}{s_1}\mathrm{Var}[X] \leq \frac{k\, n^{1-1/k} F_k^2}{s_1}, \qquad (18)$$

whereas

$$\mathrm{Exp}[Y_t] = \mathrm{Exp}\left[\frac{1}{s_1}\sum_{j=1}^{s_1} X_{t,j}\right] = \mathrm{Exp}[X] = F_k. \qquad (19)$$

By Chebyshev's inequality,

$$\Pr\left[|Y_t - F_k| > \epsilon F_k\right] \leq \frac{\mathrm{Var}[Y_t]}{\epsilon^2 F_k^2} \leq \frac{k\, n^{1-1/k} F_k^2}{s_1\, \epsilon^2 F_k^2}. \qquad (20)$$

By our choice of $s_1 = \frac{8k}{\epsilon^2} n^{1-1/k}$, this is at most $\frac{1}{8}$. As mentioned before, a standard analysis transforms the constant probability of a small deviation for each $Y_t$ into a high probability of a small deviation for their median (given as an exercise).
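Putting the pieces together, here is a sketch of the full median-of-means estimator, reusing `ams_basic_estimate` from above. For readability it re-reads the stream for each sample, whereas a real streaming implementation would run all $s_1 \cdot s_2$ copies in parallel during a single pass.

```python
import math
import random
import statistics

def ams_estimate(stream, k, eps, delta, n):
    """Median of s2 means, each mean over s1 i.i.d. copies of the basic
    estimator, with s1 = (8k/eps^2) * n^(1-1/k) and s2 = 2*ln(1/delta)
    as in the notes.  n is the size of the item domain {1, ..., n}."""
    s1 = math.ceil(8 * k * n ** (1 - 1 / k) / eps ** 2)
    s2 = math.ceil(2 * math.log(1 / delta))
    means = [statistics.mean(ams_basic_estimate(stream, k) for _ in range(s1))
             for _ in range(s2)]
    return statistics.median(means)
```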

Dealing with an unknown $m$. In this case we start computing the random variable $X$ under the assumption that $m = 1$, so that necessarily $a_p = a_1$ (and we get that $r(p) = 1$ and $X = 1 \cdot (1^k - 0^k) = 1$). If indeed $m = 1$ the process ends (note that if $m = 1$ then $F_k = 1$ for every $k$). Otherwise, the value of $m$ is updated to 2, and $p = 1$ is replaced by $p = 2$ with probability $1/2$. In either case, $r(p)$ is modified accordingly. In general, after viewing the first $t - 1$ items there is a current choice of $p_{t-1}$ and a corresponding value of $r(p_{t-1})$. When a new item arrives, the belief about $m$ is changed to $t$, and $p_t$ is set to $t$ with probability $1/t$ and remains $p_{t-1}$ with probability $1 - 1/t$. In the former case $r(p_t) = 1$; in the latter case $r(p_t)$ is $r(p_{t-1}) + 1$ if $a_t = a_{p_t}$, and is $r(p_{t-1})$ otherwise. As in the case that $m$ is known, the algorithm only needs to remember $a_{p_t}$ and $r(p_t)$ at each step, at a cost of $O(\log n + \log m)$ bits, and flipping a coin with bias $1/t$ takes $O(\log m)$ bits as well.
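This is exactly reservoir sampling of a single position, carrying the counter $r$ along. A minimal sketch (class name is ours):

```python
import random

class RunningEstimator:
    """Maintains (a_p, r(p)) for a uniformly random position p of a
    stream whose length is not known in advance."""
    def __init__(self):
        self.t = 0        # items seen so far (the current belief about m)
        self.item = None  # a_p for the current choice of p
        self.r = 0        # r(p): occurrences of a_p from position p onward

    def process(self, a):
        self.t += 1
        if random.randrange(self.t) == 0:  # with probability 1/t, move p to t
            self.item, self.r = a, 1
        elif a == self.item:               # p stays; the new item matches a_p
            self.r += 1

    def estimate(self, k):
        """X = m * (r^k - (r-1)^k), with m = number of items seen so far."""
        return self.t * (self.r ** k - (self.r - 1) ** k)
```

On the first item, `random.randrange(1)` is always 0, so the state initializes to $p = 1$, $r = 1$, matching the description above.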

On the relation between $m$ and $n$. If $m = \mathrm{poly}(n)$ then the factor $(\log n + \log m)$ is simply $O(\log n)$. When $m$ is very large, then instead of computing $r(p)$ exactly we can estimate it using $\log\log m + \log(1/\epsilon)$ bits.

Improved Estimation of $F_2$

If we plug $k = 2$ into the aforementioned expression, we get a dependence on $n$ that grows like $\tilde{O}(\sqrt{n})$. We next show how to get an estimate using only $O\!\left(\frac{\log(1/\delta)}{\epsilon^2}(\log n + \log m)\right)$ memory bits.

We set $s_2 = 2\log(1/\delta)$ as before, and $s_1 = \frac{16}{\epsilon^2}$. Here too the output is the median of $s_2$ random variables $Y_1, \ldots, Y_{s_2}$, where each $Y_t$ is the average of $X_{t,j}$ for $j = 1, \ldots, s_1$. Each $X_{t,j} = X$ is computed as follows. A central idea is to use a set $V = \{v^1, \ldots, v^h\}$ of vectors of length $n$ with $\pm 1$ entries that are four-wise independent. That is, for every four distinct coordinates $1 \leq i_1 < i_2 < i_3 < i_4 \leq n$, and for every choice of $\gamma_1, \ldots, \gamma_4 \in \{-1, +1\}$, exactly a $(1/16)$-fraction of the vectors in $V$ have $\gamma_j$ in their $i_j$-th coordinate for every $j = 1, \ldots, 4$. (Note that 4-wise independence implies that for each coordinate $i$, half of the vectors in $V$ have $+1$ in the $i$-th coordinate and half have $-1$, and it implies $s$-wise independence for $s = 2$ and $s = 3$.) Such sets, of size only $h = O(n^2)$, not only exist, but it is possible to compute each particular coordinate of any $v^p$ of our choice using $O(\log n)$ space.

To compute $X$, we first select $1 \leq p \leq h$ uniformly at random (this requires $O(\log n)$ bits of space). This determines $v^p = (\beta_1, \ldots, \beta_n)$ (where we compute the coordinates of $v^p$ only when we need them). Let $Z = \sum_{i=1}^{n} \beta_i m_i$. Computing $Z$ can be done in one pass using $O(\log n + \log m)$ space: initially $Z = 0$; for each $a_j$, $j = 1, \ldots, m$, if $\beta_{a_j} = +1$ then $Z$ is incremented by 1, and if $\beta_{a_j} = -1$ then it is decremented by 1. Computing each $\beta_{a_j}$ takes $O(\log n)$ space, and maintaining $Z$ takes $O(\log m)$ space. When the sequence terminates, we set $X = Z^2$.

As in the proof for general $k$, we next compute $\mathrm{Exp}[X]$ and $\mathrm{Var}[X]$. Before doing so, we make a few observations that follow from the fact that each $\beta_i \in \{-1, +1\}$ and from the 4-wise independence:

1. For every $i$, $\beta_i^2 = \beta_i^4 = 1$, while $\beta_i^3 = \beta_i$.
2. For every $i \neq j$, $\mathrm{Exp}[\beta_i \beta_j] = \frac{1}{4}(+1 \cdot +1) + \frac{1}{4}(+1 \cdot -1) + \frac{1}{4}(-1 \cdot +1) + \frac{1}{4}(-1 \cdot -1) = 0$. Similarly, for every distinct $i, j, k$, $\mathrm{Exp}[\beta_i \beta_j \beta_k] = 0$, and for every distinct $i, j, k, l$, $\mathrm{Exp}[\beta_i \beta_j \beta_k \beta_l] = 0$.

Using the first two properties:

$$\mathrm{Exp}[X] = \mathrm{Exp}[Z^2] = \mathrm{Exp}\left[\left(\sum_{i=1}^n \beta_i m_i\right)^2\right] \qquad (21)$$
$$= \mathrm{Exp}\left[\sum_{i,j} \beta_i \beta_j m_i m_j\right] \qquad (22)$$
$$= \sum_{i} (m_i)^2\, \mathrm{Exp}[\beta_i^2] + \sum_{i \neq j} m_i m_j\, \mathrm{Exp}[\beta_i \beta_j] \qquad (23)$$
$$= \sum_{i} (m_i)^2 = F_2. \qquad (24)$$

Similarly (though a bit more tediously...):

$$\mathrm{Exp}[X^2] = \mathrm{Exp}\left[\left(\sum_{i=1}^n \beta_i m_i\right)^4\right] \qquad (25)$$
$$= \sum_i (m_i)^4\, \mathrm{Exp}[\beta_i^4] + 4\sum_{i \neq j} (m_i)^3 m_j\, \mathrm{Exp}[\beta_i^3 \beta_j] + 12\sum_{\substack{j < k \\ i \notin \{j,k\}}} (m_i)^2 m_j m_k\, \mathrm{Exp}[\beta_i^2 \beta_j \beta_k] + 6\sum_{i < j} (m_i)^2 (m_j)^2\, \mathrm{Exp}[\beta_i^2 \beta_j^2] + 24\sum_{i<j<k<l} m_i m_j m_k m_l\, \mathrm{Exp}[\beta_i \beta_j \beta_k \beta_l] \qquad (26)$$
$$= \sum_i (m_i)^4 + 6\sum_{i<j} (m_i)^2 (m_j)^2. \qquad (27)$$

It follows that

$$\mathrm{Var}[X] = \mathrm{Exp}[X^2] - (\mathrm{Exp}[X])^2 \qquad (28)$$
$$= \sum_i (m_i)^4 + 6\sum_{i<j} (m_i)^2 (m_j)^2 - \left(\sum_i (m_i)^2\right)^2 \qquad (29)$$
$$= \sum_i (m_i)^4 + 6\sum_{i<j} (m_i)^2 (m_j)^2 - \sum_i (m_i)^4 - 2\sum_{i<j} (m_i)^2 (m_j)^2 \qquad (30)$$
$$= 4\sum_{i<j} (m_i)^2 (m_j)^2 \leq 2F_2^2. \qquad (31)$$

By Chebyshev, for each $1 \leq t \leq s_2$,

$$\Pr\left[|Y_t - F_2| > \epsilon F_2\right] \leq \frac{\mathrm{Var}[Y_t]}{\epsilon^2 F_2^2} \leq \frac{2F_2^2}{s_1\, \epsilon^2 F_2^2} = \frac{1}{8}, \qquad (32)$$

and we complete the argument as before.
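A minimal sketch of one such "tug-of-war" counter. The notes obtain the four-wise independent signs from an explicit family over $GF(2^d)$ (see the last section); as a stand-in, this sketch evaluates a random degree-3 polynomial over a prime field and takes the low-order bit, a standard substitute whose tiny $O(1/P)$ sign bias we ignore here. Names and the choice of prime are ours.

```python
import random

P = 2**31 - 1  # a Mersenne prime, used as the hash field in this sketch

class F2Sketch:
    """One counter Z = sum_i beta_i * m_i, with estimate X = Z^2."""
    def __init__(self):
        self.coeffs = [random.randrange(P) for _ in range(4)]  # random cubic
        self.z = 0

    def _beta(self, i):
        h = 0
        for c in self.coeffs:        # Horner evaluation of the cubic at i, mod P
            h = (h * i + c) % P
        return 1 - 2 * (h & 1)       # map the low bit to +1 / -1

    def process(self, i):            # one arrival of item i
        self.z += self._beta(i)

    def estimate(self):
        return self.z * self.z       # X = Z^2; Exp[X] = F_2
```

Averaging $s_1$ independent copies and taking the median of $s_2$ such averages then proceeds exactly as for general $k$.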

Estimating $F_0$ to within a constant factor

Here we'll only give the idea, without the full analysis. Let $F = GF(2^d)$ where $d = \log n$. We view each $a_j$ in the sequence as an element of the field $F$. To compute an estimate of $F_0$ (the number of distinct elements in the sequence), the algorithm selects $\alpha, \beta$ uniformly at random in $F$. For each $a_j$, the algorithm computes $z_j = z(a_j) = \alpha \cdot a_j + \beta$ and considers the representation of $z_j$ as a $d$-bit vector $z_{j,1}, \ldots, z_{j,d}$. It then sets $r_j = r(z_j)$ to be the largest index $r$ such that all $r$ rightmost bits of $z_j$ are 0. It maintains $R$ as the maximum over all $r_j$, and when the sequence terminates it outputs $Y = 2^R$.

The underlying idea is that for each fixed $\ell \in F$, $z(\ell)$ is uniformly distributed in $F$ (over the choice of $\alpha$ and $\beta$). That is, for every $\ell, \ell' \in F$, $\Pr_{\alpha,\beta}[z(\ell) = \ell'] = \frac{1}{|F|}$, and so, for any $r$, the probability that the $r$ rightmost bits of $z(\ell)$ are 0 is $2^{-r}$. Now, if $F_0 < 2^r/c$, then by Markov's inequality the probability that any one of the $F_0$ distinct values $\ell$ appearing in the stream gives $z(\ell)$ with $r(z(\ell)) \geq r$ is less than $1/c$.

(A few more details: let $B$ denote the subset of elements that appear in the stream, where we want to estimate $|B|$. Let $F_r$ denote the set of all elements of $F$ whose $r$ rightmost bits are 0, so that $|F_r| = 2^{d-r}$. For each element $\ell \in F$, let $X_\ell$ be a 0/1 random variable that is 1 if and only if $z(\ell) \in F_r$. Now, $\Pr[X_\ell = 1] = 2^{-r}$, so that $\mathrm{Exp}[\sum_{\ell \in B} X_\ell] = |B| \cdot 2^{-r}$. If $|B| < 2^r/c$ then this expectation is less than $1/c$, and so the probability that we get at least 1 (i.e., $c$ times the expectation) is less than $1/c$.)

The other direction (showing that if $F_0 > c \cdot 2^r$ then the probability that none of the $F_0$ distinct $\ell$'s gives $r(z(\ell)) \geq r$ is less than $1/c$) requires applying Chebyshev, using the fact that for any pair $\ell \neq \ell'$, the probability that both $r(z(\ell)) \geq r$ and $r(z(\ell')) \geq r$ is $2^{-2r}$.
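A minimal sketch of the idea. The notes do the arithmetic in $GF(2^d)$; this version substitutes a pairwise-independent affine hash modulo a fixed Mersenne prime (an assumption of ours, for simplicity) and counts trailing zero bits of the hash value, which preserves the $2^{-r}$ behavior up to an $O(1/P)$ error.

```python
import random

P = 2**61 - 1  # a Mersenne prime standing in for the field GF(2^d) of the notes

def f0_estimate(stream):
    """Track R = max number of trailing zero bits of z(a) = alpha*a + beta
    over the stream, and output 2^R as the distinct-element estimate."""
    alpha = random.randrange(P)
    beta = random.randrange(P)
    R = 0
    for a in stream:
        z = (alpha * a + beta) % P
        r = 0
        while r < 61 and (z & 1) == 0:  # count trailing zeros (capped for z = 0)
            z >>= 1
            r += 1
        R = max(R, r)
    return 2 ** R
```

Note that only $\alpha$, $\beta$, and $R$ are stored between arrivals, matching the $O(\log n)$-bit claim.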

Constructing $k$-wise Independent Sample Spaces

In the estimation of $F_2$ we built on the existence of a set of $n$-dimensional vectors, of size $O(n^2)$, that are 4-wise independent. Here we shall show a general (but slightly weaker) construction of $k$-wise independent sample spaces of size $O(n^k)$. Here too let $F = GF(2^d)$ where $d = \log n$. We shall actually construct a set of $n^k$ vectors over $F^n$ (if we want to get binary vectors we can take the least significant bit of each coordinate). Let $w_1, \ldots, w_n$ denote the elements of the field $F$. For each choice of $k$ elements $c_0, \ldots, c_{k-1} \in F$ we define the vector $v^{c_0,\ldots,c_{k-1}}$ as follows:

$$v_i^{c_0,\ldots,c_{k-1}} = \sum_{j=0}^{k-1} c_j w_i^j.$$

In other words, if we define the (univariate) polynomial $p^{c_0,\ldots,c_{k-1}}(x) = \sum_{j=0}^{k-1} c_j x^j$, then $v_i^{c_0,\ldots,c_{k-1}} = p^{c_0,\ldots,c_{k-1}}(w_i)$. By construction there are $n^k$ vectors, and each coordinate of any given vector can be computed using $O(\log n)$ bits. To see why we get a $k$-wise independent space, consider the $n \times k$ Vandermonde matrix $M$, where $M_{i,j} = w_i^j$. Then each vector $v^{c_0,\ldots,c_{k-1}}$ is the result of multiplying the matrix $M$ by the vector $(c_0, \ldots, c_{k-1})$. Any choice of $k$ rows of $M$ is linearly independent, implying the desired $k$-wise independence.
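A minimal sketch of the construction. For simplicity it works over a prime field $\mathbb{Z}_P$ rather than $GF(2^d)$ (the Vandermonde argument is identical); drawing the $k$ coefficients uniformly at random selects a uniform member of the sample space. Names and the prime are ours.

```python
import random

P = 257  # a prime playing the role of the field F (the notes use GF(2^d))

def kwise_vector(coeffs, n):
    """Coordinate i of the vector is p(w_i), where p(x) = sum_j coeffs[j]*x^j
    and w_1, ..., w_n are distinct field elements (here 0, ..., n-1)."""
    assert n <= P                    # need n distinct field elements
    vec = []
    for w in range(n):
        y = 0
        for c in reversed(coeffs):   # Horner's rule: evaluate p at w, mod P
            y = (y * w + c) % P
        vec.append(y)
    return vec

# A uniformly random member of a k-wise independent space of n-dimensional vectors:
k, n = 4, 16
v = kwise_vector([random.randrange(P) for _ in range(k)], n)
```

As the notes observe, any single coordinate can be produced on demand from the $k$ stored coefficients, which is what lets the $F_2$ algorithm avoid ever storing a vector explicitly.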
