Lecture 6 : Dimensionality Reduction
CPS290: Algorithmic Foundations of Data Science, February 3, 2017
Lecture 6: Dimensionality Reduction
Lecturer: Kamesh Munagala    Scribe: Kamesh Munagala

In this lecture, we will consider the problem of mapping n points in a metric space with large dimension into a metric space with much smaller dimension, while preserving pairwise distances approximately.

Unbiased Estimators of Distance

The main idea comes from hashing. In standard hashing, the hash functions we used were oblivious to the relationships between different items. Can we make hash functions behave in such a way that they map objects that are nearby, according to some similarity measure, into nearby buckets? Surprisingly, it turns out this can be done for many similarity and distance measures.

Warm-up: Hamming Cube

Consider n points on the Hamming cube {0, 1}^d in d dimensions. Suppose the distance function is the l_1-distance

    D(x, y) = \sum_{k=1}^{d} |x_k - y_k|

which simply counts the number of coordinates where the points differ. Consider now the following simple hash family:

    H = { h_k : h_k(x) = k-th bit of x }

Such a hash function maps each item to one of two buckets. Let Z_{ij} denote the random variable that is 1 if items x_i and x_j map to different buckets and 0 otherwise, where the randomness is over the hash function chosen uniformly from H. Then it is easy to check that

    E[Z_{ij}] = \Pr[h_k(x_i) \ne h_k(x_j)] = \frac{1}{d} \sum_{k=1}^{d} |x_{ik} - x_{jk}| = \frac{D(x_i, x_j)}{d}

Therefore, a hash function from this family can be viewed as mapping the input points to {0, 1} so that the expected distance between the points is exactly preserved, up to a known scaling of 1/d. Of course, the expected distance does not mean much: if you choose any one hash function h_k, the n points map to one of two points, so the distance in the embedding is either 0 or 1. Therefore, it is very likely that none of the distances are preserved in this mapping; the distances are either compressed to 0 or expanded to 1.
We can now use the standard trick: choose d' hash functions h_{k_1}, h_{k_2}, ..., h_{k_{d'}} at random from H, and map x to the d'-dimensional point

    \frac{d}{d'} \left( h_{k_1}(x), h_{k_2}(x), ..., h_{k_{d'}}(x) \right)

that is, map x to d/d' times the value of the k_m-th bit, for k_m a random dimension chosen independently for each m = 1, 2, ..., d'. By linearity of expectation applied to the previous argument, if Z_{ij} is the random variable denoting the l_1 distance between the mappings of x_i and x_j, then E[Z_{ij}] = D(x_i, x_j). Since we are choosing the hash functions at random, it is unlikely that two distinct input items x_i and x_j will map to the same bucket in all d' dimensions.
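As a sanity check, the bit-sampling map can be simulated. The Python sketch below (all function names and dimension sizes are my own, arbitrary choices) averages the embedded l_1 distance over many draws of the random hash functions and compares it to the true Hamming distance:

```python
import random

def hamming_distance(x, y):
    """l_1 distance between two points on the Hamming cube."""
    return sum(xi != yi for xi, yi in zip(x, y))

def bit_sampling_map(points, d_prime, d, rng):
    """Map each point to d' randomly chosen bits, scaled by d/d',
    so the expected l_1 distance between images equals the original."""
    dims = [rng.randrange(d) for _ in range(d_prime)]
    scale = d / d_prime
    return [[scale * p[k] for k in dims] for p in points]

rng = random.Random(0)
d, d_prime, trials = 64, 16, 20000
x = [rng.randint(0, 1) for _ in range(d)]
y = [rng.randint(0, 1) for _ in range(d)]

# Average the embedded l_1 distance over many random hash draws;
# it should concentrate near the true Hamming distance D(x, y).
total = 0.0
for _ in range(trials):
    mx, my = bit_sampling_map([x, y], d_prime, d, rng)
    total += sum(abs(a - b) for a, b in zip(mx, my))
est = total / trials

print(hamming_distance(x, y), round(est, 2))
```

Averaged over many hash draws the estimate matches D(x, y); any single draw, as the notes point out, can be far off.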
The question then becomes: how large should d' be so that, with large probability, all Z_{ij} for i, j in {1, 2, ..., n} are close to their expected value? It is easy to check that d' has to be as large as d. For instance, with d' < d, the smallest nonzero distance between two hashed vectors is d/d', but it could be that these vectors were originally only distance 1 apart.

There is a moral to the above story: it is not sufficient merely to construct a hash function that preserves distance in expectation. It is also necessary to ensure the variance of this estimator is much smaller than the mean. Otherwise, the trick of taking many independent copies of this hash function will end up requiring more dimensions than the initial space!

However, there is a subtle point with the above hash function. It ensures that

    \Pr[h_k(x_i) \ne h_k(x_j)] = \frac{D(x_i, x_j)}{d}

Though labeling the buckets as 0 and 1 and using the resulting distance introduces too much variance, surprisingly, we can use the fact that the probability of mapping to the same bucket depends on distance in order to beat brute force for similarity search. This will be the topic of the next lecture, on LSH.

Digression: Central Limit Theorem

Before proceeding further, we will present a rough statement of the Central Limit Theorem. Let X_1, X_2, ..., X_{d'} be independent random variables, each with mean \mu and standard deviation \tau. Then, under mild restrictions on higher moments of these variables, in the limit of large d', the average Z = \frac{1}{d'} \sum_{k=1}^{d'} X_k converges to N(\mu, \sigma^2), where \sigma = \tau / \sqrt{d'}, and N(\mu, \sigma^2) is the Normal distribution with mean \mu and standard deviation \sigma.

What is a Normal distribution? The distribution N(\mu, \sigma^2) has the density function

    f(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x - \mu)^2}{2\sigma^2}}

For a Normal distribution Y ~ N(\mu, \sigma^2), the deviation around the mean falls off exponentially, so that roughly speaking, assuming k >= 1,

    \Pr[|Y - \mu| \ge k\sigma] \approx \frac{2}{\sqrt{2\pi}\, k} e^{-k^2/2} \le e^{-k^2/2}

Therefore, the probability that the random variable deviates significantly from the mean is very small, provided we take deviations relative to the standard deviation.
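The tail bound above can be checked numerically. The short sketch below (an illustration of mine, not part of the notes) compares the exact two-sided Gaussian tail, which equals erfc(k / \sqrt{2}), against the crude bound e^{-k^2/2}:

```python
import math

def normal_tail(k):
    """Exact two-sided tail Pr[|Y - mu| >= k * sigma] for Y ~ N(mu, sigma^2)."""
    return math.erfc(k / math.sqrt(2.0))

for k in [1, 2, 3, 6]:
    exact = normal_tail(k)
    bound = math.exp(-k * k / 2.0)
    # the crude bound e^{-k^2/2} dominates the exact tail for k >= 1
    print(k, exact <= bound)
```

The exact tail is always below the stated bound for k >= 1, so the bound is safe, if loose.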
For instance, "six sigma" means six standard deviations from the mean, whose probability looks like 1/e^{18}, about 10^{-8}, a really small number!

Euclidean Spaces

The question that arises from the above discussion is: is there some other similarity measure for which we can construct an unbiased estimator with low variance? The answer, surprisingly, is yes for Euclidean space. The hash family is

    H = { h_r : h_r(x) = r \cdot x  and  \|r\|_2 = 1 }

In other words, choose a unit vector r at random from the unit sphere; the hash value is the length of the projection of x in the direction of r.
Random Vectors. In order to understand what the above procedure achieves, we should first understand how to generate a random unit vector. We will show that the following process obeys a distribution that looks the same in all directions: for each coordinate r_k, k = 1, 2, ..., d, generate r_k independently from a Normal distribution with mean 0 and variance 1, that is, from N(0, 1). This vector will not have unit norm, but the vector \hat{r} = r / \|r\|_2 has unit norm and points in the same direction as r.

Let us write the closed form for the density of r. Let f(r_1, r_2, ..., r_d) denote the density function. Since r_1 ~ N(0, 1), and independently r_2 ~ N(0, 1), and so on, the joint density is simply the product of the densities of r_1, r_2, ..., r_d. This means

    f(r_1, r_2, ..., r_d) = \prod_{k=1}^{d} \frac{1}{\sqrt{2\pi}} e^{-r_k^2/2} = \frac{1}{(2\pi)^{d/2}} e^{-\sum_{k} r_k^2 / 2} = \frac{1}{(2\pi)^{d/2}} e^{-\|r\|_2^2 / 2}

The density only depends on the length of r and not on its direction! This means that for a given length, all directions have the same density; in other words, this process generates a vector whose direction is uniformly random.

Properties of Normal Distributions. In applying the Central Limit Theorem, we used the fact that the Normal distribution's probability of deviation from the mean falls off exponentially: the probability of being k standard deviations away drops off roughly as e^{-k^2/2}. In understanding the above hashing scheme, we need a different property of Normal distributions:

Claim 1. If X ~ N(\mu, \sigma^2) and a >= 0 is a constant, then aX ~ N(a\mu, a^2\sigma^2). Furthermore, if Y ~ N(\mu_1, \sigma_1^2) and Z ~ N(\mu_2, \sigma_2^2) are independent random variables, then Y + Z ~ N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2).

The proof follows by writing out the respective density functions and checking. The key point is that taking linear combinations of independent Normal random variables yields a Normal random variable.

Unbiased Estimator. Why is the above fact relevant? Consider our hash function

    h_r(x) = r \cdot x = \sum_{k=1}^{d} r_k x_k

Since r_k ~ N(0, 1), the first part of the above claim implies r_k x_k ~ N(0, x_k^2).
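The normalize-a-Gaussian-vector process just described takes only a few lines; here is a minimal Python sketch (function names are my own):

```python
import math
import random

def random_unit_vector(d, rng):
    """Draw each coordinate from N(0, 1), then normalize; since the joint
    density depends only on the length, the direction is uniform."""
    r = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(c * c for c in r))
    return [c / norm for c in r]

rng = random.Random(1)
v = random_unit_vector(5, rng)
print(round(sum(c * c for c in v), 6))  # squared norm is 1 up to float error
```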
Since the r_k's in different dimensions are independent, the above claim also implies

    h_r(x) = \sum_{k=1}^{d} r_k x_k ~ N\left(0, \sum_{k=1}^{d} x_k^2\right) = N(0, \|x\|_2^2)

This shows that the hash value is a Normal random variable with zero mean and variance equal to the squared norm of x. For a random variable with mean zero, the expectation of the square equals the variance. (Show this!) This means:

    E[h_r(x)^2] = \|x\|_2^2

where the expectation is over the choice of r. Thus we have an unbiased estimator of the squared norm of x: generate a random vector r by choosing each coordinate from a N(0, 1) distribution independently; take the square of the length of the projection of x onto r.
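The unbiasedness claim can be illustrated by Monte Carlo. The sketch below (with an arbitrary example vector of my choosing, not from the notes) samples many Gaussian directions and checks that the average squared projection approaches \|x\|_2^2:

```python
import random

rng = random.Random(2)
x = [3.0, -1.0, 2.0]
true_sq = sum(c * c for c in x)  # ||x||_2^2 = 14

trials = 200000
acc = 0.0
for _ in range(trials):
    # h_r(x) = r . x with each coordinate r_k ~ N(0, 1); its square
    # is an unbiased estimate of the squared norm of x
    proj = sum(rng.gauss(0.0, 1.0) * c for c in x)
    acc += proj * proj

est = acc / trials
print(round(est, 2))  # close to 14
```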
The properties of Normal distributions imply that the above holds even for the difference of two vectors, so that

    E[(h_r(x) - h_r(y))^2] = E\left[\left(\sum_{k=1}^{d} r_k (x_k - y_k)\right)^2\right] = \|x - y\|_2^2

Therefore, if we are given n points D = {x_1, x_2, ..., x_n} and we project these points onto a random vector, the squared distance between any two points equals the expectation of the squared difference of their projections. So far so good. But we had something similar even for Hamming spaces; the problem was that the variance of the estimator there was too large for it to be useful. What about in this case?

Bounding the Variance. We perform the same trick as before: in order to reduce the variance of our estimator, we choose d' hash functions at random from H (each of these is a random vector generated independently of the others), and map x to the point \Pi(x) in d'-dimensional space:

    \Pi(x) = \frac{1}{\sqrt{d'}} (x \cdot r_1, x \cdot r_2, ..., x \cdot r_{d'})

Each dimension of \Pi(x) is distributed as N(0, \|x\|_2^2 / d'), where the variance is divided by d' because we scale down each dimension by a factor of \sqrt{d'}. Similarly, \Pi(x) - \Pi(y) has each of its coordinates distributed as N(0, \|x - y\|_2^2 / d'). By the same argument as before,

    E[\|\Pi(x) - \Pi(y)\|_2^2] = \sum_{s=1}^{d'} \frac{\|x - y\|_2^2}{d'} = \|x - y\|_2^2

All we need to show is that the random variable \|\Pi(x) - \Pi(y)\|_2^2, which is the sum of the squares of d' independent random variables each distributed as N(0, \|x - y\|_2^2 / d'), is tightly concentrated around its expectation for reasonable values of d'.

Sums of Squares of Normals. We will need to know what an Exponential distribution is. This distribution has a parameter \lambda, and has the density function

    f(x) = \lambda e^{-\lambda x}

This distribution has mean 1/\lambda and standard deviation 1/\lambda. Intuitively, think about tossing a coin with bias p until a heads is obtained, and let the random variable X denote the number of failed coin tosses before success. This is a Geometric distribution that satisfies Pr[X = k] = (1 - p)^k p.
Intuitively, the Exponential distribution is the continuous version of the Geometric distribution, where you make the success probability smaller and smaller, and squish time so that there are a large number of coin tosses in a unit time interval.

What is the connection between Exponential and Normal distributions? Here is a math fact whose proof is tedious algebra:

Claim 2. Suppose X ~ N(0, \sigma^2) and Y ~ N(0, \sigma^2) are independent random variables. Then X^2 + Y^2 ~ Exponential\left(\frac{1}{2\sigma^2}\right).
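Claim 2 can be illustrated empirically. The sketch below (parameters chosen arbitrarily by me) checks that the sample mean of X^2 + Y^2 is close to 2\sigma^2, the mean of the claimed Exponential distribution:

```python
import random

rng = random.Random(3)
sigma = 2.0
trials = 100000

acc = 0.0
for _ in range(trials):
    x = rng.gauss(0.0, sigma)
    y = rng.gauss(0.0, sigma)
    acc += x * x + y * y

mean = acc / trials
print(round(mean, 2))  # close to 2 * sigma**2 = 8, the Exponential mean
```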
Consider some two dimensions of \Pi(x) - \Pi(y), and let the values there be X and Y. Then we know that X, Y ~ N(0, \|x - y\|_2^2 / d'). This means

    X^2 + Y^2 ~ Exponential\left(\frac{d'}{2 \|x - y\|_2^2}\right)

This distribution has mean \frac{2 \|x - y\|_2^2}{d'}, and the same value as its standard deviation. This is the crux: the standard deviation of our estimator of squared length is comparable to the mean. This was not true for Hamming spaces, where it could have been d times larger!

The quantity \Pi(x) - \Pi(y) has squared length equal to the sum of d'/2 independent copies of this random variable. This means its mean is d'/2 times larger, and its standard deviation is \sqrt{d'/2} times larger. Furthermore, by the CLT, the distribution becomes approximately Normal. This means the squared length of \Pi(x) - \Pi(y) is approximately Normal with mean \|x - y\|_2^2 and standard deviation \sqrt{2/d'} \cdot \|x - y\|_2^2. Note that the standard deviation is now much smaller than the mean!

Suppose we choose d' >= 6 log n, where n is the number of points in our database. Then the probability that we deviate by more than k = \sqrt{d'/2} times the standard deviation is at most roughly

    e^{-k^2} = e^{-d'/2} \le \frac{1}{n^3}

But k times the standard deviation is \sqrt{d'/2} \cdot \sqrt{2/d'} \cdot \|x - y\|_2^2 = \|x - y\|_2^2, which is exactly the mean. This means that with very high probability, the squared length is at most twice the original squared length. By a union bound over all the n^2 pairs of points, all the distances are preserved to within this factor with probability at least 1 - n^2/n^3 = 1 - 1/n. The argument is somewhat rough, but it is reasonably complete, and it can be extended to show that by choosing slightly more hash functions, the distances are in fact very close to their true values with very high probability. We finally have the following theorem, which shows that we can reduce the dimension to roughly log n while preserving distances among n input points.

Theorem 1 (Johnson-Lindenstrauss). Given any n points in Euclidean space and any \epsilon > 0, there is a mapping to d' = O\left(\frac{\log n}{\epsilon^2}\right) dimensions so that, with probability at least 1 - 1/n, the distance between any pair of points is preserved to within a factor of (1 \pm \epsilon).
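The whole scheme fits in a few lines. The Python sketch below (an illustration with arbitrary parameters of mine, not the theorem's exact constants) projects n points onto d' random Gaussian directions and checks that all pairwise squared distances are preserved up to a small factor:

```python
import math
import random

def jl_project(points, d_prime, rng):
    """Project each point onto d' random Gaussian directions, scaled
    by 1/sqrt(d'), so squared distances are preserved in expectation."""
    d = len(points[0])
    R = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(d_prime)]
    scale = 1.0 / math.sqrt(d_prime)
    return [[scale * sum(r[k] * p[k] for k in range(d)) for r in R]
            for p in points]

def sq_dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

rng = random.Random(4)
n, d, d_prime = 20, 500, 200  # arbitrary illustration sizes
points = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
proj = jl_project(points, d_prime, rng)

# ratio of projected to original squared distance, over all pairs
ratios = [sq_dist(proj[i], proj[j]) / sq_dist(points[i], points[j])
          for i in range(n) for j in range(i + 1, n)]
print(round(min(ratios), 2), round(max(ratios), 2))  # both close to 1
```

Note that the projection is oblivious: the matrix R is drawn without looking at the points, exactly as the theorem's quantification requires.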
Note that the quantification in the above theorem is crucial: if we randomly project onto O\left(\frac{\log n}{\epsilon^2}\right) dimensions, then with very high probability, all pairwise distances between the n input points are preserved. This is what makes the result algorithmically useful. Furthermore, the neat feature is that the scheme is oblivious to the input: for any set S of n input points, the random directions are chosen from the same distribution. This has applications in settings where data is scanned one input at a time. We will see another such method, the Fourier Transform, a bit later in the course.

Of course, it is conceivable that a dimension reduction scheme that depends on the input points needs fewer dimensions to preserve salient properties of the data. Maybe all the data lies on a 2-dimensional subspace to start with. In such a case, an oblivious scheme such as the above still requires log n dimensions, but a scheme that depends on the data could identify the subspace and project onto it. We will consider this when we discuss algebraic methods such as the PCA.
More informationIntroduction to Markov Processes
Introuction to Markov Processes Connexions moule m44014 Zzis law Gustav) Meglicki, Jr Office of the VP for Information Technology Iniana University RCS: Section-2.tex,v 1.24 2012/12/21 18:03:08 gustav
More informationThe thermal wind 1. v g
The thermal win The thermal win Introuction The geostrohic win is etermine by the graient of the isobars (on a horizontal surface) or isohyses (on a ressure surface). On a ressure surface the graient of
More informationMath 1B, lecture 8: Integration by parts
Math B, lecture 8: Integration by parts Nathan Pflueger 23 September 2 Introuction Integration by parts, similarly to integration by substitution, reverses a well-known technique of ifferentiation an explores
More informationBeating CountSketch for Heavy Hitters in Insertion Streams
Beating CountSketch for eavy itters in Insertion Streams ABSTRACT Vlaimir Braverman Johns opkins University Baltimore, MD, USA vova@cs.jhu.eu Nikita Ivkin Johns opkins University Baltimore, MD, USA nivkin1@jhu.eu
More informationCS9840 Learning and Computer Vision Prof. Olga Veksler. Lecture 2. Some Concepts from Computer Vision Curse of Dimensionality PCA
CS9840 Learning an Computer Vision Prof. Olga Veksler Lecture Some Concepts from Computer Vision Curse of Dimensionality PCA Some Slies are from Cornelia, Fermüller, Mubarak Shah, Gary Braski, Sebastian
More informationPhysics 505 Electricity and Magnetism Fall 2003 Prof. G. Raithel. Problem Set 3. 2 (x x ) 2 + (y y ) 2 + (z + z ) 2
Physics 505 Electricity an Magnetism Fall 003 Prof. G. Raithel Problem Set 3 Problem.7 5 Points a): Green s function: Using cartesian coorinates x = (x, y, z), it is G(x, x ) = 1 (x x ) + (y y ) + (z z
More informationRobust Control of Robot Manipulators Using Difference Equations as Universal Approximator
Proceeings of the 5 th International Conference of Control, Dynamic Systems, an Robotics (CDSR'18) Niagara Falls, Canaa June 7 9, 218 Paer No. 139 DOI: 1.11159/csr18.139 Robust Control of Robot Maniulators
More informationNovel Algorithm for Sparse Solutions to Linear Inverse. Problems with Multiple Measurements
Novel Algorithm for Sarse Solutions to Linear Inverse Problems with Multile Measurements Lianlin Li, Fang Li Institute of Electronics, Chinese Acaemy of Sciences, Beijing, China Lianlinli1980@gmail.com
More informationIMPROVED BOUNDS IN THE SCALED ENFLO TYPE INEQUALITY FOR BANACH SPACES
IMPROVED BOUNDS IN THE SCALED ENFLO TYPE INEQUALITY FOR BANACH SPACES OHAD GILADI AND ASSAF NAOR Abstract. It is shown that if (, ) is a Banach sace with Rademacher tye 1 then for every n N there exists
More informationMod p 3 analogues of theorems of Gauss and Jacobi on binomial coefficients
ACTA ARITHMETICA 2.2 (200 Mo 3 analogues of theorems of Gauss an Jacobi on binomial coefficients by John B. Cosgrave (Dublin an Karl Dilcher (Halifax. Introuction. One of the most remarkable congruences
More informationRobust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k
A Proof of Lemma 2 B Proof of Lemma 3 Proof: Since the support of LL istributions is R, two such istributions are equivalent absolutely continuous with respect to each other an the ivergence is well-efine
More informationA secure approach for embedding message text on an elliptic curve defined over prime fields, and building 'EC-RSA-ELGamal' Cryptographic System
International Journal of Comuter Science an Information Security (IJCSIS), Vol. 5, No. 6, June 7 A secure aroach for embeing message tet on an ellitic curve efine over rime fiels, an builing 'EC-RSA-ELGamal'
More informationOn combinatorial approaches to compressed sensing
On combinatorial approaches to compresse sensing Abolreza Abolhosseini Moghaam an Hayer Raha Department of Electrical an Computer Engineering, Michigan State University, East Lansing, MI, U.S. Emails:{abolhos,raha}@msu.eu
More informationLecture 12: November 6, 2013
Information an Coing Theory Autumn 204 Lecturer: Mahur Tulsiani Lecture 2: November 6, 203 Scribe: Davi Kim Recall: We were looking at coes of the form C : F k q F n q, where q is prime, k is the message
More informationExperiment 2, Physics 2BL
Experiment 2, Physics 2BL Deuction of Mass Distributions. Last Upate: 2009-05-03 Preparation Before this experiment, we recommen you review or familiarize yourself with the following: Chapters 4-6 in Taylor
More informationMulti-View Clustering via Canonical Correlation Analysis
Kamalika Chauhuri ITA, UC San Diego, 9500 Gilman Drive, La Jolla, CA Sham M. Kakae Karen Livescu Karthik Sriharan Toyota Technological Institute at Chicago, 6045 S. Kenwoo Ave., Chicago, IL kamalika@soe.ucs.eu
More informationApproximate Constraint Satisfaction Requires Large LP Relaxations
Approximate Constraint Satisfaction Requires Large LP Relaxations oah Fleming April 19, 2018 Linear programming is a very powerful tool for attacking optimization problems. Techniques such as the ellipsoi
More informationConvergence Analysis of Terminal ILC in the z Domain
25 American Control Conference June 8-, 25 Portlan, OR, USA WeA63 Convergence Analysis of erminal LC in the Domain Guy Gauthier, an Benoit Boulet, Member, EEE Abstract his aer shows how we can aly -transform
More informationLEIBNIZ SEMINORMS IN PROBABILITY SPACES
LEIBNIZ SEMINORMS IN PROBABILITY SPACES ÁDÁM BESENYEI AND ZOLTÁN LÉKA Abstract. In this aer we study the (strong) Leibniz roerty of centered moments of bounded random variables. We shall answer a question
More informationBy completing this chapter, the reader will be able to:
hater 4. Mechanics of Particles Particle mechanics governs many rinciles of article measurement instrumentation an air cleaning technologies. Therefore, this chater rovies the funamentals of article mechanics.
More information1 Probability Spaces and Random Variables
1 Probability Saces and Random Variables 1.1 Probability saces Ω: samle sace consisting of elementary events (or samle oints). F : the set of events P: robability 1.2 Kolmogorov s axioms Definition 1.2.1
More informationExtension of Minimax to Infinite Matrices
Extension of Minimax to Infinite Matrices Chris Calabro June 21, 2004 Abstract Von Neumann s minimax theorem is tyically alied to a finite ayoff matrix A R m n. Here we show that (i) if m, n are both inite,
More information