MTH 995-3: Intro to CS and Big Data, Spring 2014
Instructor: Mark Iwen
Lecture 8, January 30, 2014
Scribe: Kishavan Bhola

1 Overview

In this lecture, we begin a probabilistic method for approximating the Nearest Neighbor problem by way of a locality sensitive hash function. We compute the two probabilities for the relevant hash function.

2 Problem

Given $r \in \mathbb{R}^+$, $c > 1$, and $X := \{x_1, \dots, x_P\} \subset \mathbb{R}^D$, compute $f : [P] \to [P] \cup \{-1\}$ such that

1. $d(x_j, x_{f(j)}) \le c \cdot r$ for all $j \in [P]$ such that there exists $i \ne j \in [P]$ with $d(x_j, x_i) \le r$; and
2. $f(j) = -1$ if there does not exist $i \ne j \in [P]$ with $d(x_j, x_i) \le c \cdot r$.

Remark 1. The above can easily be generalized to arbitrary metric spaces, where $d$ is the metric, but in the following we will focus on $\mathbb{R}^D$ with $d$ being the Euclidean $2$-norm.

Remark 2. This is known as the $(c, r)$-Nearest Neighbor Problem. Note that (1) and (2) do not uniquely determine a function, and moreover, it is possible that $x_{f(j)}$ is not the nearest neighbor to $x_j$ (see Figure 1). Hence, such an $f$ is an approximation to the standard Nearest Neighbor problem.

3 Naive Solution

1. Compute every pairwise distance $\|x_j - x_i\|_2$, $i \ne j$. This takes $O(P^2 D)$ time.
2. Output the index of the closest point to each $x_j$ as $f(j)$.

Clearly such an $f$ gives an exact solution to the Nearest Neighbor problem, and thus also satisfies (1) and (2) in the problem statement. We wish to approximate this naive solution with better runtime than $O(P^2 D)$; a brute-force sketch follows below.
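For concreteness, here is a minimal sketch of the naive solution. It is an illustration rather than part of the original notes, and it assumes NumPy with the points stored as the rows of an array.

```python
import numpy as np

def naive_nearest_neighbor(X):
    """Exact nearest neighbors by brute force in O(P^2 D) time.

    X: (P, D) array whose rows are the points x_1, ..., x_P.
    Returns f, where f[j] is the index of the closest point to x_j.
    """
    X = np.asarray(X, dtype=float)
    # All pairwise squared Euclidean distances, shape (P, P).
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(sq_dists, np.inf)  # exclude d(x_j, x_j) = 0
    return np.argmin(sq_dists, axis=1)
```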
Figure 1: Ambiguity of $f$

4 Idea

Project $x_1, \dots, x_P$ onto a random vector, that is, onto a one-dimensional subspace, and then see how far their projections are from one another (see Figure 2; a short sketch follows the runtime analysis below).

Runtime of Idea:

1. Projecting all $P$ points takes $O(PD)$ time (just inner products).
2. Finding close projected points is equivalent to sorting a list and thus has time complexity $O(P \log P)$ (using, for example, merge sort).

Therefore, the total time complexity is $O(P(D + \log P))$. This is an approximation even to our approximated problem, and so we would like error guarantees.
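The following sketch (an illustration, not from the original notes; NumPy assumed) implements the idea: project onto one random Gaussian direction and sort, so that candidate near neighbors end up adjacent in the sorted order.

```python
import numpy as np

def project_and_sort(X, seed=0):
    """Project the points onto a random direction and sort: O(P(D + log P)).

    Returns the projections and the sorted order of the point indices;
    points adjacent in this order are candidate near neighbors.
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    g = rng.standard_normal(X.shape[1])  # random direction
    proj = X @ g                         # O(PD): one inner product per point
    order = np.argsort(proj)             # O(P log P)
    return proj, order

# Usage: the candidates for X[order[k]] are X[order[k - 1]] and X[order[k + 1]].
```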
Figure 2: Projection onto 1-Dimensional Subspace

5 Solution

Definition. Call a random function $h : \mathbb{R}^D \to \mathbb{Z}$ a locality sensitive hash function if there are $p_1, p_2 \in (0, 1)$ with $p_1 > p_2$ such that the following holds for arbitrary $x, y \in \mathbb{R}^D$:

(i) $\|x - y\|_2 < r$ implies $h(x) = h(y)$ with probability at least $p_1$;
(ii) if $\|x - y\|_2 > c \cdot r$, then $h(x) = h(y)$ with probability at most $p_2$.

Remark 3. A locality sensitive hash function $h$ sends points that are close to the same integer and sends far points to different integers (this follows from (i) and (ii) and the fact that $p_1 > p_2$).

Now consider the following random function. Pick $w \in \mathbb{R}^+$. Then let $G \sim \mathcal{N}(0, I_{D \times D})$ and $U \sim \mathcal{U}([0, w])$. Finally, define $h : \mathbb{R}^D \to \mathbb{Z}$ as

$$h(x) = \left\lfloor \frac{\langle g, x \rangle + u}{w} \right\rfloor \qquad (*)$$

where $U = u$ and $G = g$ are instances of the random variable and vector defined above.

Remark 4. The above notation means that $g$ is a random vector with independent, identically distributed, mean $0$, variance $1$ Gaussian entries. Similarly, $u$ is a uniform random variable from the closed interval $[0, w]$.
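The random function $(*)$ translates directly into code. The sketch below (an illustration with NumPy assumed, not part of the original notes) draws one instance of $g$ and $u$ and returns the corresponding hash.

```python
import numpy as np

def make_lsh(D, w, seed=0):
    """Draw one hash h(x) = floor((<g, x> + u) / w), as in (*).

    g has i.i.d. N(0, 1) entries and u is uniform on [0, w].
    """
    rng = np.random.default_rng(seed)
    g = rng.standard_normal(D)   # instance of G ~ N(0, I_{D x D})
    u = rng.uniform(0.0, w)      # instance of U ~ U([0, w])
    return lambda x: int(np.floor((np.dot(g, x) + u) / w))
```

The hash is returned as a closure so that the same pair $(g, u)$ is reused across all points, matching the definition of a single random function $h$.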
Theorem 1. The function $h$ defined by $(*)$ is a locality sensitive hash function.

Proof: Let $x, y \in \mathbb{R}^D$ be arbitrary. Define the following two events $A$ and $B$:

$A$: $h(x) = h(y)$,
$B$: $|\langle g, x - y \rangle| < w$.

Note, by the definition of $h$ in $(*)$, if $A$ occurs then $B$ occurs; that is, $\mathbb{P}[B \mid A] = 1$. Therefore, Bayes' Law simplifies:

$$\mathbb{P}[A] \, \mathbb{P}[B \mid A] = \mathbb{P}[B] \, \mathbb{P}[A \mid B]$$
$$\mathbb{P}[A] = \mathbb{P}[B] \, \mathbb{P}[A \mid B]. \qquad (1)$$

Then, by using the variable $z := |\langle g, x - y \rangle|$ and considering all possible values of $z$ for which event $B$ is true, we may transform the right-hand side of (1) into an integral. Writing this all out, we get

$$\mathbb{P}[h(x) = h(y)] = \int_0^w \mathbb{P}\big[h(x) = h(y) \,\big|\, z = |\langle g, x - y \rangle|\big] \, \mathbb{P}\big[z = |\langle g, x - y \rangle|\big] \, dz. \qquad (2)$$

We now wish to simplify $\mathbb{P}[h(x) = h(y) \mid z = |\langle g, x - y \rangle|]$. One can show that for $z \le w$ we have

$$\mathbb{P}\big[h(x) = h(y) \,\big|\, z = |\langle g, x - y \rangle|\big] = 1 - \frac{z}{w}.$$

This follows from considering the different values of $u$ in $(*)$ that will either (i) shift the integer parts of $\langle g, x \rangle / w$ and $\langle g, y \rangle / w$ to be the same when they are different, or (ii) shift them so that they stay the same when they are already the same. So (2) becomes

$$\int_0^w \Big(1 - \frac{z}{w}\Big) \mathbb{P}\big[|\langle g, x - y \rangle| = z\big] \, dz = \int_0^w \mathbb{P}\big[|\langle g, x - y \rangle| = z\big] \, dz - \frac{1}{w} \int_0^w z \, \mathbb{P}\big[|\langle g, x - y \rangle| = z\big] \, dz$$
$$= \frac{2}{\|x - y\|_2 \sqrt{2\pi}} \int_0^w \exp\Big(-\frac{z^2}{2\|x - y\|_2^2}\Big) \, dz - \frac{2}{w \, \|x - y\|_2 \sqrt{2\pi}} \int_0^w z \exp\Big(-\frac{z^2}{2\|x - y\|_2^2}\Big) \, dz,$$

since $\langle g, x - y \rangle \sim \mathcal{N}(0, \|x - y\|_2^2)$. Now writing $n := \|x - y\|_2$, using the change of variables $z \mapsto n z$, and integrating the second integral, we get

$$\mathbb{P}[h(x) = h(y)] = \frac{2}{\sqrt{2\pi}} \int_0^{w/n} \exp\Big(-\frac{z^2}{2}\Big) \, dz + \frac{\sqrt{2}\, n}{\sqrt{\pi}\, w} \Big( \exp\Big(-\frac{w^2}{2n^2}\Big) - 1 \Big). \qquad (3)$$

Define $p_w(n) := \mathbb{P}[h(x) = h(y)]$; in terms of the error function, $p_w(n) = \operatorname{erf}\big(\frac{w}{\sqrt{2}\, n}\big) + \frac{\sqrt{2}\, n}{\sqrt{\pi}\, w}\big(e^{-w^2/(2n^2)} - 1\big)$. Differentiating (3) shows that

$$p_w'(n) = \frac{\sqrt{2}}{\sqrt{\pi}\, w} \Big( e^{-w^2/(2n^2)} - 1 \Big),$$

so that $p_w'(n) < 0$ for all $n > 0$ (since $e^{-w^2/(2n^2)} < 1$); that is, $p_w$ is strictly decreasing. Now, let's consider the two cases:

(i) $n \le r$: since $p_w$ is decreasing, this implies

$$p_w(n) \ge \operatorname{erf}\Big(\frac{w}{\sqrt{2}\, r}\Big) + \frac{\sqrt{2}\, r}{\sqrt{\pi}\, w}\Big(e^{-w^2/(2r^2)} - 1\Big) =: p_1,$$

where we have defined $p_1$ above.

(ii) $n \ge cr$: this and $p_w'(n) < 0$ imply

$$p_w(n) \le \operatorname{erf}\Big(\frac{w}{\sqrt{2}\, rc}\Big) + \frac{\sqrt{2}\, rc}{\sqrt{\pi}\, w}\Big(e^{-w^2/(2(rc)^2)} - 1\Big) =: p_2,$$

where we have defined $p_2$ above.

Finally, we note that $p_1$ and $p_2$ satisfy the required properties in the definition of a locality sensitive hash function; in particular, $p_1 > p_2$ because $p_w$ is strictly decreasing and $r < cr$. Therefore, $h$ is a locality sensitive hash function for Euclidean distance. $\Box$
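One can sanity-check the closed form (3) numerically. The sketch below (an illustration, not part of the original notes; NumPy assumed) compares it with a Monte Carlo estimate; only the scalar $\langle g, x - y \rangle \sim \mathcal{N}(0, n^2)$ enters the collision event, so a one-dimensional simulation suffices.

```python
import numpy as np
from math import erf, exp, sqrt, pi

def p_formula(n, w):
    """Collision probability p_w(n) as given by equation (3)."""
    return erf(w / (sqrt(2) * n)) + (sqrt(2) * n / (sqrt(pi) * w)) * (
        exp(-w**2 / (2 * n**2)) - 1
    )

def p_monte_carlo(n, w, trials=200_000, seed=0):
    """Empirical P[h(x) = h(y)] for ||x - y||_2 = n."""
    rng = np.random.default_rng(seed)
    z = n * rng.standard_normal(trials)   # <g, x - y> ~ N(0, n^2)
    u = rng.uniform(0.0, w, trials)       # the random shift in (*)
    # Collision iff u and z + u land in the same width-w cell; by the
    # shift-invariance of the grid we may place <g, x> at 0.
    return np.mean(np.floor(u / w) == np.floor((z + u) / w))

# For n = 1, w = 4 the two values should agree to a few decimal places.
print(p_formula(1.0, 4.0), p_monte_carlo(1.0, 4.0))
```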
References

[1] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality-Sensitive Hashing Scheme Based on p-Stable Distributions. SCG '04: Proceedings of the Twentieth Annual Symposium on Computational Geometry, pages 253-262, 2004.

[2] P. Indyk and R. Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. STOC '98: Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 604-613, 1998.
More information