Improved Fast Gauss Transform. (based on work by Changjiang Yang and Vikas Raykar)


1 Improved Fast Gauss Transform (based on work by Changjiang Yang and Vikas Raykar)

2 Fast Gauss Transform (FGT) Originally proposed by Greengard and Strain (1991) to efficiently evaluate the weighted sum of Gaussians: $G(y_j) = \sum_{i=1}^{N} q_i e^{-\|y_j - x_i\|^2/h^2}, \quad j = 1, \ldots, M.$ The FGT is an important variant of the Fast Multipole Method (Greengard & Rokhlin 1987).

3 Fast Gauss Transform (cont'd) The weighted sum of Gaussians $G(y_j) = \sum_{i=1}^{N} q_i e^{-\|y_j - x_i\|^2/h^2}$, $j = 1, \ldots, M$ (sources $x_i$, targets $y_j$), is equivalent to the matrix-vector product
$$\begin{pmatrix} G(y_1) \\ G(y_2) \\ \vdots \\ G(y_M) \end{pmatrix} = \begin{pmatrix} e^{-\|x_1 - y_1\|^2/h^2} & e^{-\|x_2 - y_1\|^2/h^2} & \cdots & e^{-\|x_N - y_1\|^2/h^2} \\ e^{-\|x_1 - y_2\|^2/h^2} & e^{-\|x_2 - y_2\|^2/h^2} & \cdots & e^{-\|x_N - y_2\|^2/h^2} \\ \vdots & \vdots & \ddots & \vdots \\ e^{-\|x_1 - y_M\|^2/h^2} & e^{-\|x_2 - y_M\|^2/h^2} & \cdots & e^{-\|x_N - y_M\|^2/h^2} \end{pmatrix} \begin{pmatrix} q_1 \\ q_2 \\ \vdots \\ q_N \end{pmatrix}.$$
Direct evaluation of the sum of Gaussians requires $O(N^2)$ operations. The fast Gauss transform reduces the cost to $O(N \log N)$ operations.
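
A minimal sketch of this direct O(NM) evaluation that the FGT accelerates, assuming NumPy arrays X (N x d sources), Y (M x d targets), weights q, and bandwidth h:

import numpy as np

def direct_gauss_transform(X, Y, q, h):
    """G(y_j) = sum_i q_i * exp(-||y_j - x_i||^2 / h^2) for every target y_j."""
    # Pairwise squared distances between targets and sources, shape (M, N).
    d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / h**2) @ q

# Example: 1000 sources and targets in 3D.
rng = np.random.default_rng(0)
X, Y = rng.random((1000, 3)), rng.random((1000, 3))
q = rng.random(1000)
G = direct_gauss_transform(X, Y, q, h=0.5)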

4 Fast Gauss Transform (cont d) Three key components of the FGT: The factorization or separation of variables Space subdivision scheme Error bound analysis

5 Factorization of Gaussian Factorization is one of the key parts of the FGT. Factorization breaks the entanglement between the sources and targets: the contributions of the sources are accumulated at the centers, then distributed to each target. [Figure: direct source-target interactions vs. interactions routed through a center]

6 Factorization in FGT: Hermite Expansion The Gaussian kernel is factorized into Hermite functions: $e^{-\|y - x_i\|^2/h^2} = \sum_{n=0}^{p-1} \frac{1}{n!} \left(\frac{x_i - x_*}{h}\right)^n h_n\!\left(\frac{y - x_*}{h}\right) + \epsilon(p),$ where the Hermite function $h_n(x)$ is defined by $h_n(x) = (-1)^n \frac{d^n}{dx^n} e^{-x^2}.$ Exchanging the order of the summations collects the source-dependent factors into the moments $A_n = \sum_i \frac{q_i}{n!} \left(\frac{x_i - x_*}{h}\right)^n$.
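
A 1D sketch of this truncated Hermite factorization, assuming the standard three-term recurrence h_{n+1}(t) = 2t h_n(t) - 2n h_{n-1}(t) for the Hermite functions defined above:

import math
import numpy as np

def hermite_functions(t, p):
    """[h_0(t), ..., h_{p-1}(t)] via h_{n+1} = 2t h_n - 2n h_{n-1}."""
    h = [np.exp(-t**2), 2 * t * np.exp(-t**2)]
    for n in range(1, p - 1):
        h.append(2 * t * h[n] - 2 * n * h[n - 1])
    return h[:p]

def hermite_expansion(x, y, c, h, p):
    """Truncated p-term Hermite expansion of exp(-(y-x)^2/h^2) about center c."""
    s, t = (x - c) / h, (y - c) / h
    hf = hermite_functions(t, p)
    return sum(s**n / math.factorial(n) * hf[n] for n in range(p))

x, y, c, h = 0.3, 0.8, 0.25, 1.0
print(np.exp(-((y - x) / h) ** 2))          # exact kernel value
print(hermite_expansion(x, y, c, h, p=10))  # agrees to many digits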

7 The FGT Algorithm Step 0: Subdivide the space into uniform boxes. Step 1: Assign sources and targets to boxes. Step 2: For each pair of source box B and target box C, compute the interactions using one of the four possible ways: N_B ≤ N_F, M_C ≤ M_L: direct evaluation; N_B ≤ N_F, M_C > M_L: Taylor expansion; N_B > N_F, M_C ≤ M_L: Hermite expansion; N_B > N_F, M_C > M_L: Hermite expansion --> Taylor expansion. Step 3: Loop through the boxes, evaluating the Taylor series for boxes with more than M_L targets. (N_B: # of sources in box B; M_C: # of targets in box C; N_F: cutoff for box B; M_L: cutoff for box C.)
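
A schematic sketch of the four-way decision in Step 2, assuming the per-box counts N_B, M_C and cutoffs N_F, M_L defined on the slide; it only reports which of the four evaluation strategies a box pair would use:

def interaction_type(N_B, M_C, N_F, M_L):
    if N_B <= N_F and M_C <= M_L:
        return "direct evaluation"              # few sources, few targets
    elif N_B <= N_F:
        return "Taylor expansion"               # few sources, many targets
    elif M_C <= M_L:
        return "Hermite expansion"              # many sources, few targets
    else:
        return "Hermite -> Taylor translation"  # many sources, many targets

print(interaction_type(N_B=500, M_C=3, N_F=10, M_L=10))  # "Hermite expansion"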

8 Too Many Terms in Higher Dimensions The higher-dimensional Hermite expansion is the Kronecker product of d univariate Hermite expansions. The total number of terms is $O(p^d)$, where p is the number of truncation terms. The number of operations in one factorization is $O(p^d)$. [Figure: the $p^d$ tensor grid of products $h_{n_1} \cdots h_{n_d}$ for d = 1, 2, 3, and beyond]

9 Too Many Boxes in Higher Dimensions The FGT subdivides the space into uniform boxes and assigns the source points and target points to boxes. For each box the FGT maintains a neighbor list. The number of boxes increases exponentially with the dimensionality. [Figure: uniform box subdivisions for d = 1, 2, 3, and beyond]

10 FGT in Higher Dimensions The FGT was originally designed to solve problems in mathematical physics (heat equation, vortex methods, etc.), where the dimension is up to 3. The higher-dimensional Hermite expansion is the product of univariate Hermite expansions along each dimension; the total number of terms is $O(p^d)$. The space subdivision scheme in the original FGT is uniform boxes; the number of boxes grows exponentially with dimension, and most boxes are empty. The exponential dependence on the dimension makes the FGT extremely inefficient in higher dimensions.

11 Improved fast Gauss transform (IFGT)

12 Improved Fast Gauss Transform Multivariate Taylor expansion Multivariate Horner's rule Space subdivision based on the k-center algorithm Error bound analysis

13 New factorization

14 Multivariate Taylor Expansions The Taylor expansion of the Gaussian function: $e^{-\|y_j - x_i\|^2/h^2} = e^{-\|y_j - x_*\|^2/h^2}\, e^{-\|x_i - x_*\|^2/h^2}\, e^{2(y_j - x_*)\cdot(x_i - x_*)/h^2}.$ The first two factors can be computed independently. The Taylor expansion of the last factor is $e^{2(y_j - x_*)\cdot(x_i - x_*)/h^2} = \sum_{\alpha \ge 0} \frac{2^{|\alpha|}}{\alpha!} \left(\frac{x_i - x_*}{h}\right)^\alpha \left(\frac{y_j - x_*}{h}\right)^\alpha,$ where $\alpha = (\alpha_1, \ldots, \alpha_d)$ is a multi-index. The multivariate Taylor expansion about the center $x_*$ is then $G(y_j) = \sum_\alpha C_\alpha\, e^{-\|y_j - x_*\|^2/h^2} \left(\frac{y_j - x_*}{h}\right)^\alpha,$ where the coefficients $C_\alpha$ are given by $C_\alpha = \frac{2^{|\alpha|}}{\alpha!} \sum_{i=1}^{N} q_i\, e^{-\|x_i - x_*\|^2/h^2} \left(\frac{x_i - x_*}{h}\right)^\alpha.$
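
A sketch that numerically checks this factorization, assuming multi-indices are enumerated up to total degree p-1 and using the coefficient formula above:

import itertools, math
import numpy as np

def multi_indices(d, p):
    """All multi-indices alpha in N^d with |alpha| <= p - 1."""
    return [a for a in itertools.product(range(p), repeat=d) if sum(a) <= p - 1]

def ifgt_at_center(X, q, c, h, p):
    """Coefficients C_alpha collected at center c from sources X, weights q."""
    S = (X - c) / h                        # scaled source offsets
    w = q * np.exp(-np.sum(S**2, axis=1))  # q_i * exp(-||x_i - c||^2/h^2)
    coeffs = {}
    for a in multi_indices(X.shape[1], p):
        fact = math.prod(math.factorial(ai) for ai in a)
        coeffs[a] = 2**sum(a) / fact * np.sum(w * np.prod(S**np.array(a), axis=1))
    return coeffs

def ifgt_evaluate(y, c, h, coeffs):
    """G(y) ~= exp(-||y-c||^2/h^2) * sum_alpha C_alpha ((y-c)/h)^alpha."""
    t = (y - c) / h
    series = sum(C * np.prod(t**np.array(a)) for a, C in coeffs.items())
    return np.exp(-np.sum(t**2)) * series

rng = np.random.default_rng(1)
X, q = rng.random((50, 2)), rng.random(50)
c, h, y = X.mean(axis=0), 1.0, np.array([0.4, 0.6])
exact = np.sum(q * np.exp(-np.sum((y - X)**2, axis=1) / h**2))
print(exact, ifgt_evaluate(y, c, h, ifgt_at_center(X, q, c, h, p=8)))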

15 Reduction from Taylor Expansions The idea of Taylor expansion of the factored Gaussian kernel was first introduced in the course CMSC878R. The number of terms in the multivariate Hermite expansion is $O(p^d)$. The number of terms in the multivariate Taylor expansion is $\binom{p+d-1}{d}$, asymptotically $O(d^p)$, a big reduction for large $d$. [Figure: number of terms of the Hermite vs. Taylor expansions, fixing p = 10 and varying the dimension d, and fixing d = 10 and varying the truncation order p]

16 Global nature of the new expansion

17 Improved Fast Gauss Transform Multivariate Taylor expansion Multivariate Horner's rule Space subdivision based on the k-center algorithm Error bound analysis

18 Monomial Orders Let $\alpha = (\alpha_1, \ldots, \alpha_n)$ and $\beta = (\beta_1, \ldots, \beta_n)$; three standard monomial orders are: Lexicographic order, or dictionary order: $\alpha \prec_{lex} \beta$ iff the leftmost nonzero entry in $\alpha - \beta$ is negative. Graded lexicographic order (Veronese map, a.k.a. polynomial embedding): $\alpha \prec_{grlex} \beta$ iff $\sum_{i=1}^{n} \alpha_i < \sum_{i=1}^{n} \beta_i$, or $\sum_{i=1}^{n} \alpha_i = \sum_{i=1}^{n} \beta_i$ and $\alpha \prec_{lex} \beta$. Graded reverse lexicographic order: $\alpha \prec_{grevlex} \beta$ iff $\sum_{i=1}^{n} \alpha_i < \sum_{i=1}^{n} \beta_i$, or $\sum_{i=1}^{n} \alpha_i = \sum_{i=1}^{n} \beta_i$ and the rightmost nonzero entry in $\alpha - \beta$ is positive. Example: let $f(x,y,z) = xy^5z^2 + x^2y^3z^3 + x^3$. W.r.t. lex, $f = x^3 + x^2y^3z^3 + xy^5z^2$; w.r.t. grlex, $f = x^2y^3z^3 + xy^5z^2 + x^3$; w.r.t. grevlex, $f = xy^5z^2 + x^2y^3z^3 + x^3$.
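
A small sketch of the graded lexicographic comparison as a sort key (the helper name is ours, not the slides'), reproducing the grlex ordering of the example f:

def grlex_key(alpha):
    """Sort key: compare total degree first, then lexicographically."""
    return (sum(alpha), alpha)

terms = [(1, 5, 2), (2, 3, 3), (3, 0, 0)]   # x y^5 z^2, x^2 y^3 z^3, x^3
print(sorted(terms, key=grlex_key, reverse=True))
# [(2, 3, 3), (1, 5, 2), (3, 0, 0)] -- matches x^2 y^3 z^3 + x y^5 z^2 + x^3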

19 Horner's Rule Horner's rule (W. G. Horner, 1819) recursively evaluates the polynomial $p(x) = a_n x^n + \cdots + a_1 x + a_0$ as $p(x) = (\cdots(a_n x + a_{n-1})x + \cdots)x + a_0.$ It costs n multiplications and n additions, with no extra storage. We evaluate the multivariate polynomial iteratively using the graded lexicographic order: each monomial is obtained from a previously computed one with a single multiplication and a single addition per term, at the cost of storing the computed monomials. [Figure 1: efficient expansion of the multivariate polynomials for x = (a, b, c), level by level from degree 0 to 3; the arrows point to the leading terms.]
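
A sketch of both rules, assuming the multivariate scheme generates each monomial from a previously computed one with a single multiplication; the enumeration below avoids duplicates by only extending a monomial with variables at or after its last incremented index:

def horner(coeffs, x):
    """a_n x^n + ... + a_0 as (..((a_n x + a_{n-1})x + ...)x + a_0."""
    result = 0.0
    for a in coeffs:                      # coeffs = [a_n, ..., a_0]
        result = result * x + a
    return result

def all_monomials(x, p):
    """dict: multi-index -> value of x^alpha, for all |alpha| < p."""
    d = len(x)
    vals = {(0,) * d: 1.0}
    frontier = [((0,) * d, 0)]            # (alpha, smallest variable to extend)
    for _ in range(p - 1):
        nxt = []
        for a, k in frontier:
            for j in range(k, d):         # extend only rightward: no duplicates
                b = a[:j] + (a[j] + 1,) + a[j + 1:]
                vals[b] = vals[a] * x[j]  # one multiplication per monomial
                nxt.append((b, j))
        frontier = nxt
    return vals

print(horner([2, -3, 0, 5], 2.0))    # 2x^3 - 3x^2 + 5 at x = 2 -> 9.0
print(all_monomials((2.0, 3.0), 3))  # 1, x1, x2, x1^2, x1*x2, x2^2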

20 An Example of Taylor Expansion Suppose $x = (x_1, x_2, x_3)$ and $y = (y_1, y_2, y_3)$. [Figure: the constant factor and the monomials of x and y up to degree 3, each level computed from the previous one; the Horner-style scheme needs markedly fewer operations than direct evaluation of the same monomials.]

21 An Example of Taylor Expansion (cont'd) [Figure: the same scheme applied to the accumulation over the N sources and the evaluation at the targets; the accumulation costs a small constant number of operations per source, versus roughly 30 per source for direct evaluation.]

22 Improved Fast Gauss Transform Multivariate Taylor expansion Multivariate Horner's rule Space subdivision based on the k-center algorithm Error bound analysis

23 Space Subdivision Scheme The space subdivision scheme in the original FGT is uniform boxes; the number of boxes grows exponentially with the dimensionality. A desirable space subdivision should adaptively fit the density of the points, and its cells should be as compact as possible. The algorithm should also be progressive, that is, the current space subdivision is built from the previous one. Based on these considerations, we model the task as a k-center problem.

24 k-center Algorithm The k-center problem seeks the best partition of a set of points into clusters (Gonzalez 1985, Hochbaum and Shmoys 1985, Feder and Greene 1988). Given a set of points and a predefined number k, k-center clustering finds a partition $S = S_1 \cup \cdots \cup S_k$ that minimizes $\max_{1 \le i \le k} \mathrm{radius}(S_i)$, where $\mathrm{radius}(S_i)$ is the radius of the smallest disk that covers all points in $S_i$. The k-center problem is NP-hard, but there exists a simple 2-approximation algorithm. [Figure: a point set covered by the smallest circles of its clusters]

25 Farthest-Point Algorithm The farthest-point algorithm (a.k.a. the k-center algorithm) is a 2-approximation of the optimal solution (Gonzalez 1985). The total running time is O(kn), where n is the number of points. It can be reduced to O(n log k) using a slightly more complicated algorithm (Feder and Greene 1988). 1. Initially, randomly pick a point v_0 as the first center and add it to the center set C. 2. For i = 1 to k-1 do: for every point v in V, compute the distance from v to the current center set C = {v_0, v_1, ..., v_{i-1}}: $d_i(v, C) = \min_{c \in C} \|v - c\|$. From the points V - C, find a point v_i that is farthest away from the current center set C, i.e., $d_i(v_i, C) = \max_v \min_{c \in C} \|v - c\|$; add v_i to the center set C. 3. Return the center set C = {v_0, v_1, ..., v_{k-1}} as the solution to the k-center problem.
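
A direct O(kn) sketch of the algorithm above, assuming NumPy points of shape (n, d):

import numpy as np

def farthest_point_clustering(points, k, seed=0):
    """Return indices of k centers and each point's assigned center."""
    rng = np.random.default_rng(seed)
    n = len(points)
    centers = [rng.integers(n)]                     # random first center
    dist = np.linalg.norm(points - points[centers[0]], axis=1)
    labels = np.zeros(n, dtype=int)
    for i in range(1, k):
        nxt = int(np.argmax(dist))                  # farthest from current centers
        centers.append(nxt)
        d_new = np.linalg.norm(points - points[nxt], axis=1)
        labels = np.where(d_new < dist, i, labels)  # reassign closer points
        dist = np.minimum(dist, d_new)
    return np.array(centers), labels

pts = np.random.default_rng(1).random((40000, 2))
centers, labels = farthest_point_clustering(pts, k=64)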

26 A Demo of k-center Algorithm k = 4

27 Results of k-center Algorithm The results of the k-center algorithm: 40,000 points are divided into 64 clusters in 0.48 sec on a 900 MHz PIII PC.

28 More Results of k-center Algorithm The 40,000 points lie on manifolds.

29 Results of k-center Algorithm The computational complexity of the k-center algorithm is O(n log k). Points are generated using a uniform distribution. [Figure: running time (s); (left) the number of points n varies from 1000 to … for 64 clusters; (right) the number of clusters k varies from 10 to 500 for … points.]

30 Improved Fast Gauss Transform Algorithm Control the series truncation error. Collect the contributions from the sources to the centers. Control the far-field cutoff error. Sum the contributions from the centers to the targets. (A sketch tying these steps together follows.)
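
A minimal end-to-end sketch of these steps; it assumes the farthest_point_clustering, ifgt_at_center, and ifgt_evaluate helpers from the earlier sketches, and a cutoff radius r_y measured in bandwidths:

import numpy as np

def ifgt(X, Y, q, h, p=8, k=16, r_y=2.0):
    """Approximate G(y_j) = sum_i q_i exp(-||y_j - x_i||^2/h^2)."""
    centers_idx, labels = farthest_point_clustering(X, k)
    G = np.zeros(len(Y))
    for i, ci in enumerate(centers_idx):
        c = X[ci]
        # Collect this cluster's source contributions into Taylor coefficients.
        coeffs = ifgt_at_center(X[labels == i], q[labels == i], c, h, p)
        # Evaluate only at targets within the cutoff radius of the center.
        near = np.linalg.norm(Y - c, axis=1) <= r_y * h
        for j in np.flatnonzero(near):
            G[j] += ifgt_evaluate(Y[j], c, h, coeffs)
    return G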

31 Complexities FGT: expansion and evaluation; translation. IFGT: expansion and evaluation; data structures.

32 Optimal number of centers

33 Improved Fast Gauss Transform Multivariate Taylor expansion Multivariate Horner's rule Space subdivision based on the k-center algorithm Error bound analysis

34 Error Bound of IFGT The total error from the series truncation and from the cutoff outside the neighborhood of the targets is bounded by $|E(y)| \le \left( \frac{2^p}{p!} \left(\frac{r_x}{h}\right)^p \left(\frac{r_y}{h}\right)^p + e^{-(r_y/h)^2} \right) \sum_i |q_i|,$ where the first term is the truncation error and the second the cutoff error, $r_x$ is the radius of the source clusters, and $r_y$ is the cutoff radius around the targets.
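
A small numeric check of this bound, assuming Q = sum_i |q_i|:

import math

def ifgt_error_bound(Q, p, r_x, r_y, h):
    """Truncation + cutoff error bound for the IFGT, as on the slide."""
    truncation = 2**p / math.factorial(p) * (r_x / h) ** p * (r_y / h) ** p
    cutoff = math.exp(-((r_y / h) ** 2))
    return Q * (truncation + cutoff)

print(ifgt_error_bound(Q=1.0, p=10, r_x=0.5, r_y=2.0, h=1.0))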

35 Error Bound Analysis Increasing the number of truncation terms p decreases the error bound. As the k-center algorithm progresses, the radius of the source points r_x decreases, until the error bound is less than a given precision. The error bound first decreases, then increases with respect to the cutoff radius r_y. [Figure: real max abs error vs. estimated error bound as functions of the truncation order p, the max radius of the cells r_x, and the cutoff radius r_y]

36 Experimental Result The speedup of the fast Gauss transform in 4, 6, 8, 10 dimensions (h = 1.0). [Figure: CPU time and max abs error vs. N, comparing the direct method and the fast method in 4D, 6D, 8D, and 10D]

37 New improvements in the IFGT


39 Pointwise error bounds and translation These can also be used to speed up the general FMM.

40 Some Applications Kernel density estimation Mean-shift based segmentation Object tracking Robot navigation

41 Kernel Density Estimation (KDE) Kernel density estimation (a.k.a. the Parzen window method; Rosenblatt 1956, Parzen 1962) is an important nonparametric technique. KDE is the keystone of many algorithms: radial basis function networks, support vector machines, the mean shift algorithm, the regularized particle filter. Its main drawback is the quadratic computational complexity, which is very slow for large datasets.

42 Kernel Density Estimation Given a set of observations $\{x_1, \ldots, x_n\}$, an estimate of the density function is $\hat{f}_n(x) = \frac{1}{nh^d} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),$ where $K$ is the kernel function, $d$ the dimension, and $h$ the bandwidth. Some commonly used kernel functions: rectangular, triangular, Epanechnikov, Gaussian. The computational requirement for large datasets is $O(N^2)$ for $N$ points.
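
A direct Gaussian-kernel sketch of this estimator, assuming data of shape (n, d); this is the quadratic-cost sum the (I)FGT accelerates:

import numpy as np

def gaussian_kde(x, data, h):
    """f_hat(x) = 1/(n h^d) * sum_i K((x - x_i)/h), with Gaussian K."""
    n, d = data.shape
    u2 = np.sum(((x - data) / h) ** 2, axis=1)
    K = (2 * np.pi) ** (-d / 2) * np.exp(-u2 / 2)
    return K.sum() / (n * h**d)

data = np.random.default_rng(2).normal(size=(5000, 2))
print(gaussian_kde(np.zeros(2), data, h=0.3))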

43 Efficient KDE and FGT In practice, the most widely used kernel is the Gaussian $K_N(x) = (2\pi)^{-d/2} e^{-\frac{1}{2}\|x\|^2}.$ The density estimate using the Gaussian kernel is $\hat{p}_n(x) = c_N \sum_{i=1}^{N} e^{-\|x - x_i\|^2/h^2}.$ The fast Gauss transform can reduce the cost to O(N log N) in low-dimensional spaces. The improved fast Gauss transform accelerates KDE in both lower and higher dimensions.

44 Experimental Result Image segmentation results of the mean-shift algorithm with the Gaussian kernel. [Figure: two segmented images. Size 43X94, time … s, direct evaluation more than 2 hours; size 481×321, time … s, direct evaluation more than 2 hours.]

45 Some Applications Kernel density estimation Mean-shift based segmentation Object tracking Robot Navigation

46 Object Tracking The goal of object tracking is to find the moving objects between consecutive frames. A model image or template is given for tracking. Usually a feature space is used, such as pixel intensities, colors, edges, etc., and a similarity measure is used to measure the difference between the model image and the current image. Temporal correlation assumption: the change between two consecutive frames is small. [Figure: similarity function between the model image and the target image]

47 Proposed Similarity Measures Our similarity measure is directly computed from the data points, scales well to higher dimensions, and is ready to be sped up by the fast Gauss transform if Gaussian kernels are employed. It can be interpreted as the expectation of the density estimations over the model and target images.

48 Experimental results Pure translation. Feature space: RGB + 2D spatial space. Average processing time per frame: … s. [Figure: our method vs. histogram tracking using the Bhattacharyya distance]

49 Experimental results Pure translation. Feature space: RGB + 2D gradient + 2D spatial space. Average processing time per frame: … s.

50 Experimental results Pure translation. Feature space: RGB + 2D spatial space. The similarity measure is robust and accurate for small targets, compared with other methods such as histograms.

51 Experimental Results Translation + scaling. Feature space: RGB + 2D spatial space.

52 Conclusions An improved fast Gauss transform is presented for higher dimensional spaces. Kernel density estimation is accelerated by the IFGT. A similarity measure in the joint feature-spatial space is proposed for robust object tracking. With different motion models, we derived several tracking algorithms, and the improved fast Gauss transform speeds them up to real-time performance.
