Improved Fast Gauss Transform


1 Improved Fast Gauss Transform Changjiang Yang Department of Computer Science University of Maryland

2 Fast Gauss Transform (FGT)
Originally proposed by Greengard and Strain (1991) to efficiently evaluate the weighted sum of Gaussians
G(y_j) = \sum_{i=1}^{N} q_i e^{-\|y_j - x_i\|^2/h^2}, j = 1, ..., M.
The FGT is an important variant of the Fast Multipole Method (Greengard & Rokhlin 1987, Greengard & Strain 1990).
The FGT is similar in spirit to the fast Fourier transform:
FFT is for data on a regular grid; FGT is for data at arbitrary positions.
FFT is arithmetic based; FGT is based on approximate analysis with a controllable error.
Widely used in mathematical physics, computational chemistry, and mechanics. Recently applied in our group to RBF interpolation, acoustic scattering, EM scattering, non-uniform FFT, and computer vision.

3 Fast Gauss Transform (cont'd)
The weighted sum of Gaussians over the sources x_i and the targets y_j,
G(y_j) = \sum_{i=1}^{N} q_i e^{-\|y_j - x_i\|^2/h^2}, j = 1, ..., M,
is equivalent to the matrix-vector product
\begin{pmatrix} G(y_1) \\ G(y_2) \\ \vdots \\ G(y_M) \end{pmatrix} =
\begin{pmatrix}
e^{-\|x_1 - y_1\|^2/h^2} & e^{-\|x_2 - y_1\|^2/h^2} & \cdots & e^{-\|x_N - y_1\|^2/h^2} \\
e^{-\|x_1 - y_2\|^2/h^2} & e^{-\|x_2 - y_2\|^2/h^2} & \cdots & e^{-\|x_N - y_2\|^2/h^2} \\
\vdots & \vdots & & \vdots \\
e^{-\|x_1 - y_M\|^2/h^2} & e^{-\|x_2 - y_M\|^2/h^2} & \cdots & e^{-\|x_N - y_M\|^2/h^2}
\end{pmatrix}
\begin{pmatrix} q_1 \\ q_2 \\ \vdots \\ q_N \end{pmatrix}.
Direct evaluation of the sum of Gaussians requires O(N^2) operations (taking M ≈ N). The fast Gauss transform reduces the cost to O(N log N) operations.
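
For reference, a minimal sketch of the direct, quadratic-cost evaluation in Python/NumPy (the function name and signature are illustrative, not from the slides):

import numpy as np

def gauss_transform_direct(X, Y, q, h):
    """Direct O(N*M) evaluation of G(y_j) = sum_i q_i exp(-||y_j - x_i||^2 / h^2).
    X: (N, d) sources, Y: (M, d) targets, q: (N,) weights, h: bandwidth."""
    # Pairwise squared distances between targets (rows) and sources (columns).
    d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)   # (M, N)
    return np.exp(-d2 / h**2) @ q                               # (M,)

This is the baseline that the FGT and IFGT are meant to replace.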

4 Fast Gauss Transform (cont d) Three key components of the FGT: The factorization or separation of variables Space subdivision scheme Error bound analysis

5 Factorization of Gaussian
Factorization is one of the key parts of the FGT. It breaks the entanglement between the sources and targets: the contributions of the sources are accumulated at an expansion center, then distributed to each target.
[Figure: direct source-target interactions versus interactions routed through a center.]

6 Factorization in FGT: Hermite Expansion
The Gaussian kernel is factorized into Hermite functions:
e^{-\|y_j - x_i\|^2/h^2} = \sum_{n=0}^{p-1} \frac{1}{n!} \left(\frac{x_i - x_*}{h}\right)^n h_n\!\left(\frac{y_j - x_*}{h}\right) + \epsilon(p),
where the Hermite function h_n(x) is defined by
h_n(x) = (-1)^n \frac{d^n}{dx^n} e^{-x^2}.
Exchanging the order of the summations gives
G(y_j) = \sum_{n=0}^{p-1} \Bigl[ \sum_{i=1}^{N} \frac{q_i}{n!} \left(\frac{x_i - x_*}{h}\right)^n \Bigr] h_n\!\left(\frac{y_j - x_*}{h}\right) + \epsilon(p),
where the bracketed quantity is the moment A_n.
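
A minimal 1D sketch of this factorization (assuming a single expansion center c and truncation after p terms; all names are illustrative):

import numpy as np

def hermite_functions(t, p):
    """h_n(t) = (-1)^n d^n/dt^n exp(-t^2) for n = 0..p-1, via the recurrence
    h_{n+1}(t) = 2 t h_n(t) - 2 n h_{n-1}(t)."""
    H = np.empty((p, len(t)))
    H[0] = np.exp(-t**2)
    if p > 1:
        H[1] = 2 * t * H[0]
    for n in range(1, p - 1):
        H[n + 1] = 2 * t * H[n] - 2 * n * H[n - 1]
    return H

def gauss_transform_hermite_1d(x, y, q, h, c, p):
    """Approximate G(y_j) = sum_i q_i exp(-(y_j - x_i)^2 / h^2) in 1D by
    accumulating p moments A_n at the center c, then evaluating at the targets."""
    fact = np.cumprod(np.concatenate(([1.0], np.arange(1.0, p))))   # 0!, 1!, ..., (p-1)!
    A = np.array([(q * ((x - c) / h) ** n).sum() / fact[n] for n in range(p)])
    H = hermite_functions((y - c) / h, p)   # (p, M)
    return A @ H                            # (M,)

The cost is O(p(N + M)) instead of O(NM), at the price of the truncation error ε(p).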

7 The FGT Algorithm
Step 0: Subdivide the space into uniform boxes.
Step 1: Assign sources and targets to boxes.
Step 2: For each pair of source box B and target box C, compute the interactions using one of four possible ways (a sketch of this decision follows below):
N_B ≤ N_F, M_C ≤ M_L: direct evaluation;
N_B ≤ N_F, M_C > M_L: Taylor expansion;
N_B > N_F, M_C ≤ M_L: Hermite expansion;
N_B > N_F, M_C > M_L: Hermite expansion --> Taylor expansion.
Step 3: Loop through the boxes, evaluating the Taylor series for boxes with more than M_L targets.
(N_B: # of sources in box B; N_F: source cutoff of box B; M_C: # of targets in box C; M_L: target cutoff of box C.)
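
A minimal sketch of the four-way decision (the variable names mirror the slide; the thresholds N_F and M_L are implementation parameters):

def interaction_method(n_b, m_c, n_f, m_l):
    """Pick how a (source box B, target box C) pair is handled in the original FGT."""
    if n_b <= n_f and m_c <= m_l:
        return "direct evaluation"
    if n_b <= n_f:
        return "Taylor expansion"                      # few sources, many targets
    if m_c <= m_l:
        return "Hermite expansion"                     # many sources, few targets
    return "Hermite expansion -> Taylor expansion"     # many sources, many targets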

8 Too Many Terms in Higher Dimensions
The higher-dimensional Hermite expansion is the Kronecker product of d univariate Hermite expansions.
The total number of terms is O(p^d), where p is the number of truncation terms; the number of operations in one factorization is also O(p^d).
[Figure: Kronecker-product structure of the terms h_{n_1} h_{n_2} ... h_{n_d} for d = 1, 2, 3, and d > 3.]

9 Too Many Boxes in Higher Dimensions
The FGT subdivides the space into uniform boxes and assigns the source points and target points to boxes. For each box, the FGT maintains a neighbor list.
The number of boxes increases exponentially with the dimensionality.

10 FGT in Higher Dimensions
The FGT was originally designed for problems in mathematical physics (heat equation, vortex methods, etc.), where the dimension is at most 3.
The higher-dimensional Hermite expansion is the product of univariate Hermite expansions along each dimension; the total number of terms is O(p^d).
The space subdivision scheme in the original FGT is uniform boxes. The number of boxes grows exponentially with dimension, and most boxes are empty.
The exponential dependence on the dimension makes the FGT extremely inefficient in higher dimensions.

11 The Decay of Gaussian Functions
The Gaussian kernel decays very fast: its effect outside a certain range can be safely ignored.
The time-consuming translation operators in the original FGT can therefore be safely removed.
[Table: values of e^{-(r_x/h)^2} for increasing r_x/h, illustrating the rapid decay.]

12 Improved Fast Gauss Transform Multivariate Taylor expansion Multivariate Horner s Rule Space subdivision based on k-center algorithm Error bound analysis

13 Multivariate Taylor Expansions
The Taylor expansion of the Gaussian function:
e^{-\|y_j - x_i\|^2/h^2} = e^{-\|y_j - x_*\|^2/h^2} \, e^{-\|x_i - x_*\|^2/h^2} \, e^{2 (y_j - x_*) \cdot (x_i - x_*)/h^2}.
The first two factors can be computed independently for the targets and the sources. The Taylor expansion of the last factor is
e^{2 (y_j - x_*) \cdot (x_i - x_*)/h^2} = \sum_{\alpha \ge 0} \frac{2^{|\alpha|}}{\alpha!} \left(\frac{x_i - x_*}{h}\right)^{\alpha} \left(\frac{y_j - x_*}{h}\right)^{\alpha},
where \alpha = (\alpha_1, ..., \alpha_d) is a multi-index. The multivariate Taylor expansion about the center x_* is then
G(y_j) = \sum_{\alpha \ge 0} C_{\alpha} \, e^{-\|y_j - x_*\|^2/h^2} \left(\frac{y_j - x_*}{h}\right)^{\alpha},
where the coefficients C_{\alpha} are given by
C_{\alpha} = \frac{2^{|\alpha|}}{\alpha!} \sum_{i=1}^{N} q_i \, e^{-\|x_i - x_*\|^2/h^2} \left(\frac{x_i - x_*}{h}\right)^{\alpha}.
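
A minimal 1D numerical check of this factorization (truncating the expansion of the cross term after p terms; the values below are only an illustration):

import numpy as np
from math import factorial

def gaussian_taylor_1d(x, y, c, h, p):
    """Truncated expansion of exp(-(y - x)^2 / h^2) about a center c in 1D."""
    s = sum((2.0**n / factorial(n)) * ((x - c) / h) ** n * ((y - c) / h) ** n
            for n in range(p))
    return np.exp(-((y - c) / h) ** 2) * np.exp(-((x - c) / h) ** 2) * s

# The truncation error shrinks quickly when |x - c| and |y - c| are small relative to h.
x, y, c, h = 0.3, 0.7, 0.5, 1.0
print(gaussian_taylor_1d(x, y, c, h, p=6), np.exp(-((y - x) / h) ** 2))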

14 Reduction from Taylor Expansions
The idea of a Taylor expansion of the Gaussian kernel was first introduced in the course CMSC878R by Ramani Duraiswami and Nail Gumerov.
The number of terms in the multivariate Hermite expansion is O(p^d).
The number of terms in the multivariate Taylor expansion truncated at total degree p is \binom{p+d-1}{d}, asymptotically O(d^p), a big reduction for large d (a quick comparison of the two counts follows below).
[Figure: number of terms of the Hermite expansion versus the Taylor expansion, as functions of the dimension d and of the truncation order p.]
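
A quick way to compare the two term counts (a sketch; the p and d values are arbitrary examples):

from math import comb

def hermite_terms(p, d):
    return p ** d               # full tensor-product truncation

def taylor_terms(p, d):
    return comb(p + d - 1, d)   # multi-indices with total degree < p

for d in (3, 6, 10):
    print(d, hermite_terms(10, d), taylor_terms(10, d))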

15 Improved Fast Gauss Transform Multivariate Taylor expansion Multivariate Horner s Rule Space subdivision based on k-center algorithm Error bound analysis

16 Monomial Orders
Let α = (α_1, ..., α_n) and β = (β_1, ..., β_n). Three standard monomial orders:
Lexicographic order, or dictionary order: α <_lex β iff the leftmost nonzero entry in α - β is negative.
Graded lexicographic order (Veronese map, a.k.a. polynomial embedding): α <_grlex β iff Σ_i α_i < Σ_i β_i, or Σ_i α_i = Σ_i β_i and α <_lex β.
Graded reverse lexicographic order: α <_grevlex β iff Σ_i α_i < Σ_i β_i, or Σ_i α_i = Σ_i β_i and the rightmost nonzero entry in α - β is positive.
Example: let f(x,y,z) = x y^5 z^2 + x^2 y^3 z^3 + x^3. Then
w.r.t. lex, f = x^3 + x^2 y^3 z^3 + x y^5 z^2;
w.r.t. grlex, f = x^2 y^3 z^3 + x y^5 z^2 + x^3;
w.r.t. grevlex, f = x y^5 z^2 + x^2 y^3 z^3 + x^3.
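
Graded lexicographic order is easy to realize on exponent vectors: compare total degree first, then compare lexicographically. A small sketch, using the multi-indices of the example above:

def grlex_key(alpha):
    """Sort key for graded lexicographic order: total degree first, then lex."""
    return (sum(alpha), alpha)

# Exponent vectors of x*y^5*z^2, x^2*y^3*z^3 and x^3, sorted from largest to smallest.
terms = [(1, 5, 2), (2, 3, 3), (3, 0, 0)]
print(sorted(terms, key=grlex_key, reverse=True))   # [(2, 3, 3), (1, 5, 2), (3, 0, 0)]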

17 Horner's Rule
Horner's rule (W. G. Horner, 1819) recursively evaluates the polynomial p(x) = a_n x^n + ... + a_1 x + a_0 as
p(x) = (...(a_n x + a_{n-1}) x + ...) x + a_0.
It costs n multiplications and n additions, with no extra storage.
We evaluate the multivariate polynomial iteratively using the graded lexicographic order: each monomial is obtained from a previously computed one by a single multiplication, so it likewise costs one multiplication and one addition per term, plus storage for the computed monomials (a sketch follows below).
[Figure: the monomials of x = (a, b, c), built degree by degree: x^0: 1; x^1: a, b, c; x^2: a^2, ab, ac, b^2, bc, c^2; x^3: a^3, a^2 b, a^2 c, a b^2, abc, a c^2, b^3, b^2 c, b c^2, c^3.]
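
A minimal sketch of this incremental monomial evaluation (the heads/tail bookkeeping follows the spirit of the published IFGT code, but the names here are illustrative):

import math

def compute_monomials(dx, p):
    """All monomials dx^alpha with |alpha| < p, in graded lexicographic order.
    dx is a length-d vector, e.g. (y - c)/h; each new monomial costs one multiply."""
    d = len(dx)
    vals = [0.0] * math.comb(p - 1 + d, d)
    heads = [0] * d
    vals[0] = 1.0
    t = 1
    for n in range(1, p):                    # monomials of total degree n
        tail = t
        for i in range(d):
            head, heads[i] = heads[i], t
            for j in range(head, tail):
                vals[t] = dx[i] * vals[j]    # extend an existing monomial by dx[i]
                t += 1
    return vals

# d = 3, p = 4 reproduces the figure's list: 1; a, b, c; a^2, ab, ac, ...; ..., c^3.
print(compute_monomials([2.0, 3.0, 5.0], 4))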

18 An Example of Taylor Expansion
Suppose x = (x_1, x_2, x_3) and y = (y_1, y_2, y_3). Then
e^{2 x \cdot y} = \sum_{\alpha \ge 0} \frac{2^{|\alpha|}}{\alpha!} x^{\alpha} y^{\alpha}.
[Figure: the constants 2^{|\alpha|}/\alpha! and the monomials x^{\alpha} and y^{\alpha} up to third order, with the operation counts for evaluating each column via the Horner-like scheme.]

19 An Example of Taylor Expansion (cont'd)
G(y) = \sum_{i=1}^{N} q_i e^{-\|x_i - y\|^2}
     = \sum_{i=1}^{N} q_i e^{-\|x_i\|^2} e^{-\|y\|^2} \sum_{\alpha \ge 0} \frac{2^{|\alpha|}}{\alpha!} x_i^{\alpha} y^{\alpha}
     = e^{-\|y\|^2} \sum_{\alpha \ge 0} \frac{2^{|\alpha|}}{\alpha!} y^{\alpha} \sum_{i=1}^{N} q_i e^{-\|x_i\|^2} x_i^{\alpha}.
[Figure: operation counts of the factorized evaluation compared with direct evaluation for this third-order, three-dimensional example.]

20 Improved Fast Gauss Transform Multivariate Taylor expansion Multivariate Horner s Rule Space subdivision based on k-center algorithm Error bound analysis

21 Space Subdivision Scheme
The space subdivision scheme in the original FGT is uniform boxes; the number of boxes grows exponentially with the dimensionality.
A desirable space subdivision should adaptively fit the density of the points, and the cells should be as compact as possible.
The algorithm should preferably be progressive, that is, the current space subdivision is obtained by refining the previous one.
Based on these considerations, we model the task as a k-center problem.

22 k-center Algorithm
The k-center problem seeks the best partition of a set of points into clusters (Gonzalez 1985, Hochbaum and Shmoys 1985, Feder and Greene 1988).
Given a set of points and a predefined number k, k-center clustering finds a partition S = S_1 ∪ ... ∪ S_k that minimizes max_{1 ≤ i ≤ k} radius(S_i), where radius(S_i) is the radius of the smallest disk that covers all points in S_i.
The k-center problem is NP-hard, but there exists a simple 2-approximation algorithm.
[Figure: a partition of points with the smallest enclosing circles of the clusters.]

23 Farthest-Point Algorithm
The farthest-point algorithm (a.k.a. the k-center algorithm) is a 2-approximation of the optimal solution (Gonzalez 1985). The total running time is O(kn), where n is the number of points; it can be reduced to O(n log k) using a slightly more complicated algorithm (Feder and Greene 1988). A minimal implementation is sketched below.
1. Initially, randomly pick a point v_0 as the first center and add it to the center set C.
2. For i = 1 to k-1 do:
   for every point v ∈ V, compute the distance from v to the current center set C = {v_0, v_1, ..., v_{i-1}}: d_i(v, C) = min_{c ∈ C} ||v - c||;
   from the points V \ C, find the point v_i that is farthest from the current center set C, i.e. d_i(v_i, C) = max_v min_{c ∈ C} ||v - c||, and add v_i to the center set C.
3. Return the center set C = {v_0, v_1, ..., v_{k-1}} as the solution to the k-center problem.
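
A minimal NumPy sketch of this procedure (names are illustrative; each iteration updates the distance-to-center-set array in O(n)):

import numpy as np

def farthest_point_clustering(X, k, rng=np.random.default_rng(0)):
    """Gonzalez's 2-approximation for k-center: returns center indices and labels."""
    n = len(X)
    centers = [int(rng.integers(n))]                  # random first center
    d = np.linalg.norm(X - X[centers[0]], axis=1)     # distance to the center set
    labels = np.zeros(n, dtype=int)
    for i in range(1, k):
        centers.append(int(np.argmax(d)))             # farthest point becomes a center
        d_new = np.linalg.norm(X - X[centers[i]], axis=1)
        labels = np.where(d_new < d, i, labels)       # reassign to the closer center
        d = np.minimum(d, d_new)
    return np.array(centers), labels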

24 A Demo of k-center Algorithm k = 4

25 Results of k-center Algorithm
Results of the k-center algorithm: 40,000 points are divided into 64 clusters in 0.48 s on a 900 MHz PIII PC.

26 More Results of k-center Algorithm
The 40,000 points lie on manifolds.

27 Results of k-center Algorithm
The computational complexity of the k-center algorithm is O(n log k). Points are generated from a uniform distribution.
[Figure: (left) running time versus the number of points (from 1,000 up) with 64 clusters; (right) running time versus the number of clusters k, from 10 to 500, for a fixed number of points.]

28 Improved Fast Gauss Transform Algorithm
Step 1: Assign the N sources into K clusters using the farthest-point clustering algorithm, such that the cluster radius is less than r_x.
Step 2: Choose p sufficiently large such that the error estimate is less than the desired precision ε.
Step 3: For each cluster S_k with center c_k, compute the coefficients (collecting the contributions from the sources at the centers):
C_α^k = \frac{2^{|\alpha|}}{\alpha!} \sum_{x_i \in S_k} q_i e^{-\|x_i - c_k\|^2/h^2} \left(\frac{x_i - c_k}{h}\right)^{\alpha}.
Step 4: For each target y_j, find its neighboring clusters whose centers lie within the range r_y (this cutoff controls the far-field error); the sum of Gaussians is then evaluated as
G(y_j) = \sum_{\|y_j - c_k\| < r_y} \; \sum_{|\alpha| < p} C_α^k \, e^{-\|y_j - c_k\|^2/h^2} \left(\frac{y_j - c_k}{h}\right)^{\alpha},
where the truncation at |\alpha| < p controls the series truncation error.
A minimal end-to-end sketch follows below.
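
A minimal end-to-end sketch of these four steps, reusing the farthest_point_clustering and compute_monomials sketches above (all names are illustrative, and the brute-force neighbor search plus the automatic choices of K, p, and r_y of a real implementation are omitted):

import numpy as np
from math import factorial

def multi_indices(d, p):
    """All multi-indices alpha with |alpha| < p, in the same order in which
    compute_monomials produces the monomials."""
    idx = [np.zeros(d, dtype=int)]
    heads, t = [0] * d, 1
    for n in range(1, p):
        tail = t
        for i in range(d):
            head, heads[i] = heads[i], t
            for j in range(head, tail):
                a = idx[j].copy(); a[i] += 1
                idx.append(a); t += 1
    return idx

def ifgt_eval(X, Y, q, h, K, p, r_y):
    """Sketch of the IFGT: cluster the sources, accumulate Taylor coefficients per
    cluster, then evaluate each target against nearby cluster centers only."""
    d = X.shape[1]
    center_idx, labels = farthest_point_clustering(X, K)
    centers = X[center_idx]
    alphas = multi_indices(d, p)
    const = np.array([2.0**a.sum() / np.prod([factorial(v) for v in a]) for a in alphas])

    # Step 3: per-cluster coefficients C_alpha^k.
    C = np.zeros((K, len(alphas)))
    for i in range(len(X)):
        k = labels[i]
        dx = (X[i] - centers[k]) / h
        mono = np.array(compute_monomials(dx, p))
        C[k] += q[i] * np.exp(-np.dot(dx, dx)) * const * mono

    # Step 4: evaluate each target against clusters whose centers lie within r_y.
    G = np.zeros(len(Y))
    for j in range(len(Y)):
        for k in range(K):
            if np.linalg.norm(Y[j] - centers[k]) > r_y:
                continue
            dy = (Y[j] - centers[k]) / h
            mono = np.array(compute_monomials(dy, p))
            G[j] += np.exp(-np.dot(dy, dy)) * np.dot(C[k], mono)
    return G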

29 Improved Fast Gauss Transform Multivariate Taylor expansion Multivariate Horner s Rule Space subdivision based on k-center algorithm Error bound analysis

30 Error Bound of IFGT
The total error from the series truncation and from cutting off the contributions outside the neighborhood of a target is bounded by
|E(y)| \le \sum_i |q_i| \left( \frac{2^p}{p!} \left(\frac{r_x}{h}\right)^p \left(\frac{r_y}{h}\right)^p + e^{-(r_y/h)^2} \right),
where the first term is the truncation error and the second term is the cutoff error; r_x is the radius of the source clusters and r_y is the cutoff radius around the targets.
[Figure: source clusters of radius r_x and a target with cutoff radius r_y.]
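
A small sketch for evaluating this bound and picking the smallest truncation order that meets a desired precision (names are illustrative; note that the cutoff term does not depend on p, so if it alone exceeds eps no p will succeed):

from math import exp, factorial

def ifgt_error_bound(Q, rx, ry, h, p):
    """Bound from the slide: Q * (2^p/p! * (rx/h)^p * (ry/h)^p + exp(-(ry/h)^2)),
    with Q = sum_i |q_i|."""
    return Q * ((2.0**p / factorial(p)) * (rx / h)**p * (ry / h)**p
                + exp(-(ry / h)**2))

def smallest_p(Q, rx, ry, h, eps, p_max=100):
    """Smallest truncation order whose bound is below eps (None if none is found)."""
    for p in range(1, p_max + 1):
        if ifgt_error_bound(Q, rx, ry, h, p) <= eps:
            return p
    return None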

31 Error Bound Analysis
Increasing the number of truncation terms p decreases the error bound.
As the k-center algorithm progresses, the radius of the source clusters r_x decreases, until the error bound is less than the given precision.
The error bound first decreases, then increases, with respect to the cutoff radius r_y.
[Figure: real maximum absolute error versus the estimated error bound as functions of p, r_x, and r_y.]

32 Experimental Result
The speedup of the fast Gauss transform in 4, 6, 8, and 10 dimensions (h = 1.0).
[Figure: CPU time and maximum absolute error of the direct method versus the fast method in 4D, 6D, 8D, and 10D, as functions of N.]

33 Programming Issues
Streaming SIMD Extensions (SSE and SSE2, Intel)

34 Example of SSE/SSE2 Code
The fragment below (from the slide) accumulates column-wise running sums of packed 32-bit integers with paddd, apparently building an integral image ii from an image img of height h and width w:

_asm {
    mov  eax, DWORD PTR [ebp+8]    ; h
    mov  edi, DWORD PTR [ebp+1]    ; w
    mov  ebx, DWORD PTR [ebp+0]    ; img
    mov  ecx, DWORD PTR [ebp+4]    ; ii
    xor  edx, edx
  nextcol1:
    movdqa xmm0, XMMWORD PTR [ebx]
    movdqa XMMWORD PTR [ecx], xmm0
    mov  esi, 1
    add  ebx, 16
  nextrow1:
    paddd  xmm0, XMMWORD PTR [ebx]   ; add the next group of 4 packed 32-bit values
    add  ecx, 16
    movdqa XMMWORD PTR [ecx], xmm0   ; store the running sum
    add  ebx, 16
    inc  esi
    cmp  esi, eax
    jl   nextrow1
    add  ecx, 16
    inc  edx
    cmp  edx, edi
    jl   nextcol1
}

35 Some Applications Kernel density estimation Mean-shift based segmentation Object tracking Robot navigation

36 Kernel Density Estimation (KDE)
Kernel density estimation (a.k.a. the Parzen window method; Rosenblatt 1956, Parzen 1962) is an important nonparametric technique.
KDE is the keystone of many algorithms: radial basis function networks, support vector machines, the mean shift algorithm, the regularized particle filter.
Its main drawback is the quadratic computational complexity, which is very slow for large datasets.

37 Kernel Density Estimation
Given a set of observations {x_1, ..., x_n}, an estimate of the density function is
\hat{f}_n(x) = \frac{1}{n h^d} \sum_{i=1}^{n} K\!\left(\frac{\|x - x_i\|}{h}\right),
where K is the kernel function, h is the bandwidth, and d is the dimension.
Some commonly used kernel functions: rectangular, triangular, Epanechnikov, Gaussian.
The computational requirement for large datasets is O(N^2) for N points.

38 Efficient KDE and FGT
In practice, the most widely used kernel is the Gaussian
K_N(x) = (2\pi)^{-d/2} e^{-\frac{1}{2}\|x\|^2}.
The density estimate using the Gaussian kernel is
\hat{p}_n(x) = c_N \sum_{i=1}^{N} e^{-\|x - x_i\|^2/h^2}.
The fast Gauss transform can reduce the cost to O(N log N) in low-dimensional spaces; the improved fast Gauss transform accelerates KDE in both lower and higher dimensions. A minimal sketch of KDE in this form is given below.
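
A minimal sketch of Gaussian KDE written as a Gauss transform, reusing the gauss_transform_direct sketch above (so the direct sum can be swapped for the FGT/IFGT; the normalization assumes the kernel e^{-\|x - x_i\|^2/h^2}):

import numpy as np

def gaussian_kde(X, Y, h):
    """Evaluate the Gaussian kernel density estimate of the samples X at the points Y."""
    N, d = X.shape
    c = 1.0 / (N * np.pi ** (d / 2) * h ** d)       # normalizes exp(-||.||^2 / h^2)
    q = np.ones(N)
    return c * gauss_transform_direct(X, Y, q, h)   # replace with a fast transform here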

39 Experimental Result
Image segmentation results of the mean-shift algorithm with the Gaussian kernel.
[Figure: two segmented images with their sizes and running times (Size: 43X94, Time: s, direct evaluation: more than hours; Size: 481X31, Time: s, direct evaluation: more than hours).]

40 Some Applications Kernel density estimation Mean-shift based segmentation Object tracking Robot Navigation

41 Object Tracking
Goal of object tracking: find the moving objects between consecutive frames. A model image or template is given for tracking.
Usually a feature space is used, such as pixel intensity, colors, edges, etc., and a similarity measure is used to measure the difference between the model image and the current image.
Temporal correlation assumption: the change between two consecutive frames is small.
[Figure: similarity function between the model and the current frame.]

42 Proposed Similarity Measures
Our similarity measure is:
directly computed from the data points;
well scaled to higher dimensions;
ready to be sped up by the fast Gauss transform, if Gaussian kernels are employed;
interpreted as the expectation of the density estimates over the model and target images.

43 Experimental Results
Pure translation. Feature space: RGB + 2D spatial space. Average processing time per frame: s.
[Figure: our method compared with histogram tracking using the Bhattacharyya distance.]

44 Experimental Results
Pure translation. Feature space: RGB + 2D gradient + 2D spatial space. Average processing time per frame: s.

45 Experimental Results
Pure translation. Feature space: RGB + 2D spatial space.
The similarity measure is robust and accurate for small targets, compared with other methods such as histogram tracking.

46 Experimental Results
Translation + scaling. Feature space: RGB + 2D spatial space.

47 Conclusions
The improved fast Gauss transform is presented for higher-dimensional spaces.
Kernel density estimation is accelerated by the IFGT.
A similarity measure in the joint feature-spatial space is proposed for robust object tracking; with different motion models, we derived several tracking algorithms.
The improved fast Gauss transform is applied to speed up the tracking algorithms to real-time performance.
