The Fast Multipole Method and other Fast Summation Techniques
1 The Fast Multipole Method and other Fast Summation Techniques. Gunnar Martinsson, The University of Colorado at Boulder.
2 Problem definition: Consider the task of evaluating the sum
$$u_i = \sum_{j=1}^{N} G(x_i, x_j)\, q_j, \qquad i = 1, 2, \dots, N, \tag{1}$$
where $\{x_i\}_{i=1}^{N}$ is a given set of points in a square $\Omega$ in the plane, where $\{q_i\}_{i=1}^{N}$ is a given set of real numbers which we call charges, and where $\{u_i\}_{i=1}^{N}$ is a sought set of real numbers which we call potentials. The kernel $G$ is given by
$$G(x, y) = \begin{cases} \log(x - y), & x \neq y, \\ 0, & x = y. \end{cases} \tag{2}$$
Recall: a point $x \in \mathbb{R}^2$ is represented by the complex number $x = x_1 + i\,x_2 \in \mathbb{C}$. The kernel in (2) is then a complex representation of the fundamental solution to the Laplace equation in $\mathbb{R}^2$, since $\log|x - y| = \operatorname{Real}\bigl(\log(x - y)\bigr)$. (The factor of $1/2\pi$ is suppressed.)
3 Special case: sources and targets are separate. Charges $q_j$ at locations $\{y_j\}_{j=1}^{N}$; potentials $u_i$ at locations $\{x_i\}_{i=1}^{M}$:
$$u_i = \sum_{j=1}^{N} q_j \log(x_i - y_j), \qquad i = 1, 2, \dots, M,$$
that is, $u_i = u(x_i)$ where $u(x) = \sum_{j=1}^{N} q_j \log(x - y_j)$. Direct evaluation: cost is $O(MN)$.
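The direct $O(MN)$ evaluation is short enough to state in full. Below is a minimal sketch in Python/NumPy (the function and variable names are our own; the slides do not prescribe an implementation), using the complex representation $x = x_1 + i\,x_2$ from slide 2:

```python
import numpy as np

def direct_sum(sources, charges, targets):
    """Evaluate u(x_i) = sum_j q_j * log(x_i - y_j) for all targets x_i.

    Points in the plane are stored as complex numbers x = x1 + 1j*x2,
    as on the slides.  Cost is O(M N) for M targets and N sources."""
    diff = targets[:, None] - sources[None, :]       # (M, N) pairwise x_i - y_j
    return (np.log(diff) * charges[None, :]).sum(axis=1)

rng = np.random.default_rng(0)
ys = rng.random(50) + 1j * rng.random(50)            # N = 50 sources in the unit square
qs = rng.standard_normal(50)                         # charges
xs = 3.0 + rng.random(20) + 1j * rng.random(20)      # M = 20 separate targets
u = direct_sum(ys, qs, xs)
```

The real part of the output is the harmonic potential $\sum_j q_j \log|x_i - y_j|$.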
4 Special case: sources and targets are separate. With $u(x) = \sum_{j=1}^{N} q_j \log(x - y_j)$ as above, multipole expansion: since $u$ is harmonic outside the magenta circle (centered at $c$ and containing all sources), we know that
$$u(x) = \alpha_0 \log(x - c) + \sum_{p=1}^{\infty} \frac{\alpha_p}{(x - c)^p}, \qquad \text{where } \alpha_0 = \sum_{j=1}^{N} q_j \;\text{ and }\; \alpha_p = -\frac{1}{p} \sum_{j=1}^{N} q_j\,(y_j - c)^p.$$
5 Special case: sources and targets are separate. Multipole expansion truncated to $P+1$ terms: since $u$ is harmonic outside the magenta circle, we know that
$$u(x) = \alpha_0 \log(x - c) + \sum_{p=1}^{P} \frac{\alpha_p}{(x - c)^p} + E_P, \qquad \text{where } \alpha_p = -\frac{1}{p} \sum_{j=1}^{N} q_j\,(y_j - c)^p.$$
The approximation error scales as $E_P \sim (r/R)^P$, where $r$ is the radius of a circle centered at $c$ containing all sources and $R = |x - c|$; for a box of side $2a$ and a well-separated target this gives $E_P \lesssim \bigl(\sqrt{2}\,a / 3a\bigr)^P = \bigl(\sqrt{2}/3\bigr)^P$.
6 Special case: sources and targets are separate. Multipole expansion truncated to $P+1$ terms, costs:
Evaluate $\alpha_p = -\frac{1}{p} \sum_{j=1}^{N} q_j (y_j - c)^p$ for $p = 0, 1, \dots, P$ (with $\alpha_0 = \sum_j q_j$): cost is $O(NP)$.
Evaluate $u_i = \alpha_0 \log(x_i - c) + \sum_{p=1}^{P} \dfrac{\alpha_p}{(x_i - c)^p}$: cost is $O(MP)$.
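The two-stage evaluation can be sketched as follows; `outgoing_expansion` and `eval_expansion` are hypothetical names, and the coefficient formula is the $\alpha_p = -\frac{1}{p}\sum_j q_j (y_j - c)^p$ derived above:

```python
import numpy as np

def outgoing_expansion(sources, charges, center, P):
    """Coefficients alpha_0, ..., alpha_P of the multipole expansion about
    `center`: alpha_0 = sum_j q_j, alpha_p = -(1/p) sum_j q_j (y_j - c)^p.
    Cost is O(N P)."""
    z = sources - center
    alpha = np.empty(P + 1, dtype=complex)
    alpha[0] = charges.sum()
    for p in range(1, P + 1):
        alpha[p] = -(charges * z**p).sum() / p
    return alpha

def eval_expansion(alpha, center, targets):
    """Evaluate u(x) ~ alpha_0 log(x - c) + sum_p alpha_p / (x - c)^p at
    (well-separated) targets.  Cost is O(M P)."""
    w = targets - center
    u = alpha[0] * np.log(w)
    for p in range(1, len(alpha)):
        u += alpha[p] / w**p
    return u
```

With sources within radius $r$ of $c$ and targets at distance $R$, the truncation error decays like $(r/R)^P$, so a modest $P$ already reaches near machine precision for well-separated targets.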
7 We write the map in matrix form as $A \approx B\,C$, so that $u \approx B\,C\,q$, where
$q = [q_j]_{j=1}^{N} \in \mathbb{R}^{N}$ is the vector of sources,
$u = [u_i]_{i=1}^{M} \in \mathbb{C}^{M}$ is the vector of potentials,
$\hat{q} = [\hat{q}_p]_{p=0}^{P} \in \mathbb{C}^{P+1}$ is the outgoing expansion (the "multipole coefficients"),
$A \in \mathbb{C}^{M \times N}$ with $A_{ij} = \log(x_i - y_j)$,
$C \in \mathbb{C}^{(P+1) \times N}$ is the outgoing-from-sources map: $C_{pj} = 1$ for $p = 0$, and $C_{pj} = -(1/p)(y_j - c)^p$ for $p \neq 0$,
$B \in \mathbb{C}^{M \times (P+1)}$ is the targets-from-outgoing map: $B_{ip} = \log(x_i - c)$ for $p = 0$, and $B_{ip} = 1/(x_i - c)^p$ for $p \neq 0$.
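As a sanity check on the factorization $A \approx B\,C$, one can assemble all three matrices explicitly for a small made-up configuration (all sizes and point sets below are our own choices) and measure the error:

```python
import numpy as np

# Sources clustered near c = 0, targets well-separated to the right.
rng = np.random.default_rng(3)
N, M, P = 60, 40, 18
c = 0.0 + 0.0j
ys = 0.3 * (rng.random(N) - 0.5) + 0.3j * (rng.random(N) - 0.5)   # sources
xs = 2.0 + rng.random(M) + 1j * rng.random(M)                     # targets
q = rng.standard_normal(N)

A = np.log(xs[:, None] - ys[None, :])           # A_ij = log(x_i - y_j), M x N
C = np.empty((P + 1, N), dtype=complex)         # C_0j = 1, C_pj = -(1/p)(y_j - c)^p
C[0] = 1.0
for p in range(1, P + 1):
    C[p] = -((ys - c) ** p) / p
B = np.empty((M, P + 1), dtype=complex)         # B_i0 = log(x_i - c), B_ip = (x_i - c)^(-p)
B[:, 0] = np.log(xs - c)
for p in range(1, P + 1):
    B[:, p] = (xs - c) ** (-p)

err = np.abs(A @ q - B @ (C @ q)).max()         # A is close to the rank-(P+1) product B C
```

Although $A$ has $MN$ entries, applying it via $B\,(C\,q)$ touches only about $(M + N)(P+1)$ numbers, which is the point of the factorization.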
8 Definition: Let $\Omega$ be a square with center $c = (c_1, c_2)$ and side length $2a$. Then we say that a point $x \in \mathbb{R}^2$ is well-separated from $\Omega$ if $\max(|x_1 - c_1|, |x_2 - c_2|) \geq 3a$. [Figure: the box $\Omega$ of side $2a$, the concentric dashed square of side $6a$, sources $y_j$ inside $\Omega$, and targets $x_i$ outside.] Any point on or outside of the dashed square is well-separated from $\Omega$. Consequently, the points $x_2$, $x_3$, and $x_4$ are well-separated from $\Omega$, but $x_1$ is not.
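The definition translates into a one-line predicate; a minimal sketch (the function name is ours):

```python
def is_well_separated(x, c, a):
    """True if the point x = (x1, x2) is well-separated from the square of
    side 2a centered at c = (c1, c2), i.e. lies on or outside the concentric
    dashed square of side 6a:  max(|x1 - c1|, |x2 - c2|) >= 3a."""
    return max(abs(x[0] - c[0]), abs(x[1] - c[1])) >= 3 * a
```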
9 Single-level Barnes-Hut Suppose that we seek to evaluate all pairwise interactions between the N particles shown above. We suppose the particles are more or less evenly distributed for now.
10 Single-level Barnes-Hut. Place a square grid of boxes on top of the computational box. Assume each box holds about $N_{\text{box}}$ particles (so there are about $N/N_{\text{box}}$ boxes).
11 Single-level Barnes-Hut How do you evaluate the potentials at the blue locations?
12 Single-level Barnes-Hut How do you evaluate the potentials at the blue locations? Directly evaluate interactions with the particles close by. For all long-distance interactions, use the out-going expansion!
13 Cost of the single-level Barnes-Hut algorithm: Let $N_{\text{box}}$ denote the number of particles in a box. For each particle, we need to do the following:
Step 1: evaluate the $P$ outgoing moments: cost $\sim P$.
Step 2: evaluate potentials from outgoing expansions: cost $\sim \bigl((N/N_{\text{box}}) - 9\bigr)\,P$.
Step 3: evaluate potentials from close particles: cost $\sim 9\,N_{\text{box}}$.
Using that $P$ is a smallish constant, we find $\text{cost} \sim \dfrac{N^2}{N_{\text{box}}} + N\,N_{\text{box}}$. Set $N_{\text{box}} \sim N^{1/2}$ to obtain $\text{cost} \sim N^{1.5}$. We're doing better than $O(N^2)$, but still not great.
14 Notation for single-level Barnes-Hut: Partition the box $\Omega$ into smaller boxes and label them. For a box $\tau$, let $L^{(\text{nei})}_{\tau}$ denote its neighbors, and $L^{(\text{far})}_{\tau}$ denote the remaining boxes.
15 The box $\tau = 21$ is marked with red.
16 $L^{(\text{nei})}_{21} = \{12, 13, 14, 20, 22, 28, 29, 30\}$ are the blue boxes.
17 $L^{(\text{far})}_{21}$ consists of the remaining boxes, marked with green.
18 Let $J_{\tau}$ denote an index vector marking which particles belong to $\tau$: $j \in J_{\tau} \iff x_j$ is in box $\tau$. Set
$q_{\tau} = q(J_{\tau})$, the sources in $\tau$;
$\hat{q}_{\tau} \in \mathbb{C}^{P+1}$, the outgoing expansion for $\tau$ (the "multipole coefficients");
$u_{\tau} = u(J_{\tau})$, the potentials in $\tau$.
Then, for a box $\sigma$ that is well-separated from $\tau$, $u_{\sigma} = A_{\sigma,\tau}\, q_{\tau} \approx B_{\sigma,\tau}\, \hat{q}_{\tau}$, where $\hat{q}_{\tau} = C_{\tau}\, q_{\tau}$.
Recall: $P$ is an integer parameter balancing cost versus accuracy: $P$ large $\Rightarrow$ high accuracy and high cost.
19 Single-level Barnes-Hut.
Compute the outgoing expansions on all boxes:
  loop over all boxes $\tau$
    $\hat{q}_{\tau} = C_{\tau}\, q(J_{\tau})$
  end loop
Evaluate the far-field potentials; each box $\tau$ aggregates the contributions from all well-separated boxes:
  $u = 0$
  loop over all boxes $\tau$
    loop over all $\sigma \in L^{(\text{far})}_{\tau}$ (i.e. all $\sigma$ that are well-separated from $\tau$)
      $u(J_{\tau}) = u(J_{\tau}) + B_{\tau,\sigma}\, \hat{q}_{\sigma}$
    end loop
  end loop
Evaluate the near-field interactions:
  loop over all boxes $\tau$
    $u(J_{\tau}) = u(J_{\tau}) + A(J_{\tau}, J_{\tau})\, q(J_{\tau}) + \sum_{\sigma \in L^{(\text{nei})}_{\tau}} A(J_{\tau}, J_{\sigma})\, q(J_{\sigma})$
  end loop
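Putting the pieces together, a complete single-level evaluator fits on a page. This is a sketch under our own bookkeeping conventions (complex points in the unit square, boxes stored in a dict), not the author's code. Only the real part of the output, the harmonic potential, is compared against direct summation: truncation and the branch cut of the complex logarithm can shift imaginary parts by multiples of $2\pi$.

```python
import numpy as np

def single_level_bh(points, charges, m, P):
    """Single-level Barnes-Hut for u_i = sum_{j != i} q_j log(x_i - x_j),
    with points given as complex numbers in the unit square and an
    m-by-m grid of boxes."""
    N = len(points)
    bi = np.minimum((points.real * m).astype(int), m - 1)
    bj = np.minimum((points.imag * m).astype(int), m - 1)
    boxes = {}                                   # box index -> particle indices
    for k in range(N):
        boxes.setdefault((bi[k], bj[k]), []).append(k)
    h = 1.0 / m                                  # box side length (2a)
    centers = {b: (b[0] + 0.5) * h + 1j * (b[1] + 0.5) * h for b in boxes}

    # Outgoing expansions: alpha_0 = sum q_j, alpha_p = -(1/p) sum q_j (y_j - c)^p.
    alpha = {}
    for b, idx in boxes.items():
        z = points[idx] - centers[b]
        a = np.empty(P + 1, dtype=complex)
        a[0] = charges[idx].sum()
        for p in range(1, P + 1):
            a[p] = -(charges[idx] * z**p).sum() / p
        alpha[b] = a

    u = np.zeros(N, dtype=complex)
    for b, idx in boxes.items():
        for s, jdx in boxes.items():
            if max(abs(b[0] - s[0]), abs(b[1] - s[1])) <= 1:
                # Near field: direct evaluation, skipping the i == j term.
                for i in idx:
                    for j in jdx:
                        if i != j:
                            u[i] += charges[j] * np.log(points[i] - points[j])
            else:
                # Far field: evaluate the outgoing expansion of box s.
                w = points[idx] - centers[s]
                u[idx] += alpha[s][0] * np.log(w)
                for p in range(1, P + 1):
                    u[idx] += alpha[s][p] / w**p
    return u
```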
20 To get the asymptotic cost down further, we need a hierarchical tree structure on the computational domain: Level 0, Level 1, Level 2, Level 3, ... [Figure: the square $\Omega$ subdivided into a uniform quadtree, one panel per level.]
21 We are given N locations and seek the potential at each location.
22 How do you find the potential at the locations marked in blue?
23 Tessellate the remainder of the domain using as large boxes as you can, with the constraint that the target box has to be well-separated from every box that is used.
24 Then replace the original charges in each well-separated box by the corresponding multipole expansion. The work required to evaluate one potential is now O(log N).
25 Cost of the multi-level Barnes-Hut algorithm: Suppose there are $L$ levels in the tree, and that there are about $N_{\text{box}}$ particles in each box, so that $L \approx \log_4(N/N_{\text{box}})$. We let $N_{\text{box}}$ be a fixed number (say $N_{\text{box}} \approx 200$), so $L \sim \log(N)$.
Observation: on each level, there are at most 27 well-separated boxes to consider.
For each particle $x_i$, we need to do the following:
Step 1: evaluate the $P$ outgoing moments for the $L$ boxes holding $x_i$: cost $\sim L\,P$.
Step 2: evaluate potentials from outgoing expansions: cost $\sim 27\,L\,P$.
Step 3: evaluate potentials from neighbors: cost $\sim 9\,N_{\text{box}}$.
Using that $P$ is a smallish constant (say $P = 20$), we find $\text{cost} \sim N\,L \sim N \log(N)$. This is not bad.
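The three cost models (direct, single-level, multi-level) are easy to compare numerically. A back-of-the-envelope sketch; the constants follow the per-particle tallies above and are rough:

```python
import math

def model_costs(N, N_box=200, P=20):
    """Rough operation counts for the three strategies; constants follow
    the per-particle tallies on slides 13 and 25."""
    direct = N * N                                    # dense summation
    single = P * N * N / N_box + 9 * N * N_box        # single-level Barnes-Hut
    L = max(1, round(math.log(N / N_box, 4)))         # quadtree depth, L ~ log_4(N / N_box)
    multi = N * (L * P + 27 * L * P + 9 * N_box)      # multi-level Barnes-Hut
    return direct, single, multi
```

For $N = 10^6$ this gives roughly $10^{12}$, $10^{11}$, and $5 \cdot 10^9$ operations respectively.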
26 Multi-level Barnes-Hut.
Compute the outgoing expansions on all boxes:
  loop over all boxes $\tau$ on all levels
    $\hat{q}_{\tau} = C_{\tau}\, q(J_{\tau})$
  end loop
For each box $\tau$, tessellate $\Omega$ into a minimal collection of boxes from which $\tau$ is well-separated, and evaluate the far-field potential:
  $u = 0$
  loop over all boxes $\tau$
    loop over all $\sigma \in L^{(\text{BH})}_{\tau}$ (i.e. all $\sigma$ in the tessellation, each well-separated from $\tau$)
      $u(J_{\tau}) = u(J_{\tau}) + B_{\tau,\sigma}\, \hat{q}_{\sigma}$
    end loop
  end loop
Evaluate the near-field interactions:
  loop over all leaf boxes $\tau$
    $u(J_{\tau}) = u(J_{\tau}) + A(J_{\tau}, J_{\tau})\, q(J_{\tau}) + \sum_{\sigma \in L^{(\text{nei})}_{\tau}} A(J_{\tau}, J_{\sigma})\, q(J_{\sigma})$
  end loop
27 In the Barnes-Hut method, there is a need to construct, for each box $\tau$, the list $L^{(\text{BH})}_{\tau}$ which identifies a minimal tessellation of $\Omega$ into boxes from which $\tau$ is well-separated. Example: the box $\tau$ is marked with blue. $L^{(\text{BH})}_{\tau}$ = {???}
28 How do you find the tessellation?
29 How do you find the tessellation? Hint: Consider which level each box belongs to.
30 [Figure: the boxes of the tessellation, grouped by the level each box belongs to.]
31 Definition: For a box $\tau$, define its interaction list $L^{(\text{int})}_{\tau}$ as the set of all boxes $\sigma$ such that:
1. $\sigma$ and $\tau$ populate the same level of the tree.
2. $\sigma$ and $\tau$ are well-separated.
3. The parents of $\sigma$ and $\tau$ touch.
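With an explicit quadtree indexing, the three conditions become a short function. The `(level, i, j)` integer-coordinate indexing below is our own convention (the slides use a linear box numbering), so the returned boxes are grid coordinates, not the slides' box numbers:

```python
def interaction_list(level, i, j):
    """Interaction list of box tau = (i, j) on the given level: boxes on the
    same level whose parents touch tau's parent, but which are themselves
    well-separated from tau.  Boxes on level l carry integer coordinates
    0 <= i, j < 2**l."""
    n = 2 ** level
    out = []
    # Children of the (at most 3 x 3) parent boxes touching tau's parent.
    for pi in range(max(0, i // 2 - 1), min(n // 2, i // 2 + 2)):
        for pj in range(max(0, j // 2 - 1), min(n // 2, j // 2 + 2)):
            for ci in (2 * pi, 2 * pi + 1):
                for cj in (2 * pj, 2 * pj + 1):
                    # Well-separated on the same level: index distance >= 2.
                    if max(abs(ci - i), abs(cj - j)) >= 2:
                        out.append((ci, cj))
    return out
```

An interior box has exactly 27 boxes in its interaction list ($6 \times 6$ candidate children minus the $3 \times 3$ near neighbors), which is the "27" in the multi-level cost estimate.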
32 Example: consider the box $\tau$ marked in the figure.
33 The green boxes are the neighbors of $\tau$'s parent.
34 The green boxes are the children of the neighbors of $\tau$'s parent.
35 Only some of these are well-separated from $\tau$. We find: $L^{(\text{int})}_{\tau}$ = {6, 8, 1, 15, 16, 17, 18, 19, 20, 21}
36 Example: consider the box $\tau$ marked in the figure.
37 The green boxes are the neighbors of $\tau$'s parent.
38 The green boxes are the children of the neighbors of $\tau$'s parent.
39 Only some of these are well-separated from $\tau$. We find: $L^{(\text{int})}_{\tau}$ = {26, 28,, 6, 7, 2,,, 5, 8, 9, 50, 51, 52, 5}
40 Example: consider the box $\tau$ marked in the figure.
41 The green boxes are the neighbors of $\tau$'s parent.
42 The green boxes are the children of the neighbors of $\tau$'s parent.
43 Only some of these are well-separated from $\tau$. We find: $L^{(\text{int})}_{\tau}$ = {106, 107, 108, 109, 11, 115, 116, 117, 18, 19, 10, 11, 18, 185, 187, 188, 189, 150, 151, 152, 15, 15, 155, 156, 157, 16, 165}
44 Now let $L^{(\text{anc})}_{\tau}$ denote the list of all ancestors of $\tau$ (parent, grand-parent, great-grand-parent, etc.). Then
$$L^{(\text{BH})}_{\tau} = L^{(\text{int})}_{\tau} \cup \bigcup_{\sigma \in L^{(\text{anc})}_{\tau}} L^{(\text{int})}_{\sigma}.$$
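The union formula can be evaluated by walking up the tree, accumulating an interaction list at each level. Again this uses our own `(level, i, j)` quadtree coordinates rather than the slides' linear box numbers:

```python
def bh_list(level, i, j):
    """L^(BH) for box tau = (i, j) on `level`: the union of the interaction
    lists of tau and of all its ancestors.  Boxes on level l carry integer
    coordinates 0 <= i, j < 2**l; entries are returned as (level, i, j)."""
    out = []
    li, lj, l = i, j, level
    while l >= 1:
        n = 2 ** l
        # Interaction list of the current box (li, lj) on level l:
        # children of parents touching its parent, minus near neighbors.
        for pi in range(max(0, li // 2 - 1), min(n // 2, li // 2 + 2)):
            for pj in range(max(0, lj // 2 - 1), min(n // 2, lj // 2 + 2)):
                for ci in (2 * pi, 2 * pi + 1):
                    for cj in (2 * pj, 2 * pj + 1):
                        if max(abs(ci - li), abs(cj - lj)) >= 2:
                            out.append((l, ci, cj))
        li, lj, l = li // 2, lj // 2, l - 1       # move up to the parent
    return out
```

A useful self-check: the boxes of $L^{(\text{BH})}_{\tau}$, together with $\tau$ and its neighbors on the finest level, tile the whole domain, which is the "minimal tessellation" property.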
45
46 L (anc) 161 = {, 10, 0} L (BH) 161 = L (int) L (int) 10 L (int) 0 L (int) 161 = {6, 8, 1, 15, 16, 17, 18, 19, 20, 21, 26, 28,, 6, 7, 2,,, 5, 8, 9, 50, 51, 52, 5, 106, 107, 108, 109, 11, 115, 116, 117, 18, 19, 10, 11, 150, 151, 152, 15, 15, 155, 156, 157, 16, 165, 18, 185, 187, 188, 189}
Lecture 2: The Fast Multipole Method
CBMS Conference on Fast Direct Solvers, Dartmouth College, June 23 to June 27, 2014. Gunnar Martinsson, The University of Colorado at Boulder.
More informationCS 542G: The Poisson Problem, Finite Differences
CS 542G: The Poisson Problem, Finite Differences Robert Bridson November 10, 2008 1 The Poisson Problem At the end last time, we noticed that the gravitational potential has a zero Laplacian except at
More informationExtreme Point Solutions for Infinite Network Flow Problems
Extreme Point Solutions for Infinite Network Flow Problems H. Edwin Romeijn Dushyant Sharma Robert L. Smith January 3, 004 Abstract We study capacitated network flow problems with supplies and demands
More information3. Numerical Quadrature. Where analytical abilities end...
3. Numerical Quadrature Where analytical abilities end... Numerisches Programmieren, Hans-Joachim Bungartz page 1 of 32 3.1. Preliminary Remarks The Integration Problem Numerical quadrature denotes numerical
More informationPHYSICS DEPARTMENT, PRINCETON UNIVERSITY PHYSICS 301 MIDTERM EXAMINATION. October 22, 2003, 10:00 10:50 am, Jadwin A06 SOLUTIONS
PHYSICS DEPARTMENT, PRINCETON UNIVERSITY PHYSICS 31 MIDTERM EXAMINATION October 22, 23, 1: 1:5 am, Jadwin A6 SOUTIONS This exam contains two problems. Work both problems. The problems count equally although
More informationCurved spacetime tells matter how to move
Curved spacetime tells matter how to move Continuous matter, stress energy tensor Perfect fluid: T 1st law of Thermodynamics Relativistic Euler equation Compare with Newton =( c 2 + + p)u u /c 2 + pg j
More information5.6 Logarithmic and Exponential Equations
SECTION 5.6 Logarithmic and Exponential Equations 305 5.6 Logarithmic and Exponential Equations PREPARING FOR THIS SECTION Before getting started, review the following: Solving Equations Using a Graphing
More informationn f(k) k=1 means to evaluate the function f(k) at k = 1, 2,..., n and add up the results. In other words: n f(k) = f(1) + f(2) f(n). 1 = 2n 2.
Handout on induction and written assignment 1. MA113 Calculus I Spring 2007 Why study mathematical induction? For many students, mathematical induction is an unfamiliar topic. Nonetheless, this is an important
More informationPhylogenetic Networks, Trees, and Clusters
Phylogenetic Networks, Trees, and Clusters Luay Nakhleh 1 and Li-San Wang 2 1 Department of Computer Science Rice University Houston, TX 77005, USA nakhleh@cs.rice.edu 2 Department of Biology University
More informationNewton s Method and Localization
Newton s Method and Localization Workshop on Analytical Aspects of Mathematical Physics John Imbrie May 30, 2013 Overview Diagonalizing the Hamiltonian is a goal in quantum theory. I would like to discuss
More informationGraphical Models and Kernel Methods
Graphical Models and Kernel Methods Jerry Zhu Department of Computer Sciences University of Wisconsin Madison, USA MLSS June 17, 2014 1 / 123 Outline Graphical Models Probabilistic Inference Directed vs.
More informationOn Procedures Controlling the FDR for Testing Hierarchically Ordered Hypotheses
On Procedures Controlling the FDR for Testing Hierarchically Ordered Hypotheses Gavin Lynch Catchpoint Systems, Inc., 228 Park Ave S 28080 New York, NY 10003, U.S.A. Wenge Guo Department of Mathematical
More informationHomework #5 Solutions
Homework #5 Solutions p 83, #16. In order to find a chain a 1 a 2 a n of subgroups of Z 240 with n as large as possible, we start at the top with a n = 1 so that a n = Z 240. In general, given a i we will
More informationSpatial Misalignment
November 4, 2009 Objectives Land use in Madagascar Example with WinBUGS code Objectives Learn how to realign nonnested block level data Become familiar with ways to deal with misaligned data which are
More informationRandomized Algorithms III Min Cut
Chapter 11 Randomized Algorithms III Min Cut CS 57: Algorithms, Fall 01 October 1, 01 11.1 Min Cut 11.1.1 Problem Definition 11. Min cut 11..0.1 Min cut G = V, E): undirected graph, n vertices, m edges.
More informationUnranked Tree Automata with Sibling Equalities and Disequalities
Unranked Tree Automata with Sibling Equalities and Disequalities Wong Karianto Christof Löding Lehrstuhl für Informatik 7, RWTH Aachen, Germany 34th International Colloquium, ICALP 2007 Xu Gao (NFS) Unranked
More informationOn zero-sum partitions and anti-magic trees
Discrete Mathematics 09 (009) 010 014 Contents lists available at ScienceDirect Discrete Mathematics journal homepage: wwwelseviercom/locate/disc On zero-sum partitions and anti-magic trees Gil Kaplan,
More information