Solving Boundary Value Problems (with Gaussians)


Michael McCourt
Mathematics and Computer Science Division, Argonne National Laboratory
mccomic@mcs.anl.gov
IIT Meshfree Seminar, May 22, 2012

What is a boundary value problem?

Definition: A differential equation with constraints on the boundary.

Boundary value problems

Differential Equation Example: u''(x) + u(x) = 0 has the solution u(x) = A sin x + B cos x.

Boundary Value Problem Example: u''(x) + u(x) = 0, u(0) = u(π) = 0 has the solution u(x) = sin x.

Unlike initial value problems, boundary value problems will not always have a solution. The initial value problem

    u''(x) + u(x) = 0,   u(0) = a,   u'(0) = b

has a solution for all a, b ∈ R. The equivalent boundary value problem

    u''(x) + u(x) = 0,   u(0) = a,   u(π) = b

only has a unique solution for some a and b:
- a = 0 and b = 0 has infinitely many solutions
- a = 0 and b ≠ 0 has no solutions

Boundary value problems

How do we solve boundary value problems analytically? Usually, we
1. Find possible solutions without the boundary values, and
2. Eliminate all the solutions that don't satisfy the boundary conditions.
This is useful for some problems, including many involving kernels. The real problem is the first step, which requires knowledge of possible solutions to the differential equation.

General boundary value problems

General linear problems are of the form

    Lu = f   (differential equation)
    Bu = g   (boundary conditions)

where L is some differential operator (such as d²/dx²) and B is some operator of lower degree. For some L and f we know the answer, but in general we will not be able to write the solution on a piece of paper.

Intractable boundary value problems

Consider the problem

    u''(x) + u(x) = e^x tan(x),   u(0) = 1,   u(1) = 0

This might have an answer, but I don't know what it is. Even if it has an answer, it won't be easy to find. Most importantly, I don't want to waste my time looking for an answer if I might not be able to find one. What can be done to find a solution?

Numerical boundary value problems

Many methods exist for numerically solving boundary value problems. One such method, called finite differences, consists of approximating derivatives using divided differences:

    u'(x)  = (u(x + h) - u(x)) / h + O(h)
    u'(x)  = (u(x + h) - u(x - h)) / (2h) + O(h²)
    u''(x) = (u(x + h) - 2u(x) + u(x - h)) / h² + O(h²)

Finite differences

A finite difference discretization allows us to approximate the solution to a boundary value problem on a uniform grid. Suppose u''(x) + u(x) = e^x tan(x) with u(0) = 1, u(1) = 0 and N = 5:

    x1 = 0    x2 = 0.25    x3 = 0.5    x4 = 0.75    x5 = 1
    u1 ≈ u(x1)    u2 ≈ u(x2)    u3 ≈ u(x3)    u4 ≈ u(x4)    u5 ≈ u(x5)

With h = 0.25, discretizing u''(x) is required on the interior of the domain [0, 1]:

    u2'' ≈ (u1 - 2u2 + u3) / h²
    u3'' ≈ (u2 - 2u3 + u4) / h²
    u4'' ≈ (u3 - 2u4 + u5) / h²

For this problem, the boundary operator B does not need to be discretized. The discretization results in a system of linear equations:

    [ 1                                ] [u1]   [ 1             ]
    [ 1/h²  -2/h²+1  1/h²              ] [u2]   [ e^{x2} tan(x2)]
    [       1/h²  -2/h²+1  1/h²        ] [u3] = [ e^{x3} tan(x3)]
    [             1/h²  -2/h²+1  1/h²  ] [u4]   [ e^{x4} tan(x4)]
    [                              1   ] [u5]   [ 0             ]

Solving this system will give us an approximate solution u on the grid.
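To make this concrete, here is a minimal MATLAB sketch (not from the talk) that assembles and solves exactly this system; the only free choice is the grid size N.

    % Finite difference solve of u''(x) + u(x) = e^x tan(x), u(0) = 1, u(1) = 0
    N = 5;                            % number of grid points, as in the example
    x = linspace(0, 1, N)';           % uniform grid with spacing h
    h = x(2) - x(1);
    A = zeros(N);  rhs = zeros(N, 1);
    A(1, 1) = 1;   rhs(1) = 1;        % boundary row: u(0) = 1
    for i = 2:N-1                     % interior rows: u'' + u = f
        A(i, i-1) = 1/h^2;
        A(i, i)   = -2/h^2 + 1;
        A(i, i+1) = 1/h^2;
        rhs(i)    = exp(x(i)) * tan(x(i));
    end
    A(N, N) = 1;   rhs(N) = 0;        % boundary row: u(1) = 0
    u = A \ rhs;                      % approximate solution on the grid

Increasing N refines the grid; the next slides quantify how fast the approximation improves.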

Finite differences

If we don't have the true solution, we don't know how much error is present. Assuming that we are converging to the true solution as N → ∞ (not always valid), we will see:

    N       ratio_N
    17      1.9351
    33      1.9818
    65      1.9954
    129     1.9988
    257     1.9997
    513     1.9999
    1025    2.0000

Here u_N(0.5) denotes the value at x = 0.5 of the N-point finite difference solution, and with grid sizes N_k = 17, 33, 65, ... (each refinement roughly doubling the last),

    ratio_N = log2( (u_{N_{k-1}}(0.5) - u_{N_{k-2}}(0.5)) / (u_{N_k}(0.5) - u_{N_{k-1}}(0.5)) )

Recall that

    u''(x) = (u(x - h) - 2u(x) + u(x + h)) / h² + O(h²)

so the O(h²) truncation error is why the ratio approaches 2.

Finite differences are awesome because:
- They are easy to understand (just calculus)
- They are easy to implement (~10 lines of MATLAB)
- They are computationally cheap (more on this later)

Finite differences are weak because:
- They are best on a uniform lattice (a killer in higher dimensions)
- They handle discontinuities poorly (2 derivatives required)
- They don't preserve physical properties (positivity, e.g.)

Computational Cost

What does it mean for something to be computationally reasonable? In the simplest sense, we expect linear systems of size N × N to be solved faster than O(N³). We can always solve systems with O(N³) work, but we need to do less for almost any real application.
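The convergence table above can be reproduced in a few lines. In this sketch, fdsolve is a hypothetical helper (e.g., the script from the previous slide wrapped as a function) that returns the N-point finite difference value at x = 0.5.

    % Empirical order of convergence at x = 0.5; fdsolve(N) is a hypothetical
    % helper returning u_N(0.5) for the N-point finite difference solution
    Ns = [5 9 17 33 65 129 257 513 1025];    % x = 0.5 is a grid point for each
    vals = arrayfun(@fdsolve, Ns);
    for k = 3:numel(Ns)
        ratio = log2(abs(vals(k-1) - vals(k-2)) / abs(vals(k) - vals(k-1)));
        fprintf('N = %5d   ratio = %.4f\n', Ns(k), ratio);
    end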

Computational Cost

The most common reason why a discretization scheme is computationally efficient is that the linear system is sparse:
- A sparse matrix can be stored with less than O(N²) elements
- A sparse matrix can be multiplied by a vector with less than O(N²) work
Using this sparsity, an iterative linear solver is probably appropriate. That is a discussion for another day. (A small sparse-storage sketch follows the next slide.)

Other PDE solvers

When working in weird domains, it is pretty much impossible to use finite differences. Instead the domain is often cut into a tessellation. After the domain has been cut up into pieces, we still need to somehow discretize the derivatives.

One common choice is finite elements:
- Piecewise polynomials are chosen on each triangle, and their summation is the approximate solution
- Weak formulation: allows for solutions with less smoothness
- Sparsity: polynomials have support only on adjacent elements
- Higher order polynomials may be chosen

Another common choice is finite volumes:
- An integral involving Green's theorem is approximated over each triangle to describe the differential equation in each cell
- Smoothing: the computed values are cell averages
- Conservation laws: preserved naturally by design
- Shocks and discontinuities are easily handled
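Circling back to sparsity: the second-difference matrix from the earlier slides is tridiagonal, so MATLAB's sparse format stores only its O(N) nonzero entries. A minimal sketch:

    % Sparse storage of the 1D second-difference matrix: ~3N nonzeros, not N^2
    N = 1000;  h = 1/(N-1);
    e = ones(N, 1);
    D2 = spdiags([e -2*e e], -1:1, N, N) / h^2;   % sparse tridiagonal matrix
    whos D2                                       % memory grows like O(N)
    v = D2 * rand(N, 1);                          % matrix-vector product in O(N) work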

Smoothness: Limitations and Opportunities

Each of these techniques (FD, FV, FE) is a local method. This is great for problems with limited smoothness, but if the solution has a lot of smoothness, we should take advantage of that. Recall the finite difference from earlier:

    u''(x) = (u(x + h) - 2u(x) + u(x - h)) / h² + O(h²)

This equation is valid if the solution u has 4 smooth derivatives. But if u has 20 smooth derivatives, then we could use a better approximation.

If our solution has more smoothness than we are leveraging, we are using more data points than necessary. Consider a piecewise polynomial fit of increasing order; entries are the log10 of the errors:

    N \ Order      1       2       3       4       5
    10           -3.6    -4.7    -5.2    -5.5    -5.9
    20           -4.3    -5.9    -6.6    -7.7    -7.8
    40           -4.9    -6.9    -8.0    -9.9    -9.9
    80           -5.5    -7.9    -9.3   -11.8   -11.9
    160          -6.2    -8.8   -10.7   -13.3   -13.9
    320          -6.8    -9.8   -12.0   -14.8   -15.9

Notice, e.g., that the same accuracy can be reached using N ≈ 250 for Order 1 or N ≈ 20 for Order 3. It is valuable to consider high order methods, for some problems.

When you take this idea to its final conclusion, you get spectral methods: methods with exponential convergence, error = O(e^{-N}). Contrast this with the second-order finite difference approximation, which produced error = O(N^{-2}). Different people use the term "spectral methods" differently, so other definitions exist; I imagine this one will be sufficient for our purposes.

Question: What spectral methods exist?

Spectral Methods

Polynomial interpolation is spectrally accurate:

    s(x) = a0 + a1 x + a2 x² + ... + a_{N-1} x^{N-1} = Σ_{k=0}^{N-1} a_k x^k

Given function values, the polynomial interpolant can be computed to machine precision. The point location is significant, but beyond our scope.
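A quick MATLAB illustration of that spectral accuracy (a sketch, using barycentric interpolation at Chebyshev points because it is numerically stable; the target function e^x is an arbitrary smooth example):

    % Chebyshev interpolation of a smooth function: error near machine precision
    N = 20;
    x = cos(pi*(0:N)'/N);                          % Chebyshev points on [-1,1]
    w = [1/2; ones(N-1,1); 1/2] .* (-1).^(0:N)';   % barycentric weights
    f = exp(x);                                    % samples of the target function
    xx = linspace(-0.99, 0.99, 500)';              % evaluation points (off the nodes)
    C = 1 ./ (xx - x');                            % barycentric kernel
    p = (C * (w .* f)) ./ (C * w);                 % interpolant evaluated at xx
    err = max(abs(p - exp(xx)))                    % drops exponentially with N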

Polynomial Collocation

Polynomial interpolation is spectrally accurate; how can we use this to solve boundary value problems?

Collocation: Assume that the solution is a degree N-1 polynomial, u(x) = Σ_{k=0}^{N-1} a_k x^k, and require the solution to match the boundary value problem at some chosen points. In the interpolation setting, we made the same assumption on s(x), but we enforced it slightly differently. Here, we need to apply L to our approximate BVP solution u(x).

Recall that the linear BVP we are considering is

    Lu = f,   x ∈ Ω
    Bu = g,   x ∈ ∂Ω

Under the assumption that u(x) = Σ_{k=0}^{N-1} a_k x^k, collocation yields the linear system

    [ B1  Bx1       ...  B(x1)^{N-2}       B(x1)^{N-1}      ] [ a0      ]   [ g(x1)      ]
    [ L1  Lx2       ...  L(x2)^{N-2}       L(x2)^{N-1}      ] [ a1      ]   [ f(x2)      ]
    [                        ...                            ] [ ...     ] = [ ...        ]
    [ L1  Lx_{N-1}  ...  L(x_{N-1})^{N-2}  L(x_{N-1})^{N-1} ] [         ]   [ f(x_{N-1}) ]
    [ B1  Bx_N      ...  B(x_N)^{N-2}      B(x_N)^{N-1}     ] [ a_{N-1} ]   [ g(x_N)     ]

Here we see the same incredibly fast convergence.
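A minimal MATLAB sketch of this collocation system for the earlier model problem u''(x) + u(x) = e^x tan(x), u(0) = 1, u(1) = 0. The Chebyshev point distribution is an assumption made here (the talk leaves the point choice open), and L is applied to each monomial analytically: L x^k = k(k-1)x^{k-2} + x^k.

    % Polynomial collocation for u'' + u = e^x tan(x), u(0) = 1, u(1) = 0
    N = 16;
    x = sort((cos(pi*(0:N-1)'/(N-1)) + 1) / 2);    % Chebyshev points on [0,1]
    k = 0:N-1;                                     % monomial degrees
    V  = x.^k;                                     % values of x^k at the points
    D2 = (k .* (k-1)) .* x.^max(k-2, 0);           % (x^k)''; k = 0,1 columns vanish
    A   = D2 + V;                                  % interior operator L = d^2/dx^2 + 1
    rhs = exp(x) .* tan(x);
    A(1, :) = V(1, :);   rhs(1) = 1;               % boundary row at x = 0
    A(N, :) = V(N, :);   rhs(N) = 0;               % boundary row at x = 1
    a = A \ rhs;                                   % polynomial coefficients
    ueval = @(xx) (xx(:).^k) * a;                  % evaluate the solution anywhere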

Kernel Collocation

Among the limitations of polynomial collocation: it does not transfer well to higher dimensions or weird domains. Fortunately, some great mathematicians invented kernel methods (as introduced yesterday). These have the power to beat polynomial methods in the best case, and are not subject to any mesh constraints.

Conducting kernel collocation is much the same as conducting polynomial collocation.

Polynomial assumption: the solution takes the form u(x) = Σ_k a_k p_k(x) with p_k(x) = x^{k-1}.
Kernel assumption: the solution takes the form u(x) = Σ_k a_k p_k(x) with p_k(x) = K(x, x_k).

For this talk, we will always assume we are working with the Gaussian:

    K(x, x_k) = e^{-ε²‖x - x_k‖²},   which in 1D is   e^{-ε²(x - x_k)²}

Comparing the kernel and polynomial collocation matrices shows similar structure. For u(x) = Σ_k a_k K(x, x_k), the polynomial matrix above becomes

    [ BK(x1, x1)       BK(x1, x2)       ...  BK(x1, x_{N-1})       BK(x1, x_N)      ]
    [ LK(x2, x1)       LK(x2, x2)       ...  LK(x2, x_{N-1})       LK(x2, x_N)      ]
    [                                   ...                                         ]
    [ LK(x_{N-1}, x1)  LK(x_{N-1}, x2)  ...  LK(x_{N-1}, x_{N-1})  LK(x_{N-1}, x_N) ]
    [ BK(x_N, x1)      BK(x_N, x2)      ...  BK(x_N, x_{N-1})      BK(x_N, x_N)     ]
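A sketch of this system in MATLAB (this is RBF-Direct: the Gaussian basis used as-is), again for u''(x) + u(x) = e^x tan(x), u(0) = 1, u(1) = 0. Differentiating the Gaussian analytically gives LK(x, x_k) = (4ε⁴(x - x_k)² - 2ε² + 1) K(x, x_k); the value ε = 3 is an arbitrary illustrative choice.

    % Gaussian kernel collocation (RBF-Direct); centers = collocation points
    N = 20;  ep = 3;                            % ep is the shape parameter epsilon
    x = linspace(0, 1, N)';
    D = x - x';                                 % pairwise differences x_j - x_k
    K = exp(-ep^2 * D.^2);                      % Gaussian kernel matrix
    A = (4*ep^4*D.^2 - 2*ep^2) .* K + K;        % rows of LK = K'' + K
    rhs = exp(x) .* tan(x);
    A([1 N], :) = K([1 N], :);                  % boundary rows: BK = K
    rhs(1) = 1;  rhs(N) = 0;
    a = A \ rhs;                                % kernel coefficients
    u = @(xx) exp(-ep^2 * (xx(:) - x').^2) * a; % evaluate u anywhere in [0,1]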

Kernel Collocation

For many of these kernel problems we also have a shape parameter ε which must be set in order to solve the problem. Choosing an ε is an open problem, but there is often an optimal choice. It is often the case, however, that the best value of ε is also a value for which the linear system is very dangerous to solve.

Ill-Conditioning in Kernel Collocation

Small ε values make the basis functions nearly indistinguishable. [Figure: Gaussian basis functions for (a) ε = 3, (b) ε = 1, (c) ε = 0.3.] As ε → 0 the basis functions become flatter, which produces a matrix that looks increasingly like a matrix of all ones (a low-rank matrix).

What is the difference between RBF-Direct and true RBF?

GaussQR

How is it that we can circumvent the Gaussian ill-conditioning? We use the (truncated) eigenfunction expansion of the Gaussian:

    e^{-ε²(x - z)²} = Σ_{n=0}^{M} λ_n φ_n(x) φ_n(z)

where

    λ_n = √(α² / (α² + δ² + ε²)) (ε² / (α² + δ² + ε²))^n,   φ_n(x) = γ_n e^{-δ²x²} H_n(βαx)

For 1D problems (including the previous example), we may use the RBF-QR algorithm and choose M > N. For larger problems, it is better to choose M < N and perform eigenfunction regression.
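The eigenfunctions themselves are cheap to evaluate with the Hermite three-term recurrence. In this sketch the relations for β, δ², and γ_n are an assumption taken from the GaussQR literature (Fasshauer and McCourt); the slide only states the general form.

    % Gaussian eigenfunctions phi_n(x) = gamma_n e^{-delta^2 x^2} H_n(beta*alpha*x)
    % beta, delta2, gamma_n below follow the GaussQR literature (an assumption)
    ep = 1;  alpha = 1;                        % shape and global scale parameters
    beta   = (1 + (2*ep/alpha)^2)^(1/4);
    delta2 = alpha^2/2 * (beta^2 - 1);
    x = linspace(-3, 3, 200)';
    t = beta * alpha * x;                      % argument of the Hermite polynomials
    M = 10;
    Phi = zeros(numel(x), M+1);                % columns phi_0, ..., phi_M
    H = ones(size(x));  Hm1 = zeros(size(x));  % H_0 = 1, H_{-1} = 0
    for n = 0:M
        gamma_n = sqrt(beta / (2^n * factorial(n)));
        Phi(:, n+1) = gamma_n * exp(-delta2 * x.^2) .* H;
        Hnew = 2*t.*H - 2*n*Hm1;               % H_{n+1} = 2t H_n - 2n H_{n-1}
        Hm1 = H;  H = Hnew;
    end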

Gaussian BVP Methods in 2D

Performing collocation in 2D with Gaussians (using an M < N eigenfunction expansion) is actually exactly the same as in 1D. The system matrix looks identical to the one above (with 2D points x_k), only now there will be many more boundary rows, because two points will no longer suffice. It is useful at this point to write the problem in a block setting. Using two block matrices to represent the interior (L) and boundary (B) portions of the problem

    Lu = f,   x ∈ Ω
    Bu = g,   x ∈ ∂Ω

gives us the overdetermined (least squares) system

    [ Φ_L ]       [ f ]
    [     ] a  =  [   ]
    [ Φ_B ]       [ g ]

Here we are changing our previous collocation assumption u(x) = Σ_k a_k K(x, x_k) to the M-term eigenfunction expansion

    u(x) = Σ_{k=1}^{M} a_k φ_k(x)

Laplacian Example on a Difficult Domain

One of the advantages of these kernel-based methods is their indifference to the problem domain. Consider the Laplacian ∇²u = 0 on the domain pictured on the slide. As expected, the eigenfunction expansion successfully solves the problem with spectral convergence. This technique is comparable to the Method of Fundamental Solutions, despite the fact that MFS requires a homogeneous problem whereas GaussQR does not.
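A compact 1D illustration of the overdetermined block system above. To keep the sketch self-contained, plain Gaussians on M < N centers stand in for the eigenfunctions φ_k (an assumption; the structure of the least-squares solve is the point).

    % Least-squares kernel collocation: N collocation points, M < N basis functions
    ep = 3;  N = 40;  M = 15;
    x  = linspace(0, 1, N)';                    % collocation points
    xc = linspace(0, 1, M)';                    % kernel centers
    D  = x - xc';                               % N x M differences
    K  = exp(-ep^2 * D.^2);
    PhiL = (4*ep^4*D.^2 - 2*ep^2) .* K + K;     % interior rows: L applied to basis
    PhiB = K([1 N], :);                         % boundary rows
    A   = [PhiL(2:N-1, :); PhiB];               % stacked block system
    rhs = [exp(x(2:N-1)) .* tan(x(2:N-1)); 1; 0];
    a   = A \ rhs;                              % backslash does a QR least-squares solve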

Tensor-product grids

It is generally assumed that as you move to higher dimensions, you will be unable to keep structured grids around. This is (part of) the curse of dimensionality. If you happen to have a problem on a tensor grid, you can take advantage of it by treating it as the tensor product of two 1D problems.

Suppose you have a matrix D^(2) which applies a second-derivative approximation to a vector of values at uniform 1D points:

    D^(2) [ v(x1)  ]   [ v''(x1)  ]
          [  ...   ] ≈ [   ...    ]
          [ v(x_N) ]   [ v''(x_N) ]

If you wanted to approximate the Laplacian ∇²u(x, y) = u_xx(x, y) + u_yy(x, y) on a uniform 2D grid, how could you use D^(2)?

    (D^(2) ⊗ I_N + I_N ⊗ D^(2)) [ u(x1, x1)   ]   [ ∇²u(x1, x1)   ]
                                [     ...     ]   [      ...      ]
                                [ u(x1, x_N)  ]   [ ∇²u(x1, x_N)  ]
                                [ u(x2, x1)   ] ≈ [ ∇²u(x2, x1)   ]
                                [     ...     ]   [      ...      ]
                                [ u(x2, x_N)  ]   [ ∇²u(x2, x_N)  ]
                                [     ...     ]   [      ...      ]
                                [ u(x_N, x_N) ]   [ ∇²u(x_N, x_N) ]

Method of Fundamental Solutions

As mentioned earlier, the MFS is a special method for solving homogeneous problems where the Green's function is available. This is not practical in general, but, when possible, the MFS is a powerful, meshfree approach. Traditional BVP solvers discretize the domain Ω, but the MFS moves the problem to the boundary, and discretizes only ∂Ω. This is preferable because ∂Ω is of lower dimension. Other boundary methods, which transfer the problem to the boundary, include boundary element and boundary integral methods.
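Returning to the tensor-product construction above: the Kronecker identity is one line of MATLAB. A minimal sketch (the test function sin(πx)sin(πy) is an arbitrary smooth example):

    % 2D Laplacian on a tensor-product grid via Kronecker products
    N = 50;  h = 1/(N-1);
    e  = ones(N, 1);
    D2 = spdiags([e -2*e e], -1:1, N, N) / h^2;  % 1D second-difference matrix
    I  = speye(N);
    Lap = kron(D2, I) + kron(I, D2);             % acts on u(:) for an N x N grid
    xg = linspace(0, 1, N);
    [X, Y] = ndgrid(xg, xg);
    U  = sin(pi*X) .* sin(pi*Y);                 % exact Laplacian: -2 pi^2 U
    LU = reshape(Lap * U(:), N, N);
    Ui = LU(2:N-1, 2:N-1);  Ue = U(2:N-1, 2:N-1);
    err = max(abs(Ui(:) + 2*pi^2*Ue(:)))         % O(h^2) in the interior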

Method of Fundamental Solutions

Much as we did when we first considered kernel methods, the MFS assumes that the solution to the problem

    Lu = 0,   x ∈ Ω
    Bu = g,   x ∈ ∂Ω

consists of a linear combination of basis functions

    u(x) = Σ_{k=1}^{M} a_k G(x, x_k)

The difference now is that G is chosen to automatically satisfy Lu = 0; therefore only the equation Bu = g needs to be solved to find a. The transfer of the problem to the boundary is part of the appeal of this method.

What hasn't yet been discussed is how to find the functions G which satisfy Lu = 0. For most L we have no idea what G is. For a very special set of problems (called elliptic problems), we know the Green's function G, which we will use as our kernel basis. Some useful Green's functions include (with r = ‖x - z‖):

    L           G(x, z) in R²     G(x, z) in R³
    ∇²          log r             1/r
    (∇²)²       r² log r          r
    ∇² - λ²     K_0(λr)           e^{-λr}/r
    ∇² + λ²     i H_0^(2)(λr)     e^{iλr}/r

Because many of these Green's functions have singularities, it is often necessary to choose points outside the problem domain to serve as sources (centers) for the kernels.
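A minimal MFS sketch for Laplace's equation on the unit disk, using G = log r and sources on a circle of radius 2 (both the boundary data g(x, y) = x and the source radius are illustrative choices; since g is harmonic, the exact solution is known):

    % Method of Fundamental Solutions: lap(u) = 0 on the unit disk, u = g on the boundary
    M  = 40;
    tb = 2*pi*(0:M-1)'/M;
    xb = [cos(tb), sin(tb)];                 % collocation points on the boundary
    ts = tb + pi/M;
    xs = 2 * [cos(ts), sin(ts)];             % source points outside the domain
    r  = @(p, q) sqrt((p(:,1) - q(:,1)').^2 + (p(:,2) - q(:,2)').^2);
    G  = log(r(xb, xs));                     % fundamental solution of the Laplacian
    g  = xb(:, 1);                           % boundary data g(x, y) = x
    a  = G \ g;                              % collocate on the boundary only
    xt = [0.3, 0.4];                         % a test point inside the disk
    err = abs(log(r(xt, xs)) * a - xt(1))    % exact solution is u(x, y) = x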

Method of Fundamental Solutions

When we find the solution via MFS, we see that this method is also spectrally accurate, and in fact superior to GaussQR.

Method of Particular Solutions

The Method of Fundamental Solutions succeeds at approximating homogeneous BVPs, i.e., those where f = 0. When f ≠ 0, MFS is inappropriate and another technique must be considered. We have already discussed collocation as one method for solving

    Lu = f,   x ∈ Ω
    Bu = g,   x ∈ ∂Ω

but there is a boundary-type method called the Method of Particular Solutions which is also available. In this method, we assume that the solution to the BVP has two parts,

    u = u_F + u_P

a homogeneous solution satisfying L u_F = 0 and a particular solution satisfying L u_P = f.

The homogeneous solution is chosen via the Green's functions

    u_F = Σ_{k=1}^{M} a_k G(x, x_k^(s)),   x^(s) ∈ {source points}

and the particular solution can be chosen with any basis functions. What makes MPS so difficult is effectively approximating f by solving the pseudo-interpolation (ill-posed) problem

    Lu = f,   x^(i) ∈ Ω

The common approach for general f is to choose basis functions K(·, x^(i)) you can work with, and solve the system

    [ LK(x1^(i), x1^(i))   ...  LK(x1^(i), x_N^(i))  ] [ a1  ]   [ f(x1^(i))  ]
    [         ...                      ...           ] [ ... ] = [    ...     ]
    [ LK(x_N^(i), x1^(i))  ...  LK(x_N^(i), x_N^(i)) ] [ a_N ]   [ f(x_N^(i)) ]
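A sketch of the full MPS pipeline on a small Poisson problem: ∇²u = 4 on the unit disk with u = 1 on the boundary (exact solution u = x² + y²). Step 1 computes a particular solution by Gaussian collocation of ∇²u_P = f at interior points; step 2 fixes the boundary data with MFS. All parameter choices are illustrative, and the RBF-Direct ill-conditioning discussed earlier applies to the first solve.

    % Method of Particular Solutions: lap(u) = 4 on the unit disk, u = 1 on the boundary
    ep = 2;  M = 30;
    [tt, rr] = meshgrid(2*pi*(1:8)/8, 0.2:0.2:0.8);      % interior points on a polar grid
    xi = [rr(:).*cos(tt(:)), rr(:).*sin(tt(:))];
    r2 = @(p, q) (p(:,1) - q(:,1)').^2 + (p(:,2) - q(:,2)').^2;
    K  = @(p, q) exp(-ep^2 * r2(p, q));
    LK = @(p, q) (4*ep^4*r2(p, q) - 4*ep^2) .* K(p, q);  % 2D Laplacian of the Gaussian
    a = LK(xi, xi) \ (4 * ones(size(xi, 1), 1));         % step 1: lap(u_P) = f = 4
    tb = 2*pi*(0:M-1)'/M;  xb = [cos(tb), sin(tb)];      % boundary collocation points
    ts = tb + pi/M;        xs = 2*[cos(ts), sin(ts)];    % MFS sources outside the disk
    G = log(sqrt(r2(xb, xs)));
    c = G \ (1 - K(xb, xi)*a);                           % step 2: u_F = g - u_P on boundary
    xt = [0.3, -0.2];                                    % test point inside the disk
    u  = K(xt, xi)*a + log(sqrt(r2(xt, xs)))*c;
    err = abs(u - (xt(1)^2 + xt(2)^2))                   % compare with exact solution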