
Algorithms for Scientific Computing
Hierarchical Methods and Sparse Grids: d-dimensional Hierarchical Basis
Michael Bader, Technical University of Munich, Summer 2018

Intermezzo / "Big Picture": Archimedes Quadrature
Start with a 2D example (compare tutorials): f := 16 x_1 (x_1 - 1) x_2 (x_2 - 1), Ω = [0,1]^2, f = 0 on ∂Ω.
Consider the hierarchical surpluses at the grid points for n = 3, h_3 = 2^{-3}.
[Figure: hierarchical surpluses at the 49 grid points of the level-3 grid; the values are 1, 1/4, 1/16, 1/64, ...]

"Big Picture": Archimedes Quadrature (2)
∫_Ω f dx = 4/9 = 0.444...
Consider the volumes of the subvolumes ("pagodas") for quadrature.
[Figure: pagoda volumes v_{l,i} h_{l_1} h_{l_2} at the 49 grid points (values 1/4, 1/32, ..., 1/16384); they sum to 441/1024 = 0.4306640625]

"Big Picture": Archimedes Quadrature (3)
What if we (adaptively) leave out all subvolumes with volume < ε?
49 grid points (full grid) vs. 17 grid points (sparse grid)
[Figure: full grid vs. adaptively thinned-out (sparse) grid on [0,1]^2]
Approximation of the volume: 441/1024 = 0.4306640625 (full grid) vs. 27/64 = 0.421875 (sparse grid)
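This example can be checked with a small self-contained Python sketch (the function f, the level n = 3, and the pagoda-volume quadrature are taken from the slides; the helper names and the drop threshold 1e-3 are my own assumptions): it hierarchizes the nodal values dimension by dimension and sums the pagoda volumes, once for all 49 subvolumes and once with the small contributions dropped.

import numpy as np

n = 3                                    # level, h_3 = 2^-3
m = 2 ** n
x = np.arange(1, m) / m                  # inner grid points 1/8, ..., 7/8
f = lambda x1, x2: 16 * x1 * (x1 - 1) * x2 * (x2 - 1)
u = f(x[:, None], x[None, :])            # nodal values on the 7 x 7 grid

def hierarchize_1d(v):                   # 1D nodal -> hierarchical (boundary values are 0)
    for l in range(n, 0, -1):            # finest level first: parents are still nodal
        s = m >> l                       # 2^(n-l)
        for i in range(1, 2 ** l, 2):    # odd indices of level l
            p = i * s - 1                # array position of x_{l,i}
            left = v[p - s] if p - s >= 0 else 0.0
            right = v[p + s] if p + s < len(v) else 0.0
            v[p] -= 0.5 * (left + right)

for row in u:                            # hierarchize in x_2-direction ...
    hierarchize_1d(row)
for col in u.T:                          # ... then in x_1-direction
    hierarchize_1d(col)

def h_of(j):                             # mesh width h_l of 1D grid index j = 1, ..., 2^n - 1
    l = n - ((j & -j).bit_length() - 1)  # level of index j
    return 2.0 ** -l

h1 = np.array([h_of(j) for j in range(1, m)])
vol = u * h1[:, None] * h1[None, :]      # pagoda volumes v_{l,i} * h_{l1} * h_{l2}

print(vol.sum())                         # 0.4306640625 = 441/1024  (exact integral: 4/9)
print(vol[vol >= 1e-3].sum(), (vol >= 1e-3).sum())   # 0.421875 = 27/64, using 17 pagodas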

Part I Hierarchical Decomposition, d-dimensional

Recall: Archimedes and Hierarchical Basis (in 1D)
[Figure: Archimedes quadrature (levels t = 1, t = 2) and the hierarchical basis functions φ_{1,1}; φ_{2,1}, φ_{2,3}; φ_{3,1}, φ_{3,3}, φ_{3,5}, φ_{3,7} at the grid points x_{l,i}]
- use the (nodal) hat functions φ_{l,i}, restricted to the index sets I_l := {i : 1 ≤ i < 2^l, i odd}
- hierarchical basis Ψ_n := ∪_{l=1}^{n} {φ_{l,i} : i ∈ I_l}
- hierarchical function spaces W_l := span{φ_{l,i} : i ∈ I_l} with V_l = V_{l-1} ⊕ W_l
- unique hierarchical representation u = Σ_{l=1}^{n} w_l = Σ_{l=1}^{n} Σ_{i ∈ I_l} v_{l,i} φ_{l,i}
- the size of the surpluses v_{l,i} roughly decays like 4^{-l} for smooth functions (see the sketch below)
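A short check of the last point (my own sketch, not from the slides): for a smooth 1D function the maximal hierarchical surplus shrinks by roughly a factor of 4 per level.

import math

u = lambda x: math.sin(math.pi * x)         # smooth, zero at x = 0 and x = 1
for l in range(1, 8):
    h = 2.0 ** -l
    # surplus v_{l,i} = u(x_{l,i}) - (u(x_{l,i-1}) + u(x_{l,i+1})) / 2 for odd i
    v_max = max(abs(u(i * h) - 0.5 * (u((i - 1) * h) + u((i + 1) * h)))
                for i in range(1, 2 ** l, 2))
    print(l, v_max, v_max * 4 ** l)         # v_max * 4^l levels off near pi^2 / 2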

Hierarchical Decomposition Step by Step
Now (and more formally), starting with d-dimensional hierarchical decompositions...
Transfer from d = 1 to d > 1:
- functions in multiple variables x = (x_1, ..., x_d)
- domain Ω := [0,1]^d
- we consider only functions u that are 0 on ∂Ω (on the edges of the square, the faces of the cube, ...)
- each hierarchical grid is described by a multi-index l = (l_1, ..., l_d) ∈ N^d
- grids have different mesh widths in different dimensions: h_l := (h_1, ..., h_d) := (2^{-l_1}, ..., 2^{-l_d}) =: 2^{-l}

Hierarchical Decomposition, d > 1
Introducing further notation (which we'll need later on):
- grid points (for function evaluations): x_{l,i} = (i_1 h_{l_1}, ..., i_d h_{l_d})
- comparison of multi-indices is component-wise: l ≤ i  ⇔  l_k ≤ i_k for k = 1, ..., d
- two norms for multi-indices: the index sum |l|_1 := l_1 + ... + l_d and the maximum index |l|_∞ := max{l_1, ..., l_d}
Note: taking the absolute values |l_k| for l_k ∈ N is not necessary, but it is part of the usual definition of |·|_1 and |·|_∞.
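A minimal Python sketch of these identifiers (the helper names are mine, not from the lecture):

def mesh_widths(l):
    """h_l = (2^-l_1, ..., 2^-l_d) for a level multi-index l."""
    return tuple(2.0 ** -lj for lj in l)

def grid_point(l, i):
    """x_{l,i} = (i_1 * h_{l_1}, ..., i_d * h_{l_d})."""
    return tuple(ij * hj for ij, hj in zip(i, mesh_widths(l)))

def norm_1(l):      # index sum |l|_1
    return sum(l)

def norm_inf(l):    # maximum index |l|_inf
    return max(l)

print(grid_point((2, 3), (3, 5)))        # (0.75, 0.625)
print(norm_1((2, 3)), norm_inf((2, 3)))  # 5 3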

Practicing the Identifiers l, h_l, x_{l,i}
[Figure: tableau of the grids for l_1 = 1, 2, 3 and l_2 = 1, 2, 3, to practice the identifiers l, h_l and x_{l,i}]

Piecewise d-linear Functions
Suitable generalization of piecewise linear functions:
- piecewise d-linear functions w.r.t. the h_l grid
- if you fix d - 1 coordinates, they are linear in the remaining x_j
- V_l: space of all such functions for a given l
Alternative point of view:
- define a suitable basis Φ_l
- regard V_l as the span of Φ_l
d-dimensional basis functions: products of one-dimensional hat functions (see the sketch below),
φ_{l,i}(x) = ∏_{j=1}^{d} φ_{l_j,i_j}(x_j) = φ_{l_1,i_1}(x_1) · φ_{l_2,i_2}(x_2) · ... · φ_{l_d,i_d}(x_d)
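A short sketch (helper names are mine) of the 1D hat function and of the d-dimensional product basis function built from it:

def hat(l, i, x):
    """1D hat function phi_{l,i} on [0,1] with mesh width h_l = 2^-l."""
    h = 2.0 ** -l
    return max(0.0, 1.0 - abs(x / h - i))

def phi(l, i, x):
    """d-dimensional basis function: product of 1D hat functions."""
    val = 1.0
    for lj, ij, xj in zip(l, i, x):
        val *= hat(lj, ij, xj)
    return val

print(phi((1, 1), (1, 1), (0.5, 0.5)))      # 1.0 (peak of the "pagoda")
print(phi((2, 3), (3, 5), (0.75, 0.625)))   # 1.0 (peak of phi_{(2,3),(3,5)})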

d-dimensional Basis Functions
Basis functions are "pagoda" functions (not pyramids!)
Examples: φ_{(1,1),(1,1)} and φ_{(2,3),(3,5)}
[Figure: surface plots of the two basis functions on [0,1]^2]

Function Spaces V_l and V_n
Basis for the space of piecewise d-linear functions w.r.t. the h_l grid:
Φ_l := {φ_{l,i} : 1 ≤ i < 2^l}   (component-wise)
Function space V_l := span Φ_l with dim V_l = (2^{l_1} - 1) · ... · (2^{l_d} - 1) ∈ O(2^{|l|_1})
Special case l_1 = ... = l_d = n: the function space is denoted as V_n := V_{(n,...,n)}

Hierarchical Increments W_l
Analogous to 1D: omit grid points with an even index (they exist on coarser grids), now in all directions:
I_l := {i : 1 ≤ i < 2^l, all i_j odd}
The hierarchical increment spaces
W_l := span{φ_{l,i} : i ∈ I_l}
contain all functions of V_l that vanish at all grid points of all coarser grids.
[Figure: tableau of the increment grids for l_1, l_2 = 1, 2, 3]

Hierarchical Increments W_l vs. Nodal Basis

Hierarchical Subspace Decomposition
For l ∈ N^d we obtain a unique representation of each u ∈ V_l as
u = Σ_{k ≤ l} w_k   with   w_k ∈ W_k
Representation in the hierarchical basis:
u = Σ_{k ≤ l} w_k = Σ_{k ≤ l} Σ_{i ∈ I_k} v_{k,i} φ_{k,i}
with the d-dimensional hierarchical surpluses v_{k,i}.

Determining the Hierarchical Surpluses in 2D
We now compute the hierarchical surpluses v_{l,i} for some u ∈ V_n = V_{(n,n)}:
u(x) = Σ_{φ_{l,i} ∈ Φ_{(n,n)}} u(x_{l,i}) φ_{l,i}(x) = Σ_{i_1=1}^{2^n - 1} Σ_{i_2=1}^{2^n - 1} u(x_{n,i}) φ_{n,i_1}(x_1) φ_{n,i_2}(x_2)
First step: hierarchization in the x_1-direction (fix x_2 and employ 1D hierarchization in x_1-direction):
u(x) = Σ_{l_1=1}^{n} Σ_{i_1 ∈ I_{l_1}} Σ_{i_2=1}^{2^n - 1} v_{l_1,i_1}(x_{n,i_2}) φ_{l_1,i_1}(x_1) φ_{n,i_2}(x_2)
with the 1D surplus (still depending on x_2, evaluated at all x_2 = x_{n,i_2})
v_{l_1,i_1}(x_2) = u(x_{l_1,i_1}, x_2) - ( u(x_{l_1,i_1 - 1}, x_2) + u(x_{l_1,i_1 + 1}, x_2) ) / 2
Note: the indices i_1 ± 1 of the grid points x_{l_1,i_1-1} and x_{l_1,i_1+1} are even, such that the corresponding hierarchical basis functions belong to a parent/ancestor level.

Determining the Hierarchical Surpluses in 2D (2)
A bit more intuitive: we mark the grid points of the ansatz functions we use (before and after the first step).
[Figure: subspace tableaus (l_1, l_2 = 1, 2, 3) before and after hierarchization in the x_1-direction]

Determining the Hierarchical Surpluses in 2D (3)
Second step: hierarchize every v_{l_1,i_1}(x_2) (separately) in the x_2-dimension:
u(x) = Σ_{l_1=1}^{n} Σ_{l_2=1}^{n} Σ_{i_2 ∈ I_{l_2}} Σ_{i_1 ∈ I_{l_1}} v_{(l_1,l_2),(i_1,i_2)} φ_{l_1,i_1}(x_1) φ_{l_2,i_2}(x_2)
[Figure: subspace tableaus (l_1, l_2 = 1, 2, 3) before and after hierarchization in the x_2-direction]

Determining the Hierarchical Surpluses (the general d-dimensional case)
Now: compute the d-dimensional hierarchical surpluses v_{l,i} for some u ∈ V_n:
u(x) = Σ_{φ_{l,i} ∈ Φ_{(n,...,n)}} u(x_{l,i}) φ_{l,i}(x) = Σ_{φ_{l,i} ∈ Φ_{(n,...,n)}} u(x_{l,i}) φ_{l_1,i_1}(x_1) · ... · φ_{l_d,i_d}(x_d)
First step: hierarchization in the x_d-direction (fix x_1, ..., x_{d-1} and employ 1D hierarchization):
u(x) = Σ_{i_1,...,i_{d-1}=1}^{2^n - 1} Σ_{l_d=1}^{n} Σ_{i_d ∈ I_{l_d}} v_{l_d,i_d}(x_{n,(i_1,...,i_{d-1})}) φ_{n,i_1}(x_1) · ... · φ_{n,i_{d-1}}(x_{d-1}) φ_{l_d,i_d}(x_d)
with the 1D surplus, evaluated at (x_1, ..., x_{d-1}) = x_{n,(i_1,...,i_{d-1})}:
v_{l_d,i_d}(x_1, ..., x_{d-1}) = u(x_1, ..., x_{d-1}, x_{l_d,i_d}) - ( u(x_1, ..., x_{d-1}, x_{l_d,i_d - 1}) + u(x_1, ..., x_{d-1}, x_{l_d,i_d + 1}) ) / 2

Determining the Hierarchical Surpluses (the general d-dimensional case, second step)
Second step: hierarchize every v_{l_d,i_d} : R^{d-1} → R (separately) in its last argument, i.e. in the x_{d-1}-direction:
u(x) = Σ_{i_1,...,i_{d-2}=1}^{2^n - 1} Σ_{l_{d-1}=1}^{n} Σ_{i_{d-1} ∈ I_{l_{d-1}}} Σ_{l_d=1}^{n} Σ_{i_d ∈ I_{l_d}} v_{(l_{d-1},l_d),(i_{d-1},i_d)}(x_{n,(i_1,...,i_{d-2})}) φ_{n,i_1}(x_1) · ... · φ_{n,i_{d-2}}(x_{d-2}) φ_{l_{d-1},i_{d-1}}(x_{d-1}) φ_{l_d,i_d}(x_d)
Steps 3 to d: proceed correspondingly for each remaining dimension.
Afterwards we have computed the surpluses v_{l,i} (functions in zero parameters, i.e. scalar values):
u(x) = Σ_{l} Σ_{i ∈ I_l} v_{l,i} φ_{l_1,i_1}(x_1) · ... · φ_{l_d,i_d}(x_d) = Σ_{l} Σ_{i ∈ I_l} v_{l,i} φ_{l,i}(x) = Σ_{l} w_l(x)
where Σ_{l} Σ_{i ∈ I_l} is short for Σ_{l_d=1}^{n} Σ_{i_d ∈ I_{l_d}} ... Σ_{l_1=1}^{n} Σ_{i_1 ∈ I_{l_1}}.
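The dimension-by-dimension procedure for general d can be sketched as follows (one coordinate direction after the other; the function names and the test function are my own, not from the slides):

import numpy as np

def hierarchize_1d(v, n):
    """Nodal -> hierarchical along one line of 2^n - 1 inner points (boundary = 0)."""
    m = 2 ** n
    for l in range(n, 0, -1):             # finest level first: parents are still nodal
        s = m >> l
        for i in range(1, 2 ** l, 2):     # odd indices of level l
            p = i * s - 1
            left = v[p - s] if p - s >= 0 else 0.0
            right = v[p + s] if p + s < len(v) else 0.0
            v[p] -= 0.5 * (left + right)

def hierarchize(u, n):
    """d-dimensional hierarchization of nodal values u (shape (2^n - 1,) * d),
    one coordinate direction after the other."""
    for axis in range(u.ndim):
        w = np.moveaxis(u, axis, -1)      # view: current direction becomes the last axis
        for idx in np.ndindex(w.shape[:-1]):
            hierarchize_1d(w[idx], n)     # 1D hierarchization of each grid line
    return u

# example: d = 3, n = 3, u = product of 4 x_j (1 - x_j); the surplus at the centre is 1
n = 3
x = np.arange(1, 2 ** n) / 2 ** n
g = 4 * x * (1 - x)
u = g[:, None, None] * g[None, :, None] * g[None, None, :]
hierarchize(u, n)
print(u[3, 3, 3])                         # 1.0 = surplus v_{(1,1,1),(1,1,1)}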

Comparison 2D with Wavelet Transform
Apply level-wise hierarchization. First level:
- first step: split into nodal basis and hierarchical surpluses (1st argument)
- second step: split into nodal basis and hierarchical surpluses (2nd argument)
[Figure: subspace tableaus (l_1, l_2 = 1, 2, 3) before and after the two splitting steps of the first level]

Comparison 2D with Wavelet Transform (2)
Apply level-wise hierarchization. Second level:
- first step: split into nodal basis and hierarchical surpluses (1st argument)
- second step: split into nodal basis and hierarchical surpluses (2nd argument)
[Figure: subspace tableaus (l_1, l_2 = 1, 2, 3) before and after the two splitting steps of the second level]

Part II Hierarchical Decomposition: Outlook on Cost and Accuracy

Analysis of the Hierarchical Decomposition
Contribution of the summands in the hierarchical decomposition,
in 1D: u = Σ_{l=1}^{n} w_l = Σ_{l=1}^{n} Σ_{i ∈ I_l} v_{l,i} φ_{l,i}
in dD: u = Σ_{l} w_l = Σ_{l} Σ_{i ∈ I_l} v_{l,i} φ_{l,i}(x)
Plan: start the analysis in the univariate setting and port it to the multivariate setting.
A cost/benefit analysis quantifies the reduction of effort.
We need several norms to measure w_l.

Norms of Functions
As always, we assume sufficiently smooth functions u : [0,1] → R; then:
Maximum norm: ‖u‖_∞ := max_{x ∈ [0,1]} |u(x)|
L2 norm: ‖u‖_2 := ( ∫_0^1 u(x)^2 dx )^{1/2}, for the L2 scalar product (u, v)_2 := ∫_0^1 u(x) v(x) dx
Energy norm: ‖u‖_E := ‖u'‖_2

Norms of Basis Functions
For the basis functions φ_{l,i} we obtain
‖φ_{l,i}‖_∞ = 1,   ‖φ_{l,i}‖_2 = ( 2 h_l / 3 )^{1/2},   ‖φ_{l,i}‖_E = ( 2 / h_l )^{1/2}
[Figure: φ_{l,i}, φ_{l,i}^2 and (φ_{l,i}')^2 on the support [x_{l,i-1}, x_{l,i+1}]]
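A quick numerical sanity check of these three norms (my own sketch; midpoint sampling of the integrals):

import numpy as np

l, i = 3, 5
h = 2.0 ** -l
N = 2_000_000
x = (np.arange(N) + 0.5) / N                      # midpoint sample of [0,1]
phi = np.maximum(0.0, 1.0 - np.abs(x / h - i))    # hat function phi_{l,i}
dphi = np.where(np.abs(x / h - i) < 1.0,          # derivative: +-1/h on the support
                -np.sign(x / h - i) / h, 0.0)

print(phi.max())                                        # ~1, the maximum norm
print(np.sqrt((phi ** 2).mean()), np.sqrt(2 * h / 3))   # L2 norm vs. sqrt(2 h_l / 3)
print(np.sqrt((dphi ** 2).mean()), np.sqrt(2 / h))      # energy norm vs. sqrt(2 / h_l)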

Estimation of Surpluses
Consider the surplus v_{l,i} of the basis function φ_{l,i} (u two times differentiable):
v_{l,i} := u(x_{l,i}) - ( u(x_{l,i-1}) + u(x_{l,i+1}) ) / 2
We can then write v_{l,i} as (see separate proof)
v_{l,i} = ∫_0^1 ψ_{l,i}(x) u''(x) dx   with   ψ_{l,i} := -(h_l / 2) φ_{l,i}
v_{l,i} depends on u'', thus we define for future use
μ_2(u) := ‖u''‖_2   and   μ_∞(u) := ‖u''‖_∞
Note: μ_2(u) and μ_∞(u) are properties of the function u.

Estimation of Surpluses (2)
With the integral representation v_{l,i} = -∫_0^1 (h_l / 2) φ_{l,i}(x) u''(x) dx, we can bound
|v_{l,i}| ≤ (h_l / 2) ( ∫_0^1 φ_{l,i} dx ) μ_∞(u) = (h_l^2 / 2) μ_∞(u) ∈ O(h_l^2)
and, via the Cauchy-Schwarz inequality |(u, v)| ≤ ‖u‖ ‖v‖,
|v_{l,i}| ≤ (h_l / 2) ‖φ_{l,i}‖_2 μ_2(u|_{T_i}) = ( h_l^3 / 6 )^{1/2} μ_2(u|_{T_i}),
where u|_{T_i} restricts u to the support T_i = [x_{l,i-1}, x_{l,i+1}] of φ_{l,i}.

Estimation of ‖w_l‖
Estimate the contribution of an entire level l in the hierarchical decomposition of u, i.e. of w_l = Σ_{i ∈ I_l} v_{l,i} φ_{l,i}.
Use that the supports of the φ_{l,i}, i ∈ I_l, are pairwise disjoint.
Maximum norm: ‖w_l‖_∞ ≤ (h_l^2 / 2) μ_∞(u) ∈ O(h_l^2)
L2 norm: ‖w_l‖_2^2 = Σ_{i ∈ I_l} v_{l,i}^2 ‖φ_{l,i}‖_2^2 ≤ Σ_{i ∈ I_l} (h_l^3 / 6) (2 h_l / 3) μ_2(u|_{T_i})^2 = (h_l^4 / 9) μ_2(u)^2
hence ‖w_l‖_2 ∈ O(h_l^2)

Estimation of ‖w_l‖ (2)
Energy norm:
‖w_l‖_E^2 = Σ_{i ∈ I_l} v_{l,i}^2 ‖φ_{l,i}‖_E^2 = Σ_{i ∈ I_l} v_{l,i}^2 (2 / h_l) ≤ (1 / (2 h_l)) · (2 / h_l) · (h_l^4 / 4) μ_∞(u)^2 = (h_l^2 / 4) μ_∞(u)^2
(2^{l-1} = 1 / (2 h_l) summands)
hence ‖w_l‖_E ∈ O(h_l)

Estimation of ‖w_l‖ (3)
We can write u (twice differentiable) as the infinite series u = Σ_{l=1}^{∞} w_l, which converges in all three norms.
The approximation error is given as
u - u_n := u - Σ_{l=1}^{n} w_l = Σ_{l=n+1}^{∞} w_l
in maximum and L2 norm: O(h_n^2); in energy norm: O(h_n)
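A small numerical illustration of the O(h_n^2) decay in the maximum norm (my own sketch; u_n = Σ_{l ≤ n} w_l is exactly the piecewise linear interpolant of level n):

import numpy as np

u = lambda x: np.sin(np.pi * x)            # smooth, zero on the boundary
xs = np.linspace(0.0, 1.0, 100_001)        # fine sample for the error
for n in range(1, 8):
    grid = np.linspace(0.0, 1.0, 2 ** n + 1)
    u_n = np.interp(xs, grid, u(grid))     # piecewise linear interpolant of level n
    err = np.abs(u(xs) - u_n).max()
    print(n, err, err * 4 ** n)            # err * 4^n levels off  ->  O(h_n^2)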

Towards d Dimensions: Norms of φ_{l,i}
Estimating the ‖w_l‖ will enable us to select those subspaces that contribute most to the overall solution (best cost-benefit ratios).
Same procedure as for d = 1, but with slightly more complicated functions. Start with the norms:
Maximum norm: ‖φ_{l,i}‖_∞ := max_{x ∈ [0,1]^d} |φ_{l,i}(x)| = 1
L2 norm: ‖φ_{l,i}‖_2^2 := ∫_{[0,1]^d} φ_{l,i}(x)^2 dx = ∏_{j=1}^{d} ‖φ_{l_j,i_j}‖_2^2 = ∏_{j=1}^{d} (2 h_{l_j} / 3) = (2/3)^d 2^{-|l|_1}

Norms of φ_{l,i} (2)
Energy norm (defined as the L2 norm of the Euclidean norm of the gradient ∇φ_{l,i}):
‖φ_{l,i}‖_E := ( ∫_{[0,1]^d} ∇φ_{l,i}(x) · ∇φ_{l,i}(x) dx )^{1/2} = ... = ( (2/3)^{d-1} h_1 ··· h_d Σ_{j=1}^{d} 2 / h_j^2 )^{1/2}   (here always: h_j := h_{l_j})
= ( (2/3)^{d-1} 2^{-|l|_1} Σ_{j=1}^{d} 2^{2 l_j + 1} )^{1/2}
For the two-dimensional setting (d = 2), we obtain
‖φ_{l,i}‖_E = ( (4/3) ( h_1 / h_2 + h_2 / h_1 ) )^{1/2}

Estimation of Surpluses
The hierarchical surpluses now depend on the mixed second derivative
∂^{2d} u := ∂^{2d} u / ( ∂x_1^2 ··· ∂x_d^2 )
If we define
ψ_{l,i} := ∏_{j=1}^{d} ψ_{l_j,i_j} = ∏_{j=1}^{d} ( -(h_j / 2) φ_{l_j,i_j} ) = (-1)^d 2^{-|l|_1 - d} φ_{l,i}
we can derive an integral representation similar to 1D:
v_{l,i} = ∫_{[0,1]^d} ψ_{l,i}(x) ∂^{2d} u(x) dx
(Proof: Fubini's theorem and the 1D integral representation.)

Estimation of Surpluses (2)
We define (corresponding to 1D)
μ_2(u) := ‖∂^{2d} u‖_2   and   μ_∞(u) := ‖∂^{2d} u‖_∞
We can thus bound v_{l,i} as
|v_{l,i}| ≤ ( ∏_{j=1}^{d} h_j / 2 ) ( ∫_{[0,1]^d} φ_{l,i} dx ) μ_∞(u) = ( ∏_{j=1}^{d} h_j^2 / 2 ) μ_∞(u) = 2^{-2|l|_1 - d} μ_∞(u)
and
|v_{l,i}| ≤ ( ∏_{j=1}^{d} h_j / 2 ) ‖φ_{l,i}‖_2 μ_2(u|_{T_i}) = ( h_1^3 ··· h_d^3 / 6^d )^{1/2} μ_2(u|_{T_i}) = 6^{-d/2} 2^{-(3/2)|l|_1} μ_2(u|_{T_i})

Estimation of ‖w_l‖
Obtain estimates for w_l in the subspace W_l analogously to 1D: make use of the fact that the supports of the basis functions of one grid are disjoint (apart from the boundaries).
Maximum norm: ‖w_l‖_∞ ≤ ∏_{j=1}^{d} (h_j^2 / 2) μ_∞(u) = 2^{-d} 2^{-2|l|_1} μ_∞(u)
L2 norm: ‖w_l‖_2 ≤ ∏_{j=1}^{d} (h_j^2 / 3) μ_2(u) = 3^{-d} 2^{-2|l|_1} μ_2(u)
Energy norm: ‖w_l‖_E ≤ ( (1/12)^{d-1} ( h_1^4 ··· h_d^4 / 4 ) Σ_{j=1}^{d} 1 / h_j^2 )^{1/2} μ_∞(u) = 1 / (2 · 12^{(d-1)/2}) · 2^{-2|l|_1} ( Σ_{j=1}^{d} 2^{2 l_j} )^{1/2} μ_∞(u)

Analysis of the Cost-Benefit Ratio
Consider not individual basis functions but whole hierarchical increments.
From the tableau of subspaces, select those subspaces that minimize the cost or maximize the benefit, respectively, for u : [0,1]^d → R (u sufficiently often differentiable).
Cost: measure the cost in the number of grid points ("coefficients"): c(l) = |I_l| = 2^{|l|_1 - d}
Benefit: how to measure benefit? → interpolation error.
Let L ⊂ N^d be the set of indices of all selected grids; then
u_L := Σ_{l ∈ L} w_l   and   u - u_L = Σ_{l ∉ L} w_l

Analysis of the Cost-Benefit Ratio (2)
For each component w_l we have derived bounds of the type
‖w_l‖ ≤ s(l) μ(u)   with   s(l) = 2^{-d} 2^{-2|l|_1}   or   s(l) = 3^{-d} 2^{-2|l|_1}
and appropriate indices for the norm and for μ. We obtain
‖u - u_L‖ ≤ Σ_{l ∉ L} ‖w_l‖ ≤ ( Σ_{l ∉ L} s(l) ) μ(u) = ( Σ_{l ∈ N^d} s(l) - Σ_{l ∈ L} s(l) ) μ(u)
The 1st factor depends only on the selected subspaces, the 2nd factor only on u.
This justifies interpreting s(l) as the benefit/contribution of the subspace W_l.

Quality of Approximation of the Full Grid V_n
Examine cost c(l) and benefit s(l) for the full grid:
Regular grid with mesh width h_n = 2^{-n} in each direction (full grid) for the function space V_n:
total cost dim V_n ∈ O(2^{dn}), or dim V_n ∈ O(h_n^{-d})
Considered subset of hierarchical increments: L_n := { l : |l|_∞ ≤ n }
The bounds in L2 and maximum norm involve a factor s(l) = C · 2^{-2|l|_1}.
In the following estimation we leave out the l-independent factor C; it can be appended to the estimate in the end.

Quality of Approximation of the Full Grid V_n (2)
We can estimate
Σ_{l ∈ L_n} s(l) = Σ_{|l|_∞ ≤ n} 2^{-2|l|_1} = Σ_{l_1=1}^{n} ··· Σ_{l_d=1}^{n} 2^{-2(l_1 + ... + l_d)} = ( Σ_{k=1}^{n} 2^{-2k} )^d = ( (1 - 4^{-n}) / 3 )^d ≥ (1/3)^d (1 - d · 2^{-2n}),
using (1 - ε)^d ≥ 1 - dε for 0 ≤ ε ≤ 1 and d ∈ N.
For n → ∞ we obtain Σ_{l ∈ N^d} s(l) = (1/3)^d and thus Σ_{l ∉ L_n} s(l) ≤ (1/3)^d d · 2^{-2n}.
This leads to bounds for the approximation error in L2 and maximum norm:
‖u - u_{L_n}‖ ≤ C Σ_{l ∉ L_n} s(l) ≤ C d 3^{-d} 2^{-2n} ∈ O(h_n^2)
with a constant C (independent of n).

Sparse Grids: Final Steps to High-Dimensional Numerics
Consider the sum of benefits/contributions (for L2 and maximum norm): the terms 2^{-2|l|_1}
→ equal benefit of the hierarchical increments W_l for constant |l|_1
Same for the cost c(l) = 2^{|l|_1 - d} (number of grid points of W_l)
→ constant cost-benefit ratio c(l)/s(l) for constant |l|_1
Full grids? The square (cube-shaped) selection of subspaces is not economical:
we take large subgrids with a low contribution, while we could have taken others with a much higher contribution.

Part III Sparse Grids

Sparse Grids!
Cost-benefit analysis: equal contribution of the hierarchical increments W_l for constant |l|_1.
Best choice: cut diagonally in the tableau of subspaces:
L_n := { l : |l|_1 ≤ n + d - 1 }
Resulting sparse grid space V_n^{(1)} := ⊕_{|l|_1 ≤ n + d - 1} W_l
[Figure: subspace tableau for l_1, l_2 = 1, 2, 3 with the diagonal cut]

Sparse Grids!
Diagonal cut in the tableau of subspaces: L_n := { l : |l|_1 ≤ n + d - 1 }
Resulting sparse grid space V_n^{(1)} := ⊕_{|l|_1 ≤ n + d - 1} W_l
[Figure: sparse grid for d = 2 and overall level n = 5; grid points x_{l,i} with the same cost/benefit ratio are drawn in the same color]
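A small enumeration sketch (helper names are mine): it lists the level multi-indices of L_n and counts the sparse grid points, reproducing e.g. the 17 points of the introductory example (d = 2, n = 3).

from itertools import product

def sparse_levels(n, d):
    """All level multi-indices l with |l|_1 <= n + d - 1."""
    return [l for l in product(range(1, n + 1), repeat=d) if sum(l) <= n + d - 1]

def sparse_grid_points(n, d):
    """Grid points x_{l,i} of the sparse grid space of level n."""
    pts = []
    for l in sparse_levels(n, d):
        odd = [range(1, 2 ** lj, 2) for lj in l]   # I_l: odd indices only
        pts += [tuple(ij * 2.0 ** -lj for ij, lj in zip(i, l)) for i in product(*odd)]
    return pts

print(len(sparse_grid_points(3, 2)), len(sparse_grid_points(5, 2)))  # 17 129
print(len(sparse_grid_points(5, 3)))                                 # 351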

Sparse Grids: Cost
Number of grid points?
For d = 2: dim V_n^{(1)} = Σ_{|l|_1 ≤ n+1} dim W_l = Σ_{k=1}^{n} k 2^{k-1} = 2^n (n - 1) + 1
For d = 3: dim V_n^{(1)} = Σ_{|l|_1 ≤ n+2} dim W_l = Σ_{k=1}^{n} ( k (k+1) / 2 ) 2^{k-1} = 2^{n-1} (n^2 - n + 2) - 1
Both are in O(2^n n^{d-1}); this holds for general d as well (proof with some combinatorics).
Expressed in terms of N = 2^n (max. number of points per dimension): O( N (log N)^{d-1} )

Sparse Grids: Cost (2)
In numbers... compare the cost of the full grid V_n and the sparse grid V_n^{(1)}.
d = 2:
  n                                2     3     4     5    ...         10
  dim V_n = (2^n - 1)^2            9    49   225   961    ...  1,046,529
  dim V_n^{(1)} = 2^n (n-1) + 1    5    17    49   129    ...      9,217
Even more distinct for d = 3:
  n                                2     3      4    ...             10
  dim V_n = (2^n - 1)^3           27   343  3,375    ...  1,070,599,167
  dim V_n^{(1)}                    7    31    111    ...         47,103

Sparse Grids: Cost (3)
... and for overall level n = 5 in different dimensions d:
  d      dim V_5                  dim V_5^{(1)}
  1      31                       31
  2      961                      129
  3      29,791                   351
  4      923,521                  769
  5      28,629,151               1,471
  6      887,503,681              2,561
  7      27,512,614,111           4,159
  8      852,891,037,441          6,401
  9      26,439,622,160,671       9,439
  10     819,628,286,980,801      13,441
The higher the dimension, the higher the benefit of sparse grids!
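The table can be reproduced with a few lines (my own sketch; it counts the levels with |l|_1 = k + d - 1 via a binomial coefficient):

from math import comb

def dim_full(n, d):
    return (2 ** n - 1) ** d

def dim_sparse(n, d):
    # sum over k = 1..n: #{l : |l|_1 = k + d - 1} * dim W_l = C(k + d - 2, d - 1) * 2^(k - 1)
    return sum(comb(k + d - 2, d - 1) * 2 ** (k - 1) for k in range(1, n + 1))

for d in range(1, 11):
    print(d, dim_full(5, d), dim_sparse(5, d))   # reproduces the table above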

Sparse Grids: Examples
Sparse grids of overall level n = 6 in d = 2 and d = 3
[Figure: 2D and 3D sparse grid point sets]

Sparse Grids: Accuracy
Much fewer grid points → much lower accuracy? That would force us to choose a larger n to obtain a similar accuracy (and spoil everything).
Error in L2 and maximum norm: compute the sum over the omitted subspaces.
For d = 2 (with |l|_1 = k + 1):
Σ_{l ∉ L_n} s(l) = Σ_{k=n+1}^{∞} k 2^{-2(k+1)} = ( n/12 + 1/9 ) 2^{-2n}
And for d = 3 (with |l|_1 = k + 2):
Σ_{l ∉ L_n} s(l) = Σ_{k=n+1}^{∞} ( k (k+1) / 2 ) 2^{-2(k+2)} = ( n^2/96 + 11n/288 + 1/27 ) 2^{-2n}
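A quick check of the two tail sums (my own sketch):

n = 5
t2 = sum(k * 4.0 ** -(k + 1) for k in range(n + 1, 200))
t3 = sum(k * (k + 1) / 2 * 4.0 ** -(k + 2) for k in range(n + 1, 200))
print(t2, (n / 12 + 1 / 9) * 4.0 ** -n)                        # d = 2: both ~5.154e-4
print(t3, (n ** 2 / 96 + 11 * n / 288 + 1 / 27) * 4.0 ** -n)   # d = 3: both ~4.770e-4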

Sparse Grids: Accuracy (2)
In general, it can be shown:
The interpolation error in L2 and maximum norm is O(2^{-2n} n^{d-1}), or, expressed in the mesh size h_n := 2^{-n}: O( h_n^2 (log(1/h_n))^{d-1} )
This is only a polynomial (in n) factor worse than the full grid with O(2^{-2n}), i.e. O(h_n^2).
Outlook on the energy norm (→ Algorithms for Scientific Computing II):
The analysis is more complicated (the lines through the subspace tableau with similar s(l), and thus c(l)/s(l), are more complicated).
The overall result is even better: we obtain an accuracy of O(2^{-n}) with only O(2^{n}) grid points; no polynomial terms (of type n^{d-1}) are left!