Sparse Grids. Léopold Cambier. February 17, ICME, Stanford University


Sparse Grids, and "A Dynamically Adaptive Sparse Grid Method for Quasi-Optimal Interpolation of Multidimensional Analytic Functions" by M. K. Stoyanov and C. G. Webster. Léopold Cambier, ICME, Stanford University. February 17, 2017.

Curse of Dimensionality / The Problem

A recurring question: how to interpolate

$$f : X = \prod_{i=1}^d [-1,1] \subset \mathbb{R}^d \to \mathbb{R}$$

where $d$ is "large" ($> 1$)?

Curse of Dimensionality / Naive Solution

Simply use repeated one-dimensional rules; we have a tensor of rules

$$T_n^d f = \bigotimes_{i=1}^d I_{n_i}^i f = \left(I_{n_1}^1 \otimes I_{n_2}^2 \otimes \cdots \otimes I_{n_d}^d\right) f.$$

In practice,

$$(T_n^d f)(x) = \sum_{k_1=1}^{m(n_1)} \cdots \sum_{k_d=1}^{m(n_d)} f(\underbrace{x_{1,k_1}, \ldots, x_{d,k_d}}_{\text{interpolation nodes}})\ \underbrace{T^1_{k_1}(x_1) \cdots T^d_{k_d}(x_d)}_{\text{Lagrange basis functions}}.$$
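As a concrete illustration, here is a minimal Python sketch of the tensor rule above. The function names and the Chebyshev-extrema node choice are illustrative assumptions, not from the slides.

```python
# Minimal sketch of the tensor-product rule T_n^d: the nodes are the
# Cartesian product of 1-D node sets, and each basis function is a
# product of 1-D Lagrange polynomials.
import itertools
import numpy as np

def lagrange_basis(nodes, j, x):
    """Evaluate the j-th 1-D Lagrange basis polynomial at x."""
    return np.prod([(x - nodes[k]) / (nodes[j] - nodes[k])
                    for k in range(len(nodes)) if k != j])

def tensor_interpolate(f, node_sets, x):
    """(T_n^d f)(x): sum over all multi-indices k of f(x_k) * prod_i T_{k_i}(x_i)."""
    total = 0.0
    for k in itertools.product(*(range(len(n)) for n in node_sets)):
        point = tuple(node_sets[i][k[i]] for i in range(len(node_sets)))
        basis = np.prod([lagrange_basis(node_sets[i], k[i], x[i])
                         for i in range(len(node_sets))])
        total += f(point) * basis
    return total

# 2-D example: 5 Chebyshev-extrema nodes per dimension (25 nodes total)
# reproduce x^2 * y^2 exactly, since its degree is <= 4 in each variable.
cheb = np.cos(np.pi * np.arange(5) / 4)
f = lambda p: p[0] ** 2 * p[1] ** 2
val = tensor_interpolate(f, [cheb, cheb], [0.3, -0.7])
```

The nested loop over `itertools.product` is exactly where the $m^d$ cost of the next slide comes from.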

Curse of Dimensionality / The Issue?

The curse of dimensionality: $m$ nodes per dimension means $m^d$ nodes in total.

Outline

1. Curse of Dimensionality
2. Sparse Grids
3. How to Choose Θ? (Quasi-Optimal Polynomial Space; Quasi-Optimal Interpolation; Estimating the Parameters; Optimal Sparse Grids Interpolant)
4. Numerical Results
5. Conclusion


Sparse Grids / The Main Idea

Consider the tensor rule $T_n^d = \bigotimes_{i=1}^d I_{n_i}^i$. Define

$$\Delta_j^i = I_j^i - I_{j-1}^i.$$

Rewrite

$$T_n^d = \sum_{\alpha \le n} \bigotimes_{i=1}^d \Delta_{\alpha_i}^i.$$

Sparse Grids / Sparse Grids Interpolator

$$Q_\Theta^d = \sum_{\alpha \in \Theta} \bigotimes_{i=1}^d \Delta_{\alpha_i}^i$$

with $\Theta \subset \mathbb{N}^d$, where ideally $|\Theta| \ll n^d$. If $\Theta = \{\alpha \in \mathbb{N}^d : \alpha \le n\}$, then $Q_\Theta^d = T_n^d$.

Sparse Grids / Back to the Interpolation Formula

We want an expression like $(If)(x) = \sum_k T_k(x)\, f(x_k)$. We have (if $\Theta$ is admissible, i.e. $\forall \alpha \in \Theta,\ \{\beta : \beta \le \alpha\} \subset \Theta$)

$$Q_\Theta^d = \sum_{\alpha \in \Theta} \bigotimes_{i=1}^d \Delta_{\alpha_i}^i = \sum_{\alpha \in \Theta} c_\alpha \bigotimes_{i=1}^d I_{\alpha_i}^i.$$
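The coefficients $c_\alpha$ depend only on $\Theta$. A small sketch, assuming the standard identity $c_\alpha = \sum_{\beta \in \{0,1\}^d,\ \alpha+\beta \in \Theta} (-1)^{|\beta|}$ (valid for admissible, i.e. downward-closed, sets; not spelled out on the slides):

```python
# Sketch: combination coefficients c_alpha for an admissible (downward-
# closed) index set Theta, via
#   c_alpha = sum_{beta in {0,1}^d, alpha + beta in Theta} (-1)^{|beta|}.
# Indices start at 0 here.
import itertools

def combination_coeffs(theta):
    theta = set(theta)
    d = len(next(iter(theta)))
    coeffs = {}
    for alpha in theta:
        c = 0
        for beta in itertools.product((0, 1), repeat=d):
            if tuple(a + b for a, b in zip(alpha, beta)) in theta:
                c += (-1) ** sum(beta)
        coeffs[alpha] = c
    return coeffs

# Total-degree set {alpha : alpha_1 + alpha_2 <= 2} in d = 2: only the
# two outermost "diagonals" get nonzero weights (+1 and -1), and since
# Q_Theta reproduces constants the weights sum to 1.
theta = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
c = combination_coeffs(theta)
```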

Sparse Grids / Back to the Interpolation Formula

Since

$$\Big(\bigotimes_{i=1}^d I^i_{\alpha_i}\Big)[f](x) = \sum_{k_1=1}^{m(\alpha_1)} \cdots \sum_{k_d=1}^{m(\alpha_d)} f(x_{\alpha_1,k_1},\ldots,x_{\alpha_d,k_d})\, U_{\alpha_1,k_1}(x_1)\cdots U_{\alpha_d,k_d}(x_d) = \sum_{k \le m(\alpha)} f(x_{\alpha,k})\, U_{\alpha,k}(x),$$

we have

$$Q^d_\Theta[f](x) = \sum_{\alpha\in\Theta} c_\alpha \sum_{k \le m(\alpha)} f(x_{\alpha,k})\, U_{\alpha,k}(x).$$

Sparse Grids / Back to the Interpolation Formula

If the nodes are nested, factor out $f(x_{\alpha,k})$ and find $K(\alpha)$ such that

$$Q^d_\Theta[f](x) = \sum_k f(x_k) \sum_{\alpha\in\Theta\,:\,k\in K(\alpha)} c_\alpha U_{\alpha,k}(x) = \sum_k f(x_k)\, U_k(x).$$

Sparse Grids / Univariate Rules

It is preferable to have nested interpolation nodes. Let $m(l)$ be the number of nodes as a function of the level $l$. Different rules are possible: Clenshaw-Curtis ($m(l) = 2^{l-1} + 1$), R-Leja ($m(l) = 2l - 1$), etc.

[Figure: Clenshaw-Curtis and R-Leja node sets.]

Sparse Grids / Example 1

$f(x,y) = x^6 + y^6$, with the tensor method ($\epsilon = 10^{-16}$). [Figure.]

Sparse Grids / Example 1

$f(x,y) = x^6 + y^6$, with the sparse grid method ($\epsilon = 10^{-16}$) using the R-Leja rule. [Figure.]

Sparse Grids / Example 2

$f(x,y) = 1 + x^2 + y^2$, with the sparse grid method ($\epsilon = 10^{-4}$) using the R-Leja rule. [Figure.]

Sparse Grids / Example 2: Basis Functions

$$Q^d_\Theta[f](x) = \sum_k f(x_k)\, U_k(x)$$

[Figure: basis functions $U_2(x)$ and $U_5(x)$.]

Sparse Grids / Example 3: Sum of Tensor Rules

$f(x,y) = x^4 + x^2 y^2 + y^4$

$$Q^d_\Theta = \sum_{\alpha\in\Theta} c_\alpha \bigotimes_{i=1}^d I^i_{\alpha_i}$$
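To make this example concrete, here is a self-contained 2-D sketch that evaluates $Q^d_\Theta$ as a weighted sum of tensor interpolants. The node family ($m(l) = l + 1$ Chebyshev-extrema nodes at level $l$) is an illustrative assumption, not the nested rules from the slides.

```python
# 2-D sketch of Q_Theta = sum_{alpha in Theta} c_alpha (I_{alpha_1} x I_{alpha_2}),
# with illustrative nodes: m(l) = l + 1 Chebyshev-extrema points at level l.
import itertools
import numpy as np

def nodes(l):
    if l == 0:
        return np.array([0.0])
    return np.cos(np.pi * np.arange(l + 1) / l)

def lag(nds, j, x):
    # j-th 1-D Lagrange basis polynomial at x
    return np.prod([(x - nds[k]) / (nds[j] - nds[k])
                    for k in range(len(nds)) if k != j])

def tensor_eval(f, lv, x):
    n1, n2 = nodes(lv[0]), nodes(lv[1])
    return sum(f((n1[i], n2[j])) * lag(n1, i, x[0]) * lag(n2, j, x[1])
               for i in range(len(n1)) for j in range(len(n2)))

def sparse_eval(f, theta, x):
    theta = set(theta)
    val = 0.0
    for a in theta:
        # combination coefficient for downward-closed theta
        c = sum((-1) ** (b1 + b2)
                for b1, b2 in itertools.product((0, 1), repeat=2)
                if (a[0] + b1, a[1] + b2) in theta)
        if c != 0:
            val += c * tensor_eval(f, a, x)
    return val

# The level-4 total-degree set contains (4,0), (2,2) and (0,4), so the
# sparse interpolant reproduces f(x,y) = x^4 + x^2 y^2 + y^4 exactly.
f = lambda p: p[0] ** 4 + p[0] ** 2 * p[1] ** 2 + p[1] ** 4
theta = [(i, j) for i in range(5) for j in range(5) if i + j <= 4]
approx = sparse_eval(f, theta, (0.3, -0.6))
```

Note that, unlike the slides' nested rules, this combination-technique evaluation does not require nested nodes; nestedness only matters for reusing function evaluations.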


Sparse Grids / A Question Remains

How to choose $\Theta$?


How to Choose Θ?

M. Stoyanov and C. Webster, "A Dynamically Adaptive Sparse Grid Method for Quasi-Optimal Interpolation of Multidimensional Analytic Functions".

How to Choose Θ? / Let's Take a Step Back

There are actually two questions:
- How well can we expand a function $f$ in terms of mixed-order polynomials? This is projection.
- How good or bad is the interpolation compared to the projection?

How to Choose Θ? / Quasi-Optimal Polynomial Space / Projection

Working on $\Gamma = [-1,1]^d$, let $\Lambda \subset \mathbb{N}^d$ be the degree space. We project onto $P_\Lambda = \mathrm{span}\{x \mapsto x^\nu : \nu \in \Lambda\}$. The Legendre polynomials are a good basis: project as

$$f \approx f_\Lambda = \sum_{\nu\in\Lambda} c_\nu L_\nu,$$

where the $L_\nu$ are (tensorized) Legendre polynomials of multi-degree $\nu$ and

$$c_\nu = \int_\Gamma f(x)\, L_\nu(x)\,dx.$$

How to Choose Θ? / Quasi-Optimal Polynomial Space / Projection

The projection is $f \approx f_\Lambda = \sum_{\nu\in\Lambda} c_\nu L_\nu$, and the $L^2$ error is

$$\|f - f_\Lambda\|^2_{L^2} = \sum_{\nu\notin\Lambda} |c_\nu|^2.$$

Question: how fast does $c_\nu$ decrease?
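The decay can be checked numerically. A 1-D sketch (the multivariate $L_\nu$ is a product of these; the test function and quadrature size are illustrative choices, not from the slides):

```python
# Sketch: L^2-normalized Legendre coefficients c_nu = int_{-1}^{1} f L_nu dx,
# computed with Gauss-Legendre quadrature. For a function analytic in a
# neighborhood of [-1, 1] the coefficients decay geometrically.
import numpy as np

def legendre_coeff(f, nu, n_quad=64):
    x, w = np.polynomial.legendre.leggauss(n_quad)
    P = np.polynomial.legendre.Legendre.basis(nu)(x)
    # sqrt((2 nu + 1)/2) normalizes P_nu to unit L^2 norm on [-1, 1]
    return np.sqrt((2 * nu + 1) / 2.0) * np.sum(w * f(x) * P)

# f has a pole at x = 2, so |c_nu| ~ rho^{-nu} with rho = 2 + sqrt(3).
f = lambda x: 1.0 / (x - 2.0) ** 2
coeffs = [legendre_coeff(f, nu) for nu in range(8)]
```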

How to Choose Θ? / Quasi-Optimal Polynomial Space / Example

$$f(x,y) = \frac{1}{(x-2)^2 + (y-2)^2} = \sum_\nu c_\nu L_\nu(x,y)$$

[Figure: contour plot of the coefficient magnitudes $|c_\nu|$.]

How to Choose Θ? / Quasi-Optimal Polynomial Space / Quasi-Optimal Projection Space

Assumption 1: $f$ is holomorphic (differentiable in a neighborhood of every point, hence infinitely differentiable) in a poly-ellipse

$$E_\rho = \prod_{k=1}^d \left\{ z_k \in \mathbb{C} : \Re(z_k) \le \frac{\rho_k + \rho_k^{-1}}{2}\cos\theta,\ \Im(z_k) \le \frac{\rho_k - \rho_k^{-1}}{2}\sin\theta,\ \theta \in [0,2\pi] \right\}.$$

Result 1: under this assumption,

$$|c_\nu| \le C \exp(-\alpha\cdot\nu)\prod_{k=1}^d \sqrt{2\nu_k + 1}, \qquad \text{with } \alpha = \log(\rho) \text{ (componentwise)}.$$

Proof: Taylor integral theorem and formula.

How to Choose Θ? / Quasi-Optimal Polynomial Space / Quasi-Optimal Projection Space

The optimal projection space of level $p$ is then defined as

$$\Lambda_\alpha(p) = \left\{ \nu : \alpha\cdot\nu - \frac{1}{2}\sum_{i=1}^d \log(\nu_i + 0.5) \le p \right\}.$$
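A brute-force sketch of enumerating this set by scanning a bounding box (practical only for small $d$; the parameter values are illustrative):

```python
# Sketch: enumerate Lambda_alpha(p) = {nu : alpha . nu
# - 0.5 * sum_i log(nu_i + 0.5) <= p} over a bounding box.
import itertools
import math

def quasi_optimal_set(alpha, p, nu_max=20):
    d = len(alpha)
    out = []
    for nu in itertools.product(range(nu_max + 1), repeat=d):
        lhs = (sum(a * n for a, n in zip(alpha, nu))
               - 0.5 * sum(math.log(n + 0.5) for n in nu))
        if lhs <= p:
            out.append(nu)
    return out

# Isotropic example in d = 2: the log correction lets a few extra
# high-degree indices in, compared to the plain simplex alpha . nu <= p.
lam = quasi_optimal_set(alpha=(1.0, 1.0), p=3.0)
```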

How to Choose Θ? / Quasi-Optimal Interpolation / Interpolation

Interpolation $\ne$ projection:
- it depends on the set of nodes;
- it can be arbitrarily bad, even for "nice" functions (Runge phenomenon);
- it is quantified by the Lebesgue constant.

How to Choose Θ? / Quasi-Optimal Interpolation / Lebesgue Constant

Let $I$ be the interpolation operator, $I : \{f \text{ bounded}\} \to P_\Lambda$, and let $p$ be the projection of $f$ onto $P_\Lambda$. Then

$$\|f - If\| \le (1 + C_\Lambda)\,\|f - p\|$$

with

$$C_\Lambda = \|I\| = \sup_{g \text{ bounded}} \frac{\|Ig\|}{\|g\|}.$$

How to Choose Θ? / Quasi-Optimal Interpolation / Lebesgue Constant

Assumption 2: for $\Lambda_\nu = \{\alpha : \alpha \le \nu\}$ (tensor),

$$C_{\Lambda_\nu} \le C_\gamma \prod_{k=1}^d (\nu_k + 1)^{\gamma_k},$$

i.e., polynomial growth. True for Chebyshev-like polynomial interpolation; false for equispaced interpolation.
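The Chebyshev-versus-equispaced contrast is easy to see numerically in 1-D. A sketch (grid resolution and degree are illustrative choices):

```python
# Sketch: estimate the 1-D Lebesgue constant max_x sum_j |ell_j(x)| on a
# fine grid, for Chebyshev-extrema nodes (logarithmic growth) versus
# equispaced nodes (exponential growth).
import numpy as np

def lebesgue_constant(nodes, n_eval=2001):
    xs = np.linspace(-1.0, 1.0, n_eval)
    total = np.zeros_like(xs)
    for j in range(len(nodes)):
        lj = np.ones_like(xs)
        for k in range(len(nodes)):
            if k != j:
                lj *= (xs - nodes[k]) / (nodes[j] - nodes[k])
        total += np.abs(lj)
    return total.max()

n = 16
cheb = np.cos(np.pi * np.arange(n + 1) / n)   # grows like (2/pi) log n
equi = np.linspace(-1.0, 1.0, n + 1)          # grows like 2^n / (n log n)
```

For $n = 16$, the Chebyshev constant is a small single-digit number while the equispaced one is already in the hundreds.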

How to Choose Θ? / Quasi-Optimal Interpolation / Lebesgue Constant for Sparse Grids

If

$$\lambda_l = \|I^i_l\| \le C_\gamma (l+1)^\gamma,$$

then

$$\|Q^d_\Theta\| \le C^d_\gamma\, |\Theta|^{\gamma+1}.$$

The bound is polynomial, but usually not sharp.

How to Choose Θ? / Quasi-Optimal Interpolation / Lebesgue Constant for Clenshaw-Curtis

The nodes are the Chebyshev points (extrema of the Chebyshev polynomials)

$$x_k = \cos\left(\frac{\pi k}{n}\right), \quad k = 0,\ldots,n,$$

where $m(l) = 2^{l-1} + 1$. One can show

$$\lambda_l \sim \log(m(l)) \sim l.$$

How to Choose Θ? / Quasi-Optimal Interpolation / Lebesgue Constant for Other Rules

R-Leja, with $m(l) = 2l - 1$ and $\lambda_l \lesssim m(l) \sim l$. Selecting the nodes by minimizing $\|I_l\|$ gives

$$m(l) = l + 1, \qquad \lambda_l \lesssim 4\sqrt{l+1}.$$

How to Choose Θ? / Quasi-Optimal Interpolation / Quasi-Optimal Interpolation

Combine both results to get level sets like

$$|c_\nu| \le C \exp(-\alpha\cdot\nu)\prod_{k=1}^d \sqrt{2\nu_k+1} \quad\longrightarrow\quad C \exp(-\alpha\cdot\nu)\prod_{k=1}^d (\nu_k+1)^{\gamma_k+0.5}.$$

Optimal interpolation space of level $L$:

$$\Lambda_{\alpha,\beta}(L) = \{\nu : \alpha\cdot\nu + \beta\cdot\log(\nu+1) \le L\}$$

for unknown $\alpha$ (projection error) and $\beta$ (interpolation error).

How to Choose Θ? / Estimating the Parameters / Estimating $\alpha$ and $\beta$

We do not know $\alpha$ and $\beta$, but we can estimate them from the interpolant $f_\Lambda$:

1. Get the $\tilde c_\nu$ from $f(x) \approx f_\Lambda(x) = \sum_\nu \tilde c_\nu L_\nu(x)$.
2. Assume $|\tilde c_\nu| \approx \exp(-\alpha\cdot\nu)\,(\nu+1)^{-\beta}$ (componentwise).
3. Solve

$$\min_{\alpha,\beta} \sum_{\nu\in\Lambda} \left( C + \alpha\cdot\nu + \beta\cdot\log(\nu+1) + \log|\tilde c_\nu| \right)^2.$$
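Step 3 is a linear least-squares problem in $(C, \alpha, \beta)$ once logs are taken. A sketch with synthetic coefficients (the function names and test decay rates are made up for illustration):

```python
# Sketch: recover alpha and beta from coefficients obeying
#   log|c_nu| = C - alpha . nu - beta . log(nu + 1),
# by linear least squares.
import itertools
import numpy as np

def fit_decay(indices, coeffs):
    nu = np.array(indices, dtype=float)
    d = nu.shape[1]
    # design matrix [1, -nu, -log(nu + 1)]; unknowns [C, alpha, beta]
    A = np.hstack([np.ones((len(nu), 1)), -nu, -np.log(nu + 1.0)])
    sol, *_ = np.linalg.lstsq(A, np.log(np.abs(coeffs)), rcond=None)
    return sol[0], sol[1:1 + d], sol[1 + d:]

# Synthetic coefficients with known alpha = (1.0, 2.0), beta = (0.5, 0.5)
idx = list(itertools.product(range(6), repeat=2))
a_true, b_true = np.array([1.0, 2.0]), np.array([0.5, 0.5])
c = [np.exp(-a_true @ np.array(n) - b_true @ np.log(np.array(n) + 1.0))
     for n in idx]
C, alpha, beta = fit_decay(idx, c)
```

Because the synthetic data follow the model exactly, the fit recovers the true rates to machine precision.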

How to Choose Θ? / Optimal Sparse Grids Interpolant / Let's Summarize

Sparse grids:

$$Q^d_\Theta = \sum_{\alpha\in\Theta} \bigotimes_{i=1}^d \Delta^i_{\alpha_i} = \sum_{\alpha\in\Theta} c_\alpha \bigotimes_{i=1}^d I^i_{\alpha_i}$$

with $\Theta$ the order (level) space and $I^i_l$ a sequence of 1-d rules of level $l$ with $m(l)$ nodes and polynomial Lebesgue constant $\lambda_l$. The optimal level-$p$ interpolant has degree space

$$\Lambda(p) = \{\nu : \alpha\cdot\nu + \beta\cdot\log(\nu+1) \le p\},$$

where $\alpha$ and $\beta$ can be approximated from a current interpolant.

Minimal polynomial interpolant: for a given $p$, the smallest $\Theta$ that covers $\Lambda(p)$ (it is unique) is optimal.

How to Choose Θ? / Optimal Sparse Grids Interpolant / Algorithm

Given univariate rules:
- Select initial optimal $\Lambda^0$ and $\Theta^0$.
- Repeat for $n = 0, 1, \ldots$:
  - Compute $f_{\Lambda^n} = Q^d_{\Theta^n} f$.
  - Compute the $\tilde c_\nu$.
  - Estimate $\alpha$ and $\beta$.
  - Update $\Lambda^{n+1}$ and $\Theta^{n+1}$.
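A toy 1-D version of this loop, with everything simplified: the "model evaluation" is just calling $f$, the coefficients come from Gauss-Legendre quadrature, the index set is an interval of degrees, and the update rule is an illustrative guess, not the one from the paper.

```python
# Toy 1-D adaptive loop: compute coefficients on the current set, fit the
# decay rate alpha, then enlarge the degree range based on the estimate.
import numpy as np

def coeffs_up_to(f, p, n_quad=128):
    x, w = np.polynomial.legendre.leggauss(n_quad)
    fx = f(x)
    return np.array([np.sqrt((2 * nu + 1) / 2.0)
                     * np.sum(w * fx
                              * np.polynomial.legendre.Legendre.basis(nu)(x))
                     for nu in range(p + 1)])

def estimate_alpha(c):
    # fit log|c_nu| ~ C - alpha * nu by least squares (skip nu = 0)
    nu = np.arange(1, len(c), dtype=float)
    A = np.vstack([np.ones_like(nu), -nu]).T
    sol, *_ = np.linalg.lstsq(A, np.log(np.abs(c[1:])), rcond=None)
    return sol[1]

f = lambda x: 1.0 / (x - 2.0)   # analytic near [-1, 1]; alpha = log(2 + sqrt(3))
tol, p = 1e-10, 4
for sweep in range(3):
    c = coeffs_up_to(f, p)
    alpha = estimate_alpha(c)                        # refined each sweep
    p = max(p, int(np.ceil(-np.log(tol) / alpha)))   # enough terms for tol
```

The estimated rate settles near $\log(2 + \sqrt{3}) \approx 1.32$, and the degree range grows until the modeled coefficient size drops below the tolerance.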


Numerical Results / Estimating Lebesgue's Constant

[Figure: measured growth of the Lebesgue constants for the univariate rules, with fits of the form $3\sqrt{l+1}$ and $4\sqrt{l+1}$, $\tfrac{3}{2}(l+1)$, and $\tfrac{2}{\pi}\log(2l+1)$.]

Numerical Results / Experiment 1: Parametrized Elliptic PDE

Solve, for $x \in D$,

$$-\nabla_x \cdot (a(x,y)\,\nabla_x u(x,y)) = b(x),$$

and evaluate for $y \in \Gamma \subset \mathbb{R}^7$. Goal: interpolate

$$f(y) = \|u(\cdot, y)\|_{L^2}.$$

Numerical Results / Experiment 1: Parametrized Elliptic PDE

[Figure: convergence for the Clenshaw-Curtis and R-Leja rules.]


Numerical Results / Experiment 2: Steady-State Burgers' Equation

Solve, for $x \in D \subset \mathbb{R}^2$,

$$-\nabla_x \cdot (a(y)\,\nabla_x u(x,y)) + (v(y)\cdot\nabla_x u(x,y))\,u(x,y) = 0,$$

and evaluate for $y \in [-1,1]^3$

$$f(y) = \int_D u(x,y)\,dx.$$

Goal: interpolate $f$.

Numerical Results / Experiment 2: Steady-State Burgers' Equation

[Figure: convergence for the Clenshaw-Curtis and R-Leja rules.]

Conclusion

- An adaptive sparse grids algorithm.
- Sources of error: projection ($\alpha$, "best $M$ terms") and interpolation ($\beta$, Lebesgue's constant).
- The coefficients can be estimated from the previous interpolant.
- Several new univariate rules based on minimization of the Lebesgue constant.

Conclusion / References

All results are from [1]; [2, 3, 4] are interesting further references.

[1] Stoyanov, Miroslav K., and Clayton G. Webster. "A dynamically adaptive sparse grid method for quasi-optimal interpolation of multidimensional analytic functions." arXiv preprint arXiv:1508.01125 (2015).
[2] Kaarnioja, Vesa. "Smolyak quadrature." (2013).
[3] Beck, Joakim, et al. "Convergence of quasi-optimal stochastic Galerkin methods for a class of PDEs with random coefficients." Computers & Mathematics with Applications 67.4 (2014): 732-751.
[4] Nobile, Fabio, Lorenzo Tamellini, and Raúl Tempone. "Convergence of quasi-optimal sparse grid approximation of Hilbert-valued functions: application to random elliptic PDEs." MATHICSE report 12 (2014).
