Sparse Grids & "A Dynamically Adaptive Sparse Grid Method for Quasi-Optimal Interpolation of Multidimensional Analytic Functions" by M. K. Stoyanov and C. G. Webster
Léopold Cambier, ICME, Stanford University
February 17, 2017
Curse of Dimensionality — The Problem
A recurring question: how to interpolate $f : X \to \mathbb{R}$, where $X = \prod_{i=1}^{d} [-1, 1]$ and $d$ is "large" ($> 1$)?
Curse of Dimensionality — Naive Solution
Simply use repeated one-dimensional rules. We have a tensor of rules:
$$T_n^d f = \left( \bigotimes_{i=1}^{d} I_{n_i}^i \right) f = \left( I_{n_1}^1 \otimes I_{n_2}^2 \otimes \cdots \otimes I_{n_d}^d \right) f$$
In practice,
$$(T_n^d f)(x) = \sum_{k_1=1}^{m(n_1)} \cdots \sum_{k_d=1}^{m(n_d)} f(\underbrace{x_{1,k_1}, \ldots, x_{d,k_d}}_{\text{interpolation nodes}}) \underbrace{T_{k_1}^1(x_1) \cdots T_{k_d}^d(x_d)}_{\text{Lagrange basis functions}}$$
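As a concrete sketch (illustrative only, not from the slides), a tensor-product Lagrange interpolant in $d = 2$ can be assembled directly from one-dimensional rules; the loop over all node combinations makes the $m^d$ cost explicit. The node choice (Chebyshev extrema) and the test function are my own examples:

```python
import numpy as np
from itertools import product

def lagrange_basis(nodes, j, x):
    """Evaluate the j-th Lagrange basis polynomial for the given nodes at x."""
    others = np.delete(nodes, j)
    return np.prod((x - others) / (nodes[j] - others))

def tensor_interpolate(f, nodes_per_dim, x):
    """Tensor-product interpolant: sum over all m_1 * ... * m_d node combinations."""
    val = 0.0
    for k in product(*[range(len(n)) for n in nodes_per_dim]):
        node = [nodes_per_dim[i][k[i]] for i in range(len(k))]
        basis = np.prod([lagrange_basis(nodes_per_dim[i], k[i], x[i])
                         for i in range(len(k))])
        val += f(*node) * basis
    return val

# d = 2, m = 7 Chebyshev extrema per dimension -> m^d = 49 nodes total.
m = 7
cheb = np.cos(np.pi * np.arange(m) / (m - 1))
f = lambda x, y: x**6 + y**6   # degree-6 polynomial: reproduced exactly
approx = tensor_interpolate(f, [cheb, cheb], [0.3, -0.4])
```

With 7 nodes per dimension the rule is exact for polynomials of degree up to 6 in each variable, so the interpolant reproduces $f$ to rounding error.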
Curse of Dimensionality — The Issue?
Curse of dimensionality: $m$ nodes per dimension $\Rightarrow$ $m^d$ nodes total.
Outline
1. Curse of Dimensionality
2. Sparse Grids
3. How to Choose Θ?
   - Quasi-Optimal Polynomial Space
   - Quasi-Optimal Interpolation
   - Estimating the Parameters
   - Optimal Sparse Grids Interpolant
4. Numerical Results
5. Conclusion
Sparse Grids — The Main Idea
Consider the tensor rule $T_n^d = \bigotimes_{i=1}^{d} I_{n_i}^i$. Define
$$\Delta_j^i = I_j^i - I_{j-1}^i$$
and rewrite
$$T_n^d = \sum_{\alpha \le n} \bigotimes_{i=1}^{d} \Delta_{\alpha_i}^i$$
Sparse Grids — Sparse Grids Interpolator
$$Q_\Theta^d = \sum_{\alpha \in \Theta} \bigotimes_{i=1}^{d} \Delta_{\alpha_i}^i$$
with $\Theta \subset \mathbb{N}^d$, where ideally $|\Theta| \ll n^d$. If $\Theta = \{\alpha \in \mathbb{N}^d : \alpha \le n\}$, then $Q_\Theta^d = T_n^d$.
Sparse Grids — Back to the Interpolation Formula
We want an expression like
$$(If)(x) = \sum_k T_k(x) f(x_k)$$
We have (if $\Theta$ is admissible¹)
$$Q_\Theta^d = \sum_{\alpha \in \Theta} \bigotimes_{i=1}^{d} \Delta_{\alpha_i}^i = \sum_{\alpha \in \Theta} c_\alpha \bigotimes_{i=1}^{d} I_{\alpha_i}^i$$
¹ $\forall \alpha \in \Theta, \ \{\beta : \beta \le \alpha\} \subset \Theta$
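The admissibility (downward-closed) condition in the footnote can be checked mechanically. A small sketch, assuming multi-indices stored as tuples starting at 1 (my convention for illustration):

```python
from itertools import product

def is_admissible(theta):
    """Theta (a set of multi-index tuples) is admissible (downward closed)
    if, for every alpha in Theta, every beta with beta <= alpha
    componentwise is also in Theta."""
    theta = set(theta)
    for alpha in theta:
        for beta in product(*[range(1, a + 1) for a in alpha]):
            if beta not in theta:
                return False
    return True

# The total-degree set {alpha : |alpha| <= n} is admissible ...
total_degree = {(i, j) for i in range(1, 5) for j in range(1, 5) if i + j <= 5}
# ... while removing an interior index breaks admissibility.
broken = total_degree - {(1, 2)}
```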
Sparse Grids — Back to the Interpolation Formula
Since
$$\left( \bigotimes_{i=1}^{d} I_{\alpha_i}^i \right)[f](x) = \sum_{k_1=1}^{m(\alpha_1)} \cdots \sum_{k_d=1}^{m(\alpha_d)} f(x_{\alpha_1,k_1}, \ldots, x_{\alpha_d,k_d}) \, U_{\alpha_1,k_1}(x_1) \cdots U_{\alpha_d,k_d}(x_d) = \sum_{k \le m(\alpha)} f(x_{\alpha,k}) U_{\alpha,k}(x)$$
and $Q_\Theta^d = \sum_{\alpha \in \Theta} c_\alpha \bigotimes_{i=1}^{d} I_{\alpha_i}^i$, we have
$$Q_\Theta^d[f](x) = \sum_{\alpha \in \Theta} c_\alpha \sum_{k \le m(\alpha)} f(x_{\alpha,k}) U_{\alpha,k}(x)$$
Sparse Grids — Back to the Interpolation Formula
If the nodes are nested, factor out $f(x_{\alpha,k})$ and find $K(\alpha)$ such that
$$Q_\Theta^d[f](x) = \sum_k f(x_k) \sum_{\alpha \in \Theta : k \in K(\alpha)} c_\alpha U_{\alpha,k}(x) = \sum_k f(x_k) U_k(x)$$
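The coefficients $c_\alpha$ that turn the telescoping sum of $\Delta$'s into a weighted sum of plain tensor rules can be computed by a standard combination-technique identity (not stated on the slides, but classical for admissible sets): $c_\alpha = \sum_{\beta \in \{0,1\}^d,\ \alpha + \beta \in \Theta} (-1)^{|\beta|}$. A sketch:

```python
from itertools import product

def combination_coeffs(theta):
    """Combination-technique coefficients: for an admissible Theta,
    Q_Theta = sum_alpha c_alpha * (tensor rule at alpha), with
    c_alpha = sum over beta in {0,1}^d with alpha+beta in Theta of (-1)^|beta|."""
    theta = set(theta)
    d = len(next(iter(theta)))
    coeffs = {}
    for alpha in theta:
        c = 0
        for beta in product((0, 1), repeat=d):
            if tuple(a + b for a, b in zip(alpha, beta)) in theta:
                c += (-1) ** sum(beta)
        if c != 0:
            coeffs[alpha] = c
    return coeffs

# Sanity check: for a full tensor box Theta = {alpha <= (3, 3)}, the sum
# collapses to the single tensor rule at the corner: c_(3,3) = 1, rest 0.
box = {(i, j) for i in range(1, 4) for j in range(1, 4)}
print(combination_coeffs(box))   # -> {(3, 3): 1}
```

For a total-degree set the same formula reproduces the familiar alternating Smolyak coefficients.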
Sparse Grids — Univariate Rules
It is preferable to have nested interpolation nodes. Let $m(l)$ be the number of nodes as a function of the order $l$. Different rules are possible: Clenshaw-Curtis ($m(l) = 2^{l-1} + 1$), R-Leja² ($m(l) = 2l - 1$), etc.
(Figures: Clenshaw-Curtis and R-Leja node distributions.)
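A sketch of the Clenshaw-Curtis family and its nestedness (the level-1 rule is taken as the single node $\{0\}$, a common convention that I am assuming here; the slides' formula $m(l) = 2^{l-1}+1$ is used for $l \ge 2$):

```python
import numpy as np

def clenshaw_curtis_nodes(l):
    """Clenshaw-Curtis nodes of level l: m(l) = 2^(l-1) + 1 Chebyshev
    extrema on [-1, 1] for l >= 2; a single node {0} at level 1.
    Successive levels are nested."""
    if l == 1:
        return np.array([0.0])
    m = 2 ** (l - 1) + 1
    return np.cos(np.pi * np.arange(m) / (m - 1))

n2 = clenshaw_curtis_nodes(2)   # 3 nodes: {-1, 0, 1}
n3 = clenshaw_curtis_nodes(3)   # 5 nodes, containing all 3 nodes of level 2
nested = all(any(abs(x - y) < 1e-14 for y in n3) for x in n2)
```

Nestedness is what lets the sparse-grid interpolant reuse function values across levels and be written as a single sum $\sum_k f(x_k) U_k(x)$.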
Sparse Grids — Example 1
$f(x, y) = x^6 + y^6$ with the tensor method ($\epsilon = 10^{-16}$).
Sparse Grids — Example 1
$f(x, y) = x^6 + y^6$ with the sparse grid method ($\epsilon = 10^{-16}$) using the R-Leja rule.
Sparse Grids — Example 2
$f(x, y) = 1 + x^2 + y^2$ with the sparse grid method ($\epsilon = 10^{-4}$) using the R-Leja rule.
Sparse Grids — Example 2: Basis Functions
$$Q_\Theta^d[f](x) = \sum_k f(x_k) U_k(x)$$
(Figures: basis functions $U_2(x)$ and $U_5(x)$.)
Sparse Grids — Example 3: Sum of Tensor Rules
$f(x, y) = x^4 + x^2 y^2 + y^4$
$$Q_\Theta^d = \sum_{\alpha \in \Theta} c_\alpha \bigotimes_{i=1}^{d} I_{\alpha_i}^i$$
Sparse Grids — A Question Remains
How to choose Θ?
How to Choose Θ?
M. Stoyanov, C. Webster, "A Dynamically Adaptive Sparse Grid Method for Quasi-Optimal Interpolation of Multidimensional Analytic Functions"
How to Choose Θ? — Let's Take a Step Back
There are actually two questions:
- How well can we expand a function $f$ in terms of mixed-order polynomials? This is projection.
- How good or bad is interpolation compared to projection?
How to Choose Θ? — Quasi-Optimal Polynomial Space: Projection
Work on $\Gamma = [-1, 1]^d$, with $\Lambda \subset \mathbb{N}^d$ the degree space. Project onto $P_\Lambda = \mathrm{span}\{x \mapsto x^\nu : \nu \in \Lambda\}$. The Legendre polynomials are a good basis. Project as
$$f \approx f_\Lambda = \sum_{\nu \in \Lambda} c_\nu L_\nu$$
where the $L_\nu$ are multivariate Legendre polynomials of degree $\nu$ and $c_\nu = \int_\Gamma f(x) L_\nu(x) \, dx$.
How to Choose Θ? — Quasi-Optimal Polynomial Space: Projection
The projection is $f \approx f_\Lambda = \sum_{\nu \in \Lambda} c_\nu L_\nu$, and the $L^2$ error is
$$\|f - f_\Lambda\|_{L^2}^2 = \sum_{\nu \notin \Lambda} |c_\nu|^2$$
Question: how fast does $c_\nu$ decrease?
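The coefficient decay can be observed numerically in one dimension. A sketch (my own example function, with a singularity at $x = 2$, outside $[-1, 1]$, so the decay should be exponential):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_coeffs(f, n_max, n_quad=200):
    """Projection coefficients c_nu = int_{-1}^{1} f(x) Lhat_nu(x) dx of f
    onto L2-normalized Legendre polynomials, via Gauss-Legendre quadrature."""
    x, w = legendre.leggauss(n_quad)
    coeffs = []
    for nu in range(n_max + 1):
        # Lhat_nu = sqrt(nu + 1/2) * P_nu is orthonormal on [-1, 1]
        basis = legendre.legval(x, [0.0] * nu + [1.0]) * np.sqrt(nu + 0.5)
        coeffs.append(np.sum(w * f(x) * basis))
    return np.array(coeffs)

# f is analytic on [-1, 1] (pole at x = 2), so |c_nu| decays exponentially.
f = lambda x: 1.0 / (x - 2.0) ** 2
c = legendre_coeffs(f, 20)
```

Plotting $\log|c_\nu|$ against $\nu$ gives a straight line whose slope is the rate $\alpha = \log\rho$ from the analyticity region, foreshadowing the estimation step below.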
How to Choose Θ? — Quasi-Optimal Polynomial Space: Example
$$f(x, y) = \frac{1}{(x - 2)^2 + (y - 2)^2} = \sum_\nu c_\nu L_\nu(x, y)$$
(Figure: contour plot of the coefficient magnitudes $|c_\nu|$.)
How to Choose Θ? — Quasi-Optimal Projection Space
Assumption 1: $f$ is holomorphicᵃ in a poly-ellipse
$$E_\rho = \prod_{k=1}^{d} \left\{ z_k \in \mathbb{C} : \Re(z_k) = \frac{\rho_k + \rho_k^{-1}}{2} \cos\theta, \ \Im(z_k) = \frac{\rho_k - \rho_k^{-1}}{2} \sin\theta, \ \theta \in [0, 2\pi] \right\}$$
ᵃ Differentiable in a neighborhood of every point, hence infinitely differentiable.
Result 1: under this assumption,
$$|c_\nu| \le C \exp(-\alpha \cdot \nu) \prod_{k=1}^{d} \sqrt{2\nu_k + 1}, \quad \text{with } \alpha = \log(\rho)$$
Proof: Taylor expansion and the Cauchy integral formula.
How to Choose Θ? — Quasi-Optimal Projection Space
The optimal projection space of level $p$ is then defined as
$$\Lambda_\alpha(p) = \left\{ \nu : \alpha \cdot \nu - \frac{1}{2} \sum_{i=1}^{d} \log(\nu_i + 0.5) \le p \right\}$$
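In practice $\Lambda_\alpha(p)$ can be enumerated by sweeping a bounding box of multi-indices. A sketch (the `nu_max` truncation is my own simplification; a production code would enumerate the lower set incrementally):

```python
import numpy as np
from itertools import product

def quasi_optimal_set(alpha, p, nu_max=50):
    """Level-p quasi-optimal projection space
    Lambda_alpha(p) = {nu : alpha . nu - 0.5 * sum_i log(nu_i + 0.5) <= p},
    enumerated naively over the box [0, nu_max]^d."""
    d = len(alpha)
    Lam = []
    for nu in product(range(nu_max + 1), repeat=d):
        cost = sum(a * n for a, n in zip(alpha, nu)) \
               - 0.5 * sum(np.log(n + 0.5) for n in nu)
        if cost <= p:
            Lam.append(nu)
    return Lam

# Isotropic rates alpha = (1, 1): the set is roughly the simplex |nu| <= p,
# slightly fattened by the logarithmic correction.
Lam = quasi_optimal_set((1.0, 1.0), p=5.0)
```

Anisotropic rates $\alpha$ (faster decay in some directions) shrink the set along the corresponding axes, which is exactly how the method adapts to the function.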
How to Choose Θ? — Quasi-Optimal Interpolation
Interpolation ≠ projection:
- It depends on the set of nodes.
- It can be arbitrarily bad even for "nice" functions (Runge phenomenon).
- It is quantified by the Lebesgue constant.
How to Choose Θ? — Lebesgue Constant
Let $I$ be the interpolation operator, $I : \{f \text{ bounded}\} \to P_\Lambda$, and let $p$ be the projection of $f$ onto $P_\Lambda$. Then
$$\|f - If\| \le (1 + C_\Lambda) \|f - p\|$$
with
$$C_\Lambda = \|I\| = \sup_{g \text{ bounded}} \frac{\|Ig\|}{\|g\|}$$
How to Choose Θ? — Lebesgue Constant
Assumption 2: for $\Lambda_\nu = \{\alpha : \alpha \le \nu\}$ (tensor),
$$C_{\Lambda_\nu} \le C_\gamma \prod_{k=1}^{d} (\nu_k + 1)^{\gamma_k}$$
i.e., polynomial growth. True for Chebyshev-like polynomial interpolation; false for equispaced interpolation.
How to Choose Θ? — Lebesgue Constant for Sparse Grids
If $\lambda_l = \|I_l^i\| \le C_\gamma (l + 1)^\gamma$, then
$$\|Q_\Theta^d\| \le C_\gamma^d \, |\Theta|^{\gamma + 1}$$
This bound is usually not sharp, but it is polynomial.
How to Choose Θ? — Lebesgue Constant for Clenshaw-Curtis
Extrema of the Chebyshev polynomials:
$$x_k = \cos\left(\frac{\pi k}{n}\right), \quad k = 0, \ldots, n$$
where $m(l) = 2^{l-1} + 1$. One can show $\lambda_l \sim \log(m(l)) \sim l$.
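The Lebesgue constant of a node set can be estimated numerically as the maximum of the Lebesgue function $\sum_j |T_j(x)|$ over a fine sample grid. A sketch (the sampling resolution is my choice; it underestimates slightly but captures the growth):

```python
import numpy as np

def lebesgue_constant(nodes, n_eval=2000):
    """Estimate the Lebesgue constant max_x sum_j |T_j(x)| of Lagrange
    interpolation at the given nodes by sampling a fine grid on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n_eval)
    total = np.zeros_like(x)
    for j in range(len(nodes)):
        others = np.delete(nodes, j)
        Tj = np.prod((x[:, None] - others) / (nodes[j] - others), axis=1)
        total += np.abs(Tj)
    return total.max()

# Chebyshev-type nodes: logarithmic growth. Equispaced nodes: exponential.
m = 17
cheb = np.cos(np.pi * np.arange(m) / (m - 1))
equi = np.linspace(-1.0, 1.0, m)
```

Running this with `m = 17` makes the contrast of Assumption 2 concrete: the Chebyshev value stays in the low single digits while the equispaced value is already in the hundreds.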
How to Choose Θ? — Lebesgue Constant for Other Rules
R-Leja, with $m(l) = 2l - 1$ and $\lambda_l \le m(l) \sim l$. Selecting the nodes by minimizing $\|I_l\|$: $m(l) = l + 1$, $\lambda_l \lesssim 4\sqrt{l + 1}$.
How to Choose Θ? — Quasi-Optimal Interpolation
Combine both results to get level sets like
$$|c_\nu| \, C_{\Lambda_\nu} \le C \exp(-\alpha \cdot \nu) \prod_{k=1}^{d} \sqrt{2\nu_k + 1} \cdot C_\gamma \prod_{k=1}^{d} (\nu_k + 1)^{\gamma_k} \le C' \exp(-\alpha \cdot \nu) \prod_{k=1}^{d} (\nu_k + 1)^{\gamma_k + 0.5}$$
Optimal interpolation space of level $L$:
$$\Lambda_{\alpha,\beta}(L) = \{\nu : \alpha \cdot \nu + \beta \cdot \log(\nu + 1) \le L\}$$
for unknown $\alpha$ (projection error) and $\beta$ (interpolation error).
How to Choose Θ? — Estimating α and β
We do not know $\alpha$ and $\beta$, but we can estimate them from the interpolant $f_\Lambda$:
1. Get the $\tilde{c}_\nu$: $f(x) \approx f_\Lambda(x) = \sum_\nu \tilde{c}_\nu L_\nu(x)$.
2. Assume $|\tilde{c}_\nu| \approx \exp(-\alpha \cdot \nu)(\nu + 1)^{-\beta}$.
3. Solve
$$\min_{\alpha, \beta} \sum_{\nu \in \Lambda} \left( C + \alpha \cdot \nu + \beta \cdot \log(\nu + 1) + \log|\tilde{c}_\nu| \right)^2$$
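Taking logarithms makes this a linear least-squares problem in $(C, \alpha, \beta)$. A sketch on synthetic coefficients with known rates (the coefficient model and the test rates are fabricated for illustration):

```python
import numpy as np
from itertools import product

def fit_decay(coeffs):
    """Least-squares fit of log|c_nu| ~ -C - alpha . nu - beta * sum_i log(nu_i + 1).
    coeffs maps multi-index tuples to coefficient values; returns (alpha, beta)."""
    nus = np.array(list(coeffs.keys()), dtype=float)
    y = np.log(np.abs(np.fromiter(coeffs.values(), dtype=float)))
    d = nus.shape[1]
    # Unknowns [C, alpha_1..alpha_d, beta]; columns [-1, -nu, -sum log(nu+1)].
    A = np.column_stack([-np.ones(len(y)), -nus,
                         -np.log(nus + 1.0).sum(axis=1)])
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[1:1 + d], sol[1 + d]

# Synthetic coefficients with known rates alpha = (0.7, 1.3), beta = 0.5:
alpha_true, beta_true = np.array([0.7, 1.3]), 0.5
coeffs = {nu: float(np.exp(-alpha_true @ np.array(nu))
                    * np.prod((np.array(nu) + 1.0) ** -beta_true))
          for nu in product(range(8), repeat=2)}
alpha_hat, beta_hat = fit_decay(coeffs)
```

Since the synthetic data matches the model exactly, the fit recovers the rates to machine precision; with real interpolation coefficients the fit is only approximate, which is why the algorithm re-estimates at every iteration.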
How to Choose Θ? — Let's Summarize
Sparse grids:
$$Q_\Theta^d = \sum_{\alpha \in \Theta} \bigotimes_{i=1}^{d} \Delta_{\alpha_i}^i = \sum_{\alpha \in \Theta} c_\alpha \bigotimes_{i=1}^{d} I_{\alpha_i}^i$$
with $\Theta$ the order (level) space and $I_l^i$ a sequence of 1-d rules of order $l$ with $m(l)$ nodes and polynomial Lebesgue constant $\lambda_l$. The optimal level-$p$ interpolant has degree space
$$\Lambda(p) = \{\nu : \alpha \cdot \nu + \beta \cdot \log(\nu + 1) \le p\}$$
where $\alpha$ and $\beta$ can be approximated from a current interpolant. Minimal polynomial interpolant: for a given $p$, the smallest $\Theta$ that covers $\Lambda(p)$ (unique) is optimal.
How to Choose Θ? — Algorithm
Given univariate rules:
1. Select initial optimal $\Lambda^0$ and $\Theta^0$.
2. Repeat for $n = 0, 1, \ldots$:
   - Compute $f_{\Lambda^n} = Q_{\Theta^n}^d f$.
   - Compute the $\tilde{c}_\nu$.
   - Estimate $\alpha$ and $\beta$.
   - Update $\Lambda^{n+1}$ and $\Theta^{n+1}$.
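The interpolate / estimate-decay / refine loop can be illustrated in a drastically simplified one-dimensional setting, using Chebyshev coefficients in place of the multivariate $\tilde{c}_\nu$. This is my own toy analogue, not the paper's algorithm:

```python
import numpy as np

def adaptive_chebyshev(f, tol=1e-12, n0=4, n_max=256):
    """Toy 1-d analogue of the adaptive loop: interpolate, read off the
    spectral coefficients, fit the exponential decay rate alpha, and refine
    until the fitted model predicts the next coefficient falls below tol."""
    n = n0
    while True:
        c = np.polynomial.chebyshev.chebinterpolate(f, n)   # current coefficients
        k = np.arange(n + 1, dtype=float)
        # Fit log|c_k| ~ logC - alpha * k by linear least squares.
        A = np.column_stack([np.ones(n + 1), -k])
        (logC, alpha), *_ = np.linalg.lstsq(A, np.log(np.abs(c) + 1e-300),
                                            rcond=None)
        predicted_next = np.exp(logC - alpha * (n + 1))
        if predicted_next < tol or n >= n_max:
            return np.polynomial.chebyshev.Chebyshev(c)
        n *= 2                                              # refine and repeat

p = adaptive_chebyshev(lambda x: 1.0 / (x - 2.0))
err = abs(p(0.3) - 1.0 / (0.3 - 2.0))
```

The real method does the analogous thing in $d$ dimensions: the fitted rates reshape the anisotropic index set $\Lambda^{n+1}$ rather than a single polynomial degree.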
Numerical Results — Estimating the Lebesgue Constant
(Figure: numerical estimates of the Lebesgue constants $\lambda_l$ for the univariate rules, compared against bounds of the form $3\sqrt{l+1}$ and $4\sqrt{l+1}$, and logarithmic growth of the form $\frac{2}{\pi}\log(2l+1)$.)
Numerical Results — Experiment 1: Parametrized Elliptic PDE
Solve, for $x \in D$,
$$-\nabla_x \cdot (a(x, y) \nabla_x u(x, y)) = b(x)$$
and evaluate for $y \in \Gamma \subset \mathbb{R}^7$. Goal: interpolate $f(y) = \|u(\cdot, y)\|_{L^2}$.
Numerical Results — Experiment 1: Parametrized Elliptic PDE
(Figures: convergence with the Clenshaw-Curtis and R-Leja rules.)
Numerical Results — Experiment 2: Steady-State Burgers' Equation
Solve, for $x \in D \subset \mathbb{R}^2$,
$$-\nabla_x \cdot (a(y) \nabla_x u(x, y)) + (v(y) \cdot \nabla_x u(x, y)) \, u(x, y) = 0$$
and evaluate for $y \in [-1, 1]^3$. Goal: interpolate $f(y) = \int_D u(x, y) \, dx$.
Numerical Results — Experiment 2: Steady-State Burgers' Equation
(Figures: convergence with the Clenshaw-Curtis and R-Leja rules.)
Conclusion
- Adaptive sparse grids algorithm.
- Sources of error: projection ($\alpha$, "best M terms") and interpolation ($\beta$, the Lebesgue constant).
- The coefficients can be estimated from the previous interpolant.
- Several new univariate rules based on minimization of the Lebesgue constant.
Conclusion — References
All results are from [1]; [2, 3, 4] are interesting further references.
[1] Stoyanov, Miroslav K., and Clayton G. Webster. "A dynamically adaptive sparse grid method for quasi-optimal interpolation of multidimensional analytic functions." arXiv preprint arXiv:1508.01125 (2015).
[2] Kaarnioja, Vesa. "Smolyak quadrature." (2013).
[3] Beck, Joakim, et al. "Convergence of quasi-optimal stochastic Galerkin methods for a class of PDEs with random coefficients." Computers & Mathematics with Applications 67.4 (2014): 732-751.
[4] Nobile, Fabio, Lorenzo Tamellini, and Raúl Tempone. "Convergence of quasi-optimal sparse grid approximation of Hilbert-valued functions: application to random elliptic PDEs." MATHICSE report 12 (2014).