Sparse Quadrature Algorithms for Bayesian Inverse Problems


Sparse Quadrature Algorithms for Bayesian Inverse Problems
Claudia Schillings, Christoph Schwab
Pro*Doc Retreat Disentis 2013, Numerical Analysis and Scientific Computing
Disentis, 15 August 2013
Research supported by the European Research Council under FP7 Grant AdG247277
C. Schillings (SAM), Sparsity in Bayesian Inversion, Pro*Doc, August 15, 2013, 1 / 24

Outline
1 Bayesian Inversion of Parametric Operator Equations
2 Sparsity of the Forward Solution
3 Sparsity of the Posterior Density
4 Sparse Quadrature
5 Numerical Results: Model Parametric Parabolic Problem
6 Summary

Bayesian Inverse Problems (Stuart 2010)

Find the unknown data u ∈ X from noisy observations δ = G(u) + η, where
- X is a separable Banach space
- G : X → X̃ is the forward map

Abstract operator equation: given u ∈ X, find q ∈ X̃ such that A(u; q) = f, with A ∈ L(X̃, Y′) for reflexive Banach spaces X̃, Y, and corresponding bilinear form a(v, w) := ⟨w, Av⟩_{Y×Y′} for v ∈ X̃, w ∈ Y.

- O : X̃ → R^K a bounded, linear observation operator
- 𝒢 : X → R^K the uncertainty-to-observation map, 𝒢 = O ∘ G
- η ∈ R^K the observational noise, η ~ N(0, Γ)

Bayesian Inverse Problems (Stuart 2010)

Find the unknown data u ∈ X from noisy observations δ = G(u) + η, with the forward map G, observation operator O, uncertainty-to-observation map 𝒢 = O ∘ G and observational noise η ~ N(0, Γ) as above.

Least squares potential Φ : X × R^K → R,
Φ(u; δ) := ½ (δ − 𝒢(u))ᵀ Γ⁻¹ (δ − 𝒢(u)).

Reformulation of the forward problem with unknown stochastic input data as an infinite-dimensional, parametric deterministic problem.

Bayesian Inverse Problems (Stuart 2010)

Parametric representation of the unknown u:
u = u(y) := ū + Σ_{j∈J} y_j ψ_j ∈ X
- y = (y_j)_{j∈J} an i.i.d. sequence of real-valued random variables, y_j ~ U[−1, 1]
- ū, ψ_j ∈ X, J a finite or countably infinite index set

Prior measure on the uncertain input data, on the measurable space (U, B) = ([−1, 1]^J, ⊗_{j∈J} B¹[−1, 1]):
μ₀(dy) := ⊗_{j∈J} ½ λ¹(dy_j)
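A draw from this prior is straightforward to assemble. The sketch below is an illustration only: it uses the piecewise-constant basis ψ_j = α_j χ_{D_j} with α_j = 0.9 j⁻² from the numerical experiments later in the talk, and the grid size and seed are arbitrary choices of this example.

```python
import numpy as np

# One draw from the prior mu_0: y_j ~ U[-1,1] i.i.d., u(y) = ubar + sum_j y_j psi_j.
# Basis (borrowed from the numerical experiments later in the talk):
# psi_j = alpha_j * chi_{D_j}, D_j = [(j-1)/J, j/J], alpha_j = 0.9 j^-zeta.
rng = np.random.default_rng(0)
J, zeta, ubar = 64, 2, 1.0
alpha = 0.9 / np.arange(1, J + 1) ** zeta
y = rng.uniform(-1.0, 1.0, size=J)

x = np.linspace(0.0, 1.0, 1025)
cell = np.minimum((x * J).astype(int), J - 1)  # index j with x in D_j
u = ubar + y[cell] * alpha[cell]               # realization u(x, y)

# The supports D_j are disjoint, so u >= ubar - max_j alpha_j = 0.1 > 0:
# every realization is uniformly elliptic.
print(u.min() > 0.0)
```

The last line illustrates why well-posedness, condition (p, ε)-1 below, holds for diffusion problems with this basis: the realizations stay uniformly bounded away from zero.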

(p, ε)-Analyticity

(p, ε)-1 (well-posedness): for each y ∈ U, there exist a unique realization u(y) ∈ X and a unique solution q(y) ∈ X̃ of the forward problem. This solution satisfies the a-priori estimate
‖q(y)‖_X̃ ≤ C₀(y) for all y ∈ U, where U ∋ y ↦ C₀(y) ∈ L¹(U; μ₀).

(p, ε)-2 (analyticity): there exist 0 < p < 1 and b = (b_j)_{j∈J} ∈ ℓᵖ(J) such that for every 0 < ε < 1 there exist C_ε > 0 and poly-radii ρ = (ρ_j)_{j∈J} with ρ_j > 1 and Σ_{j∈J} (ρ_j − 1) b_j ≤ ε, such that U ∋ y ↦ q(y) ∈ X̃ admits an analytic continuation to the open polyellipse E_ρ := ⊗_{j∈J} E_{ρ_j} ⊂ C^J with
‖q(z)‖_X̃ ≤ C_ε for all z ∈ E_ρ.

Sparsity of the Forward Solution

Theorem (Chkifa, Cohen, DeVore and Schwab). Assume that the parametric forward solution map q(y) admits a (p, ε)-analytic extension to the polyellipse E_ρ ⊂ C^J. Then the Legendre series converges unconditionally,
q(y) = Σ_{ν∈F} q^P_ν P_ν(y) in L^∞(U, μ₀; X̃),
with Legendre polynomials normalized so that P_k(1) = 1 and ‖P_k‖_{L^∞(−1,1)} = 1, k = 0, 1, …
There exists a p-summable, monotone envelope q̄ = (q̄_ν)_{ν∈F}, i.e.
q̄_ν := sup_{μ≥ν} ‖q^P_μ‖_X̃ with C(p, q) := ‖q̄‖_{ℓᵖ(F)} < ∞,
and a monotone set Λ^P_N ⊂ F corresponding to the N largest terms of q̄, with
sup_{y∈U} ‖q(y) − Σ_{ν∈Λ^P_N} q^P_ν P_ν(y)‖_X̃ ≤ C(p, q) N^{−(1/p−1)}.

(p, ε)-Analyticity of Affine Parametric Operator Families

Affine parametric operator families:
A(y) = A₀ + Σ_{j∈J} y_j A_j ∈ L(X̃, Y′)

Assumption A1: there exists μ₀ > 0 such that
inf_{0≠v∈X̃} sup_{0≠w∈Y} a₀(v, w)/(‖v‖_X̃ ‖w‖_Y) ≥ μ₀ and inf_{0≠w∈Y} sup_{0≠v∈X̃} a₀(v, w)/(‖v‖_X̃ ‖w‖_Y) ≥ μ₀.

Assumption A2: there exists a constant 0 < κ < 1 such that
Σ_{j∈J} b_j ≤ κ < 1, where b_j := ‖A₀⁻¹ A_j‖_{L(X̃, X̃)}.

Assumption A3: for some 0 < p < 1,
‖b‖ᵖ_{ℓᵖ(J)} = Σ_{j∈J} b_jᵖ < ∞.

(p, ε)-Analyticity of Affine Parametric Operator Families

Theorem (Cohen, DeVore and Schwab 2010). Under Assumptions A1-A3, for every realization y ∈ U of the parameters, A(y) is boundedly invertible, uniformly with respect to the parameter sequence y ∈ U: for the parametric bilinear form a(y; ·, ·) : X̃ × Y → R, the uniform inf-sup conditions hold with μ = (1 − κ)μ₀,
for all y ∈ U: inf_{0≠v∈X̃} sup_{0≠w∈Y} a(y; v, w)/(‖v‖_X̃ ‖w‖_Y) ≥ μ, inf_{0≠w∈Y} sup_{0≠v∈X̃} a(y; v, w)/(‖v‖_X̃ ‖w‖_Y) ≥ μ.
The forward map q : U → X̃, q := G(u), and the uncertainty-to-observation map 𝒢 : U → R^K are globally Lipschitz and (p, ε)-analytic, with 0 < p < 1 as in Assumption A3.

Examples

Stationary elliptic diffusion problem:
A₁(u; q) := −∇·(u ∇q) = f in D, q = 0 on ∂D,
with X̃ = Y = V = H¹₀(D).

Time-dependent diffusion:
A₂(y) := (∂_t + A₁(y), ι₀),
where ι₀ denotes the time t = 0 trace,
X̃ = L²(0, T; V) ∩ H¹(0, T; V′), Y = L²(0, T; V) × H.

Bayesian Inverse Problem

Theorem (Schwab and Stuart 2011). Assume that 𝒢(u) is bounded and continuous in u = ū + Σ_{j∈J} y_j ψ_j. Then μ^δ(dy), the distribution of y ∈ U given δ, is absolutely continuous with respect to μ₀(dy), i.e.
dμ^δ/dμ₀ (y) = Θ(y)/Z,
with the parametric Bayesian posterior Θ given by
Θ(y) = exp(−Φ(u; δ))|_{u = ū + Σ_{j∈J} y_j ψ_j}
and the normalization constant Z = ∫_U Θ(y) μ₀(dy).

Bayesian Inverse Problem

Expectation of a quantity of interest φ : X → S:
E^{μ^δ}[φ(u)] = Z⁻¹ ∫_U exp(−Φ(u; δ)) φ(u)|_{u = ū + Σ_{j∈J} y_j ψ_j} μ₀(dy) =: Z′/Z,
with Z = ∫_U exp(−½ (δ − 𝒢(u))ᵀ Γ⁻¹ (δ − 𝒢(u))) μ₀(dy).

- Reformulation of the forward problem with unknown stochastic input data as an infinite-dimensional, parametric deterministic problem
- Parametric, deterministic representation of the derivative of the posterior measure with respect to the prior μ₀
- Approximation of Z and Z′ to compute the expectation of the QoI under the posterior, given data δ
- Efficient algorithm to approximate the conditional expectations given the data, with dimension-independent rates of convergence
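In one parameter dimension, the ratio Z′/Z can be computed with any sufficiently accurate quadrature rule reweighted for the prior. The toy sketch below shows only the mechanics; the scalar forward map G, the data δ and the noise level γ are all made up for this illustration.

```python
import numpy as np

# Toy 1D sketch: posterior expectation as the ratio Z'/Z of two prior
# integrals. Forward map, data and noise level here are hypothetical.
ubar, psi, delta, gamma = 1.0, 0.5, 1.5, 0.1

def G(y):
    return (ubar + y * psi) ** 2          # toy uncertainty-to-observation map

def Theta(y):
    # posterior density w.r.t. the prior: exp(-Phi(u; delta))
    return np.exp(-0.5 * (delta - G(y)) ** 2 / gamma ** 2)

# Gauss-Legendre rule on [-1,1], reweighted for the prior measure dy/2
nodes, weights = np.polynomial.legendre.leggauss(200)
weights = weights / 2.0

Z = weights @ Theta(nodes)                # normalization constant
Zp = weights @ (Theta(nodes) * G(nodes))  # numerator, for phi = G
posterior_mean = Zp / Z                   # E^{mu^delta}[G(u)]
```

With this data the posterior concentrates where G(u) ≈ δ, so the ratio lands close to δ = 1.5. The same two-integral structure is what the sparse quadrature approximates in high dimension.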

Sparsity of the Posterior Density

Theorem (C.S. and Ch. Schwab 2013). Assume that the forward solution map U ∋ y ↦ q(y) is (p, ε)-analytic for some 0 < p < 1. Then the Bayesian posterior density Θ(y) is, as a function of the parameter y, likewise (p, ε)-analytic, with the same p and the same ε.

N-term approximation result:
sup_{y∈U} |Θ(y) − Σ_{ν∈Λ^P_N} θ^P_ν P_ν(y)| ≤ C N^{−s} ‖(θ^P_ν)‖_{ℓᵖ_m(F)}, s := 1/p − 1.

Adaptive Smolyak quadrature algorithm with convergence rates depending only on the summability of the parametric operator.

Univariate Quadrature

Univariate quadrature operators of the form
Q_k(g) = Σ_{i=0}^{n_k} w_i^k g(z_i^k)
with g : [−1, 1] → S for some Banach space S,
- (Q_k)_{k≥0} a sequence of univariate quadrature formulas
- quadrature points (z_j^k)_{j=0}^{n_k} ⊂ [−1, 1] with z_0^k = 0 for all k
- quadrature weights w_j^k, 0 ≤ j ≤ n_k, k ∈ N₀

Assumption 1
(i) (I − Q_k)(g_k) = 0 for all g_k ∈ P_k = span{y^j : j ∈ N₀, j ≤ k}, with I(g_k) = ∫_{[−1,1]} g_k(y) ½ λ¹(dy)
(ii) w_j^k > 0, 0 ≤ j ≤ n_k, k ∈ N₀

Univariate quadrature difference operators, with Q_{−1} := 0 and z_0^0 = 0, w_0^0 = 1:
Δ_j = Q_j − Q_{j−1}, j ≥ 0,
so that each Q_k can be rewritten as the telescoping sum
Q_k = Σ_{j=0}^k Δ_j,
with Z_k = {z_j^k : 0 ≤ j ≤ n_k} ⊂ [−1, 1] the set of points corresponding to Q_k.
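Assumption 1(i) is easy to verify numerically for small k. The sketch below is an illustration, not the talk's implementation: it builds Clenshaw-Curtis points (the rule used in the experiments later), computes weights by matching moments of the prior marginal ½dy, and checks exactness on a polynomial of degree n_k − 1.

```python
import numpy as np

def cc_points(k):
    """Clenshaw-Curtis points with n_0 = 1, n_k = 2^k + 1 (as later in the talk)."""
    n = 1 if k == 0 else 2 ** k + 1
    if n == 1:
        return np.array([0.0])
    return np.cos(np.pi * np.arange(n) / (n - 1))

def quad_weights(z):
    """Weights matching the moments of the prior marginal (1/2) dy on [-1,1]."""
    n = len(z)
    V = np.vander(z, n, increasing=True).T          # V[k, i] = z_i^k
    m = np.array([1.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(n)])
    return np.linalg.solve(V, m)                    # solve sum_i w_i z_i^k = m_k

z = cc_points(2)                  # 5 points -> exact for degree <= 4
w = quad_weights(z)
g = lambda y: y ** 4 - 2 * y + 1
exact = 1.0 / 5 + 1.0             # int_{-1}^{1} g(y) (1/2) dy
print(abs(w @ g(z) - exact) < 1e-12)
```

The resulting weights are positive and sum to one, so both parts of Assumption 1 hold for this rule.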

Tensorization

Tensorized multivariate operators
Δ_ν = ⊗_{j≥1} Δ_{ν_j}, Q_ν = ⊗_{j≥1} Q_{ν_j}, ν = (ν_j)_{j≥1},
with associated set of multivariate points Z_ν = ×_{j≥1} Z_{ν_j} ⊂ U.

If ν = 0_F, then Δ_ν g = Q_ν g = g(z_{0_F}) = g(0_F).
If 0_F ≠ ν ∈ F, with ν̂ = (ν_j)_{j≠i},
Q_ν g = Q_{ν_i}(t ↦ ⊗_{j≠i} Q_{ν̂_j} g_t), i ∈ I_ν, and Δ_ν g = Δ_{ν_i}(t ↦ ⊗_{j≠i} Δ_{ν̂_j} g_t), i ∈ I_ν,
where, for g defined on Z, g_t is the function defined on Z^N by g_t(ŷ) = g(y), with y = (…, y_{i−1}, t, y_{i+1}, …) for i > 1 and y = (t, y_2, …) for i = 1.

Sparse Quadrature Operator

For any finite monotone set Λ ⊂ F, the quadrature operator is defined by
Q_Λ = Σ_{ν∈Λ} Δ_ν = Σ_{ν∈Λ} ⊗_{j≥1} Δ_{ν_j},
with associated collocation grid Z_Λ = ∪_{ν∈Λ} Z_ν.

Theorem. For any monotone index set Λ_N ⊂ F, the sparse quadrature Q_{Λ_N} is exact for any polynomial g ∈ P_{Λ_N}, i.e. it holds
Q_{Λ_N}(g) = I(g) for all g ∈ P_{Λ_N},
with P_{Λ_N} = span{y^ν : ν ∈ Λ_N} and I(g) = ∫_U g(y) μ₀(dy).
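The operator Q_Λ = Σ_{ν∈Λ} ⊗_j Δ_{ν_j} and the exactness theorem can be rendered directly in two dimensions. The code below is illustrative only; the univariate rule is Clenshaw-Curtis with moment-matched weights, an assumption consistent with the experiments later in the talk.

```python
import numpy as np
from itertools import product

def cc_rule(k):
    """Clenshaw-Curtis points and weights w.r.t. the prior marginal (1/2) dy."""
    n = 1 if k == 0 else 2 ** k + 1
    if n == 1:
        return np.array([0.0]), np.array([1.0])
    z = np.cos(np.pi * np.arange(n) / (n - 1))
    V = np.vander(z, n, increasing=True).T
    m = np.array([1.0 / (i + 1) if i % 2 == 0 else 0.0 for i in range(n)])
    return z, np.linalg.solve(V, m)

def delta(k):
    """Difference operator Delta_k = Q_k - Q_{k-1} as (point, signed weight) pairs."""
    z, w = cc_rule(k)
    pairs = list(zip(z, w))
    if k > 0:
        zp, wp = cc_rule(k - 1)
        pairs += list(zip(zp, -wp))
    return pairs

def sparse_quadrature(Lambda, g):
    """Q_Lambda(g) = sum_{nu in Lambda} (Delta_{nu_1} x Delta_{nu_2} x ...)(g)."""
    total = 0.0
    for nu in Lambda:
        for combo in product(*(delta(k) for k in nu)):
            weight = np.prod([w for _, w in combo])
            total += weight * g([p for p, _ in combo])
    return total

# monotone (downward closed) index set and a polynomial g in P_Lambda
Lambda = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]
g = lambda y: y[0] ** 2 + y[0] * y[1]
# I(g) = int y_0^2 dmu_0 = 1/3; the sparse rule reproduces it exactly
print(abs(sparse_quadrature(Lambda, g) - 1.0 / 3.0) < 1e-12)
```

Only the difference rules for the levels appearing in Λ are ever evaluated, which is the source of the cost savings over a full tensor grid.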

Convergence Rates for Adaptive Smolyak Integration

Theorem. Assume that the forward solution map U ∋ y ↦ q(y) is (p, ε)-analytic for some 0 < p < 1. Then there exist two sequences (Λ¹_N)_{N≥1}, (Λ²_N)_{N≥1} of monotone index sets Λ^{1,2}_N ⊂ F with #Λ^{1,2}_N ≤ N such that
|I[Θ] − Q_{Λ¹_N}[Θ]| ≤ C₁ N^{−s}, with s = 1/p − 1 and I[Θ] = ∫_U Θ(y) μ₀(dy),
and
‖I[Ψ] − Q_{Λ²_N}[Ψ]‖_X̃ ≤ C₂ N^{−s}, with s = 1/p − 1 and I[Ψ] = ∫_U Ψ(y) μ₀(dy),
where C₁, C₂ > 0 are independent of N.

C.S. and Ch. Schwab. Sparsity in Bayesian Inversion of Parametric Operator Equations, 2013.

Convergence Rates for Adaptive Smolyak Integration

Sketch of proof
- Relate the quadrature error to the Legendre coefficients:
|I(Θ) − Q_Λ(Θ)| ≤ 2 Σ_{ν∉Λ} γ_ν |θ^P_ν| and ‖I(Ψ) − Q_Λ(Ψ)‖_X̃ ≤ 2 Σ_{ν∉Λ} γ_ν ‖ψ^P_ν‖_X̃
for any monotone set Λ ⊂ F, where γ_ν := Π_{j∈J} (1 + ν_j)².
- (γ_ν |θ^P_ν|)_{ν∈F} ∈ ℓᵖ_m(F) and (γ_ν ‖ψ^P_ν‖_X̃)_{ν∈F} ∈ ℓᵖ_m(F).
- Hence there is a sequence (Λ_N)_{N≥1} of monotone sets Λ_N ⊂ F, #Λ_N ≤ N, such that the Smolyak quadrature converges with order 1/p − 1.

Adaptive Construction of the Monotone Index Set

Successive identification of the N largest contributions
Δ_ν(Θ) = ⊗_{j≥1} Δ_{ν_j}(Θ), ν ∈ F.

A. Chkifa, A. Cohen and Ch. Schwab. High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs, 2012.

Set of reduced neighbors:
N(Λ) := {ν ∉ Λ : ν − e_j ∈ Λ for all j ∈ I_ν and ν_j = 0 for all j > j(Λ) + 1},
with j(Λ) = max{j : ν_j > 0 for some ν ∈ Λ} and I_ν = {j ∈ N : ν_j ≠ 0}.

Adaptive Construction of the Monotone Index Set

1: function ASG
2:   Set Λ₁ = {0}, k = 1 and compute Δ₀(Θ).
3:   Determine the set of reduced neighbors N(Λ₁).
4:   Compute Δ_ν(Θ) for all ν ∈ N(Λ₁).
5:   while Σ_{ν∈N(Λ_k)} |Δ_ν(Θ)| > tol do
6:     Select ν ∈ N(Λ_k) with largest |Δ_ν(Θ)| and set Λ_{k+1} = Λ_k ∪ {ν}.
7:     Determine the set of reduced neighbors N(Λ_{k+1}).
8:     Compute Δ_ν(Θ) for all ν ∈ N(Λ_{k+1}).
9:     Set k = k + 1.
10:  end while
11: end function

T. Gerstner and M. Griebel. Dimension-adaptive tensor-product quadrature, Computing, 2003.
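The loop above can be sketched end to end in a few dozen lines. Everything in this sketch is an assumption of the illustration, not the talk's code: Clenshaw-Curtis differences stand in for the univariate rules, a smooth two-dimensional integrand stands in for Θ, and the function names (`asg`, `delta_contribution`, `reduced_neighbors`) are invented here.

```python
import numpy as np
from itertools import product

def cc_rule(k):
    """Clenshaw-Curtis points and weights w.r.t. the prior marginal (1/2) dy."""
    n = 1 if k == 0 else 2 ** k + 1
    if n == 1:
        return np.array([0.0]), np.array([1.0])
    z = np.cos(np.pi * np.arange(n) / (n - 1))
    V = np.vander(z, n, increasing=True).T
    m = np.array([1.0 / (i + 1) if i % 2 == 0 else 0.0 for i in range(n)])
    return z, np.linalg.solve(V, m)

def delta_contribution(nu, g):
    """Tensorized difference Delta_nu(g), with Delta_k = Q_k - Q_{k-1}."""
    rules = []
    for k in nu:
        z, w = cc_rule(k)
        pairs = list(zip(z, w))
        if k > 0:
            zp, wp = cc_rule(k - 1)
            pairs += list(zip(zp, -wp))
        rules.append(pairs)
    return sum(np.prod([w for _, w in c]) * g([p for p, _ in c])
               for c in product(*rules))

def reduced_neighbors(Lam, dim):
    """N(Lam): indices nu not in Lam whose backward neighbors all lie in Lam,
    activating at most one dimension beyond the currently active ones."""
    jmax = max((j for nu in Lam for j in range(dim) if nu[j] > 0), default=-1)
    out = set()
    for nu in Lam:
        for j in range(min(jmax + 2, dim)):
            cand = tuple(nu[i] + (1 if i == j else 0) for i in range(dim))
            if cand in Lam:
                continue
            if all(tuple(cand[i] - (1 if i == l else 0) for i in range(dim)) in Lam
                   for l in range(dim) if cand[l] > 0):
                out.add(cand)
    return out

def asg(g, dim, tol=1e-8, max_iter=50):
    """Dimension-adaptive sparse quadrature in the spirit of Gerstner-Griebel."""
    Lam = {(0,) * dim}
    total = delta_contribution((0,) * dim, g)
    active = {nu: delta_contribution(nu, g) for nu in reduced_neighbors(Lam, dim)}
    for _ in range(max_iter):
        if not active or sum(abs(v) for v in active.values()) <= tol:
            break
        nu = max(active, key=lambda idx: abs(active[idx]))  # largest contribution
        total += active.pop(nu)
        Lam.add(nu)
        for new in reduced_neighbors(Lam, dim) - set(active):
            active[new] = delta_contribution(new, g)
    return total

# smooth stand-in for the posterior density Theta
val = asg(lambda y: np.exp(y[0] + 0.5 * y[1]), dim=2)
exact = np.sinh(1.0) * np.sinh(0.5) / 0.5      # product of 1D prior integrals
print(abs(val - exact) < 1e-5)
```

The sum of the remaining active contributions serves as the error indicator in line 5 of the pseudocode; the accepted set Λ stays monotone by construction of the reduced neighbors.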

Numerical Experiments

Model parametric parabolic problem:
∂_t q(t, x) − div(u(x) ∇q(t, x)) = 100 t x, (t, x) ∈ T × D,
q(0, x) = 0, x ∈ D,
q(t, 0) = q(t, 1) = 0, t ∈ T,
with
u(x, y) = ū + Σ_{j=1}^{64} y_j ψ_j, where ū = 1 and ψ_j = α_j χ_{D_j},
D_j = [(j − 1)/64, j/64], y = (y_j)_{j=1,…,64} and α_j = 0.9 j^{−ζ}, ζ = 2, 3, 4.

- Finite element method using continuous, piecewise linear ansatz functions in space, backward Euler scheme in time
- Uniform mesh with meshwidth h_T = h_D = 2⁻¹¹
- LAPACK's DPTSV routine
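For a single parameter realization y, this discretization reduces each time step to a symmetric positive-definite tridiagonal solve, which is why DPTSV appears. The toy version below uses finite differences on a much coarser grid than the h = 2⁻¹¹ of the talk, with a hand-rolled Thomas elimination standing in for DPTSV; it is a structural sketch, not the talk's solver.

```python
import numpy as np

def solve_forward(y, alpha, h=2.0 ** -6, dt=2.0 ** -6, T=1.0):
    """Backward-Euler / centered-difference sketch of
    dq/dt - d/dx(u(x) dq/dx) = 100 t x on (0,1), q(0,.) = 0, q(t,0) = q(t,1) = 0,
    with u(x) = 1 + sum_j y_j alpha_j chi_{D_j} (coarse grid, for illustration)."""
    J = len(y)
    x = np.arange(1, round(1.0 / h)) * h             # interior nodes
    def u(xs):                                       # piecewise-constant diffusion
        idx = np.minimum((xs * J).astype(int), J - 1)
        return 1.0 + y[idx] * alpha[idx]
    um, up = u(x - h / 2), u(x + h / 2)              # u at the cell interfaces
    n = len(x)
    lower = -dt * um[1:] / h ** 2                    # (I + dt*A): SPD tridiagonal
    diag = 1.0 + dt * (um + up) / h ** 2
    upper = -dt * up[:-1] / h ** 2
    q, t = np.zeros(n), 0.0
    while t < T - 1e-12:
        t += dt
        d = q + dt * 100.0 * t * x                   # right-hand side
        b, c = diag.copy(), upper.copy()
        for i in range(1, n):                        # Thomas forward elimination
            w = lower[i - 1] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        q = np.empty(n)                              # back substitution
        q[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):
            q[i] = (d[i] - c[i] * q[i + 1]) / b[i]
    return x, q

# nominal diffusion (y = 0): the solution stays nonnegative (discrete max principle)
x, q = solve_forward(np.zeros(4), 0.9 / np.arange(1, 5) ** 2)
print(q.min() >= 0.0 and q.max() > 0.0)
```

One such solve per quadrature point is the dominant cost, so the cardinality of Λ_N on the following slides translates directly into the number of PDE solves.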

Numerical Experiments

Find the unknown data u for given (noisy) data δ = G(u) + η.

Expectation of interest Z′/Z:
Z′ = ∫_U exp(−Φ(u; δ)) φ(u)|_{u = ū + Σ_{j=1}^{64} y_j ψ_j} μ₀(dy)
Z = ∫_U exp(−Φ(u; δ))|_{u = ū + Σ_{j=1}^{64} y_j ψ_j} μ₀(dy)

Observation operator O consists of system responses at K observation points in T × D, at
t_i = i · 2^{−N_{K,T}}, i = 1, …, 2^{N_{K,T}} − 1, and x_j = j · 2^{−N_{K,D}}, j = 1, …, 2^{N_{K,D}} − 1,
o_k(·, ·) = δ(· − t_k) δ(· − x_k)
- K = 1: N_{K,D} = 1, N_{K,T} = 1; K = 3: N_{K,D} = 2, N_{K,T} = 1; K = 9: N_{K,D} = 2, N_{K,T} = 2
- 𝒢 : X → R^K with K = 1, 3, 9, and φ(u) = 𝒢(u)
- η = (η_j)_{j=1,…,K} i.i.d. with η_j ~ N(0, 1), η_j ~ N(0, 0.5²) or η_j ~ N(0, 0.1²)

Numerical Experiments

Quadrature points:
- Clenshaw-Curtis (CC):
z_j^k = cos(πj/(n_k − 1)), j = 0, …, n_k − 1, if n_k > 1, and z_0^k = 0 if n_k = 1,
with n_0 = 1 and n_k = 2^k + 1 for k ≥ 1.
- R-Leja sequence (RL)

Numerical Experiments

Quadrature points, R-Leja sequence (RL): projection on [−1, 1] of a Leja sequence for the complex unit disk initiated at i:
z_0^k = 0, z_1^k = 1, z_2^k = −1,
z_j^k = ℜ(ẑ) with ẑ = argmax_{|z|≤1} Π_{l=1}^{j−1} |z − ẑ_l|, j = 3, …, n_k, if j odd,
z_j^k = −z_{j−1}^k, j = 3, …, n_k, if j even,
with n_k = 2^k + 1 for k ≥ 0.

J.-P. Calvi and M. Phung Van. On the Lebesgue constant of Leja sequences for the unit disk and its applications to multivariate interpolation, Journal of Approximation Theory, 2011.
J.-P. Calvi and M. Phung Van. Lagrange interpolation at real projections of Leja sequences for the unit disk, Proceedings of the American Mathematical Society, 2012.
A. Chkifa. On the Lebesgue constant of Leja sequences for the unit disk, Journal of Approximation Theory, 2013.
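The recursion can be reproduced numerically. The sketch below is one interpretation of it, and several choices are assumptions of this illustration: the Leja maximization is carried out over the disk sequence itself (initiated at i), the maximizer is searched on a fine grid of the unit circle, where it lies by the maximum-modulus principle, and duplicate real projections are skipped with a tolerance larger than the grid resolution.

```python
import numpy as np

def r_leja(n, grid=200001):
    """First n R-Leja points: distinct real projections of a Leja sequence
    for the unit disk initiated at i (a numerical sketch of the recursion)."""
    theta = np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False)
    circle = np.exp(1j * theta)
    leja = [1j]                       # Leja sequence on the disk, started at i
    pts = [0.0]                       # Re(i) = 0
    while len(pts) < n and len(leja) < 4 * n:
        dist = np.ones(grid)
        for z in leja:                # product of distances to previous points
            dist *= np.abs(circle - z)
        znew = circle[np.argmax(dist)]
        leja.append(znew)
        # skip duplicate projections; tolerance must exceed the grid resolution
        if all(abs(znew.real - p) > 1e-3 for p in pts):
            pts.append(znew.real)
    return np.array(pts)

print(np.round(np.sort(r_leja(5)), 3))   # -1, -0.707, 0, 0.707, 1 (sorted)
```

The points are nested across levels, which is what makes the telescoping differences Δ_j cheap for this sequence.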

Leja Quadrature Points

Proposition. Let Q^RL_Λ denote the sparse quadrature operator for any monotone set Λ, based on the univariate quadrature formulas associated with the R-Leja sequence. If the forward solution map U ∋ y ↦ q(y) is (p, ε)-analytic for some 0 < p < 1 and ε > 0, then
(γ_ν |θ^P_ν|)_{ν∈F} ∈ ℓᵖ_m(F) and (γ_ν ‖ψ^P_ν‖_S)_{ν∈F} ∈ ℓᵖ_m(F).
Furthermore, there exist two sequences (Λ^{RL,1}_N)_{N≥1}, (Λ^{RL,2}_N)_{N≥1} of monotone index sets Λ^{RL,i}_N ⊂ F such that #Λ^{RL,i}_N ≤ N, i = 1, 2, and such that, for some C₁, C₂ > 0 independent of N and with s = 1/p − 1,
|I[Θ] − Q_{Λ^{RL,1}_N}[Θ]| ≤ C₁ N^{−s} and ‖I[Ψ] − Q_{Λ^{RL,2}_N}[Ψ]‖_S ≤ C₂ N^{−s}.

Leja Quadrature Points

Sketch of proof
- Univariate polynomial interpolation operator
I^k_RL(g) = Σ_{i=0}^{n_k} g(z_i^k) l_i^k,
with g : U → S and the Lagrange polynomials l_i^k(y) := Π_{j=0, j≠i}^{n_k} (y − z_j^k)/(z_i^k − z_j^k).
- Exactness:
(I − Q^k_RL)(g_k) = (I − I[I^k_RL])(g_k) = I(g_k − I^k_RL(g_k)) = 0 for all g_k ∈ P_k = span{y^j : j ∈ N₀, j ≤ k}.
- Norm bound:
‖Q^k_RL‖ = sup_{0≠g∈C(U;S)} ‖Q^k_RL(g)‖_S / ‖g‖_{L^∞(U;S)} ≤ sup_{0≠g∈C(U;S)} ‖I^k_RL(g)‖_{L^∞(U;S)} / ‖g‖_{L^∞(U;S)} ≤ 3(k + 1)² log(k + 1).
- Relate the quadrature error to the Legendre coefficients θ^P_ν of g.

Normalization Constant Z

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the normalization constant Z, with respect to the cardinality of the index set Λ_N, based on the sequence CC, with K = 1, 3, 9 and η_j ~ N(0, 1), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Normalization Constant Z

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the normalization constant Z, with respect to the cardinality of the index set Λ_N, based on the sequence CC, with K = 1, 3, 9 and η_j ~ N(0, 0.5²), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Normalization Constant Z

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the normalization constant Z, with respect to the cardinality of the index set Λ_N, based on the sequence CC, with K = 1, 3, 9 and η_j ~ N(0, 0.1²), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Normalization Constant Z

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the normalization constant Z, with respect to the cardinality of the index set Λ_N, based on the sequence RL, with K = 1, 3, 9 and η_j ~ N(0, 1), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Normalization Constant Z

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the normalization constant Z, with respect to the cardinality of the index set Λ_N, based on the sequence RL, with K = 1, 3, 9 and η_j ~ N(0, 0.5²), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Normalization Constant Z

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the normalization constant Z, with respect to the cardinality of the index set Λ_N, based on the sequence RL, with K = 1, 3, 9 and η_j ~ N(0, 0.1²), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Quantity Z′

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the quantity Z′, with respect to the cardinality of the index set Λ_N, based on the sequence CC, with K = 1, 3, 9 and η_j ~ N(0, 1), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Quantity Z′

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the quantity Z′, with respect to the cardinality of the index set Λ_N, based on the sequence CC, with K = 1, 3, 9 and η_j ~ N(0, 0.5²), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Quantity Z′

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the quantity Z′, with respect to the cardinality of the index set Λ_N, based on the sequence CC, with K = 1, 3, 9 and η_j ~ N(0, 0.1²), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Quantity Z′

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the quantity Z′, with respect to the cardinality of the index set Λ_N, based on the sequence RL, with K = 1, 3, 9 and η_j ~ N(0, 1), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Quantity Z′

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the quantity Z′, with respect to the cardinality of the index set Λ_N, based on the sequence RL, with K = 1, 3, 9 and η_j ~ N(0, 0.5²), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Quantity Z′

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the quantity Z′, with respect to the cardinality of the index set Λ_N, based on the sequence RL, with K = 1, 3, 9 and η_j ~ N(0, 0.1²), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Numerical Experiments

Model parametric parabolic problem:
∂_t q(t, x) − div(u(x) ∇q(t, x)) = 100 t x, (t, x) ∈ T × D,
q(0, x) = 0, x ∈ D,
q(t, 0) = q(t, 1) = 0, t ∈ T,
with
u(x, y) = ū + Σ_{j=1}^{128} y_j ψ_j, where ū = 1 and ψ_j = α_j χ_{D_j},
D_j = [(j − 1)/128, j/128], y = (y_j)_{j=1,…,128} and α_j = 0.6 j^{−ζ}, ζ = 2, 3, 4.

- Finite element method using continuous, piecewise linear ansatz functions in space, backward Euler scheme in time
- Uniform mesh with meshwidth h_T = h_D = 2⁻¹¹
- LAPACK's DPTSV routine

Normalization Constant Z (128 parameters)

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the normalization constant Z, with respect to the cardinality of the index set Λ_N, based on the sequence CC, with K = 1, 3, 9 and η_j ~ N(0, 1), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Normalization Constant Z (128 parameters)

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the normalization constant Z, with respect to the cardinality of the index set Λ_N, based on the sequence CC, with K = 1, 3, 9 and η_j ~ N(0, 0.5²), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Normalization Constant Z (128 parameters)

Figure: comparison of the estimated error and the actual error (computed against the reference solution) of the normalization constant Z, with respect to the cardinality of the index set Λ_N, based on the sequence CC, with K = 1, 3, 9 and η_j ~ N(0, 0.1²), for ζ = 2 (left), ζ = 3 (middle), ζ = 4 (right); h_T = h_D = 2⁻¹¹ for the reference and the adaptively computed solution.

Quantity Z' (128 parameters)

[Figure: three log-log panels titled "Z', \zeta=2", "Z', \zeta=3", "Z', \zeta=4", each with \eta_j \sim \mathcal{N}(0,1).] Comparison of the estimated error and the actual error (computed against the reference solution) of the quantity Z', with respect to the cardinality of the index set \Lambda_N, based on the sequence CC with K = 1, 3, 9, \eta_j \sim \mathcal{N}(0,1), for \zeta = 2 (left), \zeta = 3 (middle), \zeta = 4 (right); h_T = h_D = 2^{-11} for the reference and the adaptively computed solution.

Quantity Z' (128 parameters)

[Figure: three log-log panels titled "Z', \zeta=2", "Z', \zeta=3", "Z', \zeta=4", each with \eta_j \sim \mathcal{N}(0,0.5^2).] Comparison of the estimated error and the actual error (computed against the reference solution) of the quantity Z', with respect to the cardinality of the index set \Lambda_N, based on the sequence CC with K = 1, 3, 9, \eta_j \sim \mathcal{N}(0,0.5^2), for \zeta = 2 (left), \zeta = 3 (middle), \zeta = 4 (right); h_T = h_D = 2^{-11} for the reference and the adaptively computed solution.

Quantity Z' (128 parameters)

[Figure: three log-log panels titled "Z', \zeta=2", "Z', \zeta=3", "Z', \zeta=4", each with \eta_j \sim \mathcal{N}(0,0.1^2).] Comparison of the estimated error and the actual error (computed against the reference solution) of the quantity Z', with respect to the cardinality of the index set \Lambda_N, based on the sequence CC with K = 1, 3, 9, \eta_j \sim \mathcal{N}(0,0.1^2), for \zeta = 2 (left), \zeta = 3 (middle), \zeta = 4 (right); h_T = h_D = 2^{-11} for the reference and the adaptively computed solution.
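The two quantities estimated in the experiments are the normalization constant Z and the numerator integral whose ratio yields posterior expectations. As a minimal illustration of what the quadrature approximates, the sketch below evaluates both by a full tensor Gauss-Legendre rule over a few parameters with a uniform prior; the adaptive sparse Smolyak algorithm of the talk replaces this tensor rule in high dimension. The function name `bayes_ratio` and the generic forward map / data arguments are our own placeholders.

```python
import numpy as np

def bayes_ratio(forward, delta, sigma=0.1, dims=3, npts=9):
    """Illustration only: approximate
        Z  = int exp(-Phi(y)) pi_0(dy),
        Z' = int G(y) exp(-Phi(y)) pi_0(dy),
    with potential Phi(y) = |delta - G(y)|^2 / (2 sigma^2) and pi_0 uniform
    on [-1,1]^dims, by a full tensor Gauss-Legendre rule.  Returns Z and the
    posterior mean Z'/Z.  Feasible only for small `dims`; the talk's adaptive
    sparse quadrature is what makes 128 parameters tractable."""
    nodes, weights = np.polynomial.legendre.leggauss(npts)
    weights = weights / 2.0                     # uniform density 1/2 per coordinate
    Z = Zp = 0.0
    for idx in np.ndindex(*([npts] * dims)):    # all tensor-product nodes
        y = nodes[list(idx)]
        w = np.prod(weights[list(idx)])
        Gy = forward(y)                         # forward map evaluation
        theta = np.exp(-np.sum((delta - Gy) ** 2) / (2.0 * sigma**2))
        Z += w * theta
        Zp += w * Gy * theta
    return Z, Zp / Z
```

The cost of this tensor rule grows as npts^dims, which is exactly the curse of dimensionality that the sparse index sets \Lambda_N avoid.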


Conclusions and Outlook

Conclusions:
- New class of sparse, adaptive quadrature methods for Bayesian inverse problems for a broad class of operator equations
- Dimension-independent convergence rates depending only on the summability of the parametric operator
- Numerical confirmation of the predicted convergence rates

Outlook:
- Gaussian priors and lognormal coefficients
- Adaptive control of the discretization error of the forward problem with respect to the expected significance of its contribution to the Bayesian estimate
- Efficient treatment of large sets of data \delta

References
- Calvi J-P and Phung Van M 2011 On the Lebesgue constant of Leja sequences for the unit disk and its applications to multivariate interpolation Journal of Approximation Theory 163 608-622
- Calvi J-P and Phung Van M 2012 Lagrange interpolation at real projections of Leja sequences for the unit disk Proceedings of the American Mathematical Society 140 4271-4284
- Chkifa A 2013 On the Lebesgue constant of Leja sequences for the unit disk Journal of Approximation Theory 166 176-200
- Chkifa A, Cohen A and Schwab C 2013 High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs Foundations of Computational Mathematics 1-33
- Cohen A, DeVore R and Schwab C 2011 Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDEs Analysis and Applications 9 1-37
- Gerstner T and Griebel M 2003 Dimension-adaptive tensor-product quadrature Computing 71 65-87
- Hansen M and Schwab C 2013 Sparse adaptive approximation of high dimensional parametric initial value problems Vietnam J. Math. 1 1-35
- Hansen M, Schillings C and Schwab C 2012 Sparse approximation algorithms for high dimensional parametric initial value problems Proceedings 5th Conf. High Performance Computing, Hanoi
- Schillings C and Schwab C 2013 Sparse, adaptive Smolyak algorithms for Bayesian inverse problems Inverse Problems 29 065011
- Schillings C and Schwab C 2013 Sparsity in Bayesian inversion of parametric operator equations SAM-Report 2013-17
- Schwab C and Stuart A M 2012 Sparse deterministic approximation of Bayesian inverse problems Inverse Problems 28 045003