Sparse Quadrature Algorithms for Bayesian Inverse Problems
1 Sparse Quadrature Algorithms for Bayesian Inverse Problems
Claudia Schillings, Christoph Schwab
Pro*Doc Retreat Disentis 2013, Numerical Analysis and Scientific Computing, Disentis, 15 August 2013
Research supported by the European Research Council under FP7 Grant AdG.
C. Schillings (SAM), Sparsity in Bayesian Inversion, Pro*Doc, August 15, 2013
2 Outline
1. Bayesian Inversion of Parametric Operator Equations
2. Sparsity of the Forward Solution
3. Sparsity of the Posterior Density
4. Sparse Quadrature
5. Numerical Results: Model Parametric Parabolic Problem
6. Summary
3 Bayesian Inverse Problems (Stuart 2010)
Find the unknown data $u \in X$ from noisy observations $\delta = \mathcal{G}(u) + \eta$, where
- $X$ is a separable Banach space and $G : X \to X$ is the forward map.
Abstract operator equation: given $u \in X$, find $q \in X$ such that $A(u; q) = f$, with $A \in \mathcal{L}(X, Y')$ for reflexive Banach spaces $X, Y$, and corresponding bilinear form $a(v, w) := {}_{Y}\langle w, A v \rangle_{Y'}$ for $v \in X$, $w \in Y$.
- $O : X \to \mathbb{R}^K$ is a bounded, linear observation operator,
- $\mathcal{G} : X \to \mathbb{R}^K$ is the uncertainty-to-observation map, $\mathcal{G} = O \circ G$,
- $\eta \in \mathbb{R}^K$ is the observational noise, $\eta \sim N(0, \Gamma)$.
4 Bayesian Inverse Problems (Stuart 2010)
Find the unknown data $u \in X$ from noisy observations $\delta = \mathcal{G}(u) + \eta$, where
- $X$ is a separable Banach space and $G : X \to X$ is the forward map,
- $O : X \to \mathbb{R}^K$ is a bounded, linear observation operator,
- $\mathcal{G} : X \to \mathbb{R}^K$ is the uncertainty-to-observation map, $\mathcal{G} = O \circ G$,
- $\eta \in \mathbb{R}^K$ is the observational noise, $\eta \sim N(0, \Gamma)$.
Least-squares potential $\Phi : X \times \mathbb{R}^K \to \mathbb{R}$:
$$\Phi(u; \delta) := \tfrac{1}{2} (\delta - \mathcal{G}(u))^\top \Gamma^{-1} (\delta - \mathcal{G}(u)).$$
Reformulation of the forward problem with unknown stochastic input data as an infinite-dimensional, parametric deterministic problem.
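The misfit evaluation above is straightforward to sketch in code; the function name and the toy two-observation data below are illustrative, not from the talk:

```python
import numpy as np

def potential(G_u, delta, Gamma):
    """Least-squares potential Phi(u; delta) = 0.5 * r^T Gamma^{-1} r
    for the data residual r = delta - G(u)."""
    r = delta - G_u
    return 0.5 * r @ np.linalg.solve(Gamma, r)

# toy data: K = 2 observations, unit noise covariance Gamma = I
delta = np.array([1.0, 2.0])
G_u = np.array([0.5, 2.5])
val = potential(G_u, delta, np.eye(2))  # 0.5 * (0.25 + 0.25) = 0.25
```

Solving with `Gamma` rather than forming `Gamma^{-1}` explicitly keeps the evaluation stable for ill-conditioned noise covariances.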
5 Bayesian Inverse Problems (Stuart 2010)
Parametric representation of the unknown $u$:
$$u = u(y) := \bar{u} + \sum_{j \in J} y_j \psi_j \in X,$$
- $y = (y_j)_{j \in J}$ an i.i.d. sequence of real-valued random variables $y_j \sim U[-1, 1]$,
- $\bar{u}, \psi_j \in X$, and $J$ a finite or countably infinite index set.
Prior measure on the uncertain input data, on the measurable space $(U, \mathcal{B}) = \big([-1, 1]^J, \bigotimes_{j \in J} \mathcal{B}^1[-1, 1]\big)$:
$$\mu_0(dy) := \bigotimes_{j \in J} \tfrac{1}{2} \lambda^1(dy_j).$$
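Sampling this prior amounts to drawing i.i.d. uniform coordinates and assembling the affine expansion; a minimal sketch with an assumed toy basis of scaled indicator functions (all names and sizes here are illustrative):

```python
import numpy as np

def sample_prior_field(ubar, psi, rng):
    """Draw y_j ~ U[-1,1] i.i.d. and assemble u(y) = ubar + sum_j y_j psi_j.
    ubar: (n,) nominal field on a grid; psi: (J, n) parametric basis."""
    y = rng.uniform(-1.0, 1.0, size=psi.shape[0])
    return ubar + y @ psi, y

# toy basis: J = 4 indicator functions chi_{D_j}, amplitudes (j+1)^(-2)
n, J = 8, 4
psi = np.zeros((J, n))
for j in range(J):
    psi[j, 2 * j:2 * (j + 1)] = (j + 1) ** -2.0
u, y = sample_prior_field(np.ones(n), psi, np.random.default_rng(0))
```

With decaying amplitudes and a positive nominal field, every realization of the coefficient stays nonnegative, which the forward diffusion problem below requires.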
6 $(p, \epsilon)$-Analyticity
$(p, \epsilon)$-1 (well-posedness): For each $y \in U$, there exists a unique realization $u(y) \in X$ and a unique solution $q(y) \in X$ of the forward problem. This solution satisfies the a priori estimate
$$\forall y \in U : \|q(y)\|_X \le C_0(y),$$
where $U \ni y \mapsto C_0(y) \in L^1(U; \mu_0)$.
$(p, \epsilon)$-2 (analyticity): There exist $0 < p < 1$ and $b = (b_j)_{j \in J} \in \ell^p(J)$ such that for every $0 < \epsilon < 1$ there exist $C_\epsilon > 0$ and poly-radii $\rho = (\rho_j)_{j \in J}$ with $\rho_j > 1$ and $\sum_{j \in J} (\rho_j - 1)\, b_j \le \epsilon$, such that $U \ni y \mapsto q(y) \in X$ admits an analytic continuation to the open polyellipse $\mathcal{E}_\rho := \bigotimes_{j \in J} \mathcal{E}_{\rho_j} \subset \mathbb{C}^J$ with
$$\forall z \in \mathcal{E}_\rho : \|q(z)\|_X \le C_\epsilon.$$
7 Sparsity of the Forward Solution
Theorem (Chkifa, Cohen, DeVore and Schwab). Assume that the parametric forward solution map $q(y)$ admits a $(p, \epsilon)$-analytic extension to the polyellipse $\mathcal{E}_\rho \subset \mathbb{C}^J$. Then the Legendre series converges unconditionally,
$$q(y) = \sum_{\nu \in \mathcal{F}} q^P_\nu P_\nu(y) \quad \text{in } L^\infty(U, \mu_0; X),$$
with Legendre polynomials normalized so that $P_k(1) = 1$, $\|P_k\|_{L^\infty(-1,1)} = 1$, $k = 0, 1, \ldots$
There exists a $p$-summable, monotone envelope $\bar{q} = \{\bar{q}_\nu\}_{\nu \in \mathcal{F}}$, i.e. $\bar{q}_\nu := \sup_{\mu \ge \nu} \|q^P_\mu\|_X$ with $C(p, q) := \|\bar{q}\|_{\ell^p(\mathcal{F})} < \infty$, and a monotone set $\Lambda^P_N \subset \mathcal{F}$ corresponding to the $N$ largest terms of $\bar{q}$ with
$$\sup_{y \in U} \Big\| q(y) - \sum_{\nu \in \Lambda^P_N} q^P_\nu P_\nu(y) \Big\|_X \le C(p, q)\, N^{-(1/p - 1)}.$$
8 $(p, \epsilon)$-Analyticity of Affine Parametric Operator Families
Affine parametric operator families:
$$A(y) = A_0 + \sum_{j \in J} y_j A_j \in \mathcal{L}(X, Y').$$
Assumption A1: There exists $\mu_0 > 0$ such that
$$\inf_{0 \ne v \in X} \sup_{0 \ne w \in Y} \frac{a_0(v, w)}{\|v\|_X \|w\|_Y} \ge \mu_0, \qquad \inf_{0 \ne w \in Y} \sup_{0 \ne v \in X} \frac{a_0(v, w)}{\|v\|_X \|w\|_Y} \ge \mu_0.$$
Assumption A2: There exists a constant $0 < \kappa < 1$ such that
$$\sum_{j \in J} b_j \le \kappa < 1, \quad \text{where } b_j := \|A_0^{-1} A_j\|_{\mathcal{L}(X, X)}.$$
Assumption A3: For some $0 < p < 1$,
$$\|b\|^p_{\ell^p(J)} = \sum_{j \in J} b_j^p < \infty.$$
9 $(p, \epsilon)$-Analyticity of Affine Parametric Operator Families
Theorem (Cohen, DeVore and Schwab 2010). Under Assumptions A1-A3, for every realization $y \in U$ of the parameters, $A(y)$ is boundedly invertible, uniformly with respect to the parameter sequence $y \in U$: for the parametric bilinear form $a(y; \cdot, \cdot) : X \times Y \to \mathbb{R}$, the uniform inf-sup conditions hold with $\mu = (1 - \kappa)\mu_0$,
$$\forall y \in U : \inf_{0 \ne v \in X} \sup_{0 \ne w \in Y} \frac{a(y; v, w)}{\|v\|_X \|w\|_Y} \ge \mu, \qquad \inf_{0 \ne w \in Y} \sup_{0 \ne v \in X} \frac{a(y; v, w)}{\|v\|_X \|w\|_Y} \ge \mu.$$
The forward solution map $q : U \to X$, $q(y) := G(u(y))$, and the uncertainty-to-observation map $\mathcal{G} : U \to \mathbb{R}^K$ are globally Lipschitz and $(p, \epsilon)$-analytic with $0 < p < 1$ as in Assumption A3.
10 Examples
Stationary elliptic diffusion problem:
$$A_1(u; q) := -\nabla \cdot (u \nabla q) = f \ \text{in } D, \qquad q = 0 \ \text{on } \partial D,$$
with $X = Y = V = H^1_0(D)$.
Time-dependent diffusion:
$$A_2(y) := (\partial_t + A_1(y), \iota_0),$$
where $\iota_0$ denotes the time $t = 0$ trace, and
$$X = L^2(0, T; V) \cap H^1(0, T; V'), \qquad Y = L^2(0, T; V) \times H.$$
11 Bayesian Inverse Problem
Theorem (Schwab and Stuart 2011). Assume that $\mathcal{G}(u)$ is bounded and continuous in $u = \bar{u} + \sum_{j \in J} y_j \psi_j$. Then $\mu^\delta(dy)$, the distribution of $y \in U$ given $\delta$, is absolutely continuous with respect to $\mu_0(dy)$, i.e.
$$\frac{d\mu^\delta}{d\mu_0}(y) = \frac{1}{Z} \Theta(y),$$
with the parametric Bayesian posterior density $\Theta$ given by
$$\Theta(y) = \exp\big(-\Phi(u; \delta)\big)\big|_{u = \bar{u} + \sum_{j \in J} y_j \psi_j},$$
and the normalization constant $Z = \int_U \Theta(y)\, \mu_0(dy)$.
12 Bayesian Inverse Problem
Expectation of a quantity of interest $\phi : X \to S$:
$$\mathbb{E}^{\mu^\delta}[\phi(u)] = Z^{-1} \int_U \exp\big(-\Phi(u; \delta)\big)\, \phi(u)\big|_{u = \bar{u} + \sum_{j \in J} y_j \psi_j}\, \mu_0(dy) =: Z'/Z,$$
with $Z = \int_U \exp\big(-\tfrac{1}{2} (\delta - \mathcal{G}(u))^\top \Gamma^{-1} (\delta - \mathcal{G}(u))\big)\, \mu_0(dy)$.
- Reformulation of the forward problem with unknown stochastic input data as an infinite-dimensional, parametric deterministic problem.
- Parametric, deterministic representation of the derivative of the posterior measure with respect to the prior $\mu_0$.
- Approximation of $Z$ and $Z'$ to compute the expectation of the QoI under the posterior given the data $\delta$.
- Efficient algorithm to approximate the conditional expectations given the data, with dimension-independent rates of convergence.
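The ratio structure $Z'/Z$ can be illustrated with plain prior sampling standing in for the sparse quadrature developed in the talk; the toy linear forward map and all names below are assumptions of this sketch:

```python
import numpy as np

def posterior_expectation(phi, G, delta, Gamma_inv, n, J, rng):
    """Estimate E^{mu_delta}[phi] = Z'/Z by sampling the prior U[-1,1]^J:
    plain Monte Carlo here plays the role of the sparse quadrature Q_Lambda."""
    num = den = 0.0
    for _ in range(n):
        y = rng.uniform(-1.0, 1.0, size=J)
        r = delta - G(y)
        theta = np.exp(-0.5 * r @ Gamma_inv @ r)  # posterior density Theta(y)
        num += theta * phi(y)                     # accumulates Z'
        den += theta                              # accumulates Z
    return num / den

# toy problem: G(y) = y_1, delta = 0; the posterior is symmetric, so E[y_1] = 0
rng = np.random.default_rng(1)
est = posterior_expectation(lambda y: y[0], lambda y: y[:1],
                            np.zeros(1), np.eye(1), 20000, 1, rng)
```

Both integrals share the same evaluations of $\Theta$, which is why the talk approximates $Z$ and $Z'$ with one adaptive index set each.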
13 Sparsity of the Posterior Density
Theorem (C.S. and Ch. Schwab 2013). Assume that the forward solution map $U \ni y \mapsto q(y)$ is $(p, \epsilon)$-analytic for some $0 < p < 1$. Then the Bayesian posterior density $\Theta(y)$ is, as a function of the parameter $y$, likewise $(p, \epsilon)$-analytic, with the same $p$ and the same $\epsilon$.
N-term approximation result:
$$\sup_{y \in U} \Big| \Theta(y) - \sum_{\nu \in \Lambda^P_N} \theta^P_\nu P_\nu(y) \Big| \le N^{-s}\, \|\theta^P\|_{\ell^p_m(\mathcal{F})}, \qquad s := \frac{1}{p} - 1.$$
Adaptive Smolyak quadrature algorithm with convergence rates depending only on the summability of the parametric operator.
14 Univariate Quadrature
Univariate quadrature operators of the form
$$Q_k(g) = \sum_{i=0}^{n_k} w^k_i\, g(z^k_i)$$
with $g : [-1, 1] \to S$ for some Banach space $S$, where
- $(Q_k)_{k \ge 0}$ is a sequence of univariate quadrature formulas,
- $(z^k_j)_{j=0}^{n_k} \subset [-1, 1]$ are the quadrature points, with $z^k_0 = 0$ for all $k$,
- $w^k_j$, $0 \le j \le n_k$, $k \in \mathbb{N}_0$, are the quadrature weights.
Assumption 1:
(i) $(I - Q_k)(g_k) = 0$ for all $g_k \in \mathcal{P}_k = \mathrm{span}\{y^j : j \in \mathbb{N}_0, j \le k\}$, with $I(g_k) = \int_{[-1,1]} g_k(y)\, \tfrac{1}{2}\lambda^1(dy)$;
(ii) $w^k_j > 0$ for all $0 \le j \le n_k$, $k \in \mathbb{N}_0$.
15 Univariate Quadrature
Univariate quadrature operators of the form
$$Q_k(g) = \sum_{i=0}^{n_k} w^k_i\, g(z^k_i)$$
with $g : [-1, 1] \to S$ for some Banach space $S$, quadrature points $(z^k_j)_{j=0}^{n_k} \subset [-1, 1]$ with $z^k_0 = 0$, and quadrature weights $w^k_j$.
Univariate quadrature difference operator, with $Q_{-1} := 0$ and $z^0_0 = 0$, $w^0_0 = 1$:
$$\Delta_j = Q_j - Q_{j-1}, \qquad j \ge 0.$$
16 Univariate Quadrature
Univariate quadrature operator rewritten as a telescoping sum:
$$Q_k = \sum_{j=0}^{k} \Delta_j,$$
with $Z_k = \{z^k_j : 0 \le j \le n_k\} \subset [-1, 1]$ the set of points corresponding to $Q_k$.
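The telescoping identity holds for any univariate family; as a check, the sketch below uses Gauss-Legendre rules (an assumption of this sketch, not the talk's Clenshaw-Curtis or R-Leja choice) scaled to the uniform measure $\tfrac{1}{2}\lambda^1$ on $[-1, 1]$:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def Q(k, g):
    """Univariate quadrature Q_k(g) ~ int_{-1}^{1} g(y) (1/2) dy,
    here a Gauss-Legendre rule with k+1 nodes."""
    z, w = leggauss(k + 1)
    return 0.5 * np.dot(w, g(z))  # factor 1/2: uniform measure on [-1, 1]

def Delta(j, g):
    """Difference operator Delta_j = Q_j - Q_{j-1}, with Q_{-1} := 0."""
    return Q(j, g) - (Q(j - 1, g) if j > 0 else 0.0)

g, k = np.cos, 5
telescoped = sum(Delta(j, g) for j in range(k + 1))  # equals Q_k(g)
```

The exact value here is $\tfrac{1}{2}\int_{-1}^{1}\cos y\,dy = \sin 1$, which a six-node rule reproduces to high accuracy.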
17 Tensorization
Tensorized multivariate operators:
$$Q_\nu = \bigotimes_{j \ge 1} Q_{\nu_j}, \qquad \Delta_\nu = \bigotimes_{j \ge 1} \Delta_{\nu_j},$$
with associated set of multivariate points $Z_\nu = \times_{j \ge 1} Z_{\nu_j} \subset U$.
- If $\nu = 0_{\mathcal{F}}$, then $\Delta_\nu g = Q_\nu g = g(0_{\mathcal{F}})$.
- If $0_{\mathcal{F}} \ne \nu \in \mathcal{F}$, then with $\hat{\nu} = (\nu_j)_{j \ne i}$,
$$Q_\nu g = Q_{\nu_i}\Big(t \mapsto \bigotimes_{j \ne i} Q_{\hat{\nu}_j}\, g^t\Big), \qquad \Delta_\nu g = \Delta_{\nu_i}\Big(t \mapsto \bigotimes_{j \ne i} \Delta_{\hat{\nu}_j}\, g^t\Big), \qquad i \in I_\nu,$$
where $g^t$ is the function defined by $g^t(\hat{y}) = g(y)$ with $y = (\ldots, y_{i-1}, t, y_{i+1}, \ldots)$ for $i > 1$ and $y = (t, y_2, \ldots)$ for $i = 1$.
18 Sparse Quadrature Operator
For any finite monotone set $\Lambda \subset \mathcal{F}$, the quadrature operator is defined by
$$Q_\Lambda = \sum_{\nu \in \Lambda} \Delta_\nu = \sum_{\nu \in \Lambda} \bigotimes_{j \ge 1} \Delta_{\nu_j},$$
with associated collocation grid $Z_\Lambda = \bigcup_{\nu \in \Lambda} Z_\nu$.
Theorem. For any monotone index set $\Lambda_N \subset \mathcal{F}$, the sparse quadrature $Q_{\Lambda_N}$ is exact for any polynomial $g \in \mathcal{P}_{\Lambda_N}$, i.e.
$$Q_{\Lambda_N}(g) = I(g), \qquad g \in \mathcal{P}_{\Lambda_N},$$
with $\mathcal{P}_{\Lambda_N} = \mathrm{span}\{y^\nu : \nu \in \Lambda_N\}$ and $I(g) = \int_U g(y)\, \mu_0(dy)$.
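The exactness theorem can be verified numerically in two dimensions. The sketch below assembles $\Delta_\nu$ from full tensor rules via the inclusion-exclusion identity $\Delta_\nu = \sum_{e \in \{0,1\}^d} (-1)^{|e|} Q_{\nu - e}$; Gauss-Legendre rules and the toy monotone set are assumptions of this sketch:

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import leggauss

def tensor_Q(nu, g):
    """Full tensor quadrature with nu_j + 1 Gauss-Legendre nodes per direction,
    scaled to the uniform probability measure on [-1, 1]^d."""
    rules = [leggauss(k + 1) for k in nu]
    total = 0.0
    for combo in product(*[list(zip(z, w)) for z, w in rules]):
        y = np.array([c[0] for c in combo])
        wt = np.prod([0.5 * c[1] for c in combo])
        total += wt * g(y)
    return total

def Delta_nu(nu, g):
    """Tensorized difference Delta_nu = prod_j (Q_{nu_j} - Q_{nu_j - 1})."""
    total = 0.0
    for e in product([0, 1], repeat=len(nu)):
        mu = tuple(n - s for n, s in zip(nu, e))
        if min(mu) < 0:
            continue  # convention Q_{-1} = 0
        total += (-1) ** sum(e) * tensor_Q(mu, g)
    return total

def sparse_Q(Lambda, g):
    """Sparse quadrature Q_Lambda = sum_{nu in Lambda} Delta_nu."""
    return sum(Delta_nu(nu, g) for nu in Lambda)

Lambda = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]  # a toy monotone set
g = lambda y: y[0] ** 2 + y[0] * y[1]              # polynomial in P_Lambda
val = sparse_Q(Lambda, g)                          # exact: E[y1^2] = 1/3
```

The monomials $(2,0)$ and $(1,1)$ of $g$ both lie in $\Lambda$, so the sparse rule integrates $g$ exactly.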
19 Convergence Rates for Adaptive Smolyak Integration
Theorem. Assume that the forward solution map $U \ni y \mapsto q(y)$ is $(p, \epsilon)$-analytic for some $0 < p < 1$. Then there exist two sequences $(\Lambda^1_N)_{N \ge 1}$, $(\Lambda^2_N)_{N \ge 1}$ of monotone index sets $\Lambda^{1,2}_N \subset \mathcal{F}$ such that $\#\Lambda^{1,2}_N \le N$ and
$$|I[\Theta] - Q_{\Lambda^1_N}[\Theta]| \le C_1 N^{-s}, \qquad \|I[\Psi] - Q_{\Lambda^2_N}[\Psi]\|_X \le C_2 N^{-s},$$
with $s = 1/p - 1$, $I[\Theta] = \int_U \Theta(y)\, \mu_0(dy)$, $I[\Psi] = \int_U \Psi(y)\, \mu_0(dy)$, and $C_1, C_2 > 0$ independent of $N$.
C.S. and Ch. Schwab, Sparsity in Bayesian Inversion of Parametric Operator Equations.
20 Convergence Rates for Adaptive Smolyak Integration
Sketch of proof:
- Relate the quadrature error to the Legendre coefficients: for any monotone set $\Lambda \subset \mathcal{F}$,
$$|I(\Theta) - Q_\Lambda(\Theta)| \le 2 \sum_{\nu \notin \Lambda} \gamma_\nu\, |\theta^P_\nu| \quad \text{and} \quad \|I(\Psi) - Q_\Lambda(\Psi)\|_X \le 2 \sum_{\nu \notin \Lambda} \gamma_\nu\, \|\psi^P_\nu\|_X,$$
where $\gamma_\nu := \prod_{j \in J} (1 + \nu_j)^2$.
- $(\gamma_\nu \theta^P_\nu)_{\nu \in \mathcal{F}} \in \ell^p_m(\mathcal{F})$ and $(\gamma_\nu \|\psi^P_\nu\|_X)_{\nu \in \mathcal{F}} \in \ell^p_m(\mathcal{F})$.
- Hence there exists a sequence $(\Lambda_N)_{N \ge 1}$ of monotone sets $\Lambda_N \subset \mathcal{F}$, $\#\Lambda_N \le N$, such that the Smolyak quadrature converges with order $1/p - 1$.
21 Adaptive Construction of the Monotone Index Set
Successive identification of the $N$ largest contributions
$$\Delta_\nu(\Theta) = \bigotimes_{j \ge 1} \Delta_{\nu_j}(\Theta), \qquad \nu \in \mathcal{F}.$$
A. Chkifa, A. Cohen and Ch. Schwab, High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs.
Set of reduced neighbors:
$$\mathcal{N}(\Lambda) := \{\nu \notin \Lambda : \nu - e_j \in \Lambda \ \forall j \in I_\nu \ \text{and} \ \nu_j = 0 \ \forall j > j(\Lambda) + 1\},$$
with $j(\Lambda) = \max\{j : \nu_j > 0 \text{ for some } \nu \in \Lambda\}$ and $I_\nu = \{j \in \mathbb{N} : \nu_j \ne 0\}$.
22 Adaptive Construction of the Monotone Index Set
1: function ASG
2:   Set Λ_1 = {0}, k = 1 and compute Δ_0(Θ).
3:   Determine the set of reduced neighbors N(Λ_1).
4:   Compute Δ_ν(Θ), ν ∈ N(Λ_1).
5:   while Σ_{ν ∈ N(Λ_k)} ‖Δ_ν(Θ)‖ > tol do
6:     Select ν ∈ N(Λ_k) with largest ‖Δ_ν(Θ)‖ and set Λ_{k+1} = Λ_k ∪ {ν}.
7:     Determine the set of reduced neighbors N(Λ_{k+1}).
8:     Compute Δ_ν(Θ), ν ∈ N(Λ_{k+1}).
9:     Set k = k + 1.
10:  end while
11: end function
T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature, Computing, 2003.
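The loop above can be sketched for index tuples of a fixed finite length; the synthetic decaying indicator standing in for ‖Δ_ν(Θ)‖ is an assumption of this sketch:

```python
def reduced_neighbors(Lambda, d):
    """Reduced neighbors N(Lambda) for indices of length d: nu not in Lambda
    whose backward neighbors nu - e_j (j active) all lie in Lambda, and with
    nu_j = 0 beyond the first inactive dimension (the slide's j(Lambda) + 1)."""
    jmax = max((j for nu in Lambda for j in range(d) if nu[j] > 0), default=-1)
    out = set()
    for nu in Lambda:
        for j in range(min(jmax + 2, d)):
            mu = nu[:j] + (nu[j] + 1,) + nu[j + 1:]
            if mu in Lambda or mu in out:
                continue
            if all(mu[:k] + (mu[k] - 1,) + mu[k + 1:] in Lambda
                   for k in range(d) if mu[k] > 0):
                out.add(mu)
    return out

def asg(contribution, d, tol, max_iter=50):
    """Dimension-adaptive Smolyak loop (Gerstner-Griebel style sketch);
    contribution(nu) models the error indicator ||Delta_nu(Theta)||."""
    Lambda = {(0,) * d}
    err = {nu: contribution(nu) for nu in reduced_neighbors(Lambda, d)}
    for _ in range(max_iter):
        if not err or sum(err.values()) <= tol:
            break
        nu = max(err, key=err.get)  # largest estimated contribution first
        Lambda.add(nu)
        del err[nu]
        for mu in reduced_neighbors(Lambda, d):
            if mu not in err:
                err[mu] = contribution(mu)
    return Lambda

# synthetic indicator decaying geometrically in each dimension
Lam = asg(lambda nu: 2.0 ** (-sum((j + 1) * n for j, n in enumerate(nu))),
          d=3, tol=0.05)
```

Since only reduced neighbors (all backward neighbors already present) are ever added, the returned set is monotone by construction.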
23 Numerical Experiments
Model parametric parabolic problem:
$$\partial_t q(t, x) - \mathrm{div}\big(u(x)\, \nabla q(t, x)\big) = 100\, t x, \quad (t, x) \in T \times D,$$
$$q(0, x) = 0, \ x \in D, \qquad q(t, 0) = q(t, 1) = 0, \ t \in T,$$
with
$$u(x, y) = \bar{u} + \sum_{j=1}^{64} y_j \psi_j, \qquad \bar{u} = 1, \quad \psi_j = \alpha_j \chi_{D_j},$$
where $D_j = [(j-1)/64, j/64]$, $y = (y_j)_{j=1,\ldots,64}$ and $\alpha_j \propto j^{-\zeta}$, $\zeta = 2, 3, 4$.
- Finite element method using continuous, piecewise linear ansatz functions in space, backward Euler scheme in time.
- Uniform mesh with meshwidth $h_T = h_D = 2^{-11}$.
- LAPACK's DPTSV routine.
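A small dense sketch of this discretization (P1 elements with a lumped mass matrix, backward Euler) is below; the coarse mesh and dense solve are assumptions of the sketch, whereas the talk's runs use $h_T = h_D = 2^{-11}$ and LAPACK's DPTSV for the SPD tridiagonal systems:

```python
import numpy as np

def solve_parabolic(u_coeff, nx=32, nt=32, T=1.0):
    """Backward-Euler / P1-FEM sketch for q_t - (u(x) q_x)_x = 100*t*x
    on (0,T) x (0,1) with q = 0 on the boundary and q(0,.) = 0."""
    h, dt = 1.0 / nx, T / nt
    x = np.linspace(0.0, 1.0, nx + 1)
    um = u_coeff((x[:-1] + x[1:]) / 2)   # diffusion coefficient per element
    n = nx - 1                           # number of interior nodes
    A = np.zeros((n, n))                 # P1 stiffness matrix (tridiagonal)
    for i in range(n):
        A[i, i] = (um[i] + um[i + 1]) / h
        if i + 1 < n:
            A[i, i + 1] = A[i + 1, i] = -um[i + 1] / h
    M = h * np.eye(n)                    # lumped mass matrix
    q = np.zeros(n)
    for k in range(1, nt + 1):
        t = k * dt
        f = 100.0 * t * x[1:-1] * h      # load vector, lumped quadrature
        q = np.linalg.solve(M + dt * A, M @ q + dt * f)
    return x, np.concatenate(([0.0], q, [0.0]))

x, q = solve_parabolic(lambda xm: np.ones_like(xm))
```

With a nonnegative source and an M-matrix system, the backward-Euler iterates stay nonnegative, a quick sanity check on the assembly.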
24 Numerical Experiments
Find the unknown data $u$ for given (noisy) data $\delta = \mathcal{G}(u) + \eta$.
Expectation of interest $Z'/Z$:
$$Z' = \int_U \exp\big(-\Phi(u; \delta)\big)\, \phi(u)\big|_{u = \bar{u} + \sum_{j=1}^{64} y_j \psi_j}\, \mu_0(dy), \qquad Z = \int_U \exp\big(-\Phi(u; \delta)\big)\big|_{u = \bar{u} + \sum_{j=1}^{64} y_j \psi_j}\, \mu_0(dy).$$
The observation operator $O$ consists of system responses at $K$ observation points in $T \times D$, at $t_i = i\, 2^{-N_{K,T}}$, $i = 1, \ldots, 2^{N_{K,T}} - 1$, and $x_j = j\, 2^{-N_{K,D}}$, $j = 1, \ldots, 2^{N_{K,D}} - 1$, with $o_k(\cdot, \cdot) = \delta(t - t_k)\, \delta(x - x_k)$, where
- $K = 1$: $N_{K,D} = 1$, $N_{K,T} = 1$; $K = 3$: $N_{K,D} = 2$, $N_{K,T} = 1$; $K = 9$: $N_{K,D} = 2$, $N_{K,T} = 2$;
- $\mathcal{G} : X \to \mathbb{R}^K$ with $K = 1, 3, 9$, and $\phi(u) = \mathcal{G}(u)$;
- $\eta = (\eta_j)_{j=1,\ldots,K}$ i.i.d. with $\eta_j \sim N(0, 1)$, $\eta_j \sim N(0, 0.5^2)$ and $\eta_j \sim N(0, 0.1^2)$.
25 Numerical Experiments
Quadrature points, Clenshaw-Curtis (CC), with $n_0 = 1$ and $n_k = 2^k + 1$ for $k \ge 1$:
$$z^k_j = \cos\Big(\frac{\pi j}{n_k - 1}\Big), \quad j = 0, \ldots, n_k - 1, \ \text{if } n_k > 1, \qquad z^k_0 = 0, \ \text{if } n_k = 1.$$
R-Leja sequence (RL).
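The Clenshaw-Curtis points are nested across levels, which is what makes them attractive for the telescoping construction; a short sketch:

```python
import numpy as np

def cc_points(k):
    """Nested Clenshaw-Curtis points on [-1, 1]: n_0 = 1 node (the origin),
    n_k = 2^k + 1 nodes for k >= 1, z_j = cos(pi * j / (n_k - 1))."""
    if k == 0:
        return np.array([0.0])
    n = 2 ** k + 1
    return np.cos(np.pi * np.arange(n) / (n - 1))

# nestedness: every point of Z_2 reappears in Z_3
z2, z3 = cc_points(2), cc_points(3)
nested = all(np.isclose(z3, z).any() for z in z2)
```

Nestedness means each level refinement reuses all previous forward-model evaluations.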
26 Numerical Experiments
Quadrature points, R-Leja sequence (RL): projection on $[-1, 1]$ of a Leja sequence for the complex unit disk initiated at $i$:
$$z^k_0 = 0, \quad z^k_1 = 1, \quad z^k_2 = -1,$$
and for $j = 3, \ldots, n_k$:
$$z^k_j = \Re(\hat{z}), \quad \hat{z} = \mathrm{argmax}_{|z| \le 1} \prod_{l=1}^{j-1} |z - \hat{z}_l|, \ \text{if } j \text{ odd}, \qquad z^k_j = -z^k_{j-1}, \ \text{if } j \text{ even},$$
with $n_k = 2^k + 1$ for $k \ge 0$.
J.-P. Calvi and M. Phung Van, On the Lebesgue constant of Leja sequences for the unit disk and its applications to multivariate interpolation, Journal of Approximation Theory.
J.-P. Calvi and M. Phung Van, Lagrange interpolation at real projections of Leja sequences for the unit disk, Proceedings of the American Mathematical Society.
A. Chkifa, On the Lebesgue constant of Leja sequences for the unit disk, Journal of Approximation Theory.
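The construction can be approximated numerically by running the greedy Leja maximization on a fine angular grid of the unit circle and projecting; the grid-based argmax is an approximation assumed by this sketch, not the exact construction:

```python
import numpy as np

def r_leja(n, grid=40000):
    """First n R-Leja points (sketch): greedily build a Leja sequence on the
    unit circle initiated at i, maximizing the distance product over a fine
    angular grid, then keep each new real part (the projection onto [-1,1])."""
    cand = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False))
    prod = np.abs(cand - 1j)       # distances to the initial point i
    pts = [0.0]                    # Re(i) = 0
    while len(pts) < n:
        z = cand[np.argmax(prod)]  # next Leja point on the circle
        prod *= np.abs(cand - z)
        x = z.real
        if not any(np.isclose(x, p, atol=1e-9) for p in pts):
            pts.append(x)
    return np.array(pts)

pts = r_leja(4)  # starts 0, then +-1, then a point of modulus sqrt(2)/2
```

The first points reproduce the slide's $0, 1, -1$ (up to the order of the tie between $\pm 1$), followed by a point of modulus $\sqrt{2}/2$.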
27 Leja Quadrature Points
Proposition. Let $Q^{RL}_\Lambda$ denote the sparse quadrature operator for any monotone set $\Lambda$ based on the univariate quadrature formulas associated with the R-Leja sequence. If the forward solution map $U \ni y \mapsto q(y)$ is $(p, \epsilon)$-analytic for some $0 < p < 1$ and $\epsilon > 0$, then
$$(\gamma_\nu \theta^P_\nu)_{\nu \in \mathcal{F}} \in \ell^p_m(\mathcal{F}) \quad \text{and} \quad (\gamma_\nu \|\psi^P_\nu\|_S)_{\nu \in \mathcal{F}} \in \ell^p_m(\mathcal{F}).$$
Furthermore, there exist two sequences $(\Lambda^{RL,1}_N)_{N \ge 1}$, $(\Lambda^{RL,2}_N)_{N \ge 1}$ of monotone index sets $\Lambda^{RL,i}_N \subset \mathcal{F}$ such that $\#\Lambda^{RL,i}_N \le N$, $i = 1, 2$, and such that, for some $C_1, C_2 > 0$ independent of $N$ and with $s = 1/p - 1$,
$$|I[\Theta] - Q_{\Lambda^{RL,1}_N}[\Theta]| \le C_1 N^{-s} \quad \text{and} \quad \|I[\Psi] - Q_{\Lambda^{RL,2}_N}[\Psi]\|_S \le C_2 N^{-s}.$$
28 Leja Quadrature Points
Sketch of proof: Univariate polynomial interpolation operator
$$I^k_{RL}(g) = \sum_{i=0}^{n_k} g(z^k_i)\, l^k_i,$$
with $g : U \to S$ and the Lagrange polynomials $l^k_i(y) := \prod_{j=0,\, j \ne i}^{n_k} \frac{y - z^k_j}{z^k_i - z^k_j}$. Then
$$(I - Q^k_{RL})(g_k) = (I - I[I^k_{RL}])(g_k) = I\big(g_k - I^k_{RL}(g_k)\big) = 0 \qquad \forall g_k \in \mathcal{P}_k = \mathrm{span}\{y^j : j \in \mathbb{N}_0, j \le k\}.$$
29 Leja Quadrature Points
Sketch of proof (continued): With the univariate interpolation operator $I^k_{RL}$ as above and $(I - Q^k_{RL})(g_k) = 0$ for $g_k \in \mathcal{P}_k$, the quadrature operator norm is bounded by the Lebesgue constant:
$$\|Q^k_{RL}\| = \sup_{0 \ne g \in C(U; S)} \frac{\|Q^k_{RL}(g)\|_S}{\|g\|_{L^\infty(U; S)}} \le \sup_{0 \ne g \in C(U; S)} \frac{\|I^k_{RL}(g)\|_{L^\infty(U; S)}}{\|g\|_{L^\infty(U; S)}} \le 3(k + 1)^2 \log(k + 1).$$
30 Leja Quadrature Points
Sketch of proof (continued): With $(I - Q^k_{RL})(g_k) = 0$ for $g_k \in \mathcal{P}_k$ and $\|Q^k_{RL}\| \le 3(k + 1)^2 \log(k + 1)$, relate the quadrature error to the Legendre coefficients $\theta^P_\nu$ of $g$.
31 Normalization Constant Z
Figure: Comparison of the estimated error and actual error (against a reference solution) of the normalization constant Z with respect to the cardinality of the index set Λ_N, based on the sequence CC with K = 1, 3, 9, η_j ~ N(0, 1), and ζ = 2 (l.), ζ = 3 (m.), ζ = 4 (r.); h_T = h_D = 2^{-11} for the reference and the adaptively computed solution.
32 Normalization Constant Z
Figure: As above, sequence CC, η_j ~ N(0, 0.5^2).
33 Normalization Constant Z
Figure: As above, sequence CC, η_j ~ N(0, 0.1^2).
34 Normalization Constant Z
Figure: As above, sequence RL, η_j ~ N(0, 1).
35 Normalization Constant Z
Figure: As above, sequence RL, η_j ~ N(0, 0.5^2).
36 Normalization Constant Z
Figure: As above, sequence RL, η_j ~ N(0, 0.1^2).
37 Quantity Z'
Figure: Comparison of the estimated error and actual error (against a reference solution) of the quantity Z' with respect to the cardinality of the index set Λ_N, based on the sequence CC with K = 1, 3, 9, η_j ~ N(0, 1), and ζ = 2 (l.), ζ = 3 (m.), ζ = 4 (r.); h_T = h_D = 2^{-11} for the reference and the adaptively computed solution.
38 Quantity Z'
Figure: As above, sequence CC, η_j ~ N(0, 0.5^2).
39 Quantity Z'
Figure: As above, sequence CC, η_j ~ N(0, 0.1^2).
40 Quantity Z'
Figure: As above, sequence RL, η_j ~ N(0, 1).
41 Quantity Z'
Figure: As above, sequence RL, η_j ~ N(0, 0.5^2).
42 Quantity Z'
Figure: As above, sequence RL, η_j ~ N(0, 0.1^2).
43 Numerical Experiments
Model parametric parabolic problem:
$$\partial_t q(t, x) - \mathrm{div}\big(u(x)\, \nabla q(t, x)\big) = 100\, t x, \quad (t, x) \in T \times D,$$
$$q(0, x) = 0, \ x \in D, \qquad q(t, 0) = q(t, 1) = 0, \ t \in T,$$
with
$$u(x, y) = \bar{u} + \sum_{j=1}^{128} y_j \psi_j, \qquad \bar{u} = 1, \quad \psi_j = \alpha_j \chi_{D_j},$$
where $D_j = [(j-1)/128, j/128]$, $y = (y_j)_{j=1,\ldots,128}$ and $\alpha_j \propto j^{-\zeta}$, $\zeta = 2, 3, 4$.
- Finite element method using continuous, piecewise linear ansatz functions in space, backward Euler scheme in time.
- Uniform mesh with meshwidth $h_T = h_D = 2^{-11}$.
- LAPACK's DPTSV routine.
44 Normalization Constant Z (128 parameters)
Figure: Comparison of the estimated error and actual error (against a reference solution) of the normalization constant Z with respect to the cardinality of the index set Λ_N, based on the sequence CC with K = 1, 3, 9, η_j ~ N(0, 1), and ζ = 2 (l.), ζ = 3 (m.), ζ = 4 (r.); h_T = h_D = 2^{-11} for the reference and the adaptively computed solution.
45 Normalization Constant Z (128 parameters)
Figure: As above, η_j ~ N(0, 0.5^2).
46 Normalization Constant Z (128 parameters)
Figure: As above, η_j ~ N(0, 0.1^2).
47 Quantity Z' (128 parameters)
Figure: Comparison of the estimated error and actual error (against a reference solution) of the quantity Z' with respect to the cardinality of the index set Λ_N, based on the sequence CC with K = 1, 3, 9, η_j ~ N(0, 1), and ζ = 2 (l.), ζ = 3 (m.), ζ = 4 (r.); h_T = h_D = 2^{-11} for the reference and the adaptively computed solution.
48 Quantity Z' (128 parameters)
Figure: As above, η_j ~ N(0, 0.5^2).
49 Quantity Z' (128 parameters)
Figure: As above, η_j ~ N(0, 0.1^2).
50 Conclusions and Outlook
- New class of sparse, adaptive quadrature methods for Bayesian inverse problems for a broad class of operator equations.
- Dimension-independent convergence rates depending only on the summability of the parametric operator.
- Numerical confirmation of the predicted convergence rates.
51 Conclusions and Outlook
- New class of sparse, adaptive quadrature methods for Bayesian inverse problems for a broad class of operator equations.
- Dimension-independent convergence rates depending only on the summability of the parametric operator.
- Numerical confirmation of the predicted convergence rates.
Outlook:
- Gaussian priors and lognormal coefficients.
- Adaptive control of the discretization error of the forward problem with respect to the expected significance of its contribution to the Bayesian estimate.
- Efficient treatment of large sets of data δ.
52 References
- Calvi J-P and Phung Van M 2011 On the Lebesgue constant of Leja sequences for the unit disk and its applications to multivariate interpolation, Journal of Approximation Theory
- Calvi J-P and Phung Van M 2012 Lagrange interpolation at real projections of Leja sequences for the unit disk, Proceedings of the American Mathematical Society
- Chkifa A 2013 On the Lebesgue constant of Leja sequences for the unit disk, Journal of Approximation Theory
- Chkifa A, Cohen A and Schwab C 2013 High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs, Foundations of Computational Mathematics 1-33
- Cohen A, DeVore R and Schwab C 2011 Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDEs, Analysis and Applications
- Gerstner T and Griebel M 2003 Dimension-adaptive tensor-product quadrature, Computing
- Hansen M and Schwab C 2013 Sparse adaptive approximation of high dimensional parametric initial value problems, Vietnam J. Math.
- Hansen M, Schillings C and Schwab C 2012 Sparse approximation algorithms for high dimensional parametric initial value problems, Proceedings 5th Conf. High Performance Computing, Hanoi
- Schillings C and Schwab C 2013 Sparse, adaptive Smolyak algorithms for Bayesian inverse problems, Inverse Problems
- Schillings C and Schwab C 2013 Sparsity in Bayesian inversion of parametric operator equations, SAM-Report
- Schwab C and Stuart A M 2012 Sparse deterministic approximation of Bayesian inverse problems, Inverse Problems
More informationA sparse grid stochastic collocation method for elliptic partial differential equations with random input data
A sparse grid stochastic collocation method for elliptic partial differential equations with random input data F. Nobile R. Tempone C. G. Webster June 22, 2006 Abstract This work proposes and analyzes
More informationApproximation of fluid-structure interaction problems with Lagrange multiplier
Approximation of fluid-structure interaction problems with Lagrange multiplier Daniele Boffi Dipartimento di Matematica F. Casorati, Università di Pavia http://www-dimat.unipv.it/boffi May 30, 2016 Outline
More informationStochastic Spectral Approaches to Bayesian Inference
Stochastic Spectral Approaches to Bayesian Inference Prof. Nathan L. Gibson Department of Mathematics Applied Mathematics and Computation Seminar March 4, 2011 Prof. Gibson (OSU) Spectral Approaches to
More informationSpace-time sparse discretization of linear parabolic equations
Space-time sparse discretization of linear parabolic equations Roman Andreev August 2, 200 Seminar for Applied Mathematics, ETH Zürich, Switzerland Support by SNF Grant No. PDFMP2-27034/ Part of PhD thesis
More informationarxiv: v2 [math.na] 6 Aug 2018
Bayesian model calibration with interpolating polynomials based on adaptively weighted Leja nodes L.M.M. van den Bos,2, B. Sanderse, W.A.A.M. Bierbooms 2, and G.J.W. van Bussel 2 arxiv:82.235v2 [math.na]
More informationSTOCHASTIC SAMPLING METHODS
STOCHASTIC SAMPLING METHODS APPROXIMATING QUANTITIES OF INTEREST USING SAMPLING METHODS Recall that quantities of interest often require the evaluation of stochastic integrals of functions of the solutions
More informationIterative regularization of nonlinear ill-posed problems in Banach space
Iterative regularization of nonlinear ill-posed problems in Banach space Barbara Kaltenbacher, University of Klagenfurt joint work with Bernd Hofmann, Technical University of Chemnitz, Frank Schöpfer and
More informationParametric Problems, Stochastics, and Identification
Parametric Problems, Stochastics, and Identification Hermann G. Matthies a B. Rosić ab, O. Pajonk ac, A. Litvinenko a a, b University of Kragujevac c SPT Group, Hamburg wire@tu-bs.de http://www.wire.tu-bs.de
More informationHigh order Galerkin appoximations for parametric second order elliptic partial differential equations
Research Collection Report High order Galerkin appoximations for parametric second order elliptic partial differential equations Author(s): Nistor, Victor; Schwab, Christoph Publication Date: 2012 Permanent
More informationAnalytic regularity and GPC approximation for control problems constrained by linear parametric elliptic and parabolic PDEs
Eidgenössische Technische Hochschule Zürich Ecole polytechnique fédérale de Zurich Politecnico federale di Zurigo Swiss Federal Institute of Technology Zurich Analytic regularity and GPC approximation
More informationA sparse grid stochastic collocation method for partial differential equations with random input data
A sparse grid stochastic collocation method for partial differential equations with random input data F. Nobile R. Tempone C. G. Webster May 24, 2007 Abstract This work proposes and analyzes a Smolyak-type
More informationBayesian inverse problems with Laplacian noise
Bayesian inverse problems with Laplacian noise Remo Kretschmann Faculty of Mathematics, University of Duisburg-Essen Applied Inverse Problems 2017, M27 Hangzhou, 1 June 2017 1 / 33 Outline 1 Inverse heat
More informationMultigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients
Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients Università degli Studi del Sannio Dipartimento e Facoltà di Ingegneria, Benevento, Italia Random fields
More informationarxiv: v3 [math.na] 18 Sep 2016
Infinite-dimensional integration and the multivariate decomposition method F. Y. Kuo, D. Nuyens, L. Plaskota, I. H. Sloan, and G. W. Wasilkowski 9 September 2016 arxiv:1501.05445v3 [math.na] 18 Sep 2016
More informationWell-posed Bayesian Inverse Problems: Beyond Gaussian Priors
Well-posed Bayesian Inverse Problems: Beyond Gaussian Priors Bamdad Hosseini Department of Mathematics Simon Fraser University, Canada The Institute for Computational Engineering and Sciences University
More informationSimulating with uncertainty : the rough surface scattering problem
Simulating with uncertainty : the rough surface scattering problem Uday Khankhoje Assistant Professor, Electrical Engineering Indian Institute of Technology Madras Uday Khankhoje (EE, IITM) Simulating
More informationarxiv: v2 [math.na] 21 Jul 2016
Multi-Index Stochastic Collocation for random PDEs Abdul-Lateef Haji-Ali a,, Fabio Nobile b, Lorenzo Tamellini b,c, Raúl Tempone a a CEMSE, King Abdullah University of Science and Technology, Thuwal 23955-6900,
More informationApproximation of Geometric Data
Supervised by: Philipp Grohs, ETH Zürich August 19, 2013 Outline 1 Motivation Outline 1 Motivation 2 Outline 1 Motivation 2 3 Goal: Solving PDE s an optimization problems where we seek a function with
More informationInterpolation via weighted l 1 -minimization
Interpolation via weighted l 1 -minimization Holger Rauhut RWTH Aachen University Lehrstuhl C für Mathematik (Analysis) Mathematical Analysis and Applications Workshop in honor of Rupert Lasser Helmholtz
More informationDownloaded 06/07/18 to Redistribution subject to SIAM license or copyright; see
SIAM J. SCI. COMPUT. Vol. 40, No. 1, pp. A366 A387 c 2018 Society for Industrial and Applied Mathematics WEIGHTED APPROXIMATE FEKETE POINTS: SAMPLING FOR LEAST-SQUARES POLYNOMIAL APPROXIMATION LING GUO,
More informationApplied/Numerical Analysis Qualifying Exam
Applied/Numerical Analysis Qualifying Exam August 9, 212 Cover Sheet Applied Analysis Part Policy on misprints: The qualifying exam committee tries to proofread exams as carefully as possible. Nevertheless,
More informationScientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1
Scientific Computing WS 2018/2019 Lecture 15 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 15 Slide 1 Lecture 15 Slide 2 Problems with strong formulation Writing the PDE with divergence and gradient
More informationIntroduction to Uncertainty Quantification in Computational Science Handout #3
Introduction to Uncertainty Quantification in Computational Science Handout #3 Gianluca Iaccarino Department of Mechanical Engineering Stanford University June 29 - July 1, 2009 Scuola di Dottorato di
More informationA MULTILEVEL STOCHASTIC COLLOCATION METHOD FOR PARTIAL DIFFERENTIAL EQUATIONS WITH RANDOM INPUT DATA
A MULTILEVEL STOCHASTIC COLLOCATION METHOD FOR PARTIAL DIFFERENTIAL EQUATIONS WITH RANDOM INPUT DATA A. L. TECKENTRUP, P. JANTSCH, C. G. WEBSTER, AND M. GUNZBURGER Abstract. Stochastic collocation methods
More informationMultilevel stochastic collocations with dimensionality reduction
Multilevel stochastic collocations with dimensionality reduction Ionut Farcas TUM, Chair of Scientific Computing in Computer Science (I5) 27.01.2017 Outline 1 Motivation 2 Theoretical background Uncertainty
More informationAnalysis and Computation of Hyperbolic PDEs with Random Data
Analysis and Computation of Hyperbolic PDEs with Random Data Mohammad Motamed 1, Fabio Nobile 2,3 and Raúl Tempone 1 1 King Abdullah University of Science and Technology, Thuwal, Saudi Arabia 2 EPFL Lausanne,
More informationParameterized PDEs Compressing sensing Sampling complexity lower-rip CS for PDEs Nonconvex regularizations Concluding remarks. Clayton G.
COMPUTATIONAL & APPLIED MATHEMATICS Parameterized PDEs Compressing sensing Sampling complexity lower-rip CS for PDEs Nonconvex regularizations Concluding remarks Polynomial approximations via compressed
More informationRobust MCMC Sampling with Non-Gaussian and Hierarchical Priors
Division of Engineering & Applied Science Robust MCMC Sampling with Non-Gaussian and Hierarchical Priors IPAM, UCLA, November 14, 2017 Matt Dunlop Victor Chen (Caltech) Omiros Papaspiliopoulos (ICREA,
More informationSpace-Time Adaptive Wavelet Methods for Parabolic Evolution Problems
Space-Time Adaptive Wavelet Methods for Parabolic Evolution Problems Rob Stevenson Korteweg-de Vries Institute for Mathematics University of Amsterdam joint work with Christoph Schwab (ETH, Zürich), Nabi
More informationBayesian Methods and Uncertainty Quantification for Nonlinear Inverse Problems
Bayesian Methods and Uncertainty Quantification for Nonlinear Inverse Problems John Bardsley, University of Montana Collaborators: H. Haario, J. Kaipio, M. Laine, Y. Marzouk, A. Seppänen, A. Solonen, Z.
More informationSupraconvergence of a Non-Uniform Discretisation for an Elliptic Third-Kind Boundary-Value Problem with Mixed Derivatives
Supraconvergence of a Non-Uniform Discretisation for an Elliptic Third-Kind Boundary-Value Problem with Mixed Derivatives Etienne Emmrich Technische Universität Berlin, Institut für Mathematik, Straße
More informationEfficient Solvers for Stochastic Finite Element Saddle Point Problems
Efficient Solvers for Stochastic Finite Element Saddle Point Problems Catherine E. Powell c.powell@manchester.ac.uk School of Mathematics University of Manchester, UK Efficient Solvers for Stochastic Finite
More informationGenerative Models and Stochastic Algorithms for Population Average Estimation and Image Analysis
Generative Models and Stochastic Algorithms for Population Average Estimation and Image Analysis Stéphanie Allassonnière CIS, JHU July, 15th 28 Context : Computational Anatomy Context and motivations :
More information(Part 1) High-dimensional statistics May / 41
Theory for the Lasso Recall the linear model Y i = p j=1 β j X (j) i + ɛ i, i = 1,..., n, or, in matrix notation, Y = Xβ + ɛ, To simplify, we assume that the design X is fixed, and that ɛ is N (0, σ 2
More informationNON-LINEAR APPROXIMATION OF BAYESIAN UPDATE
tifica NON-LINEAR APPROXIMATION OF BAYESIAN UPDATE Alexander Litvinenko 1, Hermann G. Matthies 2, Elmar Zander 2 http://sri-uq.kaust.edu.sa/ 1 Extreme Computing Research Center, KAUST, 2 Institute of Scientific
More informationDeep Learning in High Dimension
Deep Learning in High Dimension Ch. Schwab and J. Zech Research Report No. 2017-57 December 2017 Seminar für Angewandte Mathematik Eidgenössische Technische Hochschule CH-8092 Zürich Switzerland Funding
More informationLecture Notes: African Institute of Mathematics Senegal, January Topic Title: A short introduction to numerical methods for elliptic PDEs
Lecture Notes: African Institute of Mathematics Senegal, January 26 opic itle: A short introduction to numerical methods for elliptic PDEs Authors and Lecturers: Gerard Awanou (University of Illinois-Chicago)
More informationMATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5
MATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5.. The Arzela-Ascoli Theorem.. The Riemann mapping theorem Let X be a metric space, and let F be a family of continuous complex-valued functions on X. We have
More informationSparse Tensor Methods for PDEs with Stochastic Data II: Convergence Rates of gpcfem
Sparse Tensor Methods for PDEs with Stochastic Data II: Convergence Rates of gpcfem Christoph Schwab Seminar for Applied Mathematics ETH Zürich Albert Cohen (Paris, France) & Ron DeVore (Texas A& M) R.
More informationA High-Order Galerkin Solver for the Poisson Problem on the Surface of the Cubed Sphere
A High-Order Galerkin Solver for the Poisson Problem on the Surface of the Cubed Sphere Michael Levy University of Colorado at Boulder Department of Applied Mathematics August 10, 2007 Outline 1 Background
More informationInterpolation and function representation with grids in stochastic control
with in classical Sparse for Sparse for with in FIME EDF R&D & FiME, Laboratoire de Finance des Marchés de l Energie (www.fime-lab.org) September 2015 Schedule with in classical Sparse for Sparse for 1
More informationAdaptive discretization and first-order methods for nonsmooth inverse problems for PDEs
Adaptive discretization and first-order methods for nonsmooth inverse problems for PDEs Christian Clason Faculty of Mathematics, Universität Duisburg-Essen joint work with Barbara Kaltenbacher, Tuomo Valkonen,
More informationOn Reparametrization and the Gibbs Sampler
On Reparametrization and the Gibbs Sampler Jorge Carlos Román Department of Mathematics Vanderbilt University James P. Hobert Department of Statistics University of Florida March 2014 Brett Presnell Department
More informationChapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs
Chapter Two: Numerical Methods for Elliptic PDEs Finite Difference Methods for Elliptic PDEs.. Finite difference scheme. We consider a simple example u := subject to Dirichlet boundary conditions ( ) u
More informationKarhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques
Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,
More informationSparse Legendre expansions via l 1 minimization
Sparse Legendre expansions via l 1 minimization Rachel Ward, Courant Institute, NYU Joint work with Holger Rauhut, Hausdorff Center for Mathematics, Bonn, Germany. June 8, 2010 Outline Sparse recovery
More informationBrownian Motion. 1 Definition Brownian Motion Wiener measure... 3
Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................
More informationScientific Computing I
Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Neckel Winter 2013/2014 Module 8: An Introduction to Finite Element Methods, Winter 2013/2014 1 Part I: Introduction to
More informationrecent developments of approximation theory and greedy algorithms
recent developments of approximation theory and greedy algorithms Peter Binev Department of Mathematics and Interdisciplinary Mathematics Institute University of South Carolina Reduced Order Modeling in
More informationPreconditioned space-time boundary element methods for the heat equation
W I S S E N T E C H N I K L E I D E N S C H A F T Preconditioned space-time boundary element methods for the heat equation S. Dohr and O. Steinbach Institut für Numerische Mathematik Space-Time Methods
More informationOptimal polynomial approximation of PDEs with stochastic coefficients by Galerkin and collocation methods
Optimal polynomial approximation of PDEs with stochastic coefficients by Galerkin and collocation methods Fabio Nobile MOX, Department of Mathematics, Politecnico di Milano Seminaire du Laboratoire Jacques-Louis
More informationInterpolation-Based Trust-Region Methods for DFO
Interpolation-Based Trust-Region Methods for DFO Luis Nunes Vicente University of Coimbra (joint work with A. Bandeira, A. R. Conn, S. Gratton, and K. Scheinberg) July 27, 2010 ICCOPT, Santiago http//www.mat.uc.pt/~lnv
More informationTensor Sparsity and Near-Minimal Rank Approximation for High-Dimensional PDEs
Tensor Sparsity and Near-Minimal Rank Approximation for High-Dimensional PDEs Wolfgang Dahmen, RWTH Aachen Collaborators: Markus Bachmayr, Ron DeVore, Lars Grasedyck, Endre Süli Paris, Oct.11, 2013 W.
More informationarxiv: v3 [math.na] 7 Dec 2018
A data-driven framework for sparsity-enhanced surrogates with arbitrary mutually dependent randomness Huan Lei, 1, Jing Li, 1, Peiyuan Gao, 1 Panos Stinis, 1, 2 1, 3, and Nathan A. Baker 1 Pacific Northwest
More informationOn some nonlinear parabolic equation involving variable exponents
On some nonlinear parabolic equation involving variable exponents Goro Akagi (Kobe University, Japan) Based on a joint work with Giulio Schimperna (Pavia Univ., Italy) Workshop DIMO-2013 Diffuse Interface
More informationCovariance function estimation in Gaussian process regression
Covariance function estimation in Gaussian process regression François Bachoc Department of Statistics and Operations Research, University of Vienna WU Research Seminar - May 2015 François Bachoc Gaussian
More informationIsogeometric Analysis:
Isogeometric Analysis: some approximation estimates for NURBS L. Beirao da Veiga, A. Buffa, Judith Rivas, G. Sangalli Euskadi-Kyushu 2011 Workshop on Applied Mathematics BCAM, March t0th, 2011 Outline
More informationRegularity of Weak Solution to Parabolic Fractional p-laplacian
Regularity of Weak Solution to Parabolic Fractional p-laplacian Lan Tang at BCAM Seminar July 18th, 2012 Table of contents 1 1. Introduction 1.1. Background 1.2. Some Classical Results for Local Case 2
More informationPDEs in Image Processing, Tutorials
PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower
More informationhp-dg timestepping for time-singular parabolic PDEs and VIs
hp-dg timestepping for time-singular parabolic PDEs and VIs Ch. Schwab ( SAM, ETH Zürich ) (joint with O. Reichmann) ACMAC / Heraklion, Crete September 26-28, 2011 September 27, 2011 Motivation Well-posedness
More informationContinuous Functions on Metric Spaces
Continuous Functions on Metric Spaces Math 201A, Fall 2016 1 Continuous functions Definition 1. Let (X, d X ) and (Y, d Y ) be metric spaces. A function f : X Y is continuous at a X if for every ɛ > 0
More informationBesov regularity for the solution of the Stokes system in polyhedral cones
the Stokes system Department of Mathematics and Computer Science Philipps-University Marburg Summer School New Trends and Directions in Harmonic Analysis, Fractional Operator Theory, and Image Analysis
More informationStochastic Regularity of a Quadratic Observable of High Frequency Waves
Research in the Mathematical Sciences manuscript No. (will be inserted by the editor) Stochastic Regularity of a Quadratic Observable of High Frequency Waves G. Malenová M. Motamed O. Runborg Received:
More informationarxiv: v3 [math.st] 16 Jul 2013
Posterior Consistency for Bayesian Inverse Problems through Stability and Regression Results arxiv:30.40v3 [math.st] 6 Jul 03 Sebastian J. Vollmer Mathematics Institute, Zeeman Building, University of
More informationINTRODUCTION TO REAL ANALYTIC GEOMETRY
INTRODUCTION TO REAL ANALYTIC GEOMETRY KRZYSZTOF KURDYKA 1. Analytic functions in several variables 1.1. Summable families. Let (E, ) be a normed space over the field R or C, dim E
More informationMultilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland
Multilevel accelerated quadrature for elliptic PDEs with random diffusion Mathematisches Institut Universität Basel Switzerland Overview Computation of the Karhunen-Loéve expansion Elliptic PDE with uniformly
More informationKarhunen-Loeve Expansion and Optimal Low-Rank Model for Spatial Processes
TTU, October 26, 2012 p. 1/3 Karhunen-Loeve Expansion and Optimal Low-Rank Model for Spatial Processes Hao Zhang Department of Statistics Department of Forestry and Natural Resources Purdue University
More informationPREPRINT 2010:23. A nonconforming rotated Q 1 approximation on tetrahedra PETER HANSBO
PREPRINT 2010:23 A nonconforming rotated Q 1 approximation on tetrahedra PETER HANSBO Department of Mathematical Sciences Division of Mathematics CHALMERS UNIVERSITY OF TECHNOLOGY UNIVERSITY OF GOTHENBURG
More informationSteps in Uncertainty Quantification
Steps in Uncertainty Quantification Challenge: How do we do uncertainty quantification for computationally expensive models? Example: We have a computational budget of 5 model evaluations. Bayesian inference
More informationSparse grids and optimisation
Sparse grids and optimisation Lecture 1: Challenges and structure of multidimensional problems Markus Hegland 1, ANU 10.-16. July, MATRIX workshop on approximation and optimisation 1 support by ARC DP
More informationSuboptimal feedback control of PDEs by solving Hamilton-Jacobi Bellman equations on sparse grids
Suboptimal feedback control of PDEs by solving Hamilton-Jacobi Bellman equations on sparse grids Jochen Garcke joint work with Axel Kröner, INRIA Saclay and CMAP, Ecole Polytechnique Ilja Kalmykov, Universität
More informationOn a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model
On a Data Assimilation Method coupling, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model 2016 SIAM Conference on Uncertainty Quantification Basile Marchand 1, Ludovic
More informationInterpolation via weighted l 1 minimization
Interpolation via weighted l 1 minimization Rachel Ward University of Texas at Austin December 12, 2014 Joint work with Holger Rauhut (Aachen University) Function interpolation Given a function f : D C
More informationAccuracy, Precision and Efficiency in Sparse Grids
John, Information Technology Department, Virginia Tech.... http://people.sc.fsu.edu/ jburkardt/presentations/ sandia 2009.pdf... Computer Science Research Institute, Sandia National Laboratory, 23 July
More informationADAPTIVE GALERKIN APPROXIMATION ALGORITHMS FOR PARTIAL DIFFERENTIAL EQUATIONS IN INFINITE DIMENSIONS
ADAPTIVE GALERKIN APPROXIMATION ALGORITHMS FOR PARTIAL DIFFERENTIAL EQUATIONS IN INFINITE DIMENSIONS CHRISTOPH SCHWAB AND ENDRE SÜLI Abstract. Space-time variational formulations of infinite-dimensional
More informationErgodicity in data assimilation methods
Ergodicity in data assimilation methods David Kelly Andy Majda Xin Tong Courant Institute New York University New York NY www.dtbkelly.com April 15, 2016 ETH Zurich David Kelly (CIMS) Data assimilation
More informationNONLOCALITY AND STOCHASTICITY TWO EMERGENT DIRECTIONS FOR APPLIED MATHEMATICS. Max Gunzburger
NONLOCALITY AND STOCHASTICITY TWO EMERGENT DIRECTIONS FOR APPLIED MATHEMATICS Max Gunzburger Department of Scientific Computing Florida State University North Carolina State University, March 10, 2011
More informationJitka Dupačová and scenario reduction
Jitka Dupačová and scenario reduction W. Römisch Humboldt-University Berlin Institute of Mathematics http://www.math.hu-berlin.de/~romisch Session in honor of Jitka Dupačová ICSP 2016, Buzios (Brazil),
More informationTraces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains
Traces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains Sergey E. Mikhailov Brunel University West London, Department of Mathematics, Uxbridge, UB8 3PH, UK J. Math. Analysis
More informationVariational problems in machine learning and their solution with finite elements
Variational problems in machine learning and their solution with finite elements M. Hegland M. Griebel April 2007 Abstract Many machine learning problems deal with the estimation of conditional probabilities
More informationRegularity of the density for the stochastic heat equation
Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department
More information