Sparse Tensor Methods for PDEs with Stochastic Data II: Convergence Rates of gpcFEM
Christoph Schwab, Seminar for Applied Mathematics, ETH Zürich
with Albert Cohen (Paris, France) & Ron DeVore (Texas A&M)
R. Andreev, M. Bieri and C.J. Gittelson (UQ-Group, SAM, ETH)
Workshop Tutorial "Computation with Uncertainty", IMA, Minneapolis, MN, USA, October 16 & 17, 2010
Supported by ERC Project 247277 STAHDPDE and the Swiss National Science Foundation
Outline
1. Elliptic BVP with stochastic data
2. Infinite-dimensional deterministic BVP
3. Regularity
4. Best N-term convergence rates of sgfem
5. Localization of active gpc modes. Complexity.
6. Discretization in D. Sparse tensor sgfem.
7. Parabolic and hyperbolic problems
8. Conclusions & references
Elliptic BVP with stochastic data

Given: probability space (Ω,Σ,P); data space X(D) = L^∞(D); V = H¹₀(D); random diffusion coefficient a ∈ L^∞(Ω,dP;X(D)); deterministic source term f ∈ H^{−1}(D) = (H¹₀(D))' =: V'.

(SBVP) Find u(x,ω) ∈ L²(Ω,dP;V) = L²(Ω,dP;H¹₀(D)) ≅ L²(Ω,dP) ⊗ V such that

  E[ ∫_D a(x,·) ∇_x u(x,·) · ∇_x v(x,·) dx ] = E[ ∫_D f(x) v(x,·) dx ]  for all v ∈ L²(Ω,dP;V).

If a ∈ L^∞(Ω,dP;X(D)) and a_max ≥ a(·,·) ≥ a_min > 0, then ∃! u ∈ L²(Ω,dP;H¹₀(D)).
Karhunen-Loève expansion - separation of deterministic and stochastic variables -

Example 1 (Karhunen-Loève expansion). If a ∈ L²(Ω,dP;L^∞(D)), then in L²(Ω,dP;L²(D)),

  a(x,ω) = E[a](x) + Σ_{m≥1} ψ_m(x) Y_m(ω) = E[a](x) + Σ_{m≥1} √λ_m φ_m(x) Y_m(ω),

with (λ_m,φ_m)_{m≥1} the eigensequence of the (compact, self-adjoint) covariance operator

  C[a] : L²(D) → L²(D),  (C[a]v)(x) := ∫_D C_a(x,x') v(x') dx'  ∀v ∈ L²(D),  C_a = E[(a − E[a]) ⊗ (a − E[a])],

and

  Y_m(ω) := (1/√λ_m) ∫_D (a(x,ω) − E[a](x)) φ_m(x) dx : Ω → Γ_m ⊂ ℝ,  m = 1,2,...
Karhunen-Loève expansion - convergence -

  a(x,ω) = E[a](x) + Σ_{m≥1} √λ_m φ_m(x) Y_m(ω)

The KL expansion converges in L²(D×Ω), but not necessarily in L^∞(D×Ω). To ensure L^∞(D×Ω) convergence, one must
- estimate the decay rate of the KL eigenvalues λ_m: Schwab & Todor JCP (2006),
- bound ||φ_m||_{L^∞(D)}: Todor Diss. ETH (2005), SINUM (2006),
- assume bounds on ||Y_m||_{L^∞(Ω)}.
Karhunen-Loève expansion - eigenvalue estimates -

Regularity of C_a ensures decay of the KL eigenvalue sequence (λ_m)_{m≥1}.

C_a(x,x') : D×D → ℝ is
- piecewise analytic on D×D if there exists a smoothness partition D = {D_j}_{j=1}^J of D into a finite sequence of simplices D_j such that D̄ = ∪_{j=1}^J D̄_j (1) and C_a(x,x') is analytic in an open neighbourhood of D̄_j × D̄_{j'} for every pair (j,j');
- piecewise H^{t,t} on D×D if C_a ∈ H^{t,t}_pw(D×D) := ∩_{i,j≤J} L²(D_i; H^t(D_j)).
Karhunen-Loève expansion - eigenvalue estimates -

Let (H,⟨·,·⟩) be a separable Hilbert space and C ∈ K(H) compact, self-adjoint, with eigenpair sequence (λ_m,φ_m)_{m≥1}. If C_m ∈ B(H) is any operator of rank at most m, then

  λ_{m+1} ≤ ||C − C_m||_{B(H)}. (2)

(e.g. Pinkus 1985: n-widths in Approximation Theory)
Karhunen-Loève expansion - eigenvalue estimates -

Proposition 2 (KL eigenvalue decay)

(Super-exponential KL decay: Gaussian C_a(x,x'))
  C_a(x,x') := σ² exp(−γ|x−x'|²)  ⟹  0 ≤ λ_m ≤ c(γ,σ)/m!  ∀m ≥ 1.

(Exponential KL decay: piecewise analytic C_a(x,x'))
  C_a pw analytic on D×D  ⟹  ∃ b,c > 0 : 0 ≤ λ_m ≤ c exp(−b m^{1/d})  ∀m ≥ 1.

(Algebraic KL eigenvalue decay for pw H^t(D) kernels)
  C_a ∈ H^{t,t}_pw(D×D) (t ≥ d/2)  ⟹  0 ≤ λ_m ≤ c m^{−t/d}  ∀m ≥ 1.
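These decay regimes are easy to observe numerically. The sketch below is an illustration, not part of the slides: the kernel choices, discretization size and function names are my own. It approximates the KL eigenvalues by a Nyström (midpoint-rule) discretization of the covariance operator on D = (−1,1) and contrasts an analytic kernel with a merely Lipschitz one.

```python
import numpy as np

# Nystrom approximation of KL eigenvalues on D = (-1, 1): discretize
# (C[a] v)(x) = \int_D C_a(x, x') v(x') dx' by the midpoint rule and
# take the eigenvalues of the resulting (symmetric) kernel matrix.
def kl_eigenvalues(kernel, n=500):
    x = -1.0 + 2.0 * (np.arange(n) + 0.5) / n   # midpoint nodes
    w = 2.0 / n                                  # uniform quadrature weight
    K = kernel(x[:, None], x[None, :]) * w
    return np.sort(np.linalg.eigvalsh(K))[::-1]

gauss = lambda x, xp: np.exp(-((x - xp) ** 2))   # analytic covariance
lip   = lambda x, xp: np.exp(-np.abs(x - xp))    # only Lipschitz covariance

lam_g, lam_l = kl_eigenvalues(gauss), kl_eigenvalues(lip)

# analytic kernel: (super-)exponential decay; Lipschitz kernel: only
# algebraic decay, roughly m^{-2}, in line with Proposition 2
print("analytic :", lam_g[:6])
print("Lipschitz:", lam_l[:6])
```

The contrast in the printed ratios λ₆/λ₁ for the two kernels mirrors the three decay regimes of Proposition 2.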
Karhunen-Loève expansion - eigenvalue estimates -

[Figure: 10 largest KL eigenvalues (log scale), observed vs. predicted decay; left: kernel exp(−|x−x'|²), x ∈ (−1,1); right: kernels exp(−|x−x'|^{1+δ}), δ ∈ {0, 0.25, 0.5, 0.75}.]
Karhunen-Loève expansion - eigenfunction estimates -

Regularity of C_a ensures L^∞ bounds for the L²-scaled eigenfunctions (φ_m)_{m≥1}.

Theorem 3 (Schwab & Todor JCP 2006). Assume C_a ∈ H^{t,t}_pw(D×D) for t > d. Then for every δ > 0 there exists C(δ) > 0 such that for all m ≥ 1:

  ||φ_m||_{L^∞(D)} ≤ C(δ) λ_m^{−δ}.

Hence

  b_m := √λ_m ||φ_m||_{L^∞(D)} / inf_{x∈D} E[a] ≤ C(δ) λ_m^{1/2−δ} ≤ C(δ) m^{−t/(2d)+δ}.
Karhunen-Loève expansion - convergence rate -

Conclusion: the KL expansion of a ∈ L²(Ω,dP;L^∞(D)) converges uniformly and exponentially on D×Ω if
- C_a(x,x') is piecewise analytic,
- (Y_m(ω))_{m≥1} is uniformly bounded on Ω (e.g. Y_m ~ U(−1,1)).
Karhunen-Loève expansion - convergence rate -

[Figure: left: largest 2000 eigenvalues λ_m of the factorizable kernel exp(−10|x−x'|²), x ∈ (−1,1)³, computed vs. estimated 1/Γ(m^{1/3}/2); right: relative L² norm of the KL series remainder after truncation at level M, for M up to 500.]
Karhunen-Loève expansion - truncation from infinite to finite dimension M -

M ∈ ℕ KL-truncation order:

  a_M(x,ω) := E[a](x) + Σ_{m=1}^{M} √λ_m φ_m(x) Y_m(ω).

(SBVP) with stochastic coefficient a(x,ω):
  −div(a(x,ω) ∇_x u(x,ω)) = f(x)  in L²(Ω,dP;H^{−1}(D)).

(SBVP)_M with truncated stochastic coefficient a_M(x,ω):
  −div(a_M(x,ω) ∇_x u_M(x,ω)) = f(x)  in L²(Ω,dP;H^{−1}(D)).

Theorem 4. If C_a is pw analytic (resp. C_a ∈ H^{t,t}_pw(D×D)) and (Y_m)_{m≥1} is uniformly bounded, then for every δ > 0 there exist b, C(δ), M_0 > 0 such that (SBVP)_M is well-posed for M ≥ M_0 and, for all M ≥ M_0,

  ||u − u_M||_{L²(Ω;H¹₀(D))} ≤ C exp(−b M^{1/d})      if C_a pw analytic,
  ||u − u_M||_{L²(Ω;H¹₀(D))} ≤ C(δ) M^{−t/(2d)+1+δ}   if C_a ∈ H^{t,t}_pw(D×D).
High-dimensional deterministic BVP

  a_M : D×Ω → ℝ,  a_M(x,ω) = E[a](x) + Σ_{m=1}^{M} √λ_m φ_m(x) Y_m(ω).

Assumption: (Y_m)_{m≥1} is an independent, uniformly bounded family of rv's (e.g. Y_m uniformly distributed in Γ_m = I = (−1/2,1/2), m = 1,2,3,...).

Replace random variables by parameters:
  random variable Y_m  →  parameter y_m ∈ I_m, m = 1,2,...
  (Y_1,Y_2,...,Y_M)  →  y = (y_1,y_2,...,y_M) ∈ I^M := I_1 × ... × I_M
  dP(ω)  →  ρ(y)dy = Π_{m=1}^{M} ρ_m(y_m) dy_m

  ã_M : D×I^M → ℝ,  ã_M(x,y) = E[a](x) + Σ_{m=1}^{M} √λ_m φ_m(x) y_m.
Parametric deterministic BVP

  a(x,ω) = E[a](x) + Σ_{j≥1} ψ_j(x) Y_j(ω)  →  a(x,y) := E[a](x) + Σ_{j≥1} ψ_j(x) y_j,

  y = (y_1,y_2,...) ∈ U = (−1,1)^ℕ,  P(dω) → ρ(dy) = ρ(y)dy = Π_{j≥1} ρ_j(y_j) dy_j.

Parametric, deterministic problem, pointwise form: for every y ∈ U = (−1,1)^ℕ, find u(x,y) such that

  −∇_x·(a(x,y) ∇_x u(x,y)) = f(x) in D,  u|_{∂D} = 0.
Parametric, deterministic problem, variational form: for every y ∈ U = (−1,1)^ℕ, find u(y) ∈ V = H¹₀(D) such that

  ∀v ∈ V :  ∫_D a(·,y) ∇u · ∇v dx = ∫_D f(x) v(x) dx,

with b(y;u,v) := ∫_D a(·,y) ∇u · ∇v dx.

Parametric deterministic problem, stochastic Galerkin form: find u ∈ L²(U,dµ;V) such that

  ∀v ∈ L²(U,dµ;V) :  ∫_U ∫_D a(·,y) ∇u · ∇v dx µ(dy) = ∫_U ∫_D f(x) v(x,y) dx µ(dy),

with B(u,v) := ∫_U b(y;u,v) µ(dy) on the left-hand side.
sgfem

F: set of all ν = (ν_j)_{j≥1} ∈ ℕ₀^ℕ such that only finitely many ν_j are non-zero. Notation, for ν ∈ F:

  |ν| := Σ_{j≥1} ν_j = ||ν||_{ℓ¹} < ∞,  ν! := Π_{j≥1} ν_j! < ∞,  y^ν := Π_{j≥1} y_j^{ν_j},  y ∈ U.

Tensorized (L²(−1,1; dy/2)-normalized) Legendre polynomials:

  L_ν(y) = Π_{j≥1} L_{ν_j}(y_j).

Proposition 5. Assume Y_j ~ U(−1,1), i.e. ρ_j(dy_j) = µ_j(dy_j) := dy_j/2, µ := ⊗_{j≥1} µ_j(dy_j). Then (L_ν)_{ν∈F} is a complete orthonormal system in L²(U,dµ).
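The orthonormality claim of Proposition 5 can be checked numerically in a few lines. The sketch below is illustrative only; the two-dimensional truncation of ν, the quadrature order, and the function names are my choices. It builds the normalized tensorized Legendre polynomials L_ν and verifies orthonormality with respect to the product measure µ by Gauss-Legendre quadrature.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# L2((-1,1), dy/2)-normalized Legendre polynomial L_n = sqrt(2n+1) P_n
def L(n, t):
    return np.sqrt(2 * n + 1) * Legendre.basis(n)(t)

# tensorized polynomial L_nu(y) = prod_j L_{nu_j}(y_j)
def L_nu(nu, y):
    out = 1.0
    for n, t in zip(nu, y):
        out *= L(n, t)
    return out

# inner product on U = (-1,1)^2 w.r.t. mu = (dy1/2)(dy2/2),
# approximated by tensorized Gauss-Legendre quadrature
nodes, wts = leggauss(8)
wts = wts / 2.0                      # weights for the measure dy/2

def inner(nu, mu_idx):
    s = 0.0
    for y1, w1 in zip(nodes, wts):
        for y2, w2 in zip(nodes, wts):
            s += w1 * w2 * L_nu(nu, (y1, y2)) * L_nu(mu_idx, (y1, y2))
    return s

print(inner((2, 1), (2, 1)))   # ~ 1 (normalization)
print(inner((2, 1), (0, 3)))   # ~ 0 (orthogonality)
```

Since the 8-point Gauss rule is exact for the polynomial degrees involved, the computed inner products match the Kronecker delta up to roundoff.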
Each v ∈ L²(U,dµ;V) has a representation

  v = Σ_{ν∈F} v_ν L_ν,  where v_ν = ∫_U v(·,y) L_ν(y) µ(dy) ∈ V,

and Parseval's equation holds:

  ||v||_{L²(U,dµ;V)} = ||( ||v_ν||_V )||_{ℓ²(F)}.
sgfem

For any finite subset Λ ⊂ F, define

  X_Λ := { v_Λ(x,y) = Σ_{ν∈Λ} v_ν(x) L_ν(y) ; v_ν ∈ V } ⊂ L^∞(U,dµ;V) ⊂ L²(U,dµ;V),

where (L_ν)_{ν∈F} is the tensorized Legendre basis.

Galerkin approximation: find u_Λ = Σ_{ν∈Λ} u_ν L_ν ∈ X_Λ such that

  B(u_Λ,v_Λ) = F(v_Λ)  ∀v_Λ ∈ X_Λ.

Céa's lemma:

  ||u − u_Λ||_{L²(U,dµ;V)} ≤ C₁ inf_{v_Λ∈X_Λ} ||u − v_Λ||_{L²(U,dµ;V)},  where C₁ := a_max/a_min.
sgfem

Legendre expansion of u(x,y):

  u(x,y) = Σ_{ν∈F} v_ν(x) L_ν(y),  where v_ν := ∫_U u(·,y) L_ν(y) µ(dy) ∈ V.

Hence

  ||u − u_Λ||_{L²(U,dµ;V)} ≤ C₁ ( Σ_{ν∉Λ} ||v_ν||²_V )^{1/2}.

Choice of Λ? Determined by the summability of the sequence α_ν := ||v_ν||_V, ν ∈ F.

Lemma 6 (Stechkin). Let 0 < p ≤ q and assume α = (α_ν)_{ν∈F} ∈ ℓ^p(F). If F_N ⊂ F is the set of indices of the N largest α_ν, then

  ( Σ_{ν∉F_N} α_ν^q )^{1/q} ≤ ||α||_{ℓ^p(F)} N^{−r},  where r := 1/p − 1/q ≥ 0.
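Stechkin's lemma is elementary to verify numerically. In the sketch below (the sequence, exponents, and truncation levels are illustrative choices, not from the slides), α_m = m^{−3/2} lies in ℓ^p for p = 0.8, and the ℓ² norm of the tail after keeping the N largest terms is compared with the bound ||α||_{ℓ^p} N^{−r}, r = 1/p − 1/2.

```python
import numpy as np

# Check Stechkin's lemma: for alpha in l^p, the l^q norm of the tail
# after keeping the N largest entries is bounded by ||alpha||_{l^p} N^{-r},
# with r = 1/p - 1/q. Here alpha_m = m^{-3/2}, p = 0.8, q = 2.
alpha = np.arange(1, 200001, dtype=float) ** -1.5
p, q = 0.8, 2.0
r = 1.0 / p - 1.0 / q

norm_p = np.sum(alpha ** p) ** (1.0 / p)
alpha = np.sort(alpha)[::-1]          # N largest first (already sorted here)

for N in (10, 100, 1000):
    tail_q = np.sum(alpha[N:] ** q) ** (1.0 / q)
    bound = norm_p * float(N) ** (-r)
    assert tail_q <= bound
    print(N, tail_q, bound)
```

For this monotone sequence the tail actually decays like N^{−1}, faster than the generic Stechkin rate N^{−0.75}; the lemma only guarantees the latter.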
Regularity

  b = (b_j)_{j≥1},  b_j := ||ψ_j||_{L^∞(D)} / a_min,  j = 1,2,...

Theorem 7 (A-priori estimates of gpc coefficients). Assume

  (1/a_min) Σ_{j≥1} ||ψ_j||_{L^∞(D)} = ||b||_{ℓ¹(ℕ)} = κ < 1.

Then

  ||v_ν||_V ≤ (||f||_{V'}/a_min) (|ν|!/ν!) b^ν,  b^ν := b_1^{ν_1} b_2^{ν_2} ···,  ν ∈ F.

The estimate of ||v_ν||_V follows by a real-variable bootstrap argument.
Regularity

Theorem 8 (Sequence summability). For 0 < p ≤ 1,

  ( (|ν|!/ν!) b^ν )_{ν∈F} ∈ ℓ^p(F)  iff  ||b||_{ℓ¹(ℕ)} < 1 and b ∈ ℓ^p(ℕ).
Convergence rates - semi-discretization in the stochastic variable -

Focus on approximation in L² (mean square, maximum likelihood) and L^∞ (worst case).

Tool: classical Legendre basis (P_n)_{n≥0}:

  ||P_n||_{L^∞([−1,1])} = P_n(1) = 1. (3)

L²-normalized Legendre sequence:

  L_n(t) = √(2n+1) P_n(t),  ∫_{−1}^{1} |L_n(t)|² dt/2 = 1,  L_0 = P_0 = 1.

For ν ∈ F, recall the tensorized polynomials

  P_ν(y) := Π_{j≥1} P_{ν_j}(y_j)  and  L_ν(y) := Π_{j≥1} L_{ν_j}(y_j). (4)

Note:
(L_ν)_{ν∈F} is an orthonormal basis of L²(U,dµ), µ = ⊗_{j≥1} dy_j/2, and every u ∈ L^∞(U,dµ;V) ⊂ L²(U,dµ;V) admits the gpc expansions

  u(y) = Σ_{ν∈F} u_ν P_ν(y) = Σ_{ν∈F} v_ν L_ν(y). (5)

These gpc expansions converge in L²(U,dµ;V) with

  v_ν := ∫_U u(y) L_ν(y) µ(dy)  and  u_ν := Π_{j≥1} (1+2ν_j)^{1/2} v_ν. (6)

Estimates of ||u_ν||_V, ||v_ν||_V are obtained through complex analysis. Note ||v_ν||_V ≤ ||u_ν||_V.
Uniform Ellipticity Assumption UEA(r,R): there exist 0 < r ≤ R < ∞ such that for all x ∈ D and y ∈ U

  0 < r ≤ a(x,y) ≤ R < ∞. (7)

Example: 1-term KL expansion,

  a(x,y) = ā(x) + y ψ(x),  ess inf_{x∈D} ā(x) ≥ a_min,  ||ψ||_{L^∞(D)} ≤ κ a_min  ⟹  r = a_min(1−κ).
Define

  A_δ := { z ∈ ℂ^ℕ : δ ≤ ℜ(a(x,z)) ≤ |a(x,z)| ≤ 2R for every x ∈ D }. (8)

Polydiscs: let ρ = (ρ_j)_{j≥1} be a sequence of radii ρ_j > 0 and define

  U ⊂ U_ρ := Π_{j≥1} { z_j ∈ ℂ : |z_j| ≤ ρ_j } ⊂ ℂ^ℕ. (9)

When a and the ψ_j are real valued, UEA(r,R) implies that for all x ∈ D and z ∈ U

  0 < r ≤ ℜ(a(x,z)) ≤ |a(x,z)| ≤ 2R. (10)

ρ = (ρ_j)_{j≥1} is δ-admissible iff

  ∀x ∈ D :  Σ_{j≥1} ρ_j |ψ_j(x)| ≤ ℜ(ā(x)) − δ. (11)

If ρ = (ρ_j)_{j≥1} is δ-admissible, then U_ρ ⊂ A_δ.
Lemma 9. Assume UEAC(r,R) for some 0 < r ≤ R < ∞ and let ρ = (ρ_j)_{j≥1} be δ-admissible for some 0 < δ < r, with ρ_j > 1 for all j such that ν_j ≠ 0. Then for any ν ∈ F the a-priori bounds on the gpc coefficients hold:

  ||v_ν||_V ≤ ||u_ν||_V ≤ (||f||_{V'}/δ) Π_{j≥1, ν_j≠0} φ(ρ_j)(2ν_j+1) ρ_j^{−ν_j}, (12)

where φ(t) := πt/(2(t−1)) for t > 1.

Choice of the δ-admissible weight sequence (ρ_j)_{j≥1}?
To combine the Legendre coefficient estimate with Lemma 6, use a ν-dependent δ-admissible ρ: fix 1 < κ ≤ 2 such that

  (κ−1) Σ_{j≥1} ||ψ_j||_{L^∞(D)} ≤ r/8. (13)

Choose J_0 as the smallest integer such that

  Σ_{j>J_0} ||ψ_j||_{L^∞(D)} ≤ r(κ−1)/(18πκ). (14)

Then define

  ρ_j := κ                                    if 1 ≤ j ≤ J_0,
  ρ_j := 2 + r ν_j / (4|ν| ||ψ_j||_{L^∞(D)})  if j > J_0 and ν_j ≠ 0,
  ρ_j := 1                                    if j > J_0 and ν_j = 0.

Then ρ is δ-admissible with δ = r/2 and

  ||v_ν||_V ≤ ||u_ν||_V ≲ ρ^{−ν}  ∀ν ∈ F. (15)
Theorem 10. Assume that a(x,z) satisfies UEAC(r,R) for some 0 < r ≤ R < ∞ and that

  ( ||ψ_j||_{L^∞(D)} )_{j≥1} ∈ ℓ^p(ℕ) for some p < 1.

Then, for the same value of p,

  ( ||u_ν||_V )_{ν∈F}, ( ||v_ν||_V )_{ν∈F} ∈ ℓ^p(F).

The Legendre expansions (5) converge in L^∞(U,dµ;V): the partial sums

  S_{Λ_N} u(y) := Σ_{ν∈Λ_N} u_ν(x) P_ν(y) = Σ_{ν∈Λ_N} v_ν(x) L_ν(y)

satisfy

  lim_{N→∞} sup_{y∈U} ||u(y) − S_{Λ_N}u(y)||_V = 0 (16)

for any sequence (Λ_N)_{N≥1} ⊂ F of finite sets which exhausts F. Moreover, ...
1. L^∞ ("worst case") convergence: if Λ_N is the set of indices ν ∈ F of the N largest ||u_ν||_V, then the convergence estimate holds

  sup_{y∈U} ||u(y) − S_{Λ_N}u(y)||_V ≤ ||( ||u_ν||_V )||_{ℓ^p(F)} N^{−r},  r = 1/p − 1 ≥ 0. (17)

2. L² ("maximum likelihood") convergence: if Λ_N is the set of indices ν ∈ F of the N largest ||v_ν||_V, then

  ||u − S_{Λ_N}u||_{L²(U,dµ;V)} ≤ ||( ||v_ν||_V )||_{ℓ^p(F)} N^{−r},  r = 1/p − 1/2 ≥ 1/2. (18)
Finding Λ_l - I

Problem: the theorem does not give a constructive way of finding sets Λ_1 ⊂ Λ_2 ⊂ ... ⊂ Λ_l ⊂ ... ⊂ F. Two options:

1. Adaptive sgfem, or
2. a-priori selection of the Λ_l.

Consider 2: given ρ = (ρ_1,ρ_2,...) > 0 with 1 < ρ_1 ≤ ρ_2 ≤ ... ≤ ρ_m ≤ ... and ε > 0, define

  Λ_ε(ρ) := { ν ∈ F : ρ^{−ν} ≥ ε } = { ν ∈ F : ρ^ν ≤ 1/ε }.

If our bounds on the gpc coefficients ||v_ν||_V are sharp, Λ_ε(ρ) will contain essentially the #Λ_ε(ρ) largest coefficients ||v_ν||_V.

Monotonicity: for sequences ρ, ρ̃ as above and any ε, ε′ > 0,

  ε ≤ ε′ implies Λ_{ε′}(ρ) ⊆ Λ_ε(ρ),  ρ ≤ ρ̃ implies Λ_ε(ρ̃) ⊆ Λ_ε(ρ).
Finding Λ_l - II

Consider comparison sequences

  ν_σ := ((m+1)^σ)_{m=1,2,...},  σ > 0.

Note: if ρ ≥ ν_σ componentwise, then ρ^ν ≥ ν_σ^ν for all ν ∈ F.

Scaling: given σ > 0,

  Λ_ε(ν_σ) = Λ_{ε^{1/σ}}(ν_1) = { ν ∈ ℕ₀^ℕ : Π_{m∈ℕ} (m+1)^{ν_m} ≤ ε^{−1/σ} },  ε → 0.

Hence ν ∈ Λ_ε(ν_σ) iff ν is a multiplicative partition of some integer x ≤ ε^{−1/σ}.
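The characterization via multiplicative partitions suggests a simple recursive enumeration. The sketch below is an illustration (the function name and the sparse dict representation of ν are mine): it lists all ν ∈ F with Π_m (m+1)^{ν_m} ≤ X, i.e. Λ_ε(ν_σ) with X = ε^{−1/σ}.

```python
# Enumerate Lambda_eps(nu_sigma) = { nu in F : prod_m (m+1)^{nu_m} <= X },
# X = eps^{-1/sigma}. Each nu corresponds to a multiplicative partition
# of an integer <= X; indices are stored sparsely as {m: nu_m}, nu_m > 0.
def enumerate_Lambda(X):
    out = []

    def rec(m, weight, nu):
        factor = m + 1
        if weight * factor > X:      # no coordinate >= m can be active
            out.append(dict(nu))
            return
        k, w = 0, weight
        while w <= X:                # try nu_m = 0, 1, 2, ...
            if k > 0:
                nu[m] = k
            rec(m + 1, w, nu)
            nu.pop(m, None)
            k += 1
            w *= factor

    rec(1, 1, {})
    return out

# X = 10: one index per multiplicative partition of each integer n <= 10;
# e.g. n = 8 contributes {1: 3} (2^3), {1: 1, 3: 1} (2*4) and {7: 1} (8)
Lam = enumerate_Lambda(10)
print(len(Lam))   # -> 16
```

For small X this direct recursion suffices; the log-linear localization claimed in Proposition 11 needs a priority-queue variant, which this sketch does not attempt.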
Complexity (finding Λ_l)

Proposition 11 (R. Andreev '08).
1. Λ_ε(ν_σ) can be localized in work and memory growing log-linearly in #Λ_ε(ν_σ).
2. As ε → 0,

  #Λ_ε(ν_σ) ∼ x e^{2√(log x)} / (2√π (log x)^{3/4})  with x = ε^{−1/σ}.

E.R. Canfield, P. Erdős and C. Pomerance: On a problem of Oppenheim concerning "Factorisatio Numerorum", J. Number Theory 17 (1983), 1 ff.
G. Szekeres and P. Turán: Über das zweite Hauptproblem der "Factorisatio Numerorum", Acta Litt. Sci. Szeged 6 (1933), 143-154.
Smoothness in D:

  W := { v ∈ V : Δv ∈ L²(D) } ⊂ V.

Norms:

  ||v||_W := ||v||_V + |v|_W,  |v|_W := ||Δv||_{L²(D)}.

Note: D convex ⟹ W = H²(D) ∩ V.

Theorem 12 (Regularity). Assume f ∈ L²(D) and that a(x,z) satisfies UEAC(r,R) for 0 < r ≤ R < ∞. If, for some 0 < p < 1,

  ( ||ψ_j||_{L^∞(D)} )_{j≥1} ∈ ℓ^p(ℕ)  and  ( ||∇ψ_j||_{L^∞(D)} )_{j≥1} ∈ ℓ^p(ℕ),

then ( ||u_ν||_W )_{ν∈F}, ( ||v_ν||_W )_{ν∈F} ∈ ℓ^p(F).
Discretization in D

(Nonadaptive!) finite element approximation in D:

D convex ⟹

  inf_{v_h∈V_h} ||w − v_h||_V ≤ C h |w|_W ≤ C M^{−1/d} |w|_W.

Non-convex polyhedra D: W ⊋ H²(D) ∩ V. Convergence in terms of M = dim(V_h) reduces to

  inf_{v_h∈V_h} ||w − v_h||_V ≤ C_t M^{−t} ||w||_W  for some 0 < t < 1/d.

See Brenner & Scott: Intro FEM; Babuška, Kellogg and Pitkäranta (1979).
Discretization in D

(Nonadaptive!) finite element approximation in D:

Theorem 13. Assume the FE spaces (V_h)_{h>0} in D ⊂ ℝ^d have approximation order 0 < t ≤ 1/d, that a satisfies UEAC(r,R), and that ( ||ψ_j||_{W^{1,∞}(D)} )_{j≥1} ∈ ℓ^p(ℕ) for some 0 < p ≤ 1. Then:

(i) With Λ_N the set of indices of the N largest ||u_ν||_V, there exist finite element spaces V_ν of dimensions M_ν, ν ∈ Λ_N, such that

  sup_{y∈U} || u(y) − Σ_{ν∈Λ_N} ũ_ν P_ν(y) ||_V ≤ C N_dof^{−min{r,t}},  r := 1/p − 1,

where N_dof = Σ_{ν∈Λ_N} M_ν and C = ( C_t + ||( ||u_ν||_V )||_{ℓ^p(F)} ) ||( ||u_ν||_W )||_{ℓ^p(F)}.
(ii) With Λ_N the set of indices of the N largest ||v_ν||_V, there exist finite element spaces V_ν of dimension M_ν, ν ∈ Λ_N, such that

  || u − Σ_{ν∈Λ_N} ṽ_ν L_ν ||_{L²(U,dµ;V)} ≤ C N_dof^{−min{r,t}},  r := 1/p − 1/2,

where N_dof = Σ_{ν∈Λ_N} M_ν and C = ( C_t² + ||( ||v_ν||_V )||²_{ℓ^p(F)} )^{1/2} ||( ||v_ν||_W )||_{ℓ^p(F)}.

Here, ũ_ν and ṽ_ν are the V-projections of u_ν and v_ν, respectively, onto V_ν.
Numerical example (R. Andreev)

  D = (−1,1),  E[a](x) = 5 + x,  C_a(x,x') = (min{x,x'} + 1)/2 ∈ H^{1,1}(D×D).

KL eigenpairs:

  λ_m = 8 / (π²(2m−1)²),  ψ_m(x) = sin((x+1)/√(2λ_m)),

i.e. algebraic KL decay with rate 2. (Bieri, Andreev and CS, SISC (2010))
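The closed-form eigenvalues of this example can be cross-checked numerically. The sketch below is illustrative (the discretization size is my choice): it compares Nyström eigenvalues of C_a(x,x') = (min{x,x'}+1)/2 on (−1,1) with the formula λ_m = 8/(π²(2m−1)²).

```python
import numpy as np

# Cross-check the closed-form KL eigenvalues of the example kernel
# C_a(x, x') = (min{x, x'} + 1)/2 on D = (-1, 1) against a Nystrom
# discretization (midpoint rule, n nodes).
n = 2000
x = -1.0 + 2.0 * (np.arange(n) + 0.5) / n
w = 2.0 / n
K = 0.5 * (np.minimum(x[:, None], x[None, :]) + 1.0) * w
lam = np.sort(np.linalg.eigvalsh(K))[::-1]

m = np.arange(1, 6)
exact = 8.0 / (np.pi ** 2 * (2 * m - 1) ** 2)
print(lam[:5])
print(exact)      # algebraic decay with rate 2 in m
```

The first few computed eigenvalues agree with the closed form to within the O(1/n) quadrature error, confirming the stated rate-2 algebraic decay.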
Error vs. N_total

[Figure: error (10^{−4} to 10^{−1}) vs. ndof (10² to 10⁶); curves: MC L²(H¹), FTP L²(H¹), STP L²(H¹), FTP L²(L²), STP L²(L²).]
Heat equation with random conductivity (with V.H. Hoang)

Q_T = I×D. In Q_T:

  u_t − ∇·(a(x,ω)∇u) = g(t,x),  u|_{∂D×I} = 0,  u|_{t=0} = h(x).

  V = H¹₀(D) ⊂ H = L²(D) ≅ H' ⊂ V' = H^{−1}(D).

  X = L²(I;V) ∩ H¹(I;V')  and  Y = L²(I;V) × H,

with norms ||·||_X and ||·||_Y, for u ∈ X and v = (v_1,v_2) ∈ Y, given by

  ||u||_X = ( ||u||²_{L²(I;V)} + ||u||²_{H¹(I;V')} )^{1/2}  and  ||v||_Y = ( ||v_1||²_{L²(I;V)} + ||v_2||²_H )^{1/2}.
Heat equation with random conductivity

Parametric deterministic problem:

  u_t(t,x,y) − ∇_x·[a(x,y) ∇_x u(t,x,y)] = g(t,x) in Q_T,  u(t,x,y)|_{∂D×I} = 0,  u|_{t=0} = h(x),

where, for y = (y_1,y_2,...) ∈ U,

  a(x,y) = ā(x) + Σ_{j≥1} y_j ψ_j(x).

Pointwise parametric (x,t)-variational formulation [CS & R. Stevenson 2009]: given f ∈ Y', for each y ∈ U find u(y) ∈ X such that

  b(y;u,v) = f(v)  ∀v = (v_1,v_2) ∈ Y,

where

  b(y;u,(v_1,v_2)) = ∫_I ⟨du/dt, v_1(t,·)⟩_H dt + ∫_I ∫_D a(x,y) ∇u(t,x)·∇v_1(t,x) dx dt + ⟨u(0), v_2⟩_H,

  f(v) = ∫_I ⟨g(t), v_1(t)⟩_H dt + ⟨h, v_2⟩_H,  v = (v_1,v_2) ∈ Y.
Heat equation with random conductivity

Galerkin parametric (x,t)-variational formulation in the Bochner spaces

  𝒳 = L²(U,dρ;X) ≅ L²(U,dρ) ⊗ X  and  𝒴 = L²(U,dρ;Y) ≅ L²(U,dρ) ⊗ Y:

find u ∈ 𝒳 such that B(u,v) = F(v) for all v ∈ 𝒴, where

  B(u,v) = ∫_U b(y;u,v) ρ(dy)  and  F(v) = ∫_U f(v) ρ(dy).
Heat equation with random conductivity

gpc approximation: Λ ⊂ F, N = #Λ < ∞.

  𝒳_Λ := { u_Λ(t,x,y) = Σ_{ν∈Λ} u_ν(t,x) L_ν(y) : u_ν ∈ X } ⊂ 𝒳,

  𝒴_Λ := { v_Λ(t,x,y) = Σ_{ν∈Λ} v_ν(t,x) L_ν(y) : v_ν ∈ Y } ⊂ 𝒴.

In the Legendre basis (L_ν)_{ν∈F} we write

  v_{1Λ}(t,x,y) = Σ_{ν∈Λ} v_{1ν}(t,x) L_ν(y)  and  v_{2Λ}(x,y) = Σ_{ν∈Λ} v_{2ν}(x) L_ν(y),

respectively, where v_ν = (v_{1ν}, v_{2ν}) ∈ Y for all ν ∈ F.

Galerkin approximation: find u_Λ ∈ 𝒳_Λ such that B(u_Λ,v_Λ) = F(v_Λ) ∀v_Λ ∈ 𝒴_Λ. (19)
Heat equation with random conductivity

Theorem 14. For every Λ ⊂ F, (19) is a coupled system of N = #Λ parabolic equations; it admits a unique solution u_Λ ∈ 𝒳_Λ with the a-priori error bound

  ||u − u_Λ||_𝒳 ≤ c ( Σ_{ν∉Λ} ||u_ν||²_X )^{1/2},

where the u_ν ∈ X are the Legendre coefficients of the solution of the parametric deterministic problem and c is independent of Λ. Moreover,

  ( ||ψ_j||_{L^∞(D)} )_{j≥1} ∈ ℓ^p(ℕ) for some 0 < p ≤ 1  ⟹  ( ||u_ν||_X )_{ν∈F} ∈ ℓ^p(F)
and there exists a sequence (Λ_N)_{N∈ℕ} ⊂ F of index sets with #Λ_N ≤ N such that the solutions u_{Λ_N} of the Galerkin semidiscretized problems (19) satisfy

  ||u − u_{Λ_N}||_𝒳 ≤ C N^{−r},  r = 1/p − 1/2.

Space-time (x,t) discretization: regularity, (x,t) wavelets, optimal error bounds in terms of N_dof.

Analogous results hold for wave equations in random media.
Conclusion

gpc: transform the SPDE into a parametric, deterministic PDE on U = (−1,1)^ℕ.

Space-time approximation by gpc of U ∋ y ↦ u(y,·) ∈ V.

Regularity: W ⊂ V smoothness space in (x,t);

  ( ||u_ν||_W )_{ν∈F} ∈ ℓ^p(F),  0 < p ≤ 1.

p = 1: space-time rate equal to the MLMC convergence rate (A. Barth, CS, N. Zollinger 2010).

Sparse tensorization of the D and Ω Galerkin discretizations: dimension-independent convergence rates in terms of N_total.
This requires mixed regularity in both D and U: if ( ||ψ_j||_{W^{1,∞}(D)} )_{j≥1} ∈ ℓ^p(ℕ), then ( ||u_ν||_W )_{ν∈F} ∈ ℓ^p(F); this yields L² (mean square) and L^∞ (worst case) convergence rates.

Adaptivity is essential to avoid the curse of dimensionality.

- Diffusion problems in physical dimension d = 2, with λ_m ∼ m^{−2.2} and supp λ ⊂ {1,...,1500}, on a PC (R. Andreev, M. Bieri and CS, SISC 2010); superior to the Smolyak construction.
- Sparse composite collocation on lattices of points in U = (−1,1)^ℕ, U = ℝ^ℕ (M. Bieri, Dissertation ETH (2009) and SINUM (2010)).
- Lognormal case (C.J. Gittelson, M³AS 2010).
- Heat and wave equations (CS & V. Hoang 2010), eigenvalue problems (CS & R. Andreev).
- Adaptive, convergent algorithm (with A. Cohen and R. DeVore, 2010).
References

- N. Wiener (Am. J. Math. 1938); Cameron and Martin (Ann. Math. 1947)
- G.E. Karniadakis et al. (2000-)
- Babuška, Nobile, Tempone et al. (SINUM 2004, 2006, 2008; CMAME 2005)
- P. Frauenfelder, R.A. Todor and CS (CMAME 2005)
- R.A. Todor (SINUM 2005 and Diss. ETH 2005)
- CS and R.A. Todor (JCP 2006 and IMAJNA 2007)
- R. Andreev, M. Bieri and CS (SISC 2010)
- M. Bieri (SINUM 2010)
- V.H. Hoang and CS (2010)
- A. Cohen, R. DeVore and CS (2009, 2010)
- C.J. Gittelson, Dissertation ETH (2010)

www.sam.math.ethz.ch/reports