A regularized least-squares method for sparse low-rank approximation of multivariate functions
1 Workshop "Numerical methods for high-dimensional problems", April 18, 2014
A regularized least-squares method for sparse low-rank approximation of multivariate functions
Mathilde Chevreuil, joint work with Prashant Rai, Loïc Giraldi and Anthony Nouy
GeM, Institut de Recherche en Génie Civil et Mécanique, LUNAM Université, UMR CNRS 6183 / Université de Nantes / Centrale Nantes
2 Motivations
Chorus ANR project: aero-thermal regulation in an aircraft cabin.
39 random parameters; database of 2000 evaluations of the model.
Motivations and framework Sparse LR approx. Tensor formats & alg. Conclusion 2
3 Motivations
In telecommunication: electromagnetic field and the Specific Absorption Rate (SAR) induced in the body.
Over 4 random parameters; FDTD method: 2 days/run. Few evaluations of the model available.
Aim: construct a surrogate model of the true model, from a small collection of evaluations of the true model, that allows fast evaluations of output quantities of interest, observables or objective functions:
- Propagation: estimation of quantiles, sensitivity analysis...
- Optimization or identification
- Probabilistic inverse problems
5 Uncertainty quantification using functional approaches
Stochastic/parametric models: uncertainties represented by simple random variables ξ = (ξ_1, ..., ξ_d) : Θ → Ξ defined on a probability space (Θ, B, P).
ξ → Model → u(ξ)
Ideal approach: compute an accurate and explicit representation of u(ξ),
u(ξ) ≈ Σ_{α ∈ I_P} u_α φ_α(ξ), ξ ∈ Ξ,
where the φ_α(ξ) constitute a suitable basis of multiparametric functions:
- Polynomial chaos [Ghanem and Spanos 1991, Xiu and Karniadakis 2002, Soize and Ghanem 2004]
- Piecewise polynomials, wavelets [Deb 2001, Le Maître 2004, Wan 2005]
6 Motivations
Approximation spaces: S_P = span{φ_α(ξ) = φ^{(1)}_{α_1}(ξ_1) ... φ^{(d)}_{α_d}(ξ_d); α ∈ I_P}, with a pre-defined index set I_P, e.g.
{α ∈ N^d; ‖α‖_∞ ≤ r}, {α ∈ N^d; ‖α‖_1 ≤ r}, {α ∈ N^d; ‖α‖_q ≤ r, 0 < q < 1}.
Issue: approximation of a high-dimensional function u(ξ), ξ ∈ Ξ ⊂ R^d. The cardinality #(I_P) grows rapidly with d (e.g. ‖α‖_∞ ≤ r gives (r+1)^d indices).
Use of deterministic solvers in a black-box manner: numerous evaluations of possibly fine deterministic models.
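The effect of the three index-set choices above can be checked by direct enumeration. The helper below (an illustrative sketch, not from the talk; the function name and tolerance are assumptions) counts the indices retained by each norm constraint:

```python
from itertools import product

def index_set_size(d, r, kind="total", q=0.5):
    """Cardinality of the index sets above, by brute-force enumeration
    over {0, ..., r}^d (fine for small d and r)."""
    count = 0
    for alpha in product(range(r + 1), repeat=d):
        if kind == "max":          # {alpha : ||alpha||_inf <= r}
            ok = max(alpha) <= r
        elif kind == "total":      # {alpha : ||alpha||_1 <= r}
            ok = sum(alpha) <= r
        else:                      # {alpha : ||alpha||_q <= r}, 0 < q < 1
            ok = sum(a ** q for a in alpha) ** (1.0 / q) <= r + 1e-9
        count += ok
    return count

# d = 2, r = 2: the full tensor grid has 9 indices, total degree keeps 6,
# and the q-norm set (q = 0.5) is smaller still
sizes = (index_set_size(2, 2, "max"),
         index_set_size(2, 2, "total"),
         index_set_size(2, 2, "q"))
```

For larger d the gap widens quickly, which is the point of the sparser index sets.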
7 Motivations
Objective: compute an approximation of u in S_P,
u(ξ) ≈ Σ_{α ∈ I_P} u_α φ_α(ξ),
using few samples {u(y^q)}_{q=1}^Q, where {y^q}_{q=1}^Q is a collection of sample points and the u(y^q) are solutions of the deterministic problem.
Exploit structures of u(ξ):
- u(ξ) can be sparse on particular basis functions
- u(ξ) can have suitable low-rank representations
Can we exploit sparsity within the low-rank structure of u?
8 Outline
1. Motivations and framework
2. Sparse low-rank approximation
3. Tensor formats and algorithms: canonical decomposition, Tensor Train format
4. Conclusion
10 Low-rank approximation
Approximation of the function u using tensor approximation methods.
Exploit the tensor structure of the function space: S_P = S^1_{P_1} ⊗ ... ⊗ S^d_{P_d}, with S^k_{P_k} = span(φ^{(k)}_i)_{i=1}^{P_k}.
Low-rank tensor subsets M with dim(M) = O(d):
M = {v = F_M(p_1, p_2, ..., p_n)}
[Nouy 2010, Khoromskij and Schwab 2010, Ballani 2010, Beylkin et al 2011, Matthies and Zander 2012, Doostan et al 2012, ...]
Sparse low-rank tensor subsets, ideally
M_{m-sparse} = {v = F_M(p_1, p_2, ..., p_n); ‖p_i‖_0 ≤ m_i, 1 ≤ i ≤ n},
with dim(M_{m-sparse}) ≪ dim(M).
13 Low-rank approximation: least-squares in low-rank subsets
Approximation v(ξ) ∈ M defined by
min_{v ∈ M} ‖u − v‖²_Q, with ‖u − v‖²_Q = Σ_{k=1}^Q |u(y^k) − v(y^k)|²
[Beylkin et al 2011, Doostan et al 2012]
Approximation v(ξ) ∈ M_{m-sparse} defined by
min_{v ∈ M} ‖u − v‖²_Q s.t. ‖p_i‖_0 ≤ m_i,
or, via convex relaxation of the sparsity constraint (Lasso),
min_{v ∈ M} ‖u − v‖²_Q + Σ_{i=1}^n λ_i ‖p_i‖_1.
Alternating least-squares with sparse regularization: for 1 ≤ i ≤ n, and for fixed p_j with j ≠ i,
min_{p_i} ‖u − F_M(p_1, ..., p_i, ..., p_n)‖²_Q + λ_i ‖p_i‖_1.
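The unregularized version of this alternating scheme can be sketched in a few lines for a rank-one subset with d = 2. Everything below (the monomial basis, sample sizes, function names) is an illustrative assumption, not the talk's implementation:

```python
import numpy as np

def als_rank_one(Phi, u, n_sweeps=50):
    """Rank-one least-squares approximation v = w1(xi_1) * w2(xi_2) * ...
    by alternating minimization on Q samples.
    Phi[k][q, i] = phi_i^(k)(y_k^q) is the basis matrix of variable k."""
    d = len(Phi)
    w = [np.ones(Phi[k].shape[1]) for k in range(d)]
    for _ in range(n_sweeps):
        for k in range(d):
            # values of the frozen factors at the sample points
            other = np.ones(Phi[0].shape[0])
            for j in range(d):
                if j != k:
                    other = other * (Phi[j] @ w[j])
            A = Phi[k] * other[:, None]   # design matrix for factor k
            w[k] = np.linalg.lstsq(A, u, rcond=None)[0]
    return w

# usage: recover the exactly rank-one u(xi) = (1 + xi_1) * xi_2^2
rng = np.random.default_rng(0)
ys = rng.uniform(0, 1, size=(50, 2))
u = (1.0 + ys[:, 0]) * ys[:, 1] ** 2
Phi = [np.vander(ys[:, k], 4, increasing=True) for k in range(2)]
w = als_rank_one(Phi, u)
residual = np.linalg.norm((Phi[0] @ w[0]) * (Phi[1] @ w[1]) - u)
```

Each inner step is an ordinary least-squares problem in the coefficients of one factor, which is what makes the alternating strategy cheap.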
18 Approximation in canonical tensor subsets
Rank-one canonical tensor subset:
R_1 = {w = w^{(1)} ⊗ ... ⊗ w^{(d)}; w^{(k)} ∈ S^k_{P_k} s.t. w^{(k)}(ξ_k) = φ^{(k)}(ξ_k)^T w^{(k)}}
    = {w = ⟨φ, w^{(1)} ⊗ ... ⊗ w^{(d)}⟩; w^{(k)} ∈ R^{P_k}},
where φ = (φ^{(1)} ⊗ ... ⊗ φ^{(d)})(ξ), with dim(R_1) = Σ_{k=1}^d P_k.
Constrained version:
R^γ_1 = {w = ⟨φ, w^{(1)} ⊗ ... ⊗ w^{(d)}⟩; w^{(k)} ∈ R^{P_k}, ‖w^{(k)}‖_1 ≤ γ_k}.
Rank-m tensor subsets:
R^{γ_1,...,γ_m}_m = {v = Σ_{i=1}^m w_i; w_i ∈ R^{γ_i}_1}
    = {v = ⟨φ, Σ_{i=1}^m w^{(1)}_i ⊗ ... ⊗ w^{(d)}_i⟩; ‖w^{(k)}_i‖_1 ≤ γ^k_i}.
23 Algorithms for adaptive sparse tensor approximation
Progressive construction based on corrections in R^γ_1: greedy construction of a basis {w_i}_{i=1}^m selected in tensor subsets R^{γ_i}_1; compute u_m = Σ_{i=1}^m α_i w_i using regularized least-squares.
[M. Chevreuil, R. Lebrun, A. Nouy, P. Rai, A least-squares method for sparse low rank approximation of multivariate functions, arXiv:1305.0030, 2013]
Direct approximation in R^{γ_1,...,γ_m}_m.
24 Algorithm for progressive construction
Let u_0 = 0. For m ≥ 1:
1. Compute a sparse rank-one correction w_m ∈ R^γ_1 by solving
min_{w^{(1)} ∈ R^{P_1}, ..., w^{(d)} ∈ R^{P_d}} ‖u − u_{m−1} − ⟨φ, w^{(1)} ⊗ ... ⊗ w^{(d)}⟩‖²_Q + Σ_{k=1}^d λ_k ‖w^{(k)}‖_1,
computed using alternating regularized least squares: for 1 ≤ j ≤ d,
min_{w^{(j)} ∈ R^{P_j}} ‖z − Φ^{(j)} w^{(j)}‖²_2 + λ_j ‖w^{(j)}‖_1, where (Φ^{(j)})_{qi} = φ^{(j)}_i(y^q_j) Π_{k≠j} w^{(k)}(y^q_k).
Each Lasso problem is solved with the LARS algorithm, and the optimal solution is selected using the fast leave-one-out cross-validation error estimate [Blatman, Sudret 2011].
2. Set U_m = span{w_i}_{i=1}^m (reduced approximation space) and compute u_m = Σ_{i=1}^m α_i w_i ∈ R^{γ_1,...,γ_m}_m using sparse regularization:
min_{α = (α_1,...,α_m) ∈ R^m} ‖u − Σ_{i=1}^m α_i w_i‖²_Q + λ ‖α‖_1.
The best rank is selected using a cross-validation method.
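The progressive construction above can be sketched compactly. This is a hedged illustration, not the talk's code: ISTA stands in for the LARS solver, the cross-validation of λ and the final update of the coefficients α_i are omitted, and all names are assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_rank_one_correction(Phi, r, lam, n_sweeps=30, n_ista=200):
    """Fit one sparse rank-one correction prod_k w_k(xi_k) to the residual
    r by alternating l1-regularized least squares (ISTA inner solver)."""
    d = len(Phi)
    w = [np.ones(Phi[k].shape[1]) for k in range(d)]
    for _ in range(n_sweeps):
        for k in range(d):
            other = np.ones(Phi[0].shape[0])
            for j in range(d):
                if j != k:
                    other = other * (Phi[j] @ w[j])
            A = Phi[k] * other[:, None]            # design matrix Phi^(k)
            L = np.linalg.norm(A, 2) ** 2 + 1e-12  # Lipschitz constant
            wk = w[k]
            for _ in range(n_ista):                # min ||A w - r||^2 + lam ||w||_1
                wk = soft_threshold(wk - A.T @ (A @ wk - r) / L, lam / (2.0 * L))
            w[k] = wk
    return w

def greedy_sparse_approximation(Phi, u, n_corrections=3, lam=1e-4):
    """Progressive construction u_m = u_{m-1} + sparse rank-one correction."""
    approx = np.zeros_like(u)
    for _ in range(n_corrections):
        w = sparse_rank_one_correction(Phi, u - approx, lam)
        contrib = np.ones_like(u)
        for k in range(len(Phi)):
            contrib = contrib * (Phi[k] @ w[k])
        approx = approx + contrib
    return approx

# usage: rank-2 function u = xi1*xi2 + xi1^2 from Q = 120 samples
rng = np.random.default_rng(1)
ys = rng.uniform(0, 1, size=(120, 2))
u = ys[:, 0] * ys[:, 1] + ys[:, 0] ** 2
Phi = [np.vander(ys[:, k], 4, increasing=True) for k in range(2)]
approx = greedy_sparse_approximation(Phi, u)
rel_err = np.linalg.norm(approx - u) / np.linalg.norm(u)
```

Swapping ISTA for LARS with the leave-one-out estimate, and re-fitting the α_i after each correction, recovers the structure of the algorithm on the slide.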
28 Illustration: checker-board function
Rank-2 function: u(ξ_1, ξ_2) = Σ_{i=1}^2 w^{(1)}_i(ξ_1) w^{(2)}_i(ξ_2), with dimension d = 2, ξ_i ~ U(0, 1), Ξ = (0, 1)².
(Figures: the factor functions w^{(1)}_1(ξ_1), w^{(1)}_2(ξ_1), w^{(2)}_1(ξ_2), w^{(2)}_2(ξ_2) and the resulting checker-board pattern on Ξ.)
Approximation of u in S^1_{P_1} ⊗ S^2_{P_2}: piecewise polynomials of degree p defined on a uniform partition of Ξ_k into s intervals, S^k_{P_k} = P_{p,s}.
29 Illustration: checker-board function
Performance of the method for sparse low-rank approximation: Q = 200 samples; optimal rank m_op selected using 3-fold cross-validation; relative error ε estimated with Monte Carlo integration.
(Table: comparison of different regularizations within alternating least squares, OLS vs. l2 vs. l1, reporting ε and m_op for the approximation spaces R_m(P_{2,3} ⊗ P_{2,3}), R_m(P_{2,6} ⊗ P_{2,6}), R_m(P_{2,12} ⊗ P_{2,12}) and R_m(P_{10,6} ⊗ P_{10,6}).)
With few samples, l1-regularization detects sparsity and gives accurate results; rank 2 is retrieved.
30 Illustration: Friedman function
Friedman function: f(ξ) = 10 sin(π ξ_1 ξ_2) + 20 (ξ_3 − 0.5)² + 10 ξ_4 + 5 ξ_5, with d = 5 and ξ_i, i = 1, ..., 5, uniform random variables over [0, 1].
Approximation in S_P = ⊗_{k=1}^5 S^k_{P_k}, polynomials of degree p: S^k_{P_k} = P_p.
What is the number of samples Q that is just sufficient, given an a priori underlying approximation space, i.e. Q = f(p, d, m)?
[G. Migliorati, F. Nobile, E. von Schwerin, and R. Tempone, 2011]: a sufficient condition for a stable approximation of a multivariate function using OLS is Q ∝ (#(I_P))².
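The Friedman function is straightforward to reproduce for such experiments; the snippet below is a direct transcription of the formula above (the sampling setup is illustrative):

```python
import numpy as np

def friedman(xi):
    """Friedman function, d = 5, xi_i ~ U(0, 1); xi has shape (..., 5)."""
    x1, x2, x3, x4, x5 = (xi[..., i] for i in range(5))
    return (10.0 * np.sin(np.pi * x1 * x2)
            + 20.0 * (x3 - 0.5) ** 2 + 10.0 * x4 + 5.0 * x5)

# Monte Carlo samples for a least-squares experiment
rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=(200, 5))
values = friedman(samples)
```

Only ξ_1, ξ_2, ξ_3 enter nonlinearly, so low polynomial degrees in the remaining variables already suffice.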
32 Illustration: Friedman function
Number of samples needed (no regularization), rank 1.
(Figures: error versus polynomial degree for sample sizes Q = c d(p+1), c ∈ R^+, and Q = c d(p+1)², c ∈ R^+, with curves for several values of c.)
Observed rule: Q ≈ d(p+1)².
33 Illustration: Friedman function
Number of samples needed (no regularization), rank 4.
(Figures: error versus polynomial degree for sample sizes Q = c m d(p+1), c ∈ R^+, and Q = c m d(p+1)², c ∈ R^+, with curves for several values of c.)
Observed rule: Q ≈ m d(p+1)².
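The contrast between this empirical rule and the OLS stability condition in the full tensor space is worth a quick computation (the helper name is an assumption, not from the talk):

```python
def samples_needed(m, d, p, c=1.0):
    """Empirical sample-size rule Q = c * m * d * (p + 1)^2 read off the
    rank-m experiments; c is a constant of order one, and m = 1 recovers
    the rank-one rule."""
    return int(round(c * m * d * (p + 1) ** 2))

# Friedman setting d = 5: a rank-4, degree-3 approximation needs ~320
# samples, while the OLS stability condition in the full tensor space
# scales like (#I_P)^2 = (p + 1)^(2d)
q_low_rank = samples_needed(4, 5, 3)
q_full_ols = (3 + 1) ** (2 * 5)
```

The low-rank rule grows linearly in d and m, versus exponentially in d for the full-space condition.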
34 Illustration: vibration analysis
Discrete problem: u ∈ C^N, (−ω² M + iωC + K) u = f, where K = E K̄ and C = η E K̄, with
E = ξ_1 on the horizontal plate, ξ_2 on the vertical plate,
η = ξ_3 on the horizontal plate, ξ_4 on the vertical plate,
where the ξ_k ~ U(−1, 1), k = 1, ..., 4, and Ξ = (−1, 1)^4.
Approximation of a variable of interest I(u) in S_P = ⊗_{k=1}^4 S^k_{P_k}, polynomials of degree p: S^k_{P_k} = P_p, with I(u)(ξ) = log ‖u_c‖.
35 Illustration: vibration analysis
(Figures: evolution of the error w.r.t. p and sparsity ratio w.r.t. p, for Q = 80 and Q = 200; dashed lines: OLS, solid lines: with l1 regularization.)
36 Illustration: vibration analysis
(Figure: evolution of the error w.r.t. p; dashed lines: OLS, solid lines: with l1 regularization.)
38 Tensor Train format
Successive separations of variables ξ_1 | ξ_2 ... ξ_d, then ξ_2 | ξ_3 ... ξ_d, and so on:
v = Σ_{i_1=1}^{r_1} v^{(1)}_{1,i_1} ⊗ v^{(2,...,d)}_{i_1}
  = Σ_{i_1=1}^{r_1} v^{(1)}_{1,i_1} ⊗ Σ_{i_2=1}^{r_2} v^{(2)}_{i_1 i_2} ⊗ ... ⊗ Σ_{i_{d−1}=1}^{r_{d−1}} v^{(d−1)}_{i_{d−2} i_{d−1}} ⊗ v^{(d)}_{i_{d−1},1}
[Oseledets 2009, ...]
Tensor Train subsets TT_{(1, r_1, ..., r_{d−1}, 1)} = TT_r:
TT_r = {v = Σ_{i ∈ I} ⊗_{k=1}^d v^{(k)}_{i_{k−1} i_k}; v^{(k)}_{i_{k−1} i_k} ∈ S^k_{P_k}},
where I = {i = (i_0, i_1, ..., i_{d−1}, i_d); i_k ∈ {1, ..., r_k}} with r_0 = r_d = 1.
Parameterization: TT_r = {v = F_r(v_1, ..., v_d); v_k ∈ (R^{P_k})^{r_{k−1} × r_k}}.
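To make the parameterization concrete: a function in TT_r is evaluated at a point by contracting each core with the basis values of its variable and chaining the resulting matrices. A minimal sketch (names and the monomial basis are assumptions, not from the talk):

```python
import numpy as np

def tt_eval(cores, phis):
    """Evaluate v(y) for cores[k] of shape (r_{k-1}, P_k, r_k) and
    phis[k] = (phi^(k)_i(y_k))_{i=1..P_k}. Since r_0 = r_d = 1, the
    chained matrix product collapses to a 1 x 1 matrix."""
    v = np.ones((1, 1))
    for core, phi in zip(cores, phis):
        # contract the basis values into the core: an (r_{k-1}, r_k) matrix
        v = v @ np.einsum('ipj,p->ij', core, phi)
    return v[0, 0]

# rank-(1, 1) example with monomial basis (1, t): v(y) = (1 + y1) * y2
cores = [np.array([1.0, 1.0]).reshape(1, 2, 1),
         np.array([0.0, 1.0]).reshape(1, 2, 1)]
value = tt_eval(cores, [np.array([1.0, 0.5]),    # basis values at y1 = 0.5
                        np.array([1.0, 2.0])])   # basis values at y2 = 2.0
```

The storage cost Σ_k r_{k−1} P_k r_k and the evaluation cost are both linear in d, which is what makes the format usable in high dimension.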
40 Algorithms
Alternating least-squares in TT_r: for a given rank vector r,
min_{v ∈ TT_r} ‖u − v‖²_Q + Σ_{k=1}^d λ_k ‖vec(v_k)‖_1.
Question: selection of the rank vector r.
41 Algorithm for adaptive sparse tensor approximation: DMRG
Re-parameterization: consider the tensor w^{(k)} ∈ (S^k_{P_k})^{r_{k−1}} ⊗ (S^{k+1}_{P_{k+1}})^{r_{k+1}}:
w^{(k)} = Σ_{i_k=1}^{r_k} v^{(k,·)}_{i_k} ⊗ v^{(k+1,·)}_{i_k},
v = F^k_r(v, w^{(k)}) = Σ_{i_1=1}^{r_1} ... Σ_{i_{k−1}=1}^{r_{k−1}} Σ_{i_{k+1}=1}^{r_{k+1}} ... Σ_{i_{d−1}=1}^{r_{d−1}} v^{(1)}_{1 i_1} ⊗ ... ⊗ w^{(k)}_{i_{k−1} i_{k+1}} ⊗ ... ⊗ v^{(d)}_{i_{d−1} 1}.
Compute a sparse low-rank w^{(k)} with adaptive rank, using a modified alternating least-squares algorithm. For k ∈ {1, ..., d−1}:
1. Compute a sparse w^{(k)} ∈ (S^k_{P_k})^{r_{k−1}} ⊗ (S^{k+1}_{P_{k+1}})^{r_{k+1}} by solving
min_{w^{(k)} ∈ R^{(r_{k−1} P_k) × (r_{k+1} P_{k+1})}} ‖u − F^k(v, w^{(k)})‖²_Q + λ_k ‖vec(w^{(k)})‖_1.
2. Compute the best low-rank approximation of w^{(k)} in (S^k_{P_k})^{r_{k−1}} ⊗ (S^{k+1}_{P_{k+1}})^{r_{k+1}} using the SVD, with adaptively selected rank r_k*:
v = Σ_{i_1=1}^{r_1} ... Σ_{i_k=1}^{r_k*} ... Σ_{i_{d−1}=1}^{r_{d−1}} v^{(1)}_{1 i_1} ⊗ ... ⊗ v^{(k,·)}_{i_{k−1} i_k} ⊗ v^{(k+1,·)}_{i_k i_{k+1}} ⊗ ... ⊗ v^{(d)}_{i_{d−1} 1}.
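The SVD-based splitting of step 2 can be sketched in isolation: matricize the fitted supercore, truncate its singular values, and read the new rank off the spectrum. A hedged illustration (the function name and the relative threshold `eps` are assumptions):

```python
import numpy as np

def split_supercore(W, r_prev, P_k, P_k1, r_next, eps=1e-10):
    """DMRG-style splitting: view the fitted supercore w^(k) as an
    (r_{k-1} P_k) x (P_{k+1} r_{k+1}) matrix and truncate its SVD,
    which selects the new TT rank r_k adaptively."""
    M = W.reshape(r_prev * P_k, P_k1 * r_next)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r_k = max(1, int(np.sum(s > eps * s[0])))   # adaptive rank from the spectrum
    left = (U[:, :r_k] * s[:r_k]).reshape(r_prev, P_k, r_k)
    right = Vt[:r_k].reshape(r_k, P_k1, r_next)
    return left, right, r_k

# a supercore built from a single outer product has TT rank 1
a, b = np.array([1.0, 2.0, 3.0]), np.array([2.0, 5.0])
left, right, r_k = split_supercore(np.outer(a, b), 1, 3, 2, 1)
recon = np.einsum('ipk,kqj->ipqj', left, right).reshape(3, 2)
```

Absorbing the singular values into the left factor is one common convention; orthogonality of the other side then helps the next sweep.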
44 Illustration: sine of a sum
Sine function u(ξ) = sin(ξ_1 + ξ_2 + ... + ξ_6), with ξ_i ~ U(−1, 1), Ξ = (−1, 1)^6.
(Figures: evolution of the error with respect to the sample size Q, for approximation in S_P = ⊗_{k=1}^6 S^k_{P_k} with S^k_{P_k} = P_2 and with a higher polynomial degree; curves: Tensor Train (MALS), Least Squares + l1 Greedy R_1, R_m.)
45 Illustration: borehole function
The borehole function models water flow through a borehole; dimension d = 8:
f(ξ) = 2π T_u (H_u − H_l) / [ ln(r/r_w) (1 + 2 L T_u / (ln(r/r_w) r_w² K_w) + T_u/T_l) ]
- r_w: radius of borehole (m), normal, μ = 0.10
- r: radius of influence (m), lognormal, μ = 7.71
- T_u: transmissivity of upper aquifer (m²/yr), U[63070, 115600]
- H_u: potentiometric head of upper aquifer (m), U[990, 1110]
- T_l: transmissivity of lower aquifer (m²/yr), U[63.1, 116]
- H_l: potentiometric head of lower aquifer (m), U[700, 820]
- L: length of borehole (m), U[1120, 1680]
- K_w: hydraulic conductivity of borehole (m/yr), U[9855, 12045]
Approximation in S_P = ⊗_{k=1}^8 S^k_{P_k}, polynomials of degree p: S^k_{P_k} = P_p; p = 2: P = 6561; p = 3: P = 65536.
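The borehole model is a direct transcription of the formula above; the evaluation point below is an arbitrary mid-range choice, not from the talk:

```python
import numpy as np

def borehole(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """Water flow through a borehole (m^3/yr); argument names follow
    the parameter table."""
    log_ratio = np.log(r / rw)
    return (2.0 * np.pi * Tu * (Hu - Hl)
            / (log_ratio * (1.0 + 2.0 * L * Tu / (log_ratio * rw**2 * Kw)
                            + Tu / Tl)))

# evaluation at typical mid-range parameter values
flow = borehole(rw=0.10, r=2230.0, Tu=89335.0, Hu=1050.0,
                Tl=89.55, Hl=760.0, L=1400.0, Kw=10950.0)
```

The function is smooth in all eight variables, which is why low TT ranks and moderate polynomial degrees capture it well.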
46 Illustration: borehole function
Behavior of the algorithm. (Figures: evolution of the error w.r.t. Q for p = 2 and p = 3; TT ranks r_1, ..., r_7 versus DMRG iteration for Q = 200, p = 3.)
47 Illustration: canister
Stochastic PDE:
∂u/∂t − ∇·(κ ∇u) + c (d·∇u) = −σ u on Ω × T,
u = ξ_1 on Γ_1 × T, u = 0 on Γ_2 × T, ∇u·n = 0 on (∂Ω \ (Γ_1 ∪ Γ_2)) × T,
with:
- ξ_1: u(t = 0) ~ U[0.8, 1.2] on Ω
- ξ_2: σ ~ U[8, 12] on Ω_2
- ξ_3: σ ~ U[0.8, 1] on Ω_1
- ξ_4: c ~ U[1, 5]
- ξ_5: κ ~ U[0.02, 0.03]
Approximation of a variable of interest I(u) in S_P = ⊗_{k=1}^5 S^k_{P_k}, polynomials of degree p: S^k_{P_k} = P_p, with I(u) = ∫_T ∫_{Ω_3} u(x, t) dx dt.
48 Illustration: canister
(Figures: evolution of the error with respect to the sample size Q, for approximation in S_P = ⊗_{k=1}^5 S^k_{P_k} with S^k_{P_k} = P_2 and S^k_{P_k} = P_3; curves: Tensor Train (MALS), Least Squares + l1 Greedy R_1.)
49 Illustration: canister
Order of separation of variables: comparison of two successive separations in the Tensor Train format, ξ_1 | ξ_2...ξ_5, ξ_2 | ξ_3ξ_4ξ_5, ξ_3 | ξ_4ξ_5, ξ_4 | ξ_5, versus ξ_1 | ξ_2...ξ_5, ξ_5 | ξ_2ξ_3ξ_4, ξ_2 | ξ_3ξ_4, ξ_3 | ξ_4.
(Figures: evolution of the error with respect to the sample size for the two orderings.)
53 Conclusion
Least-squares method for sparse low-rank approximation of high-dimensional functions:
- A non-intrusive method
- Detects and exploits low-rank structure and sparsity
- Adaptive rank
Outlook:
- Further analyses of the sufficient number of samples needed to find an approximation in a tensor subset
- Adaptivity with respect to the polynomial degree of the underlying approximation spaces
- Strategies for an optimal separation of variables (choice of tree)
This research was supported by EADS Innovation Works and by the French National Research Agency (ANR), CHORUS project.
More informationSteps in Uncertainty Quantification
Steps in Uncertainty Quantification Challenge: How do we do uncertainty quantification for computationally expensive models? Example: We have a computational budget of 5 model evaluations. Bayesian inference
More informationLow Rank Approximation Lecture 7. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL
Low Rank Approximation Lecture 7 Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL daniel.kressner@epfl.ch 1 Alternating least-squares / linear scheme General setting:
More informationSolving the steady state diffusion equation with uncertainty Final Presentation
Solving the steady state diffusion equation with uncertainty Final Presentation Virginia Forstall vhfors@gmail.com Advisor: Howard Elman elman@cs.umd.edu Department of Computer Science May 6, 2012 Problem
More informationStrain and stress computations in stochastic finite element. methods
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING Int. J. Numer. Meth. Engng 2007; 00:1 6 [Version: 2002/09/18 v2.02] Strain and stress computations in stochastic finite element methods Debraj
More informationUncertainty Quantification for multiscale kinetic equations with high dimensional random inputs with sparse grids
Uncertainty Quantification for multiscale kinetic equations with high dimensional random inputs with sparse grids Shi Jin University of Wisconsin-Madison, USA Kinetic equations Different Q Boltmann Landau
More informationUncertainty Quantification and related areas - An overview
Uncertainty Quantification and related areas - An overview Alexander Litvinenko, talk given at Department of Mathematical Sciences, Durham University Bayesian Computational Statistics & Modeling, KAUST
More informationUncertainty Propagation and Global Sensitivity Analysis in Hybrid Simulation using Polynomial Chaos Expansion
Uncertainty Propagation and Global Sensitivity Analysis in Hybrid Simulation using Polynomial Chaos Expansion EU-US-Asia workshop on hybrid testing Ispra, 5-6 October 2015 G. Abbiati, S. Marelli, O.S.
More informationUncertainty Quantification in Discrete Fracture Network Models
Uncertainty Quantification in Discrete Fracture Network Models Claudio Canuto Dipartimento di Scienze Matematiche, Politecnico di Torino claudio.canuto@polito.it Joint work with Stefano Berrone, Sandra
More informationAN ADAPTIVE REDUCED BASIS COLLOCATION METHOD BASED ON PCM ANOVA DECOMPOSITION FOR ANISOTROPIC STOCHASTIC PDES
AN ADAPTIVE REDUCED BASIS COLLOCATION METHOD BASED ON PCM ANOVA DECOMPOSITION FOR ANISOTROPIC STOCHASTIC PDES Heyrim Cho & Howard Elman 2 Department of Mathematics, University of Maryland, College Park,
More informationStochastic methods for solving partial differential equations in high dimension
Stochastic methods for solving partial differential equations in high dimension Marie Billaud-Friess Joint work with : A. Macherey, A. Nouy & C. Prieur marie.billaud-friess@ec-nantes.fr Centrale Nantes,
More informationStochastic Elastic-Plastic Finite Element Method for Performance Risk Simulations
Stochastic Elastic-Plastic Finite Element Method for Performance Risk Simulations Boris Jeremić 1 Kallol Sett 2 1 University of California, Davis 2 University of Akron, Ohio ICASP Zürich, Switzerland August
More informationA Unified Framework for Uncertainty and Sensitivity Analysis of Computational Models with Many Input Parameters
A Unified Framework for Uncertainty and Sensitivity Analysis of Computational Models with Many Input Parameters C. F. Jeff Wu H. Milton Stewart School of Industrial and Systems Engineering Georgia Institute
More informationRank reduction of parameterized time-dependent PDEs
Rank reduction of parameterized time-dependent PDEs A. Spantini 1, L. Mathelin 2, Y. Marzouk 1 1 AeroAstro Dpt., MIT, USA 2 LIMSI-CNRS, France UNCECOMP 2015 (MIT & LIMSI-CNRS) Rank reduction of parameterized
More informationEstimating functional uncertainty using polynomial chaos and adjoint equations
0. Estimating functional uncertainty using polynomial chaos and adjoint equations February 24, 2011 1 Florida State University, Tallahassee, Florida, Usa 2 Moscow Institute of Physics and Technology, Moscow,
More informationLow-rank techniques applied to moment equations for the stochastic Darcy problem with lognormal permeability
Low-rank techniques applied to moment equations for the stochastic arcy problem with lognormal permeability Francesca Bonizzoni 1,2 and Fabio Nobile 1 1 CSQI-MATHICSE, EPFL, Switzerland 2 MOX, Politecnico
More informationextended Stochastic Finite Element Method for the numerical simulation of heterogenous materials with random material interfaces
extended Stochastic Finite Element Method for the numerical simulation of heterogenous materials with random material interfaces Anthony Nouy, Alexandre Clement To cite this version: Anthony Nouy, Alexandre
More informationICES REPORT A posteriori error control for partial differential equations with random data
ICES REPORT 13-8 April 213 A posteriori error control for partial differential equations with random data by Corey M. Bryant, Serge Prudhomme, and Timothy Wildey The Institute for Computational Engineering
More informationParametric Problems, Stochastics, and Identification
Parametric Problems, Stochastics, and Identification Hermann G. Matthies a B. Rosić ab, O. Pajonk ac, A. Litvinenko a a, b University of Kragujevac c SPT Group, Hamburg wire@tu-bs.de http://www.wire.tu-bs.de
More informationHierarchical Parallel Solution of Stochastic Systems
Hierarchical Parallel Solution of Stochastic Systems Second M.I.T. Conference on Computational Fluid and Solid Mechanics Contents: Simple Model of Stochastic Flow Stochastic Galerkin Scheme Resulting Equations
More informationGeneralized Spectral Decomposition for Stochastic Non Linear Problems
Generalized Spectral Decomposition for Stochastic Non Linear Problems Anthony Nouy O.P. Le Maître Preprint submitted to Journal of Computational Physics Abstract We present an extension of the Generalized
More informationPerformance Evaluation of Generalized Polynomial Chaos
Performance Evaluation of Generalized Polynomial Chaos Dongbin Xiu, Didier Lucor, C.-H. Su, and George Em Karniadakis 1 Division of Applied Mathematics, Brown University, Providence, RI 02912, USA, gk@dam.brown.edu
More informationSolving the stochastic steady-state diffusion problem using multigrid
IMA Journal of Numerical Analysis (2007) 27, 675 688 doi:10.1093/imanum/drm006 Advance Access publication on April 9, 2007 Solving the stochastic steady-state diffusion problem using multigrid HOWARD ELMAN
More informationSENSITIVITY ANALYSIS IN NUMERICAL SIMULATION OF MULTIPHASE FLOW FOR CO 2 STORAGE IN SALINE AQUIFERS USING THE PROBABILISTIC COLLOCATION APPROACH
XIX International Conference on Water Resources CMWR 2012 University of Illinois at Urbana-Champaign June 17-22,2012 SENSITIVITY ANALYSIS IN NUMERICAL SIMULATION OF MULTIPHASE FLOW FOR CO 2 STORAGE IN
More informationKeywords: Sonic boom analysis, Atmospheric uncertainties, Uncertainty quantification, Monte Carlo method, Polynomial chaos method.
Blucher Mechanical Engineering Proceedings May 2014, vol. 1, num. 1 www.proceedings.blucher.com.br/evento/10wccm SONIC BOOM ANALYSIS UNDER ATMOSPHERIC UNCERTAINTIES BY A NON-INTRUSIVE POLYNOMIAL CHAOS
More informationA reduced-order stochastic finite element analysis for structures with uncertainties
A reduced-order stochastic finite element analysis for structures with uncertainties Ji Yang 1, Béatrice Faverjon 1,2, Herwig Peters 1, icole Kessissoglou 1 1 School of Mechanical and Manufacturing Engineering,
More informationIntro BCS/Low Rank Model Inference/Comparison Summary References. UQTk. A Flexible Python/C++ Toolkit for Uncertainty Quantification
A Flexible Python/C++ Toolkit for Uncertainty Quantification Bert Debusschere, Khachik Sargsyan, Cosmin Safta, Prashant Rai, Kenny Chowdhary bjdebus@sandia.gov Sandia National Laboratories, Livermore,
More informationA Stochastic Collocation based. for Data Assimilation
A Stochastic Collocation based Kalman Filter (SCKF) for Data Assimilation Lingzao Zeng and Dongxiao Zhang University of Southern California August 11, 2009 Los Angeles Outline Introduction SCKF Algorithm
More informationCERTAIN THOUGHTS ON UNCERTAINTY ANALYSIS FOR DYNAMICAL SYSTEMS
CERTAIN THOUGHTS ON UNCERTAINTY ANALYSIS FOR DYNAMICAL SYSTEMS Puneet Singla Assistant Professor Department of Mechanical & Aerospace Engineering University at Buffalo, Buffalo, NY-1426 Probabilistic Analysis
More informationMULTI-ELEMENT GENERALIZED POLYNOMIAL CHAOS FOR ARBITRARY PROBABILITY MEASURES
SIAM J. SCI. COMPUT. Vol. 8, No. 3, pp. 9 98 c 6 Society for Industrial and Applied Mathematics MULTI-ELEMENT GENERALIZED POLYNOMIAL CHAOS FOR ARBITRARY PROBABILITY MEASURES XIAOLIANG WAN AND GEORGE EM
More informationNumerical Analysis of Elliptic PDEs with Random Coefficients
Numerical Analysis of Elliptic PDEs with Random Coefficients (Lecture I) Robert Scheichl Department of Mathematical Sciences University of Bath Workshop on PDEs with Random Coefficients Weierstrass Institute,
More informationNumerical tensor methods and their applications
Numerical tensor methods and their applications 14 May 2013 All lectures 4 lectures, 2 May, 08:00-10:00: Introduction: ideas, matrix results, history. 7 May, 08:00-10:00: Novel tensor formats (TT, HT,
More informationMulti-Element Probabilistic Collocation Method in High Dimensions
Multi-Element Probabilistic Collocation Method in High Dimensions Jasmine Foo and George Em Karniadakis Division of Applied Mathematics, Brown University, Providence, RI 02912 USA Abstract We combine multi-element
More informationProbabilistic collocation and Lagrangian sampling for tracer transport in randomly heterogeneous porous media
Eidgenössische Technische Hochschule Zürich Ecole polytechnique fédérale de Zurich Politecnico federale di Zurigo Swiss Federal Institute of Technology Zurich Probabilistic collocation and Lagrangian sampling
More informationSuboptimal feedback control of PDEs by solving Hamilton-Jacobi Bellman equations on sparse grids
Suboptimal feedback control of PDEs by solving Hamilton-Jacobi Bellman equations on sparse grids Jochen Garcke joint work with Axel Kröner, INRIA Saclay and CMAP, Ecole Polytechnique Ilja Kalmykov, Universität
More informationA reduced basis method and ROM-based optimization for batch chromatography
MAX PLANCK INSTITUTE MedRed, 2013 Magdeburg, Dec 11-13 A reduced basis method and ROM-based optimization for batch chromatography Yongjin Zhang joint work with Peter Benner, Lihong Feng and Suzhou Li Max
More informationLecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion:
tifica Lecture 1: Computation of Karhunen-Loeve Expansion: Alexander Litvinenko http://sri-uq.kaust.edu.sa/ Stochastic PDEs We consider div(κ(x, ω) u) = f (x, ω) in G, u = 0 on G, with stochastic coefficients
More informationA Posteriori Adaptive Low-Rank Approximation of Probabilistic Models
A Posteriori Adaptive Low-Rank Approximation of Probabilistic Models Rainer Niekamp and Martin Krosche. Institute for Scientific Computing TU Braunschweig ILAS: 22.08.2011 A Posteriori Adaptive Low-Rank
More informationBlock-diagonal preconditioning for spectral stochastic finite-element systems
IMA Journal of Numerical Analysis (2009) 29, 350 375 doi:0.093/imanum/drn04 Advance Access publication on April 4, 2008 Block-diagonal preconditioning for spectral stochastic finite-element systems CATHERINE
More informationarxiv: v2 [math.na] 8 Apr 2017
A LOW-RANK MULTIGRID METHOD FOR THE STOCHASTIC STEADY-STATE DIFFUSION PROBLEM HOWARD C. ELMAN AND TENGFEI SU arxiv:1612.05496v2 [math.na] 8 Apr 2017 Abstract. We study a multigrid method for solving large
More informationCharacterization of heterogeneous hydraulic conductivity field via Karhunen-Loève expansions and a measure-theoretic computational method
Characterization of heterogeneous hydraulic conductivity field via Karhunen-Loève expansions and a measure-theoretic computational method Jiachuan He University of Texas at Austin April 15, 2016 Jiachuan
More informationSTOCHASTIC SAMPLING METHODS
STOCHASTIC SAMPLING METHODS APPROXIMATING QUANTITIES OF INTEREST USING SAMPLING METHODS Recall that quantities of interest often require the evaluation of stochastic integrals of functions of the solutions
More informationStochastic Spectral Methods for Uncertainty Quantification
Stochastic Spectral Methods for Uncertainty Quantification Olivier Le Maître 1,2,3, Omar Knio 1,2 1- Duke University, Durham, North Carolina 2- KAUST, Saudi-Arabia 3- LIMSI CNRS, Orsay, France 38th Conf.
More informationEstimation of probability density functions by the Maximum Entropy Method
Estimation of probability density functions by the Maximum Entropy Method Claudio Bierig, Alexey Chernov Institute for Mathematics Carl von Ossietzky University Oldenburg alexey.chernov@uni-oldenburg.de
More informationDownloaded 06/07/18 to Redistribution subject to SIAM license or copyright; see
SIAM J. SCI. COMPUT. Vol. 40, No. 1, pp. A366 A387 c 2018 Society for Industrial and Applied Mathematics WEIGHTED APPROXIMATE FEKETE POINTS: SAMPLING FOR LEAST-SQUARES POLYNOMIAL APPROXIMATION LING GUO,
More informationEfficient domain decomposition methods for the time-harmonic Maxwell equations
Efficient domain decomposition methods for the time-harmonic Maxwell equations Marcella Bonazzoli 1, Victorita Dolean 2, Ivan G. Graham 3, Euan A. Spence 3, Pierre-Henri Tournier 4 1 Inria Saclay (Defi
More informationParameter Selection Techniques and Surrogate Models
Parameter Selection Techniques and Surrogate Models Model Reduction: Will discuss two forms Parameter space reduction Surrogate models to reduce model complexity Input Representation Local Sensitivity
More informationDipartimento di Scienze Matematiche
Exploiting parallel computing in Discrete Fracture Network simulations: an inherently parallel optimization approach Stefano Berrone stefano.berrone@polito.it Team: Matìas Benedetto, Andrea Borio, Claudio
More informationX-SFEM, a computational technique based on X-FEM to deal with random shapes
X-SFEM, a computational technique based on X-FEM to deal with random shapes Anthony Nouy, Franck Schoefs, Nicolas Moës To cite this version: Anthony Nouy, Franck Schoefs, Nicolas Moës. X-SFEM, a computational
More informationNumerical Approximation of Stochastic Elliptic Partial Differential Equations
Numerical Approximation of Stochastic Elliptic Partial Differential Equations Hermann G. Matthies, Andreas Keese Institut für Wissenschaftliches Rechnen Technische Universität Braunschweig wire@tu-bs.de
More informationError Budgets: A Path from Uncertainty Quantification to Model Validation
Error Budgets: A Path from Uncertainty Quantification to Model Validation Roger Ghanem Aerospace and Mechanical Engineering Civil Engineering University of Southern California Los Angeles Advanced Simulation
More informationA Polynomial Chaos Approach to Robust Multiobjective Optimization
A Polynomial Chaos Approach to Robust Multiobjective Optimization Silvia Poles 1, Alberto Lovison 2 1 EnginSoft S.p.A., Optimization Consulting Via Giambellino, 7 35129 Padova, Italy s.poles@enginsoft.it
More informationUncertainty Evolution In Stochastic Dynamic Models Using Polynomial Chaos
Noname manuscript No. (will be inserted by the editor) Uncertainty Evolution In Stochastic Dynamic Models Using Polynomial Chaos Umamaheswara Konda Puneet Singla Tarunraj Singh Peter Scott Received: date
More informationNonparametric Bayesian Methods (Gaussian Processes)
[70240413 Statistical Machine Learning, Spring, 2015] Nonparametric Bayesian Methods (Gaussian Processes) Jun Zhu dcszj@mail.tsinghua.edu.cn http://bigml.cs.tsinghua.edu.cn/~jun State Key Lab of Intelligent
More informationStochastic structural dynamic analysis with random damping parameters
Stochastic structural dynamic analysis with random damping parameters K. Sepahvand 1, F. Saati Khosroshahi, C. A. Geweth and S. Marburg Chair of Vibroacoustics of Vehicles and Machines Department of Mechanical
More informationDynamical low rank approximation in hierarchical tensor format
Dynamical low rank approximation in hierarchical tensor format R. Schneider (TUB Matheon) John von Neumann Lecture TU Munich, 2012 Motivation Equations describing complex systems with multi-variate solution
More informationSparse polynomial chaos expansions of frequency response functions using stochastic frequency transformation
Sparse polynomial chaos expansions of frequency response functions using stochastic frequency transformation V. Yaghoubi, S. Marelli, B. Sudret, T. Abrahamsson, To cite this version: V. Yaghoubi, S. Marelli,
More informationarxiv: v2 [physics.comp-ph] 15 Sep 2015
On Uncertainty Quantification of Lithium-ion Batteries: Application to an LiC 6 /LiCoO 2 cell arxiv:1505.07776v2 [physics.comp-ph] 15 Sep 2015 Mohammad Hadigol a, Kurt Maute a, Alireza Doostan a, a Aerospace
More informationInfluence of uncertainties on the B(H) curves on the flux linkage of a turboalternator
Influence of uncertainties on the B(H) curves on the flux linkage of a turboalternator Hung Mac, Stéphane Clenet, Karim Beddek, Loic Chevallier, Julien Korecki, Olivier Moreau, Pierre Thomas To cite this
More informationUncertainties in structural dynamics: overview and comparative analysis of methods
Uncertainties in structural dynamics: overview and comparative analysis of methods Sami Daouk a,,françoislouf a,olivierdorival b, Laurent Champaney a, Sylvie Audebert c a LMT (ENS-Cachan, CNRS, Université
More informationOn Probabilistic Yielding of Materials
COMMUNICATIONS IN NUMERICAL METHODS IN ENGINEERING Commun. Numer. Meth. Engng 2; :1 6 [Version: 22/9/18 v1.2] On Probabilistic Yielding of Materials Boris Jeremić and Kallol Sett Department of Civil and
More information