Simulating with uncertainty: the rough surface scattering problem


Simulating with uncertainty: the rough surface scattering problem
Uday Khankhoje
Assistant Professor, Electrical Engineering, Indian Institute of Technology Madras

A problem often encountered in numerical analysis
1. A simulation σ(x), x ∈ D, is computationally expensive.
2. The input x or the domain D displays uncertainty, e.g. in simulating the modes of an optical fibre, the refractive index might not be exactly known, or the boundary may be rough.
In this light, more useful than σ(x) is the expectation ⟨σ(x)⟩.
The std. dev. in σ(x) due to parameter uncertainty is also interesting.

Computing the expectation and standard deviation
Assumptions
Setup: d-dimensional x = [x_1, x_2, ..., x_d]
The x_i are mutually independent, each distributed with a known pdf ρ_i
Construct the multivariate pdf ρ(x) = ∏_{i=1}^{d} ρ_i(x_i)
Quantities of interest
Expectation: ⟨σ(x)⟩ = ∫_D σ(x) ρ(x) dx
Std. dev.: √( ∫_D (σ(x) − ⟨σ(x)⟩)² ρ(x) dx )

Any problem with ⟨σ(x)⟩ = ∫_D σ(x)ρ(x)dx? Yes! σ(x) is only known numerically.
Recall: σ(x) is the output of a (possibly expensive) simulation.
Monte Carlo method
The most commonly used method to estimate the expectation.
Instantiate several x_i's and approximate ⟨σ(x)⟩ ≈ (1/n) Σ_{i=1}^{n} σ(x_i)
Convergence rate: independent of d, but slow, O(1/√n)
Extra: create a histogram from the σ(x_i) values to estimate the pdf of σ.
Objective of this talk: discuss alternatives to Monte Carlo
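
As a minimal sketch of the Monte Carlo recipe above (the Gaussian inputs and the cheap test function standing in for an expensive σ(x) are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(x):
    # Stand-in for an expensive simulation; a cheap test function here.
    return np.exp(-np.sum(x**2, axis=-1))

d, n = 5, 10_000
samples = rng.standard_normal((n, d))     # x_i drawn from the assumed input pdf
vals = sigma(samples)

mean_mc = vals.mean()                     # estimate of <sigma(x)>
std_mc = vals.std(ddof=1)                 # std. dev. of sigma due to input uncertainty
mc_err = std_mc / np.sqrt(n)              # O(1/sqrt(n)) statistical error of the mean
print(mean_mc, std_mc, mc_err)

# Histogram of the sigma(x_i) values approximates the pdf of sigma.
hist, edges = np.histogram(vals, bins=50, density=True)
```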

The alternatives to Monte Carlo
Broadly, two families of methods will be discussed:
1. Galerkin Polynomial Chaos (gPC): need to code a new solver; works for a small range of d values
2. Stochastic Collocation (SC): uses an existing solver; works for a small-to-medium range of d values; attractive because it is compatible with commercial software
What about Monte Carlo? Use it for benchmarking and for a large range of d values.

Start with Stochastic Collocation: polynomial interpolation
For simplicity, consider a 1-D case of the simulation, σ(x)
Polynomial interpolation and integration
Choose n points at which to evaluate σ(x) and construct a degree n−1 polynomial (Lagrange interpolation): σ_{n−1}(x) = Σ_{i=1}^{n} σ(x_i) L_i(x)
Recall, L_i(x) = ∏_{j=1, j≠i}^{n} (x − x_j)/(x_i − x_j)
Compute the expectation as ⟨σ(x)⟩ ≈ Σ_{i=1}^{n} σ(x_i) ∫ L_i(x)ρ(x)dx = Σ_{i=1}^{n} σ(x_i) α_i
The α_i can be pre-computed, and ⟨σ(x)⟩ is accurate to order n−1. Can we do better?
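
A small sketch of this interpolate-then-integrate recipe; the nodes, the uniform ρ(x) = 1/2 on [−1, 1], and the cosine stand-in for σ(x) are assumptions of the example:

```python
import numpy as np
from scipy.integrate import quad

nodes = np.linspace(-1.0, 1.0, 5)      # interpolation points (an arbitrary choice here)
rho = lambda x: 0.5                    # uniform pdf on [-1, 1]

def lagrange_basis(i, x):
    """L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)."""
    num = np.prod([x - xj for j, xj in enumerate(nodes) if j != i], axis=0)
    den = np.prod([nodes[i] - xj for j, xj in enumerate(nodes) if j != i])
    return num / den

# Pre-computed weights alpha_i = int L_i(x) rho(x) dx.
alpha = np.array([quad(lambda x, i=i: lagrange_basis(i, x) * rho(x), -1, 1)[0]
                  for i in range(len(nodes))])

sigma = np.cos                          # stand-in for the expensive simulation sigma(x)
mean_est = np.dot(sigma(nodes), alpha)  # <sigma(x)> ~ sum_i sigma(x_i) alpha_i
print(mean_est, quad(lambda x: np.cos(x) * rho(x), -1, 1)[0])   # compare with direct integral
```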

Stochastic Collocation: Gaussian quadrature
Yes! Gaussian quadrature
Use the theory of orthogonal polynomials, i.e. find polynomials s.t. ∫ p_m(x) p_n(x) ρ(x) dx = δ_{m,n}
Pick the n points {x_i} to be the roots of the n-th degree polynomial p_n(x)
This also gives ⟨σ(x)⟩ ≈ Σ_{i=1}^{n} σ(x_i) α_i, but now the integral is accurate to order 2n−1
Common orthogonal polynomials
Legendre: x ∈ [−1, 1], ρ(x) = 1.
Jacobi: x ∈ (−1, 1), ρ(x) = (1 − x)^α (1 + x)^β, α, β > −1.
Hermite: x ∈ (−∞, ∞), ρ(x) = e^{−x²}.
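
NumPy exposes the Gauss nodes and weights directly; a short sketch for a uniform and a normal input (the cosine stand-in for σ is again only illustrative):

```python
import numpy as np

sigma = np.cos                                  # stand-in for the expensive simulation

# Uniform input on [-1, 1]: Gauss-Legendre nodes/weights (weight function 1).
x, w = np.polynomial.legendre.leggauss(5)
mean_uniform = 0.5 * np.dot(w, sigma(x))        # the pdf rho(x) = 1/2 folded into the sum

# Standard normal input: probabilists' Gauss-Hermite (weight exp(-x^2/2)).
z, v = np.polynomial.hermite_e.hermegauss(5)
mean_normal = np.dot(v, sigma(z)) / np.sqrt(2 * np.pi)   # normalise the weight to a pdf

print(mean_uniform, mean_normal)                # both exact to polynomial order 2n-1 = 9
```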

But our problem is d-dimensional!
Solution: extend one-dimensional Gaussian quadrature to d dimensions
With x = [x_1, x_2, ..., x_d], express σ(x) = ∏_{i=1}^{d} σ_i(x_i) (implicitly)
The integral splits up: ∫ σ(x)ρ(x)dx = ∏_i ∫ σ_i(x_i) ρ_i(x_i) dx_i
Apply n-point Gaussian quadrature (GQ) in each dimension: ⟨σ(x)⟩ = ∏_{i=1}^{d} Σ_{j=1}^{n} σ_i(x_{i,j}) α_j. Combine the σ_i's to get σ(x_k).
An example in 2D (x, y) with a 2-point GQ in each dimension:
⟨σ(x, y)⟩ = [σ_x(x_1)α_1 + σ_x(x_2)α_2][σ_y(y_1)α_1 + σ_y(y_2)α_2]
= σ(x_1, y_1)α_{1,1} + σ(x_1, y_2)α_{1,2} + σ(x_2, y_1)α_{2,1} + σ(x_2, y_2)α_{2,2}
All dimensions are multiplied: called the tensor product rule
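
A direct sketch of the tensor product rule for d independent standard normal inputs (the test function is a placeholder; note the n^d cost):

```python
import itertools
import numpy as np

def tensor_quadrature_mean(sigma, d, n):
    """Tensor-product Gauss-Hermite rule for standard normal inputs: n**d evaluations."""
    z1, w1 = np.polynomial.hermite_e.hermegauss(n)
    w1 = w1 / np.sqrt(2 * np.pi)                       # normalise to the standard normal pdf
    total = 0.0
    for idx in itertools.product(range(n), repeat=d):  # all n**d node combinations
        node = np.array([z1[i] for i in idx])
        weight = np.prod([w1[i] for i in idx])         # alpha_{i1,...,id} = product of 1-D weights
        total += weight * sigma(node)
    return total

sigma = lambda z: np.exp(-np.sum(z**2))
print(tensor_quadrature_mean(sigma, d=3, n=5))         # 125 calls to the "solver"
```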

Visualizing the tensor product rule
E.g. function evaluation points in a 2D 5-point GQ (25 evals): denote as σ_{5,5}
[Figure: the 5 × 5 grid of (x, y) evaluation points]
Curse of dimensionality is clear: number of function evaluations = n^d. Can we do better?
Theorem (Mysovskikh 1968, Möller 1976): To attain a polynomial exactness equal to m, the (optimal) required number of grid points has lower and upper bounds given by
N_min = C(d + ⌊m/2⌋, ⌊m/2⌋) ≤ N_opt ≤ N_max = C(d + m, m)
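
The bounds are easy to tabulate; a sketch (the ⌊m/2⌋ in the lower bound follows the reconstruction above and should be treated as an assumption):

```python
from math import comb, floor

def cubature_point_bounds(d, m):
    """Lower/upper bounds on the number of points needed for polynomial exactness m
    in d dimensions (floor(m/2) in the lower bound is an assumption of this sketch)."""
    k = floor(m / 2)
    return comb(d + k, k), comb(d + m, m)

for d in (2, 5, 10):
    print(d, cubature_point_bounds(d, 9), 5**d)   # compare with the 5**d tensor rule (exactness 9)
```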

Sparse Grids: Smolyak (1963)
1. Every σ_{i,j} is an approximation to the actual integral, σ
2. Introduce a level parameter k = (i + j) and let the max level be denoted by l = max{k}
Telescope a series of different levels to approximate σ
e.g. (max) level l = 4: σ ≈ [σ_{3,1} + σ_{1,3} + σ_{2,2}]_{k=4} − [σ_{2,1} + σ_{1,2}]_{k=3}
e.g. (max) level l = 5: σ ≈ [σ_{3,2} + σ_{2,3} + σ_{1,4} + σ_{4,1}] − [σ_{3,1} + σ_{1,3} + σ_{2,2}]
As the max level increases, the approximation becomes better

Visualizing the sparse grid rule
σ ≈ [σ_{3,1} + σ_{1,3} + σ_{2,2}]_{k=4} − [σ_{2,1} + σ_{1,2}]_{k=3}
[Figure: sparse grid points vs. tensor grid points in 2D]
Substantial savings in higher dims: 13 points (SG) vs. 25 points (TP)!

Sparse Grid rule: finer points
TP rule: ⟨σ⟩ = Σ_{i_1=1}^{n_1} ... Σ_{i_d=1}^{n_d} σ(x_{i_1}, ..., x_{i_d}) α_{i_1} ... α_{i_d} = (Q^{n_1} ⊗ Q^{n_2} ⊗ ... ⊗ Q^{n_d})[σ]
n_i-point quadrature in the i-th dim; n_1 ⋯ n_d points
SG rule: with max level l, and |k| = k_1 + k_2 + ... + k_d:
⟨σ⟩ = Σ_{l−d+1 ≤ |k| ≤ l} (−1)^{l−|k|} C(d−1, l−|k|) (Q^{k_1} ⊗ Q^{k_2} ⊗ ... ⊗ Q^{k_d})[σ]
k_i-point quadrature in the i-th dim; ≈ 2^l d^l / l! points
Nested quadrature rules (e.g. Clenshaw-Curtis, Gauss-Kronrod) re-use function evaluation points between levels.
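
A direct, unoptimised sketch of this combination formula, using probabilists' Gauss-Hermite rules and mapping level k_i to a k_i-point rule (both are choices made for the sketch; nested rules would re-use points across levels):

```python
import itertools
from math import comb
import numpy as np

def gauss_hermite_rule(n):
    """n-point probabilists' Gauss-Hermite rule, normalised to the standard normal pdf."""
    z, w = np.polynomial.hermite_e.hermegauss(n)
    return z, w / np.sqrt(2 * np.pi)

def smolyak_mean(sigma, d, level):
    """Smolyak combination of 1-D rules; level k_i in a dimension maps to a k_i-point rule."""
    total = 0.0
    for k in itertools.product(range(1, level + 1), repeat=d):
        K = sum(k)
        if not (level - d + 1 <= K <= level):
            continue
        coeff = (-1) ** (level - K) * comb(d - 1, level - K)
        rules = [gauss_hermite_rule(ki) for ki in k]
        for idx in itertools.product(*(range(ki) for ki in k)):
            node = np.array([rules[j][0][idx[j]] for j in range(d)])
            weight = np.prod([rules[j][1][idx[j]] for j in range(d)])
            total += coeff * weight * sigma(node)
    return total

sigma = lambda z: np.exp(-0.1 * np.sum(z**2))
for level in (4, 5, 6):
    print(level, smolyak_mean(sigma, d=4, level=level))   # improves as the max level grows
```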

Switching gears: from sampling to projection
The two methods (Monte Carlo, Stochastic Collocation) considered so far were of a sampling kind: ⟨σ⟩ estimated using samples of σ(x).
Projection based approach: overview of the Galerkin method
Governing equation: Θf(x) = g(x), where Θ is an operator, g is a known function, and f is to be determined.
Project f onto a known basis {φ_j}: f(x) = Σ_j u_j φ_j(x)
Take an inner product with the same basis functions on both sides to get: Σ_j ⟨φ_i(x), Θφ_j(x)⟩ u_j = ⟨φ_i(x), g(x)⟩. BCs are used to simplify.
This is a system of equations; solve for u and get f.
Straightforward when x is spatio-temporal. But when it is stochastic?
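
A toy instance of this recipe, with Θ = −d²/dx² on [0, π], a sine basis that satisfies the homogeneous Dirichlet BCs, and a made-up right-hand side (all assumptions of the sketch):

```python
import numpy as np
from scipy.integrate import quad

N = 6                                          # number of basis functions
phi = lambda j, x: np.sin(j * x)               # basis on [0, pi]; phi_j(0) = phi_j(pi) = 0
Theta_phi = lambda j, x: j**2 * np.sin(j * x)  # Theta = -d^2/dx^2 applied to phi_j
g = lambda x: x * (np.pi - x)                  # known right-hand side

# Galerkin system: sum_j <phi_i, Theta phi_j> u_j = <phi_i, g>
A = np.array([[quad(lambda x: phi(i, x) * Theta_phi(j, x), 0, np.pi)[0]
               for j in range(1, N + 1)] for i in range(1, N + 1)])
b = np.array([quad(lambda x: phi(i, x) * g(x), 0, np.pi)[0] for i in range(1, N + 1)])

u = np.linalg.solve(A, b)                      # coefficients of f(x) = sum_j u_j phi_j(x)
f = lambda x: sum(u[j - 1] * phi(j, x) for j in range(1, N + 1))

exact = lambda x: x**4 / 12 - np.pi * x**3 / 6 + np.pi**3 * x / 12   # closed form for this g
print(f(np.pi / 2), exact(np.pi / 2))
```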

generalized Polynomial Chaos (gPC): a very brief history
Polynomial Chaos (PC): coined by Norbert Wiener while studying Gaussian stochastic processes (1938). Used Hermite polynomials as the basis.
Ghanem (1998) used the theory of Wiener-Hermite PC to represent random processes in an orthogonal basis of Hermite polynomials.
Xiu and Karniadakis (2002) generalized to non-Gaussian processes using other orthogonal polynomials, wavelets, etc.: generalized PC (gPC).
Finally, solve the gPC system of equations using Galerkin projection: the Stochastic Galerkin (SG) method.

The basis functions in gPC
Start with an RV, call it z as before. Let it have a distribution function F_z(θ) = P(z ≤ θ) and a pdf ρ(θ) s.t. dF_z(θ) = ρ(θ)dθ
The generalized Polynomial Chaos basis functions are orthogonal basis functions ψ_i(z) satisfying: ⟨ψ_i(z)ψ_j(z)⟩ = ∫ ψ_i(θ)ψ_j(θ)ρ(θ)dθ = γ_i δ_ij
Construct the linear space of polynomials of degree at most n: P_n(z)
Various kinds depending on ρ(θ)
Legendre: θ ∈ [−1, 1], ρ(θ) = 1/2.
Jacobi: θ ∈ (−1, 1), ρ(θ) = (1 − θ)^α (1 + θ)^β, α, β > −1.
Hermite: θ ∈ (−∞, ∞), ρ(θ) = e^{−θ²}.
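
A quick numerical check of this orthogonality for the Hermite family, done here in the probabilists' normalisation (weight e^{−θ²/2}/√(2π), for which γ_i = i!); that normalisation is a choice made for this sketch:

```python
import numpy as np
from numpy.polynomial.hermite_e import HermiteE, hermegauss
from math import factorial, sqrt, pi

# Probabilists' Hermite polynomials He_i are orthogonal w.r.t. the standard normal pdf,
# with <He_i, He_j> = i! * delta_ij (the gamma_i for this normalisation).
nodes, weights = hermegauss(30)
weights = weights / sqrt(2 * pi)          # turn the weight exp(-t^2/2) into a pdf

def inner(i, j):
    return np.dot(weights, HermiteE.basis(i)(nodes) * HermiteE.basis(j)(nodes))

for i in range(4):
    print([round(inner(i, j), 6) for j in range(4)], "expected gamma_i =", factorial(i))
```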

A scalar example of gPC-SG
Consider: a simple equation in one unknown u: a u = b
Now: let the system parameters a, b have some uncertainty, e.g. a(θ) = a_0 + αθ, where θ is a uniform RV in [−0.5, 0.5]. In this case what is ⟨u⟩ or ⟨u²⟩?
Expand: the solution in the ψ basis, u(θ) = Σ_{j=1}^{n} u_j ψ_j(θ)
Do: Galerkin testing with the same basis functions. Get a system of equations, where we solve for u: Au = b, where
A_ij = ⟨ψ_i(θ) a_0 ψ_j(θ)⟩ + ⟨ψ_i(θ) αθ ψ_j(θ)⟩ = a_0 γ_i δ_ij + α⟨ψ_i(θ) θ ψ_j(θ)⟩ and b_i = ⟨ψ_i(θ) b(θ)⟩
So: ⟨u⟩ = ∫ u(θ)ρ(θ)dθ = Σ_i u_i ⟨ψ_i(θ)⟩ and ⟨u²⟩ = Σ_i u_i² γ_i.
⇒ the std. dev. in u can be computed, √(⟨u²⟩ − ⟨u⟩²) ⇒ uncertainty quantified!
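
A self-contained numerical version of this scalar example, using a rescaled Legendre basis ψ_j(θ) = P_j(2θ) for the uniform θ; the particular values of a_0, α, and b are assumptions made for the sketch:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

a0, alpha, b_val = 2.0, 0.5, 1.0       # a(theta) = a0 + alpha*theta, theta ~ U[-0.5, 0.5]
n = 6                                  # gPC order (basis psi_0 ... psi_n)

# Basis: psi_j(theta) = P_j(2*theta); quadrature in xi = 2*theta on [-1, 1], rho(xi) = 1/2.
xi, w = leggauss(40)
w = 0.5 * w                            # uniform pdf on [-1, 1]
theta = xi / 2.0
psi = np.array([Legendre.basis(j)(xi) for j in range(n + 1)])   # psi_j at quadrature nodes

gamma = np.array([np.dot(w, psi[i] * psi[i]) for i in range(n + 1)])   # <psi_i psi_i>

# Galerkin system: A_ij = a0*gamma_i*delta_ij + alpha*<psi_i theta psi_j>,  b_i = b*<psi_i>
A = np.diag(a0 * gamma) + alpha * np.array(
    [[np.dot(w, psi[i] * theta * psi[j]) for j in range(n + 1)] for i in range(n + 1)])
rhs = b_val * np.array([np.dot(w, psi[i]) for i in range(n + 1)])

u = np.linalg.solve(A, rhs)

mean_u = np.dot(u, np.array([np.dot(w, psi[i]) for i in range(n + 1)]))
std_u = np.sqrt(np.sum(u**2 * gamma) - mean_u**2)

# Reference: the exact u(theta) = b/(a0 + alpha*theta), averaged with the same quadrature.
u_exact = b_val / (a0 + alpha * theta)
print(mean_u, np.dot(w, u_exact))
print(std_u, np.sqrt(np.dot(w, u_exact**2) - np.dot(w, u_exact)**2))
```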

Random inputs? (More than just a collection of RVs!)
Example: a random surface; adjacent points are not independent of each other, there is some correlation:
[Figure: a rough surface profile in the (x, y) plane]
The Kosambi-Karhunen-Loeve (KL) expansion is widely used to represent random processes: s(x, θ) = s_0(x) + Σ_{k=1}^{∞} √η_k f_k(x) z_k(θ)
s_0(x) is the mean of the random process
η_k, f_k solve this eigenvalue problem: ∫ C(i, j) f_k(j) dj = η_k f_k(i), where C(i, j) = cov(z_i, z_j) is the correlation between two RVs z_i, z_j
z(θ) represents mutually uncorrelated normal RVs (⟨z_k⟩ = 0)
The expansion is truncated to d terms in practice
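
A discrete sketch of the KL construction: replace the integral eigenvalue problem by the eigendecomposition of the covariance matrix on a grid (the grid, rms height, correlation length, and truncation below are assumptions for the example):

```python
import numpy as np

# Discrete KL expansion of a zero-mean Gaussian process on a 1-D grid, with an
# exponential covariance C(x, x') = h^2 * exp(-|x - x'| / corr_len).
x = np.linspace(0.0, 10.0, 200)
h, corr_len, d = 0.1, 1.0, 20

C = h**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
eigvals, eigvecs = np.linalg.eigh(C)                 # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort descending

rng = np.random.default_rng(0)
z = rng.standard_normal(d)                           # uncorrelated standard normal RVs z_k
s = eigvecs[:, :d] @ (np.sqrt(eigvals[:d]) * z)      # s(x) = sum_k sqrt(eta_k) f_k(x) z_k

print(eigvals[:5])                                   # eigenvalue decay sets the required d
```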

KL expansion for exponential correlation
s(x, θ) = s_0(x) + Σ_{k=1}^{∞} √η_k f_k(x) z_k(θ)
KL expansion eigenvalues and eigenfunctions can be analytically calculated in some cases, e.g. the exponential correlation function C(i, j) = exp(−|i − j| / l) (correlation length l).
The decay rate of the eigenvalues depends inversely on the correlation length ⇒ more RVs for smaller l.
[Plot: KL eigenvalue vs. eigenvalue number for several correlation lengths l]

The case of multiple random variables with correlation
Consider again the random rough surface given by the KL expansion: s(x, θ) = s_0(x) + Σ_{k=1}^{d} √η_k f_k(x) z_k(θ); it consists of multiple random variables z_i.
Multivariate gPC (i.e. tensor product of univariate gPC)
Ψ_ī(z̄) = ψ_{i_1}(z_1) ⋯ ψ_{i_d}(z_d), 0 ≤ Σ_{j=1}^{d} i_j ≤ n (max degree)
where ī = (i_1, ..., i_d) are the indices and z̄ = (z_1, ..., z_d) are the RVs
These belong to the space P_n^d of dimension w = C(n + d, n)
As in the univariate case, proceed by the Galerkin method and compute the mean, variance, etc.
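
A short sketch of the multivariate construction: enumerate the multi-indices of total degree ≤ n (their count is w = C(n+d, n)) and form Ψ as products of univariate polynomials (Hermite chosen here since the z_k are normal):

```python
import itertools
from math import comb
import numpy as np
from numpy.polynomial.hermite_e import HermiteE

d, n = 3, 2   # number of RVs and maximum total degree (assumed values for the sketch)

# Multi-indices (i_1, ..., i_d) with i_1 + ... + i_d <= n.
multi_indices = [a for a in itertools.product(range(n + 1), repeat=d) if sum(a) <= n]
print(len(multi_indices), comb(n + d, n))            # both equal w = C(n+d, n)

def Psi(a, z):
    """Multivariate gPC basis: a product of univariate (here Hermite) polynomials."""
    return np.prod([HermiteE.basis(i)(z[k]) for k, i in enumerate(a)])

z = np.array([0.3, -1.2, 0.5])
print([Psi(a, z) for a in multi_indices])
```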

Changing gears...
So far... this completes an overview of stochastic computation:
1. Monte Carlo (MC)
2. Stochastic Collocation (SC)
3. Stochastic Galerkin (SG) using generalized Polynomial Chaos
Moving on... to make matters more concrete, consider the problem of computing the electromagnetic scattering from a random rough surface, e.g. as seen in microwave remote sensing, [MC]^a, [SC,SG]^b
a: Khankhoje et al., "Computation of radar scattering from heterogeneous rough soil using the finite element method," 2013 IEEE TGRS
b: Khankhoje et al., "Stochastic solutions to rough surface scattering using the finite element method," 2017 IEEE TAP

Traditional FEM setup for rough surface scattering
A random rough surface instance is generated, and the domain meshed
Based on the incident field, the radar cross-section (RCS) is computed
The above steps are repeated for several (~100) instances
Quite inefficient!

Handle the rough surface intelligently
Note that the mesh need only change near the surface...
[Figure: simulation domain with boundary labels A, B, C, D, E, F, P, Q, incident field E_i on Γ, and a sandwich region between heights h_t and −h_b]
Let s(x) define the rough surface (e.g. via its KL expansion)
Move each node smoothly within the sandwich region: y → y + Δy, where
Δy = s(x)·(h_t − y)/h_t for 0 < y < h_t, and Δy = s(x)·(y + h_b)/h_b for −h_b < y < 0
Partition the domain into parts that can move, and those that need not
CD will deform to the rough surface
Zero deformation by the time y = h_t or y = −h_b
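
A sketch of this node-shifting rule applied to an array of mesh node coordinates (the toy grid and the sinusoidal s(x) are placeholders):

```python
import numpy as np

def deform_mesh(nodes, s, h_t, h_b):
    """Shift node y-coordinates so y = 0 deforms to the rough surface s(x),
    decaying to zero by y = h_t above and y = -h_b below (a sketch of the scheme)."""
    x, y = nodes[:, 0], nodes[:, 1]
    dy = np.zeros_like(y)
    above = (y > 0) & (y < h_t)
    below = (y > -h_b) & (y < 0)
    on_surface = (y == 0)
    dy[above] = s(x[above]) * (h_t - y[above]) / h_t
    dy[below] = s(x[below]) * (y[below] + h_b) / h_b
    dy[on_surface] = s(x[on_surface])
    return np.column_stack([x, y + dy])

# Example: a toy grid of nodes and a sinusoidal "rough" surface (illustrative only).
xs, ys = np.meshgrid(np.linspace(0, 2, 5), np.linspace(-1, 1, 5))
nodes = np.column_stack([xs.ravel(), ys.ravel()])
new_nodes = deform_mesh(nodes, lambda x: 0.05 * np.sin(2 * np.pi * x), h_t=1.0, h_b=1.0)
print(new_nodes[:5])
```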

The standard FEM recipe: deterministic solver
2D vector FE basis functions W̄
[Figure: same simulation domain as before]
Maxwell's equations in weak form: ∫ W̄ · [∇ × (1/ε_r ∇ × H̄) − k_0² μ_r H̄] ds = 0
First-order absorbing boundary conditions on A-B-F-E-A
Performing Galerkin testing: Au = b, A ∈ C^{m×m}, u, b ∈ C^m,
A_pq = Σ_e α_{e,pq}(r̄_e) + δ_pq ν_p(r̄_p), b_p = τ_p(r̄_p),
where r̄_e = (x_i, x_j, x_k, y_i, y_j, y_k) for the e-th element, r̄_p = (x_i, x_j, y_i, y_j) for the p-th edge

The standard FEM recipe: deterministic solver
Quantity of interest: the radar cross-section, σ_2D = lim_{r→∞} 2πr |E_z^f|² / |E_z^i|²
E_z^f(r̄) = √(k_0/(8πr)) e^{i(k_0 r − π/4)} ẑ · ∮ [r̂ × M̄(r̄′) + Z_0 μ_r r̂ × r̂ × J̄(r̄′)] e^{ik_0 r̂·r̄′} dl′
1. Start with a blank mesh: the rough surface is flat
2. Deform the mesh virtually using the KL expansion for the rough surface
3. Compute σ_2D for this mesh
4. Reset the mesh and go to (2) till convergence (~100 times)
[Plot: RCS (dB) vs. scattering angle θ, comparing SPM and FEM]

What is deterministic here?
Input to the FEM solver:
1. Incidence angle, ε(r̄) for the object
2. A set of d normal RVs {z_k} to use in the KL expansion: s(x, θ) = s_0(x) + Σ_{k=1}^{d} √η_k f_k(x) z_k(θ)
The FEM is run for a specified surface, hence a deterministic solver
The Monte Carlo process converges as O(1/√n_mc), independent of d. (Recall: d depends on the correlation length of the surface.)
i.e. ⟨σ⟩ = Σ_{i=1}^{n_mc} σ(r̄, z̄_i)/n_mc
Can we keep the same solver, but have faster convergence? Possibly: let's try stochastic collocation

From Monte Carlo to Stochastic Collocation
1. Construct the multivariate pdf of the d random normal variables: ρ(z̄) = ∏_{j=1}^{d} ρ_j(z_j) over the domain D^d, D = (−∞, ∞)
2. Express the expected value as ⟨σ⟩ = ∫_{D^d} σ(r̄, z̄)ρ(z̄)dz̄
3. Consider n_sc evaluations of the above integral at pre-decided quadrature points z̄_i.
4. Express σ(r̄, z̄) in terms of interpolating multivariate polynomials (e.g. Lagrange) {P^{(i)}(z̄)}, giving σ(r̄, z̄) ≈ Σ_{i=1}^{n_sc} σ(r̄, z̄_i)P^{(i)}(z̄)
5. Finally, ⟨σ⟩ = Σ_{i=1}^{n_sc} σ(r̄, z̄_i)α_i, where α_i = ∫_{D^d} ρ(z̄)P^{(i)}(z̄)dz̄

Stochastic Collocation for rough surface scattering
SC recipe: ⟨σ⟩ = Σ_{i=1}^{n_sc} σ(r̄, z̄_i)α_i
This just looks like the old MC formula! ⟨σ⟩ = Σ_{i=1}^{n_mc} σ(r̄, z̄_i)/n_mc
So, we can use the same solver for σ(r̄, z̄_i)
Results
How many function evals using a sparse grid? ≈ 2^l d^l / l!
Compare MC and SC for d = 50, l = 1. Evals: MC = 100, SC = 101
[Plot: RCS (dB) vs. scattering angle, comparing Sparse Grid, Stroud-3, and Monte Carlo]

The devil is in the details
For the same surface as before, what happens if you reduce d?
Compare MC and SC for d = 20, l = 1. Evals: MC = 100, SC = 41
[Plot: RCS (dB) vs. scattering angle, comparing Sparse Grid, Stroud-3, and Monte Carlo]

Conclusions about Stochastic Collocation
The given surface needs to be accurately represented by the finite KL expansion, i.e. there is an optimal d.
For MC, we found that 100 iterations give convergence to within 1 dB.
The cheapest SC would have level l = 1, giving n_sc ≈ 2d ⇒ if the surface requires d > 50, MC is better.
This critical number may decrease if we employ anisotropic sparse grids. Why? The KL eigenvalues decay, and not all dimensions are equally important.

Moving over to Stochastic Galerkin
Recall the earlier (scalar) example of a u = b, where we replaced a → a_0 + αθ, with θ a uniform RV.
In the case of the FEM, we have a sparse matrix equation Au = b to solve. Now, A_pq must be transformed as per the KL expansion for the surface.
Recall: A_pq = Σ_e α_{e,pq}(r̄_e) + δ_pq ν_p(r̄_p), b_p = τ_p(r̄_p),
where r̄_e = (x_i, x_j, x_k, y_i, y_j, y_k) for the e-th element, r̄_p = (x_i, x_j, y_i, y_j) for the p-th edge
Use our mesh deformation scheme, which transformed y → y + Δy. This leads to A_pq → Ã_pq, b_p → b̃_p

Steps in Stochastic Galerkin
1. Accomplish the perturbations by Taylor expanding to second order (the 1st and 2nd derivatives are computed by finite differences):
Ã_pq = β_pq + Σ_{i=1}^{d} β^{(i)}_pq z_i + Σ_{i,j=1}^{d} β^{(i,j)}_pq z_i z_j, matrices m × m
b̃_p = ζ_p + Σ_{i=1}^{d} ζ^{(i)}_p z_i, vectors m × 1
2. Now do Galerkin testing in the basis of orthogonal polynomials corresponding to the pdf of the RVs:
Expand each u ≈ Σ_{i=1}^{w} u_i Ψ_i(z̄)
Take inner products with Ψ_i(z̄) on both sides of Au = b
3. This results in another, bigger system of equations: F v = g, F ∈ C^{mw×mw}, v, g ∈ C^{mw}

Details of Stochastic Galerkin
What is the structure of F v = g, F ∈ C^{mw×mw}, v, g ∈ C^{mw}?
[Figure: F drawn as a w × w grid of m × m blocks]
Each block is sparse, multiplied by stochastic inner products like:
E_ab = ⟨Ψ_a(z̄)Ψ_b(z̄)⟩, E^{(i)}_ab = ⟨Ψ_a(z̄) z_i Ψ_b(z̄)⟩, E^{(i,j)}_ab = ⟨Ψ_a(z̄) z_i z_j Ψ_b(z̄)⟩
Needs to be solved only once
Once we solve this equation, we get an expression for the far field, and from that we get the RCS as |E^f(r̄)|².
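
A sketch of how these inner products can be tabulated once by quadrature, here for a Hermite basis in d = 2 with a small total degree (sizes chosen only to keep the example tiny):

```python
import itertools
import numpy as np
from numpy.polynomial.hermite_e import HermiteE, hermegauss

d, n = 2, 2    # number of RVs and max total gPC degree (assumed small for the sketch)
multi_indices = [a for a in itertools.product(range(n + 1), repeat=d) if sum(a) <= n]
w = len(multi_indices)

# Tensor Gauss-Hermite grid in z, normalised to the standard normal pdf.
z1, wt1 = hermegauss(8)
wt1 = wt1 / np.sqrt(2 * np.pi)
nodes = np.array(list(itertools.product(z1, repeat=d)))
wts = np.array([np.prod(p) for p in itertools.product(wt1, repeat=d)])

def Psi(a, z):
    return np.prod([HermiteE.basis(ai)(z[:, k]) for k, ai in enumerate(a)], axis=0)

Psi_vals = np.array([Psi(a, nodes) for a in multi_indices])       # shape (w, number of nodes)

E = np.array([[np.dot(wts, Psi_vals[a] * Psi_vals[b]) for b in range(w)] for a in range(w)])
E_i = [np.array([[np.dot(wts, Psi_vals[a] * nodes[:, i] * Psi_vals[b]) for b in range(w)]
                 for a in range(w)]) for i in range(d)]
print(np.round(E, 4))      # diagonal: the gamma_a normalisation constants; off-diagonal: 0
```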

Computational details of Stochastic Galerkin
In solving F v = g, F ∈ C^{mw×mw}, v, g ∈ C^{mw}:
Not a cheap computation: rather than direct matrix solvers, use an iterative method: BiCGStab + a block-diagonal, mean-based preconditioner
The matrix F is never stored explicitly; instead we only need to be able to compute the product of F with another vector for the iterative method to work
Compute times and memory demands are still much higher than MC/SC. Scope for a lot of research in sparse data structures for F, g
SG-gPC: 1200 s (10 iterations), 10 GB RAM
MC: 950 s (100 surfaces), 150 MB RAM
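
A matrix-free sketch of this solve pattern in SciPy: the two Kronecker-structured terms and their sizes are toy stand-ins for the real F, and only the LinearOperator + BiCGStab + mean-based block-diagonal preconditioner pattern is the point:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab, splu

m, w = 200, 6                          # toy FEM size m and gPC basis size w (assumptions)
rng = np.random.default_rng(0)

# Stand-ins for the mean FEM matrix, one perturbation block, and the stochastic
# inner-product matrices E_ab and E^(i)_ab.
A0 = (sp.random(m, m, density=0.02, random_state=0) + 5.0 * sp.identity(m)).tocsc()
A1 = 0.1 * sp.random(m, m, density=0.02, random_state=1).tocsc()
E0 = np.eye(w)
E1 = 0.1 * sp.random(w, w, density=0.5, random_state=2).toarray()
E1 = 0.5 * (E1 + E1.T)

def F_matvec(v):
    """Apply F = E0 (x) A0 + E1 (x) A1 to v without ever forming the mw x mw matrix."""
    V = v.reshape(w, m)
    out = np.zeros_like(V)
    for Ek, Ak in ((E0, A0), (E1, A1)):
        AV = (Ak @ V.T).T              # sparse FEM block applied to every gPC block of v
        out += Ek @ AV                 # coupling through the stochastic inner products
    return out.ravel()

# Mean-based, block-diagonal preconditioner: M^-1 = I_w (x) A0^-1.
lu = splu(A0)
def M_matvec(v):
    V = v.reshape(w, m)
    return np.vstack([lu.solve(V[a]) for a in range(w)]).ravel()

F = LinearOperator((m * w, m * w), matvec=F_matvec, dtype=float)
M = LinearOperator((m * w, m * w), matvec=M_matvec, dtype=float)

g = rng.standard_normal(m * w)
v, info = bicgstab(F, g, M=M, maxiter=500)
print(info, np.linalg.norm(F_matvec(v) - g))
```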


More information

STOCHASTIC SAMPLING METHODS

STOCHASTIC SAMPLING METHODS STOCHASTIC SAMPLING METHODS APPROXIMATING QUANTITIES OF INTEREST USING SAMPLING METHODS Recall that quantities of interest often require the evaluation of stochastic integrals of functions of the solutions

More information

A Stochastic Collocation based. for Data Assimilation

A Stochastic Collocation based. for Data Assimilation A Stochastic Collocation based Kalman Filter (SCKF) for Data Assimilation Lingzao Zeng and Dongxiao Zhang University of Southern California August 11, 2009 Los Angeles Outline Introduction SCKF Algorithm

More information

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,

More information

Solving the Stochastic Steady-State Diffusion Problem Using Multigrid

Solving the Stochastic Steady-State Diffusion Problem Using Multigrid Solving the Stochastic Steady-State Diffusion Problem Using Multigrid Tengfei Su Applied Mathematics and Scientific Computing Advisor: Howard Elman Department of Computer Science Sept. 29, 2015 Tengfei

More information

PARALLEL COMPUTATION OF 3D WAVE PROPAGATION BY SPECTRAL STOCHASTIC FINITE ELEMENT METHOD

PARALLEL COMPUTATION OF 3D WAVE PROPAGATION BY SPECTRAL STOCHASTIC FINITE ELEMENT METHOD 13 th World Conference on Earthquake Engineering Vancouver, B.C., Canada August 1-6, 24 Paper No. 569 PARALLEL COMPUTATION OF 3D WAVE PROPAGATION BY SPECTRAL STOCHASTIC FINITE ELEMENT METHOD Riki Honda

More information

Dinesh Kumar, Mehrdad Raisee and Chris Lacor

Dinesh Kumar, Mehrdad Raisee and Chris Lacor Dinesh Kumar, Mehrdad Raisee and Chris Lacor Fluid Mechanics and Thermodynamics Research Group Vrije Universiteit Brussel, BELGIUM dkumar@vub.ac.be; m_raisee@yahoo.com; chris.lacor@vub.ac.be October, 2014

More information

Multilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland

Multilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland Multilevel accelerated quadrature for elliptic PDEs with random diffusion Mathematisches Institut Universität Basel Switzerland Overview Computation of the Karhunen-Loéve expansion Elliptic PDE with uniformly

More information

Quasi-optimal and adaptive sparse grids with control variates for PDEs with random diffusion coefficient

Quasi-optimal and adaptive sparse grids with control variates for PDEs with random diffusion coefficient Quasi-optimal and adaptive sparse grids with control variates for PDEs with random diffusion coefficient F. Nobile, L. Tamellini, R. Tempone, F. Tesei CSQI - MATHICSE, EPFL, Switzerland Dipartimento di

More information

4. Numerical Quadrature. Where analytical abilities end... continued

4. Numerical Quadrature. Where analytical abilities end... continued 4. Numerical Quadrature Where analytical abilities end... continued Where analytical abilities end... continued, November 30, 22 1 4.3. Extrapolation Increasing the Order Using Linear Combinations Once

More information

Sampling and low-rank tensor approximation of the response surface

Sampling and low-rank tensor approximation of the response surface Sampling and low-rank tensor approximation of the response surface tifica Alexander Litvinenko 1,2 (joint work with Hermann G. Matthies 3 ) 1 Group of Raul Tempone, SRI UQ, and 2 Group of David Keyes,

More information

Chapter 6. Finite Element Method. Literature: (tiny selection from an enormous number of publications)

Chapter 6. Finite Element Method. Literature: (tiny selection from an enormous number of publications) Chapter 6 Finite Element Method Literature: (tiny selection from an enormous number of publications) K.J. Bathe, Finite Element procedures, 2nd edition, Pearson 2014 (1043 pages, comprehensive). Available

More information

Slow Growth for Gauss Legendre Sparse Grids

Slow Growth for Gauss Legendre Sparse Grids Slow Growth for Gauss Legendre Sparse Grids John Burkardt, Clayton Webster April 4, 2014 Abstract A sparse grid for multidimensional quadrature can be constructed from products of 1D rules. For multidimensional

More information

Evaluation of Non-Intrusive Approaches for Wiener-Askey Generalized Polynomial Chaos

Evaluation of Non-Intrusive Approaches for Wiener-Askey Generalized Polynomial Chaos 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference 6t 7 - April 28, Schaumburg, IL AIAA 28-892 Evaluation of Non-Intrusive Approaches for Wiener-Askey Generalized

More information

Efficient Solvers for Stochastic Finite Element Saddle Point Problems

Efficient Solvers for Stochastic Finite Element Saddle Point Problems Efficient Solvers for Stochastic Finite Element Saddle Point Problems Catherine E. Powell c.powell@manchester.ac.uk School of Mathematics University of Manchester, UK Efficient Solvers for Stochastic Finite

More information

arxiv: v1 [math.na] 14 Sep 2017

arxiv: v1 [math.na] 14 Sep 2017 Stochastic collocation approach with adaptive mesh refinement for parametric uncertainty analysis arxiv:1709.04584v1 [math.na] 14 Sep 2017 Anindya Bhaduri a, Yanyan He 1b, Michael D. Shields a, Lori Graham-Brady

More information

Beyond Wiener Askey Expansions: Handling Arbitrary PDFs

Beyond Wiener Askey Expansions: Handling Arbitrary PDFs Journal of Scientific Computing, Vol. 27, Nos. 1 3, June 2006 ( 2005) DOI: 10.1007/s10915-005-9038-8 Beyond Wiener Askey Expansions: Handling Arbitrary PDFs Xiaoliang Wan 1 and George Em Karniadakis 1

More information

A Vector-Space Approach for Stochastic Finite Element Analysis

A Vector-Space Approach for Stochastic Finite Element Analysis A Vector-Space Approach for Stochastic Finite Element Analysis S Adhikari 1 1 Swansea University, UK CST2010: Valencia, Spain Adhikari (Swansea) Vector-Space Approach for SFEM 14-17 September, 2010 1 /

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

AN EFFICIENT COMPUTATIONAL FRAMEWORK FOR UNCERTAINTY QUANTIFICATION IN MULTISCALE SYSTEMS

AN EFFICIENT COMPUTATIONAL FRAMEWORK FOR UNCERTAINTY QUANTIFICATION IN MULTISCALE SYSTEMS AN EFFICIENT COMPUTATIONAL FRAMEWORK FOR UNCERTAINTY QUANTIFICATION IN MULTISCALE SYSTEMS A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of

More information

Hyperbolic Polynomial Chaos Expansion (HPCE) and its Application to Statistical Analysis of Nonlinear Circuits

Hyperbolic Polynomial Chaos Expansion (HPCE) and its Application to Statistical Analysis of Nonlinear Circuits Hyperbolic Polynomial Chaos Expansion HPCE and its Application to Statistical Analysis of Nonlinear Circuits Majid Ahadi, Aditi Krishna Prasad, Sourajeet Roy High Speed System Simulations Laboratory Department

More information

MULTI-ELEMENT GENERALIZED POLYNOMIAL CHAOS FOR ARBITRARY PROBABILITY MEASURES

MULTI-ELEMENT GENERALIZED POLYNOMIAL CHAOS FOR ARBITRARY PROBABILITY MEASURES SIAM J. SCI. COMPUT. Vol. 8, No. 3, pp. 9 98 c 6 Society for Industrial and Applied Mathematics MULTI-ELEMENT GENERALIZED POLYNOMIAL CHAOS FOR ARBITRARY PROBABILITY MEASURES XIAOLIANG WAN AND GEORGE EM

More information

Fast Numerical Methods for Stochastic Computations: A Review

Fast Numerical Methods for Stochastic Computations: A Review COMMUNICATIONS IN COMPUTATIONAL PHYSICS Vol. 5, No. 2-4, pp. 242-272 Commun. Comput. Phys. February 2009 REVIEW ARTICLE Fast Numerical Methods for Stochastic Computations: A Review Dongbin Xiu Department

More information

Polynomial Chaos and Karhunen-Loeve Expansion

Polynomial Chaos and Karhunen-Loeve Expansion Polynomial Chaos and Karhunen-Loeve Expansion 1) Random Variables Consider a system that is modeled by R = M(x, t, X) where X is a random variable. We are interested in determining the probability of the

More information

Stochastic Elastic-Plastic Finite Element Method for Performance Risk Simulations

Stochastic Elastic-Plastic Finite Element Method for Performance Risk Simulations Stochastic Elastic-Plastic Finite Element Method for Performance Risk Simulations Boris Jeremić 1 Kallol Sett 2 1 University of California, Davis 2 University of Akron, Ohio ICASP Zürich, Switzerland August

More information

Accepted Manuscript. SAMBA: Sparse approximation of moment-based arbitrary polynomial chaos. R. Ahlfeld, B. Belkouchi, F.

Accepted Manuscript. SAMBA: Sparse approximation of moment-based arbitrary polynomial chaos. R. Ahlfeld, B. Belkouchi, F. Accepted Manuscript SAMBA: Sparse approximation of moment-based arbitrary polynomial chaos R. Ahlfeld, B. Belkouchi, F. Montomoli PII: S0021-9991(16)30151-6 DOI: http://dx.doi.org/10.1016/j.jcp.2016.05.014

More information

Finite Element Method (FEM)

Finite Element Method (FEM) Finite Element Method (FEM) The finite element method (FEM) is the oldest numerical technique applied to engineering problems. FEM itself is not rigorous, but when combined with integral equation techniques

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Stochastic Solvers for the Euler Equations

Stochastic Solvers for the Euler Equations 43rd AIAA Aerospace Sciences Meeting and Exhibit 1-13 January 5, Reno, Nevada 5-873 Stochastic Solvers for the Euler Equations G. Lin, C.-H. Su and G.E. Karniadakis Division of Applied Mathematics Brown

More information

Accounting for Variability and Uncertainty in Signal and Power Integrity Modeling

Accounting for Variability and Uncertainty in Signal and Power Integrity Modeling Accounting for Variability and Uncertainty in Signal and Power Integrity Modeling Andreas Cangellaris& Prasad Sumant Electromagnetics Lab, ECE Department University of Illinois Acknowledgments Dr. Hong

More information

Uncertainty Quantification of Radionuclide Release Models using Non-Intrusive Polynomial Chaos. Casper Hoogwerf

Uncertainty Quantification of Radionuclide Release Models using Non-Intrusive Polynomial Chaos. Casper Hoogwerf Uncertainty Quantification of Radionuclide Release Models using Non-Intrusive Polynomial Chaos. Casper Hoogwerf 1 Foreword This report presents the final thesis of the Master of Science programme in Applied

More information

Scientific Computing I

Scientific Computing I Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Neckel Winter 2013/2014 Module 8: An Introduction to Finite Element Methods, Winter 2013/2014 1 Part I: Introduction to

More information

Sparse polynomial chaos expansions in engineering applications

Sparse polynomial chaos expansions in engineering applications DEPARTMENT OF CIVIL, ENVIRONMENTAL AND GEOMATIC ENGINEERING CHAIR OF RISK, SAFETY & UNCERTAINTY QUANTIFICATION Sparse polynomial chaos expansions in engineering applications B. Sudret G. Blatman (EDF R&D,

More information

Differentiation and Integration

Differentiation and Integration Differentiation and Integration (Lectures on Numerical Analysis for Economists II) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 February 12, 2018 1 University of Pennsylvania 2 Boston College Motivation

More information

A stochastic collocation approach for efficient integrated gear health prognosis

A stochastic collocation approach for efficient integrated gear health prognosis A stochastic collocation approach for efficient integrated gear health prognosis Fuqiong Zhao a, Zhigang Tian b,, Yong Zeng b a Department of Mechanical and Industrial Engineering, Concordia University,

More information

CERTAIN THOUGHTS ON UNCERTAINTY ANALYSIS FOR DYNAMICAL SYSTEMS

CERTAIN THOUGHTS ON UNCERTAINTY ANALYSIS FOR DYNAMICAL SYSTEMS CERTAIN THOUGHTS ON UNCERTAINTY ANALYSIS FOR DYNAMICAL SYSTEMS Puneet Singla Assistant Professor Department of Mechanical & Aerospace Engineering University at Buffalo, Buffalo, NY-1426 Probabilistic Analysis

More information

Original Research. Sensitivity Analysis and Variance Reduction in a Stochastic NDT Problem

Original Research. Sensitivity Analysis and Variance Reduction in a Stochastic NDT Problem To appear in the International Journal of Computer Mathematics Vol. xx, No. xx, xx 2014, 1 9 Original Research Sensitivity Analysis and Variance Reduction in a Stochastic NDT Problem R. H. De Staelen a

More information

Kernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt.

Kernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt. SINGAPORE SHANGHAI Vol TAIPEI - Interdisciplinary Mathematical Sciences 19 Kernel-based Approximation Methods using MATLAB Gregory Fasshauer Illinois Institute of Technology, USA Michael McCourt University

More information

ASSESSMENT OF COLLOCATION AND GALERKIN APPROACHES TO LINEAR DIFFUSION EQUATIONS WITH RANDOM DATA

ASSESSMENT OF COLLOCATION AND GALERKIN APPROACHES TO LINEAR DIFFUSION EQUATIONS WITH RANDOM DATA International Journal for Uncertainty Quantification, 11):19 33, 2011 ASSESSMENT OF COLLOCATION AND GALERKIN APPROACHES TO LINEAR DIFFUSION EQUATIONS WITH RANDOM DATA Howard C. Elman, 1, Christopher W.

More information

Solving the stochastic steady-state diffusion problem using multigrid

Solving the stochastic steady-state diffusion problem using multigrid IMA Journal of Numerical Analysis (2007) 27, 675 688 doi:10.1093/imanum/drm006 Advance Access publication on April 9, 2007 Solving the stochastic steady-state diffusion problem using multigrid HOWARD ELMAN

More information

Performance Evaluation of Generalized Polynomial Chaos

Performance Evaluation of Generalized Polynomial Chaos Performance Evaluation of Generalized Polynomial Chaos Dongbin Xiu, Didier Lucor, C.-H. Su, and George Em Karniadakis 1 Division of Applied Mathematics, Brown University, Providence, RI 02912, USA, gk@dam.brown.edu

More information

256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim.

256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim. 56 Summary High order FD Finite-order finite differences: Points per Wavelength: Number of passes: D n f(x j ) = f j+n f j n n x df xj = m α m dx n D n f j j n= α m n = ( ) n (m!) (m n)!(m + n)!. PPW =

More information

Multi-Factor Finite Differences

Multi-Factor Finite Differences February 17, 2017 Aims and outline Finite differences for more than one direction The θ-method, explicit, implicit, Crank-Nicolson Iterative solution of discretised equations Alternating directions implicit

More information

UNIVERSITY OF CALIFORNIA, SAN DIEGO. An Empirical Chaos Expansion Method for Uncertainty Quantification

UNIVERSITY OF CALIFORNIA, SAN DIEGO. An Empirical Chaos Expansion Method for Uncertainty Quantification UNIVERSITY OF CALIFORNIA, SAN DIEGO An Empirical Chaos Expansion Method for Uncertainty Quantification A Dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy

More information

Polynomial chaos expansions for structural reliability analysis

Polynomial chaos expansions for structural reliability analysis DEPARTMENT OF CIVIL, ENVIRONMENTAL AND GEOMATIC ENGINEERING CHAIR OF RISK, SAFETY & UNCERTAINTY QUANTIFICATION Polynomial chaos expansions for structural reliability analysis B. Sudret & S. Marelli Incl.

More information

Uncertainty Quantification in MEMS

Uncertainty Quantification in MEMS Uncertainty Quantification in MEMS N. Agarwal and N. R. Aluru Department of Mechanical Science and Engineering for Advanced Science and Technology Introduction Capacitive RF MEMS switch Comb drive Various

More information

Journal of Computational Physics

Journal of Computational Physics Journal of Computational Physics 229 (2010) 1536 1557 Contents lists available at ScienceDirect Journal of Computational Physics journal homepage: www.elsevier.com/locate/jcp Multi-element probabilistic

More information

arxiv: v2 [math.na] 6 Aug 2018

arxiv: v2 [math.na] 6 Aug 2018 Bayesian model calibration with interpolating polynomials based on adaptively weighted Leja nodes L.M.M. van den Bos,2, B. Sanderse, W.A.A.M. Bierbooms 2, and G.J.W. van Bussel 2 arxiv:82.235v2 [math.na]

More information

Strain and stress computations in stochastic finite element. methods

Strain and stress computations in stochastic finite element. methods INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING Int. J. Numer. Meth. Engng 2007; 00:1 6 [Version: 2002/09/18 v2.02] Strain and stress computations in stochastic finite element methods Debraj

More information

Quantifying Uncertainty: Modern Computational Representation of Probability and Applications

Quantifying Uncertainty: Modern Computational Representation of Probability and Applications Quantifying Uncertainty: Modern Computational Representation of Probability and Applications Hermann G. Matthies with Andreas Keese Technische Universität Braunschweig wire@tu-bs.de http://www.wire.tu-bs.de

More information

UNCERTAINTY ASSESSMENT USING STOCHASTIC REDUCED BASIS METHOD FOR FLOW IN POROUS MEDIA

UNCERTAINTY ASSESSMENT USING STOCHASTIC REDUCED BASIS METHOD FOR FLOW IN POROUS MEDIA UNCERTAINTY ASSESSMENT USING STOCHASTIC REDUCED BASIS METHOD FOR FLOW IN POROUS MEDIA A REPORT SUBMITTED TO THE DEPARTMENT OF ENERGY RESOURCES ENGINEERING OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT

More information

Numerical methods for the discretization of random fields by means of the Karhunen Loève expansion

Numerical methods for the discretization of random fields by means of the Karhunen Loève expansion Numerical methods for the discretization of random fields by means of the Karhunen Loève expansion Wolfgang Betz, Iason Papaioannou, Daniel Straub Engineering Risk Analysis Group, Technische Universität

More information

Sparse Grids. Léopold Cambier. February 17, ICME, Stanford University

Sparse Grids. Léopold Cambier. February 17, ICME, Stanford University Sparse Grids & "A Dynamically Adaptive Sparse Grid Method for Quasi-Optimal Interpolation of Multidimensional Analytic Functions" from MK Stoyanov, CG Webster Léopold Cambier ICME, Stanford University

More information

Analysis and Computation of Hyperbolic PDEs with Random Data

Analysis and Computation of Hyperbolic PDEs with Random Data Analysis and Computation of Hyperbolic PDEs with Random Data Mohammad Motamed 1, Fabio Nobile 2,3 and Raúl Tempone 1 1 King Abdullah University of Science and Technology, Thuwal, Saudi Arabia 2 EPFL Lausanne,

More information

On the Application of Intervening Variables for Stochastic Finite Element Analysis

On the Application of Intervening Variables for Stochastic Finite Element Analysis On the Application of Intervening Variables for Stochastic Finite Element Analysis M.A. Valdebenito a,, A.A. Labarca a, H.A. Jensen a a Universidad Tecnica Federico Santa Maria, Dept. de Obras Civiles,

More information

Chapter 2 Spectral Expansions

Chapter 2 Spectral Expansions Chapter 2 Spectral Expansions In this chapter, we discuss fundamental and practical aspects of spectral expansions of random model data and of model solutions. We focus on a specific class of random process

More information

Lehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V

Lehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V Part I: Introduction to Finite Element Methods Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Necel Winter 4/5 The Model Problem FEM Main Ingredients Wea Forms and Wea

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

8 STOCHASTIC SIMULATION

8 STOCHASTIC SIMULATION 8 STOCHASTIC SIMULATIO 59 8 STOCHASTIC SIMULATIO Whereas in optimization we seek a set of parameters x to minimize a cost, or to maximize a reward function J( x), here we pose a related but different question.

More information

Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients

Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients Università degli Studi del Sannio Dipartimento e Facoltà di Ingegneria, Benevento, Italia Random fields

More information

A far-field based T-matrix method for three dimensional acoustic scattering

A far-field based T-matrix method for three dimensional acoustic scattering ANZIAM J. 50 (CTAC2008) pp.c121 C136, 2008 C121 A far-field based T-matrix method for three dimensional acoustic scattering M. Ganesh 1 S. C. Hawkins 2 (Received 14 August 2008; revised 4 October 2008)

More information

Stochastic structural dynamic analysis with random damping parameters

Stochastic structural dynamic analysis with random damping parameters Stochastic structural dynamic analysis with random damping parameters K. Sepahvand 1, F. Saati Khosroshahi, C. A. Geweth and S. Marburg Chair of Vibroacoustics of Vehicles and Machines Department of Mechanical

More information

Adaptive Collocation with Kernel Density Estimation

Adaptive Collocation with Kernel Density Estimation Examples of with Kernel Density Estimation Howard C. Elman Department of Computer Science University of Maryland at College Park Christopher W. Miller Applied Mathematics and Scientific Computing Program

More information

Sparse Grids for Stochastic Integrals

Sparse Grids for Stochastic Integrals John, Information Technology Department, Virginia Tech.... http://people.sc.fsu.edu/ jburkardt/presentations/ fsu 2009.pdf... Department of Scientific Computing, Florida State University, 08 October 2009.

More information

Collocation based high dimensional model representation for stochastic partial differential equations

Collocation based high dimensional model representation for stochastic partial differential equations Collocation based high dimensional model representation for stochastic partial differential equations S Adhikari 1 1 Swansea University, UK ECCM 2010: IV European Conference on Computational Mechanics,

More information

Projection Methods. (Lectures on Solution Methods for Economists IV) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 March 7, 2018

Projection Methods. (Lectures on Solution Methods for Economists IV) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 March 7, 2018 Projection Methods (Lectures on Solution Methods for Economists IV) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 March 7, 2018 1 University of Pennsylvania 2 Boston College Introduction Remember that

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

Review of Polynomial Chaos-Based Methods for Uncertainty Quantification in Modern Integrated Circuits. and Domenico Spina

Review of Polynomial Chaos-Based Methods for Uncertainty Quantification in Modern Integrated Circuits. and Domenico Spina electronics Review Review of Polynomial Chaos-Based Methods for Uncertainty Quantification in Modern Integrated Circuits Arun Kaintura *, Tom Dhaene ID and Domenico Spina IDLab, Department of Information

More information

NONLOCALITY AND STOCHASTICITY TWO EMERGENT DIRECTIONS FOR APPLIED MATHEMATICS. Max Gunzburger

NONLOCALITY AND STOCHASTICITY TWO EMERGENT DIRECTIONS FOR APPLIED MATHEMATICS. Max Gunzburger NONLOCALITY AND STOCHASTICITY TWO EMERGENT DIRECTIONS FOR APPLIED MATHEMATICS Max Gunzburger Department of Scientific Computing Florida State University North Carolina State University, March 10, 2011

More information

Numerical Analysis Comprehensive Exam Questions

Numerical Analysis Comprehensive Exam Questions Numerical Analysis Comprehensive Exam Questions 1. Let f(x) = (x α) m g(x) where m is an integer and g(x) C (R), g(α). Write down the Newton s method for finding the root α of f(x), and study the order

More information

Downloaded 01/28/13 to Redistribution subject to SIAM license or copyright; see

Downloaded 01/28/13 to Redistribution subject to SIAM license or copyright; see SIAM J. SCI. COMPUT. Vol. 27, No. 3, pp. 1118 1139 c 2005 Society for Industrial and Applied Mathematics HIGH-ORDER COLLOCATION METHODS FOR DIFFERENTIAL EQUATIONS WITH RANDOM INPUTS DONGBIN XIU AND JAN

More information

Uncertainty analysis of large-scale systems using domain decomposition

Uncertainty analysis of large-scale systems using domain decomposition Center for Turbulence Research Annual Research Briefs 2007 143 Uncertainty analysis of large-scale systems using domain decomposition By D. Ghosh, C. Farhat AND P. Avery 1. Motivation and objectives A

More information

A Non-Intrusive Polynomial Chaos Method For Uncertainty Propagation in CFD Simulations

A Non-Intrusive Polynomial Chaos Method For Uncertainty Propagation in CFD Simulations An Extended Abstract submitted for the 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada January 26 Preferred Session Topic: Uncertainty quantification and stochastic methods for CFD A Non-Intrusive

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information