Spectral Representation of Random Processes

Example: Represent u(t, x, Q) by
\[
u^K(t,x,Q) = \sum_{k=0}^{K} u_k(t,x)\,\Psi_k(Q),
\]
where the \(\Psi_k(q)\) are orthogonal polynomials.

Single Random Variable: Let \(\Psi_k(Q)\) be orthogonal with respect to \(\rho_Q(q)\), with \(\Psi_0(Q) = 1\). Then
\[
E[\Psi_0(Q)] = 1, \qquad
E[\Psi_i(Q)\Psi_j(Q)] = \int \Psi_i(q)\Psi_j(q)\,\rho_Q(q)\,dq
= \langle \Psi_i, \Psi_j \rangle = \gamma_i\,\delta_{ij}.
\]

Normalization factor:
\[
\gamma_i = E\big[\Psi_i^2(Q)\big] = \langle \Psi_i, \Psi_i \rangle.
\]
Random Process:
\[
E\big[u^K(t,x,Q)\big]
= E\Big[\sum_{k=0}^{K} u_k(t,x)\,\Psi_k(Q)\Big]
= u_0(t,x)\,E[\Psi_0(Q)] + \sum_{k=1}^{K} u_k(t,x)\,E[\Psi_k(Q)]
= u_0(t,x),
\]
\[
\operatorname{var}\big[u^K(t,x,Q)\big]
= E\Big[\big(u^K(t,x,Q) - E[u^K(t,x,Q)]\big)^2\Big]
= E\Big[\Big(\sum_{k=0}^{K} u_k(t,x)\,\Psi_k(Q) - u_0(t,x)\Big)^2\Big]
= E\Big[\Big(\sum_{k=1}^{K} u_k(t,x)\,\Psi_k(Q)\Big)^2\Big]
= \sum_{k=1}^{K} u_k^2(t,x)\,\gamma_k.
\]
Hermite Polynomials: \(Q \sim N(0,1)\),
\[
H_0(Q) = 1, \quad H_1(Q) = Q, \quad H_2(Q) = Q^2 - 1,
\]
\[
H_3(Q) = Q^3 - 3Q, \quad H_4(Q) = Q^4 - 6Q^2 + 3,
\]
with the weight
\[
\rho_Q(q) = \frac{1}{\sqrt{2\pi}}\,e^{-q^2/2}.
\]
Normalization factor:
\[
\gamma_i = \int_{\mathbb{R}} H_i^2(q)\,\rho_Q(q)\,dq = i!.
\]

Legendre Polynomials: \(Q \sim U(-1,1)\),
\[
P_0(Q) = 1, \quad P_1(Q) = Q, \quad P_2(Q) = \tfrac{3}{2}Q^2 - \tfrac{1}{2},
\]
\[
P_3(Q) = \tfrac{5}{2}Q^3 - \tfrac{3}{2}Q, \quad
P_4(Q) = \tfrac{35}{8}Q^4 - \tfrac{15}{4}Q^2 + \tfrac{3}{8},
\]
with the weight
\[
\rho_Q(q) = \tfrac{1}{2}.
\]
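The normalization factors \(\gamma_i = i!\) can be checked numerically. A minimal sketch using NumPy's probabilists' Hermite module, `numpy.polynomial.hermite_e`, whose Gauss rule uses the weight \(e^{-q^2/2}\); dividing the weights by \(\sqrt{2\pi}\) turns them into quadrature against the N(0,1) density:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss-Hermite rule for the weight exp(-q^2/2); dividing the weights by
# sqrt(2*pi) makes them integrate against the standard normal density.
nodes, w = He.hermegauss(20)
w = w / np.sqrt(2.0 * np.pi)

# gamma_i = E[H_i(Q)^2] = i! for the probabilists' Hermite polynomials H_i.
gammas = []
for i in range(5):
    Hi = He.hermeval(nodes, np.eye(5)[i])   # values of H_i at the quadrature nodes
    gammas.append(np.sum(w * Hi**2))
print(np.round(gammas, 8))                  # approximately [1, 1, 2, 6, 24]
```

Since the integrands are polynomials of degree at most 8, the 20-point rule evaluates these moments exactly up to round-off.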
Multiple Random Variables:

Definition (p-dimensional multi-index): A p-tuple \(\vec{k} = (k_1, \dots, k_p) \in \mathbb{N}_0^p\) of non-negative integers is termed a p-dimensional multi-index with magnitude
\[
|\vec{k}| = k_1 + k_2 + \cdots + k_p,
\]
and satisfying the ordering \(\vec{j} \le \vec{k}\) if \(j_i \le k_i\) for \(i = 1, \dots, p\).

Consider the p-variate basis functions
\[
\Psi_{\vec{i}}(Q) = \Psi_{i_1}(Q_1) \cdots \Psi_{i_p}(Q_p),
\]
which satisfy
\[
E[\Psi_{\vec{i}}(Q)\,\Psi_{\vec{j}}(Q)]
= \int \Psi_{\vec{i}}(q)\,\Psi_{\vec{j}}(q)\,\rho_Q(q)\,dq
= \langle \Psi_{\vec{i}}, \Psi_{\vec{j}} \rangle
= \gamma_{\vec{i}}\,\delta_{\vec{i}\vec{j}}.
\]
Multi-Index Representation:
\[
u^K(t,x,Q) = \sum_{|\vec{k}|=0}^{K} u_{\vec{k}}(t,x)\,\Psi_{\vec{k}}(Q).
\]
Single-Index Representation:
\[
u^K(t,x,Q) = \sum_{k=0}^{K} u_k(t,x)\,\Psi_k(Q).
\]
Correspondence for p = 3:

k | k⃗ | Polynomial
0 | (0, 0, 0) | ψ_0(Q_1) ψ_0(Q_2) ψ_0(Q_3)
1 | (1, 0, 0) | ψ_1(Q_1) ψ_0(Q_2) ψ_0(Q_3)
2 | (0, 1, 0) | ψ_0(Q_1) ψ_1(Q_2) ψ_0(Q_3)
3 | (0, 0, 1) | ψ_0(Q_1) ψ_0(Q_2) ψ_1(Q_3)
4 | (2, 0, 0) | ψ_2(Q_1) ψ_0(Q_2) ψ_0(Q_3)
5 | (1, 1, 0) | ψ_1(Q_1) ψ_1(Q_2) ψ_0(Q_3)
6 | (1, 0, 1) | ψ_1(Q_1) ψ_0(Q_2) ψ_1(Q_3)
7 | (0, 2, 0) | ψ_0(Q_1) ψ_2(Q_2) ψ_0(Q_3)
8 | (0, 1, 1) | ψ_0(Q_1) ψ_1(Q_2) ψ_1(Q_3)
9 | (0, 0, 2) | ψ_0(Q_1) ψ_0(Q_2) ψ_2(Q_3)
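The graded ordering in the table can be generated programmatically. A small sketch (the helper names `compositions` and `multi_indices` are my own, not from the source):

```python
def compositions(total, p):
    """Ways to write `total` as p ordered non-negative parts, largest first part first."""
    if p == 1:
        yield (total,)
        return
    for first in range(total, -1, -1):
        for rest in compositions(total - first, p - 1):
            yield (first,) + rest

def multi_indices(p, order):
    """Graded list of all p-dimensional multi-indices with |k| <= order."""
    return [k for total in range(order + 1) for k in compositions(total, p)]

# Reproduces the single-index <-> multi-index table for p = 3, order 2
for single, multi in enumerate(multi_indices(3, 2)):
    print(single, multi)
```

For p = 3 and total order 2 this yields the 10 rows above, in the same order as the single-index column.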
Scalar Initial Value Problem

Problem:
\[
\frac{du}{dt} = f(t, Q, u), \quad t > 0,
\qquad u(0, Q) = u_0.
\]
Quantity of Interest:
\[
y(t) = \int u(t,q)\,\rho_Q(q)\,dq.
\]
Finite-Dimensional Representation:
\[
u^K(t,Q) = \sum_{k=0}^{K} u_k(t)\,\Psi_k(Q),
\]
where
\[
u_k(t) = \frac{1}{\gamma_k} \int u(t,q)\,\Psi_k(q)\,\rho_Q(q)\,dq.
\]
Stochastic Galerkin Method

Weak Stochastic Formulation: For i = 0, ..., K,
\[
0 = \Big\langle \frac{du^K}{dt} - f, \Psi_i \Big\rangle
= \int \Big[ \sum_{k=0}^{K} \frac{du_k}{dt}(t)\,\Psi_k(q)
- f\Big(t, q, \sum_{k=0}^{K} u_k(t)\,\Psi_k(q)\Big) \Big] \Psi_i(q)\,\rho_Q(q)\,dq,
\]
which is equivalent to
\[
E\Big[\frac{du^K}{dt}(t,Q)\,\Psi_i(Q)\Big] = E\big[f(t,Q,u^K)\,\Psi_i(Q)\big].
\]
Quadrature yields
\[
\sum_{r=1}^{R} \Psi_i(q^r)\,\rho_Q(q^r)\,w_r
\Big[ \sum_{k=0}^{K} \frac{du_k}{dt}(t)\,\Psi_k(q^r)
- f\Big(t, q^r, \sum_{k=0}^{K} u_k(t)\,\Psi_k(q^r)\Big) \Big] = 0.
\]
Example: Consider
\[
\frac{du}{dt} = -\alpha(\omega)\,u, \qquad u(0,\omega) = \beta,
\]
where \(\beta\) is fixed and \(\alpha \sim N(\bar{\alpha}, \sigma^2)\) with \(\sigma > 0\). Here
\[
\alpha = \alpha_N = \sum_{n=0}^{N} \alpha_n \Psi_n(Q),
\qquad \alpha_0 = \bar{\alpha},\ \alpha_1 = \sigma,\ \alpha_n = 0 \text{ for } n > 1,
\]
\[
\beta = \beta_N = \sum_{n=0}^{N} \beta_n \Psi_n(Q),
\qquad \beta_0 = \beta,\ \beta_n = 0 \text{ for } n > 0.
\]
Analytic solution:
\[
u(t,Q) = \beta\,e^{-(\bar{\alpha} + \sigma Q)t}.
\]
Approximate solution: Find
\[
u^K(t,Q) = \sum_{k=0}^{K} u_k(t)\,\Psi_k(Q)
\]
subject to
\[
0 = \Big\langle \frac{du^K}{dt} + \alpha_N u^K, \Psi_i \Big\rangle
= \int \sum_{k=0}^{K} \frac{du_k}{dt}(t)\,\Psi_k(q)\,\Psi_i(q)\,\rho_Q(q)\,dq
+ \int \alpha_N \sum_{k=0}^{K} u_k(t)\,\Psi_k(q)\,\Psi_i(q)\,\rho_Q(q)\,dq,
\]
which is equivalent to
\[
\gamma_i \frac{du_i}{dt} = -\sum_{n=0}^{N} \sum_{k=0}^{K} \alpha_n u_k(t)\,e_{ink},
\]
where
\[
\gamma_i = E[\Psi_i^2(Q)] = \int \Psi_i^2(q)\,\rho_Q(q)\,dq,
\qquad
e_{ink} = E[\Psi_i(Q)\Psi_n(Q)\Psi_k(Q)] = \int \Psi_i(q)\Psi_n(q)\Psi_k(q)\,\rho_Q(q)\,dq.
\]
Initial Conditions:
\[
u_k(0) = \beta_k, \quad k = 0, \dots, K,
\]
since
\[
u^K(0,Q) = \sum_{k=0}^{K} u_k(0)\,\Psi_k(Q) = \beta = \sum_{n=0}^{N} \beta_n \Psi_n(Q).
\]
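The Galerkin system above can be assembled and integrated in a few lines. A minimal sketch in which the values \(\bar{\alpha} = 1\), \(\sigma = 0.3\), \(\beta = 2\), and \(K = 8\) are hypothetical (the source gives no numbers); the triple products \(e_{ink}\) are evaluated by Gauss–Hermite quadrature and the coefficient ODEs integrated with RK4:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, exp

# Hypothetical values for this sketch: alpha ~ N(abar, sigma^2), u(0) = beta
abar, sigma, beta = 1.0, 0.3, 2.0
K = 8

nodes, w = He.hermegauss(3 * K)             # exact for the triple products below
w = w / sqrt(2.0 * pi)                      # quadrature against the N(0,1) density
P = np.array([He.hermeval(nodes, np.eye(K + 1)[i]) for i in range(K + 1)])

gamma = np.array([factorial(i) for i in range(K + 1)], dtype=float)
# e_{ink} = E[Psi_i Psi_n Psi_k]; only n = 0, 1 contribute since alpha is linear in Q
e = np.einsum('ir,nr,kr,r->ink', P, P[:2], P, w)
alpha = np.array([abar, sigma])

# Galerkin system: gamma_i du_i/dt = -sum_{n,k} alpha_n u_k(t) e_{ink}
A = -np.einsum('n,ink->ik', alpha, e) / gamma[:, None]

u = np.zeros(K + 1)
u[0] = beta                                 # u_k(0) = beta * delta_{k0}
dt = 1e-3
for _ in range(1000):                       # RK4 integration to t = 1
    k1 = A @ u
    k2 = A @ (u + 0.5 * dt * k1)
    k3 = A @ (u + 0.5 * dt * k2)
    k4 = A @ (u + dt * k3)
    u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

mean_exact = beta * exp(-abar) * exp(sigma**2 / 2.0)  # exact mean at t = 1
print(u[0], mean_exact)                     # u_0(t) approximates the exact mean
```

The coupling matrix is sparse here: since \(\alpha\) is linear in Q, only the \(n = 0, 1\) slices of \(e_{ink}\) are nonzero.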
Note: To evaluate the QoI, we observe that
\[
E\big[u^K(t,Q)\big] = u_0(t), \qquad
\operatorname{var}\big[u^K(t,Q)\big] = \sum_{k=1}^{K} u_k^2(t)\,\gamma_k.
\]
Exact Mean and Variance:
\[
\bar{u}(t) = \int_{\mathbb{R}} \beta\,e^{-(\bar{\alpha}+\sigma q)t}\,
\frac{1}{\sqrt{2\pi}}\,e^{-q^2/2}\,dq
= \beta\,e^{-\bar{\alpha}t}\,e^{\sigma^2 t^2/2},
\]
\[
\operatorname{var}[u] = E\big[u^2(t)\big] - \bar{u}^2(t)
= \beta^2 e^{-2\bar{\alpha}t}\big(e^{2\sigma^2 t^2} - e^{\sigma^2 t^2}\big).
\]

[Figure: true versus approximate displacement (m) as a function of time (s), for K = 8 and K = 16.]
Properties:

- Accuracy is optimal in the L^2 sense.

Disadvantages:

- The method is intrusive and hence difficult to implement with legacy codes, or with codes for which only an executable is available.
- The method requires densities with associated orthogonal polynomials. These can sometimes be constructed from empirical histograms.
- The method requires mutually independent parameters.
Stochastic Collocation

Strategy: Using either deterministic or stochastic techniques, generate M samples from parameter space and enforce
\[
u(t, q^m) = u^K(t, q^m).
\]
Vandermonde System:
\[
\begin{bmatrix}
\Psi_0(q^1) & \cdots & \Psi_K(q^1) \\
\vdots & & \vdots \\
\Psi_0(q^M) & \cdots & \Psi_K(q^M)
\end{bmatrix}
\begin{bmatrix} u_0(t) \\ \vdots \\ u_K(t) \end{bmatrix}
=
\begin{bmatrix} u(t,q^1) \\ \vdots \\ u(t,q^M) \end{bmatrix}.
\]
Issues: The system is typically ill-conditioned and dense.

Alternative Strategy: Employ Lagrange basis functions, which yield the identity matrix and
\[
u_m(t) = u(t, q^m) \quad \text{for } m = 1, \dots, M.
\]
Equivalent Formulation: Employ \(\Psi_k(q) = L_k(q)\) and take \(q^m = q^r\) to get
\[
\frac{du_m}{dt}(t) = f(t, q^m, u_m), \quad m = 1, \dots, M.
\]
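For the decay example above, the Lagrange formulation amounts to M decoupled deterministic solves followed by quadrature for the QoI. A minimal sketch with hypothetical parameter values (\(\bar{\alpha} = 1\), \(\sigma = 0.3\), \(\beta = 2\)), taking the collocation points to be Gauss–Hermite nodes:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import sqrt, pi, exp

# Hypothetical decay model: du/dt = -(abar + sigma*q)u, u(0) = beta
abar, sigma, beta = 1.0, 0.3, 2.0
M = 10
nodes, w = He.hermegauss(M)                 # collocation points q^m = quadrature nodes
w = w / sqrt(2.0 * pi)                      # weights now integrate against N(0,1)

def f(q, u):                                # deterministic right-hand side at fixed q
    return -(abar + sigma * q) * u

# M decoupled deterministic solves: du_m/dt = f(q^m, u_m), u_m(0) = beta
u = np.full(M, beta)
dt = 1e-3
for _ in range(1000):                       # RK4 integration to t = 1
    k1 = f(nodes, u)
    k2 = f(nodes, u + 0.5 * dt * k1)
    k3 = f(nodes, u + 0.5 * dt * k2)
    k4 = f(nodes, u + dt * k3)
    u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.dot(w, u)                            # QoI y(t) = E[u(t, Q)] by quadrature
print(y, beta * exp(-abar) * exp(sigma**2 / 2.0))
```

Note that the M solves never communicate, which is exactly what makes the method nonintrusive and embarrassingly parallel.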
Properties:

- Although motivated in the context of a Galerkin method, collocation is based on interpolation theory.

Advantages:

- The method is nonintrusive in the sense that once M collocation points are specified, one solves M deterministic problems using existing software.
- The method is applicable to general parameter distributions with correlated parameters.
- Algorithms are available in Sandia's Dakota package.

Disadvantages:

- Evaluation of the QoI typically requires sampling from the joint distribution, which may not be available.
Discrete Projection Method

Problem:
\[
\frac{du}{dt} = f(t, Q, u), \quad t > 0,
\qquad u(0,Q) = u_0.
\]
Finite-Dimensional Representation:
\[
u^K(t,Q) = \sum_{k=0}^{K} u_k(t)\,\Psi_k(Q),
\]
where
\[
u_k(t) = \frac{1}{\gamma_k} \int u(t,q)\,\Psi_k(q)\,\rho_Q(q)\,dq.
\]
Discrete Projection (Pseudo-Spectral):
\[
u_k(t) = \frac{1}{\gamma_k} \sum_{r=1}^{R} u(t,q^r)\,\Psi_k(q^r)\,\rho_Q(q^r)\,w_r.
\]
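A pseudo-spectral sketch for the decay example, with hypothetical parameter values; the model (here its analytic solution, standing in for a black-box solver) is sampled at Gauss–Hermite nodes and the coefficients recovered by discrete projection:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, exp

# Hypothetical decay model u(t,q) = beta*exp(-(abar+sigma*q)t), evaluated at t = 1
abar, sigma, beta, t = 1.0, 0.3, 2.0, 1.0
K, R = 8, 20
nodes, w = He.hermegauss(R)
w = w / sqrt(2.0 * pi)                      # quadrature against the N(0,1) density

samples = beta * np.exp(-(abar + sigma * nodes) * t)   # R model evaluations

# Pseudo-spectral projection: u_k = (1/gamma_k) sum_r u(t,q^r) Psi_k(q^r) rho(q^r) w_r
u = np.array([np.dot(samples * He.hermeval(nodes, np.eye(K + 1)[k]), w) / factorial(k)
              for k in range(K + 1)])

mean = u[0]
var = sum(u[k]**2 * factorial(k) for k in range(1, K + 1))
mean_exact = beta * exp(-abar * t) * exp((sigma * t)**2 / 2.0)
var_exact = beta**2 * exp(-2 * abar * t) * (exp(2 * (sigma * t)**2) - exp((sigma * t)**2))
print(mean, mean_exact)
print(var, var_exact)
```

The only model access is the vector of R samples, which is what makes this "nonintrusive PCE": any existing solver can supply the samples.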
Example: We revisit the spring model
\[
m \frac{d^2 z}{dt^2} + c \frac{dz}{dt} + k z = f_0 \cos(\omega_F t),
\qquad z(0) = z_0, \quad \frac{dz}{dt}(0) = z_1,
\]
with the response
\[
y(\omega_F, Q) = \frac{1}{\sqrt{(k - m\,\omega_F^2)^2 + (c\,\omega_F)^2}},
\]
where \(Q \sim N(\bar{q}, V)\).

Parameters:
\[
m = \bar{m}\,\Psi_0(Q) + \sigma_m \Psi_1(Q) = \bar{m} + \sigma_m Q_1,
\]
\[
c = \bar{c}\,\Psi_0(Q) + \sigma_c \Psi_2(Q) = \bar{c} + \sigma_c Q_2,
\]
\[
k = \bar{k}\,\Psi_0(Q) + \sigma_k \Psi_3(Q) = \bar{k} + \sigma_k Q_3,
\]
where \(\Psi_k(Q) = \Psi_{k_1}(Q_1)\,\Psi_{k_2}(Q_2)\,\Psi_{k_3}(Q_3)\) are tensored Hermite polynomials.
Approximated Response:
\[
y^K(\omega_F, Q) = \sum_{k=0}^{K} y_k(\omega_F)\,\Psi_k(Q),
\]
where
\[
y_k(\omega_F) = \frac{1}{\gamma_k} \int_{\mathbb{R}^3} y(\omega_F, q)\,\Psi_k(q)\,\rho_Q(q)\,dq
\approx \frac{1}{\gamma_k} \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \sum_{r_3=1}^{R_3}
y(\omega_F, q^r)\,\Psi_k(q^r)\,\rho_Q(q^r)\,w_r,
\]
and
\[
\rho_Q(q) = \frac{1}{(\sqrt{2\pi})^3}\, e^{-q_m^2/2}\, e^{-q_c^2/2}\, e^{-q_k^2/2}.
\]
Note:
\[
\bar{y}(\omega_F) = y_0(\omega_F),
\qquad
\operatorname{var}\big[y^K(\omega_F, Q)\big] = \sum_{k=1}^{K} y_k^2(\omega_F)\,\gamma_k.
\]
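A tensor-grid sketch of the mean and variance of the spring response at a single drive frequency. All numerical values here are hypothetical (the source gives none), chosen so the response stays well away from resonance; the mean below is exactly the \(y_0\) coefficient, since \(\Psi_0 = 1\) and \(\gamma_0 = 1\):

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import sqrt, pi

# Hypothetical spring parameters: means and (small) standard deviations
mbar, cbar, kbar = 1.0, 0.5, 4.0
sm, sc, sk = 0.05, 0.02, 0.1
wF = 1.5                                    # drive frequency

def response(qm, qc, qk):
    m, c, k = mbar + sm * qm, cbar + sc * qc, kbar + sk * qk
    return 1.0 / np.sqrt((k - m * wF**2)**2 + (c * wF)**2)

def tensor_mean_var(R):
    x, w = He.hermegauss(R)
    w = w / sqrt(2.0 * pi)                  # N(0,1) quadrature in each direction
    Qm, Qc, Qk = np.meshgrid(x, x, x, indexing='ij')
    W = w[:, None, None] * w[None, :, None] * w[None, None, :]
    Y = response(Qm, Qc, Qk)
    mean = np.sum(W * Y)                    # y_0 coefficient
    var = np.sum(W * Y**2) - mean**2
    return mean, var

mean, var = tensor_mean_var(10)
print(mean, sqrt(var))                      # response mean and standard deviation
```

The tensored grid has \(R^3\) points, which illustrates why tensor quadrature becomes expensive as the number of random parameters grows.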
Results:

[Figure: response mean and response standard deviation as functions of the drive frequency ω_F, comparing discrete projection with Monte Carlo; the mean plot also shows the deterministic solution.]
Properties:

Advantages:

- Like collocation, the method is nonintrusive and hence can be employed as post-processing with existing codes. The method is often referred to as nonintrusive PCE.
- Algorithms are available in Sandia's Dakota package.

Disadvantages:

- Requires construction of the joint density, which often relies on mutually independent parameters.
Boundary Value Problems and Elliptic PDE

Model:
\[
N(u, Q) = F(Q), \quad x \in D,
\qquad
B(u, Q) = G(Q), \quad x \in \partial D.
\]
Quantity of Interest:
\[
y(x) = \int u(x, q)\,\rho_Q(q)\,dq.
\]
Deterministic Weak Formulation: Find \(u \in V\) which satisfies
\[
\int_D N(u,Q)\,S(v)\,dx = \int_D F(Q)\,v\,dx \quad \text{for all } v \in V.
\]
Stochastic Weak Formulation: Find u that satisfies
\[
\int \int_D N(u,q)\,S(v(x))\,z(q)\,\rho_Q(q)\,dx\,dq
= \int \int_D F(q)\,v(x)\,z(q)\,\rho_Q(q)\,dx\,dq
\]
for all test functions \(v \in V\) and z.
Approximated Solution:
\[
u^K(x,Q) = \sum_{k=0}^{K} u_k(x)\,\Psi_k(Q)
= \sum_{j=1}^{J} \sum_{k=0}^{K} u_{jk}\,\phi_j(x)\,\Psi_k(Q).
\]
Galerkin Method:
\[
\sum_{r=1}^{R} \Psi_i(q^r)\,\rho_Q(q^r)\,w_r
\int_D N\Big( \sum_{j=1}^{J} \sum_{k=0}^{K} u_{jk}\,\phi_j(x)\,\Psi_k(q^r),\, q^r \Big)\, S(\phi_\ell(x))\,dx
= \sum_{r=1}^{R} \Psi_i(q^r)\,\rho_Q(q^r)\,w_r \int_D F(q^r)\,\phi_\ell(x)\,dx.
\]
Quantity of Interest:
\[
y(x) = \sum_{r=1}^{R} w_r\,\rho_Q(q^r) \sum_{j=1}^{J} \sum_{k=0}^{K} u_{jk}\,\phi_j(x)\,\Psi_k(q^r).
\]
Collocation: Enforce
\[
u(x, q^m) = u^K(x, q^m) = \sum_{j=1}^{J} u_{jm}\,\phi_j(x)
\]
at M collocation points to yield the M relations
\[
\int_D N\Big( \sum_{j=1}^{J} u_{jm}\,\phi_j(x),\, q^m \Big)\, S(\phi_\ell(x))\,dx
= \int_D F(q^m)\,\phi_\ell(x)\,dx
\]
for \(\ell = 1, \dots, J\).

Quantity of Interest:
\[
y(x) = \sum_{r=1}^{R} w_r\,\rho_Q(q^r) \sum_{j=1}^{J} u_{jr}\,\phi_j(x)
= \sum_{r=1}^{R} w_r\,\rho_Q(q^r)\,\hat{u}^r(x).
\]