A Dynamically Bi-Orthogonal Method for Time-Dependent Stochastic Partial Differential Equations I: Derivation and Algorithms


Mulin Cheng^a, Thomas Y. Hou^a, Zhiwen Zhang^a

^a Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125

Abstract

We propose a dynamically bi-orthogonal method (DyBO) to solve time-dependent stochastic partial differential equations (SPDEs). The objective of our method is to exploit some intrinsic sparse structure in the stochastic solution by constructing the sparsest representation of the stochastic solution via a bi-orthogonal basis. It is well known that the Karhunen-Loeve expansion (KLE) minimizes the total mean squared error and gives the sparsest representation of stochastic solutions. However, the computation of the KL expansion can be quite expensive since we need to form a covariance matrix and solve a large-scale eigenvalue problem. The main contribution of this paper is that we derive an equivalent system that governs the evolution of the spatial and stochastic basis in the KL expansion. Unlike other reduced model methods, our method constructs the reduced basis on-the-fly without the need to form the covariance matrix or to compute its eigendecomposition. In the first part of our paper, we introduce the derivation of the dynamically bi-orthogonal formulation for SPDEs and discuss several theoretical issues, such as the dynamic preservation of bi-orthogonality and some preliminary error analysis of the DyBO method. We also give some numerical implementation details of the DyBO methods, including the representation of the stochastic basis and techniques to deal with eigenvalue crossing. In the second part of our paper [], we will present an adaptive strategy to dynamically remove or add modes, perform a detailed complexity analysis, and discuss various generalizations of this approach. An extensive range of numerical experiments will be provided in both parts to demonstrate the effectiveness of the DyBO method.

Keywords: Stochastic partial differential equations, Karhunen-Loeve expansion, uncertainty quantification, bi-orthogonality, reduced-order model

1. Introduction

Uncertainty arises in many complex real-world problems of physical and engineering interest, such as wave, heat and pollution propagation through random media [3, 9, 7, 53] and flow driven by stochastic forces [26, 28, 5, 42, 37]. Additional examples can be found in other branches of science and engineering, such as geosciences, statistical mechanics, meteorology, biology, finance, and social science. Stochastic partial differential equations (SPDEs) have played an important role in our investigation of uncertainty quantification (UQ). However, it is very challenging to solve stochastic PDEs efficiently due to the large dimensionality of stochastic solutions. Although many numerical methods have been proposed in the literature (see e.g. [52, 9, 2, 3, 23, 3, 38, 54, 4, 5, 55, 36, 5, 28, 48, , 39, 24, 46, 4, 34, 5, 6, 4, 44]), it is still very expensive to solve a time-dependent nonlinear stochastic PDE with high-dimensional stochastic input variables. This limits our ability to attack some challenging real-world applications.

In this paper, we propose a dynamically bi-orthogonal method (DyBO) to solve time-dependent stochastic partial differential equations. The dynamic bi-orthogonality condition that we impose on DyBO is closely related to the Karhunen-Loeve expansion (KLE) [29, 32].

Corresponding author. Email addresses: mulinch@caltech.edu (Mulin Cheng), hou@cms.caltech.edu (Thomas Y. Hou), zhangzw@caltech.edu (Zhiwen Zhang). Preprint submitted to Elsevier, February 8, 2013.

The KL expansion has received increasing attention in many fields, such as statistics, image processing, and uncertainty quantification. It yields the best basis in the sense that it minimizes the total mean squared error and gives the sparsest representation of stochastic solutions. However, computing the KLE can be quite expensive since it involves forming a covariance matrix and solving the associated large-scale eigenvalue problem. One of the attractive features of our DyBO method is that we construct the sparsest bi-orthogonal basis on-the-fly without the need to form the covariance matrix or to compute its eigendecomposition.

We consider the following time-dependent stochastic partial differential equation:

$$\frac{\partial u}{\partial t}(x,t,\omega) = \mathcal{L} u(x,t,\omega), \quad x \in D \subset \mathbb{R}^d, \ \omega \in \Omega, \ t \in [0,T], \tag{1a}$$
$$u(x,0,\omega) = u_0(x,\omega), \quad x \in D, \ \omega \in \Omega, \tag{1b}$$
$$\mathcal{B} u(x,t,\omega) = h(x,t,\omega), \quad x \in \partial D, \ \omega \in \Omega, \tag{1c}$$

where $\mathcal{L}$ is a differential operator that may contain random coefficients and/or stochastic forces and $\mathcal{B}$ is a boundary operator. The randomness may also enter the system through the initial condition $u_0$ and/or the boundary condition $\mathcal{B}$. We assume the stochastic solution $u(x,t,\omega)$ of the system is a second-order stochastic process, i.e., $u(\cdot,t,\cdot) \in L^2(D \times \Omega)$. The solution $u(x,t,\omega)$ can be represented by its KL expansion as follows:

$$u(x,t,\omega) = \bar{u}(x,t) + \sum_{i=1}^{\infty} u_i(x,t)\, Y_i(\omega,t), \tag{2}$$

where $\bar{u}(x,t) = \mathbb{E}[u(x,t,\omega)]$ and the $u_i$'s are the eigenfunctions of the covariance function $\mathrm{Cov}_u(x,y) = \mathbb{E}\left[ (u(x,t,\omega)-\bar{u}(x,t))(u(y,t,\omega)-\bar{u}(y,t)) \right]$. Thus the $u_i$ are orthogonal to each other, i.e., $\langle u_i, u_j \rangle = \lambda_i \delta_{ij}$, where the $\lambda_i$ are the corresponding eigenvalues of the covariance function. The random variables $Y_i$ have zero mean and are mutually orthonormal, i.e., $\mathbb{E}[Y_i Y_j] = \delta_{ij}$. They are defined as follows:

$$Y_i(\omega,t) = \frac{1}{\lambda_i(t)} \int_D \left( u(x,t,\omega)-\bar{u}(x,t) \right) u_i(x,t)\, dx, \quad i = 1, 2, 3, \dots.$$

For many physical and engineering problems, the dimension of the input stochastic variables may appear to be high, but the effective dimension of the stochastic solution may be low due to the rapid decay of the eigenvalues $\lambda_i$ [48]. For this type of problem, the KLE provides the sparsest representation of the solution through an optimal set of bi-orthogonal spatial and stochastic basis functions. Since this bi-orthogonal basis is constructed at each time, both the spatial basis $u_i$ and the stochastic basis $Y_i$ are time-dependent in order to preserve bi-orthogonality. This is very different from other reduced-order bases in which either the spatial or the random basis is time-dependent, but not both.

The traditional way to construct the KL expansion is to first solve the SPDE by some conventional method, and then form the covariance function to construct the KL expansion. As we mentioned earlier, this post-processing approach is very expensive, which would offset the benefit of obtaining the sparsest representation of the stochastic solution. The main objective of this paper is to derive a well-posed system that governs the dynamic evolution of the KL expansion without the need of forming the covariance function explicitly. To illustrate the main idea, we assume that the eigenvalues $\lambda_i$ decay rapidly and that an $m$-term truncation of the KL expansion gives an accurate approximation of the stochastic solution. We substitute the $m$-term truncated expansion of (2), denoted as $\tilde{u}$, into the stochastic PDE (1a). After projecting the resulting equation onto the spatial and stochastic bases and using their bi-orthogonality, we obtain

$$\frac{\partial \bar{u}}{\partial t} = \mathbb{E}[\mathcal{L}\tilde{u}], \tag{3a}$$
$$\frac{\partial U}{\partial t} = U \Lambda^{-1}\left\langle U^{\mathsf T}, \frac{\partial U}{\partial t} \right\rangle + G_U(\bar{u}, U, Y), \tag{3b}$$
$$\frac{\partial Y}{\partial t} = Y\, \mathbb{E}\!\left[ Y^{\mathsf T} \frac{\partial Y}{\partial t} \right] + G_Y(\bar{u}, U, Y), \tag{3c}$$

where $\mathbb{E}[\cdot]$ denotes the expectation, $\langle f, g \rangle = \int_D f(x) g(x)\, dx$, $U(x,t) = (u_1(x,t), u_2(x,t), \dots, u_m(x,t))$, $Y(\omega,t) = (Y_1(\omega,t), Y_2(\omega,t), \dots, Y_m(\omega,t))$, $\tilde{u} = \bar{u} + U Y^{\mathsf T}$, $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_m)$, and $G_U(\bar{u},U,Y)$ and $G_Y(\bar{u},U,Y)$ are defined in (12a) and (12b), respectively. The initial condition of (3) is given by the KL expansion of $u_0(x,\omega)$.

A close inspection shows that the evolution system (3b) and (3c) does not determine the time derivatives $\partial U/\partial t$ or $\partial Y/\partial t$ uniquely. In fact, (3b) only determines uniquely the projection of $\partial U/\partial t$ onto the orthogonal complement of $U$, which is $G_U$. Similarly, (3c) only determines uniquely the projection of $\partial Y/\partial t$ onto the orthogonal complement of $Y$, which is $G_Y$. Thus, the evolution system (3) can be rewritten as follows:

$$\frac{\partial \bar{u}}{\partial t} = \mathbb{E}[\mathcal{L}\tilde{u}], \tag{4a}$$
$$\frac{\partial U}{\partial t} = U C + G_U(\bar{u}, U, Y), \tag{4b}$$
$$\frac{\partial Y}{\partial t} = Y D + G_Y(\bar{u}, U, Y), \tag{4c}$$

where $C$ and $D$ are $m \times m$ matrices which represent the projection coefficients of $\partial U/\partial t$ and $\partial Y/\partial t$ onto $U$ and $Y$, respectively. These two matrices characterize the large degrees of freedom in choosing the spatial and stochastic bases dynamically. They cannot be chosen independently. Instead, they must satisfy a compatibility condition to be consistent with the original SPDE and satisfy the bi-orthogonality condition of the spatial and the stochastic basis. By introducing two types of anti-symmetrization operators to enforce the bi-orthogonality condition, we derive two constraints for $C$ and $D$. Combining these two constraints with the compatibility condition, we show that these two matrices can be determined uniquely provided that the eigenvalues of the covariance matrix are distinct.

We prove rigorously that the system of governing equations indeed preserves the bi-orthogonality condition dynamically. More precisely, if we write $\chi(t) = (\langle u_i, u_j \rangle)_{i>j}$ as a column vector in $\mathbb{R}^{m(m-1)/2}$, then $\chi(t)$ satisfies a linear ODE system $\frac{d\chi(t)}{dt} = W(t)\chi(t)$ for some anti-symmetric matrix $W(t)$. The fact that $W(t)$ is anti-symmetric is important because it implies that the dynamic preservation of the bi-orthogonality condition is very stable to perturbation, and any deviation from the bi-orthogonal form will not be amplified.

The breakdown in deriving a well-posed system for the KL expansion when two distinct eigenvalues coincide dynamically is associated with the additional degrees of freedom in choosing orthogonal eigenvectors. To overcome this difficulty, we design a strategy to handle eigenvalue crossings. When two distinct eigenvalues become close, we temporarily freeze the stochastic basis and evolve only the spatial basis. By doing so, we still maintain the dynamic orthogonality of the stochastic basis, which allows us to compute the covariance matrix relatively easily. After the two eigenvalues separate in time, we reinitialize the DyBO method by reconstructing the KL expansion without forming the covariance matrix. Alternatively, we can freeze the spatial basis and evolve the stochastic basis for a short time until the two eigenvalues have separated.

We remark that the low-dimensional or sparse structures of SPDE solutions have been explored partially in some existing methods, such as the proper orthogonal decomposition (POD) methods [3, 49, 5], reduced-basis (RB) methods [7, 35, 45, 2], and the dynamically orthogonal (DO) method [46, 47]. Our method shares some common features with the dynamically orthogonal method, since both methods use time-dependent spatial and stochastic bases. An important ingredient in POD and RB methods is how to choose the reduced basis from some limited information of the solution that we learn offline.
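For concreteness, the conventional post-processing route to a truncated KL expansion mentioned above (form the covariance from precomputed solution samples, then eigendecompose it) can be sketched as follows. This is the offline step that DyBO is designed to bypass; the grid, ensemble size, and synthetic samples below are illustrative assumptions of ours, not taken from the paper.

```python
# A minimal sketch (not part of the DyBO method itself) of the conventional
# post-processing KL construction: form the covariance matrix from an ensemble
# of solution samples at a fixed time and extract an m-term KL expansion.
import numpy as np

def truncated_kl(samples, dx, m):
    """samples: (N_r, N_x) realizations u(x, t, omega_k) at a fixed time.
    Returns mean, spatial modes u_i (scaled so <u_i, u_i> = lambda_i),
    stochastic coefficients Y_i with E[Y_i Y_j] approx delta_ij, and lambda."""
    u_bar = samples.mean(axis=0)                      # sample mean
    fluct = samples - u_bar                           # centered realizations
    cov = fluct.T @ fluct / samples.shape[0] * dx     # discrete covariance operator
    lam, phi = np.linalg.eigh(cov)                    # eigenpairs (ascending)
    lam, phi = lam[::-1][:m], phi[:, ::-1][:, :m]     # keep the m largest
    phi /= np.sqrt(dx)                                # L2(D)-normalize eigenfunctions
    U = phi * np.sqrt(lam)                            # u_i = sqrt(lambda_i) * phi_i
    Y = fluct @ phi * dx / np.sqrt(lam)               # Y_i = <u - u_bar, phi_i> / sqrt(lambda_i)
    return u_bar, U, Y, lam

# tiny synthetic test: two random Fourier modes of known energy
x = np.linspace(0.0, 1.0, 128, endpoint=False)
xi = np.random.randn(2000, 2)
samples = 1.0 + np.outer(xi[:, 0], np.sin(2 * np.pi * x)) \
              + 0.3 * np.outer(xi[:, 1], np.cos(2 * np.pi * x))
u_bar, U, Y, lam = truncated_kl(samples, x[1] - x[0], m=2)
print(lam)   # approximately [0.5, 0.045] up to sampling error
```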
The main difference between our method and other reduced-basis methods is that we bypass the step of learning the basis offline, which in general can be quite expensive. We construct the reduced basis on the fly as we solve the SPDE. We have also performed some preliminary error analysis for our method. In the case when the differential operator is linear and deterministic and the randomness enters through the initial condition or the forcing, we show that the error is due to the projection of the initial condition or the random forcing onto the bi-orthogonal basis. In the special case when the initial condition has exactly $m$ modes in the KL expansion, the $m$-term DyBO method reproduces the exact solution without any error. For general nonlinear stochastic PDEs, the number of effective modes in the KL expansion will increase dynamically due to nonlinear interaction. In this case, even if the initial condition has an exactly $m$-mode KL expansion, we must dynamically increase the number of modes to maintain the accuracy of our computation.

[Figure 1: Stochastically forced Burgers equation computed by DyBO and gPC. Left: sorted energy spectra of the gPC and DyBO solutions. Right: relative errors of the SD, DyBO ($\sim e^{-.62m}$) vs. gPC ($\sim e^{-.65 N_p}$).]

We have developed an effective adaptive strategy for DyBO to add or remove modes dynamically in order to maintain a desired accuracy. If the magnitude of the last mode falls below a prescribed tolerance, we remove this mode dynamically. On the other hand, when the magnitude of the last mode rises above a certain tolerance, we need to add a new mode pair. The method we propose to add new modes dynamically is to use the DyBO solution as the initial condition and to compute a DyBO solution and a generalized polynomial chaos (gPC) solution, each for a short time. We then compute the KL expansion of the difference between the DyBO solution and the gPC solution, and add the largest mode of that expansion to our DyBO method. This method works quite effectively. More details will be reported in the second part of our paper [].

As we will demonstrate, the DyBO method can offer accurate numerical solutions to SPDEs with significant computational savings over traditional stochastic methods. We have performed numerical experiments for 1D Burgers equations, 2D incompressible Navier-Stokes equations, and Boussinesq equations with approximated Brownian motion forcing. In all these cases, we observe considerable computational savings. The specific rate of saving depends on how we discretize the stochastic ordinary differential equations (SODEs) governing the stochastic basis. We can use three methods to discretize the SODEs: (i) Monte Carlo methods (MC), (ii) generalized stochastic collocation methods (gSC), and (iii) generalized polynomial chaos methods (gPC). In the numerical experiments presented in this paper, we use gPC to discretize the SODEs. Our numerical results show that the error of the DyBO method decays much faster than that of the gPC method. To illustrate this point, we show in Figure 1 the sorted energy spectrum of the solution of the stochastically forced Burgers equation computed by our DyBO method and by the gPC method. We can see that the eigenvalues of the covariance matrix decay much faster than the corresponding sorted energy spectrum of the gPC solution, indicating the effective dimension reduction induced by the bi-orthogonal basis. If we denote by $m$ the number of modes used by DyBO and by $N_p$ the number of modes used by gPC, our complexity analysis shows that the ratio between the computational complexities of DyBO and gPC is roughly of order $O((m/N_p)^3)$ for a two-dimensional quadratic nonlinear SPDE. Typically $m$ is much smaller than $N_p$, as illustrated in Figure 1. More details can be found in Part II of our paper, where we also discuss the parallel implementation of DyBO [].

This paper is organized as follows. In Section 2, we provide the detailed derivation of the DyBO formulation for a time-dependent SPDE. Some important theoretical aspects of the DyBO formulation, such as the preservation of bi-orthogonality and the consistency between the DyBO formulation and the original SPDE, are discussed in Section 3. In Section 4, we discuss some detailed numerical implementations of the DyBO method, such as the representation of the stochastic basis $Y$ and a strategy to deal with eigenvalue crossings. Three numerical examples are provided in Section 5 to demonstrate the computational advantages of our DyBO method.
Finally, some remarks will be made in Section 6.

2. Derivation of the dynamically bi-orthogonal formulation

In this section, we introduce the derivation of our dynamically bi-orthogonal method. For ease of presentation, we use vector and matrix notations extensively:

$$U(x,t) = (u_1(x,t), u_2(x,t), \dots, u_m(x,t)), \qquad \tilde{U}(x,t) = (u_{m+1}(x,t), u_{m+2}(x,t), \dots),$$
$$Y(\omega,t) = (Y_1(\omega,t), Y_2(\omega,t), \dots, Y_m(\omega,t)), \qquad \tilde{Y}(\omega,t) = (Y_{m+1}(\omega,t), Y_{m+2}(\omega,t), \dots).$$

Correspondingly, $\mathbb{E}[Y^{\mathsf T} Y]$ and $\langle U^{\mathsf T}, U \rangle$ are $m \times m$ matrices and the bi-orthogonality condition can be rewritten compactly as

$$\mathbb{E}\left[Y^{\mathsf T} Y\right](t) = \left(\mathbb{E}[Y_i Y_j]\right) = I \in \mathbb{R}^{m \times m}, \tag{5a}$$
$$\langle U^{\mathsf T}, U \rangle(t) = \left(\langle u_i, u_j \rangle\right) = \Lambda \in \mathbb{R}^{m \times m}, \tag{5b}$$

where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_m) = (\lambda_i \delta_{ij})_{m \times m}$. Therefore, the KL expansion (2) of $u$ at some fixed time $t$ reads

$$u = \bar{u} + U Y^{\mathsf T} + \tilde{U} \tilde{Y}^{\mathsf T}. \tag{6}$$

Furthermore, we assume that the solution of the SPDE enjoys a low-dimensional structure in the sense of the KLE, or more precisely, that the eigenvalue spectrum $\{\lambda_i(t)\}_{i=1}^{\infty}$ decays fast enough to allow a good approximation by a truncated KL expansion (2). We denote by $\tilde{u}$ the $m$-term truncation of the KL expansion of $u$,

$$\tilde{u} = \bar{u} + U Y^{\mathsf T}. \tag{7}$$

To simplify notations, we introduce the following orthogonal-complement projection operators with respect to the orthogonal basis $U$ in $L^2(D)$ and $Y$ in $L^2(\Omega)$:

$$\Pi_U^{\perp} v = v - U \Lambda^{-1} \langle U^{\mathsf T}, v \rangle \quad \text{for } v(x) \in L^2(D), \qquad \Pi_Y^{\perp} Z = Z - Y\, \mathbb{E}[Y^{\mathsf T} Z] \quad \text{for } Z(\omega) \in L^2(\Omega).$$

We also define an anti-symmetrization operator $Q : \mathbb{R}^{k \times k} \to \mathbb{R}^{k \times k}$ and a partial anti-symmetrization operator $\tilde{Q} : \mathbb{R}^{k \times k} \to \mathbb{R}^{k \times k}$ as follows:

$$Q(A) = \frac{1}{2}\left(A - A^{\mathsf T}\right), \qquad \tilde{Q}(A) = \frac{1}{2}\left(A - A^{\mathsf T}\right) + \mathrm{diag}(A),$$

where $\mathrm{diag}(A)$ denotes a diagonal matrix whose diagonal entries are equal to those of the matrix $A$.

In the DyBO formulation it is crucial to dynamically preserve the bi-orthogonality of the spatial basis $U$ and the stochastic basis $Y$. We begin the derivation by substituting the KLE (2) into the SPDE (1a) to get

$$\frac{\partial \bar{u}}{\partial t} + \frac{\partial U}{\partial t} Y^{\mathsf T} + U \frac{\partial Y^{\mathsf T}}{\partial t} = \mathcal{L}\tilde{u} + \left\{\mathcal{L}u - \mathcal{L}\tilde{u}\right\} - \left\{ \frac{\partial \tilde{U}}{\partial t} \tilde{Y}^{\mathsf T} + \tilde{U} \frac{d\tilde{Y}^{\mathsf T}}{dt} \right\}. \tag{8}$$

If the eigenvalues in the KL expansion decay fast enough and the differential operator is stable, the last two terms on the right-hand side will be small, so we drop them and obtain the starting point for deriving our DyBO method,

$$\frac{\partial \bar{u}}{\partial t} + \frac{\partial U}{\partial t} Y^{\mathsf T} + U \frac{\partial Y^{\mathsf T}}{\partial t} = \mathcal{L}\tilde{u}. \tag{9}$$

Taking expectations on both sides of Eq. (9) and using the fact that the $Y_i$'s are zero-mean random variables, we have

$$\frac{\partial \bar{u}}{\partial t} = \mathbb{E}[\mathcal{L}\tilde{u}], \tag{10}$$

which gives the evolution equation for the mean of the solution $u$. Multiplying both sides of Eq. (9) by $Y$ from the right and taking expectations, we obtain

$$\frac{\partial U}{\partial t}\, \mathbb{E}\left[Y^{\mathsf T} Y\right] + U\, \mathbb{E}\!\left[\frac{\partial Y^{\mathsf T}}{\partial t} Y\right] = \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right].$$

After using the orthonormality of the stochastic basis $Y$, we have

$$\frac{\partial U}{\partial t} = \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] - U\, \mathbb{E}\!\left[\frac{\partial Y^{\mathsf T}}{\partial t} Y\right],$$

where $\widetilde{\mathcal{L}\tilde{u}} = \mathcal{L}\tilde{u} - \mathbb{E}[\mathcal{L}\tilde{u}]$. Similarly, we obtain an evolution equation for $\partial Y/\partial t$. Therefore, this gives rise to the following evolution system:

$$\frac{\partial \bar{u}}{\partial t} = \mathbb{E}[\mathcal{L}\tilde{u}], \tag{11a}$$
$$\frac{\partial U}{\partial t} = \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] - U\, \mathbb{E}\!\left[\frac{\partial Y^{\mathsf T}}{\partial t} Y\right], \tag{11b}$$
$$\Lambda \frac{\partial Y^{\mathsf T}}{\partial t} = \left\langle U^{\mathsf T}, \widetilde{\mathcal{L}\tilde{u}} \right\rangle - \left\langle U^{\mathsf T}, \frac{\partial U}{\partial t} \right\rangle Y^{\mathsf T}. \tag{11c}$$

However, the above system does not give explicit evolution equations for $\partial U/\partial t$ and $\partial Y/\partial t$. To further simplify the evolution system, we substitute Eq. (11c) into Eq. (11b) and get

$$\frac{\partial U}{\partial t} = \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] - U \Lambda^{-1} \left\langle U^{\mathsf T}, \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] \right\rangle + U \Lambda^{-1} \left\langle U^{\mathsf T}, \frac{\partial U}{\partial t} \right\rangle,$$

where we have used the orthonormality of the stochastic basis, i.e., $\mathbb{E}[Y^{\mathsf T} Y] = I$. Similar steps can be repeated by substituting Eq. (11b) into Eq. (11c). Thus, the system can be rewritten as

$$\frac{\partial \bar{u}}{\partial t} = \mathbb{E}[\mathcal{L}\tilde{u}], \qquad
\frac{\partial U}{\partial t} = U \Lambda^{-1}\left\langle U^{\mathsf T}, \frac{\partial U}{\partial t} \right\rangle + G_U(\bar{u}, U, Y), \qquad
\frac{\partial Y}{\partial t} = Y\, \mathbb{E}\!\left[Y^{\mathsf T} \frac{\partial Y}{\partial t}\right] + G_Y(\bar{u}, U, Y),$$

which recovers the system (3) given in the Introduction. Here we have used the orthogonality of the spatial and stochastic modes, and

$$G_U(\bar{u}, U, Y) = \Pi_U^{\perp}\, \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] = \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] - U \Lambda^{-1} \left\langle U^{\mathsf T}, \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] \right\rangle, \tag{12a}$$
$$G_Y(\bar{u}, U, Y) = \Pi_Y^{\perp}\!\left( \left\langle \widetilde{\mathcal{L}\tilde{u}}, U \right\rangle \Lambda^{-1} \right) = \left\langle \widetilde{\mathcal{L}\tilde{u}}, U \right\rangle \Lambda^{-1} - Y\, \mathbb{E}\!\left[ Y^{\mathsf T} \left\langle \widetilde{\mathcal{L}\tilde{u}}, U \right\rangle \right] \Lambda^{-1}. \tag{12b}$$

Note that $G_U$ is the projection of $\mathbb{E}[\widetilde{\mathcal{L}\tilde{u}}\, Y]$ onto the orthogonal complement of $\mathrm{span}\{U\}$, and $G_Y$ is the projection of $\langle \widetilde{\mathcal{L}\tilde{u}}, U \rangle \Lambda^{-1}$ onto the orthogonal complement of $\mathrm{span}\{Y\}$. Thus, we have

$$G_U(\bar{u}, U, Y) \perp \mathrm{span}\{U\} \ \text{in } L^2(D), \qquad G_Y(\bar{u}, U, Y) \perp \mathrm{span}\{Y\} \ \text{in } L^2(\Omega).$$

This observation will be crucial in the later derivations. We note that the system (3b) and (3c) does not determine the time derivatives $\partial U/\partial t$ or $\partial Y/\partial t$ uniquely. In fact, if $\partial U/\partial t$ is a solution of (3b), then $\partial U/\partial t + UC$ for any $m \times m$ matrix $C$ is also a solution of (3b).

A similar statement can be made for $\partial Y/\partial t$. In Appendix A, we show that the evolution system (3) can be rewritten as follows:

$$\frac{\partial \bar{u}}{\partial t} = \mathbb{E}[\mathcal{L}\tilde{u}], \tag{13a}$$
$$\frac{\partial U}{\partial t} = U C + G_U(\bar{u}, U, Y), \tag{13b}$$
$$\frac{\partial Y}{\partial t} = Y D + G_Y(\bar{u}, U, Y), \tag{13c}$$

where $C$ and $D$ are $m \times m$ matrices which represent the projection coefficients of $\partial U/\partial t$ and $\partial Y/\partial t$ onto $U$ and $Y$, respectively. Moreover, we show that by enforcing the bi-orthogonality condition of the spatial and stochastic bases and a compatibility condition with the original SPDE, these two matrices can be determined uniquely provided that the eigenvalues of the covariance matrix are distinct. The results are summarized in the following theorem.

Theorem 2.1 (Solvability of matrices C and D). If $\langle u_i, u_i \rangle \neq \langle u_j, u_j \rangle$ for $i \neq j$, $i, j = 1, 2, \dots, m$, the $m \times m$ matrices $C$ and $D$ can be solved uniquely from the following linear system

$$\Lambda C = \tilde{Q}(\Lambda C), \tag{14a}$$
$$D = Q(D), \tag{14b}$$
$$D^{\mathsf T} + C = G_*(\bar{u}, U, Y), \tag{14c}$$

where $G_*(\bar{u}, U, Y) = \Lambda^{-1} \left\langle U^{\mathsf T}, \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] \right\rangle \in \mathbb{R}^{m \times m}$. The solutions are given entry-wise as follows:

$$C_{ii} = (G_*)_{ii}, \tag{15a}$$
$$C_{ij} = \frac{\|u_j\|^2_{L^2(D)}}{\|u_j\|^2_{L^2(D)} - \|u_i\|^2_{L^2(D)}} \left( (G_*)_{ij} + (G_*)_{ji} \right), \quad i \neq j, \tag{15b}$$
$$D_{ii} = 0, \tag{15c}$$
$$D_{ij} = \frac{1}{\|u_j\|^2_{L^2(D)} - \|u_i\|^2_{L^2(D)}} \left( \|u_j\|^2_{L^2(D)} (G_*)_{ji} + \|u_i\|^2_{L^2(D)} (G_*)_{ij} \right), \quad i \neq j. \tag{15d}$$

To summarize the above discussion, the DyBO formulation of the SPDE results in a set of explicit equations for all the unknown quantities. In particular, we reformulate the original SPDE into a system of $m$ stochastic ordinary differential equations for the stochastic basis $Y$ coupled with $m+1$ deterministic PDEs for the mean solution $\bar{u}$ and the spatial basis $U$. Further, from the definition of $G_*$ given in Theorem 2.1, we can rewrite $G_U$ and $G_Y$ as

$$G_U = \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] - U G_*, \qquad G_Y = \left\langle \widetilde{\mathcal{L}\tilde{u}}, U \right\rangle \Lambda^{-1} - Y G_*^{\mathsf T}.$$

From the above identities, we can rewrite the DyBO formulation (13) as follows:

$$\frac{\partial \bar{u}}{\partial t} = \mathbb{E}[\mathcal{L}\tilde{u}], \tag{16a}$$
$$\frac{\partial U}{\partial t} = -U D^{\mathsf T} + \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right], \tag{16b}$$
$$\frac{\partial Y}{\partial t} = -Y C^{\mathsf T} + \left\langle \widetilde{\mathcal{L}\tilde{u}}, U \right\rangle \Lambda^{-1}, \tag{16c}$$

where we have used the compatibility condition (14c) for $C$ and $D$, i.e., $D^{\mathsf T} + C = G_*$. The initial conditions of $\bar{u}$, $U$ and $Y$ are given by the KL expansion of $u_0(x, \omega)$ in Eq. (1b).
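As an illustration of Theorem 2.1, the entry-wise formulas (15) translate directly into a small solver. The sketch below is our own (not code from the paper) and assumes that $G_*$ and the eigenvalues $\lambda_i = \|u_i\|^2_{L^2(D)}$ have already been assembled.

```python
# A sketch of the entry-wise solution of Theorem 2.1. G_star is the m x m matrix
# G_* = Lambda^{-1} <U^T, E[tilde{L u} Y]>; lam[i] = ||u_i||^2 must be distinct.
import numpy as np

def solve_C_D(G_star, lam, rel_tol=1e-12):
    """Return the m x m matrices C and D of Theorem 2.1 (Eqs. 15a-15d)."""
    m = len(lam)
    C = np.zeros((m, m))
    D = np.zeros((m, m))
    for i in range(m):
        C[i, i] = G_star[i, i]                       # C_ii = (G_*)_ii, D_ii = 0
        for j in range(m):
            if i == j:
                continue
            gap = lam[j] - lam[i]
            if abs(gap) < rel_tol * max(lam[i], lam[j]):
                raise RuntimeError("near eigenvalue crossing: freeze U or Y instead")
            C[i, j] = lam[j] * (G_star[i, j] + G_star[j, i]) / gap
            D[i, j] = (lam[j] * G_star[j, i] + lam[i] * G_star[i, j]) / gap
    return C, D
```

A quick consistency check on the output is that $D$ is anti-symmetric and $D^{\mathsf T} + C$ reproduces $G_*$, in line with (14b) and (14c).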

We can derive the boundary conditions for $\bar{u}$ and the spatial basis $U$ by taking expectations of the boundary condition. For example, if $\mathcal{B}$ is a linear deterministic differential operator, we have $\mathcal{B}\bar{u}(x,t,\omega)|_{x \in \partial D} = \mathbb{E}[h(x,t,\omega)]$ and $\mathcal{B}u_i(x,t,\omega)|_{x \in \partial D} = \mathbb{E}[h(x,t,\omega) Y_i(\omega,t)]$. If the periodic boundary condition is used in Eq. (1c), then $\bar{u}$ and the spatial basis $U$ satisfy the periodic boundary condition.

Remark 2.1. In the event that two eigenvalues approach each other and then separate as time increases, we refer to this as an eigenvalue crossing. The breakdown in determining the matrices $C$ and $D$ when we have eigenvalue crossings reflects the extra degrees of freedom in choosing a complete set of orthogonal eigenvectors for repeated eigenvalues. To overcome the difficulty associated with eigenvalue crossings, we can detect such an event and temporarily freeze the spatial basis $U$ or the stochastic basis $Y$ and continue to evolve the system for a short duration. Once the two eigenvalues separate, the solution can be recast into the bi-orthogonal form via the KL expansion. Recall that one of $U$ and $Y$ is always kept orthogonal even during the freezing stage. We can therefore devise a KL expansion algorithm which avoids the need to form the covariance function explicitly. More details will be discussed in the section on numerical implementation.

3. Theoretical analysis

3.1. Bi-orthogonality preservation

In the previous section, we used the bi-orthogonality condition in deriving our DyBO method. Since we have made a number of approximations by using a finite truncation of the KL expansion and performing various projections, these formal derivations do not necessarily guarantee that the DyBO method actually preserves the bi-orthogonality condition. In this section, we show that the DyBO formulation (16) preserves the bi-orthogonality of the spatial basis $U$ and the stochastic basis $Y$ for $t \in [0, T]$ if the bi-orthogonality condition is satisfied initially.

Theorem 3.1 (Preservation of bi-orthogonality in the DyBO formulation). Assume that there is no eigenvalue crossing up to time $T > 0$, i.e., $\|u_j\|_{L^2(D)} \neq \|u_i\|_{L^2(D)}$ for $i \neq j$. The solutions $U$ and $Y$ of system (16) satisfy the bi-orthogonality condition (5) exactly as long as the initial conditions $U(x,0)$ and $Y(\omega,0)$ satisfy the bi-orthogonality condition (5). Moreover, if we write $\chi_{i,j}(t) = \langle u_i, u_j \rangle(t)$ for $i > j$ and denote by $\chi(t) = (\chi_{i,j}(t))_{i>j}$ the corresponding column vector in $\mathbb{R}^{m(m-1)/2}$, then there exists an anti-symmetric matrix $W(t) \in \mathbb{R}^{\frac{m(m-1)}{2} \times \frac{m(m-1)}{2}}$ such that

$$\frac{d\chi(t)}{dt} = W(t)\chi(t), \tag{17a}$$
$$\chi(0) = 0. \tag{17b}$$

Similar results also hold for the stochastic basis $Y$.

Remark 3.1. The fact that $W(t)$ is anti-symmetric is important for the stability of our DyBO method in preserving the dynamic bi-orthogonality of the spatial and stochastic bases. Due to numerical round-off errors, discretization errors, etc., $U$ and $Y$ may not be perfectly bi-orthogonal at the beginning, or may lose bi-orthogonality later in the computation. The above theorem sheds some light on the numerical stability of the DyBO formulation in this scenario. Since the eigenvalues of an anti-symmetric matrix are purely imaginary or zero and their geometric multiplicities are always one (see [22]), any deviation from the bi-orthogonal form will not be amplified.

Proof of Theorem 3.1. We only give the proof for the spatial basis $U$, since the proof for the stochastic basis $Y$ is similar. Direct computations give

$$\frac{d}{dt}\langle U^{\mathsf T}, U \rangle = \left\langle \frac{\partial U^{\mathsf T}}{\partial t}, U \right\rangle + \left\langle U^{\mathsf T}, \frac{\partial U}{\partial t} \right\rangle
= -D \langle U^{\mathsf T}, U \rangle + \left\langle \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right]^{\mathsf T}, U \right\rangle - \langle U^{\mathsf T}, U \rangle D^{\mathsf T} + \left\langle U^{\mathsf T}, \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] \right\rangle
= -D \langle U^{\mathsf T}, U \rangle - \langle U^{\mathsf T}, U \rangle D^{\mathsf T} + G_*^{\mathsf T}\Lambda + \Lambda G_*, \tag{18}$$

where we have used the definition of $G_*$ (see Theorem 2.1) and (A.3) in the last equality. For each entry with $i \neq j$ in (18), we get

$$\begin{aligned}
\frac{d}{dt}\langle u_i, u_j \rangle &= -\sum_{k=1}^{m} D_{ik}\langle u_k, u_j \rangle - \sum_{l=1}^{m} \langle u_i, u_l \rangle D_{jl} + (G_*)_{ji}\|u_j\|^2_{L^2(D)} + (G_*)_{ij}\|u_i\|^2_{L^2(D)} \\
&= -\sum_{k=1,\, k\neq j}^{m} D_{ik}\langle u_k, u_j \rangle - \sum_{l=1,\, l\neq i}^{m} \langle u_i, u_l \rangle D_{jl} - D_{ij}\|u_j\|^2_{L^2(D)} - D_{ji}\|u_i\|^2_{L^2(D)} + (G_*)_{ji}\|u_j\|^2_{L^2(D)} + (G_*)_{ij}\|u_i\|^2_{L^2(D)} \\
&= -\sum_{k=1,\, k\neq i,j}^{m} D_{ik}\langle u_k, u_j \rangle - \sum_{l=1,\, l\neq i,j}^{m} D_{jl}\langle u_i, u_l \rangle,
\end{aligned} \tag{19}$$

where Eq. (15d) and the anti-symmetry of the matrix $D$ have been used. From Eq. (19), we see that $\chi(t)$ satisfies a linear ODE system of the form (17) with the matrix $W(t)$ given in terms of the $D_{ij}$, and the initial condition follows from the fact that $U(x,0)$ is a set of orthogonal basis functions. It is easy to see that, as long as the solutions of system (16) exist, $W(t)$ is well defined and the ODE system admits only the zero solution, i.e., the orthogonality of $U$ is preserved for $t \in [0, T]$. Similar arguments can be used to show that $Y$ remains orthonormal for $t \in [0, T]$.

Next, we show that the matrix $W$ in the ODE system (17) is anti-symmetric. Consider two pairs of indices $(i,j)$ and $(p,q)$ with $i > j$ and $p > q$. The corresponding equations for $\chi_{i,j}$ and $\chi_{p,q}$ are

$$\frac{d\chi_{i,j}}{dt} = -\sum_{k=1,\, k\neq i,j}^{m} D_{ik}\chi_{k,j} - \sum_{l=1,\, l\neq i,j}^{m} D_{jl}\chi_{i,l}, \tag{20a}$$
$$\frac{d\chi_{p,q}}{dt} = -\sum_{k=1,\, k\neq p,q}^{m} D_{pk}\chi_{k,q} - \sum_{l=1,\, l\neq p,q}^{m} D_{ql}\chi_{p,l}, \tag{20b}$$

where we have identified $\chi_{i,j}$ with $\chi_{j,i}$. First, we consider the case where the two index pairs are identical. We see from Eq. (20a) that there is no $\chi_{i,j}$ on the right-hand side, so $W_{(i,j),(i,j)} = 0$. Next, we consider the case where none of the indices in one pair is equal to any index in the other, i.e., $i \neq p, q$ and $j \neq p, q$. It is obvious that no $\chi_{p,q}$ appears in the summations on the right-hand side of Eq. (20a), which implies $W_{(i,j),(p,q)} = 0$. Similarly, we have $W_{(p,q),(i,j)} = 0$. Last, we consider the case in which one index in one pair is equal to one index in the other pair, but the remaining two indices are not equal. Without loss of generality, we assume $i = p$ and $j \neq q$. Isolating the relevant term in the second summation of Eq. (20a), we have

$$\frac{d\chi_{i,j}}{dt} = \cdots - D_{jq}\chi_{i,q} - \cdots = \cdots - D_{jq}\chi_{p,q} - \cdots,$$

where the last equality is due to $i = p$. Similarly, from Eq. (20b), we have

$$\frac{d\chi_{p,q}}{dt} = \cdots - D_{qj}\chi_{p,j} - \cdots = \cdots - D_{qj}\chi_{i,j} - \cdots.$$

The above two equations imply that $W_{(i,j),(p,q)} = -D_{jq} = D_{qj} = -W_{(p,q),(i,j)}$. Similar arguments apply to the other cases. This completes the proof.

3.2. Error analysis

In this section, we consider the consistency of the solution obtained by the DyBO formulation (16) with the original SPDE. Theorem 3.2 gives the error analysis of the DyBO formulation.

Theorem 3.2 (Consistency between the DyBO formulation and the original SPDE). The stochastic solution of system (16) satisfies the following modified SPDE

$$\frac{\partial \tilde{u}}{\partial t} = \mathcal{L}\tilde{u} + e_m, \tag{21}$$

where $e_m$ is the error due to the $m$-term truncation and

$$e_m = -\Pi_Y^{\perp} \Pi_U^{\perp}\left( \widetilde{\mathcal{L}\tilde{u}} \right). \tag{22}$$

Proof. We show consistency by computing directly from Eqs. (16b) and (16c):

$$\frac{\partial U}{\partial t} Y^{\mathsf T} = -U D^{\mathsf T} Y^{\mathsf T} + \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] Y^{\mathsf T}, \qquad U \frac{\partial Y^{\mathsf T}}{\partial t} = -U C Y^{\mathsf T} + U \Lambda^{-1} \left\langle U^{\mathsf T}, \widetilde{\mathcal{L}\tilde{u}} \right\rangle.$$

Combining the above two identities with Eq. (16a), we have

$$\frac{\partial \tilde{u}}{\partial t} = \frac{\partial \bar{u}}{\partial t} + \frac{\partial U}{\partial t} Y^{\mathsf T} + U \frac{\partial Y^{\mathsf T}}{\partial t} = \mathcal{L}\tilde{u} + e_m,$$

where

$$e_m = -\widetilde{\mathcal{L}\tilde{u}} + \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] Y^{\mathsf T} + U \Lambda^{-1} \left\langle U^{\mathsf T}, \widetilde{\mathcal{L}\tilde{u}} - \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] Y^{\mathsf T} \right\rangle = -\Pi_U^{\perp} \Pi_Y^{\perp}\left( \widetilde{\mathcal{L}\tilde{u}} \right),$$

and the compatibility condition (14c) has been used in deriving the first equality. By noticing that $\widetilde{\mathcal{L}\tilde{u}} = \mathcal{L}\tilde{u} - \mathbb{E}[\mathcal{L}\tilde{u}]$ and that $\Pi_U^{\perp}$ and $\Pi_Y^{\perp}$ commute, we prove Eq. (22).

Remark 3.2. According to Theorem 3.1, the bi-orthogonality of $U$ and $Y$ is preserved for all time. Because both Hilbert spaces $L^2(D)$ and $L^2(\Omega)$ are separable, the spatial basis $U$ and the stochastic basis $Y$ become complete bases of $L^2(D)$ and $L^2(\Omega)$ as $m \to +\infty$, respectively, which implies $\lim_{m \to +\infty} e_m = 0$.

Next we consider a special case where the differential operator $\mathcal{L}$ is deterministic and linear, such as $\mathcal{L}v = c(x)\,\frac{\partial v}{\partial x}$ or $\mathcal{L}v = \sum_{i,j} \frac{\partial}{\partial x_i}\!\left( a_{ij}(x) \frac{\partial v}{\partial x_j} \right)$, where $c(x)$ and $a_{ij}(x)$ are deterministic functions. Only the initial conditions are assumed to be random, i.e., the randomness propagates into the system only through the initial conditions.

Corollary 3.3. Let the differential operator $\mathcal{L}$ be deterministic and linear. Then the residual in Eq. (21) is zero for all time, i.e., $e_m(t) \equiv 0$, $t > 0$.

Proof. This can be seen by directly computing the truncation error $e_m$. Since the differential operator $\mathcal{L}$ is deterministic and linear, we have $\mathcal{L}\tilde{u} = \mathcal{L}\bar{u} + (\mathcal{L}U) Y^{\mathsf T}$, where $\mathcal{L}U = (\mathcal{L}u_1, \mathcal{L}u_2, \dots, \mathcal{L}u_m)$, and hence $\widetilde{\mathcal{L}\tilde{u}} = \mathcal{L}\tilde{u} - \mathbb{E}[\mathcal{L}\tilde{u}] = (\mathcal{L}U) Y^{\mathsf T}$. This gives

$$e_m = -\Pi_Y^{\perp} \Pi_U^{\perp}\left( (\mathcal{L}U) Y^{\mathsf T} \right) = -\Pi_Y^{\perp}\left( \left( \mathcal{L}U - U\Lambda^{-1}\langle U^{\mathsf T}, \mathcal{L}U \rangle \right) Y^{\mathsf T} \right) = 0.$$

The last equality holds because $\left(\Pi_U^{\perp}\mathcal{L}U\right) Y^{\mathsf T}$ lies in the span of $Y$, so its projection onto the orthogonal complement of $Y$ vanishes.

The above corollary implies that DyBO is exact if the randomness can be expressed in a finite-term KL expansion. The next corollary concerns a slightly different case where the differential operator $\bar{\mathcal{L}}$ is affine in the sense that $\bar{\mathcal{L}}u = \mathcal{L}u + f(x,t,\omega)$ and the differential operator $\mathcal{L}$ is linear and deterministic. The stochastic forcing satisfies $f(\cdot,t,\cdot) \in L^2(D \times \Omega)$ for all $t$.

Corollary 3.4. If the differential operator $\bar{\mathcal{L}}$ is affine, i.e., $\bar{\mathcal{L}}u = \mathcal{L}u + f(x,t,\omega)$, and $f$ is a second-order stochastic process at each fixed time $t$, then the residual in Eq. (22) is given by

$$e_m = -\Pi_Y^{\perp} \Pi_U^{\perp} \tilde{f}, \qquad \tilde{f} = f - \mathbb{E}[f].$$

Proof. By the linearity of the differential operator $\mathcal{L}$, a direct computation gives

$$\bar{\mathcal{L}}\tilde{u} = \mathcal{L}\bar{u} + (\mathcal{L}U) Y^{\mathsf T} + f.$$

A simple calculation shows that

$$\widetilde{\bar{\mathcal{L}}\tilde{u}} = \bar{\mathcal{L}}\tilde{u} - \mathbb{E}[\bar{\mathcal{L}}\tilde{u}] = (\mathcal{L}U) Y^{\mathsf T} + f - \mathbb{E}[f].$$

Substituting this into Eq. (22) and using the argument of Corollary 3.3 for the term $(\mathcal{L}U)Y^{\mathsf T}$, we complete the proof.

Remark 3.3. For this special case, Corollary 3.4 implies that the numerical solutions are accurate as long as the spatial basis $U$ and the stochastic basis $Y$ provide good approximations to the external forcing term $f$, which is not surprising at all.

4. Numerical implementation

4.1. Representation of the stochastic basis Y

The DyBO formulation (16) is a combination of deterministic PDEs (16a) and (16b) and SODEs (16c). For the deterministic PDEs in the DyBO formulation, we can apply suitable spatial discretization schemes and numerical integrators to solve them numerically. One of the conceptual difficulties, from the viewpoint of classical numerical PDEs, involves representations of random variables or functions defined on the abstract probability space $\Omega$. Essentially, there are three ways to represent the stochastic basis $Y(\omega, t)$ numerically: ensemble representations in sampling methods, such as the Monte Carlo method and sparse-grid-based stochastic collocation methods [, 8], and spectral representations, such as the generalized polynomial chaos method [54, 55, 5, 28, 34] or the wavelet-based chaos method [36].

4.1.1. Ensemble representation (DyBO-MC)

In the MC framework, the stochastic basis $Y(\omega, t)$ is represented by an ensemble of realizations, i.e.,

$$Y(\omega, t) \approx \left\{ Y(\omega_1, t), Y(\omega_2, t), Y(\omega_3, t), \dots, Y(\omega_{N_r}, t) \right\},$$

where $N_r$ is the total number of realizations. Then the expectations in the DyBO formulation (16) can be replaced by ensemble averages, i.e.,

$$\frac{\partial \bar{u}}{\partial t}(x,t) = \frac{1}{N_r}\sum_{i=1}^{N_r} \mathcal{L}\tilde{u}(x,t,\omega_i), \tag{23a}$$
$$\frac{\partial U}{\partial t}(x,t) = -U(x,t) D(t)^{\mathsf T} + \frac{1}{N_r}\sum_{i=1}^{N_r} \widetilde{\mathcal{L}\tilde{u}}(x,t,\omega_i)\, Y(\omega_i,t), \tag{23b}$$
$$\frac{\partial Y}{\partial t}(\omega_i,t) = -Y(\omega_i,t) C(t)^{\mathsf T} + \left\langle \widetilde{\mathcal{L}\tilde{u}}(\cdot,t,\omega_i), U(\cdot,t) \right\rangle \Lambda^{-1}(t), \quad i = 1, 2, \dots, N_r. \tag{23c}$$

A minimal sketch of one such ensemble-average update is shown below; the closure for $C(t)$ and $D(t)$ is given after it.
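The sketch below is our own illustration of a single DyBO-MC time step; the forward-Euler update, the uniform grid of spacing dx, and the helper names (including the user-supplied operator `L` and the `solve_C_D` routine sketched after Theorem 2.1) are assumptions for this example, not part of the paper.

```python
# A minimal sketch of one DyBO-MC step: the expectations of (16) are replaced
# by the ensemble averages of (23), and C, D are closed via Theorem 2.1.
import numpy as np

def dybo_mc_step(u_bar, U, Y, L, dx, dt, solve_C_D):
    """u_bar: (N_x,), U: (N_x, m), Y: (N_r, m).
    L(u_bar, U, Y) returns L(u_tilde)(x, omega_i) for all realizations, shape (N_r, N_x)."""
    N_r = Y.shape[0]
    Lu = L(u_bar, U, Y)                               # L(u_tilde) at every realization
    Lu_mean = Lu.mean(axis=0)                         # ensemble average, Eq. (23a)
    Lu_fluc = Lu - Lu_mean                            # fluctuating part of L(u_tilde)
    lam = np.sum(U * U, axis=0) * dx                  # lambda_i = ||u_i||^2
    EY = Lu_fluc.T @ Y / N_r                          # ensemble estimate of E[tilde{L u} Y]
    G_star = (U.T @ EY) * dx / lam[:, None]           # Lambda^{-1} <U^T, E[tilde{L u} Y]>
    C, D = solve_C_D(G_star, lam)
    u_bar_new = u_bar + dt * Lu_mean                                  # Eq. (23a)
    U_new = U + dt * (-U @ D.T + EY)                                  # Eq. (23b)
    Y_new = Y + dt * (-Y @ C.T + (Lu_fluc @ U) * dx / lam[None, :])   # Eq. (23c)
    return u_bar_new, U_new, Y_new
```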

Here $C(t)$ and $D(t)$ can be solved from (15) with

$$G_*(\bar{u}, U, Y) = \Lambda^{-1} \left\langle U^{\mathsf T}, \frac{1}{N_r}\sum_{i=1}^{N_r} \widetilde{\mathcal{L}\tilde{u}}(x,t,\omega_i)\, Y(\omega_i,t) \right\rangle, \tag{24}$$

and

$$\widetilde{\mathcal{L}\tilde{u}}(x,t,\omega_i) = \mathcal{L}\tilde{u}(x,t,\omega_i) - \frac{1}{N_r}\sum_{i=1}^{N_r} \mathcal{L}\tilde{u}(x,t,\omega_i). \tag{25}$$

The above system is the MC version of the DyBO method, which we denote by DyBO-MC. A close examination reveals that Eq. (23a) for the mean and Eq. (23b) for the spatial basis are solved only once for all realizations at each time iteration, while Eq. (23c) for the stochastic basis is decoupled from realization to realization and can be solved simultaneously across all realizations. In this sense, DyBO-MC is semi-sampling: it preserves some desirable features of the MC method, for example, the convergence rate does not depend on the stochastic dimensionality and the solution processes of the $Y(\omega_i, t)$'s are decoupled. Clearly, not only the number of realizations but also how these realizations $\{\omega_i\}_{i=1}^{N_r}$ are chosen in $\Omega$ affects the numerical accuracy and efficiency. The quasi-Monte Carlo method [43, 4], variance reduction, and other techniques may be combined with DyBO-MC to further improve its efficiency and accuracy. However, the disadvantage of the MC method is its slow convergence. Developing an effective sampling version of the DyBO method, such as one based on the multi-level Monte Carlo method [24], is currently under investigation.

4.1.2. Spectral representation (DyBO-gPC)

In many physical and engineering problems, randomness generally comes from various independent sources, so the randomness in an SPDE is often given in terms of independent random variables $\xi_i(\omega)$. Throughout this paper, we assume only a finite number of independent random variables are involved, i.e., $\xi(\omega) = (\xi_1(\omega), \xi_2(\omega), \dots, \xi_r(\omega))$, where $r$ is the number of such random variables. Without loss of generality, we can further assume they all have the identical distribution $\rho$. Thus, the solution of the SPDE is a functional of these random variables, i.e., $u(x,t,\omega) = u(x,t,\xi(\omega))$. When the SPDE is driven by some known stochastic process, such as Brownian motion $B_t$, by the Cameron-Martin theorem [9] the stochastic solution can be approximated by a functional of independent identically distributed standard Gaussian variables, i.e., $u(x,t,\omega) \approx u(x,t,\xi(\omega))$ with each $\xi_i$ a standard normal random variable. In this paper, we only consider the case where the randomness is given in terms of independent standard Gaussian random variables, i.e., $\xi_i \sim \mathcal{N}(0,1)$, and the $H_{\alpha}$ are a set of tensor products of Hermite polynomials. In this case, the method described above is also known as the Wiener chaos expansion [52, 9, 2, 3, 28]. When the distribution of $\xi_i$ is not Gaussian, the discussion remains almost identical and the major difference is the set of orthogonal polynomials.

We denote by $\{H_i(\xi)\}_{i=0}^{\infty}$ the one-dimensional polynomials orthogonal with respect to the common distribution $\rho(z)$, i.e.,

$$\int H_i(\xi) H_j(\xi)\, \rho(\xi)\, d\xi = \delta_{ij}.$$

For some common distributions, such as the Gaussian or uniform distribution, such polynomial sets are well known and well studied, many of which fall within the Askey scheme; see [54]. For general distributions, such polynomial sets can be obtained by numerical methods; see [5]. Furthermore, by a tensor-product construction, we can use the one-dimensional polynomials $H_i(\xi)$ to construct a complete orthonormal basis $\{H_{\alpha}(\xi)\}$ of $L^2(\Omega)$ as follows:

$$H_{\alpha}(\xi) = \prod_{i=1}^{r} H_{\alpha_i}(\xi_i), \quad \alpha \in \mathcal{J}_r,$$

where $\alpha$ is a multi-index, i.e., a row vector of non-negative integers,

$$\alpha = (\alpha_1, \alpha_2, \dots, \alpha_r), \quad \alpha_i \in \mathbb{N}, \ \alpha_i \geq 0, \ i = 1, 2, \dots, r,$$

and $\mathcal{J}_r$ is a multi-index set of countable cardinality,

$$\mathcal{J}_r = \left\{ \alpha = (\alpha_1, \alpha_2, \dots, \alpha_r) \mid \alpha_i \geq 0, \ \alpha_i \in \mathbb{N} \right\} \setminus \{\mathbf{0}\}.$$

We have intentionally removed the zero multi-index corresponding to $H_{\mathbf{0}}(\xi) = 1$, since it is associated with the mean of the stochastic solution and, according to our experience, it is better to deal with the mean separately. When no ambiguity arises, we simply write the multi-index set $\mathcal{J}_r$ as $\mathcal{J}$. Clearly, the cardinality of $\mathcal{J}_r$ is infinite. For the purpose of numerical computations, we prefer a polynomial set of finite size. There are many choices of truncation, such as the set of polynomials whose total order is at most $p$, i.e.,

$$\mathcal{J}_r^p = \left\{ \alpha \ \middle|\ \alpha = (\alpha_1, \alpha_2, \dots, \alpha_r), \ \alpha_i \geq 0, \ \alpha_i \in \mathbb{N}, \ |\alpha| = \sum_{i=1}^{r} \alpha_i \leq p \right\} \setminus \{\mathbf{0}\},$$

and the sparse truncations proposed in [28]; see also Luo's thesis [33]. Again, we may simply write such a truncated set as $\mathcal{J}$ when no ambiguity arises. The cardinality of $\mathcal{J}$, or the total number of polynomial basis functions, denoted by $N_p = |\mathcal{J}|$, is equal to $\frac{(p+r)!}{p!\, r!} - 1$. As we can see, $N_p$ grows exponentially fast as $p$ and $r$ increase. This is also known as the curse of dimensionality.

By the Cameron-Martin theorem [9], we know that the solution of the SPDE admits a generalized polynomial chaos expansion (gPCE)

$$u(x,t,\omega) = v(x,t) + \sum_{\alpha \in \mathcal{J}} v_{\alpha}(x,t) H_{\alpha}(\xi(\omega)) \approx v(x,t) + \sum_{\alpha \in \mathcal{J}_r^p} v_{\alpha}(x,t) H_{\alpha}(\xi(\omega)). \tag{26}$$

If we write

$$\mathbf{H}(\xi) = \left( H_{\alpha_1}(\xi), H_{\alpha_2}(\xi), \dots, H_{\alpha_{N_p}}(\xi) \right)_{\alpha_i \in \mathcal{J}}, \qquad V(x,t) = \left( v_{\alpha_1}(x,t), v_{\alpha_2}(x,t), \dots, v_{\alpha_{N_p}}(x,t) \right)_{\alpha_i \in \mathcal{J}},$$

both of which are row vectors, the above gPCE can be written compactly in vector form,

$$u_{\mathrm{gPC}}(x,t,\omega) = v(x,t,\omega) = v(x,t) + V(x,t)\, \mathbf{H}^{\mathsf T}(\xi). \tag{27}$$

By substituting the above gPCE into Eq. (1a), it is easy to derive the gPC formulation of the SPDE,

$$\frac{\partial v}{\partial t} = \mathbb{E}[\mathcal{L}v], \tag{28a}$$
$$\frac{\partial V}{\partial t} = \mathbb{E}\!\left[\mathcal{L}v\, \mathbf{H}\right]. \tag{28b}$$

Now we turn our attention to the gPC version of the DyBO formulation (DyBO-gPC). The Cameron-Martin theorem also implies that the stochastic basis $Y_i(\omega, t)$ in the KL expansion (7) can be approximated by a linear combination of polynomial chaos basis functions, i.e.,

$$Y_i(\omega, t) = \sum_{\alpha \in \mathcal{J}} H_{\alpha}(\xi(\omega))\, a_{\alpha i}(t), \quad i = 1, 2, \dots, m, \tag{29}$$

or in matrix form,

$$Y(\omega, t) = \mathbf{H}(\xi(\omega))\, A, \tag{30}$$

where $A \in \mathbb{R}^{N_p \times m}$. The expansion (7) now reads

$$\tilde{u} = \bar{u} + U A^{\mathsf T} \mathbf{H}^{\mathsf T}. \tag{31}$$
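To make the size of the truncated basis concrete, the following small helper (our own construction, not code from the paper) enumerates $\mathcal{J}_r^p$ and confirms the count $N_p = \frac{(p+r)!}{p!\,r!} - 1$.

```python
# Enumerate the truncated multi-index set J_r^p (total order at most p, zero
# multi-index removed) and check its cardinality against (p+r)!/(p! r!) - 1.
from itertools import product
from math import comb

def multi_index_set(r, p):
    """Return the list of multi-indices alpha in J_r^p (zero index excluded)."""
    return [alpha for alpha in product(range(p + 1), repeat=r) if 0 < sum(alpha) <= p]

r, p = 3, 4
J = multi_index_set(r, p)
print(len(J), comb(p + r, r) - 1)   # both print 34 for r = 3, p = 4
```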

We can now derive equations for $\bar{u}$, $U$ and $A$, instead of $\bar{u}$, $U$ and $Y$. In other words, the stochastic basis $Y$ is identified with a matrix $A \in \mathbb{R}^{N_p \times m}$ in DyBO-gPC. Substituting Eq. (30) into Eq. (16c) and using the identity $\mathbb{E}[\mathbf{H}^{\mathsf T}\mathbf{H}] = I$, we get the DyBO-gPC formulation of the SPDE,

$$\frac{\partial \bar{u}}{\partial t} = \mathbb{E}[\mathcal{L}\tilde{u}], \tag{32a}$$
$$\frac{\partial U}{\partial t} = -U D^{\mathsf T} + \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, \mathbf{H}\right] A, \tag{32b}$$
$$\frac{dA}{dt} = -A C^{\mathsf T} + \mathbb{E}\!\left[\mathbf{H}^{\mathsf T} \left\langle \widetilde{\mathcal{L}\tilde{u}}, U \right\rangle\right] \Lambda^{-1}, \tag{32c}$$

where $C(t)$ and $D(t)$ can be solved from (15) with

$$G_*(\bar{u}, U, Y) = \Lambda^{-1}\left\langle U^{\mathsf T}, \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right] \right\rangle = \Lambda^{-1}\left\langle U^{\mathsf T}, \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, \mathbf{H}\right] \right\rangle A.$$

By solving the system (32), we obtain an approximate solution to the SPDE,

$$u_{\mathrm{DyBO\text{-}gPC}} = \bar{u} + U A^{\mathsf T} \mathbf{H}^{\mathsf T}, \tag{33}$$

or simply $u_{\mathrm{DyBO}}$ when no ambiguity arises.

4.1.3. gSC representation (DyBO-gSC)

In our DyBO-gPC method, the stochastic process $Y_i(\xi(\omega), t)$ is projected onto the gPC basis $\mathbf{H}$ and replaced by the gPC coefficients $a_{\alpha i}$, which can be computed by $a_{\alpha i} = \mathbb{E}[Y_i(\omega) H_{\alpha}(\xi(\omega))]$. Numerically, this integral can be evaluated with high accuracy on some sparse grid. In other words, the stochastic basis $Y$ can also be represented as an ensemble of realizations $Y(\omega_i)$, where the $\xi(\omega_i)$'s are nodes of some sparse grid associated with certain weights $w_i$. The gSC version of the DyBO formulation, which we denote by DyBO-gSC, is

$$\frac{\partial \bar{u}}{\partial t}(x,t) = \sum_{i=1}^{N_s} w_i\, \mathcal{L}\tilde{u}(x,t,\omega_i),$$
$$\frac{\partial U}{\partial t}(x,t) = -U(x,t) D(t)^{\mathsf T} + \sum_{i=1}^{N_s} w_i\, \widetilde{\mathcal{L}\tilde{u}}(x,t,\omega_i)\, Y(\omega_i,t),$$
$$\frac{\partial Y}{\partial t}(\omega_i,t) = -Y(\omega_i,t) C(t)^{\mathsf T} + \left\langle \widetilde{\mathcal{L}\tilde{u}}(\cdot,t,\omega_i), U(\cdot,t) \right\rangle \Lambda^{-1}(t), \quad i = 1, 2, \dots, N_s,$$

where $\{\omega_i\}_{i=1}^{N_s}$ are the sparse-grid points, $\{w_i\}_{i=1}^{N_s}$ are the corresponding weights, $N_s$ is the number of sparse-grid points, and $C(t)$, $D(t)$ can be solved from (15) with

$$G_*(\bar{u}, U, Y) = \Lambda^{-1}\left\langle U^{\mathsf T}, \sum_{i=1}^{N_s} w_i\, \widetilde{\mathcal{L}\tilde{u}}(x,t,\omega_i)\, Y(\omega_i,t) \right\rangle. \tag{34}$$

The primary focus of this paper is on DyBO-gPC methods. We will provide details and numerical examples of DyBO-gSC methods in future work.

4.2. Eigenvalue crossings

As the system evolves, the eigenvalues of the different modes in the KL expansion of the SPDE solution may increase or decrease. Some of them may approach each other at some time, cross, and then separate, as illustrated in Fig. 2.

[Figure 2: Illustration of eigenvalue crossing. The eigenvalues $\lambda_i(t)$ are plotted against time $t$; $\lambda_1$ and $\lambda_2$ cross near $t_1$, and $\lambda_1$ and $\lambda_3$ near $t_2$. The algorithm enters a $U$-stage ($U(x,t) \equiv U(x,s)$) or a $Y$-stage ($Y(\omega,t) \equiv Y(\omega,s)$) when $\tau < \delta\tau_{\mathrm{in}}$ and exits when $\tau > \delta\tau_{\mathrm{out}}$; during a stage, $\tau$ is checked every $k_Y$ time steps.]

In the figure, $\lambda_1$ and $\lambda_2$ cross each other at $t_1$ and $\lambda_1$ and $\lambda_3$ cross each other at $t_2$. In this case, if $C$ and $D$ continue to be solved from the linear system (14) via (15), numerical errors will pollute the results. Here, we propose to freeze $U$ or $Y$ temporarily for a short time and to continue evolving the system using the different equations derived below. At the end of this short duration, the solution is recast into the bi-orthogonal form via the KL expansion, which can be achieved efficiently since $U$ or $Y$ is still kept orthogonal during this short duration. We are also currently exploring alternative approaches to overcome the eigenvalue-crossing problem and will report our results in a subsequent paper.

In order to apply the above strategy, we have to be able to detect that two eigenvalues may potentially cross each other in the near future. There are several ways to detect such a crossing. Here we propose to monitor the quantity

$$\tau = \min_{i \neq j} \frac{|\lambda_i - \lambda_j|}{\max(\lambda_i, \lambda_j)}. \tag{35}$$

Once this quantity drops below a certain threshold $\delta\tau_{\mathrm{in}} \in (0, 1)$, we adopt the algorithm of freezing $U$ or $Y$ to evolve the system and continue to monitor the quantity $\tau$. When $\tau$ exceeds some pre-specified threshold $\delta\tau_{\mathrm{out}} \in (0, 1)$, the algorithm recasts the solution into the bi-orthogonal form via an efficient algorithm detailed in the next two subsections and continues to evolve the DyBO system. The thresholds $\delta\tau_{\mathrm{in}}$ and $\delta\tau_{\mathrm{out}}$ may be problem-dependent. In our numerical experiments, we found that percent-level choices of $\delta\tau_{\mathrm{in}}$ and $\delta\tau_{\mathrm{out}}$ gave accurate results.

Here we only describe the basic idea of freezing the stochastic basis $Y$, the $Y$-stage algorithm. The method of freezing the spatial basis $U$, the $U$-stage algorithm, can be obtained analogously; see [] for more details. Suppose at some time $t = s$ a potential eigenvalue crossing is detected and the stochastic basis $Y$ is frozen for a short duration $\Delta s$, i.e., $Y(\omega, t) \equiv Y(\omega, s)$ for $t \in [s, s+\Delta s]$, which we call the $Y$-stage. Because $Y$ is orthonormal, the solution of the SPDE admits the following approximation by a truncated expansion:

$$\tilde{u}(x,t,\omega) = \bar{u}(x,t) + \sum_{i=1}^{m} u_i(x,t) Y_i(\omega,s) = \bar{u}(x,t) + U(x,t)\, Y^{\mathsf T}(\omega,s). \tag{36}$$

It is easy to derive a new evolution system for this stage,

$$\frac{\partial \bar{u}}{\partial t} = \mathbb{E}[\mathcal{L}\tilde{u}], \tag{37a}$$
$$\frac{\partial U}{\partial t} = \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, Y\right]. \tag{37b}$$

During this stage, $Y$ is unchanged and orthonormal, but $U$ changes in time and will not maintain its orthogonality. The solution is no longer bi-orthogonal, so the eigenvalues can no longer be computed via $\lambda_i = \|u_i\|^2_{L^2(D)}$.
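The detection logic of (35) and the entering/exiting thresholds can be sketched as a small helper; the function names and structure below are our own illustration.

```python
# Crossing detector built around the quantity tau of Eq. (35): enter a freezing
# (U- or Y-) stage when tau < dtau_in, exit and recast once tau > dtau_out.
import numpy as np

def crossing_indicator(lam):
    """tau = min_{i != j} |lambda_i - lambda_j| / max(lambda_i, lambda_j)."""
    lam = np.asarray(lam, dtype=float)
    tau = np.inf
    for i in range(len(lam)):
        for j in range(i + 1, len(lam)):
            tau = min(tau, abs(lam[i] - lam[j]) / max(lam[i], lam[j]))
    return tau

def update_stage(lam, in_freezing_stage, dtau_in, dtau_out):
    """Return True if the DyBO system should (still) be evolved in a freezing stage."""
    tau = crossing_indicator(lam)
    if not in_freezing_stage:
        return tau < dtau_in          # enter the stage when two eigenvalues get too close
    return not (tau > dtau_out)       # leave the stage only once they have separated
```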

Next, we derive a formula for the eigenvalues in this stage and then show how the solution is recast into the bi-orthogonal form at the end of the stage, i.e., at $t = s + \Delta s$. The covariance function can be computed as

$$\mathrm{Cov}_{\tilde{u}}(x,y) = \mathbb{E}\left[ (\tilde{u}(x) - \bar{u}(x))(\tilde{u}(y) - \bar{u}(y)) \right] = \mathbb{E}\left[ U(x) Y^{\mathsf T} Y\, U^{\mathsf T}(y) \right] = U(x)\, U^{\mathsf T}(y),$$

where the time $t$ is omitted for simplicity and $\mathbb{E}[Y^{\mathsf T} Y] = I$ has been used. By some stable orthogonalization procedure, such as the modified Gram-Schmidt algorithm, $U(x)$ can be written as

$$U(x) = Q(x) R, \tag{38}$$

where $Q(x) = (q_1(x), q_2(x), \dots, q_m(x))$, $q_i(x) \in L^2(D)$ for $i = 1, 2, \dots, m$, $\langle Q^{\mathsf T}(x), Q(x) \rangle = I$, and $R \in \mathbb{R}^{m \times m}$. Here $R R^{\mathsf T} \in \mathbb{R}^{m \times m}$ is a symmetric positive definite matrix whose singular value decomposition reads

$$R R^{\mathsf T} = W \Lambda_R W^{\mathsf T}, \tag{39}$$

where $W \in \mathbb{R}^{m \times m}$ is an orthonormal matrix, i.e., $W W^{\mathsf T} = W^{\mathsf T} W = I$, and $\Lambda_R$ is a diagonal matrix with positive diagonal entries. The computational cost of obtaining $W$ and $\Lambda_R$ is negligible compared to other parts of the algorithm. The covariance function can be rewritten as

$$\mathrm{Cov}_{\tilde{u}}(x,y) = Q(x) R R^{\mathsf T} Q^{\mathsf T}(y) = Q(x) W \Lambda_R W^{\mathsf T} Q^{\mathsf T}(y).$$

Now it is easy to see that the eigenfunctions of the covariance function $\mathrm{Cov}_{\tilde{u}}(x,y)$ are

$$\tilde{U}(x) = Q(x) W \Lambda_R^{1/2}, \tag{40}$$

and the eigenvalues are $\mathrm{diag}(\Lambda_R)$, because

$$\left\langle \tilde{U}^{\mathsf T}, \tilde{U} \right\rangle = \Lambda_R^{1/2} W^{\mathsf T} \left\langle Q^{\mathsf T}, Q \right\rangle W \Lambda_R^{1/2} = \Lambda_R,$$

where we have used the orthogonality of $Q$ and $W$. To compute $\tilde{Y}$, we start with the identity $\tilde{Y} \tilde{U}^{\mathsf T} = Y U^{\mathsf T}$, multiply by the row vector $\tilde{U}$ on both sides from the right, and take the inner product,

$$\tilde{Y} \left\langle \tilde{U}^{\mathsf T}, \tilde{U} \right\rangle = Y \left\langle U^{\mathsf T}, \tilde{U} \right\rangle,$$

where $\langle \tilde{U}^{\mathsf T}, \tilde{U} \rangle = \Lambda_R$ and $\langle U^{\mathsf T}, \tilde{U} \rangle = R^{\mathsf T} \langle Q^{\mathsf T}(x), Q(x) \rangle W \Lambda_R^{1/2} = R^{\mathsf T} W \Lambda_R^{1/2}$. So we obtain the stochastic basis $\tilde{Y}$, which is given by

$$\tilde{Y} = Y R^{\mathsf T} W \Lambda_R^{-1/2}. \tag{41}$$

If a generalized polynomial chaos basis $\mathbf{H}$ is adopted to represent $Y$, we freeze $A$ instead, i.e., $A(t) \equiv A(s)$, and the system (37) is replaced by

$$\frac{\partial \bar{u}}{\partial t} = \mathbb{E}[\mathcal{L}\tilde{u}], \tag{42a}$$
$$\frac{\partial U}{\partial t} = \mathbb{E}\!\left[\widetilde{\mathcal{L}\tilde{u}}\, \mathbf{H}\right] A, \tag{42b}$$

with $A(t) \equiv A(s)$. At the exit point of the stage, Eq. (41) is replaced by

$$\tilde{A} = A R^{\mathsf T} W \Lambda_R^{-1/2}. \tag{43}$$

As we can see, the computation of the eigenvalues in this stage is not trivial, so we do not want to compute $\tau$ at every time iteration. Instead, $\tau$ is evaluated only every $k_Y$ time steps to achieve a balance between computational efficiency and accuracy. See the zoom-in panel of Fig. 2 for an illustration.
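At the exit point, Eqs. (38)-(41) amount to a QR factorization followed by a small symmetric eigendecomposition. The numpy sketch below is our own, under our own discretization assumptions (uniform grid of spacing dx, equally weighted realizations); in the gPC representation the same transformation is applied to the coefficient matrix $A$ as in Eq. (43).

```python
# Reinitialization at the end of a Y-stage, Eqs. (38)-(41): recast (U, Y) into
# bi-orthogonal form without ever forming the covariance function Cov(x, y).
import numpy as np

def recast_biorthogonal(U, Y, dx):
    """U: (N_x, m) spatial modes (orthogonality lost during the stage),
    Y: (N_r, m) frozen stochastic basis with E[Y^T Y] = I (sample average).
    Returns (U_new, Y_new, lam) satisfying the bi-orthogonality condition (5)."""
    Qe, R = np.linalg.qr(U * np.sqrt(dx))      # Eq. (38): U = Q R with <q_i, q_j> = delta_ij
    Q = Qe / np.sqrt(dx)                       # undo the quadrature scaling of the columns
    lam, W = np.linalg.eigh(R @ R.T)           # Eq. (39): R R^T = W Lambda_R W^T
    lam, W = lam[::-1], W[:, ::-1]             # order the eigenvalues from largest to smallest
    U_new = (Q @ W) * np.sqrt(lam)             # Eq. (40): U_tilde = Q W Lambda_R^{1/2}
    Y_new = (Y @ R.T @ W) / np.sqrt(lam)       # Eq. (41): Y_tilde = Y R^T W Lambda_R^{-1/2}
    # By construction, U_new @ Y_new.T reproduces U @ Y.T, <U_new^T, U_new> = diag(lam),
    # and E[Y_new^T Y_new] = I, so the bi-orthogonal form is restored.
    return U_new, Y_new, lam
```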

5. Numerical examples

The previous sections highlight the analytical aspects of the DyBO formulation and its numerical algorithm. In this section, we demonstrate the effectiveness of the method through several numerical examples with increasing levels of difficulty, each of which emphasizes and verifies some of the analytical results in the previous sections. In the first example, we consider an SPDE which is driven purely by a stochastic force. The purpose of this example is to show that the DyBO formulation tracks the KL expansion. Corollary 3.3, regarding the error propagation of the DyBO method, is numerically verified in the second example, where a transport equation with a deterministic velocity and random initial conditions is considered. In the last example, we consider the Burgers equation driven by a stochastic force as an example of a nonlinear PDE driven by stochastic forces. We demonstrate the convergence of the DyBO method with respect to the number of mode pairs, $m$. In the second part of this paper [], we will consider the more challenging Navier-Stokes equations and the Boussinesq approximation with random forcing. Some important numerical issues, such as adaptivity, parallelization and computational complexity, will also be studied in detail there.

5.1. SPDE purely driven by a stochastic force

To study the convergence of our DyBO-gPC algorithm, it is desirable to construct an SPDE whose solution and KL expansion are known analytically. This allows us to perform a thorough convergence study for our algorithm, especially when we encounter eigenvalue crossings. In this example, we consider an SPDE which is purely driven by a stochastic force $f$, i.e., $\frac{\partial u}{\partial t} = \mathcal{L}u = f(x,t,\omega)$. The solution can be obtained by direct integration, i.e., $u(x,t,\omega) = u(x,0,\omega) + \int_0^t f(x,s,\omega)\, ds$. Specifically, in this section, we consider the SPDE

$$\frac{\partial u}{\partial t} = \mathcal{L}u = f(x,t,\xi(\omega)), \quad x \in D = [0,1], \ t \in [0, T], \tag{44}$$

where $\xi = (\xi_1, \xi_2, \dots, \xi_r)$ are independent standard Gaussian random variables, i.e., $\xi_i \sim \mathcal{N}(0,1)$, $i = 1, 2, \dots, r$. The exact solution is constructed as follows:

$$u(x,t,\xi) = \bar{v}(x,t) + V(x,t)\, Z^{\mathsf T}(\xi,t), \tag{45}$$

where $V(x,t) = \mathbf{V}(x) W_V(t) \Lambda_V^{1/2}(t)$, $Z(\xi,t) = \mathbf{Z}(\xi) W_Z(t)$, $\mathbf{V}(x) = (v_1(x), \dots, v_m(x))$ with $\langle v_i, v_j \rangle = \delta_{ij}$, and $\mathbf{Z}(\xi) = (Z_1(\xi), \dots, Z_m(\xi))$ with $\mathbb{E}[Z_i Z_j] = \delta_{ij}$ for $i, j = 1, 2, \dots, m$. Here $W_V(t)$ and $W_Z(t)$ are $m \times m$ orthonormal matrices and $\Lambda_V^{1/2}(t)$ is a diagonal matrix. Clearly, the exact solution $u$ is intentionally given in the form of its finite-term KL expansion, and the diagonal entries of the matrix $\Lambda_V(t)$ are its eigenvalues. By carefully choosing the eigenvalues, we can mimic various difficulties which may arise in more involved situations and devise corresponding strategies. The stochastic forcing term on the right-hand side of Eq. (44) can be obtained by differentiating the exact solution, i.e.,

$$\mathcal{L}u = f = \frac{\partial \bar{v}}{\partial t} + \frac{\partial V}{\partial t} Z^{\mathsf T} + V \frac{d Z^{\mathsf T}}{dt}. \tag{46}$$

The initial condition of the SPDE (44) is obtained by simply setting $t = 0$ in Eq. (45). From the general DyBO-gPC formulation (32), simple calculations give the DyBO-gPC formulation for the SPDE (44):

$$\frac{\partial \bar{u}}{\partial t} = \frac{\partial \bar{v}}{\partial t}, \tag{47a}$$
$$\frac{\partial U}{\partial t} = -U D^{\mathsf T} + \left( \frac{\partial V}{\partial t} W_Z^{\mathsf T} + V \frac{d W_Z^{\mathsf T}}{dt} \right) \mathbb{E}\!\left[ \mathbf{Z}^{\mathsf T} \mathbf{H} \right] A, \tag{47b}$$
$$\frac{dA}{dt} = -A C^{\mathsf T} + \mathbb{E}\!\left[ \mathbf{H}^{\mathsf T} \mathbf{Z} \right] \left\langle \left( \frac{\partial V}{\partial t} W_Z^{\mathsf T} + V \frac{d W_Z^{\mathsf T}}{dt} \right)^{\!\mathsf T}, U \right\rangle \Lambda^{-1}, \tag{47c}$$

and

$$G_*(\bar{u}, U, A) = \Lambda^{-1}\left\langle U^{\mathsf T}, \frac{\partial V}{\partial t} W_Z^{\mathsf T} + V \frac{d W_Z^{\mathsf T}}{dt} \right\rangle \mathbb{E}\!\left[\mathbf{Z}^{\mathsf T} \mathbf{H}\right] A.$$

The initial conditions are simply

$$\bar{u}(x,0) = \bar{v}(x,0), \qquad U(x,0) = V(x,0), \tag{48a}$$
$$A(0) = \mathbb{E}\!\left[\mathbf{H}^{\mathsf T} Z(\xi,0)\right] = \mathbb{E}\!\left[\mathbf{H}^{\mathsf T} \mathbf{Z}\right] W_Z(0). \tag{48b}$$

In the event of eigenvalue crossings, we have the $Y$-stage system from (42),

$$\frac{\partial \bar{u}}{\partial t} = \frac{\partial \bar{v}}{\partial t}, \qquad \frac{\partial U}{\partial t} = \left( \frac{\partial V}{\partial t} W_Z^{\mathsf T} + V \frac{d W_Z^{\mathsf T}}{dt} \right) \mathbb{E}\!\left[\mathbf{Z}^{\mathsf T}\mathbf{H}\right] A, \tag{49a}$$

or the $U$-stage system,

$$\frac{\partial \bar{u}}{\partial t} = \frac{\partial \bar{v}}{\partial t}, \qquad \frac{dA}{dt} = \mathbb{E}\!\left[\mathbf{H}^{\mathsf T}\mathbf{Z}\right] \left\langle \left( \frac{\partial V}{\partial t} W_Z^{\mathsf T} + V \frac{d W_Z^{\mathsf T}}{dt} \right)^{\!\mathsf T}, U \right\rangle \Lambda^{-1}. \tag{49b}$$

In the numerical examples presented in this section, we consider a small system ($m = 3$) and use the following settings:

$$\mathbf{V}(x) = \left( \sqrt{2}\sin(\pi x), \ \sqrt{2}\sin(5\pi x), \ \sqrt{2}\sin(9\pi x) \right), \qquad \mathbf{Z}(\xi) = \left( H_1(\xi), H_2(\xi), H_3(\xi) \right),$$
$$W_V(t) = P_V \begin{pmatrix} \cos(b_V t) & \sin(b_V t) & 0 \\ -\sin(b_V t) & \cos(b_V t) & 0 \\ 0 & 0 & 1 \end{pmatrix} P_V^{\mathsf T}, \qquad W_Z(t) = P_Z \begin{pmatrix} \cos(b_Z t) & \sin(b_Z t) & 0 \\ -\sin(b_Z t) & \cos(b_Z t) & 0 \\ 0 & 0 & 1 \end{pmatrix} P_Z^{\mathsf T},$$

where $b_V = 2$, $b_Z = 2$, $P_V$ and $P_Z$ are two orthonormal matrices generated randomly, and $\mathbf{H}(\xi) = (H_1(\xi), H_2(\xi), \dots, H_5(\xi))$.

The eigenvalues in the KL expansion of an SPDE solution may increase or decrease as time increases. Some of them may approach each other at some time, cross, and then separate later. When two eigenvalues are close or equal to each other, numerical instability may arise in solving the matrices $C$ and $D$ via (15). In Section 4.2, we proposed a $U$-stage (freezing the spatial basis $U$) and a $Y$-stage (freezing the stochastic basis $Y$) to temporarily resolve this issue. Here, we demonstrate the success of incorporating such strategies into the DyBO algorithm. To this end, we choose $T = 1.2$ and the eigenvalues

$$\Lambda_V^{1/2} = \mathrm{diag}\!\left( \sin(2\pi t) + 2, \ \cos(2\pi t) + 1.5, \ 0.8 \right),$$

so that pairs of eigenvalues cross each other at several instants in $(0, T)$; see Fig. 5. In this numerical example, the time step is $\Delta t = 10^{-3}$ and $k = 2$ in the $U$-stage, i.e., the exiting condition is checked every 2 time iterations when the system is in the $U$-stage. The mean $\mathbb{E}[u]$, the standard deviation (SD) $\sqrt{\mathrm{Var}\, u}$, and the three spatial basis functions in the KLE at time $t = T$ are shown in Fig. 3 and Fig. 4, respectively. Clearly, the results given by DyBO almost perfectly match the exact ones, with $L^2$ relative errors of the mean and SD at the level of the spatial and temporal discretization errors, since only these discretizations contribute to the numerical errors in this example. In Fig. 5, the eigenvalues are plotted as functions of time, and zoom-in views are given near $t \approx 0.675$ and $t \approx 0.7985$ to show the eigenvalue crossings and the invocation of the $U$-stage algorithm. We also check the deviation of the computed spatial and stochastic bases from bi-orthogonality by monitoring

$$\frac{\left| \langle u_i, u_j \rangle \right|}{\|u_i\|_{L^2(D)}\, \|u_j\|_{L^2(D)}}, \quad i \neq j,$$

for the spatial basis and $\mathbb{E}[Y_i Y_j]$ for the stochastic basis. The results are shown in Fig. 6. We observe that the deviation of the spatial basis from orthogonality remains very small throughout the computation. Large deviations of the stochastic basis from orthogonality occur only when eigenvalues cross each other, since the DyBO algorithm is not designed to preserve the orthogonality of $Y$ during the $U$-stage, and orthogonality is restored once the algorithm exits from the $U$-stage. We have also applied the $Y$-stage to overcome the eigenvalue-crossing issues in this numerical example, and the results are very similar.
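The deviation monitor used for Fig. 6 can be sketched as follows; the helper below is our own and assumes an equally weighted sample representation of $Y$ on a uniform grid of spacing dx.

```python
# Bi-orthogonality monitor: normalized off-diagonal inner products of the spatial
# basis and off-diagonal entries of E[Y_i Y_j] for the stochastic basis.
import numpy as np

def biorthogonality_deviation(U, Y, dx):
    """U: (N_x, m) spatial modes, Y: (N_r, m) stochastic modes (sample values).
    Returns the largest deviation of U from orthogonality and of Y from orthonormality."""
    gram_U = (U.T @ U) * dx                               # <u_i, u_j>
    norms = np.sqrt(np.diag(gram_U))
    dev_U = np.abs(gram_U / np.outer(norms, norms))       # |<u_i,u_j>| / (||u_i|| ||u_j||)
    np.fill_diagonal(dev_U, 0.0)
    gram_Y = Y.T @ Y / Y.shape[0]                         # sample estimate of E[Y_i Y_j]
    dev_Y = np.abs(gram_Y - np.eye(Y.shape[1]))
    return dev_U.max(), dev_Y.max()
```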

[Figure 3: Mean and SD computed by DyBO at time $t = 1.2$, compared with the exact solution.]

[Figure 4: The spatial basis functions $u_1(x,t)$, $u_2(x,t)$, $u_3(x,t)$ (from left to right) in the KL expansion of the SPDE solution computed by DyBO at time $t = 1.2$.]

[Figure 5: Eigenvalues computed by DyBO. Two zoom-in views are provided where eigenvalues cross.]

[Figure 6: Bi-orthogonality of the spatial and stochastic bases. (a) Orthogonality of the spatial basis $U(x,t)$: $|\langle u_i, u_j\rangle|$ for the pairs $(u_1,u_2)$, $(u_2,u_3)$, $(u_1,u_3)$. (b) Orthogonality of the stochastic basis $Y(\omega,t)$: $|\mathbb{E}[Y_i Y_j]|$ for the pairs $(Y_1,Y_2)$, $(Y_2,Y_3)$, $(Y_1,Y_3)$.]

5.2. Linear deterministic differential operators with random initial conditions

In this section, we consider a linear PDE with random initial conditions, i.e., the differential operator $\mathcal{L}$ is deterministic and linear. For simplicity, we assume periodic boundary conditions:

$$\frac{\partial u}{\partial t} = \mathcal{L}u, \quad x \in [0,1], \ t \in [0, 0.4], \tag{50a}$$
$$u(0,t,\xi) = u(1,t,\xi), \tag{50b}$$
$$u(x,0,\xi) = \mathring{u}(x,\xi), \tag{50c}$$

where $\xi = (\xi_1, \xi_2, \dots, \xi_r)$ are independent standard Gaussian random variables. Consider the gPC expansion of the solution, $u = \bar{v} + V \mathbf{H}^{\mathsf T}$. From Eq. (28), it is easy to obtain the gPC formulation of this SPDE,

$$\frac{\partial \bar{v}}{\partial t} = \mathcal{L}\bar{v}, \tag{51a}$$
$$\frac{\partial V}{\partial t} = \mathcal{L}V, \tag{51b}$$

where the initial conditions are $\bar{v}(x,0) = \mathbb{E}[\mathring{u}]$ and $V(x,0) = \mathbb{E}[\mathring{u}(x)\mathbf{H}]$. Clearly, the gPC formulation (51) is linear and provides the exact solution to the original SPDE (50) if the finite-term gPC expansion of the initial condition is exact, i.e., $\mathring{u} = \mathbb{E}[\mathring{u}] + \mathbb{E}[\mathring{u}\mathbf{H}]\mathbf{H}^{\mathsf T}$.

Now consider the KL expansion of the solution, $u = \bar{u} + U Y^{\mathsf T} = \bar{u} + U A^{\mathsf T} \mathbf{H}^{\mathsf T}$. From the general DyBO-gPC formulation (32), simple calculations give the DyBO-gPC formulation for the SPDE (50),

$$\frac{\partial \bar{u}}{\partial t} = \mathcal{L}\bar{u}, \tag{52a}$$
$$\frac{\partial U}{\partial t} = -U D^{\mathsf T} + \mathcal{L}U, \tag{52b}$$
$$\frac{dA}{dt} = -A C^{\mathsf T} + A \left\langle (\mathcal{L}U)^{\mathsf T}, U \right\rangle \Lambda^{-1}, \tag{52c}$$

where $C$ and $D$ can be computed via Eq. (15) from $G_* = \Lambda^{-1}\langle U^{\mathsf T}, \mathcal{L}U \rangle$.

If we compare the DyBO formulation (52) and the gPC formulation (51) for the linear SPDE (50), we observe the following interesting phenomenon. Although the original SPDE is linear and the gPC formulation remains linear, the DyBO formulation is clearly not linear. However, this is not surprising, because the KL expansion is not a linear procedure, whereas the gPC expansion is indeed a linear procedure. As we argued in Sec. 3.2, our DyBO formulation essentially tracks the KL expansion of the exact solution. Therefore, the non-linearity of the KLE must naturally be built into the DyBO formulation to enable such tracking.

Corollary 3.3 implies that the solution given by the DyBO formulation is exact if the initial condition $\mathring{u}$ can be expressed exactly by a finite-term KL expansion. To verify this numerically, we consider a transport equation, i.e., $\mathcal{L} = \sin(2\pi x + 2t)\,\partial/\partial x$, and an initial condition which is a functional of three standard Gaussian random variables, i.e., $r = 3$,

$$\mathring{u}(x,\xi) = \cos(2\pi x) + \left( \sin(2\pi x), \ 2\sin(4\pi x), \ 3\sin(6\pi x) \right) \mathring{A}\, \mathbf{H}_{\mathcal{J}}^{\mathsf T},$$

where $\mathcal{J} = \{ \alpha \mid \alpha \in \mathcal{J}_3^4, \ \alpha_3 \leq 3 \} \setminus \{\mathbf{0}\}$. The matrix $\mathring{A}$ is an orthonormal matrix generated randomly at the beginning of our simulation. With time step $\Delta t = 10^{-3}$ and spatial grid size $\Delta x = 1/128$, the elements of the spatial basis computed by DyBO match the exact ones given by gPC perfectly, as shown in Fig. 7. In Fig. 8, the $L^2$ relative error of the SD is plotted as a function of time $t$ and remains very small, on the order of $10^{-8}$. To confirm that such errors are introduced mainly by the spatial and temporal discretizations, we use two further sets of finer grid sizes, $\Delta t = 0.5 \times 10^{-3}$, $\Delta x = 1/256$, and $\Delta t = 0.25 \times 10^{-3}$, $\Delta x = 1/512$, and repeat the computations. The SD error drops significantly, since we use spectral methods in space and a fourth-order Runge-Kutta method in time, which essentially verifies Corollary 3.3 numerically.


More information

SELECTED TOPICS CLASS FINAL REPORT

SELECTED TOPICS CLASS FINAL REPORT SELECTED TOPICS CLASS FINAL REPORT WENJU ZHAO In this report, I have tested the stochastic Navier-Stokes equations with different kind of noise, when i am doing this project, I have encountered several

More information

Last Update: April 7, 201 0

Last Update: April 7, 201 0 M ath E S W inter Last Update: April 7, Introduction to Partial Differential Equations Disclaimer: his lecture note tries to provide an alternative approach to the material in Sections.. 5 in the textbook.

More information

PARALLEL COMPUTATION OF 3D WAVE PROPAGATION BY SPECTRAL STOCHASTIC FINITE ELEMENT METHOD

PARALLEL COMPUTATION OF 3D WAVE PROPAGATION BY SPECTRAL STOCHASTIC FINITE ELEMENT METHOD 13 th World Conference on Earthquake Engineering Vancouver, B.C., Canada August 1-6, 24 Paper No. 569 PARALLEL COMPUTATION OF 3D WAVE PROPAGATION BY SPECTRAL STOCHASTIC FINITE ELEMENT METHOD Riki Honda

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

A Stochastic Collocation based. for Data Assimilation

A Stochastic Collocation based. for Data Assimilation A Stochastic Collocation based Kalman Filter (SCKF) for Data Assimilation Lingzao Zeng and Dongxiao Zhang University of Southern California August 11, 2009 Los Angeles Outline Introduction SCKF Algorithm

More information

UNCERTAINTY ASSESSMENT USING STOCHASTIC REDUCED BASIS METHOD FOR FLOW IN POROUS MEDIA

UNCERTAINTY ASSESSMENT USING STOCHASTIC REDUCED BASIS METHOD FOR FLOW IN POROUS MEDIA UNCERTAINTY ASSESSMENT USING STOCHASTIC REDUCED BASIS METHOD FOR FLOW IN POROUS MEDIA A REPORT SUBMITTED TO THE DEPARTMENT OF ENERGY RESOURCES ENGINEERING OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT

More information

Adaptive Localization: Proposals for a high-resolution multivariate system Ross Bannister, HRAA, December 2008, January 2009 Version 3.

Adaptive Localization: Proposals for a high-resolution multivariate system Ross Bannister, HRAA, December 2008, January 2009 Version 3. Adaptive Localization: Proposals for a high-resolution multivariate system Ross Bannister, HRAA, December 2008, January 2009 Version 3.. The implicit Schur product 2. The Bishop method for adaptive localization

More information

Karhunen-Loève decomposition of Gaussian measures on Banach spaces

Karhunen-Loève decomposition of Gaussian measures on Banach spaces Karhunen-Loève decomposition of Gaussian measures on Banach spaces Jean-Charles Croix jean-charles.croix@emse.fr Génie Mathématique et Industriel (GMI) First workshop on Gaussian processes at Saint-Etienne

More information

Schwarz Preconditioner for the Stochastic Finite Element Method

Schwarz Preconditioner for the Stochastic Finite Element Method Schwarz Preconditioner for the Stochastic Finite Element Method Waad Subber 1 and Sébastien Loisel 2 Preprint submitted to DD22 conference 1 Introduction The intrusive polynomial chaos approach for uncertainty

More information

Uncertainty Quantification in Discrete Fracture Network Models

Uncertainty Quantification in Discrete Fracture Network Models Uncertainty Quantification in Discrete Fracture Network Models Claudio Canuto Dipartimento di Scienze Matematiche, Politecnico di Torino claudio.canuto@polito.it Joint work with Stefano Berrone, Sandra

More information

Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs

Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs Roman Andreev ETH ZÜRICH / 29 JAN 29 TOC of the Talk Motivation & Set-Up Model Problem Stochastic Galerkin FEM Conclusions & Outlook Motivation

More information

Sparse polynomial chaos expansions in engineering applications

Sparse polynomial chaos expansions in engineering applications DEPARTMENT OF CIVIL, ENVIRONMENTAL AND GEOMATIC ENGINEERING CHAIR OF RISK, SAFETY & UNCERTAINTY QUANTIFICATION Sparse polynomial chaos expansions in engineering applications B. Sudret G. Blatman (EDF R&D,

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Model Reduction, Centering, and the Karhunen-Loeve Expansion

Model Reduction, Centering, and the Karhunen-Loeve Expansion Model Reduction, Centering, and the Karhunen-Loeve Expansion Sonja Glavaški, Jerrold E. Marsden, and Richard M. Murray 1 Control and Dynamical Systems, 17-81 California Institute of Technology Pasadena,

More information

STOCHASTIC SAMPLING METHODS

STOCHASTIC SAMPLING METHODS STOCHASTIC SAMPLING METHODS APPROXIMATING QUANTITIES OF INTEREST USING SAMPLING METHODS Recall that quantities of interest often require the evaluation of stochastic integrals of functions of the solutions

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

The Hilbert Space of Random Variables

The Hilbert Space of Random Variables The Hilbert Space of Random Variables Electrical Engineering 126 (UC Berkeley) Spring 2018 1 Outline Fix a probability space and consider the set H := {X : X is a real-valued random variable with E[X 2

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global

Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global homas Laurent * 1 James H. von Brecht * 2 Abstract We consider deep linear networks with arbitrary convex differentiable loss. We provide a short and elementary proof of the fact that all local minima

More information

Scientific Computing I

Scientific Computing I Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Neckel Winter 2013/2014 Module 8: An Introduction to Finite Element Methods, Winter 2013/2014 1 Part I: Introduction to

More information

Time-dependent Karhunen-Loève type decomposition methods for SPDEs

Time-dependent Karhunen-Loève type decomposition methods for SPDEs Time-dependent Karhunen-Loève type decomposition methods for SPDEs by Minseok Choi B.S., Seoul National University; Seoul, 2002 M.S., Seoul National University; Seoul, 2007 A dissertation submitted in

More information

Kernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt.

Kernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt. SINGAPORE SHANGHAI Vol TAIPEI - Interdisciplinary Mathematical Sciences 19 Kernel-based Approximation Methods using MATLAB Gregory Fasshauer Illinois Institute of Technology, USA Michael McCourt University

More information

Characterization of heterogeneous hydraulic conductivity field via Karhunen-Loève expansions and a measure-theoretic computational method

Characterization of heterogeneous hydraulic conductivity field via Karhunen-Loève expansions and a measure-theoretic computational method Characterization of heterogeneous hydraulic conductivity field via Karhunen-Loève expansions and a measure-theoretic computational method Jiachuan He University of Texas at Austin April 15, 2016 Jiachuan

More information

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion:

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion: tifica Lecture 1: Computation of Karhunen-Loeve Expansion: Alexander Litvinenko http://sri-uq.kaust.edu.sa/ Stochastic PDEs We consider div(κ(x, ω) u) = f (x, ω) in G, u = 0 on G, with stochastic coefficients

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

Finite difference method for elliptic problems: I

Finite difference method for elliptic problems: I Finite difference method for elliptic problems: I Praveen. C praveen@math.tifrbng.res.in Tata Institute of Fundamental Research Center for Applicable Mathematics Bangalore 560065 http://math.tifrbng.res.in/~praveen

More information

«Random Vectors» Lecture #2: Introduction Andreas Polydoros

«Random Vectors» Lecture #2: Introduction Andreas Polydoros «Random Vectors» Lecture #2: Introduction Andreas Polydoros Introduction Contents: Definitions: Correlation and Covariance matrix Linear transformations: Spectral shaping and factorization he whitening

More information

Multivariate Statistical Analysis

Multivariate Statistical Analysis Multivariate Statistical Analysis Fall 2011 C. L. Williams, Ph.D. Lecture 4 for Applied Multivariate Analysis Outline 1 Eigen values and eigen vectors Characteristic equation Some properties of eigendecompositions

More information

NON-LINEAR APPROXIMATION OF BAYESIAN UPDATE

NON-LINEAR APPROXIMATION OF BAYESIAN UPDATE tifica NON-LINEAR APPROXIMATION OF BAYESIAN UPDATE Alexander Litvinenko 1, Hermann G. Matthies 2, Elmar Zander 2 http://sri-uq.kaust.edu.sa/ 1 Extreme Computing Research Center, KAUST, 2 Institute of Scientific

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

arxiv: v2 [math.pr] 27 Oct 2015

arxiv: v2 [math.pr] 27 Oct 2015 A brief note on the Karhunen-Loève expansion Alen Alexanderian arxiv:1509.07526v2 [math.pr] 27 Oct 2015 October 28, 2015 Abstract We provide a detailed derivation of the Karhunen Loève expansion of a stochastic

More information

Stochastic Particle Methods for Rarefied Gases

Stochastic Particle Methods for Rarefied Gases CCES Seminar WS 2/3 Stochastic Particle Methods for Rarefied Gases Julian Köllermeier RWTH Aachen University Supervisor: Prof. Dr. Manuel Torrilhon Center for Computational Engineering Science Mathematics

More information

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory Physics 202 Laboratory 5 Linear Algebra Laboratory 5 Physics 202 Laboratory We close our whirlwind tour of numerical methods by advertising some elements of (numerical) linear algebra. There are three

More information

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization

More information

An introduction to Birkhoff normal form

An introduction to Birkhoff normal form An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an

More information

A Spectral Approach to Linear Bayesian Updating

A Spectral Approach to Linear Bayesian Updating A Spectral Approach to Linear Bayesian Updating Oliver Pajonk 1,2, Bojana V. Rosic 1, Alexander Litvinenko 1, and Hermann G. Matthies 1 1 Institute of Scientific Computing, TU Braunschweig, Germany 2 SPT

More information

The Bock iteration for the ODE estimation problem

The Bock iteration for the ODE estimation problem he Bock iteration for the ODE estimation problem M.R.Osborne Contents 1 Introduction 2 2 Introducing the Bock iteration 5 3 he ODE estimation problem 7 4 he Bock iteration for the smoothing problem 12

More information

CHAPTER 10 Shape Preserving Properties of B-splines

CHAPTER 10 Shape Preserving Properties of B-splines CHAPTER 10 Shape Preserving Properties of B-splines In earlier chapters we have seen a number of examples of the close relationship between a spline function and its B-spline coefficients This is especially

More information

1.1 A Scattering Experiment

1.1 A Scattering Experiment 1 Transfer Matrix In this chapter we introduce and discuss a mathematical method for the analysis of the wave propagation in one-dimensional systems. The method uses the transfer matrix and is commonly

More information

Lecture 4 Orthonormal vectors and QR factorization

Lecture 4 Orthonormal vectors and QR factorization Orthonormal vectors and QR factorization 4 1 Lecture 4 Orthonormal vectors and QR factorization EE263 Autumn 2004 orthonormal vectors Gram-Schmidt procedure, QR factorization orthogonal decomposition induced

More information

Dimensionality reduction of parameter-dependent problems through proper orthogonal decomposition

Dimensionality reduction of parameter-dependent problems through proper orthogonal decomposition MATHICSE Mathematics Institute of Computational Science and Engineering School of Basic Sciences - Section of Mathematics MATHICSE Technical Report Nr. 01.2016 January 2016 (New 25.05.2016) Dimensionality

More information

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions Chapter 3 Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions 3.1 Scattered Data Interpolation with Polynomial Precision Sometimes the assumption on the

More information

Degenerate Perturbation Theory. 1 General framework and strategy

Degenerate Perturbation Theory. 1 General framework and strategy Physics G6037 Professor Christ 12/22/2015 Degenerate Perturbation Theory The treatment of degenerate perturbation theory presented in class is written out here in detail. The appendix presents the underlying

More information

Linear algebra for MATH2601: Theory

Linear algebra for MATH2601: Theory Linear algebra for MATH2601: Theory László Erdős August 12, 2000 Contents 1 Introduction 4 1.1 List of crucial problems............................... 5 1.2 Importance of linear algebra............................

More information

E = UV W (9.1) = I Q > V W

E = UV W (9.1) = I Q > V W 91 9. EOFs, SVD A common statistical tool in oceanography, meteorology and climate research are the so-called empirical orthogonal functions (EOFs). Anyone, in any scientific field, working with large

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

The first order quasi-linear PDEs

The first order quasi-linear PDEs Chapter 2 The first order quasi-linear PDEs The first order quasi-linear PDEs have the following general form: F (x, u, Du) = 0, (2.1) where x = (x 1, x 2,, x 3 ) R n, u = u(x), Du is the gradient of u.

More information

Functional Analysis Review

Functional Analysis Review Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all

More information

Monte Carlo Methods for Uncertainty Quantification

Monte Carlo Methods for Uncertainty Quantification Monte Carlo Methods for Uncertainty Quantification Mike Giles Mathematical Institute, University of Oxford Contemporary Numerical Techniques Mike Giles (Oxford) Monte Carlo methods 1 / 23 Lecture outline

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Lehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V

Lehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V Part I: Introduction to Finite Element Methods Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Necel Winter 4/5 The Model Problem FEM Main Ingredients Wea Forms and Wea

More information

Domain decomposition for the Jacobi-Davidson method: practical strategies

Domain decomposition for the Jacobi-Davidson method: practical strategies Chapter 4 Domain decomposition for the Jacobi-Davidson method: practical strategies Abstract The Jacobi-Davidson method is an iterative method for the computation of solutions of large eigenvalue problems.

More information

Feedback Control of Turbulent Wall Flows

Feedback Control of Turbulent Wall Flows Feedback Control of Turbulent Wall Flows Dipartimento di Ingegneria Aerospaziale Politecnico di Milano Outline Introduction Standard approach Wiener-Hopf approach Conclusions Drag reduction A model problem:

More information

ICS 6N Computational Linear Algebra Symmetric Matrices and Orthogonal Diagonalization

ICS 6N Computational Linear Algebra Symmetric Matrices and Orthogonal Diagonalization ICS 6N Computational Linear Algebra Symmetric Matrices and Orthogonal Diagonalization Xiaohui Xie University of California, Irvine xhx@uci.edu Xiaohui Xie (UCI) ICS 6N 1 / 21 Symmetric matrices An n n

More information

Numerical integration and importance sampling

Numerical integration and importance sampling 2 Numerical integration and importance sampling 2.1 Quadrature Consider the numerical evaluation of the integral I(a, b) = b a dx f(x) Rectangle rule: on small interval, construct interpolating function

More information

Consistent Histories. Chapter Chain Operators and Weights

Consistent Histories. Chapter Chain Operators and Weights Chapter 10 Consistent Histories 10.1 Chain Operators and Weights The previous chapter showed how the Born rule can be used to assign probabilities to a sample space of histories based upon an initial state

More information

Networked Sensing, Estimation and Control Systems

Networked Sensing, Estimation and Control Systems Networked Sensing, Estimation and Control Systems Vijay Gupta University of Notre Dame Richard M. Murray California Institute of echnology Ling Shi Hong Kong University of Science and echnology Bruno Sinopoli

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Polynomial chaos expansions for sensitivity analysis

Polynomial chaos expansions for sensitivity analysis c DEPARTMENT OF CIVIL, ENVIRONMENTAL AND GEOMATIC ENGINEERING CHAIR OF RISK, SAFETY & UNCERTAINTY QUANTIFICATION Polynomial chaos expansions for sensitivity analysis B. Sudret Chair of Risk, Safety & Uncertainty

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Proper Orthogonal Decomposition (POD) for Nonlinear Dynamical Systems. Stefan Volkwein

Proper Orthogonal Decomposition (POD) for Nonlinear Dynamical Systems. Stefan Volkwein Proper Orthogonal Decomposition (POD) for Nonlinear Dynamical Systems Institute for Mathematics and Scientific Computing, Austria DISC Summerschool 5 Outline of the talk POD and singular value decomposition

More information

REVIEW OF DIFFERENTIAL CALCULUS

REVIEW OF DIFFERENTIAL CALCULUS REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be

More information

Probabilistic Collocation Method for Uncertainty Analysis of Soil Infiltration in Flood Modelling

Probabilistic Collocation Method for Uncertainty Analysis of Soil Infiltration in Flood Modelling Probabilistic Collocation Method for Uncertainty Analysis of Soil Infiltration in Flood Modelling Y. Huang 1,2, and X.S. Qin 1,2* 1 School of Civil & Environmental Engineering, Nanyang Technological University,

More information

Multilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland

Multilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland Multilevel accelerated quadrature for elliptic PDEs with random diffusion Mathematisches Institut Universität Basel Switzerland Overview Computation of the Karhunen-Loéve expansion Elliptic PDE with uniformly

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M.

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M. 5 Vector fields Last updated: March 12, 2012. 5.1 Definition and general properties We first need to define what a vector field is. Definition 5.1. A vector field v on a manifold M is map M T M such that

More information

Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients

Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients Università degli Studi del Sannio Dipartimento e Facoltà di Ingegneria, Benevento, Italia Random fields

More information

The variational iteration method for solving linear and nonlinear ODEs and scientific models with variable coefficients

The variational iteration method for solving linear and nonlinear ODEs and scientific models with variable coefficients Cent. Eur. J. Eng. 4 24 64-7 DOI:.2478/s353-3-4-6 Central European Journal of Engineering The variational iteration method for solving linear and nonlinear ODEs and scientific models with variable coefficients

More information

AA242B: MECHANICAL VIBRATIONS

AA242B: MECHANICAL VIBRATIONS AA242B: MECHANICAL VIBRATIONS 1 / 50 AA242B: MECHANICAL VIBRATIONS Undamped Vibrations of n-dof Systems These slides are based on the recommended textbook: M. Géradin and D. Rixen, Mechanical Vibrations:

More information

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter arxiv:physics/0511236 v1 28 Nov 2005 Brian R. Hunt Institute for Physical Science and Technology and Department

More information

A Non-Intrusive Polynomial Chaos Method For Uncertainty Propagation in CFD Simulations

A Non-Intrusive Polynomial Chaos Method For Uncertainty Propagation in CFD Simulations An Extended Abstract submitted for the 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada January 26 Preferred Session Topic: Uncertainty quantification and stochastic methods for CFD A Non-Intrusive

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Poisson Solvers. William McLean. April 21, Return to Math3301/Math5315 Common Material.

Poisson Solvers. William McLean. April 21, Return to Math3301/Math5315 Common Material. Poisson Solvers William McLean April 21, 2004 Return to Math3301/Math5315 Common Material 1 Introduction Many problems in applied mathematics lead to a partial differential equation of the form a 2 u +

More information