Multiscale Model Reduction and Sparse Operator Compression for Multiscale Problems


1 Multiscale Model Reduction and Sparse Operator Compression for Multiscale Problems
Thomas Y. Hou, Applied and Comput. Math, Caltech, April 3, 2017
Joint work with Qin Li and Pengchuan Zhang
Workshop I: Multiphysics, Multiscale, and Coupled Problems in Subsurface Physics
IPAM Long Program on Computational Issues in Oil Field Applications

2 Motivations
Dimension reduction appears nearly everywhere in science and engineering.
- Solving elliptic equations with multiscale coefficients: multiscale finite element bases for the elliptic operator.
- Principal component analysis (PCA): principal modes of the covariance operator.
- Quantum chemistry: eigenstates of the Hamiltonian.
For computational efficiency and/or good interpretability, localized basis functions are preferred.
- Localized multiscale finite element bases: Babuska-Caloz-Osborn-94, Hou-Wu-97, Hughes-Feijóo-Mazzei-98, E-Engquist-03, Owhadi-Zhang-07, Målqvist-Peterseim-14, Owhadi-17, etc.
- Sparse principal modes obtained by sparse PCA or sparse dictionary learning: Zou-Hastie-Tibshirani-04, Witten-Tibshirani-Hastie-09, etc.
- Compressed Wannier modes: Ozoliņš-Lai-Caflisch-Osher-13, E-Li-Lu-10, Lai-Lu-Osher-14, etc.

3 Schrödinger Equation
[Slide reproduces the first page of: Nicola Marzari and David Vanderbilt, "Maximally-localized generalized Wannier functions for composite energy bands," Department of Physics and Astronomy, Rutgers University, July 1997. The paper determines the optimally localized set of generalized Wannier functions associated with a set of Bloch bands in a crystalline solid, i.e. localized orthonormal orbitals spanning the same space as the specified Bloch bands, obtained by minimizing the total real-space spread Ω = Σ_n (⟨r²⟩_n − ⟨r⟩_n²) over unitary rotations among the Bloch bands; sample results are shown for Si, GaAs, molecular C2H4, LiCl, and water.]
Expanding the solution of the Schrödinger equation
i ∂_t u(t, x) = (−(1/2) ∂_x² + V(x)) u(t, x),  u(t, x) = Σ_m α_m(t) ψ_m(x),
in the eigenstates L ψ_m = λ_m ψ_m; localized ψ_m make this representation computationally attractive.

4 Methods Based on L1 Minimization: motivation
Min-max principle: Ψ with the first d eigenvectors as columns is the optimizer for
minimize trace(Ψ^T Cov Ψ) subject to: Ψ ∈ R^{n×d}, Ψ^T Ψ = I_d.
Consider X = ΨΨ^T ∈ R^{n×n} and define the fantope
F_d := {X ∈ R^{n×n} : X = X^T, 0 ⪯ X ⪯ I_n, trace(X) = d}.
(Lai-Lu-Osher-14): the projection onto the first d-dimensional eigenspace is the optimizer for
minimize trace(Cov X) subject to: X ∈ F_d.
Idea: add an L1 penalty to force Ψ or X to be sparse.

5 Methods Based on L1 Minimization: methods
Sparse PCA (Zou-Hastie-Tibshirani-04) and convex relaxation of sparse PCA (d'Aspremont et al 07, Vu et al 13): given a symmetric input matrix S ∈ R^{n×n}, seek a d-dimensional sparse principal space estimator X by
minimize trace(S X) + λ ‖X‖_1 subject to: X ∈ F_d.
Take the first d eigenvectors of X as the estimated sparse basis elements.
Compressed modes for variational problems (Ozolins-Lai-Caflisch-Osher-13) and its convex relaxation (Lai-Lu-Osher-14): given a symmetric input matrix H ∈ R^{n×n}, seek a d-dimensional sparse principal space estimator P by
minimize trace(H P) + µ ‖P‖_1 subject to: P ∈ F_d.
Take d columns of P as the estimated sparse basis elements.
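The workhorse subproblem inside ADMM-type solvers for these convex relaxations is projecting a symmetric matrix onto the fantope F_d. A minimal numpy sketch (the helper name and test matrix are illustrative, not code from the cited papers): eigendecompose, then clip the eigenvalues into [0, 1] with a shift chosen by bisection so that they sum to d.

```python
import numpy as np

def project_fantope(S, d, iters=60):
    """Frobenius-norm projection of symmetric S onto
    F_d = {X : X = X^T, 0 <= X <= I, trace(X) = d}.
    Projected eigenvalues are clip(w - theta, 0, 1), with theta chosen
    so they sum to d; the sum is monotone in theta, so bisect."""
    w, V = np.linalg.eigh(S)
    lo, hi = w.min() - 1.0, w.max()      # sum = n at lo, sum = 0 at hi
    for _ in range(iters):
        theta = 0.5 * (lo + hi)
        if np.clip(w - theta, 0.0, 1.0).sum() > d:
            lo = theta
        else:
            hi = theta
    gamma = np.clip(w - 0.5 * (lo + hi), 0.0, 1.0)
    return (V * gamma) @ V.T

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
X = project_fantope(A + A.T, d=2)        # project a random symmetric matrix
evals = np.linalg.eigvalsh(X)
```

After the projection, trace(X) = d and all eigenvalues lie in [0, 1], which is exactly membership in F_d.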

6 Random Field Parametrization
κ(x, ω) = E[κ(x, ω)] + Σ_k g_k(x) η_k(ω),  E[η_k] = 0,  E[η_i η_j] = δ_ij.
What we have: samples of the field and its covariance.
Q: How to parametrize the random field?

7 Random Field Parametrization, continued
Karhunen-Loève expansion (Loève, 1977):
κ(x, ω) = E[κ(x, ω)] + Σ_k √λ_k f_k(x) ξ_k(ω),  f_k(x): eigenfunctions of Cov(x, y).
Given an error tolerance ɛ > 0,
minimize K, subject to: ‖Cov − Σ_{k=1}^K λ_k f_k(x) f_k(y)‖_2 ≤ ɛ.
[Figure: the leading KL modes.]

8 Random Field Parametrization, continued
Karhunen-Loève expansion (Loève, 1977):
κ(x, ω) = E[κ(x, ω)] + Σ_k √λ_k f_k(x) ξ_k(ω),  f_k(x): eigenfunctions of Cov(x, y).
Given an error tolerance ɛ > 0,
minimize K, subject to: ‖Cov − Σ_{k=1}^K λ_k f_k(x) f_k(y)‖_2 ≤ ɛ.
[Figures: the leading KL modes vs. the ground-truth physical modes.]

9 I. Intrinsic Sparse Mode Decomposition (ISMD)
Given a low-rank symmetric positive semidefinite matrix Cov ∈ R^{N×N}, our goal is to decompose Cov into rank-one matrices, i.e.
Cov = Σ_{k=1}^K g_k g_k^T.
The modes {g_k}_{k=1}^K are required to be as sparse as possible:
min Σ_{k=1}^K ‖g_k‖_0 subject to: Cov = Σ_{k=1}^K g_k g_k^T.
This minimization problem is extremely difficult:
- the number of modes K is unknown;
- each unknown physical mode g_k(x) is represented by a long vector g_k ∈ R^N with N large;
- an ℓ0 minimization problem with N·K unknowns: NP-hard!

10 From ℓ0 Norm to Patch-wise Sparseness
Definition (sparseness of a physical mode g). Given a domain partition D = ∪_{m=1}^M P_m, define
s(g) = #{P_m : g|_{P_m} ≠ 0, m = 1, 2, ..., M}.
[Figure: two ground-truth sparse modes with s_1 = 2 and s_2 = 3; the local dimensions are d_1 = 1, d_2 = 2, d_3 = 1, d_4 = 1.]
Here d_m is the local dimension on the m-th patch P_m, and
Σ_{k=1}^K s_k = Σ_{m=1}^M d_m.
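The patch-wise sparseness and the identity Σ s_k = Σ d_m are straightforward to check in code; a small sketch (the partition and the two modes below are illustrative stand-ins for the figure's example, chosen so that s_1 = 2 and s_2 = 3):

```python
import numpy as np

def sparseness(g, patches, tol=1e-12):
    """s(g) = number of patches P_m on which g does not vanish."""
    return sum(np.linalg.norm(g[p]) > tol for p in patches)

# Partition of 8 grid points into M = 4 patches.
patches = [slice(0, 2), slice(2, 4), slice(4, 6), slice(6, 8)]
g1 = np.array([1.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0])  # lives on patches 1-2
g2 = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 1.0, 0.5, 0.0])  # lives on patches 2-4

s1, s2 = sparseness(g1, patches), sparseness(g2, patches)
# Local dimension d_m = rank of the modes restricted to patch P_m.
d = [np.linalg.matrix_rank(np.column_stack([g1[p], g2[p]])) for p in patches]
```

Here d = [1, 2, 1, 1], so Σ d_m = 5 = s_1 + s_2, matching the identity on the slide.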

11 Intrinsic Sparse Mode Decomposition
minimize Σ_{k=1}^K ‖g_k‖_0 subject to: Cov = Σ_{k=1}^K g_k g_k^T.

12 Intrinsic Sparse Mode Decomposition
minimize Σ_{k=1}^K ‖g_k‖_0 subject to: Cov = Σ_{k=1}^K g_k g_k^T.
minimize Σ_{k=1}^K s_k (= Σ_m d_m) subject to: Cov = Σ_{k=1}^K g_k g_k^T. (1)

13 Intrinsic Sparse Mode Decomposition
minimize Σ_{k=1}^K ‖g_k‖_0 subject to: Cov = Σ_{k=1}^K g_k g_k^T.
minimize Σ_{k=1}^K s_k (= Σ_m d_m) subject to: Cov = Σ_{k=1}^K g_k g_k^T. (1)
Theorem (Theorem 3.1 in [Hou et al., 2017a]). Under the regular-sparse assumption (the sparse modes are linearly independent on every patch), ISMD generates one minimizer of the patch-wise sparseness minimization problem (1).

14 ISMD Details: Construct Local Pieces by Local Rotations
1. Let Cov_{mm} be the local covariance matrix on patch P_m. Want: local sparse physical modes G_m := [g_{m,1}, g_{m,2}, ..., g_{m,d_m}] with
Cov_{mm} = G_m G_m^T = Σ_{k=1}^{d_m} g_{m,k} g_{m,k}^T.
2. Have: local eigendecomposition H_m := [h_{m,1}, h_{m,2}, ..., h_{m,K_m}] with
Cov_{mm} = H_m H_m^T = Σ_{k=1}^{K_m} h_{m,k} h_{m,k}^T.
3. G_m = H_m D_m with D_m D_m^T = I_{K_m} (under the regular-sparse assumption).
4. Aim: find the right local rotation D_m on every patch P_m!

15 ISMD Details: Patch Up the Local Pieces
[Figure: the two ground-truth sparse modes with s_1 = 2, s_2 = 3 and local dimensions d_1 = 1, d_2 = 2, d_3 = 1, d_4 = 1.]
Stacking the local pieces patch by patch, the global modes are assembled from the local modes g_{m,k}:
[g_1, g_2] = diag(G_1, G_2, G_3, G_4) L.
Aim: find the right patch-up matrix L!

16 ISMD Details: Flowchart of ISMD
Step 1. Local eigendecomposition: Cov_{mm} = H_m H_m^T. Output: H_ext := diag(H_1, ..., H_M).
Step 2. Assemble the correlation matrix: Cov = H_ext Λ H_ext^T. Output: Λ.
Step 3. Identify local rotations by joint diagonalization:
D_m = argmin_{V ∈ O(K_m)} Σ_{n=1}^M Σ_{i≠j} |(V^T Λ_{mn} Λ_{mn}^T V)_{i,j}|².
Output: D_ext := diag(D_1, ..., D_M).
Step 4. Patch up by pivoted Cholesky decomposition: D_ext^T Λ D_ext = P L L^T P^T. Output: L̄ := P L.
Step 5. Assemble the intrinsic sparse modes: [g_1, g_2, ..., g_K] = H_ext D_ext L̄.
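The flowchart can be exercised end to end on a toy covariance. In the sketch below (all data illustrative), each mode is supported on a single patch, so the local eigenvectors already coincide with the local sparse modes and the Step-3 rotations reduce to the identity; the sketch therefore demonstrates Steps 1, 2, 4, and 5 only, with a tiny textbook pivoted Cholesky:

```python
import numpy as np

# Toy input: Cov = g1 g1^T + g2 g2^T, modes supported on single patches.
N = 4
patches = [slice(0, 2), slice(2, 4)]
G = np.column_stack([[1.0, 2.0, 0.0, 0.0], [0.0, 0.0, 3.0, 1.0]])
Cov = G @ G.T

# Step 1: local eigendecomposition Cov_mm = H_m H_m^T on every patch.
H_blocks = []
for p in patches:
    w, V = np.linalg.eigh(Cov[p, p])
    keep = w > 1e-12
    H_blocks.append(V[:, keep] * np.sqrt(w[keep]))
K = sum(H.shape[1] for H in H_blocks)
H_ext = np.zeros((N, K))                  # H_ext = diag(H_1, ..., H_M)
c = 0
for p, H in zip(patches, H_blocks):
    H_ext[p, c:c + H.shape[1]] = H
    c += H.shape[1]

# Step 2: correlation matrix Lambda with Cov = H_ext Lambda H_ext^T.
Hp = np.linalg.pinv(H_ext)
Lam = Hp @ Cov @ Hp.T

# Step 3 (joint diagonalization): D_ext = I in this toy example.

# Step 4: pivoted Cholesky Lambda = P L L^T P^T (toy implementation).
def pivoted_cholesky(A, tol=1e-10):
    A = A.copy()
    n = A.shape[0]
    L, piv, r = np.zeros((n, n)), np.arange(n), 0
    for k in range(n):
        j = k + int(np.argmax(np.diag(A)[k:]))   # largest remaining pivot
        if A[j, j] < tol:
            break
        A[[k, j], :] = A[[j, k], :]
        A[:, [k, j]] = A[:, [j, k]]
        L[[k, j], :] = L[[j, k], :]
        piv[[k, j]] = piv[[j, k]]
        L[k, k] = np.sqrt(A[k, k])
        L[k + 1:, k] = A[k + 1:, k] / L[k, k]
        A[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])
        r += 1
    PL = np.zeros((n, r))
    PL[piv, :] = L[:, :r]                        # undo the pivoting: P L
    return PL

PL = pivoted_cholesky(Lam)

# Step 5: assemble the intrinsic sparse modes (up to sign and order).
G_rec = H_ext @ PL
```

The recovered modes reproduce Cov exactly and keep the single-patch support of the true modes.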

17 ISMD: Numerical Example
[Figure: one sample of the field and its bird's-eye view; the covariance matrix is plotted on the right.]
[Figure: eigendecomposition: the leading KL modes. Each mode contains multiple pieces of the ground-truth media.]

18 ISMD: Numerical Example, continued
[Figure: first 6 eigenvectors (H = 1); first 6 intrinsic sparse modes (H = 1/8, regular sparse); first 6 intrinsic sparse modes (H = 1/32, not regular sparse).]
Comments: ISMD provably gives the optimal sparse decomposition under the regular-sparse assumption. Even when the regular-sparse assumption fails (the partition is too fine), ISMD still performs well in our numerical examples.

19 Other Topics about ISMD
Computational complexity: much cheaper than the eigendecomposition.
[Figure: CPU time (unit: second) for partition sizes H from 1/96 to 1/2, comparing ISMD against full eigendecomposition, partial eigendecomposition, and pivoted Cholesky; the regular-sparse assumption fails for the finest partitions.]

20 Other Topics about ISMD
Computational complexity: much cheaper than the eigendecomposition.
[Figure: CPU time (unit: second) for partition sizes H from 1/96 to 1/2, comparing ISMD against full eigendecomposition, partial eigendecomposition, and pivoted Cholesky; the regular-sparse assumption fails for the finest partitions.]
ISMD is insensitive to the partition of the domain; see Theorem 3.6 in [Hou et al., 2017a].

21 Other Topics about ISMD
Computational complexity: much cheaper than the eigendecomposition.
[Figure: CPU time (unit: second) for partition sizes H from 1/96 to 1/2, comparing ISMD against full eigendecomposition, partial eigendecomposition, and pivoted Cholesky; the regular-sparse assumption fails for the finest partitions.]
ISMD is insensitive to the partition of the domain; see Theorem 3.6 in [Hou et al., 2017a].
Robustness: ISMD is robust against small error; see Lemma 4.1 in [Hou et al., 2017a]:
Cov = Σ_{k=1}^K g_k g_k^T + Error.

22 Results from L1 Minimization
[Figure: the covariance matrix, and the variance explained by the first 30 eigenvectors of P as a function of λ.]
Choose λ = 3.94 for the following result.
[Figure: the first 12 eigenvectors of P. The first 30 eigenvectors of P explain 90% of the variance.]

23 Results from L1 Minimization, continued
[Figure: the 12 columns of P with the largest norms. The first 30 columns with largest norms only explain 3.46% of the variance.]
[Figure: the first 12 intrinsic sparse modes from ISMD (H = 1/8). The first 30 intrinsic sparse modes explain 100% of the variance: exact recovery!]

24 Discussion on Computational Complexity
The main computational cost in ISMD is the first step, the local eigendecompositions, which cost about M Eig(n/M). Here n is the total number of fine-grid nodes in the spatial domain, and Eig(n) is the computational cost of the eigendecomposition of a matrix of size n.
The ADMM algorithm used to solve the semidefinite program in the convex relaxation of sparse PCA requires a global eigendecomposition Eig(n) in each iteration. If N iterations are needed to achieve convergence, the computational cost is about N Eig(n). Partial eigendecomposition can be used to reduce this cost.
For the specific problem we consider here, our algorithm for ISMD takes into account the special structure of the problem and is more efficient than the ADMM algorithm (sparse PCA). In a typical test of the high-contrast example, our method is 62 times faster than the ADMM algorithm.
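A back-of-the-envelope comparison under a dense O(n³) eigendecomposition cost model (the sizes and iteration count below are illustrative assumptions, so the resulting ratio is only indicative, not the 62x measured in the talk):

```python
# Cost model: Eig(n) ~ n^3 floating-point operations (dense, up to constants).
n = 10_000          # fine-grid nodes (illustrative)
M = 100             # number of patches (illustrative)
N_iter = 50         # ADMM iterations to converge (illustrative)

ismd_cost = M * (n // M) ** 3      # M local problems of size n/M
admm_cost = N_iter * n ** 3        # one global eigendecomposition per iteration

speedup = admm_cost / ismd_cost    # = N_iter * M**2 under this model
```

The model makes the scaling transparent: splitting one size-n problem into M size-n/M problems wins a factor M², and avoiding the outer iteration wins another factor N_iter.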

25 II. Stochastic Multiscale Model Reduction
minimize Σ_{k=1}^K s_k (= Σ_m d_m) subject to: Cov = Σ_{k=1}^K g_k g_k^T.
[Figure: one sample of the permeability field.]

26 Random PDEs
−∇·(κ_ɛ(x, ω) ∇u(x, ω)) = b(x), x ∈ D, ω ∈ Ω,
u(x, ω) = 0, x ∈ ∂D.
The random coefficient κ_ɛ(x, ω) has high dimensionality in the stochastic space and multiple scales (channels and inclusions) in the physical space.
- Multiple scales in the physical space: a fine mesh is required for standard FEM, so each sample carries a large computational cost.
- High stochastic dimension: Monte Carlo needs a large number of samples; gPC-based methods suffer from the curse of dimensionality, i.e. a prohibitively large number of collocation nodes.
We propose a stochastic multiscale method that exploits the locally low-dimensional structure of the solution in the stochastic space and upscales the solution in the physical space simultaneously for all random samples.

27 Stochastic Multiscale Model Reduction
−∇·(κ_ɛ(x, ω) ∇u(x, ω)) = b(x), x ∈ D, ω ∈ Ω,
u(x, ω) = 0, x ∈ ∂D.
Step 0. Locally low-dimensional parametrization of κ_ɛ(x, ω) by ISMD, local KL expansion, sparse PCA, etc.
Step 1. Offline stage: prepare high-order polynomial interpolants for the local upscaled quantities. Various local upscaling methods are available; see e.g. Babuska-Caloz-Osborn 1994, Hou-Wu 1997, Hughes et al 1998, E-Engquist 2003, Owhadi-Zhang 2014, Owhadi 2017.
Step 2. Online stage: for each parameter configuration, interpolate the upscaled quantities from the precomputed interpolants of Step 1 and solve the upscaled system.
The online cost saving is of order (H/h)^d / (log(H/h))^k, where H/h is the ratio between the coarse and fine grid sizes, d is the physical dimension, and k is the local stochastic dimension. For details, see [Hou et al., 2017b].
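The offline/online split in Steps 1-2 rests on the upscaled quantities being smooth in the few local parameters, so that a high-order polynomial surrogate is cheap and accurate. A one-parameter sketch with numpy's Chebyshev tools (the "upscaled quantity" below is a smooth stand-in function and the interval is illustrative; nothing here comes from an actual upscaling computation):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Stand-in for a local upscaled quantity S(eta), smooth in the local
# parameter eta over an illustrative range [0.1, 1.0].
def upscaled_quantity(eta):
    return 1.0 / (1.0 + eta)

a, b = 0.1, 1.0
deg = 16

# Offline: sample at Chebyshev points of [a, b] and fit the interpolant.
t = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))  # nodes in [-1, 1]
eta_nodes = 0.5 * (a + b) + 0.5 * (b - a) * t
coef = C.chebfit(t, upscaled_quantity(eta_nodes), deg)

# Online: evaluate the cheap surrogate instead of re-running the upscaling.
eta = np.linspace(a, b, 500)
surrogate = C.chebval((2.0 * eta - (a + b)) / (b - a), coef)
err = np.max(np.abs(surrogate - upscaled_quantity(eta)))
```

For an analytic quantity the Chebyshev error decays geometrically in the degree, which is the source of the (log(H/h))^k factor in the online cost estimate.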

28 A Synthetic Example with a High-Contrast Random Medium
κ(x, ω) = ξ(ω) + Σ_{k=1}^K g_k(x) η_k(ω),
where the global variable ξ is uniformly distributed in [.,.] and the local variables {η_k}_{k=1}^K are i.i.d., uniformly distributed in [10^4, 2×10^4].
[Figure: left, one sample medium; middle, the medium mean; right, the medium variance.]
There are 3 high-permeability (of order 10^4) channels in the x-direction and a few high-permeability inclusions. The background permeability is of order 1.

29 Step 0: Medium Parametrization by ISMD
minimize Σ_k s_k (= Σ_m d_m) subject to: Cov = Σ_{k=1}^K g_k g_k^T.
[Figure: ISMD recovers the constant background mode (low-rank component) and the three sparse channel modes. The number of local parameters is much smaller than the total number of parameters.]
[Figure: KL expansion: the KL modes mix the global and local modes together. The number of local parameters equals the total number of parameters.]

30 Step 1: Local Upscaling
Coarse mesh size: H = 0.1; fine mesh size: h = H/2; oversampling domain size: H_os = 3H.
[Figure: the two physical modes on patch (4,9): d_m = 2.]
[Figure: left, the (1,1) element S(1,1) of the local stiffness matrix on patch (4,9); right, the relative error of the surrogate built by Chebyshev interpolation on a 16-point grid. The maximal relative error is of order 10^-6.]

31 Step 2: Solve the Upscaled System, One Sample
[Figure: left, one sample reference solution u_H from direct MsFEM; right, the absolute solution error due to local interpolation (colorbar scale ~10^-4).]
The error introduced by interpolation is of order 10^-4.
Table: online cost saving for one sample.
Direct MsFEM: — (s); local interpolation: 0.33 (s); cost saving: 29x.

32 Step 2: Solve the Upscaled System, Solution Statistics
[Figure: statistics of the numerical solutions. Top left: mean of u_H from random interpolation; top right: standard deviation of u_H from random interpolation; bottom: CPU-time comparison between local interpolation and direct MsFEM as the sample size grows.]

33 III. Sparse Operator Compression and Concluding Remarks

34 From ISMD to Sparse Operator Compression
κ(x, ω) ≈ Σ_k g_k(x) η_k(ω), with {η_k}_{k=1}^K uncorrelated or independent.
ISMD:
min_{g_1,...,g_K} Σ_{k=1}^K s(g_k) s.t. Cov = Σ_{k=1}^K g_k g_k^T.
Applications in stochastic multiscale model reduction.
κ(x, ω) ≈ Σ_k g_k(x) η_k(ω), with {η_k}_{k=1}^K possibly correlated.
Sparse operator compression:
min_{g_1,...,g_K, B} Σ_{k=1}^K s(g_k) s.t. ‖Cov − Σ_{k,k'} B_{k,k'} g_k g_{k'}^T‖ ≤ ɛ,
where B is the covariance matrix of the factors {η_k}_{k=1}^K.
More applications: solving deterministic elliptic equations, constructing localized Wannier functions. For details, see [Hou and Zhang, 2017a, Hou and Zhang, 2017b].

35 Random Field Parametrization
The Matérn class covariance:
K_ν(x, y) = σ² (2^{1−ν}/Γ(ν)) (√(2ν) ‖x−y‖/ρ)^ν K_ν(√(2ν) ‖x−y‖/ρ),
widely used in spatial statistics and geoscience. Goal: compress the covariance operator with localized basis functions.
When ν + d/2 is an integer, we construct nearly optimally localized basis functions {g_i}_{i=1}^n:
|supp(g_i)| ≤ C_l log(n)/n, 1 ≤ i ≤ n.
We can approximate K_ν by a rank-n operator with optimal accuracy:
min_B ‖K_ν − G B G^T‖_2 ≤ C_e λ_{n+1}(K_ν).
Sparsity/locality: better interpretability and computational efficiency.
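For concreteness, the Matérn covariance can be evaluated directly from the formula above using scipy's modified Bessel function (the grid and parameters below are illustrative). A standard sanity check: at ν = 1/2 the class reduces to the exponential kernel σ² exp(−r/ρ).

```python
import numpy as np
from scipy.special import gamma, kv

def matern(r, nu, rho=1.0, sigma=1.0):
    """Matern covariance K_nu(x, y) as a function of r = ||x - y||."""
    r = np.asarray(r, dtype=float)
    scaled = np.sqrt(2.0 * nu) * r / rho
    out = sigma**2 * (2.0 ** (1.0 - nu) / gamma(nu)) * scaled**nu * kv(nu, scaled)
    return np.where(r == 0.0, sigma**2, out)   # K_nu -> sigma^2 as r -> 0

r = np.linspace(0.01, 3.0, 50)
vals = matern(r, nu=0.5)                        # should equal exp(-r)
```

Half-integer ν (1/2, 3/2, 5/2, ...) gives closed forms; these are the cases with ν + d/2 an integer in odd dimensions, for which the slide's localized-basis construction applies.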

36 Solving Deterministic Elliptic Equations
L is an elliptic operator of order 2k (k ≥ 1) with rough multiscale coefficients in L^∞(D), and the load f ∈ L²(D):
Lu = f, u ∈ H^k_0(D). (2)
k = 1: heat equation, subsurface flow; k = 2: beam equation, plate equation, etc.
Goal: compress the Green's operator L^{−1} with localized basis functions.
We construct nearly optimally localized basis functions {g_i}_{i=1}^n ⊂ H^k(D). For a given mesh size h, we have
diam(supp(g_i)) ≤ C_l h log(1/h), 1 ≤ i ≤ n.
The Galerkin finite element solution u_ms := G L_n^{−1} G^T f satisfies
‖u − u_ms‖_H ≤ C_e h^k ‖f‖_2, ∀ f ∈ L²(D),
where ‖·‖_H is the energy norm and C_e is independent of the small scales of a_σγ.
Sparsity/locality: computational efficiency.

37 Our Construction and Theoretical Results
[Figure: left, 8 localized basis functions ψ with periodic BC; middle and right, 2 localized basis functions with homogeneous Dirichlet BC.]

38 Our Construction of {ψ^loc_{i,q}}
1. Choose h > 0. Partition the physical domain D using a regular partition {τ_i}_{i=1}^m with mesh size h.
2. Choose r > 0, say r = 2h log(1/h). For each patch τ_i, S_r is the union of the subdomains τ_j intersecting B(x_i, r) (for some x_i ∈ τ_i).
3. P_{k−1}(τ_i) is the space of all d-variate polynomials of degree at most k − 1 on the patch τ_i; Q = C(k−1+d, d) is its dimension. {φ_{i,q}}_{q=1}^Q is a set of orthogonal basis functions for P_{k−1}(τ_i).
4. ψ^loc_{i,q} = argmin_{ψ ∈ H^k_B} ‖ψ‖²_H
subject to: ∫_{S_r} ψ φ_{j,q'} = δ_{iq,jq'}, 1 ≤ j ≤ m, 1 ≤ q' ≤ Q,
ψ(x) = 0, x ∈ D \ S_r,
where H^k_B is the solution space (with some prescribed BC) and ‖·‖_H is the energy norm associated with L and the BC.
[Figure: a regular partition, a local patch τ_i, and its associated S_r.]
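The minimization defining ψ^loc is a quadratic program with linear equality constraints, so each basis function is one saddle-point (KKT) solve. A 1D numpy sketch for k = 1 (illustrative sizes; L is the standard stiffness matrix of −u'' with Dirichlet BC, Φ consists of patchwise constants, and the truncation to S_r is omitted, i.e. r covers the whole domain):

```python
import numpy as np

# 1D stiffness matrix of -u'' on n interior nodes (Dirichlet BC), h = 1/(n+1).
n, m = 40, 4                       # fine nodes, number of patches tau_j
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# Constraint matrix: column j is the indicator of patch tau_j
# (phi_{j,1} = patchwise constant, the k = 1 case of the slide).
size = n // m
Cmat = np.zeros((n, m))
for j in range(m):
    Cmat[j * size:(j + 1) * size, j] = 1.0

# psi = argmin psi^T A psi  s.t.  C^T psi = e_i, via the KKT system
# [A  C; C^T 0] [psi; mu] = [0; e_i].
i = 1                              # build the basis function of patch tau_2
e = np.zeros(m); e[i] = 1.0
KKT = np.block([[A, Cmat], [Cmat.T, np.zeros((m, m))]])
rhs = np.concatenate([np.zeros(n), e])
psi = np.linalg.solve(KKT, rhs)[:n]
```

The computed psi has unit mean on its own patch, zero mean on all others, and minimal energy, which is what drives the exponential decay that justifies the truncation to S_r in the actual construction.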

39 Our Construction: Ψ^loc := {ψ^loc_{i,q}}_{i=1,...,m; q=1,...,Q}
Theorem (Hou-Zhang-2016). Suppose H^k_B = H^k_0(D) and
Lu = Σ_{|σ|=|γ|=k} (−1)^k D^σ (a_σγ D^γ u).
Assume that L is self-adjoint, positive definite and strongly elliptic, and that there exist θ_min, θ_max > 0 such that
θ_min |ξ|^{2k} ≤ Σ_{|σ|=|γ|=k} a_σγ ξ^σ ξ^γ ≤ θ_max |ξ|^{2k}, ∀ ξ ∈ R^d.
Then for r ≥ C_r h log(1/h), we have
‖L^{−1} f − Ψ^loc L_n^{−1} (Ψ^loc)^T f‖_H ≤ (C_e h^k / θ_min^{1/2}) ‖f‖_2, ∀ f ∈ L²(D), (7)
where L_n is the stiffness matrix under the basis functions Ψ^loc, and
E_oc(Ψ^loc; L^{−1}) ≤ C_e² h^{2k} / θ_min. (8)
Here the constant C_r only depends on the contrast θ_max/θ_min, and C_e is independent of the coefficients.

40 Several Remarks
- The theorem (Hou-Zhang-2016) also applies to L with lower-order terms, i.e. Lu = Σ_{|σ|,|γ| ≤ k} (−1)^{|σ|} D^σ (a_σγ D^γ u).
- The theorem also applies to other homogeneous boundary conditions, like periodic BC, Robin BC, and mixed BC.
- For H^k_B = H^1_0(D), i.e. second-order elliptic operators with zero Dirichlet BC, the theorem was proved in Owhadi-2017. A similar result for H^k_B = H^1_0(D) was also provided in Målqvist-Peterseim-2014. In this case, our proof improves the estimates of the constants C_r and C_e.
- For other BCs, operators with lower-order terms, and higher-order elliptic operators, new techniques and concepts have been developed. The three most important are: a projection-type polynomial approximation property in H^k(D); the notion of strong ellipticity (equivalent to uniform ellipticity when d = 1, 2 or k = 1, and slightly stronger than uniform ellipticity in other cases, where counterexamples exist but are difficult to construct); and an inverse energy estimate for functions in Ψ := span{ψ_{i,q} : 1 ≤ i ≤ m, 1 ≤ q ≤ Q}.

41 Roadmap of the Proof: Error Estimate
Theorem (an error estimate based on projection-type approximation). Suppose there is an n-dimensional subspace Φ ⊂ L²(D), with basis {φ_i}_{i=1}^n, such that
‖u − P^{L²}_Φ u‖_{L²} ≤ k_n ‖u‖_H, ∀ u ∈ H^k(D). (9)
Let Ψ be the n-dimensional subspace of H^k(D) (also of H^k_B(D)) spanned by {L^{−1} φ_i}_{i=1}^n. Then:
1. for any f ∈ L²(D) and u = L^{−1} f, we have ‖u − P^{H^k_B}_Ψ u‖_H ≤ k_n ‖f‖_{L²}; (10)
2. E_oc(Ψ; L^{−1}) ≤ k_n². (11)
k = 1: Φ = piecewise constant functions. By the Poincaré inequality, it is easy to obtain ‖u − P^{L²}_Φ u‖_{L²} ≤ (C_p h / θ_min^{1/2}) ‖u‖_H.
k ≥ 2: Φ = piecewise polynomials of degree at most k − 1. By a projection-type polynomial approximation property in H^k(D) (see Thm 3.1 in Hou-Zhang, Part II), we have ‖u − P^{L²}_Φ u‖_{L²} ≤ (C_p h^k / θ_min^{1/2}) ‖u‖_H.

42 Roadmap of the Proof: Error Estimate, Discussions
Take H^k_B = H^1_0(D) as an example, where Φ is the space of piecewise constant functions.
Based on the projection-type approximation property, we obtain the error estimate of the GFEM in the energy norm:
‖u − P^{L²}_Φ u‖_{L²} ≤ C_proj h ‖u‖_H ⟹ ‖u − P^{H¹}_Ψ u‖_H ≤ C_proj h ‖f‖_{L²}.
C_proj does not depend on the small scales in the coefficients.
The traditional interpolation-type estimate requires higher regularity of the solution u: assuming u ∈ H²(D),
‖u − I_h u‖_{1,2,D} ≤ C h ‖u‖_{2,2,D} ⟹ ‖u − I_h u‖_H ≤ C_interp h ‖f‖_{L²}.
C_interp depends on the small scales in the coefficients.
Basis functions for I_h u: optimally localized linear nodal basis. Basis functions for P^{H¹}_Ψ u: global basis functions {L^{−1} φ_i}_{i=1}^n.

43 Discrete Setting: Graph Laplacians
Lu = −(d/dx)(a(x) du/dx), u(0) = u(1) → Lu = −∇·(a(x)∇u), u|_{∂D} = 0 → Lu = f, with L a graph Laplacian.
[Figures: a 1D circular graph; a 2D lattice graph; a social network graph.]
Graph Laplacians arise in social networks and transportation networks; genetic data and web pages; spectral clustering of images; electrical resistor circuits; elliptic partial differential equations discretized by finite elements; etc.
Fundamental problems: fast algorithms for Lu = f and for the eigendecomposition of L.
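The correspondence between the continuous and discrete problems is easy to see on the 1D circular graph (an illustrative sketch): the graph Laplacian of the cycle is exactly the periodic second-difference matrix, and Lu = f is solvable whenever f has zero mean.

```python
import numpy as np

# Graph Laplacian L = D - W of the cycle graph on n vertices
# (identical, up to a 1/h^2 scaling, to the periodic FD Laplacian).
n = 10
W = np.zeros((n, n))
for v in range(n):
    W[v, (v + 1) % n] = W[(v + 1) % n, v] = 1.0
L = np.diag(W.sum(axis=1)) - W

# L is singular (constants are in the nullspace); for mean-zero f the
# pseudoinverse picks out the unique mean-zero solution of Lu = f.
f = np.zeros(n)
f[0], f[5] = 1.0, -1.0             # a unit source and a unit sink
u = np.linalg.pinv(L) @ f
```

For large graphs one would of course not form the pseudoinverse; this is exactly where the nearly-linear-time solvers discussed on the next slide come in.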

44 Discrete Setting: Graph Laplacians Lu = f
Spielman-Teng (STOC-04, SICOMP-13, SIMAX-14): nearly-linear-time algorithms for graph partitioning and solving linear systems. Maximal spanning trees, support-graph preconditioners, graph sparsification, etc. Theoretical results, impractical algorithms. Gödel Prize 2008, 2015.
Livne-Brandt-2012: Lean Algebraic Multigrid. A practical nearly-linear-time algorithm, but no theoretical guarantee.
Sparse operator compression for graph Laplacians? The key is an efficient algorithm to find a partition {τ_i}_{i=1}^m of the graph vertices such that
‖u − P^{L²}_Φ u‖_{L²} ≤ C_p λ_n(L^{−1})^{1/2} ‖u‖_H,
which is the Poincaré inequality on graphs. Implementing the sparse operator compression in a multigrid manner leads to a nearly-linear-time algorithm.

45 Concluding Remarks
1. We propose the intrinsic sparse mode decomposition (ISMD) to decompose a symmetric positive semidefinite matrix into a finite sum of sparse rank-one components. We apply ISMD to the covariance matrix of a random field and obtain a locally low-dimensional parametrization of the random field.
2. Based on ISMD-like locally low-dimensional parametrization methods, we propose the Stochastic Multiscale Finite Element Method (StoMsFEM). It can significantly reduce the computational cost of every single evaluation of the solution sample.
3. Our recent work on sparse operator compression shows that many symmetric positive semidefinite operators can be localized, for example the Green's functions of elliptic operators and the Matérn class kernels.
4. Representing/approximating an operator with localized basis functions has potential applications in solving deterministic elliptic equations with rough coefficients and in constructing localized orbitals for Hamiltonians in quantum physics.

46 Hou, T. Y., Li, Q., and Zhang, P. (2017a). A sparse decomposition of low rank symmetric positive semidefinite matrices. Multiscale Modeling & Simulation, 15(1):410-444.
Hou, T. Y., Li, Q., and Zhang, P. (2017b). Exploring the locally low dimensional structure in solving random elliptic PDEs. Multiscale Modeling & Simulation, 15(2):661-695.
Hou, T. Y. and Zhang, P. (2017a). Sparse operator compression of elliptic operators, Part I: second order elliptic operators. Research in the Mathematical Sciences.
Hou, T. Y. and Zhang, P. (2017b). Sparse operator compression of elliptic operators, Part II: high order elliptic operators. Research in the Mathematical Sciences.


More information

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017 ACM/CMS 17 Linear Analysis & Applications Fall 217 Assignment 2: PDEs and Finite Element Methods Due: 7th November 217 For this assignment the following MATLAB code will be required: Introduction http://wwwmdunloporg/cms17/assignment2zip

More information

MULTISCALE FINITE ELEMENT METHODS FOR STOCHASTIC POROUS MEDIA FLOW EQUATIONS AND APPLICATION TO UNCERTAINTY QUANTIFICATION

MULTISCALE FINITE ELEMENT METHODS FOR STOCHASTIC POROUS MEDIA FLOW EQUATIONS AND APPLICATION TO UNCERTAINTY QUANTIFICATION MULTISCALE FINITE ELEMENT METHODS FOR STOCHASTIC POROUS MEDIA FLOW EQUATIONS AND APPLICATION TO UNCERTAINTY QUANTIFICATION P. DOSTERT, Y. EFENDIEV, AND T.Y. HOU Abstract. In this paper, we study multiscale

More information

Solving the Stochastic Steady-State Diffusion Problem Using Multigrid

Solving the Stochastic Steady-State Diffusion Problem Using Multigrid Solving the Stochastic Steady-State Diffusion Problem Using Multigrid Tengfei Su Applied Mathematics and Scientific Computing Advisor: Howard Elman Department of Computer Science Sept. 29, 2015 Tengfei

More information

Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs

Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs Roman Andreev ETH ZÜRICH / 29 JAN 29 TOC of the Talk Motivation & Set-Up Model Problem Stochastic Galerkin FEM Conclusions & Outlook Motivation

More information

Robust Domain Decomposition Preconditioners for Abstract Symmetric Positive Definite Bilinear Forms

Robust Domain Decomposition Preconditioners for Abstract Symmetric Positive Definite Bilinear Forms www.oeaw.ac.at Robust Domain Decomposition Preconditioners for Abstract Symmetric Positive Definite Bilinear Forms Y. Efendiev, J. Galvis, R. Lazarov, J. Willems RICAM-Report 2011-05 www.ricam.oeaw.ac.at

More information

Scientific Computing I

Scientific Computing I Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Neckel Winter 2013/2014 Module 8: An Introduction to Finite Element Methods, Winter 2013/2014 1 Part I: Introduction to

More information

Nonparametric density estimation for elliptic problems with random perturbations

Nonparametric density estimation for elliptic problems with random perturbations Nonparametric density estimation for elliptic problems with random perturbations, DqF Workshop, Stockholm, Sweden, 28--2 p. /2 Nonparametric density estimation for elliptic problems with random perturbations

More information

A Statistical Look at Spectral Graph Analysis. Deep Mukhopadhyay

A Statistical Look at Spectral Graph Analysis. Deep Mukhopadhyay A Statistical Look at Spectral Graph Analysis Deep Mukhopadhyay Department of Statistics, Temple University Office: Speakman 335 deep@temple.edu http://sites.temple.edu/deepstat/ Graph Signal Processing

More information

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion:

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion: tifica Lecture 1: Computation of Karhunen-Loeve Expansion: Alexander Litvinenko http://sri-uq.kaust.edu.sa/ Stochastic PDEs We consider div(κ(x, ω) u) = f (x, ω) in G, u = 0 on G, with stochastic coefficients

More information

arxiv: v2 [math.na] 28 Nov 2010

arxiv: v2 [math.na] 28 Nov 2010 Optimal Local Approximation Spaces for Generalized Finite Element Methods with Application to Multiscale Problems Ivo Babuska 1 Robert Lipton 2 arxiv:1004.3041v2 [math.na] 28 Nov 2010 Abstract The paper

More information

Computational Information Games A minitutorial Part II Houman Owhadi ICERM June 5, 2017

Computational Information Games A minitutorial Part II Houman Owhadi ICERM June 5, 2017 Computational Information Games A minitutorial Part II Houman Owhadi ICERM June 5, 2017 DARPA EQUiPS / AFOSR award no FA9550-16-1-0054 (Computational Information Games) Question Can we design a linear

More information

Adaptive Collocation with Kernel Density Estimation

Adaptive Collocation with Kernel Density Estimation Examples of with Kernel Density Estimation Howard C. Elman Department of Computer Science University of Maryland at College Park Christopher W. Miller Applied Mathematics and Scientific Computing Program

More information

Solving the steady state diffusion equation with uncertainty Final Presentation

Solving the steady state diffusion equation with uncertainty Final Presentation Solving the steady state diffusion equation with uncertainty Final Presentation Virginia Forstall vhfors@gmail.com Advisor: Howard Elman elman@cs.umd.edu Department of Computer Science May 6, 2012 Problem

More information

arxiv: v1 [math.na] 4 Jun 2014

arxiv: v1 [math.na] 4 Jun 2014 MIXE GENERALIZE MULTISCALE FINITE ELEMENT METHOS AN APPLICATIONS ERIC T. CHUNG, YALCHIN EFENIEV, AN CHA SHING LEE arxiv:1406.0950v1 [math.na] 4 Jun 2014 Abstract. In this paper, we present a mixed Generalized

More information

Principal Component Analysis

Principal Component Analysis Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used

More information

AMS 529: Finite Element Methods: Fundamentals, Applications, and New Trends

AMS 529: Finite Element Methods: Fundamentals, Applications, and New Trends AMS 529: Finite Element Methods: Fundamentals, Applications, and New Trends Lecture 3: Finite Elements in 2-D Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Finite Element Methods 1 / 18 Outline 1 Boundary

More information

recent developments of approximation theory and greedy algorithms

recent developments of approximation theory and greedy algorithms recent developments of approximation theory and greedy algorithms Peter Binev Department of Mathematics and Interdisciplinary Mathematics Institute University of South Carolina Reduced Order Modeling in

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Lehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V

Lehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V Part I: Introduction to Finite Element Methods Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Necel Winter 4/5 The Model Problem FEM Main Ingredients Wea Forms and Wea

More information

Numerical methods for PDEs FEM convergence, error estimates, piecewise polynomials

Numerical methods for PDEs FEM convergence, error estimates, piecewise polynomials Platzhalter für Bild, Bild auf Titelfolie hinter das Logo einsetzen Numerical methods for PDEs FEM convergence, error estimates, piecewise polynomials Dr. Noemi Friedman Contents of the course Fundamentals

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

H(div) Preconditioning for a Mixed Finite Element Formulation of the Stochastic Diffusion Problem 1

H(div) Preconditioning for a Mixed Finite Element Formulation of the Stochastic Diffusion Problem 1 University of Maryland Department of Computer Science CS-TR-4918 University of Maryland Institute for Advanced Computer Studies UMIACS-TR-2008-15 H(div) Preconditioning for a Mixed Finite Element Formulation

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Spectral element agglomerate AMGe

Spectral element agglomerate AMGe Spectral element agglomerate AMGe T. Chartier 1, R. Falgout 2, V. E. Henson 2, J. E. Jones 4, T. A. Manteuffel 3, S. F. McCormick 3, J. W. Ruge 3, and P. S. Vassilevski 2 1 Department of Mathematics, Davidson

More information

Finding normalized and modularity cuts by spectral clustering. Ljubjana 2010, October

Finding normalized and modularity cuts by spectral clustering. Ljubjana 2010, October Finding normalized and modularity cuts by spectral clustering Marianna Bolla Institute of Mathematics Budapest University of Technology and Economics marib@math.bme.hu Ljubjana 2010, October Outline Find

More information

Multiscale Computation for Incompressible Flow and Transport Problems

Multiscale Computation for Incompressible Flow and Transport Problems Multiscale Computation for Incompressible Flow and Transport Problems Thomas Y. Hou Applied Mathematics, Caltech Collaborators: Y. Efenidev (TAMU), V. Ginting (Colorado), T. Strinopolous (Caltech), Danping

More information

A direct formulation for sparse PCA using semidefinite programming

A direct formulation for sparse PCA using semidefinite programming A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon

More information

Hierarchical Parallel Solution of Stochastic Systems

Hierarchical Parallel Solution of Stochastic Systems Hierarchical Parallel Solution of Stochastic Systems Second M.I.T. Conference on Computational Fluid and Solid Mechanics Contents: Simple Model of Stochastic Flow Stochastic Galerkin Scheme Resulting Equations

More information

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES)

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) RAYTCHO LAZAROV 1 Notations and Basic Functional Spaces Scalar function in R d, d 1 will be denoted by u,

More information

Singular Value Decomposition and Principal Component Analysis (PCA) I

Singular Value Decomposition and Principal Component Analysis (PCA) I Singular Value Decomposition and Principal Component Analysis (PCA) I Prof Ned Wingreen MOL 40/50 Microarray review Data per array: 0000 genes, I (green) i,i (red) i 000 000+ data points! The expression

More information

A Stochastic Collocation based. for Data Assimilation

A Stochastic Collocation based. for Data Assimilation A Stochastic Collocation based Kalman Filter (SCKF) for Data Assimilation Lingzao Zeng and Dongxiao Zhang University of Southern California August 11, 2009 Los Angeles Outline Introduction SCKF Algorithm

More information

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory Physics 202 Laboratory 5 Linear Algebra Laboratory 5 Physics 202 Laboratory We close our whirlwind tour of numerical methods by advertising some elements of (numerical) linear algebra. There are three

More information

Second-Order Inference for Gaussian Random Curves

Second-Order Inference for Gaussian Random Curves Second-Order Inference for Gaussian Random Curves With Application to DNA Minicircles Victor Panaretos David Kraus John Maddocks Ecole Polytechnique Fédérale de Lausanne Panaretos, Kraus, Maddocks (EPFL)

More information

Multilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland

Multilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland Multilevel accelerated quadrature for elliptic PDEs with random diffusion Mathematisches Institut Universität Basel Switzerland Overview Computation of the Karhunen-Loéve expansion Elliptic PDE with uniformly

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Numerical Linear Algebra Background Cho-Jui Hsieh UC Davis May 15, 2018 Linear Algebra Background Vectors A vector has a direction and a magnitude

More information

Multiscale Finite Element Methods. Theory and

Multiscale Finite Element Methods. Theory and Yalchin Efendiev and Thomas Y. Hou Multiscale Finite Element Methods. Theory and Applications. Multiscale Finite Element Methods. Theory and Applications. October 14, 2008 Springer Dedicated to my parents,

More information

UNCERTAINTY ASSESSMENT USING STOCHASTIC REDUCED BASIS METHOD FOR FLOW IN POROUS MEDIA

UNCERTAINTY ASSESSMENT USING STOCHASTIC REDUCED BASIS METHOD FOR FLOW IN POROUS MEDIA UNCERTAINTY ASSESSMENT USING STOCHASTIC REDUCED BASIS METHOD FOR FLOW IN POROUS MEDIA A REPORT SUBMITTED TO THE DEPARTMENT OF ENERGY RESOURCES ENGINEERING OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT

More information

A Vector-Space Approach for Stochastic Finite Element Analysis

A Vector-Space Approach for Stochastic Finite Element Analysis A Vector-Space Approach for Stochastic Finite Element Analysis S Adhikari 1 1 Swansea University, UK CST2010: Valencia, Spain Adhikari (Swansea) Vector-Space Approach for SFEM 14-17 September, 2010 1 /

More information

Quantum Mechanics for Mathematicians: Energy, Momentum, and the Quantum Free Particle

Quantum Mechanics for Mathematicians: Energy, Momentum, and the Quantum Free Particle Quantum Mechanics for Mathematicians: Energy, Momentum, and the Quantum Free Particle Peter Woit Department of Mathematics, Columbia University woit@math.columbia.edu November 28, 2012 We ll now turn to

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

Monte Carlo Methods for Uncertainty Quantification

Monte Carlo Methods for Uncertainty Quantification Monte Carlo Methods for Uncertainty Quantification Mike Giles Mathematical Institute, University of Oxford Contemporary Numerical Techniques Mike Giles (Oxford) Monte Carlo methods 1 / 23 Lecture outline

More information

Generalized Finite Element Methods for Three Dimensional Structural Mechanics Problems. C. A. Duarte. I. Babuška and J. T. Oden

Generalized Finite Element Methods for Three Dimensional Structural Mechanics Problems. C. A. Duarte. I. Babuška and J. T. Oden Generalized Finite Element Methods for Three Dimensional Structural Mechanics Problems C. A. Duarte COMCO, Inc., 7800 Shoal Creek Blvd. Suite 290E Austin, Texas, 78757, USA I. Babuška and J. T. Oden TICAM,

More information

Block-diagonal preconditioning for spectral stochastic finite-element systems

Block-diagonal preconditioning for spectral stochastic finite-element systems IMA Journal of Numerical Analysis (2009) 29, 350 375 doi:0.093/imanum/drn04 Advance Access publication on April 4, 2008 Block-diagonal preconditioning for spectral stochastic finite-element systems CATHERINE

More information

Algorithms for Scientific Computing

Algorithms for Scientific Computing Algorithms for Scientific Computing Finite Element Methods Michael Bader Technical University of Munich Summer 2016 Part I Looking Back: Discrete Models for Heat Transfer and the Poisson Equation Modelling

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Inverse Power Method for Non-linear Eigenproblems

Inverse Power Method for Non-linear Eigenproblems Inverse Power Method for Non-linear Eigenproblems Matthias Hein and Thomas Bühler Anubhav Dwivedi Department of Aerospace Engineering & Mechanics 7th March, 2017 1 / 30 OUTLINE Motivation Non-Linear Eigenproblems

More information

Sampling and Low-Rank Tensor Approximations

Sampling and Low-Rank Tensor Approximations Sampling and Low-Rank Tensor Approximations Hermann G. Matthies Alexander Litvinenko, Tarek A. El-Moshely +, Brunswick, Germany + MIT, Cambridge, MA, USA wire@tu-bs.de http://www.wire.tu-bs.de $Id: 2_Sydney-MCQMC.tex,v.3

More information

Applied Linear Algebra

Applied Linear Algebra Applied Linear Algebra Peter J. Olver School of Mathematics University of Minnesota Minneapolis, MN 55455 olver@math.umn.edu http://www.math.umn.edu/ olver Chehrzad Shakiban Department of Mathematics University

More information

1 Discretizing BVP with Finite Element Methods.

1 Discretizing BVP with Finite Element Methods. 1 Discretizing BVP with Finite Element Methods In this section, we will discuss a process for solving boundary value problems numerically, the Finite Element Method (FEM) We note that such method is a

More information

Lecture 9: Numerical Linear Algebra Primer (February 11st)

Lecture 9: Numerical Linear Algebra Primer (February 11st) 10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template

More information

Robust Principal Component Analysis

Robust Principal Component Analysis ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M

More information

Representation theory and quantum mechanics tutorial Spin and the hydrogen atom

Representation theory and quantum mechanics tutorial Spin and the hydrogen atom Representation theory and quantum mechanics tutorial Spin and the hydrogen atom Justin Campbell August 3, 2017 1 Representations of SU 2 and SO 3 (R) 1.1 The following observation is long overdue. Proposition

More information

STOCHASTIC SAMPLING METHODS

STOCHASTIC SAMPLING METHODS STOCHASTIC SAMPLING METHODS APPROXIMATING QUANTITIES OF INTEREST USING SAMPLING METHODS Recall that quantities of interest often require the evaluation of stochastic integrals of functions of the solutions

More information

INTRODUCTION TO MULTIGRID METHODS

INTRODUCTION TO MULTIGRID METHODS INTRODUCTION TO MULTIGRID METHODS LONG CHEN 1. ALGEBRAIC EQUATION OF TWO POINT BOUNDARY VALUE PROBLEM We consider the discretization of Poisson equation in one dimension: (1) u = f, x (0, 1) u(0) = u(1)

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Lecture 5: Numerical Linear Algebra Cho-Jui Hsieh UC Davis April 20, 2017 Linear Algebra Background Vectors A vector has a direction and a magnitude

More information

Multigrid absolute value preconditioning

Multigrid absolute value preconditioning Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical

More information

Notes for CS542G (Iterative Solvers for Linear Systems)

Notes for CS542G (Iterative Solvers for Linear Systems) Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,

More information

Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning

Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning David Steurer Cornell Approximation Algorithms and Hardness, Banff, August 2014 for many problems (e.g., all UG-hard ones): better guarantees

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Basic Concepts in Matrix Algebra

Basic Concepts in Matrix Algebra Basic Concepts in Matrix Algebra An column array of p elements is called a vector of dimension p and is written as x p 1 = x 1 x 2. x p. The transpose of the column vector x p 1 is row vector x = [x 1

More information

Sparse PCA with applications in finance

Sparse PCA with applications in finance Sparse PCA with applications in finance A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon 1 Introduction

More information

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1 Scientific Computing WS 2018/2019 Lecture 15 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 15 Slide 1 Lecture 15 Slide 2 Problems with strong formulation Writing the PDE with divergence and gradient

More information

Stochastic Spectral Approaches to Bayesian Inference

Stochastic Spectral Approaches to Bayesian Inference Stochastic Spectral Approaches to Bayesian Inference Prof. Nathan L. Gibson Department of Mathematics Applied Mathematics and Computation Seminar March 4, 2011 Prof. Gibson (OSU) Spectral Approaches to

More information

Lecture: Local Spectral Methods (3 of 4) 20 An optimization perspective on local spectral methods

Lecture: Local Spectral Methods (3 of 4) 20 An optimization perspective on local spectral methods Stat260/CS294: Spectral Graph Methods Lecture 20-04/07/205 Lecture: Local Spectral Methods (3 of 4) Lecturer: Michael Mahoney Scribe: Michael Mahoney Warning: these notes are still very rough. They provide

More information

Math 307 Learning Goals

Math 307 Learning Goals Math 307 Learning Goals May 14, 2018 Chapter 1 Linear Equations 1.1 Solving Linear Equations Write a system of linear equations using matrix notation. Use Gaussian elimination to bring a system of linear

More information

Xingye Yue. Soochow University, Suzhou, China.

Xingye Yue. Soochow University, Suzhou, China. Relations between the multiscale methods p. 1/30 Relationships beteween multiscale methods for elliptic homogenization problems Xingye Yue Soochow University, Suzhou, China xyyue@suda.edu.cn Relations

More information

STATISTICAL LEARNING SYSTEMS

STATISTICAL LEARNING SYSTEMS STATISTICAL LEARNING SYSTEMS LECTURE 8: UNSUPERVISED LEARNING: FINDING STRUCTURE IN DATA Institute of Computer Science, Polish Academy of Sciences Ph. D. Program 2013/2014 Principal Component Analysis

More information

Numerical methods for PDEs FEM convergence, error estimates, piecewise polynomials

Numerical methods for PDEs FEM convergence, error estimates, piecewise polynomials Platzhalter für Bild, Bild auf Titelfolie hinter das Logo einsetzen Numerical methods for PDEs FEM convergence, error estimates, piecewise polynomials Dr. Noemi Friedman Contents of the course Fundamentals

More information

Weak Galerkin Finite Element Methods and Applications

Weak Galerkin Finite Element Methods and Applications Weak Galerkin Finite Element Methods and Applications Lin Mu mul1@ornl.gov Computational and Applied Mathematics Computationa Science and Mathematics Division Oak Ridge National Laboratory Georgia Institute

More information

Optimal multilevel preconditioning of strongly anisotropic problems.part II: non-conforming FEM. p. 1/36

Optimal multilevel preconditioning of strongly anisotropic problems.part II: non-conforming FEM. p. 1/36 Optimal multilevel preconditioning of strongly anisotropic problems. Part II: non-conforming FEM. Svetozar Margenov margenov@parallel.bas.bg Institute for Parallel Processing, Bulgarian Academy of Sciences,

More information

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Dongbin Xiu Department of Mathematics, Purdue University Support: AFOSR FA955-8-1-353 (Computational Math) SF CAREER DMS-64535

More information

1 Singular Value Decomposition and Principal Component

1 Singular Value Decomposition and Principal Component Singular Value Decomposition and Principal Component Analysis In these lectures we discuss the SVD and the PCA, two of the most widely used tools in machine learning. Principal Component Analysis (PCA)

More information

Matrix Factorizations

Matrix Factorizations 1 Stat 540, Matrix Factorizations Matrix Factorizations LU Factorization Definition... Given a square k k matrix S, the LU factorization (or decomposition) represents S as the product of two triangular

More information

BALANCING GAUSSIAN VECTORS. 1. Introduction

BALANCING GAUSSIAN VECTORS. 1. Introduction BALANCING GAUSSIAN VECTORS KEVIN P. COSTELLO Abstract. Let x 1,... x n be independent normally distributed vectors on R d. We determine the distribution function of the minimum norm of the 2 n vectors

More information

Sparse polynomial chaos expansions in engineering applications

Sparse polynomial chaos expansions in engineering applications DEPARTMENT OF CIVIL, ENVIRONMENTAL AND GEOMATIC ENGINEERING CHAIR OF RISK, SAFETY & UNCERTAINTY QUANTIFICATION Sparse polynomial chaos expansions in engineering applications B. Sudret G. Blatman (EDF R&D,

More information

MULTIGRID PRECONDITIONING IN H(div) ON NON-CONVEX POLYGONS* Dedicated to Professor Jim Douglas, Jr. on the occasion of his seventieth birthday.

MULTIGRID PRECONDITIONING IN H(div) ON NON-CONVEX POLYGONS* Dedicated to Professor Jim Douglas, Jr. on the occasion of his seventieth birthday. MULTIGRID PRECONDITIONING IN H(div) ON NON-CONVEX POLYGONS* DOUGLAS N ARNOLD, RICHARD S FALK, and RAGNAR WINTHER Dedicated to Professor Jim Douglas, Jr on the occasion of his seventieth birthday Abstract

More information

U.C. Berkeley CS270: Algorithms Lecture 21 Professor Vazirani and Professor Rao Last revised. Lecture 21

U.C. Berkeley CS270: Algorithms Lecture 21 Professor Vazirani and Professor Rao Last revised. Lecture 21 U.C. Berkeley CS270: Algorithms Lecture 21 Professor Vazirani and Professor Rao Scribe: Anupam Last revised Lecture 21 1 Laplacian systems in nearly linear time Building upon the ideas introduced in the

More information

Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients

Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients Università degli Studi del Sannio Dipartimento e Facoltà di Ingegneria, Benevento, Italia Random fields

More information

Collocation based high dimensional model representation for stochastic partial differential equations

Collocation based high dimensional model representation for stochastic partial differential equations Collocation based high dimensional model representation for stochastic partial differential equations S Adhikari 1 1 Swansea University, UK ECCM 2010: IV European Conference on Computational Mechanics,

More information

Aspects of Multigrid

Aspects of Multigrid Aspects of Multigrid Kees Oosterlee 1,2 1 Delft University of Technology, Delft. 2 CWI, Center for Mathematics and Computer Science, Amsterdam, SIAM Chapter Workshop Day, May 30th 2018 C.W.Oosterlee (CWI)

More information

Sparse PCA in High Dimensions

Sparse PCA in High Dimensions Sparse PCA in High Dimensions Jing Lei, Department of Statistics, Carnegie Mellon Workshop on Big Data and Differential Privacy Simons Institute, Dec, 2013 (Based on joint work with V. Q. Vu, J. Cho, and

More information

Algebraic Multigrid as Solvers and as Preconditioner

Algebraic Multigrid as Solvers and as Preconditioner Ò Algebraic Multigrid as Solvers and as Preconditioner Domenico Lahaye domenico.lahaye@cs.kuleuven.ac.be http://www.cs.kuleuven.ac.be/ domenico/ Department of Computer Science Katholieke Universiteit Leuven

More information

PROJECT C: ELECTRONIC BAND STRUCTURE IN A MODEL SEMICONDUCTOR

PROJECT C: ELECTRONIC BAND STRUCTURE IN A MODEL SEMICONDUCTOR PROJECT C: ELECTRONIC BAND STRUCTURE IN A MODEL SEMICONDUCTOR The aim of this project is to present the student with a perspective on the notion of electronic energy band structures and energy band gaps

More information

Communities, Spectral Clustering, and Random Walks

Communities, Spectral Clustering, and Random Walks Communities, Spectral Clustering, and Random Walks David Bindel Department of Computer Science Cornell University 26 Sep 2011 20 21 19 16 22 28 17 18 29 26 27 30 23 1 25 5 8 24 2 4 14 3 9 13 15 11 10 12

More information

Solving the stochastic steady-state diffusion problem using multigrid

Solving the stochastic steady-state diffusion problem using multigrid IMA Journal of Numerical Analysis (2007) 27, 675 688 doi:10.1093/imanum/drm006 Advance Access publication on April 9, 2007 Solving the stochastic steady-state diffusion problem using multigrid HOWARD ELMAN

More information

Certifying the Global Optimality of Graph Cuts via Semidefinite Programming: A Theoretic Guarantee for Spectral Clustering

Certifying the Global Optimality of Graph Cuts via Semidefinite Programming: A Theoretic Guarantee for Spectral Clustering Certifying the Global Optimality of Graph Cuts via Semidefinite Programming: A Theoretic Guarantee for Spectral Clustering Shuyang Ling Courant Institute of Mathematical Sciences, NYU Aug 13, 2018 Joint

More information

Positive Definite Kernels: Opportunities and Challenges

Positive Definite Kernels: Opportunities and Challenges Positive Definite Kernels: Opportunities and Challenges Michael McCourt Department of Mathematical and Statistical Sciences University of Colorado, Denver CUNY Mathematics Seminar CUNY Graduate College

More information