Non-Intrusive Solution of Stochastic and Parametric Equations
1 Non-Intrusive Solution of Stochastic and Parametric Equations

Hermann G. Matthies (a), Loïc Giraldi (b), Alexander Litvinenko (c), Dishi Liu (d), and Anthony Nouy (b)
(a) TU Braunschweig, Brunswick, Germany
(b) École Centrale de Nantes, GeM, Nantes, France
(c) KAUST, Thuwal, Saudi Arabia
(d) Institute of Aerodynamics and Flow Control, DLR, Brunswick, Germany
wire@tu-bs.de
2 Overview

1. Parametric equations
2. Stochastic model problem
3. Plain vanilla Galerkin
4. To be or not to be intrusive
5. Numerical comparison
6. Galerkin and low-rank tensor approximation
7. Non-intrusive computation
8. Numerical examples
3 General mathematical setup

Consider an operator equation for a physical system modelled by $A$:
$$A(p; u) = f(p), \qquad u \in \mathcal{U}, \; f \in \mathcal{F},$$
with $\mathcal{U}$ the space of states and $\mathcal{F} = \mathcal{U}^*$ the dual space of actions / forcings. Operator and right-hand side depend on parameters $p$, and the equation is well posed for all $p \in \mathcal{P}$.

Assume an iterative solver, convergent for all values of $p$, with iterates for $k = 0, 1, \ldots$:
$$u^{(k+1)}(p) = S\bigl(p;\, u^{(k)}(p),\, R(p; u^{(k)}(p))\bigr), \qquad u^{(k)}(p) \to u^{*}(p),$$
where $S$ is one cycle of the solver and the residuum is
$$R(u^{(k)}) := R(p; u^{(k)}(p)) := f(p) - A(p; u^{(k)}(p)).$$
When the residuum vanishes, $R(p; u^{*}(p)) = 0$, the mapping $S$ has a fixed point $u^{*}(p) = S(p; u^{*}(p), 0)$.
4 Model stochastic problem

(Figure: 2D aquifer model, geometry with sources, inflow and outflow boundaries, Dirichlet b.c.)

Model with stochastic data $p$:
$$-\nabla\cdot\bigl(\kappa(x,p)\,\nabla u(x,p)\bigr) = f(x,p) \text{ and b.c.}, \qquad x \in G \subset \mathbb{R}^d,$$
$$\bigl(\kappa(x,p)\,\nabla u(x,p)\bigr)\cdot n = g(x,p), \qquad x \in \Gamma \subset \partial G, \quad p \in \mathcal{P},$$
with $\kappa$ a stochastic conductivity, and $f$ and $g$ stochastic sinks and sources. One $p$ is a realisation of $\kappa, f, g$.
5 Preconditioned residual

In the iteration set $u^{(k+1)} = u^{(k)} + \Delta u^{(k)}$ with
$$\Delta u^{(k)} := S(p; u^{(k)}, R(p; u^{(k)})) - u^{(k)},$$
and usually $P(\Delta u^{(k)}) = R(p; u^{(k)})$, so that
$$S(p; u^{(k)}) = u^{(k)} + P^{-1}(R(p; u^{(k)}))$$
(list of arguments shortened). Here $P$ is some preconditioner, which may depend on $p$, the iteration counter $k$, and the current iterate $u^{(k)}$; e.g. in Newton's method $P = D_u A(p; u^{(k)})$.
6 Iteration

Algorithm:
  Start with some initial guess $u^{(0)}$; $k \leftarrow 0$
  while no convergence do
    compute $\Delta u^{(k)} \leftarrow S(p; u^{(k)}, R(p; u^{(k)})) - u^{(k)}$
    $u^{(k+1)} \leftarrow u^{(k)} + \Delta u^{(k)}$
    $k \leftarrow k + 1$
  end while

Uniform contraction: for all $p, u, v$:
$$\bigl\|S(p; u(p), R(p; u(p))) - S(p; v(p), R(p; v(p)))\bigr\|_{\mathcal{U}} \le \varrho\, \|u(p) - v(p)\|_{\mathcal{U}}, \qquad \text{with } \varrho < 1.$$
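The iteration above can be sketched for a single parameter value. The scalar model $A(p; u) = u + p\,u^3$ with $f(p) = 1 + p$ is an illustrative assumption, not an equation from the talk; the solver cycle $S$ is one Newton-preconditioned step with $P = D_u A$.

```python
# Minimal sketch of the parametric fixed-point iteration
# u^(k+1) = u^(k) + P^{-1} R(p; u^(k)), with Newton preconditioner P = D_u A.
# Model A(p; u) = u + p*u**3 and rhs f(p) = 1 + p are illustrative assumptions.

def residual(p, u):
    """Residuum R(p; u) = f(p) - A(p; u)."""
    return (1.0 + p) - (u + p * u**3)

def solve_fixed_point(p, u0=0.0, tol=1e-12, maxit=100):
    """Iterate one solver cycle S per step until the residuum vanishes."""
    u = u0
    for _ in range(maxit):
        P = 1.0 + 3.0 * p * u**2        # preconditioner P = D_u A(p; u)
        du = residual(p, u) / P         # Delta u = P^{-1} R(p; u)
        u += du
        if abs(du) < tol:               # fixed point: R(p; u*) ~ 0
            break
    return u

# the solver converges for each parameter value p separately
for p in [0.0, 0.5, 2.0]:
    assert abs(residual(p, solve_fixed_point(p))) < 1e-10
```

Each parameter value is solved independently here, which is the decoupled, non-intrusive mode of operation the later slides build on.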
7 Discretisation I

Let $\mathcal{S} \subset \mathbb{R}^{\mathcal{P}}$ be an appropriate Hilbert space of real-valued functions on $\mathcal{P}$, and look for the solution in the tensor space $\mathcal{U} \otimes \mathcal{S}$, so that $u(p) = \sum_\iota u_\iota\, \varsigma_\iota(p)$. Normally one first discretises $\mathcal{U}$ by choosing a finite-dimensional $\mathcal{U}_N \subset \mathcal{U}$, but the results here are independent of that.

Direct integration: to compute a Quantity of Interest (QoI)
$$Q(u) = \int_{\mathcal{P}} Q(u(p), p)\, \mu(dp) \approx \sum_z w_z\, Q(u(p_z), p_z),$$
the integrand and $u(p_z)$ have to be computed for all $p_z$: expensive, but the solves are decoupled and non-intrusive. We want to replace $u$ by a proxy / meta-model or emulator $u(p) \approx u_M(p)$.
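The direct-integration formula can be sketched with a quadrature rule. The solution map $u(p) = p^2$, the identity QoI, and the uniform measure on $[-1, 1]$ are illustrative assumptions.

```python
import numpy as np

# Sketch of "direct integration": E[Q(u)] = ∫ Q(u(p)) μ(dp) by Gauss-Legendre
# quadrature with μ uniform on [-1, 1]. The toy map u(p) = p**2 and Q = id
# are illustrative assumptions, not the model problem from the talk.
p_z, w_z = np.polynomial.legendre.leggauss(5)   # quadrature points / weights
w_z = w_z / 2.0                                 # normalise: μ(dp) = dp/2

u_of_p = lambda p: p**2                         # one decoupled "solve" per p_z
qoi = sum(w * u_of_p(p) for p, w in zip(p_z, w_z))
assert abs(qoi - 1.0 / 3.0) < 1e-12             # ∫_{-1}^{1} p^2 dp/2 = 1/3
```

Every quadrature point costs one full solve of the deterministic problem, which is what makes a cheap emulator $u_M$ attractive.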
8 Discretisation II

(Further) discretise by choosing $\mathcal{S}_M = \operatorname{span}\{\Psi_\alpha(p)\} \subset \mathcal{S}$, to give $\mathcal{U} \otimes \mathcal{S}_M = \mathcal{U}_M \subset \mathcal{U} \otimes \mathcal{S}$. Ansatz:
$$u_M(p) = \sum_\alpha u_\alpha\, \Psi_\alpha(p) \in \mathcal{U}_M.$$
Often $\Psi_\alpha(p) = \Psi_\alpha(\theta(p))$, where the $\theta(p) = [\ldots, \theta_l(p), \ldots]$ are independent. If $\Psi_\alpha(\theta) = \prod_l \psi_{\alpha_l}(\theta_l)$ with multi-index $\alpha = (\ldots, \alpha_l, \ldots)$, then $\mathcal{S}_M = \bigotimes_l \mathcal{S}_{M,l}$ allows for higher-degree tensors. Computation is simplest when the $\Psi_\alpha$ are orthogonal (orthonormal), e.g. in the inner product
$$\langle \phi, \varphi \rangle_{\mathcal{S}} = \int_{\mathcal{P}} \phi(p)\,\varphi(p)\, \mu(dp).$$
How to determine the unknown coefficients $u_\alpha$?
9 Solution procedures I

Projection of the solution $u(p)$, or of the residuum $R(p; u(p))$.

Interpolation: determine the $u_\alpha$ by the interpolating condition
$$\forall p_\beta: \quad u(p_\beta) \stackrel{!}{=} u_M(p_\beta) = \sum_\alpha u_\alpha\, \Psi_\alpha(p_\beta).$$
Simplest when the Kronecker-δ property $\Psi_\alpha(p_\beta) = \delta_{\alpha\beta}$ is satisfied. Solve the equation at the interpolation points $p_\beta$: decoupled, non-intrusive solves.

Pseudo-spectral projection: simple, as the $\Psi_\alpha$ are orthonormal. Compute the projection inner product (integral) by quadrature, i.e.
$$u_\alpha = \int_{\mathcal{P}} \Psi_\alpha(p)\, u(p)\, \mu(dp) \approx \sum_z w_z\, \Psi_\alpha(p_z)\, u(p_z),$$
and solve the equation at the quadrature points $p_z$: decoupled, non-intrusive solves.
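The pseudo-spectral coefficients can be sketched with orthonormalised Legendre polynomials for the uniform measure on $[-1, 1]$. The "solution map" $u(p) = e^p$ and the degree $M = 6$ are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch of pseudo-spectral projection: u_α = Σ_z w_z Ψ_α(p_z) u(p_z), with
# Ψ_α the Legendre polynomials orthonormalised w.r.t. the uniform measure on
# [-1, 1]. The map u(p) = exp(p) is an illustrative assumption.
u = lambda p: np.exp(p)                          # "solve" at each quadrature point
M = 6                                            # highest polynomial degree
p_z, w_z = legendre.leggauss(M + 1)
w_z = w_z / 2.0                                  # uniform measure μ(dp) = dp/2

def psi(a, p):                                   # orthonormal Legendre Ψ_a
    c = np.zeros(a + 1); c[a] = 1.0
    return np.sqrt(2 * a + 1) * legendre.legval(p, c)

u_alpha = np.array([np.sum(w_z * psi(a, p_z) * u(p_z)) for a in range(M + 1)])

# evaluate the proxy u_M(p) = Σ_α u_α Ψ_α(p) and compare with u(p)
u_M = lambda p: sum(u_alpha[a] * psi(a, p) for a in range(M + 1))
assert abs(u_M(0.3) - u(0.3)) < 1e-4
```

Only the $M + 1$ decoupled solves at the quadrature points enter; the mean, for instance, is simply the coefficient $u_0$.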
10 Solution procedures II

The mapping $u(\cdot) \mapsto u_M(\cdot)$ is a projection $\Pi$. To describe a general projection, choose $\hat{\mathcal{S}}_M = \operatorname{span}\{\Phi_\alpha(p)\}$; the projection is orthogonal to $\hat{\mathcal{S}}_M$:
$$\forall \varphi \in \hat{\mathcal{S}}_M: \quad \langle (I - \Pi)u, \varphi \rangle = 0, \quad \text{i.e. } \hat{\mathcal{S}}_M^{\perp} = \operatorname{im}(I - \Pi).$$
Approximation properties are determined by $\mathcal{S}_M$, stability by $\hat{\mathcal{S}}_M$.

Collocation / interpolation: solve the equation at the collocation / interpolation points $p_\beta$, i.e. $\Phi_\beta(p) = \delta(p - p_\beta)$:
$$R(p_\beta; u_M(p_\beta)) = R\Bigl(p_\beta; \sum_\alpha u_\alpha\, \Psi_\alpha(p_\beta)\Bigr) \stackrel{!}{=} 0.$$
With the Kronecker-δ property: $R(p_\beta; u_\beta) = 0$, the same as interpolation: decoupled, non-intrusive solves.

We worry about the norm $\|\Pi\|$: the norm of the collocation projector $\Pi_C$ may grow with $M$.
11 Projectors

The pseudo-spectral projector $\Pi_P$ is orthogonal, i.e. $\|\Pi_P\| = 1$. This means that $\hat{\mathcal{S}}_M = \mathcal{S}_M$, normally $\Phi_\alpha = \Psi_\alpha$.

Galerkin: apply Galerkin weighting:
$$\forall \beta: \quad \Bigl\langle \Phi_\beta(p),\, R(p; u_M(p)) \Bigr\rangle = \Bigl\langle \Phi_\beta(p),\, R\bigl(p; \textstyle\sum_\alpha u_\alpha \Psi_\alpha(p)\bigr) \Bigr\rangle = 0.$$
Coupled equations: is this intrusive? When solved in a partitioned way, with the residua computed by quadrature, it is non-intrusive; it needs only residua at the quadrature points. To keep the norm of the projector as small as possible (Bubnov-Galerkin), choose the orthogonal projection $\Phi_\alpha = \Psi_\alpha$.
12 Galerkin on iteration equation

Trick: project the iteration equation. Set $u^{(k)}(p) = \sum_\alpha u^{(k)}_\alpha \Psi_\alpha(p)$ and $\mathbf{u}^{(k)} = [\ldots, u^{(k)}_\beta, \ldots]$. Then
$$u^{(k+1)} = u^{(k)} + \Delta u^{(k)} = S(u^{(k)}, R(u^{(k)})) \;\Longrightarrow\; \mathbf{u}^{(k+1)} = \mathbf{u}^{(k)} + \Delta\mathbf{u}^{(k)},$$
with
$$\Delta\mathbf{u}^{(k)} := \bigl[\ldots,\, \langle \Psi_\beta,\, S(p; u^{(k)}(p), R(p; u^{(k)}(p))) \rangle,\, \ldots\bigr] - \mathbf{u}^{(k)}.$$
Define a mapping $\mathbf{S}(\mathbf{u})$:
$$\mathbf{S}(\mathbf{u}) := \Bigl[\ldots,\, \bigl\langle \Psi_\beta,\, S\bigl(p; \textstyle\sum_\alpha u_\alpha \Psi_\alpha(p),\, R(p; \sum_\alpha u_\alpha \Psi_\alpha(p))\bigr) \bigr\rangle,\, \ldots\Bigr],$$
then $\Delta\mathbf{u}^{(k)} = \mathbf{S}(\mathbf{u}^{(k)}) - \mathbf{u}^{(k)}$ and $\mathbf{u}^{(k+1)} = \mathbf{u}^{(k)} + \Delta\mathbf{u}^{(k)} = \mathbf{S}(\mathbf{u}^{(k)})$.
13 Convergence

  Start with some initial guess $\mathbf{u}^{(0)}$; $k \leftarrow 0$
  while no convergence do
    compute $\Delta\mathbf{u}^{(k)}$ as above
    $\mathbf{u}^{(k+1)} \leftarrow \mathbf{u}^{(k)} + \Delta\mathbf{u}^{(k)}$
    $k \leftarrow k + 1$
  end while

This is a nonlinear block Jacobi algorithm.

Theorem: the mapping $\mathbf{S}$ has the same contraction factor $\varrho$. This means that the simple nonlinear block Jacobi algorithm converges as before.
14 The myth about intrusiveness

Folklore: Galerkin methods are intrusive. They can be, but they don't have to be. Question: to be or not to be intrusive?

The stochastic Galerkin condition for the iteration equation requires $\mathbf{S}(\mathbf{u}^{(k)})$, approximated by
$$\mathbf{S}(\mathbf{u}^{(k)}) \approx \mathbf{S}_Z(\mathbf{u}^{(k)}) = \Bigl[\ldots,\, \sum_z \upsilon_z\, \Psi_\beta(p_z)\, S\bigl(p_z;\, u^{(k)}(p_z),\, R(p_z; u^{(k)}(p_z))\bigr),\, \ldots\Bigr],$$
to give $\Delta_Z\mathbf{u}^{(k)} = \mathbf{S}_Z(\mathbf{u}^{(k)}) - \mathbf{u}^{(k)}$. This requires only the evaluation of the preconditioned residuum, i.e. one iteration with the solver, at each $p_z$. The theorem still holds with $\Delta\mathbf{u}^{(k)}$ replaced by $\Delta_Z\mathbf{u}^{(k)}$ in the algorithm.
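The quadrature-based Galerkin block Jacobi iteration can be sketched end to end for a scalar problem. The model $A(p; u) = u + \kappa(p)\,u^3$ with $\kappa(p) = 0.05(p+1)$, $f(p) = 2 + \sin 2p$, a uniform measure on $[-1, 1]$, and a degree-8 Legendre basis are all illustrative assumptions; the solver $S$ is one Newton-preconditioned cycle, evaluated only at the quadrature points.

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch of the non-intrusive Galerkin block Jacobi iteration: the update
# Δ_Z u = S_Z(u) - u needs only one preconditioned solver sweep per p_z.
# The toy model A(p; u) = u + kappa(p)*u**3 = f(p), kappa(p) = 0.05*(p+1),
# f(p) = 2 + sin(2p), is an illustrative assumption.
M = 8                                           # polynomial degree of S_M
p_z, w_z = legendre.leggauss(2 * M)             # quadrature points / weights
w_z = w_z / 2.0                                 # uniform measure μ(dp) = dp/2

def psi(a, p):                                  # orthonormal Legendre Ψ_a
    c = np.zeros(a + 1); c[a] = 1.0
    return np.sqrt(2 * a + 1) * legendre.legval(p, c)

Psi = np.array([psi(a, p_z) for a in range(M + 1)])   # basis values at the p_z

def S_pointwise(p, u):
    """One Newton-preconditioned solver cycle: u + P^{-1} R, P = D_u A."""
    kappa = 0.05 * (p + 1.0)
    R = (2.0 + np.sin(2.0 * p)) - (u + kappa * u**3)  # residuum R(p; u)
    return u + R / (1.0 + 3.0 * kappa * u**2)

u_vec = np.zeros(M + 1)                         # Galerkin coefficients u_α
for _ in range(200):                            # nonlinear block Jacobi
    s_at_z = S_pointwise(p_z, Psi.T @ u_vec)    # decoupled sweeps at the p_z
    u_new = Psi @ (w_z * s_at_z)                # S_Z(u) by quadrature
    if np.linalg.norm(u_new - u_vec) < 1e-13:
        break
    u_vec = u_new

# at convergence u_vec is a fixed point of the quadrature-based Galerkin map
assert np.linalg.norm(Psi @ (w_z * S_pointwise(p_z, Psi.T @ u_vec)) - u_vec) < 1e-10
```

Nothing in the loop touches the solver's internals: the coupled Galerkin system is driven entirely by black-box solver sweeps at the quadrature points.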
15 Numerical example

(Figure: nonlinear resistor network with nodes 1-6 connected by resistors $R$.)

$$A(p; u) := K u + \lambda_1(p_1)\,(u^{T} u)\, u = \lambda_2(p_2)\, f_0 =: f(p), \qquad f_0 := [1, 0, 0, 0, 0]^{T}.$$
16 Numerical example spec

  Case              1        2        3        4
  λ1(p1)            …        …        …        sin(4 p1 + 2)
  λ2(p2)            …        …        …        sin(p2) + 30
  c.o.v.            2.5e-2   2.9e+1   1.7e-1   2.2e-1

(Figure: RMSE versus convergence criterion ε_tol, for 2nd-, 3rd-, 4th- and 5th-order polynomials.)
17 Numerical results

  order m | solver calls |   ε(l2(u))    |     ε(l1(u))    |   ε(l2(R_u))
          |   P      G   |   P      G    |    P       G    |    P      G
    …     |   …      …   |   …    6.1e-5 | 3.5e-5  3.5e-5  | 4.1e-5    …
    …     |   …      …   |   …    3.9e-6 | 2.3e-6  2.3e-6  | 2.6e-6    …
    …     |   …      …   |   …    2.7e-7 | 1.6e-7  1.6e-7  | 1.8e-7    …
    …     |   …      …   |   …    2.0e-8 | 1.2e-8  1.2e-8  | 1.4e-8    …

Low-rank approximation: write $\mathbf{u} := [\ldots, u_\alpha, \ldots] = (u_{\alpha,n})$,
$$\mathbf{u} = \sum_{\alpha,n} u_{\alpha,n}\, e^\alpha \otimes e_n \approx \mathbf{u}_r = \sum_{j=1}^{r} w_j \otimes \eta_j.$$
Use faster global methods than block Jacobi, e.g. quasi-Newton. Try to keep a low-rank tensor approximation throughout, from input fields to output solution.
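The low-rank form $\mathbf{u}_r = \sum_j w_j \otimes \eta_j$ can be sketched for the coefficient array $(u_{\alpha,n})$ seen as a matrix; the truncated SVD gives the best rank-$r$ approximation in the Frobenius norm. The matrix sizes and the exact-rank-3 target are illustrative assumptions.

```python
import numpy as np

# Sketch: the coefficient array u = (u_{α,n}) (basis index α, spatial index n)
# is approximated by u_r = Σ_j w_j ⊗ η_j via truncated SVD. The random
# rank-3 target matrix is an illustrative assumption.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 60))   # exact rank 3

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 3
A_r = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in range(r))      # Σ_j w_j η_j^T

assert np.linalg.norm(A - A_r) < 1e-10 * np.linalg.norm(A)        # rank 3 suffices
```

Storing $r$ pairs $(w_j, \eta_j)$ instead of the full array is what saves both storage and computation in the tensor setting.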
18 Successive rank-one updates (SR1U)

Assume a functional $J(p; u)$ such that $A(p; u) - f(p) = \delta_u J(p; u) = 0$, so that solving the equation is equivalent to minimising $J$ for each $p$. Build the solution rank-one term by rank-one term: with the already computed $u_r := \sum_{j=1}^{r-1} w_j \otimes \eta_j$, add a new term $w_r \otimes \eta_r$ through
$$\min_{w_r, \eta_r} J(u_r + w_r \eta_r) \;\Longleftrightarrow\; \delta_{w,\eta} J(u_r + w_r \eta_r) = 0:$$
successive rank-one updates (SR1U), also known as proper generalised decomposition (PGD). This Galerkin procedure only solves small problems, and often gives good approximations with small $r$.
19 Low-rank approximation (basic PGD)

Define $J_r(w_r, \eta_r) := J(u_r(p) + w_r \eta_r)$. New $w_r$ and $\eta_r$ are found via the system
$$\delta_w J_r(w_r, \eta_r) = 0, \qquad \delta_\eta J_r(w_r, \eta_r) = 0, \qquad \|w_r\| = 1.$$

Block Jacobi solver:
  $u_1 \leftarrow 0$; $\eta_1 \leftarrow 1$; $w_1 \leftarrow 0$
  for $r = 1, \ldots$, until $u_r + w_r \eta_r$ is accurate enough do
    while no convergence do
      $\eta_r \leftarrow \eta_r / \|\eta_r\|$; solve $\delta_w J_r(w_r, \eta_r) = 0$ for $w_r$
      $w_r \leftarrow w_r / \|w_r\|$; solve $\delta_\eta J_r(w_r, \eta_r) = 0$ for $\eta_r$
    end while
    $u_{r+1} \leftarrow u_r + w_r \eta_r$
  end for
Output: a basic (greedy) low-rank approximation $u_r$.
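The greedy rank-one loop above can be sketched for the simplest quadratic functional $J(u) = \tfrac12 \|u - b\|_F^2$, whose stationarity condition is the best-approximation problem for a given array $b$; the inner while loop becomes an alternating (power-iteration-like) sweep. The target $b$, the iteration counts, and the rank are illustrative assumptions.

```python
import numpy as np

# Sketch of the successive rank-one update / basic PGD loop for the quadratic
# functional J(u) = 1/2 ||u - b||_F^2, with w and η updated alternately as in
# the algorithm above. The rank-2 target array b is an illustrative assumption.
rng = np.random.default_rng(1)
b = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 50))   # exact rank 2

u = np.zeros_like(b)
for r in range(4):                         # greedy rank-one updates
    w = np.zeros(b.shape[0]); eta = np.ones(b.shape[1])
    for _ in range(50):                    # inner block Jacobi (ALS) sweeps
        eta /= np.linalg.norm(eta)
        w = (b - u) @ eta                  # solve δ_w J_r = 0 for w
        w /= np.linalg.norm(w)             # normalisation ||w_r|| = 1
        eta = (b - u).T @ w                # solve δ_η J_r = 0 for η
    u = u + np.outer(w, eta)               # u_{r+1} = u_r + w_r ⊗ η_r

assert np.linalg.norm(b - u) < 1e-6 * np.linalg.norm(b)
```

Each rank-one step only ever solves problems of the size of one factor, which is the point of SR1U / PGD; for this quadratic $J$ the greedy updates reproduce the dominant singular components one by one.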
20 Non-intrusive residual for PGD

Non-intrusive approximation of the first equation:
$$\delta_w J_r(w_r, \eta_r) = 0 \;\Longleftrightarrow\; \langle \delta_u J(u_r + w_r \eta_r), \eta_r \rangle_{\mathcal{S}} = 0 \text{ in } \mathcal{U}:$$
$$0 = \int_{\mathcal{P}} R\bigl(p; u_r(p) + w_r \eta_r(p)\bigr)\, \eta_r(p)\, dp \approx \sum_z \upsilon_z\, R\bigl(p_z; u_r(p_z) + w_r \eta_r(p_z)\bigr)\, \eta_r(p_z).$$
Second equation:
$$\delta_\eta J_r(w_r, \eta_r) = 0 \;\Longleftrightarrow\; \langle \delta_u J(u_r + w_r \eta_r), w_r \rangle_{\mathcal{U}} = 0 \text{ in } \mathcal{S}, \text{ i.e. } \forall \lambda \in \mathcal{S}:$$
$$0 = \int_{\mathcal{P}} \bigl\langle R\bigl(p; u_r(p) + w_r \eta_r(p)\bigr),\, w_r \bigr\rangle_{\mathcal{U}}\, \lambda(p)\, dp \approx \sum_z \upsilon_z\, \bigl\langle R\bigl(p_z; u_r(p_z) + w_r \eta_r(p_z)\bigr),\, w_r \bigr\rangle_{\mathcal{U}}\, \lambda(p_z).$$
21 Recent improvements

- Increase $u_r$ by more than one term at a time, at the price of larger systems to be solved.
- Use a faster algorithm than block Jacobi, e.g. quasi-Newton methods (here BFGS).
- Use previous iterates as control variates, so as to need fewer integration points per iteration.
- Increase the accuracy of the integration (number of integration points) as the iteration converges.
22

The matrix is chosen to be the linear part $B$ of the Hessian of the functional $J$. The low-rank approximations are also compared to the full-rank Galerkin approximation computed with the block Jacobi algorithm introduced in [8], with a stagnation criterion of … The comparison is made in Table 1 for total degrees d = 2, 3, 4, 5 and ranks 1, 2, 3, 4, 5 for the approximations.

Table 1: relative error for the approximation resulting from the block Jacobi solver, the basic PGD and the improved algorithm, for different total degrees d and different ranks r.

                                          d=2   d=3   d=4   d=5
  Block Jacobi solver [8]                  …     …     …     …
  Basic PGD (Algorithm 1), r = 1,…,5       …     …     …     …
  Improved PGD (Algorithm 3), r = 1,…,5    …     …     …     …
23

The greedy approximation gives satisfying results, even if the result is not optimal compared to the approximation resulting from a direct optimisation in low-rank subsets. For the rest of this section, we focus on d = 5 and measure the efficiency of the different algorithms by counting the number of calls to the residual $R(p_z; u_r(p_z)) = b(p_z) - A(u_r(p_z); p_z)$. The results are reported in Table 2.

Table 2: number of calls to the residual and corresponding relative error for different ranks r for the basic PGD and the improved algorithm.

                                       r=1   r=2   r=3   r=4   r=5
  Basic PGD (Alg. 1): relative error    …     …     …     …     …
  Basic PGD (Alg. 1): residual calls    …     …     …     …     …
  Improved (Alg. 3): relative error     …     …     …     …     …
  Improved (Alg. 3): residual calls     …     …     …     …     …

Both algorithms behave similarly at the beginning, until r = 2. From r = 3 on, Algorithm 3 becomes the most efficient for computing the low-rank approximation. However, if we compare with the block Jacobi solver, the latter requires only 540 calls to the residual. This suggests that the classical algorithms for computing the low-rank approximation of the solution of nonlinear equations must be reconsidered in terms of efficiency and intrusiveness, and that different approaches must be proposed.
24 Obstacle example

(Figure 2: obstacle $g(p; x)$ and solution $u(p; x)$ as functions of $x$ and $p$ [3].)
25 Obstacle example: convergence

(Figure 3: relative error with respect to the rank $r$ of the approximation, for the SVD of the $L^2$-projection, Algorithm 1 and Algorithm 3.)
26 Conclusion

- Parametric problems can be emulated.
- Galerkin methods can be non-intrusive.
- Convergence can be accelerated by faster global algorithms.
- For efficiency, try to use sparse representations throughout; an ansatz in low-rank tensor products saves storage as well as computation.
- PGD / SR1U is inherently a Galerkin procedure, and it can also be non-intrusive.
- Low-rank tensor representations can be very accurate.
More informationNumerical Analysis Comprehensive Exam Questions
Numerical Analysis Comprehensive Exam Questions 1. Let f(x) = (x α) m g(x) where m is an integer and g(x) C (R), g(α). Write down the Newton s method for finding the root α of f(x), and study the order
More informationLeast Squares Approximation
Chapter 6 Least Squares Approximation As we saw in Chapter 5 we can interpret radial basis function interpolation as a constrained optimization problem. We now take this point of view again, but start
More informationIterative Solution methods
p. 1/28 TDB NLA Parallel Algorithms for Scientific Computing Iterative Solution methods p. 2/28 TDB NLA Parallel Algorithms for Scientific Computing Basic Iterative Solution methods The ideas to use iterative
More informationSolving linear equations with Gaussian Elimination (I)
Term Projects Solving linear equations with Gaussian Elimination The QR Algorithm for Symmetric Eigenvalue Problem The QR Algorithm for The SVD Quasi-Newton Methods Solving linear equations with Gaussian
More informationMODEL REDUCTION BASED ON PROPER GENERALIZED DECOMPOSITION FOR THE STOCHASTIC STEADY INCOMPRESSIBLE NAVIER STOKES EQUATIONS
MODEL REDUCTION BASED ON PROPER GENERALIZED DECOMPOSITION FOR THE STOCHASTIC STEADY INCOMPRESSIBLE NAVIER STOKES EQUATIONS L. TAMELLINI, O. LE MAÎTRE, AND A. NOUY Abstract. In this paper we consider a
More informationPolynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format
Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format arxiv:1503.03210v1 [math.na] 11 Mar 2015 Sergey Dolgov Boris N.
More informationOrthogonality of hat functions in Sobolev spaces
1 Orthogonality of hat functions in Sobolev spaces Ulrich Reif Technische Universität Darmstadt A Strobl, September 18, 27 2 3 Outline: Recap: quasi interpolation Recap: orthogonality of uniform B-splines
More informationIndefinite and physics-based preconditioning
Indefinite and physics-based preconditioning Jed Brown VAW, ETH Zürich 2009-01-29 Newton iteration Standard form of a nonlinear system F (u) 0 Iteration Solve: Update: J(ũ)u F (ũ) ũ + ũ + u Example (p-bratu)
More informationFast Multipole BEM for Structural Acoustics Simulation
Fast Boundary Element Methods in Industrial Applications Fast Multipole BEM for Structural Acoustics Simulation Matthias Fischer and Lothar Gaul Institut A für Mechanik, Universität Stuttgart, Germany
More informationMatrix assembly by low rank tensor approximation
Matrix assembly by low rank tensor approximation Felix Scholz 13.02.2017 References Angelos Mantzaflaris, Bert Juettler, Boris Khoromskij, and Ulrich Langer. Matrix generation in isogeometric analysis
More informationINTRODUCTION TO FINITE ELEMENT METHODS
INTRODUCTION TO FINITE ELEMENT METHODS LONG CHEN Finite element methods are based on the variational formulation of partial differential equations which only need to compute the gradient of a function.
More informationLehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V
Part I: Introduction to Finite Element Methods Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Necel Winter 4/5 The Model Problem FEM Main Ingredients Wea Forms and Wea
More informationWeighted Residual Methods
Weighted Residual Methods Introductory Course on Multiphysics Modelling TOMASZ G. ZIELIŃSKI bluebox.ippt.pan.pl/ tzielins/ Institute of Fundamental Technological Research of the Polish Academy of Sciences
More informationChapter 6. Finite Element Method. Literature: (tiny selection from an enormous number of publications)
Chapter 6 Finite Element Method Literature: (tiny selection from an enormous number of publications) K.J. Bathe, Finite Element procedures, 2nd edition, Pearson 2014 (1043 pages, comprehensive). Available
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 33: Adaptive Iteration Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter 33 1 Outline 1 A
More informationLecture 9: Numerical Linear Algebra Primer (February 11st)
10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template
More information. D CR Nomenclature D 1
. D CR Nomenclature D 1 Appendix D: CR NOMENCLATURE D 2 The notation used by different investigators working in CR formulations has not coalesced, since the topic is in flux. This Appendix identifies the
More informationDomain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions
Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions Bernhard Hientzsch Courant Institute of Mathematical Sciences, New York University, 51 Mercer Street, New
More informationNumerical Methods for Two Point Boundary Value Problems
Numerical Methods for Two Point Boundary Value Problems Graeme Fairweather and Ian Gladwell 1 Finite Difference Methods 1.1 Introduction Consider the second order linear two point boundary value problem
More informationProjection Methods. (Lectures on Solution Methods for Economists IV) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 March 7, 2018
Projection Methods (Lectures on Solution Methods for Economists IV) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 March 7, 2018 1 University of Pennsylvania 2 Boston College Introduction Remember that
More informationMethods that avoid calculating the Hessian. Nonlinear Optimization; Steepest Descent, Quasi-Newton. Steepest Descent
Nonlinear Optimization Steepest Descent and Niclas Börlin Department of Computing Science Umeå University niclas.borlin@cs.umu.se A disadvantage with the Newton method is that the Hessian has to be derived
More informationPartitioned Methods for Multifield Problems
Partitioned Methods for Multifield Problems Joachim Rang, 10.5.2017 10.5.2017 Joachim Rang Partitioned Methods for Multifield Problems Seite 1 Contents Blockform of linear iteration schemes Examples 10.5.2017
More informationMathematical optimization
Optimization Mathematical optimization Determine the best solutions to certain mathematically defined problems that are under constrained determine optimality criteria determine the convergence of the
More information