Lecture 9
Approximations of Laplace's Equation, Finite Element Method
Mathématiques appliquées (MATH0504-1)
B. Dewals, C. Geuzaine
V1.2 23/11/2018

Learning objectives of this lecture
- Apply the finite difference method to Laplace's equation
- Understand why iterative linear solvers are useful in this context
- Understand the basics of the finite element method

Outline
1. Approximations of Laplace's equation
2. Finite Element Method

1. Approximations of Laplace's equation

Finite differences for the Laplace equation
Let's consider Laplace's equation in two dimensions: u_xx + u_yy = 0. Approximating both terms with centered differences leads to

  (u_{j+1,k} - 2u_{j,k} + u_{j-1,k}) / (Δx)^2 + (u_{j,k+1} - 2u_{j,k} + u_{j,k-1}) / (Δy)^2 = 0,

with u_{j,k} an approximation of u(jΔx, kΔy).

Finite differences for the Laplace equation
Choosing Δx = Δy, we get

  u_{j,k} = (1/4)(u_{j+1,k} + u_{j-1,k} + u_{j,k+1} + u_{j,k-1}).

Thus u_{j,k} is the average of the values at the four neighboring grid points. The discrete scheme thus has the same mean value property as the Laplace equation!
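To make the mean value property concrete, here is a small check (an illustrative script, not part of the original slides; the grid point, the spacing and the harmonic function u = x^2 - y^2 are arbitrary choices): the five-point average reproduces the value of a harmonic polynomial exactly.

```python
import numpy as np

# Check the discrete mean value property on the harmonic function u(x, y) = x^2 - y^2.
u = lambda x, y: x**2 - y**2
h = 0.1                                     # grid spacing, with dx = dy = h
x0, y0 = 0.3, 0.7                           # an arbitrary grid point
avg = 0.25 * (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h))
print(np.isclose(avg, u(x0, y0)))           # True: the point equals its neighbor average
```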

Finite differences for the Laplace equation
A solution u_{j,k} cannot take its maximum or minimum value at an interior point, unless it is a constant: otherwise it couldn't be the average of its neighbors. Thus the maximum and minimum values can be taken only at the boundary: we thus recover the maximum principle!

Finite differences for the Laplace equation
To solve the Dirichlet problem for u_xx + u_yy = 0 in D with given boundary values, we draw a grid covering D and approximate D by a union of squares (figure: grid covering the domain D).

Finite differences for the Laplace equation
In contrast to time-dependent problems, no marching method is readily available: if we have N interior grid points, we obtain a linear system of N linear equations in N unknowns. The system matrix can become very large, but it is
- non-singular
- very sparse (many entries are zero)

Finite differences for the Laplace equation
For example, multiplying

  u_{j,k} = (1/4)(u_{j+1,k} + u_{j-1,k} + u_{j,k+1} + u_{j,k-1})

by 4 leads to the following matrix system for a grid with 4-by-4 interior points:

  A u = b,   with   A =
    [  T  -I   0   0 ]
    [ -I   T  -I   0 ]
    [  0  -I   T  -I ]
    [  0   0  -I   T ]

where T is the 4-by-4 tridiagonal matrix with 4 on the diagonal and -1 on the sub- and super-diagonals, I is the 4-by-4 identity, u = (u_{1,1}, u_{2,1}, ..., u_{4,1}, u_{1,2}, ..., u_{4,4}) collects the unknowns row by row, and b = (b_{1,1}, ..., b_{4,4}) collects the known boundary values.
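As an illustration (a sketch assuming SciPy is available, not part of the slides), the same 16-by-16 matrix can be assembled from the 4-by-4 blocks described above using Kronecker products:

```python
import numpy as np
import scipy.sparse as sp

n = 4                                          # interior points per direction
main = 4 * np.ones(n)
off = -np.ones(n - 1)
T = sp.diags([off, main, off], [-1, 0, 1])     # tridiagonal block T
S = sp.diags([off, off], [-1, 1])              # couples neighboring grid rows
A = sp.kron(sp.identity(n), T) + sp.kron(S, sp.identity(n))
# A is the sparse, non-singular 16x16 system matrix; the vector b would collect the
# boundary values, and a direct solve would use scipy.sparse.linalg.spsolve(A.tocsr(), b).
```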

Jacobi iteration
Instead of direct solution methods (LU factorization), various iterative methods can be designed. One of them is the Jacobi iteration: starting from an initial guess u^{(0)}_{j,k}, successively compute

  u^{(n+1)}_{j,k} = (1/4) [ u^{(n)}_{j+1,k} + u^{(n)}_{j-1,k} + u^{(n)}_{j,k+1} + u^{(n)}_{j,k-1} ].

Convergence analysis and matrix form: cf. your numerical analysis class from last year.
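A minimal sketch of a Jacobi sweep on a square grid (illustrative only; the grid size, boundary data and fixed number of sweeps are assumptions, not taken from the slides):

```python
import numpy as np

def jacobi_sweep(u):
    """One Jacobi sweep over the interior points; the boundary rows/columns of u
    hold the Dirichlet data and are left untouched."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])
    return v

u = np.zeros((6, 6))          # hypothetical 6x6 grid
u[-1, :] = 1.0                # boundary value 1 on one edge, 0 elsewhere
for n in range(200):          # a fixed number of sweeps, for illustration
    u = jacobi_sweep(u)
```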

Jacobi iteration
PS: the update

  u^{(n+1)}_{j,k} = (1/4) [ u^{(n)}_{j+1,k} + u^{(n)}_{j-1,k} + u^{(n)}_{j,k+1} + u^{(n)}_{j,k-1} ]

is actually the same as solving the 2D diffusion equation v_t = v_xx + v_yy using centered differences for v_xx and v_yy and the forward difference for v_t, with Δx = Δy and Δt = (Δx)^2/4.
In effect, the Jacobi iteration solves the Laplace problem by taking the limit of the discretized diffusion equation solution v(x, t) as t → ∞.
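To spell out that correspondence (a short worked derivation, not on the slide): applying the forward difference in time and centered differences in space to v_t = v_xx + v_yy, and then choosing Δt = (Δx)^2/4, turns the time step into the Jacobi update.

```latex
% Forward Euler in time, centered differences in space, with \Delta x = \Delta y = h:
\[
v^{n+1}_{j,k} = v^{n}_{j,k} + \frac{\Delta t}{h^{2}}
\left( v^{n}_{j+1,k} + v^{n}_{j-1,k} + v^{n}_{j,k+1} + v^{n}_{j,k-1} - 4\,v^{n}_{j,k} \right).
\]
% Choosing \Delta t = h^{2}/4 cancels the v^{n}_{j,k} terms:
\[
v^{n+1}_{j,k} = \tfrac{1}{4}
\left( v^{n}_{j+1,k} + v^{n}_{j-1,k} + v^{n}_{j,k+1} + v^{n}_{j,k-1} \right),
\]
% which is exactly one Jacobi iteration for the discrete Laplace equation.
```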

Gauss-Seidel method
If we compute u^{(n+1)}_{j,k} one row at a time, starting at the bottom row and going left to right, directly using any value already computed, we get the Gauss-Seidel method:

  u^{(n+1)}_{j,k} = (1/4) [ u^{(n)}_{j+1,k} + u^{(n+1)}_{j-1,k} + u^{(n)}_{j,k+1} + u^{(n+1)}_{j,k-1} ].
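For comparison with the Jacobi sketch above, here is the corresponding Gauss-Seidel sweep (same assumed setup); because u is updated in place, newly computed values are used as soon as they are available.

```python
import numpy as np

def gauss_seidel_sweep(u):
    """One Gauss-Seidel sweep, bottom row to top, left to right, updating in place."""
    for k in range(1, u.shape[0] - 1):        # grid rows
        for j in range(1, u.shape[1] - 1):    # grid columns
            u[k, j] = 0.25 * (u[k + 1, j] + u[k - 1, j] + u[k, j + 1] + u[k, j - 1])
    return u

u = np.zeros((6, 6))
u[-1, :] = 1.0
for n in range(100):
    u = gauss_seidel_sweep(u)
```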

What's next?
These are all so-called stationary iterative methods, whose general matrix form for solving Ax = b is

  x_n = x_{n-1} + K^{-1} (b - A x_{n-1}).

If A is decomposed into its diagonal, lower triangular and upper triangular parts D, L and U, we have
- Jacobi: K = D
- Gauss-Seidel: K = D + L
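The splitting form translates almost literally into code. A small NumPy sketch (the 3-by-3 matrix and right-hand side are placeholders; a real computation would reuse the sparse matrix assembled earlier):

```python
import numpy as np

A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])   # toy system matrix
b = np.array([1., 2., 3.])
D = np.diag(np.diag(A))          # diagonal part
L = np.tril(A, k=-1)             # strictly lower triangular part

def stationary_solve(K, x0, sweeps=50):
    """x_n = x_{n-1} + K^{-1}(b - A x_{n-1}); K = D is Jacobi, K = D + L is Gauss-Seidel."""
    x = x0.copy()
    for _ in range(sweeps):
        x = x + np.linalg.solve(K, b - A @ x)
    return x

x_jacobi = stationary_solve(D, np.zeros(3))
x_gauss_seidel = stationary_solve(D + L, np.zeros(3))
```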

What's next?
Next week we will introduce more general, nonstationary methods, whose general form is

  x_n = x_{n-1} + Σ_{k<n} K^{-1} (b - A x_k) a_{k,n-1}.

We will study in more detail the Conjugate Gradient method, which is part of the family of so-called Krylov subspace methods.

2. Finite Element Method

A simple 1D example
Consider the following (ordinary) differential equation

  -(d/dx) [ a(x) du/dx ] = f(x),

where a(x) and f(x) are given functions. Let's try to solve the boundary value problem that consists in solving the above equation on an interval D = [0, L] with homogeneous Dirichlet boundary conditions u(0) = u(L) = 0 on both extremities.

A simple 1D example
Recall the introduction to distributions from last week. Introducing smooth test functions v(x) vanishing at 0 and L, we reformulate the problem as finding u(x) such that

  -∫_0^L (d/dx) [ a(x) du/dx ] v(x) dx = ∫_0^L f(x) v(x) dx

holds for all test functions v(x). This is only valid in the classical sense if a(x) is smooth enough. What if a(x) is only piecewise continuous?

A simple 1D example
We can proceed exactly as last week when we considered differential equations in the sense of distributions! Integrating by parts leads to

  ∫_0^L a(x) (du/dx)(dv/dx) dx - [ a(x) (du/dx) v(x) ]_0^L = ∫_0^L f(x) v(x) dx,

i.e., since v(0) = v(L) = 0,

  ∫_0^L a(x) (du/dx)(dv/dx) dx = ∫_0^L f(x) v(x) dx,

which is the weak form of the problem.

A simple 1D example
The finite element method then consists in
- Discretizing the domain D into several elements (here line segments)
- Approximating the solution u(x) by a linear combination of basis functions (usually piecewise polynomials), with local support on elements
- Choosing a finite number of test functions (usually the same as the basis functions used to approximate u(x))

A simple 1D example
For our problem, we can e.g. choose piecewise linear functions v_i(x) and write:

  u(x) ≈ U_1 v_1(x) + ... + U_N v_N(x).

(figure: the piecewise linear basis function v_i(x) associated with node i)

A simple 1D example
Using the N basis functions also as test functions, the weak form

  ∫_0^L a(x) (du/dx)(dv/dx) dx = ∫_0^L f(x) v(x) dx

becomes

  Σ_{i=1}^N U_i ∫_0^L a(x) (dv_i/dx)(dv_j/dx) dx = ∫_0^L f(x) v_j(x) dx   (j = 1, ..., N).

This is a system of N linear equations in the N unknowns U_1, ..., U_N:

  Σ_{i=1}^N m_ij U_i = f_j   (j = 1, ..., N).

A simple 1D example
The entries of the system matrix and the right-hand side are respectively

  m_ij = ∫_0^L a(x) (dv_i/dx)(dv_j/dx) dx,   f_j = ∫_0^L f(x) v_j(x) dx.

With piecewise linear basis and test functions, the system matrix is very sparse: m_ij = 0 if i and j are not neighboring nodes! (The matrix is actually the same as for a centered finite difference scheme if a(x) is a constant.)
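A minimal 1D assembly sketch under simplifying assumptions that are not prescribed by the slides (uniform mesh, a(x) = 1, midpoint quadrature for the load): each element contributes a 2-by-2 block, and the assembled matrix is tridiagonal, matching the centered finite difference matrix up to the factor 1/h.

```python
import numpy as np

def assemble_1d(N, L, f):
    """Assemble m_ij = int a v_i' v_j' dx and f_j = int f v_j dx for a(x) = 1
    and piecewise linear hat functions on a uniform mesh with N interior nodes."""
    h = L / (N + 1)
    x = np.linspace(0.0, L, N + 2)                   # all nodes, boundaries included
    M, F = np.zeros((N, N)), np.zeros(N)
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness matrix
    for e in range(N + 1):                           # element e spans [x_e, x_{e+1}]
        fe = f(0.5 * (x[e] + x[e + 1])) * h / 2.0    # midpoint rule for the load
        for a_loc, i in enumerate((e - 1, e)):       # local node -> interior unknown
            if 0 <= i < N:
                F[i] += fe
                for b_loc, j in enumerate((e - 1, e)):
                    if 0 <= j < N:
                        M[i, j] += ke[a_loc, b_loc]
    return M, F

M, F = assemble_1d(N=5, L=1.0, f=lambda x: 1.0)
U = np.linalg.solve(M, F)                            # nodal values of the FE solution
```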

Another 1D example
Consider solving the diffusion problem

  u_t = κ u_xx + f(x, t);   u = 0 at x = 0, l;   u = φ(x) at t = 0.

Partition the interval [0, l] into l equal subintervals. We assign the piecewise linear basis function v_j(x) to each of the N = l - 1 interior vertices. Multiplying the diffusion equation by any function v(x) that vanishes on the boundary and integrating by parts, we get:

  (d/dt) ∫_0^l u v dx = -κ ∫_0^l u_x (dv/dx) dx + ∫_0^l f(x, t) v(x) dx.

Another 1D example
In order to use the finite element method, we look for a solution of the form

  u(x, t) = Σ_{i=1}^N U_i(t) v_i(x)

and we only require the weak form to hold for v = v_1, ..., v_N. Then

  Σ_{i=1}^N ( ∫_0^l v_i v_j dx ) dU_i/dt = -κ Σ_{i=1}^N ( ∫_0^l (dv_i/dx)(dv_j/dx) dx ) U_i(t) + ∫_0^l f(x, t) v_j(x) dx.

This is a system of ODEs for U_1(t), ..., U_N(t).

Another 1D example
If we define the entries of the vectors U and F as u_j = U_j, f_j = ∫_0^l f(x, t) v_j(x) dx, and the entries of the matrices K and M as

  k_ij = ∫_0^l v_i v_j dx,   m_ij = ∫_0^l (dv_i/dx)(dv_j/dx) dx,

the system of N ODEs in N unknowns takes the simple vector form

  K dU/dt = -κ M U(t) + F(t).

Another 1D example
M is called the stiffness matrix, K the mass matrix, and F the forcing vector. One can easily show that in this 1D problem with unit length elements, M is the tridiagonal matrix with 2 on the diagonal and -1 on the sub- and super-diagonals, and K is the tridiagonal matrix with 2/3 on the diagonal and 1/6 on the sub- and super-diagonals.
The matrices M and K have two important features: they are sparse and symmetric.

Another 1D example
The system of ODEs can be solved e.g. with the backward Euler method, using the initial condition obtained from u(x, 0) = φ(x), namely U_i(0) = ∫_0^l φ(x) v_i(x) dx, leading to

  K [ U^{(p+1)} - U^{(p)} ] / Δt = -κ M U^{(p+1)} + F(t_{p+1}),

i.e.

  [ K + κ Δt M ] U^{(p+1)} = K U^{(p)} + Δt F(t_{p+1}),

which is solved recursively for U^{(1)}, U^{(2)}, ...
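A sketch of the resulting time-stepping loop, using the unit-element matrices from the previous slide and purely illustrative choices for κ, Δt, the forcing and the initial data (none of these values come from the slides):

```python
import numpy as np

N, kappa, dt = 9, 1.0, 0.1                          # interior nodes, diffusivity, time step
M = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)                   # stiffness matrix
K = (2/3)*np.eye(N) + (1/6)*np.eye(N, k=1) + (1/6)*np.eye(N, k=-1)   # mass matrix

U = np.ones(N)                                      # assumed initial nodal values
F = np.zeros(N)                                     # assumed forcing (f = 0)
A = K + kappa * dt * M                              # backward Euler system matrix
for p in range(100):
    U = np.linalg.solve(A, K @ U + dt * F)          # [K + k dt M] U^(p+1) = K U^(p) + dt F
```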

Generalization to higher dimensions
The generalization is natural: higher dimensional domains are subdivided into simple polygons (in 2D) or polyhedra (in 3D).
In addition to the rigorous handling of discontinuities, the method then also makes it possible to handle curved or irregularly shaped domains, which are not easily handled with finite differences.

Generalization to higher dimensions
Let's consider the Dirichlet problem for Poisson's equation in the plane

  -Δu = f in D,   u = 0 on bdy D.

We proceed as in 1D: restate the problem as finding u such that

  -∫∫_D Δu v dx dy = ∫∫_D f v dx dy

holds for any test function v (vanishing on bdy D). Recalling that Δ := ∇·∇ and integrating by parts, we get:

  ∫∫_D ∇u·∇v dx dy - ∫_{bdy D} (n·∇u) v dl = ∫∫_D f v dx dy.

Generalization to higher dimensions
We then triangulate the domain with N interior nodes (figure: triangulations with N = 1, N = 8 and N = 53 interior nodes) and approximate

  u(x, y) ≈ u_N(x, y) = U_1 v_1(x, y) + ... + U_N v_N(x, y).

Generalization to higher dimensions
The simplest basis functions are again piecewise linear: a basis function v_i(x, y) is equal to 1 at its node i and equal to 0 at all the other nodes. (figure: the basis function v_i(x, y) associated with node V_i)

Generalization to higher dimensions
Using the N basis functions also as test functions, the weak form

  ∫∫_D ∇u·∇v dx dy = ∫∫_D f v dx dy

becomes

  Σ_{i=1}^N U_i ∫∫_D ∇v_i·∇v_j dx dy = ∫∫_D f v_j dx dy   (j = 1, ..., N).

This is a system of N linear equations in the N unknowns U_1, ..., U_N.
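To see what the 2D entries ∫∫ ∇v_i·∇v_j dx dy look like in practice, here is a sketch of the 3-by-3 element stiffness matrix of a single linear (P1) triangle; the vertex coordinates below are just an example. Summing such blocks over all triangles assembles the global system matrix.

```python
import numpy as np

def p1_element_stiffness(xy):
    """Element matrix k_ab = area * grad(v_a).grad(v_b) for a linear triangle with
    vertex coordinates xy (3x2); the gradients of the P1 basis functions are constant."""
    x, y = xy[:, 0], xy[:, 1]
    area = 0.5 * abs((x[1]-x[0])*(y[2]-y[0]) - (x[2]-x[0])*(y[1]-y[0]))
    b = np.array([y[1]-y[2], y[2]-y[0], y[0]-y[1]])     # x-components of 2*area*grad(v_a)
    c = np.array([x[2]-x[1], x[0]-x[2], x[1]-x[0]])     # y-components of 2*area*grad(v_a)
    grads = np.column_stack([b, c]) / (2.0 * area)
    return area * grads @ grads.T

# Example triangle with vertices (0,0), (1,0), (0,1):
ke = p1_element_stiffness(np.array([[0., 0.], [1., 0.], [0., 1.]]))
```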

Generalization to higher dimensions
Denoting

  m_ij = ∫∫_D ∇v_i·∇v_j dx dy   and   f_j = ∫∫_D f v_j dx dy,

the system again takes the form

  Σ_{i=1}^N m_ij U_i = f_j   (j = 1, ..., N).

Also notice that, at a node V_k = (x_k, y_k),

  u_N(x_k, y_k) = U_1 v_1(x_k, y_k) + ... + U_N v_N(x_k, y_k) = U_k,

since v_i(x_k, y_k) equals 0 for i ≠ k and 1 for i = k. Thus the coefficients are precisely the values of the approximate solution at the vertices.

Other generalizations
- High-order basis functions
- Different basis and test functions (Petrov-Galerkin methods)
- Non-Lagrange basis functions (associated e.g. with edges of the mesh)
- Curved meshes
- Adaptive mesh refinement

Take-home messages
- Approximations of Laplace's equation using finite differences and finite elements lead to sparse systems of linear equations
- Finite element methods provide a general mathematical framework to handle more complicated cases: discontinuities, curved or complex geometries, local refinements, ...

Next week
We will start Part 2 of the course, on Complements of linear algebra, by examining the Conjugate Gradient (CG) method. CG is a powerful method to solve linear systems iteratively; it is in particular well-suited for the very large sparse matrices originating from the approximation of PDEs.