ECE539 - Advanced Theory of Semiconductors and Semiconductor Devices. Numerical Methods and Simulation / Umberto Ravaioli


1 General concepts

Introduction to the Numerical Solution of Partial Differential Equations

The mathematical models of most physical processes are based on partial differential equations (PDEs). Frequently, a direct analytic solution is possible only for simple cases or under very restrictive assumptions. For the solution of detailed and realistic models, numerical methods are often the only alternative available. The goal of this chapter is to introduce a number of classical examples, to familiarize the reader with numerical and practical issues in the solution of PDEs.

The main objective of a numerical method is to solve a PDE on a discrete set of points of the solution domain, called discretization. In order to do so, the solution domain is divided into subdomains having the discretization points as vertices. The distance between two adjacent vertices is the mesh size. Time is also subdivided into discrete intervals, and we call timestep the interval between two consecutive times at which the solution is obtained. The PDE is then approximated, or discretized, to obtain a system of algebraic equations, where the unknowns are the solution values at the discretization points. This system of algebraic equations can then be solved on a computer by direct or iterative techniques. It is important to realize that the discretization step replaces the original equation with a new one, and that even an exact solution of the discretized problem will yield an approximate solution of the original PDE, since we have introduced a discretization error.

A wide variety of equations are encountered in the study of physical phenomena. Equations have been classified according to the type of phenomena involved, or according to mathematical features. It is important to attempt a classification, because the choice of a solution approach depends on the structure of the equation. Typical examples of time-dependent physical processes, given here in 1-D form, are

Diffusion             ∂u/∂t = a ∂²u/∂x²
Advection or drift    ∂u/∂t = −a ∂u/∂x
Wave propagation      ∂²u/∂t² = a ∂²u/∂x²

For the solution of time-dependent problems of this type (initial value problems), boundary conditions on the solution domain, as well as the initial condition u(t = 0), are needed. Steady-state phenomena, characterized by zero time derivatives, can be studied by following the time-dependent process from an initial condition until a steady state is reached. Alternatively, the steady-state equations may be solved directly as a boundary value problem, which only requires the knowledge of boundary conditions. By definition, at steady state the memory of a specific initial condition is lost. A typical boundary value problem is Laplace's equation

∇²u = 0

which can also be viewed as the steady state of a diffusion process. Poisson's equation

∇²u = −ρ/ε

generalizes it to a nonzero source term; Laplace's equation is the particular case ρ = 0.

The classical mathematical classification of elementary PDEs stems from the analysis of the general second order PDE

α ∂²u/∂x² + β ∂²u/∂x∂y + γ ∂²u/∂y² + δ ∂u/∂x + ε ∂u/∂y + η u = φ     (1)

The nature of the equation is determined by the coefficients, according to the following classification

a) β² − 4αγ < 0   Elliptic     (example: Poisson's equation)
b) β² − 4αγ = 0   Parabolic    (example: Diffusion equation)
c) β² − 4αγ > 0   Hyperbolic   (example: Wave equation)

Many equations of practical importance may be of a mixed type, or not easily identifiable according to one of the above categories. Nevertheless, the distinction between elliptic, parabolic, and hyperbolic equations provides a very useful guideline for the selection of solution procedures.

There are many approaches which are used for the discretization of the original PDE to obtain a numerical problem. The most important discretization approaches can be classified as

Finite Differences
Finite Elements
Spectral Methods

After discretization, it is necessary to check whether the approximation is appropriate, and whether the discretized model can produce a solution at all when programmed into a computer code. For a successful solution, the numerical scheme must be stable, convergent and consistent.

The scheme is stable if the solution stays bounded during the solution procedure.
The scheme is convergent if the numerical solution tends to the real solution as the mesh size and the timestep tend to zero.
The scheme is consistent if the truncation error tends to zero as the mesh size and the timestep tend to zero.

If a numerical scheme is consistent, then stability is a necessary and sufficient condition to achieve convergence. A scheme which is stable but not consistent may converge to the solution of a different equation (with which it is consistent).

A number of errors are introduced when a PDE is discretized and solved numerically. To summarize, we have

Truncation error - the error introduced by the finite approximation of the derivatives.
Discretization error - the error in the solution due to the replacement of the continuous equation with a discretized one.
Round-off error - the computational error introduced by digital algorithms, due to the finite number of digits used in the numerical representation.

The round-off error is random in nature, and normally increases when the mesh size is decreased. Conversely, the discretization error decreases with the mesh size, since more mesh points (i.e. more resolution) are introduced.
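The discriminant test in (1) is easy to automate. The following is a minimal Python sketch (Python is assumed here; the code is not part of the original notes) that classifies a constant-coefficient second order PDE from the coefficients α, β, γ of its principal part.

```python
def classify_pde(alpha: float, beta: float, gamma: float) -> str:
    """Classify a second-order PDE with principal part
    alpha*u_xx + beta*u_xy + gamma*u_yy using the discriminant of Eq. (1)."""
    disc = beta**2 - 4.0 * alpha * gamma
    if disc < 0:
        return "elliptic"      # e.g. Poisson: alpha = gamma = 1, beta = 0
    elif disc == 0:
        return "parabolic"     # e.g. diffusion: u_t = a u_xx (gamma = beta = 0)
    else:
        return "hyperbolic"    # e.g. wave equation: u_tt = a u_xx

# Examples (y plays the role of time in the diffusion and wave equations):
print(classify_pde(1, 0, 1))    # elliptic
print(classify_pde(1, 0, 0))    # parabolic
print(classify_pde(1, 0, -1))   # hyperbolic
```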

2 1-D Finite Differences

The finite difference approach is the most popular discretization technique, owing to its simplicity. Finite difference approximations of derivatives are obtained by using truncated Taylor series. Consider the following Taylor expansions

u(x + Δx) = u(x) + Δx ∂u/∂x + (Δx²/2) ∂²u/∂x² + (Δx³/6) ∂³u/∂x³ + O(Δx⁴)     (2)

u(x − Δx) = u(x) − Δx ∂u/∂x + (Δx²/2) ∂²u/∂x² − (Δx³/6) ∂³u/∂x³ + O(Δx⁴)     (3)

The first order derivative is given by the following approximations:

From (2): Forward Difference
∂u/∂x = [u(x + Δx) − u(x)] / Δx + O(Δx)     (4)

From (3): Backward Difference
∂u/∂x = [u(x) − u(x − Δx)] / Δx + O(Δx)     (5)

By subtracting (2) − (3): Centered Difference
∂u/∂x = [u(x + Δx) − u(x − Δx)] / (2Δx) + O(Δx²)     (6)

An approximation for the second order derivative is obtained by adding (2) + (3):

∂²u/∂x² = [u(x + Δx) − 2u(x) + u(x − Δx)] / Δx² + O(Δx²)     (7)

The terms O(Δx) and O(Δx²) indicate the remainders which are truncated (truncation error) to obtain the approximate derivatives. The centered difference approximation given by (6) is more precise than the forward difference (4) or the backward difference (5) because the truncation error is of higher order, a consequence of the cancellation of terms of the expansions when taking the difference between (2) and (3). Since the centered difference involves both neighboring points, there is more balanced information on the local behavior of the function.

If a better approximation is needed, one could either reduce the size of the mesh or add more information by including higher order neighbors. It is a good exercise to derive an expression for the second order derivative including five points instead of three. Besides (2) and (3) we also need the expansions for u(x + 2Δx) and u(x − 2Δx):

u(x + 2Δx) = u(x) + 2Δx ∂u/∂x + (4Δx²/2) ∂²u/∂x² + (8Δx³/6) ∂³u/∂x³ + (16Δx⁴/24) ∂⁴u/∂x⁴ + (32Δx⁵/120) ∂⁵u/∂x⁵ + O(Δx⁶)     (8)

u(x − 2Δx) = u(x) − 2Δx ∂u/∂x + (4Δx²/2) ∂²u/∂x² − (8Δx³/6) ∂³u/∂x³ + (16Δx⁴/24) ∂⁴u/∂x⁴ − (32Δx⁵/120) ∂⁵u/∂x⁵ + O(Δx⁶)     (9)

The final result is an approximation of order O(Δx⁴)

∂²u/∂x² ≈ [−u(x + 2Δx) + 16u(x + Δx) − 30u(x) + 16u(x − Δx) − u(x − 2Δx)] / (12Δx²)     (10)
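As a quick check of the orders quoted above, the following Python sketch (not part of the original notes) compares the forward (4), centered (6), three-point (7) and five-point (10) approximations against the exact derivatives of a smooth test function; halving Δx should reduce the errors by factors of roughly 2, 4, 4 and 16 respectively.

```python
import numpy as np

def fd_errors(f, d1, d2, x0, dx):
    """Errors of formulas (4), (6), (7) and (10) at x0 for mesh size dx."""
    u = lambda k: f(x0 + k * dx)                      # u evaluated k mesh points away
    fwd  = (u(1) - u(0)) / dx                         # Eq. (4),  O(dx)
    cen  = (u(1) - u(-1)) / (2 * dx)                  # Eq. (6),  O(dx^2)
    d2_3 = (u(1) - 2 * u(0) + u(-1)) / dx**2          # Eq. (7),  O(dx^2)
    d2_5 = (-u(2) + 16*u(1) - 30*u(0) + 16*u(-1) - u(-2)) / (12 * dx**2)  # Eq. (10), O(dx^4)
    return abs(fwd - d1(x0)), abs(cen - d1(x0)), abs(d2_3 - d2(x0)), abs(d2_5 - d2(x0))

f  = np.sin
d1 = np.cos
d2 = lambda x: -np.sin(x)
for dx in (0.1, 0.05, 0.025):
    print(dx, fd_errors(f, d1, d2, 1.0, dx))
```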

By including more discretization points, it is also possible to improve the approximation for the first order derivative. The following are two further approximations, of order O(Δx²) and O(Δx⁴) respectively

∂u/∂x ≈ [−u(x + 2Δx) + 4u(x + Δx) − 3u(x)] / (2Δx)     (11)

∂u/∂x ≈ [−u(x + 2Δx) + 8u(x + Δx) − 8u(x − Δx) + u(x − 2Δx)] / (12Δx)     (12)

Equation (11) is an improvement of the forward difference, and a symmetrical expression can be derived for the backward difference. Equation (12) is an improvement of the centered difference.

In order to simplify the notation, when convenient we will label the discretization points with appropriate indices. With Δx = h, Eq. (7) becomes

∂²u/∂x² ≈ [u(i + 1) − 2u(i) + u(i − 1)] / h²     (13)

Until now, we have assumed a regular grid with uniform mesh intervals for the derivation of finite difference approximations. In many cases it may be necessary to discretize the equations on an irregular grid. Consider for example a non-uniform 1-D discretization with local mesh intervals h_i = x_{i+1} − x_i. If finite difference approximations are sought at the grid point i, we can use the following Taylor expansions:

u(i + 1) = u(i) + h_i ∂u/∂x + (h_i²/2) ∂²u/∂x² + (h_i³/6) ∂³u/∂x³ + O(h_i⁴)     (14)

u(i − 1) = u(i) − h_{i−1} ∂u/∂x + (h_{i−1}²/2) ∂²u/∂x² − (h_{i−1}³/6) ∂³u/∂x³ + O(h_{i−1}⁴)     (15)

The forward and backward difference approximations for the first order derivative can be assembled as before. An expression which involves both u(i − 1) and u(i + 1) is obtained from (14) − (15) as

∂u/∂x ≈ [u(i + 1) − u(i − 1)] / (h_i + h_{i−1})     (16)

Notice that the terms corresponding to the second order derivatives in the Taylor expansions do not cancel exactly, as they did for the centered difference in (6), due to the nonuniformity of the mesh intervals; therefore we have a larger truncation error. It is often convenient to involve the mid-points of the mesh intervals, indicated as i − 1/2 and i + 1/2, to express first order derivatives, as

∂u/∂x ≈ 2 [u(i + 1/2) − u(i − 1/2)] / (h_i + h_{i−1})     (17)

This result is useful to obtain an approximation for the second order derivative on the nonuniform mesh. We can write

∂²u/∂x² = ∂/∂x (∂u/∂x) ≈ [ ∂u/∂x|_{i+1/2} − ∂u/∂x|_{i−1/2} ] / [(h_i + h_{i−1})/2]     (18)

We can then express ∂u/∂x|_{i+1/2} and ∂u/∂x|_{i−1/2} with centered differences, which are of order O(h²), and obtain

∂²u/∂x² ≈ [ (u_{i+1} − u_i)/h_i − (u_i − u_{i−1})/h_{i−1} ] · [ (h_i + h_{i−1})/2 ]⁻¹     (19)
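A short Python sketch (not part of the original notes) illustrates formula (19) on a nonuniform mesh; comparing the approximation with the exact second derivative of a test function also gives a feeling for the reduced accuracy referred to in exercise 1.2.

```python
import numpy as np

def d2_nonuniform(u, x, i):
    """Three-point approximation (19) of u'' at interior node i of a nonuniform mesh x."""
    h_i   = x[i + 1] - x[i]        # spacing to the right neighbor
    h_im1 = x[i] - x[i - 1]        # spacing to the left neighbor
    flux_r = (u[i + 1] - u[i]) / h_i
    flux_l = (u[i] - u[i - 1]) / h_im1
    return (flux_r - flux_l) / (0.5 * (h_i + h_im1))

# Stretched (nonuniform) grid on [0, 1]
x = np.linspace(0.0, 1.0, 41) ** 1.5
u = np.sin(2 * np.pi * x)
i = 20
approx = d2_nonuniform(u, x, i)
exact = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * x[i])
print(approx, exact)
```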

3 1-D Examples

3.1 Finite Differences for Poisson's equation

In 1-D, Poisson's equation is a simple ordinary differential equation. Assuming a uniform dielectric permittivity ε, we can write

d²V(x)/dx² = −ρ(x)/ε     (20)

where V(x) is the unknown electric potential and ρ(x) is the space dependent charge density. The equation must be solved on a 1-D domain, where the charge density ρ(x) is sampled at the discretization points. In order to specify the problem correctly, we need two boundary conditions, since we are dealing with a second order differential equation. Such boundary conditions could be constant values for the potential at the boundary points x = 0 and x = L, or one value for the electric potential V(x) and one value for the electric field E(x), at either boundary point and in any combination. Remember that the electric field is defined as

E(x) = −∂V(x)/∂x     (21)

It is trivial to show that in a 1-D problem one cannot specify the electric field at both boundaries, since in doing so the information on the potential drop across the structure is lost. By using the finite difference approximation (7), we obtain the discretized 1-D Poisson equation

[V(i − 1) − 2V(i) + V(i + 1)] / Δx² = −ρ(i)/ε     (22)

Let's consider first the case of two potential boundary conditions, e.g. V(1) = V_1 and V(N) = V_N. The domain is divided into N − 1 intervals corresponding to N grid points, of which 2 are boundary points, where the solution is known, and N − 2 are internal points, where the solution is unknown. In order to solve the problem, we need an additional N − 2 equations, besides the boundary conditions. Therefore, Eq. (22) must be written for each internal point. The discretization of the differential equation (20) yields a system of N equations in N unknowns, reduced to N − 2 unknowns by eliminating the boundary conditions. In order to perform computer solutions, the system of equations must be represented in matrix form as

A V̄ = B̄     (23)

For the purpose of illustration, let's consider a uniform mesh with N = 5. The discretized equations, including boundary conditions, are

V(1) = V_1
V(1) − 2V(2) + V(3) = −(Δx²/ε) ρ(2)
V(2) − 2V(3) + V(4) = −(Δx²/ε) ρ(3)
V(3) − 2V(4) + V(5) = −(Δx²/ε) ρ(4)
V(5) = V_N

This system of equations corresponds to the matrix equation

[ 1   0   0   0   0 ]  [ V(1) ]     [ V_1            ]
[ 1  −2   1   0   0 ]  [ V(2) ]     [ −(Δx²/ε) ρ(2)  ]
[ 0   1  −2   1   0 ]  [ V(3) ]  =  [ −(Δx²/ε) ρ(3)  ]     (24)
[ 0   0   1  −2   1 ]  [ V(4) ]     [ −(Δx²/ε) ρ(4)  ]
[ 0   0   0   0   1 ]  [ V(5) ]     [ V_N            ]

Since the boundary conditions correspond to two trivial equations, the variables V(1) and V(5) can be eliminated. The result is the following matrix equation for the internal points

[ −2   1   0 ]  [ V(2) ]     [ −(Δx²/ε) ρ(2) − V_1 ]
[  1  −2   1 ]  [ V(3) ]  =  [ −(Δx²/ε) ρ(3)       ]     (25)
[  0   1  −2 ]  [ V(4) ]     [ −(Δx²/ε) ρ(4) − V_N ]

The discretization of Poisson's equation yields a tridiagonal matrix, where only three diagonals are non-zero. Matrix equations of this type can be solved quite efficiently, using a fairly compact variant of Gaussian elimination (Appendix 1).

In many cases, the knowledge of the electric field is also required. By finite differencing the results for the potential, one can calculate the values of the field on the mesh points using forward, backward, and centered differences, respectively, as

E(i) = −[V(i + 1) − V(i)] / Δx     (26)
E(i) = −[V(i) − V(i − 1)] / Δx     (27)
E(i) = [V(i − 1) − V(i + 1)] / (2Δx)     (28)

We know by now that (28) is accurate to order O(Δx²), as opposed to O(Δx) for (26) and (27). To compute the electric field on a boundary, only (26) or (27) can be used, since the value of the potential beyond the boundary would be necessary to evaluate the centered difference. In case the value of the electric field is needed at the mid-points of the mesh intervals, one can use the second order accurate expressions

E(i + 1/2) = −[V(i + 1) − V(i)] / Δx     (29)
E(i − 1/2) = −[V(i) − V(i − 1)] / Δx     (30)

The 1-D problem can also be solved directly in terms of the electric field. Poisson's equation is substituted by Gauss' law

∂E/∂x = ρ(x)/ε     (31)

This is a first order differential equation, and only one boundary condition can be imposed. The information on the applied potential must be supplied by a condition obtained by integrating (21) over the domain, which yields

∫₀ᴸ E(x) dx = V(1) − V(N)     (32)

In discretized form, (31) becomes

E(i) − E(i − 1) = ρ(i − 1/2) Δx / ε     (33)

where ρ(i − 1/2) now represents the average charge density inside the mesh interval between points i − 1 and i. We are constructing a histogram of the charge contained inside the mesh intervals, and from (33) we obtain the variation of the electric field from mesh interval to mesh interval. If we define

Ẽ(x) = E(x) − E(1)     (34)

we can rewrite (32) as

∫₀ᴸ E(x) dx = ∫₀ᴸ Ẽ(x) dx + E(1) L = V(1) − V(N)     (35)

On the mesh points we obtain the discretized equation

Ẽ(i) − Ẽ(i − 1) = ρ(i − 1/2) Δx / ε     (36)

with the boundary condition Ẽ(1) = 0. No matrix inversion is needed, because (36) is an explicit recursion. Once the distribution of Ẽ(x) is known, E(1) can be obtained from (35) by performing a numerical integration. Using Simpson's formula for the integration (there are of course many other suitable approaches) we have, on a uniform mesh,

E(1) = [V(1) − V(N)]/L − (Δx/3L) [ Ẽ(1) + Ẽ(N) + 2 Σ_{odd i} Ẽ(i) + 4 Σ_{even i} Ẽ(i) ]     (37)

where the summations are extended to the internal mesh points. The distribution of the electric field on the mesh points is then given by

E(i) = Ẽ(i) + E(1)     (38)

Note that the term [V(1) − V(N)]/L in (37) corresponds to the field solution when the charge is ρ = 0 throughout the domain. An extension of this method to a nonuniform mesh simply requires the use of a different integration procedure in (37).

Going back to Poisson's equation, we mentioned above that one boundary condition for the potential and one boundary condition for the electric field can be specified at either end of the domain. Let's assume that we know the conditions E(1) = 0 and V(N) = V_N. We need to choose an appropriate way to discretize (21) at x = 0, for instance

E(x) = −∂V(x)/∂x ≈ −[V(2) − V(1)] / Δx     (39)

E(x) = −∂V(x)/∂x ≈ −[V(2) − V(0)] / (2Δx)     (40)

From (39), the boundary condition E(1) = 0 simply yields V(1) = V(2). In the example with N = 5, we obtain now the matrix equation

[ 1  −1   0   0   0 ]  [ V(1) ]     [ 0              ]
[ 1  −2   1   0   0 ]  [ V(2) ]     [ −(Δx²/ε) ρ(2)  ]
[ 0   1  −2   1   0 ]  [ V(3) ]  =  [ −(Δx²/ε) ρ(3)  ]     (41)
[ 0   0   1  −2   1 ]  [ V(4) ]     [ −(Δx²/ε) ρ(4)  ]
[ 0   0   0   0   1 ]  [ V(5) ]     [ V_N            ]
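Returning to the two-potential (Dirichlet) case, the tridiagonal system (25) is small enough to solve with any linear-algebra routine. The following Python sketch (not part of the original notes) builds the system for arbitrary N and solves it with the compact Gaussian elimination (Thomas algorithm) mentioned above.

```python
import numpy as np

def poisson_1d(rho, dx, V_left, V_right, eps=1.0):
    """Solve d2V/dx2 = -rho/eps with Dirichlet BCs using the 3-point scheme (22).
    rho holds the charge density at all N mesh points; returns V at all N points."""
    N = len(rho)
    n = N - 2                                   # number of internal unknowns
    # Tridiagonal coefficients of Eq. (25): lower, main, upper diagonals
    a = np.ones(n); b = np.full(n, -2.0); c = np.ones(n)
    d = -(dx**2 / eps) * np.asarray(rho[1:-1], dtype=float)
    d[0]  -= V_left                             # move known boundary values to the RHS
    d[-1] -= V_right
    # Thomas algorithm: forward elimination ...
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # ... and back substitution
    V = np.empty(n)
    V[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        V[i] = (d[i] - c[i] * V[i + 1]) / b[i]
    return np.concatenate(([V_left], V, [V_right]))

# Example: uniform charge density on [0, 1] with grounded boundaries
x = np.linspace(0.0, 1.0, 51)
V = poisson_1d(np.ones_like(x), x[1] - x[0], 0.0, 0.0)
```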

If the centered difference (40) is chosen, the external point x = −Δx, with label i = 0, is also involved in the boundary condition, and we have V(0) = V(2). The potential V(1) at x = 0 is now solved for, and we need to add an equation for the boundary point

[ 1   0  −1   0   0   0 ]  [ V(0) ]     [ 0              ]
[ 1  −2   1   0   0   0 ]  [ V(1) ]     [ −(Δx²/ε) ρ(1)  ]
[ 0   1  −2   1   0   0 ]  [ V(2) ]     [ −(Δx²/ε) ρ(2)  ]
[ 0   0   1  −2   1   0 ]  [ V(3) ]  =  [ −(Δx²/ε) ρ(3)  ]     (42)
[ 0   0   0   1  −2   1 ]  [ V(4) ]     [ −(Δx²/ε) ρ(4)  ]
[ 0   0   0   0   0   1 ]  [ V(5) ]     [ V_N            ]

3.2 Finite Differences for 1-D Parabolic Equations

We consider here the 1-D diffusion equation

∂u/∂t = a ∂²u/∂x²     (43)

which is discretized in space and time with uniform mesh interval Δx and timestep Δt. A simple approach is to discretize the time derivative with a forward difference as

∂u/∂t ≈ [u(i; n + 1) − u(i; n)] / Δt     (44)

The solution is known at time t_n and a new solution must be found at time t_{n+1} = t_n + Δt. Starting from the initial condition at t_1 = 0, the time evolution is constructed after each timestep either explicitly, by direct evaluation of an expression obtained from the discretized equation, or implicitly, when the solution of a system of equations is necessary. An explicit approach is readily obtained by substituting the space derivative with the 3-point finite difference evaluated at the current timestep. The algorithm, written for a generic point x_i of the discretization, is

u(i; n + 1) = u(i; n) + (a Δt / Δx²) [ u(i − 1; n) − 2u(i; n) + u(i + 1; n) ]     (45)

A fairly general implicit scheme is obtained by discretizing the space derivative with a weighted average of the finite difference approximations at t_n and t_{n+1}

u(i; n + 1) − λ (a Δt / Δx²) [ u(i − 1; n + 1) − 2u(i; n + 1) + u(i + 1; n + 1) ] =
      u(i; n) + (1 − λ) (a Δt / Δx²) [ u(i − 1; n) − 2u(i; n) + u(i + 1; n) ]     (46)

When λ = 1, the scheme is fully implicit. The classic Crank-Nicolson scheme is obtained when λ = 0.5, and when λ = 0 the explicit scheme in (45) is recovered. In order to cast the algorithm in matrix form, all the terms evaluated at time t_{n+1} are positioned on the left hand side of (46), while the terms evaluated at time t_n are on the right hand side. Let's consider again a simple example with N = 5 mesh points and assume that the boundary conditions u(1; n) = u(1; n+1) = u_1 and u(5; n) = u(5; n+1) = u_5 are known. The matrix equation for the discretized diffusion equation is then

[ 1      0        0        0        0   ]  [ u(1; n+1) ]     [ u_1        ]
[ −λr    1+2λr    −λr      0        0   ]  [ u(2; n+1) ]     [ RHS(2; n)  ]
[ 0      −λr      1+2λr    −λr      0   ]  [ u(3; n+1) ]  =  [ RHS(3; n)  ]     (47)
[ 0      0        −λr      1+2λr    −λr ]  [ u(4; n+1) ]     [ RHS(4; n)  ]
[ 0      0        0        0        1   ]  [ u(5; n+1) ]     [ u_5        ]

where RHS(i; n) represents the right hand side of (46) and r = a Δt / Δx².

Other schemes originate from an evaluation of the time derivative by a centered difference approximation, in the attempt to reduce the truncation error

u(i; n + 1) = u(i; n − 1) + (2a Δt / Δx²) [ u(i − 1; n) − 2u(i; n) + u(i + 1; n) ]     (48)

This scheme involves three time levels at each step of the iteration, but it is explicit and is potentially much more accurate than the schemes above because of the more precise evaluation of the time derivative. Unfortunately, a better truncation error is not a guarantee of the overall stability of the scheme. A variant of (48) substitutes u(i; n) with its time average [ u(i; n − 1) + u(i; n + 1) ]/2:

u(i; n + 1) = u(i; n − 1) + (2a Δt / Δx²) [ u(i − 1; n) − u(i; n − 1) − u(i; n + 1) + u(i + 1; n) ]     (49)

In the next section, the limits of applicability of these discretization schemes are analysed, in order to determine their validity and usefulness. It should be noted that three-level schemes need the knowledge of the solution at two consecutive times t_{n−1} and t_n to obtain the solution at t_{n+1}. More memory is required, as well as a special starting procedure.

3.3 Stability analysis

A numerical scheme is unstable when, during the solution procedure, the result becomes unbounded, eventually reaching overflow, independently of the choice of the computing system. The phenomenon of instability arises when the numerical procedure itself tends to amplify errors (e.g. round-off error, which is unavoidable on any digital computer) to the point that the amplified error obliterates the solution itself. This may happen even after derivatives are replaced by approximations which are more accurate from a numerical point of view. We examine here an initial value problem of the type

∂u/∂t = L u     (50)

where L is a differential space operator. The equation is solved at discrete times separated by a timestep Δt. Discretization of (50) will yield a matrix equation of the type

A ū(t = (n + 1)Δt) = B ū(t = nΔt)     (51)

The solution at a given time is related to the solution at a previous timestep through a transformation of the type

ū(t = (n + 1)Δt) = T ū(t = nΔt)     (52)

where ū is the vector of solutions obtained on the discretization points, and T = A⁻¹B is a transformation matrix which governs the time evolution, according to the discretized numerical scheme. Starting from the initial condition, the iteration evolves in the following way

ū(Δt)  = T ū(0)
ū(2Δt) = T ū(Δt)  = T² ū(0)
ū(3Δt) = T ū(2Δt) = T³ ū(0)
...
ū((n + 1)Δt) = T ū(nΔt) = T^{n+1} ū(0)     (53)

The eigenvalues λ_i and the eigenvectors v̄_i of the matrix T are related through

T v̄_i = λ_i v̄_i     (54)

Note that there are as many eigenvalues and eigenvectors as internal mesh points in the discretized domain. The solution at a given timestep can be expressed as a linear combination of the eigenvectors. At t = 0 we have

ū(0) = Σ_i c_i v̄_i     (55)

with i = 1, ..., N, N being the total number of internal mesh points. The last equation in (53) can be rewritten as

ū((n + 1)Δt) = T^{n+1} ū(0) = Σ_i c_i (λ_i)^{n+1} v̄_i     (56)

The solution stays bounded if the modulus of the largest eigenvalue |λ_i|_max (spectral radius) of the matrix T satisfies the condition |λ_i|_max ≤ 1.

In many cases of practical interest, stability analysis can be performed with a Fourier method. Fourier analysis is performed by expanding the solution in space Fourier components over the discretization. Then we can just study the evolution of a generic Fourier component (mode) between timesteps, since modes are orthogonal and independent of each other. A Fourier mode is expressed as

F_Ω(i; n) = W(n) exp(jΩ x_i)     (57)

where j is the imaginary unit, Ω is the Fourier mode wavenumber, and W(n) is the mode amplitude. The indices i and n indicate mesh point and timestep, respectively. Stability is analyzed by replacing the variable u with the generic Fourier component in the discretized scheme. The discretization is stable if the mode amplitude remains bounded at any iteration step. This is verified if

| W(n + 1) / W(n) | ≤ 1 + O(Δt)     (58)

The Fourier analysis does not include the effect of boundaries, which may affect the actual stability behavior. Boundary conditions can be introduced as additional equations when the eigenvalues of the time-evolution matrix are analyzed.
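Before analyzing stability in detail, it is useful to see the two-level schemes of Section 3.2 in action. Here is a minimal Python sketch (not part of the original notes) of the weighted scheme (46) on a uniform mesh with fixed-value boundary conditions; λ = 0 gives the explicit scheme (45), λ = 0.5 the Crank-Nicolson scheme, and λ = 1 the fully implicit scheme.

```python
import numpy as np

def diffusion_step(u, r, lam):
    """Advance the 1-D diffusion solution by one timestep with scheme (46).
    u: solution at time level n (boundary values held fixed); r = a*dt/dx**2."""
    N = len(u)
    A = np.zeros((N, N))
    b = np.zeros(N)
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary rows, as in (47)
    b[0], b[-1] = u[0], u[-1]
    for i in range(1, N - 1):
        A[i, i - 1] = A[i, i + 1] = -lam * r
        A[i, i] = 1.0 + 2.0 * lam * r
        b[i] = u[i] + (1.0 - lam) * r * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    return np.linalg.solve(A, b)       # a tridiagonal solver would be more efficient

# Example: decay of a sine profile, Crank-Nicolson (lam = 0.5), a = 1, dt = 1e-3
x = np.linspace(0.0, 1.0, 41)
u = np.sin(np.pi * x)
r = 1.0 * 1e-3 / (x[1] - x[0]) ** 2
for _ in range(100):
    u = diffusion_step(u, r, 0.5)
```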

3.4 Stability analysis for the diffusion equation

(a) Explicit schemes

Consider (45) and substitute u(i; n) with the Fourier mode F_Ω(i; n)

W(n + 1) exp(jΩ x_i) = W(n) exp(jΩ x_i) + (a Δt / Δx²) W(n) [ exp(jΩ x_{i+1}) − 2 exp(jΩ x_i) + exp(jΩ x_{i−1}) ]     (59)

Since exp(jΩ x_{i±1}) = exp(jΩ x_i) exp(±jΩ Δx), (59) becomes

W(n + 1) / W(n) = 1 + (a Δt / Δx²) [ exp(jΩ Δx) − 2 + exp(−jΩ Δx) ]     (60)

W(n + 1) / W(n) = 1 − (4a Δt / Δx²) sin²(Ω Δx / 2)     (61)

For stability, we must have |W(n + 1)/W(n)| ≤ 1. This inequality is satisfied only when

(4a Δt / Δx²) sin²(Ω Δx / 2) ≤ 2     (62)

A physical diffusion constant is a > 0, therefore the left hand side of (62) is always positive. Since 0 ≤ sin²(Ω Δx / 2) ≤ 1, we can restrict ourselves to the worst case sin²(Ω Δx / 2) = 1. The stability condition becomes

4a Δt / Δx² ≤ 2     (63)

which gives the condition for the timestep

Δt ≤ Δx² / (2a)     (64)

This result indicates that once the mesh size Δx is fixed, there is a limit in the choice of the timestep Δt. The timestep can be increased beyond this limit only if the mesh size is also increased, leading to a reduced space resolution.

The same approach can be used to analyze the stability of the three-level scheme (48), but the analysis is slightly more difficult now. For the generic Fourier mode we obtain an expression analogous to (61)

W(n + 1) / W(n) = W(n − 1) / W(n) − (8a Δt / Δx²) sin²(Ω Δx / 2)     (65)

If we define W(n + 1)/W(n) = W(n)/W(n − 1) = g, we obtain the quadratic equation

g² + (8a Δt / Δx²) sin²(Ω Δx / 2) g − 1 = 0     (66)

with solution

g = −(4a Δt / Δx²) sin²(Ω Δx / 2) ± [ 1 + (16a² Δt² / Δx⁴) sin⁴(Ω Δx / 2) ]^{1/2}     (67)

It is easy to see that the solution with the negative sign in front of the square root leads to |g| > 1 for any choice of finite Δt and Δx. Therefore, the scheme (48) is unconditionally unstable! The use of a more accurate approximation for the time derivative does not produce a better difference approximation for the whole equation. The modified scheme in (49) is instead always stable. However, an analysis of the local truncation error reveals that the scheme has a consistency problem.

(b) Implicit schemes

The Fourier stability method applied to (46) yields the equation

W(n + 1) { 1 − (a Δt / Δx²) λ [ exp(jΩ Δx) − 2 + exp(−jΩ Δx) ] } =
      W(n) { 1 + (a Δt / Δx²) (1 − λ) [ exp(jΩ Δx) − 2 + exp(−jΩ Δx) ] }     (68)

from which we have

W(n + 1) / W(n) = [ 1 − 4 (a Δt / Δx²)(1 − λ) sin²(Ω Δx / 2) ] / [ 1 + 4 (a Δt / Δx²) λ sin²(Ω Δx / 2) ]     (69)

and for stability

| W(n + 1) / W(n) | ≤ 1     (70)

which can be rewritten as

4 (a Δt / Δx²) sin²(Ω Δx / 2) / [ 1 + 4 (a Δt / Δx²) λ sin²(Ω Δx / 2) ] ≤ 2     (71)

When λ = 0 we recover the stability condition found previously for the explicit scheme (45). If we set sin²(Ω Δx / 2) = 1, we have the stability condition

λ ≥ 1/2 − Δx² / (4a Δt)     (72)

From this result, we can see that both the Crank-Nicolson method (λ = 0.5) and the fully implicit method (λ = 1) are unconditionally stable and any timestep Δt can be chosen. Of course, stability is no guarantee that the physical evolution will be properly simulated, and the timestep should be selected so that it does not exceed characteristic time constants typical of the process under investigation.

3.5 Explicit Finite Difference Methods for the Advection Equation

The advection equation has the form

∂u/∂t + ∂(v u)/∂x = 0     (73)

where v has the dimensions of a velocity and may be space-dependent. The advection equation is hyperbolic in nature and describes phenomena of propagation, which would be solved in a natural way by obtaining the trajectories of constant solution in the space-time domain. The locus of constant solution is called a characteristic. Characteristics methods for the solution are in general more expensive than finite difference methods. In particular, explicit difference methods are extremely simple to code and fast, although one has to make sure that the physics of the problem is well represented.
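Before turning to the advection schemes, the diffusion stability results (64) and (72) are easy to verify numerically. The Python sketch below (not part of the original notes) evaluates the amplification factor (69) over all wavenumbers and reports the worst case for a few choices of λ and r = aΔt/Δx².

```python
import numpy as np

def worst_amplification(r, lam, n_modes=200):
    """Largest |W(n+1)/W(n)| from Eq. (69) over wavenumbers 0 <= Omega*dx <= pi."""
    s = np.sin(np.linspace(0.0, np.pi, n_modes) / 2.0) ** 2   # sin^2(Omega*dx/2)
    g = (1.0 - 4.0 * r * (1.0 - lam) * s) / (1.0 + 4.0 * r * lam * s)
    return np.max(np.abs(g))

for lam in (0.0, 0.5, 1.0):
    for r in (0.4, 0.5, 0.6, 2.0):
        stable = worst_amplification(r, lam) <= 1.0 + 1e-12
        print(f"lambda={lam:3.1f}  r={r:3.1f}  stable={stable}")
# The explicit case (lambda = 0) is stable only for r <= 0.5, i.e. dt <= dx^2/(2a),
# while lambda = 0.5 and lambda = 1 remain stable for any r, as predicted by (72).
```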

We can try to discretize (73) by simply using a forward difference for the time derivative and a centered difference for the space derivative, as

u(i; n + 1) = u(i; n) − (Δt / 2Δx) [ v(i + 1) u(i + 1; n) − v(i − 1) u(i − 1; n) ]     (74)

Unfortunately, stability analysis shows that this scheme is unconditionally unstable. Some improvement is obtained by substituting u(i; n) on the right hand side with a space average

u(i; n + 1) = [ u(i − 1; n) + u(i + 1; n) ] / 2 − (Δt / 2Δx) [ v(i + 1) u(i + 1; n) − v(i − 1) u(i − 1; n) ]     (75)

The scheme is now conditionally stable. Assuming for simplicity a constant velocity v, the stability condition is Δt ≤ Δx / |v|. This result is usually called the C.F.L. condition (from the mathematicians Courant, Friedrichs and Lewy). Since Δx/Δt has the dimensions of a velocity, the choice of discretization mesh imposes a characteristic velocity Δx/Δt which cannot be exceeded in order to obtain physical results.

There are some subtle details which are not evident from a simple stability analysis. Let's rewrite the previous schemes, adding u(i; n − 1)/2 − u(i; n − 1)/2 = 0 to the left hand side of (74), and u(i; n − 1)/2 − u(i; n − 1)/2 + u(i; n) − u(i; n) = 0 to the left hand side of (75). Using a constant velocity, (74) becomes

[ u(i; n + 1) − u(i; n − 1) ] / 2 + [ u(i; n + 1) − 2u(i; n) + u(i; n − 1) ] / 2 = −(v Δt / 2Δx) [ u(i + 1; n) − u(i − 1; n) ]     (76)

which is a second order discretization of the equation

∂u/∂t + (Δt/2) ∂²u/∂t² + v ∂u/∂x = 0     (77)

and we obtain for (75)

[ u(i; n + 1) − u(i; n − 1) ] / 2 − [ u(i + 1; n) − 2u(i; n) + u(i − 1; n) ] / 2 + [ u(i; n + 1) − 2u(i; n) + u(i; n − 1) ] / 2 =
      −(v Δt / 2Δx) [ u(i + 1; n) − u(i − 1; n) ]     (78)

which is in turn a second order discretization of the equation

∂u/∂t + (Δt/2) ∂²u/∂t² − (Δx²/2Δt) ∂²u/∂x² + v ∂u/∂x = 0     (79)

The stabilization in (75) is obtained by the addition of a diffusive term which is able to quench the growth of instabilities as the solution evolves in time. The introduction of this spurious diffusion causes smoothing, due to the coupling introduced between space Fourier components (modes) of the solution, which modifies the shape of the traveling solution itself. In addition, the discretization may

cause the modes to travel at different velocities (dispersion), also causing a spurious deformation of the solution. When v = Δx/Δt, dispersion is eliminated, but obviously this holds only in the case of uniform velocity.

A stable discretization of (73) is obtained with first order space differences, using a backward difference when the velocity is positive and a forward difference when the velocity is negative

u(i; n + 1) = u(i; n) − (Δt/Δx) [ v(i) u(i; n) − v(i − 1) u(i − 1; n) ]         v > 0
u(i; n + 1) = u(i; n) − (Δt/Δx) [ v(i + 1) u(i + 1; n) − v(i) u(i; n) ]         v < 0     (80)

This procedure is called upwinding, since the difference direction is adjusted according to the direction of the flow. A second order scheme can be obtained by using a centered time difference, which leads to a three-level time scheme, called the leapfrog method

u(i; n + 1) = u(i; n − 1) − (Δt/Δx) [ v(i + 1) u(i + 1; n) − v(i − 1) u(i − 1; n) ]     (81)

This method is stable if the C.F.L. condition is satisfied. Note that, as shown earlier, a similar method applied to the diffusion equation yields an unstable scheme. A second order scheme can also be obtained by considering the Taylor expansion in time

u(x, t + Δt) = u(x, t) + Δt ∂u/∂t + (Δt²/2) ∂²u/∂t² + O(Δt³)     (82)

The time derivatives can be expressed from the advection equation itself as

∂u/∂t = −∂(v u)/∂x = −v(x) ∂u/∂x − u(x) ∂v/∂x     (83)

∂²u/∂t² = ∂/∂x [ v(x) ∂(v u)/∂x ] ≈ v(x) ∂/∂x [ v(x) ∂u/∂x ]     (84)

where we have assumed that the velocity is a slowly varying function of space and that we can neglect the term u(x) ∂v(x)/∂x. The second order Lax-Wendroff scheme is then obtained by discretizing (82)

u(i; n + 1) = u(i; n) − (Δt / 2Δx) [ v(i + 1) u(i + 1; n) − v(i − 1) u(i − 1; n) ]
      + (v(i) Δt² / 2Δx²) { v(i + 1/2) [ u(i + 1; n) − u(i; n) ] − v(i − 1/2) [ u(i; n) − u(i − 1; n) ] }     (85)

The last term on the right hand side of (85) corrects the instability of the scheme, and is called artificial viscosity. Note that the velocity must be known at the midpoints of the mesh intervals.
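For a constant velocity the upwind scheme (80) and the Lax-Wendroff scheme (85) reduce to particularly simple updates. The following Python sketch (not part of the original notes; it assumes periodic boundaries for simplicity) advances a profile with either scheme while respecting the C.F.L. limit Δt ≤ Δx/|v|.

```python
import numpy as np

def advect(u, v, dx, dt, steps, scheme="upwind"):
    """Advance du/dt + v du/dx = 0 (constant v > 0, periodic domain)."""
    c = v * dt / dx                       # Courant number, must satisfy |c| <= 1
    assert abs(c) <= 1.0, "C.F.L. condition violated"
    for _ in range(steps):
        up = np.roll(u, -1)               # u(i+1)
        um = np.roll(u, +1)               # u(i-1)
        if scheme == "upwind":            # Eq. (80), v > 0: backward difference
            u = u - c * (u - um)
        else:                             # Lax-Wendroff, Eq. (85) with uniform v
            u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
    return u

# Example: a Gaussian pulse advected once around a periodic box
x = np.linspace(0.0, 1.0, 201, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)
u1 = advect(u0.copy(), v=1.0, dx=x[1] - x[0], dt=0.004, steps=250)
```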

4 2-D Finite Differences

In the case of a 1-D domain, there are only two possible discretization approaches: uniform or nonuniform mesh intervals. A 2-D domain can instead be decomposed by using any combination of polygons having the discretization nodes as vertices. Finite difference methods are very convenient in conjunction with rectangular grids, although finite difference approximations of differential operators can be written for general meshes. Finite difference discretizations for a rectangular grid are usually assembled by considering the four neighbors of the current mesh point, although other schemes are possible.

It is relatively easy to develop 2-D discretizations on a general rectangular grid, starting from the results for the 1-D case. First order derivatives evaluated along the cartesian directions x and y are differenced in the same way, and the variables are now denoted by two indices. A finite difference approximation for the 2-D laplacian is easily assembled on a uniform (square) mesh, where Δx = Δy = Δ, by combining two expressions like (7) written for the x and y directions

∇²u = ∂²u/∂x² + ∂²u/∂y²
    = [ u(i + 1, j) − 2u(i, j) + u(i − 1, j) ] / Δ² + [ u(i, j + 1) − 2u(i, j) + u(i, j − 1) ] / Δ² + O(Δ²)
    = [ u(i + 1, j) + u(i − 1, j) − 4u(i, j) + u(i, j + 1) + u(i, j − 1) ] / Δ² + O(Δ²)     (86)

This finite difference approximation involves the current point and the four nearest neighbors in the square grid (5-point formula). To improve accuracy, it is possible to involve also the second nearest neighbors (9-point formula) using the 1-D result in (10)

∂²u/∂x² + ∂²u/∂y² = [ −u_{i+2,j} − u_{i,j+2} + 16u_{i+1,j} + 16u_{i,j+1} − 60u_{i,j} + 16u_{i−1,j} + 16u_{i,j−1} − u_{i−2,j} − u_{i,j−2} ] / (12Δ²) + O(Δ⁴)     (87)

On a nonuniform rectangular grid, we can use (19) with Δx(i) = h_i and Δy(j) = k_j to obtain a 5-point formula which is accurate to first order

∂²u/∂x² + ∂²u/∂y² ≈ [ (u_{i+1,j} − u_{i,j})/h_i − (u_{i,j} − u_{i−1,j})/h_{i−1} ] / [ (h_i + h_{i−1})/2 ]
                  + [ (u_{i,j+1} − u_{i,j})/k_j − (u_{i,j} − u_{i,j−1})/k_{j−1} ] / [ (k_j + k_{j−1})/2 ]     (88)

We can obtain the result in (88) with a more general approach called box integration. Consider the box around the generic mesh point (x_i, y_j) obtained by bisecting the mesh lines connecting the point to its neighbors. The differential operator is integrated over the volume of the box, which in 2-D is just the box area

A_box = [ (h_{i−1} + h_i)/2 ] · [ (k_{j−1} + k_j)/2 ]     (89)

The integral of the Laplacian over the domain D of the box can be converted into an integral over the boundary ∂D of the domain by using Green's theorem

∫∫_D ∇²u(x, y) dx dy = ∮_{∂D} ( ∂u/∂x dy − ∂u/∂y dx )
  = ∫_{y_j − k_{j−1}/2}^{y_j + k_j/2} ∂u/∂x (x_i + h_i/2, y) dy − ∫_{y_j − k_{j−1}/2}^{y_j + k_j/2} ∂u/∂x (x_i − h_{i−1}/2, y) dy
  + ∫_{x_i − h_{i−1}/2}^{x_i + h_i/2} ∂u/∂y (x, y_j + k_j/2) dx − ∫_{x_i − h_{i−1}/2}^{x_i + h_i/2} ∂u/∂y (x, y_j − k_{j−1}/2) dx     (90)

The derivatives at the mesh midpoints, which appear inside the line integrals in (90), can be approximated by using centered finite differences involving the neighboring mesh points. These values are used as an approximation of the average integrands for the evaluation of the line integrals, which are obtained by multiplying the average value by the corresponding length of the integration path, for instance:

∫_{y_j − k_{j−1}/2}^{y_j + k_j/2} ∂u/∂x (x_i + h_i/2, y) dy ≈ [ u(i + 1, j) − u(i, j) ] / h_i · (k_{j−1} + k_j)/2     (91)

We then evaluate

∫∫_D ∇²u(x, y) dx dy ≈ ∇²u(i, j) · A_box = ∇²u(i, j) · (h_{i−1} + h_i)/2 · (k_{j−1} + k_j)/2     (92)

where we have assumed that the value of the laplacian on the mesh point is a good approximation of the average value inside the box, and we obtain finally the following finite difference approximation for the laplacian

∇²u(i, j) ≈ { [ u(i + 1, j) − u(i, j) ] / h_i · (k_{j−1} + k_j)/2 + [ u(i − 1, j) − u(i, j) ] / h_{i−1} · (k_{j−1} + k_j)/2
            + [ u(i, j + 1) − u(i, j) ] / k_j · (h_{i−1} + h_i)/2 + [ u(i, j − 1) − u(i, j) ] / k_{j−1} · (h_{i−1} + h_i)/2 }
            / [ (h_{i−1} + h_i)/2 · (k_{j−1} + k_j)/2 ]     (93)

It can be easily verified that the finite difference approximations (88) and (93) are identical. The box integration method provides a rigorous approximation procedure which is very useful for the discretization of complicated operators. The box integration is not limited to finite differences. Note that, up to (90), no approximation has been made and any suitable discretization method can be used from that point on.

4.1 2-D Finite Differences for Poisson's equation

In the case of constant dielectric permittivity, the 2-D Poisson's equation is

∂²V/∂x² + ∂²V/∂y² = −ρ(x, y)/ε     (94)

The 5-point finite difference discretization of the 2-D Poisson's equation is given directly by (86) for a uniform square mesh

[ V(i + 1, j) + V(i − 1, j) − 4V(i, j) + V(i, j + 1) + V(i, j − 1) ] / Δ² = −ρ(i, j)/ε     (95)

or by (88) for a nonuniform rectangular mesh

[ (V_{i+1,j} − V_{i,j})/h_i − (V_{i,j} − V_{i−1,j})/h_{i−1} ] / [ (h_i + h_{i−1})/2 ]
  + [ (V_{i,j+1} − V_{i,j})/k_j − (V_{i,j} − V_{i,j−1})/k_{j−1} ] / [ (k_j + k_{j−1})/2 ] = −ρ(i, j)/ε     (96)

The indexing in (95) and (96) assumes a natural mesh point numeration according to columns (i) and rows (j) of a rectangular grid. The scheme is often called lexicographic because the indexing follows the usual pattern of writing. An equation like (95) or (96) is written for each internal node of the 2-D mesh. On the boundary points we either specify a fixed potential (Dirichlet boundary condition) or the normal component of the electric field (Neumann boundary condition). A Dirichlet boundary condition must be imposed at least at one boundary point for the problem to be fully specified.

The ordering of the mesh points affects the structure of the discretized matrix. For illustration, let's consider a simple rectangular domain discretized with a uniform square mesh, so that Δx = Δy = Δ, with i = 1, 2, 3, 4, 5 and j = 1, 2, 3, 4. For simplicity, fixed potential boundary conditions are imposed on all boundary points. Following the lexicographic ordering, we have equations like:

Row 1:
V(2, 1) = V_21
V(3, 1) = V_31
V(4, 1) = V_41

Row 2:
V(1, 2) = V_12
V(2, 1) + V(1, 2) − 4V(2, 2) + V(3, 2) + V(2, 3) = −Δ² ρ(2, 2)/ε
V(3, 1) + V(2, 2) − 4V(3, 2) + V(4, 2) + V(3, 3) = −Δ² ρ(3, 2)/ε
V(4, 1) + V(3, 2) − 4V(4, 2) + V(5, 2) + V(4, 3) = −Δ² ρ(4, 2)/ε
V(5, 2) = V_52

Row 3:

V(1, 3) = V_13
V(2, 2) + V(1, 3) − 4V(2, 3) + V(3, 3) + V(2, 4) = −Δ² ρ(2, 3)/ε
V(3, 2) + V(2, 3) − 4V(3, 3) + V(4, 3) + V(3, 4) = −Δ² ρ(3, 3)/ε
V(4, 2) + V(3, 3) − 4V(4, 3) + V(5, 3) + V(4, 4) = −Δ² ρ(4, 3)/ε
V(5, 3) = V_53

Row 4:
V(2, 4) = V_24
V(3, 4) = V_34
V(4, 4) = V_44

Notice that the boundary conditions for the corner points V(1, 1), V(5, 1), V(1, 4) and V(5, 4) have not been included, because they do not enter the expressions for the internal points. After elimination of the boundary conditions, the resulting matrix equation is

[ −4   1   0   1   0   0 ]  [ V(2, 2) ]     [ −Δ²ρ(2, 2)/ε − V_21 − V_12 ]
[  1  −4   1   0   1   0 ]  [ V(3, 2) ]     [ −Δ²ρ(3, 2)/ε − V_31        ]
[  0   1  −4   0   0   1 ]  [ V(4, 2) ]  =  [ −Δ²ρ(4, 2)/ε − V_41 − V_52 ]     (97)
[  1   0   0  −4   1   0 ]  [ V(2, 3) ]     [ −Δ²ρ(2, 3)/ε − V_13 − V_24 ]
[  0   1   0   1  −4   1 ]  [ V(3, 3) ]     [ −Δ²ρ(3, 3)/ε − V_34        ]
[  0   0   1   0   1  −4 ]  [ V(4, 3) ]     [ −Δ²ρ(4, 3)/ε − V_44 − V_53 ]

The discretization matrix presents five non-zero diagonals. The system of linear equations can be solved by a direct technique (e.g. Gaussian elimination) or by using an iterative procedure.

4.2 Iterative schemes for the 2-D Poisson's equation

The general idea of iterative schemes is to start with an initial guess for V(i, j) and then apply repeatedly a correction scheme, which modifies the solution at each iteration, until convergence is obtained. The goal of an iterative scheme is to arrive at a solution which approximates, within a given tolerance, the exact solution of the discretized equations. Direct methods, instead, attempt an exact solution of the discretized problem, with accuracy limited only by the round-off error.

An iterative procedure is similar to a time-dependent problem which evolves to steady state from an initial solution, and therefore the iteration scheme must be stable and convergent. Since the actual solution cannot be obtained exactly in practice, there is the need of a stopping criterion to decide when to interrupt the iterations. At each iteration we define the error as

e(i, j; k) = u*(i, j) − u(i, j; k)     (98)

where u*(i, j) is the true solution of the discretized problem at a given mesh point (i, j) and u(i, j; k) is the solution at iteration k. A scheme is stable if the error tends to zero asymptotically. Remember that for our purposes "zero" is the tolerance fixed in the beginning. The error cannot in general reach

exactly zero because of the noise introduced by round-off error. In some schemes, it is also possible for the error to increase temporarily before starting to decrease. In order to apply (98) directly, one needs to know the solution already, which is usually not the case, except when the code output is compared to some known results for testing purposes. A simple way to test the convergence is to calculate for every discretized equation the residual, obtained by replacing the current solution into the discretization. For a uniform discretization, we have from (95)

R(i, j; k) = [ u(i + 1, j; k) + u(i − 1, j; k) − 4u(i, j; k) + u(i, j + 1; k) + u(i, j − 1; k) ] / Δ² + ρ(i, j)/ε     (99)

Obviously, the residuals tend to zero as the error (98) tends to zero, and the iterations can be stopped when the residuals are sufficiently low. In order to formulate an effective stopping criterion, the norm of the local errors or residuals is used. There are various possible formulations of the norm, as defined in Hilbert spaces. We will consider here the following two norms

‖e‖₂ = [ Σ_{ij} e(i, j; k)² ]^{1/2}     (100)

‖e‖_∞ = max_{ij} |e(i, j; k)|     (101)

Similar norms can be written for the residuals. The expression in (100) corresponds (up to a normalization) to the root mean square of the local error, and in the language of Hilbert spaces is called the l₂ norm. The expression in (101) is the l_∞ norm and is obtained by searching for the maximum absolute value. The iterative process is stopped when the selected norm is smaller than the tolerance. The l_∞ norm gives a more robust stopping criterion. If it is known that the iterative scheme is convergent, one can examine the difference between the solutions at two consecutive steps. Convergence is assumed when the norm

‖δu‖₂ = [ Σ_{ij} ( u(i, j; k + 1) − u(i, j; k) )² ]^{1/2}     (102)

or

‖δu‖_∞ = max_{ij} | u(i, j; k + 1) − u(i, j; k) |     (103)

is smaller than the selected tolerance. This is possible because, as convergence is approached, the local variations of the iterative solution become smaller and smaller. In the following, we report a number of classical schemes for the iterative solution of the 2-D Poisson equation discretized with finite differences on a uniform mesh.

4.2.1 Jacobi iteration

u(i, j; k + 1) = u(i, j; k) + [ u(i, j − 1; k) + u(i − 1, j; k) − 4u(i, j; k) + u(i + 1, j; k) + u(i, j + 1; k) + ρ(i, j)Δ²/ε ] / 4     (104)

The scheme is written in a slightly redundant way, to show that the updated value is equal to the previous one plus a correction which, for this particular scheme, corresponds to one fourth of the residual calculated at the previous iteration. We already know that the residual must tend to zero as the solution is approached. The scheme in (104) can be rewritten in the compact form

u(i, j; k + 1) = [ u(i, j − 1; k) + u(i − 1, j; k) + u(i + 1, j; k) + u(i, j + 1; k) + ρ(i, j)Δ²/ε ] / 4     (105)

In matrix form, we can formally write

ū(k + 1) = B ū(k) + (Δ²/4ε) ρ̄     (106)

where B is the Jacobi matrix. Note that the Jacobi iteration requires at each step enough memory space for the solution at both iteration k and iteration k + 1.

4.2.2 Gauss-Seidel iteration

u(i, j; k + 1) = u(i, j; k) + [ u(i, j − 1; k + 1) + u(i − 1, j; k + 1) − 4u(i, j; k) + u(i + 1, j; k) + u(i, j + 1; k) + ρ(i, j)Δ²/ε ] / 4     (107)

This scheme is identical to the Jacobi iteration, except that the solutions at points which are scanned before (i, j) are immediately updated. From a computational point of view, this is automatically accomplished if one single array is used to store both old and updated solutions. The Gauss-Seidel iteration has an asymptotic convergence rate which is slightly better than that of the Jacobi iteration.

4.2.3 Successive Under Relaxation (SUR)

u(i, j; k + 1) = u(i, j; k) + (ω/4) [ u(i, j − 1; k) + u(i − 1, j; k) − 4u(i, j; k) + u(i + 1, j; k) + u(i, j + 1; k) + ρ(i, j)Δ²/ε ]     (108)

The SUR scheme is a modification of the Jacobi iteration, where the correction term is multiplied by an underrelaxation parameter ω. For convergence, it must be 0 < ω ≤ 1. When ω = 1 the Jacobi iteration is recovered.

4.2.4 Successive Over Relaxation (SOR)

u(i, j; k + 1) = u(i, j; k) + (ω/4) [ u(i, j − 1; k + 1) + u(i − 1, j; k + 1) − 4u(i, j; k) + u(i + 1, j; k) + u(i, j + 1; k) + ρ(i, j)Δ²/ε ]     (109)

The SOR scheme is a modification of the Gauss-Seidel iteration, where the correction term is multiplied by an overrelaxation parameter ω. For convergence, it must be 1 ≤ ω < 2. The Gauss-Seidel scheme is recovered when ω = 1. The best convergence rate is obtained with an optimum relaxation parameter

ω_opt = 2 / [ 1 + (1 − µ²)^{1/2} ]     (110)

where µ is the spectral radius (modulus of the largest eigenvalue) of the Jacobi matrix B. The SOR iteration has considerably better convergence properties than the Gauss-Seidel iteration. However, in the first few iterations the error actually grows, to fall sharply afterwards, while the Gauss-Seidel iteration achieves considerable error reduction during the first few iterations.

4.2.5 Cyclic (Red-Black) SOR

The grid points are subdivided into two partitions, as the red and black squares on a checkerboard. The Cyclic SOR iteration operates in two steps:

Update the values on the red grid points using the SOR step

u_red(i, j; k + 1) = u_red(i, j; k) + (ω/4) [ u_bl(i, j − 1; k) + u_bl(i − 1, j; k) − 4u_red(i, j; k) + u_bl(i + 1, j; k) + u_bl(i, j + 1; k) + ρ(i, j)Δ²/ε ]     (111)

Update the values on the black grid points using the SOR step

u_bl(i, j; k + 1) = u_bl(i, j; k) + (ω/4) [ u_red(i, j − 1; k + 1) + u_red(i − 1, j; k + 1) − 4u_bl(i, j; k) + u_red(i + 1, j; k + 1) + u_red(i, j + 1; k + 1) + ρ(i, j)Δ²/ε ]     (112)

The optimum overrelaxation parameter is the same ω_opt found for the SOR iteration. Note that at each of the two steps the partitions of red and black points are independent, since the solution is only updated on points with the same color.

4.2.6 Cyclic Chebyshev iteration

The Cyclic Chebyshev iteration is a variation of the Cyclic SOR, where the overrelaxation parameter is varied during the iterations, according to the following scheme

Iterate on red points with ω(1) = 1
Iterate on black points with ω(2) = (1 − µ²/2)⁻¹
...
Iterate on red points with ω(k) = (1 − ω(k − 1) µ²/4)⁻¹
Iterate on black points with ω(k + 1) = (1 − ω(k) µ²/4)⁻¹
...

In the limit, ω → ω_opt. At the first step, the scheme behaves like the Gauss-Seidel iteration, with good error reduction.
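A compact Python sketch of the point iterations above (not part of the original notes): the same loop covers Gauss-Seidel (ω = 1) and SOR, since values are updated in place, and uses the l₂ norm of the corrections, cf. (100), as the stopping criterion. Dirichlet values are stored in the boundary entries of u and never modified.

```python
import numpy as np

def sor_poisson(u, rho, delta, eps=1.0, omega=1.8, tol=1e-6, max_iter=10000):
    """Point SOR for the uniform-mesh 2-D Poisson equation (95).
    omega = 1 reproduces Gauss-Seidel; the loop updates u in place."""
    ny, nx = u.shape
    for k in range(max_iter):
        res_norm = 0.0
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                # correction = residual of Eq. (99) times delta**2/4, as in (109)
                corr = (u[j, i - 1] + u[j, i + 1] + u[j - 1, i] + u[j + 1, i]
                        - 4.0 * u[j, i] + rho[j, i] * delta**2 / eps) / 4.0
                u[j, i] += omega * corr
                res_norm += corr**2
        if np.sqrt(res_norm) < tol:       # l2 norm of the corrections, cf. (100)
            return u, k + 1
    return u, max_iter

# Example: 33 x 33 grid, uniform charge, grounded boundary
n = 33
u = np.zeros((n, n))
rho = np.ones((n, n))
u, iters = sor_poisson(u, rho, delta=1.0 / (n - 1))
```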

4.2.7 Line Iterations

Line iterations are obtained by processing at one time an entire line (row) of N grid points, and the resulting implicit problem requires the solution of a system of N equations. Line iterations are possible for all the point iterations listed above. The Jacobi line iteration has the form

u(i, j; k + 1) = (1/4) [ u(i + 1, j; k + 1) + u(i − 1, j; k + 1) + u(i, j − 1; k) + u(i, j + 1; k) + ρ(i, j)Δ²/ε ]     (113)

The scheme in (113) is similar to the point Jacobi iteration as given by (105), but here the variables corresponding to grid points on the current row are unknowns and are solved simultaneously at iteration (k + 1). By placing the unknowns on the left hand side, we obtain

−u(i − 1, j; k + 1) + 4u(i, j; k + 1) − u(i + 1, j; k + 1) = u(i, j − 1; k) + u(i, j + 1; k) + ρ(i, j)Δ²/ε     (114)

The values at nodes on adjacent rows are evaluated at the previous iteration k. The set of equations obtained for the grid points of the row has a tridiagonal matrix, and can be solved with the method described earlier. The Gauss-Seidel line iteration is obtained simply by replacing u(i, j − 1; k) with u(i, j − 1; k + 1), since the latter value is already available from the iteration for row (j − 1)

−u(i − 1, j; k + 1) + 4u(i, j; k + 1) − u(i + 1, j; k + 1) = u(i, j − 1; k + 1) + u(i, j + 1; k) + ρ(i, j)Δ²/ε     (115)

The Successive Line Over Relaxation (SLOR) method is obtained by applying a relaxation to the Gauss-Seidel line iteration

−u(i − 1, j; k + 1) + 4u(i, j; k + 1) − u(i + 1, j; k + 1) =
      (1 − ω) [ 4u(i, j; k) − u(i + 1, j; k) − u(i − 1, j; k) ]
      + ω [ u(i, j + 1; k) + u(i, j − 1; k + 1) + ρ(i, j)Δ²/ε ]     (116)

Line iteration schemes usually converge faster and require fewer iterations than the corresponding point iteration schemes. The coding is somewhat more difficult, and while the point iteration requires N solution evaluations per row, the line iteration requires the solution of an N × N tridiagonal system. The iteration cost may be comparable, provided the line iteration is coded carefully. The idea behind line iterations can be extended to block iterations, where a group (block) of adjacent rows is solved simultaneously.

4.2.8 Alternating-Direction Implicit (ADI) Iteration

This iteration belongs to the category of operator splitting methods. Rather than solving the full implicit problem at once, the horizontal and vertical components of the laplacian operator

are solved implicitly in two consecutive 1-D steps, along the rows and along the columns of the grid. Since two 1-D solutions are in general not equivalent to the full 2-D solution, the procedure is iterated and convergence is aided by the application of a relaxation parameter β.

a) Horizontal sweep

−u(i − 1, j; k + 1/2) + (2 + β) u(i, j; k + 1/2) − u(i + 1, j; k + 1/2) =
      u(i, j − 1; k) − (2 − β) u(i, j; k) + u(i, j + 1; k) + ρ(i, j)Δ²/ε     (117)

Every row is solved implicitly, generating a tridiagonal system of equations, similarly to the line iteration schemes. Since the values at the previous iteration are used to compute the vertical derivatives, the tridiagonal systems are all independent of each other and can be solved simultaneously. This step of the iteration resembles the Jacobi line iteration; however, here 1-D differential operators are considered for the implicit (horizontal) and the explicit (vertical) terms, and a relaxation parameter β is also used.

b) Vertical sweep

−u(i, j − 1; k + 1) + (2 + β) u(i, j; k + 1) − u(i, j + 1; k + 1) =
      u(i − 1, j; k + 1/2) − (2 − β) u(i, j; k + 1/2) + u(i + 1, j; k + 1/2) + ρ(i, j)Δ²/ε     (118)

Now the same step is applied to the columns of the grid, and the values at the previous k + 1/2 step are used to express the horizontal derivatives. Note that the results obtained after the first step are very biased, and a convergence check should only be applied after both steps have been performed. If the relaxation parameter β is taken to be equal to ω_opt, ADI has the same convergence rate as SOR.

4.2.9 Cyclic ADI Iteration

To speed up the convergence rate of ADI, the relaxation parameter β is varied cyclically at every iteration k. For a rectangular domain, with N and M mesh intervals along the x and y sides, the values of β(k) are found according to the following algorithm

a) Select s = max{M + 1, N + 1}
b) Define
   a = 4 sin²(π/2s)
   b = 4 sin²[(s − 1)π/2s]
   c = a/b
   δ = 3 − 2√2
c) Find the smallest integer m such that δ^{m−1} ≤ c
d) Calculate the cyclic relaxation parameters
   β(i) = b c^{ν(i)}   with   ν(i) = (i − 1)/(m − 1);   1 ≤ i ≤ m

When more than m iterations are necessary, the sequence of parameters β(i) is repeated. With the application of cyclic relaxation parameters, convergence is normally achieved in a much smaller number of iterations than with an equivalent SOR scheme.
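A Python sketch of one ADI iteration (not part of the original notes): each half-step solves a family of independent tridiagonal systems, here with a dense solver for brevity and a fixed relaxation parameter β.

```python
import numpy as np

def adi_iteration(u, rho, delta, beta, eps=1.0):
    """One full ADI iteration (117)-(118) on a uniform mesh.
    Boundary values of u are Dirichlet data and are left untouched."""
    ny, nx = u.shape
    f = rho * delta**2 / eps
    u_half = u.copy()
    # a) horizontal sweep: one tridiagonal solve per interior row j
    for j in range(1, ny - 1):
        n = nx - 2
        A = (np.diag(np.full(n, 2.0 + beta)) + np.diag(np.full(n - 1, -1.0), 1)
             + np.diag(np.full(n - 1, -1.0), -1))
        b = u[j - 1, 1:-1] - (2.0 - beta) * u[j, 1:-1] + u[j + 1, 1:-1] + f[j, 1:-1]
        b[0] += u[j, 0]; b[-1] += u[j, -1]          # known boundary neighbors
        u_half[j, 1:-1] = np.linalg.solve(A, b)
    # b) vertical sweep: one tridiagonal solve per interior column i
    u_new = u_half.copy()
    for i in range(1, nx - 1):
        n = ny - 2
        A = (np.diag(np.full(n, 2.0 + beta)) + np.diag(np.full(n - 1, -1.0), 1)
             + np.diag(np.full(n - 1, -1.0), -1))
        b = (u_half[1:-1, i - 1] - (2.0 - beta) * u_half[1:-1, i]
             + u_half[1:-1, i + 1] + f[1:-1, i])
        b[0] += u_half[0, i]; b[-1] += u_half[-1, i]
        u_new[1:-1, i] = np.linalg.solve(A, b)
    return u_new
```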

4.2.10 Matrix Notation for the Iterative Algorithms

Linear algebra notation is often used to represent in a compact form the iterative algorithms discussed above. The discretized Poisson equation can be represented by the linear system

A ū = ḡ     (119)

where the discretization matrix A is obtained as in (97). The matrix can be decomposed as

A = L + D + U     (120)

where D is the diagonal, and L and U are respectively the lower triangular and the upper triangular parts. The Jacobi iteration corresponds to the matrix equation

D ū(k + 1) = −(L + U) ū(k) + ḡ     (121)

or

ū(k + 1) = −D⁻¹(L + U) ū(k) + D⁻¹ ḡ     (122)

where −D⁻¹(L + U) is the Jacobi matrix. In terms of the elements a_{ij} of the original matrix A, the Jacobi matrix for a rectangular domain with Dirichlet boundary conditions has the form

               [ 0            −a_12/a_11   −a_13/a_11   ...   −a_1n/a_11 ]
−D⁻¹(L + U) =  [ −a_21/a_22   0            −a_23/a_22   ...   −a_2n/a_22 ]     (123)
               [ ...                                                     ]
               [ −a_n1/a_nn   −a_n2/a_nn   −a_n3/a_nn   ...   0          ]

where n = N·M, with N and M the numbers of internal grid nodes on the sides of the domain. It can be shown that for this simple case the spectral radius (i.e. the modulus of the largest eigenvalue) of the Jacobi matrix is given by

µ = [ cos(π/M) + cos(π/N) ] / 2     (124)

For the Gauss-Seidel method, we have the following matrix equation

(D + L) ū(k + 1) = −U ū(k) + ḡ     (125)

The lower triangular matrix L is on the left hand side because of the updating procedure discussed earlier. We can rewrite the procedure as

ū(k + 1) = −(D + L)⁻¹ U ū(k) + (D + L)⁻¹ ḡ     (126)

where −(D + L)⁻¹ U is the Gauss-Seidel matrix.

Suggested exercises

1.1 Derive the finite difference approximations (10), (11) and (12).
1.2 Show that the approximation in (19) is of order O(h), where h is the local mesh size.
1.3 Derive the matrix for the 1-D Poisson equation discretized with 3-point nonuniform finite differences.
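As a closing illustration of Section 4.2.10, the Python sketch below (not part of the original notes) builds the 5-point matrix A for a small rectangular domain, forms the splitting (120), and compares the numerically computed spectral radius of the Jacobi matrix with the estimate (124) and the resulting ω_opt from (110).

```python
import numpy as np

def poisson_matrix(N, M):
    """5-point matrix A (as in (97)) for N x M internal nodes, lexicographic ordering."""
    n = N * M
    A = np.zeros((n, n))
    for j in range(M):
        for i in range(N):
            p = j * N + i
            A[p, p] = -4.0
            if i > 0:     A[p, p - 1] = 1.0
            if i < N - 1: A[p, p + 1] = 1.0
            if j > 0:     A[p, p - N] = 1.0
            if j < M - 1: A[p, p + N] = 1.0
    return A

N, M = 15, 10
A = poisson_matrix(N, M)
D = np.diag(np.diag(A))
LU = A - D                                   # L + U of the splitting (120)
B = -np.linalg.inv(D) @ LU                   # Jacobi matrix, Eq. (122)
mu = np.max(np.abs(np.linalg.eigvals(B)))    # spectral radius
mu_est = 0.5 * (np.cos(np.pi / M) + np.cos(np.pi / N))
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - mu**2))
print(mu, mu_est, omega_opt)
```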


More information

Advection / Hyperbolic PDEs. PHY 604: Computational Methods in Physics and Astrophysics II

Advection / Hyperbolic PDEs. PHY 604: Computational Methods in Physics and Astrophysics II Advection / Hyperbolic PDEs Notes In addition to the slides and code examples, my notes on PDEs with the finite-volume method are up online: https://github.com/open-astrophysics-bookshelf/numerical_exercises

More information

Additive Manufacturing Module 8

Additive Manufacturing Module 8 Additive Manufacturing Module 8 Spring 2015 Wenchao Zhou zhouw@uark.edu (479) 575-7250 The Department of Mechanical Engineering University of Arkansas, Fayetteville 1 Evaluating design https://www.youtube.com/watch?v=p

More information

Iterative Solvers. Lab 6. Iterative Methods

Iterative Solvers. Lab 6. Iterative Methods Lab 6 Iterative Solvers Lab Objective: Many real-world problems of the form Ax = b have tens of thousands of parameters Solving such systems with Gaussian elimination or matrix factorizations could require

More information

Elliptic Problems / Multigrid. PHY 604: Computational Methods for Physics and Astrophysics II

Elliptic Problems / Multigrid. PHY 604: Computational Methods for Physics and Astrophysics II Elliptic Problems / Multigrid Summary of Hyperbolic PDEs We looked at a simple linear and a nonlinear scalar hyperbolic PDE There is a speed associated with the change of the solution Explicit methods

More information

Appendix C: Recapitulation of Numerical schemes

Appendix C: Recapitulation of Numerical schemes Appendix C: Recapitulation of Numerical schemes August 31, 2009) SUMMARY: Certain numerical schemes of general use are regrouped here in order to facilitate implementations of simple models C1 The tridiagonal

More information

MA3232 Numerical Analysis Week 9. James Cooley (1926-)

MA3232 Numerical Analysis Week 9. James Cooley (1926-) MA umerical Analysis Week 9 James Cooley (96-) James Cooley is an American mathematician. His most significant contribution to the world of mathematics and digital signal processing is the Fast Fourier

More information

7 Hyperbolic Differential Equations

7 Hyperbolic Differential Equations Numerical Analysis of Differential Equations 243 7 Hyperbolic Differential Equations While parabolic equations model diffusion processes, hyperbolic equations model wave propagation and transport phenomena.

More information

NUMERICAL METHODS FOR ENGINEERING APPLICATION

NUMERICAL METHODS FOR ENGINEERING APPLICATION NUMERICAL METHODS FOR ENGINEERING APPLICATION Second Edition JOEL H. FERZIGER A Wiley-Interscience Publication JOHN WILEY & SONS, INC. New York / Chichester / Weinheim / Brisbane / Singapore / Toronto

More information

PDE Solvers for Fluid Flow

PDE Solvers for Fluid Flow PDE Solvers for Fluid Flow issues and algorithms for the Streaming Supercomputer Eran Guendelman February 5, 2002 Topics Equations for incompressible fluid flow 3 model PDEs: Hyperbolic, Elliptic, Parabolic

More information

Numerical Solutions to Partial Differential Equations

Numerical Solutions to Partial Differential Equations Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University Numerical Methods for Partial Differential Equations Finite Difference Methods

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 11 Partial Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002.

More information

Time stepping methods

Time stepping methods Time stepping methods ATHENS course: Introduction into Finite Elements Delft Institute of Applied Mathematics, TU Delft Matthias Möller (m.moller@tudelft.nl) 19 November 2014 M. Möller (DIAM@TUDelft) Time

More information

Part 1. The diffusion equation

Part 1. The diffusion equation Differential Equations FMNN10 Graded Project #3 c G Söderlind 2016 2017 Published 2017-11-27. Instruction in computer lab 2017-11-30/2017-12-06/07. Project due date: Monday 2017-12-11 at 12:00:00. Goals.

More information

Solving the Generalized Poisson Equation Using the Finite-Difference Method (FDM)

Solving the Generalized Poisson Equation Using the Finite-Difference Method (FDM) Solving the Generalized Poisson Equation Using the Finite-Difference Method (FDM) James R. Nagel September 30, 2009 1 Introduction Numerical simulation is an extremely valuable tool for those who wish

More information

Department of Mathematics California State University, Los Angeles Master s Degree Comprehensive Examination in. NUMERICAL ANALYSIS Spring 2015

Department of Mathematics California State University, Los Angeles Master s Degree Comprehensive Examination in. NUMERICAL ANALYSIS Spring 2015 Department of Mathematics California State University, Los Angeles Master s Degree Comprehensive Examination in NUMERICAL ANALYSIS Spring 2015 Instructions: Do exactly two problems from Part A AND two

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Chapter 5. Numerical Methods: Finite Differences

Chapter 5. Numerical Methods: Finite Differences Chapter 5 Numerical Methods: Finite Differences As you know, the differential equations that can be solved by an explicit analytic formula are few and far between. Consequently, the development of accurate

More information

Chapter 10 Exercises

Chapter 10 Exercises Chapter 10 Exercises From: Finite Difference Methods for Ordinary and Partial Differential Equations by R. J. LeVeque, SIAM, 2007. http://www.amath.washington.edu/ rl/fdmbook Exercise 10.1 (One-sided and

More information

Introduction. Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods. Example: First Order Richardson. Strategy

Introduction. Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods. Example: First Order Richardson. Strategy Introduction Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 Solve system Ax = b by repeatedly computing

More information

Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods

Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 March 2015 1 / 70 Topics Introduction to Iterative Methods

More information

Time-dependent variational forms

Time-dependent variational forms Time-dependent variational forms Hans Petter Langtangen 1,2 1 Center for Biomedical Computing, Simula Research Laboratory 2 Department of Informatics, University of Oslo Oct 30, 2015 PRELIMINARY VERSION

More information

Implicitly Defined High-Order Operator Splittings for Parabolic and Hyperbolic Variable-Coefficient PDE Using Modified Moments

Implicitly Defined High-Order Operator Splittings for Parabolic and Hyperbolic Variable-Coefficient PDE Using Modified Moments Implicitly Defined High-Order Operator Splittings for Parabolic and Hyperbolic Variable-Coefficient PDE Using Modified Moments James V. Lambers September 24, 2008 Abstract This paper presents a reformulation

More information

Chapter 10. Numerical Methods

Chapter 10. Numerical Methods Chapter 0 Numerical Methods As you know, most differential equations are far too complicated to be solved by an explicit analytic formula Thus, the development of accurate numerical approximation schemes

More information

PHYS 410/555 Computational Physics Solution of Non Linear Equations (a.k.a. Root Finding) (Reference Numerical Recipes, 9.0, 9.1, 9.

PHYS 410/555 Computational Physics Solution of Non Linear Equations (a.k.a. Root Finding) (Reference Numerical Recipes, 9.0, 9.1, 9. PHYS 410/555 Computational Physics Solution of Non Linear Equations (a.k.a. Root Finding) (Reference Numerical Recipes, 9.0, 9.1, 9.4) We will consider two cases 1. f(x) = 0 1-dimensional 2. f(x) = 0 d-dimensional

More information

9. Iterative Methods for Large Linear Systems

9. Iterative Methods for Large Linear Systems EE507 - Computational Techniques for EE Jitkomut Songsiri 9. Iterative Methods for Large Linear Systems introduction splitting method Jacobi method Gauss-Seidel method successive overrelaxation (SOR) 9-1

More information

Cache Oblivious Stencil Computations

Cache Oblivious Stencil Computations Cache Oblivious Stencil Computations S. HUNOLD J. L. TRÄFF F. VERSACI Lectures on High Performance Computing 13 April 2015 F. Versaci (TU Wien) Cache Oblivious Stencil Computations 13 April 2015 1 / 19

More information

Finite Difference Methods for Boundary Value Problems

Finite Difference Methods for Boundary Value Problems Finite Difference Methods for Boundary Value Problems October 2, 2013 () Finite Differences October 2, 2013 1 / 52 Goals Learn steps to approximate BVPs using the Finite Difference Method Start with two-point

More information

Partial Differential Equations

Partial Differential Equations Next: Using Matlab Up: Numerical Analysis for Chemical Previous: Ordinary Differential Equations Subsections Finite Difference: Elliptic Equations The Laplace Equations Solution Techniques Boundary Conditions

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

LINEAR SYSTEMS (11) Intensive Computation

LINEAR SYSTEMS (11) Intensive Computation LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY

More information

2.29 Numerical Fluid Mechanics Spring 2015 Lecture 13

2.29 Numerical Fluid Mechanics Spring 2015 Lecture 13 REVIEW Lecture 12: Spring 2015 Lecture 13 Grid-Refinement and Error estimation Estimation of the order of convergence and of the discretization error Richardson s extrapolation and Iterative improvements

More information

Math 7824 Spring 2010 Numerical solution of partial differential equations Classroom notes and homework

Math 7824 Spring 2010 Numerical solution of partial differential equations Classroom notes and homework Math 7824 Spring 2010 Numerical solution of partial differential equations Classroom notes and homework Jan Mandel University of Colorado Denver May 12, 2010 1/20/09: Sec. 1.1, 1.2. Hw 1 due 1/27: problems

More information

The Finite Difference Method

The Finite Difference Method Chapter 5. The Finite Difference Method This chapter derives the finite difference equations that are used in the conduction analyses in the next chapter and the techniques that are used to overcome computational

More information

Sparse Linear Systems. Iterative Methods for Sparse Linear Systems. Motivation for Studying Sparse Linear Systems. Partial Differential Equations

Sparse Linear Systems. Iterative Methods for Sparse Linear Systems. Motivation for Studying Sparse Linear Systems. Partial Differential Equations Sparse Linear Systems Iterative Methods for Sparse Linear Systems Matrix Computations and Applications, Lecture C11 Fredrik Bengzon, Robert Söderlund We consider the problem of solving the linear system

More information

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by 1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How

More information

Review of matrices. Let m, n IN. A rectangle of numbers written like A =

Review of matrices. Let m, n IN. A rectangle of numbers written like A = Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an

More information

Multigrid finite element methods on semi-structured triangular grids

Multigrid finite element methods on semi-structured triangular grids XXI Congreso de Ecuaciones Diferenciales y Aplicaciones XI Congreso de Matemática Aplicada Ciudad Real, -5 septiembre 009 (pp. 8) Multigrid finite element methods on semi-structured triangular grids F.J.

More information

JACOBI S ITERATION METHOD

JACOBI S ITERATION METHOD ITERATION METHODS These are methods which compute a sequence of progressively accurate iterates to approximate the solution of Ax = b. We need such methods for solving many large linear systems. Sometimes

More information

Chapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS

Chapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS Chapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS 5.1 Introduction When a physical system depends on more than one variable a general

More information

Lecture 13: Solution to Poission Equation, Numerical Integration, and Wave Equation 1. REVIEW: Poisson s Equation Solution

Lecture 13: Solution to Poission Equation, Numerical Integration, and Wave Equation 1. REVIEW: Poisson s Equation Solution Lecture 13: Solution to Poission Equation, Numerical Integration, and Wave Equation 1 Poisson s Equation REVIEW: Poisson s Equation Solution Poisson s equation relates the potential function V (x, y, z)

More information

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative

More information

Finite difference methods for the diffusion equation

Finite difference methods for the diffusion equation Finite difference methods for the diffusion equation D150, Tillämpade numeriska metoder II Olof Runborg May 0, 003 These notes summarize a part of the material in Chapter 13 of Iserles. They are based

More information

Next topics: Solving systems of linear equations

Next topics: Solving systems of linear equations Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:

More information

Iterative Methods and Multigrid

Iterative Methods and Multigrid Iterative Methods and Multigrid Part 1: Introduction to Multigrid 1 12/02/09 MG02.prz Error Smoothing 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 Initial Solution=-Error 0 10 20 30 40 50 60 70 80 90 100 DCT:

More information

Introduction to numerical schemes

Introduction to numerical schemes 236861 Numerical Geometry of Images Tutorial 2 Introduction to numerical schemes Heat equation The simple parabolic PDE with the initial values u t = K 2 u 2 x u(0, x) = u 0 (x) and some boundary conditions

More information

KINGS COLLEGE OF ENGINEERING DEPARTMENT OF MATHEMATICS ACADEMIC YEAR / EVEN SEMESTER QUESTION BANK

KINGS COLLEGE OF ENGINEERING DEPARTMENT OF MATHEMATICS ACADEMIC YEAR / EVEN SEMESTER QUESTION BANK KINGS COLLEGE OF ENGINEERING MA5-NUMERICAL METHODS DEPARTMENT OF MATHEMATICS ACADEMIC YEAR 00-0 / EVEN SEMESTER QUESTION BANK SUBJECT NAME: NUMERICAL METHODS YEAR/SEM: II / IV UNIT - I SOLUTION OF EQUATIONS

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

3.4. Monotonicity of Advection Schemes

3.4. Monotonicity of Advection Schemes 3.4. Monotonicity of Advection Schemes 3.4.1. Concept of Monotonicity When numerical schemes are used to advect a monotonic function, e.g., a monotonically decreasing function of x, the numerical solutions

More information

4 Stability analysis of finite-difference methods for ODEs

4 Stability analysis of finite-difference methods for ODEs MATH 337, by T. Lakoba, University of Vermont 36 4 Stability analysis of finite-difference methods for ODEs 4.1 Consistency, stability, and convergence of a numerical method; Main Theorem In this Lecture

More information

Numerical Methods for PDEs

Numerical Methods for PDEs Numerical Methods for PDEs Problems 1. Numerical Differentiation. Find the best approximation to the second drivative d 2 f(x)/dx 2 at x = x you can of a function f(x) using (a) the Taylor series approach

More information

Block-tridiagonal matrices

Block-tridiagonal matrices Block-tridiagonal matrices. p.1/31 Block-tridiagonal matrices - where do these arise? - as a result of a particular mesh-point ordering - as a part of a factorization procedure, for example when we compute

More information

Lecture 9 Approximations of Laplace s Equation, Finite Element Method. Mathématiques appliquées (MATH0504-1) B. Dewals, C.

Lecture 9 Approximations of Laplace s Equation, Finite Element Method. Mathématiques appliquées (MATH0504-1) B. Dewals, C. Lecture 9 Approximations of Laplace s Equation, Finite Element Method Mathématiques appliquées (MATH54-1) B. Dewals, C. Geuzaine V1.2 23/11/218 1 Learning objectives of this lecture Apply the finite difference

More information

q t = F q x. (1) is a flux of q due to diffusion. Although very complex parameterizations for F q

q t = F q x. (1) is a flux of q due to diffusion. Although very complex parameterizations for F q ! Revised Tuesday, December 8, 015! 1 Chapter 7: Diffusion Copyright 015, David A. Randall 7.1! Introduction Diffusion is a macroscopic statistical description of microscopic advection. Here microscopic

More information

Computational Methods. Systems of Linear Equations

Computational Methods. Systems of Linear Equations Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

7 Mathematical Methods 7.6 Insulation (10 units)

7 Mathematical Methods 7.6 Insulation (10 units) 7 Mathematical Methods 7.6 Insulation (10 units) There are no prerequisites for this project. 1 Introduction When sheets of plastic and of other insulating materials are used in the construction of building

More information

Finite Differences: Consistency, Stability and Convergence

Finite Differences: Consistency, Stability and Convergence Finite Differences: Consistency, Stability and Convergence Varun Shankar March, 06 Introduction Now that we have tackled our first space-time PDE, we will take a quick detour from presenting new FD methods,

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Numerical Algorithms for Visual Computing II 2010/11 Example Solutions for Assignment 6

Numerical Algorithms for Visual Computing II 2010/11 Example Solutions for Assignment 6 Numerical Algorithms for Visual Computing II 00/ Example Solutions for Assignment 6 Problem (Matrix Stability Infusion). The matrix A of the arising matrix notation U n+ = AU n takes the following form,

More information

Image Reconstruction And Poisson s equation

Image Reconstruction And Poisson s equation Chapter 1, p. 1/58 Image Reconstruction And Poisson s equation School of Engineering Sciences Parallel s for Large-Scale Problems I Chapter 1, p. 2/58 Outline 1 2 3 4 Chapter 1, p. 3/58 Question What have

More information

Applied Mathematics 205. Unit III: Numerical Calculus. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit III: Numerical Calculus. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit III: Numerical Calculus Lecturer: Dr. David Knezevic Unit III: Numerical Calculus Chapter III.3: Boundary Value Problems and PDEs 2 / 96 ODE Boundary Value Problems 3 / 96

More information

Domain decomposition for the Jacobi-Davidson method: practical strategies

Domain decomposition for the Jacobi-Davidson method: practical strategies Chapter 4 Domain decomposition for the Jacobi-Davidson method: practical strategies Abstract The Jacobi-Davidson method is an iterative method for the computation of solutions of large eigenvalue problems.

More information

Numerical methods part 2

Numerical methods part 2 Numerical methods part 2 Alain Hébert alain.hebert@polymtl.ca Institut de génie nucléaire École Polytechnique de Montréal ENE6103: Week 6 Numerical methods part 2 1/33 Content (week 6) 1 Solution of an

More information

12 The Heat equation in one spatial dimension: Simple explicit method and Stability analysis

12 The Heat equation in one spatial dimension: Simple explicit method and Stability analysis ATH 337, by T. Lakoba, University of Vermont 113 12 The Heat equation in one spatial dimension: Simple explicit method and Stability analysis 12.1 Formulation of the IBVP and the minimax property of its

More information

Theory of Iterative Methods

Theory of Iterative Methods Based on Strang s Introduction to Applied Mathematics Theory of Iterative Methods The Iterative Idea To solve Ax = b, write Mx (k+1) = (M A)x (k) + b, k = 0, 1,,... Then the error e (k) x (k) x satisfies

More information

Boundary value problems on triangular domains and MKSOR methods

Boundary value problems on triangular domains and MKSOR methods Applied and Computational Mathematics 2014; 3(3): 90-99 Published online June 30 2014 (http://www.sciencepublishinggroup.com/j/acm) doi: 10.1164/j.acm.20140303.14 Boundary value problems on triangular

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to

More information