A Block Red-Black SOR Method for a Two-Dimensional Parabolic Equation Using Hermite Collocation

Stephen H. Brill and George F. Pinder
Stephen H. Brill, Department of Mathematics and Statistics, University of Vermont, Burlington, Vermont, U.S.A.
George F. Pinder, Department of Civil and Environmental Engineering, University of Vermont, Burlington, Vermont, U.S.A.

1 Introduction

In [LHH9], Lai et al. study a block Jacobi method to solve the two-dimensional Poisson equation

    ∇²u = H(x, y)    (1)

defined on the interior of the unit square S = [0, 1] × [0, 1], discretized by the collocation method with a uniform mesh, given Dirichlet boundary conditions

    u(x, y) = C(x, y),   (x, y) ∈ ∂S.    (2)

They determine eigenvalues for the iteration matrix of their block Jacobi method and then use the theory in [You1] to determine a formula for ω_opt, the optimal relaxation factor ω for the block SOR method associated with their block Jacobi scheme. In this paper, we explain how to extend their work so that the optimal SOR method is parallelizable by using a red-black ordering scheme. We then use these ideas to efficiently solve the two-dimensional parabolic equation ∂u/∂t = ∇²u − H(x, y, t) with Dirichlet boundary conditions.
2 Hermite Cubic Polynomials

2.1 One-dimensional formulation

Let u(x) be a function defined on the interval [0, 1]. Partition the interval using n equally spaced nodes 0 = x_0 < x_1 < … < x_m = 1, where m = n − 1 is the number of elements. Let h = 1/m. Consider the functions (cf. [Pic9])

    f_j(x) = (x − x_{j−1})² [2(x_j − x) + h] / h³,   x_{j−1} ≤ x ≤ x_j
           = (x_{j+1} − x)² [2(x − x_j) + h] / h³,   x_j ≤ x ≤ x_{j+1}
           = 0                                       otherwise

and

    g_j(x) = (x − x_{j−1})² (x − x_j) / h²,   x_{j−1} ≤ x ≤ x_j
           = (x_{j+1} − x)² (x − x_j) / h²,   x_j ≤ x ≤ x_{j+1}
           = 0                                otherwise

These are the Hermite cubic polynomials that we use as basis functions in our collocation approach. Notice that

    f_j(x_i) = δ_ij,   f′_j(x_i) = 0,   g_j(x_i) = 0,   g′_j(x_i) = δ_ij   for all i, j,

where δ_ij is the Kronecker symbol. Let u_j = u(x_j) and u′_j = u′(x_j) = du/dx (x_j) for j = 0, 1, …, m. Then the cubic polynomial interpolating the u_j's and the u′_j's is

    û(x) = Σ_{j=0}^{m} [u_j f_j(x) + u′_j g_j(x)]    (3)

In [Pap8], Papatheodorou uses g*_j(x) = g_j(x)/h (in place of g_j(x)) when forming (3). He makes this choice (also used in [LHH9]) because eigenvalue analysis is much easier using g*_j(x) instead of g_j(x). It is easily seen that the iteration matrices studied herein are identical whether one uses g_j(x) or g*_j(x). In this paper, we use g_j(x) in the computer code that generates the numerical results and employ g*_j(x) for our analysis.

2.2 Two-dimensional formulation

Let u(x, y) be a function defined on S. We partition S by using n equally spaced nodes in both the x- and y-directions. Letting m = n − 1 and h = 1/m, we partition S into m² square elements, where the dimensions of each element are h × h. If we consider two-dimensional bi-cubic Hermite basis polynomials, we obtain, by analogy to (3),

    û(x, y) = Σ_{q=0}^{m} Σ_{r=0}^{m} [u_qr f_q(x) f_r(y) + u^x_qr g_q(x) f_r(y) + u^y_qr f_q(x) g_r(y) + u^xy_qr g_q(x) g_r(y)]    (4)
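The one-dimensional definitions translate directly into code. The sketch below is ours, not the authors' implementation: it builds the pair (f_j, g_j) on a uniform mesh and then exercises the interpolant (3) using the fact that Hermite cubic interpolation reproduces any cubic polynomial exactly.

```python
import numpy as np

def hermite_basis(j, nodes):
    """Hermite cubic pair (f_j, g_j) on a uniform mesh (illustrative sketch).

    f_j(x_i) = delta_ij with f_j'(x_i) = 0;  g_j(x_i) = 0 with g_j'(x_i) = delta_ij.
    """
    h = nodes[1] - nodes[0]
    xj = nodes[j]

    def f(x):
        x = np.asarray(x, dtype=float)
        y = np.zeros_like(x)
        left = (x >= xj - h) & (x <= xj)      # support on [x_{j-1}, x_j]
        right = (x >= xj) & (x <= xj + h)     # support on [x_j, x_{j+1}]
        y[left] = (x[left] - (xj - h)) ** 2 * (2 * (xj - x[left]) + h) / h**3
        y[right] = ((xj + h) - x[right]) ** 2 * (2 * (x[right] - xj) + h) / h**3
        return y

    def g(x):
        x = np.asarray(x, dtype=float)
        y = np.zeros_like(x)
        left = (x >= xj - h) & (x <= xj)
        right = (x >= xj) & (x <= xj + h)
        y[left] = (x[left] - (xj - h)) ** 2 * (x[left] - xj) / h**2
        y[right] = ((xj + h) - x[right]) ** 2 * (x[right] - xj) / h**2
        return y

    return f, g

# The interpolant (3) reproduces any cubic exactly, e.g. u(x) = x^3:
nodes = np.linspace(0.0, 1.0, 5)               # n = 5 nodes, m = 4 elements
xs = np.linspace(0.0, 1.0, 41)
u_hat = np.zeros_like(xs)
for j, xj in enumerate(nodes):
    fj, gj = hermite_basis(j, nodes)
    u_hat += xj**3 * fj(xs) + 3 * xj**2 * gj(xs)   # u_j f_j + u'_j g_j
print(np.max(np.abs(u_hat - xs**3)))           # ~ machine precision
```

The nodal conditions f_j(x_i) = δ_ij, f′_j(x_i) = 0, g_j(x_i) = 0, g′_j(x_i) = δ_ij can be verified directly on this implementation.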
where u_qr = u(x_q, y_r), u^x_qr = ∂u/∂x (x_q, y_r), u^y_qr = ∂u/∂y (x_q, y_r), and u^xy_qr = ∂²u/∂x∂y (x_q, y_r). We see that û(x, y) interpolates these four quantities at each grid point (x_q, y_r).

3 Collocation Discretization of the PDE

If the interpolating polynomial (4) is introduced into the governing equation (1), we obtain

    ∇²û − H(x, y) = E(x, y),

where E(x, y) is an error function. We see that at each of the n² grid points (x_q, y_r) we have four degrees of freedom, namely u_qr, u^x_qr, u^y_qr, and u^xy_qr. However, on the boundary many of these values are known. In particular, we know (from (2))

    u_qr = u(x_q, y_r) = C(x_q, y_r)

for all boundary nodes (grid points). In addition, we can calculate u^x_qr on the north and south boundaries and u^y_qr on the east and west boundaries. We therefore know the values of a total of 8n − 4 degrees of freedom and do not know the values of 4m² degrees of freedom. Therefore, to uniquely determine these 4m² degrees of freedom we require 4m² equations, or four equations per element. To achieve this, we choose four points (x_k, y_ℓ) in the interior of each element and enforce E(x_k, y_ℓ) = 0 at each of these 4m² "collocation points". It is known (from [Cel8]) that the optimal choices of collocation points for the symmetric differential operator given in (1) are the so-called "Gauss points". On the interval [−1, 1], the Gauss points are ±z, where z = 1/√3. On the square element [−1, 1] × [−1, 1], the Gauss points are (−z, −z), (−z, z), (z, −z), and (z, z). Transforming these four Gauss points into each of the m² elements of our mesh defines the full set of 4m² collocation equations. These can be written

    Σ_{q=0}^{m} Σ_{r=0}^{m} { [f″_q(x_k) f_r(y_ℓ) + f_q(x_k) f″_r(y_ℓ)] u_qr
                            + [g″_q(x_k) f_r(y_ℓ) + g_q(x_k) f″_r(y_ℓ)] u^x_qr
                            + [f″_q(x_k) g_r(y_ℓ) + f_q(x_k) g″_r(y_ℓ)] u^y_qr
                            + [g″_q(x_k) g_r(y_ℓ) + g_q(x_k) g″_r(y_ℓ)] u^xy_qr } = H(x_k, y_ℓ)    (5)

where (x_k, y_ℓ) varies over all 4m² collocation points.
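As a concrete check on the counting above, the following sketch (function name and layout are ours) generates the 4m² Gauss collocation points for a uniform m × m mesh on the unit square.

```python
import numpy as np

def collocation_points(m):
    """All 4*m*m Gauss collocation points of a uniform m-by-m mesh on [0,1]^2.

    On [-1,1] the two Gauss points are -z and +z with z = 1/sqrt(3); mapped
    to an element [a, a+h] they become a + h*(1-z)/2 and a + h*(1+z)/2.
    """
    h = 1.0 / m
    z = 1.0 / np.sqrt(3.0)
    offsets = np.array([(1.0 - z) / 2.0, (1.0 + z) / 2.0]) * h
    x1d = (h * np.arange(m)[:, None] + offsets[None, :]).ravel()   # 2m abscissae
    X, Y = np.meshgrid(x1d, x1d, indexing="ij")
    return np.column_stack([X.ravel(), Y.ravel()])                 # shape (4m^2, 2)

pts = collocation_points(4)   # m = 4 elements per direction
print(pts.shape)              # (64, 2): 4 m^2 = 64 collocation points
```

Each element receives exactly the four tensor-product Gauss points, so the number of collocation equations matches the 4m² unknown degrees of freedom.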
[Figure I: numbering of equations and unknowns for the case n = 5. Each node carries the unknowns u, u^x, u^y, u^xy; the symbol ⊞_ij indicates the approximate location of collocation point (x_j, y_i).]

There are many ways in which to number the unknowns and equations. Each numbering system defines a different structure for the matrix arising from the system of linear equations (5) that we must solve. We use a numbering proposed by [Cel8] and by [LHH9], which is depicted pictorially in Figure I for the case of n = 5. It is seen that the matrix equation that arises from this numbering for n = 5 is

    M̃ ṽ = k̃    (6)
where M̃ is assembled from 8×8 submatrices A_1, …, A_4 (and their negatives), and

    ṽ = (v_0ᵀ, v_1ᵀ, …, v_7ᵀ)ᵀ,   k̃ = (k_0ᵀ, k_1ᵀ, …, k_7ᵀ)ᵀ.

The vectors v_i and k_i are given by

    v_i = (v_{i0}, v_{i1}, …, v_{i7})ᵀ,   k_i = (k_{i0}, k_{i1}, …, k_{i7})ᵀ,

where k_{ij} = H(x_j, y_i) − V_{ij}. Here V_{ij} indicates any known boundary value information that appears on the left side of (5) that is pertinent to the equation defined at collocation point (x_j, y_i). It is clear that V_{ij} may be non-zero only when (x_j, y_i) is in a boundary element. The submatrices A_i, i = 1, …, 4, are banded matrices whose non-zero entries are, up to sign, the four numbers a_{i1}, a_{i2}, a_{i3}, a_{i4}. Although the above example is for n = 5, it should be clear how the corresponding matrices and vectors would appear for different values of n. It is seen in [LHH9] and [Pap8] that

    a_{ij} = α_{ij} / (9h²),    (7)

where the α_{ij} are constants involving √3 (for example, α_{11} = −24 − 18√3).
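Block-banded systems of exactly this kind are solved repeatedly below. A block tridiagonal system can be eliminated directly by the block analogue of the Thomas algorithm; the sketch that follows is ours (names, block sizes, and the test matrix are illustrative choices, and no pivoting is attempted).

```python
import numpy as np

def block_tridiag_solve(A_diag, A_sub, A_sup, b):
    """Solve a block tridiagonal system by block Gaussian elimination.

    A_diag: (N, s, s) diagonal blocks; A_sub/A_sup: (N-1, s, s) sub- and
    super-diagonal blocks; b: (N, s) right-hand side. Returns x of shape (N, s).
    """
    N = A_diag.shape[0]
    D = A_diag.astype(float).copy()
    rhs = b.astype(float).copy()
    # Forward elimination: wipe out the sub-diagonal blocks.
    for i in range(1, N):
        L = A_sub[i - 1] @ np.linalg.inv(D[i - 1])
        D[i] = D[i] - L @ A_sup[i - 1]
        rhs[i] = rhs[i] - L @ rhs[i - 1]
    # Back substitution.
    x = np.empty_like(rhs)
    x[-1] = np.linalg.solve(D[-1], rhs[-1])
    for i in range(N - 2, -1, -1):
        x[i] = np.linalg.solve(D[i], rhs[i] - A_sup[i] @ x[i + 1])
    return x
```

The cost is linear in the number of block rows, with one small dense factorization per block, which is what makes the direct inner solves of the block iterations below affordable.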
4 Block Jacobi Method for Poisson's Equation

To begin, the matrix M̃ is partitioned into block rows, which we write concisely as

    M̃ = [ A_F  B_F               ]
        [ C_F  A    B            ]
        [      C    A    B       ]
        [           C    A    B  ]
        [                C_L  A_L ]    (8)

The block Jacobi method is then defined by

    D̃ ṽ^(p+1) = (L̃ + Ũ) ṽ^(p) + k̃    (9)

where ṽ^(p) is the approximation to ṽ after p iterations and where M̃ is split into M̃ = D̃ − L̃ − Ũ, with D̃ = diag(A_F, A, …, A, A_L), −L̃ holding the sub-diagonal blocks C_F, C, …, C_L, and −Ũ holding the super-diagonal blocks B_F, B, …, B.

We solve (9) for ṽ^(p+1) as follows. First, we note that each of the m + 1 block rows of D̃ in (9) defines a matrix equation, each of which is entirely decoupled from the rest. Hence, each of these m + 1 matrix equations may be solved simultaneously in parallel. We see that each of these equations is of the form A v = k, where A = A_F, A, or A_L, v is the corresponding vector of unknowns, and k is the corresponding right-hand-side vector of known values. For the case where A = A_F or A_L,
it is clear that A is block tridiagonal, with small square blocks, and we employ a direct block tridiagonal solver to obtain v. The case where A = A is just slightly more complicated: here A is assembled from the submatrices A_1, …, A_4, and permuting the rows and columns of A via a similarity transformation (see [LHH9]) gives a matrix A′
which is clearly block tridiagonal. Obviously, we must also permute correspondingly the entries of v (giving v′) and those of k (giving k′). We then employ a direct block tridiagonal solver on A′v′ = k′ to obtain v′.

5 Red-Black SOR for Poisson's Equation

While the equations in (9) may be solved simultaneously in parallel, we find the rate at which the sequence {ṽ^(p)} converges to ṽ to be unacceptably slow. This motivates us to seek a method with a faster convergence rate that can still take advantage of parallelism. We recall (8),

    M̃ = [ A_F  B_F               ]   0
        [ C_F  A    B            ]   1
        [      C    A    B       ]   2
        [           C    A    B  ]   3
        [                C_L  A_L ]   4    (10)

where the last column gives the block row number of M̃. Via a similarity transformation, we permute the rows and columns of M̃ (and correspondingly the entries of ṽ and k̃) in (6) and (10) to obtain

    M v = k    (11)

More precisely, M is obtained from M̃ by writing from top to bottom all the even-numbered block rows of M̃ (in ascending order), followed by all the odd-numbered block rows of M̃ (in ascending order). We abbreviate M as

    M = [ D_R  M_U ]   0
        [ M_L  D_B ]   1

Correspondingly, we write v = (v_Rᵀ, v_Bᵀ)ᵀ and k = (k_Rᵀ, k_Bᵀ)ᵀ. Analogously to (9), we split M into M = D − L − U, where

    D = [ D_R   0  ],   −L = [  0   0 ],   −U = [ 0  M_U ]
        [  0   D_B ]         [ M_L  0 ]         [ 0   0  ]

Then the standard block SOR formulation is

    (D − ωL) v^(p+1) = [(1 − ω)D + ωU] v^(p) + ωk    (12)

where the relaxation factor ω is chosen such that 1 < ω < 2. Dividing (12) into its red (top) and black (bottom) parts, we obtain
    D_R v_R^(p+1) = (1 − ω) D_R v_R^(p) − ω M_U v_B^(p) + ω k_R    (13)

and

    D_B v_B^(p+1) + ω M_L v_R^(p+1) = (1 − ω) D_B v_B^(p) + ω k_B    (14)

We now introduce the vectors

    z_c^(p+1) = v_c^(p+1) − v_c^(p)

where the color subscript c = R or B. We also introduce the color-dependent residual vectors r_c^(a,b), defined as

    r_R^(a,b) = k_R − [D_R v_R^(a) + M_U v_B^(b)]
    r_B^(a,b) = k_B − [M_L v_R^(a) + D_B v_B^(b)]

where the superscripts (a) and (b) denote iteration level. By considering (11), it is clear that these residual vectors measure how close the approximants v_R^(a) and v_B^(b) are to v_R and v_B, the components of the true solution of (11). Algebraically manipulating (13) and (14) and using the notation introduced above yields

    D_R z_R^(p+1) = ω r_R^(p,p)      (15)
    D_B z_B^(p+1) = ω r_B^(p+1,p)    (16)

which are of a form and structure very similar to that of (9). In the SOR algorithm, we compute v^(p+1) using (15) and (16). It is clear that we have still maintained a high degree of parallelism by using this red-black SOR scheme. Evidently, all of the red equations in (15) may be solved simultaneously in parallel. Once we have obtained v_R^(p+1) from (15), we may solve all the black equations in (16) simultaneously in parallel, obtaining v_B^(p+1).

Numerical results are illustrated in Figure II. We ran our version of the algorithm for both the Jacobi method and the red-black SOR method using various values of ω. We chose m = 10 and chose the boundary conditions and the function H(x, y) such that u = x sin y. Letting r^(p,p) = (r_R^(p,p)ᵀ, r_B^(p,p)ᵀ)ᵀ, our convergence criterion was that ‖r^(p,p)‖_∞ fall below a prescribed tolerance. The number of iterations the Jacobi method needed to converge is indicated by an asterisk in the middle of the graph. By comparison, at the best ω tested, the SOR method required only 19 iterations to converge. Indeed, according to the theory in [LHH9] and [You1], the optimal ω for this problem agrees well with our numerical investigation.
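The two half-sweeps (15)–(16), together with Young's classical formula ω_opt = 2/(1 + √(1 − ρ(J)²)) for consistently ordered systems with real Jacobi eigenvalues, can be sketched as follows. All names here are ours, and rather than the collocation matrix we demonstrate on a small 1D Poisson matrix permuted to odd-even (red-black) order, whose Jacobi spectral radius is known in closed form.

```python
import numpy as np

def omega_opt(rho_jacobi):
    """Young's optimal SOR factor, given the spectral radius rho < 1
    of the associated (consistently ordered) Jacobi matrix."""
    return 2.0 / (1.0 + np.sqrt(1.0 - rho_jacobi**2))

def red_black_sor(D_R, D_B, M_U, M_L, k_R, k_B, omega, tol=1e-12, max_iter=10000):
    """Iterate (13)-(16): update all red unknowns from the red residual,
    then all black unknowns from the black residual. Each half-sweep is
    parallel across the decoupled blocks of D_R (resp. D_B)."""
    v_R = np.zeros(len(k_R))
    v_B = np.zeros(len(k_B))
    for it in range(1, max_iter + 1):
        r_R = k_R - (D_R @ v_R + M_U @ v_B)              # r_R^(p,p)
        v_R = v_R + omega * np.linalg.solve(D_R, r_R)    # D_R z_R^(p+1) = omega r_R
        r_B = k_B - (M_L @ v_R + D_B @ v_B)              # r_B^(p+1,p)
        v_B = v_B + omega * np.linalg.solve(D_B, r_B)    # D_B z_B^(p+1) = omega r_B
        if max(np.abs(r_R).max(), np.abs(r_B).max()) < tol:
            break
    return v_R, v_B, it

# Demo: 1D Poisson matrix tridiag(-1, 2, -1), permuted to red-black order.
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
perm = list(range(0, n, 2)) + list(range(1, n, 2))       # evens (red), odds (black)
Ap = A[np.ix_(perm, perm)]
nr = (n + 1) // 2
D_R, M_U = Ap[:nr, :nr], Ap[:nr, nr:]
M_L, D_B = Ap[nr:, :nr], Ap[nr:, nr:]
b = np.ones(n)
w = omega_opt(np.cos(np.pi / (n + 1)))                   # rho(J) = cos(pi/(n+1))
v_R, v_B, iters = red_black_sor(D_R, D_B, M_U, M_L, b[:nr], b[nr:], w)
```

For this matrix ρ(J) = cos(π/(n+1)), so ω_opt ≈ 1.49 when n = 8; running the same loop with ω far from ω_opt takes noticeably more sweeps, which is the behavior Figure II illustrates for the collocation system.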
6 The Parabolic Equation

We now seek to solve the parabolic equation

    ∂u/∂t = ∇²u − H(x, y, t)    (17)

defined on the interior of S, discretized by the collocation method with a uniform mesh, given Dirichlet boundary conditions

    u(x, y, t) = C(x, y, t),   (x, y) ∈ ∂S.
[Figure II: results using red-black SOR to solve Poisson's equation (iterations until convergence versus ω).]

We approximate the time derivative by

    ∂u/∂t ≈ (u^(q+1) − u^(q)) / Δt    (18)

where the superscript (q) indicates the value of u after q time steps. Recalling (5), we see that the matrix M̃ was formed by evaluating ∇²û at the collocation points. Correspondingly, we form the matrix P̃ by evaluating û at the collocation points. Clearly, P̃ has precisely the same structure as that of M̃. Letting p_ij be the non-trivial entries of P̃ (just as the a_ij's in (7) are the non-trivial entries of M̃), we see that the numbers p_ij are, like the a_ij, constants involving √3, obtained by evaluating the bi-cubic basis functions at the Gauss points. If we now introduce (18) and the interpolating polynomial (4)¹ into (17) and evaluate the right side of (17) at the collocation (Gauss) points at time θ t^(q+1) + (1 − θ) t^(q), where

¹ The interpolating polynomial (4) and the forcing function now have time dependence, i.e. u_qr, u^x_qr, u^y_qr, u^xy_qr, and H are now functions also of t.
0 ≤ θ ≤ 1, then we obtain the matrix form of the collocation discretization of the parabolic PDE:

    (P̃ ṽ^(q+1) − P̃ ṽ^(q)) / Δt = θ [M̃ ṽ^(q+1) − k̃^(q+1)] + (1 − θ) [M̃ ṽ^(q) − k̃^(q)]    (19)

Letting γ = θΔt and δ = (1 − θ)Δt, we may express (19) as

    (P̃ − γM̃) ṽ^(q+1) = (P̃ + δM̃) ṽ^(q) − (γ k̃^(q+1) + δ k̃^(q))    (20)

In examining (20), we see that this equation defines how we may move from time step (q) to time step (q + 1). In particular, at time step (q), all the vectors on the right side of (20) contain known values. Letting Q̃ = P̃ − γM̃ and letting b̃^(q) denote the right side of (20), we may write (20) as

    Q̃ ṽ^(q+1) = b̃^(q)    (21)

which is of a form and structure identical to those of (6). We may therefore apply to (21) the block red-black SOR algorithm that we developed for (6). That is, at each time step in (21) we iterate to convergence using block red-black SOR.

7 Eigenvalues and Results

Using the work in [LHH9] as a guide, we determined the eigenvalues of the block Jacobi matrix one would use to solve (21). These eigenvalues may be computed by the following recipe. For k = 1, …, m − 1, let c_k = cos(kπ/m) and form auxiliary quantities r_k, each an explicit algebraic expression in c_k, √3, and a mesh parameter involving h and the time step. From these one forms the sets {λ_1, …, λ_m} and {μ_1, …, μ_m}, together with the quantities ν_jk = (λ_j − μ_j) c_k for k = 1, …, m − 1 and j = 1, …, m. Then σ(J), the set of eigenvalues of the block Jacobi matrix, consists of the μ_j together with the values ±(ν_jk + μ_j²)^(1/2) (cf. [LHH9]).
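A single time step (20)–(21) is easy to express in code. The sketch below is ours: it uses a dense direct solve where the paper iterates (21) to convergence with block red-black SOR, and the names are illustrative.

```python
import numpy as np

def theta_step(P, M, k_new, k_old, v_old, theta, dt):
    """Advance (20) by one time step:
    (P - gamma*M) v_new = (P + delta*M) v_old - (gamma*k_new + delta*k_old),
    with gamma = theta*dt and delta = (1 - theta)*dt."""
    gamma, delta = theta * dt, (1.0 - theta) * dt
    Q = P - gamma * M                       # Q-tilde of (21)
    b = (P + delta * M) @ v_old - (gamma * k_new + delta * k_old)
    return np.linalg.solve(Q, b)

# Scalar sanity check: u' = -u (P = 1, M = -1, k = 0) with theta = 1/2
# gives the trapezoidal update v_new = v_old * (1 - dt/2) / (1 + dt/2).
v1 = theta_step(np.eye(1), -np.eye(1), np.zeros(1), np.zeros(1), np.ones(1), 0.5, 0.1)
print(v1[0])   # 0.95/1.05 = 0.904761...
```

Choosing θ = 1/2 recovers a Crank–Nicolson-type scheme, θ = 1 a fully implicit step; in every case the system to be solved at each step has the structure of Q̃ in (21).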
Given this recipe for the computation of eigenvalues of the Jacobi matrix, one can use the theory in [LHH9] and [You1] to compute ω_opt for the optimal block SOR method.²

[Figure III: results using red-black SOR to solve the model parabolic equation (iterations until convergence versus ω).]

For an example, we ran both the block Jacobi method and the block SOR method for various values of ω on the parabolic problem. The boundary conditions and function H(x, y, t) were chosen such that u = xy(1 + e^(−t)). We chose θ = 1 and let the code run over one time step, from t = 0 to t = Δt = 0.1. The convergence criterion was that the infinity norm of the residual vector fall below a prescribed tolerance. For the Jacobi method, many more iterations were required for convergence. The number of iterations needed for convergence of the SOR method is illustrated in Figure III for various values of ω. The value of ω that gave us the fewest iterations agrees well with the value of ω_opt given by the theory.

² It can also be shown that all these eigenvalues must have modulus less than unity, irrespective of the value of the time step. Thus, the Jacobi method for the model parabolic problem must converge for any time step.

8 Summary

Given the work of Lai et al., we developed herein a fast and parallelizable SOR method for the numerical solution of Poisson's equation on the unit square with uniform mesh and Dirichlet boundary conditions. We then extended these techniques to the numerical solution
of a model parabolic equation. Our numerical results agree with our analytic results, showing that using our block red-black SOR method on the parabolic equation with appropriately chosen relaxation factor ω gives much faster results than does the block Jacobi method.
References

[Cel8] Celia, M. A. Collocation on Deformed Finite Elements and Alternating Direction Collocation Methods. PhD thesis, Princeton University.

[LHH9] Lai, Y.-L., Hadjidimos, A., Houstis, E. N., and Rice, J. R. On the Iterative Solution of Hermite Collocation Equations. SIAM J. Matrix Anal. Appl. (Also Technical Report, Purdue University.)

[Pap8] Papatheodorou, T. S. Block AOR Iteration for Nonsymmetric Matrices. Math. Comp.

[Pic9] Piccirilli, D. T. Using the Collocation Method with Splines under Tension and Upstream Weighting to Solve the One-Dimensional Convection-Diffusion Equation. Master's thesis, University of Vermont.

[You1] Young, D. M. (1971) Iterative Solution of Large Linear Systems. Academic Press, New York.
Introduction 1 A dierential intermediate value theorem by Joris van der Hoeven D pt. de Math matiques (B t. 425) Universit Paris-Sud 91405 Orsay Cedex France June 2000 Abstract Let T be the eld of grid-based
More informationThe WENO Method for Non-Equidistant Meshes
The WENO Method for Non-Equidistant Meshes Philip Rupp September 11, 01, Berlin Contents 1 Introduction 1.1 Settings and Conditions...................... The WENO Schemes 4.1 The Interpolation Problem.....................
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD
More informationLecture 1 INF-MAT3350/ : Some Tridiagonal Matrix Problems
Lecture 1 INF-MAT3350/4350 2007: Some Tridiagonal Matrix Problems Tom Lyche University of Oslo Norway Lecture 1 INF-MAT3350/4350 2007: Some Tridiagonal Matrix Problems p.1/33 Plan for the day 1. Notation
More informationMat1062: Introductory Numerical Methods for PDE
Mat1062: Introductory Numerical Methods for PDE Mary Pugh January 13, 2009 1 Ownership These notes are the joint property of Rob Almgren and Mary Pugh 2 Boundary Conditions We now want to discuss in detail
More informationBoundary Value Problems and Iterative Methods for Linear Systems
Boundary Value Problems and Iterative Methods for Linear Systems 1. Equilibrium Problems 1.1. Abstract setting We want to find a displacement u V. Here V is a complete vector space with a norm v V. In
More informationNumerical Methods for Engineers and Scientists
Numerical Methods for Engineers and Scientists Second Edition Revised and Expanded Joe D. Hoffman Department of Mechanical Engineering Purdue University West Lafayette, Indiana m MARCEL D E К К E R MARCEL
More informationBootstrap AMG. Kailai Xu. July 12, Stanford University
Bootstrap AMG Kailai Xu Stanford University July 12, 2017 AMG Components A general AMG algorithm consists of the following components. A hierarchy of levels. A smoother. A prolongation. A restriction.
More informationTheory of Iterative Methods
Based on Strang s Introduction to Applied Mathematics Theory of Iterative Methods The Iterative Idea To solve Ax = b, write Mx (k+1) = (M A)x (k) + b, k = 0, 1,,... Then the error e (k) x (k) x satisfies
More informationFinal Year M.Sc., Degree Examinations
QP CODE 569 Page No Final Year MSc, Degree Examinations September / October 5 (Directorate of Distance Education) MATHEMATICS Paper PM 5: DPB 5: COMPLEX ANALYSIS Time: 3hrs] [Max Marks: 7/8 Instructions
More informationGeneralized Shifted Inverse Iterations on Grassmann Manifolds 1
Proceedings of the Sixteenth International Symposium on Mathematical Networks and Systems (MTNS 2004), Leuven, Belgium Generalized Shifted Inverse Iterations on Grassmann Manifolds 1 J. Jordan α, P.-A.
More informationNumerical solution of Least Squares Problems 1/32
Numerical solution of Least Squares Problems 1/32 Linear Least Squares Problems Suppose that we have a matrix A of the size m n and the vector b of the size m 1. The linear least square problem is to find
More informationt x 0.25
Journal of ELECTRICAL ENGINEERING, VOL. 52, NO. /s, 2, 48{52 COMPARISON OF BROYDEN AND NEWTON METHODS FOR SOLVING NONLINEAR PARABOLIC EQUATIONS Ivan Cimrak If the time discretization of a nonlinear parabolic
More informationOR MSc Maths Revision Course
OR MSc Maths Revision Course Tom Byrne School of Mathematics University of Edinburgh t.m.byrne@sms.ed.ac.uk 15 September 2017 General Information Today JCMB Lecture Theatre A, 09:30-12:30 Mathematics revision
More informationK. BLACK To avoid these diculties, Boyd has proposed a method that proceeds by mapping a semi-innite interval to a nite interval [2]. The method is co
Journal of Mathematical Systems, Estimation, and Control Vol. 8, No. 2, 1998, pp. 1{2 c 1998 Birkhauser-Boston Spectral Element Approximations and Innite Domains Kelly Black Abstract A spectral-element
More informationFrom Lay, 5.4. If we always treat a matrix as defining a linear transformation, what role does diagonalisation play?
Overview Last week introduced the important Diagonalisation Theorem: An n n matrix A is diagonalisable if and only if there is a basis for R n consisting of eigenvectors of A. This week we ll continue
More informationSOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA
1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization
More informationFinite Element Methods
Solving Operator Equations Via Minimization We start with several definitions. Definition. Let V be an inner product space. A linear operator L: D V V is said to be positive definite if v, Lv > for every
More informationRational Chebyshev pseudospectral method for long-short wave equations
Journal of Physics: Conference Series PAPER OPE ACCESS Rational Chebyshev pseudospectral method for long-short wave equations To cite this article: Zeting Liu and Shujuan Lv 07 J. Phys.: Conf. Ser. 84
More informationSolving PDEs with Multigrid Methods p.1
Solving PDEs with Multigrid Methods Scott MacLachlan maclachl@colorado.edu Department of Applied Mathematics, University of Colorado at Boulder Solving PDEs with Multigrid Methods p.1 Support and Collaboration
More informationMath 577 Assignment 7
Math 577 Assignment 7 Thanks for Yu Cao 1. Solution. The linear system being solved is Ax = 0, where A is a (n 1 (n 1 matrix such that 2 1 1 2 1 A =......... 1 2 1 1 2 and x = (U 1, U 2,, U n 1. By the
More informationBoundary Value Problems - Solving 3-D Finite-Difference problems Jacob White
Introduction to Simulation - Lecture 2 Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White Thanks to Deepak Ramaswamy, Michal Rewienski, and Karen Veroy Outline Reminder about
More informationNumerical Solutions to PDE s
Introduction Numerical Solutions to PDE s Mathematical Modelling Week 5 Kurt Bryan Let s start by recalling a simple numerical scheme for solving ODE s. Suppose we have an ODE u (t) = f(t, u(t)) for some
More informationarxiv: v1 [math.na] 1 May 2013
arxiv:3050089v [mathna] May 03 Approximation Properties of a Gradient Recovery Operator Using a Biorthogonal System Bishnu P Lamichhane and Adam McNeilly May, 03 Abstract A gradient recovery operator based
More informationIterative Methods for Ax=b
1 FUNDAMENTALS 1 Iterative Methods for Ax=b 1 Fundamentals consider the solution of the set of simultaneous equations Ax = b where A is a square matrix, n n and b is a right hand vector. We write the iterative
More informationProperties of Linear Transformations from R n to R m
Properties of Linear Transformations from R n to R m MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Topic Overview Relationship between the properties of a matrix transformation
More informationToday s class. Linear Algebraic Equations LU Decomposition. Numerical Methods, Fall 2011 Lecture 8. Prof. Jinbo Bi CSE, UConn
Today s class Linear Algebraic Equations LU Decomposition 1 Linear Algebraic Equations Gaussian Elimination works well for solving linear systems of the form: AX = B What if you have to solve the linear
More informationA mixed nite volume element method based on rectangular mesh for biharmonic equations
Journal of Computational and Applied Mathematics 7 () 7 3 www.elsevier.com/locate/cam A mixed nite volume element method based on rectangular mesh for biharmonic equations Tongke Wang College of Mathematical
More informationA Simple Compact Fourth-Order Poisson Solver on Polar Geometry
Journal of Computational Physics 182, 337 345 (2002) doi:10.1006/jcph.2002.7172 A Simple Compact Fourth-Order Poisson Solver on Polar Geometry Ming-Chih Lai Department of Applied Mathematics, National
More informationThe amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.
AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative
More informationThe Complexity of Numerical Methods for Elliptic Partial Differential Equations
Purdue University Purdue e-pubs Department of Computer Science Technical Reports Department of Computer Science 1977 The Complexity of Numerical Methods for Elliptic Partial Differential Equations Elias
More informationScientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix
Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 1: Direct Methods Dianne P. O Leary c 2008
More informationOn a max norm bound for the least squares spline approximant. Carl de Boor University of Wisconsin-Madison, MRC, Madison, USA. 0.
in Approximation and Function Spaces Z Ciesielski (ed) North Holland (Amsterdam), 1981, pp 163 175 On a max norm bound for the least squares spline approximant Carl de Boor University of Wisconsin-Madison,
More informationDirect methods for symmetric eigenvalue problems
Direct methods for symmetric eigenvalue problems, PhD McMaster University School of Computational Engineering and Science February 4, 2008 1 Theoretical background Posing the question Perturbation theory
More informationSpline Element Method for Partial Differential Equations
for Partial Differential Equations Department of Mathematical Sciences Northern Illinois University 2009 Multivariate Splines Summer School, Summer 2009 Outline 1 Why multivariate splines for PDEs? Motivation
More informationBindel, Fall 2016 Matrix Computations (CS 6210) Notes for
1 Algorithms Notes for 2016-10-31 There are several flavors of symmetric eigenvalue solvers for which there is no equivalent (stable) nonsymmetric solver. We discuss four algorithmic ideas: the workhorse
More information(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by
1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How
More information1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r
DAMTP 2002/NA08 Least Frobenius norm updating of quadratic models that satisfy interpolation conditions 1 M.J.D. Powell Abstract: Quadratic models of objective functions are highly useful in many optimization
More informationSolution of the Two-Dimensional Steady State Heat Conduction using the Finite Volume Method
Ninth International Conference on Computational Fluid Dynamics (ICCFD9), Istanbul, Turkey, July 11-15, 2016 ICCFD9-0113 Solution of the Two-Dimensional Steady State Heat Conduction using the Finite Volume
More informationIntroduction. Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods. Example: First Order Richardson. Strategy
Introduction Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 Solve system Ax = b by repeatedly computing
More informationBackground. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58
Background C. T. Kelley NC State University tim kelley@ncsu.edu C. T. Kelley Background NCSU, Spring 2012 1 / 58 Notation vectors, matrices, norms l 1 : max col sum... spectral radius scaled integral norms
More informationSome definitions. Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization. A-inner product. Important facts
Some definitions Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 A matrix A is SPD (Symmetric
More informationMath 1080: Numerical Linear Algebra Chapter 4, Iterative Methods
Math 1080: Numerical Linear Algebra Chapter 4, Iterative Methods M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 March 2015 1 / 70 Topics Introduction to Iterative Methods
More informationA Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations
A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant
More information