Recovery Techniques in Finite Element Methods


Chapter 8

Recovery Techniques in Finite Element Methods

Zhimin Zhang
Department of Mathematics, Wayne State University, Detroit, MI 48202, USA
zhang@math.wayne.edu

This research was partially supported by the National Science Foundation grants DMS-03807 and DMS-062908.

Contents

8.1 Introduction and preliminary . . . . . . . . . . . . . . . . . . . . . . . . 297
8.2 Local recovery for 1D . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
    8.2.1 Motivation: linear element . . . . . . . . . . . . . . . . . . . . . 301
    8.2.2 Higher order elements . . . . . . . . . . . . . . . . . . . . . . . . 303
    8.2.3 One dimensional theoretical results . . . . . . . . . . . . . . . . . 317
8.3 Local recovery in higher dimensions . . . . . . . . . . . . . . . . . . . . 323
    8.3.1 Methods and examples . . . . . . . . . . . . . . . . . . . . . . . . 324
    8.3.2 Properties of the gradient recovery operator . . . . . . . . . . . . 331
    8.3.3 Superconvergence analysis in 2-D . . . . . . . . . . . . . . . . . . 332
8.4 Quasi-local, semi-local, and global recoveries . . . . . . . . . . . . . . . 351

8.1 Introduction and preliminary

Finite element recovery techniques are post-processing methods that reconstruct numerical approximations from finite element solutions to achieve better results. To be practically useful, a good recovery method should have the following three features:

1. It is simple to implement and cost effective. In practice, a recovery procedure takes only a very small portion of the whole computational cost.
2. It is applicable to higher dimensions.
3. It is problem independent, i.e., a recovery process uses only the numerical solution data.

In this article, we shall discuss several recovery techniques and identify the good ones. We shall concentrate on two methods: the Zienkiewicz-Zhu Superconvergence Patch Recovery (SPR) and the recently proposed Polynomial Preserving Recovery (PPR). We consider only C0 finite element methods, although generalizations to other finite element methods, such as non-conforming and discontinuous Galerkin methods, are feasible. To fix ideas for the sake of theoretical analysis, we shall use the second-order elliptic equation as our model, bearing in mind that the results are applicable to a large class of differential equations, such as the linear elasticity equations, the Reissner-Mindlin plate model, the Stokes equation, and many others. In addition, we shall concentrate on gradient recovery, since the gradient error is the dominant error in the energy norm for the finite element approximation of a second-order differential equation.

Let u be the solution of a certain differential equation and u_h its finite element approximation. The goal of a recovery technique is to construct G_h u_h, based on u_h or ∇u_h, such that

    ‖∇u − G_h u_h‖_D ≪ ‖∇u − ∇u_h‖_D                                              (8.1.1)

for some norm ‖·‖_D, where D is a union of elements, with two extremal cases: a single element and the whole solution domain. In other words, G_h u_h is a better approximation of ∇u than ∇u_h. Naturally, the mathematical background of recovery techniques is closely related to the finite element superconvergence theory.

The practical use of a recovery technique is not only to improve the quality of the approximation, but also to construct a posteriori error estimators. Based on (8.1.1), the non-computable error ‖∇u − ∇u_h‖_D can be estimated by a computable quantity η_D = ‖G_h u_h − ∇u_h‖_D. Indeed, by the triangle inequality,

    1 − ‖∇u − G_h u_h‖_D / ‖∇u − ∇u_h‖_D ≤ ‖G_h u_h − ∇u_h‖_D / ‖∇u − ∇u_h‖_D ≤ 1 + ‖∇u − G_h u_h‖_D / ‖∇u − ∇u_h‖_D.      (8.1.2)

By virtue of (8.1.1), ‖∇u − G_h u_h‖_D / ‖∇u − ∇u_h‖_D is much smaller compared with 1. Therefore,

    η_D ≈ ‖∇u − ∇u_h‖_D,                                                          (8.1.3)

and consequently η_D provides a good estimate of the true error ‖∇u − ∇u_h‖_D. This simple argument is the essential idea behind recovery type a posteriori error estimators, or the Zienkiewicz-Zhu error estimator, cf. [82].

It is observed that the approximation error has an oscillatory behavior and a periodic pattern in regions where the solution is sufficiently regular and the mesh is locally translation invariant. In this situation, some kind of averaging will smoothen the numerical solution and thereby result in a good recovery. Indeed, the earliest recovery techniques were simple averaging and weighted averaging, which were used by engineers at the very beginning of the finite element method. The motivation is natural: the computed stress from C0 finite element methods is discontinuous across elements and usually gives the worst results at element boundaries.

In his 1963 Ph.D. thesis [69], Wilson introduced a weighted average method to calculate the stress, which gave good results for both interior and boundary elements. At almost the same time, the paper by Turner et al. [59] discussed in detail an averaging technique based on an equivalence of nodal forces and element stresses. Later, Irons [35] proposed a least squares surface fitting to smooth the stress computed from the finite element method. This smoothing technique is essentially a global L²-projection, which was further developed in the 1968 Master's thesis of Hinton [32]. In this direction, see also another version of stress smoothing by Oden-Brauchli [48]. In [33], Hinton-Campbell discussed a local L²-projection to calculate the stress. In addition to function smoothing, an engineering version of the local L²-projection, they also considered discrete smoothing, a least squares fitting at the Gauss points. However, their local patch was limited to one element. As a consequence, the smoothed stress is still discontinuous across element boundaries. This problem was completely solved 20 years later, when Zienkiewicz-Zhu introduced their Superconvergence Patch Recovery (SPR) [83, 84], where the discrete least squares fitting is performed on an element patch, a set of elements surrounding the same vertex. SPR produces a continuous stress field, which is superconvergent under some mild mesh conditions. The mathematical aspects of SPR form one topic of this chapter. Soon after, Wiberg et al. incorporated equilibrium and boundary conditions to enhance SPR [66, 67] and discussed strategies to improve the finite element solution u_h itself, rather than the stress, which is essentially the gradient of u_h [68].

Recently, the author and his colleagues proposed an alternative strategy, called Polynomial Preserving Recovery (PPR) [45, 46, 75, 76, 78], to recover the gradient. Theoretical analysis has revealed that PPR has better superconvergence properties than SPR. Our numerical tests have also indicated that the a posteriori error estimator based on PPR is as good as or better than that of SPR. In this chapter, a detailed discussion of PPR will be presented.

Due to the superconvergence property of SPR, it has been used in a posteriori error estimation (the so-called ZZ estimator) for smoothing and mesh adaptation purposes from the very beginning [84]. It is now widely used in the engineering software industry, including commercial codes such as ANSYS, MSC/NASTRAN-Marc, Pro/MECHANICA (a product of Parametric Technology), and I-DEAS (a product of SDRC, part of EDS). It is also used in NASA's COMET-AR (COmputational MEchanics Testbed with Adaptive Refinement). The great success of SPR has attracted much attention in the mathematical community. Theoretical investigations have been carried out to find the hidden mathematical reasons behind SPR and, more broadly, behind recovery methods and recovery-based error estimators; see, e.g., [, 4, 5, 6, 40, 49, 50, 7 74, 77, 79] and references therein. The mathematical foundation of recovery methods is closely related to finite element superconvergence theory; in this respect, the reader is referred to the relevant contents of the books [4, 8, 9, 37, 62, 80, 8] and references therein.
While the mathematical analysis of recovery-based a posteriori error estimates is still in its infancy, the theory of residual-based error estimates has reached maturity; see, e.g., [,5 7,4,26 28, 42, 60] and references therein.
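As a minimal concrete illustration of how the estimator η_D in (8.1.2)-(8.1.3) is assembled in practice, the following sketch (our own example, assuming NumPy; it is not code from the cited references) computes the element-wise quantity η_τ = ‖G_h u_h − u_h'‖_{L²(τ)} for 1D linear elements. It uses a simple nodal averaging recovery of the kind discussed in Section 8.2 and, for simplicity, is applied to the nodal interpolant of a known function rather than to an actual finite element solution.

```python
import numpy as np

def zz_estimator_1d(x, uh):
    """Element-wise eta_tau = ||G_h u_h - u_h'||_{L2(tau)} for 1D linear elements,
    using a weighted-averaging recovery at interior nodes (minimal sketch)."""
    h = np.diff(x)                        # element lengths h_j
    d = np.diff(uh) / h                   # u_h' on each element (piecewise constant)
    G = np.empty_like(uh)                 # recovered nodal derivatives G_h u_h(x_j)
    G[1:-1] = (uh[2:] - uh[:-2]) / (h[:-1] + h[1:])   # averaging of neighboring slopes
    G[0], G[-1] = d[0], d[-1]             # simple one-sided choice at the boundary
    # On each element G_h u_h is linear and u_h' is constant, so the L2 integral
    # of their difference is computed exactly by the closed formula below.
    A, B = G[:-1] - d, G[1:] - d
    return np.sqrt(h * (A**2 + A*B + B**2) / 3.0)

# Usage: interpolate a known function on a graded mesh and form the global estimator.
x = np.linspace(0.0, 1.0, 41)**1.5
eta = zz_estimator_1d(x, np.sin(np.pi * x))
print("eta_D =", np.sqrt(np.sum(eta**2)))
```

The squared local indicators sum to η_D² over any union D of elements, which is how the global estimator in (8.1.3) is formed in an adaptive code.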

Preliminary

As mentioned earlier, the theoretical aspects of recovery techniques have much to do with finite element superconvergence theory. For this reason, we shall use the Gauss points and Lobatto points frequently. We first introduce the Legendre polynomials on [−1, 1]:

    L_0(ξ) = 1,   L_1(ξ) = ξ,   and   (j+1) L_{j+1}(ξ) = (2j+1) ξ L_j(ξ) − j L_{j−1}(ξ),   j ≥ 1.

Define

    φ_{j+1}(ξ) = √((2j+1)/2) ∫_{−1}^{ξ} L_j(t) dt,   j ≥ 1.                        (8.1.4)

It can be proved that

    φ_{j+1}(ξ) = (1/√(2(2j+1))) [L_{j+1}(ξ) − L_{j−1}(ξ)] = √((2j+1)/2) · (ξ² − 1)/(j(j+1)) · L'_j(ξ).

It is well known that the jth order Legendre polynomial L_j has j zeros in (−1, 1), which are called the Gauss points of degree j; L'_j has j − 1 zeros in (−1, 1), which, together with ±1, are called the Lobatto points, or zeros of φ_{j+1}. Here are some lower degree Gauss points (zeros of L_j) for j = 1, 2, 3, 4:

    j = 1: 0;   j = 2: ±1/√3;   j = 3: 0, ±√(3/5);   j = 4: ±√(3/7 − (4/7)√(3/10)), ±√(3/7 + (4/7)√(3/10));

and some lower degree Lobatto points (zeros of φ_{j+1}) for j = 1, 2, 3, 4:

    j = 1: ±1;   j = 2: 0, ±1;   j = 3: ±1, ±1/√5;   j = 4: 0, ±1, ±√(3/7).

We use conventional notation for the Sobolev/Hilbert spaces and norms, as in standard finite element books such as [2, 3, 22, 29]. We occasionally write a ≲ b in place of a ≤ Cb to avoid writing the bounding constant C repeatedly.

It is worthwhile to point out that the recovery techniques discussed in this article use only the numerical solution data, and therefore can be applied to other numerical methods such as finite difference and finite volume methods. As for the theoretical analysis, we consider only the finite element method and second-order differential operators. We present the discussion of recovery methods in an elementary way. Many examples will be given in full detail, since the differences between these techniques are very subtle. Most of the material should be understandable by engineers.
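As a quick computational aid (not part of the original text; it assumes NumPy and the helper names are ours), these points can be generated numerically as the zeros of L_j and of φ_{j+1}, the latter being ±1 together with the zeros of L'_j:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gauss_points(j):
    """Zeros of the Legendre polynomial L_j on (-1, 1)."""
    coeffs = np.zeros(j + 1)
    coeffs[j] = 1.0          # coefficient vector of L_j in the Legendre basis
    return np.sort(leg.legroots(coeffs))

def lobatto_points(j):
    """Zeros of phi_{j+1}: the zeros of L_j' together with +-1."""
    coeffs = np.zeros(j + 1)
    coeffs[j] = 1.0
    interior = leg.legroots(leg.legder(coeffs))
    return np.sort(np.concatenate(([-1.0], interior, [1.0])))

if __name__ == "__main__":
    for j in range(1, 5):
        print(f"j = {j}: Gauss   {np.round(gauss_points(j), 6)}")
        print(f"       Lobatto {np.round(lobatto_points(j), 6)}")
```

For j = 1, ..., 4 this reproduces, in floating point, the values listed above.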

8.2 Local recovery for 1D

We consider a function u(x) on the unit interval I = (0, 1). Any other interval can be mapped to the unit interval by a linear transformation. As for the recovery operator itself, all we need are values of u at certain points. The underlying differential equation that u satisfies comes into the picture only when theoretical analysis is involved. With nodal points 0 = x_0 < x_1 < x_2 < ... < x_N = 1, we partition I into N subintervals I_j = (x_{j−1}, x_j), j = 1, 2, ..., N, and denote

    h_j = x_j − x_{j−1},   h = max_{1 ≤ j ≤ N} (x_j − x_{j−1}).

Let u_h be the C0 finite element approximation of u on the above partition. We further denote

    x_{j−1/2} = (x_{j−1} + x_j)/2,   u_j = u_h(x_j),   u'_{j−1/2} = u'_h(x_{j−1/2}).

When the value of u' itself is involved, we simply write u'(x_j) without abbreviation.

8.2.1 Motivation: linear element

When u_h is a continuous piecewise linear function, u'_h is a (step) piecewise constant function and its values at the nodal points x_j are not defined. The idea of all these recovery methods is to construct nodal values of u'_h, denoted G_h u_h(x_j), by post-processing, and thereby to obtain a piecewise linear derivative field G_h u_h via linear interpolation. Clearly, G_h u_h is continuous at x_j.

It is well known that the midpoint x_{j±1/2} is a derivative superconvergence point for the linear finite element approximation. Therefore, it is natural to utilize those points in constructing G_h u_h(x_j). Simple averaging, weighted averaging, and SPR are all constructed along this line of thinking.

Simple averaging.

    G_h u_h(x_j) = (1/2) [u'_{j−1/2} + u'_{j+1/2}] = (1/2) [(u_j − u_{j−1})/h_j + (u_{j+1} − u_j)/h_{j+1}].

Weighted averaging.

    G_h u_h(x_j) = h_j/(h_j + h_{j+1}) u'_{j−1/2} + h_{j+1}/(h_j + h_{j+1}) u'_{j+1/2} = (u_{j+1} − u_{j−1})/(h_j + h_{j+1}).

SPR. Let p_1(x) be the linear polynomial fitted to u'_h at x_{j−1/2} and x_{j+1/2}. Then

    G_h u_h(x_j) = p_1(x_j) = [ 2(x − x_{j−1/2})/(h_j + h_{j+1}) u'_{j−1/2} + 2(x_{j+1/2} − x)/(h_j + h_{j+1}) u'_{j+1/2} ]_{x = x_j}
                 = h_j/(h_j + h_{j+1}) u'_{j−1/2} + h_{j+1}/(h_j + h_{j+1}) u'_{j+1/2},

the same as the weighted averaging.

The other two methods, the local L²-projection and PPR, do not use the derivative superconvergence points explicitly.

Local L²-projection. Find a linear function p_1(x) = a_0 + a_1 x such that

    ∫_{x_{j−1}}^{x_{j+1}} (p_1(x) − u'_h(x)) dx = 0,   ∫_{x_{j−1}}^{x_{j+1}} (p_1(x) − u'_h(x)) x dx = 0,

and set G_h u_h(x_j) = p_1(x_j). Note that u'_h has two different values on the two elements (x_{j−1}, x_j) and (x_j, x_{j+1}). From Zhang-Zhu [79, 25],

    G_h u_h(x_j) = (h_j³ − h_j² h_{j+1} + 4 h_j h_{j+1}²)/(h_j + h_{j+1})³ · u'_{j−1/2} + (4 h_j² h_{j+1} − h_j h_{j+1}² + h_{j+1}³)/(h_j + h_{j+1})³ · u'_{j+1/2}
               = (h_j² − h_j h_{j+1} + 4 h_{j+1}²)/(h_j + h_{j+1})³ · (u_j − u_{j−1}) + (4 h_j² − h_j h_{j+1} + h_{j+1}²)/(h_j + h_{j+1})³ · (u_{j+1} − u_j).

PPR. Let p_2(x) be the quadratic interpolation of u_h at x_{j−1}, x_j, x_{j+1}:

    p_2(x) = (x − x_j)(x − x_{j+1})/(h_j (h_j + h_{j+1})) u_{j−1} + (x − x_{j−1})(x_{j+1} − x)/(h_j h_{j+1}) u_j + (x − x_{j−1})(x − x_j)/(h_{j+1} (h_j + h_{j+1})) u_{j+1},

and

    G_h u_h(x_j) = p_2'(x_j)
               = (2x_j − x_j − x_{j+1})/(h_j (h_j + h_{j+1})) u_{j−1} + (x_{j−1} + x_{j+1} − 2x_j)/(h_j h_{j+1}) u_j + (2x_j − x_{j−1} − x_j)/(h_{j+1} (h_j + h_{j+1})) u_{j+1}
               = − h_{j+1}/(h_j (h_j + h_{j+1})) u_{j−1} + (h_{j+1} − h_j)/(h_j h_{j+1}) u_j + h_j/(h_{j+1} (h_j + h_{j+1})) u_{j+1}.

We see that all five methods result in finite difference schemes involving the values of u_h at x_{j−1}, x_j, x_{j+1}. In this way, we obtain values of G_h u_h at x_1, x_2, ..., x_{N−1}. If u'(0) is not provided by the problem, we also need to define G_h u_h(x_0). For the averaging methods, SPR, and the local L²-projection, a simple choice is G_h u_h(x_0) = (u_1 − u_0)/h_1. As for PPR, we set j = 1 in the expression for p_2(x) and define

    G_h u_h(x_0) = p_2'(x_0) = − (2h_1 + h_2)/(h_1 (h_1 + h_2)) u_0 + (h_1 + h_2)/(h_1 h_2) u_1 − h_1/(h_2 (h_1 + h_2)) u_2,

which is a second-order finite difference operator at x_0. Similarly, we obtain G_h u_h(x_N). Finally, by linear interpolation, we recover a piecewise linear continuous derivative field G_h u_h, which is a better approximation of u' than u'_h over (0, 1).

When h_j = h = h_{j+1}, all five methods result in the same central difference scheme. However, when h_j ≠ h_{j+1}, only PPR yields a second order finite difference scheme at x_j.
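To make the last remark concrete, the following small sketch (our own illustration, not part of the chapter) applies the simple averaging, weighted averaging/SPR, and PPR formulas above to exact values of a smooth function on a deliberately non-uniform three-point stencil (h_{j+1} = 2 h_j), so the first-order versus second-order behavior at x_j is visible directly:

```python
import numpy as np

def recover_at_node(u, xm, x0, xp):
    """Nodal derivative recovery from values u = (u_{j-1}, u_j, u_{j+1})."""
    h1, h2 = x0 - xm, xp - x0                          # h_j, h_{j+1}
    d1, d2 = (u[1] - u[0]) / h1, (u[2] - u[1]) / h2    # u_h' at the two midpoints
    simple   = 0.5 * (d1 + d2)
    weighted = (u[2] - u[0]) / (h1 + h2)               # = SPR for linear elements
    ppr = (-h2 / (h1 * (h1 + h2)) * u[0]
           + (h2 - h1) / (h1 * h2) * u[1]
           + h1 / (h2 * (h1 + h2)) * u[2])
    return simple, weighted, ppr

u, du = np.sin, np.cos          # smooth test function and its exact derivative
x0 = 0.3
for h in [0.1, 0.05, 0.025, 0.0125]:
    xm, xp = x0 - h, x0 + 2.0 * h                      # h_{j+1} = 2 h_j: non-uniform
    vals = recover_at_node(u(np.array([xm, x0, xp])), xm, x0, xp)
    errs = [abs(v - du(x0)) for v in vals]
    print(f"h = {h:7.4f}  simple {errs[0]:.2e}  weighted {errs[1]:.2e}  PPR {errs[2]:.2e}")
```

Halving h roughly halves the simple and weighted averaging errors but quarters the PPR error, in line with the observation above.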

8.2.2 Higher order elements

For higher order elements, the finite element space consists of piecewise polynomials of degree k > 1 which are continuous at each nodal point x_j. For k > 1, simple averaging and weighted averaging are no longer valid. However, the other three methods can be generalized directly. Let us address them separately.

SPR

The recovery utilizes derivative values at the Gauss points (known as superconvergence points) on an element patch J_j = (x_{j−1}, x_{j+1}) to construct a polynomial of degree k, which is a better approximation to u'(x) than u'_h(x) over I_j. Note that u'_h(x) is a piecewise polynomial of degree k − 1, which has a jump at x_j on the element patch. Without loss of generality, we shift the element patch to (−h_j, h_{j+1}) and construct

    σ_k(x − x_j) = P_k(x − x_j) a = (1, x − x_j, ..., (x − x_j)^k)(a_0, a_1, ..., a_k)^T

based on the values of u'_h at the Gauss points.

There are 2k Gauss points on an element patch containing two elements. We need at least k + 1 points in order to construct a polynomial of degree k. There are many options here. Two extremal cases are:

1. Choose k + 1 points to determine σ_k(x − x_j) by interpolation;
2. Choose 2k points to determine σ_k(x − x_j) by least squares fitting.

In between, one chooses r (k + 1 < r < 2k) points to determine σ_k(x − x_j) by least squares fitting.

Suppose that we have chosen r ≥ k + 1 Gauss points G_1, ..., G_r on (−h_j, h_{j+1}). The least squares fitting minimizes the functional

    Σ_{i=1}^{r} (u'_h(x_j + G_i) − σ_k(G_i))² = Σ_{i=1}^{r} (u'_h(x_j + G_i) − P_k(G_i) a)²
                                              = Σ_{i=1}^{r} (u'_h(x_j + G_i) − P_k(G_i/h) â)²   (when h_j = h_{j+1} = h)

with respect to â = (a_0, a_1 h, ..., a_k h^k)^T. The minimization condition yields the following linear system:

    Σ_{i=1}^{r} P_k(G_i/h)^T P_k(G_i/h) â = Σ_{i=1}^{r} P_k(G_i/h)^T u'_h(x_j + G_i),

or in matrix form,

    A^T A â = A^T b_h.                                                             (8.2.1)

where

    A^T = [ 1          1          ...   1
            G_1/h      G_2/h      ...   G_r/h
            ...        ...        ...   ...
            (G_1/h)^k  (G_2/h)^k  ...   (G_r/h)^k ],     b_h = (u'_h(x_j + G_1), u'_h(x_j + G_2), ..., u'_h(x_j + G_r))^T.

The condition number of A is independent of h for a quasi-uniform mesh. Hence the scaling introduced above, working with â rather than a, has an advantage in practical computation. Since r ≥ k + 1, Rank(A^T A) = Rank(A) = k + 1, so A^T A is invertible. Then

    â = (A^T A)^{-1} A^T b_h.                                                      (8.2.2)

Summing up, the above least squares fitting defines a recovery operator G_h such that on each element patch

    G_h u_h(x) = σ_k(x − x_j),   x ∈ (x_j − h_j, x_j + h_{j+1});

in particular,

    G_h u_h(x_j) = σ_k(0) = P_k(0) â = a_0 = α^T b_h,                              (8.2.3)

where α^T = (α_1, ..., α_r) is the first row of (A^T A)^{-1} A^T. In this way, we determine recovered derivative values at all Lobatto points on [x_{j−1}, x_j] and [x_j, x_{j+1}]. Finally, we obtain G_h u_h on (0, 1) by interpolation at the Lobatto points using the original finite element basis functions. On the overlapping part of two adjacent element patches, G_h u_h is simply chosen as the average in practical computation; for theoretical purposes, it is defined arbitrarily as either one of them.

For a quasi-uniform mesh, the l_1-norm of α is uniformly bounded with respect to h, since the condition number of A is independent of h. In other words, there is a constant C(k), depending only on k, such that

    |α|_1 = Σ_{i=1}^{r} |α_i| ≤ C(k).                                              (8.2.4)

This property is also valid for other norms. In general, G_h is a bounded operator in the sense that

    ‖G_h u_h‖ ≤ C ‖u'_h‖

for a constant C independent of h.

Since (A^T A)^{-1} A^T A = I is the identity matrix and α^T is the first row of (A^T A)^{-1} A^T, we have α^T A = (1, 0, ..., 0), i.e.,

    Σ_{i=1}^{r} α_i = 1,   Σ_{i=1}^{r} α_i G_i^m = 0,   m = 1, ..., k.             (8.2.5)

In the case h_j = h_{j+1} with k an even number, we choose an even r ≥ k + 2 and select the Gauss points x_j + G_i symmetrically with respect to x_j. More precisely, whenever G_{i_ν} appears, −G_{i_ν} also appears. In this situation, the least squares fitting produces the same weight α_{i_ν} for both G_{i_ν} and −G_{i_ν}.

8.2. LOCAL RECOVERY FOR D 305 Hence we have in this case, r i= α i G k+ i = r/2 ν= α iν [ G k+ i ν + G iν k+] = r/2 ν= α iν [ G k+ i ν G k+ i ν ] = 0. 8.2.6 The operator G h can be applied to any function whose derivative values are well defined at all Gauss points, especially to u with G h ux j = α b, b = u x j + G,,u x j + G r T. In light of 8.2.5 and 8.2.6, for sufficiently smooth u, we can show that if r k +, G h ux j = u x j + k! r i= α i G k+ i 0 s k u k+2 x j + G i sds; 8.2.7 if k even, r k + 2 even, h j = h j+, and G i s distribute symmetrically with respect to x j, then G h ux j = u x j + k +! r i= α i G k+2 i 0 s k+ u k+3 x j + G i sds. 8.2.8 We see that G h is indeed a higher-order finite difference operator. To better understand the recovery operator, we demonstrate two examples and postpone the theoretical analysis to the end of this section. Example 8.2.. SPR for quadratic element. Let u h be a piecewise quadratic function. In order to simplify the computation, only uniform mesh will be considered. Therefore, we can shift the element patch to, by ht = x x j and have σ 2 t = P 2 t T âa = â 0 + â t + â 2 t 2. The Gauss points are x j + G i = x j + g i h, i =,2,3,4, with: g = g 4 = +, 2 3 g 2 = g 3 =. 2 3 The least squares procedure yields, 4 P 2 g i P 2 g i T = i= = 4 0 4/3 0 4/3 0 4/3 0 7/9 7/2 0 0 3/4 0 0 3.

306 CHAPTER 8. RECOVERY TECHNIQUES IN FEMS If we denote b i = u h x j + G i, i =,2,3,4, then â 0 â â 2 = = 7/2 0 0 3/4 0 0 3 g g 2 g 3 g 4 g 2 g2 2 g3 2 g4 2 [3 2 3b + 3 + 2 3b 2 + 3 + 2 3b 3 + 3 2 3b 4 ]/2 3[ 3 + b 3 b 2 + 3 b 3 + 3 + b 4 ]/2 3b b 2 b 3 + b 4 /2 b b 2 b 3 b 4. After obtaining âa, we can express G h u h x explicitly and calculate the recovered derivative at any point in the element patch, especially at the nodal point x j, which associated with the local coordinate t = 0, = G h u h x j = σ 2 0 = â 0 8.2.9 3 4 u h 6 x j + G + u h x 3 j + G 4 + 4 + u h 6 x j + G 2 + u h x j + G 3 ; and at the element center x j+/2, which associated with t = /2, G h u h x j+/2 = σ 2 /2 = â 0 + + 2â 4â2 = 2 + 7 3 u h 24 x j + G 2 7 3 u h 24 x j + G 2 + 5 3 u h 24 x j + G 3 + + 5 3 u h 24 x j + G 4. 8.2.0 Since x j+/2 belongs to two element patches J j = x j,x j+ and J j+ = x j,x j+2, j =, 2,..., N 2, in practice, the recovered derivative value is taken as the following average G h u h x j+/2 = σ 2,j /2 + σ 2,j+ /2 = u h x j + G 3 + u h x j+ + G 2 4 7 3 [u h 48 x j + G 2 + u h x j+ + G 3 ] 4 + 7 3 [u h 48 x j + G + u h x j+ + G 4 ], where σ 2,j and σ 2,j+ are two quadratic polynomials obtained from patches J j and J j+, respectively; and x j+ + G 3,x j+ + G 4 are two Gaussian points in x j+,x j+2. Another strategy in determining G h u h x j+/2 is to fit a quadratic polynomial by derivative values u h at x j + G 2,x j + G 3,x j+ + G 2,x j+ + G 3. Note that x j+ + G 2 = x j + G 4. Finally, if boundary condition does not provide u 0 and u, we set G h u h x 0 = σ, Gh u h x N = σ N. 2 2

8.2. LOCAL RECOVERY FOR D 307 After determining G h u h at all nodal points and element centers, we obtain G h u h x on 0, by interpolation using the original quadratic finite element base functions. It is straightforward to verify that the recovery operator at the nodal point x j given by 8.2.9 is fourth-order, i.e., for σ C 4 h,h, 3 4 6 σg 3 + σg 4 + 4 + 6 σg 2 + σg 3 σ0 Ch4 σ 4,. The weights 3 4 + 6, 3 4 6 were used in early work for superconvergence recovery at the nodal point u 0 see Figure 8. for quadratic element under triangular mesh [2,30]. It is the simple average of two quadratic interpolations to values of tangential derivatives at G,G 2,u 0 and u 0,G 3,G 4. Here one least squares fitting of a quadratic polynomials on G,G 2,G 3,G 4 does the same. u3 τ 2 2 u2 τ 3 τ 2 G u4 + G 2 G + 0 + 3 0 G 4 + u0 τ τ 6 4 2 u τ 5 2 u5 u6 Figure 8.: All recovery: Denominator 6h. Example 8.2.2. SPR for cubic element. Now P 3 t =,t,t 2,t 3 T, and the Gauss points are: G = G 6 = 3 + h, G 2 = G 5 = 2 5 2 h, G 3 = G 4 = 3 h. 2 5 Following the same procedure as in the quadratic element case with some tedious calculation, we have 5 5 G h u h x j = 36 u h 8 x j + G + u h x j + G 6 + 2 9 u h x j + G 2 + u h G 5 5 5 + 36 + u h 8 x j + G 3 + u h x j + G 4. By the Taylor expansion, we can verify that this is a fourth order scheme.
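As a sanity check on this construction, the following sketch (our own, assuming NumPy; it feeds the fit exact derivative samples rather than u'_h, so it tests the difference operator itself) performs the Gauss-point least squares fit of (8.2.1)-(8.2.3) on a uniform patch and confirms the fourth-order behavior of the nodal recovery for both the quadratic (k = 2) and cubic (k = 3) cases discussed in Examples 8.2.1 and 8.2.2:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def spr_nodal_value(k, h, dudx):
    """Least-squares fit of a degree-k polynomial to derivative samples at the
    2k Gauss points of the uniform patch (-h, h); returns the fitted value at the
    patch center, i.e. the recovered derivative at x_j (cf. (8.2.1)-(8.2.3))."""
    g, _ = leg.leggauss(k)                               # k Gauss points on (-1, 1)
    G = np.concatenate((-h/2 + (h/2)*g, h/2 + (h/2)*g))  # Gauss points of both elements
    A = np.vander(G / h, k + 1, increasing=True)         # rows (1, G_i/h, ..., (G_i/h)^k)
    a_hat, *_ = np.linalg.lstsq(A, dudx(G), rcond=None)  # solves A^T A a = A^T b_h
    return a_hat[0]                                      # sigma_k(0) = a_0 = alpha^T b_h

# Order check at x_j = 0 for a smooth function; both k = 2 and k = 3 should give O(h^4).
dudx = lambda t: np.cos(3.0 * t + 0.4)                   # plays the role of u'(t)
for k in (2, 3):
    errs = [abs(spr_nodal_value(k, h, dudx) - np.cos(0.4)) for h in (0.2, 0.1, 0.05)]
    rates = np.log2(np.array(errs[:-1]) / np.array(errs[1:]))
    print(f"k = {k}: errors {errs}, observed orders {np.round(rates, 2)}")
```

The observed orders are close to 4, matching the claims made for the quadratic and cubic element recoveries above.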

308 CHAPTER 8. RECOVERY TECHNIQUES IN FEMS Another option is to use only 4 Gauss points in this case, for example, G 2,G 3,G 4,G 5. Then we determine a cubic polynomial by interpolation instead of least squares fitting. Denote g i = G i /h, we have which yields G h u h x j = a 0 = g 2 g2 2 g2 3 g 3 g3 2 g3 3 g 4 g4 2 g4 3 g 5 g5 2 g5 3 Again this is an order Oh 4 scheme. a 0 u h x j + G 2 a h a 2 h 2 = u h x j + G 3 u a 3 h 3 h x j + G 4, u h x j + G 5 6 7 5 5 5 5 + 34 + 5 5 5 u h x j + G 2 + u h x j + G 5 u h x j + G 3 + u h x j + G 4. Local L 2 -projection As in the least square fitting, the element patch is again chosen as x j,x j+ and shifted to h j,h j+. The local L 2 -projection is to minimize hj+ h j u h x j + x P k xa 2 dx, which yields hj+ hj+ a = P k x T P k xdx P k x T u h x j + xdx, 8.2. h j h j Again, for simplicity, we consider the special case h j+ = h j = h. By scaling x = ht, we have a = ˆP k t T ˆPk tdt ˆP k t T u h x j + htdt. 8.2.2 Here we understand u h x j + ht as piecewise polynomial of degree k on [,] with original values mapped from the physical domain [ h, h] to the reference domain [, ]. In order to avoid a numerically ill-conditioned matrix, Legendre polynomials are used as the basis functions, so ˆP k t = [L 0 t,l t,l 2 t,,l k t], where L 0 t =, L t = t, L 2 t = 3 2 t 2 /3, L 3 t = 5 2 t t 2 3/5,. By doing so, we are able to take advantage of the orthogonal property of the Legendre polynomials which leads to a diagonal matrix ˆP k t T ˆPk tdt = L i tl j tdt k i,j=0 = 2 diag, 3, 5, 7,,. 8.2.3 2k +

8.2. LOCAL RECOVERY FOR D 309 On the other hand, since we know u h x j + ht at k Gauss points in,0 and k Gauss points in 0,, and ˆP k t T u h x j + ht contains polynomials of degree no more than 2k on,0 and 0,. By the k-points Gauss quadrature rule = ˆP k t T u h x j + htdt = 2k i= A i P k g i T u h x j + G i A A 2 A 2k A L g A 2 L g 2 A 2k L g 2k...... A L k g A 2 L k g 2 A 2k L k g 2k u h x j + G u h x j + G 2. u h x j + G 2k, 8.2.4 where A i i 2k are the weight for Gauss quadrature. Note that for the uniform mesh, A i+ = A 2k i, 0 i n. In general, this strategy does not yield superconvergent recovery. We have seen that for linear element the local L 2 -projection results in superconvergence recovery only for uniform mesh. We shall demonstrate here that for even order element, it does not yields superconvergence recovery even for uniform mesh. Example 8.2.3. Local L 2 -projection for quadratic element. Let P 2 t = [L 0 t,l t,l 2 t] and g i, i =,2,3,4, be the Gauss points in,0 0,. Then ˆP 2 t T ˆP2 tdt = 2 ˆP 2 t T u h tdt = 2 0 0 0 /3 0 0 0 /5 where b i = u h x j + G i. Thus from 8.2., a 0 a a 2 = 8 = 2 8, g g 2 g 3 g 4 3g 2 /2 3g2 2 /2 3g2 3 /2 3g2 4 /2 0 0 0 3 0 0 0 5 4 b j, 6 2 2 2 2 2g 2g 2 2g 3 2g 4 3g 2 3g2 2 3g2 3 3g2 4 4 g j b j, 5 T 4 3gj 2 b j. b b 2 b 3 b 4 b b 2 b 3 b 4, 8.2.5

30 CHAPTER 8. RECOVERY TECHNIQUES IN FEMS Then the recovered nodal derivative is G h u h x j = a 0 a 2 2 4 9 = 6 5 6 g2 i b i i= = 4 5 3 b + b 4 + 32 4 + 5 3 b 2 + b 3. 32 By the Taylor s expansion, we can verify that the above scheme is only second-order in the sense that 4 5 3 σg + σg 4 + 32 4 + 5 3 σg 2 + σg 3 σ0 = h2 32 96 σ 0 +. Therefore the local L 2 -projection does not lead to desired superconvergence at the nodal point for quadratic element even under the uniform mesh. This behavior was reported numerically in the original work of Zienkiewicz-Zhu in 992 [83]. If reduced integration is applied to the term L2 2 tdt, we will have 8 4 3gi 2 2 = 3 8, instead of 2/5 as the last entry of the matrix P 2t T P 2 tdt. Consequently, i= a 2 = 2 4 3gi 2 b i, 3 i= 4 7 G h u h x j = 2 g2 i b i i= 3 = 4 b + b 4 + 6 4 + 3 b 2 + b 3, 6 which is the same nodal recovery as the least square fitting compare with 8.2.9. Example 8.2.4. Local L 2 -projection for cubic element. Let In this case, the weights for Gauss quadrature are: P 3 t = [L 0 t,l t,l 2 t,l 3 t]. A = A 3 = A 4 = A 6 = 5 8, A 2 = A 5 = 4 9.

8.2. LOCAL RECOVERY FOR D 3 Then by 8.2.-8.2.4 a 0,a,a 2,a 3 T 0 0 0 = 0 3 0 0 36 0 0 5 0 0 0 0 7 =,3,4,6 5 8 5 5g 8g 2 5g 6 5 3 2 g2 2 83 2 g2 2 2 5 5 2 g2 3 2 g 8 5 2 g2 2 3 2 g 2 53 2 g2 6 2 5 5 2 g2 6 3 2 g 6 5 36 b + b 3 + b 4 + b 6 + 2 9 b 2 + b 5 5 2 g b + g 3 b 3 + g 4 b 4 + g 6 b 6 + 2 3 g 2b 2 + g 5 b 5 25 gj 2 24 3 b j + 5 gj 2 3 3 b, j j=2,5 b b 2. b 6 = G h u h x j = a 0 a 2 2 5 48 5 5 u h 96 x j + G + u h x j + G 6 + 7 24 u h x j + G 2 + u h x j + G 5 5 + 48 + 5 5 u h 96 x j + G 3 + u h x j + G 4. 8.2.6 Unlike quadratic element, for the cubic element with uniform mesh, the recovered derivative by the local L 2 -projection is superconvergent at the nodal point. This is also observed numerically in [83]. Different from Example 8.2.3 for quadratic element, reduced integration will not yield the same nodal recovery as the least squares fitting, since the weights A i s are not all equal here. PPR Observe that values of u h can be expressed by nodal values of u h. Therefore, instead of carrying recovery by u h at the Gauss points, we may perform it based on nodal values of u h. Indeed, this is the basic idea of PPR. Procedure of PPR: On an element patch [x j,x j+ ], we select r k + 2 Lobatto points and fit a polynomial p P k+, in the least squares sense, by values of u h at those Lobatto points. The recovered derivative on the element patch is then defined as G h u h x = p x, x [x j,x j+ ]. To construct the recovered derivative, we shift the element patch to [ h j,h j+ ] as in the SPR and determine px = P k+ x x j a =,x x j,,x x j k,x x j k+ a 0,a,,a k,a k+ T, based on u h x j + τ i, where τ i, i =,,r, are the Lobatto points on [ h j,0] [0,h j+ ].

32 CHAPTER 8. RECOVERY TECHNIQUES IN FEMS There are 2k + Lobatto points on [ h j,0] [0,h j+ ]. We need at least k + 2 points in order to construct a polynomial of degree k +. Therefore, r can be any number between k + 2 and 2k +. The least squares fitting is to minimize the functional r u h x j + τ i px j + τ i 2 = i= when h j = h j+ = r u h x j + τ i P k+ τ i a 2 i= r u h x j + τ i P k+ τ i /hâa 2, i= with respect toâa = a 0,a h,,a k h k,a k+ h k+ T. The minimization condition yields the following linear system r r P k+ τ i /h T P k+ τ i /h âa = P k+ τ i /h T u h x j + τ i, i= i= or in the matrix form B T Bâa = B T b h, 8.2.7 where B T = τ /h τ 2 /h τ r /h......, b h = τ k+ /h k+ τ2 k+ /h k+ τr k+ /h k+ u h x j + τ u h x j + τ 2. u h x j + τ r. Note, for the quasi-uniform mesh, the condition number of B is independent of h. Since r k + 2, RankB T B = RankB = k + 2, B T B is invertible. Then âa = B T B B T b h. 8.2.8 Now G h u h x = p x on I j = x j h j,x j + h j+, and especially, G h u h x j = p x j = a when h j = h j+ = hâ h = h βt b h, where β T is the second row of B T B B T. In this way, we determine G h u h at all Lobatto points on [x j,x j ] and [x j,x j+ ]. After doing so, we obtain a G h u h on the whole domain 0, by interpolation using the original finite element basis functions. Therefore, G h u h S h, which is a continuous piecewise polynomials of degree k. Since B T B B T B is the identity matrix, then β T B = 0,,0,,0, and hence, r β i = 0, i= r β i τ i /h =, i= r β i τi m = 0, m = 2,...,k +. i=
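A compact computational sketch of this procedure (our own, assuming NumPy, with a uniform patch h_j = h_{j+1} = h for simplicity) fits p ∈ P_{k+1} to values sampled at the 2k + 1 Lobatto points of the patch, solves the least squares problem (8.2.17)-(8.2.18) with the scaled monomial basis, and returns p'(x_j). When the sampled function itself lies in P_{k+1}, the recovered derivative is exact, which is the polynomial preserving property the method is named after:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def ppr_nodal_derivative(k, h, uh_fn):
    """PPR on a uniform patch (-h, h): least-squares fit of p in P_{k+1}
    to values of the sampled function at the 2k+1 Lobatto points; return p'(0)."""
    c = np.zeros(k + 1); c[k] = 1.0
    lob = np.concatenate(([-1.0], np.sort(leg.legroots(leg.legder(c))), [1.0]))
    tau = np.concatenate((-h/2 + (h/2)*lob, h/2 + (h/2)*lob[1:]))  # shared point 0 once
    B = np.vander(tau / h, k + 2, increasing=True)       # rows (1, t/h, ..., (t/h)^{k+1})
    a_hat, *_ = np.linalg.lstsq(B, uh_fn(tau), rcond=None)
    return a_hat[1] / h                                  # p'(0) = a_1

# Example: quadratic elements (k = 2); sampling a cubic gives the exact slope at x_j.
k, h = 2, 0.2
u = lambda t: 1.0 + 2.0*t - 5.0*t**2 + t**3              # a function in P_{k+1}
print(ppr_nodal_derivative(k, h, u), "vs exact u'(0) =", 2.0)
```

In an actual computation the sampled values would be the finite element solution u_h at the Lobatto points, as described above.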

8.2. LOCAL RECOVERY FOR D 33 The recovery operator G h can be applied to any function whose values are well defined at those Lobatto points, especially, to a sufficiently smooth function u. In this case, we replace b h with b = ux j + τ,ux j + τ 2,,ux j + τ r T and expand each ux j + τ i at x j with its Taylor series. We can show that if r k +, G h ux j = u x j + k +! r i= β i h τk+2 i 0 s k+ u k+2 x j + τ i sds; 8.2.9 if k even, r k + 2 even, h j = h j+, and x j + τ i s distribute symmetrically with respect to x j, then G h ux j = u x j + k + 2! r i= β i h τk+3 i 0 s k+ u k+3 x j + τ i sds. 8.2.20 Example 8.2.5. PPR for quadratic element. Consider uniform mesh. We first shift x j to 0 and then scale with mesh size h. Then we need to find a cubic polynomial p 3 t = â 0 + â t + â 2 t 2 + â 3 t 3 based on five Lobatto points:, /2,0,/2,. Now B T Bâa = B T b h is 5 0 5/2 0 0 5/2 0 7/8 5/2 0 7/8 0 0 7/8 0 65/32 where we find âa, especially, â 0 â â 2 â 3 u h x j h = /2 0 /2 u h x j h/2 /4 0 /4 u h x j + h/2, /8 0 /8 u h x j + h G h u h x j = p 3 0/h = â /h = 4 3h [u hx j + h/2 u h x j h/2] + 6h [u hx j u h x j+ ]; 8.2.2 G h u h x j+/2 = p 3 /2 h = â + â 2 h + 3â 3 4h. 8.2.22 We can show that 8.2.2 is a fourth order finite difference scheme by the Taylor expansion, i.e., 4 3h [ux j + h/2 ux j h/2] 6h [ux j + h ux j h] u x j Ch 4 u 5,, for u C 5 [x j h,x j + h]; while the finite difference operator obtained from 8.2.22 is 3rd-order. Since for j =,,N 2, x j+/2 belongs to two patches J j = x j,x j+ and J j+ = x j,x j+2, in practice, G h u h x j+/2 is taken as the following average G h u h x j+/2 = h p 3,j /2 + p 3,j+ /2, 8.2.23 where P 3,j and p 3,j+ are two cubic polynomials obtained from patches J j and J j+, respectively. We

34 CHAPTER 8. RECOVERY TECHNIQUES IN FEMS can show that the average strategy 8.2.23 results in a 4th-order finite difference scheme. Another strategy is to use the four nearby u h values to determine G h u h x j+/2 = 4 3h [u hx j+ u h x j ] 6h [u hx j+ + h/2 u h x j h/2]. If u 0 and u are not given by the problem, we set G h u h x 0 = p 2, h and both of them are 3rd-order finite difference schemes., G h u h x N = p 2,N ; h As we can see, the recovered derivative from PPR is 4th-order at vertices as well as interior element centers and 3rd-order at two boundary points. By quadratic interpolation, we then have a 3rd-order global recovery. Now we show that for uniform mesh, the finite difference scheme 8.2.2 produced by PPR is the same as 8.2.9 by SPR. On [x j,x j+ ], we may represent u h x, and consequently, u h x by its three nodal values, x u h x = 2 h x 2 h u j + 4 x h u h x j + G 3 = h + 2 3 u h x j + G 4 = h x h x h u j+ ; 2 u j+/2 + 2 x h u j + h 4 u j+/2 + h 2 u j+, 3 3 + 2 u j h 4 u j+/2 + h + 2 u j+. 3 3 3 Similarly, we may represent u h x j + G and u h x j + G 2 by u j,u j /2, and u j. Substituting u h x j + G k, k =,2,3,4 into 8.2.9, we obtain G h u h x j = 4 3h u j+/2 u j /2 6h u j+ u j, 8.2.24 which is the same as 8.2.2 since u j+/2 = u h x j + h/2, Another possibility to determine p P k+ is by the local L 2 -projection. The following example demonstrates that it is not a good choice. Example 8.2.6. Local L 2 -projection with nodal values. Consider quadratic element. Again, we map everything into [, ] and determine p 3 t = â 0 + â t + â 2 L 2 t + â 3 L 3 t by minimizing a cubic polynomial p 3 t u h t 2 dt

8.2. LOCAL RECOVERY FOR D 35 with the interpolation u h t = u N t + u /2 N /2 t + u 0 N 0 t, for t [,0], where N t = t2t +, N /2 t = 4tt +, N 0 t = 2t + t + ; and u h t = u 0 N 0+ t + u /2 N /2 t + u N t, for t [0,], where N 0+ t = 2t t, N /2 t = 4tt, N t = t2t. The local L 2 -projection results in the following linear system 2diag,/3,/5,/7âa = B T u, where u T = u,u /2,u 0,u /2,u, B = 0 0 0 0 N t N /2 t N 0 t N /2 t N t 0 0 0 0 tn t tn /2 t tn 0 t tn /2 t tn t 0 0 0 0 L 2 tn t L 2 tn /2 t L 2 tn 0 t L 2 tn /2 t L 2 tn t 0 0 0 0 L 3 tn t L 3 tn /2 t L 3 tn 0 t. L 3 tn /2 t L 3 tn t Here N 0 t = Performing some simple calculation, we find { N0 t t [,0] N 0+ t t [0,]. 2 8 8 8 2 B T = 2 4 0 4 2 2 2 0 2 Hence, âa can be obtained explicitly and hence G h u h t = p 3 t, especially, p 3 0 = â 3â 3 /2, with â = 4 [ ] 2u/2 u /2 + u u, â3 = 7 [ u u 2u 24 /2 u /2 ]. Transforming back to the physical domain, we have G h u h x j = 8h u j+/2 u j /2 3 6h u j+ u j.

36 CHAPTER 8. RECOVERY TECHNIQUES IN FEMS This is only a second order scheme since for a smooth function u, 8h ux j + h/2 ux j h/2 3 6h ux j + h ux j h u x j = h2 u x j + Oh 4. 92 For linear element, using the local L 2 -projection to determine a quadratic polynomial p 2 t = â 0 + â t + â 2 L 2 t results in a central difference scheme. In fact, the mapped interpolation on [,] is and the linear system is u h t = { u t + u 0 + t t [,0] u 0 t + u t t [0,] = 2diag,/3,/5â 0,â,â 2 T,t,L 2 t T u h tdt = u 2 + u 0 + u 2, u u, T. 3 Therefore, p 2 0 = â = u u /2, and the recovered derivative at x j is the central difference scheme, G h u h x j = u j+ u j, 2h which is second-order. Condition numbers for different polynomial bases When higher-order polynomials are involved, we recommend using orthogonal polynomial basis functions for the least squares fitting, in order to avoid ill conditioning. We explain this point in the one dimensional setting. Consider the standard least squares fitting problem: Given n distinct numbers t j in 0,, and data set {b j } n, find p P k, such that n pt j b j 2 = min q P k n qt j b j 2. If we use the conventional basis functions,t,t 2,...,t k, the coefficients of p are given by A T A A T b, where b = b,b 2,,b n T, and t t k t 2 t k 2 A =....... t n t k n

8.2. LOCAL RECOVERY FOR D 37 The l,m entry of A T A is, n t l+m 2 j n 0 t l+m 2 dt = n l + m, when n is sufficiently large and t j s are properly distributed. Therefore, A T A behaves like the Hilbert matrix and has a condition number of order O0 k. A better choice for the basis function would be the Legendre polynomials on the unit interval in which case the coefficients of p is given by B T B B T b with The l,m entry of B T B is n L l t j L m t j n L t L k t L t 2 L k t 2 B =....... L t n L k t n 0 L l tl m tdt = 2n 2m δ lm, l + m > 2, when n is sufficiently large and t j s are properly distributed. Therefore, B T B behaves like the diagonal matrix ndiag,2/3, 2/2k +. 8.2.3 One dimensional theoretical results By studying the resulting finite difference schemes of different recovery techniques, we have the following observation:. Both simple averaging and weighted averaging are not easy to generalize to higher-order elements. 2. Local L 2 -projection with either the derivative u h Example 2.3 or the solution u h Example 2.6 does not yield a superconvergence derivative recovery operator for even-order elements under uniform mesh. 3. SPR results in superconvergence recovery operators under uniform mesh. 4. PPR results in superconvergence recovery operators under any mesh. Based on the above observation, we shall analyze only SPR and PPR here. As mentioned earlier, we need the explicit form of differential equations for theoretical analysis. Consider the following two-point boundary value problem: a 2 xu a xu + a 0 xu = f in I = 0,, u0 = u = 0. 8.2.25 We assume that a i and f are sufficiently smooth. We also assume that a 2 x α > 0 for all x Ī.

38 CHAPTER 8. RECOVERY TECHNIQUES IN FEMS The weak formulation of 8.2.25 is to find u H0 I such that a 2 u,v + a u,v + a 0 u,v = f,v, v H 0I. 8.2.26 Denote the partition T h = {x i } N i=0 and define the finite element space S h = {v H I, v Ij P k I j }, S 0 h = {v H 0I, v Ij P k I j }. We see that S h and Sh 0 are the spaces of continuous piecewise polynomials of degree not exceeding k on I under the subdivision T h. The finite element solution of 8.2.26 is to find u h Sh 0 such that a 2 u h,v + a u h,v + a 0 u h,v = f,v, v S 0 h. 8.2.27 Recall the Gauss and Lobatto points from Introduction, and we denote by g k,...,gk k the zeros = ; then g k j, j =,...,k, of L k ξ, and by l k,...,lk k the zeros of L k x with lk 0 =, l k k are called the Gauss points of order k, and l k j, j = 0,,...,k, the Lobatto points of order k. The Gauss and Lobatto points on I i are defined as the affine transformations of g k j respectively: G ij = 2 x i + x i + h i g k j, j =,,k, L ij = 2 x i + x i + h i l k j, j = 0,,,k. Here the index k on G ij and L ij is dropped in order to simplify the notation. and l k j to I i, Recall from the previous section that the recovered derivative is a continuous piecewise polynomial of degree k as u h, G h u h S h, which can be uniquely determined by its values at the Lobatto points. The values of the recovered derivative at the Lobatto points are obtained by either SPR or PPR. The first step of our analysis is to reduce 8.2.27 to a simpler problem. Subtracting 8.2.27 from 8.2.26 yields a 2 u u h,v + a u u h,v + a 0 u u h,v = 0, v S 0 h. 8.2.28 Let ũ h S 0 h be given by u ũ h,v = 0, v S 0 h. 8.2.29 Then we have the following super-approximation and ultra-approximation results between u h and ũ h see [62, Theorem.3. and Remark.3.]: Lemma 8.2.. Let u h, ũ h satisfy 8.2.28, 8.2.29, respectively. Then there exists a constant C, independent of h and u, such that u h ũ h L I Ch k+ u W k+ I. 8.2.30

8.2. LOCAL RECOVERY FOR D 39 For the special case when k 2, a 2 =, and a = 0, we have u h ũ h L I Ch k+2 u W k+ I. 8.2.3 By virtue of Lemma 8.2., we can reduce our discussion to a simple case: u = f in I = 0,, u0 = u = 0; or u,v = f,v, v H 0I, 8.2.32 since the finite element solution of 8.2.32 satisfies 8.2.29. In the following, we shall construct the finite element solution u h Sh 0 for 8.2.32 and prove superconvergence and ultraconvergence properties of the recovered derivative. We characterize Sh 0 by the following basis functions, cf. [57, p.38]: S 0 h = Span{N ix,i =,,N ; φ jl x,j =,,N,l = 2,,k}. Here, + x x i /h i, x I i, N i x = + x i x/h i+, x I i+, 0, otherwise is the usual finite element tent basis function, φ jl is a bubble function with support on I j and its value on I j is defined as follows: φ jl x = φ jl x j ξh j /2 = φ l ξ, ξ,, where φ l ξ is defined by 8..4. Observe that φ l = φ l = 0, φ l ξdξ = 0, φ l ξφ m ξdξ = 0, l m. We then have, φ jl x j = φ jl x j = 0, 0 0 N i xφ jl xdx = 0, φ jl xφ imxdx = 0, i j or l m. These orthogonality properties greatly simplify our analysis. We are able to express explicitly the finite element solution of 8.2.32 on I i as where c il = f,φ il /φ il,φ il. u h x = ux i N i x + ux i N i x + k c il φ il x, 8.2.33 Theorem 8.. Let u be the solution of 8.2.32, and let u h be its finite element approximation on S 0 h. l=2

320 CHAPTER 8. RECOVERY TECHNIQUES IN FEMS Assume that u is a polynomial of degree not greater than k + on an element patch J i = x i,x i+. Then G h u h = u on J i for both SPR and PPR. Proof. From 8.2.33, we have, on I i, where u h x = c i + r l=2 c i = u,χ Ii χ Ii,χ Ii = ux i ux i h i, and χ Ii is the characteristic function of I i. By the definition of φ il, we see that Span{χ Ii x,φ il x,l = 2,...,k + } = P ki i. When u P k+ J i, we have u P k I i, and therefore c il φ il x, 8.2.34 k+ u u,φ il x = c i + φ l=2 ik,φ il φ il = c k+ u,φ il i + φ l=2 il,φ il φ il = u h x + c i,k+φ i,k+ x. 8.2.35 Note that φ i,k+ x is linearly dependent on the kth-degree Legendre polynomial on I i; therefore, it vanishes at the k Gauss points G il of I i ; i.e., φ i,k+ G il = 0, l =,...,k. Hence, Applying the same argument on I i+, we have u h G il = u G il, l =,...,k. 8.2.36 u h G i+,l = u G i+,l, l =,...,k. 8.2.37 In SPR, G h u h is a polynomial of degree k on J i and fits u, a polynomial of the same degree, in a least squares sense at the 2k k Gauss points on the element patch J i since u h equals u at these points. Therefore, G h u h = u on J i. Now we turn to PPR. In light of 8.2.35, the solution u P k+ J i can be expressed as Since { ci,k+ φ ux = u h x + i,k+ x x I i c i+,k+ φ i+,k+ x x I i+ φ i,k+ L ij = 0, φ i+,k+ L i+,j = 0, j = 0,,...,k, we have u h = u at the 2k + k + 2 Lobatto points on J i. Note that L k ik = Lk i+,0. Recall the procedure for PPR, a polynomial p k+ P k+ fits u h, in the least squares sense, at no less than k + 2 Lobatto points. There must be p k+ = u, and consequently, G h u h = u on J i. A direct consequence of Theorem 8. is the following superconvergence property.

8.2. LOCAL RECOVERY FOR D 32 Theorem 8.2. Let u be the solution of 8.2.26, and let u h be its finite element approximation on Sh 0. Then there exists a constant C, independent of h and u, such that for both SPR and PPR, at an interior node x i, u x i G h u h x i Ch k+ u W k+2 J i + u W k+ I. 8.2.38 For the special case a 2 = and a = a 0 = 0, we have u x i G h u h x i Ch k+ u W k+2 J i. 8.2.39 Proof. The proof of 8.2.39 follows from Theorem 8. and the standard argument by applying the Bramble-Hilbert Lemma. The proof of 8.2.38 follows from Lemma 8.2. and 8.2.39 for the special case. Based on Theorem 8., we can further prove the ultraconvergence result. Theorem 8.3. Let u be the solution of 8.2.26 when a 2 = and a = 0, and let u h be its finite element approximation on Sh 0 with k 2 an even number. If the two elements on the element patch J i have the same length, i.e., h i = h i+, then there exists a constant C, independent of h and u, such that, for both SPR and PPR, at an interior node x i, u x i G h u h x i Ch k+2 u W k+3 J i + u W k+ I. 8.2.40 Assuming further that a 0 = 0, we have u x i G h u h x i Ch k+2 u W k+3 J i. 8.2.4 Proof. We first prove 8.2.4. Associated with any interior node x i, i =,...,N, there is an element patch J i = x i h i,x i +h i recall that h i = h i+, and a linear mapping F i from Î =, onto J i defined by x = x i + h i ξ. Given any function v on J i, we define ˆv = v F i, or ˆvξ = vf i ξ = vx i + h i ξ. Now, consider u x i G h u h x i =< I h i u G h u h,δ h i >= h i < Ih i u G h u h, ˆδ h i >= h i Eû. 8.2.42 Here, ˆδ i h = δ h F i with δ h, the discrete delta function, and Ii hu S h satisfies Ii hu x i = u x i, Ii hu G h u h Sh 0. Obviously, Eû is a linear functional which is bounded in W k+2 Î. We shall show that Eû vanishes when û is a polynomial of degree not greater than k +. Let k = 2s. We examine the case when x s+ xi xi+ x s+ ux = a, a 0, 8.2.43 h i on J i. Note that u x i = 0, and ux is symmetric with respect to x i on J i so is u x. By definition, h i

322 CHAPTER 8. RECOVERY TECHNIQUES IN FEMS φ il and φ i+,l are symmetric antisymmetric with respect to x i when l is even odd. Therefore, c il = u,φ il /φ il,φ il = u,φ i+,l /φ i+,l,φ i+,l = c i+,l, l = 2m, c il = c i+,l, l = 2m. 8.2.44 Recalling 8.2.33, we have, on J i, s s c i,2m φ i,2m x + c i,2m φ i,2m x, x I i u h x = an i x + m= m= s s c i,2m φ i+,2m x c i,2m φ i+,2m x, x I i+, m= a/h i + s s m= c i,2mφ i,2m x + c i,2m φ i,2m x, x I i u h x = m= s s a/h i + c i,2m φ i+,2mx c i,2m φ i+,2m x, x I i+. Observe that, for 0 τ h i, m= m= m= φ i,2m x i τ = φ i+,2m x i + τ, φ i,2m x i τ = φ i+,2m x i + τ, φ i,2m x i τ = φ i+,2m x i + τ, φ i,2m x i τ = φ i+,2m x i + τ, so that u h x i τ = u h x i + τ, u h x i τ = u h x i + τ, 0 τ h i. 8.2.45 By recovery procedures, for SPR k G h u h x i = α j [u h G ij + u h G i+,k j+], 8.2.46 where α j s are weights of the least squares fitting for SPR; and for PPR, G h u h x i = h i k β j [u h L ij u h L i+,k j+ ], 8.2.47 j=0 where β j s are weights of the least squares fitting for PPR. Note that when h i = h i+, the Gauss Lobbato points and weights are distributed symmetrically on J i with respect to x i. By symmetry, we see that x i G ij = G i+,k j+ x i, x i L ij = L i+,k j+ x i, and we set these values as τ in 8.2.45 to obtain u h L ij = u h L i+,k j+, u h G ij = u h G i+,k j+.

8.3. LOCAL RECOVERY IN HIGHER DIMENSIONS 323 We then have from 8.2.46 and 8.2.47 G h u h x i = 0 = u x i, 8.2.48 for both SPR and PPR, when u is given by 8.2.43. Since any u P k+2 J i k = 2s can be decomposed into x s+ xi xi+ x s+ ux = a + wx h h for some a R and w P k+ J i. From Theorem 8. and 8.2.48 we see that G h u h x i = u x i u P k+2 J i, 8.2.49 i.e., the linear functional Eû vanishes for all û P k+ Î. Therefore, by the Bramble-Hilbert Lemma, we have Eû C ˆδ i h W 0 Î û W k+2 Î Ch δ h i W 0 J i hk+2 u W k+2 J i = Chk+ u W k+3 J i. Note that δi h W 0J Ck, a constant depending on k only. Combining 8.2.42 and 8.2.49, we i obtain 8.2.4. Finally, 8.2.40 follows from 8.2.4 and Lemma 8.2.. Several simple remarks are listed below. The ultraconvergence recovery result is local with regard to the mesh. If we want the ultraconvergence recovery at the node x i, we only need to use uniform meshes adjacent to x i. The ultraconvergence recovery for the general case where a 0 is not known since we have only the super-approximation result 8.2.30 instead of the ultra-approximation result 8.2.3 in general. The generalization of the result to the higher-dimensional tensor product case is not straightforward. 8.3 Local recovery in higher dimensions We mainly discuss SPR and PPR with more emphasis on PPR. Other three methods: simple averaging, weighted averaging, and the local L 2 -projection will be considered only for linear element, since averaging is difficult to be generalized for higher-order elements, and the local L 2 -projection is inferior comparing with SPR as observed by Zienkiewicz-Zhu in their 992 work [83, 84]. Let S h be a polynomial finite element space of degree k over a triangulation T h. We define a gradient recovery operator G h : S h S d h, with d = 2,3. Given a finite element solution u h, we first define G h u h at certain nodes. When d = 2, there are three types of nodes: vertices, edge nodes, and internal nodes. When d = 3, there is one more type: the surface node. For the linear element, all nodes are vertices. For the quadratic element, there are vertices and edge-center nodes. For the cubic or higher-order elements, all types of nodes are present. After defining values of G h u h at all nodes,

324 CHAPTER 8. RECOVERY TECHNIQUES IN FEMS we obtain G h u h Sh d on the whole domain by interpolation using the original nodal shape functions of S h. In general, a recovery procedure generates a finite difference scheme at each node z i, n G h vz i = C j vz ij, n C j = 0. 8.3. The task of our recovery is to determine the coefficients C j s. 8.3. Methods and examples In practice, the least squares fitting is performed with scaling, since it is the relative position of those sampling points that counts. For simplicity, we use simple examples on uniform meshes to illustrate the idea and keep in mind that the least squares procedure is applicable to arbitrary and anisotropic meshes. SPR In higher dimensional case, the first step is to choose an assembly node z i and an element patch ω i. The assembly node is usually a vertex and naturally, all elements that share this vertex form the element patch. The second step is to select sampling points z ij, where gradient values of the numerical solution are being picked. The third step is to fit a polynomial p l P k in the least squares sense by l u h z ij, i.e.: Find p l P k such that p l l u h 2 z ij = min q l u h 2 z ij, l d, 8.3.2 j q P k where d is the space dimension. Finally define the recovered gradient on ω i by j G h u h = p,...,p d. 8.3.3 Now the question is: How to select sampling points z ij? It is natural to use the Gauss points for rectangular or quadrilateral element, and to use barycentric centers for linear triangular and tetrahedral elements. However, the situation is more complicated for higher-order triangular and tetrahedral elements. In their 992 paper [83], Zienkiewicz-Zhu proposed to use edge centers as sampling points for quadratic triangular element. It worked astonishingly well and two-order superconvergence recovery was observed at interior vertices for uniform triangular mesh of the regular pattern. As for cubic and higher-order elements, there is no general guideline so far. Example 8.3.. Linear element under the regular triangulation. An element patch with horizontal and vertical edge length h is displayed by Figure 8.. As in the one-dimensional case, we scale the patch by a factor h with x = hξ and y = hη. Then we least squares fit p x,y =,x,ya 0,a,a 2 T =,ξ,ηâ 0,â,â 2 T with respect to the six derivative values at the barycentric center of each element on the patch. The ξ-,
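To close this set of examples, here is a small numerical check (our own sketch, assuming NumPy; the barycenter coordinates are as reconstructed from Example 8.3.1) of the claim that, on the regular-pattern patch, the SPR plane fit through the six barycenter derivative values returns their plain average at the patch center:

```python
import numpy as np

# Scaled barycenter coordinates (xi, eta) of the six triangles in the regular-pattern
# patch of Example 8.3.1 (patch vertex at the origin, mesh size scaled out).
xi  = np.array([ 1, -1, -2, -1,  1,  2]) / 3.0
eta = np.array([ 1,  2,  1, -1, -2, -1]) / 3.0

A = np.column_stack((np.ones(6), xi, eta))        # rows (1, xi_i, eta_i)
P = np.linalg.inv(A.T @ A) @ A.T                  # least squares plane coefficients

sigma = np.random.default_rng(0).normal(size=6)   # arbitrary sampled derivative values
a = P @ sigma                                     # fitted plane p(x, y) = a0 + a1 x + a2 y
print("p(0, 0)         =", a[0])
print("mean of samples =", sigma.mean())          # identical: the fit averages at the center
```

The agreement holds for any sample values because the barycenter coordinates sum to zero, so the constant term of the fit decouples from the linear terms; this is exactly why (8.3.4) reduces to the average of the six gradient values.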

8.3. LOCAL RECOVERY IN HIGHER DIMENSIONS 325 η-coordinates of these barycentric centers are ξ T =,, 2,,,2, 3 ηt =,2,,, 2,. 3 Further, we denote e T =,,,,,. Then the fitting procedure of SPR results in A T Aâa = A T σ, where A = e,ξ,η and σ T = σ,,σ 6 represent either x u h or y u h on six elements. It is straightforward to calculate Therefore, p x,y = A T A A T = 3 0 3 3 0 3. 6 3 3 0 3 3 0 6 σ j /6 + σ σ 3 σ 4 + σ 6 x/2 + σ + σ 2 σ 4 σ 5 y/2. Especially, the recovered derivative at the patch center is p 0,0 = 6 6 σ j, or G h u h 0,0 = 6 6 u h τj, which is the average of the six gradient values. Now we express gradient by nodal values. u h x = û h h ξ, u h y = û h h η. If we map element τ to the reference element ˆτ, û h ξ,η = û 0 ξ η + û ξ + û 2 η, û h ξ ξ,η = u û h u 0, η ξ,η = u 2 u 0. Carry on for all six elements and assemble, G h u h 0,0 = 2u u 4 + u 2 u 3 + u 6 u 5 6h 2u 2 u 5 + u u 6 + u 3 u 4 2 2 u + u 2 + u 3 + u 4 + u 5 + 2 2 = 6h u 6. 8.3.4 Example 8.3.2. Linear element under the Chevron triangulation. The procedure is the same as the above. The resulting finite difference operator is see Figure 8.2: G h u h 0,0 = 2u 3 u + 2u 6 u 4 = [ 0 4h 6 4h 2u 0 u 5 + 2u 3 u 4 + 2u u 6 + 8u 2 u 0 6 0 6 0 u 0 + u + u 2 + u 3 + u 4 + u 5 + 2 8 2 2 2 2 ] u 6. 8.3.5