One-Sided Position-Dependent Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meshes

DOI /s

Jennifer K. Ryan · Xiaozhou Li · Robert M. Kirby · Kees Vuik

Received: 14 January 2014 / Revised: 17 October 2014 / Accepted: 2 November 2014
© Springer Science+Business Media New York 2014

Abstract In this paper, we introduce a new position-dependent smoothness-increasing accuracy-conserving (SIAC) filter that retains the benefits of position dependence as proposed in van Slingerland et al. (SIAM J Sci Comput 33, 2011) while ameliorating some of its shortcomings. As in the previous position-dependent filter, our new filter can be applied near domain boundaries, near a discontinuity in the solution, or at the interface of different mesh sizes; and as before, in general, it numerically enhances the accuracy and increases the smoothness of approximations obtained using the discontinuous Galerkin (dg) method. However, the previously proposed position-dependent one-sided filter had two significant disadvantages: (1) increased computational cost (in terms of function evaluations), brought about by the use of 4k + 1 central B-splines near a boundary (leading to increased kernel support), and (2) increased numerical conditioning issues that necessitated the use of quadruple precision for polynomial degrees k ≥ 3 for the reported accuracy benefits to be realizable numerically. Our new filter addresses both of these issues, maintaining the same support size and similar function evaluation characteristics as the symmetric filter, in a way that has better numerical conditioning, making it, unlike its predecessor, amenable to GPU computing. Our new filter was conceived by revisiting the original error analysis for superconvergence of SIAC filters and by examining the role of the B-splines and their weights in the SIAC filtering kernel. We demonstrate, in the uniform mesh case, that our new filter is globally superconvergent for k = 1 and superconvergent in the interior (i.e., the region excluding the boundary) for k ≥ 2. Furthermore, we present the first theoretical proof of superconvergence for postprocessing over smoothly varying meshes, and explain the accuracy-order conserving nature of this new filter when applied to certain non-uniform mesh cases. We provide numerical examples supporting our theoretical results and demonstrating that our new filter, in general, enhances the smoothness and accuracy of the solution. Numerical results are presented for solutions of both linear and nonlinear equations solved on both uniform and non-uniform one- and two-dimensional meshes.

Keywords Discontinuous Galerkin method · Post-processing · SIAC filtering · Superconvergence · Uniform meshes · Smoothly varying meshes · Non-uniform meshes

Jennifer K. Ryan, Xiaozhou Li: Supported by the Air Force Office of Scientific Research (AFOSR), Air Force Material Command, USAF, under Grant No. FA. Robert M. Kirby: Supported by the Air Force Office of Scientific Research (AFOSR), Computational Mathematics Program (Program Manager: Dr. Fariba Fahroo), under Grant No. FA.

J. K. Ryan (B): School of Mathematics, University of East Anglia, Norwich NR4 7TJ, UK. Jennifer.Ryan@uea.ac.uk
X. Li, K. Vuik: Delft Institute of Applied Mathematics, Delft University of Technology, 2628 CD Delft, The Netherlands. X.Li-2@tudelft.nl, C.Vuik@tudelft.nl
R. M. Kirby: School of Computing, University of Utah, Salt Lake City, UT, USA. kirby@cs.utah.edu

1 Introduction

Computational considerations are always a concern when dealing with the implementation of numerical methods that claim to have practical (engineering) value. The focus of this paper is the previously introduced smoothness-increasing accuracy-conserving (SIAC) class of filters, a class of filters that exhibits superconvergence behavior when applied to discontinuous Galerkin (dg) solutions. Although the previously proposed position-dependent filter (which we will, henceforth, call the SRV filter) introduced in van Slingerland et al. [20] met its stated goals of demonstrating superconvergence, it contained two deficiencies which often made it impractical for implementation and usage within engineering scenarios.
The first deficiency of the SRV filter was its reliance on 4k + 1 central B-splines, which increased both the width of the stencil generated and the computational cost (in terms of function evaluations) by a disproportionate amount compared to the symmetric SIAC filter. The second deficiency is one of numerical conditioning: the SRV filter requires the use of quadruple precision to obtain consistent and meaningful results, which makes it unsuitable for practical CPU-based computations and for GPU computing. In this paper, we introduce a position-dependent SIAC filter that, like the SRV filter, allows for one-sided post-processing to be used near boundaries and solution discontinuities and which exhibits superconvergent behavior; however, our new filter addresses the two stated deficiencies: it has a smaller spatial support with a reduced number of function evaluations, and it does not require extended precision for error reduction to be realized. To give context to what we will propose, let us review how we arrived at the currently available and used one-sided filter given in van Slingerland et al. [20]. The SIAC filter has its roots in the finite element superconvergence extraction technique for elliptic equations proposed by Bramble and Schatz [1], Mock and Lax [12] and Thomée [19]. The linear hyperbolic system counterpart for discontinuous Galerkin (dg) methods was introduced by Cockburn et al. [5] and extended to more general applications in [9-11,13,14,18]. The postprocessing technique can enhance the accuracy order of dg approximations from k + 1 to 2k + 1 in the L^2-norm. This symmetric post-processor uses 2k + 1 central B-splines of order k + 1. However, a limitation of this symmetric post-processor was that it required a symmetric amount of information around the location being post-processed. To overcome this problem, Ryan and Shu [15] used the same ideas as in Cockburn et al. [5] to develop a

one-sided post-processor that could be applied near boundaries and discontinuities in the exact solution. However, their results were not very satisfactory, as the errors had a stairstepping-type structure, and the errors themselves were not reduced when the post-processor was applied to some dg solutions over coarse meshes. Later, van Slingerland et al. [20] recast this formulation as a position-dependent SIAC filter by introducing a smooth shift function λ(x̄) that aided in redefining the filter nodes and helped to ease the stairstepping-type structure of the errors. In an attempt to reduce the errors, the authors doubled to 4k + 1 the number of central B-splines used in the filter when near a boundary. Further, they introduced a convex function that allowed for a smooth transition between boundary and symmetric regions. The results obtained with this strategy were good for linear hyperbolic equations over uniform meshes, but new challenges arose. Issues were manifest when the position-dependent filter was applied to equations whose solution lacked the (high) degree of regularity required for valid post-processing. In some cases, this filter applied to certain dg solutions gave worse results than the original one-sided filter, which used only 2k + 1 central B-splines [15]. Furthermore, it was observed that in order for the superconvergence properties expressed in van Slingerland et al. [20] to be fully realized, extended precision (beyond traditional double precision) had to be used. Lastly, the addition of more B-splines did not come without cost. Figure 1 shows the difference between the symmetric filter, which was introduced in [1,5] and is applied in the domain interior, and the SRV filter when applied to the left boundary. The solution being filtered is at x = 0, and the filter extends into the domain. Upon examination, one sees that the position-dependent filter has a larger filter support and, by necessity, is not symmetric near the boundary.
The vast discrepancy in spatial extent is due to the number of B-splines used: 2k + 1 in the symmetric case versus 4k + 1 in the one-sided case. The practical implications of this discrepancy are two-fold: (1) filtering at the boundary with the 4k + 1 filter is noticeably more costly (in terms of function evaluations) than filtering in the interior with the symmetric filter; and (2) the spatial extent of the one-sided filter forces one to use the one-sided filter over a larger area/volume before being able to transition over to the symmetric filter.

Fig. 1 Comparison of (a) the symmetric filter centered around x = 0 when the kernel is applied in the domain interior and (b) the position-dependent filter at the boundary (represented by x = 0) before convolution, for a quadratic approximation. Notice that the boundary filter requires a larger spatial support, its amplitude is significantly larger in magnitude, and the filter does not emphasize the point x = 0, which is the point being post-processed

Recall that the SRV filter added two new features when attempting to improve the original Ryan and Shu one-sided filter: the authors added position-dependence to the filter and increased the number of B-splines used. In attempting to overcome the deficiencies of the filter in van Slingerland et al. [20], we reverted to the 2k + 1 B-spline approach as in Ryan and Shu [15] but added position-dependence. Although going back to 2k + 1 B-splines does make the filter less costly in the function evaluation sense, unfortunately this approach did not lead to a filter that reduced the errors in the way we had hoped (i.e., at least error-order conserving, if not also superconvergent). We were forced to reconsider altogether how superconvergence is obtained in the SIAC filter context; we outline a sketch of our thinking chronologically in what follows. To conceive of a new one-sided filter containing all the benefits mentioned above with none of the deficiencies, we had to hearken back to the fundamentals of SIAC filtering. We not only examined the filter itself (e.g., its construction), but also the error analysis in [5,8]. From the analysis in [5,8] we can conclude that the main source of the aforementioned conditioning issue (expressed in terms of the necessity for increased precision) is the constant term found in the expression for the error, which relies on the B-spline coefficients c_γ^{(2k+1,l)} of the SIAC filtering kernel through the sum Σ_γ |c_γ^{(2k+1,l)}|. This quantity increases when the number of B-splines is increased. Further, the condition number of the system used to calculate the c_γ^{(2k+1,l)} becomes quite large, on the order of 10^24 for P^4, increasing the possibility of round-off errors and thus requiring higher levels of precision. As mentioned before, we attempted to still use 2k + 1 position-dependent central B-splines, but this approach did not lead to error reduction. Indeed, the constant in the error term remains quite large at the boundaries.
To reduce the error term, we concluded that one needed to add to the set of central B-splines the following: one non-central B-spline near the boundary. This general B-spline allows the filter to maintain the same spatial support throughout the domain, including boundary regions; it provides only a slight increase in computational cost, as there are now 2k + 2 B-splines to evaluate as part of the filter; and, possibly most importantly, it allows for error reduction. We note that our modifications to the previous filter (e.g., going beyond central B-splines) do come with a compromise: we must relax the assumption of obtaining superconvergence. Instead, we merely require a reduction in error and a smoother solution. This new filter remains globally superconvergent for k = 1. The new contributions of this paper are:

- A new one-sided position-dependent SIAC filter that allows filtering up to boundaries and that ameliorates the two principal deficiencies identified in the previous 4k + 1 one-sided position-dependent filter;
- Examination and documentation of the reasoning concerning the constant term in the error analysis that led to the proposed work;
- Demonstration that, for the linear polynomial case, the filtered approximation is always superconvergent for a uniform mesh; and
- Application of the scaled new filter to both smoothly varying and non-uniform (random) meshes. In the smoothly varying case, we prove and demonstrate that we obtain superconvergence. For the general non-uniform case, we still observe significant improvement in the smoothness and an error reduction over the original dg solution, although full superconvergence is not always achieved. We show, however, that we remain accuracy-order conserving.

These important results are presented as follows: first, as we present the SIAC filter in the context of discontinuous Galerkin approximations, we review the important properties of the dg method and the position-dependent SIAC filter in Sect. 2. In Sect. 3, we introduce the newly proposed filter and further establish some theoretical error estimates for the uniform and non-uniform (smoothly varying) cases. We present the numerical results over uniform and non-uniform one-dimensional mesh structures in Sect. 4 and two-dimensional quadrilateral mesh structures in Sect. 5. Finally, conclusions are given in Sect. 6.

2 Background

In this section, we present the relevant background for understanding how to improve the SIAC filter, which includes the important properties of the discontinuous Galerkin (dg) method that make the application of SIAC filtering attractive, as well as the building blocks of SIAC filtering: B-splines and the symmetric filter.

2.1 Important Properties of Discontinuous Galerkin Methods

We frame the discussion of the properties of dg methods in the context of a one-dimensional problem, as the ideas easily extend to multiple dimensions. Further details about the discontinuous Galerkin method can be found in [2-4]. Consider a one-dimensional hyperbolic equation such as

u_t + a_1 u_x + a_0 u = 0,  x ∈ Ω = [x_L, x_R],   (2.1)
u(x, 0) = u_0(x).   (2.2)

To obtain a dg approximation, we first decompose Ω as Ω = ∪_{j=1}^N I_j, where I_j = [x_{j-1/2}, x_{j+1/2}] = [x_j − Δx_j/2, x_j + Δx_j/2]. Then Eq. (2.1) is multiplied by a test function and integrated by parts. The test function is chosen from the same function space as the trial functions, a piecewise polynomial basis. The approximation can then be written as

u_h(x, t) = Σ_{l=0}^k u_j^{(l)}(t) φ_j^{(l)}(x),  for x ∈ I_j.

Herein, we choose the basis functions φ_j^{(l)}(x) = P^{(l)}(2(x − x_j)/Δx_j), where P^{(l)} is the Legendre polynomial of degree l over [−1, 1]. For simplicity, throughout this paper we represent polynomials of degree less than or equal to l by P^l.
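To make the modal dg representation above concrete, here is a minimal sketch of evaluating the local basis and solution on one element (the element center, width, and modal coefficients below are illustrative choices, not values from the paper):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def dg_basis(l, x, xj, dxj):
    """phi_j^(l)(x) = P^(l)(2(x - x_j)/dx_j): the degree-l Legendre polynomial
    mapped from the reference interval [-1, 1] onto I_j = [x_j - dx_j/2, x_j + dx_j/2]."""
    r = 2.0 * (x - xj) / dxj              # reference coordinate in [-1, 1]
    return Legendre.basis(l)(r)

# modal dg representation on one element: u_h(x) = sum_l u_j^(l) * phi_j^(l)(x)
modal = [1.0, 0.5, -0.2]                  # hypothetical modal coefficients, k = 2
uh = lambda x: sum(c * dg_basis(l, x, xj=0.25, dxj=0.5) for l, c in enumerate(modal))
```

Because the basis is modal and orthogonal on the reference element, the coefficients u_j^{(l)} are obtained independently per element, which is what makes the element-local convolution in SIAC filtering straightforward to evaluate.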
In order to investigate the superconvergence property of the dg solution, it is important to look at the usual convergence rate of the dg method. By estimating the error of the dg solution, we obtain ‖u − u_h‖ ~ O(h^{k+1}) in the L^2-norm for sufficiently smooth initial data u_0 [2]:

‖u − u_h‖_{0,Ω} ≤ C h^{k+1} ‖u_0‖_{H^{k+2}},

where h is the measure of the elements: h = Δx for a uniform mesh and h = max_j Δx_j for non-uniform meshes. Another useful property is the superconvergence of the dg solution in the negative-order norm [5], where we have

‖∂_h^α (u − u_h)‖_{−l,Ω} ≤ C h^{2k+1} ‖∂^α u_0‖_{k+1,Ω},

for linear hyperbolic equations. This expression represents why accuracy enhancement through post-processing is possible. Unfortunately, this superconvergence property does not hold for non-uniform meshes when α ≥ 1, which makes extracting the superconvergence over non-uniform meshes challenging. However, we prove that, for certain non-uniform meshes containing smoothness in their construction (i.e., smoothly varying meshes), the accuracy enhancement through SIAC filtering is still possible.

2.2 A Review of B-splines

As the SIAC filter relies heavily on B-splines, here we review the definition of B-splines given by de Boor in [7] as well as central B-splines. The reader is also directed to [17] for more on splines.

Definition 2.1 (B-spline) Let t := (t_j) be a nondecreasing sequence of real numbers that creates a so-called knot sequence. The j-th B-spline of order l for the knot sequence t is denoted by B_{j,l,t} and is defined, for l = 1, by the rule

B_{j,1,t}(x) = 1 if t_j ≤ x < t_{j+1}; 0 otherwise.

In particular, t_j = t_{j+1} leads to B_{j,1,t} = 0. For l > 1,

B_{j,l,t}(x) = ω_{j,l,t} B_{j,l−1,t} + (1 − ω_{j+1,l,t}) B_{j+1,l−1,t},   (2.3)

with

ω_{j,l,t}(x) = (x − t_j) / (t_{j+l−1} − t_j).

This notation will be used to create a new kernel near the boundaries. The original symmetric filter [5,16] relied on central B-splines of order l whose knot sequence is uniformly spaced and symmetrically distributed,

t = (−l/2, −l/2 + 1, ..., l/2 − 1, l/2),

yielding the following recurrence relation for central B-splines:

ψ^{(1)}(x) = χ_{[−1/2,1/2]}(x),
ψ^{(l+1)}(x) = (ψ^{(1)} ∗ ψ^{(l)})(x) = (1/l) [ ((l+1)/2 + x) ψ^{(l)}(x + 1/2) + ((l+1)/2 − x) ψ^{(l)}(x − 1/2) ],  l ≥ 1.   (2.4)

For the purposes of this paper, it is convenient to relate the recurrence relation for central B-splines to the definition of general B-splines given in Definition 2.1. Relating the recurrence relation to the definition can be done by defining t = (t_0, ..., t_l) to be a knot sequence, and denoting by ψ_t^{(l)}(x) the 0th B-spline of order l for the knot sequence t,

ψ_t^{(l)}(x) = B_{0,l,t}(x).
Note that the knot sequence t also represents the so-called breaks of the B-spline. The B-spline on each region [t_i, t_{i+1}), i = 0,...,l−1, is a polynomial of degree l − 1, but over the entire support [t_0, t_l], the B-spline is a piecewise polynomial. When the knots (t_j) are sampled in a symmetric and equidistant fashion, the B-spline is called a central B-spline. Notice that Eq. (2.4) for a central B-spline is a subset of the general B-spline definition where the knots are equally spaced. This new notation provides more flexibility than the previous central B-spline notation.
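Definition 2.1 translates directly into a short recursion; a minimal sketch (note that a repeated knot simply zeroes out the corresponding term, which is exactly what permits the "general" boundary spline introduced later):

```python
def bspline(j, l, t, x):
    """B_{j,l,t}(x): the j-th B-spline of order l on knot sequence t, via the
    recurrence (2.3); a zero-width knot interval contributes nothing."""
    if l == 1:
        return 1.0 if t[j] <= x < t[j + 1] else 0.0
    # omega_{j,l,t}(x) = (x - t_j)/(t_{j+l-1} - t_j), guarded for repeated knots
    left = (x - t[j]) / (t[j + l - 1] - t[j]) if t[j + l - 1] > t[j] else 0.0
    # (1 - omega_{j+1,l,t}(x)), written out explicitly
    right = (t[j + l] - x) / (t[j + l] - t[j + 1]) if t[j + l] > t[j + 1] else 0.0
    return left * bspline(j, l - 1, t, x) + right * bspline(j + 1, l - 1, t, x)

hat = bspline(0, 2, [-1.0, 0.0, 1.0], 0.0)      # order-2 central B-spline at its peak
ramp = bspline(0, 2, [-1.0, 0.0, 0.0], -0.25)   # repeated knot: the ramp x + 1 on [-1, 0)
```

With distinct equispaced knots this reproduces the central B-splines of Eq. (2.4); with a repeated end knot it produces the one-sided monomial-type spline used in Sect. 3.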

2.3 Position-Dependent SIAC Filtering

The original position-dependent SIAC filter is a convolution of the dg approximation with a central B-spline kernel,

u*(x̄) = (K_h^{(2k+1,l)} ∗ u_h)(x̄).   (2.5)

The convolution kernel is given by

K^{(2k+1,l)}(x) = Σ_{γ=0}^{2k} c_γ^{(2k+1,l)} ψ^{(l)}(x − x_γ),   (2.6)

where 2k + 1 represents the number of central B-splines, l the order of the B-splines, and K_h(x) = (1/h) K(x/h). The coefficients c_γ^{(2k+1,l)} are obtained from the property that the kernel reproduces polynomials of degree 2k. For the symmetric central B-spline filter [5,16], l = k + 1 and x_γ = −k + γ, where k is the highest degree of the polynomial used in the dg approximation. More explicitly, the symmetric kernel is given by

K^{(2k+1,l)}(x) = Σ_{γ=0}^{2k} c_γ^{(2k+1,l)} ψ^{(l)}(x − (−k + γ)).   (2.7)

Note that this kernel is by construction symmetric and uses an equal amount of information from the neighborhood around the point being post-processed. While being symmetric is suitable in the interior of the domain when the function is smooth, it is not suitable for application near a boundary, or when the solution contains a discontinuity. The one-sided position-dependent SRV filter defined in van Slingerland et al. [20] is called position-dependent because its support changes according to the location of the point being post-processed. For example, near a boundary or discontinuity, a translation of the filter is done so that the support of the kernel remains inside the domain. Furthermore, in these regions, a greater number of central B-splines is required. Using more B-splines aids in improving the magnitude of the errors near the boundary, while allowing superconvergence. In addition, the authors in van Slingerland et al. [20] increased the number of B-splines used in the construction of the kernel to 4k + 1. The position-dependent (SRV) filter for elements near the boundaries can then be written as¹

K^{(4k+1,l)}(x) = Σ_{γ=0}^{4k} c_γ^{(4k+1,l)} ψ^{(l)}(x − x_γ),   (2.8)

where x_γ depends on the location of the evaluation point x̄ used in Eq.
(2.5) and at the boundaries is given by x_γ = −2k + γ + λ(x̄), with

λ(x̄) = min{0, −(4k+l)/2 + (x̄ − x_L)/h},  x̄ ∈ [x_L, (x_L + x_R)/2),
λ(x̄) = max{0, (4k+l)/2 + (x̄ − x_R)/h},  x̄ ∈ [(x_L + x_R)/2, x_R].   (2.9)

Here x_L and x_R are the left and right boundaries, respectively.

¹ Note that the notation used in the current manuscript is slightly different from the notation used in van Slingerland et al. [20]. Instead of using r_2 = 4k to denote the SRV filter, we chose to use the number of B-splines directly for the clarity of the discussion.
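Returning to the symmetric kernel (2.7): as a minimal concrete sketch for k = 1, the kernel is just a weighted sum of hat functions. The coefficients used below follow from the degree-2k reproduction property (they are the standard values for this case, not taken from the paper's tables):

```python
def psi(l, x):
    """Central B-spline psi^(l) on knots -l/2, ..., l/2 via the recurrence (2.4)."""
    if l == 1:
        return 1.0 if -0.5 <= x < 0.5 else 0.0
    return ((l / 2 + x) * psi(l - 1, x + 0.5)
            + (l / 2 - x) * psi(l - 1, x - 0.5)) / (l - 1)

def K_sym(x, c, k):
    """Symmetric kernel (2.7): 2k+1 central B-splines of order k+1 at shifts
    x_gamma = -k + gamma, weighted by the coefficients c."""
    return sum(cg * psi(k + 1, x - (-k + g)) for g, cg in enumerate(c))

# k = 1: reproduction of polynomials up to degree 2 gives c = (-1/12, 7/6, -1/12)
c = [-1 / 12, 7 / 6, -1 / 12]
print(K_sym(0.0, c, 1))   # kernel weight at the point being post-processed
```

The kernel integrates to one (the q = 0 moment condition), which is what "accuracy-conserving" requires of any admissible coefficient set.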

The authors chose 4k + 1 central B-splines because, in their experience, using fewer (central) B-splines was insufficient for enhancing the error. Furthermore, in order to provide a smooth transition from the boundary kernel to the interior kernel, a convex combination of the two kernels was used:

u*(x) = θ(x) (K_h^{(2k+1,l)} ∗ u_h)(x) + (1 − θ(x)) (K_h^{(4k+1,l)} ∗ u_h)(x),   (2.10)

where θ(x) ∈ C^{k−1} is such that θ = 1 in the interior and θ = 0 in the boundary regions. This position-dependent filter demonstrated better behavior in terms of error than the original one-sided filter given by Ryan and Shu in [15]. Throughout the article, we will refer to the position-dependent filter using 4k + 1 central B-splines as the SRV filter.

3 Proposed One-Sided Position-Dependent SIAC Filter

In this section, we propose a new one-sided position-dependent filter for application near boundaries. We first discuss the deficiencies in the current position-dependent SIAC filter. We then propose a new position-dependent filter that ameliorates the deficiencies of the SRV filter; however, our new filter must make some compromises with regard to superconvergence (which will be discussed). Lastly, we prove that our new filter is globally superconvergent for k = 1 and superconvergent in the interior of the domain for k ≥ 2.

3.1 Deficiencies of the Previous Position-Dependent SIAC Filter

The SRV filter was reported to reduce the errors when filtering near a boundary. However, applying this filter to higher-order dg solutions (e.g., P^4- or even P^3-polynomials in some cases) required using a multi-precision package (or at least quadruple precision) to reduce round-off error, leading to significantly increased computational time. Figure 2 shows the significant round-off error near the boundaries when using double precision for post-processing the initial condition. The multi-precision requirement also makes the position-dependent kernel [20] near the boundaries unsuitable for GPU computing.
Fig. 2 Comparison of the pointwise errors, in log scale, of (a) the original L^2 projection solution and (b) the SRV filter of van Slingerland et al. [20], for the 2D L^2 projection using basis polynomials of degree k = 4. Double precision was used in these computations

To discover why this challenge arises requires revisiting the foundations of the filter, in particular the existing error estimates. The L^2-error estimate given in Cockburn et al. [5],

‖u − u*‖_{0,Ω} ≤ C h^{2k+1},   (3.1)

provides us insight into the cause of the issue by examining the constant C in more detail. The constant C depends on

κ^{(r+1,l)} = Σ_{γ=0}^{r} |c_γ^{(r+1,l)}|,   (3.2)

where the c_γ denote the kernel coefficients and the value of r depends on the number of B-splines used to construct the kernel. The kernel coefficients are obtained by ensuring that the kernel reproduces polynomials of degree r through the convolution:

(K^{(r+1,l)} ∗ x^p)(x) = x^p,  p = 0, 1, ..., r.   (3.3)

We note that for the error estimates to hold, it is enough to ensure that the kernel reproduces polynomials up to degree 2k (r ≥ 2k), although near the boundaries it was required that the kernel reproduce polynomials of degree 4k (r = 4k) in [8,20]. For filtering near the boundary, the value κ defined in Eq. (3.2) is on the order of 10^5 for k = 3 and 10^7 for k = 4, as can be seen in Fig. 4. This indicates one possible avenue (i.e., lowering κ) by which we might generate an improved filter. A second avenue is investigating the round-off error stemming from the large condition number of the linear system generated to satisfy Eq. (3.3) and solved to find the kernel coefficients. The condition number of the generated matrix is on the order of 10^24 for k = 4. This leads to significant round-off error (e.g., the rule of thumb in this particular case being that 24 digits of accuracy are lost due to the conditioning of this system), hence requiring the use of high-precision/extended-precision libraries for SIAC filtering to remain accurate (in both its construction and usage). The requirement of using extended precision in our computations increases the computational cost. In addition, the aforementioned discrepancy in the spatial extent of the filters due to the number of B-splines used (2k + 1 in the symmetric case versus 4k + 1 in the one-sided case) leads to the boundary filter costing even more due to extra function evaluations.
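The growth of κ and of the condition number can be observed directly from the coefficient system. Expanding (x − y)^p shows that the reproduction conditions (3.3) are equivalent to moment conditions on the kernel (zeroth moment one, higher moments zero), so the coefficients solve a small moment system. The sketch below does this for the symmetric interior kernel only, as a baseline; the 4k + 1 boundary system discussed in the text is conditioned far worse:

```python
import numpy as np

def psi(l, x):
    """Central B-spline psi^(l) on knots -l/2, ..., l/2 via the recurrence (2.4)."""
    if l == 1:
        return 1.0 if -0.5 <= x < 0.5 else 0.0
    return ((l / 2 + x) * psi(l - 1, x + 0.5)
            + (l / 2 - x) * psi(l - 1, x - 0.5)) / (l - 1)

def moment(l, shift, q, nodes, weights):
    """q-th moment of psi^(l)(y - shift), by Gauss quadrature on each knot span
    (exact, since the integrand is polynomial on each span)."""
    total = 0.0
    for a in np.arange(-l / 2, l / 2):            # unit knot spans of the support
        y = shift + a + 0.5 + 0.5 * nodes         # map nodes from [-1, 1] to the span
        f = np.array([psi(l, yi - shift) for yi in y])
        total += 0.5 * np.dot(weights, f * y**q)
    return total

def symmetric_kernel(k):
    """Coefficients of the symmetric kernel (2k+1 central B-splines of order
    l = k+1 at shifts -k..k) from the moment form of Eq. (3.3)."""
    l, r = k + 1, 2 * k
    nodes, weights = np.polynomial.legendre.leggauss(l + r)
    M = np.array([[moment(l, g - k, q, nodes, weights) for g in range(r + 1)]
                  for q in range(r + 1)])
    c = np.linalg.solve(M, np.eye(r + 1)[0])      # moments (1, 0, ..., 0)
    return c, np.abs(c).sum(), np.linalg.cond(M)

for k in (1, 2, 3, 4):
    c, kappa, cond = symmetric_kernel(k)
    print(f"k={k}: kappa={kappa:.4f} cond={cond:.2e}")
```

Even for this well-behaved interior kernel, both κ and the condition number grow with k, which is the trend the text identifies as the root of the boundary filter's precision requirements.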
These extra function evaluations have led us to reconsider the position-dependent filter and propose a better conditioned and less computationally intensive alternative. In order to apply SIAC filters near boundaries, we first no longer restrict ourselves to using only central B-splines. Secondly, we seek to maintain a constant support size for both the interior of the domain and the boundaries. The idea we propose is to add one general B-spline in boundary regions, which is located within the already defined support size. Using general B-splines provides greater flexibility and improves the numerical evaluation (eliminating the explicit need for precision beyond double precision). To introduce our new position-dependent one-sided SIAC filter, we discuss the one-dimensional case and how to modify the current definition of the SIAC filter. Multi-dimensional SIAC filters are a tensor product of the one-dimensional case.

3.2 The New Position-Dependent One-Sided Kernel

Before we provide the definition of the new position-dependent one-sided kernel, we first introduce a new concept, that of a knot matrix. The definition of a knot matrix helps us to introduce the new position-dependent one-sided kernel in a concise and compact form. It also aids in demonstrating the differences between the new position-dependent kernel, the symmetric kernel and the SRV filter. Informally, the idea behind introducing a knot matrix is to exploit the definition of B-splines in terms of their corresponding knot sequence t := (t_j) in

the definition of the SIAC filter. In order to introduce a knot matrix, we will use the following notation: ψ_t^{(l)}(x) = B_{0,l,t}(x) for the zeroth B-spline of order l with knot sequence t.

Definition 3.1 (Knot matrix) A knot matrix, T, is an n × m matrix such that the γ-th row, T(γ), of the matrix T is a knot sequence with l + 1 elements (i.e., m = l + 1) that are used to create the B-spline ψ_{T(γ)}^{(l)}(x). The number of rows n is specified based on the number of B-splines used to construct the kernel.

To provide some context for the necessity of the definition of a knot matrix, we first redefine some of the previously discussed SIAC kernels in terms of their knot matrices. Recall that the general definition of the SIAC kernel relies on r + 1 central B-splines of order l. Therefore, we can use Definition 3.1 to rewrite the symmetric kernel given in Eq. (2.7) in terms of a knot matrix as follows:

K_{T_sym}^{(2k+1,l)}(x) = Σ_{γ=0}^{2k} c_γ^{(2k+1,l)} ψ_{T_sym(γ)}^{(l)}(x),   (3.4)

where T_sym in this relation is a (2k+1) × (l+1) matrix. Each row in T_sym corresponds to the knot sequence of one of the constituent B-splines in the symmetric kernel. More specifically, the elements of T_sym are defined as

T_sym(i, j) = −l/2 + j + i − k,  i = 0,...,2k; j = 0,...,l.

For instance, for the first-order symmetric SIAC kernel we have l = 2 and k = 1. Therefore, the corresponding knot matrix can be defined as

T_sym =
  −2 −1  0
  −1  0  1
   0  1  2   (3.5)

The definition of a SIAC kernel in terms of a knot matrix can also be used to rewrite the original boundary filter [15], which uses only 2k + 1 central B-splines at the left boundary. The knot matrix T_one for this case (k = 1, evaluation point at the left boundary) is given by

T_one =
  −4 −3 −2
  −3 −2 −1
  −2 −1  0   (3.6)

Now we can define our new position-dependent one-sided kernel by generating a knot matrix. The new position-dependent one-sided kernel consists of r + 1 = 2k + 1 central B-splines and one general B-spline, and hence the knot matrix is of size (2k + 2) × (l + 1).
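The symmetric knot matrix can be generated mechanically from the formula T_sym(i, j) = −l/2 + j + i − k; a small sketch:

```python
import numpy as np

def knot_matrix_sym(k):
    """T_sym(i, j) = -l/2 + j + i - k with l = k + 1: row i holds the l + 1
    knots of the central B-spline centered at -k + i (cf. Eqs. 3.4-3.5)."""
    l = k + 1
    return np.array([[-l / 2 + j + i - k for j in range(l + 1)]
                     for i in range(2 * k + 1)])

# k = 1, l = 2: three hat splines centered at -1, 0, 1
print(knot_matrix_sym(1))
```

Each row is the knot sequence of one constituent spline, so the row midpoints recover the symmetric node positions x_γ = −k + γ.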
At a high level, using the scaling of the kernel, the new position-dependent one-sided kernel can be written as

K_T^{(r+1,l)}(x) = Σ_{γ=0}^{r+1} c_γ^{(r+1,l)} ψ_{T(γ)}^{(l)}(x),   (3.7)

where T(γ) represents the γ-th row of the knot matrix T, that is, the knot sequence (T(γ,0), ..., T(γ,l)). For the central B-splines, γ = 0,...,2k, and the scaled B-spline is ψ_{T(γ),h}^{(l)}(x) = (1/h) ψ_{T(γ)}^{(l)}(x/h).

The added B-spline is a monomial defined as ψ_{T(r+1),h}^{(l)}(x) = (1/h) x_{T(r+1)}^{l−1}(x/h), where

x_{T(r+1)}^{l−1} = (x − T(r+1,0))^{l−1},  T(r+1,0) ≤ x ≤ T(r+1,l);  0 otherwise.

Therefore, near the left boundary, the kernel in Eq. (3.7) can be rewritten as

K_T^{(r+1,l)}(x) = Σ_{γ=0}^{r} c_γ^{(r+1,l)} ψ_{T(γ)}^{(l)}(x)  [position-dependent part: r+1 = 2k+1 central B-splines]
                + c_{r+1}^{(r+1,l)} ψ_{T(r+1)}^{(l)}(x)  [general B-spline].   (3.8)

The kernel coefficients c_γ^{(r+1,l)}, γ = 0,...,r+1, are obtained through reproducing polynomials of degree up to r + 1. We have imposed further restrictions on the knot matrix for the definition of the new position-dependent one-sided kernel. First, for convenience we require

T(γ,0) ≤ T(γ,1) ≤ ⋯ ≤ T(γ,l),  for γ = 0,...,r+1,

and

T(γ+1,0) ≤ T(γ,l),  for γ = 0,...,r.

Second, the knot matrix T should satisfy

T(0,0) ≥ (x̄ − x_R)/h  and  T(r,l) ≤ (x̄ − x_L)/h,

where h is the element size in a uniform mesh. This requirement is derived from the support of the B-spline as well as the support needing to remain inside the domain. Recall that the support of the B-spline ψ_{T(γ)}^{(l)} is [T(γ,0), T(γ,l)], and the support of the kernel is [T(0,0), T(r,l)]. For any x̄ ∈ [x_L, x_R], the post-processed solution at the point x̄ can then be written as

u*(x̄) = (K_{T,h}^{(r+1,l)} ∗ u_h)(x̄) = ∫_Ω K_{T,h}^{(r+1,l)}(x̄ − ξ) u_h(ξ) dξ
      = ∫_{x̄ − hT(r,l)}^{x̄ − hT(0,0)} K_{T,h}^{(r+1,l)}(x̄ − ξ) u_h(ξ) dξ,   (3.9)

where T_h represents the scaled knot matrix. For the boundary regions, we force the interval [x̄ − hT(r,l), x̄ − hT(0,0)] to be inside the domain Ω = [x_L, x_R]. This implies that x_L ≤ x̄ − hT(r,l) and x̄ − hT(0,0) ≤ x_R, and hence the requirement that T(0,0) ≥ (x̄ − x_R)/h and T(r,l) ≤ (x̄ − x_L)/h. Finally, we require that the kernel remain as symmetric as possible. This means the knots should be chosen as

Left:  T → T − (T(r,l) − (x̄ − x_L)/h),  for (x̄ − x_L)/h < (3k+1)/2,   (3.10)
Right: T → T − (T(0,0) − (x̄ − x_R)/h),  for (x_R − x̄)/h < (3k+1)/2.   (3.11)

This shifting will increase the error, and it is therefore still necessary to increase the number of B-splines used in the kernel. Because the symmetric filter yields superconvergent results, we wish to retain the original form of the kernel as much as possible. Near the boundary, where the symmetric filter cannot be applied, we keep the 2k + 1 shifted central B-splines and add only one general B-spline. To avoid increasing the spatial support of the filter, we choose the knots of this general B-spline dependent upon the knots of the 2k + 1 central B-splines in the following way: near the left boundary, we let the first 2k + 1 B-splines be central B-splines, whereas the last B-spline will be a general B-spline. The elements of the knot matrix are then given by

T(i, j) = −l − r + j + i + (x̄ − x_L)/h,  0 ≤ i ≤ r, 0 ≤ j ≤ l;
T(r+1, 0) = (x̄ − x_L)/h − 1;
T(r+1, j) = (x̄ − x_L)/h,  j = 1,...,l.   (3.12)

Similarly, we can design the new kernel near the right boundary, where the general B-spline is given by

ψ_{T(0)}^{(l)}(x) = x_{T(0)}^{l−1} = (T(0,l) − x)^{l−1},  T(0,0) ≤ x ≤ T(0,l);  0 otherwise.

The elements of the knot matrix for the right boundary kernel are defined as

T(0, j) = (x̄ − x_R)/h,  j = 0,...,l−1;
T(0, l) = (x̄ − x_R)/h + 1;
T(i, j) = j + i − 1 + (x̄ − x_R)/h,  1 ≤ i ≤ r+1, 0 ≤ j ≤ l,   (3.13)

and the form of the kernel is then

K_T^{(r+1,l)}(x) = c_0^{(r+1,l)} ψ_{T(0)}^{(l)}(x) + Σ_{γ=1}^{r+1} c_γ^{(r+1,l)} ψ_{T(γ)}^{(l)}(x).   (3.14)

We note that this extra B-spline is only used when (x̄ − x_L)/h < (3k+1)/2 or (x_R − x̄)/h < (3k+1)/2; otherwise, the symmetric central B-spline kernel is used. We present a concrete example for the P^1 case with l = 2. In this case, the knot matrices for our newly proposed filter at the left and right boundaries (with x̄ = x_L and x̄ = x_R, respectively) are

T_Left =
  −4 −3 −2
  −3 −2 −1
  −2 −1  0
  −1  0  0

T_Right =
   0  0  1
   0  1  2
   1  2  3
   2  3  4   (3.15)

These new knot matrices are 4 × 3 matrices where, in the case of the filter for the left boundary, the first three rows express the knots of the three central B-splines and the last row expresses the knots of the general B-spline.
For the filter applied to the right boundary, the first row expresses the knots of the general B-spline and the last three rows express the knots of the central B-splines.
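As a concrete check of this construction, the k = 1 left-boundary kernel can be assembled and its coefficients solved for in plain double precision. The sketch below assumes the knot layout just described (central hats centered at −3, −2, −1, plus the general spline with knots (−1, 0, 0)); the moments are computed analytically, and reproducing polynomials up to degree r + 1 = 3 is equivalent to requiring kernel moments (1, 0, 0, 0). This is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def knot_matrix_left(k, s):
    """Knot matrix at the left boundary (Eq. 3.12): rows 0..r are shifted central
    B-splines ending at T(r, l) = s = (xbar - x_L)/h; row r+1 is the general
    B-spline with an l-fold right knot, i.e. the monomial (x - T(r+1,0))^{l-1}."""
    l, r = k + 1, 2 * k
    T = np.empty((r + 2, l + 1))
    for i in range(r + 1):
        T[i, :] = [i + j - r - l + s for j in range(l + 1)]
    T[r + 1, 0], T[r + 1, 1:] = s - 1.0, s
    return T

# k = 1, evaluation point on the boundary itself (s = 0)
T = knot_matrix_left(1, 0.0)

# Analytic moments m_q = int B(y) y^q dy of the constituent splines:
# unit hat centered at s: m0 = 1, m1 = s, m2 = s^2 + 1/6, m3 = s^3 + s/2;
# general spline g(y) = y + 1 on [-1, 0]: int y^q (y + 1) dy over [-1, 0].
def hat_moments(s):
    return [1.0, s, s * s + 1.0 / 6.0, s**3 + s / 2.0]

gen_moments = [1.0 / 2.0, -1.0 / 6.0, 1.0 / 12.0, -1.0 / 20.0]

# One column per spline; the hat centers -3, -2, -1 are the row midpoints of T.
M = np.column_stack([hat_moments(-3.0), hat_moments(-2.0),
                     hat_moments(-1.0), gen_moments])
c = np.linalg.solve(M, np.array([1.0, 0.0, 0.0, 0.0]))
kappa = np.abs(c).sum()      # the constant kappa of Eq. (3.2)
print("coefficients:", c, "kappa:", kappa, "cond:", np.linalg.cond(M))
```

For this small example, both κ and the condition number of M remain modest, consistent with the claim that the new boundary kernel can be constructed and applied in double precision.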

Fig. 3 Comparison of the two boundary filters before convolution: (a) the SRV kernel and (b) the new kernel at the boundary with k = 2. The boundary is represented by x = 0. The new filter has reduced support and magnitude.

If we use the same form of the knot matrix to express the SRV kernel introduced in van Slingerland et al. [20] at the left boundary for k = 1, we would have

    T_SRV = …      (3.16)

Comparing the new knot matrix with the one used to obtain the SRV filter, we can see that they have the same number of columns, which indicates that they use the same order of B-splines. There are fewer rows in the new matrix (2k + 2) than in the matrix for the original position-dependent filter (4k + 1). This indicates that the new filter uses fewer B-splines than the SRV filter. To compare the new filter and the SRV filter, we plot the kernels used at the left boundary for k = 2. Figure 3 illustrates that the new position-dependent SIAC kernel places more weight on the evaluation point than the SRV kernel, and that the SRV kernel has a significantly larger magnitude and support, which we observed to cause problems, especially for higher-order polynomials (such as P^3 or P^4). For this example, using the filter for quadratic approximations, the scaling of the original position-dependent SIAC filter has a range from -150 to 150, versus -6 to 6 for the newly proposed filter.

3.3 Theoretical Results in the Uniform Case

The previous section introduced a new filter to reduce the errors of dg approximations while attempting to ameliorate the issues concerning the old filter. In this section, we discuss the theoretical results for the newly defined boundary kernel. Specifically, for k = 1 it is globally superconvergent of order three. For higher degree polynomials, it is possible to obtain superconvergence only in the interior of the domain. Recall from Eq. (3.7) that the new one-sided scaled kernel has the form

    K^{(r+1,l)}_T(x) = Σ_{γ=0}^{r+1} c_γ^{(r+1,l)} ψ^(l)_{T(γ)}(x).      (3.17)
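The rows of these knot matrices can be turned into kernel building blocks with the standard Cox-de Boor recursion, which handles the repeated knots of the general B-spline without special cases. A sketch (the function name and the normalization convention are ours):

```python
def bspline(t, x):
    """Evaluate the single B-spline determined by the knot vector t = [t_0, ..., t_m]
    (order m, degree m - 1) at x via the Cox-de Boor recursion with the 0/0 := 0
    convention, so repeated knots -- as in the general boundary B-spline -- are allowed.
    This is the normalized form N; rescale by m/(t[-1] - t[0]) for a unit-integral spline."""
    m = len(t) - 1                      # number of knot intervals; the degree is m - 1
    def B(i, p):
        if p == 0:
            return 1.0 if t[i] <= x < t[i + 1] else 0.0
        left = 0.0 if t[i + p] == t[i] else \
            (x - t[i]) / (t[i + p] - t[i]) * B(i, p - 1)
        right = 0.0 if t[i + p + 1] == t[i + 1] else \
            (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * B(i + 1, p - 1)
        return left + right
    return B(0, m - 1)

# A central B-spline row (knots -1, 0, 1) peaks at the evaluation point, while a
# general B-spline row with a repeated knot (knots -1, 0, 0) piles its weight onto
# that knot -- the behavior Fig. 3 shows near the boundary.
print(bspline([-1.0, 0.0, 1.0], 0.0))    # hat function at its peak -> 1.0
print(bspline([-1.0, 0.0, 0.0], -0.5))   # ramp toward the repeated knot -> 0.5
```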

In the interior of the domain the symmetric SIAC kernel is used, which consists of 2k + 1 central B-splines,

    K^{(2k+1,l)}_T(x) = Σ_{γ=0}^{2k} c_γ^{(2k+1,l)} ψ^(l)_{T(γ)}(x),      (3.18)

and near the left boundary the new one-sided kernel can be written as

    K^{(r+1,l)}_T(x) = Σ_{γ=0}^{r} c_γ^{(r+1,l)} ψ^(l)_{T(γ)}(x) + c_{r+1}^{(r+1,l)} ψ^(l)_{T(r+1)}(x),   r = 2k,

where 2k + 1 central B-splines are used together with one general B-spline. The scaled kernel K^{(r+1,l)}_T has the property that the convolution K^{(r+1,l)}_T ⋆ u_h only uses information inside the domain.

Theorem 3.1 Let u be the exact solution to the linear hyperbolic equation

    u_t + Σ_{i=1}^{d} A_i u_{x_i} + A_0 u = 0,   (x, t) ∈ Ω × (0, T],
    u(x, 0) = u_0(x),   x ∈ Ω,      (3.19)

where the initial condition u_0(x) is a sufficiently smooth function. Here, Ω ⊂ R^d. Let u_h be the numerical solution to Eq. (3.19), obtained using a discontinuous Galerkin scheme with an upwind flux over a uniform mesh with mesh spacing h. Let u*(x̄) = (K^{(r+1,l)}_T ⋆ u_h)(x̄) be the solution obtained by applying our newly proposed filter, which uses r + 1 = 2k + 1 central B-splines of order l = k + 1 and one general B-spline in boundary regions. Then the SIAC-filtered dg solution has the following properties:

(i) ‖(u - u*)(x̄)‖_{0,Ω} <= C h^3 for k = 1. That is, u*(x̄) is globally superconvergent of order three for linear approximations.
(ii) ‖(u - u*)(x̄)‖_{0, Ω \ supp{K_s}} <= C h^{r+1} when r + 1 = 2k + 1 central B-splines are used in the kernel. Here supp{K_s} represents the support of the symmetric kernel. Thus, u*(x̄) is superconvergent in the interior of the domain.
(iii) ‖(u - u*)(x̄)‖_{0,Ω} <= C h^{k+1} globally.

Proof We neglect the proof of properties (i) and (ii) as they are similar to the proofs in Cockburn et al. [5] and Ji et al. [8]. Instead we concentrate on ‖(u - u*)(x̄)‖_{0,Ω} <= C h^{k+1}, which is rather straightforward. Consider the one-dimensional case (d = 1). Then the error can be written as

    ‖u - K^{(r+1,l)}_T ⋆ u_h‖_{0,Ω} <= Θ_{h,1} + Θ_{h,2},

where Θ_{h,1} = ‖u - K^{(r+1,l)}_T ⋆ u‖_{0,Ω} and Θ_{h,2} = ‖K^{(r+1,l)}_T ⋆ (u - u_h)‖_{0,Ω}. The proof of higher-order convergence for the first term, Θ_{h,1}, is the same as in Cockburn et al.
[5], as the requirement on K_T does not change (reproduction of polynomials up to degree r + 1). This means that

    Θ_{h,1} <= C_1 (h^{r+1} / (r + 1)!) ‖u‖_{r+1,Ω}.

Now consider the second term, Θ_{h,2}. Without loss of generality, we consider the filter for the left boundary in order to estimate Θ_{h,2}. The proofs for the filter in the interior and at the right boundary are similar. We use the form of the kernel given in Eq. (3.8), which decomposes the new filter into two parts: 2k + 1 central B-splines and one general B-spline. That is, we write

    K^{(r+1,l)}_T(x) = Σ_{γ=0}^{r} c_γ^{(r+1,l)} ψ^(l)_{T(γ)}(x)   [central B-splines]
                       + c_{r+1}^{(r+1,l)} ψ^(l)_{T(r+1)}(x)       [general B-spline].

Setting e(x) = u(x) - u_h(x), we have Θ_{h,2} = ‖K^{(r+1,l)}_T ⋆ e‖_{0,Ω}. Hence

    Θ_{h,2} <= ( Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| + l |c_{r+1}^{(r+1,l)}| ) ‖e‖_{0,Ω} <= C h^{k+1},

since ‖e‖_{0,Ω} = ‖u - u_h‖_{0,Ω} <= C h^{k+1} for the dg approximation.

Remark 3.1 Note that in this analysis we steered away from the negative-order norm argument. Technically, the terms involving the central B-splines have a convergence rate of r + 1 = 2k + 1, as given in [5,8]. It is the new addition, the term involving the general B-spline, that presents the limitation and reduces the convergence rate to that of the dg approximation itself (i.e., it is accuracy-order conserving).

To extend this to the multidimensional case (d > 1), given an arbitrary x = (x_1, ..., x_d) ∈ R^d, we set

    ψ^(l)_{T(γ)}(x) = Π_{i=1}^{d} ψ^(l)_{T(γ_i)}(x_i).

The filter for the multidimensional space considered is of the form

    K^{(r+1,l)}_T(x) = Σ_{γ=0}^{r+1} c_γ^{(r+1,l)} ψ^(l)_{T(γ)}(x),

where the coefficients c_γ^{(r+1,l)} are tensor products of the one-dimensional coefficients. To emphasize the general B-spline used near the boundary, we assume, without loss of generality, that in the x_{k_1}, ..., x_{k_{d_0}} directions we need the general B-spline, where 0 <= d_0 <= d. Then

    ψ^(l)_{T(2k+1)} = Π_{i=1}^{d_0} ψ^(l)_{T(2k+1)}(x_{k_i}).

By applying the same idea we used for the one-dimensional case, the theorem also holds in the multidimensional case. We note that the constant in the final estimate is a product of two other constants: one of them is determined by the filter ( Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| + l |c_{r+1}^{(r+1,l)}| ) and the other one is determined

by the dg approximation. To further illustrate the necessity of examining the constant in the error term contributed by the filter, we provide Fig. 4.

Fig. 4 Plots demonstrating the effect of the coefficients on the error estimate for (a) P^3- and (b) P^4-polynomials. Shown is Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| using 2k + 1 central B-splines (blue dashed), using 4k + 1 central B-splines (green dash-dot-dot), and Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| + l |c_{r+1}^{(r+1,l)}| for the new filter (red line).

This figure demonstrates the difference between Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| for the previously introduced filters and our new filter, in which the constant is modified to Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| + l |c_{r+1}^{(r+1,l)}|. In Fig. 4, one can clearly see that by adding a general B-spline to the r + 1 central B-splines, we are able, in the boundary regions, to reduce the constant in the error term significantly.

3.4 Theoretical Results in the Non-uniform (Smoothly Varying) Case

In this section, we give a theoretical interpretation of the computational results presented in Curtis et al. [6]. This is done by using the newly proposed filter for non-uniform meshes and showing that the new position-dependent filter maintains the superconvergence property in the interior of the domain for smoothly varying meshes and is accuracy-order conserving near the boundaries for non-uniform meshes. We begin by defining what we mean by a smoothly varying mesh.

Definition 3.2 (Smoothly varying mesh) Let ξ be a variable defined over a uniform mesh on a domain Ω ⊂ R; then a smoothly varying mesh defined over Ω is a non-uniform mesh whose variable x satisfies

    x = ξ + f(ξ),      (3.20)

where f is a sufficiently smooth function that satisfies f'(ξ) > -1, so that the map ξ ↦ ξ + f(ξ) is strictly increasing. For example, we can choose f(ξ) = 0.5 sin(ξ) over [0, 2π]. The multi-dimensional definition can be given in the same way.

Lemma 3.2 Let u be the exact solution of the linear hyperbolic equation

    u_t + Σ_{n=1}^{d} A_n u_{x_n} + A_0 u = 0,   (x, t) ∈ Ω × (0, T],      (3.21)

with a sufficiently smooth initial function and Ω ⊂ R^d. Let ξ be the variable for the uniform mesh defined on Ω with size h, and let x be the variable of a smoothly varying mesh as defined in (3.20). Let u_h(ξ) be the numerical solution to Eq. (3.21) over the uniform mesh ξ, and u_h(x) be the approximation over the smoothly varying mesh x, both of them obtained by using the discontinuous Galerkin scheme. Then the post-processed solutions obtained by applying the SIAC filter K_h(ξ) to u_h(ξ) and K_H(x) to u_h(x), with a proper scaling H, are related by

    ‖u(x) - K_H ⋆ u_h(x)‖_{0,Ω} <= C ‖u(ξ) - K_h ⋆ u_h(ξ)‖_{0,Ω}.

Here, the filter K can be any filter we mentioned in the previous section (the symmetric filter, the SRV filter, and the newly proposed position-dependent filter). Note that this means that we obtain the full 2k + 1 superconvergence-rate behavior for both the SRV and symmetric filters.

Proof The proof is straightforward. If the scaling H is properly chosen, a simple mapping can be done from the smoothly varying mesh to the corresponding uniform mesh. The result holds if the Jacobian is bounded (from the definition of a smoothly varying mesh):

    ‖u(x) - K_H ⋆ u_h(x)‖²_{0,Ω} = ∫_Ω (u(x) - K_H ⋆ u_h(x))² dx
                                  = ∫_Ω (u(ξ) - K_h ⋆ u_h(ξ))² (1 + f'(ξ)) dξ
                                  <= ‖u(ξ) - K_h ⋆ u_h(ξ)‖²_{0,Ω} max_Ω (1 + f'(ξ)).

Hence

    ‖u(x) - K_H ⋆ u_h(x)‖_{0,Ω} <= C ‖u(ξ) - K_h ⋆ u_h(ξ)‖_{0,Ω},

where C = ( max_Ω (1 + f') )^{1/2}.

Remark 3.2 The proof seems obvious, but it is important to choose a proper scaling H in the computations. Due to the smoothness and computational-cost requirements, we need to keep H constant when treating points within the same element. Under this condition, the natural choice is H = Δx_j when post-processing the element I_j. It is now easy to see that there exists a position c in the element I_j such that H = Δx_j = (1 + f'(c)) h.

Remark 3.3 Note that Theorem 3.1 (iii) still holds for generalized non-uniform meshes. This is due to the proof not relying on the properties (i.e., structure) of the mesh. We have now shown that superconvergence can be achieved for interior solutions over smoothly varying meshes.
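A quick numerical check of Definition 3.2 and of the element-wise scaling suggested in Remark 3.2 can be sketched as follows, using f(ξ) = 0.5 sin ξ from the text (variable names are ours):

```python
import numpy as np

# Smoothly varying mesh of Definition 3.2 with f(xi) = 0.5*sin(xi) on [0, 2*pi].
N = 20
xi = np.linspace(0.0, 2.0 * np.pi, N + 1)   # uniform mesh, spacing h
h = xi[1] - xi[0]
x = xi + 0.5 * np.sin(xi)                   # non-uniform node positions

# f'(xi) = 0.5*cos(xi) > -1, so the map xi -> x is strictly increasing.
assert np.all(np.diff(x) > 0.0)

# Per-element kernel scaling from Remark 3.2: H = dx_j on element I_j.
H = np.diff(x)
# By the mean value theorem H = (1 + f'(c)) * h for some c in the element,
# so H/h stays within the bounds of 1 + f', here [0.5, 1.5].
assert np.all(H / h <= 1.5 + 1e-12) and np.all(H / h >= 0.5 - 1e-12)
```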
In the subsequent sections, we present numerical results that confirm our results on uniform and non-uniform (smoothly varying) meshes.

4 Numerical Results for One Dimension

The previous section introduced a new SIAC kernel formed by adding a general B-spline to a modified central B-spline kernel. The addition of a general B-spline helps to maintain a consistent support size for the kernel throughout the domain and eliminates the need for a multi-precision package. This section illustrates the performance of the new position-dependent SIAC filter on one-dimensional uniform and non-uniform (smoothly varying and random) meshes. We compare our results to the SRV filter [20]. In order to provide a fair comparison between

the SRV and new filters, we mainly show the results using quadruple precision for a few one-dimensional cases. Furthermore, in order to reduce the computational cost of the filter that uses 4k + 1 central B-splines, we neglect to implement the convex combination described in Eq. (2.10). This is not necessary for the new filter, and was implemented in the SRV filter to ensure that the transition from the one-sided filter to the symmetric filter was smooth. This is the first time that the position-dependent filters have been tested on non-uniform meshes. Although tests were performed using scalings of H = Δx_j and H = max_j Δx_j, we only present the results using a scaling of H = Δx_j. This scaling provides better results in boundary regions, which is one of the motivations of this paper. We note that the errors produced using a scaling of H = max_j Δx_j are quite similar and often produce smoother errors in the interior of the domain for smoothly varying meshes.

Remark 4.1 The SRV filter requires using quadruple precision in the computations to eliminate round-off error, which typically involves more computational effort than natively supported double precision. The new filter only requires double precision. In order to give a fair comparison between the SRV filter and the new filter, for the one-dimensional examples we have used quadruple precision to maintain a consistent computational environment.

4.1 Linear Transport Equation Over a Uniform Mesh

The first equation that we consider is a linear hyperbolic equation with periodic boundary conditions,

    u_t + u_x = 0,   (x, t) ∈ [0, 1] × (0, T],      (4.1)
    u(x, 0) = sin(2πx),   x ∈ [0, 1].      (4.2)

The exact solution is a periodic translation of the sine function, u(x, t) = sin(2π(x - t)). For T = 0, this is simply the L²-projection of the initial condition. Here, we consider a final time of T = 1 and note that we expect similar results at later times.
The discontinuous Galerkin approximation error and the position-dependent SIAC filtered error results are shown in Tables 1 and 2 for both quadruple precision and double precision. Using quadruple precision, both filters reduce the errors in the post-processed solution, although the new filter has only a minor reduction in the quality of the error. However, using double precision, only the new filter can maintain this error reduction for P^3- and P^4-polynomials. We note that we concentrate on the results for P^3- and P^4-polynomials, as there is no noticeable difference between double and quadruple precision for P^1- and P^2-polynomials. The pointwise error plots are given in Figs. 5 and 6. When using quadruple precision, as in Fig. 5, the SRV filter can reduce the error of the dg solution better than the new filter for fine meshes. However, it uses 2k - 1 more B-splines than the newly generated filter. This difference is noticeable when using double precision, which is almost ten times faster than using quadruple precision for P^3 and P^4. For such examples the new filter performs better both computationally and numerically (in terms of error). Tables 1 and 2 show that the SRV filter can only reduce the error for fine meshes when using P^4 piecewise polynomials. The new filter performs as well as when using quadruple precision and reduces the error magnitude at a reduced computational cost.
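The "Order" columns reported in the tables are the usual observed convergence rates between successive factor-of-two mesh refinements. As a small reproducible sketch (the helper name and the sample error values are ours, chosen to mimic an O(h^{2k+1}) rate with k = 1):

```python
import math

def convergence_orders(errors):
    """Observed order between successive mesh refinements, assuming the mesh
    spacing is halved each time (as in the tables): order_i = log2(e_i / e_{i+1})."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# Hypothetical L2 errors decaying like C*h^3 (k = 1, superconvergent rate 2k + 1):
errs = [2.5e-4 / 8**i for i in range(4)]
print(convergence_orders(errs))   # -> [3.0, 3.0, 3.0]
```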

Table 1 L²- and L∞-errors for the dg approximation together with the SRV and new filters for the linear transport equation using polynomials of degree k = 1, ..., 4 (quadruple precision) over uniform meshes.

Table 2 L²- and L∞-errors for the dg approximation together with the SRV and new filters for the linear transport equation using polynomials of degree k = 3, 4 (double precision) over uniform meshes.

Fig. 5 Comparison of the pointwise errors in log scale of the original dg solution (left column), the SRV filter (middle column), and the new filter (right column) for the linear transport equation over uniform meshes using polynomials of degree k = 3, 4 (top and bottom rows, respectively). Quadruple precision was used in the computations.

Fig. 6 Comparison of the pointwise errors in log scale of the original dg solution (left column), the SRV filter (middle column), and the new filter (right column) for the linear transport equation over uniform meshes using polynomials of degree k = 3, 4 (top and bottom rows, respectively). Double precision was used in the computations.

Additionally, we point out that the accuracy of the SRV filter depends on (1) having higher regularity of C^{4k+1}, (2) a well-resolved dg solution, and (3) a wide enough support (at least 5k + 1 elements). The same phenomenon will also be observed in the following tests, such as for a nonlinear equation. For the new filter, the support size remains the same throughout the domain (3k + 1 elements), and a higher degree of regularity is not necessary.

4.2 Non-uniform Meshes

We begin by defining three non-uniform meshes that are used in the numerical examples. The meshes tested are:

Mesh 4.1 (Smoothly varying mesh with periodicity) The first mesh is a simple smoothly varying mesh. It is defined by x = ξ + b sin(ξ), where ξ is a uniform mesh variable and b = 0.5, as in Curtis et al. [6]. We note that the tests were also performed for different values of b; similar results were attained in all cases. This mesh has the nice feature that it is periodic and that the elements near the boundaries have a larger element size.

Mesh 4.2 (Smooth polynomial mesh) The second mesh is also a smoothly varying mesh, but it does not have a periodic structure. It is defined by x = ξ - 0.05(ξ - 2π)ξ. For this mesh, the size of the elements gradually decreases from left to right.

Mesh 4.3 (Randomly varying mesh) The third mesh is a mesh with randomly distributed elements. The element size varies between [0.8h, 1.2h], where h is the uniform mesh size.

We will now present numerical results demonstrating the usefulness of the position-dependent SIAC filter of van Slingerland et al. [20] and our new one-sided SIAC filter for the aforementioned meshes.

4.3 Linear Transport Equation

The first example that we consider is a linear transport equation,

    u_t + u_x = 0,   (x, t) ∈ [0, 2π] × (0, T],
    u(x, 0) = sin(x),      (4.3)

with periodic boundary conditions and the errors calculated at T = 2π. We calculate the discontinuous Galerkin approximations for this equation over the three different non-uniform meshes (Meshes 4.1, 4.2, and 4.3).
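The three test meshes above can be generated as follows. This is a sketch under our own conventions: the function names are ours, and for Mesh 4.3 the random element sizes are rescaled so the last node lands on the right boundary, which perturbs the sizes slightly beyond [0.8h, 1.2h].

```python
import numpy as np

def mesh_41(N):
    """Mesh 4.1: smoothly varying, periodic: x = xi + b*sin(xi) with b = 0.5."""
    xi = np.linspace(0.0, 2.0 * np.pi, N + 1)
    return xi + 0.5 * np.sin(xi)

def mesh_42(N):
    """Mesh 4.2: smooth polynomial mesh, x = xi - 0.05*(xi - 2*pi)*xi;
    element sizes decrease gradually from left to right."""
    xi = np.linspace(0.0, 2.0 * np.pi, N + 1)
    return xi - 0.05 * (xi - 2.0 * np.pi) * xi

def mesh_43(N, seed=0):
    """Mesh 4.3: random element sizes drawn from [0.8h, 1.2h], then rescaled
    to cover [0, 2*pi] exactly (the seed is ours, for reproducibility)."""
    h = 2.0 * np.pi / N
    rng = np.random.default_rng(seed)
    dx = rng.uniform(0.8 * h, 1.2 * h, N)
    x = np.concatenate(([0.0], np.cumsum(dx)))
    return x * (2.0 * np.pi / x[-1])
```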
The approximation is then post-processed at the final time in order to analyze the numerical errors. We note that although the boundary conditions for the equation are periodic, in the boundary regions we implement the one-sided filter of van Slingerland et al. [20] as the SRV filter and compare it with the new filter presented above. The pointwise error plots for the periodically smoothly varying mesh are given in Fig. 7, with the corresponding errors presented in Table 3. In the boundary regions, the SRV filter behaves slightly better for coarse meshes than the new filter. However, we recall that this filter essentially doubles the support in the boundary regions. Additionally, we see that the new filter has a higher convergence rate than k + 1, which is better than the theoretically predicted convergence rate. For the smooth polynomial Mesh 4.2 (without a periodic property), the results using the scaling H = Δx_j are presented in Fig. 8 and Table 3. Unlike the previous example, without the periodic property, the SRV filter leads to significantly worse performance. The SRV filter no longer enhances the accuracy order and has larger errors near the boundaries.

Fig. 7 Comparison of the pointwise errors in log scale of the original dg solution (left column), the SRV filter (middle column), and the new filter (right column) for the linear transport Eq. (4.3) over a smoothly varying mesh (Mesh 4.1). The kernel scaling H = Δx_j and quadruple precision were used in the computations.

Table 3 L²- and L∞-errors for the dg approximation together with the SRV and the new filters for the linear transport Eq. (4.3) over the three Meshes 4.1, 4.2, and 4.3.


A scaling of H = Δx_j along with quadruple precision was used in the computations.

Fig. 8 Comparison of the pointwise errors in log scale of the original dg solution (left column), the SRV filter (middle column), and the new filter (right column) for the linear transport Eq. (4.3) over the smooth polynomial mesh (Mesh 4.2). The kernel scaling H = Δx_j and quadruple precision were used in the computations.

On the other hand, the new filter still improves the accuracy when the mesh is sufficiently refined (N = 40). Numerically, the new filter obtains a higher accuracy order than k + 1. For higher-order polynomials, P^3 and P^4, we see that it achieves an accuracy order of 2k + 1, although this is not theoretically guaranteed. Lastly, the filters were applied to dg solutions over a randomly distributed mesh. For this randomly varying mesh, the new filter again reduces the errors except for a very coarse mesh (see Table 3). The accuracy order is decreased compared to the smoothly varying mesh example, but it is still higher than k + 1. Unlike for the smoothly varying mesh, there are more oscillations in the errors (Fig. 9). However, the oscillations are still reduced compared to the dg solutions. We note that these results suggest that the SRV filter may only be suitable for uniform meshes.

4.4 Variable Coefficient Equation

In this example, we consider the variable coefficient equation

    u_t + (a u)_x = f,   x ∈ [0, 2π], t ∈ (0, T],
    a(x, t) = 2 + sin(x + t),
    u(x, 0) = sin(x),      (4.4)

at T = 2π. Similar to the previous constant coefficient Eq. (4.3), we also test this variable coefficient Eq. (4.4) over the three different non-uniform meshes (Meshes 4.1, 4.2, and 4.3). Since the results are similar to those for the linear transport Eq. (4.3), we do not re-describe the details of the results here. We only note that the results for the variable coefficient equation have more wiggles than those for the constant coefficient equation. This may be an important issue in extending these ideas to nonlinear equations. To save space, we only show the P^3 and P^4 results; P^1 and P^2 are similar to the previous examples. Figure 10 shows the pointwise error plots for the dg and post-processed approximations over a smoothly varying mesh. The corresponding errors are given in Table 4. The results are similar to those for the linear transport equation. The two filters perform similarly, with the new filter using fewer function evaluations. For the smooth polynomial Mesh 4.2, we show the pointwise error plots in Fig. 11.
The corresponding errors are given in Table 4. In this example we see that the new filter behaves better at the boundaries than the SRV filter. This may be due to its more compact kernel support size. Finally, we test the variable coefficient Eq. (4.4) over the randomly varying Mesh 4.3. Similar to the linear transport example, the pointwise error plots (Fig. 12) show more oscillations than the smoothly varying mesh examples. We again see that the new filter has smaller errors at the boundaries than the SRV filter.

5 Numerical Results for Two Dimensions

5.1 Linear Transport Equation Over a Uniform Mesh

To demonstrate the performance of the new filter in two dimensions, we consider the solution to a linear transport equation,

    u_t + u_x + u_y = 0,   (x, y) ∈ [0, 2π] × [0, 2π],      (5.1)

Fig. 9 Comparison of the pointwise errors in log scale of the original dg solution (left column), the SRV filter (middle column), and the new filter (right column) for the linear transport Eq. (4.3) over the randomly varying Mesh 4.3. The kernel scaling H = Δx_j and quadruple precision were used in the computations.

Fig. 10 Comparison of the pointwise errors in log scale of the original dg solution (left column), the SRV filter (middle column), and the new filter (right column) for the variable coefficient Eq. (4.4) over a smoothly varying mesh (Mesh 4.1). The kernel scaling H = Δx_j and quadruple precision were used in the computations.

Table 4 L²- and L∞-errors for the dg approximation together with the SRV and the new filters for the variable coefficient Eq. (4.4) using a dg approximation of polynomial degree k = 3, 4 over the three Meshes 4.1, 4.2, and 4.3.

The filters use the scaling H = Δx_j. Quadruple precision was used in the computations.

Fig. 11 Comparison of the pointwise errors in log scale of the original dg solution (left column), the SRV filter (middle column), and the new filter (right column) for the variable coefficient Eq. (4.4) over the smooth polynomial mesh (Mesh 4.2). The kernel scaling H = Δx_j and quadruple precision were used in the computations.

Fig. 12 Comparison of the pointwise errors in log scale of the original dg solution (left column), the SRV filter (middle column), and the new filter (right column) for the variable coefficient Eq. (4.4) over the randomly varying mesh (Mesh 4.3). The kernel scaling H = Δx_j and quadruple precision were used in the computations.

Table 5 L²- and L∞-errors for the dg approximation together with the SRV and new filters for a 2D linear transport equation solved over a uniform mesh using polynomials of degree k = 3, 4. Double precision was used in the computations.

Fig. 13 Comparison of the pointwise errors in log scale of the original dg solution (left), the SRV filter (middle), and the new filter (right) for the 2D linear transport equation using polynomials of degree k = 4 and a uniform mesh. Double precision was used in the computations.

Table 6 L²- and L∞-errors for the dg approximation together with the SRV and new filters for the 2D linear transport Eq. (5.1) using polynomials of degree k = 3, 4 over the three meshes: Meshes 4.1, 4.2, and 4.3.

The filters use the scaling Hx = Δx_j in the x-direction and Hy = Δy_j in the y-direction. Double precision was used in the computations.

Fig. 14 Comparison of the pointwise errors in log scale of the original dg solution (left), the SRV filter (middle), and the new filter (right) for the 2D linear transport Eq. (5.1) using polynomials of degree k = 3, 4 over a smoothly varying mesh (Mesh 4.2). The filters use the scaling Hx = Δx_j in the x-direction and Hy = Δy_j in the y-direction. Double precision was used in the computations.

J Sci Comput

Fig. 15 Comparison of the pointwise errors on a log scale between the SRV and new filters for the 2D linear transport equation using polynomials of degree k = 3, 4. A smoothly varying mesh defined by x = ξ - b(ξ - 2π)ξ, y = ξ - b(ξ - 2π)ξ with b = 0.05 was used. Filter scaling was based upon the local element size. Double precision was used in the computations.

Fig. 16 Comparison of the pointwise errors in log scale of the original dg solution (a), the SRV filter (b), and the new filter (c) for the 2D linear transport Eq. (5.1) using polynomials of degree k = 3, 4 over a randomly varying mesh. The filters use the scaling Hx = Δx_j in the x-direction and Hy = Δy_j in the y-direction. Double precision was used in the computations.


More information

Volume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households

Volume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households Volume 29, Issue 3 Existence of competitive equilibrium in economies wit multi-member ouseolds Noriisa Sato Graduate Scool of Economics, Waseda University Abstract Tis paper focuses on te existence of

More information

Fourier Type Super Convergence Study on DDGIC and Symmetric DDG Methods

Fourier Type Super Convergence Study on DDGIC and Symmetric DDG Methods DOI 0.007/s095-07-048- Fourier Type Super Convergence Study on DDGIC and Symmetric DDG Metods Mengping Zang Jue Yan Received: 7 December 06 / Revised: 7 April 07 / Accepted: April 07 Springer Science+Business

More information

Numerical Differentiation

Numerical Differentiation Numerical Differentiation Finite Difference Formulas for te first derivative (Using Taylor Expansion tecnique) (section 8.3.) Suppose tat f() = g() is a function of te variable, and tat as 0 te function

More information

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example,

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example, NUMERICAL DIFFERENTIATION James T Smit San Francisco State University In calculus classes, you compute derivatives algebraically: for example, f( x) = x + x f ( x) = x x Tis tecnique requires your knowing

More information

A Hybrid Mixed Discontinuous Galerkin Finite Element Method for Convection-Diffusion Problems

A Hybrid Mixed Discontinuous Galerkin Finite Element Method for Convection-Diffusion Problems A Hybrid Mixed Discontinuous Galerkin Finite Element Metod for Convection-Diffusion Problems Herbert Egger Joacim Scöberl We propose and analyse a new finite element metod for convection diffusion problems

More information

A = h w (1) Error Analysis Physics 141

A = h w (1) Error Analysis Physics 141 Introduction In all brances of pysical science and engineering one deals constantly wit numbers wic results more or less directly from experimental observations. Experimental observations always ave inaccuracies.

More information

AN ANALYSIS OF THE EMBEDDED DISCONTINUOUS GALERKIN METHOD FOR SECOND ORDER ELLIPTIC PROBLEMS

AN ANALYSIS OF THE EMBEDDED DISCONTINUOUS GALERKIN METHOD FOR SECOND ORDER ELLIPTIC PROBLEMS AN ANALYSIS OF THE EMBEDDED DISCONTINUOUS GALERKIN METHOD FOR SECOND ORDER ELLIPTIC PROBLEMS BERNARDO COCKBURN, JOHNNY GUZMÁN, SEE-CHEW SOON, AND HENRYK K. STOLARSKI Abstract. Te embedded discontinuous

More information

THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Math 225

THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Math 225 THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Mat 225 As we ave seen, te definition of derivative for a Mat 111 function g : R R and for acurveγ : R E n are te same, except for interpretation:

More information

Function Composition and Chain Rules

Function Composition and Chain Rules Function Composition and s James K. Peterson Department of Biological Sciences and Department of Matematical Sciences Clemson University Marc 8, 2017 Outline 1 Function Composition and Continuity 2 Function

More information

Discontinuous Galerkin Methods for Relativistic Vlasov-Maxwell System

Discontinuous Galerkin Methods for Relativistic Vlasov-Maxwell System Discontinuous Galerkin Metods for Relativistic Vlasov-Maxwell System He Yang and Fengyan Li December 1, 16 Abstract e relativistic Vlasov-Maxwell (RVM) system is a kinetic model tat describes te dynamics

More information

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx. Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions

More information

Sin, Cos and All That

Sin, Cos and All That Sin, Cos and All Tat James K. Peterson Department of Biological Sciences and Department of Matematical Sciences Clemson University Marc 9, 2017 Outline Sin, Cos and all tat! A New Power Rule Derivatives

More information

2.8 The Derivative as a Function

2.8 The Derivative as a Function .8 Te Derivative as a Function Typically, we can find te derivative of a function f at many points of its domain: Definition. Suppose tat f is a function wic is differentiable at every point of an open

More information

Math 312 Lecture Notes Modeling

Math 312 Lecture Notes Modeling Mat 3 Lecture Notes Modeling Warren Weckesser Department of Matematics Colgate University 5 7 January 006 Classifying Matematical Models An Example We consider te following scenario. During a storm, a

More information

Jian-Guo Liu 1 and Chi-Wang Shu 2

Jian-Guo Liu 1 and Chi-Wang Shu 2 Journal of Computational Pysics 60, 577 596 (000) doi:0.006/jcp.000.6475, available online at ttp://www.idealibrary.com on Jian-Guo Liu and Ci-Wang Su Institute for Pysical Science and Tecnology and Department

More information

Symmetry Labeling of Molecular Energies

Symmetry Labeling of Molecular Energies Capter 7. Symmetry Labeling of Molecular Energies Notes: Most of te material presented in tis capter is taken from Bunker and Jensen 1998, Cap. 6, and Bunker and Jensen 2005, Cap. 7. 7.1 Hamiltonian Symmetry

More information

A Mixed-Hybrid-Discontinuous Galerkin Finite Element Method for Convection-Diffusion Problems

A Mixed-Hybrid-Discontinuous Galerkin Finite Element Method for Convection-Diffusion Problems A Mixed-Hybrid-Discontinuous Galerkin Finite Element Metod for Convection-Diffusion Problems Herbert Egger Joacim Scöberl We propose and analyse a new finite element metod for convection diffusion problems

More information

Taylor Series and the Mean Value Theorem of Derivatives

Taylor Series and the Mean Value Theorem of Derivatives 1 - Taylor Series and te Mean Value Teorem o Derivatives Te numerical solution o engineering and scientiic problems described by matematical models oten requires solving dierential equations. Dierential

More information

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x)

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x) Calculus. Gradients and te Derivative Q f(x+) δy P T δx R f(x) 0 x x+ Let P (x, f(x)) and Q(x+, f(x+)) denote two points on te curve of te function y = f(x) and let R denote te point of intersection of

More information

2.11 That s So Derivative

2.11 That s So Derivative 2.11 Tat s So Derivative Introduction to Differential Calculus Just as one defines instantaneous velocity in terms of average velocity, we now define te instantaneous rate of cange of a function at a point

More information

Exercises for numerical differentiation. Øyvind Ryan

Exercises for numerical differentiation. Øyvind Ryan Exercises for numerical differentiation Øyvind Ryan February 25, 2013 1. Mark eac of te following statements as true or false. a. Wen we use te approximation f (a) (f (a +) f (a))/ on a computer, we can

More information

1. Introduction. We consider the model problem: seeking an unknown function u satisfying

1. Introduction. We consider the model problem: seeking an unknown function u satisfying A DISCONTINUOUS LEAST-SQUARES FINITE ELEMENT METHOD FOR SECOND ORDER ELLIPTIC EQUATIONS XIU YE AND SHANGYOU ZHANG Abstract In tis paper, a discontinuous least-squares (DLS) finite element metod is introduced

More information

How to Find the Derivative of a Function: Calculus 1

How to Find the Derivative of a Function: Calculus 1 Introduction How to Find te Derivative of a Function: Calculus 1 Calculus is not an easy matematics course Te fact tat you ave enrolled in suc a difficult subject indicates tat you are interested in te

More information

Inf sup testing of upwind methods

Inf sup testing of upwind methods INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING Int. J. Numer. Met. Engng 000; 48:745 760 Inf sup testing of upwind metods Klaus-Jurgen Bate 1; ;, Dena Hendriana 1, Franco Brezzi and Giancarlo

More information

c 2006 Society for Industrial and Applied Mathematics

c 2006 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 27, No. 4, pp. 47 492 c 26 Society for Industrial and Applied Matematics A NOVEL MULTIGRID BASED PRECONDITIONER FOR HETEROGENEOUS HELMHOLTZ PROBLEMS Y. A. ERLANGGA, C. W. OOSTERLEE,

More information

Arbitrary order exactly divergence-free central discontinuous Galerkin methods for ideal MHD equations

Arbitrary order exactly divergence-free central discontinuous Galerkin methods for ideal MHD equations Arbitrary order exactly divergence-free central discontinuous Galerkin metods for ideal MHD equations Fengyan Li, Liwei Xu Department of Matematical Sciences, Rensselaer Polytecnic Institute, Troy, NY

More information

Mathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative

Mathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative Matematics 5 Workseet 11 Geometry, Tangency, and te Derivative Problem 1. Find te equation of a line wit slope m tat intersects te point (3, 9). Solution. Te equation for a line passing troug a point (x

More information

CS522 - Partial Di erential Equations

CS522 - Partial Di erential Equations CS5 - Partial Di erential Equations Tibor Jánosi April 5, 5 Numerical Di erentiation In principle, di erentiation is a simple operation. Indeed, given a function speci ed as a closed-form formula, its

More information

arxiv: v1 [math.na] 20 Nov 2018

arxiv: v1 [math.na] 20 Nov 2018 An HDG Metod for Tangential Boundary Control of Stokes Equations I: Hig Regularity Wei Gong Weiwei Hu Mariano Mateos Jon R. Singler Yangwen Zang arxiv:1811.08522v1 [mat.na] 20 Nov 2018 November 22, 2018

More information

Variational Localizations of the Dual Weighted Residual Estimator

Variational Localizations of the Dual Weighted Residual Estimator Publised in Journal for Computational and Applied Matematics, pp. 192-208, 2015 Variational Localizations of te Dual Weigted Residual Estimator Tomas Ricter Tomas Wick Te dual weigted residual metod (DWR)

More information

LECTURE 14 NUMERICAL INTEGRATION. Find

LECTURE 14 NUMERICAL INTEGRATION. Find LECTURE 14 NUMERCAL NTEGRATON Find b a fxdx or b a vx ux fx ydy dx Often integration is required. However te form of fx may be suc tat analytical integration would be very difficult or impossible. Use

More information

A First-Order System Approach for Diffusion Equation. I. Second-Order Residual-Distribution Schemes

A First-Order System Approach for Diffusion Equation. I. Second-Order Residual-Distribution Schemes A First-Order System Approac for Diffusion Equation. I. Second-Order Residual-Distribution Scemes Hiroaki Nisikawa W. M. Keck Foundation Laboratory for Computational Fluid Dynamics, Department of Aerospace

More information

LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION

LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LAURA EVANS.. Introduction Not all differential equations can be explicitly solved for y. Tis can be problematic if we need to know te value of y

More information

Discretization of Multipole Sources in a Finite Difference. Setting for Wave Propagation Problems

Discretization of Multipole Sources in a Finite Difference. Setting for Wave Propagation Problems Discretization of Multipole Sources in a Finite Difference Setting for Wave Propagation Problems Mario J. Bencomo 1 and William Symes 2 1 Institute for Computational and Experimental Researc in Matematics,

More information

Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems

Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems Comp. Part. Mec. 04) :357 37 DOI 0.007/s4057-04-000-9 Optimal parameters for a ierarcical grid data structure for contact detection in arbitrarily polydisperse particle systems Dinant Krijgsman Vitaliy

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL IFFERENTIATION FIRST ERIVATIVES Te simplest difference formulas are based on using a straigt line to interpolate te given data; tey use two data pints to estimate te derivative. We assume tat

More information

MATH745 Fall MATH745 Fall

MATH745 Fall MATH745 Fall MATH745 Fall 5 MATH745 Fall 5 INTRODUCTION WELCOME TO MATH 745 TOPICS IN NUMERICAL ANALYSIS Instructor: Dr Bartosz Protas Department of Matematics & Statistics Email: bprotas@mcmasterca Office HH 36, Ext

More information

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note

More information

ch (for some fixed positive number c) reaching c

ch (for some fixed positive number c) reaching c GSTF Journal of Matematics Statistics and Operations Researc (JMSOR) Vol. No. September 05 DOI 0.60/s4086-05-000-z Nonlinear Piecewise-defined Difference Equations wit Reciprocal and Cubic Terms Ramadan

More information

Finite Difference Methods Assignments

Finite Difference Methods Assignments Finite Difference Metods Assignments Anders Söberg and Aay Saxena, Micael Tuné, and Maria Westermarck Revised: Jarmo Rantakokko June 6, 1999 Teknisk databeandling Assignment 1: A one-dimensional eat equation

More information

arxiv: v1 [math.na] 3 Nov 2011

arxiv: v1 [math.na] 3 Nov 2011 arxiv:.983v [mat.na] 3 Nov 2 A Finite Difference Gost-cell Multigrid approac for Poisson Equation wit mixed Boundary Conditions in Arbitrary Domain Armando Coco, Giovanni Russo November 7, 2 Abstract In

More information

Mass Lumping for Constant Density Acoustics

Mass Lumping for Constant Density Acoustics Lumping 1 Mass Lumping for Constant Density Acoustics William W. Symes ABSTRACT Mass lumping provides an avenue for efficient time-stepping of time-dependent problems wit conforming finite element spatial

More information

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES (Section.0: Difference Quotients).0. SECTION.0: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES Define average rate of cange (and average velocity) algebraically and grapically. Be able to identify, construct,

More information

Entropy and the numerical integration of conservation laws

Entropy and the numerical integration of conservation laws Pysics Procedia Pysics Procedia 00 2011) 1 28 Entropy and te numerical integration of conservation laws Gabriella Puppo Dipartimento di Matematica, Politecnico di Torino Italy) Matteo Semplice Dipartimento

More information

GRID CONVERGENCE ERROR ANALYSIS FOR MIXED-ORDER NUMERICAL SCHEMES

GRID CONVERGENCE ERROR ANALYSIS FOR MIXED-ORDER NUMERICAL SCHEMES GRID CONVERGENCE ERROR ANALYSIS FOR MIXED-ORDER NUMERICAL SCHEMES Cristoper J. Roy Sandia National Laboratories* P. O. Box 5800, MS 085 Albuquerque, NM 8785-085 AIAA Paper 00-606 Abstract New developments

More information

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator. Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative

More information

arxiv: v1 [math.na] 28 Apr 2017

arxiv: v1 [math.na] 28 Apr 2017 THE SCOTT-VOGELIUS FINITE ELEMENTS REVISITED JOHNNY GUZMÁN AND L RIDGWAY SCOTT arxiv:170500020v1 [matna] 28 Apr 2017 Abstract We prove tat te Scott-Vogelius finite elements are inf-sup stable on sape-regular

More information

Continuity and Differentiability Worksheet

Continuity and Differentiability Worksheet Continuity and Differentiability Workseet (Be sure tat you can also do te grapical eercises from te tet- Tese were not included below! Typical problems are like problems -3, p. 6; -3, p. 7; 33-34, p. 7;

More information

232 Calculus and Structures

232 Calculus and Structures 3 Calculus and Structures CHAPTER 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS FOR EVALUATING BEAMS Calculus and Structures 33 Copyrigt Capter 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS 17.1 THE

More information

Exam 1 Review Solutions

Exam 1 Review Solutions Exam Review Solutions Please also review te old quizzes, and be sure tat you understand te omework problems. General notes: () Always give an algebraic reason for your answer (graps are not sufficient),

More information

Dedicated to the 70th birthday of Professor Lin Qun

Dedicated to the 70th birthday of Professor Lin Qun Journal of Computational Matematics, Vol.4, No.3, 6, 4 44. ACCELERATION METHODS OF NONLINEAR ITERATION FOR NONLINEAR PARABOLIC EQUATIONS Guang-wei Yuan Xu-deng Hang Laboratory of Computational Pysics,

More information

The Laplace equation, cylindrically or spherically symmetric case

The Laplace equation, cylindrically or spherically symmetric case Numerisce Metoden II, 7 4, und Übungen, 7 5 Course Notes, Summer Term 7 Some material and exercises Te Laplace equation, cylindrically or sperically symmetric case Electric and gravitational potential,

More information

Flavius Guiaş. X(t + h) = X(t) + F (X(s)) ds.

Flavius Guiaş. X(t + h) = X(t) + F (X(s)) ds. Numerical solvers for large systems of ordinary differential equations based on te stocastic direct simulation metod improved by te and Runge Kutta principles Flavius Guiaş Abstract We present a numerical

More information

Derivatives. By: OpenStaxCollege

Derivatives. By: OpenStaxCollege By: OpenStaxCollege Te average teen in te United States opens a refrigerator door an estimated 25 times per day. Supposedly, tis average is up from 10 years ago wen te average teenager opened a refrigerator

More information

Error estimates for a semi-implicit fully discrete finite element scheme for the mean curvature flow of graphs

Error estimates for a semi-implicit fully discrete finite element scheme for the mean curvature flow of graphs Interfaces and Free Boundaries 2, 2000 34 359 Error estimates for a semi-implicit fully discrete finite element sceme for te mean curvature flow of graps KLAUS DECKELNICK Scool of Matematical Sciences,

More information

3.4 Worksheet: Proof of the Chain Rule NAME

3.4 Worksheet: Proof of the Chain Rule NAME Mat 1170 3.4 Workseet: Proof of te Cain Rule NAME Te Cain Rule So far we are able to differentiate all types of functions. For example: polynomials, rational, root, and trigonometric functions. We are

More information

arxiv: v1 [physics.flu-dyn] 3 Jun 2015

arxiv: v1 [physics.flu-dyn] 3 Jun 2015 A Convective-like Energy-Stable Open Boundary Condition for Simulations of Incompressible Flows arxiv:156.132v1 [pysics.flu-dyn] 3 Jun 215 S. Dong Center for Computational & Applied Matematics Department

More information

Math 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006

Math 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006 Mat 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006 f(x+) f(x) 10 1. For f(x) = x 2 + 2x 5, find ))))))))) and simplify completely. NOTE: **f(x+) is NOT f(x)+! f(x+) f(x) (x+) 2 + 2(x+) 5 ( x 2

More information

arxiv: v1 [math.na] 9 Mar 2018

arxiv: v1 [math.na] 9 Mar 2018 A simple embedded discrete fracture-matrix model for a coupled flow and transport problem in porous media Lars H. Odsæter a,, Trond Kvamsdal a, Mats G. Larson b a Department of Matematical Sciences, NTNU

More information

IEOR 165 Lecture 10 Distribution Estimation

IEOR 165 Lecture 10 Distribution Estimation IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat

More information

HOMEWORK HELP 2 FOR MATH 151

HOMEWORK HELP 2 FOR MATH 151 HOMEWORK HELP 2 FOR MATH 151 Here we go; te second round of omework elp. If tere are oters you would like to see, let me know! 2.4, 43 and 44 At wat points are te functions f(x) and g(x) = xf(x)continuous,

More information

Fast Exact Univariate Kernel Density Estimation

Fast Exact Univariate Kernel Density Estimation Fast Exact Univariate Kernel Density Estimation David P. Hofmeyr Department of Statistics and Actuarial Science, Stellenbosc University arxiv:1806.00690v2 [stat.co] 12 Jul 2018 July 13, 2018 Abstract Tis

More information

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 01 Aug 08, 2016.

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 01 Aug 08, 2016. Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals Gary D. Simpson gsim1887@aol.com rev 1 Aug 8, 216 Summary Definitions are presented for "quaternion functions" of a quaternion. Polynomial

More information

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t).

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t). . Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd, periodic function tat as been sifted upwards, so we will use

More information

Effect of the Dependent Paths in Linear Hull

Effect of the Dependent Paths in Linear Hull 1 Effect of te Dependent Pats in Linear Hull Zenli Dai, Meiqin Wang, Yue Sun Scool of Matematics, Sandong University, Jinan, 250100, Cina Key Laboratory of Cryptologic Tecnology and Information Security,

More information

LEAST-SQUARES FINITE ELEMENT APPROXIMATIONS TO SOLUTIONS OF INTERFACE PROBLEMS

LEAST-SQUARES FINITE ELEMENT APPROXIMATIONS TO SOLUTIONS OF INTERFACE PROBLEMS SIAM J. NUMER. ANAL. c 998 Society for Industrial Applied Matematics Vol. 35, No., pp. 393 405, February 998 00 LEAST-SQUARES FINITE ELEMENT APPROXIMATIONS TO SOLUTIONS OF INTERFACE PROBLEMS YANZHAO CAO

More information

Introduction to Derivatives

Introduction to Derivatives Introduction to Derivatives 5-Minute Review: Instantaneous Rates and Tangent Slope Recall te analogy tat we developed earlier First we saw tat te secant slope of te line troug te two points (a, f (a))

More information

arxiv: v1 [math.na] 9 Sep 2015

arxiv: v1 [math.na] 9 Sep 2015 arxiv:509.02595v [mat.na] 9 Sep 205 An Expandable Local and Parallel Two-Grid Finite Element Sceme Yanren ou, GuangZi Du Abstract An expandable local and parallel two-grid finite element sceme based on

More information

Functions of the Complex Variable z

Functions of the Complex Variable z Capter 2 Functions of te Complex Variable z Introduction We wis to examine te notion of a function of z were z is a complex variable. To be sure, a complex variable can be viewed as noting but a pair of

More information

Chapter 2 Ising Model for Ferromagnetism

Chapter 2 Ising Model for Ferromagnetism Capter Ising Model for Ferromagnetism Abstract Tis capter presents te Ising model for ferromagnetism, wic is a standard simple model of a pase transition. Using te approximation of mean-field teory, te

More information

Investigating Euler s Method and Differential Equations to Approximate π. Lindsay Crowl August 2, 2001

Investigating Euler s Method and Differential Equations to Approximate π. Lindsay Crowl August 2, 2001 Investigating Euler s Metod and Differential Equations to Approximate π Lindsa Crowl August 2, 2001 Tis researc paper focuses on finding a more efficient and accurate wa to approximate π. Suppose tat x

More information

A Numerical Scheme for Particle-Laden Thin Film Flow in Two Dimensions

A Numerical Scheme for Particle-Laden Thin Film Flow in Two Dimensions A Numerical Sceme for Particle-Laden Tin Film Flow in Two Dimensions Mattew R. Mata a,, Andrea L. Bertozzi a a Department of Matematics, University of California Los Angeles, 520 Portola Plaza, Los Angeles,

More information

Some Review Problems for First Midterm Mathematics 1300, Calculus 1

Some Review Problems for First Midterm Mathematics 1300, Calculus 1 Some Review Problems for First Midterm Matematics 00, Calculus. Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd,

More information

arxiv: v2 [math.na] 11 Dec 2016

arxiv: v2 [math.na] 11 Dec 2016 Noname manuscript No. will be inserted by te editor Sallow water equations: Split-form, entropy stable, well-balanced, and positivity preserving numerical metods Hendrik Ranoca arxiv:609.009v [mat.na]

More information

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set WYSE Academic Callenge 00 Sectional Matematics Solution Set. Answer: B. Since te equation can be written in te form x + y, we ave a major 5 semi-axis of lengt 5 and minor semi-axis of lengt. Tis means

More information

Integral Calculus, dealing with areas and volumes, and approximate areas under and between curves.

Integral Calculus, dealing with areas and volumes, and approximate areas under and between curves. Calculus can be divided into two ke areas: Differential Calculus dealing wit its, rates of cange, tangents and normals to curves, curve sketcing, and applications to maima and minima problems Integral

More information

7 Semiparametric Methods and Partially Linear Regression

7 Semiparametric Methods and Partially Linear Regression 7 Semiparametric Metods and Partially Linear Regression 7. Overview A model is called semiparametric if it is described by and were is nite-dimensional (e.g. parametric) and is in nite-dimensional (nonparametric).

More information

Linearized Primal-Dual Methods for Linear Inverse Problems with Total Variation Regularization and Finite Element Discretization

Linearized Primal-Dual Methods for Linear Inverse Problems with Total Variation Regularization and Finite Element Discretization Linearized Primal-Dual Metods for Linear Inverse Problems wit Total Variation Regularization and Finite Element Discretization WENYI TIAN XIAOMING YUAN September 2, 26 Abstract. Linear inverse problems

More information

High-Order Energy and Linear Momentum Conserving Methods for the Klein-Gordon Equation

High-Order Energy and Linear Momentum Conserving Methods for the Klein-Gordon Equation matematics Article Hig-Order Energy and Linear Momentum Conserving Metods for te Klein-Gordon Equation He Yang Department of Matematics, Augusta University, Augusta, GA 39, USA; yang@augusta.edu; Tel.:

More information

The Verlet Algorithm for Molecular Dynamics Simulations

The Verlet Algorithm for Molecular Dynamics Simulations Cemistry 380.37 Fall 2015 Dr. Jean M. Standard November 9, 2015 Te Verlet Algoritm for Molecular Dynamics Simulations Equations of motion For a many-body system consisting of N particles, Newton's classical

More information

The Complexity of Computing the MCD-Estimator

The Complexity of Computing the MCD-Estimator Te Complexity of Computing te MCD-Estimator Torsten Bernolt Lerstul Informatik 2 Universität Dortmund, Germany torstenbernolt@uni-dortmundde Paul Fiscer IMM, Danisc Tecnical University Kongens Lyngby,

More information