One-Sided Position-Dependent Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meshes


DOI 10.1007/s10915-014-9946-6

One-Sided Position-Dependent Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meshes

Jennifer K. Ryan · Xiaozhou Li · Robert M. Kirby · Kees Vuik

Received: 14 January 2014 / Revised: 17 October 2014 / Accepted: 2 November 2014
© Springer Science+Business Media New York 2014

Abstract In this paper, we introduce a new position-dependent smoothness-increasing accuracy-conserving (SIAC) filter that retains the benefits of position dependence as proposed in van Slingerland et al. (SIAM J Sci Comput 33:802–825, 2011) while ameliorating some of its shortcomings. As in the previous position-dependent filter, our new filter can be applied near domain boundaries, near a discontinuity in the solution, or at the interface of different mesh sizes; and as before, in general, it numerically enhances the accuracy and increases the smoothness of approximations obtained using the discontinuous Galerkin (DG) method. However, the previously proposed position-dependent one-sided filter had two significant disadvantages: (1) increased computational cost (in terms of function evaluations), brought about by the use of 4k + 1 central B-splines near a boundary (leading to increased kernel support), and (2) increased numerical conditioning issues that necessitated the use of quadruple precision for polynomial degrees of k ≥ 3 for the reported accuracy benefits to be realizable numerically. Our new filter addresses both of these issues, maintaining the same support size and with similar function evaluation characteristics as the symmetric filter, in

Jennifer K. Ryan, Xiaozhou Li: Supported by the Air Force Office of Scientific Research (AFOSR), Air Force Material Command, USAF, under Grant No. FA8655-13-1-3017. Robert M. Kirby: Supported by the Air Force Office of Scientific Research (AFOSR), Computational Mathematics Program (Program Manager: Dr. Fariba Fahroo), under Grant No. FA9550-08-1-0156.

J. K. Ryan (✉) School of Mathematics, University of East Anglia, Norwich NR4 7TJ, UK e-mail: Jennifer.Ryan@uea.ac.uk
X. Li · K. Vuik Delft Institute of Applied Mathematics, Delft University of Technology, 2628 CD Delft, The Netherlands e-mail: X.Li-2@tudelft.nl
R. M. Kirby School of Computing, University of Utah, Salt Lake City, UT, USA e-mail: kirby@cs.utah.edu
K. Vuik e-mail: C.Vuik@tudelft.nl

a way that has better numerical conditioning, making it, unlike its predecessor, amenable for GPU computing. Our new filter was conceived by revisiting the original error analysis for superconvergence of SIAC filters and by examining the role of the B-splines and their weights in the SIAC filtering kernel. We demonstrate, in the uniform mesh case, that our new filter is globally superconvergent for k = 1 and superconvergent in the interior (i.e., the region excluding the boundary) for k ≥ 2. Furthermore, we present the first theoretical proof of superconvergence for postprocessing over smoothly varying meshes, and explain the accuracy-order conserving nature of this new filter when applied to certain non-uniform mesh cases. We provide numerical examples supporting our theoretical results and demonstrating that our new filter, in general, enhances the smoothness and accuracy of the solution. Numerical results are presented for solutions of both linear and nonlinear equations solved on both uniform and non-uniform one- and two-dimensional meshes.

Keywords Discontinuous Galerkin method · Post-processing · SIAC filtering · Superconvergence · Uniform meshes · Smoothly varying meshes · Non-uniform meshes

1 Introduction

Computational considerations are always a concern when dealing with the implementation of numerical methods that claim to have practical (engineering) value. The focus of this paper is the formerly introduced smoothness-increasing accuracy-conserving (SIAC) class of filters, a class of filters that exhibit superconvergence behavior when applied to discontinuous Galerkin (DG) solutions. Although the previously proposed position-dependent filter (which we will, henceforth, call the SRV filter) introduced in van Slingerland et al. [20] met its stated goals of demonstrating superconvergence, it contained two deficiencies which often made it impractical for implementation and usage within engineering scenarios.
The first deficiency of the SRV filter was its reliance on 4k + 1 central B-splines, which increased both the width of the stencil generated and the computational cost (in terms of function evaluations) a disproportionate amount compared to the symmetric SIAC filter. The second deficiency is one of numerical conditioning: the SRV filter requires the use of quadruple precision to obtain consistent and meaningful results, which makes it unsuitable for practical CPU-based computations and for GPU computing. In this paper, we introduce a position-dependent SIAC filter that, like the SRV filter, allows for one-sided post-processing to be used near boundaries and solution discontinuities and which exhibits superconvergent behavior; however, our new filter addresses the two stated deficiencies: it has a smaller spatial support with a reduced number of function evaluations, and it does not require extended precision for error reduction to be realized. To give context to what we will propose, let us review how we arrived at the currently available and used one-sided filter given in van Slingerland et al. [20]. The SIAC filter has its roots in the finite element superconvergence extraction technique for elliptic equations proposed by Bramble and Schatz [1], Mock and Lax [12] and Thomée [19]. The linear hyperbolic system counterpart for discontinuous Galerkin (DG) methods was introduced by Cockburn et al. [5] and extended to more general applications in [9–11,13,14,18]. The postprocessing technique can enhance the accuracy order of DG approximations from k + 1 to 2k + 1 in the L²-norm. This symmetric post-processor uses 2k + 1 central B-splines of order k + 1. However, a limitation of this symmetric post-processor was that it required a symmetric amount of information around the location being post-processed. To overcome this problem, Ryan and Shu [15] used the same ideas in Cockburn et al. [5] to develop a

one-sided post-processor that could be applied near boundaries and discontinuities in the exact solution. However, their results were not very satisfactory as the errors had a stairstepping-type structure, and the errors themselves were not reduced when the post-processor was applied to some DG solutions over coarse meshes. Later, van Slingerland et al. [20] recast this formulation as a position-dependent SIAC filter by introducing a smooth shift function λ(x̄) that aided in redefining the filter nodes and helped to ease the errors from the stairstepping-type structure. In an attempt to reduce the errors, the authors doubled to 4k + 1 the number of central B-splines used in the filter when near a boundary. Further, they introduced a convex function that allowed for a smooth transition between boundary and symmetric regions. The results obtained with this strategy were good for linear hyperbolic equations over uniform meshes, but new challenges arose. Issues were manifest when the position-dependent filter was applied to equations whose solution lacked the (high) degree of regularity required for valid post-processing. In some cases, this filter applied to certain DG solutions gave worse results than the original one-sided filter, which used only 2k + 1 central B-splines [15]. Furthermore, it was observed that in order for the superconvergence properties expressed in van Slingerland et al. [20] to be fully realized, extended precision (beyond traditional double precision) had to be used. Lastly, the addition of more B-splines did not come without cost. Figure 1 shows the difference between the symmetric filter, which was introduced in [1,5] and is applied in the domain interior, and the SRV filter when applied to the left boundary. The solution being filtered is at x = 0, and the filter extends into the domain. Upon examination, one sees that the position-dependent filter has a larger filter support and, by necessity, is not symmetric near the boundary.
The vast discrepancy in spatial extent is due to the number of B-splines used: 2k + 1 in the symmetric case versus 4k + 1 in the one-sided case. The practical implications of this discrepancy are two-fold: (1) filtering at the boundary with the 4k + 1 filter is noticeably more costly (in terms of function evaluations) than filtering in the interior with the symmetric filter; and (2) the spatial extent of the one-sided filter forces one to use the one-sided filter over a larger area/volume before being able to transition over to the symmetric filter.

Fig. 1 Comparison of (a) the symmetric filter centered around x = 0 when the kernel is applied in the domain interior and (b) the position-dependent filter at the boundary (represented by x = 0) before convolution with a quadratic approximation. Notice that the boundary filter requires a larger spatial support, the amplitude is significantly larger in magnitude, and the filter does not emphasize the point x = 0, which is the point being post-processed

Recall that the SRV filter added two new features when attempting to improve the original Ryan and Shu one-sided filter: the authors added position-dependence to the filter and increased the number of B-splines used. In attempting to overcome the deficiencies of the filter in van Slingerland et al. [20], we reverted to the 2k + 1 B-spline approach as in Ryan and Shu [15] but added position-dependence. Although going back to 2k + 1 B-splines does make the filter less costly in the function evaluation sense, unfortunately, this approach did not lead to a filter that reduced the errors in the way we had hoped (i.e., at least error-order conserving, if not also superconvergent). We were forced to reconsider altogether how superconvergence is obtained in the SIAC filter context; we outline a sketch of our thinking chronologically in what follows. To conceive of a new one-sided filter containing all the benefits mentioned above with none of the deficiencies, we had to harken back to the fundamentals of SIAC filtering. We not only examined the filter itself (e.g., its construction, etc.), but also the error analysis in [5,8]. From the analysis in [5,8] we can conclude that the main source of the aforementioned conditioning issue (expressed in terms of the necessity for increased precision) is the constant term found in the expression for the error, which relies on the B-spline coefficients c_γ^(2k+1,l) in the SIAC filtering kernel through the sum Σ_γ |c_γ^(2k+1,l)|. This quantity increases when the number of B-splines is increased. Further, the condition number of the system used to calculate c_γ^(2k+1,l) becomes quite large, on the order of 10^24 for P^4, increasing the possibility for round-off errors and thus requiring higher levels of precision. As mentioned before, we attempted to still use 2k + 1 position-dependent central B-splines, but this approach did not lead to error reduction. Indeed, the constant in the error term remains quite large at the boundaries.
To reduce the error term, we concluded that one needed to add to the set of central B-splines the following: one non-central B-spline near the boundary. This general B-spline allows the filter to maintain the same spatial support throughout the domain, including boundary regions; it provides only a slight increase in computational cost as there are now 2k + 2 B-splines to evaluate as part of the filter; and, possibly most importantly, it allows for error reduction. We note that our modifications to the previous filter (e.g., going beyond central B-splines) do come with a compromise: we must relax the assumption of obtaining superconvergence. Instead, we merely require a reduction in error and a smoother solution. This new filter remains globally superconvergent for k = 1. The new contributions of this paper are:

– A new one-sided position-dependent SIAC filter that allows filtering up to boundaries and that ameliorates the two principal deficiencies identified in the previous 4k + 1 one-sided position-dependent filter;
– Examination and documentation of the reasoning concerning the constant term in the error analysis that led to the proposed work;
– Demonstration that for the linear polynomial case the filtered approximation is always superconvergent for a uniform mesh; and
– Application of the scaled new filter to both smoothly varying and non-uniform (random) meshes. In the smoothly varying case, we prove and demonstrate that we obtain superconvergence. For the general non-uniform case, we still observe significant improvement in the smoothness and an error reduction over the original DG solution, although full superconvergence is not always achieved. We show, however, that we remain accuracy-order conserving.

These important results are presented as follows: first, as we present the SIAC filter in the context of discontinuous Galerkin approximations, we review the important properties of the DG method and the position-dependent SIAC filter in Sect. 2. In Sect. 3, we introduce the newly proposed filter and further establish some theoretical error estimates for the uniform and non-uniform (smoothly varying) cases. We present the numerical results over uniform and non-uniform one-dimensional mesh structures in Sect. 4 and two-dimensional quadrilateral mesh structures in Sect. 5. Finally, conclusions are given in Sect. 6.

2 Background

In this section, we present the relevant background for understanding how to improve the SIAC filter, which includes the important properties of the discontinuous Galerkin (DG) method that make the application of SIAC filtering attractive, as well as the building blocks of SIAC filtering: B-splines and the symmetric filter.

2.1 Important Properties of Discontinuous Galerkin Methods

We frame the discussion of the properties of DG methods in the context of a one-dimensional problem as the ideas easily extend to multiple dimensions. Further details about the discontinuous Galerkin method can be found in [2–4]. Consider a one-dimensional hyperbolic equation such as

u_t + a_1 u_x + a_0 u = 0,  x ∈ Ω = [x_L, x_R],  (2.1)
u(x, 0) = u_0(x).  (2.2)

To obtain a DG approximation, we first decompose Ω as Ω = ∪_{j=1}^N I_j, where I_j = [x_{j−1/2}, x_{j+1/2}] = [x_j − Δx_j/2, x_j + Δx_j/2]. Then Eq. (2.1) is multiplied by a test function and integrated by parts. The test function is chosen from the same function space as the trial functions, a piecewise polynomial basis. The approximation can then be written as

u_h(x, t) = Σ_{l=0}^k u_j^(l)(t) φ_j^(l)(x),  for x ∈ I_j.

Herein, we choose the basis functions φ_j^(l)(x) = P^(l)(2(x − x_j)/Δx_j), where P^(l) is the Legendre polynomial of degree l over [−1, 1]. For simplicity, throughout this paper we represent polynomials of degree less than or equal to l by P^l.
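To make the modal construction above concrete, the following is a minimal Python sketch (ours, not code from the paper) of the element-local L² projection onto the Legendre basis φ_j^(l)(x) = P^(l)(2(x − x_j)/Δx_j); the function names and the quadrature order are illustrative choices.

```python
# Illustrative sketch (not the paper's code): element-local L2 projection
# onto the Legendre basis phi_j^(l)(x) = P^(l)(2(x - x_j)/dx_j) of Sect. 2.1.
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def project_element(u0, xj, dx, k, nq=20):
    """Modal coefficients u_j^(l), l = 0..k, of the L2 projection of u0 on I_j."""
    xi, w = leggauss(nq)                    # Gauss-Legendre nodes on [-1, 1]
    x = xj + 0.5 * dx * xi                  # map the reference element to I_j
    # int_{-1}^{1} P_l(xi)^2 dxi = 2/(2l+1), hence the normalization factor
    return np.array([(2 * l + 1) / 2 * np.sum(w * u0(x) * Legendre.basis(l)(xi))
                     for l in range(k + 1)])

def evaluate(coeffs, xj, dx, x):
    """Evaluate u_h(x) = sum_l u_j^(l) phi_j^(l)(x) on the element I_j."""
    xi = 2 * (x - xj) / dx
    return sum(c * Legendre.basis(l)(xi) for l, c in enumerate(coeffs))

# A cubic projected onto P^3 is reproduced exactly on the element:
c = project_element(lambda x: x**3, 0.5, 0.25, 3)
print(evaluate(c, 0.5, 0.25, 0.6))   # ~ 0.6**3 = 0.216
```

For genuinely piecewise data the same projection is simply repeated element by element, which is what produces the discontinuities across element interfaces that SIAC filtering later smooths.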
In order to investigate the superconvergence property of the DG solution, it is important to look at the usual convergence rate of the DG method. By estimating the error of the DG solution, we obtain ‖u − u_h‖ ~ O(h^(k+1)) in the L²-norm for sufficiently smooth initial data u_0 [2]:

‖u − u_h‖_0 ≤ C h^(k+1) ‖u_0‖_(H^(k+2)),

where h is the measure of the elements, h = Δx for a uniform mesh and h = max_j Δx_j for non-uniform meshes. Another useful property is the superconvergence of the DG solution in the negative-order norm [5], where we have

‖∂_h^α (u − u_h)‖_(−l,Ω) ≤ C h^(2k+1) ‖∂^α u_0‖_(k+1),

for linear hyperbolic equations. This expression represents why accuracy enhancement through post-processing is possible. Unfortunately, this superconvergence property does not hold for non-uniform meshes when |α| ≥ 1, which makes extracting the superconvergence over non-uniform meshes challenging. However, we prove that, for certain non-uniform meshes containing smoothness in their construction (i.e., smoothly varying meshes), the accuracy enhancement through the SIAC filtering is still possible.

2.2 A Review of B-splines

As the SIAC filter relies heavily on B-splines, here we review the definition of B-splines given by de Boor in [7] as well as central B-splines. The reader is also directed to [17] for more on splines.

Definition 2.1 (B-spline) Let t := (t_j) be a nondecreasing sequence of real numbers that create a so-called knot sequence. The j-th B-spline of order l for the knot sequence t is denoted by B_{j,l,t} and is defined, for l = 1, by the rule

B_{j,1,t}(x) = { 1, t_j ≤ x < t_{j+1};
              { 0, otherwise.

In particular, t_j = t_{j+1} leads to B_{j,1,t} = 0. For l > 1,

B_{j,l,t}(x) = ω_{j,l,t} B_{j,l−1,t} + (1 − ω_{j+1,l,t}) B_{j+1,l−1,t},  (2.3)

with

ω_{j,l,t}(x) = (x − t_j) / (t_{j+l−1} − t_j).

This notation will be used to create a new kernel near the boundaries. The original symmetric filter [5,16] relied on central B-splines of order l whose knot sequence is uniformly spaced and symmetrically distributed,

t = ( −l/2, −(l − 2)/2, ..., (l − 2)/2, l/2 ),

yielding the following recurrence relation for central B-splines:

ψ^(1)(x) = χ_[−1/2,1/2](x),
ψ^(l+1)(x) = (ψ^(1) ⋆ ψ^(l))(x) = (1/l) [ ((l+1)/2 + x) ψ^(l)(x + 1/2) + ((l+1)/2 − x) ψ^(l)(x − 1/2) ],  l ≥ 1.  (2.4)

For the purposes of this paper, it is convenient to relate the recurrence relation for central B-splines to the definition of general B-splines given in Definition 2.1. Relating the recurrence relation to the definition can be done by defining t = (t_0, ..., t_l) to be a knot sequence, and denoting ψ_t^(l)(x) to be the 0-th B-spline of order l for the knot sequence t, ψ_t^(l)(x) = B_{0,l,t}(x).
Note that the knot sequence t also represents the so-called breaks of the B-spline. The B-spline in the region [t_i, t_{i+1}), i = 0, ..., l − 1, is a polynomial of degree l − 1, but in the entire support [t_0, t_l], the B-spline is a piecewise polynomial. When the knots (t_j) are sampled in a symmetric and equidistant fashion, the B-spline is called a central B-spline. Notice that Eq. (2.4) for a central B-spline is a subset of the general B-spline definition where the knots are equally spaced. This new notation provides more flexibility than the previous central B-spline notation.
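As an illustration of Definition 2.1, the following minimal Python sketch (ours, not from the paper) evaluates B_{j,l,t} directly from the recursion in Eq. (2.3); the convention of setting ω to zero for a repeated knot handles the t_j = t_{j+1} case noted in the definition.

```python
# Illustrative sketch (not the paper's code): B_{j,l,t} from Definition 2.1
# via the recursion in Eq. (2.3).

def bspline(j, l, t, x):
    """Evaluate the j-th B-spline of order l for the knot sequence t at x."""
    if l == 1:
        return 1.0 if t[j] <= x < t[j + 1] else 0.0
    def omega(i):
        # omega_{i,l,t}(x) = (x - t_i)/(t_{i+l-1} - t_i); zero for repeated knots
        denom = t[i + l - 1] - t[i]
        return (x - t[i]) / denom if denom > 0.0 else 0.0
    return (omega(j) * bspline(j, l - 1, t, x)
            + (1.0 - omega(j + 1)) * bspline(j + 1, l - 1, t, x))

# Equally spaced knots give a central B-spline: order 2 on (-1, 0, 1) is the
# hat function, with peak value 1 at x = 0 and value 1/2 at x = -1/2.
print(bspline(0, 2, [-1.0, 0.0, 1.0], 0.0))    # 1.0
print(bspline(0, 2, [-1.0, 0.0, 1.0], -0.5))   # 0.5
```

The same routine evaluates the general (non-central) B-spline that the new filter adds near the boundary; only the knot sequence changes.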

2.3 Position-Dependent SIAC Filtering

The original position-dependent SIAC filter is a convolution of the DG approximation with a central B-spline kernel,

u*(x̄) = (K_h^(2k+1,l) ⋆ u_h)(x̄).  (2.5)

The convolution kernel is given by

K^(2k+1,l)(x) = Σ_{γ=0}^{2k} c_γ^(2k+1,l) ψ^(l)(x − x_γ),  (2.6)

where 2k + 1 represents the number of central B-splines, l the order of the B-splines, and K_h = (1/h) K(x/h). The coefficients c_γ^(2k+1,l) are obtained from the property that the kernel reproduces polynomials of degree 2k. For the symmetric central B-spline filter [5,16], l = k + 1 and x_γ = −k + γ, where k is the highest degree of the polynomial used in the DG approximation. More explicitly, the symmetric kernel is given by

K^(2k+1,l)(x) = Σ_{γ=0}^{2k} c_γ^(2k+1,l) ψ^(l)(x − (−k + γ)).  (2.7)

Note that this kernel is by construction symmetric and uses an equal amount of information from the neighborhood around the point being post-processed. While being symmetric is suitable in the interior domain when the function is smooth, it is not suitable for application near a boundary, or when the solution contains a discontinuity. The one-sided position-dependent SRV filter defined in van Slingerland et al. [20] is called position-dependent because of its change of support according to the location of the point being post-processed. For example, near a boundary or discontinuity, a translation of the filter is done so that the support of the kernel remains inside the domain. Furthermore, in these regions, a greater number of central B-splines is required. Using more B-splines aids in improving the magnitude of the errors near the boundary, while allowing superconvergence. In addition, the authors in van Slingerland et al. [20] increased the number of B-splines used in the construction of the kernel to be 4k + 1. The position-dependent (SRV) filter for elements near the boundaries can then be written as¹

K^(4k+1,l)(x) = Σ_{γ=0}^{4k} c_γ^(4k+1,l) ψ^(l)(x − x_γ),  (2.8)

where x_γ depends on the location of the evaluation point x̄ used in Eq.
(2.5) and at the boundaries is given by x_γ = −4k/2 + γ + λ(x̄), with

λ(x̄) = { min{ 0, −(4k + l)/2 + (x̄ − x_L)/h },  x̄ ∈ [x_L, (x_L + x_R)/2),
        { max{ 0, (4k + l)/2 + (x̄ − x_R)/h },  x̄ ∈ [(x_L + x_R)/2, x_R].  (2.9)

Here x_L and x_R are the left and right boundaries, respectively.

¹ Note that the notation used in the current manuscript is slightly different from the notation used in van Slingerland et al. [20]. Instead of using r_2 = 4k to denote the SRV filter, we chose to use the number of B-splines directly for the clarity of the discussion.
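For intuition about the polynomial reproduction property that determines the kernel coefficients, here is a small Python check (ours; the coefficient values below solve the reproduction conditions for k = 1, l = 2, and are not quoted from the paper) that the symmetric kernel of Eq. (2.7) has unit mass and a vanishing first moment, exactly what reproducing constants and linears requires.

```python
# Illustrative check (ours, not from the paper): the symmetric k = 1 kernel of
# Eq. (2.7) with order-2 B-splines, K(x) = sum_g c_g psi^(2)(x - (-1 + g)).
# The coefficients below follow from the polynomial-reproduction conditions.
import numpy as np

def psi2(x):
    """Central B-spline of order 2: the hat function on [-1, 1]."""
    return np.maximum(0.0, 1.0 - np.abs(x))

C = np.array([-1.0 / 12.0, 7.0 / 6.0, -1.0 / 12.0])   # c_0, c_1, c_2

def K(x):
    return sum(c * psi2(x - (-1 + g)) for g, c in enumerate(C))

x = np.linspace(-2.0, 2.0, 400001)   # the kernel support is [-2, 2]
dx = x[1] - x[0]
print(np.sum(K(x)) * dx)             # ~ 1: the kernel reproduces constants
print(np.sum(x * K(x)) * dx)         # ~ 0: the kernel reproduces linears
```

The negative side lobes visible in C are characteristic of SIAC kernels; their growth with the number of B-splines is precisely the conditioning issue discussed in Sect. 3.1.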

The authors chose 4k + 1 central B-splines because, in their experience, using fewer (central) B-splines was insufficient for enhancing the error. Furthermore, in order to provide a smooth transition from the boundary kernel to the interior kernel, a convex combination of the two kernels was used:

u*(x) = θ(x) (K_h^(2k+1,l) ⋆ u_h)(x) + (1 − θ(x)) (K_h^(4k+1,l) ⋆ u_h)(x),  (2.10)

where θ(x) ∈ C^(k−1) such that θ = 1 in the interior and θ = 0 in the boundary regions. This position-dependent filter demonstrated better behavior in terms of error than the original one-sided filter given by Ryan and Shu in [15]. Throughout the article, we will refer to the position-dependent filter using 4k + 1 central B-splines as the SRV filter.

3 Proposed One-Sided Position-Dependent SIAC Filter

In this section, we propose a new one-sided position-dependent filter for application near boundaries. We first discuss the deficiencies in the current position-dependent SIAC filter. We then propose a new position-dependent filter that ameliorates the deficiencies of the SRV filter; however, our new filter must make some compromises with regards to superconvergence (which will be discussed). Lastly, we prove that our new filter is globally superconvergent for k = 1 and superconvergent in the interior of the domain for k ≥ 2.

3.1 Deficiencies of the Previous Position-Dependent SIAC Filter

The SRV filter was reported to reduce the errors when filtering near a boundary. However, applying this filter to higher-order DG solutions (e.g., P⁴- or even P³-polynomials in some cases) required using a multi-precision package (or at least quadruple precision) to reduce round-off error, leading to significantly increased computational time. Figure 2 shows the significant round-off error near the boundaries when using double precision for post-processing the initial condition. The multi-precision requirement also makes the position-dependent kernel [20] near the boundaries unsuitable for GPU computing.
To discover why this challenge arises requires revisiting the foundations of the filter, in particular, the existing error estimates. The L²-error estimate given in Cockburn et al. [5],

Fig. 2 Comparison of the pointwise errors in log scale of the (a) original L² projection solution, (b) the SRV filter in van Slingerland et al. [20] for the 2D L² projection using basis polynomials of degree k = 4, mesh 80 × 80. Double precision was used in these computations

‖u − u*‖_(0,Ω) ≤ C h^(2k+1),  (3.1)

provides us insight into the cause of the issue by examining the constant C in more detail. The constant C depends on

κ^(r+1,l) = Σ_{γ=0}^{r} |c_γ^(r+1,l)|,  (3.2)

where c_γ denotes the kernel coefficients and the value of r depends on the number of B-splines used to construct the kernel. The kernel coefficients are obtained by ensuring that the kernel reproduces polynomials of degree r by the convolution:

K^(r+1,l)(x) ⋆ x^p = x^p,  p = 0, 1, ..., r.  (3.3)

We note that for the error estimates to hold, it is enough to ensure that the kernel reproduces polynomials up to degree 2k (r ≥ 2k), although near the boundaries it was required that the kernel reproduces polynomials of degree 4k (r = 4k) in [8,20]. For filtering near the boundary, the value κ defined in Eq. (3.2) is on the order of 10⁵ for k = 3 and 10⁷ for k = 4, as can be seen in Fig. 4. This indicates one possible avenue (i.e., lowering κ) by which we might generate an improved filter. A second avenue is investigating the round-off error stemming from the large condition number of the linear system generated to satisfy Eq. (3.3) and solved to find the kernel coefficients. The condition number of the generated matrix is on the order of 10²⁴ for k = 4. This leads to significant round-off error (e.g., the rule of thumb in this particular case being that 24 digits of accuracy are lost due to the conditioning of this system), hence requiring the use of high-precision/extended-precision libraries for SIAC filtering to remain accurate (in both its construction and usage). The requirement of using extended precision in our computations increases the computational cost. In addition, the aforementioned discrepancy in the spatial extent of the filters due to the number of B-splines used (2k + 1 in the symmetric case versus 4k + 1 in the one-sided case) leads to the boundary filter costing even more due to extra function evaluations.
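The following Python sketch (an illustration under our own discretization choices, not the authors' code) assembles the moment conditions equivalent to Eq. (3.3) for the symmetric kernel and reports the condition number of the resulting matrix; the quadrature grid, matrix layout, and solver are illustrative assumptions.

```python
# Illustrative sketch (ours, not the authors' code): the moment conditions
# equivalent to Eq. (3.3) -- K * x^p = x^p iff int K(t) t^m dt = delta_{m0},
# m = 0..r -- assembled for the symmetric kernel and solved directly. The
# condition number of the moment matrix grows rapidly with k.
import numpy as np

def psi(l, x):
    """Central B-spline of order l via the recurrence (2.4)."""
    if l == 1:
        return np.where((x >= -0.5) & (x < 0.5), 1.0, 0.0)
    return ((l / 2 + x) * psi(l - 1, x + 0.5)
            + (l / 2 - x) * psi(l - 1, x - 0.5)) / (l - 1)

def moment_matrix(k, npts=20001):
    """A[m, g] = int psi^(k+1)(t - x_g) t^m dt, x_g = -k + g, m, g = 0..2k."""
    r, l = 2 * k, k + 1
    t = np.linspace(-(r + l) / 2, (r + l) / 2, npts)   # covers the support
    dt = t[1] - t[0]
    return np.array([[np.sum(psi(l, t - xg) * t**m) * dt
                      for xg in np.arange(-k, k + 1)] for m in range(r + 1)])

for k in (1, 2, 3):
    A = moment_matrix(k)
    c = np.linalg.solve(A, np.eye(2 * k + 1)[0])   # target moments (1, 0, ..., 0)
    print(k, np.linalg.cond(A), c)
```

For k = 1 this recovers the familiar coefficients (−1/12, 7/6, −1/12); the trend in the reported condition numbers is the point, and for the 4k + 1 boundary system the growth is far more severe, which is the conditioning problem described above.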
These extra function evaluations have led us to reconsider the position-dependent filter and propose a better conditioned and less computationally intensive alternative. In order to apply SIAC filters near boundaries, we first no longer restrict ourselves to using only central B-splines. Secondly, we seek to maintain a constant support size for both the interior of the domain and the boundaries. The idea we propose is to add one general B-spline for boundary regions, which is located within the already defined support size. Using general B-splines provides greater flexibility and improves the numerical evaluation (eliminating the explicit need for precision beyond double precision). To introduce our new position-dependent one-sided SIAC filter, we discuss the one-dimensional case and how to modify the current definition of the SIAC filter. Multi-dimensional SIAC filters are a tensor product of the one-dimensional case.

3.2 The New Position-Dependent One-Sided Kernel

Before we provide the definition of the new position-dependent one-sided kernel, we first introduce a new concept, that of a knot matrix. The definition of a knot matrix helps us to introduce the new position-dependent one-sided kernel in a concise and compact form. It also aids in demonstrating the differences between the new position-dependent kernel, the symmetric kernel and the SRV filter. Informally, the idea behind introducing a knot matrix is to exploit the definition of B-splines in terms of their corresponding knot sequence t := (t_j), in

the definition of the SIAC filter. In order to introduce a knot matrix, we will use the following notation: ψ_t^{(l)}(x) = B_{0,l,t}(x) for the zeroth B-spline of order l with knot sequence t.

Definition 3.1 (Knot matrix) A knot matrix, T, is an n × m matrix such that the γth row, T(γ), of the matrix T is a knot sequence with l+1 elements (i.e., m = l+1) that are used to create the B-spline ψ^{(l)}_{T(γ)}(x). The number of rows n is specified based on the number of B-splines used to construct the kernel.

To provide some context for the necessity of the definition of a knot matrix, we first redefine some of the previously discussed SIAC kernels in terms of their knot matrices. Recall that the general definition of the SIAC kernel relies on r+1 central B-splines of order l. Therefore, we can use Definition 3.1 to rewrite the symmetric kernel given in Eq. (2.7) in terms of a knot matrix as follows:

K^{(2k+1,l)}_{T_sym}(x) = Σ_{γ=0}^{2k} c_γ^{(2k+1,l)} ψ^{(l)}_{T_sym(γ)}(x),  (3.4)

where T_sym in this relation is a (2k+1) × (l+1) matrix. Each row in T_sym corresponds to the knot sequence of one of the constituent B-splines in the symmetric kernel. More specifically, the elements of T_sym are defined as

T_sym(i, j) = −l/2 + j + i − k,  i = 0, …, 2k;  j = 0, …, l.

For instance, for the first-order symmetric SIAC kernel we have l = 2 and k = 1. Therefore, the corresponding knot matrix can be defined as

T_sym = [ −2 −1  0
          −1  0  1
           0  1  2 ].  (3.5)

The definition of a SIAC kernel in terms of a knot matrix can also be used to rewrite the original boundary filter [15], which uses only 2k+1 central B-splines at the left boundary. The knot matrix T_one for this case is given by

T_one = [ −4 −3 −2
          −3 −2 −1
          −2 −1  0 ].  (3.6)

Now we can define our new position-dependent one-sided kernel by generating a knot matrix. The new position-dependent one-sided kernel consists of r+1 = 2k+1 central B-splines and one general B-spline, and hence the knot matrix is of size (2k+2) × (l+1).
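The closed form for T_sym can be checked directly against the example above; a small sketch (NumPy is used only for convenient array construction and comparison, and the helper name is ours):

```python
import numpy as np

def symmetric_knot_matrix(k, l):
    """Knot matrix T_sym of the symmetric kernel: T_sym(i, j) = -l/2 + j + i - k
    for i = 0..2k and j = 0..l (one row per central B-spline)."""
    return np.array([[-l / 2.0 + j + i - k for j in range(l + 1)]
                     for i in range(2 * k + 1)])

T_sym = symmetric_knot_matrix(1, 2)         # k = 1, l = 2: Eq. (3.5)
T_one = symmetric_knot_matrix(1, 2) - 2.0   # left-boundary shift: Eq. (3.6)
print(T_sym)
print(T_one)
```

Shifting every knot by the same amount (here by 2) is exactly how the original boundary filter reuses the central B-splines at the left boundary.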
At a high level, using the scaling of the kernel, the new position-dependent one-sided kernel can be written as

K^{(r+1,l)}_T(x) = Σ_{γ=0}^{r+1} c_γ^{(r+1,l)} ψ^{(l)}_{T(γ)}(x),  (3.7)

where T(γ) represents the γth row of the knot matrix T, which is the knot sequence T(γ,0), …, T(γ,l). For the central B-splines, γ = 0, …, 2k, and the scaled version is ψ^{(l)}_{hT(γ)}(x) = (1/h) ψ^{(l)}_{T(γ)}(x/h).

The added B-spline is a monomial defined as

ψ^{(l)}_{hT(r+1)}(x) = (1/h) x^{l−1}_{T(r+1)}(x/h),

where

x^{l−1}_{T(r+1)} = (x − T(r+1,0))^{l−1},  T(r+1,0) ≤ x ≤ T(r+1,l);  0, otherwise.

Therefore, near the left boundary the kernel in Eq. (3.7) can be rewritten as

K^{(r+1,l)}_T(x) = Σ_{γ=0}^{r} c_γ^{(r+1,l)} ψ^{(l)}_{T(γ)}(x)  [position-dependent kernel with r+1 = 2k+1 central B-splines]
                 + c_{r+1}^{(r+1,l)} ψ^{(l)}_{T(r+1)}(x)  [general B-spline].  (3.8)

The kernel coefficients c_γ^{(r+1,l)}, γ = 0, …, r+1, are obtained through reproducing polynomials of degree up to r+1. We have imposed further restrictions on the knot matrix for the definition of the new position-dependent one-sided kernel. First, for convenience we require

T(γ,0) ≤ T(γ,1) ≤ ⋯ ≤ T(γ,l), for γ = 0, …, r+1,

and

T(γ+1,0) ≤ T(γ,l), for γ = 0, …, r.

Second, the knot matrix T should satisfy

T(0,0) ≥ (x̄ − x_R)/h  and  T(r,l) ≤ (x̄ − x_L)/h,

where h is the element size in a uniform mesh. This requirement is derived from the support of the B-spline as well as the support needing to remain inside the domain. Recall that the support of the B-spline ψ^{(l)}_{T(γ)} is [T(γ,0), T(γ,l)], and the support of the kernel is [T(0,0), T(r,l)]. For any x̄ ∈ [x_L, x_R], the post-processed solution at point x̄ can then be written as

u*(x̄) = (K^{(r+1,l)}_{hT} ⋆ u_h)(x̄) = ∫_Ω K^{(r+1,l)}_{hT}(x̄ − ξ) u_h(ξ) dξ
       = ∫_{x̄ − hT(r,l)}^{x̄ − hT(0,0)} K^{(r+1,l)}_{hT}(x̄ − ξ) u_h(ξ) dξ,  (3.9)

where hT represents the scaled knot matrix. For the boundary regions, we force the interval [x̄ − hT(r,l), x̄ − hT(0,0)] to be inside the domain Ω = [x_L, x_R]. This implies that x_L ≤ x̄ − hT(r,l) and x̄ − hT(0,0) ≤ x_R, and hence the requirement that T(0,0) ≥ (x̄ − x_R)/h and T(r,l) ≤ (x̄ − x_L)/h. Finally, we require that the kernel remain as symmetric as possible. This means the knots should be chosen as

Left:  T → T − (T(r,l) − (x̄ − x_L)/h), for (x̄ − x_L)/h < (3k+1)/2,  (3.10)
Right: T → T − (T(0,0) − (x̄ − x_R)/h), for (x_R − x̄)/h < (3k+1)/2.  (3.11)

This shifting will increase the error, and it is therefore still necessary to increase the number of B-splines used in the kernel. Because the symmetric filter yields superconvergence results, we wish to retain the original form of the kernel as much as possible. Near the boundary, where the symmetric filter cannot be applied, we keep the 2k+1 shifted central B-splines and add only one general B-spline. To avoid increasing the spatial support of the filter, we choose the knots of this general B-spline dependent upon the knots of the 2k+1 central B-splines in the following way: near the left boundary, we let the first 2k+1 B-splines be central B-splines, whereas the last B-spline will be a general B-spline. The elements of the knot matrix are then given by

T(i,j) = −(l+r) + j + i + (x̄ − x_L)/h,  0 ≤ i ≤ r, 0 ≤ j ≤ l;
T(i,j) = (x̄ − x_L)/h − 1,  i = r+1, j = 0;
T(i,j) = (x̄ − x_L)/h,  i = r+1, j = 1, …, l.  (3.12)

Similarly, we can design the new kernel near the right boundary, where the general B-spline is given by

ψ^{(l)}_{T(0)}(x) = x^{l−1}_{T(0)} = (T(0,l) − x)^{l−1},  T(0,0) ≤ x ≤ T(0,l);  0, otherwise.

The elements of the knot matrix for the right boundary kernel are defined as

T(i,j) = (x̄ − x_R)/h,  i = 0, j = 0, …, l−1;
T(i,j) = (x̄ − x_R)/h + 1,  i = 0, j = l;
T(i,j) = j + i − 1 + (x̄ − x_R)/h,  1 ≤ i ≤ r+1, 0 ≤ j ≤ l,  (3.13)

and the form of the kernel is then

K^{(r+1,l)}_T(x) = c_0^{(r+1,l)} ψ^{(l)}_{T(0)}(x) + Σ_{γ=1}^{r+1} c_γ^{(r+1,l)} ψ^{(l)}_{T(γ)}(x).  (3.14)

We note that this extra B-spline is only used when (x̄ − x_L)/h < (3k+1)/2 or (x_R − x̄)/h < (3k+1)/2; otherwise, the symmetric central B-spline kernel is used. We present a concrete example for the P^1 case with l = 2. In this case, the knot matrices for our newly proposed filter at the left and right boundaries are

T_Left = [ −4 −3 −2        T_Right = [ 0 0 1
           −3 −2 −1                    0 1 2
           −2 −1  0                    1 2 3
           −1  0  0 ],                 2 3 4 ].  (3.15)

These new knot matrices are 4 × 3 matrices where, in the case of the filter for the left boundary, the first three rows express the knots of the three central B-splines and the last row expresses the knots of the general B-spline.
For the filter applied to the right boundary, the first row expresses the knots of the general B-spline and the last three rows express the knots of the central B-splines.
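Eqs. (3.12) and (3.13) are straightforward to implement and to sanity-check against the example of Eq. (3.15); a sketch (the helper names are ours, not from the paper, and s = (x̄ − x_L)/h or (x̄ − x_R)/h):

```python
import numpy as np

def left_knot_matrix(k, l, s):
    """Knot matrix of Eq. (3.12); s = (xbar - x_L)/h. Rows 0..r are shifted
    central B-splines, row r+1 holds the general B-spline's knots."""
    r = 2 * k
    central = np.array([[-(l + r) + j + i + s for j in range(l + 1)]
                        for i in range(r + 1)], dtype=float)
    general = np.full(l + 1, s, dtype=float)
    general[0] = s - 1.0
    return np.vstack([central, general])

def right_knot_matrix(k, l, s):
    """Knot matrix of Eq. (3.13); s = (xbar - x_R)/h. Row 0 holds the general
    B-spline's knots, rows 1..r+1 are shifted central B-splines."""
    r = 2 * k
    general = np.full(l + 1, s, dtype=float)
    general[l] = s + 1.0
    central = np.array([[j + i - 1.0 + s for j in range(l + 1)]
                        for i in range(1, r + 2)])
    return np.vstack([general, central])

print(left_knot_matrix(1, 2, 0.0))   # T_Left of Eq. (3.15)
print(right_knot_matrix(1, 2, 0.0))  # T_Right of Eq. (3.15)
```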

Fig. 3 Comparison of the two boundary filters before convolution: (a) the SRV kernel and (b) the new kernel at the boundary with k = 2. The boundary is represented by x = 0. The new filter has reduced support and magnitude.

If we use the same form of the knot matrix to express the SRV kernel introduced in van Slingerland et al. [20] at the left boundary for k = 1, we would have

T_SRV = [ −6 −5 −4
          −5 −4 −3
          −4 −3 −2
          −3 −2 −1
          −2 −1  0 ].  (3.16)

Comparing the new knot matrix with the one used to obtain the SRV filter, we can see that they have the same number of columns, which indicates that they use the same order of B-splines. There are fewer rows in the new matrix (2k+2) than in the matrix for the original position-dependent filter (4k+1). This indicates that the new filter uses fewer B-splines than the SRV filter. To compare the new filter and the SRV filter, we plot the kernels used at the left boundary for k = 2. Figure 3 illustrates that the new position-dependent SIAC kernel places more weight on the evaluation point than the SRV kernel, and that the SRV kernel has a significantly larger magnitude and support, which we observed to cause problems, especially for higher-order polynomials (such as P^3 or P^4). For this example, using the filter for quadratic approximations, the original position-dependent SIAC filter ranges from −150 to 150, versus −6 to 6 for the newly proposed filter.

3.3 Theoretical Results in the Uniform Case

The previous section introduced a new filter to reduce the errors of dG approximations while attempting to ameliorate the issues concerning the old filter. In this section, we discuss the theoretical results for the newly defined boundary kernel. Specifically, for k = 1 it is globally superconvergent of order three. For higher-degree polynomials, it is possible to obtain superconvergence only in the interior of the domain. Recall from Eq. (3.7) that the new one-sided scaled kernel has the form

K^{(r+1,l)}_T(x) = Σ_{γ=0}^{r+1} c_γ^{(r+1,l)} ψ^{(l)}_{T(γ)}(x).  (3.17)

In the interior of the domain, the symmetric SIAC kernel is used, which consists of 2k+1 central B-splines:

K^{(2k+1,l)}_T(x) = Σ_{γ=0}^{2k} c_γ^{(2k+1,l)} ψ^{(l)}_{T(γ)}(x),  (3.18)

and near the left boundary the new one-sided kernel can be written as

K^{(r+1,l)}_T(x) = Σ_{γ=0}^{r} c_γ^{(r+1,l)} ψ^{(l)}_{T(γ)}(x) + c_{r+1}^{(r+1,l)} ψ^{(l)}_{T(r+1)}(x),  r = 2k,

where 2k+1 central B-splines are used together with one general B-spline. The scaled kernel K^{(r+1,l)}_{hT} has the property that the convolution K^{(r+1,l)}_{hT} ⋆ u_h only uses information inside the domain.

Theorem 3.1 Let u be the exact solution to the linear hyperbolic equation

u_t + Σ_{i=1}^{d} A_i u_{x_i} + A_0 u = 0,  x ∈ Ω, t ∈ (0, T],
u(x, 0) = u_0(x),  x ∈ Ω,  (3.19)

where the initial condition u_0(x) is a sufficiently smooth function. Here, Ω ⊂ R^d. Let u_h be the numerical solution to Eq. (3.19), obtained using a discontinuous Galerkin scheme with an upwind flux over a uniform mesh with mesh spacing h. Let u*(x̄) = (K^{(r+1,l)}_{hT} ⋆ u_h)(x̄) be the solution obtained by applying our newly proposed filter, which uses r+1 = 2k+1 central B-splines of order l = k+1 and one general B-spline in boundary regions. Then the SIAC-filtered dG solution has the following properties:

(i) ‖(u − u*)(x̄)‖_{0,Ω} ≤ C h^3 for k = 1. That is, u*(x̄) is globally superconvergent of order three for linear approximations.

(ii) ‖(u − u*)(x̄)‖_{0,Ω∖supp{K_s}} ≤ C h^{r+1} when r+1 ≥ 2k+1 central B-splines are used in the kernel. Here supp{K_s} represents the support of the symmetric kernel. Thus, u*(x̄) is superconvergent in the interior of the domain.

(iii) ‖(u − u*)(x̄)‖_{0,Ω} ≤ C h^{k+1} globally.

Proof We omit the proofs of properties (i) and (ii) as they are similar to the proofs in Cockburn et al. [5] and Ji et al. [8]. Instead, we concentrate on ‖(u − u*)(x̄)‖_{0,Ω} ≤ C h^{k+1}, which is rather straightforward. Consider the one-dimensional case (d = 1). Then the error can be written as

‖u − K^{(r+1,l)}_{hT} ⋆ u_h‖_{0,Ω} ≤ Θ_{h,1} + Θ_{h,2},

where Θ_{h,1} = ‖u − K^{(r+1,l)}_{hT} ⋆ u‖_{0,Ω} and Θ_{h,2} = ‖K^{(r+1,l)}_{hT} ⋆ (u − u_h)‖_{0,Ω}. The proof of higher-order convergence for the first term, Θ_{h,1}, is the same as in Cockburn et al.
[5], as the requirement on K_{hT} does not change (reproduction of polynomials up to degree r+1). This means that

Θ_{h,1} ≤ (h^{r+1}/(r+1)!) C_1 |u|_{r+1,Ω}.
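Before estimating the second term, note a small fact about the general B-spline of Eq. (3.8): it is a truncated monomial supported on a knot span of unit width in kernel coordinates, so its scaled L¹ norm is ∫₀¹ s^{l−1} ds = 1/l, independent of h. A quick numerical check (the values of a, l, and h below are illustrative assumptions):

```python
import numpy as np

def general_bspline(x, a, l, h):
    """Scaled general B-spline of Sect. 3.2: (1/h) * ((x/h) - (a-1))^(l-1)
    on [h*(a-1), h*a], zero elsewhere; a plays the role of (xbar - x_L)/h,
    with knot sequence (a-1, a, ..., a)."""
    s = np.asarray(x, dtype=float) / h
    return np.where((s >= a - 1.0) & (s <= a),
                    (s - (a - 1.0)) ** (l - 1) / h, 0.0)

l, a, h = 3, 0.0, 0.25                       # assumed example values
x = np.linspace((a - 1.0) * h, a * h, 200001)
l1_norm = np.sum(general_bspline(x, a, l, h)) * (x[1] - x[0])
print(l1_norm)   # close to 1/l, independent of h
```

This 1/l factor is exactly what multiplies |c_{r+1}^{(r+1,l)}| in the constant of the estimate that follows.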

Now consider the second term, Θ_{h,2}. Without loss of generality, we consider the filter for the left boundary in order to estimate Θ_{h,2}. The proofs for the filter in the interior and at the right boundary are similar. We use the form of the kernel given in Eq. (3.8), which decomposes the new filter into two parts: 2k+1 central B-splines and one general B-spline. That is, we write

K^{(r+1,l)}_T(x) = Σ_{γ=0}^{r} c_γ^{(r+1,l)} ψ^{(l)}_{T(γ)}(x)  [central B-splines]  +  c_{r+1}^{(r+1,l)} ψ^{(l)}_{T(r+1)}(x)  [general B-spline].

Setting e(x) = u(x) − u_h(x), then

Θ_{h,2} = ‖K^{(r+1,l)}_{hT} ⋆ e‖_{0,Ω} ≤ ‖K^{(r+1,l)}_{hT}‖_{L^1} ‖e‖_{0,Ω} ≤ ( Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| + |c_{2k+1}^{(r+1,l)}|/l ) ‖e‖_{0,Ω},

since each central B-spline has unit L¹ norm and the general B-spline has L¹ norm 1/l. Hence, using the standard dG estimate ‖e‖_{0,Ω} ≤ C h^{k+1},

Θ_{h,2} ≤ C ( Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| + |c_{2k+1}^{(r+1,l)}|/l ) h^{k+1}.

Remark 3.1 Note that in this analysis we steered away from the negative-order norm argument. Technically, the terms involving the central B-splines have a convergence rate of r+1 ≥ 2k+1, as given in [5,8]. It is the new addition, the term involving the general B-spline, that presents the limitation and reduces the convergence rate to that of the dG approximation itself (i.e., it is accuracy-order conserving).

To extend this to the multi-dimensional case (d > 1), given an arbitrary x = (x_1, …, x_d) ∈ R^d, we set

ψ^{(l)}_{T(γ)}(x) = Π_{i=1}^{d} ψ^{(l)}_{T(γ)}(x_i).

The filter for the multi-dimensional space considered is of the form

K^{(r+1,l)}_T(x) = Σ_{γ=0}^{r+1} c_γ^{(r+1,l)} ψ^{(l)}_{T(γ)}(x),

where the coefficients c_γ^{(r+1,l)} are tensor products of the one-dimensional coefficients. To emphasize the general B-spline used near the boundary, we assume, without loss of generality, that in the x_{k_1}, …, x_{k_{d_0}} directions we need the general B-spline, where 0 ≤ d_0 ≤ d. Then

ψ^{(l)}_{T(2k+1)} = Π_{i=1}^{d_0} ψ^{(l)}_{T(2k+1)}(x_{k_i}).

By applying the same idea we used for the one-dimensional case, the theorem is also true for the multi-dimensional case. We note that the constant in the final estimate is a product of two other constants: one of them is determined by the filter ( Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| + |c_{r+1}^{(r+1,l)}|/l ) and the other one is determined

Fig. 4 Plots demonstrating the effect of the coefficients on the error estimate for (a) P^3 and (b) P^4 polynomials. Shown are Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| using 2k+1 central B-splines (blue dashed), using 4k+1 central B-splines (green dash-dot-dot), and Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| + |c_{r+1}^{(r+1,l)}|/l for the new filter (red line).

by the dG approximation. To further illustrate the necessity of examining the constant in the error term contributed by the filter, we provide Fig. 4. This figure demonstrates the difference between Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| for the previously introduced filters and our new filter, in which the constant is modified to Σ_{γ=0}^{r} |c_γ^{(r+1,l)}| + |c_{r+1}^{(r+1,l)}|/l. In Fig. 4, one can clearly see that by adding a general B-spline to the r+1 central B-splines, we are able, in the boundary regions, to reduce the constant in the error term significantly.

3.4 Theoretical Results in the Non-uniform (Smoothly Varying) Case

In this section, we give a theoretical interpretation of the computational results presented in Curtis et al. [6]. This is done by using the newly proposed filter for non-uniform meshes and showing that the new position-dependent filter maintains the superconvergence property in the interior of the domain for smoothly varying meshes and is accuracy-order conserving near the boundaries for non-uniform meshes. We begin by defining what we mean by a smoothly varying mesh.

Definition 3.2 (Smoothly varying mesh) Let ξ be a variable defined over a uniform mesh on domain Ω ⊂ R; then a smoothly varying mesh defined over Ω is a non-uniform mesh whose variable x satisfies

x = ξ + f(ξ),  (3.20)

where f is a sufficiently smooth function satisfying f′(ξ) > −1, so that the mapping ξ ↦ ξ + f(ξ) is invertible. For example, we can choose f(ξ) = 0.5 sin(ξ) over [0, 2π]. The multi-dimensional definition can be given in the same way.

Lemma 3.2 Let u be the exact solution of a linear hyperbolic equation

u_t + Σ_{n=1}^{d} A_n u_{x_n} + A_0 u = 0,  x ∈ Ω, t ∈ (0, T],  (3.21)

with a sufficiently smooth initial function and Ω ⊂ R^d. Let ξ be the variable for the uniform mesh defined on Ω with size h, and let x be the variable of the smoothly varying mesh defined in (3.20). Let u_h(ξ) be the numerical solution to Eq. (3.21) over the uniform mesh ξ, and ũ_h(x) the approximation over the smoothly varying mesh x, both of them obtained by using the discontinuous Galerkin scheme. Then the post-processed solutions obtained by applying the SIAC filter K_h(ξ) to u_h(ξ) and K_H(x) to ũ_h(x), with a proper scaling H, are related by

‖u(x) − K_H ⋆ ũ_h(x)‖_{0,Ω} ≤ C ‖u(ξ) − K_h ⋆ u_h(ξ)‖_{0,Ω}.

Here, the filter K can be any filter mentioned in the previous section (the symmetric filter, the SRV filter, or the newly proposed position-dependent filter). Note that this means that we obtain the full (2k+1)th-order superconvergence behavior for both the SRV and symmetric filters.

Proof The proof is straightforward. If the scaling H is properly chosen, a simple mapping can be made from the smoothly varying mesh to the corresponding uniform mesh. The result holds if the Jacobian is bounded (from the definition of a smoothly varying mesh). Changing variables from x to ξ,

‖u(x) − K_H ⋆ ũ_h(x)‖²_{0,Ω} = ∫_Ω ( u(x) − K_H ⋆ ũ_h(x) )² dx
 = ∫_Ω ( u(ξ) − K_h ⋆ u_h(ξ) )² (1 + f′(ξ)) dξ
 ≤ ‖u(ξ) − K_h ⋆ u_h(ξ)‖²_{0,Ω} · max_Ω (1 + f′(ξ)).

According to the definition of a smoothly varying mesh, 1 + f′(ξ) > 0, and we have

‖u(x) − K_H ⋆ ũ_h(x)‖_{0,Ω} ≤ C ‖u(ξ) − K_h ⋆ u_h(ξ)‖_{0,Ω},

where C = ( max_Ω (1 + f′) )^{1/2}.

Remark 3.2 The proof seems obvious, but it is important to choose a proper scaling for H in the computations. Due to the smoothness and computational-cost requirements, we need to keep H constant when treating points within the same element. Under this condition, the natural choice is H = Δx_j when post-processing the element I_j. It is now easy to see that there exists a position c in the element I_j such that H = Δx_j = (1 + f′(c)) h.

Remark 3.3 Note that Theorem 3.1 (iii) still holds for generalized non-uniform meshes. This is because the proof does not rely on the properties (i.e., structure) of the mesh. We have now shown that superconvergence can be achieved for interior solutions over smoothly varying meshes.
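The quantities in Lemma 3.2 and Remark 3.2 can be checked for the example map f(ξ) = 0.5 sin(ξ) of Definition 3.2; a sketch in which the mesh resolution N is an arbitrary choice:

```python
import numpy as np

N = 40
h = 2.0 * np.pi / N
xi = np.linspace(0.0, 2.0 * np.pi, N + 1)   # uniform mesh in xi
x = xi + 0.5 * np.sin(xi)                   # smoothly varying mesh, Eq. (3.20)
fprime = 0.5 * np.cos(xi)                   # f'(xi) > -1, so the map is monotone

# Constant of Lemma 3.2: C = (max (1 + f'))^{1/2} = sqrt(1.5) for this f.
C = np.sqrt(np.max(1.0 + fprime))

# Remark 3.2: the element-wise scaling H = dx_j equals (1 + f'(c)) h for some
# c inside the element, so here it stays strictly between 0.5h and 1.5h.
dx = np.diff(x)
print(C, dx.min() / h, dx.max() / h)
```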
In the subsequent sections, we present numerical results that confirm our results on uniform and non-uniform (smoothly varying) meshes.

4 Numerical Results for One Dimension

The previous section introduced a new SIAC kernel formed by adding a general B-spline to a modified central B-spline kernel. The addition of a general B-spline helps to maintain a consistent support size for the kernel throughout the domain and eliminates the need for a multi-precision package. This section illustrates the performance of the new position-dependent SIAC filter on one-dimensional uniform and non-uniform (smoothly varying and random) meshes. We compare our results to the SRV filter [20]. In order to provide a fair comparison between

the SRV and new filters, we mainly show the results using quadruple precision for a few one-dimensional cases. Furthermore, in order to reduce the computational cost of the filter that uses 4k+1 central B-splines, we neglect to implement the convex combination described in Eq. (2.10). This is not necessary for the new filter, and was implemented in the SRV filter to ensure that the transition from the one-sided filter to the symmetric filter was smooth. This is the first time that the position-dependent filters have been tested on non-uniform meshes. Although tests were performed using scalings of H = Δx_j and H = max_j Δx_j, we only present the results using a scaling of H = Δx_j. This scaling provides better results in boundary regions, which is one of the motivations of this paper. We note that the errors produced using a scaling of H = max_j Δx_j are quite similar and often produce smoother errors in the interior of the domain for smoothly varying meshes.

Remark 4.1 The SRV filter requires using quadruple precision in the computations to eliminate round-off error, which typically involves more computational effort than natively supported double precision. The new filter only requires double precision. In order to give a fair comparison between the SRV filter and the new filter, for the one-dimensional examples we have used quadruple precision to maintain a consistent computational environment.

4.1 Linear Transport Equation Over a Uniform Mesh

The first equation that we consider is a linear hyperbolic equation with periodic boundary conditions,

u_t + u_x = 0,  (x, t) ∈ [0, 1] × (0, T],  (4.1)
u(x, 0) = sin(2πx),  x ∈ [0, 1].  (4.2)

The exact solution is a periodic translation of the sine function, u(x, t) = sin(2π(x − t)). For T = 0, this is simply the L²-projection of the initial condition. Here, we consider a final time of T = 1 and note that we expect similar results at later times.
The discontinuous Galerkin approximation error and the position-dependent SIAC filtered error results are shown in Tables 1 and 2 for both quadruple precision and double precision. Using quadruple precision, both filters reduce the errors in the post-processed solution, although the new filter gives only a minor reduction in the quality of the error. However, using double precision, only the new filter can maintain this error reduction for P^3 and P^4 polynomials. We note that we concentrate on the results for P^3 and P^4 polynomials, as there is no noticeable difference between double and quadruple precision for P^1 and P^2 polynomials. The pointwise error plots are given in Figs. 5 and 6. When using quadruple precision, as in Fig. 5, the SRV filter can reduce the error of the dG solution better than the new filter for fine meshes. However, it uses 2k − 1 more B-splines than the newly generated filter. This difference is noticeable when using double precision, which is almost ten times faster than using quadruple precision for P^3 and P^4. For such examples the new filter performs better both computationally and numerically (in terms of error). Tables 1 and 2 show that the SRV filter can only reduce the error for fine meshes when using P^4 piecewise polynomials. The new filter performs as well as when using quadruple precision and reduces the error magnitude at a reduced computational cost.
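The cost gap between the two filters can be made concrete by counting support widths: a kernel built from n adjacent B-splines of order l spans n + l − 1 elements when scaled with H = h. A back-of-the-envelope sketch (the helper is ours, consistent with the support sizes stated for the two filters):

```python
def support_elements(num_splines, order):
    """Elements covered by a kernel of `num_splines` adjacent B-splines of
    the given order, scaled with H = h: num_splines + order - 1."""
    return num_splines + order - 1

for k in (1, 2, 3, 4):
    l = k + 1
    # New filter: 2k+1 central B-splines; the extra general B-spline lies
    # inside the existing support, so it adds no width.
    new = support_elements(2 * k + 1, l)
    srv = support_elements(4 * k + 1, l)   # SRV boundary filter
    print(k, new, srv)                     # 3k+1 versus 5k+1 elements
```

Every extra element in the support means extra quadrature (function evaluations) per filtered point, which is where the SRV filter's additional cost comes from.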

Table 1  L²- and L∞-errors for the dG approximation together with the SRV and new filters for the linear transport equation using polynomials of degree k = 1, …, 4 (quadruple precision) over uniform meshes

Mesh | dG: L² error (order)  L∞ error (order) | SRV filter: L² error (order)  L∞ error (order) | New filter: L² error (order)  L∞ error (order)

P^1
20 | 4.02E-03 (--)  1.45E-02 (--) | 1.98E-03 (--)  2.80E-03 (--) | 1.98E-03 (--)  2.80E-03 (--)
40 | 1.02E-03 (1.97)  3.82E-03 (1.92) | 2.44E-04 (3.02)  3.46E-04 (3.02) | 2.44E-04 (3.02)  3.46E-04 (3.02)
80 | 2.58E-04 (1.99)  9.79E-04 (1.96) | 3.02E-05 (3.01)  4.28E-05 (3.01) | 3.03E-05 (3.01)  4.28E-05 (3.01)

P^2
20 | 1.07E-04 (--)  3.67E-04 (--) | 3.73E-06 (--)  5.82E-06 (--) | 1.21E-05 (--)  8.27E-05 (--)
40 | 1.34E-05 (3.00)  4.62E-05 (2.99) | 9.42E-08 (5.31)  1.34E-07 (5.44) | 5.52E-07 (4.45)  5.31E-06 (3.96)
80 | 1.67E-06 (3.00)  5.78E-06 (3.00) | 2.48E-09 (5.24)  3.52E-09 (5.26) | 4.79E-08 (3.53)  6.19E-07 (3.10)

P^3
20 | 2.06E-06 (--)  6.04E-06 (--) | 1.53E-07 (--)  1.02E-06 (--) | 2.30E-06 (--)  8.71E-06 (--)
40 | 1.29E-07 (4.00)  3.80E-07 (3.99) | 2.70E-10 (9.15)  4.00E-10 (11.32) | 4.14E-09 (9.12)  2.27E-08 (8.58)
80 | 8.07E-09 (4.00)  2.38E-08 (4.00) | 1.22E-12 (7.79)  1.73E-12 (7.85) | 8.18E-12 (8.98)  1.20E-10 (7.56)

P^4
20 | 3.19E-08 (--)  7.02E-08 (--) | 7.53E-03 (--)  7.33E-02 (--) | 5.31E-07 (--)  1.99E-06 (--)
40 | 1.00E-09 (4.99)  2.25E-09 (4.97) | 1.99E-12 (31.82)  3.12E-12 (34.45) | 2.97E-10 (10.80)  1.58E-09 (10.30)
80 | 3.14E-11 (5.00)  7.14E-11 (4.98) | 2.23E-15 (9.80)  3.19E-15 (9.93) | 1.37E-13 (11.08)  1.55E-12 (9.99)

Table 2  L²- and L∞-errors for the dG approximation together with the SRV and new filters for the linear transport equation using polynomials of degree k = 3, 4 (double precision) over uniform meshes

Mesh | dG: L² error (order)  L∞ error (order) | SRV filter: L² error (order)  L∞ error (order) | New filter: L² error (order)  L∞ error (order)

P^3
20 | 2.06E-06 (--)  6.04E-06 (--) | 1.53E-07 (--)  1.02E-06 (--) | 2.30E-06 (--)  8.71E-06 (--)
40 | 1.29E-07 (4.00)  3.80E-07 (3.99) | 2.70E-10 (9.15)  4.00E-10 (11.32) | 4.14E-09 (9.12)  2.27E-08 (8.58)
80 | 8.07E-09 (4.00)  2.38E-08 (4.00) | 1.25E-12 (7.75)  3.85E-12 (6.70) | 8.18E-12 (8.98)  1.20E-10 (7.56)

P^4
20 | 3.19E-08 (--)  7.02E-08 (--) | 7.53E-03 (--)  7.33E-02 (--) | 5.31E-07 (--)  1.99E-06 (--)
40 | 1.00E-09 (4.99)  2.25E-09 (4.97) | 3.97E-11 (27.50)  6.14E-10 (26.83) | 2.97E-10 (10.80)  1.58E-09 (10.30)
80 | 3.14E-11 (5.00)  7.14E-11 (4.98) | 1.48E-11 (1.42)  3.28E-10 (0.90) | 1.37E-13 (11.08)  1.55E-12 (9.99)

Fig. 5 Comparison of the pointwise errors in log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the linear transport equation over uniform meshes using polynomials of degree k = 3, 4 (top and bottom rows, respectively). Quadruple precision was used in the computations.

Fig. 6 Comparison of the pointwise errors in log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the linear transport equation over uniform meshes using polynomials of degree k = 3, 4 (top and bottom rows, respectively). Double precision was used in the computations.

Additionally, we point out that the accuracy of the SRV filter depends on (1) having higher regularity of C^{4k+1}, (2) a well-resolved dG solution, and (3) a wide enough support (at least 5k+1 elements). The same phenomenon will also be observed in the following tests, such as for a nonlinear equation. For the new filter, the support size remains the same (3k+1 elements) throughout the domain, and a higher degree of regularity is not necessary.

4.2 Non-uniform Meshes

We begin by defining three non-uniform meshes that are used in the numerical examples. The meshes tested are:

Mesh 4.1 (Smoothly varying mesh with periodicity) The first mesh is a simple smoothly varying mesh. It is defined by x = ξ + b sin(ξ), where ξ is a uniform mesh variable and b = 0.5, as in Curtis et al. [6]. We note that the tests were also performed for different values of b; similar results were attained in all cases. This mesh has the nice feature that it is a periodic mesh and that the elements near the boundaries have a larger element size.

Mesh 4.2 (Smooth polynomial mesh) The second mesh is also a smoothly varying mesh but does not have a periodic structure. It is defined by x = ξ − 0.05(ξ − 2π)ξ. For this mesh, the size of the elements gradually decreases from left to right.

Mesh 4.3 (Randomly varying mesh) The third mesh is a mesh with randomly distributed elements. The element size varies between [0.8h, 1.2h], where h is the uniform mesh size.

We will now present numerical results demonstrating the usefulness of the position-dependent SIAC filter in van Slingerland et al. [20] and our new one-sided SIAC filter for the aforementioned meshes.

4.3 Linear Transport Equation

The first example that we consider is a linear transport equation,

u_t + u_x = 0,  (x, t) ∈ [0, 2π] × (0, T],
u(x, 0) = sin(x),  (4.3)

with periodic boundary conditions and the errors calculated at T = 2π. We compute the discontinuous Galerkin approximations for this equation over the three different non-uniform meshes (Meshes 4.1, 4.2, 4.3).
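The three meshes can be generated as follows; a sketch in which the domain [0, 2π], the element count N, and the random seed are our illustrative assumptions:

```python
import numpy as np

def mesh_smooth_periodic(N, b=0.5):
    """Mesh 4.1: x = xi + b*sin(xi) on [0, 2*pi]."""
    xi = np.linspace(0.0, 2.0 * np.pi, N + 1)
    return xi + b * np.sin(xi)

def mesh_smooth_polynomial(N):
    """Mesh 4.2: x = xi - 0.05*(xi - 2*pi)*xi; element sizes decrease
    monotonically from left to right."""
    xi = np.linspace(0.0, 2.0 * np.pi, N + 1)
    return xi - 0.05 * (xi - 2.0 * np.pi) * xi

def mesh_random(N, seed=0):
    """Mesh 4.3: element sizes drawn from [0.8h, 1.2h], rescaled so the
    mesh still covers [0, 2*pi]."""
    h = 2.0 * np.pi / N
    rng = np.random.default_rng(seed)
    sizes = rng.uniform(0.8 * h, 1.2 * h, N)
    sizes *= 2.0 * np.pi / sizes.sum()     # preserve the domain length
    return np.concatenate([[0.0], np.cumsum(sizes)])
```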
The approximation is then post-processed at the final time in order to analyze the numerical errors. We note that although the boundary conditions for the equation are periodic, in the boundary regions we implement the one-sided filter of van Slingerland et al. [20] as the SRV filter and compare it with the new filter presented above. The pointwise error plots for the periodically smoothly varying mesh are given in Fig. 7, with the corresponding errors presented in Table 3. In the boundary regions, the SRV filter behaves slightly better for coarse meshes than the new filter. However, we recall that this filter essentially doubles the support in the boundary regions. Additionally, we see that the new filter has a higher convergence rate than k+1, which is better than the theoretically predicted convergence rate. For the smooth polynomial Mesh 4.2 (without a periodic property), the results of using the scaling H = Δx_j are presented in Fig. 8 and Table 3. Unlike the previous example, without the periodic property, the SRV filter leads to significantly worse performance. The SRV filter no longer enhances the accuracy order and has larger errors near the boundaries.

Fig. 7 Comparison of the pointwise errors in log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the linear transport Eq. (4.3) over the smoothly varying mesh (Mesh 4.1). The kernel scaling H = Δx_j and quadruple precision were used in the computations.

Table 3  L²- and L∞-errors for the dG approximation together with the SRV and the new filters for the linear transport Eq. (4.3) over the three Meshes 4.1, 4.2 and 4.3

Mesh | dG: L² error (order)  L∞ error (order) | SRV filter: L² error (order)  L∞ error (order) | New filter: L² error (order)  L∞ error (order)

Mesh 1: Smoothly varying mesh
P^1
20 | 7.13E-03 (--)  1.97E-02 (--) | 3.57E-03 (--)  5.83E-03 (--) | 3.54E-03 (--)  5.29E-03 (--)
40 | 1.65E-03 (2.11)  5.43E-03 (1.86) | 4.35E-04 (3.04)  6.42E-04 (3.18) | 4.34E-04 (3.03)  6.42E-04 (3.04)
80 | 4.04E-04 (2.03)  1.43E-03 (1.93) | 5.37E-05 (3.02)  7.94E-05 (3.02) | 5.37E-05 (3.02)  7.94E-05 (3.02)

P^2
20 | 2.43E-04 (--)  1.22E-03 (--) | 2.41E-04 (--)  1.51E-03 (--) | 1.56E-04 (--)  7.32E-04 (--)
40 | 3.02E-05 (3.01)  1.55E-04 (2.98) | 9.97E-07 (7.92)  9.73E-06 (7.28) | 2.55E-06 (5.94)  2.29E-05 (5.00)
80 | 3.77E-06 (3.00)  1.95E-05 (3.00) | 8.30E-09 (6.91)  1.34E-08 (9.50) | 1.98E-07 (3.68)  2.13E-06 (3.42)

P^3
20 | 5.45E-06 (--)  1.87E-05 (--) | 6.43E-06 (--)  5.51E-05 (--) | 6.36E-05 (--)  2.02E-04 (--)
40 | 3.39E-07 (4.01)  1.20E-06 (3.96) | 3.11E-07 (4.37)  3.25E-06 (4.09) | 1.72E-07 (8.53)  7.62E-07 (8.05)
80 | 2.12E-08 (4.00)  7.48E-08 (4.01) | 1.45E-10 (11.06)  1.81E-09 (10.81) | 2.81E-10 (9.26)  2.16E-09 (8.46)

P^4
20 | 1.56E-07 (--)  5.20E-07 (--) | 5.15E-07 (--)  4.11E-06 (--) | 1.72E-05 (--)  5.41E-05 (--)
40 | 4.83E-09 (5.01)  1.66E-08 (4.97) | 6.43E-09 (6.32)  6.56E-08 (5.97) | 2.56E-08 (9.39)  1.12E-07 (8.91)
80 | 1.51E-10 (5.00)  5.22E-10 (4.99) | 1.25E-11 (9.01)  1.80E-10 (8.51) | 1.15E-11 (11.13)  7.77E-11 (10.50)

Mesh 2: Smooth polynomial mesh
P^1
20 | 5.36E-03 (--)  1.68E-02 (--) | 2.38E-03 (--)  3.48E-03 (--) | 2.39E-03 (--)  3.48E-03 (--)
40 | 1.26E-03 (2.10)  4.69E-03 (1.84) | 2.93E-04 (3.02)  4.37E-04 (2.99) | 2.94E-04 (3.03)  4.37E-04 (2.99)
80 | 3.08E-04 (2.03)  1.23E-03 (1.93) | 3.63E-05 (3.01)  5.43E-05 (3.01) | 3.64E-05 (3.01)  5.43E-05 (3.01)

Table 3 continued

Mesh | dG: L² error (order)  L∞ error (order) | SRV filter: L² error (order)  L∞ error (order) | New filter: L² error (order)  L∞ error (order)

P^2
20 | 1.43E-04 (--)  8.00E-04 (--) | 2.88E-05 (--)  2.45E-04 (--) | 4.62E-05 (--)  2.18E-04 (--)
40 | 1.80E-05 (2.99)  1.03E-04 (2.95) | 2.24E-06 (3.68)  2.43E-05 (3.33) | 1.22E-06 (5.24)  9.29E-06 (4.55)
80 | 2.25E-06 (3.00)  1.31E-05 (2.99) | 3.91E-07 (2.52)  6.37E-06 (1.93) | 9.79E-08 (3.64)  1.37E-06 (2.76)

P^3
20 | 3.15E-06 (--)  1.25E-05 (--) | 1.52E-05 (--)  1.75E-04 (--) | 1.53E-05 (--)  7.35E-05 (--)
40 | 1.96E-07 (4.01)  8.05E-07 (3.96) | 9.80E-08 (7.27)  1.50E-06 (6.86) | 3.50E-08 (8.77)  2.34E-07 (8.29)
80 | 1.22E-08 (4.00)  4.97E-08 (4.02) | 2.31E-09 (5.40)  3.77E-08 (5.32) | 5.63E-11 (9.28)  8.26E-10 (8.15)

P^4
20 | 6.25E-08 (--)  2.67E-07 (--) | 6.79E-07 (--)  5.38E-06 (--) | 4.45E-06 (--)  2.13E-05 (--)
40 | 1.96E-09 (5.00)  8.77E-09 (4.93) | 2.13E-09 (8.32)  2.34E-08 (7.85) | 4.12E-09 (10.08)  2.76E-08 (9.59)
80 | 6.14E-11 (5.00)  2.79E-10 (4.97) | 3.03E-11 (6.13)  5.01E-10 (5.54) | 1.74E-12 (11.21)  1.59E-11 (10.76)

Mesh 3: Randomly varying mesh
P^1
20 | 4.88E-03 (--)  1.49E-02 (--) | 2.31E-03 (--)  6.73E-03 (--) | 2.18E-03 (--)  3.83E-03 (--)
40 | 1.15E-03 (2.09)  4.45E-03 (1.74) | 2.84E-04 (3.02)  8.11E-04 (3.05) | 2.76E-04 (2.98)  6.41E-04 (2.58)
80 | 2.87E-04 (2.00)  1.20E-03 (1.89) | 4.85E-05 (2.55)  1.60E-04 (2.35) | 4.71E-05 (2.55)  1.37E-04 (2.23)

P^2
20 | 1.17E-04 (--)  5.63E-04 (--) | 3.78E-04 (--)  3.57E-03 (--) | 2.23E-05 (--)  1.06E-04 (--)
40 | 1.52E-05 (2.95)  7.90E-05 (2.83) | 3.35E-05 (3.50)  3.45E-04 (3.37) | 9.58E-07 (4.54)  7.44E-06 (3.83)
80 | 1.93E-06 (2.98)  9.85E-06 (3.00) | 8.04E-06 (2.06)  1.17E-04 (1.56) | 1.27E-07 (2.92)  8.51E-07 (3.13)

P^3
20 | 2.49E-06 (--)  1.06E-05 (--) | 3.35E-05 (--)  3.61E-04 (--) | 5.64E-06 (--)  2.90E-05 (--)

Table 3 continued

Mesh | dG: L² error (order)  L∞ error (order) | SRV filter: L² error (order)  L∞ error (order) | New filter: L² error (order)  L∞ error (order)

40 | 1.55E-07 (4.00)  7.46E-07 (3.83) | 7.42E-07 (5.50)  9.11E-06 (5.31) | 6.23E-09 (9.82)  3.80E-08 (9.58)
80 | 1.02E-08 (3.93)  4.67E-08 (4.00) | 2.46E-08 (4.91)  4.57E-07 (4.32) | 1.54E-10 (5.34)  8.71E-10 (5.45)

P^4
20 | 4.03E-08 (--)  1.49E-07 (--) | 1.40E-06 (--)  1.47E-05 (--) | 1.52E-06 (--)  7.85E-06 (--)
40 | 1.37E-09 (4.88)  5.25E-09 (4.83) | 1.42E-08 (6.63)  1.78E-07 (6.36) | 3.20E-10 (12.21)  1.98E-09 (11.95)
80 | 4.40E-11 (4.96)  1.70E-10 (4.95) | 4.03E-10 (5.13)  7.93E-09 (4.49) | 3.68E-13 (9.77)  3.95E-12 (8.97)

A scaling of H = Δx_j along with quadruple precision was used in the computations.

Fig. 8 Comparison of the pointwise errors in log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the linear transport Eq. (4.3) over the smooth polynomial mesh (Mesh 4.2). The kernel scaling H = Δx_j and quadruple precision were used in the computations.

On the other hand, the new filter still improves accuracy when the mesh is sufficiently refined (N = 40). Numerically, the new filter obtains a higher accuracy order than k+1. For higher-order polynomials, P^3 and P^4, we see that it achieves an accuracy order of 2k+1, although this is not theoretically guaranteed. Lastly, the filters were applied to dG solutions over a randomly distributed mesh. For this randomly varying mesh, the new filter again reduces the errors, except on a very coarse mesh (see Table 3). The accuracy order is decreased compared to the smoothly varying mesh example, but it is still higher than k+1. Unlike the smoothly varying mesh, there are more oscillations in the errors (Fig. 9). However, the oscillations are still reduced compared to the dG solutions. We note that the results suggest that the SRV filter may only be suitable for uniform meshes.

4.4 Variable Coefficient Equation

In this example, we consider the variable coefficient equation

u_t + (a u)_x = f,  (x, t) ∈ [0, 2π] × (0, T],
a(x, t) = 2 + sin(x + t),
u(x, 0) = sin(x),  (4.4)

at T = 2π. As with the previous constant coefficient Eq. (4.3), we also test this variable coefficient Eq. (4.4) over the three different non-uniform meshes (Meshes 4.1, 4.2, 4.3). Since the results are similar to those for the linear transport Eq. (4.3), we do not re-describe the details here. We only note that the results of the variable coefficient equation have more wiggles than those of the constant coefficient equation. This may be an important issue in extending these ideas to nonlinear equations. To save space, we only show the P^3 and P^4 results; P^1 and P^2 are similar to the previous examples. Figure 10 shows the pointwise error plots for the dG and post-processed approximations over a smoothly varying mesh. The corresponding errors are given in Table 4. The results are similar to the linear transport equation. The two filters perform similarly, with the new filter using fewer function evaluations. For the smooth polynomial Mesh 4.2, we show the pointwise error plots in Fig. 11.
The corresponding errors are given in Table 4. In this example we see that the new filter behaves better at the boundaries than the SRV filter. This may be due to the more compact kernel support size. Finally, we test the variable coefficient Eq. (4.4) over the randomly varying Mesh 4.3. As in the linear transport example, the pointwise error plots (Fig. 12) show more oscillations than the smoothly varying mesh examples. We again see that the new filter has better errors at the boundaries than the SRV filter.

5 Numerical Results for Two Dimensions

5.1 Linear Transport Equation Over a Uniform Mesh

To demonstrate the performance of the new filter in two dimensions, we consider the solution to a linear transport equation,

u_t + u_x + u_y = 0,  (x, y) ∈ [0, 2π] × [0, 2π],  (5.1)

Fig. 9 Comparison of the pointwise errors in log scale of the original DG solution (left column), the SRV filter (middle column) and the new filter (right column) for the linear transport Eq. (4.3) over randomly varying Mesh 4.3. The kernel scaling H = Δx_j and quadruple precision were used in the computations

Fig. 10 Comparison of the pointwise errors in log scale of the original DG solution (left column), the SRV filter (middle column) and the new filter (right column) for the variable coefficient Eq. (4.4) over the smoothly varying mesh (Mesh 4.1). The kernel scaling H = Δx_j and quadruple precision were used in the computations

Table 4  L2- and L∞-errors for the DG approximation together with the SRV and new filters for the variable coefficient Eq. (4.4) using a DG approximation of polynomial degree k = 3, 4 over the three Meshes 4.1, 4.2 and 4.3

N      DG L2     Order   DG L∞     Order | SRV L2    Order   SRV L∞    Order | New L2    Order   New L∞    Order

Mesh 1: Smoothly varying mesh, P3
20     5.54E-06   --     1.93E-05   --   | 4.40E-06   --     3.66E-05   --   | 6.36E-05   --     2.02E-04   --
40     3.41E-07  4.02    1.21E-06  4.00  | 3.14E-07  3.81    3.25E-06  3.49  | 1.72E-07  8.53    7.61E-07  8.05
80     2.12E-08  4.01    7.50E-08  4.01  | 1.45E-10  11.08   1.81E-09  10.81 | 2.78E-10  9.27    2.05E-09  8.53

Mesh 1: Smoothly varying mesh, P4
20     1.62E-07   --     5.69E-07   --   | 1.89E-05   --     1.44E-04   --   | 1.72E-05   --     5.41E-05   --
40     4.95E-09  5.03    1.77E-08  5.00  | 5.74E-09  11.68   5.82E-08  11.28 | 2.56E-08  9.39    1.12E-07  8.91
80     1.53E-10  5.01    5.48E-10  5.02  | 1.26E-11  8.83    1.76E-10  8.37  | 1.16E-11  11.11   7.26E-11  10.59

Mesh 2: Smooth polynomial mesh, P3
20     3.15E-06   --     1.27E-05   --   | 2.70E-05   --     3.05E-04   --   | 1.53E-05   --     7.36E-05   --
40     1.96E-07  4.01    8.06E-07  3.98  | 1.31E-07  7.69    1.54E-06  7.62  | 3.55E-08  8.75    2.38E-07  8.28
80     1.22E-08  4.00    4.98E-08  4.02  | 7.51E-09  4.13    1.27E-07  3.60  | 6.25E-11  9.15    7.84E-10  8.24

Mesh 2: Smooth polynomial mesh, P4
20     6.40E-08   --     2.82E-07   --   | 2.95E-06   --     2.42E-05   --   | 4.45E-06   --     2.13E-05   --
40     1.98E-09  5.01    8.94E-09  4.98  | 6.84E-07  2.11    1.12E-05  1.10  | 4.12E-09  10.08   2.76E-08  9.60
80     6.18E-11  5.00    2.80E-10  5.00  | 1.51E-09  8.83    3.50E-08  8.33  | 1.59E-12  11.34   1.55E-11  10.80

Mesh 3: Randomly varying mesh, P3
20     2.49E-06   --     9.61E-06   --   | 1.11E-04   --     8.98E-04   --   | 5.63E-06   --     2.90E-05   --

Table 4 continued

N      DG L2     Order   DG L∞     Order | SRV L2    Order   SRV L∞    Order | New L2    Order   New L∞    Order

Mesh 3: Randomly varying mesh, P3 (continued)
40     1.56E-07  4.00    7.18E-07  3.74  | 2.12E-06  5.71    2.55E-05  5.14  | 7.96E-09  9.47    4.31E-08  9.39
80     1.02E-08  3.93    4.72E-08  3.93  | 5.91E-08  5.17    1.06E-06  4.59  | 3.15E-10  4.66    1.91E-09  4.50

Mesh 3: Randomly varying mesh, P4
20     4.07E-08   --     1.56E-07   --   | 2.45E-05   --     1.96E-04   --   | 1.52E-06   --     7.85E-06   --
40     1.37E-09  4.89    5.31E-09  4.87  | 4.48E-07  5.77    6.18E-06  4.99  | 2.98E-10  12.31   1.79E-09  12.09
80     4.41E-11  4.96    1.73E-10  4.94  | 1.26E-09  8.48    1.91E-08  8.33  | 2.64E-12  6.82    2.19E-11  6.35

The filters use the scaling H = Δx_j. Quadruple precision was used in the computations
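The "Order" columns in these tables are computed from errors on successively halved meshes as order = log2(e_N / e_2N). For example, the DG L2 errors for P3 on the smoothly varying mesh in Table 4 (5.54E-06, 3.41E-07, 2.12E-08 for N = 20, 40, 80) reproduce the tabulated orders:

```python
import math

def order(e_coarse, e_fine):
    # observed accuracy order when the mesh spacing is halved
    return math.log2(e_coarse / e_fine)

# DG, P3, L2 errors on Mesh 4.1 for N = 20, 40, 80 (Table 4)
errors = [5.54e-06, 3.41e-07, 2.12e-08]
orders = [order(errors[i], errors[i + 1]) for i in range(len(errors) - 1)]
print([round(o, 2) for o in orders])  # [4.02, 4.01], matching Table 4
```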

Fig. 11 Comparison of the pointwise errors in log scale of the original DG solution (left column), the SRV filter (middle column) and the new filter (right column) for the variable coefficient Eq. (4.4) over the smooth polynomial mesh (Mesh 4.2). The kernel scaling H = Δx_j and quadruple precision were used in the computations

Fig. 12 Comparison of the pointwise errors in log scale of the original DG solution (left column), the SRV filter (middle column) and the new filter (right column) for the variable coefficient Eq. (4.4) over the randomly varying mesh (Mesh 4.3). The kernel scaling H = Δx_j and quadruple precision were used in the computations

Table 5  L2- and L∞-errors for the DG approximation together with the SRV and new filters for the 2D linear transport Eq. (5.1) solved over a uniform mesh using polynomials of degree k = 3, 4

Mesh    DG L2     Order   DG L∞     Order | SRV L2    Order   SRV L∞    Order | New L2    Order   New L∞    Order

P3
20×20   3.30E-06   --     1.21E-05   --   | 2.60E-07   --     1.12E-04   --   | 2.39E-06   --     1.80E-05   --
40×40   2.06E-07  4.00    7.60E-07  3.99  | 4.69E-10  9.11    3.11E-09  5.17  | 7.01E-09  8.41    5.11E-08  8.46
80×80   1.29E-08  4.00    4.76E-08  4.00  | 1.74E-11  4.75    5.50E-09  -0.82 | 7.97E-11  6.46    1.02E-09  5.65

P4
20×20   4.71E-08   --     1.41E-07   --   | 2.77E-08   --     1.35E-06   --   | 5.25E-07   --     3.77E-06   --
40×40   1.46E-09  5.01    4.50E-09  4.97  | 2.55E-08  0.12    2.84E-06  -1.07 | 3.83E-10  10.42   3.40E-09  10.11
80×80   4.44E-11  5.04    1.43E-10  4.98  | 2.73E-08  -0.10   7.86E-06  -1.47 | 3.00E-13  10.31   3.12E-12  10.09

Double precision was used in the computations

Fig. 13 Comparison of the pointwise errors in log scale of the original DG solution (left), the SRV filter (middle) and the new filter (right) for the 2D linear transport equation using polynomials of degree k = 4 and a uniform 80 × 80 mesh. Double precision was used in the computations

Table 6  L2- and L∞-errors for the DG approximation together with the SRV and new filters for the 2D linear transport Eq. (5.1) using polynomials of degree k = 3, 4 over the three meshes: Meshes 4.1, 4.2 and 4.3

Mesh    DG L2     Order   DG L∞     Order | SRV L2    Order   SRV L∞    Order | New L2    Order   New L∞    Order

Mesh 1: Smoothly varying mesh, P3
20×20   8.74E-06   --     5.39E-05   --   | 6.94E-06   --     1.10E-04   --   | 6.71E-05   --     4.04E-04   --
40×40   5.45E-07  4.00    3.39E-06  4.00  | 3.68E-07  4.24    6.49E-06  4.08  | 2.09E-07  8.33    1.66E-06  7.93
80×80   3.40E-08  4.00    2.06E-07  4.03  | 1.50E-10  11.76   9.01E-09  9.49  | 7.33E-10  8.16    7.76E-09  8.92

Mesh 1: Smoothly varying mesh, P4
20×20   1.93E-07   --     1.05E-06   --   | 9.25E-07   --     8.56E-06   --   | 3.26E-05   --     1.12E-04   --
40×40   6.00E-09  5.01    3.32E-08  4.98  | 3.38E-08  4.77    4.17E-06  1.04  | 2.67E-08  10.25   2.31E-07  8.92
80×80   1.88E-10  5.00    1.04E-09  5.00  | 2.07E-08  0.71    9.13E-06  -1.13 | 1.90E-11  10.46   1.61E-10  10.49

Mesh 2: Smooth polynomial mesh, P3
20×20   4.56E-06   --     3.03E-05   --   | 1.55E-05   --     3.49E-04   --   | 1.59E-05   --     1.38E-04   --
40×40   2.85E-07  4.00    1.92E-06  3.98  | 1.23E-07  6.98    3.04E-06  6.84  | 4.67E-08  8.41    5.14E-07  8.67
80×80   1.78E-08  4.00    1.20E-07  4.00  | 3.35E-09  5.20    8.01E-08  5.25  | 2.43E-10  7.59    4.61E-09  6.80

Mesh 2: Smooth polynomial mesh, P4
20×20   8.48E-08   --     3.27E-07   --   | 1.38E-06   --     1.48E-05   --   | 5.92E-06   --     3.27E-05   --
40×40   2.65E-09  5.00    1.74E-08  4.92  | 3.21E-08  5.43    4.99E-06  1.57  | 4.65E-09  10.30   5.79E-08  9.14
80×80   8.31E-11  5.00    5.58E-10  4.96  | 2.51E-08  0.35    6.88E-06  -0.46 | 3.29E-12  10.46   3.49E-11  10.27

Mesh 3: Randomly varying mesh, P3
20×20   3.47E-06   --     2.16E-05   --   | 3.46E-05   --     6.43E-04   --   | 3.90E-06   --     3.83E-05   --

Table 6 continued

Mesh    DG L2     Order   DG L∞     Order | SRV L2    Order   SRV L∞    Order | New L2    Order   New L∞    Order

Mesh 3: Randomly varying mesh, P3 (continued)
40×40   2.23E-07  3.96    1.52E-06  3.83  | 1.90E-06  4.19    3.59E-05  4.16  | 1.28E-08  8.25    1.25E-07  8.26
80×80   1.41E-08  3.98    9.65E-08  3.98  | 9.94E-08  4.26    2.71E-06  3.73  | 2.97E-10  5.43    3.88E-09  5.01

Mesh 3: Randomly varying mesh, P4
20×20   5.83E-08   --     2.84E-07   --   | 3.06E-06   --     5.19E-05   --   | 1.06E-06   --     8.87E-06   --
40×40   1.90E-09  4.94    1.04E-08  4.77  | 2.64E-08  6.86    2.64E-06  4.30  | 7.80E-10  10.41   1.01E-08  9.78
80×80   6.06E-11  4.97    3.48E-10  4.90  | 1.46E-08  0.85    7.09E-06  -1.43 | 6.60E-13  10.20   7.85E-12  10.33

The filters use the scaling Hx = Δx_j in the x-direction and Hy = Δy_j in the y-direction. Double precision was used in the computations
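The footnote to Table 6 scales the filter anisotropically, using the local element width in each coordinate direction. A minimal sketch of extracting those scalings from a tensor-product mesh (the perturbed mesh below is illustrative, not the paper's actual Meshes 4.1–4.3):

```python
import math, random

# Illustrative tensor-product non-uniform mesh on [0, 2*pi] x [0, 2*pi]:
# uniform nodes, randomly perturbed in the interior of the x-direction.
random.seed(0)
n = 16
h = 2 * math.pi / n
x_nodes = [i * h for i in range(n + 1)]
for i in range(1, n):                      # domain endpoints stay fixed
    x_nodes[i] += 0.2 * h * (2 * random.random() - 1)
y_nodes = [j * h for j in range(n + 1)]    # uniform in y for contrast

dx = [x_nodes[i + 1] - x_nodes[i] for i in range(n)]
dy = [y_nodes[j + 1] - y_nodes[j] for j in range(n)]

# element (i, j) is filtered with the anisotropic scaling
# (Hx, Hy) = (dx[i], dy[j]), as in the table footnote
print(min(dx), max(dx), dy[0])
```

Because the perturbation is bounded by 0.2h, every element width stays positive and the widths telescope back to the domain length.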

Fig. 14 Comparison of the pointwise errors in log scale of the original DG solution (left), the SRV filter (middle) and the new filter (right) for the 2D linear transport Eq. (5.1) using polynomials of degree k = 3, 4 over a smoothly varying 80 × 80 mesh (Mesh 4.2). The filters use the scaling Hx = Δx_j in the x-direction and Hy = Δy_j in the y-direction. Double precision was used in the computations

J Sci Comput

Fig. 15 Comparison of the pointwise errors on a log scale between the SRV and new filters for the 2D linear transport equation using polynomials of degree k = 3, 4. A smoothly varying mesh defined by x = ξ − b(ξ − 2π)ξ, y = ξ − b(ξ − 2π)ξ with b = 0.05 was used. Filter scaling was based upon the local element size. Double precision was used in the computations
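The mapping in Fig. 15's caption lost its minus signs in extraction; reading it as x = ξ − b(ξ − 2π)ξ (an assumed reconstruction) gives a smooth grading that preserves the endpoints of [0, 2π] and stays monotone for b = 0.05:

```python
import math

b = 0.05  # grading parameter from Fig. 15

def stretch(xi):
    # assumed reading of the caption's mapping: x = xi - b*(xi - 2*pi)*xi
    return xi - b * (xi - 2 * math.pi) * xi

n = 80
nodes = [stretch(2 * math.pi * i / n) for i in range(n + 1)]
widths = [nodes[i + 1] - nodes[i] for i in range(n)]

# endpoints are preserved and the mapping is monotone (all widths positive)
print(nodes[0], abs(nodes[-1] - 2 * math.pi) < 1e-14, min(widths) > 0)
# -> 0.0 True True
```

The quadratic perturbation vanishes at ξ = 0 and ξ = 2π, and its derivative 1 − b(2ξ − 2π) remains positive on the whole interval, so the graded mesh is valid.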

Fig. 16 Comparison of the pointwise errors in log scale of the original DG solution (a), the SRV filter (b) and the new filter (c) for the 2D linear transport Eq. (5.1) using polynomials of degree k = 3, 4 over a randomly varying 80 × 80 mesh. The filters use the scaling Hx = Δx_j in the x-direction and Hy = Δy_j in the y-direction. Double precision was used in the computations