Strong Stability Preserving Time Discretizations
Sigal Gottlieb, University of Massachusetts Dartmouth
Center for Scientific Computing and Visualization Research
AJ80, November 20, 2014
Happy Birthday, Antony!
Collaborators: David Seal (MSU) and Zachary Grant (UMassD).
Thanks to the AFOSR for funding this work.
Motivation
Consider the hyperbolic partial differential equation $U_t + f(U)_x = 0$. In the method of lines approach, we semi-discretize the problem in space to obtain an ODE system $u_t = F(u)$, and we evolve this ODE in time using standard methods such as Runge-Kutta or multistep methods. If the solution is discontinuous, then linear $L_2$ stability analysis is not sufficient for convergence. In that case, we try to build spatial discretizations which satisfy nonlinear stability properties. Sometimes we want our scheme to satisfy non-$L_2$ properties: positivity, preservation of a maximum principle, total variation diminishing (TVD), or some non-oscillatory property.
Motivation
Such methods include:
- TVD or TVB limiters
- positivity or maximum principle preserving limiters
- ENO or WENO schemes
These nonlinearly stable spatial discretizations are designed to satisfy a specific property when coupled with forward Euler, under some time-step restriction.
Motivation
Consider the ODE system (typically arising from a PDE) $u_t = F(u)$, where the spatial discretization $F(u)$ is carefully chosen (e.g. TVD, TVB, ENO, WENO) so that the solution from the forward Euler method
$$u^{n+1} = u^n + \Delta t\, F(u^n)$$
satisfies the monotonicity requirement $\|u^{n+1}\| \le \|u^n\|$ in some norm, semi-norm, or convex functional, for a suitably restricted time step $\Delta t \le \Delta t_{FE}$.
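As a concrete illustration of this forward Euler condition, the sketch below advects a square wave with a first-order upwind discretization of $u_t + u_x = 0$ and checks that the total variation does not grow. The grid size and CFL number are illustrative choices, not values from the talk.

```python
import numpy as np

def upwind_rhs(u, dx):
    # First-order upwind discretization F(u) for u_t + u_x = 0
    # on a periodic grid: F(u)_i = -(u_i - u_{i-1}) / dx.
    return -(u - np.roll(u, 1)) / dx

def total_variation(u):
    # Discrete total variation on a periodic grid.
    return np.sum(np.abs(u - np.roll(u, 1)))

# Square-wave initial data on a periodic grid.
n = 200
dx = 1.0 / n
x = np.arange(n) * dx
u = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)

dt = 0.9 * dx  # for this scheme, forward Euler is TVD when dt <= dx
tv0 = total_variation(u)
for _ in range(100):
    u = u + dt * upwind_rhs(u, dx)  # one forward Euler step
assert total_variation(u) <= tv0 + 1e-12
```

Here $\Delta t_{FE} = \Delta x$: the forward Euler update is then a convex combination of neighboring values, which is what makes it total variation diminishing.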
Motivation
We spend a lot of effort designing spatial discretizations which satisfy certain nonlinear, non-inner-product stability properties (such as TVD or TVB limiters, positivity or maximum principle preserving limiters or methods, or ENO or WENO schemes) when coupled with forward Euler. But then we want these properties to carry over when the spatial discretization is coupled with higher order time-stepping methods (for both accuracy and linear stability reasons).
The Main Idea
The trick is to decompose a higher order method into convex combinations of forward Euler steps, so that if the spatial discretization $F(u)$ satisfies the strong stability property
$$\|u^n + \Delta t\, F(u^n)\| \le \|u^n\| \quad \text{for } \Delta t \le \Delta t_{FE},$$
then the Runge-Kutta or multistep method will preserve this property:
$$\|u^{n+1}\| \le \|u^n\| \quad \text{for } \Delta t \le C\, \Delta t_{FE}.$$
Decoupling the analysis: $C$ is a property only of the time integrator, while $\Delta t_{FE}$ is a property only of the spatial discretization.
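The classical three-stage, third order SSP Runge-Kutta method of Shu and Osher is exactly such a decomposition: every stage is a convex combination of forward Euler steps, and its SSP coefficient is $C = 1$. A minimal sketch (the upwind advection test problem is my choice, not the talk's):

```python
import numpy as np

def ssprk33_step(u, F, dt):
    # Shu-Osher SSPRK(3,3): each stage is a convex combination of
    # forward Euler steps, so the method has SSP coefficient C = 1.
    u1 = u + dt * F(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * F(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * F(u2))

# Illustrative test: square-wave advection with first-order upwind.
n = 200
dx = 1.0 / n
x = np.arange(n) * dx
F = lambda v: -(v - np.roll(v, 1)) / dx
tv = lambda v: np.sum(np.abs(v - np.roll(v, 1)))

u = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)
tv0 = tv(u)
for _ in range(100):
    u = ssprk33_step(u, F, 0.9 * dx)  # dt <= C * dt_FE with C = 1
assert tv(u) <= tv0 + 1e-12
```

Because every forward Euler piece is taken with an effective step no larger than $\Delta t_{FE}$, the third order method inherits the TVD property of the first order building block.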
The Main Idea
SSP methods give you a guarantee of provable nonlinear stability in arbitrary convex functionals, for arbitrary starting values, for arbitrary nonlinear, non-autonomous equations, provided only that forward Euler strong stability is satisfied. This condition is clearly sufficient; it turns out it is also necessary.
Bounds and barriers: this is a very strong property, so it should not be surprising that it is associated with time-step bounds and order barriers.
SSP Bounds and Barriers
- Explicit SSP Runge-Kutta methods have order $p \le 4$ and $C_{eff} \le 1$.
- Explicit SSP multistep methods have no order barrier, but they obey the same bound $C_{eff} \le 1$.
- Any explicit general linear method has $C_{eff} \le 1$.
- Explicit multistep multistage methods achieve $p > 4$ with better $C_{eff}$.
- All implicit SSP Runge-Kutta methods ever found have $C_{eff} \le 2$.
- Implicit SSP Runge-Kutta methods have order $p \le 6$.
- Implicit SSP multistep methods have no order barrier, but we can prove $C_{eff} \le 2$.
- Implicit SSP multistep multistage methods also have $C_{eff} \le 2$.
- We conjecture that $C_{eff} \le 2$ for any general linear method.
Multi-derivative Methods
If we want to consider multistage multi-derivative methods, we need a new condition that includes the second derivative. Once again, the conservation law $U_t + f(U)_x = 0$ is semi-discretized to $u_t = F(u)$, and we assume that $F$ satisfies the forward Euler condition
$$\|U^n + \Delta t\, F(U^n)\| \le \|U^n\| \quad \text{for } \Delta t \le \Delta t_{FE} \qquad (1)$$
and the second derivative condition
$$\|U^n + \Delta t^2\, \dot F(U^n)\| \le \|U^n\| \quad \text{for } \Delta t \le K\, \Delta t_{FE}, \qquad (2)$$
where $\dot F$ denotes the time derivative of $F$ along the solution.
Formulating an Optimization Problem for Multi-derivative Methods
Given conditions (1) and (2), we can show that if the MDMS method is written in the form
$$y = u^n + \Delta t\, A F(y) + \Delta t^2\, \hat{A} \dot F(y), \qquad (3)$$
we can add the terms $r A y$ and $\hat{r} \hat{A} y$ to both sides to obtain
$$(I + rA + \hat{r}\hat{A})\, y = u^n + rA\left(y + \frac{\Delta t}{r} F(y)\right) + \hat{r}\hat{A}\left(y + \frac{\Delta t^2}{\hat{r}} \dot F(y)\right)$$
$$y = R u^n + P\left(y + \frac{\Delta t}{r} F(y)\right) + Q\left(y + \frac{\Delta t^2}{\hat{r}} \dot F(y)\right)$$
where $R = (I + rA + \hat{r}\hat{A})^{-1}$, $P = rRA$, and $Q = \hat{r}R\hat{A}$.
Formulating an Optimization Problem for Multi-derivative Methods
$$y = R u^n + P\left(y + \frac{\Delta t}{r} F(y)\right) + Q\left(y + \frac{\Delta t^2}{\hat{r}} \dot F(y)\right)$$
Now we can clearly see that if $r = \hat{r}K$, then this method preserves the strong stability condition $\|u^{n+1}\| \le \|u^n\|$ under the time-step restriction $\Delta t \le r\, \Delta t_{FE}$, as long as:
1. $R + P + Q = I$ (which is satisfied because $R + rRA + \hat{r}R\hat{A} = R(I + rA + \hat{r}\hat{A}) = I$);
2. $Re \ge 0$ componentwise, where $e$ is a vector of ones;
3. $P \ge 0$ componentwise;
4. $Q \ge 0$ componentwise.
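These nonnegativity conditions are easy to check numerically for candidate values of $r$ and $\hat{r}$. The sketch below does so for illustrative lower-triangular coefficient matrices $A$, $\hat{A}$ (assumed values, not taken from the slides) and brute-forces the largest feasible $r$ on a coarse grid; the coupling between $r$, $\hat{r}$, and $K$ is omitted here, so this only checks the matrix conditions.

```python
import numpy as np

# Illustrative coefficient matrices for a two-stage two-derivative method
# in the compact form (3); the last row plays the role of the final update.
# These values are assumptions for the sketch.
A = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
Ahat = np.array([[0.0,       0.0,       0.0],
                 [1.0 / 8.0, 0.0,       0.0],
                 [1.0 / 6.0, 1.0 / 3.0, 0.0]])

def feasible(r, rhat, tol=1e-12):
    """Check the SSP decomposition conditions Re >= 0, P >= 0, Q >= 0
    (R + P + Q = I holds automatically by construction)."""
    R = np.linalg.inv(np.eye(3) + r * A + rhat * Ahat)
    P = r * R @ A
    Q = rhat * R @ Ahat
    e = np.ones(3)
    return (R @ e >= -tol).all() and (P >= -tol).all() and (Q >= -tol).all()

# Brute-force the largest feasible r over a coarse grid.
best = max(((r, rh) for r in np.linspace(0.05, 2.0, 40)
                    for rh in np.linspace(0.05, 2.0, 40)
            if feasible(r, rh)),
           key=lambda p: p[0])
print("largest feasible r on grid:", best)
```

The optimization behind the optimal methods in the following slides is more careful than this grid search, but the feasibility test is the same set of conditions.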
Optimal Third Order MSMD Methods
Many two-stage two-derivative third order methods exist. For each $K$, the resulting optimal SSP coefficient $C$ depends on the value of $K$. We found optimal methods for the range $0.1 \le K \le 5$. The interesting thing is that these optimal methods all have a Taylor series method as the first stage, with $\Delta t$ replaced by $a\,\Delta t$, where $a$ is given in closed form in terms of $K$ and $r$.
(Figure: SSP coefficient as a function of $K$.)
Optimal Fourth Order MSMD Methods
The two-stage two-derivative fourth order method is unique:
$$U^* = U^n + \frac{\Delta t}{2} F(U^n) + \frac{\Delta t^2}{8} \dot F(U^n)$$
$$U^{n+1} = U^n + \Delta t\, F(U^n) + \frac{\Delta t^2}{6}\left(\dot F(U^n) + 2 \dot F(U^*)\right).$$
The first stage of the method is a Taylor series method with step $\Delta t/2$, while the second stage can be written as a linear combination of a forward Euler term and a vanishing energy term (but not a Taylor series term). The SSP coefficient of this method grows as $K$ increases.
(Figure: SSP coefficient as a function of $K$.)
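A quick way to see the fourth order accuracy is a refinement study on a scalar test problem. The sketch below implements the two-stage method above; the test equation, step counts, and function names are my choices, with $\dot F(u) = F'(u)F(u)$.

```python
import numpy as np

def tsrk4_step(u, F, Fdot, dt):
    """One step of the unique two-stage two-derivative fourth order
    method. Fdot is the time derivative of F along the solution,
    i.e. F'(u) F(u) for an autonomous problem."""
    us = u + 0.5 * dt * F(u) + (dt**2 / 8.0) * Fdot(u)
    return u + dt * F(u) + (dt**2 / 6.0) * (Fdot(u) + 2.0 * Fdot(us))

# Convergence sketch on u' = -u, u(0) = 1 (exact solution exp(-t)).
F = lambda u: -u
Fdot = lambda u: u  # F'(u) F(u) = (-1) * (-u) = u

def solve(nsteps):
    u, dt = 1.0, 1.0 / nsteps
    for _ in range(nsteps):
        u = tsrk4_step(u, F, Fdot, dt)
    return u

e1 = abs(solve(10) - np.exp(-1.0))
e2 = abs(solve(20) - np.exp(-1.0))
print(e1 / e2)  # a ratio near 16 indicates fourth order
```

Halving the step size should reduce the error by roughly $2^4 = 16$, consistent with fourth order accuracy.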
Comparison of Optimal SSP Two-stage Two-derivative Methods
For 2-stage, 2-derivative methods, the SSP coefficients of the third and fourth order methods have similar looking curves, but the SSP coefficient of the third order methods (blue) is much larger than that of the fourth order method (red).
(Figure: SSP coefficient as a function of $K$ for both methods.)
Optimal Fourth Order MSMD Methods
If we increase the number of stages to three, we can construct whole families of methods that attain fourth order accuracy and are SSP with a larger allowable time step.
(Figure: SSP coefficient as a function of $K$.)
Optimal Fifth Order MSMD Methods
Three-stage fifth order methods can be found, with SSP coefficients that grow with $K$. These methods break the explicit SSP Runge-Kutta order barrier of $p \le 4$.
(Figure: SSP coefficient as a function of $K$.)
Comparison of Optimal SSP Three-stage Two-derivative Methods
For 3-stage, 2-derivative methods, the SSP coefficients of the fourth and fifth order methods have similar looking curves, but the SSP coefficient of the fourth order methods (blue) is much larger than that of the fifth order method (red).
(Figure: SSP coefficient as a function of $K$ for both methods.)
How SSP MSMD Methods Perform in Practice
We look at a linear advection equation $u_t = u_x$ with a weighted essentially non-oscillatory (WENO) method in space and the time-stepping method
$$y^* = u^n + \frac{2}{3}\Delta t\left(F^n + \frac{1}{3}\Delta t\, \dot F^n\right)$$
$$u^{n+1} = u^n + \Delta t\left(\frac{5}{8} F^n + \frac{3}{8} F^* + \frac{1}{8}\Delta t\,(\dot F^n + \dot F^*)\right),$$
which can also be written as
$$y^* = u^n + \frac{2}{3}\Delta t\left(F^n + \frac{1}{3}\Delta t\, \dot F^n\right)$$
$$u^{n+1} = \frac{7}{16} u^n + \frac{9}{16} y^* + \Delta t\left(\frac{1}{4} F^n + \frac{3}{8} F^* + \frac{1}{8}\Delta t\, \dot F^*\right).$$
These recent results are due to David Seal.
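The two displayed forms are algebraically identical (substituting $y^*$ into the second form recovers the first), which can be confirmed numerically. In the sketch below the nonlinear scalar test function and all names are mine; $\dot F$ is computed by the chain rule as $F'(u)F(u)$.

```python
# Check that the two forms of the time stepper agree on a nonlinear test.
F = lambda u: -u**3
Fdot = lambda u: -3.0 * u**2 * F(u)  # chain rule: F'(u) F(u)

def step_form_a(u, dt):
    ys = u + (2.0 / 3.0) * dt * (F(u) + (1.0 / 3.0) * dt * Fdot(u))
    return u + dt * ((5.0 / 8.0) * F(u) + (3.0 / 8.0) * F(ys)
                     + (1.0 / 8.0) * dt * (Fdot(u) + Fdot(ys)))

def step_form_b(u, dt):
    ys = u + (2.0 / 3.0) * dt * (F(u) + (1.0 / 3.0) * dt * Fdot(u))
    return (7.0 / 16.0) * u + (9.0 / 16.0) * ys + dt * (
        (1.0 / 4.0) * F(u) + (3.0 / 8.0) * F(ys)
        + (1.0 / 8.0) * dt * Fdot(ys))

u, dt = 1.3, 0.05
assert abs(step_form_a(u, dt) - step_form_b(u, dt)) < 1e-14
```

The second form makes the convex-combination structure visible: the coefficients $7/16$ and $9/16$ of $u^n$ and $y^*$ are nonnegative and sum to one.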
How SSP MSMD Methods Perform in Practice
(Figure: square wave advection. Increase in total variation as a function of the CFL number, comparing runs labeled "Shu-Osher" and "No Shu-Osher".)
Conclusions
SSP Runge-Kutta methods give you a guarantee of provable nonlinear stability in arbitrary convex functionals, for arbitrary starting values, for arbitrary nonlinear, non-autonomous equations, provided only that forward Euler strong stability is satisfied. SSP multistage multi-derivative methods give you this same guarantee, provided that the second derivative condition
$$\|U^n + \Delta t^2\, \dot F(U^n)\| \le \|U^n\| \quad \text{for } \Delta t \le K\, \Delta t_{FE}$$
is satisfied as well.
Future work:
- Methods with a forward Euler and Taylor series condition (already in progress).
- Sixth order methods.
- More testing in practice.
Thank You!