THE MAGNUS METHOD FOR SOLVING OSCILLATORY LIE-TYPE ORDINARY DIFFERENTIAL EQUATIONS

MARIANNA KHANAMIRYAN

Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom. email: M.Khanamiryan@damtp.cam.ac.uk

Abstract

This work presents a new method for solving Lie-type equations $X'_\omega = A_\omega(t)X_\omega$, $X_\omega(0) = I$. The solution of equations of this type has the representation $X_\omega = \exp(\Omega)$, where $\Omega$ stands for an infinite Magnus series of integral commutators. We assume that the matrix $A_\omega$ has a large imaginary spectrum and that $\omega$ is a real parameter describing the frequency of oscillation. The novel method, called the FM method, combines the Magnus method, which is based on iterative techniques to approximate $\Omega$, and the Filon-type method, an efficient quadrature rule for oscillatory integrals. We show that the FM method has superior performance compared to the classical Magnus method when applied to oscillatory differential equations.

1 Introduction

We proceed with the matrix ordinary differential equation

$$X'_\omega = A_\omega X_\omega, \qquad X_\omega(0) = I, \qquad X_\omega = e^{\Omega}, \qquad (1)$$

where $\Omega$ represents the Magnus expansion, an infinite recursive series,

$$\Omega(t) = \int_0^t A_\omega(\tau)\,d\tau
+ \frac{1}{2}\int_0^t \Bigl[A_\omega(\tau), \int_0^\tau A_\omega(\xi)\,d\xi\Bigr]\,d\tau
+ \frac{1}{4}\int_0^t \Bigl[A_\omega(\tau), \int_0^\tau \Bigl[A_\omega(\xi), \int_0^\xi A_\omega(\zeta)\,d\zeta\Bigr]\,d\xi\Bigr]\,d\tau
+ \frac{1}{12}\int_0^t \Bigl[\Bigl[A_\omega(\tau), \int_0^\tau A_\omega(\zeta)\,d\zeta\Bigr], \int_0^\tau A_\omega(\zeta)\,d\zeta\Bigr]\,d\tau + \cdots.$$

We make the following assumptions: $A_\omega(t)$ is a smooth matrix-valued function, the spectrum $\sigma(A_\omega)$ of the matrix $A_\omega$ has large imaginary eigenvalues, and $\omega \gg 1$ is a real parameter describing the frequency of oscillation.

It has been shown by Hausdorff, [Hau6], that the solution of the linear matrix equation in (1) is a matrix exponential $X_\omega(t) = e^{\Omega(t)}$, where $\Omega(t)$ satisfies the nonlinear differential equation

$$\Omega' = \sum_{j=0}^{\infty} \frac{B_j}{j!}\,\mathrm{ad}^j_\Omega(A_\omega) = \mathrm{dexp}^{-1}_\Omega(A_\omega(t)), \qquad \Omega(0) = 0. \qquad (2)$$

Here the $B_j$ are Bernoulli numbers and the adjoint operator is defined by

$$\mathrm{ad}^0_\Omega(A_\omega) = A_\omega, \qquad \mathrm{ad}_\Omega(A_\omega) = [\Omega, A_\omega] = \Omega A_\omega - A_\omega \Omega, \qquad \mathrm{ad}^{j+1}_\Omega(A_\omega) = [\Omega, \mathrm{ad}^j_\Omega A_\omega].$$

Later it was observed by W. Magnus, [Mag54], that by solving equation (2) with Picard iteration it is possible to obtain an infinite recursive series for $\Omega$; the truncated expansion can then be used to approximate $\Omega$. Magnus methods preserve important geometric properties of the solution: if $X_\omega$ lies in a matrix Lie group $G$ and $A_\omega$ lies in the associated Lie algebra $\mathfrak{g}$ of $G$, then the numerical solution stays on the manifold after discretization. Moreover, Magnus methods preserve the time-symmetry of the solution after discretization: for any $a > b$, $X_\omega(a,b)^{-1} = X_\omega(b,a)$, which yields $\Omega(a,b) = -\Omega(b,a)$. These properties are valuable in applications. Employing the Magnus method to solve highly oscillatory differential equations, we develop it in combination with appropriate quadrature rules, such as

Filon-type methods, which have proved more efficient for oscillatory equations than, for example, classical Gaussian quadrature. Below we provide a brief background on the Filon-type method and the asymptotic method; the asymptotic method provides the theoretical foundation for the Filon-type method. At this point of the discussion we state two important theorems from [IN5], describing the asymptotic and the Filon-type methods used to approximate highly oscillatory integrals of the form

$$I[f] = \int_a^b f(x)\,e^{i\omega g(x)}\,dx. \qquad (3)$$

Suppose $f, g \in C^\infty$ are smooth, $g$ is strictly monotone on $[a,b]$, so that $g'(x) \neq 0$ for $a \le x \le b$, and the frequency satisfies $\omega \gg 1$.

LEMMA 1.1 (A. Iserles & S. Nørsett, [IN5]) Let $f, g \in C^\infty$, $g'(x) \neq 0$ on $[0,1]$, and

$$\sigma_0[f](x) = f(x), \qquad \sigma_{k+1}[f](x) = \frac{d}{dx}\,\frac{\sigma_k[f](x)}{g'(x)}, \qquad k = 0, 1, \ldots$$

Then, for $\omega \gg 1$,

$$I[f] \sim -\sum_{m=1}^{\infty} \frac{1}{(-i\omega)^m} \left[ \frac{e^{i\omega g(1)}}{g'(1)}\,\sigma_{m-1}[f](1) - \frac{e^{i\omega g(0)}}{g'(0)}\,\sigma_{m-1}[f](0) \right].$$

The asymptotic method is defined as follows,

$$Q^A_s[f] = -\sum_{m=1}^{s} \frac{1}{(-i\omega)^m} \left[ \frac{e^{i\omega g(1)}}{g'(1)}\,\sigma_{m-1}[f](1) - \frac{e^{i\omega g(0)}}{g'(0)}\,\sigma_{m-1}[f](0) \right].$$

THEOREM 1.2 (A. Iserles & S. Nørsett, [IN5]) For every smooth $f$ and $g$ such that $g'(x) \neq 0$ on $[a,b]$, it is true that

$$Q^A_s[f] - I[f] \sim O(\omega^{-s-1}), \qquad \omega \to \infty.$$

The Filon-type method was first pioneered in the work of L. N. G. Filon in 1928 and later developed by A. Iserles and S. Nørsett in 2004, [IN5]. The method

works as follows. First interpolate the function $f$ in (3) at chosen node points. For instance, take the Hermite interpolant

$$\tilde v(x) = \sum_{l=1}^{\nu} \sum_{j=0}^{\theta_l - 1} \alpha_{l,j}(x)\,f^{(j)}(c_l), \qquad (4)$$

which satisfies $\tilde v^{(j)}(c_l) = f^{(j)}(c_l)$ at the node points $a = c_1 < c_2 < \cdots < c_\nu = b$, with associated multiplicities $\theta_1, \theta_2, \ldots, \theta_\nu$, where $j = 0, 1, \ldots, \theta_l - 1$, $l = 1, 2, \ldots, \nu$, and $r = \sum_{l=1}^{\nu} \theta_l - 1$ is the order of the approximation polynomial. Then, for the Filon-type method, by definition,

$$Q^F_s[f] = I[\tilde v] = \int_a^b \tilde v(x)\,e^{i\omega g(x)}\,dx = \sum_{l=1}^{\nu} \sum_{j=0}^{\theta_l - 1} b_{l,j}(\omega)\,f^{(j)}(c_l),$$

where

$$b_{l,j}(\omega) = \int_a^b \alpha_{l,j}(x)\,e^{i\omega g(x)}\,dx, \qquad j = 0, 1, \ldots, \theta_l - 1, \quad l = 1, 2, \ldots, \nu.$$

THEOREM 1.3 (A. Iserles & S. Nørsett, [IN5]) Suppose that $\theta_1, \theta_\nu \ge s$. For every smooth $f$ and $g$ with $g'(x) \neq 0$ on $[a,b]$, it is true that

$$I[f] - Q^F_s[f] \sim O(\omega^{-s-1}), \qquad \omega \to \infty.$$

The Filon-type method has the same asymptotic order as the asymptotic method. For a formal proof of the asymptotic order of the Filon-type method for univariate integrals we refer the reader to [IN5]; for matrix- and vector-valued functions we refer to [Kha8]. The following theorem from [Kha8] concerns the numerical order of the Filon-type method. It was also shown in [Kha8] that the numerical solution obtained after discretization of the integral with a Filon-type method is convergent.

THEOREM 1.4 (M. Khanamiryan, [Kha8]) Let $\theta_1, \theta_\nu \ge s$. Then the numerical order of the Filon-type method is equal to $r = \sum_{l=1}^{\nu} \theta_l - 1$.
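To make the construction concrete, consider $g(x) = x$ on $[0,1]$ with nodes $c_1 = 0$, $c_2 = 1$ and multiplicities $\theta_1 = \theta_2 = 2$, i.e. cubic Hermite interpolation. The weights $b_{l,j}(\omega)$ then have closed forms, obtained by integrating the Hermite cardinal polynomials against $e^{i\omega x}$. The following sketch (our own illustration, not part of the paper; Python with NumPy, all names ours) implements the resulting $Q^F_2$ alongside the asymptotic method $Q^A_2$:

```python
import numpy as np

def filon_weights(omega):
    """Closed-form Filon weights b_{l,j}(omega) for g(x) = x on [0, 1],
    nodes c1 = 0, c2 = 1, multiplicities theta1 = theta2 = 2.
    Each weight is the integral of a cubic Hermite cardinal polynomial,
    e.g. alpha_{1,0}(x) = (1 - x)^2 (1 + 2x), against exp(i*omega*x)."""
    s = 1j * omega
    E = np.exp(s)
    b10 = -1/s + 6*(1 + E)/s**3 - 12*(E - 1)/s**4    # multiplies f(0)
    b20 = E/s - 6*(1 + E)/s**3 + 12*(E - 1)/s**4     # multiplies f(1)
    b11 = 1/s**2 + (4 + 2*E)/s**3 - 6*(E - 1)/s**4   # multiplies f'(0)
    b21 = -E/s**2 + (2 + 4*E)/s**3 - 6*(E - 1)/s**4  # multiplies f'(1)
    return b10, b20, b11, b21

def Q_F(f, df, omega):
    """Filon-type method Q^F_2: exact integral of the Hermite interpolant."""
    b10, b20, b11, b21 = filon_weights(omega)
    return b10*f(0.0) + b20*f(1.0) + b11*df(0.0) + b21*df(1.0)

def Q_A(f, df, omega):
    """Asymptotic method Q^A_2 for g(x) = x: two terms of Lemma 1.1."""
    E = np.exp(1j * omega)
    return (E*f(1.0) - f(0.0))/(1j*omega) + (E*df(1.0) - df(0.0))/omega**2
```

Since the Hermite interpolant reproduces every cubic exactly, $Q^F_2$ integrates cubics without error for all $\omega$, which is a convenient check on the weights; for smooth non-polynomial $f$ both rules converge at the rate $O(\omega^{-3})$.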

Figure 1: The error of the asymptotic method $Q^A_2$ (right) and the Filon-type method $Q^F_2$ (left) for $f(x) = \cos(x)$, $g(x) = x$, $\theta_1 = \theta_2 = 2$, as $\omega$ increases.

EXAMPLE 1.1 Here we consider the asymptotic and the Filon-type methods with first derivatives for (3) over the interval $[0,1]$ in the case $g(x) = x$:

$$Q^A_2 = \frac{e^{i\omega}f(1) - f(0)}{i\omega} + \frac{e^{i\omega}f'(1) - f'(0)}{\omega^2},$$

$$Q^F_2 = \Bigl( -\frac{1}{i\omega} + \frac{6(1+e^{i\omega})}{(i\omega)^3} + \frac{12(1-e^{i\omega})}{\omega^4} \Bigr) f(0)
+ \Bigl( \frac{e^{i\omega}}{i\omega} - \frac{6(1+e^{i\omega})}{(i\omega)^3} - \frac{12(1-e^{i\omega})}{\omega^4} \Bigr) f(1)
+ \Bigl( -\frac{1}{\omega^2} + \frac{4+2e^{i\omega}}{(i\omega)^3} + \frac{6(1-e^{i\omega})}{\omega^4} \Bigr) f'(0)
+ \Bigl( \frac{e^{i\omega}}{\omega^2} + \frac{2+4e^{i\omega}}{(i\omega)^3} + \frac{6(1-e^{i\omega})}{\omega^4} \Bigr) f'(1).$$

In Figure 1 we present numerical results for the asymptotic and the Filon-type methods, with function values and first derivatives at the end points only, $c_1 = 0$, $c_2 = 1$, for the integral

$$I[f] = \int_0^1 \cos(x)\,e^{i\omega x}\,dx, \qquad \omega \gg 1.$$

Both methods have the same asymptotic order and use exactly the same information. However, as we can see from Figure 1, the Filon-type method yields a

greater measure of accuracy than the asymptotic method. Adding more internal interpolation points decreases the leading error constant, resulting in a marked improvement in the accuracy of the approximation. However, interpolating the function $f$ at internal points does not contribute to a higher asymptotic order of the Filon-type method.

2 The Magnus method

In this section we focus on Magnus methods for the approximation of the matrix-valued function $X_\omega$ in (1). There is a large list of publications available on Lie-group methods; here we refer to some of them: [BCR], [CG93], [Ise9], [Isea], [MK98], [BO97], [Zan96]. Currently, the most general theorem on the existence and convergence of the Magnus series is proved for a bounded linear operator $A(t)$ in a Hilbert space, [Cas7]. The following theorem gives a sufficient condition for the convergence of the Magnus series, extending Theorem 3 of [MN8], where the same result is stated for real matrices.

THEOREM 2.1 (F. Casas, [Cas7]) Consider the differential equation $X' = A(t)X$ defined in a Hilbert space $H$, with $X(0) = I$, and let $A(t)$ be a bounded linear operator on $H$. Then the Magnus series $\Omega(t)$ in (2) converges in the interval $t \in [0,T)$ such that

$$\int_0^T \|A(\tau)\|\,d\tau < \pi,$$

and the sum $\Omega(t)$ satisfies $\exp \Omega(t) = X(t)$.

Given the representation of $\Omega(t)$ in (2), the numerical task of evaluating the commutator brackets is fairly simple, [BCR], [Ise9], [Isea]. For example, choosing a symmetric grid $c_1, c_2, \ldots, c_\nu$ (say, Gaussian points symmetric with respect to $1/2$), consider the set $\{A_1, A_2, \ldots, A_\nu\}$ with $A_k = hA(t_0 + c_k h)$, $k = 1, 2, \ldots, \nu$. Linear combinations of this basis form an adjoint basis $\{B_1, B_2, \ldots, B_\nu\}$ with

$$A_k = \sum_{l=1}^{\nu} \Bigl(c_k - \frac{1}{2}\Bigr)^{l-1} B_l, \qquad k = 1, 2, \ldots, \nu.$$
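For $\nu = 3$ the defining relation above is a small Vandermonde system that can be inverted by hand. The sketch below (our own check in Python/NumPy, with an arbitrary quadratic test matrix of our choosing) builds the adjoint basis for the three Gauss–Legendre points used in the sixth-order method and provides the reconstruction map needed to verify the relation:

```python
import numpy as np

rng = np.random.default_rng(1)
C0, C1, C2 = rng.standard_normal((3, 2, 2))   # arbitrary quadratic A(t)
A = lambda t: C0 + t * C1 + t**2 * C2

t0, h = 0.4, 0.1
d = np.sqrt(15.0) / 10.0
c = np.array([0.5 - d, 0.5, 0.5 + d])         # Gauss points on [0, 1]
A1, A2, A3 = (h * A(t0 + ck * h) for ck in c)

# Adjoint basis: solve A_k = sum_l (c_k - 1/2)^{l-1} B_l for B_1, B_2, B_3.
B1 = A2
B2 = (np.sqrt(15.0) / 3.0) * (A3 - A1)
B3 = (10.0 / 3.0) * (A3 - 2.0 * A2 + A1)

def reconstruct(ck):
    """Right-hand side of A_k = sum_l (c_k - 1/2)^{l-1} B_l."""
    return B1 + (ck - 0.5) * B2 + (ck - 0.5)**2 * B3
```

Because the $3 \times 3$ system is exactly determined, the identity holds for any matrix function $A$, not only for polynomials.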

In this basis the sixth-order method, with Gaussian points $c_1 = \frac{1}{2} - \frac{\sqrt{15}}{10}$, $c_2 = \frac{1}{2}$, $c_3 = \frac{1}{2} + \frac{\sqrt{15}}{10}$ and $A_k = hA(t_0 + c_k h)$, is

$$\Omega(t_0 + h) \approx B_1 + \frac{1}{12}B_3 - \frac{1}{12}[B_1,B_2] + \frac{1}{240}[B_2,B_3] + \frac{1}{360}[B_1,[B_1,B_3]] - \frac{1}{240}[B_2,[B_1,B_2]] + \frac{1}{720}[B_1,[B_1,[B_1,B_2]]],$$

where

$$B_1 = A_2, \qquad B_2 = \frac{\sqrt{15}}{3}(A_3 - A_1), \qquad B_3 = \frac{10}{3}(A_3 - 2A_2 + A_1).$$

This can be reduced further and written in a more compact manner [Isea], [Ise9],

$$\Omega(t_0 + h) \approx B_1 + \frac{1}{12}B_3 + P_1 + P_2 + P_3,$$

where

$$P_1 = \Bigl[B_2, \frac{1}{12}B_1 + \frac{1}{240}B_3\Bigr], \qquad
P_2 = \Bigl[B_1, \Bigl[B_1, \frac{1}{360}B_3 - \frac{1}{60}P_1\Bigr]\Bigr], \qquad
P_3 = \frac{1}{20}[B_2, P_1].$$

A more profound approach, taking the Taylor expansion of $A(t)$ around the point $t_{1/2} = t_0 + \frac{h}{2}$, was introduced in [BCR],

$$A(t) = \sum_{i=0}^{\infty} a_i (t - t_{1/2})^i, \qquad \text{with} \quad a_i = \frac{1}{i!}\,\frac{d^i A(t)}{dt^i}\Big|_{t = t_{1/2}}.$$

This can be substituted into the univariate integrals of the form

$$B^{(i)} = \frac{1}{h^{i+1}} \int_{-h/2}^{h/2} t^i\,A(t_{1/2} + t)\,dt, \qquad i = 0, 1, 2, \ldots,$$

to obtain a new basis

$$B^{(0)} = a_0 + \frac{1}{12}h^2 a_2 + \frac{1}{80}h^4 a_4 + \frac{1}{448}h^6 a_6 + \cdots,$$
$$B^{(1)} = \frac{1}{12}h a_1 + \frac{1}{80}h^3 a_3 + \frac{1}{448}h^5 a_5 + \cdots,$$
$$B^{(2)} = \frac{1}{12}a_0 + \frac{1}{80}h^2 a_2 + \frac{1}{448}h^4 a_4 + \frac{1}{2304}h^6 a_6 + \cdots.$$
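The compact $P_1, P_2, P_3$ form translates directly into code. The sketch below (our own illustration in Python/NumPy, with a simple Taylor scaling-and-squaring matrix exponential; all names are ours) implements one step of the sixth-order method and applies it to the Airy equation $y'' = -ty$ written as a first-order system:

```python
import numpy as np

def expm(M, terms=20):
    """Matrix exponential of a small matrix by scaling and squaring
    combined with a truncated Taylor series."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1e-16)))) + 1)
    X = M / 2.0**s
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms + 1):
        T = T @ X / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def comm(X, Y):
    return X @ Y - Y @ X

def magnus6_step(A, t0, h):
    """Sixth-order Magnus approximation of Omega(t0 + h) in the compact
    P1, P2, P3 form, using three Gauss-Legendre samples of A."""
    d = np.sqrt(15.0) / 10.0
    A1, A2, A3 = (h * A(t0 + (0.5 + e) * h) for e in (-d, 0.0, d))
    B1 = A2
    B2 = (np.sqrt(15.0) / 3.0) * (A3 - A1)
    B3 = (10.0 / 3.0) * (A3 - 2.0 * A2 + A1)
    P1 = comm(B2, B1 / 12.0 + B3 / 240.0)
    P2 = comm(B1, comm(B1, B3 / 360.0 - P1 / 60.0))
    P3 = comm(B2, P1) / 20.0
    return B1 + B3 / 12.0 + P1 + P2 + P3

def integrate(A, y0, t_end, h):
    """Propagate y' = A(t) y from t = 0 to t_end with constant step h."""
    y, t = np.array(y0, dtype=float), 0.0
    while t < t_end - 1e-12:
        y = expm(magnus6_step(A, t, h)) @ y
        t += h
    return y

A_airy = lambda t: np.array([[0.0, 1.0], [-t, 0.0]])   # y'' = -t y
```

Since every $B_l$ and every commutator here is traceless, $\Omega$ is traceless and the propagator $e^{\Omega}$ has unit determinant, reflecting the Lie-group structure that Magnus methods preserve.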

In terms of the basis $B^{(0)}, B^{(1)}, B^{(2)}$, a second-order method reads $e^{\Omega} = e^{hB^{(0)}} + O(h^3)$, whereas for a sixth-order method, $\Omega = \sum_{i=1}^{4} \Omega_i + O(h^7)$, one needs to evaluate only four commutators:

$$\Omega_1 = hB^{(0)},$$
$$\Omega_2 = h^2\Bigl[B^{(1)}, \frac{3}{2}B^{(0)} - 6B^{(2)}\Bigr],$$
$$\Omega_3 + \Omega_4 = h^2\Bigl[B^{(0)}, \Bigl[B^{(0)}, \frac{1}{2}hB^{(2)} - \frac{1}{60}\Omega_2\Bigr]\Bigr] + \frac{3}{5}h\bigl[B^{(1)}, \Omega_2\bigr].$$

The numerical behavior of the fourth- and sixth-order classical Magnus methods is illustrated in Figures 2, 3 and 4, and 5, 6 and 7, respectively. The methods are applied to the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $t \in [0,10]$, for various step-sizes, $h = 1/4$, $h = 1/10$ and $h = 1/50$. The comparison shows that for larger steps $h$ both the fourth- and sixth-order methods give similar results, as illustrated in Figures 2, 3, 5 and 6. For smaller steps, however, the sixth-order Magnus method shows a more rapid improvement in the approximation than the fourth-order method, Figures 4 and 7.

Figure 2: Global error of the fourth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 10$ and step-size $h = 1/4$.

In the current work we present an alternative method for solving equations of the kind (1). We apply Filon quadrature to evaluate the integral commutators in $\Omega$ and then solve for the matrix exponential $X_\omega$ with the Magnus method or the modified Magnus method. The combination of the Filon-type methods and the Magnus

Figure 3: Global error of the fourth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 10$ and step-size $h = 1/10$.

methods forms the FM method, presented in the last section. The application of the FM method to systems of highly oscillatory ordinary differential equations can be found in [Kha9].

3 The modified Magnus method

In this section we continue our discussion of the solution of the matrix differential equation

$$X'_\omega = A_\omega(t)X_\omega, \qquad X_\omega(0) = I, \qquad X_\omega = \exp(\Omega). \qquad (5)$$

It was shown in [Isec] that one can achieve better accuracy in the approximation of the fundamental matrix $X_\omega$ if one solves (5) locally at each time step, for a constant matrix $A_\omega$. We commence from a linear oscillator studied in [Isec],

$$y' = A_\omega(t)y, \qquad y(0) = y_0.$$

We again assume that the spectrum of the matrix $A_\omega(t)$ has large imaginary values. Introducing a local change of variables at each mesh point, we write the solution in the form

$$y(t) = e^{(t - t_n)A_\omega(t_n + \frac{1}{2}h)}\,x(t - t_n), \qquad t \ge t_n,$$

and observe that

$$x' = B(t)x, \qquad x(0) = y_n,$$

Figure 4: Global error of the fourth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 10$ and step-size $h = 1/50$.

with

$$B(t) = e^{-tA_\omega(t_n + \frac{1}{2}h)}\,\bigl[A_\omega(t_n + t) - A_\omega(t_{n+\frac{1}{2}})\bigr]\,e^{tA_\omega(t_n + \frac{1}{2}h)},$$

where $t_{n+\frac{1}{2}} = t_n + \frac{h}{2}$. The modified Magnus method is defined as a local approximation of the solution vector $y$ obtained by solving $x' = B(t)x$, $x(0) = y_n$, with the classical Magnus method. This approximation results in the following algorithm,

$$y_{n+1} = e^{hA_\omega(t_n + h/2)}\,x_n, \qquad x_n = e^{\Omega_n}\,y_n.$$

The performance of the modified Magnus method is better than that of the classical Magnus method for a number of reasons. Firstly, the fact that the matrix $B$ is small, $B(t) = O(t - t_{n+\frac{1}{2}})$, contributes a higher-order correction to the solution. Secondly, the order of the modified Magnus method increases from $p = 2s$ to $p = 3s + 1$, [Iseb].

4 The FM method

Consider the Airy-type equation

$$y''(t) + g(t)y(t) = 0, \qquad g(t) > 0 \ \text{for} \ t > 0, \qquad \lim_{t \to \infty} g(t) = +\infty.$$
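The modified Magnus recursion above is easy to prototype for precisely this class of problems. In the sketch below (our own Python/NumPy illustration, with names of our choosing) the local matrix $B(t)$ is built directly from its definition, and $\Omega_n$ is approximated by the standard two-point Gauss fourth-order Magnus formula; the Filon evaluation of the oscillatory integrals, which defines the FM method, is discussed next.

```python
import numpy as np

def expm2(M, terms=24):
    """Matrix exponential of a small matrix: scaling and squaring + Taylor."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1e-16)))) + 1)
    X = M / 2.0**s
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms + 1):
        T = T @ X / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

g = lambda t: t                                          # Airy: y'' = -t y
Afun = lambda t: np.array([[0.0, 1.0], [-g(t), 0.0]])

def modified_magnus_step(t_n, h, y):
    """One step y_{n+1} = e^{h A(t_n + h/2)} e^{Omega_n} y_n, where Omega_n
    is a fourth-order Magnus approximation for x' = B(t) x."""
    Amid = Afun(t_n + 0.5 * h)
    def B(tau):   # B(tau) = e^{-tau A} [A(t_n + tau) - A] e^{tau A}
        return expm2(-tau * Amid) @ (Afun(t_n + tau) - Amid) @ expm2(tau * Amid)
    d = np.sqrt(3.0) / 6.0
    B1, B2 = B((0.5 - d) * h), B((0.5 + d) * h)
    Om = 0.5 * h * (B1 + B2) + (np.sqrt(3.0) / 12.0) * h**2 * (B2 @ B1 - B1 @ B2)
    return expm2(h * Amid) @ expm2(Om) @ y

def classical_magnus4_step(t_n, h, y):
    """Classical fourth-order Magnus step for y' = A(t) y (reference)."""
    d = np.sqrt(3.0) / 6.0
    A1, A2 = Afun(t_n + (0.5 - d) * h), Afun(t_n + (0.5 + d) * h)
    Om = 0.5 * h * (A1 + A2) + (np.sqrt(3.0) / 12.0) * h**2 * (A2 @ A1 - A1 @ A2)
    return expm2(Om) @ y

def run(step, h, t_end=10.0):
    y = np.array([1.0, 0.0])                             # y(0) = 1, y'(0) = 0
    for k in range(int(round(t_end / h))):
        y = step(k * h, h, y)
    return y

ref = run(classical_magnus4_step, 0.002)                 # fine-step reference
```

Because $B(h/2) = 0$ by construction, the quadrature and the commutator act on a small matrix, which is where the extra accuracy of the modified scheme comes from.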

Figure 5: Global error of the sixth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 10$ and step-size $h = 1/4$.

Replacing the second-order differential equation by a first-order system, we obtain $y' = A_\omega(t)y$, where

$$A_\omega(t) = \begin{pmatrix} 0 & 1 \\ -g(t) & 0 \end{pmatrix}.$$

Due to the large imaginary spectrum of the matrix $A_\omega$, this Airy-type equation is rapidly oscillating. We apply the modified Magnus method to solve the system $y' = A_\omega(t)y$,

$$y_{n+1} = e^{hA_\omega(t_n + h/2)}\,e^{\Omega_n}\,y_n.$$

Here the integral commutators in $\Omega_n$ are computed according to the rules of Filon quadrature. This results in a highly accurate numerical method, the FM method. Taking into account that the entries of the matrix $B(t)$ are likely to be oscillatory, the advantage of the Filon-type method is evident. In the example below we provide a more detailed evaluation of the FM method applied to the Airy-type equation $y' = A_\omega(t)y$.

EXAMPLE 4.1 Once we have obtained the representation for $\Omega_n$, the commutator brackets are formed by the matrix $B(t)$. It is possible to reduce the cost of evaluating the matrix $B(t)$ by simplifying it, [Ise4]. Denote $q = \sqrt{g(t_n + \frac{1}{2}h)}$

Figure 6: Global error of the sixth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 10$ and step-size $h = 1/10$.

and $v(t) = g(t_n + t) - g(t_n + \frac{1}{2}h)$. Then

$$B(t) = v(t) \begin{pmatrix} (2q)^{-1}\sin 2qt & q^{-2}\sin^2 qt \\ -\cos^2 qt & -(2q)^{-1}\sin 2qt \end{pmatrix}
= v(t) \begin{pmatrix} q^{-1}\sin qt \\ -\cos qt \end{pmatrix} \begin{pmatrix} \cos qt & q^{-1}\sin qt \end{pmatrix},$$

and for the product,

$$B(t)B(s) = v(t)v(s)\,\frac{\sin q(s-t)}{q} \begin{pmatrix} q^{-1}\sin qt \\ -\cos qt \end{pmatrix} \begin{pmatrix} \cos qs & q^{-1}\sin qs \end{pmatrix}.$$

It was shown in [Ise4] that the entries of $B(t)$ are combinations of $\cos \omega t$ and $(\sin \omega t)/\omega$ terms and that $B(t) = O(t - t_{n+\frac{1}{2}})$. Given the compact representation of the matrix $B(t)$ with oscillatory entries, we solve for $\Omega(t)$ in (2) with a Filon-type method, approximating the functions in $B(t)$ by a polynomial $\tilde v(t)$, for example the Hermite polynomial (4), as in the classical Filon-type method. Furthermore, in our approximation we use end points only, although the method is general and more interpolation nodes can be used.

In Figure 8 we present the global error of the fourth-order modified Magnus method with exact integrals for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial

Figure 7: Global error of the sixth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 10$ and step-size $h = 1/50$.

conditions, the time interval $0 \le t \le 20$ and step-size $h = 1/50$. This can be compared with the global error of the FM method applied to the same equation under exactly the same conditions and step-size, Figure 9. In Figures 10 and 11 we compare the fourth-order classical Magnus method with the remarkable performance of the fourth-order FM method, applied to the Airy equation with a large step-size equal to $h = 1$, while in Figures 12 and 13 we solve the Airy equation with the fourth-order FM method with step-sizes $h = 1/4$ and $h = 1/10$, respectively.

References

[BCR] S. Blanes, F. Casas, and J. Ros. Improved high order integrators based on the Magnus expansion. BIT, 40(3):434–450, 2000.

[BO97] B. Owren and A. Marthinsen. Integration methods based on rigid frames. Tech. Report, Norwegian University of Science & Technology, 1997.

[Cas7] F. Casas. Sufficient conditions for the convergence of the Magnus expansion. J. Phys. A, 40(50):15001–15017, 2007.

[CG93] P. E. Crouch and R. Grossman. Numerical integration of ordinary differential equations on manifolds. J. Nonlinear Sci., 3(1):1–33, 1993.

[Hau6] F. Hausdorff. Die symbolische Exponentialformel in der Gruppentheorie [The symbolic exponential formula in group theory]. Berichte der Sächsischen Akademie der Wissenschaften (Math. Phys. Klasse), 58:19–48, 1906.

[IN5] A. Iserles and S. P. Nørsett. Efficient quadrature of highly oscillatory integrals using derivatives. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 461(2057):1383–1399, 2005.

[Isea] A. Iserles. Brief introduction to Lie-group methods. In Collected Lectures on the Preservation of Stability under Discretization (Fort Collins, CO, 2001), pages 123–143. SIAM, Philadelphia, PA, 2002.

[Iseb] A. Iserles. On the global error of discretization methods for highly-oscillatory ordinary differential equations. BIT, 42(3):561–599, 2002.

[Isec] A. Iserles. Think globally, act locally: solving highly-oscillatory ordinary differential equations. Appl. Numer. Math., 43(1–2):145–160, 2002. 19th Dundee Biennial Conference on Numerical Analysis (2001).

[Ise4] A. Iserles. On the method of Neumann series for highly oscillatory equations. BIT, 44(3):473–488, 2004.

[Ise9] A. Iserles. Magnus expansions and beyond. To appear in Proc. MPIM, Bonn, 2009.

[Kha8] M. Khanamiryan. Quadrature methods for highly oscillatory linear and nonlinear systems of ordinary differential equations. I. BIT, 48(4):743–761, 2008.

[Kha9] M. Khanamiryan. Quadrature methods for highly oscillatory linear and nonlinear systems of ordinary differential equations. II. Submitted to BIT, 2009.

[Mag54] W. Magnus. On the exponential solution of differential equations for a linear operator. Comm. Pure Appl. Math., 7:649–673, 1954.

[MK98] H. Munthe-Kaas. Runge–Kutta methods on Lie groups. BIT, 38(1):92–111, 1998.

[MN8] P. C. Moan and J. Niesen. Convergence of the Magnus series. Found. Comput. Math., 8(3):291–301, 2008.

[Zan96] A. Zanna. The method of iterated commutators for ordinary differential equations on Lie groups. Tech. Rep. DAMTP 1996/NA12, University of Cambridge, 1996.

Figure 8: Global error of the fourth-order modified Magnus method with exact evaluation of the integral commutators for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 20$ and step-size $h = 1/50$.

Figure 9: Global error of the fourth-order FM method with end points only and multiplicities all 2 for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 20$ and step-size $h = 1/50$.

Figure 10: Global error of the fourth-order FM method with end points only and multiplicities all 2 for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 20$ and step-size $h = 1$.

Figure 11: Global error of the fourth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 20$ and step-size $h = 1$.

Figure 12: Global error of the fourth-order FM method with end points only and multiplicities all 2, for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 20$ and step-size $h = 1/4$.

Figure 13: Global error of the fourth-order FM method with end points only and multiplicities all 2, for the Airy equation $y''(t) = -ty(t)$ with $[1,0]$ initial conditions, $0 \le t \le 20$ and step-size $h = 1/10$.