Introduction to symplectic integrator methods for solving diverse dynamical equations


Siu A. Chin
Department of Physics and Astronomy, Texas A&M University, College Station, TX 77840, USA

Third Lecture: the algebraization of algorithms; higher-order algorithms; the proof of canonicity; and the proof that no forward-time-step algorithm exists beyond second order.

In Lecture II we arrived at the Poisson-operator solution of classical dynamics,

$$W(t) = e^{t(T+V)}\,W(0) = e^{tH}\,W(0).$$

For the most common, separable Hamiltonian

$$H(q_i, p_i) = \sum_{i=1}^{n} \frac{p_i^2}{2m} + U(q_i),$$

if we define

$$v \equiv \frac{p}{m}, \qquad a(q) \equiv \frac{F(q)}{m}, \qquad T = v\cdot\frac{\partial}{\partial q}, \qquad V = a(q)\cdot\frac{\partial}{\partial v},$$

we now observe that

$$e^{tT}\begin{pmatrix} q \\ v \end{pmatrix} = \begin{pmatrix} q + t v \\ v \end{pmatrix}, \qquad e^{tV}\begin{pmatrix} q \\ v \end{pmatrix} = \begin{pmatrix} q \\ v + t\,a(q) \end{pmatrix}.$$

That is, although $e^{t(T+V)}$ cannot be evaluated exactly, the component operators $e^{tT}$ and $e^{tV}$ are just shift operators and can be evaluated exactly. This therefore suggests that we should try to approximate

$$e^{t(T+V)} \approx \prod_i e^{a_i t T}\, e^{b_i t V}$$

to take advantage of the solvability of $e^{tT}$ and $e^{tV}$. The advantage of this product decomposition, or splitting, is that the resulting algorithm is always a canonical transformation with $\det M = 1$; such algorithms are now known as symplectic integrators. Moreover, this reduces the derivation of algorithms for solving classical dynamics from doing analysis (analyzing how derivatives are to be approximated) to just doing algebra (determining the coefficients $\{a_i, b_i\}$).

First-order algorithms

The Baker-Campbell-Hausdorff formula

$$e^{A}e^{B} = \exp\!\left(A + B + \tfrac{1}{2}[A,B] + \tfrac{1}{12}\big([A,[A,B]] - [B,[A,B]]\big) + \cdots\right)$$

is the key to this approach, suggesting that we can approximate $e^{\Delta t(T+V)}$ by

$$T_{1A} = e^{\Delta t V} e^{\Delta t T} = e^{\Delta t(T+V) + \frac{1}{2}\Delta t^2 [V,T] + O(\Delta t^3)} = e^{\Delta t H_{1A}},$$

where the Hamiltonian operator of the approximation,

$$H_{1A} = T + V + \tfrac{1}{2}\Delta t\,[V,T] + \cdots = H + O(\Delta t),$$

differs from the original Hamiltonian by an error term first order in $\Delta t$. We call this the first-order operator splitting 1A.

This first-order splitting produces the following first-order symplectic algorithm 1A:

$$q(\Delta t) = e^{\Delta t V} e^{\Delta t T} q = e^{\Delta t V}(q + \Delta t\, v) = q + \Delta t\,(v + \Delta t\, a(q)),$$

which is equivalent to first updating $v_1 = v + \Delta t\, a(q)$ and then setting $q_1 = q + \Delta t\, v_1$. If we identify $q_1 = q(\Delta t)$ and $v_1 = v(\Delta t)$, then this is precisely Cromer's algorithm! Note that the variables are updated in the reverse order of the operators: the operators act from right to left, but the variables are updated, according to each operator, from left to right.
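As a concrete illustration (a minimal sketch, not from the lecture; the one-dimensional harmonic oscillator with $a(q) = -q$ and $m = 1$ is an assumed test problem), algorithm 1A is two lines per step:

```python
import math

def step_1A(q, v, dt, a):
    """One step of the first-order splitting 1A (Cromer's algorithm):
    update v first with the old q, then update q with the new v."""
    v = v + dt * a(q)   # the e^{dt V} operator: kick
    q = q + dt * v      # the e^{dt T} operator: drift with the updated velocity
    return q, v

# Harmonic oscillator a(q) = -q; exact solution q(t) = cos(t), v(t) = -sin(t)
a = lambda q: -q
q, v = 1.0, 0.0
dt, n = 0.01, 1000      # integrate to t = 10
for _ in range(n):
    q, v = step_1A(q, v, dt, a)
print(q, math.cos(10.0))  # agreement is only O(dt): the method is first order
```

Being symplectic, the method's energy error stays bounded even though each step is only first-order accurate.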

The first-order algorithm corresponding to

$$T_{1B} = e^{\Delta t T} e^{\Delta t V} = e^{\Delta t(T+V) + \frac{1}{2}\Delta t^2 [T,V] + O(\Delta t^3)}$$

is then

$$q(\Delta t) = e^{\Delta t T} e^{\Delta t V} q = e^{\Delta t T} q = q + \Delta t\, v,$$
$$v(\Delta t) = e^{\Delta t T} e^{\Delta t V} v = e^{\Delta t T}(v + \Delta t\, a(q)) = v + \Delta t\, a(q + \Delta t\, v),$$

which is equivalent to first updating $q_1 = q + \Delta t\, v$ and then $v_1 = v + \Delta t\, a(q_1)$. This is the canonical transformation produced by $F_3$! We will call this algorithm 1B.

Time-symmetry

These first-order symplectic integrators are all stable, with $\det M = 1$. However, they have one defect that is not characteristic of the exact solution. Since classical mechanics is time-symmetric, running the exact solution backward in time would retrace the trajectory back to its origin. This is not the case for these first-order algorithms, since it is easy to see that

$$T_{1A}(-\Delta t)\,T_{1A}(\Delta t) = e^{-\Delta t V} e^{-\Delta t T} e^{\Delta t V} e^{\Delta t T} \neq 1.$$

What kind of splitting $T(\Delta t) = \prod_i e^{a_i \Delta t T} e^{b_i \Delta t V}$ would produce a time-symmetric algorithm? (Hint: in order for $T(-\Delta t)\,T(\Delta t) = 1$, the right-edge operator of $T(-\Delta t)$ must cancel the left-edge operator of $T(\Delta t)$.)
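This defect is easy to see numerically (a sketch under the same assumed oscillator $a(q) = -q$): stepping 1A forward by $\Delta t$ and then by $-\Delta t$ does not return to the starting point, and the mismatch shrinks as $O(\Delta t^2)$, consistent with $T_{1A}(-\Delta t)\,T_{1A}(\Delta t) \neq 1$:

```python
def step_1A(q, v, dt):
    """First-order algorithm 1A for a(q) = -q: kick, then drift."""
    v = v + dt * (-q)
    q = q + dt * v
    return q, v

q0, v0 = 1.0, 0.0
dt = 0.1
q1, v1 = step_1A(q0, v0, dt)    # one step forward
q2, v2 = step_1A(q1, v1, -dt)   # attempt to step back
mismatch = abs(q2 - q0) + abs(v2 - v0)
print(mismatch)  # nonzero, of order dt^2: the algorithm is not time-symmetric
```

Halving $\Delta t$ should reduce the mismatch by roughly a factor of four, confirming the $O(\Delta t^2)$ scaling.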

What kind of splitting would produce a time-symmetric algorithm? Since the right-edge operator of $T(-\Delta t)$ must cancel the left-edge operator of $T(\Delta t)$, we must have

$$T(\Delta t) = e^{a_1 \Delta t T} \cdots e^{a_1 \Delta t T} \qquad\text{or}\qquad T(\Delta t) = e^{b_1 \Delta t V} \cdots e^{b_1 \Delta t V}.$$

After that cancellation, the same argument applies to the next operator in:

$$T(\Delta t) = e^{a_1 \Delta t T} e^{b_1 \Delta t V} \cdots e^{b_1 \Delta t V} e^{a_1 \Delta t T} \qquad\text{or}\qquad T(\Delta t) = e^{b_1 \Delta t V} e^{a_1 \Delta t T} \cdots e^{a_1 \Delta t T} e^{b_1 \Delta t V}.$$

So the answer is clear: the splitting must be left-right symmetric in order for $T(-\Delta t)\,T(\Delta t) = 1$.

A time-symmetric splitting $T(\Delta t)$, obeying

$$T(-\Delta t)\,T(\Delta t) = 1,$$

must have the form

$$T(\Delta t) = \exp\!\left(\Delta t(T+V) + \Delta t^3 E_3 + \Delta t^5 E_5 + \cdots\right),$$

with only odd powers of $\Delta t$, since any even powers of $\Delta t$ would not cancel. This means that any time-symmetric, or left-right symmetric, splitting must have

$$H_T = H + \Delta t^2 E_3 + \Delta t^4 E_5 + \cdots,$$

which is at least second-order.

Second-order algorithms

It is simple to produce left-right symmetric splittings:

$$T_{2A}(\Delta t) = T_{1A}(\Delta t/2)\,T_{1B}(\Delta t/2) = e^{\frac{1}{2}\Delta t V} e^{\frac{1}{2}\Delta t T} e^{\frac{1}{2}\Delta t T} e^{\frac{1}{2}\Delta t V} = e^{\frac{1}{2}\Delta t V} e^{\Delta t T} e^{\frac{1}{2}\Delta t V},$$

which translates to the symplectic algorithm

$$v_1 = v + \tfrac{1}{2}\Delta t\, a(q), \qquad q_1 = q + \Delta t\, v_1, \qquad v_2 = v_1 + \tfrac{1}{2}\Delta t\, a(q_1),$$

known as the velocity Verlet algorithm, found empirically long before this derivation. We shall call this algorithm 2A.
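A minimal sketch of algorithm 2A (again on the assumed test problem $a(q) = -q$, with exact solution $q(t) = \cos t$, $v(t) = -\sin t$):

```python
import math

def step_2A(q, v, dt, a):
    """One velocity Verlet (2A) step: half kick, full drift, half kick."""
    v = v + 0.5 * dt * a(q)
    q = q + dt * v
    v = v + 0.5 * dt * a(q)
    return q, v

a = lambda q: -q
q, v = 1.0, 0.0
dt, n = 0.01, 1000      # integrate to t = 10
for _ in range(n):
    q, v = step_2A(q, v, dt, a)
print(abs(q - math.cos(10.0)))  # second order: error is O(dt^2)
```

Note the left-right symmetric update sequence mirrors the palindromic operator splitting exactly.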

Similarly, we have

$$T_{2B}(\Delta t) = T_{1B}(\Delta t/2)\,T_{1A}(\Delta t/2) = e^{\frac{1}{2}\Delta t T} e^{\frac{1}{2}\Delta t V} e^{\frac{1}{2}\Delta t V} e^{\frac{1}{2}\Delta t T} = e^{\frac{1}{2}\Delta t T} e^{\Delta t V} e^{\frac{1}{2}\Delta t T},$$

which translates to the symplectic algorithm 2B,

$$q_1 = q + \tfrac{1}{2}\Delta t\, v, \qquad v_1 = v + \Delta t\, a(q_1), \qquad q_2 = q_1 + \tfrac{1}{2}\Delta t\, v_1,$$

sometimes referred to as the position Verlet algorithm. From this analysis, there are only TWO canonical first- and second-order symplectic algorithms.
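The claimed orders of accuracy can be checked empirically (a sketch, not from the lecture; the oscillator $a(q) = -q$ is an assumed test problem, and the order is estimated by halving the step size):

```python
import math

def step_1A(q, v, dt):
    """First-order 1A: kick, then drift."""
    v += dt * (-q); q += dt * v
    return q, v

def step_2B(q, v, dt):
    """Second-order 2B (position Verlet): half drift, kick, half drift."""
    q += 0.5 * dt * v
    v += dt * (-q)
    q += 0.5 * dt * v
    return q, v

def global_error(step, n):
    """Integrate q(0)=1, v(0)=0 to t=1 with n steps; compare to the exact solution."""
    q, v, dt = 1.0, 0.0, 1.0 / n
    for _ in range(n):
        q, v = step(q, v, dt)
    return abs(q - math.cos(1.0)) + abs(v + math.sin(1.0))

for step in (step_1A, step_2B):
    order = math.log2(global_error(step, 100) / global_error(step, 200))
    print(step.__name__, round(order, 2))  # ~1 for 1A, ~2 for 2B
```

Doubling the number of steps halves the error for 1A but quarters it for 2B, as the error expansions above predict.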

Higher-order algorithms

Once we have these two second-order algorithms, we can generate symplectic integrators of any order. Let $T_2$ be either of the two second-order algorithms, and consider

$$T_4 = T_2(\epsilon)\,T_2(-s\epsilon)\,T_2(\epsilon),$$

which is left-right symmetric but has a negative middle time step. Since

$$T_2(\epsilon) = \exp(\epsilon H + \epsilon^3 E_3 + \epsilon^5 E_5 + \cdots),$$

$$T_2(\epsilon)\,T_2(-s\epsilon)\,T_2(\epsilon) = \exp\!\left[(2 - s)\epsilon H + (2 - s^3)\epsilon^3 E_3 + O(\epsilon^5)\right].$$

If we now choose $s = 2^{1/3}$, the $\epsilon^3$ error term vanishes and we have a fourth-order method, known as the Forest-Ruth algorithm. The next step is to renormalize the time step so that $(2 - s)\epsilon = \Delta t$, giving

$$T_4(\Delta t) = T_2\!\left(\frac{\Delta t}{2-s}\right) T_2\!\left(-\frac{s\,\Delta t}{2-s}\right) T_2\!\left(\frac{\Delta t}{2-s}\right).$$
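A sketch of the Forest-Ruth construction, applying velocity Verlet as $T_2$ with the renormalized substeps (the oscillator $a(q) = -q$ is again an assumed test problem; the function names are illustrative):

```python
import math

def verlet(q, v, dt):
    """T2: velocity Verlet; works for negative dt as well."""
    v += 0.5 * dt * (-q); q += dt * v; v += 0.5 * dt * (-q)
    return q, v

def forest_ruth(q, v, dt):
    """T4(dt) = T2(d1) T2(-s*d1) T2(d1), with s = 2^(1/3) and d1 = dt/(2 - s)."""
    s = 2.0 ** (1.0 / 3.0)
    d1 = dt / (2.0 - s)
    for d in (d1, -s * d1, d1):   # note the negative middle substep
        q, v = verlet(q, v, d)
    return q, v

def err(step, n):
    q, v, dt = 1.0, 0.0, 1.0 / n
    for _ in range(n):
        q, v = step(q, v, dt)
    return abs(q - math.cos(1.0)) + abs(v + math.sin(1.0))

order = math.log2(err(forest_ruth, 50) / err(forest_ruth, 100))
print(round(order, 2))  # ~4: the eps^3 error term has been cancelled
```

Halving the step size reduces the error by roughly a factor of sixteen, confirming fourth order.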

Now we can repeat the process to produce a sixth-order algorithm,

$$T_6(\Delta t) = T_4\!\left(\frac{\Delta t}{2-s}\right) T_4\!\left(-\frac{s\,\Delta t}{2-s}\right) T_4\!\left(\frac{\Delta t}{2-s}\right), \qquad s = 2^{1/5},$$

or an eighth-order algorithm,

$$T_8(\Delta t) = T_6\!\left(\frac{\Delta t}{2-s}\right) T_6\!\left(-\frac{s\,\Delta t}{2-s}\right) T_6\!\left(\frac{\Delta t}{2-s}\right), \qquad s = 2^{1/7},$$

and so on. This was a great advance in symplectic integration: one can easily produce algorithms of arbitrarily high order. But there are two disadvantages: 1) beyond second order, some time steps must be negative, which means that these splitting methods cannot be used to solve time-irreversible equations, or semi-group problems, involving the heat/diffusion kernel; 2) the number of operators grows exponentially with the order of the algorithm. We will come back to address both points.
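The recursive triplet construction can be sketched as a function that expands $T_{2k}$ into its sequence of signed $T_2$ substeps (an illustrative sketch, not code from the lecture; the oscillator test problem is assumed):

```python
import math

def substeps(order, dt):
    """Expand T_{2k}(dt) = T_{2k-2}(d) T_{2k-2}(-s*d) T_{2k-2}(d),
    s = 2^(1/(2k-1)), d = dt/(2 - s), down to a list of signed T2 step sizes."""
    if order == 2:
        return [dt]
    s = 2.0 ** (1.0 / (order - 1))
    d = dt / (2.0 - s)
    outer = substeps(order - 2, d)
    return outer + substeps(order - 2, -s * d) + outer

def verlet(q, v, dt):
    v += 0.5 * dt * (-q); q += dt * v; v += 0.5 * dt * (-q)
    return q, v

def integrate(order, dt, n):
    q, v = 1.0, 0.0
    for _ in range(n):
        for d in substeps(order, dt):
            q, v = verlet(q, v, d)
    return q, v

steps6 = substeps(6, 0.1)
print(len(steps6), any(d < 0 for d in steps6))  # 9 True: 3^2 substeps, some negative

q2, v2 = integrate(2, 0.05, 20)  # plain velocity Verlet to t = 1
q6, v6 = integrate(6, 0.05, 20)  # sixth-order triplet, same step size
print(abs(q2 - math.cos(1.0)), abs(q6 - math.cos(1.0)))  # T6 error is much smaller
```

Both disadvantages are visible here: the substep list contains negative entries, and its length triples with each increase in order ($3$ for $T_4$, $9$ for $T_6$, $27$ for $T_8$).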

Proof of canonicity

We will now prove that any splitting of the form

$$e^{\Delta t(T+V)} \approx \prod_i e^{a_i \Delta t T} e^{b_i \Delta t V}$$

produces a canonical transformation. Recall that a canonical transformation $(q_i, p_i) \to (Q_i, P_i)$ is one such that Hamilton's equations are preserved for the new variables,

$$\dot{Q}_i = \frac{\partial K}{\partial P_i}, \qquad \dot{P}_i = -\frac{\partial K}{\partial Q_i}, \tag{6}$$

with respect to $K(Q_i, P_i) = H(q_i, p_i)$. From Poisson's equations of motion, we must also have

$$\dot{Q}_i = \{Q_i, H\}, \qquad \dot{P}_i = \{P_i, H\}.$$

Now that $Q_i$ and $P_i$ are our new variables, we must express (summing over repeated indices) the partial derivatives of $H$ in the Poisson brackets in terms of $Q_i$, $P_i$ and $K$:

$$\frac{\partial H}{\partial q_k} = \frac{\partial K}{\partial Q_j}\frac{\partial Q_j}{\partial q_k} + \frac{\partial K}{\partial P_j}\frac{\partial P_j}{\partial q_k}, \qquad \frac{\partial H}{\partial p_k} = \frac{\partial K}{\partial Q_j}\frac{\partial Q_j}{\partial p_k} + \frac{\partial K}{\partial P_j}\frac{\partial P_j}{\partial p_k}.$$

Substituting the above then gives straightforwardly

$$\dot{Q}_i = \{Q_i, Q_j\}\frac{\partial K}{\partial Q_j} + \{Q_i, P_j\}\frac{\partial K}{\partial P_j}, \qquad \dot{P}_i = \{P_i, P_j\}\frac{\partial K}{\partial P_j} + \{P_i, Q_j\}\frac{\partial K}{\partial Q_j}.$$

This agrees with (6) if and only if

$$\{Q_i, Q_j\} = 0, \qquad \{P_i, P_j\} = 0, \qquad \{Q_i, P_j\} = \delta_{ij}. \tag{7}$$

This means that the transformed variables must also obey the fundamental Poisson brackets. Thus the fundamental Poisson brackets are an equivalent way of characterizing a canonical transformation. Furthermore, in (7) the Poisson brackets are computed with respect to $\{q_i, p_i\}$. One would clearly have obtained the same fundamental Poisson brackets had they been computed with respect to $\{Q_i, P_i\}$! This exemplifies the fact that Poisson brackets are canonically invariant: they are the same for all canonically transformed variables.

Lie operators and Lie transforms

For any dynamical function $S(q_i, p_i)$, we can define an associated Lie operator $\hat{S}$ via the Poisson bracket,

$$\hat{S} = \{\,\cdot\,, S\} = \sum_{i=1}^{n}\left(\frac{\partial S}{\partial p_i}\frac{\partial}{\partial q_i} - \frac{\partial S}{\partial q_i}\frac{\partial}{\partial p_i}\right);$$

that is, when $\hat{S}$ acts on any other dynamical function $W(q_i, p_i)$, its effect is

$$\hat{S}W = \{W, S\} = \sum_{i=1}^{n}\left(\frac{\partial S}{\partial p_i}\frac{\partial W}{\partial q_i} - \frac{\partial S}{\partial q_i}\frac{\partial W}{\partial p_i}\right).$$

The importance of introducing Lie operators is that the exponential of any Lie operator, called a Lie transform and defined formally by the infinite Lie series

$$T_S = e^{\epsilon\hat{S}} = 1 + \epsilon\hat{S} + \frac{1}{2!}\epsilon^2\hat{S}^2 + \frac{1}{3!}\epsilon^3\hat{S}^3 + \cdots,$$

produces an explicit canonical transformation via

$$Q_i = e^{\epsilon\hat{S}} q_i, \qquad P_i = e^{\epsilon\hat{S}} p_i. \tag{8}$$

The small parameter $\epsilon$ is introduced to guarantee convergence of the series to the exponential function. To show that (8) is a canonical transformation, we verify that $Q_i$ and $P_i$ satisfy the fundamental Poisson brackets (7). The key steps are elementary; all we need are some preliminary lemmas. First, we note that $\hat{S}$ is just a first-order differential operator, and therefore, for any two functions of the dynamical variables $f(q_i, p_i)$ and $g(q_i, p_i)$, we have the product rule

$$\hat{S}(fg) = (\hat{S}f)g + f(\hat{S}g).$$

From here it is not too difficult to show that

$$e^{\epsilon\hat{S}}(fg) = (e^{\epsilon\hat{S}}f)(e^{\epsilon\hat{S}}g), \tag{9}$$

and hence

$$e^{\epsilon\hat{S}}\{f, g\} = \{e^{\epsilon\hat{S}}f,\; e^{\epsilon\hat{S}}g\}.$$

Take the simplest case $\hat{S} = v\,\partial/\partial q$; then $e^{\epsilon\hat{S}}f(q)$ is just the Taylor expansion of $f(q + \epsilon v)$. Eq. (9) is then obviously true:

$$e^{\epsilon\hat{S}}\big(f(q)g(q)\big) = f(q + \epsilon v)\,g(q + \epsilon v) = (e^{\epsilon\hat{S}}f)(e^{\epsilon\hat{S}}g).$$

For $Q_i$ and $P_i$ defined by (8) we can now use this bracket identity to compute

$$\{Q_i, Q_j\} = \{e^{\epsilon\hat{S}}q_i, e^{\epsilon\hat{S}}q_j\} = e^{\epsilon\hat{S}}\{q_i, q_j\} = 0,$$
$$\{P_i, P_j\} = \{e^{\epsilon\hat{S}}p_i, e^{\epsilon\hat{S}}p_j\} = e^{\epsilon\hat{S}}\{p_i, p_j\} = 0,$$
$$\{Q_i, P_j\} = \{e^{\epsilon\hat{S}}q_i, e^{\epsilon\hat{S}}p_j\} = e^{\epsilon\hat{S}}\{q_i, p_j\} = e^{\epsilon\hat{S}}\delta_{ij} = \delta_{ij}.$$

In this Lie-transform formulation of canonical transformations, we see explicitly how the fundamental Poisson brackets are preserved

and transferred to the new variables. The last equality above follows since $\delta_{ij}$ is a constant, and all terms of $e^{\epsilon\hat{S}}$ acting on it vanish except the first, which is 1. Finally, we recall that

$$T = \hat{T} = \Big\{\,\cdot\,,\; \sum_k \frac{p_k^2}{2m}\Big\} = \sum_i \frac{p_i}{m}\frac{\partial}{\partial q_i} \qquad\text{and}\qquad V = \hat{V} = \{\,\cdot\,,\; U(q_i)\} = \sum_i F_i \frac{\partial}{\partial p_i}$$

are Lie operators, and $e^{\Delta t T}$ and $e^{\Delta t V}$ are Lie transforms.

No forward time step algorithms beyond second order

We noted that there is a negative middle time step in the triplet construction of the fourth-order algorithm. This negative time step is unavoidable, and that is the content of the Sheng-Suzuki theorem. We give here a simplified proof by Blanes and Casas. It is a purely mathematical result that applies to the splitting of any two operators $A$ and $B$. The statement is that if one were to approximate $e^{\epsilon(A+B)}$ by

$$\prod_i e^{\alpha_i \epsilon A} e^{\beta_i \epsilon B}$$

to any order higher than second, then the coefficients $\{\alpha_i, \beta_i\}$ cannot all be positive.

Let the two first-order splittings be $T_A(\epsilon) = e^{\epsilon A} e^{\epsilon B}$ and $T_B(\epsilon) = e^{\epsilon B} e^{\epsilon A}$. Since

$$T_B(-\epsilon)\,T_A(\epsilon) = e^{-\epsilon B} e^{-\epsilon A} e^{\epsilon A} e^{\epsilon B} = 1,$$

we have $T_B(-\epsilon) = T_A^{-1}(\epsilon)$ and $T_B(\epsilon) = T_A^{-1}(-\epsilon)$. By the Baker-Campbell-Hausdorff formula,

$$T_A(\epsilon) = e^{\epsilon A} e^{\epsilon B} = \exp(\epsilon F_1 + \epsilon^2 F_2 + \epsilon^3 F_3 + \cdots) = \exp(F),$$

where $F_1 = A + B$, $F_2 = \frac{1}{2}[A, B]$, $F_3 = \frac{1}{12}[(A - B), [A, B]]$, etc. Clearly then

$$T_A^{-1}(\epsilon) = \exp(-F) = \exp(-\epsilon F_1 - \epsilon^2 F_2 - \epsilon^3 F_3 - \cdots),$$

and therefore

$$T_B(\epsilon) = T_A^{-1}(-\epsilon) = \exp(\epsilon F_1 - \epsilon^2 F_2 + \epsilon^3 F_3 + \cdots).$$

Hence

$$T_A(a_1\epsilon)\,T_B(a_2\epsilon) = \exp\!\big[(a_1 + a_2)\epsilon F_1 + (a_1^2 - a_2^2)\epsilon^2 F_2 + (a_1^3 + a_2^3)\epsilon^3 F_3 + O(\epsilon^3)[F_1, F_2] + \cdots\big],$$
$$T_A(a_3\epsilon)\,T_B(a_4\epsilon) = \exp\!\big[(a_3 + a_4)\epsilon F_1 + (a_3^2 - a_4^2)\epsilon^2 F_2 + (a_3^3 + a_4^3)\epsilon^3 F_3 + O(\epsilon^3)[F_1, F_2] + \cdots\big].$$

Now, coming back to $\prod_i e^{\alpha_i \epsilon A} e^{\beta_i \epsilon B}$, define $\{a_i\}$ from $\{\alpha_i, \beta_i\}$ as follows: $\alpha_1 = a_1$, $\beta_1 = a_1 + a_2$, $\alpha_2 = a_2 + a_3$, $\beta_2 = a_3 + a_4$, etc. Then

$$e^{\alpha_1\epsilon A} e^{\beta_1\epsilon B} e^{\alpha_2\epsilon A} e^{\beta_2\epsilon B} \cdots = e^{a_1\epsilon A} e^{a_1\epsilon B}\, e^{a_2\epsilon B} e^{a_2\epsilon A}\, e^{a_3\epsilon A} e^{a_3\epsilon B}\, e^{a_4\epsilon B} \cdots$$
$$= T_A(a_1\epsilon)\,T_B(a_2\epsilon)\,T_A(a_3\epsilon)\,T_B(a_4\epsilon) \cdots = \exp\!\big[C_1\epsilon F_1 + C_2\epsilon^2 F_2 + C_3\epsilon^3 F_3 + O(\epsilon^3)[F_1, F_2] + \cdots\big],$$

where

$$C_1 = a_1 + a_2 + a_3 + a_4 + \cdots, \qquad C_2 = a_1^2 - a_2^2 + a_3^2 - a_4^2 + \cdots, \qquad C_3 = a_1^3 + a_2^3 + a_3^3 + a_4^3 + \cdots.$$

For a first-order algorithm we must have $C_1 = 1$; for a second-order algorithm we must additionally have $C_2 = 0$; and for a third-order algorithm we must additionally have, as a necessary condition, $C_3 = 0$. (This is only necessary, because the $O(\epsilon^3)[F_1, F_2]$ term must also vanish.) If $C_3$ is to be zero, then at some point one must have $a_i^3 + a_{i+1}^3 < 0$. Since

$$x^3 + y^3 = (x + y)\left[\tfrac{3}{4}y^2 + \left(x - \tfrac{1}{2}y\right)^2\right],$$

it follows that $x^3 + y^3 < 0 \Rightarrow x + y < 0$, so we must have $a_i + a_{i+1} < 0$ for some $i$. Recalling that $\alpha_1 = a_1$, $\beta_1 = a_1 + a_2$, $\alpha_2 = a_2 + a_3$, $\beta_2 = a_3 + a_4$, etc., every consecutive sum $a_i + a_{i+1}$ is one of the $\alpha_j$ or $\beta_j$, so one of the $\alpha_i$ or $\beta_i$ must be negative. The proof can be refined to show that at least one $\alpha_i$ AND at least one $\beta_i$ must be negative.