Runge-Kutta Schemes

Exact evolution over a small time step:

x(t + \Delta t) = x(t) + \int_0^{\Delta t} d(\delta t)\, F(x(t + \delta t),\, t + \delta t)

Expand both sides in a small time increment:

x(t + \Delta t) = x(t) + x'(t)\,\Delta t + \tfrac{1}{2} x''(t)\,(\Delta t)^2 + \tfrac{1}{6} x'''(t)\,(\Delta t)^3 + \cdots
             = x(t) + F\,\Delta t + \tfrac{1}{2}\left(F_t + F F_x\right)(\Delta t)^2 + \tfrac{1}{6}\left(F_{tt} + 2 F F_{tx} + F^2 F_{xx} + F F_x^2 + F_t F_x\right)(\Delta t)^3 + \cdots

Subscripts denote partial derivatives, e.g. F_x = \partial F / \partial x.
Runge-Kutta Schemes

Runge-Kutta ansatz at order m:

x(t + \Delta t) = x(t) + \alpha_1 c_1 + \alpha_2 c_2 + \cdots + \alpha_m c_m

Function evaluations at many points in the interval:

c_1 = (\Delta t)\, F(x, t)
c_2 = (\Delta t)\, F(x + \nu_{21} c_1,\, t + \nu_{21}\Delta t)
c_3 = (\Delta t)\, F(x + \nu_{31} c_1 + \nu_{32} c_2,\, t + (\nu_{31} + \nu_{32})\Delta t)
c_4 = (\Delta t)\, F(x + \nu_{41} c_1 + \nu_{42} c_2 + \nu_{43} c_3,\, t + (\nu_{41} + \nu_{42} + \nu_{43})\Delta t)
\vdots

Matching the Taylor expansion gives m equations for the m + m(m-1)/2 unknowns \{\alpha_i, \nu_{ij}\}.
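A minimal sketch of the ansatz in code, for a scalar ODE x'(t) = F(x, t). The midpoint rule is the m = 2 member of the family, with \alpha_1 = 0, \alpha_2 = 1, \nu_{21} = 1/2 (one common solution of the order-2 matching conditions):

```python
def rk2_midpoint(F, x0, t0, t1, n):
    """Second-order Runge-Kutta (midpoint rule) over n steps."""
    dt = (t1 - t0) / n
    x, t = x0, t0
    for _ in range(n):
        c1 = dt * F(x, t)                        # c_1 = (dt) F(x, t)
        c2 = dt * F(x + 0.5 * c1, t + 0.5 * dt)  # c_2 = (dt) F(x + nu_21 c_1, t + nu_21 dt)
        x, t = x + c2, t + dt                    # alpha_1 = 0, alpha_2 = 1
    return x

# x'(t) = x, x(0) = 1  =>  x(1) = e, with O(dt^2) global error
approx_e = rk2_midpoint(lambda x, t: x, 1.0, 0.0, 1.0, 1000)
```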
We have discussed the Euler, Picard, and Runge-Kutta schemes for integrating the first-order initial value problem:

x'(t) = F(x(t), t), \quad x(0) = x_0

Similar considerations apply to the second-order problem:

x''(t) = F(x(t), x'(t), t), \quad x(0) = x_0, \quad x'(0) = v_0
It is convenient to reinterpret the second-order system

x''(t) = F(x(t), x'(t), t), \quad x(0) = x_0, \quad x'(0) = v_0

as two coupled first-order equations:

v'(t) = A(x(t), v(t), t)
x'(t) = v(t)
x(0) = x_0, \quad v(0) = v_0

There is an obvious connection to classical mechanics: velocity v and acceleration model A.
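In code, the reinterpretation is just bundling (x, v) into one state vector. A minimal sketch, using a hypothetical damped-oscillator acceleration model A(x, v, t) = -x - 0.1 v:

```python
def A(x, v, t):
    """Hypothetical acceleration model: damped harmonic oscillator."""
    return -x - 0.1 * v

def deriv(state, t):
    """Right-hand side of the coupled first-order system: (x', v') = (v, A)."""
    x, v = state
    return (v, A(x, v, t))
```

Any scheme for first-order systems can now be applied component-wise to `state`.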
Naive generalization of the Euler method to the pair of first-order equations. There is some ambiguity in the labelling of time steps.

Set x_0 and v_0 to their initial values. Step through each t_i (i \ge 0): compute

a_i = A(x_i, v_i, t_i)
v_{i+1} = v_i + a_i\,\Delta t
x_{i+1} = x_i + v_i\,\Delta t
t_{i+1} = t_i + \Delta t

The position update could equally use v_{i+1} in place of v_i and still be correct to O(\Delta t) (Euler-Cromer).
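The two update rules side by side, as a sketch for a generic acceleration model A(x, v, t); the only difference is which velocity enters the position update:

```python
def euler_step(x, v, t, dt, A):
    """Naive Euler: position update uses the old v_i."""
    a = A(x, v, t)
    return x + v * dt, v + a * dt

def euler_cromer_step(x, v, t, dt, A):
    """Euler-Cromer: position update uses the new v_{i+1}."""
    a = A(x, v, t)
    v_new = v + a * dt
    return x + v_new * dt, v_new
```

For oscillatory motion the difference matters: with A = -x, plain Euler multiplies the energy (x^2 + v^2)/2 by exactly (1 + \Delta t^2) every step, while Euler-Cromer keeps it bounded.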
Higher-order algorithms can be achieved systematically at the cost of having more time steps involved in each update:

x_{i+1} = x_i + v_i\,\Delta t + \tfrac{1}{2} a_i (\Delta t)^2 + O(\Delta t)^3
x_{i-1} = x_i - v_i\,\Delta t + \tfrac{1}{2} a_i (\Delta t)^2 + O(\Delta t)^3

Adding and subtracting the forward and reverse forms:

x_{i+1} = 2 x_i - x_{i-1} + a_i (\Delta t)^2
v_i = \frac{x_{i+1} - x_{i-1}}{2\,\Delta t}

(Verlet method)
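A minimal sketch of the resulting recurrence, assuming an acceleration that depends only on position (so no velocity is needed inside the loop):

```python
def verlet(x0, x1, dt, n, accel):
    """Position Verlet: x_{i+1} = 2 x_i - x_{i-1} + a_i (dt)^2.
    Needs the two starting points x_0 and x_1."""
    xs = [x0, x1]
    for i in range(1, n):
        xs.append(2 * xs[i] - xs[i - 1] + accel(xs[i]) * dt * dt)
    return xs

def verlet_velocity(xs, i, dt):
    """Central-difference velocity v_i = (x_{i+1} - x_{i-1}) / (2 dt)."""
    return (xs[i + 1] - xs[i - 1]) / (2 * dt)
```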
The Verlet method is more numerically stable than Euler, but it is not self-starting: it needs both (x_0, v_0) and (x_1, v_1). Accuracy can be arbitrarily improved in this way at the cost of more starting points: (x_2, v_2), etc.

Fortunately, there is an update rule mathematically equivalent to Verlet that is self-starting:

x_{i+1} = x_i + v_i\,\Delta t + \tfrac{1}{2} a_i (\Delta t)^2
v_{i+1} = v_i + \tfrac{1}{2}\left(a_{i+1} + a_i\right)\Delta t

(self-starting Verlet)
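The self-starting form in code, as a sketch assuming A = A(x); note that each step reuses the acceleration a_{i+1} computed at the end of the previous step:

```python
def velocity_verlet(x0, v0, dt, n, accel):
    """Self-starting Verlet for x'' = accel(x), from (x_0, v_0) alone."""
    x, v = x0, v0
    a = accel(x)
    for _ in range(n):
        x = x + v * dt + 0.5 * a * dt * dt   # x_{i+1}
        a_new = accel(x)                     # a_{i+1}, at the new position
        v = v + 0.5 * (a_new + a) * dt       # v_{i+1}
        a = a_new
    return x, v
```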
Runge-Kutta is always self-starting. 4th-order Runge-Kutta for Newton's equations:

k_{1v} = A(x_i, v_i, t_i)\,\Delta t
k_{1x} = v_i\,\Delta t
k_{2v} = A(x_i + \tfrac{1}{2} k_{1x},\, v_i + \tfrac{1}{2} k_{1v},\, t_i + \tfrac{1}{2}\Delta t)\,\Delta t
k_{2x} = (v_i + \tfrac{1}{2} k_{1v})\,\Delta t
k_{3v} = A(x_i + \tfrac{1}{2} k_{2x},\, v_i + \tfrac{1}{2} k_{2v},\, t_i + \tfrac{1}{2}\Delta t)\,\Delta t
k_{3x} = (v_i + \tfrac{1}{2} k_{2v})\,\Delta t
k_{4v} = A(x_i + k_{3x},\, v_i + k_{3v},\, t_i + \Delta t)\,\Delta t
k_{4x} = (v_i + k_{3v})\,\Delta t
v_{i+1} = v_i + \tfrac{1}{6}\left(k_{1v} + 2 k_{2v} + 2 k_{3v} + k_{4v}\right)
x_{i+1} = x_i + \tfrac{1}{6}\left(k_{1x} + 2 k_{2x} + 2 k_{3x} + k_{4x}\right)
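Transcribed directly into a single step function (a sketch; A(x, v, t) is the acceleration model from before):

```python
def rk4_newton_step(x, v, t, dt, A):
    """One 4th-order Runge-Kutta step for v' = A(x, v, t), x' = v."""
    k1v = A(x, v, t) * dt
    k1x = v * dt
    k2v = A(x + 0.5 * k1x, v + 0.5 * k1v, t + 0.5 * dt) * dt
    k2x = (v + 0.5 * k1v) * dt
    k3v = A(x + 0.5 * k2x, v + 0.5 * k2v, t + 0.5 * dt) * dt
    k3x = (v + 0.5 * k2v) * dt
    k4v = A(x + k3x, v + k3v, t + dt) * dt
    k4x = (v + k3v) * dt
    v_new = v + (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    x_new = x + (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    return x_new, v_new
```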
A finite difference equation approximates a differential equation as an iterative map:

(x_{i+1}, v_{i+1}) = M[(x_i, v_i)]

Evolution is given by orbits of the map (as in the Game of Life):

(x_n, v_n) = M^{(n)}[(x_0, v_0)] = \underbrace{M[M[\cdots M[(x_0, v_0)]\cdots]]}_{n\ \text{times}}
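A sketch of an orbit in code, using one Euler-Cromer step of the harmonic oscillator as an illustrative map M:

```python
def orbit(M, state0, n):
    """n-fold composition M^(n): apply the map n times."""
    state = state0
    for _ in range(n):
        state = M(state)
    return state

DT = 0.1
def M(state):
    """Example map: one Euler-Cromer step for A = -x."""
    x, v = state
    v = v - x * DT
    return (x + v * DT, v)
```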
Backward analysis theorem: the iterative map M is solving exactly some other equation

v'(t) = A(x(t), v(t), t) + \epsilon

Are the additional terms \epsilon relevant? Do they break important conservation laws?
Hamiltonian Systems

A system that can be described by a Hamiltonian H in terms of generalized coordinates (p_n, q_n). Equations of motion:

\dot{p}_n \equiv \frac{dp_n}{dt} = -\frac{\partial H}{\partial q_n}
\qquad
\dot{q}_n \equiv \frac{dq_n}{dt} = \frac{\partial H}{\partial p_n}
Hamiltonian Systems

Key property of Hamiltonian systems: symplectic symmetry = conservation of phase-space volume.

[Figure: a region of the (p, q) phase plane at t = t_1 evolving to t = t_2 with its volume preserved.]
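Volume conservation can be checked numerically. For one degree of freedom, the factor by which a map scales phase-space area is its Jacobian determinant. A sketch comparing forward Euler with the Euler-Cromer (symplectic Euler) map for H = (p^2 + q^2)/2, where the determinants are 1 + \Delta t^2 and exactly 1:

```python
DT = 0.1

def euler_map(q, p):
    """Forward Euler for H = (p^2 + q^2)/2: not symplectic."""
    return q + p * DT, p - q * DT

def euler_cromer_map(q, p):
    """Symplectic Euler: update p first, then q with the new p."""
    p_new = p - q * DT
    return q + p_new * DT, p_new

def jacobian_det(M, q, p, h=1e-6):
    """Numerical Jacobian determinant of a map of the (q, p) plane."""
    dqdq = (M(q + h, p)[0] - M(q - h, p)[0]) / (2 * h)
    dpdq = (M(q + h, p)[1] - M(q - h, p)[1]) / (2 * h)
    dqdp = (M(q, p + h)[0] - M(q, p - h)[0]) / (2 * h)
    dpdp = (M(q, p + h)[1] - M(q, p - h)[1]) / (2 * h)
    return dqdq * dpdp - dqdp * dpdq
```

For this quadratic Hamiltonian the area inflation factor 1 + \Delta t^2 of forward Euler is exactly its per-step energy growth, which is why area-preserving (Verlet-family) integrators are preferred for long Hamiltonian evolutions.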