Propagators for TDDFT

A. Castro (1), M. A. L. Marques (2), A. Rubio (3)
(1) Institut für Theoretische Physik, Fachbereich Physik der Freien Universität Berlin.
(2) Departamento de Física, Universidade de Coimbra, Portugal.
(3) Departamento de Física de Materiales, Centro Mixto CSIC-UPV/EHU and Donostia International Physics Center, San Sebastián, Spain.
Benasque 2008 TDDFT School, September 3rd, 2008

Outline
1 Introduction
  The Problem
  Time discretization
  Self-consistency
  Notation
2
3
4

Definition of the problem

The time-dependent Kohn-Sham equations:

i\frac{\partial}{\partial t}\varphi_j(\vec{r},t) = \left\{ -\frac{\nabla^2}{2} + v_{\mathrm{ext}}(\vec{r},t) + \int d^3r'\, \frac{n(\vec{r}',t)}{|\vec{r}-\vec{r}'|} + \frac{\delta E_{\mathrm{xc}}}{\delta n(\vec{r},t)} \right\} \varphi_j(\vec{r},t),

n(\vec{r},t) = \langle \Phi(t) | \sum_j \delta(\vec{r}-\hat{\vec{r}}_j) | \Phi(t) \rangle = \sum_j |\varphi_j(\vec{r},t)|^2, \qquad \varphi_j(t=0) = \varphi_j^0.

In compact form:

i\frac{d}{dt}\varphi(t) = \hat{H}(t)\,\varphi(t), \qquad \varphi(t=0) = \varphi^0.

Definition of the problem

i\frac{d}{dt}\varphi(t) = \hat{H}(t)\,\varphi(t), \qquad \varphi(t=0) = \varphi^0.

Ĥ is very large, sparse, typically Hermitian, and unbounded.
The Hamiltonian Ĥ(t) that appears in TDDFT is necessarily time-dependent. We do not know this time-dependence a priori: the Hamiltonian is both the problem and the solution.
The problem is to obtain φ(t+Δt), given the knowledge of φ(τ) and Ĥ(τ) for τ ≤ t.

The solution

\varphi(t) = \hat{U}(t,t_0)\,\varphi(t_0).

Û is a non-linear operator.
It is unitary: Û† = Û⁻¹.
There is (usually) time-reversal symmetry: Û⁻¹(t,t₀) = Û(t₀,t).
The evolution equation may be equivalently rewritten for Û:

i\frac{d}{dt}\hat{U}(t,t_0) = \hat{H}(t)\,\hat{U}(t,t_0), \qquad \hat{U}(t_0,t_0) = \hat{1}.

Or in integral form:

\hat{U}(t,t_0) = \hat{1} - i\int_{t_0}^{t} d\tau\, \hat{H}(\tau)\,\hat{U}(\tau,t_0).

The solution

By making use of the previous integral equation, one may derive an explicit form for the propagator:

\hat{U}(t,t_0) = \sum_{n=0}^{\infty} \frac{(-i)^n}{n!} \int_{t_0}^{t} dt_1 \int_{t_0}^{t} dt_2 \cdots \int_{t_0}^{t} dt_n\, \mathcal{T}\left[ \hat{H}(t_1)\hat{H}(t_2)\cdots\hat{H}(t_n) \right].

To beautify the expression, the time-ordered exponential is defined:

\hat{U}(t,t_0) = \mathcal{T}\exp\left\{ -i\int_{t_0}^{t} d\tau\, \hat{H}(\tau) \right\}.

(But this is only an aesthetic trick, which does not help to design numerical propagators.)
If [Ĥ(t), Ĥ(t')] = 0: \hat{U}(t,t_0) = \exp\{ -i\int_{t_0}^{t} d\tau\, \hat{H}(\tau) \}.
If the Hamiltonian is time-independent: \hat{U}(t,t_0) = \exp\{ -i\hat{H}\,(t-t_0) \}.

Time discretization: short-time propagation

The exact propagator verifies the well-known property:

\hat{U}(t_1,t_2) = \hat{U}(t_1,t_3)\,\hat{U}(t_3,t_2) \quad\Longrightarrow\quad \hat{U}(t,t_0) = \prod_{j=0}^{N-1} \hat{U}(t_j+\Delta t_j, t_j).

This permits us to work with short-time propagations:

\varphi(t+\Delta t) = \hat{U}(t+\Delta t, t)\,\varphi(t) = \mathcal{T}\exp\left\{ -i\int_{t}^{t+\Delta t} d\tau\, \hat{H}(\tau) \right\} \varphi(t).

The time-dependence of the Hamiltonian is small over each step, and the norm of the time-ordered exponential argument is proportional to Δt.
One normally needs to monitor the evolution: if we want to discern frequencies up to ω_max, the time step Δt has to be no larger than about 1/ω_max.
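As a concrete illustration of this short-time splitting, here is a minimal Python sketch of the outer propagation loop, assuming a user-supplied routine propagate_step(phi, t, dt) (a hypothetical name) that implements any one of the approximate single-step propagators discussed below.

```python
import numpy as np

def propagate(phi0, t0, t_final, dt, propagate_step):
    """Chain short-time propagations U(t + dt, t) over the interval [t0, t_final].

    phi0           : initial orbital(s) as a NumPy array
    propagate_step : callable (phi, t, dt) -> phi at time t + dt
    """
    phi, t = phi0.copy(), t0
    while t < t_final - 1e-12:
        step = min(dt, t_final - t)          # do not overshoot the final time
        phi = propagate_step(phi, t, step)   # apply U(t + step, t)
        t += step
    return phi
```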

Self-consistency

For most methods, one needs to know Ĥ(τ) for t ≤ τ ≤ t+Δt. Two possibilities:
Perform some kind of predictor-corrector, iterated to self-consistency or not.
Extrapolate Ĥ(τ) from the knowledge of a given number of previous steps Ĥ(t_i).
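A minimal sketch of the second option, assuming the time-dependence of the Hamiltonian is carried by the Kohn-Sham potential sampled on a grid and stored for a few previous time steps; the names v_history and extrapolate_potential are illustrative, not from the original slides.

```python
import numpy as np

def extrapolate_potential(v_history, t_history, t_target, order=2):
    """Polynomial (Lagrange) extrapolation of the KS potential to a future time.

    v_history : list of arrays v_KS(., t_i) at previous times t_i
    t_history : list of the corresponding times
    Returns an estimate of v_KS(., t_target).
    """
    k = min(order, len(v_history) - 1)          # usable polynomial degree
    ts = np.asarray(t_history[-(k + 1):])
    vs = np.stack(v_history[-(k + 1):])         # shape (k+1, npoints)
    coeffs = []
    for i, ti in enumerate(ts):
        c = 1.0
        for j, tj in enumerate(ts):
            if j != i:
                c *= (t_target - tj) / (ti - tj)
        coeffs.append(c)                        # Lagrange weight at t_target
    return np.tensordot(np.asarray(coeffs), vs, axes=1)
```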

Some notation

The key parameters that determine any algorithm's accuracy are the time step Δt and the norm of the Hamiltonian. The latter is determined by the highest eigenvalue; for a Kohn-Sham Hamiltonian discretized on a real-space or plane-wave mesh, it grows quadratically with the inverse grid spacing h.
The mesh ratio is defined to be Δt/h².
For a given approximate propagator U(t+Δt, t), the local truncation error is defined as:

E = \frac{1}{\Delta t}\left[ \psi_{\mathrm{exact}}(t+\Delta t) - U(t+\Delta t, t)\,\psi(t) \right]   (1)

Some notation

A propagator U is consistent if the truncation error is O(Δt^p) for some integer p. All reasonable methods should be consistent.
A propagator is stable if there exists a Δt_max such that, for any Δt < Δt_max, ‖U(t+Δt, t)^n‖ is uniformly bounded for any n > 0.
A propagator is contractive if ‖U‖ ≤ 1. It is unitary if ‖U‖ = 1. Being unitary implies being contractive; being contractive implies being stable.
A propagator is unconditionally (consistent, stable, contractive, unitary) if it is such independently of the mesh ratio.
A good propagator should also be time-reversible.

Classical propagators

Crank-Nicolson, or implicit midpoint rule:

\hat{U}_{\mathrm{CN}}(t+\Delta t, t) = \frac{ 1 - i\frac{\Delta t}{2}\hat{H}(t+\Delta t/2) }{ 1 + i\frac{\Delta t}{2}\hat{H}(t+\Delta t/2) }

The method is implicit, since it requires the solution of a linear system:

\hat{L}\,\varphi(t+\Delta t) = b, \qquad \hat{L} = \hat{1} + i\frac{\Delta t}{2}\hat{H}(t+\Delta t/2), \qquad b = \left[ \hat{1} - i\frac{\Delta t}{2}\hat{H}(t+\Delta t/2) \right] \varphi(t).

It is unitary, and if Ĥ(t+Δt/2) is exact, it preserves time-reversal symmetry.
Other classical propagators: implicit or explicit Runge-Kutta, Euler's method, etc.
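A minimal sketch of one Crank-Nicolson step for a dense matrix Hamiltonian, assuming a hypothetical routine hamiltonian(t) that returns Ĥ(t) as a NumPy array; a real-space TDDFT code would instead solve the sparse linear system iteratively.

```python
import numpy as np

def crank_nicolson_step(phi, t, dt, hamiltonian):
    """phi(t) -> phi(t + dt) with the Crank-Nicolson / implicit midpoint rule."""
    H_mid = hamiltonian(t + 0.5 * dt)               # H(t + dt/2)
    I = np.eye(H_mid.shape[0], dtype=complex)
    L = I + 0.5j * dt * H_mid                       # (1 + i dt/2 H)
    b = (I - 0.5j * dt * H_mid) @ phi               # (1 - i dt/2 H) phi(t)
    return np.linalg.solve(L, b)
```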

Exponential midpoint rule

\hat{U}_{\mathrm{EM}}(t+\Delta t, t) = \exp\{ -i\Delta t\, \hat{H}(t+\Delta t/2) \}.

If the exponential is calculated exactly, it is unitary, and if Ĥ(t+Δt/2) is exact, it preserves time-reversal symmetry.
In our experience it performs better than Û_CN, although this depends on the quality of the methods used to calculate the exponential or to solve the linear systems.
Note that the most simple-minded approximation to the propagator, exp{-iΔt Ĥ(t)}, would not preserve time-reversal symmetry.
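A minimal sketch of the exponential midpoint rule, again for a dense Hamiltonian and the same hypothetical hamiltonian(t) helper; scipy.linalg.expm stands in for the approximate exponential methods discussed later.

```python
import numpy as np
from scipy.linalg import expm

def exponential_midpoint_step(phi, t, dt, hamiltonian):
    """phi(t) -> phi(t + dt) with U_EM = exp(-i dt H(t + dt/2))."""
    H_mid = hamiltonian(t + 0.5 * dt)
    return expm(-1j * dt * H_mid) @ phi
```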

Exponential propagator based on TRS (ETRS)

Assuming the simplest approximation to the propagator, time-reversal symmetry can be enforced by stating:

\exp\left\{ +i\frac{\Delta t}{2}\hat{H}(t+\Delta t) \right\}\varphi(t+\Delta t) = \exp\left\{ -i\frac{\Delta t}{2}\hat{H}(t) \right\}\varphi(t),

which defines the propagator:

\hat{U}_{\mathrm{ETRS}}(t+\Delta t, t) = \exp\left\{ -i\frac{\Delta t}{2}\hat{H}(t+\Delta t) \right\} \exp\left\{ -i\frac{\Delta t}{2}\hat{H}(t) \right\}.

It is also unitary, but it contains once again one unknown in the definition, Ĥ(t+Δt). This Hamiltonian must be either extrapolated or approximated with a previous evolution. The procedure, ideally, should be self-consistent.
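A sketch of an ETRS step with one possible predictor for Ĥ(t+Δt): the future Hamiltonian is estimated from a plain exponential step with Ĥ(t), assuming a hypothetical hamiltonian_from(phi, t) that rebuilds the Kohn-Sham Hamiltonian from the propagated orbitals (this is where the TDDFT self-consistency enters).

```python
import numpy as np
from scipy.linalg import expm

def etrs_step(phi, t, dt, hamiltonian_from):
    """phi(t) -> phi(t + dt) with U_ETRS = exp(-i dt/2 H(t+dt)) exp(-i dt/2 H(t))."""
    H_t = hamiltonian_from(phi, t)
    # Predictor: estimate phi(t + dt) with a full exponential step using H(t),
    # then rebuild the Hamiltonian at t + dt from that estimate.
    phi_pred = expm(-1j * dt * H_t) @ phi
    H_next = hamiltonian_from(phi_pred, t + dt)
    # ETRS step: half step with H(t), then half step with the predicted H(t + dt).
    return expm(-0.5j * dt * H_next) @ (expm(-0.5j * dt * H_t) @ phi)
```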

Splitting techniques

Typically Ĥ = T̂ + V̂, where T̂ is diagonal in Fourier space and V̂ is diagonal in real space (or almost). This is indeed the case in TDDFT.
The split-operator technique makes use of this fact, and traditionally splits the exponential that approximates the propagator (e.g. via the EM rule) into three exponentials:

\exp\{ -i\Delta t(\hat{T}+\hat{V}) \} \approx \exp\{ -i\tfrac{\Delta t}{2}\hat{T} \}\, \exp\{ -i\Delta t\,\hat{V} \}\, \exp\{ -i\tfrac{\Delta t}{2}\hat{T} \}.

This may be generalized to obtain numerous methods. The route to derive and analyze them is the well-known Baker-Campbell-Hausdorff relation, and its generalizations [1]:

e^A e^B = \exp\left\{ A + B + \tfrac{1}{2}[A,B] + \dots \right\}.

[1] T. Y. Mikhailova and V. I. Pupyshev, Phys. Lett. A 257, 1 (1999).
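A minimal sketch of a Strang split-operator step on a 1D uniform grid with periodic boundary conditions, where the kinetic half-steps are applied in Fourier space via FFT; the grid, the potential v, and the time step are illustrative inputs, not tied to any particular code.

```python
import numpy as np

def split_operator_step(phi, v, dx, dt):
    """One Strang splitting step exp(-i dt/2 T) exp(-i dt V) exp(-i dt/2 T) phi
    for H = T + V on a periodic 1D grid (atomic units, T = k^2/2 in Fourier space).

    phi : complex array, wavefunction on the grid
    v   : real array, potential on the same grid
    """
    n = phi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)       # grid of wave numbers
    half_kinetic = np.exp(-0.5j * dt * 0.5 * k**2)  # exp(-i dt/2 * k^2/2)
    phi = np.fft.ifft(half_kinetic * np.fft.fft(phi))
    phi = np.exp(-1j * dt * v) * phi                # potential step in real space
    phi = np.fft.ifft(half_kinetic * np.fft.fft(phi))
    return phi
```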

Splitting techniques

A split-operator scheme tailored for TDDFT [2]:

\hat{U}_{\mathrm{SO}}(t+\Delta t, t) = \exp\{ -i\tfrac{\Delta t}{2}\hat{T} \}\, \exp\{ -i\Delta t\,\hat{V}' \}\, \exp\{ -i\tfrac{\Delta t}{2}\hat{T} \}.

V̂' is the Kohn-Sham potential built from the orbitals that result from applying the first kinetic-energy factor:

\varphi_i' = \exp\{ -i\tfrac{\Delta t}{2}\hat{T} \}\,\varphi_i(t), \qquad n'(\vec{r},t) = \sum_i |\varphi_i'(\vec{r},t)|^2, \qquad \hat{V}' = V_{\mathrm{KS}}[n'].

This scheme permits one to maintain an O(Δt²) error without the need of a self-consistent determination of V̂(t+Δt/2).

[2] N. Watanabe and M. Tsukada, Phys. Rev. E 65 (2002).
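A sketch of this scheme on a 1D periodic grid, assuming a hypothetical callable ks_potential(n) that returns the Kohn-Sham potential for a given density; phis holds the occupied orbitals as rows.

```python
import numpy as np

def watanabe_tsukada_step(phis, dx, dt, ks_potential):
    """One split-operator step in which the potential exponential uses the
    KS potential built from the half-kinetic-propagated orbitals (rows of phis)."""
    n_grid = phis.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=dx)
    half_kinetic = np.exp(-0.5j * dt * 0.5 * k**2)      # exp(-i dt/2 k^2/2)
    phis = np.fft.ifft(half_kinetic * np.fft.fft(phis, axis=1), axis=1)
    density = np.sum(np.abs(phis) ** 2, axis=0)         # n'(r) from primed orbitals
    v_prime = ks_potential(density)                     # V' = V_KS[n']
    phis = np.exp(-1j * dt * v_prime)[None, :] * phis   # potential step
    phis = np.fft.ifft(half_kinetic * np.fft.fft(phis, axis=1), axis=1)
    return phis
```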

Splitting techniques: higher orders

Suzuki [3] generalized the split-operator splitting to higher orders. For example, to fourth order:

e^{x\hat{H}} \approx S_4(x) = \prod_{j=1}^{5} S_2(p_j x),

where the p_j are a set of real numbers and S_2(x) is the normal split-operator splitting.
The number of FFTs is multiplied by five. It is thus unclear whether this method yields an overall increase in speed over normal Strang splitting.

[3] M. Suzuki, J. Phys. Soc. Jpn. 61, L3015 (1992); O. Sugino and Y. Miyamoto, Phys. Rev. B 59, 2579 (1999).

Magnus expansions

Once again: Û(t+Δt, t) does not reduce to a simple exponential unless the Hamiltonian is time-independent.
Question: is there any operator Ω̂(t+Δt, t) such that Û(t+Δt, t) = exp{Ω̂(t+Δt, t)}?
Answer: yes. There exists an infinite series, convergent for small enough Δt, such that the sought operator may be written as

\hat{\Omega}(t+\Delta t, t) = \sum_{k=1}^{\infty} \Omega_k(t+\Delta t, t).

Moreover, there exists a procedure to generate the terms of the infinite sum recursively. For example:

\Omega_1(t+\Delta t, t) = \int_{t}^{t+\Delta t} d\tau_1 \left[ -i\hat{H}(\tau_1) \right],

\Omega_2(t+\Delta t, t) = \frac{1}{2}\int_{t}^{t+\Delta t} d\tau_1 \int_{t}^{\tau_1} d\tau_2 \left[ -i\hat{H}(\tau_1), -i\hat{H}(\tau_2) \right].

Magnus expansions

An approximation of order 2n to the exact Û(t+Δt, t) is achieved by:
truncating the Magnus series to n-th order, and
approximating the integrals in time through an n-th order quadrature formula.
This gives us the EM rule for the order-two Magnus expansion:

\hat{\Omega}_{M(2)}(t+\Delta t, t) = -i\,\hat{H}(t+\Delta t/2)\,\Delta t.

The first interesting result is thus the order-four Magnus expansion:

\hat{\Omega}_{M(4)}(t+\Delta t, t) = -i\frac{\Delta t}{2}\left[ \hat{H}(t_1) + \hat{H}(t_2) \right] - \frac{\sqrt{3}\,\Delta t^2}{12}\left[ \hat{H}(t_2), \hat{H}(t_1) \right], \qquad t_{1,2} = t + \left( \frac{1}{2} \mp \frac{\sqrt{3}}{6} \right)\Delta t.
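A minimal sketch of one fourth-order Magnus step for a dense, explicitly time-dependent Hamiltonian supplied by a hypothetical hamiltonian(t) routine; the commutator is built explicitly, which is only sensible for small matrices.

```python
import numpy as np
from scipy.linalg import expm

def magnus4_step(phi, t, dt, hamiltonian):
    """phi(t) -> phi(t + dt) with the fourth-order Magnus expansion,
    using the two Gauss-Legendre points t_{1,2} = t + (1/2 -/+ sqrt(3)/6) dt."""
    s3 = np.sqrt(3.0)
    H1 = hamiltonian(t + (0.5 - s3 / 6.0) * dt)
    H2 = hamiltonian(t + (0.5 + s3 / 6.0) * dt)
    comm = H2 @ H1 - H1 @ H2                       # [H(t2), H(t1)]
    omega = -0.5j * dt * (H1 + H2) - (s3 * dt**2 / 12.0) * comm
    return expm(omega) @ phi
```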

Magnus expansions

We have applied this result to our TDDFT case. The propagator then takes the form:

\hat{U}_{M(4)}(t+\Delta t, t) = \exp\{ -i\Delta t\, \bar{H}_{M(4)}(t,\Delta t) \},

\bar{H}_{M(4)}(t,\Delta t) = \bar{H}(t,\Delta t) + i\left[ \hat{T} + \hat{V}^{\mathrm{nonlocal}}_{\mathrm{ext}},\, \Delta V(t,\Delta t) \right],

\bar{H}(t,\Delta t) = \hat{T} + \frac{1}{2}\left\{ \hat{V}_{\mathrm{KS}}(t_1) + \hat{V}_{\mathrm{KS}}(t_2) \right\}, \qquad \Delta V(t,\Delta t) = \frac{\sqrt{3}}{12}\Delta t\left\{ \hat{V}_{\mathrm{KS}}(t_2) - \hat{V}_{\mathrm{KS}}(t_1) \right\}.

Magnus expansions

[Figure: ⟨µ(t)⟩ (a.u.) vs. time (a.u.); curves: Exact, M(4), EM = M(2), ETRS.] Evolution of the dipole moment of Na_8 subject to an intense laser field, calculated via the methods indicated in the legend.

The solutions

\hat{U}_{\mathrm{CN}}(t+\Delta t, t) = \frac{ 1 - i\frac{\Delta t}{2}\hat{H}(t+\Delta t/2) }{ 1 + i\frac{\Delta t}{2}\hat{H}(t+\Delta t/2) }

\hat{U}_{\mathrm{EM}}(t+\Delta t, t) = \exp\{ -i\Delta t\, \hat{H}(t+\Delta t/2) \}

\hat{U}_{\mathrm{ETRS}}(t+\Delta t, t) = \exp\{ -i\tfrac{\Delta t}{2}\hat{H}(t+\Delta t) \}\, \exp\{ -i\tfrac{\Delta t}{2}\hat{H}(t) \}

\hat{U}_{\mathrm{SO}}(t+\Delta t, t) = \exp\{ -i\tfrac{\Delta t}{2}\hat{T} \}\, \exp\{ -i\Delta t\,\hat{V}' \}\, \exp\{ -i\tfrac{\Delta t}{2}\hat{T} \}

\hat{U}_{\mathrm{ST}}(t+\Delta t, t) = \prod_{j=1}^{5} \exp\{ -ip_j\tfrac{\Delta t}{2}\hat{T} \}\, \exp\{ -ip_j\Delta t\,\hat{V}(t+p_j\Delta t) \}\, \exp\{ -ip_j\tfrac{\Delta t}{2}\hat{T} \}

\hat{U}_{M(2n)}(t+\Delta t, t) = \exp\left\{ \sum_{j=1}^{n} \Omega_j(t+\Delta t, t) \right\}

The exponential

\exp\{A\} = \sum_{j=0}^{\infty} \frac{1}{j!} A^j.

If A is diagonal, A = \mathrm{diag}(a_1, a_2, \dots, a_N), then \exp\{A\} = \mathrm{diag}(e^{a_1}, e^{a_2}, \dots, e^{a_N}).
If A is not diagonal, diagonalize: A = V H V^{T}, and e^A = V e^H V^{T}.
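A minimal sketch of the diagonalization route for a Hermitian matrix, using numpy.linalg.eigh; for the huge sparse Hamiltonians of TDDFT this is of course only feasible after projecting onto a small subspace (as in the Lanczos scheme below).

```python
import numpy as np

def apply_exponential_eigh(H, dt, phi):
    """Apply exp(-i dt H) to phi for a (small, dense) Hermitian H by
    diagonalization H = V diag(eps) V^dagger."""
    eps, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * dt * eps) * (V.conj().T @ phi))
```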

The exponential: small matrices

C. Moler and C. Van Loan, "Nineteen Dubious Ways to Compute the Exponential of a Matrix", SIAM Review 20, 801 (1978).
Taylor series.
Padé approximations.
Scaling and squaring.
Chebyshev rational approximation.
Ordinary differential equation methods.
Polynomial methods.
Matrix decomposition methods.
Splitting methods.
The focus there is placed on the problem of calculating e^A itself, which is only possible for small matrices. We have to be more modest, and look for methods to calculate e^A v for a given vector v.

The exponential: small matrices

C. Moler and C. Van Loan, "Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later", SIAM Review 45, 3 (2003).
"In principle, the exponential of a matrix could be computed in many ways. (...) In practice, consideration of computational stability and efficiency indicates that some of the methods are preferable to others, but that none are completely satisfactory."
Two different problems:
Given A, calculate e^A so that it can be applied to any vector: unfeasible for our TDDFT problem, where A is huge and sparse.
Given A and v, calculate e^A v.

N-th order expansion

The most obvious way to approximate the exponential is to use its definition (standard expansion):

\exp(-i\Delta t H) = \sum_{j=0}^{\infty} \frac{(-i\Delta t)^j}{j!} H^j \approx \mathrm{taylor}_n\{-i\Delta t H\}\,v = \sum_{j=0}^{n} \frac{(-i\Delta t)^j}{j!} H^j v.

The error, for a given n, is O(Δt^(n+1)). This operator is not unitary.
It may be proved that n = 4 is especially advantageous, since it is conditionally contractive, and thus stable for large values of Δt [6]. n = 2, for example, is unconditionally unstable; n = 6 is also conditionally stable, but only for smaller values of Δt.

[6] Jeff Giansiracusa, unpublished.
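A minimal sketch of the n-th order standard (Taylor) expansion applied directly to a vector, accumulating H^j v by repeated matrix-vector products; H is assumed to be a dense array or any object supporting the @ operator.

```python
import numpy as np

def taylor_exp_apply(H, v, dt, order=4):
    """Approximate exp(-i dt H) v by the truncated Taylor series of given order."""
    result = v.astype(complex)
    term = v.astype(complex)
    for j in range(1, order + 1):
        term = (-1j * dt / j) * (H @ term)   # builds (-i dt)^j H^j v / j!
        result = result + term
    return result
```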

Chebyshev expansion

Instead of using the standard polynomial basis, we may use other polynomials to expand the exponential. The Chebyshev basis is well known as a way to economize power series. Utilizing it is advantageous because:
Since 1984, we know a closed form for the coefficients [7]:

e^{-iH\Delta t} = \sum_{n=0}^{\infty} (2-\delta_{n0})\, J_n(\Delta t)\, (-i)^n\, T_n(H).   (2)

[7] H. Tal-Ezer and R. Kosloff, J. Chem. Phys. 81, 3967 (1984).

Evaluation of the polynomials may be done at low cost (essentially the same as with the standard expansion) thanks to Clenshaw's algorithm [8].
Chebyshev expansion error vs. standard expansion error:

[8] C. W. Clenshaw, MTAC 9, 118 (1955).
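A minimal sketch of the Chebyshev evaluation of exp(-iHΔt)v, assuming the spectrum of H is known to lie in [emin, emax]; for simplicity it uses the plain three-term Chebyshev recurrence rather than Clenshaw's algorithm, and scipy.special.jv for the Bessel coefficients.

```python
import numpy as np
from scipy.special import jv   # Bessel functions J_n of the first kind

def chebyshev_exp_apply(H, v, dt, emin, emax, order=20):
    """Approximate exp(-i dt H) v with the Chebyshev expansion,
    after shifting/scaling the spectrum of H from [emin, emax] to [-1, 1]."""
    a = 0.5 * (emax - emin)                 # scaling
    b = 0.5 * (emax + emin)                 # shift
    Hs = (H - b * np.eye(H.shape[0])) / a   # scaled Hamiltonian, spectrum in [-1, 1]
    phase = np.exp(-1j * b * dt)            # phase factor from the shift
    # Chebyshev vectors T_n(Hs) v via the three-term recurrence
    t_prev = v.astype(complex)              # T_0(Hs) v = v
    t_curr = Hs @ t_prev                    # T_1(Hs) v
    result = jv(0, a * dt) * t_prev + 2.0 * (-1j) * jv(1, a * dt) * t_curr
    for n in range(2, order + 1):
        t_next = 2.0 * (Hs @ t_curr) - t_prev
        result = result + 2.0 * (-1j) ** n * jv(n, a * dt) * t_next
        t_prev, t_curr = t_curr, t_next
    return phase * result
```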

[Figure: Chebyshev vs. standard expansion. Error in the evaluation of the exponential of the Hamiltonian as a function of δt (a.u.), for expansion orders giving O(δt²) through O(δt⁷); the Chebyshev expansion uses e^{-iHδt} = Σ_n (2-δ_{n0}) J_n(δt) (-i)^n T_n(H), with H shifted to [-1,1].] For an excited Na atom, error in the evaluation of the exponential of the Hamiltonian.

Split-operator approaches

The Kohn-Sham Hamiltonian H has the form H = T + V, where T is diagonal in reciprocal space and V is diagonal in real space. This suggests the use of the Strang splitting (split-operator, split-step, ...) [9]:

\exp(-i\Delta t H) \approx \exp(-i\tfrac{\Delta t}{2}T)\, \exp(-i\Delta t V)\, \exp(-i\tfrac{\Delta t}{2}T).   (3)

The split-operator may be kinetic- or potential-referenced. The error is third order in Δt. The method is unitary and unconditionally stable [10]. A wealth of other splitting schemes are possible [11].

[9] W. C. Strang, J. Numer. Anal. 6, 506 (1968).
[10] R. Kosloff, J. Phys. Chem. 92, 2087 (1988).
[11] N. N. Yanenko, The Method of Fractional Steps, Springer, 1971.

Suzuki-Trotter

Suzuki [12] generalized the Strang splitting to higher orders. For example, to fourth order:

e^{x\hat{H}} \approx S_4(x) = \prod_{j=1}^{5} S_2(p_j x),   (4)

where the p_j are a set of real numbers and S_2(x) is the normal Strang splitting.
The number of FFTs is multiplied by five. It is thus unclear whether this method yields an overall increase in speed over normal Strang splitting.

[12] M. Suzuki, J. Phys. Soc. Jpn. 61, L3015 (1992); O. Sugino and Y. Miyamoto, Phys. Rev. B 59, 2579 (1999).
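A minimal sketch of the fourth-order composition for a time-independent splitting H = T + V of dense matrices; the coefficients follow the standard Suzuki choice p_1 = p_2 = p_4 = p_5 = 1/(4 - 4^{1/3}), p_3 = 1 - 4 p_1, which is an assumption about the p_j left unspecified on the slide.

```python
import numpy as np
from scipy.linalg import expm

def strang_s2(T, V, x):
    """Second-order Strang splitting S2(x) ~ exp(x (T + V)) for dense matrices."""
    half_T = expm(0.5 * x * T)
    return half_T @ expm(x * V) @ half_T

def suzuki_s4(T, V, x):
    """Fourth-order Suzuki composition S4(x) = product of five S2(p_j x)."""
    p = 1.0 / (4.0 - 4.0 ** (1.0 / 3.0))
    coeffs = [p, p, 1.0 - 4.0 * p, p, p]
    U = np.eye(T.shape[0], dtype=complex)
    for pj in coeffs:
        U = strang_s2(T, V, pj * x) @ U
    return U

# Example: apply exp(-i dt H) to a vector, with H = T + V and x = -i dt:
#   phi_new = suzuki_s4(T, V, -1j * dt) @ phi
```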

The method may be generalized to time-dependent Hamiltonians, so it is not only a way to approximate the exponential, but also a full algorithm to approximate the propagator (the same holds for the basic Strang splitting).

[Figure: error vs. Δt (a.u.).] Comparison of the second-order split-operator (SO, solid) and the fourth-order Suzuki-Trotter (ST, dashed) schemes.

Krylov subspace projection

A given matrix function f(tA) applied to a vector may be Taylor expanded; for the exponential,

f(tA)v = v + \sum_{i=1}^{m-1} \frac{(tA)^i}{i!} v + O\!\left((tA)^m\right).

This provides us with a polynomial approximation of degree m-1:

f(tA)v \approx \sum_{i=0}^{m-1} c_i\, (tA)^i v.

It is not the only possible polynomial approximation. All the possibilities are elements of the Krylov subspace:

\mathcal{K}_m(tA, v) = \mathrm{span}\{ v, (tA)v, \dots, (tA)^{m-1}v \}.

What is the element of K_m(tA, v) that optimally approximates f(tA)v?

Krylov subspace projection

To manipulate the elements of K_m(tA, v), it is better to have an orthonormal basis. This is the task of the Arnoldi (Lanczos) procedure:

  beta = ||v||
  v_1 = v / beta
  for j = 1 to m do
      p = A v_j
      for i = 1 to j do
          h_ij = v_i^T p
          p = p - h_ij v_i
      end
      h_{j+1,j} = ||p||
      v_{j+1} = p / h_{j+1,j}
  end
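A runnable version of the above procedure in Python (a sketch; the Gram-Schmidt loop is not re-orthogonalized, which a production code might add):

```python
import numpy as np

def arnoldi(A, v, m):
    """Arnoldi procedure: build an orthonormal basis V of the Krylov subspace
    K_m(A, v) and the (m+1) x m upper Hessenberg matrix H with A V_m = V_{m+1} H."""
    n = v.size
    V = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        p = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = np.vdot(V[:, i], p)     # projection onto v_i (conjugated)
            p = p - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(p)
        V[:, j + 1] = p / H[j + 1, j]
    return V, H, beta
```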

Krylov subspace projection

The result consists of two matrices:

V_{m+1} = [v_1, v_2, \dots, v_{m+1}] \in \mathbb{R}^{n\times(m+1)}, \qquad \bar{H} = (h_{ij}) \in \mathbb{R}^{(m+1)\times m},

and H_m \in \mathbb{R}^{m\times m}, the square matrix formed by the first m rows of \bar{H}. These matrices satisfy:

V_m^T A V_m = H_m, \qquad A V_m = V_{m+1}\bar{H}, \qquad V_m^T V_m = 1.

Krylov subspace projection

Using this recursion, each V_j = [v_1, ..., v_j] is an orthonormal basis of K_j(A, v).
It may be proved that the optimal approximation to e^{tA}v, in the least-squares sense, within K_m(A, v), is:

w_{\mathrm{opt}} = \beta V_m \left( V_m^T e^{tA} V_m \right) \hat{e}_1.

But we still have e^{tA} in the way. The idea now is to make the following approximation:

V_m^T e^{tA} V_m \approx \exp\{ t V_m^T A V_m \} = e^{tH_m}.

The final approximation is then:

e^{tA} v \approx \mathrm{lanczos}_m(tA, v) = \beta V_m e^{tH_m} \hat{e}_1.
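A minimal, self-contained sketch that applies exp(-i dt H) to a vector via this projection, using the Hermitian (Lanczos) three-term recurrence and a small dense exponential of H_m; the adaptive convergence test mentioned on a later slide is omitted and the Krylov dimension m is fixed.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_exp_apply(H, v, dt, m=16):
    """Approximate exp(-i dt H) v by beta * V_m exp(-i dt H_m) e_1,
    with V_m, H_m built from the Krylov subspace K_m(H, v) (Hermitian H)."""
    n = v.size
    V = np.zeros((n, m), dtype=complex)
    Hm = np.zeros((m, m), dtype=complex)
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        p = H @ V[:, j]
        for i in range(max(0, j - 1), j + 1):     # Hermitian: only two projections
            Hm[i, j] = np.vdot(V[:, i], p)
            p = p - Hm[i, j] * V[:, i]
        if j + 1 < m:
            nrm = np.linalg.norm(p)
            if nrm < 1e-12:                       # happy breakdown: invariant subspace
                Hm = Hm[: j + 1, : j + 1]
                V = V[:, : j + 1]
                break
            Hm[j + 1, j] = nrm
            V[:, j + 1] = p / nrm
    small = expm(-1j * dt * Hm)                   # dense (at most m x m) exponential
    return beta * (V @ small[:, 0])               # beta * V_m exp(-i dt H_m) e_1
```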

For a given Krylov dimension m, the error of the method is O(Δt^(m+1)).
For any m, the method is unitary.
The computational cost grows linearly with m.
The dimension m is increased recursively until some convergence criterion is met (e.g. on β [exp(-iΔt H_m)]_{m,m}).
The error decays superlinearly with m, so the method is of arbitrary accuracy.

[Figure: normalized residue vs. Δt (a.u.), 8th-order Chebyshev vs. Lanczos.] For an excited Na atom, error in the evaluation of the exponential of the Hamiltonian for both the Lanczos method (circles), for a fixed tolerance, and for the Chebyshev expansion of 8th order (crosses). The numbers close to the circles show the Krylov basis dimension needed to achieve the desired accuracy.

[Figure: p(δt)/δt (a.u.⁻¹) vs. δt (a.u.) for Na [1s], C [2p], Au [5d].] Number of Hamiltonian-wavefunction operations per unit time, as a function of δt, for the Taylor (solid) and Chebyshev (dashed) expansions, and for the Lanczos projection method (dotted).

Conclusions

There is no algorithm that is always optimal for the propagation of the TDKS equations.
For long time propagations, assuring time-reversal symmetry is very important.
Some methods require the calculation of the action of the exponential of Hamiltonian matrices; there are efficient methods to perform this task.
The Lanczos-Krylov subspace projection seems to be the best algorithm to calculate the action of exponentials.
For problems involving very high frequencies, Magnus expansions are advantageous. Otherwise, a combination of the EM rule with Lanczos subspace projection is sufficient.


More information

Module 6: Implicit Runge-Kutta Methods Lecture 17: Derivation of Implicit Runge-Kutta Methods(Contd.) The Lecture Contains:

Module 6: Implicit Runge-Kutta Methods Lecture 17: Derivation of Implicit Runge-Kutta Methods(Contd.) The Lecture Contains: The Lecture Contains: We continue with the details about the derivation of the two stage implicit Runge- Kutta methods. A brief description of semi-explicit Runge-Kutta methods is also given. Finally,

More information

Lyapunov-based control of quantum systems

Lyapunov-based control of quantum systems Lyapunov-based control of quantum systems Symeon Grivopoulos Bassam Bamieh Department of Mechanical and Environmental Engineering University of California, Santa Barbara, CA 936-57 symeon,bamieh@engineering.ucsb.edu

More information

Stability of Krylov Subspace Spectral Methods

Stability of Krylov Subspace Spectral Methods Stability of Krylov Subspace Spectral Methods James V. Lambers Department of Energy Resources Engineering Stanford University includes joint work with Patrick Guidotti and Knut Sølna, UC Irvine Margot

More information

Time stepping methods

Time stepping methods Time stepping methods ATHENS course: Introduction into Finite Elements Delft Institute of Applied Mathematics, TU Delft Matthias Möller (m.moller@tudelft.nl) 19 November 2014 M. Möller (DIAM@TUDelft) Time

More information

Solving PDEs with PGI CUDA Fortran Part 4: Initial value problems for ordinary differential equations

Solving PDEs with PGI CUDA Fortran Part 4: Initial value problems for ordinary differential equations Solving PDEs with PGI CUDA Fortran Part 4: Initial value problems for ordinary differential equations Outline ODEs and initial conditions. Explicit and implicit Euler methods. Runge-Kutta methods. Multistep

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by 1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How

More information

FDM for parabolic equations

FDM for parabolic equations FDM for parabolic equations Consider the heat equation where Well-posed problem Existence & Uniqueness Mass & Energy decreasing FDM for parabolic equations CNFD Crank-Nicolson + 2 nd order finite difference

More information

Finite Difference and Finite Element Methods

Finite Difference and Finite Element Methods Finite Difference and Finite Element Methods Georgy Gimel farb COMPSCI 369 Computational Science 1 / 39 1 Finite Differences Difference Equations 3 Finite Difference Methods: Euler FDMs 4 Finite Element

More information

Exponential integrators

Exponential integrators Exponential integrators Marlis Hochbruck Heinrich-Heine University Düsseldorf, Germany Alexander Ostermann University Innsbruck, Austria Helsinki, May 25 p.1 Outline Time dependent Schrödinger equations

More information