Numerical Simulation of Spin Dynamics
Marie Kubinova
MATH 789R: Advanced Numerical Linear Algebra Methods with Applications
November 18, 2014
- Introduction
- Discretization in time
- Computing the subpropagators
- Avoiding computation of subpropagators
Nuclear magnetic resonance spectroscopy

Phenomenon of nuclear magnetic resonance: nuclei in a magnetic field absorb and re-emit electromagnetic radiation. This energy is at a specific resonance frequency, which depends on the strength of the magnetic field and on the magnetic properties of the isotope of the atoms.

Quantum mechanics (QM) is known for being weird, counterintuitive, and difficult to understand.
QM recap

The evolution of a closed system is given by the time-dependent Schrödinger equation
$$ i \frac{d}{dt} |\psi\rangle = H |\psi\rangle, $$
where $H$ is the system Hamiltonian. The evolution of a closed system is unitary:
$$ |\psi(t)\rangle = U(t,0)\, |\psi(0)\rangle. $$
Equivalent Schrödinger equation for the time-evolution operator (propagator):
$$ i \frac{dU}{dt} = H U. $$
Mixed states

Pure state: can be described by a state vector $|\psi\rangle$, or by $|\psi\rangle\langle\psi|$.
Mixed state: statistical mixture of pure states,
$$ \rho = \sum_{j=1}^{N} p_j\, |\psi_j\rangle\langle\psi_j|, $$
where the $p_j$ are real, $0 \le p_j \le 1$, $\sum_j p_j = 1$.

Liouville-von Neumann equation (equivalent of the Schrödinger equation for the density matrix):
$$ i \frac{d\rho}{dt} = [H, \rho]. $$
The Liouville-von Neumann equation is solved by means of propagators as
$$ \rho(t) = U(t,0)\, \rho(0)\, U(t,0)^{\dagger}. $$
Time-dependent Hamiltonian

All previous equations hold also for $H = H(t)$; in particular,
$$ i \frac{dU(t,0)}{dt} = H(t)\, U(t,0). $$
But
$$ U(t,0) \ne e^{-itH} \ne e^{-i \int_0^t H(t')\,dt'}. $$
However,
$$ U(t,0) = U(t,t')\, U(t',0), \qquad t > t' > 0. $$
Therefore, with $\Delta t \equiv t/N$,
$$ U(t,0) = U(t, (N-1)\Delta t) \cdots U(2\Delta t, \Delta t)\, U(\Delta t, 0). $$
For $\Delta t$ small,
$$ U((k+1)\Delta t, k\Delta t) \approx e^{-i \Delta t\, H(k\Delta t)}. $$
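The piecewise-constant propagation above can be sketched in a few lines. This is a minimal illustration with a hypothetical one-spin Hamiltonian $H(t) = \cos(t)\,S_x + S_z$ (the matrices `Sx`, `Sz` are illustration only, not the $H$ from the examples later in the talk):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical toy model: H(t) = cos(t) * Sx + Sz for a single spin.
Sx = np.array([[0.0, 1.0], [1.0, 0.0]])
Sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def H(t):
    return np.cos(t) * Sx + Sz

def propagator(t_final, N):
    """U(t,0) ~ e^{-i dt H((N-1)dt)} ... e^{-i dt H(dt)} e^{-i dt H(0)}:
    H is frozen on each slice [k*dt, (k+1)*dt]."""
    dt = t_final / N
    U = np.eye(2, dtype=complex)
    for k in range(N):
        U = expm(-1j * dt * H(k * dt)) @ U  # later slices multiply from the left
    return U

U = propagator(1.0, 200)  # stays unitary, since each factor is unitary
```

Each subpropagator is unitary, so the product is unitary regardless of $\Delta t$; only the accuracy of the approximation depends on the step size.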
Observable (Schrödinger picture)

For a pure state, the expected value of an observable $O$ for a system in state $|\psi\rangle$ is given by
$$ \langle O \rangle = \langle \psi | O | \psi \rangle. $$
For the density matrix $\rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|$,
$$ \langle O \rangle = \sum_{j=1}^{N} p_j \langle \psi_j | O | \psi_j \rangle = \sum_{j=1}^{N} p_j \operatorname{Tr}\big( |\psi_j\rangle\langle\psi_j|\, O \big) = \operatorname{Tr}(\rho O). $$

Workflow: sampling $s(t) \equiv \langle O(t) \rangle$ at times $t_i$, $i = 1, \dots, M$ $\;\to\;$ Fourier coefficients $\operatorname{DFT}\big([s(t_i)]_{i=1,\dots,M}\big)$ $\;\to\;$ identify dominating frequencies.
Our setting

Given: $\rho(0)$, $H(t)$, $O$, $\{t_i\}_{i=1,\dots,M}$.
To be computed:
$$ s(t_i) = \operatorname{Tr}\big( U(t_i,0)\, \rho(0)\, U(t_i,0)^{\dagger}\, O \big), $$
where $U((k+1)\Delta t, k\Delta t) \approx e^{-i \Delta t\, H(k\Delta t)}$.
(picture: Wikipedia)
Our setting (cont.)

Required accuracy: 0.1%–1%.
Important properties:
- $N$ spin-$1/2$ nuclei $\Rightarrow$ matrices of size $2^N \times 2^N$;
- $\rho(0)$ is often sparse, but not necessarily;
- $H(t)$ is real symmetric;
- $H(t)$ is sparse, with the same sparsity pattern for all $t$;
- $H(t)$ is periodic;
- only a small number of sampling times (ca. 4–10) within one period of the Hamiltonian;
- $O$ is sparse.
Example I
Figure: sparsity pattern and values of the Hamiltonian $H$ ($1024 \times 1024$, nz = 24064), sparsity pattern of $\rho$ (nz = 2048), and the Fourier transform of the signal $s$.
Example II
Figure: sparsity pattern and values of the Hamiltonian $H$ without RF ($256 \times 256$, nz = 3840) and with RF (nz = 5888).
Main tasks
- How to choose $\Delta t$?
- How to compute the subpropagators $e^{-i \Delta t\, H(k\Delta t)}$?
- How to avoid computing the subpropagators?
Choosing the time step $\Delta t$

Aim: choose $\Delta t$ such that $H$ is almost constant on $[k\Delta t, (k+1)\Delta t]$:
$$ e^{-i \Delta t\, H(k\Delta t)} \approx U((k+1)\Delta t, k\Delta t) \approx e^{-i \Delta t\, H((k+1)\Delta t)}. $$
Standard approach: refine until stagnation.

Sensitivity of the matrix exponential, $A$ normal:
$$ \big\| e^{A+E} - e^{A} \big\| \le \big\| e^{A} \big\|\, \| E \|\, e^{\| E \|}. $$
Q: Is this bound tight? A: Quite.
Q: Is this bound useful? A: Well, only locally...
Example
Figure: relative change of the Hamiltonians, relative change of the exponentials, and the upper bound, plotted over $k = 1, \dots, 100$, where:
- rel. change Hams: $\| H(k\Delta t) - H((k+1)\Delta t) \| \,/\, \| H(k\Delta t) \|$
- rel. change exps: $\| e^{-i\Delta t H(k\Delta t)} - e^{-i\Delta t H((k+1)\Delta t)} \| \,/\, \| e^{-i\Delta t H(k\Delta t)} \|$
- upper bound: $\Delta t\, \| H(k\Delta t) - H((k+1)\Delta t) \|\, e^{\Delta t \| H(k\Delta t) - H((k+1)\Delta t) \|}$
Computing the subpropagators I

Taylor expansion:
$$ e^{A} \approx \sum_{k=0}^{n} \frac{1}{k!} A^{k} $$
+ requires only sparse-dense multiplications
− reasonable convergence only for $\| \Delta t H \| < 1$ (ca. 7 terms)

Scaling and squaring:
$$ e^{A} = \big( e^{A/n} \big)^{n}, \qquad \text{optimal: } n = 2^{k} $$
+ scaling improves convergence of many methods
− requires dense-dense multiplications
− alternative: finer time discretization
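The two ideas combine naturally: scale $A$ until the cheap Taylor series converges, then square back up. A minimal NumPy sketch (not the talk's implementation; `scipy.linalg.expm` serves only as a reference for comparison):

```python
import numpy as np
from scipy.linalg import expm  # reference implementation, for comparison only

def expm_taylor(A, terms=7):
    """Truncated Taylor series: accurate only when ||A|| < 1 (ca. 7 terms)."""
    X = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        term = term @ A / k       # builds A^k / k! incrementally
        X = X + term
    return X

def expm_scaling_squaring(A, s=8, terms=7):
    """Scaling and squaring: e^A = (e^{A/2^s})^{2^s}. The scaled matrix has
    tiny norm, so the short Taylor series converges; the price is s
    dense-dense squarings."""
    X = expm_taylor(A / 2.0**s, terms)
    for _ in range(s):
        X = X @ X                 # dense-dense multiplications
    return X
```

For a matrix with $\|A\| \approx 4$, the plain 7-term Taylor sum is visibly wrong while the scaled-and-squared version agrees with `expm` to high accuracy.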
Computing the subpropagators II

Padé approximant: $R_{m,n}$ such that
$$ R_{m,n}(x) = \frac{\sum_{k=0}^{m} a_k x^k}{1 + \sum_{l=1}^{n} b_l x^l}, \qquad R(0) = f(0),\; R'(0) = f'(0),\; \dots,\; R^{(m+n)}(0) = f^{(m+n)}(0), $$
usually with $m = n$.
+ explicit formulas for $a_k$ and $b_l$
− denominator may be singular, stability issues
− useful only for $\| \Delta t H \|$ small
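As a concrete instance, the diagonal $[1/1]$ Padé approximant of $e^x$ is the Cayley transform $(1 - x/2)^{-1}(1 + x/2)$. A sketch (my illustration, not from the slides); one attractive side effect is that for $A = -i\Delta t H$ with $H$ Hermitian the result is exactly unitary:

```python
import numpy as np
from scipy.linalg import expm

def expm_pade11(A):
    """Diagonal [1/1] Pade approximant (Cayley transform):
    R(A) = (I - A/2)^{-1} (I + A/2), with O(||A||^3) error.
    The solve fails if A/2 has an eigenvalue 1 (singular denominator)."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - A / 2, I + A / 2)
```

Higher-order diagonal Padé approximants (e.g. $[6/6]$, $[13/13]$) follow the same pattern with longer numerator and denominator polynomials; `scipy.linalg.expm` is built on that idea combined with scaling and squaring.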
Computing the subpropagators III

Chebyshev expansion:
$$ e^{A} = J_0(i)\, I + 2 \sum_{k=1}^{\infty} i^k J_k(-i)\, T_k(A), \qquad \| A \| \le 1, $$
where the $J_k$ are Bessel functions and the $T_k$ are Chebyshev polynomials of the first kind.
+ $T_k(A)$ computed using the three-term recurrence with only sparse-dense multiplications
− $\| A \| \le 1$ must hold

Diagonalization: $A = V \Lambda V^{*}$,
$$ e^{A} = V \operatorname{diag}\big( e^{\lambda_1}, \dots, e^{\lambda_n} \big) V^{*}. $$
− depends on how efficiently we compute the spectral decomposition
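The Chebyshev expansion is straightforward to sketch; note that the coefficients $i^k J_k(-i)$ are real and equal to the modified Bessel values $I_k(1)$, available as `scipy.special.iv` (a sketch under the assumption that $A$ is symmetric with $\|A\| \le 1$, as for a scaled Hamiltonian):

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import iv   # modified Bessel I_k; here i^k J_k(-i) = I_k(1)

def expm_chebyshev(A, K=15):
    """Chebyshev expansion of e^A for symmetric A with ||A|| <= 1.
    T_k(A) comes from the three-term recurrence T_{k+1} = 2 A T_k - T_{k-1},
    so only (sparse) products with A are needed."""
    n = A.shape[0]
    T_prev, T_curr = np.eye(n), A.copy()
    X = iv(0, 1.0) * np.eye(n) + 2.0 * iv(1, 1.0) * T_curr
    for k in range(2, K + 1):
        T_prev, T_curr = T_curr, 2.0 * A @ T_curr - T_prev
        X = X + 2.0 * iv(k, 1.0) * T_curr
    return X
```

The coefficients decay roughly like $1/(2^k k!)$, so about 15 terms already reach machine precision — but only inside the unit-norm region, which is why the scaling mentioned in the summary below is essential.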
Computing the subpropagators IV

Diagonalization using updates: use the spectral decomposition from the previous time step, $A^{(k)} = V^{(k)} \Lambda^{(k)} \big(V^{(k)}\big)^{*}$:

for $i = 1, \dots, n$:
  $\mu_i^{(k+1)} \leftarrow \big(v_i^{(k)}\big)^{*} A^{(k+1)} v_i^{(k)}$
  compute the dominant eigenvector $\tilde v_i^{(k+1)}$ of $\big(A^{(k+1)} - \mu_i^{(k+1)} I\big)^{-1}$
  $\lambda_i^{(k+1)} \leftarrow \big(\tilde v_i^{(k+1)}\big)^{*} A^{(k+1)} \tilde v_i^{(k+1)}$, $\quad v_i^{(k+1)} \leftarrow \tilde v_i^{(k+1)}$
end

+ power iteration generally converges very quickly
− poor convergence for $\lambda_{i+1} / \lambda_i \approx 1$
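A dense NumPy sketch of this update (my illustration, assuming a small symmetric matrix; in practice the shifted solve would use a sparse factorization, and clustered eigenvalues can make different seeds collapse onto the same eigenvector):

```python
import numpy as np

def update_eigendecomposition(A_new, V_old, steps=1):
    """Refresh A = V Lambda V^T after a small change in A: each column of
    the previous eigenvector matrix seeds one (or more) steps of shifted
    inverse iteration on A_new. Reliable only for well-separated eigenvalues."""
    n = A_new.shape[0]
    V = np.empty_like(V_old)
    lam = np.empty(n)
    for i in range(n):
        v = V_old[:, i]
        for _ in range(steps):
            mu = v @ A_new @ v                              # Rayleigh-quotient shift
            v = np.linalg.solve(A_new - mu * np.eye(n), v)  # inverse-iteration step
            v = v / np.linalg.norm(v)
        lam[i] = v @ A_new @ v
        V[:, i] = v
    return lam, V
```

For a small symmetric perturbation and unit-size spectral gaps, a single shifted step already recovers the new eigenvalues to far better than the 0.1%–1% target accuracy.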
Eigenvalues of a Hamiltonian
Figure: eigenvalues $\lambda_i$ (spread over roughly $\pm 2 \cdot 10^6$) and the ratios $\lambda_{i+1} / \lambda_i$ (mostly close to 1), for $i = 1, \dots, 1024$.
Computing the subpropagators V

Sparse splitting methods in general. Suzuki-Trotter formula:
$$ e^{t(A_1 + A_2)} = e^{t A_1} e^{t A_2} + O(t^2). $$
Sparse splitting:
$$ A = A_1 + \cdots + A_k, \quad \text{s.t. } e^{A_i} \text{ sparse}, \qquad e^{A} \approx e^{A_1} e^{A_2} \cdots e^{A_k}. $$
+ avoids dense-dense multiplication of the subpropagators
− error may be large for highly non-commutative matrices
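A small numerical check of the Trotter error order (my illustration with random symmetric matrices; the symmetrized Strang variant, an aside not named on the slide, gains one order by splitting symmetrically):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

def random_symmetric(n):
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

A1, A2 = random_symmetric(5), random_symmetric(5)
t = 0.01

exact = expm(t * (A1 + A2))
trotter = expm(t * A1) @ expm(t * A2)                        # error O(t^2)
strang = expm(t / 2 * A1) @ expm(t * A2) @ expm(t / 2 * A1)  # error O(t^3)
```

The leading Trotter error term is $\tfrac{t^2}{2}[A_1, A_2]$, which is why the approach degrades for strongly non-commuting splittings.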
Computing the subpropagators VI

Checkerboard method: $A$ symmetric with zero diagonal, split into $k = \operatorname{nnz}(A)/2$ terms, each $A_i$ holding a single symmetric pair of entries $a_i$:
$$ A_i = \begin{pmatrix} \ddots & & & \\ & 0 & a_i & \\ & a_i & 0 & \\ & & & \ddots \end{pmatrix}, \qquad e^{A_i} = \begin{pmatrix} I & & & \\ & \cosh(a_i) & \sinh(a_i) & \\ & \sinh(a_i) & \cosh(a_i) & \\ & & & I \end{pmatrix}. $$
For $\alpha = \| A \| < 1$ and $m = \max_i \operatorname{nnz}(A(i,:))$,
$$ \big\| e^{A} - e^{A_1} e^{A_2} \cdots e^{A_k} \big\| \le \tfrac{1}{2}\, k\, m\, \alpha^2\, e^{m\alpha}. $$
+ cheap multiplication by $e^{A_i}$
− error scales with $\operatorname{nnz}(A)$
+ better splitting schemes via edge coloring: error bound $(m\alpha)^2\, e^{m\alpha}$
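The single-pair factor is exact, not approximate: restricted to the two affected coordinates, $A_i^2 = a_i^2 I$, so the series sums to the $\cosh/\sinh$ block. A minimal sketch (dense here for clarity; in practice one would apply the $2 \times 2$ block directly to the two affected vector entries):

```python
import numpy as np
from scipy.linalg import expm

def expm_single_pair(n, p, q, a):
    """Exact exponential of the n x n matrix holding the single symmetric
    pair a at (p,q) and (q,p): identity except for a 2x2 cosh/sinh block,
    so applying it to a vector touches only entries p and q."""
    E = np.eye(n)
    E[p, p] = E[q, q] = np.cosh(a)
    E[p, q] = E[q, p] = np.sinh(a)
    return E
```

The full checkerboard propagator is then the ordered product of such factors over all nonzero pairs of $A$.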
Computing the subpropagators: Summary
- Expansion methods often need scaling and squaring (or a smaller time step) to achieve reasonable convergence.
- For the Chebyshev expansion, proper scaling is indispensable.
- Updating the diagonalization is a difficult task if the eigenvalues are not well separated.
- Subpropagators are dense and must be multiplied to form the final propagator (dense-dense multiplication).
- Splitting methods may introduce a large error.
Evolution of pure states

Assume
$$ \rho = \sum_{j=1}^{k} p_j\, |\psi_j\rangle\langle\psi_j| $$
with $k$ small. Compute
$$ |\psi_j(t)\rangle = U(t,0)\, |\psi_j(0)\rangle \approx e^{-i\Delta t H((N-1)\Delta t)} \cdots e^{-i\Delta t H(\Delta t)}\, e^{-i\Delta t H(0)}\, |\psi_j(0)\rangle, $$
i.e., the action of the matrix exponential on a vector, and
$$ \rho(t) = \sum_{j=1}^{k} p_j\, |\psi_j(t)\rangle\langle\psi_j(t)|. $$
+ only matrix-vector multiplication
+ Krylov subspace methods
− cannot employ the periodicity of $H$
− $k$ is rarely small
Baker-Campbell-Hausdorff formula

We know that $e^{A+B} \ne e^{A} e^{B}$. However, there always exists $C$ such that $e^{C} = e^{A} e^{B}$:
- $[A, B] = 0 \;\Rightarrow\; C = A + B$
- $[A, B]$ central $\;\Rightarrow\; C = A + B + \tfrac{1}{2}[A, B]$
- general case $\;\Rightarrow\;$ go to Wikipedia
Baker-Campbell-Hausdorff formula (cont.)
+ enables aggregation of subpropagators
+ allows employing the periodicity of $H$
+ computation of $C$ requires only sparse-sparse multiplications
− $C$ is generally not sparse
− often slow convergence

nnz of the iterated commutators:
- $A$: 24064
- $B$: 24064
- $[A, B]$: 44672
- $[A, [A, B]]$: 130048
- $[B, [B, A]]$: 129248
- $[B, [A, [A, B]]]$: 257158
Adjoint endomorphism (matrix Lie algebra)

Compute directly the unitarily transformed density matrix using the following (Taylor-expansion-type) formula:
$$ e^{A} B e^{-A} = B + [A, B] + \frac{1}{2!} [A, [A, B]] + \frac{1}{3!} [A, [A, [A, B]]] + \cdots $$
+ fully avoids dense-dense multiplications
− no periodicity
− often slow convergence
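A truncation of this series is easy to sketch: each new term is one commutator with $A$, so only products with the (sparse) $A$ appear. A minimal dense illustration (my example; convergence requires $\|A\|$ small, matching the "often slow convergence" caveat):

```python
import math
import numpy as np
from scipy.linalg import expm

def adjoint_series(A, B, terms=14):
    """Truncated series e^A B e^{-A} = sum_k ad_A^k(B) / k!,
    built from nested commutators: products with A only,
    never a product of two dense exponentials."""
    X = B.copy()
    C = B.copy()
    for k in range(1, terms + 1):
        C = A @ C - C @ A            # C = [A, C], the next nested commutator
        X = X + C / math.factorial(k)
    return X
```

Each term can grow in fill-in (cf. the nnz table above), which is the sparse-arithmetic price of avoiding dense propagators.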
Avoiding the computation of subpropagators: Summary
- Due to the periodicity of $H$, propagators from the initial time to the sampling times can be reused; therefore, avoiding the computation of the subpropagators entirely is counterproductive.
- To use (band) Krylov subspace methods, the initial state has to be a statistical mixture of a small number of pure states.
- Both the commutators of two consecutive Hamiltonians and the commutators of the Hamiltonian with the density matrix decay relatively slowly.
References
Thank you for your attention.
Edge-coloring (pictures: Wikipedia)