Numerical Simulation of Spin Dynamics


Numerical Simulation of Spin Dynamics
Marie Kubinova
MATH 789R: Advanced Numerical Linear Algebra Methods with Applications
November 18, 2014

Outline

- Introduction
- Discretization in time
- Computing the subpropagators
- Avoiding computation of subpropagators

Nuclear magnetic resonance spectroscopy

Phenomenon of nuclear magnetic resonance: nuclei in a magnetic field absorb and re-emit electromagnetic radiation. This energy is at a specific resonance frequency, which depends on the strength of the magnetic field and on the magnetic properties of the isotope of the atoms.

Quantum mechanics: QM is known for being weird, counterintuitive, and difficult to understand.

QM recap

The evolution of a closed system is given by the time-dependent Schrödinger equation
$$i \frac{d}{dt} |\psi\rangle = H |\psi\rangle,$$
where $H$ is the system Hamiltonian. The evolution of a closed system is unitary:
$$|\psi(t)\rangle = U(t, 0) |\psi(0)\rangle.$$
The time evolution operator (propagator) satisfies an equivalent Schrödinger equation
$$i \frac{dU}{dt} = H U.$$
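As a minimal numerical sketch of these facts (the Hamiltonian here is an arbitrary random Hermitian matrix, assumed purely for illustration), one can check that the propagator $e^{-itH}$ is unitary and preserves state norms:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative "Hamiltonian": a random real symmetric (hence Hermitian) matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
H = (M + M.T) / 2

t = 0.7
U = expm(-1j * t * H)  # propagator U(t, 0) = exp(-i t H) for constant H

# Unitarity: U* U = I, so the norm of any state is preserved
assert np.allclose(U.conj().T @ U, np.eye(4))
psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi_t = U @ psi0
assert np.isclose(np.linalg.norm(psi_t), np.linalg.norm(psi0))
```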

Mixed states

Pure state: can be described by a state vector $|\psi\rangle$ or by $|\psi\rangle\langle\psi|$. Mixed state: a statistical mixture of pure states
$$\rho = \sum_{j=1}^{N} p_j |\psi_j\rangle\langle\psi_j|,$$
where the $p_j$ are real, $0 \le p_j \le 1$, $\sum_j p_j = 1$.
The Liouville-von Neumann equation (the equivalent of the Schrödinger equation for the density matrix) reads
$$i \frac{d\rho}{dt} = [H, \rho].$$
It is solved by use of propagators as
$$\rho(t) = U(t, 0)\, \rho(0)\, U(t, 0)^*.$$
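A short sketch of density-matrix evolution (the Hamiltonian and the two pure states are made up for illustration): evolving $\rho$ as $U \rho U^*$ preserves the trace and Hermiticity, as a valid density matrix requires.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
H = (M + M.T) / 2  # illustrative Hermitian Hamiltonian

# Mixed state: equal mixture of two random normalized pure states
psis = [rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(2)]
psis = [p / np.linalg.norm(p) for p in psis]
rho0 = sum(0.5 * np.outer(p, p.conj()) for p in psis)

U = expm(-1j * 0.3 * H)
rho_t = U @ rho0 @ U.conj().T  # rho(t) = U rho(0) U*

assert np.isclose(np.trace(rho_t).real, 1.0)   # trace preserved
assert np.allclose(rho_t, rho_t.conj().T)      # still Hermitian
```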

Time-dependent Hamiltonian

All previous equations also hold for $H = H(t)$, in particular
$$i \frac{dU(t, 0)}{dt} = H(t)\, U(t, 0).$$
But
$$U(t, 0) \ne e^{-itH} \ne e^{-i \int_0^t H(t')\,dt'}.$$
However,
$$U(t, 0) = U(t, t')\, U(t', 0), \quad t > t' > 0.$$
Therefore, for $\Delta t = t/N$,
$$U(t, 0) = U(t, (N-1)\Delta t) \cdots U(2\Delta t, \Delta t)\, U(\Delta t, 0).$$
For $\Delta t$ small,
$$U((k+1)\Delta t, k\Delta t) \approx e^{-i\Delta t\, H(k\Delta t)}.$$
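The time-slicing above can be sketched as follows (the specific Hamiltonian $H(t) = H_0 + \cos(t)H_1$ is an assumption for illustration); refining the discretization brings the product of subpropagators closer to a highly resolved reference:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative time-dependent Hamiltonian H(t) = H0 + cos(t) * H1
rng = np.random.default_rng(2)
A0 = rng.standard_normal((4, 4)); H0 = (A0 + A0.T) / 2
A1 = rng.standard_normal((4, 4)); H1 = (A1 + A1.T) / 2
H = lambda t: H0 + np.cos(t) * H1

def propagator(t, N):
    """U(t, 0) ~ product of subpropagators exp(-i dt H(k dt))."""
    dt = t / N
    U = np.eye(4, dtype=complex)
    for k in range(N):
        U = expm(-1j * dt * H(k * dt)) @ U  # later factors multiply on the left
    return U

# Halving the time step reduces the discretization error
U_coarse, U_fine, U_ref = propagator(1.0, 50), propagator(1.0, 100), propagator(1.0, 2000)
assert np.linalg.norm(U_fine - U_ref) < np.linalg.norm(U_coarse - U_ref)
```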

Observable (Schrödinger picture)

For a pure state, the expected value of an observable for a system in state $|\psi\rangle$ is given by $\langle O \rangle = \langle \psi | O | \psi \rangle$. For the density matrix $\rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|$,
$$\langle O \rangle = \sum_{j=1}^{N} p_j \langle\psi_j| O |\psi_j\rangle = \sum_{j=1}^{N} p_j \operatorname{Tr}(|\psi_j\rangle\langle\psi_j| O) = \operatorname{Tr}(\rho O).$$
To identify dominating frequencies (Fourier coefficients): sample $s(t) \equiv \langle O \rangle(t)$ at times $t_i$, $i = 1, \ldots, M$, then compute $\mathrm{DFT}([s(t_i)]_{i=1,\ldots,M})$.
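A toy single-spin version of this pipeline (the choice $H = \tfrac{\omega}{2}\sigma_z$, $O = \sigma_x$ and the sampling grid are assumptions for illustration): sample $s(t_i) = \operatorname{Tr}(\rho(t_i) O)$ and let the DFT pick out the dominating frequency.

```python
import numpy as np
from scipy.linalg import expm

# Toy example: H = (w/2) sigma_z, observable O = sigma_x, initial state |+><+|
w = 2.0 * np.pi * 5.0  # angular frequency corresponding to a 5 Hz line
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * w * sz
rho0 = 0.5 * (np.eye(2) + sx)

# Sample s(t_i) = Tr(U rho0 U* O) on a uniform grid over one second
M, T = 128, 1.0
ts = np.arange(M) * (T / M)
s = np.array([np.trace(expm(-1j * H * t) @ rho0 @ expm(1j * H * t) @ sx).real
              for t in ts])

# DFT identifies the dominating frequency
freqs = np.fft.rfftfreq(M, d=T / M)
peak = freqs[np.argmax(np.abs(np.fft.rfft(s)))]
assert np.isclose(peak, 5.0)
```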

Our setting

Given: $\rho(0)$, $H(t)$, $O$, $\{t_i\}_{i=1,\ldots,M}$. To be computed:
$$s(t_i) = \operatorname{Tr}\big(U(t_i, 0)\, \rho(0)\, U(t_i, 0)^*\, O\big),$$
where $U((k+1)\Delta t, k\Delta t) \approx e^{-i\Delta t\, H(k\Delta t)}$. (picture: Wikipedia)

Our setting (cont.)

Required accuracy: 0.1%-1%. Important properties:
- N spins ±1/2, so the matrices are 2^N × 2^N;
- ρ(0) is often sparse, but not necessarily;
- H(t) is real symmetric;
- H(t) is sparse, with the same sparsity pattern for all t;
- H(t) is periodic;
- only a small number of sampling times (ca. 4-10) within one period of the Hamiltonian;
- O is sparse.

Example I

[Figure: sparsity patterns and values of the 1024 × 1024 Hamiltonian H (nz = 24064) and the density matrix ρ (nz = 2048), together with the Fourier transform FT(s) of the signal.]

Example II

[Figure: sparsity patterns and values of the 256 × 256 Hamiltonian without the RF field (nz = 3840) and with the RF field (nz = 5888).]

Main tasks

- How to choose $\Delta t$?
- How to compute the subpropagator $e^{-i\Delta t\, H(k\Delta t)}$?
- How to avoid computation of the subpropagators?

Choosing the time step $\Delta t$

Aim: choose $\Delta t$ such that $H$ is almost constant on $t \in [k\Delta t, (k+1)\Delta t]$, i.e.
$$U((k+1)\Delta t, k\Delta t) \approx e^{-i\Delta t\, H(k\Delta t)} \approx e^{-i\Delta t\, H((k+1)\Delta t)}.$$
Standard approach: refine until stagnation.

Sensitivity of the matrix exponential, $A$ normal:
$$\|e^{A+E} - e^{A}\| \le \|e^{A}\|\, \|E\|\, e^{\|E\|}.$$
Q: Is this bound tight? A: Quite.
Q: Is this bound useful? A: Well, only locally...
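The sensitivity bound can be checked numerically on a small instance (the symmetric matrix $A$ and perturbation $E$ below are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

# Check the bound ||e^{A+E} - e^A|| <= ||e^A|| ||E|| e^{||E||} for a normal A
rng = np.random.default_rng(3)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2               # symmetric, hence normal
Ep = 1e-3 * rng.standard_normal((6, 6))

lhs = np.linalg.norm(expm(A + Ep) - expm(A), 2)
rhs = np.linalg.norm(expm(A), 2) * np.linalg.norm(Ep, 2) * np.exp(np.linalg.norm(Ep, 2))
assert lhs <= rhs
```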

Example

[Figure: for k = 1, ..., 100, the three quantities below, all lying under about 0.03.]

- rel. change Hams: $\|H(k\Delta t) - H((k+1)\Delta t)\| \,/\, \|H(k\Delta t)\|$
- rel. change exps: $\|e^{-i\Delta t H(k\Delta t)} - e^{-i\Delta t H((k+1)\Delta t)}\| \,/\, \|e^{-i\Delta t H(k\Delta t)}\|$
- upper bound: $\Delta t\, \|H(k\Delta t) - H((k+1)\Delta t)\|\, e^{\Delta t \|H(k\Delta t) - H((k+1)\Delta t)\|}$

Computing the subpropagators I

Taylor expansion:
$$e^{A} \approx \sum_{k=0}^{n} \frac{1}{k!} A^k$$
+ requires only sparse-dense multiplications
− reasonable convergence only for $\Delta t \|H\| < 1$ (ca. 7 terms)

Scaling and squaring:
$$e^{A} = \big(e^{A/n}\big)^n, \quad \text{optimal: } n = 2^k$$
+ scaling improves convergence of many methods
− requires dense-dense multiplications
− alternative: finer time discretization
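A minimal sketch combining the two ideas (the helper name `expm_taylor_ss` and the term/scale choices are illustrative, not a production implementation): scale $A$ down by a power of two, run a short Taylor series, then square back up.

```python
import numpy as np
from scipy.linalg import expm

def expm_taylor_ss(A, terms=7, s=None):
    """Taylor series combined with scaling and squaring: e^A = (e^{A/2^s})^{2^s}."""
    if s is None:  # scale so that ||A / 2^s|| < 1, where the short series converges well
        s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1e-16)))))
    B = A / 2**s
    X = np.eye(A.shape[0], dtype=A.dtype)
    term = np.eye(A.shape[0], dtype=A.dtype)
    for k in range(1, terms + 1):
        term = term @ B / k          # accumulates B^k / k!
        X = X + term
    for _ in range(s):               # square s times: dense-dense multiplications
        X = X @ X
    return X

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
assert np.linalg.norm(expm_taylor_ss(A, terms=10) - expm(A)) < 1e-6
```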

Computing the subpropagators II

Padé approximant:
$$R_{m,n}(x) = \frac{\sum_{k=0}^{m} a_k x^k}{1 + \sum_{l=1}^{n} b_l x^l} \quad \text{such that} \quad R(0) = f(0),\ R'(0) = f'(0),\ \ldots,\ R^{(m+n)}(0) = f^{(m+n)}(0);$$
usually $m = n$.
+ explicit formulas for $a_k$ and $b_l$
− denominator may be singular, stability issues
− useful only for $\Delta t \|H\|$ small
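As a concrete instance, the lowest-order diagonal Padé approximant of the exponential is $R_{1,1}(A) = (I - A/2)^{-1}(I + A/2)$ (the Cayley transform); the sketch below (with an arbitrary small-norm test matrix) shows it matching $e^A$ only when $\|A\|$ is small:

```python
import numpy as np
from scipy.linalg import expm

def expm_pade11(A):
    """Diagonal [1/1] Pade approximant of e^A: (I - A/2)^{-1} (I + A/2).
    Matches e^x through second order, so it is useful only for small ||A||."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A / 2, np.eye(n) + A / 2)

rng = np.random.default_rng(5)
A = 0.01 * rng.standard_normal((4, 4))   # small norm, where the approximant is accurate
assert np.linalg.norm(expm_pade11(A) - expm(A)) < 1e-4
```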

Computing the subpropagators III

Chebyshev expansion:
$$e^{A} = J_0(i)\, I + 2 \sum_{k=1}^{\infty} i^k J_k(-i)\, T_k(A), \quad \|A\| \le 1,$$
where the $J_k$ are Bessel functions and the $T_k$ are Chebyshev polynomials of the first kind.
+ $T_k(A)$ computed using the three-term recurrence with only sparse-dense multiplications
− $\|A\| \le 1$ must hold

Diagonalization: $A = V \Lambda V^*$,
$$e^{A} = V \operatorname{diag}(e^{\lambda_1}, \ldots, e^{\lambda_n})\, V^*$$
± depends on how efficiently we compute the spectral decomposition
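The Chebyshev coefficients $i^k J_k(-i)$ equal the modified Bessel values $I_k(1)$, which the sketch below uses (for a real symmetric test matrix scaled so that $\|A\| \le 1$, with an illustrative truncation at $K$ terms):

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import iv  # modified Bessel I_k; note i^k J_k(-i) = I_k(1)

def expm_chebyshev(A, K=15):
    """Truncated Chebyshev expansion e^A = I_0(1) I + 2 sum_k I_k(1) T_k(A),
    valid for symmetric A with spectrum in [-1, 1]."""
    n = A.shape[0]
    T_prev, T_curr = np.eye(n), A.copy()
    X = iv(0, 1.0) * np.eye(n) + 2 * iv(1, 1.0) * T_curr
    for k in range(2, K + 1):
        T_prev, T_curr = T_curr, 2 * A @ T_curr - T_prev  # three-term recurrence
        X = X + 2 * iv(k, 1.0) * T_curr
    return X

rng = np.random.default_rng(6)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2
A = A / (np.linalg.norm(A, 2) * 1.01)   # enforce ||A|| <= 1
assert np.linalg.norm(expm_chebyshev(A) - expm(A)) < 1e-10
```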

Computing the subpropagators IV

Diagonalization using updates: use the spectral decomposition from the previous time step, $A^{(k)} = V^{(k)} \Lambda^{(k)} (V^{(k)})^*$:

for $i = 1, \ldots, n$
  $\mu_i^{(k+1)} \leftarrow \big(v_i^{(k)}\big)^* A^{(k+1)} v_i^{(k)}$
  compute the dominant eigenvector $\tilde v_i^{(k+1)}$ of $\big(A^{(k+1)} - \mu_i^{(k+1)} I\big)^{-1}$
  $\lambda_i^{(k+1)} \leftarrow \big(\tilde v_i^{(k+1)}\big)^* A^{(k+1)} \tilde v_i^{(k+1)}, \quad v_i^{(k+1)} \leftarrow \tilde v_i^{(k+1)}$
end

+ power iteration generally converges very quickly
− poor convergence for $\lambda_{i+1} / \lambda_i \approx 1$

Eigenvalues of a Hamiltonian

[Figure: eigenvalues $\lambda_i$, $i = 1, \ldots, 1024$, of a Hamiltonian, ranging over roughly $\pm 2 \times 10^6$, and the ratios $\lambda_{i+1} / \lambda_i$, most of which lie close to 1.]

Computing the subpropagators V

Splitting methods in general. Suzuki-Trotter formula:
$$e^{t(A_1 + A_2)} = e^{t A_1} e^{t A_2} + O(t^2).$$
Sparse splitting: $A = A_1 + \cdots + A_k$, such that each $e^{A_i}$ is sparse, and
$$e^{A} \approx e^{A_1} e^{A_2} \cdots e^{A_k}.$$
+ avoids dense-dense multiplication of the subpropagators
− error may be large for highly non-commuting matrices
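The $O(t^2)$ error of the Suzuki-Trotter formula can be observed directly (with two arbitrary random test matrices): halving $t$ should shrink the splitting error roughly fourfold.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
A1 = rng.standard_normal((4, 4))
A2 = rng.standard_normal((4, 4))

def trotter_err(t):
    """Norm of e^{t(A1+A2)} - e^{tA1} e^{tA2}, which is O(t^2)."""
    return np.linalg.norm(expm(t * (A1 + A2)) - expm(t * A1) @ expm(t * A2))

e1, e2 = trotter_err(1e-2), trotter_err(5e-3)
assert 3.0 < e1 / e2 < 5.0   # ratio close to 4 confirms the second-order error
```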

Computing the subpropagators VI

Checkerboard method: $A$ symmetric with zero diagonal, $k = \mathrm{nnz}(A)/2$. Each $A_i$ carries a single symmetric pair of entries $a_i$, so $e^{A_i}$ equals the identity except for a $2 \times 2$ block
$$\begin{pmatrix} \cosh(a_i) & \sinh(a_i) \\ \sinh(a_i) & \cosh(a_i) \end{pmatrix}$$
in the corresponding rows and columns. For $\alpha = \|A\| < 1$ and $m = \max_i \mathrm{nnz}(A(i,:))$,
$$\big\|e^{A} - e^{A_1} e^{A_2} \cdots e^{A_k}\big\| \le \tfrac{1}{2}\, k\, m\, \alpha^2\, e^{m\alpha}.$$
+ cheap multiplication by $e^{A_i}$
− error scales with $\mathrm{nnz}(A)$
+ better splitting schemes: $(m\alpha)^2 e^{m\alpha}$, edge coloring
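A small sketch of the pair-by-pair splitting (the specific matrix and entry values are illustrative): each single-pair exponential is the identity plus a cosh/sinh block, and their product approximates $e^A$ up to the splitting error.

```python
import numpy as np
from scipy.linalg import expm

def exp_pair(n, p, q, a):
    """e^{A_i} for A_i holding the single symmetric pair a in positions (p,q),(q,p):
    the identity, except for a 2x2 cosh/sinh block in rows/columns p and q."""
    E = np.eye(n)
    E[p, p] = E[q, q] = np.cosh(a)
    E[p, q] = E[q, p] = np.sinh(a)
    return E

# Symmetric A with zero diagonal, split entry-pair by entry-pair
n = 4
pairs = [(0, 1, 0.1), (1, 2, -0.2), (0, 3, 0.15)]
A = np.zeros((n, n))
for p, q, a in pairs:
    A[p, q] = A[q, p] = a

prod = np.eye(n)
for p, q, a in pairs:
    prod = prod @ exp_pair(n, p, q, a)

# The product of pair exponentials approximates e^A up to the splitting error
assert np.linalg.norm(prod - expm(A)) < 0.1
```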

Computing the subpropagators: Summary

- Expansion methods often need scaling and squaring (or a smaller time step) to achieve reasonable convergence.
- For the Chebyshev expansion, proper scaling is indispensable.
- The diagonalization update is a difficult task if the eigenvalues are not well separated.
- Subpropagators are dense and must be multiplied together to form the final propagator (dense-dense multiplication).
- Splitting methods may introduce a large error.

Evolution of pure states

Assume
$$\rho = \sum_{j=1}^{k} p_j |\psi_j\rangle\langle\psi_j|$$
with $k$ small. Compute
$$|\psi_j(t)\rangle = U(t, 0)\, |\psi_j(0)\rangle \approx e^{-i\Delta t H(N\Delta t)} \cdots e^{-i\Delta t H(\Delta t)}\, e^{-i\Delta t H(0)}\, |\psi_j(0)\rangle,$$
i.e., the action of the matrix exponential on a vector, and
$$\rho(t) = \sum_{j=1}^{k} p_j |\psi_j(t)\rangle\langle\psi_j(t)|.$$
+ only matrix-vector multiplication
+ Krylov subspace methods
− cannot employ the periodicity of H
− k is rarely small
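The action of the exponential on a vector can be computed without ever forming the dense propagator; one concrete option (an illustrative sketch with a random sparse symmetric matrix, using SciPy's `expm_multiply` rather than any method specific to these slides) is:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import expm_multiply
from scipy.linalg import expm

# Illustrative sparse real symmetric "Hamiltonian"
rng = np.random.default_rng(8)
S = sparse_random(64, 64, density=0.05, random_state=8, format="csr")
H = (S + S.T) / 2

psi0 = rng.standard_normal(64) + 1j * rng.standard_normal(64)
dt = 0.2

# Action of the exponential on a vector: only sparse matrix-vector products
psi_fast = expm_multiply(-1j * dt * H, psi0)

# Reference: form the dense exponential and multiply
psi_ref = expm(-1j * dt * H.toarray()) @ psi0
assert np.linalg.norm(psi_fast - psi_ref) < 1e-8 * np.linalg.norm(psi_ref)
```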

Baker-Campbell-Hausdorff formula

We know that $e^{A+B} \ne e^{A} e^{B}$ in general. However, there always exists $C$ such that $e^{C} = e^{A} e^{B}$:
- $[A, B] = 0 \implies C = A + B$
- $[A, B]$ central $\implies C = A + B + \tfrac{1}{2}[A, B]$
- $[A, B]$ general $\implies$ go to Wikipedia
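The central-commutator case can be verified on the classic Heisenberg-algebra example (the specific $3 \times 3$ nilpotent matrices are an illustrative assumption): there $[A, B]$ commutes with both $A$ and $B$, so the BCH series truncates.

```python
import numpy as np
from scipy.linalg import expm

# Heisenberg-algebra example: [A, B] is central, so e^A e^B = e^{A + B + [A,B]/2}
A = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
B = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
comm = A @ B - B @ A                 # commutes with both A and B

assert np.allclose(comm @ A, A @ comm) and np.allclose(comm @ B, B @ comm)
assert np.allclose(expm(A) @ expm(B), expm(A + B + comm / 2))
```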

Baker-Campbell-Hausdorff formula (cont.)

+ enables aggregation of subpropagators
+ allows employing the periodicity of H
+ computation of C requires only sparse-sparse multiplications
− C is generally not sparse
− often slow convergence

nnz of the nested commutators:
- A: 24064
- B: 24064
- [A, B]: 44672
- [A, [A, B]]: 130048
- [B, [B, A]]: 129248
- [B, [A, [A, B]]]: 257158

Adjoint endomorphism (matrix Lie algebra)

Compute directly the unitarily transformed density matrix using the following (Taylor-expansion type) formula:
$$e^{A} B e^{-A} = B + [A, B] + \frac{1}{2!} [A, [A, B]] + \frac{1}{3!} [A, [A, [A, B]]] + \cdots$$
+ fully avoids dense-dense multiplications
− no periodicity
− often slow convergence
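A sketch of the truncated series (the helper name `adjoint_series`, the truncation level, and the random test matrices are illustrative assumptions), compared against the direct conjugation $e^A B e^{-A}$:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def adjoint_series(A, B, K=20):
    """Truncated series e^A B e^{-A} = sum_k ad_A^k(B) / k!, where
    ad_A^k(B) is the k-fold nested commutator [A, [A, ..., [A, B]]]."""
    X = np.zeros_like(B)
    C = B.copy()
    for k in range(K + 1):
        X = X + C / factorial(k)
        C = A @ C - C @ A            # next nested commutator [A, C]
    return X

rng = np.random.default_rng(9)
A = 0.2 * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
direct = expm(A) @ B @ expm(-A)
assert np.linalg.norm(adjoint_series(A, B) - direct) < 1e-8
```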

Avoiding the computation of subpropagators: Summary

- Due to the periodicity of H, propagators from the initial time to the sampling times can be reused; therefore, avoiding the computation of the subpropagators entirely is counterproductive.
- To use (band) Krylov subspace methods, the initial state has to be a statistical mixture of a small number of pure states.
- Both the commutators of two consecutive Hamiltonians and those of the Hamiltonian with the density matrix decay relatively slowly.


Thank you for your attention.

Edge coloring

[Backup slide: edge-coloring illustrations (pictures: Wikipedia).]