Numerical Linear and Multilinear Algebra in Quantum Tensor Networks


1 Numerical Linear and Multilinear Algebra in Quantum Tensor Networks
Konrad Waldherr, joint work with Thomas Huckle
QCCC 2013, Prien, October 20, 2013

2 Outline
Numerical (multi-)linear algebraic approaches to...
...quantum control
...quantum tensor networks
[Diagram: two coupled chains of tensor nodes (i_1,k_1,k_2), (i_2,k_2,k_3), (i_3,k_3,k_4), ... and (i_1,m_1,m_2), (i_2,m_2,m_3), (i_3,m_3,m_4), ...]

3 NMR Spectroscopy as Optimal Control Problem
Goal: find optimal pulse amplitudes u(t) to steer nuclear spin states x(t) over the time interval [0, T].
Optimality can be described by a scalar quality function f: f(x, u) = f_T(x(T)).
Application: synthesis of a unitary target operator U_G: f(U(T), u) = ||U(T) - U_G||_F^2.
The dynamics are governed by the equation of motion
  ẋ(t) = -i [ H_0 + sum_{m=1}^{M} u_m(t) H_m ] x(t) = -i H(t) x(t).

4 Gradient Ascent Pulse Engineering (GRAPE)
Algorithm:
1: Set initial control amplitudes u_m^{(0)}(t_k) for all times t_k, and U_0
2: for l = 0, 1, 2, ... until convergence
  a: Calculate the exponentials U_k = e^{-i Δt H(t_k)}
  b: Compute the forward propagation U_{k:0} = U_k U_{k-1} ... U_1 U_0 for all k = 1, ..., K
  c: Compute the backward propagation W_{K+1:k+1} = U_{K+1} U_K ... U_{k+1} for all k = K, ..., 1
  d: Calculate the gradient ∂f/∂u_m(t_k) = Re[ trace( W_{K+1:k+1}^H (-i Δt H_m) U_{k:0} ) ]
  e: Update the controls: u_m^{(l+1)}(t_k) = u_m^{(l)}(t_k) + ε ∂f/∂u_m(t_k)
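The iteration above can be sketched for a single qubit. The sketch below is illustrative only: it replaces the analytic trace-formula gradient of step d by a finite-difference gradient, and all names and parameter values (H0, Hm, K, dt, eps, the target angle) are assumptions, not taken from the talk.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, Hm = sz, sx                      # drift and control Hamiltonian
K, dt, eps = 20, 0.1, 0.5            # time slices, slice length, step size
theta = np.pi / 4                    # target: e^{-i theta sx}
U_G = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * sx

def step(uk):
    # exact e^{-i dt (H0 + uk*Hm)}: this 2x2 Hamiltonian is traceless
    # with eigenvalues +-r, so the exponential has a closed form
    r = np.sqrt(1.0 + uk ** 2)
    return np.cos(dt * r) * np.eye(2) - 1j * np.sin(dt * r) / r * (H0 + uk * Hm)

def fidelity(u):
    U = np.eye(2, dtype=complex)
    for uk in u:                     # forward propagation U_{K:0}
        U = step(uk) @ U
    return abs(np.trace(U_G.conj().T @ U)) / 2   # phase-insensitive overlap

rng = np.random.default_rng(0)
u = 0.1 * rng.normal(size=K)         # initial control amplitudes
for _ in range(300):                 # gradient ascent on the fidelity
    g = np.zeros(K)
    for k in range(K):
        d = np.zeros(K); d[k] = 1e-6
        g[k] = (fidelity(u + d) - fidelity(u - d)) / 2e-6
    u += eps * g
print(fidelity(u))
```

After convergence the fidelity approaches 1; the finite-difference gradient is only a stand-in, since the whole point of GRAPE is that the gradient comes almost for free from the forward and backward propagations.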

5 GRAPE: Computational Challenges
1. Computation of the matrix exponentials U_k = e^{-i Δt [H_0 + sum_m u_m(t_k) H_m]}
Challenges:
Exponentials of matrices are required in explicit form
Operand matrices have certain properties (e.g., sparsity structure)

6 Matrix Exponentials
NMR setting: the problem can be reduced to two problems of half size:
  c n^3 reduces to 2 c (n/2)^3 = c n^3 / 4  ⇒  saves a factor of 4

7 Matrix Exponentials
Old version: eigendecomposition method. New version: Chebyshev method.
  H = V Λ V^H  ⇒  e^{-i Δt H} = V e^{-i Δt Λ} V^H = V diag(e^{-i Δt λ_1}, ..., e^{-i Δt λ_n}) V^H.
Requires the eigendecomposition of a typically sparse matrix.

8 Matrix Exponentials
Old version: eigendecomposition method. New version: Chebyshev method.
  e^{-i Δt H} ≈ a_0 I + 2 sum_{k=1}^{m} a_k C_k(H̃),  a_k = (-i)^k J_k(Δt ||H||),  H̃ = H / ||H||.
The Chebyshev recurrence translates into
  for k = m down to 0:
    D_k = a_k I + 2 H̃ D_{k+1} - D_{k+2}
  e^{-i Δt H} ≈ D_0 - D_2
The computation relies solely on products of the form sparse × dense.
⇒ No expensive operations such as inversion or eigendecomposition are required.
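A minimal sketch of this Chebyshev/Clenshaw evaluation, assuming the series coefficients a_k = (-i)^k J_k(Δt ||H||) with Bessel functions J_k; the function name, truncation order, and test matrix are illustrative:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import jv

def cheb_expm(H, t, m=30):
    """Approximate e^{-i t H} for a Hermitian H via a Chebyshev series,
    evaluated with the Clenshaw-type recurrence from the slide: only
    matrix-matrix products with the scaled H are needed."""
    nrm = np.linalg.norm(H, 2)                 # scale spectrum into [-1, 1]
    Ht = H / nrm
    n = H.shape[0]
    a = [(-1j) ** k * jv(k, t * nrm) for k in range(m + 1)]
    Dk1 = np.zeros((n, n), dtype=complex)      # D_{k+1}
    Dk2 = np.zeros((n, n), dtype=complex)      # D_{k+2}
    for k in range(m, 0, -1):
        Dk = a[k] * np.eye(n) + 2 * Ht @ Dk1 - Dk2
        Dk2, Dk1 = Dk1, Dk
    D0 = a[0] * np.eye(n) + 2 * Ht @ Dk1 - Dk2
    return D0 - Dk2                            # = D_0 - D_2

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
H = (A + A.T) / 2
err = np.linalg.norm(cheb_expm(H, 0.5) - expm(-0.5j * H))
print(err)   # small: the Bessel coefficients decay super-exponentially
```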

9 GRAPE: Computational Challenges
2. Computation of the intermediate matrix products
Compute the forward propagation U_{k:0} = U_k U_{k-1} ... U_1 U_0 for all k = 1, ..., K
Compute the backward propagation W_{K+1:k+1} = U_{K+1} U_K ... U_{k+1} for all k = K, ..., 1
These are instances of the matrix-multiplication prefix problem:
  k = 1 : K :  (X_1 ... X_{k-1}) X_k
Challenges:
Parallelization
Large number of matrices to be multiplied (e.g., K = 1024)
Matrices comparably small (e.g., )
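The prefix problem in its simplest, sequential form can be sketched as follows (illustrative names and left-to-right order; the propagations above multiply right-to-left, and the parallel schemes on the following slides distribute exactly these partial products):

```python
import numpy as np

def prefix_products(mats):
    """Return all prefixes [X_1, X_1 X_2, ..., X_1 X_2 ... X_K]."""
    out, acc = [], np.eye(mats[0].shape[0])
    for X in mats:
        acc = acc @ X          # one more factor per step
        out.append(acc)
    return out

rng = np.random.default_rng(2)
mats = [rng.normal(size=(3, 3)) for _ in range(8)]
P = prefix_products(mats)
print(np.allclose(P[2], mats[0] @ mats[1] @ mats[2]))
```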

10 Computation of the Matrix Multiplications
Old version: coarse-grain parallel prefix tree. New version: fine-grain 3D parallelization.
[Diagram: parallel prefix tree over processors p_0, p_1, p_2, p_3 for X_1, ..., X_8 — first the pairwise products X_{1:2}, X_{3:4}, X_{5:6}, X_{7:8}, then X_{1:3}, X_{1:4}, X_{5:7}, X_{5:8}, finally all prefixes X_{1:1}, ..., X_{1:8}.]
⇒ individual matrix multiplications are computed sequentially

11 Computation of the Matrix Multiplications
Old version: coarse-grain parallel prefix tree. New version: fine-grain 3D parallelization:
⇒ compute the loop over k sequentially and parallelize the individual matrix multiplications.
The total amount of work corresponds to a 3D cube; each subblock corresponds to a block operation C_{i,l} := C_{i,l} + A_{i,j} B_{j,l}.
Find an appropriate domain decomposition. The 3D parallelization features the lowest communication overhead.

12 Numerical Results (10 spins)
[Bar chart: computation time in seconds for one GRAPE iteration, split into gradient, backward propagation, forward propagation, and exponential, comparing old vs. new for K = 128, 256, 512, 1024.]
Figure: Runtime of one GRAPE iteration on the HLRB2 using K 2 processors.

13 Numerical Results (11 spins)
[Bar chart: same quantities as before for 11 spins, old vs. new for K = 128, 256, 512, 1024.]
Figure: Runtime of one GRAPE iteration on the HLRB2 using K 2 processors.

14 Ground State of Quantum Systems
Ground state: compute the lowest eigenvalue of the Hamiltonian
  H = sum_{k=1}^{M} α_k Q_1^{(k)} ⊗ ... ⊗ Q_N^{(k)} ∈ C^{2^N × 2^N}
Challenges:
the vector space V = C^{2^N} grows exponentially in N
for large N (e.g., N = 100), traditional matrix methods fail
Solution: a data-sparse representation format that allows
to overcome the curse of dimensionality,
to describe the right corner of the Hilbert space,
for efficient computations and algorithms.

15 Ground State of Quantum Systems (cont.)
Two approaches:
subspace method in the MPS format,
symmetry-adapted variational ground-state search.

16 Matrix Product States
In physical simulations, Matrix Product States (MPS) are in use:
  x = sum_{i_1,...,i_N} tr( A_1^{(i_1)} A_2^{(i_2)} ... A_N^{(i_N)} ) e_{i_1 i_2 ... i_N}
with D_j × D_{j+1} matrices A_j^{(i_j)}.
The MPS representation requires 2 N D^2 instead of 2^N entries.
Minimization in terms of MPS:
vMPS (ALS): optimize one site j at a time,
DMRG (MALS): optimize two neighboring sites j, j+1 at a time.
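A brute-force contraction of a small trace-MPS into the full 2^N vector makes the representation concrete. This is a sketch with uniform bond dimension D and illustrative names; for these toy sizes the 2ND^2 MPS parameters are not yet fewer than the 2^N full entries — the saving kicks in for large N.

```python
import numpy as np

def mps_to_full(As):
    """As[j][i] is the D x D matrix A_j^{(i)} for physical index i;
    entry (i_1,...,i_N) of the vector is tr(A_1^{(i_1)} ... A_N^{(i_N)})."""
    N = len(As)
    x = np.empty(2 ** N)
    for idx in range(2 ** N):
        M = np.eye(As[0][0].shape[0])
        for j in range(N):
            M = M @ As[j][(idx >> (N - 1 - j)) & 1]
        x[idx] = np.trace(M)
    return x

rng = np.random.default_rng(3)
N, D = 6, 3
As = [[rng.normal(size=(D, D)) for _ in range(2)] for _ in range(N)]
x = mps_to_full(As)
print(x.size, 2 * N * D * D)   # 64 full entries vs. 108 MPS parameters
```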

17 [Diagram: tensor formats. Tucker: factors carrying i_1, i_2, i_3, ..., i_N with individual rank indices α_1, α_2, α_3, ..., α_N, joined by a core tensor over (α_1, α_2, α_3, ..., α_N). CP: all factors share a single rank index α.]

18 [Diagram: tensor formats, continued. Tucker and CP as before; TT/MPS: factors carrying i_1, i_2, i_3, ..., i_N coupled in a chain by bond indices α_2, α_3, α_4, ..., α_N.]

19 The Lanczos Algorithm
Algorithm: Lanczos iteration for a Hermitian matrix H
Choose an initial guess r_0 ≠ 0
Define β_0 = ||r_0|| and q_0 = 0
for j = 1, 2, ..., m do
  q_j = r_{j-1} / β_{j-1}
  u_j = H q_j
  α_j = q_j^H u_j
  r_j = u_j - α_j q_j - β_{j-1} q_{j-1}
  β_j = ||r_j||
T_m = tridiag(β, α, β)
(λ_min, y) = eig(T_m)
v = Q y = (q_1, ..., q_m) y

20-23 The Lanczos Algorithm (cont.)
Computational tasks:
1 Application of the matrix to a vector
2 Computation of inner products
3 Sums and linear combinations of vectors
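The pseudocode above can be turned into a runnable sketch; the version below assumes a real symmetric H for brevity, uses no reorthogonalization, and all names are illustrative. The three computational tasks are marked in the comments.

```python
import numpy as np

def lanczos_min(H, r, m):
    """Plain Lanczos for the lowest eigenpair of a symmetric matrix H."""
    n = H.shape[0]
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m)
    q_prev, b_prev = np.zeros(n), np.linalg.norm(r)
    for j in range(m):
        q = r / b_prev                     # q_j = r_{j-1} / beta_{j-1}
        u = H @ q                          # task 1: matrix times vector
        a = q @ u                          # task 2: inner product
        r = u - a * q - b_prev * q_prev    # task 3: linear combination
        alpha[j], beta[j] = a, np.linalg.norm(r)
        Q[:, j] = q
        q_prev, b_prev = q, beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    w, Y = np.linalg.eigh(T)               # (lambda_min, y) = eig(T_m)
    return w[0], Q @ Y[:, 0]               # v = Q y

rng = np.random.default_rng(4)
A = rng.normal(size=(200, 200))
H = (A + A.T) / 2
lam, v = lanczos_min(H, rng.normal(size=200), m=60)
print(abs(lam - np.linalg.eigvalsh(H)[0]))
```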

24 Computation of Inner Products
  x^H y = sum_{i_1,...,i_N} tr( conj(A_1^{(i_1)}) ... conj(A_N^{(i_N)}) ) tr( B_1^{(i_1)} ... B_N^{(i_N)} )
        = sum_{i_j} ( sum_{k_j} ā^{(i_1)}_{1;k_1,k_2} ... ā^{(i_N)}_{N;k_N,k_1} ) ( sum_{m_j} b^{(i_1)}_{1;m_1,m_2} ... b^{(i_N)}_{N;m_N,m_1} )
⇒ Find an efficient ordering of the summations over m_j, k_j, i_j.
Consider the tensor network
[Diagram: two coupled chains of nodes (i_1,k_1,k_2), (i_2,k_2,k_3), (i_3,k_3,k_4), ... and (i_1,m_1,m_2), (i_2,m_2,m_3), (i_3,m_3,m_4), ...]
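One efficient ordering can be sketched by carrying a product of per-site transfer matrices T_j = sum_i conj(A_j^{(i)}) ⊗ B_j^{(i)}, so the sweep never touches 2^N amplitudes; the brute-force comparison is for small N only, and all names are illustrative.

```python
import numpy as np

def mps_inner(As, Bs):
    """<x|y> for two trace-MPS via tr(M) tr(N) = tr(M kron N), site by site."""
    DA, DB = As[0][0].shape[0], Bs[0][0].shape[0]
    E = np.eye(DA * DB, dtype=complex)
    for A, B in zip(As, Bs):
        E = E @ sum(np.kron(A[i].conj(), B[i]) for i in range(2))
    return np.trace(E)

def brute(As):   # full 2^N vector, for checking only
    N = len(As)
    x = np.empty(2 ** N, dtype=complex)
    for idx in range(2 ** N):
        M = np.eye(As[0][0].shape[0], dtype=complex)
        for j in range(N):
            M = M @ As[j][(idx >> (N - 1 - j)) & 1]
        x[idx] = np.trace(M)
    return x

rng = np.random.default_rng(5)
N = 5
As = [[rng.normal(size=(2, 2)) for _ in range(2)] for _ in range(N)]
Bs = [[rng.normal(size=(3, 3)) for _ in range(2)] for _ in range(N)]
print(abs(mps_inner(As, Bs) - np.vdot(brute(As), brute(Bs))))
```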

25 [Diagram: contraction order for the inner-product network. Contracting the first pair of nodes over i_1 yields a node (k_1, m_1, k_2, m_2); contracting it with the next column over k_2, i_2, m_2 yields (k_1, m_1, k_3, m_3); repeating this site by site sweeps through the chain.]

26 Computation of the Sum of Two MPS
Sum of two MPS vectors
  x = sum_{i_1,...,i_N} tr( A_1^{(i_1)} A_2^{(i_2)} ... A_N^{(i_N)} ) e_{i_1...i_N},
  y = sum_{i_1,...,i_N} tr( B_1^{(i_1)} B_2^{(i_2)} ... B_N^{(i_N)} ) e_{i_1...i_N}:
  (x + y)_{i_1,...,i_N} = tr[ diag(A_1^{(i_1)}, B_1^{(i_1)}) diag(A_2^{(i_2)}, B_2^{(i_2)}) ... diag(A_N^{(i_N)}, B_N^{(i_N)}) ]
with block-diagonal site matrices. Adding MPS leads to an increase of the bond dimensions:
  x_{MPS_D} + y_{MPS_D} = z_{MPS_{2D}}
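The block-diagonal construction can be checked numerically: the trace of a product of block-diagonal matrices is the sum of the traces of the two blocks' products. A sketch with illustrative names (uniform bond dimension per MPS assumed):

```python
import numpy as np

def mps_add(As, Bs):
    """x + y via block-diagonal site matrices: bond dimensions add."""
    out = []
    for A, B in zip(As, Bs):
        site = []
        for i in range(2):
            DA, DB = A[i].shape[0], B[i].shape[0]
            Z = np.zeros((DA + DB, DA + DB))
            Z[:DA, :DA], Z[DA:, DA:] = A[i], B[i]
            site.append(Z)
        out.append(site)
    return out

def brute(As):   # full 2^N vector, for checking only
    N = len(As)
    x = np.empty(2 ** N)
    for idx in range(2 ** N):
        M = np.eye(As[0][0].shape[0])
        for j in range(N):
            M = M @ As[j][(idx >> (N - 1 - j)) & 1]
        x[idx] = np.trace(M)
    return x

rng = np.random.default_rng(6)
N = 5
As = [[rng.normal(size=(2, 2)) for _ in range(2)] for _ in range(N)]
Bs = [[rng.normal(size=(3, 3)) for _ in range(2)] for _ in range(N)]
print(np.allclose(brute(mps_add(As, Bs)), brute(As) + brute(Bs)))
```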

27 Matrix Product Operators (MPO)
The MPS concept x_{i_1,i_2,...,i_N} can be generalized to operators:
  O^{i_1,...,i_N}_{j_1,...,j_N} = tr( W_1^{(i_1,j_1)} W_2^{(i_2,j_2)} ... W_N^{(i_N,j_N)} )
Application of such an MPO to an MPS leads to
  tr[ ( sum_{j_1} W_1^{(i_1,j_1)} ⊗ A_1^{(j_1)} ) ... ( sum_{j_N} W_N^{(i_N,j_N)} ⊗ A_N^{(j_N)} ) ],
i.e., an MPS of larger bond dimension D_MPS · D_MPO.
Usually an exact representation with small ranks, not dependent on N.
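A sketch of this MPO-times-MPS rule, validated against a dense reconstruction for small N; all names are illustrative:

```python
import numpy as np

def apply_mpo(Ws, As):
    """New site matrix for physical index i: sum_j W^{(i,j)} kron A^{(j)};
    the bond dimension grows from D to D_MPO * D."""
    return [[sum(np.kron(W[i][j], A[j]) for j in range(2))
             for i in range(2)]
            for W, A in zip(Ws, As)]

def brute(As):     # full 2^N vector from a trace-MPS, for checking only
    N = len(As)
    x = np.empty(2 ** N)
    for idx in range(2 ** N):
        M = np.eye(As[0][0].shape[0])
        for j in range(N):
            M = M @ As[j][(idx >> (N - 1 - j)) & 1]
        x[idx] = np.trace(M)
    return x

def brute_op(Ws):  # full 2^N x 2^N operator from a trace-MPO
    N = len(Ws)
    O = np.empty((2 ** N, 2 ** N))
    for r in range(2 ** N):
        for c in range(2 ** N):
            M = np.eye(Ws[0][0][0].shape[0])
            for k in range(N):
                i = (r >> (N - 1 - k)) & 1
                j = (c >> (N - 1 - k)) & 1
                M = M @ Ws[k][i][j]
            O[r, c] = np.trace(M)
    return O

rng = np.random.default_rng(7)
N = 4
Ws = [[[rng.normal(size=(2, 2)) for _ in range(2)] for _ in range(2)]
      for _ in range(N)]
As = [[rng.normal(size=(3, 3)) for _ in range(2)] for _ in range(N)]
print(np.allclose(brute(apply_mpo(Ws, As)), brute_op(Ws) @ brute(As)))
```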

28 Augmentation of Ranks
Increase of the bond dimension during the Lanczos algorithm:
Lanczos algorithm                          bond dimension
for j = 1, 2, ..., m
  q_j = r_{j-1} / β_{j-1}                  D^{(j)}
  u_j = H q_j                              D_O D^{(j)}
  α_j = q_j^H u_j
  r_j = H q_j - α_j q_j - β_{j-1} q_{j-1}  D_O D^{(j)} + D^{(j)} + D^{(j-1)} =: D^{(j+1)}
  β_j = ||r_j||
v = Q y = (q_1, ..., q_m) y                D^{(1)} + D^{(2)} + ... + D^{(m)}

29 Augmentation of Ranks (cont.)
Solution: keep the ranks D^{(j)} bounded ⇒ projection required

30 Compression/Projection of MPS
  P_D: y ∈ MPS_D' → arg min_{x ∈ MPS_D} || y - x ||_2   (D < D')
SVD-based truncation (sweeping through the sites):
Suppose A_1, ..., A_{r-1} are already constructed.
Define ( B_r^{(0)} ; B_r^{(1)} ) = ( U_r^{(0)} ; U_r^{(1)} ) Σ_r V_r^H ≈ ( Ũ_r^{(0)} ; Ũ_r^{(1)} ) Σ̃_r Ṽ_r^H   (truncated SVD)
Set A_r^{(i_r)} = Ũ_r^{(i_r)} and proceed with the matrices Σ̃_r Ṽ_r^H B_{r+1}^{(i_{r+1})}.
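The core of each truncation step is that the truncated SVD gives the best low-rank approximation in the Frobenius norm (Eckart-Young), with error equal to the discarded singular values. A minimal check with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(8)
M = rng.normal(size=(8, 6))
D = 3
U, s, Vh = np.linalg.svd(M, full_matrices=False)
approx = U[:, :D] @ np.diag(s[:D]) @ Vh[:D, :]   # keep D singular values
err = np.linalg.norm(M - approx)
print(err, np.sqrt(np.sum(s[D:] ** 2)))          # the two values coincide
```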

31 Restarted Lanczos
Algorithm: Restarted Lanczos iteration
Choose an initial guess r_0 ≠ 0
for iter = 1, 2, ... do
  Define β_0 = ||r_0|| and q_0 = 0
  for j = 1, 2, ..., m do
    q_j = r_{j-1} / β_{j-1}
    u_j = H q_j
    α_j = q_j^H u_j
    r_j = u_j - α_j q_j - β_{j-1} q_{j-1}
    β_j = ||r_j||
  T_m = tridiag(β, α, β)
  (λ_min, y) = eig(T_m)
  v = Q y = (q_1, ..., q_m) y
  r_0 = P_D(v)

32 Numerical Results (Ising Hamiltonian, N = 25), restart parameter m = 3
[Plot: relative error |λ_1 - λ̃_1| / |λ_1| vs. outer iteration for Lanczos and DMRG with D = 2, 3, 4, 5, 6, and Lanczos on the full vector.]
33 Numerical Results (Ising Hamiltonian, N = 25), restart parameter m = 4
[Plot: as above.]
34 Numerical Results (Ising Hamiltonian, N = 25), restart parameter m = 5
[Plot: as above.]

35 Matrix and Vector Symmetries
A ∈ R^{n×n} is symmetric if a_{i,j} = a_{j,i}
A ∈ R^{n×n} is skew-symmetric if a_{i,j} = -a_{j,i}
A ∈ R^{n×n} is persymmetric if a_{i,j} = a_{n+1-j,n+1-i}
A ∈ R^{n×n} is skew-persymmetric if a_{i,j} = -a_{n+1-j,n+1-i}
A ∈ R^{n×n} is symmetric persymmetric if a_{i,j} = a_{j,i} = a_{n+1-i,n+1-j}
A vector v ∈ R^n is called symmetric if v_i = v_{n+1-i}, and skew-symmetric if v_i = -v_{n+1-i}.
Example (symmetric):
A = [ a b c d
      b e f g
      c f h i
      d g i j ]
36 Example (skew-symmetric):
A = [ 0  b  c  d
      -b 0  f  g
      -c -f 0  i
      -d -g -i 0 ]
37 Example (persymmetric):
A = [ a b c d
      e f g c
      h i f b
      j h e a ]
38 Example (skew-persymmetric):
A = [ a  b  c  0
      e  f  0  -c
      h  0  -f -b
      0  -h -e -a ]
39 Example (symmetric persymmetric):
A = [ a b c d
      b e f c
      c f e b
      d c b a ]
40 Example (symmetric vector): J (v_1, v_2, ..., v_{n-1}, v_n)^T = (v_n, v_{n-1}, ..., v_2, v_1)^T = (v_1, v_2, ..., v_{n-1}, v_n)^T, where J denotes the exchange (flip) matrix.
41 Example (skew-symmetric vector): J (v_1, v_2, ..., v_{n-1}, v_n)^T = (v_n, v_{n-1}, ..., v_2, v_1)^T = -(v_1, v_2, ..., v_{n-1}, v_n)^T.

42 Symmetries of Spin-1/2 Hamiltonians
  H = sum_{k=1}^{M} α_k Q_1^{(k)} ⊗ ... ⊗ Q_N^{(k)},  Q_i^{(k)} ∈ {I, σ_x, σ_y, σ_z} ⊂ C^{2×2}
  σ_x = ( 0 1 ; 1 0 ),  σ_y = ( 0 -i ; i 0 ),  σ_z = ( 1 0 ; 0 -1 ),  I = ( 1 0 ; 0 1 )
43-46 Symmetries of Spin-1/2 Hamiltonians (cont.)
σ_x, σ_z and I are symmetric, σ_y/i is skew-symmetric,
σ_x, σ_y/i and I are persymmetric, σ_z is skew-persymmetric,
the Kronecker product of two skew-symmetric matrices is symmetric,
the Kronecker product of two skew-persymmetric matrices is persymmetric.
Symmetry of Spin Hamiltonians: Hamiltonians of typical physical models (Ising-ZZ, various Heisenberg models) are real-valued and symmetric persymmetric.
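The symmetry claims above can be verified directly on the Pauli matrices (a small check; J2 and J4 denote the 2x2 and 4x4 exchange matrices):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sy_i = np.array([[0, -1], [1, 0]])   # sigma_y / i (real)
sz = np.array([[1, 0], [0, -1]])
J2 = np.fliplr(np.eye(2))
J4 = np.fliplr(np.eye(4))

assert np.allclose(sx, sx.T) and np.allclose(sz, sz.T)   # symmetric
assert np.allclose(sy_i, -sy_i.T)                        # skew-symmetric
assert np.allclose(sx, J2 @ sx.T @ J2)                   # persymmetric
assert np.allclose(sy_i, J2 @ sy_i.T @ J2)               # persymmetric
assert np.allclose(sz, -J2 @ sz.T @ J2)                  # skew-persymmetric
K = np.kron(sy_i, sy_i)
assert np.allclose(K, K.T)                               # skew x skew -> symmetric
K = np.kron(sz, sz)
assert np.allclose(K, J4 @ K.T @ J4)                     # skew-persym x skew-persym -> persym
print("all symmetry claims verified")
```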

47 Matrix Symmetries and Eigenvectors
Eigenvectors of Symmetric Persymmetric Matrices ([CB76]):
A symmetric persymmetric matrix A can be orthogonally transformed to a block-diagonal matrix:
  A = ( B  C ; C^T  JBJ ) = 1/2 ( I  I ; J  -J ) ( B + JC^T  0 ; 0  B - JC^T ) ( I  J ; I  -J ).
Hence, the eigenvectors of A are of the form ( v_i ; J v_i ) or ( v_i ; -J v_i ).
48 Ground States of Spin Hamiltonians
Ground states of typical physical models (Ising-ZZ, various Heisenberg models) are either symmetric or skew-symmetric.
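The block diagonalization can be checked numerically: the spectrum of a symmetric persymmetric A is the union of the spectra of the two half-size blocks. The sketch writes the blocks as B ± CJ, which equals B ± JC^T since persymmetry forces C^T = JCJ; sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 4
A = rng.normal(size=(2 * n, 2 * n))
A = (A + A.T) / 2                       # make A symmetric ...
Jf = np.fliplr(np.eye(2 * n))
A = (A + Jf @ A @ Jf) / 2               # ... and persymmetric

J = np.fliplr(np.eye(n))
B, C = A[:n, :n], A[:n, n:]
ev_blocks = np.sort(np.concatenate([np.linalg.eigvalsh(B + C @ J),
                                    np.linalg.eigvalsh(B - C @ J)]))
print(np.allclose(ev_blocks, np.linalg.eigvalsh(A)))
```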

49 MPS Representation of Symmetric Vectors
  x_{i_1,i_2,i_3,...,i_{N-1},i_N} = x_{ī_1,ī_2,ī_3,...,ī_{N-1},ī_N}  ⟺  J x = x   (BF)
with the flip operator ī := 1 - i. Example (N = 3): x = ( a b c d d c b a )^T
Theorem: MPS Representation of the Vector Symmetry ([Huckle13])
1 If all MPS matrix pairs (A_j^{(0)}, A_j^{(1)}) are connected via
    A_j^{(i_j)} = U_j A_j^{(ī_j)} U_{j+1},  j = 1, ..., N,   (1)
  with involutions U_j (U_j^2 = I), the represented vector is symmetric (BF).
2 Any symmetric vector (BF) can be represented by an MPS fulfilling relations (1).
Reduction factor: 2
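Relation (1) can be tried out numerically: choosing all U_j equal to the exchange matrix (an involution) and setting A^{(1)} = U A^{(0)} U at every site yields a vector equal to its own reversal, i.e., Jx = x. Toy sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
N, D = 5, 3
U = np.fliplr(np.eye(D))                # an involution: U @ U = I
As = []
for _ in range(N):
    A0 = rng.normal(size=(D, D))
    As.append([A0, U @ A0 @ U])         # relation (1) with U_j = U_{j+1} = U

x = np.empty(2 ** N)
for idx in range(2 ** N):               # brute-force contraction of the MPS
    M = np.eye(D)
    for j in range(N):
        M = M @ As[j][(idx >> (N - 1 - j)) & 1]
    x[idx] = np.trace(M)
print(np.allclose(x, x[::-1]))          # bit-flip of all indices = reversal
```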

50 MPS Representation of Symmetric Vectors (cont.)
Theorem: Uniqueness Result for Symmetric Vectors ([Huckle13])
Assume that the MPS matrices (over K) are related by A_j^{(1)} = U_j A_j^{(0)} V_j, with square matrices U_j and V_j of appropriate size (j = 1, ..., N). If any choice of matrices A_j^{(0)} results in the symmetry of the represented vector x, i.e., J x = x, then U_j and V_j are, up to a scalar factor, involutions for all j: U_j^2 = u_j I, V_j^2 = v_j I. Furthermore, U_j = c_j V_j, j = 1, ..., N, with constants c_j.
This theorem gives an argument to use involutions as a heuristic.
An involution has eigenvalues ±1 ⇒ choose U_j = J.

51 Exploiting Symmetries in Computations
Impose symmetry conditions on the variational ground-state search.
Suppose that the desired ground-state vector x is symmetric, i.e., J x = x.
Then 0 ≤ x^H (I - J) x = x^H (x - Jx) = 0, and for any y:
  λ_min ≤ y^H H y / y^H y ≤ y^H H y / y^H y + ρ y^H (I - J) y / y^H y = y^H ( H + ρ(I - J) ) y / y^H y.
Apply a ground-state algorithm (e.g., DMRG) to the modified Hamiltonian H(ρ) := H + ρ(I - J) for some ρ > 0.
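The effect of the penalty term can be seen on a dense toy matrix: for a symmetric persymmetric H every eigenvector satisfies Jx = x or Jx = -x, and adding ρ(I - sign·J) leaves the ground-state sector untouched while shifting the other sector up by 2ρ. Sizes and ρ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 6
H = rng.normal(size=(n, n))
H = (H + H.T) / 2
J = np.fliplr(np.eye(n))
H = (H + J @ H @ J) / 2                 # symmetric persymmetric toy H

w, V = np.linalg.eigh(H)
x = V[:, 0]                             # ground state: Jx = x or Jx = -x
sign = 1.0 if np.allclose(J @ x, x) else -1.0
rho = 10.0
Hp = H + rho * (np.eye(n) - sign * J)   # penalize the *other* sector
w2 = np.linalg.eigvalsh(Hp)
print(w[0], w2[0])                      # ground-state energy is unchanged
```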

52 Numerical Results: Number of Iterations
Table: Heisenberg-XXX model of N = 100 spins: number of DMRG sweeps until convergence, for various penalty weights ρ and bond dimensions D. [Table entries not preserved in the transcription.]

53 Numerical Studies: Convergence
[Plot: relative error |λ_1 - λ̃_1| / |λ_1| vs. outer ALS iteration for D = 5, 10, 15, 20, each with ρ = 0 and ρ = 1.]
Figure: Heisenberg-XXX model of N = 100 spins: approximation error.

54 Conclusions
Simulation of quantum systems relies on standard problems of numerical linear algebra
The Hilbert space grows exponentially in the physical system size
Non-generic behavior of quantum systems: the nature of the underlying system translates into structured objects of linear and multilinear algebra (e.g., symmetries)
Numerical treatment requires efficient algorithms acting on problem-adapted data structures
NMR computation: problem reduction and algorithmic improvement
Projection onto the MPS manifold may improve the solution
Symmetry-adapted ground-state search improves variational MPS methods
Thank you for your attention.

58 References
T. Auckenthaler, M. Bader, T. Huckle, A. Spörl, and K. Waldherr: Matrix exponentials and parallel prefix computation in a quantum control problem. Parallel Comput.
T. Huckle and K. Waldherr: Subspace iteration methods in terms of matrix product states. Proc. Appl. Math. Mech.
T. Huckle, K. Waldherr, and T. Schulte-Herbrüggen: Computations in quantum tensor networks. Lin. Alg. Appl.
T. Huckle, K. Waldherr, and T. Schulte-Herbrüggen: Exploiting matrix symmetries and physical symmetries in matrix product states and tensor trains. Linear Multilinear A.


More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

Journal Club: Brief Introduction to Tensor Network

Journal Club: Brief Introduction to Tensor Network Journal Club: Brief Introduction to Tensor Network Wei-Han Hsiao a a The University of Chicago E-mail: weihanhsiao@uchicago.edu Abstract: This note summarizes the talk given on March 8th 2016 which was

More information

Methods for sparse analysis of high-dimensional data, II

Methods for sparse analysis of high-dimensional data, II Methods for sparse analysis of high-dimensional data, II Rachel Ward May 26, 2011 High dimensional data with low-dimensional structure 300 by 300 pixel images = 90, 000 dimensions 2 / 55 High dimensional

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

Numerical tensor methods and their applications

Numerical tensor methods and their applications Numerical tensor methods and their applications 8 May 2013 All lectures 4 lectures, 2 May, 08:00-10:00: Introduction: ideas, matrix results, history. 7 May, 08:00-10:00: Novel tensor formats (TT, HT, QTT).

More information

Efficient time evolution of one-dimensional quantum systems

Efficient time evolution of one-dimensional quantum systems Efficient time evolution of one-dimensional quantum systems Frank Pollmann Max-Planck-Institut für komplexer Systeme, Dresden, Germany Sep. 5, 2012 Hsinchu Problems we will address... Finding ground states

More information

Orthogonal tensor decomposition

Orthogonal tensor decomposition Orthogonal tensor decomposition Daniel Hsu Columbia University Largely based on 2012 arxiv report Tensor decompositions for learning latent variable models, with Anandkumar, Ge, Kakade, and Telgarsky.

More information

Math 671: Tensor Train decomposition methods II

Math 671: Tensor Train decomposition methods II Math 671: Tensor Train decomposition methods II Eduardo Corona 1 1 University of Michigan at Ann Arbor December 13, 2016 Table of Contents 1 What we ve talked about so far: 2 The Tensor Train decomposition

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 19: More on Arnoldi Iteration; Lanczos Iteration Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 17 Outline 1

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Linear Algebra (Review) Volker Tresp 2017

Linear Algebra (Review) Volker Tresp 2017 Linear Algebra (Review) Volker Tresp 2017 1 Vectors k is a scalar (a number) c is a column vector. Thus in two dimensions, c = ( c1 c 2 ) (Advanced: More precisely, a vector is defined in a vector space.

More information

Quantum Systems: Dynamics and Control 1

Quantum Systems: Dynamics and Control 1 Quantum Systems: Dynamics and Control 1 Mazyar Mirrahimi and Pierre Rouchon 3 February, 18 1 See the web page: http://cas.ensmp.fr/~rouchon/masterupmc/index.html INRIA Paris, QUANTIC research team 3 Mines

More information

Time Evolving Block Decimation Algorithm

Time Evolving Block Decimation Algorithm Time Evolving Block Decimation Algorithm Application to bosons on a lattice Jakub Zakrzewski Marian Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Center, Jagiellonian University,

More information

Quantum Many Body Systems and Tensor Networks

Quantum Many Body Systems and Tensor Networks Quantum Many Body Systems and Tensor Networks Aditya Jain International Institute of Information Technology, Hyderabad aditya.jain@research.iiit.ac.in July 30, 2015 Aditya Jain (IIIT-H) Quantum Hamiltonian

More information

that determines x up to a complex scalar of modulus 1, in the real case ±1. Another condition to normalize x is by requesting that

that determines x up to a complex scalar of modulus 1, in the real case ±1. Another condition to normalize x is by requesting that Chapter 3 Newton methods 3. Linear and nonlinear eigenvalue problems When solving linear eigenvalue problems we want to find values λ C such that λi A is singular. Here A F n n is a given real or complex

More information

BRST and Dirac Cohomology

BRST and Dirac Cohomology BRST and Dirac Cohomology Peter Woit Columbia University Dartmouth Math Dept., October 23, 2008 Peter Woit (Columbia University) BRST and Dirac Cohomology October 2008 1 / 23 Outline 1 Introduction 2 Representation

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

BLAS: Basic Linear Algebra Subroutines Analysis of the Matrix-Vector-Product Analysis of Matrix-Matrix Product

BLAS: Basic Linear Algebra Subroutines Analysis of the Matrix-Vector-Product Analysis of Matrix-Matrix Product Level-1 BLAS: SAXPY BLAS-Notation: S single precision (D for double, C for complex) A α scalar X vector P plus operation Y vector SAXPY: y = αx + y Vectorization of SAXPY (αx + y) by pipelining: page 8

More information

Spectral inequalities and equalities involving products of matrices

Spectral inequalities and equalities involving products of matrices Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department

More information

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models Bare minimum on matrix algebra Psychology 588: Covariance structure and factor models Matrix multiplication 2 Consider three notations for linear combinations y11 y1 m x11 x 1p b11 b 1m y y x x b b n1

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Chapter 2 The Group U(1) and its Representations

Chapter 2 The Group U(1) and its Representations Chapter 2 The Group U(1) and its Representations The simplest example of a Lie group is the group of rotations of the plane, with elements parametrized by a single number, the angle of rotation θ. It is

More information

NOTES ON BILINEAR FORMS

NOTES ON BILINEAR FORMS NOTES ON BILINEAR FORMS PARAMESWARAN SANKARAN These notes are intended as a supplement to the talk given by the author at the IMSc Outreach Programme Enriching Collegiate Education-2015. Symmetric bilinear

More information

MAT265 Mathematical Quantum Mechanics Brief Review of the Representations of SU(2)

MAT265 Mathematical Quantum Mechanics Brief Review of the Representations of SU(2) MAT65 Mathematical Quantum Mechanics Brief Review of the Representations of SU() (Notes for MAT80 taken by Shannon Starr, October 000) There are many references for representation theory in general, and

More information

From Matrix to Tensor. Charles F. Van Loan

From Matrix to Tensor. Charles F. Van Loan From Matrix to Tensor Charles F. Van Loan Department of Computer Science January 28, 2016 From Matrix to Tensor From Tensor To Matrix 1 / 68 What is a Tensor? Instead of just A(i, j) it s A(i, j, k) or

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

Appendix A: Matrices

Appendix A: Matrices Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Ensembles and incomplete information

Ensembles and incomplete information p. 1/32 Ensembles and incomplete information So far in this course, we have described quantum systems by states that are normalized vectors in a complex Hilbert space. This works so long as (a) the system

More information

Methods for sparse analysis of high-dimensional data, II

Methods for sparse analysis of high-dimensional data, II Methods for sparse analysis of high-dimensional data, II Rachel Ward May 23, 2011 High dimensional data with low-dimensional structure 300 by 300 pixel images = 90, 000 dimensions 2 / 47 High dimensional

More information

1 Fundamental physical postulates. C/CS/Phys C191 Quantum Mechanics in a Nutshell I 10/04/07 Fall 2007 Lecture 12

1 Fundamental physical postulates. C/CS/Phys C191 Quantum Mechanics in a Nutshell I 10/04/07 Fall 2007 Lecture 12 C/CS/Phys C191 Quantum Mechanics in a Nutshell I 10/04/07 Fall 2007 Lecture 12 In this and the next lecture we summarize the essential physical and mathematical aspects of quantum mechanics relevant to

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725 Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Spectral Theorem for Self-adjoint Linear Operators

Spectral Theorem for Self-adjoint Linear Operators Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;

More information

Linear Algebra Methods for Data Mining

Linear Algebra Methods for Data Mining Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 1. Basic Linear Algebra Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki Example

More information

7 Principal Component Analysis

7 Principal Component Analysis 7 Principal Component Analysis This topic will build a series of techniques to deal with high-dimensional data. Unlike regression problems, our goal is not to predict a value (the y-coordinate), it is

More information

Parallel-in-time integrators for Hamiltonian systems

Parallel-in-time integrators for Hamiltonian systems Parallel-in-time integrators for Hamiltonian systems Claude Le Bris ENPC and INRIA Visiting Professor, The University of Chicago joint work with X. Dai (Paris 6 and Chinese Academy of Sciences), F. Legoll

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries Fakultät für Mathematik TU Chemnitz, Germany Peter Benner benner@mathematik.tu-chemnitz.de joint work with Heike Faßbender

More information

Tensor Network Computations in Quantum Chemistry. Charles F. Van Loan Department of Computer Science Cornell University

Tensor Network Computations in Quantum Chemistry. Charles F. Van Loan Department of Computer Science Cornell University Tensor Network Computations in Quantum Chemistry Charles F. Van Loan Department of Computer Science Cornell University Joint work with Garnet Chan, Department of Chemistry and Chemical Biology, Cornell

More information

Strassen-like algorithms for symmetric tensor contractions

Strassen-like algorithms for symmetric tensor contractions Strassen-like algorithms for symmetric tensor contractions Edgar Solomonik Theory Seminar University of Illinois at Urbana-Champaign September 18, 2017 1 / 28 Fast symmetric tensor contractions Outline

More information

Quantum Information & Quantum Computing

Quantum Information & Quantum Computing Math 478, Phys 478, CS4803, February 9, 006 1 Georgia Tech Math, Physics & Computing Math 478, Phys 478, CS4803 Quantum Information & Quantum Computing Problems Set 1 Due February 9, 006 Part I : 1. Read

More information

Matrices, Moments and Quadrature, cont d

Matrices, Moments and Quadrature, cont d Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

Higher-Order Singular Value Decomposition (HOSVD) for structured tensors

Higher-Order Singular Value Decomposition (HOSVD) for structured tensors Higher-Order Singular Value Decomposition (HOSVD) for structured tensors Definition and applications Rémy Boyer Laboratoire des Signaux et Système (L2S) Université Paris-Sud XI GDR ISIS, January 16, 2012

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Lie algebraic quantum control mechanism analysis

Lie algebraic quantum control mechanism analysis Lie algebraic quantum control mechanism analysis June 4, 2 Existing methods for quantum control mechanism identification do not exploit analytic features of control systems to delineate mechanism (or,

More information

Training Schro dinger s Cat

Training Schro dinger s Cat Training Schro dinger s Cat Quadratically Converging algorithms for Optimal Control of Quantum Systems David L. Goodwin & Ilya Kuprov d.goodwin@soton.ac.uk comp-chem@southampton, Wednesday 15th July 2015

More information

. D CR Nomenclature D 1

. D CR Nomenclature D 1 . D CR Nomenclature D 1 Appendix D: CR NOMENCLATURE D 2 The notation used by different investigators working in CR formulations has not coalesced, since the topic is in flux. This Appendix identifies the

More information

Conjugate Gradient Method

Conjugate Gradient Method Conjugate Gradient Method direct and indirect methods positive definite linear systems Krylov sequence spectral analysis of Krylov sequence preconditioning Prof. S. Boyd, EE364b, Stanford University Three

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Lecture 4. Tensor-Related Singular Value Decompositions. Charles F. Van Loan

Lecture 4. Tensor-Related Singular Value Decompositions. Charles F. Van Loan From Matrix to Tensor: The Transition to Numerical Multilinear Algebra Lecture 4. Tensor-Related Singular Value Decompositions Charles F. Van Loan Cornell University The Gene Golub SIAM Summer School 2010

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Parallel Tensor Compression for Large-Scale Scientific Data

Parallel Tensor Compression for Large-Scale Scientific Data Parallel Tensor Compression for Large-Scale Scientific Data Woody Austin, Grey Ballard, Tamara G. Kolda April 14, 2016 SIAM Conference on Parallel Processing for Scientific Computing MS 44/52: Parallel

More information

NUMERICAL METHODS WITH TENSOR REPRESENTATIONS OF DATA

NUMERICAL METHODS WITH TENSOR REPRESENTATIONS OF DATA NUMERICAL METHODS WITH TENSOR REPRESENTATIONS OF DATA Institute of Numerical Mathematics of Russian Academy of Sciences eugene.tyrtyshnikov@gmail.com 2 June 2012 COLLABORATION MOSCOW: I.Oseledets, D.Savostyanov

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information