Journal Club: Brief Introduction to Tensor Network

Wei-Han Hsiao
The University of Chicago
E-mail: weihanhsiao@uchicago.edu

Abstract: This note summarizes the talk given on March 8th, 2016, which introduced tensor network theory with the aim of providing the fundamentals for the following topic, the relation between tensor networks and holography. The note basically follows the arguments in A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States [1] and Quantum Information Meets Quantum Matter [2].

Contents

1 Tensor Network Basics
1.1 What is a tensor network
1.2 Why do we need a tensor network
1.3 How do we implement a tensor network
2 Matrix Product States

1 Tensor Network Basics

1.1 What is a tensor network

Generally speaking, a tensor network is a kind of decomposition and graphical representation of a general tensor. In this language, a rank-k tensor is represented as an object with k external legs. Depending on the dimension of the corresponding index, each leg can further be regarded as a bundle of tiny legs. In figure 1 we list some simple and frequently used examples of tensors: a scalar s, a vector v, and a rank-2 tensor t_{ij}.

Figure 1.

Using this kind of representation, we could in principle realize all tensor manipulations by drawing diagrams. For example, given two rank-2 tensors v_{ij} and w_{kl}, connecting one leg from each leaves a new object with two legs, i.e. a rank-2 tensor t_{il}. Algebraically, this is the conventional tensor contraction (or matrix multiplication)

t_{il} = v_{ij} w_{kl} δ_{jk}. (1.1)

Therefore, in a tensor network, tensor contractions can be carried out graphically by merely connecting legs of tensors. From this observation we also see that δ_{ij} is represented as a line (which of course has two legs). This is depicted in figure 2.

In quantum many-body systems one often encounters states of the form

|Ψ⟩ = Σ_{{i_k}} Ψ_{i_1 i_2 ... i_N} |i_1 ... i_N⟩. (1.2)
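As a concrete illustration of the contraction in eq. (1.1), here is a minimal numpy sketch (the code and names are my own, not part of the original note): connecting one leg of v with one leg of w is just a summation over the shared index, i.e. ordinary matrix multiplication.

```python
# Minimal sketch of eq. (1.1): t_{il} = v_{ij} w_{kl} delta_{jk} = sum_j v_{ij} w_{jl}.
import numpy as np

d = 3                                 # dimension carried by each leg (arbitrary choice)
v = np.random.rand(d, d)              # rank-2 tensor v_{ij}
w = np.random.rand(d, d)              # rank-2 tensor w_{kl}

# Connecting leg j of v to leg k of w: the delta identifies the two legs,
# so the contraction has a single shared summation index.
t = np.einsum('ij,jl->il', v, w)
assert np.allclose(t, v @ w)          # the same contraction as matrix multiplication
print(t.shape)                        # (3, 3): a rank-2 tensor with legs i and l
```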

Figure 2.

Given a specific basis |i_1 ... i_N⟩, manipulating quantum states is essentially equivalent to manipulating the wave functions Ψ_{i_1 i_2 ... i_N}, which are the components of a rank-N tensor. We first give some examples of graphical representations of tensors that appear frequently in quantum many-body systems. These are shown in figures 3 and 4.

Figure 3. Figure 4.

What about the network? In the above examples there was no network-like ingredient. The idea of a network is to decompose a general big tensor, say Ψ_{i_1 ... i_N}, into contractions of little tensors. We have seen that contractions by δ_{ij} are represented as lines connecting legs of tensors. Consequently, a huge tensor, under such a decomposition, becomes a network. Generically, an arbitrary network with N net external legs is a rank-N tensor. However, depending on the structure of the internal contractions and the number of parameters contained in the building blocks, not every tensor network is useful or can be implemented efficiently. Figure 5 shows several celebrated examples of tensor networks: matrix product states (MPS), projected entangled pair states (PEPS), and the multi-scale entanglement renormalization ansatz (MERA). It is worth noting that in some literature MPS = PEPS, and what is called PEPS here is instead called tensor product states. In this note I will regard PEPS as 2-dimensional many-body states.

Figure 5.
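The note does not spell out how such a decomposition is obtained in practice. As an illustration only, the following sketch (all helper names are mine) splits a generic wave-function tensor into a chain of small tensors, an open-boundary MPS, by repeated singular value decompositions, which is the standard textbook construction.

```python
# A sketch (assumptions mine) of decomposing a big tensor Psi_{i1...iN}
# into a chain of small rank-3 tensors (an open-boundary MPS) via repeated SVDs.
import numpy as np

def mps_from_state(psi, p):
    """psi: vector of length p**N; returns a list of cores of shape (chi_left, p, chi_right)."""
    N = int(round(np.log(psi.size) / np.log(p)))
    cores, chi_l = [], 1
    rest = psi.reshape(chi_l * p, -1)
    for _ in range(N - 1):
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        chi_r = s.size                               # bond dimension (truncate here to cap chi)
        cores.append(u.reshape(chi_l, p, chi_r))
        rest = (np.diag(s) @ vh).reshape(chi_r * p, -1)
        chi_l = chi_r
    cores.append(rest.reshape(chi_l, p, 1))
    return cores

# Example: a random 6-site spin-1/2 state becomes a chain of six small tensors.
cores = mps_from_state(np.random.rand(2**6), p=2)
print([c.shape for c in cores])
```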

The MPS looks like the simplest tensor network we can imagine in 1D, yet it turns out to be very powerful, which is strongly related to the success of the density matrix renormalization group (DMRG) algorithm in one dimension. Roughly speaking, when performing DMRG, if one postpones the superblock combination procedure to the end of the renormalization, one sees that the whole structure is basically an MPS representation. Later we will comment on why MPS and DMRG can be so efficient in one dimension and why they fail in higher dimensions.

Before diving into a more sophisticated analysis, we can first count the degrees of freedom stored in an MPS. Suppose the dimension of the physical external leg i_n is p_n ≤ p, and the dimensions of the internal contraction bonds are bounded by χ. The number of parameters needed to specify an N-site MPS is then bounded by

# ∼ O(N p χ²), (1.3)

which scales linearly in N. On the other hand, the number of degrees of freedom of a general tensor is of order

# ∼ min{p_n}^N. (1.4)

It is apparent that, for a given χ, MPS states cannot span the whole Hilbert space of the target tensor. More generally, a practical tensor network usually contains a number of parameters of order

# = O(poly(N) poly(χ)). (1.5)

In the following sections we will comment on this discrepancy in degrees of freedom between a general tensor and tensor network states.
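The gap between eqs. (1.3) and (1.4) is easy to check numerically; the snippet below is only an illustration with arbitrarily chosen values of N, p, and χ.

```python
# Quick arithmetic check of eqs. (1.3)-(1.4): an MPS stores O(N p chi^2) numbers,
# while a generic rank-N tensor stores of order p^N of them.
N, p, chi = 100, 2, 50
mps_params = N * p * chi**2      # 500000, linear in the system size N
full_params = p**N               # about 1.3e30, exponential in N
print(mps_params, full_params)
```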

1.2 Why do we need a tensor network

In the previous subsection we introduced the notion of a tensor network as a kind of graphical representation of general tensors, and one could hardly see the necessity of it. Indeed, for a general tensor a mere graphical representation does not help much. However, if a general giant tensor, a big black box, can be illustrated as a network, then states can be categorized by their entanglement patterns according to the different connection schemes. Moreover, from the estimate at the end of the last section we saw that the degrees of freedom of an applicable tensor network grow only polynomially with system size, which is tractable. This is one of the reasons that make the tensor network representation useful.

As we know, the whole Hilbert space H of an arbitrary many-body system is always far too large and would take time scales beyond the age of the universe to explore. Nonetheless, the states with a certain entanglement pattern may occupy merely a small corner of the Hilbert space. If those are exactly the states that concern us, we need not sweep the complete H to extract the physics. Instead, we can concentrate on states represented by the class of networks that share a similar entanglement structure. For example, it has fortunately been shown that the ground states of local gapped many-body Hamiltonians cannot be arbitrary, in the sense that their entanglement entropy does not scale extensively but obeys the so-called area law. The area law is without doubt a special kind of entanglement pattern, and therefore the set containing those ground states lies in a corner of the Hilbert space. We will show explicitly in the following sections that MPS states in one dimension obey this property. Here we give a heuristic argument using PEPS in two dimensions. As shown in the figure, the system is divided into an inner part and an outer part, where the boundary of the inner part cuts 4L internal bonds. We can write a general state as

|Ψ⟩ = Σ_{α=1}^{χ^{4L}} |in(α)⟩ |out(α)⟩, (1.6)

and therefore the reduced density matrix is

ρ_in = tr_{H_out}[|Ψ⟩⟨Ψ|] = Σ_{α,α'} ⟨out(α')|out(α)⟩ |in(α)⟩⟨in(α')|. (1.7)

This reduced density matrix has rank bounded by χ^{4L}, and the entanglement entropy is thus bounded by the rank of ρ_in:

S(L) = −tr(ρ_in log ρ_in) ≤ 4L log χ. (1.8)

If χ = 1, then S(L) = 0, as anticipated, since |Ψ⟩ is just a product state in this case. Changing the value of χ modifies this relation only inside the logarithm and does not change the scaling. This implies that to change the scaling one has to change the geometric pattern of the tensor network state.

1.3 How do we implement a tensor network

2 Matrix Product States

In this section we examine carefully the properties of matrix product states and their implications. To demonstrate them, we would like a formal expression for a matrix product state. It is given by

|Ψ⟩ = Σ_{i_1 ... i_N} tr[A^{[1]}_{i_1} A^{[2]}_{i_2} ... A^{[N]}_{i_N}] |i_1 ... i_N⟩ (2.1)

for a periodic system, or

|Ψ⟩ = Σ_{i_1 ... i_N} ⟨l| A^{[1]}_{i_1} A^{[2]}_{i_2} ... A^{[N]}_{i_N} |r⟩ |i_1 ... i_N⟩ (2.2)

for a system with given boundary conditions ⟨l| and |r⟩.
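As a concrete reading of eq. (2.1), the following sketch (helper names mine, not from the note) assembles the full state vector of a small periodic MPS by taking the trace of the product of matrices for each basis configuration; it is of course only practical for a handful of sites.

```python
# Sketch of eq. (2.1): amplitude of |i1 ... iN> = tr[A^{[1]}_{i1} ... A^{[N]}_{iN}].
import numpy as np
from functools import reduce
from itertools import product

def mps_amplitude(A, indices):
    """A: list over sites of arrays of shape (p, chi, chi); indices = (i1, ..., iN)."""
    mats = [A[n][i] for n, i in enumerate(indices)]
    return np.trace(reduce(np.matmul, mats))

# A small random periodic MPS with p = 2, chi = 3, N = 4 sites.
p, chi, N = 2, 3, 4
A = [np.random.rand(p, chi, chi) for _ in range(N)]
psi = np.array([mps_amplitude(A, idx) for idx in product(range(p), repeat=N)])
psi /= np.linalg.norm(psi)
print(psi.shape)        # (16,): one amplitude per basis state of the 4-site chain
```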

Below are a few examples.

1. χ = 1 describes product states. For example, consider a two-level (p_i = 2) many-body system with A^{[i]}_1 = α and A^{[i]}_2 = β, where |α|² + |β|² = 1. Then

|Ψ⟩ = [α|1⟩ + β|2⟩] ⊗ ... ⊗ [α|1⟩ + β|2⟩]. (2.3)

2. When χ > 1, matrix product states are usually entangled. An example with p_i = 2 and χ = 2 is given by

A^{[i]}_1 = [[1, 0], [0, 0]]  and  A^{[i]}_2 = [[0, 0], [0, 1]]. (2.4)

The state produced by these matrices is the GHZ (Greenberger-Horne-Zeilinger) state

|GHZ⟩ = |0⟩^{⊗N} + |1⟩^{⊗N}. (2.5)

3. Yet another example is the famous AKLT state, which is the ground state of the antiferromagnetic spin-1 chain defined by the Hamiltonian

H = Σ_i S_i · S_{i+1} + (1/3)(S_i · S_{i+1})². (2.6)

It can be described by a tensor network state with p = 3 and χ = 2:

A_1 = ½(σ_x + iσ_y),  A_0 = σ_z,  A_{-1} = ½(σ_x − iσ_y). (2.7)

In practice, we rarely deal with a state by itself. Instead, what we often manipulate are matrix elements. It is therefore convenient to introduce an object called the double tensor,

E = Σ_i A_i ⊗ A_i^*,  i.e.  E_{αγ,βδ} = Σ_i A_{i,αβ} (A_{i,γδ})^*. (2.8)

Given the A's, it is straightforward to obtain E. What about the converse? It turns out the result is not unique. Since the double tensor is a direct product of local matrices, if two sets of matrices A and B are related by a unitary transformation,

A_i = Σ_j U_{ij} B_j, (2.9)

then

E = Σ_{ijk} U_{ij} B_j ⊗ (U_{ik} B_k)^* = Σ_{jk} (Σ_i U_{ij} U^*_{ik}) B_j ⊗ B^*_k = Σ_j B_j ⊗ B^*_j. (2.10)

Conversely, if E can be decomposed into tensor products of A's or of B's, then A and B are related by a unitary transformation. Even so, A cannot be uniquely determined from the information in E alone. There is a useful representation: E can be regarded as a matrix with grouped indices (αβ) and (γδ), which is Hermitian and non-negative. We can diagonalize E to obtain eigenvalues λ_i and eigenvectors v_i, and then identify A_{i,αβ} = √λ_i v_{i,αβ}.
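The double tensor of eq. (2.8) is easy to build explicitly. The sketch below (my own illustration) constructs E for the GHZ matrices of eq. (2.4), whose largest eigenvalue is degenerate, consistent with property (i) examined below, and then extracts the correlation length ξ = −1/log(λ_2/λ_1) of eq. (2.14) for a generic random χ = 2 MPS, whose largest eigenvalue is non-degenerate.

```python
# Sketch of eq. (2.8): E_{(alpha gamma),(beta delta)} = sum_i A_i (x) A_i^*,
# viewed as a chi^2 x chi^2 matrix, and the correlation length of eq. (2.14).
import numpy as np

def double_tensor(A):
    """A: array of shape (p, chi, chi); returns E as a (chi^2, chi^2) matrix."""
    return sum(np.kron(A[i], A[i].conj()) for i in range(A.shape[0]))

# GHZ matrices of eq. (2.4): the largest eigenvalue of E is two-fold degenerate,
# so there is no finite correlation length (the GHZ state has long-range order).
A_ghz = np.array([[[1., 0.], [0., 0.]],
                  [[0., 0.], [0., 1.]]])
print(np.linalg.eigvals(double_tensor(A_ghz)))          # eigenvalues 1, 0, 0, 1

# A generic random chi = 2 MPS: non-degenerate top eigenvalue, hence a finite
# correlation length xi = -1 / log(lambda_2 / lambda_1) after normalizing lambda_1 to 1.
A_rnd = np.random.rand(2, 2, 2)
lam = np.sort(np.abs(np.linalg.eigvals(double_tensor(A_rnd))))[::-1]
print(-1.0 / np.log(lam[1] / lam[0]))
```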

In the following let us examine two properties: (i) an MPS has a finite correlation length if the double tensor has a non-degenerate eigenspace for its largest eigenvalue, and (ii) its entanglement entropy obeys an area law.

To see the first fact, let us consider the correlation function of single-particle operators O_1 and O_2 in an N-site system, where the two sites are separated by a distance L. In terms of the double tensors E,

⟨O_1 O_2⟩ − ⟨O_1⟩⟨O_2⟩ = tr[E^{N−L−2} E[O_1] E^L E[O_2]] / tr[E^N] − (tr[E^{N−1} E[O_1]] / tr[E^N]) (tr[E^{N−1} E[O_2]] / tr[E^N]), (2.11)

where

E[O] = Σ_{i,j} O_{ij} A_i ⊗ A^*_j. (2.12)

Without loss of generality, we can set the scale such that the largest eigenvalue of E is 1, so that the norm of the wave function, tr E^N, stays finite. In the thermodynamic limit N → ∞ we have E^N → P_1, where P_i is the projection operator onto the i-th eigenspace, and the two-point correlation function becomes

⟨O_1 O_2⟩ − ⟨O_1⟩⟨O_2⟩ = tr[P_1 E[O_1] (Σ_{λ<1} λ P_λ)^L E[O_2]] / tr[P_1] − (tr[P_1 E[O_1]] / tr[P_1]) (tr[P_1 E[O_2]] / tr[P_1]). (2.13)

From this expression we see that if P_1 is one-dimensional (non-degenerate), the two-point correlation function is dominated by λ_2^L = e^{L log λ_2} with λ_2 < 1. Hence the correlation length is

ξ = −1 / log λ_2. (2.14)

In this sense, an MPS cannot describe states at a phase transition, where the correlation length diverges.

The entanglement entropy can also be extracted using double tensors. As we mentioned earlier, E^{[i]} can be regarded as a matrix with grouped indices. Let us divide the system into sites i = 1 ... k and i = k+1 ... N. Then we can define

E_l = Π_{i=1}^{k} E^{[i]},  E_r = Π_{i=k+1}^{N} E^{[i]}. (2.15)

Both E_l and E_r are χ² × χ² Hermitian and non-negative matrices and hence have at most χ² non-vanishing eigenvalues. By the approach mentioned above we can therefore find at most χ² non-vanishing A^l_i and A^r_i in which the physical information is stored. Consequently, the physical degrees of freedom on either side of the cut are bounded by χ², and

S ≤ 2 log χ. (2.16)

This is the area law in one dimension. A remark here is that not every one-dimensional state obeying an entanglement area law is an MPS. Fortunately, the bond dimension χ needed to approximate any such state scales only polynomially with the system size.

References

[1] Roman Orus, A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States.

[2] Bei Zeng, Xie Chen, Duan-Lu Zhou, and Xiao-Gang Wen, Quantum Information Meets Quantum Matter.