Introduction to Tensor Network States -- Concepts and Algorithms
Z. Y. Xie (谢志远)
2018.10.29, ITP, Beijing
Outline
- Illusion of complexity of the Hilbert space
- Matrix product states (MPS) as lowly-entangled states
- Tensor network states (TNS) as lowly-entangled states
- Partition functions as tensor networks
- Brief review of RG techniques for evaluating a 2D tensor network
- Partial list of successful applications of TNS
Why do we need Tensor Network States (TNS)?
- The Hilbert space is too large: its dimension grows exponentially with the system size N (the curse of dimensionality, the "exponential wall" problem).
- Dirac: the equations are much too complicated to be soluble.
- Kohn: for large N, the many-body wavefunction is not a legitimate scientific concept.
Why do we need Tensor Network States (TNS)?
- Nature does not exhaust all the possibilities.
- A 1024 x 1024 binary image can depict essentially anything in the universe; in fact there are vastly more images than things to depict: 2^(1024x1024) ~ 10^315653 of them.
- Almost all of these images are meaningless noise that you will never see.
Why do we need Tensor Network States (TNS)?
- Nature is meaningful and usually local: particles are uncorrelated at long distances.
- Entanglement entropy: S = -Tr(ρ_sys ln ρ_sys), defined for a bipartition of the lattice into a system (sys) and its environment (env); a small illustration is sketched below.
- Area-law conjecture: every physically relevant system satisfies S ∝ |∂sys|, the boundary area of the system (possibly with log corrections in gapless/critical systems).
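To make the scale of the problem concrete, here is a small illustration (my own addition, not from the talk): a generic random state is almost maximally entangled across a sys/env cut, i.e., it obeys a volume law; area-law states are the rare exception.

```python
# Entanglement entropy of a generic random state under a sys/env
# bipartition, from the singular values of the reshaped state vector.
import numpy as np

psi = np.random.randn(2**6, 2**6)      # 6 "sys" spins x 6 "env" spins
psi /= np.linalg.norm(psi)             # normalize the state
s = np.linalg.svd(psi, compute_uv=False)
p = s**2                               # spectrum of rho_sys
S = -np.sum(p * np.log(p + 1e-30))
print(S, np.log(2**6))                 # close to maximal: volume law
```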
Why do we need Tensor Network States (TNS)?
- From the viewpoint of entanglement, the full Hilbert space is too large to study, or even to enumerate!
- TNS are tailored to lowly-entangled states.
- This lowly-entangled corner is exactly the one relevant for physical quantum many-body (QMB) states: the area-law states.
Graphical Notations
- Open links and shared (contracted) links
- States and operators
- State norm
- Expectation values
1D TNS: Matrix Product State (MPS)
- Graphics and expression: |ψ⟩ = Σ_{s1...sN} Tr(A^{s1} A^{s2} ... A^{sN}) |s1 s2 ... sN⟩
- Some exact MPS: GHZ, Majumdar-Ghosh, AKLT
- Matrix Product Operators (MPO): e.g., the Heisenberg Hamiltonian
- Key point: MPS obey the 1D area law and are always finitely correlated
- Canonical form (a minimal sketch follows below)
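Because the canonical form underlies most MPS algorithms, here is a minimal sketch of left-canonicalization by a sweep of SVDs; the (left, physical, right) leg ordering and the function name are my own conventions, not from the talk.

```python
import numpy as np

def left_canonicalize(mps):
    """Sweep left -> right; afterwards each tensor A satisfies the
    left-canonical condition  sum_s A[s]^T A[s] = identity."""
    mps = [a.copy() for a in mps]
    for k in range(len(mps) - 1):
        Dl, d, Dr = mps[k].shape
        u, s, vt = np.linalg.svd(mps[k].reshape(Dl * d, Dr),
                                 full_matrices=False)
        mps[k] = u.reshape(Dl, d, -1)            # isometric tensor
        mps[k + 1] = np.einsum('i,ij,jsr->isr',  # push s*Vt to the right
                               s, vt, mps[k + 1])
    return mps

# quick check on a random 6-site MPS with d = 2, D = 4
mps = [np.random.rand(1 if i == 0 else 4, 2,
                      1 if i == 5 else 4) for i in range(6)]
A = left_canonicalize(mps)[2]
print(np.allclose(np.einsum('lsr,lsq->rq', A, A), np.eye(A.shape[2])))
```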
1D TNS: Matrix Product State (MPS)
- Energy minimization: find the MPS |ψ⟩ that minimizes E = ⟨ψ|H|ψ⟩ / ⟨ψ|ψ⟩.
- Imaginary-time evolution: |ψ_GS⟩ ∝ lim_{τ→∞} e^{-τH} |ψ_0⟩ (a toy sketch of the principle follows below).
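The imaginary-time idea in its simplest possible setting (dense vectors, two sites, no tensor network truncation at all); a toy sketch of the principle, not the actual TNS algorithm.

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([0.5, -0.5])
sp = np.array([[0., 1.], [0., 0.]])     # S^+  (sp.T is S^-)
# two-site Heisenberg model: H = Sz Sz + (S+ S- + S- S+)/2
H = np.kron(sz, sz) + 0.5 * (np.kron(sp, sp.T) + np.kron(sp.T, sp))

psi = np.random.rand(4)                 # random initial state
U = expm(-0.1 * H)                      # one imaginary-time step, tau = 0.1
for _ in range(200):                    # repeated projection ...
    psi = U @ psi
    psi /= np.linalg.norm(psi)          # ... with normalization
print(psi @ H @ psi)                    # -> -0.75, the singlet energy
```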
2D Tensor Network State
Graphical notation (the local basis is always implicitly there):
- scalar (e.g., a closed network with PBC)
- operator (parameterized by local tensors)
- state (with virtual and physical indices)
2D Tensor Network State
- Graphics and expression
- Norm and expectation values: ⟨ψ|ψ⟩ and ⟨ψ|O|ψ⟩ are double-layer tensor networks.
2D Tensor Network State
Members: PEPS and PESS (entanglement is carried by virtual particles)
- Projected Entangled Pair States (PEPS): virtual particles form entangled pairs [arXiv:cond-mat/0407066]
- Projected Entangled Simplex States (PESS): virtual particles form entangled simplices [PRX 4, 011025 (2014)]
2D Tensor Network State
Members: Correlator Product States (CPS)
- A product of correlators; no virtual particles involved.
- See, e.g., H. J. Changlani et al., PRB 80, 245116 (2009).
2D Tensor Network State
Members: TTN and MERA
- Tree Tensor Network states (TTN)
- Multi-scale Entanglement Renormalization Ansatz (MERA) [G. Vidal, PRL 99, 220405 (2007)]
- Rooted in the RG tradition: Kadanoff, Fisher, Wilson
2D Tensor Network State
Different classes encode different entanglement structures. Area-law summary:
- 1D area law: TTN
- 1D area law with log corrections: 1D MERA
- 2D area law: PEPS, PESS, short-range CPS, 2D MERA
- 2D area law with log corrections (even volume law): long-range CPS, branching MERA
2D Tensor Network State
Some exact states: RVB, toric code, simplex solid states (SSS). SSS construction:
(1) Represent each physical spin-2 by two virtual spin-1's.
(2) The virtual spins within each simplex form a simplex singlet.
(3) Project the virtual space onto the physical spin-2 space.
2D Tensor Network State
Power-law-decaying correlations are possible; correlation functions are evaluated from the double-layer network ⟨Ψ|...|Ψ⟩.
2D Tensor Network State
Optimization by 2D iTEBD, the "simple update": a purely local vision of the environment! (One bond step is sketched below.)
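A rough sketch of a single simple-update bond step, written in 1D notation for brevity; in a 2D PEPS the bond weights on all surrounding bonds are absorbed before, and divided out after, the SVD. The tensor layouts and the random stand-in gate are my own assumptions.

```python
import numpy as np

d, D = 2, 4                              # physical / virtual dimensions
A = np.random.rand(D, d, D)              # site tensors (left, phys, right)
B = np.random.rand(D, d, D)
lam = np.random.rand(D)                  # diagonal bond weights
gate = np.linalg.qr(np.random.rand(d*d, d*d))[0].reshape(d, d, d, d)
# the random orthogonal 'gate' stands in for the Trotter gate exp(-dt*h)

# 1) absorb the bond weight and apply the gate locally
theta = np.einsum('lsa,a,atr,stuv->luvr', A, lam, B, gate)
# 2) split back by SVD and truncate to D states: this is the purely
#    *local* vision; no environment beyond the bond weight is used
u, s, vt = np.linalg.svd(theta.reshape(D * d, d * D), full_matrices=False)
A_new = u[:, :D].reshape(D, d, D)
lam_new = s[:D] / np.linalg.norm(s[:D])
B_new = vt[:D, :].reshape(D, d, D)
print(A_new.shape, lam_new.shape, B_new.shape)
```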
2D Tensor Network State
Fermionic statistics can be handled as well [P. Corboz, PRL 113, 046402 (2014)].
Partition function as a tensor network
Tensor network model (TNM): any statistical model with only local interactions has an exact tensor network representation of its partition function, Z = Tr ∏ T, a contraction of local tensors T (a concrete Ising construction is sketched below).
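A concrete instance for the 2D Ising model, using the standard bond-factorization construction; the final line is a sanity check on the smallest (1x1) torus, where the network can be contracted by hand.

```python
import numpy as np

beta = 0.4
eb, emb = np.exp(beta), np.exp(-beta)
M = np.array([[eb, emb], [emb, eb]])   # bond Boltzmann weight e^{beta s s'}
w, U = np.linalg.eigh(M)               # both eigenvalues > 0 for beta > 0
W = U @ np.diag(np.sqrt(w))            # factorization M = W @ W.T

# local tensor: T_ijkl = sum_s W_si W_sj W_sk W_sl  (one W per bond)
T = np.einsum('si,sj,sk,sl->ijkl', W, W, W, W)

# 1x1 torus: both bonds close on themselves, so Z = sum_ij T_ijij,
# while the direct computation gives sum_s e^{2 beta} = 2 e^{2 beta}
print(np.einsum('ijij->', T), 2 * np.exp(2 * beta))   # the two agree
```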
Partition function as a tensor network
Other constructions exist, e.g., based on group elements or on the dual lattice.
What have we discussed so far?
For quantum lattice systems:
- choose a suitable ansatz (e.g., the 5 classes above)
- obtain the wavefunction (e.g., 2 big classes of methods)
- obtain expectation values (e.g., 3 big classes of methods)
For classical statistical systems:
- map the model to a TNM (e.g., 3 methods)
- obtain expectation values (e.g., 3 big classes of methods)
RG methods to evaluate a tensor network
- Contracting a 2D tensor network exactly is a #P-complete problem!
- Strategy: approximation via renormalization (information compression).
- Transfer Matrix (TM) + Renormalization Group (RG), i.e., TMRG [X. Wang and T. Xiang, Phys. Rev. B 56, 5061 (1997)].
- Target: an effective low-dimensional representation of the transfer matrix T, which is then diagonalized.
- Basis transformations are iterated to reach the fixed point.
RG methods to evaluate a tensor network
Boundary MPS: infinite Time-Evolving Block Decimation (iTEBD) [R. Orus, Phys. Rev. B 78, 155117 (2008)]
- Target: an effective MPS representation of the dominant eigenvector of the transfer matrix.
- The fixed point is reached by the power method (sketched below).
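The power-method backbone, stripped of all MPS machinery: a dense stand-in transfer matrix instead of a tensor network. In boundary-MPS/iTEBD the vector v is an MPS, and each application of T is followed by a truncation that keeps the bond dimension fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((50, 50))          # stand-in for a row-to-row transfer matrix
v = rng.random(50)
for _ in range(500):              # power iteration towards the fixed point
    v = T @ v
    v /= np.linalg.norm(v)
lam = v @ T @ v                   # dominant-eigenvalue estimate
print(np.isclose(lam, np.linalg.eigvals(T).real.max()))   # True
```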
RG methods to evaluate a tensor network
Corner Transfer Matrix Renormalization Group (CTMRG)
- Target: an effective representation of the environment surrounding a local tensor A.
RG methods to evaluate a tensor network
CTMRG left move [P. Corboz, PRL 113, 046402 (2014)]
[Figure: 4x4 cluster built from corner matrices C1-C4, edge tensors E1-E4, and bulk tensors A; adjacent pieces are merged into a matrix]
- Enlarge the corners by gradually absorbing the system until the fixed point is reached (see the sketch below).
- Truncation by bond targeting: the effective cluster is always 4x4.
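A heavily simplified sketch of the grow-and-truncate loop, using a one-corner symmetric variant (for an isotropic model) rather than the four-corner left move of the slide; the pattern of absorbing tensors and projecting with an SVD isometry is the same. The Ising tensor is built as in the earlier snippet.

```python
import numpy as np

beta, chi = 0.4, 8
M = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])
w, Uw = np.linalg.eigh(M)
W = Uw @ np.diag(np.sqrt(w))
A = np.einsum('si,sj,sk,sl->ijkl', W, W, W, W)   # fully symmetric tensor
D = A.shape[0]

C = np.einsum('uldr->dr', A)      # corner: trace the two outer legs
E = np.einsum('uldr->ldr', A)     # edge: trace one outer leg

for _ in range(30):
    k = C.shape[0]
    # enlarge the corner by absorbing one bulk tensor and two edges
    Cg = np.einsum('amx,ab,bny,mnji->xiyj', E, C, E, A).reshape(k*D, k*D)
    u, s, _ = np.linalg.svd(Cg)
    P = u[:, :min(chi, k*D)]      # isometric truncation back to chi
    C = P.T @ Cg @ P
    # enlarge and project the edge in the same way
    Eg = np.einsum('xmy,madb->xadyb', E, A).reshape(k*D, D, k*D)
    E = np.einsum('xi,xdy,yj->idj', P, Eg, P)
    C /= np.abs(C).max()          # normalize to keep numbers finite
    E /= np.abs(E).max()

print(C.shape, E.shape)           # converged environment pieces
```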
RG methods to evaluate a tensor network
Coarse-graining RG: mimics Kadanoff's block-spin decimation in real space via scale transformations. The local degrees of freedom are renormalized and decimated in each scale transformation.
Local optimization: Levin-Nave Tensor RG (LN-TRG), realized by a local SVD.
Step 1: lattice deformation,
  M_{kj,il} = Σ_m T_{mji} T_{mlk} ≈ Σ_{n=1}^{D} U_{kj,n} λ_n V_{il,n}
SVD: the best scheme for truncating a matrix.
Step 2 of LN-TRG: decimation of the local degrees of freedom by summation,
  T'_{xyz} = Σ_{ijk} S_{xik} S_{yji} S_{zkj}
A complete scale-transformation step reduces the lattice from N to N/3 sites. (Both steps are sketched below.)
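The two steps above, transcribed literally into einsum calls with random data standing in for a real model; the sublattice bookkeeping of which S tensor sits on which site of the decimated lattice is glossed over here.

```python
import numpy as np

D = 4
T = np.random.rand(D, D, D)       # rank-3 tensor (triangular lattice)

# Step 1: deformation, M_{kj,il} = sum_m T_mji T_mlk, then a truncated
# SVD  M ~ sum_{n<=D} U_{kj,n} lam_n V_{il,n}  keeps only D states.
M = np.einsum('mji,mlk->kjil', T, T).reshape(D * D, D * D)
U, lam, Vt = np.linalg.svd(M)
S1 = (U[:, :D] * np.sqrt(lam[:D])).reshape(D, D, D)      # S1[k, j, n]
S2 = (Vt[:D, :].T * np.sqrt(lam[:D])).reshape(D, D, D)   # S2[i, l, n]

# Step 2: decimation, T'_xyz = sum_{ijk} S_xik S_yji S_zkj
S = S1.transpose(2, 0, 1)         # put the new bond first
Tp = np.einsum('xik,yji,zkj->xyz', S, S, S)
print(Tp.shape)                   # (D, D, D): one step, N -> N/3 sites
```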
Higher-Order TRG (HOTRG): realized by a local HOSVD [PRB 86, 045139 (2012)]
- Coarse-grain along the two lattice vectors alternately.
- Key problem: two bonds of dimension D^2 must be truncated back to D simultaneously; how?
Low-rank approximation of a tensor is still an open problem in itself!
Higher-Order SVD (HOSVD) [SIAM J. Matrix Anal. Appl. 21, 1253 (2000)]
- Definition: pseudo-diagonalization of the tensor by orthogonal transformations on every leg.
- Truncation of the transformations gives a good low-rank approximation of T.
- The U's can be obtained independently from directional (mode-wise) SVDs, as sketched below.
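A minimal sketch of the HOSVD of a 4-leg tensor via independent directional SVDs, matching the definition above; truncating the columns of each U afterwards gives the advertised (good, though generally not optimal) low-rank approximation.

```python
import numpy as np

D = 4
T = np.random.rand(D, D, D, D)

Us = []
for mode in range(4):                            # one SVD per leg
    m = np.moveaxis(T, mode, 0).reshape(D, -1)   # mode-k unfolding
    u, _, _ = np.linalg.svd(m, full_matrices=False)
    Us.append(u)

# core tensor S:  T = S x_1 U1 x_2 U2 x_3 U3 x_4 U4
S = np.einsum('ijkl,ia,jb,kc,ld->abcd', T, *Us)
T_rec = np.einsum('abcd,ia,jb,kc,ld->ijkl', S, *Us)
print(np.allclose(T, T_rec))                     # exact reconstruction
```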
In practical calculations, the isometries act as follows:
- U is obtained from the HOSVD of M, choosing the direction with the smaller truncation error.
- The two lattice directions are coarse-grained alternately, to capture more entanglement and to preserve the lattice symmetry.
- A more symmetric variant: bond SVD via gauge transformations.
Improvement 1: Second Renormalization Group (SRG), global optimization [PRL 103, 160601 (2009)]
The difference between NRG and DMRG is the basis-selection scheme:
- 1974, Wilson, NRG: states are weighted according to the spectrum of the system alone.
- 1992, White, DMRG: states are weighted according to the spectrum of the system's reduced density matrix, i.e., the entanglement spectrum between the system (sys) and its environment (env).
The environment is important!
Note: HOTRG is a local update method that ignores the environment.
HOSRG: take the environment into account within the HOTRG framework, writing Z = Tr(M M^env).
- Forward iteration: HOTRG to obtain the isometries U at all scales.
- Backward iteration: obtain E^(N-1), E^(N-2), ..., E^(2), E^(1) from the recursive relation.
- Sweep: the forward/backward iteration can be repeated to gain more accuracy.
Extension to 3D
- Forward iteration, backward iteration, and a modified local decomposition.
- SRG can be used to globally optimize a 3D tensor network!
- SRG on finite 2D systems with PBC, combined with a sweeping scheme: H. H. Zhao, Z. Y. Xie, T. Xiang, and M. Imada, Phys. Rev. B 93, 125115 (2016).
Improvement 2: EV-TRG / Loop-TRG, removing local (short-range) entanglement
Corner double-line (CDL) picture [TEFR: Z.-C. Gu and X.-G. Wen, PRB 80, 155131 (2009)]
- No long-range entanglement at all: a topologically trivial phase.
- Long-range entanglement present: probably a topologically ordered phase.
- The RG fixed-point tensor is low-rank; its direct-product (CDL) structure can be removed locally. Once removed, D can be kept much smaller without lowering the accuracy!
- This matches the RG spirit: short-scale entanglement should not reappear at larger scales, even near Tc.
EV-TRG: realized by disentanglers [G. Evenbly and G. Vidal, PRL 115, 180405 (2015)]
- Idea: why do we not simply see the CDL structure? Because it is dressed by local entanglement (a unitary transformation).
- Spirit of MERA: insert disentanglers before performing the local decomposition.
- Equivalently, enforce a special structure on the basis transformation: unitaries to disentangle (remove local entanglement) and isometries to decimate.
Solve for the parameters variationally: minimize the distance, or equivalently maximize the overlap.
The isometric property simplifies the calculation: for any isometry X (X†X = 1) and any matrix M, we have ‖XM‖ = ‖M‖, so isometric factors drop out of the cost function (a quick check is sketched below).
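A quick numerical check of that identity (my own illustration): build an isometry from a QR decomposition and compare Frobenius norms.

```python
import numpy as np

X = np.linalg.qr(np.random.rand(8, 3))[0]   # 8x3 isometry, X^T X = 1
M = np.random.rand(3, 5)
print(np.allclose(np.linalg.norm(X @ M), np.linalg.norm(M)))   # True
```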
Loop-TRG: realized by entanglement filtering [S. Yang, Z.-C. Gu, and X.-G. Wen, PRL 118, 110504 (2017)]
- Idea: the CDL structure can be removed by canonicalizing a very sparse MPS defined on a loop.
- Canonicalization (a basis transformation): removes all the entanglement within the loop.
- Deformation (square to octagon): performs the coarse-graining.
- Variation: improves the accuracy.
Method summary: comparison
- Accuracy: local-optimization coarse-graining < global-optimization coarse-graining < transfer-matrix-based methods (best).
- Coarse-graining: better suited to extracting critical information from the RG fixed point.
- HOTRG/HOSRG: designed with 3D lattices in mind; almost the only practical method in 3D.
- SRG/HOSRG: work for finite lattices (even with PBC), where the other methods have problems.
- Entanglement removal: better suited to critical systems.
Method summary: accuracy at the critical point
- Removing short-range entanglement helps, but exactly why is not so clear.
- Variation? Disentangling/filtering? Plain TRG/HOTRG: no + no. EV-TRG/Loop-TRG: yes + yes.
- EV-TRG and Loop-TRG use variational optimization, while the others do not.
What have we discussed so far?
For quantum lattice systems:
- choose a suitable ansatz (e.g., the 5 classes above)
- obtain the wavefunction (e.g., 2 big classes of methods)
- obtain expectation values (e.g., 3 big classes of methods)
For classical statistical systems:
- map the model to a TNM (e.g., 3 methods)
- obtain expectation values (e.g., 3 big classes of methods)
Successful Applications (Very Partial)
- 3D classical statistical models: Ising model [PRB 86, 045139 (2012)]
- Frustrated spin models: AF kagome, J1-J2 square lattice [PRL 118, 137202 (2017)]
- Unfrustrated spin models: many, e.g., the spin-1/2 AF Heisenberg model on the square lattice [Annu. Rev. Condens. Matter Phys. 3, 111 (2012)]
- Superconductivity: t-J model, Hubbard model [PRL 113, 046402 (2014)]
- Classical spin glasses: Edwards-Anderson model [PRB 90, 174201 (2014)]
- Quantum chemistry [Nat. Chem. 5, 660 (2013)]
- Continuous DOF and the KT phase transition [PRE 89, 013308 (2014)]
- Continuous space and 1D quantum field theory [PRL 104, 190405 (2010)]
- 1D many-body localization [PRL 114, 170505 (2015)]
- Topological order detection [PRL 111, 107205 (2013)]
- Lattice gauge theory [PRD 88, 056005 (2013)]
Machine Learning? Thank you!