
1 Tensors

Tensors are a representation of linear operators. Much as with bra-ket notation, we want a notation that suggests the correct operations. We can represent a tensor as a point with n legs radiating downward: it accepts n inputs (vectors) and gives a number back. Alternatively, we could have some legs radiating in (inputs) and some out (outputs). We can define two operations:

Tensor product: Take two tensors, get together enough inputs to feed into both, and take the product of their results.

Contraction: Take two tensors, attach a leg from one to a leg of the other, and sum over all possible labels for that edge.

1.1 Examples

Matrix multiplication: Take two tensors, each with two legs (one in and one out), and contract them together. This yields another tensor with two legs: the matrix product.

Inner product: Contract two one-input tensors together (flipping one by complex conjugation) to get a number.

Trace: Take a 1-in, 1-out tensor and contract its two legs with each other.
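The two operations and the three examples above are all index contractions; here is a minimal NumPy sketch (my own illustration, not from the notes):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
u = np.array([1.0 + 1.0j, 0.0])
v = np.array([0.0, 1.0 - 1.0j])

# Matrix multiplication: contract A's "out" leg with B's "in" leg.
AB = np.einsum("ij,jk->ik", A, B)
assert np.allclose(AB, A @ B)

# Inner product: flip one vector by complex conjugation, then contract.
ip = np.einsum("i,i->", np.conj(u), v)
assert np.isclose(ip, np.vdot(u, v))

# Trace: contract a two-legged tensor's legs with each other.
tr = np.einsum("ii->", A)
assert np.isclose(tr, np.trace(A))
```

The einsum strings make the leg-matching explicit: a repeated index is a contracted edge, an index appearing only on the left of `->` is summed out, and indices after `->` are the open legs of the result.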

1.2 Tensor networks

You can join tensors in these ways to create a tensor network. The network will have some open edges, and labeling them will give you a value: you take the product over all the original tensors, summing over all contracted (closed) edges. The order of contraction doesn't matter, so you don't need to know the history of how the network was assembled.

1.3 Teleportation

We can represent the Bell state as a tensor with two inputs that spits out 1 if the labels on the inputs match. Furthermore, we can generate the other three members of the Bell basis by jamming an X gate, a Z gate, or both onto one of the lines. For example, the bare tensor is |00⟩ + |11⟩, and with a Z on one line it becomes |00⟩ − |11⟩.

We can figure out what would happen if Alice measured her two qubits in the Bell basis by attaching a Bell state to the two qubits she's measuring. If she gets |00⟩ + |11⟩, we get a straight line from Alice's qubit to Bob's, and sure enough, Bob now has Alice's qubit. If she instead measured |00⟩ − |11⟩, there would be a Z gate along the path. There are four possible outcomes, corresponding to the four corrections I, X, Z, and XZ along the wire.

1.4 Computation by teleportation

Every computation is essentially a change of basis. Now, when we teleport, why do we get our output in the same basis? Because the Bell state entangles 0 with 0 and 1 with 1. We could instead insert a unitary operator U into our Bell state. If we then measure |00⟩ + |11⟩, Bob will get U|ψ⟩. If we are less fortunate and measure |00⟩ − |11⟩, we'll get UZ|ψ⟩, which is potentially inconvenient. To correct the result, we need to apply UZU⁻¹. However, if we can get around this snag, we can do our computation before we even get our data.
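The "straight wire" picture of the Bell measurement can be checked numerically. In this sketch (the test state `psi` and the variable names are my own), projecting Alice's two legs onto a Bell outcome leaves Bob holding Alice's state, with a Z along the wire for the |00⟩ − |11⟩ outcome:

```python
import numpy as np

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)        # |00> + |11>
bell_minus = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)  # |00> - |11>
Z = np.diag([1.0, -1.0])

psi = np.array([0.6, 0.8])  # Alice's qubit

# Full state: Alice's qubit (leg a) and a shared Bell pair (legs b, c).
state = np.einsum("a,bc->abc", psi, bell.reshape(2, 2))

# Alice projects legs (a, b) onto a Bell-basis outcome; Bob keeps leg c.
bob = np.einsum("ab,abc->c", bell.reshape(2, 2).conj(), state)
assert np.allclose(bob / np.linalg.norm(bob), psi)          # identity wire

bob2 = np.einsum("ab,abc->c", bell_minus.reshape(2, 2).conj(), state)
assert np.allclose(bob2 / np.linalg.norm(bob2), Z @ psi)    # Z on the wire
```

Reshaping the Bell vector into a 2×2 matrix is exactly the "tensor with two inputs that outputs 1 when the labels match" picture: |00⟩ + |11⟩ becomes the identity matrix (up to normalization), and |00⟩ − |11⟩ becomes Z.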

1.5 The Pauli group

The Pauli group is C_1 = ±{I, X, Y, Z}. There's a bigger group, the Clifford group C_2, which conjugates Paulis to Paulis. For example, H ∈ C_2, and HXH⁻¹ = Z. There's a yet bigger set (not a group), C_3, which conjugates Paulis to Cliffords. C_3 contains the π/4 gate R_{π/4} = diag(1, e^{iπ/4}). If your computation is in C_3, your correction is in C_2, and if your computation is in C_2, your correction is in C_1. You can repeatedly teleport, each time computing an easier correction, until you hit C_1.

1.6 Bubble width

We can take a tensor network and draw a bubble around one of the tensors, then grow it to swallow the network one tensor at a time. We're sad about every edge that passes through our bubble, since we need to keep track of it. We want a swallowing sequence that minimizes the maximum number of edges through the bubble: that minimum is called the bubble width w. It takes roughly d^w time to classically evaluate the tensor network, where d is the dimension of each edge.

1.7 Quantum circuits as tensor networks

You can write quantum circuits as tensor networks. If we want to figure out the acceptance probability of some circuit, we can stick its tensor network TN on top of itself, upside down and complex conjugated. Projecting the output bits onto 1 gives the probability of outputting 1 as ⟨1| TN† TN |1⟩.

If we have a vector living in the tensor product of two spaces, we can Schmidt decompose it:

    ψ = Σ_i λ_i u_i ⊗ v_i

and write it as a matrix A (rows indexed by basis vectors of one space, columns by the other). The singular value decomposition A = U D V, with U and V unitary and D diagonal, recovers the Schmidt decomposition: the diagonal entries of D are the λ_i. You can characterize the thickness of the bond by K, the rank of D (or, equivalently, the number of nonzero λ_i).

1.8 MPS (Matrix Product State)

See http://www.nature.com/nphys/journal/v11/n7/abs/nphys3345.html

We can write an n-qubit state as a matrix product state: a chain of tensors joined by bond edges, each with one dangling physical leg of dimension d.

Say we have a gapped local Hamiltonian in 1D, with ground state Ω. We would like to approximate the ground state by an MPS with low Schmidt rank. By default, we would expect the bond dimension to go as 2^{n/2}, which is going to be intractable. So let φ_{k_ε} be an ε-approximation to Ω of Schmidt rank k_ε. How does k_ε vary with ε? Well, worst case, the errors from each bond add up and we get nε. It would be nice to do better, and it's an open question whether you can; doing so would allow better algorithms. One can achieve

    ε = 1/poly(n),    k_ε = 2^{O(log^{2/3} n)}.

We want to take a product state not orthogonal to the ground state, then shrink its orthogonal components. To do so, we'll use a (D, Δ) AGSP (Approximate Ground State Projector): it multiplies the Schmidt rank across a cut by at most D while shrinking the components orthogonal to the ground state by a factor Δ. We can build one with

    D = (ln d)^s,    Δ = e^{−s^{3/2}/u^{1/2}}.

Setting Δ = 1/poly(n) gives s = u^{1/3} ln^{2/3}(n), and so D = (ln d)^{u^{1/3} ln^{2/3}(n)}.

So if we just had one cut, we could truncate the Schmidt decomposition after just O(D) terms. That leaves us with an error of nε in the ground state and Schmidt rank k_ε across each cut.
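The successive-SVD construction of an MPS, truncating each bond to its top Schmidt components, can be sketched as follows (the rank cap `chi` and the function names are my own, assuming a simple left-to-right sweep):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random 6-qubit state, to be split into an MPS by successive SVDs:
# repeatedly pull off one site, keeping at most chi Schmidt components.
n, chi = 6, 4
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

def truncate_mps(psi, n, chi):
    tensors, rest = [], psi.reshape(1, -1)   # (bond, remaining sites)
    for _ in range(n - 1):
        bond = rest.shape[0]
        A = rest.reshape(bond * 2, -1)       # split off one site
        U, lam, Vh = np.linalg.svd(A, full_matrices=False)
        k = min(chi, len(lam))               # keep top-k Schmidt terms
        tensors.append(U[:, :k].reshape(bond, 2, k))
        rest = lam[:k, None] * Vh[:k]        # carry weights to the right
    tensors.append(rest.reshape(-1, 2, 1))
    return tensors

def contract(tensors):
    out = tensors[0]
    for T in tensors[1:]:                    # contract shared bond legs
        out = np.einsum("...a,abc->...bc", out, T)
    return out.reshape(-1)

approx = contract(truncate_mps(psi, n, chi))
err = np.linalg.norm(psi - approx)
assert err < 1.0     # chi = 4 truncates the middle bond: some error

exact = contract(truncate_mps(psi, n, 2**(n // 2)))
assert np.linalg.norm(psi - exact) < 1e-10   # chi = 2^{n/2}: exact
```

For a random state the truncation error is dominated by the middle bond, where the untruncated Schmidt rank reaches 2^{n/2}; the exact reconstruction at chi = 2^{n/2} is the "default" bond dimension mentioned above.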

The argument goes: you start with your state, which you don't know anything about. You repeatedly pull off one of your particles, Schmidt decomposing and keeping only the top k components. So let

    ψ = Σ_i λ_i v_i ⊗ w_i.

We can truncate by projecting to the left (onto span{v_1, ..., v_k}) or to the right (onto span{w_1, ..., w_k}). We might worry about the Schmidt decompositions at different cuts conflicting with each other, but if we truncate to the left one way and to the right the other, the errors just add.

We can efficiently compute a local observable O (the bubble width is small): evaluate ⟨MPS| O |MPS⟩.

1.8.1 The Algorithm

As we go along, we keep track of some subspace S ⊆ H_A such that Ω is ε-close to S ⊗ H_B, but the dimension of S is polynomial in n. The algorithm tracks a basis for S, with each basis vector described by an MPS.

First, we combine neighboring particles into one subspace: S_1, S_2, S_3, S_4 become S_{12}, S_{34}. We then recurse, combining blobs into bigger subspaces. By itself, this wouldn't help much: we'd still get an exponentially large space. We could randomly shrink our subspace, but that would increase the error massively. Instead, we apply an AGSP. The whole point of an AGSP is to get a very favorable tradeoff between D and Δ: for example, we can pick DΔ⁴ < 1/2. This gives us a pump between dimension and error: first we trade off dimension for error one-for-one, then we shrink the error. If we can (1) construct the AGSP efficiently, and (2) bound the Schmidt rank of the AGSP across all cuts by poly(n), then we're pretty happy.
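The error-shrinking half of the pump can be seen in a toy model. The operator A below is a crude approximate ground state projector of my own construction (the AGSPs in the actual argument are built from Chebyshev polynomials and must also keep the Schmidt-rank blowup D small, which this toy ignores): it fixes the ground state while shrinking every orthogonal component.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian "Hamiltonian", shifted so its ground energy is 0.
M = rng.normal(size=(16, 16))
H = (M + M.T) / 2
evals, evecs = np.linalg.eigh(H)
H = H - evals[0] * np.eye(16)
ground = evecs[:, 0]

# Crude AGSP: (I - H/E_max)^s has eigenvalue 1 on the ground state and
# eigenvalue (1 - E_i/E_max)^s < 1 on every excited state.
Emax = np.linalg.eigvalsh(H)[-1]
A = np.linalg.matrix_power(np.eye(16) - H / Emax, 8)

# Start from a state with only partial overlap with the ground state.
v = ground + 0.5 * rng.normal(size=16)
v /= np.linalg.norm(v)

w = A @ v
w /= np.linalg.norm(w)

# Applying the AGSP strictly improves the overlap with the ground state.
assert abs(ground @ w) > abs(ground @ v)
```

In the real algorithm, each application of the AGSP also multiplies the dimension of the tracked subspace by D, which is why the tradeoff between D and Δ is the crux.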