Reconstructing Entropy from Objective Probabilities


Amsterdam University College Capstone Thesis, Bachelor of Science
Author: Kamiel Mobach
Supervisor: Dr. Sebastian de Haro
23 May, 2016

Abstract

Following advances in explaining the process of equilibration from the laws of quantum mechanics, this paper re-evaluates how the notions of equilibration and entropy can be constructed from an objective approach. It has been shown that the fluctuation in time of any subsystem with non-degenerate energy gaps around its time-averaged state has an upper bound. This proves equilibration for quantum states, but does not highlight the mechanism of equilibration. The setup uses only pure states and their subsystems, which greatly limits the role of subjective lack of knowledge in the equilibration shown here. The notion of entanglement entropy is the best candidate to explain this equilibration, and would replace any subjective ignorance in statistical mechanics with the notion of objective probability already present in quantum mechanics. This paper reviews the mechanism for quantum equilibration developed in previous work, as well as the notions of subjective and objective probability, in order to evaluate the possibility of reconstructing the concept of entropy in an objective sense.

Keywords: entropy, equilibration, objective probability, entanglement

1 Introduction

When Boltzmann first refined the second law of thermodynamics (the law of increasing entropy) in 1866, his mathematical formalism was mainly based on probability distributions and ensemble averaging [1]. These notions rely heavily on subjective lack of knowledge about the considered system, which makes them questionable as a foundation of such an important physical concept as entropy. Recent theoretical developments point a way around the use of subjective probabilities by showing that equilibration is a universal property of quantum systems [2, 3]. Since quantum mechanics incorporates probability as a fundamental property of its description of the world, it could be used to explain the second law in a way that only uses objective probabilities. This would reconstruct entropy as a physical quantity that stems from an epistemic use of probabilities as suggested by Frigg, rather than one that relies on our subjective ignorance about a system [4]. To restrict the use of probability theory to objective probabilities, we would like to use only pure quantum states. These do use probability theory because of the inherent indeterminacy of quantum mechanics, but do not rely on any subjective lack of knowledge. Reimann and Linden et al. independently made the first steps in this direction by identifying an upper bound to the fluctuations of the state of quantum systems from their time-averaged state, and by showing that any observables of these states are indistinguishable from their equilibrium values [3, 2]. Their results have been unified and improved by Short, who points out that quantum equilibration is a much stronger foundation for statistical mechanics than the classical ensemble approach [5]: equilibration is a general property of quantum systems that does not depend on which observable we consider. Furthermore, it has been shown that equilibration of quantum systems happens within a finite time-span [5].
In this way, the system's time evolution is bound to a realistic equilibration time. While these bounds lay down the conceptual framework for a more fundamental understanding of the second law, an explicit connection to entropy is still missing for the considered pure quantum states. Entropy in quantum systems is usually defined as the Von Neumann entropy, which measures information loss in quantum systems [6]. However, this notion of entropy can only be used in the context of mixed states described by density matrices, because loss of information can only be described in a subjective sense. For pure states the Von Neumann entropy is always zero, which obscures the mechanism of equilibration of these states. A solution to this problem could be found by considering the subsystems of large pure states that are constructed by tracing the state out over its environment: ρ_S = Tr_E(|ψ⟩⟨ψ|) [7]. These subsystems are represented by density matrices and are therefore mixed states, part of a larger pure state [8]. From the consideration of mixed states as subsystems of a large pure state, the notion of entanglement entropy arises, which can be shown to be equivalent to thermodynamic entropy for long times [6, 9]. To work towards a new notion of entropy in this fashion, this paper will evaluate the research into equilibration of pure quantum states and explore how entropy can be quantified in this approach to equilibration.

2 Probabilities in statistical mechanics

In the interpretation of thermal physics the use of probabilistic thinking is a fundamental, but partly unwanted, feature. In his earliest work, Boltzmann declared that he had found an exact approach to the mathematical notions of equilibrium and entropy [1]. He intended to use probabilistic calculus in an exact, deterministic manner. However, according to later commentators, he did not succeed. As probability is an even more defining feature of modern physics than it was in Boltzmann's time, it is important to differentiate between objective and subjective probabilities. Using these notions it can be evaluated whether physics that uses some kind of probabilistic thinking relies on inherent uncertainty or on the ignorance of the scientist. The difference between objective and subjective probability is important to any statistical science, as it represents fundamentally different attitudes that scientists can have towards their object of study: objective probabilities are conceived of as chances that are inherently part of the studied object, whereas subjective probabilities signify an actor's belief about the chances of such a system [10]. For example, when tossing a fair coin we know that heads and tails each occur with probability 1/2. However, someone involved in such a coin toss might think that the chance of heads is actually 2/3 because of some imbalance of the coin. They might adjust this assigned subjective probability after doing an experiment and seeing that the coin is actually fair. In statistical physics, it is unclear whether the probability distributions that represent the studied systems are subjective or objective. If we take the Maxwell-Boltzmann distribution, which describes the distribution of particle speeds in gases [11],

f(v) = 4πv² (m / 2πkT)^{3/2} e^{−mv²/2kT},    (1)

we might say that this reflects the true probabilities of the system exhibiting certain properties, like temperature and pressure.
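Equation (1) can be given a quick numerical sanity check. The sketch below uses illustrative values (an N2-like molecular mass at T = 300 K; the helper names are ours, not from the cited literature) to verify that f(v) integrates to 1 and reproduces the known mean speed √(8kT/πm):

```python
import math

def maxwell_boltzmann(v, m, T, k=1.380649e-23):
    """Speed distribution f(v) = 4*pi*v^2 * (m/(2*pi*k*T))^(3/2) * exp(-m*v^2/(2*k*T))."""
    a = m / (2.0 * math.pi * k * T)
    return 4.0 * math.pi * v ** 2 * a ** 1.5 * math.exp(-m * v ** 2 / (2.0 * k * T))

def integrate(f, lo, hi, n=20000):
    """Simple midpoint-rule numerical integration."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

m_N2 = 4.65e-26   # mass of an N2 molecule in kg (illustrative)
T = 300.0         # temperature in K
k = 1.380649e-23  # Boltzmann constant in J/K

norm = integrate(lambda v: maxwell_boltzmann(v, m_N2, T), 0.0, 5000.0)
mean_v = integrate(lambda v: v * maxwell_boltzmann(v, m_N2, T), 0.0, 5000.0)

print(round(norm, 4))                                  # ~1.0: f(v) is normalized
print(round(mean_v), round(math.sqrt(8 * k * T / (math.pi * m_N2))))  # both ~476 m/s
```

The cutoff at 5000 m/s is safe here because the Gaussian tail is negligible far above the most probable speed.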
However, we might also say that this distribution is just an educated guess, and that we predict the properties of the system using this guess. The reason for introducing probability is a general ignorance about the system resulting from its great complexity [12]. Scientists try to reflect the true information behind this ignorance in probability distributions that represent the system. In this picture the distributions represent some deeper nature of the system and are therefore objective. On the other hand, the same probability distributions are also used to predict certain properties of the system: they represent our beliefs about the behaviour that the system will exhibit, and are therefore also subjective probabilities [10]. This picture becomes more involved when we include the consideration of quantum states in the discussion of probability. In classical mechanics any lack of knowledge can be regarded as subjective: classical physics is deterministic in nature. Quantum mechanics, however, deals with uncertainty principles, such as σ_x σ_p ≥ ħ/2, which state that certain knowledge about quantum systems is simply undefined. Quantum mechanical wave functions can be manipulated to yield

the probability of finding a particle in a particular state. This probabilistic character is an essential property of quantum mechanical objects, and therefore this case deals with objective probabilities. These objective quantum mechanical probabilities should be contrasted with the subjective probabilities that are used in ensemble-based techniques such as the density matrix approach [8]. In the latter approach we are dealing both with lack of knowledge as a fundamental quantum mechanical property and with lack of knowledge about how systems interact. The next section will outline how the different probabilities that we have described can be found in the mathematical formulations used in quantum statistical mechanics. It will provide us with the concepts that we need to show how systems equilibrate, which will be done in the fourth section. The fifth section will focus on the entropy of the systems that we have been considering and will give an example of a system that has the properties needed for the equilibration shown in the section before.

3 Setup

In this section, we consider how we can separate our system into a system and an environment, which can be seen as a subjective choice in the setup. However, we might also say that the distinction between system and environment is constrained by physical factors. In most realistic scenarios it is rather clear what the system is and what the bath is. However, we still deal with some subjectivity in this setup through the use of the density matrix: this describes an ensemble of possible states that our system can be in. Although this can be used in a very subjective manner, we only use this approach for tracing out certain parts of the total system to make the physical distinction between system and bath. The ensemble that we then describe with this density matrix is rather due to quantum entanglement between the system and the environment, which will be elaborated on in section five.
At the end of this section we will temporarily drop the distinction between system and environment in order to introduce new measures in a simpler manner. The mathematical setup of this paper follows that of previous work on quantum mechanical equilibration [3, 2, 13] and the notation introduced by Gemmer et al. [8]. We consider a subspace of a Hilbert space H that describes a subsystem, and a bath connected to this subspace. The bath and the system together make up the total Hilbert space, which is the tensor product H = H_S ⊗ H_B. Any Hilbert space decomposed as a tensor product will suffice for the equilibration shown here. As we will see later, there are constraints on this separation that determine whether the subsystem will equilibrate (the dimension of the subsystem should be small compared to that of the bath), but this will not affect the calculations. Likewise, the Hamiltonian that governs the time-evolution of the global state of the total Hilbert space is left almost

completely general:

H = Σ_k E_k |k⟩⟨k|,    (2)

where k can take integer values k = 1, 2, ..., n_tot and is a generic label for the energy eigenstates. The only requirement is that the Hamiltonian has non-degenerate energy gaps:

E_k − E_l = E_m − E_n  ⟹  (k = l and m = n) or (k = m and l = n).    (3)

This condition ensures that the Hamiltonians of the system and of the bath interact, because equation (3) is not satisfied by a non-interacting total Hamiltonian of the form H = H_S ⊗ I_B + I_S ⊗ H_B [13]. This requirement on the Hamiltonian is still very general, and virtually all realistic systems obey it because an infinitesimal perturbation will remove any degeneracy: from perturbation theory it is known that a perturbation H′ to the Hamiltonian will lift degeneracies in the unperturbed Hamiltonian H. The perturbation changes the eigenfunctions of the total Hamiltonian, and therefore it also changes the eigenvalues (the energies), hereby lifting the degeneracy [14]. Realistic systems cannot be expected to obey idealised Hamiltonians without perturbations: there will always be a small perturbation from some interaction with an environment that lifts degeneracies, which makes our assumption a realistic one. To stick with objective probabilities as much as possible, we start with a global pure state |Ψ(t)⟩ that includes both the state of the system and the state of the bath. This state can be described by a density matrix ρ(t) = |Ψ(t)⟩⟨Ψ(t)|. This density matrix describes an ensemble of quantum states. Generally, density matrices can describe the mixing of states and attach weights in the form of a matrix:

ρ = Σ_{i,j} W_ij |i⟩⟨j|,    (4)

where W_ij is the weight of the (i, j) component of the mixture. For the global pure state |Ψ(t)⟩ the density matrix ρ(t) mixes the state with itself with no additional statistical weights. It just allows us to use the tools of quantum statistical mechanics on the states that we consider here.
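For a pure state the density matrix is simply the projector onto that state, so it satisfies Tr ρ = 1 and ρ² = ρ. A minimal numerical sketch for a single qubit (illustrative amplitudes, not from the cited literature):

```python
# Qubit pure state |psi> = a|0> + b|1> with real amplitudes, |a|^2 + |b|^2 = 1.
a, b = 0.8, 0.6
psi = [a, b]

# rho = |psi><psi| as an outer product.
rho = [[psi[i] * psi[j] for j in range(2)] for i in range(2)]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

rho2 = matmul(rho, rho)
trace = rho[0][0] + rho[1][1]

print(round(trace, 12))  # 1.0: unit trace
print(all(abs(rho2[i][j] - rho[i][j]) < 1e-12 for i in range(2) for j in range(2)))  # True: rho^2 = rho
```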
To obtain the expression for the density matrix of this global pure state, we have to consider that the Hilbert space of this state can be decomposed into a product space H = H_S ⊗ H_B. If the vectors that span the Hilbert spaces of the system and the bath are |s⟩ and |b⟩ respectively, then the complete Hilbert space H is spanned by the complete set of orthonormal vectors |sb⟩ = |s⟩ ⊗ |b⟩. We can now write down the global pure state ρ(t) in terms of these vectors and the corresponding statistical weights:

ρ(t) = Σ_{s,s′,b,b′} W_sb W*_{s′b′} |sb⟩⟨s′b′|    (5)

We will call this density matrix the global pure state, even though it is itself constructed from quantum mechanical states. We just call it a state to denote

that this measure describes all relevant information about the system, like a conventional state would do in other calculations in quantum mechanics. From the density matrix of the pure global state we can find the states of the bath and the subsystem by tracing out the relevant part of the global state. The density matrix of the system is obtained by tracing ρ(t) out over the bath, and vice versa:

ρ_S(t) = Tr_B ρ(t),    ρ_B(t) = Tr_S ρ(t)    (6)

These states are no longer pure states, even though they are constructed from a global pure state. However, they still make up the global pure state, which is only affected by objective probabilities. To introduce the following concepts we drop the distinction between system and bath for simplicity's sake. The indices i and j in the rest of this section can be seen as denoting the whole state or a specific part of the state, like the system or the environment. The amount of mixing can be measured by considering the purity of a state:

P(ρ) = Tr(ρ²) = Σ_i W_ii²    (7)

(the last equality holding in the basis in which ρ is diagonal). For pure states P = 1: the density matrix is essentially a projection operator, ρ² = ρ, which projects the mixture onto the state itself, and so the system will always be found in the global pure state. The minimum purity is reached for a density matrix that is maximally mixed; such a state has equal weights assigned to all states that make up the mixture:

ρ = Σ_{i,j} (δ_ij / n_tot) |i⟩⟨j|    (8)

Here n_tot represents the total number of states that contribute to the mixture: the indices i and j run from 1 to n_tot. Filling this into equation (7), we get

P_min = Σ_{i,j,i′,j′} W_ij W_i′j′ Tr(|i⟩⟨j| |i′⟩⟨j′|)
      = Σ_{i,j,i′,j′} (δ_ij δ_i′j′ / n_tot²) δ_ji′ δ_j′i    (9)
      = n_tot / n_tot² = 1/n_tot.

Here we multiplied ρ by itself to get ρ². The operator product inside the trace reduces to delta functions, and summing over the deltas yields the dimension of the matrix, n_tot.
This can also be seen directly by summing the squares of W_ii as in equation (7):

P_min = Σ_{i,i′} δ_ii′ δ_i′i / n_tot² = 1/n_tot    (10)
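Equations (7)–(10) can be checked with a short numerical sketch, comparing a pure state against a maximally mixed state (n_tot = 4 is an illustrative choice of dimension):

```python
def purity(rho):
    """P(rho) = Tr(rho^2) for a density matrix given as a list of lists."""
    n = len(rho)
    return sum(rho[i][k] * rho[k][i] for i in range(n) for k in range(n))

n_tot = 4
# Pure state: all weight on a single basis state.
pure = [[1.0 if i == 0 and j == 0 else 0.0 for j in range(n_tot)] for i in range(n_tot)]
# Maximally mixed state: rho_ij = delta_ij / n_tot, as in equation (8).
mixed = [[1.0 / n_tot if i == j else 0.0 for j in range(n_tot)] for i in range(n_tot)]

print(purity(pure))   # 1.0: maximal purity
print(purity(mixed))  # 0.25 = 1/n_tot: the minimum of equation (10)
```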

Using the expression for purity we can evaluate the amount of mixing that we deal with in our discussion of quantum mechanical equilibration. The purity of the global pure state will always be 1, but the purity of ρ_S will have a different value. To be able to distinguish between states we will use a distance measure D that evaluates how well the states can be differentiated in a perfect experiment. This distance measure uses the trace operation, because observables are also obtained from mixed states by taking a trace: ⟨A⟩ = Tr(Âρ) [8].

D(ρ_1, ρ_2) = ½ Tr|ρ_1 − ρ_2|    (11)

As our aim is to derive the evolution toward equilibrium of any subsystem, we need to define the time-averaged global state ω. By using the distance measure (11), we can then evaluate how close states are to the equilibrium state ω.

ω = ⟨ρ(t)⟩_t = lim_{τ→∞} (1/τ) ∫_0^τ ρ(t) dt    (12)

The time-averaged states of the subsystem and the bath are similarly defined as ω_S and ω_B respectively. Next, we need a measure of the number of different energies incorporated in the state, or equivalently, of how many pure states contribute to the mixture in the density matrix. For this measure we use the inverse of the purity (7), which we will call the effective dimension d_eff:

d_eff(ρ) = 1/P = 1/Tr(ρ²)    (13)

For pure states the effective dimension will be 1; the more mixing occurs in a subsystem, the larger the effective dimension gets, until it equals n_tot for maximally mixed states (8). In this way, the effective dimension denotes roughly how many pure states contribute to the mixture. The effective dimension will be used in combination with d_S and d_B, which denote the dimensions of H_S and H_B. Finally, to connect the equilibration shown in the next section to a measure of entropy, we introduce the Von Neumann entropy [8]:

S = −Tr(ρ ln ρ)    (14)

Some texts introduce a constant of proportionality, like k_B, in the Von Neumann entropy to give it the same dimension as the entropy that is used in thermodynamics.
In this paper we only want to compare the entropies of different states that are shown to equilibrate, so we set this constant to 1. Effectively, this means we work with a dimensionless entropy. The entropy's minimum S = 0 is reached for pure states: in the eigenbasis of a pure state a single weight W_ii equals 1 and all others vanish, so the logarithm reduces the whole expression to zero. The maximum entropy is reached for the maximally mixed states described by equation (8). These states

have minimum purity, as shown in (9). Substituting equation (8) into (14) we get

S_max = −Σ_{i,i′} (δ_ii′ / n_tot) ln(δ_i′i / n_tot)
      = −Σ_i (1/n_tot) ln(1/n_tot)    (15)
      = ln n_tot,

where the last line uses that the sum of δ_ii′ over the diagonal equals n_tot.

4 Quantum mechanical equilibration

Equilibrium is classically described by considering a phase space of possible configurations of a system and evaluating which configuration is most likely to occur. The observables that define this equilibrium are independent of time for the equilibrium state [11]. For quantum mechanical equilibration a similar result has been found by Reimann, who shows that the expectation value of any quantum observable will evolve towards a static, i.e. time-independent, equilibrium value [3]. This quantum mechanical equilibration is different from classical considerations of equilibrium because its derivation is traceable to objective probabilities of global pure states. We will follow the derivations by Linden et al. and Short to demonstrate equilibration without resorting to subjective ignorance as a physical principle [2, 13]. Consider a pure global state |Ψ(t)⟩ which is expanded as follows:

|Ψ(t)⟩ = Σ_k W_k e^{−iE_k t/ħ} |E_k⟩    (16)

The density matrix describing this state is obtained by multiplying the state by its Hermitian conjugate:

ρ(t) = Σ_{k,l} W_k W*_l e^{−i(E_k − E_l)t/ħ} |E_k⟩⟨E_l|    (17)

To obtain the equilibrium state of the global state we integrate out the time-dependence as in equation (12), keeping in mind that the energy gaps of the Hamiltonian are non-degenerate: for k ≠ l the exponent never vanishes and the oscillating terms average to zero, while for k = l the exponential reduces to 1 and we can relabel the remaining indices to k, as they are dummy variables.

ω = ⟨ρ(t)⟩_t = lim_{τ→∞} (1/τ) ∫_0^τ Σ_{k,l} W_k W*_l e^{−i(E_k − E_l)t/ħ} |E_k⟩⟨E_l| dt = Σ_k |W_k|² |E_k⟩⟨E_k|    (18)
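Equation (18) can be illustrated numerically: time-averaging a pure state's density matrix suppresses the oscillating off-diagonal terms and leaves the diagonal weights |W_k|². A minimal sketch for a three-level state (illustrative energies with non-degenerate gaps; ħ = 1):

```python
import cmath

E = [0.0, 1.0, 2.6]     # all gaps distinct: 1.0, 1.6, 2.6
W = [0.6, 0.64, 0.48]   # |W_k|^2 = 0.36, 0.4096, 0.2304, summing to 1

def rho(t):
    """rho(t)_{kl} = W_k W_l* exp(-i (E_k - E_l) t), as in equation (17)."""
    return [[W[k] * W[l] * cmath.exp(-1j * (E[k] - E[l]) * t) for l in range(3)]
            for k in range(3)]

# Numerical time average over a long window, approximating equation (12).
tau, n = 2000.0, 20000
omega = [[0.0 + 0.0j] * 3 for _ in range(3)]
for i in range(n):
    r = rho((i + 0.5) * tau / n)
    for k in range(3):
        for l in range(3):
            omega[k][l] += r[k][l] / n

print(round(abs(omega[0][1]), 3))   # 0.0: off-diagonals dephase away
print(round(omega[0][0].real, 4))   # 0.36 = |W_0|^2: diagonals survive, as in equation (18)
```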

The time-averaged state of the system, ω_S, is obtained in the same way, using equation (6). To see whether the subsystem generally equilibrates we can use the notion of distance between the state of the system and its time-averaged state, D(ρ_S(t), ω_S). We first bound this distance by a maximum value, using equation (11) and the dimension of H_S:

D(ρ_S(t), ω_S) = ½ Tr|ρ_S(t) − ω_S| = ½ Tr√((ρ_S(t) − ω_S)²) ≤ ½ √(d_S Tr(ρ_S(t) − ω_S)²)    (19)

The last line uses the concavity of the square root function and the norm equivalence relation

‖A‖_1 ≤ √n ‖A‖_2,    (20)

where n is the dimension of the matrix A and the norms are defined as follows [15] (‖A‖_2, the Hilbert-Schmidt or Frobenius norm, is a specific form of the p-norm):

‖A‖_2 = √(Σ_{i,j} |a_ij|²) = √(Tr(|A|²)),    ‖A‖_p = (Tr(|A|^p))^{1/p}    (21)

To evaluate the general equilibration of the subsystem, we take the time average of this distance:

⟨D(ρ_S(t), ω_S)⟩_t ≤ ½ √(d_S ⟨Tr(ρ_S(t) − ω_S)²⟩_t)    (22)

The next step is to expand (ρ_S(t) − ω_S). First we find an expression for ρ_S using (6) and (17):

ρ_S(t) = Σ_{k,l} W_k W*_l e^{−i(E_k − E_l)t/ħ} Tr_B(|E_k⟩⟨E_l|)    (23)

The trace in this expression comes from tracing out the bath as in equation (6); in this way ρ_S(t) is obtained from the trace of ρ(t) (17). Next, we subtract ω_S:

ρ_S(t) − ω_S = Σ_{k≠l} W_k W*_l e^{−i(E_k − E_l)t/ħ} Tr_B(|E_k⟩⟨E_l|)    (24)

The subtraction of ω_S appears in the formula above as the exclusion of the k = l elements from the sum: according to formula (18), the diagonal elements are the ones that form the time-averaged state. The trace in these diagonal elements again comes from tracing the total ω out over the bath to obtain ω_S. Substituting this expression into the time average in (22) we get

⟨Tr(ρ_S(t) − ω_S)²⟩_t = Σ_{k≠l} Σ_{m≠n} W_k W*_l W_m W*_n ⟨e^{−i(E_k − E_l + E_m − E_n)t/ħ}⟩_t Tr_S(Tr_B(|E_k⟩⟨E_l|) Tr_B(|E_m⟩⟨E_n|)),    (25)

where we used that the constants W as well as the exponentials can be taken out of the trace, and that the exponentials are the only time-dependent elements. Now we evaluate the time average in a way similar to equation (12): we use the fact that the Hamiltonian has non-degenerate energy gaps and that k ≠ l and m ≠ n to see that the only non-zero terms are those with k = n and l = m. The other terms vanish in the integral, which gives us

⟨Tr(ρ_S(t) − ω_S)²⟩_t = Σ_{k≠l} |W_k|² |W_l|² Tr_S(Tr_B(|E_k⟩⟨E_l|) Tr_B(|E_l⟩⟨E_k|))
                     = Σ_{k≠l} |W_k|² |W_l|² Tr_B(Tr_S(|E_k⟩⟨E_k|) Tr_S(|E_l⟩⟨E_l|)),    (26)

where the second equality follows from writing out the partial traces in the product basis |sb⟩, inserting the identity operator Σ_s |s⟩⟨s|, and rearranging terms. Recalling equation (18) we see that ω_B = Tr_S ω = Σ_k |W_k|² Tr_S(|E_k⟩⟨E_k|). Comparing this to the equation above:

⟨Tr(ρ_S(t) − ω_S)²⟩_t ≤ Tr_B(ω_B ω_B) = Tr_B ω_B²    (27)

The restriction k ≠ l in (26) refers to the absence of those elements from the product of the two ω_B matrices; since all contributions are positive, the unrestricted expression is larger, which gives the inequality. Finally, substituting this bound on the time average into equation (22) and using the definition of the effective dimension (13):

⟨D(ρ_S(t), ω_S)⟩_t ≤ ½ √(d_S Tr_B ω_B²) = ½ √(d_S / d_eff(ω_B))    (28)

Note that the distance is a strictly positive function, and that the time average of the distance will only have positive contributions. The upper bound can be improved even further by invoking the weak subadditivity of the Rényi entropy [16], as shown in the cited derivations [2, 13]:

⟨D(ρ_S(t), ω_S)⟩_t ≤ ½ √(d_S² / d_eff(ω))    (29)

where d_eff(ω) is the effective dimension of the global time-averaged state. The upper bound on the time-average of the distance is the main result of the derivation. We see that if the effective dimension of the time-average of the

bath is larger than the dimension of the subsystem, or if the effective dimension of the time-averaged global state is larger than the square of the dimension of the subsystem, the time-averaged distance will be bounded by a small number. This means that the average distance between the state of the subsystem and its equilibrium state will be small. The system will spend most of its time in a state near its equilibrium state, which is analogous to saying that the system equilibrates. The only requirements for this equilibration to occur are that the bath and system together make up a pure state with a non-degenerate Hamiltonian, that this state has many different energy eigenstates so that its effective dimension is large, and that the considered subsystem is a small part of the total system. Nothing is specified about the subsystem, which means that any subsystem of such a state will evolve towards equilibrium. Moreover, no thermal properties are assumed. This implies that the equilibration shown here does not even have to be thermodynamic.

5 From equilibration to entropy

We have shown that generic small subsystems equilibrate. Equilibration has appeared to be a general property of quantum systems that does not require any ensemble averaging or a great subjective lack of knowledge. In this section we want to explore how we can go further from here by looking at the entropy of the considered systems.

5.1 Constructing entropy from pure states

As discussed in the setup, the entropy of the global pure state ρ(t) is zero. For this discussion we use the global pure state as defined in (5), and take the trace over the whole density matrix by summing over k. We immediately see that this is zero when the trace is taken in the eigenbasis of ρ(t), where a pure state has a single weight equal to 1 and the rest vanish:

S(ρ(t)) = −Σ_k |W_k|² ln(|W_k|²) = 0    (30)

The entropy that we are interested in, however, is not that of the global pure state, but that of the subsystem.
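The contrast between a pure global state and its mixed subsystems can be made concrete with the simplest entangled example, a Bell state: globally the entropy vanishes, but the reduced state of either half is maximally mixed. A minimal sketch (the dictionary of amplitudes is our own illustrative encoding):

```python
import math

# Bell state |Psi> = (|00> + |11>)/sqrt(2) on H_S (x) H_B, both of dimension 2.
amp = {(0, 0): 1 / math.sqrt(2), (1, 1): 1 / math.sqrt(2)}

# Reduced state rho_S = Tr_B |Psi><Psi|: (rho_S)_{s s'} = sum_b amp(s,b) * amp(s',b).
rho_S = [[sum(amp.get((s, b), 0.0) * amp.get((sp, b), 0.0) for b in range(2))
          for sp in range(2)] for s in range(2)]

# rho_S is diagonal here, so S = -sum_i p_i ln p_i over its diagonal entries.
S_sub = -sum(p * math.log(p) for p in (rho_S[0][0], rho_S[1][1]) if p > 0)

print(round(S_sub, 3))  # 0.693 = ln 2: the subsystem carries entanglement entropy
```

The global state is pure, so its Von Neumann entropy is zero, yet each half carries ln 2 of entropy purely through entanglement.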
Although the global state is only affected by objective probabilities because it is a pure state, we can still compute the entropy of the subsystem without resorting completely to subjective ignorance. As we will see at the end of this section, the local entropy of the subsystem arises from entanglement. To compute the local entropy we first recall the expression for ρ_S(t), making it explicit that the total Hilbert space is a bipartite system by indicating the components of the system and the bath:

ρ_S(t) = Σ_{s,s′} (Σ_b W_sb W*_{s′b}) |s⟩⟨s′| = Σ_{s,s′} W_ss′ |s⟩⟨s′|    (31)

This expression can be substituted into the definition of entropy (14) to obtain a general expression for the entropy of the subsystem:

S = -\sum_s W_{ss} \ln W_{ss}    (32)

The purity of the system (7) is related to the entropy, but the two do not map uniquely onto each other: maximum-entropy states correspond to minimum-purity states and vice versa. The purity we address here is not that of the global system but that of the subsystems, which can be less pure because entanglement mixes their density matrices. The less pure a state becomes, the higher its entropy. In other words, using the law of increasing entropy: systems tend towards more mixed states. In our case this refers to quantum mechanical mixing in the density matrix approach, but it is analogous to the classical case, where systems increase their entropies by, for example, exchanging particles.

When talking about purity, we have to realise what the state is mixing with. In our setup we have been considering a pure state given by a density matrix on H_S \otimes H_B. A theorem by Araki and Lieb states that for pure states in such setups (i.e. H_{12} = H_1 \otimes H_2) the entropies of the traced-out substates are equal [17]: if a bipartite system forms a global pure state, its reduced density matrices have the same non-zero eigenvalues with the same multiplicities. From this it follows that the trace of any real-valued function is the same for both subsystems:

\mathrm{Tr}_1 f(\rho_1) = \mathrm{Tr}_2 f(\rho_2) \quad \Rightarrow \quad S(\rho_1) = S(\rho_2),    (33)

in which the index indicates the part of the Hilbert space that is referenced (system or bath in our case). This result might seem surprising from the classical view of entropy, but we have to realise that the Von Neumann entropy deals with the spreading of information rather than with configurations of atoms. When thinking of information exchange, it seems reasonable to expect that the same amount of information is shared between the subsystems. Araki and Lieb also show the triangle inequality

|S_1 - S_2| \leq S_{12} \leq S_1 + S_2.    (34)

Equation (33) can easily be deduced from equation (34): if the total system is in a pure state, S(\rho_{12}) = 0, so the left inequality in (34) forces the entropies of the subsystems to be equal. For states that can be written in product form, \rho_{12} = \rho_1 \otimes \rho_2, the entropy is additive [8]:

S(\rho_1 \otimes \rho_2) = S(\rho_1) + S(\rho_2)    (35)

However, additivity is not consistent with the total state being pure: combined with S_{12} = 0, equation (35) would force S_1 = S_2 = 0, i.e. every state involved would be pure. We therefore have to conclude that the global pure state cannot be written in product form:

\rho_{12} \neq \rho_1 \otimes \rho_2 \quad \Leftrightarrow \quad |\Psi\rangle \neq |\psi_1\rangle \otimes |\psi_2\rangle    (36)
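The equal subsystem entropies of equation (33) and the triangle inequality (34) can be verified numerically. A minimal sketch, assuming NumPy and an arbitrary illustrative 3 x 4 bipartition:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # convention: 0 ln 0 = 0
    return float(-np.sum(evals * np.log(evals)))

# Random global pure state on H_1 (dim 3) x H_2 (dim 4)
rng = np.random.default_rng(1)
psi = rng.normal(size=12) + 1j * rng.normal(size=12)
psi /= np.linalg.norm(psi)
rho_12 = np.outer(psi, psi.conj()).reshape(3, 4, 3, 4)

rho_1 = np.trace(rho_12, axis1=1, axis2=3)  # trace out subsystem 2
rho_2 = np.trace(rho_12, axis1=0, axis2=2)  # trace out subsystem 1

S1, S2 = von_neumann_entropy(rho_1), von_neumann_entropy(rho_2)
S12 = 0.0                                    # global state is pure
assert np.isclose(S1, S2)                    # eq. (33)
assert abs(S1 - S2) - 1e-9 <= S12 <= S1 + S2 # eq. (34) with S12 = 0
```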

This is exactly the condition required for states to be entangled [18]. If the states were in product form, they would be separable and uncorrelated, and thus not entangled. From this discussion of entropy we have to conclude that the pure state we have been considering must be a superposition of product states:

|\Psi\rangle = \sum_{p,q} W_{pq} \, |\psi_{1,p}\rangle \otimes |\psi_{2,q}\rangle    (37)

This connection through entanglement has been proposed by both Gemmer et al. and Popescu et al. [7, 8]: entangled states have equal subsystem entropies as in equation (33), but do not obey (35). Popescu et al. propose that the Von Neumann entropy of the total state (i.e. the universe) does not change over time, but that entanglement between system and environment does increase. Information about a subsystem spreads into its environment, increasing the entanglement and therefore the local entropy of the subsystems. This could be the qualitative idea behind the mechanism of the second law of thermodynamics.

5.2 System, environment and correlations

With the constraints of the previous subsection in mind, we now want to illustrate what the discussed equilibration of a bipartite system might look like, following the setup of Gemmer and Mahler [8, 19]. We start by considering a Hamiltonian that can be partitioned into the Hamiltonians of the system and the environment plus an interaction term I_{SE}:

H = H_S + H_E + I_{SE}    (38)

The interaction term I_{SE} does not change the energies of the system and environment, ensuring that this setup obeys the constraints set in the previous sections: system and environment do not influence each other directly, but only through entanglement. However, if we conceptualise the system as a gas inside a container (the environment), there must still be an interaction term that keeps the gas inside the container.
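Whether |\Psi\rangle in equation (37) is entangled is decided by the rank of the coefficient matrix W_{pq} (its Schmidt rank): rank one gives exactly the product form excluded in (36). A small NumPy sketch, with a product state and a Bell state as illustrative examples:

```python
import numpy as np

# |Psi> = sum_{p,q} W_pq |psi_1,p>|psi_2,q>, eq. (37): the state is a product
# state exactly when the coefficient matrix W has (Schmidt) rank 1.
W_product = np.outer([1.0, 0.0], [1.0, 0.0])            # |0>|0>
W_entangled = np.eye(2) / np.sqrt(2)                    # Bell state

for W, label in [(W_product, "product"), (W_entangled, "entangled")]:
    print(label, "Schmidt rank =", np.linalg.matrix_rank(W))
# The Bell state has rank 2, so it cannot be written in the product form of (36).
```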
We describe the non-interaction by letting the Hamiltonians of system and environment commute, and by letting the interaction term commute with both of them:

[H_S, H_E] = 0, \quad [H_S, I_{SE}] = 0, \quad [H_E, I_{SE}] = 0    (39)

The last two relations encode the fact that the energy expectation values of both Hamiltonians are time-independent and thus conserved: \langle H_S \rangle = E_g and \langle H_E \rangle = E_c. We can construct a Hamiltonian for a subsystem consisting of an ideal gas g in a container c by summing the kinetic energies of all particles in the subsystem. This Hamiltonian contains all terms that act on the gas and none that act on the container, and thus commutes with the Hamiltonian of the container:

H_S = \sum_n \frac{\hbar^2}{2m_g} (p^g_n)^2    (40)
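A minimal two-qubit toy model (an assumption for illustration, not the gas-container system itself) realises the commutation constraints (39): the interaction commutes with both local Hamiltonians, so the local energies are conserved while the state nevertheless becomes entangled.

```python
import numpy as np

# Toy realisation of constraints (39): H_S, H_E, I_SE commute pairwise,
# so the interaction can entangle the parts without exchanging energy.
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
H_S, H_E = np.kron(sz, I2), np.kron(I2, sz)
I_SE = 0.3 * np.kron(sz, sz)
H = H_S + H_E + I_SE

comm = lambda a, b: a @ b - b @ a
assert np.allclose(comm(H_S, H_E), 0)
assert np.allclose(comm(H_S, I_SE), 0)
assert np.allclose(comm(H_E, I_SE), 0)

# H is diagonal here, so U(t) = exp(-iHt) is a diagonal phase matrix.
t = 1.0
U = np.diag(np.exp(-1j * np.diag(H) * t))
psi0 = np.full(4, 0.5)                       # |+>|+>, a product state
psi_t = U @ psi0

rho_S = np.trace(np.outer(psi_t, psi_t.conj()).reshape(2, 2, 2, 2),
                 axis1=1, axis2=3)
E_S = (psi_t.conj() @ H_S @ psi_t).real      # <H_S> stays at its t=0 value
purity_S = np.trace(rho_S @ rho_S).real      # drops below 1: entanglement grew
assert np.isclose(E_S, 0.0)
assert purity_S < 1.0
```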

Here, p^g_n is the momentum operator acting on gas particle n. The environment is a container of particles bound by an interaction term V_c:

H_E = \sum_n \frac{\hbar^2}{2m_c} (p^c_n)^2 + \frac{1}{2} \sum_{\mu\nu} V_c(q^c_\mu, q^c_\nu)    (41)

where p^c_n is the momentum operator for the particles in the container, and V_c(q^c_\mu, q^c_\nu) describes the interactions between container particles that make the container a solid. The factor 1/2 ensures the interaction is not counted twice when summing over all particle pairs. The interaction between H_S and H_E is accounted for by the term I_{SE}, a repulsive interaction between the particles of the container and the gas that keeps the gas inside the container:

I_{SE} = \sum_{\mu\nu} V_{gc}(q^g_\mu, q^c_\nu)    (42)

The terms V_{gc}(q^g_\mu, q^c_\nu) denote the interactions that keep the gas inside the container, together with any other dephasing interactions that might occur.

It is clear that solving the Schrödinger equation for this Hamiltonian will be impossible because of the macroscopic number of particles involved. Furthermore, we do not even know the interaction terms in detail; we have only assigned certain interactions to certain parts of the Hamiltonian. Instead of specifying more parameters of the gas-container system, we recall that the states of the system and the environment are not separable and are therefore entangled. These correlations arose from the fact that the global state is pure. To identify the correlations between system and environment, we define a matrix describing them [8]:

\rho_c = \rho - \rho_S \otimes \rho_E    (43)

Having defined this correlation matrix, we can find out whether the described system actually equilibrates due to entanglement. To this end, we define a quantity that is the positive square root of the purity:

p_i = \sqrt{P_i} = \sqrt{\mathrm{Tr}(\rho_i^2)}    (44)

The index i = c, S, E stands for the correlations, the system, or the environment respectively.
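The correlation matrix (43) and the purity roots (44) can be computed directly. A sketch assuming NumPy, with a random pure state on an illustrative 2 x 2 bipartition:

```python
import numpy as np

def purity_root(rho):
    """p_i = sqrt(Tr(rho_i^2)), eq. (44); the trace is real for Hermitian rho."""
    return float(np.sqrt(np.trace(rho @ rho).real))

# Random global pure state on a 2 x 2 bipartition
rng = np.random.default_rng(2)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

r4 = rho.reshape(2, 2, 2, 2)
rho_S = np.trace(r4, axis1=1, axis2=3)
rho_E = np.trace(r4, axis1=0, axis2=2)

rho_c = rho - np.kron(rho_S, rho_E)         # correlation matrix, eq. (43)
p, p_S, p_E, p_c = map(purity_root, (rho, rho_S, rho_E, rho_c))
assert np.isclose(p, 1.0)                   # global state is pure
assert np.isclose(p_S, p_E)                 # eq. (33) applied to f(x) = x^2
```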
Using this quantity, we define a coefficient that measures the contribution of the correlations to the global pure state relative to the contributions of system and environment:

\eta = \frac{p_c}{p_S \, p_E}    (45)

If the system and environment were separable pure states, \rho_c would vanish by (43), and the coefficient \eta would reduce to zero. For large values of \eta the correlations contribute strongly to the total state. Taking this into account, we say that the contribution of the correlations is negligible if \eta \ll 1.

Next, we calculate the purity of the correlation matrix:

p_c^2 = \mathrm{Tr}\left((\rho - \rho_S \otimes \rho_E)^2\right) = \mathrm{Tr}(\rho^2) - 2\,\mathrm{Tr}(\rho\, \rho_S \otimes \rho_E) + \mathrm{Tr}\left((\rho_S \otimes \rho_E)^2\right) = p^2 - 2\,\mathrm{Tr}(\rho\, \rho_S \otimes \rho_E) + p_S^2\, p_E^2    (46)

The middle term can be bounded by noting that for Hermitian matrices, \mathrm{Tr}(AB) defines an inner product (the Hilbert-Schmidt inner product): it is symmetric, bilinear, and positive definite. We can therefore apply the Cauchy-Schwarz inequality to this term:

\mathrm{Tr}(\rho\, \rho_S \otimes \rho_E) \leq p\, p_S\, p_E    (47)

With this we can set a lower bound on the purity of the correlation matrix,

p_c^2 \geq p^2 - 2 p\, p_S\, p_E + p_S^2\, p_E^2 = (p - p_S\, p_E)^2    (48)

and insert it into the coefficient (45) to obtain a lower bound:

\eta \geq \frac{p - p_S\, p_E}{p_S\, p_E} = \frac{p}{p_S\, p_E} - 1    (49)

As we are considering a global pure state, the numerator is p = 1, and by (33), which states that the trace of any function is the same for the two subsystems of a global pure state, p_S = p_E. This yields

\eta \geq \frac{1}{p_S^2} - 1 = d_{\mathrm{eff}}(\rho_S) - 1    (50)

For a subsystem that approaches purity, d_{\mathrm{eff}}(\rho_S) approaches 1, which makes the correlation coefficient approach zero: far from equilibrium there is barely any entanglement. Close to equilibrium, however, the subsystem approaches its least pure state, where d_{\mathrm{eff}}(\rho_S) = n^S_{\mathrm{tot}}. There, even for a small subsystem, the correlation coefficient is much larger than one. The equilibrium states are therefore highly entangled, and the entanglement entropy increases as the state equilibrates to its final relaxed state.

6 Conclusion

From the discussion of objective and subjective probabilities in section two we anticipated that making a rigid distinction between the two would be a very involved task.
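The bound (50) can be checked numerically. The sketch below assumes NumPy, illustrative dimensions (a 2-level system with a 20-level environment), and takes d_eff(\rho_S) = 1/\mathrm{Tr}(\rho_S^2) for the effective dimension:

```python
import numpy as np

# Check the bound eta >= d_eff(rho_S) - 1 of eq. (50) for a random
# pure state on a small system (dim 2) and a larger bath (dim 20).
rng = np.random.default_rng(3)
dS, dE = 2, 20
psi = rng.normal(size=dS * dE) + 1j * rng.normal(size=dS * dE)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

r4 = rho.reshape(dS, dE, dS, dE)
rho_S = np.trace(r4, axis1=1, axis2=3)
rho_E = np.trace(r4, axis1=0, axis2=2)
rho_c = rho - np.kron(rho_S, rho_E)          # eq. (43)

p = lambda m: np.sqrt(np.trace(m @ m).real)  # eq. (44)
eta = p(rho_c) / (p(rho_S) * p(rho_E))       # eq. (45)
d_eff = 1.0 / np.trace(rho_S @ rho_S).real   # effective dimension of rho_S

assert eta >= d_eff - 1 - 1e-9               # eq. (50)
print(f"eta = {eta:.3f}, d_eff - 1 = {d_eff - 1:.3f}")
```

For a random state entangled with a large bath, \rho_S is close to maximally mixed, so d_eff approaches the system dimension and the bound forces \eta well above zero, consistent with the claim that equilibrium states are highly entangled.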
By using a global pure state as a starting point we at least greatly limited the use of subjective lack of knowledge in identifying when systems equilibrate, but we cannot claim to have completely ruled out subjective probability: Subjective choices can be made in a great number of places, such as

when partitioning the total state into a system and an environment. However, these choices are guided by physical constraints as well, and more importantly, we have not used any statistical techniques or ensemble averaging in our discussion of equilibration. The Von Neumann entropy cannot be said to be intrinsically objective or subjective: its subjectivity depends on the setup of the states for which the entropy is computed. In our setup the mixing of states does not signify an ensemble approach, but undetermined interactions due to entanglement. These interactions arise from the nature of quantum mechanics and represent a fundamental indeterminacy of the considered states instead of a subjective lack of knowledge. This makes the entanglement entropy that we use an objective measure. From the constraints on the Von Neumann entropy that we set, we saw that quantum entanglement is likely to play an important role in the equilibration that we discussed. If we accept this as the main mechanism of equilibration for subsystems of pure states, the local entanglement entropy can be seen as a good measure to underlie the second law in an objective manner.

References

[1] Jos Uffink. Boltzmann's Work in Statistical Physics. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Fall 2014 edition, 2014.
[2] Noah Linden, Sandu Popescu, Anthony J. Short, and Andreas Winter. Quantum mechanical evolution towards thermal equilibrium. Physical Review E, 79(6):1-12, 2009.
[3] Peter Reimann. Foundation of statistical mechanics under experimentally realistic conditions. Physical Review Letters, 101(19):1-4, 2008.
[4] Roman Frigg. Chance in Boltzmannian Statistical Mechanics. Philosophy of Science, 75(5):670-681, 2008.
[5] Anthony J. Short and Terence C. Farrelly. Quantum equilibration in finite time. New Journal of Physics, 14, 2012.
[6] J. M. Deutsch, Haibin Li, and Auditya Sharma. Microscopic origin of thermodynamic entropy in isolated systems. Physical Review E, 87(4), 2013.
[7] Sandu Popescu, Anthony J. Short, and Andreas Winter. Entanglement and the foundations of statistical mechanics. Nature Physics, 2(11):754-758, 2006.

[8] Jochen Gemmer, M. Michel, and Günter Mahler. Quantum Thermodynamics. Lecture Notes in Physics. Springer, Berlin, Heidelberg, 2009.
[9] Liangsheng Zhang, Hyungwon Kim, and David A. Huse. Thermalization of entanglement. Physical Review E, 91(6):062128, 2015.
[10] Wayne C. Myrvold. Probabilities in Statistical Mechanics: Objective, Subjective, or a Bit of Both? (Preprint). 2011. http://philsci-archive.pitt.edu/8642/1/statmechprobs.pdf.
[11] Frederick Reif. Fundamentals of Statistical and Thermal Physics. McGraw-Hill, Singapore, international edition, 1965.
[12] Eric Winsberg. Laws and chances in statistical mechanics. Studies in History and Philosophy of Science Part B, 39(4):872-888, 2008.
[13] Anthony J. Short. Equilibration of quantum systems and subsystems. New Journal of Physics, 13:1-5, 2010.
[14] David J. Griffiths. Introduction to Quantum Mechanics. Pearson Education, second edition, 2005.
[15] G. H. Golub and C. F. Van Loan. Matrix Computations, volume 10. 1996.
[16] Wim van Dam and Patrick Hayden. Renyi-entropic bounds on quantum communication. 2002. http://arxiv.org/abs/quant-ph/0204093.
[17] Huzihiro Araki and Elliott H. Lieb. Entropy inequalities. Communications in Mathematical Physics, 18(2):160-170, 1970.
[18] Antonella De Pasquale. Bipartite entanglement of large quantum systems. 2012. http://arxiv.org/abs/1206.6749.
[19] J. Gemmer and G. Mahler. Distribution of local entropy in the Hilbert space of bi-partite quantum systems: Origin of Jaynes principle. European Physical Journal B, 31(2):249-257, 2003.