# Quantum Monte Carlo Methods for Strongly Correlated Electron Systems



Shiwei Zhang

ABSTRACT. We review some of the recent developments in quantum Monte Carlo (QMC) methods for models of strongly correlated electron systems. QMC is a promising general theoretical tool for studying many-body systems, and has been widely applied in areas spanning condensed-matter, high-energy, and nuclear physics. Recent progress includes two new methods, the ground-state and finite-temperature constrained path Monte Carlo methods. These methods significantly improve the capability of numerical approaches to lattice models of correlated electron systems. They allow calculations without any decay of the sign, making possible calculations for large system sizes and low temperatures. The methods are approximate; benchmark calculations show that accurate results for energies and correlation functions can be obtained. This chapter gives a pedagogical introduction to quantum Monte Carlo, with a focus on the constrained path Monte Carlo methods.

## 1 Introduction

In order to understand the properties of correlated electron systems and their theoretical and technological implications, accurate calculations are necessary to treat microscopic correlations correctly. This crucial need is manifested in the daily demand to solve model systems, which are often intractable analytically, in order to develop and benchmark theory and compare with experiments. Calculating electron correlation effects is an extremely challenging task. It requires the solution of the Schrödinger equation, or evaluation of the density matrix, for a many-body system at equilibrium. Explicit numerical approaches are an obvious possibility, which have found wide application and proved very valuable, e.g., exact diagonalization in the study of lattice models for high-temperature superconductivity and for magnetism, and configuration interaction in quantum chemistry.
However, since the dimensionality involved grows exponentially with system size, such approaches all suffer from exponential complexity, i.e., exponential scaling of the required computer time with system size or accuracy. As an example, with the most powerful computers we can now treat a lattice of about 50 sites in exact diagonalization of the two-dimensional spin-1/2 Heisenberg model, up from 20 sites twenty years ago. During this period, the peak speed of computers has doubled roughly every two years.

Monte Carlo methods offer a promising alternative for the numerical study of many-body systems. Their required computer time scales algebraically (as opposed to exponentially, as in exact diagonalization) with system size. Rather than explicitly integrating over phase space, Monte Carlo methods sample it. The central limit theorem dictates that the statistical error in a Monte Carlo calculation decays algebraically with computer time. With the rapid advent of scalable parallel computers, these methods offer the possibility of studying large system sizes systematically and extracting information about the thermodynamic limit.

For fermion systems, however, Monte Carlo methods suffer from the so-called "sign" problem[1, 2, 3]. In these systems, the Pauli exclusion principle requires that the states be antisymmetric under interchange of two particles. As a consequence, negative signs appear, which cause cancellations among the contributions of the Monte Carlo samples of phase space. In fact, as the temperature is lowered or the system size is increased, the cancellation becomes more and more complete. The net signal thus decays exponentially. The algebraic scaling is then lost, and the method breaks down. Clearly the impact of this problem on the study of correlated electron systems is extremely severe. To date, most applications of QMC methods to strongly correlated electron models have either lived with the sign problem or relied on some form of approximation to overcome the exponential scaling. The former has difficulty reaching large system sizes or low temperatures; the latter loses the exactness of QMC, and the results are biased by the approximation. Despite these limitations, QMC methods have proved a very useful theoretical tool in the study of strongly correlated electron systems.
In many cases they have provided very accurate and reliable numerical results, which are sometimes the only ones available for the system in question. Especially recently, progress has been rapid in the development of approximate QMC methods for lattice models, which has led to a growing number of applications. For example, with the constrained path Monte Carlo (CPMC) method discussed below, a system of 220 electrons on a lattice in the Hubbard model can be studied with moderate computer time[4]. The dimension of the Hilbert space for such a system far exceeds what exact diagonalization can treat.¹

In this chapter, we review some of the recent progress in the study of models for strongly correlated electron systems with quantum Monte Carlo methods. The chapter is not meant to be a comprehensive review. Our focus is on the ground-state and finite-temperature constrained path Monte Carlo methods[5, 3, 6, 7]. We will, however, include in the Appendix some discussion of the variational Monte Carlo and Green's function Monte Carlo (GFMC)[8] methods, which have also seen extensive application in the

¹ The size of the Hilbert space for a system with 109 electrons with up spins and 109 with down spins is about

study of correlated electron models. We do so to illustrate that, while GFMC is very different from the auxiliary-field-based methods we focus on, the underlying ideas and the nature of the sign problems have much in common. Our goal is to highlight these ideas and show the capabilities that QMC methods in general bring as a theoretical tool, as well as the current algorithmic difficulties.

## 2 Preliminaries

### 2.1 Starting point of quantum Monte Carlo (QMC)

Most ground-state QMC methods are based on

$$|\Psi_0\rangle \propto \lim_{\beta\to\infty} e^{-\beta \hat H} |\Psi_T\rangle, \tag{1.1}$$

that is, the ground state $|\Psi_0\rangle$ of a many-body Hamiltonian $\hat H$ can be projected from any known trial state $|\Psi_T\rangle$ that satisfies $\langle \Psi_T | \Psi_0 \rangle \neq 0$. In a numerical method, the limit can be obtained iteratively by

$$|\Psi^{(n+1)}\rangle = e^{-\Delta\tau \hat H} |\Psi^{(n)}\rangle, \tag{1.2}$$

where $|\Psi^{(0)}\rangle = |\Psi_T\rangle$. The ground-state expectation $\langle \hat O \rangle$ of a physical observable $\hat O$ is given by

$$\langle \hat O \rangle = \lim_{n\to\infty} \frac{\langle \Psi^{(n)} | \hat O | \Psi^{(n)} \rangle}{\langle \Psi^{(n)} | \Psi^{(n)} \rangle}. \tag{1.3}$$

For example, the ground-state energy can be obtained by letting $\hat O = \hat H$. A so-called mixed estimator exists, however, which is exact for the energy (or any other $\hat O$ that commutes with $\hat H$) and can lead to considerable simplifications in practice:

$$E_0 = \lim_{n\to\infty} \frac{\langle \Psi_T | \hat H | \Psi^{(n)} \rangle}{\langle \Psi_T | \Psi^{(n)} \rangle}. \tag{1.4}$$

Finite-temperature QMC methods use the density matrix. The expectation value of $\hat O$ is

$$\langle \hat O \rangle = \frac{\mathrm{Tr}(\hat O\, e^{-\beta \hat H})}{\mathrm{Tr}(e^{-\beta \hat H})}, \tag{1.5}$$

where $\beta = 1/kT$ is the inverse temperature. In other words, $\langle \hat O \rangle$ is simply a weighted average with respect to the density matrix $e^{-\beta \hat H}$. In a numerical method, the partition function in the denominator of Eq. (1.5) is written as

$$Z \equiv \mathrm{Tr}(e^{-\beta \hat H}) = \mathrm{Tr}[\underbrace{e^{-\Delta\tau \hat H} e^{-\Delta\tau \hat H} \cdots e^{-\Delta\tau \hat H}}_{L}], \tag{1.6}$$

where $\Delta\tau = \beta/L$ is the "time" step and $L$ is the number of time slices.

Quantum Monte Carlo methods carry out the iteration in Eq. (1.2) or the trace in Eq. (1.6), both requiring integration in many-dimensional spaces, by Monte Carlo sampling. The difference between the various classes of methods lies primarily in the space that is used to represent the wave function or density matrix and to carry out the integration. The ground-state and finite-temperature constrained path Monte Carlo methods work in second-quantized representation and in an auxiliary-field space, while Green's function Monte Carlo works in first-quantized representation and in configuration space.

### 2.2 Basics of Monte Carlo techniques

We list a few key elements of standard Monte Carlo techniques. In addition to serving as a brief reminder to the reader, they will help to introduce several results and notations that will be useful in our discussion of the QMC methods. For extensive discussions of Monte Carlo methods, excellent books exist[9].

Monte Carlo methods are often used to compute many-dimensional integrals of the form

$$G = \frac{\int_{\Omega_0} f(\vec x)\, g(\vec x)\, d\vec x}{\int_{\Omega_0} f(\vec x)\, d\vec x}, \tag{1.7}$$

where $\vec x$ is a vector in a many-dimensional space and $\Omega_0$ is a domain in this space. We will assume that $f(\vec x) \ge 0$ on $\Omega_0$ and that it is normalizable, i.e., the denominator is finite. A familiar example of the integral in Eq. (1.7) comes from classical statistical physics, where $f(\vec x)$ is the Boltzmann distribution.

To compute $G$ by Monte Carlo, we sample $\vec x$ from a probability density function (PDF) proportional to $f(\vec x)$, i.e., the PDF $\bar f(\vec x) \equiv f(\vec x) / \int_{\Omega_0} f(\vec x)\, d\vec x$. This means we generate a sequence $\{\vec x_1, \vec x_2, \ldots, \vec x_i, \ldots\}$ such that the probability that any $\vec x_i$ is in the sub-domain $(\vec x, \vec x + d\vec x)$ is

$$\mathrm{Prob}\{\vec x_i \in (\vec x, \vec x + d\vec x)\} = \bar f(\vec x)\, d\vec x. \tag{1.8}$$

Below, when we refer to sampling a function $f(\vec x)$, it should be understood as sampling the corresponding PDF $\bar f(\vec x)$.
There are different techniques to sample a many-dimensional function $f(\vec x)$. The most general, and perhaps most often used, is the Metropolis algorithm, which creates a Markov-chain random walk in $\vec x$-space whose equilibrium distribution is the desired function. Given $M$ independent samples from $f(\vec x)$, the integral in Eq. (1.7) is estimated by

$$G_M = \frac{1}{M} \sum_{i=1}^{M} g(\vec x_i). \tag{1.9}$$
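As a concrete illustration of Eqs. (1.7)-(1.9), here is a minimal one-dimensional Metropolis sketch of our own (all names and parameter values are illustrative, not from the text): the weight $f$ is a Gaussian and $g(x) = x^2$, so the exact value of $G$ is 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target weight f(x) ~ exp(-x^2/2), a 1D stand-in for the Boltzmann factor.
# We estimate G = <g(x)>_f with g(x) = x^2; the exact answer is 1.
f = lambda x: np.exp(-0.5 * x * x)
g = lambda x: x * x

x = 0.0
samples = []
for step in range(200_000):
    x_new = x + rng.uniform(-1.0, 1.0)            # symmetric proposal kernel
    if rng.random() < min(f(x_new) / f(x), 1.0):  # Metropolis acceptance
        x = x_new
    samples.append(g(x))       # a rejected move recounts the current x

G_M = np.mean(samples[10_000:])                   # discard burn-in, Eq. (1.9)
print(G_M)
```

Note that a rejected proposal still contributes the current position as a new sample; this detail reappears in the discussion of the acceptance probability in Sec. 3.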

The error in the estimate decays algebraically with the number of samples: $|G_M - G| \propto 1/\sqrt{M}$.

Using the results above, we can compute

$$G' = \frac{\int_{\Omega_0} f(\vec x)\, g(\vec x)\, h(\vec x)\, d\vec x}{\int_{\Omega_0} f(\vec x)\, h(\vec x)\, d\vec x}, \tag{1.10}$$

if the function $h(\vec x)$ is such that both the numerator and denominator exist. Formally,

$$G'_M = \frac{\sum_{i=1}^{M} g(\vec x_i)\, h(\vec x_i)}{\sum_{i=1}^{M} h(\vec x_i)}, \tag{1.11}$$

although, as we will see, difficulties arise when $h(\vec x)$ can change sign and is rapidly oscillating.

Integral equations are another main area of application of Monte Carlo methods. For example[9], the integral equation

$$\Psi'(\vec x) = \int_{\Omega_0} K(\vec x, \vec y)\, w(\vec y)\, \Psi(\vec y)\, d\vec y \tag{1.12}$$

can be viewed in terms of a random walk if it has the following properties: $\Psi(\vec y)$ and $\Psi'(\vec x)$ can be viewed as PDFs (in the sense of $f$ in Eq. (1.7)), $w(\vec y) \ge 0$, and $K(\vec x, \vec y)$ is a PDF for $\vec x$ conditional on $\vec y$. Then, given an ensemble $\{\vec y_i\}$ sampling $\Psi(\vec y)$, the following two steps allow us to generate an ensemble that samples $\Psi'(\vec x)$. First, an absorption/branching process is applied to each $\vec y_i$ according to $w(\vec y_i)$; for example, we can make $\mathrm{int}(w(\vec y_i) + \xi)$ copies of $\vec y_i$, where $\xi$ is a uniform random number on $(0, 1)$. Second, we randomly walk each new $\vec y_j$ to an $\vec x_j$ by sampling the PDF $K(\vec x, \vec y_j)$. The resulting $\{\vec x_j\}$ are Monte Carlo samples of $\Psi'(\vec x)$.

We emphasize that the purpose of our discussion of random walks here is to illustrate the basic concept, which we will use later. The simple procedure described above is thus not meant as an accurate account of the technical details necessary for an efficient implementation.

### 2.3 Slater determinant space

In auxiliary-field-based QMC methods[10, 11, 12, 13], the Monte Carlo algorithm works in a space of Slater determinants. The building blocks of Slater determinants are single-particle basis states. The single-particle basis states can be plane waves, or lattice sites in the Hubbard model, or energy eigenstates in a mean-field potential.
Often the space of single-particle basis states is truncated. Single-particle wave functions (orbitals) are formed from the basis states. Slater determinants are then built from the single-particle orbitals. We first define some notation that we will use throughout the discussion of the standard auxiliary-field quantum Monte Carlo (AFQMC) method and, later, the constrained path Monte Carlo (CPMC) methods.

- $N$: number of single-electron basis states. For example, $N$ can be the number of lattice sites ($L \times L$) in the two-dimensional Hubbard model.
- $|\chi_i\rangle$: the $i$th single-particle basis state ($i = 1, 2, \ldots, N$). For example, $|\chi_G\rangle$ can be a plane-wave basis state with $\chi_G(r) \propto e^{iG\cdot r}$, where $r$ is a real-space co-ordinate.
- $c_i^\dagger$ and $c_i$: creation and annihilation operators for an electron in $|\chi_i\rangle$. $n_i \equiv c_i^\dagger c_i$ is the corresponding number operator.
- $M$: number of electrons (if we omit the spin index, e.g., if the system is fully polarized). In the more general case, $M_\sigma$ is the number of electrons with spin $\sigma$ ($\sigma = \uparrow$ or $\downarrow$). Of course, the choice of $N$ above must ensure that $M_\sigma \le N$.
- $\varphi_m$: single-particle orbital (we include an index $m$ in the discussion below to distinguish different single-particle orbitals). A single-particle orbital $\varphi_m$, given in terms of the single-particle basis states $\{|\chi_i\rangle\}$ as $\sum_i \varphi_{i,m} |\chi_i\rangle = \sum_i c_i^\dagger \varphi_{i,m} |0\rangle$, can be conveniently expressed as an $N$-dimensional vector:

$$\begin{pmatrix} \varphi_{1,m} \\ \varphi_{2,m} \\ \vdots \\ \varphi_{N,m} \end{pmatrix}.$$

Given $M$ different single-particle orbitals, we form a many-body wave function from their anti-symmetrized product:

$$|\phi\rangle \equiv \hat\varphi_1^\dagger\, \hat\varphi_2^\dagger \cdots \hat\varphi_M^\dagger\, |0\rangle, \tag{1.13}$$

where the operator

$$\hat\varphi_m^\dagger \equiv \sum_i c_i^\dagger\, \varphi_{i,m} \tag{1.14}$$

creates an electron in the $m$th single-particle orbital $\{\varphi_{1,m}, \varphi_{2,m}, \ldots, \varphi_{N,m}\}$. The many-body state $|\phi\rangle$ in Eq. (1.13) can be conveniently expressed as an $N \times M$ matrix:

$$\begin{pmatrix} \varphi_{1,1} & \varphi_{1,2} & \cdots & \varphi_{1,M} \\ \varphi_{2,1} & \varphi_{2,2} & \cdots & \varphi_{2,M} \\ \vdots & \vdots & & \vdots \\ \varphi_{N,1} & \varphi_{N,2} & \cdots & \varphi_{N,M} \end{pmatrix}.$$

Each column of this matrix represents a single-particle orbital that is completely specified by its $N$-dimensional vector.

If the real-space co-ordinates of the electrons are $R = \{r_1, r_2, \ldots, r_M\}$, the many-body state in Eq. (1.13) gives

$$\langle R | \phi \rangle = \phi(R) = \det \begin{pmatrix} \varphi_1(r_1) & \varphi_2(r_1) & \cdots & \varphi_M(r_1) \\ \varphi_1(r_2) & \varphi_2(r_2) & \cdots & \varphi_M(r_2) \\ \vdots & \vdots & & \vdots \\ \varphi_1(r_M) & \varphi_2(r_M) & \cdots & \varphi_M(r_M) \end{pmatrix},$$

where $\varphi_m(r) = \sum_i \varphi_{i,m}\, \chi_i(r)$. The many-body state $|\phi\rangle$ is known as a Slater determinant. One example of a Slater determinant is the Hartree-Fock (HF) solution $|\phi_{\mathrm{HF}}\rangle = \prod_\sigma |\phi^\sigma_{\mathrm{HF}}\rangle$, where each $|\phi^\sigma_{\mathrm{HF}}\rangle$ is defined by a matrix $\Phi^\sigma_{\mathrm{HF}}$ whose columns are the $M_\sigma$ lowest HF eigenstates.

We can now add the following to our list of notations above:

- $|\phi\rangle$: a many-body wave function which can be written as a Slater determinant.
- $\Phi$: an $N \times M$ matrix which represents the Slater determinant $|\phi\rangle$. $\Phi_{ij}$ will denote the matrix element of $\Phi$ in the $i$th row and $j$th column; for example, $\Phi_{ij} = \varphi_{i,j}$ above. Below, when a Slater determinant $|\phi\rangle$ is referred to, it will often be helpful to think operationally in terms of its matrix representation $\Phi$.
- $|\Psi\rangle$ (upper case): a many-body wave function which is not necessarily a single Slater determinant, e.g., $|\Psi^{(n)}\rangle$ in Eq. (1.2).

Several properties of Slater determinants are worth mentioning. For any two real, non-orthogonal Slater determinants $|\phi\rangle$ and $|\phi'\rangle$, it can be shown that their overlap integral is

$$\langle \phi | \phi' \rangle = \det(\Phi^T \Phi'). \tag{1.15}$$

The operation on any Slater determinant by any operator $\hat B$ of the form

$$\hat B = \exp\Big( \sum_{ij} c_i^\dagger\, U_{ij}\, c_j \Big) \tag{1.16}$$

simply leads to another Slater determinant[14], i.e.,

$$\hat B |\phi\rangle = \hat\varphi_1'^\dagger\, \hat\varphi_2'^\dagger \cdots \hat\varphi_M'^\dagger\, |0\rangle \equiv |\phi'\rangle \tag{1.17}$$

with $\hat\varphi_m'^\dagger = \sum_j c_j^\dagger\, \Phi'_{jm}$ and $\Phi' = e^U \Phi$, where $U$ is the square matrix whose elements are $U_{ij}$, and $B \equiv e^U$ is therefore an $N \times N$ square matrix as well. In other words, the operation of $\hat B$ on $|\phi\rangle$ simply involves multiplying the $N \times N$ matrix $B$ by the $N \times M$ matrix $\Phi$.
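For small $N$ and $M$, the overlap formula Eq. (1.15), and the ground-state Green's function formula that appears as Eq. (1.22) below, can be checked numerically by brute-force expansion in the occupation-number basis. The NumPy sketch below is our own (all variable names are ours); the brute-force overlap agrees with $\det(\Phi^T\Phi')$ by the Cauchy-Binet identity, and $\langle\phi| c_i c_j^\dagger |\phi'\rangle$ is obtained by appending the extra orbital as a new first column on each side.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
N, M = 5, 2                       # 5 basis states, 2 (spinless) electrons

Phi = rng.normal(size=(N, M))     # matrix of the Slater determinant |phi>
Phi2 = rng.normal(size=(N, M))    # matrix of the Slater determinant |phi'>

# Brute force: expand each determinant over occupied configurations
# (i1 < i2 < ...); the amplitude is the corresponding M x M minor.
def amplitudes(P):
    return {occ: np.linalg.det(P[list(occ), :])
            for occ in combinations(range(P.shape[0]), M)}

a, b = amplitudes(Phi), amplitudes(Phi2)
overlap_brute = sum(a[occ] * b[occ] for occ in a)
overlap_det = np.linalg.det(Phi.T @ Phi2)        # Eq. (1.15)
print(abs(overlap_brute - overlap_det))          # ~ 0 (Cauchy-Binet)

# Green's function of Eq. (1.22), versus <phi|c_i c_j^+|phi'> computed as
# an (M+1)-particle overlap with e_i (e_j) prepended as a first column.
G = np.eye(N) - Phi2 @ np.linalg.inv(Phi.T @ Phi2) @ Phi.T
e = np.eye(N)
for i in range(N):
    for j in range(N):
        left = np.column_stack([e[:, i], Phi])    # c_i^+ |phi>
        right = np.column_stack([e[:, j], Phi2])  # c_j^+ |phi'>
        G_brute = np.linalg.det(left.T @ right) / overlap_det
        assert abs(G_brute - G[i, j]) < 1e-10
print("Eq. (1.22) verified")
```

The second check works because expanding $\det([e_i, \Phi]^T [e_j, \Phi'])$ by the Schur complement reproduces $\det(\Phi^T\Phi')\,(\delta_{ij} - [\Phi'(\Phi^T\Phi')^{-1}\Phi^T]_{ij})$.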

The many-body trace of an operator which is an exponential of one-body operators (i.e., of the form $\hat B$ in Eq. (1.16)), or a product of such exponentials, can be conveniently evaluated. The grand-canonical trace, which involves summing over a complete basis for a fixed number of particles $M$ as well as over different numbers of particles (from 0 to $N$), has a particularly simple form[10, 12]:

$$\mathrm{Tr}(\hat B) = \det[I + B], \tag{1.18}$$

where $I$ is the $N \times N$ unit matrix and $B$, once again, is the matrix corresponding to the operator $\hat B$. The canonical trace, which is for a fixed number of particles $M$, is given by

$$\mathrm{Tr}_M(\hat B) = \frac{1}{M!} \frac{d^M}{d\mu^M} \det[I + \mu B] \Big|_{\mu=0}. \tag{1.19}$$

Note that this is simply the sum of all rank-$M$ diagonal minors of the matrix $B$. As we would expect, the sum of Eq. (1.19) over all possible values of $M$, with the appropriate factor for the chemical potential, recovers Eq. (1.18).

For an operator $\hat O$, we can define its expectation with respect to a pair of Slater determinants,

$$\overline{\langle \hat O \rangle} \equiv \frac{\langle \phi | \hat O | \phi' \rangle}{\langle \phi | \phi' \rangle}, \tag{1.20}$$

or with respect to a propagator $\hat B$ under the finite-temperature grand-canonical formalism of Eq. (1.18),

$$\overline{\langle \hat O \rangle} \equiv \frac{\mathrm{Tr}(\hat O \hat B)}{\mathrm{Tr}(\hat B)}. \tag{1.21}$$

The "bar" distinguishes these from the true interacting many-body expectations in Eqs. (1.3) and (1.5); the latter are of course what we wish to compute with QMC.

The simplest example of Eqs. (1.20) and (1.21) is the single-particle Green's function $G_{ij} \equiv \overline{\langle c_i c_j^\dagger \rangle}$. In the ground-state formalism,

$$G_{ij} \equiv \frac{\langle \phi | c_i c_j^\dagger | \phi' \rangle}{\langle \phi | \phi' \rangle} = \delta_{ij} - [\Phi' (\Phi^T \Phi')^{-1} \Phi^T]_{ij}. \tag{1.22}$$

In the finite-temperature grand-canonical formalism,

$$G_{ij} \equiv \frac{\mathrm{Tr}(c_i c_j^\dagger \hat B)}{\mathrm{Tr}(\hat B)} = [(I + B)^{-1}]_{ij}. \tag{1.23}$$

Given the Green's function $G$, the general expectation defined in Eqs. (1.20) and (1.21) can be computed for most operators of interest. This is an

important property that will be used in ground-state and finite-temperature QMC calculations. For example, we can calculate the expectation of a general two-body operator, $\hat O = \sum_{ijkl} O_{ijkl}\, c_i^\dagger c_j^\dagger c_k c_l$, under definitions (1.20) and (1.21):

$$\overline{\langle \hat O \rangle} = \sum_{ijkl} O_{ijkl}\, (G'_{jk} G'_{il} - G'_{ik} G'_{jl}), \tag{1.24}$$

where the matrix $G'$ is defined as $G' \equiv I - G$.

### 2.4 Hubbard-Stratonovich transformation

In order to carry out Eqs. (1.2) and (1.6) in the Slater-determinant space we have introduced above, we write the many-body propagator $e^{-\Delta\tau \hat H}$ in single-particle form. For simplicity we will treat the system as spin-polarized and suppress the spin index in most of the discussion below; it is, however, straightforward to generalize the discussion to include spin. Assuming that the many-body Hamiltonian involves at most two-body interactions, we can write it as

$$\hat H = \hat K + \hat V = \sum_{i,j} K_{ij}\, (c_i^\dagger c_j + c_j^\dagger c_i) + \sum_{ijkl} V_{ijkl}\, c_i^\dagger c_j^\dagger c_k c_l. \tag{1.25}$$

For example, $\hat K$ and $\hat V$ can be the kinetic and potential energy operators, respectively. With a small $\Delta\tau > 0$, the Trotter approximation can be used:

$$e^{-\Delta\tau \hat H} \approx e^{-\Delta\tau \hat K}\, e^{-\Delta\tau \hat V}, \tag{1.26}$$

which introduces a Trotter error. In actual implementations of the algorithms we discuss here, higher-order Trotter break-ups are often used. The Trotter error can be further reduced with an extrapolation procedure after separate calculations have been done with different values of $\Delta\tau$.

The $\hat K$ part of the propagator in (1.26) is the exponential of a one-body operator. The $\hat V$ part is not. It is, however, possible to rewrite $e^{-\Delta\tau \hat V}$ in the desired form through a so-called Hubbard-Stratonovich (HS) transformation[15]. As we will show below, $\hat V$ can be written as a sum of terms of the form $\lambda \hat v^2 / 2$, where $\lambda$ is a constant and $\hat v$ is a one-body operator similar to $\hat K$. The Hubbard-Stratonovich transformation then allows us to write

$$e^{-\Delta\tau \lambda \hat v^2 / 2} = \int_{-\infty}^{\infty} \frac{dx}{\sqrt{2\pi}}\; e^{-x^2/2}\, e^{x \sqrt{-\Delta\tau \lambda}\, \hat v}, \tag{1.27}$$

where $x$ is an auxiliary-field variable.
The constant $\sqrt{-\Delta\tau\lambda}$ in the exponent on the right-hand side can be real or imaginary, depending on the sign of $\lambda$. The key is that the quadratic form (in $\hat v$) on the left is replaced by a linear one on the right.
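For a scalar $v$, Eq. (1.27) is an ordinary Gaussian integral and can be checked by quadrature. The sketch below is our own (illustrative values; $\lambda < 0$ is chosen so that $\sqrt{-\Delta\tau\lambda}$ is real and everything stays real-valued):

```python
import numpy as np

# Scalar check of Eq. (1.27): e^{-dtau*lam*v^2/2} equals the Gaussian
# average of e^{x*sqrt(-dtau*lam)*v}. Values of dtau, lam, v are illustrative.
dtau, lam, v = 0.1, -2.0, 1.3          # lam < 0 keeps sqrt(-dtau*lam) real
lhs = np.exp(-dtau * lam * v**2 / 2)

x = np.linspace(-10.0, 10.0, 20001)    # quadrature grid for the x integral
h = x[1] - x[0]
integrand = np.exp(-x**2 / 2) * np.exp(x * np.sqrt(-dtau * lam) * v)
rhs = integrand.sum() * h / np.sqrt(2 * np.pi)

print(abs(lhs - rhs))                  # ~ 0 up to quadrature error
```

Completing the square in the exponent shows why: the Gaussian average of $e^{cx}$ is $e^{c^2/2}$, which reproduces the left-hand side with $c = \sqrt{-\Delta\tau\lambda}\, v$.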

We now show one way to write $\hat V$ as a sum of terms $\lambda \hat v^2 / 2$. With the most general form of $\hat V$ in Eq. (1.25), we can cast $V_{ijkl}$ in the form of a Hermitian matrix by introducing two indices $\alpha = (i, l)$ and $\beta = (j, k)$ and letting $V_{\alpha\beta} = V_{(i,l),(j,k)} = V_{ijkl}$. The Hermitian matrix $V$ can then be diagonalized and written as $V = R \Lambda R^\dagger$, where $R$ is a matrix whose columns are the eigenvectors of $V$ and $\Lambda$ is a diagonal matrix containing the corresponding eigenvalues $\lambda_\gamma$. That is,

$$V_{\alpha\beta} = \sum_\gamma R_{\alpha\gamma}\, \lambda_\gamma\, R^*_{\beta\gamma}. \tag{1.28}$$

The two-body operator $\hat V$ can therefore be written as

$$\hat V = \sum_{ijkl} V_{ijkl}\, c_i^\dagger c_l\, c_j^\dagger c_k - \sum_{ijkl} V_{ijkl}\, c_i^\dagger c_k\, \delta_{jl} = \sum_\gamma \lambda_\gamma \Big( \sum_{il} R_{(i,l)\gamma}\, c_i^\dagger c_l \Big) \Big( \sum_{jk} R^*_{(j,k)\gamma}\, c_j^\dagger c_k \Big) - \sum_{ik} \Big( \sum_j V_{ijkj} \Big) c_i^\dagger c_k.$$

Noting that $\hat V$ is Hermitian, we can put the above in a more symmetric form:

$$\hat V = \frac{1}{2} \sum_\gamma \lambda_\gamma\, \{\hat\rho_\gamma, \hat\rho_\gamma^\dagger\} + \hat V_0, \tag{1.29}$$

where the one-body operators are defined as $\hat\rho_\gamma \equiv \sum_{il} R_{(i,l)\gamma}\, c_i^\dagger c_l$ and $\hat V_0 \equiv -\sum_{ik} \big[ \sum_j (V_{ijkj} + V_{jijk})/2 \big] c_i^\dagger c_k$. Since

$$\{\hat\rho_\gamma, \hat\rho_\gamma^\dagger\} = \frac{1}{2} \big[ (\hat\rho_\gamma + \hat\rho_\gamma^\dagger)^2 - (\hat\rho_\gamma - \hat\rho_\gamma^\dagger)^2 \big], \tag{1.30}$$

we have succeeded in writing $\hat V$ in the desired form.

The decomposition of $\hat V$ above leads to approximately $2N^2$ auxiliary fields. Often the structure of the interaction simplifies $\hat V$, and the number of auxiliary fields can be much reduced. In fact, certain types of interactions have particularly simple forms of Hubbard-Stratonovich transformations. For example, for the repulsive on-site interaction $U n_{i\uparrow} n_{i\downarrow}$ ($\uparrow$ and $\downarrow$ denote electron spin) in the Hubbard model, an exact, discrete Hubbard-Stratonovich transformation[16] exists:

$$e^{-\Delta\tau U n_{i\uparrow} n_{i\downarrow}} = e^{-\Delta\tau U (n_{i\uparrow} + n_{i\downarrow})/2} \sum_{x_i = \pm 1} \frac{1}{2}\, e^{\gamma x_i (n_{i\uparrow} - n_{i\downarrow})}, \tag{1.31}$$

where the constant $\gamma$ is determined by $\cosh(\gamma) = \exp(\Delta\tau U / 2)$. Similarly, for an attractive interaction $V n_{i\uparrow} n_{i\downarrow}$ with $V < 0$:

$$e^{-\Delta\tau V n_{i\uparrow} n_{i\downarrow}} = e^{-\Delta\tau V (n_{i\uparrow} + n_{i\downarrow} - 1)/2} \sum_{x_i = \pm 1} \frac{1}{2}\, e^{\gamma x_i (n_{i\uparrow} + n_{i\downarrow} - 1)}, \tag{1.32}$$

where $\cosh(\gamma) = \exp(\Delta\tau |V| / 2)$.

If we denote the collection of auxiliary fields by $\vec x$ and combine the one-body terms from $\hat K$ and $\hat V$, we obtain the following compact representation of the outcome of the HS transformation:

$$e^{-\Delta\tau \hat H} = \int d\vec x\; p(\vec x)\, \hat B(\vec x), \tag{1.33}$$

where $p(\vec x)$ is a probability density function (e.g., a multi-dimensional Gaussian). The propagator $\hat B(\vec x)$ in Eq. (1.33) has a special form: it is a product of operators of the type in Eq. (1.16), with $U_{ij}$ depending on the auxiliary field. The matrix representation of $\hat B(\vec x)$ will be denoted by $B(\vec x)$.

In essence, the HS transformation replaces the two-body interaction by one-body interactions with a set of random external auxiliary fields. In other words, it converts an interacting system into many non-interacting systems living in fluctuating external auxiliary fields. The sum over all configurations of the auxiliary fields recovers the interaction.

Different forms of the HS transformation exist[17, 18]. It is reasonable to expect that they can affect the performance of the QMC method. Indeed, experience shows that they can not only impact the statistical accuracy, but also lead to different quality of approximations under the constrained path methods that we discuss below. Therefore, although we discuss the algorithms under the generic form of Eq. (1.33), we emphasize that it is worthwhile to explore different forms of the HS transformation in an actual implementation.

## 3 Standard Auxiliary-Field Quantum Monte Carlo

In this section we briefly describe the standard ground-state[11, 13] and finite-temperature[10, 12] auxiliary-field quantum Monte Carlo (AFQMC) methods. We will rely on the machinery established in the previous section. Our goal is to illustrate the essential ideas, in a way which will facilitate our discussion of the sign problem and help introduce the framework for the constrained path Monte Carlo methods.
We will not go into details such as how to efficiently sample the auxiliary fields or how to stabilize the matrix products; these are described in the literature. In addition, although the approaches are not identical, we will see these issues manifested in the next section and gain sufficient understanding of them when we discuss the corresponding constrained path Monte Carlo methods.
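Before turning to the algorithms, note that the discrete transformation of Eq. (1.31) is easy to verify directly, since both sides are functions of only the four occupations $(n_\uparrow, n_\downarrow)$ of a single site. A minimal check of our own (illustrative $\Delta\tau$ and $U$; all names are ours):

```python
import numpy as np

# Check the discrete Hubbard-Stratonovich identity, Eq. (1.31), on the four
# possible occupations (n_up, n_dn) of one site. dtau and U are illustrative.
dtau, U = 0.05, 4.0
gamma = np.arccosh(np.exp(dtau * U / 2))   # cosh(gamma) = exp(dtau*U/2)

for n_up in (0, 1):
    for n_dn in (0, 1):
        lhs = np.exp(-dtau * U * n_up * n_dn)
        rhs = np.exp(-dtau * U * (n_up + n_dn) / 2) * sum(
            0.5 * np.exp(gamma * x * (n_up - n_dn)) for x in (+1, -1))
        print(n_up, n_dn, abs(lhs - rhs))  # all differences ~ 0
```

The doubly occupied case works because the Ising field couples to the spin difference, which vanishes there, while the singly occupied cases rely on the defining relation for $\gamma$.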

### 3.1 Ground-state method

Ground-state expectations $\langle \hat O \rangle$ can now be computed with Eqs. (1.2) and (1.33). The denominator of Eq. (1.3) is

$$\langle \Psi^{(n)} | \Psi^{(n)} \rangle = \langle \Psi^{(0)} | e^{-n\Delta\tau \hat H}\, e^{-n\Delta\tau \hat H} | \Psi^{(0)} \rangle = \int \langle \Psi^{(0)} | \Big[ \prod_{l=1}^{2n} d\vec x^{(l)}\, p(\vec x^{(l)})\, \hat B(\vec x^{(l)}) \Big] | \Psi^{(0)} \rangle = \int \Big[ \prod_l d\vec x^{(l)}\, p(\vec x^{(l)}) \Big] \det\Big( [\Phi^{(0)}]^T \prod_l B(\vec x^{(l)})\, \Phi^{(0)} \Big). \tag{1.34}$$

In the standard ground-state AFQMC method[10], a (sufficiently large) value of $n$ is first chosen and fixed throughout the calculation. If we use $X$ to denote the collection of auxiliary fields, $X = \{\vec x^{(1)}, \vec x^{(2)}, \ldots, \vec x^{(2n)}\}$, and $D(X)$ to represent the integrand in Eq. (1.34), we can write the expectation value of Eq. (1.3) as

$$\langle \hat O \rangle = \frac{\int \overline{\langle \hat O \rangle}\, D(X)\, dX}{\int D(X)\, dX} = \frac{\int \overline{\langle \hat O \rangle}\, |D(X)|\, s(X)\, dX}{\int |D(X)|\, s(X)\, dX}, \tag{1.35}$$

where

$$s(X) \equiv D(X) / |D(X)| \tag{1.36}$$

measures the "sign" of $D(X)$. The non-interacting expectation $\overline{\langle \hat O \rangle}$ for a given $X$ is that defined in Eq. (1.20):

$$\overline{\langle \hat O \rangle} \equiv \frac{\langle \phi_L | \hat O | \phi_R \rangle}{\langle \phi_L | \phi_R \rangle} \tag{1.37}$$

with

$$\langle \phi_L | = \langle \Psi^{(0)} |\, \hat B(\vec x^{(2n)})\, \hat B(\vec x^{(2n-1)}) \cdots \hat B(\vec x^{(n+1)}), \qquad |\phi_R\rangle = \hat B(\vec x^{(n)})\, \hat B(\vec x^{(n-1)}) \cdots \hat B(\vec x^{(1)})\, |\Psi^{(0)}\rangle,$$

which are both Slater determinants. $D(X)$, as well as $\langle \phi_L|$ and $|\phi_R\rangle$, are completely determined by the path $X$ in auxiliary-field space. The expectation in Eq. (1.35) is therefore of the form of Eq. (1.10), with $f(X) = |D(X)|$, $h(X) = s(X)$, and $g(X) = \overline{\langle \hat O \rangle}$. The important point is that, for each $X$, $|D(X)|$ is a number and $g(X)$ can be evaluated using Eqs. (1.22) or (1.24). Often the Metropolis Monte Carlo algorithm[9] is used to sample auxiliary fields $X$ from $|D(X)|$. Any $\langle \hat O \rangle$ can then be computed following the procedure described by Eq. (1.9).

The Metropolis Monte Carlo algorithm allows one, starting from an (arbitrary) initial configuration of $X$, to create a random walk whose equilibrium distribution is $f(X)$. A kernel $K(X', X)$ must be chosen prior to the

calculation, which is a probability density function of $X'$ (conditional on $X$). The only other condition on $K$ is ergodicity, i.e., it must allow the random walk eventually to reach any position in $X$-space from any other position. For example, it can be a uniform distribution in a hyper-cube of linear dimension $\Delta$ centered at $X$:

$$K(X', X) = \begin{cases} 1/\Delta^{2nd}, & \text{if } X' \text{ is inside the hyper-cube} \\ 0, & \text{otherwise} \end{cases}$$

where $\Delta$ is a parameter and $d$ is the dimensionality of the auxiliary field $\vec x$. The random walk resembles that in Eq. (1.12). A move from $X$ to $X'$ is proposed by sampling an $X'$ according to $K(X', X)$. The key of the Metropolis algorithm is the acceptance probability

$$A(X', X) = \min\Big[ \frac{K(X, X')\, f(X')}{K(X', X)\, f(X)},\; 1 \Big], \tag{1.38}$$

whose role is similar to that of $w$ in Eq. (1.12). The one important difference is that, if the outcome of $A(X', X)$ is to reject $X'$, then $X$ is kept and counted as one new Monte Carlo sample, and hence one new step in the random walk, even though the walker did not actually move. The choice of $K$, which allows a great deal of freedom, controls the efficiency of the random walk, i.e., how quickly it converges, how correlated successive steps of the random walk are, etc. A choice that is perhaps the most common in practice is to "sweep" through the field one component at a time: a change is proposed for only one component, say the $l$th, of the many-dimensional vector $X$, from a kernel $K(\vec x'_l, \vec x_l)$ of similar form to $K$, while the rest of $X$ is kept the same; the acceptance/rejection procedure is then applied. With the newly generated field, a change is proposed on the next ($(l+1)$th) component, and the step above is repeated. The "local" nature of this choice of kernel can often lead to simplifications in the computation of $A$.

### 3.2 Finite-temperature method

The standard finite-temperature auxiliary-field QMC method works in the grand-canonical ensemble. This means that the Hamiltonian contains an additional term $-\mu$
$\sum_i n_i$, which involves the chemical potential $\mu$. This term leads to an additional diagonal one-body operator in the exponent in Eq. (1.16); we will assume that the resulting term has been absorbed into $\hat B$ in Eq. (1.33). Substituting Eq. (1.33) into Eq. (1.6) leads to

$$Z = \int \Big[ \prod_l d\vec x_l\, p(\vec x_l) \Big]\, \mathrm{Tr}[\hat B(\vec x_L) \cdots \hat B(\vec x_2)\, \hat B(\vec x_1)]. \tag{1.39}$$

If we apply Eq. (1.18) to the above and again use $X$ to denote a complete

path in auxiliary-field space, $X \equiv \{\vec x_1, \vec x_2, \ldots, \vec x_L\}$, we obtain

$$Z = \int \det[I + B(\vec x_L) \cdots B(\vec x_2)\, B(\vec x_1)]\; p(X)\, dX. \tag{1.40}$$

Again representing the integrand by $D(X)$, we see that the finite-temperature expectation in Eq. (1.5) reduces to Eq. (1.35), exactly the same expression as in ground-state AFQMC. The non-interacting expectation $\overline{\langle \hat O \rangle}$ is now

$$\overline{\langle \hat O \rangle} \equiv \frac{\mathrm{Tr}[\hat O\, \hat B(\vec x_L) \cdots \hat B(\vec x_2)\, \hat B(\vec x_1)]}{\mathrm{Tr}[\hat B(\vec x_L) \cdots \hat B(\vec x_2)\, \hat B(\vec x_1)]}. \tag{1.41}$$

Note that, as in the ground-state algorithm, this can be computed for each $X$. Eq. (1.23) remains valid if the matrix $B$ is replaced by the ordered product of the corresponding matrices, $B(\vec x_L) \cdots B(\vec x_2) B(\vec x_1)$. "Wrapping" this product (without changing the ordering) gives the equal-time Green's function at different imaginary times.

Thus we have unified the ground-state and finite-temperature AFQMC methods with Eq. (1.35). Although the actual forms of $D(X)$ and $\overline{\langle \hat O \rangle}$ are different, the finite-temperature algorithm is the same as that of the ground state described earlier. Applying the same procedure, we can sample auxiliary fields $X$ from $|D(X)|$ and then compute the desired many-body expectation values from Eq. (1.35).

## 4 Constrained Path Monte Carlo Methods: Ground-State and Finite-Temperature

In this section, we review the ground-state and finite-temperature constrained path Monte Carlo (CPMC) methods. These methods are free of any decay of the average sign, while retaining many of the advantages of the standard AFQMC methods. The methods are approximate, relying on what we will generally refer to as the constrained path approximation. Below, other than presenting a way to make a more formal connection between the ground-state and finite-temperature methods, we will largely follow Refs. [5, 7] in discussing these methods.

### 4.1 Why and how does the sign problem occur?

In the standard AFQMC methods, the sign problem occurs because $D(X)$ is not always positive.²
² In fact, $\hat B(\vec x)$ is complex for a general HS transformation when the interaction is repulsive; $D(X)$ is therefore complex as well. We will limit our discussion of the sign problem and the constrained path approximation to the case where $D(X)$ is real, although generalizations are possible.

The Monte Carlo samples of $X$ therefore have to

be drawn from the probability distribution function defined by $|D(X)|$.

FIGURE 1. Schematic illustration of the sign problem. The $X$-axis represents an abstraction of the many-dimensional auxiliary-field paths $X$; each point denotes a collection of $X$'s. The sign problem occurs because the contributing part (shaded area) is exponentially small compared to what is sampled, namely $|D(X)|$.

FIGURE 2. Decay of the average sign $\langle s(X) \rangle$ as a function of inverse temperature. Note the logarithmic scale on the vertical axis. The calculation is for a $4 \times 4$ Hubbard model using the finite-temperature formalism. The chemical potential is such that the average electron density is 0.875 (corresponding to $7\uparrow$ and $7\downarrow$ electrons). The on-site repulsion is $U = 4$. (Data courtesy of Richard Scalettar [19].)

As Fig. 1 illustrates, however, $D(X)$ approaches an antisymmetric function exponentially as $N$ or the length of the path is increased. That is, its average sign, i.e., the denominator in Eq. (1.35), vanishes at large system size or low temperature. Thus the variance of the calculation in Eq. (1.11) grows exponentially, which negates the advantage of the "square-root" behavior of a Monte Carlo calculation (cf. Eq. (1.9)). Fig. 2 shows the exponential decay of the average sign $\langle s(X) \rangle$ of the Monte Carlo samples as a function of inverse temperature. This problem was largely uncontrolled in both the ground-state and finite-temperature algorithms, preventing general simulations at low temperatures or large system sizes. Here we will develop a more detailed picture of the behavior of $D(X)$ and

hence of how the sign problem occurs. Under this picture, the constrained path approximation, which will eliminate the decay of the denominator in Eq. (1.35) as a function of $n$ or lattice size, emerges naturally.

For simplicity, we will now use one common symbol $L$ to denote the length of the path $X$; that is, in the ground-state AFQMC algorithm of the previous section, we let $L = 2n$. $D(X) = D(\vec x_1, \vec x_2, \ldots, \vec x_L)$ will denote the corresponding integrand in either the ground-state or finite-temperature AFQMC algorithm. We recall that the goal is to sample $X$ according to $D(X)$ (although we had to resort to sampling $|D(X)|$ in the previous section).

To gain insight, we conduct the following thought experiment[7]. We imagine sampling the complete path $X$ in $L$ successive steps, from $\vec x_1$ to $\vec x_L$. We consider the contribution in Eq. (1.35) from an individual partial path $\{\vec x_1, \vec x_2, \ldots, \vec x_l\}$:

$$P_l(\{\vec x_1, \vec x_2, \ldots, \vec x_l\}) \equiv \int D(\vec x_1, \vec x_2, \ldots, \vec x_l, \vec x_{l+1}, \ldots, \vec x_L)\; d\vec x_{l+1} \cdots d\vec x_L. \tag{1.42}$$

Note that this is the same as replacing $e^{-\Delta\tau \hat H}$ by Eq. (1.33) in only the first $l$ time slices. For simplicity, we will use $\hat B$ to denote the many-body propagator $e^{-\Delta\tau \hat H}$ in the discussion below, i.e., $\hat B \equiv e^{-\Delta\tau \hat H}$. Under the ground-state formalism,

$$P_l(\{\vec x_1, \vec x_2, \ldots, \vec x_l\}) = \langle \Psi^{(0)} |\, \underbrace{\hat B \hat B \cdots \hat B}_{L-l}\; \hat B(\vec x_l) \cdots \hat B(\vec x_2)\, \hat B(\vec x_1)\, | \Psi^{(0)} \rangle\; p(\vec x_l) \cdots p(\vec x_1), \tag{1.43}$$

while under the finite-temperature formalism,

$$P_l(\{\vec x_1, \vec x_2, \ldots, \vec x_l\}) = \mathrm{Tr}[\underbrace{\hat B \hat B \cdots \hat B}_{L-l}\; \hat B(\vec x_l) \cdots \hat B(\vec x_2)\, \hat B(\vec x_1)]\; p(\vec x_l) \cdots p(\vec x_1). \tag{1.44}$$

We consider the case when $P_l = 0$. This means that, after the remaining $L - l$ steps are finished, the collective contribution from all complete paths that result from this particular partial path will be precisely zero. In other words, all complete paths that have $\{\vec x_1, \vec x_2, \ldots, \vec x_l\}$ as their first $l$ elements collectively make no contribution to the denominator in Eq. (1.35).
This is because the path integral over all possible configurations for the rest of the path, $\{\vec{x}_{l+1}, \vec{x}_{l+2}, \ldots, \vec{x}_L\}$, is simply given by Eq. (1.42). In other words, the path integral simply reproduces the $\hat{B}$'s in Eq. (1.43) or (1.44), leading to $P_l(\{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_l\})$, which is zero by assumption. Thus, in our thought experiment any partial path that reaches the axis in Fig. 3 immediately turns into noise, regardless of what it does at future $l$'s. A complete path which is in contact with the axis at any point belongs to the "antisymmetric" part of $D(X)$ in Fig. 1, whose contributions cancel.
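The resulting loss of signal can be seen in a toy model of the signed average (an illustrative sketch, not part of the AFQMC formalism itself): draw Monte Carlo samples whose signs average to $\langle s\rangle$, and watch the relative statistical error of the denominator grow as $\langle s\rangle$ shrinks, mimicking the decay in Fig. 2.

```python
import numpy as np

def signed_average(avg_sign, n_samples, rng):
    """Toy model of the denominator in Eq. (1.35): each sample drawn
    from |D(X)| carries a sign that is +1 with probability
    (1 + avg_sign)/2 and -1 otherwise, so the expected sign is
    avg_sign.  Returns the estimate of <s> and its statistical error."""
    signs = rng.choice([1.0, -1.0], size=n_samples,
                       p=[(1 + avg_sign) / 2, (1 - avg_sign) / 2])
    return signs.mean(), signs.std(ddof=1) / np.sqrt(n_samples)

rng = np.random.default_rng(0)
# as <s> decays (exponentially in beta or system size), the relative
# error of the signed average blows up even at fixed sample size
for s in (0.5, 0.1, 0.02):
    est, err = signed_average(s, 100_000, rng)
    print(f"<s> = {s:5.2f}  relative error = {err / abs(est):.2f}")
```

The absolute error of the estimate is roughly constant at fixed sample size, so the relative error scales like $1/\langle s\rangle$: once the average sign is comparable to the statistical error bar, the estimate is pure noise.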

FIGURE 3. Schematic illustration of the boundary condition to control the sign problem in AFQMC. $P_l$ (Eq. (1.42)) is shown as a function of the length of the partial path, $l$, for several paths. The remainder of the path integral (of length $L - l$) is assumed to have been evaluated analytically. (See Eq. (1.42).) All paths start from the right ($l = 0$) at $P_0$, which is either $\langle\psi_T|e^{-L\tau\hat{H}}|\psi_T\rangle > 0$ (ground state) or $Z$ (finite temperature). When $P_l$ becomes 0, ensuing paths (dashed lines) collectively contribute zero. Only complete paths with $P_l > 0$ for all $l$ (solid lines) contribute to the denominator in Eq. (1.35).

As $L$ increases, it becomes more and more likely for a path to reach the axis at some point $l$. Therefore, the "noise" paths become an increasingly larger portion of all paths, unless paths are completely prohibited from reaching the axis, as in the half-filled Hubbard model with repulsive interaction, where particle-hole symmetry makes $D(X)$ positive. A complete path $X$ which crosses the axis at least once, i.e., a noise path, can have a $D(X)$ which is either positive or negative. In the Monte Carlo sampling in AFQMC, we sample $X$ according to $|D(X)|$, which makes no distinction between such paths and the contributing ones that stay entirely above the axis. Asymptotically in $L$, the Monte Carlo samples consist of an equal mixture of positive and negative contributions to $D(X)$. The Monte Carlo signal is lost and the Monte Carlo estimate of Eq. (1.35) becomes increasingly noisy. This is the origin of the sign problem.

4.2 The constrained-path approximation

$P_0$ is positive, because $\langle\psi_T|e^{-L\tau\hat{H}}|\psi_T\rangle > 0$ (ground-state formalism) and $Z > 0$ (finite temperature). $P_l$ changes continuously with $l$ in the limit $\tau \to 0$. Therefore a complete path contributes if and only if it stays entirely above the axis in Fig. 3.
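The growth of the noise fraction with $L$ can be mimicked with ordinary random walks (a toy sketch; the Gaussian walk below merely stands in for $P_l$, it is not an AFQMC path): the fraction of walks that never touch zero shrinks steadily as the path length grows.

```python
import numpy as np

def surviving_fraction(n_paths, length, start, rng):
    """Fraction of Gaussian random walks started at `start` > 0 that
    stay strictly positive for all `length` steps -- a stand-in for
    the complete paths with P_l > 0 for every l (solid lines in
    Fig. 3); the rest hit the absorbing axis and become noise."""
    steps = rng.normal(0.0, 1.0, size=(n_paths, length))
    positions = start + np.cumsum(steps, axis=1)
    return (positions > 0).all(axis=1).mean()

rng = np.random.default_rng(0)
# longer paths are ever more likely to touch the absorbing axis
fracs = [surviving_fraction(10_000, L, 5.0, rng) for L in (25, 100, 400)]
```

For a fixed starting value, the surviving fraction decreases monotonically with the path length, which is the content of the statement above: the contributing paths become an exponentially small minority.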
Thus, in our thought experiment, imposition of the boundary condition [5, 7, 20]

$$P_1(\{\vec{x}_1\}) > 0, \quad P_2(\{\vec{x}_1, \vec{x}_2\}) > 0, \quad \ldots, \quad P_L(\{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_L\}) > 0 \tag{1.45}$$

will ensure that all contributing complete paths are selected while all noise paths are eliminated. The axis acts as an infinitely absorbing boundary: a partial path is terminated and discarded as soon as it reaches the boundary. By discarding a path, we eliminate all of its future paths, including the ones that would eventually make positive contributions. The boundary condition makes the distribution of complete paths vanish at the axis, which accomplishes complete cancellation of the negative and the corresponding positive contributions in the antisymmetric part of $D(X)$.

In practice $\hat{B}$ is of course not known. We replace it by a known trial propagator $\hat{B}_T$. The boundary condition in Eq. (1.45) is then clearly approximate. Results from our calculation will depend on the trial propagator used in the constraint. We expect the quality of the approximation to improve with a better $\hat{B}_T$. This is the basic idea of the constrained path approximation.

For ground-state calculations, let us imagine taking $L$ to infinity. If $\hat{B}_T$ is a mean-field propagator, the wave function

$$\langle\psi_T| = \lim_{L\to\infty} \langle\psi^{(0)}|\, \hat{B}_T^{\,L-l} \tag{1.46}$$

can be thought of as the corresponding ground-state solution. The right-hand side of Eq. (1.43) can then be evaluated:

$$P_l^T \propto \langle\psi_T|\phi^{(l)}\rangle\, p(\vec{x}_l) \cdots p(\vec{x}_1), \tag{1.47}$$

where the Slater determinant that results from a particular partial path is defined as

$$|\phi^{(l)}\rangle \equiv \hat{B}(\vec{x}_l) \cdots \hat{B}(\vec{x}_2)\hat{B}(\vec{x}_1)|\psi^{(0)}\rangle. \tag{1.48}$$

The approximate condition in Eq. (1.45) can then be written as

$$\langle\psi_T|\phi^{(l)}\rangle > 0. \tag{1.49}$$

Notice that the constraint now becomes independent of the length of the path $l$: it is expressed in terms of the overlap of a determinant with a trial wave function $|\psi_T\rangle$. For a choice of $|\psi_T\rangle$ that is a single Slater determinant or a linear combination of Slater determinants, Eq. (1.15) allows straightforward computation of the overlap in Eq. (1.49).
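For example, if the trial wave function and a walker are stored as $M \times N$ matrices of orbital coefficients (a minimal sketch with made-up matrices; the identity $\langle\psi_T|\phi\rangle = \det(\Psi_T^\dagger\Phi)$ is the standard Slater-determinant overlap formula referred to via Eq. (1.15)), the check of Eq. (1.49) is a single determinant:

```python
import numpy as np

def overlap(psi_T, phi):
    """Overlap of two Slater determinants given by M x N matrices of
    single-particle orbitals: <psi_T|phi> = det(psi_T^dagger phi)."""
    return np.linalg.det(psi_T.conj().T @ phi)

def constraint_ok(psi_T, phi):
    """Constrained-path condition of Eq. (1.49): keep the walker only
    if its overlap with the trial wave function is positive."""
    return overlap(psi_T, phi) > 0

# illustrative 6-site, 3-particle example with random orthonormal orbitals
rng = np.random.default_rng(0)
psi_T = np.linalg.qr(rng.normal(size=(6, 3)))[0]   # trial determinant
phi = np.linalg.qr(rng.normal(size=(6, 3)))[0]     # a walker
keep = constraint_ok(psi_T, phi)
```

The cost of the check is that of one $N \times N$ determinant per walker, negligible compared with the propagation itself.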
In principle, we could keep $L$ finite and directly implement the constraint in the framework of the standard AFQMC, using the propagator $\hat{B}_T$ instead of $|\psi_T\rangle$. But, as we will see below, the formalism above lends itself more naturally to the random walk realization of the ground-state method.

The constraint is manifested in one other way in the computation of expectations of operators $\hat{O}$ that do not commute with $\hat{H}$. Because we need to insert $\hat{O}$ in the middle of the numerator of Eq. (1.37), we will have to "back-propagate" [5, 3] from the left to obtain a $\langle\phi_L|$. The constraint, however, is implemented in the opposite direction, as we sample the path

from right to left. A set of determinants along the path of a random walk which does not violate the constraint at any step when going from right to left may violate it an even number of times when going from left to right. This sense of direction means that the expectation $\langle\hat{O}\rangle$ we compute is not strictly $\langle\psi_0^c|\hat{O}|\psi_0^c\rangle / \langle\psi_0^c|\psi_0^c\rangle$, with $|\psi_0^c\rangle$ the approximate ground-state wave function under the constraint. This difference is not crucial, however, since it is expected to be of the same order as the error due to the constrained path approximation itself.

For finite-temperature calculations, the right-hand side of Eq. (1.44) can be evaluated if $\hat{B}_T$ is in the form of a single-particle propagator:

$$P_l^T \propto \det\bigl[I + \underbrace{B_T \cdots B_T}_{L-l}\, B(\vec{x}_l) \cdots B(\vec{x}_1)\bigr]\, p(\vec{x}_l) \cdots p(\vec{x}_1), \tag{1.50}$$

where, following our convention, $B_T$ is the matrix representation of $\hat{B}_T$. Combining this with Eq. (1.45) leads to the following matrix representation of the approximate boundary condition:

$$\det\bigl[I + \underbrace{B_T \cdots B_T}_{L-1}\, B(\vec{x}_1)\bigr] > 0, \quad \det\bigl[I + \underbrace{B_T \cdots B_T}_{L-2}\, B(\vec{x}_2)B(\vec{x}_1)\bigr] > 0, \quad \ldots, \quad \det\bigl[I + B(\vec{x}_L) \cdots B(\vec{x}_2)B(\vec{x}_1)\bigr] > 0. \tag{1.51}$$

Formally, the constrained path approximation in Eqs. (1.49) and (1.51) is similar to the fixed-node [21, 22, 23] (ground-state) or restricted-path [24] (finite-temperature) approximation in configuration space.³ Significant differences exist, however, because of the difference between real space and a Slater-determinant space, which is non-orthogonal.

The goal of the new CPMC methods is to carry out the thought experiment stochastically. We wish to generate Monte Carlo samples of $X$ which both satisfy the conditions in (1.49) or (1.51) and are distributed according to $D(X)$. The most natural way to accomplish this is perhaps to incorporate the boundary conditions into the standard AFQMC algorithm, for example as an additional acceptance condition.
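As a concrete illustration, the determinant check at step $l$ in Eq. (1.51) could be coded as follows (a sketch with made-up identity matrices; a real simulation would stabilize the long matrix products numerically):

```python
import numpy as np

def constraint_det(B_T, sampled, L):
    """Left-hand side of the l-th condition in Eq. (1.51):
    det[I + B_T^(L-l) B(x_l)...B(x_1)], where `sampled` is the list
    [B(x_1), ..., B(x_l)] of propagators already sampled."""
    M = B_T.shape[0]
    prod = np.eye(M)
    for B in sampled:                      # B(x_1) acts first
        prod = B @ prod
    l = len(sampled)
    left = np.linalg.matrix_power(B_T, L - l)
    return np.linalg.det(np.eye(M) + left @ prod)

# toy 3-site check: with B_T = B = identity every determinant is 2^3 > 0,
# so this partial path satisfies the boundary condition
ok = constraint_det(np.eye(3), [np.eye(3), np.eye(3)], L=5) > 0
```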
However, such an approach is likely to be inefficient: the boundary condition is non-local and breaks translational invariance in imaginary time, which requires simultaneous updating of the entire path. Without a scheme to propose paths that incorporates information on future contributions, it is difficult to find complete paths which satisfy all the constraints, especially as $L$ increases.

³ See the Appendix for a brief description of the fixed-node approximation for lattice fermion systems.

We therefore seek to sample paths via a random walk whose time variable corresponds to the imaginary time $l$. We will see that we must introduce importance sampling schemes to guide the random walks by their projected contribution to $D(X)$. Below we discuss details of such schemes.

4.3 Ground-state constrained path Monte Carlo (CPMC) method

We start from Eq. (1.2). Using (1.33), we write it as

$$|\psi^{(l+1)}\rangle = \int d\vec{x}\, p(\vec{x})\, \hat{B}(\vec{x})\, |\psi^{(l)}\rangle. \tag{1.52}$$

In the random walk realization of this iteration, we represent the wave function at each stage by a finite ensemble of Slater determinants, i.e.,

$$|\psi^{(l)}\rangle \propto \sum_k w_k^{(l)}\, |\phi_k^{(l)}\rangle, \tag{1.53}$$

where $k$ labels the Slater determinants and an overall normalization factor of the wave function has been omitted. A weight factor $w_k^{(l)}$ is introduced for each walker, even though in Eq. (1.52) the kernel $p$ is normalized. This is because the single-particle orbitals in a Slater determinant cease to be orthogonal to each other as a result of propagation by $\hat{B}$. When they are re-orthogonalized (see Section 4.5), an overall factor appears, which we will view as the $w$ term in the integral equation Eq. (1.12).

The structure of the random walk now resembles that of Eq. (1.12). For each random walker we sample an auxiliary-field configuration $\vec{x}$ from the probability density function $p(\vec{x})$ and propagate the walker to a new one via $\phi_k^{(l+1)} = B(\vec{x})\, \phi_k^{(l)}$. If necessary, a re-orthogonalization procedure (e.g., modified Gram-Schmidt) is applied to $\phi_k^{(l)}$ prior to the propagation: $\phi_k^{(l)} = [\phi_k^{(l)}]'\, R$, where $R$ is an $M \times M$ upper-triangular matrix. $[\phi_k^{(l)}]'$ is then used in the propagation instead; the weight of the new walker is $w_k^{(l+1)} = w_k^{(l)} \det(R)$. This simple random walk procedure is correct and can be very useful for thinking about many conceptual issues.
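The re-orthogonalization step of this primitive procedure can be sketched with a QR factorization standing in for modified Gram-Schmidt (an illustrative implementation on a made-up walker; the sign fix below makes the diagonal of $R$ positive, one common convention):

```python
import numpy as np

def reorthogonalize(phi, w):
    """Re-orthogonalize the orbitals of a walker (an M x N matrix of
    orbital coefficients) and absorb the factor det(R) into its
    weight, as in the primitive algorithm.  np.linalg.qr stands in
    for modified Gram-Schmidt."""
    q, r = np.linalg.qr(phi)
    signs = np.sign(np.diag(r))            # fix column-sign convention
    q, r = q * signs, signs[:, None] * r   # so that diag(R) > 0
    return q, w * np.prod(np.diag(r))      # det(R): R is triangular

rng = np.random.default_rng(0)
phi = rng.normal(size=(6, 3))              # walker with non-orthogonal orbitals
phi_new, w_new = reorthogonalize(phi, 1.0)
```

The new orbitals are orthonormal, and the factor $\det(R)$ that would otherwise be silently lost is carried along in the weight.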
It is, however, not efficient enough as a practical algorithm in most cases of interest, because the sampling of $\vec{x}$ is completely random, with no regard to the potential contribution to $D(X)$. The idea of importance sampling is to iterate a modified equation with a modified wave function, without changing the underlying eigenvalue problem of (1.52). Specifically, for each Slater determinant $|\phi\rangle$, we define an importance function

$$O_T(\phi) \equiv \langle\psi_T|\phi\rangle, \tag{1.54}$$

which estimates its overlap with the ground-state wave function. We can

then rewrite equation (1.52) as

$$|\tilde\psi^{(l+1)}\rangle = \int d\vec{x}_{l+1}\, \tilde{p}(\vec{x}_{l+1})\, \hat{B}(\vec{x}_{l+1})\, |\tilde\psi^{(l)}\rangle, \tag{1.55}$$

where the modified "probability density function" is

$$\tilde{p}(\vec{x}_{l+1}) = \frac{O_T(\phi^{(l+1)})}{O_T(\phi^{(l)})}\, p(\vec{x}_{l+1}). \tag{1.56}$$

It is easy to see that the new kernel $\tilde{p}$ in Eq. (1.56) can be written in terms of the trial partial path contributions in Eq. (1.47):

$$\tilde{p}(\vec{x}_{l+1}) = \frac{P_{l+1}^T}{P_l^T}. \tag{1.57}$$

As we will see, this is formally analogous to the importance-sampling kernel in the finite-temperature algorithm we discuss next. With the new kernel $\tilde{p}$, the probability distribution for $\vec{x}_{l+1}$ vanishes smoothly as $P_{l+1}^T$ approaches zero, and the constraint is naturally imposed. The auxiliary field $\vec{x}_{l+1}$ is sampled according to our best estimate of the partial path contributions (from the trial propagator/wave function). As expected, $\tilde{p}(\vec{x})$ is a function of both the current and future positions in Slater-determinant space. Further, it modifies $p(\vec{x})$ such that the probability is increased when $\vec{x}$ leads to a determinant with larger overlap and is decreased otherwise. It is trivially verified that equations (1.52) and (1.55) are identical.

To better see the effect of importance sampling, we observe that if $|\psi_T\rangle = |\psi_0\rangle$, i.e., the trial wave function is the exact ground state, the normalization $\int \tilde{p}(\vec{x})\, d\vec{x}$ becomes a constant. Therefore the weights of the walkers remain constant and the random walk has no fluctuation.

In the random walk, the ensemble of walkers $\{|\phi_k^{(l)}\rangle\}$ now represents the modified wave function: $|\tilde\psi^{(l)}\rangle \propto \sum_k w_k^{(l)} |\phi_k^{(l)}\rangle$. The true wave function is then given formally by

$$|\psi^{(l)}\rangle \propto \sum_k w_k^{(l)}\, |\phi_k^{(l)}\rangle / O_T(\phi_k^{(l)}), \tag{1.58}$$

although in actual measurements it is $|\tilde\psi^{(l)}\rangle$ that is needed and division by $O_T$ does not appear. The random walk process is similar to that discussed for Eq. (1.52), but with $p(\vec{x})$ replaced by $\tilde{p}(\vec{x})$.
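To make the modified kernel $\tilde{p}$ concrete, consider a discrete auxiliary field, so that the integral over $\vec{x}$ becomes a finite sum. One importance-sampled propagation step for a single walker might then be sketched as follows (the names `fields` and `importance_sampled_step` and the toy inputs are hypothetical, and the overlap-ratio form with truncation at zero follows Eqs. (1.56) and (1.49)):

```python
import numpy as np

def overlap(psi_T, phi):
    """<psi_T|phi> = det(psi_T^dagger phi) for M x N orbital matrices."""
    return np.linalg.det(psi_T.conj().T @ phi)

def importance_sampled_step(phi, w, psi_T, fields, rng):
    """One step of Eq. (1.55) for one walker.  `fields` is a list of
    (p(x), B(x)) pairs.  ptilde(x) = p(x) O_T(B(x) phi)/O_T(phi),
    truncated at zero so the probability vanishes when the overlap
    changes sign -- the constrained-path condition."""
    O_old = overlap(psi_T, phi)
    ptilde = np.array([p * max(overlap(psi_T, B @ phi) / O_old, 0.0)
                       for p, B in fields])
    N = ptilde.sum()              # normalization, absorbed into the weight
    if N <= 0.0:                  # walker hit the boundary: discard it
        return phi, 0.0
    idx = rng.choice(len(fields), p=ptilde / N)
    return fields[idx][1] @ phi, w * N

# toy example: two field values whose propagators are multiples of the identity
rng = np.random.default_rng(0)
psi_T = np.linalg.qr(rng.normal(size=(4, 2)))[0]
phi = psi_T.copy()
fields = [(0.5, np.eye(4)), (0.5, 2.0 * np.eye(4))]
phi_new, w_new = importance_sampled_step(phi, 1.0, psi_T, fields, rng)
```

In the toy example the two overlap ratios are 1 and 4, so the field value that increases the overlap is sampled with the larger probability, and the normalization 2.5 multiplies the walker weight.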
The modified kernel $\tilde{p}$ is in general not a normalized probability density function, and we denote the normalization constant for walker $k$ by $N(\phi_k^{(l)}) \equiv \int \tilde{p}(\vec{x})\, d\vec{x}$. We rewrite the iterative relation as

$$|\phi_k^{(l+1)}\rangle = N(\phi_k^{(l)}) \int d\vec{x}\, \frac{\tilde{p}(\vec{x})}{N(\phi_k^{(l)})}\, B(\vec{x})\, |\phi_k^{(l)}\rangle. \tag{1.59}$$

This iteration now forms the basis of the ground-state constrained path Monte Carlo algorithm. For each walker $|\phi_k^{(l)}\rangle$, one step of the random walk consists of:

1. sampling an $\vec{x}$ from the probability density function $\tilde{p}(\vec{x}) / N(\phi_k^{(l)})$;

2. constructing the corresponding $B(\vec{x})$ and propagating the walker $\phi_k^{(l)}$ by it to generate a new walker;

3. assigning a weight $w_k^{(l+1)} = w_k^{(l)}\, N(\phi_k^{(l)})$ to the new walker.

Note that, in contrast with the primitive algorithm in Eq. (1.52), the weight factor of a walker does not need to be modified here when the re-orthogonalization procedure is applied. This is because the upper-triangular matrix $R$ only contributes to the overlap $O_T$, which is already represented by the walker weight. After each modified Gram-Schmidt procedure, $R$ can simply be discarded.

4.4 Finite-temperature method

We again construct an algorithm which builds directly into the sampling process both the constraints and some knowledge of the projected contribution. In terms of the trial projected partial contributions $P_l^T$ defined in Eq. (1.50), the fermion determinant $D(X)$ can be written as

$$D(X) = \frac{P_L^T}{P_{L-1}^T}\, \frac{P_{L-1}^T}{P_{L-2}^T} \cdots \frac{P_2^T}{P_1^T}\, \frac{P_1^T}{P_0^T}\, P_0^T. \tag{1.60}$$

We again construct the path $X$ by a random walk of $L$ steps, corresponding to stochastic representations of the $L$ ratios in Eq. (1.60). In Fig. 4 the sampling procedure for one walker is illustrated schematically. At the $(l+1)$-th step, we sample $\vec{x}_{l+1}$ from the conditional probability density function defined by $\tilde{p}(\vec{x}_{l+1}) = P_{l+1}^T / P_l^T$, which is precisely the same as Eq. (1.57) in the ground-state method. As in the ground-state method or in Green's function Monte Carlo, we simultaneously keep an ensemble of walkers, which undergo branching in their random walks (of $L$ steps).

In the random walk, each walker is a product of $B_T$'s and $B(\vec{x})$'s. Walkers are initialized to $P_0^T$, with overall weight 1. At the end of the $l$-th step, each walker has the form

$$\underbrace{B_T \cdots B_T}_{L-l}\, B(\vec{x}_l) \cdots B(\vec{x}_2) B(\vec{x}_1), \tag{1.61}$$

where $\{\vec{x}_l, \ldots, \vec{x}_2, \vec{x}_1\}$ is the partial path already sampled for this walker.
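The telescoping product in Eq. (1.60) can be sketched as follows for a discrete auxiliary field (an illustrative toy with made-up inputs; a production code would update the determinants incrementally rather than rebuild them each step, and would stabilize the matrix products):

```python
import numpy as np

def P_T(B_T, sampled, L):
    """Trial partial-path contribution of Eq. (1.50), up to the p(x)
    factors: det[I + B_T^(L-l) B(x_l)...B(x_1)]."""
    M = B_T.shape[0]
    prod = np.eye(M)
    for B in sampled:                       # B(x_1) acts first
        prod = B @ prod
    left = np.linalg.matrix_power(B_T, L - len(sampled))
    return np.linalg.det(np.eye(M) + left @ prod)

def finite_T_walker(B_T, fields, L, rng):
    """Build one path in L steps.  At step l+1, x_{l+1} is drawn from
    ptilde = P^T_{l+1}/P^T_l (truncated at zero), and the
    normalization of ptilde multiplies the walker weight, realizing
    the telescoping product of Eq. (1.60)."""
    sampled, weight = [], 1.0
    for _ in range(L):
        P_old = P_T(B_T, sampled, L)
        ptilde = np.array([p * max(P_T(B_T, sampled + [B], L) / P_old, 0.0)
                           for p, B in fields])
        N = ptilde.sum()
        if N <= 0.0:                        # path hit the boundary
            return sampled, 0.0
        idx = rng.choice(len(fields), p=ptilde / N)
        sampled.append(fields[idx][1])
        weight *= N
    return sampled, weight

# toy check: with a single field value whose B(x) equals B_T, every
# ratio in Eq. (1.60) is 1 and the weight stays 1
rng = np.random.default_rng(0)
B_T = 0.5 * np.eye(2)
sampled, weight = finite_T_walker(B_T, [(1.0, B_T)], L=3, rng=rng)
```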
Each step for a walker is formally identical to that in the ground-state method:

1. pick an $\vec{x}_{l+1}$ from the probability density function $\tilde{p}(\vec{x}_{l+1})$;

2. advance the walker by replacing the $B_T$ next to $B(\vec{x}_l)$ by $B(\vec{x}_{l+1})$;

3. multiply the overall weight of the walker by the normalization factor of $\tilde{p}(\vec{x}_{l+1})$.

FIGURE 4. Illustration of the sampling procedure in the algorithm. Circles represent auxiliary fields $\vec{x}_l$. A row shows the field configuration at the corresponding step number shown on the right. Within each row, the imaginary-time index $l$ increases as we move to the left, i.e., the first circle is $\vec{x}_1$ and the last $\vec{x}_L$. Empty circles indicate fields which are not "activated" yet, i.e., $\hat{B}_T$ is still in place of $\hat{B}$. Solid circles indicate fields that have been sampled, with the arrow indicating the one being sampled in the current step.

The basic idea of the algorithm is, similar to the ground-state method, to select $\vec{x}_l$ according to the best estimate of its projected contribution to $Z$. Note that the probability distribution for $\vec{x}_l$ again vanishes smoothly as $P_l^T$ approaches zero. This naturally imposes the boundary condition in Eq. (1.51). The Monte Carlo samples are generated from a probability distribution function of the contributing paths only (solid lines in Fig. 3), which is more efficient than sampling paths from $|D(X)|$ and imposing the boundary condition as an additional acceptance/rejection step.

4.5 Additional technical issues

Implementation of the constraint at finite $\tau$. In actual simulations $\tau$ is finite and paths are defined only at a discrete set of imaginary times. The boundary condition on the underlying continuous paths is the same, namely that the probability distribution must vanish at the axis in Fig. 3. In Fig. 5, we illustrate how the boundary condition is imposed under the discrete representation. The "contact" point with the axis is likely to fall between time slices and not be well defined, i.e., $P_l$ may be zero at a non-integer value of $l$. To the lowest order, we can terminate a partial path when its $P_l$ first turns negative. That is, we still impose Eq.
(1.45) in our thought experiment to generate paths. In Fig. 5, this means terminating the path at l = n (point B) and thereby discarding all its future paths


Chapter 11 One Must be Able to Evaluate the Matrix Elements Among Properly Symmetry Adapted N- Electron Configuration Functions for Any Operator, the Electronic Hamiltonian in Particular. The Slater-Condon

### Harmonic Oscillator I

Physics 34 Lecture 7 Harmonic Oscillator I Lecture 7 Physics 34 Quantum Mechanics I Monday, February th, 008 We can manipulate operators, to a certain extent, as we would algebraic expressions. By considering

### Simulated Annealing for Constrained Global Optimization

Monte Carlo Methods for Computation and Optimization Final Presentation Simulated Annealing for Constrained Global Optimization H. Edwin Romeijn & Robert L.Smith (1994) Presented by Ariel Schwartz Objective

### q-alg/ Mar 96

Integrality of Two Variable Kostka Functions Friedrich Knop* Department of Mathematics, Rutgers University, New Brunswick NJ 08903, USA knop@math.rutgers.edu 1. Introduction q-alg/9603027 29 Mar 96 Macdonald

### Copyright 2001 University of Cambridge. Not to be quoted or copied without permission.

Course MP3 Lecture 4 13/11/2006 Monte Carlo method I An introduction to the use of the Monte Carlo method in materials modelling Dr James Elliott 4.1 Why Monte Carlo? The name derives from the association

### Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

### Kernel Methods. Machine Learning A W VO

Kernel Methods Machine Learning A 708.063 07W VO Outline 1. Dual representation 2. The kernel concept 3. Properties of kernels 4. Examples of kernel machines Kernel PCA Support vector regression (Relevance

### ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

### Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables,

University of Illinois Fall 998 Department of Economics Roger Koenker Economics 472 Lecture Introduction to Dynamic Simultaneous Equation Models In this lecture we will introduce some simple dynamic simultaneous

### Topic 15 Notes Jeremy Orloff

Topic 5 Notes Jeremy Orloff 5 Transpose, Inverse, Determinant 5. Goals. Know the definition and be able to compute the inverse of any square matrix using row operations. 2. Know the properties of inverses.

### Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008

Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:

### 550 XU Hai-Bo, WANG Guang-Rui, and CHEN Shi-Gang Vol. 37 the denition of the domain. The map is a generalization of the standard map for which (J) = J

Commun. Theor. Phys. (Beijing, China) 37 (2002) pp 549{556 c International Academic Publishers Vol. 37, No. 5, May 15, 2002 Controlling Strong Chaos by an Aperiodic Perturbation in Area Preserving Maps

### Lecture 1: The Equilibrium Green Function Method

Lecture 1: The Equilibrium Green Function Method Mark Jarrell April 27, 2011 Contents 1 Why Green functions? 2 2 Different types of Green functions 4 2.1 Retarded, advanced, time ordered and Matsubara

### On reaching head-to-tail ratios for balanced and unbalanced coins

Journal of Statistical Planning and Inference 0 (00) 0 0 www.elsevier.com/locate/jspi On reaching head-to-tail ratios for balanced and unbalanced coins Tamas Lengyel Department of Mathematics, Occidental

### The method of lines (MOL) for the diffusion equation

Chapter 1 The method of lines (MOL) for the diffusion equation The method of lines refers to an approximation of one or more partial differential equations with ordinary differential equations in just

### Clifford Algebras and Spin Groups

Clifford Algebras and Spin Groups Math G4344, Spring 2012 We ll now turn from the general theory to examine a specific class class of groups: the orthogonal groups. Recall that O(n, R) is the group of

### 1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220)

Notes for 2017-01-30 Most of mathematics is best learned by doing. Linear algebra is no exception. You have had a previous class in which you learned the basics of linear algebra, and you will have plenty

### One-dimensional Schrödinger equation

Chapter 1 One-dimensional Schrödinger equation In this chapter we will start from the harmonic oscillator to introduce a general numerical methodology to solve the one-dimensional, time-independent Schrödinger

### A determinantal formula for the GOE Tracy-Widom distribution

A determinantal formula for the GOE Tracy-Widom distribution Patrik L. Ferrari and Herbert Spohn Technische Universität München Zentrum Mathematik and Physik Department e-mails: ferrari@ma.tum.de, spohn@ma.tum.de

### COMP 558 lecture 18 Nov. 15, 2010

Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

### Taylor series. Chapter Introduction From geometric series to Taylor polynomials

Chapter 2 Taylor series 2. Introduction The topic of this chapter is find approximations of functions in terms of power series, also called Taylor series. Such series can be described informally as infinite

### Diagonalization by a unitary similarity transformation

Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple

### Path integral in quantum mechanics based on S-6 Consider nonrelativistic quantum mechanics of one particle in one dimension with the hamiltonian:

Path integral in quantum mechanics based on S-6 Consider nonrelativistic quantum mechanics of one particle in one dimension with the hamiltonian: let s look at one piece first: P and Q obey: Probability

### Path integrals and the classical approximation 1 D. E. Soper 2 University of Oregon 14 November 2011

Path integrals and the classical approximation D. E. Soper University of Oregon 4 November 0 I offer here some background for Sections.5 and.6 of J. J. Sakurai, Modern Quantum Mechanics. Introduction There

### component risk analysis

273: Urban Systems Modeling Lec. 3 component risk analysis instructor: Matteo Pozzi 273: Urban Systems Modeling Lec. 3 component reliability outline risk analysis for components uncertain demand and uncertain

### Algorithms to solve block Toeplitz systems and. least-squares problems by transforming to Cauchy-like. matrices

Algorithms to solve block Toeplitz systems and least-squares problems by transforming to Cauchy-like matrices K. Gallivan S. Thirumalai P. Van Dooren 1 Introduction Fast algorithms to factor Toeplitz matrices

### Relation of Pure Minimum Cost Flow Model to Linear Programming

Appendix A Page 1 Relation of Pure Minimum Cost Flow Model to Linear Programming The Network Model The network pure minimum cost flow model has m nodes. The external flows given by the vector b with m

### van Quantum tot Molecuul

10 HC10: Molecular and vibrational spectroscopy van Quantum tot Molecuul Dr Juan Rojo VU Amsterdam and Nikhef Theory Group http://www.juanrojo.com/ j.rojo@vu.nl Molecular and Vibrational Spectroscopy Based

### arxiv:cond-mat/ v1 [cond-mat.str-el] 24 Jan 2000

The sign problem in Monte Carlo simulations of frustrated quantum spin systems Patrik Henelius and Anders W. Sandvik 2 National High Magnetic Field Laboratory, 800 East Paul Dirac Dr., Tallahassee, Florida

### The Sommerfeld Polynomial Method: Harmonic Oscillator Example

Chemistry 460 Fall 2017 Dr. Jean M. Standard October 2, 2017 The Sommerfeld Polynomial Method: Harmonic Oscillator Example Scaling the Harmonic Oscillator Equation Recall the basic definitions of the harmonic

### Advanced sampling. fluids of strongly orientation-dependent interactions (e.g., dipoles, hydrogen bonds)

Advanced sampling ChE210D Today's lecture: methods for facilitating equilibration and sampling in complex, frustrated, or slow-evolving systems Difficult-to-simulate systems Practically speaking, one is

### Minicourse on: Markov Chain Monte Carlo: Simulation Techniques in Statistics

Minicourse on: Markov Chain Monte Carlo: Simulation Techniques in Statistics Eric Slud, Statistics Program Lecture 1: Metropolis-Hastings Algorithm, plus background in Simulation and Markov Chains. Lecture

### Algebraic Theory of Entanglement

Algebraic Theory of (arxiv: 1205.2882) 1 (in collaboration with T.R. Govindarajan, A. Queiroz and A.F. Reyes-Lega) 1 Physics Department, Syracuse University, Syracuse, N.Y. and The Institute of Mathematical

### Vibrational motion. Harmonic oscillator ( 諧諧諧 ) - A particle undergoes harmonic motion. Parabolic ( 拋物線 ) (8.21) d 2 (8.23)

Vibrational motion Harmonic oscillator ( 諧諧諧 ) - A particle undergoes harmonic motion F == dv where k Parabolic V = 1 f k / dx = is Schrodinge h m d dx ψ f k f x the force constant x r + ( 拋物線 ) 1 equation

### arxiv:quant-ph/ v1 20 Apr 1995

Combinatorial Computation of Clebsch-Gordan Coefficients Klaus Schertler and Markus H. Thoma Institut für Theoretische Physik, Universität Giessen, 3539 Giessen, Germany (February, 008 The addition of

### The Erwin Schrodinger International Boltzmanngasse 9. Institute for Mathematical Physics A-1090 Wien, Austria

ESI The Erwin Schrodinger International Boltzmanngasse 9 Institute for Mathematical Physics A-1090 Wien, Austria Comparing Formulations of Generalized Quantum Mechanics for Reparametrization-Invariant

### Automatica, 33(9): , September 1997.

A Parallel Algorithm for Principal nth Roots of Matrices C. K. Koc and M. _ Inceoglu Abstract An iterative algorithm for computing the principal nth root of a positive denite matrix is presented. The algorithm