Selected problems in lattice statistical mechanics


Selected problems in lattice statistical mechanics

Yao-ban Chan

September 12, 2005

Submitted in total fulfilment of the requirements of the degree of Doctor of Philosophy

Department of Mathematics and Statistics
The University of Melbourne

ABSTRACT

This thesis consists of an introduction and four chapters, with each chapter covering a different topic. In the introduction, we introduce the models that we study in the later chapters.

In Chapter 2, we study corner transfer matrices as a method for generating series expansions in statistical mechanical models. This is based on the work of Baxter, whose CTM equations we re-derive in detail. We then propose two methods that utilise these CTM equations to derive series expansions. The first is based on iterating through the equations sequentially. We ran this algorithm on the hard squares model and produced 48 series terms. The second method, based on the corner transfer matrix renormalization group method of Nishino and Okunishi, is much faster (though still exponential in time), but currently works only for numerical calculations.

In Chapter 3, we apply the finite lattice method and our renormalization group corner transfer matrix method to the Ising model with second-nearest-neighbor interactions. In particular, we study the crossover exponent of the critical line near the point J_1 = 0, tanh(J_2/kT) = √2 - 1. Through a scaling assumption, we predict this exponent to be 4/7, which is supported by our numerical methods. We also estimate the location of the critical lines, and the critical exponent of the magnetization along the lower phase boundary.

In Chapter 4, we study the problem of n-friendly directed lattice walks confined in a horizontal strip. We find the general transfer matrix for two walkers, and use this and a method of recurrences to calculate generating functions for certain specific cases via Padé approximants. Using generating function arguments, we then derive the generating function for the number of walks for two walkers in strips of width 3 and 4, three walkers in a strip of width 4, and p vicious walkers in strips of width 2p - 1, 2p, and 2p + 1. Finally we generalise the model by introducing another parameter, called bandwidth.

In Chapter 5, we study mean unknotting times, in a problem motivated by DNA entanglement. Firstly, we find the mean unknotting times for minimal embeddings of low-crossing knots. Then we look at generating random embeddings by using self-avoiding polygon trails (SAPTs). After proving Kesten's pattern theorem, we use it and the model of a walk on an n-cube to estimate that the mean unknotting time grows exponentially in the length of the SAPT. We try to find the growth constant by using the pivot algorithm to generate SAPTs, but the low-length behavior of the mean unknotting time appears to follow a power law, leading us to believe that much longer trails are needed.

Declaration

This is to certify that

1. the thesis comprises only my original work towards the PhD except where indicated in the Preface,
2. due acknowledgement has been made in the text to all other material used,
3. the thesis is less than 100,000 words in length, exclusive of tables, maps, bibliographies and appendices.

Yao-ban Chan

Preface

This thesis was written under the supervision of Prof. Tony Guttmann (AJG), Dr. Andrew Rechnitzer (AR) (The University of Melbourne) and Prof. Ian Enting (IGE) (MASCOS, formerly CSIRO). Chapter 1 (Introduction) contains no new research, just introductions to the models that the thesis studies. Chapter 2 is joint work with IGE and AR. Sections 2.4 to 2.6 are largely taken from the work of Baxter, while the method in Section 2.10 is based on Nishino and Okunishi's corner transfer matrix renormalization group method. Chapter 3 is also joint work with IGE and AR. Section 3.2 is taken from previous works by Enting, while Section 3.4 is based on work by Hankey and Stanley. Chapter 4 is joint work with AJG. A shortened version of this chapter has been published in a conference journal volume of Discrete Mathematics and Theoretical Computer Science. Theorems and were proved by one of the referees of that paper. Chapter 5 is joint work with AR and Aleks Owczarek (The University of Melbourne). Section 5.3 is based on works by Madras and Slade and by van Rensburg et al. Theorem was given to us by Gordon Slade. Section 5.8 is adapted from the work of Dubins et al.

Acknowledgements

To my supervisors: Tony, for looking after me from beginning to end (and after); Ian, for looking after me at conferences among other things (and reading 230 pages while attending one); and Andrew, for putting up with me knocking on his door every other day or so.

To my family: my parents, for always supporting, feeding and housing me; Yi-Shen, for having someone to talk to and play games with; Mandy, for providing entertainment; and last but not least, Bilbo, for looking cute.

To inanimate objects: computer games, for keeping me sane; table tennis, for keeping me entertained, exercised and friend-ly at the same time; and piano, for giving me something else to do.

To the money-providers: the Australian Government, CSIRO, and MASCOS. I liked the hotels.

CONTENTS

1. Introduction
   In search of simplicity
   Series and lattice animals
   Combinatorial objects
   Polygons and knots
   In this thesis

2. The corner transfer matrix method
   Introduction
   Transfer matrices
   A variational result
   The CTM equations
      An expression for the partition function
      Eigenvalue equations
      Stationarity
      The CTM equations
   The infinite-dimensional solution
   Calculating quantities
   The 1x1 solution: an example
   Matrix algorithms
      Cholesky decomposition
      The Arnoldi method
   The iterative CTM method
      The hard squares model: an example
      Convergence/results
      Technical notes
      Efficiency
   The renormalization group method
      Convergence/results
      Technical notes
      Efficiency
   Conclusion

3. The second-neighbor Ising model
   Introduction
   The finite lattice method
      Finite lattice approximation
      Transfer matrix method
      The Ising model: an example
   Convergence of the CTM method
      Number of iterations
      Matrix size
   Scaling theory
      Scaling and the crossover exponent
   Finding the critical lines
      The upper line
      The lower line
      The disorder point line
   Results
   Conclusion

4. Directed lattice walks in a strip
   Introduction
   Finding the generating function
      A transfer matrix algorithm
      A method of recurrences
   One walker
   Results
      Variable friendliness
      Variable number of walkers
      Growth constants
   Bandwidth
   Conclusion

5. Mean unknotting times
   Introduction
   Small knots
   Kesten's pattern theorem
      Fourier transforms
   A walk on an n-cube
      Bounds on the mean unknotting time
   The pivot algorithm
      Validity of the pivot algorithm
   Results
   Conclusion

Appendix

A. Generating functions for walks in a strip
B. Critical points for walks in a strip

LIST OF FIGURES

1.1 Pictorial representations of spins
A two-dimensional square grid. We place our spins at each vertex of this grid
Some 2-dimensional grids
Each vertex in the square grid is connected to 4 other vertices
Only a small number of spins significantly affect any one spin
The energy-creating interactions in the model, including the external field
Some variations on the Ising model
Graphs on the square lattice
Small graphs which contribute to the Ising partition function series
Some combinatorial objects of interest
Some combinatorial objects of interest that can be constructed from a walk
Every directed walk that ends at a fixed point has the same number of horizontal steps and vertical steps
An example of vicious walks
A Dyck path
Variations on vicious walks
Self-avoiding polygon trails and knots resemble each other
Some embeddings of the unknot
Reversing a crossing
The knot 7_4 can be unknotted with two crossing reversals
A 3-dimensional cubic lattice with atoms at each vertex
A square lattice
V is a matrix which transfers a column of spins
Calculating the first few terms of the low-temperature series for the Ising model partition function. Hollow circles denote spins with value -1, and dashed bonds denote unlike bonds
A corner transfer matrix incorporates the weight of a quarter of the plane
IRF models can be described solely by their effect on a single cell. All of these interactions apply at the same time
A one-dimensional lattice
One-dimensional transference
Two-dimensional transference
Toroidal boundary conditions
Multiple column transfer matrices

2.12 ω gives the weight of a single cell
Decomposition of a column matrix into single cells
Reflection symmetry. The weights of configurations (a) and (b) are identical in undirected models
At optimality, ψ is an eigenvector of V
Decomposition of ψ into m F's
Full-row transfer matrix interpretation of R
Full-row transfer matrix interpretation of S
Half-plane transfer matrix interpretation of X
Half-plane transfer matrix interpretation of Y
Graphical interpretation of Equation
The graphical interpretation of A as a corner transfer matrix gives interpretations of Equations 2.69 and
Graphical interpretation of Equation
Calculating κ. The expression we use is (a) (c) (b)
Expansion of F matrices in Equation
Expansion of A matrices in Equation
Log-log plot of approximated κ vs. final matrix size
The interactions around a cell for the second-neighbor Ising model
A lattice with two different types of spins (filled and hollow)
A typical configuration in the ferromagnetic low-temperature phase of the Ising model
A typical configuration in the anti-ferromagnetic low-temperature phase of the Ising model
A typical configuration in the high-temperature phase of the Ising model
We can divide the lattice into two sets of spins such that every nearest-neighbor bond connects one spin from each set
The Ising model is symmetrical in the parameter J
With no nearest-neighbor interaction, the lattice decouples into two separate square lattices
An approximate phase diagram in the variables u and v
A typical configuration in the super anti-ferromagnetic phase of the second-neighbor Ising model
Transfer matrices for a finite lattice. Hollow spins are not in the lattice
Single-cell transfer matrices. The first moves an n-spin cut to an n + 1-spin cut; the second moves the cut further to the right and down; the third reduces it to n spins
Calculated magnetization vs. number of iterations at the point (0.42, 0) with matrix size 7. The value converges monotonically
Magnetization vs. iterations at (0.42, 0) with matrix size 8. The value oscillates, but converges

3.15 Magnetization vs. iterations at (0.42, 0) with matrix size 10. The value appears to be periodic
Magnetization vs. iterations at (0.43, 0) with matrix size 19. The value stays around the same value, but without discernible periodic behavior
Magnetization vs. iterations at (0.42, 0) with matrix size 9. The value oscillates, eventually diverging
Magnetization vs. iterations at (0.42, 0) with matrix size 18. The value switches to 1 - m halfway
Magnetization vs. iterations at (0.414, 0) with matrix size 8. The value is still increasing significantly after 1000 iterations
Calculated magnetization vs. final matrix size at the point (0.42, 0)
Log-log plot of magnetization vs. size at the point (0.5, 0)
Magnetization vs. size at the point (0.41, 0). At sizes higher than 2, the calculated magnetization is almost exactly
Calculated magnetization along the u-axis for a range of final sizes. The leftmost line represents size 1, and the size increases as we move to the right
Estimated critical lines for sizes 1-5. The lowest line represents size 1, and the size increases as we move upwards
Log-log plot of critical points on the u-axis vs. matrix size
Critical exponents near the crossover point
We evaluate the magnetization along vertical and horizontal lines to estimate the location of the critical line
Calculated (left) and actual (right) magnetization along the u-axis for a fixed matrix size
Estimating critical points by assuming a constant error on the critical line. For this matrix size (3 x 3), we then estimate the critical points to be where m̂(u, v) = (1 - 2m)^8, for calculated (left, size 3) and actual (right) magnetization
Calculating critical points by fitting a line to (1 - 2m)^8. Our estimate is the intercept of the line with the x-axis in this case
Magnetization vs. inverse size at the point (0.0005, √2 - 1)
Log-log plot of magnetization vs. u on the line v = √2 - 1
Figure 3.33 with fitted lines
Estimated critical lines in the (u, v) plane. The disorder point line is also shown
Estimated critical exponents along the lower phase boundary
Estimated critical exponents vs. J_2/J_1
General walks
The square lattice
A self-avoiding walk on the square lattice
The stretched and rotated square lattice
An example configuration of 4 vicious walkers

4.6 An example configuration of four 3-friendly walkers. The thicker lines contain more than one walker
Example configuration of four 3-friendly walkers in a strip
Some simple transfer graphs
The distance between walkers can only change by -2, 0, or 2 for any step
Possible first steps for two walkers
Dividing a single walk in a strip at the points where the walker returns to height
Dividing walks in a strip of width 3 when the walkers are 2 units apart
Dividing walks in a strip of width 4 whenever the walkers are apart
A possible way in which walkers may separate and then join in a single step
A configuration showing all possible even-height states for three vicious walkers in a strip of width 7. There is only one state where the lowest walker is at height
Dividing configurations when the first walker reaches height
Possible non-trivial paths for the first walker in an end-segment
Growth constants vs. friendliness for 2 walkers, width 3 (plus signs) and 4 (crosses)
Log-plot of Figure 4.18 for width 4, with asymptotic fitted line
Growth constant vs. strip width for 2 walkers, for vicious (plus signs) and 1-friendly (crosses) walkers
Example configuration for three 4-friendly walks in a strip of width 5
Electron micrograph of tangled DNA (from [138])
Some knots
Reidemeister moves
Different positions for a single crossing
Calculating the Alexander polynomial of an example knot
Different types of crossing
The action of topoisomerase II (from [118])
Reversing a crossing
A knot demonstrating why it is not sufficient to consider minimal embeddings to find the unknotting number
The knot 5_1 takes two crossing reversals to become the unknot
Example of a SAPT
Converting a SAPT to a knot
Some possible reversals of
Transfer diagram for
Three types of trails. The starting vertices are denoted by hollow circles
Decomposition of a half-space trail. Here A_1 = 6, A_2 = 5, A_3 = 3 and A_4 = 2. We also have n_1 = 8, n_2 = 21, n_3 = 29, and n_4 = n
Transformation of a half-space trail

5.18 Decomposition of a self-avoiding trail into two half-space trails. (a) is the original trail; it decomposes into (b) and (c)
Joining two SAPTs to form a larger one
Transformation of bridges into self-avoiding trails which do not cross a line
Joining two trails to make a self-avoiding trail which ends at a neighbor of the origin. We used the transformed trails in Figure
A prime pattern which induces a crossing
Trefoil segment; (a) without crossings and (b) with crossings
Bijection between paths on the n-cube that pass through certain points
All possible crossing reversals of a trefoil segment. The two knots in the centre are knotted; the outer knots are unknotted. The highlighted crossings are the ones which differ from the connected centre knot
Terminology for SAPTs
Transforming a SAPT into a rectangle through pivots. There are 3 pivots between (c) and (d)
SAPT with vertical support line and non-intersecting segments. After rotation a diagonal support line can be drawn
Comparing the steps p_{i-1} p_i and p_{j-1} p_j in a SAPT with a diagonal support line
Transforming one rectangle to another via a rotation and a reflection
Mean unknotting time vs. length of generating SAPT
Log-log plot of mean unknotting time vs. length

1. INTRODUCTION

1.1 In search of simplicity

In the physical sciences, it is of great interest to study the properties of a magnet. In general, there are two ways that one can achieve this end: by experimentation or by theoretical means. Considering that this is a theoretical thesis, we choose the latter way. We will start by stating the goal of a fair-sized section of statistical mechanics:

To find and/or estimate the physical behavior of a magnet via theoretical means.

Since we are taking a theoretical approach, we will approximate the physical magnet with a theoretical model. There are many ways to create such a model, of varying degrees of usefulness. This leads us to the goal of this section:

To create an accurate and useful theoretical model of a magnet.

As one might suspect, there is a distinct trade-off between the accuracy of a model and its complexity: if we make our model too realistic, there are simply too many factors to keep track of, whereas if we simplify too much, our results become more and more inaccurate. The key lies in finding the right balance, so that we can actually get results, while still retaining enough realism to make those results worthwhile.

The next question is: how do we model a magnet? This question has many answers, and various ways of modelling have different levels of usefulness depending on what we wish to know. The approach taken by statistical mechanics is the microscopic approach. The reasoning is that since a magnet is known to be made up of many atoms, each with its own magnetic spin, it should be possible, and reasonable, to model a magnet as a very large number of these atoms.

Interestingly, more than just magnets can be modelled in this way. Since every substance is composed of lots of atoms, a many-atom model is applicable to many materials. However, for the purposes of this thesis, we will concentrate on a magnetic model.

As stated, the magnet contains many magnetic atoms. We would like to incorporate these atoms into a model which we can use to estimate or calculate the properties of a magnet. The first thing we must do is to figure out what factors affect the behavior of the magnet. These factors include, but are not limited to:

The nature of the atoms

The arrangement of the atoms
The interatomic interactions generated by the presence of many atoms in a small space

and external variables such as

The surrounding temperature
The surrounding external magnetic field.

The question then becomes what values to set all of these factors to. Obviously, there are many possibilities for all of the abovementioned factors. By varying these factors (or, indeed, introducing new ones) many different magnetic models can be made, some of which can give wildly varying results to others. The majority of these models are simply too complicated for us to even consider anything more than the simplest calculations. These do not interest us. What we want is a simple model that we can generate results from, even if our assumptions are not quite correct and our results not quite exact. At least this model will give us some results! To get to such a simple model, we start by specifying our factors. We will consider each factor in turn, but note that there are many possibilities for each factor, of which more than one may be equally interesting (and valid). We will only consider one possibility to start off with.

1. The nature of the atoms. The first thing we do is to specify the nature of our atoms exactly. While it is certainly possible to take into consideration the mass or physical size of the atoms, or whether we have different types of atoms, we are ultimately searching for simplicity. As such, we model the atom by the simplest possible object: a single point. This point has exactly one property, its magnetic spin, which we represent by a single number. Because of this, we will also refer to the atoms as spins from now on. In fact we can even do better than this; not only will we set the magnetic spin of an atom to be just one number, but we also force that number to take only two values. These values are more or less arbitrary, as a linear transformation will transform any two values into any other. On the other hand, we would like to have both positive and negative magnetic strengths, and it seems reasonable that the basic unit of strength should be equal in both directions. Therefore we take the possible spin values (also known as states) to be 1 and -1. Sometimes these spins are called up and down spins, mostly due to their possible pictorial representations. We show these in Figure 1.1.

2. The arrangement of the atoms. Next, we look at the physical arrangement of the atoms. Magnets are always solid objects, and solids have their atoms fixed in a regular pattern. We imitate this by placing each spin on the vertex of a regular grid, for example a two-dimensional square grid as shown in Figure 1.2. The question then becomes what grid we ought to place the spins on. We can separate this into two factors:

(a) Pictorial representation of up and down spins. (b) We will depict positive and negative spins with filled and hollow circles when we need to differentiate between them.

Fig. 1.1: Pictorial representations of spins.

Fig. 1.2: A two-dimensional square grid. We place our spins at each vertex of this grid.

The dimension of the grid, and
The geometry of the grid.

Given that magnets are 3-dimensional objects, we would undoubtedly like to use a 3-dimensional grid for our model. On the other hand, 3-dimensional grids often engender very complex and difficult-to-solve models. The only other realistic alternatives are 1- and 2-dimensional grids. It turns out that a 1-dimensional grid often yields a relatively simple calculation, but is not particularly realistic. However, it is still useful for testing techniques that we would like to extend to higher dimensions. This leaves the 2-dimensional grids. These possess a surprisingly great deal of complexity compared to their 1-dimensional counterparts. In fact, while many 1-dimensional models are exactly solvable, most 2-dimensional models remain unsolved, even today. In fact 2-dimensional models bear much more similarity to 3-dimensional models in terms of solvability and the techniques used on the models. However, since they are undoubtedly simpler, it seems best for us to use 2-dimensional grids. For the geometry of the grid, there are again many possibilities, a lot of which are valid and interesting. We have already shown the 2-dimensional square grid in Figure 1.2; other simple possibilities in two dimensions include triangular and hexagonal lattices, which we show in Figure 1.3. In 3 dimensions, the choice becomes even more complicated, as we encounter lattices like the face-centred cubic and body-centred cubic

(a) Triangular grid. (b) Hexagonal grid.

Fig. 1.3: Some 2-dimensional grids.

Fig. 1.4: Each vertex in the square grid is connected to 4 other vertices.

lattices. Ultimately, we would like to have models for all grids, but for the moment, we will have to just choose one. We choose to work on a 2-dimensional square grid (Z^2), placing a spin at every vertex of this grid. In this grid, each vertex is connected via a bond to 4 other vertices, as shown in Figure 1.4. The number of bonds incident on a single vertex is called the co-ordination number of the grid, denoted by q. Because each bond connects 2 vertices, there are twice as many bonds as vertices.

It is worth noting that in other statistical mechanical models, the atoms need not be fixed at all. This occurs most often in models of liquids and/or gases, which makes sense as the atoms are not in fixed positions in these substances in actuality. On the other hand, magnetic models can also be used to model gases and liquids, with reasonable success.

3. The interatomic interactions. Considering that we are representing the atoms solely by their magnetic spin, the only atomic interactions that we can count are the magnetic interactions between atoms. But a problem soon arises: in the real world, every magnetic spin would interact with every other spin in the magnet, so we must take into account all possible spin pairs. However, the number of spin pairs is much greater than the number of spins, so this is highly undesirable, especially when we note that the vast majority of spins

Fig. 1.5: Only a small number of spins significantly affect any one spin.

are comparatively far away from any given spin, and therefore would have a very small interaction with that atom. We show this in Figure 1.5. We can extend this thought to its logical conclusion: given any one spin, the spins with the greatest interaction with that spin are those spins which are nearest to it, i.e. the spins which are connected by an edge of the lattice, or the innermost circle in Figure 1.5. These spins are known as the nearest neighbors of the original spin, for obvious reasons. We will make a sweeping simplification and say that these are the only spins which interact with our given spin.

Given that only nearest neighbors interact, we now ask: how do they interact? Because our atoms are fixed in position, they will never move closer or farther away from each other. Instead, the magnetic interaction between two spins produces an internal energy in the system. Because the magnet tries to achieve a state which has low energy, keeping track of this energy enables us to set up a probabilistic model of the magnet.

Before we do this, we will quantify the spin interactions and the energy that these interactions produce. Since the spins are either positive or negative, there are only two possible interactions, which occur if the spins are aligned or not aligned (we say two spins are aligned if they take the same value). We let there be a certain amount of energy in the system when the spins are aligned, and an equal but negative amount otherwise. We denote this energy by J, keeping it as a parameter of the model. This means that if we have two neighboring spins with values σ_i and σ_j, the energy generated by the interatomic interaction between these two spins will be -Jσ_i σ_j.

4. The surrounding temperature. Considering that the temperature of the magnet can be easily varied, it makes sense to keep this as a parameter of our model. We denote it by T. At higher temperatures, the atoms are more agitated, so configurations with higher energy are more likely to occur than at lower temperatures.

Fig. 1.6: The energy-creating interactions in the model, including the external field.

5. The external magnetic field. Again, this is a quantity which can be easily varied. The external field will interact with each spin in a very similar manner to the spins interacting with each other. We let the energy generated by the interaction of a positive spin with the external field be -H, and take H to be the energy generated by a negative spin. Then the energy generated by any spin with value σ_i is equal to -Hσ_i.

We are now ready to set up our probabilistic model. Suppose that we have N spins in our model, where ideally N is very large, or even infinite. We label these spins arbitrarily from 1 to N, and let spin i have the value σ_i. The total energy of any configuration resulting from the magnetic interactions, both internal and external, is

-J Σ_{<i,j>} σ_i σ_j - H Σ_i σ_i    (1.1)

where the first sum is over all nearest-neighbor pairs of spins i and j. This quantity is called the Hamiltonian of the system, and is denoted H(σ_1, σ_2, ..., σ_N).

Now, we want to set up the probabilities for the model so that configurations of spins with lower energy are more likely to occur, while configurations with higher energy are less likely. We would also like all configurations with the same energy to be equally likely. To achieve this, we take the probability of any configuration to be proportional to the exponential of a negative multiple of the Hamiltonian. This means that

P(σ_1, σ_2, ..., σ_N) ∝ e^{-βH(σ_1, σ_2, ..., σ_N)} = exp( -β ( -J Σ_{<i,j>} σ_i σ_j - H Σ_i σ_i ) ).    (1.2)

Because higher temperatures increase the probability of higher-energy states, we take β = 1/kT, where k is Boltzmann's constant. This distribution is known as the Gibbs canonical distribution. From here, it is a simple matter of normalisation to find the probabilities, since the sum of all probabilities must equal 1. The normalising factor is

Z_N = Σ_{σ_1, σ_2, ..., σ_N} exp( βJ Σ_{<i,j>} σ_i σ_j + βH Σ_i σ_i )    (1.3)

which we call the partition function. This means that the probability of any configuration is

P(σ_1, σ_2, ..., σ_N) = (1/Z_N) exp( βJ Σ_{<i,j>} σ_i σ_j + βH Σ_i σ_i ).    (1.4)

Not by coincidence, we have gradually arrived at the most famous and well-studied model in statistical mechanics, the Ising model (which in this case is more properly called the spin-1/2 two-dimensional square lattice Ising model), which was introduced by Ising in 1925 ([72]). Evaluating the partition function is the most important task in studying these models, as we can derive all of the information that we want to know about the magnet from it. Therefore the aim of every statistical mechanical model is to derive an exact, closed form expression for the partition function. If we can do this, we call the model solved. In a famous paper in 1944, Onsager ([114]) managed to exactly solve for the partition function of the Ising model in the case of zero field (H = 0). However, it is a testament to the complexity of the model (despite the simplicity of its definition) that the arbitrary-field case still has not been solved, even 60 years later.

Many of the choices that we have made in setting up this model are quite arbitrary, although they are reasonable. By relaxing or altering our conditions, many other interesting models can be formed. Such models include:

The hard squares model. In this model, we dispense with the quantification of the spin-pair interaction between the atoms. Instead we let each spin have a base state (1) and an excited state (-1), and we forbid any configurations where two excited states are adjacent to each other. If we draw a square around each excited state with vertices at the nearest neighbors of that spin, then this constraint implies that no two of these squares can intersect: hence the term hard squares. Because we have removed one parameter, this model is usually slightly less complicated than the Ising model. We illustrate this model in Figure 1.7(a), and will study it further in Chapter 2 (mostly as an example).
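Both definitions above are concrete enough to check by direct enumeration on a tiny grid. The sketch below (our own illustration, not part of the thesis; function names are ours) evaluates the partition function of Equation 1.3 by summing over all spin configurations, and counts the configurations allowed by the hard-squares constraint. Free boundaries are assumed, and the cost grows as 2 to the number of sites, so this is only feasible for very small grids.

```python
import itertools
import math

def ising_partition(rows, cols, J, H, beta):
    """Brute-force Z_N (Equation 1.3) for a small Ising grid with free
    boundaries.  The sum has 2**(rows*cols) terms."""
    Z = 0.0
    for spins in itertools.product((1, -1), repeat=rows * cols):
        bond_sum = 0  # sum of sigma_i * sigma_j over nearest-neighbor pairs
        for r in range(rows):
            for c in range(cols):
                s = spins[r * cols + c]
                if r + 1 < rows:                      # bond to the spin below
                    bond_sum += s * spins[(r + 1) * cols + c]
                if c + 1 < cols:                      # bond to the spin at right
                    bond_sum += s * spins[r * cols + c + 1]
        Z += math.exp(beta * J * bond_sum + beta * H * sum(spins))
    return Z

def hard_squares_count(rows, cols):
    """Count allowed hard-squares configurations on a small grid with free
    boundaries: no two excited sites may be nearest neighbors (equivalently,
    the number of independent sets of the grid graph)."""
    count = 0
    for excited in itertools.product((0, 1), repeat=rows * cols):
        ok = True
        for r in range(rows):
            for c in range(cols):
                if excited[r * cols + c]:
                    if r + 1 < rows and excited[(r + 1) * cols + c]:
                        ok = False                    # excited neighbor below
                    if c + 1 < cols and excited[r * cols + c + 1]:
                        ok = False                    # excited neighbor at right
        count += ok
    return count

# Two Ising spins joined by one bond: Z = 2e^{beta J} + 2e^{-beta J} at H = 0
print(ising_partition(1, 2, J=1.0, H=0.0, beta=1.0))  # prints 4 cosh(1)
# Hard squares on a 2 x 2 grid: 7 of the 16 configurations are allowed
print(hard_squares_count(2, 2))                       # prints 7
```

Such brute-force sums are of course no substitute for the transfer matrix and series methods of the later chapters; they merely make the definitions tangible and provide a check for small cases.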
The q-state Potts model. This model does away with the assumption that the atoms can only take two states. Instead, each spin has q possible states, numbered from 0 to q - 1. This forces us to make some changes to the way the interactions work: only spins in state 0 are affected by the external magnetic field, and the only nearest-neighbor pairs of spins which interact are those for which the spins are in the same state.

The second-neighbor Ising model. In this model, we remove the restriction that a spin can only interact with its nearest neighbors. Now we allow spins to interact also with spins which are diagonally removed across a square in the grid (their second neighbors). Because the interaction between first neighbors is obviously of different strength to the interaction between second neighbors, we set the strength of the first-neighbor interaction (previously J) to be J_1, while the second-neighbor interaction

(a) A configuration in the hard squares model. Hollow circles denote negative spins. (b) Interactions in the second-neighbor Ising model.

Fig. 1.7: Some variations on the Ising model.

strength is denoted by J_2. The partition function of this model then becomes

Z_N = Σ_{σ_1, σ_2, ..., σ_N} exp( βJ_1 Σ_{<i,j>} σ_i σ_j + βJ_2 Σ_{<i,k>_2} σ_i σ_k + βH Σ_i σ_i )    (1.5)

where the second sum in the exponential is over all second-neighbor spin pairs i and k. We will explore this model in detail in Chapter 3, but for the moment, we illustrate the interactions in Figure 1.7(b).

1.2 Series and lattice animals

The Ising model is a statistical mechanical model, but hidden underneath the surface lie some very deep combinatorial aspects. One link comes from the partition function Z_N. We can write this as

Z_N = Σ_{σ_1, σ_2, ..., σ_N} Π_{<i,j>} e^{βJ σ_i σ_j} Π_i e^{βH σ_i}.    (1.6)

Now suppose, for the sake of simplification, that there is no external magnetic field, i.e. H = 0. Then the second exponential in the above expression is always 1. Looking at the first exponential: because the spins can only take the values 1 and -1, we have

e^{βJ σ_i σ_j} = e^{βJ} if σ_i = σ_j, and e^{-βJ} otherwise    (1.7)

for any i and j. Therefore we can take

e^{βJ σ_i σ_j} = cosh βJ (1 + σ_i σ_j tanh βJ).    (1.8)

(a) A graph on the square lattice. (b) A connected graph on the square lattice. All vertices have even degree. Fig. 1.8: Graphs on the square lattice.

Since there are 2N nearest-neighbor pairs on the square lattice, we can write the zero-field partition function as
$$Z_N = (\cosh \beta J)^{2N} \sum_{\sigma_1,\sigma_2,\ldots,\sigma_N} \prod_{\langle i,j\rangle} (1 + \sigma_i \sigma_j \tanh \beta J). \quad (1.9)$$
Now consider the product in the above equation. If we expand this product, we will have a sum of $2^{2N}$ terms, since there are 2N bonds. Each of these terms is a product of contributions of either 1 or $\sigma_i \sigma_j \tanh \beta J$ from each bond. Now select one term (arbitrarily) and envision a square grid. If we say that for every nearest-neighbor pair which contributes $\sigma_i \sigma_j \tanh \beta J$, there is a bond on the grid, then each term will correspond to a graph on the grid. We show some of the possible graphs in Figure 1.8.

Now if we look at each term, we see that the power of $\sigma_i$ in any term is equal to the degree of the vertex $i$ in the corresponding graph. However, because $\sigma_i$ can only take the values 1 or −1, summing $\sigma_i^n$ over $\sigma_i = \pm 1$ gives 0 if n is odd and 2 if n is even. Therefore the only possible non-zero sums occur when there are even powers of every σ in the term. This means that every graph which contributes to the partition function must have even degree in all vertices. An example of such a graph is shown in Figure 1.8(b). For these terms, the sum is taken over all $2^N$ possible configurations of spins. Therefore, if we set $v = \tanh \beta J$, we can write $Z_N$ as the series
$$Z_N = 2^N (\cosh \beta J)^{2N} \sum_g v^{|g|} \quad (1.10)$$
where the sum is over all graphs g on the square lattice which have even degree at every vertex, and $|g|$ denotes the number of bonds in g. Therefore $Z_N$ is a multiple of the generating function of these graphs, and its first few terms can be calculated by enumerating these graphs. This is called the high temperature expansion of $Z_N$, introduced by van der Waerden in 1941 ([132]).
When cast in this fashion, the problem of calculating the partition function of the Ising model is shown to be equivalent to a combinatorial problem.
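The equivalence can be verified directly on a small example. The sketch below (our own illustration, not taken from the thesis) computes the zero-field partition function of a 3 × 3 Ising lattice with periodic boundaries both by summing over all spin configurations and via the high-temperature sum over even-degree bond subsets; the two agree to machine precision:

```python
import itertools, math

L = 3                              # 3x3 square lattice, periodic boundaries
N = L * L
sites = [(x, y) for x in range(L) for y in range(L)]
idx = {p: i for i, p in enumerate(sites)}
bonds = []                         # 2N bonds: one east and one north per site
for (x, y) in sites:
    bonds.append((idx[(x, y)], idx[((x + 1) % L, y)]))
    bonds.append((idx[(x, y)], idx[(x, (y + 1) % L)]))

K = 0.3                            # the coupling beta * J (arbitrary test value)

# Z by direct summation over all 2^N spin configurations
Z_direct = sum(
    math.exp(K * sum(s[a] * s[b] for a, b in bonds))
    for s in itertools.product((-1, 1), repeat=N)
)

# Z by the high-temperature expansion: 2^N cosh(K)^{2N} times the sum of
# v^{|g|} over bond subsets g in which every vertex has even degree
v = math.tanh(K)
masks = [(1 << a) | (1 << b) for a, b in bonds]   # vertex-parity mask per bond
poly = 0.0
for g in range(1 << len(bonds)):
    parity, m, nb = 0, g, 0
    while m:
        parity ^= masks[(m & -m).bit_length() - 1]
        nb += 1
        m &= m - 1
    if parity == 0:                # all vertex degrees even
        poly += v ** nb
Z_series = 2 ** N * math.cosh(K) ** (2 * N) * poly

assert abs(Z_direct - Z_series) < 1e-9 * Z_direct
```

Note that this already exhibits the inefficiency discussed next: the subset loop visits all $2^{2N}$ bond subsets, which restricts the check to very small lattices.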

(a) All valid graphs, up to translation, with 4 bonds. (b) All valid graphs, up to translation, with 6 bonds. (c) All valid connected graphs, up to rotation and translation, with 8 bonds. Fig. 1.9: Small graphs which contribute to the Ising partition function series.

Now we look at ways that we can solve this combinatorial problem. The most obvious solution is to count the graphs directly, which is known as direct enumeration. For the Ising model, the non-trivial graph with the least number of bonds which has even degree in every vertex is the square shown in Figure 1.9(a). The lower left vertex of the square (say) can be placed at any vertex in the grid, which means that there are N such graphs that can be placed on the square lattice. The next graphs (with 6 bonds) that satisfy our conditions are the rectangles in Figure 1.9(b). By the same argument, there are 2N such graphs on the grid. However, when we go to 8 bonds, the situation becomes more complicated: in addition to the figures in Figure 1.9(c), which number 9N, we have the possibility of two disconnected squares. There are N ways to place the first square and N − 9 ways to place the second, but this counts each placement twice, so the total number of valid graphs with 8 bonds is $\frac{1}{2}N^2 + \frac{9}{2}N$. This gives the first few terms of $Z_N$:
$$Z_N = 2^N (\cosh \beta J)^{2N} \left(1 + N v^4 + 2N v^6 + \left(\tfrac{1}{2}N^2 + \tfrac{9}{2}N\right) v^8 + \cdots\right). \quad (1.11)$$
It quickly becomes apparent that direct enumeration rapidly becomes far too complex and time-consuming to use for finding any but the very first few terms of $Z_N$. In fact, for this problem direct enumeration is an exponential time algorithm, meaning that if we wanted to compute n terms of the partition function series using direct enumeration, the time we will need is approximately proportional to $\alpha^n$ for some constant α. This is very inefficient: for example, if α = 2, which is not an overly large number, finding any one term will take as long as finding every single term before it! For the Ising model, direct enumeration is even more inefficient than this, as α =

To find more series terms, we need more efficient algorithms. There are two ways to do this. One is to lower the growth constant α. While this will produce an algorithm that is exponentially quicker than before, it is still an exponential time algorithm. The holy grail of series enumeration (not only of the Ising model partition function, but almost all series) is to find a polynomial time algorithm (or faster, but that would just be unrealistic), which is an algorithm which takes time on the order of $n^\beta$ to compute n series terms. Unfortunately such algorithms are few and far between. Of course, these are just the two extremes of efficiency; it is entirely possible to have algorithms which are sub-exponential and still not polynomial, for instance taking time on the order of $\gamma^{\sqrt{n}}$.

In the search for more efficient algorithms to enumerate our series, many sophisticated methods have been devised for finding the partition function of the Ising model. The most notable of these are the well-known finite lattice method or FLM, which we shall describe in Chapter 3, and the lesser-known but potentially more efficient corner transfer matrix method or CTM method, which we study in detail in Chapter 2 (and apply in Chapter 3).

1.3 Combinatorial objects

In the previous section, we saw that it was possible to calculate the Ising model partition function simply by counting graphs on a square grid. In fact this is by no means a unique phenomenon: a whole host of other important quantities can be calculated by enumerating combinatorial objects. Indeed, often it is interesting to count the objects even if they do not have a direct application in statistical mechanics. For these objects, we therefore ask the question: given a class of objects with a size measure, how many objects of size n are there? Some such objects of interest include:

Bond animals. Very similar to the graphs that are counted by the high temperature expansion of the Ising model partition function, a bond animal is a connected set of bonds.
Site animals. As with bond animals, a site animal is a connected set of sites. We define two sites as connected if they are nearest neighbors. Interestingly, there is also a series formulation of the Ising model partition function, appropriate for low temperatures, which involves counting sets of site animals.

Polyominoes. Polyominoes can also be thought of as cell animals. If we define each unit square on the grid as a cell, then polyominoes are connected sets of these cells.

There are also interesting specializations of these objects such as

Polygons. A polygon (or self-avoiding polygon) is a bond animal where each vertex has either degree 2 or degree 0. We can also think of the interior of a polygon as a special kind of polyomino with no holes.

(a) A bond animal. (b) A site animal. The sites in the animal are filled. (c) A polyomino. (d) A self-avoiding polygon. (e) A directed bond animal. (f) A column-convex polygon. (g) A staircase polygon. Fig. 1.10: Some combinatorial objects of interest.

Directed bond and site animals. These are bond and site animals in which every bond or site can be reached from one bond/site (called the root) by a path that takes only steps in certain directions, commonly north and east.

Column-convex polygons. A polygon is column-convex if the intersection of its interior with any vertical line is connected.

Staircase polygons. These are polygons which are the union of two walks which use only steps in two directions, commonly north and east.

We show examples of these objects on a square lattice in Figure 1.10. Many of these objects (in particular polygons and specialized polygons) can be constructed from a walk on the lattice. A walk is just a single path on the lattice, which starts and ends at a vertex. It can also be thought of as the locus of a walker (characterised solely by its location) moving at constant velocity on the lattice. We take the size of the walk to be its length (or, equivalently, the time taken to complete the walk). Polygons can be constructed directly from a single walk by envisioning a walker which starts at an arbitrary point and must finish at that same point, but otherwise cannot visit a vertex that it has already visited (even the starting point). Many restricted polygons (e.g. column-convex polygons) can be constructed in a similar manner. Some similar objects which can also be constructed from a single walk are:

Self-avoiding walks. A self-avoiding walk is a walk which cannot revisit any vertex which it has already visited.

Directed walks. A directed walk can only walk in certain directions, usually north and east.

Self-avoiding trails. A self-avoiding trail is similar in nature to a self-avoiding walk, but the walker cannot traverse any edge that it has already traversed (visiting already visited points is allowed as long as edges are not revisited).

Self-avoiding polygon trails. This is an amalgamation of self-avoiding polygons and self-avoiding trails: the walk cannot revisit edges (points are okay), but must finish at its starting point.

Spiral walks. These are self-avoiding walks which can only turn in one direction, creating a spiral-like pattern.

3-choice walks. These are walks which are forbidden to make clockwise turns after a step in the horizontal direction. So if the walker moves east in one step, it cannot move south in the next; similarly for west and north.

We show examples of these objects in Figure 1.11. The self-avoiding walk problem in particular (or more precisely, counting the number of self-avoiding walks of length n) is a well-known and much-studied problem in combinatorics. Despite the best efforts of mathematicians over more than 50 years, this still remains unsolved for general n. For most enumeration problems, if we cannot calculate the exact number of objects directly (which happens often), we look for the asymptotic behavior. This is a relationship between the number of objects and a measure of size, as that measure grows to infinity. In the case of walks, we try to express the number of walks of length n in terms of n as $n \to \infty$. If the number of walks is approximately $\alpha^n$ for some non-trivial number α, we say that there is an exponential relationship, and call α the growth constant. It is a testament to the difficulty of the self-avoiding walk problem that we do not even have an exact value for the growth constant of the number of walks on the square lattice! In contrast, the directed walk problem is almost trivial.
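This triviality is easy to check concretely (a small illustrative script of our own): at each step a directed walker chooses north or east, so there are $2^n$ walks of length n, and those ending at $(m, n-m)$ number $\binom{n}{m}$:

```python
from itertools import product
from math import comb

def directed_walks(n):
    """Group all 2^n north/east step sequences by their endpoint."""
    ends = {}
    for steps in product("NE", repeat=n):
        m = steps.count("E")              # number of horizontal steps
        end = (m, n - m)
        ends[end] = ends.get(end, 0) + 1
    return ends

n = 8
ends = directed_walks(n)
assert sum(ends.values()) == 2 ** n       # total count of directed walks
for (m, _), count in ends.items():
    assert count == comb(n, m)            # walks to (m, n - m)
```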
At each step, the walker has two possible steps to choose from, north and east. None of these steps are ever prohibited, and therefore the number of walks of length n must be $2^n$. Even if we wish to keep track of the end-point of the walk, any directed walk of length n from the origin to $(m, n-m)$ must take m horizontal steps out of a total of n, which means that there are $\binom{n}{m}$ such walks, as shown in Figure 1.12. However, we can insert greater complexity into the problem by considering more than one walk at a time. We notice that a directed walk, by construction, must be self-avoiding. Suppose that we now take several directed walkers, which travel in the same directions. If the walkers do not affect each other, calculating the number of possible walks is easy. However, once we impose an avoidance constraint, the problem becomes much harder. This

(a) A self-avoiding walk. (b) A directed walk. (c) A self-avoiding trail. (d) A self-avoiding polygon trail. (e) A spiral walk. (f) A 3-choice walk. Fig. 1.11: Some combinatorial objects of interest that can be constructed from a walk.

Fig. 1.12: Every directed walk that ends at a fixed point has the same number of horizontal steps and vertical steps.

Fig. 1.13: An example of vicious walks. Fig. 1.14: A Dyck path.

is the basis of the vicious walks problem. In this problem, there are several walkers, which can only move in directed walks. However, when two walkers meet at the same point, they annihilate each other, so no such configuration is allowed. We still have the problem that a walker may visit a point which has been previously visited by other walkers that have moved on. To overcome this, we adjust the model so that this cannot possibly happen, by forcing all walkers to have the same x-coordinate at any one time. To achieve this, we rotate the square lattice clockwise by an angle of $\pi/4$ and expand it by a factor of $\sqrt{2}$, so that the walkers still have integer coordinates, but can only take north-east or south-east steps. Since this ensures that the walks will have travelled the same x-distance at any one time, we can then ensure that they stay in step by starting them all at the same x-coordinate, in this case 0. By spacing them 2 units apart on the y-axis, we then arrive at the vicious walks problem. An example of this model is shown in Figure 1.13. This model is related to the well-known Dyck paths. A Dyck path is a path on the same lattice which starts at the origin, ends at height 0, and does not go below the x-axis. We show a Dyck path in Figure 1.14. The vicious walks problem has been solved exactly by Guttmann, Owczarek and Viennot in 1998 ([65]). They found that if there are p vicious walkers, then the total number of

possible walk configurations of length n is
$$\prod_{1 \le i \le j \le n} \frac{p+i+j-1}{i+j-1}. \quad (1.12)$$
Like the Ising model, we can also make the vicious walks problem a little more interesting by devising several variations. We do this by, again, relaxing or altering the conditions of the model. Some of these variations are:

Walks with a wall. We apply a Dyck path-like constraint to the walks, preventing them from going below the line y = 0. Alternatively, we could prevent the walks from going above the line y = L.

Walks in a horizontal strip. We apply two boundary conditions, forcing the walks to remain within the horizontal strip between y = 0 and y = L.

Friendly walks. We relax the avoidance constraint, so that walks may touch, but not cross each other.

n-friendly walks. This is a generalisation of the model, rather than a modification. We allow two (but no more than two) walks to touch, and they may continue to touch for up to n vertices, where n is a number called the friendliness. We still do not allow the walks to cross. Under this system, vicious walkers are 0-friendly.

We show some of these variations in Figure 1.15. The different models that can be made with these variations are quite interesting, and for many cases the models have not been solved exactly, although asymptotic behavior has been found for many. We are interested in one model in particular, that of n-friendly walks confined in a horizontal strip. This model amalgamates the principles of walks in a strip and n-friendliness in an obvious manner. We study this model in detail in Chapter 4.

1.4 Polygons and knots

If we take the self-avoiding polygon trail described in the previous section, we notice that it bears a striking similarity to the common representation of a knot. Formally, a knot is a simple, closed curve in 3 dimensions. The distinguishing feature of a knot is, informally, how tangled up it is with itself: in other words, how knotted it is.
To gain an idea of this concept, consider that there are many closed curves in 3 dimensions which can be transformed continuously into other curves in a physical sense; that is, we can physically move the first curve to match the other curve without breaking the curve or having it intersect itself at any stage. On the other hand, there are some curves which cannot be transformed into each other in this fashion. We say that any knots which can be transformed into each other in this way have the same knot type, although sometimes we use the terms knot type and knot interchangeably. We call such knots equivalent.

(a) Walks with a wall. (b) Walks in a horizontal strip. (c) Friendly walks. (d) n-friendly walks. Here n = 1. Fig. 1.15: Variations on vicious walks.
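The product formula of equation (1.12) can be checked against direct enumeration for small cases. The sketch below (our own illustration; the function names are ours) counts vicious-walker configurations by dynamic programming over the ordered heights of the p walkers:

```python
from fractions import Fraction
from itertools import product

def vicious_count(p, n):
    """Count length-n configurations of p vicious walkers started at
    heights 0, 2, ..., 2(p-1); at each step every walker moves +1 or -1,
    and no two walkers may ever occupy the same height."""
    states = {tuple(2 * i for i in range(p)): 1}
    for _ in range(n):
        new = {}
        for pos, c in states.items():
            # each walker independently steps up or down
            for moves in product((-1, 1), repeat=p):
                nxt = tuple(x + m for x, m in zip(pos, moves))
                # strict ordering keeps the walkers from ever meeting
                if all(nxt[i] < nxt[i + 1] for i in range(p - 1)):
                    new[nxt] = new.get(nxt, 0) + c
        states = new
    return sum(states.values())

def product_formula(p, n):
    """The Guttmann-Owczarek-Viennot product of equation (1.12)."""
    r = Fraction(1)
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            r *= Fraction(p + i + j - 1, i + j - 1)
    return int(r)

# e.g. two vicious walkers taking two steps: 10 configurations
assert vicious_count(2, 2) == product_formula(2, 2) == 10
```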

(a) An embedding of a knot. (b) A corresponding self-avoiding polygon trail. Fig. 1.16: Self-avoiding polygon trails and knots resemble each other.

To simplify our visualization of knots and knot types, we represent knots in a 2-dimensional form called an embedding. To reach an embedding, we project the knot down onto a plane, forming a closed (but not necessarily simple) curve. In most knots the projection will cross itself at least once, and in 3-space one of the strands of the knot at the crossing will be at greater height than the other. It is important in finding the type of the knot to know which strand this is, so every crossing is marked so that there is an overpass and an underpass. This is commonly represented by drawing a break in the under-crossing, so the embedding looks like a closed curve with breaks where it crosses itself. It is this embedding that the self-avoiding polygon trail bears a resemblance to. We show an example of such a resemblance in Figure 1.16.

The idea behind this resemblance is that both knot embeddings and self-avoiding polygon trails are closed curves which can cross themselves. Following this idea through, we can see that any self-avoiding polygon trail (abbreviated to SAPT) can in fact be converted to a knot embedding simply by creating a crossing at each place where it crosses itself, and randomly assigning one of the strands to be the overpass and the other to be the underpass. The advantages of such an association lie in the fact that we can now use what we know about self-avoiding polygon trails to infer information about knots. In particular, knot theory is very interested in the idea of unknotting. The idea is that there is a simplest knot, called the unknot, which can be embedded as a simple circle with no crossings, as shown in Figure 1.17(a). Of course, there are many other embeddings which are equivalent to the unknot, but which have crossings, such as Figure 1.17(b).
Unknotting is the act of transforming any knot into the unknot (or more precisely a knot equivalent to the unknot) by reversing crossings. We reverse crossings by switching the two strands in the crossing, so the overpass becomes the underpass and vice versa, but nothing else changes. This is illustrated in Figure 1.18. In fact, there is a very practical application for the use of knots and unknotting. There are many physical objects which can be modelled by knots; in particular, we are interested in strands of DNA. DNA is well known to have a double helix structure, but if we take each

(a) The unknot. (b) Another embedding of the unknot. Fig. 1.17: Some embeddings of the unknot. Fig. 1.18: Reversing a crossing.

double helix as a single strand, we see that a piece of DNA is essentially one long strand. Often this strand is tangled up with itself, forming a knot. Now, for the cell containing the DNA to be able to replicate, the DNA needs to be untangled (i.e. unknotted). There is an enzyme called topoisomerase II that acts on a section of the DNA by breaking it up, passing another section of DNA through the break, and then resealing the strand. We see that this corresponds exactly to the crossing reversal that we mentioned above. The aim of the topoisomerase action is to disentangle the DNA into one long, unknotted strand. This is equivalent to unknotting a knot by means of reversing crossings. We want to find out how long the disentangling will take. Put in terms of knots, this immediately leads to the following question: given a knot, what is the smallest number of crossing reversals needed to turn this knot into the unknot?

The answer to this question is called the unknotting number of the knot. This is a well-studied quantity, but determining the unknotting number of a given knot is often very hard; in fact, there exists a knot with 8 crossings whose unknotting number is unknown. This is due to the fact that the unknotting number looks at the smallest number of reversals needed over all possible embeddings of a knot, and every knot can be embedded in an infinite number of ways. Not only do different embeddings need a different number of minimum reversals, but it is possible for embeddings with more crossings to need fewer reversals than other embeddings with fewer crossings. So we cannot just take the simplest embeddings when trying to determine the unknotting number. In Figure 1.19 we show how the knot known as $7_4$ (in part because it has 7 crossings) can be unknotted in 2 crossing reversals, indicating that its unknotting number is either 1

Fig. 1.19: The knot $7_4$ can be unknotted with two crossing reversals.

or 2. It turns out ([94]) that there are no embeddings of $7_4$ which can be unknotted by a single reversal, so the unknotting number of $7_4$ is 2. However, there is a difficulty with using the unknotting number as a model for untangling DNA. The topoisomerase enzyme, being an enzyme that acts locally, will almost certainly not know which crossing to reverse to take the shortest route to the unknot; rather, it is much more likely that it simply reverses crossings at random. In terms of knots, this translates into the question: if we reverse crossings at random, what is the average number of crossing reversals needed to turn a given knot into the unknot?

We call this number the mean unknotting time of the knot. Ironically, this may even be harder to calculate than the unknotting number, because it is not well defined. Using different embeddings for the same knot results in vastly different mean unknotting times, so it makes much more sense to talk of the mean unknotting time of an embedding, rather than a knot. Ideally, we would like to choose a knot, and then randomly choose an embedding for that knot and find the mean unknotting time for that embedding. This is where the relation between self-avoiding polygon trails and knots comes in handy. One of the problems with embeddings is that they are difficult to generate randomly: after all, there are an infinite number of them, so how can we assign a probability distribution to them? On the other hand, it is (relatively) easy to generate a SAPT randomly, because there are a finite number of SAPTs of any length. All we need to do is choose one at random. The main problem with this method is that choosing SAPTs randomly in this way does not allow us to choose a knot type. Rather, we observe the relationship between the length of the self-avoiding polygon trail, which corresponds to DNA length, and the mean unknotting time.
The general idea is that the longer the length, the more likely the chance that the SAPT will fall into a complicated knot, and therefore length is directly related to complexity. In this way, we can get an idea of the mean unknotting time of a random knot. This problem is studied in detail in Chapter 5.
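Chapter 5 relates the mean unknotting time to a random walk on an n-cube, with the crossings as coordinates and a reversal as a coordinate flip. As a toy version of that connection (our own sketch, under the strong simplifying assumption that exactly one assignment of the crossings gives the unknot), the expected number of random reversals depends only on the Hamming distance d from the target, satisfying $T_0 = 0$ and $T_d = 1 + \frac{d}{n} T_{d-1} + \frac{n-d}{n} T_{d+1}$ (the last term being absent at d = n). This linear system is easy to solve:

```python
import numpy as np

def hitting_times(n):
    """Expected number of steps for a simple random walk on the n-cube
    to reach a fixed target vertex, indexed by Hamming distance d = 1..n."""
    A = np.zeros((n, n))
    b = np.ones(n)
    for d in range(1, n + 1):
        i = d - 1
        A[i, i] = 1.0
        if d > 1:
            A[i, i - 1] = -d / n        # a flip moving towards the target
        if d < n:
            A[i, i + 1] = -(n - d) / n  # a flip moving away from the target
    return np.linalg.solve(A, b)

T = hitting_times(2)
# for the square (n = 2): T_1 = 3 and T_2 = 4
assert abs(T[0] - 3) < 1e-9 and abs(T[1] - 4) < 1e-9
```

These times grow roughly like $2^n$ in the number of crossings, which already hints at why untangling a long strand by random reversals is slow; the thesis's actual model is more subtle than this single-target caricature.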

1.5 In this thesis

This thesis is divided into four chapters, each corresponding to a topic. Chapter 2 studies the corner transfer matrix method. In Section 2.1, we introduce the method and look at some historical background. In Section 2.2, we present some more background material in the form of a quick primer on transfer matrices and how they can be used to find series expansions. Section 2.3 gives a variational result that will be used later on. In Section 2.4, we derive (in great detail) the all-important CTM equations, following the original work of Baxter. In Section 2.5, we show how these equations yield the exact partition function in the infinite-dimensional limit. In Section 2.6, we show how to calculate physical quantities of interest from the equations, and we illustrate the equations by solving them for 1 × 1 matrices in Section 2.7. In Section 2.8 we present some matrix methods that will come in handy. In Section 2.9 we present an iterative method for calculating series based on the CTM equations. This method is analysed in Sections 2.9.1 to 2.9.4, where we look at the advantages and flaws in the method, as well as the results we derived from it. We derive another method based on the renormalisation group method of Nishino and Okunishi in Section 2.10, and analyse it in a similar manner. Lastly, in Section 2.11 we discuss possible enhancements to these methods and suggest future directions in which we can take the research.

Chapter 3 applies the renormalization group corner transfer matrix method to the second-neighbor Ising model. In Section 3.1 we derive the model and give some background. In Section 3.2 we illustrate and discuss the current premier series-derivation method in two dimensions, the finite lattice method. Next, in Section 3.3, we look at how the renormalization group CTM method applies to the model, analysing in detail the convergence of the method. We then analyse the second-neighbor Ising model theoretically by means of scaling methods.
Section 3.4 gives a quick primer of scaling techniques and devices, which we then apply to our model in Section 3.5 to estimate the crossover exponent. In Section 3.6 we discuss how to find the critical lines, using both the CTM and FLM methods. Then we apply these methods to the model, producing results in Section 3.7, as well as estimating the critical exponent of the magnetization on the lower phase boundary. Lastly, we consider what we have done and offer possible further avenues of research in Section 3.8.

Chapter 4 studies the n-friendly directed walks problem when confined in a horizontal strip. Section 4.1 outlines the model and gives some historical background. In Section 4.2 we show how we can guess the generating functions of the walkers from the first few terms, using Padé approximants. Turning to the problem of generating those terms, we give the general transfer matrix for two walkers in Section 4.3 and a method of recurrences for larger numbers of walkers in Section 4.4. In Section 4.5, we give an idea of our general methodology by proving the one-walker generating function. Then in Section 4.6, we present our main results: we prove the generating functions for two

walkers in a strip of width 3 and 4, three walkers in a strip of width 4, and p vicious walkers in strips of width 2p − 1, 2p and 2p + 1. In Section 4.7 we take a brief look at the growth constants we derive from these results. Next, in Section 4.8 we speculate about the possibility of another parameter in the model, bandwidth. Lastly, in Section 4.9 we reflect on our results and consider further possibilities for research.

Chapter 5 looks at the problem of mean unknotting times. In Section 5.1 we introduce the general methods and ideas of knots and unknotting, and define the problem. In Section 5.2 we then look at the unknotting times of knots with small numbers of crossings. Then we move on to SAPTs and knots. For some background, we give a detailed proof of Kesten's pattern theorem in Section 5.3 and a brief primer on Fourier transforms in Section 5.4. Through considering the relationship of the problem with a walk on an n-cube in Section 5.5, we are able to derive some theoretical bounds on the mean unknotting time in Section 5.6. Then, moving to the practical side of the matter, we describe how we generate self-avoiding polygon trails using the pivot method, and justify its validity in Sections 5.7 and 5.8. We use the method to estimate the mean unknotting time of self-avoiding polygon trails of fixed length in Section 5.9. Finally we discuss results and further options in Section 5.10.

2. THE CORNER TRANSFER MATRIX METHOD

2.1 Introduction

In the study of the physical properties of magnets, one can approach the problem from either a macroscopic direction or a microscopic direction. We will use the latter approach; in this case, we think of the magnet as being composed of a large number of magnetic atoms (which it is). These atoms are given fixed positions in space, and each interacts magnetically with every other atom to produce an energy in the system, with this interaction diminishing or even disappearing at long distances. For example, they may be in a 3-dimensional cubic lattice such as that shown in Figure 2.1. This is the basis for the statistical mechanical models of a magnet; using these models, we seek to find physical properties of the entire system in terms of the atomic interactions.

In the most general case, we represent each atom by its magnetic component, or spin. The actual numerical representation of the spin varies depending on the model used: generally it is represented by a single vector. This is often simplified in models to a single number, or even just a number which can only take two values. Now, given this representation, every possible configuration of spins must have a certain probability of occurring. On the other hand, they may not all be equally likely; indeed, some configurations may well be impossible.

To allocate probabilities to each configuration, we look at the various interactions acting in the system. Again the number and nature of these interactions vary according to the model, but they often include interactions such as magnetic field(s) and atom-pair interactions. Taken as a whole, these interactions produce an energy for each possible configuration. We call this energy the Hamiltonian of the configuration.

Suppose there are N spins in our model. We denote the spin values by $\sigma_1, \sigma_2, \ldots, \sigma_N$,

Fig.
2.1: A 3-dimensional cubic lattice with atoms at each vertex.

and symbolise the Hamiltonian as $H(\sigma_1, \sigma_2, \ldots, \sigma_N)$. Under the so-called Gibbs canonical distribution, the probability that the spins will take the particular values $\sigma_1, \sigma_2, \ldots, \sigma_N$ is defined to be proportional to the exponential of a multiple of the Hamiltonian:
$$P(\sigma_1, \sigma_2, \ldots, \sigma_N) = \frac{1}{Z_N} e^{-\beta H(\sigma_1, \sigma_2, \ldots, \sigma_N)} \quad (2.1)$$
where $\beta = \frac{1}{kT}$, in which k is Boltzmann's constant and T is the temperature of the system. We call $e^{-\beta H}$ the Boltzmann weight of the system. The multiple of the Hamiltonian in the Boltzmann weight is negative, so configurations with lower energies are more likely to occur. The factor of $\frac{1}{Z_N}$ is a normalising factor, which is inserted so that the sum of the probabilities of all configurations is 1. Therefore $Z_N$ has the value
$$Z_N = \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} e^{-\beta H(\sigma_1, \sigma_2, \ldots, \sigma_N)} \quad (2.2)$$
where the summation is over all possible values of all spins. We call $Z_N$ the partition function of the model. The partition function is very important for statistical mechanical models, as it contains all the information about the physical properties of the magnet. Because of this, if we can find a closed form expression for the partition function of a model, we call that model solved.

For example, the mean energy of the system is the expected value of the Hamiltonian over all possible spin configurations, under the Gibbs canonical distribution. This means that it can be calculated as
$$E = \langle H(\sigma_1, \ldots, \sigma_N) \rangle = \sum_{\sigma_1, \ldots, \sigma_N} H(\sigma_1, \ldots, \sigma_N) P(\sigma_1, \ldots, \sigma_N) = -\frac{1}{Z_N} \frac{\partial Z_N}{\partial \beta} = -\frac{\partial}{\partial \beta} \ln Z_N. \quad (2.3)$$
Here, and from now on, we use angle brackets to denote expectation over all the spin configurations. The energy of the system can be used to find the thermodynamical quantities; from the equation (taken from [128, Section 2-5])
$$E = \Psi + TS \quad (2.4)$$
where Ψ is the Helmholtz free energy and $S = -\partial \Psi / \partial T$ is the entropy of the system, we derive
$$\Psi = -kT \ln Z_N. \quad (2.5)$$
As the number of spins N increases to infinity, this becomes infinite. To have a measure of the

free energy in this limit, which is called the thermodynamic limit, we divide the free energy by N, and then take the limit $N \to \infty$. This gives the free energy per site
$$\psi = -kT \lim_{N \to \infty} \frac{1}{N} \ln Z_N. \quad (2.6)$$
As another example, suppose we wish to find how strong the magnet is, as a whole. We call this property the magnetization of the magnet. If all the spins are aligned, the magnetization will be maximised; on the other hand, if they all point in random directions, or if half the spins point in an opposite direction to the other half, then the system will not be magnetized at all. To give a value to the strength of the magnet if the spins have the values $\sigma_1, \sigma_2, \ldots, \sigma_N$, we sum all the spins in the configuration:
$$\text{Magnetic strength} = \sum_{i=1}^{N} \sigma_i. \quad (2.7)$$
Of course, the system may be in any configuration with varying probabilities, so we define the magnetization to be the expected value of the magnetic strength of the system:
$$M = \left\langle \sum_{i=1}^{N} \sigma_i \right\rangle = \frac{1}{Z_N} \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} \sum_{i=1}^{N} \sigma_i \, e^{-\beta H(\sigma_1, \sigma_2, \ldots, \sigma_N)}. \quad (2.8)$$
Again, the magnetization is directly proportional to N, so to find a value for it in the thermodynamic limit, we will divide by N and take the thermodynamic limit to find the magnetization per site. For most models, the system is translationally invariant, which means that taken by itself, each spin in the system is indistinguishable from any other. If this is true, the expected value of any spin is equal to that of any other. Then the magnetization per site can be expressed as
$$m = \lim_{N \to \infty} \frac{1}{N} \left\langle \sum_{i=1}^{N} \sigma_i \right\rangle = \langle \sigma_i \rangle \quad (2.9)$$
where i is arbitrary. To express this in terms of the partition function, we look at the effect of an external magnetic field on the system. Such a field would affect indistinguishable spins equally, and in many models, this effect is expressed by a term of $-H \sum_i \sigma_i$ in the Hamiltonian, where H is the strength of the external magnetic field. If this is the case, it is possible to express

Fig. 2.2: A square lattice.

the magnetization per site in terms of the free energy per site:

\[
\begin{aligned}
-\frac{\partial \psi}{\partial H} &= kT \lim_{N \to \infty} \frac{1}{N} \frac{\partial}{\partial H} \ln Z_N = kT \lim_{N \to \infty} \frac{1}{N Z_N} \frac{\partial Z_N}{\partial H} \\
&= kT \lim_{N \to \infty} \frac{1}{N Z_N} \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} \Big( \beta \sum_i \sigma_i \Big) e^{-\beta H(\sigma_1, \sigma_2, \ldots, \sigma_N)} \\
&= \lim_{N \to \infty} \frac{1}{N} \Big\langle \sum_{i=1}^{N} \sigma_i \Big\rangle = m
\end{aligned} \tag{2.10}
\]

and therefore m is another quantity which can be found from the partition function. However, in practice we often find it from Equation 2.9.

In general, it is difficult to solve such generalised models as the ones above, especially for systems with a large or infinite number of spins. Even making some simplifying assumptions still results in hard-to-solve models. One famous example of a solved model is the Ising model, or more precisely the square lattice spin-1/2 Ising model, although it is only solved in zero field and in two dimensions or less. In fact, this model, which was introduced by Ising in 1925 ([72]) as a one-dimensional model, is one of the most studied models in statistical mechanics.

In this model, each of the N spins is situated on a vertex of a very large square lattice. Each of the vertices must contain a spin, as shown in Figure 2.2. Each spin is represented by a single variable, which can only take the values −1 and 1. The model takes into account two interactions: an external magnetic field, of strength H, and a spin-spin interaction of strength J. The former interaction acts independently and equally on all spins; each spin that has value 1 contributes −H to the Hamiltonian, while each spin that has value −1 contributes H. Since a lower energy is more likely to occur, this means that the stronger (more positive) H is, the more likely spins are to take the value 1. The latter interaction acts on all pairs of nearest-neighbor spins, i.e. spins that are

Fig. 2.3: V is a matrix which transfers a column of spins.

immediately adjacent to each other on the lattice. Each nearest-neighbor pair of spins that have the same value contributes −J to the energy, while each nearest-neighbor pair with different-valued spins contributes J. Again, the larger J is, the more likely nearest-neighbor pairs are to have the same spin. As these two interactions are the only ones taken into account in the Ising model, the Hamiltonian is

\[ H(\sigma_1, \sigma_2, \ldots, \sigma_N) = -J \sum_{\langle i,j \rangle} \sigma_i \sigma_j - H \sum_i \sigma_i \tag{2.11} \]

where the first sum is over all pairs of spins i and j which are adjacent to each other on the lattice. H defines the model, as the partition function can be expressed in terms of it.

The partition function of the zero-field (H = 0) version of the Ising model was calculated in a landmark paper by Onsager in 1944 ([114]), by using the notion of transfer matrices. The transfer matrix for a model on a square lattice is a matrix which has one row/column corresponding to every possible configuration of spin states along one column of the lattice. Each of its elements is the Boltzmann weight of a single column with its spins fixed. As such, when we multiply a vector containing the weights of part of the lattice by the transfer matrix, the net effect is of adding on the weight of a single column, as illustrated in Figure 2.3. It turns out that this reduces the problem to that of finding the largest eigenvalue of an infinite-dimensional transfer matrix. We will give more details about transfer matrices in Section 2.2. Onsager was able to find the eigenvalue of the transfer matrix, resulting in a solution for the zero-field Ising model. He also stated, without proof, an expression for the magnetization in zero field (also called the spontaneous magnetization), but the proof was first published by Yang in 1952 ([144]).
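The Hamiltonian of Equation 2.11 is simple to evaluate directly on a small lattice. The following sketch (the 3 × 3 toroidal lattice and the coupling values are illustrative assumptions, not values from the text) checks that the fully aligned configuration minimises the energy when J > 0 and H > 0:

```python
import itertools

def ising_hamiltonian(spins, J, H):
    # Equation (2.11) on an n x n torus: the first sum runs over all
    # nearest-neighbour pairs (each bond counted once, via the right and
    # down neighbours), the second over all sites.
    n = len(spins)
    pair_sum = sum(spins[r][c] * spins[(r + 1) % n][c] +
                   spins[r][c] * spins[r][(c + 1) % n]
                   for r in range(n) for c in range(n))
    site_sum = sum(spins[r][c] for r in range(n) for c in range(n))
    return -J * pair_sum - H * site_sum

# All spins aligned at +1 minimise the energy when J > 0 and H > 0.
n, J, H = 3, 1.0, 0.5
aligned = [[1] * n for _ in range(n)]
energies = [ising_hamiltonian([list(row) for row in zip(*[iter(cfg)] * n)], J, H)
            for cfg in itertools.product([-1, 1], repeat=n * n)]
assert min(energies) == ising_hamiltonian(aligned, J, H)
```

With 2 bonds per site on the torus, the aligned energy here is −J·2n² − H·n² = −22.5.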
However, the more general case of arbitrary field remains unsolved, despite all efforts since that time. So we see that even a relatively simple model, with only two interactions, can be very tricky to solve indeed.

However, even though solving a model exactly is obviously most desirable, it is not the only way to obtain information about the properties of the magnet. Another possible way of obtaining information is by numerical calculation. To do this for the Ising model, we

give numerical values to the interaction strengths J and H, instead of leaving them as variables. Then we use various numerical methods to calculate the partition function, or any other wanted properties, at those values. One such method that can be used is the finite lattice method (which we will discuss later in this section). Another is the corner transfer matrix method, which is the subject of this chapter.

Another, more sophisticated, way that we can gain information about properties of the model, without actually solving it, is to use series expansions. With this technique, we express the partition function (or other quantities of interest) as a series in terms of a variable or variables. These variables depend on the strength of the interactions. We can usually calculate the first few terms of the series of interest exactly.

To illustrate, we define the variables u = e^{−2βJ} and μ = e^{−2βH}. Then we can manipulate the partition function of the Ising model in the following way:

\[
\begin{aligned}
Z_N &= \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} \exp\Big( \beta J \sum_{\langle i,j \rangle} \sigma_i \sigma_j + \beta H \sum_i \sigma_i \Big) \\
&= \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} \prod_{\langle i,j \rangle} e^{\beta J \sigma_i \sigma_j} \prod_i e^{\beta H \sigma_i} \\
&= e^{2N\beta J + N\beta H} \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} \prod_{\langle i,j \rangle} u^{(1 - \sigma_i \sigma_j)/2} \prod_i \mu^{(1 - \sigma_i)/2} \\
&= e^{2N\beta J + N\beta H} \Big( 1 + N u^4 \mu + 2N u^6 \mu^2 + \frac{N(N-5)}{2} u^8 \mu^2 + \cdots \Big).
\end{aligned} \tag{2.12}
\]

Because the spins can only take the values −1 or 1, the expression (1 − σ_i)/2 will be 0 when σ_i is 1, and 1 when σ_i is −1. So the power of μ in the term arising from a specific configuration counts the number of spins in that configuration which have value −1. Similarly, the power of u in the term arising from a configuration counts the number of nearest-neighbor bonds where the spins are unequal. Since the result is a series about the point u = μ = 0, this series expansion is called the low-temperature expansion: when the temperature is low, β is large and the variables u and μ are small. We can also think of it as a high-field expansion: when the external field strength is high, μ will approach 0.
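The rewriting above is easy to confirm numerically. The sketch below (the 3 × 3 toroidal lattice and the parameter values are illustrative assumptions) computes Z_N both directly from the Boltzmann weights and from the u, μ form on the third line of Equation 2.12, and checks that the two agree:

```python
import itertools, math

def z_direct(n, beta, J, H):
    # Partition function of an n x n toroidal Ising lattice, summed directly.
    sites = [(r, c) for r in range(n) for c in range(n)]
    total = 0.0
    for cfg in itertools.product([-1, 1], repeat=n * n):
        s = dict(zip(sites, cfg))
        e = sum(s[(r, c)] * (s[((r + 1) % n, c)] + s[(r, (c + 1) % n)])
                for r, c in sites)
        total += math.exp(beta * J * e + beta * H * sum(cfg))
    return total

def z_expansion(n, beta, J, H):
    # Equation (2.12): Z_N = e^{2N beta J + N beta H} times the sum over
    # configurations of u^{unlike bonds} mu^{down spins},
    # with u = e^{-2 beta J} and mu = e^{-2 beta H}.
    u, mu = math.exp(-2 * beta * J), math.exp(-2 * beta * H)
    N = n * n
    sites = [(r, c) for r in range(n) for c in range(n)]
    total = 0.0
    for cfg in itertools.product([-1, 1], repeat=N):
        s = dict(zip(sites, cfg))
        unlike = sum((s[(r, c)] != s[((r + 1) % n, c)]) +
                     (s[(r, c)] != s[(r, (c + 1) % n)]) for r, c in sites)
        down = sum(v == -1 for v in cfg)
        total += u ** unlike * mu ** down
    return math.exp(2 * N * beta * J + N * beta * H) * total

beta, J, H = 0.4, 1.0, 0.3
assert abs(z_direct(3, beta, J, H) / z_expansion(3, beta, J, H) - 1) < 1e-9
```

The two sums group the same 2^N terms differently, so they agree to machine precision.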
The first few terms in the series can be derived by inspection: there is only one configuration with no powers of u or μ, i.e. the configuration where the spins are all 1 (and therefore all nearest-neighbor bonds have equal spins, also known as like neighbors). The next term can be achieved by setting all the spins to 1 and then flipping one spin. As there are N spins, there are N ways to do this. In this new arrangement, there is exactly one spin with value −1 (which therefore contributes a factor of μ), and exactly four unlike nearest-neighbor bonds (which each contribute a factor of u), since we are on a square lattice. We show this in Figure 2.4(a). The next couple of terms come from setting exactly two spins to have value −1. If the

Fig. 2.4: Calculating the first few terms of the low-temperature series for the Ising model partition function. (a) One negative spin. (b) Two adjacent negative spins. (c) Two separated negative spins. Hollow circles denote spins with value −1, and dashed bonds denote unlike bonds.

two spins are adjacent, which can happen in 2N ways, there will be 6 unlike bonds (the bond between the spins themselves contains two identical spins of −1). This is shown in Figure 2.4(b). On the other hand, if they are not adjacent, there will be 8 unlike bonds. This can happen in N(N−5)/2 ways, since having chosen the first spin, there are 5 spins which cannot be chosen (four are adjacent to the first spin and one is the spin itself). We can continue in this way to derive the first few terms of Z_N, but the rapid increase in possible configurations means that we cannot derive more than a few terms by hand.

As can be seen, the partition function becomes infinite in the thermodynamic limit. So we can take its Nth root in the thermodynamic limit to get the partition function per site, denoted by κ:

\[ \kappa = \lim_{N \to \infty} Z_N^{1/N} = e^{2\beta J + \beta H} \big( 1 + u^4 \mu + 2 u^6 \mu^2 - 2 u^8 \mu^2 + \cdots \big). \tag{2.13} \]

For small u and μ, the first few terms of the series provide a good approximation to the partition function. Of course, as the variables become larger, this approximation becomes worse and worse. Thus we would like to generate as many terms of the series as possible. Obviously, just counting the numbers of various configurations rapidly becomes infeasible. If we want longer series, we must use more sophisticated methods than direct enumeration. Again, both the finite lattice method and the corner transfer matrix method provide such methods.
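The configuration counts quoted above (N, 2N, and N(N−5)/2) can be checked by exhaustive enumeration on a small toroidal lattice. A minimal sketch, assuming a 4 × 4 torus (large enough that no one- or two-spin configuration interferes with itself through the periodic boundary):

```python
import itertools
from collections import Counter

def low_temp_counts(n):
    # Classify every configuration of an n x n toroidal lattice by the pair
    # (number of unlike nearest-neighbour bonds, number of -1 spins).
    counts = Counter()
    for cfg in itertools.product([-1, 1], repeat=n * n):
        unlike = 0
        for r in range(n):
            for c in range(n):
                s = cfg[r * n + c]
                unlike += s != cfg[((r + 1) % n) * n + c]   # bond below
                unlike += s != cfg[r * n + (c + 1) % n]     # bond to the right
        counts[(unlike, cfg.count(-1))] += 1
    return counts

counts = low_temp_counts(4)                  # N = 16 spins on a 4 x 4 torus
N = 16
assert counts[(0, 0)] == 1                   # all spins +1: the leading term
assert counts[(4, 1)] == N                   # one flipped spin: N u^4 mu
assert counts[(6, 2)] == 2 * N               # adjacent flipped pair: 2N u^6 mu^2
assert counts[(8, 2)] == N * (N - 5) // 2    # separated pair: N(N-5)/2 u^8 mu^2
```

Each key of `counts` corresponds to one monomial u^{unlike} μ^{down} of Equation 2.12, and its value is the monomial's coefficient.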
Up to now, the most commonly used method for finding either numerics or series for these models has been the finite lattice method (or FLM). This method was developed by de Neef in his PhD thesis in 1975 ([40]) and by de Neef and Enting in 1977 ([41]). At its core, the method exploits the relation between the partition function per site and a number of partition functions of small finite lattices. For instance, from [50] (which is also a review of the FLM), on the square lattice we have

\[ \kappa \approx \prod_{i=1}^{n-1} Z_{i,n-i} \prod_{i=1}^{n-2} Z_{i,n-1-i}^{-3} \prod_{i=1}^{n-3} Z_{i,n-2-i}^{3} \prod_{i=1}^{n-4} Z_{i,n-3-i}^{-1} \tag{2.14} \]

Fig. 2.5: A corner transfer matrix incorporates the weight of a quarter of the plane.

where Z_{i,j} is the partition function of a finite rectangular lattice with i rows and j columns. If this relation is used in series calculations, the first few terms of the right-hand side will be exactly the same as the corresponding terms on the left. The number of these terms is dependent on n: the larger n is, the better the approximation. To find the partition functions for the finite lattices, a method based on transfer matrices is used.

In general, the finite lattice method is an exponential time algorithm, i.e. to generate exactly n terms in a desired series, the finite lattice method takes approximately α^n time for some α. The method has since been optimised by Jensen, Guttmann and Enting ([76] and [77]), who have reduced α by a fair amount. However, the method is unlikely to ever become more efficient than exponential time. We give more details of the finite lattice method in Section 3.2.

The corner transfer matrix method is potentially much more efficient than the finite lattice method, but unfortunately its full potential has never really been fulfilled. It was developed originally by Baxter ([12]) and Baxter and Enting ([19]) in two papers, the first in 1978. It is based on the idea of transfer matrices, but with a twist. Whereas a transfer matrix transfers one column of spins to another, while adding the weight of the column in between, the corner transfer matrix transfers half a column of spins (which is a column of spins which extends to infinity in only one direction) into half a row of spins, or vice versa, where the column and row are joined at the end, as shown in Figure 2.5. This adds the weight of a quarter of the plane. Using corner transfer matrices, and a lot of manipulation, Baxter derived a set of equations that he called the CTM equations.
The solution of these equations will yield the partition function per site; the main difficulty is that they are equations in infinite-dimensional matrices. However, limiting the matrices to finite size gives an approximate solution which can then be converted into an approximation for κ. Furthermore, if we use series, this approximation gives the first few series terms of κ.

These equations apply to a certain class of models which are known as interaction round a face (IRF) models. These are models where all interactions can be described by their effect on a single cell; in the case of the square lattice, a cell is a single square. The most general IRF model has a different interaction for every different combination of spins in the cell. We show this in Figure 2.6. For example, the Ising model is an IRF model, because its external field interaction

Fig. 2.6: IRF models can be described solely by their effect on a single cell. (a) One- and two-spin interactions. (b) Two three-spin interactions. (c) The other two three-spin interactions. (d) Four-spin interaction. All of these interactions apply at the same time.

applies to single spins (which always lie within a single cell) and its spin-pair interaction applies to nearest-neighbor pairs of spins (which, again, always lie within a single cell). In terms of the most general model, the Ising model can be described by J_1 = J_2, with J'_1 = J'_2 = K_1 = K_2 = K_3 = K_4 = L = 0. An example of a model which is not IRF is any model on the square lattice which has an interaction between spins which are two units apart (counting the side of a cell as one unit).

The corner transfer matrix approach was actually used as far back as 1968, when Baxter ([9]) applied a similar method to find numerics for a dimerization problem. In 1976, Kelland ([82]) also used a similar idea to find numerical data for the Potts model. However, it was not until the landmark paper in 1978 ([12]) that the CTM equations began to be used for finding series. In this paper, Baxter gave an iterative method for solving the CTM equations up to a certain order. Unfortunately this general method (involving, among other things, finding eigenvalues and eigenvectors of matrices with series elements) was not extremely efficient. In [19], a new, faster method was proposed for the square lattice Ising model, which worked very well indeed. This method was used to generate low-temperature expansions in this paper; in a subsequent paper Enting and Baxter ([51]) also found the high-field expansion at certain temperatures up to order 35. However, the method took advantage of Ising model properties that could not be transferred to other models.

In two further papers ([20] and [22]), various algorithms based on the CTM equations were applied to the hard squares model and the hard hexagons model. In the case of the hard squares model, so many series terms were derived for the partition function that until today, the number of terms has not been bettered, even with access to enormously more powerful computers than those of 25 years ago!
For the hard hexagon model, even better was achieved: because of eigenvalue redundancy in the matrices, the model was actually solved exactly (in [13]). In 1999 ([18]), Baxter returned to these models, calculating numerical values for the partition functions at z = 1 (where z is the fugacity). He managed to calculate 43 decimal places for the hard squares model, and 39 for the equivalent model on the hexagonal lattice.

The corner transfer matrix ideas and methods were also applied to a few other models: Baxter applied them to the chiral Potts model in the early 90s ([15], [17], and [16]), the 8-vertex model in 1977 ([10] and [11]), and in 1984 Baxter and Forrester ([21]) applied the idea to the 3-dimensional Ising model for one-dimensional matrices. However, because the method had to be modified in different ways for each different model, it was difficult to apply it to a wide range of models. Furthermore, it was much easier to implement the better-understood finite lattice method. So despite having the potential to generate large numbers of series coefficients, the CTM methods were not widely used.

More recently, in 1996 Nishino and Okunishi ([105], [111] and [106]) combined the corner transfer matrix idea with another numerical matrix method, the density matrix renormalization group method. This method was invented by White in 1992 ([139] and [140]) for one-dimensional quantum lattice models, and applied to two-dimensional models by Nishino in 1995 ([103]). A review can also be found in [121]. These two methods were combined to produce the corner transfer matrix renormalization

Fig. 2.7: A one-dimensional lattice.

group method, or CTMRG for short. To do this, Nishino and Okunishi stripped away Baxter's CTM equations and worked on the matrices which were in the equations. The result was a method which was very efficient, but was applied only for numerical results (rather than for generating series). In [105], they applied the method to the Ising model as an illustration. In [106], a few calculations were made for q = 2, 3 Potts models.

In further papers, the CTMRG was (numerically) applied to the q = 5 Potts model ([108]), the 3D Ising model ([109]), the spin-3/2 Ising model ([131]), and the two-layer Ising model ([93]). The eigenvalue distribution of the CTM matrices was also studied in [113]. In [107], [110], [104], and [59], the method was also converted to 3-dimensional lattices, by expressing the variational state as a product of tensors. Foster and Pinettes ([58] and [57]) also used this method, applying the CTMRG to self-avoiding walks, which can be expressed as IRF models. However, they too did not use the method to generate series.

In this chapter we present our attempts to find a general method, based on the CTM equations, that will yield results for both series and numerical calculations. In Section 2.2, we give some background on transfer matrices. After some background in Section 2.3, we re-derive Baxter's CTM equations in Section 2.4. In Section 2.5 we show why the CTM equations provide the solution to the model. In Section 2.6, we give an alternate method of calculating the partition function and other quantities if not all variables are known. In Section 2.7 we solve the equations in a simple case. Next, in Section 2.8, we give a short background on some matrix algorithms. In Section 2.9, we detail an iterative method of solving the CTM equations, and discuss its convergence properties and the results we have obtained with it.
In Section 2.10, we detail another method of solving the CTM equations, based on the CTMRG method. We also discuss convergence properties and the results we have obtained. Finally, in Section 2.11, we look back at what we have done and consider some future avenues of research.

2.2 Transfer matrices

As the corner transfer matrix method is based on (unsurprisingly) corner transfer matrices, it would be handy to have a brief introduction to the theory of (regular) transfer matrices. This grounding will also come in handy when we explain the finite lattice method in Section 3.2.

To illustrate this concept, we will solve the 1-dimensional (spin-1/2) Ising model. This model is identical to the square-lattice Ising model, except that the spins are situated evenly on a single line (hence 1-dimensional). This means that each spin has 2 nearest neighbors. We show part of this lattice in Figure 2.7. We take the number of spins to be N, and label the spins from 1 to N, starting at the left-most spin and moving from left to right. We denote the value of spin i by σ_i. Again, σ_i

can only take the values −1 and 1. There are still two interactions: an external field interaction of strength H, and a spin-pair interaction which acts on nearest-neighbor pairs with strength J. We will assume a circular boundary condition, so that we identify the spins N + 1 and 1 as the same spin.

Similarly to the 2-dimensional Ising model, the Hamiltonian for a configuration of spins with values σ_1, σ_2, ..., σ_N is

\[ H(\sigma_1, \sigma_2, \ldots, \sigma_N) = -H \sum_i \sigma_i - J \sum_{\langle i,j \rangle} \sigma_i \sigma_j = -\sum_{i=1}^{N} (H \sigma_i + J \sigma_i \sigma_{i+1}). \tag{2.15} \]

Equivalently, the partition function is

\[ Z_N = \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} \prod_{i=1}^{N} e^{\beta H \sigma_i} e^{\beta J \sigma_i \sigma_{i+1}}. \tag{2.16} \]

Notice that this partition function is broken down into a product of contributions from each spin and its neighbor on the right, the one-dimensional equivalent of a cell. But what is the individual contribution from one cell? Naturally, this depends on what values the spins in the cell take. Thus we can set up a matrix of contributions, such that each element corresponds to a particular set of values of spins in the cell; more precisely, we define the matrix V so that

\[ V_{\sigma_i, \sigma_{i+1}} = e^{\beta \frac{H}{2} (\sigma_i + \sigma_{i+1})} e^{\beta J \sigma_i \sigma_{i+1}} \tag{2.17} \]

is the contribution to the partition function from a cell with spin values σ_i and σ_{i+1}. Note that we divide the external field interaction by half, because each single spin belongs to 2 different cells. Splitting the term in this way ensures that the transfer matrix is symmetric. Writing out this matrix, with rows indexed by σ_i = 1, −1 and columns by σ_{i+1} = 1, −1, gives us

\[ V = \begin{pmatrix} e^{\beta H} e^{\beta J} & e^{-\beta J} \\ e^{-\beta J} & e^{-\beta H} e^{\beta J} \end{pmatrix}. \tag{2.18} \]

Now we can break down Z_N into the individual contributions from each cell:

\[
\begin{aligned}
Z_N &= \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} \prod_{i=1}^{N} e^{\beta H \sigma_i} e^{\beta J \sigma_i \sigma_{i+1}} = \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} \prod_{i=1}^{N} e^{\beta \frac{H}{2}(\sigma_i + \sigma_{i+1})} e^{\beta J \sigma_i \sigma_{i+1}} \\
&= \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} \prod_{i=1}^{N} V_{\sigma_i, \sigma_{i+1}} = \sum_{\sigma_1, \sigma_2, \ldots, \sigma_N} V_{\sigma_1, \sigma_2} V_{\sigma_2, \sigma_3} \cdots V_{\sigma_N, \sigma_{N+1}} \\
&= \sum_{\sigma_1} (V^N)_{\sigma_1, \sigma_{N+1}} = \sum_{\sigma_1} (V^N)_{\sigma_1, \sigma_1} = \operatorname{Tr} V^N
\end{aligned} \tag{2.19}
\]

since we identify spin N + 1 with spin 1. From this equation, it can be seen why V is called the transfer matrix: if we were to calculate Z_N by multiplying the contributions of each cell in turn from left to right, then at any one time, multiplying by the appropriate element of V would add the contribution of the current cell, thus transferring the calculation one cell to the right.

Now the trace of a matrix is the sum of the eigenvalues of that matrix. If we let the eigenvalues of V be λ_1 and λ_2, where |λ_1| ≥ |λ_2|, then the eigenvalues of V^N are λ_1^N and λ_2^N. Then we can calculate the partition function per site in the thermodynamic limit:

\[ \kappa = \lim_{N \to \infty} Z_N^{1/N} = \lim_{N \to \infty} (\operatorname{Tr} V^N)^{1/N} = \lim_{N \to \infty} (\lambda_1^N + \lambda_2^N)^{1/N} = \lim_{N \to \infty} \lambda_1 \Big( 1 + \Big( \frac{\lambda_2}{\lambda_1} \Big)^N \Big)^{1/N} = \lambda_1. \tag{2.20} \]

This is why the transfer matrix is so useful: instead of calculating an infinite partition function and then taking the Nth root as N → ∞, all we need to do is merely find the largest eigenvalue of the transfer matrix. For the one-dimensional Ising model, the transfer

Fig. 2.8: One-dimensional transference.

matrix is only 2 × 2, and therefore the eigenvalue is easily found:

\[ \kappa = \lambda_1 = e^{\beta J} \cosh \beta H + \sqrt{e^{2\beta J} \sinh^2 \beta H + e^{-2\beta J}}. \tag{2.21} \]

For other models, the transfer matrix is set up in a similar way. The idea is that the transfer matrix represents a section of the lattice which has the property that by adding that section to itself over and over again (in one way only), eventually the entire lattice will be covered. This is easy to achieve for a one-dimensional lattice, as the transfer matrix represents one cell, and the lattice is composed entirely of cells laid end-to-end in one direction.

Another way we can consider this is to think of a cut in the lattice. By cutting the lattice on a line perpendicular to the lattice, we generate a cut which (for this lattice) consists of one spin. The transfer matrix then transfers a cut of one site to an equivalent cut one site to the right while adding the weight of the cell in between, as shown in Figure 2.8.

For the two-dimensional square lattice, we cut the lattice with a vertical line. This means that the transfer matrix represents one column of the lattice, as shown in Figure 2.9. If the lattice consists of m rows and n columns of spins, then the cut will have m spins, rather than one. Now, each row and column of V will represent one possible set of values for the m spins on the cut. We say that V is indexed by m spins. In particular, this means that V has dimension 2^m × 2^m. For the Ising model, the element of V corresponding to row (σ_1, σ_2, ..., σ_m) and column (σ'_1, σ'_2, ..., σ'_m) is

\[ V_{(\sigma_1, \sigma_2, \ldots, \sigma_m), (\sigma'_1, \sigma'_2, \ldots, \sigma'_m)} = \prod_{i=1}^{m} e^{\beta \frac{H}{2} (\sigma_i + \sigma'_i)} e^{\beta \frac{J}{2} (\sigma_i \sigma_{i+1} + \sigma'_i \sigma'_{i+1} + 2 \sigma_i \sigma'_i)}. \tag{2.22} \]

This is the contribution to the partition function of a column with the spins σ_1, σ_2, ..., σ_m on the left and σ'_1, σ'_2, ..., σ'_m on the right. Therefore we say that V adds the weight of a column.
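For the one-dimensional model, all of the steps above fit in a few lines of code. A minimal sketch (the numerical values of β, J, H, and N are illustrative assumptions) builds V from Equation 2.17, checks the closed-form eigenvalue of Equation 2.21, and confirms Z_N = Tr V^N (Equation 2.19) against direct enumeration:

```python
import itertools, math

def transfer_matrix(beta, J, H):
    # Equation (2.17): V[s, t] = exp(beta*H*(s+t)/2) * exp(beta*J*s*t),
    # with index 0 standing for spin +1 and index 1 for spin -1.
    spins = [1, -1]
    return [[math.exp(beta * H * (s + t) / 2 + beta * J * s * t)
             for t in spins] for s in spins]

def largest_eigenvalue(V):
    # Closed form for a real symmetric 2 x 2 matrix.
    tr = V[0][0] + V[1][1]
    det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
    return tr / 2 + math.sqrt(tr * tr / 4 - det)

beta, J, H = 0.5, 1.0, 0.2
V = transfer_matrix(beta, J, H)

# Equation (2.21): the maximum eigenvalue in closed form.
exact = math.exp(beta * J) * math.cosh(beta * H) + math.sqrt(
    math.exp(2 * beta * J) * math.sinh(beta * H) ** 2 + math.exp(-2 * beta * J))
assert abs(largest_eigenvalue(V) - exact) < 1e-12

# Equation (2.19): Z_N = Tr V^N, checked by direct enumeration for N = 10.
N = 10
Z = sum(math.exp(sum(beta * H * s[i] + beta * J * s[i] * s[(i + 1) % N]
                     for i in range(N)))
        for s in itertools.product([-1, 1], repeat=N))
W = [[1, 0], [0, 1]]
for _ in range(N):
    W = [[sum(W[i][k] * V[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
assert abs(W[0][0] + W[1][1] - Z) / Z < 1e-10
```

For larger N the trace is dominated by λ_1^N, which is the content of Equation 2.20.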
Again, by building up one column at a time, we can eventually cover the entire lattice. Then we can proceed as before, except that the transfer matrix V will have dimension 2^m × 2^m, and therefore 2^m eigenvalues. However, the calculations stay mostly the same, with the one exception being that the power of V is the number of columns in the lattice rather

Fig. 2.9: Two-dimensional transference.

than the number of sites:

\[ Z_{m,n} = \operatorname{Tr} V^n. \tag{2.23} \]

There is a possibility that there may be no one maximum eigenvalue for V, i.e. the maximum eigenvalue may be degenerate. This does not affect the calculations, however, as Tr V^n would still grow like λ_1^n (albeit with a constant multiple). We can then continue with the calculations as above (with at most 2^m − 1 non-dominant eigenvalues instead of 1) to reach the relation

\[ \kappa = \lambda_1^{1/m}. \tag{2.24} \]

So again we only need to find the maximum eigenvalue of the transfer matrix to solve the model. It is worth noting that although the transfer matrices in this section have been column transfer matrices, it is entirely possible to have transfer matrices which act identically on rows. This makes them row transfer matrices.

2.3 A variational result

An important result in the development of the CTM equations is proved in the following theorem, which gives an expression for the maximum eigenvalue of a symmetric matrix. It is a part of the Courant-Fischer theorem, which provides an expression for all eigenvalues.

Theorem 2.3.1. If A is an n × n real symmetric matrix with maximum eigenvalue λ_1, then

\[ \lambda_1 = \max_{x \neq 0,\; x^T x = 1} x^T A x = \max_{x \neq 0} \frac{x^T A x}{x^T x}. \tag{2.25} \]

The value of x which maximises the above expression is the eigenvector of A corresponding

to λ_1.

Proof. For the first part, we follow the proof of Wilkinson in [142]. Since A is symmetric, it must be orthogonally diagonalizable, so there exists an orthonormal matrix P such that

\[ P^T A P = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) \tag{2.26} \]

where λ_1, λ_2, ..., λ_n are the eigenvalues of A, arranged in non-increasing order, i.e.

\[ \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n. \tag{2.27} \]

Now let x be an arbitrary n-dimensional vector, and let y = P^T x. Since P is orthonormal, we know that x = P y. Then we have

\[ x^T A x = y^T P^T A P y = y^T \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) y = \sum_{i=1}^{n} \lambda_i y_i^2 \tag{2.28} \]

and

\[ x^T x = y^T P^T P y = \sum_{i=1}^{n} y_i^2. \tag{2.29} \]

Now, by construction, y = 0 if and only if x = 0. Since x is arbitrary,

\[ \max_{x \neq 0,\; x^T x = 1} x^T A x = \max_{y \neq 0,\; \sum_i y_i^2 = 1} \sum_{i=1}^{n} \lambda_i y_i^2 = \lambda_1 \tag{2.30} \]

which occurs when y = (±1, 0, 0, ..., 0)^T. Now if we set x* = P(1, 0, ..., 0)^T, x* corresponds to the maximising y in the above equation, so x* maximises x^T A x when x^T x = 1. Since

\[ \frac{(x^*)^T A x^*}{(x^*)^T x^*} = (x^*)^T A x^*, \tag{2.31} \]

we can immediately say that

\[ \max_{x \neq 0} \frac{x^T A x}{x^T x} \geq \max_{x \neq 0,\; x^T x = 1} x^T A x. \tag{2.32} \]

Suppose, however, that the inequality is strict and we have a vector x_2 such that

\[ \frac{x_2^T A x_2}{x_2^T x_2} > (x^*)^T A x^*. \tag{2.33} \]

Then this implies

\[ \Big( \frac{x_2}{\|x_2\|} \Big)^T A \Big( \frac{x_2}{\|x_2\|} \Big) > (x^*)^T A x^*, \tag{2.34} \]

contradicting the fact that x* maximises x^T A x when x^T x = 1. Therefore the inequality is an equality, and

\[ \max_{x \neq 0,\; x^T x = 1} x^T A x = \max_{x \neq 0} \frac{x^T A x}{x^T x} \tag{2.35} \]

which proves the remaining part of the equation. Now, if x_1 is the eigenvector of A corresponding to λ_1, then

\[ \frac{x_1^T A x_1}{x_1^T x_1} = \frac{x_1^T \lambda_1 x_1}{x_1^T x_1} = \lambda_1 \tag{2.36} \]

so x_1 solves the maximisation problem.

2.4 The CTM equations

The basis of the CTM methods rests on the matrix equations which underlie the method, known as the CTM equations. These equations were first derived by Baxter in [12]. In this section, we will follow in detail Baxter's derivations for the 2-dimensional square lattice.

2.4.1 An expression for the partition function

Firstly, we define the lattice. We assume that we are working on an m × n lattice with toroidal boundary conditions, so that row m + 1 is identified with row 1, and column n + 1 is identified with column 1, as shown in Figure 2.10. Ultimately, we would like to take the thermodynamic limit m, n → ∞, and find the partition function per site κ in this limit. The CTM equations only apply to interaction round a face (IRF) models, so we will assume from now on that the model that we work on is an IRF model.

Let V be the column transfer matrix of the model, as discussed in Section 2.2. Diagrammatically, V transfers a cut along a column by moving it one column to the right, and adding the Boltzmann weight of the column in between. In this manner, multiplying by V repeatedly will eventually give the partition function of the entire lattice, as shown in Figure 2.11. Each column of the lattice has m spins, so the rows and columns of V are indexed by all possible states of these m spins.

Suppose that the largest eigenvalue of V is Λ. Then, as shown in Section 2.2, the limiting partition function per site is related to Λ:

\[ \kappa = \lim_{m,n \to \infty} Z_{m,n}^{1/mn} = \lim_{m \to \infty} \Lambda^{1/m}. \tag{2.37} \]
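The variational result above (Equation 2.25) is easy to check numerically. The sketch below (the matrix size, the random seed, and the use of power iteration are illustrative assumptions, not part of the text's method) estimates λ_1 of a random symmetric matrix and confirms that no random vector's Rayleigh quotient exceeds it:

```python
import random

def rayleigh(A, x):
    # The quotient x^T A x / x^T x from Equation (2.25).
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Ax[i] for i in range(n)) / sum(v * v for v in x)

def power_iteration(A, steps=500):
    # Dominant eigenpair of a symmetric matrix with positive entries:
    # repeated multiplication aligns x with the eigenvector of lambda_1.
    n = len(A)
    x = [1.0] * n
    for _ in range(steps):
        x = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(abs(v) for v in x)
        x = [v / norm for v in x]
    return rayleigh(A, x), x

random.seed(0)
n = 5
B = [[random.random() for _ in range(n)] for _ in range(n)]
A = [[B[i][j] + B[j][i] for j in range(n)] for i in range(n)]  # symmetrise

lam1, x1 = power_iteration(A)
# Every Rayleigh quotient is bounded above by lambda_1, and the dominant
# eigenvector attains the bound.
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(n)]
    assert rayleigh(A, x) <= lam1 + 1e-9
```

This is exactly the property exploited in Equation 2.40 below, where A is the column transfer matrix V.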

Fig. 2.10: Toroidal boundary conditions.

Fig. 2.11: Multiple column transfer matrices.

Fig. 2.12: ω gives the weight of a single cell.

Thus, to find κ, we must first find Λ. To find Λ, we break V up into smaller parts. This is possible because the model is an IRF model. We define

\[ \omega \begin{pmatrix} a & b \\ c & d \end{pmatrix} \]

as the Boltzmann weight of the interactions around a single cell, where the spins are fixed to be a in the upper left-hand corner, b in the upper right-hand corner, c in the lower left-hand corner and d in the lower right-hand corner, as shown in Figure 2.12. Importantly, ω is the only variable in the equations which varies when the model changes.

Let us look at an example. In the hard squares model, there are two values a spin can take: 0, which stands for an empty site, and 1, which stands for an occupied site. Note that this is slightly different from the description in Chapter 1, which used spin values of −1 and 1. In this model, we do not allow two occupied sites to be adjacent to each other. We are generally interested in the number of occupied sites in the lattice, or in the thermodynamic limit, the number of occupied sites per site. Therefore we assign each occupied site a weight z in the partition function. Seeing that on the square lattice, each spin resides in 4 distinct cells, the weight of each spin in any one cell is divided by 4. Thus the weight of a single cell is

\[ \omega \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{cases} 0 & \text{if } a = b = 1,\; a = c = 1,\; b = d = 1 \text{ or } c = d = 1 \\ z^{(a+b+c+d)/4} & \text{otherwise.} \end{cases} \tag{2.38} \]

We will use the hard squares model later as an example.

Since ω counts the weight of one face, and each element of V adds the weight of a column

Fig. 2.13: Decomposition of a column matrix into single cells.

of m faces, we can express each element of V as a product of m of these weights:

\[ V_{(\sigma_1, \sigma_2, \ldots, \sigma_m), (\sigma'_1, \sigma'_2, \ldots, \sigma'_m)} = \prod_{i=1}^{m} \omega \begin{pmatrix} \sigma_{i+1} & \sigma'_{i+1} \\ \sigma_i & \sigma'_i \end{pmatrix} \tag{2.39} \]

where the toroidal boundary conditions identify row m + 1 with row 1. Figure 2.13 shows how V breaks down into m cell weights.

For many models, the weight of a configuration is invariant under the symmetries of the lattice. In other words, we can reflect or even rotate configurations and they will still contribute the same amount to the partition function (see Figure 2.14). We call such models undirected. In particular, if the model has reflection symmetry about the vertical axis, then the transfer matrix V is symmetric. We assume that this is the case for our model. Because V is symmetric, we can apply Theorem 2.3.1 to derive an expression for the maximum eigenvalue Λ:

\[ \Lambda = \max_{\psi \neq 0} \frac{\psi^T V \psi}{\psi^T \psi} \tag{2.40} \]

where ψ is a non-zero vector of dimension 2^m. The value of ψ which maximises the right-hand side is the eigenvector of V corresponding to Λ. ψ will be indexed in the same way as V, so there will be one element of ψ for each possible configuration of the m states on the cut.

We can graphically interpret the optimal value of ψ in Equation 2.40 as the partition function of the half plane. Each element ψ_{σ_1, σ_2, ..., σ_m} is the partition function (really the contribution to the partition function) of the left half of the plane, cut along a column, with spins on the cut fixed at the values σ_1, σ_2, ..., σ_m. By interpreting the optimal ψ in this manner, it is easy to see how it is an eigenvector of V, as V adds one column to the half-plane, resulting in another half-plane. We show this in Figure 2.15.

It is difficult to evaluate the maximum in Equation 2.40 as ψ becomes infinite-dimensional in the thermodynamic limit m → ∞. However, it is possible to approximate Λ by using the above equation.
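For the hard squares model, Equations 2.38 and 2.39 give an explicit finite V for any fixed m, and Λ can then be found numerically. A minimal sketch (the choices m = 4, z = 1, and the use of power iteration are illustrative assumptions): it builds V from the cell weights, checks that it is symmetric, and compares Λ^{1/m} with the bulk hard-squares value of about 1.503:

```python
import itertools, math

def omega(a, b, c, d, z):
    # Equation (2.38): the hard-squares cell weight; a, b are the top row
    # of the cell and c, d the bottom row.
    if (a and b) or (a and c) or (b and d) or (c and d):
        return 0.0
    return z ** ((a + b + c + d) / 4)

def column_matrix(m, z):
    # Equation (2.39): each element of V is a cyclic product of m cell
    # weights, with row m+1 identified with row 1.
    states = list(itertools.product([0, 1], repeat=m))
    return [[math.prod(omega(s[(i + 1) % m], t[(i + 1) % m], s[i], t[i], z)
                       for i in range(m))
             for t in states] for s in states]

def dominant_eigenvalue(V, steps=2000):
    # Power iteration on the non-negative matrix V.
    x = [1.0] * len(V)
    lam = 1.0
    for _ in range(steps):
        x = [sum(row[j] * x[j] for j in range(len(x))) for row in V]
        lam = max(x)
        x = [v / lam for v in x]
    return lam

m, z = 4, 1.0
V = column_matrix(m, z)
# V is symmetric, as required for the variational bound of Equation (2.40).
assert all(abs(V[i][j] - V[j][i]) < 1e-12
           for i in range(len(V)) for j in range(len(V)))
kappa_est = dominant_eigenvalue(V) ** (1 / m)
assert 1.45 < kappa_est < 1.55
```

Even at m = 4 the estimate Λ^{1/m} is within a few percent of the bulk constant, illustrating why the finite truncation mentioned earlier still yields useful approximations to κ.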
Instead of maximising the expression over all possible ψ's, we maximise instead over a subset of all possible ψ's that we choose. We take this subset to be all ψ where

Fig. 2.14: Reflection symmetry. The weights of configurations (a) and (b) are identical in undirected models.

Fig. 2.15: At optimality, ψ is an eigenvector of V.

the elements satisfy

$$\psi_{\sigma_1,\sigma_2,\ldots,\sigma_m} = \operatorname{Tr}\left(F(\sigma_1,\sigma_2)F(\sigma_2,\sigma_3)\cdots F(\sigma_m,\sigma_1)\right) \qquad (2.41)$$

where F(a, b) is an arbitrary matrix of dimension 2^p, dependent on the two spins a and b and indexed by all possible values of a set of p spins. The only restriction that we place on F is the condition

$$F^T(a, b) = F(b, a). \qquad (2.42)$$

We will need this condition later to introduce a symmetry into our matrices.

When p = 1, this expression for ψ is equivalent to the ansatz of Kramers and Wannier in [86] and [87]. If p > 1, then the space for ψ generated by this expression includes the spaces generated by all lower p. This follows because replacing each F by the zero-padded block matrix $\left(\begin{smallmatrix} F & 0 \\ 0 & 0 \end{smallmatrix}\right)$ does not change the right hand side of Equation 2.41.

We can interpret the optimal F(a, b) (i.e. the value of F(a, b) that gives the optimal value of ψ) graphically as a half-column or half-row transfer matrix. F(a, b) will take a cut of p spins, ending in a spin of value a, and transfer it one column (or row) to the right or left (down or up), adding the intermediate weight at the same time. The new cut also has p spins, but ends in a spin of value b. In this way it can be seen how the product of m F's becomes the half-plane partition function as p becomes large. We illustrate this in Figure 2.16.

Fig. 2.16: Decomposition of ψ into m F's.

Fig. 2.17: Full-row transfer matrix interpretation of R.

Now define R to be the 2^{2p+1} × 2^{2p+1} matrix (indexed by 2p + 1 spins) with the elements

$$R_{(\lambda,a,\mu),(\lambda',b,\mu')} = F_{\lambda,\lambda'}(a,b)\,F_{\mu,\mu'}(a,b) \qquad (2.43)$$

where λ, λ', μ, and μ' are sets of p spins. At optimality, each element of R is the product of two half-row weights, so the optimal R has a graphical interpretation as a full-row transfer matrix, transferring 2p + 1 spins at a time, as shown in Figure 2.17. Then if we let λ_i and μ_i denote sets of p spins, we have

$$\begin{aligned}
\psi^T \psi &= \sum_{\sigma_1,\ldots,\sigma_m} \left[\operatorname{Tr}\left(F(\sigma_1,\sigma_2)F(\sigma_2,\sigma_3)\cdots F(\sigma_m,\sigma_1)\right)\right]^2 \\
&= \sum_{\sigma_1,\ldots,\sigma_m} \left[\sum_{\lambda_1,\ldots,\lambda_m} F_{\lambda_1,\lambda_2}(\sigma_1,\sigma_2)F_{\lambda_2,\lambda_3}(\sigma_2,\sigma_3)\cdots F_{\lambda_m,\lambda_1}(\sigma_m,\sigma_1)\right]^2 \\
&= \sum_{\sigma_1,\ldots,\sigma_m}\ \sum_{\lambda_1,\ldots,\lambda_m}\ \sum_{\mu_1,\ldots,\mu_m} F_{\lambda_1,\lambda_2}(\sigma_1,\sigma_2)F_{\mu_1,\mu_2}(\sigma_1,\sigma_2)\cdots F_{\lambda_m,\lambda_1}(\sigma_m,\sigma_1)F_{\mu_m,\mu_1}(\sigma_m,\sigma_1) \\
&= \sum_{\sigma_1,\ldots,\sigma_m}\ \sum_{\lambda_1,\ldots,\lambda_m}\ \sum_{\mu_1,\ldots,\mu_m} R_{(\lambda_1,\sigma_1,\mu_1),(\lambda_2,\sigma_2,\mu_2)}R_{(\lambda_2,\sigma_2,\mu_2),(\lambda_3,\sigma_3,\mu_3)}\cdots R_{(\lambda_m,\sigma_m,\mu_m),(\lambda_1,\sigma_1,\mu_1)} \\
&= \operatorname{Tr} R^m.
\end{aligned} \qquad (2.44)$$
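The identity ψᵀψ = Tr Rᵐ of Equation 2.44 can be verified directly at small sizes. In the sketch below (our own construction; the block dimension d and the random seed are arbitrary), R is indexed as (a, λ, μ) rather than (λ, a, μ); this is only a permutation of the basis, so the trace is unaffected.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 4   # dimension of each F(a, b) block; number of spins on the cut

# arbitrary F matrices obeying the symmetry condition F(a,b)^T = F(b,a) (Eq. 2.42)
M0 = rng.standard_normal((d, d))
M1 = rng.standard_normal((d, d))
F = {(0, 0): M0 + M0.T, (1, 1): M1 + M1.T}
F[(0, 1)] = rng.standard_normal((d, d))
F[(1, 0)] = F[(0, 1)].T

# psi from the ansatz of Eq. 2.41
def psi_element(spins):
    P = np.eye(d)
    for k in range(len(spins)):
        P = P @ F[(spins[k], spins[(k + 1) % len(spins)])]
    return np.trace(P)

psi = np.array([psi_element(s) for s in itertools.product((0, 1), repeat=m)])

# R from Eq. 2.43: the (a, b) block is F(a,b) acting on lambda and on mu
nb = d * d
R = np.zeros((2 * nb, 2 * nb))
for a in (0, 1):
    for b in (0, 1):
        R[a*nb:(a+1)*nb, b*nb:(b+1)*nb] = np.kron(F[(a, b)], F[(a, b)])

print(np.isclose(psi @ psi, np.trace(np.linalg.matrix_power(R, m))))  # True
```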

Fig. 2.18: Full-row transfer matrix interpretation of S.

If we define ξ to be the dominant eigenvalue of R, then

$$\lim_{m\to\infty} (\psi^T\psi)^{1/m} = \lim_{m\to\infty} (\operatorname{Tr} R^m)^{1/m} = \xi. \qquad (2.45)$$

Similarly, we define S to be the 2^{2p+2} × 2^{2p+2} matrix with the elements

$$S_{(\lambda,a,b,\mu),(\lambda',c,d,\mu')} = \omega\begin{pmatrix} a & b \\ c & d \end{pmatrix} F_{\lambda,\lambda'}(a,c)\,F_{\mu,\mu'}(b,d). \qquad (2.46)$$

Then it can be proved in an identical way that

$$\psi^T V \psi = \operatorname{Tr} S^m \qquad (2.47)$$

and if we define η to be the dominant eigenvalue of S, then

$$\lim_{m\to\infty} (\psi^T V \psi)^{1/m} = \eta. \qquad (2.48)$$

The optimal S also has an interpretation as a full-row transfer matrix, except that it transfers a row of 2p + 2 spins, as shown in Figure 2.18.

Substituting the expressions found above into the formula for Λ gives

$$\Lambda = \max_{\psi} \frac{\psi^T V \psi}{\psi^T \psi} \geq \max_{F} \frac{\psi^T V \psi}{\psi^T \psi} = \max_{F} \frac{\operatorname{Tr} S^m}{\operatorname{Tr} R^m} \qquad (2.49)$$

and therefore

$$\kappa = \lim_{m\to\infty} \Lambda^{1/m} \geq \max_{F} \frac{\eta}{\xi}. \qquad (2.50)$$

Not only is this a lower limit for κ, but we shall show later that as the dimension of the F matrices increases (as p → ∞), the expression on the right-hand side tends to κ, so at finite p it can be used as an approximation to the partition function per site. In fact, if we are using series, this approximation will also give the first few terms of κ correctly. This is the approximation that the CTM methods use.
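Equation 2.47 can be checked in the same spirit, now with the hard squares ω supplying the cell weights. The sketch below (again our own small-size construction, with arbitrary sizes and seed) builds V from Equation 2.39, ψ from Equation 2.41 and S from Equation 2.46, and confirms ψᵀVψ = Tr Sᵐ.

```python
import itertools
import numpy as np

def omega(a, b, c, d, z):
    # hard squares cell weight (Eq. 2.38)
    if (a == b == 1) or (a == c == 1) or (b == d == 1) or (c == d == 1):
        return 0.0
    return z ** ((a + b + c + d) / 4)

rng = np.random.default_rng(5)
dim, m, z = 2, 3, 0.4

# F matrices obeying F(a,b)^T = F(b,a)
M0, M1 = rng.random((dim, dim)), rng.random((dim, dim))
F = {(0, 0): M0 + M0.T, (1, 1): M1 + M1.T}
F[(0, 1)] = rng.random((dim, dim))
F[(1, 0)] = F[(0, 1)].T

configs = list(itertools.product((0, 1), repeat=m))

def psi_el(s):                       # Eq. 2.41
    P = np.eye(dim)
    for k in range(m):
        P = P @ F[(s[k], s[(k + 1) % m])]
    return np.trace(P)

psi = np.array([psi_el(s) for s in configs])
V = np.array([[np.prod([omega(s[(k+1) % m], t[(k+1) % m], s[k], t[k], z)
                        for k in range(m)])
               for t in configs] for s in configs])   # Eq. 2.39

# S from Eq. 2.46, with the spin pair (a, b) as the leading index
nb = dim * dim
S = np.zeros((4 * nb, 4 * nb))
pairs = list(itertools.product((0, 1), repeat=2))
for i, (a, b) in enumerate(pairs):
    for j, (c, d) in enumerate(pairs):
        S[i*nb:(i+1)*nb, j*nb:(j+1)*nb] = \
            omega(a, b, c, d, z) * np.kron(F[(a, c)], F[(b, d)])

print(np.isclose(psi @ V @ psi, np.trace(np.linalg.matrix_power(S, m))))  # True
```

Note that the check uses the row-swap symmetry of the hard squares ω; for a directed model the two cyclic products of cell weights need not coincide termwise.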

Fig. 2.19: Half-plane transfer matrix interpretation of X.

2.4.2 Eigenvalue equations

Let X be the eigenvector of R corresponding to the eigenvalue ξ. X contains 2^{2p+1} elements, which can be rearranged to form a set of matrices X(a), where a is a spin. Each of these matrices has size 2^p × 2^p (indexed by p spins), and their elements are

$$X_{\lambda,\mu}(a) = X_{(\lambda,a,\mu)}. \qquad (2.51)$$

Then the eigenvalue equation of R and X can be written as

$$\begin{aligned}
\xi X_{\lambda,\mu}(a) = [\xi X]_{(\lambda,a,\mu)} = [RX]_{(\lambda,a,\mu)} &= \sum_{\lambda',b,\mu'} R_{(\lambda,a,\mu),(\lambda',b,\mu')} X_{(\lambda',b,\mu')} \\
&= \sum_{\lambda',b,\mu'} F_{\lambda,\lambda'}(a,b)F_{\mu,\mu'}(a,b)X_{\lambda',\mu'}(b) \\
&= \sum_{\lambda',b,\mu'} F_{\lambda,\lambda'}(a,b)X_{\lambda',\mu'}(b)F_{\mu',\mu}(b,a) \\
&= \left[\sum_b F(a,b)X(b)F(b,a)\right]_{\lambda,\mu}
\end{aligned} \qquad (2.52)$$

and therefore

$$\xi X(a) = \sum_b F(a,b)X(b)F(b,a). \qquad (2.53)$$

This equation holds for all values of the spin a.

Graphically, we can think of the optimal X(a) (resulting from the optimal F(a, b)) as a half-plane transfer matrix, taking a half-row cut of p spins and rotating it around another spin with value a by an angle of π, while adding the Boltzmann weight of all the cells covered. In this way, it can be seen how the above equation holds at optimality: the right hand side merely moves the half-row cut by one row before and after rotation, which is equivalent to adding one full row of 2p + 1 spins (which is exactly what R does). This still results in a half-plane transfer. We illustrate this in Figure 2.19.

Now let Y be the eigenvector of S corresponding to η. As above, Y can be written as

Fig. 2.20: Half-plane transfer matrix interpretation of Y.

a set of 2^p × 2^p matrices Y(a, b). In an identical fashion, but with Y's replacing X's and S's replacing R's, it can be shown that

$$\eta Y(a,b) = \sum_{c,d} \omega\begin{pmatrix} a & b \\ c & d \end{pmatrix} F(a,c)Y(c,d)F(d,b). \qquad (2.54)$$

This has a nearly identical graphical interpretation at optimality, except that the half-row cut is now rotated around two spins, with fixed values a and b. We show this in Figure 2.20.

We note that taking the transpose of Equation 2.53 gives the (eigenvalue) equation

$$\xi X^T(a) = \sum_b F(a,b)X^T(b)F(b,a). \qquad (2.55)$$

This means that the vector corresponding to X^T(a) is also an eigenvector of R corresponding to ξ. Since it contains the same entries as X, it seems reasonable that it is X. Translating this in terms of X(a) gives

$$X^T(a) = X(a) \qquad (2.56)$$

and, similarly, since we are assuming reflection symmetry in our model so that

$$\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \omega\begin{pmatrix} b & a \\ d & c \end{pmatrix},$$

we get

$$Y^T(a,b) = Y(b,a). \qquad (2.57)$$

For optimal matrices, this again makes sense graphically: X^T(a) transfers the same region of the plane as X(a) does, but moves the cut in the opposite direction. The same argument also applies to Y(a, b).

2.4.3 Stationarity

Equations 2.53 and 2.54 provide expressions that we can use to evaluate ξ and η, given F. However, not only do we want to calculate ξ and η, but we must also maximise η/ξ over all possible F's. To achieve this, η/ξ must be stationary with respect to F. This means that at

the optimal point, we must have

$$\frac{\partial(\eta/\xi)}{\partial F} = 0. \qquad (2.58)$$

Here we take the derivative of a scalar with respect to a matrix to mean the matrix formed by differentiating the scalar by each of the matrix elements in turn; in other words,

$$\left[\frac{df(a)}{da}\right]_{i,j} = \frac{df(a)}{da_{i,j}}. \qquad (2.59)$$

Then we need

$$\frac{(\partial\eta/\partial F)\,\xi - \eta\,(\partial\xi/\partial F)}{\xi^2} = 0 \qquad (2.60)$$

which, after rearranging, becomes

$$\frac{\partial\eta/\partial F_{\lambda,\mu}(a,b)}{\partial\xi/\partial F_{\lambda,\mu}(a,b)} = \frac{\eta}{\xi}. \qquad (2.61)$$

Now since X is the eigenvector of R corresponding to ξ, we know that

$$\xi = \frac{X^T R X}{X^T X} = \frac{\sum_{\lambda,a,\mu} X_{(\lambda,a,\mu)}[RX]_{(\lambda,a,\mu)}}{\sum_{\lambda,a,\mu} X^2_{\lambda,\mu}(a)} = \frac{\sum_{\lambda,a,b,\mu} X^T_{\mu,\lambda}(a)\left[F(a,b)X(b)F(b,a)\right]_{\lambda,\mu}}{\sum_{\lambda,a,\mu} X^T_{\mu,\lambda}(a)X_{\lambda,\mu}(a)} = \frac{\sum_{a,b}\operatorname{Tr} X^T(a)F(a,b)X(b)F(b,a)}{\sum_a \operatorname{Tr} X^T(a)X(a)}. \qquad (2.62)$$

Moreover, since

$$R_{(\lambda,a,\mu),(\lambda',b,\mu')} = F_{\lambda,\lambda'}(a,b)F_{\mu,\mu'}(a,b) = F_{\lambda',\lambda}(b,a)F_{\mu',\mu}(b,a) = R_{(\lambda',b,\mu'),(\lambda,a,\mu)}, \qquad (2.63)$$

we know that R is symmetric. Therefore, from Theorem 2.3.1, we know that X maximises the expression X^T R X / X^T X, and so ∂ξ/∂X = 0. Therefore we can take all the elements of X as

constants when differentiating the above expression for ξ by F. This leads to

$$\begin{aligned}
\frac{\partial\xi}{\partial F_{\lambda,\mu}(a,b)} &= \frac{1}{\sum_c \operatorname{Tr} X^2(c)}\,\frac{\partial}{\partial F_{\lambda,\mu}(a,b)} \sum_{a_1,a_2}\ \sum_{\lambda_1,\lambda_2,\lambda_3,\lambda_4} X_{\lambda_1,\lambda_2}(a_1)F_{\lambda_2,\lambda_3}(a_1,a_2)X_{\lambda_3,\lambda_4}(a_2)F_{\lambda_4,\lambda_1}(a_2,a_1) \\
&= \frac{1}{\sum_c \operatorname{Tr} X^2(c)} \sum_{a_1,a_2}\ \sum_{\lambda_1,\lambda_2,\lambda_3,\lambda_4} X_{\lambda_1,\lambda_2}(a_1)X_{\lambda_3,\lambda_4}(a_2)\left( \frac{\partial F_{\lambda_2,\lambda_3}(a_1,a_2)}{\partial F_{\lambda,\mu}(a,b)}F_{\lambda_4,\lambda_1}(a_2,a_1) + F_{\lambda_2,\lambda_3}(a_1,a_2)\frac{\partial F_{\lambda_4,\lambda_1}(a_2,a_1)}{\partial F_{\lambda,\mu}(a,b)} \right) \\
&= \frac{2}{\sum_c \operatorname{Tr} X^2(c)} \sum_{a_1,a_2}\ \sum_{\lambda_1,\lambda_2,\lambda_3,\lambda_4} X_{\lambda_1,\lambda_2}(a_1)F_{\lambda_2,\lambda_3}(a_1,a_2)X_{\lambda_3,\lambda_4}(a_2)\frac{\partial F_{\lambda_4,\lambda_1}(a_2,a_1)}{\partial F_{\lambda,\mu}(a,b)} \\
&= \frac{2}{\sum_c \operatorname{Tr} X^2(c)} \sum_{a_1,a_2}\ \sum_{\lambda_1,\lambda_4} \left[X(a_1)F(a_1,a_2)X(a_2)\right]_{\lambda_1,\lambda_4}\frac{\partial F_{\lambda_1,\lambda_4}(a_1,a_2)}{\partial F_{\lambda,\mu}(a,b)} \\
&= \frac{2(2-\delta_{\lambda,\mu}\delta_{a,b})\left[X(a)F(a,b)X(b)\right]_{\lambda,\mu}}{\sum_c \operatorname{Tr} X^2(c)}
\end{aligned} \qquad (2.64)$$

where δ is the Kronecker delta. The last line follows because the only elements in any F which depend on F_{λ,μ}(a, b) are F_{λ,μ}(a, b) itself and F_{μ,λ}(b, a), which is the same element as F_{λ,μ}(a, b) if and only if λ = μ and a = b.

In a similar fashion (albeit with more manipulation), we can apply the same procedure to η to get

$$\frac{\partial\eta}{\partial F_{\lambda,\mu}(a,b)} = \frac{2(2-\delta_{\lambda,\mu}\delta_{a,b})\sum_{c,d}\omega\begin{pmatrix} a & c \\ b & d \end{pmatrix}\left[Y(a,c)F(c,d)Y(d,b)\right]_{\lambda,\mu}}{\sum_{c,d}\operatorname{Tr} Y(c,d)Y(d,c)}. \qquad (2.65)$$

So for η/ξ to be stationary with respect to F, we must have

$$\frac{\sum_{c,d}\omega\begin{pmatrix} a & c \\ b & d \end{pmatrix}\left[Y(a,c)F(c,d)Y(d,b)\right]_{\lambda,\mu}\ \sum_c \operatorname{Tr} X^2(c)}{\left[X(a)F(a,b)X(b)\right]_{\lambda,\mu}\ \sum_{c,d}\operatorname{Tr} Y(c,d)Y(d,c)} = \frac{\eta}{\xi} \qquad (2.66)$$

for all spin values a, b and sets of p spins λ, μ. If we set

$$\alpha = \frac{\eta\sum_{c,d}\operatorname{Tr} Y(c,d)Y(d,c)}{\xi\sum_c \operatorname{Tr} X^2(c)} \qquad (2.67)$$

Fig. 2.21: Graphical interpretation of Equation 2.68.

then this becomes

$$\alpha X(a)F(a,b)X(b) = \sum_{c,d}\omega\begin{pmatrix} a & c \\ b & d \end{pmatrix} Y(a,c)F(c,d)Y(d,b). \qquad (2.68)$$

Given the graphical interpretations for F, X and Y at optimality, this equation can be interpreted as adding a column to a section of the plane which covers all of the plane but half a row. This is shown in Figure 2.21, remembering that

$$\omega\begin{pmatrix} a & c \\ b & d \end{pmatrix} = \omega\begin{pmatrix} c & a \\ d & b \end{pmatrix}.$$

However, if we define the matrices A(a) to be the square roots of the X(a) matrices:

$$X(a) = A^2(a) \qquad (2.69)$$

then the equation

$$Y(a,b) = A(a)F(a,b)A(b) \qquad (2.70)$$

implies that

$$\alpha = \frac{\eta\sum_{a,b}\operatorname{Tr} A(a)F(a,b)A^2(b)F(b,a)A(a)}{\xi\sum_a \operatorname{Tr} A^4(a)} = \frac{\eta\,\xi\sum_a \operatorname{Tr} A^4(a)}{\xi\sum_a \operatorname{Tr} A^4(a)} = \eta \qquad (2.71)$$

Fig. 2.22: The graphical interpretation of A as a corner transfer matrix gives interpretations of Equations 2.69 (panel (a)) and 2.70 (panel (b)).

and therefore

$$\begin{aligned}
\sum_{c,d}\omega\begin{pmatrix} a & c \\ b & d \end{pmatrix} Y(a,c)F(c,d)Y(d,b) &= \sum_{c,d}\omega\begin{pmatrix} a & c \\ b & d \end{pmatrix} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b) \\
&= A(a)\left(\sum_{c,d}\omega\begin{pmatrix} a & c \\ b & d \end{pmatrix} F(a,c)Y(c,d)F(d,b)\right)A(b) \\
&= A(a)\,\eta Y(a,b)\,A(b) \\
&= \eta A^2(a)F(a,b)A^2(b) \\
&= \alpha X(a)F(a,b)X(b)
\end{aligned} \qquad (2.72)$$

which implies that F maximises η/ξ. Thus Equation 2.70 implies Equation 2.68, which is equivalent to stationarity (i.e. optimality).

The optimal A can be interpreted as a corner transfer matrix, which gives us the name of the method. Whereas X(a) rotates a half-row cut by an angle of π around a spin with value a, A(a) achieves exactly half that effect by rotating the cut around a spin of value a by an angle of π/2. We show interpretations of Equations 2.69 and 2.70 in Figure 2.22.

2.4.4 The CTM equations

Equations 2.53, 2.54, 2.69 and 2.70 were first derived by Baxter in his original paper [12], and they are called the CTM equations. For convenience we state them together here.

$$\xi X(a) = \sum_b F(a,b)X(b)F(b,a) \qquad (2.73)$$

$$\eta Y(a,b) = \sum_{c,d}\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix} F(a,c)Y(c,d)F(d,b) \qquad (2.74)$$

$$X(a) = A^2(a) \qquad (2.75)$$

$$Y(a,b) = A(a)F(a,b)A(b) \qquad (2.76)$$

Note that the first two equations basically define X and Y in terms of F; the last two force the matrices to maximise η/ξ. The solution of these equations for finite matrix sizes will yield an approximation (and lower bound) for the partition function per site, κ, from the inequality

$$\kappa \geq \frac{\eta}{\xi}. \qquad (2.77)$$

We will show in the next section that if the matrices are infinite-dimensional, this approximation is exact.

Note that these equations do not define the matrices uniquely; for example, they are valid under the similarity transformations

$$X(a) \to P^T(a)X(a)P(a), \qquad Y(a,b) \to P^T(a)Y(a,b)P(b) \qquad (2.78)$$

$$A(a) \to P^T(a)A(a)P(a), \qquad F(a,b) \to P^T(a)F(a,b)P(b) \qquad (2.79)$$

where P(a) is an orthogonal matrix (P^T(a)P(a) = I) of dimension 2^p × 2^p. In particular, we can choose P(a) so that A(a), and hence X(a), is diagonal. Note that this ensures that X^T(a) = X(a) and Y^T(a,b) = Y(b,a).

The equations are also unchanged under the transformations

$$X(a) \to c^2 X(a), \qquad Y(a,b) \to c^2 Y(a,b), \qquad A(a) \to cA(a) \qquad (2.80)$$

where c is any constant. In other words, we can choose normalising factors for X, Y and A. Since X and Y are defined only as eigenvectors in the equations, and hence can have arbitrary normalisation, this seems reasonable.

2.5 The infinite-dimensional solution

Solving the CTM equations provides a lower bound for κ, which can be used as an approximation for κ. In this section, we will show that the approximation becomes exact as the

matrices become infinite-dimensional. To do this, we show that the space for ψ generated by Equation 2.41 contains the optimal ψ for the maximisation problem in Equation 2.40. Again, this section is based on Baxter's workings in [12].

We start off by again considering an m × n square lattice with toroidal boundary conditions. In addition to the weight ω of each cell, we assign a weight f(a, b) to each horizontal edge with spins a and b. Note that f(a, b) is not the Boltzmann weight of an edge; at the moment it is just an arbitrary function. We do, however, impose the condition that f(a, b) = f(b, a).

We continue by defining the 2^m-dimensional vector φ_0 by its elements:

$$[\phi_0]_{\sigma_1,\sigma_2,\ldots,\sigma_m} = f(\sigma_1,\sigma_2)f(\sigma_2,\sigma_3)\cdots f(\sigma_m,\sigma_1) \qquad (2.81)$$

and we define φ to be

$$\phi = V^{n-1}\phi_0. \qquad (2.82)$$

Now we will construct a set of matrices F*(a, b) which, when substituted for F into Equation 2.41, will give the maximum value for ψ. We define F*(a, b) to be a 2^{n-1}-dimensional square matrix with entries

$$F^*_{\lambda,\mu}(a,b) = \omega\begin{pmatrix} a & \lambda_1 \\ b & \mu_1 \end{pmatrix} f(\lambda_{n-1},\mu_{n-1}) \prod_{i=1}^{n-2}\omega\begin{pmatrix} \lambda_i & \lambda_{i+1} \\ \mu_i & \mu_{i+1} \end{pmatrix} \qquad (2.83)$$

and define α_1, α_2, ..., α_n to each be sets of m spins (so that, for example, α_1 consists of the spins α_{1,1}, α_{1,2}, ..., α_{1,m}). Then we can derive

$$\begin{aligned}
\phi_{\alpha_1} &= \left[V^{n-1}\phi_0\right]_{\alpha_1} \\
&= \sum_{\alpha_2,\alpha_3,\ldots,\alpha_n}\left(\prod_{i=1}^{n-1} V_{\alpha_i,\alpha_{i+1}}\right)[\phi_0]_{\alpha_n} \\
&= \sum_{\alpha_2,\alpha_3,\ldots,\alpha_n}\left(\prod_{i=1}^{n-1}\prod_{j=1}^m \omega\begin{pmatrix} \alpha_{i,j+1} & \alpha_{i+1,j+1} \\ \alpha_{i,j} & \alpha_{i+1,j} \end{pmatrix}\right)\prod_{j=1}^m f(\alpha_{n,j},\alpha_{n,j+1}) \\
&= \sum_{\alpha_2,\alpha_3,\ldots,\alpha_n}\prod_{j=1}^m\left( f(\alpha_{n,j},\alpha_{n,j+1})\prod_{i=1}^{n-1}\omega\begin{pmatrix} \alpha_{i,j+1} & \alpha_{i+1,j+1} \\ \alpha_{i,j} & \alpha_{i+1,j} \end{pmatrix}\right) \\
&= \sum_{\alpha_2,\alpha_3,\ldots,\alpha_n}\prod_{j=1}^m F^*_{(\alpha_{2,j},\alpha_{3,j},\ldots,\alpha_{n,j}),(\alpha_{2,j+1},\alpha_{3,j+1},\ldots,\alpha_{n,j+1})}(\alpha_{1,j},\alpha_{1,j+1}) \\
&= \operatorname{Tr} F^*(\alpha_{1,1},\alpha_{1,2})F^*(\alpha_{1,2},\alpha_{1,3})\cdots F^*(\alpha_{1,m},\alpha_{1,1}).
\end{aligned} \qquad (2.84)$$

Thus substituting ψ = φ and F(a, b) = F*(a, b) satisfies Equation 2.41. The manipulation we have done above is essentially converting a division of a half-plane between rows and columns, as shown in Figure 2.23.

We must also have (F*(a, b))^T = F*(b, a). This follows quite easily from Equation 2.83,

Fig. 2.23: Graphical interpretation of Equation 2.84.

the fact that

$$\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \omega\begin{pmatrix} c & d \\ a & b \end{pmatrix},$$

and our earlier assumption that f(a, b) = f(b, a).

Now, if we choose f correctly, so that in the basis of eigenvectors of V, φ_0 contains some non-zero multiple of the eigenvector corresponding to Λ, then in the thermodynamic limit n → ∞, φ = V^{n-1}φ_0 will become the partition function of the half-plane, which is the optimal ψ. Therefore as n → ∞ (which also makes the dimension of F* infinite), the optimal ψ must satisfy Equation 2.41 for F = F*. Thus when the matrices are infinite-dimensional, the ψ-space generated by Equation 2.41 contains the optimal solution, and therefore the approximation is exact.

2.6 Calculating quantities

In our statistical mechanical models, the primary quantity that we are interested in is κ, the partition function per site. If we have solutions, or approximate solutions, for all variables in the CTM equations, then we can easily find κ by means of the equation

$$\kappa = \frac{\eta}{\xi}. \qquad (2.85)$$

However, as we will see in subsequent sections, not all of our methods calculate all of the variables. In particular, our renormalization group method (which we will describe in Section 2.10) only calculates approximate A's and F's. We would therefore like to find an expression for κ involving only those variables.

We do this by adjusting Equation 2.62. If X(a), A(a) and F(a, b) are solutions for the

CTM equations, then we have

$$\xi = \frac{\sum_{a,b}\operatorname{Tr} X^T(a)F(a,b)X(b)F(b,a)}{\sum_a \operatorname{Tr} X^T(a)X(a)} = \frac{\sum_{a,b}\operatorname{Tr} (A^T(a))^2F(a,b)A^2(b)F(b,a)}{\sum_a \operatorname{Tr} (A^T(a))^2A^2(a)} = \frac{\sum_{a,b}\operatorname{Tr} A^2(a)F(a,b)A^2(b)F(b,a)}{\sum_a \operatorname{Tr} A^4(a)} \qquad (2.86)$$

since A(a) can be taken to be symmetric.

We can find a similar expression for η in the same way. If the matrices solve the CTM equations, we have

$$\eta = \frac{\sum_{a,b,c,d}\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\operatorname{Tr} Y^T(a,b)F(a,c)Y(c,d)F(d,b)}{\sum_{a,b}\operatorname{Tr} Y^T(a,b)Y(a,b)} = \frac{\sum_{a,b,c,d}\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\operatorname{Tr} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b)F(b,a)}{\sum_{a,b}\operatorname{Tr} A^2(a)F(a,b)A^2(b)F(b,a)} \qquad (2.87)$$

since Tr AB = Tr BA for all matrices A, B. Since κ = η/ξ, these two expressions allow us to find κ in terms of our solved variables:

$$\kappa = \frac{\left(\sum_a \operatorname{Tr} A^4(a)\right)\left(\sum_{a,b,c,d}\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\operatorname{Tr} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b)F(b,a)\right)}{\left(\sum_{a,b}\operatorname{Tr} A^2(a)F(a,b)A^2(b)F(b,a)\right)^2}. \qquad (2.88)$$

This expression again has a graphical interpretation: each of the sums in the expression represents the partition function of the entire plane, expressed in a different way, as shown in Figure 2.24. The first sum in the numerator just takes four corner-transfer matrices and puts them together (taking the trace to ensure that the first and last half-row cuts, which occupy the same sites, are identical). The sum in the denominator adds a row to this, while the second sum in the numerator adds a cross, consisting of a row and an intersecting column. It can be seen that this cross consists of four half-rows together with a single cell. By dividing by the square of the sum in the denominator, which adds two half-rows, we essentially remove the half-rows and are left with the partition function of a single cell, which is exactly what κ is.

The partition function per site, while very important, is not the only thermodynamic quantity of importance. We will often want to calculate spin expectations: the expected value of a single spin or products of certain spins. We show next how to calculate some of these.
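Before moving on, note that the consistency of Equation 2.88 with κ = η/ξ is purely algebraic: it holds for any matrices, not just solutions of the CTM equations. A sketch (all matrices random, hard squares ω; the names and sizes are ours):

```python
import numpy as np

def omega(a, b, c, d, z):
    # hard squares cell weight (Eq. 2.38)
    if (a == b == 1) or (a == c == 1) or (b == d == 1) or (c == d == 1):
        return 0.0
    return z ** ((a + b + c + d) / 4)

rng = np.random.default_rng(4)
n, z = 2, 0.3
spins = (0, 1)

# random symmetric A(a) and F(a,b) with F(a,b)^T = F(b,a)
A = {}
for a in spins:
    M = rng.random((n, n))
    A[a] = (M + M.T) / 2
M0, M1 = rng.random((n, n)), rng.random((n, n))
F = {(0, 0): (M0 + M0.T) / 2, (1, 1): (M1 + M1.T) / 2}
F[(0, 1)] = rng.random((n, n))
F[(1, 0)] = F[(0, 1)].T

tr = np.trace
A2 = {a: A[a] @ A[a] for a in spins}
den = sum(tr(A2[a] @ F[(a, b)] @ A2[b] @ F[(b, a)]) for a in spins for b in spins)
A4 = sum(tr(A2[a] @ A2[a]) for a in spins)
num = sum(omega(a, b, c, d, z) *
          tr(A[a] @ F[(a, c)] @ A[c] @ F[(c, d)] @ A[d] @ F[(d, b)] @ A[b] @ F[(b, a)])
          for a in spins for b in spins for c in spins for d in spins)

xi = den / A4                  # Eq. 2.86
eta = num / den                # Eq. 2.87
kappa = A4 * num / den ** 2    # Eq. 2.88
print(np.isclose(kappa, eta / xi))  # True: Eq. 2.88 is exactly eta/xi
```

Of course, the value produced only approximates the true κ when A and F (approximately) solve the CTM equations.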

Fig. 2.24: Calculating κ. (a) Four-CTM partition function. (b) Four-CTM partition function with a row. (c) Four-CTM partition function with a cross. The expression we use is (a)·(c)/(b)².

The most important spin expectation is the magnetization per site, m = ⟨σ_i⟩. This is the expected value of an arbitrary site, which does not depend on the site if the model is translationally invariant (which we assume it is). As shown in the Introduction, we can write this as

$$m = -\frac{\partial\psi}{\partial H} = kT\frac{\partial\ln\kappa}{\partial H} = kT\frac{\partial\ln(\eta/\xi)}{\partial H} \qquad (2.89)$$

when η and ξ solve the CTM equations.

Now, by construction, η/ξ is stationary with respect to all elements in F:

$$\frac{\partial(\eta/\xi)}{\partial F(a,b)} = 0. \qquad (2.90)$$

Recall from Section 2.4.3 that

$$\frac{\partial\xi}{\partial X(a)} = \frac{\partial\eta}{\partial Y(a,b)} = 0. \qquad (2.91)$$

Since η does not depend on X and ξ is independent of Y, we know that η/ξ is stationary with respect to all elements in X and Y. Therefore, when we differentiate η/ξ with respect to H, the only quantity in the expression that is not stationary with respect to H is ω.

Now ω will change with the model, but in all models that we are interested in, the external field interaction affects each spin equally, contributing a weight e^{βHσ_i} for each spin σ_i. As we are working on the square lattice, each spin belongs to four different cells, so this weight will be split over the ω's for those cells. On the other hand, each of the four spins will contribute to the weight. In other words,

$$\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix} \propto e^{\beta H(a+b+c+d)/4} \qquad (2.92)$$

and this is the only part of ω which includes H.

Using our expression for η/ξ in terms of A and F now enables us to differentiate ln(η/ξ):

$$\begin{aligned}
\frac{\partial\ln(\eta/\xi)}{\partial H} &= \sum_{a,b,c,d}\frac{\partial\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}}{\partial H}\,\frac{\partial}{\partial\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}}\Bigg(\ln\sum_{a'}\operatorname{Tr} A^4(a') \\
&\qquad + \ln\sum_{a',b',c',d'}\omega\begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix}\operatorname{Tr} A(a')F(a',c')A(c')F(c',d')A(d')F(d',b')A(b')F(b',a') \\
&\qquad - 2\ln\sum_{a',b'}\operatorname{Tr} A^2(a')F(a',b')A^2(b')F(b',a')\Bigg) \\
&= \beta\sum_{a,b,c,d}\frac{a+b+c+d}{4}\,\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\,\frac{\operatorname{Tr} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b)F(b,a)}{\sum_{a',b',c',d'}\omega\begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix}\operatorname{Tr} A(a')F(a',c')A(c')F(c',d')A(d')F(d',b')A(b')F(b',a')} \\
&= \beta\,\frac{\sum_{a,b,c,d}\frac{a+b+c+d}{4}\,\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\operatorname{Tr} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b)F(b,a)}{\sum_{a,b,c,d}\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\operatorname{Tr} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b)F(b,a)}.
\end{aligned} \qquad (2.93)$$
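The reduction of the denominator of this expression, carried out next from the CTM equations, can be checked numerically at the exact p = 0 hard-squares solution of Section 2.7. A scalar sketch (the setup, root selection and test value z = 0.05 are ours):

```python
import numpy as np

def omega(a, b, c, d, z):
    # hard squares cell weight (Eq. 2.38)
    if (a == b == 1) or (a == c == 1) or (b == d == 1) or (c == d == 1):
        return 0.0
    return z ** ((a + b + c + d) / 4)

# exact p = 0 hard-squares solution (Section 2.7): all quantities are scalars
z = 0.05
t = z ** 0.25
roots = np.roots([-1.0, 3 * t, -t ** 2, 0.0, 1.0, -t])   # Eq. 2.107
real = roots[np.abs(roots.imag) < 1e-8].real
A1 = real[np.argmin(np.abs(real - t))]                   # branch A(1) = t - t^5 + ...
F01 = np.sqrt(A1 ** 2 / (1 - A1 ** 4))                   # Eq. 2.105
A = {0: 1.0, 1: A1}
F = {(0, 0): 1.0, (0, 1): F01, (1, 0): F01, (1, 1): 0.0}
xi = 1 + F01 ** 2 * A1 ** 2                              # Eq. 2.100
eta = 1 + 2 * t * F01 ** 2 * A1                          # Eq. 2.101

sp = (0, 1)
denom = sum(omega(a, b, c, d, z) *
            A[a] * F[(a, c)] * A[c] * F[(c, d)] * A[d] * F[(d, b)] * A[b] * F[(b, a)]
            for a in sp for b in sp for c in sp for d in sp)
A4 = sum(A[a] ** 4 for a in sp)
print(np.isclose(denom, eta * xi * A4))  # True: the reduction derived below
```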

From the CTM equations, the denominator of the fraction in this expression becomes

$$\begin{aligned}
\sum_{a,b,c,d}\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\operatorname{Tr} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b)F(b,a) &= \sum_{a,b}\operatorname{Tr} A(a)\left(\sum_{c,d}\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix} F(a,c)Y(c,d)F(d,b)\right)A(b)F(b,a) \\
&= \eta\sum_{a,b}\operatorname{Tr} A(a)Y(a,b)A(b)F(b,a) \\
&= \eta\sum_{a,b}\operatorname{Tr} A^2(a)F(a,b)A^2(b)F(b,a) \\
&= \eta\sum_a \operatorname{Tr} X(a)\left(\sum_b F(a,b)X(b)F(b,a)\right) \\
&= \eta\xi\sum_a \operatorname{Tr} X^2(a) \\
&= \eta\xi\sum_a \operatorname{Tr} A^4(a).
\end{aligned} \qquad (2.94)$$

Using exactly the same reasoning we can also derive the equation

$$\sum_{a,b,c,d}\frac{a}{4}\,\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\operatorname{Tr} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b)F(b,a) = \eta\xi\sum_a \frac{a}{4}\operatorname{Tr} A^4(a). \qquad (2.95)$$

By grouping the matrices in a different order, we can derive in similar fashion the equation

$$\sum_{a,b,c,d}\frac{b}{4}\,\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\operatorname{Tr} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b)F(b,a) = \eta\xi\sum_b \frac{b}{4}\operatorname{Tr} A^4(b) = \eta\xi\sum_a \frac{a}{4}\operatorname{Tr} A^4(a). \qquad (2.96)$$

Similarly, if the multiplying coefficient is c or d, we can do similar manipulations and achieve the same result. Therefore the numerator of the fraction in Equation 2.93 is

$$4\,\eta\xi\sum_a \frac{a}{4}\operatorname{Tr} A^4(a) = \eta\xi\sum_a a\operatorname{Tr} A^4(a). \qquad (2.97)$$

Putting it all together, we get our (remarkably simple) expression for the magnetization

per site:

$$m = kT\frac{\partial\ln\kappa}{\partial H} = kT\beta\,\frac{\eta\xi\sum_a a\operatorname{Tr} A^4(a)}{\eta\xi\sum_a \operatorname{Tr} A^4(a)} = \frac{\sum_a a\operatorname{Tr} A^4(a)}{\sum_a \operatorname{Tr} A^4(a)}. \qquad (2.98)$$

As with many things in the CTM method, there is a graphical interpretation to this as well: A^4(a) is the contribution to the partition function of the entire lattice if the central spin is fixed at value a, as shown previously in Figure 2.24(a). Thus the denominator is the partition function Z_N, and the numerator is the sum of a multiplied by the (unnormalised) probability of a being the value of the central spin. This gives the expected value of that spin.

If we wish to calculate other spin expectations, especially for spins which are close together (e.g. in a single cell), we can use a similar method. The result is very much what would be expected: in the most general terms, if σ_i, σ_j, σ_k, and σ_l are spins around a single cell, then

$$\langle f(\sigma_i,\sigma_j,\sigma_k,\sigma_l)\rangle = \frac{\sum_{a,b,c,d} f(a,b,c,d)\,\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\operatorname{Tr} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b)F(b,a)}{\sum_{a,b,c,d}\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix}\operatorname{Tr} A(a)F(a,c)A(c)F(c,d)A(d)F(d,b)A(b)F(b,a)}. \qquad (2.99)$$

The interpretation of this quotient is as before, except that we have to express the partition function so that we know the value of 4 spins.

2.7 The 1x1 solution: an example

As an example, we solve the CTM equations in the case where p = 0 for the hard squares model. Since p = 0, all the matrices in the equations are now scalars. We have already given the formula for ω in Equation 2.38. This is all we need to solve the CTM equations.

It seems reasonable to assume that F(1, 1) = 0, since applying F(1, 1) would involve two 1-spins side by side, which is forbidden. Additionally, because the norms of X and Y (as eigenvectors of R and S) are undetermined, we can set X(0) = Y(0, 0) = 1. From Equation 2.69 with a = 0, we then get A(0) = ±1. As A(0) represents the weight of some section of the plane, it should be positive, so we take A(0) = 1. From Equation 2.70 with a = b = 0, we then derive F(0, 0) = 1.

Let t = z^{1/4}, so that we do not have to deal with fractional powers of z. Substituting what

we know into the CTM equations and writing them out in full gives the equations

$$\xi = 1 + F^2(0,1)X(1) = \frac{F^2(0,1)}{X(1)} \qquad (2.100)$$

$$\eta = 1 + 2tF(0,1)Y(0,1) = \frac{F(0,1)}{Y(0,1)}\left(t + t^2F(0,1)Y(0,1)\right) \qquad (2.101)$$

$$X(1) = A^2(1), \qquad Y(0,1) = F(0,1)A(1). \qquad (2.102)$$

Substituting the last two equations into the first two and eliminating ξ and η gives us

$$1 + F^2(0,1)A^2(1) = \left(\frac{F(0,1)}{A(1)}\right)^2 \qquad (2.103)$$

$$1 + 2tF^2(0,1)A(1) = \frac{1}{A(1)}\left(t + t^2F^2(0,1)A(1)\right). \qquad (2.104)$$

The first equation gives F^2(0,1) in terms of A(1):

$$F^2(0,1) = \frac{1}{\frac{1}{A^2(1)} - A^2(1)} = \frac{A^2(1)}{1 - A^4(1)} \qquad (2.105)$$

which, when substituted into the second equation, gives an expression for A(1) as the root of a 5th degree polynomial:

$$1 + 2t\,\frac{A^3(1)}{1 - A^4(1)} = \frac{t}{A(1)} + \frac{t^2A^2(1)}{1 - A^4(1)} \qquad (2.106)$$

$$-t + A(1) - t^2A^3(1) + 3tA^4(1) - A^5(1) = 0. \qquad (2.107)$$

This cannot be solved in closed form (by Maple), but it is possible to generate an expression for A(1) as a series in t. This gives

$$A(1) = t - t^5 + 4t^9 - 21t^{13} + \cdots \qquad (2.108)$$

Substituting this expression back into F(0,1), ξ and η gives a series for the partition function per site:

$$\kappa = 1 + z - 2z^2 + 8z^3 - 40z^4 + \cdots \qquad (2.109)$$

This gives κ correctly up to the z^7 term. The actual coefficient of z^8 in κ is −57253, which differs by only 1 from our approximation. This demonstrates how, at low matrix sizes, the CTM method can generate very good series approximations efficiently.
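The same p = 0 solution can be obtained numerically rather than as a series: solve the quintic of Equation 2.107 for the root near t, then back-substitute through Equations 2.105 and 2.100-2.101. A sketch (the root-selection heuristic and the test value of z are ours):

```python
import numpy as np

def kappa_1x1(z):
    """p = 0 (scalar) CTM approximation for hard squares: solve Eq. 2.107
    for A(1), then back-substitute to get kappa = eta / xi."""
    t = z ** 0.25
    # -A^5 + 3t A^4 - t^2 A^3 + A - t = 0 (Eq. 2.107), coefficients highest first
    roots = np.roots([-1.0, 3 * t, -t ** 2, 0.0, 1.0, -t])
    real = roots[np.abs(roots.imag) < 1e-8].real
    A1 = real[np.argmin(np.abs(real - t))]   # the branch A(1) = t - t^5 + ...
    F2 = A1 ** 2 / (1 - A1 ** 4)             # F(0,1)^2, Eq. 2.105
    xi = 1 + F2 * A1 ** 2                    # Eq. 2.100 with X(1) = A(1)^2
    eta = 1 + 2 * t * F2 * A1                # Eq. 2.101 with Y(0,1) = F(0,1)A(1)
    return eta / xi

z = 0.001
series = 1 + z - 2 * z ** 2 + 8 * z ** 3 - 40 * z ** 4
print(abs(kappa_1x1(z) - series) < 1e-10)  # True: agrees with Eq. 2.109 at small z
```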

2.8 Matrix algorithms

Now we move on from Baxter's CTM equations to derive our own numerical and series methods based on these equations. In the development of these methods, we have used two matrix decomposition algorithms which we shall outline here.

2.8.1 Cholesky decomposition

The first algorithm is Cholesky decomposition. If we are given an n × n matrix X, then the Cholesky method finds an upper triangular matrix A such that

$$A^T A = X. \qquad (2.110)$$

It does this by applying the following algorithm (taken from [116]):

1. Set i = 1.
2. Calculate $A_{ii} = \left(X_{ii} - \sum_{k=1}^{i-1} A_{ki}^2\right)^{1/2}$.
3. For j = i + 1, i + 2, ..., n, calculate $A_{ij} = \frac{1}{A_{ii}}\left(X_{ij} - \sum_{k=1}^{i-1} A_{ki}A_{kj}\right)$.
4. Increase i by 1.
5. If i > n, stop. Otherwise go to step 2.

The equations used above are rearrangements of the elements of the matrix equation A^T A = X, so the method will produce the correct A.

2.8.2 The Arnoldi method

The other method that we shall use is the Arnoldi method, invented by Arnoldi in 1951 ([3]). Given an n × n matrix A and a number m ≤ n, the Arnoldi method finds an n × m matrix V_m such that V_m^T A V_m is a Hessenberg matrix (all entries below the first subdiagonal are 0). The method runs as follows (taken from [119, Section 6.3]):

1. Choose an n-dimensional vector v_1 of norm 1.
2. Set j = 1.
3. Compute h_{ij} = (Av_j)^T v_i for i = 1, 2, ..., j.
4. Compute $w_j = Av_j - \sum_{i=1}^{j} h_{ij}v_i$.
5. Set h_{j+1,j} = ‖w_j‖.
6. If j = m or h_{j+1,j} = 0 then stop.

7. Set $v_{j+1} = \frac{1}{h_{j+1,j}}w_j$.
8. Increase j by 1.
9. Go to step 3.

If the algorithm finishes with j = m, set V_m to be the matrix whose columns are v_1, v_2, ..., v_m. Otherwise the algorithm fails.

At each step, we are basically applying the Gram-Schmidt orthogonalization process to Av_j, to form the basis {v_1, v_2, ..., v_m}. Therefore all the v_i are orthogonal to each other. By construction, they have norm 1, so we know that

$$V_m^T V_m = I_m. \qquad (2.111)$$

Lemma. Let H_m be the m × m matrix whose (i, j)th entry is h_{ij} (0 if this is not defined by the algorithm). Then

$$V_m^T A V_m = H_m. \qquad (2.112)$$

Proof. By the construction of V_m, for all j = 1, 2, ..., m,

$$Av_j = w_j + \sum_{i=1}^{j} h_{ij}v_i = h_{j+1,j}v_{j+1} + \sum_{i=1}^{j} h_{ij}v_i = \sum_{i=1}^{j+1} h_{ij}v_i = \sum_{i=1}^{m} h_{ij}v_i \qquad (2.113)$$

since h_{ij} = 0 when i > j + 1. This implies that

$$AV_m = V_m H_m, \qquad (2.114)$$

and since V_m^T V_m = I, the result follows.

Later, we will use the Arnoldi method as a quick and easy substitute for diagonalizing a matrix.

2.9 The iterative CTM method

Having derived the CTM equations, we would now like to find a numerical value or a series for κ by solving them. One method we can use to do so is by iterating through the equations,

solving one at a time. In his papers, Baxter proposed a general iterative method, but overall eschewed it for more optimised (but also more specialised) algorithms, also based on iteration. In this section, we present our own generalised iterative method.

While the CTM equations were generated by assuming that X, Y and F are of dimension 2^p × 2^p, it is apparent that they place no restriction on the matrices to actually have such dimensions. While the power-of-2 dimension makes sense graphically, corresponding to the rows of the matrices being indexed by the values of a half-row/column cut of p spins, the equations can be derived in identical fashion without such an assumption. The only thing we lose is the graphical interpretation, but the equations still hold. As such, we can assume that the matrices are of any size (as long as they are all of the same size).

To construct the iterative method, we start with a set of values for all the matrices. Then we impose an order on the CTM equations, and go through the equations one at a time according to that order. For each equation, we fix some of the variables in the equation, then solve for the unfixed variables. The solutions then become the new values for those variables. Eventually, working through all the equations once should lead to us changing each variable once.

We hope that if we choose the right order for the equations, then after each pass through the equations, the values of the variables will be closer to the solution of the equations than before. Unfortunately, we cannot actually prove that this will be the case; but our empirical testing indicates that the order we have chosen does get us closer. The order that we use is Equation 2.53, then 2.54, then 2.69, and finally 2.70.

Having set the framework, we must figure out how to solve each of the CTM equations in turn. Firstly, we take Equation 2.53. This equation is merely a (glorified) eigenvalue equation for the matrix R; in fact, it generates the same equations as the eigenvalue equation

$$\xi X = RX \qquad (2.115)$$

where X is taken as a vector. Therefore it can be solved using the power method.
To do this, we fix the value of R, then calculate the right-hand side (which corresponds to the right-hand side of Equation 2.53). To keep the values at a reasonable size, we set a normalisation for X: here we have used X_{0,0}(0) = 1. This allows us to find ξ, and then divide the right-hand side by ξ to get a new X. By repeating this process, eventually ξ will become the maximum eigenvalue of R and X will become the corresponding eigenvector, which is what we want. In terms of the matrices in Equation 2.53, we are applying the equations

$$\xi = \left[\sum_b F(0,b)X(b)F(b,0)\right]_{0,0} \qquad (2.116)$$

$$X'(a) = \frac{1}{\xi}\sum_b F(a,b)X(b)F(b,a) \qquad (2.117)$$

many times. Because this is just a modified power method, we know this method will converge, given enough iterations. In practice, we just iterate a fixed number of times.
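The power step just described can be sketched as follows. This is a toy run of our own, with random positive F matrices so that the dominant eigenvalue of R is positive and the normalised iteration is guaranteed to settle.

```python
import numpy as np

def power_iterate_X(F, d, iters=1000):
    """Iterate Eqs. 2.116-2.117 with the normalisation X(0)[0,0] = 1.
    F is a dict of d x d matrices obeying F(a,b)^T = F(b,a)."""
    X = {a: np.eye(d) for a in (0, 1)}
    xi = 1.0
    for _ in range(iters):
        newX = {a: sum(F[(a, b)] @ X[b] @ F[(b, a)] for b in (0, 1))
                for a in (0, 1)}
        xi = newX[0][0, 0]                      # normalisation X(0)[0,0] = 1
        X = {a: newX[a] / xi for a in (0, 1)}
    return xi, X

rng = np.random.default_rng(3)
d = 2
M0, M1 = rng.random((d, d)), rng.random((d, d))
F = {(0, 0): M0 + M0.T, (1, 1): M1 + M1.T}
F[(0, 1)] = rng.random((d, d))
F[(1, 0)] = F[(0, 1)].T

xi, X = power_iterate_X(F, d)
# at convergence, Eq. 2.53 holds: xi X(a) = sum_b F(a,b) X(b) F(b,a)
resid = max(np.max(np.abs(sum(F[(a, b)] @ X[b] @ F[(b, a)] for b in (0, 1))
                          - xi * X[a])) for a in (0, 1))
print(resid < 1e-6)  # True
```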

In an identical way, we recast Equation 2.54 as

$$\eta = \left[\sum_{c,d}\omega\begin{pmatrix} 0 & 0 \\ c & d \end{pmatrix} F(0,c)Y(c,d)F(d,0)\right]_{0,0} \qquad (2.118)$$

$$Y'(a,b) = \frac{1}{\eta}\sum_{c,d}\omega\begin{pmatrix} a & b \\ c & d \end{pmatrix} F(a,c)Y(c,d)F(d,b). \qquad (2.119)$$

We can then use a similar method to find η and Y from F. We used the normalisation Y_{0,0}(0, 0) = 1.

Now the only variables which we have not recalculated are A and F. To find these, we use the remaining equations, 2.69 and 2.70. Equation 2.69 expresses X in terms of A only, so we should be able to reverse the equation by setting A(a) to be the square root of X(a). However, it is not immediately obvious how we could find such a matrix. The most obvious route would be to diagonalize X(a), i.e. find a matrix P(a) such that

$$X(a) = P^{-1}(a)D(a)P(a). \qquad (2.120)$$

Then, because it is easy to find the square root of a diagonal matrix (just take the square roots of its components), we set

$$A(a) = P^{-1}(a)\sqrt{D(a)}\,P(a) \qquad (2.121)$$

so that A(a) satisfies Equation 2.69. This is easy to do for a matrix of scalars, so we use this procedure if we are calculating numerical values. On the other hand, if we want to evaluate κ as a series in some variables, this procedure would necessitate a diagonalization of a matrix of series, which is much more difficult. The best approach that was suggested to do this was to calculate the eigenvalues as series in z by solving for each coefficient in the series individually. However, to find the first coefficient, we would need to solve an nth degree polynomial (if A is n × n), which is impossible to do exactly for large n. Even if we used a numerical method to solve for this coefficient, the error this would introduce would quickly escalate in magnitude as we used the result to solve for the remaining coefficients. So we cannot use this approach. We have not yet managed to come up with a reasonable way of diagonalization.

For an alternate way of updating A, we recall from Section 2.4.4 that A can be taken to be symmetric. So instead of solving Equation 2.69 for A, we instead solve the equation

$$A^T(a)A(a) = X(a). \qquad (2.122)$$
(2.122) If A(a) is symmetric, the eqations are identical. Moreoer, from the remarks in Section 2.4.4, we can choose A to be diagonal. Therefore, gien X, we wold like to find a ale for A in the aboe eqation which is diagonal. Unfortnately, since we cannot garantee that the crrent X is diagonal, this will often be impossible. Since we wold like A to be as 76

diagonal as possible, we apply the Cholesky method described earlier. This gives us an upper triangular A(a) which satisfies the above equation.

Having used Equation 2.69 to re-calculate A, we can then use the remaining equation to calculate the remaining variable F. We recast Equation 2.70 as

F(a,b) = A^{-1}(a) Y(a,b) A^{-1}(b)    (2.123)

and substitute our new values for A and Y into the right-hand side to derive a new value for F(a,b).

Now we have gone through all the equations and recalculated each variable exactly once. Therefore we can return to the beginning and repeat this process. Our hope is that by repeating this process enough times, the matrices will eventually converge to a solution of the CTM equations. Our empirical evidence seems to indicate that this is indeed the case.

However, the method so far has some omissions. One is that it assumes that the matrices are of fixed size, and never changes that size. So as it stands, we can only obtain the solution to the CTM equations for finite matrix size. This is undesirable, because it is only in the infinite-dimensional limit that the solutions for the equations give us the actual value of κ. Furthermore, it is not always obvious what matrices we should start our variables at; we would like to start them reasonably close to the solution, so they have a chance of converging, but it is not obvious where that is.

Ideally, we want to start our matrices small (thereby having fewer initial conditions to set), and somehow grow the matrices, so that as the algorithm progresses they become larger and larger, and make our approximation of κ more and more accurate. Furthermore, we want to expand the matrices by one row and column at a time, so that we can observe the convergence effects for fixed matrix sizes. Unfortunately there does not seem to be an intuitive way to expand the matrices in this way. We ended up using ad hoc methods of expanding (which really means that we tried something that didn't seem too weird, and if it worked, we kept it!).
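The Cholesky update of A described above can be sketched as follows. This is an illustrative implementation (assuming X(a) is a symmetric positive definite matrix of scalars), not the thesis code.

```python
import math

def cholesky_upper(X):
    """Upper-triangular A with A^T A = X, for X symmetric positive definite:
    the factorisation used to update A from X as in Eq. 2.122."""
    n = len(X)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j + 1):
            # subtract the contribution of the rows already computed
            s = sum(A[k][i] * A[k][j] for k in range(i))
            if i == j:
                A[j][j] = math.sqrt(X[j][j] - s)
            else:
                A[i][j] = (X[i][j] - s) / A[i][i]
    return A

X = [[4.0, 2.0], [2.0, 10.0]]        # = A0^T A0 for A0 = [[2, 1], [0, 3]]
A = cholesky_upper(X)                # -> [[2.0, 1.0], [0.0, 3.0]]
```

With A in hand, the F update of Equation 2.123 is a pair of triangular solves (or inversions) against the new Y.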
In general, our expansion methods expanded only the X and Y matrices, so if we wished to expand the matrices, we did it after recalculating X and Y from the first two CTM equations. We could then calculate the expanded A and F from the remaining equations.

For setting initial conditions, we started off with 1 × 1 matrices which were model-dependent. We used the values that the graphical interpretation of the matrices would have had if the cut had 0 length. This usually worked well.

To summarize, the procedure for our iterative CTM method is:

1. Set all the matrices to their initial values of size 1 × 1.

2. Apply the power method to Equation 2.53 to update ξ and X(a) (in practice, we iterated Equation 2.53 eight times).

3. Apply the power method to Equation 2.54 to update η and Y(a,b).

4. If we have iterated sufficiently long at the current matrix size (we used 5 iterations), then expand the matrices by one row and column.

5. Apply the Cholesky method to X(a) to update A(a).

6. Set F(a,b) = A^{-1}(a) Y(a,b) A^{-1}(b).

7. Go back to step 2.

We found that for the models that we applied it to, this process seems to converge reasonably well. We will discuss the convergence of the method in later sections.

The hard squares model: an example

To illustrate the iterative CTM method (in particular the model-specific parts), we apply it to the low-density (where occupied spins are discouraged) expansion of the hard squares model in Section 2.7. For this model, ω is given in Equation 2.38 as

ω(a b; c d) = 0 if a = b = 1, a = c = 1, b = d = 1 or c = d = 1; z^{(a+b+c+d)/4} otherwise,    (2.124)

where z is the weight of an occupied (state 1) spin. The initial conditions that we used for this model are

F(0,0) = X(0) = X(1) = Y(0,0) = Y(0,1) = Y(1,0) = 1    (2.125)

F(0,1) = F(1,0) = z^{1/4}    (2.126)

F(1,1) = Y(1,1) = 0.    (2.127)

In particular, both F(1,1) and Y(1,1) are 0 because they require 2 occupied spins to be adjacent. Eventually, it does not really matter what values we use, as long as the process converges (which it seems to do).

We used the following expansion procedure: if the matrices are of dimension n × n, then after recalculating X and Y via the first two CTM equations, we set all matrices to be of dimension (n+1) × (n+1), with their previous values in the top left corner. We then set

X_{n+1,n+1}(a) = z^{1/2} X_{n,n}(a),  a = 0, 1    (2.128)

Y_{i,n+1}(0,0) = Y_{n+1,i}(0,0) = Y_{n,n}(0,0)    (2.129)

Y_{i,n+1}(0,1) = Y_{n,n}(0,1)    (2.130)

Y_{n+1,i}(1,0) = Y_{n,n}(1,0)    (2.131)

with all other elements equal to 0. As we pointed out in Section 2.4.4, if we apply the appropriate transformations, then we can take X(a) to be diagonal. In fact, we can also take the diagonal elements to be in decreasing order, where a series is considered smaller than another series if it has a larger leading power of z, or a smaller leading coefficient if the leading powers are equal. This is

why we only set X_{n+1,n+1}(a) to be non-zero, and force its smallest power of z to be larger than the previous diagonal term.

Applying the iterative method using these initial conditions and expansion method gives the following sequence of κ approximations:

1 + z − 2z² + 7z³ − 28z⁴ + ...
1 + z − 2z² + 8z³ − 40z⁴ + ...
...
1 + z − 3z² + 2z^{5/2} + 13z³ − 22z^{7/2} − 40z⁴ + ...  (first iteration at size 2)
...

which appears to converge reasonably well. The last approximation is correct up to order z^{15}; the z^{16} term is 1 less than the exact number.

Convergence/results

We applied the iterative method to the hard squares model to find series for matrix sizes up to 10 × 10. We discuss our results in this section.

One weak point in the theory of the iterative method is that we can't actually show that the values we derive will actually converge to κ. Furthermore, even if it does converge, we cannot theoretically estimate the rate or behavior of the convergence. Therefore we must rely on empirical evidence. Fortunately, if we choose the initial conditions and method of expansion properly, the method seems to converge fairly well.

Given that we are merely cycling through the equations and solving them one by one, we would expect to either converge to the solution or not converge at all. It seems that if we fix the matrix size and solve the equations exactly, the approximation that results for κ gives the actual value up to some power of z, depending on the size of the matrices. Furthermore, the method is able to attain all the correct terms at every size, after a number of iterations. This leads us to think that the method is converging.

Of great interest is how rapidly the approximations converge. We can measure this by observing how many coefficients our approximation for κ gives exactly.
For small matrix sizes (and hence relatively inaccurate approximations) this is easy to tell, as we know the coefficients of κ up to the z^{42} term from [20]. However, as the matrix sizes become larger,

we do not have this luxury. Fortunately, for higher matrix sizes, we noticed a pattern in the calculations. Because our ω returns cell weights as whole powers of t = z^{1/4}, all our calculations were done in series of t. However, κ is a series in whole powers of z. We noticed that our approximations to κ started as a series in whole powers of z, and then after a number of terms, broke down into powers of t that were not whole powers of z. We found that at higher matrix sizes, the terms where the powers of z were whole were generally correct, while any terms after the first fractional power of z were incorrect.

The other way that we used to determine which coefficients were exact was to compare the result with that obtained from a larger matrix dimension. As we observed before, the ψ-space generated by F at a fixed matrix size includes all the ψ-spaces generated by F's of lower dimension, so solving the CTM equations at a fixed size will always yield a more accurate approximation than solving the system at lower sizes. Therefore a series term is probably accurate if it agrees with the corresponding term in an approximation resulting from a higher dimension.

Tab. 2.1: Convergence of the iterative method for the hard squares partition function (matrix size vs. number of correct terms).

Using these methods, Table 2.1 shows the number of terms we can get exactly from each fixed matrix size. It is interesting to note that the number of correct terms always seems to be a multiple of 8, and that it does not always increase strictly. Furthermore, it seems possible that at size 2^n we get 16n correct terms, which would imply that the algorithm is exponential in time. At the largest size (10 × 10), we managed to get 48 correct terms. It is a tribute to the efficiency of Baxter's method that in 1979 he managed to reach 43 terms with the computing resources of the time!
However, our method appears to be more general.

Technical notes

We programmed the iterative method for series for the hard squares model, which is the main model that we experimented upon, using C++. While doing so, we came across some technical difficulties, which we shall describe here.

Firstly, we had the ubiquitous problem of rounding. The series terms are very large integers, and we will also have to manipulate large integers in intermediate steps. Considering that the terms are exact integers, however, we would like to use exact arithmetic. Unfortunately the precision required (the z^{42} term is a very large integer) is much larger than that provided in any of the standard data types in C++. Moreover, if we used a floating-point type then we would need an incredibly large precision to eliminate all possible rounding errors. This would slow the computation immensely, so we try to use exact types.

We used two different approaches. The first way we tried was to use rational numbers (type cl_RA) from CLN, an arbitrary precision library written by Haible ([67]). Unfortunately, although the final series terms are integers, it is not true that all terms of all intermediate series are integers. In fact, in our calculations these non-integer terms frequently became very unwieldy fractions, with both numerator and denominator far in excess of the actual number. This slowed the computation significantly.

The other approach we used was to use the well-known Chinese Remainder Theorem, which we state below (taken from [47, Section 5, Theorem 2]).

Theorem. Let p₁, p₂, ..., p_n be a set of integers which are relatively prime, i.e. gcd(p_i, p_j) = 1 for i ≠ j. Also let a₁, a₂, ..., a_n be a set of arbitrary integers. Then there exists a unique integer x which satisfies

x ≡ a_i mod p_i,  i = 1, 2, ..., n    (2.132)

and

0 ≤ x < p₁p₂···p_n.    (2.133)

We can use this theorem by performing our computations multiple times: once in integers mod p₁, once in integers mod p₂, and so on. If all the coefficients of the original series lie between 0 and p₁p₂···p_n, then we can reconstitute the series from the results of all the runs. We can only do this because we know that all the coefficients are integers. However, as we mentioned before, not all of the terms of the intermediate sequences are integers, and neither are the terms in the final series that are not exact.
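The reconstitution step can be sketched as follows. The moduli here are illustrative primes congruent to 3 mod 4 (as the square-root lemma below requires), not necessarily the ones used for the thesis computations.

```python
def crt(residues, moduli):
    """Reconstruct the unique 0 <= x < prod(moduli) with x = a_i (mod p_i),
    for pairwise-coprime moduli (incremental, Garner-style combination)."""
    x, M = 0, 1
    for a, p in zip(residues, moduli):
        t = ((a - x) * pow(M, -1, p)) % p   # solve x + M*t = a (mod p)
        x += M * t
        M *= p
    return x

# a large integer coefficient recovered from three modular runs
primes = [2147483647, 2147483587, 2147483579]   # all = 3 (mod 4)
coeff = 123456789012345678901
residues = [coeff % p for p in primes]
recovered = crt(residues, primes)               # == coeff
```

Each modular run repeats the entire series computation with coefficients reduced mod one p_i; only at the end are the runs combined as above.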
Fortunately this does not really matter, as it does not interfere with the calculations, and we really only wish to know the terms which are exact.

The remaining difficulty with using the Chinese Remainder Theorem is that Cholesky decomposition requires us to take the square root of a series, which involves taking the root of the first term. This is not always possible in the integers modulo p: any number will have the same square as its negative, so there will only be (p+1)/2 squares among the p numbers. So not all of the integers modulo p will have square roots. We basically cannot do anything about this, but given that we end up with integers, it seems unlikely that we will ever take the square root of a number that does not have an integer square root, or at least a rational one.

This still leaves us with the problem of how to find a square root modulo p. We use the following lemma, taken from [85, Section II.2].

Lemma. Let p be a prime where p ≡ 3 mod 4, and let a be an integer. Then if a has a square root modulo p, a^{(p+1)/4} is a square root of a modulo p.

Proof. Let x be a square root of a. Then, modulo p,

a^{(p+1)/4} ≡ (x²)^{(p+1)/4} ≡ x^{(p+1)/2} ≡ x · x^{(p−1)/2}.    (2.134)

Now since

(x^{(p−1)/2})² ≡ x^{p−1} ≡ 1 mod p    (2.135)

from Fermat's Little Theorem, we know that x^{(p−1)/2} must be either 1 or −1 mod p. Therefore

a^{(p+1)/4} ≡ ±x mod p,    (2.136)

which proves the lemma.

So as long as we use primes which are equivalent to 3 mod 4, we can find the square root easily if it exists. Using these methods, we can implement the calculations as integer calculations only, using the C++ integer type int. The compiler that we used allocates 4 bytes of information to this type, which is 32 bits. Therefore the range of this type is from 0 to 2^{32} − 1. We wanted to be able to add two such numbers, so we limit our variables to half this limit. So for our prime moduli we chose the largest possible primes that are equivalent to 3 mod 4 and less than 2^{31}. The largest such prime is 2147483647.

The other problem that we encountered was one of storage. Obviously it is impossible to keep all terms of the series; in fact, considering that only the first few terms are accurate, we would not even want to. In all the series operations we used, terms of higher power in the operands do not affect terms of lower power in the result, so theoretically we should keep a fixed number of terms in the series, which is equal to the final number of terms that the series should yield. Unfortunately, for reasons unknown to us, when we attempted to do this, we found that we did not manage to produce all the terms that we should produce at that matrix size. We have no idea why this is so, but to make the method converge for a certain number of terms, we have to keep almost twice as many terms as that. For example, to derive the full number of series terms (48) at size 8, we have to keep more than 80 terms for our intermediate series. This slows our calculation down greatly.
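The square-root computation from the lemma can be sketched as follows; this illustrative version squares the candidate to detect non-residues.

```python
def mod_sqrt(a, p):
    """Square root of a modulo a prime p = 3 (mod 4), computed as a^((p+1)/4)
    per the lemma above; returns None when a is not a square mod p."""
    assert p % 4 == 3
    a %= p
    r = pow(a, (p + 1) // 4, p)
    return r if (r * r) % p == a else None

assert mod_sqrt(2, 23) in (5, 18)      # 5^2 = 25 = 2 (mod 23)
assert mod_sqrt(5, 23) is None         # 5 is a non-residue mod 23
```

The lemma only guarantees a root up to sign, so either of ±r may be returned; in the series computation it does not matter which sign is used, as long as it is used consistently.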
It also throws off our convergence estimates: it is possible that if we had kept more terms, we would have derived more series terms from each matrix size.

Efficiency

The efficiency of the iterative method depends largely on the convergence of the series at each matrix size. As we do not have a general formula for how many terms will be accurate for a fixed matrix size, it is difficult to analyse the efficiency of the method. All we can do is observe the efficiency of the method relative to matrix sizes.

Suppose that we wish to run the algorithm until the matrices are of dimension m × m, while keeping s series terms. Each addition of series takes O(s) operations, while each

multiplication of series takes O(s²) operations. This is also the efficiency of a division operation on series. Now, for m × m matrices, a matrix addition operation takes O(m²) addition operations, while a multiplication operation takes O(m³) operations, since each of the m² elements in the result requires O(m) addition and O(m) multiplication operations. For matrices of series, addition would therefore take O(m²s) operations, and multiplication would require O(m³s²) operations.

Now, each pass through the equations requires a fixed number of matrix multiplications and additions. Also, from the step where we expand the matrices to the step where we expand them again is a fixed number of passes. On the other hand, if we build the matrices up from 1 × 1 matrices, we will need to expand them m − 1 times. While the matrices will not always be m × m in size, since we take the same number of iterations at every matrix size, we can expect the average size to be m/2, which is linear in m anyway. Therefore, the iterative method will take O(m⁴s²) operations to produce matrices of dimension m × m with series length s.

Theoretically, since this is polynomial time, the efficiency (or inefficiency) of this method would depend strongly on the relation of the series convergence to the matrix size, as shown in Table 2.1. For small sizes (1-4), this would appear to be linear, which would make the CTM extraordinarily (indeed implausibly) efficient, but at larger sizes convergence is much slower (and also more unpredictable). If, as suggested earlier, the convergence is exponential in the matrix size, then it is also exponential in the time taken. In practical terms, however, the iterative method as given is simply not particularly efficient.
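The cost model above (O(s) addition and O(s²) multiplication for series truncated at s terms) can be made concrete with a minimal truncated-series type; this is an illustration, not the thesis code, and it also shows why higher-order terms in the operands cannot affect lower-order terms of a result.

```python
# Truncated power series in one variable, keeping coefficients of t^0..t^(s-1).
class Series:
    def __init__(self, coeffs, s):
        self.s = s
        self.c = (list(coeffs) + [0] * s)[:s]

    def __add__(self, other):                      # O(s)
        return Series([x + y for x, y in zip(self.c, other.c)], self.s)

    def __mul__(self, other):                      # O(s^2), truncating at t^s
        out = [0] * self.s
        for i, x in enumerate(self.c):
            if x:
                for j in range(self.s - i):
                    out[i + j] += x * other.c[j]
        return Series(out, self.s)

s = 5
a = Series([1, 1], s)            # 1 + t
b = Series([1, -1], s)           # 1 - t
prod = (a * b).c                 # [1, 0, -1, 0, 0], i.e. 1 - t^2
```

Division (needed for the normalisations by ξ and η) has the same O(s²) cost, computed coefficient by coefficient.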
Quite apart from having to run the entire calculation 5 times to make use of the Chinese Remainder Theorem, the high (but fixed, so it does not show up in efficiency analysis) number of calculations required for each iteration, and the requirement to keep unpredictable numbers of extra series terms, make it hard for the method to get past 10 × 10 matrices in a reasonable time.

The renormalization group method

The iterative method outlined in the above section will eventually reach a solution, given enough time. However, in its current form it has (at least) three disadvantages. Firstly, because at each iteration we must apply the power method (which is itself an iterative method) twice, it isn't very fast. Secondly, our expansion procedure, for moving from one matrix size to the next higher size, is very arbitrary; we use it only because it seems to work! Thirdly, but perhaps most importantly from a theoretical perspective, we do not have any proof that the method actually converges.

The so-called corner transfer matrix renormalization group method (CTMRG), which was first devised by Nishino and Okunishi in [105], does not suffer from any of these problems. In this section we present our version of this method, which differs only slightly from the original.

In the iterative CTM method, we basically took all the CTM equations and tried to solve

them for all the variables in the equations. As the variables are matrices which only become exact when they are infinite-dimensional, this involves a large number of variables and a large amount of computing time. In the renormalization group method, we strip back the method by returning to the meaning of the matrices. Considering that from Equations 2.69 and 2.70, the X and Y matrices can be derived directly from A and F, it would seem desirable to have a method where we only need to calculate A and F. So we would like to have a procedure which, given values for A and F, will find new values for A and F which are closer to the solution of the CTM equations.

We do this by looking at the graphical interpretation of the matrices. We have interpreted the optimal A(a) as the transfer matrix of a quarter plane around a spin of value a, while F(a,b) is the transfer matrix of half a row with end-spins a and b. We note that only in the infinite-dimensional case (where the matrices transfer an infinite area) do the matrices yield the exact κ. Therefore, to put it imprecisely, we will probably get closer to the solution of the equations by making the finite-dimensional matrices transfer as much as possible.

This is done by sequentially expanding and reducing the A and F matrices. Every time we expand the matrices, we make them transfer a larger area. Every time we reduce the matrices, we try to reduce them in such a way that as little information as possible is lost, so they are closer to the exact solution of the equations. So now we need to find the procedures for expanding and reducing our matrices.

We expand by doubling the size of our matrices, so that the half-row is extended by 1 spin. For the F matrices, we place this spin immediately next to the end-spin, so that the end-spin remains the same and we add one single cell.

Fig. 2.25: Expansion of the F matrices in Equation 2.137.
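For the hard squares weights, this doubling step can be sketched as follows. This is an illustration assuming the block form ω(i a; j b) F(i,j) of Equation 2.137 below, with the rows and columns for new spin value 0 placed first.

```python
def omega_hs(a, b, c, d, z):
    # hard-squares cell weight (Eq. 2.124): zero if two spins on an edge
    # of the cell are both occupied, else z^((a+b+c+d)/4)
    if (a == b == 1) or (a == c == 1) or (b == d == 1) or (c == d == 1):
        return 0.0
    return z ** ((a + b + c + d) / 4)

def expand_F(F, a, b, z):
    """Double F(a,b): the (i,j) block of the new matrix is
    omega_hs(i, a, j, b, z) * F[(i, j)], for new spin values i, j in {0, 1}."""
    n = len(F[(0, 0)])
    big = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in (0, 1):
        for j in (0, 1):
            w = omega_hs(i, a, j, b, z)
            for r in range(n):
                for c in range(n):
                    big[i * n + r][j * n + c] = w * F[(i, j)][r][c]
    return big

# starting from the 1 x 1 initial conditions of Eqs. 2.125-2.127:
z = 0.5
F1 = {(0, 0): [[1.0]], (0, 1): [[z ** 0.25]],
      (1, 0): [[z ** 0.25]], (1, 1): [[0.0]]}
F2 = expand_F(F1, 0, 0, z)      # a 2 x 2 matrix
```

Each call doubles the dimension, which is why the reduction step described below is needed to counterbalance it.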
We then order the rows and columns of the new F so that (if the possible spin values are 0 and 1) all the rows/columns where the new spin is 0 come first. We also need to add the weight of the extra cell, which results in a new F of

F'(a,b) = [ ω(0 a; 0 b) F(0,0)   ω(0 a; 1 b) F(0,1) ]
          [ ω(1 a; 0 b) F(1,0)   ω(1 a; 1 b) F(1,1) ].    (2.137)

We illustrate the expansion of F in Figure 2.25. If we look at the ψ that is generated by any particular F:

ψ_{σ₁,σ₂,...,σ_m} = Tr (F(σ₁,σ₂) F(σ₂,σ₃) ··· F(σ_m,σ₁))    (2.138)

then it can be seen that F' corresponds to multiplying the half-plane partition function by

the column transfer matrix:

[Vψ]_{σ₁,σ₂,...,σ_m} = Tr (F'(σ₁,σ₂) F'(σ₂,σ₃) ··· F'(σ_m,σ₁)).    (2.139)

Now we can see that expanding F in this manner is equivalent to applying the power method to V, since ψ is an eigenvector corresponding to the maximum eigenvalue of V. If we were to expand F like this repeatedly, we would therefore converge to the solution of the model. On the other hand, every time we expand F we double its dimension. Therefore we need to counterbalance the expansion by reducing the matrix size after every expansion.

This is where A comes in. We keep it at the same size as F by expanding it in a similar manner, as shown in Figure 2.26. Instead of adding a cell, we add a cell and two half-row transfer matrices, to double the size of A. Formally, we replace A(a) with

A'(a) = [ Σ_b ω(0 b; 0 a) A(b)   Σ_b ω(1 b; 0 a) A(b) ]
        [ Σ_b ω(0 b; 1 a) A(b)   Σ_b ω(1 b; 1 a) A(b) ].    (2.140)

Fig. 2.26: Expansion of the A matrices in Equation 2.140.

To reduce the dimension, we note that finding κ is the result of a maximisation problem in A and F, so we would like it to be as large as we possibly can. Considering that Σ_a Tr A⁴(a) can be thought of as the partition function, our reduction procedure would like to make this quantity as large as possible. Since the CTM equations are valid under the transformations A(a) → Pᵀ(a)A(a)P(a) and F(a,b) → Pᵀ(a)F(a,b)P(b), it seems logical to try some sort of reduction which involves multiplying by matrices with orthonormal columns. These matrices would have to be non-square to reduce the dimension.

We recall our remark from Section 2.4.4 that the A matrices can be taken to be diagonal. If they are indeed diagonal, then an immediately obvious reduction can be made by retaining the most significant (i.e. largest) elements and throwing away the rest. Therefore we diagonalize A(a):

A(a) = Vᵀ(a) D(a) V(a)    (2.141)

where V(a) is an orthonormal matrix and D(a) is diagonal.
We further impose the condition that the elements of D(a) are in decreasing order. We then remove the desired number of rows and columns from the bottom and right of D(a) to generate our new value for A(a).
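Numerically, this reduction can be sketched as below. The eigendecomposition here uses plain power iteration with deflation (a stand-in for a proper diagonalization routine), and the matrix is illustrative.

```python
def top_eigenpairs(A, k, iters=300):
    """The k most significant eigenpairs of a symmetric matrix, via power
    iteration plus deflation A <- A - lam * v v^T after each pair."""
    n = len(A)
    A = [row[:] for row in A]
    pairs = []
    for _ in range(k):
        v = [float(i + 1) for i in range(n)]       # generic start vector
        for _ in range(iters):
            w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
            nrm = sum(x * x for x in w) ** 0.5
            v = [x / nrm for x in w]
        lam = sum(v[i] * A[i][j] * v[j] for i in range(n) for j in range(n))
        pairs.append((lam, v))
        for i in range(n):
            for j in range(n):
                A[i][j] -= lam * v[i] * v[j]
    return pairs

def reduce_A(A, m):
    """Keep only the m most significant eigenvalues of A; the rows of V
    are the kept eigenvectors, which can also project other matrices."""
    pairs = top_eigenpairs(A, m)
    return [lam for lam, _ in pairs], [v for _, v in pairs]

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 1.0]]
D, V = reduce_A(A, 2)        # truncated diagonal and projection rows
```

The same projection rows are what must be applied to F to keep the equations consistent, as described next.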

Notably, we do not reverse the diagonalization. To keep the CTM equations valid, we must apply the same transformation to F. First we calculate Vᵀ(a)F(a,b)V(b) and then remove the same number of rows and columns from the bottom and right as we did from D(a). This results in our matrices being reduced to the desired size.

While this reduction algorithm works well when we deal with numerical values only, if we attempt to apply it when expressing our quantities as series, we then have to diagonalize matrices of series. As discussed in Section 2.9, this is not an easy task. We concluded that it was best to avoid the diagonalization altogether. Instead, we used the Arnoldi method as described earlier. This provides an approximate diagonalization, while being substantially easier to compute. However, in practice we found that the Arnoldi method does not make the matrices converge as well as diagonalization.

We also need to figure out how to jump from one size to a larger one, i.e. how to expand the matrices. Fortunately this is easy, as we can just cut off one less row and column when reducing the matrices.

In summary, the procedure for the renormalization group method is:

1. Start with initial A and F of dimension 1 × 1.

2. Expand the A and F matrices using the formulas we have given above.

3. Diagonalize A, or use the Arnoldi method on A.

4. Reduce the matrices to their original size as described above. If we have iterated sufficiently long at the current matrix size (again we used 5 iterations), then instead reduce the matrices to one row and column more than their original size.

5. Go back to step 2.

After this procedure has been carried out, we can then calculate κ via the equation given earlier. We can also calculate other quantities of interest from the expressions given earlier.

Convergence/results

We first attempted to apply the renormalization group method to the hard squares model to find the partition function per site as a series in z, the fugacity of an occupied site.
Unfortunately, while the series converged for 2 × 2 matrices, at higher levels it simply did not converge, even when we tried diagonalizing the matrices outright instead of using the Arnoldi method. We are not sure why this is so, except that, seeing as we are not guaranteed to solve the CTM equations if we iterate at a fixed size, there is no requirement for our intermediate approximations to κ to have their first few terms correct. Also, when we tried to use the Chinese Remainder Theorem in the same way as we used it for the iterative method, we ended up attempting to take the square root of many numbers which did not have a root. There were a few such cases in the iterative method, but it seemed to be able to shrug them off, which the renormalization group method could not.

We did, however, notice that the convergence at a fixed matrix size seems to be slightly less than that given by the iterative method; for example, we managed 10 correct series terms from 2 × 2 matrices, as opposed to 16 for the iterative method. This implies that we never actually reach the solution of the equations at fixed size, a conclusion which will be supported by our numerical calculations on the second-neighbor Ising model in Chapter 3.

To try our hand at something more tractable, we attempted to use the method to find numerical values, rather than series. Here we were more successful, and managed to apply the method to both the hard squares model and the second-neighbor Ising model. We studied the convergence of the method on the second-neighbor Ising model in detail. The results of that study are given in Section 3.3. In short, the method appears to work well for numerics, but the convergence rate depends on the values of the parameters (the interactions), or more specifically, how close the parameters are to a critical point or line. The closer to criticality, the worse the convergence.

By using numerics, we were able to run the method up to large matrix sizes in a relatively short time. For calculating numerical values, this method is more attractive than the iterative method, because although it is less accurate at any matrix size, it is able to reach larger matrix sizes than the iterative method in the same time. This means that we can get more accurate approximations to κ with the renormalization group method.

We compared our approximations to κ (denoted by κ̂) at z = 1 with the value

κ = 1.503048082475...    (2.142)

found in [18]. A log-log plot of the results is shown in Figure 2.27 (the unscaled plot is not very informative because the convergence is too rapid). It seems that the convergence roughly obeys a power law, which is encouraging.
Unfortunately, after this size, the approximations tend to fluctuate without converging, which we think is due to finite-iteration error.

Technical notes

We encountered similar problems to the iterative method when dealing with series. With numerics, we used the arbitrary precision real type cl_R from the CLN library ([67]) for our data.

To diagonalize our matrices, we first found the largest eigenvalue and corresponding eigenvector by means of the power method (iterated 15 times), eventually finding the eigenvalue by using the Rayleigh quotient

(vᵀAv)/(vᵀv).    (2.143)

To speed convergence, if we had diagonalized the matrix in a previous iteration, we used the eigenvector from that previous diagonalization as a starting vector for the next diagonalization. After finding the eigenvalue and eigenvector, we then used the result below ([122, Theorem 9.5.1]) to deflate the matrix (i.e. turn it into a matrix with the same eigenvalues except for the largest one).

Lemma. If a symmetric n × n matrix A has eigenvalues λ₁, λ₂, ..., λ_n, where λ₁ is

the largest eigenvalue, then if v_i is the normalised eigenvector of A corresponding to λ_i, the symmetric matrix

A − λ₁ v₁ v₁ᵀ    (2.144)

has eigenvalues λ₂, λ₃, ..., λ_n, 0, with corresponding eigenvectors v₂, v₃, ..., v_n, v₁ respectively.

Finally we repeated the process (generating the next largest eigenvalue every time) until the desired number of eigenvalues had been found.

Fig. 2.27: Log-log plot of approximated κ vs. final matrix size (axes: ln(κ − κ̂) against ln(matrix size)).

Efficiency

To find the efficiency of the method relative to the matrix size, suppose we wish to run the algorithm to an m × m matrix size, while keeping s series terms (where s = 1 if we are doing numerics). As above, series of length s require O(s) operations for addition, and O(s²) operations for multiplication. For m × m matrices, addition of two matrices will require O(m²s) operations, and multiplication will take O(m³s²) operations. Now each expansion of the matrices requires each element to be multiplied by an ω, and in the case of A, added together. This requires O(m²s²) operations. On the other hand, the reduction requires the multiplication of matrices (albeit a fixed number) which takes O(m³s²) operations. If we are using diagonalization, we need to find around m eigenvalues (remember we are diagonalizing 2m × 2m matrices), each of which requires a fixed number of matrix-vector multiplications (which take O(m²s²) operations). Therefore diagonalization

also takes O(m³s²) operations, which means that that is the number of operations needed for one iteration (expansion and reduction). Again, we have a fixed number of iterations at each matrix size, and m − 1 matrix size expansions. As our average matrix size will again be linear in m, the number of operations required for the renormalization group method is O(m⁴s²), the same as the iterative method.

For numerical calculations, we have s = 1, so the algorithm takes O(m⁴) time. If the error of our approximations does indeed obey a power law in relation to matrix size, then this implies that it also obeys a power law in relation to time taken. This means that the method is still an exponential-time algorithm (in that it takes α^n time to produce n digits of κ).

On the whole, the renormalization group method is much faster than the iterative method in our testing. One reason is that the iterative method not only iterates within each matrix size, but at every pass (recalculation of every matrix once) it uses the power method, which is itself an iterative matrix method. The renormalization group method, if using the Arnoldi method, does not have this inefficiency. If it uses diagonalization, it does have to use an iterative matrix method, but even so, the number of matrix multiplications needed is far less than that needed for the iterative method.

Conclusion

In this chapter, we have looked at the problem of corner transfer matrices, and how to generate solutions for statistical mechanical models using Baxter's CTM equations. Firstly, we (rather laboriously) re-derived the CTM equations, paying special attention to the hard squares model. Then we proposed two methods. One was our own invention, based on iterating through the CTM equations; the other was based on the renormalization group method of Nishino and Okunishi.
Neither of these methods yielded exactly what we wanted: the iterative method was too slow to improve significantly on Baxter's hard squares series of 25 years ago, while the faster renormalization group method did not work at all for series. Although we have a fairly good understanding of the CTM equations and how they work, and we have devised some methods to exploit them, the fact of the matter remains that we have not really achieved the breakthrough in efficiency that Baxter found in 1978, and that the CTM methods have promised ever since.

This then raises the question: what other avenues can we take to further the CTM idea? For a start, considering that the renormalization group method works very efficiently for numerics, it seems strange that it should fail so badly for series calculations. It is possible that we are missing some small change, or error, in the method that would allow it to work for series. Certainly this is something to work on, although at the moment we have no idea what that small change or error might be. (It is also possible that the failure of the algorithm might be due to programming bugs, although this seems unlikely!)

Other possibilities include attempting to apply both methods to other models. One model that has been frequently mentioned as a possibility is the q-state Potts model. This is an extension of the Ising model where each spin has q possible values, and interacts only with nearest-neighbor spins with the same value. Owing to the great increase in the number

of possible configurations, physical quantities for the q-state Potts model have not been calculated as accurately as many people would like. For the CTM methods, there is an obvious extension which would make the Potts model quantities calculable (simply by not restricting spins to only 2 values, and then adjusting ω). Although this is not very efficient, there is a possibility that there may be a symmetry between the non-zero states that can be exploited to further the method. Some thought has been given in that direction, albeit with little success as yet.

Another, more immediate model that we can apply the CTM methods to is the second-neighbor Ising model, as defined in Chapter 1. In fact, we have done significant studies on this model with the renormalization group CTM method and the previously-used finite lattice method. This is the topic of the next chapter.

3. THE SECOND-NEIGHBOUR ISING MODEL

3.1 Introduction

In Chapter 1, we gave a brief introduction to statistical mechanical models and how they arise. As mentioned, one of the most studied (if not the most studied) of these models is the spin-1/2 square lattice Ising model, often referred to simply as the Ising model. This model takes into account two types of magnetic interaction: an external field of strength H (which acts on single spins), and an interaction of strength J (which acts on nearest-neighbor pairs of spins). The Hamiltonian that arises from these two interactions is

H(σ_1, σ_2, ..., σ_N) = −J Σ_{<i,j>} σ_i σ_j − H Σ_i σ_i,   (3.1)

with the partition function defined as

Z_N = Σ_{σ_1, σ_2, ..., σ_N} e^{−βH(σ_1, σ_2, ..., σ_N)}.   (3.2)

The Ising model is a very useful model to study, because it is relatively simple, and furthermore a simple case (zero field) has been solved exactly (in fact, it has been solved on a variety of two-dimensional lattices; see [14], [42] and [43]). However, the assumption that the magnetic spin-pair interaction only applies to spins/atoms that are one unit or less apart seems rather unrealistic. In reality, there would be magnetic interactions between all spins, but the strength of such an interaction would decrease very rapidly over distance.

On the other hand, even when the interaction is restricted to nearest-neighbor spins, the model is hard to solve. If we removed this restriction entirely, the model would become much harder. So we compensate, by allowing the interaction to affect nearest-neighbor pairs and second-nearest-neighbor pairs only, with different strengths. We let the original nearest-neighbor interaction have strength J_1, and let the second-nearest-neighbor interaction apply in a similar manner, but with strength J_2. This is an IRF model; the interactions are shown in Figure 3.1. The Hamiltonian of the system is

H(σ_1, σ_2, ..., σ_N) = −J_1 Σ_{<i,j>} σ_i σ_j − J_2 Σ_{<i,j>_2} σ_i σ_j − H Σ_i σ_i   (3.3)

where the second sum is over all second-nearest-neighbor pairs of spins. We call this model the second-neighbor Ising model.
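To make the definitions concrete, the following sketch evaluates the Hamiltonian (3.3) and computes Z_N by the brute-force sum (3.2) on a tiny periodic lattice. This is our own illustrative code, not the programs used in the thesis; the function names and the choice of a small periodic L x L lattice are assumptions made for the example.

```python
import itertools
import math

def hamiltonian(spins, L, J1, J2, H):
    """Energy (3.3) of an L x L periodic lattice of +/-1 spins.

    spins is a list of L rows, each a sequence of L values in {-1, +1}.
    """
    E = 0.0
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            # nearest-neighbor bonds: count each bond once (right and down)
            E -= J1 * s * (spins[i][(j + 1) % L] + spins[(i + 1) % L][j])
            # second-nearest-neighbor bonds: the two downward diagonals
            E -= J2 * s * (spins[(i + 1) % L][(j + 1) % L]
                           + spins[(i + 1) % L][(j - 1) % L])
            # external field acting on single spins
            E -= H * s
    return E

def partition_function(L, J1, J2, H, beta=1.0):
    """Z_N by direct enumeration of all 2^(L*L) configurations (tiny L only)."""
    Z = 0.0
    for conf in itertools.product((-1, 1), repeat=L * L):
        spins = [conf[i * L:(i + 1) * L] for i in range(L)]
        Z += math.exp(-beta * hamiltonian(spins, L, J1, J2, H))
    return Z
```

As a sanity check, at J_1 = J_2 = H = 0 every configuration has weight 1, so Z = 2^{L^2}. The exponential cost of this enumeration is exactly the infeasibility that motivates the transfer matrix techniques discussed later in the chapter.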

Fig. 3.1: The interactions around a cell for the second-neighbor Ising model.

Fig. 3.2: A lattice with two different types of spins (filled and hollow).

Physically, the second-neighbor Ising model is also appropriate when there is more than one type of atom in the lattice. For example, if the lattice consisted of two different types of atoms, interleaved with each other as in Figure 3.2, then all nearest-neighbor bonds would link one of each type of atom, while all second-neighbor bonds would link two atoms of the same type. The nearest-neighbor bonds would then all have the same strength, as would all the second-neighbor bonds, but we would expect these strengths to be different from each other; they might possibly even have different signs.

The second-neighbor Ising model, although well-studied, has not been studied nearly as much as its simpler first-neighbor counterpart. In part, this is due to the complexity of the model: there are many more possibilities when another variable is added to the mix. In particular, this leads to much more complex phase transitions.

A phase transition occurs when there is a singularity in the free energy (or equivalently, the partition function). Physically, this is indicated by a drastic change in the properties of the magnet when one of the external factors is changed. For example, in our case of a magnet, the interaction strength between particles cannot generally be changed experimentally, but the temperature can. If the temperature is below a certain temperature, called the critical temperature, then in the absence of an external magnetic field, the magnet will retain its own magnetism. This phenomenon is called spontaneous magnetism.
On the other hand, if the temperature is above the critical temperature, then in the absence of the external field, the magnet loses its magnetism, i.e. ceases to be a magnet. If we denote the critical temperature by T_c, then the regime T < T_c is called the low-temperature phase, while the regime T > T_c is the high-temperature phase. At the boundary

Fig. 3.3: A typical configuration in the ferromagnetic low-temperature phase of the Ising model.

(T = T_c), we therefore have a phase transition. In some other models, even more dramatic behavioral changes are possible; for instance, in the model of a liquid, the liquid will become a gas at the phase transition point.

In the case of the simple (first-neighbor) Ising model, the model has two zero-field phase transitions, at the critical points tanh(J_c/kT_c) = ±(√2 − 1), H = 0. This splits the range of possible temperatures into 3 phases. In Figures 3.3 to 3.5, we show what the model looks like in zero field in these phases. Figure 3.3 shows the ferromagnetic low-temperature phase (T < T_c, J > 0), where like neighbors are strongly encouraged. This means that most spins take the same value. Figure 3.4 shows the anti-ferromagnetic low-temperature phase (T < T_c, J < 0), where unlike neighbors are strongly encouraged. This gives an alternating pattern. Figure 3.5 shows the high-temperature phase (T > T_c), where the interaction is weak. This makes the spins look random.

Now, we wish to find the zero-field phase transitions of the second-neighbor Ising model. Naturally, the model with J_2 = 0 is equivalent to the first-neighbor Ising model, and therefore it contains the same critical points as the nearest-neighbor model. However, when J_2 ≠ 0, the model also has phase transitions. It turns out that when 1 − √2 < tanh(J_2/kT) < √2 − 1, the model always has two critical points. These critical points form a line in the J_1-J_2 plane, which we call the critical line.

It is worth noting that both the simple and second-neighbor Ising model possess a

Fig. 3.4: A typical configuration in the anti-ferromagnetic low-temperature phase of the Ising model.

Fig. 3.5: A typical configuration in the high-temperature phase of the Ising model.

Fig. 3.6: We can divide the lattice into two sets of spins such that every nearest-neighbor bond connects one spin from each set.

(a) A configuration of spins. There are 24 like bonds and 16 unlike bonds. The spins about to be reversed are indicated. (b) The same configuration with half the spins reversed. There are now 16 like bonds and 24 unlike bonds.

Fig. 3.7: The Ising model is symmetrical in the parameter J.

large amount of symmetry with respect to spin values. In the case of the first-neighbor Ising model, there is a natural one-to-one correspondence between configurations in the ferromagnetic phase (J > 0) and configurations in the anti-ferromagnetic phase (J < 0) if the external field H is 0. This can be seen by observing that we can divide the lattice into two sets of spins, as shown in Figure 3.6. Every nearest-neighbor bond connects one spin from each set. Now, if we take a particular configuration of spins, and reverse the values of all the spins in one set, we will reverse (i.e. switch like bonds with unlike bonds) every nearest-neighbor bond in the lattice. This creates a correspondence between a configuration in the model with interaction strength J and another configuration in the model with interaction strength −J. The two configurations will have exactly the same weight, since we have no external magnetic field. We illustrate this in Figure 3.7.

This correspondence shows that the partition function of the simple Ising model is unchanged if the interaction strength is reversed. Therefore, any critical point has a corresponding critical point with equal but negative interaction strength, so the critical points always come in pairs.
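The J to −J correspondence can be checked numerically by brute force. The sketch below is our own illustrative code (not the thesis programs): it enumerates all configurations of a small periodic nearest-neighbor lattice at H = 0, where an even side length L keeps the two checkerboard sublattices consistent with the periodic boundary.

```python
import itertools
import math

def ising_Z(L, J, beta=1.0):
    """Zero-field partition function of an L x L periodic nearest-neighbor
    Ising model, by enumerating all 2^(L*L) spin configurations."""
    Z = 0.0
    for conf in itertools.product((-1, 1), repeat=L * L):
        s = [conf[i * L:(i + 1) * L] for i in range(L)]
        E = 0.0
        for i in range(L):
            for j in range(L):
                # each nearest-neighbor bond counted once: right and down
                E -= J * s[i][j] * (s[i][(j + 1) % L] + s[(i + 1) % L][j])
        Z += math.exp(-beta * E)
    return Z

# Reversing the spins on one checkerboard sublattice flips every
# nearest-neighbor bond, so at H = 0 the partition function satisfies
# Z(J) = Z(-J), as the correspondence above predicts.
```

For even L the two computed values agree to rounding error, which is exactly the pairing of critical points described in the text.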
For the second-neighbor Ising model, a similar symmetry applies. We cannot decouple the lattice into two sets of spins for which the bonds only link one spin from each set, but we can take the same sets that we used for the first-neighbor Ising model. When we reverse all the spins in one set, we reverse all first-neighbor bonds, but all second-neighbor

Fig. 3.8: With no nearest-neighbor interaction, the lattice decouples into two separate square lattices.

bonds stay the same. Thus we can reverse the nearest-neighbor interaction, but keep the second-neighbor interaction the same, and get the same partition function. In particular, this means that the critical line for the second-neighbor model is symmetrical about the J_2 axis.

A bit of reflection (no pun intended) shows us that when J_1 = 0, there is no interaction between nearest-neighbor sites on the lattice, even indirectly. Thus the lattice decouples into two square lattices, each of which has nothing to do with the other. We show this in Figure 3.8. If we do this, the original second-neighbor interaction becomes a first-neighbor interaction on the two separate lattices. Therefore each lattice acts as an independent first-neighbor Ising model, and has the same critical points. Thus we know that the second-neighbor Ising model also has a phase transition when J_1 = 0 and tanh(J_2/kT) = ±(√2 − 1).

It turns out that the topmost critical point (J_1 = 0, tanh(J_2/kT) = √2 − 1) is connected to both the J_2 = 0 critical points by a critical line. On the other hand, the other critical point (J_1 = 0, tanh(J_2/kT) = 1 − √2) is actually on a new critical line altogether.

We view the critical lines by transforming into the variables u = tanh(J_1/kT) and v = tanh(J_2/kT), which we will use from now on. This enables us to view the entire realm of physical possibilities in four unit squares. In these variables, the critical lines look approximately like those shown in Figure 3.9. We call the diagram of the critical lines the phase diagram.

From this diagram, it can be seen that there are 4 different phases. The top right phase is the low-temperature ferromagnetic phase, where like nearest-neighbor spins are encouraged and the second-neighbor interaction is either positive or insignificant compared to the nearest-neighbor interaction.
An example of a likely configuration in such a phase has been shown in Figure 3.3.

The top left phase is the low-temperature anti-ferromagnetic phase, where unlike nearest-neighbor spins are encouraged, and again the second-neighbor interaction is positive or insignificant. Note that any nearest-neighbor interaction (positive or negative) encourages like second-neighbors. An example of a likely configuration in this phase has been shown in Figure 3.4.

The middle phase is the high-temperature phase, in which the interactions are too weak to enforce spontaneous magnetization, or work against each other. A typical configuration in this phase has been shown in Figure 3.5. Notably, in these three phases, the second-neighbor interaction never overrides the first-neighbor interaction. However, in the lowest phase, the

Fig. 3.9: An approximate phase diagram in the variables u and v.

Fig. 3.10: A typical configuration in the super anti-ferromagnetic phase of the second-neighbor Ising model.

anti-ferromagnetic second-neighbor interaction does override the first-neighbor interaction, forcing the lattice to decouple into two separate anti-ferromagnetic first-neighbor Ising models. This gives a characteristic same-row or same-column look to the spins. We show a likely configuration in this phase, called the super anti-ferromagnetic phase, in Figure 3.10.

The free energy and all its derivatives (including most physical quantities of interest) are singular at a phase transition, and they tend to behave according to a power law close to the critical point. For example, in the first-neighbor Ising model, the magnetization per site m obeys such a law in terms of the temperature. In this chapter, we say that f(x) ∼ g(x) as x → x_0 if and only if lim_{x→x_0} ln f(x)/ln g(x) = 1. Note that this is a slightly different meaning to that used in other chapters. Using this notation, m obeys the law

m(T) ∼ (T_c − T)^{1/8} as T → T_c.   (3.4)

Many physical quantities obey power laws at criticality, with varying exponents. These exponents are called critical exponents.

In Figure 3.9 we gave a rough sketch of what the phase diagram is estimated to look like. We would like to locate the critical lines more precisely. The location of the critical lines is a well-studied problem, with many different methods applied to it. These methods include closed-form approximations (starting from 1951 in [45], and continuing with [53], [61], [34], [63] and [23]), series expansions (starting from 1969 in [39], continuing with [112], [123] and [79]), Monte Carlo methods ([92], [25] and [28]), and renormalization-group theory ([101], [102] and [133]). More recently, the cluster variation method ([100]) has also been used to approximate the shape of the critical lines.

Because the second-neighbor Ising model can be expressed as an IRF model, we can use the CTM methods described in Chapter 2 to calculate quantities of the model, and through these, the location of the critical line. We do this later in this chapter.
For this purpose, we have used the more efficient renormalization group method. However, we found that for unknown reasons (possibly a breakdown of symmetry), the method breaks down in the super anti-ferromagnetic phase. For this phase, we instead used the finite lattice method (FLM) that we mentioned in Chapter 2. As we approach the critical point, the FLM is not as efficient as the CTM, but in the absence of the latter, we make do with the former.

While we are interested in the location of the critical line everywhere in the phase diagram, we are particularly interested in the crossover point, which is (0, √2 − 1); in particular, in the behavior of the critical line as it approaches this point from below. It has been speculated (in, for example, [23]) that the line has a cusp at that point, and that as it approaches the point, it obeys a power law:

v_c(0) − v_c(u) ∼ u^{ξ_c} as u → 0   (3.5)

where v_c(u) denotes the critical v for a given u (usually pertaining to the higher critical line). The exponent ξ_c is called the crossover exponent, and has been estimated, using a scaling assumption, to be 4/7 ([48]). We will investigate the behavior of the critical line at this point later in this chapter.
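Exponents such as the 1/8 in (3.4) or ξ_c in (3.5) are typically extracted from the slope of a log-log plot near the critical point. The sketch below is our own illustration of that recipe, not thesis code; instead of data from this model it uses the exactly known Onsager-Yang spontaneous magnetization of the first-neighbor square-lattice model, m(T) = (1 − sinh(2J/kT)^{−4})^{1/8} for T < T_c, with k = J = 1.

```python
import math

# Exact critical temperature: sinh(2J/kTc) = 1, so Tc = 2/ln(1 + sqrt(2)).
Tc = 2.0 / math.log(1.0 + math.sqrt(2.0))

def onsager_m(T, J=1.0):
    """Exact spontaneous magnetization of the square-lattice Ising model, T < Tc."""
    return (1.0 - math.sinh(2.0 * J / T) ** -4) ** 0.125

def loglog_slope(eps1=1e-4, eps2=1e-6):
    """Two-point estimate of the exponent x in m ~ (Tc - T)^x near Tc."""
    t1, t2 = Tc * (1.0 - eps1), Tc * (1.0 - eps2)
    return ((math.log(onsager_m(t1)) - math.log(onsager_m(t2)))
            / (math.log(Tc - t1) - math.log(Tc - t2)))
```

Evaluated close to T_c, the slope returned by loglog_slope() approaches 1/8; applying the same recipe to numerical estimates of v_c(u) at small u gives an estimate of ξ_c.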

Another aspect of the phase diagram that we are also interested in is the lower phase boundary. It has been observed that on the upper critical line, the various critical exponents always stay the same, a phenomenon called universality. However, it was suggested by van Leeuwen in 1975 ([133]) that the lower phase boundary is not universal. In 1979, Barber ([8]) confirmed this by showing that the critical exponent for the specific heat is not constant on the lowest critical line. Other studies have also looked at the critical exponents on this line, using Monte Carlo methods ([126], [92], [25], and [2]), series expansions ([112]), renormalisation-group calculations ([102]) and coherent-anomaly methods ([127] and [99]). We apply our methods to study both the location of this line and the critical exponents on it.

In Section 3.2, we outline the finite lattice method and how it works. In Section 3.3, we conduct a detailed analysis of the convergence of the renormalization group CTM method when applied to the second-neighbor Ising model. In Section 3.4, we give a brief primer on scaling theory and generalised homogeneous functions, which we then use in Section 3.5 to derive a scaling estimate of the crossover exponent. Then we turn to our methods to verify our estimate. In Section 3.6, we discuss some ways to estimate the location of the critical lines, using both numerical and series calculations. We do the actual calculations in Section 3.7, as well as estimating the crossover exponent and the critical exponent along the lower critical line. Finally, in Section 3.8, we recap what we have done and look at possible ways to take this research further.

3.2 The finite lattice method

3.2.1 Finite lattice approximation

In our analysis of the second-neighbor Ising model, we use the renormalization group corner transfer matrix method that we described in Chapter 2. However, as the CTM method is still rather experimental, we also analyse the model with the established finite lattice method.
In particular, we have not managed to get the CTM method to work in the bottom phase, but the FLM still works there. In this section, we describe the finite lattice method and how it works. The majority of this section is taken from [50].

The finite lattice method, like the CTM method, attempts to calculate various quantities of interest in a statistical mechanical model. As stated previously, the most important of these quantities is the partition function, Z_N, and the partition function per site, κ. The starting point of the finite lattice method comes from the fact that the free energy per site can generally be expressed as a connected graph expansion; that is, as a series where the coefficients depend on the numbers of some type of connected graph. In other words, the FLM assumes the existence of an expansion of the form

ψ = Σ_α b_α φ_α(z)   (3.6)

where the sum is over all connected graphs α, b_α is the number of ways α can be embedded

in the lattice (divided by N), and φ_α(z) is the contribution to the free energy for the graph α. If, instead of the infinite lattice, we then apply a similar assumption to a finite lattice γ, then we assume that

ψ_γ = Σ_{α⊆γ} η(γ, α) φ_α(z)   (3.7)

where η(γ, α) is the number of ways α can be embedded in γ. Here we have taken ψ_γ to be the free energy of the lattice γ (as opposed to the free energy per site).

Now take a set of connected graphs A which contains γ, and has the property that any subgraph of an element of A must also belong to A. We can then rewrite Equation 3.7 so that the sum is over all elements of A, since η(γ, α) ≠ 0 if and only if α ⊆ γ. Since γ is arbitrary, we can then treat the equation as a component of a matrix-vector equation. The matrix in this equation has elements η(γ, α), so we can order the graphs so that it is lower triangular. By construction, its diagonal elements are non-zero, so it is invertible. If the inverse has the elements ν(α, γ), then

φ_α(z) = Σ_{γ∈A} ν(α, γ) ψ_γ   (3.8)

for all α ∈ A. This then implies that we can approximate the free energy per site of the infinite lattice by

ψ ≈ Σ_{α∈A} b_α φ_α(z) = Σ_{α∈A} Σ_{γ∈A} b_α ν(α, γ) ψ_γ = Σ_{γ∈A} a_γ ψ_γ   (3.9)

where a_γ = Σ_α b_α ν(α, γ). Thus the free energy per site can be approximated by a linear combination of the free energies of finite sublattices. The more graphs we put in A, the better the approximation becomes.

At this point, this approximation is not particularly useful, since A contains all subgraphs of any of its elements, and the number of subgraphs of even one graph grows very quickly with the number of vertices of that graph. However, we can simplify this to get a useful approximation. We define A_max to be a maximal set where no element is a subgraph of another element. Then, if we take A to be the set consisting of A_max and all subgraphs of all elements of A_max, it can be proved ([70]) that the only elements γ ∈ A for which a_γ ≠ 0 are the graphs which are intersections of any number of elements of A_max.
This drastically reduces the number of graphs which we must sum over.

For the square lattice, we take A_max to be the set of rectangles of fixed perimeter 2k. We define lattice rectangles to include all vertices inside the perimeter, which is rectangular. Then the elements of A for which a_γ ≠ 0 are rectangles of perimeter at most 2k. Thus we can rewrite our approximation as

ψ ≈ Σ_{m,n} a_{m,n} ψ_{m,n}   (3.10)

where the sum is over all positive m, n such that m + n ≤ k. We have taken ψ_{m,n} to be the free energy of an m × n rectangular lattice. Multiplying by −β and exponentiating gives us

the equivalent expression for the partition function per site:

κ ≈ Π_{m,n} Z_{m,n}^{a_{m,n}}   (3.11)

where Z_{m,n} is the partition function of an m × n lattice. This approximation becomes more accurate as k becomes larger, and in the limit k → ∞, it will give the exact partition function. In fact, by using rectangles, we save even more computation than the above equation would indicate, because it turns out that the coefficients a_{m,n} vanish if the half-perimeter is less than k − 3. From [50], the coefficients required are

a_{m,n} = 1 if m + n = k,
         −3 if m + n = k − 1,
         3 if m + n = k − 2,
         −1 if m + n = k − 3,
         0 otherwise.   (3.12)

If the model has reflection symmetry in the line at an angle of π/4 to the horizontal, then Z_{m,n} = Z_{n,m} and we can enumerate only the rectangles for which m ≤ n. Then we use the coefficients a', where

a'_{m,n} = a_{m,n} if m = n,
          2a_{m,n} if m < n,
          0 otherwise.   (3.13)

3.2.2 Transfer matrix method

The finite lattice approximation gives us an efficient way of calculating the partition function per site from finite lattice partition functions. This still leaves us with the problem of finding those partition functions. The most obvious method, of course, is to simply calculate them via direct enumeration: we need to sum over all possible configurations of spins, but this is a finite sum and relatively easy to calculate. However, although direct enumeration is possible, it quickly becomes infeasible due to the large number of calculations required, as might be expected.

The method that is generally used to calculate the finite lattice partition functions is the transfer matrix method. We described the basics of transfer matrices in the context of an infinite lattice in Section 2.2. The same ideas apply here, except that instead of finding the partition function of an infinite lattice by taking the maximum eigenvalue of an infinite-dimensional matrix, we can express the partition function as the trace of a finite power of a finite transfer matrix:

Z_{m,n} = Tr V^n.   (3.14)

The above equation was derived by assuming toroidal boundary conditions.
As we now wish to calculate the partition function of finite lattices, this is no longer appropriate.

Fig. 3.11: Transfer matrices for a finite lattice. Hollow spins are not in the lattice.

Fortunately, we can apply the same principle by using fixed boundary conditions: we picture the finite lattice as a subset of an infinite lattice, and give all spins outside the finite lattice a fixed value (which may differ for each spin). This value is usually in alignment with the most likely state, which we call the ground state. The spins outside the lattice contribute only through interactions with spins in the lattice; they do not affect the partition function otherwise.

If we modify V so that it includes interactions with the ground state at the top and bottom edges of the finite lattice, and remove the toroidal boundary conditions, then we can still calculate the finite lattice partition function. We set ψ_1 and ψ_2 to be vectors with elements that are the interactive contributions to the partition function from the left and right edges of the lattice, respectively, given the spin values on those edges. For example, [ψ_1]_{(1,1,...,1)} is the interactive contribution from the left edge of the lattice if all spins on that edge have the value 1. Then we have

Z_{m,n} = ψ_1^T V^n ψ_2.   (3.15)

We illustrate the new arrangement in Figure 3.11.

We would like to calculate the finite lattice partition function by using this equation directly, but the dimension of V is the number of possible spin states on a cut of m spins, which is 2^m. Thus V contains 4^m entries, and in (for example) the Ising model, every one of these elements is non-zero (since no configuration is directly prohibited). The number of calculations required therefore becomes prohibitively large very quickly as the lattice size increases.
To overcome this problem, we break V up, in much the same way that we decomposed it into single cells in Section 2.4.1. If we define ω(a, b; c, d) to be the weight of a single cell with spins a, b, c, and d (top row a, b; bottom row c, d), then we have

V_{(σ_1, σ_2, ..., σ_m),(σ'_1, σ'_2, ..., σ'_m)} = e_b(σ_1, σ'_1) e_t(σ_m, σ'_m) Π_{i=1}^{m−1} ω(σ_i, σ'_i; σ_{i+1}, σ'_{i+1})   (3.16)

Fig. 3.12: Single-cell transfer matrices. The first moves an n-spin cut to an (n+1)-spin cut; the second moves the cut further to the right and down; the third reduces it to n spins.

where e_b and e_t are the edge contributions. However, unlike what we did in Section 2.4.1, we do not split the weight of spins or bonds which lie in more than one cell. Instead, we set ω so that it contains all the weight of its bottom right spin, and the edges adjacent to it. This introduces an asymmetry into ω.

For example, take the simple Ising model in the zero-field low-temperature phase. We normalise the Boltzmann weights so that the configuration with all 1 spins has weight 1. Then a like bond has a weight of 1, while an unlike bond has a weight of z = e^{−2βJ}. We set all the boundary sites (not in the finite lattice) to be 1. This gives

ω(a, b; c, d) = z^{(1−bd)/2} z^{(1−cd)/2},  e_t(a, b) = z^{(1−ab)/2} z^{(1−b)/2},  e_b(a, b) = z^{(1−b)/2}.   (3.17)

Now we can break V down as a product of transfer matrices:

V = W_l (Π_{i=1}^{m} W_i) W_r   (3.18)

where W_i is a matrix which transfers a single cell, given a split cut of n + 1 sites. W_l adds the weight generated by the interaction of the top edge with the boundary spins, and turns the cut from an n-spin cut into an (n+1)-spin cut (which means that it is not square). W_r does the reverse, adding the weight of the bottom boundary and reducing the cut to n sites. For example,

[W_i]_{(σ_1, σ_2, ..., σ_{m+1}),(σ'_1, σ'_2, ..., σ'_{m+1})} = ω(σ_i, σ'_i; σ_{i+1}, σ'_{i+1}) if σ'_j = σ_j for all j ≠ i + 1,
                                                          0 otherwise   (3.19)

and W_l and W_r are defined similarly, including interactions with the boundary conditions. Each of these matrices transfers a single cell, as illustrated in Figure 3.12. For more details, see [29].

Since each of these matrices has at most two non-zero elements in each row, they are

sparse, and we therefore do not have to devote much storage to them. In fact, we can just not store them at all and multiply them implicitly. To find the partition function of a finite lattice, we start with ψ_1, which is usually easy to find, and then (implicitly) multiply by the transfer matrices in the prescribed order. We can then use the finite lattice approximation to reconstruct an approximation for the infinite lattice partition function per site.

3.2.3 The Ising model: an example

To illustrate the finite lattice method, we will construct an approximation for the partition function per site of the simple Ising model. We first illustrate the TM method by finding the partition function of a 2 × 2 lattice, expanding in the low-temperature variable z = e^{−2βJ}. We start from a ground state of all 1s, to which we assign a normalised weight of 1. Each like bond has a weight of 1, and each unlike bond has a weight of z. We order the possible cuts along a column in decreasing lexicographical order (which means that we order according to the topmost spin, then the next topmost, and so on). We then have the transfer matrices W_l, W_1 and W_r of equations (3.20), (3.21) and (3.22), whose non-zero entries are the edge and cell weights specified by (3.17) and (3.19). Furthermore, the starting vectors are

ψ_1^T = (1, z^3, z^3, z^4)   (3.23)

and

ψ_2^T = (1, z, z, z^2).   (3.24)
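The polynomial that these transfer matrices must reproduce can also be obtained by the direct enumeration mentioned earlier, which is still cheap for a 2 × 2 lattice. The sketch below is our own check, not the thesis code: it embeds each of the 16 interior configurations in a frame of fixed +1 boundary spins and counts unlike bonds, each of which carries one factor of z.

```python
import itertools
from collections import Counter

def z22_coeffs():
    """Coefficients of Z_{2,2} as a polynomial in z = exp(-2*beta*J),
    with every spin outside the 2x2 lattice fixed to +1."""
    coeffs = Counter()
    for conf in itertools.product((-1, 1), repeat=4):
        # embed the 2x2 configuration in a frame of fixed +1 boundary spins
        g = [[1, 1, 1, 1],
             [1, conf[0], conf[1], 1],
             [1, conf[2], conf[3], 1],
             [1, 1, 1, 1]]
        # count unlike adjacent pairs; boundary-boundary pairs are all
        # like (+1, +1) and so contribute nothing
        unlike = sum(g[i][j] != g[i][j + 1] for i in range(4) for j in range(3)) \
               + sum(g[i][j] != g[i + 1][j] for i in range(3) for j in range(4))
        coeffs[unlike] += 1  # each unlike bond carries one factor of z
    return dict(coeffs)
```

The result is {0: 1, 4: 4, 6: 4, 8: 7}, i.e. Z_{2,2} = 1 + 4z^4 + 4z^6 + 7z^8, in agreement with the transfer matrix product.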

Multiplying these out gives the normalised partition function

Z_{2,2} = ψ_1^T W_l W_1 W_r ψ_2 = 1 + 4z^4 + 4z^6 + 7z^8.   (3.25)

To get an approximation for κ, we apply this procedure to the 1 × 1, 1 × 2 and 1 × 3 lattices, which (without going into the details) gives

Z_{1,1} = 1 + z^4   (3.26)
Z_{1,2} = 1 + 2z^4 + z^6   (3.27)
Z_{1,3} = 1 + 3z^4 + 2z^6 + 2z^8   (3.28)

and therefore

κ ≈ Z_{1,1}^3 Z_{1,2}^{−6} Z_{1,3}^2 Z_{2,2} = 1 + z^4 + 2z^6 + 5z^8 − 14z^{10} + ...   (3.29)

which is accurate up to the z^8 term; the z^10 term does not have the correct coefficient.

3.3 Convergence of the CTM method

When we use the CTM method to approximate quantities of the second-neighbor Ising model, we would like to know how accurate the approximations are. In this section we study the convergence of the CTM method for this model; in particular, we would like to know not only the rate of convergence, but if the method converges at all! As we use the (spontaneous) magnetization to find our phase boundaries, we will look at how our calculated value for this quantity behaves as we execute the algorithm. We would expect other quantities (like partition function per site, isothermal susceptibility etc.) to converge in a similar manner. Note that due to ease of programming, we have made the spins take the values 0 or 1, rather than −1 or 1. Furthermore, the 0 spin now corresponds to what was the 1 spin. This means that the magnetization m is now what would have been (1 − m)/2.

Recalling the renormalization group CTM method from Section 2.10, the method works by finding values for the matrices A(a) and F(a, b), where a and b take all spin values. These matrices are fixed at a specific dimension (starting at 1 × 1). We then execute a fixed number of passes, where each pass consists of expanding, and then reducing, the matrices according to the formulas given in Section 2.10. After this is done, we increase the dimension of the matrices by 1, and start again. The process ends when we reach a pre-determined matrix size, upon which we calculate the quantities of interest using formulas given in Section 2.6.
Theoretically, it is difficult to determine how this process would converge. We assume that it does in fact converge, an assumption which is generally borne out in practice. We would also like to know what the rate of convergence depends on. At first glance, there are two factors which prevent us from achieving an exact value. One is that we cannot expand the matrices to infinite size. The other is that we cannot iterate the method at a fixed matrix size for an infinite number of iterations. Therefore, we will analyse the convergence of the renormalization group CTM method as it depends on the final matrix size and the number

of iterations calculated at that final matrix size. It turns out that there is a third factor that affects the convergence: the point on the phase diagram at which we evaluate the magnetization, i.e. the values of the parameters. This does not introduce an error directly, but affects the convergence of the other two factors.

3.3.1 Number of iterations

Firstly, we look at how the number of iterations at the final size affects the calculations. This effect is dependent not only on the location of the evaluation point, but also on the final matrix size. To measure the convergence, we fixed the parameters, ran the algorithm up to a fixed size, then iterated the method up to 1000 times, calculating the magnetization at every iteration.

The general pattern seems to be that the calculated magnetization (which we denote by m̂) gradually converges to a final value, as we would hope. The nature of this convergence is variable, however. If the evaluation point is far from the critical line, the convergence is generally monotonic. If the evaluation point is near to the critical line, however, all sorts of behavior can occur. For low final matrix sizes, m̂ again seems to be monotonic, as shown in Figure 3.13. Once we start making the size larger, we may get oscillatory convergence to a final value if we are lucky, as shown in Figure 3.14; or we may get periodic oscillations around a value, without actually converging, as shown in Figure 3.15; or m̂ may move in the vicinity of a value, without converging or having any noticeable periodic behavior, as shown in Figure 3.16. If we are unlucky, we may even encounter divergent behavior, as shown in Figure 3.17.

It is difficult to say for sure exactly what effect the final matrix size has on the convergence of m̂ in relation to the number of iterations. While it generally seems to be true that oscillations occur at larger sizes, it is not always the case that the amplitude of those oscillations grows with matrix size.
However, there does seem to be a negative relationship between the size of the oscillations and the distance of the evaluation point from the critical line. Unfortunately, for some points and matrix sizes, the calculated magnetization does not converge at all to the value that we wish. Because we always calculate in the absence of a magnetic field, the magnetization that we are calculating is actually the spontaneous magnetization; but because there is no external field, both spin values are equally likely. Therefore if m is the spontaneous magnetization, we should be equally likely to calculate a value of 1 − m (remember that our spins now take values of 0 or 1). It sometimes happens that our calculated figure switches between the two values, which throws the convergence off completely. Figure 3.18 shows an example of this happening. Furthermore, we cannot even use our new value to find the original m, because it is possible for the calculated magnetization to switch more than once, which means that we never converge. In particular, this seems to happen at every iteration when we try to calculate the magnetization in the bottom phase. Finally, when we evaluate on or very near the critical line, it seems that the number of iterations required to settle the magnetization is very high (larger than 1000). This is

Fig. 3.13: Calculated magnetization vs. number of iterations at the point (0.42, 0) with matrix size 7. The value converges monotonically.

Fig. 3.14: Magnetization vs. iterations at (0.42, 0) with matrix size 8. The value oscillates, but converges.

Fig. 3.15: Magnetization vs. iterations at (0.42, 0) with matrix size 10. The value appears to be periodic.

Fig. 3.16: Magnetization vs. iterations at (0.43, 0) with matrix size 19. The value stays around the same value, but without discernible periodic behaviour.

Fig. 3.17: Magnetization vs. iterations at (0.42, 0) with matrix size 9. The value oscillates, eventually diverging.

Fig. 3.18: Magnetization vs. iterations at (0.42, 0) with matrix size 18. The value switches to 1 − m halfway.

Fig. 3.19: Magnetization vs. iterations at (0.414, 0) with matrix size 8. The value is still increasing significantly after 1000 iterations.

shown in Figure 3.19. Fortunately, most of the non-converging or badly behaved cases tend to occur when the parameters are near the critical line. Ultimately, we cannot really avoid the high-iteration convergence problems. We would just like to give the algorithm enough time to settle so that it achieves its long-term behaviour (so that we do not also have to deal with the short-term error). To do this we take the following steps:

- We compute a large number of iterations at the final matrix size (500 iterations).
- We do not calculate the magnetization directly on the critical line, and avoid calculating it too near the line.

Matrix size

Now we look at how stopping the algorithm at a finite matrix size affects the calculated magnetization. We know that if we were to solve the equations exactly at each size, the approximation would get better as the matrix size grows. On the other hand, for this method we are not solving the equations exactly, so this does not apply. To measure the error, we fixed the evaluation point, and then ran the algorithm for 1000 iterations at each size, calculating the magnetization after the last iteration at each size. Even though we cannot prove that the approximation gets more accurate monotonically with size, this does in fact seem to be the case, as shown in Figure 3.20. An exact

Fig. 3.20: Calculated magnetization vs. final matrix size at the point (0.42, 0).

measure of the nature of this convergence is difficult, owing to the error caused by the finite-iteration oscillations we observed above. However, as noted for κ above, in the low-temperature phase the calculated magnetization seems to depend on size via a power law. We show a log-log plot of magnetization vs. size in Figure 3.21. As we approach the critical line, the rate of convergence slows. If we look at the high-temperature phase, then for points which are far from the critical line, m̂ is almost exactly 1/2 for all matrix sizes (we do not gradually converge to 1/2), which is the exact value. An interesting phenomenon occurs near the critical line in the high-temperature phase. For smaller matrix sizes, m̂ increases monotonically, but then after a certain size, which depends on the location of the evaluation point, it jumps to almost exactly 1/2. We show this in Figure 3.22. We calculated some more data along the u-axis (where we know the exact magnetization) to get a better picture. We found that at each matrix size, the line of calculated magnetization has a similar shape to the line of exact magnetization. However, the critical point at each finite size is different to the exact value. We show these lines in Figure 3.23. It appears that the critical points increase monotonically with respect to matrix size, eventually converging to the exact value. Looking at this in another way, we can think of each matrix size as having its own critical line, and these lines converge to the true critical line. This gives us another way of estimating the critical line. In Figure 3.24, we show a plot of these critical lines. We can also look at how the critical points converge to the actual critical line on the u-axis. It appears that they also obey a power law with respect to matrix size. Figure 3.25

Fig. 3.21: Log-log plot of magnetization vs. size at the point (0.5, 0).

Fig. 3.22: Magnetization vs. size at the point (0.41, 0). At sizes higher than 2, the calculated magnetization is almost exactly 1/2.

Fig. 3.23: Calculated magnetization along the u-axis for final sizes 1-5. The leftmost line represents size 1, and the size increases as we move to the right.

Fig. 3.24: Estimated critical lines for sizes 1-5. The lowest line represents size 1, and the size increases as we move upwards.

Fig. 3.25: Log-log plot of critical points on the u-axis vs. matrix size.

shows a log-log plot of critical points against size.

3.4 Scaling theory

For the second-neighbour model, one of the properties we would like to find is the crossover exponent ξc. We can derive a theoretical estimate of what the crossover exponent should be by means of scaling theory. In this section, we give a quick primer on this theory. A more detailed exposition can be found in [68], which is where most of this section is taken from, or [33]. In scaling theory, we assume that when we are very near to a critical point or line, a linear change of scale in the singular part of the free energy is equivalent to a power-law change of scale in the parameters. For example, consider the singular part of the free energy of the Ising model, denoted by ψ_s. If we fix the interaction strength J, then ψ_s is a function of τ = T/T_c − 1 and the external field strength H. The assumption can then be expressed formally as

ψ_s(λ^{a_τ} τ, λ^{a_H} H) = λ^{a_ψ} ψ_s(τ, H)    (3.30)

for any λ, if we evaluate ψ_s near the critical point. This assumption is called the scaling assumption. If ψ_s satisfies the scaling assumption, we call it a generalised homogeneous function. Note that by changing λ to λ^{1/a_ψ}, we can set a_ψ equal to 1. If we do so, we say that ψ has scaling powers a_τ and a_H.

Generalised homogeneous functions (GHFs) have several interesting properties. In the following lemmas, we will use GHFs in 2 variables, but the lemmas also apply for functions with more or fewer variables. Firstly, any derivative of a GHF is itself a GHF.

Lemma 3.4.1. If f(x_1, x_2) is a GHF such that

f(λ^{a_1} x_1, λ^{a_2} x_2) = λ^{a_f} f(x_1, x_2),    (3.31)

then the partial derivative

f^{(i,j)}(x_1, x_2) = (∂^i/∂x_1^i)(∂^j/∂x_2^j) f(x_1, x_2)    (3.32)

is also a GHF, satisfying

f^{(i,j)}(λ^{a_1} x_1, λ^{a_2} x_2) = λ^{a_f − i a_1 − j a_2} f^{(i,j)}(x_1, x_2).    (3.33)

Proof. We prove the lemma for j = 0; the extension to non-zero j is simple. Clearly f^{(0,0)} is a GHF; so let us assume that f^{(n,0)} is a GHF. Then

λ^{a_f − (n+1)a_1} f^{(n+1,0)}(x_1, x_2) = λ^{−a_1} ∂/∂x_1 ( λ^{a_f − n a_1} f^{(n,0)}(x_1, x_2) )
  = λ^{−a_1} ∂/∂x_1 f^{(n,0)}(λ^{a_1} x_1, λ^{a_2} x_2)
  = λ^{−a_1} λ^{a_1} [∂f^{(n,0)}/∂(λ^{a_1} x_1)](λ^{a_1} x_1, λ^{a_2} x_2)
  = f^{(n+1,0)}(λ^{a_1} x_1, λ^{a_2} x_2),    (3.34)

which proves the lemma by induction.

The property that makes GHFs interesting is that they follow a power law relationship as they approach the origin along one axis.

Lemma 3.4.2. If f(x_1, x_2) is a GHF such that

f(λ^{a_1} x_1, λ^{a_2} x_2) = λ^{a_f} f(x_1, x_2),    (3.35)

then as x_1 → 0,

f(x_1, 0) ∼ |x_1|^{a_f/a_1},    (3.36)

and a similar expression holds for x_2.

Proof. We prove the lemma for x_1 only. By setting λ = |x_1|^{−1/a_1} in the GHF-defining equation and rearranging, we get

f(x_1, x_2) = |x_1|^{a_f/a_1} f(±1, |x_1|^{−a_2/a_1} x_2).    (3.37)

If we set x_2 = 0, we get a term of f(±1, 0) on the right-hand side, which is constant and depends only on the sign of x_1. Therefore f(x_1, 0) is proportional to |x_1|^{a_f/a_1}.

If we now assume that ψ_s is a generalised homogeneous function, then all derivatives of ψ_s are also GHFs. In particular, since the magnetization m is related to the free energy by

m = −∂ψ/∂H,    (3.38)

the singular part of the magnetization must also be a generalised homogeneous function. This helps us to calculate some of the scaling powers of the free energy. For example, the low-temperature spontaneous magnetization for the Ising model is known to be

m = (1 − (sinh(2J/kT))^{−4})^{1/8}    (3.39)

for T < T_c, and therefore as τ → 0 it obeys the power law

m ∼ |τ|^{1/8}.    (3.40)

However, by Lemma 3.4.1, the scaling assumption that we made above implies that the singular part of the magnetization, denoted by m_s, satisfies

m_s(λ^{a_τ} τ, λ^{a_H} H) = λ^{1 − a_H} m_s(τ, H),    (3.41)

and therefore, by Lemma 3.4.2, we have

m_s(τ, 0) ∼ |τ|^{(1 − a_H)/a_τ}.    (3.42)

Comparing exponents, we can then say that

(1 − a_H)/a_τ = 1/8.    (3.43)

3.5 Scaling and the crossover exponent

We now show how scaling theory can be used to estimate the crossover exponent. This has been done before (see [48]). We start off by defining some notation. Here we have chosen to work with the magnetization, but other thermodynamic quantities should work just as well. First we normalise our variables so that a critical point always occurs at the origin, and the function is 0 at that point. This enables us to state our results in simpler terms. Remember that we are using the variables u = tanh(J_1/kT) and v = tanh(J_2/kT), and our spins can take the values 0 or 1.
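As an aside, the two lemmas of the previous section are easy to check numerically on a toy function. In the following sketch (an illustration only; f(x_1, x_2) = x_1² + x_2⁴ is an assumed example with scaling powers a_1 = 1, a_2 = 1/2 and a_f = 2, not a physical free energy), we verify the defining relation and the power-law behaviour of Lemma 3.4.2 along the x_1-axis:

```python
# Numerical check of the GHF property for the toy function
#   f(x1, x2) = x1**2 + x2**4,
# which satisfies f(lam**a1 * x1, lam**a2 * x2) = lam**af * f(x1, x2)
# with a1 = 1, a2 = 0.5, af = 2 (an assumed example).
def f(x1, x2):
    return x1**2 + x2**4

a1, a2, af = 1.0, 0.5, 2.0
x1, x2, lam = 0.7, 1.3, 2.5

lhs = f(lam**a1 * x1, lam**a2 * x2)
rhs = lam**af * f(x1, x2)
assert abs(lhs - rhs) < 1e-12  # the GHF-defining equation (3.31)

# Lemma 3.4.2: along the x1-axis, f(x1, 0) is proportional to
# |x1|**(af/a1); here the constant of proportionality is f(1, 0) = 1.
x = 1e-3
ratio = f(x, 0.0) / abs(x) ** (af / a1)
assert abs(ratio - 1.0) < 1e-12
print("GHF checks passed")
```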

Definition. We define v̄ to be the normalised v,

v̄ = v − (√2 − 1),    (3.44)

with corresponding critical line

v̄_c(u) = v_c(u) − (√2 − 1).    (3.45)

We also define m_s to be the normalised magnetization per site:

m_s(u, v̄, H) = 1/2 − m(u, v, H).    (3.46)

If the last argument of m_s is left out, it is assumed to be 0.

Now we make the scaling assumption. We could assume that the singular part of the free energy is a GHF, but this is not necessary. Instead we make the (weaker) assumption on the magnetization.

Proposition 3.5.1. As (u, v̄, H) → (0, 0, 0), m_s(u, v̄, H) is a generalised homogeneous function, i.e. there exist scaling powers a_u, a_v, a_H such that

λ m_s(u, v̄, H) = m_s(λ^{a_u} u, λ^{a_v} v̄, λ^{a_H} H).    (3.47)

An obvious corollary of this assumption is that m_s(u, v̄, 0) is a GHF with scaling powers a_u and a_v. Therefore, we shall work in zero field when the field is not needed. One immediate consequence of this is that m_s(u, 0) and m_s(0, v̄) behave according to a power law in u and v̄, respectively, when they are small.

Definition. We define ξ1 to be the critical exponent of the zero-field normalised magnetization as it approaches the origin horizontally from the right:

m_s(u, 0) ∼ u^{ξ1} as u → 0⁺.    (3.48)

Similarly, we define ξ2 to be the critical exponent as the origin is approached vertically:

m_s(0, v̄) ∼ (−v̄)^{ξ2} as −v̄ → 0⁺.    (3.49)

These critical exponents are shown in Figure 3.26. From Lemma 3.4.2, it follows immediately that ξ1 = 1/a_u and ξ2 = 1/a_v. Now we are interested in how the critical line behaves close to the crossover point. It can be shown that under the scaling assumption, it can indeed be described by a power law.

Proposition 3.5.2. The critical line v̄_c(u) obeys a power law:

v̄_c(u) ∼ u^{ξc} as u → 0⁺.    (3.50)

Fig. 3.26: Critical exponents near the crossover point.

Furthermore, the crossover exponent ξc can be expressed in terms of the other critical exponents:

ξc = ξ1/ξ2.    (3.51)

Proof. Let ε > 0 be fixed. Substituting λ = (ε/u)^{1/a_u} into the scaling assumption at zero field gives

(ε/u)^{1/a_u} m_s(u, v̄) = m_s(ε, (ε/u)^{a_v/a_u} v̄).    (3.52)

Now for any u, the normalised magnetization m_s is 0 at the critical point (u, v̄_c(u)) by construction. Since the critical line is not horizontal, we know that v̄_c(u) is non-zero, and therefore

m_s(ε, (ε/u)^{a_v/a_u} v̄_c(u)) = 0.    (3.53)

Since u can be any number, but m_s is not trivial, it must be true that (ε/u)^{a_v/a_u} v̄_c(u) is constant:

(ε/u)^{a_v/a_u} v̄_c(u) = c.    (3.54)

Rearranging gives

v̄_c(u) = c ε^{−a_v/a_u} u^{a_v/a_u},    (3.55)

which shows that v̄_c(u) obeys a power law in u. Furthermore, the crossover exponent is

ξc = a_v/a_u = (1/ξ2)/(1/ξ1) = ξ1/ξ2.    (3.56)

From the above proposition and known exponents, we can conjecture the value of ξc. As stated above, this has been done before (in terms of the temperature) in [48].
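As a quick arithmetic check of this relation (an illustration only), the exponent values ξ1 = 1/14 and ξ2 = 1/8 obtained for this model below combine exactly as the proposition requires:

```python
from fractions import Fraction

# Exponent arithmetic from the proposition: xi_c = xi_1 / xi_2,
# with the values derived below for the second-neighbour model.
xi_1 = Fraction(1, 14)
xi_2 = Fraction(1, 8)
xi_c = xi_1 / xi_2
print(xi_c)  # -> 4/7
```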

Proposition 3.5.3.

ξc = 4/7.    (3.57)

Proof. The isothermal susceptibility χ is defined as

χ = ∂m/∂H = −∂m_s/∂H.    (3.58)

For the sake of convenience, we temporarily revert to using spins of value −1 and 1. In the high-temperature regime T > T_c, the magnetization is 0. Partial differentiation then gives us

χ = ∂⟨σ_i⟩/∂H
  = lim_{N→∞} ∂/∂H [ (1/Z_N) Σ_{σ_1,σ_2,…,σ_N} σ_i e^{−βH(σ_1,σ_2,…,σ_N)} ]
  = β lim_{N→∞} [ Σ_{j=1}^{N} ⟨σ_i σ_j⟩ − m Σ_{j=1}^{N} ⟨σ_j⟩ ]
  = β lim_{N→∞} Σ_{j=1}^{N} ⟨σ_i σ_j⟩    (3.59)

for any spin i. Without going into more details, differentiating with respect to J_1 (and simplifying) results in

∂χ/∂J_1 = β² lim_{N→∞} (1/N) Σ_{i,j,⟨k,l⟩} ⟨σ_i σ_j σ_k σ_l⟩    (3.60)

when J_1 = 0. Let A and B be the two sublattices which divide the lattice so that all bonds contain a spin from A and a spin from B. Then if both i and j belong to the same sublattice, either σ_k or σ_l will be independent of the other three spins when J_1 = 0, which would make the term 0 (as it contains a multiple of m). Therefore (with a factor of 2) we can assume that i is in sublattice A and j is in sublattice B. This gives

∂χ/∂J_1 = β² lim_{N→∞} (2/N) Σ_{k∈A} Σ_{δ} ( Σ_{i∈A} ⟨σ_i σ_k⟩ )( Σ_{j∈B} ⟨σ_{k+δ} σ_j⟩ )
        = β² lim_{N→∞} (2/N) (N/2) Σ_{δ} (χ/β)²
        = 4χ²    (3.61)

where δ runs over all nearest neighbours of the origin (of which there are 4). Since u =

tanh(J_1/kT), converting back into a 0-1 spin system leads to the relation

∂χ/∂u |_{u=0} = 4χ² |_{u=0}.    (3.62)

As a derivative of a GHF (m_s), χ is in fact a GHF itself, from Lemma 3.4.1. Furthermore, its scaling relation (in terms of the scaling powers of m_s) is

λ^{1 − a_H} χ(u, v̄, H) = χ(λ^{a_u} u, λ^{a_v} v̄, λ^{a_H} H).    (3.63)

When squared, this relation gives

λ^{2 − 2a_H} χ²(u, v̄, H) = χ²(λ^{a_u} u, λ^{a_v} v̄, λ^{a_H} H).    (3.64)

Now ∂χ/∂u is again the derivative of a GHF, so a similar scaling relation holds for it:

λ^{1 − a_H − a_u} (∂χ/∂u)(u, v̄, H) = (∂χ/∂u)(λ^{a_u} u, λ^{a_v} v̄, λ^{a_H} H).    (3.65)

Since these functions are proportional, they must have the same scaling powers. Therefore

1 − a_H − a_u = 2 − 2a_H,    (3.66)

which implies that

a_u = a_H − 1.    (3.67)

We know that when u = 0, the system decouples into two lattices where the original second-neighbour bonds become nearest-neighbour bonds, as was shown in Figure 3.8. In this case, the magnetization is identical to that of a first-neighbour Ising model with nearest-neighbour interaction strength J_2. We know that the spontaneous magnetization of the Ising model has critical exponent 1/8; therefore m_s(0, v̄) ∼ (−v̄)^{1/8} as −v̄ → 0⁺. This implies that a_v = 8, which in turn implies ξ2 = 1/8. Furthermore, if v̄ = 0 as well, the model will behave like a simple Ising model at criticality. Therefore (from [129, pp. 144]), we have m_s(0, 0, H) ∼ H^{1/15} as H → 0⁺, which means that a_H = 15. This implies that a_u = 14, and therefore ξ1 = 1/14. This gives us

ξc = ξ1/ξ2 = (1/14)/(1/8) = 4/7.    (3.68)

3.6 Finding the critical lines

Proposition 3.5.3 gives a prediction of the crossover exponent using scaling relations. We would like to verify this theoretical figure using the renormalization group CTM method

Fig. 3.27: We evaluate the magnetization along vertical and horizontal lines to estimate the location of the critical line.

(which we will refer to as the CTM method from now on). There are (at least) two ways to do this. Probably the easiest way is to make use of Proposition 3.5.2. Since we know that the exact value of ξ2 is 1/8, all we need to do is to calculate ξ1. To do this, we evaluate the magnetization at various points along the line v = √2 − 1. We can then analyse the data directly to find the exponent. We do this in Section 3.7. However, using Proposition 3.5.2 still means that we have used a scaling assumption. Furthermore, we are interested not only in the crossover exponent, but also in the location of the critical lines. By calculating the location of the lines directly, we can also analyse them (without the need for a scaling assumption) near (0, √2 − 1) to find another estimate for the crossover exponent. Unfortunately, finding the location of the critical lines itself does present us with some difficulties; in particular, all numerical methods, including the CTM method, become inaccurate near the lines. The exponent estimate is therefore even more inaccurate; generally, if we want a useful estimate of the crossover exponent, we just have to use the scaling assumption. However, finding the critical lines is useful in itself, so we will still do it. We look at each of the lines in turn.

The upper line

Firstly, we estimate the location of the upper critical line using the CTM method. To do this, we use the method to evaluate a quantity of the model (we used the magnetization) along a line in the plane that we know intersects the critical line. By observing the behaviour of this quantity along these lines, we can work out where the critical line intersects our evaluation lines.
We use either horizontal or vertical lines, depending on which part of the phase diagram we are looking at (near the crossover point, we use vertical lines, as they seem to provide more accurate data). This is shown in Figure 3.27. Now the values that we get along these lines have some error (induced in part by stopping

the algorithm at a fixed matrix size), but in terms of general shape they are similar to the actual magnetization. We show an example in Figure 3.28. In particular, the critical point at finite matrix size (which can be located by observing when the magnetization stops being 1/2) is different to the actual critical point. We can solve this problem in several ways:

- Naively, we can simply estimate the critical point by the point where the calculated magnetization departs from 1/2, with a small error (so for example we take the critical point to be the smallest u where m̂ < 0.499). This is the least complicated but also the most inaccurate way.
- We know that at the actual critical line, the magnetization is constant. If we assume that our calculated magnetization is also constant (so that the error remains the same along the critical line), we can use the fact that a critical point lies at (√2 − 1, 0) to calculate the value of m̂ on the critical line. We can then estimate the critical point to occur whenever the calculated magnetization attains that value, as shown in Figure 3.29. Unfortunately we cannot justify the assumption that the calculated magnetization is constant on the critical line, but at least this method provides an estimate of the critical line with one point correct.
- We can make use of the fact that the critical exponent of the magnetization along the top phase boundary is 1/8 along all horizontal and vertical lines, by universality. Then calculating (1/2 − m)^8 along any of these lines will give a curve similar to Figure 3.30. We can then select a few points which lie slightly above the critical point and fit a line to them, taking the critical point as the intercept of that line.
We show this in Figure 3.31. The problem with this method is that the power law only holds very near the critical point, so if we take the points too far away, the intercept will be different from the critical point; but if we take them too near, the inaccuracy inherent in the CTM method again ensures an error in the estimation of the critical point! Theoretically, the third method seems the best, but it is difficult to implement, as we do not have a good idea of where to take the points that we fit the line to. Empirically, we found that fitting many lines along different intervals and taking the largest intercept worked well, but this has its own difficulty: it requires many calculations just to find one critical point. In the end, we used the second method to estimate the location of the critical line, but estimated the error in our method from the known critical point (0, √2 − 1), since we want to observe the behaviour of the line around that point. The results are shown in Section 3.7.

The lower line

The CTM method works well when estimating the location of the upper critical line. However, it does not work at all in the lowest phase. Furthermore, we cannot estimate the lower critical line from the high-temperature phase, as we use the magnetization, which is constant in that phase. Therefore, we turn to the finite lattice method in the super anti-ferromagnetic phase.
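The first, naive method above can be sketched in a few lines. In the following illustration, `m_hat` is an assumed toy magnetization curve standing in for the CTM evaluation (the critical value 0.4145 and the 1/8 power law here are invented for the example; this is not the thesis code):

```python
# Sketch of the naive critical-point estimate: scan along a line in the
# (u, v) plane and report the first point at which the calculated
# magnetization departs from 1/2.
def m_hat(u, u_c=0.4145):
    # Toy curve: flat at 1/2 below u_c, a 1/8 power law above it.
    return 0.5 if u <= u_c else 0.5 - (u - u_c) ** 0.125

def first_departure(m, u_min=0.0, u_max=1.0, step=1e-3, eps=1e-3):
    """Smallest grid point where m(u) < 1/2 - eps, or None."""
    n = int(round((u_max - u_min) / step))
    for k in range(n + 1):
        u = u_min + k * step
        if m(u) < 0.5 - eps:
            return u
    return None

u_est = first_departure(m_hat)
print(round(u_est, 3))  # -> 0.415
```

The estimate lands on the first grid point past the true critical value, which is exactly the "small error" the text warns about: the resolution of the scan bounds the accuracy of this method.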

Fig. 3.28: Calculated (left) and actual (right) magnetization along the u-axis for matrix size 3.

Fig. 3.29: Estimating critical points by assuming a constant error on the critical line. For this matrix size (3), we then estimate the critical points to be where m̂(u, v) equals the value calculated at the known critical point (√2 − 1, 0).

Fig. 3.30: (1/2 − m)^8 for calculated (left, size 3) and actual (right) magnetization.

Fig. 3.31: Calculating critical points by fitting a line to (1/2 − m)^8. Our estimate is the intercept of the line with the x-axis.

For the CTM method, we calculated numerics because the method was unable to produce series. However, the FLM can produce series, which give more accurate approximations of both critical points and critical exponents. For the lowest phase, we found the leading terms of the magnetization as a series in the variable w = e^{−4βJ_1}, for several fixed values of J_2. Unfortunately, we were not able to generate very long sequences, as the FLM is much more inefficient with second-neighbour interactions present. This is due to the fact that the error is determined by the smallest connected bond graph that cannot fit inside any of the finite lattices. For models with only first-neighbour interactions, the number of bonds in this graph is approximately equal to the half-perimeter of the lattices, but with second-neighbour interactions, this is roughly halved. Using finite lattices with half-perimeter up to 27, we calculated the magnetization up to order w^13. To analyse these series, we used the technique of Padé approximants. This involves fitting a rational function P(z)/Q(z) to a function whose properties we want to estimate; in this case, we use dlog Padé approximants, which means that we fit the rational function to the derivative of the logarithm of our series. This is also equivalent to fitting the series to a first-order homogeneous linear differential equation with polynomial coefficients. To find (regular) Padé approximants, we first set the order of P(z) and Q(z). If P has order M and Q has order N, then the approximant is called the [M, N] Padé approximant. To calculate an [M, N] approximant, we need M + N + 1 series terms. Although it obviously depends on the function being approximated, often the diagonal or close-to-diagonal approximants (e.g. [N, N] or [N, N + 1]) produce better results. Now we set the coefficients of P and Q to be variables, except for the constant term of Q, which we take to be 1 (since we can divide P and Q by any non-zero number to make this so).
By equating with the series and multiplying out, we generate a system of linear equations which can easily be solved. As a simple example, suppose that we wanted to find the generating function of the Fibonacci series. If we take M = N = 2, we then have 5 unknown coefficients in our approximant. We set

P(z) = a_0 + a_1 z + a_2 z²    (3.69)

and

Q(z) = 1 + b_1 z + b_2 z².    (3.70)

To be able to solve for these coefficients, we need 5 terms from the series 1, 1, 2, 3, 5, .... Then we have

(a_0 + a_1 z + a_2 z²)/(1 + b_1 z + b_2 z²) = 1 + z + 2z² + 3z³ + 5z⁴ + ...,    (3.71)

which gives

a_0 + a_1 z + a_2 z² = (1 + b_1 z + b_2 z²)(1 + z + 2z² + 3z³ + 5z⁴ + ...).    (3.72)

Equating the known coefficients of z gives the equations

a_0 = 1    (3.73)
a_1 = 1 + b_1    (3.74)
a_2 = 2 + b_1 + b_2    (3.75)
0 = 3 + 2b_1 + b_2    (3.76)
0 = 5 + 3b_1 + 2b_2    (3.77)

which, when solved, give the expected solution a_0 = 1, a_1 = a_2 = 0, b_1 = b_2 = −1. We will also use Padé approximants (and an extension of them) in later chapters. For more information on Padé approximants, see [6] and [7]. Now we return to analysing our series with dlog Padé approximants. If a function obeys the relation f(z) ∼ A(z − z_c)^{−γ} when z is near z_c, then we have

d/dz ln f(z) ∼ −Aγ(z − z_c)^{−γ−1} / (A(z − z_c)^{−γ}) = −γ/(z − z_c).    (3.78)

Therefore, we can locate the critical point z_c as the smallest zero of the denominator of the Padé approximant. Furthermore, the critical exponent can be calculated from the formula

γ ≈ −P(z_c)/Q′(z_c),    (3.79)

which is invariant if P and Q are multiplied by the same function. We do this for our series, and show our results in Section 3.7.

The disorder point line

As a check on our critical line estimates, we also calculate the disorder point line of the model. Informally, the disorder point line marks the transition between the points where the first-neighbour interaction dominates the second-neighbour interaction, and the points where the reverse applies. We used the formula given for the line in [49, Equations 4.10a-c]. However, there is a slight error in this paper: the right-hand side of Equation 4.10a should be (x² + y² + 2xyw + 2xyw² + x²y²w). This can be easily checked from Equation 4.9 in the paper, remembering that (with a ±1 spin system) the square of each spin is 1. This line enables us to check the accuracy of our critical line estimates, because it must always lie in the high-temperature phase. This becomes relevant as u tends to ±1, since both the upper and lower phase boundaries converge to −1 at these points.

3.7 Results

Firstly, we attempt to corroborate our scaling theory estimate of ξc = 4/7 by calculating ξ1.
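Before presenting the numbers, the Padé machinery of the previous section can be made concrete. The following sketch (an illustration only, using numpy; none of this code comes from the thesis) solves the linear system for a general [M, N] approximant, reproduces the Fibonacci example, and then applies the dlog analysis to a test function with a known singularity:

```python
import numpy as np

def pade(coeffs, M, N):
    """[M, N] Pade approximant of a power series.

    `coeffs` holds the series coefficients c_0, c_1, ... (at least
    M + N + 1 of them).  Returns (p, q) with deg p = M, deg q = N and
    q[0] = 1, chosen so that (q * c) has coefficients p_n for n <= M
    and 0 for M < n <= M + N.
    """
    c = np.asarray(coeffs, dtype=float)
    # N linear equations for q_1..q_N from the zero coefficients.
    A = np.array([[c[M + i - j] if M + i - j >= 0 else 0.0
                   for j in range(1, N + 1)] for i in range(1, N + 1)])
    b = -c[M + 1:M + N + 1]
    q = np.concatenate(([1.0], np.linalg.solve(A, b)))
    # p_n = sum_{k <= n} q_k c_{n-k} for n = 0..M.
    p = np.array([sum(q[k] * c[n - k] for k in range(min(n, N) + 1))
                  for n in range(M + 1)])
    return p, q

# The Fibonacci example: [2, 2] approximant of 1 + z + 2z^2 + 3z^3 + 5z^4.
p, q = pade([1, 1, 2, 3, 5], 2, 2)
# p = [1, 0, 0], q = [1, -1, -1], i.e. the generating function 1/(1-z-z^2).

# dlog Pade on a test function: f(z) = (1 - 2z)**(-1/2) has z_c = 1/2 and
# gamma = 1/2, and the series of (d/dz) ln f(z) is 1 + 2z + 4z^2 + ...
dp, dq = pade([1, 2, 4], 1, 1)
z_c = np.roots(dq[::-1]).min()          # smallest zero of the denominator
gamma = -np.polyval(dp[::-1], z_c) / np.polyval(np.polyder(dq[::-1]), z_c)
print(z_c, gamma)  # -> 0.5 0.5
```

On a truncated series of an unknown function the same recipe is used, with the caveats discussed later: spurious or nearly coincident ("double") roots of the denominator can throw the exponent estimate off badly.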
As outlined above, we evaluated the magnetization on the line segment 0 < u < 0.001, v = √2 − 1.

Fig. 3.32: Magnetization vs. inverse size at the point (0.0005, √2 − 1).

To counter finite-size effects, we calculated the magnetization at each size (running the algorithm for a large number of iterations at each size). An example of the results we obtained is in Figure 3.32. As we found in Section 3.3, the critical points should have a power law relationship with the final matrix size, but without a known intercept or exponent, it is difficult to fit one to the data. In the end, we ended up estimating the final value graphically. We repeated this for various u in the line segment, and produced a log-log plot of u against m, which is shown in Figure 3.33. We fitted a straight line through the points in the plot, as shown in Figure 3.34. The maximum and minimum slopes give the number ξ1 = … ± 0.002. We know that ξ2 = 1/8, so by applying Proposition 3.5.2, this implies that

ξc = … ± ….    (3.80)

This interval includes 4/7 ≈ 0.5714, which supports Proposition 3.5.3 and gives credence to our scaling assumption. Now we estimate the location of the critical line, using the methods described in Section 3.6. The results are shown in Figure 3.35. Interestingly, fitting a power law to this curve near the crossover point seems to suggest a crossover exponent close to 2/3, rather than 4/7, but this may be due to the errors in the calculation. We also show the disorder point line, which lies between our critical lines. This plot compares well with the plots in [23]. Lastly, we estimate the critical exponent of the magnetization, also denoted by β, along

Fig. 3.33: Log-log plot of magnetization vs. u on the line v = √2 − 1.

Fig. 3.34: Figure 3.33 with fitted lines.
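The slope extraction behind Figures 3.33 and 3.34 amounts to a least-squares fit of ln m against ln u. A minimal sketch (an illustration only, on synthetic power-law data with the exponent 1/14 built in, not the thesis data):

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of ln(y) against ln(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic data m_s(u) = A * u**xi with xi = 1/14, the value predicted
# by scaling theory; the fitted slope recovers the exponent.
xs = [k * 1e-4 for k in range(1, 11)]
ys = [0.9 * u ** (1 / 14) for u in xs]
xi = loglog_slope(xs, ys)
print(round(xi, 6))  # -> 0.071429
```

On real data the points scatter, and fitting lines of maximum and minimum slope through them is what produces the quoted error bar on ξ1.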

Fig. 3.35: Estimated critical lines in the (u, v) plane. The disorder point line is also shown.

the lower phase boundary. We show the estimate for each individual approximant in Figure 3.36. It is apparent that the critical exponent changes continuously along the boundary. However, as u becomes larger, the estimates vary wildly and are very unreliable; this may be due in part to the presence of double roots, where the smallest zero of the denominator is very near to another zero, which throws off the estimate. However, the variation in the estimates is surprisingly small for large u (above 0.9), though we do not know why this is so. The estimates suggest that as u tends to 1, the exponent tends to 0. In other papers, the critical exponent is often plotted against J_2/J_1. For comparison, we do the same in Figure 3.37. A plot of the same quantity is given by Alves and de Felício in [2, Figure 8], who produced their plot via Monte Carlo simulations. The two plots look very similar, and it is encouraging to see that two completely different methods produce such similar results.

3.8 Conclusion

In this chapter, we have used the renormalization group CTM method and the finite lattice method to estimate the crossover exponent and the location of the critical lines for the second-neighbour Ising model. After some discussion, in which we introduced the finite lattice method and analysed the convergence of the CTM method for this particular model, we showed how scaling theory can be used to estimate the crossover exponent at 4/7. We then applied our numerical methods, which produced an estimate which more or less agreed with this number. Using these methods, we also estimated the location of the critical lines, and

Fig. 3.36: Estimated critical exponents along the lower phase boundary.

Fig. 3.37: Estimated critical exponents vs. J_2/J_1.

the critical exponent along the lower phase boundary. While not entirely new results, our calculations provide another perspective on the existing estimations of these quantities, using a new method. They also illustrate one of the uses of the CTM method.

We could extend this research by trying to increase the accuracy of our estimates. In particular, our ways of getting around the natural error of the CTM method are rather ad hoc, and it would be good if we could measure the location of the critical line more accurately. One way to do this would be to increase the final matrix size of the CTM method, so as to produce a more accurate estimate. However, this does not alleviate the finite-iterations error, which becomes increasingly significant at higher sizes. Alternatively, we could try to produce series calculations from the CTM method: we can either use the iterative method, or modify the renormalization group method to produce series.

Another possible area we could look at is to try and force the CTM method to work for the super anti-ferromagnetic phase. We suspect that it may be failing due to a breakdown in the symmetry of the model, but have been unable to modify the CTM method to overcome this problem as yet. However, considering that the CTM method works well for the other phases, it seems reasonable to suppose that there exists some modification which will enable it to work in this phase.

4. DIRECTED LATTICE WALKS IN A STRIP

4.1 Introduction

One of the more common ideas which occurs in statistical physics is the concept of a walk. Simply put, a walk is a single path in a given space. It can also be thought of as the locus of an object moving at constant velocity, called a walker. Despite being a relatively simple concept (or perhaps because it is so simple and unspecific), many varied and interesting problems involve a walk or walks.

The most obvious question to ask about any walk model is: how many possible walks of a certain length are there? If we can find a closed-form expression for this number for any model, we call that model solved.

The most general walk is unrestricted in the direction that it can take: it simply starts at a point and meanders around. Since, at any time, there is an infinite choice of directions for the walker, there are an infinite number of possible walks of any length. If we have more than one walker, the situation is even worse; we also have to set the starting points of the walkers, which again presents us with an infinite number of choices. We show this situation in Figure 4.1.

We would like to have a walk model where the number of possible walks is finite. To do this, we restrict the possible directions that the walk can take. For our purposes, we use the two-dimensional square lattice Z^2, shown in Figure 4.2, although this is by no means the only restriction that we can make. Many interesting walk models can be made on different two-dimensional lattices, or in other dimensions. We will also assume, for the time being, that the walker starts at the origin.

In the simplest model, after every unit of time (or step), the walker reaches a vertex of the square lattice. It then has four possible directions that it can choose from, all of which are equally likely. This model is called the random walk model. It is rather trivial to solve: since the walker has four possible choices after every step, the number of walks of length n is simply 4^n.
The random walk model is very simple, and the solution is rather obvious. However, by just tweaking it slightly, we can create an interesting model which does not have an obvious solution. For example, one of the most famous walk models is the self-avoiding walk, first described by Orr in 1947 ([115]). This has the same restrictions as the random walk model, but with the added restriction that the walker must never visit a vertex that it has visited before (hence the term self-avoiding). We show an example of a self-avoiding walk in Figure 4.3. Even though this model is only one condition away from the random walk, the self-avoiding walk problem (finding the number of self-avoiding walks of a given length) has not

Fig. 4.1: General walks. (a) A general walk in 2-space. (b) Multiple walks in 2-space.

Fig. 4.2: The square lattice.

Fig. 4.3: A self-avoiding walk on the square lattice.

Fig. 4.4: The stretched and rotated square lattice.

been solved exactly, even through extensive studies. The current most efficient algorithm for enumerating the number of self-avoiding walks of a certain length is a variation of the finite lattice method discussed in Chapter 3, first implemented by Conway, Enting and Guttmann in 1993 ([37]). In [38] and [64] they extended the enumeration to self-avoiding walks of length up to 51, a number which has subsequently been bettered by Jensen in 2004 ([74]), who counted walks up to length 71 (and, in [75], enumerated self-avoiding walks of length up to 40 on the triangular lattice). These studies give precise estimates of the growth constant for self-avoiding walks on the square lattice. More details and discussion on the self-avoiding walk can be found in [96] and [71].

As interesting as the self-avoiding walk model is, it is still limited by the fact that it describes only one walker. An interesting expansion on the model would be to have two or more walkers, but still with some avoidance constraint. This leads to what we call vicious walkers. Under this idea, if two walkers meet at one vertex, they will annihilate one another. We then ask: what is the probability that all walkers are alive at a given time? Since it is easy to calculate the number of configurations when the vicious constraint is removed, this question can be answered by finding the number of walk configurations where no annihilations take place. Finding the number of such configurations is the question that the vicious walk model poses. However, this still leaves a few issues: where will the walkers start, and how do we distinguish between two walkers meeting at one vertex, and two walkers visiting the same vertex, but at different times? We solve these by making the walkers directed.
In previous models, the walks could move in any direction on the lattice, provided they did not intersect themselves; in contrast, directed walkers can only move in two of the four possible directions. In particular, instead of using the lattice Z^2, we expand this lattice by a factor of √2, and then rotate it by an angle of π/4 clockwise, so we still have integer coordinates. This transformation is shown in Figure 4.4. Then we restrict the walkers so that they may only move in the positive x-direction. Finally, we specify that the walkers must start at the same x-coordinate; in this case we use the points (0, 0), (0, 2), .... An example of such walks is shown in Figure 4.5.

Since vicious walks are always moving in the positive x-direction, they will never intersect themselves. Furthermore, at any specified time (or length), they will always have the same

Fig. 4.5: An example configuration of 4 vicious walkers.

x-coordinate, so if two walkers visit the same vertex, it will be at the same time. The vicious walk model counts the configurations of fixed length where this does not happen.

As an aside, a walk model related to vicious walks is that of watermelons. This model restricts the endpoints of the walks so that they must end with the same separation as their starting separations. In other words, they must finish at their starting heights, up to a vertical translation. For example, Figure 4.6 shows a watermelon configuration. This model is so named because if the walkers are translated vertically so that they all start at the same point, they will also finish at the same point, giving a watermelon-like shape. To differentiate the standard vicious walk model from the watermelon, a free-endpoint model is referred to as a star model, again due to the general shape observed when the walkers are translated so they start together.

An equivalent task to finding the number of walk configurations for all lengths is to find the generating function (abbreviated to g.f.). The generating function of a sequence {a_n}, n = 0, 1, ... is defined to be the formal power series

\sum_{n \ge 0} a_n z^n.    (4.1)

It contains all the information that the series contains, so a closed form for the generating function is very valuable. More information on generating functions can be found in [141].

Generally, it is very hard to find an exact expression for the number of walks in a model (we often use the term walks for walk configurations). However, we do not always need to know the exact numbers to work out how they behave. Useful information can be gained by calculating the asymptotics: the behavior of the number of walks as the length n → ∞. For example, in many walk models, the number of walks of length n grows exponentially,

which usually means that

Number of walks = c \alpha^n \left( n^{\gamma} + a_2 n^{\gamma-1} + a_3 n^{\gamma-2} + \cdots \right) as n → ∞.    (4.2)

Using the notation a_n \sim b_n to denote the fact that \lim_{n \to \infty} a_n / b_n = 1, this can be written as

Number of walks \sim c \alpha^n n^{\gamma}.    (4.3)

Now we return to the directed vicious walk problem. This model was first introduced by Fisher in 1984 ([54]). In this paper, he cast the walkers as short-sighted drunks who shoot each other on sight. He then asked: what is the probability that all walkers survive for n steps? Fisher found that if there are p walkers in total, the probability of this occurrence decreases asymptotically like n^{-p(p-1)/4} as n → ∞. It is a simple matter to convert this to the total number of vicious configurations by observing that the number of unconstrained walks is 2^{np}, since there are np steps taken in total. This means that the number of vicious walks is asymptotically 2^{np} n^{-p^2/4+p/4}.

The vicious walk problem was also considered by Forrester ([55] and [56]), but with periodic boundary conditions (so that, in effect, the walks were on a cylinder). The model was also considered for arbitrary dimension by Essam and Guttmann in 1995 ([52]), who expressed the generating function for the number of walks in terms of generalised hypergeometric functions.

While it is good to get an asymptotic expression for the number of vicious walks, it would be preferable to derive an exact expression for this number. Such an expression was conjectured in 1991 ([4]) and proved by Guttmann, Owczarek and Viennot in 1998 ([65]), who proved the formula by relating the walks to Young tableaux. They found that if there are p walkers, then the total number of possible walks of length n is given by

\prod_{1 \le i \le j \le n} \frac{p+i+j-1}{i+j-1}.    (4.4)

In later papers, various modifications, tweaks and generalisations were introduced into the directed vicious walker model. In 2002 Guttmann and Vöge ([66] and [135]) introduced the concept of friendliness.
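The product in Equation (4.4) is easy to evaluate exactly. As an illustration (a sketch of our own, not code from the thesis), using exact rational arithmetic so that no intermediate rounding occurs:

```python
from fractions import Fraction

def vicious_walks(p, n):
    # Total number of configurations of p vicious walkers of length n,
    # evaluated directly from the product formula (4.4).
    total = Fraction(1)
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            total *= Fraction(p + i + j - 1, i + j - 1)
    return int(total)
```

For a single walker this reduces to 2^n, and for two walkers it produces the sequence 1, 3, 10, 35, ... of vicious star configurations.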
This is a relaxation of the vicious idea: instead of the walkers being prohibited from touching each other at all, we relax it by saying that they can touch, but for only n vertices at a time (this is called n-friendly walkers). Furthermore, the walkers still cannot cross each other, and three walkers may not occupy the same vertex. The extreme, where two walkers may touch for any length of time, is called ∞-friendly walkers, or GV ∞-friendly walkers. Figure 4.6 shows a configuration of friendly walkers. Note that vicious walkers can also be called 0-friendly.

Fig. 4.6: An example configuration of four 3-friendly walkers. The thicker lines contain more than one walker.

In [66], Guttmann and Vöge analysed the n-friendly walker model and found the generating function for 2 walkers with 0-, 1-, 2- and ∞-friendliness to be

\frac{1 + (1 + 2z^2\beta_n)\sqrt{1-4z^2}}{1 - 4z^2 + (1 - 2z^2 - 2z^4\beta_n)\sqrt{1-4z^2}}    (4.5)

where \beta_n = \frac{1 - 2^n z^{2n}}{1 - 2z^2} and n is the friendliness. In fact they actually derived a generalisation of this formula for the anisotropic case, where different variables are used for up-steps and down-steps. Also in this paper, Guttmann and Vöge conjectured the generating function for three 1-friendly walkers (also called osculating walkers) to be

\frac{3 - 15z - 4z^2 - 3(1-z)\sqrt{1-8z}}{8z^2(1+z)}.    (4.6)

This was recently proved (but not published) by Gessel ([60]); a proof was published by Bousquet-Mélou in 2005 ([30]).

Another similar variation to the vicious walk model was proposed by Tsuchiya and Katori in 1998 ([130]). In this paper they suggested another version of ∞-friendliness, which is similar to the GV model, except that any number of walkers may visit a site at any one time. This became known as the TK ∞-friendly walker model, to distinguish it from the GV model. Using a slightly different model (all the walkers started at the same point), Tsuchiya and Katori related the generating function of the walks to the partition function of a statistical mechanics model known as the chiral Potts model. In fact, vicious and friendly walks can be related to many other models, for example directed percolation ([35]) and random GOE matrices ([5]).

The concept of friendliness was incorporated into the vicious walk model when another

Fig. 4.7: Example configuration of four 3-friendly walkers in a strip of width 6.

restriction was added in a subsequent paper by Guttmann, Krattenthaler and Viennot ([89]). This restriction, called a wall for obvious reasons, prohibits any walker from going below the line y = 0. In this paper, it was found that the number of n-friendly walks with m steps and p walkers without a wall grows asymptotically like 2^{mp} m^{-p^2/4+p/4} (we have changed the variable of the number of steps to m to avoid clashing with the friendliness, n). Interestingly, this number does not actually depend on the friendliness of the walkers. They also found in this paper that the number of n-friendly walks with m steps, p walkers and a wall grows like 2^{mp} m^{-p^2/2}. Although, as expected, this is different from the number of walks without a wall, it still does not depend on the friendliness.

In the next paper in the series ([90]), Krattenthaler, Guttmann and Viennot added another restriction to the model. Instead of allowing the walkers to go as high as they wanted, they are now constrained from both the bottom and the top, with the lines y = 0 and y = L. This forms a horizontal strip of width L, as shown in Figure 4.7. Of course L must be greater than or equal to 2p − 2, where p is the number of walkers, so that all the walks start within the strip. This problem was also considered (in a different form) by Grabiner ([62]), and also by Brak, Essam and Owczarek ([31] and [32]) for vicious walks.

For this model, it was found that the number of walks for p m-step vicious walkers grows asymptotically like c \left( 2^p \prod_{s=1}^{p} \cos\frac{s\pi}{L+2} \right)^m, for some constant c = c(p, L). For the finite-width strip, the asymptotic growth does depend on the friendliness, as they also found that the number of TK ∞-friendly walks grows asymptotically like c' \left( 2^p \prod_{s=1}^{p} \cos\frac{s\pi}{L+2p} \right)^m. Although the TK ∞-friendly model is not actually an extension of the n-friendly model, it is equivalent to the GV ∞-friendly model for 2 walkers. This means that in the case of two walkers, the growth constants of vicious and ∞-friendly walkers are clearly different, and therefore there must be some change as the friendliness increases. It would be reasonable to expect that this also happens for greater numbers of walkers as well.

A little reflection shows that the difference between the half-plane (one wall) and finite strip models is not really surprising. The half-plane is essentially a two-dimensional model, while the finite strip model extends to infinity in only one direction.

The real question we would now like to look at is: how does the number of walks behave

for a strip of finite width as we adjust the friendliness? In particular, we know that the growth constant changes as we move from vicious to ∞-friendly walkers, but does the growth constant change with every n, or does it stay the same for some n and change only at certain n?

This chapter, which is a more detailed version of a previous paper ([36]), attempts to answer the above question. In Section 4.2, we show how we can guess the generating function of various models using Padé approximants. In Sections 4.3 and 4.4, we present two different methods that we use to generate the first few terms of the series of the number of walks: Section 4.3 is based on transfer matrices, and also gives the general transfer matrix for 2 walkers, while Section 4.4 is based on recurrences. Then we move to proofs of generating functions. In Section 4.5, we prove the generating function for one walker in a strip of any width. This generating function is well known, but it is instructive to compare both the result and the proof technique with later results. In Section 4.6, we prove various formulas for the generating functions when we keep two of the parameters (number of walkers, width, and friendliness) fixed and vary the other parameter. We managed to prove some results for varying friendliness and number of walkers. In Section 4.7, we discuss the behavior of the growth constants we have obtained, and then propose a possible extension to the model in Section 4.8. Finally, in Section 4.9, we look back on what we have done and propose possible ways to take our research further.

4.2 Finding the generating function

The first step in analysing the walks is to try to find the generating function. To do this, we first gain some idea of what form the generating function can take. This is done in the following lemma.

Lemma 4.2.1. The generating function for p n-friendly walkers in a strip of width L is a rational function.

Proof. Another way that we can look at the n-friendly walk model is by considering paths on a graph.
For example, suppose that we have just one walker in a strip of finite width. We consider all the possible y-coordinates of the walker to be states of a graph, and say that at any time, the system is in a certain state if the walker is at that height. We can then connect two states if and only if the walker can go from one to the other in exactly one step. We call this graph the transfer graph of the model, and show an example in Figure 4.8(a). Using this graph, we can now recast the problem of one walker as a path on the graph starting from state 0. We then have to find the total number of possible paths on this graph.

For greater numbers of vicious walkers, this conversion becomes slightly more complicated. The states still contain all possible heights for the walkers at any one time, but since there are now (say) p walkers, this then becomes a p-tuple of heights; so, for example, if a two-walker system is in state (1, 3), then the walkers are at heights 1 and 3. States are still connected if and only if one state can become the other after one unit of time. We show an example of this in Figure 4.8(b).

Fig. 4.8: Some simple transfer graphs. (a) Transfer graph for one walker in a strip of width 4. (b) Transfer graph for two vicious walkers in a strip of width 5.

This approach also works for 1-friendly walkers; however, when we get to 2-friendly walkers or above, the validity of each state depends not only on the previous state, but also on the states before that. This makes things trickier, but can be overcome by changing the states to ordered sets of n p-tuples (where n is the friendliness). The states represent the heights of the walkers at the current step and previous n − 1 steps, which is all we need to know to verify the validity of the next state. Using this, a representation of the problem as paths on a graph can still be achieved.

The important thing to note about the expression of the problem in this fashion is that (unlike a half-plane model) the transfer graphs all have a finite number of states, since there are only a finite number of ways you can place the walkers in a finite strip at any one time. Therefore, if we construct the adjacency matrix or transfer matrix of the graph (a matrix with one row/column for each state, containing 1s where the row and column states are connected and 0s otherwise), this matrix will also be finite. We can then use the following theorem, which we quote from [124, Theorem 4.7.2].

Theorem 4.2.2. If A is the transfer matrix of a system, then the generating function of paths from state i to state j is given by

(-1)^{i+j} \frac{\det(I - xA;\, j, i)}{\det(I - xA)}    (4.7)

where \det(I - xA;\, j, i) is the minor of I − xA obtained by deleting the jth row and ith column and taking the determinant of the resulting matrix.

Now we wish to find the total number of paths starting from a particular state.
This is a sum of terms of the form given by Theorem 4.2.2. Since A is finite-dimensional for any finite-width strip, both numerator and denominator are finite polynomials. Therefore the generating function is a sum of rational functions, and is therefore rational itself.

As an aside, the transfer matrix in the above proof has one row/column for each set of n p-tuples of heights. Since there are L + 1 possible heights in a strip of width L, the

transfer matrix has dimension (L+1)^{np} × (L+1)^{np}. From Theorem 4.2.2, this means that the numerator of the generating function has order at most (L+1)^{np} − 1 and the denominator has order at most (L+1)^{np}. In practice, however, the orders of the numerator and denominator are generally much lower.

Now that we know that the generating function must be rational, we can use Padé approximants (which we discussed in Section 3.6.2) to guess the generating function. To do this, we generate the first few terms of the series of the total number of walk configurations. Then we approximate the generating function by a rational function. Although Padé approximants can be used to approximate any generating function, we can go one step further. Because our generating function is rational, we can actually produce the exact generating function if the degree of the approximant is high enough. On the other hand, it is not always obvious when we have the exact function, as opposed to an approximation. However, by looking at the proof of Lemma 4.2.1, we see that both the numerator and denominator of the generating function can be expressed with integer coefficients. So when our approximants give us integer coefficients, it is reasonable to suppose that they are exact.

It is worth noting that because the approximants are exact, when we take an approximant that has higher-order polynomials than the actual generating function, our system of linear equations is redundant. We solve this by setting all necessary higher-order terms to 0. This can also tell us when the approximant is exact.

Once we have the generating function, it is a simple matter to calculate the growth constant: the reciprocal of the smallest positive real zero of the denominator gives us this value.

4.3 A transfer matrix algorithm

If we are given enough terms in the series for the number of walks, we can use Padé approximants to find the generating function and growth constant. However, we still need to generate the terms first.
One way to go about this is to use the adjacency matrix of the transfer graph, as discussed in Lemma 4.2.1. For small parameter values, we can calculate this matrix directly. For this we use the following lemma.

Lemma 4.3.1. For two n-friendly walkers in a strip of width L, the transfer matrix has the

block structure

        |  L−1   L−3   L−5   ⋯   |  L+1   L+1   ⋯   L+1
 -------+------------------------+----------------------
   L−1  |   T    0I0             |  0I0
   L−3  |  0I0    T    0I0       |
   L−5  |        0I0    T    ⋱   |
    ⋮   |               ⋱    T   |
 -------+------------------------+----------------------
   L+1  |  0I0                   |         T
   L+1  |  0I0                   |               ⋱
    ⋮   |   ⋮                    |                    T
   L+1  |  0I0                   |

where the blocks after the double line are repeated n times, T is the tridiagonal matrix with 0s on the diagonal and 1s on the off-diagonals, I is the identity matrix, and 0 is either a single row of 0s, a single column of 0s, or a matrix of zeros. The first row and column give the widths of each block; the dimensions of the matrices should be clear.

Proof. An important fact used implicitly in this proof is that the walkers must always be an even number of units apart from each other. Also, if two walkers remain a constant distance from each other (i.e. they take the same steps, but at a fixed distance from each other), then they essentially act like one walker in a strip of lesser width. In this case, the transfer matrix must be the same as that for one walker, which is T.

Firstly, let us consider the vicious case. The transfer matrix for this case is the top left section of the above matrix. We set up the states in the manner suggested in the proof of Lemma 4.2.1. The first block of L − 1 states represents the states where the walkers are two units apart: (0, 2), (1, 3), and so on, in that order. The second block of L − 3 states represents the states where the walkers are four units apart: (0, 4), (1, 5), and so on, again in that order. This continues in a similar manner, with the last block representing states where the walkers are as far apart as possible.

Now each step can only change the distance separating the walkers by −2, 0, or 2, as shown in Figure 4.9. Therefore the sections of the transfer matrix which connect blocks that are not adjacent to each other (in the matrix) must all be 0 matrices. For the sections

Fig. 4.9: The distance between walkers can only change by −2, 0, or 2 for any step.

connecting blocks to themselves, the walkers keep the same distance apart, and therefore the sections are T blocks. Finally, for the sections connecting blocks which are adjacent (in the matrix), there is only one way to step so that the distance separating the walkers increases or decreases. Since the states in each block are ordered by height, this gives the characteristic 0 I 0 sections of the transfer matrix. Putting these observations together, it can be seen that the transfer matrix for 2 vicious walkers is the top left section of the above matrix.

Now consider the case of arbitrary n (friendliness). Here we set up the states in a slightly different manner to that suggested in Lemma 4.2.1. Instead of keeping track of all the states we have visited in the last n steps, we really only have to know whether the two walkers can or cannot be together. So all we need to do is to keep track of how long they have been together, if they are; otherwise we can use the same states as vicious walkers. This is what we do, so the blocks of states before the double line in the transfer matrix are the states where the walkers are apart, and the blocks of states after the line are states where the walkers are together. However, to keep track of how long the walkers have been together, we attach a subscript to each of the latter states. This subscript gives the number of vertices that the walkers have been together prior to (and including) this vertex. So, for example, a state of (2, 2)_3 means that both walkers are at height 2, and they have been together for exactly 3 vertices, including this one. Then we make the first block of L + 1 states represent the states (0, 0)_1, (1, 1)_1, and so on, while the second set represents (0, 0)_2, (1, 1)_2, and so on, and this continues until we reach (L, L)_n.
Now if two walkers which are together step apart, they will end up 2 units apart. Again the ordering by height gives the 0 I 0 sections of the lower first column. Conversely, if the walkers are two units apart and step together, they have only been together for the current vertex, and must have a subscript of 1. Since it is impossible to move from a distance of 4 or more apart to the walkers being together in one step, or vice versa, we have the top right and bottom left sections of the above transfer matrix. Finally, if two walkers that are together stay together after one step, they have been together for one vertex more, and the subscript of the state must increase by 1. Apart from that, they act as a single walker in a strip of width L, which has transfer matrix T. This completes the transfer matrix as stated above.
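The action of this transfer matrix can be checked by iterating directly over the states it indexes: pairs of heights, together with the contact subscript for the together states. The following sketch (our own; the function name and state encoding are assumptions, and the contact rule is our reading of the n-friendly constraint) counts configurations of two n-friendly walkers in a strip this way:

```python
from collections import defaultdict

def strip_walk_counts(L, n, steps):
    # Series of walk counts for two n-friendly walkers in a strip of width L,
    # starting at heights 0 and 2. A state (a, b, r) holds the heights a <= b
    # and the number r of consecutive vertices the walkers have been together
    # (r = 0 whenever they are apart).
    state = defaultdict(int)
    state[(0, 2, 0)] = 1
    counts = [1]
    for _ in range(steps):
        new = defaultdict(int)
        for (a, b, r), c in state.items():
            for da in (-1, 1):
                for db in (-1, 1):
                    a2, b2 = a + da, b + db
                    if a2 < 0 or b2 > L or a2 > b2:
                        continue  # walls, or walkers crossing
                    if a2 == b2:
                        r2 = r + 1 if a == b else 1
                        if r2 > n:
                            continue  # together for too many vertices
                    else:
                        r2 = 0
                    new[(a2, b2, r2)] += c
        state = new
        counts.append(sum(state.values()))
    return counts
```

In a wide strip the first terms are 1, 1, 3, 6 for vicious walkers (the wall at y = 0 is already felt) and 1, 2, 5, 11 for osculating walkers.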

By taking powers of this transfer matrix and summing the entries in the first row, we were able to enumerate the walker series many times faster than by a direct enumeration algorithm. We can then guess the generating function using Padé approximants.

4.4 A method of recurrences

The transfer matrix method described in the above section works well, and provides extra information in the form of the entire transfer matrix (which can often be mined for more data). However, it does have a few shortcomings: the transfer matrix gets large very quickly with respect to width and friendliness, and it only works for 2 walkers. For situations where the transfer matrix method is unwieldy or inappropriate, we used a method of recurrences to generate the terms of the series. This method was suggested to us by a technique that Gessel used in [60], in his proof of the osculating 3-walker generating function. It is extremely efficient, taking only a linear amount of time in the number of steps to generate terms.

This algorithm starts out by generalising the problem. Suppose we wish to find the number of walks for two n-friendly walkers in a strip of width L. We define h(i, j, n′, m) to be the total number of walks of m steps for two n-friendly walkers in a strip of width L, given that they start at heights i and i + 2j respectively, and have been together (if j = 0) for n′ vertices before the start, not including the current vertex. We wish to find h(0, 1, 0, m) for arbitrary m. We will do this by calculating h for all values of its parameters.

For cases where the walkers are at illegal starting positions, h must be 0. This occurs when i < 0 (below the lower boundary), j < 0 (walkers cross), i + 2j > L (above the upper boundary) and n′ ≥ n ("too friendly"). Also, in situations where the walkers are in valid positions but have no more steps to take (m = 0), h must be 1.

We can now divide the remaining parameter values into two cases. In the first case, the walkers start together, i.e. j = 0.
In this case, we look at the possible first steps of the walks, as shown in Figure 4.10(a). The possibilities are:

- The walkers both step upwards. This increases i and n′ by 1 while still leaving j at 0. Also, m is reduced by 1, since we have taken a step and therefore have one less step remaining.

- The walkers both step downwards. This increases n′ by 1 and decreases i and m by 1. j is still 0.

- The walkers step apart. This decreases i and m by 1, increases j by 1, and forces n′ to be 0.

We do not have to worry about going to an invalid state, because the value of h will automatically be set to 0 for that state. This means that we have the equation

h(i, 0, n′, m) = h(i+1, 0, n′+1, m−1) + h(i−1, 0, n′+1, m−1) + h(i−1, 1, 0, m−1).    (4.8)
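This equation, together with the recurrence for separated walkers derived next (Equation (4.9)), translates directly into a memoized function. A sketch of ours (the wrapper make_h and the argument name r for the together-count are our own naming):

```python
from functools import lru_cache

def make_h(L, n):
    # h(i, j, r, m): number of m-step configurations of two n-friendly walkers
    # in a strip of width L, starting at heights i and i + 2j, having already
    # been together for r vertices before the start (meaningful when j == 0).
    @lru_cache(maxsize=None)
    def h(i, j, r, m):
        # illegal starting positions contribute nothing
        if i < 0 or j < 0 or i + 2 * j > L or (j == 0 and r >= n):
            return 0
        if m == 0:
            return 1
        if j == 0:
            # Equation (4.8): both step up, both step down, or step apart
            return (h(i + 1, 0, r + 1, m - 1)
                    + h(i - 1, 0, r + 1, m - 1)
                    + h(i - 1, 1, 0, m - 1))
        # apart case: both up, both down, step apart, step together
        return (h(i + 1, j, 0, m - 1) + h(i - 1, j, 0, m - 1)
                + h(i - 1, j + 1, 0, m - 1) + h(i + 1, j - 1, 0, m - 1))

    return h
```

Since the walkers start at heights 0 and 2, the series of walk counts is h(0, 1, 0, m); for L = 10 this gives 1, 1, 3, 6, ... when n = 0, and 1, 2, 5, 11, ... when n = 1.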

Fig. 4.10: Possible first steps for two walkers. (a) Possible steps if the walkers are together. (b) Possible steps if the walkers are apart.

The other case occurs when the walkers are apart (which means n′ = 0). Again we look at the possible first steps (shown in Figure 4.10(b)). The possibilities are:

- The walkers both step upwards. This increases i by 1, decreases m by 1, and keeps j the same.

- The walkers both step downwards. This is similar to the previous possibility, but decreases i by 1.

- The walkers step apart. This decreases i by 1 and increases j by 1. m goes down by 1.

- The walkers step together. This increases i by 1 and decreases j and m by 1. If the walkers end at the same height, we will still have n′ = 0, because they have not been together prior to this vertex.

Putting it all together, this gives the recurrence equation for j > 0:

h(i, j, 0, m) = h(i+1, j, 0, m−1) + h(i−1, j, 0, m−1) + h(i−1, j+1, 0, m−1) + h(i+1, j−1, 0, m−1).    (4.9)

Now to find h(0, 1, 0, m), we simply find h for all values of i, j, n′ and m′ for m′ < m and then apply the above equation. We will be able to find any value of h by adding values from the set of values of h with m decreased by 1. After finding the first few terms, we can then use Padé approximants to guess the generating function.

For three walkers, we can use the same principle, except that we need more variables: the height of the first (lowest) walker, the height differences between the first and second walkers and between the second and third walkers, the number of vertices that the first two walkers have been together, the number of vertices that the second two walkers have been together, and

the number of steps remaining. Since we do not have a general formula for the three-walker transfer matrix, this is the method of choice for calculating generating functions for three walkers.

Theoretically, this method will work for any number of walkers, but obviously it gets much more complicated as the number of walkers increases. We have used it for up to 4 walkers. The generating functions for some of these cases (which we have not proved) are in Appendix A. We also give their critical points in Appendix B.

4.5 One walker

If there is only one walker, the model is considerably simplified, because the issue of friendliness does not arise. The generating function for this case is well known (for example, in [88]). We will prove it again here using a generating function argument that is similar to proof techniques that we will use later on in the chapter. Also, it is instructive to compare this result with similar results for more walkers. From now on, we will assume that the range of a summation is from −∞ to ∞ unless specified otherwise.

Theorem 4.5.1. In a strip of width L, the (isotropic) generating function for one walk that ends on the x-axis is

g_L(x) = h_L(x)/h_{L+1}(x)   (4.10)

and the generating function for one walk with an arbitrary end-point is

f_L(x) = (1/h_{L+1}(x)) Σ_{i=0}^{L} x^i h_{L−i}(x)   (4.11)

where

h_L(x) = Σ_i (−1)^i C(L−i, i) x^{2i}   (4.12)

is a polynomial of degree 2⌊L/2⌋. If we define λ_± = (1 ± √(1 − 4x²))/2, then this becomes

g_L(x) = (λ_+^{L+1} − λ_−^{L+1}) / (λ_+^{L+2} − λ_−^{L+2})   (4.13)

and

f_L(x) = (1/(λ_+^{L+2} − λ_−^{L+2})) [ (λ_+^{L+1} − x^{L+1})/(1 − x/λ_+) − (λ_−^{L+1} − x^{L+1})/(1 − x/λ_−) ].   (4.14)

Proof. We first consider the case where the walk must end on the x-axis. We know that the walker returns to the x-axis at least once, so we divide the walk at all points where it returns to the x-axis, as shown in Figure 4.11. Now we look at the generating function of each of these divided segments.
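Before turning to the proof, the claimed ratio can be checked against a direct enumeration. The following sketch is ours (hypothetical helper names, assuming walker heights run over 0, ..., L): it expands h_L(x)/h_{L+1}(x) as a power series and compares it with a brute-force count of walks that return to height 0.

```python
from math import comb

def h_poly(L):
    # h_L(x) = sum_i (-1)^i C(L-i, i) x^{2i}, as in equation (4.12)
    coeffs = [0] * (L + 1)
    for i in range(L // 2 + 1):
        coeffs[2 * i] = (-1) ** i * comb(L - i, i)
    return coeffs

def series_ratio(num, den, terms):
    # power series of num(x)/den(x); assumes den[0] = 1
    out = []
    for m in range(terms):
        c = num[m] if m < len(num) else 0
        c -= sum(den[j] * out[m - j] for j in range(1, min(m, len(den) - 1) + 1))
        out.append(c)
    return out

def loop_counts(L, terms):
    # walks of a single walker on heights 0..L, starting and ending at 0
    out, state = [], {0: 1}
    for _ in range(terms):
        out.append(state.get(0, 0))
        new = {}
        for hgt, c in state.items():
            for nh in (hgt - 1, hgt + 1):
                if 0 <= nh <= L:
                    new[nh] = new.get(nh, 0) + c
        state = new
    return out

print(series_ratio(h_poly(3), h_poly(4), 8))   # series of g_3(x)
print(loop_counts(3, 8))                        # direct enumeration, width 3
```

Both computations agree, e.g. for L = 3 they give 1, 0, 1, 0, 2, 0, 5, 0.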

(a) A walk in a strip of width 3. (b) The divided walk.

Fig. 4.11: Dividing a single walk in a strip at the points where the walker returns to height 0.

Each of these segments consists of a walk in the strip, starting and ending at height 0, and not touching 0 in between. Therefore it can be described as an up-step, followed by a walk in a strip of width L−1 starting and ending at the equivalent of height 0 (i.e. the height of the lower boundary), followed by a down-step. Therefore the generating function for each segment is x² g_{L−1}(x). Since our original walk consists of an arbitrary number of these segments, we must have

g_L(x) = 1/(1 − x² g_{L−1}(x)).   (4.15)

We can now use induction on L. Obviously g_0(x) = 1, since a walker cannot walk in a strip of no width. Now assume that g_{L−1}(x) has the form given in Equation 4.10. Then

g_L(x) = (1 − x² g_{L−1}(x))^{−1}
 = [ Σ_i (−1)^i C(L−i, i) x^{2i} ] / [ Σ_i (−1)^i C(L−i, i) x^{2i} − x² Σ_i (−1)^i C(L−1−i, i) x^{2i} ]
 = [ Σ_i (−1)^i C(L−i, i) x^{2i} ] / [ Σ_i (−1)^i ( C(L−i, i) + C(L−i, i−1) ) x^{2i} ]
 = [ Σ_i (−1)^i C(L−i, i) x^{2i} ] / [ Σ_i (−1)^i C(L+1−i, i) x^{2i} ].   (4.16)
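Equation (4.15) also gives a convenient way to evaluate g_L at a point as a finite continued fraction, which can be cross-checked against the ratio h_L/h_{L+1}. A small sketch (the function name is ours):

```python
from fractions import Fraction

def g_cf(L, x):
    # iterate g_L = 1/(1 - x^2 g_{L-1}) with g_0 = 1, as in equation (4.15)
    g = Fraction(1)
    for _ in range(L):
        g = 1 / (1 - x * x * g)
    return g

x = Fraction(1, 3)
print(g_cf(3, x))                                # 63/55
print((1 - 2 * x**2) / (1 - 3 * x**2 + x**4))    # h_3/h_4 at x = 1/3: also 63/55
```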

By induction, the first part of the theorem is proved.

Now consider the case where the walker can have an arbitrary end-point. We can divide any such walk into two parts, separated by the last return of the walker to the x-axis. Again we demonstrate this in Figure 4.11. The first part is a walk in the strip which returns to 0, and thus has generating function g_L(x). The second part is either trivial (if the walker ends at 0), or consists of an up-step followed by a walk in a strip of width L−1, starting from the equivalent of height 0 and with arbitrary end-point. Therefore the second part has generating function 1 + x f_{L−1}(x). This gives us

f_L(x) = g_L(x)(1 + x f_{L−1}(x))
 = g_L(x) + x g_L(x) g_{L−1}(x)(1 + x f_{L−2}(x))
 = ...
 = g_L(x) + x g_L(x) g_{L−1}(x) + ⋯ + x^{L−1} g_L(x)⋯g_1(x) + x^L g_L(x)⋯g_1(x) f_0(x)
 = g_L(x) + x g_L(x) g_{L−1}(x) + ⋯ + x^{L−1} g_L(x)⋯g_1(x) + x^L g_L(x)⋯g_1(x) g_0(x)

since f_0(x) = g_0(x) = 1. Continuing,

f_L(x) = h_L(x)/h_{L+1}(x) + x (h_L(x)/h_{L+1}(x))(h_{L−1}(x)/h_L(x)) + ⋯ + x^L (h_L(x)/h_{L+1}(x)) ⋯ (h_0(x)/h_1(x))   (4.17)
 = (1/h_{L+1}(x)) Σ_{i=0}^{L} x^i h_{L−i}(x).   (4.18)

The alternate forms we have given for g_L(x) and f_L(x) come from expressing h_L(x) in the form

h_L(x) = (λ_+^{L+1} − λ_−^{L+1}) / √(1 − 4x²)   (4.19)

which can be derived from [84, Section 1.2.9, Exercise 15]. This then gives

g_L(x) = [ (λ_+^{L+1} − λ_−^{L+1})/√(1 − 4x²) ] · [ √(1 − 4x²)/(λ_+^{L+2} − λ_−^{L+2}) ] = (λ_+^{L+1} − λ_−^{L+1})/(λ_+^{L+2} − λ_−^{L+2})   (4.20)

and

f_L(x) = (1/h_{L+1}(x)) Σ_{i=0}^{L} x^i h_{L−i}(x)
 = (1/(λ_+^{L+2} − λ_−^{L+2})) Σ_{i=0}^{L} x^i (λ_+^{L−i+1} − λ_−^{L−i+1})
 = (1/(λ_+^{L+2} − λ_−^{L+2})) ( λ_+^{L+1} Σ_{i=0}^{L} (x/λ_+)^i − λ_−^{L+1} Σ_{i=0}^{L} (x/λ_−)^i )
 = (1/(λ_+^{L+2} − λ_−^{L+2})) ( λ_+^{L+1} (1 − (x/λ_+)^{L+1})/(1 − x/λ_+) − λ_−^{L+1} (1 − (x/λ_−)^{L+1})/(1 − x/λ_−) )
 = (1/(λ_+^{L+2} − λ_−^{L+2})) ( (λ_+^{L+1} − x^{L+1})/(1 − x/λ_+) − (λ_−^{L+1} − x^{L+1})/(1 − x/λ_−) ).   (4.21)

4.6 Results

Using the transfer matrix algorithm or method of recurrences and Padé approximants, we can find the generating function (and hence growth constant) of p n-friendly walkers in a strip of width L, for any given values of p, n and L (if they are small enough). However, if we want to get a sense of how the growth constant changes with respect to the parameters of the model, it would be much more useful to have a general formula for the generating function in terms of these parameters. On the other hand, while we can find generating functions for fixed parameter values simply by evaluating series, we have to prove theoretical results if we want to find a general formula, which is much harder.

Rather unsurprisingly, we were unable to prove a general result in all three parameters. However, all is not lost, as it is still of some value to prove a general formula in one of the parameters, while fixing the values of the other two. We were able to prove several formulas of this nature simply by calculating the generating functions for several cases, and guessing the general formula. Then we used generating function arguments similar to those in Section 4.5 to prove the actual formula. In this section, we give those results.

4.6.1 Variable friendliness

As we discussed in the introduction, we would like to observe how the growth constant changes as the model moves from 0-friendliness to ∞-friendliness. Therefore it makes sense to start by fixing the number of walkers and width and changing the friendliness. We do this in the next theorem.

Theorem 4.6.1. The generating function for two n-friendly walkers in a strip of width 3 is

Σ_{i=0}^{n} F_i x^i / ( 1 − x − Σ_{i=0}^{n−1} F_i x^{i+2} ) = ( 1 − F_{n+1} x^{n+1} − F_n x^{n+2} ) / ( 1 − 2x − x² + x³ + F_n x^{n+2} + F_{n−1} x^{n+3} )   (4.22)

(a) A configuration with dividing lines. (b) The divided walks.

Fig. 4.12: Dividing walks in a strip of width 3 when the walkers are 2 units apart.

where F_n is the nth Fibonacci number (F_0 = F_1 = 1, F_n = F_{n−1} + F_{n−2} for n ≥ 2). This extends to ∞-friendly walkers, which have generating function

[ 1/(1 − x − x²) ] / [ 1 − x − x²/(1 − x − x²) ] = 1/(1 − 2x − x² + x³).   (4.23)

Proof. Because the walkers must be an even distance apart at any time, and the strip has width 3, any configuration of walks must end either with the walkers together or 2 units apart. We start by analysing the second case.

For any walk configuration where the walks end 2 units apart, we divide the configurations at the x-coordinates where the walkers are 2 units apart, as shown in Figure 4.12. This gives us a series of segments where the walkers start and end 2 units apart, but are never 2 units apart in between. We will first find the generating function of one of these segments.

Suppose that the walkers in the segment start in the state (0, 2). We can do this without loss of generality because this state, when reflected in the line y = 3/2, gives us the only other possible starting state (1, 3). There are two possibilities for the first step: the walkers can either step in the same direction to (1, 3), which immediately ends the segment, or they can come together. In the former case, the segment contributes x to the generating function (one step). In the latter, the walkers can stay together for up to n vertices, but once they move apart, they will be 2 units apart and the segment ends.

Let us examine this case in more detail. The initial step, in which the walkers step together, contributes x to the generating function, as does the final step apart; the remaining

steps are identical to one walk starting at height 1 in a strip of width 3. However, to ensure that the bounds are not exceeded when the walkers step apart, the walk must end at height 1 or 2. We can now use Theorem 4.2.2, and the transfer matrix of one walk in a strip of width 3, which is the 4 × 4 matrix T, i.e.

T = [ 0 1 0 0 ]
    [ 1 0 1 0 ]
    [ 0 1 0 1 ]
    [ 0 0 1 0 ].   (4.24)

From Theorem 4.2.2, the generating function of a walk which starts at height 1 and ends at height 1 is

(1 − x²)/(1 − 3x² + x⁴).   (4.25)

The generating function of a walk which starts at height 1 and ends at height 2 is

x/(1 − 3x² + x⁴).   (4.26)

Therefore the generating function of one walk in a strip of width 3, starting at 1 and ending at 1 or 2, is

(1 + x − x²)/(1 − 3x² + x⁴) = 1/(1 − x − x²) = Σ_{i≥0} F_i x^i.   (4.27)

However, we have left out one restriction: the walk cannot be longer than n−1 steps (which is equivalent to visiting n vertices). To add this restriction we simply disallow all walks that are too long; this gives the generating function

Σ_{i=0}^{n−1} F_i x^i.   (4.28)

Recalling that we have 2 extra steps for this case, and the other case contributes x, we see that the generating function for each of the segments is

x + x² Σ_{i=0}^{n−1} F_i x^i.   (4.29)

The entire configuration (if the walks end apart) is a sequence of an arbitrary number of these segments, and therefore has the generating function

1 / ( 1 − x − Σ_{i=0}^{n−1} F_i x^{i+2} ).   (4.30)

Now suppose that we do not limit the endpoints of the walks. We divide the walks at the last point where the walkers are 2 units apart. The first segment is exactly the configuration

that we have analysed above, since the walkers end apart; the second segment is either empty (i.e. the walks end apart) or contains the walkers coming together and then walking together for up to n−1 steps. The first case contains no steps and therefore contributes 1 to the generating function. The second case consists of one step (contributing x) followed by the equivalent of one walk starting at height 1 in a strip of width 3, taking no more than n−1 steps. As above, we apply Theorem 4.2.2 to the transfer matrix, and find that if the length restriction is removed, the generating function of this walk is

(1 + x)/(1 − x − x²) = Σ_{i≥0} F_{i+1} x^i   (4.31)

which means that the generating function for the second section is

1 + Σ_{i=0}^{n−1} F_{i+1} x^{i+1} = Σ_{i=0}^{n} F_i x^i.   (4.32)

Putting it all together, the total generating function for the model that we want is the product of the generating functions for the first and second parts, i.e.

Σ_{i=0}^{n} F_i x^i / ( 1 − x − Σ_{i=0}^{n−1} F_i x^{i+2} )   (4.33)

which is the first expression given above. Now, multiplying a truncated Fibonacci series by 1 − x − x² gives us

( Σ_{i=0}^{n} F_i x^i ) (1 − x − x²)
 = Σ_{i=0}^{n} F_i x^i − Σ_{i=0}^{n} F_i x^{i+1} − Σ_{i=0}^{n} F_i x^{i+2}
 = 1 + Σ_{i=2}^{n} ( F_i − F_{i−1} − F_{i−2} ) x^i − (F_{n−1} + F_n) x^{n+1} − F_n x^{n+2}
 = 1 − F_{n+1} x^{n+1} − F_n x^{n+2}.   (4.34)

From this it is easily seen that multiplying the top and bottom of the first expression by 1 − x − x² gives us the second expression above.

Using a similar technique, we were able to extend this result to a strip of width 4.

Theorem 4.6.2. The generating function for two n-friendly walkers in a strip of width 4 is

[ Σ_{i=0}^{n} a_i x^i − 2( 3^k x^{n+2} + Σ_{i=k+2}^{n} 3^{i−2} x^{2i+1} ) ] / [ Σ_{i=0}^{k+1} b_i x^{2i} + 2( 3^k x^{n+3} + Σ_{i=k+2}^{n} 3^{i−2} x^{2i+2} ) ]
 = [ 1 + 2x − x² − 2x³ + 3^k x^{n+1} (−2 − 6x + 4x³) + 2(3^{2k}) x^{2n+3} ] / [ 1 − 8x² + 8x⁴ + 3^k x^{n+3} (9 − 4x²) − 2(3^{2k}) x^{2n+4} ]   (4.35)

if n = 2k+1 is odd, and

[ Σ_{i=0}^{n} a_i x^i + 3^{k−1} x^{n+1} + 2( −3^{k−1} x^{n+2} + Σ_{i=k+1}^{n} 3^{i−2} x^{2i+1} ) ] / [ Σ_{i=0}^{k} b_i x^{2i} − 2( 3^{k−1} x^{n+2} + Σ_{i=k+1}^{n} 3^{i−2} x^{2i+2} ) ]
 = [ 1 + 2x − x² − 2x³ + 3^{k−1} x^{n+1} (−3 − 8x − x² + 6x³) − 2(3^{2k−1}) x^{2n+3} ] / [ 1 − 8x² + 8x⁴ + 3^{k−1} x^{n+2} (5 + 4x²) + 2(3^{2k−1}) x^{2n+4} ]   (4.36)

if n = 2k is even, where the a_i are coefficients defined by a_0 = 1, a_1 = 2, a_{2i} = 2(3^{i−1}) (i ≥ 1), and a_{2i+1} = 4(3^{i−1}) (i ≥ 1), and the b_i are coefficients defined by b_0 = 1, b_1 = −5, and b_i = −7(3^{i−2}) (i ≥ 2). This also extends to the ∞-friendly case, which has generating function

[ 1 + 2x + 2x²(1 + 2x)/(1 − 3x²) ] / [ 1 − 5x² − 7x⁴/(1 − 3x²) ] = (1 + 2x − x² − 2x³)/(1 − 8x² + 8x⁴).   (4.37)

Proof. We work on a similar principle to the proof of Theorem 4.6.1. In that proof, there were only 2 possible states (1 up to reflection) where the walkers could be separate, and we divided the walks whenever we reached those states. We do the same here, but this time there are 4 such states: (0, 2), (1, 3), (2, 4), and (0, 4). As before, we will identify the states (0, 2) and (2, 4) with each other, as they are equivalent under reflection in y = 2, and denote this state by (0, 2)/(2, 4). We give these states the arbitrary ordering (0, 2)/(2, 4), (1, 3), (0, 4).

Firstly, suppose that the walks end apart. We divide the walks whenever one of these states is attained, as before. We can now recast the problem as a path-on-a-graph problem, but in a different way from before. Rather than checking the state of the walkers at every x-coordinate, we only have states on the graph whenever the walkers are not together. This means that we have exactly 3 states, so the corresponding transfer matrix is 3 × 3.
However, the elements of the transfer matrix must now contain the possibility of the walkers coming together in between the dividing points. To do this, we modify the transfer matrix: instead of only containing 1s and 0s, the elements are now the generating functions of all possible transitions from state to state. Note that this isn't a true transfer matrix; it corresponds to xA in Theorem 4.2.2 rather than A itself.

We now try to find this modified transfer matrix. For the column and row pertaining to (0, 4), all steps to or from (0, 4) must go to (1, 3), so the corresponding elements are x for the state (1, 3) and 0 otherwise. For the other states, we must cover 2 possibilities: either the walkers do not come together (in which case the generating function is easy to find) or they do (in which case we must resort to the model of 1 walker in a strip of width 4).
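As a concrete check on where this construction is heading, the n = 1 instance of the theorem (k = 0) can be compared against a brute-force enumeration of 1-friendly pairs in the width-4 strip. This sketch is ours (hypothetical helper names); it assumes the walkers start at heights 0 and 2 and that 1-friendliness allows at most one consecutive shared vertex.

```python
def pair_counts(L, n, mmax):
    # n-friendly pairs of walkers on heights 0..L, started at (0, 2), any endpoint;
    # state (lower, upper, r): r = current run of consecutive shared vertices
    state, out = {(0, 2, 0): 1}, []
    for _ in range(mmax + 1):
        out.append(sum(state.values()))
        new = {}
        for (a, b, r), c in state.items():
            for da in (-1, 1):
                for db in (-1, 1):
                    na, nb = a + da, b + db
                    if 0 <= na <= nb <= L:
                        nr = r + 1 if na == nb else 0
                        if nr <= n:
                            new[(na, nb, nr)] = new.get((na, nb, nr), 0) + c
        state = new
    return out

def series(num, den, terms):
    # power series of num(x)/den(x); assumes den[0] = 1
    out = []
    for m in range(terms):
        c = num[m] if m < len(num) else 0
        c -= sum(den[j] * out[m - j] for j in range(1, min(m, len(den) - 1) + 1))
        out.append(c)
    return out

# the theorem's n = 1 (k = 0) generating function for width 4, written out:
num = [1, 2, -3, -8, 0, 6]        # 1 + 2x - 3x^2 - 8x^3 + 6x^5
den = [1, 0, -8, 0, 17, 0, -6]    # 1 - 8x^2 + 17x^4 - 6x^6
print(pair_counts(4, 1, 6))
print(series(num, den, 7))
```

Both lines print 1, 2, 5, 8, 23, 36, 105, in agreement with equation (4.35) at n = 1.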

For example, let us find the element of the matrix in the row of (0, 2)/(2, 4) and column of (1, 3). We will take the starting state to be (0, 2) (since (2, 4) gives the same result). Now it is possible to go from (0, 2) to (1, 3) in one step, which contributes x to the generating function. Otherwise, any transition between these states must consist of an opening and closing step (contributing x²) bracketing the equivalent of a single walk in a strip of width 4, starting at height 1 and ending at height 2, and comprising no more than n−1 steps. Now the transfer matrix for such a walk is the 5 × 5 matrix T, i.e.

T = [ 0 1 0 0 0 ]
    [ 1 0 1 0 0 ]
    [ 0 1 0 1 0 ]
    [ 0 0 1 0 1 ]
    [ 0 0 0 1 0 ].   (4.38)

Using Theorem 4.2.2 again, removing the length restriction on the single walk gives us a generating function of

(x − x³)/(1 − 4x² + 3x⁴) = x/(1 − 3x²) = Σ_{i≥0} 3^i x^{2i+1}.   (4.39)

To ensure that there are no powers of x exceeding n−1, we must cut off the sum at i = ⌊(n−2)/2⌋, and therefore the modified transfer matrix entry is

x + Σ_{i=0}^{⌊(n−2)/2⌋} 3^i x^{2i+3} = x + x³ (1 − (3x²)^{⌊n/2⌋})/(1 − 3x²).   (4.40)

Using more or less identical procedures, we can find the remaining elements of the modified transfer matrix. The only point of interest is that when calculating entries for the column corresponding to (0, 2)/(2, 4) we must allow for the possibility of going to either state. That is why, for example, the ((1, 3), (0, 2)/(2, 4)) entry is twice the ((0, 2)/(2, 4), (1, 3)) entry, although any of the walks can be reversed. From our calculations, we find the modified transfer matrix to be

[ x² (1 − (3x²)^{⌈n/2⌉})/(1 − 3x²)        x + x³ (1 − (3x²)^{⌊n/2⌋})/(1 − 3x²)        0 ]
[ 2x + 2x³ (1 − (3x²)^{⌊n/2⌋})/(1 − 3x²)   x² + 2x⁴ (1 − (3x²)^{⌊(n−1)/2⌋})/(1 − 3x²)   x ]
[ 0                                        x                                          0 ].   (4.41)

We can now apply (the always-useful) Theorem 4.2.2 to this transfer matrix. As we have included the step weights in the matrix, the matrix replaces xA in the theorem. We use the theorem to find the generating functions of all configurations where the walkers end in a certain state. For example, suppose we wish to find the g.f. of walks where the walkers end at (0, 2)

or (2, 4). If we assume that n = 2k is even (the calculation for n odd is similar), Theorem 4.2.2 gives the numerator of the g.f. as

( 1 − x² − 2x⁴ (1 − (3x²)^{k−1})/(1 − 3x²) ) − x² = (1/(1 − 3x²)) ( 1 − 5x² + 4x⁴ + 2(3^{k−1}) x^{n+2} )   (4.42)

and the denominator as

( 1 − x² (1 − (3x²)^k)/(1 − 3x²) ) · (1/(1 − 3x²)) ( 1 − 5x² + 4x⁴ + 2(3^{k−1}) x^{n+2} )
 − ( x + x³ (1 − (3x²)^k)/(1 − 3x²) ) ( 2x + 2x³ (1 − (3x²)^k)/(1 − 3x²) )
 = (1/(1 − 3x²)) ( 1 − 8x² + 8x⁴ + 3^{k−1} x^{n+2} (5 + 4x²) + 2(3^{2k−1}) x^{2n+4} )   (4.43)

after some manipulation (we used Maple). We can do the same for all possible ending states, to get the generating function of all walks which end at a given state.

Now we return to the original problem, where the walkers are not constrained to end apart. As in Theorem 4.6.1, we divide the configurations at the last point where the walkers are apart. Figure 4.13 shows the full decomposition of a walk configuration. From above, we know the generating function of the first section for any given ending state. The second section is either empty with generating function 1, or contains a step where the walkers come together (g.f. x) followed by the equivalent of one walker in a strip of width 4. The starting height of this walker depends on the final state of the first section, but we can use Theorem 4.2.2 and the transfer matrix of one walker to find the generating function for any starting height. Without going into the details (which in any case are almost identical to previous calculations involving Theorem 4.2.2), it is simpler to consider the cases n even and odd separately, and then calculate the generating functions to each state (first section) and from each state (second section). These generating functions are shown in Table 4.1. To find the generating function of the total number of walks with unrestricted endings, we multiply the generating functions to and from each state, and sum the results over all states. After a lot of algebraic manipulation, we reach the second form in the statement of the theorem above.
In a similar fashion to Theorem 4.6.1, dividing both sides of the fraction by 1 − 3x² results in the first form, which seems to be the form with the lowest order in both numerator and denominator.

It is worth noting that it should be possible to use a similar procedure to the above two theorems to find the generating function in terms of n for two walkers in a strip of any width, but obviously the algebra becomes very much more complicated as the width increases. Interestingly, there appears to be a similar formula for three n-friendly walkers in a strip of width 4.

(a) Walks in a strip of width 4 with dividing lines. (b) The divided walks.

Fig. 4.13: Dividing walks in a strip of width 4 whenever the walkers are apart.

State (0, 2)/(2, 4):
  g.f. to state (n = 2k):    ( 1 − 5x² + 4x⁴ + 2(3^{k−1}) x^{n+2} ) / d_1(x)
  g.f. to state (n = 2k+1):  ( 1 − 5x² + 4x⁴ + 2(3^k) x^{n+3} ) / d_2(x)
  g.f. from state (n = 2k):   1 + x ( 1 + 2x − (1 + 2x) 3^k x^n ) / (1 − 3x²)
  g.f. from state (n = 2k+1): 1 + x ( 1 + 2x − (2 + 3x) 3^k x^n ) / (1 − 3x²)
State (1, 3):
  g.f. to state (n = 2k):    ( x − 2x³ − 3^k x^{n+3} ) / d_1(x)
  g.f. to state (n = 2k+1):  ( x − 2x³ − 3^k x^{n+2} ) / d_2(x)
  g.f. from state (n = 2k):   1 + x ( 1 + 2x + x² − (4 + 6x) 3^{k−1} x^n ) / (1 − 3x²)
  g.f. from state (n = 2k+1): 1 + x ( 1 + 2x + x² − (2 + 4x) 3^k x^n ) / (1 − 3x²)
State (0, 4):
  g.f. to state (n = 2k):    ( x² − 2x⁴ − 3^k x^{n+4} ) / d_1(x)
  g.f. to state (n = 2k+1):  ( x² − 2x⁴ − 3^k x^{n+3} ) / d_2(x)
  g.f. from state:            1 (for both parities)
where
  d_1(x) = 1 − 8x² + 8x⁴ + 3^{k−1} x^{n+2} (5 + 4x²) + 2(3^{2k−1}) x^{2n+4}
  d_2(x) = 1 − 8x² + 8x⁴ + 3^k x^{n+3} (9 − 4x²) − 2(3^{2k}) x^{2n+4}.

Tab. 4.1: Generating functions to and from all states for two walkers in a strip of width 4.

Fig. 4.14: A possible way in which walkers may separate and then join in a single step.

Theorem 4.6.3. The generating function for three n-friendly walkers in a strip of width 4 is

[ Σ_{i=0}^{n} a_i x^i − 5 Σ_{i=k+1}^{n} 3^{i−2} x^{2i} ] / [ Σ_{i=0}^{k} b_i x^{2i} + Σ_{i=k+1}^{n} 3^{i−2} x^{2i} ]
 = [ 1 + 2x + 2x² + 3^{k−1} x^{n+1} (−6 − 20x) + 5(3^{2k−1}) x^{2n+2} ] / [ 1 − 8x² + 16(3^{k−1}) x^{n+2} − 3^{2k−1} x^{2n+2} ]   (4.44)

if n = 2k is even, and

[ Σ_{i=0}^{n} a_i x^i + 3^k x^{n+1} + 5 Σ_{i=k+2}^{n} 3^{i−2} x^{2i} ] / [ Σ_{i=0}^{k} b_i x^{2i} − 3^{k+1} x^{n+1} − Σ_{i=k+2}^{n} 3^{i−2} x^{2i} ]
 = [ 1 + 2x + 2x² + 3^k x^{n+1} (−4 − 6x + 2x²) − 5(3^{2k}) x^{2n+2} ] / [ 1 − 8x² + 3^k x^{n+1} (2 + 8x²) + 3^{2k} x^{2n+2} ]   (4.45)

if n = 2k+1 is odd, where the a_i are coefficients defined by a_0 = 1, a_{2i−1} = 2(3^{i−1}) (i ≥ 1), and a_{2i} = 5(3^{i−1}) (i ≥ 1), and the b_i are coefficients defined by b_0 = 1, b_i = −5(3^{i−1}) (i ≥ 1). Again, this extends to the ∞-friendly case, which has generating function

[ 1 + (2x + 5x²)/(1 − 3x²) ] / [ 1 − 5x²/(1 − 3x²) ] = (1 + 2x + 2x²)/(1 − 8x²).   (4.46)

Proof. We approach this proof in a similar manner to the proofs of the previous two theorems: an essentially two-layered approach where we divide the configurations at certain points, find generating functions for all possible walks in each segment, and then combine these functions together into one big generating function for the required walks.

In the previous theorems, we chose our division points to be all the points where the walkers were apart. However, what we really want is for each segment to have an easily calculable generating function. In the previous theorems, our method of division worked because by forcing the walkers to stay together, we essentially created a one-walker model. It is tempting to try the same thing for 3 walkers and divide whenever the walkers are all separate, with the hope of forcing two of the walkers together and creating a vicious two-walker model (which is vicious because we cannot have 3 walkers at the same vertex).
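Before following the proof, the theorem can be spot-checked numerically. The sketch below is our own (hypothetical function name); it enumerates three walkers directly, assuming they start at heights 0, 2 and 4, that no vertex may be occupied by all three walkers, and that each adjacent pair may share at most n consecutive vertices.

```python
def triple_counts(L, n, mmax):
    # three n-friendly walkers on heights 0..L, started at (0, 2, 4), any endpoint;
    # r1, r2: current runs of consecutive shared vertices for the two adjacent pairs
    state, out = {((0, 2, 4), (0, 0)): 1}, []
    for _ in range(mmax + 1):
        out.append(sum(state.values()))
        new = {}
        for ((a, b, c), (r1, r2)), w in state.items():
            for da in (-1, 1):
                for db in (-1, 1):
                    for dc in (-1, 1):
                        na, nb, nc = a + da, b + db, c + dc
                        if not (0 <= na <= nb <= nc <= L):
                            continue
                        if na == nb == nc:          # three walkers may never meet
                            continue
                        nr1 = r1 + 1 if na == nb else 0
                        nr2 = r2 + 1 if nb == nc else 0
                        if nr1 > n or nr2 > n:
                            continue
                        key = ((na, nb, nc), (nr1, nr2))
                        new[key] = new.get(key, 0) + w
        state = new
    return out

print(triple_counts(4, 1, 6))   # n = 1: matches (1 + 2x + x^2)/(1 - 3x^2)
print(triple_counts(4, 2, 6))   # n = 2: matches (4.44) with k = 1
```

For n = 1 equation (4.45) with k = 0 reduces to (1 + 2x + x²)/(1 − 3x²), whose coefficients 1, 2, 4, 6, 12, 18, 36 the enumeration reproduces.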
However, this does not work, because it is possible to step from a state where two walkers are together to a state where two different walkers are together in a single step, and therefore without reaching a state (with integral x-coordinate) where all the walkers are separate, as shown in Figure 4.14.

In order to ensure that walkers which are together stick together within a segment, we

divide the walks not only when all walkers are separate (state (0, 2, 4)) but also when two walkers are together for the first vertex. Extending our notation for two-walker states in an intuitive manner, these states are (0, 2, 2)₁, (1, 1, 3)₁, (1, 3, 3)₁, and (2, 2, 4)₁. Note that any states where two walkers are together and at height 0 or 4 are reachable (for large enough n), but will not occur with subscript 1, as that would imply that one of the walkers had exceeded the boundary in the previous step. As before, we identify the states (0, 2, 2)₁ and (2, 2, 4)₁ with each other, since they are reflections of each other in y = 2. We also identify the states (1, 1, 3)₁ and (1, 3, 3)₁. As before, we denote these amalgamations by (0, 2, 2)₁/(2, 2, 4)₁ and (1, 1, 3)₁/(1, 3, 3)₁ respectively.

Now, as before, we set up a modified transfer matrix that contains the generating functions of all possible transitions between states. The actual calculations of these generating functions are similar to before, although we must of course note the qualitative difference between the states. For the row corresponding to (0, 2, 4), any one step must bring two walkers together, and in the next state all walkers must have odd y-coordinates. Since the walkers can go to either (1, 1, 3)₁ or (1, 3, 3)₁, the only non-zero entry in this row is in the column corresponding to the state (1, 1, 3)₁/(1, 3, 3)₁, where the entry is 2x.

Now let us look at the entry from (0, 2, 2)₁/(2, 2, 4)₁ to (0, 2, 4). By reflection, we can just count the paths from (0, 2, 2)₁ to (0, 2, 4). These paths will consist of a section where the two higher walkers stay together, after which they must separate, since they end separated. However, at the time when the walkers separate, either all walkers are separate, or the lower two walkers are at the same height for the first vertex. Therefore they must separate at a dividing state, and thus cannot separate before the final state (0, 2, 4).
The only state which can go to (0, 2, 4) in one step, but has the top two walkers joined, is (1, 3, 3) with any subscript. Since the top two walkers must stay together, any set of walks from (0, 2, 2)₁ to (1, 3, 3) is equivalent to two vicious walkers in a strip of width 4, starting from (0, 2) and ending at (1, 3). Since the original walkers are n-friendly, the vicious walkers must take no more than n−1 steps, but are unrestricted otherwise. From Theorem 4.3.1, the transfer matrix for two vicious walkers in a strip of width 4 is

[ 0 1 0 0 ]
[ 1 0 1 1 ]
[ 0 1 0 0 ]
[ 0 1 0 0 ]   (4.47)

where we take the states in the order (0, 2), (1, 3), (2, 4), and (0, 4). Applying Theorem 4.2.2, the g.f. of walks from (0, 2) to (1, 3) is

x/(1 − 3x²) = Σ_{i≥0} 3^i x^{2i+1}.   (4.48)

Applying the length restriction and remembering that we still need another step to get from

State (0, 2, 4):
  g.f. to state (n = 2k):    ( 1 − 6x² + 10(3^{k−1}) x^{n+2} − 3^{2k−1} x^{2n+2} ) / d_1(x)
  g.f. to state (n = 2k+1):  ( 1 − 6x² + 2(1 + x²)(3^k) x^{n+1} + 3^{2k} x^{2n+2} ) / d_2(x)
  g.f. from state:            1 (for both parities)
State (0, 2, 2)₁/(2, 2, 4)₁:
  g.f. to state (n = 2k):    ( 2x² − 2(3^k) x^{n+2} ) / d_1(x)
  g.f. to state (n = 2k+1):  ( 2x² − 2(3^{k+1}) x^{n+1} ) / d_2(x)
  g.f. from state (n = 2k):   (1 + x)(1 − 3^k x^n)/(1 − 3x²)
  g.f. from state (n = 2k+1): ( 1 + x − (1 + 3x) 3^k x^n ) / (1 − 3x²)
State (1, 1, 3)₁/(1, 3, 3)₁:
  g.f. to state (n = 2k):    ( 2x − 8x³ + 2(3^k) x^{n+3} ) / d_1(x)
  g.f. to state (n = 2k+1):  ( 2x − 8x³ + 2(3^k) x^{n+2} ) / d_2(x)
  g.f. from state (n = 2k):   (1 + 3x)(1 − 3^k x^n)/(1 − 3x²)
  g.f. from state (n = 2k+1): ( 1 + 3x − (1 + x) 3^{k+1} x^n ) / (1 − 3x²)
where
  d_1(x) = 1 − 8x² + 16(3^{k−1}) x^{n+2} − 3^{2k−1} x^{2n+2}
  d_2(x) = 1 − 8x² + 3^k x^{n+1} (2 + 8x²) + 3^{2k} x^{2n+2}.

Tab. 4.2: Generating functions to and from all states for three walkers in a strip of width 4.

(1, 3, 3) to (0, 2, 4) gives the entry in the modified transfer matrix as

x Σ_{i=0}^{⌊(n−2)/2⌋} 3^i x^{2i+1} = x² (1 − (3x²)^{⌊n/2⌋})/(1 − 3x²).   (4.49)

The remaining entries are calculated in a similar way. In each case we have set the divisions up so that the walkers which are together must stay together within a segment, except at the last point. Taking the states in the order (0, 2, 4), (0, 2, 2)₁/(2, 2, 4)₁ and (1, 1, 3)₁/(1, 3, 3)₁, the full modified transfer matrix is

[ 0                                         0                                         2x ]
[ x² (1 − (3x²)^{⌊n/2⌋})/(1 − 3x²)          x² (1 − (3x²)^{⌊n/2⌋})/(1 − 3x²)          x (1 − 2x² − x² (3x²)^{⌊(n−1)/2⌋})/(1 − 3x²) ]
[ x (1 − (3x²)^{⌊(n+1)/2⌋})/(1 − 3x²)       x (1 − (3x²)^{⌊(n+1)/2⌋})/(1 − 3x²)       x² (1 − (3x²)^{⌊n/2⌋})/(1 − 3x²) ].   (4.50)

What remains is for us to construct the possible end-segments. Again, if walkers which are together step apart, a dividing state must be reached, and therefore walkers which are together at the last dividing point must stay together. For (0, 2, 4), this means that there can be no further steps, but for the other states, we have the equivalent of two vicious walkers starting at either (0, 2) or (1, 3). Again we can find the required generating functions by applying Theorem 4.2.2. The results are shown in Table 4.2.

By multiplying the generating functions to and from each state and summing over all states, we achieve the second form given in the statement of the theorem.
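The vicious-pair transfer matrix (4.47) used in this proof is small enough to check by explicit path counting; the following sketch (our own helper name) verifies that walks from (0, 2) to (1, 3) are counted by x/(1 − 3x²), as in equation (4.48).

```python
# transfer matrix for two vicious walkers in a strip of width 4,
# states ordered (0,2), (1,3), (2,4), (0,4), as in equation (4.47)
T = [[0, 1, 0, 0],
     [1, 0, 1, 1],
     [0, 1, 0, 0],
     [0, 1, 0, 0]]

def paths(T, src, dst, length):
    # number of length-step paths from state src to state dst
    vec = [1 if s == src else 0 for s in range(len(T))]
    for _ in range(length):
        vec = [sum(vec[i] * T[i][j] for i in range(len(T))) for j in range(len(T))]
    return vec[dst]

# x/(1 - 3x^2) = x + 3x^3 + 9x^5 + 27x^7 + ...
print([paths(T, 0, 1, m) for m in (1, 3, 5, 7)])   # 1, 3, 9, 27
```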
Again, the first form is achieved by dividing both numerator and denominator by 1 − 3x².

Looking at the first forms given in the above theorems, a pattern emerges. Both numerator and denominator have fixed coefficients: as n increases, these coefficients stay the same. It seems that for any particular n, all coefficients up to x^n (and sometimes x^{n+1}) are fixed, while higher coefficients (if they exist) vary with n. We also notice that if we extend these fixed coefficients into an infinite series, we derive the ∞-friendly generating function. Unfortunately, it seems that as we increase the strip width, the order of the numerator and denominator grows, but the number of fixed coefficients stays the same, so we just get more unfixed (and unpredictable) coefficients.

4.6.2 Variable number of walkers

Although varying the friendliness (n) and analysing the effect is our main goal, it is still interesting to study the effect of other variables on the generating function and growth constant. Again, general formulas are difficult to prove, but for very small width-to-number-of-walkers ratios, the lack of room for the walkers to move in can result in very simple generating functions. We present a few such results in this section.

Theorem 4.6.4. The generating function for p vicious walkers in a strip of width 2p−1 is

1/(1 − x).   (4.51)

The generating function for p vicious walkers in a strip of width 2p is

(1 + x)/(1 − (p + 1)x²).   (4.52)

Proof. The first result is obvious, as there is only one possible configuration of walks at any length. For the second result, at positions with odd x-coordinate there is only one possible state that the walkers can be in. At positions with even x-coordinate, there are p+1 possible states. This can be seen by observing that there are p+1 points in the strip with even height, so there must be one empty space, which can be placed in any of p+1 places. All of these states can be reached from, and can go to, the only possible state with odd x-coordinate in one step. Thus the number of configurations is the sequence 1, 1, p+1, p+1, (p+1)², (p+1)², .... The generating function now follows.
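Theorem 4.6.4 is easy to confirm by direct enumeration. The sketch below (function name ours) counts p vicious walkers started at heights 0, 2, ..., 2p−2, with strictly increasing heights at every step.

```python
from itertools import product

def vicious_counts(p, width, mmax):
    # p vicious walkers on heights 0..width, started at (0, 2, ..., 2p-2), any endpoint
    start = tuple(2 * i for i in range(p))
    state, out = {start: 1}, []
    for _ in range(mmax + 1):
        out.append(sum(state.values()))
        new = {}
        for pos, c in state.items():
            for deltas in product((-1, 1), repeat=p):
                npos = tuple(h + d for h, d in zip(pos, deltas))
                if all(0 <= h <= width for h in npos) and all(
                        npos[i] < npos[i + 1] for i in range(p - 1)):
                    new[npos] = new.get(npos, 0) + c
        state = new
    return out

print(vicious_counts(2, 3, 5))   # width 2p-1 = 3: one configuration per length
print(vicious_counts(2, 4, 5))   # width 2p = 4: 1, 1, 3, 3, 9, 9
print(vicious_counts(3, 6, 5))   # width 2p = 6: 1, 1, 4, 4, 16, 16
```

The width-2p counts follow the pattern 1, 1, p+1, p+1, (p+1)², (p+1)², ... stated in the proof.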
Extending the width by one more unit gives us a much more complicated generating function, but it is still provable.

Theorem 4.6.5. The generating function for p vicious walkers in a strip of width 2p+1 is

f_p(x) = h_{p−1}(x)/h_{p+1}(x)   (4.53)

where

h_p(x) = Σ_i (−1)^{⌊(i+1)/2⌋} C(⌊(p+i)/2⌋, i) x^i   (4.54)

is a polynomial of degree p. Alternatively, if we define λ_± = (2 − x² ± x√(x² − 4))/2, then

h_p(x) = (1/2) ( 1 − (2x + x²)/(x√(x² − 4)) ) λ_+^k + (1/2) ( 1 + (2x + x²)/(x√(x² − 4)) ) λ_−^k   (4.55)

if p = 2k is even, and

h_p(x) = (1/2) ( 1 − x − (2x + x² − x³)/(x√(x² − 4)) ) λ_+^k + (1/2) ( 1 − x + (2x + x² − x³)/(x√(x² − 4)) ) λ_−^k   (4.56)

if p = 2k+1 is odd.

Proof. A number of properties of the conjectured generating function are instrumental in proving this theorem. For convenience we state these as a separate lemma.

Lemma 4.6.6. If f_p(x) and h_p(x) are as stated in Theorem 4.6.5, then

1. h_p(x) = h_{p−2}(x) − x h_{p−1}(−x)
2. h_p(x) + h_{p−2}(x) = (2 + (−1)^p x) h_{p−1}(x)
3. h_p(x) = (2 − x²) h_{p−2}(x) − h_{p−4}(x)
4. f_p(x) = 1/(2 − x² − f_{p−2}(x))
5. f_p(x) = (1 + f_{p−1}(x))/(3 − x² − f_{p−1}(x))
6. f_p(−x) = [ (f_{p−1}(x) − 1)/(f_{p+1}(x) − 1) ] f_{p+1}(x).

Proof. 1. Using Pascal's rule on the binomial coefficient,

h_p(x) = Σ_i (−1)^{⌊(i+1)/2⌋} C(⌊(p+i)/2⌋, i) x^i
 = Σ_i (−1)^{⌊(i+1)/2⌋} ( C(⌊(p−2+i)/2⌋, i) + C(⌊(p−2+i)/2⌋, i−1) ) x^i
 = h_{p−2}(x) + x Σ_i (−1)^{⌊(i+2)/2⌋} C(⌊(p−1+i)/2⌋, i) x^i
 = h_{p−2}(x) − x Σ_i (−1)^{⌊i/2⌋} C(⌊(p−1+i)/2⌋, i) x^i
 = h_{p−2}(x) − x h_{p−1}(−x)   (4.57)

since (−1)^{⌊i/2⌋} x^i = (−1)^{⌊i/2⌋+i} (−x)^i and (−1)^{⌊i/2⌋+i} = (−1)^{i−⌊i/2⌋} = (−1)^{⌊(i+1)/2⌋}. Note that this implies h_p(−x) = (h_{p−1}(x) − h_{p+1}(x))/x.

2. If p is even, then

(2 + (−1)^p x) h_{p−1}(x) = 2 Σ_i (−1)^{⌊(i+1)/2⌋} C(⌊(p−1+i)/2⌋, i) x^i + Σ_i (−1)^{⌊(i+1)/2⌋} C(⌊(p−1+i)/2⌋, i) x^{i+1}
 = Σ_i ( 2 (−1)^{⌊(i+1)/2⌋} C(⌊(p−1+i)/2⌋, i) + (−1)^{⌊i/2⌋} C(⌊(p−2+i)/2⌋, i−1) ) x^i.

Since p is even, for even i we have ⌊(p−1+i)/2⌋ = ⌊(p−2+i)/2⌋ = ⌊(p+i)/2⌋ − 1 and (−1)^{⌊i/2⌋} = (−1)^{⌊(i+1)/2⌋}, so the coefficient of x^i is

(−1)^{⌊(i+1)/2⌋} ( 2 C(⌊(p+i)/2⌋ − 1, i) + C(⌊(p+i)/2⌋ − 1, i−1) ) = (−1)^{⌊(i+1)/2⌋} ( C(⌊(p+i)/2⌋, i) + C(⌊(p−2+i)/2⌋, i) )

by Pascal's rule. For odd i we have ⌊(p−1+i)/2⌋ = ⌊(p+i)/2⌋, ⌊(p−2+i)/2⌋ = ⌊(p+i)/2⌋ − 1 and (−1)^{⌊i/2⌋} = −(−1)^{⌊(i+1)/2⌋}, so the coefficient of x^i is

(−1)^{⌊(i+1)/2⌋} ( 2 C(⌊(p+i)/2⌋, i) − C(⌊(p+i)/2⌋ − 1, i−1) ) = (−1)^{⌊(i+1)/2⌋} ( C(⌊(p+i)/2⌋, i) + C(⌊(p−2+i)/2⌋, i) )

again by Pascal's rule. In both cases this is the coefficient of x^i in h_p(x) + h_{p−2}(x), so

(2 + (−1)^p x) h_{p−1}(x) = h_p(x) + h_{p−2}(x).   (4.58)

The proof is similar for the case where p is odd.

3. From (1),

h_p(x) = h_{p−2}(x) − x h_{p−1}(−x)
 = h_{p−2}(x) − x ( h_{p−3}(−x) + x h_{p−2}(x) )
 = (1 − x²) h_{p−2}(x) + ( h_{p−2}(x) − h_{p−4}(x) )
 = (2 − x²) h_{p−2}(x) − h_{p−4}(x)   (4.59)

where we have used (1) twice more: once at argument −x to expand h_{p−1}(−x), and once to replace −x h_{p−3}(−x) by h_{p−2}(x) − h_{p−4}(x).
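Properties such as (3) are finite polynomial identities, so they can be verified mechanically for small p. The sketch below (our own helpers) builds h_p from equation (4.54), checks property 3, and expands f_2 = h_1/h_3 as a series.

```python
from math import comb

def hp(p):
    # h_p(x) = sum_i (-1)^floor((i+1)/2) C(floor((p+i)/2), i) x^i, equation (4.54)
    return [(-1) ** ((i + 1) // 2) * comb((p + i) // 2, i) for i in range(p + 1)]

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            out[i + j] += u * v
    return out

def polysub(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0) for i in range(n)]

def series(num, den, terms):
    # power series of num(x)/den(x); assumes den[0] = 1
    out = []
    for m in range(terms):
        c = num[m] if m < len(num) else 0
        c -= sum(den[j] * out[m - j] for j in range(1, min(m, len(den) - 1) + 1))
        out.append(c)
    return out

# property 3 of Lemma 4.6.6: h_p = (2 - x^2) h_{p-2} - h_{p-4}
for p in range(4, 10):
    assert hp(p) == polysub(polymul([2, 0, -1], hp(p - 2)), hp(p - 4))

# f_2 = h_1/h_3: according to Theorem 4.6.5 this counts two vicious walkers in width 5
print(series(hp(1), hp(3), 5))   # [1, 1, 3, 6, 14]
```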

4. From (3),

f_p(x) = h_{p−1}(x)/h_{p+1}(x) = h_{p−1}(x) / ( (2 − x²) h_{p−1}(x) − h_{p−3}(x) ) = 1/(2 − x² − f_{p−2}(x)).   (4.60)

Note that this implies f_p(x) = 2 − x² − 1/f_{p+2}(x).

5. From (2) and (3),

(1 + f_{p−1}(x)) / (3 − x² − f_{p−1}(x)) = ( h_p(x) + h_{p−2}(x) ) / ( (3 − x²) h_p(x) − h_{p−2}(x) )
 = ( h_p(x) + h_{p−2}(x) ) / ( h_p(x) + h_{p+2}(x) )
 = (2 + (−1)^p x) h_{p−1}(x) / ( (2 + (−1)^{p+2} x) h_{p+1}(x) )
 = h_{p−1}(x)/h_{p+1}(x) = f_p(x).   (4.61)

6. From (1),

f_p(−x) = h_{p−1}(−x)/h_{p+1}(−x) = [ (h_{p−2}(x) − h_p(x))/x ] / [ (h_p(x) − h_{p+2}(x))/x ]
 = ( h_{p−2}(x)/h_{p+2}(x) − h_p(x)/h_{p+2}(x) ) / ( h_p(x)/h_{p+2}(x) − 1 )
 = ( f_{p−1}(x) f_{p+1}(x) − f_{p+1}(x) ) / ( f_{p+1}(x) − 1 )
 = [ (f_{p−1}(x) − 1)/(f_{p+1}(x) − 1) ] f_{p+1}(x).   (4.62)

We return to the proof of Theorem 4.6.5. To prove the theorem, we will find a recurrence in p that the actual generating function satisfies, and prove that the stated generating function also satisfies it.

Note that at any particular time, there are exactly p+1 possible positions for the walkers to be in. This can be seen by observing that the walkers must have either all odd or all

Fig. 4.15: A configuration showing all possible even-height states for three vicious walkers in a strip of width 7. There is only one state where the lowest walker is at height 2.

even y-coordinates; in either case, there are p+1 such y-coordinates in a strip of width 2p+1, which means that there are p+1 choices for the only unoccupied y-coordinate (or "hole"), and thus p+1 choices for the entire set of walkers. Note in particular that if the first (lowest) walker has y-coordinate 2 or 3, the no-crossing constraint ensures that the lone hole is below the first walker, and thus there is exactly 1 possible arrangement for all the walkers, namely (2, 4, ..., 2p) if the first walker is at height 2 or (3, 5, ..., 2p+1) otherwise. Here we have again extended the notation of the earlier lemma in the logical manner. We show this situation in Figure 4.15.

Let the generating function of the walks be g_p(x). We shall show that g_p(x) = f_p(x). g_p(x) is also the g.f. of walks which start from (3, 5, ..., 2p+1), since that state is a reflection in the line y = p + 1/2 of the required starting state (0, 2, ..., 2p−2). We will construct a recurrence by considering the generating function of walks which start from (2, 4, ..., 2p), which we denote by ḡ_p(x). Since this state is the only state that is reachable from (3, 5, ..., 2p+1) after one step, we know that ḡ_p(x) is merely g_p(x) with the first step taken off. In other words,
    ḡ_p(x) = (g_p(x) − 1)/x.    (4.63)
Now consider a configuration of walks starting from (2, 4, ..., 2p). We divide the walks at all points where the first walker has a height of 2.
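The state count in the paragraph above is elementary, but a two-line enumeration makes it concrete (the helper names are ours):

```python
# The p+1 even-height states, by direct enumeration.
from itertools import combinations

def even_states(p):
    heights = range(0, 2 * p + 1, 2)        # the p+1 even heights 0, 2, ..., 2p
    return list(combinations(heights, p))   # walkers listed in increasing order

for p in range(1, 9):
    states = even_states(p)
    assert len(states) == p + 1             # C(p+1, p) = p+1 states
    # exactly one state has its lowest walker at height 2: (2, 4, ..., 2p)
    assert [s for s in states if s[0] == 2] == [tuple(range(2, 2 * p + 1, 2))]
print(even_states(3))  # [(0, 2, 4), (0, 2, 6), (0, 4, 6), (2, 4, 6)]
```

Choosing p occupied heights out of p+1 is the same as choosing the single hole, which is exactly the counting argument above.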
An example is shown in Figure 4.16. Looking at the first walker only, there are two possibilities for a divided segment, as demonstrated in Figure 4.16(b): either the walker steps up, in which case it steps down immediately and is at height 2, or the walker steps down, spends an unspecified amount of time oscillating between 0 and 1, and then returns to height 2. Because the highest p−1 walkers have only one position to go to when the first walker is at height 2 or 3, the first possibility has generating function x². The second possibility is more complicated. We can say that since the first walker starts at 2, the starting position for all walkers must be (2, 4, ..., 2p). In particular, the highest

Fig. 4.16: Dividing configurations when the first walker reaches height 2. (a) A configuration for three vicious walkers in a strip of width 7. (b) The first walk of the configuration. There are two possibilities for this walk. (c) The remaining walks of the configuration. Depending on the first walk, they are either totally constrained or are p−1 vicious walks in a strip.

Fig. 4.17: Possible non-trivial paths for the first walker in an end-segment.

p−1 walkers start from (4, 6, ..., 2p). These walkers never go below the line y = 2, since that would violate the vicious constraint. However, until the last point of the segment (where the first walker returns to 2), the only other restrictions on the walkers are the vicious constraint and the upper wall. Therefore we can treat the walkers, from the starting point of the segment to one step before the first walker returns to 2, as p−1 vicious walkers in a strip of width 2p−1 = 2(p−1)+1, namely the strip from y = 2 to y = 2p+1. The walkers start from the state (4, 6, ..., 2p). This is shown in Figure 4.16(c). Translating this model down by 2 units shows that its generating function is ḡ_{p−1}(x). However, we must restrict the lengths of these walks to be odd, because the first walker must end at height 1 (to take the next step to height 2). To do this we subtract ḡ_{p−1}(−x) from ḡ_{p−1}(x) and divide by 2. We must also multiply by a factor of x for the uncounted last step, as there is only a single possibility for this step since the first walker goes to height 2. So the generating function for the second possibility is
    (x/2)(ḡ_{p−1}(x) − ḡ_{p−1}(−x)) = ½(g_{p−1}(x) + g_{p−1}(−x)) − 1    (4.64)
which means that the total generating function for each segment is
    x² + ½(g_{p−1}(x) + g_{p−1}(−x)) − 1.    (4.65)
Now we look at the possible end-segments after the last division point. There are 3 possible configurations, of which the non-trivial ones are shown in Figure 4.17. In the first possibility, the walks end at the last division point, so the end-segment has generating function 1. In the second, the first walker steps up to height 3, and must end there. This leaves exactly one possibility for the remaining walkers, so the segment has generating function x. The last possibility is that the first walker steps down to 1 and does not return to 2.
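The claim g_p(x) = f_p(x) can already be checked directly for p = 2 by brute force. The sketch below (our own; names are ours) counts two vicious walkers in the strip of heights 0..5 starting from (0, 2), and compares with the series of h_1(x)/h_3(x) = (1 − x)/(1 − 2x − x² + x³), our reading of the conjectured f_2.

```python
# Brute-force check of g_2(x) = f_2(x) for two vicious walkers in heights 0..5.
from functools import lru_cache

TOP = 5  # strip of width 2p+1 = 5 for p = 2: heights 0..5

@lru_cache(maxsize=None)
def walks(a, b, m):
    """Number of m-step configurations from ordered heights a < b."""
    if m == 0:
        return 1
    total = 0
    for da in (-1, 1):
        for db in (-1, 1):
            na, nb = a + da, b + db
            if 0 <= na < nb <= TOP:          # walls plus the vicious constraint
                total += walks(na, nb, m - 1)
    return total

# Taylor coefficients of 1/(1 - 2x - x^2 + x^3): c_m = 2c_{m-1} + c_{m-2} - c_{m-3}
M = 14
c = [1] + [0] * (M - 1)
for m in range(1, M):
    c[m] = 2 * c[m - 1] + (c[m - 2] if m >= 2 else 0) - (c[m - 3] if m >= 3 else 0)
series = [c[m] - (c[m - 1] if m else 0) for m in range(M)]   # multiply by (1 - x)

counts = [walks(0, 2, m) for m in range(M)]
assert counts == series
print(counts[:6])  # [1, 1, 3, 6, 14, 31]
```

The two sequences agree term by term, which is exactly what the recurrence argument in this section establishes for all p.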
We can use the same argument as above to relate the remaining walkers to p−1 vicious walkers in a strip of width 2p−1. However, this time the first walker does not need to end at height 1, so the length can be even or odd. Furthermore, the first walker takes at least 1 step, so we must ensure that the remaining walkers have length at least 1. This is done by subtracting 1 from the generating function to give a g.f. of ḡ_{p−1}(x) − 1. Therefore the total generating

function for an end-segment is
    1 + x + ḡ_{p−1}(x) − 1 = x + (g_{p−1}(x) − 1)/x.    (4.66)
We now put all our generating functions together. The generating function for p vicious walkers in a strip of width 2p+1 starting from (2, 4, ..., 2p) is ḡ_p(x). But these walks consist of an arbitrary number of divided segments, followed by exactly one end-segment. Therefore
    (g_p(x) − 1)/x = [ 1 / (1 − (x² + ½(g_{p−1}(x) + g_{p−1}(−x)) − 1)) ] (x + (g_{p−1}(x) − 1)/x)    (4.67)
and so
    g_p(x) = 1 + (x² + g_{p−1}(x) − 1) / (2 − x² − ½(g_{p−1}(x) + g_{p−1}(−x)))
           = (2 + g_{p−1}(x) − g_{p−1}(−x)) / (4 − 2x² − g_{p−1}(x) − g_{p−1}(−x)).    (4.68)
Since g_1(x) = f_1(x) = 1/(1 − x − x²), all that remains is to show that f_p(x) satisfies the above recurrence. This is equivalent to showing that
    (4 − 2x²) f_p(x) − f_p(x) f_{p−1}(x) − f_p(x) f_{p−1}(−x) − f_{p−1}(x) + f_{p−1}(−x) − 2 = 0.    (4.69)
Now we can apply Lemma 4.6.6, (4), (5) and (6):
    (4 − 2x²) f_p(x) − f_p(x) f_{p−1}(x) − f_p(x) f_{p−1}(−x) − f_{p−1}(x) + f_{p−1}(−x) − 2
    = (4 − 2x²) f_p(x) − f_p(x) f_{p−1}(x) − ((f_{p−2}(x) − 1)/(f_p(x) − 1)) f_p(x)² − f_{p−1}(x) + ((f_{p−2}(x) − 1)/(f_p(x) − 1)) f_p(x) − 2
    = (4 − 2x² − f_{p−1}(x)) f_p(x) − ((f_{p−2}(x) − 1)/(f_p(x) − 1)) (f_p(x) − 1) f_p(x) − f_{p−1}(x) − 2
    = (4 − 2x² − f_{p−1}(x) − f_{p−2}(x) + 1) f_p(x) − f_{p−1}(x) − 2
    = (5 − 2x² − f_{p−1}(x) − (2 − x² − 1/f_p(x))) f_p(x) − f_{p−1}(x) − 2
    = (3 − x² − f_{p−1}(x)) f_p(x) + 1 − f_{p−1}(x) − 2
    = (3 − x² − f_{p−1}(x)) (1 + f_{p−1}(x)) / (3 − x² − f_{p−1}(x)) − (1 + f_{p−1}(x))
    = 0.    (4.70)

The alternate form of h_p(x) can be derived by calculating the generating function of

h_p(x). From Lemma 4.6.6 (3), we have
    Σ_{p≥4} h_p(x) y^p = (2 − x²) Σ_{p≥4} h_{p−2}(x) y^p − Σ_{p≥4} h_{p−4}(x) y^p    (4.71)
    Σ_{p≥0} h_p(x) y^p = 1 + (1−x)y + (1−x−x²)y² + (1−2x−x²+x³)y³ + (2−x²)y² (Σ_{p≥0} h_p(x) y^p − 1 − (1−x)y) − y⁴ Σ_{p≥0} h_p(x) y^p    (4.72)
    (Σ_{p≥0} h_p(x) y^p)(1 − (2−x²)y² + y⁴) = 1 + (1−x)y − (1+x)y² − y³    (4.73)
    Σ_{p≥0} h_p(x) y^p = (1 + (1−x)y − (1+x)y² − y³) / (1 − (2−x²)y² + y⁴).    (4.74)
Expanding in partial fractions gives the alternate form.

As an interesting aside, we know the actual growth constant for the vicious walk model from [90]: for p walkers in a strip of width L, it is 2^p ∏_{s=1}^p cos(sπ/(L+2)). By equating this with the growth constants that we derive from our generating functions, we derive the trigonometric identities
    ∏_{s=1}^p 2 cos(sπ/(2p+1)) = 1    (4.75)
    ∏_{s=1}^p 2 cos(sπ/(2p+2)) = √(p+1)    (4.76)
and since the inverse of the growth constant is a zero of the denominator of the generating function, we also have
    h_{p+1}( (2^p ∏_{s=1}^p cos(sπ/(2p+3)))^{−1} ) = 0.    (4.77)
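Both the generating function (4.74) and the identities above can be checked by machine. The sketch below (helper names are ours) expands (4.74) in y via the linear recurrence its denominator implies and compares against the closed form for h_p, then tests (4.75), (4.76), and the zero statement (4.77), the last in the equivalent form h_{p+1}(2cos((p+1)π/(2p+3))) = 0, in floating point.

```python
# Machine checks: (i) expanding (4.74) in y reproduces the h_p(x) polynomials,
# via H_p = (2 - x^2) H_{p-2} - H_{p-4} + N_p, N(y) = 1 + (1-x)y - (1+x)y^2 - y^3;
# (ii) the product identities and the zero of h_{p+1} hold numerically.
from math import comb, cos, pi, prod, sqrt

def h(p):
    # closed form (4.54), as a coefficient list in x
    return [(-1) ** ((i + 1) // 2) * comb((p + i) // 2, i) for i in range(p + 1)]

def padd(a, b, sa=1, sb=1):          # sa*a + sb*b for coefficient lists
    n = max(len(a), len(b))
    return [sa * (a[i] if i < len(a) else 0) + sb * (b[i] if i < len(b) else 0)
            for i in range(n)]

def pshift(a, k):                     # multiply by x^k
    return [0] * k + list(a)

def trim(a):
    a = list(a)
    while a and a[-1] == 0:
        a.pop()
    return a

# (i) series in y
N = {0: [1], 1: [1, -1], 2: [-1, -1], 3: [-1]}
H = []
for p in range(12):
    rec = padd(padd(H[p - 2], pshift(H[p - 2], 2), 2, -1) if p >= 2 else [],
               H[p - 4] if p >= 4 else [], 1, -1)
    H.append(trim(padd(rec, N.get(p, []))))
for p in range(12):
    assert H[p] == trim(h(p))

# (ii) trigonometric identities
def hval(p, x):
    return sum(c * x ** i for i, c in enumerate(h(p)))

for p in range(1, 12):
    assert abs(prod(2 * cos(s * pi / (2 * p + 1)) for s in range(1, p + 1)) - 1) < 1e-9
    assert abs(prod(2 * cos(s * pi / (2 * p + 2)) for s in range(1, p + 1)) - sqrt(p + 1)) < 1e-9
    assert abs(hval(p + 1, 2 * cos((p + 1) * pi / (2 * p + 3)))) < 1e-7
print(H[4])  # [1, -2, -3, 1, 1], i.e. h_4 = 1 - 2x - 3x^2 + x^3 + x^4
```

All checks pass for the small p tested, consistent with the algebra on this page.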

In fact, the first identity is fairly simple to derive by standard methods:
    ∏_{i=1}^p 2 cos(iπ/(2p+1)) sin(iπ/(2p+1)) = ∏_{i=1}^p sin(2iπ/(2p+1))
        = sin(2π/(2p+1)) sin(4π/(2p+1)) ⋯ sin((2p−2)π/(2p+1)) sin(2pπ/(2p+1))
        = sin(2π/(2p+1)) sin(4π/(2p+1)) ⋯ sin(3π/(2p+1)) sin(π/(2p+1))
        = ∏_{i=1}^p sin(iπ/(2p+1)),    (4.78)
where we use sin(π − x) = sin x on the last ⌈p/2⌉ terms. But since this also implies
    ∏_{s=1}^{p+1} 2 cos(sπ/(2p+3)) = 1,    (4.79)
the third identity can be restated as
    h_{p+1}(2 cos((p+1)π/(2p+3))) = 0.    (4.80)
Furthermore, using a combination of the first and second identities, we can restate the growth constant for vicious walkers. It turns out to be
    2^p ∏_{s=1}^p cos(sπ/(2p+2k)) = 2^{1−k} √(p+k) ∏_{i=1}^{k−1} sec((p+i)π/(2p+2k))    (4.81)
if L = 2p+2k−2 is even and
    2^p ∏_{s=1}^p cos(sπ/(2p+2k+1)) = 2^{−k} ∏_{i=1}^k sec((p+i)π/(2p+2k+1))    (4.82)
if L = 2p+2k−1 is odd.

4.7 Growth constants

Recalling our motivation in the introduction, we wanted to look at the dependence of the growth constant on n, the friendliness, for fixed width and number of walkers. We will denote the growth constant for p n-friendly walkers in a strip of width L by µ_{p,n}(L). By analyzing the zeros of the denominators of our calculated generating functions, it is immediately apparent that the growth constant increases monotonically with n. We show a plot of µ against n in

Fig. 4.18: Growth constants vs. friendliness for 2 walkers, width 3 (plus signs) and 4 (crosses).

Figure 4.18.

Now, we would like to know the nature of this dependence. By taking a log-plot of our growth constants, it seems very likely that the relationship is exponential in nature. By fitting lines to the points, we derive approximate relationships of the form µ_{2,∞}(L) − µ_{2,n}(L) ≈ A rⁿ (equations (4.83) and (4.84)); for width 3 the fitted amplitude is A = 0.415(2). The standard errors are shown in brackets. We show the log-plot for width 4 in Figure 4.19.

In our previous paper ([36]), we noted that the vicious growth constant for two walkers is 4 cos(π/(L+2)) cos(2π/(L+2)) and the ∞-friendly growth constant is 4 cos(π/(L+4)) cos(2π/(L+4)). Because of this, we speculated that the growth constant for finite friendliness might take the similar form 4 cos(π/(L+λ₂(n))) cos(2π/(L+λ₂(n))). Unfortunately, this is (rather obviously, as it turned out) not the case, as simply calculating possible values for λ₂(1) by finding the growth constants for L = 4 and L = 5 produces different values. However, the idea is that the dependence of the growth constant on the width of the strip is of a similar nature for any friendliness, and this seems to be true. We show a plot of this in Figure 4.20.

Fig. 4.19: Log-plot of Figure 4.18 for width 4, with asymptotic fitted line.

Fig. 4.20: Growth constant vs. strip width for 2 walkers, for vicious (plus signs) and 1-friendly (crosses) walkers.

Fig. 4.21: Example configuration for three 4-friendly walks in a strip of width 5, bandwidth 3.

4.8 Bandwidth

Extending the model to cases with p > 2 suggests a natural generalization. The restriction that at most two walkers can meet at any one point (which is the sole difference between the GV ∞-friendly model and the TK ∞-friendly model) seems to be relatively arbitrary. This suggests that we can create another parameter of the model, which we call bandwidth (denoted by b), which denotes the maximum number of walkers that can meet at any point or line. Naturally, the concept of bandwidth is only relevant if p, the number of walkers, is greater than 2. The GV ∞-friendly model then has b = 2, the smallest possible value it can take without forcing the walkers to be vicious, and the TK model has b = p, the largest possible value it can take. We show an example configuration with higher bandwidth in Figure 4.21.

We were able to adapt the method of recurrences described in Section 4.4 for higher bandwidth. Using this method, we calculated some generating functions for p = b = 3, for small widths. The results are shown in Appendix A. The most notable feature of these generating functions is that the degree of both numerator and denominator grows more rapidly with n than in the b = 2 case. Also the pattern of fixed coefficients which we observed above appears to be absent.

Theorem. The number of 1-friendly walks with m steps, for fixed p and L, is independent of the bandwidth b for b > 1.

Proof. Since there are exactly two bonds that a walk can traverse to reach any point, if we require more than two walkers to reach the same site, then at least 2 walkers must share a bond. However, the walkers are osculating, so this is impossible. Thus the number of walks for any b is the same as the number of walks with b = 2.

4.9 Conclusion

In this chapter, we have analysed the n-friendly directed walks model in a horizontal strip of finite width.
Firstly, we showed how we could guess the generating functions of these walks

from the first few terms, using Padé approximants. Then we showed how to generate those terms, by investigating the general transfer matrix for 2 walkers, or with a more general method based on recurrences. By extending patterns in these generating functions, we were then able to prove, by means of generating function recurrence arguments, a number of general results in one of the parameters of the model (number of walkers, strip width and friendliness). We proved several cases where the other parameters were small. We then analysed the growth constants for the various models, and found that they depend in an exponential manner on the friendliness of the walkers. We also proposed a new extension to this model, the bandwidth.

We could extend this research by trying to prove general results for other parametric values, for example 2 walkers with width 5 and arbitrary friendliness. Theoretically, it should at least be possible (if extremely tedious) to calculate the generating function for 2 walkers in any fixed strip width using similar techniques to those which we have been using. It would be extremely valuable if we could somehow combine all these results and prove a general generating function for 2 walkers with any friendliness or strip width, but that is still some distance away.

Another direction that we could look at is further modifications to the model. Bandwidth has not been fully explored, although its usefulness has not really been established; we could also look, among other options, at watermelons instead of stars, or assign different weights to steps rather than a standard weight for each step (in particular, we could look at the anisotropic generating functions). There are many modifications that we could possibly make, but we think that our current model already displays most of the important features of this class of models.

5. MEAN UNKNOTTING TIMES

5.1 Introduction

A very interesting and important area of human biology is the study of the behavior of DNA strands. DNA famously has a double helix structure, but at large scale we can think of the double helix structure as a single strand. Then the DNA can more or less be thought of as being a long line that is tangled with itself. This is shown in Figure 5.1, taken from [138]. For a cell to replicate, the DNA inside it must be untangled.

A good way to describe this process is through the use of knots. A knot is formally defined as a single, simple, and closed curve in 3 dimensions. While DNA is not always closed, this still provides a useful model of entanglement. In the study of knots, we are generally interested in how the knot is tangled, rather than the exact position of the curve. Thus the exact dimensions of the knot are usually unimportant. Instead, the most common representation of a knot is through its embedding, which is a drawing of the knot, projected down into a (usually unspecified) plane. However, where the knot crosses itself on the plane (a crossing), indications are made as to which strand of the knot lies higher than the other strand. This is usually done by breaking the lower strand as it approaches the crossing. Some knots are shown via their embeddings in Figure 5.2.

It is immediately obvious that one knot can have many distinct embeddings, depending on the plane in which it is projected. Not only is this so, but it is possible to have two knots which are different in 3-space, but which have the same embedding. So embeddings are not necessarily unique representations of knots. What we really wish to represent from a knot, which the embedding encompasses, is its entanglement: the way in which it is tangled with itself. Informally, when two knots have the same entanglement, we say that they are equivalent. The equivalence class of any knot is known as its knot type. This leads to the very important question: when are two knots equivalent?
The intuitive answer is obvious: two knots are equivalent when one can be transformed into the other by moving the strands in a continuous physical fashion, without breaking or cutting any strand. What remains is to make this formal. This is done by defining the Reidemeister moves. These are operations which we can perform on a knot which do not change its type. They were first introduced in 1927 by Reidemeister ([117]). The first move involves untwisting or twisting a loop. The second involves separating two strands which are not tangled with each other, or putting one on top of the other. The third move involves moving a strand underneath a crossing of two strands with which it is not tangled. These three moves are shown in Figure 5.3. We define two knots as equivalent if and only if one can be reached from the other through

Fig. 5.1: Electron micrograph of tangled DNA (from [138]).

Fig. 5.2: Some knots.

Fig. 5.3: Reidemeister moves. (a) Move 1. (b) Move 2. (c) Move 3.

a succession of Reidemeister moves. Intuitively, the Reidemeister moves are all physical moves: if the knot was a tangled piece of string, all of these moves would be physically possible simply by moving the string. Note that it is possible to define other sets of moves which will serve the same purpose as the Reidemeister moves, so they are not unique.

Of special importance in knot theory is the concept of the unknot. This is the knot which can be embedded as just a single loop which does not cross itself. The unknot is, in a sense, the simplest knot possible. However, it is not always obvious whether any given knot, with a given embedding, is equivalent to the unknot (unknotted) or not. By looking at and mentally manipulating the knot via the Reidemeister moves, this can sometimes be worked out, but there is no fixed process of applying the moves to reach the unknot (or prove that the knot is not unknotted). We need other, more efficient ways to calculate if two knots are equivalent.

This gives rise to the concept of knot invariants. Knot invariants are numerical properties of knots which are the same for any two knots which are equivalent. Ideally, they should be different for any two knots which are not equivalent, although this is not always the case. Therefore knot invariants are generally more useful in telling whether two knots are different, rather than whether they are the same. For a property to be a knot invariant, it must be unchanged under the Reidemeister moves. Some important knot invariants are the Jones polynomial ([78]) and the knot invariant that we will use, the Alexander polynomial. For more information on knot invariants and some links to statistical mechanics, see [143].

The Alexander polynomial of a knot is a polynomial in one variable that can be calculated from the crossings of the knot. It was invented by Alexander in 1928 ([1]).
The Alexander polynomial is identical for equivalent knots (and is therefore a true knot invariant), and although there are pairs of non-equivalent knots which have the same Alexander polynomial, the frequency of these occurrences is very small. In fact, the smallest (i.e. least number of crossings) knot which has the same Alexander polynomial as the unknot, but is not unknotted, has 11 crossings. So the Alexander polynomial is unambiguous when it comes to identifying knots with fewer than 11 crossings.

Technically, the Alexander polynomial is defined via topological avenues, but an easier way would be to define it recursively. Firstly, the notion of an Alexander polynomial is extended so that it is defined for tangled sets of knots (called links). Next, the Alexander polynomial of an unknot is 1, and that of a disjoint union of two or more unknots (an unlink) is 0. Now suppose that we wish to calculate the Alexander polynomial of a link L, which we call Δ_L(t). If we take one crossing of L and change it to the positions shown in Figure 5.4, we then call the resulting links L_+, L_0 and L_−, one of which will be L. The Alexander polynomial can then be defined by the equation
    Δ_{L_+}(t) − Δ_{L_−}(t) + (t^{1/2} − t^{−1/2}) Δ_{L_0}(t) = 0.    (5.1)
Although an elegant definition, the recursive definition of the Alexander polynomial does not lend itself well to calculation. By choosing the crossing of L wisely, we can always ensure that both of the remaining links are simpler than L. However, it is not always clear which crossing to choose.

(a) L_+. (b) L_0. (c) L_−.
Fig. 5.4: Different positions for a single crossing.

The Alexander polynomial may be calculated efficiently in the following way (taken from [98]): take a knot L with a particular embedding (the Alexander polynomial will be the same for all embeddings). Then orient the knot in one particular direction. This is done by giving a direction to the knot strand that it follows the whole way around the knot, and is represented by arrows. We show this in Figures 5.5(a) and 5.5(b).

Now suppose that L has n crossings. Start at any arbitrary point on the knot and travel in the direction of the orientation. Whenever an underpass (a crossing where the strand we are travelling on goes below the crossing strand) is reached, assign that crossing a number which is one more than the previous underpass (starting from 1). Since all crossings have exactly one strand which goes underneath, every crossing will have exactly one number assigned to it. Then divide the knot into segments divided by the underpasses, and assign each segment a number which is the same number as the labelling of the next underpass (according to the orientation of the knot). We show this by continuing with our example in Figure 5.5.

Now define an n × n matrix called the knot matrix, where each row is determined by the correspondingly numbered underpass. Suppose that the underpass labelled k goes under a segment labelled i. If i = k or i = k+1 (where the labelings are circular so that n+1 is equivalent to 1), then the elements of the knot matrix in row k are a_{k,k} = −1, a_{k,k+1} = 1 and all other elements are 0. If i ≠ k, k+1, then we further classify each underpass according to whether the overpass approaches from the right or the left, as shown in Figure 5.6.
If the overpass approaches from the right as we travel along the direction of the underpass, then the elements in row k are a_{k,k} = −t, a_{k,k+1} = 1, and a_{k,i} = t − 1, with all other elements 0. If the overpass approaches from the left, the elements a_{k,k} and a_{k,k+1} are swapped around. In our example, the knot matrix is

    [ −t     1     0    t−1 ]
    [ t−1    1    −t     0  ]
    [  0    t−1   −t     1  ]    (5.2)
    [ −t     0    t−1    1  ]

The Alexander polynomial is the determinant of any (n−1) × (n−1) minor of the knot matrix, multiplied by a factor of ±t^m so that its lowest-power term in t is a positive constant.
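The minor-determinant recipe is easy to mechanize. The sketch below (our own; names are ours) implements it over integer coefficient lists in t and runs it on the matrix (5.2), with the signs as we have reconstructed them. As a second check, the same row rules applied to a standard trefoil diagram, our own example, where each underpass k passes under segment k−1 and every overpass approaches from the right, give the familiar trefoil polynomial.

```python
# Knot-matrix -> Alexander polynomial, with polynomials in t as coefficient lists.
from itertools import permutations

def det(M):
    """Determinant by Leibniz expansion; entries are coefficient lists in t."""
    n = len(M)
    acc = {}
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):                      # parity by counting inversions
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        poly = [sign]
        for i in range(n):                      # multiply the chosen entries
            entry = M[i][perm[i]]
            new = [0] * (len(poly) + len(entry) - 1)
            for a, ca in enumerate(poly):
                for b, cb in enumerate(entry):
                    new[a + b] += ca * cb
            poly = new
        for k, ck in enumerate(poly):
            acc[k] = acc.get(k, 0) + ck
    out = [acc.get(k, 0) for k in range(max(acc) + 1)]
    while out and out[-1] == 0:
        out.pop()
    return out

def alexander(M):
    """Determinant of the minor deleting the last row and column, times +-t^m."""
    poly = det([row[:-1] for row in M[:-1]])
    while poly and poly[0] == 0:                # divide out powers of t
        poly = poly[1:]
    if poly and poly[0] < 0:                    # make the constant term positive
        poly = [-c for c in poly]
    return poly

NT, ONE, ZERO, TM1 = [0, -1], [1], [0], [-1, 1]  # -t, 1, 0, t-1

example = [                                      # the matrix (5.2) as reconstructed
    [NT,   ONE,  ZERO, TM1],
    [TM1,  ONE,  NT,   ZERO],
    [ZERO, TM1,  NT,   ONE],
    [NT,   ZERO, TM1,  ONE],
]
trefoil = [
    [NT,  ONE, TM1],
    [TM1, NT,  ONE],
    [ONE, TM1, NT],
]
print(alexander(example), alexander(trefoil))  # [1, -3, 1] [1, -1, 1]
```

The example matrix normalizes to 1 − 3t + t², and the trefoil to 1 − t + t², both as expected for those knots.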

(a) An example knot. (b) An orientation for the knot. (c) An underpass numbering for the knot. The starting point is indicated. (d) A segment numbering for the knot.
Fig. 5.5: Calculating the Alexander polynomial of an example knot.

(a) Approaching from the left. (b) Approaching from the right.
Fig. 5.6: Different types of crossing.

Fig. 5.7: The action of topoisomerase II (from [118]).

Therefore the Alexander polynomial of the knot in our example is
    −t^{−1} (−t + 3t² − t³) = 1 − 3t + t².    (5.3)
The Alexander polynomial gives us a way of distinguishing knots, so now we can start doing calculations with them. We return to our original idea of modelling DNA strands by knots. In our model, we hypothesise that the DNA is twisted in a knot-like fashion. Thus we can represent a strand of DNA by a single knot. Now, in order to replicate, the strand of DNA must be untangled. The shape of DNA is changed by enzymes known as topoisomerases; in particular, it becomes disentangled by the use of the enzyme topoisomerase II, which acts on the DNA by breaking it at one point, passing another section of the DNA through the break, and then rejoining the broken ends. This process is well known; for a more detailed account, see [118], [137] or [136]. The process is illustrated in Figure 5.7 (taken from [118]).

We model this physical process by means of reversing crossings. To reverse a crossing in a knot, we swap the strands of a crossing so that the strand which was previously on top now lies on the bottom, and vice versa. The remainder of the knot is unchanged. We show this process in Figure 5.8. The action of reversing a crossing produces an identical effect to the topoisomerase enzyme. To continue the analogy, we will then need to transform a knot into the unknot by


More information

Direct linearization method for nonlinear PDE s and the related kernel RBFs

Direct linearization method for nonlinear PDE s and the related kernel RBFs Direct linearization method for nonlinear PDE s and the related kernel BFs W. Chen Department of Informatics, Uniersity of Oslo, P.O.Box 1080, Blindern, 0316 Oslo, Norway Email: wenc@ifi.io.no Abstract

More information

Discontinuous Fluctuation Distribution for Time-Dependent Problems

Discontinuous Fluctuation Distribution for Time-Dependent Problems Discontinos Flctation Distribtion for Time-Dependent Problems Matthew Hbbard School of Compting, University of Leeds, Leeds, LS2 9JT, UK meh@comp.leeds.ac.k Introdction For some years now, the flctation

More information

Section 7.4: Integration of Rational Functions by Partial Fractions

Section 7.4: Integration of Rational Functions by Partial Fractions Section 7.4: Integration of Rational Fnctions by Partial Fractions This is abot as complicated as it gets. The Method of Partial Fractions Ecept for a few very special cases, crrently we have no way to

More information

Linear Strain Triangle and other types of 2D elements. By S. Ziaei Rad

Linear Strain Triangle and other types of 2D elements. By S. Ziaei Rad Linear Strain Triangle and other tpes o D elements B S. Ziaei Rad Linear Strain Triangle (LST or T6 This element is also called qadratic trianglar element. Qadratic Trianglar Element Linear Strain Triangle

More information

2 Faculty of Mechanics and Mathematics, Moscow State University.

2 Faculty of Mechanics and Mathematics, Moscow State University. th World IMACS / MODSIM Congress, Cairns, Astralia 3-7 Jl 9 http://mssanz.org.a/modsim9 Nmerical eamination of competitie and predator behaior for the Lotka-Volterra eqations with diffsion based on the

More information

Relativity II. The laws of physics are identical in all inertial frames of reference. equivalently

Relativity II. The laws of physics are identical in all inertial frames of reference. equivalently Relatiity II I. Henri Poincare's Relatiity Principle In the late 1800's, Henri Poincare proposed that the principle of Galilean relatiity be expanded to inclde all physical phenomena and not jst mechanics.

More information

Complexity of the Cover Polynomial

Complexity of the Cover Polynomial Complexity of the Coer Polynomial Marks Bläser and Holger Dell Comptational Complexity Grop Saarland Uniersity, Germany {mblaeser,hdell}@cs.ni-sb.de Abstract. The coer polynomial introdced by Chng and

More information

3.3 Operations With Vectors, Linear Combinations

3.3 Operations With Vectors, Linear Combinations Operations With Vectors, Linear Combinations Performance Criteria: (d) Mltiply ectors by scalars and add ectors, algebraically Find linear combinations of ectors algebraically (e) Illstrate the parallelogram

More information

ON THE PERFORMANCE OF LOW

ON THE PERFORMANCE OF LOW Monografías Matemáticas García de Galdeano, 77 86 (6) ON THE PERFORMANCE OF LOW STORAGE ADDITIVE RUNGE-KUTTA METHODS Inmaclada Higeras and Teo Roldán Abstract. Gien a differential system that inoles terms

More information

arxiv: v1 [math.co] 25 Sep 2016

arxiv: v1 [math.co] 25 Sep 2016 arxi:1609.077891 [math.co] 25 Sep 2016 Total domination polynomial of graphs from primary sbgraphs Saeid Alikhani and Nasrin Jafari September 27, 2016 Department of Mathematics, Yazd Uniersity, 89195-741,

More information

Minimal Obstructions for Partial Representations of Interval Graphs

Minimal Obstructions for Partial Representations of Interval Graphs Minimal Obstrctions for Partial Representations of Interal Graphs Pael Klaík Compter Science Institte Charles Uniersity in Prage Czech Repblic klaik@ik.mff.cni.cz Maria Samell Department of Theoretical

More information

Chords in Graphs. Department of Mathematics Texas State University-San Marcos San Marcos, TX Haidong Wu

Chords in Graphs. Department of Mathematics Texas State University-San Marcos San Marcos, TX Haidong Wu AUSTRALASIAN JOURNAL OF COMBINATORICS Volme 32 (2005), Pages 117 124 Chords in Graphs Weizhen G Xingde Jia Department of Mathematics Texas State Uniersity-San Marcos San Marcos, TX 78666 Haidong W Department

More information

The Real Stabilizability Radius of the Multi-Link Inverted Pendulum

The Real Stabilizability Radius of the Multi-Link Inverted Pendulum Proceedings of the 26 American Control Conference Minneapolis, Minnesota, USA, Jne 14-16, 26 WeC123 The Real Stabilizability Radis of the Mlti-Link Inerted Pendlm Simon Lam and Edward J Daison Abstract

More information

Lecture 3. (2) Last time: 3D space. The dot product. Dan Nichols January 30, 2018

Lecture 3. (2) Last time: 3D space. The dot product. Dan Nichols January 30, 2018 Lectre 3 The dot prodct Dan Nichols nichols@math.mass.ed MATH 33, Spring 018 Uniersity of Massachsetts Janary 30, 018 () Last time: 3D space Right-hand rle, the three coordinate planes 3D coordinate system:

More information

arxiv: v1 [math.co] 10 Nov 2010

arxiv: v1 [math.co] 10 Nov 2010 arxi:1011.5001 [math.co] 10 No 010 The Fractional Chromatic Nmber of Triangle-free Graphs with 3 Linyan L Xing Peng Noember 1, 010 Abstract Let G be any triangle-free graph with maximm degree 3. Staton

More information

Lecture Notes On THEORY OF COMPUTATION MODULE - 2 UNIT - 2

Lecture Notes On THEORY OF COMPUTATION MODULE - 2 UNIT - 2 BIJU PATNAIK UNIVERSITY OF TECHNOLOGY, ODISHA Lectre Notes On THEORY OF COMPUTATION MODULE - 2 UNIT - 2 Prepared by, Dr. Sbhend Kmar Rath, BPUT, Odisha. Tring Machine- Miscellany UNIT 2 TURING MACHINE

More information

An Investigation into Estimating Type B Degrees of Freedom

An Investigation into Estimating Type B Degrees of Freedom An Investigation into Estimating Type B Degrees of H. Castrp President, Integrated Sciences Grop Jne, 00 Backgrond The degrees of freedom associated with an ncertainty estimate qantifies the amont of information

More information

6.4 VECTORS AND DOT PRODUCTS

6.4 VECTORS AND DOT PRODUCTS 458 Chapter 6 Additional Topics in Trigonometry 6.4 VECTORS AND DOT PRODUCTS What yo shold learn ind the dot prodct of two ectors and se the properties of the dot prodct. ind the angle between two ectors

More information

MATH2715: Statistical Methods

MATH2715: Statistical Methods MATH275: Statistical Methods Exercises III (based on lectres 5-6, work week 4, hand in lectre Mon 23 Oct) ALL qestions cont towards the continos assessment for this modle. Q. If X has a niform distribtion

More information

Series expansions from the corner transfer matrix renormalization group method

Series expansions from the corner transfer matrix renormalization group method Series expansions from the corner transfer matrix renormalization group method 1 Andrew Rechnitzer 2 1 LaBRI/The University of Melbourne 2 University of British Columbia January 27, 2011 What is the CTMRG

More information

Sources of Non Stationarity in the Semivariogram

Sources of Non Stationarity in the Semivariogram Sorces of Non Stationarity in the Semivariogram Migel A. Cba and Oy Leangthong Traditional ncertainty characterization techniqes sch as Simple Kriging or Seqential Gassian Simlation rely on stationary

More information

Concept of Stress at a Point

Concept of Stress at a Point Washkeic College of Engineering Section : STRONG FORMULATION Concept of Stress at a Point Consider a point ithin an arbitraril loaded deformable bod Define Normal Stress Shear Stress lim A Fn A lim A FS

More information

Change of Variables. f(x, y) da = (1) If the transformation T hasn t already been given, come up with the transformation to use.

Change of Variables. f(x, y) da = (1) If the transformation T hasn t already been given, come up with the transformation to use. MATH 2Q Spring 26 Daid Nichols Change of Variables Change of ariables in mltiple integrals is complicated, bt it can be broken down into steps as follows. The starting point is a doble integral in & y.

More information

Modelling, Simulation and Control of Quadruple Tank Process

Modelling, Simulation and Control of Quadruple Tank Process Modelling, Simlation and Control of Qadrple Tan Process Seran Özan, Tolgay Kara and Mehmet rıcı,, Electrical and electronics Engineering Department, Gaziantep Uniersity, Gaziantep, Trey bstract Simple

More information

Second-Order Wave Equation

Second-Order Wave Equation Second-Order Wave Eqation A. Salih Department of Aerospace Engineering Indian Institte of Space Science and Technology, Thirvananthapram 3 December 016 1 Introdction The classical wave eqation is a second-order

More information

Math 144 Activity #10 Applications of Vectors

Math 144 Activity #10 Applications of Vectors 144 p 1 Math 144 Actiity #10 Applications of Vectors In the last actiity, yo were introdced to ectors. In this actiity yo will look at some of the applications of ectors. Let the position ector = a, b

More information

NUCLEATION AND SPINODAL DECOMPOSITION IN TERNARY-COMPONENT ALLOYS

NUCLEATION AND SPINODAL DECOMPOSITION IN TERNARY-COMPONENT ALLOYS NUCLEATION AND SPINODAL DECOMPOSITION IN TERNARY-COMPONENT ALLOYS COLLEEN ACKERMANN AND WILL HARDESTY Abstract. The Cahn-Morral System has often been sed to model the dynamics of phase separation in mlti-component

More information

Predicting Popularity of Twitter Accounts through the Discovery of Link-Propagating Early Adopters

Predicting Popularity of Twitter Accounts through the Discovery of Link-Propagating Early Adopters Predicting Poplarity of Titter Acconts throgh the Discoery of Link-Propagating Early Adopters Daichi Imamori Gradate School of Informatics, Kyoto Uniersity Sakyo, Kyoto 606-850 Japan imamori@dl.soc.i.kyoto-.ac.jp

More information

Online Stochastic Matching: New Algorithms and Bounds

Online Stochastic Matching: New Algorithms and Bounds Online Stochastic Matching: New Algorithms and Bonds Brian Brbach, Karthik A. Sankararaman, Araind Sriniasan, and Pan X Department of Compter Science, Uniersity of Maryland, College Park, MD 20742, USA

More information

Setting The K Value And Polarization Mode Of The Delta Undulator

Setting The K Value And Polarization Mode Of The Delta Undulator LCLS-TN-4- Setting The Vale And Polarization Mode Of The Delta Undlator Zachary Wolf, Heinz-Dieter Nhn SLAC September 4, 04 Abstract This note provides the details for setting the longitdinal positions

More information

IN this paper we consider simple, finite, connected and

IN this paper we consider simple, finite, connected and INTERNATIONAL JOURNAL OF MATHEMATICS AND SCIENTIFIC COMPUTING (ISSN: -5), VOL., NO., -Eqitable Labeling for Some Star and Bistar Related Graphs S.K. Vaidya and N.H. Shah Abstract In this paper we proe

More information

On the tree cover number of a graph

On the tree cover number of a graph On the tree cover nmber of a graph Chassidy Bozeman Minerva Catral Brendan Cook Oscar E. González Carolyn Reinhart Abstract Given a graph G, the tree cover nmber of the graph, denoted T (G), is the minimm

More information

Turbulence and boundary layers

Turbulence and boundary layers Trblence and bondary layers Weather and trblence Big whorls hae little whorls which feed on the elocity; and little whorls hae lesser whorls and so on to iscosity Lewis Fry Richardson Momentm eqations

More information

Spanning Trees with Many Leaves in Graphs without Diamonds and Blossoms

Spanning Trees with Many Leaves in Graphs without Diamonds and Blossoms Spanning Trees ith Many Leaes in Graphs ithot Diamonds and Blossoms Pal Bonsma Florian Zickfeld Technische Uniersität Berlin, Fachbereich Mathematik Str. des 7. Jni 36, 0623 Berlin, Germany {bonsma,zickfeld}@math.t-berlin.de

More information

ON TRANSIENT DYNAMICS, OFF-EQUILIBRIUM BEHAVIOUR AND IDENTIFICATION IN BLENDED MULTIPLE MODEL STRUCTURES

ON TRANSIENT DYNAMICS, OFF-EQUILIBRIUM BEHAVIOUR AND IDENTIFICATION IN BLENDED MULTIPLE MODEL STRUCTURES ON TRANSIENT DYNAMICS, OFF-EQUILIBRIUM BEHAVIOUR AND IDENTIFICATION IN BLENDED MULTIPLE MODEL STRUCTURES Roderick Mrray-Smith Dept. of Compting Science, Glasgow Uniersity, Glasgow, Scotland. rod@dcs.gla.ac.k

More information

4.2 First-Order Logic

4.2 First-Order Logic 64 First-Order Logic and Type Theory The problem can be seen in the two qestionable rles In the existential introdction, the term a has not yet been introdced into the derivation and its se can therefore

More information

Lesson 81: The Cross Product of Vectors

Lesson 81: The Cross Product of Vectors Lesson 8: The Cross Prodct of Vectors IBHL - SANTOWSKI In this lesson yo will learn how to find the cross prodct of two ectors how to find an orthogonal ector to a plane defined by two ectors how to find

More information

The Cross Product of Two Vectors in Space DEFINITION. Cross Product. u * v = s ƒ u ƒƒv ƒ sin ud n

The Cross Product of Two Vectors in Space DEFINITION. Cross Product. u * v = s ƒ u ƒƒv ƒ sin ud n 12.4 The Cross Prodct 873 12.4 The Cross Prodct In stdying lines in the plane, when we needed to describe how a line was tilting, we sed the notions of slope and angle of inclination. In space, we want

More information

Connectivity and Menger s theorems

Connectivity and Menger s theorems Connectiity and Menger s theorems We hae seen a measre of connectiity that is based on inlnerability to deletions (be it tcs or edges). There is another reasonable measre of connectiity based on the mltiplicity

More information

Study on the Mathematic Model of Product Modular System Orienting the Modular Design

Study on the Mathematic Model of Product Modular System Orienting the Modular Design Natre and Science, 2(, 2004, Zhong, et al, Stdy on the Mathematic Model Stdy on the Mathematic Model of Prodct Modlar Orienting the Modlar Design Shisheng Zhong 1, Jiang Li 1, Jin Li 2, Lin Lin 1 (1. College

More information

Krauskopf, B., Lee, CM., & Osinga, HM. (2008). Codimension-one tangency bifurcations of global Poincaré maps of four-dimensional vector fields.

Krauskopf, B., Lee, CM., & Osinga, HM. (2008). Codimension-one tangency bifurcations of global Poincaré maps of four-dimensional vector fields. Kraskopf, B, Lee,, & Osinga, H (28) odimension-one tangency bifrcations of global Poincaré maps of for-dimensional vector fields Early version, also known as pre-print Link to pblication record in Explore

More information

Key words. partially ordered sets, dimension, planar maps, planar graphs, convex polytopes

Key words. partially ordered sets, dimension, planar maps, planar graphs, convex polytopes SIAM J. DISCRETE MATH. c 1997 Societ for Indstrial and Applied Mathematics Vol. 10, No. 4, pp. 515 528, Noember 1997 001 THE ORDER DIMENSION O PLANAR MAPS GRAHAM R. BRIGHTWELL AND WILLIAM T. TROTTER Abstract.

More information

New Regularized Algorithms for Transductive Learning

New Regularized Algorithms for Transductive Learning New Reglarized Algorithms for Transdctie Learning Partha Pratim Talkdar and Koby Crammer Compter & Information Science Department Uniersity of Pennsylania Philadelphia, PA 19104 {partha,crammer}@cis.penn.ed

More information

GRAY CODES FAULTING MATCHINGS

GRAY CODES FAULTING MATCHINGS Uniersity of Ljbljana Institte of Mathematics, Physics and Mechanics Department of Mathematics Jadranska 19, 1000 Ljbljana, Sloenia Preprint series, Vol. 45 (2007), 1036 GRAY CODES FAULTING MATCHINGS Darko

More information

Modelling by Differential Equations from Properties of Phenomenon to its Investigation

Modelling by Differential Equations from Properties of Phenomenon to its Investigation Modelling by Differential Eqations from Properties of Phenomenon to its Investigation V. Kleiza and O. Prvinis Kanas University of Technology, Lithania Abstract The Panevezys camps of Kanas University

More information

Study of the diffusion operator by the SPH method

Study of the diffusion operator by the SPH method IOSR Jornal of Mechanical and Civil Engineering (IOSR-JMCE) e-issn: 2278-684,p-ISSN: 2320-334X, Volme, Isse 5 Ver. I (Sep- Oct. 204), PP 96-0 Stdy of the diffsion operator by the SPH method Abdelabbar.Nait

More information

A scalar nonlocal bifurcation of solitary waves for coupled nonlinear Schrödinger systems

A scalar nonlocal bifurcation of solitary waves for coupled nonlinear Schrödinger systems INSTITUTE OF PHYSICS PUBLISHING Nonlinearity 5 (22) 265 292 NONLINEARITY PII: S95-775(2)349-4 A scalar nonlocal bifrcation of solitary waes for copled nonlinear Schrödinger systems Alan R Champneys and

More information

Curves - Foundation of Free-form Surfaces

Curves - Foundation of Free-form Surfaces Crves - Fondation of Free-form Srfaces Why Not Simply Use a Point Matrix to Represent a Crve? Storage isse and limited resoltion Comptation and transformation Difficlties in calclating the intersections

More information

Spring, 2008 CIS 610. Advanced Geometric Methods in Computer Science Jean Gallier Homework 1, Corrected Version

Spring, 2008 CIS 610. Advanced Geometric Methods in Computer Science Jean Gallier Homework 1, Corrected Version Spring, 008 CIS 610 Adanced Geometric Methods in Compter Science Jean Gallier Homework 1, Corrected Version Febrary 18, 008; De March 5, 008 A problems are for practice only, and shold not be trned in.

More information

Differential Geometry. Peter Petersen

Differential Geometry. Peter Petersen Differential Geometry Peter Petersen CHAPTER Preliminaries.. Vectors-Matrices Gien a basis e, f for a two dimensional ector space we expand ectors sing matrix mltiplication e e + f f e f apple e f and

More information

4 Exact laminar boundary layer solutions

4 Exact laminar boundary layer solutions 4 Eact laminar bondary layer soltions 4.1 Bondary layer on a flat plate (Blasis 1908 In Sec. 3, we derived the bondary layer eqations for 2D incompressible flow of constant viscosity past a weakly crved

More information

Pulses on a Struck String

Pulses on a Struck String 8.03 at ESG Spplemental Notes Plses on a Strck String These notes investigate specific eamples of transverse motion on a stretched string in cases where the string is at some time ndisplaced, bt with a

More information

1. Tractable and Intractable Computational Problems So far in the course we have seen many problems that have polynomial-time solutions; that is, on

1. Tractable and Intractable Computational Problems So far in the course we have seen many problems that have polynomial-time solutions; that is, on . Tractable and Intractable Comptational Problems So far in the corse we have seen many problems that have polynomial-time soltions; that is, on a problem instance of size n, the rnning time T (n) = O(n

More information

Affine Invariant Total Variation Models

Affine Invariant Total Variation Models Affine Invariant Total Variation Models Helen Balinsky, Alexander Balinsky Media Technologies aboratory HP aboratories Bristol HP-7-94 Jne 6, 7* Total Variation, affine restoration, Sobolev ineqality,

More information

SECTION 6.7. The Dot Product. Preview Exercises. 754 Chapter 6 Additional Topics in Trigonometry. 7 w u 7 2 =?. 7 v 77w7

SECTION 6.7. The Dot Product. Preview Exercises. 754 Chapter 6 Additional Topics in Trigonometry. 7 w u 7 2 =?. 7 v 77w7 754 Chapter 6 Additional Topics in Trigonometry 115. Yo ant to fly yor small plane de north, bt there is a 75-kilometer ind bloing from est to east. a. Find the direction angle for here yo shold head the

More information

arxiv: v2 [cs.si] 27 Apr 2017

arxiv: v2 [cs.si] 27 Apr 2017 Opinion Dynamics in Networks: Conergence, Stability and Lack of Explosion arxi:1607.038812 [cs.si] 27 Apr 2017 Tng Mai Georgia Institte of Technology maitng89@gatech.ed Vijay V. Vazirani Georgia Institte

More information

A New Approach to Direct Sequential Simulation that Accounts for the Proportional Effect: Direct Lognormal Simulation

A New Approach to Direct Sequential Simulation that Accounts for the Proportional Effect: Direct Lognormal Simulation A ew Approach to Direct eqential imlation that Acconts for the Proportional ffect: Direct ognormal imlation John Manchk, Oy eangthong and Clayton Detsch Department of Civil & nvironmental ngineering University

More information

1. INTRODUCTION. A solution for the dark matter mystery based on Euclidean relativity. Frédéric LASSIAILLE 2009 Page 1 14/05/2010. Frédéric LASSIAILLE

1. INTRODUCTION. A solution for the dark matter mystery based on Euclidean relativity. Frédéric LASSIAILLE 2009 Page 1 14/05/2010. Frédéric LASSIAILLE Frédéric LASSIAILLE 2009 Page 1 14/05/2010 Frédéric LASSIAILLE email: lmimi2003@hotmail.com http://lmi.chez-alice.fr/anglais A soltion for the dark matter mystery based on Eclidean relativity The stdy

More information

Technical Note. ODiSI-B Sensor Strain Gage Factor Uncertainty

Technical Note. ODiSI-B Sensor Strain Gage Factor Uncertainty Technical Note EN-FY160 Revision November 30, 016 ODiSI-B Sensor Strain Gage Factor Uncertainty Abstract Lna has pdated or strain sensor calibration tool to spport NIST-traceable measrements, to compte

More information

Elements of Coordinate System Transformations

Elements of Coordinate System Transformations B Elements of Coordinate System Transformations Coordinate system transformation is a powerfl tool for solving many geometrical and kinematic problems that pertain to the design of gear ctting tools and

More information

Faster exact computation of rspr distance

Faster exact computation of rspr distance DOI 10.1007/s10878-013-9695-8 Faster exact comptation of rspr distance Zhi-Zhong Chen Ying Fan Lsheng Wang Springer Science+Bsiness Media New Yk 2013 Abstract De to hybridiation eents in eoltion, stdying

More information

Chapter 6 Momentum Transfer in an External Laminar Boundary Layer

Chapter 6 Momentum Transfer in an External Laminar Boundary Layer 6. Similarit Soltions Chapter 6 Momentm Transfer in an Eternal Laminar Bondar Laer Consider a laminar incompressible bondar laer with constant properties. Assme the flow is stead and two-dimensional aligned

More information

Asymptotic Gauss Jacobi quadrature error estimation for Schwarz Christoffel integrals

Asymptotic Gauss Jacobi quadrature error estimation for Schwarz Christoffel integrals Jornal of Approximation Theory 146 2007) 157 173 www.elseier.com/locate/jat Asymptotic Gass Jacobi qadratre error estimation for Schwarz Christoffel integrals Daid M. Hogh EC-Maths, Coentry Uniersity,

More information

E ect Of Quadrant Bow On Delta Undulator Phase Errors

E ect Of Quadrant Bow On Delta Undulator Phase Errors LCLS-TN-15-1 E ect Of Qadrant Bow On Delta Undlator Phase Errors Zachary Wolf SLAC Febrary 18, 015 Abstract The Delta ndlator qadrants are tned individally and are then assembled to make the tned ndlator.

More information

Xihe Li, Ligong Wang and Shangyuan Zhang

Xihe Li, Ligong Wang and Shangyuan Zhang Indian J. Pre Appl. Math., 49(1): 113-127, March 2018 c Indian National Science Academy DOI: 10.1007/s13226-018-0257-8 THE SIGNLESS LAPLACIAN SPECTRAL RADIUS OF SOME STRONGLY CONNECTED DIGRAPHS 1 Xihe

More information

Graphs and Their. Applications (6) K.M. Koh* F.M. Dong and E.G. Tay. 17 The Number of Spanning Trees

Graphs and Their. Applications (6) K.M. Koh* F.M. Dong and E.G. Tay. 17 The Number of Spanning Trees Graphs and Their Applications (6) by K.M. Koh* Department of Mathematics National University of Singapore, Singapore 1 ~ 7543 F.M. Dong and E.G. Tay Mathematics and Mathematics EdOOation National Institte

More information

Math 116 First Midterm October 14, 2009

Math 116 First Midterm October 14, 2009 Math 116 First Midterm October 14, 9 Name: EXAM SOLUTIONS Instrctor: Section: 1. Do not open this exam ntil yo are told to do so.. This exam has 1 pages inclding this cover. There are 9 problems. Note

More information

CONTENTS. INTRODUCTION MEQ curriculum objectives for vectors (8% of year). page 2 What is a vector? What is a scalar? page 3, 4

CONTENTS. INTRODUCTION MEQ curriculum objectives for vectors (8% of year). page 2 What is a vector? What is a scalar? page 3, 4 CONTENTS INTRODUCTION MEQ crriclm objectives for vectors (8% of year). page 2 What is a vector? What is a scalar? page 3, 4 VECTOR CONCEPTS FROM GEOMETRIC AND ALGEBRAIC PERSPECTIVES page 1 Representation

More information

The Linear Quadratic Regulator

The Linear Quadratic Regulator 10 The Linear Qadratic Reglator 10.1 Problem formlation This chapter concerns optimal control of dynamical systems. Most of this development concerns linear models with a particlarly simple notion of optimality.

More information

Analysis of Enthalpy Approximation for Compressed Liquid Water

Analysis of Enthalpy Approximation for Compressed Liquid Water Analysis of Entalpy Approximation for Compressed Liqid Water Milioje M. Kostic e-mail: kostic@ni.ed Nortern Illinois Uniersity, DeKalb, IL 60115-2854 It is cstom to approximate solid and liqid termodynamic

More information

The Minimal Estrada Index of Trees with Two Maximum Degree Vertices

The Minimal Estrada Index of Trees with Two Maximum Degree Vertices MATCH Commnications in Mathematical and in Compter Chemistry MATCH Commn. Math. Compt. Chem. 64 (2010) 799-810 ISSN 0340-6253 The Minimal Estrada Index of Trees with Two Maximm Degree Vertices Jing Li

More information

Discussion of The Forward Search: Theory and Data Analysis by Anthony C. Atkinson, Marco Riani, and Andrea Ceroli

Discussion of The Forward Search: Theory and Data Analysis by Anthony C. Atkinson, Marco Riani, and Andrea Ceroli 1 Introdction Discssion of The Forward Search: Theory and Data Analysis by Anthony C. Atkinson, Marco Riani, and Andrea Ceroli Søren Johansen Department of Economics, University of Copenhagen and CREATES,

More information

Step-Size Bounds Analysis of the Generalized Multidelay Adaptive Filter

Step-Size Bounds Analysis of the Generalized Multidelay Adaptive Filter WCE 007 Jly - 4 007 London UK Step-Size onds Analysis of the Generalized Mltidelay Adaptive Filter Jnghsi Lee and Hs Chang Hang Abstract In this paper we analyze the bonds of the fixed common step-size

More information

arxiv: v1 [cs.dm] 27 Jun 2017 Darko Dimitrov a, Zhibin Du b, Carlos M. da Fonseca c,d

arxiv: v1 [cs.dm] 27 Jun 2017 Darko Dimitrov a, Zhibin Du b, Carlos M. da Fonseca c,d Forbidden branches in trees with minimal atom-bond connectiity index Agst 23, 2018 arxi:1706.086801 [cs.dm] 27 Jn 2017 Darko Dimitro a, Zhibin D b, Carlos M. da Fonseca c,d a Hochschle für Technik nd Wirtschaft

More information

Flood flow at the confluence of compound river channels

Flood flow at the confluence of compound river channels Rier Basin Management VIII 37 Flood flow at the conflence of compond rier channels T. Ishikawa 1, R. Akoh 1 & N. Arai 2 1 Department of Enironmental Science and Technology, Tokyo Institte of Technology,

More information

UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL

UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL 8th International DAAAM Baltic Conference "INDUSTRIAL ENGINEERING - 19-1 April 01, Tallinn, Estonia UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL Põdra, P. & Laaneots, R. Abstract: Strength analysis is a

More information

EXERCISES WAVE EQUATION. In Problems 1 and 2 solve the heat equation (1) subject to the given conditions. Assume a rod of length L.

EXERCISES WAVE EQUATION. In Problems 1 and 2 solve the heat equation (1) subject to the given conditions. Assume a rod of length L. .4 WAVE EQUATION 445 EXERCISES.3 In Problems and solve the heat eqation () sbject to the given conditions. Assme a rod of length.. (, t), (, t) (, ),, > >. (, t), (, t) (, ) ( ) 3. Find the temperatre

More information

Essentials of optimal control theory in ECON 4140

Essentials of optimal control theory in ECON 4140 Essentials of optimal control theory in ECON 4140 Things yo need to know (and a detail yo need not care abot). A few words abot dynamic optimization in general. Dynamic optimization can be thoght of as

More information

A New Method for Calculating of Electric Fields Around or Inside Any Arbitrary Shape Electrode Configuration

A New Method for Calculating of Electric Fields Around or Inside Any Arbitrary Shape Electrode Configuration Proceedings of the 5th WSEAS Int. Conf. on Power Systems and Electromagnetic Compatibility, Corf, Greece, Agst 3-5, 005 (pp43-48) A New Method for Calclating of Electric Fields Arond or Inside Any Arbitrary

More information

c 2009 Society for Industrial and Applied Mathematics

c 2009 Society for Industrial and Applied Mathematics SIAM J. DISCRETE MATH. Vol., No., pp. 8 86 c 009 Society for Indstrial and Applied Mathematics THE SURVIVING RATE OF A GRAPH FOR THE FIREFIGHTER PROBLEM CAI LEIZHEN AND WANG WEIFAN Abstract. We consider

More information

Palindromes and local periodicity

Palindromes and local periodicity Palindromes and local periodicity A. Blondin Massé, S. Brlek, A. Garon, S. Labbé Laboratoire de Combinatoire et d Informatiqe Mathématiqe, Uniersité d Qébec à Montréal, C. P. 8888 Sccrsale Centre-Ville,

More information

Self-induced stochastic resonance in excitable systems

Self-induced stochastic resonance in excitable systems Self-indced stochastic resonance in excitable systems Cyrill B. Mrato Department of Mathematical Sciences, New Jersey Institte of Technology, Newark, NJ 7 Eric Vanden-Eijnden Corant Institte of Mathematical

More information