Simulation of DNA electrophoresis by lattice reptation models


Utrecht University
Faculty of Physics and Astronomy

Simulation of DNA electrophoresis by lattice reptation models

A. van Heukelum
October 1999
Report number: ITFUU 99/07
Supervisors: G.T. Barkema and R.H. Bisseling
Address: Institute for Theoretical Physics, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands


Abstract

The cage model for reptation is extended to describe the dynamics of a polymer in a gel under the influence of a homogeneous electric field. This model gives results that agree qualitatively with experiments on DNA strands in agarose gels, provided the polymer's persistence length, a measure of its rigidity, is large compared to its diameter and of the same order of magnitude as the pore size of the gel. This manuscript describes the proposed model and compares it to another model for electrophoresis, the Duke-Rubinstein model. An efficient sequential and parallel implementation is presented to calculate the drift velocity of the polymer numerically exactly, and results for polymers up to twelve persistence lengths are reported. From these results, numerically exact results are also obtained for the diffusion constant.


Contents

1 Introduction
  1.1 Gel electrophoresis of DNA
  1.2 Motion of DNA strands in a gel
2 Lattice reptation models
  2.1 Cage model
  2.2 Repton model
  2.3 Comparison of Cage model and Repton model
  2.4 Electrophoresis
  2.5 Comparison of Cage model and Repton model for electrophoresis
3 Direct calculation of drift velocity
  3.1 Computation without explicit transition matrix
  3.2 Transition matrix
  3.3 Exploiting symmetries to reduce the state space
  3.4 The abc representation and the reduced transition matrix
4 Parallel implementation
  4.1 BSP model
  4.2 Data distribution for matrix vector product
  4.3 Exploiting the specific structure of the reduced transition matrix
5 Results
  5.1 Implementation details
  5.2 Computational results
    5.2.1 Scaling with polymer length
    5.2.2 Convergence of the power method
    5.2.3 Scaling relations for the parallel implementation
  5.3 Physics results
    5.3.1 Drift velocity
    5.3.2 Diffusion constant
6 Conclusions
  6.1 Computational conclusions
  6.2 Physics conclusions


Chapter 1

Introduction

It is well known that DNA contains the genetic code of living beings. Each cell in one organism contains the same genetic information, since it is duplicated at each cell division. Retrieving the DNA from cells is a relatively easy task, and splitting the DNA at certain places, using enzymes, has become a standard procedure. The resulting mixture of DNA fragments is unique for each individual, like a fingerprint. When the DNA fragments are separated by length using gel electrophoresis, this unique signature is made visible. Genetic mutations from one generation to the next can be made visible in the same way. In spite of the frequent use of electrophoresis in genetics and biochemistry, the dynamics of polymers such as DNA in a gel are not completely understood. This chapter explains the scientific jargon used to describe the properties of a gel, the properties of a DNA strand, and the motion of DNA strands through the gel. Chapter 2 introduces the most common lattice reptation models used to describe the motion of long polymeric molecules in a gel and shows how to extend those models to simulate electrophoresis. In this paper we choose to compute the drift velocities of the polymers using the cage model for reptation and a direct computation method. Chapter 3 describes how to implement such a direct method and how symmetries in the cage model can be used to speed up the computation of the drift velocity. The implementation of such a direct computation on parallel computers is discussed in Chapter 4. Chapter 5 presents the computational and physics results. The conclusions drawn from those results are discussed in Chapter 6.

1.1 Gel electrophoresis of DNA

A gel consists of polymers or gel strands that are crosslinked, forming a stable three-dimensional network. The pores, spaces between the gel strands, are filled with a solvent.
To separate DNA fragments by length, a solution of the DNA fragments is injected into a gel on one side, and the DNA fragments are pulled through the pores of the gel by an electric field. Experimentally it is found that, as long as the force on each fragment is below a certain threshold, the drift velocity of the DNA fragments is inversely proportional to the length of the fragments and directly proportional to the applied electric field. Since the drift velocity depends on the length of the fragments, after some time the mixture of DNA fragments separates into a number of bands, each consisting of DNA fragments with the same length and thus the same velocity. Shorter fragments are located in bands that have moved further from the point of injection. If the electric force on the DNA fragments is above this threshold, the velocity depends quadratically on the electric field but is independent of the length of the fragment: the bands collapse. This makes the separation of long DNA fragments with electrophoresis a difficult task. Like any other polymer, DNA consists of a large number of connected monomers. A DNA monomer consists of a base pair, which contributes about 2.5 Å to the length of the DNA strand. The microscopic structure forms the well-known double helix structure of DNA in three dimensions

DP-DP-DP-DP-DP-DP-DP-DP
 T  A  A  C  T  G  C  G
 :  :  :  :  :  :  :  :
 A  T  T  G  A  C  G  C
DP-DP-DP-DP-DP-DP-DP-DP

Figure 1.1: The left picture shows a schematic representation of a DNA strand. The building blocks of DNA are desoxyribose (D) and phosphate (P) with an additional nucleic base: adenine (A), cytosine (C), thymine (T) or guanine (G). The solid lines are molecular bonds between the components and the dotted lines represent hydrogen bonds. Note that the bases match only as A:T and C:G. The right picture shows an impression of the double helix structure of the polymer.

as shown in figure 1.1. This structure makes the DNA polymer quite rigid. The diameter of the double helix is about 20 Å and the distance between two turns is about 35 Å. The persistence length defines the typical length over which the polymer preserves its orientation. For DNA it is much larger than the diameter of the double helix structure; one persistence length usually contains between 130 and 375 base pairs [2].

1.2 Motion of DNA strands in a gel

The gel strands impose an important restriction on the motion of the DNA fragments: when the DNA fragments move perpendicular to their length axis, their movement is soon blocked by the gel strands. The gel effectively does not allow sideways movement of the DNA outside a certain tube, consisting of the gel strands surrounding the polymer. Furthermore, the friction between the gel, the solvent and the DNA strands absorbs almost all of the kinetic energy of the polymers, so we may use a model for overdamped motion to describe it. In the absence of external forces on the DNA strands, the dominant motion is longitudinal mass transport by thermal energy. De Gennes' model of reptation [1] describes the dynamics of a polymer in an environment that prevents the polymer from moving perpendicular to itself. The polymer can still move longitudinally: the head seeks the way and all other monomers must follow the same path.
To model this longitudinal motion, certain defects are allowed to move along the polymer, as shown in figure 1.2. A defect can come into existence at either end of the chain, contracting the polymer by a certain amount of length, called the stored length of the defect. The defect can then travel along the polymer, and when it reaches the other end of the chain it disappears, releasing its stored length by extending the chain on that side. From this model De Gennes concluded that, for long polymer chains of length L:

- the relaxation time of the defects along the chain scales as L^2;
- the typical time needed for the chain to leave its tube scales as L^3;
- the mobility µ of the chain and the diffusion constant D scale as L^-2;
- the mobility of the polymer is insensitive to the type of the defects introduced, as long as they do not allow sideways movement of the tube.

If a small electric field is applied, the drift velocity of the polymer is described by the Nernst-Einstein equation v = F D, where F is the total force on the polymer and D is the diffusion constant. For electrophoresis of DNA the force is F = qLE, with q the effective charge of one persistence length of DNA, L the length of the polymer in units of persistence length and E the electric field strength.

Figure 1.2: De Gennes' model for reptation. The chain is unable to move perpendicular to its axis, but it may deform locally. Such a deformation is called a defect. The distance a monomer is displaced by a passing defect is called the stored length of the defect. Inside the chain, stored length is conserved, but at both ends defects carrying stored length can come into existence and disappear. This motion of polymers is called reptation, or diffusion of stored length.

For large polymers the diffusion constant scales as D ∝ L^-2. The drift velocity then scales as v ∝ qE/L. As argued by Barkema, Marko and Widom [15], once the force qLE on the polymer exceeds a certain threshold value, the Nernst-Einstein relation fails. They showed that above this threshold, but still below qE ≈ 1, the velocity scales as v ∝ E^2, independent of length.
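As a toy illustration of these two regimes, the piecewise behavior can be sketched as follows. All names, constants, units and the threshold value below are invented for illustration; the actual crossover is smooth and the prefactors are not specified by the text.

```python
# Illustrative sketch of the two drift-velocity regimes described above.
# All names, units and the threshold value are made up for illustration.
def drift_velocity(L, E, q=1.0, D0=1.0, f_threshold=1.0):
    """Drift velocity of a polymer of L persistence lengths in field E."""
    F = q * L * E                    # total force on the polymer
    if F < f_threshold:
        D = D0 / L**2                # reptation: D scales as 1/L^2
        return F * D                 # Nernst-Einstein: v = F*D, so v ~ qE/L
    return q * E**2                  # above threshold: v ~ E^2, length-independent

# Below the threshold, doubling the length halves the velocity:
v10 = drift_velocity(10, 0.001)
v20 = drift_velocity(20, 0.001)
```

Below the threshold v10/v20 = 2, while well above it (say E = 2) the velocity no longer depends on L: this is the band collapse described in section 1.1.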


Chapter 2

Lattice reptation models

Lattice models for reptation describe the dynamics of a single reptating polymer, represented by a chain of connected particles. Each lattice site represents the center of a pore of the gel, and the particles, often just called monomers, are restricted to points of this lattice to simulate the polymer efficiently. The dynamics of the chain is described by single-particle moves from one lattice site to a randomly chosen nearby lattice site. The most frequently used lattice models, the cage and the repton models on cubic lattices, are explained in the following sections. Both models have one length scale: the distance between the lattice points. This length scale describes the persistence length of the polymers as well as the pore size. The persistence length of DNA strands is of the same order of magnitude as the pore size of an agarose gel: the typical persistence length for DNA strands is about 500 Å; the typical pore size for an agarose gel (1% weight percentage) is about 1000 Å [13]. In both models, the polymers are represented as random walks, not as self-avoiding walks: the models do not impose excluded volume constraints by limiting the number of monomers in a pore. There are two justifications for leaving out excluded volume effects. Firstly, since both the pore size and the persistence length of the polymers are much larger than the diameter of the double helix structure, excluded volume effects are small. Secondly, the scaling properties of a polymer in an environment of other polymers are those of a random walk, not a self-avoiding walk. For polymers that have a persistence length of only a few monomers, self-avoidance can be implemented by putting an upper limit on the number of particles located in each pore.

2.1 Cage model

In the cage model, proposed by Evans and Edwards [3], the polymer is described as L monomers, connected by N = L - 1 bonds of unit length. Figure 2.1 gives an impression of a polymer in the cage model, and the gel surrounding the polymer.
A pair of bonds connected to the same monomer and running in opposite directions is called a kink. In the cage model the kinks represent the defects described by De Gennes' model for reptation. The dynamics of the cage model consist of moves of monomers that are in the middle of a kink, plus moves of the end points. An inner monomer in the cage model is in one of the following situations: (a) the bonds to its nearest neighbors form an angle of 180 degrees; (b) the bonds form an angle of 90 degrees; (c) the monomer is in the middle of a kink. In states (a) and (b) the monomer is unable to move. Moving a monomer in situation (b) is not allowed because in these moves the polymer would cross gel strands. For example, moving monomer 6 to the pore in the down-right direction gives a valid configuration, but the gel prevents this move. In state (c) all moves that lead to a valid configuration are allowed; they are all reorientations of the kink. An end monomer is always free to move to any new position that gives a valid polymer configuration. In figure 2.1, monomers 1, 3, 4, 8, 11 and 12 are able to move, each to five other lattice sites. All other monomers are unable to move.
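The move rules above can be sketched in a few lines of Python. This is a minimal illustration in our own notation, not the implementation used in this report; the polymer is stored as its sequence of unit bonds.

```python
import random

# Sketch of the cage-model move rules described above (illustrative names).
# The polymer is stored as its N = L-1 unit bonds on the cubic lattice.
DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def neg(b):
    return (-b[0], -b[1], -b[2])

def attempt_move(bonds, rng=random):
    """Pick a random monomer; reorient its kink or terminal bond if allowed."""
    L = len(bonds) + 1                      # number of monomers
    m = rng.randrange(L)                    # random monomer
    if m == 0:                              # end monomer: reorient first bond
        bonds[0] = rng.choice(DIRS)
    elif m == L - 1:                        # end monomer: reorient last bond
        bonds[-1] = rng.choice(DIRS)
    elif bonds[m - 1] == neg(bonds[m]):     # inner monomer in the middle of a kink
        d = rng.choice(DIRS)                # one of six directions, old one included
        bonds[m - 1], bonds[m] = d, neg(d)
    # straight (180 degree) and bent (90 degree) inner monomers cannot move
```

A kink occupies a single pore, so all six reorientations are valid; monomers in states (a) and (b) are left untouched, exactly as in the rules above.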

Figure 2.1: The left picture gives an impression of the three-dimensional cage model. The right picture is a two-dimensional sketch of the model in which the monomers are represented by numbered dots and connected by bonds; the dots on the lattice sites represent gel strands that run perpendicular to the paper. The thin-lined squares denote the pores of the gel.

Each move is statistically attempted once per unit of time: a random monomer is chosen, and if this monomer is located in the middle of a kink, the kink is given randomly one of six possible directions (its old direction plus the five other ones). Thus, for the whole polymer, a total of 6L moves are attempted per unit time. The time per attempt is then given by Δt = (6L)^-1.

The backbone of a polymer in the cage model is found by repeatedly removing a kink until no kinks remain. The backbone of the polymer in figure 2.1 consists of monomers 1, 2, 3, 6, 7 and 10. This polymer configuration has three units of stored length: the kinks at monomers 8 and 11 have one unit of stored length each, but the kinks at monomers 3 and 4 share one unit of stored length.

2.2 Repton model

In the repton model, proposed by Rubinstein [6], the polymer is described as L monomers, connected by N = L - 1 bonds with either zero or unit length. Figure 2.2 gives an impression of a polymer in the repton model, and the gel surrounding the polymer. In the repton model, the zero-length bonds represent the defects described by De Gennes' model for reptation. The dynamics of the repton model consist of moves of monomers with one zero-length bond and one unit-length bond, plus moves of the end points. An inner monomer in the repton model is in one of the following situations: (a) the monomer is in the same pore as both nearest neighbors; (b) both nearest neighbors are in adjacent pores; (c) the monomer has one nearest neighbor in the same pore and the other one in an adjacent pore. In states (a) and (b) the monomer is unable to move.
In state (c) the only allowed move is the one where the monomer joins its neighbor in the adjacent pore. The end monomers may be in one of two states: the nearest neighbor is either in an adjacent pore or in the same pore. In the first state, the monomer may join its neighbor in the adjacent pore; in the second state, the monomer is free to move to any of the six adjacent pores. In figure 2.2, monomers 1, 2, 3, 4, 5, 7, 9 and 10 may move to one other location, and monomer 12 may move to six new locations. All other monomers are unable to move. An elementary move consists of choosing a random monomer and trying a move up or down the chain. For inner monomers each move is statistically attempted once per unit of time. The time for one move is given by Δt = (2L)^-1.
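A matching sketch of the zero-field repton-model moves, again in our own illustrative notation, stores the pore occupied by each monomer and applies the case analysis above (acceptance probabilities and the electric field are omitted):

```python
import random

# Sketch of the zero-field repton-model move rules described above.
# pores[i] is the lattice pore occupied by monomer i; adjacent monomers
# either share a pore (zero-length bond) or sit in neighboring pores.
DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step(p, d):
    return (p[0] + d[0], p[1] + d[1], p[2] + d[2])

def attempt_repton_move(pores, rng=random):
    L = len(pores)
    m = rng.randrange(L)
    if m in (0, L - 1):                      # end monomer
        nb = pores[1] if m == 0 else pores[-2]
        if pores[m] == nb:                   # same pore: move to any adjacent pore
            pores[m] = step(nb, rng.choice(DIRS))
        else:                                # adjacent pore: rejoin the neighbor
            pores[m] = nb
    else:                                    # inner monomer: only case (c) moves
        left, right = pores[m - 1], pores[m + 1]
        if pores[m] == left and pores[m] != right:
            pores[m] = right                 # join the neighbor in the adjacent pore
        elif pores[m] == right and pores[m] != left:
            pores[m] = left
        # cases (a) and (b): the monomer cannot move
```

Every move preserves the repton invariant that consecutive monomers are at most one pore apart, which mirrors the tube constraint described above.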

Figure 2.2: The left picture gives an impression of the three-dimensional repton model. The right picture is a two-dimensional sketch of the model in which the monomers are represented by numbered dots and connected by bonds; the dots on the lattice sites represent gel strands that run perpendicular to the paper. The thin-lined squares denote the pores of the gel.

Figure 2.3: As with real polymers, the cage model for reptation can form hernias. A hernia is a piece of the polymer that is larger than a single kink, but is not part of the backbone. Formation of hernias is not possible in the repton model.

2.3 Comparison of Cage model and Repton model

Both the cage model and the repton model simulate a single polymer whose dynamics is constrained to reptation. Both lattice models confine the backbone of the polymer to a tube, although the repton model is stricter: all sites occupied by the chain belong to the tube. The cage model allows the formation of so-called hernias, in which several kinks are accumulated and hinder each other's mobility (see figure 2.3). Such hernias actually enable the interior of the polymer to select a new direction, later to be followed by one of the two tails. Since such hernias occur in the experimental situation, the cage model is in this respect more realistic than the repton model. With a slight modification to the repton model, it can also be used to model hernias: in the case where three neighboring monomers occupy one pore, the middle one should be allowed to move to a new pore, and vice versa. This modification is usually left out because it prevents a projection of the repton model onto a one-dimensional model, and the repton model with this modification is harder to treat theoretically. For the repton model, the diffusion constant D as a function of length has been determined

numerically exactly for polymers up to length N = 20 by Newman and Barkema [18], and in approximation by means of Monte Carlo methods for polymer lengths up to N = 250 [18]. For the cage model, Monte Carlo estimates of the diffusion constant are reported by Evans and Edwards [3] and by Barkema and Krenzlin [21]; no numerically exact calculations are known to us.

2.4 Electrophoresis

The repton model has been extended by Duke to describe the motion of reptating polymers under the influence of an electric field [10]; the cage model can be extended in a similar way. Here, we will describe the extension in both lattice models that allows for the simulation of electrophoresis. The extended model should obey two rules: for a vanishing electric field it should become equivalent to the basic model, and the polymer states should obey a local detailed balance. Local detailed balance is assumed for the single-monomer moves. If R_ji is the rate at which state i goes to state j, and f_i is the steady-state frequency of polymer state i, local detailed balance is described by R_ji f_i = R_ij f_j, i.e., the frequency of transitions from state i to state j should be equal to the frequency of transitions from state j to state i. If state i cannot go to state j, this equation is trivially true, since in that case R_ji = R_ij = 0. Configurations differing by one elementary move should occur with relative probabilities equal to the ratio of the Boltzmann factors f_i ∝ exp(-U_i / k_B T), where U_i is the potential energy in state i. The potential energy of a charged monomer with effective charge q in the electric field E is U_i = -q E · r_i, where r_i is the position of the monomer. This determines the ratio of the rate of a move and that of its reverse move. For simplicity, the body diagonals of the unit cubes of the lattice and the direction of the electric field are chosen to be aligned: E = (E, E, E).
The elementary moves of the cage model move a monomer two units of distance in one of the six possible directions. Just like in the zero-field model, a random monomer is chosen. If the monomer is in the middle of a kink, the kink is given a random direction, but with a bias: kinks oriented along the electric field have lower energy than kinks against the field, and are thus selected more frequently. The probability of choosing a new direction along the electric field is P+ and against the electric field is P-. For the cage model in d dimensions we have:

P+ = (1/d) e^E / (e^E + e^-E)   and   P- = (1/d) e^-E / (e^E + e^-E).

In this equation, E is the potential energy difference, in units of thermal energy, between two particles on nearest-neighbor sites on the lattice: E = aqE/k_B T; a is the lattice constant, q the effective charge of a particle, E the electric field strength and k_B T the thermal energy. The energy difference between a kink along the electric field and a kink against the electric field is 2E, such that the Boltzmann factor is P-/P+ = e^-2E. Note that we regain the zero-field probabilities for E -> 0: P+ = P- = 1/(2d). The orientation of kinks in the cage model is chosen with rate (1/d) e^E in a direction along the electric field, and with rate (1/d) e^-E against the electric field. The time for an elementary step thus is Δt = (dL(e^E + e^-E))^-1. Here L is the number of monomers, d the dimensionality and E the dimensionless parameter for the electric field strength.

The elementary moves of the repton model move a particle one unit of distance along or against the field. Just like in the zero-field model, a random monomer is chosen. If the monomer has one nearest neighbor in the same pore and one in an adjacent pore, then the monomer is put with probability P+ in the pore where it has the lower potential energy and with probability P- in the pore with the higher potential energy, with:

P+ = e^(E/2) / (e^(E/2) + e^(-E/2))   and   P- = e^(-E/2) / (e^(E/2) + e^(-E/2)).

This ensures a local thermal equilibrium: P-/P+ = e^-E, which is just the Boltzmann factor. The end monomers are always free to move. An end monomer with a nearest neighbor in an adjacent pore may move to that pore with probability P+ or P-, and an end monomer in the same pore as its nearest neighbor gets a random new place with probabilities (1/d)P+ and (1/d)P-.

2.5 Comparison of Cage model and Repton model for electrophoresis

Both the repton model and the cage model describe the motion of an entangled chain quite well. Monte Carlo simulations show similar behavior up to EL ≈ 10, at least for chain lengths up to several hundred monomers. For EL ≪ 1 the drift velocity is proportional to the electric field. For EL ≈ 1 a transition occurs between linear and quadratic behavior. The hernias play an important role in the motion of the polymers [5, 7, 10, 11, 12]. The regular process for the polymer to move requires that kinks are transported from the trailing end of the polymer towards the head of the polymer. Thus, all kinks have to pass the hernia. The hernias tend to be oriented along the electric field, so that kinks leaving the hernia have to travel against the electric field to get back to the backbone. Hernias formed along the polymer chain grow by capturing kinks this way. A consequence is that the distribution of kinks along the chain becomes uneven. Hernias are accounted for in the cage model, but not in the repton model. As in the zero-field case, the repton model may be changed to include a description of hernias, but then the repton model loses its advantage in efficiency, since the projection onto the one-dimensional particle model is no longer possible.
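The transition probabilities of the two field-driven models are easy to check numerically. The sketch below (function names are ours) evaluates them and can be used to verify the Boltzmann ratios and the zero-field limits stated in section 2.4.

```python
import math

# Transition probabilities of the field-driven cage and repton models,
# as given in the text; E is the dimensionless field strength aqE/(k_B T).
def cage_probs(E, d=3):
    z = math.exp(E) + math.exp(-E)
    return math.exp(E) / (d * z), math.exp(-E) / (d * z)     # (P+, P-)

def repton_probs(E):
    z = math.exp(E / 2) + math.exp(-E / 2)
    return math.exp(E / 2) / z, math.exp(-E / 2) / z         # (P+, P-)
```

For any E, cage kinks satisfy P-/P+ = e^-2E and repton moves P-/P+ = e^-E; for E -> 0 the cage probabilities reduce to the unbiased 1/(2d) per direction.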


Chapter 3

Direct calculation of drift velocity

When a polymer reptates, it changes from one state to another. For each state, there is a certain probability that the polymer is in that specific state; the vector in state space, in which each component contains the probability that the polymer is in that specific state, is called the state vector. This vector is usually very large: for instance, for the cage model it has 6^(L-1) components. The elements of this state vector necessarily add up to unity. To specify the dynamics of the polymer, the state vector does not suffice: one also needs to specify the rates for moving from one specific state to another. The transition matrix A that describes how the state vector changes in a small amount of time Δt is even larger than the state vector: it has 6^(L-1) x 6^(L-1) elements (although many of these elements are zero). It turns out that the state vector is the eigenvector of the transition matrix with unity eigenvalue, while all other eigenvectors have a smaller eigenvalue. The common approach to finding the equilibrium state vector, i.e. the eigenvector with the largest eigenvalue, is to use the so-called power method: an arbitrary state vector is chosen initially and the transition matrix is repeatedly applied until the state vector converges to the equilibrium state vector. This can be done without explicitly determining the transition matrix, as we will show in section 3.1. However, we will show that if the transition matrix is calculated explicitly, as done in section 3.2, a lot of symmetry is revealed. This symmetry can be exploited to speed up the calculation significantly through a reduction of the state space, as described in section 3.3. In section 3.4 we will introduce a different state vector, in which many states that are related by symmetry are grouped together. The resulting states are called abc representations.
Calculations in the abc representation are more efficient, and as we will show in the coming chapters, the introduction of this representation allows us to obtain results for large polymers (up to length 12).

3.1 Computation without explicit transition matrix

The cage model describes the dynamics of the polymers by giving the rates at which the direction of a kink in the polymer is changed in an elementary step. A cage polymer of length L has N = L - 1 bonds, where each bond points in one of six directions. Numbering the directions +x = 0, +y = 1, +z = 2, -x = 3, -y = 4 and -z = 5, it is possible to enumerate the polymer states using s = sum_{n=0}^{N-1} 6^(N-n-1) b_n, where b_n is the direction of bond n in the chain. The rate at which state j goes to state i is given by R_ij. The rate for the move j -> i, with i ≠ j, is R_ij = e^E if the new kink is along the electric field (thus lowering the potential energy), and R_ij = e^-E if the new kink is against the electric field (increasing the potential energy). The equilibrium state vector lim_{n->infinity} f^(n) is found by applying the movement rules many times to an arbitrary state vector f^(0). For each iteration we start by setting f^(n) = f^(n-1). For each state j that can change to state i in one elementary step, which consists of either choosing a new direction for a kink or moving one of the end points, we increase f_i^(n) by Δt R_ij f_j^(n-1) and decrease f_j^(n) by the same amount.

The memory and CPU time requirements for this computation are as follows. Two floating

point values, i.e. old and new values, are used for each of the 6^(L-1) states. Since the method described computes the transition rates implicitly, a negligible amount of memory is used for the matrix. Assuming that floating-point values of 8 bytes are used, the memory required is 16 * 6^(L-1) bytes; for example, for L = 9 we need 26 Mb of memory and for L = 12 we would need about 5.4 Gb.

The number of inner monomers in a chain of length L is L - 2; the probability that an inner monomer forms a kink is 1/6 and the number of possible configurations of a cage polymer is 6^(L-1). The total number of kinks is thus (L-2) 6^(L-2). Each polymer configuration has two end points, so we also have 2 * 6^(L-1) end points. The kinks and end points may move to five new positions, so a total of 5(L+10) 6^(L-2) moves is allowed by the cage model. For each allowed move the computation described above needs one multiplication, assuming that Δt R_ij is precomputed, one addition and one subtraction; the number of floating-point operations is 15(L+10) 6^(L-2). For L = 9, this comes to 80 Mflops per iteration and for L = 12 this would come to about 20 Gflops per iteration. As we will see in section 5.2.2, the number of iterations needed is typically L^4. The number of iterations done for L = 9 is 7200, such that the total computation cost comes to 576 Gflops. For L = 12, for which 22500 iterations are needed, the total computation cost would be about 450 Tflops.

For this calculation Δt may be chosen freely, but the method only converges when Δt is chosen small enough. In the Monte Carlo approach Δt = (dL(e^E + e^-E))^-1 was used [22]. We can speed up the calculation by choosing Δt = ω (dL(e^E + e^-E))^-1. For 0 < ω ≤ 2 convergence of the method is guaranteed; this is a property of the transition matrix. The structure of the transition matrix and its properties are explained in the next section.

3.2 Transition matrix

The transition matrix is a 6^(L-1) x 6^(L-1) matrix. It is a very sparse matrix, which can be seen as follows.
In an elementary step the state is either unaltered or one of the end points or kinks moves to one of five possible new positions. The maximum number of nonzero elements in a row of the matrix is thus 5L + 1. The total number of nonzero elements of the matrix is calculated as follows. Each of the 6^(L-1) states has two end points that may move to five new places; this gives 10 * 6^(L-1) matrix elements. One sixth of the L - 2 inner monomers are in the middle of a kink and may also move to five new places; this gives (5/6)(L-2) 6^(L-1) matrix elements. The state may also stay unaltered; this gives another 6^(L-1) elements. The total number of nonzero matrix elements is (5L + 56) 6^(L-2). The average number of elements per row is (5L + 56)/6.

To compute the drift velocity we need the equilibrium state vector of the polymer states: this is the solution f of Rf = 0, where sum_i f_i = 1. We solve this problem by the power method: multiplying the equation by Δt and adding f to both sides of the equation, we get (I + Δt R) f = f. The equilibrium state vector is f = lim_{n->infinity} A^n f^(0), with the transition matrix A = I + Δt R. Applying the transition matrix leaves the sum of the frequencies equal to one, since the probability of finding a polymer in any state is unity. The eigenvalues of the transition matrix are 1 = λ_1 > λ_2 ≥ λ_3 ≥ ... ≥ λ_n > 0, for the choice Δt = (dL(e^E + e^-E))^-1. Using repeated multiplication to find the eigenvector of eigenvalue λ_1 = 1 works if the absolute value of all other eigenvalues is smaller than one. The relative error in the solution decreases as max(|λ_2/λ_1|, |λ_n/λ_1|)^k, where k is the number of iterations performed. If we take Δt = ω (dL(e^E + e^-E))^-1, the eigenvalues change to λ'_i = ω λ_i + 1 - ω. The algorithm is still guaranteed to converge if 0 < ω ≤ 2. The convergence is fastest if the smallest eigenvalue is the opposite of the next-to-highest eigenvalue: λ'_2 = -λ'_n.
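The power method with the relaxation parameter ω can be illustrated on a toy column-stochastic matrix. The matrix below is made up for illustration; the real transition matrix is far too large to store in this dense form.

```python
# Sketch of the power method with relaxation parameter omega. Iterating
# f <- ((1 - omega) I + omega A) f shifts the eigenvalues to
# omega * lambda + 1 - omega but keeps the equilibrium vector fixed.
def mat_vec(A, f):
    return [sum(row[j] * f[j] for j in range(len(f))) for row in A]

def power_method(A, omega=1.0, tol=1e-13, max_iter=100000):
    n = len(A)
    f = [1.0 / n] * n                       # elements sum to one
    for _ in range(max_iter):
        Af = mat_vec(A, f)
        f_new = [(1.0 - omega) * f[i] + omega * Af[i] for i in range(n)]
        if max(abs(f_new[i] - f[i]) for i in range(n)) < tol:
            return f_new
        f = f_new
    return f

A = [[0.8, 0.3, 0.2],                       # a made-up 3-state transition
     [0.1, 0.6, 0.3],                       # matrix; columns sum to one,
     [0.1, 0.1, 0.5]]                       # so probability is conserved
f = power_method(A, omega=1.2)
```

The fixed point satisfies A f = f with sum(f) = 1, independently of ω; choosing ω only changes how fast the subdominant eigenmodes die out.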
The problem of finding the steady-state behavior is thus quite simple: enumerate all possible states, define some state vector with all elements between zero and one and the sum of all elements one, and then apply the transition matrix until the state vector converges to a final vector. This vector gives the rates at which the states occur at equilibrium. The transition matrix method uses much more memory than the matrix-free method described in the previous section. In the

Table 3.1: The state vector for the full state space contains many elements that have the same value, or frequency. This table shows all values of the equilibrium state vector for L = 5, E = 0.2, and the number of times each value occurs in the state vector. The state vector has 1296 elements, but only 37 different values.

following sections it is explained how the size of the transition matrix can be reduced.

3.3 Exploiting symmetries to reduce the state space

A closer examination of the steady-state vector shows that many different configurations occur with the same frequency, as shown in table 3.1. If we know in advance that states s and s' occur with the same frequency, then we do not need to calculate that of s'; if, during our computations, we need the frequency of state s', we can use the frequency of s. Exploiting this property, by selecting only one state to represent the whole set of states with identical state frequency, we might end up with a different transition matrix and state vector that are smaller than the original ones. States with identical frequency are related by symmetry. The simplest symmetry is rotation by 2π/3 around the direction of the electric field. The number of times a certain frequency occurs in the equilibrium state vector must therefore be divisible by three, as shown in table 3.1 for L = 5. Another symmetry that results in different polymer states with identical frequency is due to the order of numbering the monomers. To enumerate a polymer configuration, one has to start at one of the two end points of the polymer, which gives two possibilities to number the state. Unlike the rotational symmetry, which causes the number of times that a certain value occurs to be a multiple of three, this head-tail symmetry does not force this number to be also a multiple of two, since in a few cases the polymer itself is symmetric.
In the next section the abc representation is introduced, and it is explained how this representation removes symmetries from the original problem. Then a method is introduced to compute the abc representation efficiently, as well as a way to implement the power method for the reduced transition matrix.

3.4 The abc representation and the reduced transition matrix

We try to find a description of the symmetries in the polymer states. We propose the following definition, which finds most, but not all, symmetries: two polymer states, in which the monomers

are numbered along the chain, are in the same symmetry class, and hence have the same probability, if the following rules hold:

- if in one polymer state a bond runs along, resp. against, the electric field, the same bond in the other polymer state must also run along, resp. against, the electric field;

- if in one polymer state a set of monomers is located on the same lattice site, the same set of monomers in the other polymer state must also be located on a single lattice site.

The symmetry classes resulting from these rules are described by the abc representation. To compute the abc representation one first chooses one of the end points of the polymer. The lattice site it occupies is called a; then, for each monomer along the chain, the lattice site it occupies is given a unique name (b, c, ...) if it was not visited earlier. While doing this, one keeps track of the lattice site visited by each monomer and of whether the bonds between them run along or against the electric field. Examples of abc representations are abcbcdefeghg and abcdefedgdch, shown in figure 3.1. The position, bond and abc representations of these polymers are compared in table 3.2.

Figure 3.1: Two examples of how to find the abc representation for polymers in two dimensions. Starting at the monomer in the lower left corner, the left polymer has abc representation abcbcdefeghg and the right one abcdefedgdch.

Although we did not formally prove that two polymer states with the same abc representation necessarily have the same frequency, it is very plausible, and we verified numerically that this property holds at least up to polymer length L = 9. The knowledge that polymer states $j_1, j_2, \ldots, j_r$ have the same abc representation, and consequently that the state vector components $f_{j_1}, f_{j_2}, \ldots, f_{j_r}$ contain the same value, can be exploited to reduce the number of computations.
The matrix–vector multiplication $f^{(n)} = A f^{(n-1)}$ can now be simplified by using the following equality:

$$\sum_{k=1}^{r} A_{i j_k} f_{j_k} = \left( \sum_{k=1}^{r} A_{i j_k} \right) f_{j_1}.$$

For each abc representation, we may thus set columns $j_2, j_3, \ldots, j_r$ of matrix A to zero, provided that we add their contents to column $j_1$. We can delete the columns of the matrix that are set to zero, and the corresponding elements of the input vector. We can also remove the same elements from the result vector and the corresponding rows of the matrix, to obtain $f^{(n)} = B f^{(n-1)}$, the matrix–vector multiplication in the reduced state space. Figure 3.2 shows an example of this procedure for L = 3.

In practice, we do not first create the full transition matrix and then reduce it to the reduced transition matrix as described above. First, a so-called abc tree is built by enumerating all polymer

                 left polymer                    right polymer
    monomer   position  bond  abc       position  bond  abc
       1      (0, 0)          a         (0, 0)          a
       2      (1, 0)     x    b         (1, 0)     x    b
       3      (2, 0)     x    c         (1, 1)     y    c
       4      (1, 0)     x̄    b         (2, 1)     x    d
       5      (2, 0)     x    c         (3, 1)     x    e
       6      (2, 1)     y    d         (4, 1)     x    f
       7      (3, 1)     x    e         (3, 1)     x̄    e
       8      (3, 2)     y    f         (2, 1)     x̄    d
       9      (3, 1)     ȳ    e         (2, 2)     y    g
      10      (4, 1)     x    g         (2, 1)     ȳ    d
      11      (4, 0)     ȳ    h         (1, 1)     x̄    c
      12      (4, 1)     y    g         (1, 2)     y    h

Table 3.2: The three representations of the two polymers given in figure 3.1.

Figure 3.2: Construction of the reduced transition matrix B from the original matrix A. On the left, the polymer states for L = 3 and their corresponding abc representations are shown; on the right, the state vector. For simplicity, the states beginning with y, z, ȳ and z̄ have already been removed, because they are related to the states given here by a simple rotation. The grey squares are deleted from matrix A; the darker grey rows of the matrix and elements of the vector are simply removed, and the lighter grey elements of the matrix are added to the remaining element of the same abc representation.

configurations and computing the corresponding abc representation. For each abc representation that is not already in the abc tree, a leaf is added to the tree and the polymer state is stored in the tree as representative of the symmetry class; if the abc representation already exists, a counter for the number of polymer states belonging to the symmetry class is increased. The result is shown in figure 3.3. Each path in the tree from the root node down to a leaf represents a different abc representation, and each leaf contains a representative polymer configuration and the number of polymer configurations that belong to this abc representation.

Figure 3.3: The abc data structure is a tree with a variable number of children in each node. The abc tree is drawn for L = 5, but for L > 5 the initial branching of the tree is the same. The abc representations can now be enumerated uniquely.

The last step in creating the abc tree is to enumerate the abc representations in some way. We chose to sort the children of each node by their local abc representation, such that abc… is the leftmost leaf and ab̄c̄… the rightmost leaf. The reduced transition matrix B is now generated as follows. Each abc representation has a unique number i. Row i of the reduced transition matrix is obtained by taking the representative polymer configuration and examining the polymer configurations that can be reached in one elementary step. For each such reachable polymer configuration we compute its abc representation and use the abc tree to look up the number given to that abc representation, say j. We conclude that a transition from abc representation j to abc representation i is possible; the rate depends on whether the move is along or against the electric field.
Initially the matrix B is the identity matrix. For each possible transition from j to i found by the procedure above, the element $B_{ij}$ is increased, and the element $B_{ii}$ decreased, by $t\,e^{E}$ if the move is along the electric field or by $t\,e^{-E}$ if the move is against the electric field. Each abc representation represents three or more polymer states. For a normal transition matrix the sum of the elements of the vector is one, but for our reduced matrix we should weigh each element of the vector with the number of polymer configurations that share the same abc representation. This weighting is also necessary to obtain correct results when we calculate the drift velocity. We have two simple ways to define a starting vector. The simplest way is to start with all frequencies equal to $6^{-(L-1)}$, since we have exactly $6^{L-1}$ polymer states. The other way is to set almost all frequencies to zero, except for the polymer configurations that have only bonds along the electric field or only bonds against the electric field. Only two abc representations are generated for those states, abc… and its inverse ab̄c̄…, and because the abc tree was sorted we know that they are the first and last state in the vector. Each of them represents $3^{L-1}$ polymer states, so the two frequencies should be set to $\frac{1}{2}\,3^{-(L-1)}$.

The reduced state space method uses less memory than the full state space method, even if the full state space method is computed with a matrix-free method. The memory usage of both methods is given in table 3.3. Creating the reduced transition matrix takes much less time than the subsequent computation of the equilibrium state vector. Since this method requires less memory

Table 3.3: Memory needed to calculate the drift velocity using the matrix-free method (columns 2 and 3) and the abc tree method (columns 4, 5 and 6). Column 2 gives the number of states as counted by the matrix-free method ($6^{L-1}$). Column 3 gives the amount of memory needed to compute the state vector using this method. Column 4 gives the number of nonzero matrix elements in the reduced transition matrix. Column 5 gives the number of abc states. Column 6 gives the amount of memory needed to compute the reduced state vector using the abc representation. For L = 12 the abc tree method uses a factor 23 less memory. It also uses a factor 490 fewer floating-point operations (not shown in this table).

and is also faster, this is the method we used to compute the drift velocities. The method described earlier is much simpler to implement, and it was used to check the results of the reduced transition matrix method for lengths up to L = 9.


Chapter 4

Parallel implementation

The reduced transition matrix becomes excessively large when L > 12. Parallel machines often have more memory than commonly used sequential machines such as workstations or PCs, and this memory can be dedicated to solving larger problems. Our task is then to distribute the matrix over the processors such that the problem can be solved as efficiently as possible, hopefully also improving the performance by a factor close to the number of processors used.

4.1 BSP model

A bulk synchronous parallel (BSP) program operates by alternating between a phase where all processors simultaneously compute local results and a phase where they communicate with each other. A superstep in a BSP algorithm consists of a computation phase followed by a communication phase. Before and after each communication phase a global synchronization is carried out. The BSPlib library (for the programming language C) [24] consists of only 20 primitives and is based on one-sided communications. One-sided communications, as opposed to two-sided communications, cannot create deadlock situations. The communication mechanisms built into the BSP library are remote write, remote read and bulk synchronous message passing. In all three cases the remote processor is, at least conceptually, passive in the current superstep. The basic communication primitives are summarized below.

Remote write: the processor that executes a put statement copies a block of memory to a remote memory address at the time of the next synchronization.

Remote read: the processor that executes a get statement copies a block of memory from a remote memory address at the time of the next synchronization.

Bulk synchronous message passing: the processor that executes a send statement sends a message, consisting of a tag and a payload part, to the buffer of a remote processor at the time of the next synchronization. The messages can be read from the buffer by a move operation after the next synchronization.
The BSP cost model consists of four parameters: the number of processors p, the speed of the processors s, the communication time g, and the synchronization time l. The speed of the processors is measured as the number of floating-point operations per second. The communication time is measured as the average time taken to communicate a single word to a remote processor, when all the processors are simultaneously communicating; the unit of time is the time per floating-point operation (flop). The synchronization time is the amount of time needed for all processors to synchronize, also measured in flop time.

As mentioned earlier, a BSP program is either in a computation phase or in a communication phase. This makes predicting the performance of algorithms much easier than in the case of parallel programming models where computation and communication are interleaved in a less structured fashion. The analysis of the cost of a superstep is relatively simple. For each processor i we count the number of flops $w_i$, the number of words sent to other processors $h_i^{(s)}$, and the number of words received $h_i^{(r)}$. The time taken by processor i is $w_i$ for computation and $h_i = \max(h_i^{(s)}, h_i^{(r)})$ for communication. The cost of the superstep is

$$\max_i(w_i) + g \max_i(h_i) + l.$$

This shows that optimally we should divide the problem to be solved in equal parts, in the sense that the calculations and communications are evenly distributed over the available processors. Of course, we should also take care to reduce the total amount of communication.

Figure 4.1: M × N generalized block/cyclic distribution for matrices on p = MN = 6 processors. The rows have a block-cyclic distribution, with p blocks which are cyclically numbered 0, 1, ..., M−1, 0, 1, ..., and the columns have a block distribution, N blocks numbered 0, 1, ..., N−1. From left to right: M = 6, N = 1; M = 3, N = 2; M = 2, N = 3; and M = 1, N = 6.

4.2 Data distribution for matrix–vector product

A good way to distribute an n × n dense matrix over p = MN processors is a generalized M × N block/cyclic distribution: the rows are divided into p row blocks of equal size and the columns into N column blocks of equal size; the matrix elements $a_{ij}$ are then assigned to the processors as follows:

$$\phi_0(i) = \left( i \;\mathrm{div}\; \tfrac{n}{p} \right) \bmod M, \qquad \phi_1(j) = j \;\mathrm{div}\; \tfrac{n}{N}, \qquad a_{ij} \longmapsto P(\phi_0(i) + M\phi_1(j)),$$

as shown in figure 4.1. The vector elements are best distributed to the same processors as the diagonal of the matrix. Note that for each generalized block/cyclic distribution: all processors have an equally large part of the matrix; each column is distributed over M processors; each row is distributed over N processors; each processor has the same number of submatrices; and each processor has the same number of diagonal elements. This scheme fits within the general Cartesian framework of Ref.
[23]; it is similar but not identical to the block/cyclic distribution. The approach of Bisseling and McColl [23] to the matrix–vector product r = Ax can be divided into four stages:

fan-out: the elements $x_j$ are communicated to the processors containing the $a_{ij}$;

local matrix–vector multiplications: the partial results $u_{it} = \sum_j a_{ij} x_j$ are computed, with the sum taken over only the local $a_{ij}$, which all have the same $t = \phi_1(j)$;

fan-in: the partial results $u_{i,\phi_1(j)}$ of the processors are sent to the processor that possesses the corresponding element $r_i$;

summation of the partial results: $r_i = \sum_{t=0}^{N-1} u_{it}$.

If the matrix is divided into rows (which is the special case N = 1 of our generalized block/cyclic distribution), the fan-in and summation of partial sums are avoided; this saves some communication, but all processors then have to communicate with all other processors in the fan-out part. On the other hand, if the matrix is divided into columns (M = 1), then the fan-out communication is avoided and the fan-in communication is an all-to-all operation. For the general M × N distribution, the fan-out is an M-to-M communication and the fan-in an N-to-N communication. The communication then takes $O\!\left((M+N)\tfrac{n}{p}\right) g$ time, instead of $O\!\left(MN\tfrac{n}{p}\right) g$. The communication is minimal if $M = N = \sqrt{p}$ is used.

For a sparse matrix, the algorithm is adapted to avoid computations and communications involving zero elements: elements $x_j$ are only sent if the corresponding $a_{ij} \neq 0$; partial sums are only computed using products $a_{ij} x_j$ with $a_{ij} \neq 0$; and the partial sums are only sent and summed if they are nonzero. The next section shows how advantage is taken of the specific sparsity structure of the matrix.

4.3 Exploiting the specific structure of the reduced transition matrix

The abc tree was created by computing the abc representation for each of the $6^{L-1}$ polymer configurations. A node somewhere in the middle of the tree represents all abc representations with a certain prefix in their abc representation. In particular, a leaf in the tree represents one abc representation. When we cut off the tree at a certain level, say at level 3 as shown in figure 4.2, we get a number of subtrees, where each subtree represents a group of abc representations with the same prefix. When we divide the reduced state vector and the rows and columns of the matrix in the same way, as shown in figure 4.3, we obtain a partitioning of the matrix into submatrices. Note that the submatrices are themselves sparse matrices and that many of them are empty.
We adapted the standard sparse matrix–vector multiplication program to work with our specific sparse matrix. We view the matrix as a much smaller, but still sparse, matrix, in which each element itself represents a sparse submatrix, as described above. An element is zero if and only if the corresponding submatrix is empty. This view simplifies fine-tuning of the data distribution, because complete submatrices are assigned to the same processor. In the overall algorithm, the submatrices are handled as sparse matrices for the computation of the local matrix–vector products, but they are treated as full matrices for communication purposes, because they usually have elements in nearly every row and column, so the required communication is almost the same as for dense matrices. This allows us to send consecutive blocks of values instead of a sequence of isolated values.

The cutoff level L′ of the abc tree is chosen by the user. The choice of cutoff level has the following effects. A low cutoff level reduces the overhead of the communication, but also increases the total volume of communication, since more values are sent that are not needed by the receiver. A high cutoff level decreases the total volume of communication, but the overhead of communicating small vectors becomes more important. Furthermore, a high cutoff level improves the load balance, because a larger number of smaller blocks is distributed over the available processors, instead of a small number of large blocks.

Figure 4.2: Splitting the abc tree for L = 5. Here the tree is cut at level L′ = 3; the sizes of the subtrees at this level are given in the nodes of the tree at level three. For L = 5 and L′ = 3, the abc state vector is divided into six subvectors of sizes 11, 9, 11, 11, 9 and 11, respectively. For L′ = 1, 2, 3, 4, 5, ..., the abc state vector is divided into 1, 2, 6, 18, 62, ... subvectors.

Figure 4.3: The black squares show the nonzero structure of the reduced transition matrices for lengths 3, 4, 5 and 6. For L = 5 and L = 6 the shaded areas show the submatrix structure induced by cutting off the abc tree at L′ = 3 and L′ = 4, respectively. Note the correspondence with the nonzero structure of the matrices for L = 3 and L = 4.


PEAT SEISMOLOGY Lecture 2: Continuum mechanics PEAT8002 - SEISMOLOGY Lecture 2: Continuum mechanics Nick Rawlinson Research School of Earth Sciences Australian National University Strain Strain is the formal description of the change in shape of a

More information

NOTES ON LINEAR ALGEBRA CLASS HANDOUT

NOTES ON LINEAR ALGEBRA CLASS HANDOUT NOTES ON LINEAR ALGEBRA CLASS HANDOUT ANTHONY S. MAIDA CONTENTS 1. Introduction 2 2. Basis Vectors 2 3. Linear Transformations 2 3.1. Example: Rotation Transformation 3 4. Matrix Multiplication and Function

More information

Improved model of nonaffine strain measure

Improved model of nonaffine strain measure Improved model of nonaffine strain measure S. T. Milner a) ExxonMobil Research & Engineering, Route 22 East, Annandale, New Jersey 08801 (Received 27 December 2000; final revision received 3 April 2001)

More information

End-to-end length of a stiff polymer

End-to-end length of a stiff polymer End-to-end length of a stiff polymer Jellie de Vries April 21st, 2005 Bachelor Thesis Supervisor: dr. G.T. Barkema Faculteit Btawetenschappen/Departement Natuur- en Sterrenkunde Institute for Theoretical

More information

Inference in Bayesian Networks

Inference in Bayesian Networks Andrea Passerini passerini@disi.unitn.it Machine Learning Inference in graphical models Description Assume we have evidence e on the state of a subset of variables E in the model (i.e. Bayesian Network)

More information

GEL ELECTROPHORESIS OF DNA NEW MEASUREMENTS AND THE REPTON MODEL AT HIGH FIELDS

GEL ELECTROPHORESIS OF DNA NEW MEASUREMENTS AND THE REPTON MODEL AT HIGH FIELDS Vol. 36 (2005) ACTA PHYSICA POLONICA B No 5 GEL ELECTROPHORESIS OF DNA NEW MEASUREMENTS AND THE REPTON MODEL AT HIGH FIELDS M.J. Krawczyk, P. Paściak, A. Dydejczyk, K. Kułakowski Faculty of Physics and

More information

1 Dirac Notation for Vector Spaces

1 Dirac Notation for Vector Spaces Theoretical Physics Notes 2: Dirac Notation This installment of the notes covers Dirac notation, which proves to be very useful in many ways. For example, it gives a convenient way of expressing amplitudes

More information

Lecture 4: Linear Algebra 1

Lecture 4: Linear Algebra 1 Lecture 4: Linear Algebra 1 Sourendu Gupta TIFR Graduate School Computational Physics 1 February 12, 2010 c : Sourendu Gupta (TIFR) Lecture 4: Linear Algebra 1 CP 1 1 / 26 Outline 1 Linear problems Motivation

More information

Review of matrices. Let m, n IN. A rectangle of numbers written like A =

Review of matrices. Let m, n IN. A rectangle of numbers written like A = Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an

More information

Next topics: Solving systems of linear equations

Next topics: Solving systems of linear equations Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:

More information

(But, they are entirely separate branches of mathematics.)

(But, they are entirely separate branches of mathematics.) 2 You ve heard of statistics to deal with problems of uncertainty and differential equations to describe the rates of change of physical systems. In this section, you will learn about two more: vector

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Chapter 7. Entanglements

Chapter 7. Entanglements Chapter 7. Entanglements The upturn in zero shear rate viscosity versus molecular weight that is prominent on a log-log plot is attributed to the onset of entanglements between chains since it usually

More information

CS 542G: Conditioning, BLAS, LU Factorization

CS 542G: Conditioning, BLAS, LU Factorization CS 542G: Conditioning, BLAS, LU Factorization Robert Bridson September 22, 2008 1 Why some RBF Kernel Functions Fail We derived some sensible RBF kernel functions, like φ(r) = r 2 log r, from basic principles

More information

Vectors and Matrices

Vectors and Matrices Vectors and Matrices Scalars We often employ a single number to represent quantities that we use in our daily lives such as weight, height etc. The magnitude of this number depends on our age and whether

More information

MATRIX MULTIPLICATION AND INVERSION

MATRIX MULTIPLICATION AND INVERSION MATRIX MULTIPLICATION AND INVERSION MATH 196, SECTION 57 (VIPUL NAIK) Corresponding material in the book: Sections 2.3 and 2.4. Executive summary Note: The summary does not include some material from the

More information

Nearly Free Electron Gas model - I

Nearly Free Electron Gas model - I Nearly Free Electron Gas model - I Contents 1 Free electron gas model summary 1 2 Electron effective mass 3 2.1 FEG model for sodium...................... 4 3 Nearly free electron model 5 3.1 Primitive

More information

Parallelization of Multilevel Preconditioners Constructed from Inverse-Based ILUs on Shared-Memory Multiprocessors

Parallelization of Multilevel Preconditioners Constructed from Inverse-Based ILUs on Shared-Memory Multiprocessors Parallelization of Multilevel Preconditioners Constructed from Inverse-Based ILUs on Shared-Memory Multiprocessors J.I. Aliaga 1 M. Bollhöfer 2 A.F. Martín 1 E.S. Quintana-Ortí 1 1 Deparment of Computer

More information

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way EECS 16A Designing Information Devices and Systems I Spring 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate

More information

Appendix C - Persistence length 183. Consider an ideal chain with N segments each of length a, such that the contour length L c is

Appendix C - Persistence length 183. Consider an ideal chain with N segments each of length a, such that the contour length L c is Appendix C - Persistence length 183 APPENDIX C - PERSISTENCE LENGTH Consider an ideal chain with N segments each of length a, such that the contour length L c is L c = Na. (C.1) If the orientation of each

More information

Lecture 6 Positive Definite Matrices

Lecture 6 Positive Definite Matrices Linear Algebra Lecture 6 Positive Definite Matrices Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Spring 2017 2017/6/8 Lecture 6: Positive Definite Matrices

More information

Distance Constraint Model; Donald J. Jacobs, University of North Carolina at Charlotte Page 1 of 11

Distance Constraint Model; Donald J. Jacobs, University of North Carolina at Charlotte Page 1 of 11 Distance Constraint Model; Donald J. Jacobs, University of North Carolina at Charlotte Page 1 of 11 Taking the advice of Lord Kelvin, the Father of Thermodynamics, I describe the protein molecule and other

More information

Practical Combustion Kinetics with CUDA

Practical Combustion Kinetics with CUDA Funded by: U.S. Department of Energy Vehicle Technologies Program Program Manager: Gurpreet Singh & Leo Breton Practical Combustion Kinetics with CUDA GPU Technology Conference March 20, 2015 Russell Whitesides

More information

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states.

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states. Chapter 8 Finite Markov Chains A discrete system is characterized by a set V of states and transitions between the states. V is referred to as the state space. We think of the transitions as occurring

More information

Chapter 4: Interpolation and Approximation. October 28, 2005

Chapter 4: Interpolation and Approximation. October 28, 2005 Chapter 4: Interpolation and Approximation October 28, 2005 Outline 1 2.4 Linear Interpolation 2 4.1 Lagrange Interpolation 3 4.2 Newton Interpolation and Divided Differences 4 4.3 Interpolation Error

More information

Consider the following example of a linear system:

Consider the following example of a linear system: LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x + 2x 2 + 3x 3 = 5 x + x 3 = 3 3x + x 2 + 3x 3 = 3 x =, x 2 = 0, x 3 = 2 In general we want to solve n equations

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

Dominating Set Counting in Graph Classes

Dominating Set Counting in Graph Classes Dominating Set Counting in Graph Classes Shuji Kijima 1, Yoshio Okamoto 2, and Takeaki Uno 3 1 Graduate School of Information Science and Electrical Engineering, Kyushu University, Japan kijima@inf.kyushu-u.ac.jp

More information

C. Show your answer in part B agrees with your answer in part A in the limit that the constant c 0.

C. Show your answer in part B agrees with your answer in part A in the limit that the constant c 0. Problem #1 A. A projectile of mass m is shot vertically in the gravitational field. Its initial velocity is v o. Assuming there is no air resistance, how high does m go? B. Now assume the projectile is

More information

2. FUNCTIONS AND ALGEBRA

2. FUNCTIONS AND ALGEBRA 2. FUNCTIONS AND ALGEBRA You might think of this chapter as an icebreaker. Functions are the primary participants in the game of calculus, so before we play the game we ought to get to know a few functions.

More information

Monte Carlo Simulations in Statistical Physics

Monte Carlo Simulations in Statistical Physics Part II Monte Carlo Simulations in Statistical Physics By D.Stauffer Introduction In Statistical Physics one mostly deals with thermal motion of a system of particles at nonzero temperatures. For example,

More information

CS 224w: Problem Set 1

CS 224w: Problem Set 1 CS 224w: Problem Set 1 Tony Hyun Kim October 8, 213 1 Fighting Reticulovirus avarum 1.1 Set of nodes that will be infected We are assuming that once R. avarum infects a host, it always infects all of the

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to

More information

Seminar 6: COUPLED HARMONIC OSCILLATORS

Seminar 6: COUPLED HARMONIC OSCILLATORS Seminar 6: COUPLED HARMONIC OSCILLATORS 1. Lagrangian Equations of Motion Let consider a system consisting of two harmonic oscillators that are coupled together. As a model, we will use two particles attached

More information

2.5D algorithms for distributed-memory computing

2.5D algorithms for distributed-memory computing ntroduction for distributed-memory computing C Berkeley July, 2012 1/ 62 ntroduction Outline ntroduction Strong scaling 2.5D factorization 2/ 62 ntroduction Strong scaling Solving science problems faster

More information

Notes for course EE1.1 Circuit Analysis TOPIC 4 NODAL ANALYSIS

Notes for course EE1.1 Circuit Analysis TOPIC 4 NODAL ANALYSIS Notes for course EE1.1 Circuit Analysis 2004-05 TOPIC 4 NODAL ANALYSIS OBJECTIVES 1) To develop Nodal Analysis of Circuits without Voltage Sources 2) To develop Nodal Analysis of Circuits with Voltage

More information

Name: Date: Period: Biology Notes: Biochemistry Directions: Fill this out as we cover the following topics in class

Name: Date: Period: Biology Notes: Biochemistry Directions: Fill this out as we cover the following topics in class Name: Date: Period: Biology Notes: Biochemistry Directions: Fill this out as we cover the following topics in class Part I. Water Water Basics Polar: part of a molecule is slightly, while another part

More information

Matrix Computations: Direct Methods II. May 5, 2014 Lecture 11

Matrix Computations: Direct Methods II. May 5, 2014 Lecture 11 Matrix Computations: Direct Methods II May 5, 2014 ecture Summary You have seen an example of how a typical matrix operation (an important one) can be reduced to using lower level BS routines that would

More information

3D HP Protein Folding Problem using Ant Algorithm

3D HP Protein Folding Problem using Ant Algorithm 3D HP Protein Folding Problem using Ant Algorithm Fidanova S. Institute of Parallel Processing BAS 25A Acad. G. Bonchev Str., 1113 Sofia, Bulgaria Phone: +359 2 979 66 42 E-mail: stefka@parallel.bas.bg

More information

Lagrange Multipliers

Lagrange Multipliers Optimization with Constraints As long as algebra and geometry have been separated, their progress have been slow and their uses limited; but when these two sciences have been united, they have lent each

More information

MATH 61-02: PRACTICE PROBLEMS FOR FINAL EXAM

MATH 61-02: PRACTICE PROBLEMS FOR FINAL EXAM MATH 61-02: PRACTICE PROBLEMS FOR FINAL EXAM (FP1) The exclusive or operation, denoted by and sometimes known as XOR, is defined so that P Q is true iff P is true or Q is true, but not both. Prove (through

More information

Parallel Scientific Computing

Parallel Scientific Computing IV-1 Parallel Scientific Computing Matrix-vector multiplication. Matrix-matrix multiplication. Direct method for solving a linear equation. Gaussian Elimination. Iterative method for solving a linear equation.

More information

Olle Inganäs: Polymers structure and dynamics. Polymer physics

Olle Inganäs: Polymers structure and dynamics. Polymer physics Polymer physics Polymers are macromolecules formed by many identical monomers, connected through covalent bonds, to make a linear chain of mers a polymer. The length of the chain specifies the weight of

More information

University of California Berkeley CS170: Efficient Algorithms and Intractable Problems November 19, 2001 Professor Luca Trevisan. Midterm 2 Solutions

University of California Berkeley CS170: Efficient Algorithms and Intractable Problems November 19, 2001 Professor Luca Trevisan. Midterm 2 Solutions University of California Berkeley Handout MS2 CS170: Efficient Algorithms and Intractable Problems November 19, 2001 Professor Luca Trevisan Midterm 2 Solutions Problem 1. Provide the following information:

More information

Solving Systems of Polynomial Equations

Solving Systems of Polynomial Equations Solving Systems of Polynomial Equations David Eberly, Geometric Tools, Redmond WA 98052 https://www.geometrictools.com/ This work is licensed under the Creative Commons Attribution 4.0 International License.

More information

GEOMETRY OF MATRICES x 1

GEOMETRY OF MATRICES x 1 GEOMETRY OF MATRICES. SPACES OF VECTORS.. Definition of R n. The space R n consists of all column vectors with n components. The components are real numbers... Representation of Vectors in R n.... R. The

More information

Linear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey

Linear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey Copyright 2005, W.R. Winfrey Topics Preliminaries Systems of Linear Equations Matrices Algebraic Properties of Matrix Operations Special Types of Matrices and Partitioned Matrices Matrix Transformations

More information

Neural Networks. Hopfield Nets and Auto Associators Fall 2017

Neural Networks. Hopfield Nets and Auto Associators Fall 2017 Neural Networks Hopfield Nets and Auto Associators Fall 2017 1 Story so far Neural networks for computation All feedforward structures But what about.. 2 Loopy network Θ z = ቊ +1 if z > 0 1 if z 0 y i

More information

QR FACTORIZATIONS USING A RESTRICTED SET OF ROTATIONS

QR FACTORIZATIONS USING A RESTRICTED SET OF ROTATIONS QR FACTORIZATIONS USING A RESTRICTED SET OF ROTATIONS DIANNE P. O LEARY AND STEPHEN S. BULLOCK Dedicated to Alan George on the occasion of his 60th birthday Abstract. Any matrix A of dimension m n (m n)

More information

J.I. Aliaga 1 M. Bollhöfer 2 A.F. Martín 1 E.S. Quintana-Ortí 1. March, 2009

J.I. Aliaga 1 M. Bollhöfer 2 A.F. Martín 1 E.S. Quintana-Ortí 1. March, 2009 Parallel Preconditioning of Linear Systems based on ILUPACK for Multithreaded Architectures J.I. Aliaga M. Bollhöfer 2 A.F. Martín E.S. Quintana-Ortí Deparment of Computer Science and Engineering, Univ.

More information

4 ORTHOGONALITY ORTHOGONALITY OF THE FOUR SUBSPACES 4.1

4 ORTHOGONALITY ORTHOGONALITY OF THE FOUR SUBSPACES 4.1 4 ORTHOGONALITY ORTHOGONALITY OF THE FOUR SUBSPACES 4.1 Two vectors are orthogonal when their dot product is zero: v w = orv T w =. This chapter moves up a level, from orthogonal vectors to orthogonal

More information

Linear Algebra Review. Fei-Fei Li

Linear Algebra Review. Fei-Fei Li Linear Algebra Review Fei-Fei Li 1 / 51 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector

More information

Content. Department of Mathematics University of Oslo

Content. Department of Mathematics University of Oslo Chapter: 1 MEK4560 The Finite Element Method in Solid Mechanics II (January 25, 2008) (E-post:torgeiru@math.uio.no) Page 1 of 14 Content 1 Introduction to MEK4560 3 1.1 Minimum Potential energy..............................

More information

Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices

Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices Jaehyun Park June 1 2016 Abstract We consider the problem of writing an arbitrary symmetric matrix as

More information

Algebraic Representation of Networks

Algebraic Representation of Networks Algebraic Representation of Networks 0 1 2 1 1 0 0 1 2 0 0 1 1 1 1 1 Hiroki Sayama sayama@binghamton.edu Describing networks with matrices (1) Adjacency matrix A matrix with rows and columns labeled by

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

CONSTRAINED PERCOLATION ON Z 2

CONSTRAINED PERCOLATION ON Z 2 CONSTRAINED PERCOLATION ON Z 2 ZHONGYANG LI Abstract. We study a constrained percolation process on Z 2, and prove the almost sure nonexistence of infinite clusters and contours for a large class of probability

More information