Multilinear (tensor) algebra


In these notes, $V$ will denote a fixed, finite dimensional vector space over $\mathbb{R}$. Elements of $V$ will be denoted by boldface Roman letters: $\mathbf{v}, \mathbf{w}, \dots$.

Bookkeeping: We are going to develop an efficient notation that permits us to keep track of algebraic operations on vectors and tensors. Good notation prevents us from having to do the same computation over and over.

If $\dim(V) = n$, then each basis $\{\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n\}$ of $V$ has precisely $n$ elements, and every element $\mathbf{v} \in V$ can be written uniquely as a linear combination of them:
$$\mathbf{v} = v^1\mathbf{e}_1 + v^2\mathbf{e}_2 + \cdots + v^n\mathbf{e}_n = \sum_{a=1}^n v^a \mathbf{e}_a.$$
The numbers $(v^1, v^2, \dots, v^n)$ are called the components of $\mathbf{v}$ in the given basis. The reason for the superscripts will become evident shortly.

If $\{\tilde{\mathbf{e}}_a : 1 \le a \le n\}$ is a second basis for $V$, then the same vector $\mathbf{v}$ will have the expression
$$\mathbf{v} = \sum_{a=1}^n \tilde{v}^a \tilde{\mathbf{e}}_a$$
in the new basis. The new basis is related to the first one through a non-singular $n\times n$ matrix $P$:
$$\tilde{\mathbf{e}}_b = \sum_{a=1}^n P^a_{\;b}\,\mathbf{e}_a.$$
$P$ is called the change of basis matrix. Now we have two expressions for the vector $\mathbf{v}$, which is a geometric object, so the expressions must be equal:
$$\mathbf{v} = \sum_b \tilde{v}^b \tilde{\mathbf{e}}_b = \sum_b \tilde{v}^b \Big(\sum_a P^a_{\;b}\,\mathbf{e}_a\Big) = \sum_a \Big(\sum_b \tilde{v}^b P^a_{\;b}\Big)\mathbf{e}_a = \sum_a v^a \mathbf{e}_a,$$
so $\sum_b \tilde{v}^b P^a_{\;b} = v^a$. Solving for the components $\tilde{v}^b$, we find
$$\tilde{v}^b = \sum_a (P^{-1})^b_{\;a}\, v^a.$$
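This change of basis rule is easy to check numerically. The following is a minimal sketch using numpy (not part of the notes); the names `P`, `e`, `v_old` and the random choice of matrix are purely illustrative, and $P$ is assumed non-singular.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

e = np.eye(n)                      # columns e[:, a] are the old basis vectors
P = rng.normal(size=(n, n))        # change of basis matrix, assumed non-singular
e_new = e @ P                      # column b of e_new is P^a_b e_a

v_old = rng.normal(size=n)         # components v^a in the old basis
v_new = np.linalg.solve(P, v_old)  # tilde v^b = (P^{-1})^b_a v^a

# The vector itself is a geometric object, independent of the basis:
assert np.allclose(e @ v_old, e_new @ v_new)
```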

Even at this early stage in our deliberations, it's clear that repeated summations and nested parentheses are going to be both distractions and potential sources of silly algebraic errors. We shall adopt Einstein's summation convention: we omit the summation signs and agree that any pair of repeated upper and lower indices is to be summed over.

For example, $v^a\mathbf{e}_a$ means $v^1\mathbf{e}_1 + \cdots + v^n\mathbf{e}_n$. So $v^a\mathbf{e}_a = v^c\mathbf{e}_c$. Both $a$ and $c$ are dummy indices, and can be changed to something else whenever the situation seems to call for it.

$T^a_{\;a}$ means $T^1_{\;1} + \cdots + T^n_{\;n}$. This is the trace of the matrix with entries $T^a_{\;b}$.

An expression of the form
$$\frac{\partial T^a_{\;b}}{\partial x^a} = 0$$
stands for a system of $n$ PDEs, the first one of which reads
$$\frac{\partial T^1_{\;1}}{\partial x^1} + \frac{\partial T^2_{\;1}}{\partial x^2} + \cdots + \frac{\partial T^n_{\;1}}{\partial x^n} = 0.$$
This works quite nicely, provided that:

1. The sum is taken only over a pair consisting of one upper and one lower index, as in the examples above. Any expression in which two identical indices appear both as upper or both as lower indices, such as $x_{aa}$, is wrong. If something like this shows up, a mistake has been made.

2. A given index can appear at most twice in a given expression. Something like $W_a P^b_{\;a} v^a$ has no meaning.

To solve for $\tilde{v}^a$ in the expression $\tilde{v}^b P^a_{\;b} = v^a$, we need to multiply both sides of this expression by $P^{-1}$. Notice that the upstairs index is the row index. We can't use $(P^{-1})^b_{\;a}$: although this would look fine on the right hand side, it would give us 3 $b$'s on the left. Instead we write
$$\tilde{v}^b P^a_{\;b}(P^{-1})^c_{\;a} = v^a (P^{-1})^c_{\;a}.$$
Now $P^a_{\;b}(P^{-1})^c_{\;a}$ is the $(c, b)$ entry of the product matrix $P^{-1}P = I$. That is,
$$P^a_{\;b}(P^{-1})^c_{\;a} = (P^{-1})^c_{\;a}P^a_{\;b} = \delta^c_b, \qquad \text{where } \delta^c_b = \begin{cases} 0 & \text{if } c \ne b, \\ 1 & \text{if } c = b, \end{cases}$$
where the quantity on the right is called the Kronecker delta. So the left hand side becomes
$$\tilde{v}^b P^a_{\;b}(P^{-1})^c_{\;a} = \tilde{v}^b \delta^c_b = \tilde{v}^c.$$
Finally, notice that quantities such as $P^a_{\;b}$ are just numbers, even though they may come from matrices, and so they commute:
$$L^a_{\;b}M^b_{\;c} = L^a_{\;1}M^1_{\;c} + \cdots + L^a_{\;n}M^n_{\;c} = M^1_{\;c}L^a_{\;1} + \cdots + M^n_{\;c}L^a_{\;n} = M^b_{\;c}L^a_{\;b}.$$
This is an overcomplicated notation for this particular problem: we could just have written $P\tilde{v} = v \Rightarrow \tilde{v} = P^{-1}v$, which is much simpler. Unfortunately, this standard notation is of little help in computations involving higher order objects like the curvature tensor, while the index notation doesn't get any more difficult than what you've just seen. You'll get plenty of practice with this notation in the following sections. The positioning of the indices (up or down) is crucial.
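For readers who like to experiment, numpy's `einsum` implements essentially this convention: a repeated letter in its subscript string is summed over. The sketch below (illustrative names, not part of the notes) reproduces the trace, the matrix product, and the fact that the factors can be written in any order once the index pattern is fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
T = rng.normal(size=(n, n))        # entries T^a_b
L = rng.normal(size=(n, n))        # entries L^a_b
M = rng.normal(size=(n, n))        # entries M^b_c

trace = np.einsum('aa->', T)       # T^a_a
assert np.isclose(trace, np.trace(T))

LM = np.einsum('ab,bc->ac', L, M)  # L^a_b M^b_c, the ordinary matrix product
assert np.allclose(LM, L @ M)

# The entries are just numbers, so the factors commute; only the index
# pattern matters:
assert np.allclose(np.einsum('bc,ab->ac', M, L), LM)
```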

The dual space $V^*$

Definition: The dual space of $V$ is
$$V^* = \{\varphi : V \to \mathbb{R} \text{ with } \varphi \text{ a linear function}\}.$$

Examples: Fix any $\mathbf{w} \in V$ and define
$$\varphi_{\mathbf{w}}(\mathbf{v}) = \mathbf{w}^t\mathbf{v}, \qquad \mathbf{v} \in V.$$
Then $\varphi_{\mathbf{w}}$ is linear and thus an element of $V^*$.

Let $f : \mathbb{R}^n \to \mathbb{R}$ be differentiable at the point $\mathbf{x}_0$. For any $\mathbf{v} \in \mathbb{R}^n$, let
$$\varphi(\mathbf{v}) = \frac{d}{dt} f(\mathbf{x}_0 + t\mathbf{v})\Big|_{t=0}$$
be the derivative of $f$ in the direction $\mathbf{v}$. Writing out this expression in the standard basis, we find
$$\varphi(\mathbf{v}) = \frac{\partial f}{\partial x^1}v^1 + \cdots + \frac{\partial f}{\partial x^n}v^n = \frac{\partial f}{\partial x^a}v^a, \tag{1}$$
all the partial derivatives being evaluated at $\mathbf{x}_0$. Then $\varphi$ is a linear function on $V = \mathbb{R}^n$ called the differential of $f$ at $\mathbf{x}_0$, often written as
$$df = \frac{\partial f}{\partial x^a}\,dx^a.$$
The elements of $V^*$ are called covariant vectors to distinguish them from elements of $V$, which are called contravariant. They are different: getting ahead of ourselves a bit, look at the expression in (1). It's a scalar and must be independent of the basis. If we change the basis in $\mathbb{R}^n$ using the matrix $P$, then we know the components of $\mathbf{v}$ get multiplied by $P^{-1}$. In order for $v^a\,\partial f/\partial x^a$ to remain the same, the components of $df$, namely the partial derivatives, must get multiplied by $P$. So the components of $df$ don't behave the same way as those of $\mathbf{v}$ under a change of basis.
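The opposite transformation behavior is easy to verify numerically: the directional derivative $v^a\,\partial f/\partial x^a$ stays fixed when the components of $\mathbf{v}$ pick up a factor of $P^{-1}$ and the partial derivatives pick up a factor of $P$. A minimal sketch using numpy; the sample function, the finite-difference step, and all names are illustrative choices, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def f(x):                              # a sample scalar function on R^3
    return x[0]**2 * x[1] + np.sin(x[2])

def df(x, h=1e-6):                     # components of df: partial derivatives at x
    out = np.zeros(n)
    for a in range(n):
        dx = np.zeros(n); dx[a] = h
        out[a] = (f(x + dx) - f(x - dx)) / (2 * h)
    return out

x0 = rng.normal(size=n)
v  = rng.normal(size=n)
P  = rng.normal(size=(n, n))           # change of basis matrix, assumed non-singular

v_new  = np.linalg.solve(P, v)         # contravariant components: multiplied by P^{-1}
df_new = P.T @ df(x0)                  # covariant components: tilde phi_b = P^a_b phi_a

# The scalar v^a df_a is basis independent:
assert np.isclose(v @ df(x0), v_new @ df_new)
```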

Returning to the general subject, any linear function with domain $V$ is completely determined by what it does to a basis $\{\mathbf{e}_a\}$ of $V$. We define $n$ numbers $\varphi_a$ by
$$\varphi_a = \varphi(\mathbf{e}_a).$$
Since $\varphi$ is linear, it follows that for any $\mathbf{v} \in V$,
$$\varphi(\mathbf{v}) = \varphi(v^a\mathbf{e}_a) = v^a\varphi(\mathbf{e}_a) = v^a\varphi_a.$$
Here, the linearity of $\varphi$ has been used in writing $\varphi(v^a\mathbf{e}_a) = v^a\varphi_a$. The numbers $\varphi(\mathbf{e}_a) = \varphi_a$ are the components of $\varphi$. If these are the components, what's the basis?

Definition: Given a basis $\{\mathbf{e}_a : 1 \le a \le n\}$ of $V$, the dual basis $\{\mathbf{e}^1, \mathbf{e}^2, \dots, \mathbf{e}^n\}$ of $V^*$ is defined by
$$\mathbf{e}^a(\mathbf{e}_b) = \delta^a_b, \tag{2}$$
and extending by linearity, where this last means that for any $\mathbf{v}$, we define $\mathbf{e}^a(\mathbf{v})$ by requiring it to be linear. Thus
$$\mathbf{e}^a(\mathbf{v}) = \mathbf{e}^a(v^b\mathbf{e}_b) = \text{(linearity!)}\; v^b\mathbf{e}^a(\mathbf{e}_b) = v^b\delta^a_b = v^a.$$
Under a change of basis for $V$ given by the matrix $P$, we have $\tilde{\mathbf{e}}_b = P^a_{\;b}\mathbf{e}_a$. What is $\tilde{\mathbf{e}}^b$? Well, we must have
$$\tilde{\mathbf{e}}^b(\tilde{\mathbf{e}}_a) = \delta^b_a = \tilde{\mathbf{e}}^b(P^c_{\;a}\mathbf{e}_c) = P^c_{\;a}\tilde{\mathbf{e}}^b(\mathbf{e}_c).$$
If we write $\tilde{\mathbf{e}}^b(\mathbf{e}_c) = Q^b_{\;c}$, then this says $P^c_{\;a}Q^b_{\;c} = \delta^b_a$, and therefore $Q = P^{-1}$:
$$\tilde{\mathbf{e}}^b = (P^{-1})^b_{\;c}\,\mathbf{e}^c.$$

Exercise:

1. Show that $\{\mathbf{e}^a : 1 \le a \le n\}$ is a basis for $V^*$ and that any $\varphi \in V^*$ can be written uniquely in the form $\varphi_a\mathbf{e}^a$.

2. Show that $V^{**} = V$. Hint: $\varphi(\mathbf{v})$ is linear in $\varphi$ as well as $\mathbf{v}$. The proof does not need coordinates or bases; the isomorphism of these two vector spaces is natural.

Because of this, mathematicians often write something like $\langle\varphi, \mathbf{v}\rangle$ instead of $\varphi(\mathbf{v})$ to indicate the bilinearity. Unfortunately this is too easily confused with Dirac's notation (which involves a Hermitian metric), so we won't do it.

What's the geometric meaning of a covariant vector? Since it's a function, we can look at the surfaces on which it's constant:
$$\varphi(\mathbf{v}) = c \iff \varphi_a v^a = c.$$
This is the equation of a hyperplane in $V$; it passes through the origin if $c = 0$. It is not defined using the dot product; we'll come back to this.
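The dual basis transformation rule derived above can also be seen concretely: if covectors are represented by row vectors acting on column vectors by matrix multiplication, the new dual basis covectors are the rows of $P^{-1}$. A small numpy sketch with illustrative names (not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

P = rng.normal(size=(n, n))        # change of basis matrix, assumed non-singular
e_new    = np.eye(n) @ P           # columns: the new basis vectors tilde e_a
dual_new = np.linalg.inv(P)        # rows: the new dual basis covectors tilde e^b

# tilde e^b(tilde e_a) = delta^b_a:
assert np.allclose(dual_new @ e_new, np.eye(n))
```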

Example: Let $L(x^a, \dot{x}^a, t)$ be the Lagrangian of some physical system. The Euler-Lagrange equations are
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}^a}\right) = \frac{\partial L}{\partial x^a} \qquad \text{for } 1 \le a \le n.$$
The conjugate momenta are defined by
$$p_a = \frac{\partial L}{\partial \dot{x}^a}.$$
For a conservative system, with potential energy $V(x^a)$, the equations of motion are just
$$\frac{dp_a}{dt} = -\frac{\partial V}{\partial x^a}.$$
Both sides of this equation are the components of covariant vectors. Indeed, the Legendre transformation, connecting the Hamiltonian and the Lagrangian, has the form
$$H = p_a\dot{x}^a - L,$$
in which both $H$ and $L$ are scalars and so, therefore, is $p_a\dot{x}^a$; the $\dot{x}^a$ are the components of the velocities and transform as such. So the conjugate momenta must transform oppositely.

Tensors

We'll give the general definition here, for completeness, but we're only going to be dealing with tensors of relatively low rank.

Definition: A tensor $T$ of rank $(r, s)$ is a multilinear function
$$T : \underbrace{V^*\times V^*\times\cdots\times V^*}_{r \text{ copies}} \times \underbrace{V\times V\times\cdots\times V}_{s \text{ copies}} \to \mathbb{R}.$$
There are $r$ copies of $V^*$ and $s$ copies of $V$, so this is a function of $r + s$ vector variables, or $(r + s)n$ scalar variables. The word multilinear means that $T$ is linear in each of its arguments. $T$ is said to be contravariant of rank $r$ and covariant of rank $s$.

Examples:

1. A vector is a tensor of rank $(1, 0)$; a dual vector is a tensor of rank $(0, 1)$.

2. A covariant tensor $g$ of rank 2 is a tensor of rank $(0, 2)$; it's a bilinear function of vectors. So for any pair of vectors, $g(\mathbf{v}, \mathbf{w})$ is a real number, and for all scalars $c_1, c_2$,
$$g(c_1\mathbf{v}_1 + c_2\mathbf{v}_2, \mathbf{w}) = c_1 g(\mathbf{v}_1, \mathbf{w}) + c_2 g(\mathbf{v}_2, \mathbf{w})$$
and
$$g(\mathbf{v}, c_1\mathbf{w}_1 + c_2\mathbf{w}_2) = c_1 g(\mathbf{v}, \mathbf{w}_1) + c_2 g(\mathbf{v}, \mathbf{w}_2).$$
Specific examples of rank 2 covariant tensors include the metric tensors of Euclidean and Minkowski space, the electromagnetic field tensor and the energy-momentum tensor; the latter two also have contravariant forms (see below).

3. The curvature tensor $R$ has rank $(1, 3)$, so it's a multilinear function of 4 arguments: $R(\mathbf{u}, \mathbf{v}, \mathbf{w}, \varphi)$.

4. A linear transformation $L : V \to V$ is (naturally identified with) a tensor $\hat{L}$ of rank $(1, 1)$: let $\varphi \in V^*$, $\mathbf{v} \in V$, and define $\hat{L}(\varphi, \mathbf{v}) = \varphi(L(\mathbf{v}))$.

5. In conjunction with the above, the identity map $I(\mathbf{v}) = \mathbf{v}$ is identified with a tensor of rank $(1, 1)$ called the Kronecker delta. We'll talk a bit more about these in a minute.

Definition: The set of all tensors of rank $(r, s)$ is a vector space over $\mathbb{R}$ with the usual pointwise definitions of addition and scalar multiplication:
$$(T + S)(\varphi, \dots, \mathbf{v}) = T(\varphi, \dots, \mathbf{v}) + S(\varphi, \dots, \mathbf{v}),$$
$$(cT)(\varphi, \dots, \mathbf{v}) = c\,T(\varphi, \dots, \mathbf{v}).$$
This vector space is denoted $V\otimes V\otimes\cdots\otimes V\otimes V^*\otimes V^*\otimes\cdots\otimes V^*$, where there are $r$ copies of $V$ and $s$ of $V^*$.

Example: Suppose $\mathbf{v}, \mathbf{w} \in V$. Then we define an element of $V\otimes V$, denoted $\mathbf{v}\otimes\mathbf{w}$, by the requirement
$$(\mathbf{v}\otimes\mathbf{w})(\varphi, \mu) = \varphi(\mathbf{v})\,\mu(\mathbf{w}).$$
This is evidently bilinear in $\varphi$ and $\mu$, and so it's a tensor of rank $(2, 0)$ called the tensor product of $\mathbf{v}$ and $\mathbf{w}$. In a similar fashion, we can define things like $\varphi\otimes\mathbf{v} \in V^*\otimes V$. A tensor which can be written in the form $\mathbf{v}\otimes\mathbf{w}\otimes\varphi$ is said to be decomposable.
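In components, the tensor product is just the outer product of the component arrays, $(\mathbf{v}\otimes\mathbf{w})^{ab} = v^a w^b$, and feeding it a pair of covectors reproduces $\varphi(\mathbf{v})\,\mu(\mathbf{w})$. A short numpy sketch with illustrative names (not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

v, w = rng.normal(size=n), rng.normal(size=n)      # contravariant components v^a, w^b
phi, mu = rng.normal(size=n), rng.normal(size=n)   # covariant components phi_a, mu_b

T = np.einsum('a,b->ab', v, w)                     # (v tensor w)^{ab} = v^a w^b
assert np.allclose(T, np.outer(v, w))

# (v tensor w)(phi, mu) = phi(v) mu(w):
assert np.isclose(np.einsum('ab,a,b->', T, phi, mu), (phi @ v) * (mu @ w))
```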

Components of tensors: We give an example; generalization should be easy. Suppose $T$ has rank $(2, 1)$; then we define
$$T^{ab}_{\;\;\,c} = T(\mathbf{e}^a, \mathbf{e}^b, \mathbf{e}_c)$$
(there are $n^3$ numbers here!), and we write
$$T = T^{ab}_{\;\;\,c}\,\mathbf{e}_a\otimes\mathbf{e}_b\otimes\mathbf{e}^c,$$
where, by definition, $\mathbf{e}_a\otimes\mathbf{e}_b\otimes\mathbf{e}^c$ is the tensor of rank $(2, 1)$ such that
$$(\mathbf{e}_a\otimes\mathbf{e}_b\otimes\mathbf{e}^c)(\varphi, \mu, \mathbf{v}) = \varphi_a\mu_b v^c.$$
Equivalently,
$$(\mathbf{e}_a\otimes\mathbf{e}_b\otimes\mathbf{e}^c)(\mathbf{e}^e, \mathbf{e}^f, \mathbf{e}_g) = \delta^e_a\delta^f_b\delta^c_g.$$

Exercise:

1. Show that $\{\mathbf{e}_a\otimes\mathbf{e}_b\otimes\mathbf{e}^c : 1 \le a, b, c \le n\}$ is a basis for $V\otimes V\otimes V^*$.

2. How do the components $T^{ab}_{\;\;\,c}$ transform under a change of basis? (We always assume that the new basis in $V^*$ is the dual basis of the new one in $V$.)

3. Show that the identity map on $V$ can be written as $I = \mathbf{e}_a\otimes\mathbf{e}^a$. What are its components? Is this true in every basis?

Further examples: We can fool around algebraically with tensors, getting maps into various vector spaces other than just $\mathbb{R}$. Suppose now that $T$ has rank $(1, 2)$. For any $\mathbf{v} \in V$, let
$$T(\mathbf{v}) = T^a_{\;bc}\,\mathbf{e}_a\otimes\mathbf{e}^b\otimes\mathbf{e}^c(\mathbf{v}) = T^a_{\;bc}v^c\,\mathbf{e}_a\otimes\mathbf{e}^b.$$
Thus $T$ is a map from $V$ to $V\otimes V^*$. For $\varphi \in V^*$, we can define
$$T(\varphi) = T^a_{\;bc}\,\mathbf{e}_a(\varphi)\otimes\mathbf{e}^b\otimes\mathbf{e}^c = \varphi_a T^a_{\;bc}\,\mathbf{e}^b\otimes\mathbf{e}^c \in V^*\otimes V^*.$$
If $L = L^a_{\;b}\,\mathbf{e}_a\otimes\mathbf{e}^b$ has rank $(1, 1)$, then as in the first example, we can define
$$L(\mathbf{v}) = L^a_{\;b}\,\mathbf{e}_a\otimes\mathbf{e}^b(\mathbf{v}) = L^a_{\;b}v^b\,\mathbf{e}_a \in V,$$
as mentioned above in a slightly different form.
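These partial evaluations are single contractions on the component array, which `einsum` handles directly. A numpy sketch with a randomly filled rank $(1, 2)$ array (all names illustrative, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3

T = rng.normal(size=(n, n, n))          # components T^a_{bc}, axes ordered (a, b, c)
v = rng.normal(size=n)                  # components v^c
phi = rng.normal(size=n)                # components phi_a

Tv   = np.einsum('abc,c->ab', T, v)     # T(v) = T^a_{bc} v^c,     rank (1, 1)
Tphi = np.einsum('abc,a->bc', T, phi)   # T(phi) = phi_a T^a_{bc}, rank (0, 2)

print(Tv.shape, Tphi.shape)             # (3, 3) (3, 3)
```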

Metric tensors

Notation: From now on, we drop the boldface fonts for simplicity. A vector will be written simply as $V$ (upper case) and its components in a given basis as $V^a$. We'll retain the boldface font for basis vectors and the zero vector.

Suppose $g$ is a rank $(0, 2)$ tensor (also called a bilinear form) which is

1. Symmetric: $g(U, V) = g(V, U)$ for all vectors $U, V$.

2. Non-degenerate: if $g(U, V) = 0$ for every $U$, then $V = \mathbf{0}$.

Symmetry implies that, in any basis, $g(\mathbf{e}_a, \mathbf{e}_b) = g_{ab} = g_{ba} = g(\mathbf{e}_b, \mathbf{e}_a)$, so the components of $g$ form a symmetric $n\times n$ matrix. As we know (notes on SR), there exists a basis in which the matrix of components can be brought to canonical form:
$$\tilde{g}_{ab} = g_{cd}P^c_{\;a}P^d_{\;b} = \pm\delta_{ab}.$$
If all the signs preceding $\delta_{ab}$ are positive, then $g$ is said to be positive-definite, meaning that, in this coordinate system,
$$g(V, V) = (V^1)^2 + (V^2)^2 + \cdots + (V^n)^2 > 0.$$
The particular expression for $g(V, V)$ as a sum of squares is only valid in a coordinate system in which $g_{ab} = \delta_{ab}$, but the value of $g(V, V)$, and the fact that it's positive unless $V = \mathbf{0}$, is independent of the basis. A bilinear form with all these properties is called a (positive-definite) metric tensor. If all properties except that of positive-definiteness hold, then $g$ is called indefinite. If the canonical form of $g$ is $\mathrm{Diag}\{1, -1, -1, \dots, -1\}$, then $g$ is called a Lorentz metric. If $g$ is a metric, then the number $g(U, V)$ is called the scalar product of $U$ and $V$.

Definition: The pair $(\mathbb{R}^n, g)$, where $g$ is positive-definite, is called $n$-dimensional Euclidean space, and denoted $\mathbb{E}^n$.

Definition: The pair $(\mathbb{R}^n, g)$, where $g$ is a Lorentz metric, is called $n$-dimensional Minkowski space, and denoted $\mathbb{M}^n$.

The metric $g$ defines a vector space isomorphism between $V$ and its dual, known in the trade as lowering indices: Fix a vector $V$, and use it to define the linear function
$$\varphi_V(U) = g(V, U).$$
Since $\varphi_V \in V^*$, its components are given by
$$(\varphi_V)_a = \varphi_V(\mathbf{e}_a) = g(V, \mathbf{e}_a) = g(V^b\mathbf{e}_b, \mathbf{e}_a) = V^b g(\mathbf{e}_b, \mathbf{e}_a) = V^b g_{ab}.$$
The matrix of this transformation, in the given bases, is just $(g_{ab})$. It is non-singular, so the map is an isomorphism between $V$ and its dual. From the definition, it's clearly well-defined, and therefore independent of the basis.

Definition: We write the component $(\varphi_V)_a$ as $V_a$; that is, $V_a = V^b g_{ab}$, and say that the covariant vector $V_b\mathbf{e}^b$ has been obtained from $V$ by lowering an index.
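Lowering an index is a single contraction with the matrix $(g_{ab})$. A numpy sketch with a randomly generated positive-definite metric; the construction of $g$ and all names are illustrative, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3

A = rng.normal(size=(n, n))
g = A @ A.T + n * np.eye(n)          # a symmetric, positive-definite metric g_{ab}

V = rng.normal(size=n)               # contravariant components V^a
V_low = np.einsum('ab,b->a', g, V)   # V_a = g_{ab} V^b

U = rng.normal(size=n)
# g(U, V) computed directly and via the lowered components agree:
assert np.isclose(np.einsum('ab,a,b->', g, U, V), U @ V_low)
```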

We can raise indices as well: since $g$ gives an isomorphism from $V$ to $V^*$, it has an inverse $g^{-1} : V^* \to V$, which is called raising an index. The components of $g^{-1}$ are, somewhat surprisingly, denoted by $g^{ab}$ (no inverse sign). It is (almost) impossible to make a mistake, since the metric has its indices downstairs and the inverse upstairs. Since the two matrices are inverses, in any coordinate system we must have
$$g^{ab}g_{bc} = \delta^a_c. \tag{3}$$
The tensor of rank $(2, 0)$ given by $g^{ab}\,\mathbf{e}_a\otimes\mathbf{e}_b$ is called the contravariant metric tensor. Explicitly, the isomorphism defined by the contravariant metric is given as follows: for any fixed $\varphi \in V^*$, the vector $V_\varphi$ is defined by
$$\mu(V_\varphi) = g(\mu, \varphi) = g^{ab}\mu_a\varphi_b,$$
where $g$ here denotes the contravariant metric. As above, we will simply write $\varphi^a = g^{ab}\varphi_b$, and we'll say that we've raised the index on $\varphi$.

Exercise: Show that for any vectors $V$ and $W$, $V^aW_a = V_aW^a$. Is $V^aW_b = V_aW^b$? Why or why not?

The scalar product of $U$ and $V$ is often written without explicitly showing the components of the metric tensor:
$$g(U, V) = g_{ab}U^aV^b = U_bV^b = U^aV_a.$$
The same process allows us to raise and lower indices on tensors of any rank:
$$T^{ab} = g^{ac}g^{bd}T_{cd}, \qquad\text{or}\qquad T^a_{\;\;b} = T^{ac}g_{cb}.$$
Note: It is important to keep track of the relative positions of indices. It is not generally true that $T^a_{\;\;b} = T_b^{\;\;a}$, although it might happen in some particular case.
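Here is a small numpy check of the inverse relation (3) and of the exercise $V^aW_a = V_aW^a$, again with a randomly generated positive-definite metric (illustrative names only, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3

A = rng.normal(size=(n, n))
g = A @ A.T + n * np.eye(n)          # metric g_{ab}
g_inv = np.linalg.inv(g)             # contravariant metric g^{ab}

# g^{ab} g_{bc} = delta^a_c:
assert np.allclose(np.einsum('ab,bc->ac', g_inv, g), np.eye(n))

V, W = rng.normal(size=n), rng.normal(size=n)
V_low, W_low = g @ V, g @ W          # lowered components V_a, W_a

# V^a W_a = V_a W^a:
assert np.isclose(V @ W_low, V_low @ W)
```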

In $\mathbb{E}^n$, in a Cartesian (orthonormal) basis, the metric has the components $\mathrm{Diag}\{1, 1, \dots, 1\}$, and the covariant and contravariant components of a tensor are numerically identical in this basis, since, for example,
$$T^{1b} = g^{1c}T_c^{\;\;b} = \delta^{1c}T_c^{\;\;b} = \delta^{11}T_1^{\;\;b} = T_1^{\;\;b}.$$
This leads to some confusion in scientific texts. A simple example is given by the usual calculus book definition of the derivative of $f$ in the direction $V$ as $\nabla f\cdot V$. In fact, the directional derivative is given, as shown above, by the expression $V^a\,\partial f/\partial x^a$, and doesn't involve the dot product at all. The numbers $\partial f/\partial x^a$ are the components of the covariant vector $df$. To get a (contravariant) vector involves raising an index. The correct definition of the gradient is
$$(\nabla f)^a = g^{ab}\frac{\partial f}{\partial x^b},$$
often written as $\partial^a f$. In Cartesian coordinates, these are numerically equal, but in spherical polar coordinates, the components of the gradient vector are not $(\partial f/\partial r, \partial f/\partial\theta, \partial f/\partial\varphi)$. These are the components of $df$ in the appropriate basis. The components of $\nabla f$, on the other hand, must be found by raising indices with the metric tensor, whose components are not constants in this coordinate system. In fact, as we'll see when we look at tensor analysis, the components of the gradient are
$$\left(\frac{\partial f}{\partial r},\; \frac{1}{r^2}\frac{\partial f}{\partial\theta},\; \frac{1}{r^2\sin^2\theta}\frac{\partial f}{\partial\varphi}\right).$$

In $\mathbb{M}^4$, the preferred coordinates are the inertial frames, in which the Lorentz metric takes the form
$$g_{ab} = \mathrm{Diag}\{1, -1, -1, -1\} = g^{ab}.$$
So raising and lowering indices is simple but not trivial: writing the components of $V$ as $(V^0, V^1, V^2, V^3)$, we have
$$V_0 = V^0, \quad V_1 = -V^1, \quad V_2 = -V^2, \quad V_3 = -V^3,$$
so that
$$g(U, V) = U^aV_a = U^0V^0 - U^1V^1 - U^2V^2 - U^3V^3,$$
and
$$g(V, V) = V^aV_a = (V^0)^2 - (V^1)^2 - (V^2)^2 - (V^3)^2.$$
Or, as we've written it elsewhere, $d\tau^2 = dt^2 - d\mathbf{x}\cdot d\mathbf{x}$.
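With the Lorentz metric in an inertial frame, lowering an index just flips the sign of the spatial components, as the following numpy sketch confirms (the particular four-vectors are illustrative, not part of the notes):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Lorentz metric, g_{ab} = g^{ab} in an inertial frame

U = np.array([2.0, 0.3, -1.1, 0.5])      # components U^a
V = np.array([1.0, 0.2,  0.4, -0.7])     # components V^a

U_low = eta @ U                          # U_a = (U^0, -U^1, -U^2, -U^3)
assert np.allclose(U_low, [U[0], -U[1], -U[2], -U[3]])

s = np.einsum('ab,a,b->', eta, U, V)     # g(U, V) = U^a V_a
assert np.isclose(s, U[0] * V[0] - U[1:] @ V[1:])
```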

Odds and ends: Contraction

Definition: $rs$ new tensors of rank $(r-1, s-1)$ can be produced from a tensor $T$ of rank $(r, s)$ by picking one upper and one lower index and summing over them, to obtain the new tensor with components
$$T^{a_1\dots a_{i-1}\,a\,a_{i+1}\dots a_r}_{\quad b_1\dots b_{j-1}\,a\,b_{j+1}\dots b_s},$$
in a process called contraction.

(Coordinate-free definition: To contract on the $i$-th contravariant and $j$-th covariant arguments, take
$$T'(\varphi^1, \dots, \varphi^{r-1}, V_1, \dots, V_{s-1}) = \mathrm{tr}\big(T(\varphi^1, \dots, \varphi^{i-1}, \circ, \varphi^i, \dots, \varphi^{r-1}, V_1, \dots, V_{j-1}, \circ, V_j, \dots, V_{s-1})\big),$$
where the circles denote the slots of the omitted arguments. The quantity in the outer parentheses is of rank $(1, 1)$ and $\mathrm{tr}$ denotes the trace. This is just written down so you know it's possible to do it without indices, but we'll never use this form.)

Examples:

1. Contracting the tensor $T^a_{\;b}\,\mathbf{e}_a\otimes\mathbf{e}^b$ gives the scalar $T^a_{\;a}$. This is called the trace of $T$.

2. If $U = U^{ab}\,\mathbf{e}_a\otimes\mathbf{e}_b$, then we can't sum on the two indices directly, but we can first lower an index and then contract, to get $g_{ab}U^{ab}$.

3. The 2-covariant tensor obtained from the curvature tensor via
$$R_{ab} = R_{acb}^{\;\;\;\;\,c}$$
is called the Ricci tensor.

4. For those not satisfied with the level of mathematical rigor, we could write, in any basis,
$$R_{ab} = \mathrm{Ric}(\mathbf{e}_a, \mathbf{e}_b) = \mathrm{tr}\big(R(\mathbf{e}_a, \circ, \mathbf{e}_b, \circ)\big).$$

5. If we use the shorthand notation $\partial_a = \partial/\partial x^a$, then a number of differential operators can be written as contractions:
$$\Delta\varphi = \partial^a\partial_a\varphi \quad\text{(the Laplace operator in } \mathbb{E}^n\text{)},$$
$$\Box\varphi = \partial^a\partial_a\varphi \quad\text{(the d'Alembertian operator in } \mathbb{M}^4\text{)}.$$
In the first equation, we're using the Cartesian components of the Euclidean metric to raise the index prior to contraction; in the second, the Lorentz metric is used.
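On component arrays, contraction is again an `einsum` with a repeated index, or equivalently a trace over two axes. The sketch below contracts a generic rank 4 array in the index pattern of the Ricci contraction; the array is random, not an actual curvature tensor, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4

# A generic array with axes ordered (a, c, b, d), standing in for R_{acb}^d:
R = rng.normal(size=(n, n, n, n))

Ric = np.einsum('acbc->ab', R)                   # contract c with the last (upper) index
assert np.allclose(Ric, np.trace(R, axis1=1, axis2=3))
print(Ric.shape)                                 # (4, 4)
```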

Symmetric and skew-symmetric tensors

Definition: A covariant tensor $T$ is said to be symmetric in two of its arguments if
$$T(\dots, U, \dots, V, \dots) = T(\dots, V, \dots, U, \dots).$$
In terms of components, this reads $T_{\dots a\dots b\dots} = T_{\dots b\dots a\dots}$. It is symmetric if this holds for all pairs of arguments; that is, the value remains unchanged under any transposition of two arguments, and hence under any permutation of the arguments. Similarly for contravariant tensors. Mixed tensors can be symmetric under the interchange of two contravariant or two covariant vectors, but there's no notion of a totally symmetric tensor of mixed rank.

Definition: The covariant tensor $F$ is said to be skew-symmetric if it changes sign under a transposition of any two of its arguments:
$$F(\dots, U, \dots, V, \dots) = -F(\dots, V, \dots, U, \dots).$$
Under an arbitrary permutation of its arguments, $F$ changes sign under odd permutations, but not under even ones.

Examples:

1. Any 2-covariant tensor can be decomposed into the sum of a symmetric and a skew-symmetric tensor: $T(U, V) = S(U, V) + A(U, V)$, where
$$S(U, V) = \tfrac{1}{2}\big(T(U, V) + T(V, U)\big), \qquad A(U, V) = \tfrac{1}{2}\big(T(U, V) - T(V, U)\big).$$

2. Higher rank tensors have totally symmetric and skew-symmetric parts in the following sense: if $T$ is of rank $r$, we can define
$$S(T)(U_1, U_2, \dots, U_r) = \frac{1}{r!}\sum_\sigma T(U_{\sigma 1}, U_{\sigma 2}, \dots, U_{\sigma r}),$$
$$A(T)(U_1, U_2, \dots, U_r) = \frac{1}{r!}\sum_\sigma \mathrm{sign}(\sigma)\, T(U_{\sigma 1}, U_{\sigma 2}, \dots, U_{\sigma r}),$$
where the sum is over all permutations of order $r$, and, in the second case, $\mathrm{sign}(\sigma) = \pm 1$ depending on whether $\sigma$ is even or odd. (Remark: $S$ and $A$ are homomorphisms from the tensor algebra $\otimes^r V^*$ onto subalgebras of $\otimes^r V^*$ which are called, not surprisingly, the symmetric and skew-symmetric subalgebras.)

3. In terms of components, a rank 2 tensor is symmetric if $T_{ab} = T_{ba}$, and skew-symmetric if $T_{ab} = -T_{ba}$, for all pairs of indices. Similarly for higher rank tensors.
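For a rank 2 array this is just the familiar decomposition of a matrix into its symmetric and antisymmetric parts. A numpy sketch (illustrative names, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 3

T = rng.normal(size=(n, n))        # a generic rank 2 covariant tensor T_{ab}
S = 0.5 * (T + T.T)                # symmetric part:      S_{ab} = (T_{ab} + T_{ba}) / 2
A = 0.5 * (T - T.T)                # skew-symmetric part: A_{ab} = (T_{ab} - T_{ba}) / 2

assert np.allclose(S + A, T)
assert np.allclose(S, S.T)
assert np.allclose(A, -A.T)
```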

Exercises

1. Show that $\{\mathbf{e}_a\otimes\mathbf{e}_b\otimes\mathbf{e}^c : 1 \le a, b, c \le n\}$ forms a basis for the vector space $V\otimes V\otimes V^*$.

2. Show that, under the change of basis $\tilde{\mathbf{e}}_a = P^b_{\;a}\mathbf{e}_b$, the components $T^a_{\;bc}$ of a tensor of rank $(1, 2)$ undergo the transformation
$$\tilde{T}^a_{\;bc} = T^d_{\;ef}\,(P^{-1})^a_{\;d}P^e_{\;b}P^f_{\;c}.$$

3. Is the bilinear form defined by $g(X, Y) = X^tAY$, where $A = \dots$, positive definite? What's the canonical form of $g$?

4. For $\varphi, \mu \in V^*$, define their wedge product by
$$\varphi\wedge\mu = \tfrac{1}{2}(\varphi\otimes\mu - \mu\otimes\varphi).$$
Show that

(a) $\varphi\wedge\mu = -\mu\wedge\varphi$ and $\varphi\wedge\varphi = 0$;

(b) if $\mu = \mu_a\mathbf{e}^a$ and $\varphi = \varphi_b\mathbf{e}^b$, then
$$\mu\wedge\varphi = \mu_a\varphi_b\,\mathbf{e}^a\wedge\mathbf{e}^b = \sum_{a<b}(\mu_a\varphi_b - \varphi_a\mu_b)\,\mathbf{e}^a\wedge\mathbf{e}^b = \mu_{[a}\varphi_{b]}\,\mathbf{e}^a\otimes\mathbf{e}^b,$$
where, in the last expression, $\mu_{[a}\varphi_{b]} = \tfrac{1}{2}(\mu_a\varphi_b - \varphi_a\mu_b)$.

5. Show that if $T \in V^*\otimes V^*$ is skew-symmetric (i.e., $T(V, W) = -T(W, V)$ for all vectors $V, W$), then
$$T = T_{ab}\,\mathbf{e}^a\otimes\mathbf{e}^b = T_{[ab]}\,\mathbf{e}^a\otimes\mathbf{e}^b = T_{ab}\,\mathbf{e}^a\wedge\mathbf{e}^b.$$
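The identities in exercise 4 are easy to verify numerically in components, where the wedge product of two covectors is the antisymmetrized outer product. A final numpy sketch (illustrative names, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 3

phi, mu = rng.normal(size=n), rng.normal(size=n)     # covector components

def wedge(a, b):
    # (a ^ b)_{cd} = (a_c b_d - b_c a_d) / 2
    return 0.5 * (np.outer(a, b) - np.outer(b, a))

assert np.allclose(wedge(phi, mu), -wedge(mu, phi))  # phi ^ mu = -(mu ^ phi)
assert np.allclose(wedge(phi, phi), 0.0)             # phi ^ phi = 0
```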
