Barr, Extending the Einstein Summation Notation


The Einstein Summation Notation: Introduction to Cartesian Tensors and Extensions to the Notation. Alan H. Barr, California Institute of Technology.

An Order-independent Representation

The Einstein summation notation is an algebraic shorthand for expressing multicomponent Cartesian quantities, manipulating them, simplifying them, and converting the resulting expressions into a computer-language shorthand. It manipulates expressions involving determinants, rotations, multidimensional derivatives, matrix inverses, cross products, and other multicomponent mathematical entities. ESN can help you write closed-form expressions for such things as deriving and computing messy multidimensional derivatives, expressing the multidimensional Taylor series and chain rule, expressing the rotation axis of a 3D rotation matrix, finding the representation of a matrix in another coordinate system, deriving a fourth (4D) vector perpendicular to 3 others, or rotating around N-2 axes simultaneously in N dimensions. ESN is an "expert friendly" notation: after an initial investment of time, it converts difficult problems into problems with workable solutions. It does not make "easy" problems easier, however. The algebraic terms explicitly delineate the numerical components of the equations so that the tensors can be manipulated by a computer. A C preprocessor has been implemented for this purpose, which converts embedded Einstein summation notation expressions into C language expressions. In conventional matrix and vector notation, the multiplication order contains critical information for the calculation. In the Einstein subscript form, however, the terms in the equations can be arranged and factored in different orders without changing the algebraic result. The ability to move the factors around as desired is a particularly useful symbolic property for derivations and manipulations.
Mathematical Background: Tensors

In general, abstract "coordinate independent" multidimensional mathematical operators and objects can be categorized into two types: linear ones, like a vector or a matrix, and nonlinear ones. Although there are many different definitions, for this document it will suffice to think of tensors as the set of (multi)linear multidimensional mathematical objects; whatever the nonlinear ones are, they are not tensors. Various other tensor definitions exist; for instance, a tensor can be thought of as a mathematical object which satisfies certain transformation rules between different coordinate systems. Different numerical representations of a Cartesian tensor are obtained by transforming between Cartesian coordinate systems, which have orthonormal basis vectors, while a generalized tensor is transformed in non-orthonormal coordinates. The components of a Cartesian tensor are magnitudes projected onto the orthogonal Cartesian basis vectors; three-dimensional N-th order Cartesian tensors are represented in terms of $3^N$ components.

The arrays of numbers are not the tensors themselves. A given set of numbers represents the tensor, but only in a particular coordinate system. The numerical representation of a vector, such as $\begin{pmatrix}1\\2\\3\end{pmatrix}$, only has geometric meaning when we also know the coordinate system's basis vectors: the vector goes one unit in the first basis vector direction, two units in the second basis vector direction, and three units along the third basis vector direction. Thus, a tensor is an abstract mathematical object that is represented by different sets of numbers depending on the particular coordinate system that has been chosen; the same tensor will be represented by different numbers when we change the coordinate system.

Noncartesian coordinates need two transformation rules, not just one. For instance, in Figure 1 we're given a circle and its tangent vectors. If we stretch the whole figure in the horizontal direction (this is equivalent to lengthening the horizontal Cartesian basis vector so it is not a unit vector anymore, so the coordinates are now noncartesian), we can see that the stretched circle and the stretched vectors are still tangent to one another. The tangent vectors transformed properly with the stretching transformation. They are "contravariant" noncartesian tensors.

Figure 1: Non-Cartesian coordinates. Tangent vectors are "contravariant" noncartesian tensors.

Figure 2: Normal vectors on the sphere.

Figure 3: Unlike the tangent vectors, the normal vectors, when the figure is stretched in the horizontal direction, are not the normal vectors for the new surface.

Generalized tensor transformations (for noncartesian coordinates) are locally linear transformations that come in two types: contravariant tensors transform with the tangent vectors to a surface embedded in the transformed space, while covariant tensors transform with the normal vectors.
Contravariant indices are generally indicated by superscripts, while covariant indices are indicated by subscripts. Good treatments of these conventions may be found in textbooks describing tensor analysis and three-dimensional mechanics (see [SEGEL] and [MISNER, THORNE, WHEELER]). Generalized tensors are outside the scope of this document.

Figure 4: Normal vectors are covariant tensors; they are transformed by the inverse transpose of the stretching transformation matrix.

For Cartesian transformations, such as pure rotation, tangent vectors transform via the same matrix as the normal vectors; there is no difference between the covariant and contravariant representations. Thus, subscript indices suffice for Cartesian tensors. A zero-th order tensor is a scalar, whose single component is the same in all coordinate systems, while first order tensors are vectors, and second order tensors are represented with matrices. Higher order tensors arise frequently. Intuitively, if a mathematical object is a "k-th order Cartesian tensor" in an N dimensional Cartesian coordinate system, then:

1. The object is an entity which "lives" in the N dimensional Cartesian coordinate system.
2. The object can be represented with k subscripts and $N^k$ components total.
3. The numerical representation of the object is typically different in different coordinate systems.
4. The representations of the object obey the Cartesian form of the transformation rule, to obtain the numerical representations of the same object in another coordinate system. That rule is described in section 3.1.

Example 1: Let us consider the component-by-component description of the three-dimensional matrix equation

$\underline{a} = \underline{b} + \underline{\underline{M}}\,\underline{c}$

(we utilize one underscore to indicate a vector, two for a matrix, etc. We will occasionally use boldface letters to indicate vectors or matrices.) The above equation is actually a set of three one-component equations, valid in any coordinate system in which we choose to represent the components of the vectors and matrices.
Using vertical column vectors, the equation is

$\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} + \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}$

which is equivalent to:

$a_1 = b_1 + m_{11} c_1 + m_{12} c_2 + m_{13} c_3$
$a_2 = b_2 + m_{21} c_1 + m_{22} c_2 + m_{23} c_3$
$a_3 = b_3 + m_{31} c_1 + m_{32} c_2 + m_{33} c_3$

The above equations condense into three instances of one equation,

$a_i = b_i + \sum_{j=1}^{3} m_{ij} c_j, \qquad i = 1, 2, 3.$

The essence of the Einstein summation notation is to create a set of notational defaults so that the summation sign and the range of the subscripts do not need to be written explicitly in each expression. This notational compactness is intended to ease the expression and manipulation of complex equations. With the following rules, the above example becomes:

$a_i = b_i + m_{ij} c_j$

1.2 Definition of Terms

There are two types of subscript indices in the above equation. The subscript "i" is free to vary from one to three on both sides of the above equation, so it is called a free index. The free indices must match on both sides of an equation. The other type of subscript index in Example 1 is the dummy subscript "j." This subscript is tied to the term inside the summation, and is called a bound index. Sometimes we will place "boxes" around the bound indices, to more readily indicate that they are bound. Dummy variables can be renamed as is convenient, as long as all instances of the dummy variable name are replaced by the new name, and the new names of the subscripts do not conflict with the names of other subscripts, or with variable names reserved for other purposes.

1.3 The Classical Rules for the N dimensional Cartesian Einstein Summation Notation

We remind the reader that an algebraic term is an algebraic entity separated from other terms by additions or subtractions, and is composed of a collection of multiplicative factors. Each factor may itself be composed of a sum of terms. A subexpression of a given algebraic expression is a subtree combination of terms and/or factors of the algebraic expression. The classical Einstein summation convention (without author-provided extensions) is governed by the following rules:

Scoping Rule: Given a valid summation-notation algebraic expression, the notation is still valid in each of the algebraic subexpressions. Thus, it is valid to re-associate subexpressions: if E, F, and G are valid ESN expressions, then you can evaluate EF and then combine it with G, or evaluate FG and then combine it with E:

EFG = (EF)G = E(FG)

Rule 1: A subscript index variable is free to vary from one to N if it occurs exactly once in a term.
Rule 2: A subscript index variable is bound to a term as the dummy index of a summation from one to N if it occurs exactly twice in the term. We will sometimes put boxes around bound variables for clarity.

Rule 3: A term is syntactically wrong if it contains three or more instances of the same subscript variable.

Rule 4: Commas in the subscript indicate that a partial derivative is to be taken. A (free or bound) subscript index after the comma indicates partial derivatives with respect to the default arguments of the function (frequently the spatial variables $x_1$, $x_2$, and $x_3$, or whatever x, y, z spatial coordinate system is being used). Partial derivatives with respect to a reserved variable (say t, the time parameter) are indicated by putting the reserved variable after the comma.

With these rules, Example 1 becomes

$a_i = b_i + m_{ij} c_j$

The subscript "i" is a free index in each of the terms in the above equation, because it occurs only once in each term, while "j" is a bound index, because it appears twice in the last term. The boxes on the bound indices are not necessary; they are used just for emphasis. At first, it is helpful to write out the interpretations of some ESN expressions in full, complete with the summation signs, bound indices, and ranges on the free indices. This procedure can help clarify questions that arise concerning the legality of a particular manipulation. In this way, you are brought back into familiar territory to see what the notation is "really" doing.
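As a concrete illustration of what the notation leaves implicit, here is a minimal C sketch of Example 1, with the free index as the outer loop and the bound index as an explicit inner summation. (The function name `esn_mat_vec_add` is illustrative, not part of Barr's preprocessor.)

```c
/* a_i = b_i + m_ij c_j : the free index i ranges over the components,
   while the bound index j generates the implicit summation. */
void esn_mat_vec_add(double a[3], const double b[3],
                     const double m[3][3], const double c[3])
{
    for (int i = 0; i < 3; i++) {      /* free index i */
        a[i] = b[i];
        for (int j = 0; j < 3; j++)    /* bound index j: implicit sum */
            a[i] += m[i][j] * c[j];
    }
}
```

Writing the loops out this way is the programmatic analogue of restoring the summation signs by hand.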

Why only two instances of a subscript index in a term? Consider

$b_j c_j d_j \stackrel{?}{=} (b_j c_j)\,d_j \stackrel{?}{=} b_j\,(c_j d_j).$

However,

$\sum_{j=1}^{3} b_j c_j d_j \ne \left(\sum_{j=1}^{3} b_j c_j\right) d_j$, etc.

The first expression is a number, while the second expression is a vector which equals a scalar $(b \cdot c)$ times vector d. The third expression, $b_j \left(\sum_{j=1}^{3} c_j d_j\right)$, is a different vector, the product of the vector b with the scalar $(c \cdot d)$. These are not remotely equivalent. Thus, we are limited to two instances of any particular subscript index in a term in order to retain the associative property of multiplication in our algebraic expressions. Unlike conventional matrix notation, ESN factors are both commutative and associative within a term. For beginners it is particularly important to check ESN expressions for validity with respect to Rule 3 and the number of subscript instances in a term.

2 A Few ESN Symbols

The Kronecker delta, or identity matrix, is represented with the symbol $\delta$ with subscripts for the rows and columns of the matrix:

$\delta_{ij} = \begin{cases} 0 & i \ne j \\ 1 & i = j \end{cases}$

The logical bracket symbol is an extended symbol related to the Kronecker delta:

$\langle \text{logical expression} \rangle = \begin{cases} 1 & \text{expression is true} \\ 0 & \text{otherwise} \end{cases}$

So $\langle i = j \rangle = \delta_{ij}$.

The order-three permutation symbol $\epsilon_{ijk}$ is used to manipulate cross products and three dimensional matrix determinants, and is defined as follows:

$\epsilon_{ijk} = \begin{cases} 0 & \text{if any pair of subscripts is equal} \\ +1 & (i\,j\,k) \text{ is an even perm. of } (1\,2\,3) \\ -1 & (i\,j\,k) \text{ is an odd perm. of } (1\,2\,3) \end{cases}$

An even permutation of a list is a permutation created using an even number of interchange operations of the elements. An odd permutation requires an odd number of interchanges. The six permutations of (1,2,3) may be obtained with successive interchange operations of elements:

$(1\,2\,3) \to (2\,1\,3) \to (2\,3\,1) \to (3\,2\,1) \to (3\,1\,2) \to (1\,3\,2).$

Thus, the even permutations are (1,2,3), (2,3,1), and (3,1,2), while the odd permutations are (2,1,3), (3,2,1), and (1,3,2).
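The two symbols above are easy to compute. A minimal C sketch (my own helper names, not from the paper) uses the closed form $(i-j)(j-k)(k-i)/2$ for the order-three permutation symbol, valid for indices in {1, 2, 3}:

```c
/* Kronecker delta: 1 when the indices match, 0 otherwise. */
int kdelta(int i, int j) { return i == j; }

/* Order-3 permutation (Levi-Civita) symbol for indices in {1,2,3}:
   the product (i-j)(j-k)(k-i)/2 is +1 on even permutations of (1,2,3),
   -1 on odd permutations, and 0 whenever any pair of indices repeats. */
int eps3(int i, int j, int k)
{
    return (i - j) * (j - k) * (k - i) / 2;
}
```

For example, `eps3(1,2,3)` is +1 (even permutation), `eps3(2,1,3)` is -1 (one interchange), and `eps3(1,1,2)` is 0 (repeated index).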
2.1 Preliminary simplification rules

The free-index operator of a vector, matrix or tensor converts a conventional vector expression into the Einstein summation form. It is indicated by parentheses with subscripts on the right. Vectors require one subscript, as in

$(b)_i = b_i$

which is verbosely read as "the i-th component of vector b (in the default Cartesian coordinate system) is the number b sub i." Two subscripts are needed for matrices:

$(M)_{ij} = M_{ij}$

can be read as "the ij-th component of matrix M in the default coordinate system is the number M sub i j." To add two vectors, you add the components:

$(a + b)_i = a_i + b_i$

This can be read as "the i-th component of the sum of vector a and vector b is the number a sub i plus b sub i." To perform matrix-vector multiplications, the second subscript of the matrix must be bound to the index of the vector. Thus, to multiply matrix M by vector b, new bound subscripts are created, as in:

$(M\,b)_i = M_{ij}\,b_j$

which converts to

$(M\,b)_1 = M_{11} b_1 + M_{12} b_2 + M_{13} b_3$
$(M\,b)_2 = M_{21} b_1 + M_{22} b_2 + M_{23} b_3$
$(M\,b)_3 = M_{31} b_1 + M_{32} b_2 + M_{33} b_3$

To perform matrix-matrix multiplications, the second subscript of the first matrix must be bound to the first subscript of the second matrix. Thus, to multiply matrix A by matrix B, you create indices like:

$(A\,B)_{ij} = A_{ik}\,B_{kj}$

which converts to:

$(A\,B)_{11} = A_{11} B_{11} + A_{12} B_{21} + A_{13} B_{31}$
$(A\,B)_{21} = A_{21} B_{11} + A_{22} B_{21} + A_{23} B_{31}$
$(A\,B)_{31} = A_{31} B_{11} + A_{32} B_{21} + A_{33} B_{31}$
$(A\,B)_{12} = A_{11} B_{12} + A_{12} B_{22} + A_{13} B_{32}$
$(A\,B)_{22} = A_{21} B_{12} + A_{22} B_{22} + A_{23} B_{32}$
$(A\,B)_{32} = A_{31} B_{12} + A_{32} B_{22} + A_{33} B_{32}$
$(A\,B)_{13} = A_{11} B_{13} + A_{12} B_{23} + A_{13} B_{33}$
$(A\,B)_{23} = A_{21} B_{13} + A_{22} B_{23} + A_{23} B_{33}$
$(A\,B)_{33} = A_{31} B_{13} + A_{32} B_{23} + A_{33} B_{33}$

In the ESN C preprocessor, the default index ranges are different, following the C convention of starting at zero:

#{ c;i = a;i + b;i }

(assuming the arrays have been declared over the same range). In C, this expands to:

c[0] = a[0] + b[0];
c[1] = a[1] + b[1];
c[2] = a[2] + b[2];

2.1.1 Rules involving $\epsilon_{ijk}$ and $\delta_{ij}$

The delta rule: when a Kronecker delta subscript is bound in a term, the expression simplifies by (1) eliminating the Kronecker delta symbol, and (2) replacing the bound subscript in the rest of the term by the other subscript of the Kronecker delta. For instance,

$v_{ik} = \delta_{ij}\,M_{jk}$ becomes $v_{ik} = M_{ik}$

and

$v_k = \delta_{ij}\,a_i\,M_{jk}$ becomes $v_k = a_j\,M_{jk}$, or equivalently, $a_i\,M_{ik}$.

Note that in standard notation, $v_k = (M^T a)_k$.

Rules for the order-N permutation symbol: interchanges of subscripts flip the sign of the permutation symbol:

$\epsilon_{i_1 i_2 \ldots i_N} = -\epsilon_{i_2 i_1 \ldots i_N}$

Repeated indices eliminate the permutation symbol (since lists with repeated elements are not permutations):
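The matrix-matrix expansion above is exactly what a triple loop produces: the free indices i and j become the outer loops, and the bound index k becomes the inner summation. A minimal C sketch (the function name is mine):

```c
/* (A B)_ij = A_ik B_kj : free indices i, j; bound index k. */
void esn_mat_mul(double ab[3][3], const double A[3][3], const double B[3][3])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            ab[i][j] = 0.0;
            for (int k = 0; k < 3; k++)   /* bound index k: implicit sum */
                ab[i][j] += A[i][k] * B[k][j];
        }
}
```

Because the subscripts, not the factor order, carry the binding information, writing `B[k][j] * A[i][k]` in the inner loop would compute the same result: the factors commute within a term.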
$\epsilon_{i i j k \ldots \ell} = 0$ (note the repeat of index i).

This is related to the behavior of determinants, where interchanges of columns change the sign, and repeated columns send the determinant to zero.

The order-three permutation symbol subscript rule: for the special case of an order-three permutation symbol, the following identity holds.

$\epsilon_{ijk} = \epsilon_{jki}$

The order-3 epsilon-delta rule: the order-3 $\epsilon$-$\delta$ rule allows the following subscript simplification (when the first subscript of two permutation symbols matches):

$\epsilon_{ijk}\,\epsilon_{ipq} = \delta_{jp}\,\delta_{kq} - \delta_{jq}\,\delta_{kp}$

In other words, the combination of two permutation symbols with the same first subscript is equal to an expression involving Kronecker deltas and the four free indices j, p, k, and q. In practice, the subscripts in the permutation symbols are permuted via the above relations in order to apply the $\epsilon$-$\delta$ rule. A derivation verifying the identity is provided in Appendix A.

3 Transformation Rules for Cartesian Tensors

We express vectors, matrices, and other tensors in different Cartesian coordinate systems, without changing which tensor we are representing. The numerical representation (of the same tensor) will generally be different in the different coordinate systems. We now derive the transformation rules for Cartesian tensors in the Einstein summation notation, to change the representation from one Cartesian coordinate system to another. Some people in fact use these transformation rules as the definition of a Cartesian tensor: any mathematical object whose representation transforms like the Cartesian tensors do is a Cartesian tensor.

Consider a three dimensional space, with right-handed[2] orthonormal[3] basis vectors ${}_1e$, ${}_2e$, and ${}_3e$. In other words, each basis vector has unit length, is perpendicular to the other basis vectors, and the 3D vectors are named subject to the right hand rule.

[2] The author strongly recommends avoiding the computer graphics convention of left handed coordinates, and recommends performing all physically based and geometric calculations in right handed coordinates. That way, you can use the last 300 years of mathematics texts as references.

[3] Mutually perpendicular unit vectors.
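Since both sides of the $\epsilon$-$\delta$ rule range over only a handful of index values, the identity can be checked exhaustively by brute force. A small self-contained C sketch (helper names are mine; `eps3` re-states the closed-form permutation symbol so the block stands alone):

```c
/* Exhaustive check of the order-3 epsilon-delta rule:
   eps_ijk eps_ipq = d_jp d_kq - d_jq d_kp, with i bound (summed 1..3). */
static int eps3(int i, int j, int k) { return (i - j) * (j - k) * (k - i) / 2; }
static int kd(int i, int j) { return i == j; }

int eps_delta_rule_holds(void)
{
    for (int j = 1; j <= 3; j++)
    for (int k = 1; k <= 3; k++)
    for (int p = 1; p <= 3; p++)
    for (int q = 1; q <= 3; q++) {
        int lhs = 0;
        for (int i = 1; i <= 3; i++)      /* bound index i */
            lhs += eps3(i, j, k) * eps3(i, p, q);
        if (lhs != kd(j, p) * kd(k, q) - kd(j, q) * kd(k, p))
            return 0;   /* identity failed for this index choice */
    }
    return 1;           /* identity holds for all 81 index choices */
}
```

This kind of finite enumeration is a handy sanity check when deriving new subscript identities by hand.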
Right handed unit basis vectors satisfy:

${}_1e \times {}_2e = {}_3e$, $\quad {}_2e \times {}_3e = {}_1e$, and ${}_3e \times {}_1e = {}_2e$.

In addition, we note that

${}_ie \cdot {}_je = \delta_{ij}$

This result occurs because the dot product of perpendicular vectors is zero, and the dot product of identical unit vectors is 1. We also consider another set of right-handed orthonormal basis vectors for the same three dimensional space, indicated with "hats": vectors ${}_1\hat{e}$, ${}_2\hat{e}$, and ${}_3\hat{e}$, which have the same properties: ${}_i\hat{e} \cdot {}_j\hat{e} = \delta_{ij}$, etc.

3.1 Transformations of vectors

Consider a vector a, and express it with respect to both bases, with the (vertical column) array of numbers $a_1$, $a_2$, $a_3$ in one coordinate system, and with the array of different numbers $\hat{a}_1$, $\hat{a}_2$, $\hat{a}_3$ in the other:

$a = a_1\,{}_1e + a_2\,{}_2e + a_3\,{}_3e = a_i\,{}_ie$

and also for the same vector,

$a = \hat{a}_1\,{}_1\hat{e} + \hat{a}_2\,{}_2\hat{e} + \hat{a}_3\,{}_3\hat{e} = \hat{a}_i\,{}_i\hat{e}$

Since the two expressions represent the same vector, they are equal:

$a_i\,{}_ie = \hat{a}_i\,{}_i\hat{e}$ (sum over i)

We derive the relation between $\hat{a}_i$ and $a_i$ by dotting both sides of the above equation with basis vector ${}_ke$. Thus, by commuting and re-associating, and using the fact that the unit basis vectors are mutually perpendicular, we obtain:

$(a_i\,{}_ie) \cdot {}_ke = (\hat{a}_i\,{}_i\hat{e}) \cdot {}_ke$, or
$a_i\,({}_ie \cdot {}_ke) = \hat{a}_i\,({}_i\hat{e} \cdot {}_ke)$, or
$a_i\,\delta_{ik} = \hat{a}_i\,({}_i\hat{e} \cdot {}_ke)$, or
$a_k = ({}_ke \cdot {}_i\hat{e})\,\hat{a}_i$

The above expression, in conventional notation, becomes:

$a_1 = \sum_{i=1}^{N} ({}_1e \cdot {}_i\hat{e})\,\hat{a}_i$
$a_2 = \sum_{i=1}^{N} ({}_2e \cdot {}_i\hat{e})\,\hat{a}_i$
$a_3 = \sum_{i=1}^{N} ({}_3e \cdot {}_i\hat{e})\,\hat{a}_i$

Thus, a matrix T re-expresses the numerical representation of a vector relative to a new basis via

$a_i = T_{ij}\,\hat{a}_j$

where

$T_{ij} = ({}_ie \cdot {}_j\hat{e})$

Note that the transpose of T is its inverse, and det T = 1, so T is a rotation matrix.

3.2 Transformations of matrices

The nine basis "vectors" of (3 dimensional) matrices (2nd order tensors) are given by the nine outer products of the previous basis vectors, ${}_ke\,{}_\ell e$. This uses "dyadic" notation for the outer product of the two basis vectors ${}_ke$ and ${}_\ell e$. The outer product of two vectors is a matrix, whose ij-th component is given by:

$(a\,b)_{ij} = a_i\,b_j$

Thus, the nine quantities which form the nine dimensional basis "vectors" of a three-by-three matrix are the outer products of the original basis vectors:

$M = M_{ij}\,{}_ie\,{}_je$

By dotting twice, by ${}_pe$ and ${}_qe$, a similar derivation shows us that the transformation rule for a matrix is:

$M_{ij} = T_{ip}\,T_{jq}\,\hat{M}_{pq}$

For instance, with the standard bases, where

${}_1e = \begin{pmatrix}1\\0\\0\end{pmatrix}, \quad {}_2e = \begin{pmatrix}0\\1\\0\end{pmatrix},$ etc.,

note that

${}_1e\,{}_1e = \begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix}, \quad {}_1e\,{}_2e = \begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix},$ etc.

3.3 Transformations of N-th order Tensors

You would not be surprised, then, to imagine basis vectors of 3rd order tensors as being given by ${}_ie\,{}_je\,{}_ke$, with a transformation rule (from hatted coordinates to unhatted)

$A_{ijk} = T_{ip}\,T_{jq}\,T_{kr}\,\hat{A}_{pqr}$

Also, not surprisingly, given an N-th order tensor quantity X, it will have analogous basis vectors and will transform with N copies of the T matrix, via:

$X_{i_1 i_2 \ldots i_N} = T_{i_1 j_1} \cdots T_{i_N j_N}\,\hat{X}_{j_1 \ldots j_N}$

4 Axis-Angle Representations of 3D Rotation

One very useful application of ESN is in the representation and manipulation of rotations. In three dimensions, rotation can take place only around one vector axis.

4.1 The Projection Operator

The first operator needed to derive the axis-angle matrix formulation is the projection operator. To project out vector b from vector a, let

$a \setminus b = a - \dfrac{(a \cdot b)}{(b \cdot b)}\,b$

such that the result is perpendicular to b. The projection operation "$a \setminus b$" can be read as vector a "without" vector b.

4.2 Rotation is a Linear Operator

We will be exploiting linearity properties for the derivation:

$\mathrm{Rot}(a + b) = \mathrm{Rot}(a) + \mathrm{Rot}(b)$

and

$\mathrm{Rot}(\alpha\,a) = \alpha\,\mathrm{Rot}(a).$

See Figure 5.

Figure 5: Visual demonstration that rotation is a linear operator. Just rotate the document and observe that the relationship holds: Rot(A+B) = Rot(A) + Rot(B), no matter which rotation operates on the vectors.

Figure 6: Two-D rotation of basis vectors ${}_1e$ and ${}_2e$ by angle $\theta$. $\mathrm{Rot}({}_1e) = \cos\theta\,{}_1e + \sin\theta\,{}_2e$, while $\mathrm{Rot}({}_2e) = -\sin\theta\,{}_1e + \cos\theta\,{}_2e$.

4.3 Two-D Rotation

Two dimensional rotation is easily derived using linearity. We express the vector in terms of its basis vector components, and then apply the rotation operator. Since

$a = a_1\,{}_1e + a_2\,{}_2e,$

therefore

$\mathrm{Rot}(a) = \mathrm{Rot}(a_1\,{}_1e + a_2\,{}_2e).$

Using linearity,

$\mathrm{Rot}(a) = a_1\,\mathrm{Rot}({}_1e) + a_2\,\mathrm{Rot}({}_2e)$

From Figure 6 we can see that

$\mathrm{Rot}({}_1e) = \cos\theta\,{}_1e + \sin\theta\,{}_2e$

and

$\mathrm{Rot}({}_2e) = -\sin\theta\,{}_1e + \cos\theta\,{}_2e.$

Thus,

$\mathrm{Rot}(a) = (\cos\theta\,a_1 - \sin\theta\,a_2)\,{}_1e + (\sin\theta\,a_1 + \cos\theta\,a_2)\,{}_2e$

Figure 7: The rotation is around unit vector axis r, by right handed angle $\theta$. Note that the vectors $r$, $a \setminus r$ and $r \times a$ form an orthogonal triple.

4.4 An Orthogonal Triple

Given a unit vector r around which the rotation will take place (the axis), we can make three orthogonal basis vectors. Let the third basis vector ${}_3e = r$. Let the first basis vector ${}_1e$ be the unit vector in the direction of $a \setminus r$. Note that by the definition of projection, ${}_3e$ is perpendicular to ${}_1e$. Finally, let ${}_2e = {}_3e \times {}_1e$. It is in the direction of $r \times (a \setminus r)$, which is in the same direction as $r \times a$. Note that

${}_1e \times {}_2e = {}_3e$, $\quad {}_2e \times {}_3e = {}_1e$, and ${}_3e \times {}_1e = {}_2e$.

We have a right-handed system of basis vectors.

4.5 Deriving the axis-angle formulation

First, note that vectors parallel to r remain unchanged by rotation around it. We're now ready to derive $\mathrm{Rot}(a)$ around r by $\theta$. By the definition of projection,

$a = a \setminus r + (a \cdot r)\,r$

Thus,

$\mathrm{Rot}(a) = \mathrm{Rot}(a \setminus r + (a \cdot r)\,r)$
$= \mathrm{Rot}(a \setminus r) + \mathrm{Rot}((a \cdot r)\,r)$
$= \mathrm{Rot}(|a \setminus r|\,{}_1e) + (a \cdot r)\,r$
$= |a \setminus r|\,\mathrm{Rot}({}_1e) + (a \cdot r)\,r$
$= |a \setminus r|\,(\cos\theta\,{}_1e + \sin\theta\,{}_2e) + (a \cdot r)\,r$
$= \cos\theta\,(a \setminus r) + \sin\theta\,(r \times a) + (a \cdot r)\,r$

4.6 Deriving the components of the rotation matrix

Since rotation is a linear operator, we represent it with a matrix, R. Using ESN, we can easily factor out the components of the matrix: from the previous equation, we know the i-th component of both sides of the equation is given by:

$(\mathrm{Rot}(a))_i = (R\,a)_i = (\cos\theta\,(a \setminus r) + \sin\theta\,(r \times a) + (a \cdot r)\,r)_i$

Thus,

$R_{ij}\,a_j = \cos\theta\,(a \setminus r)_i + \sin\theta\,(r \times a)_i + (a \cdot r)\,r_i$
$= \cos\theta\,(\delta_{ij} - r_i r_j)\,a_j + \sin\theta\,\epsilon_{ikj}\,r_k\,a_j + a_j\,r_j\,r_i$

Note that each term has $a_j$ in it. We put everything on one side of the equation, and factor out the $a_j$ on the right:

$\left(-R_{ij} + \cos\theta\,(\delta_{ij} - r_i r_j) + \sin\theta\,\epsilon_{ikj}\,r_k + r_j r_i\right) a_j = 0$

Unfortunately, we can't divide out the $a_j$ term, because j is involved in a sum from 1 to 3. However, since the above equation is true for all values of $a_j$, the ESN factor on the left must be zero. (For instance, we could let $a_j = \delta_{1j}$, $\delta_{2j}$, and $\delta_{3j}$ in sequence.) Thus, putting $R_{ij}$ on the other side of the equation, the ij-th components of the rotation matrix are given by:

$R_{ij} = \cos\theta\,(\delta_{ij} - r_i r_j) + \sin\theta\,\epsilon_{ikj}\,r_k + r_j r_i$

You can reverse the sign of the permutation symbol to get:

$R_{ij} = \cos\theta\,(\delta_{ij} - r_i r_j) - \sin\theta\,\epsilon_{ijk}\,r_k + r_j r_i$

5 Summary of Manipulation Identities

In this section, a series of algebraic identities are listed with potential applications in multidimensional mathematical modeling.

List of Einstein Summation Identities:

(1) The free-index operator of a vector, matrix or tensor converts a conventional vector expression into the Einstein form. It is indicated by parentheses with subscripts, requiring one subscript for vectors, as in $(b)_i = b_i$, and two subscripts for matrices, as in $(M)_{ij} = M_{ij}$. You can think of $(\,)_i$ as an operator which dots the argument by the i-th basis vector. Sometimes we will use the free index operator to select column vectors of a matrix, such as the following:

$(E)_i = {}_ie$

In this case ${}_1e$ is the first column vector of the matrix E, ${}_2e$ is the second column, etc.
(2) The dot product of two vectors a and b is expressed via:

$a \cdot b = a_i\,b_i$

(2a) The outer product of two vectors a and b is expressed via:

$(a\,b)_{ij} = a_i\,b_j$

(3) In 3-D the vector cross product of two vectors a and b is expressed via:

$(a \times b)_i = \epsilon_{ijk}\,a_j\,b_k$

(or, with boxes on the matching bound-index pairs j and k), which expands to

$(a \times b)_1 = a_2 b_3 - a_3 b_2$
$(a \times b)_2 = a_3 b_1 - a_1 b_3$
$(a \times b)_3 = a_1 b_2 - a_2 b_1$

(in which the free index of the output cross product vector becomes the first subscript of the permutation symbol, while two new bound indices are created).

(4) $\delta_{ij}\,\delta_{jk} = \delta_{ik} = \delta_{ki}$.

[4] Note that ${}_ie \ne (e)_i$. The right hand side is an i-th scalar, while the left side is an i-th vector.
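Identity (3) can be coded exactly as written, with both j and k as bound indices; the triple loop is wasteful (most $\epsilon_{ijk}$ terms are zero) but mirrors the notation one-for-one. A self-contained C sketch with my own helper names:

```c
/* (a x b)_i = e_ijk a_j b_k : i free, j and k bound. */
static int eps3(int i, int j, int k) { return (i - j) * (j - k) * (k - i) / 2; }

void cross3(double c[3], const double a[3], const double b[3])
{
    for (int i = 0; i < 3; i++) {
        c[i] = 0.0;
        for (int j = 0; j < 3; j++)       /* bound index j */
            for (int k = 0; k < 3; k++)   /* bound index k */
                c[i] += eps3(i + 1, j + 1, k + 1) * a[j] * b[k];
    }
}
```

Only the six nonzero values of the permutation symbol contribute, reproducing the three expanded component equations of identity (3).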

(5) $\delta_{ii} = 3$ in three dimensional space; $\delta_{ii} = N$ in N dimensional spaces.

(6) $\delta_{ij}\,\delta_{ij} = \delta_{ii} = N$.

(7) $\epsilon_{ijk} = -\epsilon_{jik}$.

(8) $\epsilon_{ijk} = \epsilon_{jki}$.

(9) The $\epsilon$-$\delta$ rule allows the following subscript simplification:

$\epsilon_{ijk}\,\epsilon_{ipq} = \delta_{jp}\,\delta_{kq} - \delta_{jq}\,\delta_{kp}$

(10) If $S_{ij}$ is symmetric, i.e., if $S_{ij} = S_{ji}$, then $\epsilon_{qij}\,S_{ij} = 0$.

(11) If $A_{ij}$ is antisymmetric, i.e., if $A_{ij} = -A_{ji}$, then $A_{ij} - A_{ji} = 2A_{ij}$. In addition, since $M_{ij} - M_{ji}$ is always antisymmetric,

$\epsilon_{qij}\,(M_{ij} - M_{ji}) = 2\,\epsilon_{qij}\,M_{ij}$

(12) Partial derivatives are taken with respect to the default argument variables when the subscript index follows a comma. For example, given a scalar function F of vector argument x, the i-th component of the gradient of F is expressed as:

$(\nabla F)_i = F_{,i}$

Given a vector function F of vector argument x, the derivative of the i-th component of F(x) with respect to the j-th component of x is expressed as $F_{i,j}$. Argument evaluation takes place after the partial derivative evaluation: $F_{i,j}(x)$ means $F_{i,j}$ evaluated at x.

(13) $x_{i,j} = \delta_{ij}$, where x is the default spatial coordinate vector.

(14) Sometimes partial derivatives may also be taken with respect to reserved symbols, set aside in advance.

(15) $(\nabla^2 F)_i = F_{i,jj}$.

(16) The determinant of an NxN matrix M may be expressed via:

$\det M = \epsilon_{i_1 i_2 \ldots i_N}\,M_{1 i_1}\,M_{2 i_2} \cdots M_{N i_N}$

The order N cross product is produced by leaving out one of the column vectors in the above expression, to produce a vector perpendicular to N-1 other vectors. For three-by-three matrices,

$\det M = \epsilon_{ijk}\,M_{1i}\,M_{2j}\,M_{3k} = \epsilon_{ijk}\,M_{i1}\,M_{j2}\,M_{k3}$

(17) Another identity involving the determinant:

$\epsilon_{qnp}\,\det M = \epsilon_{ijk}\,M_{qi}\,M_{nj}\,M_{pk}$

(18) The first column of a matrix M is designated $M_{i1}$, while the second and third columns are indicated by $M_{i2}$ and $M_{i3}$. The three rows of a three dimensional matrix are indicated by $M_{1i}$, $M_{2i}$, and $M_{3i}$.

(19) The transpose operator is achieved by switching subscripts: $(M^T)_{ij} = M_{ji}$.

(20) A matrix times its inverse is the identity matrix:

$M_{ij}\,(M^{-1})_{jk} = \delta_{ik}$

(21) The SinAxis operator. The instantaneous rotation axis r and counter-clockwise angle of rotation $\theta$ of a three by three rotation matrix R are governed by the following relation (a minus sign is necessary for the left-handed version):

$(\mathrm{SinAxis}(R))_i = r_i \sin\theta = \tfrac{1}{2}\,\epsilon_{ijk}\,R_{kj}$

This identity seems easier to derive through the ESN form than through the matrix notation form of the identity. It expands to

$r_1 \sin\theta = \tfrac{1}{2}(R_{32} - R_{23})$
$r_2 \sin\theta = \tfrac{1}{2}(R_{13} - R_{31})$
$r_3 \sin\theta = \tfrac{1}{2}(R_{21} - R_{12})$

(22) The three by three right-handed rotation matrix R corresponding to the instantaneous unit rotation axis r and counter-clockwise angle of rotation $\theta$ is given by:

$R_{ij} = r_i r_j + \cos\theta\,(\delta_{ij} - r_i r_j) - \sin\theta\,\epsilon_{ijk}\,r_k$

Expanded, the above relation becomes:

$M_{11} = r_1 r_1 + \cos\theta\,(1 - r_1 r_1)$
$M_{21} = r_2 r_1 - \cos\theta\,r_2 r_1 + r_3 \sin\theta$
$M_{31} = r_3 r_1 - \cos\theta\,r_3 r_1 - r_2 \sin\theta$
$M_{12} = r_1 r_2 - \cos\theta\,r_1 r_2 - r_3 \sin\theta$
$M_{22} = r_2 r_2 + \cos\theta\,(1 - r_2 r_2)$
$M_{32} = r_3 r_2 - \cos\theta\,r_3 r_2 + r_1 \sin\theta$
$M_{13} = r_1 r_3 - \cos\theta\,r_1 r_3 + r_2 \sin\theta$
$M_{23} = r_2 r_3 - \cos\theta\,r_2 r_3 - r_1 \sin\theta$
$M_{33} = r_3 r_3 + \cos\theta\,(1 - r_3 r_3)$

(22a) The axis-axis representation of a rotation rotates unit vector axis a to unit vector axis b by setting

$r = \dfrac{a \times b}{|a \times b|}$

and letting

$\theta = \cos^{-1}(a_i\,b_i)$

(23) The inverse of a rotation matrix R is its transpose: $(R^{-1})_{ij} = R_{ji}$, i.e., $R^{-1} = R^T$.

(24) When R is a rotation matrix, $R_{ij}\,R_{kj} = \delta_{ik}$. Likewise, $R_{ji}\,R_{jk} = \delta_{ik}$.

(25a) The matrix inverse[5] of a general N by N matrix M:

$(M^{-1})_{ji} = \dfrac{\epsilon_{i i_2 i_3 \ldots i_N}\,\epsilon_{j j_2 j_3 \ldots j_N}\,M_{i_2 j_2} \cdots M_{i_N j_N}}{(N-1)!\,\det M}$

Please note that the numerator of the above expression involves an enormous number of individual components when written out in a conventional form. The above expression exhibits an incredible economy.
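Identity (21) gives a direct way to recover the scaled rotation axis from a rotation matrix. A minimal C sketch (helper names mine; `eps3` re-stated for self-containment):

```c
/* (SinAxis(R))_i = r_i sin(t) = (1/2) e_ijk R_kj :
   extracts the axis, scaled by sin of the angle, from a rotation matrix. */
static int eps3(int i, int j, int k) { return (i - j) * (j - k) * (k - i) / 2; }

void sin_axis(double s[3], const double R[3][3])
{
    for (int i = 0; i < 3; i++) {
        s[i] = 0.0;
        for (int j = 0; j < 3; j++)       /* bound index j */
            for (int k = 0; k < 3; k++)   /* bound index k */
                s[i] += 0.5 * eps3(i + 1, j + 1, k + 1) * R[k][j];
    }
}
```

Only the antisymmetric part of R contributes, which is why the expansion involves differences such as $R_{32} - R_{23}$.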
(25b) The three by three matrix inverse is given by:

$(M^{-1})_{qi} = \dfrac{\epsilon_{ijk}\,\epsilon_{qnp}\,M_{jn}\,M_{kp}}{2\,\det M}$

[5] The author hesitates to expand this daunting expression out.

Note: Relation 22 yields a right-handed rotation around the axis r by angle $\theta$. It can be verified by multiplying by vector $a_i$ and comparing the result to the axis-angle formula in section 4.5.

The algebraically simplified terms of the above

expression are given by:

$(M^{-1})_{11} = (M_{33} M_{22} - M_{23} M_{32}) / \det M$
$(M^{-1})_{21} = (M_{23} M_{31} - M_{33} M_{21}) / \det M$
$(M^{-1})_{31} = (M_{32} M_{21} - M_{22} M_{31}) / \det M$
$(M^{-1})_{12} = (M_{13} M_{32} - M_{33} M_{12}) / \det M$
$(M^{-1})_{22} = (M_{33} M_{11} - M_{13} M_{31}) / \det M$
$(M^{-1})_{32} = (M_{12} M_{31} - M_{32} M_{11}) / \det M$
$(M^{-1})_{13} = (M_{23} M_{12} - M_{13} M_{22}) / \det M$
$(M^{-1})_{23} = (M_{13} M_{21} - M_{23} M_{11}) / \det M$
$(M^{-1})_{33} = (M_{22} M_{11} - M_{12} M_{21}) / \det M$

(The factor of 2 canceled out.) To verify the above relationship, we can perform the following computation:

$\delta_{qr} = (M^{-1} M)_{qr} = (M^{-1})_{qi}\,M_{ir}$

Thus, the above terms become:

$M_{ir}\,\dfrac{\epsilon_{ijk}\,\epsilon_{qnp}\,M_{jn}\,M_{kp}}{2\,\det M} = \dfrac{\epsilon_{qnp}\,(\epsilon_{ijk}\,M_{ir}\,M_{jn}\,M_{kp})}{2\,\det M}$

From identity 17, the determinant cancels out and the above simplifies to

$\dfrac{\epsilon_{qnp}\,\epsilon_{rnp}}{2}$

which, by the epsilon-delta rule, becomes

$\dfrac{\epsilon_{pqn}\,\epsilon_{prn}}{2} = \dfrac{\delta_{qr}\,\delta_{nn} - \delta_{qn}\,\delta_{nr}}{2} = \dfrac{3\,\delta_{qr} - \delta_{qr}}{2} = \delta_{qr}$

which verifies the relation.

(26) From the preceding result, we can see that in 3 dimensions:

$\epsilon_{ijk}\,M_{jr}\,M_{ks} = \det M\,(M^{-1})_{qi}\,\epsilon_{qrs}$

(27) The Multidimensional Chain Rule includes the effects of the summations automatically. Additional subscript indices are created as necessary. For instance,

$(F(G(x(t))))_{i,t} = F_{i,j}(G(x(t)))\,G_{j,k}(x(t))\,x_{k,t}$

where $F_{i,j}$ is evaluated at $G(x(t))$ and $G_{j,k}$ is evaluated at $x(t)$. This is equivalent to a double sum over the bound indices:

$\sum_{j=1}^{3} \sum_{k=1}^{3} F_{i,j}\,G_{j,k}\,x_{k,t}$

(28) Multidimensional Taylor Series:

$F_i(x) = F_i(x_0) + F_{i,j}(x_0)(x_j - x_{0j}) + \tfrac{1}{2}\,F_{i,jk}(x_0)(x_j - x_{0j})(x_k - x_{0k}) + \tfrac{1}{3!}\,F_{i,jkp}(x_0)(x_j - x_{0j})(x_k - x_{0k})(x_p - x_{0p}) + \cdots$

(29) Multidimensional Newton's Method can be derived from the linear terms of the multidimensional Taylor series, in which we are solving $F(x) = 0$, and letting $J_{ij} = F_{i,j}$:

$x_j = x_{0j} - (J^{-1})_{ji}\,F_i(x_0).$

(30) Orthogonal Decomposition of a vector v in terms of orthonormal (orthogonal unit) vectors ${}_1e$, ${}_2e$, and ${}_3e$:

$v = (v \cdot {}_je)\,{}_je$

(31) Change of Basis: we want to express $v_j\,{}_je$ in terms of new components times orthonormal basis vectors ${}_j\hat{e}$.
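Identity (25b) can be implemented literally, with every index of the two permutation symbols looped over. This brute-force C sketch (function names mine; `eps3` re-stated for self-containment) also computes the determinant via identity (16):

```c
/* (M^-1)_qi = e_ijk e_qnp M_jn M_kp / (2 det M), with
   det M = e_ijk M_1i M_2j M_3k  (identities 25b and 16). */
static int eps3(int i, int j, int k) { return (i - j) * (j - k) * (k - i) / 2; }

int inv3(double inv[3][3], const double M[3][3])
{
    double det = 0.0;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                det += eps3(i + 1, j + 1, k + 1) * M[0][i] * M[1][j] * M[2][k];
    if (det == 0.0) return 0;                 /* singular matrix */
    for (int q = 0; q < 3; q++)
        for (int i = 0; i < 3; i++) {
            double s = 0.0;
            for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
            for (int n = 0; n < 3; n++)
            for (int p = 0; p < 3; p++)       /* four bound indices */
                s += eps3(i + 1, j + 1, k + 1) * eps3(q + 1, n + 1, p + 1)
                   * M[j][n] * M[k][p];
            inv[q][i] = s / (2.0 * det);
        }
    return 1;
}
```

The quadruple inner loop is the price of a one-to-one translation; an optimized version would enumerate only the nonzero permutations, but the literal form makes the correspondence with the subscript identity obvious.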
A matrix T re-expresses the numerical representation of a vector relative to the new basis via where v j = T ij ^v j
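Identities (25b) and (30) above lend themselves to a direct numerical check. The sketch below is ours, written in Python for brevity (the paper's own tooling is a C preprocessor) with 0-based indices; the nonlinear system F and its starting point are arbitrary examples with root (1, 1, 1), and the Jacobian is inverted with the epsilon-epsilon formula (25b) itself.

```python
def eps(i, j, k):
    # Permutation symbol for 0-based indices in {0, 1, 2}.
    return ((j - i) * (k - i) * (k - j)) // 2

R3 = range(3)

def det3(M):
    # det M = eps_{ijk} M_{0i} M_{1j} M_{2k}
    return sum(eps(i, j, k) * M[0][i] * M[1][j] * M[2][k]
               for i in R3 for j in R3 for k in R3)

def inv3(M):
    # Identity (25b): (M^-1)_{qi} = eps_{ijk} eps_{qnp} M_{jn} M_{kp} / (2 det M)
    d = det3(M)
    return [[sum(eps(i, j, k) * eps(q, n, p) * M[j][n] * M[k][p]
                 for j in R3 for k in R3 for n in R3 for p in R3) / (2.0 * d)
             for i in R3] for q in R3]

def F(x):
    # Example system with root (1, 1, 1); chosen for this sketch only.
    return [x[0]**2 - x[1], x[1]**2 - x[2], x[0] + x[1] + x[2] - 3.0]

def J(x):
    # Analytic Jacobian J_{ij} = dF_i/dx_j of the system above.
    return [[2.0 * x[0], -1.0, 0.0],
            [0.0, 2.0 * x[1], -1.0],
            [1.0, 1.0, 1.0]]

x = [1.2, 1.1, 0.9]
for _ in range(20):
    Ji, Fx = inv3(J(x)), F(x)
    # Identity (30): x_j <- x_j - (J^-1)_{ji} F_i(x)
    x = [x[j] - sum(Ji[j][i] * Fx[i] for i in R3) for j in R3]
```

After a few iterations x has converged to (1, 1, 1) to machine precision.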

where

    T_{ij} = ({}^i e \cdot {}^j \hat e)        (32)

The delta rule:

    \delta_{ij} \, [esn-expression]_i = [esn-expression]_j        (33)

The generalized Stokes theorem:

    \int_{\partial R} n_j \, [esn-expression]_i \, dA = \int_R ([esn-expression]_i)_{,j} \, dV        (34)

Likewise,

    \int_{\partial R} n_i \, [esn-expression]_i \, dA = \int_R ([esn-expression]_i)_{,i} \, dV

Multiplying by matrix inverse: if

    M_{ij} x_j = [esn-expression]_i

then

    (M^{-1})_{pi} M_{ij} x_j = (M^{-1})_{pi} \, [esn-expression]_i

or, simplifying and renaming p back to j,

    x_j = (M^{-1})_{ji} \, [esn-expression]_i

Note the transpose relationship of matrices M and M^{-1} in the first and last equations.

(35) Conversion of Rotation matrix R_{ij} to the axis-angle formulation: We first note that any collection of continuously varying axes r and angles \theta produces a continuous rotation matrix function, via identity 22. However, it is not true that this relation is completely invertible, due to an ambiguity of sign: the same matrix is produced by different axis-angle pairs. For instance, the matrix produced by r and \theta is the same matrix produced by -r and -\theta. In fact, if \theta = 0 then there is no net rotation, and any unit vector r produces the identity matrix (null rotation).

Since

    |\sin \theta| = |\epsilon_{ijk} R_{ij}| / 2    and    \cos \theta = (R_{ii} - 1) / 2,

the angle is given by the two-argument arctangent

    \theta = \mathrm{Atan}(|\sin \theta|, \cos \theta)

If \theta = 0, any axis suffices, and we are finished. Otherwise, if \theta \ne 0, we need to know the value of r. In that case, if \sin \theta \ne 0, then

    r_k = -\epsilon_{ijk} R_{ij} / (2 \sin \theta)

Otherwise, if \sin \theta = 0, then since the original rotation matrix is

    R_{ij} = 2 r_i r_j - \delta_{ij}

then, letting

    M_{ij} = (R_{ij} + \delta_{ij}) / 2 = r_i r_j

we can solve for r_i by taking the ratio of nondiagonal terms and square roots of nonzero diagonal terms, via

    r_i = M_{i[j]} / \sqrt{M_{[j][j]}}

Note that we are using [j] to be an independent index as defined near the bottom of page 3. We choose the value of j such that M_{[j][j]} is its largest value.

6 Extensions to the tensor notation for Multi-component equations

In this section, we introduce the no-sum notation and special symbols for quaternions.
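The matrix-to-axis-angle recipe of (35) can be exercised numerically. The sketch below is our own (Python, 0-based indices; the tolerance constants are arbitrary), covering both the generic branch and the \sin \theta = 0, \theta \ne 0 branch.

```python
import math

def eps(i, j, k):
    # Permutation symbol for 0-based indices in {0, 1, 2}.
    return ((j - i) * (k - i) * (k - j)) // 2

def axis_angle(R):
    R3 = range(3)
    # SinAxis: s_k = -(1/2) eps_{ijk} R_{ij} = sin(th) r_k
    s = [-0.5 * sum(eps(i, j, k) * R[i][j] for i in R3 for j in R3) for k in R3]
    sin_th = math.sqrt(sum(c * c for c in s))            # |sin th|
    cos_th = 0.5 * (R[0][0] + R[1][1] + R[2][2] - 1.0)   # (R_ii - 1)/2
    th = math.atan2(sin_th, cos_th)
    if sin_th > 1e-12:
        return [c / sin_th for c in s], th
    if th < 1e-12:                    # null rotation: any axis will do
        return [1.0, 0.0, 0.0], 0.0
    # th = pi: M_ij = (R_ij + delta_ij)/2 = r_i r_j
    M = [[0.5 * (R[i][j] + (1.0 if i == j else 0.0)) for j in R3] for i in R3]
    j = max(R3, key=lambda d: M[d][d])   # pick the largest diagonal M_{[j][j]}
    return [M[i][j] / math.sqrt(M[j][j]) for i in R3], th
```

Rotations by 90 degrees and by 180 degrees about z recover axis (0, 0, 1) with angles pi/2 and pi respectively.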

6.1 The "no-sum" operator

The classical summation convention is incomplete as it usually stands, in the sense that not all multi-component equations (i.e., those involving summation signs and subscripted variables) can be represented. In the scalar convention, the default is not to sum indices, so that summation signs must be written explicitly if they are desired. Using the classical summation convention, summation is the default, and there is no convenient way not to sum over a repeated index (other than perhaps an awkward comment in the margin, directing the reader that the equation is written with no implicit summation).

Thus, an extension of the notation is proposed in which an explicit "no-sum" or "free-index" operator prevents the summation over a particular index within a term. The no-sum operator is represented via a slashed summation sign \not\Sigma_i or (which is easier to write) a subscripted prefix parenthesis (_i. This modification extends the types of formulations we can represent, and augments algebraic manipulative skills.

Expressions involving the no-sum operator are found in calculations which take place in a particular coordinate system. Without the extension, all of the terms are tensors, and transform correctly from one coordinate system to another. For instance, a diagonal matrix

    M = \begin{pmatrix} a_1 & 0 & 0 \\ 0 & a_2 & 0 \\ 0 & 0 & a_3 \end{pmatrix}

is not diagonal in all coordinate systems. Nonetheless, it can be convenient to do the calculation in the coordinate system in which M is diagonal:

    M_{ij} = \not\Sigma_i \, (a_i \delta_{ij}) = (_i \, a_i \delta_{ij}

Some of the identities for this augmented notation are found later in this document.

The ease of manipulation and the compactness of representation are the main advantages of the summation convention. Generally, an expression is converted from conventional matrix and vector notation into the summation notation, simplified in the summation form, and then converted back into matrix and vector notation for interpretation.
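A small illustration of what the no-sum operator buys us: the same term a_i \delta_{ij} is evaluated below under the classical convention (repeated i is summed, leaving a vector) and under the no-sum reading (both indices stay free, leaving the diagonal matrix). The numbers are arbitrary; 0-based indices.

```python
a = [5.0, 7.0, 11.0]
delta = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

# Classical convention: i is repeated, so it is summed; only j remains free.
# The term collapses to the vector a_j.
summed = [sum(a[i] * delta[i][j] for i in range(3)) for j in range(3)]

# No-sum operator (_i: i stays free, so both i and j index the result.
# The term denotes the diagonal matrix M_ij = a_i delta_ij.
M = [[a[i] * delta[i][j] for j in range(3)] for i in range(3)]
```

Without the no-sum operator, the diagonal-matrix reading simply cannot be written in the convention.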
Sometimes, however, there is no convenient way to express the result in conventional matrix notation.

6.2 Quaternions

The other proposed extensions to the notation aid in manipulating quaternions. For convenience, a few new special symbols are defined to take quaternion inverses, quaternion products, products of quaternions and vectors, and conversions of quaternions to rotation matrices. A few identities involving these symbols are also presented in this section.

Properties of Quaternions

A quaternion is a four dimensional mathematical object which is a linear combination of four independent basis vectors: 1, i, j, and k, satisfying the following relations:

    i^2 = j^2 = k^2 = -1,    ijk = -1

By pre- and post-multiplying by any of i, j, or k, it is straightforward to show that

    ij = k     jk = i     ki = j
    ji = -k    kj = -i    ik = -j

Thus, a quaternion can be represented as the column of components

    q = (q_0, q_1, q_2, q_3)^T,    q = q_0 + q_1 i + q_2 j + q_3 k

and quaternion multiplication takes place by applying the above identities, to obtain

    p q = (p_0 + p_1 i + p_2 j + p_3 k)(q_0 + q_1 i + q_2 j + q_3 k)
        = p_0 (q_0 + q_1 i + q_2 j + q_3 k)
        + p_1 i (q_0 + q_1 i + q_2 j + q_3 k)
        + p_2 j (q_0 + q_1 i + q_2 j + q_3 k)
        + p_3 k (q_0 + q_1 i + q_2 j + q_3 k)
        = p_0 q_0 + p_0 q_1 i + p_0 q_2 j + p_0 q_3 k
        + p_1 i q_0 + p_1 q_1 (ii) + p_1 q_2 (ij) + p_1 q_3 (ik)
        + p_2 j q_0 + p_2 q_1 (ji) + p_2 q_2 (jj) + p_2 q_3 (jk)
        + p_3 k q_0 + p_3 q_1 (ki) + p_3 q_2 (kj) + p_3 q_3 (kk)
        = p_0 q_0 - p_1 q_1 - p_2 q_2 - p_3 q_3
        + i (p_0 q_1 + p_1 q_0 + p_2 q_3 - p_3 q_2)
        + j (p_2 q_0 + p_0 q_2 + p_3 q_1 - p_1 q_3)
        + k (p_3 q_0 + p_0 q_3 + p_1 q_2 - p_2 q_1)

The geometric interpretation of a quaternion: A quaternion can be represented as a 4-D composite vector, consisting of a scalar part s and a three dimensional vector portion v:

    q = (s, v)

A quaternion is intimately related to the axis-angle representation for a three dimensional rotation (see the figure). The vector portion v of a unit quaternion is the rotation axis, scaled by the sine of half of the rotation angle. The scalar portion s is the cosine of half the rotation angle:

    q = (\cos(\theta/2), \; \sin(\theta/2) \, r)

where r is a unit vector, and q \cdot q = 1. Thus, the conversion of a unit quaternion to and from a rotation matrix can be derived using the axis-angle formulas 22 and 35. Remember that the quaternion angle is half of the axis-angle angle.

Quaternion Product: A more compact form for quaternion multiplication is

    (s_1, v_1)(s_2, v_2) = (s_1 s_2 - v_1 \cdot v_2, \; s_1 v_2 + s_2 v_1 + v_1 \times v_2)

Note that (s_1, v_1)(s_2, v_2) \ne (s_2, v_2)(s_1, v_1), since v_1 \times v_2 \ne v_2 \times v_1.

With this notation, quaternion-vector, vector-quaternion, and vector-vector multiplication rules become clear: to represent a vector, we set the scalar part of the quaternion to zero, and use the above relation to perform the multiplications.

Quaternion Inverse: The inverse of a quaternion q is another quaternion q^{-1} such that q q^{-1} = 1. It is easily verified that

    q^{-1} = (s, -v) / (s^2 + v \cdot v)

Rotating a vector with quaternions is achieved by pre-multiplying the vector with the quaternion, and post-multiplying by the quaternion inverse.
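The component expansion, the compact inverse, and the rotate-by-sandwich rule can all be checked in a few lines. In the sketch below (our own; Python, with q stored as [q_0, q_1, q_2, q_3]) the example is a 90-degree rotation about z, which should carry the x axis to the y axis.

```python
import math

def qmul(p, q):
    # The component expansion of p q derived above.
    return [p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3],
            p[0]*q[1] + p[1]*q[0] + p[2]*q[3] - p[3]*q[2],
            p[2]*q[0] + p[0]*q[2] + p[3]*q[1] - p[1]*q[3],
            p[3]*q[0] + p[0]*q[3] + p[1]*q[2] - p[2]*q[1]]

def qinv(q):
    # q^-1 = (s, -v) / (s^2 + v.v)
    n = sum(c * c for c in q)
    return [q[0] / n, -q[1] / n, -q[2] / n, -q[3] / n]

i, j, k = [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]

# Unit quaternion for a rotation by th about the unit axis r:
# q = (cos(th/2), sin(th/2) r).
th, r = math.pi / 2, [0.0, 0.0, 1.0]
q = [math.cos(th / 2)] + [math.sin(th / 2) * c for c in r]

# Rotate v = (1, 0, 0): embed as a zero-scalar quaternion, then sandwich.
v = [0.0, 1.0, 0.0, 0.0]
rotated = qmul(qmul(q, v), qinv(q))
```

The basis relations i^2 = j^2 = k^2 = ijk = -1 fall out of qmul directly, and rotated comes back as the pure vector (0, 1, 0).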
We use this property to derive the conversion formula from quaternions to rotation matrices.

Quaternion Rotation:

    (\mathrm{Rot}(v))_i = (q v q^{-1})_i,    i = 1, 2, 3

With this identity, it is possible to verify the geometric interpretation of quaternions and their relationship to the axis-angle formulation for rotation. The most straightforward way to verify this relation is to expand the vector v into components parallel to r and those perpendicular to it, plug into the rotation formula, and compare to the equation in section 4.5; i.e., to evaluate

    \mathrm{Rot}(v) = q (v_\perp + v_\parallel) q^{-1}

where v_\parallel = (v \cdot r) r and v_\perp = v - (v \cdot r) r.

6.3 Extended Symbols for Quaternion Manipulations

The author has developed a few special symbols to help write and manipulate quaternion quantities more conveniently. We define all quaternion

components as going from 0 to 3, with the vector part still going from 1 to 3, and with zero for the scalar component. If we are not using the full range of a variable (going, say, from 1 to 3 when the original goes from 0 to 3), we need to explicitly denote that.

We hereby extend our Kronecker delta to allow zero in the subscripts, so

    \delta_{00} = 1,    \delta_{0i} = 0 \;\; (i \ne 0)

We create a new permutation symbol, which allows zero in the subscripts. It will be +1 for even permutations of (0 1 2 3), and -1 for odd permutations.

The quaternion inverter structure constant \lambda_{ij} allows us to compute quaternion inverses:

    (q^{-1})_i = \lambda_{ij} q_j / (q_K q_K)

where

    \lambda_{ij} = 1 if i = j = 0;  -1 if i = j \ne 0;  0 otherwise.

Note, using the full range for i and j, that

    \lambda_{ij} = 2 \delta_{i0} \delta_{j0} - \delta_{ij}

The quaternion product structure constant \gamma_{Kij} allows us to multiply two quaternions p and q via

    (p q)_K = \gamma_{Kij} p_i q_j

where the nonzero components are given by (i, j, k = 1 ... 3):

    \gamma_{000} = 1    \gamma_{ij0} = \delta_{ij}    \gamma_{i0k} = \delta_{ik}
    \gamma_{0jk} = -\delta_{jk}    \gamma_{ijk} = \epsilon_{ijk}

Note, using the full range for i, j, and k, that

    \gamma_{ijk} = \delta_{ij} \delta_{k0} + \delta_{ik} \delta_{j0} - \delta_{i0} \delta_{jk} + \epsilon_{0ijk}

The quaternion-vector product structure constant \gamma^v_{ki\ell} allows us to multiply a quaternion q and a vector v via

    (q v)_k = \gamma^v_{ki\ell} \, q_i v_\ell

where the nonzero \gamma^v components are given by (i, j, k = 1 ... 3):

    \gamma^v_{i0k} = \delta_{ik}    \gamma^v_{0jk} = -\delta_{jk}    \gamma^v_{ijk} = \epsilon_{ijk}

Note, using the full range for i, j, and k, that

    \gamma^v_{ijk} = \delta_{ik} \delta_{j0} - \delta_{i0} \delta_{jk} + \epsilon_{0ijk}

The vector-quaternion product structure constant \bar\gamma^v_{k\ell i} allows us to multiply a vector v and a quaternion q via

    (v q)_k = \bar\gamma^v_{k\ell i} \, v_\ell q_i

where the nonzero \bar\gamma^v components are given by (i, j, k = 1 ... 3):

    \bar\gamma^v_{ij0} = \delta_{ij}    \bar\gamma^v_{0jk} = -\delta_{jk}    \bar\gamma^v_{ijk} = \epsilon_{ijk}

Note, using the full range for i, j, and k, that

    \bar\gamma^v_{ijk} = \delta_{ij} \delta_{k0} - \delta_{i0} \delta_{jk} + \epsilon_{0ijk}

The vector-vector product is the conventional cross product.

The quaternion to rotation matrix structure constant \Gamma_{ijk\ell} allows us to create a rotation matrix R:^6

    R_{ij} = \Gamma_{ijk\ell} \, q_k q_\ell / (q_N q_N)

It is straightforward to express \Gamma in terms of the other structure constants:

(Footnote 6: Note that q_N q_N = q \cdot q in the following equation.)

    \Gamma_{ijk\ell} = \gamma_{ikp} \, \bar\gamma^v_{pjn} \, \lambda_{n\ell}

To derive this relation, we note that the i-th component of the rotation of vector a is given by (i, j, k = 1 ... 3):

    (\mathrm{Rot}(a))_i = (q a q^{-1})_i,    i = 1, 2, 3
                        = \gamma_{ikp} \, q_k \, (a q^{-1})_p
                        = \gamma_{ikp} \, q_k \, \bar\gamma^v_{pjn} \, a_j \, (q^{-1})_n
                        = \gamma_{ikp} \, q_k \, \bar\gamma^v_{pjn} \, a_j \, \lambda_{n\ell} \, q_\ell / (q \cdot q)
                        = (\gamma_{ikp} \, \bar\gamma^v_{pjn} \, \lambda_{n\ell}) \, q_k q_\ell \, a_j / (q \cdot q)

Since we can represent these rotations with a three dimensional rotation matrix R, and (\mathrm{Rot}(a))_i = R_{ij} a_j for all a_j, we can eliminate a_j from both sides of the equation, yielding

    R_{ij} = \gamma_{ikp} \, \bar\gamma^v_{pjn} \, \lambda_{n\ell} \, q_k q_\ell / (q \cdot q)

Thus,

    \Gamma_{ijk\ell} = \gamma_{ikp} \, \bar\gamma^v_{pjn} \, \lambda_{n\ell}

7 What is Angular Velocity in 3 and greater dimensions?

Using the axis/angle representation of rotation, matrices, and quaternions, we define, derive, interpret and demonstrate the compatibility between the two main classic equations relating the angular velocity vector \omega, the rotation itself, and the derivative of the rotation:

Matrix Eqn:

    M' = \Omega M,  or  M'_{ik} = \Omega_{ij} M_{jk},  where  \Omega_{ik} = \epsilon_{ijk} \omega_j

Quaternion Eqn:

    q' = (1/2) \, \omega \, q

Definition of angular velocity in three dimensions: Consider a time varying rotation Rot(t) (for instance, represented with a matrix function or quaternion function), which brings an object from body coordinates to world coordinates. In three dimensions, we define angular velocity as the vector quantity \omega

1. whose direction is the instantaneous unit vector axis of rotation of the time varying rotation, and
2. whose magnitude is the angular rate of rotation around the instantaneous axis.

We derive angular velocity both for matrix representations and for quaternion representations of rotation. The direction of the instantaneous axis of rotation can be obtained by using a matrix-to-axis operator on the relative rotation from t to t + h. In symbolic form, angular velocity is given by

    \omega = \lim_{h \to 0} \mathrm{Axis}(\mathrm{RelRot}(t, h)) \, \mathrm{Angle}(\mathrm{RelRot}(t, h)) / h

for either representation method.
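Before continuing, the structure constants of section 6.3 can be checked numerically. The sketch below is ours (Python; the ASCII names g, gbar, lam, and the helper names are our own): it builds each constant purely from the full-range delta/epsilon identities, recovers quaternion multiplication and inversion from them, and confirms that R_{ij} = \Gamma_{ijk\ell} q_k q_\ell / (q \cdot q) rotates a vector exactly as the sandwich q v q^{-1} does.

```python
def perm_sign(p):
    # Sign of the permutation p of (0, 1, 2, 3); 0 if any index repeats.
    p = list(p)
    if len(set(p)) != len(p):
        return 0
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def d(a, b):
    return 1 if a == b else 0

R4 = range(4)

# Full-range identities (index 0 is the scalar slot):
#   g_{ijk}    = d_ij d_k0 + d_ik d_j0 - d_i0 d_jk + eps_{0ijk}   (quaternion product)
#   gbar_{ijk} = d_ij d_k0 - d_i0 d_jk + eps_{0ijk}               (vector-quaternion)
#   lam_{ij}   = 2 d_i0 d_j0 - d_ij                               (inverter)
g    = [[[d(i, j)*d(k, 0) + d(i, k)*d(j, 0) - d(i, 0)*d(j, k) + perm_sign((0, i, j, k))
          for k in R4] for j in R4] for i in R4]
gbar = [[[d(i, j)*d(k, 0) - d(i, 0)*d(j, k) + perm_sign((0, i, j, k))
          for k in R4] for j in R4] for i in R4]
lam  = [[2*d(i, 0)*d(j, 0) - d(i, j) for j in R4] for i in R4]

def qmul(p, q):
    # (p q)_K = g_{Kij} p_i q_j
    return [sum(g[K][i][j] * p[i] * q[j] for i in R4 for j in R4) for K in R4]

def qinv(q):
    # (q^-1)_i = lam_{ij} q_j / (q_K q_K)
    n = sum(c * c for c in q)
    return [sum(lam[i][j] * q[j] for j in R4) / n for i in R4]

def rotation_matrix(q):
    # R_ij = Gam_{ijkl} q_k q_l / (q.q), with Gam = g_{ikp} gbar_{pjm} lam_{ml}.
    n = sum(c * c for c in q)
    return [[sum(g[i][k][p] * gbar[p][j][m] * lam[m][l] * q[k] * q[l]
                 for k in R4 for l in R4 for p in R4 for m in R4) / n
             for j in (1, 2, 3)] for i in (1, 2, 3)]
```

Since the constants are built only from the full-range identities, passing these checks also validates those identities themselves.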
Matrix representation: Let M(t) be a time varying rotation matrix which takes us from body coordinates to world coordinates, and let N(t, h) be the relative rotation from M(t) to M(t + h). In other words, since N takes us from M(t) to M(t + h),

    M(t + h) = N(t, h) \, M(t)

so

    \mathrm{RelRot}(t, h) = N(t, h)
    N(t, h) = M(t + h) \, M^T(t)
    N_{ij}(t, h) = M_{ip}(t + h) \, (M^T)_{pj}(t)
    N_{ij}(t, h) = M_{ip}(t + h) \, M_{jp}(t)

Since M_{ip}(t + h) = M_{ip}(t) + h M'_{ip}(t) + O(h^2),

    N_{ij}(t, h) = \delta_{ij} + h \, M'_{ip}(t) M_{jp}(t) + O(h^2)

Expressing \omega in terms of M and M': To find \omega, we can convert N(t, h) to axis/angle form, to find the direction of the axis and the angular rate of rotation, and take the limit as h goes to zero:

    \omega = \lim_{h \to 0} \mathrm{Axis}(N(t, h)) \, \mathrm{Angle}(N(t, h)) / h

However, a simpler method is available. We note that as h \to 0, \mathrm{Angle}(N) \to \sin(\mathrm{Angle}(N)). Thus, the product of the matrix Axis operator and the Angle operator in the limit will equal the matrix SinAxis operator (which is easier to compute):

    \omega_i = \lim_{h \to 0} \mathrm{SinAxis}_i(N(t, h)) / h
             = -(1/2) \lim_{h \to 0} (1/h) \, \epsilon_{pqi} \, N_{pq}(t, h)
             = -(1/2) \lim_{h \to 0} (1/h) \, \epsilon_{pqi} \, (\delta_{pq} + h \, M'_{ps}(t) M_{qs}(t))

Thus

    \omega_i = -(1/2) \, \epsilon_{pqi} \, M'_{ps}(t) M_{qs}(t) = (1/2) \, \epsilon_{pqi} \, M_{ps}(t) M'_{qs}(t)

Note that the SinAxis operator is described in identity 21; note also that

    \omega_{[i]} = M_{[i+1]s} \, M'_{[i+2]s}

(subscripts taken cyclically), or

    \omega_1 = M_{2s} M'_{3s}
    \omega_2 = M_{3s} M'_{1s}
    \omega_3 = M_{1s} M'_{2s}

Expressing M and M' in terms of \omega: We can express M_{ps}(t) M'_{qs}(t) in terms of \omega by multiplying both sides by \epsilon_{jki}:

    \omega_i = (1/2) \, \epsilon_{pqi} \, M_{ps}(t) M'_{qs}(t)
    \epsilon_{jki} \, \omega_i = (1/2) \, \epsilon_{jki} \epsilon_{pqi} \, M_{ps}(t) M'_{qs}(t)
                               = (1/2) \, (\delta_{jp} \delta_{kq} - \delta_{jq} \delta_{kp}) \, M_{ps}(t) M'_{qs}(t)
                               = (1/2) \, (M_{js}(t) M'_{ks}(t) - M_{ks}(t) M'_{js}(t))
    \epsilon_{jki} \, \omega_i = M_{js}(t) \, M'_{ks}(t)

(the last step uses the antisymmetry M_{ks} M'_{js} = -M_{js} M'_{ks}, which follows from differentiating M_{js} M_{ks} = \delta_{jk}).

Expressing M' in terms of \omega and M: We take the equation in the previous section, and multiply by M_{jp}:

    M_{jp} \, \epsilon_{jki} \, \omega_i = M_{js}(t) M'_{ks}(t) \, M_{jp} = \delta_{sp} \, M'_{ks}(t)

so

    M'_{kp}(t) = \epsilon_{jki} \, \omega_i \, M_{jp}

Thus, we have derived the matrix equation presented in the introduction.

Quaternion representation: Let q(t) be a time varying rotation quaternion which takes us from body coordinates to world coordinates, and let p(t, h) be the relative rotation from q(t) to q(t + h).
In other words, since p takes us from q(t) to q(t + h),

    q(t + h) = p(t, h) \, q(t)

so

    \mathrm{RelRot}(t, h) = p(t, h)
    p(t, h) = q(t + h) \, q^{-1}(t)

Expressing \omega in terms of q and q': To find \omega, we can convert p(t, h) to axis/angle form, to find the direction of the axis and the angular rate of rotation, and take the limit as h goes to zero.

    \omega = \lim_{h \to 0} \mathrm{Axis}(p(t, h)) \, \mathrm{Angle}(p(t, h)) / h

However, as in the case with the matrices, a simpler method is available. We note that as h \to 0, \mathrm{Angle}(p) \to \sin(\mathrm{Angle}(p)). Thus, as before, the product of the Axis operator and the Angle operator in the limit will equal twice the quaternion VectorPart operator, which returns the vector portion of the quaternion:^7

    \omega = 2 \lim_{h \to 0} \mathrm{VectorPart}(p(t, h)) / h
           = 2 \lim_{h \to 0} \mathrm{VectorPart}(q(t + h) \, q^{-1}) / h
           = 2 \, \mathrm{VectorPart}(q'(t) \, q^{-1})

so

    \omega = 2 \, q'(t) \, q^{-1}

(The scalar part of q' q^{-1} vanishes for a unit quaternion, since q \cdot q = 1 implies q \cdot q' = 0.)

Expressing q' in terms of \omega and q: We take the equation in the previous section, and multiply by q on the right:

    \omega \, q = 2 \, q'(t),    so    q'(t) = (1/2) \, \omega \, q

Thus, we have derived the quaternion equation presented in the introduction.

Relating angular velocity to rotational basis vectors: Let basis vector {}^p e be the p-th column of M, so the i-th element of the p-th basis vector is given by

    ({}^p e)_i = M_{ip}

Thus,

    ({}^p e')_i = \epsilon_{ijk} \, \omega_j \, ({}^p e)_k

or

    {}^p e' = \omega \times {}^p e

(Footnote 7: The VectorPart operator can be thought of as a half-SinAxis operator on the unit quaternion.)

What about non-unit quaternions? Let Q = m q be a non-unit quaternion (with magnitude m, and q a unit quaternion). Then

    Q' = (m q)' = m' q + m q' = m' q + (1/2) m \omega q = (m'/m) \, Q + (1/2) \, \omega \, Q

Since m^2 = Q \cdot Q, we have m'/m = (Q \cdot Q') / (Q \cdot Q) = (1/2) \, (d/dt)(Q \cdot Q) / (Q \cdot Q), so

    Q' = (1/2) \, \omega \, Q + Q \, (Q \cdot Q') / (Q \cdot Q)

Solving for \omega,

    (1/2) \, \omega \, Q = Q' - Q \, (Q \cdot Q') / (Q \cdot Q)

so

    \omega = 2 \, (Q' - Q \, (Q \cdot Q') / (Q \cdot Q)) \, Q^{-1}

The term subtracted from Q' is its projection along Q, so only the part of Q' perpendicular to Q contributes to \omega.

Alternate derivation (works in N dimensions): Let M(t) be an N dimensional rotation matrix which brings an object from body coordinates to world coordinates. Note that

    M_{in} M_{kn} = \delta_{ik}

Taking the derivative of both sides,

    M'_{in} M_{kn} + M_{in} M'_{kn} = 0

which means

    M'_{in} M_{kn} = -M_{in} M'_{kn}

Thus, defining

    A_{ik} = M'_{in} M_{kn}

we have A_{ik} = -A_{ki}: the matrix A is antisymmetric. To solve for M', multiply both sides by M_{kj}:

    M'_{in} M_{kn} M_{kj} = A_{ik} M_{kj}

so

    M'_{ij} = A_{ik} M_{kj}

In N dimensions, the antisymmetric matrix A takes the place of the angular velocity vector.
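Both boxed equations of this section can be checked on a concrete motion. The sketch below (ours; Python, 0-based indices) uses rotation about z with instantaneous angle th = 0.7 and rate dth = 1.3 (arbitrary values): the epsilon formula should recover \omega = (0, 0, dth), the matrix equation should reproduce M', and 2 q' q^{-1} should agree.

```python
import math

def eps(i, j, k):
    # Permutation symbol for 0-based indices in {0, 1, 2}.
    return ((j - i) * (k - i) * (k - j)) // 2

R3 = range(3)
th, dth = 0.7, 1.3
c, s = math.cos(th), math.sin(th)

# M(t) rotates about z by th(t); dM is its time derivative at this instant.
M  = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
dM = [[-s * dth, -c * dth, 0.0], [c * dth, -s * dth, 0.0], [0.0, 0.0, 0.0]]

# w_i = (1/2) eps_{pqi} M_{ps} M'_{qs}   -- should give (0, 0, dth)
w = [0.5 * sum(eps(p, q, i) * M[p][n] * dM[q][n]
               for p in R3 for q in R3 for n in R3) for i in R3]

# Matrix equation: M'_{ik} = Omega_{ij} M_{jk} with Omega_{ij} = eps_{imj} w_m
Omega = [[sum(eps(i, m, j) * w[m] for m in R3) for j in R3] for i in R3]
dM_eq = [[sum(Omega[i][j] * M[j][k] for j in R3) for k in R3] for i in R3]

# Quaternion equation: w = 2 q' q^-1 (w as a zero-scalar quaternion).
def qmul(p, q):
    return [p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3],
            p[0]*q[1] + p[1]*q[0] + p[2]*q[3] - p[3]*q[2],
            p[2]*q[0] + p[0]*q[2] + p[3]*q[1] - p[1]*q[3],
            p[3]*q[0] + p[0]*q[3] + p[1]*q[2] - p[2]*q[1]]

q  = [math.cos(th / 2), 0.0, 0.0, math.sin(th / 2)]
dq = [-0.5 * dth * math.sin(th / 2), 0.0, 0.0, 0.5 * dth * math.cos(th / 2)]
qc = [q[0], -q[1], -q[2], -q[3]]            # inverse of a unit quaternion
w_quat = [2.0 * x for x in qmul(dq, qc)]    # scalar ~ 0, vector ~ w
```

Both representations deliver the same angular velocity, as the compatibility argument above demands.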

Examples

Example 1. To simplify the vector a \times (b \times c) we use the re-association rules, the rearrangement rules, the permutation symbol subscript interchange rules, the \epsilon-\delta rule, the simplification rules, and the re-association and rearrangement rules:

    (a \times (b \times c))_i = \epsilon_{ijk} \, a_j \, (b \times c)_k
                              = \epsilon_{ijk} \, a_j \, \epsilon_{knp} \, b_n c_p
                              = (\epsilon_{ijk} \epsilon_{knp}) \, a_j b_n c_p
                              = (\epsilon_{kij} \epsilon_{knp}) \, a_j b_n c_p
                              = (\delta_{in} \delta_{jp} - \delta_{ip} \delta_{jn}) \, a_j b_n c_p
                              = a_j b_i c_j - a_j b_j c_i
                              = (a_j c_j) b_i - (a_j b_j) c_i

Therefore a \times (b \times c) = (a \cdot c) b - (a \cdot b) c. Note that a \times (b \times c) \ne (a \times b) \times c.

Example 2. The equation c = a_i b_i has two repeated subscripts in the term on the right, so the index "i" is bound to that term with an implicit summation. This is a scalar equation because there are no free indices on either side of the equation. In other words,

    c = a_1 b_1 + a_2 b_2 + a_3 b_3 = a \cdot b

Example 3. To show that a \cdot (b \times c) = (a \times b) \cdot c = \det(a, b, c):

    a \cdot (b \times c) = a_i (b \times c)_i = a_i \epsilon_{ijk} b_j c_k = \epsilon_{ijk} a_i b_j c_k = \det(a, b, c)

    \epsilon_{ijk} a_i b_j c_k = (\epsilon_{ijk} a_i b_j) c_k = (\epsilon_{kij} a_i b_j) c_k = (a \times b)_k c_k = (a \times b) \cdot c

Example 4. To derive identity (21) from identity (22), we multiply (22) by \epsilon_{qji}:

    \epsilon_{qji} R_{ij} = \epsilon_{qji} (r_i r_j + \cos\theta \, (\delta_{ij} - r_i r_j)) + \epsilon_{qji} \sin\theta \, \epsilon_{ikj} \, r_k

The second factor of the first term on the right side of the above equation is symmetric in subscripts "i" and "j", so by the symmetric identity, the term is zero. Since

    \epsilon_{qji} \epsilon_{ikj} = \epsilon_{iqj} \epsilon_{ikj} = \delta_{qk} \delta_{jj} - \delta_{qj} \delta_{jk} = 3 \delta_{qk} - \delta_{qk} = 2 \delta_{qk}

we have

    \epsilon_{qji} R_{ij} = 2 \, \delta_{qk} \sin\theta \, r_k

so

    \epsilon_{qji} R_{ij} = 2 \sin\theta \, r_q

which completes the derivation.

Example 5. To verify the matrix inverse identity (25), we multiply by the original matrix M_{im} on both sides, to see if we really have an identity:

    (M^{-1})_{qi} M_{im} =? \epsilon_{ijk} \epsilon_{qnp} \, M_{im} M_{jn} M_{kp} / (2 \det M)

    \delta_{qm} =? \epsilon_{qnp} (\epsilon_{ijk} M_{im} M_{jn} M_{kp}) / (2 \det M)

Using identity (7), this simplifies to

    \delta_{qm} =? \det M \, (\epsilon_{qnp} \epsilon_{mnp}) / (2 \det M)

so

    \delta_{qm} = (1/2)(2 \, \delta_{qm})

Thus, the identity is verified.

Example 6. To verify identity (26), we multiply by the determinant, and by \epsilon_{qrs}. The epsilon identity \epsilon_{qnp} \epsilon_{mnp} = 2 \delta_{qm} is used to eliminate the factor of 2. The other details are left to the reader.

Example 7. To discover the inverse of a matrix A of the form

    A = a \mathbf{1} - b \, x x^T,  i.e.,  A_{ij} = a \, \delta_{ij} - b \, x_i x_j

we are looking for B_{jk} such that A_{ij} B_{jk} = \delta_{ik}.
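Numerical companions to Examples 1 and 7 (ours; Python, 0-based indices, arbitrary sample vectors). The text does not carry Example 7 through to its answer; the candidate B below comes from the Sherman-Morrison formula, which is an assumption on our part, labeled as such in the code, and checked only numerically here:

    B_{jk} = \delta_{jk}/a + b \, x_j x_k / (a (a - b \, x \cdot x))

```python
def eps(i, j, k):
    # Permutation symbol for 0-based indices in {0, 1, 2}.
    return ((j - i) * (k - i) * (k - j)) // 2

R3 = range(3)

def cross(u, v):
    # (u x v)_i = eps_{ijk} u_j v_k
    return [sum(eps(i, j, k) * u[j] * v[k] for j in R3 for k in R3) for i in R3]

def dot(u, v):
    return sum(u[i] * v[i] for i in R3)

# Example 1: a x (b x c) = (a.c) b - (a.b) c, on arbitrary sample vectors.
a_v, b_v, c_v = [1.0, 2.0, 3.0], [4.0, 0.5, -1.0], [2.0, -2.0, 5.0]
lhs = cross(a_v, cross(b_v, c_v))
rhs = [dot(a_v, c_v) * b_v[i] - dot(a_v, b_v) * c_v[i] for i in R3]

# Example 7: A_ij = a d_ij - b x_i x_j, with a Sherman-Morrison candidate
# inverse (our assumption; the paper's own answer is not shown here).
a, b, x = 5.0, 2.0, [1.0, 2.0, 0.5]
A = [[a * (1.0 if i == j else 0.0) - b * x[i] * x[j] for j in R3] for i in R3]
B = [[(1.0 if j == k else 0.0) / a + b * x[j] * x[k] / (a * (a - b * dot(x, x)))
      for k in R3] for j in R3]
AB = [[sum(A[i][j] * B[j][k] for j in R3) for k in R3] for i in R3]
```

For the candidate to exist, a and a - b x.x must both be nonzero, matching det A = a^2 (a - b x.x).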


More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Linear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey

Linear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey Copyright 2005, W.R. Winfrey Topics Preliminaries Systems of Linear Equations Matrices Algebraic Properties of Matrix Operations Special Types of Matrices and Partitioned Matrices Matrix Transformations

More information

Matrix Arithmetic. j=1

Matrix Arithmetic. j=1 An m n matrix is an array A = Matrix Arithmetic a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn of real numbers a ij An m n matrix has m rows and n columns a ij is the entry in the i-th row and j-th column

More information

Chapter 2. Linear Algebra. rather simple and learning them will eventually allow us to explain the strange results of

Chapter 2. Linear Algebra. rather simple and learning them will eventually allow us to explain the strange results of Chapter 2 Linear Algebra In this chapter, we study the formal structure that provides the background for quantum mechanics. The basic ideas of the mathematical machinery, linear algebra, are rather simple

More information

Tensor Analysis in Euclidean Space

Tensor Analysis in Euclidean Space Tensor Analysis in Euclidean Space James Emery Edited: 8/5/2016 Contents 1 Classical Tensor Notation 2 2 Multilinear Functionals 4 3 Operations With Tensors 5 4 The Directional Derivative 5 5 Curvilinear

More information

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i ) Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Leandra Vicci. Microelectronic Systems Laboratory. Department of Computer Science. University of North Carolina at Chapel Hill. 27 April 2001.

Leandra Vicci. Microelectronic Systems Laboratory. Department of Computer Science. University of North Carolina at Chapel Hill. 27 April 2001. Quaternions and Rotations in 3-Space: The Algebra and its Geometric Interpretation Leandra Vicci Microelectronic Systems Laboratory Department of Computer Science University of North Carolina at Chapel

More information

Vector analysis and vector identities by means of cartesian tensors

Vector analysis and vector identities by means of cartesian tensors Vector analysis and vector identities by means of cartesian tensors Kenneth H. Carpenter August 29, 2001 1 The cartesian tensor concept 1.1 Introduction The cartesian tensor approach to vector analysis

More information

Chapter 1. Describing the Physical World: Vectors & Tensors. 1.1 Vectors

Chapter 1. Describing the Physical World: Vectors & Tensors. 1.1 Vectors Chapter 1 Describing the Physical World: Vectors & Tensors It is now well established that all matter consists of elementary particles 1 that interact through mutual attraction or repulsion. In our everyday

More information

Sometimes the domains X and Z will be the same, so this might be written:

Sometimes the domains X and Z will be the same, so this might be written: II. MULTIVARIATE CALCULUS The first lecture covered functions where a single input goes in, and a single output comes out. Most economic applications aren t so simple. In most cases, a number of variables

More information

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in Vectors and Matrices Continued Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations a x + a 2 x 2 + a n x n = b a 2

More information

A primer on matrices

A primer on matrices A primer on matrices Stephen Boyd August 4, 2007 These notes describe the notation of matrices, the mechanics of matrix manipulation, and how to use matrices to formulate and solve sets of simultaneous

More information

The word Matrices is the plural of the word Matrix. A matrix is a rectangular arrangement (or array) of numbers called elements.

The word Matrices is the plural of the word Matrix. A matrix is a rectangular arrangement (or array) of numbers called elements. Numeracy Matrices Definition The word Matrices is the plural of the word Matrix A matrix is a rectangular arrangement (or array) of numbers called elements A x 3 matrix can be represented as below Matrix

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

a11 a A = : a 21 a 22

a11 a A = : a 21 a 22 Matrices The study of linear systems is facilitated by introducing matrices. Matrix theory provides a convenient language and notation to express many of the ideas concisely, and complicated formulas are

More information

Complex Numbers and Quaternions for Calc III

Complex Numbers and Quaternions for Calc III Complex Numbers and Quaternions for Calc III Taylor Dupuy September, 009 Contents 1 Introduction 1 Two Ways of Looking at Complex Numbers 1 3 Geometry of Complex Numbers 4 Quaternions 5 4.1 Connection

More information

Linear Algebra (Review) Volker Tresp 2018

Linear Algebra (Review) Volker Tresp 2018 Linear Algebra (Review) Volker Tresp 2018 1 Vectors k, M, N are scalars A one-dimensional array c is a column vector. Thus in two dimensions, ( ) c1 c = c 2 c i is the i-th component of c c T = (c 1, c

More information

INSTITIÚID TEICNEOLAÍOCHTA CHEATHARLACH INSTITUTE OF TECHNOLOGY CARLOW MATRICES

INSTITIÚID TEICNEOLAÍOCHTA CHEATHARLACH INSTITUTE OF TECHNOLOGY CARLOW MATRICES 1 CHAPTER 4 MATRICES 1 INSTITIÚID TEICNEOLAÍOCHTA CHEATHARLACH INSTITUTE OF TECHNOLOGY CARLOW MATRICES 1 Matrices Matrices are of fundamental importance in 2-dimensional and 3-dimensional graphics programming

More information

Tensors, and differential forms - Lecture 2

Tensors, and differential forms - Lecture 2 Tensors, and differential forms - Lecture 2 1 Introduction The concept of a tensor is derived from considering the properties of a function under a transformation of the coordinate system. A description

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

Linear Algebra. Carleton DeTar February 27, 2017

Linear Algebra. Carleton DeTar February 27, 2017 Linear Algebra Carleton DeTar detar@physics.utah.edu February 27, 2017 This document provides some background for various course topics in linear algebra: solving linear systems, determinants, and finding

More information

Getting Started with Communications Engineering. Rows first, columns second. Remember that. R then C. 1

Getting Started with Communications Engineering. Rows first, columns second. Remember that. R then C. 1 1 Rows first, columns second. Remember that. R then C. 1 A matrix is a set of real or complex numbers arranged in a rectangular array. They can be any size and shape (provided they are rectangular). A

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

This appendix provides a very basic introduction to linear algebra concepts.

This appendix provides a very basic introduction to linear algebra concepts. APPENDIX Basic Linear Algebra Concepts This appendix provides a very basic introduction to linear algebra concepts. Some of these concepts are intentionally presented here in a somewhat simplified (not

More information

Linear Algebra and Robot Modeling

Linear Algebra and Robot Modeling Linear Algebra and Robot Modeling Nathan Ratliff Abstract Linear algebra is fundamental to robot modeling, control, and optimization. This document reviews some of the basic kinematic equations and uses

More information

1.3 LECTURE 3. Vector Product

1.3 LECTURE 3. Vector Product 12 CHAPTER 1. VECTOR ALGEBRA Example. Let L be a line x x 1 a = y y 1 b = z z 1 c passing through a point P 1 and parallel to the vector N = a i +b j +c k. The equation of the plane passing through the

More information

Rigid Geometric Transformations

Rigid Geometric Transformations Rigid Geometric Transformations Carlo Tomasi This note is a quick refresher of the geometry of rigid transformations in three-dimensional space, expressed in Cartesian coordinates. 1 Cartesian Coordinates

More information

MITOCW ocw nov2005-pt1-220k_512kb.mp4

MITOCW ocw nov2005-pt1-220k_512kb.mp4 MITOCW ocw-3.60-03nov2005-pt1-220k_512kb.mp4 PROFESSOR: All right, I would like to then get back to a discussion of some of the basic relations that we have been discussing. We didn't get terribly far,

More information

Some Notes on Linear Algebra

Some Notes on Linear Algebra Some Notes on Linear Algebra prepared for a first course in differential equations Thomas L Scofield Department of Mathematics and Statistics Calvin College 1998 1 The purpose of these notes is to present

More information

Matrices and Determinants

Matrices and Determinants Chapter1 Matrices and Determinants 11 INTRODUCTION Matrix means an arrangement or array Matrices (plural of matrix) were introduced by Cayley in 1860 A matrix A is rectangular array of m n numbers (or

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Notes on vectors and matrices

Notes on vectors and matrices Notes on vectors and matrices EE103 Winter Quarter 2001-02 L Vandenberghe 1 Terminology and notation Matrices, vectors, and scalars A matrix is a rectangular array of numbers (also called scalars), written

More information

Introduction to Tensor Calculus and Continuum Mechanics. by J.H. Heinbockel Department of Mathematics and Statistics Old Dominion University

Introduction to Tensor Calculus and Continuum Mechanics. by J.H. Heinbockel Department of Mathematics and Statistics Old Dominion University Introduction to Tensor Calculus and Continuum Mechanics by J.H. Heinbockel Department of Mathematics and Statistics Old Dominion University PREFACE This is an introductory text which presents fundamental

More information

An Introduction to Matrix Algebra

An Introduction to Matrix Algebra An Introduction to Matrix Algebra EPSY 905: Fundamentals of Multivariate Modeling Online Lecture #8 EPSY 905: Matrix Algebra In This Lecture An introduction to matrix algebra Ø Scalars, vectors, and matrices

More information

Connections and Determinants

Connections and Determinants Connections and Determinants Mark Blunk Sam Coskey June 25, 2003 Abstract The relationship between connections and determinants in conductivity networks is discussed We paraphrase Lemma 312, by Curtis

More information

Quaternions and Groups

Quaternions and Groups Quaternions and Groups Rich Schwartz October 16, 2014 1 What is a Quaternion? A quaternion is a symbol of the form a+bi+cj +dk, where a,b,c,d are real numbers and i,j,k are special symbols that obey the

More information

Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX

Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX September 2007 MSc Sep Intro QT 1 Who are these course for? The September

More information

Tensor Transformations and the Maximum Shear Stress. (Draft 1, 1/28/07)

Tensor Transformations and the Maximum Shear Stress. (Draft 1, 1/28/07) Tensor Transformations and the Maximum Shear Stress (Draft 1, 1/28/07) Introduction The order of a tensor is the number of subscripts it has. For each subscript it is multiplied by a direction cosine array

More information

Lecture 7. Quaternions

Lecture 7. Quaternions Matthew T. Mason Mechanics of Manipulation Spring 2012 Today s outline Motivation Motivation have nice geometrical interpretation. have advantages in representing rotation. are cool. Even if you don t

More information

1.13 The Levi-Civita Tensor and Hodge Dualisation

1.13 The Levi-Civita Tensor and Hodge Dualisation ν + + ν ν + + ν H + H S ( S ) dφ + ( dφ) 2π + 2π 4π. (.225) S ( S ) Here, we have split the volume integral over S 2 into the sum over the two hemispheres, and in each case we have replaced the volume-form

More information

Determinants - Uniqueness and Properties

Determinants - Uniqueness and Properties Determinants - Uniqueness and Properties 2-2-2008 In order to show that there s only one determinant function on M(n, R), I m going to derive another formula for the determinant It involves permutations

More information

Linear Algebra: Lecture notes from Kolman and Hill 9th edition.

Linear Algebra: Lecture notes from Kolman and Hill 9th edition. Linear Algebra: Lecture notes from Kolman and Hill 9th edition Taylan Şengül March 20, 2019 Please let me know of any mistakes in these notes Contents Week 1 1 11 Systems of Linear Equations 1 12 Matrices

More information

,, rectilinear,, spherical,, cylindrical. (6.1)

,, rectilinear,, spherical,, cylindrical. (6.1) Lecture 6 Review of Vectors Physics in more than one dimension (See Chapter 3 in Boas, but we try to take a more general approach and in a slightly different order) Recall that in the previous two lectures

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

Math 3108: Linear Algebra

Math 3108: Linear Algebra Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118

More information

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3 Linear Algebra Row Reduced Echelon Form Techniques for solving systems of linear equations lie at the heart of linear algebra. In high school we learn to solve systems with or variables using elimination

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information