Chapter 0 Preliminaries

These notes cover the course MATH45061 (Continuum Mechanics) and are intended to supplement the lectures. The course does not follow any particular text, so you do not need to buy any textbooks. The notes should be sufficiently self-contained that you will be able to use them to understand the course material. That said, there are many excellent textbooks out there that present the concepts from slightly different perspectives. Most textbooks tend to favour fluid mechanics or solid mechanics and there are only a few that treat these two continua in a unified manner. The subject is incredibly broad and it's impossible to do it justice in a 15-credit lecture course. However, I shall attempt to give a flavour of the methods and richness of the subject.

The early stages of the course will be spent developing the necessary mathematical framework to study the mechanics of continua. Although the basic concepts are straightforward, the mathematics rapidly becomes cumbersome when developing a framework that will allow the use of general coordinate systems. One of the first problems that you will face is notation. There is no universally accepted notation in continuum mechanics. The problem is that, for a general treatment, it is effectively impossible to find a notation that doesn't look cluttered unless you suppress important bits of information. Conversely, if information is suppressed so that the notation looks clean, the results can be ambiguous or unclear. Of course, once you understand what is going on the notation doesn't matter, but it helps to have a notation that is as easy as possible to work with. I have not found the perfect notation, but I believe that I have found one that is consistent, complete and relatively easy to use. Please do spend time getting used to and working with the notation. Do the initial exercises as soon as you get a chance and go back to them if you get confused.
Having a firm command of the notation will really help in following the lectures.

0.1 Things you should already know

The course is as self-contained as it can be, but you should already be confident with the basic calculus of scalar and vector fields (div, grad, curl, multiple integrals, the divergence theorem, ...); Taylor series for functions of many variables; the solution of ordinary and partial differential equations (general methods for linear equations); as well as basic linear algebra (how to work with matrices and vectors, definitions of eigenvalues, linear independence, ...). If you do not immediately know the answers to the questions in section 0.1.1 (or at least how to find the answers) then I would suggest revising the appropriate material. I will not assume any knowledge of mechanics beyond basic particle mechanics and Newton's laws, but, of course, if you have already done courses in fluid or solid mechanics many of the concepts that we discuss should be familiar.

0.1.1 Pre-course fitness check

1. Two vectors are defined in components in a global Cartesian basis: $a = (0, 1, 2)$, $b = (3, 2, 1)$.
   (a) Find $a \cdot b$ and hence determine whether the two vectors are orthogonal.
   (b) Find two unit vectors $\hat{a}$ and $\hat{b}$ that are parallel to $a$ and $b$ respectively.
2. Is it always possible to find the inverse of a matrix? If so, prove it; if not, provide a counterexample.
3. In Cartesian coordinates $(x, y, z)$, $f = xyz$ and $F = (x, y, z)$.
   (a) Is $f$ a scalar or vector field? What about $F$?
   (b) Find $\nabla f$, $\nabla \cdot F$ and $\nabla \times F$.
   (c) What does the notation $\nabla^2$ mean?
4. Find the Taylor series of $\sin(xy)$ about the point $x = 0$, $y = \pi/2$.
5. A linear system of simultaneous equations is given by
   $$4x + 3y + 2z = 1, \quad 2x + 7y = 0, \quad 8x + 6y + 4z = 5.$$
   Write the system as a matrix equation. Does the system have a solution? If so, find the solution; if not, how could the system be changed to ensure that it does have a solution?
6. Find the eigenvalues and eigenvectors of the matrix
7. Find the general solution $u(x)$ of the equation
   $$\frac{d^2 u}{dx^2} + \omega^2 u = 0.$$
8. Find the solution $u(x, y)$ of the PDE $\nabla^2 u = 0$ in a square domain $x, y \in [0, 1]$, subject to the boundary conditions $u = 1$ on the line $y = 0$, but $u = 0$ on all other boundaries.
9. State the divergence theorem and construct an example to verify it.
10. State Newton's three laws of motion. Use them to determine the position at which a cannonball fired at an angle of $\pi/4$ radians with a velocity of $1\,\mathrm{m\,s^{-1}}$ returns to the ground, assuming uniform gravitational acceleration of magnitude $g$.
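For question 1, the answers are easy to check numerically. A minimal Python sketch (the vectors are those given above):

```python
import math

a = [0.0, 1.0, 2.0]
b = [3.0, 2.0, 1.0]

# (a) dot product: a.b = 0*3 + 1*2 + 2*1 = 4, so a and b are not orthogonal
dot = sum(ai * bi for ai, bi in zip(a, b))
orthogonal = (dot == 0.0)

# (b) unit vectors parallel to a and b: divide each by its Euclidean norm
def unit(v):
    norm = math.sqrt(sum(vi * vi for vi in v))
    return [vi / norm for vi in v]

a_hat = unit(a)   # |a| = sqrt(5)
b_hat = unit(b)   # |b| = sqrt(14)
```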

0.2 Nomenclature and notational conventions

Another difficulty that students have found with this material is simply remembering all the different symbols. I have tried to use a consistent notation throughout, so here are the rules of the notation. This will not make sense if this is the first time that you are reading these notes, but I hope that it will be a handy reference.

- Scalar variables are written in italics, like $x$, but constants are not, e.g. the base of natural logarithms e and the imaginary number i.
- Vectors are written in bold face, $\mathbf{a}$, whereas matrices and tensors are in a sans-serif font, $\mathsf{M}$, so that we would write a linear system of equations as $\mathsf{M}\mathbf{a} = \mathbf{b}$.
- Round brackets are used if one variable is a function of another, $V(R, t)$.
- The standard notation is used for scalar ($\cdot$), vector ($\times$) and tensor ($\otimes$) products.
- Subscripts or superscripts that denote components of a vector or tensor in a Cartesian coordinate system are upper case Roman, $x_I$.
- Subscripts or superscripts that denote components of a vector or tensor in a general coordinate system are lower case Roman, $a_i$.
- Quantities with a lower case Roman subscript obey a covariant transformation law under change of coordinate system; e.g. when changing from $\xi^i$ to $\chi^{\bar{i}}$,
  $$a_{\bar{i}} = \frac{\partial \xi^j}{\partial \chi^{\bar{i}}}\, a_j$$
  (c.f. tangent base vectors or partial derivatives).
- Quantities with a lower case Roman superscript obey a contravariant transformation law under change of coordinate system; e.g.
  $$a^{\bar{i}} = \frac{\partial \chi^{\bar{i}}}{\partial \xi^j}\, a^j$$
  (c.f. vector components or differential forms). Note that the position of the index reflects the location of the new coordinate within the partial derivative in the transformation rule. For orthonormal coordinate systems, the two transformations are identical, so $a^I = a_I$.
- We use the summation convention that a subscript and superscript with the same index should be summed over that index:
  $$a_j b^j = \sum_{j=1}^{3} a_j b^j = a_1 b^1 + a_2 b^2 + a_3 b^3.$$
  Note that this convention ensures that invariant quantities are easily identified.
- Coordinates in the reference (original) configuration of the continuum are denoted by $x_I$ (Cartesian) or $\xi^i$ (general). These are called Lagrangian coordinates.
- Coordinates in the deformed (current) configuration of the continuum are denoted by $X_I$ (Cartesian) or $\chi^{\bar{i}}$ (general). These are called Eulerian coordinates. Note that in Cartesian formulations both the original and deformed coordinates refer to the same global Cartesian base vectors, hence the index is the same. In the general formulations, we allow the two coordinate systems to be different and hence we use different indices.
- Upper case quantities refer to the deformed (Eulerian) representation; lower case quantities refer to the undeformed (Lagrangian) viewpoint. Many other textbooks use exactly the opposite convention, so be careful. This notation is oxymoronic for two-point tensors (quantities with one Eulerian index and one Lagrangian) and so, following standard conventions, these are upper case for deformation measures, $\mathsf{F}$, $\mathsf{H}$, and lower case for stress measures, $\mathsf{p}$.
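The point of pairing the covariant and contravariant transformation laws above is that contractions such as $a_j b^j$ are unchanged by the change of coordinates: if the covariant components transform by a matrix $M$ and the contravariant components by its inverse transpose, then $(Ma)\cdot(M^{-T}b) = a\cdot b$. A minimal numerical sketch in Python, with a hypothetical invertible Jacobian matrix `J` and illustrative component values:

```python
# J stands in for the Jacobian of the coordinate change (illustrative values)
J = [[2.0, 1.0],
     [0.0, 3.0]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(M):
    return [list(row) for row in zip(*M)]

a_cov = [1.0, 2.0]     # covariant components a_j
b_con = [3.0, 4.0]     # contravariant components b^j
invariant = sum(x * y for x, y in zip(a_cov, b_con))   # a_j b^j = 11

a_new = matvec(J, a_cov)                     # covariant rule: transform by J
b_new = matvec(transpose(inv2(J)), b_con)    # contravariant rule: inverse transpose
invariant_new = sum(x * y for x, y in zip(a_new, b_new))   # unchanged
```

The same check works for any invertible `J`; orthonormal systems are the special case where `J` equals its own inverse transpose.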

Health Warning: Any notation will eventually break and I shall try to point out any abuses of notation when they arise, but if you spot any inconsistencies or mistakes then please let me know.

Lists of symbols

In the tables below we list the most commonly used symbols in these lecture notes. One significant complication in the general theory is the need to distinguish between the Lagrangian (material) and Eulerian (spatial) treatments. For any specific problem, we can usually choose either the Eulerian or the Lagrangian viewpoint and so avoid an explosion of different symbols. However, whether we choose Eulerian or Lagrangian coordinates, we must still connect the deformed and undeformed configurations of the continuum. The natural choice is to use Eulerian coordinates to describe the deformed configuration and Lagrangian for the undeformed, but the relationship between the two coordinate systems is required to determine strain measures. We must therefore, in principle, consider equivalent quantities in deformed and undeformed configurations in both sets of coordinates. If we have tensor or vector quantities then we can easily convert between the two coordinate systems by the appropriate transformation rules, but we cannot convert between deformed and undeformed domains without knowing details of the deformation, i.e. $R(r)$ or $r(R)$.

| Quantity | Undeformed configuration | Deformed configuration |
|---|---|---|
| Material region | $\Omega_0$ | $\Omega_t$ |
| Density | $\rho_0$ | $\rho$ |
| Volume of region | $V_0$ | $V_t$ |
| Infinitesimal volume element | $dv$, $dv_0$ | $dV$, $dV_t$ |
| Surface of region | $\partial V_0 = S_0$ | $\partial V_t = S_t$ |
| Unit normal to surface | $n$ | $N$ |
| Infinitesimal scalar surface element | $da$, $ds_0$ | $dA$, $dS_t$ |
| Infinitesimal vector surface element | $da = n\,da$ | $dA = N\,dA$ |
| Position vector | $r$ | $R$ |
| Infinitesimal line element | $dr$ | $dR$ |

Table 1: List of notation used to denote analogous geometric quantities for a material region of a continuum in its undeformed and deformed configurations; see chapter 2 for further details.
For vector quantities we distinguish the components referred to the different bases $g_i$, $g^i$, $G_{\bar{i}}$ and $G^{\bar{i}}$ (see Table 2) as follows:
$$v(r, t) = v^i g_i = v_i g^i = V(R, t) = V^{\bar{i}} G_{\bar{i}} = V_{\bar{i}} G^{\bar{i}}.$$

| Quantity | Lagrangian | Eulerian |
|---|---|---|
| Global Cartesian basis | $e_I$ | $e_I$ |
| Undeformed position vector | $r$ | $r(R)$ |
| Deformed position vector | $R(r)$ | $R$ |
| Undeformed Cartesian coordinate | $x_I$: $r = x_I e_I$ | $x_I(X_J)$ |
| Deformed Cartesian coordinate | $X_I(x_J)$ | $X_I$: $R = X_I e_I$ |
| General coordinate | $\xi^i$: $r(\xi^i)$, $R(\xi^i)$ | $\chi^{\bar{i}}$: $R(\chi^{\bar{i}})$, $r(\chi^{\bar{i}})$ |
| Displacement field | $u(r, t) = R - r$ | $U(R, t)$ |
| Velocity field | $v(r, t)$ | $V(R, t)$ |
| Acceleration field | $a(r, t)$ | $A(R, t)$ |
| Partial derivative w.r.t. general coordinate | $f_{,i} = \partial f/\partial \xi^i$ | $F_{,\bar{i}} = \partial F/\partial \chi^{\bar{i}}$ |
| Undeformed covariant base vector | $g_i = r_{,i}$ | $g_{\bar{i}} = r_{,\bar{i}}$ |
| Deformed covariant base vector | $G_i = R_{,i}$ | $G_{\bar{i}} = R_{,\bar{i}}$ |
| Undeformed contravariant base vector | $g^i$: $g^i \cdot g_j = \delta^i_j$ | $g^{\bar{i}}$ |
| Deformed contravariant base vector | $G^i$ | $G^{\bar{i}}$ |
| Undeformed covariant metric tensor | $g_{ij} = g_i \cdot g_j$ | $g_{\bar{i}\bar{j}} = g_{\bar{i}} \cdot g_{\bar{j}}$ |
| Undeformed contravariant metric tensor | $(g^{ij}) = (g_{ij})^{-1}$ | $(g^{\bar{i}\bar{j}}) = (g_{\bar{i}\bar{j}})^{-1}$ |
| Deformed covariant metric tensor | $G_{ij} = G_i \cdot G_j$ | $G_{\bar{i}\bar{j}} = G_{\bar{i}} \cdot G_{\bar{j}}$ |
| Deformed contravariant metric tensor | $(G^{ij}) = (G_{ij})^{-1}$ | $(G^{\bar{i}\bar{j}}) = (G_{\bar{i}\bar{j}})^{-1}$ |
| Determinant of undeformed covariant metric | $g = \det(g_{ij})$ | $\bar{g} = \det(g_{\bar{i}\bar{j}})$ |
| Determinant of deformed covariant metric | $G = \det(G_{ij})$ | $\bar{G} = \det(G_{\bar{i}\bar{j}})$ |
| Change in volume during deformation | $J = \sqrt{G/g}$ | $\bar{J} = J = \sqrt{\bar{G}/\bar{g}}$ |
| Undeformed gradient operator | $\nabla_r f = f_{,i}\, g^i$ | $\nabla_r f = f_{,\bar{i}}\, g^{\bar{i}}$ |
| Deformed gradient operator | $\nabla_R F = F_{,i}\, G^i$ | $\nabla_R F = F_{,\bar{i}}\, G^{\bar{i}}$ |
| Undeformed Christoffel symbol | $\Gamma^i_{jk} = g^i \cdot g_{j,k}$ | $\Gamma^{\bar{i}}_{\bar{j}\bar{k}} = g^{\bar{i}} \cdot g_{\bar{j},\bar{k}}$ |
| Undeformed covariant derivative | $f^i_{\|j} = f^i_{,j} + \Gamma^i_{kj} f^k$ | $f_{\bar{i}\|\bar{j}} = f_{\bar{i},\bar{j}} - \Gamma^{\bar{k}}_{\bar{i}\bar{j}} f_{\bar{k}}$ |
| Deformed Christoffel symbol | $\Gamma^i_{jk} = G^i \cdot G_{j,k}$ | $\Gamma^{\bar{i}}_{\bar{j}\bar{k}} = G^{\bar{i}} \cdot G_{\bar{j},\bar{k}}$ |
| Deformed covariant derivative | $F^i_{\|j} = F^i_{,j} + \Gamma^i_{kj} F^k$ | $F_{\bar{i}\|\bar{j}} = F_{\bar{i},\bar{j}} - \Gamma^{\bar{k}}_{\bar{i}\bar{j}} F_{\bar{k}}$ |

Table 2: List of standard coordinates and common kinematic and geometric quantities in Lagrangian and Eulerian viewpoints; see chapter 2 for further details.

| Quantity | Lagrangian | Eulerian |
|---|---|---|
| Deformation gradient tensor | $F = (\nabla_r R)^T$ | $H = F^{-1}$ |
| Deformation tensor | $c = F^T F$ (Cauchy-Green) | $C = H^T H$ (Cauchy) |
| Deformation tensor | $b = H H^T = c^{-1}$ (Piola) | $B = F F^T = C^{-1}$ (Finger) |
| Cartesian strain tensor | $e = (c - I)/2$ (Green-Lagrange) | $E = (I - C)/2$ (Almansi) |
| General strain tensor | $\gamma_{ij} = (G_{ij} - g_{ij})/2$ (Green-Lagrange) | $\gamma_{\bar{i}\bar{j}} = (G_{\bar{i}\bar{j}} - g_{\bar{i}\bar{j}})/2$ (Almansi) |
| Velocity gradient tensor | $l = (\nabla_r v)^T$ | $L = (\nabla_R V)^T$ |
| Rate of deformation tensor | $\dot{\gamma}_{ij} = D\gamma_{ij}/Dt$ | $D = (L + L^T)/2$ |
| Spin tensor | | $W = (L - L^T)/2$ |
| Vorticity | | $\omega$ |

Table 3: Measures of strain and strain rate. The material deformation gradient tensor, $F$, connects the deformed and undeformed positions and is neither Eulerian nor Lagrangian. We have chosen to place it in the Lagrangian column for convenience, and its inverse, $H$, is then placed in the Eulerian column.

| Quantity | Lagrangian form | Eulerian form |
|---|---|---|
| Body force | $f$ | $F$ |
| Surface traction | $t$ | $T$ |
| Stress tensor / deformed area | $T = T^{ij} G_i G_j$ (Body) | $T = T^{\bar{i}\bar{j}} G_{\bar{i}} G_{\bar{j}}$ (Cauchy) |
| Physical stress (tensor) | | $\sigma$: $\sigma_{IJ} = T_{IJ}$ |
| Stress / undeformed area | $s = J F^{-1} \sigma F^{-T}$ (2nd Piola-Kirchhoff) | $p = J \sigma^T F^{-T}$ (1st Piola-Kirchhoff) |

Table 4: Measures of force and stress. Note that the 1st Piola-Kirchhoff stress is a two-point tensor, so it does not naturally fit in either the Eulerian or the Lagrangian framework. We have placed it in the Eulerian column because, if the stress vector is decomposed into Cartesian coordinates, the Lagrangian formulation of the equations of motion uses $p$.

| Quantity | Lagrangian form | Eulerian form |
|---|---|---|
| Internal energy | $\phi$ | $\Phi$ |
| Heat flux | $q$ | $Q$ |
| Heat supply | $b$ | $B$ |
| Specific entropy | $\eta_0$ | $\eta$ |
| Temperature | $\theta$ | $\Theta$ |
| Helmholtz free energy | $\psi$ | $\Psi$ |

Table 5: Thermodynamic and energetic quantities in Lagrangian and Eulerian representations.
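As a concrete illustration of the definitions in Table 3, the following Python sketch computes the Cauchy-Green deformation tensor $c = F^T F$ and the Green-Lagrange strain $e = (c - I)/2$ for a hypothetical homogeneous uniaxial stretch; the stretch ratio 1.3 and the two-dimensional setting are illustrative choices, not taken from the notes:

```python
# Deformation gradient for a uniaxial stretch by lam along e_1 (2D for brevity)
lam = 1.3
F = [[lam, 0.0],
     [0.0, 1.0]]

def at_b(A, B):
    """Return A^T B for square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[k][i] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

c = at_b(F, F)                        # Cauchy-Green tensor c = F^T F
I = [[1.0, 0.0], [0.0, 1.0]]
e = [[(c[i][j] - I[i][j]) / 2.0 for j in range(2)] for i in range(2)]
# only the stretched direction carries strain: e[0][0] = (lam**2 - 1)/2,
# all other components are zero
```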

Chapter 1 Describing the Physical World: Vectors & Tensors

It is now well established that all matter consists of elementary particles[1] that interact through mutual attraction or repulsion. In our everyday life, however, we do not see the elemental nature of matter; we see continuous regions of material such as the air in a room, the water in a glass or the wood of a desk. Even in a gas, the number of individual molecules in macroscopic volumes is enormous: of the order of $10^{19}$ per cm$^3$. Hence, when describing the behaviour of matter on scales of microns or above, it is simply not practical to solve equations for each individual molecule. Instead, theoretical models are derived using average quantities that represent the state of the material. In general, these average quantities will vary with position in space, but an important concept to bear in mind is that the fundamental behaviour of matter should not depend on the particular coordinate system chosen to represent the position. The consequences of this almost trivial observation are far-reaching and we shall find that it dictates the form of many of our governing equations.

We shall always consider our space to be three-dimensional and Euclidean[2] and we describe position in space by a position vector, $r$, which runs from a specific point, the origin, to our chosen location. The exact coordinate system chosen will depend on the problem under consideration; ideally it should make the problem as easy as possible.

1.1 Vectors

A vector is a geometric quantity that has a magnitude and direction. A more mathematically precise, but less intuitive, definition is that a vector is an element of a vector space. Many physical quantities are naturally described in terms of vectors, e.g. position, velocity, acceleration and force. The invariance of material behaviour under changes in coordinates means that if a vector represents a physical quantity then it must not vary if we change our coordinate system.
Imagine drawing a line that connects two points in a two-dimensional (Euclidean) plane: that line remains unchanged whether we describe it as $x$ units across and $y$ units up from the origin, or $r$ units from the origin in the $\theta$ direction. Thus, a vector is an object that exists independent of any coordinate system, but if we wish to describe it we must choose a specific coordinate system, and its representation in that coordinate system (its components) will depend on the specific coordinates chosen.

[1] The exact nature of the most basic unit is, of course, still debated, but the fundamental discrete nature of matter is not.
[2] We won't worry about relativistic effects at all.

1.1.1 Cartesian components and summation convention

The fact that we have a Euclidean space means that we can always choose a Cartesian coordinate system with fixed orthonormal base vectors, $e_1 = i$, $e_2 = j$ and $e_3 = k$. For a compact notation, it is much more convenient to use the numbered subscripts rather than different symbols to distinguish the base vectors. Any vector quantity $a$ can be written as a sum of its components in the direction of the base vectors
$$a = a_1 e_1 + a_2 e_2 + a_3 e_3; \quad (1.1)$$
the vector $a$ can then be represented via its components $(a_1, a_2, a_3)$, so that $e_1 = (1, 0, 0)$, $e_2 = (0, 1, 0)$ and $e_3 = (0, 0, 1)$. We will often represent the components of vectors using an index, i.e. $(a_1, a_2, a_3)$ is equivalent to $a_I$, where $I \in \{1, 2, 3\}$. In addition, we use the Einstein summation convention in which any index that appears twice represents a sum over all values of that index:
$$a = a_J e_J = \sum_{J=1}^{3} a_J e_J. \quad (1.2)$$
Note that we can change the (dummy) summation index without affecting the result:
$$a_J e_J = \sum_{J=1}^{3} a_J e_J = \sum_{K=1}^{3} a_K e_K = a_K e_K.$$
The summation is ambiguous if an index appears more than twice and such terms are not allowed. For clarity later, an upper case index is used for objects in a Cartesian (or, in fact, any orthonormal) coordinate system and, in general, we will insist that summation can only occur over a raised index and a lowered index, for reasons that will hopefully become clear shortly. It is important to recognise that the components $a_I$ of a vector do not actually make sense unless we know the base vectors as well. In isolation the components give you distances but not directions, which is only half the story.

Curvilinear coordinate systems

For a complete theoretical development, we shall consider general coordinate systems.[3]
Unfortunately the use of general coordinate systems introduces considerable complexity, because the lines on which individual coordinates are constant are not necessarily straight, nor are they necessarily orthogonal to one another. A consequence is that the base vectors in general coordinate systems are not orthonormal and vary as functions of position.

Tangent (covariant base) vectors

The position vector $r = x_K e_K$ is a function of the Cartesian coordinates $x_K$, where $x_1 = x$, $x_2 = y$ and $x_3 = z$. Note that the Cartesian base vectors can be recovered by differentiating the position vector with respect to the appropriate coordinate:
$$\frac{\partial r}{\partial x_K} = e_K. \quad (1.3)$$

[3] For our purposes a coordinate system is a set of independent scalar variables that can be used to describe any position in the entire Euclidean space.

In other words, the derivative of position with respect to a coordinate returns a vector tangent to the coordinate direction; a statement that is true for any coordinate system. For a general coordinate system, $\xi^i$, we can write the position vector as
$$r(\xi^1, \xi^2, \xi^3) = x_K(\xi^i)\, e_K, \quad (1.4)$$
because the Cartesian base vectors are fixed. Here the notation $x_K(\xi^i)$ means that the Cartesian coordinates can be written as functions of the general coordinates, e.g. in plane polars $x(r, \theta) = r\cos\theta$, $y(r, \theta) = r\sin\theta$; see Example 1.1. Note that equation (1.4) is the first time in which we use upper and lower case indices to distinguish between the Cartesian and general coordinate systems. A tangent vector in the $\xi^1$ direction, $t_1$, is the difference between two position vectors associated with a small (infinitesimal) change in the $\xi^1$ coordinate:
$$t_1(\xi^i) = r(\xi^i + d\xi^1) - r(\xi^i), \quad (1.5)$$
where $d\xi^1$ represents the small change in the $\xi^1$ coordinate direction, see Figure 1.1.

Figure 1.1: Sketch illustrating the tangent vector $t_1(\xi^i)$ corresponding to a small change $d\xi^1$ in the coordinate $\xi^1$. The tangent lies along a line of constant $\xi^2$ in two dimensions, or a plane of constant $\xi^2$ and $\xi^3$ in three dimensions.

Assuming that $r$ is differentiable and Taylor expanding the first term in (1.5) demonstrates that
$$t_1 = r(\xi^i) + \frac{\partial r}{\partial \xi^1}\, d\xi^1 - r(\xi^i) + O((d\xi^1)^2),$$
which yields
$$t_1 = \frac{\partial r}{\partial \xi^1}\, d\xi^1,$$
if we neglect the (small) quadratic and higher-order terms. Note that exactly the same argument can be applied to increments in the $\xi^2$ and $\xi^3$ directions and, because the $d\xi^i$ are scalar lengths, it follows that
$$g_i = \frac{\partial r}{\partial \xi^i}$$
is also a tangent vector in the $\xi^i$ direction, as claimed. Hence, using equation (1.4), we can compute tangent vectors in the general coordinate directions via
$$g_i = \frac{\partial r}{\partial \xi^i} = \frac{\partial x_K}{\partial \xi^i}\, e_K. \quad (1.6)$$
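The limiting argument behind (1.5) and (1.6) can be checked numerically: a finite-difference quotient of the position vector should approach the tangent vector $g_i = \partial r/\partial\xi^i$. A minimal Python sketch for plane polars (the sample point is arbitrary):

```python
import math

def position(xi1, xi2):
    """Position vector r = (xi1 cos xi2, xi1 sin xi2) in plane polars."""
    return [xi1 * math.cos(xi2), xi1 * math.sin(xi2)]

def tangent(i, xi1, xi2, h=1e-7):
    """Central-difference approximation to g_i = dr/dxi^i."""
    d1, d2 = (h, 0.0) if i == 1 else (0.0, h)
    rp = position(xi1 + d1, xi2 + d2)
    rm = position(xi1 - d1, xi2 - d2)
    return [(p - m) / (2 * h) for p, m in zip(rp, rm)]

g1 = tangent(1, 2.0, 0.7)   # should approach (cos 0.7, sin 0.7)
g2 = tangent(2, 2.0, 0.7)   # should approach 2*(-sin 0.7, cos 0.7)
```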

We can interpret equation (1.6) as defining a local linear transformation between the Cartesian base vectors and our new tangent vectors $g_i$. The transformation is linear because $g_i$ is a linear combination of the vectors $e_K$, which should not be a surprise because we explicitly neglected the quadratic and higher terms in the Taylor expansion. The transformation is local because, in general, the coefficients will change with position. The coefficients of the transformation can be written as entries in a matrix, $M$, in which case equation (1.6) becomes
$$\begin{pmatrix} g_1 \\ g_2 \\ g_3 \end{pmatrix} = \underbrace{\begin{pmatrix} \partial x_1/\partial\xi^1 & \partial x_2/\partial\xi^1 & \partial x_3/\partial\xi^1 \\ \partial x_1/\partial\xi^2 & \partial x_2/\partial\xi^2 & \partial x_3/\partial\xi^2 \\ \partial x_1/\partial\xi^3 & \partial x_2/\partial\xi^3 & \partial x_3/\partial\xi^3 \end{pmatrix}}_{M} \begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix}. \quad (1.7)$$
Provided that the transformation is non-singular (the determinant of the matrix $M$ is non-zero), the tangent vectors will also be a basis of the space; they are called covariant base vectors because the transformation preserves the tangency of the vectors to the coordinates. In general, the covariant base vectors are neither orthogonal nor of unit length. It is also important to note that the covariant base vectors will usually be functions of position.

Example 1.1. Finding the covariant base vectors for plane polar coordinates
A plane polar coordinate system is defined by the two coordinates $\xi^1 = r$, $\xi^2 = \theta$ such that $x = x_1 = r\cos\theta$ and $y = x_2 = r\sin\theta$. Find the covariant base vectors.

Solution 1.1. The position vector is given by
$$r = x_1 e_1 + x_2 e_2 = r\cos\theta\, e_1 + r\sin\theta\, e_2 = \xi^1\cos\xi^2\, e_1 + \xi^1\sin\xi^2\, e_2,$$
and using the definition (1.6) gives
$$g_1 = \frac{\partial r}{\partial \xi^1} = \cos\xi^2\, e_1 + \sin\xi^2\, e_2, \quad\text{and}\quad g_2 = \frac{\partial r}{\partial \xi^2} = -\xi^1\sin\xi^2\, e_1 + \xi^1\cos\xi^2\, e_2.$$
Note that $g_1$ is a unit vector, $|g_1| = \sqrt{g_1 \cdot g_1} = \sqrt{\cos^2\xi^2 + \sin^2\xi^2} = 1$, but $g_2$ is not, $|g_2| = \xi^1$. The vectors are orthogonal, $g_1 \cdot g_2 = 0$, and are related to the standard orthonormal polar base vectors via $g_1 = e_r$ and $g_2 = r e_\theta$.

Contravariant base vectors

The fact that the covariant basis is not necessarily orthonormal makes life somewhat awkward.
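The results of Example 1.1 are easy to check numerically. This minimal Python sketch evaluates the covariant base vectors at an arbitrary sample point and verifies the lengths and orthogonality stated above:

```python
import math

xi1, xi2 = 2.0, 0.7   # sample point: r = 2, theta = 0.7 (arbitrary)

# covariant base vectors from Example 1.1
g1 = [math.cos(xi2), math.sin(xi2)]
g2 = [-xi1 * math.sin(xi2), xi1 * math.cos(xi2)]

dot = g1[0] * g2[0] + g1[1] * g2[1]   # orthogonality: should vanish
len_g1 = math.hypot(g1[0], g1[1])     # should be 1
len_g2 = math.hypot(g2[0], g2[1])     # should be xi1
```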
For orthonormal systems we are used to the fact that, when $a = a_K e_K$, unique components can be obtained via a dot product:[4]
$$a \cdot e_I = a_K\, e_K \cdot e_I = a_I, \quad (1.8)$$

[4] The dot or scalar product is an operation on two vectors that returns a unique scalar: the product of the lengths of the two vectors and the cosine of the angle between them. In the present context we only need to know that for orthogonal vectors the dot product is zero and that the dot product of a unit vector with itself is one; see later for further discussion.

where the last equality is a consequence of the orthonormality. In our covariant basis, we would write $a = a^k g_k$, so that
$$a \cdot g_i = a^k\, g_k \cdot g_i, \quad (1.9)$$
but no further simplification can be made. Equations (1.9) are a linear system of simultaneous equations that must be solved in order to find the values of $a^k$, which is considerably more effort than using an explicit formula such as (1.8). The explicit formula (1.8) arises because the matrix $e_K \cdot e_I$ is diagonal, which means that the equations decouple and are no longer simultaneous. We can, however, recover most of the nice properties of an orthonormal coordinate system if we define another set of base vectors that are each orthogonal to two of the covariant base vectors and have unit length when projected in the direction of the remaining covariant base vector. In other words, the new vectors $g^i$ are defined such that
$$g^i \cdot g_j = \delta^i_j \equiv \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{otherwise}, \end{cases} \quad (1.10)$$
where the object $\delta^i_j$ is known as the Kronecker delta. In orthonormal coordinate systems the two sets of base vectors coincide; for example, in our global Cartesian coordinates $e^I \equiv e_I$. We can decompose $g^i$ into its components in the Cartesian basis, $g^i = g^i_K\, e^K$, where we have used the raised index on the base vectors for consistency with our summation convention. Note that $g^i_K$ is thus defined to be the $K$-th Cartesian component of the $i$-th contravariant base vector. From the definitions (1.10) and (1.6),
$$\left(g^i_K\, e^K\right) \cdot \left(\frac{\partial x_L}{\partial \xi^j}\, e_L\right) = g^i_K \frac{\partial x_L}{\partial \xi^j}\, e^K \cdot e_L = g^i_K \frac{\partial x_L}{\partial \xi^j}\, \delta^K_L = g^i_L \frac{\partial x_L}{\partial \xi^j} = \delta^i_j. \quad (1.11)$$
Note that we have used the index-switching property of the Kronecker delta to write $g^i_K \delta^K_L = g^i_L$, which can be verified by writing out all terms explicitly. Multiplying both sides of equation (1.11) by $\partial \xi^j/\partial x_K$ yields
$$g^i_L \frac{\partial x_L}{\partial \xi^j} \frac{\partial \xi^j}{\partial x_K} = \delta^i_j \frac{\partial \xi^j}{\partial x_K} = \frac{\partial \xi^i}{\partial x_K};$$
and from the chain rule
$$\frac{\partial x_L}{\partial \xi^j} \frac{\partial \xi^j}{\partial x_K} = \frac{\partial x_L}{\partial x_K} = \delta^L_K,$$
because the Cartesian coordinates are independent.
Hence,
$$g^i_L\, \delta^L_K = g^i_K = \frac{\partial \xi^i}{\partial x_K},$$
and so the new set of base vectors are
$$g^i = \frac{\partial \xi^i}{\partial x_K}\, e^K. \quad (1.12)$$
Equation (1.12) defines a local linear transformation between the Cartesian base vectors and the vectors $g^i$. In a matrix representation, equation (1.12) is
$$\begin{pmatrix} g^1 \\ g^2 \\ g^3 \end{pmatrix} = \underbrace{\begin{pmatrix} \partial\xi^1/\partial x_1 & \partial\xi^1/\partial x_2 & \partial\xi^1/\partial x_3 \\ \partial\xi^2/\partial x_1 & \partial\xi^2/\partial x_2 & \partial\xi^2/\partial x_3 \\ \partial\xi^3/\partial x_1 & \partial\xi^3/\partial x_2 & \partial\xi^3/\partial x_3 \end{pmatrix}}_{M^{-T}} \begin{pmatrix} e^1 \\ e^2 \\ e^3 \end{pmatrix}, \quad (1.13)$$

and we see that the new transformation is the inverse transpose[5] of the linear transformation that defines the covariant base vectors (1.6). For this reason, the vectors $g^i$ are called contravariant base vectors.

Example 1.2. Finding the contravariant base vectors for plane polar coordinates
For the plane polar coordinate system defined in Example 1.1, find the contravariant base vectors.

Solution 1.2. The contravariant base vectors are defined by equation (1.12) and, in order to use that equation directly, we must express our polar coordinates as functions of the Cartesian coordinates:
$$r = \xi^1 = \sqrt{x_1 x_1 + x_2 x_2}, \quad\text{and}\quad \tan\theta = \tan\xi^2 = \frac{x_2}{x_1}.$$
We can then compute
$$\frac{\partial \xi^1}{\partial x_1} = \cos\xi^2, \quad \frac{\partial \xi^1}{\partial x_2} = \sin\xi^2, \quad \frac{\partial \xi^2}{\partial x_1} = -\frac{\sin\xi^2}{\xi^1} \quad\text{and}\quad \frac{\partial \xi^2}{\partial x_2} = \frac{\cos\xi^2}{\xi^1}.$$
Thus,
$$g^1 = \frac{\partial \xi^1}{\partial x_1}\, e_1 + \frac{\partial \xi^1}{\partial x_2}\, e_2 = \cos\xi^2\, e_1 + \sin\xi^2\, e_2 = g_1,$$
and
$$g^2 = \frac{\partial \xi^2}{\partial x_1}\, e_1 + \frac{\partial \xi^2}{\partial x_2}\, e_2 = -\frac{\sin\xi^2}{\xi^1}\, e_1 + \frac{\cos\xi^2}{\xi^1}\, e_2 = \frac{1}{(\xi^1)^2}\, g_2.$$
We can now easily verify that $g^i \cdot g_j = \delta^i_j$. An alternative (and often easier) approach is to find the contravariant base vectors by finding the inverse transpose of the matrix $M$ that defines the covariant base vectors and using equation (1.13).

Components of vectors in covariant and contravariant bases

We can find the components of a vector $a$ in the covariant basis by taking the dot product with the appropriate contravariant base vectors:
$$a = a^k g_k, \quad\text{where}\quad a^i = a \cdot g^i \;\left(= a^k\, g_k \cdot g^i = a^k \delta^i_k = a^i\right). \quad (1.14)$$
Similarly, components of the vector $a$ in the contravariant basis are given by taking the dot product with the appropriate covariant base vectors:
$$a = a_k g^k, \quad\text{where}\quad a_i = a \cdot g_i \;\left(= a_k\, g^k \cdot g_i = a_k \delta^k_i = a_i\right). \quad (1.15)$$

[5] That the inverse matrix is given by
$$M^{-1} = \begin{pmatrix} \partial\xi^1/\partial x_1 & \partial\xi^2/\partial x_1 & \partial\xi^3/\partial x_1 \\ \partial\xi^1/\partial x_2 & \partial\xi^2/\partial x_2 & \partial\xi^3/\partial x_2 \\ \partial\xi^1/\partial x_3 & \partial\xi^2/\partial x_3 & \partial\xi^3/\partial x_3 \end{pmatrix}$$
can be confirmed by checking that $M M^{-1} = M^{-1} M = I$, the identity matrix. Alternatively, the relationship follows directly from equation (1.11) written in matrix form.
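The duality relation $g^i \cdot g_j = \delta^i_j$ from Examples 1.1 and 1.2 can also be checked numerically. A minimal Python sketch at an arbitrary sample point:

```python
import math

xi1, xi2 = 1.5, 0.3   # sample point (arbitrary)

# covariant base vectors (Example 1.1)
g_cov = [[math.cos(xi2), math.sin(xi2)],
         [-xi1 * math.sin(xi2), xi1 * math.cos(xi2)]]
# contravariant base vectors (Example 1.2): g^1 = g_1, g^2 = g_2/(xi1)^2
g_con = [[math.cos(xi2), math.sin(xi2)],
         [-math.sin(xi2) / xi1, math.cos(xi2) / xi1]]

# delta[i][j] = g^i . g_j should be the Kronecker delta
delta = [[sum(g_con[i][k] * g_cov[j][k] for k in range(2))
          for j in range(2)] for i in range(2)]
```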

In fact, we can obtain the components of a general vector in either the covariant or contravariant basis directly from the Cartesian components. If $a = a_K e^K = a^K e_K$, then the components in the covariant basis associated with the curvilinear coordinates $\xi^i$ are
$$a^i = a \cdot g^i = a_K\, e^K \cdot g^i = a_K \frac{\partial \xi^i}{\partial x_J}\, e^K \cdot e_J = a_K \frac{\partial \xi^i}{\partial x_J}\, \delta^K_J = a_K \frac{\partial \xi^i}{\partial x_K},$$
a contravariant transform. Similarly, the components of the vector in the contravariant basis may be obtained by a covariant transform from the Cartesian components, and so
$$a_i = \frac{\partial x_K}{\partial \xi^i}\, a_K \quad (1.16a)$$
and
$$a^i = \frac{\partial \xi^i}{\partial x_K}\, a_K. \quad (1.16b)$$

Invariance of vectors (significance of index position)

Having established the need for two different types of transformations in curvilinear coordinate systems, we are now in a position to consider the significance of the raised and lowered indices in our summation convention. We shall insist that for an index to be lowered the object must transform covariantly under a change in coordinates, and for an index to be raised the object must transform contravariantly under a change in coordinates.[6] An important exception to this rule are the coordinates themselves: $\xi^i$ represents the three scalar coordinates, e.g. in spherical polar coordinates $\xi^1 = r$, $\xi^2 = \theta$ and $\xi^3 = \phi$; the $\xi^i$ are not the components of a vector and do not obey contravariant transformation rules. Equation (1.16a) demonstrates that the components of a vector in the contravariant basis are indeed covariant, justifying the lowered index, and equation (1.16b) provides similar justification for the contravariance of components in the covariant basis. We shall now demonstrate that these transformation properties also follow directly from the requirement that a physical vector should be independent of the coordinate system. Consider a vector $a$, which can be written in the covariant or contravariant basis
$$a = a^i g_i = a_i g^i. \quad (1.17)$$
We now consider a change in coordinates from $\xi^i$ to another general coordinate system $\chi^{\bar{i}}$.
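Equations (1.14)-(1.16) can be exercised numerically for plane polars: take a vector with known Cartesian components, project it onto the contravariant and covariant base vectors, and confirm that both sets of components reconstruct the same vector. A minimal Python sketch (the sample point and components are illustrative):

```python
import math

xi1, xi2 = 1.2, 0.5
g_cov = [[math.cos(xi2), math.sin(xi2)],
         [-xi1 * math.sin(xi2), xi1 * math.cos(xi2)]]
g_con = [[math.cos(xi2), math.sin(xi2)],
         [-math.sin(xi2) / xi1, math.cos(xi2) / xi1]]

a = [3.0, -1.0]   # Cartesian components of a (illustrative)

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
a_con = [dot(a, g_con[i]) for i in range(2)]   # a^i = a . g^i
a_cov = [dot(a, g_cov[i]) for i in range(2)]   # a_i = a . g_i

# reconstruct: a = a^i g_i = a_i g^i
a_from_con = [sum(a_con[i] * g_cov[i][k] for i in range(2)) for k in range(2)]
a_from_cov = [sum(a_cov[i] * g_con[i][k] for i in range(2)) for k in range(2)]
```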
It will be of vital importance later on to know which index corresponds to which coordinate system, so we have chosen to add an overbar to the index to distinguish components associated with the two coordinate systems, $\xi^i$ and $\chi^{\bar{i}}$. The covariant base vectors associated with $\chi^{\bar{i}}$ are then
$$g_{\bar{i}} \equiv \frac{\partial r}{\partial \chi^{\bar{i}}} = \frac{\partial x_J}{\partial \chi^{\bar{i}}}\, e_J = \frac{\partial \xi^k}{\partial \chi^{\bar{i}}} \frac{\partial x_J}{\partial \xi^k}\, e_J = \frac{\partial \xi^k}{\partial \chi^{\bar{i}}}\, g_k; \quad (1.18)$$
the transformation between $g_{\bar{i}}$ and $g_k$ is of the same (covariant) type as that between $g_i$ and $e_K$ in equation (1.6). The transformation is covariant because the new coordinate is the independent variable in the partial derivative (it appears in the denominator). In our new basis, the vector $a = a^{\bar{i}} g_{\bar{i}}$ and, because $a$ must remain invariant,
$$a = a^{\bar{i}} g_{\bar{i}} = a^i g_i.$$
Using the transformation of the base vectors (1.18) to replace $g_{\bar{i}}$ gives
$$a^{\bar{i}} \frac{\partial \xi^k}{\partial \chi^{\bar{i}}}\, g_k = a^i g_i = a^k g_k \quad\Rightarrow\quad a^k = \frac{\partial \xi^k}{\partial \chi^{\bar{i}}}\, a^{\bar{i}}. \quad (1.19)$$
Hence, the components of the vector must transform contravariantly, because multiplying both sides of equation (1.19) by the inverse transpose transformation $\partial \chi^{\bar{j}}/\partial \xi^k$ gives
$$a^{\bar{i}} = \frac{\partial \chi^{\bar{i}}}{\partial \xi^k}\, a^k. \quad (1.20)$$
This transformation is contravariant because the new coordinate is the dependent variable in the partial derivative (it appears in the numerator). A similar approach can be used to show that the components in the contravariant basis must transform covariantly in order to ensure that the vector remains invariant. Thus, the use of our summation convention ensures that the summed quantities remain invariant under coordinate transformations, which will be essential when deriving coordinate-independent physical laws.

Interpretation

The fact that base vectors and vector components must transform differently for the vector to remain invariant is actually quite obvious. Consider a one-dimensional Euclidean space in which $a = a^1 g_1$. If the base vector is rescaled[7] by a factor $\lambda$, so that $\tilde{g}_1 = \lambda g_1$, then to compensate the component must be rescaled by the factor $1/\lambda$: $\tilde{a}^1 = \frac{1}{\lambda} a^1$. Note that for a $1 \times 1$ transformation matrix with entry $\lambda$, the inverse transpose is $1/\lambda$.

Orthonormal coordinates

If the coordinates are orthonormal then, by construction, there is no distinction between the covariant and contravariant basis, $g_i = g^i$. Using equations (1.6) and (1.12), we see that
$$g_i = \frac{\partial x_K}{\partial \xi^i}\, e_K = g^i = \frac{\partial \xi^i}{\partial x_K}\, e_K,$$
and so
$$\frac{\partial x_K}{\partial \xi^i} = \frac{\partial \xi^i}{\partial x_K}. \quad (1.21)$$
Hence, the covariant and contravariant transformations are identical in orthonormal coordinate systems, which means that there is no need to distinguish between raised and lowered indices.

[6] The logic for the choice of index location is the position of the generalised coordinate in the partial derivative defining the transformation: $g_i = \frac{\partial x_K}{\partial \xi^i}\, e_K$ (lowered index), $g^i = \frac{\partial \xi^i}{\partial x_K}\, e^K$ (raised index).
This simplification is adopted in many textbooks, where the convention is to use only lowered indices. When working with orthonormal coordinates we will also adopt this convention for simplicity, but we must always make sure that we know when the coordinate system is orthonormal. It is for this reason that we have adopted the convention that upper case indices are used for orthonormal coordinates.

[7] In one dimension all we can do is rescale the length, although the scaling can vary with position.

If the coordinate system is not known to be orthonormal, we will use lower case indices and must distinguish between the covariant and contravariant transformations. Condition (1.21) implies that
$$\frac{\partial x^K}{\partial \xi^i}\, \frac{\partial x^K}{\partial \xi^j} = \delta_{ij}. \qquad (1.22)$$
In the matrix representation, equation (1.22) is
$$M M^T = I \quad\Rightarrow\quad M^T M = I,$$
where $I$ is the identity matrix. In other words, the components of the transformation form an orthogonal matrix. It follows that (all) orthonormal coordinates can only be generated by an orthogonal transformation from the reference Cartesians. This should not be a big surprise: any other transformation will change the angles between the base vectors or their relative lengths, which destroys orthonormality. The argument is entirely reversible: if either the covariant or contravariant transform is orthogonal then the two transforms are identical and the new coordinate system is orthonormal.

An aside

Further intuition for the reason why the covariant and contravariant transformations are identical when the coordinate transform is orthogonal can be obtained as follows. Imagine that we have a general linear transformation, represented as a matrix $M$, that acts on vectors such that components in the fixed Cartesian coordinate system, $p = p^K e_K$, transform as follows:
$$\tilde{p}^K = M^K_{\ J}\, p^J.$$
Note that the index $K$ does not have an overbar because $p^K$ is a component in the fixed Cartesian coordinate system, $e_K$. The transformation can, of course, also be applied to the base vectors of the fixed Cartesian coordinate system $e_I$,
$$[\tilde{e}_I]^K = M^K_{\ J}\, [e_I]^J,$$
where $[\,\cdot\,]^K$ indicates the $K$-th component of the base vector. Now, $[e_I]^J = \delta^J_I$ and it follows that $[\tilde{e}_I]^K = M^K_{\ I}$, which allows us to define the operation of the matrix components on the base vectors directly because
$$\tilde{e}_I = [\tilde{e}_I]^K\, e_K = M^K_{\ I}\, e_K. \qquad (1.23a)$$
Thus the operation of the transformation on the components is the transpose of its operation on the base vectors⁸.
We could write the new base vectors as $e_{\tilde{I}} = M^K_{\ \tilde{I}}\, e_K$ to be consistent with our previous notation, but this would probably lead to more confusion in the current exposition. Now consider a vector $a$ that must remain invariant under our transformation. Let the vector $\tilde{a}$ be the vector with the same numerical values of its components as $a$ but with transformed base vectors, i.e. $\tilde{a} = a^K \tilde{e}_K$. Thus, the vector $\tilde{a}$ will be a transformed version of $a$. In order to ensure that the vector remains unchanged under transformation we must apply the appropriate inverse

⁸ This statement also applies to general bases.

transformation to $\tilde{a}$ relative to the new base vectors, $\tilde{e}_I$. In other words, the transformation of the components must be
$$\tilde{a}^K = [M^{-1}]^K_{\ J}\, \tilde{a}^J = [M^{-1}]^K_{\ J}\, a^J, \qquad (1.23b)$$
where we have used the fact that $\tilde{a}^J = a^J$ by definition. Using the two transformation equations (1.23a,b) we see that
$$\tilde{a}^K \tilde{e}_K = [M^{-1}]^K_{\ J}\, a^J\, M^L_{\ K}\, e_L = [M^{-1}]^K_{\ J}\, M^L_{\ K}\, a^J e_L = \delta^L_J\, a^J e_L = a^J e_J,$$
as required. Thus, we have the two results: (i) a general property of linear transformations is that the matrix representation of the transformation of vector components is the transpose of the matrix representation of the transformation of base vectors; (ii) in order to remain invariant, the components of the vector must actually undergo the inverse of the coordinate transformation. Thus, the transformations of the base vectors and the components coincide when the inverse transform is equal to its transpose, i.e. when the transform is orthogonal.

If that all seems a bit abstract, then hopefully the following specific example will help make the ideas a little more concrete.

Example 1.3. Equivalence of covariant and contravariant transformations under orthogonal transformations. Consider a two-dimensional Cartesian coordinate system with base vectors $e_I$. A new coordinate system with base vectors $\bar{e}_I$ is obtained by rotation through an angle $\theta$ in the anticlockwise direction about the origin. Derive the transformations for the base vectors and components of a general vector and show that they are the same.

Solution 1.3. The original and rotated bases are shown in Figure 1.2(a), from which we determine that the new base vectors are given by
$$\bar{e}_1 = \cos\theta\, e_1 + \sin\theta\, e_2 \quad\text{and}\quad \bar{e}_2 = -\sin\theta\, e_1 + \cos\theta\, e_2.$$

[Figure 1.2: (a) The base vectors $\bar{e}_I$ are the Cartesian base vectors $e_I$ rotated through an angle $\theta$ about the origin. (b) If the coordinates of the position vector $p$ are unchanged it is also rotated by $\theta$ to $\tilde{p}$.]

Consider a position vector $p = p^I e_I$ in the original basis. If we leave the coordinates unchanged then the new vector $\tilde{p} = p^I \bar{e}_I$ is the original vector rotated by $\theta$, see Figure 1.2(b). We must therefore rotate the position vector $p$ through an angle $-\theta$ relative to the fixed basis $e_I$, but this is actually equivalent to a positive rotation of the base vectors. Hence the transforms for the components of the vector and the base vectors are the same:
$$\bar{p}^1 = \cos\theta\, p^1 + \sin\theta\, p^2 \quad\text{and}\quad \bar{p}^2 = -\sin\theta\, p^1 + \cos\theta\, p^2.$$

1.2 Tensors

Tensors are geometric objects that have magnitude and zero, one or many associated directions, but are linear in character. A more mathematically precise definition is to say that a tensor is a multilinear map, or alternatively an element of a tensor product of vector spaces, which is somewhat tautological and really not helpful at this point. The order (or degree, or rank) of a tensor is the number of associated directions, so a scalar is a tensor of order zero and a vector is a tensor of order one. Many quantities in continuum mechanics, such as strain, stress, diffusivity and conductivity, are naturally expressed as tensors of order two.

We have already seen an example of a tensor in our discussion of vectors: linear transformations from one set of vectors to another, e.g. the transformation from Cartesian to covariant base vectors, are second-order tensors. If the vectors represent physical objects, then they must not depend on the coordinate representation chosen. Hence, the linear transformation must also be independent of coordinates because the same vectors must always transform in the same way. We can write our linear transformation in a coordinate-independent manner as
$$a = M(b), \qquad (1.24)$$
and the transformation $M$ is a tensor of order two. In order to describe $M$ precisely we must pick a specific coordinate system for each vector in equation (1.24).
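A quick numerical sketch of Example 1.3 (the angle and vector are arbitrary illustrative choices): the rotation matrix is orthogonal, its inverse transpose is itself, and base vectors and components transform identically while the vector remains invariant.

```python
import numpy as np

# Numerical check of Example 1.3: under an anticlockwise rotation by theta
# the base vectors and the vector components transform with the same
# (orthogonal) matrix, and the vector itself is invariant.
theta = np.pi / 6
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # rows give e-bar_I in terms of e_I

assert np.allclose(R @ R.T, np.eye(2))            # the transform is orthogonal
assert np.allclose(np.linalg.inv(R).T, R)         # inverse transpose equals R,
                                                  # so covariant = contravariant

e = np.eye(2)                # Cartesian base vectors e_1, e_2 (rows)
e_bar = R @ e                # e-bar_1 = cos(t) e_1 + sin(t) e_2, etc.

p = np.array([3.0, 1.0])     # components p^I in the original basis
p_bar = R @ p                # components transform with the same matrix

assert np.allclose(p @ e, p_bar @ e_bar)          # p^I e_I = p-bar^I e-bar_I
```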
In the global Cartesian basis, equation (1.24) becomes
$$a_I e_I = M(b_J e_J) = b_J\, M(e_J), \qquad (1.25)$$
because $M$ is a linear transformation. We now take the dot product with $e_K$ to obtain
$$a_I\, e_K \cdot e_I = b_J\, e_K \cdot M(e_J) \quad\Rightarrow\quad a_K = b_J\, e_K \cdot M(e_J),$$
where the dot product is written on the left to indicate that we are taking the dot product after the linear transformation has operated on the base vector $e_J$. Hence, we can write the operation of the transformation on the components in the form
$$a_I = M_{IJ}\, b_J, \qquad (1.26)$$
where $M_{IJ} = e_I \cdot M(e_J)$. Equation (1.26) can be written in matrix form to aid calculation:
$$\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} M_{11} & M_{12} & M_{13} \\ M_{21} & M_{22} & M_{23} \\ M_{31} & M_{32} & M_{33} \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}.$$
The quantity $M_{IJ}$ represents the component of the transformed vector in the $I$-th Cartesian direction if the original vector is of unit length in the $J$-th direction. Hence, the quantity $M_{IJ}$ is meaningless without knowing the coordinate system associated with both $I$ and $J$.
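The matrix form of (1.26) is exactly matrix-vector multiplication; a minimal sketch with arbitrary illustrative components:

```python
import numpy as np

# Equation (1.26) in matrix form: once a basis is chosen, the action of a
# second-order tensor on a vector is matrix-vector multiplication.
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])   # components M_IJ = e_I . M(e_J)
b = np.array([1.0, -1.0, 2.0])

a = M @ b                          # a_I = M_IJ b_J
assert np.allclose(a, [-1.0, 5.0, 6.0])
```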

In fact, there is no need to choose the same coordinate system for $I$ and $J$. If we write the vector $a$ in the covariant basis, equation (1.25) becomes
$$a^i g_i = b_J\, M(e_J).$$
Taking the dot product with the appropriate contravariant base vector gives
$$a^k = b_J\, g^k \cdot M(e_J) = b_J\, \frac{\partial \xi^k}{\partial x^K}\, e_K \cdot M(e_J),$$
which means that
$$a^k = M^k_{\ J}\, b_J = \frac{\partial \xi^k}{\partial x^K}\, M_{KJ}\, b_J \quad\Rightarrow\quad M^k_{\ J} = \frac{\partial \xi^k}{\partial x^K}\, M_{KJ}.$$
In other words, the components of each (column) vector corresponding to a fixed second index in a coordinate representation of $M$ must obey a contravariant transformation if the associated basis undergoes a covariant transform, i.e. the behaviour is exactly the same as for the components of a vector.

If we now also represent the vector $b$ in the covariant basis, equation (1.25) becomes
$$a^i g_i = b^j\, M(g_j).$$
Taking the dot product with the appropriate contravariant base vector gives
$$a^k = b^j\, g^k \cdot M(g_j) = b^j\, \frac{\partial \xi^k}{\partial x^K}\, e_K \cdot M\!\left(\frac{\partial x^J}{\partial \xi^j}\, e_J\right) = b^j\, \frac{\partial \xi^k}{\partial x^K}\, \frac{\partial x^J}{\partial \xi^j}\, e_K \cdot M(e_J),$$
on using the linearity of the transformation. Hence,
$$a^k = M^k_{\ j}\, b^j = \frac{\partial \xi^k}{\partial x^K}\, \frac{\partial x^J}{\partial \xi^j}\, M_{KJ}\, b^j \quad\Rightarrow\quad M^k_{\ j} = \frac{\partial \xi^k}{\partial x^K}\, \frac{\partial x^J}{\partial \xi^j}\, M_{KJ},$$
and the components of each (row) vector associated with a fixed first index in a coordinate representation of $M$ undergo a covariant transformation when the associated basis undergoes a covariant transform, i.e. the opposite behaviour to the components of a vector.

The difference in behaviour between the two indices of the components of the linear transformation arises because one index corresponds to the basis of the input vector, whereas the other corresponds to the basis of the output vector. There is a sum over the second (input) index and the components of the vector $b$, and in order for this sum to remain invariant the transform associated with the second index must be the opposite of that for the components of the vector $b$, in other words the same as the transformation of the base vectors of that vector.
The obvious relationships between components can easily be deduced when we represent our vectors in the contravariant basis:
$$a^i = M^{ij}\, b_j, \quad a_i = M_i^{\ j}\, b_j, \quad a_i = M_{ij}\, b^j. \qquad (1.27)$$
Many books term $M^{ij}$ a contravariant second-order tensor, $M_{ij}$ a covariant second-order tensor and $M^i_{\ j}$ a mixed second-order tensor, but they are simply representations of the same coordinate-independent object in different bases. Another more modern notation is to say that $M^{ij}$ is a type (2,0) tensor, $M_{ij}$ is type (0,2) and $M^i_{\ j}$ is a type (1,1) tensor, which allows the distinction between mixed tensors of orders greater than two.
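The different representations in (1.27) are related by raising and lowering indices with the metric. A numerical sketch (the Jacobian and Cartesian components are arbitrary illustrative choices) checking that the mixed components equal the covariant components with one index raised:

```python
import numpy as np

# Illustration of (1.27): covariant, contravariant and mixed components are
# representations of one linear map M.  With covariant base vectors g_i
# (rows of G) and contravariant base vectors g^i (rows of Ginv), the mixed
# components satisfy M^i_j = g^{ik} M_{kj}, where g^{ik} = g^i . g^k.
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 2.0, 0.0],
              [0.3, 0.0, 1.0]])          # hypothetical Jacobian dx^K/dxi^i
G = J.T                                  # rows: covariant base vectors g_i
Ginv = np.linalg.inv(J)                  # rows: contravariant base vectors g^i

M_cart = np.diag([1.0, 2.0, 3.0])        # the map M in Cartesian components

M_cov = G @ M_cart @ G.T                 # M_ij  = g_i . M(g_j)
M_mix = Ginv @ M_cart @ G.T              # M^i_j = g^i . M(g_j)
g_con = Ginv @ Ginv.T                    # contravariant metric g^{ik}

assert np.allclose(M_mix, g_con @ M_cov)  # index raising with the metric
```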

1.2.1 Invariance of second-order tensors

Let us now consider a general change of coordinates from $\xi^i$ to $\chi^i$. Given that
$$a^i = M^i_{\ j}\, b^j, \qquad (1.28a)$$
we wish to find an expression for $\bar{M}^i_{\ j}$ such that⁹
$$\bar{a}^i = \bar{M}^i_{\ j}\, \bar{b}^j. \qquad (1.28b)$$
Using the transformation rules for the components of vectors (1.19), it follows that (1.28a) becomes
$$\frac{\partial \xi^i}{\partial \chi^n}\, \bar{a}^n = M^i_{\ j}\, \frac{\partial \xi^j}{\partial \chi^n}\, \bar{b}^n.$$
We now multiply both sides by $\partial \chi^m/\partial \xi^i$ to obtain
$$\frac{\partial \chi^m}{\partial \xi^i}\, \frac{\partial \xi^i}{\partial \chi^n}\, \bar{a}^n = \delta^m_n\, \bar{a}^n = \bar{a}^m = \frac{\partial \chi^m}{\partial \xi^i}\, M^i_{\ j}\, \frac{\partial \xi^j}{\partial \chi^n}\, \bar{b}^n.$$
Comparing this expression to equation (1.28b), it follows that
$$\bar{M}^i_{\ j} = \frac{\partial \chi^i}{\partial \xi^m}\, M^m_{\ n}\, \frac{\partial \xi^n}{\partial \chi^j},$$
and thus we see that covariant components must transform covariantly and contravariant components must transform contravariantly in order for the invariance properties to hold. Similarly, it can be shown that
$$\bar{M}^{ij} = \frac{\partial \chi^i}{\partial \xi^n}\, \frac{\partial \chi^j}{\partial \xi^m}\, M^{nm}, \quad\text{and}\quad \bar{M}_{ij} = \frac{\partial \xi^n}{\partial \chi^i}\, \frac{\partial \xi^m}{\partial \chi^j}\, M_{nm}. \qquad (1.29)$$
An alternative definition of tensors is to require that they are sets of indexed quantities (multidimensional arrays) that obey these transformation laws under a change of coordinates.

The transformations can be expressed in matrix form, but we must distinguish between the covariant and contravariant cases. We shall write $M_{..}$ to indicate a matrix whose components all transform covariantly and $M^{..}$ for the contravariant case. We define the transformation matrix $F$ to have the components
$$F = \begin{pmatrix} \partial\chi^1/\partial\xi^1 & \partial\chi^1/\partial\xi^2 & \partial\chi^1/\partial\xi^3 \\ \partial\chi^2/\partial\xi^1 & \partial\chi^2/\partial\xi^2 & \partial\chi^2/\partial\xi^3 \\ \partial\chi^3/\partial\xi^1 & \partial\chi^3/\partial\xi^2 & \partial\chi^3/\partial\xi^3 \end{pmatrix}, \quad\text{or}\quad F^i_{\ j} = \frac{\partial \chi^i}{\partial \xi^j};$$
and then, from the chain rule and independence of the coordinates,
$$F^{-1} = \begin{pmatrix} \partial\xi^1/\partial\chi^1 & \partial\xi^1/\partial\chi^2 & \partial\xi^1/\partial\chi^3 \\ \partial\xi^2/\partial\chi^1 & \partial\xi^2/\partial\chi^2 & \partial\xi^2/\partial\chi^3 \\ \partial\xi^3/\partial\chi^1 & \partial\xi^3/\partial\chi^2 & \partial\xi^3/\partial\chi^3 \end{pmatrix}, \quad\text{or}\quad [F^{-1}]^i_{\ j} = \frac{\partial \xi^i}{\partial \chi^j}.$$
If $\bar{M}^{..}$ and $\bar{M}_{..}$ represent the matrices of transformed components then the transformation laws (1.29) become
$$\bar{M}^{..} = F\, M^{..}\, F^T, \quad\text{and}\quad \bar{M}_{..} = F^{-T}\, M_{..}\, F^{-1}. \qquad (1.30)$$

⁹ This is a place where the use of overbars makes the notation look cluttered, but clarifies precisely which coordinate system is associated with each index.
This notation also allows the representation of components in two different coordinate systems, so-called two-point tensors, e.g. $M^{\bar{i}}_{\ j}$, which will be useful.
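The transformation laws of section 1.2.1 guarantee that the relation $a^i = M^i_{\ j} b^j$ holds in every coordinate system. A numerical sketch (the transformation matrix $F$ here is a hypothetical, arbitrarily chosen invertible matrix):

```python
import numpy as np

# Check of the invariance argument in section 1.2.1 for a mixed tensor:
# with F[i, j] = d chi^i / d xi^j, the mixed components transform as
# M-bar = F M F^{-1}, while vector components transform contravariantly
# as a-bar = F a.  The relation a^i = M^i_j b^j then holds in the new
# coordinates as well.
rng = np.random.default_rng(0)
F = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # arbitrary invertible transform
M = rng.standard_normal((3, 3))                     # mixed components M^i_j
b = rng.standard_normal(3)                          # contravariant components b^j

a = M @ b                                  # a^i = M^i_j b^j
a_bar = F @ a                              # components transform contravariantly
b_bar = F @ b
M_bar = F @ M @ np.linalg.inv(F)           # mixed transformation law

assert np.allclose(a_bar, M_bar @ b_bar)   # the relation holds in new coordinates
```

The cancellation $F M F^{-1} F b = F M b = F a$ is the matrix-form of the index derivation above.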

1.2.2 Cartesian tensors

If we restrict attention to orthonormal coordinate systems, then the transformation between coordinate systems must be orthogonal¹⁰ and we do not need to distinguish between covariant and contravariant behaviour. Consider the transformation from our Cartesian basis $e_I$ to another orthonormal basis $\bar{e}_I$. The transformation rule for the components of a tensor of order two becomes
$$\bar{M}_{IJ} = \frac{\partial x^N}{\partial \bar{x}^I}\, \frac{\partial x^M}{\partial \bar{x}^J}\, M_{NM}.$$
The transformation between components of a vector in the two different bases is given by
$$\bar{a}_I = \frac{\partial \bar{x}^I}{\partial x^K}\, a_K = \frac{\partial x^K}{\partial \bar{x}^I}\, a_K,$$
which can be written in the form
$$\bar{a}_I = Q_{IK}\, a_K, \quad\text{where}\quad Q_{IK} = \frac{\partial \bar{x}^I}{\partial x^K} = \frac{\partial x^K}{\partial \bar{x}^I},$$
and the components $Q_{IK}$ form an orthogonal matrix. Hence the transformation property of a (Cartesian) tensor of order two can be written as
$$\bar{M}_{IJ} = Q_{IN}\, M_{NM}\, Q_{JM}, \quad\text{or in matrix form}\quad \bar{M} = Q M Q^T. \qquad (1.31)$$
In many textbooks, equation (1.31) is defined to be the transformation rule satisfied by a (Cartesian) tensor of order two.

1.2.3 Tensors vs matrices

There is a natural relationship between tensors and matrices because, as we have seen, we can write the components of a second-order tensor in a particular coordinate system as a matrix. It is often helpful to think of a tensor as a matrix when working with it, but the two concepts are distinct. A summary of all the above is that a tensor is a geometric object that does not depend on any particular coordinate system and expresses a linear relationship between other geometric objects.

1.3 Products of vectors: scalar, vector and tensor

1.3.1 Scalar product

We have already used the scalar or dot product of two vectors, and the discussion here is included only for completeness. The scalar product is the product of two vectors that returns a unique scalar: the product of the lengths of the vectors and the cosine of the angle between them. Thus far, we have only used the dot product to define orthonormal sets of vectors.
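Because (1.31) is a similarity transformation by an orthogonal matrix, scalar invariants of the tensor, such as the trace and determinant, are unchanged. A brief sketch (the rotation angle and components are arbitrary illustrative choices):

```python
import numpy as np

# Equation (1.31): a Cartesian tensor of order two transforms as
# M-bar = Q M Q^T under an orthogonal change of basis.  The trace and
# determinant are scalar invariants, so they must be unchanged.
theta = 0.7
Q = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation about the 3-axis

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.0]])

M_bar = Q @ M @ Q.T
assert np.allclose(np.trace(M_bar), np.trace(M))
assert np.isclose(np.linalg.det(M_bar), np.linalg.det(M))
```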
If we represent two vectors $a$ and $b$ in the co- and contravariant bases, then
$$a \cdot b = (a^i g_i) \cdot (b_j g^j),$$

¹⁰ Although the required orthogonal transformation may vary with position, as is the case in plane polar coordinates.

and so
$$a \cdot b = a^i b_j\, g_i \cdot g^j = a^i b_j\, \delta_i^j = a^i b_i.$$
An alternative decomposition demonstrates that
$$a \cdot b = a_i b^i,$$
and we note that the scalar product is invariant under coordinate transformation, as expected. In orthonormal coordinate systems, there is no distinction between co- and contravariant bases and so $a \cdot b = a_K b_K$.

1.3.2 Vector product

The vector or cross product is a product of two vectors that returns a unique vector that is orthogonal to both vectors. In orthonormal coordinate systems, the vector product is defined by
$$a \times b = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} \times \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = \begin{pmatrix} a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 \\ a_1 b_2 - a_2 b_1 \end{pmatrix}.$$
In order to represent the vector product with index notation it is convenient to define a quantity known as the alternating, or Levi-Civita, symbol $e_{IJK}$. In orthonormal coordinate systems the components of $e_{IJK}$ are defined by
$$e_{IJK} = e^{IJK} = \begin{cases} 0 & \text{when any two indices are equal;} \\ +1 & \text{when } I, J, K \text{ is an even permutation of } 1,2,3; \\ -1 & \text{when } I, J, K \text{ is an odd permutation of } 1,2,3; \end{cases} \qquad (1.32)$$
e.g. $e_{112} = e_{122} = 0$, $e_{123} = e_{312} = e_{231} = 1$, $e_{213} = e_{132} = e_{321} = -1$.

Strictly speaking, $e_{IJK}$ thus defined is not a tensor because if the handedness of the coordinate system changes then the sign of the entries in $e_{IJK}$ should change in order for it to respect the appropriate invariance properties; such objects are sometimes called pseudo-tensors. We could ensure that $e_{IJK}$ is a tensor by restricting our definition to right-handed (or left-handed) orthonormal systems, which will be the approach taken in later chapters.

The vector product of two vectors $a$ and $b$ in orthonormal coordinates is
$$[a \times b]_I = e_{IJK}\, a_J b_K, \qquad (1.33)$$
which can be confirmed by writing out all the components. In addition, the relationship between the Cartesian base vectors $e_I$ can be expressed as a vector product using the alternating tensor
$$e_I \times e_J = e_{IJK}\, e_K.$$
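The index formula (1.33) can be checked directly by building the alternating symbol as a $3\times3\times3$ array and contracting; a minimal sketch with arbitrary illustrative vectors:

```python
import numpy as np

# Direct construction of the alternating symbol (1.32) and a check of the
# index formula (1.33), [a x b]_I = e_IJK a_J b_K, against numpy's cross
# product.
e_sym = np.zeros((3, 3, 3))
e_sym[0, 1, 2] = e_sym[1, 2, 0] = e_sym[2, 0, 1] = 1.0    # even permutations
e_sym[0, 2, 1] = e_sym[2, 1, 0] = e_sym[1, 0, 2] = -1.0   # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

cross = np.einsum('ijk,j,k->i', e_sym, a, b)   # e_IJK a_J b_K
assert np.allclose(cross, np.cross(a, b))
```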
(1.34)

Let us now consider the case of general coordinates: the cross product between covariant base vectors is given by
$$g_i \times g_j = \frac{\partial x^I}{\partial \xi^i}\, e_I \times \frac{\partial x^J}{\partial \xi^j}\, e_J = \frac{\partial x^I}{\partial \xi^i}\, \frac{\partial x^J}{\partial \xi^j}\, e_I \times e_J = \frac{\partial x^I}{\partial \xi^i}\, \frac{\partial x^J}{\partial \xi^j}\, e_{IJK}\, e_K.$$

The expression on the right-hand side corresponds to the first two indices of the alternating tensor undergoing a covariant transformation, so that $g_i \times g_j = e_{ijK}\, e_K$, where $e_{ijK} = (\partial x^I/\partial \xi^i)(\partial x^J/\partial \xi^j)\, e_{IJK}$. If we now transform the third index covariantly we must transform the base vector contravariantly, so that
$$g_i \times g_j = \epsilon_{ijk}\, \frac{\partial \xi^k}{\partial x^K}\, e_K = \epsilon_{ijk}\, g^k, \qquad (1.35)$$
where
$$\epsilon_{ijk} \equiv \frac{\partial x^I}{\partial \xi^i}\, \frac{\partial x^J}{\partial \xi^j}\, \frac{\partial x^K}{\partial \xi^k}\, e_{IJK}.$$
A similar argument shows that
$$g^i \times g^j = \epsilon^{ijk}\, g_k, \quad\text{where}\quad \epsilon^{ijk} = \frac{\partial \xi^i}{\partial x^I}\, \frac{\partial \xi^j}{\partial x^J}\, \frac{\partial \xi^k}{\partial x^K}\, e_{IJK}.$$
If we decompose the vectors $a$ and $b$ into the contravariant basis we have
$$a \times b = (a_i g^i) \times (b_j g^j) = a_i b_j\, g^i \times g^j = a_i b_j\, \epsilon^{ijk}\, g_k.$$
Thus, if we decompose the vector product into the covariant basis we have the following expression for its components:
$$[a \times b]^k = \epsilon^{ijk}\, a_i b_j, \quad\text{or}\quad [a \times b]_i = \epsilon_{ijk}\, a^j b^k.$$

1.3.3 Tensor product

The tensor product is a product of two vectors that returns a second-order tensor. It can be motivated by the following discussion. Recall that equation (1.24) can be written in the form
$$a^i = M^{ij}\, b_j, \quad\text{where}\quad M^{ij} = g^i \cdot M(g^j).$$
The components $M^{ij}$ correspond to the representation of the tensor with respect to a basis, but which basis? We shall define the basis to be that formed from the tensor product of pairs of base vectors: $g_i \otimes g_j$, where the symbol $\otimes$ is used to denote the tensor product. Hence, we can represent a tensor in the different forms
$$M = M^{ij}\, g_i \otimes g_j = M^{IJ}\, e_I \otimes e_J = M^i_{\ j}\, g_i \otimes g^j,$$
which is analogous to representing vectors in the different forms
$$a = a_I e_I = a^i g_i = a_i g^i.$$
Returning to equation (1.24), we have
$$a = M(b) \quad\Rightarrow\quad a = (M^{ij}\, g_i \otimes g_j)(b),$$
and because the $M^{ij}$ are just coefficients it follows that the $g_i \otimes g_j$ are themselves tensors of second order¹¹. Decomposing $a$ and $b$ into the contravariant and covariant bases respectively gives
$$a_i g^i = (M^{ij}\, g_i \otimes g_j)(b^n g_n) = M^{ij} b^n\, (g_i \otimes g_j)(g_n), \qquad (1.36)$$

¹¹ You should think carefully to convince yourself that this is true.
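The covariant alternating tensor in (1.35) is the alternating symbol scaled by the Jacobian determinant, $\epsilon_{ijk} = \det(\partial x^I/\partial \xi^i)\, e_{ijk}$. A numerical sketch for cylindrical coordinates $(r, \theta, z)$, with arbitrary illustrative values of $r$ and $\theta$, checking $g_i \times g_j = \epsilon_{ijk}\, g^k$:

```python
import numpy as np

# Check of (1.35) for cylindrical coordinates (r, theta, z): the covariant
# epsilon is epsilon_ijk = det(J) e_ijk, with J the Jacobian dx^I/dxi^i,
# and g_i x g_j = epsilon_ijk g^k.
r, theta = 2.0, 0.4
J = np.array([[np.cos(theta), -r * np.sin(theta), 0.0],
              [np.sin(theta),  r * np.cos(theta), 0.0],
              [0.0,            0.0,               1.0]])
g_cov = J.T                      # rows are the covariant base vectors g_i
g_con = np.linalg.inv(J)         # rows are the contravariant base vectors g^i

e_sym = np.zeros((3, 3, 3))      # alternating symbol, equation (1.32)
e_sym[0, 1, 2] = e_sym[1, 2, 0] = e_sym[2, 0, 1] = 1.0
e_sym[0, 2, 1] = e_sym[2, 1, 0] = e_sym[1, 0, 2] = -1.0
eps = np.linalg.det(J) * e_sym   # epsilon_ijk = det(J) e_ijk (here det(J) = r)

for i in range(3):
    for j in range(3):
        lhs = np.cross(g_cov[i], g_cov[j])   # g_i x g_j
        rhs = eps[i, j] @ g_con              # epsilon_ijk g^k
        assert np.allclose(lhs, rhs)
```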


Appendix: Orthogonal Curvilinear Coordinates. We define the infinitesimal spatial displacement vector dx in a given orthogonal coordinate system with Appendix: Orthogonal Curvilinear Coordinates Notes: Most of the material presented in this chapter is taken from Anupam G (Classical Electromagnetism in a Nutshell 2012 (Princeton: New Jersey)) Chap 2

More information

Tensors and Special Relativity

Tensors and Special Relativity Tensors and Special Relativity Lecture 6 1 Introduction and review of tensor algebra While you have probably used tensors of rank 1, i.e vectors, in special relativity, relativity is most efficiently expressed

More information

MATH 320, WEEK 7: Matrices, Matrix Operations

MATH 320, WEEK 7: Matrices, Matrix Operations MATH 320, WEEK 7: Matrices, Matrix Operations 1 Matrices We have introduced ourselves to the notion of the grid-like coefficient matrix as a short-hand coefficient place-keeper for performing Gaussian

More information

Math 123, Week 2: Matrix Operations, Inverses

Math 123, Week 2: Matrix Operations, Inverses Math 23, Week 2: Matrix Operations, Inverses Section : Matrices We have introduced ourselves to the grid-like coefficient matrix when performing Gaussian elimination We now formally define general matrices

More information

Lecture I: Vectors, tensors, and forms in flat spacetime

Lecture I: Vectors, tensors, and forms in flat spacetime Lecture I: Vectors, tensors, and forms in flat spacetime Christopher M. Hirata Caltech M/C 350-17, Pasadena CA 91125, USA (Dated: September 28, 2011) I. OVERVIEW The mathematical description of curved

More information

2 Tensor Notation. 2.1 Cartesian Tensors

2 Tensor Notation. 2.1 Cartesian Tensors 2 Tensor Notation It will be convenient in this monograph to use the compact notation often referred to as indicial or index notation. It allows a strong reduction in the number of terms in an equation

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

Introduction to Vector Spaces

Introduction to Vector Spaces 1 CSUC Department of Physics Mechanics: Class Notes Introduction to Vector Spaces I. INTRODUCTION Modern mathematics often constructs logical systems by merely proposing a set of elements that obey a specific

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

Contents. Motivation. 1 di 7 23/03/ :41

Contents. Motivation. 1 di 7 23/03/ :41 1 di 7 23/03/2015 09:41 From Wikipedia, the free encyclopedia In mathematics, orthogonal coordinates are defined as a set of d coordinates q = (q 1, q 2,..., q d ) in which the coordinate surfaces all

More information

Cartesian Tensors. e 2. e 1. General vector (formal definition to follow) denoted by components

Cartesian Tensors. e 2. e 1. General vector (formal definition to follow) denoted by components Cartesian Tensors Reference: Jeffreys Cartesian Tensors 1 Coordinates and Vectors z x 3 e 3 y x 2 e 2 e 1 x x 1 Coordinates x i, i 123,, Unit vectors: e i, i 123,, General vector (formal definition to

More information

(But, they are entirely separate branches of mathematics.)

(But, they are entirely separate branches of mathematics.) 2 You ve heard of statistics to deal with problems of uncertainty and differential equations to describe the rates of change of physical systems. In this section, you will learn about two more: vector

More information

Physics 342 Lecture 2. Linear Algebra I. Lecture 2. Physics 342 Quantum Mechanics I

Physics 342 Lecture 2. Linear Algebra I. Lecture 2. Physics 342 Quantum Mechanics I Physics 342 Lecture 2 Linear Algebra I Lecture 2 Physics 342 Quantum Mechanics I Wednesday, January 3th, 28 From separation of variables, we move to linear algebra Roughly speaking, this is the study of

More information

III. TRANSFORMATION RELATIONS

III. TRANSFORMATION RELATIONS III. TRANSFORMATION RELATIONS The transformation relations from cartesian coordinates to a general curvilinear system are developed here using certain concepts from differential geometry and tensor analysis,

More information

Notation, Matrices, and Matrix Mathematics

Notation, Matrices, and Matrix Mathematics Geographic Information Analysis, Second Edition. David O Sullivan and David J. Unwin. 010 John Wiley & Sons, Inc. Published 010 by John Wiley & Sons, Inc. Appendix A Notation, Matrices, and Matrix Mathematics

More information

has a lot of good notes on GR and links to other pages. General Relativity Philosophy of general relativity.

has a lot of good notes on GR and links to other pages. General Relativity Philosophy of general relativity. http://preposterousuniverse.com/grnotes/ has a lot of good notes on GR and links to other pages. General Relativity Philosophy of general relativity. As with any major theory in physics, GR has been framed

More information

Incompatibility Paradoxes

Incompatibility Paradoxes Chapter 22 Incompatibility Paradoxes 22.1 Simultaneous Values There is never any difficulty in supposing that a classical mechanical system possesses, at a particular instant of time, precise values of

More information

Contravariant and Covariant as Transforms

Contravariant and Covariant as Transforms Contravariant and Covariant as Transforms There is a lot more behind the concepts of contravariant and covariant tensors (of any rank) than the fact that their basis vectors are mutually orthogonal to

More information

An OpenMath Content Dictionary for Tensor Concepts

An OpenMath Content Dictionary for Tensor Concepts An OpenMath Content Dictionary for Tensor Concepts Joseph B. Collins Naval Research Laboratory 4555 Overlook Ave, SW Washington, DC 20375-5337 Abstract We introduce a new OpenMath content dictionary named

More information

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A =

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A = STUDENT S COMPANIONS IN BASIC MATH: THE ELEVENTH Matrix Reloaded by Block Buster Presumably you know the first part of matrix story, including its basic operations (addition and multiplication) and row

More information

carroll/notes/ has a lot of good notes on GR and links to other pages. General Relativity Philosophy of general

carroll/notes/ has a lot of good notes on GR and links to other pages. General Relativity Philosophy of general http://pancake.uchicago.edu/ carroll/notes/ has a lot of good notes on GR and links to other pages. General Relativity Philosophy of general relativity. As with any major theory in physics, GR has been

More information

Week 6: Differential geometry I

Week 6: Differential geometry I Week 6: Differential geometry I Tensor algebra Covariant and contravariant tensors Consider two n dimensional coordinate systems x and x and assume that we can express the x i as functions of the x i,

More information

ME185 Introduction to Continuum Mechanics

ME185 Introduction to Continuum Mechanics Fall, 0 ME85 Introduction to Continuum Mechanics The attached pages contain four previous midterm exams for this course. Each midterm consists of two pages. As you may notice, many of the problems are

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

M. Matrices and Linear Algebra

M. Matrices and Linear Algebra M. Matrices and Linear Algebra. Matrix algebra. In section D we calculated the determinants of square arrays of numbers. Such arrays are important in mathematics and its applications; they are called matrices.

More information

Linear Algebra (Review) Volker Tresp 2017

Linear Algebra (Review) Volker Tresp 2017 Linear Algebra (Review) Volker Tresp 2017 1 Vectors k is a scalar (a number) c is a column vector. Thus in two dimensions, c = ( c1 c 2 ) (Advanced: More precisely, a vector is defined in a vector space.

More information

Physics 342 Lecture 2. Linear Algebra I. Lecture 2. Physics 342 Quantum Mechanics I

Physics 342 Lecture 2. Linear Algebra I. Lecture 2. Physics 342 Quantum Mechanics I Physics 342 Lecture 2 Linear Algebra I Lecture 2 Physics 342 Quantum Mechanics I Wednesday, January 27th, 21 From separation of variables, we move to linear algebra Roughly speaking, this is the study

More information

The Matrix Representation of a Three-Dimensional Rotation Revisited

The Matrix Representation of a Three-Dimensional Rotation Revisited Physics 116A Winter 2010 The Matrix Representation of a Three-Dimensional Rotation Revisited In a handout entitled The Matrix Representation of a Three-Dimensional Rotation, I provided a derivation of

More information

PART ONE DYNAMICS OF A SINGLE PARTICLE

PART ONE DYNAMICS OF A SINGLE PARTICLE PART ONE DYNAMICS OF A SINGLE PARTICLE 1 Kinematics of a Particle 1.1 Introduction One of the main goals of this book is to enable the reader to take a physical system, model it by using particles or rigid

More information

Vector analysis. 1 Scalars and vectors. Fields. Coordinate systems 1. 2 The operator The gradient, divergence, curl, and Laplacian...

Vector analysis. 1 Scalars and vectors. Fields. Coordinate systems 1. 2 The operator The gradient, divergence, curl, and Laplacian... Vector analysis Abstract These notes present some background material on vector analysis. Except for the material related to proving vector identities (including Einstein s summation convention and the

More information

EOS 352 Continuum Dynamics Conservation of angular momentum

EOS 352 Continuum Dynamics Conservation of angular momentum EOS 352 Continuum Dynamics Conservation of angular momentum c Christian Schoof. Not to be copied, used, or revised without explicit written permission from the copyright owner The copyright owner explicitly

More information

SPECIAL RELATIVITY AND ELECTROMAGNETISM

SPECIAL RELATIVITY AND ELECTROMAGNETISM SPECIAL RELATIVITY AND ELECTROMAGNETISM MATH 460, SECTION 500 The following problems (composed by Professor P.B. Yasskin) will lead you through the construction of the theory of electromagnetism in special

More information

Linear Algebra (Review) Volker Tresp 2018

Linear Algebra (Review) Volker Tresp 2018 Linear Algebra (Review) Volker Tresp 2018 1 Vectors k, M, N are scalars A one-dimensional array c is a column vector. Thus in two dimensions, ( ) c1 c = c 2 c i is the i-th component of c c T = (c 1, c

More information

The quantum state as a vector

The quantum state as a vector The quantum state as a vector February 6, 27 Wave mechanics In our review of the development of wave mechanics, we have established several basic properties of the quantum description of nature:. A particle

More information

Vectors. September 2, 2015

Vectors. September 2, 2015 Vectors September 2, 2015 Our basic notion of a vector is as a displacement, directed from one point of Euclidean space to another, and therefore having direction and magnitude. We will write vectors in

More information

Chapter 3 Stress, Strain, Virtual Power and Conservation Principles

Chapter 3 Stress, Strain, Virtual Power and Conservation Principles Chapter 3 Stress, Strain, irtual Power and Conservation Principles 1 Introduction Stress and strain are key concepts in the analytical characterization of the mechanical state of a solid body. While stress

More information

A primer on matrices

A primer on matrices A primer on matrices Stephen Boyd August 4, 2007 These notes describe the notation of matrices, the mechanics of matrix manipulation, and how to use matrices to formulate and solve sets of simultaneous

More information

Derivatives in General Relativity

Derivatives in General Relativity Derivatives in General Relativity One of the problems with curved space is in dealing with vectors how do you add a vector at one point in the surface of a sphere to a vector at a different point, and

More information

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ).

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ). Connectedness 1 Motivation Connectedness is the sort of topological property that students love. Its definition is intuitive and easy to understand, and it is a powerful tool in proofs of well-known results.

More information

Lecture Notes Introduction to Vector Analysis MATH 332

Lecture Notes Introduction to Vector Analysis MATH 332 Lecture Notes Introduction to Vector Analysis MATH 332 Instructor: Ivan Avramidi Textbook: H. F. Davis and A. D. Snider, (WCB Publishers, 1995) New Mexico Institute of Mining and Technology Socorro, NM

More information

Lecture 3: Vectors. Any set of numbers that transform under a rotation the same way that a point in space does is called a vector.

Lecture 3: Vectors. Any set of numbers that transform under a rotation the same way that a point in space does is called a vector. Lecture 3: Vectors Any set of numbers that transform under a rotation the same way that a point in space does is called a vector i.e., A = λ A i ij j j In earlier courses, you may have learned that a vector

More information

MATH45061: SOLUTION SHEET 1 V

MATH45061: SOLUTION SHEET 1 V 1 MATH4561: SOLUTION SHEET 1 V 1.) a.) The faces of the cube remain aligned with the same coordinate planes. We assign Cartesian coordinates aligned with the original cube (x, y, z), where x, y, z 1. The

More information

1.2 Euclidean spacetime: old wine in a new bottle

1.2 Euclidean spacetime: old wine in a new bottle CHAPTER 1 EUCLIDEAN SPACETIME AND NEWTONIAN PHYSICS Absolute, true, and mathematical time, of itself, and from its own nature, flows equably without relation to anything external... Isaac Newton Scholium

More information

Getting Started with Communications Engineering. Rows first, columns second. Remember that. R then C. 1

Getting Started with Communications Engineering. Rows first, columns second. Remember that. R then C. 1 1 Rows first, columns second. Remember that. R then C. 1 A matrix is a set of real or complex numbers arranged in a rectangular array. They can be any size and shape (provided they are rectangular). A

More information

Sometimes the domains X and Z will be the same, so this might be written:

Sometimes the domains X and Z will be the same, so this might be written: II. MULTIVARIATE CALCULUS The first lecture covered functions where a single input goes in, and a single output comes out. Most economic applications aren t so simple. In most cases, a number of variables

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

1 Gauss integral theorem for tensors

1 Gauss integral theorem for tensors Non-Equilibrium Continuum Physics TA session #1 TA: Yohai Bar Sinai 16.3.216 Index Gymnastics: Gauss Theorem, Isotropic Tensors, NS Equations The purpose of today s TA session is to mess a bit with tensors

More information

VECTORS, TENSORS AND INDEX NOTATION

VECTORS, TENSORS AND INDEX NOTATION VECTORS, TENSORS AND INDEX NOTATION Enrico Nobile Dipartimento di Ingegneria e Architettura Università degli Studi di Trieste, 34127 TRIESTE March 5, 2018 Vectors & Tensors, E. Nobile March 5, 2018 1 /

More information

Caltech Ph106 Fall 2001

Caltech Ph106 Fall 2001 Caltech h106 Fall 2001 ath for physicists: differential forms Disclaimer: this is a first draft, so a few signs might be off. 1 Basic properties Differential forms come up in various parts of theoretical

More information

This appendix provides a very basic introduction to linear algebra concepts.

This appendix provides a very basic introduction to linear algebra concepts. APPENDIX Basic Linear Algebra Concepts This appendix provides a very basic introduction to linear algebra concepts. Some of these concepts are intentionally presented here in a somewhat simplified (not

More information

Physics 6303 Lecture 5 September 5, 2018

Physics 6303 Lecture 5 September 5, 2018 Physics 6303 Lecture 5 September 5, 2018 LAST TIME: Examples, reciprocal or dual basis vectors, metric coefficients (tensor), and a few general comments on tensors. To start this discussion, I will return

More information