Vector analysis and vector identities by means of cartesian tensors

Kenneth H. Carpenter
August 29, 2001

1 The cartesian tensor concept

1.1 Introduction

The cartesian tensor approach to vector analysis uses components in a rectangular coordinate system to derive all vector and field relationships. These relationships may then be transformed to other coordinate systems and expressed in coordinate-free vector notation. The result is much simpler than attempting derivations in a coordinate-free manner or in multiple coordinate systems.

Vector identities, vector differential operations, and integral theorems often appear complicated in standard vector notation. By means of cartesian tensors, their derivations can be unified, simplified, and made accessible to all students. Typical undergraduate electromagnetic theory texts do not mention the cartesian tensor method, and modern advanced texts usually do not include vector analysis as a topic but only summarize its results.

I learned the cartesian tensor approach from Professor Charles Halijak in a class at Kansas State University in 1962. I have since found the method, or parts of it, expounded in texts on vectors and tensors. The earliest reference I have read is by Levi-Civita [1]. His work, in a slightly modified form, is included in a text on tensor analysis by Sokolnikoff [2] and is used in a text on the theory of relativity by Møller [3]. The ideas in the following are also given in a programmed instruction text [4], and all of the following relationships are derived in one section or another of the text by Bourne and Kendall [5]. I will give no further references, but will present the work in a tutorial manner.

1.2 Definitions and notation

Typically, rectangular cartesian coordinates use x, y, and z as the variables for distances along the axes. In the cartesian tensor method these are replaced by $x_1$, $x_2$, and $x_3$, with an unspecified coordinate written as $x_i$.
If unit vectors are denoted by the coordinates with hats over them, $\hat{x}_1$, $\hat{x}_2$, $\hat{x}_3$, then in the usual vector notation a vector field A is expressed as

    $A = A_1 \hat{x}_1 + A_2 \hat{x}_2 + A_3 \hat{x}_3$.   (1)

With the use of subscripts to identify axes, this component form may be written as a sum:

    $A = \sum_{i=1}^{3} A_i \hat{x}_i$.   (2)

Rather than keep all the notation of eq. (2), in the cartesian tensor method one represents the vector field by $A_i$, just as one represents the coordinates by $x_i$. As an example of the simplification this notation gives, consider the dot product of two vectors A and B:

    $A \cdot B = A_1 B_1 + A_2 B_2 + A_3 B_3 = \sum_{i=1}^{3} A_i B_i$.   (3)

This sum is simplified even further by writing the dot product as $A_i B_i$, where summation on repeated subscripts is assumed. Thus we make the identification

    $A \cdot B \equiv A_i B_i$.   (4)

This is known as the Einstein summation convention: when two subscripts in a term are the same, the term is summed on the repeated subscript over the range of the subscript. (The convention extends to higher-dimensional vector spaces, e.g. the four-vector space of special relativity; hence Einstein's use of it.)

1.3 Vectors and tensors

An object having one subscript, for example $A_i$, is called a vector. An object having no subscript, for example $c$, is called a scalar. An object having multiple subscripts is called a tensor, and the rank of the tensor is the number of subscripts: for example, $m_{im}$ is a second-rank tensor. Vectors are first-rank tensors, and scalars are zero-rank tensors.

The concept of being a tensor involves more than having subscripts. A tensor is an object which transforms on a rotation of coordinates like a direct product of as many vectors as its rank. ("Direct product" means just multiplying the objects together.) Thus if $A_i$ and $B_j$ are vectors, $A_i B_j$ is a second-rank tensor. Setting these indices equal (and summing on them) removes the dependence on the subscript, with the result being a scalar, or zero-rank tensor. In general, setting two subscripts of a tensor to the same letter and summing over its values reduces the rank of the tensor by two. This is called contraction.
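As an illustrative sketch outside the original notes (assuming NumPy is available), the summation convention maps directly onto `np.einsum`, whose subscript strings are read exactly like the index notation above: a repeated letter is summed, and free letters index the result.

```python
import numpy as np

# Two arbitrary vectors, represented by their components A_i and B_i.
A = np.array([1.0, -2.0, 3.0])
B = np.array([4.0, 0.5, 2.0])

# Eq. (4): the repeated subscript i in A_i B_i implies summation.
# np.einsum performs exactly this repeated-index summation.
dot = np.einsum('i,i->', A, B)
assert np.isclose(dot, A @ B)

# The direct product A_i B_j is a second-rank tensor (a 3x3 array).
T = np.einsum('i,j->ij', A, B)

# Contraction: setting j = i and summing reduces the rank by two,
# turning the second-rank tensor back into the scalar A_i B_i.
assert np.isclose(np.einsum('ii->', T), dot)
```

The same subscript-string bookkeeping carries over unchanged to the higher-rank expressions used later in these notes.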
1.4 Special tensors

There are two special tensors which are needed in deriving vector identities: the Kronecker symbol, $\delta_{ij}$, and the Levi-Civita symbol, $\epsilon_{ijk}$.

The Kronecker symbol is usually referred to as the Kronecker delta, and it is defined as

    $\delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}$   (5)

The $\delta_{ij}$ are the matrix elements of the identity matrix. The definition of $\delta_{ij}$ is such that contracting it with another tensor substitutes the subscript letter in the Kronecker delta for the repeated one in the other tensor:

    $\delta_{ij} b_{kjl} = b_{kil}$   (6)

for example.

The Levi-Civita symbol is also called the alternating tensor; it is the totally antisymmetric tensor of rank equal to the number of dimensions. For three dimensions it is

    $\epsilon_{ijk} = \begin{cases} +1, & i,j,k \text{ an even permutation of } 1,2,3 \\ -1, & i,j,k \text{ an odd permutation of } 1,2,3 \\ 0, & \text{any two of the subscripts equal} \end{cases}$   (7)

The definition of the Levi-Civita symbol is closely related to the definition of determinants; in fact, determinants could be defined in terms of the $\epsilon_{ijk}$. If a square matrix A has elements $a_{ij}$, then

    $|A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \epsilon_{ijk}\, a_{1i} a_{2j} a_{3k}$   (8)

where summation on the repeated subscripts is implied.

1.5 Relations between the special tensors

The definitions of $\epsilon_{ijk}$ and $\delta_{ij}$ given in eqs. (7) and (5) result in the identity

    $\epsilon_{ijk}\epsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$   (9)

where summation is implied over index k. This identity may be demonstrated from the properties of determinants. The demonstration is not difficult, but it is lengthy, and so it is placed in Appendix A.

2 Vector operations and vector identities

With the Levi-Civita symbol one may express the vector cross product in cartesian tensor notation as

    $A \times B \equiv \epsilon_{ijk} A_j B_k$.   (10)

This form for the cross product, along with the relationship of eq. (9), allows one to form vector identities for repeated dot and cross products. For example, the triple vector product becomes

    $A \times (B \times C) \equiv \epsilon_{ijk}\epsilon_{klm} A_j B_l C_m$.   (11)

The tensor side of this equation is evaluated by means of eq. (9) as

    $(\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}) A_j B_l C_m = B_i A_j C_j - C_i A_j B_j$

so that the standard identity

    $A \times (B \times C) = (A \cdot C)B - (A \cdot B)C$

is established.

The del operator is represented in cartesian tensors by $\partial/\partial x_i$:

    $\nabla \equiv \partial/\partial x_i$   (12)
    $\nabla f \equiv \partial f/\partial x_i$   (13)
    $\nabla \cdot A \equiv \partial A_i/\partial x_i$   (14)
    $\nabla \times A \equiv \epsilon_{ijk}\, \partial A_k/\partial x_j$   (15)

become the gradient, divergence, and curl. Identities involving the del operator may be evaluated in a manner similar to that used for the triple vector product above. For example,

    $(\nabla \times A) \cdot (\nabla \times B) \equiv \epsilon_{ijk}\epsilon_{ilm}\, (\partial A_k/\partial x_j)(\partial B_m/\partial x_l)$.   (16)

The tensor side of this equation may be simplified as

    $(\delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl})(\partial A_k/\partial x_j)(\partial B_m/\partial x_l) = (\partial A_k/\partial x_j)(\partial B_k/\partial x_j) - (\partial A_k/\partial x_j)(\partial B_j/\partial x_k)$.

This example yields an identity which cannot be expressed in ordinary vector notation but requires dyadic notation; as such, it is not usually mentioned in beginning courses on vectors.

3 Integral theorems

The divergence theorem, also known as Gauss's theorem, is widely used. In conventional vector notation it is

    $\oint_S D \cdot ds = \int_V \nabla \cdot D\, dv$.   (17)

This equation is either proven in a manner similar to the proof of Green's theorem in the plane, extended to three dimensions, or derived heuristically from an alternate definition of the divergence as a limit of a ratio of integrals:

    $\nabla \cdot D = \lim_{v \to 0} \dfrac{\oint_S D \cdot ds}{v}$.   (18)

In cartesian tensor notation the divergence theorem is written as

    $\oint_S D_i n_i\, ds = \int_V \partial D_i/\partial x_i\, dv$   (19)

where $n_i$ is the outward unit normal to the surface. If the derivation of eq. (17) is done in the notation of eq. (19), one sees that there is nothing special about the integrand having only one tensor subscript. In fact, one might as well derive the more general integral theorem

    $\oint_S A_{ijk\ldots l\ldots mnp}\, n_q\, ds = \int_V \partial A_{ijk\ldots l\ldots mnp}/\partial x_q\, dv$.   (20)

The general form of eq. (20) may be specialized to what would be a curl theorem in ordinary vector notation. Let $A_{ijk\ldots l\ldots mnp}$ be replaced by $\epsilon_{ijk} B_k$ to obtain

    $\oint_S \epsilon_{ijk} B_k n_j\, ds = \int_V \epsilon_{ijk}\, \partial B_k/\partial x_j\, dv$   (21)

which in standard vector notation is

    $\oint_S \hat{n} \times B\, ds = \int_V \nabla \times B\, dv$.   (22)

A gradient theorem may be obtained by replacing $A_{ijk\ldots l\ldots mnp}$ with a scalar $f$ to yield

    $\oint_S f\, n_j\, ds = \int_V \partial f/\partial x_j\, dv$, that is, $\oint_S f\, ds = \int_V \nabla f\, dv$.   (23)

Other specializations of the general theorem will find application in special situations.

Appendix A: Derivation of the fundamental identity between Levi-Civita tensors and Kronecker deltas

First note that from eq. (8)

    $\begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} = \begin{vmatrix} \delta_{11} & \delta_{12} & \delta_{13} \\ \delta_{21} & \delta_{22} & \delta_{23} \\ \delta_{31} & \delta_{32} & \delta_{33} \end{vmatrix} = \epsilon_{ijk}\,\delta_{1i}\delta_{2j}\delta_{3k} = \epsilon_{123} = 1$   (24)

where we have used the property of contracting a Kronecker delta with the subscripts on the Levi-Civita symbol. Now replace the elements of the determinant $|A|$ in eq. (8) by Kronecker deltas as follows:

    $a_{11} = \delta_{l1},\quad a_{12} = \delta_{l2},\quad a_{13} = \delta_{l3}$
    $a_{21} = \delta_{m1},\quad a_{22} = \delta_{m2},\quad a_{23} = \delta_{m3}$
    $a_{31} = \delta_{n1},\quad a_{32} = \delta_{n2},\quad a_{33} = \delta_{n3}$

Then we have

    $\epsilon_{lmn} = \epsilon_{ijk}\,\delta_{li}\delta_{mj}\delta_{nk} = \begin{vmatrix} \delta_{l1} & \delta_{l2} & \delta_{l3} \\ \delta_{m1} & \delta_{m2} & \delta_{m3} \\ \delta_{n1} & \delta_{n2} & \delta_{n3} \end{vmatrix}$   (25)
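As a quick sketch outside the original notes (assuming NumPy), eq. (25) can be spot-checked numerically: build the Levi-Civita symbol directly from its definition in eq. (7) and compare it, for every choice of l, m, n, with the determinant whose rows are rows of the identity matrix.

```python
import numpy as np
from itertools import product

# Levi-Civita symbol from eq. (7), with indices 0,1,2 standing for 1,2,3.
eps = np.zeros((3, 3, 3))
for p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[p] = 1.0   # even permutations
for p in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    eps[p] = -1.0  # odd permutations

delta = np.eye(3)  # Kronecker delta, eq. (5)

# Eq. (25): eps_lmn equals the determinant with rows
# (d_l1, d_l2, d_l3), (d_m1, d_m2, d_m3), (d_n1, d_n2, d_n3),
# i.e. rows l, m, n of the identity matrix.
for l, m, n in product(range(3), repeat=3):
    det = np.linalg.det([delta[l], delta[m], delta[n]])
    assert np.isclose(det, eps[l, m, n])
```

The loop exhausts all 27 index combinations, so this is a complete verification of eq. (25) in three dimensions, not just a sample.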

Now, the value of a determinant is unchanged when rows and columns are interchanged, that is, when the matrix is transposed before taking the determinant; also, the letters used for the subscripts are immaterial. Hence we can rewrite $\epsilon_{ijk}$ as eq. (25) with the matrix transposed and $lmn$ replaced by $ijk$. We can then multiply these two determinants together and use the fact that the determinant of a product of two matrices is the product of the determinants of the individual matrices to obtain

    $\epsilon_{lmn}\epsilon_{ijk} = \begin{vmatrix} \delta_{l1} & \delta_{l2} & \delta_{l3} \\ \delta_{m1} & \delta_{m2} & \delta_{m3} \\ \delta_{n1} & \delta_{n2} & \delta_{n3} \end{vmatrix} \begin{vmatrix} \delta_{i1} & \delta_{j1} & \delta_{k1} \\ \delta_{i2} & \delta_{j2} & \delta_{k2} \\ \delta_{i3} & \delta_{j3} & \delta_{k3} \end{vmatrix} = \begin{vmatrix} \delta_{il} & \delta_{jl} & \delta_{kl} \\ \delta_{im} & \delta_{jm} & \delta_{km} \\ \delta_{in} & \delta_{jn} & \delta_{kn} \end{vmatrix}$   (26)

where we have used the fact that

    $\delta_{l1}\delta_{i1} + \delta_{l2}\delta_{i2} + \delta_{l3}\delta_{i3} = \delta_{lp}\delta_{ip} = \delta_{il}$

to obtain the upper left element of the product matrix, and so on.

A less formal presentation of the same result comes from examining the determinant

    $\epsilon_{ijk} = \begin{vmatrix} \delta_{i1} & \delta_{i2} & \delta_{i3} \\ \delta_{j1} & \delta_{j2} & \delta_{j3} \\ \delta_{k1} & \delta_{k2} & \delta_{k3} \end{vmatrix}$   (27)

and observing that this makes $\epsilon_{123}$ the determinant of the identity matrix, which has the value 1; that every interchange of a row changes the sign of the determinant; and that the determinant is zero when two rows are equal. In the same way, examining the determinant

    $\epsilon_{lmn}\epsilon_{ijk} = \begin{vmatrix} \delta_{il} & \delta_{jl} & \delta_{kl} \\ \delta_{im} & \delta_{jm} & \delta_{km} \\ \delta_{in} & \delta_{jn} & \delta_{kn} \end{vmatrix}$   (28)

shows that $\epsilon_{123}\epsilon_{123} = 1$ and, again, that any interchange of either rows or columns changes the sign, and that any two rows or columns identical makes the value zero. Hence the identity of the product of epsilon symbols and the determinant is shown.
To complete the demonstration of eq. (9) one uses eq. (28) to form $\epsilon_{ijk}\epsilon_{klm}$, expands the determinant into a sum of products of Kronecker deltas, and then carries out the summation on k:

    $\epsilon_{ijk}\epsilon_{klm} = \begin{vmatrix} \delta_{ik} & \delta_{jk} & \delta_{kk} \\ \delta_{il} & \delta_{jl} & \delta_{kl} \\ \delta_{im} & \delta_{jm} & \delta_{km} \end{vmatrix} = \delta_{ik}\delta_{jl}\delta_{km} + \delta_{im}\delta_{jk}\delta_{kl} + \delta_{kk}\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}\delta_{kk} - \delta_{il}\delta_{jk}\delta_{km} - \delta_{ik}\delta_{jm}\delta_{kl}$   (29)
    $= \delta_{im}\delta_{jl} + \delta_{im}\delta_{jl} + 3\,\delta_{il}\delta_{jm} - 3\,\delta_{im}\delta_{jl} - \delta_{il}\delta_{jm} - \delta_{il}\delta_{jm}$
    $= \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$

where the summation on index k has been carried out. Note that $\delta_{kk} = 3$.
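The identities derived above lend themselves to exhaustive numerical checks. The following sketch (assuming NumPy; the particular test vectors are arbitrary choices, not part of the original notes) verifies eq. (28) over all index combinations, the contraction identity eq. (9), and its consequence, the triple-product identity of section 2:

```python
import numpy as np
from itertools import product

# Levi-Civita symbol from eq. (7) and the Kronecker delta from eq. (5).
eps = np.zeros((3, 3, 3))
for p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[p] = 1.0
for p in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    eps[p] = -1.0
delta = np.eye(3)

# Eq. (28): eps_lmn eps_ijk as a determinant of deltas, all 3^6 cases.
for i, j, k, l, m, n in product(range(3), repeat=6):
    det = np.linalg.det([[delta[i, l], delta[j, l], delta[k, l]],
                         [delta[i, m], delta[j, m], delta[k, m]],
                         [delta[i, n], delta[j, n], delta[k, n]]])
    assert np.isclose(det, eps[l, m, n] * eps[i, j, k])

# Eq. (9): eps_ijk eps_klm = d_il d_jm - d_im d_jl (summation on k).
lhs = np.einsum('ijk,klm->ijlm', eps, eps)
rhs = (np.einsum('il,jm->ijlm', delta, delta)
       - np.einsum('im,jl->ijlm', delta, delta))
assert np.allclose(lhs, rhs)

# Consequence: the triple-product identity A x (B x C) = (A.C)B - (A.B)C,
# checked on arbitrarily chosen vectors.
A = np.array([1.0, 2.0, -1.0])
B = np.array([0.5, -3.0, 2.0])
C = np.array([4.0, 1.0, 1.0])
triple = np.einsum('ijk,klm,j,l,m->i', eps, eps, A, B, C)
assert np.allclose(triple, np.cross(A, np.cross(B, C)))
assert np.allclose(triple, (A @ C) * B - (A @ B) * C)
```

Because the index ranges are finite, the determinant and contraction checks are exhaustive; only the triple-product line relies on sample vectors.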

References

[1] Tullio Levi-Civita, The Absolute Differential Calculus (Calculus of Tensors), edited by Enrico Persico, translated by Marjorie Long, Blackie & Son Limited, London and Glasgow, 1929.
[2] I. S. Sokolnikoff, Tensor Analysis, John Wiley & Sons, Inc., New York, 1951.
[3] C. Møller, The Theory of Relativity, Oxford University Press, London, 1952.
[4] Richard E. Haskell, Introduction to Vectors and Cartesian Tensors: A Programmed Text for Students of Engineering and Science, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1972.
[5] D. E. Bourne and P. C. Kendall, Vector Analysis and Cartesian Tensors, 2nd ed., Thomas Nelson & Sons Ltd., Great Britain, 1977.