A Brief Introduction to Tensors

Jay R. Walton
Fall 2013
Copyright © 2011 by J. R. Walton. All rights reserved.

1 Preliminaries

In general, a tensor is a multilinear transformation defined over an underlying finite dimensional vector space. In this brief introduction, tensor spaces of all integral orders will be defined inductively. Initially the underlying vector space, V, will be assumed to be an inner product space in order to simplify the discussion. Subsequently, the presentation will be generalized to vector spaces without an inner product. Usually, bold-face letters, a, will denote vectors in V and upper case letters, A, will denote tensors. The inner product on V will be denoted by a · b.

2 Tensors over an Inner Product Space

First, tensor spaces are developed over an inner product vector space {V, ·}. Implicit in the discussion is the use of an orthonormal basis in V for deriving component representations of vectors and tensors.

Zero-Order Tensors. The space of Zero-Order Tensors, T_0, is isomorphic to the scalar field, F, corresponding to the underlying vector space V, which in this course will be either the real or complex numbers, R or C, respectively. A zero-order tensor, α ∈ T_0, acts as a linear transformation α[·] : T_0 → T_0 via multiplication of scalars. That is, given β ∈ T_0,

    β ↦ α[β] := αβ.

First-Order Tensors. The space of First-Order Tensors, T_1, is isomorphic to the underlying vector space, V. A first-order tensor, a ∈ T_1, acts as a linear transformation from T_0 → T_1 and from T_1 → T_0 as follows. In the first instance, for all α ∈ T_0,

    a[α] := α a,                                                  (1)

whereas in the second instance, for all b ∈ T_1,

    a[b] := a · b.                                                (2)

It should be noted that both (1) and (2) are bilinear forms, i.e. they are linear in each of their two independent variables separately.
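To make the two actions (1) and (2) concrete, here is a minimal numerical sketch, assuming V = R^3 with the standard dot product and using NumPy; the particular vectors and names (a, b, alpha) are illustrative choices, not part of the notes.

    import numpy as np

    # Illustrative data: V = R^3 with the standard inner product.
    a = np.array([1.0, 2.0, 3.0])      # a first-order tensor (a vector in V)
    b = np.array([0.5, -1.0, 2.0])     # another vector
    alpha = 2.0                        # a zero-order tensor (a scalar)

    # (1): a acting on a scalar returns a vector, a[alpha] = alpha a.
    print(alpha * a)                   # [2. 4. 6.]

    # (2): a acting on a vector returns a scalar, a[b] = a . b.
    print(np.dot(a, b))                # 0.5 - 2.0 + 6.0 = 4.5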

Second-Order Tensors. The space of Second-Order Tensors, T_2, is isomorphic to the space, Lin[V], of linear transformations A : V → V. Elements of T_2 act as linear transformations from T_i → T_j with i, j = 0, 1, 2 subject to i + j = 2. For i = 0, j = 2 one has, for A ∈ T_2 and α ∈ T_0,

    A[α] := α A.                                                  (3)

For i = 1, j = 1, one has, for A ∈ T_2 and a ∈ T_1,

    A[a] := Aa,                                                   (4)

where the right hand side of (4) denotes the action of the linear transformation A ∈ Lin[V] on the vector a ∈ V. For i = 2, j = 0, one has, for A, B ∈ T_2,

    A[B] := A · B,                                                (5)

where A · B denotes the natural inner product on Lin[V] defined by

    A · B := tr[A^T B].                                           (6)

In (6), tr[A] denotes the trace of A ∈ Lin[V] and A^T denotes the transpose of A. Finally, a second-order tensor A ∈ T_2 can be used to define a bilinear transformation on V. Specifically, A[·, ·] : T_1 × T_1 → T_0 is defined by

    ⟨a, b⟩_A := a · (A[b])                                        (7)

for all a, b ∈ T_1.

An important class of second-order tensors is given by the Elementary Tensor Product of two first-order tensors. Specifically, given a, b ∈ T_1, the elementary tensor product a ⊗ b of a and b is the second-order tensor whose action on a first-order tensor c ∈ T_1 is defined by

    a ⊗ b [c] := a (b · c).                                       (8)

From the definitions (6) and (8), one sees that

    [a ⊗ b]^T = b ⊗ a,
    tr[a ⊗ b] = a · b,
    (a ⊗ b) · (c ⊗ d) = (a · c)(b · d),
    (a ⊗ b)(c ⊗ d) = (b · c) a ⊗ d.

Coordinates with Respect to an Orthonormal Basis. Given an orthonormal basis, {e_1, ..., e_n}, for the underlying vector space V, one can construct a natural orthonormal basis for the space of second-order tensors, T_2, of the form

    {e_i ⊗ e_j,  i, j = 1, ..., n}.                               (9)

Consequently, given A ∈ T_2, one has

    A = Σ_{i,j} a_ij e_i ⊗ e_j,

with the coordinates of A relative to the natural basis given by

    a_ij = A · (e_i ⊗ e_j) = e_i · (A e_j).
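The identities above and the component formula a_ij = e_i · (A e_j) are easy to check numerically. The following sketch (illustrative vectors in R^3, with np.outer playing the role of the elementary tensor product) verifies each identity and then recovers the components of a second-order tensor relative to the standard orthonormal basis.

    import numpy as np

    a, b = np.array([1.0, 2.0, 0.0]), np.array([3.0, -1.0, 1.0])
    c, d = np.array([0.0, 1.0, 2.0]), np.array([1.0, 1.0, 1.0])

    ab = np.outer(a, b)    # elementary tensor product a ⊗ b, definition (8)
    cd = np.outer(c, d)    # c ⊗ d

    print(np.allclose(ab.T, np.outer(b, a)))                   # [a⊗b]^T = b⊗a
    print(np.isclose(np.trace(ab), a @ b))                     # tr[a⊗b] = a·b
    print(np.isclose(np.trace(ab.T @ cd), (a @ c) * (b @ d)))  # (a⊗b)·(c⊗d) = (a·c)(b·d)
    print(np.allclose(ab @ cd, (b @ c) * np.outer(a, d)))      # (a⊗b)(c⊗d) = (b·c) a⊗d

    # Components a_ij = e_i · (A e_j) with respect to the standard orthonormal basis.
    A = ab + 2.0 * cd
    E = np.eye(3)          # rows are e_1, e_2, e_3
    comps = np.array([[E[i] @ (A @ E[j]) for j in range(3)] for i in range(3)])
    print(np.allclose(comps, A))       # for this basis the components are the matrix entries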

Hence, if x ∈ V has coordinates x_k relative to the basis {e_1, ..., e_n}, then the action of A on x can be computed using components as

    [Ax] = [a_ij x_j],

where summation over the index j is implied. It is useful to note that if a, b ∈ V have coordinates [a] = [a_i] and [b] = [b_i], respectively, relative to the orthonormal basis {e_1, ..., e_n}, then one easily shows that the components of the second-order tensor a ⊗ b relative to the natural basis on T_2 are [a ⊗ b] = [a_i b_j]. Finally, the component form of the bilinear form (7) is

    ⟨a, b⟩_A = A · (a ⊗ b) = a · (Ab) = a_ij a_i b_j,

where summation over i, j = 1, ..., n is implied.

Third-Order Tensors. The space of third-order tensors, T_3, is most easily constructed by first considering elementary tensor products of the form a ⊗ b ⊗ c for first-order tensors (vectors in V) a, b, c ∈ T_1. A third-order tensor can be used to define a linear transformation from T_p → T_{3−p} for p = 0, 1, 2, 3. The action of a third-order elementary tensor product as such a linear transformation can be completely specified by defining its action on p-th order elementary tensor products. (Why?) For p = 0 and α ∈ T_0, one defines

    a ⊗ b ⊗ c [α] := α a ⊗ b ⊗ c ∈ T_3.

For p = 1 and d ∈ T_1, one defines

    a ⊗ b ⊗ c [d] := (c · d) a ⊗ b ∈ T_2.

For p = 2 and d ⊗ e ∈ T_2, one defines

    a ⊗ b ⊗ c [d ⊗ e] := (b · d)(c · e) a ∈ T_1.

It should be noted that in this expression the scalar multiplying a can also be written as (b ⊗ c) · (d ⊗ e), where · now denotes the dot product on T_2 defined previously. Finally, for d ⊗ e ⊗ f ∈ T_3, one defines

    a ⊗ b ⊗ c [d ⊗ e ⊗ f] := (a · d)(b · e)(c · f).              (10)

Note that this last expression is a multilinear form in the six variables a, ..., f. Moreover, it can be used to define an inner product on the space T_3. Indeed, as a natural basis for T_3 one takes the set of elementary tensor products, {e_i ⊗ e_j ⊗ e_k, i, j, k = 1, ..., n}, and then defines the dot product on T_3 by using (10) to set

    e_i ⊗ e_j ⊗ e_k · e_l ⊗ e_m ⊗ e_n := (e_i · e_l)(e_j · e_m)(e_k · e_n) = δ_il δ_jm δ_kn,

where δ_ij denotes the Kronecker symbol

    δ_ij = 1 when i = j,  0 when i ≠ j,

and extending the definition to all of T_3 by linearity. In particular, a general third-order tensor A ∈ T_3 has the component representation A = [a_ijk], where the a_ijk are defined through

    A = a_ijk e_i ⊗ e_j ⊗ e_k,

where summation over i, j, k is implied.
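As a sketch of the third-order case (again with illustrative vectors and NumPy as an assumed tool), the snippet below builds elementary products a ⊗ b ⊗ c with np.einsum, checks the full contraction (10), and checks the p = 1 action a ⊗ b ⊗ c [d] = (c · d) a ⊗ b.

    import numpy as np

    a, b, c = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0]), np.array([1.0, 1.0, 0.0])
    d, e, f = np.array([2.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 2.0, 1.0])

    abc = np.einsum('i,j,k->ijk', a, b, c)     # components a_i b_j c_k of a ⊗ b ⊗ c
    def_ = np.einsum('i,j,k->ijk', d, e, f)    # components of d ⊗ e ⊗ f

    # (10): a⊗b⊗c[d⊗e⊗f] = (a·d)(b·e)(c·f), the inner product on T_3.
    lhs = np.einsum('ijk,ijk->', abc, def_)
    print(np.isclose(lhs, (a @ d) * (b @ e) * (c @ f)))    # True

    # Action for p = 1: contract the last slot with d, a⊗b⊗c[d] = (c·d) a⊗b.
    print(np.allclose(np.einsum('ijk,k->ij', abc, d), (c @ d) * np.outer(a, b)))   # True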

Fourth and Higher-Order Tensors. The generalization to fourth-order tensors and higher should now be clear. One first defines the special class of N-th order elementary tensor products of first-order tensors, and then uses the dot product to define their various actions as multilinear transformations. The vector space of all N-th order tensors is then constructed by taking all finite linear combinations of such N-th order elementary tensor products. For example, an N-th order elementary tensor product of the form

    A = a_1 ⊗ ··· ⊗ a_{N−p} ⊗ b_1 ⊗ ··· ⊗ b_p

defines a multilinear transformation A : T_p → T_{N−p} through

    A[c_1 ⊗ ··· ⊗ c_p] = (b_1 · c_1) ··· (b_p · c_p) a_1 ⊗ ··· ⊗ a_{N−p}.

An important fourth-order tensor in applications is the Elasticity Tensor of linear elasticity theory. Specifically, the elasticity tensor, D, is the fourth-order tensor by which the stress tensor, T, is computed from the infinitesimal strain tensor, E, as

    T = D[E],

which in component form becomes

    t_ij = d_ijkl e_kl,

where summation over k, l = 1, 2, 3 is implied.
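To illustrate how a fourth-order contraction such as t_ij = d_ijkl e_kl is evaluated in components, here is a sketch using NumPy's einsum. The isotropic form d_ijkl = λ δ_ij δ_kl + μ (δ_ik δ_jl + δ_il δ_jk) and the Lamé constants are assumptions made only to have concrete data; the notes do not specify a particular elasticity tensor.

    import numpy as np

    lam, mu = 1.2, 0.8            # illustrative Lamé constants (assumed values)
    I = np.eye(3)

    # Isotropic elasticity tensor: d_ijkl = lam δ_ij δ_kl + mu (δ_ik δ_jl + δ_il δ_jk).
    D = (lam * np.einsum('ij,kl->ijkl', I, I)
         + mu * (np.einsum('ik,jl->ijkl', I, I) + np.einsum('il,jk->ijkl', I, I)))

    # An illustrative symmetric infinitesimal strain tensor E.
    E = np.array([[1.0, 0.2, 0.0],
                  [0.2, -0.5, 0.1],
                  [0.0, 0.1, 0.3]])

    # T = D[E], in components t_ij = d_ijkl e_kl (summation over k, l).
    T = np.einsum('ijkl,kl->ij', D, E)

    # Consistency check: for this particular D, T should equal lam tr(E) I + 2 mu E.
    print(np.allclose(T, lam * np.trace(E) * I + 2 * mu * E))   # True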

3 Tensors over a Vector Space without Inner Product

The construction of tensor spaces of all orders given below proceeds in somewhat the same fashion as done previously, only now the underlying vector space, V, is not assumed to have an inner product. In particular, the term orthonormal basis has no meaning in this context. However, many of the conveniences of an orthonormal basis can be realized through the introduction of the notion of a Dual Space to V.

3.1 Dual Space and Dual Basis

3.1.1 Dual Space

Given a finite dimensional vector space V, one defines its Dual Space V* to be Lin[V, R], the vector space of all linear transformations from V to the real numbers (or more generally, to the associated scalar field F). Recall that if V has dimension N, then Lin[V, R] can be realized as the set of all 1 × N matrices with real entries. The action of a linear transformation a ∈ V* on a vector b ∈ V is denoted by ⟨a, b⟩.

Example. Let V = R^N (ignoring its natural inner product). Elements a ∈ V are N-tuples of real numbers written as columns, a = (a_1, ..., a_N)^T, whereas elements b ∈ V* are 1 × N matrices b = (b_1 ... b_N). The action ⟨b, a⟩ is then given by matrix multiplication,

    ⟨b, a⟩ = (b_1 ... b_N)(a_1, ..., a_N)^T = b_1 a_1 + ··· + b_N a_N.

3.1.2 Dual Basis

Given a basis B = {e_1, ..., e_N} for V, one defines its Dual Basis to be the unique basis B* = {e^1, ..., e^N} for the dual space satisfying

    ⟨e^i, e_j⟩ = δ^i_j,  for i, j = 1, ..., N,                    (11)

where δ^i_j denotes the Kronecker symbol. Every vector a ∈ V has a representation a = a^1 e_1 + ··· + a^N e_N. The coefficients a^i, i = 1, ..., N, are called the Contravariant Coordinates of the vector a ∈ V. Correspondingly, every dual vector b ∈ V* has a representation b = b_1 e^1 + ··· + b_N e^N, with the coefficients b_i, i = 1, ..., N, being called the Covariant Coordinates of b. It follows from (11) that

    a^i = ⟨e^i, a⟩  and  b_i = ⟨b, e_i⟩.                          (12)
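When V = R^N, the dual basis can be computed explicitly: if the basis vectors e_1, ..., e_N are stored as the columns of a matrix B, the rows of B^{-1} represent the dual functionals e^1, ..., e^N, because B^{-1} B = I is exactly condition (11) written in matrix form. A small sketch with an assumed, non-orthonormal basis of R^3:

    import numpy as np

    # Columns of B are an illustrative basis e_1, e_2, e_3 of R^3 (not orthonormal).
    B = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [2.0, 0.0, 1.0]])

    Bdual = np.linalg.inv(B)         # row i represents the dual basis functional e^i

    # (11): <e^i, e_j> = δ^i_j, i.e. Bdual @ B = I.
    print(np.allclose(Bdual @ B, np.eye(3)))     # True

    # (12): the contravariant coordinates of a vector a are a^i = <e^i, a>.
    a = np.array([3.0, -1.0, 2.0])
    coords = Bdual @ a                           # (a^1, a^2, a^3)
    print(np.allclose(B @ coords, a))            # a = a^i e_i reconstructs a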

3.2 The Tensor Spaces

The tensor space T^p_q(V) is defined to be the vector space of all (p + q)-multilinear, real-valued functions

    A : V* × ··· × V* (p times) × V × ··· × V (q times) → R.      (13)

Thus, A is a function of p variables from V* and q variables from V that is linear in each variable separately. The Contravariant Order of A is p and the Covariant Order of A is q. A Pure Contravariant Tensor has order (p, 0) while a Pure Covariant Tensor has order (0, q).

Example. Every transformation A ∈ Lin(V) defines a tensor Â ∈ T^1_1 through Â(v*, v) = ⟨v*, Av⟩ for every v ∈ V and v* ∈ V*.

Example. Any p vectors from V and q dual vectors from V* can be used to construct a tensor in T^p_q in the form of a tensor product. More specifically, if v_1, ..., v_p ∈ V and v^1, ..., v^q ∈ V*, then one defines the tensor product v_1 ⊗ ··· ⊗ v_p ⊗ v^1 ⊗ ··· ⊗ v^q ∈ T^p_q through the action

    v_1 ⊗ ··· ⊗ v_p ⊗ v^1 ⊗ ··· ⊗ v^q (u^1, ..., u^p, u_1, ..., u_q) = ⟨u^1, v_1⟩ ··· ⟨u^p, v_p⟩ ⟨v^1, u_1⟩ ··· ⟨v^q, u_q⟩.

Given a basis B = {e_1, ..., e_N} for V with associated dual basis B* = {e^1, ..., e^N} for V*, one constructs the natural product basis for T^p_q as

    {e_{i_1} ⊗ ··· ⊗ e_{i_p} ⊗ e^{j_1} ⊗ ··· ⊗ e^{j_q},  i_1, ..., i_p, j_1, ..., j_q = 1, ..., N}.

Thus, one sees that dim(T^p_q) = N^(p+q), where dim(V) = N. It is now straightforward to construct the component form of a general tensor. For a tensor A ∈ T^p_q, one defines its component form relative to the natural product basis,

    [A] = [a^{i_1 ··· i_p}_{j_1 ··· j_q}],

through the following argument. Let u^1, ..., u^p ∈ V* have covariant coordinates [u^i]_{B*} = [u^i_k], i = 1, ..., p, k = 1, ..., N, and let u_1, ..., u_q ∈ V have contravariant coordinates [u_j]_B = [u^k_j], j = 1, ..., q, k = 1, ..., N. Then,

    A(u^1, ..., u^p, u_1, ..., u_q)
      = a^{i_1 ··· i_p}_{j_1 ··· j_q} e_{i_1} ⊗ ··· ⊗ e_{i_p} ⊗ e^{j_1} ⊗ ··· ⊗ e^{j_q} (u^1, ..., u^p, u_1, ..., u_q)
      = a^{i_1 ··· i_p}_{j_1 ··· j_q} ⟨u^1, e_{i_1}⟩ ··· ⟨u^p, e_{i_p}⟩ ⟨e^{j_1}, u_1⟩ ··· ⟨e^{j_q}, u_q⟩
      = a^{i_1 ··· i_p}_{j_1 ··· j_q} u^1_{i_1} ··· u^p_{i_p} u^{j_1}_1 ··· u^{j_q}_q,

with summation over repeated indices implied.

Generalized Tensor Product. There is a useful generalization of the elementary tensor product to tensors of arbitrary order. Specifically, given A ∈ T^p_q and B ∈ T^r_s, one defines the tensor product A ⊗ B ∈ T^{p+r}_{q+s} through the action

    A ⊗ B (v^1, ..., v^{p+r}, v_1, ..., v_{q+s}) := A(v^1, ..., v^p, v_1, ..., v_q) B(v^{p+1}, ..., v^{p+r}, v_{q+1}, ..., v_{q+s}).

Thus the orders of tensors add in forming tensor products.

Generalized Contraction. Similarly, it is useful to introduce a generalization of the dot product (contraction operator) to higher order tensors. To that end, let A ∈ T^p_q and B ∈ T^s_r with r ≤ p and s ≤ q. One then defines the dot product A · B to be a tensor in T^{p−r}_{q−s} given (in component form) by

    [A · B] := [a^{i_1 ··· i_p}_{j_1 ··· j_q} b^{j_{q−s+1} ··· j_q}_{i_{p−r+1} ··· i_p}],

where summation over the repeated indices is implied. Thus, the orders of tensors subtract in the generalized dot product.
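Working purely with component arrays, both operations above are single einsum patterns. The sketch below is an assumed component-level realization in NumPy (contravariant indices stored first, covariant indices last): it forms A ⊗ B for A ∈ T^2_1 and B ∈ T^1_1, so the orders add, and then contracts with r = s = 1, pairing the last contravariant index of A with the covariant index of B and the covariant index of A with the contravariant index of B, so the orders subtract.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 3

    # Components with contravariant (upper) indices first, covariant (lower) indices last.
    A = rng.standard_normal((N, N, N))   # A in T^2_1: a^{i1 i2}_{j1}
    B = rng.standard_normal((N, N))      # B in T^1_1: b^{j}_{i}

    # Generalized tensor product: A ⊗ B in T^{2+1}_{1+1}; index order (i1, i2, j; j1, i).
    AB = np.einsum('abc,de->abdce', A, B)
    print(AB.shape)                      # (3, 3, 3, 3, 3): three upper and two lower slots

    # Generalized contraction with r = s = 1: [A · B] = a^{i1 i2}_{j1} b^{j1}_{i2},
    # summed over i2 and j1, giving a tensor in T^{2-1}_{1-1} = T^1_0.
    C = np.einsum('abc,cb->a', A, B)
    print(C.shape)                       # (3,)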