Vectors and Matrices Notes.


Jonathan Coulthard
JonathanCoulthard@physics.ox.ac.uk

1 Index Notation

Index notation may seem quite intimidating at first, but once you get used to it, it will allow us to prove some very tricky vector and matrix identities with very little effort. As with most things, it will only become clearer with practice, so it is a good idea to work through the examples for yourself and to try out some of the exercises.

Example: Scalar Product

Let's start off with the simplest possible example: the dot product. For real column vectors a and b,

\[ \mathbf{a}\cdot\mathbf{b} = \mathbf{a}^T\mathbf{b} = \begin{pmatrix} a_1 & a_2 & a_3 \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = a_1 b_1 + a_2 b_2 + a_3 b_3, \tag{1} \]

or, written in a more compact notation,

\[ \mathbf{a}\cdot\mathbf{b} = \sum_i a_i b_i, \tag{2} \]

where the \(\Sigma\) means that we sum over all values of \(i\).

Example: Matrix-Column Vector Product

Now let's take matrix-column vector multiplication, \(A\mathbf{x} = \mathbf{b}\):

\[ \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}. \tag{3} \]

You are probably used to doing this multiplication by visualising a box around one row of \(A\) and a matching box around one element of \(\mathbf{b}\). For the second row, written out explicitly, this is

\[ b_2 = A_{21} x_1 + A_{22} x_2 + A_{23} x_3. \tag{4} \]

If we were to shift the \(A\) box and the \(\mathbf{b}\) box down one place, we would instead get

\[ b_3 = A_{31} x_1 + A_{32} x_2 + A_{33} x_3. \tag{5} \]

It should be clear, then, that in general, for the \(i\)th element of \(\mathbf{b}\), we can write

\[ b_i = A_{i1} x_1 + A_{i2} x_2 + A_{i3} x_3, \tag{6} \]

or, in our more compact notation,

\[ b_i = \sum_j A_{ij} x_j. \tag{7} \]

Note that if the matrix \(A\) had only one row, then \(i\) would take only one value (\(i = 1\)). \(\mathbf{b}\) would then also have only one element (\(b_1\)), making it a scalar, and our matrix-column vector product would reduce exactly to a dot product. In fact, we can interpret the element \(b_i\) as the dot product between the row vector which is the \(i\)th row of \(A\) and the column vector \(\mathbf{x}\).
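To make the compact notation concrete, here is a minimal sketch (my own illustration; the notes themselves contain no code) of the sums in equations (2) and (7), written as explicit Python loops over the indices.

```python
def dot(a, b):
    """a . b = sum_i a_i b_i  -- equation (2)."""
    return sum(a[i] * b[i] for i in range(len(a)))

def matvec(A, x):
    """b_i = sum_j A_ij x_j  -- equation (7): one dot product per row of A."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

a = [3, 1, 2]
b = [2, 3, 1]
print(dot(a, b))      # 3*2 + 1*3 + 2*1 = 11

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
x = [1, 0, 2]
print(matvec(A, x))   # [7, 16, 25]
```

Note how the free index \(i\) survives in the result, while the summed index \(j\) disappears; this is exactly the pattern the index-notation rules below formalise.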

Example: Matrix-Matrix Multiplication

One last simple example before we start proving some more nontrivial stuff. Consider the matrix product \(AB = C\):

\[ \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} \begin{pmatrix} B_{11} & B_{12} & B_{13} \\ B_{21} & B_{22} & B_{23} \\ B_{31} & B_{32} & B_{33} \end{pmatrix} = \begin{pmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{pmatrix}. \tag{8} \]

Once again, picture a box around the third row of \(A\) and the second column of \(B\), the way you are probably used to seeing these multiplications done. Explicitly,

\[ C_{32} = A_{31} B_{12} + A_{32} B_{22} + A_{33} B_{32}. \tag{9} \]

As in the previous example, you can interpret \(C_{32}\) as the dot product between the row vector which is the third row of \(A\) and the column vector which is the second column of \(B\), i.e.

\[ C_{32} = \sum_k A_{3k} B_{k2}. \tag{10} \]

It is clear how this generalises to any element \(C_{ij}\):

\[ C_{ij} = \sum_k A_{ik} B_{kj}. \tag{11} \]

The rule for matrix multiplication is: make the inner index (\(k\), in this case) the same, and sum over it.

Example: Trace of a Product of Matrices

The trace of a matrix is defined to be the sum of its diagonal elements,

\[ \mathrm{Tr}(C) = \sum_i C_{ii}. \tag{12} \]

We would like to prove that

\[ \mathrm{Tr}(AB) = \mathrm{Tr}(BA). \tag{13} \]
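The index formulas above lend themselves to a direct check in code. This minimal sketch (my own, not part of the notes) implements equation (11) as a loop over the free indices \(i, j\) with a sum over the inner index \(k\), and then checks the claim \(\mathrm{Tr}(AB) = \mathrm{Tr}(BA)\) numerically on a pair of matrices that do not commute.

```python
def matmul(A, B):
    """C_ij = sum_k A_ik B_kj  -- equation (11)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def trace(C):
    """Tr(C) = sum_i C_ii  -- equation (12)."""
    return sum(C[i][i] for i in range(len(C)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]

print(matmul(A, B))   # [[10, 5], [20, 11]]
print(matmul(B, A))   # [[3, 4], [11, 18]]  -- AB != BA ...
print(trace(matmul(A, B)), trace(matmul(B, A)))   # 21 21  -- ... yet the traces agree
```

A numerical example is of course not a proof, which is what the index manipulation that follows supplies.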

First, let's take the definition of the matrix product in index form,

\[ C_{ij} = \sum_k A_{ik} B_{kj}, \tag{14} \]

and plug it into the definition of the trace,

\[ \mathrm{Tr}(C) = \sum_i C_{ii}. \tag{15} \]

We obtain

\[ \mathrm{Tr}(AB) = \sum_i \left( \sum_k A_{ik} B_{ki} \right). \tag{16} \]

Now \(A_{ik}\) and \(B_{ki}\) are just scalars (i.e. single elements of the matrices \(A\) and \(B\) respectively), and so we can commute them:

\[ \mathrm{Tr}(AB) = \sum_i \left( \sum_k B_{ki} A_{ik} \right). \tag{17} \]

Now, the thing inside the brackets is almost a matrix product, but we are summing over the wrong index (the outer index rather than the inner one). The question is: can we swap the order of the two summations? Because addition is commutative (a + b = b + a), we can.¹

¹ Small health warning: swapping the order of the summations amounts to reordering the terms of the sum. This is fine if we are summing over a finite number of terms. However, if the sums are infinite, we can only reorder the terms if the sum converges absolutely; if it converges only conditionally, then we cannot reorder the terms (see Wikipedia for the definitions of these terms). You are unlikely to encounter a situation where this is an issue, but be aware that such situations exist.

Exercise: By writing out the sums with a small number of terms, convince yourself that you can indeed commute the sums over \(i\) and \(k\).

Finally, by swapping \(i\) and \(k\), we obtain

\[ \mathrm{Tr}(AB) = \sum_k \left( \sum_i B_{ki} A_{ik} \right) = \sum_k (BA)_{kk} = \mathrm{Tr}(BA). \tag{18} \]

Exercise: Now try a slightly more complicated example for yourself. Using index notation, prove that \(\mathrm{Tr}(AB \cdots YZ)\) is invariant under cyclic permutations of the matrices.

1.1 Einstein Summation Convention

Our notation is much more compact than writing out huge matrices and trying to figure out how the multiplications, etc., work in general. However, writing out the \(\Sigma\)s can become very cumbersome. Since we have

\[ \sum_i \sum_j (\text{stuff}) = \sum_j \sum_i (\text{stuff}) \tag{19} \]

and

\[ \sum_j C\,(\text{stuff}) = C \sum_j (\text{stuff}), \tag{20} \]

we can just drop the \(\Sigma\)s entirely and adopt what is called the Einstein summation convention. It is simply summarised as follows:

If an index appears twice in a term, then it is implied that we sum over it. For example, \(A_{ij} B_{jk} \equiv \sum_j A_{ij} B_{jk}\).

This is a very powerful convention that you will use extensively when you learn special and general relativity in your third year. In the more immediate future, it can also make proofs of various matrix and vector identities very easy! For instance, our proof of \(\mathrm{Tr}(AB) = \mathrm{Tr}(BA)\) can be written as

\[ \mathrm{Tr}(AB) = A_{ik} B_{ki} = B_{ki} A_{ik} \tag{22} \]
\[ = \mathrm{Tr}(BA). \tag{23} \]

(Note that in a problem sheet, or in an exam, you should explain each line in this proof: the notation makes it look much more trivial than it really is!)

1.1.1 Tips

As a wise man once said: with great power comes great responsibility. While the Einstein summation convention can indeed make our lives much easier, it can also produce a great deal of nonsense if you are not very careful when keeping track of your indices. Here are some tips for doing so:

- Free indices appear only once in an expression, and thus are not summed over. Dummy indices appear twice, and are implicitly summed over. To help avoid confusion, it is a good idea to use roman letters (i, j, k) for free indices and greek letters (λ, µ, ν) for dummy indices.
- Dummy indices should never appear in the final answer.
- The free indices should always be the same in every term of an expression.
- An index should never appear more than twice in a single term.

1.1.2 The Kronecker Delta

The Kronecker delta is a useful symbol which crops up all the time. It is defined as

\[ \delta_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{otherwise.} \end{cases} \tag{24} \]

It should be clear that this is basically a representation of the identity matrix. It also has the useful property that if you sum over one of its indices, it kills the sum and replaces the dummy index with the other (free) index. For example,

\[ \sum_j a_{ij} \delta_{jn} = a_{in} \tag{25} \]

(since the only nonzero term in the sum is the one with \(j = n\)). If we were to use the Einstein summation convention, we would write the above as

\[ a_{i\mu} \delta_{\mu n} = a_{in}. \tag{26} \]

As another example, the scalar product between two vectors \(\mathbf{a}\cdot\mathbf{b}\) can be written as

\[ \mathbf{a}\cdot\mathbf{b} = a_\mu b_\nu \delta_{\mu\nu} = a_\mu b_\mu, \tag{27} \]

where the \(\delta_{\mu\nu}\) forces the indices of \(a\) and \(b\) to be equal.
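The index-killing property in equation (25) is easy to see in code. In this minimal sketch (my own illustration, not from the notes), contracting a matrix \(a\) with \(\delta_{jn}\) over \(j\) simply picks out column \(n\): every term of the sum vanishes except the one where \(j = n\).

```python
def delta(i, j):
    """Kronecker delta, equation (24)."""
    return 1 if i == j else 0

a = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

n = 2  # the free index (0-based here, so this is the last column)
contracted = [sum(a[i][j] * delta(j, n) for j in range(3)) for i in range(3)]
print(contracted)   # [3, 6, 9] -- sum_j a_ij delta_jn = a_in, equation (25)
```

In practice one never evaluates such a sum term by term; the whole point of the delta is that the contraction can be read off by replacing the dummy index with the free one.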

1.1.3 The Levi-Civita Symbol

The Levi-Civita symbol \(\varepsilon_{ijk}\) is another handy object. It is defined as

\[ \varepsilon_{ijk} = \begin{cases} 1, & \text{if } ijk \text{ is a cyclic permutation of } 123 \\ -1, & \text{if } ijk \text{ is an anticyclic permutation of } 123 \\ 0, & \text{if any two of } i, j, k \text{ are equal.} \end{cases} \tag{28} \]

Why is such a thing useful? Well, let's consider the object \(\varepsilon_{i\mu\nu} a_\mu b_\nu\). It has one free index, \(i\), so it is a vector. What are its components? If we set \(i = 1\), from the definition of \(\varepsilon\) we see that the only nonzero terms in the sum are those with \(\mu, \nu \neq 1\) and \(\mu \neq \nu\). This leaves us with \(\mu, \nu = 2\) or \(3\). Writing out these components explicitly, we find

\[ \varepsilon_{1\mu\nu} a_\mu b_\nu = \varepsilon_{123} a_2 b_3 + \varepsilon_{132} a_3 b_2 \tag{29} \]
\[ = a_2 b_3 - a_3 b_2, \tag{30} \]

which you probably recognise as the first component of the vector \(\mathbf{a}\times\mathbf{b}\). Indeed, if we go through the components, we find that

\[ \varepsilon_{i\mu\nu} a_\mu b_\nu = (\mathbf{a}\times\mathbf{b})_i. \tag{31} \]

An identity you will find useful is

\[ \varepsilon_{\mu jk}\, \varepsilon_{\mu lm} = \delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl}. \tag{32} \]

The proof of this identity is just the evaluation of all of the cases (thankfully, most of them are zero!).

Exercise: Prove the identity (32).

Example: The Scalar Triple Product

In the problem sheet on vectors, we made use (without proof) of the fact that the scalar triple product, \(\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})\), is unchanged under cyclic permutations of the vectors \(\mathbf{a}\), \(\mathbf{b}\), and \(\mathbf{c}\). Armed with our new notation, proving this becomes trivial:

\[ \mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = \varepsilon_{\lambda\mu\nu} a_\lambda b_\mu c_\nu \tag{33} \]
\[ = \varepsilon_{\nu\lambda\mu} a_\lambda b_\mu c_\nu \tag{34} \]
\[ = \varepsilon_{\nu\lambda\mu} c_\nu a_\lambda b_\mu \tag{35} \]
\[ = \mathbf{c}\cdot(\mathbf{a}\times\mathbf{b}). \tag{36} \]

We immediately see that (since everything on the right commutes) cyclically permuting the vectors corresponds to cyclically permuting the indices \(\lambda\mu\nu\) in the \(\varepsilon\). Since \(\varepsilon_{\lambda\mu\nu}\) is unchanged under cyclic permutations, \(\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})\) is unchanged under cyclic permutations of the vectors!

Exercise: Which of the following expressions are valid? If possible, write them in matrix-column vector notation. If they are not valid, why not?

1. \(\mu\, x_\mu\)
2. \(A_{i\sigma} x_\sigma\)
3. \(\varepsilon_{\lambda\mu\nu} x_{\lambda\mu}\)

4. \(a_i b_j + c_k d_j\)
5. \(\varepsilon_{i\mu\nu}\, \varepsilon_{\nu\sigma\rho}\, a_\mu b_\sigma c_\rho\)

You might now want to go back and have a look at the problem sheet from week 2, Vectors and Matrices I (specifically, Q2.4 of the class problems), for some more practice. Try proving them all with index notation and the Einstein summation convention!
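As a final check, the Levi-Civita identities (31) and (32) can be verified numerically. This sketch is my own (the notes contain no code); it uses 0-based indices 0, 1, 2 in place of 1, 2, 3, and the closed-form product formula for \(\varepsilon_{ijk}\) is a standard trick, not something derived in the notes.

```python
from itertools import product

def eps(i, j, k):
    """Levi-Civita symbol, equation (28), for indices in {0, 1, 2}.
    The product (i-j)(j-k)(k-i)/2 is +1/-1/0 exactly as required."""
    return (i - j) * (j - k) * (k - i) // 2

def delta(i, j):
    return 1 if i == j else 0

a = [1, 2, 3]
b = [4, 5, 6]

# eps_imn a_m b_n gives the cross product, equation (31)
cross = [sum(eps(i, m, n) * a[m] * b[n]
             for m in range(3) for n in range(3)) for i in range(3)]
print(cross)   # [-3, 6, -3], the usual a x b

# check the epsilon-delta identity, equation (32), over all index choices
ok = all(sum(eps(m, j, k) * eps(m, l, n) for m in range(3))
         == delta(j, l) * delta(k, n) - delta(j, n) * delta(k, l)
         for j, k, l, n in product(range(3), repeat=4))
print(ok)      # True
```

Brute-force evaluation over all 81 index choices is exactly the "evaluation of all of the cases" that the proof of (32) asks for, just done by machine.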