Vectors. September 2, 2015

Our basic notion of a vector is as a displacement, directed from one point of Euclidean space to another, and therefore having direction and magnitude. We will write vectors in boldface here, but for problems and lectures it is faster to use an arrow, v⃗. Since we will write the magnitude of the vector v (or v⃗) as simply v, it is important to always use some distinguishing notation.

While we may always think of a vector as a direction and a magnitude, we can give a more abstract definition that doesn't depend on these intuitions. This has the advantage of allowing us easy algebraic manipulations. We give an algebraic definition, then describe two important ways of combining two vectors to get a third: the dot product and the cross product. Finally, we discuss rotations, which leave the important vector relationships invariant.

1 Definition

A vector space, V, is a collection of objects called vectors, u, v, w, ... ∈ V, together with an operation, addition, satisfying the following properties:

1. The sum of any two vectors, u, v ∈ V, denoted w = u + v, is in the vector space as well, w ∈ V. (Closure)

2. There is an identity vector, 0, called the zero vector, such that for all vectors u ∈ V,
  u + 0 = 0 + u = u

3. For every u ∈ V, there is an inverse vector in V, called −u, such that
  u + (−u) = (−u) + u = 0

4. Vector addition is associative. For any three vectors, u, v, w ∈ V,
  (u + v) + w = u + (v + w)
and therefore we may unambiguously write u + v + w for the sum of more than two vectors.

5. Vector addition is commutative. For any two vectors, u, v ∈ V,
  u + v = v + u

There is also a second operation, scalar multiplication (for our purposes, by real or complex numbers, a, b, ... ∈ R or C), such that for any two scalars, a, b, and any two vectors, u, v ∈ V,
  w = au + bv

is also in V. Finally, we require the distributive laws,
  (a + b)u = au + bu
  a(u + v) = au + av

A vector space is n-dimensional if n is the smallest number such that any collection of n + 1 distinct vectors, {u_1, ..., u_{n+1}}, is linearly dependent, in the sense that there exist numbers a_1, a_2, ..., a_{n+1}, not all zero, such that
  a_1 u_1 + a_2 u_2 + ... + a_{n+1} u_{n+1} = 0
It is then always possible to choose a basis, that is, a set of n independent vectors, e_1, ..., e_n, such that every vector in V may be written as a linear combination of the form
  u = a_1 e_1 + ... + a_n e_n
for some real (or complex) numbers a_1, ..., a_n. These numbers are the components of the vector u in that basis, and u may be conveniently written as
  u = (a_1, a_2, ..., a_n)

2 Dot product

2.1 Length

We require our vectors to have a real, non-negative length, u = |u|, which we will write using a dot:
  u·u ≡ |u|² = u²
We require this length to be linear in the sense that
  |au| = |a| u
By extending linearity to sums of vectors, we can extend the dot product to a real-valued mapping of any pair of vectors. Noticing that linearity under vector addition would require
  |u + v|² = (u + v)·(u + v) = u·(u + v) + v·(u + v) = u·u + u·v + v·u + v·v
we define the dot product of any two vectors to be
  u·v ≡ ½ (|u + v|² − u² − v²)
From this it follows that u·v = v·u. Two vectors are said to be perpendicular (or orthogonal) if u·v = 0, and parallel if u·v = ±uv.
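As a quick numerical sanity check (a Python sketch, not part of the original notes; the sample components are arbitrary), this length-based definition of the dot product can be compared against the familiar component formula derived later in these notes:

```python
import math

def norm(u):
    # |u| = sqrt(u.u), computed from components
    return math.sqrt(sum(x * x for x in u))

def dot_from_lengths(u, v):
    # u.v = (|u + v|^2 - |u|^2 - |v|^2) / 2, the definition in the text
    s = tuple(a + b for a, b in zip(u, v))
    return (norm(s) ** 2 - norm(u) ** 2 - norm(v) ** 2) / 2

u, v = (3.0, 4.0, 5.0), (1.0, 2.0, 0.0)
component_dot = sum(a * b for a, b in zip(u, v))   # u_x v_x + u_y v_y + u_z v_z
assert abs(dot_from_lengths(u, v) - component_dot) < 1e-9
assert abs(dot_from_lengths(u, v) - dot_from_lengths(v, u)) < 1e-9  # u.v = v.u
```

The symmetry u·v = v·u falls out automatically, since the definition involves only lengths.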

2.2 Orthonormal basis

Our intuitions about vectors are clearest if we choose an orthonormal basis. Given any vector u, with its length u, the vector (1/u)u has unit length,
  |(1/u)u| = (1/u)|u| = u/u = 1
Suppose we have a basis, e_1, ..., e_n. We can then form a new basis which is orthonormal by using a procedure called Gram-Schmidt orthogonalization. To be a basis, no two basis vectors can be parallel (prove this!). Therefore, given any two, e_1, e_2, we make the first one unit length,
  ê_1 = e_1 / |e_1|
Now construct a unit vector orthogonal to ê_1 by writing
  ê_2 = (e_2 − (e_2·ê_1) ê_1) / |e_2 − (e_2·ê_1) ê_1|
We now have
  ê_1·ê_1 = 1,  ê_1·ê_2 = 0,  ê_2·ê_2 = 1
We continue in this way, adding one new vector at a time and constructing one perpendicular to all the previous ones, until we have done the full set. Then for all i, j = 1, ..., n,
  ê_i·ê_j = δ_ij    (1)
where the Kronecker delta, δ_ij, is defined to be the matrix
  δ_ij = { 1, i = j
         { 0, i ≠ j
This is just the identity matrix. Notice that eq. (1) is actually n² distinct equations, since each of i, j may take n different values.

2.3 Three dimensions

In Euclidean 3-space, we define a Cartesian orthonormal basis ê_i = (î, ĵ, k̂) for i = 1, 2, 3, where î is a unit vector pointing in the +x direction, ĵ a unit vector in the +y direction, and k̂ a unit vector pointing in the +z direction. Then any vector may be written as
  v = v⃗ = v_1 î + v_2 ĵ + v_3 k̂
        = v_x î + v_y ĵ + v_z k̂
        = Σ_{i=1}^{3} v_i ê_i

We will use these various notations interchangeably. Notice that, while an arrow over a symbol or boldface indicates a vector, a carat over a vector, k̂, indicates that it is a unit vector. For any unit vector, we have v̂·v̂ = 1. Using the distributive laws, we find the usual Pythagorean expression for the length of a vector:
  v² = v·v
     = (v_x î + v_y ĵ + v_z k̂)·(v_x î + v_y ĵ + v_z k̂)
     = v_x î·(v_x î + v_y ĵ + v_z k̂) + v_y ĵ·(v_x î + v_y ĵ + v_z k̂) + v_z k̂·(v_x î + v_y ĵ + v_z k̂)
     = v_x²(î·î) + v_y v_x(ĵ·î) + v_z v_x(k̂·î) + v_x v_y(î·ĵ) + v_y²(ĵ·ĵ) + v_z v_y(k̂·ĵ) + v_x v_z(î·k̂) + v_y v_z(ĵ·k̂) + v_z²(k̂·k̂)
     = v_x² + v_y² + v_z²
where we have used the orthonormality of the basis.

Writing out all nine (or, in general, n²) separate terms is tedious. However, if we use index notation, it's not so bad. Consider the following shorter version:
  v² = v·v = (Σ_{i=1}^{3} v_i ê_i)·(Σ_{j=1}^{3} v_j ê_j)
     = Σ_i Σ_j v_i v_j (ê_i·ê_j)
     = Σ_i Σ_j v_i v_j δ_ij
     = Σ_i v_i v_i
     = v_1² + v_2² + v_3²
where we use the fact that
  Σ_j v_j δ_ij = v_i
because the δ_ij vanishes unless i = j. Notice that in the first step it is crucial to use different summation indices in the two sums.

For a general dot product, we now have
  u·v = ½ (|u + v|² − u² − v²)
      = ½ [(u_x + v_x)² + (u_y + v_y)² + (u_z + v_z)² − (u_x² + u_y² + u_z²) − (v_x² + v_y² + v_z²)]
      = ½ (2 u_x v_x + 2 u_y v_y + 2 u_z v_z)
      = u_x v_x + u_y v_y + u_z v_z
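The Gram-Schmidt procedure of section 2.2, combined with the component dot product just obtained, is short enough to sketch in code. This is an illustrative Python sketch (the starting basis is an arbitrary choice, not taken from the notes):

```python
import math

def dot(u, v):
    # component form of the dot product: sum_i u_i v_i
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Orthonormalize a list of linearly independent vectors, as in the
    text: subtract the projections onto every previously constructed
    unit vector, then normalize what remains."""
    ortho = []
    for e in basis:
        for ehat in ortho:
            c = dot(e, ehat)                       # component along ê_i
            e = tuple(a - c * b for a, b in zip(e, ehat))
        length = math.sqrt(dot(e, e))
        ortho.append(tuple(a / length for a in e))
    return ortho

e1, e2, e3 = gram_schmidt([(1.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)])
# ê_i · ê_j should now equal the Kronecker delta δ_ij
assert abs(dot(e1, e1) - 1) < 1e-12 and abs(dot(e1, e2)) < 1e-12
assert abs(dot(e2, e3)) < 1e-12 and abs(dot(e3, e3) - 1) < 1e-12
```

The assertions verify eq. (1), ê_i·ê_j = δ_ij, for the constructed basis.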

2.4 Angles

We can use the dot product to define the angle between two vectors, at least up to the sign of the angle. Suppose we have two vectors, u, v. If we choose the plane spanned by linear combinations of just these two to be the xy plane, and then choose the x-axis to lie along the direction of the first vector, then we have
  u = u î
while the second vector lies somewhere in the xy plane,
  v = v_x î + v_y ĵ = v cos ϕ î + v sin ϕ ĵ
The dot product of u, v is then
  u·v = u î·(v cos ϕ î + v sin ϕ ĵ)
      = uv (cos ϕ (î·î) + sin ϕ (î·ĵ))
      = uv cos ϕ
and therefore we find the angle from
  cos ϕ = u·v / (uv)

The beauty of this construction is that a rotation is defined as a linear transformation of vectors that preserves their lengths. Since we have defined the dot product in terms of lengths, rotations also preserve the dot product, and the angle is now defined in terms of lengths and a dot product. Therefore, the expression above does not depend on our choosing the x and y axes the way we did. We may always find the cosine of the angle between two vectors by dividing their dot product by their norms, regardless of the directions of the coordinate axes. This result gives us another convenient way to write the dot product, namely,
  u·v = uv cos ϕ
where u, v are the lengths and ϕ is the angle between the vectors.

3 Cross product

There is another way to define a product of two vectors, one which gives a third vector. It exists because, given any two independent vectors in 3-space, there is a unique direction orthogonal to them both. The first two define a plane, and the third is normal to that plane. Choosing the direction of the normal by a right-hand rule, we only need to specify the length of the product vector. We take it to be the area of the parallelogram defined by the first two vectors. Once again, let's start with two vectors u, v, and choose coordinates so that they lie in the xy plane.
Then the area of the parallelogram formed by the two is given by
  A = uv sin ϕ
Clearly this area is unchanged by rotations, though the normal is not. Since the z direction is given by k̂, we have
  u × v = uv sin ϕ k̂
The only part of this that changes with rotations is the direction of k̂. Let n̂ be the direction normal to both u and v as given by the right-hand rule. Then the cross product may be written in a way independent of coordinates as
  u × v = uv sin ϕ n̂
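Since ϕ is determined by the dot product (cos ϕ = u·v/uv, section 2.4), the area A = uv sin ϕ can already be computed before the components of the cross product are introduced. A small Python sketch, with arbitrary sample vectors:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Illustrative vectors in the xy plane (assumed values, not from the notes):
# a base of length 3 along x, and a second side reaching height 2
u, v = (3.0, 0.0, 0.0), (2.0, 2.0, 0.0)

ulen, vlen = math.sqrt(dot(u, u)), math.sqrt(dot(v, v))
cos_phi = dot(u, v) / (ulen * vlen)
sin_phi = math.sqrt(1 - cos_phi ** 2)    # 0 <= phi <= pi, so sin(phi) >= 0
area = ulen * vlen * sin_phi             # A = uv sin(phi)
assert abs(area - 6.0) < 1e-9            # base 3 times height 2
```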

Notice that for any vector, v × v = 0.

We may make this definition completely independent of rotations by defining the triple product. Let w be an arbitrary third vector. Then in general, u, v, w give the sides of a parallelepiped. Since the cross product gives the area of the base, while h = n̂·w gives the height, the volume of the parallelepiped is
  w·(u × v) = uv sin ϕ (n̂·w) = Ah
The triple product is purely geometrical and therefore independent of how we rotate the vectors. By choosing w perpendicular to the base, we may recover the cross product from the triple product.

To find the component form of the cross product, we use our orthonormal basis. Clearly, we have
  î × î = 0,  ĵ × ĵ = 0,  k̂ × k̂ = 0
For the remaining products, notice that any pair of the unit vectors gives the sides of a unit square. The angle is 90° and the area is 1. The only thing remaining to keep track of is the orientation. Using the right-hand rule we see that
  î × ĵ = k̂,  ĵ × k̂ = î,  k̂ × î = ĵ
Reversing the order reverses the direction of the normal, so that
  ĵ × î = −k̂,  k̂ × ĵ = −î,  î × k̂ = −ĵ

We can now find the components of the cross product. Let
  u = u_1 î + u_2 ĵ + u_3 k̂
  v = v_1 î + v_2 ĵ + v_3 k̂
Remembering that everything to do with vectors must be linear, we find
  u × v = (u_1 î + u_2 ĵ + u_3 k̂) × (v_1 î + v_2 ĵ + v_3 k̂)
        = u_1 v_1 (î × î) + u_2 v_1 (ĵ × î) + u_3 v_1 (k̂ × î)
        + u_1 v_2 (î × ĵ) + u_2 v_2 (ĵ × ĵ) + u_3 v_2 (k̂ × ĵ)
        + u_1 v_3 (î × k̂) + u_2 v_3 (ĵ × k̂) + u_3 v_3 (k̂ × k̂)
        = −u_2 v_1 k̂ + u_3 v_1 ĵ + u_1 v_2 k̂ − u_3 v_2 î − u_1 v_3 ĵ + u_2 v_3 î
resulting in
  u × v = (u_2 v_3 − u_3 v_2) î + (u_3 v_1 − u_1 v_3) ĵ + (u_1 v_2 − u_2 v_1) k̂
We notice that this is just the determinant of the matrix
  | u_1  u_2  u_3 |
  | v_1  v_2  v_3 |
  |  î    ĵ    k̂  |
While this result may also be achieved more easily using index notation, it requires use of the Levi-Civita symbol. We will introduce this later as an optional tool.
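The component formula can be checked numerically: the result should be orthogonal to both factors, vanish for v × v, and have squared length u²v² sin²ϕ = u²v² − (u·v)² (using sin²ϕ = 1 − cos²ϕ). A Python sketch with arbitrary sample vectors:

```python
def cross(u, v):
    # components read off from the determinant formula in the text
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (3.0, 4.0, 5.0), (1.0, 2.0, 0.0)
uxv = cross(u, v)
assert dot(u, uxv) == 0 and dot(v, uxv) == 0    # normal to both factors
assert cross(u, u) == (0.0, 0.0, 0.0)           # v x v = 0
# |u x v|^2 = u^2 v^2 sin^2(phi) = u^2 v^2 - (u.v)^2
assert dot(uxv, uxv) == dot(u, u) * dot(v, v) - dot(u, v) ** 2
```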

4 Rotations

We defined rotations as linear transformations that preserve the lengths of all vectors. We may write the components of a linear transformation of an arbitrary vector u as
  ũ_i = Σ_{j=1}^{n} R_ij u_j
and the condition that makes it a rotation is that the length of ũ equals the length of u. In components,
  Σ_i u_i u_i = Σ_i ũ_i ũ_i
              = Σ_i (Σ_j R_ij u_j)(Σ_k R_ik u_k)
              = Σ_j Σ_k (Σ_i R_ij R_ik) u_j u_k
where we use associativity of addition. Now notice that this holds if and only if
  Σ_i R_ij R_ik = δ_jk
for then the triple sum becomes
  Σ_j Σ_k (Σ_i R_ij R_ik) u_j u_k = Σ_j Σ_k δ_jk u_j u_k = Σ_j u_j u_j

The transpose of a matrix R_ij is just (Rᵗ)_ij = R_ji, so the condition we need in order to have a rotation is
  Rᵗ R = 1
A matrix is a rotation if and only if its transpose is its inverse.

Let's check that this works in 2 dimensions. Let
  R = ( a  b )      Rᵗ = ( a  c )
      ( c  d )           ( b  d )
Then, imposing the condition Σ_{i=1}^{2} (Rᵗ)_ji R_ik = δ_jk, we have
  ( a  c ) ( a  b )   ( a² + c²   ab + cd )   ( 1  0 )
  ( b  d ) ( c  d ) = ( ab + cd   b² + d² ) = ( 0  1 )

Therefore, we need
  a² + c² = 1
  ab + cd = 0
  b² + d² = 1
We can satisfy the first condition by writing a = cos θ, c = sin θ for some angle θ, and similarly b = cos ϕ, d = sin ϕ for some ϕ. The middle condition then tells us that
  0 = cos θ cos ϕ + sin θ sin ϕ = cos(θ − ϕ)
so we need ϕ = θ + π/2. Since sin(θ + π/2) = cos θ and cos(θ + π/2) = −sin θ, our rotation matrix becomes
  R = ( cos θ  −sin θ )
      ( sin θ   cos θ )
which is just right.

5 Exercises

For the three vectors
  u = 3î + 4ĵ + 5k̂
  v = î + 2ĵ
  w = 7ĵ + 2k̂
find the following:

1. Find u·v, u·w, w·v.

2. By explicitly taking the dot and cross products, find:

3. Find u·(u × v), v·(u × v), w·(u × v), v·(w × w), w·(v × w).

4. Prove the BAC-CAB rule for any three vectors,
  A × (B × C) = B (A·C) − C (A·B)
You can just substitute general expressions for the products, but there's an easier way to do this if you use the rotational invariance of the result.
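Before attempting the proof in exercise 4, the identity can be spot-checked numerically. This Python sketch checks one arbitrary triple of vectors (a sanity check, not a substitute for the proof):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Arbitrary illustrative vectors (not the ones from the exercises)
A, B, C = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 10.0)

lhs = cross(A, cross(B, C))
rhs = tuple(b * dot(A, C) - c * dot(A, B) for b, c in zip(B, C))
assert lhs == rhs    # A x (B x C) = B(A.C) - C(A.B)
```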