Introduction to Geometry

These are draft lecture notes of H.M. Khudaverdian. Manchester, 18 May 2011.

Contents

1 Euclidean space
   1.1 Vector space
   1.2 Basic example of n-dimensional vector space R^n
   1.3 Linear dependence of vectors
   1.4 Dimension of vector space. Basis in vector space
   1.5 Scalar product. Euclidean space
   1.6 Orthonormal basis in Euclidean space
   1.7 Transition matrices. Orthogonal bases and orthogonal matrices
       1.7.1 Linear operators and transition matrices
   1.8 Orthogonal 2 x 2 matrices
   1.9 Orientation in vector space
   1.10 Linear operator in E^3 preserving orientation is a rotation
   1.11 Vector product in oriented E^3
       1.11.1 Vector product and area of parallelogram
       1.11.2 Area of parallelogram in E^2 and determinant of 2 x 2 matrices
       1.11.3 Volume of parallelepiped

2 Differential forms in E^2 and E^3
   2.1 Tangent vectors, curves, velocity vectors on the curve
   2.2 Reparameterisation
   2.3 0-forms and 1-forms
       2.3.1 Vectors as directional derivatives of functions

   2.4 Differential 1-form in arbitrary coordinates
       2.4.1 Calculations in arbitrary coordinates
       2.4.2 Calculations in polar coordinates
   2.5 Integration of differential 1-forms over curves
   2.6 Integral over curve of exact form
   2.7 Differential 2-forms in E^2
   2.8 d: 0-forms (functions) -> 1-forms -> 2-forms
   2.9 Exact and closed forms
   2.10 Integration of two-forms. Area of the domain

3 Curves in E^3. Curvature
   3.1 Curves. Velocity and acceleration vectors
   3.2 Behaviour of acceleration vector under reparameterisation
   3.3 Length of the curve
   3.4 Natural parameterisation of the curves
   3.5 Curvature. Curvature of curves in E^2 and E^3
   3.6 Curvature of curve in an arbitrary parameterisation

4 Surfaces in E^3. Curvatures and Shape operator
   4.1 Coordinate basis, tangent plane to the surface
   4.2 Curves on surfaces. Length of the curve. Internal and external points of view. First Quadratic Form
   4.3 Unit normal vector to surface
   4.4 Curves on surfaces: normal acceleration and normal curvature
   4.5 Shape operator on the surface
   4.6 Principal curvatures, Gaussian and mean curvatures and shape operator
   4.7 Shape operator, Gaussian and mean curvature for sphere and cylinder
   4.8 Principal curvatures and normal curvature

5 Parallel transport. Geometrical meaning of Gaussian curvature
   5.1 Concept of parallel transport of the vector tangent to the surface
   5.2 Geometrical meaning of Gaussian curvature. Theorema Egregium

6 Appendices
   6.1 Formulae for vector fields and differentials in cylindrical and spherical coordinates
   6.2 Curvature and second order contact (touching) of curves
   6.3 Integral of curvature over planar curve
   6.4 Relations between usual curvature, normal curvature and geodesic curvature
   6.5 Normal curvature of curves on cylinder surface
   6.6 Parallel transport of vectors tangent to the sphere
   6.7 Parallel transport along a closed curve on arbitrary surface
   6.8 Gauss-Bonnet Theorem
   6.9 A Tale on Differential Geometry

1 Euclidean space

We recall important notions from linear algebra.

1.1 Vector space

A vector space V over the real numbers is a set of vectors with two operations: addition of vectors and multiplication of a vector by a real number (real numbers here are often called coefficients or scalars). These operations obey the following axioms:

    for all a, b in V: a + b in V; for all λ in R and a in V: λa in V,
    for all a, b: a + b = b + a   (commutativity),
    for all a, b, c: a + (b + c) = (a + b) + c   (associativity),
    there exists a zero vector 0 such that a + 0 = a for every a,
    for every a there exists a vector -a such that a + (-a) = 0,
    for all λ in R: λ(a + b) = λa + λb,
    for all λ, µ in R: (λ + µ)a = λa + µa,

    (λµ)a = λ(µa),
    1a = a.

It follows from these axioms that, in particular, the zero vector 0 is unique and -a is uniquely defined by a. (Prove it.)

Examples of vector spaces...

1.2 Basic example of n-dimensional vector space R^n

A basic example of a vector space (over the real numbers) is the space of ordered n-tuples of real numbers.

R^2 is the space of pairs of real numbers: R^2 = {(x, y), x, y in R}.

R^3 is the space of triples of real numbers: R^3 = {(x, y, z), x, y, z in R}.

R^4 is the space of quadruples of real numbers: R^4 = {(x, y, z, t), x, y, z, t in R}, and so on...

R^n is the space of n-tuples of real numbers:

    R^n = {(x^1, x^2, ..., x^n), x^1, ..., x^n in R}   (1.1)

If x, y in R^n are two vectors, x = (x^1, ..., x^n), y = (y^1, ..., y^n), then

    x + y = (x^1 + y^1, ..., x^n + y^n),

and multiplication by scalars is defined as

    λx = λ(x^1, ..., x^n) = (λx^1, ..., λx^n),   (λ in R).

1.3 Linear dependence of vectors

We often consider linear combinations in a vector space:

    Σ_i λ_i x_i = λ_1 x_1 + λ_2 x_2 + ... + λ_m x_m,   (1.2)

where λ_1, λ_2, ..., λ_m are coefficients (real numbers) and x_1, x_2, ..., x_m are vectors from the vector space V.
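In coordinates these operations are just componentwise arithmetic. A minimal numerical sketch, assuming Python with the numpy library (the names here are illustrative only):

```python
import numpy as np

# Two vectors in R^4, represented as ordered 4-tuples of reals
x = np.array([1.0, 2.0, 0.0, -1.0])
y = np.array([3.0, -1.0, 2.0, 5.0])

print(x + y)        # componentwise addition: (x^1 + y^1, ..., x^n + y^n)
print(2.5 * x)      # multiplication by the scalar lambda = 2.5

# A linear combination lambda_1 x_1 + lambda_2 x_2 as in (1.2)
lam = [2.0, -3.0]
vectors = [x, y]
combo = sum(l * v for l, v in zip(lam, vectors))
print(combo)
```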

We say that the linear combination (1.2) is trivial if all the coefficients λ_1, λ_2, ..., λ_m are equal to zero:

    λ_1 = λ_2 = ... = λ_m = 0.

We say that the linear combination (1.2) is non-trivial if at least one of the coefficients λ_1, λ_2, ..., λ_m is not equal to zero:

    λ_1 ≠ 0, or λ_2 ≠ 0, or ... or λ_m ≠ 0.

Recall the definition of linearly dependent and linearly independent vectors:

Definition The vectors {x_1, x_2, ..., x_m} in a vector space V are linearly dependent if there exists a non-trivial linear combination of these vectors which is equal to zero.

In other words, the vectors {x_1, x_2, ..., x_m} in a vector space V are linearly dependent if there exist coefficients µ_1, µ_2, ..., µ_m such that at least one of these coefficients is not equal to zero and

    µ_1 x_1 + µ_2 x_2 + ... + µ_m x_m = 0.   (1.3)

Respectively, the vectors {x_1, x_2, ..., x_m} are linearly independent if they are not linearly dependent. This means that an arbitrary linear combination of these vectors which is equal to zero is trivial. In other words, the vectors {x_1, x_2, ..., x_m} are linearly independent if the condition

    µ_1 x_1 + µ_2 x_2 + ... + µ_m x_m = 0

implies that µ_1 = µ_2 = ... = µ_m = 0.
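Linear dependence can be tested in coordinates: m vectors in R^n are linearly dependent exactly when the matrix having these vectors as rows has rank less than m. A sketch of such a test, assuming numpy:

```python
import numpy as np

def linearly_dependent(vectors):
    """True if the given vectors (as rows) are linearly dependent,
    i.e. some non-trivial combination (1.3) vanishes."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) < len(vectors)

# x3 = x1 + 2 x2, so these three vectors are dependent
x1, x2 = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]
x3 = [1.0, 2.0, 3.0]
print(linearly_dependent([x1, x2, x3]))   # True
print(linearly_dependent([x1, x2]))       # False
```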

A very useful and workable

Proposition The vectors {x_1, x_2, ..., x_m} in a vector space V are linearly dependent if and only if at least one of these vectors is expressed as a linear combination of the other vectors:

    x_i = Σ_{j ≠ i} λ_j x_j.   (1.4)

Proof. If the condition (1.4) is obeyed, then x_i - Σ_{j ≠ i} λ_j x_j = 0. This non-trivial linear combination is equal to zero. Hence the vectors {x_1, ..., x_m} are linearly dependent.

Now suppose that the vectors {x_1, ..., x_m} are linearly dependent. This means that there exist coefficients µ_1, µ_2, ..., µ_m such that at least one of these coefficients is not equal to zero and the sum (1.3) equals zero. WLOG suppose that µ_1 ≠ 0. Then

    x_1 = -(µ_2/µ_1) x_2 - (µ_3/µ_1) x_3 - ... - (µ_m/µ_1) x_m,

i.e. the vector x_1 is expressed as a linear combination of the vectors {x_2, x_3, ..., x_m}.

Formulate and give a proof of a useful

Lemma Let m vectors {x_1, x_2, ..., x_m} belong to the span of n vectors {a_1, a_2, ..., a_n}, i.e. every vector x_i (i = 1, ..., m) can be expressed as a linear combination of the vectors {a_1, a_2, ..., a_n}:

    x_1 = λ^1_1 a_1 + λ^2_1 a_2 + ... + λ^n_1 a_n
    x_2 = λ^1_2 a_1 + λ^2_2 a_2 + ... + λ^n_2 a_n
    x_3 = λ^1_3 a_1 + λ^2_3 a_2 + ... + λ^n_3 a_n   (1.5)
    ...
    x_m = λ^1_m a_1 + λ^2_m a_2 + ... + λ^n_m a_n

Then the vectors {x_1, x_2, ..., x_m} are linearly dependent if m > n.

Proof Prove it using mathematical induction on n. If n = 1 then the statement is obvious: it follows from (1.5) that all the vectors x_i are proportional to the vector a_1, hence proportional to each other, and therefore linearly dependent.

Let the statement be true for n = k. Prove it for n = k + 1. Consider the first equation:

    x_1 = λ^1_1 a_1 + λ^2_1 a_2 + ... + λ^k_1 a_k + λ^{k+1}_1 a_{k+1}   (1.6)

If x_1 = 0 then there is nothing to prove: the vectors {0, x_2, x_3, ..., x_m} are linearly dependent. If x_1 ≠ 0 then at least one of the coefficients in (1.6) is not equal to zero. WLOG suppose that λ^{k+1}_1 ≠ 0. Hence a_{k+1} can be expressed as a linear combination of the vectors {x_1, a_1, a_2, ..., a_k}:

    a_{k+1} = (1/λ^{k+1}_1) x_1 - (λ^1_1/λ^{k+1}_1) a_1 - (λ^2_1/λ^{k+1}_1) a_2 - ... - (λ^k_1/λ^{k+1}_1) a_k.

Substitute this expansion of a_{k+1} into the expressions (1.5) for the vectors x_2, x_3, ..., x_m. We see that the m - 1 vectors

    x̃_2 = x_2 - (λ^{k+1}_2/λ^{k+1}_1) x_1,  x̃_3 = x_3 - (λ^{k+1}_3/λ^{k+1}_1) x_1,  ...,  x̃_m = x_m - (λ^{k+1}_m/λ^{k+1}_1) x_1   (1.7)

are expressed as linear combinations of the k vectors {a_1, a_2, ..., a_k}. Since m - 1 > k, by the inductive hypothesis the vectors x̃_2, x̃_3, ..., x̃_m are linearly dependent:

    µ_2 x̃_2 + ... + µ_m x̃_m = µ_2 (x_2 - (λ^{k+1}_2/λ^{k+1}_1) x_1) + ... + µ_m (x_m - (λ^{k+1}_m/λ^{k+1}_1) x_1) = 0,

where at least one of the coefficients µ_2, ..., µ_m is not equal to zero. Now it follows from (1.7) that this is a non-trivial combination of the vectors x_1, ..., x_m, i.e. the vectors x_1, ..., x_m are linearly dependent.

1.4 Dimension of vector space. Basis in vector space

Definition A vector space V has dimension n if there exist n linearly independent vectors in this vector space, and any n + 1 vectors in V are linearly dependent.

If for an arbitrary natural number n there exist n linearly independent vectors in the vector space V, then the space V is infinite-dimensional.

Basis

Recall that we say that a vector space V is spanned by vectors {x_1, ..., x_n} (or that the vectors {x_1, ..., x_n} span the vector space V) if any vector a in V can be expressed as a linear combination of the vectors {x_1, ..., x_n}.

A basis is a set of linearly independent vectors in the vector space V which span (generate) the vector space V. In more detail:

Definition Let V be an n-dimensional vector space. An ordered set {e_1, e_2, ..., e_n} of n linearly independent vectors in V is called a basis (an ordered basis) of the vector space V.

Proposition 1 Let {e_1, ..., e_n} be an arbitrary basis in an n-dimensional vector space V. Then any vector x in V can be expressed as a linear combination of the vectors {e_1, ..., e_n} in a unique way, i.e. for every vector x in V there exists an ordered set of coefficients {x^1, ..., x^n} such that

    x = x^1 e_1 + ... + x^n e_n,   (1.8)

and if

    x = a^1 e_1 + ... + a^n e_n = b^1 e_1 + ... + b^n e_n,   (1.9)

then a^1 = b^1, a^2 = b^2, ..., a^n = b^n. In other words, for any vector x in V there exists an ordered n-tuple (x^1, ..., x^n) of coefficients such that x = Σ_{i=1}^n x^i e_i, and this n-tuple is unique.

Proof Let x be an arbitrary vector in the vector space V. The dimension of V equals n. Hence the n + 1 vectors (e_1, ..., e_n, x) are linearly dependent: λ^1 e_1 + ... + λ^n e_n + λ^{n+1} x = 0, and this combination is non-trivial.

If λ^{n+1} = 0 then λ^1 e_1 + ... + λ^n e_n = 0 and this combination is non-trivial, i.e. the vectors (e_1, ..., e_n) are linearly dependent. Contradiction. Hence λ^{n+1} ≠ 0, i.e. the vector x can be expressed via the vectors (e_1, ..., e_n):

    x = x^1 e_1 + ... + x^n e_n,  where x^i = -λ^i/λ^{n+1}.

We proved that any vector can be expressed via the vectors of the basis. Prove now the uniqueness. Indeed, if (1.9) holds then (a^1 - b^1) e_1 + (a^2 - b^2) e_2 + ... + (a^n - b^n) e_n = 0. Due to the linear independence of the basis vectors this means that a^1 - b^1 = a^2 - b^2 = ... = a^n - b^n = 0, i.e. a^1 = b^1, a^2 = b^2, ..., a^n = b^n.

Definition The coefficients {x^1, ..., x^n} in the expansion (1.8) are called the components of the vector x in the basis {e_1, ..., e_n}, or shortly the components of the vector x.

Another very useful and workable statement:

Proposition 2 Let {e_1, ..., e_m} be an ordered set of vectors in a vector space V such that an arbitrary vector x in V can be expressed as a linear combination of the vectors {e_1, ..., e_m} in a unique way (see (1.8) and (1.9) above). Then V is a finite-dimensional space of dimension m, and {e_1, ..., e_m} is a basis in this space.

This is a very practical statement: it can often be used to find the dimension of a vector space.

Remark We say a basis, not the basis, since there are many bases in the vector space V. (See below and also Homework 1.)

Proof. Show first that the vectors {e_1, ..., e_m} are linearly independent. Consider the relation µ_1 e_1 + µ_2 e_2 + ... + µ_m e_m = 0. This relation certainly holds if µ_1 = µ_2 = ... = µ_m = 0. Due to the uniqueness of the expansion applied to the vector x = 0, we see that µ_1 e_1 + µ_2 e_2 + ... + µ_m e_m = 0 implies µ_1 = µ_2 = ... = µ_m = 0. Hence the vectors {e_1, ..., e_m} are linearly independent. We proved that dim V ≥ m.

Consider arbitrary m + 1 vectors x_1, x_2, ..., x_{m+1}. By assumption any of the vectors x_i can be expressed as a linear combination of the vectors {e_1, ..., e_m}. Thus any m + 1 vectors in V belong to the span of the vectors {e_1, ..., e_m}. Hence, according to the Lemma in subsection 1.3, any m + 1 vectors in V are linearly dependent. Thus we proved that dim V = m. The ordered set {e_1, ..., e_m} is a set of m linearly independent vectors in the m-dimensional vector space V. Hence {e_1, ..., e_m} is a basis.

Remark A basis is a maximal set of linearly independent vectors in a linear space V.

Canonical basis in R^n

We considered above the basic example of an n-dimensional vector space: the space of ordered n-tuples of real numbers, R^n = {(x^1, x^2, ..., x^n), x^i in R} (see subsection 1.2). What is the meaning of the letter n in the definition of R^n?

Consider the vectors e_1, e_2, ..., e_n in R^n:

    e_1 = (1, 0, ..., 0, 0)
    e_2 = (0, 1, ..., 0, 0)
    ......
    e_n = (0, 0, ..., 0, 1)   (1.10)

Then for an arbitrary vector a = (a^1, a^2, a^3, ..., a^n) in R^n,

    a = (a^1, a^2, ..., a^n) = a^1 (1, 0, ..., 0) + a^2 (0, 1, 0, ..., 0) + ... + a^n (0, 0, ..., 0, 1) = Σ_{i=1}^n a^i e_i

(we will sometimes use the condensed notation x = x^i e_i). Thus we see that every vector a in R^n has a unique expansion via the vectors (1.10). According to Proposition 2 above, the dimension of the space R^n equals n, and (1.10) is a basis in R^n.

Remark One can find other bases in R^n: just take an arbitrary ordered set of n linearly independent vectors. (See exercise 7 in Homework 1.) The basis (1.10) is distinguished; sometimes it is called the canonical basis in R^n.

Remark One can consider the set of ordered n-tuples R^n as a set of points. Two points a, b in R^n then define a vector: if a = (a^1, ..., a^n), b = (b^1, ..., b^n), then the vector ab attached to the point a has coordinates (b^1 - a^1, b^2 - a^2, ..., b^n - a^n). (R^n considered as a set of points is called an affine space.)

1.5 Scalar product. Euclidean space

In a vector space one has an additional structure: a scalar product of vectors.

Definition A scalar product in a vector space V is a function B(x, y) of a pair of vectors which takes real values and satisfies the following conditions:

    B(x, y) = B(y, x)   (symmetry condition)
    B(λx + µx', y) = λ B(x, y) + µ B(x', y)   (linearity condition)
    B(x, x) ≥ 0, and B(x, x) = 0 if and only if x = 0   (positive-definiteness condition)   (1.11)

Definition A Euclidean space is a vector space equipped with a scalar product.

One can easily see that the function B(x, y) is a bilinear function, i.e. it is linear with respect to the second argument also. (Here and later we will denote the scalar product B(x, y) just by (x, y).) This follows from the previous axioms:

    B(x, λy + µy') = B(λy + µy', x)   (by symmetry)
                   = λ B(y, x) + µ B(y', x)   (by linearity)
                   = λ B(x, y) + µ B(x, y')   (by symmetry).

A bilinear function B(x, y) of a pair of vectors is sometimes called a bilinear form on the vector space. A bilinear form B(x, y) which satisfies the symmetry condition is called a symmetric bilinear form. A scalar product is nothing but a symmetric bilinear form on vectors which is positive, B(x, x) ≥ 0, and non-degenerate: B(x, x) = 0 implies x = 0.

Example We considered the vector space R^n, the space of n-tuples (see subsection 1.2). One can consider the vector space R^n as a Euclidean space provided with the scalar product

    B(x, y) = x^1 y^1 + ... + x^n y^n   (1.12)

Exercise a) Check that this is indeed a scalar product.

Notations! The scalar product is sometimes called an inner product or a dot product. Later on we will use for the scalar product B(x, y) the shorter notation (x, y) or ⟨x, y⟩. Sometimes the notation x · y is used for the scalar product; usually this notation is reserved for the canonical case (1.12).

b) Show that the operation (x, y) = x^1 y^1 + x^2 y^2 - x^3 y^3 does not define a scalar product in R^3. (See also exercises in Homework 2.)
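The form in exercise b) fails positive-definiteness, which one can already see on a single well-chosen vector. A small numerical sketch, assuming numpy (the function name B is just the notation above):

```python
import numpy as np

def B(x, y):
    # The candidate "scalar product" of exercise b): x1*y1 + x2*y2 - x3*y3
    return x[0]*y[0] + x[1]*y[1] - x[2]*y[2]

x = np.array([0.0, 0.0, 1.0])
print(B(x, x))   # -1.0 < 0: positive-definiteness (1.11) fails

z = np.array([1.0, 0.0, 1.0])
print(B(z, z))   # 0.0 although z != 0: non-degeneracy fails as well
```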

1.6 Orthonormal basis in Euclidean space

One can see that for the scalar product (1.12) and for the basis {e_1, ..., e_n} defined by the relation (1.10) the following relations hold:

    (e_i, e_j) = δ_ij = 1 if i = j, 0 if i ≠ j.   (1.13)

Let {e_1, e_2, ..., e_n} be an ordered set of n vectors in an n-dimensional Euclidean space which obeys the conditions (1.13). One can see that this ordered set is a basis. (Indeed, the conditions (1.13) imply that these n vectors are linearly independent: suppose that λ^1 e_1 + λ^2 e_2 + ... + λ^n e_n = 0. For an arbitrary i, take the scalar product of both sides of this relation with the vector e_i. We come to the condition λ^i = 0. Hence the vectors (e_1, e_2, ..., e_n) are linearly independent.)

Definition-Proposition An ordered set of vectors {e_1, e_2, ..., e_n} in an n-dimensional Euclidean space which obeys the conditions (1.13) is a basis. This basis is called an orthonormal basis.

One can prove that every (finite-dimensional) Euclidean space possesses an orthonormal basis. Later, by default we consider only orthonormal bases in Euclidean spaces. Respectively, the scalar product will be defined by the formula (1.12). Indeed, let {e_1, e_2, ..., e_n} be an orthonormal basis in a Euclidean space. Then for two arbitrary vectors x, y such that x = x^i e_i, y = y^j e_j we have:

    (x, y) = (Σ_i x^i e_i, Σ_j y^j e_j) = Σ_{i,j=1}^n x^i y^j (e_i, e_j) = Σ_{i,j=1}^n x^i y^j δ_ij = Σ_{i=1}^n x^i y^i   (1.14)

We come to the scalar product (1.12). Later on we will usually consider the scalar product defined by the formula (1.12) ((1.14)).

Length of vectors, angle between vectors

The scalar product of a vector with itself defines the length of the vector:

    length of the vector x = |x| = sqrt((x, x)) = sqrt((x^1)^2 + ... + (x^n)^2)   (1.15)
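The conditions (1.13) say that the Gram matrix of the basis is the identity. The columns of any orthogonal matrix give such a basis; a brief numerical check of (1.13) and of the formula (1.14), assuming numpy:

```python
import numpy as np

# An orthonormal basis of R^2: columns of a rotation matrix
phi = 0.7
E = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# Gram matrix (e_i, e_j): should equal the identity, i.e. delta_ij as in (1.13)
print(E.T @ E)

# Components of x in this basis are x^i = (x, e_i); verify (1.14): (x, y) = sum x^i y^i
x, y = np.array([2.0, -1.0]), np.array([0.5, 3.0])
xc, yc = E.T @ x, E.T @ y
print(np.dot(x, y), np.dot(xc, yc))   # the two values coincide
```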

If we consider the Euclidean space E^n as a set of points, then the distance between two points x, y is the length of the corresponding vector:

    distance between points x, y = |x - y| = sqrt((y^1 - x^1)^2 + ... + (y^n - x^n)^2)   (1.16)

Geometrical properties of scalar product

We recall the very important formula relating the scalar (inner) product to the angle between vectors:

    (x, y) = x^1 y^1 + x^2 y^2 = |x| |y| cos φ,   (1.17)

where φ is the angle between the vectors x and y in E^2. This formula is valid also in the three-dimensional case and in the n-dimensional case for any n ≥ 1. It gives us a tool to calculate the angle between two vectors:

    (x, y) = x^1 y^1 + x^2 y^2 + ... + x^n y^n = |x| |y| cos φ.   (1.18)

In particular it follows from this formula that

    the angle between vectors x, y is acute if the scalar product (x, y) is positive,
    the angle between vectors x, y is obtuse if the scalar product (x, y) is negative,
    the vectors x, y are perpendicular if the scalar product (x, y) is equal to zero,   (1.19)

    |x| = sqrt((x, x)).   (1.20)

Remark Geometrical intuition tells us that the cosine of the angle between two vectors has to be less than or equal to one, and that it is equal to one if and only if the vectors x, y are collinear. Comparing with (1.18) we come to the inequality:

    (x, y)^2 = (x^1 y^1 + ... + x^n y^n)^2 ≤ ((x^1)^2 + ... + (x^n)^2)((y^1)^2 + ... + (y^n)^2) = (x, x)(y, y),

and (x, y)^2 = (x, x)(y, y) if and only if the vectors are collinear, i.e. x^i = λ y^i.   (1.21)

This is the famous Cauchy-Buniakovsky-Schwarz inequality, one of the most important inequalities in mathematics. (See for more details Homework 2.)
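Formula (1.18) and the inequality (1.21) are easy to test numerically. A sketch assuming numpy (the clip call guards against rounding errors slightly outside [-1, 1]):

```python
import numpy as np

def angle(x, y):
    """Angle between vectors via (1.18): cos(phi) = (x, y)/(|x||y|)."""
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

x = np.array([1.0, 0.0, 0.0])
y = np.array([1.0, 1.0, 0.0])
print(np.degrees(angle(x, y)))   # 45.0

# Cauchy-Buniakovsky-Schwarz (1.21): (x, y)^2 <= (x, x)(y, y)
print(np.dot(x, y)**2 <= np.dot(x, x) * np.dot(y, y))   # True
```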

1.7 Transition matrices. Orthogonal bases and orthogonal matrices

One can consider different bases in a vector space. Let A be an n x n matrix with real entries, A = ||a_ij||, i, j = 1, 2, ..., n:

    A = $\begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}$

Let {e_1, e_2, ..., e_n} be an arbitrary basis in an n-dimensional vector space V. The basis {e_1, e_2, ..., e_n} can be considered as a row of vectors, i.e. a 1 x n matrix with vector entries. Multiplying this 1 x n matrix by the matrix A we come to a new row of vectors {e'_1, e'_2, ..., e'_n} such that

    {e'_1, e'_2, ..., e'_n} = {e_1, e_2, ..., e_n} A,   (1.22), (1.23)

that is,

    e'_1 = a_11 e_1 + a_21 e_2 + a_31 e_3 + ... + a_n1 e_n
    e'_2 = a_12 e_1 + a_22 e_2 + a_32 e_3 + ... + a_n2 e_n
    ...
    e'_n = a_1n e_1 + a_2n e_2 + a_3n e_3 + ... + a_nn e_n   (1.24)

or shortly: e'_i = Σ_{k=1}^n e_k a_ki.

What is the condition under which the row {e'_1, e'_2, ..., e'_n} is a basis too? The row {e'_1, e'_2, ..., e'_n} is a basis if and only if the vectors (e'_1, e'_2, ..., e'_n) are linearly independent. Thus we come to

Proposition 1 Let {e_1, e_2, ..., e_n} be a basis in an n-dimensional vector space V, and let A be an n x n matrix with real entries. Then {e'_1, e'_2, ..., e'_n} = {e_1, e_2, ..., e_n} A is a basis if and only if the matrix A has rank n, i.e. A is a non-degenerate matrix.

We call the matrix A the transition matrix from the basis {e_1, e_2, ..., e_n} to the basis {e'_1, e'_2, ..., e'_n}.

Remark Recall that the rank of a matrix A is the maximal number of linearly independent rows (or columns). An n x n matrix A of rank n is called a non-degenerate matrix. Non-degenerate matrix = invertible matrix. A matrix is invertible if and only if its determinant is not equal to zero.

Now suppose that {e_1, e_2, ..., e_n} is an orthonormal basis in an n-dimensional Euclidean vector space. What is the condition under which the new basis {e'_1, e'_2, ..., e'_n} = {e_1, e_2, ..., e_n} A is an orthonormal basis too?

Definition We say that an n x n matrix A is an orthogonal matrix if its product with the transposed matrix is equal to the unit matrix:

    A^T A = I   (1.25)

Exercise. Prove that the determinant of an orthogonal matrix is equal to ±1.
Solution. A^T A = I. Hence det(A^T A) = det A^T det A = (det A)^2 = det I = 1. Hence det A = ±1.

We see that, in particular, an orthogonal matrix is non-degenerate, and {e'_1, e'_2, ..., e'_n} = {e_1, e_2, ..., e_n} A is a basis if {e_1, e_2, ..., e_n} is a basis and A is an orthogonal matrix. The following Proposition is valid:

Proposition 2 Let {e_1, e_2, ..., e_n} be an orthonormal basis in an n-dimensional Euclidean vector space. Then the new basis {e'_1, e'_2, ..., e'_n} = {e_1, e_2, ..., e_n} A is an orthonormal basis if and only if the transition matrix A is an orthogonal matrix.

Proof That the basis {e'_1, e'_2, ..., e'_n} is orthonormal means that (e'_i, e'_j) = δ_ij. We have:

    δ_ij = (e'_i, e'_j) = (Σ_m e_m A_mi, Σ_n e_n A_nj) = Σ_{m,n=1}^n A_mi A_nj (e_m, e_n) = Σ_{m,n=1}^n A_mi A_nj δ_mn = Σ_{m=1}^n A_mi A_mj = Σ_{m=1}^n (A^T)_im A_mj = (A^T A)_ij.   (1.26)

Hence (A^T A)_ij = δ_ij, i.e. A^T A = I.
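Both statements are easy to check numerically: a matrix is a valid transition matrix iff det A ≠ 0, and orthogonality means A^T A = I. A sketch assuming numpy (QR factorisation is used here only as a convenient source of an orthogonal matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random matrix is (almost surely) non-degenerate, hence a valid transition matrix
A = rng.normal(size=(3, 3))
print(np.linalg.det(A) != 0)            # rank n <=> det != 0 (Proposition 1)

# QR factorisation yields an orthogonal matrix Q: Q^T Q = I as in (1.25)
Q, _ = np.linalg.qr(A)
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
print(np.linalg.det(Q))                 # +1 or -1, as in the Exercise
```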

One can see that any orthogonal matrix has determinant 1 or -1: det(A^T A) = (det A)^2 = 1, so det A = ±1. Hence it is very useful to consider the following groups:

The group O(n), the group of orthogonal n x n matrices:

    O(n) = {A : A^T A = I}.   (1.27)

The group SO(n), the special orthogonal group of n x n matrices:

    SO(n) = {A : A^T A = I, det A = 1}.   (1.28)

1.7.1 Linear operators and transition matrices

Let {e_1, ..., e_n} be a basis in an n-dimensional vector space V. Then one can see that an arbitrary n x n matrix A = ||a_ij|| defines a linear operator (with respect to this basis) in the following way:

    x = Σ_{i=1}^n e_i x^i = (e_1, e_2, ..., e_n) $\begin{pmatrix} x^1 \\ x^2 \\ \vdots \\ x^n \end{pmatrix}$   (1.29)

    Âx = (e_1, e_2, ..., e_n) A $\begin{pmatrix} x^1 \\ x^2 \\ \vdots \\ x^n \end{pmatrix}$ = Σ_{i,k=1}^n e_k a_ki x^i.   (1.30)

1.8 Orthogonal 2 x 2 matrices

Let us find all orthogonal 2 x 2 matrices. Note that a rotation of a basis and a reflection of a basis are orthogonal transformations. We show now that an arbitrary transition matrix from an orthonormal basis to an orthonormal basis in E^2, i.e. an arbitrary orthogonal 2 x 2 matrix, is a rotation or a reflection.

Consider the 2-dimensional Euclidean space E^2 with an orthonormal basis {e, f}: (e, e) = (f, f) = 1 (i.e. |e| = |f| = 1) and (e, f) = 0 (i.e. the vectors e, f are orthogonal). Let {e', f'} be a new basis:

    {e', f'} = {e, f} T = {e, f} $\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$,  i.e.  e' = αe + γf,  f' = βe + δf.

The new basis is an orthonormal basis also: (e', e') = (f', f') = 1 and (e', f') = 0, i.e. the transition matrix is an orthogonal matrix:

    1 = (e', e') = (αe + γf, αe + γf) = α^2 + γ^2
    0 = (e', f') = (αe + γf, βe + δf) = αβ + γδ
    0 = (f', e') = (βe + δf, αe + γf) = αβ + γδ
    1 = (f', f') = (βe + δf, βe + δf) = β^2 + δ^2

Or in matrix notation: the matrix A = $\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$ is an orthogonal matrix if and only if

    A^T A = $\begin{pmatrix} \alpha & \gamma \\ \beta & \delta \end{pmatrix}\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \alpha^2 + \gamma^2 & \alpha\beta + \gamma\delta \\ \alpha\beta + \gamma\delta & \beta^2 + \delta^2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.   (1.31)

We have α^2 + γ^2 = 1, αβ + γδ = 0 and β^2 + δ^2 = 1. Hence one can choose angles φ, ψ with 0 ≤ φ, ψ < 2π such that

    α = cos φ, γ = sin φ, β = sin ψ, δ = cos ψ.

The condition αβ + γδ = 0 means that cos φ sin ψ + sin φ cos ψ = sin(φ + ψ) = 0. Hence

    sin φ = -sin ψ, cos φ = cos ψ  (φ + ψ = 0),  or  sin φ = sin ψ, cos φ = -cos ψ  (φ + ψ = π).

The first case: sin φ = -sin ψ, cos φ = cos ψ,

    A_φ = $\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}$   (det A_φ = 1)   (1.32)

The second case: sin φ = sin ψ, cos φ = -cos ψ,

    Ã_φ = $\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \cos\varphi & \sin\varphi \\ \sin\varphi & -\cos\varphi \end{pmatrix}$   (det Ã_φ = -1)   (1.33)

Consider the first case, when the matrix A_φ is defined by the relation (1.32). In this case the new basis is:

    (e', f') = (e, f) A_φ = (e, f) $\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}$ = (cos φ e + sin φ f, -sin φ e + cos φ f)   (1.34)

One can see that the new basis {e', f'} is an orthonormal basis too, and the transition matrix A_φ rotates the basis (e, f) by the angle φ (see Homework 1).
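A quick numerical confirmation of (1.32) and (1.33), assuming numpy: both matrices are orthogonal, with determinants +1 and -1 respectively.

```python
import numpy as np

def A(phi):
    """Rotation matrix (1.32)."""
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

def A_tilde(phi):
    """Reflection-type matrix (1.33)."""
    return np.array([[np.cos(phi),  np.sin(phi)],
                     [np.sin(phi), -np.cos(phi)]])

phi = 1.2
for M in (A(phi), A_tilde(phi)):
    print(np.allclose(M.T @ M, np.eye(2)), round(np.linalg.det(M)))
# both orthogonal; determinants +1 (rotation) and -1 (reflection)
```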

We call the matrix A_φ a rotation matrix.

Now consider the second case, when the matrix Ã_φ is defined by the relation (1.33). One can see that

    Ã_φ = $\begin{pmatrix} \cos\varphi & \sin\varphi \\ \sin\varphi & -\cos\varphi \end{pmatrix} = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ = A_φ R,   (1.35)

where we denote by R = $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ the transition matrix from the basis {e, f} to the basis {e, -f}: the reflection. We see that in the second case the orthogonal matrix is a composition of the rotation A_φ and the reflection matrix R, Ã_φ = A_φ R:

    {e, f}  --A_φ-->  {e' = cos φ e + sin φ f, f' = -sin φ e + cos φ f}  --R-->  {ẽ = e', f̃ = -f'}   (1.36)

One can see that the transition matrix Ã_φ is a reflection matrix with respect to the axis which makes the angle φ/2 with the x-axis. We come to the

Proposition. Let A be an arbitrary 2 x 2 orthogonal matrix, i.e. A^T A = I and in particular det A = ±1. (A transition matrix transforms an orthonormal basis to an orthonormal one.)

If det A = 1 then there exists an angle φ in [0, 2π) such that A = A_φ is the transition matrix (1.32), which rotates the basis vectors by the angle φ.

If det A = -1 then there exists an angle φ in [0, 2π) such that A = Ã_φ is a transition matrix which is a composition of a rotation and a reflection (see (1.36)), i.e. it is a reflection with respect to the axis which makes the angle φ/2 with the x-axis.

Let (x, y) be the components of a vector a in the basis (e, f), and (x', y') the components of the vector a in the rotated basis {e', f'}. Then it follows from (1.34) that

    a = x' e' + y' f' = (e', f') $\begin{pmatrix} x' \\ y' \end{pmatrix}$ = (e, f) A_φ $\begin{pmatrix} x' \\ y' \end{pmatrix}$ = (e, f) $\begin{pmatrix} x \\ y \end{pmatrix}$.

Hence

    $\begin{pmatrix} x \\ y \end{pmatrix}$ = A_φ $\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} x'\cos\varphi - y'\sin\varphi \\ x'\sin\varphi + y'\cos\varphi \end{pmatrix}$   (1.37)

and respectively

    $\begin{pmatrix} x' \\ y' \end{pmatrix}$ = A_φ^{-1} $\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x\cos\varphi + y\sin\varphi \\ -x\sin\varphi + y\cos\varphi \end{pmatrix}$,   (1.38)

because A_φ^{-1} = A_{-φ}.

1.9 Orientation in vector space

In the three-dimensional Euclidean space, besides the scalar (inner) product, one can consider another important operation: the vector product. For defining this operation we need an additional structure: orientation.

A basis {a, b, c} has the same orientation as a basis {a', b', c'} if they both obey the right hand rule or if they both obey the left hand rule. In the other case we say that these bases have opposite orientations. How to make this conception more mathematical?

Consider the set of all bases in a given vector space V. If (e_1, ..., e_n), (e'_1, ..., e'_n) are two bases, then one can consider the transition matrix T which transforms the old basis to the new one (see (1.23)). The transition matrix is non-degenerate, i.e. the determinant of this matrix is not equal to zero.

Definition Let {e_1, ..., e_n}, {e'_1, ..., e'_n} be two bases in R^n and let T be the transition matrix:

    {e'_1, ..., e'_n} = {e_1, ..., e_n} T.   (1.39)

We say that these two bases have the same orientation if the determinant of the transition matrix from the first basis to the second one is positive: det T > 0.

We say that the basis {e'_1, ..., e'_n} has the orientation opposite to the orientation of the basis {e_1, ..., e_n} (in other words, these two bases have opposite orientations) if the determinant of the transition matrix from the first basis to the second one is negative: det T < 0.

Remark The transition matrix from basis to basis is non-degenerate, hence its determinant cannot be equal to zero: it is either positive or negative.

One can see that orientation establishes an equivalence relation on the set of all bases. Denote

    {e_1, ..., e_n} ~ {e'_1, ..., e'_n}

if the two bases {e_1, ..., e_n} and {e'_1, ..., e'_n} have the same orientation, i.e. det T > 0 for the transition matrix. Show that ~ is an equivalence relation, i.e. that this relation is reflexive, symmetric and transitive. Check it:

It is reflexive, i.e. for every basis {e_1, ..., e_n}:

    {e_1, ..., e_n} ~ {e_1, ..., e_n},   (1.40)

because in this case the transition matrix is T = I and det I = 1 > 0.

It is symmetric, i.e. if {e_1, ..., e_n} ~ {e'_1, ..., e'_n} then {e'_1, ..., e'_n} ~ {e_1, ..., e_n}, because if T is the transition matrix from the first basis {e_1, ..., e_n} to the second basis {e'_1, ..., e'_n}, {e'_1, ..., e'_n} = {e_1, ..., e_n} T, then the transition matrix from the second basis {e'_1, ..., e'_n} to the first basis {e_1, ..., e_n} is the inverse matrix T^{-1}: {e_1, ..., e_n} = {e'_1, ..., e'_n} T^{-1}. Hence det T^{-1} = 1/det T > 0 if det T > 0.

It is transitive, i.e. if {e_1, ..., e_n} ~ {e'_1, ..., e'_n} and {e'_1, ..., e'_n} ~ {ẽ_1, ..., ẽ_n}, then {e_1, ..., e_n} ~ {ẽ_1, ..., ẽ_n}, because if T_1 is the transition matrix from the first basis {e_1, ..., e_n} to the second basis {e'_1, ..., e'_n} and T_2 is the transition matrix from the second basis {e'_1, ..., e'_n} to the third basis {ẽ_1, ..., ẽ_n}, then the transition matrix T from the first basis {e_1, ..., e_n} to the third basis {ẽ_1, ..., ẽ_n} is equal to T = T_1 T_2. We have det T = det(T_1 T_2) = det T_1 det T_2 > 0, because det T_1 > 0 and det T_2 > 0.

Since ~ is an equivalence relation, the set of all bases is a union of disjoint equivalence classes. Two bases are in the same equivalence class if and only if they have the same orientation. One can see that there are exactly two equivalence classes.

Indeed, let {e_1, e_2, ..., e_n} be an arbitrary basis in an n-dimensional vector space V. Swap the vectors e_1, e_2. We come to a new basis {e'_1, e'_2, ..., e'_n}:

    e'_1 = e_2, e'_2 = e_1, all other vectors are the same: e'_3 = e_3, ..., e'_n = e_n.   (1.41)

We have: {e 1, e 2, e 3..., e n} = {e 2, e 1, e 3,..., e n } = {e 1, e 2, e 3,..., e n }T swap, (1.42) where one can easy see that the determinant for transition matrix T swap is equal to 1. E.g. write down the transition matrix (1.42) in the case if dimension of vector space is equal to 5, n = 5. Then we have {e 1, e 2, e 3, e 4, e 5} = {e 2, e 1, e 3, e 4, e 5 } = {e 1, e 2, e 3, e 4, e 5 }T where 1 1 T swap = 1 (det T swap = 1). (1.43) 1 1 We see that bases {e 1, e 2..., e n } and {e 1, e 2..., e n} have opposite orientation. They does not belong to the same equivalence class. Now consider in V an arbitrary basis {ẽ 1, ẽ 2..., ẽ n }. For convenience call the initial basis {e 1, e 2..., e n } the I-st basis, call the basis {e 1, e 2..., e n} the II-nd basis and call a new arbitrary basis {ẽ 1, ẽ 2..., ẽ n } a III-rd basis. Show that the III-rd basis and the I-st basis have the same orientation or the III-rd basis and the II-nd basis have the same orientation, i.e. the third basis belongs to the equivalence class of the first basis or it belongs to the equivalence class of the second basis. Thus we will show that there are exactly two equivalence classes. Let T 1 be transition matrix from the I-st basis to the III-rd basis,i.e. from the basis {e 1, e 2..., e n }) to the basis {ẽ 1, ẽ 2..., ẽ n }. Let T 2 be transition matrix from II-nd basis to the III-rd basis,i.e.from the basis {e 1, e 2..., e n} to the basis {ẽ 1, ẽ 2..., ẽ n }. We have: {ẽ 1, ẽ 2..., ẽ n } = {e 1, e 2..., e n }T 1, {ẽ 1, ẽ 2..., ẽ n } = {e 1, e 2..., e n}t 2 Recall that {e 1, e 2..., e n} = {e 1, e 2..., e n }T swap with det T swap = 1 <. Hence {ẽ 1, ẽ 2..., ẽ n } = {e 1, e 2..., e n}t 2 = {e 1, e 2..., e n }T 1 T T 2 = T 1 T swap. We have det T 2 = det(t 1 T swap ) = det T 1 det T swap = det T 1. Hence det T 1 and det T 2 have opposite signs. We come to 2

Conclusion If det T_1 > 0 and det T_2 < 0, then the I-st and III-rd bases have the same orientation and the II-nd and III-rd bases have opposite orientations. If det T_2 > 0 and det T_1 < 0, then the II-nd and III-rd bases have the same orientation and the I-st and III-rd bases have opposite orientations.

In other words, if the bases {e_1, e_2, ..., e_n}, {e'_1, e'_2, ..., e'_n} have opposite orientations, then an arbitrary basis {ẽ_1, ẽ_2, ..., ẽ_n} belongs to the equivalence class of the basis {e_1, e_2, ..., e_n} or it belongs to the equivalence class of the basis {e'_1, e'_2, ..., e'_n}. The set of all bases is a union of two disjoint subsets. Any two bases which belong to the same subset have the same orientation. Any two bases which belong to different subsets have opposite orientations.

Definition An orientation of a vector space is an equivalence class of bases in this vector space.

Note that by fixing any basis we fix an orientation, considering the subset of all bases which have the same orientation as the given basis. There are two orientations. Every basis has the same orientation as a given basis or the orientation opposite to the orientation of the given basis.

Definition An oriented vector space is a vector space equipped with an orientation.

Consider examples.

Example (Orientation in two-dimensional space). Let {e_x, e_y} be any basis in R^2 and let a, b be two arbitrary vectors in R^2. Consider the ordered pair {a, b}. The transition matrix from the basis {e_x, e_y} to the ordered pair {a, b} is T = $\begin{pmatrix} a_x & b_x \\ a_y & b_y \end{pmatrix}$:

    {a, b} = {e_x, e_y} T = {e_x, e_y} $\begin{pmatrix} a_x & b_x \\ a_y & b_y \end{pmatrix}$,  i.e.  a = a_x e_x + a_y e_y,  b = b_x e_x + b_y e_y.

One can see that the ordered pair {a, b} is also a basis (i.e. these two vectors are linearly independent in R^2) if and only if the transition matrix is non-degenerate, i.e.

    det T ≠ 0.   (1.44)

The basis {a, b} has the same orientation as the basis {e_x, e_y} if

    det T > 0.   (1.45)

The basis {a, b} has the orientation opposite to the orientation of the basis {e_x, e_y} if

    det T < 0.   (1.46)

Exercise Show that the bases {e_x, e_y} and {e_y, e_x} have opposite orientations. (The transition matrix T = $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ has determinant -1.)

Exercise The bases {e_x, e_y} and {-e_y, e_x} have the same orientation. (The transition matrix T = $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ has determinant 1.)

(There are plenty of exercises in Homework 3.)

The relations (1.45), (1.46) define an equivalence relation on the set of bases. An orientation is an equivalence class of bases. There are two orientations; every basis has the same orientation as a given basis or the opposite orientation.

Example (Orientation in three-dimensional Euclidean space.) Let {e_x, e_y, e_z} be any basis in E^3 and let a, b, c be arbitrary three vectors in E^3:

    a = a_x e_x + a_y e_y + a_z e_z,  b = b_x e_x + b_y e_y + b_z e_z,  c = c_x e_x + c_y e_y + c_z e_z.

Consider the ordered triple {a, b, c}. The transition matrix from the basis {e_x, e_y, e_z} to the ordered triple {a, b, c} is T = $\begin{pmatrix} a_x & b_x & c_x \\ a_y & b_y & c_y \\ a_z & b_z & c_z \end{pmatrix}$:

    {a, b, c} = {e_x, e_y, e_z} T.

One can see that the ordered triple {a, b, c} is also a basis (i.e. these three vectors are linearly independent) if and only if the transition matrix is non-degenerate, i.e.

    det T ≠ 0.   (1.47)

The basis {a, b, c} has the same orientation as the basis {e_x, e_y, e_z} if

    det T > 0.   (1.48)

The basis {a, b, c} has the orientation opposite to the orientation of the basis {e_x, e_y, e_z} if

    det T < 0.   (1.49)
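The sign test (1.48)-(1.49) can be packaged as a small function, assuming numpy; same_orientation is an illustrative name, and the transition matrix is recovered from (1.39) by solving a linear system:

```python
import numpy as np

def same_orientation(basis1, basis2):
    """True if the transition matrix between the bases has positive determinant."""
    B1 = np.column_stack(basis1)
    B2 = np.column_stack(basis2)
    T = np.linalg.solve(B1, B2)   # basis2 = basis1 @ T, as in (1.39)
    return np.linalg.det(T) > 0

ex, ey, ez = np.eye(3)
print(same_orientation([ex, ey, ez], [ey, -ex, ez]))    # True: a rotation about Oz
print(same_orientation([ex, ey, ez], [-ex, -ey, -ez]))  # False: det(-I) = -1 in 3D
```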

Exercise Show that the bases {e_x, e_y, e_z} and {-e_x, -e_y, -e_z} have opposite orientations, but the bases {e_x, e_y, e_z} and {e_y, -e_x, e_z} have the same orientation.

We say that a Euclidean space is equipped with an orientation if we consider in this space only orthonormal bases which have the same orientation.

Remark Note that in the example above we considered arbitrary bases in E^3, not necessarily orthonormal bases.

The relations (1.48), (1.49) define an equivalence relation on the set of bases. An orientation is an equivalence class of bases. There are two orientations; every basis has the same orientation as a given basis or the opposite orientation. If two bases {e_i}, {e'_i} have the same orientation, then they can be transformed to each other by a continuous transformation, i.e. there exists a one-parametric family of bases {e_i(t)}, 0 ≤ t ≤ 1, such that {e_i(t)} at t = 0 equals {e_i} and {e_i(t)} at t = 1 equals {e'_i} (all the functions e_i(t) are continuous).

In the case of a three-dimensional space the following statement is true (Euler Theorem): let {e_i}, {e'_i} (i = 1, 2, 3) be two orthonormal bases in E^3 which have the same orientation. Then there exists an axis n such that the basis {e_i} transforms to the basis {e'_i} under a rotation around this axis.

1.10 Linear operator in E^3 preserving orientation is a rotation

Let P be a linear operator in the vector space R^n. Let {e_1, e_2, ..., e_n} be an arbitrary basis in R^n. Considering the action of P on the basis vectors we come to the vectors (e'_1, e'_2, ..., e'_n):

    e'_1 = P(e_1), e'_2 = P(e_2), ..., e'_n = P(e_n).   (1.50)

If the operator P is non-degenerate (det P ≠ 0), then the ordered n-tuple {e'_1, e'_2, ..., e'_n} is a basis too: a non-degenerate linear operator maps a basis to another basis.

Definition. Let {e_1, e_2, ..., e_n} be an arbitrary basis in R^n. Consider the basis {e'_1, e'_2, ..., e'_n}, where e'_i = P(e_i).

We say that a non-degenerate linear operator P (det P ≠ 0) preserves orientation if the bases {e_1, e_2, ..., e_n} and {e'_1, e'_2, ..., e'_n}, where e'_i = P(e_i), have the same orientation. In this case det P > 0.

We say that the linear operator P changes orientation if the bases {e_1, e_2, ..., e_n} and {e'_1, e'_2, ..., e'_n} have opposite orientations. In this case det P < 0.

It is easy to see that this definition is correct: the property of the operator P to preserve orientation does not depend on the choice of a basis. If the bases {e_1, e_2, ..., e_n} and {e'_1, e'_2, ..., e'_n}, where e'_i = P(e_i), have the same (opposite) orientation, then for another basis {f_1, f_2, ..., f_n} in R^n, the bases {f_1, f_2, ..., f_n} and {f'_1, f'_2, ..., f'_n}, where f'_i = P(f_i), also have the same (opposite) orientation.

In other words, we say that a non-degenerate linear operator P preserves orientation if it maps the vectors of an arbitrary basis to the vectors of another basis which has the same orientation as the initial basis. We say that a non-degenerate linear operator P changes orientation if it maps the vectors of an arbitrary basis to the vectors of another basis whose orientation is opposite to the orientation of the initial basis.

Example Let {e_x, e_y, e_z} be an orthonormal basis in E^3. Consider the linear operator P such that P(e_x) = e_y, P(e_y) = -e_x, P(e_z) = e_z. This operator maps the orthonormal basis {e_x, e_y, e_z} to the basis {e_y, -e_x, e_z}, which is orthonormal too. Both bases have the same orientation. Hence the operator P is a linear operator preserving orientation. In this case it is an orthogonal operator, because it maps an orthonormal basis to an orthonormal one. One can see that P is a rotation operator: under the action of the operator P, vectors in E^3 rotate by the angle π/2 about the axis Oz. The vectors λe_z, collinear (proportional) to the vector e_z, are eigenvectors of this operator: P(e_z) = e_z. The axis is the line spanned by the vector e_z.

One can show that in the Euclidean vector space E^3 every orthogonal operator which preserves orientation is a rotation.

Theorem (Euler Theorem). Let P be a linear orthogonal operator in E^3 preserving orientation. Then it is a rotation operator about some axis passing through the origin. (For the proof of this theorem see the solutions of Homework 3, Exercise 7.)

How to find the axis of rotation? Vectors which belong to the axis (starting at the origin) are eigenvectors of P with eigenvalue 1; they are all proportional to each other. Indeed, a rotation does not change the vectors which belong to the axis. Hence we come to the

Claim To find the axis we have to find an eigenvector of the operator P with eigenvalue 1.

In the example above the vector e_z was such an eigenvector of the operator P. Consider a more interesting example:

Example Let {e_x, e_y, e_z} be an orthonormal basis in E^3. Consider the linear operator P such that

    P(e_x) = e_z, P(e_y) = -e_y, P(e_z) = e_x.   (1.51)

This operator maps the orthonormal basis {e_x, e_y, e_z} to the basis {e_z, -e_y, e_x}, which is orthonormal too. Both bases have the same orientation. Hence the operator P is a linear orthogonal operator preserving orientation. According to the Euler Theorem it is a rotation operator about an axis. Find this axis.

Let the vector N (starting at the origin) belong to the axis. Then

    P(N) = N.   (1.52)

N is an eigenvector of the operator P with eigenvalue 1. To find the axis of the rotation (1.51) we have to find an eigenvector (1.52). It is easy to see that the vector e_x + e_z obeys the condition (1.52): P(e_x + e_z) = e_x + e_z.
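This axis computation can be reproduced numerically by diagonalising the matrix of P, under the convention that the columns of the matrix are the images of the basis vectors (numpy assumed):

```python
import numpy as np

# Matrix of the operator (1.51) in the basis {e_x, e_y, e_z}:
# columns are P(e_x) = e_z, P(e_y) = -e_y, P(e_z) = e_x
P = np.array([[0.0,  0.0, 1.0],
              [0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0]])

print(np.allclose(P.T @ P, np.eye(3)), np.linalg.det(P))  # orthogonal, det = 1

# The rotation axis is the eigenvector with eigenvalue 1, as in (1.52)
vals, vecs = np.linalg.eig(P)
axis = vecs[:, np.isclose(vals, 1.0)].real.ravel()
print(axis / np.linalg.norm(axis))   # proportional to e_x + e_z (up to sign)
```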

We see that the eigenvectors of P with eigenvalue 1 are the vectors proportional (collinear) to the vector e_x + e_z. These vectors span the line λ(e_x + e_z), the axis of rotation. We see that the axis of rotation is the line spanned by these eigenvectors; it is the bisector of the angle between the Ox and Oz axes.

1.11 Vector product in oriented E^3

Now we give the definition of the vector product of vectors in a 3-dimensional Euclidean space equipped with an orientation, i.e. we fix an equivalence class of orthonormal bases with the same orientation. Recall that to define the orientation it suffices to fix just one orthonormal basis {e_x, e_y, e_z}; then the equivalence class is the set of all orthonormal bases which have the same orientation as the basis {e_x, e_y, e_z}.

Let E^3 be a three-dimensional oriented Euclidean space, i.e. a Euclidean space equipped with an equivalence class of bases with the same orientation. By default we suppose that the orthonormal basis {e_x, e_y, e_z} belongs to the equivalence class of bases defining the orientation of E^3.

Definition The vector product L(x, y) = x × y is a function of two vectors which takes vector values, such that the following axioms (conditions) hold:

The vector L(x, y) = x × y is orthogonal to the vector x and to the vector y:

    (x × y) ⊥ x,  (x × y) ⊥ y.   (1.53)

In particular it is orthogonal to the plane spanned by the vectors x, y (in the case when the vectors x, y are linearly independent).

    x × y = -y × x   (anticommutativity condition)   (1.54)

    (λx + µy) × z = λ(x × z) + µ(y × z)   (linearity condition)   (1.55)

If the vectors x, y are perpendicular to each other, then the magnitude of the vector x × y is equal to the area of the rectangle formed by the vectors x and y:

    |x × y| = |x| |y|, if x ⊥ y, i.e. (x, y) = 0.   (1.56)

If the ordered triple of vectors {x, y, z}, where z = x × y, is a basis, then this basis and the orthonormal basis {e_x, e_y, e_z} in E^3 have the same orientation:

    {x, y, z} = {e_x, e_y, e_z} T, where for the transition matrix T, det T > 0.   (1.57)

The vector product depends on the orientation of the Euclidean space.

Comments on the conditions (axioms) (1.53)-(1.57):

1. The condition (1.55) of linearity of the vector product with respect to the first argument and the condition (1.54) of anticommutativity imply that the vector product is linear with respect to the second argument too. Show it:

    z × (λx + µy) = -(λx + µy) × z = -λ(x × z) - µ(y × z) = λ(z × x) + µ(z × y).

Hence the vector product is a bilinear operation. Comparing with the scalar product we see that the vector product is a bilinear anticommutative (antisymmetric) operation which takes vector values, while the scalar product is a bilinear symmetric operation which takes real values.

2. The condition of anticommutativity immediately implies that the vector product of two collinear (proportional) vectors x, y (y = λx) is equal to zero. It follows from the linearity and anticommutativity conditions. Show it: indeed,

    x × y = x × (λx) = λ(x × x) = -λ(x × x) = -x × (λx) = -x × y.   (1.58)

Hence x × y = 0 if y = λx.

3. It is very important to emphasize again that the vector product depends on the orientation. According to the condition (1.57), if z = x × y and we change the orientation of the Euclidean space, then z changes to -z, since the basis {x, y, -z} has the orientation opposite to the orientation of the basis {x, y, z}.

You may ask a question: does this operation (taking the vector product) which obeys all the conditions (axioms) (1.53)-(1.57) exist? And if it exists, is it unique?

We will show that the vector product is well-defined by the axioms (1.53)-(1.57), i.e. there exists an operation x × y which obeys the axioms (1.53)-(1.57), and these axioms define the operation uniquely.

We will first assume that there exists an operation L(x, y) = x × y which obeys all the axioms (1.53)-(1.57). Under this assumption we will construct this operation explicitly (if it exists!). We will then see that the operation we constructed indeed obeys all the axioms (1.53)-(1.57). Thus we will prove both the uniqueness and the existence.

Let {e_x, e_y, e_z} be an arbitrary orthonormal basis of the oriented Euclidean space E^3 which belongs to the equivalence class of bases defining the orientation. Then it follows from the considerations above for the vector product that

    e_x × e_x = 0,   e_x × e_y = e_z,   e_x × e_z = -e_y,
    e_y × e_x = -e_z,  e_y × e_y = 0,   e_y × e_z = e_x,
    e_z × e_x = e_y,  e_z × e_y = -e_x,  e_z × e_z = 0.   (1.59)

E.g. e_x × e_x = 0 because of (1.54); e_x × e_y is equal to e_z or to -e_z according to (1.56), and according to the orientation arguments (1.57), e_x × e_y = e_z.

Now it follows from linearity and (1.59) that for two arbitrary vectors a = a_x e_x + a_y e_y + a_z e_z, b = b_x e_x + b_y e_y + b_z e_z:

    a × b = (a_x e_x + a_y e_y + a_z e_z) × (b_x e_x + b_y e_y + b_z e_z)
          = a_x b_y (e_x × e_y) + a_x b_z (e_x × e_z) + a_y b_x (e_y × e_x) + a_y b_z (e_y × e_z) + a_z b_x (e_z × e_x) + a_z b_y (e_z × e_y)
          = (a_y b_z - a_z b_y) e_x + (a_z b_x - a_x b_z) e_y + (a_x b_y - a_y b_x) e_z.   (1.60)

It is convenient to represent this formula in the following very familiar way:

    L(a, b) = a × b = det $\begin{pmatrix} e_x & e_y & e_z \\ a_x & a_y & a_z \\ b_x & b_y & b_z \end{pmatrix}$   (1.61)

We see that the operation L(x, y) = x × y which obeys all the axioms (1.53)-(1.57), if it exists, has the appearance (1.61), where {e_x, e_y, e_z} is an arbitrary orthonormal basis (with rightly chosen orientation). On the other hand, using the properties of the determinant and the fact that vectors are orthogonal if and only if their scalar product equals zero, one can easily see that the vector product defined by this formula indeed obeys all the conditions (1.53)-(1.57).
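Formula (1.60) is exactly the familiar cross product of coordinate vectors. A sketch assuming numpy, checking it against the built-in np.cross and against the axioms (1.53) and (1.57):

```python
import numpy as np

def cross(a, b):
    """Vector product via the components formula (1.60)."""
    return np.array([a[1]*b[2] - a[2]*b[1],
                     a[2]*b[0] - a[0]*b[2],
                     a[0]*b[1] - a[1]*b[0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.0, 2.0])
z = cross(a, b)

print(np.allclose(z, np.cross(a, b)))                  # matches numpy's built-in
print(np.dot(z, a), np.dot(z, b))                      # 0.0 0.0, axiom (1.53)
print(np.linalg.det(np.column_stack([a, b, z])) > 0)   # orientation axiom (1.57)
```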

Thus we proved that the vector product is well-defined by the axioms (1.53)-(1.57), and it is given by the formula (1.61) in an arbitrary orthonormal basis (with rightly chosen orientation).

Remark In the formula above we chose an arbitrary orthonormal basis belonging to the equivalence class of bases defining the orientation. What happens if instead of the basis {e_x, e_y, e_z} we choose an arbitrary orthonormal basis {f_1, f_2, f_3}? The answer does not change if both bases {e_x, e_y, e_z} and {f_1, f_2, f_3} have the same orientation: the formulae (1.59) are valid for an arbitrary orthonormal basis which has the same orientation as the orthonormal basis {e_x, e_y, e_z}. In an oriented Euclidean space E^3 we may take an arbitrary basis from the equivalence class of bases defining the orientation. On the other hand, if we consider a basis with the opposite orientation, then according to the axiom (1.57) the vector product changes sign. (See also Homework 4.)

1.11.1 Vector product and area of parallelogram

The following Proposition states that the vector product can be considered as the area of a parallelogram:

Proposition 2 The modulus of the vector z = x × y is equal to the area of the parallelogram formed by the vectors x and y:

    S(x, y) = |x × y|,   (1.62)

where we denote by S(x, y) the area of the parallelogram formed by the vectors x, y.

Proof: Consider the expansion y = y_⊥ + y_∥, where the vector y_⊥ is orthogonal to the vector x and the vector y_∥ is parallel to the vector x. The area of the parallelogram formed by the vectors x and y is equal to the product of the length of the vector x and the height. The height is equal to the length of the vector y_⊥. We have S(x, y) = |x| |y_⊥|. On the other hand, z = x × y = x × (y_⊥ + y_∥) = x × y_⊥ + x × y_∥. But x × y_∥ = 0, because these vectors are collinear. Hence z = x × y_⊥ and |z| = |x × y_⊥| = |x| |y_⊥| = S(x, y), because the vectors x, y_⊥ are orthogonal to each other.

This Proposition is very important for understanding the meaning of the vector product. Shortly speaking, the vector product of two vectors is a vector which is orthogonal to the plane spanned by these vectors, such that its magnitude is equal to the area of the parallelogram formed by these vectors. The direction is defined by the orientation.

Remark It is sometimes useful to consider the area of a parallelogram not as a positive number but as a real number, positive or negative (see the next subsubsection).

It is worth recalling the formula, known from school, that the area of the parallelogram formed by vectors x, y equals the product of the base and the height. Hence

    |x × y| = |x| |y| sin θ,   (1.63)

where θ is the angle between the vectors x, y.

Finally I would like to stress again: the vector product of two vectors is equal to zero if these vectors are collinear (parallel); the scalar product of two vectors is equal to zero if these vectors are orthogonal.

Exercise Show that the vector product obeys the following identity:

    ((a × b) × c) + ((b × c) × a) + ((c × a) × b) = 0.   (Jacobi identity)   (1.64)

This identity is related to the fact that the heights of a triangle intersect in one point.

Exercise Show that a × (b × c) = b(a, c) - c(a, b).

1.11.2 Area of parallelogram in E^2 and determinant of 2 x 2 matrices

Let a, b be two vectors in the 2-dimensional vector space E^2. One can consider E^2 as a plane in the 3-dimensional Euclidean space E^3. Let n be a unit vector in E^3 which is orthogonal to E^2. One can see that a × b is proportional to the normal vector n to the plane E^2:

    a × b = A(a, b) n,   (1.65)

and the area of the parallelogram equals the modulus of the coefficient A(a, b):

    S(a, b) = |a × b| = |A(a, b)|.   (1.66)
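Both exercises, as well as (1.62)-(1.63), can be spot-checked numerically on random vectors (a check, not a proof; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.normal(size=(3, 3))

# Jacobi identity (1.64)
lhs = (np.cross(np.cross(a, b), c) + np.cross(np.cross(b, c), a)
       + np.cross(np.cross(c, a), b))
print(np.allclose(lhs, 0))   # True

# a x (b x c) = b (a, c) - c (a, b)
print(np.allclose(np.cross(a, np.cross(b, c)),
                  b*np.dot(a, c) - c*np.dot(a, b)))   # True

# Area of the parallelogram, (1.62) and (1.63): |a x b| = |a||b| sin(theta)
theta = np.arccos(np.dot(a, b) / (np.linalg.norm(a)*np.linalg.norm(b)))
print(np.isclose(np.linalg.norm(np.cross(a, b)),
                 np.linalg.norm(a)*np.linalg.norm(b)*np.sin(theta)))   # True
```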