REFLECTIONS IN A EUCLIDEAN SPACE

PHILIP BROCOUM

Date: May 10, 2004.

Let V be a finite dimensional real linear space.

Definition 1. A function $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ is a bilinear form in V if for all $x_1, x_2, x, y_1, y_2, y \in V$ and all $k \in \mathbb{R}$,
$$\langle x_1 + k x_2, y \rangle = \langle x_1, y \rangle + k \langle x_2, y \rangle \quad \text{and} \quad \langle x, y_1 + k y_2 \rangle = \langle x, y_1 \rangle + k \langle x, y_2 \rangle.$$

Definition 2. A bilinear form $\langle \cdot, \cdot \rangle$ in V is symmetric if $\langle x, y \rangle = \langle y, x \rangle$ for all $x, y \in V$. A symmetric bilinear form is nondegenerate if $\langle a, x \rangle = 0$ for all $x \in V$ implies $a = 0$. It is positive definite if $\langle x, x \rangle > 0$ for any nonzero $x \in V$. A symmetric positive definite bilinear form is an inner product in V.

Theorem 3. Define a bilinear form on $V = \mathbb{R}^n$ by $\langle e_i, e_j \rangle = \delta_{ij}$, where $\{e_i\}_{i=1}^n$ is a basis in V, and $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \neq j$. Then $\langle \cdot, \cdot \rangle$ is an inner product in V.

Proof. We must check that it is symmetric and positive definite: $\langle e_i, e_j \rangle = \langle e_j, e_i \rangle = 1$ if $i = j$, and $\langle e_i, e_j \rangle = \langle e_j, e_i \rangle = 0$ if $i \neq j$. By bilinearity, symmetry extends from the basis vectors to all of V, and for $x = \sum_i x_i e_i \neq 0$ we get $\langle x, x \rangle = \sum_i x_i^2 > 0$.

Definition 4. A Euclidean space is a finite dimensional real linear space with an inner product.

Theorem 5. Any Euclidean n-dimensional space V has a basis $\{e_i\}_{i=1}^n$ such that $\langle e_i, e_j \rangle = \delta_{ij}$.

Proof. First we will show that V has an orthogonal basis. Let $\{u_1, \ldots, u_n\}$ be a basis of V, and let $v_1 = u_1$; note that $v_1 \neq 0$, and so $\langle v_1, v_1 \rangle \neq 0$. Also, let $v_2$ be given by the following:
$$v_2 = u_2 - \frac{\langle u_2, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1.$$
The vectors $v_1$ and $v_2$ are orthogonal, for
$$\langle v_2, v_1 \rangle = \left\langle u_2 - \frac{\langle u_2, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1, v_1 \right\rangle = \langle u_2, v_1 \rangle - \frac{\langle u_2, v_1 \rangle}{\langle v_1, v_1 \rangle} \langle v_1, v_1 \rangle = 0.$$
Note that $v_1$ and $v_2$ are linearly independent, because $u_1$ and $u_2$ are basis elements and therefore linearly independent. We proceed by letting
$$v_j = u_j - \frac{\langle u_j, v_{j-1} \rangle}{\langle v_{j-1}, v_{j-1} \rangle} v_{j-1} - \cdots - \frac{\langle u_j, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1.$$
Now, for $j - 1 \geq k \geq 1$,
$$\langle v_j, v_k \rangle = \langle u_j, v_k \rangle - \frac{\langle u_j, v_{j-1} \rangle}{\langle v_{j-1}, v_{j-1} \rangle} \langle v_{j-1}, v_k \rangle - \cdots - \frac{\langle u_j, v_k \rangle}{\langle v_k, v_k \rangle} \langle v_k, v_k \rangle - \cdots - \frac{\langle u_j, v_1 \rangle}{\langle v_1, v_1 \rangle} \langle v_1, v_k \rangle = 0.$$
This equality holds by induction, because $\langle v_i, v_k \rangle = 0$ for $i = j-1, \ldots, 1$ with $i \neq k$, and the remaining negative term on the right hand side, the one with $i = k$, cancels with the first term. The equation for $v_j$ cannot give a linear relation among $\{u_1, \ldots, u_j\}$, because these vectors are a subset of a basis; hence $v_j \neq 0$, and $\{v_1, \ldots, v_j\}$ are linearly independent. Continuing with induction until $j = n$ produces an orthogonal basis for V. Note that all denominators in the above calculations are positive real numbers, because $v_i \neq 0$ for $i = 1, \ldots, n$. Lastly, when we define
$$e_i = \frac{v_i}{\|v_i\|}, \quad i = 1, \ldots, n,$$
then $\|e_i\| = 1$ for all i, and $\{e_1, \ldots, e_n\}$ is an orthonormal basis for V such that $\langle e_i, e_j \rangle = \delta_{ij}$.
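
The construction in this proof is the Gram-Schmidt process. As a quick illustration (not part of the original paper), here is a minimal Python sketch of it; the helper name gram_schmidt and the example basis are our own choices, and numpy is assumed.

    import numpy as np

    def gram_schmidt(basis):
        # Orthogonalize as in the proof of Theorem 5: subtract from each u_j
        # its projections onto the previously built v_k, then normalize.
        vs = []
        for u in basis:
            v = u.astype(float)
            for w in vs:
                v = v - (np.dot(u, w) / np.dot(w, w)) * w  # <u_j, v_k>/<v_k, v_k> * v_k
            vs.append(v)
        return [v / np.linalg.norm(v) for v in vs]

    # A non-orthogonal basis of R^3 as input.
    es = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                       np.array([1.0, 0.0, 1.0]),
                       np.array([0.0, 1.0, 1.0])])
    # The Gram matrix of the output is the identity: <e_i, e_j> = delta_ij.
    print([[round(float(np.dot(a, b)), 10) for b in es] for a in es])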

Below, $V = \mathbb{R}^n$ is a Euclidean space with the inner product $\langle \cdot, \cdot \rangle$.

Definition 6. Two vectors $x, y \in V$ are orthogonal if $\langle x, y \rangle = 0$. Two subspaces $U, W \subset V$ are orthogonal if $\langle x, y \rangle = 0$ for all $x \in U$, $y \in W$.

Theorem 7. If U, W are orthogonal subspaces in V, then $\dim(U) + \dim(W) = \dim(U + W)$.

Proof. Let $\{e_i\}_{i=1}^n$ be an orthonormal basis of V such that a subset of these basis vectors forms a basis of U and another subset forms a basis of W. (Such a basis exists: take orthonormal bases of U and of W, which are mutually orthogonal since U and W are orthogonal, and extend them to an orthonormal basis of V as in Theorem 5.) No basis vector of V can be a basis vector of both U and W, because the two subspaces are orthogonal. We can see this by taking x to be a basis vector of U and y to be a basis vector of W. Then $\langle x, y \rangle = 0$, while $\langle e_i, e_i \rangle = 1$, so x and y cannot be the same basis vector. Since this applies to all $x \in U$ and all $y \in W$, the two subspaces share none of the same basis vectors. We can then form a basis of $U + W$ by taking the basis of U together with the basis of W. Thus, $\dim(U) + \dim(W) = \dim(U + W)$.
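
A quick numerical check of Theorem 7 (our own example, with numpy's matrix_rank standing in for dimension):

    import numpy as np

    # U = span{(1,1,0)} and W = span{(1,-1,0), (0,0,1)} are orthogonal subspaces of R^3.
    U = np.array([[1.0, 1.0, 0.0]])
    W = np.array([[1.0, -1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    dim_U = np.linalg.matrix_rank(U)                    # 1
    dim_W = np.linalg.matrix_rank(W)                    # 2
    dim_UW = np.linalg.matrix_rank(np.vstack([U, W]))   # dim(U + W) = 3
    print(dim_U + dim_W == dim_UW)                      # True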

Definition 8. The orthogonal complement of the subspace $U \subset V$ is the subspace $U^{\perp} = \{y \in V : \langle x, y \rangle = 0 \text{ for all } x \in U\}$.

Definition 9. A hyperplane $H_x \subset V$ is the orthogonal complement to the one dimensional subspace in V spanned by $x \in V$.

Theorem 10 (Cauchy-Schwarz). For any $x, y \in V$,
$$\langle x, y \rangle^2 \leq \langle x, x \rangle \langle y, y \rangle,$$
and equality holds if and only if the vectors x, y are linearly dependent.

Proof. For all a and b in $\mathbb{R}$ we have, by positive definiteness,
$$\langle ax - by, ax - by \rangle \geq 0.$$
By bilinearity and symmetry this gives
$$a^2 \langle x, x \rangle - 2ab \langle x, y \rangle + b^2 \langle y, y \rangle \geq 0.$$
We can now set $a = \langle x, y \rangle$ and $b = \langle x, x \rangle$ and cancel the first term:
$$\langle x, x \rangle \left( -\langle x, y \rangle^2 + \langle x, x \rangle \langle y, y \rangle \right) \geq 0.$$
If $\langle x, x \rangle = 0$, then by positive definiteness $x = 0$, and therefore $\langle x, y \rangle = 0$ as well; thus equality holds in this case. If $\langle x, x \rangle \neq 0$, then it is positive, and the theorem follows from the inequality above.
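
The inequality is easy to test numerically; below is a small sketch with randomly chosen vectors (the example is ours, numpy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(4), rng.standard_normal(4)

    # <x,y>^2 <= <x,x><y,y> for an arbitrary pair ...
    assert np.dot(x, y) ** 2 <= np.dot(x, x) * np.dot(y, y)

    # ... with equality when x and y are linearly dependent.
    z = 3.0 * x
    assert np.isclose(np.dot(x, z) ** 2, np.dot(x, x) * np.dot(z, z))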

We will be interested in the linear mappings that respect inner products.

Definition 11. An orthogonal operator in V is a linear automorphism $f : V \to V$ such that $\langle f(x), f(y) \rangle = \langle x, y \rangle$ for all $x, y \in V$.

Theorem 12. If $f_1, f_2$ are orthogonal operators in V, then so are the inverses $f_1^{-1}, f_2^{-1}$ and the composition $f_1 \circ f_2$. The identity mapping is orthogonal.

Proof. Because an orthogonal operator f is one to one and onto, we know that for all $a, b \in V$ there exist $x, y \in V$ such that $x = f^{-1}(a)$ and $y = f^{-1}(b)$. Thus, $f(x) = a$ and $f(y) = b$. By orthogonality, $\langle f(x), f(y) \rangle = \langle x, y \rangle$. This means that $\langle a, b \rangle = \langle x, y \rangle$, and thus we conclude that $\langle f^{-1}(a), f^{-1}(b) \rangle = \langle a, b \rangle$. For the composition, consider $\langle (f_1 \circ f_2)(x), (f_1 \circ f_2)(y) \rangle$. Because $f_1$ is orthogonal, $\langle (f_1 \circ f_2)(x), (f_1 \circ f_2)(y) \rangle = \langle f_2(x), f_2(y) \rangle$. Because $f_2$ is also orthogonal, $\langle f_2(x), f_2(y) \rangle = \langle x, y \rangle$. The identity map is orthogonal by definition, and this completes the proof.

Remark 13. The above theorem says that the orthogonal operators in a Euclidean space form a group, i.e. a set closed with respect to compositions which contains an inverse to each element and contains an identity operator.

Example 14. All orthogonal operators in $\mathbb{R}^2$ can be expressed as matrices, and they form a group with respect to matrix multiplication. The orthogonal operators of $\mathbb{R}^2$ are all the reflections and rotations. The standard rotation matrix is
$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
and the standard reflection matrix is
$$\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}.$$
The matrices of the above two types form the set of all orthogonal operators in $\mathbb{R}^2$. We know this because $\langle Ax, Ay \rangle = \langle x, y \rangle$ gives $A^T A = I$, so $\det(A)^2 = 1$, which implies $\det(A) = \pm 1$. The orthogonal matrices with determinant $+1$ are of the first form (the rotations), and those with determinant $-1$ are of the second form (the reflections).
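
A sketch checking Example 14 and Theorem 12 in matrix form (the angles 0.7 and 1.3 are arbitrary choices of ours, numpy assumed):

    import numpy as np

    def rotation(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    def reflection(theta):
        return np.array([[np.cos(theta),  np.sin(theta)],
                         [np.sin(theta), -np.cos(theta)]])

    A, B = rotation(0.7), reflection(1.3)
    for M in (A, B):
        assert np.allclose(M.T @ M, np.eye(2))   # A^T A = I, i.e. <Ax, Ay> = <x, y>
    assert np.isclose(np.linalg.det(A), 1.0)     # rotations have det +1
    assert np.isclose(np.linalg.det(B), -1.0)    # reflections have det -1

    # Theorem 12: inverses and compositions are again orthogonal.
    for M in (np.linalg.inv(A), A @ B):
        assert np.allclose(M.T @ M, np.eye(2))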

Now we are ready to introduce the notion of a reflection in a Euclidean space. A reflection in V is a linear mapping $s : V \to V$ which sends some nonzero vector $\alpha \in V$ to its negative and fixes pointwise the hyperplane $H_\alpha$ orthogonal to $\alpha$. To indicate this vector, we will write $s = s_\alpha$. The use of Greek letters for vectors is traditional in this context.

Definition 15. A reflection in V with respect to a vector $\alpha \in V$ is defined by the formula
$$s_\alpha(x) = x - \frac{2 \langle x, \alpha \rangle}{\langle \alpha, \alpha \rangle} \alpha.$$

Theorem 16. With the above definition, we have:
(1) $s_\alpha(\alpha) = -\alpha$ and $s_\alpha(x) = x$ for any $x \in H_\alpha$;
(2) $s_\alpha$ is an orthogonal operator;
(3) $s_\alpha^2 = \mathrm{Id}$.

Proof. (1) Using the formula, we know that
$$s_\alpha(\alpha) = \alpha - \frac{2 \langle \alpha, \alpha \rangle}{\langle \alpha, \alpha \rangle} \alpha.$$
Since $\langle \alpha, \alpha \rangle / \langle \alpha, \alpha \rangle = 1$, the formula simplifies to $s_\alpha(\alpha) = \alpha - 2\alpha$, and thus $s_\alpha(\alpha) = -\alpha$. For $x \in H_\alpha$, we know that $\langle x, \alpha \rangle = 0$. We can now write
$$s_\alpha(x) = x - \frac{2 \cdot 0}{\langle \alpha, \alpha \rangle} \alpha,$$
and so $s_\alpha(x) = x$.

(2) By Theorem 5, we can write any $x \in V$ as a sum of basis vectors $\{e_i\}_{i=1}^n$ such that $\langle e_i, e_j \rangle = \delta_{ij}$. We can also choose this basis so that $e_1$ is in the direction of $\alpha$ and the others are orthogonal to $\alpha$. Because of linearity, it suffices to check that $\langle s_\alpha(e_i), s_\alpha(e_j) \rangle = \langle e_i, e_j \rangle$. We have
$$s_\alpha(e_i) = e_i - \frac{2 \langle e_i, \alpha \rangle}{\langle \alpha, \alpha \rangle} \alpha.$$
If $e_i \neq e_1$, then $e_i$ is orthogonal to $\alpha$, and
$$s_\alpha(e_i) = e_i - \frac{2 \cdot 0}{\langle \alpha, \alpha \rangle} \alpha = e_i.$$
If $e_i = e_1$, then by part one $s_\alpha(e_1) = -e_1$. We can now conclude that $\langle s_\alpha(e_i), s_\alpha(e_j) \rangle = \langle \pm e_i, \pm e_j \rangle$. It is easy to check that $\langle \pm e_i, \pm e_j \rangle = \langle e_i, e_j \rangle = \delta_{ij}$: for $i = j$ the two signs agree, and for $i \neq j$ both sides are zero.

(3) By the same reasoning as part two, it suffices to check that $s_\alpha^2(e_i) = e_i$ for any basis vector $e_i$. Also from part two, we know that $s_\alpha(e_i)$ is either equal to $e_i$ or to $-e_i$. If $s_\alpha(e_i) = e_i$, then $e_i$ is orthogonal to $\alpha$, and clearly $s_\alpha^2(e_i) = e_i$. If $s_\alpha(e_i) = -e_i$, then $s_\alpha^2(e_i) = -(-e_i) = e_i$, and this completes the proof.
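
Definition 15 translates directly into code; the sketch below checks the three parts of Theorem 16 on an arbitrary example (the vectors are our own choices, numpy assumed):

    import numpy as np

    def s(alpha, x):
        # Definition 15: s_alpha(x) = x - 2<x, alpha>/<alpha, alpha> * alpha
        return x - 2.0 * np.dot(x, alpha) / np.dot(alpha, alpha) * alpha

    alpha = np.array([1.0, 2.0, 2.0])
    x, y = np.array([3.0, -1.0, 0.5]), np.array([0.0, 1.0, -2.0])

    assert np.allclose(s(alpha, alpha), -alpha)              # (1) alpha goes to -alpha
    h = x - np.dot(x, alpha) / np.dot(alpha, alpha) * alpha  # some h in H_alpha
    assert np.allclose(s(alpha, h), h)                       # (1) H_alpha is fixed pointwise
    assert np.isclose(np.dot(s(alpha, x), s(alpha, y)),
                      np.dot(x, y))                          # (2) s_alpha is orthogonal
    assert np.allclose(s(alpha, s(alpha, x)), x)             # (3) s_alpha^2 = Id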

Therefore, reflections generate a group: their compositions are orthogonal operators by Theorem 12, and the inverse of a reflection is equal to itself by Theorem 16. Below we consider some basic examples of subgroups of orthogonal operators obtained by repeated application of reflections.

Example 17. Consider the group $S_n$ of permutations of n numbers. It is generated by transpositions $t_{ij}$, where $i \neq j$ are two numbers between 1 and n, and $t_{ij}$ sends i to j and j to i, while preserving all other numbers. The compositions of all such transpositions form $S_n$. Define a set of linear mappings $T_{ij} : \mathbb{R}^n \to \mathbb{R}^n$ in an orthonormal basis $\{e_i\}_{i=1}^n$ by
$$T_{ij} e_i = e_j; \quad T_{ij} e_j = e_i; \quad T_{ij} e_k = e_k, \ k \neq i, j.$$
Then, since any element $\sigma \in S_n$ is a composition of transpositions, it defines a linear automorphism of $\mathbb{R}^n$ equal to the composition of the linear mappings defined above.

(1) $T_{ij}$ acts as a reflection with respect to the vector $e_i - e_j \in \mathbb{R}^n$.

Proof. We begin with our definition of a reflection,
$$s_\alpha(x) = x - \frac{2 \langle x, \alpha \rangle}{\langle \alpha, \alpha \rangle} \alpha,$$
which for $\alpha = e_i - e_j$ reads
$$s_{e_i - e_j}(x) = x - \frac{2 \langle x, e_i - e_j \rangle}{\langle e_i - e_j, e_i - e_j \rangle} (e_i - e_j).$$
Because of linearity, it suffices to check the following:
(a) $s_{e_i - e_j}(e_k) = e_k$ for $k \neq i, j$;
(b) $s_{e_i - e_j}(e_i) = e_j$;
(c) $s_{e_i - e_j}(e_j) = e_i$.
The third item follows from the second by symmetry, so we must only check the first two conditions outlined above. Note that $\langle e_i - e_j, e_i - e_j \rangle = 2$, because
$$\langle e_i - e_j, e_i - e_j \rangle = \langle e_i, e_i \rangle - \langle e_i, e_j \rangle - \langle e_j, e_i \rangle + \langle e_j, e_j \rangle = 1 - 0 - 0 + 1 = 2,$$
and so our formula simplifies to
$$s_{e_i - e_j}(e_k) = e_k - \langle e_k, e_i - e_j \rangle (e_i - e_j).$$
Suppose k does not equal i or j, for condition (a). Then
$$\langle e_k, e_i - e_j \rangle = \langle e_k, e_i \rangle - \langle e_k, e_j \rangle = 0 - 0 = 0,$$
so $s_{e_i - e_j}(e_k) = e_k$. Suppose k equals i, for condition (b). Then
$$\langle e_i, e_i - e_j \rangle = \langle e_i, e_i \rangle - \langle e_i, e_j \rangle = 1 - 0 = 1,$$
so $s_{e_i - e_j}(e_i) = e_i - (e_i - e_j) = e_j$. Thus, $T_{ij}$ does indeed act as a reflection with respect to the vector $e_i - e_j \in \mathbb{R}^n$.

(2) Any element $\sigma$ of $S_n$ fixes pointwise the line in $\mathbb{R}^n$ spanned by $e_1 + e_2 + \cdots + e_n$.

Proof. The line described can be written as a variable times the sum of the basis vectors:
$$c (e_1 + e_2 + \cdots + e_n), \quad c \in \mathbb{R}.$$
In other words, the line is the set of points equidistant from each basis vector, i.e. the vectors whose coordinates are all equal. Because vector addition is commutative, it does not matter in which order the basis vectors are added. Since elements of $S_n$ simply permute the basis vectors, every point of the line described remains unchanged.
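
Both items can be confirmed numerically; the sketch below (our own, with n = 4 and the zero-indexed pair i = 1, j = 3) builds $T_{ij}$ as a permutation matrix and compares it with the matrix $I - 2\alpha\alpha^T / \langle \alpha, \alpha \rangle$ of $s_\alpha$ for $\alpha = e_i - e_j$:

    import numpy as np

    n, i, j = 4, 1, 3
    T = np.eye(n)
    T[[i, j]] = T[[j, i]]                    # T_ij: swap the i-th and j-th coordinates

    alpha = np.zeros(n)
    alpha[i], alpha[j] = 1.0, -1.0           # alpha = e_i - e_j
    S = np.eye(n) - 2.0 * np.outer(alpha, alpha) / np.dot(alpha, alpha)

    assert np.allclose(T, S)                 # item (1): T_ij = s_{e_i - e_j}
    ones = np.ones(n)                        # a point on the line c(e_1 + ... + e_n)
    assert np.allclose(T @ ones, ones)       # item (2): the line is fixed pointwise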

(3) Let n = 3. There are six elements of $S_3$ acting in $\mathbb{R}^3$. If we call the basis vectors x, y, z, then the six elements are: $T_{xy}$, $T_{xz}$, $T_{yz}$, $T_{xy} T_{xz}$, $T_{xy} T_{yz}$, $T_{xx}$. There is a special plane U orthogonal to $x + y + z$ such that $T_{ij}[U] = U$ for all i and j. It can be said that U is closed under transpositions, because any point in U remains in U after a transposition (although the point may be in a different place, it is still in the plane U). Because U is closed, it can be thought of as $\mathbb{R}^2$, and the elements of $S_3$ correspond to the reflection and rotation matrices mentioned earlier in Example 14. $T_{xy}$ reflects the plane U through the plane $x = y$; $T_{xz}$ and $T_{yz}$ reflect the plane U through the planes $x = z$ and $y = z$ respectively. These three transpositions correspond to reflections in $\mathbb{R}^2$ about the line where the two planes intersect. The other three elements of $S_3$ rotate U about the line described above in part (2): $T_{xy} T_{xz}$ rotates U by 120 degrees clockwise, $T_{xy} T_{yz}$ rotates U by 120 degrees counterclockwise, and $T_{xx}$ is a rotation by 360 degrees, also known as the identity map.

Remark 18. Notice that the product of two reflections is a rotation. As seen above in the plane U, a transposition is a reflection, and the composition of any two transpositions is a rotation.
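
The identification of $S_3$ acting on U with the matrices of Example 14 can be made concrete. Below is a sketch (with our own choice of orthonormal basis for U) showing that $T_{xy} T_{xz}$ restricted to U is a rotation by 120 degrees:

    import numpy as np

    def T(i, j):
        M = np.eye(3)
        M[[i, j]] = M[[j, i]]              # the transposition matrix T_ij in R^3
        return M

    # An orthonormal basis of the plane U orthogonal to (1, 1, 1).
    u1 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
    u2 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6.0)
    B = np.column_stack([u1, u2])

    R = B.T @ T(0, 1) @ T(0, 2) @ B        # T_xy T_xz restricted to U
    theta = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    print(np.round(R, 3), round(theta))    # a 2x2 rotation matrix; theta = +-120,
                                           # the sign depending on the basis orientation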

Example 19. The action of $S_n$ in $\mathbb{R}^n$ described above can be composed with the reflections $\{P_i\}_{i=1}^n$, where $P_i$ sends $e_i$ to its negative and fixes all other elements of the basis ($e_k$, $k \neq i$).

(1) The obtained set of orthogonal operators has no identity element.

Proof. If $T_{ij} P_k$ were an identity element, then $P_k$ would have to be the inverse of $T_{ij}$. However, $P_k$ is its own inverse, so $P_k$ would have to equal $T_{ij}$. But no transposition $T_{ij}$ is equal to a reflection $P_k$, because $T_{ij}$ acts as a reflection with respect to the vector $e_i - e_j$ while $P_k$ acts as a reflection with respect to the vector $e_k$, and since there are no zero basis vectors, $e_i - e_j$ never equals $e_k$. There is no overlap between the $T_{ij}$'s and the $P_k$'s, and thus there cannot be an identity element.

(2) Four distinct orthogonal operators can be constructed in this way for n = 2, and 18 can be constructed for n = 3. In $\mathbb{R}^2$ there exist two reflections, $P_x$ and $P_y$, and two transpositions, $T_{xy}$ and $T_{xx}$ (the identity). These can be composed in $2 \times 2 = 4$ ways. In $\mathbb{R}^3$ there are three reflections and six elements of $S_3$, and these can be composed in $3 \times 6 = 18$ ways.

(3) Consider the case where n = 2. The operators correspond to the matrices of orthogonal operators in $\mathbb{R}^2$ as listed in Example 14 (see the sketch below). The operator $T_{xx} P_x$ is a reflection about the y axis. The operator $T_{xx} P_y$ is a reflection about the x axis. The operator $T_{xy} P_x$ is a rotation counterclockwise by 270 degrees. The operator $T_{xy} P_y$ is a rotation counterclockwise by 90 degrees.

Remark 20. The two examples above correspond to the series $A_{n-1}$ and $B_n$ in the classification of finite reflection groups.
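
To close, a sketch enumerating the four operators of Example 19(3) for n = 2, confirming item (1) along the way: none of them is the identity, and the determinant separates the reflections (det -1) from the rotations (det +1). The matrix names mirror the notation above; numpy is assumed.

    import numpy as np

    T_xx = np.eye(2)                           # the identity permutation
    T_xy = np.array([[0.0, 1.0],
                     [1.0, 0.0]])              # swap x and y
    P_x, P_y = np.diag([-1.0, 1.0]), np.diag([1.0, -1.0])

    for name, M in [("T_xx P_x", T_xx @ P_x), ("T_xx P_y", T_xx @ P_y),
                    ("T_xy P_x", T_xy @ P_x), ("T_xy P_y", T_xy @ P_y)]:
        assert not np.allclose(M, np.eye(2))   # item (1): no identity element
        print(name, round(np.linalg.det(M)))   # -1, -1, 1, 1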