# Chapter 8. Rigid transformations


We are about to start drawing figures in 3D. There are no built-in routines for this purpose in PostScript, and we shall have to start more or less from scratch in extending the language to do the job we want done. Figures in 3D are a great deal more complicated than ones in 2D, and there are a number of new mathematical ideas to be introduced.

The biggest problem in 3D is that you can't represent an object exactly, since in drawing it you must collapse it to 2D. Nonetheless this aim is not entirely unreasonable, because of course our eyes render the world on the back of the retina, itself a two-dimensional surface. But to make the illusion work we need to move objects around as we look at them, or at least allow for this possibility, and to introduce shadows and other responses to light to create a perception of depth. In the next chapter we shall describe in detail how we arrange viewing things in space. In this one we shall describe how things move around without distortion. It is useful to discuss this in dimensions one and two as well as three.

## 1. Rigid transformations

If we move an object around normally, it will in some sense remain rigid and will not distort. Here is the technical way we formulate rigidity. Suppose we move an object from one position to another. In this process, any point $P$ of the object will be moved to another point $P'$. We shall say that the points of the object are transformed into other points. A transformation is said to be rigid if it preserves relative distances; that is to say, if $P$ and $Q$ are transformed to $P'$ and $Q'$, then the distance from $P'$ to $Q'$ is the same as that from $P$ to $Q$.

We shall make an extra assumption about rigid transformations. It happens to be a redundant assumption, since it can be proven that every rigid transformation satisfies it. We shall not prove this, however, because it would require a long digression, and instead just take it more or less for granted.
The condition is this: all the rigid transformations we consider will be affine. This means that if we have chosen a linear coordinate system in whatever set we are looking at (a line, a plane, or space), then the transformation $P \mapsto P'$ is calculated in terms of coordinate vectors $x$ and $x'$ according to the formula

$$x' = Ax + v$$

where $A$ is a matrix and $v$ a vector. In 3D, for example, we require

$$\begin{bmatrix} x'_1 \\ x'_2 \\ x'_3 \end{bmatrix} = A \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}.$$

It turns out that all rigid transformations are in fact affine, but we shall not worry about that here. The matrix $A$ is called the linear component, $v$ the translation component of the transformation.

A rigid transformation preserves angles as well as distances. That is to say, if $P$, $Q$ and $R$ are three points transformed to $P'$, $Q'$, and $R'$, then the angle $\theta$ between segments $PQ$ and $PR$ is the same as the angle $\theta'$ between $P'Q'$ and $P'R'$. This is because of the cosine law, which gives

$$\cos\theta = \frac{|PQ|^2 + |PR|^2 - |QR|^2}{2\,|PQ|\,|PR|} = \frac{|P'Q'|^2 + |P'R'|^2 - |Q'R'|^2}{2\,|P'Q'|\,|P'R'|} = \cos\theta'.$$
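The chapter will eventually implement all of this in PostScript; as a plain illustration in the meantime, here is a short Python sketch (not part of the chapter's library, all names mine) that applies $x' = Ax + v$ and checks that a rotation-plus-translation preserves distances:

```python
import math

def mat_vec(A, x):
    # Multiply a matrix (given as a list of rows) by a column vector.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def affine(A, v, x):
    # Apply the affine transformation x -> A x + v.
    return [axi + vi for axi, vi in zip(mat_vec(A, x), v)]

def dist(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# Linear component: rotation by 90 degrees (orthogonal, hence rigid).
A = [[0, -1],
     [1,  0]]
v = [3, 1]   # translation component

P, Q = [1, 0], [4, 2]
P1, Q1 = affine(A, v, P), affine(A, v, Q)
print(P1)                                       # [3, 2]
print(abs(dist(P1, Q1) - dist(P, Q)) < 1e-12)   # True: distances preserved
```

A shear matrix substituted for `A` would make the last check fail, which is exactly the distinction between affine and rigid affine transformations.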

A few other facts are more elementary:

- The composition of rigid transformations is rigid.
- The inverse of a rigid transformation is rigid.

In the second statement, it is implicit that a rigid transformation has an inverse. This is easy to demonstrate. An affine transformation will be rigid exactly when its linear component is, since a translation will certainly not distort lengths. And if its linear component does not have an inverse, then it is singular, which means that it will collapse some line, at least, onto a point. Then it cannot preserve lengths, which is a contradiction. In order to classify rigid transformations, which is what we shall now do, we must thus classify the linear ones.

Exercise 1.1. The inverse of the transformation $x \mapsto Ax + v$ is also affine. What are its linear and translation components?

## 2. Dot and cross products

A bit later we shall need to know some basic facts about vector algebra. In any number of dimensions we define the dot product of two vectors

$$u = (x_1, x_2, \dots, x_n), \quad v = (y_1, y_2, \dots, y_n)$$

to be

$$u \cdot v = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n.$$

The relation between dot products and geometry is expressed by the cosine rule for triangles, which asserts that if $\theta$ is the angle between $u$ and $v$ then

$$\cos\theta = \frac{u \cdot v}{\|u\|\,\|v\|}.$$

In particular, $u$ and $v$ are perpendicular exactly when $u \cdot v = 0$.

In 3D there is another kind of product. If

$$u = (x_1, x_2, x_3), \quad v = (y_1, y_2, y_3)$$

then their cross product $u \times v$ is the vector

$$(x_2 y_3 - x_3 y_2,\ x_3 y_1 - x_1 y_3,\ x_1 y_2 - x_2 y_1).$$

This formula can be remembered if we write the vectors $u$ and $v$ in a $2 \times 3$ matrix

$$\begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{bmatrix}$$

and then, for each column of this matrix, calculate the determinant of the $2 \times 2$ matrix we get by crossing out that column. The only tricky part is that with the middle coefficient we must reverse sign. Thus

$$u \times v = \left( \begin{vmatrix} x_2 & x_3 \\ y_2 & y_3 \end{vmatrix},\ -\begin{vmatrix} x_1 & x_3 \\ y_1 & y_3 \end{vmatrix},\ \begin{vmatrix} x_1 & x_2 \\ y_1 & y_2 \end{vmatrix} \right), \quad\text{where}\quad \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.$$
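These definitions translate directly into code. Exercise 2.1 below asks for PostScript versions; a Python sketch is easier to read here, so this is a stand-in with names of my own choosing:

```python
import math

def dot(u, v):
    # u . v = x1*y1 + x2*y2 + ... + xn*yn
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    # The 3D cross product, with the middle entry's sign reversed
    # as in the 2x2-determinant mnemonic above.
    (x1, x2, x3), (y1, y2, y3) = u, v
    return (x2 * y3 - x3 * y2, x3 * y1 - x1 * y3, x1 * y2 - x2 * y1)

def angle(u, v):
    # cos(theta) = u . v / (|u| |v|), from the cosine rule.
    return math.acos(dot(u, v) / math.sqrt(dot(u, u) * dot(v, v)))

print(cross((1, 0, 0), (0, 1, 0)))   # (0, 0, 1), by the right hand rule
print(dot((1, 2, 0), (-2, 1, 5)))    # 0: the vectors are perpendicular
```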
The geometrical significance of the cross product is contained in these rules:

- The length of $w = u \times v$ is the area of the parallelogram spanned in space by $u$ and $v$.

- It lies in the line perpendicular to the plane containing $u$ and $v$, and its direction is determined by the right hand rule.

[Figure: $u$, $v$, and $u \times v$; $v \times u$ points the opposite way.]

The cross product $u \times v$ will vanish only when $u$ and $v$ are multiples of one another.

In these notes, the main use of dot products and cross products will be in calculating projections.

(1) Suppose $\alpha$ to be any vector in space and $u$ some other vector in space. The projection of $u$ along $\alpha$ is the vector $u_0$ we get by projecting $u$ perpendicularly onto the line through $\alpha$.

[Figure: $u$ projected onto the line through $\alpha$, giving components $u_0$ and $u_\perp$.]

What is this projection? It must be a multiple of $\alpha$. We can figure out what multiple by using trigonometry. We know three facts: (a) The angle $\theta$ between $\alpha$ and $u$ is determined by the formula

$$\cos\theta = \frac{u \cdot \alpha}{\|\alpha\|\,\|u\|}.$$

(b) The length of the vector $u_0$ is $\|u\|\cos\theta$, and this is to be interpreted algebraically, in the sense that if $u_0$ faces in the direction opposite to $\alpha$ this is negative. (c) Its direction is either the same as or opposite to that of $\alpha$. The vector $\alpha/\|\alpha\|$ is a vector of unit length pointing in the same direction as $\alpha$. Therefore

$$u_0 = \|u\|\cos\theta\,\frac{\alpha}{\|\alpha\|} = \|u\|\,\frac{u \cdot \alpha}{\|\alpha\|\,\|u\|}\,\frac{\alpha}{\|\alpha\|} = \left(\frac{u \cdot \alpha}{\|\alpha\|^2}\right)\alpha = \left(\frac{u \cdot \alpha}{\alpha \cdot \alpha}\right)\alpha.$$

(2) Now let $u_\perp$ be the projection of $u$ onto the plane $\Pi$ through the origin perpendicular to $\alpha$.

The vector $u$ has the orthogonal decomposition $u = u_0 + u_\perp$, and therefore we can calculate $u_\perp = u - u_0$.

(3) Finally, let $u_\ast$ be the vector in $\Pi$ we get by rotating $u_\perp$ by $90°$ in $\Pi$, using the right hand rule to determine which direction of rotation is positive. How do we calculate $u_\ast$? We want it to be perpendicular to both $\alpha$ and $u_\perp$, so it ought to be related to the cross product $\alpha \times u$. A little thought should convince you that in fact the direction of $u_\ast$ will be the same as that of $\alpha \times u$, so that $u_\ast$ will be a positive multiple of $\alpha \times u$. We want $u_\ast$ to have the same length as $u_\perp$. Since $\alpha$ and $u_\perp$ are perpendicular to each other, the length of the cross product $\alpha \times u_\perp$ (which equals $\alpha \times u$, since $\alpha \times u_0 = 0$) is the product of the lengths of $\alpha$ and $u_\perp$, and we must divide by $\|\alpha\|$ to get a vector of length $\|u_\perp\|$. Therefore, all in all,

$$u_\ast = \frac{\alpha}{\|\alpha\|} \times u.$$

Incidentally, in all of this discussion it is only the direction of $\alpha$ that plays a role. It is often useful to normalize $\alpha$ right at the beginning of these calculations, that is to say, to replace $\alpha$ by $\alpha/\|\alpha\|$.

Exercise 2.1. Write PostScript programs to calculate dot products, cross products, $u_0$, $u_\perp$, and $u_\ast$.

## 3. The classification of linear rigid transformations

Let $A$ be an $n \times n$ matrix:

$$A = \begin{bmatrix} a_{1,1} & a_{1,2} & \dots & a_{1,n} \\ a_{2,1} & a_{2,2} & \dots & a_{2,n} \\ \vdots & & & \vdots \\ a_{n,1} & a_{n,2} & \dots & a_{n,n} \end{bmatrix}.$$

Let $e_i$ be the column vector with exactly one non-zero coordinate, equal to $1$, in the $i$-th place. Then $Ae_i$ is equal to the $i$-th column of $A$. The transformation corresponding to $A$ takes the origin to itself, and the length of $e_i$ is $1$. Therefore, if $A$ is rigid, the length of the $i$-th column of $A$ is also $1$. The angle between $e_i$ and $e_j$ is $90°$ if $i \ne j$, and therefore the angle between the $i$-th and $j$-th columns of $A$ is also $90°$. Since the square of the length of a vector $u$ is equal to the dot product $u \cdot u$, and the angle between two vectors of length $1$ is given by their dot product $u \cdot v$, this means that the columns $u_i$ of $A$ satisfy the relations

$$u_i \cdot u_j = \begin{cases} 1 & i = j \\ 0 & \text{otherwise.} \end{cases}$$

Any matrix $A$ satisfying these conditions is said to be orthogonal.
The transpose ${}^t\!A$ of a matrix $A$ is obtained by flipping $A$ along its diagonal. In other words, the rows of ${}^t\!A$ are the columns of $A$, and vice versa. By definition of the matrix product, the entries of ${}^t\!A\,A$ are the various dot products of the rows of ${}^t\!A$ (that is, the columns of $A$) with the columns of $A$. Therefore a matrix $A$ is orthogonal if and only if

$${}^t\!A\,A = I, \qquad\text{i.e.}\qquad A^{-1} = {}^t\!A.$$

If $A$ and $B$ are two $n \times n$ matrices, then $\det(AB) = \det(A)\det(B)$.
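The criterion ${}^t\!A\,A = I$ is easy to test numerically. A hedged Python sketch (again a stand-in for the PostScript the chapter will eventually use; the helper names are mine):

```python
import math

def transpose(A):
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    # Entry (i, j) of A B is the dot product of row i of A with column j of B.
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def is_orthogonal(A, eps=1e-12):
    # A is orthogonal iff tA A = I, entrywise up to rounding error.
    P = mat_mul(transpose(A), A)
    n = len(A)
    return all(abs(P[i][j] - (1 if i == j else 0)) < eps
               for i in range(n) for j in range(n))

t = math.radians(30)
R = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
print(is_orthogonal(R))                  # True: rotation matrices are orthogonal
print(is_orthogonal([[1, 1], [0, 1]]))   # False: a shear distorts lengths
```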

The determinant of $A$ is the same as that of its transpose. If $A$ is an orthogonal matrix, then

$$\det(I) = \det({}^t\!A\,A) = \det({}^t\!A)\det(A) = \det(A)^2,$$

so that $\det(A) = \pm 1$. If $\det(A) = 1$, $A$ is said to preserve orientation; otherwise it reverses orientation. There is a serious qualitative difference between the two types. If we start with an object in one position and move it continuously, then the transformation describing its motion will be a continuous family of rigid transformations. The linear component at the beginning is the identity matrix, with determinant $1$. Since the family varies continuously, the linear component can never change the sign of its determinant, and must therefore always be orientation preserving. A way to change orientation would be to reflect the object, as if in a mirror.

[Figure: the letter R and its mirror image.]

## 4. Orthogonal transformations in 2D

In 2D the classification of orthogonal transformations is very simple. First of all, we can rotate an object through some angle $\theta$ (possibly $0°$). This preserves orientation. The matrix of this transformation is, as we saw much earlier,

$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.$$

Second, we can reflect things in an arbitrary line.

That is to say, given a line $\ell$, we can transform points on $\ell$ into themselves, and points on the line through the origin perpendicular to $\ell$ into their negatives. This reverses orientation.

Exercise 4.1. If $\ell$ is the line at angle $\theta$ with respect to the positive $x$-axis, what is the matrix of this reflection?

It turns out there are no more possibilities: every linear rigid transformation in 2D is either a rotation or a reflection. Let $e_1 = (1, 0)$, $e_2 = (0, 1)$, and let $T$ be a linear rigid transformation. Since $e_1$ and $e_2$ both have length $1$, both $Te_1$ and $Te_2$ also have length $1$. All of these lie on the unit circle. Since the angle between $e_1$ and $e_2$ is $90°$, so is that between $Te_1$ and $Te_2$. There are two distinct possibilities, however: either we rotate in the positive direction from $Te_1$ to $Te_2$, or in the negative direction.

[Figure: the two cases, with $e_1$, $e_2$, $Te_1$, $Te_2$ on the unit circle.]

In the first case, we obtain $Te_1$ and $Te_2$ from $e_1$ and $e_2$ by a rotation. In the second case, something more complicated is going on. Here, as we move a vector $u$ from $e_1$ towards $e_2$ and all the way around again to $e_1$, $Tu$ moves along the arc from $Te_1$ to $Te_2$ and all the way around again to $Te_1$, but in the opposite direction. Now if we start with two points anywhere on the unit circle and move them around in opposite directions, sooner or later they will meet. At that point we have $Tu = u$. Since $T$ fixes $u$ it fixes the line through $u$, and hence takes the line through the origin perpendicular to it into itself. It cannot fix the points on that line, so it must negate them. In other words, $T$ amounts to reflection in the line through $u$.

Exercise 4.2. Explain why we can take $u$ to be either of the points half way between $e_1$ and $Te_1$.

Exercise 4.3. Find a formula for the reflection of $v$ in the line through $u$.

## 5. Rigid motions in 3D

There is one natural way to construct rigid linear motions in 3D: choose an axis, and choose on it a direction. Equivalently, choose a unit vector $\alpha$, and take the axis to be the line through $\alpha$, with direction that of $\alpha$.

[Figure: rotation about the axis through $\alpha$.]

Choose an angle $\theta$. Rotate things around the axis through angle $\theta$, in the positive direction as seen looking back along the axis from its positive direction. This is called an axial rotation.

The only orientation-preserving linear rigid transformations in 3D are axial rotations.

We shall give two proofs, one algebraic and the other geometric. But we postpone both until later in the chapter.

## 6. Calculating the effect of axial rotations

To begin this section, I remark again that to determine an axial rotation we must specify not only an axis but a direction on that axis. This is because the sign of a rotation in 3D is only determined if we know whether it is assigned by a left hand or right hand rule. At any rate, choosing a vector along an axis fixes a direction on it. Given a direction on an axis, we shall adopt the convention that the direction of positive rotation follows the right hand rule.

So now the question we want to answer is this: given a vector $\alpha \ne 0$ and an angle $\theta$, if $u$ is any vector in space and we rotate $u$ around the axis through $\alpha$ by $\theta$, what new vector do we get? This is one of the main calculations we will make to draw moved or moving objects in 3D.

There are some cases which are simple. If $u$ lies on the axis, it is fixed by the rotation. If it lies on the plane perpendicular to $\alpha$, it is rotated by $\theta$ in that plane (with the direction of positive rotation determined by the right hand rule). If $u$ is an arbitrary vector, we express it as a sum of two vectors, one along the axis and one perpendicular to it, and then use linearity to find the effect of the rotation on it.

To be precise, let $R$ be the rotation we are considering. Given $u$, its projection onto the axis along $\alpha$ is

$$u_0 = \left(\frac{\alpha \cdot u}{\alpha \cdot \alpha}\right)\alpha.$$

Its projection onto the perpendicular plane is then $u_\perp = u - u_0$. We write

$$u = u_0 + u_\perp, \qquad Ru = Ru_0 + Ru_\perp = u_0 + Ru_\perp.$$

How can we find $Ru_\perp$? Normalize $\alpha$ so that $\|\alpha\| = 1$, in effect replacing $\alpha$ by $\alpha/\|\alpha\|$. This normalized vector has the same direction and axis as $\alpha$. The vector $u_\ast = \alpha \times u$ will then be perpendicular to both $\alpha$ and $u_\perp$, and will have the same length as $u_\perp$. The plane perpendicular to $\alpha$ is spanned by $u_\perp$ and $u_\ast$, which are perpendicular to each other and have the same length. The following picture shows what we are looking at from on top of $\alpha$.

[Figure: $u_\perp$ and $u_\ast$ seen from above the axis.]

It shows that the rotation by $\theta$ takes $u_\perp$ to

$$Ru_\perp = (\cos\theta)\,u_\perp + (\sin\theta)\,u_\ast.$$

In summary:

(1) Normalize $\alpha$, replacing $\alpha$ by $\alpha/\|\alpha\|$.
(2) Calculate $u_0 = (\alpha \cdot u)\,\alpha$.
(3) Calculate $u_\perp = u - u_0$.
(4) Calculate $u_\ast = \alpha \times u$.
(5) Finally set $Ru = u_0 + (\cos\theta)\,u_\perp + (\sin\theta)\,u_\ast$.

Exercise 6.1. What do we get if we rotate the vector $(1, 0, 0)$ around the axis through $(1, 1, 0)$ by $36°$?

Exercise 6.2. Write a PostScript procedure which takes $\alpha$ and $\theta$ as arguments and returns the matrix associated to rotation by $\theta$ around $\alpha$.

## 7. Eigenvalues and rotations

In this section we shall see the first proof that all linear, rigid, orientation-preserving transformations in 3D are axial rotations. It requires the notions of eigenvalue and eigenvector. If $T$ is any linear operator, an eigenvector of $T$ is a vector $v \ne 0$ such that $Tv$ is a scalar multiple of $v$:

$$Tv = cv.$$

The number $c$ is called the eigenvalue corresponding to $v$. If $A$ is a matrix representing $T$, then the eigenvalues of $T$ are the roots of the characteristic polynomial

$$\det(A - xI)$$
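The five steps above can be collected into one routine. Exercise 6.2 asks for a PostScript version; the following Python sketch of the same recipe (function names mine, not the chapter's) may serve as a guide:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    (x1, x2, x3), (y1, y2, y3) = u, v
    return (x2 * y3 - x3 * y2, x3 * y1 - x1 * y3, x1 * y2 - x2 * y1)

def rotate(alpha, theta, u):
    # Rotate u by theta (radians) around the axis through alpha,
    # following steps (1)-(5) of section 6.
    n = math.sqrt(dot(alpha, alpha))
    a = tuple(c / n for c in alpha)                   # (1) normalize alpha
    u0 = tuple(dot(a, u) * c for c in a)              # (2) component along the axis
    up = tuple(ui - u0i for ui, u0i in zip(u, u0))    # (3) u_perp = u - u0
    us = cross(a, u)                                  # (4) u_star = alpha x u
    c, s = math.cos(theta), math.sin(theta)
    # (5) Ru = u0 + cos(theta) u_perp + sin(theta) u_star
    return tuple(u0i + c * upi + s * usi
                 for u0i, upi, usi in zip(u0, up, us))

Ru = rotate((0, 0, 1), math.pi / 2, (1, 0, 0))
print([round(c, 12) for c in Ru])   # [0.0, 1.0, 0.0]: a quarter turn about the z-axis
```

Since the rotation is rigid, the result always has the same length as the input, which is a convenient sanity check for any implementation.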

where $x$ is a variable. For a $3 \times 3$ matrix,

$$A - xI = \begin{bmatrix} a_{1,1} - x & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} - x & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} - x \end{bmatrix}$$

and the characteristic polynomial is a cubic polynomial which starts out $-x^3 + \cdots$. For $x < 0$ and $|x|$ large, this expression is positive, and for $x > 0$ and $x$ large it is negative. It must cross the $x$-axis somewhere, which means that it must have at least one real root. Therefore $A$ has at least one real eigenvalue. (In 2D this argument fails: there may be two conjugate complex eigenvalues instead.)

Let $c$ be a real eigenvalue of $T$, and $v$ a corresponding eigenvector. Since $T$ is a rigid transformation, $\|Tv\| = \|v\|$, or $\|cv\| = \|v\|$. Since $\|cv\| = |c|\,\|v\|$ and $v \ne 0$, $|c| = 1$ and $c = \pm 1$.

If $c = 1$, then we have a vector fixed by $T$. Since $T$ preserves angles, it takes all vectors in the plane through the origin perpendicular to $v$ into that plane. Since $T$ preserves orientation and $Tv = v$, the restriction of $T$ to this plane also preserves orientation. Therefore $T$ rotates vectors in this plane, and must be a rotation around the axis through $v$.

If $c = -1$, then we have $Tv = -v$. The transformation $T$ still takes the complementary plane into itself. Since $T$ preserves orientation in 3D but reverses orientation on the line through $v$, $T$ reverses orientation on this plane. But then $T$ must be a reflection on this plane: we can find $u$ such that $Tu = u$, and $w$ perpendicular to $u$ and $v$ such that $Tw = -w$. In this case, $T$ is rotation through $180°$ around the axis through $u$.

## 8. Finding the axis and angle

If we are given a matrix $R$ which we calculate to be orthogonal and with determinant $1$, how do we find its axis and rotation angle?

(1) How do we find its axis? If $e_i$ is the $i$-th standard basis vector (one of $i$, $j$, or $k$), the $i$-th column of $R$ is $Re_i$. Now for any vector $u$ the difference $Ru - u$ is perpendicular to the rotation axis.
Therefore we can find the axis by calculating a cross product

$$(Re_i - e_i) \times (Re_j - e_j)$$

for one of the three possible distinct pairs from the set of indices $1, 2, 3$, unless it happens that this cross product vanishes. Usually all three of these cross products will be non-zero vectors on the rotation axis, but in exceptional circumstances one or more of them will vanish. It can even happen that all three vanish, but only when $R$ is the identity matrix, in which case we are dealing with the trivial rotation, whose axis isn't well defined anyway. At any rate, any of the three which is not zero will tell us what the axis is.

(2) How do we find the rotation angle? As a result of part (1), we have a vector $\alpha$ on the rotation axis. Normalize it to have length $1$. Choose one of the $e_i$ so that $\alpha$ is not a multiple of $e_i$, and let $u = e_i$. Then $Ru$ is the $i$-th column of $R$. Find the projection $u_0$ of $u$ along $\alpha$, and set $u_\perp = u - u_0$. Calculate $Ru_\perp = Ru - u_0$. Next calculate $u_\ast = \alpha \times u$, and let $\theta$ be the angle between $u_\perp$ and $Ru_\perp$. The rotation angle is $\theta$ if $Ru_\perp \cdot u_\ast \ge 0$, and otherwise $-\theta$.

Exercise 8.1. If $R = \dots$, find the axis and angle.
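The recipe in parts (1) and (2) can be sketched in a few lines of Python (a stand-in for the eventual PostScript, with names of my own; it assumes $R$ is not the identity), applied here to a rotation whose axis and angle we already know:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    (x1, x2, x3), (y1, y2, y3) = u, v
    return (x2 * y3 - x3 * y2, x3 * y1 - x1 * y3, x1 * y2 - x2 * y1)

def axis_and_angle(R):
    # Columns of R are the vectors R e_i; each R e_i - e_i is
    # perpendicular to the rotation axis.
    cols = list(zip(*R))
    d = [tuple(c - (1 if j == i else 0) for j, c in enumerate(col))
         for i, col in enumerate(cols)]
    # (1) Take the first non-vanishing (R e_i - e_i) x (R e_j - e_j).
    axis = next(w for (i, j) in [(0, 1), (0, 2), (1, 2)]
                for w in [cross(d[i], d[j])]
                if any(abs(c) > 1e-9 for c in w))
    n = math.sqrt(dot(axis, axis))
    a = tuple(c / n for c in axis)
    # (2) Pick u = e_i not parallel to the axis, split off u0, and
    # compare u_perp with R u_perp. |R u_perp| = |u_perp| since R is rigid.
    i = min(range(3), key=lambda k: abs(a[k]))
    u = tuple(1 if k == i else 0 for k in range(3))
    u0 = tuple(dot(a, u) * c for c in a)
    up = tuple(x - y for x, y in zip(u, u0))
    Rup = tuple(x - y for x, y in zip(cols[i], u0))
    us = cross(a, u)
    theta = math.acos(dot(up, Rup) / dot(up, up))
    return a, (theta if dot(Rup, us) >= 0 else -theta)

# Rotation by 90 degrees about the z-axis.
R = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]
a, theta = axis_and_angle(R)
print(a, math.degrees(theta))
```

Note that the axis vector produced by the cross product carries its own direction, and the sign of the recovered angle is relative to that direction, as it must be.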

## 9. Euler's Theorem

The fact that every orthogonal matrix with determinant $1$ is an axial rotation may seem quite reasonable, after some thought about what else such a linear transformation might be, but I claim that it is not quite intuitive. To demonstrate this, let me point out that it implies that the combination of two rotations around distinct axes is again a rotation. This is not at all obvious, and in particular it is difficult to see what the axis of the combination should be. This axis was constructed geometrically by Euler.

[Figure: points $P_1$ and $P_2$ on the unit sphere, with arcs making angles $\theta_1/2$ at $P_1$ and $\theta_2/2$ at $P_2$.]

Let $P_1$ and $P_2$ be points on the unit sphere. Suppose $P_1$ is on the axis of a rotation of angle $\theta_1$, and $P_2$ on that of a rotation of angle $\theta_2$. Draw the spherical arc from $P_1$ to $P_2$. On either side of this arc, draw arcs at $P_1$ making an angle of $\theta_1/2$ and at $P_2$ making an angle of $\theta_2/2$. Let these side arcs intersect at $\alpha$ and $\beta$ on the unit sphere. Then the rotation $R_1$ around $P_1$ rotates $\alpha$ to $\beta$, and the rotation $R_2$ around $P_2$ moves $\beta$ back to $\alpha$. Therefore $\alpha$ is fixed by the composition $R_2 R_1$, and must be on its axis.

Exercise 9.1. What is the axis of $R_1 R_2$? Prove geometrically that in general $R_1 R_2 \ne R_2 R_1$.

Given Euler's Theorem, we can finish our second proof that all linear rigid orientation-preserving transformations in 3D are axial rotations by showing the following to be true: any such transformation can be expressed as the composition of axial rotations. Let $T$ be the given linear rigid transformation. Let $N$ be the north pole $(0, 0, 1)$, $S$ the south pole $(0, 0, -1)$, and $E$ the corresponding equator on the unit sphere. If $TN = N$, then $T$ itself must be just a rotation around the $NS$ axis. If $TN \ne N$, then a single rotation $A$ around an axis through opposite points of $E$ will bring $TN$ up to $N$; but then $AT$ must be an axial rotation $B$, and $T = A^{-1}B$.

## 10. Linear transformations and matrices

There is one point we have been a bit careless about. Suppose $T$ to be a rigid linear transformation.
It has been asserted that if $Tv = v$ then $T$ preserves orientation on the plane through the origin perpendicular to $v$, and if $Tv = -v$ then it reverses orientation on that plane. Why exactly is this? It depends on a fundamental but difficult point about the relationship between linear transformations and matrices.

There is no canonical way to associate a matrix to a linear transformation. A linear transformation is a geometrical thing: it rotates or reflects or scales or shears in some way. A matrix is in some sense the set of coordinates of the transformation. Something similar happens with vectors, which are also geometrical things, possessing, as you are told in physics courses, direction and magnitude. In order to assign coordinates to a vector we must first choose a coordinate system. There is really no best way to do this, and in some sense large classes of coordinate systems are equivalent.

Likewise, to assign a matrix to a linear transformation we must first choose a coordinate system. If we do that, say in space, then we can define three special vectors

$$e_1 = (1, 0, 0), \quad e_2 = (0, 1, 0), \quad e_3 = (0, 0, 1).$$

The coordinate system is in turn determined completely by these three vectors, which are called the basis of vectors determined by the coordinate system. If $v$ is any vector then it may be expressed as a linear combination

$$v = c_1 e_1 + c_2 e_2 + c_3 e_3.$$

The coordinates of the head of $v$ are the coefficients $c_i$. At any rate, we get a matrix from a linear transformation $T$ by setting the entries in its $i$-th column to be the coordinates of the point $Te_i$.

Suppose we choose two different coordinate systems, with special vectors $e$ and also $f$. We get from the first a matrix $A$ associated to $T$, and from the second a matrix $B$. What is the relationship between the matrices $A$ and $B$? We shall not see a proof here, but the result is relatively simple. Let $F$ be the $3 \times 3$ matrix whose columns are the coordinates of the vectors $f$ in terms of the vectors $e$. Then

$$AF = FB.$$
The important consequence of this for us now is that the determinant of a linear transformation, which is defined in terms of a matrix associated to it, is independent of the coordinate system which gives rise to the matrix. That is because $A = FBF^{-1}$, so

$$\det(A) = \det(FBF^{-1}) = \det(F)\det(B)\det(F)^{-1} = \det(B).$$

For our immediate purposes we apply this as follows. Suppose $Tv = -v$. We choose a set of basis vectors with the first equal to $v$ and the others in the plane perpendicular to $v$. Since $T$ preserves this plane, its restriction to the plane is associated to a $2 \times 2$ matrix

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix},$$

and the 3D matrix of $T$ in this basis is then

$$\begin{bmatrix} -1 & 0 & 0 \\ 0 & a & b \\ 0 & c & d \end{bmatrix}.$$

The determinant of this matrix is equal to $-\det(A)$. Since $T$ preserves orientation, this must be positive, which implies that $\det(A) < 0$, and $T$ acts upon the perpendicular plane so as to reverse orientation.
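The invariance $\det(A) = \det(B)$ for $A = FBF^{-1}$ can be checked on a small numerical example. A sketch of my own in Python, using $2 \times 2$ matrices for brevity:

```python
def mat_mul(A, B):
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(F):
    # Inverse of a 2x2 matrix, assuming det(F) != 0.
    d = det2(F)
    return [[F[1][1] / d, -F[0][1] / d],
            [-F[1][0] / d, F[0][0] / d]]

B = [[2, 1],
     [0, 3]]   # the transformation in "f" coordinates
F = [[1, 2],
     [1, 1]]   # change-of-basis matrix: columns of f in terms of e

A = mat_mul(F, mat_mul(B, inv2(F)))   # the same transformation in "e" coordinates
print(det2(A), det2(B))               # both 6: the determinant is coordinate-free
```

The individual entries of $A$ and $B$ are quite different, which is the whole point: the determinant is a property of the transformation, not of any particular matrix for it.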


Course Notes Math 275 Boise State University Shari Ultman Fall 2017 Contents 1 Vectors 1 1.1 Introduction to 3-Space & Vectors.............. 3 1.2 Working With Vectors.................... 7 1.3 Introduction

### Math 110 Linear Algebra Midterm 2 Review October 28, 2017

Math 11 Linear Algebra Midterm Review October 8, 17 Material Material covered on the midterm includes: All lectures from Thursday, Sept. 1st to Tuesday, Oct. 4th Homeworks 9 to 17 Quizzes 5 to 9 Sections

### Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124

Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents

### APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

### Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

### The Geometry of R n. Supplemental Lecture Notes for Linear Algebra Courses at Georgia Tech

The Geometry of R n Supplemental Lecture Notes for Linear Algebra Courses at Georgia Tech Contents Vectors in R n. Vectors....................................... The Length and Direction of a Vector......................3

### HOW TO THINK ABOUT POINTS AND VECTORS WITHOUT COORDINATES. Math 225

HOW TO THINK ABOUT POINTS AND VECTORS WITHOUT COORDINATES Math 225 Points Look around. What do you see? You see objects: a chair here, a table there, a book on the table. These objects occupy locations,

### Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

### UNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if

UNDERSTANDING THE DIAGONALIZATION PROBLEM Roy Skjelnes Abstract These notes are additional material to the course B107, given fall 200 The style may appear a bit coarse and consequently the student is

### Foundations of Matrix Analysis

1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

### Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

### [ Here 21 is the dot product of (3, 1, 2, 5) with (2, 3, 1, 2), and 31 is the dot product of

. Matrices A matrix is any rectangular array of numbers. For example 3 5 6 4 8 3 3 is 3 4 matrix, i.e. a rectangular array of numbers with three rows four columns. We usually use capital letters for matrices,

### Linear Algebra I. Ronald van Luijk, 2015

Linear Algebra I Ronald van Luijk, 2015 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents Dependencies among sections 3 Chapter 1. Euclidean space: lines and hyperplanes 5 1.1. Definition

### Symmetric and anti symmetric matrices

Symmetric and anti symmetric matrices In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if. A = A Because equal matrices have equal

### TBP MATH33A Review Sheet. November 24, 2018

TBP MATH33A Review Sheet November 24, 2018 General Transformation Matrices: Function Scaling by k Orthogonal projection onto line L Implementation If we want to scale I 2 by k, we use the following: [

### Chapter 2: Matrices and Linear Systems

Chapter 2: Matrices and Linear Systems Paul Pearson Outline Matrices Linear systems Row operations Inverses Determinants Matrices Definition An m n matrix A = (a ij ) is a rectangular array of real numbers

### 1.1 Single Variable Calculus versus Multivariable Calculus Rectangular Coordinate Systems... 4

MATH2202 Notebook 1 Fall 2015/2016 prepared by Professor Jenny Baglivo Contents 1 MATH2202 Notebook 1 3 1.1 Single Variable Calculus versus Multivariable Calculus................... 3 1.2 Rectangular Coordinate

### Answers in blue. If you have questions or spot an error, let me know. 1. Find all matrices that commute with A =. 4 3

Answers in blue. If you have questions or spot an error, let me know. 3 4. Find all matrices that commute with A =. 4 3 a b If we set B = and set AB = BA, we see that 3a + 4b = 3a 4c, 4a + 3b = 3b 4d,

### 4. Determinants.

4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.

### Section 2.2: The Inverse of a Matrix

Section 22: The Inverse of a Matrix Recall that a linear equation ax b, where a and b are scalars and a 0, has the unique solution x a 1 b, where a 1 is the reciprocal of a From this result, it is natural

### Chapter 6: Orthogonality

Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

### Introduction to Matrices

POLS 704 Introduction to Matrices Introduction to Matrices. The Cast of Characters A matrix is a rectangular array (i.e., a table) of numbers. For example, 2 3 X 4 5 6 (4 3) 7 8 9 0 0 0 Thismatrix,with4rowsand3columns,isoforder

### Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

### Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.

Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the

### (arrows denote positive direction)

12 Chapter 12 12.1 3-dimensional Coordinate System The 3-dimensional coordinate system we use are coordinates on R 3. The coordinate is presented as a triple of numbers: (a,b,c). In the Cartesian coordinate

### 2. Prime and Maximal Ideals

18 Andreas Gathmann 2. Prime and Maximal Ideals There are two special kinds of ideals that are of particular importance, both algebraically and geometrically: the so-called prime and maximal ideals. Let

### Linear transformations

Linear Algebra with Computer Science Application February 5, 208 Review. Review: linear combinations Given vectors v, v 2,..., v p in R n and scalars c, c 2,..., c p, the vector w defined by w = c v +

### Some Notes on Linear Algebra

Some Notes on Linear Algebra prepared for a first course in differential equations Thomas L Scofield Department of Mathematics and Statistics Calvin College 1998 1 The purpose of these notes is to present

### Section 13.4 The Cross Product

Section 13.4 The Cross Product Multiplying Vectors 2 In this section we consider the more technical multiplication which can be defined on vectors in 3-space (but not vectors in 2-space). 1. Basic Definitions

### Linear Algebra II. 2 Matrices. Notes 2 21st October Matrix algebra

MTH6140 Linear Algebra II Notes 2 21st October 2010 2 Matrices You have certainly seen matrices before; indeed, we met some in the first chapter of the notes Here we revise matrix algebra, consider row

### 8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8.1 Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

### 1 Euclidean geometry. 1.1 The metric on R n

1 Euclidean geometry This chapter discusses the geometry of n-dimensional Euclidean space E n, together with its distance function. The distance gives rise to other notions such as angles and congruent

### Review of Linear Algebra

Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=

### CS 143 Linear Algebra Review

CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see

### Linear algebra. S. Richard

Linear algebra S. Richard Fall Semester 2014 and Spring Semester 2015 2 Contents Introduction 5 0.1 Motivation.................................. 5 1 Geometric setting 7 1.1 The Euclidean space R n..........................

### Don t forget to think of a matrix as a map (a.k.a. A Linear Algebra Primer)

Don t forget to think of a matrix as a map (aka A Linear Algebra Primer) Thomas Yu February 5, 2007 1 Matrix Recall that a matrix is an array of numbers: a 11 a 1n A a m1 a mn In this note, we focus on

### Chapter 2 Notes, Linear Algebra 5e Lay

Contents.1 Operations with Matrices..................................1.1 Addition and Subtraction.............................1. Multiplication by a scalar............................ 3.1.3 Multiplication

### Mathematical Fundamentals

Mathematical Fundamentals Ming-Hwa Wang, Ph.D. COEN 148/290 Computer Graphics COEN 166/366 Artificial Intelligence COEN 396 Interactive Multimedia and Game Programming Department of Computer Engineering

### BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C =

CHAPTER I BASIC NOTIONS (a) 8666 and 8833 (b) a =6,a =4 will work in the first case, but there are no possible such weightings to produce the second case, since Student and Student 3 have to end up with

### Solutions to Selected Questions from Denis Sevee s Vector Geometry. (Updated )

Solutions to Selected Questions from Denis Sevee s Vector Geometry. (Updated 24--27) Denis Sevee s Vector Geometry notes appear as Chapter 5 in the current custom textbook used at John Abbott College for

### Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

### Linear Algebra Review

Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

### Some notes on Coxeter groups

Some notes on Coxeter groups Brooks Roberts November 28, 2017 CONTENTS 1 Contents 1 Sources 2 2 Reflections 3 3 The orthogonal group 7 4 Finite subgroups in two dimensions 9 5 Finite subgroups in three

### Vectors. A vector is usually denoted in bold, like vector a, or sometimes it is denoted a, or many other deviations exist in various text books.

Vectors A Vector has Two properties Magnitude and Direction. That s a weirder concept than you think. A Vector does not necessarily start at a given point, but can float about, but still be the SAME vector.

### 3 Matrix Algebra. 3.1 Operations on matrices

3 Matrix Algebra A matrix is a rectangular array of numbers; it is of size m n if it has m rows and n columns. A 1 n matrix is a row vector; an m 1 matrix is a column vector. For example: 1 5 3 5 3 5 8

### Chapter 4 & 5: Vector Spaces & Linear Transformations

Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think

### CS 246 Review of Linear Algebra 01/17/19

1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

### Mathematics for Graphics and Vision

Mathematics for Graphics and Vision Steven Mills March 3, 06 Contents Introduction 5 Scalars 6. Visualising Scalars........................ 6. Operations on Scalars...................... 6.3 A Note on

### Elementary maths for GMT

Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

### Linear Algebra - Part II

Linear Algebra - Part II Projection, Eigendecomposition, SVD (Adapted from Sargur Srihari s slides) Brief Review from Part 1 Symmetric Matrix: A = A T Orthogonal Matrix: A T A = AA T = I and A 1 = A T

### Tangent spaces, normals and extrema

Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

### Usually, when we first formulate a problem in mathematics, we use the most familiar

Change of basis Usually, when we first formulate a problem in mathematics, we use the most familiar coordinates. In R, this means using the Cartesian coordinates x, y, and z. In vector terms, this is equivalent

### Conceptual Questions for Review

Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

### 1. In this problem, if the statement is always true, circle T; otherwise, circle F.

Math 1553, Extra Practice for Midterm 3 (sections 45-65) Solutions 1 In this problem, if the statement is always true, circle T; otherwise, circle F a) T F If A is a square matrix and the homogeneous equation

### Introduction to Linear Algebra, Second Edition, Serge Lange

Introduction to Linear Algebra, Second Edition, Serge Lange Chapter I: Vectors R n defined. Addition and scalar multiplication in R n. Two geometric interpretations for a vector: point and displacement.