Volume in n Dimensions


MA 305, Kurt Bryan

Introduction

You've seen that if we have two vectors v and w in two dimensions, then the area spanned by these vectors can be computed as v × w = v_1 w_2 − v_2 w_1 (where in the cross product we fake both vectors as 3D vectors by adding an artificial zero component for the z direction). Given three vectors u, v, and w, the volume of the parallelepiped spanned can be computed as u · (v × w), which of course can be expanded out.

The idea of the area spanned by two 2D vectors, or the volume spanned by three 3D vectors, has been important in several applications. We've used it in setting up and computing the flux of a vector field over a curve (2D) or a surface (3D), as well as in computing surface areas, which were crucial steps in setting up the divergence and Stokes theorems. We also used these ideas in deriving the volume element dV in other coordinate systems.

How does this work in n dimensions? Specifically, given n vectors v_1, v_2, ..., v_n in n-dimensional space, what volume do they span? Note that I'm using subscripts for indexing the vectors; these are NOT components. The answer isn't too hard to figure out, especially if we start with two dimensions for inspiration.

Two Dimensions

Consider a function Δ_2(v_1, v_2) which takes as input two two-dimensional vectors and returns the "area" of the parallelogram spanned by the vectors. I put "area" in quotes because the definition of Δ_2 that we cook up below will not be exactly the area, but something very closely related. So pretend we don't know how to compute the area; we're going to figure it out with a bit of intuition, and this will serve as a guide for higher dimensions.

Intuitively, what properties should the area function Δ_2 have? First, it's obvious that Δ_2(v, v) = 0 for any vector v, since no area is spanned.
Second, the pictures below ought to convince you that if we fix one of the input vectors, then the function Δ_2 is LINEAR in the remaining variable; that is, Δ_2(u, v + w) = Δ_2(u, v) + Δ_2(u, w), and Δ_2(u, cv) = c Δ_2(u, v).

Or you can fix the second variable; then Δ_2 is linear in the first variable. Finally, it's obvious that if e_1 and e_2 denote the unit vectors in the x and y directions, then Δ_2(e_1, e_2) = 1.

To summarize what we've said above, we want Δ_2 to have the properties

1. For any vector v, Δ_2(v, v) = 0.
2. If either variable is fixed, Δ_2 is linear in the remaining variable.
3. Δ_2(e_1, e_2) = 1.

Theorem 1: There is one and only one function Δ_2 which satisfies properties 1 to 3 above.

Before proving this it will be handy to prove

Proposition 1: Any function Δ_2 which satisfies properties 1 to 3 above must satisfy Δ_2(v_1, v_2) = −Δ_2(v_2, v_1).

Note that this proposition shows that Δ_2 is not area, since Δ_2 can be negative. To prove the proposition, first note that by property 1, Δ_2(v_1 + v_2, v_1 + v_2) = 0. Now, using property 2 repeatedly we find that

0 = Δ_2(v_1 + v_2, v_1 + v_2)
  = Δ_2(v_1, v_1 + v_2) + Δ_2(v_2, v_1 + v_2)
  = Δ_2(v_1, v_1) + Δ_2(v_1, v_2) + Δ_2(v_2, v_1) + Δ_2(v_2, v_2)
  = Δ_2(v_1, v_2) + Δ_2(v_2, v_1),

from which we immediately get Proposition 1.

Here is a proof of Theorem 1. It is constructive, in that we will not only show that there is a function consistent with properties 1 to 3, but also how to compute it. Before we begin it will be helpful to note that

Δ_2(v_1, c v_1 + v_2) = c Δ_2(v_1, v_1) + Δ_2(v_1, v_2) = Δ_2(v_1, v_2).    (1)

In short, we can add a constant multiple of any one of the input vectors to another input vector without changing the value of Δ_2. In the case that v_1 = e_1 = (1, 0) and v_2 = e_2 = (0, 1), we know from property 3 how to compute Δ_2(v_1, v_2). We're going to reduce the general case to this special case by repeated application of rules 1 and 2. Let's first do a specific example so you see how it works, then the general case.

Let v_1 = (3, 2) and v_2 = (−2, 7). Then repeated strategic application of equation (1) gives

Δ_2(v_1, v_2) = Δ_2((3, 2), (−2, 7))
              = Δ_2((3, 2), (2/3)(3, 2) + (−2, 7))
              = Δ_2((3, 2), (0, 25/3))
              = Δ_2((3, 2) + (−6/25)(0, 25/3), (0, 25/3))
              = Δ_2((3, 0), (0, 25/3))
              = 25 Δ_2((1, 0), (0, 1)) = 25.

Notice that at each stage I choose to multiply a given vector by just the right constant so that when the result is added to the other vector, something is cancelled. If we apply the above strategy to the general case in which v_1 = (a_1, a_2) and v_2 = (b_1, b_2), we obtain (assuming a_1 ≠ 0)

Δ_2(v_1, v_2) = Δ_2((a_1, a_2), (b_1, b_2))
              = Δ_2((a_1, a_2), (−b_1/a_1)(a_1, a_2) + (b_1, b_2))
              = Δ_2((a_1, a_2), (0, b_2 − b_1 a_2/a_1))
              = Δ_2((a_1, a_2) + (−a_2/(b_2 − b_1 a_2/a_1))(0, b_2 − b_1 a_2/a_1), (0, b_2 − b_1 a_2/a_1))
              = Δ_2((a_1, 0), (0, b_2 − b_1 a_2/a_1))
              = a_1 (b_2 − b_1 a_2/a_1) Δ_2((1, 0), (0, 1))
              = a_1 b_2 − a_2 b_1.

This proves that Δ_2((a_1, a_2), (b_1, b_2)) = a_1 b_2 − a_2 b_1 is the only formula which can be consistent with properties 1 to 3. In fact, you can check explicitly that it IS consistent with them. Now, Δ_2 is not itself the area, for the simple reason that area is nonnegative. But in fact, you should recognize that the area spanned by v_1 and v_2 is just |Δ_2(v_1, v_2)|. If you've studied any linear or matrix algebra, then you recognize the formula for Δ_2 as being that for the determinant of the 2 by 2 matrix

| a_1  a_2 |
| b_1  b_2 |
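The 2D formula and its defining properties are easy to spot-check numerically. Here is a minimal Python sketch (the function names delta2, add, and scale are mine, not the text's) that implements Δ_2(v_1, v_2) = a_1 b_2 − a_2 b_1 and checks properties 1 to 3, the antisymmetry of Proposition 1, and the worked example above.

```python
# delta2: the signed-area function derived above (the name is mine).
def delta2(v, w):
    """Signed area Delta_2(v, w) = v1*w2 - v2*w1."""
    return v[0] * w[1] - v[1] * w[0]

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(c, v):
    return (c * v[0], c * v[1])

v1, v2 = (3.0, 2.0), (-2.0, 7.0)

# Worked example from the text: Delta_2((3,2), (-2,7)) = 25.
print(delta2(v1, v2))                  # 25.0

# Property 1: Delta_2(v, v) = 0.
print(delta2(v1, v1))                  # 0.0

# Property 2 (linearity in the second slot, first slot fixed):
u, w = (1.0, 5.0), (4.0, -3.0)
lhs = delta2(v1, add(u, scale(2.0, w)))
rhs = delta2(v1, u) + 2.0 * delta2(v1, w)
print(lhs == rhs)                      # True

# Property 3: Delta_2(e1, e2) = 1.
print(delta2((1.0, 0.0), (0.0, 1.0)))  # 1.0

# Proposition 1: swapping the arguments flips the sign.
print(delta2(v2, v1))                  # -25.0
```

Only integer-valued floats appear here, so the exact comparisons above are safe; with general inputs you would compare up to rounding error.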

The Case in n Dimensions

With two dimensions as inspiration, let's generalize to n dimensions (think of 3D to see what's going on). We will define a function Δ_n(v_1, v_2, ..., v_n) which accepts n vectors (all n-dimensional) and returns what we might call the signed n-dimensional volume spanned by the vectors (meaning that it is the volume to within a plus or minus sign). If you think about it (certainly in 3D), we should demand exactly the same properties from Δ_n that we demanded from Δ_2, namely

1. If any two of the input vectors are identical (so they're parallel) then Δ_n = 0, i.e., Δ_n(v_1, ..., w, ..., w, ..., v_n) = 0.
2. If n − 1 of the input vectors are fixed, then Δ_n is linear in the remaining variable.
3. Δ_n(e_1, e_2, ..., e_n) = 1.

Just as for two dimensions we can prove

Theorem 2: There is one and only one function Δ_n which satisfies properties 1 to 3 above.

We will actually show only that there is AT MOST one such function, and show how it would have to be computed. The proof that the resulting formula or procedure really is well-defined and consistent with properties 1 to 3 is messy and not very enlightening, and so, to quote many authors who don't like to write such stuff out, it is left to the reader.

One thing worth noting is that Proposition 1 is still valid. Specifically, we have

Proposition 1′: For any function Δ_n which satisfies properties 1 to 3 above, interchanging any two vectors in Δ_n reverses the sign of Δ_n.

The proof is almost exactly like the 2D case (first try the case in which v_1 and v_2 are interchanged, and you'll see that any other case is basically the same).

Here is a computation which shows that there is at most one function which meets the three criteria above, and shows how such a function must be computed. In fact, rather than write out the general case, let's do a specific example in three dimensions. You will see how to do it in higher dimensions, and also why I didn't want to write out a bigger case.
Consider the three vectors v_1 = (1, 4, 2), v_2 = (1, 4, −4), v_3 = (3, −2, −4). First, it will be very convenient to write the vectors as the rows of a 3 by 3 matrix, i.e., as

| 1    4    2 |
| 1    4   -4 |
| 3   -2   -4 |

Using this notation we can think of Δ_n as a function which acts on n by n matrices. I will put absolute value signs around the matrix as shorthand for Δ_n applied to the corresponding row vectors.

Now, I want to multiply the first vector in Δ_3(v_1, v_2, v_3) by −3 and add the result to the last vector, because this will not change the value of Δ_n:

Δ_3(v_1, v_2, −3v_1 + v_3) = −3 Δ_3(v_1, v_2, v_1) + Δ_3(v_1, v_2, v_3) = Δ_3(v_1, v_2, v_3)

(by properties 1 and 2). Also, it will zero out the first component of the last vector. In matrix terms, I want to multiply the first row by −3 and add it to the last row. This shows that

| 1    4    2 |     | 1    4    2 |
| 1    4   -4 |  =  | 1    4   -4 |
| 3   -2   -4 |     | 0  -14  -10 |

Our goal is to use this kind of "multiply a row by a constant and add to another row" procedure to eventually produce a matrix with zeros everywhere except on the diagonal. With this in mind I'll now multiply the first row by −1 and add the result to the second row. By the same reasoning as above, this will not change the value of Δ_3 and we'll have

| 1    4    2 |     | 1    4    2 |
| 1    4   -4 |  =  | 0    0   -6 |
| 3   -2   -4 |     | 0  -14  -10 |

Notice now that I've eliminated all of the entries in the first column, except the one on the diagonal at the upper left. I'd like now to employ the same procedure to eliminate all of the entries in the second column, except the one on the diagonal (second row, second column). However, as it stands this is impossible: multiplying the second row by any scalar and adding to rows 1 or 3 will not get rid of the second-column entries. The key here is to pivot, that is, interchange two rows to get something nonzero into the second row/second column position. From Proposition 1′ we know that this will reverse the sign of Δ_n. Interchanging rows 2 and 1 would spoil the zeros we obtained in the first column, so we'll interchange rows 2 and 3 to find that

| 1    4    2 |       | 1    4    2 |
| 1    4   -4 |  =  − | 0  -14  -10 |      (2)
| 3   -2   -4 |       | 0    0   -6 |

Now proceed as before: multiply row 2 by 4/14 and add the result to row 1 to get

| 1    4    2 |       | 1    0  -6/7 |
| 1    4   -4 |  =  − | 0  -14  -10  |
| 3   -2   -4 |       | 0    0   -6  |

Finally, multiply the last row by appropriate scalars and add the results to rows 1 and 2 to obtain

| 1    4    2 |       | 1    0    0 |
| 1    4   -4 |  =  − | 0  -14    0 |
| 3   -2   -4 |       | 0    0   -6 |
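The row reduction above can be replayed step by step in code. The following Python sketch is my own: it uses exact rational arithmetic via the standard fractions module, and the signs of the example vectors are my reconstruction of the scanned example. It applies exactly the moves described in the text, one swap included, and reads off the signed volume from the final diagonal.

```python
from fractions import Fraction as F

# The three example vectors as matrix rows (signs as reconstructed from the
# worked example; the scan drops minus signs).
M = [[F(1), F(4), F(2)],
     [F(1), F(4), F(-4)],
     [F(3), F(-2), F(-4)]]

sign = 1  # flips on every row interchange (Proposition 1')

def row_op(M, src, dst, c):
    """Replace row dst by c*row[src] + row[dst]; leaves Delta_3 unchanged."""
    M[dst] = [c * a + b for a, b in zip(M[src], M[dst])]

row_op(M, 0, 2, F(-3))      # -3*R1 + R3: zeros the (3,1) entry
row_op(M, 0, 1, F(-1))      # -1*R1 + R2: zeros the (2,1) entry
M[1], M[2] = M[2], M[1]     # pivot: interchange rows 2 and 3
sign = -sign
row_op(M, 1, 0, F(4, 14))   # (4/14)*R2 + R1: zeros the (1,2) entry
row_op(M, 2, 0, F(-1, 7))   # clears the -6/7 in the (1,3) position
row_op(M, 2, 1, F(-10, 6))  # clears the -10 in the (2,3) position

diag = M[0][0] * M[1][1] * M[2][2]
print(sign * diag)          # -84, so the spanned volume is |-84| = 84
```

Fractions avoid any rounding questions, so the intermediate matrices match the text's exactly.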

In other words, we've shown that Δ_3(v_1, v_2, v_3) is −Δ_3(e_1, −14e_2, −6e_3), and the latter is by property 2 above (linearity) equal to −84 Δ_3(e_1, e_2, e_3), which by inspection equals −84. Thus Δ_3(v_1, v_2, v_3) = −84 and the volume spanned by the vectors v_1, v_2, v_3 is equal to |−84| = 84.

This procedure can be carried out methodically for any n vectors in n dimensions, and this constructive procedure shows that there is AT MOST one function consistent with properties 1 to 3. It doesn't quite show that such a function exists, for although we have a procedure to compute Δ_n, it's not clear that the function is well-defined; if we had done the procedure in a different order (there were many possible choices for how to eliminate entries in the matrix), would we have obtained the same result? Even if that were the case, is the resulting function Δ_n actually consistent with properties 1 to 3? Fortunately, it turns out that Δ_n is well-defined and consistent with the specified properties. We thus have a method for finding the volume spanned by n vectors in n-dimensional space.

You might recognize Δ_n as the determinant of the n by n matrix formed from the vectors v_1, ..., v_n, and the procedure we used for computing the determinant as simply Gaussian elimination. In fact, our procedure is somewhat overkill: we didn't need to zero out the matrix entries above the diagonal. We could have stopped with equation (2) and found Δ_n by simply reading off the product of the entries along the diagonal of the matrix on the right of equation (2). You should think about why this is true.

By the way, Gaussian elimination is not a bad way to compute the determinant. It is vastly more efficient than expansion by minors, a technique you might well have seen before. For an n by n matrix, Gaussian elimination takes O(n^3) operations, while expansion by minors takes O(n!).
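The general procedure is just determinant-by-Gaussian-elimination. Below is a sketch in Python (function names are mine) implementing it with pivoting, alongside expansion by minors for comparison; the elimination version costs O(n^3) operations, while the recursive minors version costs O(n!). The signs in the example matrix follow the reconstruction used above.

```python
def det_gauss(A):
    """Determinant by Gaussian elimination with pivoting: O(n^3) operations."""
    A = [row[:] for row in A]   # work on a copy
    n = len(A)
    sign = 1.0
    for j in range(n):
        # Pivot: find a row at or below j with a nonzero entry in column j.
        p = next((i for i in range(j, n) if abs(A[i][j]) > 1e-12), None)
        if p is None:
            return 0.0          # a zero column: the vectors span no volume
        if p != j:
            A[j], A[p] = A[p], A[j]
            sign = -sign        # Proposition 1': a swap flips the sign
        for i in range(j + 1, n):
            c = A[i][j] / A[j][j]
            A[i] = [a - c * b for a, b in zip(A[i], A[j])]
    prod = sign
    for j in range(n):
        prod *= A[j][j]         # diagonal product of the triangular form
    return prod

def det_minors(A):
    """Determinant by expansion by minors: O(n!) -- fine only for tiny n."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_minors(minor)
    return total

A = [[1.0, 4.0, 2.0], [1.0, 4.0, -4.0], [3.0, -2.0, -4.0]]
print(det_gauss(A))     # -84.0
print(det_minors(A))    # -84.0
```

Note that det_gauss stops at the triangular form and multiplies the diagonal, exactly the shortcut described in the text; it never clears entries above the diagonal.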
Change of Variables and Integrals in Other Coordinate Systems

Let (x_1, ..., x_n) be rectangular coordinates for n-dimensional space. Suppose we have another coordinate system with coordinates (u_1, ..., u_n). Just as for polar or cylindrical or spherical coordinates, the coordinates in each system can be thought of as functions of the coordinates in the other system, e.g.,

x_1 = x_1(u_1, ..., u_n)      (3)
  ⋮                           (4)
x_n = x_n(u_1, ..., u_n)      (5)

Suppose that you change the u_i variable by a small amount du_i while holding the other u_k constant. In doing so, the x_j variable would change by a small amount dx_j = (∂x_j/∂u_i) du_i for j = 1 to n. Let dx_i denote the vector position change due to changing u_i by an amount du_i. Then

dx_i = dx_1 e_1 + dx_2 e_2 + ⋯ + dx_n e_n
     = (∂x_1/∂u_i e_1 + ⋯ + ∂x_n/∂u_i e_n) du_i      (6)

where e_k is the unit vector in the x_k direction. Now consider the result of changing each of the u_i by a small amount du_i from some base point. The resulting n vectors dx_1, ..., dx_n would span some kind of n-dimensional parallelogram, and from our previous computation its volume would be

dV = |Δ_n(dx_1, ..., dx_n)|

          | ∂x_1/∂u_1  ⋯  ∂x_n/∂u_1 |
   = | det |     ⋮              ⋮     | |  du_1 ⋯ du_n
          | ∂x_1/∂u_n  ⋯  ∂x_n/∂u_n |

where det is the determinant. The matrix above is the transpose of what is called the Jacobian matrix. (It would actually be the Jacobian if we had written our vectors as columns; transposing does not change the determinant.) Thus if you want to integrate a function f over a region D and use the coordinate system u_1, ..., u_n, you should compute

∫_D f dV

where dV is given as above. This will yield the same value for the integral in any coordinate system.

Example: As a specific example, consider the relations between rectangular and spherical coordinates:

x = ρ cos(θ) sin(ϕ)
y = ρ sin(θ) sin(ϕ)
z = ρ cos(ϕ)

In this case you find that (taking the spherical variables in the order ρ, θ, ϕ)

          |  cos(θ) sin(ϕ)     sin(θ) sin(ϕ)    cos(ϕ)    |
dV = | det | −ρ sin(θ) sin(ϕ)   ρ cos(θ) sin(ϕ)    0       | |  dρ dθ dϕ,
          |  ρ cos(θ) cos(ϕ)   ρ sin(θ) cos(ϕ)  −ρ sin(ϕ)  |

which works out to dV = ρ² sin(ϕ) dρ dθ dϕ, the usual formula.

Exercise

Compute the volume spanned by three vectors v_1 = (a_1, a_2, a_3), v_2 = (b_1, b_2, b_3), and v_3 = (c_1, c_2, c_3) and verify that it is consistent with the three properties above.
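As a numerical check of the spherical-coordinates example above, the following Python sketch (helper names are mine) builds the matrix of partial derivatives in the order ρ, θ, ϕ and confirms that the absolute value of its determinant is ρ² sin(ϕ) at a sample point.

```python
import math

def spherical_jacobian(rho, theta, phi):
    """Rows are the partials of (x, y, z) with respect to rho, theta, phi,
    for x = rho cos(theta) sin(phi), y = rho sin(theta) sin(phi), z = rho cos(phi)."""
    st, ct = math.sin(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    return [[ct * sp,         st * sp,        cp],
            [-rho * st * sp,  rho * ct * sp,  0.0],
            [rho * ct * cp,   rho * st * cp,  -rho * sp]]

def det3(M):
    """Cofactor expansion along the first row (fine for a 3x3 matrix)."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

rho, theta, phi = 2.0, 0.7, 1.1
J = spherical_jacobian(rho, theta, phi)
print(abs(det3(J)))               # matches rho^2 sin(phi) below
print(rho ** 2 * math.sin(phi))
```

With the variables ordered ρ, θ, ϕ the raw determinant actually comes out negative, which is why the volume element uses the absolute value.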