Complex Matrix Transformations


Gama Network Presents: Complex Matrix Transformations
By Scott Johnson
Gamasutra, May 17, 2002
URL: http://www.gamasutra.com/features/20020510/johnson_01.htm

Matrix transforms are a ubiquitous aspect of 3D game programming. Yet it is surprising that game programmers do not often use a rigorous method for creating them or a common way of discussing them. Practitioners in the field of robotics mastered them long ago, but those methods haven't made their way into daily practice among game programmers. Some of the many symptoms include models that import the wrong way and characters that rotate left when they are told to rotate right. So after a review of matrix conventions and notation, we'll introduce a useful naming scheme, a shorthand notation for transforms, and tips for debugging them that will allow you to create concatenated matrix transforms correctly in much less time.

Matrix Representation of Transforms

Matrices represent transforms by storing the vectors that represent one reference frame in another reference frame. Figure 1 shows two 2D reference frames offset by a vector T and rotated relative to each other. To represent frame one in the space of frame zero, we need the translation vector T, and the unit axis vectors X1 and Y1 expressed in the zero frame.

Figure 1: 2D Reference Frames offset and rotated from each other

We know that we need to store vectors in a matrix, but now we have to decide how. We can store them in a square matrix either as rows or as columns. Each convention is shown below with the vectors expanded into their x and y components.

Figure 2: 2D Transform Stored as Columns

Figure 3: 2D Transform Stored as Rows

Each stores the same information, so the question of which one is better will not be discussed. The difference only matters when you use them in a multiplication. Matrix multiplication is a set of dot products between the rows of the left matrix and the columns of the right matrix. Figure 4 below shows the multiplication of two 3x3 matrices, A and B.

Figure 4: The First Dot Product in a Matrix Multiply

The first element in the product A times B is the row (a00, a01, a02) dotted with the column (b00, b10, b20). The dot product is valid because the row and the column each have three components. This dictates how a row vector and a column vector are each multiplied by a matrix: a column vector must go on the right of the matrix, and a row vector must go on the left of the matrix. In each convention the vectors are represented consistently as rows or columns, as one might expect, but it is important to realize that the order changes. Again, we must switch the order because the rows on the left must be the same size as the columns on the right for the matrix multiplication to be defined. You can convert between row and column matrices by taking the matrix transpose of either matrix; the transpose of a column matrix is a row matrix.

A Naming Scheme for Transform Matrices

In the first section we defined a matrix transform (Figures 2 and 3) from reference frame 1 to reference frame 0 by expressing the vectors of frame 1 in frame 0. Let's name it M1to0 to make the reference frames it transforms between explicit. When we start to introduce new reference frames, as in Figure 5, this name will be very handy.

Figure 5: Introducing a third reference frame (2) and a point P2 in that frame

These frames could represent the successive joints of a robot arm or an animation skeleton. Suppose the problem is to find P2 in the space of the zero frame. We'll call this point P0. We can now write out the answer to this problem using our naming scheme for matrices, keeping in mind the order of multiplication between row vectors and matrices and column vectors and matrices.

Column Convention

P1 = M2to1 * P2
P0 = M1to0 * P1

Substituting P1 into the equation for P0:

P0 = M1to0 * M2to1 * P2

We have been consistent with the way column vectors are multiplied with matrices by keeping the column vectors to the right of the transform matrices.

Row Convention

P1 = P2 * M2to1
P0 = P1 * M1to0
P0 = P2 * M2to1 * M1to0

We have been consistent with the way row vectors are multiplied with matrices by keeping the row vectors to the left of the transform matrices.

So the problem has been reduced to finding the transform matrices, and already we have accomplished a lot. We established a convention for naming points in space by the reference frame that they are in (P0, P1). We named matrices for the reference frames that they transform between (M1to0, M2to1). And finally, we leveraged the naming scheme to write out a mathematical expression for the correct answer. There is no ambiguity regarding the order of the matrices or which matrices we need to find.

Figure 6: Reference Frames with T offset vectors shown

Figure 6 shows the translation vectors between the frames. With the new information in the figure, we can plug into the matrices from Figures 1 and 2 to get the needed transform matrices.

Column Convention

P0 = M1to0 * M2to1 * P2

Row Convention

P0 = P2 * M2to1 * M1to0

Thus we have solved the problem of finding point P0 given P2. If we reversed the problem and needed to find point P2 given point P0, we could solve it using the same method. We would quickly find that we need the matrices M0to1 and M1to2, and we can get them using matrix inversion.

M0to1 = (M1to0)^-1
M1to2 = (M2to1)^-1

Again, we write the equation for P2 given P0, M1to0, and M2to1 by allowing the naming scheme to guide the order of the matrix concatenation.

Column Convention

P2 = M1to2 * M0to1 * P0
P2 = (M2to1)^-1 * (M1to0)^-1 * P0

Row Convention

P2 = P0 * M0to1 * M1to2
P2 = P0 * (M1to0)^-1 * (M2to1)^-1

Another way to write those equations is by multiplying the matrices first. Matrix multiplication is not commutative (meaning you can't switch the order of the factors), but it is associative (meaning you can regroup the factors with parentheses). We can take the row equation:

P2 = P0 * M0to1 * M1to2

and group the matrices together to illustrate the naming scheme for concatenated matrices:

P2 = P0 * (M0to1 * M1to2)
P2 = P0 * M0to2

So when multiplying matrices together using this naming scheme, you just chain the reference frame names together:

M0to2 = M0to1 * M1to2

These matrix derivations make excellent comments in the code that can save the person who reads your code a lot of time.

Simplified Math Notation for Matrix Concatenations

The component-wise matrix multiplication for two 3x3 matrices is big, and the multiplication of two 4x4 matrices is even bigger. It is already a large, bulky expression with just two matrices. No one ever gained any insight into the concatenation of transform matrices by looking at the product expressed component by component. Instead, we'll substitute algebraic variables for the sections of a transform in order to come up with a much more intuitive notation. In a 4x4 column transform matrix, the upper left 3x3 portion is a rotation and the far right column forms the translation. Let's simplify the matrix by making some definitions.
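Written out explicitly (a standard reconstruction, since the figures showing these matrices are not reproduced here), the partition just described and the inversion rule used above are:

```latex
M = \begin{pmatrix} R & T \\ \mathbf{0}^{\mathsf{T}} & 1 \end{pmatrix},
\qquad
\left(M_{\text{1to0}}\, M_{\text{2to1}}\right)^{-1}
  = M_{\text{2to1}}^{-1}\, M_{\text{1to0}}^{-1}
```

The reversed order in the inverse of a product is what makes the naming scheme chain correctly in both directions: inverting M2to0 yields a matrix whose names read 0to2.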

Now we can represent the 4x4 matrix as a 2x2 matrix. Working with 2x2 matrix multiplication is much easier; it is easy enough to do by hand. It is just four dot products between the rows on the left and the columns on the right. In the coming notation, many of the multiplications will be with one or zero, which will make it even easier.

Up to this point we haven't dealt with scale, but it is easy enough to add. This new notation allows us to study the effects of combining rotation, translation, and scale by combining building blocks for each one. Figure 7 defines a 2x2 rotation matrix that is really a representation of a 4x4 transform matrix. Likewise, Figure 8 defines a 2x2 scale matrix that represents a 4x4 transform matrix.

Figure 7: A 2x2 rotation matrix that represents a 4x4 transform

Figure 8: A 2x2 scale matrix that represents a 4x4 transform

This notation is not concerned with whether R has rows or columns in it, so the R matrix (Figure 7) is the same in both row and column conventions. S is a diagonal matrix, so its 2x2 matrix (Figure 8) is the same in both row and column conventions. The 2x2 matrix for translation must change based on the row/column convention to reflect the location of the translation in the full 4x4 transform.

Column Convention (T is a column):
Row Convention (T is a row):

Figure 9: 2x2 Translation matrix that represents a 4x4 transform
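For reference, here is how those building blocks look in 2x2 block form, along with the products examined next (a standard reconstruction of the missing figures; R is the 3x3 rotation, S the 3x3 diagonal scale, T the 3-vector translation, I the 3x3 identity):

```latex
\text{Rotation: } \begin{pmatrix} R & 0 \\ 0 & 1 \end{pmatrix}
\qquad
\text{Scale: } \begin{pmatrix} S & 0 \\ 0 & 1 \end{pmatrix}
\qquad
\text{Translation (column): } \begin{pmatrix} I & T \\ 0 & 1 \end{pmatrix}
\qquad
\text{Translation (row): } \begin{pmatrix} I & 0 \\ T & 1 \end{pmatrix}
```

Multiplying the column-convention blocks shows the order effects discussed below:

```latex
\begin{pmatrix} I & T \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} R & 0 \\ 0 & 1 \end{pmatrix}
= \begin{pmatrix} R & T \\ 0 & 1 \end{pmatrix},
\qquad
\begin{pmatrix} R & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} I & T \\ 0 & 1 \end{pmatrix}
= \begin{pmatrix} R & RT \\ 0 & 1 \end{pmatrix},
\qquad
\begin{pmatrix} R & T \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} S & 0 \\ 0 & 1 \end{pmatrix}
= \begin{pmatrix} RS & T \\ 0 & 1 \end{pmatrix}
```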

Now we have the building blocks and we can start combining them. Let's start with a simple translation and rotation, change the order of multiplication, and see what we can learn from it.

Column Convention

With translation on the left and rotation on the right, we get the familiar M1to0 matrix, represented as a 2x2. Switching the factors yields an entirely different result: the rotation, R, is the same, but the translation portion of the right-hand side shows that R has rotated the translation.

Row Convention

In order to get the familiar M1to0 row matrix, we need to put rotation on the left of the translation. The other way around results in a rotated translation.

Now that the differences in the notation between row and column conventions have been shown, we'll only show the column convention to avoid repeating the same point. The column transform for Figure 6 is shown below. The change is that we have to distinguish between the different rotations and translations by naming them differently with subscripts.

Now we experiment with scale. If we tack a scale matrix factor on the right of the product, you can see right away that the scale does not affect the translation (the upper right portion of the product) at all, because S doesn't appear in it. This makes sense because with columns, the full transform equation with points P0 and P5 included would behave just as though P5 was scaled and then the rest of the transform occurred afterwards. The given point was named P5 because each matrix is considered a transform from one space to another. If the scale is introduced on the left, then every term in the result is scaled, as you might expect.

There are countless combinations to explore. The notation makes it easier to form a complex transform from intuitive, simple pieces. It is easy to multiply 2x2 matrices by hand, but it gets very tedious to repeat. Instead, you can enter any of the above symbolic expressions into Mathematica, MathCad, or Maple V and the product is computed for you. Math programs take some effort to learn, but your investment will be paid back many times over.

Interpreting Concatenated Matrix Transforms

Transforms are described in steps made up of translations, scales, and rotations. There is sometimes confusion, though, about which step is first. The problem is that there are two valid ways of interpreting a transform.

You can think of a transform as progressing from right to left, with a point, P, being transformed from distant reference frames towards the zero frame. One might describe the following matrix transform as "P4 is rotated by R2, translated by T2, rotated by R1 and then translated by T1."

One can also describe the transform as a series of changes applied from left to right. Each change is applied to a reference frame. It would then be described as, "Starting with the zero frame, the axes are translated by T1, rotated by R1, translated by T2 and then finally rotated by R2."

The former description mentions a rotation by R2 as the first step. The latter description mentions a translation by T1 as coming first, so it can be confusing. The right-to-left interpretation is obviously valid because you just start at the right and multiply your column vector by the rightmost matrix. At each step you get a column vector in another reference frame. The other interpretation is valid because you can imagine combining matrices from left to right. After each multiplication, you have a product matrix that can be partitioned into axis vectors and a translation, just like in Figure 2.
If you run into a discrepancy with someone about the way to read a matrix, write it out and discuss the pieces of the transform. The matrix math is the same regardless of the way it is read. You might each be talking about the same matrix but in two different ways.

Learning Your Company's Matrix Conventions

C++ has been so widely accepted by game developers that by now everyone who wants a matrix class already has one. Chances are your thoughts on whether row or column matrices are better are irrelevant, because the company's (or team's) matrix class already exists and you have to use it. The task now is to make sure that you learn the company's matrix conventions. This includes the way the matrix elements are stored and the decision to form row or column matrices.

You could ask another developer, or you could take a look at the way a matrix is multiplied with a vector in the matrix class implementation. Look at the dot product performed to reach the first element in the matrix product. If the vector is dotted with the top row of the matrix, the vector is a column. If the vector is dotted with the leftmost column, then the vector is a row.

Next do a sanity check with some other functions. For instance, if there is a function that converts a quaternion to a matrix, check that it is following the same convention. Look up the conversion in a reference and check that the reference author agrees with the author of your class. After you are sure of the class conventions, you won't ever have to question what they are again.

Debugging Matrix Concatenations

There is a bad but accepted method of creating matrix transforms amongst many game programmers that goes like this: make an initial guess of what the transform expression might be and type it in. Try it out and see if it works. If it doesn't work, transpose and swap matrices in the expression until it works. This is exactly what not to do.
Instead, you should write out the expression for your matrix transform and know that it is right. You know it is right because you know your matrix conventions, and you used the above matrix naming scheme to create the expression. Of course there will be times when you have the correct expression but it doesn't work when you try it in code. When that happens, you have to check that the matrices you created actually match their names, and you have to check the matrices that were passed in from other sources as well. It can still be difficult, but at least you will be progressing towards the right answer by isolating the problem.

The reason it is so important not to mechanically transpose or swap your matrices is that it is easy to get lost in all the possible transposes. We've seen that the difference between row and column matrices is a transpose. Unscaled rotation matrices have the property that their inverse is their transpose, so if you blindly invert a matrix you can be introducing a transpose. With enough swapping and transposing, you can get back to where you started because of the matrix identity:

(A * B)^T = B^T * A^T

It is easy to get lost in all the transposes after only a few hacks. Another difficulty is that two transposes undo each other. The iterative hacking of the matrix expression is supposed to stop when the result looks right, but you may have two errors. This is why mysterious transposes live on in some code bases. After a while it would require a time-consuming, rigorous audit of too much code to fix. The best way to avoid those situations is to make the matrices correctly the first time.

Conclusion

We've covered several helpful ways to make creating transforms easier. Name vectors and points with the reference frame they are in. Name matrices by the reference frames that they transform between. Use the matrix names to guide the way they can be combined. Use the simplified 2x2 versions of the transforms to visualize and plan out your desired transform. And lastly, don't ever hack your transforms by swapping matrices or transposing them. If you follow these rules and get your fellow programmers to follow them, working with transforms becomes much easier.
References and Further Reading

The naming schemes, matrix concatenation, and the 2x2 transform notation were all covered in Prof. Lynn Conway's undergraduate Robotics course at the University of Michigan. Our course text covered them in a more rigorous manner: Robotics for Engineers, Yoram Koren, McGraw-Hill, 1985, pp. 88-101. Unfortunately this book is out of print; Amazon occasionally has a used copy.

Copyright 2003 CMP Media Inc. All rights reserved.