Section 1.8 Matrices as Linear Transformations

Section.8 Matrices as Linear Transformations Up to this point in the course, we have thought of matrices as stand alone constructions, objects of interest in their own right. We have learned multiple matrix operations and their properties, and have seen several special types of matrices. In this section, we will readjust our focus: we re going to discuss a geometric interpretation of matrices, and begin to understand matrices as objects which can be used to alter the shapes of vectors in Euclidean space. For the rest of this section, and indeed throughout much of the course, we will need to keep in mind a dual interpretation of notation such as. Of course, we know that this object is a matrix; however, it has a geometric interpretation, as a vector in dimensional Euclidean space. We build this correspondence by thinking of the matrix ( ) as the point (,, ) in three-dimensional space; indeed, we can go farther, and identify ( ) with the vector whose tip is the point (,, ): It is an intuitive fact that every matrix corresponds in a similar way to another vector in dimensional space. Of course, this concept works for other column vectors as well; keep it in mind as we progress through this section: Key Point. The column matrix c c. c n

Section.8 can be identified uniquely with the vector in n dimensional space whose tip is at the point (c, c,..., c n ). In other words, choosing a particular n column matrix is the same as choosing a vector in n dimensional space, and we will often emphasize this fact by referring to a column matrix as a column vector or simply a vector. Throughout the remainder of this section, we will use notation such as R n to indicate n dimensional Euclidean space; so R refers to the plane, R to three dimensional space, etc. Linear Transformations You have spent much of our time in previous mathematics classes on the study of functions. In particular, the majority of your time in calculus was spent studying the properties of real-valued functions functions whose output is a real number. In this section we are going to begin to study vector-valued functions functions whose output is a vector. For example, consider the function below: f( x x x ) x x x + x + x This function f has a three dimensional vector as its input, and a three dimensional vector as its output. Let s evaluate f at a few different vectors to try to get a feel for what s going on here: f( ) + +. 8. So Next, let s calculate f( ) f( ). 8 + + 4 9.

Section.8 Clearly, f is just a function; it sends one three dimensional vector to another one. Vector-valued functions can be quite difficult to work with, but those that have nice properties (and are thus easier to work with) have a special name: Definition. Let u and v be vectors in R n, and let k be a real number. A linear transformation f from R n R m is any function from n dimensional Euclidean space to m dimensional Euclidean space satisfying the following properties: () f(u + v) f(u) + f(v) () f(ku) kf(u). In other words, a linear transformation is (first of all) a function that turns n dimensional vectors into m dimensional vectors, and second, behaves nicely (see () and () above). If f is a linear transformation from R n R m, we often write f : R n R m to specifically indicate the domain and codomain of f. A Geometric View of Matrices As promised at the beginning of this section, we are now going to switch to a discussion of a geometric interpretation of matrices. To understand this new interpretation, let s start with a simple example. Let A, and consider the column vectors v, v, and v. Since v, v, and v are, they can be interpreted as vectors in three dimensional space, and are accordingly graphed below in red, blue, and green respectively:

Section.8 Now since the matrix A is, we know that we can multiply each of our vectors by A. In fact, each of the matrices Av, Av, and Av are matrices column vectors, all of which can be interpreted again as vectors in three dimensional space! Let s calculate each of Av, Av, and Av, and compare the graphs of these new vectors to the graphs of the old ones v, v, and v. We know that Av + + + + + + 8. 4

Section.8 Similarly, Av + + + + + +, and Av 4 + + + 9 + + 9 + 4 9. Now that we have calculated the vectors Av, Av, and Av, we should compare their graphs with the graphs of the original v, v, and v. All six vectors are plotted below; the new vectors are plotted with dashed lines, in colors corresponding to the colors of their original counterpart: 5

Section.8 Upon close inspection, the graphic above reveals some interesting information: it appears that applying A to the original vectors stretched them up and shifted them back a bit. Indeed, we could apply A to any three dimensional vector, or to the entire space; in a sense, we can think of A as a function that can be used to transform the shape of three dimensional space. Let s start with the same set of vectors v, v, and v, graphed below:

Section.8 Now let s multiply each of the vectors v, v, and v by the matrix B ( Of course, each of Bv, Bv, and Bv is a matrix, and thus a two dimensional vector. You should check for yourself that ( ) ( ) ( ) 5 Bv, Bv, and Bv. The three new vectors are graphed below on the (two dimensional) xy plane: ). 7

Section.8 Of course, we can still include them in a graph of the original vectors, as vectors lying on the xy plane (recall that v and Bv are both graphed in red, etc.). In this case, multiplying the vectors by B, in a sense, transformed them from three dimensional vectors to two dimensional ones. Let s look at one more example, using the same two dimensional vectors from the previous example; we ll rename them ( ) ( ) ( ) 5 w, w, and w ; 8

Section.8 the vectors are graphed below in red, blue, and green respectively: In this example, let s multiply each of the vectors above by the matrix C. Since C is, the resulting vectors will be. You should verify that 7 9 Cw, Cw, and Cw. 5 The resulting three dimensional vectors are graphed below, along with the original w, w, and w. It is clear that the matrix R has transformed our two dimensional vectors into three dimensional ones: 9

Section.8 Matrices and Linear Transformations The next theorem shows how the idea of a linear transformation is related to matrices: Theorem.8.. () Every m n matrix is a linear transformation from R n to R m. () If f is a linear transformation from R n to R m, then f has a representation as an m n matrix. Essentially, the theorem above says that linear transformations and matrices are two ways of thinking about exactly the same thing. We have actually already seen an example of this phenomenon: at the beginning of this section, we discussed the linear transformation f( x ) x x x. x + x + x Later, we studied the action of the matrix A x on three dimensional vectors. If you were paying close attention, you may have noticed that f(v ) Av and f(v ) Av. It turns out that A is actually the matrix form of the linear transformation given by f (you should check this claim yourself). Remark. We have now seen two different interpretations of matrices as objects of study in their own right, and as functions on Euclidean space. There are occasions on which we wish to emphasize one interpretation over the other; accordingly, we will introduce a bit of new notation to help us distinguish between the interpretations. If we wish to emphasize that an m n matrix A is a linear transformation (or function) from R n to R m, we will write T A (x) to indicate the product Ax, that is T A (x) Ax. The notation T A (x) simply indicates that we are thinking of A as a function acting on Euclidean space. When we wish to think of our matrices simply as matrices, we will continue to use matrix notation such as Ax to indicate the product of a matrix A and vector x.

Section.8 Let s reexamine the examples we saw in the beginning of this section. In the first, we looked at the matrix A. We can now think of A as a linear transformation from R to R, and write Similarly, the matrix T A : R R. B ( is a linear transformation from R to R, and C is a linear transformation from R to R. ) Standard Bases To help us better understand the way in which matrices act as linear transformations, we need to introduce the concept of standard basis vectors. You are already familiar with some of the standard basis vectors; for example, in R (the plane), the standard basis vectors are ( ) ( ) e and e. Similarly, in R, the standard basis vectors are e, e, and e n. Key Point. Standard basis vectors are important because any vector in R n is just a linear combination of the standard basis vectors for R n ; in a sense, you can use standard basis vectors to build any vector you want, indeed to build the entire space. You may have guessed the general definition for standard basis vectors, given below: Definition. N dimensional Euclidean space R n has n standard basis vectors, denoted by e., e., and e n..

Section.8 We will see that standard basis vectors have many uses; the first we will learn is discussed below. Finding the Matrix of a Linear Transformation Earlier, we saw that matrices and linear transformations are exactly the same objects. On one hand, every matrix can be thought of as a function which has all of the lovely properties of a linear transformation. On the other hand, the action T (x) of a linear transformation T on a vector x can always be described by matrix multiplication, T (x) Ax for some matrix A. This observation leads to a natural question: given a linear transformation T, how do we find the associated matrix A? The answer to the question turn out to be quite simple: Theorem. If T is a linear transformation from R n R m, then its matrix representation is the m n matrix given by the formula A T (e ) T (e )... T (e n ), where e, e, etc. are the standard basis vectors of the domain of T, R n. The matrix notation above indicates that the first column of A is T (e ), the second is T (e ), etc. In other words, to understand the matrix of a linear transformation, we simply need to understand how the transformation treats the standard basis vectors. Example Find the matrix A of the linear transformation ( ) x x x T ( ) x x. x + x As indicated by the theorem, we simply need to determine where the transformation T sends the standard basis vectors; this will give us the columns of the matrix A. Before we make any calculations, however, let s make sure that we understand what we re looking for. T is a linear transformation from R R, so the correct matrix A should be. Since the domain of T is R, we need to determine where T sends each of the standard basis vectors e ( ) and e ( ).

Section.8 Let s make the calculations: since T ( ( x x ) x x ) x x + x, we know that T (e ) ( ) T ( ) +. Similarly, T (e ) ( ) T ( ) +. Now T (e ) and T (e ) form the columns of the matrix form A of T, so we know that A. You should check for yourself that T and A treat vectors in the same way that is, that for any two dimensional vector x. T (x) Ax