Lecture 19: Introduction to Linear Transformations

Lecture 19: Introduction to Linear Transformations
Winfried Just, Ohio University
October 11, 2017

Scope of this lecture

Linear transformations are important and useful: a lot of applications of linear algebra involve linear transformations, linear algebra is much easier to understand when you look at it through the lens of linear transformations, and linear transformations are not hard to understand when one thinks of them in terms of concrete examples.

Here I will develop the theory of linear transformations only as far as it directly relates to the remainder of this course and omit its more abstract aspects. In particular, we will think of a linear transformation (almost always) as a certain type of function T : R^n → R^m, where R is the set of real numbers.

Note: The textbook uses L instead of T.

A motivating example

Think about all portfolios that contain only shares of Tried-And-True and of Get-Rich-Fast. The current value of such a portfolio can be represented by a vector v_curr = [x, y]^T in R^2, where x denotes the current value of its Tried-and-True shares and y denotes the current value of its Get-Rich-Fast shares. If you are willing to think of negative values of x and y as shares owed, each point in R^2 represents exactly one portfolio.

Now we cannot know for sure what will happen to the shares over the next year, but we know that the value next year can also be represented as a point v_next in R^2. We can think of the events of the coming year, whatever they may be, as transforming v_curr into v_next. This description defines a function, or transformation, T : R^2 → R^2 such that T(v_curr) = v_next.

I wish I knew a formula for T... but I don't. Neither does anybody else. Next year, of course, every investor will have 20-20 hindsight. But a bit of abstraction is quite useful here: even without the formula, we can already determine two properties of T.

If you double the holdings in your current portfolio, its value next year will also increase by a factor of 2 relative to what it would have been for your current holdings. The same goes for tripling (multiplying by a factor of λ = 3 instead of λ = 2), and so on. In mathematical terms: T(λv) = λT(v) for all scalars λ.

Moreover, if you wish to merge your portfolio v with the portfolio w of your Significant Other, it does not matter whether the merger is done now or next year: T(v + w) = T(v) + T(w).

Linear transformations: Our definition

Definition (For the purpose of this course). Let n, m be positive integers and let T : R^n → R^m be a function. Then T is called a linear transformation if it satisfies both of the following conditions for all vectors v, w in R^n and all scalars λ in R:

(i) T(λv) = λT(v)
(ii) T(v + w) = T(v) + T(w).

These two simple properties have many important consequences. They can be derived for the much broader class of T : V → W, where V, W are abstract vector spaces and the scalars are allowed to be complex numbers or members of any algebraic field. This more general theory is rather abstract and gives linear transformations a reputation for being a difficult concept. Our definition will do here, but you should know that it is just a special case, and everything works just the same way if, in particular, the scalars are complex numbers.

My Crystal Ball

Let us return to our example. Recall that the value of a portfolio is represented by a vector v_curr = [x, y]^T in R^2, where x denotes the value of its Tried-and-True shares and y denotes the value of its Get-Rich-Fast shares. We defined a transformation T : R^2 → R^2 by T(v_curr) = v_next.

Now I'm going to tell you a secret. Will you keep it? My Crystal Ball tells me that Tried-and-True shares will gain 200% over the coming year, while Get-Rich-Fast shares will lose 50% of their value. Trust me. What is the formula for T?

T([x, y]^T) = [3x, 0.5y]^T

Where have we seen this before?
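Although not part of the lecture, the two properties derived above are easy to spot-check numerically for this formula. A minimal Python sketch (the helper names T, add, and scale are mine, not course notation):

```python
def T(v):
    """The crystal-ball portfolio transformation T([x, y]) = [3x, 0.5y]."""
    x, y = v
    return [3 * x, 0.5 * y]

def add(v, w):
    """Componentwise vector sum."""
    return [vi + wi for vi, wi in zip(v, w)]

def scale(lam, v):
    """Scalar multiple of a vector."""
    return [lam * vi for vi in v]

v, w, lam = [100.0, 200.0], [50.0, 80.0], 2.5

# (i)  T(lam * v) = lam * T(v)
assert T(scale(lam, v)) == scale(lam, T(v))
# (ii) T(v + w) = T(v) + T(w)
assert T(add(v, w)) == add(T(v), T(w))
```

Of course, a check at a few points is not a proof; the linearity of every such "matrix" formula is established in general on the next slides.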

Stretching and compressing a sheet of rubber

In Lecture 6 this example was called T_A(v) = Av:

T_A([x, y]^T) = Av = [ 3   0  ] [ x ]  =  [ 3x  ]
                     [ 0  0.5 ] [ y ]     [ 0.5y ]

The matrix here is:

A = [ 3   0  ]
    [ 0  0.5 ]

This transformation was interpreted as a threefold stretch in the horizontal (x-) direction and a twofold compression in the vertical (y-) direction of a sheet of rubber that lies flat on a surface. We see that the same transformation pops up in two very different contexts. It can be defined as multiplying a matrix by a vector.

Products of matrices and column vectors

Let A be a matrix of order m × n and let v be an n × 1 column vector:

      [ a_11 ... a_1n ] [ v_1 ]   [ w_1 ]
A v = [  ...      ... ] [ ... ] = [ ... ]
      [ a_m1 ... a_mn ] [ v_n ]   [ w_m ]

Then Av is an m × 1 column vector. This defines a transformation T_A : R^n → R^m by T_A(v) = Av.

By general properties of matrix multiplication:

(i) T_A(λv) = A(λv) = λAv = λT_A(v)
(ii) T_A(v + w) = A(v + w) = Av + Aw = T_A(v) + T_A(w).

The transformation T_A is linear!
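As an aside, properties (i) and (ii) can be verified numerically for any concrete A. A minimal sketch in plain Python, using lists for vectors and a hand-rolled matrix-vector product (the matvec helper is my own, not from the course), with the stretch-and-compress matrix from the rubber-sheet example:

```python
def matvec(A, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[3.0, 0.0],      # the stretch-and-compress matrix
     [0.0, 0.5]]
v, w, lam = [1.0, 2.0], [4.0, 6.0], 3.0

# (i)  T_A(lam * v) = lam * T_A(v)
assert matvec(A, [lam * vi for vi in v]) == [lam * yi for yi in matvec(A, v)]

# (ii) T_A(v + w) = T_A(v) + T_A(w)
assert matvec(A, [vi + wi for vi, wi in zip(v, w)]) == \
       [a + b for a, b in zip(matvec(A, v), matvec(A, w))]
```

The general proof, of course, is the one-line calculation on the slide: both properties follow from distributivity of the matrix-vector product.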

Where have we seen this before?

We have seen examples of transformations T_A(v) = Av already for 2 × 2 square matrices A and interpreted them geometrically. Now suppose you have a transformation T : R^2 → R^2 that first compresses a sheet of rubber by a factor of two in the vertical direction, stretches it by a factor of 3 in the horizontal direction, and then rotates it by an angle of π/3. Given any two points on the sheet that are represented by vectors v and w, can we determine where the point v + w ends up, that is, T(v + w), by adding the vectors T(v) and T(w)?

Homework 56: (a) Take a few minutes and try to figure out the answer relying exclusively on your geometric intuition. (b) Show that the transformation T described above can be represented as T = T_A for some matrix A.

After part (b) of Homework 56, the answer to our question becomes easy: the transformation T_A must be linear, and the question is precisely whether T has property (ii) of linear transformations. Yes!

Where have we seen this before?

Now consider two transformations T_A : R^n → R^m and T_B : R^k → R^p. Here A must have order m × n and B must have order p × k. We would like to calculate the value of the composition (T_B ∘ T_A)(v) = T_B(T_A(v)). This is done by first calculating w = T_A(v) (by definition a column vector of dimension m) and then applying the transformation T_B to w. But we can do this only if w is in the domain R^k of T_B. Thus we must have m = k. This is exactly the condition for when the product BA is defined!

(T_B ∘ T_A)(v) = T_B(T_A(v)) = B(Av) = (BA)v = T_BA(v).

Composition of these linear transformations corresponds to the matrix product.
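Again as an illustration (mine, not from the slides), one can check numerically that composing T_B after T_A agrees with multiplying by the single matrix BA. The helpers matvec and matmul below are my own:

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(B, A):
    """Matrix product BA, where B is p x m and A is m x n."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

A = [[1.0, 2.0, 0.0],   # 2 x 3:  T_A maps R^3 -> R^2
     [0.0, 1.0, 1.0]]
B = [[2.0, 0.0],        # 2 x 2:  T_B maps R^2 -> R^2
     [1.0, 3.0]]
v = [1.0, 2.0, 3.0]

# Applying T_A, then T_B, agrees with applying T_BA in one step.
assert matvec(B, matvec(A, v)) == matvec(matmul(B, A), v)
```

Note that the code enforces the same dimension condition as the text: matvec(B, w) only makes sense when the length of w = T_A(v) equals the number of columns of B.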

Cashing in a portfolio

Let us return to our example of the value of a portfolio that is represented by a vector v_curr = [x, y]^T in R^2, where x denotes the value of its Tried-and-True shares and y denotes the value of its Get-Rich-Fast shares. We defined a transformation T : R^2 → R^2 by T(v_curr) = v_next. My Crystal Ball says T = T_A, where

A = [ 3   0  ]
    [ 0  0.5 ]

The owner wants to transform the portfolio into cash a year from now.

Homework 57: (a) Show that cashing in means applying a linear transformation T_B : R^2 → R^1 and find the matrix B. (b) Note that the entire transformation of the owner's current holdings v_curr into cash amounts to applying the transformation T_B ∘ T_A : R^2 → R^1, and that (T_B ∘ T_A)(v_curr) = T_B(T_A(v_curr)) = T_BA(v_curr). Find BA.

Where have we seen this before?

Let A be the coefficient matrix of a system of linear equations:

a_11 x_1 + ... + a_1n x_n = b_1
...
a_m1 x_1 + ... + a_mn x_n = b_m

We can write the system in matrix form as Ax = b, where x = [x_1 ... x_n]^T and b = [b_1 ... b_m]^T. Thus T_A(x) = b. We can deduce the following:

Theorem. When the transformation T_A maps R^n onto R^m, the above system must be consistent. When the transformation T_A : R^n → R^m is one-to-one, the above system cannot be underdetermined.
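To make the "onto" half of the theorem concrete, here is a small illustrative example (mine, not from the slides): a matrix A whose transformation T_A maps all of R^2 onto the line y = 2x rather than onto all of R^2, so that Ax = b is inconsistent whenever b misses that line.

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# The second row of A is twice the first, so every image T_A(v)
# lands on the line y = 2x; T_A is not onto R^2.
A = [[1.0, 1.0],
     [2.0, 2.0]]

# Every image [x, y] = T_A(v) satisfies y = 2x ...
for v in ([1.0, 0.0], [0.0, 1.0], [3.0, -7.0], [2.5, 4.0]):
    x, y = matvec(A, v)
    assert y == 2 * x

# ... but b = [1, 0] does not, so A x = b has no solution.
b = [1.0, 0.0]
assert b[1] != 2 * b[0]
```

This matches the theorem read in the contrapositive: since T_A is not onto, there exist right-hand sides b for which the system is inconsistent.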

Recall some definitions

Consider any function T : X → Y, where X and Y are arbitrary sets. The range R of T is the set of all y in Y such that there exists some x in X with T(x) = y. The function T is onto if Y = R, that is, if every y in Y is a function value T(x) for some x in X. The function T is one-to-one if T never takes the same value for any two different x in X, that is, when T(x_1) ≠ T(x_2) whenever x_1 ≠ x_2.

Homework 58: Read the definitions of consistent and underdetermined systems from a previous lecture if need be. Then reread the theorem on the previous slide a few times and convince yourself that it is nothing else but a translation of onto and one-to-one into the context of T_A and systems of linear equations with coefficient matrix A.

Flashback: Products of matrices and column vectors

Let A be a matrix of order m × n and let v be an n × 1 column vector:

      [ a_11 ... a_1n ] [ v_1 ]   [ w_1 ]
A v = [  ...      ... ] [ ... ] = [ ... ]
      [ a_m1 ... a_mn ] [ v_n ]   [ w_m ]

Then Av is an m × 1 column vector. This defines a transformation T_A : R^n → R^m by T_A(v) = Av.

By general properties of matrix multiplication:

(i) T_A(λv) = A(λv) = λAv = λT_A(v)
(ii) T_A(v + w) = A(v + w) = Av + Aw = T_A(v) + T_A(w).

The transformation T_A is linear!

Are all linear transformations like this?

Suppose T : R^n → R^m is a linear transformation. Does there always exist a matrix A such that T = T_A, that is, T(x) = Ax for all x in R^n? Not quite. When P is the matrix of transition probabilities of a Markov chain with n states, and probability distributions are written as row vectors x, then T_P(x) = xP defines a linear transformation T_P : R^n → R^n that cannot be written as in the above question.

Homework 59: If we write probability distributions as column vectors instead, then for the above transformation T_P there does exist a matrix A such that T_P(x) = Ax for all x in R^n. (a) Find A. (b) Find one example of P for weather.com-light examples of Markov chains so that P = A and another example with P ≠ A.

All linear transformations of column vectors are of this form

Theorem (Matrix representation of linear transformations). Suppose T : R^n → R^m is a linear transformation. If both the elements of the domain R^n of T and the function values T(x) in R^m are treated as column vectors, then there exists a matrix A of order m × n such that T = T_A, that is, T(x) = Ax for all x in R^n.

Proof: We need some notation. Recall the following definition.

Definition. A vector w is a linear combination of vectors v_1, v_2, ..., v_n if there exist scalars d_1, d_2, ..., d_n such that w = d_1 v_1 + d_2 v_2 + ... + d_n v_n.

Review: The vectors e_1, e_2, ..., e_n

Fix a positive integer n and treat the elements of R^n as column vectors. Let x be an n × 1 column vector in R^n. Then

x = [x_1, x_2, ..., x_n]^T = x_1 [1, 0, ..., 0]^T + x_2 [0, 1, ..., 0]^T + ... + x_n [0, 0, ..., 1]^T

Recall the notation:

e_1 = [1, 0, ..., 0]^T,  e_2 = [0, 1, ..., 0]^T,  ...,  e_n = [0, 0, ..., 1]^T.

Then every n × 1 column vector x is a linear combination of e_1, e_2, ..., e_n, and the coefficients of this combination are unique.

Review: The standard basis in R^n

Fix a positive integer n and treat the elements of R^n as column vectors. A basis for R^n is a set of n × 1 column vectors b_1, b_2, ..., b_n such that every n × 1 column vector x can be expressed as a linear combination d_1 b_1 + d_2 b_2 + ... + d_n b_n in exactly one way. The vectors e_1, e_2, ..., e_n of the previous slide form the so-called standard basis of R^n.

The vectors T(e_1), T(e_2), ..., T(e_n)

Now we are ready to prove the theorem. Let T : R^n → R^m be a linear transformation. Consider the m × 1 column vectors

T(e_1) = [a_11, a_21, ..., a_m1]^T,  T(e_2) = [a_12, a_22, ..., a_m2]^T,  ...,  T(e_n) = [a_1n, a_2n, ..., a_mn]^T

and let

A = [T(e_1), T(e_2), ..., T(e_n)] = [ a_11 a_12 ... a_1n ]
                                    [ a_21 a_22 ... a_2n ]
                                    [  ...           ... ]
                                    [ a_m1 a_m2 ... a_mn ]

We show that T = T_A

Let T : R^n → R^m be a linear transformation, and let A be the matrix defined on the previous slide. Then

A e_1 = [a_11, a_21, ..., a_m1]^T = T(e_1),

since multiplying A by e_1 picks out the first column of A.

We show that T = T_A, continued

Similarly,

A e_2 = [a_12, a_22, ..., a_m2]^T = T(e_2),

and, in general,

A e_n = [a_1n, a_2n, ..., a_mn]^T = T(e_n).

We show that T = T A, completed Let T : R n R m be a linear transformation, and let A be the matrix defined on the previous slide. We have shown that for j = 1, 2,..., n: A e j = T ( e j ). By properties of matrix multiplication and linearity of T, for all x in R n : T A ( x) = A x = A(x 1 e j + x 2 e 2 + + x n e n ) = x 1 A e 1 + x 2 A e 2 + + x n A e n = x 1 T ( e 1 ) + x 2 T ( e 2 ) + + x n T ( e n ) = T (x 1 e 1 ) + T (x 2 e 2 ) + + T (x n e n ) = T (x 1 e 1 + x 2 e 2 + + x n e n ) = T ( x). This completes the proof of the theorem.

Some practice problems Homework 6: For each of the following linear transformations T find the matrix A such that T = T A : (a) T : R 2 R 2, T ( e 1 ) = [2, 3] T, T ( e 2 ) = [ 1, 4] T (Note that the latter T is doing double duty here by first denoting the transformation and then as a superscript that indicates the transpose of a vector.) (b) T : R 2 R 2, T ([1, 1] T ) = [2, 3] T, T ([ 1, 1] T ) = [ 1, 4] T (c) T : R 3 R 2, T ( e 1 ) = [2, 3] T, T ( e 2 ) = T ( e 3 ) = 2T ( e 1 )