Math 2270-004 Week 4 notes

We will not necessarily finish the material from a given day's notes on that day. We may also add or subtract some material as the week progresses, but these notes represent an in-depth outline of what we plan to cover. These notes cover material in 1.8-1.9 and 2.1-2.2.

Mon Jan 29: 1.8-1.9 Matrix and linear transformations

Announcements:

Warm-up Exercise:

Recall from Friday:

Definition: A function T which has domain equal to R^n and whose range lies in R^m is called a linear transformation if it transforms sums to sums, and scalar multiples to scalar multiples. Precisely, T : R^n -> R^m is linear if and only if

    T(u + v) = T(u) + T(v)    for all u, v in R^n
    T(c u) = c T(u)           for all c in R, u in R^n.

Notation: In this case we call R^m the codomain. We call T(u) the image of u. The range of T is the collection of all images T(u), for u in R^n.

Important connection to matrices: Each m x n matrix A gives rise to a linear transformation T : R^n -> R^m, namely T(x) := A x for x in R^n. This is because, as we checked last week, the matrix transformation satisfies the linearity axioms:

    A(u + v) = A u + A v    for all u, v in R^n
    A(c u) = c A u          for all c in R, u in R^n.

Recall, we verified this by using the column notation A = [a_1, a_2, ..., a_n], writing u = (u_1, u_2, ..., u_n)^T, v = (v_1, v_2, ..., v_n)^T, and computing

    A(u + v) = [a_1, a_2, ..., a_n](u + v)
             = (u_1 + v_1) a_1 + (u_2 + v_2) a_2 + ... + (u_n + v_n) a_n
             = (u_1 a_1 + u_2 a_2 + ... + u_n a_n) + (v_1 a_1 + v_2 a_2 + ... + v_n a_n)
             = A u + A v.

And,

    A(c u) = [a_1, a_2, ..., a_n](c u)
           = (c u_1) a_1 + (c u_2) a_2 + ... + (c u_n) a_n
           = c (u_1 a_1 + u_2 a_2 + ... + u_n a_n)
           = c (A u).
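The two linearity axioms for the map x -> A x can also be spot-checked numerically. Here is a minimal sketch (not part of the notes; the matrix A and the vectors u, v, c are made-up example values):

```python
# Numerical check that x -> A x transforms sums to sums and scalar
# multiples to scalar multiples, using plain Python lists.

def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector: each output entry is
    the dot product of a row of A with x."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

def vadd(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

def smul(c, u):
    return [c * ui for ui in u]

A = [[1, 2, 0],
     [3, -1, 4]]      # a 2x3 matrix, so the map goes R^3 -> R^2
u = [1, 0, 2]
v = [-2, 5, 1]
c = 3

# A(u + v) = A u + A v
assert matvec(A, vadd(u, v)) == vadd(matvec(A, u), matvec(A, v))
# A(c u) = c (A u)
assert matvec(A, smul(c, u)) == smul(c, matvec(A, u))
```

Of course, two examples do not prove the axioms; the column-notation computation above is what shows they hold for every u, v, and c.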

Geometry of linear transformations: (We saw this illustrated in a concrete example on Friday.)

Theorem: For each m x n matrix A and corresponding linear transformation T : R^n -> R^m given by T(x) := A x:

(parallel) lines in the domain R^n are transformed by T into (parallel) lines or points in the codomain R^m;

2-dimensional planes in the domain R^n are transformed by T into (parallel) planes, (parallel) lines, or points in the codomain R^m.

Proof: A parametric line through a point with position vector p and direction vector u in the domain can be expressed as the set of points having position vectors p + t u, t in R. The image of this line is the set of points

    A(p + t u) = A p + t (A u),    t in R,

which is either a line through A p with direction vector A u, when A u is nonzero, or the single point A p, when A u = 0. Since parallel lines have parallel direction vectors, their images will also have parallel direction vectors, and therefore be parallel lines.

A parametric (2-dimensional) plane through a point with position vector p and independent direction vectors u, v in the domain can be expressed as the set of points having position vectors p + t u + s v, t, s in R. The image of this plane is the set of points

    A(p + t u + s v) = A p + t (A u) + s (A v),    t, s in R,

which is either a plane through A p with independent direction vectors A u, A v; or a line through the point A p, if A u, A v are dependent but not both zero; or the point A p, if A u = A v = 0. Since parallel planes can be expressed with the same direction vectors, their images will also have parallel direction vectors, and therefore be parallel.
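The key computation in the proof, that the image of the line p + t u is the line A p + t (A u), can be illustrated numerically. A small sketch (the matrix, point, and direction vector are invented example values, not from the notes):

```python
# The image of the parametric line p + t u under x -> A x is the
# parametric line A p + t (A u), checked for several parameter values t.

def matvec(A, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

A = [[2, 0],
     [1, 1]]
p = [1, 1]    # a point on the line
u = [1, -1]   # the line's direction vector

Ap = matvec(A, p)
Au = matvec(A, u)

for t in [-2, 0, 1, 5]:
    point_on_line = [p[i] + t * u[i] for i in range(2)]
    image = matvec(A, point_on_line)
    # the image lies on the line through A p with direction A u
    assert image == [Ap[i] + t * Au[i] for i in range(2)]
```

Since A u is nonzero here, the image is a genuine line; choosing u in the null space of A instead would collapse the whole line to the single point A p.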

Exercise 1) Consider the linear transformation S : R^2 -> R given by S(x) := [1  2] x. Make a geometric sketch that indicates what the transformation does. In this case the interesting behavior is in the domain.

For the rest of today we'll consider linear transformations from R^2 to R^2. (See the transformed goldfish in text section 1.9 - it's worth it.)

We write the standard basis vectors for R^2 as e_1 = (1, 0)^T, e_2 = (0, 1)^T. If T : R^2 -> R^2 is linear, then for any x = (x_1, x_2)^T,

    T(x) = T(x_1 e_1 + x_2 e_2) = x_1 T(e_1) + x_2 T(e_2) = [T(e_1)  T(e_2)] x !

In other words, if T : R^2 -> R^2 is a linear transformation, it's actually a matrix transformation. And the columns of the matrix, in order, are T(e_1), T(e_2). (The same idea shows that every linear T : R^n -> R^m is actually a matrix transformation, and that the columns of the matrix are T applied to the standard basis vectors in the domain R^n.)

In the following diagrams the unit square in the domain, together with an "L" to keep track of whether the transformation involves a reflection or not, is on the left. On the right is the image of the square and the L under the transformation. Find the matrix for T!

(Also note that because parallel lines transform to parallel lines, grids go to transformed grids, so once you know how the unit square transforms, you know everything about the transformation. Or, to put it another way, once you know T(e_1), T(e_2), you know the matrix for T, so you know how every position vector transforms.)
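The recipe "the columns of T's matrix are T(e_1), T(e_2)" can be sketched in code. This is a minimal illustration with an invented example transformation (rotation by 90 degrees), not one of the diagrams from the notes:

```python
# Recover the matrix of a linear T: R^2 -> R^2 by applying T to the
# standard basis vectors; the results become the matrix's columns.

def T(x):
    """Counterclockwise rotation by 90 degrees, a linear transformation."""
    return [-x[1], x[0]]

e1, e2 = [1, 0], [0, 1]
col1, col2 = T(e1), T(e2)

# Build the matrix whose columns are T(e1), T(e2) (stored row by row):
A = [[col1[0], col2[0]],
     [col1[1], col2[1]]]

def matvec(A, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

# A x reproduces T(x) on a sample vector, as the derivation above predicts
x = [3, 4]
assert matvec(A, x) == T(x)   # both give [-4, 3]
```

The same recipe works for any linear T : R^n -> R^m: apply T to each of the n standard basis vectors and use the results as the n columns of the matrix.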