MATH 260 Class notes/questions January 10, 2013


Linear transformations

Last semester, you studied vector spaces (linear spaces): their bases, dimension, and the ideas of linear dependence and linear independence. Now we're going to study linear maps (or transformations). In general, once we have a new mathematical structure, the next step is to understand maps (functions from one instance of the structure to another) which respect the structure. Such mappings are called morphisms or homomorphisms, although they often are given more specific names in specific situations. In the case of vector spaces, we'll be studying mappings between them which respect vector addition and scalar multiplication:

Definition: If V and W are vector spaces, a function T : V → W is called a linear transformation from V to W if it has the following two properties:

(a) T(x + y) = T(x) + T(y) for all x and y in V,
(b) T(cx) = cT(x) for all x in V and all scalars c.

1. Show that these two properties can be combined into the single formula

   T(ax + by) = aT(x) + bT(y)   (*)

for all x and y in V and scalars a and b. In other words, show that a mapping T satisfies the definition above if and only if it satisfies (*).

2. Show that for any linear transformation T, any n elements x_1, x_2, ..., x_n of V and scalars a_1, ..., a_n we have

   T(a_1 x_1 + a_2 x_2 + ... + a_n x_n) = a_1 T(x_1) + a_2 T(x_2) + ... + a_n T(x_n).

Two trivial examples of linear mappings are the identity transformation from V to V, which satisfies T(x) = x for all x ∈ V, and the zero transformation from V to W, which satisfies Z(x) = 0 for all x.

3. Which of the following are linear transformations?

(a) Multiplication by a fixed scalar: T(x) = cx for all x in V, for a fixed scalar c.
(b) Inner product with a fixed element: If V is a Euclidean (inner-product) space, and v is a fixed element of V, then define T : V → R by T(x) = ⟨v, x⟩.
(c) Translation: If v is a fixed element of V, then define T : V → V by T(x) = x + v.
(d) Rotation: For x ∈ R², let T(x) be the vector obtained by rotating the vector x counterclockwise around the origin through an angle of π/3.

(e) Projection: For x ∈ R², let T(x) be the orthogonal projection of x onto the line y = 2x.
(f) Projection: For x ∈ R², let T(x) be the orthogonal projection of x onto the line y = 2x + 1.
(g) Reflection: For x ∈ R³, let T(x) be the reflection of x through the plane x + y = 2z.
(h) Differentiation: Let V be the space of all functions which are differentiable (at least once) on the interval (a, b), and W the space of all functions on (a, b) which are derivatives (why are V and W vector spaces?). T is the differentiation operator: T(f) = f'.
(i) Multiplication by a fixed function: Let ϕ(x) be a fixed continuous function, and let V be the linear space of continuous functions on (a, b). Define T : V → V by T(f) = ϕf.
(j) Integration: Let V be the set of continuous functions on [a, b], and let T(f) = g, where g(x) = ∫_a^x f(t) dt for a ≤ x ≤ b.

Definition: Let T : V → W be a linear map. The range of T is the set T(V) = {w ∈ W : w = T(v) for some v ∈ V}. The kernel (or nullspace) of T is the set ker T = {v ∈ V : T(v) = 0}.

4. Prove that T maps the zero vector of V to the zero vector of W. Then, prove that T(V) is a vector subspace of W, and ker T is a vector subspace of V.

5. For each linear mapping in problem 3, find its kernel and image (some of them are tricky!).

Definition: The dimension of T(V) is called the rank of T, and the dimension of ker T is called the nullity of T.

6. Prove: If V is finite-dimensional and T : V → W, then the dimension of V is equal to the rank of T plus the nullity of T, i.e., dim V = dim ker T + dim T(V).

7. Prove: If V is infinite-dimensional and T : V → W, then at least one of T(V) or ker T is infinite-dimensional.

8. Find the range, kernel, rank and nullity of the following transformations:

(a) T([x_1, x_2]) = [x_1 + x_2, x_1 + 2x_2]
(b) T([x_1, x_2]) = [x_1 + x_2, 2x_1 + 2x_2]

(c) T([x_1, x_2, x_3]) = [x_1, x_1 + x_2, x_1 + x_2, x_3]
(d) T([x_1, x_2, x_3]) = [x_1 + x_2, x_2, x_3]
(e) T([x_1, x_2, x_3, x_4, x_5]) = [x_2, x_4, 0, x_1]
(f) T(f) = g where

   g(x) = ∫_{-π}^{π} (1 + cos(x − t)) f(t) dt

(so T maps the space of continuous functions on [−π, π] to itself).

9. Suppose S : V → W and T : V → W are linear maps and c is a scalar. Define new maps via (S + T)(x) = S(x) + T(x) and (cT)(x) = cT(x). Show that these new maps are linear. Explain why this implies that the set of all linear maps from V to W, denoted L(V, W), is itself a vector space.

10. Explain why the composition of linear transformations is a linear transformation. In other words, if U, V, and W are vector spaces, and if R ∈ L(V, W) and S ∈ L(U, V), then the map RS : U → W defined by (RS)(u) = R(S(u)) is in L(U, W). Then show that

(a) Composition (of linear maps) is associative: R(ST) = (RS)T. So if T : V → V we can define T^n unambiguously.
(b) Other associative laws: (cR)S = c(RS) and R(cS) = c(RS).
(c) Distributive laws: (R + S)T = RT + ST and R(S + T) = RS + RT.

Definition: A linear map T is called one-to-one (or injective, or monomorphic) if x ≠ y implies T(x) ≠ T(y).

11. Let T ∈ L(V, W). The following are equivalent:

(i) T is injective on V.
(ii) T is invertible on T(V) and the inverse map T⁻¹ : T(V) → V is linear.
(iii) ker T consists only of the zero vector.

12. Let T ∈ L(V, W) and suppose V is finite-dimensional. The following are equivalent:

(i) T is injective on V.
(ii) If e_1, e_2, ..., e_p are linearly independent in V, then T(e_1), T(e_2), ..., T(e_p) are linearly independent in T(V) (or in W).
(iii) The dimension of T(V) is equal to the dimension of V.
(iv) If {e_1, e_2, ..., e_n} is a basis for V, then {T(e_1), T(e_2), ..., T(e_n)} is a basis for T(V).
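
The rank, nullity and kernel calculations in problems 4-8 and the kernel criterion for injectivity in problems 11-12 are easy to sanity-check numerically once a map is written as a matrix. The following Python/NumPy sketch is not part of the original notes, and the matrix A below is an arbitrary illustrative choice rather than one of the maps from problem 8:

```python
import numpy as np

# Illustrative matrix (an arbitrary choice, not one of the maps in problem 8).
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [0., 1., 1.]])

n = A.shape[1]                       # dimension of the domain V
rank = np.linalg.matrix_rank(A)      # dim T(V)
nullity = n - rank                   # dim ker T  (problem 6: rank + nullity = dim V)

# A basis of ker A can be read off the SVD: the rows of vh beyond the rank
# are orthonormal vectors orthogonal to the row space, hence in the kernel.
_, s, vh = np.linalg.svd(A)
kernel_basis = vh[rank:]

print(rank, nullity)                          # 2 1
print(np.allclose(A @ kernel_basis.T, 0.0))   # True: kernel vectors are sent to 0
print(nullity == 0)                           # False: A is not injective (problems 11-12)
```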

13. Let S and T be transformations in L(V, V) (sometimes called endomorphisms of V). If ST = TS we say that S and T commute (duh).

(a) If S and T commute, show that (ST)^n = S^n T^n for n ≥ 1.
(b) If S and T are any invertible operators in L(V, V), then (ST)⁻¹ = T⁻¹S⁻¹.
(c) If S and T are commutative, invertible operators, then S⁻¹ and T⁻¹ commute.

14. Let V be the space of all real polynomials. Let D be the differentiation operator, which sends p(x) ∈ V to p'(x), and let T be the operator that maps p(x) to xp(x). Show that

(a) DT − TD = I, where I is the identity operator.
(b) DT^n − T^n D = nT^(n−1) for n ≥ 2.

Matrices and linear transformations

15. Let T : V → W and let {e_1, e_2, ..., e_n} be a basis of V. Show that T is determined completely by the values of T(e_1), T(e_2), ..., T(e_n).

16. If V and W are finite-dimensional and T ∈ L(V, W), let {e_1, e_2, ..., e_n} be a basis of V and {f_1, f_2, ..., f_m} be a basis of W, and suppose that

   T(e_1) = t_11 f_1 + t_21 f_2 + ... + t_m1 f_m
   T(e_2) = t_12 f_1 + t_22 f_2 + ... + t_m2 f_m
   ...
   T(e_n) = t_1n f_1 + t_2n f_2 + ... + t_mn f_m

Show that T is completely determined by the mn numbers t_ij, 1 ≤ i ≤ m, 1 ≤ j ≤ n, and that conversely any choice of mn numbers t_ij determines a unique linear map.

Definition: Given choices of bases {e_1, e_2, ..., e_n} and {f_1, f_2, ..., f_m} for the finite-dimensional vector spaces V and W respectively, the rectangular array of numbers

   m(T) = [ t_11  t_12  ...  t_1n ]
          [ t_21  t_22  ...  t_2n ]
          [  ...   ...        ... ]
          [ t_m1  t_m2  ...  t_mn ]

where the t_ij are as given in problem 16, is called the matrix of the linear transformation T (with respect to the given bases).

17. Define matrix addition component-wise, and scalar multiplication likewise. With these operations, show that the set M_{m×n} of m-by-n matrices is a vector space. Further, for any linear transformations S and T, show that m(S + T) = m(S) + m(T) and m(cT) = c·m(T) (where the operations on the left are operations on linear transformations and the operations on the right are on matrices). Together with problem 16, this shows that L(V, W) is isomorphic to M_{m×n}. Each choice of bases for V and W gives rise to a different isomorphism.
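
Problem 14 can be checked symbolically for any particular polynomial. Here is a small SymPy sketch (not part of the original notes; the sample polynomial is an arbitrary choice):

```python
import sympy as sp

x = sp.symbols('x')
p = 3*x**3 - x + 5                     # an arbitrary sample polynomial

D = lambda q: sp.diff(q, x)            # differentiation operator D
T = lambda q: x * q                    # multiplication-by-x operator T

# Problem 14(a): (DT - TD)(p) = p, i.e. DT - TD = I on polynomials.
print(sp.simplify(D(T(p)) - T(D(p)) - p) == 0)             # True

# Problem 14(b) with n = 2: DT^2 - T^2 D = 2T.
print(sp.simplify(D(T(T(p))) - T(T(D(p))) - 2*T(p)) == 0)  # True
```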

18. With respect to the standard basis of R², find the matrix of the rotation map that rotates every vector an angle θ counterclockwise.

19. Suppose S ∈ L(V, W) and T ∈ L(U, V), where the dimensions of U, V and W are n, m and l, respectively. Given bases of U, V and W, we have matrices m(S) ∈ M_{l×m} and m(T) ∈ M_{m×n} representing S and T. Show that the composition of S and T is represented by the matrix product of the two matrices m(S) and m(T); this is the l-by-n matrix m(S)m(T) given by

   (m(S)m(T))_ij = ∑_{k=1}^{m} m(S)_ik m(T)_kj,   1 ≤ i ≤ l, 1 ≤ j ≤ n.

In other words, show that m(ST) = m(S)m(T), where the left side is the matrix of the composition and the right side is the matrix multiplication of the matrices representing the two maps S and T.

20. Use the isomorphism of problem 17 and the result of problem 19, together with what we already know about linear transformations, to prove that matrix multiplication is associative and distributes over matrix addition.

21. What is the matrix I of the identity transformation from V to V? Of the zero transformation from V to W?

22. Since matrix multiplication is associative, it makes sense to talk about A^n for positive integers n. Calculate A^n if

   A = [illegible in the transcription]   or   A = [illegible in the transcription].

23. An m-by-n matrix M has a left inverse K if KM is the n-by-n identity matrix. Likewise, M has a right inverse L if ML is the m-by-m identity matrix.

(a) Show that if M has a left inverse, then n ≤ m, and if M has a right inverse, then m ≤ n.
(b) If m = n and M has a left inverse K, then M also has a right inverse L, and moreover K = L. In this case we say M is invertible or non-singular, and write K = L = M⁻¹.
(c) For a square (m = n) invertible matrix A it now makes sense to talk about A^n for n < 0 (and we set A^0 = I). Find A^n for all n ∈ Z for the matrices in problem 22.

24. Find the matrix P of the orthogonal projection of R² onto the line ax + by = 0, and show that P² = P.

25. Every linear transformation T from V to W has a matrix of a particularly simple form if you get to choose the bases for both V and W. (Unfortunately, in most problems you don't get to choose these bases ahead of time.) Find this simple form if you pick a basis {e_1, e_2, ..., e_n} for V so that the first several vectors T(e_1), T(e_2), ..., T(e_r) span T(V), where r is the rank of T, and e_{r+1}, ..., e_n span the kernel of T; and we pick a basis f_1, ..., f_m of W for which the first r vectors are f_1 = T(e_1), f_2 = T(e_2), ..., f_r = T(e_r).
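
Problems 18, 19 and 24 lend themselves to quick numerical checks. In the Python/NumPy sketch below (not part of the original notes), the rotation matrix is the standard one with respect to the standard basis, the composition check illustrates m(ST) = m(S)m(T) from problem 19, and the projection onto the line ax + by = 0 is built from the normal vector n = (a, b); the particular angles and the values of a and b are arbitrary choices:

```python
import numpy as np

def rotation(theta):
    """Matrix, in the standard basis of R^2, of counterclockwise rotation by theta (problem 18)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# Problem 19's m(ST) = m(S)m(T): composing two rotations is rotating by the sum of the angles.
alpha, beta = 0.7, 0.4
print(np.allclose(rotation(alpha) @ rotation(beta), rotation(alpha + beta)))   # True

# Problem 24: orthogonal projection of R^2 onto the line ax + by = 0, built from the
# normal vector n = (a, b):  P = I - n n^T / (a^2 + b^2).  Then P^2 = P.
a, b = 2.0, -3.0
n = np.array([[a], [b]])
P = np.eye(2) - (n @ n.T) / (a*a + b*b)
print(np.allclose(P @ P, P))   # True: projecting twice is the same as projecting once
```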

The ideas of matrices and linear transformations come in handy for solving systems of linear equations. The systems may be determined (in the sense that they have the same number of equations as unknowns), underdetermined (fewer equations) or overdetermined (fewer unknowns).

26. Explain why solving a system of linear equations is equivalent to deciding whether or not a given vector b ∈ W is in the range of a linear transformation T : V → W. Then show that a system of linear equations has either no solution, one solution or infinitely many solutions, and give examples to show that each of the three possibilities can occur. Finally, explain the relationship of ker T to the set of solutions of a system of linear equations.

27. Consider the linear system

   ∑_{j=1}^{n} a_ij x_j = b_i,   1 ≤ i ≤ m,   (**)

which we will often write in the shorter form Ax = b. Let T be the linear transformation from R^n to R^m corresponding to the matrix

   A = [ a_11  a_12  ...  a_1n ]
       [ a_21  a_22  ...  a_2n ]
       [  ...   ...        ... ]
       [ a_m1  a_m2  ...  a_mn ]

Define the kernel, range, rank and nullity of A to be the kernel, range, rank and nullity of the corresponding linear transformation T. Show that:

(a) If x_1 and x_2 are two solutions of (**), then x_1 − x_2 ∈ ker A.
(b) If x_0 is a specific solution of Ax = b, then any other solution of Ax = b is of the form x_0 + y, where y ∈ ker A, i.e., Ay = 0.
(c) If {y_1, y_2, ..., y_p} is a basis for the kernel of A, then any solution of the equation Ax = b can be written as x = x_0 + s_1 y_1 + s_2 y_2 + ... + s_p y_p, where x_0 is a fixed particular solution of Ax = b.

Note: the system of equations Ax = 0 is often called the associated homogeneous system to (**).
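
The structure described in problem 27 (any solution is a particular solution plus something in the kernel) can be seen numerically. Here is a minimal Python/NumPy sketch, not part of the original notes, using an under-determined example system chosen purely for illustration:

```python
import numpy as np

# Example under-determined system chosen here for illustration (not the one from the notes).
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.]])
b = np.array([3., 4.])

# A particular solution x0 (least squares gives one, since this system is consistent).
x0, *_ = np.linalg.lstsq(A, b, rcond=None)

# A basis for ker A from the SVD: rows of vh beyond the rank span the kernel.
_, s, vh = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
kernel = vh[rank:]                       # here nullity = 4 - 2 = 2

# x0 plus any combination of kernel vectors is again a solution of Ax = b (problem 27).
rng = np.random.default_rng(1)
x = x0 + kernel.T @ rng.normal(size=kernel.shape[0])
print(np.allclose(A @ x, b))             # True
```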

When it comes down to actually calculating the solution(s) to a system of linear equations, most people use some version of the method of Gaussian elimination. This method works by transforming the given system through a sequence of steps which do not change the set of solutions and which end up at a new system of equations that is easy to solve. There are three basic kinds of steps we will use:

1. Change the order in which the equations are written by interchanging the positions of two of them.
2. Multiply one of the equations by a nonzero constant.
3. Add to one of the equations a multiple of one of the other equations.

28. Explain why none of these operations changes the set of solutions of the system of equations (i.e., if x is a solution of the original system, then x is a solution of the transformed system, and vice versa). Also, explain why each of these operations is reversible.

In the course of our calculations, rather than writing the equations out as

   a_i1 x_1 + a_i2 x_2 + ... + a_in x_n = b_i,

it will be more efficient simply to keep track of the coefficients in what is called an augmented matrix. Rather than give a general theory for this, we will complete an illustrative example that should get the point across. Consider the system of equations worked here (the specific system and its coefficients are illegible in this transcription). The augmented matrix of this system simply gets rid of the plus, minus and equals signs and the variables, and simply records the coefficients.

The three operations on equations given above have obvious analogs (called elementary row operations) when operating on augmented matrices:

1. Interchange two rows of the matrix.
2. Multiply a row by a nonzero constant.
3. Add a multiple of one row to another row.

Armed with these operations, we can use a consistent, orderly strategy to transform the given augmented matrix into the matrix of a simpler system.
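
Since the coefficients of the illustrative system did not survive the transcription, here is a minimal Python sketch of the same procedure on a small augmented matrix chosen purely for illustration; it is not the example from the notes. The function uses exactly the three elementary row operations listed above (it also always swaps up the largest available pivot, which is a choice made here for numerical stability, not something the notes require):

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce M to reduced row-echelon form using the three elementary row operations."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        pivot = r + np.argmax(np.abs(A[r:, c]))   # choose a row to bring into position r
        if abs(A[pivot, c]) < tol:
            continue                              # no pivot in this column
        A[[r, pivot]] = A[[pivot, r]]             # op 1: interchange two rows
        A[r] = A[r] / A[r, c]                     # op 2: scale so the leading entry is 1
        for i in range(rows):
            if i != r:
                A[i] = A[i] - A[i, c] * A[r]      # op 3: clear the rest of the column
        r += 1
    return A

# Augmented matrix [A | b] for a small system chosen here for illustration.
aug = np.array([[ 2.,  1., -1.,   8.],
                [-3., -1.,  2., -11.],
                [-2.,  1.,  2.,  -3.]])
print(rref(aug))   # last column gives the unique solution: x = (2, 3, -1)
```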

29. Use the operations (first use operation 2 and then 3) to arrive at a system with a 1 in the first row and first column and zeros in the other first-column positions. Next, do the same to get a 1 in the second row, second column and zeros (a zero) below it, and then use operation 2 to get a 1 in the third row, third column. A matrix like this (the first nonzero entry in each row is a 1, and the leading 1 in each row is to the right of the leading 1 in the row above it) is said to be in row-echelon form. We can simplify even further, and repeatedly use operation 3 to get zeros above as well as below the leading 1s. Do this and obtain the reduced row-echelon form. (The intermediate matrices and the reduced row-echelon form worked out here are illegible in this transcription.) Now interpret the last matrix back into a system of equations, and show that the general solution has the form x = x_0 + s_1 y_1 + s_2 y_2 for specific vectors x_0, y_1, y_2. Check that the first vector on the right side is a particular solution of the original system. What are the other two vectors on the right side?

30. When might you have to use the first row operation (exchanging two rows)?

31. Solve the following systems of linear equations; you'll have to use all three row operations sometimes, and there will be interpretation issues!

(a) [system illegible in this transcription]

(b) [system illegible in this transcription; its right-hand sides are b_1, b_2, b_3]

Under what conditions on b_1, b_2, b_3 is there a solution?
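
The question at the end of problem 31, "under what conditions on b_1, b_2, b_3 is there a solution?", is exactly the question of whether b lies in the range of A, and numerically it can be tested by comparing ranks. The matrix and right-hand sides below are arbitrary illustrative choices (not the systems from problem 31, whose coefficients are illegible here):

```python
import numpy as np

def consistent(A, b):
    """Ax = b has a solution iff b is in the range of A, i.e. iff rank([A|b]) == rank(A)."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)

# Example with linearly dependent rows (second row = 2 * first row), chosen for illustration.
A = np.array([[1., 2., 1.],
              [2., 4., 2.],
              [1., 0., 1.]])
print(consistent(A, np.array([1., 2., 0.])))   # True:  b_2 = 2*b_1 is satisfied
print(consistent(A, np.array([1., 3., 0.])))   # False: b_2 != 2*b_1, so no solution
```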

32. You can find the inverse of a square n-by-n matrix, if it exists, by simultaneously solving n systems of equations. Show how to do this and invert the matrix A given here (its entries are illegible in this transcription). Hint: Start with the augmented matrix [A | I], with the identity matrix written to the right of A.

33. Explain how you could use a similar method to find the left or right inverse of a matrix (if one exists; can you state and prove a theorem about this?), and use it to find at least two right inverses of the first matrix given here and at least two left inverses of the second (both matrices are illegible in this transcription).
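
Problems 32 and 33 ask for the row-reduction method. As a purely numerical cross-check, one can also produce particular one-sided inverses from the normal equations, whenever the relevant square matrix is invertible; this is a different route from Gauss-Jordan, and the matrices below are arbitrary illustrative choices, not the ones from the notes:

```python
import numpy as np

# Example wide and tall matrices chosen here for illustration.
W = np.array([[1., 0., 2.],
              [0., 1., 1.]])          # 2x3: more columns than rows, full row rank
T = W.T                               # 3x2: full column rank

# One particular right inverse of W:  R = W^T (W W^T)^(-1), so that W R = I (2x2).
R = W.T @ np.linalg.inv(W @ W.T)
print(np.allclose(W @ R, np.eye(2)))  # True

# One particular left inverse of T:  L = (T^T T)^(-1) T^T, so that L T = I (2x2).
L = np.linalg.inv(T.T @ T) @ T.T
print(np.allclose(L @ T, np.eye(2)))  # True

# These one-sided inverses are not unique: adding to R any matrix whose columns lie in
# ker W gives another right inverse, which is one way to produce the "at least two"
# right inverses that problem 33 asks for.
```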