
STUDENT'S COMPANIONS IN BASIC MATH: THE ELEVENTH
Matrix Reloaded by Block Buster

Presumably you know the first part of the matrix story, including its basic operations (addition and multiplication) and the row reduction of a matrix into its echelon form. Answer the following simple questions to see if you can still remember that story line.

QUESTION 1. Let A and B be 2 x 2 matrices. What is A + B? What is A - B? What is AB? What is BA? What is A^2?

QUESTION 2. What is the reduced row echelon matrix of a given matrix A?

QUESTION 3. What are the elementary row operations? What is the difference between row echelon form and reduced row echelon form?

It is convenient to use block matrices. Given a matrix, we can use several vertical lines and horizontal lines to cut it into smaller matrices called blocks. For example, one vertical line and one horizontal line divide a matrix into four blocks. We can also use up all permissible vertical lines to chop a matrix into columns. (The words "cut" and "chop", as well as "matrix reloaded" and "block buster" in the title, are not standard.)

EXAMPLE 4. Chopping the matrix

    A = [ a_11  a_12  a_13 ]
        [ a_21  a_22  a_23 ]

vertically into columns, we have A = [A_1 | A_2 | A_3], or simply A = [A_1 A_2 A_3], where

    A_1 = [ a_11 ],   A_2 = [ a_12 ],   A_3 = [ a_13 ]
          [ a_21 ]          [ a_22 ]          [ a_23 ]

are the three columns of A. Block matrices can be considered as matrices with matrix entries, called blocks. They can be added and multiplied like usual matrices, as long as their sizes, as well as the sizes of their blocks, all fit.
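The arithmetic in QUESTIONS 1 and 2 is easy to experiment with numerically. A minimal sketch, using NumPy for the matrix arithmetic and SymPy for the reduced row echelon form; the particular entries of A and B below are made up for illustration, since the questions hold for any choice:

```python
import numpy as np
import sympy as sp

# Made-up 2x2 matrices (the questions leave the entries unspecified)
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 2]])

sum_AB    = A + B       # entrywise sum
diff_AB   = A - B       # entrywise difference
prod_AB   = A @ B       # matrix product AB
prod_BA   = B @ A       # matrix product BA (in general AB != BA)
A_squared = A @ A       # A^2 = AA

# Reduced row echelon form, as asked in QUESTION 2; SymPy returns the
# RREF together with the tuple of pivot-column indices.
R, pivots = sp.Matrix([[1, 2], [2, 4]]).rref()
```

Running this shows at a glance that AB and BA differ, and that the (made-up, singular) matrix in the last line reduces to a single nonzero row.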

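The column-chopping of EXAMPLE 4 interacts nicely with matrix multiplication: multiplying on the left acts column by column, and multiplying by a diagonal matrix on the right scales the columns. A quick numerical check of these two block identities, with made-up entries:

```python
import numpy as np

# EXAMPLE 4's matrix with made-up entries, chopped into its columns
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
A1, A2, A3 = A[:, [0]], A[:, [1]], A[:, [2]]   # each column kept as a 2x1 block

# Left multiplication acts column by column: B[A1 A2 A3] = [BA1 BA2 BA3]
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])                     # any 3x2 matrix will do
whole     = B @ A
blockwise = np.hstack([B @ A1, B @ A2, B @ A3])

# Right multiplication by a diagonal matrix scales the columns:
# [A1*l1  A2*l2  A3*l3] = [A1 A2 A3] diag(l1, l2, l3)
lam      = [2.0, -1.0, 5.0]
scaled   = np.hstack([A1 * lam[0], A2 * lam[1], A3 * lam[2]])
via_diag = A @ np.diag(lam)
```

Both pairs of matrices agree entry by entry, which is exactly the "sizes fit" bookkeeping the text describes.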
QUESTION 5. Let A = [A_1 A_2 A_3] be the block matrix in EXAMPLE 4, and let B be a 3 x 2 matrix. Is the identity B[A_1 A_2 A_3] = [BA_1 BA_2 BA_3] correct? Check it. How about [A_1 A_2 A_3]B = [A_1 B  A_2 B  A_3 B]?

Here is one particular thing about block matrices that one has to be very careful about: scalars should be treated like 1 x 1 matrices. Thus, identities like

    cA = c [ a_1 ] = [ c a_1 ]        (1)
           [ a_2 ]   [ c a_2 ]

found in many textbooks are, strictly speaking, incorrect, because in cA the block c is 1 x 1 and the block A is 2 x 1: their sizes do not fit for multiplying in that way. The correct one is

    Ac = [ a_1 ] c = [ a_1 c ]        (2)
         [ a_2 ]     [ a_2 c ]

Incorrect identities like (1) are often harmless, but not always.

QUESTION 6. Let A = [A_1 A_2 A_3] be the same as the one in EXAMPLE 4, and let λ_1, λ_2 and λ_3 be scalars. Is the identity

    [λ_1 A_1  λ_2 A_2  λ_3 A_3] = diag(λ_1, λ_2, λ_3) [A_1 A_2 A_3]        (3)

correct? (Here diag(λ_1, λ_2, λ_3) denotes the 3 x 3 diagonal matrix with λ_1, λ_2, λ_3 on its diagonal.)

QUESTION 7. With the same notation as the last question, is the identity

    [A_1 A_2 A_3] [λ_1; λ_2; λ_3] = A_1 λ_1 + A_2 λ_2 + A_3 λ_3        (4)

correct?

QUESTION 8. With the same notation as QUESTION 6, is the identity

    [A_1 λ_1  A_2 λ_2  A_3 λ_3] = [A_1 A_2 A_3] diag(λ_1, λ_2, λ_3)        (5)

correct? Check this.

Each elementary row operation corresponds to an elementary matrix E. The result of this elementary row operation on a matrix A is the product EA. To find this elementary

matrix E, append the identity matrix I to A to form [A I] and then apply your row operation. The result is E[A I] = [EA  E], from which the required E can be read off from the second slot. Thus, the elementary matrix corresponding to a row operation is equal to the result of applying the same row operation to the identity matrix I.

EXAMPLE 9. Let us find the elementary matrix corresponding to the operation R_2 - 2R_1 applied to a 2 x 2 matrix, say A = [a b; c d]. Instead of applying it to A alone, we apply this operation to [A I]:

    [ a  b  1  0 ]  ->  [ a       b       1  0 ]
    [ c  d  0  1 ]      [ c - 2a  d - 2b -2  1 ]

So the required elementary matrix is

    E = [  1  0 ]
        [ -2  1 ]

You can check that

    EA = [  1  0 ] [ a  b ] = [ a       b      ]
         [ -2  1 ] [ c  d ]   [ c - 2a  d - 2b ]

showing that the answer is correct.

QUESTION 10. What is the elementary matrix corresponding to exchanging the first row and the third row of a 3 x 3 matrix?

QUESTION 11. What is the elementary matrix corresponding to adding the first row to the second row of a 3 x 3 matrix?

A sequence of elementary row operations applied to A corresponds to a sequence of elementary matrices, say E_1, E_2, ..., E_n, and the result of these operations is B = E_n ... E_2 E_1 A. Suppose that we pick these row operations judiciously so that B is the identity matrix, namely B = I. Then E_n ... E_2 E_1 A = I. Thus E_n ... E_2 E_1 gives us the inverse A^{-1} of A. The question is: how do we capture the product E_n ... E_2 E_1? Actually, we know the answer even before this question is asked: append I to A to form [A I] and apply the corresponding row operations:

    E_n ... E_2 E_1 [A I] = [E_n ... E_2 E_1 A   E_n ... E_2 E_1 I] = [I   E_n ... E_2 E_1]        (6)

The answer is hidden in the second slot of the last block matrix.

QUESTION 12. Why do we write E_n ... E_2 E_1 instead of E_1 E_2 ... E_n?
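Both recipes above are easy to mimic with NumPy: applying a row operation to I yields the elementary matrix E, and row reducing [A I] to [I A^{-1}] captures the product E_n ... E_2 E_1. A sketch, with a made-up invertible matrix A:

```python
import numpy as np

# The elementary matrix for a row operation is the result of applying
# that operation to the identity (EXAMPLE 9: R2 <- R2 - 2*R1)
E = np.eye(2)
E[1] -= 2 * E[0]

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # made-up invertible matrix
after_rowop = A.copy()
after_rowop[1] -= 2 * after_rowop[0]  # the same operation applied to A directly

# Capturing E_n ... E_2 E_1 = A^{-1}: row reduce [A I] to [I A^{-1}]
M = np.hstack([A, np.eye(2)])
M[1] -= 3 * M[0]                # R2 <- R2 - 3*R1  (clear below the first pivot)
M[1] /= M[1, 1]                 # scale the second pivot to 1
M[0] -= 2 * M[1]                # R1 <- R1 - 2*R2  (clear above the second pivot)
A_inv = M[:, 2:]                # read the inverse off the second slot
```

The product E @ A reproduces the row operation, and the second slot of the reduced [A I] agrees with the inverse computed any other way.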

EXAMPLE 13. We can use the method described here to find the inverse of a 2 x 2 triangular matrix with nonzero diagonal entries. The first step in what follows converts the diagonal entries into 1, and the second step removes the upper right corner:

    [ a  b  1  0 ]  ->  [ 1  b/a  1/a   0  ]  ->  [ 1  0  1/a  -b/(ac) ]
    [ 0  c  0  1 ]      [ 0  1    0    1/c ]      [ 0  1   0     1/c   ]

Hence, if a, c != 0, we have

    [ a  b ]^{-1} = [ 1/a  -b/(ac) ]
    [ 0  c ]        [  0     1/c   ]

A basic fact about row reduction says: elementary row operations do not change any linear relation among the columns of a matrix.

EXAMPLE 14. Consider the 2 x 3 matrix

    A = [A_1 A_2 A_3] = [ 1  0  2 ]
                        [ 1  1  5 ]

We want to find a nontrivial linear relation among its columns A_1, A_2 and A_3. We know that such a relation exists, because these columns represent three vectors in a two-dimensional space. The question is to find it explicitly. By a routine procedure, we find its reduced row echelon form:

    B = [ 1  0  2 ]
        [ 0  1  3 ]

It is easy to see a nontrivial linear relation among the columns of B: B_3 = 2B_1 + 3B_2, or 2B_1 + 3B_2 - B_3 = O. By the basic fact mentioned above, the same relation holds for the columns of A: 2A_1 + 3A_2 - A_3 = O, as you can check:

    2 [ 1 ] + 3 [ 0 ] - [ 2 ] = [ 0 ]
      [ 1 ]     [ 1 ]   [ 5 ]   [ 0 ]

You can tell that this identity among the columns of A cannot be described as trivial.

Now we give a proof of the basic fact that row operations do not change column relations. Let's take an m x n matrix A with columns A_1, A_2, ..., A_n, that is, A = [A_1 A_2 ... A_n]. Assume that A is row reduced to B = [B_1 B_2 ... B_n]. This row reduction follows a sequence of elementary row operations, which correspond to elementary matrices E_1, E_2, ..., E_{N-1}, E_N, so B = E_N E_{N-1} ... E_2 E_1 A. For simplicity,

let's put P = E_N ... E_2 E_1, so that B = PA. Spelling out the columns in the last identity, we have

    [B_1 B_2 ... B_n] = P [A_1 A_2 ... A_n] = [PA_1  PA_2  ...  PA_n].

Comparing columns, we obtain B_1 = PA_1, B_2 = PA_2, etc. Now, suppose we have a linear relation among the columns of A, say A_1 c_1 + A_2 c_2 + ... + A_n c_n = O. Then

    B_1 c_1 + B_2 c_2 + ... + B_n c_n = PA_1 c_1 + PA_2 c_2 + ... + PA_n c_n
                                      = P(A_1 c_1 + A_2 c_2 + ... + A_n c_n) = PO = O.

This shows that the same linear relation holds among the columns of B.

In the above discussion, by a linear relation among column vectors A_1, A_2, ..., A_n we mean an identity of the form

    A_1 c_1 + A_2 c_2 + ... + A_n c_n = O,        (7)

where c_1, c_2, ..., c_n are some scalars. We say that column vectors B_1, B_2, ..., B_n satisfy the same linear relation as (7) in case B_1 c_1 + B_2 c_2 + ... + B_n c_n = O holds. Clearly, (7) holds if c_1 = c_2 = ... = c_n = 0; in that case we say that (7) is a trivial linear relation. Otherwise (7) is called a nontrivial linear relation. In other words, (7) is a nontrivial linear relation for A_1, A_2, ..., A_n if at least one of c_1, c_2, ..., c_n is nonzero. We say that a set of column vectors A_1, A_2, ..., A_n is linearly dependent if there is a nontrivial linear relation among them; otherwise we say that the set is (or the vectors are) linearly independent. Thus, a set of column vectors is linearly independent if there is no linear relation among these vectors except the trivial one. (This definition of linear dependence and linear independence applies to vectors in a linear space in general.)

EXAMPLE 15. Let a, b, c, d be distinct numbers. We are asked to show that

    C_0 = [ 1 ],  C_1 = [ a ],  C_2 = [ a^2 ],  C_3 = [ a^3 ]
          [ 1 ]         [ b ]         [ b^2 ]         [ b^3 ]
          [ 1 ]         [ c ]         [ c^2 ]         [ c^3 ]
          [ 1 ]         [ d ]         [ d^2 ]         [ d^3 ]

are linearly independent. We start by writing down a linear relation among these vectors: a_0 C_0 + a_1 C_1 + a_2 C_2 + a_3 C_3 = O (this is the usual way of writing, which is wrong, but harmless here). We have to show that this linear relation is trivial. It can be rewritten as

    [ a_0 + a_1 a + a_2 a^2 + a_3 a^3 ]   [ 0 ]
    [ a_0 + a_1 b + a_2 b^2 + a_3 b^3 ] = [ 0 ]
    [ a_0 + a_1 c + a_2 c^2 + a_3 c^3 ]   [ 0 ]
    [ a_0 + a_1 d + a_2 d^2 + a_3 d^3 ]   [ 0 ]

which tells us that a, b, c, d are four distinct roots of a_0 + a_1 x + a_2 x^2 + a_3 x^3 = 0. But a polynomial of degree less than or equal to three cannot have four different roots, unless this polynomial is identically zero. So a_0, a_1, a_2, a_3 must all be zero, showing that C_0, C_1, C_2, C_3 are linearly independent.

As we know, applying elementary row operations can transform a given matrix A into its reduced row echelon form, say B. What can you learn from B about A? To answer this, first let us describe B more carefully. Suppose that the leading ones of B occur exactly at the columns B_{i_1}, B_{i_2}, ..., B_{i_r}, where i_1 < i_2 < ... < i_r <= n. (I hope you understand perfectly well what "leading one" means.) So we have B_{i_1} = e_1, B_{i_2} = e_2, etc. In general, B_{i_k} = e_k, the column vector with 1 in the kth entry and 0 elsewhere. The corresponding columns of A, namely A_{i_1}, A_{i_2}, ..., A_{i_r}, will be called the pivoting columns of A.

Now I assert: a column of A is pivoting if and only if it cannot be written as a linear combination of the columns preceding it. This is because A is row equivalent to B and the corresponding statement for B is obviously true. Certainly here we use the fact that row reduction preserves linear relations among columns. My assertion implies that the pivoting columns of A are uniquely determined by A: they do not depend on the way the row operations are carried out. As a consequence, the locations of the leading ones in B are also uniquely determined by A.

Next I assert: every non-pivoting column can be written in a unique way as a linear combination of the pivoting columns preceding it. To see this, suppose that A_j is a non-pivoting column of A and A_{i_1}, A_{i_2}, ..., A_{i_k} are the pivoting columns in A preceding A_j. Let us take a look at the corresponding columns of B: B_{i_1} = e_1, B_{i_2} = e_2, ..., B_{i_k} = e_k and B_j. If we write b_{ij} for the (i, j) entry of B, then the entries of B_j are b_{1j}, b_{2j}, ..., b_{mj}. Since B_{i_k} = e_k is the last pivoting column of B preceding B_j, we must have b_{ij} = 0 for i > k. Thus

    B_j = [b_{1j}; b_{2j}; ...; b_{mj}] = e_1 b_{1j} + e_2 b_{2j} + ... + e_m b_{mj}
        = e_1 b_{1j} + e_2 b_{2j} + ... + e_k b_{kj}
        = B_{i_1} b_{1j} + B_{i_2} b_{2j} + ... + B_{i_k} b_{kj}.

Since row reduction preserves linear relations among columns,

the same identity holds for A, namely

    A_j = A_{i_1} b_{1j} + A_{i_2} b_{2j} + ... + A_{i_k} b_{kj}.

Such a linear combination is unique, due to the fact that the pivoting columns are linearly independent. So any non-pivoting column, say the jth column A_j, can be written in a unique way as a linear combination of the preceding pivoting columns, and the coefficients of this linear combination determine the jth column B_j of the reduced echelon matrix B. Now we see that the reduced row echelon form of A is uniquely determined by A; it does not depend on the way the elementary row operations are carried out. In particular, the positive integer r, the number of pivoting columns of A, is uniquely determined by A. We shall call r the rank of A.

Geometrically, the rank of A is the dimension of the column space of A, that is, of the subspace spanned by the column vectors of A. The above discussion tells us that the pivoting columns of A are linearly independent and the other columns are linear combinations of them. So the pivoting columns form a basis of the column space. Hence the dimension of the column space is the number of pivoting columns, which is r, the rank of A.

We may also consider the row space of A, which is the space spanned by the row vectors of A. Since elementary row operations only create linear combinations of rows as new rows, and since these operations can be reversed, we see that row equivalent matrices have the same row space. Clearly, the nonzero rows of the echelon matrix B of A form a basis of the row space. This allows us to conclude that row rank = column rank for any matrix, provided that you know what row rank and column rank mean!
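SymPy's rref makes the two assertions above concrete: it returns the reduced echelon form B together with the pivot-column indices, and a non-pivoting column of B stores exactly the coefficients that express the corresponding column of A in terms of the preceding pivoting columns. A sketch with a small 2 x 3 matrix (entries made up):

```python
import numpy as np
import sympy as sp

# A small 2x3 example with two pivoting columns
A = sp.Matrix([[1, 0, 2],
               [1, 1, 5]])
B, pivots = A.rref()            # reduced row echelon form and pivot indices

# The non-pivoting column of B holds the coefficients expressing the
# corresponding column of A via the preceding pivoting columns of A.
j = 2                           # index of the non-pivoting column
combo = sum((A[:, i] * B[k, j] for k, i in enumerate(pivots)),
            sp.zeros(2, 1))

# Row rank equals column rank, checked numerically:
An = np.array(A.tolist(), dtype=float)
col_rank = np.linalg.matrix_rank(An)
row_rank = np.linalg.matrix_rank(An.T)
```

Here `combo` rebuilds the third column of A from its two pivoting columns, and the rank of A agrees with the rank of its transpose, as the row rank = column rank conclusion predicts.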
Now we use the basic theory of row reduction to deal with a basic issue in linear algebra: the justification of the definition of dimension. The dimension of a subspace M of R^m is usually defined to be the number of vectors in any basis of M. This definition makes sense only if the following assertion is valid: all bases of M have the same number of vectors. We are going to verify this assertion in order to justify this definition of dimension.

We start with two bases of M, say {A_1, A_2, ..., A_n} and {B_1, B_2, ..., B_l}. We have to show that n = l. Suppose on the contrary that n and l differ, so that one is less than the other, say n < l. Form matrices out of these bases: A = [A_1 A_2 ... A_n] and B = [B_1 B_2 ... B_l]. Then M becomes the column space of A, as well as the column

space of B. Since the columns of A, as well as those of B, are linearly independent, all columns are pivoting. So the reduced row echelon forms of A and B must be

    Ã = [ I_n ]   and   B̃ = [ I_l ]
        [ O_n ]             [ O_l ]

where I_n is the n x n identity matrix and O_n is the (m - n) x n zero matrix; the matrices I_l and O_l are interpreted in the same manner. Since B_1 is in M, which is spanned by the columns of A, we can write B_1 as a linear combination of the columns of A, say B_1 = A_1 x_1 + A_2 x_2 + ... + A_n x_n, or

    B_1 = [A_1 A_2 ... A_n] [x_1; x_2; ...; x_n] = AX_1,  where X_1 = [x_1; x_2; ...; x_n].

Thus AX_1 = B_1 for some X_1 in R^n. Similarly we have AX_2 = B_2, ..., AX_l = B_l for some X_2, ..., X_l in R^n. Now

    B = [B_1 B_2 ... B_l] = [AX_1 AX_2 ... AX_l] = A [X_1 X_2 ... X_l] = AX,  where X = [X_1 X_2 ... X_l].

Since Ã is the reduced row echelon matrix of A, we can write Ã = PA, where P is a product of finitely many elementary matrices. Thus, from B = AX we have

    PB = PAX = ÃX = [ I_n ] X = [ X ]
                    [ O_n ]     [ O ]

This tells us that the last m - n rows of PB are zeros. So the last m - n rows of the reduced row echelon matrix of PB are also zeros. But the reduced row echelon matrix of PB is the same as that of B, namely B̃, because PB and B are row equivalent. And only the last m - l rows of B̃ are zero rows. This is impossible, because m - n > m - l. The proof is complete.

In the history of science, an important development is the use of spectral methods to understand nature. The eigenvalue problem in linear algebra is the first step towards spectral theory. Let A be an n x n matrix (a square matrix!). By an eigenvector of A corresponding to the eigenvalue λ we mean a nonzero column vector V in R^n such that AV = Vλ.
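The defining identity AV = Vλ (with the scalar written on the right, as above) can be checked directly; the matrix and the candidate eigenvector below are made-up illustrations:

```python
import numpy as np

# Made-up 2x2 matrix with a candidate eigenvector/eigenvalue pair
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
V = np.array([[1.0],
              [1.0]])          # an eigenvector must be nonzero
lam = 3.0

lhs = A @ V                    # A acting on V
rhs = V * lam                  # the scalar kept on the right: V*lam
```

Here A @ V equals V * lam, so V is an eigenvector of A for the eigenvalue 3. Note that the zero vector would satisfy the identity for every λ, which is exactly why the definition insists that V be nonzero.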

PROBLEM 16. Show that if V_1, V_2, ..., V_r are eigenvectors corresponding to distinct eigenvalues λ_1, λ_2, ..., λ_r, then V_1, V_2, ..., V_r are linearly independent.

Suppose that we are in the lucky situation that A has n eigenvectors P_1, P_2, ..., P_n corresponding to distinct eigenvalues λ_1, λ_2, ..., λ_n (recall that n is the number of rows, or columns, of the matrix A):

    AP_1 = P_1 λ_1,  AP_2 = P_2 λ_2,  ...,  AP_n = P_n λ_n.        (8)

According to the previous problem, the column vectors P_1, P_2, ..., P_n are linearly independent. Let P = [P_1 P_2 ... P_n]. Then P is an invertible n x n matrix.

QUESTION 17. Why is P invertible, according to what you have learned so far, without citing any theorem from a textbook?

From (8), we have

    AP = [AP_1  AP_2  ...  AP_n] = [P_1 λ_1  P_2 λ_2  ...  P_n λ_n] = PD,        (9)

where D is the diagonal matrix with the eigenvalues of A as its diagonal entries:

    D = diag(λ_1, λ_2, λ_3, ..., λ_n).        (10)

From (9) we obtain A = PDP^{-1}, where D is the diagonal matrix given by (10).

QUESTION 18. "Cancelling" P in PDP^{-1} to write PDP^{-1} = D is another weapon of math destruction. What is the TERRIBLE consequence of it?
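A minimal sketch of this diagonalization with NumPy, using a made-up symmetric matrix with distinct eigenvalues. numpy.linalg.eig returns the eigenvalues together with a matrix P whose columns are eigenvectors, so identities (8), (9) and A = PDP^{-1} can all be verified, and one can also see numerically why "cancelling" P is disastrous:

```python
import numpy as np

# Made-up matrix with distinct eigenvalues (here 3 and 1)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, P = np.linalg.eig(A)       # columns of P are eigenvectors of A
D = np.diag(lam)                # the diagonal matrix (10)

AP = A @ P                      # identity (9): AP = PD
PD = P @ D

A_rebuilt = P @ D @ np.linalg.inv(P)    # A = P D P^{-1}

# QUESTION 18's warning: P D P^{-1} is NOT D, because matrix
# multiplication does not commute; "cancelling" P would force every
# diagonalizable matrix to be diagonal.
is_diagonal = np.allclose(A_rebuilt, D)
```

Reassembling P @ D @ inv(P) recovers A itself, which is visibly not the diagonal matrix D.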