Lecture Summaries for Linear Algebra M51A


These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. The section reference in parentheses after each lecture title (e.g., §1.1) refers to the section in the textbook.

Lecture 01: Introduction. An introduction to the course, listing the topics to be covered; remarks concerning the rigor and formality with which the theory will be developed.

Lecture 02 (§1.1): Matrices. Definitions, notations, and examples of: matrix, element, order, a_ij (as well as A_ij, A_{i,j}, A(i,j), the (i,j)-entry of A), [a_ij], row index, column index, main (or principal) diagonal, diagonal element, square matrix, row matrix, column matrix, components, dimension, n-tuple, equality of matrices.

Lecture 03 (§1.1): Matrix Addition and Scalar Multiplication. Definition and example of matrix addition; theorem asserting commutativity and associativity of (matrix) addition; definition of the zero matrix; theorem asserting that 0 is an additive identity element; definition of matrix subtraction; definition and example of scalar multiplication; theorem asserting the basic associative and distributive properties of scalar multiplication; some basic properties (left as exercises) relating matrix addition, subtraction, and scalar multiplication.

Lecture 04 (§1.2): Matrix Multiplication. Definition of matrix multiplication; examples showing that matrix multiplication is not commutative, that a product of two nonzero matrices can be the zero matrix, and that cancellation does not hold for matrix multiplication; theorem asserting that matrix multiplication is associative and distributes over matrix addition.

Lecture 05 (§1.3): Transpose and Symmetry. Definition, examples, and basic properties of the transpose operation; notation A^n for n > 0 and for n = 0; definitions, examples, and basic properties of symmetric, skew-symmetric, diagonal, and identity matrices.

Lecture 06 (§1.3): Partitions and Special Forms of Matrices. Definitions and examples of submatrix, partitioned matrix, block, symmetrically partitioned matrix, diagonal block of a symmetrically partitioned matrix, zero row of a matrix, nonzero row of a matrix, row-reduced form, upper triangular, and lower triangular; theorem asserting that the product of two upper triangular matrices is upper triangular, and the product of two lower triangular matrices is lower triangular.

Lecture 07 (§1.4): Linear Systems of Equations. Definition and examples of a system of m linear equations in n variables, and its solutions; matrix representation of such a system; theorem asserting that if x_1 and x_2 are distinct solutions to Ax = b, then αx_1 + βx_2 is also a solution whenever α + β = 1; corollary asserting that a system of linear equations has either 0, 1, or infinitely many solutions; geometrical perspective on this corollary in 2-space and 3-space; definitions of homogeneous, inhomogeneous, consistent, inconsistent, and trivial solution; observation that the trivial solution makes every homogeneous system consistent.

Lecture 08 (§1.4): Gaussian Elimination: Part I. Definitions and examples of augmented matrix; definition of the elementary row operations; correspondence between operations on the equations of a linear system and the elementary row operations on the corresponding augmented matrix; definition and example of Gaussian elimination; definition of derived set (of equations); guidelines for applying elementary row operations when doing Gaussian elimination.

Lecture 09 (§1.4): Gaussian Elimination: Part II. More examples of Gaussian elimination, including an example of an inconsistent system and a system with infinitely many solutions; summary of the relationship between the derived set of equations and the number of solutions to the system.

Lecture 10 (§1.5): The Inverse of a Matrix. Definition and examples of the (multiplicative) inverse of a (square) matrix; observation that being an inverse is a symmetric relation; theorem asserting that inverses are unique; the notation A^(-1) for the inverse; observation that not all matrices have inverses; definitions of invertible, nonsingular, and singular; observation that diagonal matrices whose diagonal elements are all nonzero are invertible; theorem asserting that (A^(-1))^(-1) = A, (AB)^(-1) = B^(-1)A^(-1), (A^T)^(-1) = (A^(-1))^T, and (λA)^(-1) = (1/λ)A^(-1) for any nonzero scalar λ; theorem asserting that if A is invertible, then the matrix equation Ax = b has the unique solution x = A^(-1)b.

Lecture 11 (§1.5): Elementary Matrices and Computing Inverses. Definition and example of elementary matrix; construction of elementary matrices; theorem asserting that elementary matrices are unique and invertible, and giving descriptions of their inverses; lemmas asserting the equivalence of invertibility of a matrix A with A's having certain row-reduced forms; theorem asserting that A is invertible if and only if E_k ··· E_1 A = I for some elementary matrices E_1, ..., E_k; step-by-step procedure for computing the inverse of A using the augmented matrix [A I] and the elementary matrices (or row operations) that transform A into the identity (if such exist).

Lecture 12 (§1.6): LU Decompositions.
Definition of the LU decomposition of a nonsingular matrix; notation R_3(i, j, k) denoting the type-three elementary row operation that adds to the ith row k times the jth row (where k ≠ 0 and i ≠ j); lemma asserting that the diagonal elements of a nonsingular lower triangular matrix must be nonzero; theorem asserting that a nonsingular matrix A has an LU decomposition if and only if A can be transformed to an upper triangular matrix using only operations of the form R_3(i, j, k) with i > j; discussion of LDU decomposition; discussion of how such decompositions can be used to solve systems of equations.

Lecture 13 (§2.1): Vector Spaces. Notations R, C, and ɛ; definition (and defining axioms) of a vector space; definition of a real vector space and a complex vector space; example showing that R^3, with vector addition and scalar multiplication defined componentwise, is a real vector space.

Lecture 14 (§2.1): Examples of Vector Spaces. Discussion of various examples of vector spaces, including R^n, C^n, certain sets of matrices, polynomials, real-valued functions, and solutions to linear homogeneous differential equations.

Lecture 15 (§2.1): Basic Properties of Vector Spaces. Theorems asserting that in any vector space there is a unique additive identity element (the zero vector), additive inverses are unique, 0u = 0, (-1)u = -u, -(-u) = u, α0 = 0, and αu = 0 implies either α = 0 or u = 0.

Lecture 16 (§2.2): Subspaces: Part I. Definition of a subspace of a vector space; theorem asserting that a nonempty subset S of a vector space V is a subspace of V if and only if S is closed under vector addition and scalar multiplication; corollary asserting that a nonempty subset S of a vector space V is a subspace of V if and only if αu + βv ∈ S whenever u, v ∈ S and α and β are scalars; examples illustrating the use of the corollary.

Lecture 17 (§2.2): Subspaces: Part II.
Example showing that for any a_1, ..., a_n ∈ R, the set S = {(x_1, ..., x_n) ∈ R^n : a_1x_1 + ··· + a_nx_n = 0} is a subspace of R^n; discussion of the geometry of such subspaces; theorem asserting that the intersection of arbitrarily many subspaces is again a subspace; definitions of linear combination and span of a collection of vectors; theorems asserting that for any nonempty subset S of a vector space V, span(S) is a subspace of V, span(S) is contained in any subspace containing S, and span(S) is the intersection of all subspaces containing S.

Lecture 18 (§2.3): Linear Independence. Definitions and examples of linear dependence and independence; theorem asserting that a finite ordered set of nonzero vectors is linearly dependent if and only if one of the vectors is a linear combination of those preceding it; corollary asserting that if S is a linearly dependent subset of V having n elements and with V = span(S), then there is a subset T of S with n - 1 elements such that span(T) = V; corollary asserting that if V = span(S) and S has the minimal number of elements of all sets whose span is V, then S is linearly independent; theorems asserting that a subset with one element u is linearly dependent if and only if u = 0, a subset with two elements is linearly dependent if and only if one element is a scalar multiple of the other, and any subset containing 0 is linearly dependent; theorem asserting that a subset of a linearly independent set is linearly independent, and a superset of a linearly dependent set is linearly dependent.

Lecture 19 (§2.4): Basis. Definitions of spans, spanning set, and basis; various examples of bases, including the standard basis for R^n and the standard basis for P_n; definition of finite dimensional; examples of finite dimensional vector spaces; examples of infinite dimensional vector spaces; theorem asserting that if S has n elements and spans V, then any set with more than n elements must be linearly dependent.

Lecture 20 (§2.4): Dimension.
Various corollaries of the theorem proven in the previous lecture, including the corollary asserting that for a given finite dimensional vector space, all bases have the same number of elements; definition, notation, and examples of dimension; remark on how dimension can change if the set of scalars is changed; remark on the dimension of the trivial vector space {0} being zero; theorem asserting that for a given basis of a vector space V, each u ∈ V has a unique representation as a linear combination of the basis vectors; definition, notation, and examples of the coordinates of a vector with respect to a basis; theorem asserting that any linearly independent set of vectors is a subset of some basis, and that any spanning set contains some basis as a subset.

Lecture 21 (§2.5): Row Space: Part I. Definitions, notations, and examples of the row space and row rank of a matrix; theorem asserting that a basis for the row space of a matrix in row-reduced form is the set of nonzero rows, and the row rank is the number of nonzero rows; lemmas asserting that S ⊆ T implies span(S) ⊆ span(T), span(span(S)) = span(S), and S ⊆ span(T) implies span(S) ⊆ span(T); lemma asserting that the span of a set of nonzero vectors is unchanged if (1) a multiple of one vector is added to another, or (2) one of the vectors is replaced by a nonzero scalar multiple of itself; theorem asserting that if matrix B is obtained from matrix A by elementary row operations, then rowspace(B) = rowspace(A).

Lecture 22 (§2.5): Row Space: Part II.
Corollary (of theorems from the previous lecture) asserting that a basis for rowspace(A) is the set of nonzero rows of the matrix B obtained by transforming A to row-reduced form; example illustrating the use of this corollary in finding a basis for the span of any finite collection of n-tuples in R^n; discussion explaining how a similar procedure can be used to find a basis for the span of any finite collection of vectors in any vector space V; lemma and theorem asserting that in any n-dimensional vector space V with some chosen basis, a basis for the span of the coordinate representations of u_1, ..., u_m consists of n-tuples that are the coordinate representations of vectors that form a basis for span{u_1, ..., u_m}.

Lecture 23 (§2.6): Column Space and Rank of a Matrix. Definitions and notations for the column space and column rank of a matrix; lemma and theorem asserting that row rank equals column rank; definition and notation for the rank of a matrix; theorem asserting that the system Ax = b is consistent if and only if rank(A) = rank([A b]); theorem asserting that if Ax = b is a consistent system in n variables and rank(A) = k, then the solutions to the system are expressible in terms of n - k arbitrary unknowns; corollary asserting that the homogeneous system Ax = 0 in n variables has nontrivial solutions if and only if rank(A) < n.

Lecture 24 (§2.6): Rank and Invertibility. Lemma asserting that if A and B are n × n matrices satisfying AB = I_n, then the rows of A are linearly independent and r(A) = n; lemma asserting that if an n × n matrix A has rank n, then there exists C such that CA = I_n; lemma asserting that if A and B are n × n matrices with AB = I_n, then BA = I_n; theorem asserting that an n × n matrix A is invertible if and only if r(A) = n; corollary asserting that an n × n matrix A is invertible if and only if it can be transformed using elementary row operations to an upper triangular matrix with all diagonal elements equal to 1.

Lecture 25 (§3.1, §3.2): Linear Transformations. Definitions of function, transformation, image, one-to-one, onto, and linear; discussion of what "linear" really means; various examples of linear transformations; theorem asserting that if T: R^m → R^n is defined by Tv = Av, where A is an n × m matrix, then T is linear.

Lecture 26 (§3.3): Matrix Representations of Linear Transformations. Discussion of why a linear transformation is determined by its action on a basis of its domain; notation (v)_B denoting the column matrix consisting of the coordinates of v with respect to the basis B; lemma asserting that taking coordinates with respect to a basis of an n-dimensional vector space V gives a one-to-one, onto, linear transformation from V to R^n; theorem asserting that if T is a linear transformation from the vector space V to the vector space W, with bases B and C respectively, then there exists a matrix A = A_B^C such that for any v ∈ V, (Tv)_C = A(v)_B; pictorial description of the previous theorem.

Lecture 27 (§3.4): Matrix Representations and Change of Basis.
Corollary (of the theorem proven in the previous lecture) asserting that for any two bases B and C of a finite dimensional vector space V, there exists a transition matrix P_B^C such that (v)_C = P_B^C (v)_B for all v ∈ V; theorem asserting that any transition matrix P_B^C is invertible and (P_B^C)^(-1) = P_C^B; definition of similar; theorem asserting that for bases B and C of V and a linear transformation T: V → V, the matrix representations of T with respect to B and C are similar; theorem asserting that any matrix similar to a matrix representation of T (with respect to some basis) is also a matrix representation of T (with respect to some basis).

Lecture 28 (§3.5): Kernel and Image: Part I. Definitions, notations, and examples of the kernel, image, nullity, and rank of a linear transformation; theorems asserting that the kernel and image are subspaces (of the domain and range, respectively); theorem asserting that if A is the matrix representation of T: V → W with respect to bases for V and W, then the rank of T is the rank of A.

Lecture 29 (§3.5): Kernel and Image: Part II. Lemma asserting that if T: V → W is linear, and if ker(T) has a basis {v_1, ..., v_k} that can be extended to a basis {v_1, ..., v_k, v_{k+1}, ..., v_n} for V, then {Tv_{k+1}, ..., Tv_n} is a basis for image(T); theorem asserting that if T: V → W is linear and V is finite dimensional, then r(T) + null(T) = dim(V); theorem asserting that a linear transformation T is one-to-one if and only if ker(T) = {0}; theorem asserting that if T: V → W is linear and dim(V) = dim(W), then T is one-to-one if and only if T is onto; definition of an isomorphism; definition and notation (≅) of isomorphic; theorem asserting that ≅ is a reflexive, symmetric, and transitive relation; definition of equivalence relation; theorem asserting that two finite dimensional vector spaces are isomorphic if and only if they have the same dimension.

Lecture 30 (§4.1, §4.2): Determinants: Part I.
Notation µ_ij(A) denoting the submatrix of A obtained by deleting the ith row and jth column of A; definition, notation, and examples of the determinant of a matrix; definitions of minor and cofactor; theorem on expansion by cofactors; corollary asserting that a matrix with a zero row or zero column has determinant zero; theorem asserting that the determinant of an upper (or lower) triangular matrix is the product of the diagonal elements; statement of two theorems relating 2 × 2 determinants to areas of parallelograms in the plane.

Lecture 31 (§4.2): Determinants: Part II. Theorem asserting that for any square matrix A, det(A) = det(A^T); three lemmas that together imply a theorem asserting that for any n × n matrix A and n × n elementary matrix E, det(EA) = det(E) det(A); corollary asserting that det(A) = 0 whenever A has two identical rows or two identical columns; corollary asserting that for any scalar λ and any n × n matrix A, det(λA) = λ^n det(A).

Lecture 32 (§4.2): Determinants: Part III. Corollaries (of the theorem in the previous lecture) asserting that for any n × n matrix A and any n × n elementary matrices E_1, ..., E_k, det(E_k E_{k-1} ··· E_1 A) = det(E_k E_{k-1} ··· E_1) det(A) = det(E_k) det(E_{k-1}) ··· det(E_1) det(A); theorem asserting that a square matrix A is invertible if and only if det(A) ≠ 0; theorem asserting that for any n × n matrices A and B, det(AB) = det(A) det(B); corollary asserting that for an n × n invertible matrix A, det(A^(-1)) = (det(A))^(-1); theorem asserting that similar matrices have the same determinant.

Lecture 33 (§4.3): Eigenvectors and Eigenvalues. Definitions and examples of eigenvectors and eigenvalues for a transformation and for a matrix; theorem asserting that for a linear transformation T: V → V, there exists a basis of V consisting only of eigenvectors of T if and only if there exists a matrix representation of T that is diagonal; theorem asserting that u is an eigenvector with eigenvalue λ for the linear transformation T if and only if the coordinate representation of u with respect to a basis C is an eigenvector with eigenvalue λ for the matrix representation of T with respect to C; theorem asserting that for a linear transformation T: V → V, the set S_λ, consisting of 0 and all eigenvectors of T with eigenvalue λ, is a subspace of V, known as the eigenspace of T for the eigenvalue λ.

Lecture 34 (§4.3): Finding Eigenvectors and Eigenvalues.
Theorem asserting that λ is an eigenvalue of the n × n matrix A if and only if det(A - λI_n) = 0; examples illustrating how the theorem is used to find eigenvalues, eigenvectors, and eigenspaces.

Lecture 35 (§4.4): Properties of Eigenvectors and Eigenvalues. Theorem asserting that similar matrices have identical characteristic equations (although the converse is false in general); theorem asserting that the eigenvalues of a triangular matrix are its diagonal elements; theorem asserting that a matrix is singular if and only if it has a zero eigenvalue; corollary giving various equivalences of invertibility, including having all eigenvalues nonzero; theorem asserting that if x is an eigenvector with eigenvalue λ for the invertible matrix A, then x is an eigenvector with eigenvalue 1/λ for A^(-1); theorem asserting that if x is an eigenvector of A with eigenvalue λ, then for each scalar k, x is an eigenvector of kA with eigenvalue kλ, and for each positive integer n, x is an eigenvector of A^n with eigenvalue λ^n; definition of trace; theorem asserting that the trace is the sum of the eigenvalues, and the determinant is the product of the eigenvalues.

Lecture 36 (§4.5): Diagonalization: Part I. Definition of diagonalizable; theorem asserting that an n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors; theorem asserting that if A is similar to the diagonal matrices D_1 and D_2, then D_1 and D_2 have the same set of diagonal elements, and each such element occurs with the same multiplicity in D_1 as it does in D_2.

Lecture 37 (§4.5): Diagonalization: Part II.
Theorem asserting that any set of eigenvectors having distinct eigenvalues must be linearly independent; corollary asserting that if A is n × n and has n distinct eigenvalues, then A is diagonalizable; theorem asserting that if λ_1, ..., λ_k are the (real) eigenvalues of the n × n matrix A, then A is diagonalizable if and only if the sum of the dimensions of the corresponding eigenspaces is n; theorem asserting that if λ is an eigenvalue of the n × n matrix A, then dim(S_λ) = n - r(A - λI_n).
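The eigenvalue results summarized above can be checked numerically on a small example. The sketch below (plain Python; the 2 × 2 matrix A = [[2, 1], [1, 2]] is a made-up illustration, not taken from the course) finds the eigenvalues from the characteristic equation det(A - λI) = 0, which for a 2 × 2 matrix expands to λ^2 - trace(A)·λ + det(A) = 0, and then verifies the trace and determinant theorems from Lecture 35:

```python
import math

# Hypothetical example: eigenvalues of A = [[2, 1], [1, 2]] via the
# characteristic equation det(A - λI) = λ^2 - trace(A)·λ + det(A) = 0.
a, b, c, d = 2.0, 1.0, 1.0, 2.0

trace = a + d            # sum of the diagonal elements
det = a * d - b * c      # 2x2 determinant

# Solve the quadratic characteristic equation.
disc = trace * trace - 4 * det
lam1 = (trace + math.sqrt(disc)) / 2
lam2 = (trace - math.sqrt(disc)) / 2
print(lam1, lam2)        # prints: 3.0 1.0

# Lecture 35: trace = sum of eigenvalues, determinant = product of eigenvalues.
assert math.isclose(lam1 + lam2, trace)
assert math.isclose(lam1 * lam2, det)

# A - 3I = [[-1, 1], [1, -1]] is singular, confirming λ = 3 is an eigenvalue
# (Lecture 34: λ is an eigenvalue iff det(A - λI) = 0).
assert (a - lam1) * (d - lam1) - b * c == 0
```

Since the two eigenvalues are distinct, the corollary above guarantees this A is diagonalizable; for matrices larger than 2 × 2 the characteristic polynomial is generally solved numerically rather than by formula.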

Lecture 38: What's Next? Brief discussion of the course(s) a student could/should take after completing Linear Algebra.
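As a closing illustration of the central computation from Lectures 08-11, here is a minimal sketch of Gaussian elimination in plain Python. It is an illustrative reconstruction under stated assumptions (a square, invertible coefficient matrix), not the course's own algorithm; it also adds partial pivoting (row swaps) for numerical stability, which the lectures' hand computations do not require:

```python
def solve(A, b):
    """Solve Ax = b by forward elimination and back-substitution.

    Assumes A is square and invertible, so the system has a unique
    solution x = A^(-1) b (Lecture 10).
    """
    n = len(A)
    # Build the augmented matrix [A | b] (Lecture 08).
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting: move the row with the largest entry in this
        # column into the pivot position (an elementary row interchange).
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate below the pivot with type-three row operations
        # R_3(r, col, -k): add -k times the pivot row to row r.
        for r in range(col + 1, n):
            k = M[r][col] / M[col][col]
            M[r] = [mr - k * mc for mr, mc in zip(M[r], M[col])]
    # Back-substitution on the resulting upper triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Example: x + 2y = 5 and 3x + 4y = 11 have the unique solution x = 1, y = 2.
print(solve([[1.0, 2.0], [3.0, 4.0]], [5.0, 11.0]))  # approximately [1.0, 2.0]
```

An inconsistent or underdetermined system would surface here as a zero pivot; handling those cases requires the full row-reduced analysis of the derived set described in Lecture 09.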