# ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS


# 1. Linear Equations and Matrices

Kolman & Hill. Notes by Otto Mutzbauer.

## 1.1 Systems of Linear Equations

Numbers in our context are either real numbers or complex numbers, $\mathbb{R}$ or $\mathbb{C}$. We call them scalars, and we always stay with one kind of scalar at a time. Let the $a_{i,j}$, $b_i$ be scalars. Then

$$\begin{aligned} a_{1,1}x_1 + \cdots + a_{1,n}x_n &= b_1\\ &\;\;\vdots\\ a_{m,1}x_1 + \cdots + a_{m,n}x_n &= b_m \end{aligned}$$

is called a system of linear equations for the unknowns $x_i$. We call a tuple $(x_1,\ldots,x_n)$ of numbers a solution of the system if the $x_1,\ldots,x_n$ satisfy each equation of the system. A system consisting of a single row is called a linear equation. Two systems are called equivalent if they have the same solutions. Systems that have a solution are called consistent or solvable; those which do not have a solution are called inconsistent. A very simple inconsistent system is given by the two equations $x_1 = 1$ and $x_1 = 0$. Systems with constant column $0$ are called homogeneous, otherwise inhomogeneous. Homogeneous systems are always consistent, because they have the trivial solution $x_1 = \cdots = x_n = 0$; nontrivial solutions are understood similarly.

**Remark.** Later we show that linear equation systems (with scalars in $\mathbb{R}$ or in $\mathbb{C}$) have either no solution, or a unique solution, or infinitely many solutions.

The following manipulations of a linear equation system do not change the set of solutions:

1. Interchange of equations.
2. Multiplication of an equation by a nonzero scalar.
3. Addition of a multiple of an equation to another equation, i.e., replace $a_{i,1}x_1 + \cdots + a_{i,n}x_n = b_i$ by $(a_{i,1} + ca_{j,1})x_1 + \cdots + (a_{i,n} + ca_{j,n})x_n = b_i + cb_j$.

There is an obvious elimination method to solve linear equation systems.

*Date: June 5, 2012.*
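The elimination method can be sketched in code. The following is a minimal illustration of manipulations (1) and (3) followed by back substitution; the function name and the example system are ours, not from the notes, and the sketch assumes a square consistent system.

```python
# A minimal sketch of the elimination method (our own code): use operation (3)
# -- add a multiple of one equation to another -- to zero out coefficients
# below the diagonal, then solve from the last equation upward.
def solve_by_elimination(aug):
    """aug: list of rows [a_i1, ..., a_in, b_i] of a square consistent system."""
    n = len(aug)
    M = [row[:] for row in aug]
    for k in range(n):
        # operation (1): swap in a row with a nonzero pivot
        p = next(i for i in range(k, n) if M[i][k] != 0)
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            c = M[i][k] / M[k][k]
            # operation (3): row_i := row_i - c * row_k
            M[i] = [a - c * b for a, b in zip(M[i], M[k])]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# x1 + x2 = 3, x1 - x2 = 1 has the unique solution (2, 1)
print(solve_by_elimination([[1, 1, 3], [1, -1, 1]]))  # -> [2.0, 1.0]
```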

## 1.2 Matrices

**Definition 1.1.** An $m \times n$ matrix is a rectangular array of objects, mostly real or complex numbers, with $m$ rows and $n$ columns, written as

$$A = [a_{i,j}] = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n}\\ a_{2,1} & a_{2,2} & \cdots & a_{2,n}\\ \vdots & \vdots & & \vdots\\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{bmatrix}.$$

The object $a_{i,j}$ is called the entry at position $(i,j)$, or $(i,j)$-entry, of $A$, where $i, j$ are called row index and column index, respectively. The tuple $(a_{i,1},\ldots,a_{i,n})$ is called the $i$th row of $A$; similarly for columns. A row is a $1 \times n$ matrix, a column is an $m \times 1$ matrix. $m \times n$ is called the size of the matrix $A$. A $1 \times n$ matrix is called a row vector of length $n$; an $m \times 1$ matrix is called a column vector of length $m$. The set of all real or complex $m \times n$ matrices is denoted by $M_{m\times n}(\mathbb{R})$ and $M_{m\times n}(\mathbb{C})$, respectively. In particular, $\mathbb{R}^n = M_{n\times 1}(\mathbb{R})$, $\mathbb{R}_n = M_{1\times n}(\mathbb{R})$, $\mathbb{C}^n = M_{n\times 1}(\mathbb{C})$ and $\mathbb{C}_n = M_{1\times n}(\mathbb{C})$ are short notations for column and row vectors, respectively.

A matrix with all entries equal to $0$ is called a zero-matrix, written $0$, or more precisely $0_{m\times n}$. Similarly we understand $0$-row, $0$-column and $0$-vector. The entries $a_{i,i}$ are called diagonal entries; the tuple $(a_{1,1}, a_{2,2}, a_{3,3}, \ldots)$ is called the main diagonal of $A$. An $m\times n$ matrix is called square of size $n$ if $m = n$. For square matrices the main diagonal starts at the left upper corner and ends at the right lower corner.

**Definitions 1.2 to 1.5 (matrix operations).** Let $A = [a_{i,j}]$ and $B = [b_{i,j}]$ both be $m\times n$ matrices.

1. $A = B$ if $a_{i,j} = b_{i,j}$ for all $i = 1,\ldots,m$ and for all $j = 1,\ldots,n$ (equality).
2. $A + B = [a_{i,j} + b_{i,j}]$, similarly $A - B$ (sum, difference).
3. $cA = [ca_{i,j}]$ (scalar multiplication).
4. $A^T = [a_{j,i}]$ is an $n\times m$ matrix (transpose).

The transpose $A^T$ of $A$ is the matrix obtained by interchanging the rows and the columns of $A$, i.e., the matrix $A$ is mirrored at the main diagonal.

**Summation notation.** $\sum_{i=1}^n a_i = a_1 + a_2 + \cdots + a_n$ is called summation notation, and

1. $\sum_{i=1}^n (r_i + s_i)a_i = \sum_{i=1}^n r_i a_i + \sum_{i=1}^n s_i a_i$
2. $\sum_{i=1}^n c r_i a_i = c \sum_{i=1}^n r_i a_i$
3. $\sum_{j=1}^n \left(\sum_{i=1}^m a_{i,j}\right) = \sum_{i=1}^m \left(\sum_{j=1}^n a_{i,j}\right)$

Let $A_1, \ldots, A_k$ all be $m\times n$ matrices. Then $\sum_{i=1}^k c_i A_i$ is called a linear combination of the $A_i$ with coefficients $c_i$. Included is the case of $1\times n$ and of $m\times 1$ matrices; in this case we call this a linear combination of vectors.

## 1.3 Matrix Multiplication

**Definition 1.6.** Let $u = \begin{bmatrix} a_1\\ \vdots\\ a_n\end{bmatrix},\ v = \begin{bmatrix} b_1\\ \vdots\\ b_n\end{bmatrix} \in \mathbb{R}^n$. The dot product or inner product of $u$ and $v$ is $u\cdot v = a_1b_1 + \cdots + a_nb_n = \sum_{i=1}^n a_i b_i$.

**Definition 1.7.** Let $A = [a_{i,j}]$ be an $m\times p$ matrix and $B = [b_{k,l}]$ a $p\times n$ matrix. Then the $m\times n$ matrix $AB = C = [c_{i,l}]$ defined by $c_{i,l} = a_{i,1}b_{1,l} + \cdots + a_{i,p}b_{p,l} = \sum_{j=1}^p a_{i,j}b_{j,l}$ is called the matrix product of $A$ and $B$.
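The matrix product can be checked entry by entry. The following illustration (our own matrices, not from the notes) verifies that the $(i,l)$-entry of $AB$ is the dot product of row $i$ of $A$ with column $l$ of $B$.

```python
import numpy as np

# Illustrative check (our own example): the (i, l) entry of AB is the dot
# product of row i of A with column l of B.
A = np.array([[1, 2, 0],
              [0, 1, 3]])        # 2 x 3
B = np.array([[1, 0],
              [2, 1],
              [0, 4]])           # 3 x 2
C = A @ B                        # 2 x 2 product
print(C)                         # [[ 5  2]
                                 #  [ 2 13]]
# the (0, 1) entry equals the dot product of A's row 0 with B's column 1
assert C[0, 1] == A[0, :] @ B[:, 1]
```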

Note that a product is defined only for matrices that fit together: the number of columns of the left factor has to be equal to the number of rows of the right factor. Moreover, $c_{i,j}$, the $(i,j)$-entry of $AB$, is the inner product of the $i$th row of $A$ with the $j$th column of $B$. The matrix notation of the inner product for $u, v \in \mathbb{R}^n$ is $u\cdot v = u^T v$. In particular, if the product $AB$ is defined, the product $BA$ may not be defined.

We specialize to a product of a matrix with a vector. Let $A = [a_{i,j}]$ be an $m\times n$ matrix. Let $a_{i,\ast}$ denote the $i$th row and $a_{\ast,j}$ the $j$th column of $A$. Let $u = [c_j]$ be a column vector of length $n$. Then

$$Au = [a_{i,j}][c_j] = \begin{bmatrix} \sum_{j=1}^n a_{1,j}c_j\\ \vdots\\ \sum_{j=1}^n a_{m,j}c_j \end{bmatrix} = \begin{bmatrix} a_{1,\ast}\cdot u\\ \vdots\\ a_{m,\ast}\cdot u \end{bmatrix} = c_1 a_{\ast,1} + c_2 a_{\ast,2} + \cdots + c_n a_{\ast,n}.$$

Let $A = [a_{i,j}]$, $x = \begin{bmatrix} x_1\\ \vdots\\ x_n\end{bmatrix}$ and $b = \begin{bmatrix} b_1\\ \vdots\\ b_m\end{bmatrix}$. Then the matrix notation of a linear equation system is $Ax = b$. The matrix $A = [a_{i,j}]$ is called the coefficient matrix, the column $b = [b_i]$ is called the constant column of the system. We call the $m\times(n+1)$ matrix $[A, b] = [a_{i,j}\,|\,b_i]$, formed by the coefficient matrix together with the constant column as an additional last column, the augmented matrix of the system. The system is homogeneous if $b = 0$, i.e., if $Ax = 0$.

**Theorem 1\*.** The linear equation system $Ax = b$ is consistent if and only if $b$ is a linear combination of the columns of $A$.

*Proof.* By the equation for the product of a matrix with a vector, the constant column of a consistent linear equation system is a linear combination of the columns of $A$ with the entries of $x$ as coefficients. Conversely, numbers $x_1,\ldots,x_n$ such that $\sum_{j=1}^n x_j a_{\ast,j} = b$ form a solution of the system $Ax = b$.

## 1.4 Algebraic Properties of Matrix Operations

**Theorems 1.1+1.3.** If $A, B, C$ are matrices of the same size and if $c, d$ are scalars, then:

1. $A + B = B + A$ (commutative law of addition)
2. $A + (B + C) = (A + B) + C$ (associative law of addition)
3. $c(dA) = (cd)A$ (mixed associative law)
4. $c(A + B) = cA + cB$ (right-hand mixed distributive law)
5. $(c + d)A = cA + dA$ (left-hand mixed distributive law)

In particular, $0 + A = A$ and $0\cdot A = 0$, and $-A = [-a_{i,j}]$ is called the negative of $A$. Obviously $-A$ is the unique solution $X$ of the matrix equation $A + X = 0$, and only the zero-matrix $0$ guarantees that $A + 0 = A$.

While writing down products of matrices, we tacitly assume that those products are defined. Clearly, in general the product of matrices is not commutative, since it might not be defined. But even if it is defined, those matrices may not commute. For instance,

$$\begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix} = \begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix} \neq \begin{bmatrix} 0 & 0\\ 0 & 0\end{bmatrix} = \begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix}.$$

Moreover, this shows that nonzero matrices may multiply to the $0$-matrix.

**Theorem 1.2.** Let $A, B, C$ be matrices and let $d$ be a scalar.
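Noncommutativity and zero divisors are easy to observe numerically. The $0$/$1$ matrices below are our own illustration of the phenomenon described above.

```python
import numpy as np

# Two nonzero matrices (our own illustration) that do not commute, and whose
# product in one order is the zero matrix.
E = np.array([[0, 1],
              [0, 0]])
F = np.array([[0, 0],
              [0, 1]])
print(E @ F)   # [[0 1], [0 0]] -- equal to E
print(F @ E)   # [[0 0], [0 0]] -- the zero matrix, although E, F != 0
assert not np.array_equal(E @ F, F @ E)
```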

1. $A(BC) = (AB)C$ (associative law of multiplication)
2. $A(B + C) = AB + AC$ (left-hand distributive law)
3. $(A + B)C = AC + BC$ (right-hand distributive law)
4. $d(AB) = (dA)B = A(dB)$

**Theorem 1.4.** Let $A, B$ be matrices of appropriate size and let $c$ be a scalar. Then:

1. $(A^T)^T = A$
2. $(A + B)^T = A^T + B^T$
3. $(cA)^T = cA^T$
4. $(AB)^T = B^T A^T$

## 1.5 Special Types of Matrices

A square matrix is called diagonal if all off-diagonal entries are $0$. We write diagonal matrices as

$$\operatorname{diag}(a_{1,1},\ldots,a_{n,n}) = \begin{bmatrix} a_{1,1} & & 0\\ & \ddots & \\ 0 & & a_{n,n}\end{bmatrix}.$$

The square matrix $I = I_n = \operatorname{diag}(1,\ldots,1)$ is called the identity matrix. If $a_{i,j} = 0$ for all $i > j$, then $A$ is called upper triangular; if $a_{i,j} = 0$ for all $i < j$, then $A$ is called lower triangular. The matrix $A$ is called triangular if it is either upper or lower triangular. Diagonal matrices are both upper and lower triangular.

**Definition.** Let $A$ be a square matrix. If $A = A^T$, i.e., $a_{i,j} = a_{j,i}$ for all $i, j$, the matrix $A$ is called symmetric. If $A^T = -A$, then $A$ is called skew symmetric. If there exists a matrix $B$ such that $AB = BA = I$, then $A$ is called invertible or nonsingular, and $B$ is called an inverse of $A$; otherwise $A$ is called singular or noninvertible. In particular, symmetric, skew symmetric and invertible matrices are square.

**Theorem 1.5.** If $A$ is an invertible matrix, then its inverse is unique, written $A^{-1}$. In particular, invertible matrices have no $0$-row and no $0$-column.

**Theorems 1.6-1.8.** Let $A$ and $B$ be invertible matrices of size $n$.

1. (1.6) Then $AB$ is invertible and $(AB)^{-1} = B^{-1}A^{-1}$.
2. (1.7) Then $A^{-1}$ is invertible and $(A^{-1})^{-1} = A$.
3. (1.8) Then $A^T$ is invertible and $(A^{-1})^T = (A^T)^{-1}$.

**Theorem\*.** Let $A$ be invertible. Then $Ax = b$ has the unique solution $x = A^{-1}b$.

## 1.6 Matrix Transformations

**Definition.** Let $f: \mathbb{R}^n \to \mathbb{R}^m$ be a function. Then $f(u)$ for $u \in \mathbb{R}^n$ is called the image of $u$, and $f(\mathbb{R}^n) \subseteq \mathbb{R}^m$ is called the range of $f$. If $f(u) = Au$ is defined by an $m\times n$ matrix $A$, then $f$ is called a matrix transformation.

**Example.** The matrix transformation $f: \mathbb{R}^2 \to \mathbb{R}^2$ defined by the matrix $A = \begin{bmatrix} 1 & 0\\ 0 & -1\end{bmatrix}$ is the reflection with respect to the $x$-axis in $\mathbb{R}^2$.
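A quick numeric check of the reflection example (our own code, with an arbitrary test vector):

```python
import numpy as np

# Reflection about the x-axis as a matrix transformation f(u) = Au.
A = np.array([[1, 0],
              [0, -1]])
u = np.array([3, 2])
print(A @ u)       # -> [ 3 -2]: the second component changes sign
```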

**Example 1.6.5.** The matrix transformation $f: \mathbb{R}^3 \to \mathbb{R}^3$ defined by the matrix $A = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0\end{bmatrix}$ is the projection into the $xy$-plane.

**Example 1.6.6.** The matrix transformation $f: \mathbb{R}^3 \to \mathbb{R}^3$ defined by the matrix $A = \begin{bmatrix} r & 0 & 0\\ 0 & r & 0\\ 0 & 0 & r\end{bmatrix}$ is a dilation if $r > 1$ and a contraction if $0 < r < 1$. For $r = 1$, i.e., $A = I$, $f$ is the identity.

**Example 1.6.8.** The matrix transformation $f: \mathbb{R}^2 \to \mathbb{R}^2$ defined by the matrix $A = \begin{bmatrix} \cos\varphi & -\sin\varphi\\ \sin\varphi & \cos\varphi\end{bmatrix}$ is the rotation counterclockwise through the angle $\varphi$.

# 2. Solving Linear Systems

## 2.1 Echelon Form of a Matrix

**Definition 2.1.** A matrix is said to be in reduced row echelon form if it has the following four properties:

1. Zero rows appear at the bottom.
2. The first nonzero entry of a nonzero row is a $1$, called a leading 1.
3. The leading 1 of a nonzero row appears to the right of the leading 1's of any preceding row.
4. All other entries of a column containing a leading 1 are $0$.

A matrix having the properties (1), (2) and (3) is said to be in row echelon form. Similarly, column echelon form and reduced column echelon form are understood.

**Definition 2.2.** The following transformations on a matrix $A$ are called elementary operations:

- **I:** Interchange of two rows (columns).
- **II:** Multiplication of a row (column) by a nonzero number.
- **III:** Addition of a multiple of a row (column) to another.

Observe that if elementary row operations are applied to the augmented matrix of a linear system, then this corresponds to interchanging equations, multiplying equations by a nonzero number, and adding a multiple of an equation to another equation. So, in particular, elementary row transformations produce equivalent systems.

**Definition 2.3.** Let the matrices $A$ and $B$ have the same size. $B$ is said to be row (column) equivalent to $A$ if a finite sequence of elementary row (column) operations transforms $A$ into $B$.

**Theorem 2.1.** Every matrix is row (column) equivalent to a matrix in row (column) echelon form.

**Theorem 2.2.** Every matrix is row (column) equivalent to a unique matrix in reduced row (column) echelon form.

## 2.2 Solving Linear Systems

**Theorem 2.3.** Linear systems with row equivalent augmented matrices are equivalent. In particular, homogeneous linear systems with row equivalent coefficient matrices are equivalent.

To obtain an echelon form of a matrix by elementary transformations is called Gauss elimination; to obtain a reduced echelon form is called Gauss-Jordan elimination. The solution of a system with coefficient matrix in row echelon form is found straightforwardly, starting with the last row. This procedure is called back substitution.
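Gauss-Jordan elimination can be sketched compactly. The following is our own illustration (not the notes' code) reducing a matrix to reduced row echelon form with the three elementary row operations.

```python
# A compact Gauss-Jordan sketch (our own code): reduce a matrix to reduced
# row echelon form using the elementary row operations I-III.
def rref(rows):
    M = [[float(x) for x in r] for r in rows]
    m, n = len(M), len(M[0])
    lead = 0
    for r in range(m):
        if lead >= n:
            break
        i = r
        while M[i][lead] == 0:            # search for a pivot in this column
            i += 1
            if i == m:
                i, lead = r, lead + 1
                if lead == n:
                    return M
        M[i], M[r] = M[r], M[i]           # op I: interchange rows
        piv = M[r][lead]
        M[r] = [x / piv for x in M[r]]    # op II: scale to a leading 1
        for k in range(m):
            if k != r and M[k][lead] != 0:
                c = M[k][lead]            # op III: clear the rest of the column
                M[k] = [a - c * b for a, b in zip(M[k], M[r])]
        lead += 1
    return M

print(rref([[1, 2, 3], [2, 4, 7]]))   # -> [[1.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
```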

**Theorem\*.** A linear system with augmented matrix $[A\,|\,b]$ in echelon form is consistent if and only if the entries of $b$ are $0$ in all $0$-rows of $A$. In case of consistency the solution is found by back substitution.

**Theorem 2.4.** A homogeneous system of $m$ linear equations in $n$ unknowns with $m < n$ has infinitely many nontrivial solutions.

**Theorem\*.** Let $Ax = b$ be a linear system with a particular solution $x_p$. Let $M = \{u \mid Au = 0\}$ be the general solution of the homogeneous system $Ax = 0$. Then $x_p + M = \{x_p + u \mid Au = 0\}$ is the general solution of $Ax = b$.

## 2.3 Elementary Matrices; Finding $A^{-1}$

**Definition.** An $n\times n$ elementary matrix of type I, II or III is a matrix obtained by performing a single elementary row or column operation of type I, II, or III on the identity matrix $I_n$. An elementary matrix is said to correspond to the respective elementary operation.

**Theorem 2.5.** Let $A$ be an $m\times n$ matrix, and let an elementary row (column) operation be performed on $A$ to yield $B$. Let $E$ be the corresponding elementary matrix of size $m$ ($n$). Then $B = EA$ ($B = AE$).

**Theorem 2.6.** Let $A, B$ be matrices of the same size. Then $A$ is row (column) equivalent to $B$ if and only if there exist elementary matrices $E_1,\ldots,E_k$ such that $B = E_k\cdots E_1 A$ ($B = AE_1\cdots E_k$).

**Theorem 2.7.** An elementary matrix is invertible, and its inverse is an elementary matrix of the same type. In particular, let $A, B$ be matrices of the same size; if $A$ is row (column) equivalent to $B$, then $B$ is row (column) equivalent to $A$.

**Theorem 2.8+2.10+Corollary.** $A$ is invertible if and only if it is a product of elementary matrices. In particular, a matrix is row equivalent to the identity matrix if and only if it is invertible.

*Proof.* If $A$ is a product of elementary matrices, then $A$ is invertible by Theorem 1.6. Conversely, assume $A$ to be invertible. $A$ is row equivalent to a matrix in reduced row echelon form $B = \begin{bmatrix} C\\ 0\end{bmatrix}$, where $C$ is the nonzero part of $B$ and $0$ describes the $0$-rows. So the linear systems $Ax = 0$, $Bx = 0$ and $Cx = 0$ have the same set of solutions, namely $x = 0$, because $A$ is invertible. Thus $B = C$, and there are no $0$-rows in $B$, by Theorem 2.4. In particular, an invertible matrix $A$ is by the first part a product of elementary matrices, hence $A$ is row equivalent to the identity matrix by Theorem 2.7; conversely, if $A$ is row equivalent to the identity matrix, then it is a product of elementary matrices by Theorem 2.6, thus $A$ is invertible by Theorem 2.7.

**Theorem 2.9.** Let $A$ be a square matrix. The homogeneous system $Ax = 0$ has infinitely many solutions if and only if $A$ is not invertible.

**Theorem 5\*.** Gauss-Jordan elimination applied to $[A\,|\,I]$ decides whether $A$ is invertible or not, and in case $A$ is invertible it produces $A^{-1}$: $[A\,|\,I] \to [I\,|\,A^{-1}]$.

**Theorem 6\*.** The matrix $A = \begin{bmatrix} a & b\\ c & d\end{bmatrix}$ is invertible if and only if $ad - bc \neq 0$, and then

$$A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b\\ -c & a\end{bmatrix}.$$

**Theorem 2.11.** Let $A, B$ be square matrices of the same size. Then $AB = I$ implies $BA = I$, i.e., $B = A^{-1}$.
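The $2\times 2$ inverse formula is easy to verify numerically; the concrete entries below are our own example.

```python
import numpy as np

# Checking the 2 x 2 inverse formula above (our own numbers).
a, b, c, d = 2.0, 1.0, 5.0, 3.0        # ad - bc = 1 != 0, so A is invertible
A = np.array([[a, b], [c, d]])
A_inv = (1.0 / (a * d - b * c)) * np.array([[d, -b], [-c, a]])
print(A @ A_inv)                        # -> the 2 x 2 identity matrix
assert np.allclose(A @ A_inv, np.eye(2))
```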

## 2.4 Equivalent Matrices

**Definition.** Let $A, B$ be matrices of the same size. $A$ is said to be equivalent to $B$, $A \sim B$, if $B$ is obtained from $A$ by a finite sequence of elementary row or column operations.

Note that this equivalence of matrices is an equivalence relation: (1) $A \sim A$ (reflexivity); (2) $A \sim B$ implies $B \sim A$ (symmetry); (3) $A \sim B$ and $B \sim C$ imply $A \sim C$ (transitivity).

**Theorem 7\*.** Equivalence of matrices is an equivalence relation.

**Theorem 2.12.** A nonzero matrix is equivalent to a matrix of the block form $\begin{bmatrix} I & 0\\ 0 & 0\end{bmatrix}$.

**Theorem 2.13.** Two matrices $A, B$ of the same size are equivalent if and only if there are invertible matrices $P, Q$ such that $B = PAQ$.

**Theorem 2.14.** A matrix is invertible if and only if it is equivalent to the identity matrix.

## 2.5 LU-Factorization

**Definition\*.** A product representation $A = LU$ of an invertible matrix $A$ is said to be an LU-factorization of $A$ if $L$ is a lower triangular matrix and $U$ is an upper triangular matrix with diagonal entries all equal to $1$.

**Theorem 8\*.** An invertible matrix has an LU-factorization.

# 3. Determinants

## 3.1 Definition

**Definition 3.1.** Let $S = \{1, 2, \ldots, n\}$ be the set of integers from $1$ to $n$, arranged in ascending order, i.e., $(1, 2, \ldots, n)$. A rearrangement $(j_1, j_2, \ldots, j_n)$ of $(1, 2, \ldots, n)$ is called a permutation of $S$. A permutation of $S$ can be considered as a one-to-one mapping of $S$ onto itself, i.e., $\pi: S \to S$ defined by $\pi(i) = j_i$. The set $S_n$ of all permutations is called the symmetric group of order $n$. Permutations can be applied one after the other; this is called multiplication of permutations. $|S_n| = n!$. The identity $\pi(i) = i$ is the permutation that does not change anything.

**Definition 3.2.** A permutation $(j_1, j_2, \ldots, j_n)$ of $(1, 2, \ldots, n)$ is said to have an inversion if a larger integer $j_r$ precedes a smaller one $j_s$. A permutation is called even if the total number of inversions is even, otherwise odd. If $n \geq 2$, there are $n!/2$ even and $n!/2$ odd permutations. The function $\operatorname{sign}: S_n \to \{+1, -1\}$ defined by $\operatorname{sign}(\pi) = +1$ if $\pi$ is even and $\operatorname{sign}(\pi) = -1$ if $\pi$ is odd is called the sign function.

Let $A = [a_{i,j}]$ be a square matrix of size $n$. The determinant function, denoted by $\det$ (also $|A| = \det(A)$), is defined by

$$\det(A) = \sum_{\pi \in S_n} \operatorname{sign}(\pi)\, a_{1,\pi(1)}\, a_{2,\pi(2)} \cdots a_{n,\pi(n)}.$$

In particular, $\det(I_n) = 1$ for all $n$.

**Examples 3.16+3.17.** $\det([a]) = a$ and $\det\begin{bmatrix} a & b\\ c & d\end{bmatrix} = ad - bc$.

**Example 3.18.** The rule of Sarrus, (only) for $3\times 3$ matrices:

$$\det\begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3}\\ a_{2,1} & a_{2,2} & a_{2,3}\\ a_{3,1} & a_{3,2} & a_{3,3}\end{bmatrix} = a_{1,1}a_{2,2}a_{3,3} + a_{1,2}a_{2,3}a_{3,1} + a_{1,3}a_{2,1}a_{3,2} - a_{3,1}a_{2,2}a_{1,3} - a_{3,2}a_{2,3}a_{1,1} - a_{3,3}a_{2,1}a_{1,2}.$$
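The permutation definition of the determinant translates directly into code. The sketch below (our own, purely for illustration; it runs over all $n!$ permutations, so it is only practical for small $n$) counts inversions to compute the sign.

```python
from itertools import permutations

# Direct transcription of the definition: sum sign(pi) times the product
# a_{1,pi(1)} ... a_{n,pi(n)} over all n! permutations.
def det(A):
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # count inversions to decide whether the permutation is even or odd
        inv = sum(1 for r in range(n) for s in range(r + 1, n)
                  if perm[r] > perm[s])
        sign = 1 if inv % 2 == 0 else -1
        prod = 1
        for i in range(n):
            prod *= A[i][perm[i]]
        total += sign * prod
    return total

print(det([[1, 2], [3, 4]]))                   # ad - bc = 1*4 - 2*3 = -2
print(det([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))  # agrees with the rule of Sarrus: 25
```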

## 3.2 Properties of Determinants

**Theorem 3.1.** If $A$ is a square matrix, then $\det(A) = \det(A^T)$.

**Theorem 3.2.** Let $A, B$ be square matrices of the same size. If $B$ results from $A$ by interchanging two different rows (columns), then $\det(B) = -\det(A)$, i.e., performing on $A$ an elementary operation of type I changes the sign of the determinant.

**Theorems 3.3+3.4.** Let $A$ be a square matrix. If $A$ has a $0$-row ($0$-column), then $\det(A) = 0$. If $A$ has two equal rows (columns), then $\det(A) = 0$.

**Theorem 3.5.** Let $A, B$ be square matrices of the same size. If $B$ results from $A$ by multiplying a row by a real number $k$, then $\det(B) = k\det(A)$, i.e., performing on $A$ an elementary operation of type II with scalar $k$ multiplies the determinant by this $k$.

**Theorem 3.6.** Let $A, B$ be square matrices of the same size. If $B$ results from $A$ by an elementary row (column) operation of type III, then $\det(B) = \det(A)$, i.e., no change.

**Definition.** Let $A = [a_{i,j}]$ be a matrix. If $a_{i,j} = 0$ for all $i > j$, then $A$ is called upper triangular; if $a_{i,j} = 0$ for all $i < j$, then $A$ is called lower triangular. The matrix $A$ is called triangular if it is either upper or lower triangular.

**Theorem 3.7.** If a square matrix $A = [a_{i,j}]$ of size $n$ is triangular, then $\det(A) = a_{1,1}a_{2,2}\cdots a_{n,n}$, i.e., the determinant of a square triangular matrix is the product of the diagonal entries.

**Definition.** Let $E_{i,j}$ be the matrix corresponding to the elementary operation of type I, i.e., interchanging rows (columns) $i$ and $j$. Let $E_i(k)$ be the matrix corresponding to the elementary operation of type II, i.e., multiplying the $i$th row (column) by $k$. Let $E(i, j, k)$ be the matrix corresponding to the elementary operation of type III, i.e., replacing row $i$ by itself plus $k$ times row $j$.

**Lemma 3.1.** $\det(E_{i,j}) = -1$, $\det(E_i(k)) = k$ and $\det\!\left(E(i,j,k)\right) = 1$. In particular, for a square matrix $A$: $\det(E_{i,j}A) = -\det(A)$, $\det(E_i(k)A) = k\det(A)$, $\det(E(i,j,k)A) = \det(A)$.

**Theorem 3.8.** A square matrix is invertible if and only if its determinant is not $0$.

**Theorem 3.9.** Let $A, B$ be square matrices of the same size. Then $\det(AB) = \det(A)\det(B)$. In particular, if $A$ is invertible, then $\det(A^{-1}) = 1/\det(A)$.

## 3.3 Cofactor Expansion

**Definitions 3.3+3.4.** Let $A = [a_{i,j}]$ be a square matrix of size $n$. The $(i,j)$-minor of $A$ is the determinant $\det(M_{i,j})$ of the $(n-1)\times(n-1)$ matrix $M_{i,j}$ obtained from $A$ by deleting the $i$th row and the $j$th column. The cofactor $A_{i,j}$ of $a_{i,j}$ is defined as $A_{i,j} = (-1)^{i+j}\det(M_{i,j})$. The sign pattern corresponds to the matrix $[(-1)^{i+j}]$, i.e., for example, if $n = 3$:

$$\begin{bmatrix} + & - & +\\ - & + & -\\ + & - & +\end{bmatrix}.$$

**Theorem 3.10.** Let $A = [a_{i,j}]$ be a square matrix of size $n$. Then for all $i$ and for all $j$

$$\det(A) = a_{i,1}A_{i,1} + a_{i,2}A_{i,2} + \cdots + a_{i,n}A_{i,n} \quad\text{and}\quad \det(A) = a_{1,j}A_{1,j} + a_{2,j}A_{2,j} + \cdots + a_{n,j}A_{n,j}.$$

The equations in Theorem 3.10 are called the expansion of $\det$ along a row or along a column.

## 3.4 Inverse of a Matrix

**Theorem 3.11.** Let $A = [a_{i,j}]$ be a square matrix of size $n$. Then

$$a_{i,1}A_{k,1} + a_{i,2}A_{k,2} + \cdots + a_{i,n}A_{k,n} = 0 \ \text{ for } i \neq k; \qquad a_{1,j}A_{1,k} + a_{2,j}A_{2,k} + \cdots + a_{n,j}A_{n,k} = 0 \ \text{ for } j \neq k.$$

**Definition 3.5.** Let $A = [a_{i,j}]$ be a square matrix of size $n$. The matrix $\operatorname{adj} A = [A_{i,j}]^T$ is called the adjoint of $A$.

**Theorem 3.12+Corollary.** Let $A = [a_{i,j}]$ be a square matrix of size $n$. Then $A(\operatorname{adj} A) = (\operatorname{adj} A)A = \det(A)\,I_n$. If in addition $A$ is invertible (i.e., $\det(A) \neq 0$), then

$$A^{-1} = \frac{1}{\det(A)}\operatorname{adj} A.$$

## 3.5 Other Applications of Determinants

**Theorem 3.14 (Cramer's Rule).** Let $A = [a_{i,j}]$ be an invertible matrix of size $n$ and let $b = \begin{bmatrix} b_1\\ \vdots\\ b_n\end{bmatrix}$ be a column vector of length $n$. Then the unique solution of the linear system $Ax = b$ is

$$x_1 = \frac{\det(A_1)}{\det(A)},\quad x_2 = \frac{\det(A_2)}{\det(A)},\quad \ldots,\quad x_n = \frac{\det(A_n)}{\det(A)},$$

where $A_i$ is the matrix obtained from $A$ by replacing the $i$th column of $A$ by $b$.

# 4. Real Vector Spaces

## 4.1 Vectors in the Plane and in 3-Space

Originally a vector was geometrically considered to be an arrow indicating a direction, with a magnitude given by its length.

**Definitions 4.1-4.3.** A vector in the plane, or a 2-vector, is a column vector of length $2$ with real entries; its entries are called components. Vectors are said to be equal if they are equal as matrices. The sum and the scalar multiplication of vectors are defined as for matrices. Similarly, a vector in space, or a 3-vector, is defined as a column vector of length $3$ with real entries.
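Cramer's rule from the chapter on determinants can be checked numerically; the $2\times 2$ system below is our own example.

```python
import numpy as np

# Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with column i
# replaced by b (our own small example).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = np.empty(2)
for i in range(2):
    A_i = A.copy()
    A_i[:, i] = b
    x[i] = np.linalg.det(A_i) / np.linalg.det(A)
print(x)                       # -> [1. 3.]
assert np.allclose(A @ x, b)   # the computed x really solves Ax = b
```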

## 4.2 Vector Spaces

**Definition 4.4.** A nonempty set $V$ is called a real vector space if there are operations of addition and scalar multiplication with real numbers on $V$ (i.e., $V$ is closed under these operations) such that the following eight properties are satisfied:

1. $u + v = v + u$ for all $u, v \in V$ (commutative law of addition)
2. $u + (v + w) = (u + v) + w$ for all $u, v, w \in V$ (associative law of addition)
3. There is an element $0$ in $V$ such that $v + 0 = 0 + v = v$ for all $v \in V$ (existence of the $0$-vector)
4. For each $v \in V$ there is an element $-v \in V$ such that $v + (-v) = 0$ (negative vector)
5. $c(u + v) = cu + cv$ for all scalars $c$ and for all $u, v \in V$ (mixed distributive law)
6. $(c + d)v = cv + dv$ for all scalars $c, d$ and for all $v \in V$ (mixed distributive law)
7. $c(dv) = (cd)v$ for all scalars $c, d$ and for all $v \in V$ (mixed associative law)
8. $1\cdot v = v$ for all $v \in V$ (norming law)

The elements of a real vector space are called vectors, the real numbers are called scalars, and the multiplication of a vector with a real number is called scalar multiplication. The element $0 \in V$ is called the zero vector. The operations $+$ and $\cdot$ are called linear operations; the multiplication dot is in general omitted. The symbol $0$ denotes either the zero vector or the real number $0$; the context clarifies the situation. The empty set is not a vector space, but $V = 0 = \{0\}$ is the zero space. $-u$ is called the negative of $u$. The $0$-vector and the negative of a vector are unique. A complex vector space is defined similarly.

**Theorem 4.1+Example.** $\mathbb{R}^n$ and $M_{m\times n}(\mathbb{R})$ are real vector spaces for all $m, n \geq 1$.

**Example 6.** The expression $p(t) = a_n t^n + a_{n-1}t^{n-1} + \cdots + a_1 t + a_0$, $a_i \in \mathbb{R}$, is called a polynomial. If $a_n \neq 0$, then $p(t)$ is said to have degree $n$; constant nonzero polynomials have degree $0$. The polynomial with all coefficients $a_i$ equal to $0$ is called the $0$-polynomial, and it has no degree. $P_n$ denotes the set of polynomials of degree at most $n$. The set $P_n$ is a real vector space if the addition and scalar multiplication of polynomials are as usual. Also the set $P$ of all polynomials is a real vector space.

**Example 7.** The set of all real-valued continuous functions defined on $\mathbb{R}$ is a real vector space if the linear operations are defined by $(f + g)(t) = f(t) + g(t)$ and by $(cf)(t) = cf(t)$.

**Theorem 4.2.** If $V$ is a real vector space, then:

1. $0u = 0$ for all $u \in V$
2. $c0 = 0$ for all $c \in \mathbb{R}$
3. If $cu = 0$, then either $c = 0$ or $u = 0$
4. $(-1)u = -u$ for all $u \in V$

## 4.3 Subspaces

**Definition 4.5.** Let $V$ be a real vector space with a nonempty subset $W$. If $W$ is a vector space with respect to the operations in $V$, then $W$ is called a subspace.

**Theorem (Subspace Criterion).** A nonempty subset $W$ of a real vector space $V$ is a subspace if and only if $W$ is closed with respect to the linear operations defined in $V$; in detail: (a) if $u, v \in W$ then $u + v \in W$; (b) if $c \in \mathbb{R}$ and $u \in W$, then $cu \in W$.

**Definition 4.6.** Let $V$ be a real vector space and let $S$ be a subset. The vector $v \in V$ is called a linear combination of $S$ if there are (finitely many) $v_1,\ldots,v_k \in S$ such that for some real numbers $a_1,\ldots,a_k$

$$v = a_1 v_1 + \cdots + a_k v_k = \sum_{j=1}^k a_j v_j.$$

Note that linear combinations are always finite sums.

**Theorem 8\* (Example 10).** The set of all solutions of a homogeneous linear system is a real vector space. The set of all solutions of a non-homogeneous linear system is not a vector space. The real vector space formed by the solutions of the linear system $Ax = 0$ is called the nullspace of $A$ or the kernel of $A$, $\ker A$.

## 4.4 Span

**Definition 4.7.** Let $V$ be a vector space and let $S$ be a subset. The set of all linear combinations of $S$ is called the span of $S$, written $\operatorname{Span} S$.

**Theorem.** The span of a subset of a vector space is a subspace.

**Definition 4.8.** Let $V$ be a vector space and let $S$ be a subset. If $V = \operatorname{Span} S$, then $S$ is called a spanning set of $V$, or $S$ is said to span $V$.

**Theorem 9\*.** Let $v_1,\ldots,v_n, w \in \mathbb{R}^m$ be (column) vectors (of length $m$). Let $A = [v_1\ \cdots\ v_n]$ be the $m\times n$ matrix formed by the columns $v_j$. Then $w \in \operatorname{Span}\{v_1,\ldots,v_n\}$ if and only if the linear system $Ax = w$ is consistent.

## 4.5 Linear Independence

**Definition 4.9.** Let $V$ be a real vector space. A subset $S$ is called linearly dependent if there is a finite subset $v_1,\ldots,v_k \in S$ and there are scalars $a_1,\ldots,a_k \in \mathbb{R}$, not all $0$, such that

$$\sum_{j=1}^k a_j v_j = a_1 v_1 + \cdots + a_k v_k = 0.$$

Otherwise $S$ is called linearly independent. In other words, the set $\{v_1,\ldots,v_k\}$ is linearly independent if, whenever $a_1 v_1 + \cdots + a_k v_k = 0$, then $a_1 = a_2 = \cdots = a_k = 0$.

**Theorem 4.5.** Let $S = \{v_1,\ldots,v_n\} \subseteq \mathbb{R}^n$ be a set of column vectors (of length $n$). Let $A = [v_1\ \cdots\ v_n]$ be the square matrix of size $n$ formed by the columns $v_j$. Then $S$ is linearly independent if and only if $\det(A) \neq 0$.

**Theorem 4.6.** Let $S_1, S_2$ be subsets of a vector space $V$ with $S_1 \subseteq S_2$. Then the following is true: (a) if $S_1$ is linearly dependent, so is $S_2$; (b) if $S_2$ is linearly independent, so is $S_1$.

A subset $S$ of $V$ containing $0$ is dependent. If the sequence $v_1,\ldots,v_k$ of vectors of $V$ contains duplications, then those vectors are linearly dependent.

**Theorem 4.7.** The nonzero vectors $v_1,\ldots,v_n$ in a vector space $V$ are linearly dependent if and only if one of the vectors $v_j$, $j \geq 2$, is a linear combination of the preceding vectors $v_1, v_2, \ldots, v_{j-1}$.
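The determinant criterion for linear independence is easy to try out; the vectors below are our own illustration.

```python
import numpy as np

# The determinant criterion: n column vectors in R^n are linearly
# independent exactly when the matrix they form has nonzero determinant.
v1, v2, v3 = [1, 0, 1], [0, 1, 1], [1, 1, 2]
A = np.column_stack([v1, v2, v3])
print(np.linalg.det(A))    # ~0: v3 = v1 + v2, so the set is dependent
B = np.column_stack([v1, v2, [0, 0, 1]])
print(np.linalg.det(B))    # nonzero: this set is independent
```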

## 4.6 Basis and Dimension

**Definition 4.10.** The vectors $v_1,\ldots,v_n \in V$ are said to form a basis for $V$ if both of the following two conditions are satisfied: (a) $v_1,\ldots,v_n$ span $V$; (b) $v_1,\ldots,v_n$ are linearly independent.

A real vector space $V \neq 0$ has infinitely many bases. The empty set is the basis of the vector space $0$. $\dim(\mathbb{R}^n) = n$, $\dim\!\left(M_{m\times n}(\mathbb{R})\right) = mn$, and $\dim(P_n) = n + 1$; moreover, $\dim(0) = 0$. The zero vector space and all vector spaces with a finite basis are called finite dimensional vector spaces. Vector spaces that are not finite dimensional are called infinite dimensional vector spaces, denoted by $\dim(V) = \infty$.

**Example.** Let $e_i \in \mathbb{R}^n$ be the column vector with all entries $0$ except the entry in the $i$th row, which is $1$. Then the set $\{e_1, e_2, \ldots, e_n\}$ is a basis of $\mathbb{R}^n$, the so-called standard basis.

**Example.** The set $\{1, t, t^2, \ldots, t^n\}$ is a basis of the vector space $P_n$ of polynomials of degree at most $n$. The infinite set $\{1, t, t^2, \ldots\}$ is a basis of the vector space $P$ of all polynomials, so $\dim P = \infty$.

**Theorem 4.8 (supplemented).** Let $V$ be a vector space. A subset $S$ of $V$ is a basis of $V$ if and only if every vector in $V$ is a unique linear combination of $S$.

**Theorem 4.9.** Let $S$ be a set of nonzero vectors in the vector space $V$ and let $W = \operatorname{Span} S$. Then some subset of $S$ is a basis of $W$.

**Theorem 10\*.** Let $v_1,\ldots,v_n, w \in \mathbb{R}^m$ be (column) vectors (of length $m$). Let $A = [v_1\ \cdots\ v_n]$ be the $m\times n$ matrix formed by the columns $v_j$. Let $B$ be a row echelon form of $A$, and let $j_1 < j_2 < \cdots < j_l$ be the indices of the columns of $B$ containing a leading 1. Then the set $\{v_{j_1}, v_{j_2}, \ldots, v_{j_l}\}$ is a basis of $\operatorname{Span}\{v_1,\ldots,v_n\}$.

**Theorem 4.10+Corollary 4.1.** The cardinality of a linearly independent set in a vector space is always less than or equal to the cardinality of a basis. In particular, the cardinality of a basis is an invariant of the vector space.

**Definition 4.11.** The common cardinality of the bases of a vector space is called its dimension, $\dim V = n$.

**Definition 4.12.** A linearly independent set is said to be maximal if there is no bigger linearly independent set.

**Theorems 4.11-4.13+Corollaries 4.2-4.5.** Let $V$ be a vector space of dimension $n$.

1. A maximal linearly independent set of vectors in $V$ contains $n$ vectors.
2. A minimal spanning set of $V$ contains $n$ vectors.
3. Any subset of $V$ with cardinality bigger than $n$ is linearly dependent.
4. Subsets of $V$ with cardinality less than $n$ do not span $V$.
5. (4.11) Linearly independent subsets can be supplemented to a basis of $V$.
6. (4.12) (a) A linearly independent subset of $V$ of cardinality $n$ is a basis. (b) A subset of $V$ of cardinality $n$ that spans $V$ is a basis.
7. (4.13) If $V = \operatorname{Span} S$, then a maximal linearly independent subset of $S$ is a basis of $V$.
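The pivot-column statement above can be illustrated with a small pure-Python row reduction (our own sketch, not the notes' code): the columns that receive a leading 1 index a basis of the span of the original columns.

```python
# Row-reduce and record which columns get a leading 1 (our own sketch).
def pivot_columns(rows):
    M = [[float(x) for x in r] for r in rows]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(n):
        # find a row at or below r with a nonzero entry in column c
        i = next((i for i in range(r, m) if abs(M[i][c]) > 1e-12), None)
        if i is None:
            continue
        M[r], M[i] = M[i], M[r]
        M[r] = [x / M[r][c] for x in M[r]]        # scale to a leading 1
        for k in range(m):
            if k != r:
                f = M[k][c]                       # clear the rest of the column
                M[k] = [a - f * b for a, b in zip(M[k], M[r])]
        pivots.append(c)
        r += 1
    return pivots

# column 1 = 2*(column 0) and column 3 = column 0 + column 2
A = [[1, 2, 0, 1],
     [0, 0, 1, 1],
     [1, 2, 1, 2]]
print(pivot_columns(A))   # -> [0, 2]: columns 0 and 2 form a basis of the span
```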

## 4.7 Homogeneous Systems

**Theorem 11\*.** Let $A$ be an $m\times n$ matrix and let $r$ be the number of leading 1's in the row echelon form of $A$. Then $\dim \ker A = n - r$. A basis of the kernel of $A$ is found from a parameter form of the solutions, using the reduced row echelon form of $A$. The general solution of the inhomogeneous system $Ax = b$ is given by a particular solution $x_p$ of the inhomogeneous system and by a basis $u_1,\ldots,u_{n-r}$ of $\ker A$, in the form

$$\left\{\, x_p + \sum_{i=1}^{n-r} s_i u_i \;\middle|\; s_1,\ldots,s_{n-r} \in \mathbb{R} \,\right\}.$$

## 4.8 Coordinates and Isomorphisms

Let $S = (v_1, v_2, \ldots, v_n)$ be an ordered basis of $V$, indicated by round brackets; cf. $S = \{v_1, v_2, \ldots, v_n\}$, the notation as a set without ordering. In particular, $\dim V = n$. Let $v \in V$ be any vector. Then the linear combination $v = a_1 v_1 + \cdots + a_n v_n$ is unique, and

$$[v]_S = \begin{bmatrix} a_1\\ \vdots\\ a_n\end{bmatrix}$$

is called the coordinate vector of $v$ with respect to the ordered basis $S$. The entries $a_i$ are called the coordinates of $v$ with respect to $S$.

**Definition (= 6.1+).** Let $V, W$ be vector spaces, let $u, v \in V$ and $c \in \mathbb{R}$. A function $L: V \to W$ is called a linear transformation if

$$L(u + v) = L(u) + L(v) \quad\text{and}\quad L(cv) = cL(v).$$

A linear transformation is called onto or one-to-one in the usual sense. A linear transformation that is one-to-one and onto is called an isomorphism, and then $V$ and $W$ are called isomorphic vector spaces, $V \cong W$.

**Theorems 4.14+4.16+Corollary 4.6.** An $n$-dimensional real vector space is isomorphic to $\mathbb{R}^n$. In particular, $\mathbb{R}_n \cong \mathbb{R}^n$.

*Proof.* Let $v, w \in V$. Then $[v + w]_S = [v]_S + [w]_S$ and $c[v]_S = [cv]_S$ for a fixed ordered basis $S$ of $V$. The function $L: V \to \mathbb{R}^n$ defined by $L(v) = [v]_S$ is an isomorphism if $\dim V = n$.

**Theorem 4.15.** Isomorphism is an equivalence relation.

**Theorem 12\*.** Let $S, T$ be ordered bases of the vector space $V$. Then there are invertible matrices $P_{S\leftarrow T}$, $P_{T\leftarrow S}$ such that for all $v \in V$

$$[v]_S = P_{S\leftarrow T}\,[v]_T, \qquad [v]_T = P_{T\leftarrow S}\,[v]_S, \qquad P_{T\leftarrow S} = (P_{S\leftarrow T})^{-1}.$$

Moreover, if $S = (v_1,\ldots,v_n)$ and $T = (w_1,\ldots,w_n)$ are bases of $\mathbb{R}^n$, then $P_{S\leftarrow T} = (M_S)^{-1}M_T$, where $M_S = [v_1\ \cdots\ v_n]$ and $M_T = [w_1\ \cdots\ w_n]$.

**Definition.** The matrix $P_{S\leftarrow T}$ above is called the transition matrix from the $T$-basis to the $S$-basis.
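The formula $P_{S\leftarrow T} = (M_S)^{-1}M_T$ can be checked with two concrete bases of $\mathbb{R}^2$ (our own numbers):

```python
import numpy as np

# Transition matrix between two ordered bases of R^2 (our own example).
M_S = np.array([[1.0, 1.0],
                [0.0, 1.0]])      # ordered basis S = (v1, v2), as columns
M_T = np.array([[1.0, 0.0],
                [1.0, 1.0]])      # ordered basis T = (w1, w2), as columns
P = np.linalg.inv(M_S) @ M_T      # P_{S<-T} = (M_S)^{-1} M_T
v_T = np.array([2.0, 1.0])        # coordinates of some v with respect to T
v_S = P @ v_T                     # coordinates of the same v with respect to S
# both coordinate vectors must reconstruct the same vector v
assert np.allclose(M_S @ v_S, M_T @ v_T)
print(v_S)                        # -> [-1.  3.]
```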

14 1 NOTES BY OTTO MUTZBAUER 9 Rank of a Matrix Denition 1+15 Let A be an m n matrix The rows of A considered as vectors in R n span a subspace of R n called the row space of A Similarly the columns of A span a subspace of R m called the column space of A The dimensions are called row and column dimensions of A, respectively Theorems The row (column) spaces of row (column) equivalent matrices are equal The row and the column rank of a matrix are equal Denition The row (or column) rank of a matrix A is called rank of A, rank A Theorems 19 Let A be an m n matrix Then rank A + dim ker A = n Theorems 0+Corollaries Let A be a square matrix of size n (0) rank A = n if and only if A is row (column) equivalent to I n (7) A is invertible if and only if rank A = n (8) rank A = n if and only if det(a) 0 (9) The homogeneous system Ax = 0 has a nontrivial solution if and only if rank A < n (10) The homogeneous system Ax = 0 has a unique solution (ie, 0) if and only if rank A = n In each case the rows of A form a basis of R n and the columns of A form a basis of R n Theorem 1 The linear system Ax = b has a solution if and only if rank A = rank[a b] 5 Inner Product Spaces 5 Inner Product Spaces Let M, N be sets, then M N is the set {(m, n) m M, n N} of all ordered pairs of M and N Denition 51 Let V be a real vector space A function f : V V R dened as f(u, v) = (u, v) is called an inner product on V, if it has the following properties: (1) (u, u) 0 and (u, u) = 0 if and only if u = 0 () (u, v) = (v, u) for all u, v V () (u + v, w) = (u, w) + (v, w) for all u, v, w V () (cu, v) = c(u, v) for all u, v V and all c R u = (u, u) is called the length of u The standard inner product in R n is (u, v) = u T v In particular, (u, u) = n i=1 u i Let V be a vector space of dimension n with inner product (u, v) Let S = (u 1,, u n ) be an ordered basis of V, then the matrix C = (u i, u j ) is called the matrix of the inner product with respect to the ordered basis S Denition 5 A vector space 
with inner product is called an inner product space. If the space is finite dimensional, it is called a Euclidean space.

Definition A real symmetric matrix C of size n is called positive definite if uᵀCu > 0 for all 0 ≠ u ∈ Rⁿ. Note that a positive definite matrix is invertible.

Theorem Let V be a Euclidean space. Then the matrix C of the inner product with respect to an ordered basis S is symmetric and positive definite. It determines the inner product completely, namely (u, v) = ([u]_S)ᵀ C [v]_S.
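A minimal sketch in NumPy (the matrix C below is a made-up example): any symmetric positive definite matrix C defines an inner product (u, v) = uᵀCv, and the defining properties of Definition 5.1 can be checked directly.

```python
import numpy as np

# A made-up symmetric positive definite matrix: the matrix of an
# inner product on R^2 with respect to the standard basis.
C = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def inner(u, v):
    """Inner product (u, v) = u^T C v."""
    return u @ C @ v

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

# The defining properties from Definition 5.1:
assert inner(u, u) > 0                                          # positivity
assert np.isclose(inner(u, v), inner(v, u))                     # symmetry
assert np.isclose(inner(u + v, v), inner(u, v) + inner(v, v))   # additivity

# Positive definiteness is equivalent to all eigenvalues of C being > 0,
# which also shows C is invertible (no zero eigenvalue).
assert np.all(np.linalg.eigvalsh(C) > 0)

length_u = np.sqrt(inner(u, u))   # length of u in this inner product
```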

Theorem (Cauchy–Schwarz Inequality) Let V be an inner product space. Then for all u, v ∈ V, |(u, v)| ≤ ‖u‖ ‖v‖; in particular, −1 ≤ (u, v)/(‖u‖ ‖v‖) ≤ 1.

Corollary 5.1 (Triangle Inequality) Let V be an inner product space. Then ‖u + v‖ ≤ ‖u‖ + ‖v‖ for all u, v ∈ V.

Definitions Let V be an inner product space and let u, v ∈ V.
(1) The angle θ between u and v is given by cos θ = (u, v)/(‖u‖ ‖v‖).
(2) d(u, v) = ‖u − v‖ is called the distance between u and v.
(3) u and v are called orthogonal if (u, v) = 0.

Definition 5.5 Let V be an inner product space. A set S of vectors in V is called orthogonal if any two distinct vectors in S are orthogonal. If in addition each vector has length 1, then S is called orthonormal.

Theorem An orthogonal set of nonzero vectors in an inner product space is linearly independent.

5.4 Gram–Schmidt Process

Theorem 5.5 Let S = (u₁, …, uₙ) be an orthonormal basis of a Euclidean space V. Then v = (v, u₁)u₁ + (v, u₂)u₂ + ⋯ + (v, uₙ)uₙ for any v ∈ V.

Theorem 5.6 (Gram–Schmidt Process) A finite dimensional nonzero subspace of an inner product space has an orthonormal basis.

Proof. Let W be the subspace with some basis S = (u₁, …, uₘ). Then an orthogonal basis T = (v₁, …, vₘ) for W is given inductively by

vᵢ := uᵢ − Σⱼ₌₁^{i−1} ((uᵢ, vⱼ)/(vⱼ, vⱼ)) vⱼ,

starting with v₁ := u₁. Now ((1/‖v₁‖)v₁, (1/‖v₂‖)v₂, …, (1/‖vₘ‖)vₘ) is an orthonormal basis of W.

Theorem 5.7 Let S = (u₁, …, uₙ) be an orthonormal basis of a Euclidean space V. If v = Σᵢ₌₁ⁿ aᵢuᵢ and w = Σᵢ₌₁ⁿ bᵢuᵢ are vectors in V, then (v, w) = Σᵢ₌₁ⁿ aᵢbᵢ.

QR-Factorization

Definition * Let A be an m × n matrix with linearly independent columns. A product representation A = QR is said to be a QR-factorization of A if Q is an m × n matrix whose columns form an orthonormal basis of the column space of A and R is an invertible upper triangular matrix of size n.

Theorem 5.8 An m × n matrix with linearly independent columns has a QR-factorization.
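The inductive formula in the Gram–Schmidt proof can be sketched directly (a minimal NumPy version with made-up input vectors; the same idea underlies the QR-factorization):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (standard inner
    product), following v_i = u_i - sum_j ((u_i, v_j)/(v_j, v_j)) v_j."""
    ortho = []
    for u in vectors:
        v = u.astype(float).copy()
        for w in ortho:
            v -= (u @ w) / (w @ w) * w   # subtract projection on each v_j
        ortho.append(v)
    return [v / np.linalg.norm(v) for v in ortho]

u1, u2 = np.array([3.0, 4.0]), np.array([1.0, 0.0])
q1, q2 = gram_schmidt([u1, u2])
assert np.isclose(q1 @ q2, 0.0)            # orthogonal
assert np.isclose(np.linalg.norm(q1), 1.0) # unit length

# The QR-factorization A = QR of a matrix with independent columns:
A = np.column_stack([u1, u2])
Q, R = np.linalg.qr(A)
assert np.allclose(Q @ R, A)               # A = QR
assert np.allclose(Q.T @ Q, np.eye(2))     # orthonormal columns
```

Note that `np.linalg.qr` fixes signs differently from the hand computation, but the product QR and the orthonormality of Q's columns are exactly the properties of the definition above.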

5.5 Orthogonal Complements

Definition * Let U, W be subspaces of the vector space V. Then U + W = {u + w | u ∈ U, w ∈ W} is called the sum of U and W. If additionally U ∩ W = 0, then we write U ⊕ W and call this the direct sum of U and W.

Theorem 1* Let U, W be subspaces of the vector space V. (a) The sum U + W is a subspace. (b) If u + w ∈ U ⊕ W with u ∈ U and w ∈ W, then the summands u and w are unique.

Definition 5.6 Let W be a subset of an inner product space V. Then W⊥ = {u ∈ V | (u, w) = 0 for all w ∈ W} is called the orthogonal complement of W in V.

Theorem 5.9 Let W be a subset of the inner product space V. Then (a) W⊥ is a subspace of V, (b) W ∩ W⊥ = 0.

Theorem Let W be a finite dimensional subspace of the inner product space V. Then
(5.10) V = W ⊕ W⊥,
(5.11) (W⊥)⊥ = W.

Theorem 5.12 Let A be an m × n matrix. Then (a) the kernel of A is the orthogonal complement of the row space of A, (b) the kernel of Aᵀ is the orthogonal complement of the column space of A.

Definition * Let W be a finite dimensional subspace of the inner product space V. For v ∈ V let v = w + u be the unique representation with w ∈ W and u ∈ W⊥. Then proj_W v = w is called the orthogonal projection of v on W.

Theorem 1* Let W be a finite dimensional subspace of the inner product space V with orthonormal basis (w₁, …, wₘ). Then

proj_W v = ((v, w₁)/(w₁, w₁)) w₁ + ((v, w₂)/(w₂, w₂)) w₂ + ⋯ + ((v, wₘ)/(wₘ, wₘ)) wₘ.

(For an orthonormal basis each denominator (wᵢ, wᵢ) equals 1; as written, the formula also works for a merely orthogonal basis.)

Theorem 5.13 Let W be a finite dimensional subspace of the inner product space V and let v ∈ V. Then ‖v − w‖ ≥ ‖v − proj_W v‖ for all w ∈ W, with equality if and only if w = proj_W v.

6 Linear Transformations and Matrices

6.1 Definition and Examples

Definition 6.1+6.2 Let V, W be vector spaces, let u, v ∈ V and c ∈ R. A function L : V → W is called a linear transformation if L(u + v) = L(u) + L(v) and L(cv) = cL(v). If V = W, a linear transformation is called a linear operator. Onto and one-to-one are understood in the usual sense. A one-to-one and onto linear transformation is called an isomorphism, and
then V and W are called isomorphic vector spaces, written V ≅ W. Note that the following are equivalent:

(1) L is one-to-one; (2) v₁ ≠ v₂ implies L(v₁) ≠ L(v₂); (3) L(v₁) = L(v₂) implies v₁ = v₂.

Theorem 6.1 Let L : V → W be a linear transformation. Then (a) L(0) = 0, (b) L(u − v) = L(u) − L(v) for u, v ∈ V.

Theorem 6.2 (supplemented) Let V, W be vector spaces, and let V be of dimension n with basis S = (v₁, …, vₙ). Then a linear transformation L : V → W is completely determined by the image L(S) of the basis, namely

L(v) = Σᵢ₌₁ⁿ aᵢ L(vᵢ)  if  v = Σᵢ₌₁ⁿ aᵢ vᵢ.

For an arbitrary choice of vectors w₁, …, wₙ ∈ W there is a unique linear transformation L : V → W such that L(vᵢ) = wᵢ for all i.

Theorem 6.3 Let L : Rⁿ → Rᵐ, and let (e₁, …, eₙ) be the standard basis of Rⁿ. Let A be the m × n matrix whose jth column is L(eⱼ). Then L(v) = Av. Moreover, A is the only matrix that describes L in this way.

Definition * The matrix A of the preceding theorem is called the standard matrix representing L.

6.2 Kernel and Range of a Linear Transformation

Definition Let V, W be vector spaces and let L : V → W be a linear transformation. Then ker L = {v ∈ V | L(v) = 0} is called the kernel of L.

Theorem 6.4+Corollary 6.1 For vector spaces V, W with a linear transformation L : V → W: (a) ker L is a subspace of V; (b) L is one-to-one if and only if ker L = 0. (6.1) If L(x) = L(y) = b, then x − y ∈ ker L.

Definition Let V, W be vector spaces and let L : V → W be a linear transformation. Then V is called the domain of L, and range L = L(V) is called the range of L, or the image of V under L. The linear transformation L is called onto if range L = W.

Theorem 6.5 Let V, W be vector spaces. The range of the linear transformation L : V → W is a subspace of W.

Theorem 6.6 Let V, W be vector spaces and let L : V → W be a linear transformation. Then dim ker L + dim range L = dim V.

Theorems+Corollaries Let V, W be vector spaces of the same finite dimension n and let L : V → W be a linear transformation with (square) standard representing matrix A. Then the following are equivalent:
(1) L is an isomorphism. (2) L is one-to-one.

(3) L is onto. (4) L is invertible, i.e., L⁻¹ is a linear transformation, and (L⁻¹)⁻¹ = L. (5) Images of linearly independent sets are linearly independent. (6) A is invertible. (7) ker A = 0. (8) det(A) ≠ 0. (9) rank A = n. (10) A is row (column) equivalent to the identity matrix I. (11) A is a product of elementary matrices. (12) The rows (columns) of A form a linearly independent set. (13) The homogeneous linear system Ax = 0 has only the trivial solution. (14) The linear system Ax = b has a unique solution for every b ∈ Rⁿ.

6.3 Matrix of a Linear Transformation

Theorem 6.9 Let V, W be nonzero vector spaces and let L : V → W be a linear transformation. Let S = (v₁, …, vₙ) and T = (w₁, …, wₘ) be ordered bases of V and W, respectively. Let A be the matrix whose jth column is [L(vⱼ)]_T. Then A is the unique matrix with [L(x)]_T = A[x]_S for every x ∈ V.

Definition * The matrix A in Theorem 6.9 is called the representation of L with respect to the ordered bases S and T.

6.5 Similarity

Theorem 6.12+Corollary Let V, W be vector spaces with ordered bases S, S′ of V and T, T′ of W. Let L : V → W be a linear transformation, and let A and A′ be the representations of L with respect to S, T and with respect to S′, T′, respectively. Then A′ = (P_T)⁻¹ A P_S, where P_S is the transition matrix from the S′-basis to the S-basis and P_T is the transition matrix from the T′-basis to the T-basis; in particular, A′ = (P_S)⁻¹ A P_S if V = W, S = T and S′ = T′.

Definition * The common rank of the matrix representations of a linear transformation L is called the rank of L.

Theorem 6.13 rank L = dim range L for a linear transformation L.

Definition 6.6 Square matrices A, B of the same size are called similar if there is an invertible matrix P such that B = P⁻¹AP.

Theorem 15* Similarity is an equivalence relation.

Theorem 6.14 Let V be an n-dimensional vector space and let A, B be square matrices of size n. Then A and B are similar if and only if A and B represent the same linear operator on V with respect to two ordered bases.

Theorem 6.15 If A and B are similar, then rank A = rank B.
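A quick numerical illustration of similarity (NumPy; the matrices A and P are made-up): B = P⁻¹AP has the same rank as A, and, as used in the next chapter, the same eigenvalues.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # invertible: det(P) = 1

B = np.linalg.inv(P) @ A @ P        # B is similar to A

# Similar matrices have the same rank ...
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)
# ... and the same eigenvalues (here both have eigenvalues 1 and 3).
assert np.allclose(np.sort(np.linalg.eigvals(A).real),
                   np.sort(np.linalg.eigvals(B).real))
```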

7 Eigenvalues and Eigenvectors

7.1 Definitions and Examples

Definition 7.1 Let V be a (real or complex) vector space and let L be a linear operator on V. The scalar λ (real or complex) is called an eigenvalue of L if there exists a nonzero vector x ∈ V such that L(x) = λx. Every nonzero vector x satisfying this equation is called an eigenvector of L associated with the eigenvalue λ.

Definition 7.2 Let A = [aᵢ,ⱼ] be an n × n matrix. The determinant of the matrix λIₙ − A, whose diagonal entries are λ − aᵢ,ᵢ and whose off-diagonal entries are −aᵢ,ⱼ, i.e.,

p(λ) = det(λIₙ − A),

is called the characteristic polynomial of A. The equation p(λ) = det(λIₙ − A) = 0 is called the characteristic equation of A.

Theorem 7.1 Let A be a square matrix. The eigenvalues of A are the roots of the characteristic polynomial.

Theorem 16* A rational root of the monic integer polynomial p(λ) = λⁿ + a₁λⁿ⁻¹ + ⋯ + aₙ₋₁λ + aₙ is an integer that divides aₙ.

If λ is an explicitly given eigenvalue of A, then (λIₙ − A)x = 0 is a homogeneous linear system, and its nonzero solutions, i.e., ker(λIₙ − A) \ 0, are precisely the eigenvectors of A associated with the eigenvalue λ.

Exercise 7.10 [Cayley–Hamilton] A square matrix is a root of its characteristic polynomial.

7.2 Diagonalization and Similar Matrices

Definition Let L be a linear operator on a finite dimensional vector space V. The operator L is called diagonalizable (or: can be diagonalized) if there exists a basis S of V such that L is represented with respect to S by a diagonal matrix.

Theorem Similar matrices have the same eigenvalues.

Theorems Let L be a linear operator on a finite dimensional vector space V. Then L is diagonalizable if and only if V has a basis S consisting of eigenvectors of L. If D is the diagonal matrix representing L with respect to S, then the diagonal entries of D are the eigenvalues of L. Likewise, an n × n matrix A is similar to a diagonal matrix D if and only if A has n linearly independent eigenvectors; moreover, the diagonal entries of D are the eigenvalues of A.

Theorem 7.5 Let R be a set of roots of the
characteristic polynomial of a matrix A. Then the scalars in R are eigenvalues of A, and any set of eigenvectors associated with the distinct roots in R is linearly independent. In particular, if the number of distinct roots in R equals the degree of the characteristic polynomial, then A is diagonalizable.
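A numerical sketch of the diagonalization criterion (NumPy; the matrix is made-up): collecting n linearly independent eigenvectors as the columns of P gives P⁻¹AP = D diagonal.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Characteristic polynomial det(lambda*I - A) = lambda^2 - 7*lambda + 10
# has the distinct roots 5 and 2, so A is diagonalizable.
eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors

D = np.linalg.inv(P) @ A @ P         # P^{-1} A P is diagonal
assert np.allclose(D, np.diag(eigvals))

# Each column of P satisfies A x = lambda x:
for lam, x in zip(eigvals, P.T):
    assert np.allclose(A @ x, lam * x)
```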

Definition * By the Fundamental Theorem of Algebra [Gauss], each complex polynomial p(λ) can be written in the form p(λ) = (λ − λ₁)^{k₁}(λ − λ₂)^{k₂} ⋯ (λ − λᵣ)^{kᵣ}, where λ₁, …, λᵣ are distinct complex (including real) numbers. The exponent kᵢ is called the multiplicity of λᵢ.

Theorem 17* Let A be an n × n matrix with characteristic polynomial p(λ). Let λ₁, …, λᵣ be the distinct roots of p(λ), i.e., the distinct eigenvalues of A. Let kᵢ be the multiplicity of λᵢ and let lᵢ be the dimension of the kernel of λᵢIₙ − A. Then lᵢ ≤ kᵢ, and if lᵢ = kᵢ for all i, then A is diagonalizable.

7.3 Diagonalization of Symmetric Matrices

Theorems All roots of the characteristic polynomial of a symmetric matrix are real numbers, and eigenvectors that belong to distinct eigenvalues are orthogonal (with respect to the standard inner product in Rⁿ).

Definition A real square matrix A is called orthogonal if A⁻¹ = Aᵀ, i.e., AᵀA = I.

Theorem 7.8 A matrix is orthogonal if and only if its columns (rows) form an orthonormal set.

Theorem 7.9 A symmetric matrix is orthogonally diagonalizable, i.e., for a symmetric matrix A there exists an orthogonal matrix P such that P⁻¹AP is a real diagonal matrix. In particular, the columns of P form an orthonormal set of eigenvectors of A.
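The orthogonal diagonalization of a symmetric matrix can be sketched in NumPy (a made-up symmetric matrix; `eigh` is NumPy's routine for symmetric/Hermitian matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric

eigvals, P = np.linalg.eigh(A)       # eigh: eigendecomposition for symmetric A

# The eigenvalues of a symmetric matrix are real ...
assert np.isrealobj(eigvals)
# ... and P is orthogonal: P^T P = I, so P^{-1} = P^T.
assert np.allclose(P.T @ P, np.eye(2))
# P orthogonally diagonalizes A:
assert np.allclose(P.T @ A @ P, np.diag(eigvals))
```

The columns of P are an orthonormal set of eigenvectors of A, exactly as in the last theorem above.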


### Math 110 Linear Algebra Midterm 2 Review October 28, 2017

Math 11 Linear Algebra Midterm Review October 8, 17 Material Material covered on the midterm includes: All lectures from Thursday, Sept. 1st to Tuesday, Oct. 4th Homeworks 9 to 17 Quizzes 5 to 9 Sections

### Lecture 23: Trace and determinants! (1) (Final lecture)

Lecture 23: Trace and determinants! (1) (Final lecture) Travis Schedler Thurs, Dec 9, 2010 (version: Monday, Dec 13, 3:52 PM) Goals (2) Recall χ T (x) = (x λ 1 ) (x λ n ) = x n tr(t )x n 1 + +( 1) n det(t

### Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε,

2. REVIEW OF LINEAR ALGEBRA 1 Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε, where Y n 1 response vector and X n p is the model matrix (or design matrix ) with one row for

### CS123 INTRODUCTION TO COMPUTER GRAPHICS. Linear Algebra 1/33

Linear Algebra 1/33 Vectors A vector is a magnitude and a direction Magnitude = v Direction Also known as norm, length Represented by unit vectors (vectors with a length of 1 that point along distinct

### Math 2030 Assignment 5 Solutions

Math 030 Assignment 5 Solutions Question 1: Which of the following sets of vectors are linearly independent? If the set is linear dependent, find a linear dependence relation for the vectors (a) {(1, 0,

### Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

### Eigenvalues and Eigenvectors

Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

### MAT 242 CHAPTER 4: SUBSPACES OF R n

MAT 242 CHAPTER 4: SUBSPACES OF R n JOHN QUIGG 1. Subspaces Recall that R n is the set of n 1 matrices, also called vectors, and satisfies the following properties: x + y = y + x x + (y + z) = (x + y)

MTH50 Spring 07 HW Assignment 7 {From [FIS0]}: Sec 44 #4a h 6; Sec 5 #ad ac 4ae 4 7 The due date for this assignment is 04/05/7 Sec 44 #4a h Evaluate the erminant of the following matrices by any legitimate

### v = v 1 2 +v 2 2. Two successive applications of this idea give the length of the vector v R 3 :

Length, Angle and the Inner Product The length (or norm) of a vector v R 2 (viewed as connecting the origin to a point (v 1,v 2 )) is easily determined by the Pythagorean Theorem and is denoted v : v =

### 4.3 - Linear Combinations and Independence of Vectors

- Linear Combinations and Independence of Vectors De nitions, Theorems, and Examples De nition 1 A vector v in a vector space V is called a linear combination of the vectors u 1, u,,u k in V if v can be

### Inverting Matrices. 1 Properties of Transpose. 2 Matrix Algebra. P. Danziger 3.2, 3.3

3., 3.3 Inverting Matrices P. Danziger 1 Properties of Transpose Transpose has higher precedence than multiplication and addition, so AB T A ( B T and A + B T A + ( B T As opposed to the bracketed expressions

### 1. Row Operations. Math 211 Linear Algebra Skeleton Notes S. Waner

1 Math 211 Linear Algebra Skeleton Notes S. Waner 1. Row Operations Definitions 1.1 A field is a set F with two binary operations +,, such that: 1. Addition and multiplication are commutative: x, y é F,

### Math 314/ Exam 2 Blue Exam Solutions December 4, 2008 Instructor: Dr. S. Cooper. Name:

Math 34/84 - Exam Blue Exam Solutions December 4, 8 Instructor: Dr. S. Cooper Name: Read each question carefully. Be sure to show all of your work and not just your final conclusion. You may not use your

### = 1 and 2 1. T =, and so det A b d

Chapter 8 Determinants The founder of the theory of determinants is usually taken to be Gottfried Wilhelm Leibniz (1646 1716, who also shares the credit for inventing calculus with Sir Isaac Newton (1643

### Preliminary Linear Algebra 1. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 100

Preliminary Linear Algebra 1 Copyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 100 Notation for all there exists such that therefore because end of proof (QED) Copyright c 2012