Math Camp Notes: Linear Algebra I


Basic Matrix Operations and Properties

Consider two $n \times m$ matrices:

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nm} \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} & \cdots & b_{1m} \\ \vdots & & \vdots \\ b_{n1} & \cdots & b_{nm} \end{pmatrix}$$

Then the basic matrix operations are as follows:

$$A + B = \begin{pmatrix} a_{11} + b_{11} & \cdots & a_{1m} + b_{1m} \\ \vdots & & \vdots \\ a_{n1} + b_{n1} & \cdots & a_{nm} + b_{nm} \end{pmatrix}$$

$$\lambda A = \begin{pmatrix} \lambda a_{11} & \cdots & \lambda a_{1m} \\ \vdots & & \vdots \\ \lambda a_{n1} & \cdots & \lambda a_{nm} \end{pmatrix}, \quad \text{where } \lambda \in \mathbb{R}$$

Notice that the elements in the matrix are numbered $a_{i,j}$, where $i$ is the row and $j$ is the column in which the element $a_{i,j}$ is found.

In order to multiply matrices $CD$, the number of columns in the $C$ matrix must be equal to the number of rows in the $D$ matrix. Say $C$ is an $n \times m$ matrix and $D$ is an $m \times k$ matrix. Then multiplication is defined as follows:

$$E = \underbrace{C}_{n \times m} \underbrace{D}_{m \times k} = \underbrace{\begin{pmatrix} \sum_{q=1}^{m} c_{1,q} d_{q,1} & \cdots & \sum_{q=1}^{m} c_{1,q} d_{q,k} \\ \vdots & & \vdots \\ \sum_{q=1}^{m} c_{n,q} d_{q,1} & \cdots & \sum_{q=1}^{m} c_{n,q} d_{q,k} \end{pmatrix}}_{n \times k}$$

There are two notable special cases for multiplication of matrices. The first is called the inner product, which occurs when two vectors of the same length are multiplied together such that the result is a scalar:

$$\underbrace{v'}_{1 \times n} \underbrace{z}_{n \times 1} = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix} \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix} = \sum_{i=1}^{n} v_i z_i$$

The second is called the outer product:

$$\underbrace{z}_{n \times 1} \underbrace{v'}_{1 \times n} = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix} \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix} = \underbrace{\begin{pmatrix} z_1 v_1 & \cdots & z_1 v_n \\ \vdots & & \vdots \\ z_n v_1 & \cdots & z_n v_n \end{pmatrix}}_{n \times n}$$

Note that when we multiplied the matrices $C$ and $D$ together, the resulting $(i,j)$th element of $E$ was just the inner product of the $i$th row of $C$ and the $j$th column of $D$. Also note that even if two matrices $X$ and $Y$ are both $n \times n$, in general $XY \neq YX$, except in special cases.

Types of Matrices

Zero Matrices

A zero matrix is a matrix in which each element is 0:

$$0 = \underbrace{\begin{pmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{pmatrix}}_{n \times k}$$
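The operations just defined can be sketched in pure Python. This is a minimal illustration only; helper names like mat_mul are my own, not from the notes.

```python
# Basic matrix operations on matrices stored as lists of rows.

def mat_add(A, B):
    """Entrywise sum of two equally sized matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal_mul(lam, A):
    """Multiply every entry of A by the scalar lam."""
    return [[lam * a for a in row] for row in A]

def mat_mul(C, D):
    """Product of an n x m matrix C and an m x k matrix D."""
    assert len(C[0]) == len(D), "columns of C must equal rows of D"
    return [[sum(C[i][q] * D[q][j] for q in range(len(D)))
             for j in range(len(D[0]))] for i in range(len(C))]

def inner(v, z):
    """Inner product of two vectors of the same length (a scalar)."""
    return sum(vi * zi for vi, zi in zip(v, z))

def outer(z, v):
    """Outer product: the (i, j) entry is z_i * v_j."""
    return [[zi * vj for vj in v] for zi in z]

X = [[1, 2], [3, 4]]
Y = [[0, 1], [1, 0]]
print(mat_mul(X, Y))   # [[2, 1], [4, 3]]
print(mat_mul(Y, X))   # [[3, 4], [1, 2]] -- so XY != YX in general
print(inner([1, 2, 3], [4, 5, 6]))  # 32
```

Note how swapping the order of the factors changes the product, which is the non-commutativity point made above.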

The following properties hold for zero matrices:

1. $A + 0 = A$
2. If $AB = 0$, it is not necessarily the case that $A = 0$ or $B = 0$.

Identity Matrices

The identity matrix is a matrix with zeroes everywhere except along the diagonal. Note that the number of columns must equal the number of rows:

$$I = \underbrace{\begin{pmatrix} 1 & & 0 \\ & \ddots & \\ 0 & & 1 \end{pmatrix}}_{n \times n}$$

The reason it is called the identity matrix is that $AI = IA = A$.

Square, Symmetric, and Transpose Matrices

A square matrix is a matrix whose number of rows is the same as its number of columns. For example, the identity matrix is always square.

If a square matrix has the property that $a_{i,j} = a_{j,i}$ for all its elements, then we call it a symmetric matrix.

The transpose of a matrix $A$, denoted $A'$, is a matrix such that for each element of $A'$, $a'_{i,j} = a_{j,i}$. For example, the transpose of the matrix

$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} \quad \text{is} \quad \begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{pmatrix}$$

Note that a matrix $A$ is symmetric if $A = A'$.

The following properties of the transpose hold:

1. $(A')' = A$
2. $(A + B)' = A' + B'$
3. $(\alpha A)' = \alpha A'$
4. $(AB)' = B'A'$
5. If the matrix $A$ is $n \times k$, then $A'$ is $k \times n$.

Diagonal and Triangular Matrices

A square matrix $A$ is diagonal if it is all zeroes except along the diagonal:

$$\begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}$$

Note that all diagonal matrices are also symmetric.
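A quick numerical check of the transpose properties listed above; transpose and mat_mul are my own illustrative helpers, not part of the notes.

```python
# Verify (A')' = A, (AB)' = B'A', and that A A' is symmetric.

def transpose(A):
    """Swap rows and columns: the (i, j) entry of A' is the (j, i) entry of A."""
    return [list(col) for col in zip(*A)]

def mat_mul(C, D):
    """Matrix product via row-by-column inner products."""
    return [[sum(c * d for c, d in zip(row, col)) for col in zip(*D)]
            for row in C]

A = [[1, 2, 3], [4, 5, 6]]      # 2 x 3
B = [[0, 1], [2, 0], [1, 1]]    # 3 x 2

assert transpose(transpose(A)) == A                                      # (A')' = A
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))   # (AB)' = B'A'

S = mat_mul(A, transpose(A))    # A A' is square...
assert S == transpose(S)        # ...and symmetric

print(transpose([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```

The last assertion previews homework problem 3: the product of any matrix with its transpose is symmetric.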

A square matrix $A$ is upper triangular if all of its entries below the diagonal are zero:

$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}$$

and is lower triangular if all its entries above the diagonal are zero:

$$\begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$

Inverse Matrices

If there exists a matrix $B$ such that $BA = AB = I$, then we call $B$ the inverse of $A$, and denote it $A^{-1}$. Note that $A$ can only have an inverse if it is a square matrix.

The following properties of inverses hold:

1. $(A^{-1})^{-1} = A$
2. $(\alpha A)^{-1} = \frac{1}{\alpha} A^{-1}$
3. $(AB)^{-1} = B^{-1} A^{-1}$, if $A^{-1}$ and $B^{-1}$ exist
4. $(A')^{-1} = (A^{-1})'$

Other Important Matrices

A matrix $A$ is orthogonal if $A'A = I$. In other words, a matrix is orthogonal if its transpose is its inverse.

A matrix is idempotent if it is both symmetric and satisfies $AA = A$. Orthogonal and idempotent matrices are used especially often in econometrics.

Determinants

We define the determinant inductively.

1. The determinant of a $1 \times 1$ matrix $[a]$ is $a$, and is denoted $\det(a)$.

2. The determinant of a $2 \times 2$ matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is $ad - bc$. Notice that this is the same as $a \det(d) - b \det(c)$. The first term is the $(1,1)$th entry of the matrix times the determinant of the submatrix obtained by deleting from the matrix the row and column which contain that entry. The second term is the $(1,2)$th entry times the determinant of the submatrix obtained by deleting the row and column which contain that entry. The terms alternate in sign, with the first term being added and the second term being subtracted.

3. The determinant of a $3 \times 3$ matrix $\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$ is $aei + bfg + cdh - ceg - bdi - afh$. Notice that this can be written as

$$a \det \begin{pmatrix} e & f \\ h & i \end{pmatrix} - b \det \begin{pmatrix} d & f \\ g & i \end{pmatrix} + c \det \begin{pmatrix} d & e \\ g & h \end{pmatrix}$$

Can you see the pattern? In order to obtain the determinant, we multiply each element in the top row by the determinant of the matrix left over when we delete the row and column in which that element resides. The signs of the terms alternate, starting with positive.

In order to write the definition of the determinant of an $n$th order matrix, it is useful to define the $(i,j)$th minor of $A$ and the $(i,j)$th cofactor of $A$:

Let $A$ be an $n \times n$ matrix. Let $A_{ij}$ be the $(n-1) \times (n-1)$ submatrix obtained by deleting row $i$ and column $j$ from $A$. Then the scalar $M_{ij} = \det(A_{ij})$ is called the $(i,j)$th minor of $A$. The scalar $C_{ij} = (-1)^{i+j} M_{ij}$ is called the $(i,j)$th cofactor of $A$. The cofactor is merely the signed minor.

Armed with these two definitions, we notice that the determinant of the $2 \times 2$ matrix is

$$\det(A) = a_{11} M_{11} - a_{12} M_{12} = a_{11} C_{11} + a_{12} C_{12},$$

and the determinant of the $3 \times 3$ matrix is

$$\det(A) = a_{11} M_{11} - a_{12} M_{12} + a_{13} M_{13} = a_{11} C_{11} + a_{12} C_{12} + a_{13} C_{13}.$$

Therefore, we can define the determinant of an $n \times n$ square matrix as follows:

$$\det \underbrace{A}_{n \times n} = a_{11} C_{11} + a_{12} C_{12} + \cdots + a_{1n} C_{1n}$$

Notice that this definition of the determinant uses elements and cofactors from the top row only. This is called a cofactor expansion along the first row. However, a cofactor expansion along any row or column will be equal to the determinant. The proof of this assertion for the $3 \times 3$ case is left as a homework problem.

Example: Find the determinant of the matrix

$$\begin{pmatrix} 1 & 2 & 0 \\ 0 & 3 & 0 \\ 4 & 5 & 6 \end{pmatrix}$$

Answer: Expanding along the first row, the determinant is:

$$a_{11} C_{11} + a_{12} C_{12} + a_{13} C_{13} = 1 \det \begin{pmatrix} 3 & 0 \\ 5 & 6 \end{pmatrix} - 2 \det \begin{pmatrix} 0 & 0 \\ 4 & 6 \end{pmatrix} + 0 \det \begin{pmatrix} 0 & 3 \\ 4 & 5 \end{pmatrix} = 18 - 0 + 0 = 18$$

Now let's expand along the second column instead of the first row:

$$-2 \det \begin{pmatrix} 0 & 0 \\ 4 & 6 \end{pmatrix} + 3 \det \begin{pmatrix} 1 & 0 \\ 4 & 6 \end{pmatrix} - 5 \det \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = 0 + 18 - 0 = 18$$

Singular and Non-singular Matrices

Consider the system of equations:

$$\begin{aligned} y_1 &= a_1 x_1 + b_1 x_2 + \cdots + c_1 x_n \\ y_2 &= a_2 x_1 + b_2 x_2 + \cdots + c_2 x_n \\ &\ \vdots \\ y_m &= a_m x_1 + b_m x_2 + \cdots + c_m x_n \end{aligned}$$
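Before turning to systems of equations: the inductive definition above can be coded directly as a recursive cofactor expansion along the first row. This is a sketch; det is my own minimal helper, not an optimized routine.

```python
# Determinant by cofactor expansion along the first row.

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j+1
        total += (-1) ** j * A[0][j] * det(minor)         # alternating signs
    return total

# The 3x3 formula aei + bfg + cdh - ceg - bdi - afh agrees with the expansion:
a, b, c, d, e, f, g, h, i = 1, 2, 3, 4, 5, 6, 7, 8, 10
M = [[a, b, c], [d, e, f], [g, h, i]]
assert det(M) == a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

print(det([[1, 2, 0], [0, 3, 0], [4, 5, 6]]))  # 18
```

The recursion bottoms out at the $1 \times 1$ case, exactly mirroring the inductive definition in the text.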

This system of equations can be written in matrix form as

$$\underbrace{\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix}}_{m \times 1} = \underbrace{\begin{pmatrix} a_1 & b_1 & \cdots & c_1 \\ a_2 & b_2 & \cdots & c_2 \\ \vdots & & & \vdots \\ a_m & b_m & \cdots & c_m \end{pmatrix}}_{m \times n} \underbrace{\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}}_{n \times 1}$$

In short, we can write this system as $y = Ax$. For simplicity, denote the $i$th row of the matrix as a separate vector $v_i$, so that

$$A = \begin{pmatrix} a_1 & b_1 & \cdots & c_1 \\ \vdots & & & \vdots \\ a_m & b_m & \cdots & c_m \end{pmatrix} = \begin{pmatrix} v_1 \\ \vdots \\ v_m \end{pmatrix}$$

We say the vectors $v_1, \ldots, v_m$ are linearly dependent if there exist scalars $q_1, \ldots, q_m$, not all zero, such that

$$\sum_{i=1}^{m} q_i v_i = 0.$$

We say the vectors $v_1, \ldots, v_m$ are linearly independent if the only scalars $q_1, \ldots, q_m$ such that

$$\sum_{i=1}^{m} q_i v_i = 0$$

are $q_1 = \cdots = q_m = 0$. We can use this definition of linear independence and dependence for columns as well.

The rank of a matrix is the number of linearly independent rows or columns in the matrix. (Note: the number of linearly independent rows is always the same as the number of linearly independent columns.)

If we draw the lines of the system in $\mathbb{R}^n$, then they will all cross either at no points, at one point, or at infinitely many points. Therefore, the system may have no solutions, one solution, or many solutions for a given vector $y$. If it has more than one solution, it will have infinitely many solutions, since straight lines can only intersect once if they do not coincide.

The following are conditions on the number of solutions of a system:

1. A system of linear equations with coefficient matrix $A$ will have a solution for each choice of $y$ iff $\operatorname{rank} A$ = the number of rows in $A$.
2. A system of linear equations with coefficient matrix $A$ will have at most one solution for each choice of $y$ iff $\operatorname{rank} A$ = the number of columns in $A$.

This implies that a system of linear equations with coefficient matrix $A$ will have exactly one solution for each choice of $y$ iff $\operatorname{rank} A$ = the number of columns in $A$ = the number of rows in $A$.

If a square matrix has exactly one solution for each $y$, we say the matrix is non-singular. Otherwise, it is singular, and for a given $y$ the system will have either no solutions or infinitely many solutions.

Elementary Row Operations

Recall the system of equations:

$$\begin{aligned} y_1 &= a_1 x_1 + b_1 x_2 + \cdots + c_1 x_n \\ y_2 &= a_2 x_1 + b_2 x_2 + \cdots + c_2 x_n \\ &\ \vdots \end{aligned}$$

which can be written in matrix form as

$$\underbrace{\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix}}_{m \times 1} = \underbrace{\begin{pmatrix} a_1 & b_1 & \cdots & c_1 \\ a_2 & b_2 & \cdots & c_2 \\ \vdots & & & \vdots \\ a_m & b_m & \cdots & c_m \end{pmatrix}}_{m \times n} \underbrace{\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}}_{n \times 1},$$

or $y = Ax$. There are three types of elementary row operations we can perform on this matrix without changing the solution set:

1. Interchanging two rows of the matrix.
2. Multiplying each element of a row by the same non-zero scalar.
3. Changing a row by adding to it a multiple of another row.

You should convince yourself that these three operations do not change the solution set of the system.

Using Elementary Row Operations to Check Linear Independence

A row of a matrix is said to have $k$ leading zeros if the $(k+1)$th element of the row is non-zero while the first $k$ elements are zero. A matrix is in row echelon form if each row has strictly more leading zeros than the preceding row. For example, the matrices

$$A = \begin{pmatrix} 1 & 4 & 0 \\ 0 & 7 & 2 \\ 0 & 0 & 3 \end{pmatrix}, \qquad B = \begin{pmatrix} 2 & 1 \\ 0 & 8 \end{pmatrix}$$

are in row echelon form, while the matrices

$$A = \begin{pmatrix} 1 & 8 & 3 \\ 0 & 0 & 6 \\ 0 & 2 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 4 \\ 1 & 2 \end{pmatrix}$$

are not. However, $B$ would be in row echelon form if we performed the elementary row operation of switching its two rows.

In order to get a matrix into row echelon form, we can perform elementary row operations on the rows. For example, consider the matrix

$$A = \begin{pmatrix} 1 & 1 & 2 \\ 3 & 4 & 8 \\ 1 & 1 & 6 \end{pmatrix}$$

By subtracting the third row from the first row, we obtain the matrix

$$A_1 = \begin{pmatrix} 0 & 0 & -4 \\ 3 & 4 & 8 \\ 1 & 1 & 6 \end{pmatrix}$$

We can also subtract three times the third row from the second row:

$$A_2 = \begin{pmatrix} 0 & 0 & -4 \\ 0 & 1 & -10 \\ 1 & 1 & 6 \end{pmatrix}$$

Then rearrange the rows to get the matrix into row echelon form:

$$A_3 = \begin{pmatrix} 1 & 1 & 6 \\ 0 & 1 & -10 \\ 0 & 0 & -4 \end{pmatrix}$$
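The leading-zeros definition above translates directly into a small checker. This is a sketch in Python; the helper names are my own, not from the notes.

```python
# Check row echelon form via the leading-zeros definition.
# A row of all zeros counts as having len(row) leading zeros.

def leading_zeros(row):
    """Number of zeros before the first non-zero entry of the row."""
    return next((k for k, x in enumerate(row) if x != 0), len(row))

def is_row_echelon(A):
    """Each row must have strictly more leading zeros than the preceding row."""
    counts = [leading_zeros(row) for row in A]
    return all(later > earlier for earlier, later in zip(counts, counts[1:]))

print(is_row_echelon([[1, 1, 6], [0, 1, -10], [0, 0, -4]]))  # True
print(is_row_echelon([[0, 4], [1, 2]]))  # False, but True after swapping the rows
```

Counting the non-zero rows of a matrix in row echelon form gives its rank, which is the use this check is put to next.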

When a matrix is in row echelon form, it is easy to check whether all the rows are linearly independent: for linear independence, all the rows of the row echelon form must be non-zero. If not, then the matrix has linearly dependent rows. We have shown that all the rows of $A$ are linearly independent, because its row echelon form contains no zero rows.

This gives another way of defining the rank of a matrix: the number of non-zero (equivalently, linearly independent) rows in its row echelon form. For example, the matrix $A$ above is of full rank, since all its rows are linearly independent.

Using Elementary Row Operations to Solve a System of Equations

We can solve a system of equations by writing the matrix

$$\left(\begin{array}{cccc|c} a_1 & b_1 & \cdots & c_1 & y_1 \\ a_2 & b_2 & \cdots & c_2 & y_2 \\ \vdots & & & \vdots & \vdots \\ a_m & b_m & \cdots & c_m & y_m \end{array}\right),$$

called the augmented matrix of $A$, and using elementary row operations.

Example: Let

$$A = \begin{pmatrix} 1 & 1 & 2 \\ 3 & 4 & 8 \\ 1 & 1 & 6 \end{pmatrix}$$

as before, and say that $y = (1, 1, 1)'$. Then the augmented matrix is

$$\left(\begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 3 & 4 & 8 & 1 \\ 1 & 1 & 6 & 1 \end{array}\right)$$

Performing elementary row operations (subtracting three times the first row from the second, then subtracting the first row from the third), we have

$$\left(\begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 3 & 4 & 8 & 1 \\ 1 & 1 & 6 & 1 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 0 & 1 & 2 & -2 \\ 1 & 1 & 6 & 1 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 0 & 1 & 2 & -2 \\ 0 & 0 & 4 & 0 \end{array}\right)$$

We continue the row operations until the left hand side of the augmented matrix looks like the identity matrix:

$$\left(\begin{array}{ccc|c} 1 & 0 & 0 & 3 \\ 0 & 1 & 0 & -2 \\ 0 & 0 & 1 & 0 \end{array}\right)$$

Notice that this implies

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 3 \\ -2 \\ 0 \end{pmatrix},$$

so we have found the solution $x = (3, -2, 0)'$ using elementary row operations.

In summary: if we form the augmented matrix of $A$ and reduce the left hand side to its reduced row echelon form (so that each row contains all zeros except possibly a leading one, and each leading one is the only non-zero entry in its column) through elementary row operations, then the remaining vector on the right hand side will be the solution to the system.

Inverse Matrices

It can be shown that the inverse of a square matrix exists if and only if its determinant is non-zero. The following statements are all equivalent for a square matrix $A$:

1. $A$ is non-singular.
2. All the columns and rows of $A$ are linearly independent.
3. $A$ has full rank.
4. Exactly one solution $x$ exists for each vector $y$.
5. $A$ is invertible.
6. $\det(A) \neq 0$.
7. The row echelon form of the matrix is upper triangular with no zeroes along the diagonal.
8. The reduced row echelon form of the matrix is the identity matrix.

Calculating the Inverse Matrix by the Adjoint Matrix

The adjoint matrix of a square matrix $A$ is the transposed matrix of cofactors of $A$:

$$\operatorname{adj}(A) = \begin{pmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & & & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{pmatrix}$$

Notice that the adjoint of a $2 \times 2$ matrix $\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ is $\begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}$.

The inverse of the matrix $A$ can be found by

$$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A).$$

Therefore, the inverse of a $2 \times 2$ matrix is

$$A^{-1} = \frac{1}{a_{11} a_{22} - a_{12} a_{21}} \begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}$$
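Both procedures just described — solving $y = Ax$ by reducing the augmented matrix, and inverting a $2 \times 2$ matrix by the adjoint formula — can be sketched in Python with exact fractions. The function names solve_aug and inv2 are my own illustrative choices, and the $3 \times 3$ system is a small concrete example.

```python
# Gauss-Jordan on the augmented matrix (A | y), plus the 2x2 adjoint inverse.
from fractions import Fraction

def solve_aug(A, y):
    """Reduce (A | y) to (I | x) by elementary row operations; A non-singular."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(b)] for row, b in zip(A, y)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)  # find a pivot row
        M[c], M[piv] = M[piv], M[c]                         # interchange rows
        M[c] = [v / M[c][c] for v in M[c]]                  # scale pivot row to 1
        for i in range(n):
            if i != c:                                      # clear the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[-1] for row in M]

def inv2(A):
    """Inverse of a 2x2 matrix by adj(A) / det(A)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "singular matrix: no inverse"
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

print(solve_aug([[1, 1, 2], [3, 4, 8], [1, 1, 6]], [1, 1, 1]))  # x = (3, -2, 0), as Fractions
print(inv2([[2, 1], [1, 1]]))  # the inverse [[1, -1], [-1, 2]], as Fractions
```

Fractions keep the arithmetic exact, so the reduced right hand side is the solution with no rounding error — the same answer the hand reduction produces.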

Calculating the Inverse Matrix by Elementary Row Operations

To calculate the inverse matrix, form the augmented matrix

$$\left(\begin{array}{cccc|cccc} a_{11} & a_{12} & \cdots & a_{1n} & 1 & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} & 0 & 1 & \cdots & 0 \\ \vdots & & & \vdots & \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & 0 & 0 & \cdots & 1 \end{array}\right),$$

where the left hand side is the matrix $A$ and the right hand side is the identity matrix. Reduce the left hand side to reduced row echelon form, and what remains on the right hand side will be the inverse matrix of $A$. In other words, by elementary row operations you can transform the matrix $(A \mid I)$ into the matrix $(I \mid A^{-1})$.

Cramer's Rule

Let $A$ be a non-singular matrix. Then the unique solution $x = (x_1, \ldots, x_n)$ of the $n \times n$ system $y = Ax$ is

$$x_i = \frac{\det(B_i)}{\det(A)} \quad \text{for } i = 1, \ldots, n,$$

where $B_i$ is the matrix $A$ with the right hand side $y$ replacing the $i$th column of $A$.

Example: Consider the linear IS-LM model

$$\begin{aligned} sY + ar &= I_0 + G \\ mY - hr &= M_s - M_0 \end{aligned}$$

where $Y$ is the net national product, $r$ is the interest rate, $s$ is the marginal propensity to save, $a$ is the marginal efficiency of capital, $I = I_0 - ar$ is investment, $m$ is money balances needed per dollar of transactions, $G$ is government spending, and $M_s$ is the money supply. All the parameters are positive. We can rewrite the system as

$$\begin{pmatrix} s & a \\ m & -h \end{pmatrix} \begin{pmatrix} Y \\ r \end{pmatrix} = \begin{pmatrix} I_0 + G \\ M_s - M_0 \end{pmatrix}$$

By Cramer's rule, we have

$$Y = \frac{\begin{vmatrix} I_0 + G & a \\ M_s - M_0 & -h \end{vmatrix}}{\begin{vmatrix} s & a \\ m & -h \end{vmatrix}} = \frac{(I_0 + G)h + a(M_s - M_0)}{sh + am}$$

$$r = \frac{\begin{vmatrix} s & I_0 + G \\ m & M_s - M_0 \end{vmatrix}}{\begin{vmatrix} s & a \\ m & -h \end{vmatrix}} = \frac{(I_0 + G)m - s(M_s - M_0)}{sh + am}$$

Principal Minors and Leading Principal Minors

Principal Minors

Let $A$ be an $n \times n$ matrix. A $k \times k$ submatrix of $A$ obtained by deleting any $n-k$ columns and the same $n-k$ rows from $A$ is called a $k$th-order principal submatrix of $A$. The determinant of a $k \times k$ principal submatrix is called a $k$th-order principal minor of $A$.

Example:

List all the principal minors of the $3 \times 3$ matrix:

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$

Answer: There is one third-order principal minor of $A$: $\det(A)$. There are three second-order principal minors:

1. $\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$, formed by deleting the third row and column of $A$;
2. $\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix}$, formed by deleting the second row and column of $A$;
3. $\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}$, formed by deleting the first row and column of $A$.

There are also three first-order principal minors: $a_{11}$, by deleting the last two rows and columns; $a_{22}$, by deleting the first and last rows and columns; and $a_{33}$, by deleting the first two rows and columns.

Leading Principal Minors

A leading principal minor is the determinant of the leading principal submatrix obtained by deleting the last $n-k$ rows and columns of an $n \times n$ matrix $A$.

Example: List all the leading principal minors of the $3 \times 3$ matrix above.

Answer: There are three leading principal minors:

1. $|a_{11}|$, formed by deleting the last two rows and columns of $A$;
2. $\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$, formed by deleting the last row and column of $A$;
3. $\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$, formed by deleting no rows or columns of $A$.

Why in the world do we care about principal and leading principal minors? We need to calculate the signs of the leading principal minors in order to determine the definiteness of a matrix. We need definiteness to check second-order conditions for maxima and minima. We also need definiteness of the Hessian matrix to check whether or not we have a concave function.

Quadratic Forms and Definiteness

Quadratic Forms

Consider the function $F: \mathbb{R}^2 \rightarrow \mathbb{R}$, where

$$F(x_1, x_2) = a_{11} x_1^2 + 2 a_{12} x_1 x_2 + a_{22} x_2^2$$

We call this a quadratic form in $\mathbb{R}^2$. Notice that this can be expressed in matrix form as

$$F(x) = \begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x'Ax,$$

where $x = (x_1, x_2)'$ and $A$ is unique and symmetric.

The quadratic form in $\mathbb{R}^n$ is

$$F(x) = \sum_{i,j=1}^{n} a_{ij} x_i x_j,$$

where $x = (x_1, \ldots, x_n)'$ and $A$ is unique and symmetric. This can also be expressed in matrix form:

$$F(x) = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = x'Ax$$

Definiteness

Let $A$ be an $n \times n$ symmetric matrix. Then $A$ is:

1. positive definite if $x'Ax > 0$ for all $x \neq 0$ in $\mathbb{R}^n$;
2. positive semidefinite if $x'Ax \geq 0$ for all $x \neq 0$ in $\mathbb{R}^n$;
3. negative definite if $x'Ax < 0$ for all $x \neq 0$ in $\mathbb{R}^n$;
4. negative semidefinite if $x'Ax \leq 0$ for all $x \neq 0$ in $\mathbb{R}^n$;
5. indefinite if $x'Ax > 0$ for some $x \in \mathbb{R}^n$ and $x'Ax < 0$ for some other $x \in \mathbb{R}^n$.

We can test for the definiteness of the matrix in the following fashion:

1. $A$ is positive definite iff all of its $n$ leading principal minors are strictly positive.
2. $A$ is negative definite iff its $n$ leading principal minors alternate in sign, with $|A_1| < 0$, $|A_2| > 0$, $|A_3| < 0$, etc.
3. If some $k$th-order leading principal minor of $A$ is non-zero but does not fit either of the above sign patterns, then $A$ is indefinite.

If the matrix $A$ would meet the criterion for positive or negative definiteness when we relax the strict inequalities to weak inequalities (i.e., we allow zero to fit into the pattern), then although the matrix is not positive or negative definite, it may be positive or negative semidefinite. In this case, we employ the following tests:

1. $A$ is positive semidefinite iff every principal minor of $A$ is $\geq 0$.
2. $A$ is negative semidefinite iff every principal minor of $A$ of odd order is $\leq 0$ and every principal minor of even order is $\geq 0$.

Notice that for determining semidefiniteness, we can no longer check just the leading principal minors; we must check all principal minors. What a pain!

Homework

Do the following:

1. Let $A = \begin{pmatrix} 1 & 0 \\ 2 & 7 \end{pmatrix}$ and $B = \begin{pmatrix} 3 & 8 \\ 6 & 3 \end{pmatrix}$. Find $A - B$, $A + B$, $AB$, and $BA$.
2. Let $v = \begin{pmatrix} 1 \\ 3 \end{pmatrix}$ and $u = \begin{pmatrix} 2 \\ 5 \end{pmatrix}$. Find $u'v$, $uv'$, and $vu'$.

3. Prove that the product of any matrix with its transpose yields a symmetric matrix.
4. Prove that $A$ can only have an inverse if it is a square matrix.
5. Prove the first four properties of transpose matrices listed above.
6. In econometrics, we deal with a matrix called the projection matrix: $A = I - X(X'X)^{-1}X'$. Must $A$ be square? Must $X'X$ be square? Must $X$ be square?
7. Show that the projection matrix in problem 6 is idempotent.
8. Evaluate the following determinants:
   (a) $\begin{vmatrix} 1 & 2 & 4 \\ 8 & 0 & 4 \\ 7 & 0 & 9 \end{vmatrix}$
   (b) $\begin{vmatrix} 3 & 4 & 6 \\ 6 & 0 & 0 \\ 5 & 0 & 8 \end{vmatrix}$
9. Find the inverse of the matrix $\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}$.
10. Express each quadratic form as a matrix product involving a symmetric coefficient matrix:
    (a) $Q = 8x_1^2 - x_1 x_2 - 3x_2^2$
    (b) $Q = 3x_1^2 - 2x_1 x_2 + 4x_1 x_3 + 5x_2^2 + 4x_3^2 - 2x_2 x_3$
11. Prove that for a $3 \times 3$ matrix, one may find the determinant by a cofactor expansion along any row or column of the matrix.
12. List all the principal minors of the $4 \times 4$ matrix:
    $$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix}$$
13. Prove that:
    (a) Every diagonal matrix whose diagonal elements are all positive is positive definite.
    (b) Every diagonal matrix whose diagonal elements are all negative is negative definite.
    (c) Every diagonal matrix whose diagonal elements are all positive or zero is positive semidefinite.
    (d) Every diagonal matrix whose diagonal elements are all negative or zero is negative semidefinite.
    (e) All other diagonal matrices are indefinite.
14. Determine the definiteness of the following matrices:
    (a) $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$
    (b) $\begin{pmatrix} 3 & 4 \\ 4 & 5 \end{pmatrix}$
    (c) $\begin{pmatrix} 3 & 4 \\ 4 & 6 \end{pmatrix}$

    (d) $\begin{pmatrix} 2 & 4 \\ 4 & 8 \end{pmatrix}$
    (e) $\begin{pmatrix} 1 & 0 & 4 \\ 0 & 2 & 5 \\ 4 & 5 & 6 \end{pmatrix}$
    (f) $\begin{pmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$
    (g) $\begin{pmatrix} 1 & 0 & 0 & 5 \\ 0 & 3 & 0 & 4 \\ 0 & 0 & 5 & 0 \\ 5 & 4 & 0 & 6 \end{pmatrix}$
15. Determine the ranks of the matrices in problem 14. How many linearly independent rows are in each?
16. Calculate the determinants of the matrices in problem 14. Which have inverses?
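As a way to check answers about definiteness (and the minor computations it rests on), the principal minors, leading principal minors, and the leading-principal-minor test can be sketched in Python. All function names here are my own, and only the definite cases are classified, since semidefiniteness requires examining all principal minors.

```python
# Principal minors, leading principal minors, and the definiteness test.
from itertools import combinations

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def principal_minors(A, k):
    """All kth-order principal minors: keep the same k row and column indices."""
    return [det([[A[i][j] for j in idx] for i in idx])
            for idx in combinations(range(len(A)), k)]

def leading_principal_minors(A):
    """Determinants of the top-left k x k submatrices, k = 1, ..., n."""
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

def classify(A):
    m = leading_principal_minors(A)
    if all(x > 0 for x in m):
        return "positive definite"
    if all(x < 0 if k % 2 == 1 else x > 0 for k, x in enumerate(m, start=1)):
        return "negative definite"  # signs alternate: -, +, -, ...
    return "not definite (check all principal minors for semidefiniteness)"

D = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
print(principal_minors(D, 2))         # [2, 3, 6]
print(leading_principal_minors(D))    # [1, 2, 6]
print(classify(D))                    # positive definite
print(classify([[-2, 1], [1, -2]]))   # negative definite
```

Running classify on a diagonal matrix with all-positive entries illustrates homework problem 13(a): every leading principal minor is a product of positive diagonal entries, hence positive.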