Linear Algebra: Linear Systems and Matrices - Quadratic Forms and Definiteness - Eigenvalues and Markov Chains


Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi

1 Systems of Linear Equations

Linear Algebra is concerned with the study of systems of linear equations. A system of m linear equations in n variables has the form

    b_1 = a_11 x_1 + a_12 x_2 + ... + a_1n x_n
    b_2 = a_21 x_1 + a_22 x_2 + ... + a_2n x_n
    ...
    b_m = a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n

Linear equations are important since non-linear, differentiable functions can be approximated by linear ones (as we have seen). For example, the behavior of a differentiable function f : R^2 -> R around a point x can be approximated by the tangent plane at x. The equation for the tangent plane is one linear equation in two variables.

In Economics we often encounter systems of equations. Linear equations are often used since they are tractable and since they can be thought of as approximations for more complicated underlying relationships between variables.

The system above can be written in matrix form:

    [ b_1 ]   [ a_11 a_12 ... a_1n ] [ x_1 ]
    [ b_2 ] = [ a_21 a_22 ... a_2n ] [ x_2 ]
    [ ... ]   [ ...                ] [ ... ]
    [ b_m ]   [ a_m1 a_m2 ... a_mn ] [ x_n ]

where the left-hand vector is m x 1, the coefficient matrix is m x n, and the vector of unknowns is n x 1. In short, we can write this system as b = Ax, where A is an m x n matrix, b is an m x 1 vector and x is an n x 1 vector. A system of linear equations, also referred to as a linear map, can therefore be identified with a matrix, and any matrix can be identified with ("turned into") a linear system. In order to study linear systems, we study matrices and their properties.

2 Matrices

2.1 Basic Matrix Operations and Properties

Consider two n x m matrices:

    A = [ a_11 ... a_1m ]      B = [ b_11 ... b_1m ]
        [ ...           ]          [ ...           ]
        [ a_n1 ... a_nm ]          [ b_n1 ... b_nm ]
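The matrix form b = Ax is easy to experiment with in numpy. A minimal sketch, using a hypothetical 2 x 3 system (m = 2 equations, n = 3 unknowns) whose entries are invented for illustration:

```python
import numpy as np

# Hypothetical system, chosen only for illustration:
#   b1 = 1*x1 + 2*x2 + 0*x3
#   b2 = 3*x1 + 1*x2 + 1*x3
A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 1.0]])   # m x n coefficient matrix
x = np.array([1.0, 1.0, 2.0])     # n-vector (here a known test point)
b = A @ x                         # m-vector of left-hand sides

print(b)  # [3. 6.]
```

Evaluating A @ x at a known point recovers exactly the left-hand sides of the two equations, which is all the matrix notation asserts.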

Then the basic matrix operations are as follows:

    A + B = [ a_11 + b_11 ... a_1m + b_1m ]
            [ ...                         ]
            [ a_n1 + b_n1 ... a_nm + b_nm ]

    λA = [ λa_11 ... λa_1m ]
         [ ...             ]      where λ ∈ R
         [ λa_n1 ... λa_nm ]

Notice that the elements in the matrix are numbered a_ij, where i is the row and j is the column in which the element a_ij is found.

In order to multiply matrices CD, the number of columns in the C matrix must be equal to the number of rows in the D matrix. Say C is an n x m matrix and D is an m x k matrix. Then multiplication is defined as follows: E = CD is the n x k matrix whose (i,j)th element is

    e_ij = sum_{q=1}^m c_iq d_qj.

There are two notable special cases for multiplication of matrices. The first is called the inner product or dot product, which occurs when two vectors of the same length are multiplied together such that the result is a scalar:

    v'z = (1 x n)(n x 1) = sum_{i=1}^n v_i z_i

The second is called the outer product:

    vz' = (n x 1)(1 x n) = [ v_1 z_1 ... v_1 z_n ]
                           [ ...                 ]
                           [ v_n z_1 ... v_n z_n ]

Note that when we multiplied the matrices C and D together, the resulting (i,j)th element of E was just the inner product of the ith row of C and the jth column of D. Also note that even if two matrices X and Y are both n x n, XY ≠ YX except in special cases.

Just in case you are wondering why matrix multiplication is defined the way it is: Consider two linear maps (that is, two systems of linear equations) f(x) = Ax and g(x) = Bx, where A is m x n and B is n x k. We can define a new linear map h that is the composition of f after g: h(x) = f(g(x)). Then the matrix that represents the linear system h turns out to be exactly AB, that is, h(x) = f(g(x)) = ABx. Matrix multiplication is defined to correspond to the composition of linear maps.

Definition: A mapping f of a vector space X into a vector space Y is said to be a linear mapping if for any vectors x_1, ..., x_m ∈ X and any scalars c_1, ..., c_m,

    f(c_1 x_1 + ... + c_m x_m) = c_1 f(x_1) + ... + c_m f(x_m).
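The operations above can be checked directly in numpy. The matrices C, D and vectors v, z below are arbitrary illustrations:

```python
import numpy as np

C = np.array([[1, 2],
              [3, 4]])
D = np.array([[0, 1],
              [1, 0]])

# Entrywise sum and scalar multiple
S = C + D          # [[1, 3], [4, 4]]
L = 2 * C          # [[2, 4], [6, 8]]

# Matrix product: (CD)_ij = sum_q C_iq * D_qj
P = C @ D          # [[2, 1], [4, 3]]

# Multiplication is not commutative in general
print((C @ D == D @ C).all())   # False

# Inner and outer product of two vectors of the same length
v = np.array([1, 2, 3])
z = np.array([4, 5, 6])
inner = v @ z                   # 1*4 + 2*5 + 3*6 = 32, a scalar
outer = np.outer(v, z)          # 3x3 matrix with (i,j) entry v_i * z_j
```

Note that swapping the factors (D @ C) gives a different matrix here, which is the generic case.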

2.2 Some Special Matrices

2.2.1 Zero Matrices

A zero matrix is a matrix in which each element is 0:

    0 = [ 0 ... 0 ]
        [ ...     ]    (n x k)
        [ 0 ... 0 ]

The following properties hold for zero matrices:

1. A + 0 = A
2. If AB = 0, it is not necessarily the case that A = 0 or B = 0.

2.2.2 Identity Matrices

The identity matrix is a matrix with zeroes everywhere except for ones along the diagonal. Note that the number of columns must equal the number of rows:

    I = [ 1 0 ... 0 ]
        [ 0 1 ... 0 ]
        [ ...       ]    (n x n)
        [ 0 0 ... 1 ]

The reason it is called the identity matrix is that AI = IA = A.

2.2.3 Square, Symmetric, and Transpose Matrices

A square matrix is a matrix whose number of rows is the same as its number of columns. For example, the identity matrix is always square. If a square matrix has the property that a_ij = a_ji for all its elements, then we call it a symmetric matrix.

The transpose of a matrix A, denoted A', is the matrix such that for each element of A, (A')_ij = a_ji. For example, the transpose of the matrix

    [ 1 2 3 ]
    [ 4 5 6 ]
    [ 7 8 9 ]

is

    [ 1 4 7 ]
    [ 2 5 8 ]
    [ 3 6 9 ]

Note that a matrix A is symmetric if A = A'. The following properties of the transpose hold:

1. (A')' = A
2. (A + B)' = A' + B'
3. (αA)' = αA'

4. (AB)' = B'A'
5. If the matrix A is n x k, then A' is k x n.

2.2.4 Diagonal and Triangular Matrices

A square matrix A is diagonal if it is all zeroes except along the diagonal:

    [ a_11 0    ... 0    ]
    [ 0    a_22 ... 0    ]
    [ ...                ]
    [ 0    0    ... a_nn ]

Note that all diagonal matrices are also symmetric. A square matrix A is upper triangular if all of its entries below the diagonal are zero:

    [ a_11 a_12 ... a_1n ]
    [ 0    a_22 ... a_2n ]
    [ ...                ]
    [ 0    0    ... a_nn ]

and lower triangular if all its entries above the diagonal are zero:

    [ a_11 0    ... 0    ]
    [ a_21 a_22 ... 0    ]
    [ ...                ]
    [ a_n1 a_n2 ... a_nn ]

2.2.5 Inverse Matrices

If there exists a matrix B such that BA = AB = I, then we call B the inverse of A and denote it A^-1. Note that A can only have an inverse if it is a square matrix. However, not every square matrix has an inverse. The following properties of inverses hold:

1. (A^-1)^-1 = A
2. (αA)^-1 = (1/α) A^-1
3. (AB)^-1 = B^-1 A^-1 if B^-1 and A^-1 exist
4. (A')^-1 = (A^-1)'

2.2.6 Orthogonal and Idempotent Matrices

A matrix A is orthogonal if A'A = I (which also implies AA' = I). In other words, a matrix is orthogonal if its transpose is its inverse. A matrix is idempotent if it is both symmetric and satisfies AA = A. Orthogonal and idempotent matrices are used especially often in econometrics.
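A quick numerical sanity check of the transpose, inverse and orthogonality properties above; the particular matrices are arbitrary invertible examples:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
B = np.array([[1.0, 1.0],
              [1.0, 2.0]])

# (AB)' = B'A'
assert np.allclose((A @ B).T, B.T @ A.T)

# (AB)^-1 = B^-1 A^-1  (both A and B are invertible here)
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

# An orthogonal matrix Q satisfies Q'Q = I; a permutation matrix is one example
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.allclose(Q.T @ Q, np.eye(2))

print("transpose and inverse identities hold")
```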

2.3 The Determinant

For square matrices we can define a number called the determinant of the matrix. The determinant tells us important characteristics of the matrix that we will dwell on later; here we simply present how it is computed. The determinant can be defined inductively:

1. The determinant of a 1 x 1 matrix (a) is a, and is denoted det(a).

2. The determinant of a 2 x 2 matrix [ a b; c d ] is ad - bc. Notice that this is the same as a det(d) - b det(c). The first term is the (1,1)th entry of the matrix times the determinant of the submatrix obtained by deleting the row and column which contain that entry. The second term is the (1,2)th entry times the determinant of the submatrix obtained by deleting the row and column which contain that entry. The terms alternate in sign, with the first term being added and the second term being subtracted.

3. The determinant of a 3 x 3 matrix [ a b c; d e f; g h i ] is aei + bfg + cdh - ceg - bdi - afh. Notice that this can be written as

    a det[ e f; h i ] - b det[ d f; g i ] + c det[ d e; g h ].

Can you see the pattern? In order to obtain the determinant, we multiply each element in the top row by the determinant of the submatrix left when we delete the row and column in which that element resides. The signs of the terms alternate, starting with positive.

In order to write the definition of the determinant of an nth-order matrix, it is useful to define the (i,j)th minor of A and the (i,j)th cofactor of A: Let A be an n x n matrix. Let A_ij be the (n-1) x (n-1) submatrix obtained by deleting row i and column j from A. Then the scalar M_ij = det(A_ij) is called the (i,j)th minor of A, and the scalar C_ij = (-1)^(i+j) M_ij is called the (i,j)th cofactor of A. The cofactor is merely the signed minor.

Armed with these two definitions, we notice that the determinant of the 2 x 2 matrix is

    det(A) = a M_11 - b M_12 = a C_11 + b C_12,

and the determinant of the 3 x 3 matrix is

    det(A) = a M_11 - b M_12 + c M_13 = a C_11 + b C_12 + c C_13.

Therefore, we can define the determinant of an n x n square matrix as follows:

    det(A) = a_11 C_11 + a_12 C_12 + ... + a_1n C_1n.

Notice that this definition of the determinant uses elements and cofactors of the top row only. This is called a cofactor expansion along the first row. However, a cofactor expansion along any row or column is equal to the determinant. The proof of this assertion is left as a homework problem for the 3 x 3 case.
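The cofactor expansion along the first row translates directly into code. This is a teaching sketch, not an efficient algorithm (it runs in O(n!) time), so use it only for small matrices:

```python
def minor(A, i, j):
    """Submatrix of A (a list of lists) with row i and column j deleted."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # sum_j (-1)^j * a_0j * det(minor_0j): signs alternate, starting positive
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

print(det([[1, 2], [3, 4]]))                    # 1*4 - 2*3 = -2
print(det([[3, 4, 5], [0, 1, 2], [0, 0, 6]]))   # triangular: 3*1*6 = 18
```

The second call illustrates a useful fact: for a triangular matrix, only the chain of diagonal entries survives the expansion, so the determinant is their product.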

Example: Find the determinant of the upper triangular matrix

    A = [ 3 4 5 ]
        [ 0 1 2 ]
        [ 0 0 6 ]

The determinant is

    a C_11 + b C_12 + c C_13 = a det[ e f; h i ] - b det[ d f; g i ] + c det[ d e; g h ]
                             = 3 det[ 1 2; 0 6 ] - 4 det[ 0 2; 0 6 ] + 5 det[ 0 1; 0 0 ]
                             = 3·6 - 4·0 + 5·0 = 18.

Now let's expand along the second column instead of the first row:

    -b det[ d f; g i ] + e det[ a c; g i ] - h det[ a c; d f ]
        = -4 det[ 0 2; 0 6 ] + 1 det[ 3 5; 0 6 ] - 0 det[ 3 5; 0 2 ]
        = 0 + 3·6 - 0 = 18.

Important properties of the determinant:

1. det(I) = 1, where I is the identity matrix
2. det(AB) = det(A) det(B)
3. If A is invertible, det(A^-1) = 1/det(A)
4. det(A') = det(A)
5. If A is orthogonal, then det(A) = ±1

Definition: The following definition will be important subsequently: An n x n matrix A is called singular if det(A) = 0. It is called non-singular if det(A) ≠ 0.

3 Solving Systems of Linear Equations

Recall the system of equations

    b_1 = a_11 x_1 + a_12 x_2 + ... + a_1n x_n
    ...
    b_m = a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n

which can be written in matrix form as b = Ax, where A is the m x n coefficient matrix. The questions we want to answer are:

1. Given a left-hand-side vector b, how can we find a solution x to b = Ax?
2. Given a coefficient matrix A, what can we say about the number of solutions to b = Ax, for any b? How can we tell whether a system has one unique solution?

Let's start with the first question.

3.1 Elementary Row Operations

There are three types of elementary row operations we can perform on the matrix A and the vector b without changing the solution set to b = Ax:

1. Interchanging two rows.
2. Multiplying each element of a row by the same non-zero scalar.
3. Changing a row by adding to it a multiple of another row.

You should convince yourself that these three operations do not change the solution set of the system.

3.2 Using Elementary Row Operations to Solve a System of Equations

A row of a matrix is said to have k leading zeros if the (k+1)th element of the row is non-zero while the first k elements are zero. A matrix is in row echelon form if each row has strictly more leading zeros than the preceding row. For example, the matrix

    A = [ 1 4 ]
        [ 0 7 ]

is in row echelon form, while the matrices

    A = [ 1 2 3 ]      B = [ 0 8 ]
        [ 4 5 6 ]          [ 3 6 ]
        [ 7 8 0 ]

are not. However, B would be in row echelon form if we performed the elementary row operation of switching its two rows.

In order to get a matrix into row echelon form, we can perform elementary row operations on the rows. For example, consider the matrix

    A = [ 1 3 1 ]
        [ 4 8 6 ]
        [ 1 2 1 ]

By taking the third row and subtracting it from the first row, we obtain the matrix

    A_1 = [ 0 1 0 ]
          [ 4 8 6 ]
          [ 1 2 1 ]

We can also subtract four times the third row from the second row:

    A_2 = [ 0 1 0 ]
          [ 0 0 2 ]
          [ 1 2 1 ]

Then rearrange the rows to get the matrix in row echelon form:

    A_3 = [ 1 2 1 ]
          [ 0 1 0 ]
          [ 0 0 2 ]

We can solve a system of equations by writing the matrix

    [ a_11 a_12 ... a_1n | b_1 ]
    [ ...                      ]
    [ a_m1 a_m2 ... a_mn | b_m ]

called the augmented matrix of the system, and using elementary row operations.

Example: Let A be as before, and say that the vector b = (2, 2, 1). Then the augmented matrix is

    [ 1 3 1 | 2 ]
    [ 4 8 6 | 2 ]
    [ 1 2 1 | 1 ]

Performing the same row operations as before, we have

    [ 1 3 1 | 2 ]     [ 0 1 0 | 1 ]     [ 0 1 0 |  1 ]     [ 1 2 1 |  1 ]
    [ 4 8 6 | 2 ]  →  [ 4 8 6 | 2 ]  →  [ 0 0 2 | -2 ]  →  [ 0 1 0 |  1 ]
    [ 1 2 1 | 1 ]     [ 1 2 1 | 1 ]     [ 1 2 1 |  1 ]     [ 0 0 2 | -2 ]

We continue the row operations until the left hand side of the augmented matrix looks like the identity matrix:

    [ 1 0 0 |  0 ]
    [ 0 1 0 |  1 ]
    [ 0 0 1 | -1 ]

Notice that this implies

    [ x_1 ]   [  0 ]
    [ x_2 ] = [  1 ]
    [ x_3 ]   [ -1 ]

so we have found a solution x = (0, 1, -1) using elementary row operations. In summary: if we form the augmented matrix of the system and reduce the left hand side to its reduced row echelon form through elementary row operations (so that each non-zero row has a leading one, and each column containing a leading one has zeros everywhere else), then the remaining vector on the right hand side will be the solution to the system.

3.3 Using Cramer's Rule to Solve a System of Equations

Cramer's Rule is a theorem that yields the solutions to systems of the form b = Ax where A is a square and non-singular matrix, i.e. det(A) ≠ 0: Let A be a non-singular matrix. Then the unique solution x = (x_1, ..., x_n) of the n x n system b = Ax is given by

    x_i = det(B_i) / det(A)   for i = 1, ..., n,

where B_i is the matrix A with the right hand side b replacing the ith column of A.

Example: Consider the linear IS-LM model

    sY + ar = I_0 + G
    mY - hr = M_s - M_0

where Y is the net national product, r is the interest rate, s is the marginal propensity to save, a is the marginal efficiency of capital, I = I_0 - ar is investment, m is money balances needed per dollar of transactions, G is government spending, and M_s is the money supply. All the parameters are positive. We can rewrite the system as

    [ s  a ] [ Y ]   [ I_0 + G   ]
    [ m -h ] [ r ] = [ M_s - M_0 ]

By Cramer's rule, we have

    Y = det[ I_0+G, a; M_s-M_0, -h ] / det[ s, a; m, -h ] = ((I_0+G)h + a(M_s-M_0)) / (sh + am)

    r = det[ s, I_0+G; m, M_s-M_0 ] / det[ s, a; m, -h ] = ((I_0+G)m - s(M_s-M_0)) / (sh + am)

Depending on the size of A, solving a system using Cramer's Rule may be faster than solving it using elementary row operations. But be aware that Cramer's Rule only works for non-singular square matrices.
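Both solution methods above can be sketched in a few lines of numpy. The 3 x 3 system below is an arbitrary non-singular example:

```python
import numpy as np

# Gauss-Jordan elimination on the augmented matrix [A | b]: a sketch of the
# procedure above (assumes A is square and non-singular).
def solve_by_elimination(A, b):
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for i in range(n):
        if M[i, i] == 0:                        # interchange rows if needed
            k = next(k for k in range(i + 1, n) if M[k, i] != 0)
            M[[i, k]] = M[[k, i]]
        M[i] /= M[i, i]                         # scale so the pivot is 1
        for k in range(n):
            if k != i:
                M[k] -= M[k, i] * M[i]          # clear the rest of the column
    return M[:, -1]                             # left side is now I, right is x

# Cramer's rule: x_i = det(B_i)/det(A), B_i = A with column i replaced by b.
def solve_by_cramer(A, b):
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        B_i = A.astype(float).copy()
        B_i[:, i] = b
        x[i] = np.linalg.det(B_i) / d
    return x

A = np.array([[1.0, 3.0, 1.0],
              [4.0, 8.0, 6.0],
              [1.0, 2.0, 1.0]])
b = np.array([2.0, 2.0, 1.0])
print(solve_by_elimination(A, b))   # approximately [0, 1, -1]
print(solve_by_cramer(A, b))        # the two methods agree
```

Both functions return the same solution, as they must whenever det(A) ≠ 0.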

4 Characterization of the Solutions to a Linear System

The second question about linear systems concerns the existence of solutions: How can we tell whether a system has zero, one or more solutions? In order to tackle this problem, we start by defining the concept of "linear independence".

4.1 Linear Independence

The vectors v_1, ..., v_m are linearly dependent if there exist scalars q_1, ..., q_m, not all zero, such that

    sum_{i=1}^m q_i v_i = 0.

This is equivalent to the statement that one of the vectors can be written as a linear combination of the other ones.

The vectors v_1, ..., v_m are linearly independent if the only scalars q_1, ..., q_m such that sum_{i=1}^m q_i v_i = 0 are q_1 = ... = q_m = 0. This is equivalent to the statement that none of the vectors can be written as a linear combination of the other ones.

Example 1: The vectors (1, 2), (2, 1) and (3, 3) are linearly dependent, since

    (1, 2) + (2, 1) - (3, 3) = (0, 0).

Example 2: The vectors (1, 0, 0), (0, 1, 0) and (0, 0, 3) are linearly independent, since the only scalars q_1, q_2, q_3 such that

    q_1 (1, 0, 0) + q_2 (0, 1, 0) + q_3 (0, 0, 3) = (0, 0, 0)

are q_1 = q_2 = q_3 = 0.

We can use this definition of linear independence and dependence for rows as well as columns. The rank of a matrix, rk(A), is the number of linearly independent rows or columns in the matrix. (Note that the number of linearly independent rows is the same as the number of linearly independent columns.) Therefore

    rk(A) ≤ min{number of rows of A, number of columns of A}.

A matrix is said to have full rank if rk(A) = min{number of rows of A, number of columns of A}.

When a matrix is in row echelon form, it is easy to check whether all the rows are linearly independent: for linear independence, all the rows in the row echelon form must be non-zero. If not, then the matrix has linearly dependent rows. For example, in the matrix

    A = [ 1 3 1 ]
        [ 4 8 6 ]
        [ 1 2 1 ]

in the example above, all rows are linearly independent, because its row echelon form

    [ 1 2 1 ]
    [ 0 1 0 ]
    [ 0 0 2 ]

contains no zero rows.

4.2 Numbers of Solutions

Let's build some intuition by looking at a system of two variables and two equations. For a given b = (b_1, b_2), we can view the two equations

    b_1 = a_11 x_1 + a_12 x_2
    b_2 = a_21 x_1 + a_22 x_2

as two lines in R^2. A solution x = (x_1, x_2) is a point that lies on both of these lines at the same time. Now, two lines can intersect once, be parallel to each other, or be identical to each other. In the first case there will be one solution (one point of intersection), in the second case there will be no solution (no point of intersection), and in the third case there will be infinitely many solutions (infinitely many points of intersection). Therefore, a system of two equations and two unknowns can have zero, one or infinitely many solutions, depending on the vector b and the matrix A. This result generalizes to a linear system of any size: it can have either zero, one or infinitely many solutions.

Fortunately, we can say a bit more about the possible numbers of solutions when we look at the dimensions of a system and the rank of its coefficient matrix A. Remember that m is the number of equations in the system and the number of rows of A, and that n is the number of variables in the system and the number of columns of A.

1. If m < n, then for any given b, b = Ax has zero or infinitely many solutions. If rk(A) = m, then b = Ax has infinitely many solutions for every b. A system with more unknowns than equations can never have one unique solution. (Example: Two planes in R^3 cannot intersect in only one point. They are either parallel (no solution) or intersect in a line (infinitely many solutions).) If we know that A has maximal rank, namely m, then we know that the system has infinitely many solutions. (Continuing the example: we know then that the planes are not parallel and therefore have to intersect in at least a line. We also know then that the planes are not all identical, but that does not help us in narrowing down the number of solutions in this case.)

2. If m > n, then for any given b, b = Ax has zero, one or infinitely many solutions. If rk(A) = n, then b = Ax has zero or one solution for every b. A system with more equations than unknowns may have zero, one or infinitely many solutions. (Example: Three lines in R^2 can fail to all intersect in the same point (no solution), two of them can be identical and intersect with the third one in one point (one solution), or all three of them can be identical (infinitely many solutions).) If we know that A has maximal rank, namely n, then we know that the system cannot have infinitely many solutions. (Continuing the example: we know then that the lines are not all identical and therefore cannot intersect in infinitely many points. We also know then that the lines are not all parallel, but that does not narrow down the number of solutions in this case.)

3. If m = n, then for any given b, b = Ax has zero, one or infinitely many solutions.

If rk(A) = m = n, then b = Ax has exactly one solution for every b. A system with as many equations as unknowns may have zero, one or infinitely many solutions. (Example: Two lines in R^2 can be parallel (zero solutions), intersect in one point (one solution), or be identical (infinitely many solutions). The same holds for three planes in R^3.) If we know that A has maximal rank, namely n = m, then we know that the system has exactly one solution. (Continuing the example: we know then that the lines are not identical and therefore cannot intersect in infinitely many points. We also know then that the lines are not parallel and therefore have to intersect in at least one point. Two lines that are neither parallel nor identical intersect in exactly one point.)

Whether a square matrix has full rank can be checked by looking at the determinant: An n x n square matrix A has full rank n if and only if det(A) ≠ 0. Therefore the last statement above translates into: The n x n system b = Ax has exactly one solution for each b if and only if det(A) ≠ 0. This is an extremely important and useful property of the determinant. So saying that a square matrix A is non-singular is the same as saying that rk(A) = n, which is the same as saying that b = Ax has exactly one solution for each b. (Note also how this explains why Cramer's Rule fails when det(A) = 0.)

4.3 Inverse Matrices

Suppose we are given an n x n matrix A. The question whether its inverse exists is connected to whether a unique solution to b = Ax exists for each choice of b. To see why, suppose the inverse A^-1 exists. Then we can multiply both sides of b = Ax by A^-1:

    A^-1 b = A^-1 A x = I x = x.

Therefore x = A^-1 b has to be the unique solution to b = Ax. Conversely, it can also be shown that if there is a unique solution to b = Ax, then the inverse A^-1 has to exist: take b_1 = (1, 0, ..., 0), b_2 = (0, 1, 0, ..., 0), etc., and let the solution to b_i = Ax be x_i. Then A^-1 = (x_1, x_2, ..., x_n). Therefore, a square matrix A is invertible if and only if it is non-singular, that is, if and only if det(A) ≠ 0.

The following statements are all equivalent for a square matrix A:

1. A is non-singular.
2. All the columns and rows in A are linearly independent.
3. A has full rank.
4. Exactly one solution x exists for each vector b.
5. A is invertible.
6. det(A) ≠ 0.
7. The row echelon form of A is upper triangular with no zero rows.
8. The reduced row echelon form of A is the identity matrix.

Now that we know how to check whether the inverse matrix exists (i.e. by looking at the determinant), how do we compute it? Here are two strategies:

4.3.1 Calculating the Inverse Matrix by Elementary Row Operations

To calculate the inverse matrix, form the augmented matrix

    [ a_11 a_12 ... a_1n | 1 0 ... 0 ]
    [ a_21 a_22 ... a_2n | 0 1 ... 0 ]
    [ ...                            ]
    [ a_n1 a_n2 ... a_nn | 0 0 ... 1 ]

where the left hand side is the matrix A and the right hand side is the identity matrix. Reduce the left hand side to its reduced row echelon form, and what remains on the right hand side will be the inverse matrix of A. In other words, by elementary row operations, you can transform the matrix (A | I) into the matrix (I | A^-1).

4.3.2 Calculating the Inverse Matrix by the Adjoint Matrix

The adjoint matrix of a square matrix A is the transposed matrix of cofactors of A:

    adj(A) = [ C_11 C_21 ... C_n1 ]
             [ C_12 C_22 ... C_n2 ]
             [ ...                ]
             [ C_1n C_2n ... C_nn ]

Notice that the adjoint of a 2 x 2 matrix [ a_11 a_12; a_21 a_22 ] is

    [  a_22 -a_12 ]
    [ -a_21  a_11 ]

The inverse of the matrix A can be found by

    A^-1 = adj(A) / det(A).

Therefore, the inverse of a 2 x 2 matrix is

    A^-1 = 1/(a_11 a_22 - a_12 a_21) [  a_22 -a_12 ]
                                     [ -a_21  a_11 ]
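The adjoint formula translates into a short function. A sketch that is inefficient but faithful to the definition (cofactors first, then transpose, then divide by the determinant):

```python
import numpy as np

def inverse_by_adjoint(A):
    """A^-1 = adj(A)/det(A), where adj(A) is the transposed cofactor matrix."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)              # cofactor matrix
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)   # C_ij = signed minor
    return C.T / np.linalg.det(A)                  # transpose of cofactors / det

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
inv = inverse_by_adjoint(A)
print(np.round(inv, 6))        # [[-2, 1], [1.5, -0.5]] up to rounding
assert np.allclose(inv @ A, np.eye(2))
```

For the 2 x 2 case this reproduces the closed-form inverse given above; for large matrices, row reduction (or a library call) is far cheaper.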

5 Homework

1. Let A = [ 1 2; 0 1 ] and B = [ 3 8; 1 2 ]. Find A - B, A + B, AB, and BA.

2. Let v = (1, 3, 5) and u = (7, 6, 3). Find u'v, uv', and vu'.

3. Prove that the multiplication of any matrix with its transpose yields a symmetric matrix.

4. Prove that A can only have an inverse if it is a square matrix.

5. Prove the first four properties of transpose matrices above.

6. In econometrics, we deal with a matrix called the projection matrix: A = I - X(X'X)^-1 X'. Must A be square? Must X'X be square? Must X be square?

7. Show that the projection matrix in 6 is idempotent.

8. Calculate the determinants of the following matrices:

   (a) [ 1 2; 2 1 ]
   (b) [ 3 4; 4 5 ]
   (c) [ 3 4; 4 6 ]
   (d) [ 2 4; 4 8 ]
   (e) [ 1 4 5; 0 2 5; 0 0 6 ]
   (f) [ 1 0 0; 0 3 0; 0 0 2 ]
   (g) [ 5 3 4; 0 5 6; 0 0 2 ]

9. Evaluate the following determinants:

   (a) [ 4 1 0; 8 4 2; 1 7 9 ]
   (b) [ 3 4 6; 0 6 5; 1 8 2 ]

10. Find the inverse of the matrix [ 2 1; 1 1 ].

11. Prove that for a 3 x 3 matrix, one may find the determinant by a cofactor expansion along any row or column in the matrix.

12. Determine the ranks of the matrices (a)-(g) in problem 8. How many linearly independent rows are in each? Which have inverses?

6 Quadratic Forms

Quadratic forms are the next simplest functions after linear ones. Like linear functions they have a matrix representation, so that studying quadratic forms reduces to studying symmetric matrices. This is what this section is about.

Consider the function F : R^2 -> R, where

    F(x_1, x_2) = a_11 x_1^2 + 2 a_12 x_1 x_2 + a_22 x_2^2.

Notice that this can be expressed in matrix form as

    F(x) = [ x_1 x_2 ] [ a_11 a_12 ] [ x_1 ]  = x'Ax,
                       [ a_12 a_22 ] [ x_2 ]

where x = (x_1, x_2), and A is unique and symmetric. The quadratic form on R^n is

    F(x) = sum_{i,j=1}^n a_ij x_i x_j,

where x = (x_1, ..., x_n) and A is unique and symmetric. This can also be expressed in matrix form:

    F(x) = [ x_1 ... x_n ] [ a_11 ... a_1n ] [ x_1 ]
                           [ ...           ] [ ... ]  = x'Ax.
                           [ a_n1 ... a_nn ] [ x_n ]

A quadratic form has a critical point at x = 0, where it takes on the value 0. Therefore we can classify quadratic forms by whether x = 0 is a maximum, minimum or neither. This is what definiteness is about.

7 Definiteness

Let A be an n x n symmetric matrix. Then A is:

1. positive definite if x'Ax > 0 for all x ≠ 0 in R^n. That is, x = 0 is the unique global minimum of the quadratic form given by A.
2. positive semidefinite if x'Ax ≥ 0 for all x in R^n. That is, x = 0 is a global minimum, but not a unique global one, of the quadratic form given by A.
3. negative definite if x'Ax < 0 for all x ≠ 0 in R^n. That is, x = 0 is the unique global maximum of the quadratic form given by A.
4. negative semidefinite if x'Ax ≤ 0 for all x in R^n. That is, x = 0 is a global maximum, but not a unique global one, of the quadratic form given by A.
5. indefinite if x'Ax > 0 for some x in R^n and x'Ax < 0 for some other x in R^n. That is, x = 0 is neither a maximum nor a minimum of the quadratic form given by A.

The definiteness of a matrix plays an important role. For example, for a function f(x) of one variable, the sign of the second derivative f''(x*) at a critical point x* gives a sufficient condition for determining whether x* is a maximum, minimum or neither. This test generalizes to more than one variable using the definiteness of the Hessian matrix H, the matrix of the second-order derivatives. (More on this when we get to optimization.) Similarly, a function f(x) of one variable is concave if its second derivative is ≤ 0. A function of more than one variable is concave if its Hessian matrix is negative semidefinite.

There is a convenient way to test for the definiteness of a matrix. Before we can formulate this test, we first need to define the concept of principal minors of a matrix.
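The classification above can be probed numerically by evaluating x'Ax directly; the symmetric matrix below is an arbitrary illustration:

```python
import numpy as np

# The quadratic form F(x) = x'Ax for a symmetric A.
A = np.array([[3.0, 4.0],
              [4.0, 6.0]])
F = lambda x: x @ A @ x

x = np.array([1.0, 2.0])
# F(x) = a11*x1^2 + 2*a12*x1*x2 + a22*x2^2 = 3*1 + 2*4*2 + 6*4 = 43
print(F(x))                     # 43.0
print(F(np.zeros(2)))           # 0.0 -- the critical point at the origin
```

For this particular A, trying a few random x will only ever produce positive values away from the origin; the test in the next subsection confirms that it is positive definite.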

7.1 Principal Minors and Leading Principal Minors

7.1.1 Principal Minors

Let A be an n x n matrix. A k x k submatrix of A obtained by deleting any n-k columns and the same n-k rows from A is called a kth-order principal submatrix of A. The determinant of a k x k principal submatrix is called a kth-order principal minor of A.

Example: List all the principal minors of the 3 x 3 matrix

    [ a_11 a_12 a_13 ]
    [ a_21 a_22 a_23 ]
    [ a_31 a_32 a_33 ]

Answer: There is one third-order principal minor of A, namely det(A). There are three second-order principal minors:

1. det[ a_11 a_12; a_21 a_22 ], formed by deleting the third row and column of A;
2. det[ a_11 a_13; a_31 a_33 ], formed by deleting the second row and column of A;
3. det[ a_22 a_23; a_32 a_33 ], formed by deleting the first row and column of A.

There are also three first-order principal minors: a_11, by deleting the last two rows and columns; a_22, by deleting the first and last rows and columns; and a_33, by deleting the first two rows and columns.

7.1.2 Leading Principal Minors

The kth-order leading principal minor is the determinant of the leading principal submatrix obtained by deleting the last n-k rows and columns of an n x n matrix A.

Example: List the first-, second-, and third-order leading principal minors of the 3 x 3 matrix above.

Answer: There are three leading principal minors, one of order 1, one of order 2, and one of order 3:

1. a_11, formed by deleting the last two rows and columns of A;
2. det[ a_11 a_12; a_21 a_22 ], formed by deleting the last row and column of A;
3. det(A), formed by deleting no rows or columns of A.

Why in the world do we care about principal and leading principal minors? We need to calculate the signs of the leading principal minors in order to determine the definiteness of a matrix. We need definiteness to check second-order conditions for maxima and minima. We also need definiteness of the Hessian matrix to check whether or not we have a concave function.
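Principal minors can be enumerated mechanically: choose an index set, keep those rows and the same columns, and take the determinant. A sketch:

```python
import numpy as np
from itertools import combinations

def principal_minors(A):
    """All kth-order principal minors of a square matrix, for k = 1..n."""
    n = A.shape[0]
    out = {}
    for k in range(1, n + 1):
        # same index set for rows and columns defines a principal submatrix
        out[k] = [np.linalg.det(A[np.ix_(idx, idx)])
                  for idx in combinations(range(n), k)]
    return out

A = np.diag([1.0, 2.0, 3.0])
pm = principal_minors(A)
# first-order: the diagonal entries 1, 2, 3
# second-order: 1*2, 1*3, 2*3
# third-order: det(A) = 6
print(pm)
```

There are C(n, k) principal minors of order k but only one leading principal minor of order k (the index set {1, ..., k}), which is why the semidefiniteness test below is so much more work than the definiteness test.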

7.2 Testing for Definiteness

We can test for the definiteness of a matrix in the following fashion, where |A_k| denotes the kth-order leading principal minor:

1. A is positive definite iff all of its n leading principal minors are strictly positive.
2. A is negative definite iff its n leading principal minors alternate in sign as follows: |A_1| < 0, |A_2| > 0, |A_3| < 0, etc.
3. If some kth-order leading principal minor of A is nonzero but does not fit either of the above sign patterns, then A is indefinite.

If the matrix A would meet the criterion for positive or negative definiteness when we relax the strict inequalities to weak inequalities (i.e. we allow zero to fit into the pattern), then although the matrix is not positive or negative definite, it may be positive or negative semidefinite. In this case, we employ the following tests:

1. A is positive semidefinite iff every principal minor of A is ≥ 0.
2. A is negative semidefinite iff every principal minor of A of odd order is ≤ 0 and every principal minor of even order is ≥ 0.

Notice that for determining semidefiniteness, we can no longer check just the leading principal minors; we must check all principal minors. What a pain!
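The leading-principal-minor test for strict definiteness can be sketched as follows (as noted above, the semidefinite cases need all principal minors, which this sketch deliberately leaves to a further check):

```python
import numpy as np

def classify(A):
    """Classify a symmetric matrix via its leading principal minors."""
    n = A.shape[0]
    lpm = [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
    if all(m > 0 for m in lpm):
        return "positive definite"
    if all((m < 0) if k % 2 == 1 else (m > 0)
           for k, m in enumerate(lpm, start=1)):
        return "negative definite"
    if all(abs(m) > 1e-12 for m in lpm):       # nonzero but wrong sign pattern
        return "indefinite"
    return "possibly semidefinite: check all principal minors"

print(classify(np.array([[3.0, 4.0], [4.0, 6.0]])))    # positive definite: 3 > 0, det = 2 > 0
print(classify(np.array([[-2.0, 0.0], [0.0, -1.0]])))  # negative definite: -2 < 0, det = 2 > 0
print(classify(np.array([[3.0, 4.0], [4.0, 5.0]])))    # indefinite: det = -1 < 0
```

The tolerance 1e-12 is an arbitrary numerical cutoff for "nonzero"; with exact arithmetic the test is a pure sign check.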

8 Homework

1. Express the quadratic form as a matrix product xᵀAx involving a symmetric coefficient matrix A:
   (a) Q = 8x1² − x1x2 − 3x2²
   (b) Q = 3x1² − x1x2 + 4x1x3 + 5x2² + 4x3² − x2x3

2. List all the principal minors of the 4 × 4 matrix

   A =
     [ a11  a12  a13  a14 ]
     [ a21  a22  a23  a24 ]
     [ a31  a32  a33  a34 ]
     [ a41  a42  a43  a44 ]

3. Prove that:
   (a) Every diagonal matrix whose diagonal elements are all positive is positive definite.
   (b) Every diagonal matrix whose diagonal elements are all negative is negative definite.
   (c) Every diagonal matrix whose diagonal elements are all positive or zero is positive semidefinite.
   (d) Every diagonal matrix whose diagonal elements are all negative or zero is negative semidefinite.
   (e) All other diagonal matrices are indefinite.

4. Determine the definiteness of the following matrices:
   (a) [ 3 4 ; 4 5 ]
   (b) [ 3 4 ; 4 6 ]
   (c) [ 2 4 ; 4 8 ]
   (d) [ 4 5 ; 5 6 ]
   (e) [ 1 0 1 ; 0 3 0 ; 1 0 1 ]
   (f) [ −3 1 0 ; 1 −2 0 ; 0 0 −1 ]
   (g) [ 5 0 3 ; 0 4 0 ; 3 0 6 ]

9 Eigenvalues, Eigenvectors and Diagonalizability

9.1 Eigenvalues

Let A be a square matrix. An eigenvalue is a number λ such that (A − λI)x = 0 for some x ≠ 0. Notice that this implies that an eigenvalue is a number λ such that Ax = λx. Note that x = 0 is always a solution to (A − λI)x = 0; the definition of an eigenvalue requires that there be another solution, different from 0. The system (A − λI)x = 0 therefore has more than one solution, and A − λI is a singular matrix. In other words, an eigenvalue is a number λ which, when subtracted from the diagonal elements of the matrix A, creates a singular matrix. Also, a matrix A is non-singular if and only if 0 is not an eigenvalue of A.

Example: Find the eigenvalues of the matrix

A =
  [ 0    1   0 ]
  [ 0    0   1 ]
  [ 4  −17   8 ]

Assume that λ is an eigenvalue of A. Remember that if the matrix resulting when we subtract λI from A is singular, then its determinant must be zero. Then λ solves

det(A − λI) = det [ −λ 1 0 ; 0 −λ 1 ; 4 −17 8−λ ] = −λ³ + 8λ² − 17λ + 4 = −(λ − 4)(λ² − 4λ + 1) = 0

So one eigenvalue is λ = 4. To solve the quadratic factor, we use the quadratic formula:

λ = (4 ± √((−4)² − 4)) / 2 = 2 ± √3

Therefore, the eigenvalues are λ = {2 − √3, 2 + √3, 4}.

9.2 Eigenvectors

Definition: Given an eigenvalue λ, an eigenvector associated with λ is a non-zero vector x such that (A − λI)x = 0. Notice that eigenvectors are only defined up to a scalar: if (A − λI)x = 0, then also (A − λI)(cx) = 0 for any scalar c, so all non-zero multiples of x are also eigenvectors associated with λ.

Example: Find the eigenvectors of the matrix

A =
  [ 0  0  −2 ]
  [ 1  2   1 ]
  [ 1  0   3 ]

We must first find the eigenvalues of the matrix. Expanding det(A − λI) along the first row,

det(A − λI) = −λ[(2 − λ)(3 − λ) − 0] + 0 − 2[0 − (2 − λ)]
            = −λ(6 − 5λ + λ²) + 4 − 2λ
            = −λ³ + 5λ² − 8λ + 4

Therefore, we must solve

λ³ − 5λ² + 8λ − 4 = (λ − 1)(λ − 2)(λ − 2) = 0

We can see that λ = 1 and λ = 2, where λ = 2 is a repeated root. Since each eigenvector corresponds to an eigenvalue, let us first consider the eigenvalue λ = 1. The matrix A − λI is then

A − I =
  [ −1  0  −2 ]
  [  1  1   1 ]
  [  1  0   2 ]

which gives the system of equations

−x1 − 2x3 = 0
 x1 + x2 + x3 = 0
 x1 + 2x3 = 0

The first and third equations both give x1 = −2x3. Substituting into the second equation, x2 = −x1 − x3 = 2x3 − x3 = x3. Therefore

(x1, x2, x3) = (−2x3, x3, x3) = x3 (−2, 1, 1)

We can let x3 be anything we want, say x3 = s. As we increase and decrease s, we trace out a line in 3-space, the direction of which is the eigenvector associated with λ = 1. As a check, we can see whether (A − I)x = 0:

  [ −1  0  −2 ] [ −2 ]   [  2 + 0 − 2 ]   [ 0 ]
  [  1  1   1 ] [  1 ] = [ −2 + 1 + 1 ] = [ 0 ]
  [  1  0   2 ] [  1 ]   [ −2 + 0 + 2 ]   [ 0 ]

We now know one eigenvector is (−2, 1, 1). We can find the others in a similar manner. Let λ = 2. Then

A − 2I =
  [ −2  0  −2 ]
  [  1  0   1 ]
  [  1  0   1 ]

which implies x1 = −x3 (every row is a multiple of the single equation x1 + x3 = 0).

Therefore, we can write the solution of the system as

(x1, x2, x3) = (−x3, ?, x3)

Notice that we cannot write x2 as a function of either x1 or x3: the equations place no restriction on x2 at all. Therefore, there is no single eigenvector which corresponds to λ = 2. However, since the system is independent of x2 when λ = 2, we can let x2 and x3 be anything we want. Say x2 = s and x3 = t. Then we can write the solution to the system as

(x1, x2, x3) = s (0, 1, 0) + t (−1, 0, 1)

Therefore, corresponding eigenvectors are (0, 1, 0) and (−1, 0, 1), and their non-zero linear combinations. To check that this is the case, notice

  [ −2  0  −2 ] [ 0 ]   [ 0 + 0 + 0 ]   [ 0 ]
  [  1  0   1 ] [ 1 ] = [ 0 + 0 + 0 ] = [ 0 ]
  [  1  0   1 ] [ 0 ]   [ 0 + 0 + 0 ]   [ 0 ]

and

  [ −2  0  −2 ] [ −1 ]   [  2 + 0 − 2 ]   [ 0 ]
  [  1  0   1 ] [  0 ] = [ −1 + 0 + 1 ] = [ 0 ]
  [  1  0   1 ] [  1 ]   [ −1 + 0 + 1 ]   [ 0 ]

Generally, if the matrix is n × n, we have n eigenvalues (not necessarily real or distinct), because an nth-order polynomial has n roots (not necessarily real). If an eigenvalue is repeated k times, there are at most k linearly independent eigenvectors associated with that root; there may be fewer, in which case the matrix is not diagonalizable, as the next section shows.

9.3 Diagonalizability

A square matrix A is called diagonalizable if there exists an invertible matrix P such that P⁻¹AP is a diagonal matrix. We state without proof the following theorem: Let A be n × n, and assume A has n linearly independent eigenvectors. Define the matrices

Λ = diag(λ1, λ2, ..., λn),   P = ( p1  p2  ...  pn )

where pi is the linearly independent eigenvector associated with λi. Then Λ = P⁻¹AP.

Example: Consider again the matrix

A =
  [ 0  0  −2 ]
  [ 1  2   1 ]
  [ 1  0   3 ]

and remember that the corresponding eigenvalues were λ = {2, 2, 1}, with eigenvectors (−1, 0, 1) and (0, 1, 0) for λ = 2 and (−2, 1, 1) for λ = 1. Therefore, the matrix P can be expressed as

P =
  [ −1  0  −2 ]
  [  0  1   1 ]
  [  1  0   1 ]

Notice that

P⁻¹ =
  [  1  0   2 ]
  [  1  1   1 ]
  [ −1  0  −1 ]

Then we can see that

P⁻¹AP =
  [ 2  0  0 ]
  [ 0  2  0 ]
  [ 0  0  1 ]

Example: Consider the matrix

A =
  [  1  1 ]
  [ −1  3 ]

Here

det(A − λI) = (1 − λ)(3 − λ) + 1 = (λ − 2)²

and so the repeated eigenvalue of this matrix is λ = 2. To find the associated eigenvector, we solve

(A − 2I)x = [ −1 1 ; −1 1 ] x = 0

We get x1 = x2, which can be written as x = s(1, 1). Although λ = 2 has multiplicity 2, there is only one linearly independent eigenvector associated with it. We cannot find two linearly independent eigenvectors, and therefore A is not diagonalizable. Notice however that A is invertible (det(A) = 4 ≠ 0). This is an example of a matrix that is invertible but not diagonalizable.

9.4 Real Symmetric Matrices

Let A be an n × n real symmetric matrix. Then all eigenvalues of A are real (but not necessarily distinct) and A is diagonalizable. Furthermore, we can find a matrix P of eigenvectors such that P is orthogonal, i.e., P⁻¹ = Pᵀ.

9.5 Eigenvalues and Definiteness

For a diagonalizable matrix A, we can easily read off its definiteness from the eigenvalues (compare to homework question 3 in the LA II notes):

1. A diagonalizable matrix A is positive (semi)definite iff all eigenvalues are (weakly) positive.
2. A diagonalizable matrix A is negative (semi)definite iff all eigenvalues are (weakly) negative.
3. A diagonalizable matrix A is indefinite if at least one eigenvalue is strictly positive and at least one eigenvalue is strictly negative.

We will need to diagonalize matrices when solving systems of differential equations.
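The diagonalization example above can be cross-checked numerically with NumPy, whose eig routine returns the eigenvalues together with unit-length eigenvectors as the columns of a matrix (the same P up to scaling and column order):

```python
import numpy as np

# The 3x3 matrix diagonalized in the example above.
A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0,  1.0],
              [1.0, 0.0,  3.0]])

lam, P = np.linalg.eig(A)   # eigenvalues 1, 2, 2 (in some order)
print(np.sort(lam))         # 1, 2, 2 up to floating-point error

# Three linearly independent eigenvectors, so P is invertible and
# P^{-1} A P is the diagonal matrix of the eigenvalues.
Lam = np.linalg.inv(P) @ A @ P
print(np.round(Lam, 8))
```

For the 2 × 2 non-diagonalizable example, eig still returns two columns, but they are numerically (almost) parallel, which is how the missing second eigenvector shows up in floating point.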

10 Markov Chains

Assume k states of the world, which we label 1, 2, ..., k. Let pij be the probability that we move from state i to state j, and call it the transition probability. Then the transition probability matrix of the Markov chain is the matrix

P =
  [ p11  ...  p1k ]
  [  :         :  ]
  [ pk1  ...  pkk ]

Notice that the rows are the current state, and the columns are the states into which we can move. Also notice that the rows of the transition probability matrix must add up to one (otherwise the system would have positive probability of moving into an undefined state).

Example: Say that there are three states of the world: rainy, overcast, and sunny. Let state 1 be rainy, 2 be overcast, and 3 be sunny. Consider the transition probability matrix

P =
  [ 0.8  0.1  0.1 ]
  [ 0.3  0.2  0.5 ]
  [ 0.2  0.6  0.2 ]

For example, if it is rainy, the probability it remains rainy is 0.8. Also, if it is sunny, the probability that it becomes overcast is 0.6.

The transition probability matrix tells us the likelihood of the state of the system next period. However, what if we want to know the likelihood of the state in two periods? Call this transition matrix P2. It can be shown that P2 = P · P. Similarly, it can be shown that the n-period transition matrix is Pn = Pⁿ.

Example: If it is rainy today, what is the probability that it will be cloudy the day after tomorrow?

P² =
  [ 0.8  0.1  0.1 ] [ 0.8  0.1  0.1 ]   [ 0.69  0.16  0.15 ]
  [ 0.3  0.2  0.5 ] [ 0.3  0.2  0.5 ] = [ 0.40  0.37  0.23 ]
  [ 0.2  0.6  0.2 ] [ 0.2  0.6  0.2 ]   [ 0.38  0.26  0.36 ]

Therefore, there is a 0.16 chance that if it is rainy today, then it will be cloudy the day after tomorrow.

Implicitly we have been assuming that we know the state today; for example, we said "assume it is rainy." But what if we only know the probabilities of the states today? We can define an initial state of probabilities x0 = (p1, p2, ..., pk). Then the probability vector over tomorrow's states is x1 = x0 P (its first element, for instance, is the probability that it will be rainy tomorrow). A vector x with xi ≥ 0 for all i and Σ xi = 1 is called a probability vector.

Example: If there is a 30% chance of rain today and a 10% chance of sun (and hence a 60% chance of clouds), what is the probability that it will be cloudy the day after tomorrow?
x0 = ( 0.3  0.6  0.1 )

                             [ 0.69  0.16  0.15 ]
x2 = x0 P² = ( 0.3 0.6 0.1 ) [ 0.40  0.37  0.23 ] = ( 0.485  0.296  0.219 )
                             [ 0.38  0.26  0.36 ]

Therefore, there is a 0.296 chance that it will be cloudy the day after tomorrow.

A transition matrix P is called regular if Pⁿ has only strictly positive entries for some integer n. A steady-state vector q of a transition matrix P is a probability vector that satisfies the equation qP = q. Notice that this says qᵀ is an eigenvector of Pᵀ associated with the eigenvalue λ = 1. If P is a regular transition matrix, the following hold true:

1. 1 is an eigenvalue of P.
2. There is a unique probability vector q which is an eigenvector associated with the eigenvalue 1.
3. xPⁿ → q as n → ∞, for any probability vector x.

Example: What is the probability it will be cloudy as n → ∞? The matrix P is regular, since P itself has only strictly positive entries. In order to find the long-run probabilities, we must realize that

qP = q  ⟺  Pᵀ qᵀ = qᵀ

Therefore, qᵀ is the eigenvector of Pᵀ which corresponds to the eigenvalue λ = 1, and it suffices to find this eigenvector. We solve (Pᵀ − I) qᵀ = 0:

Pᵀ − I =
  [ −0.2   0.3   0.2 ]
  [  0.1  −0.8   0.6 ]
  [  0.1   0.5  −0.8 ]

Solving the system

−0.2 q1 + 0.3 q2 + 0.2 q3 = 0
 0.1 q1 − 0.8 q2 + 0.6 q3 = 0
 0.1 q1 + 0.5 q2 − 0.8 q3 = 0

gives

q1 = (34/13) q3,   q2 = (14/13) q3,   q3 = q3

Therefore, q = ( (34/13) q3, (14/13) q3, q3 ), where q3 can be whatever we want. Since we must have that the sum of the elements of q be equal to 1, we choose q3 accordingly:

34/13 + 14/13 + 13/13 = 61/13  ⟹  q3 = 13/61

q = ( 34/61, 14/61, 13/61 )
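All of the weather-chain computations in this section take only a few lines of NumPy, which also delivers the steady state as the (normalized) eigenvector of Pᵀ for the eigenvalue 1:

```python
import numpy as np

P = np.array([[0.8, 0.1, 0.1],   # rainy    -> rainy, overcast, sunny
              [0.3, 0.2, 0.5],   # overcast -> ...
              [0.2, 0.6, 0.2]])  # sunny    -> ...

# Two-step transition probabilities and the forecast from x0 = (.3, .6, .1).
P2 = P @ P
x0 = np.array([0.3, 0.6, 0.1])
print(np.round(P2, 3))       # entry (row 1, col 2) is about 0.16
print(np.round(x0 @ P2, 3))  # about (0.485, 0.296, 0.219)

# Steady state: normalize the eigenvector of P^T associated with eigenvalue 1.
lam, V = np.linalg.eig(P.T)
q = V[:, np.argmin(np.abs(lam - 1.0))].real
q = q / q.sum()
print(np.round(q, 4))        # approximately (34/61, 14/61, 13/61)
```

As a sanity check on the convergence claim xPⁿ → q, every row of np.linalg.matrix_power(P, 50) is numerically equal to q.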

11 Homework

1. Find the eigenvalues and the corresponding eigenvectors for the following matrices:
   (a) [ 3 0 ; 8 −1 ]
   (b) [ 0 4 ; 9 0 ]
   (c) [ 4 0 ; 3 1 ]
   (d) [ 2 7 ; 0 2 ]
   (e) [ 0 1 ; −1 0 ]
   (f) [ 1 1 ; 0 1 ]

2. Determine whether the following matrices are diagonalizable:
   (a) [ 1 0 ; 1 2 ]
   (b) [ 3 1 ; 0 3 ]
   (c) [ 1 0 0 ; 0 2 1 ; 0 0 2 ]
   (d) [ 3 0 0 ; 1 4 0 ; 0 1 3 ]
   (e) [ 3 1 0 ; 0 3 1 ; 0 0 3 ]

3. Diagonalize the following matrices, if possible:
   (a) [ 4 7 ; 0 2 ]
   (b) [ 1 6 ; 1 0 ]
   (c) [ 0 1 0 ; 1 0 0 ; 0 0 1 ]
   (d) [ 3 1 0 ; 0 3 0 ; 0 0 2 ]

4. (midterm exam) You have the following transition probability matrix of a discrete-state Markov chain:

   [ 0.2  0.4  0.4 ]
   [ 0.4  0.2  0.4 ]
   [ 0.4  0.4  0.2 ]

   What is the probability that the system is in state 1 at time n + 1, given that it was in state 3 at time n?

5. (Homework problem) Suppose that the weather can be sunny or cloudy, and the weather conditions on successive mornings form a Markov chain with stationary transition probabilities. Suppose that the transition matrix is as follows:

   [ 0.7  0.3 ]
   [ 0.6  0.4 ]

   where sunny is state 1 and cloudy is state 2. If it is cloudy on a given day, what is the probability that it will also be cloudy the next day?

6. (Homework problem) Suppose that three boys, 1, 2, and 3, are throwing the ball to one another. Whenever 1 has the ball, he throws it to 2 with a probability of 0.2. Whenever 2 has the ball, he will throw it to 1 with probability 0.6. Whenever 3 has the ball, he is equally likely to throw it to 1 or 2.
   (a) Construct the transition probability matrix.
   (b) If each of the boys is equally likely to have the ball at a certain time n, which boy is most likely to have the ball at time n + 1?