A Little Necessary Matrix Algebra for Doctoral Studies in Business & Economics


1 A Little Necessary Matrix Algebra for Doctoral Studies in Business & Economics
James J. Cochran, Department of Marketing & Analysis, Louisiana Tech University

Matrix Algebra
Matrix algebra is a means of efficiently expressing large numbers of calculations to be made upon ordered sets of numbers. It is often referred to as linear algebra.

2 Why use it?
Matrix algebra is used primarily to facilitate mathematical expression. Many equations would be completely intractable if scalar mathematics had to be used. It is also important to note that the scalar algebra is still under there somewhere.

Definitions - Scalars
Scalar - a single value (i.e., a number)

3 Definitions - Vectors
Vector - a single row or column of numbers. Each individual entry is called an element. Vectors are denoted with bold small letters:
row vector a = [1 2 3 4]
column vector a = [1 2 3 4]'

Definitions - Matrices
A matrix is a rectangular array of numbers (called elements) arranged in orderly rows and columns, e.g.,
A = [a11 a12 a13; a21 a22 a23]
Subscripts denote the row (i = 1,...,n) and column (j = 1,...,m) location of an element.

4 Definitions - Matrices
Matrices are denoted with bold capital letters. All matrices (and vectors) have an order or dimension - that is, the number of rows x the number of columns. Thus the A above is referred to as a two by three (2x3) matrix. A matrix A of dimension n x m is often denoted Anxm, and a vector a of dimension n (or m) is often denoted an (or am).

Definitions - Matrices
Null Matrix - a matrix for which all elements are zero, i.e., aij = 0 for all i, j
Square Matrix - a matrix for which the number of rows equals the number of columns (n = m)
Symmetric Matrix - a matrix for which aij = aji for all i, j

5 Definitions - Matrices
Diagonal Elements - elements of a square matrix for which the row and column locations are equal, i.e., the aij with i = j
Upper Triangular Matrix - a matrix for which all elements below the diagonal are zero, i.e., aij = 0 for all i > j
Lower Triangular Matrix - a matrix for which all elements above the diagonal are zero, i.e., aij = 0 for all i < j
Matrix Equality - two matrices are equal iff (if and only if) all of their elements are identical.
Note that statistical data sets are matrices (usually with observations in the rows and variables in the columns):
               Variable 1   Variable 2   ...   Variable m
Observation 1     a11          a12       ...      a1m
Observation 2     a21          a22       ...      a2m
   ...
Observation n     an1          an2       ...      anm

6 Basic Matrix Operations
Transposition; Sums and Differences; Products; Inversion

The Transpose of a Matrix
The transpose A' (or A^T) of a matrix A is the matrix such that the ith row of A' is the ith column of A, i.e., B is the transpose of A iff bij = aji for all i, j. This is equivalent to fixing the diagonal (i.e., the elements for which i = j) and then rotating the matrix 180 degrees about that diagonal.

7 Transpose of a Matrix - An Example
If we have
A = [1 4; 2 5]
then
A' = [1 2; 4 5]

More on the Transpose of a Matrix
(A')' = A (think about it!)
If A' = A, then A is symmetric.
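These transpose facts are easy to verify numerically. A minimal sketch using NumPy (the matrix values here are arbitrary illustrative choices, not taken from the slides):

```python
import numpy as np

A = np.array([[1, 4],
              [2, 5]])

# The transpose swaps rows and columns: (A')_ij = A_ji.
At = A.T

# (A')' = A
assert np.array_equal(At.T, A)

# A matrix is symmetric iff it equals its own transpose.
S = np.array([[2, 7],
              [7, 3]])
is_symmetric = bool(np.array_equal(S, S.T))
```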

8 Sums and Differences of Matrices
Two matrices may be added (subtracted) iff they are of the same order. Simply add (subtract) elements from corresponding locations:
[a11 a12; a21 a22; a31 a32] + [b11 b12; b21 b22; b31 b32] = [c11 c12; c21 c22; c31 c32]
where
a11 + b11 = c11, a12 + b12 = c12, a21 + b21 = c21, a22 + b22 = c22, a31 + b31 = c31, a32 + b32 = c32

Sums and Differences - An Example
For two matrices A and B of the same order, C = A + B is calculated by adding each pair of corresponding elements.

9 Sums and Differences - An Example
Similarly, for the same A and B, C = A - B is calculated by subtracting each element of B from the corresponding element of A.

Some Properties of Matrix Addition/Subtraction
- The transpose of a sum = the sum of the transposes: (A+B+C)' = A'+B'+C'
- A+B = B+A (i.e., matrix addition is commutative)
- Matrix addition can be extended beyond two matrices
- Matrix addition is associative, i.e., A+(B+C) = (A+B)+C
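These addition/subtraction properties can be checked with a quick sketch; the 2x3 matrices below are arbitrary illustrative values:

```python
import numpy as np

A = np.array([[1, 7, 0],
              [2, 3, 4]])
B = np.array([[5, 1, 2],
              [1, 0, 3]])
C = np.array([[2, 2, 2],
              [2, 2, 2]])

# Addition/subtraction is elementwise, so the matrices must be the same order.
S = A + B
D = A - B

# Commutative: A + B = B + A
assert np.array_equal(A + B, B + A)
# Associative: A + (B + C) = (A + B) + C
assert np.array_equal(A + (B + C), (A + B) + C)
# Transpose of a sum = sum of transposes
assert np.array_equal((A + B + C).T, A.T + B.T + C.T)
```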

10 Products of Scalars and Matrices
To multiply a matrix by a scalar, simply multiply each element of the matrix by the scalar quantity:
b[a11 a12; a21 a22] = [ba11 ba12; ba21 ba22]

Products of Scalars & Matrices - An Example
If we have
A = [1 2; 3 4]
and a scalar b, then bA is calculated by multiplying each of the four elements of A by b. Note that bA = Ab if b is a scalar.

11 Some Properties of Scalar x Matrix Multiplication
- If b is a scalar, then bA = Ab (i.e., scalar x matrix multiplication is commutative)
- Scalar x matrix multiplication can be extended beyond two scalars
- Scalar x matrix multiplication is associative, i.e., ab(C) = a(bC)
- Scalar x matrix multiplication leads to removal of a common factor, i.e., if
C = [ba11 ba12; ba21 ba22; ba31 ba32] = bA
then C/b = A, where A = [a11 a12; a21 a22; a31 a32]

Products of Matrices
We write the multiplication of two matrices A and B as AB. This is referred to either as pre-multiplying B by A or post-multiplying A by B. So for the matrix multiplication AB, A is referred to as the premultiplier and B is referred to as the postmultiplier.

12 Products of Matrices
In order to multiply matrices, they must be conformable (the number of columns in the premultiplier must equal the number of rows in the postmultiplier). Note that
- an (m x n) x (n x p) = (m x p)
- an (m x n) x (p x n) cannot be done
- a (1 x n) x (n x 1) = a scalar (1 x 1)

Products of Matrices
If we have A (3x3) and B (3x2), then
AB = [a11 a12 a13; a21 a22 a23; a31 a32 a33] x [b11 b12; b21 b22; b31 b32] = [c11 c12; c21 c22; c31 c32] = C
where
c11 = a11b11 + a12b21 + a13b31
c12 = a11b12 + a12b22 + a13b32
c21 = a21b11 + a22b21 + a23b31
c22 = a21b12 + a22b22 + a23b32
c31 = a31b11 + a32b21 + a33b31
c32 = a31b12 + a32b22 + a33b32
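The element-by-element formulas above translate directly into a triple loop. A sketch (the numeric values of A and B are arbitrary illustrative choices), checked against NumPy's built-in product:

```python
import numpy as np

def matmul(A, B):
    """Multiply conformable matrices: c_ij = sum_k a_ik * b_kj."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "not conformable: columns of A must equal rows of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])   # 3x3 premultiplier
B = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])       # 3x2 postmultiplier

C = matmul(A, B)               # (3x3) x (3x2) = (3x2)
assert np.allclose(C, A @ B)
```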

13 Products of Matrices
If we have A (3x3) and B (3x2), then
BA = [b11 b12; b21 b22; b31 b32] x [a11 a12 a13; a21 a22 a23; a31 a32 a33] = undefined
i.e., matrix multiplication is not commutative (why?).

Matrix Multiplication - An Example
For specific 3x3 A and 3x2 B, each element cij of C = AB is found by applying the formulas above - the sum of products of the elements of row i of A with the elements of column j of B, e.g.,
c11 = a11b11 + a12b21 + a13b31 and c32 = a31b12 + a32b22 + a33b32

14 Some Properties of Matrix Multiplication
- Even when conformable, AB does not necessarily equal BA (i.e., matrix multiplication is not commutative)
- Matrix multiplication can be extended beyond two matrices
- Matrix multiplication is associative, i.e., A(BC) = (AB)C

Some Properties of Matrix Multiplication
- The transpose of a product is equal to the product of the transposes in reverse order: (ABC)' = C'B'A'
- If AA = A, then A is idempotent (and so is A')

15 Special Uses for Matrix Multiplication
Sum the Column Elements of a Matrix - premultiply a matrix A by a conformable row vector of 1s. If
A = [1 4 7; 2 5 8]
then premultiplication by 1' = [1 1] will yield a row vector of the column totals of A, i.e.,
1'A = [1 1][1 4 7; 2 5 8] = [3 9 15]

Special Uses for Matrix Multiplication
Sum the Row Elements of a Matrix - postmultiply a matrix A by a conformable column vector of 1s. For the same A, postmultiplication by 1 = [1 1 1]' will yield a column vector of the row totals of A, i.e.,
A1 = [1 4 7; 2 5 8][1 1 1]' = [12 15]'

16 Special Uses for Matrix Multiplication
How can we create a column vector of the column totals of a matrix? Postmultiply the transpose of the matrix A by a conformable column vector of 1s. If
A = [1 4 7; 2 5 8]
then A' = [1 2; 4 5; 7 8], and postmultiplication of A' by 1 = [1 1]'

Special Uses for Matrix Multiplication
will yield a column vector of the column totals of A, i.e.,
A'1 = [1 2; 4 5; 7 8][1 1]' = [3 9 15]'

17 Special Uses for Matrix Multiplication
How can we create a row vector of the row totals of a matrix? Premultiply the transpose of the matrix A by a conformable row vector of 1s. If
A = [1 4 7; 2 5 8]
then A' = [1 2; 4 5; 7 8], and premultiplication of A' by 1' = [1 1 1]

Special Uses for Matrix Multiplication
will yield a row vector of the row totals of matrix A, i.e.,
1'A' = [1 1 1][1 2; 4 5; 7 8] = [12 15]
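All four ones-vector tricks can be sketched at once. This uses the slides' 2x3 example matrix as reconstructed here (A = [1 4 7; 2 5 8]):

```python
import numpy as np

A = np.array([[1, 4, 7],
              [2, 5, 8]])

ones2 = np.ones(2)   # conformable with the 2 rows of A
ones3 = np.ones(3)   # conformable with the 3 columns of A

col_totals_row = ones2 @ A      # 1'A  : row vector of column totals
row_totals_col = A @ ones3      # A1   : column vector of row totals
col_totals_col = A.T @ ones2    # A'1  : column vector of column totals
row_totals_row = ones3 @ A.T    # 1'A' : row vector of row totals
```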

18 Special Uses for Matrix Multiplication
The Dot (or Inner) Product of Two Vectors - premultiplication of a column vector b by a conformable row vector a' yields a single value called the dot product or inner product:
a'b = a1b1 + a2b2 + a3b3
which is the sum of products of elements in similar positions for the two vectors.

Special Uses for Matrix Multiplication
The Outer Product of Two Vectors - postmultiplication of a column vector b by a conformable row vector a' yields a matrix containing the products of each pair of elements from the two vectors (called the outer product):
ba' = [b1a1 b1a2 b1a3; b2a1 b2a2 b2a3; b3a1 b3a2 b3a3]
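A short sketch of both products; the vector values below are arbitrary illustrative choices, not the (illegible) numbers from the slides:

```python
import numpy as np

a = np.array([2, 5, 1])
b = np.array([3, 8, 4])

inner = a @ b             # a'b = 2*3 + 5*8 + 1*4, a single scalar
outer = np.outer(b, a)    # ba' = 3x3 matrix of all pairwise products b_i * a_j
```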

19 Special Uses for Matrix Multiplication
Sum the Squared Elements of a Vector - premultiply a column vector a by its transpose. If
a = [2 5 8]'
then premultiplication by the row vector a' = [2 5 8] will yield the sum of the squared values of the elements of a, i.e.,
a'a = [2 5 8][2 5 8]' = 4 + 25 + 64 = 93

Special Uses for Matrix Multiplication
Postmultiply a row vector a by its transpose. If
a = [7 0 1]
then postmultiplication by the column vector a' = [7 0 1]' will yield the sum of the squared values of the elements of a, i.e.,
aa' = [7 0 1][7 0 1]' = 49 + 0 + 1 = 50

20 Special Uses for Matrix Multiplication
Determining if Two Vectors are Orthogonal - two conformable vectors a and b are orthogonal iff a'b = 0, i.e., iff their dot product is zero.

Special Uses for Matrix Multiplication
Representing Systems of Simultaneous Equations - suppose we have the following system of simultaneous equations:
px1 + qx2 + rx3 = M
dx1 + ex2 + fx3 = N
If we let
A = [p q r; d e f], x = [x1 x2 x3]', and b = [M N]'
then we can represent the system (in matrix notation) as Ax = b (why?).

21 Special Uses for Matrix Multiplication
Linear Independence - any subset of the columns (or rows) of a matrix A is said to be linearly independent if no column (row) in the subset can be expressed as a linear combination of the other columns (rows) in the subset. If such a combination exists, the columns (rows) are said to be linearly dependent.

Special Uses for Matrix Multiplication
The Rank of a matrix is defined to be the number of linearly independent columns (or rows) of the matrix.
Nonsingular (Full Rank) Matrix - any matrix that has no linear dependencies among its columns (rows). For a square matrix A this implies that Ax = 0 iff x = 0.
Singular (Not of Full Rank) Matrix - any matrix that has at least one linear dependency among its columns (rows).

22 Special Uses for Matrix Multiplication
Example - a matrix A whose third column is equal to three times its first column is singular (not of full rank). This result implies there is either i) no unique solution or ii) no existing solution to a system of equations Ax = b (why?).

Special Uses for Matrix Multiplication
Example - a matrix A whose third column is equal to the first column plus two times the second column is likewise singular (not of full rank). Note that the number of linearly independent rows in a matrix will always equal the number of linearly independent columns in the matrix.
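A sketch of the first dependency: the matrix below is an arbitrary illustrative choice constructed so that column 3 equals three times column 1, which forces the rank below 3 and the determinant to zero:

```python
import numpy as np

# Column 3 = 3 * column 1, so the columns are linearly dependent.
A = np.array([[1., 2., 3.],
              [2., 5., 6.],
              [4., 1., 12.]])

rank = np.linalg.matrix_rank(A)
full_rank = bool(rank == A.shape[1])
det = np.linalg.det(A)          # singular => determinant is zero
```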

23 Special Matrices
There are a number of special matrices. These include:
Diagonal Matrices, Identity Matrices, Null Matrices, Commutative Matrices, Anti-Commutative Matrices, Periodic Matrices, Idempotent Matrices, Nilpotent Matrices, Orthogonal Matrices

Diagonal Matrices
A diagonal matrix is a square matrix that has values on the diagonal, with all off-diagonal entries being zero:
[a11 0 0 0; 0 a22 0 0; 0 0 a33 0; 0 0 0 a44]

24 Identity Matrices
An identity matrix is a diagonal matrix whose diagonal elements all equal 1:
I = [1 0 0; 0 1 0; 0 0 1]
When used as a premultiplier or postmultiplier of any conformable matrix A, the identity matrix returns the original matrix A, i.e., IA = AI = A. Why?

Null Matrices
A square matrix whose elements all equal 0. It usually arises as the difference between two equal square matrices, i.e., A - B = 0 iff A = B.

25 Commutative Matrices
Any two square matrices A and B such that AB = BA are said to commute. Note that it is easy to show that any square matrix A commutes both with itself and with a conformable identity matrix I.

Anti-Commutative Matrices
Any two square matrices A and B such that AB = -BA are said to anticommute.

26 Periodic Matrices
Any square matrix A such that A^(k+1) = A is said to be of period k. Of course, any matrix of period k = 1 is also of period k for any positive integer value of k (why?).

Idempotent Matrices
Any square matrix A such that A² = A is said to be idempotent. Thus an idempotent matrix is of period k = 1 (and so commutes with itself).

27 Nilpotent Matrices
Any square matrix A such that A^p = 0, where p is a positive integer, is said to be nilpotent. Note that if p is the least positive integer for which A^p = 0, then A is said to be nilpotent of index p.

Orthogonal Matrices
Any square matrix A whose rows (considered as vectors) are mutually perpendicular and of unit length, i.e., A'A = I. Note that A is orthogonal iff A⁻¹ = A'.

28 Orthogonal Matrices
Properties of an orthogonal matrix A:
- Its transpose and inverse are identical: A' = A⁻¹
- When multiplied by its transpose, the product is commutative: AA' = A'A
- A' is also an orthogonal matrix
- When multiplied by a conformable orthogonal matrix B, the product AB is an orthogonal matrix

Orthogonal Matrices
Properties of an orthogonal matrix A (continued):
- The sum of the squares of the elements in each row or column is equal to 1
- The dot product of any two distinct rows (or any two distinct columns) is zero
- (Ax)'(Ay) = x'y for all real vectors x and y (this is called preserving dot products)

29 The Determinant of a Matrix
The determinant of a matrix A is commonly denoted by |A| or det A. Determinants exist only for square matrices. A matrix with a determinant of zero is described as singular; a matrix with a nonzero determinant is described as regular, invertible, or nonsingular. The determinant is a matrix characteristic (that can be somewhat tedious to compute).

The Determinant for a 2x2 Matrix
If we have a matrix A such that
A = [a11 a12; a21 a22]
then
|A| = a11a22 - a12a21
For example, the determinant of
A = [1 2; 3 4]
is
|A| = (1)(4) - (2)(3) = 4 - 6 = -2
Determinants for 2x2 matrices are easy!
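The 2x2 formula is one line of code; this sketch checks it against NumPy's general determinant routine:

```python
import numpy as np

def det2(A):
    """|A| = a11*a22 - a12*a21 for a 2x2 matrix."""
    return A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]

A = np.array([[1, 2],
              [3, 4]])
d = det2(A)                              # (1)(4) - (2)(3) = -2
assert np.isclose(d, np.linalg.det(A))   # agrees with the built-in routine
```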

30 The Determinant for a 3x3 Matrix
If we have a matrix A such that
A = [a11 a12 a13; a21 a22 a23; a31 a32 a33]
then the determinant is
det A = |A| = a11|a22 a23; a32 a33| - a12|a21 a23; a31 a33| + a13|a21 a22; a31 a32|
which can be expanded and rewritten as
det A = |A| = a11a22a33 - a11a23a32 + a12a23a31 - a12a21a33 + a13a21a32 - a13a22a31 (Why?)

The Determinant for a 3x3 Matrix
If we rewrite the determinants of each of the 2x2 submatrices in the expansion above as
|a22 a23; a32 a33| = a22a33 - a23a32, |a21 a23; a31 a33| = a21a33 - a23a31, and |a21 a22; a31 a32| = a21a32 - a22a31
then by substitution we have
|A| = a11a22a33 - a11a23a32 + a12a23a31 - a12a21a33 + a13a21a32 - a13a22a31

31 The Determinant for a 3x3 Matrix
Note that if we have a matrix A such that
A = [a11 a12 a13; a21 a22 a23; a31 a32 a33]
then |A| can also be written as an expansion along any other row or column, e.g.,
det A = |A| = -a21|a12 a13; a32 a33| + a22|a11 a13; a31 a33| - a23|a11 a12; a31 a32|
or
det A = |A| = a31|a12 a13; a22 a23| - a32|a11 a13; a21 a23| + a33|a11 a12; a21 a22|

The Determinant for a 3x3 Matrix
To do so, first create a matrix of the same dimensions as A consisting only of alternating signs (+, -, +, ...):
[+ - +; - + -; + - +]

32 The Determinant for a 3x3 Matrix
Then expand on any row or column (i.e., multiply each element in the selected row/column by the corresponding sign, then multiply each of these results by the determinant of the submatrix that results from elimination of the row and column to which the element belongs).
For example, let's expand on the second column of
A = [a11 a12 a13; a21 a22 a23; a31 a32 a33]

The Determinant for a 3x3 Matrix
The three elements on which our expansion is based will be a12, a22, and a32. The corresponding signs are -, +, -.

33 The Determinant for a 3x3 Matrix
So for the first term of our expansion we will multiply -a12 by the determinant of the matrix formed when row 1 and column 2 are eliminated from A (called the minor and often denoted |Arc|, where r and c are the deleted row and column):
A12 = [a21 a23; a31 a33], so |A12| = a21a33 - a23a31
which gives us
-a12(a21a33 - a23a31)
This product is called a cofactor.

The Determinant for a 3x3 Matrix
For the second term of our expansion we will multiply a22 by the determinant of the matrix formed when row 2 and column 2 are eliminated from A:
A22 = [a11 a13; a31 a33], so |A22| = a11a33 - a13a31
which gives us
a22(a11a33 - a13a31)

34 The Determinant for a 3x3 Matrix
Finally, for the third term of our expansion we will multiply -a32 by the determinant of the matrix formed when row 3 and column 2 are eliminated from A:
A32 = [a11 a13; a21 a23], so |A32| = a11a23 - a13a21
which gives us
-a32(a11a23 - a13a21)

The Determinant for a 3x3 Matrix
Putting this all together yields
det A = |A| = -a12|A12| + a22|A22| - a32|A32|
So there are six distinct ways (three rows and three columns) to calculate the determinant of a 3x3 matrix! These can be expressed as
det A = |A| = Σj aij(-1)^(i+j)|Aij| (expansion along row i) = Σi aij(-1)^(i+j)|Aij| (expansion along column j)
Note that this is referred to as the method of cofactors and can be used to find the determinant of any square matrix.
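The method of cofactors is naturally recursive: expand along the first row, and compute each minor's determinant the same way. A sketch (the test matrix is the 3x3 example used in these slides, as reconstructed here):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # Cofactor = sign * element * determinant of the minor.
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[1., 2., 3.],
              [2., 5., 4.],
              [1., -3., -2.]])
d = det_cofactor(A)
assert np.isclose(d, np.linalg.det(A))
```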

35 The Determinant for a 3x3 Matrix - An Example
Suppose we have the following matrix A:
A = [1 2 3; 2 5 4; 1 -3 -2]
Using row 1 (i.e., i = 1), the determinant is:
det A = |A| = Σj a1j(-1)^(1+j)|A1j| = 1(2) - 2(-8) + 3(-11) = -15
Note that this is the same result we would achieve using any other row or column!

Some Properties of Determinants
Determinants have several mathematical properties useful in matrix manipulations:
- |A| = |A'|
- If each element of any row (or column) of A is 0, then |A| = 0
- If every value in a row is multiplied by k, then |A| becomes k|A|
- If two rows (or columns) are interchanged, the sign, but not the value, of |A| changes
- If two rows (or columns) of A are identical, |A| = 0

36 Some Properties of Determinants
- |A| remains unchanged if each element of a row is multiplied by a constant and added to any other row
- If A is nonsingular, then |A| = 1/|A⁻¹|, i.e., |A||A⁻¹| = 1
- |AB| = |A||B| (i.e., the determinant of a product = the product of the determinants)
- For any scalar c, |cA| = c^k|A|, where k is the order of A
- The determinant of a diagonal matrix is simply the product of the diagonal elements

Some Properties of Determinants
If A is an orthogonal matrix, its determinant is ±1 (note that the reverse is not necessarily true; i.e., not all matrices whose determinants are ±1 are orthogonal).

37 Why are Determinants Important?
Consider the small system of equations:
a11x1 + a12x2 = b1
a21x1 + a22x2 = b2
which can be represented by Ax = b, where
A = [a11 a12; a21 a22], x = [x1 x2]', and b = [b1 b2]'

Why are Determinants Important?
If we were to solve this system of equations simultaneously for x2, we would have:
a21(a11x1 + a12x2 = b1)
-a11(a21x1 + a22x2 = b2)
which yields (through cancellation and rearranging):
a21a11x1 + a21a12x2 - a11a21x1 - a11a22x2 = a21b1 - a11b2

38 Why are Determinants Important?
or
(a21a12 - a11a22)x2 = a21b1 - a11b2
which implies
x2 = (a11b2 - a21b1)/(a11a22 - a12a21)
Notice that the denominator is:
|A| = a11a22 - a12a21
Thus iff |A| = 0 there is either i) no unique solution or ii) no existing solution to the system of equations Ax = b!

Why are Determinants Important?
This result holds true if we solve the system for x1 as well, and for a square matrix A of any order. Thus we can use determinants in conjunction with the A matrix (the coefficient matrix in a system of simultaneous equations) to see if the system has a unique solution.
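The derivation above can be sketched directly; the coefficients and right-hand side below are arbitrary illustrative values with |A| ≠ 0, so a unique solution exists:

```python
import numpy as np

# System: a11*x1 + a12*x2 = b1,  a21*x1 + a22*x2 = b2
A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([5., 10.])

detA = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]     # = 5, nonzero => unique solution
# x2 from the formula derived above: (a11*b2 - a21*b1) / |A|
x2 = (A[0, 0] * b[1] - A[1, 0] * b[0]) / detA
# Back-substitute into the first equation for x1.
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]

assert np.allclose(A @ np.array([x1, x2]), b)    # both equations are satisfied
```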

39 Traces of Matrices
The trace of a square matrix A is the sum of its diagonal elements, denoted tr(A). We have
tr(A) = Σi aii
For example, the trace of
A = [1 2; 3 4]
is
tr(A) = Σi aii = 1 + 4 = 5

Some Properties of Traces
Traces have several mathematical properties useful in matrix manipulations:
- For any scalar c, tr(cA) = c[tr(A)]
- tr(A ± B) = tr(A) ± tr(B)
- tr(AB) = tr(BA)
- tr(B⁻¹AB) = tr(A)
- tr(AA') = ΣiΣj aij²

40 The Inverse of a Matrix
The inverse of a matrix A is commonly denoted by A⁻¹ or inv A. The inverse of an n x n matrix A is the matrix A⁻¹ such that
AA⁻¹ = I = A⁻¹A
The matrix inverse is analogous to a scalar multiplicative reciprocal. A matrix that has an inverse is called nonsingular.

The Inverse of a Matrix
For some n x n matrices A an inverse matrix A⁻¹ may not exist. A matrix that does not have an inverse is singular. An inverse of an n x n matrix A exists iff |A| ≠ 0.

41 Inverse by Simultaneous Equations
Pre- or postmultiply your square matrix A by a dummy matrix of the same dimensions, i.e.,
AA⁻¹ = [a11 a12 a13; a21 a22 a23; a31 a32 a33][a b c; d e f; g h i]
Set the result equal to an identity matrix of the same dimensions as your square matrix A, i.e.,
[a11 a12 a13; a21 a22 a23; a31 a32 a33][a b c; d e f; g h i] = [1 0 0; 0 1 0; 0 0 1]

Inverse by Simultaneous Equations
Recognize that the resulting expression implies a set of n² simultaneous equations that must be satisfied if A⁻¹ exists:
a11(a) + a12(d) + a13(g) = 1, a11(b) + a12(e) + a13(h) = 0, a11(c) + a12(f) + a13(i) = 0;
a21(a) + a22(d) + a23(g) = 0, a21(b) + a22(e) + a23(h) = 1, a21(c) + a22(f) + a23(i) = 0;
a31(a) + a32(d) + a33(g) = 0, a31(b) + a32(e) + a33(h) = 0, a31(c) + a32(f) + a33(i) = 1.
Solving this set of n² equations simultaneously yields A⁻¹.

42 Inverse by Simultaneous Equations - An Example
If we have
A = [1 2 3; 2 5 4; 1 -3 -2]
then the postmultiplied matrix would be
AA⁻¹ = [1 2 3; 2 5 4; 1 -3 -2][a b c; d e f; g h i]
We now set this equal to a 3x3 identity matrix:
[1 2 3; 2 5 4; 1 -3 -2][a b c; d e f; g h i] = [1 0 0; 0 1 0; 0 0 1]

Inverse by Simultaneous Equations - An Example
Recognize that the resulting expression implies the following n² simultaneous equations:
a + 2d + 3g = 1, b + 2e + 3h = 0, c + 2f + 3i = 0;
2a + 5d + 4g = 0, 2b + 5e + 4h = 1, 2c + 5f + 4i = 0;
a - 3d - 2g = 0, b - 3e - 2h = 0, c - 3f - 2i = 1.
This system can be satisfied iff A⁻¹ exists.

43 Inverse by Simultaneous Equations - An Example
Solving the set of n² equations simultaneously yields:
a = -2/15, b = 1/3, c = 7/15,
d = -8/15, e = 1/3, f = -2/15,
g = 11/15, h = -1/3, i = -1/15
so we have
A⁻¹ = [-2/15 1/3 7/15; -8/15 1/3 -2/15; 11/15 -1/3 -1/15]

Inverse by Simultaneous Equations - An Example
ALWAYS check your answer. How? Use the fact that AA⁻¹ = A⁻¹A = I and do a little matrix multiplication!
AA⁻¹ = [1 2 3; 2 5 4; 1 -3 -2][-2/15 1/3 7/15; -8/15 1/3 -2/15; 11/15 -1/3 -1/15] = [1 0 0; 0 1 0; 0 0 1] = I3x3
So we have found A⁻¹!
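The check is quick to automate. This sketch uses the example A and its inverse as reconstructed here, and confirms that both products give the identity:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 5., 4.],
              [1., -3., -2.]])

# The inverse found by solving the n^2 simultaneous equations (entries over 15).
A_inv = np.array([[-2., 5., 7.],
                  [-8., 5., -2.],
                  [11., -5., -1.]]) / 15.0

# ALWAYS check: A A^-1 = A^-1 A = I
assert np.allclose(A @ A_inv, np.eye(3))
assert np.allclose(A_inv @ A, np.eye(3))
```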

44 Inverse by the Gauss-Jordan Algorithm
Augment your matrix A with an identity matrix of the same dimensions, i.e.,
[A | I] = [a11 a12 a13 | 1 0 0; a21 a22 a23 | 0 1 0; a31 a32 a33 | 0 0 1]
Now we use valid row operations to convert A to I (and so [A | I] to [I | A⁻¹]).

Inverse by the Gauss-Jordan Algorithm
Valid row operations on [A | I]:
- You may interchange rows
- You may multiply a row by a scalar
- You may replace a row with the sum of that row and another row multiplied by a scalar (which is often negative)
Every operation performed on the A side must also be performed on the I side. Use valid row operations on [A | I] to convert A to I (and so [A | I] to [I | A⁻¹]).

45 Inverse by the Gauss-Jordan Algorithm - An Example
If we have
A = [1 2 3; 2 5 4; 1 -3 -2]
then the augmented matrix [A | I] is
[1 2 3 | 1 0 0; 2 5 4 | 0 1 0; 1 -3 -2 | 0 0 1]
We now wish to use valid row operations to convert the A side of this augmented matrix to I.

Inverse by the Gauss-Jordan Algorithm - An Example
Step 1: Subtract 2(Row 1) from Row 2 and substitute the result for Row 2 in [A | I]:
[1 2 3 | 1 0 0; 0 1 -2 | -2 1 0; 1 -3 -2 | 0 0 1]

46 Inverse by the Gauss-Jordan Algorithm - An Example
Step 2: Subtract Row 3 from Row 1, divide the result by 5, and substitute for Row 3 in the matrix derived in the previous step:
[1 2 3 | 1 0 0; 0 1 -2 | -2 1 0; 0 1 1 | 1/5 0 -1/5]

Inverse by the Gauss-Jordan Algorithm - An Example
Step 3: Subtract Row 2 from Row 3, divide the result by 3, and substitute for Row 3 in the matrix derived in the previous step:
[1 2 3 | 1 0 0; 0 1 -2 | -2 1 0; 0 0 1 | 11/15 -1/3 -1/15]

47 Inverse by the Gauss-Jordan Algorithm - An Example
Step 4: Subtract 2(Row 2) from Row 1 and substitute the result for Row 1 in the matrix derived in the previous step:
[1 0 7 | 5 -2 0; 0 1 -2 | -2 1 0; 0 0 1 | 11/15 -1/3 -1/15]

Inverse by the Gauss-Jordan Algorithm - An Example
Step 5: Subtract 7(Row 3) from Row 1 and substitute the result for Row 1 in the matrix derived in the previous step:
[1 0 0 | -2/15 1/3 7/15; 0 1 -2 | -2 1 0; 0 0 1 | 11/15 -1/3 -1/15]

48 Inverse by the Gauss-Jordan Algorithm - An Example
Step 6: Add 2(Row 3) to Row 2 and substitute the result for Row 2 in the matrix derived in the previous step:
[1 0 0 | -2/15 1/3 7/15; 0 1 0 | -8/15 1/3 -2/15; 0 0 1 | 11/15 -1/3 -1/15]

Inverse by the Gauss-Jordan Algorithm - An Example
Now that the left side of the augmented matrix is an identity matrix I, the right side of the augmented matrix is the inverse of A (A⁻¹), i.e.,
A⁻¹ = [-2/15 1/3 7/15; -8/15 1/3 -2/15; 11/15 -1/3 -1/15]
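The three valid row operations (interchange, scale, add a multiple of another row) are all a general Gauss-Jordan routine needs. A sketch, tested on the same example matrix as reconstructed here:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Interchange rows so the pivot is the largest available (and nonzero).
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        # Multiply the pivot row by a scalar so the pivot element equals 1.
        aug[col] /= aug[col, col]
        # Add a (negative) multiple of the pivot row to every other row.
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]

A = np.array([[1., 2., 3.],
              [2., 5., 4.],
              [1., -3., -2.]])
A_inv = gauss_jordan_inverse(A)
assert np.allclose(A @ A_inv, np.eye(3))
```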

49 Inverse by the Gauss-Jordan Algorithm - An Example
To check our work, let's see if our result yields AA⁻¹ = I:
AA⁻¹ = [1 2 3; 2 5 4; 1 -3 -2][-2/15 1/3 7/15; -8/15 1/3 -2/15; 11/15 -1/3 -1/15] = [1 0 0; 0 1 0; 0 0 1]
So our work checks out!

Inverse by Determinants
Replace each element aij in a matrix A with an element calculated as follows:
- Find the determinant of the submatrix that results when the ith row and jth column are eliminated from A (i.e., |Aij|)
- Attach the sign identified in the method of cofactors, (-1)^(i+j)
- Divide by the determinant of A
After all elements have been replaced, transpose the resulting matrix.

50 Inverse by Determinants - An Example
Again suppose we have the matrix
A = [1 2 3; 2 5 4; 1 -3 -2]
We have calculated the determinant of A to be -15, so we replace element 1,1 with
(-1)^(1+1)|A11|/|A| = 2/(-15) = -2/15
Similarly, we replace element 1,2 with
(-1)^(1+2)|A12|/|A| = -(-8)/(-15) = -8/15

Inverse by Determinants - An Example
After using this approach to replace each of the nine elements of A and transposing the resulting matrix, the eventual result is
[-2/15 1/3 7/15; -8/15 1/3 -2/15; 11/15 -1/3 -1/15]
which is A⁻¹!

51 Eigenvalues and Eigenvectors
For a square matrix A, let I be a conformable identity matrix. Then the scalars λ satisfying the polynomial equation |A - λI| = 0 are called the eigenvalues (or characteristic roots) of A. The equation |A - λI| = 0 is called the characteristic equation or the determinantal equation.

Eigenvalues and Eigenvectors
For example, if we have the matrix
A = [2 4; 4 -4]
then
|A - λI| = |2-λ 4; 4 -4-λ| = (2-λ)(-4-λ) - 16 = λ² + 2λ - 24 = 0
which implies there are two roots, or eigenvalues: λ = -6 and λ = 4.
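The characteristic equation of a 2x2 matrix is λ² − tr(A)λ + |A| = 0, so its roots can be found directly and compared with a general eigenvalue routine. A sketch using the 2x2 example matrix as reconstructed here:

```python
import numpy as np

A = np.array([[2., 4.],
              [4., -4.]])

# Characteristic polynomial of a 2x2: lambda^2 - tr(A)*lambda + |A|
coeffs = [1, -np.trace(A), np.linalg.det(A)]   # here: lambda^2 + 2*lambda - 24
roots = np.roots(coeffs)

eigvals = np.linalg.eigvals(A)
assert np.allclose(sorted(roots), sorted(eigvals))
```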

52 Eigenvalues and Eigenvectors
Suppose A is an n x n matrix. Then det(λI - A) = 0 is called the characteristic equation of A. This will yield an nth degree polynomial in λ of the form
f(λ) = λⁿ + c(n-1)λⁿ⁻¹ + ... + c1λ + c0
This is called the characteristic polynomial of A.

Eigenvalues and Eigenvectors
For a matrix A with eigenvalue λ, a nonzero vector x such that Ax = λx is called an eigenvector (or characteristic vector) of A associated with λ.

53 Eigenvalues and Eigenvectors
For example, for the matrix
A = [2 4; 4 -4]
with eigenvalues λ = -6 and λ = 4, the eigenvector of A associated with λ = -6 satisfies Ax = λx:
[2 4; 4 -4][x1 x2]' = -6[x1 x2]'
2x1 + 4x2 = -6x1 implies 8x1 + 4x2 = 0
and
4x1 - 4x2 = -6x2 implies 4x1 + 2x2 = 0
Fixing x1 = 1 yields a solution for x2 of -2.

Eigenvalues and Eigenvectors
Note that eigenvectors are usually normalized so they have unit length, i.e.,
e = x/(x'x)^(1/2)
For our previous example we have
e1 = [1 -2]'/√5
Thus our arbitrary choice to fix x1 = 1 has no impact on the normalized eigenvector associated with λ = -6.

54 Eigenvalues and Eigenvectors
For matrix A and eigenvalue λ = 4, we have Ax = λx:
[2 4; 4 -4][x1 x2]' = 4[x1 x2]'
2x1 + 4x2 = 4x1 implies -2x1 + 4x2 = 0
and
4x1 - 4x2 = 4x2 implies 4x1 - 8x2 = 0
We again arbitrarily fix x1 = 1, which now yields a solution for x2 of 1/2.

Eigenvalues and Eigenvectors
Normalization to unit length yields
e2 = [1 1/2]'/√(5/4) = [2 1]'/√5
Again our arbitrary choice to fix x1 = 1 has no impact on the eigenvector associated with λ = 4.

55 Eigenvalues and Eigenvectors
Computing eigenvalues from eigenvectors is relatively straightforward: for matrix A and an eigenvector e (or x), solve the characteristic equation Ae = λe for the eigenvalue λ, i.e.,
Ae1 = [2 4; 4 -4][1 -2]'/√5 = [-6 12]'/√5 = -6e1, so λ1 = -6

Eigenvalues and Eigenvectors
and for the second eigenvector e2 we have
Ae2 = [2 4; 4 -4][2 1]'/√5 = [8 4]'/√5 = 4e2, so λ2 = 4

56 Eigenvalues and Eigenvectors
Rayleigh Quotients - an alternate method for computing eigenvalues from eigenvectors; the ith eigenvalue can be computed as
λi = (ei'Aei)/(ei'ei)

Eigenvalues and Eigenvectors
For the first eigenvector (using the unnormalized x1 = [1 -2]') we have
λ1 = (x1'Ax1)/(x1'x1) = -30/5 = -6

57 Eigenvalues and Eigenvectors
and for the second eigenvector (using the unnormalized x2 = [2 1]') we have
λ2 = (x2'Ax2)/(x2'x2) = 20/5 = 4

Properties of Matrices Related to Eigenvalues and Eigenvectors
The sum of the eigenvalues of a matrix A is equal to the trace of A:
Σλi = tr(A)
For our previous example
A = [2 4; 4 -4]
which has eigenvalues λ = -6 and λ = 4:
Σλi = -6 + 4 = -2 = 2 + (-4) = tr(A)
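These eigenvalue properties (sum = trace, product = determinant, orthogonality for a symmetric matrix) can be sketched with the same 2x2 example matrix as reconstructed here:

```python
import numpy as np

A = np.array([[2., 4.],
              [4., -4.]])
eigvals, eigvecs = np.linalg.eigh(A)   # A is symmetric, so eigh applies

# Sum of eigenvalues = trace; product of eigenvalues = determinant.
assert np.isclose(eigvals.sum(), np.trace(A))         # -6 + 4 = -2
assert np.isclose(eigvals.prod(), np.linalg.det(A))   # (-6)(4) = -24

# Eigenvectors of distinct eigenvalues of a symmetric matrix are orthogonal.
assert np.isclose(eigvecs[:, 0] @ eigvecs[:, 1], 0)
```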

58 Properties of Matrices Related to Eigenvalues and Eigenvectors
The product of the eigenvalues of a matrix A is equal to the determinant of A:
Πλi = det(A)
For our previous example, which has eigenvalues λ = -6 and λ = 4:
Πλi = (-6)(4) = -24 = det(A)

Properties of Matrices Related to Eigenvalues and Eigenvectors
Two eigenvectors ei and ej associated with two distinct eigenvalues λi and λj of a symmetric matrix are mutually orthogonal, i.e., ei'ej = 0. For our previous example,
e1 = [1 -2]'/√5 and e2 = [2 1]'/√5
and e1'e2 = ((1)(2) + (-2)(1))/5 = 0.

59 Properties of Matrices Related to Eigenvalues and Eigenvectors
Given a set of variables X1, X2, ..., Xp with nonsingular covariance matrix Σ, we can always derive a set of uncorrelated variables Y1, Y2, ..., Yp by a set of linear transformations corresponding to the principal-axes rotation. The covariance matrix of this new set of variables is the diagonal matrix
Λ = V'ΣV
where V is the matrix whose columns are the eigenvectors of Σ.

Properties of Matrices Related to Eigenvalues and Eigenvectors
- The absolute value of a determinant (|det A|) is the product of the absolute values of the eigenvalues of matrix A
- c = 0 is an eigenvalue of A if A is a singular (noninvertible) matrix
- If A is an n x n triangular (upper or lower triangular) or diagonal matrix, the eigenvalues of A are the diagonal entries of A.

60 Properties of Matrices Related to Eigenvalues and Eigenvectors
- A and A' have the same eigenvalues
- The eigenvalues of a symmetric matrix A are all real
- Eigenvectors of a symmetric matrix A corresponding to distinct eigenvalues are orthogonal
- The dominant or principal eigenvector of a matrix A is an eigenvector corresponding to the eigenvalue of largest magnitude (for real numbers, largest absolute value) of that matrix

Properties of Matrices Related to Eigenvalues and Eigenvectors
The smallest eigenvalue of matrix A is the same as the multiplicative inverse (reciprocal) of the largest eigenvalue of A⁻¹.

61 Quadratic Forms
A quadratic form is a function Q(x) = x'Ax in k variables x1, ..., xk, where
x = [x1 x2 ... xk]'
and A is a k x k symmetric matrix.

Quadratic Forms
Note that a quadratic form has only squared terms and crossproducts, and so can be written
Q(x) = ΣiΣj aij xi xj
For example, suppose we have
x = [x1 x2]' and A = [1 2; 2 -1]
then
Q(x) = x'Ax = x1² + 4x1x2 - x2²

62 Spectral Decomposition and Quadratic Forms
Any k x k symmetric matrix A can be expressed in terms of its k eigenvalue-eigenvector pairs (λi, ei) as
A = Σ(i=1..k) λi ei ei'
This is referred to as the spectral decomposition of A.

Spectral Decomposition and Quadratic Forms
For our previous example on eigenvalues and eigenvectors we showed that
A = [2 4; 4 -4]
has eigenvalues λ1 = -6 and λ2 = 4, with corresponding (normalized) eigenvectors
e1 = [1 -2]'/√5 and e2 = [2 1]'/√5

63 Spectral Decomposition and Quadratic Forms
Can we reconstruct A?
A = Σ λi ei ei' = -6(1/5)[1 -2; -2 4] + 4(1/5)[4 2; 2 1] = (1/5)([-6 12; 12 -24] + [16 8; 8 4]) = [2 4; 4 -4] = A

Spectral Decomposition and Quadratic Forms
Spectral decomposition can be used to develop/illustrate many statistical results/concepts. We start with a few basic concepts:
- Nonnegative Definite Matrix - when a k x k symmetric matrix A is such that
0 ≤ x'Ax for all x' = [x1, x2, ..., xk]
the matrix A and the quadratic form are said to be nonnegative definite.
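The reconstruction above can be sketched in a few lines, again using the 2x2 example matrix as reconstructed here:

```python
import numpy as np

A = np.array([[2., 4.],
              [4., -4.]])
eigvals, E = np.linalg.eigh(A)   # symmetric A: real eigenvalues, orthonormal eigenvectors

# Spectral decomposition: A = sum_i lambda_i * e_i e_i'
A_rebuilt = sum(lam * np.outer(E[:, i], E[:, i]) for i, lam in enumerate(eigvals))

assert np.allclose(A_rebuilt, A)
```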

64 Spectral Decomposition and Quadratic Forms
- Positive Definite Matrix - when a k x k symmetric matrix A is such that
0 < x'Ax for all x' = [x1, x2, ..., xk] ≠ [0, 0, ..., 0]
the matrix A and the quadratic form are said to be positive definite.

Spectral Decomposition and Quadratic Forms
Example - show that the following quadratic form is positive definite:
6x1² + 4x2² - 4√2 x1x2
We first rewrite the quadratic form in matrix notation:
Q(x) = [x1 x2][6 -2√2; -2√2 4][x1 x2]' = x'Ax

65 Spectral Decomposition and Quadratic Forms
Now identify the eigenvalues of the resulting matrix A (they are λ1 = 2 and λ2 = 8):
|A - λI| = |6-λ -2√2; -2√2 4-λ| = (6-λ)(4-λ) - (-2√2)(-2√2) = λ² - 10λ + 16 = (λ - 2)(λ - 8) = 0

Spectral Decomposition and Quadratic Forms
Next, using spectral decomposition we can write
A = Σ λi ei ei' = λ1 e1e1' + λ2 e2e2' = 2 e1e1' + 8 e2e2'
where again the vectors ei are the normalized and orthogonal eigenvectors associated with the eigenvalues λ1 = 2 and λ2 = 8.

66 Spectral Decomposition and Quadratic Forms
Sidebar - note again that we can recreate the original matrix A from the spectral decomposition: with
e1 = [1 √2]'/√3 and e2 = [-√2 1]'/√3
we have
A = 2 e1e1' + 8 e2e2' = (1/3)([2 2√2; 2√2 4] + [16 -8√2; -8√2 8]) = [6 -2√2; -2√2 4] = A

Spectral Decomposition and Quadratic Forms
Because λ1 and λ2 are scalars, premultiplication and postmultiplication by x' and x, respectively, yield:
x'Ax = 2x'e1e1'x + 8x'e2e2'x = 2y1² + 8y2² ≥ 0
where
y1 = x'e1 = e1'x and y2 = x'e2 = e2'x
At this point it is obvious that x'Ax is at least nonnegative definite!

67 Spectral Decomposition and Quadratic Forms
We now show that x'Ax is positive definite, i.e.,
x'Ax = 2y1² + 8y2² > 0 for all x ≠ 0
From our definitions of y1 and y2 we have
y = [y1 y2]' = [e1'; e2']x, or y = Ex

Spectral Decomposition and Quadratic Forms
Since E is an orthogonal matrix, E⁻¹ exists. Thus x = E'y, and x ≠ 0 implies y ≠ 0. At this point it is obvious that x'Ax is positive definite!

68 Spectral Decomposition and Quadratic Forms
This suggests rules for determining whether a k x k symmetric matrix A (or equivalently, its quadratic form x'Ax) is nonnegative definite or positive definite:
- A is a nonnegative definite matrix iff λ_i ≥ 0, i = 1, 2, ..., rank(A)
- A is a positive definite matrix iff λ_i > 0, i = 1, 2, ..., rank(A)

Measuring Distance
Euclidean (straight-line) distance — the Euclidean distance between two points x and y (whose coordinates are represented by the elements of the corresponding vectors) in p-space is given by

  d(x, y) = sqrt[ (x_1 - y_1)^2 + ... + (x_p - y_p)^2 ]
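The eigenvalue rules above translate directly into a small checker; a sketch (function name and tolerance are our own choices, not from the slides):

```python
import numpy as np

def classify_definiteness(A, tol=1e-10):
    """Classify a symmetric matrix via its eigenvalues, per the slide rules."""
    eigenvalues = np.linalg.eigvalsh(A)
    if np.all(eigenvalues > tol):
        return "positive definite"
    if np.all(eigenvalues >= -tol):
        return "nonnegative definite"
    return "indefinite"

A = np.array([[6.0, -2.0 * np.sqrt(2.0)],
              [-2.0 * np.sqrt(2.0), 4.0]])   # eigenvalues 2 and 8
singular = np.array([[1.0, 1.0],
                     [1.0, 1.0]])            # eigenvalues 0 and 2

print(classify_definiteness(A))         # positive definite
print(classify_definiteness(singular))  # nonnegative definite
```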

69 Measuring Distance
For the three points z_1, z_2, and z_3 of a previous example, the pairwise Euclidean (straight-line) distances d(z_1, z_2), d(z_1, z_3), and d(z_2, z_3) are each computed by substituting the points' coordinates into this formula.

70 Measuring Distance
Notice that the length of a vector is its distance from the origin:

  d(0, P) = sqrt[ (x_1 - 0)^2 + ... + (x_p - 0)^2 ] = sqrt( x_1^2 + ... + x_p^2 )

This is yet another place where the Pythagorean Theorem rears its head!

Measuring Distance
Notice also that if we connect all points equidistant from some given point z, the result is a hypersphere with its center at z. In p = 2 dimensions this yields a circle of radius r centered at z, with area πr^2.

71 Measuring Distance
In p = 2 dimensions we actually talk about area. In p ≥ 3 dimensions we talk about volume — which is (4/3)πr^3 for p = 3 (a sphere of radius r centered at z) or, more generally,

  r^n π^(n/2) / Γ(n/2 + 1)

Measuring Distance
Problem — what if the coordinates of a point x (i.e., the elements of vector x) are random variables with differing variances? Suppose:
- we have n pairs of measurements on two variables X_1 and X_2, each having a mean of zero
- X_1 is more variable than X_2
- X_1 and X_2 vary independently

72 Measuring Distance
A scatter diagram of these data would be stretched along the X_1 axis. Which point really lies further from the origin in statistical terms (i.e., which point is less likely to have occurred randomly)? Euclidean distance does not account for differences in the variation of X_1 and X_2!

Measuring Distance
Notice that a circle does not efficiently inscribe such data — an ellipse with semi-axes r_1 and r_2 does so much more efficiently! The area of the ellipse is πr_1r_2.

73 Measuring Distance
How do we take the relative dispersions on the two axes into consideration? We standardize each value of X_i by dividing by its standard deviation.

Measuring Distance
Note that the problem can extend beyond two dimensions. The volume of the ellipsoid with semi-axes r_1, r_2, r_3 is (4/3)πr_1r_2r_3 or, more generally,

  ( Π_{i=1}^n r_i ) π^(n/2) / Γ(n/2 + 1)

74 Measuring Distance
If we are looking at distances from the origin, d(0, P), we could divide coordinate i by its sample standard deviation sqrt(s_ii):

  x_i* = x_i / sqrt(s_ii)

Measuring Distance
The resulting measure is called Statistical Distance or Mahalanobis Distance:

  d(0, P) = sqrt( x_1*^2 + ... + x_p*^2 ) = sqrt( x_1^2/s_11 + ... + x_p^2/s_pp )

where x_1 has a relative weight of k_1 = 1/s_11 and x_p has a relative weight of k_p = 1/s_pp.
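The contrast between the two measures is easy to see numerically; a sketch with illustrative sample variances (the values below are assumptions, not the slides' data):

```python
import numpy as np

# Hypothetical sample variances for the two coordinates.
s11, s22 = 4.0, 1.0
point = np.array([2.0, 1.0])

# Euclidean distance from the origin ignores the differing variances.
euclidean = np.sqrt(np.sum(point ** 2))

# Statistical (Mahalanobis) distance divides each coordinate by its std. deviation.
statistical = np.sqrt(point[0] ** 2 / s11 + point[1] ** 2 / s22)

print(euclidean)    # sqrt(5), about 2.236
print(statistical)  # sqrt(4/4 + 1/1) = sqrt(2), about 1.414
```

The first coordinate, lying on the high-variance axis, contributes less to the statistical distance than to the Euclidean one.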

75 Measuring Distance
Note that if we plot all points a constant squared distance c^2 from the origin — i.e., all points that satisfy

  d^2(0, P) = x_1^2/s_11 + ... + x_p^2/s_pp = c^2

— then in p = 2 dimensions we get an ellipse with semi-axes c·sqrt(s_11) and c·sqrt(s_22), whose area is π(c·sqrt(s_11))(c·sqrt(s_22)).

Measuring Distance
What if the scatter diagram of these data looked different — with X_1 and X_2 now having an obvious positive correlation?

76 Measuring Distance
We can plot a rotated coordinate system on axes x̃_1 and x̃_2, rotated through an angle θ. This suggests that we calculate distance based on the rotated axes x̃_1 and x̃_2.

Measuring Distance
The relation between the original coordinates (x_1, x_2) and the rotated coordinates (x̃_1, x̃_2) is provided by:

  x̃_1 = x_1 cos(θ) + x_2 sin(θ)
  x̃_2 = -x_1 sin(θ) + x_2 cos(θ)

77 Measuring Distance
Now we can write the distance from P = (x̃_1, x̃_2) to the origin in terms of the original coordinates x_1 and x_2 of P as

  d(0, P) = sqrt( a_11 x_1^2 + 2 a_12 x_1 x_2 + a_22 x_2^2 )

where

  a_11 = cos^2(θ) / [cos^2(θ)s_11 + 2sin(θ)cos(θ)s_12 + sin^2(θ)s_22]
       + sin^2(θ) / [cos^2(θ)s_22 - 2sin(θ)cos(θ)s_12 + sin^2(θ)s_11]

Measuring Distance
and

  a_22 = sin^2(θ) / [cos^2(θ)s_11 + 2sin(θ)cos(θ)s_12 + sin^2(θ)s_22]
       + cos^2(θ) / [cos^2(θ)s_22 - 2sin(θ)cos(θ)s_12 + sin^2(θ)s_11]

  a_12 = cos(θ)sin(θ) / [cos^2(θ)s_11 + 2sin(θ)cos(θ)s_12 + sin^2(θ)s_22]
       - sin(θ)cos(θ) / [cos^2(θ)s_22 - 2sin(θ)cos(θ)s_12 + sin^2(θ)s_11]

78 Measuring Distance
Note that the distance from P = (x_1, x_2) to the origin takes the same general form

  d(0, P) = sqrt( a_11 x_1^2 + 2 a_12 x_1 x_2 + a_22 x_2^2 )

and that for uncorrelated coordinates x_1 and x_2 (s_12 = 0, θ = 0) the weights reduce to a_11 = 1/s_11, a_22 = 1/s_22, and a_12 = 0.

Measuring Distance
What if we wish to measure distance from some fixed point Q = (y_1, y_2)? In this diagram, Q = (y_1, y_2) = (x̄_1, x̄_2) is called the centroid of the data.

79 Measuring Distance
The distance from any point P = (x_1, x_2) to some fixed point Q = (y_1, y_2) is

  d(P, Q) = sqrt[ a_11(x_1 - y_1)^2 + 2a_12(x_1 - y_1)(x_2 - y_2) + a_22(x_2 - y_2)^2 ]

Measuring Distance
Suppose we have ten bivariate observations (coordinate sets of (x_1, x_2)).
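The distance formula above can be evaluated directly once the weights a_ij are known; for a sample covariance matrix S, the weights are the entries of S^-1. A sketch with an illustrative covariance matrix and points (all values here are assumptions, not the slides' data):

```python
import numpy as np

# Illustrative positively correlated sample covariance matrix.
S = np.array([[3.0, 2.0],
              [2.0, 3.0]])
A = np.linalg.inv(S)  # the weights a_ij in the distance formula

P = np.array([3.0, 4.0])
Q = np.array([1.0, 1.0])  # fixed point, e.g. the centroid
d = P - Q

# d(P,Q)^2 = a11*(x1-y1)^2 + 2*a12*(x1-y1)*(x2-y2) + a22*(x2-y2)^2
dist_sq = A[0, 0] * d[0] ** 2 + 2 * A[0, 1] * d[0] * d[1] + A[1, 1] * d[1] ** 2
print(np.sqrt(dist_sq))  # the Mahalanobis distance from P to Q
```

The expanded sum is exactly the quadratic form (P - Q)' S^-1 (P - Q).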

80 Measuring Distance
The plot of these points shows them scattered around their centroid; the data suggest a positive correlation between x_1 and x_2.

Measuring Distance
The inscribing ellipse (and its major and minor axes) is rotated through θ = 45°.

81 Measuring Distance
The rotational weights are obtained by evaluating the weight formulas at θ = 45° with the sample variances and covariance of these data:

  a_11 = cos^2(45°) / [cos^2(45°)s_11 + 2sin(45°)cos(45°)s_12 + sin^2(45°)s_22]
       + sin^2(45°) / [cos^2(45°)s_22 - 2sin(45°)cos(45°)s_12 + sin^2(45°)s_11]

Measuring Distance
and:

  a_22 = sin^2(45°) / [cos^2(45°)s_11 + 2sin(45°)cos(45°)s_12 + sin^2(45°)s_22]
       + cos^2(45°) / [cos^2(45°)s_22 - 2sin(45°)cos(45°)s_12 + sin^2(45°)s_11]

82 Measuring Distance
and:

  a_12 = cos(45°)sin(45°) / [cos^2(45°)s_11 + 2sin(45°)cos(45°)s_12 + sin^2(45°)s_22]
       - sin(45°)cos(45°) / [cos^2(45°)s_22 - 2sin(45°)cos(45°)s_12 + sin^2(45°)s_11]

Measuring Distance
With these weights we can tabulate, for each observed point, its rotated coordinates x̃_1 and x̃_2 and both its Euclidean and its Mahalanobis distance D(P, Q) from the centroid Q.

83 Measuring Distance
Mahalanobis distance can easily be generalized to p dimensions:

  d(P, Q) = sqrt[ Σ_{i=1}^p a_ii (x_i - y_i)^2 + 2 Σ_{i=1}^{p-1} Σ_{j=i+1}^p a_ij (x_i - y_i)(x_j - y_j) ]

and all points satisfying

  Σ_{i=1}^p a_ii (x_i - y_i)^2 + 2 Σ_{i=1}^{p-1} Σ_{j=i+1}^p a_ij (x_i - y_i)(x_j - y_j) = c^2

form a hyperellipsoid with centroid Q.

Measuring Distance
Now let's backtrack — the Mahalanobis distance of a random p-dimensional point P from the origin is given by

  d(0, P) = sqrt[ Σ_{i=1}^p a_ii x_i^2 + 2 Σ_{i=1}^{p-1} Σ_{j=i+1}^p a_ij x_i x_j ]

so we can say

  d^2(0, P) = Σ_{i=1}^p a_ii x_i^2 + 2 Σ_{i=1}^{p-1} Σ_{j=i+1}^p a_ij x_i x_j

provided that d^2 > 0 for all x ≠ 0.
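In code, the double sum collapses into a single matrix quadratic form; a sketch with an illustrative 3 x 3 weight matrix (the matrix and points are our own, not from the slides):

```python
import numpy as np

def generalized_dist_sq(x, y, A):
    """Squared generalized distance:
    sum_i a_ii*(x_i - y_i)^2 + 2*sum_{i<j} a_ij*(x_i - y_i)*(x_j - y_j),
    computed compactly as the quadratic form d' A d."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(d @ A @ d)

# Illustrative symmetric positive definite weight matrix.
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 1.5]])
print(generalized_dist_sq([1, 2, 3], [0, 0, 0], A))
```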

84 Measuring Distance
Recognizing that a_ij = a_ji, i ≠ j, i = 1, ..., p, j = 1, ..., p, we have

  0 < d^2(0, P) = [x_1 x_2 ... x_p] [ a_11 a_12 ... a_1p ] [x_1]
                                    [ a_12 a_22 ... a_2p ] [x_2] = x'Ax
                                    [  :    :   ...   :  ] [ : ]
                                    [ a_1p a_2p ... a_pp ] [x_p]

for x ≠ 0.

Measuring Distance
Thus, the p x p symmetric matrix A is positive definite, i.e., distance is determined from a positive definite quadratic form x'Ax! We can also conclude from this result that a positive definite quadratic form can be interpreted as a squared distance! Finally, if the square of the distance from point x to the origin is given by x'Ax, then the square of the distance from point x to some arbitrary fixed point μ is given by (x - μ)'A(x - μ).

85 Measuring Distance
Expressing distance as the square root of a positive definite quadratic form yields an interesting geometric interpretation based on the eigenvalues and eigenvectors of A. For example, in p = 2 dimensions all points

  x = [x_1]
      [x_2]

that are a constant distance c from the origin must satisfy

  x'Ax = a_11 x_1^2 + 2 a_12 x_1 x_2 + a_22 x_2^2 = c^2

Measuring Distance
By the spectral decomposition we have

  A = Σ_{i=1}^k λ_i e_i e_i' = λ_1 e_1 e_1' + λ_2 e_2 e_2'

so by substitution we now have

  x'Ax = λ_1 (x'e_1)^2 + λ_2 (x'e_2)^2 = c^2

and A is positive definite, so λ_1 > 0 and λ_2 > 0, which means

  c^2 = λ_1 (x'e_1)^2 + λ_2 (x'e_2)^2

is an ellipse.

86 Measuring Distance
Finally, a little algebra can be used to show that x = (c/√λ_1) e_1 satisfies

  x'Ax = λ_1 [ (c/√λ_1) e_1'e_1 ]^2 = c^2

Measuring Distance
Similarly, a little algebra can be used to show that x = (c/√λ_2) e_2 satisfies

  x'Ax = λ_2 [ (c/√λ_2) e_2'e_2 ]^2 = c^2
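Both claims can be verified numerically with the earlier example matrix; the constant c below is an arbitrary choice:

```python
import numpy as np

A = np.array([[6.0, -2.0 * np.sqrt(2.0)],
              [-2.0 * np.sqrt(2.0), 4.0]])
eigenvalues, eigenvectors = np.linalg.eigh(A)  # ascending: lambda_1 = 2, lambda_2 = 8
c = 3.0

# x = (c / sqrt(lambda_i)) * e_i should satisfy x'Ax = c^2 for each eigenpair.
for lam, e in zip(eigenvalues, eigenvectors.T):
    x = (c / np.sqrt(lam)) * e
    print(x @ A @ x)  # c^2 = 9 in both cases
```

Since e_i'Ae_i = λ_i for a unit eigenvector, scaling by c/√λ_i lands the point exactly on the constant-distance ellipse.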

87 Measuring Distance
So the points at a distance c lie on an ellipse whose axes are given by the eigenvectors of A, with lengths proportional to the reciprocals of the square roots of the corresponding eigenvalues (with constant of proportionality c): semi-axis c/√λ_1 along e_1 and semi-axis c/√λ_2 along e_2. This generalizes to p dimensions.

Square Root Matrices
Because spectral decomposition allows us to express the inverse of a square matrix in terms of its eigenvalues and eigenvectors, it enables us to conveniently create a square root matrix. Let A be a p x p positive definite matrix with the spectral decomposition

  A = Σ_{i=1}^k λ_i e_i e_i'

88 Square Root Matrices
Also let P be the matrix whose columns are the normalized eigenvectors e_1, e_2, ..., e_p of A, i.e.,

  P = [e_1 e_2 ... e_p]

Then

  A = Σ_{i=1}^k λ_i e_i e_i' = PΛP'

where P'P = PP' = I and

  Λ = [ λ_1  0  ...  0  ]
      [  0  λ_2 ...  0  ]
      [  :   :  ...  :  ]
      [  0   0  ... λ_p ]

Square Root Matrices
Now since (PΛ^-1 P')PΛP' = PΛP'(PΛ^-1 P') = PP' = I, we have

  A^-1 = PΛ^-1 P' = Σ_{i=1}^k (1/λ_i) e_i e_i'

Next let

  Λ^(1/2) = [ √λ_1   0   ...   0   ]
            [   0   √λ_2 ...   0   ]
            [   :     :  ...   :   ]
            [   0     0  ...  √λ_p ]

89 Square Root Matrices
The matrix

  A^(1/2) = PΛ^(1/2)P' = Σ_{i=1}^k √λ_i e_i e_i'

is called the square root of A.

Square Root Matrices
The square root of A has the following properties:

  (A^(1/2))' = A^(1/2)
  A^(1/2) A^(1/2) = A
  (A^(1/2))^-1 = Σ_{i=1}^k (1/√λ_i) e_i e_i' = PΛ^(-1/2)P'
  A^(1/2) A^(-1/2) = A^(-1/2) A^(1/2) = I
  A^(-1/2) A^(-1/2) = A^-1, where A^(-1/2) = (A^(1/2))^-1
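Constructing A^(1/2) via the eigendecomposition, and checking the listed properties, takes only a few lines; a sketch using the earlier example matrix:

```python
import numpy as np

A = np.array([[6.0, -2.0 * np.sqrt(2.0)],
              [-2.0 * np.sqrt(2.0), 4.0]])
eigenvalues, P = np.linalg.eigh(A)                       # A = P Lambda P'
A_half = P @ np.diag(np.sqrt(eigenvalues)) @ P.T         # A^(1/2) = P Lambda^(1/2) P'
A_neg_half = P @ np.diag(1.0 / np.sqrt(eigenvalues)) @ P.T  # A^(-1/2)

print(np.allclose(A_half, A_half.T))                     # (A^(1/2))' = A^(1/2)
print(np.allclose(A_half @ A_half, A))                   # A^(1/2) A^(1/2) = A
print(np.allclose(A_half @ A_neg_half, np.eye(2)))       # A^(1/2) A^(-1/2) = I
print(np.allclose(A_neg_half @ A_neg_half, np.linalg.inv(A)))  # = A^(-1)
```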

90 Square Root Matrices
Next let Λ^(-1/2) denote the diagonal matrix with elements 1/√λ_1, 1/√λ_2, ..., 1/√λ_p. Then, with P again the matrix whose columns are the normalized eigenvectors e_1, e_2, ..., e_p of A,

  A^(-1/2) = PΛ^(-1/2)P' = Σ_{i=1}^k (1/√λ_i) e_i e_i'

Random Vectors and Matrices
Random Vector — a vector whose individual elements are random variables
Random Matrix — a matrix whose individual elements are random variables

91 Random Vectors and Matrices
The expected value of a random vector or matrix is the vector or matrix containing the expected values of the individual elements, i.e.,

  E[X] = [ E[x_11] E[x_12] ... E[x_1p] ]
         [ E[x_21] E[x_22] ... E[x_2p] ]
         [    :       :    ...    :    ]
         [ E[x_n1] E[x_n2] ... E[x_np] ]

Random Vectors and Matrices
where

  E[x_ij] = Σ_{all x_ij} x_ij p_ij(x_ij)   if x_ij is a discrete random variable
  E[x_ij] = ∫ x_ij f_ij(x_ij) dx_ij        if x_ij is a continuous random variable

92 Random Vectors and Matrices
Note that for random matrices X and Y of the same dimension, conformable matrices of constants A and B, and scalar c:

  E(cX) = cE(X)
  E(X + Y) = E(X) + E(Y)
  E(AXB) = AE(X)B

Random Vectors and Matrices
Mean Vector — a vector whose elements are the means of the corresponding random variables, i.e.,

  E[x_i] = μ_i = Σ_{all x_i} x_i p_i(x_i)   if x_i is discrete
  E[x_i] = μ_i = ∫ x_i f_i(x_i) dx_i        if x_i is continuous

93 Random Vectors and Matrices
In matrix notation we can write the mean vector as

  E[X] = [ E[x_1] ]   [ μ_1 ]
         [ E[x_2] ] = [ μ_2 ] = μ
         [   :    ]   [  :  ]
         [ E[x_p] ]   [ μ_p ]

Random Vectors and Matrices
For a bivariate probability distribution with marginal distributions p_1(x_1) and p_2(x_2), the mean vector is

  μ = E[X] = [ E[x_1] ] = [ Σ_{all x_1} x_1 p_1(x_1) ]
             [ E[x_2] ]   [ Σ_{all x_2} x_2 p_2(x_2) ]

94 Random Vectors and Matrices
Covariance Matrix — a symmetric matrix whose diagonal elements are the variances of the corresponding random variables, i.e.,

  E[(x_i - μ_i)^2] = σ_ii = Σ_{all x_i} (x_i - μ_i)^2 p_i(x_i)   (discrete case)
  E[(x_i - μ_i)^2] = σ_ii = ∫ (x_i - μ_i)^2 f_i(x_i) dx_i        (continuous case)

Random Vectors and Matrices
and whose off-diagonal elements are the covariances of the corresponding random variable pairs, i.e.,

  E[(x_i - μ_i)(x_k - μ_k)] = σ_ik = Σ_{all x_i} Σ_{all x_k} (x_i - μ_i)(x_k - μ_k) p_ik(x_i, x_k)   (discrete case)
  E[(x_i - μ_i)(x_k - μ_k)] = σ_ik = ∫∫ (x_i - μ_i)(x_k - μ_k) f_ik(x_i, x_k) dx_i dx_k              (continuous case)

Notice that this expression, when i = k, returns the variance, i.e., σ_ii = σ_i^2.

95 Random Vectors and Matrices
In matrix notation we can write the covariance matrix as

  Σ = E[(X - μ)(X - μ)']

    = [ E(x_1 - μ_1)^2           E(x_1 - μ_1)(x_2 - μ_2)  ...  E(x_1 - μ_1)(x_p - μ_p) ]   [ σ_11 σ_12 ... σ_1p ]
      [ E(x_2 - μ_2)(x_1 - μ_1)  E(x_2 - μ_2)^2           ...  E(x_2 - μ_2)(x_p - μ_p) ] = [ σ_21 σ_22 ... σ_2p ]
      [      :                         :                  ...         :                ]   [  :    :   ...   :  ]
      [ E(x_p - μ_p)(x_1 - μ_1)  E(x_p - μ_p)(x_2 - μ_2)  ...  E(x_p - μ_p)^2          ]   [ σ_p1 σ_p2 ... σ_pp ]

Random Vectors and Matrices
For the bivariate probability distribution we used earlier, the covariance matrix is

  E[(X - μ)(X - μ)'] = [ E(x_1 - μ_1)^2           E(x_1 - μ_1)(x_2 - μ_2) ] = [ σ_11 σ_12 ]
                       [ E(x_2 - μ_2)(x_1 - μ_1)  E(x_2 - μ_2)^2          ]   [ σ_21 σ_22 ]

96 Random Vectors and Matrices
which can be computed as

  σ_11 = Σ_{all x_1} (x_1 - μ_1)^2 p_1(x_1)
  σ_22 = Σ_{all x_2} (x_2 - μ_2)^2 p_2(x_2)
  σ_12 = σ_21 = Σ_{all x_1} Σ_{all x_2} (x_1 - μ_1)(x_2 - μ_2) p_12(x_1, x_2)

Random Vectors and Matrices
Thus the population represented by the bivariate probability distribution has population mean vector μ and variance-covariance matrix Σ assembled from these quantities.
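The sums above can be carried out mechanically for any discrete joint distribution; a sketch using a hypothetical bivariate pmf (the slides' own table did not survive transcription, so the values below are assumptions):

```python
import numpy as np

# Hypothetical joint pmf: rows index x1 values, columns index x2 values.
x1_vals = np.array([0.0, 1.0])
x2_vals = np.array([-1.0, 0.0, 1.0])
p = np.array([[0.1, 0.2, 0.1],
              [0.2, 0.1, 0.3]])   # joint probabilities, summing to 1

mu1 = np.sum(x1_vals[:, None] * p)   # E[x1] from the joint pmf
mu2 = np.sum(x2_vals[None, :] * p)   # E[x2]
d1 = x1_vals[:, None] - mu1
d2 = x2_vals[None, :] - mu2
s11 = np.sum(d1 ** 2 * p)            # sigma_11 = Var(x1)
s22 = np.sum(d2 ** 2 * p)            # sigma_22 = Var(x2)
s12 = np.sum(d1 * d2 * p)            # sigma_12 = Cov(x1, x2)

mu = np.array([mu1, mu2])
Sigma = np.array([[s11, s12],
                  [s12, s22]])
print(mu)
print(Sigma)
```

Summing over the joint table automatically reproduces the marginal sums for the variances.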

97 Random Vectors and Matrices
We can use the population variance-covariance matrix Σ to calculate the population correlation matrix ρ. Individual population correlation coefficients are defined as

  ρ_ik = σ_ik / ( √σ_ii √σ_kk )

Random Vectors and Matrices
In matrix notation we can write the correlation matrix as

  ρ = [ σ_11/(√σ_11 √σ_11)  σ_12/(√σ_11 √σ_22)  ...  σ_1p/(√σ_11 √σ_pp) ]   [  1   ρ_12 ... ρ_1p ]
      [ σ_12/(√σ_11 √σ_22)  σ_22/(√σ_22 √σ_22)  ...  σ_2p/(√σ_22 √σ_pp) ] = [ ρ_12  1   ... ρ_2p ]
      [        :                    :           ...          :          ]   [  :    :   ...   :  ]
      [ σ_1p/(√σ_11 √σ_pp)  σ_2p/(√σ_22 √σ_pp)  ...  σ_pp/(√σ_pp √σ_pp) ]   [ ρ_1p ρ_2p ...  1   ]

98 Random Vectors and Matrices
We can easily show that

  Σ = V^(1/2) ρ V^(1/2)

where

  V^(1/2) = [ √σ_11   0   ...   0   ]
            [   0   √σ_22 ...   0   ]
            [   :     :   ...   :   ]
            [   0     0   ... √σ_pp ]

which implies

  ρ = (V^(1/2))^-1 Σ (V^(1/2))^-1

Random Vectors and Matrices
For the bivariate probability distribution we used earlier, the square root of the variance matrix is

  V^(1/2) = [ √σ_11   0   ]
            [   0   √σ_22 ]
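The identity ρ = (V^(1/2))^-1 Σ (V^(1/2))^-1 is easy to verify numerically; a sketch with an illustrative covariance matrix (our own values, not the slides'):

```python
import numpy as np

# Illustrative covariance matrix.
Sigma = np.array([[4.0, 1.0],
                  [1.0, 9.0]])

V_half = np.diag(np.sqrt(np.diag(Sigma)))   # V^(1/2) = diag(sqrt(sigma_ii))
V_half_inv = np.linalg.inv(V_half)
rho = V_half_inv @ Sigma @ V_half_inv       # rho = (V^(1/2))^-1 Sigma (V^(1/2))^-1

print(rho)  # unit diagonal; off-diagonal is sigma_12/(2*3) = 1/6
# Round trip: Sigma = V^(1/2) rho V^(1/2)
print(np.allclose(V_half @ rho @ V_half, Sigma))
```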

99 Random Vectors and Matrices
so the population correlation matrix is

  ρ = (V^(1/2))^-1 Σ (V^(1/2))^-1 = [  1   ρ_12 ]
                                    [ ρ_12  1   ]

Random Vectors and Matrices
We often deal with variables that naturally fall into groups. In the simplest case, we have two groups of variables, of sizes q and p - q. Under such circumstances, it may be convenient to partition the matrices and vectors.

100 Random Vectors and Matrices
Here we have a partitioned mean vector and variance-covariance matrix:

  μ = [ μ^(1) ]   where μ^(1) = [μ_1 ... μ_q]' and μ^(2) = [μ_{q+1} ... μ_p]'
      [ μ^(2) ]

  Σ = [ Σ_11  Σ_12 ]   with Σ_11 holding σ_11 ... σ_qq, Σ_12 holding σ_1,q+1 ... σ_qp,
      [ Σ_21  Σ_22 ]   and Σ_22 holding σ_{q+1,q+1} ... σ_pp

Random Vectors and Matrices
Rules for Mean Vectors and Covariance Matrices for Linear Combinations of Random Variables — the linear combination of real constants c and random variables X,

  c'X = c_1x_1 + c_2x_2 + ... + c_px_p

has mean

  E[c'X] = c'E[X] = c'μ

101 Random Vectors and Matrices
The linear combination of real constants c and random variables X,

  c'X = c_1x_1 + c_2x_2 + ... + c_px_p

also has variance

  Var(c'X) = c'Σc

Random Vectors and Matrices
Suppose, for example, we have the random vector

  X = [ X_1 ]
      [ X_2 ]

and we want to find the mean vector and covariance matrix for the linear combinations

  Z_1 = X_1 - X_2
  Z_2 = X_1 + X_2

i.e.,

  Z = [ Z_1 ] = [ 1 -1 ] [ X_1 ] = CX
      [ Z_2 ]   [ 1  1 ] [ X_2 ]

102 Random Vectors and Matrices
We have mean vector

  μ_Z = E[Z] = Cμ_X = [ μ_1 - μ_2 ]
                      [ μ_1 + μ_2 ]

and covariance matrix

  Σ_Z = Cov(Z) = CΣ_XC' = [ σ_11 - 2σ_12 + σ_22   σ_11 - σ_22          ]
                          [ σ_11 - σ_22           σ_11 + 2σ_12 + σ_22  ]

Note what happens if σ_11 = σ_22 — the off-diagonal terms vanish (i.e., the sum and difference of two random variables with identical variances are uncorrelated).

Summary
There are MANY other matrix algebra results that will be important to our study of statistical modeling. This site will be updated occasionally to reflect these results as they become necessary.
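The rules μ_Z = Cμ_X and Σ_Z = CΣ_XC' can be checked numerically; a sketch with hypothetical parameter values chosen so that σ_11 = σ_22, making the vanishing off-diagonals visible:

```python
import numpy as np

# Hypothetical mean vector and covariance for X = [X1, X2]'.
mu_X = np.array([2.0, 5.0])
Sigma_X = np.array([[3.0, 1.0],
                    [1.0, 3.0]])   # equal variances: sigma_11 == sigma_22

# Z1 = X1 - X2, Z2 = X1 + X2  =>  Z = C X
C = np.array([[1.0, -1.0],
              [1.0,  1.0]])

mu_Z = C @ mu_X              # E[Z] = C mu_X
Sigma_Z = C @ Sigma_X @ C.T  # Cov(Z) = C Sigma_X C'

print(mu_Z)     # [-3.  7.]
print(Sigma_Z)  # off-diagonals are 0 because sigma_11 == sigma_22
```

With unequal variances, the off-diagonal entries would equal σ_11 - σ_22 as in the formula above.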


More information

STAT200C: Review of Linear Algebra

STAT200C: Review of Linear Algebra Stat200C Instructor: Zhaoxia Yu STAT200C: Review of Linear Algebra 1 Review of Linear Algebra 1.1 Vector Spaces, Rank, Trace, and Linear Equations 1.1.1 Rank and Vector Spaces Definition A vector whose

More information

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i ) Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical

More information

Mathematical Foundations of Applied Statistics: Matrix Algebra

Mathematical Foundations of Applied Statistics: Matrix Algebra Mathematical Foundations of Applied Statistics: Matrix Algebra Steffen Unkel Department of Medical Statistics University Medical Center Göttingen, Germany Winter term 2018/19 1/105 Literature Seber, G.

More information

Notes on Linear Algebra and Matrix Theory

Notes on Linear Algebra and Matrix Theory Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a

More information

Mathematics. EC / EE / IN / ME / CE. for

Mathematics.   EC / EE / IN / ME / CE. for Mathematics for EC / EE / IN / ME / CE By www.thegateacademy.com Syllabus Syllabus for Mathematics Linear Algebra: Matrix Algebra, Systems of Linear Equations, Eigenvalues and Eigenvectors. Probability

More information

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column

More information

Introduction to Matrices

Introduction to Matrices 214 Analysis and Design of Feedback Control Systems Introduction to Matrices Derek Rowell October 2002 Modern system dynamics is based upon a matrix representation of the dynamic equations governing the

More information

Phys 201. Matrices and Determinants

Phys 201. Matrices and Determinants Phys 201 Matrices and Determinants 1 1.1 Matrices 1.2 Operations of matrices 1.3 Types of matrices 1.4 Properties of matrices 1.5 Determinants 1.6 Inverse of a 3 3 matrix 2 1.1 Matrices A 2 3 7 =! " 1

More information

Digital Workbook for GRA 6035 Mathematics

Digital Workbook for GRA 6035 Mathematics Eivind Eriksen Digital Workbook for GRA 6035 Mathematics November 10, 2014 BI Norwegian Business School Contents Part I Lectures in GRA6035 Mathematics 1 Linear Systems and Gaussian Elimination........................

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

Linear Algebra V = T = ( 4 3 ).

Linear Algebra V = T = ( 4 3 ). Linear Algebra Vectors A column vector is a list of numbers stored vertically The dimension of a column vector is the number of values in the vector W is a -dimensional column vector and V is a 5-dimensional

More information

Symmetric and anti symmetric matrices

Symmetric and anti symmetric matrices Symmetric and anti symmetric matrices In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if. A = A Because equal matrices have equal

More information

Stat 206: Linear algebra

Stat 206: Linear algebra Stat 206: Linear algebra James Johndrow (adapted from Iain Johnstone s notes) 2016-11-02 Vectors We have already been working with vectors, but let s review a few more concepts. The inner product of two

More information

Linear Systems and Matrices

Linear Systems and Matrices Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Appendix A: Matrices

Appendix A: Matrices Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Linear Algebra Homework and Study Guide

Linear Algebra Homework and Study Guide Linear Algebra Homework and Study Guide Phil R. Smith, Ph.D. February 28, 20 Homework Problem Sets Organized by Learning Outcomes Test I: Systems of Linear Equations; Matrices Lesson. Give examples of

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Matrix Algebra Determinant, Inverse matrix. Matrices. A. Fabretti. Mathematics 2 A.Y. 2015/2016. A. Fabretti Matrices

Matrix Algebra Determinant, Inverse matrix. Matrices. A. Fabretti. Mathematics 2 A.Y. 2015/2016. A. Fabretti Matrices Matrices A. Fabretti Mathematics 2 A.Y. 2015/2016 Table of contents Matrix Algebra Determinant Inverse Matrix Introduction A matrix is a rectangular array of numbers. The size of a matrix is indicated

More information

MAT 610: Numerical Linear Algebra. James V. Lambers

MAT 610: Numerical Linear Algebra. James V. Lambers MAT 610: Numerical Linear Algebra James V Lambers January 16, 2017 2 Contents 1 Matrix Multiplication Problems 7 11 Introduction 7 111 Systems of Linear Equations 7 112 The Eigenvalue Problem 8 12 Basic

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

MATRICES AND ITS APPLICATIONS

MATRICES AND ITS APPLICATIONS MATRICES AND ITS Elementary transformations and elementary matrices Inverse using elementary transformations Rank of a matrix Normal form of a matrix Linear dependence and independence of vectors APPLICATIONS

More information

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes

More information

4. Determinants.

4. Determinants. 4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.

More information

Image Registration Lecture 2: Vectors and Matrices

Image Registration Lecture 2: Vectors and Matrices Image Registration Lecture 2: Vectors and Matrices Prof. Charlene Tsai Lecture Overview Vectors Matrices Basics Orthogonal matrices Singular Value Decomposition (SVD) 2 1 Preliminary Comments Some of this

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12 24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

Introduc)on to linear algebra

Introduc)on to linear algebra Introduc)on to linear algebra Vector A vector, v, of dimension n is an n 1 rectangular array of elements v 1 v v = 2 " v n % vectors will be column vectors. They may also be row vectors, when transposed

More information

Linear Algebra Formulas. Ben Lee

Linear Algebra Formulas. Ben Lee Linear Algebra Formulas Ben Lee January 27, 2016 Definitions and Terms Diagonal: Diagonal of matrix A is a collection of entries A ij where i = j. Diagonal Matrix: A matrix (usually square), where entries

More information

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018 Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry

More information

Math 315: Linear Algebra Solutions to Assignment 7

Math 315: Linear Algebra Solutions to Assignment 7 Math 5: Linear Algebra s to Assignment 7 # Find the eigenvalues of the following matrices. (a.) 4 0 0 0 (b.) 0 0 9 5 4. (a.) The characteristic polynomial det(λi A) = (λ )(λ )(λ ), so the eigenvalues are

More information

Topic 1: Matrix diagonalization

Topic 1: Matrix diagonalization Topic : Matrix diagonalization Review of Matrices and Determinants Definition A matrix is a rectangular array of real numbers a a a m a A = a a m a n a n a nm The matrix is said to be of order n m if it

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued)

1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued) 1 A linear system of equations of the form Sections 75, 78 & 81 a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2 a m1 x 1 + a m2 x 2 + + a mn x n = b m can be written in matrix

More information

Matrix Algebra: Summary

Matrix Algebra: Summary May, 27 Appendix E Matrix Algebra: Summary ontents E. Vectors and Matrtices.......................... 2 E.. Notation.................................. 2 E..2 Special Types of Vectors.........................

More information

Chapter 3. Matrices. 3.1 Matrices

Chapter 3. Matrices. 3.1 Matrices 40 Chapter 3 Matrices 3.1 Matrices Definition 3.1 Matrix) A matrix A is a rectangular array of m n real numbers {a ij } written as a 11 a 12 a 1n a 21 a 22 a 2n A =.... a m1 a m2 a mn The array has m rows

More information

APPENDIX A. Matrix Algebra. a n1 a n2 g a nk

APPENDIX A. Matrix Algebra. a n1 a n2 g a nk APPENDIX A Matrix Algebra A.1 TERMINOLOGY A matrix is a rectangular array of numbers, denoted a 11 a 12 g a 1K a A = [a ik ] = [A] ik = D 21 a 22 g a 2K T. g a n1 a n2 g a nk (A-1) The typical element

More information