Introduction to Quantitative Techniques for MSc Programmes
SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS
MALET STREET, LONDON WC1E 7HX
September 2007

Who are these courses for?

The September Quantitative Techniques courses review the basic mathematical and statistical tools needed for the various MSc programmes in the School. All MSc students, especially new ones, begin with Introduction to Quantitative Techniques in week 1. Then, in weeks 2-4, students specialize in material relevant to their chosen programme. In particular:

MSc Economics. Part-time students in their initial year cover Mathematics (evening), while final-year part-time students cover Statistics (evening). Full-time students cover Mathematics in the afternoon and Statistics in the evening.

MSc Finance. Part-time students in their initial year cover Mathematics (evening). Full-time students do Mathematics (afternoon).

MSc Financial Engineering. Full-time and part-time Year 1 students cover Statistics (evening).

Teaching Arrangements and Assessment

These courses last four weeks. There will be three meetings in this introductory week and in weeks 2 and 3 (Monday, Tuesday and Thursday). In week 4, there will be lectures on Monday and Tuesday only, with exams scheduled in the later part of that week. Performance in these courses is assessed through two-hour written examinations, which you MUST pass. No resits are held.

Textbooks

We do not recommend any particular text, but in the past students have found the following useful.

Chiang, Alpha C., Fundamental Methods of Mathematical Economics, McGraw-Hill, 3rd ed. (A popular textbook, even though it is slightly dated.)

Greene, William, Econometric Analysis, 5th edition, Prentice Hall, 2002.

Contents

Some Preliminaries
1 Matrix Algebra
  1.1 Matrix Operations
  1.2 Some Special Matrices
  1.3 Matrix Representations
  1.4 Linear Independence
  1.5 Determinant
  1.6 Rank of a Matrix
  1.7 Inverse Matrix
  1.8 Characteristic Roots and Vectors
  1.9 Trace of a Matrix
Problems

Some Preliminaries

Sets

A set is any well-specified collection of elements. A set may contain finitely many or infinitely many elements. If x is an element of a set S, we say that x belongs to S, or write x ∈ S. If z does not belong to S, we write z ∉ S. A set with no elements is called the empty set (or the null set) and is denoted by the symbol ∅.

Real Numbers

Numbers such as 1, 2, 3, ... are called natural numbers. Integers include zero and negative numbers too: ..., -2, -1, 0, 1, 2, 3, .... Numbers that can be expressed as a ratio of two integers (that is, of the form a/b where a and b are integers, and b ≠ 0) are said to be rational. Numbers such as √2, π, e cannot be expressed as a ratio of integers: they are said to be irrational. The set of real numbers includes both rational and irrational numbers. It is sometimes helpful to think of real numbers as points on a number line. The set of real numbers is usually denoted by R. It is common to use R+ to denote the set of non-negative real numbers, and R++ for strictly positive real numbers.

Inequalities

Given any two real numbers a and b, there are three mutually exclusive possibilities:

a > b (a is greater than b)
a < b (a is less than b)
a = b (a is equal to b)

The inequality in the first two cases is said to be strict. The case where a is greater than b or a is equal to b is denoted a ≥ b. The case where a is less than b or a is equal to b is denoted a ≤ b. In these cases, the inequalities are said to be weak.

The following are simple but useful relations:

If a > b and b > c then a > c.
If a > b then ac > bc for any positive c.
If a > b then ac < bc for any negative c.

Note that multiplying through by a negative number reverses the inequality.

Notation for Summation

Consider a sequence of numbers x_1, x_2, x_3, ..., x_n. The summation operator, Σ, denotes the sum of this sequence:

Σ_{i=1}^{n} x_i = x_1 + x_2 + ... + x_n.

This operator denotes the sum from x_1 to x_n. The symbol i is called the index of the summation. More generally, we have

Σ_{i=m}^{n} x_i = x_m + x_{m+1} + ... + x_n.

The following denotes an infinite sum:

Σ_{i=1}^{∞} x_i = x_1 + x_2 + ....
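Purely as an illustration (the notes themselves use no software), the summation notation maps directly onto summing a list of numbers in Python; the sequence below is an arbitrary example.

x = [2, 5, 1, 4]      # an arbitrary sequence x_1, x_2, x_3, x_4
print(sum(x))         # 12 = x_1 + x_2 + x_3 + x_4
print(sum(x[1:3]))    # 6 = x_2 + x_3, a partial sum from i = 2 to i = 3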

Chapter 1

Matrix Algebra

A vector is an ordered set of numbers. Consider, for example, the row vector

[ 3  2  0 ]

or the column vector

[ 1 ]
[ 3 ]
[ 4 ]

The number of elements in a vector is referred to as its dimension.

A matrix is a rectangular array of numbers

A = [a_{ik}] = [ a_11  a_12  ...  a_1K ]
               [ a_21  a_22  ...  a_2K ]
               [   :     :          :  ]
               [ a_n1  a_n2  ...  a_nK ]

with n rows, K columns, and nK elements. The dimensions of the matrix refer to the number of rows (n in this example) and the number of columns (K in this example): we say the matrix A is n × K. The notational subscripts in the typical element a_{ik} refer to its location in the array, namely the i-th row and the k-th column.

A matrix can be viewed as a set of column vectors, or alternatively as a set of row vectors. Alternatively, a vector can be viewed as a matrix with only one row or column.

An algebra is a system of sets and operations on these sets (multiplication and addition, for example), where the sets satisfy certain conditions and the operations satisfy some rules.
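For readers who want to experiment, here is a minimal Python sketch using the numpy library (numpy is not part of the course materials; it is used here only to illustrate). It builds the row and column vectors above and an arbitrary 2 × 3 matrix, and reports their dimensions.

import numpy as np

row = np.array([[3, 2, 0]])          # a 1 x 3 row vector
col = np.array([[1], [3], [4]])      # a 3 x 1 column vector
A = np.array([[2, 3, 1],
              [4, 1, 2]])            # a 2 x 3 matrix (n = 2 rows, K = 3 columns)

print(row.shape, col.shape, A.shape) # (1, 3) (3, 1) (2, 3)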

1.1 Matrix Operations

1.1.1 Equality of matrices

Matrices A and B are equal if and only if they have the same dimensions and each element of A equals the corresponding element of B: that is, if a_{ik} = b_{ik} for all i and k.

1.1.2 Transpose of a Matrix

For any matrix A, the transpose, denoted by A' (or sometimes Aᵀ), is obtained by interchanging rows and columns. If A = [a_{ik}], then A' = [a_{ki}]. That is, the i-th row of the original matrix forms the i-th column of the transpose matrix. For example, if

A = [ 2  3  1 ]
    [ 4  1  2 ]

then

A' = [ 2  4 ]
     [ 3  1 ]
     [ 1  2 ]

Note that if A is of dimension n × K, its transpose is of dimension K × n. The transpose of the transpose of a matrix yields the original matrix: we have (A')' = A.

1.1.3 Matrix addition

We can add two matrices as long as they are of the same dimension. Consider A = [a_{ik}] and B = [b_{ik}], both of dimension n × K. Their sum is defined as the n × K matrix C = A + B = [a_{ik} + b_{ik}]. For instance,

[ a_11  a_12 ]   [ b_11  b_12 ]   [ a_11 + b_11   a_12 + b_12 ]
[ a_21  a_22 ] + [ b_21  b_22 ] = [ a_21 + b_21   a_22 + b_22 ]

Matrix addition is commutative:

A + B = B + A

and associative:

(A + B) + C = A + (B + C).

The transpose of a sum of matrices is the sum of the transposed matrices:

(A + B)' = A' + B'.
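As a quick numerical check in Python/numpy (illustrative only; the matrix B below is an arbitrary example with the same dimension as A, not one from the notes), the transpose and addition rules above can be verified directly:

import numpy as np

A = np.array([[2, 3, 1],
              [4, 1, 2]])
B = np.array([[1, 0, 2],
              [3, 1, 1]])                     # an arbitrary 2 x 3 matrix

print(A.T)                                    # the 3 x 2 transpose of A
print(np.array_equal(A.T.T, A))               # (A')' = A          -> True
print(np.array_equal(A + B, B + A))           # A + B = B + A      -> True
print(np.array_equal((A + B).T, A.T + B.T))   # (A + B)' = A' + B' -> True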

1.1.4 Scalar Multiplication

Multiplying a matrix by a scalar involves multiplying each element by that scalar. If A = [a_{ik}], then for any real number λ we have λA = [λ a_{ik}]. For instance,

3 [ 1  3  2 ]   [ 3  9  6 ]
  [ 1  0  1 ] = [ 3  0  3 ]

1.1.5 Inner Product of Two Vectors

Consider two n-dimensional column vectors

a = [ a_1 ]        b = [ b_1 ]
    [ a_2 ]            [ b_2 ]
    [  :  ]            [  :  ]
    [ a_n ]            [ b_n ]

The inner product of these two vectors, written a'b, is the row vector a' times the column vector b:

a'b = a_1 b_1 + a_2 b_2 + ... + a_n b_n = Σ_{i=1}^{n} a_i b_i.

Note that a'b = b'a = Σ_{i=1}^{n} a_i b_i.

For any vector x, the sum of its elements can be denoted as

Σ_{i=1}^{n} x_i = i'x,

where i is an n-dimensional column vector whose elements are all 1.

Two vectors are said to be orthogonal if their inner product is zero.
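A small numpy sketch of these two operations (the vectors a and b are arbitrary illustrations, chosen so that their inner product happens to be zero):

import numpy as np

print(3 * np.array([[1, 3, 2],
                    [1, 0, 1]]))   # scalar multiplication: [[3, 9, 6], [3, 0, 3]]

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 1.0, -2.0])
print(a @ b)                       # inner product a'b = 1*4 + 2*1 + 3*(-2) = 0
print(a @ b == 0)                  # a and b are orthogonal -> True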

1.1.6 Matrix multiplication

Let A = [a_{ik}] be an n × K matrix and B = [b_{kj}] be a K × J matrix. The product matrix C = AB is an n × J matrix whose ij-th element is given by the inner product of the i-th row vector of matrix A and the j-th column vector of matrix B. Put differently,

c_{ij} = Σ_k a_{ik} b_{kj}.

The restriction on matrix multiplication is that the first matrix must have the same number of columns as the number of rows in the second matrix. When this condition holds, the matrices are said to be conformable under multiplication.

Matrix multiplication is not commutative.

Even when matrices A and B are conformable so that AB exists, BA may not exist. For instance, if A is 3 × 2 and B is 2 × 2, AB exists but BA is not defined.

Even when both product matrices exist, they may not have the same dimensions. For instance, if A is 3 × 2 and B is 2 × 3, AB is of order 3 × 3 while BA is of order 2 × 2.

Even when both product matrices are of the same dimension, they may not be equal.

Hence when multiplying matrices it is important to distinguish between pre-multiplication and post-multiplication. In the product AB, the matrix A is post-multiplied by B, while B is pre-multiplied by A.

It is easy to check that matrix multiplication is associative:

(AB)C = A(BC).

Matrix multiplication is also distributive across sums of matrices:

A(B + C) = AB + AC
(B + C)A = BA + CA

It is straightforward to check that the transpose of the product satisfies

(AB)' = B'A'.
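The conformability and transpose rules can be checked numerically; the matrices below are arbitrary random examples generated only for illustration:

import numpy as np

A = np.random.default_rng(0).normal(size=(3, 2))   # an arbitrary 3 x 2 matrix
B = np.random.default_rng(1).normal(size=(2, 3))   # an arbitrary 2 x 3 matrix

print((A @ B).shape, (B @ A).shape)        # (3, 3) and (2, 2): AB and BA have different dimensions
print(np.allclose((A @ B).T, B.T @ A.T))   # (AB)' = B'A' -> True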

1.2 Some Special Matrices

Null Matrix

A null matrix is composed of all 0s and can be of any dimension. For any matrix B,

B + 0 = B,

where 0 is the null matrix with the same dimension as B. Addition of the null matrix leaves the original matrix unchanged. Multiplication of a matrix by a conformable null matrix produces a null matrix.

Square Matrix

A matrix with the same number of rows as columns is said to be a square matrix.

Identity Matrix

The identity matrix is a square matrix with 1s on the main diagonal and all other elements equal to 0. Identity matrices are often denoted by the symbol I (or sometimes I_n, where n denotes the dimension). For instance, the identity matrix of dimension 3 is

I_3 = [ 1  0  0 ]
      [ 0  1  0 ]
      [ 0  0  1 ]

Pre-multiplying or post-multiplying any matrix A by an identity matrix of conformable dimension yields the original matrix:

IA = AI = A.

Symmetric Matrix

A square matrix A = [a_{ij}] is said to be symmetric if a_{ij} = a_{ji}. If A is symmetric, A' = A.

Idempotent Matrix

A square matrix A is said to be idempotent if AA = A.
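A short numpy illustration (the idempotent matrix P below is an example chosen for this sketch, not one appearing in the notes):

import numpy as np

A = np.array([[2.0, 3.0, 1.0],
              [4.0, 1.0, 2.0]])              # the 2 x 3 example matrix used earlier
print(np.allclose(np.eye(2) @ A, A))         # pre-multiplying by I_2 leaves A unchanged  -> True
print(np.allclose(A @ np.eye(3), A))         # post-multiplying by I_3 leaves A unchanged -> True

P = np.array([[0.5, 0.5],
              [0.5, 0.5]])                   # an illustrative symmetric, idempotent matrix
print(np.allclose(P @ P, P))                 # PP = P -> True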

1.3 Matrix Representations

Matrices provide a compact way to represent a system of linear equations. For instance,

2x_1 + 4x_2 = 5
3x_1 + x_2 = 6

can be written as Ax = d, where

A = [ 2  4 ]    x = [ x_1 ]    d = [ 5 ]
    [ 3  1 ]        [ x_2 ]        [ 6 ]

Alternatively, we can think of the vector d as a linear combination of the columns of the matrix A:

x_1 [ 2 ] + x_2 [ 4 ] = [ 5 ]
    [ 3 ]       [ 1 ]   [ 6 ]

1.4 Linear Independence

A set of vectors is linearly dependent if any of the vectors in the set can be written as a linear combination of the others. Consider the following matrix:

[ a    b  ]
[ 2a   2b ]

Notice that the second row is a multiple of the first row, so the vectors are linearly dependent. Notice also that a linear combination of the row vectors (in particular, -2 times the first row added to the second row) equals the null vector [0 0]. This suggests an alternative and equivalent definition of linear dependence.

Definition. Vectors v_1, v_2, ..., v_n are linearly dependent if and only if there exist scalars k_1, k_2, ..., k_n, not all zero, such that

k_1 v_1 + k_2 v_2 + ... + k_n v_n = 0.

This implies the following definition.

Definition. Vectors v_1, v_2, ..., v_n are linearly independent if and only if

k_1 v_1 + k_2 v_2 + ... + k_n v_n = 0

for scalars k_1, k_2, ..., k_n implies k_1 = k_2 = ... = k_n = 0.

To illustrate linear dependence, consider the row vectors in the following matrix:

A = [ 3  4  5  ]   [ v_1 ]
    [ 0  1  2  ] = [ v_2 ]
    [ 6  8  10 ]   [ v_3 ]

where v_3 = 2 v_1. If we take k_1 = 2, k_2 = 0, and k_3 = -1, we get

2 v_1 + 0 v_2 - v_3 = 0,

so we have found constants, not all equal to zero, whose combination gives the zero vector; hence the vectors are linearly dependent.

1.5 Determinant

The notion of a determinant provides a more robust method for testing whether a system of vectors is linearly dependent. The determinant of a square matrix A is denoted |A| and is always a scalar. For a 2 × 2 matrix the determinant is given by

|A| = | a_11  a_12 | = a_11 a_22 - a_12 a_21
      | a_21  a_22 |

Consider a 2 × 2 matrix whose rows are linearly dependent:

| a    b  | = kab - kab = 0.
| ka   kb |

Linear dependence turns out to be equivalent to the determinant of the matrix being equal to zero.
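These determinant facts can be confirmed numerically; the sketch below (numpy, for illustration only) reuses the 3 × 3 matrix with linearly dependent rows and an arbitrary 2 × 2 example:

import numpy as np

A = np.array([[3.0, 4.0, 5.0],
              [0.0, 1.0, 2.0],
              [6.0, 8.0, 10.0]])   # the third row is twice the first row
print(np.linalg.det(A))            # 0 (up to rounding): the rows are linearly dependent

B = np.array([[2.0, 4.0],
              [3.0, 1.0]])         # an arbitrary 2 x 2 matrix
print(np.linalg.det(B))            # -10
print(2 * 1 - 4 * 3)               # the same value from the formula a11*a22 - a12*a21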

Higher Order Determinants: the Laplace Expansion

Given the following 3 × 3 matrix

A = [ a_11  a_12  a_13 ]
    [ a_21  a_22  a_23 ]
    [ a_31  a_32  a_33 ]

then

|A| = a_11 | a_22  a_23 | - a_12 | a_21  a_23 | + a_13 | a_21  a_22 |
           | a_32  a_33 |        | a_31  a_33 |        | a_31  a_32 |

For any square matrix A, consider the sub-matrix A_(ij) formed by deleting the i-th row and j-th column of the matrix. The determinant of the sub-matrix A_(ij) is called the (i, j)-th minor of the matrix, or sometimes the minor of the element a_ij. We denote this by M_ij. For a 3 × 3 matrix, we thus have

M_11 = | a_22  a_23 |,   M_12 = | a_21  a_23 |,   M_13 = | a_21  a_22 |
       | a_32  a_33 |           | a_31  a_33 |           | a_31  a_32 |

allowing us to write

|A| = a_11 M_11 - a_12 M_12 + a_13 M_13.

A cofactor associated with the element a_ij, denoted by C_ij, is the minor with a prescribed algebraic sign attached to it. The sign prescribed for element a_ij is given by (-1)^{i+j}. Put simply, the sign is positive for elements whose row and column indices add up to an even number, and negative otherwise. Thus C_ij ≡ (-1)^{i+j} M_ij, so that

C_11 = (-1)^2 M_11,   C_12 = (-1)^3 M_12,   C_13 = (-1)^4 M_13.

Thus |A| can be written as

|A| = a_11 C_11 + a_12 C_12 + a_13 C_13.

This is called the Laplace Expansion. We can use any row or column to derive the determinant, and the method generalizes to any n × n matrix.
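The Laplace expansion translates directly into a short recursive function. This is a sketch for illustration only (library routines such as numpy.linalg.det are what one would use in practice), and the name det is introduced here, not in the notes.

def det(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]   # delete row 1 and column j+1
        sign = (-1) ** j                                    # (-1)^(1 + (j+1)) = (-1)^j for the first row
        total += sign * A[0][j] * det(minor)
    return total

print(det([[2, 4], [3, 1]]))                    # -10, matching the 2 x 2 formula
print(det([[3, 4, 5], [0, 1, 2], [6, 8, 10]]))  # 0: linearly dependent rows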

1.5.1 Properties of Determinants

1. The transpose operation (interchanging rows with columns) does not affect the value of the determinant. Hence |A'| = |A|.

2. The interchange of two rows or two columns will change the sign of the determinant but not the numerical value.

3. The multiplication of one row or one column of A by a scalar k will change the value of the determinant to k|A|.

4. The addition (subtraction) of a multiple of any row to (from) another row will leave the value of the determinant unchanged. The same applies to columns.

5. If one row (column) is a multiple of another row (column), the value of the determinant is zero.

1.6 Rank of a Matrix

Rank is defined as the order of the largest non-zero determinant that can be obtained from the elements of a matrix. This definition applies to both square and rectangular matrices. Thus a non-zero matrix A has rank r if at least one of its r × r minors is different from zero while every (r + 1) × (r + 1) or larger square minor, if any, is equal to zero.

The rank of a matrix A can be found by starting with the largest determinants of order m and evaluating them to ascertain whether one of them is non-zero. If so, rank(A) = m. If all the determinants of order m are equal to zero, we start evaluating determinants of order m - 1. Continuing in this fashion, we eventually find the rank r of the matrix, being the order of the largest non-zero determinant.

Example 1. Find rank(A) where

A = [ 6  2 ]
    [ 3  1 ]

Note |A| = 0. Then rank(A) = 1, since the largest non-zero minor of A is of order 1. (There are in this example four non-zero minors of order 1.)

Example 2. Find rank(A) where

A = [ 6  2  3 ]
    [ 3  1  3 ]

Consider the minor obtained by deleting the second column. We have

| 6  3 | = 18 - 9 = 9 ≠ 0,
| 3  3 |

so that rank(A) = 2 in this case.

Clearly, if A is n × m with n ≠ m, then rank(A) ≤ min(n, m).
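As a numerical cross-check (using numpy purely for illustration), the built-in rank routine reproduces both examples:

import numpy as np

A1 = np.array([[6.0, 2.0],
               [3.0, 1.0]])
A2 = np.array([[6.0, 2.0, 3.0],
               [3.0, 1.0, 3.0]])

print(np.linalg.det(A1))            # 0: the only 2 x 2 determinant vanishes
print(np.linalg.matrix_rank(A1))    # 1, as in Example 1
print(np.linalg.matrix_rank(A2))    # 2, as in Example 2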

1.7 Inverse Matrix

For a square matrix A, there may exist a matrix B such that

AB = BA = I.

An inverse, if it exists, is usually denoted A^{-1}, so that the above definition can be written as

A A^{-1} = A^{-1} A = I.

If an inverse does not exist for a matrix, the matrix is said to be singular. Singularity of a matrix is closely tied to the value of its determinant. In particular, |A| ≠ 0 is equivalent to the statement that the rows (and columns) of A are linearly independent, which is equivalent to A being non-singular, which is equivalent to the existence of a unique inverse A^{-1}. To put it differently:

A^{-1} exists ⇔ |A| ≠ 0 ⇔ A has full rank ⇔ A is non-singular.

Inverse matrices satisfy the following properties (as long as the inverses involved exist):

(A^{-1})^{-1} = A
(AB)^{-1} = B^{-1} A^{-1}
(A')^{-1} = (A^{-1})'

1.7.1 Using the inverse matrix to solve a system of equations

We can use the inverse to find the solution to systems of equations. In general, a system of equations can be denoted Ax = d. For our purposes we confine attention to a system of n linear equations in n unknowns, so that in the above expression A is a square matrix of dimension n × n, and x and d are n × 1 vectors. If an inverse exists for the square matrix A, then pre-multiplying the previous expression by A^{-1} we get

A^{-1} A x = A^{-1} d, or x = A^{-1} d.

When does a solution exist? We distinguish between two cases.

A homogeneous system, where d = 0. For this case x = 0 is an obvious (or trivial) solution. A non-trivial solution exists only if A has less than full rank (that is, if |A| = 0).

A non-homogeneous system, where d ≠ 0. This has a non-trivial solution only if A has full rank (that is, if |A| ≠ 0).
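The 2 × 2 system from Section 1.3 can be solved this way in a few lines of numpy (illustrative only):

import numpy as np

A = np.array([[2.0, 4.0],
              [3.0, 1.0]])          # the system 2x1 + 4x2 = 5, 3x1 + x2 = 6
d = np.array([5.0, 6.0])

print(np.linalg.inv(A) @ d)         # x = A^{-1} d = [1.9, 0.3]
print(np.linalg.solve(A, d))        # the same solution, computed without forming A^{-1} explicitly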

1.7.2 Computing the Inverse Matrix

We have discussed the inverse A^{-1} of a matrix without indicating how to find it (assuming it exists). We shall now develop a method for finding the inverse. To do this, we begin by defining some related matrices.

The Cofactor Matrix

For any element a_ij of a square matrix A, the cofactor is given by C_ij = (-1)^{i+j} M_ij. The cofactor matrix C is obtained by replacing each element of the matrix A by its corresponding cofactor C_ij.

Example: Find the cofactor matrix for

A = [ 3  2 ]
    [ 4  1 ]

We have

C = [ C_11  C_12 ]
    [ C_21  C_22 ]

where

C_11 = (-1)^{1+1} M_11 = 1
C_12 = (-1)^{1+2} M_12 = -4
C_21 = (-1)^{2+1} M_21 = -2
C_22 = (-1)^{2+2} M_22 = 3

Thus

C = [  1  -4 ]
    [ -2   3 ]

The Adjoint Matrix

For any square matrix A, the adjoint of A is given by the transpose of the cofactor matrix. Denoting the associated cofactor matrix by C, we have adj A = C'. In the previous example, the adjoint is given by

adj A = [ C_11  C_21 ] = [  1  -2 ]
        [ C_12  C_22 ]   [ -4   3 ]
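For the 2 × 2 example just given, the cofactor and adjoint matrices can be written down directly; a numpy sketch, for illustration only:

import numpy as np

A = np.array([[3.0, 2.0],
              [4.0, 1.0]])

# cofactors of a 2 x 2 matrix: C11 = a22, C12 = -a21, C21 = -a12, C22 = a11
C = np.array([[ A[1, 1], -A[1, 0]],
              [-A[0, 1],  A[0, 0]]])
adjA = C.T                           # the adjoint is the transpose of the cofactor matrix

print(C)                             # [[ 1. -4.] [-2.  3.]]
print(adjA)                          # [[ 1. -2.] [-4.  3.]]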

Finding the Inverse

For any square matrix A, the inverse A^{-1} is given by

A^{-1} = (1/|A|) adj A,

which is defined as long as |A| ≠ 0.

To see why, we multiply an arbitrary 2 × 2 matrix A by its adjoint matrix adj A. For notational purposes, let the product matrix be given by B. Thus

[ a_11  a_12 ] [ C_11  C_21 ]   [ b_11  b_12 ]
[ a_21  a_22 ] [ C_12  C_22 ] = [ b_21  b_22 ]

But then

b_11 = a_11 C_11 + a_12 C_12
b_12 = a_11 C_21 + a_12 C_22
b_21 = a_21 C_11 + a_22 C_12
b_22 = a_21 C_21 + a_22 C_22

For a 2 × 2 matrix,

adj A = [  a_22  -a_12 ]
        [ -a_21   a_11 ]

so that

b_11 = a_11 a_22 - a_12 a_21 = |A|
b_12 = a_11 (-a_12) + a_12 a_11 = 0
b_21 = a_21 a_22 + a_22 (-a_21) = 0
b_22 = a_21 (-a_12) + a_22 a_11 = |A|

Hence we have found that

A adj A = [ |A|   0  ] = |A| [ 1  0 ] = |A| I.
          [  0   |A| ]       [ 0  1 ]

In general, it can be shown that if the elements of a row (or column) of a matrix are multiplied by the cofactors of a different row (or column) and the products are summed, the result is zero. Also, the elements on the principal diagonal of the product of A and adj A are equal to |A|. Thus for an n × n matrix A we have

A adj A = [ |A|   0   ...   0  ]
          [  0   |A|  ...   0  ]
          [  :    :          :  ]
          [  0    0   ...  |A| ]  = |A| I.

Equivalently, if |A| ≠ 0,

A (adj A / |A|) = I.

This yields:

A^{-1} = adj A / |A|.
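Continuing the same 2 × 2 example, one can verify numerically that A adj A = |A| I and that dividing the adjoint by the determinant reproduces the inverse (numpy is used only as an illustration):

import numpy as np

A = np.array([[3.0, 2.0],
              [4.0, 1.0]])
adjA = np.array([[ 1.0, -2.0],
                 [-4.0,  3.0]])                       # the adjoint found above
detA = np.linalg.det(A)                               # -5

print(A @ adjA)                                       # [[-5, 0], [0, -5]] = |A| I
print(np.allclose(adjA / detA, np.linalg.inv(A)))     # A^{-1} = adj A / |A| -> True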

1.7.3 Cramer's Rule

This method of matrix inversion enables us to describe a convenient procedure for solving a system of linear equations. Consider a system of n linear equations in n unknowns, Ax = b, where A is an n × n matrix, and x and b are n × 1 vectors. As long as an inverse exists (that is, as long as A is non-singular), the solution is

x = A^{-1} b = (1/|A|) adj A b.

This can be written as

[ x_1 ]         [ C_11  C_21  ...  C_n1 ] [ b_1 ]
[ x_2 ]    1    [ C_12  C_22  ...  C_n2 ] [ b_2 ]
[  :  ] = --- · [   :     :          :  ] [  :  ]
[ x_n ]   |A|   [ C_1n  C_2n  ...  C_nn ] [ b_n ]

where C_ij = (-1)^{i+j} M_ij. Rewrite this as

[ x_1 ]         [ C_11 b_1 + C_21 b_2 + ... + C_n1 b_n ]
[ x_2 ]    1    [ C_12 b_1 + C_22 b_2 + ... + C_n2 b_n ]
[  :  ] = --- · [                  :                   ]
[ x_n ]   |A|   [ C_1n b_1 + C_2n b_2 + ... + C_nn b_n ]

The i-th element of this expression is

x_i = (1/|A|) (C_1i b_1 + C_2i b_2 + ... + C_ni b_n).

Compare this with the Laplace expansion for the evaluation of |A|, expanded down the i-th column:

|A| = C_1i a_1i + C_2i a_2i + ... + C_ni a_ni.

We can see that, compared to the previous equation, the elements a_1i, a_2i, ..., a_ni have been replaced by b_1, b_2, ..., b_n. So C_1i b_1 + C_2i b_2 + ... + C_ni b_n is the determinant, expanded down the i-th column, of the following matrix, which we will call B_i:

B_i = [ a_11  a_12  ...  b_1  ...  a_1n ]
      [ a_21  a_22  ...  b_2  ...  a_2n ]
      [   :     :         :         :   ]
      [ a_n1  a_n2  ...  b_n  ...  a_nn ]

(note: the i-th column has been replaced by b). To summarize, we can write

x_i = |B_i| / |A|.

This procedure is referred to as Cramer's rule for solving the system of equations Ax = b.

Example: Find x_1, using Cramer's rule, where

[  6  -3 ] [ x_1 ]   [ 50 ]
[ -2   6 ] [ x_2 ] = [ 35 ]

Now |A| = 36 - 6 = 30 ≠ 0. Then

x_1 = |B_1| / |A| = | 50  -3 | / 30 = (50(6) - (-3)(35)) / 30 = 13.5
                    | 35   6 |
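Cramer's rule is easy to code directly. The sketch below is illustrative only (the helper name cramer is introduced here, not in the notes); it reproduces x_1 = 13.5 for the example above and agrees with a direct solve:

import numpy as np

def cramer(A, b):
    """Solve Ax = b via x_i = |B_i| / |A|, where B_i is A with its i-th column replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("A is singular, so Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Bi = A.copy()
        Bi[:, i] = b                                       # replace the i-th column by b
        x[i] = np.linalg.det(Bi) / detA
    return x

A = np.array([[ 6.0, -3.0],
              [-2.0,  6.0]])
b = np.array([50.0, 35.0])
print(cramer(A, b))                                        # [13.5, 10.33...]: x1 = 13.5 as in the example
print(np.allclose(cramer(A, b), np.linalg.solve(A, b)))    # agrees with a direct solve -> True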

1.8 Characteristic Roots and Vectors

Definition: If A is an n × n square matrix, and if a scalar λ and an (n × 1) vector x ≠ 0 satisfy Ax = λx, then λ is a characteristic root of A and x is the associated characteristic vector. Characteristic roots (sometimes called latent roots or eigenvalues) and characteristic vectors (latent vectors or eigenvectors) are used for stability analysis of dynamic economic models and in econometrics.

If x = 0, then any λ would give Ax = λx and the problem is trivial. Hence we exclude x = 0.

Note also that characteristic vectors are not unique: if x is a characteristic vector, then μ(Ax) = μ(λx), so that A(μx) = λ(μx), and μx is also a characteristic vector for any non-zero scalar μ. For this reason the characteristic vector is said to be determined only up to a scalar multiple.

1.8.1 Finding the Characteristic Roots

Rewrite the equation Ax = λx as

Ax - λx = 0, or [A - λI]x = 0,

which is a homogeneous system of equations. If there is a non-trivial solution to a homogeneous system, then the matrix must be singular, i.e.

|A - λI| = 0.

Thus we must choose values of λ such that the determinant

|A - λI| = | a_11 - λ    a_12       ...   a_1n     |
           | a_21        a_22 - λ   ...   a_2n     |
           |    :           :               :      |
           | a_n1        a_n2       ...   a_nn - λ | = 0.

If [A - λI] is a 2 × 2 matrix, the value of the determinant is a polynomial of degree 2:

|A - λI| = (a_11 - λ)(a_22 - λ) - a_12 a_21
         = (a_11 a_22 - a_12 a_21) - (a_11 + a_22)λ + λ^2.

More generally, for an n × n matrix, the determinant will be a polynomial of degree n:

b_0 + b_1 λ + b_2 λ^2 + ... + b_{n-1} λ^{n-1} + b_n λ^n.

The characteristic equation sets the value of this polynomial to zero, and the characteristic roots are the solutions to this equation: that is, the values of λ that, substituted into the polynomial, yield 0.

Example: find the characteristic roots of the 2 × 2 matrix

G = [ 2   2 ]
    [ 2  -1 ]

The characteristic polynomial is

|G - λI| = | 2 - λ    2      | = (2 - λ)(-1 - λ) - 4,
           | 2       -1 - λ  |

so that the characteristic equation is

λ^2 - λ - 6 = 0

with characteristic roots λ_1 = 3 and λ_2 = -2.

Generally, an n-th order polynomial has n solutions. However, two or more roots may coincide, so that we get fewer than n distinct values; and some roots may involve square roots of negative numbers, giving complex roots.

1.8.2 Calculation of Characteristic Vectors

For each λ_i we have

[A - λ_i I] x_i = 0,

where x_i is the characteristic vector corresponding to λ_i. In the previous example, consider first the case λ_1 = 3. To find the characteristic vector associated with λ_1 = 3, we solve [A - λ_1 I] x_1 = 0, that is,

[ 2 - 3    2     ] [ x_11 ]   [ -1   2 ] [ x_11 ]   [ 0 ]
[ 2       -1 - 3 ] [ x_12 ] = [  2  -4 ] [ x_12 ] = [ 0 ]

Noting that rank(A - λ_1 I) is 1, we find that x_11 = 2 x_12. Choosing x_12 arbitrarily as 1, we have x_11 = 2; thus we obtain the characteristic vector

x_1 = [ 2 ]
      [ 1 ]

To find the characteristic vector associated with λ_2 = -2, we solve [A - λ_2 I] x_2 = 0; that is,

[ 2 - λ_2    2        ] [ x_21 ]   [ 4  2 ] [ x_21 ]   [ 0 ]
[ 2         -1 - λ_2  ] [ x_22 ] = [ 2  1 ] [ x_22 ] = [ 0 ]

Noting that rank(A - λ_2 I) is 1, we find that x_21 = -(1/2) x_22. Choosing x_22 arbitrarily as 2 to eliminate the fraction, we find x_21 = -1; the associated characteristic vector is

x_2 = [ -1 ]
      [  2 ]

It is obvious that these characteristic vectors are determined only up to a scalar multiple. In order to remove this indeterminacy, a practical approach is to force out a unique solution by imposing the restriction Σ_{i=1}^{n} x_i^2 = 1. In our example, for the case λ_2 = -2, the characteristic vector would then be equal to the solution of the following system:

x_21 = -(1/2) x_22
x_21^2 + x_22^2 = 1.

1.9 Trace of a Matrix

The trace of a square matrix is the sum of its diagonal elements: tr(A) = Σ_i a_ii.

Example: The trace of the matrix

G = [ 2   2 ]
    [ 2  -1 ]

equals 2 - 1 = 1.

The following results hold:

tr(cA) = c · tr(A)
tr(A) = tr(A')
tr(A + B) = tr(A) + tr(B)
If both matrix products AB and BA are defined, tr(AB) = tr(BA).

We state the following without proof. Let λ_1, λ_2, ..., λ_n be the characteristic roots of a square matrix A. Then

λ_1 + λ_2 + ... + λ_n = tr(A)
λ_1 λ_2 ... λ_n = |A|.
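The spectral facts in this section can be checked numerically for the matrix G used above (numpy, for illustration only):

import numpy as np

G = np.array([[2.0, 2.0],
              [2.0, -1.0]])
roots, vecs = np.linalg.eig(G)

print(roots)                                               # the characteristic roots 3 and -2 (order may vary)
print(np.allclose(G @ vecs[:, 0], roots[0] * vecs[:, 0]))  # Gx = lambda x for the first root/vector pair -> True
print(np.isclose(np.trace(G), roots.sum()))                # trace equals the sum of the roots -> True
print(np.isclose(np.linalg.det(G), roots.prod()))          # determinant equals the product of the roots -> True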

Problems

1. Let the matrices A and B be given by

A = [ 4  9 ]    B = [ 2  4 ]
    [ 2  1 ]        [ 1  7 ]

(a) Calculate AB and demonstrate that the commutative law of multiplication, AB = BA, does not hold under matrix multiplication.
(b) Calculate the determinant of A.
(c) Find the inverse of matrix A.

2. Given

D = [ 7  2 ]    E = [ 2 ]    F = [ 3  7 ]
    [ 5  4 ]        [ 4 ]

(a) Determine for each of the products DE, EF and DF whether it is defined. If so, indicate the dimensions of the product matrix.
(b) Find EF and FE.
(c) Calculate (DE)F and D(EF).

3. Consider the equation system

4x_1 + 6x_2 + 8x_3 = 2
x_1 + x_2 + x_3 = 1
4x_1 + 3x_2 + 2x_3 = 1

(a) Write the system in matrix notation, as Ax = b.
(b) How many solutions does this system have? Find them.

4. Let

A = [ 1  1  1 ]
    [ 0  1  1 ]
    [ 0  0  1 ]

(a) Compute A' (the transpose of A).
(b) Compute AA'.
(c) Find the inverse A^{-1}.

5. Given

B = [ 2  1  3 ]
    [ 1  1  β ]
    [ 1  2  1 ]

For what value(s) of β is B not invertible?

6. A model is specified by the following relations:

IS curve:  Y = C + I,  C = 100 + 0.7Y,  I = 180 - 125r
LM curve:  M_D = M_S = 255,  M_D = 220 + 0.2Y - 175r

Find the values of Y and r that satisfy these relations, using Cramer's Rule.

7. Let

A = [ 3  8  1 ]
    [ 0  4  3 ]
    [ 0  3  4 ]

(a) Find the characteristic roots and characteristic vectors of A.
(b) Verify that the trace of the matrix equals the sum of the characteristic roots.
(c) Verify that the determinant of the matrix equals the product of the characteristic roots.

8. Show that if λ is a characteristic root of a square matrix A, then λ^2 is a characteristic root of A^2.

9. Given a 4 × 2 matrix X, define P = X(X'X)^{-1}X' and M = I - P.
(a) Show that M and P are idempotent.
(b) Show that MP = 0.