Matrix Algebra: Introduction


Matrices and determinants were discovered and developed in the eighteenth and nineteenth centuries. Initially, their development dealt with the transformation of geometric objects and the solution of systems of linear equations. Historically, the early emphasis was on the determinant, not the matrix; in modern treatments of linear algebra, matrices are considered first. We will not speculate much on this issue. Matrices provide a theoretically and practically useful way of approaching many types of problems, including: solution of systems of linear equations, equilibrium of rigid bodies (in physics), graph theory, theory of games, the Leontief economics model, forest management, computer graphics, computed tomography, genetics, cryptography, electrical networks, and fractals.

Introduction and Basic Operations

Matrices, though they may appear to be strange objects at first, are a very important tool for expressing and discussing problems which arise from real life. Our first example deals with economics. Consider two families, A and B (though we may easily take more than two). Every month, the two families have expenses such as utilities, health, entertainment, food, and so on. Let us restrict ourselves to food, utilities, and health. How would one represent the data collected? Many ways are available, but one of them has the advantage of combining the data so that it is easy to manipulate. Indeed, we write the data as a rectangular block of numbers, with one row per family and one column per expense category. If we have no problem confusing the names and what the expenses are, then we may drop the labels and keep only the block of numbers. This is what we call a matrix. The size of the matrix, as a block, is defined by the number of rows and the number of columns. In this case, the above matrix has 2 rows and 3 columns. You may easily come up with a matrix which has m rows and n columns. In this case, we say that the matrix is an (mxn) matrix (pronounced m-by-n matrix). Keep in mind that the first entry (meaning m) is the number of rows, while the second entry (n) is the number of columns. Our matrix above is a (2x3) matrix. When the numbers of rows and columns are equal, we call the matrix a square matrix. A square matrix of order n is an (nxn) matrix. Back to our example, let us call the expense matrices for the months of January, February, and March J, F, and M.

To make sure that the reader knows what these numbers mean, you should be able to read off the health expenses for family A and the food expenses for family B during the month of February; the answers are 250 and 600. The next question may sound easy to answer, but requires a new concept in the matrix context: what is the expense matrix for the two families for the first quarter? The idea is to add the three matrices above, since it is easy to determine the total expenses for each family and each item. So how do we add matrices? An approach is given by the above example: we add the entries one by one. Clearly, if you want to double a matrix, it is enough to add the matrix to itself; this suggests the rule for multiplying a matrix by any number.

Let us summarize these two rules about matrices.

Addition of matrices: in order to add two matrices, we add the entries one by one. Note: matrices involved in the addition operation must have the same size.

Multiplication of a matrix by a number: in order to multiply a matrix by a number, you multiply every entry by the given number. Keep in mind that we always write numbers to the left and matrices to the right (in the case of multiplication).

What about subtracting two matrices? It is easy, since subtraction is a combination of the two rules above. Indeed, if M and N are two matrices, then we write

M - N = M + (-1)N

So first you multiply the matrix N by -1, and then add the result to the matrix M.

Example. Consider the three matrices J, F, and M from above, and evaluate expressions such as J-M and J-F+2M. Answer. To compute J-M, we first form the matrix (-1)M.

Since J-M = J + (-1)M, we then add the matrix (-1)M to J. Finally, for J-F+2M, we have a choice. Here we would like to emphasize the fact that an addition of matrices may involve more than two matrices, and in that case the calculations may be performed in any order; this is the associativity of the operation. So we may first take care of -F and 2M and then, since J-F+2M = J + (-1)F + 2M, add the three matrices together; or we may first evaluate J-F and then add 2M to the result. Both orders give the same answer.

For the addition of matrices, one special matrix plays a role similar to the number zero. Indeed, if we consider the matrix with all its entries equal to 0, then it is easy to check that this matrix behaves like the number zero: adding it to any matrix leaves that matrix unchanged.

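The addition, scaling, and subtraction rules above, together with the zero-matrix, can be sketched in a few lines of Python. This is a minimal illustration; the helper names and the sample expense numbers are our own, not taken from the text.

```python
# Entrywise matrix addition and scalar multiplication on plain nested lists.

def mat_add(M, N):
    assert len(M) == len(N) and len(M[0]) == len(N[0]), "sizes must match"
    return [[M[i][j] + N[i][j] for j in range(len(M[0]))] for i in range(len(M))]

def scalar_mul(c, M):
    return [[c * x for x in row] for row in M]

def mat_sub(M, N):
    return mat_add(M, scalar_mul(-1, N))   # M - N = M + (-1)N

# Hypothetical monthly expense matrices (rows: families A and B;
# columns: food, utilities, health).
J = [[500, 120, 250],
     [600, 100, 150]]
F = [[480, 130, 250],
     [600, 110, 160]]

print(mat_add(J, F))                        # entrywise sum of two months
assert scalar_mul(2, J) == mat_add(J, J)    # doubling = adding the matrix to itself
print(mat_sub(J, F))

Z = [[0, 0, 0],
     [0, 0, 0]]                             # the 2 x 3 zero-matrix
assert mat_add(J, Z) == J                   # it behaves like the number zero
```

Note that `mat_add` refuses matrices of different sizes, mirroring the rule that addition is only defined for matrices of the same size.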

Multiplication of Matrices

Before we give the formal definition of how to multiply two matrices, we will discuss an example from a real-life situation. Consider a city with two kinds of population: the inner-city population and the suburb population. We assume that every year 40% of the inner-city population moves to the suburbs, while 30% of the suburb population moves to the inner part of the city. Let I (resp. S) be the initial population of the inner city (resp. the suburban area). So after one year, the population of the inner part is

0.6 I + 0.3 S

while the population of the suburbs is

0.4 I + 0.7 S

After two years, the population of the inner city is

0.6 (0.6 I + 0.3 S) + 0.3 (0.4 I + 0.7 S)

and the suburban population is given by

0.4 (0.6 I + 0.3 S) + 0.7 (0.4 I + 0.7 S)

Is there a nice way of representing the two populations after a certain number of years? Let us show how matrices may be helpful in answering this question. Let us represent the two populations in one table (meaning a column object with two entries), so that after one year the two populations are again given by such a table. If we consider the following rule (the product of two matrices: each entry of the result comes from multiplying a row of the first matrix by a column of the second, entry by entry, and adding),

then the populations after one year are given by multiplying the migration matrix by the initial population column. After two years, the populations are obtained by multiplying by the same matrix again; combining this with the previous result, the populations after two years come from the square of the migration matrix applied to the initial column.

In fact, we do not need two matrices of the same size to multiply them. Above, we multiplied a (2x2) matrix with a (2x1) matrix (which gave a (2x1) matrix). The general rule says that in order to perform the multiplication AB, where A is an (mxn) matrix and B is a (kxl) matrix, we must have n=k; the result will be an (mxl) matrix. Remember that even when AB can be performed, the multiplication BA may be impossible. So we have to be very careful about multiplying matrices. Sentences like "multiply the two matrices A and B" do not make sense. You must know which of the two matrices will be to the

right (of your multiplication) and which one will be to the left; in other words, we have to know whether we are asked to perform AB or BA. Even if both multiplications do make sense (as in the case of square matrices of the same size), we still have to be very careful. Indeed, for two square matrices A and B of the same size, the products AB and BA are in general different. So what is the conclusion behind this example? Matrix multiplication is not commutative: the order in which matrices are multiplied is important. In fact, this little setback is a major problem in playing around with matrices. This is something that you must always be careful with. Let us show you another setback: the product of two non-zero matrices may be equal to the zero-matrix.

Algebraic Properties of Matrix Operations

In this page, we give some general results about the three operations: addition, multiplication, and multiplication by numbers, called scalar multiplication. From now on, we will not write (m x n) but m x n.

Properties involving addition. Let A, B, and C be m x n matrices. We have

1. A+B = B+A
2. (A+B)+C = A+(B+C)
3. A+0 = A, where 0 is the m x n zero-matrix (all its entries are equal to 0)
4. A+B = 0 if and only if B = -A.

Properties involving multiplication.

1. Let A, B, and C be three matrices. If you can perform the products AB, (AB)C, BC, and A(BC), then we have (AB)C = A(BC). Note, for example, that if A is 2x3, B is 3x3, and C is 3x1, then the above products are possible (in this case, (AB)C is a 2x1 matrix).
2. If α and β are numbers, and A is a matrix, then we have α(βA) = (αβ)A.
3. If α is a number, and A and B are two matrices such that the product AB is possible, then we have α(AB) = (αA)B = A(αB).

4. If A is an n x m matrix and 0 is the m x k zero-matrix, then A0 = 0', where 0' is the n x k zero-matrix. Note that if n is different from m, the two zero-matrices 0 and 0' are different.

Properties involving addition and multiplication.

1. Let A, B, and C be three matrices. If you can perform the appropriate products, then we have (A+B)C = AC + BC and A(B+C) = AB + AC.
2. If α and β are numbers, and A and B are matrices, then we have α(A+B) = αA + αB and (α+β)A = αA + βA.

Example. Consider three matrices A, B, and C for which the products below are possible. Evaluate (AB)C and A(BC), and check that you get the same matrix. Answer. We first compute AB,

so that we obtain (AB)C. On the other hand, we first compute BC, and then A(BC); the two answers are indeed the same matrix.

Example. For suitable matrices, it is easy to check identities which rewrite a product as a sum of matrices multiplied by numbers. Such formulas are called linear combinations. More on linear combinations will be discussed on a different page.
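The associativity property just illustrated can be verified numerically. The sizes follow the note above (A is 2x3, B is 3x3, C is 3x1, so (AB)C is a 2x1 matrix), while the concrete entries are our own.

```python
# Check (AB)C = A(BC) for one concrete choice of sizes and entries.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0, 2],
     [2, 1, 0]]          # 2 x 3
B = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]          # 3 x 3
C = [[1],
     [2],
     [3]]                # 3 x 1

left = mat_mul(mat_mul(A, B), C)    # (AB)C
right = mat_mul(A, mat_mul(B, C))   # A(BC)
assert left == right                # associativity
print(left)                         # a 2 x 1 matrix
```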

We have seen that matrix multiplication is different from the usual multiplication (between numbers). Are there some similarities? For example, is there a matrix which plays a role similar to the number 1? The answer is yes. Indeed, consider the n x n matrix whose diagonal entries are all equal to 1 and whose other entries are all equal to 0. This matrix, denoted I n, behaves like the number 1: for any n x n matrix A, we have

A I n = I n A = A

The matrix I n is called the identity matrix of order n.

Example. For any particular square matrix A, it is easy to check these identities directly.

The identity matrix behaves like the number 1 not only among square n x n matrices. Indeed, for any n x m matrix A, we have

I n A = A and A I m = A

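A quick check that the identity matrix acts like the number 1, including the rectangular case I n A = A I m = A. The helpers and the 2x3 sample matrix are a minimal sketch of our own.

```python
# Build I_n and verify that it is neutral for multiplication.

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]                          # a 2 x 3 matrix of our own choosing

assert mat_mul(identity(2), A) == A      # I_2 A = A
assert mat_mul(A, identity(3)) == A      # A I_3 = A
print("identity checks pass")
```

Note that the identity on the left has order 2 while the one on the right has order 3, matching the row and column counts of A.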

Invertible Matrices

Invertible matrices are very important in many areas of science. For example, decrypting a coded message uses invertible matrices. The problem of finding the inverse of a matrix will be discussed in a different page.

Definition. An n x n matrix A is called nonsingular or invertible iff there exists an n x n matrix B such that

A B = B A = I n

where I n is the identity matrix. The matrix B is called the inverse matrix of A.

Example. For a suitable pair of matrices A and B, one may easily check that A B = B A = I n; hence such an A is invertible and B is its inverse.

Notation. A common notation for the inverse of a matrix A is A -1.

Example. Find the inverse of a given invertible matrix A.

Write B for the unknown inverse, with entries to be determined. Since A B must equal the identity matrix, we get a system of equations in the entries of B, and easy algebraic manipulations give those entries, that is, the inverse matrix.

The inverse matrix is unique when it exists. So if A is invertible, then A -1 is also invertible, and

(A -1) -1 = A

The following basic property is very important:

If A and B are invertible matrices, then AB is also invertible and

(AB) -1 = B -1 A -1

Remark. In the definition of an invertible matrix A, we required both A B and B A to be equal to the identity matrix. In fact, we need only one of the two. In other words, for a square matrix A, if there exists a matrix B such that A B = I n, then A is invertible and B = A -1.
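These rules can be tested concretely on 2 x 2 matrices. The explicit inverse formula used below, inv of [[a, b], [c, d]] equal to 1/(ad - bc) times [[d, -b], [-c, a]] when ad - bc is nonzero, is standard but is not derived in this text; Fractions keep the arithmetic exact, and the sample matrices are ours.

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of a 2 x 2 matrix via the standard closed-form formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det != 0, "matrix is singular"
    f = Fraction(1) / det
    return [[f * d, -f * b],
            [-f * c, f * a]]

I2 = [[1, 0], [0, 1]]
A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 1]]

# A A^-1 = A^-1 A = I, and (AB)^-1 = B^-1 A^-1 (note the reversed order).
assert mat_mul(A, inv2(A)) == I2 and mat_mul(inv2(A), A) == I2
assert inv2(mat_mul(A, B)) == mat_mul(inv2(B), inv2(A))
print("inverse checks pass")
```

The reversed order in the product rule is forced by non-commutativity: (AB)(B^-1 A^-1) collapses to the identity from the inside out.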

Special Matrices: Triangular, Symmetric, Diagonal

We have seen that a matrix is a block of entries, or two-dimensional data. The size of the matrix is given by the number of rows and the number of columns. If the two numbers are the same, we call such a matrix a square matrix. To square matrices we associate what we call the main diagonal (in short, the diagonal). Indeed, consider the 2x2 matrix with rows (a, b) and (c, d); its diagonal is given by the numbers a and d. For the 3x3 matrix with rows (a, b, c), (d, e, f), and (g, h, k), the diagonal consists of a, e, and k. In general, if A is a square matrix of order n and if a ij is the number in the i-th row and j-th column, then the diagonal is given by the numbers a ii, for i = 1, ..., n.

The diagonal of a square matrix helps define two types of matrices: upper-triangular and lower-triangular. Indeed, the diagonal subdivides the matrix into two blocks: one above the diagonal and the other one below it. If the lower block consists of zeros, we call such a matrix upper-triangular. If the upper block consists of zeros, we call such a matrix lower-triangular. For example, matrices with only zeros below the diagonal are upper-triangular, while the matrices with only zeros above it

are lower-triangular.

Now consider an upper-triangular matrix A and a lower-triangular matrix B related in a special way: as you can see if you reflect the matrix A about the diagonal, you get the matrix B. This operation is called the transpose operation. Indeed, let A be an n x m matrix defined by the numbers a ij; then the transpose of A, denoted A T, is the m x n matrix defined by the numbers b ij, where b ij = a ji.

Properties of the transpose operation. If X and Y are m x n matrices and Z is an n x k matrix, then

1. (X+Y) T = X T + Y T
2. (XZ) T = Z T X T
3. (X T) T = X

A symmetric matrix is a matrix equal to its transpose. So a symmetric matrix must be a square matrix. For example, one may easily write down matrices which

are symmetric matrices. In particular, a symmetric matrix of order n contains at most n(n+1)/2 different numbers.

A diagonal matrix is a symmetric matrix with all of its entries equal to zero except possibly the ones on the diagonal. So a diagonal matrix of order n is determined by at most n numbers, the ones on its diagonal. Identity matrices are examples of diagonal matrices. Diagonal matrices play a crucial role in matrix theory; we will see this later on.

Example. Consider a diagonal matrix A, and define the power-matrices of A by

A^n = A A ... A (n factors)

Find the power-matrices of A, and then evaluate them for n = 1, 2, ... Answer.

Multiplying a diagonal matrix by itself multiplies the diagonal entries together, so by induction one may easily show that, for every natural number n, A^n is the diagonal matrix whose diagonal entries are the n-th powers of the diagonal entries of A.

Scalar product. Consider two 3x1 matrices X and Y. The scalar product of X and Y is defined as the 1 x 1 matrix X T Y. In particular, if X has entries a, b, and c, then X T X = (a^2 + b^2 + c^2). This is a 1 x 1 matrix.
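The transpose rules, the power rule for diagonal matrices, and the scalar product can all be checked in one short sketch; the helper names and the small matrices below are our own illustration.

```python
def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def mat_add(M, N):
    return [[M[i][j] + N[i][j] for j in range(len(M[0]))] for i in range(len(M))]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(A, n):
    P = A
    for _ in range(n - 1):
        P = mat_mul(P, A)                 # A^n = A * A * ... * A (n factors)
    return P

X = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
Y = [[0, 1, 0],
     [2, 0, 2]]          # 2 x 3
Z = [[1, 0],
     [0, 1],
     [1, 1]]             # 3 x 2

# Transpose properties: (X+Y)^T = X^T + Y^T, (XZ)^T = Z^T X^T, (X^T)^T = X.
assert transpose(mat_add(X, Y)) == mat_add(transpose(X), transpose(Y))
assert transpose(mat_mul(X, Z)) == mat_mul(transpose(Z), transpose(X))
assert transpose(transpose(X)) == X

# Powers of a diagonal matrix just raise the diagonal entries to the power.
D = [[2, 0], [0, -1]]
assert mat_pow(D, 4) == [[16, 0], [0, 1]]

# Scalar product: X^T X = a^2 + b^2 + c^2 as a 1 x 1 matrix.
V = [[1], [2], [3]]
assert mat_mul(transpose(V), V) == [[14]]
print("transpose, diagonal, and scalar-product checks pass")
```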

Elementary Operations for Matrices

Elementary operations for matrices play a crucial role in finding the inverse of a matrix and in solving linear systems. They may also be used for other calculations. On this page, we will discuss this type of operation.

Before we define an elementary operation, recall that to an n x m matrix A we can associate n rows and m columns. If we write down the rows and columns of A, and then the rows of the transpose A T, we can see that the transposes of the columns of A are the rows of A T. So the transpose operation interchanges the rows and the columns of a matrix. Therefore, many techniques which are developed for rows may be easily translated to columns via the transpose operation. Thus, we will only discuss elementary row operations, but the reader may easily adapt these to columns.

Elementary row operations.

1. Interchange two rows.
2. Multiply a row by a nonzero number.
3. Add to one row another one multiplied by a number.

Definition. Two matrices are row equivalent if and only if one may be obtained from the other one via elementary row operations.

Example. To show that two given matrices A and B are row equivalent, we start with A. We keep the first row and add it to the second one. Next, we keep the first row, and subtract the first row from the second one multiplied by 3. Finally, we keep the first row and subtract the first row from the second one; this gives the matrix B. Therefore A and B are row equivalent.

One powerful use of elementary operations consists in finding solutions to linear systems and the inverse of a matrix. This happens via echelon form and Gauss-Jordan elimination. In order

to appreciate these two techniques, we need to discuss when a matrix is row equivalent to a triangular matrix. Let us illustrate this with an example.

Example. Consider a 3x3 matrix A. First we will transform the first column, via elementary row operations, into one with the top entry equal to 1 and the entries below it equal to 0. Indeed, interchanging the first row with the last one brings a 1 to the top. Next, we keep the first and last rows, and subtract the first row multiplied by 2 from the second one. We are almost there: looking at the resulting matrix, we see that we can still take care of the 1 (from the last row) sitting under the -2. Indeed, if we keep the first two rows and add the second one to the last one multiplied by 2, we are done. We cannot do more: we stop the process whenever we reach a matrix which satisfies the following conditions:

1. any row consisting of zeros is below any row that contains at least one nonzero number;
2. the first (from left to right) nonzero entry of any row is to the left of the first nonzero entry of any lower row.

Now if we make sure that the first nonzero entry of every row is 1, we get a matrix in row echelon form. For example, the matrix above is not in echelon form; but if we divide the second row by -2, the resulting matrix is in echelon form. An application of this, namely solving linear systems via Gaussian elimination, may be found on another page.
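The three elementary row operations can be combined into a reduction to row echelon form, following the stopping conditions above and scaling each leading entry to 1. This is a sketch of our own: the function names and the sample matrix are not from the text, and Fractions keep the pivots exact.

```python
from fractions import Fraction

def swap_rows(M, i, j):
    M[i], M[j] = M[j], M[i]                          # operation 1: interchange

def scale_row(M, i, c):
    assert c != 0, "the multiplier must be nonzero"
    M[i] = [c * x for x in M[i]]                     # operation 2: multiply by c

def add_multiple(M, i, j, c):
    M[i] = [x + c * y for x, y in zip(M[i], M[j])]   # operation 3: row i += c * row j

def row_echelon(A):
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                                 # nothing to clear in this column
        swap_rows(M, r, pivot)
        scale_row(M, r, Fraction(1) / M[r][c])       # leading entry becomes 1
        for i in range(r + 1, rows):
            add_multiple(M, i, r, -M[i][c])          # clear the entries below it
        r += 1
        if r == rows:
            break
    return M

A = [[0, 2, 4],
     [1, 1, 1],
     [2, 0, -1]]
print(row_echelon(A))    # upper-triangular, leading entries equal to 1
```

The result satisfies both stopping conditions: zero rows (if any) sink to the bottom, and each leading 1 sits strictly to the right of the one above it.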

Matrix Exponential

The matrix exponential plays an important role in solving systems of linear differential equations. On this page, we will define such an object and show its most important properties. The natural way of defining the exponential of a matrix is to go back to the exponential function e^x and find a definition which is easy to extend to matrices. Indeed, we know that the Taylor polynomials

1 + x + x^2/2! + ... + x^n/n!

converge pointwise to e^x, and uniformly whenever x is bounded. These algebraic polynomials may help us in defining the exponential of a matrix. Indeed, consider a square matrix A and define the sequence of matrices

E_n = I + A + A^2/2! + ... + A^n/n!

When n gets large, this sequence of matrices gets closer and closer to a certain matrix. This is not easy to show; it relies on the conclusion about e^x above. We write this limit matrix as e^A, a notation which is natural due to the properties of this matrix. Thus we have the formula

e^A = lim (I + A + A^2/2! + ... + A^n/n!)

as n goes to infinity. One may also write this in series notation as the sum of A^n/n! over all n from 0 on.

At this point, the reader may feel a little lost about the definition above. To make this clearer, let us discuss an easy case: diagonal matrices.

Example. Consider a diagonal matrix A.

It is easy to check that the powers A^n of a diagonal matrix are again diagonal, with each diagonal entry raised to the n-th power. Hence the partial sums E_n are diagonal as well, and using the above properties of the exponential function, we deduce that, for a diagonal matrix A, e^A can always be obtained by replacing the entries of A (on the diagonal) by their exponentials.

Now let B be a matrix similar to A. As explained before, there then exists an invertible matrix P such that B = P^-1 A P. Moreover, we have B^n = P^-1 A^n P for every n, which implies that the partial sums for e^B are P^-1 E_n P, and hence that e^B is obtained from e^A by conjugation. In fact, we have a more general conclusion. Indeed, let A and B be two square matrices, and assume that A and B are similar. Then e^A and e^B are similar; moreover, if B = P^-1 A P, then

e^B = P^-1 e^A P

Example. Consider an upper-triangular matrix with all of its diagonal entries equal to 0. These types of matrices have a nice property: their successive powers have more and more zero entries, and eventually vanish altogether. In general, let A be a square upper-triangular matrix of order n, and assume that all its entries on the diagonal are equal to 0. Then we have

A^n = 0

Such a matrix is called a nilpotent matrix. In this case, the exponential series terminates after finitely many terms:

e^A = I + A + A^2/2! + ... + A^(n-1)/(n-1)!

As we said before, the reasons for using the exponential notation for matrices reside in the following properties.

Theorem. The following properties hold:

1. e^0 = I n, where 0 is the zero-matrix;
2. if A and B commute, meaning AB = BA, then we have e^(A+B) = e^A e^B;
3. for any matrix A, e^A is invertible, and (e^A)^-1 = e^(-A).
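The truncated-series definition translates directly into a small numerical sketch. The number of terms and the sample matrices below are our own choices, and a production implementation would use a more careful algorithm than plain summation; still, for the diagonal case the result should match exponentiating the diagonal entries, and for a nilpotent matrix the series terminates.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=20):
    """Approximate e^A by the partial sum I + A + A^2/2! + ... (float arithmetic)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]   # start at I_n
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = mat_mul(term, A)                       # term is now A^k (unscaled)
        scaled = [[x / math.factorial(k) for x in row] for row in term]
        result = [[r + s for r, s in zip(rr, sr)] for rr, sr in zip(result, scaled)]
    return result

# Diagonal case: e^A just exponentiates the diagonal entries.
D = [[1.0, 0.0],
     [0.0, 2.0]]
E = mat_exp(D)
print(E[0][0], E[1][1])    # close to e and e^2

# Nilpotent case: N^2 = 0, so the series stops and e^N = I + N.
N = [[0.0, 1.0],
     [0.0, 0.0]]
print(mat_exp(N))
```

Twenty terms are plenty here because k! quickly dominates the entries of A^k; for matrices with large entries, more terms (or a scaling-and-squaring method) would be needed.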