Linear Algebra and Matrix Inversion

Jim Lambers
MAT 46/56 Spring Semester 29- Lecture 2 Notes
These notes correspond to Section 63 in the text.

Vector Spaces and Linear Transformations

Matrices are much more than notational conveniences for writing systems of linear equations. A matrix A can also be used to represent a linear function f_A whose domain and range are both sets of vectors called vector spaces. A vector space over a field (such as the field of real or complex numbers) is a set of vectors, together with two operations: addition of vectors, and multiplication of a vector by a scalar from the field.

Specifically, if u and v are vectors belonging to a vector space V over a field F, then the sum of u and v, denoted by u + v, is a vector in V, and the scalar product of u with a scalar α in F, denoted by αu, is also a vector in V. These operations have the following properties:

Commutativity: For any vectors u and v in V, u + v = v + u.

Associativity: For any vectors u, v and w in V, (u + v) + w = u + (v + w).

Identity element for vector addition: There is a vector 0, known as the zero vector, such that for any vector u in V, u + 0 = 0 + u = u.

Additive inverse: For any vector u in V, there is a unique vector -u in V such that u + (-u) = (-u) + u = 0.

Distributivity over vector addition: For any vectors u and v in V, and any scalar α in F, α(u + v) = αu + αv.

Distributivity over scalar multiplication: For any vector u in V, and any scalars α and β in F, (α + β)u = αu + βu.

Associativity of scalar multiplication: For any vector u in V and any scalars α and β in F, α(βu) = (αβ)u.

Identity element for scalar multiplication: For any vector u in V, 1u = u.

A function f_A : V → W, whose domain V and range W are vector spaces over a field F, is a linear transformation if it has the properties

f_A(x + y) = f_A(x) + f_A(y),    f_A(αx) = α f_A(x),

where x and y are vectors in V and α is a scalar from F.

If V is a vector space of dimension n over the field of real or complex numbers, such as R^n or C^n, and W is a vector space of dimension m, then a linear function f_A with domain V and range W can be represented by an m × n matrix A whose entries belong to the field. Suppose that the set of vectors {v_1, v_2, ..., v_n} is a basis for V, and the set {w_1, w_2, ..., w_m} is a basis for W. That is, any vector v in V has a unique representation as a linear combination of v_1, v_2, ..., v_n, and any vector w in W has a unique representation as a linear combination of w_1, w_2, ..., w_m. Then a_ij is the scalar by which w_i is multiplied when applying the function f_A to the vector v_j. That is,

f_A(v_j) = a_1j w_1 + a_2j w_2 + ... + a_mj w_m = Σ_{i=1}^m a_ij w_i.

In other words, the jth column of A describes the image under f_A of the vector v_j, in terms of the coefficients of f_A(v_j) in the basis {w_1, w_2, ..., w_m}.

If V and W are spaces of real or complex vectors, then, by convention, the bases {v_j}_{j=1}^n and {w_i}_{i=1}^m are each chosen to be the standard bases for R^n and R^m, respectively. The jth vector in the standard basis is the vector whose components are all zero, except for the jth component, which is equal to one. These vectors are called the standard basis vectors of an n-dimensional space of real or complex vectors, and are denoted by e_j. From this point on, we will assume that V is R^n, for simplicity.

Example. The standard basis for R^3 consists of the vectors

      [ 1 ]          [ 0 ]          [ 0 ]
e_1 = [ 0 ],   e_2 = [ 1 ],   e_3 = [ 0 ].
      [ 0 ]          [ 0 ]          [ 1 ]
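
With the standard basis in hand, the correspondence between a linear map and its matrix can be illustrated in a few lines of NumPy. The sketch below builds the matrix of a linear map column by column by applying the map to the standard basis vectors; the particular map f used here is an arbitrary choice for this illustration, not one taken from the notes.

```python
import numpy as np

# An arbitrary linear map f : R^3 -> R^2, chosen only for illustration.
def f(x):
    return np.array([2.0 * x[0] - x[2],
                     x[0] + 3.0 * x[1] + x[2]])

n = 3
I = np.eye(n)
# The jth column of the representing matrix A is f(e_j).
A = np.column_stack([f(I[:, j]) for j in range(n)])

# Applying the map agrees with the matrix-vector product A @ x.
x = np.array([1.0, -2.0, 4.0])
assert np.allclose(f(x), A @ x)
print(A)   # the 2 x 3 matrix representing f in the standard bases
```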

To describe the action of A on a general vector x from V, we can write

x = x_1 e_1 + x_2 e_2 + ... + x_n e_n.

Then, because A represents a linear function,

f_A(x) = Σ_{j=1}^n x_j f_A(e_j) = Σ_{j=1}^n x_j a_j,

where a_j is the jth column of A.

We define the vector y = f_A(x) above to be the matrix-vector product of A and x, which we denote by Ax. Each element of the vector y = Ax is given by

y_i = [Ax]_i = a_i1 x_1 + a_i2 x_2 + ... + a_in x_n = Σ_{j=1}^n a_ij x_j.

From this definition, we see that we can represent the jth column of A by the matrix-vector product Ae_j.

Example. Let

    [ 3   0  -1 ]        [ 10 ]
A = [ 1  -4   2 ],   x = [ 11 ].
    [ 5   1  -3 ]        [ 12 ]

Then

        [ 3 ]        [  0 ]        [ -1 ]   [  18 ]
Ax = 10 [ 1 ]  +  11 [ -4 ]  +  12 [  2 ] = [ -10 ].
        [ 5 ]        [  1 ]        [ -3 ]   [  25 ]

We see that Ax is a linear combination of the columns of A, with the coefficients of the linear combination obtained from the components of x.
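
The same computation is easy to check numerically. The following minimal NumPy sketch uses the matrix and vector from the example above and confirms that forming Ax entry by entry agrees with taking the corresponding linear combination of the columns of A.

```python
import numpy as np

A = np.array([[3.0,  0.0, -1.0],
              [1.0, -4.0,  2.0],
              [5.0,  1.0, -3.0]])
x = np.array([10.0, 11.0, 12.0])

# Entry by entry: y_i = a_i1 x_1 + a_i2 x_2 + ... + a_in x_n.
y = np.array([A[i, :] @ x for i in range(A.shape[0])])

# Column interpretation: Ax = x_1 a_1 + x_2 a_2 + ... + x_n a_n.
y_cols = sum(x[j] * A[:, j] for j in range(A.shape[1]))

print(y)                        # [ 18. -10.  25.]
assert np.allclose(y, y_cols)
assert np.allclose(y, A @ x)
```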

Matrix Multiplication

It follows from this definition that a general system of n linear equations in n unknowns can be described in matrix-vector form by the equation Ax = b, where Ax is the matrix-vector product of the n × n coefficient matrix A and the vector of unknowns x, and b is the vector of right-hand side values. We say that A is a square matrix, because the number of rows and columns is equal.

Of course, if n = 1, the system of equations Ax = b reduces to the scalar linear equation ax = b, which has the solution x = a^{-1} b, provided that a ≠ 0. As a^{-1} is the unique number such that a^{-1} a = a a^{-1} = 1, it is desirable to generalize the concepts of multiplication and identity element to square matrices.

The matrix-vector product can be used to define the composition of linear functions represented by matrices. Let A be an m × n matrix, and let B be an n × p matrix. Then, if x is a vector of length p and y = Bx, we have

Ay = A(Bx) = (AB)x = Cx,

where C is an m × p matrix with entries

c_ij = Σ_{k=1}^n a_ik b_kj.

We define the matrix product of A and B to be the matrix C = AB with entries defined in this manner. It should be noted that the product BA is not defined, unless m = p. Even if this is the case, in general, AB ≠ BA. That is, matrix multiplication is not commutative. However, matrix multiplication is associative, meaning that if A is m × n, B is n × p, and C is p × k, then A(BC) = (AB)C.

Example. Consider the 2 × 2 matrices

A = [  1  -2 ],   B = [ -5   6 ].
    [ -3   4 ]        [  7  -8 ]

Then

AB = [  1  -2 ] [ -5   6 ] = [  1(-5) - 2(7)     1(6) - 2(-8)  ] = [ -19   22 ],
     [ -3   4 ] [  7  -8 ]   [ -3(-5) + 4(7)    -3(6) + 4(-8)  ]   [  43  -50 ]

whereas

BA = [ -5   6 ] [  1  -2 ] = [ -5(1) + 6(-3)    -5(-2) + 6(4)  ] = [ -23   34 ].
     [  7  -8 ] [ -3   4 ]   [  7(1) - 8(-3)     7(-2) - 8(4)  ]   [  31  -46 ]

We see that AB ≠ BA.
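
The definition of c_ij translates directly into code. The sketch below is a naive triple loop written only to mirror the formula (NumPy's @ operator performs the same computation far more efficiently), and it reproduces the non-commutativity seen in the 2 × 2 example above.

```python
import numpy as np

def matmul(A, B):
    """Form C = AB from the definition c_ij = sum_k a_ik * b_kj."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must agree"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[ 1.0, -2.0],
              [-3.0,  4.0]])
B = np.array([[-5.0,  6.0],
              [ 7.0, -8.0]])

print(matmul(A, B))   # [[-19.  22.]  [ 43. -50.]]
print(matmul(B, A))   # [[-23.  34.]  [ 31. -46.]]
assert np.allclose(matmul(A, B), A @ B)
assert not np.allclose(A @ B, B @ A)   # AB != BA
```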

The Identity Matrix

When n = 1, the identity element of 1 × 1 matrices, the number 1, is the unique number such that a(1) = (1)a = a for any number a. To determine the identity element for n × n matrices, we seek a matrix I such that AI = IA = A for any n × n matrix A. That is, we must have

Σ_{k=1}^n a_ik I_kj = a_ij,   i, j = 1, ..., n.

This can only be guaranteed for any matrix A if I_jj = 1 for j = 1, 2, ..., n, and I_ij = 0 when i ≠ j. We call this matrix the identity matrix I. Note that the jth column of I is the standard basis vector e_j.

The Inverse of a Matrix

Given an n × n matrix A, it is now natural to ask whether it is possible to find an n × n matrix B such that AB = BA = I. Such a matrix, if it exists, would then serve as the inverse of A, in the sense of matrix multiplication. We denote this matrix by A^{-1}, just as we denote the multiplicative inverse of a nonzero number a by a^{-1}. If the inverse of A exists, we say that A is invertible or nonsingular; otherwise, we say that A is singular.

If A^{-1} exists, then we can use it to describe the solution of the system of linear equations Ax = b, for

A^{-1} b = A^{-1}(Ax) = (A^{-1} A)x = Ix = x,

which generalizes the solution x = a^{-1} b of a single linear equation in one unknown.

However, just as we can use the inverse to describe the solution to a system of linear equations, we can use systems of linear equations to characterize the inverse. Because A^{-1} satisfies A A^{-1} = I, it follows from multiplication of both sides of this equation by the jth standard basis vector e_j that

A b_j = e_j,   j = 1, 2, ..., n,

where b_j = A^{-1} e_j is the jth column of B = A^{-1}. That is, we can compute A^{-1} by solving n systems of linear equations of the form A b_j = e_j, using a method such as Gaussian elimination and back substitution. If Gaussian elimination fails due to the inability to obtain a nonzero pivot element for each column, then A^{-1} does not exist, and we conclude that A is singular.
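
This characterization leads directly to a simple way of computing an inverse in code: solve one linear system per column. The sketch below is illustrative only; it relies on NumPy's general-purpose solver rather than a hand-written Gaussian elimination routine, and it reuses the 3 × 3 matrix from the earlier matrix-vector example.

```python
import numpy as np

def inverse_by_columns(A):
    """Assemble the inverse of A one column at a time by solving A b_j = e_j."""
    n = A.shape[0]
    I = np.eye(n)
    cols = [np.linalg.solve(A, I[:, j]) for j in range(n)]
    return np.column_stack(cols)

A = np.array([[3.0,  0.0, -1.0],
              [1.0, -4.0,  2.0],
              [5.0,  1.0, -3.0]])

Ainv = inverse_by_columns(A)
assert np.allclose(A @ Ainv, np.eye(3))
assert np.allclose(Ainv @ A, np.eye(3))

# The inverse describes the solution of Ax = b: x = A^{-1} b.
b = np.array([1.0, 2.0, 3.0])
assert np.allclose(Ainv @ b, np.linalg.solve(A, b))
```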

Example. Consider a general 2 × 2 matrix

A = [ a  b ].
    [ c  d ]

To compute the inverse of this matrix, we solve the systems of linear equations

A x_1 = e_1,   A x_2 = e_2.

For simplicity, we assume a ≠ 0. Then, we apply Gaussian elimination, without pivoting, to the augmented matrices

A^(1) = [ a  b | 1 ],   Â^(1) = [ a  b | 0 ].
        [ c  d | 0 ]            [ c  d | 1 ]

The only row operation required is to subtract c/a times the first row from the second, which yields

A^(2) = [ a  b        |  1   ],   Â^(2) = [ a  b        | 0 ].
        [ 0  d - bc/a | -c/a ]            [ 0  d - bc/a | 1 ]

Thus Gaussian elimination yields the upper triangular system

a x_11 + b x_21 = 1,
(d - bc/a) x_21 = -c/a

for the first column of A^{-1}, and

a x_12 + b x_22 = 0,
(d - bc/a) x_22 = 1

for the second column. In both systems, we multiply both sides of the second equation by a to obtain

a x_11 + b x_21 = 1,
(ad - bc) x_21 = -c

and

a x_12 + b x_22 = 0,
(ad - bc) x_22 = a.

We see that A^{-1} does not exist unless ad - bc ≠ 0. Using back substitution on both upper triangular systems, we obtain

x_21 = -c / (ad - bc),   x_11 =  d / (ad - bc),
x_22 =  a / (ad - bc),   x_12 = -b / (ad - bc).

We conclude that

A^{-1} = [ x_11  x_12 ] = 1/(ad - bc) [  d  -b ].
         [ x_21  x_22 ]               [ -c   a ]
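
The closed-form result is short enough to implement directly. The following sketch codes the 2 × 2 formula just derived, guards against ad - bc = 0, and checks it on the matrix A from the matrix multiplication example (any nonsingular 2 × 2 matrix would do).

```python
import numpy as np

def inverse_2x2(M):
    """Inverse of a 2 x 2 matrix from the formula derived above."""
    a, b = M[0, 0], M[0, 1]
    c, d = M[1, 0], M[1, 1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0, so the matrix is singular")
    return (1.0 / det) * np.array([[ d, -b],
                                   [-c,  a]])

A = np.array([[ 1.0, -2.0],
              [-3.0,  4.0]])   # ad - bc = 4 - 6 = -2, so A is nonsingular
Ainv = inverse_2x2(A)
assert np.allclose(A @ Ainv, np.eye(2))
assert np.allclose(Ainv, np.linalg.inv(A))
```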

The inverse of a nonsingular matrix A has the following properties:

A^{-1} is unique.

A^{-1} is nonsingular, and (A^{-1})^{-1} = A.

If B is also a nonsingular n × n matrix, then (AB)^{-1} = B^{-1} A^{-1}.

Because the set of all n × n matrices has an identity element, matrix multiplication is associative, and each nonsingular n × n matrix has a unique inverse with respect to matrix multiplication that is also an n × n nonsingular matrix, the set of nonsingular n × n matrices forms a group, which is denoted by GL(n), the general linear group.

Vector Operations for Matrices

The set of all matrices of size m × n, for fixed m and n, is itself a vector space of dimension mn. The operations of vector addition and scalar multiplication for matrices are defined as follows: If A and B are m × n matrices, then the sum of A and B, denoted by A + B, is the m × n matrix C with entries

c_ij = a_ij + b_ij.

If α is a scalar, then the product of α and an m × n matrix A, denoted by αA, is the m × n matrix B with entries

b_ij = α a_ij.

It is natural to identify m × n matrices with vectors of length mn, in the context of these operations.

Matrix addition and scalar multiplication have properties analogous to those of vector addition and scalar multiplication. In addition, matrix multiplication has the following properties related to these operations. We assume that A is an m × n matrix, B and D are n × k matrices, and α is a scalar.

Distributivity: A(B + D) = AB + AD.

Commutativity of scalar multiplication: α(AB) = (αA)B = A(αB).

Special Matrices

There are certain types of matrices which are particularly useful for solving systems of linear equations. We have previously learned about upper triangular matrices that result from Gaussian elimination. Recall that an m × n matrix A is upper triangular if a_ij = 0 whenever i > j. This means that all entries below the main diagonal, which consists of the entries a_11, a_22, ..., are equal to zero. A system of linear equations of the form Ux = y, where U is an n × n nonsingular upper triangular matrix, can be solved by back substitution. Such a matrix is nonsingular if and only if all of its diagonal entries are nonzero.
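
Back substitution itself takes only a few lines of code. The following is a minimal sketch, assuming a square upper triangular U with nonzero diagonal entries and performing no pivoting or other safeguards beyond a zero-diagonal check; the test system is an arbitrary example.

```python
import numpy as np

def back_substitution(U, y):
    """Solve Ux = y for a nonsingular upper triangular matrix U."""
    n = U.shape[0]
    x = np.zeros(n)
    # Work upward from the last equation, which involves only x_n.
    for i in range(n - 1, -1, -1):
        if U[i, i] == 0:
            raise ValueError("zero diagonal entry: U is singular")
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, -1.0, 3.0],
              [0.0,  4.0, 1.0],
              [0.0,  0.0, 5.0]])
y = np.array([5.0, 6.0, 10.0])
x = back_substitution(U, y)
print(x)                      # [0. 1. 2.]
assert np.allclose(U @ x, y)
```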

Similarly, a matrix L is lower triangular if all of its entries above the main diagonal, that is, entries l_ij for which i < j, are equal to zero. A system of equations of the form Ly = b, where L is an n × n nonsingular lower triangular matrix, can be solved using a process similar to back substitution, called forward substitution. As with upper triangular matrices, a lower triangular matrix is nonsingular if and only if all of its diagonal entries are nonzero.

A matrix that is both upper and lower triangular is a diagonal matrix. If D is a diagonal matrix, then d_ij = 0 whenever i ≠ j. It is particularly simple to solve a system of equations Dx = b when D is an n × n nonsingular diagonal matrix, as the solution is given by x_i = b_i / d_ii for i = 1, 2, ..., n. We see that, as with triangular matrices, a diagonal matrix is nonsingular if and only if all of its diagonal entries are nonzero.

An n × n matrix A is said to be symmetric if a_ij = a_ji for i, j = 1, 2, ..., n. The n × n matrix B whose entries are defined by b_ij = a_ji is called the transpose of A, which we denote by A^T. Therefore, A is symmetric if A = A^T. More generally, if A is an m × n matrix, then A^T is the n × m matrix B whose entries are defined by b_ij = a_ji. The transpose has the following properties:

(A^T)^T = A.

(A + B)^T = A^T + B^T.

(AB)^T = B^T A^T.

If A is an n × n nonsingular matrix, then (A^{-1})^T = (A^T)^{-1}. It is common practice to denote the transpose of A^{-1} by A^{-T}.

Example. Let A be the matrix from a previous example,

    [ 3   0  -1 ]
A = [ 1  -4   2 ].
    [ 5   1  -3 ]

Then

      [  3   1   5 ]
A^T = [  0  -4   1 ].
      [ -1   2  -3 ]

It follows that

          [ 3 + 3    0 + 1   -1 + 5 ]   [ 6   1   4 ]
A + A^T = [ 1 + 0   -4 - 4    2 + 1 ] = [ 1  -8   3 ].
          [ 5 - 1    1 + 2   -3 - 3 ]   [ 4   3  -6 ]

This matrix is symmetric. This can also be seen by the properties of the transpose, since

(A + A^T)^T = A^T + (A^T)^T = A^T + A = A + A^T.
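
As a quick numerical counterpart to this example, the sketch below verifies with NumPy that A + A^T is symmetric and checks the transpose properties listed above; the matrix B used for the product rule is an arbitrary choice.

```python
import numpy as np

A = np.array([[3.0,  0.0, -1.0],
              [1.0, -4.0,  2.0],
              [5.0,  1.0, -3.0]])

S = A + A.T
print(S)                    # [[ 6.  1.  4.]  [ 1. -8.  3.]  [ 4.  3. -6.]]
assert np.allclose(S, S.T)  # A + A^T is symmetric

# The transpose reverses the order of a product: (AB)^T = B^T A^T.
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])   # arbitrary 3 x 3 matrix for the check
assert np.allclose((A @ B).T, B.T @ A.T)

# For nonsingular A, (A^{-1})^T = (A^T)^{-1}, often written A^{-T}.
assert np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T))
```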