Matrices and Matrix Algebra.


3.1. Operations on Matrices Matrix Notation and Terminology A matrix is a rectangular array of numbers, called entries. A matrix with m rows and n columns is said to be of size m × n. An n × n matrix is called a square matrix of order n.

3.1. Operations on Matrices Matrix Notation and Terminology (A)_ij denotes the entry in row i and column j of a matrix A; for example, (A)_12 = -3.

3.1. Operations on Matrices Operations on Matrices Example 1

3.1. Operations on Matrices Operations on Matrices

3.1. Operations on Matrices Row and Column Vectors

3.1. Operations on Matrices Row and Column Vectors

3.1. Operations on Matrices The Product Ax

3.1. Operations on Matrices The Product Ax

3.1. Operations on Matrices The Product Ax

3.1. Operations on Matrices The Product AB

3.1. Operations on Matrices The Product AB Example 5

3.1. Operations on Matrices Finding Specific Entries in a Matrix Product

3.1. Operations on Matrices Finding Specific Rows and Columns of a Matrix Product: the column rule and the row rule for matrix multiplication. The j-th column of AB is A times the j-th column of B (column rule); the i-th row of AB is the i-th row of A times B (row rule).
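A minimal NumPy check of these two rules (the matrices here are illustrative, not taken from the slides):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
AB = A @ B

# Column rule: the j-th column of AB is A times the j-th column of B.
print(np.allclose(AB[:, 1], A @ B[:, 1]))   # True
# Row rule: the i-th row of AB is the i-th row of A times B.
print(np.allclose(AB[0, :], A[0, :] @ B))   # True
```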

3.1. Operations on Matrices Matrix Products as Linear Combinations

3.1. Operations on Matrices Matrix Products as Linear Combinations Example 9

3.1. Operations on Matrices Transpose of a Matrix Example 10

3.1. Operations on Matrices Trace

3.1. Operations on Matrices Inner and Outer Matrix Products Example 11

3.1. Operations on Matrices Inner and Outer Matrix Products

3.1. Operations on Matrices Inner and Outer Matrix Products Keep in mind, however, that these formulas apply only when u and v are expressed as column vectors.
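As a sketch, with u and v stored as 3 × 1 column vectors so that the formulas u^T v (inner product) and u v^T (outer product) apply literally (the vectors are assumed example data):

```python
import numpy as np

u = np.array([[1], [2], [3]])   # 3x1 column vector
v = np.array([[4], [5], [6]])   # 3x1 column vector

inner = u.T @ v   # 1x1 matrix holding the inner (dot) product u^T v
outer = u @ v.T   # 3x3 outer product u v^T
print(inner)        # [[32]]
print(outer.shape)  # (3, 3)
```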

3.2. Inverses; Algebraic Properties of Matrices Properties of Matrix Addition and Scalar Multiplication

3.2. Inverses; Algebraic Properties of Matrices Properties of Matrix Multiplication

3.2. Inverses; Algebraic Properties of Matrices Properties of Matrix Multiplication The commutative law does not hold for matrix multiplication; that is, AB and BA need not be equal matrices. Example 1
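A quick numerical illustration of non-commutativity (these particular matrices are chosen for the demo, not necessarily those of Example 1):

```python
import numpy as np

A = np.array([[-1, 0], [2, 3]])
B = np.array([[1, 2], [3, 0]])

print(A @ B)                          # [[-1 -2] [11  4]]
print(B @ A)                          # [[ 3  6] [-3  0]]
print(np.array_equal(A @ B, B @ A))   # False: AB != BA
```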

3.2. Inverses; Algebraic Properties of Matrices Zero Matrices A matrix whose entries are all zero is called a zero matrix.

3.2. Inverses; Algebraic Properties of Matrices Zero Matrices The cancellation law for real numbers: if ab=ac and a ≠ 0, then b=c. The cancellation law does not hold, in general, for matrix multiplication. Example 2

3.2. Inverses; Algebraic Properties of Matrices Zero Matrices If c and a are real numbers such that ca=0, then c=0 or a=0. Nonzero matrices, however, can have a zero product. Example 3 Here CA=0, but C ≠ 0 and A ≠ 0.
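One simple pair of nonzero matrices with a zero product (chosen for illustration; not necessarily the matrices of Example 3):

```python
import numpy as np

C = np.array([[0, 1], [0, 0]])
A = np.array([[1, 0], [0, 0]])

# Both factors are nonzero, yet the product is the zero matrix.
print(C @ A)   # [[0 0] [0 0]]
```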

3.2. Inverses; Algebraic Properties of Matrices Identity Matrices A square matrix with 1's on the main diagonal and zeros elsewhere is called an identity matrix.

3.2. Inverses; Algebraic Properties of Matrices Identity Matrices

3.2. Inverses; Algebraic Properties of Matrices Inverse of a Matrix REMARK Observe that the condition AB=BA=I is not altered by interchanging A and B. Thus, if A is invertible and B is an inverse of A, then it is also true that B is invertible and A is an inverse of B. Accordingly, when the condition AB=BA=I holds, it is correct to say that A and B are inverses of one another.

3.2. Inverses; Algebraic Properties of Matrices Inverse of a Matrix Example 4

3.2. Inverses; Algebraic Properties of Matrices Inverse of a Matrix In general, a square matrix with a row or column of zeros is singular. Example 5

3.2. Inverses; Algebraic Properties of Matrices Properties of Inverses

3.2. Inverses; Algebraic Properties of Matrices Properties of Inverses Example 6

3.2. Inverses; Algebraic Properties of Matrices Properties of Inverses Example 7 Because the coefficients of the unknowns are literal rather than numerical, Gauss-Jordan elimination is a little clumsy.

3.2. Inverses; Algebraic Properties of Matrices Properties of Inverses Example 8 What should the lengths of the arms be in order to position the tip of the working arm at the point (x, y) shown in the figure?

3.2. Inverses; Algebraic Properties of Matrices Properties of Inverses

3.2. Inverses; Algebraic Properties of Matrices Powers of a Matrix If A is a square matrix, then we define the nonnegative integer powers of A by A^0 = I and A^n = AA···A (n factors), and if A is invertible, then we define the negative integer powers of A by A^-n = (A^-1)^n.

3.2. Inverses; Algebraic Properties of Matrices Powers of a Matrix

3.2. Inverses; Algebraic Properties of Matrices Matrix Polynomials If A is a square matrix, say n × n, and if p(x) = a_0 + a_1 x + ··· + a_m x^m is any polynomial, then we define the n × n matrix p(A) = a_0 I + a_1 A + ··· + a_m A^m. It is called a matrix polynomial in A. If p(x) = p_1(x) p_2(x), then p(A) = p_1(A) p_2(A).
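A sketch of evaluating a matrix polynomial, plus a check of the factorization property p(A) = p_1(A) p_2(A) for p(x) = x^2 - 5x + 6 = (x-2)(x-3); the matrix A is an assumed example:

```python
import numpy as np

def matrix_poly(coeffs, A):
    """Evaluate p(A) = a0*I + a1*A + ... + am*A^m for coeffs = [a0, ..., am]."""
    result = np.zeros_like(A, dtype=float)
    power = np.eye(A.shape[0])
    for a in coeffs:
        result += a * power
        power = power @ A
    return result

A = np.array([[2.0, 0.0], [1.0, 3.0]])
p_of_A = matrix_poly([6, -5, 1], A)                    # p(x) = 6 - 5x + x^2
factored = (A - 2 * np.eye(2)) @ (A - 3 * np.eye(2))   # p1(A) p2(A)
print(np.allclose(p_of_A, factored))                   # True
```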

3.2. Inverses; Algebraic Properties of Matrices Properties of the Transpose

3.2. Inverses; Algebraic Properties of Matrices Properties of the Transpose

3.2. Inverses; Algebraic Properties of Matrices Properties of the Trace

3.2. Inverses; Algebraic Properties of Matrices Properties of the Trace Example 15

3.2. Inverses; Algebraic Properties of Matrices Transpose and Dot Product In expressions of the form Au · v or u · Av, the matrix A can be moved across the dot product sign by transposing it: Au · v = u · A^T v and u · Av = A^T u · v. If u and v are column vectors, then their dot product can be expressed as the matrix product u · v = u^T v = v^T u.
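A numeric check of moving A across the dot product (assumed example data):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
u = np.array([1, -1])
v = np.array([2, 5])

# Au . v equals u . (A^T v): the matrix crosses the dot by transposing.
print(np.dot(A @ u, v))     # -7
print(np.dot(u, A.T @ v))   # -7
```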

3.3. Elementary Matrices; A Method for Finding A^-1 Elementary Matrices Elementary row operations: 1. Multiply a row through by a nonzero constant. 2. Interchange two rows. 3. Add a multiple of one row to another. Elementary matrix: a matrix that results from applying a single elementary row operation to an identity matrix.

3.3. Elementary Matrices; A Method for Finding A^-1 Elementary Matrices In short, this theorem states that an elementary row operation can be performed on a matrix A by left multiplication by an appropriate elementary matrix.
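A small sketch: build an elementary matrix from the identity and confirm that left multiplication performs the corresponding row operation (the matrix and the operation are chosen arbitrarily):

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [2.0, -1.0, 3.0],
              [4.0, 1.0, 8.0]])

# Elementary matrix for "add -2 times row 1 to row 2".
E = np.eye(3)
E[1, 0] = -2.0

B = A.copy()
B[1, :] += -2.0 * B[0, :]        # the same row operation done directly
print(np.allclose(E @ A, B))     # True
```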

3.3. Elementary Matrices; A Method for Finding A^-1 Elementary Matrices Example 1

3.3. Elementary Matrices; A Method for Finding A^-1 Elementary Matrices Example 2

3.3. Elementary Matrices; A Method for Finding A^-1 Characterization of Invertibility

3.3. Elementary Matrices; A Method for Finding A^-1 Row Equivalence In general, two matrices that can be obtained from one another by finite sequences of elementary row operations are said to be row equivalent.

3.3. Elementary Matrices; A Method for Finding A^-1 An Algorithm for Inverting Matrices To find the inverse of an invertible matrix A, form the partitioned matrix [A | I] and apply elementary row operations until the left side is reduced to I; the right side is then A^-1, so [A | I] becomes [I | A^-1].
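A minimal, unoptimized sketch of that algorithm (partial pivoting is added for numerical safety; there are no singularity checks):

```python
import numpy as np

def invert(A):
    """Reduce [A | I] to [I | A^-1] by Gauss-Jordan elimination."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        # Swap in the row with the largest pivot, then scale the pivot to 1.
        p = i + np.argmax(np.abs(M[i:, i]))
        M[[i, p]] = M[[p, i]]
        M[i] /= M[i, i]
        # Eliminate the pivot column from every other row.
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]
    return M[:, n:]

A = np.array([[1, 2], [3, 5]])
print(invert(A))                               # [[-5.  2.] [ 3. -1.]]
print(np.allclose(invert(A) @ A, np.eye(2)))   # True
```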

3.3. Elementary Matrices; A Method for Finding A^-1 An Algorithm for Inverting Matrices Example 3

3.3. Elementary Matrices; A Method for Finding A^-1 An Algorithm for Inverting Matrices Example 4

3.3. Elementary Matrices; A Method for Finding A^-1 Solving Linear Systems by Matrix Inversion

3.3. Elementary Matrices; A Method for Finding A^-1 Solving Linear Systems by Matrix Inversion Example 5

3.3. Elementary Matrices; A Method for Finding A^-1 Solving Linear Systems by Matrix Inversion

3.3. Elementary Matrices; A Method for Finding A^-1 Solving Linear Systems by Matrix Inversion (a) To prove the invertibility of A, it suffices to show that the homogeneous system Ax=0 has only the trivial solution. If x is any solution of this system, then x = Ix = (BA)x = B(Ax) = B0 = 0. Thus, the system Ax=0 has only the trivial solution, which establishes that A is invertible. (b)

3.3. Elementary Matrices; A Method for Finding A^-1 A Unifying Theorem

3.3. Elementary Matrices; A Method for Finding A^-1 Consistency of Linear Systems Example 8

3.4. Subspaces and Linear Independence Subspaces of R^n In general, if W is a nonempty set of vectors in R^n, then we say that W is closed under scalar multiplication if any scalar multiple of a vector in W is also in W, and we say that W is closed under addition if the sum of any two vectors in W is also in W. Let W be the plane through the origin of R^n whose equation is a_1 x_1 + a_2 x_2 + ··· + a_n x_n = 0.

3.4. Subspaces and Linear Independence Subspaces of R^n The zero subspace and R^n are called the trivial subspaces of R^n. Example 1

3.4. Subspaces and Linear Independence Subspaces of R^n The subspace W of R^n whose vectors satisfy (3) is called the span of v_1, v_2, …, v_s and is denoted by span{v_1, v_2, …, v_s}. We also say that the vectors v_1, v_2, …, v_s span W. The scalars in (3) are called parameters, and we can think of span{v_1, v_2, …, v_s} as the geometric object in R^n that results when the parameters in (3) are allowed to vary independently from -∞ to ∞.

3.4. Subspaces and Linear Independence Subspaces of R^n Example 2 Thus, span{e_1, e_2, …, e_n} = R^n; that is, R^n is spanned by the standard unit vectors.

3.4. Subspaces and Linear Independence Subspaces of R^n LOOKING AHEAD We will eventually show that every subspace of R^n is the span of some finite set of vectors, and, in fact, is the span of at most n vectors. Example 4 All subspaces of R^2 fall into one of three categories: 1. The zero subspace 2. Lines through the origin 3. All of R^2 All subspaces of R^3 fall into one of four categories: 1. The zero subspace 2. Lines through the origin 3. Planes through the origin 4. All of R^3

3.4. Subspaces and Linear Independence Solution Space of a Linear System Since x=0 is a solution of the system, we are assured that the solution set is nonempty. If x_0 is any solution of the system, A(kx_0) = k(Ax_0) = k0 = 0. If x_1 and x_2 are solutions of the system, A(x_1 + x_2) = Ax_1 + Ax_2 = 0 + 0 = 0. Thus the solution set of a homogeneous linear system is a subspace; we will refer to it as the solution space of the system. The solution space, being a subspace of R^n, must be expressible in the form x = t_1 v_1 + t_2 v_2 + ··· + t_s v_s, which we call a general solution of the system.

3.4. Subspaces and Linear Independence Solution Space of a Linear System Example 5 The solution space can also be denoted by span{v_1, v_2, v_3}, where v_1 = (-3,1,0,0,0,0), v_2 = (-4,0,-2,1,0,0), v_3 = (-2,0,0,0,1,0)

3.4. Subspaces and Linear Independence Solution Space of a Linear System Example 7 The solution space of a homogeneous linear system in three unknowns is a subspace of R^3.

3.4. Subspaces and Linear Independence Solution Space of a Linear System

3.4. Subspaces and Linear Independence Linear Independence

3.4. Subspaces and Linear Independence Linear Independence Example 10 Two vectors v_1 and v_2 in R^n are linearly dependent if and only if there are scalars c_1 and c_2, not both zero, such that c_1 v_1 + c_2 v_2 = 0. Two vectors in R^n are linearly dependent if they are collinear and linearly independent if they are not.

3.4. Subspaces and Linear Independence Linear Independence Example 11 Three vectors in R^n are linearly dependent if they lie in a plane through the origin and are linearly independent if they do not.

3.4. Subspaces and Linear Independence Linear Independence and Homogeneous Linear Systems Consider the n × s matrix A = [v_1 | v_2 | ··· | v_s] whose columns are the given vectors. We can rewrite Ax=0 as x_1 v_1 + x_2 v_2 + ··· + x_s v_s = 0.

3.4. Subspaces and Linear Independence Linear Independence and Homogeneous Linear Systems Example 12 In Example 6 of Section 3.3 we showed that this system has only the trivial solution; thus, the vectors are linearly independent. In Example 4 of Section 3.3 we showed that the coefficient matrix for this system is not invertible. This implies that the system has nontrivial solutions (Theorem 3.3.7), and hence that the vectors are linearly dependent.

3.4. Subspaces and Linear Independence Linear Independence and Homogeneous Linear Systems Example 12 This system has more unknowns than equations, so it must have nontrivial solutions by Theorem 2.2.3. This implies that the vectors are linearly dependent.
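In practice one can test linear independence by placing the vectors in the columns of a matrix and comparing its rank with the number of vectors (the vectors here are assumed examples; note v3 = v1 + 2 v2):

```python
import numpy as np

v1, v2, v3 = [1, 2, 0], [0, 1, 1], [1, 4, 2]   # v3 = v1 + 2*v2
A = np.column_stack([v1, v2, v3])

# Independent exactly when Ax = 0 has only the trivial solution,
# i.e. when rank(A) equals the number of vectors.
print(np.linalg.matrix_rank(A))   # 2 < 3, so the vectors are dependent
```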

3.4. Subspaces and Linear Independence Translated Subspaces If x_0, v_1, v_2, …, v_s are vectors in R^n, then the set of vectors of the form x_0 + t_1 v_1 + t_2 v_2 + ··· + t_s v_s can be viewed as a translation by x_0 of the subspace W = span{v_1, v_2, …, v_s}. We call it the translation of W by x_0 and denote it by x_0 + W. Translations of subspaces have various names in the literature, the most common being linear manifolds, flats, and affine spaces. We will call them linear manifolds.

3.4. Subspaces and Linear Independence A Unifying Theorem

3.5. The Geometry of Linear Systems The Relationship Between Ax=b and Ax=0 If the nonhomogeneous system Ax=b is consistent, we will call Ax=0 the homogeneous system associated with Ax=b.

3.5. The Geometry of Linear Systems The Relationship Between Ax=b and Ax=0

3.5. The Geometry of Linear Systems The Relationship Between Ax=b and Ax=0 The solution of a consistent nonhomogeneous linear system is expressible in the form x = x_0 + t_1 v_1 + ··· + t_s v_s, where x_0 is a particular solution of Ax=b and t_1 v_1 + ··· + t_s v_s is a general solution of the associated homogeneous system.

3.5. The Geometry of Linear Systems The Relationship Between Ax=b and Ax=0 Example 1 Since the solution set of a consistent nonhomogeneous linear system is the translation of the solution space of the associated homogeneous system, the solution set of a consistent nonhomogeneous linear system in two or three unknowns must be one of the following: a point, a line, a plane, all of R^2, or all of R^3.

3.5. The Geometry of Linear Systems The Relationship Between Ax=b and Ax=0

3.5. The Geometry of Linear Systems Consistency of a Linear System from the Vector Point of View If the successive column vectors of A are a_1, a_2, …, a_n, then Ax=b can be rewritten as x_1 a_1 + x_2 a_2 + ··· + x_n a_n = b. The linear system is consistent if and only if b can be expressed as a linear combination of the column vectors of A. If A is an m × n matrix, then to say that b is a linear combination of the column vectors of A is the same as saying that b is in the subspace of R^m spanned by the column vectors of A. This subspace is called the column space of A and is denoted by col(A).

3.5. The Geometry of Linear Systems Consistency of a Linear System from the Vector Point of View Example 2 Determine whether the vector w = (9,1,0) can be expressed as a linear combination of the vectors v_1 = (1,2,3), v_2 = (1,4,6), v_3 = (2,-3,-5) and, if so, find such a linear combination.
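A sketch of Example 2 done numerically: place v_1, v_2, v_3 in the columns of A and solve Ac = w (here the coefficient matrix happens to be invertible, so np.linalg.solve applies directly):

```python
import numpy as np

v1, v2, v3 = [1, 2, 3], [1, 4, 6], [2, -3, -5]
w = np.array([9, 1, 0])

A = np.column_stack([v1, v2, v3])
c = np.linalg.solve(A, w)
print(c)                      # [1. 2. 3.]  ->  w = v1 + 2*v2 + 3*v3
print(np.allclose(A @ c, w))  # True: the system is consistent
```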

3.5. The Geometry of Linear Systems Hyperplanes The set of points (x_1, x_2, …, x_n) in R^n that satisfy a linear equation of the form a_1 x_1 + a_2 x_2 + ··· + a_n x_n = 0 is called a hyperplane in R^n. The hyperplane passes through the origin.

3.5. The Geometry of Linear Systems Hyperplanes The hyperplane consists of all vectors x in R^n that are orthogonal to the vector a: it is the hyperplane through the origin with normal a, also called the orthogonal complement of a and denoted a⊥ (read "a perp").

3.5. The Geometry of Linear Systems Geometric Interpretations of Solution Spaces

3.6. Matrices with Special Forms Diagonal Matrices A square matrix in which all entries off the main diagonal are zero is called a diagonal matrix. A diagonal matrix is invertible if and only if all of its diagonal entries are nonzero.

3.6. Matrices with Special Forms Diagonal Matrices If k is a positive integer, then the kth power of the diagonal matrix D with diagonal entries d_1, d_2, …, d_n is the diagonal matrix D^k with diagonal entries d_1^k, d_2^k, …, d_n^k.
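A one-line check of this rule (the diagonal entries are assumed example values):

```python
import numpy as np

D = np.diag([2.0, -1.0, 3.0])
k = 4
lhs = np.linalg.matrix_power(D, k)
rhs = np.diag([2.0**k, (-1.0)**k, 3.0**k])
print(np.allclose(lhs, rhs))   # True: D^k raises each diagonal entry to the k
```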

3.6. Matrices with Special Forms Triangular Matrices A square matrix in which all entries above the main diagonal are zero is called lower triangular, and a square matrix in which all the entries below the main diagonal are zero is called upper triangular. A matrix that is either upper triangular or lower triangular is called triangular. Example 2 A = [a_ij] is upper triangular if a_ij = 0 when i > j, and lower triangular if a_ij = 0 when i < j.

3.6. Matrices with Special Forms Properties of Triangular Matrices Part (b): the case i < j.

3.6. Matrices with Special Forms Properties of Triangular Matrices Example 4 (one of the matrices shown is invertible, the other noninvertible)

3.6. Matrices with Special Forms Symmetric and Skew-symmetric Matrices A square matrix A is called symmetric if A^T = A and skew-symmetric if A^T = -A.

3.6. Matrices with Special Forms Symmetric and Skew-symmetric Matrices

3.6. Matrices with Special Forms Symmetric and Skew-symmetric Matrices Let A and B be symmetric matrices of the same size. The product AB is symmetric if and only if (AB)^T = AB. Since (AB)^T = B^T A^T = BA, the product AB is symmetric if and only if AB = BA. Example 5

3.6. Matrices with Special Forms Invertibility of Symmetric Matrices If A is symmetric and invertible, then A^-1 is also symmetric, since (A^-1)^T = (A^T)^-1 = A^-1.

3.6. Matrices with Special Forms Matrices of the Form AA^T and A^T A The products AA^T and A^T A are always symmetric, since (AA^T)^T = (A^T)^T A^T = AA^T and (A^T A)^T = A^T (A^T)^T = A^T A. If A is invertible, then A^T is invertible because (A^-1)^T = (A^T)^-1, so the products AA^T and A^T A are invertible.

3.6. Matrices with Special Forms Fixed Points of a Matrix If A is a square matrix, then the solutions, if any, of the equation Ax = x are called the fixed points of A. Example 6
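A sketch of finding fixed points by solving (I - A)x = 0; the null-space basis is read off from the SVD. The matrix A is an assumed example that swaps the two components of x:

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])

# Fixed points satisfy Ax = x, i.e. (I - A)x = 0; a basis for the null
# space of I - A comes from the rows of Vt with zero singular values.
M = np.eye(2) - A
U, s, Vt = np.linalg.svd(M)
basis = Vt[s < 1e-12]
print(basis)   # spans {(t, t)}: vectors with equal components are fixed

x = np.array([5.0, 5.0])
print(np.allclose(A @ x, x))   # True
```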

3.6. Matrices with Special Forms A Technique for Inverting I-A When A Is Nilpotent If A^k = 0, then (I - A)(I + A + A^2 + ··· + A^(k-1)) = I, so (I - A)^-1 = I + A + A^2 + ··· + A^(k-1). A square matrix A with the property that A^k = 0 for some positive integer k is said to be nilpotent, and the smallest positive power for which A^k = 0 is called the index of nilpotency.

3.6. Matrices with Special Forms A Technique for Inverting I-A When A Is Nilpotent Example 7
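A small check of the technique on an assumed strictly upper triangular (hence nilpotent) matrix with A^3 = 0:

```python
import numpy as np

A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])   # strictly upper triangular: A^3 = 0

inv = np.eye(3) + A + A @ A       # (I - A)^-1 = I + A + A^2 when A^3 = 0
print(np.allclose((np.eye(3) - A) @ inv, np.eye(3)))   # True
```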

3.6. Matrices with Special Forms Inverting I-A by Power Series If 0 < x < 1, then x^k approaches 0 as k → ∞ and 1/(1 - x) = 1 + x + x^2 + ···. Analogously, if the powers A^k of a square matrix A approach the zero matrix as k → ∞, then (I - A)^-1 = I + A + A^2 + ···.

3.6. Matrices with Special Forms Inverting I-A by Power Series It is called a power series representation of (I-A)^-1.

3.7. Matrix Factorizations; LU-Decomposition Solving Linear Systems by Factorization If a square matrix A is in the form A=LU (1) where L is lower triangular and U is upper triangular, Step 1. Rewrite the system Ax=b as LUx=b (2) Step 2. Define a new unknown y by letting Ux=y (3) and rewrite (2) as Ly=b. Step 3. Solve the system Ly=b for the unknown y. Step 4. Substitute the now-known vector y into (3) and solve for x. This procedure is called the method of LU-decomposition. Notice that Ly=b and Ux=y are easy to solve because their coefficient matrices are triangular.
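A minimal sketch of Steps 3 and 4 with hand-coded triangular solves. The factorization L, U and the right-hand side b are assumed example data; conventions for which factor carries the 1's on its diagonal vary:

```python
import numpy as np

def forward_sub(L, b):
    """Solve Ly = b for lower triangular L."""
    y = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve Ux = y for upper triangular U."""
    x = np.zeros_like(y, dtype=float)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# A = LU with A = [[2, 1], [4, 5]].
L = np.array([[1.0, 0.0], [2.0, 1.0]])
U = np.array([[2.0, 1.0], [0.0, 3.0]])
b = np.array([3.0, 9.0])

y = forward_sub(L, b)   # Step 3: solve Ly = b
x = back_sub(U, y)      # Step 4: solve Ux = y
print(x)                           # [1. 1.]
print(np.allclose(L @ U @ x, b))   # True
```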

3.7. Matrix Factorizations; LU-Decomposition Solving Linear Systems by Factorization Example 1 Assume that A has the LU-decomposition shown. Solve the following linear system.

3.7. Matrix Factorizations; LU-Decomposition Solving Linear Systems by Factorization In general, not every square matrix A has an LU-decomposition, nor is an LU-decomposition unique if it exists.

3.7. Matrix Factorizations; LU-Decomposition Solving Linear Systems by Factorization Suppose that A is an n × n matrix that has been reduced by elementary row operations, without row interchanges, to the row echelon form U. Then there is a sequence of elementary matrices E_1, E_2, …, E_k such that E_k ··· E_2 E_1 A = U. (8) Since elementary matrices are invertible, we can solve (8) for A as A = E_1^-1 E_2^-1 ··· E_k^-1 U = LU, where L = E_1^-1 E_2^-1 ··· E_k^-1. (10)

3.7. Matrix Factorizations; LU-Decomposition Solving Linear Systems by Factorization U is upper triangular because it is a row echelon form of the square matrix A. Notice that no row interchanges are used to obtain U from A and that in Gaussian elimination zeros are introduced by adding multiples of rows to lower rows. Each elementary matrix in (8) arises either by multiplying a row of the n × n identity matrix by a scalar or by adding a multiple of a row to a lower row. In either case the resulting elementary matrix is lower triangular. Each of the matrices on the right side of (10) is lower triangular, so their product L is also lower triangular.

3.7. Matrix Factorizations; LU-Decomposition Solving Linear Systems by Factorization A procedure for finding an LU-decomposition of the matrix A: 1. Reduce A to a row echelon form U without using any row interchanges. 2. Keep track of the sequence of row operations performed, and let E_1, E_2, …, E_k be the corresponding elementary matrices. 3. Let L = E_1^-1 E_2^-1 ··· E_k^-1. 4. A = LU is an LU-decomposition of A. In practice, one builds L directly by choosing its entries so that the same sequence of row operations reduces L to I.

3.7. Matrix Factorizations; LU-Decomposition Solving Linear Systems by Factorization Four steps for finding an LU-decomposition of the matrix A: 1. Reduce A to row echelon form U without using row interchanges, keeping track of the multipliers used to introduce the leading 1's and the multipliers used to introduce zeros below the leading 1's. 2. In each position along the main diagonal of L, place the reciprocal of the multiplier that introduced the leading 1 in that position in U. 3. In each position below the main diagonal of L, place the negative of the multiplier used to introduce the zero in that position in U. 4. Form the decomposition A = LU.

3.7. Matrix Factorizations; LU-Decomposition Solving Linear Systems by Factorization Example 2 Find an LU-decomposition of

3.7. Matrix Factorizations; LU-Decomposition The Relationship between Gaussian Elimination and LU-Decomposition Example 3 Find an LU-decomposition of

3.7. Matrix Factorizations; LU-Decomposition Matrix Inversion by LU-Decomposition Many of the best algorithms for inverting matrices use LU-decomposition. Write A^-1 in column form as A^-1 = [x_1 | x_2 | ··· | x_n]. Then AA^-1 = I can be expressed as the n linear systems Ax_j = e_j (j = 1, 2, …, n), where e_j is the j-th column of I, and all n systems can be solved by finding a single LU-decomposition of A.

3.7. Matrix Factorizations; LU-Decomposition LDU-Decomposition U has 1's on the main diagonal because it is a row echelon form of A, but L need not. We can shift the diagonal entries of L into a diagonal matrix D and write L = L′D, where L′ is a lower triangular matrix with 1's on the main diagonal.

3.7. Matrix Factorizations; LU-Decomposition LDU-Decomposition If A is a square matrix that can be reduced to row echelon form without row interchanges, then A can be factored uniquely as A = LDU, where L is a lower triangular matrix with 1's on the main diagonal, D is a diagonal matrix, and U is an upper triangular matrix with 1's on the main diagonal. This is called the LDU-decomposition (or LDU-factorization) of A.

3.7. Matrix Factorizations; LU-Decomposition Using Permutation Matrices to Deal with Row Interchanges A matrix P is formed by multiplying in sequence those elementary matrices that correspond to the row interchanges; executing all of these row interchanges on A amounts to forming the product PA. With the row interchanges out of the way, the matrix PA can be reduced to row echelon form without row interchanges and hence has an LU-decomposition PA=LU. Since the matrix P is invertible (being a product of elementary matrices), the systems Ax=b and PAx=Pb have the same solutions, and the latter system can be solved by LU-decomposition. P is called a permutation matrix, and the factorization is called a PLU-decomposition of A.
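SciPy's lu computes a factorization of this kind, returning a permutation matrix P with A = PLU (the example matrix is assumed; its zero pivot forces a row interchange):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0.0, 2.0],
              [1.0, 4.0]])   # zero pivot forces a row interchange

P, L, U = lu(A)              # scipy.linalg.lu returns A = P @ L @ U
print(np.allclose(A, P @ L @ U))   # True
```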

3.7. Matrix Factorizations; LU-Decomposition Cost Estimates for Solving Large Linear Systems Neither Gaussian elimination nor LU-decomposition has a cost advantage over the other. However, LU-decomposition has other advantages that make it the method of choice: once A is factored, systems Ax=b with new right-hand sides b can be solved cheaply by reusing L and U, and the factorization can also be used to compute A^-1 efficiently.

3.8. Partitioned Matrices and Parallel Processing General Partitioning A matrix can be partitioned (subdivided) into submatrices (also called blocks) in various ways by inserting lines between selected rows and columns.

3.8. Partitioned Matrices and Parallel Processing General Partitioning If the sizes of the blocks conform for the required operations, then the product AB can be computed blockwise using the block version of the row-column rule of Theorem 3.1.7. This procedure is called block multiplication.
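A sketch of block multiplication on random 4 × 4 matrices partitioned into 2 × 2 blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
B = rng.random((4, 4))

# Partition each matrix into four 2x2 blocks and multiply block-by-block.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

blockAB = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
print(np.allclose(blockAB, A @ B))   # True
```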

3.8. Partitioned Matrices and Parallel Processing General Partitioning Example 1

3.8. Partitioned Matrices and Parallel Processing General Partitioning

3.8. Partitioned Matrices and Parallel Processing General Partitioning It is sometimes called the outer product rule.

3.8. Partitioned Matrices and Parallel Processing General Partitioning Example 2 Here is the proof of Theorem 3.2.12(e): tr(AB) = tr(BA).

3.8. Partitioned Matrices and Parallel Processing Block Diagonal Matrices A partitioned matrix A is said to be block diagonal if the matrices on the main diagonal are square and all matrices off the main diagonal are zero, A = diag(D_1, D_2, …, D_k), where the matrices D_1, D_2, …, D_k are square. It can be shown that A is invertible if and only if each matrix on the diagonal is invertible, in which case A^-1 = diag(D_1^-1, D_2^-1, …, D_k^-1).

3.8. Partitioned Matrices and Parallel Processing Block Diagonal Matrices Example 3 Consider the block diagonal matrix

3.8. Partitioned Matrices and Parallel Processing Block Upper Triangular Matrices A partitioned matrix A is said to be block upper triangular if the matrices on the main diagonal are square and all matrices below the main diagonal are zero; that is, A is partitioned with square diagonal blocks A_11, A_22, …, A_kk and zero blocks below them. The definition of a block lower triangular matrix is similar.

3.8. Partitioned Matrices and Parallel Processing Block Upper Triangular Matrices For a 2 × 2 block upper triangular matrix with invertible diagonal blocks A_11 and A_22, the inverse is block upper triangular with diagonal blocks A_11^-1 and A_22^-1 and upper right block -A_11^-1 A_12 A_22^-1. This formula allows the work of inverting A to be accomplished by parallel processing, that is, by using two individual processors working simultaneously to compute the inverses of the smaller matrices A_11 and A_22.
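A sketch verifying the 2 × 2 block formula with assumed blocks; the computations of inv11 and inv22 are independent and could run on separate processors:

```python
import numpy as np

A11 = np.array([[2.0, 1.0], [1.0, 1.0]])
A12 = np.array([[1.0, 2.0], [0.0, 1.0]])
A22 = np.array([[3.0, 0.0], [1.0, 1.0]])
A = np.block([[A11, A12], [np.zeros((2, 2)), A22]])

# Invert the diagonal blocks independently, then combine.
inv11 = np.linalg.inv(A11)
inv22 = np.linalg.inv(A22)
Ainv = np.block([[inv11, -inv11 @ A12 @ inv22],
                 [np.zeros((2, 2)), inv22]])
print(np.allclose(A @ Ainv, np.eye(4)))   # True
```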

3.8. Partitioned Matrices and Parallel Processing Block Upper Triangular Matrices Example 4

3.8. Partitioned Matrices and Parallel Processing Block Upper Triangular Matrices Example 4