Chapter 3 - From Gaussian Elimination to LU Factorization
Maggie Myers and Robert A. van de Geijn
The University of Texas at Austin
Practical Linear Algebra, Fall
Gaussian Elimination - Take 1
Consider the system of linear equations

2x + 4y - 2z = -10
4x - 2y + 6z = 20
6x - 4y + 2z = 18

Notice that x, y, and z are just variables, for which we can pick any names we want. To be consistent with the notation we introduced previously for naming components of vectors, we use the names χ0, χ1, and χ2 instead of x, y, and z, respectively:

2χ0 + 4χ1 - 2χ2 = -10
4χ0 - 2χ1 + 6χ2 = 20
6χ0 - 4χ1 + 2χ2 = 18
2χ0 + 4χ1 - 2χ2 = -10
4χ0 - 2χ1 + 6χ2 = 20
6χ0 - 4χ1 + 2χ2 = 18

Solving this linear system relies on the fact that its solution does not change if
1. Equations are reordered (not actually used in this example); and/or
2. An equation in the system is modified by subtracting a multiple of another equation in the system from it; and/or
3. Both sides of an equation in the system are scaled by a nonzero constant.
Example: Gaussian Elimination

The following steps are known as Gaussian elimination. They transform a system of linear equations into an equivalent upper triangular system of linear equations.

Subtract λ10 = (4/2) = 2 times the first equation from the second equation:

Before                          After
2χ0 + 4χ1 - 2χ2 = -10           2χ0 +  4χ1 -  2χ2 = -10
4χ0 - 2χ1 + 6χ2 =  20                - 10χ1 + 10χ2 =  40
6χ0 - 4χ1 + 2χ2 =  18           6χ0 -  4χ1 +  2χ2 =  18
Subtract λ20 = (6/2) = 3 times the first equation from the third equation:

Before                          After
2χ0 +  4χ1 -  2χ2 = -10         2χ0 +  4χ1 -  2χ2 = -10
     - 10χ1 + 10χ2 =  40             - 10χ1 + 10χ2 =  40
6χ0 -  4χ1 +  2χ2 =  18              - 16χ1 +  8χ2 =  48

Subtract λ21 = ((-16)/(-10)) = 1.6 times the second equation from the third equation:

Before                          After
2χ0 +  4χ1 -  2χ2 = -10         2χ0 +  4χ1 -  2χ2 = -10
     - 10χ1 + 10χ2 =  40             - 10χ1 + 10χ2 =  40
     - 16χ1 +  8χ2 =  48                    -  8χ2 = -16
This now leaves us with an upper triangular system of linear equations.

Multipliers

In the above Gaussian elimination procedure, λ10, λ20, and λ21 are called the multipliers.
Back substitution

2χ0 +  4χ1 -  2χ2 = -10
     - 10χ1 + 10χ2 =  40
            -  8χ2 = -16

Solve the last equation: χ2 = (-16)/(-8) = 2.
Substitute χ2 = 2 into the second equation and solve: χ1 = (40 - 10(2))/(-10) = -2.
Substitute χ2 = 2 and χ1 = -2 into the first equation and solve: χ0 = (-10 - (4(-2) + (-2)(2)))/2 = 1.

Thus, the solution is the vector

x = ( χ0 )   (  1 )
    ( χ1 ) = ( -2 )
    ( χ2 )   (  2 ).
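The back-substitution steps above can be sketched in code. This is an illustrative sketch (ours, not part of the notes), using plain Python lists for the upper triangular system just derived.

```python
def back_substitution(U, b):
    """Solve U x = b for upper triangular U by back substitution."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the contributions of the already-computed components
        s = b[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / U[i][i]
    return x

# The upper triangular system obtained by Gaussian elimination:
U = [[2.0, 4.0, -2.0],
     [0.0, -10.0, 10.0],
     [0.0, 0.0, -8.0]]
b = [-10.0, 40.0, -16.0]
print(back_substitution(U, b))  # [1.0, -2.0, 2.0]
```

Note that the loop visits the equations from last to first, exactly mirroring the three solve-and-substitute steps above.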
Gaussian Elimination - Take 2
It becomes very cumbersome to always write the entire equations. The information is encoded in the coefficients in front of the χi variables and the values to the right of the equal signs. We could just let the array

( 2   4  -2  -10
  4  -2   6   20
  6  -4   2   18 )

represent

2χ0 + 4χ1 - 2χ2 = -10
4χ0 - 2χ1 + 6χ2 = 20
6χ0 - 4χ1 + 2χ2 = 18

Then Gaussian elimination can simply work with this array of numbers.
Initial system of equations:

( 2   4  -2  -10
  4  -2   6   20
  6  -4   2   18 )

Subtract λ10 = (4/2) = 2 times the first row from the second row:

( 2    4   -2  -10
  0  -10   10   40
  6   -4    2   18 )

Subtract λ20 = (6/2) = 3 times the first row from the third row:

( 2    4   -2  -10
  0  -10   10   40
  0  -16    8   48 )

Subtract λ21 = ((-16)/(-10)) = 1.6 times the second row from the third row:

( 2    4   -2  -10
  0  -10   10   40
  0    0   -8  -16 )
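The row operations above can be carried out mechanically. Below is a small sketch (ours, not the notes' code) that applies them to the appended array using plain Python lists.

```python
def gaussian_eliminate(M):
    """Reduce the appended array M (n rows, n+1 columns) to upper
    triangular form by subtracting multiples of rows, in place."""
    n = len(M)
    for j in range(n):                 # current pivot row
        for i in range(j + 1, n):      # rows below the pivot
            lam = M[i][j] / M[j][j]    # the multiplier
            M[i] = [M[i][k] - lam * M[j][k] for k in range(n + 1)]
    return M

M = [[2.0, 4.0, -2.0, -10.0],
     [4.0, -2.0, 6.0, 20.0],
     [6.0, -4.0, 2.0, 18.0]]
gaussian_eliminate(M)
# M now holds the final upper triangular appended array from the slides
```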
Back substitution

The last row is shorthand for -8χ2 = -16, which implies χ2 = (-16)/(-8) = 2.
The second row is shorthand for -10χ1 + 10χ2 = 40, which implies -10χ1 + 10(2) = 40 and hence χ1 = (40 - 10(2))/(-10) = -2.
The first row is shorthand for 2χ0 + 4χ1 - 2χ2 = -10, which implies 2χ0 + 4(-2) - 2(2) = -10 and hence χ0 = (-10 - 4(-2) + 2(2))/2 = 1.

The solution equals

x = ( χ0 )   (  1 )
    ( χ1 ) = ( -2 )
    ( χ2 )   (  2 ).

Check the answer (by plugging χ0 = 1, χ1 = -2, and χ2 = 2 into the original system):

2(1) + 4(-2) - 2(2) = -10
4(1) - 2(-2) + 6(2) = 20
6(1) - 4(-2) + 2(2) = 18
Observations

The above discussion motivates storing only the coefficients of a linear system (the numbers to the left of the equal signs) as a two dimensional array and the numbers to the right as a one dimensional array. We recognize this two dimensional array as a matrix: A ∈ R^(m×n) is the two dimensional array of scalars

A = ( α_{0,0}    α_{0,1}    ...  α_{0,n-1}
      α_{1,0}    α_{1,1}    ...  α_{1,n-1}
      ...        ...             ...
      α_{m-1,0}  α_{m-1,1}  ...  α_{m-1,n-1} ),

where α_{i,j} ∈ R for 0 ≤ i < m and 0 ≤ j < n. It has m rows and n columns. Note that the parentheses are simply there to delimit the array rather than having any special meaning.
Observations (continued)

We similarly recognize the one dimensional array as a (column) vector x ∈ R^n, where

x = ( χ_0
      χ_1
      ...
      χ_{n-1} ).

The length of the vector is n.

Now, given A ∈ R^(m×n) and vector x ∈ R^n, the notation Ax stands for

( α_{0,0}χ_0   + α_{0,1}χ_1   + ... + α_{0,n-1}χ_{n-1}
  α_{1,0}χ_0   + α_{1,1}χ_1   + ... + α_{1,n-1}χ_{n-1}
  ...
  α_{m-1,0}χ_0 + α_{m-1,1}χ_1 + ... + α_{m-1,n-1}χ_{n-1} ).
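The definition of Ax translates directly into code. A sketch (ours), checked against the example system, whose solution vector should reproduce the right-hand side:

```python
def matvec(A, x):
    """Compute Ax by the componentwise definition above."""
    m, n = len(A), len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

A = [[2, 4, -2],
     [4, -2, 6],
     [6, -4, 2]]
x = [1, -2, 2]
print(matvec(A, x))  # [-10, 20, 18]: the right-hand side of the example
```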
Gaussian Elimination - Take 3
Example

[matrix display lost in transcription]
Exercise

Compute [expression lost in transcription]. How can this be described as an axpy operation?
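For reference, an axpy ("alpha times x plus y") updates a vector y with a scaled vector: y := αx + y. A minimal sketch (ours), using the rows of the example's appended array:

```python
def axpy(alpha, x, y):
    """Return alpha * x + y for vectors x and y of equal length."""
    return [alpha * chi + psi for chi, psi in zip(x, y)]

# Subtracting lambda times one row from another is an axpy with alpha = -lambda:
row0 = [2, 4, -2, -10]
row1 = [4, -2, 6, 20]
print(axpy(-2, row0, row1))  # [0, -10, 10, 40]
```

This is exactly the first elimination step of the example: the second row updated by subtracting λ10 = 2 times the first row.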
[matrix displays lost in transcription]

Back substitution as before.
Gaussian Elimination - Take 4
Example

[matrix displays lost in transcription]

Forward substitution, then back substitution as before.
Theorem

Let L̂_j be the matrix that equals the identity, except that for i > j the (i, j) elements (the ones below the diagonal in the jth column) have been replaced with -λ_{i,j}:

L̂_j = ( I_j  0             0
        0    1             0
        0   -λ_{j+1,j}     1
        0   -λ_{j+2,j}        1
        ...  ...                ...
        0   -λ_{m-1,j}             1 ).

Then L̂_j A equals the matrix A except that for i > j the ith row is modified by subtracting λ_{i,j} times the jth row from it. Such a matrix L̂_j is called a Gauss transform.
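To make the theorem concrete, here is a small sketch (ours) that builds a Gauss transform L̂_j and checks, on the running example, that applying it subtracts the multipliers times row j from the rows below:

```python
def gauss_transform(m, j, lambdas):
    """Identity matrix of size m with -lambdas[k] placed in column j,
    rows j+1 .. m-1 (lambdas has length m - j - 1)."""
    L = [[float(r == c) for c in range(m)] for r in range(m)]
    for k, lam in enumerate(lambdas):
        L[j + 1 + k][j] = -lam
    return L

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2.0, 4.0, -2.0],
     [4.0, -2.0, 6.0],
     [6.0, -4.0, 2.0]]
L0 = gauss_transform(3, 0, [2.0, 3.0])  # subtract 2x and 3x row 0
print(matmul(L0, A))  # [[2, 4, -2], [0, -10, 10], [0, -16, 8]]
```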
Exercise

Verify that [matrix identities lost in transcription].
Gaussian Elimination - Take 5
Example

Consider

(  1    0  0 ) ( 2   4  -2 )   ( 2            4            -2           )
( -λ10  1  0 ) ( 4  -2   6 ) = ( 4 - λ10(2)  -2 - λ10(4)    6 - λ10(-2) )
( -λ20  0  1 ) ( 6  -4   2 )   ( 6 - λ20(2)  -4 - λ20(4)    2 - λ20(-2) ).

How should λ10 and λ20 be chosen so that zeroes are introduced below the diagonal in the first column? Examine 4 - λ10(2) and 6 - λ20(2). λ10 = 4/2 = 2 and λ20 = 6/2 = 3 have the desired property.
Example

Alternatively, we can view the multipliers as a vector. To zero the elements below the diagonal in the first column:

( λ10 )       ( 4 )
( λ20 ) (2) = ( 6 ),

or, equivalently,

( λ10 )   ( 4 )        ( 2 )
( λ20 ) = ( 6 ) / 2 =  ( 3 ).
Generalizing this insight

Let A^(0) ∈ R^(n×n) and let L̂^(0) be a Gauss transform. Partition

A^(0) → ( α11^(0)  a12^(0)T )          L̂^(0) → (  1         0 )
        ( a21^(0)  A22^(0)  ),                  ( -l21^(0)   I ).

Then

L̂^(0) A^(0) = (  1         0 ) ( α11^(0)  a12^(0)T )
              ( -l21^(0)   I ) ( a21^(0)  A22^(0)  )

            = ( α11^(0)                     a12^(0)T                   )
              ( a21^(0) - l21^(0) α11^(0)   A22^(0) - l21^(0) a12^(0)T ).
Generalizing this insight (continued)

( α11^(0)                     a12^(0)T                   )
( a21^(0) - l21^(0) α11^(0)   A22^(0) - l21^(0) a12^(0)T ).

Choose l21^(0) so that a21^(0) - l21^(0) α11^(0) = 0: l21^(0) = a21^(0)/α11^(0).

A22^(0) - l21^(0) a12^(0)T: this is a rank-1 update (ger). Update

A^(1) := (  1         0 ) ( α11^(0)  a12^(0)T )   ( α11^(0)  a12^(0)T                  )   ( α11^(1)  a12^(1)T )
         ( -l21^(0)   I ) ( a21^(0)  A22^(0)  ) = ( 0        A22^(0) - l21^(0)a12^(0)T ) = ( 0        A22^(1)  ).
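One step of this partitioned update can be sketched in code (our naming, not the notes'): split off the first row and column, compute l21, and apply the rank-1 update to A22.

```python
def ge_step(A):
    """One partitioned step: l21 = a21 / alpha11, then the rank-1
    update A22 := A22 - l21 * a12^T.  Returns (l21, updated A22)."""
    alpha11, a12 = A[0][0], A[0][1:]
    a21 = [row[0] for row in A[1:]]
    A22 = [row[1:] for row in A[1:]]
    l21 = [a / alpha11 for a in a21]
    A22 = [[A22[i][j] - l21[i] * a12[j] for j in range(len(a12))]
           for i in range(len(l21))]
    return l21, A22

A = [[2.0, 4.0, -2.0],
     [4.0, -2.0, 6.0],
     [6.0, -4.0, 2.0]]
l21, A22 = ge_step(A)
print(l21)   # [2.0, 3.0]
print(A22)   # [[-10.0, 10.0], [-16.0, 8.0]]
```

Applying ge_step again to A22 performs the second step of elimination, which is exactly the recursion developed in the next slides.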
Example

Consider

( 1   0    0 ) ( 2    4   -2 )   ( 2    4                -2           )
( 0   1    0 ) ( 0  -10   10 ) = ( 0  -10                10           )
( 0  -λ21  1 ) ( 0  -16    8 )   ( 0  -16 - λ21(-10)      8 - λ21(10) ).

How should λ21 be chosen? -16 - λ21(-10) = 0, so that λ21 = (-16)/(-10) = 1.6 has the desired property. Alternatively, we notice that, viewed as a vector, (λ21) = (-16)/(-10).
Moving on

Partition

A^(1) → ( A00^(1)  a01^(1)  A02^(1)           ( I  0         0
          0        α11^(1)  a12^(1)T    L̂^(1) → ( 0  1         0
          0        a21^(1)  A22^(1) ),           ( 0  -l21^(1)  I ).

Then

L̂^(1) A^(1) = ( A00^(1)  a01^(1)                     A02^(1)
               0        α11^(1)                     a12^(1)T
               0        a21^(1) - l21^(1) α11^(1)   A22^(1) - l21^(1) a12^(1)T ).
Moving on (continued)

Choose l21^(1) so that a21^(1) - l21^(1) α11^(1) = 0: l21^(1) = a21^(1)/α11^(1).

A22^(1) - l21^(1) a12^(1)T: this is a rank-1 update (ger). Then

A^(2) := L̂^(1) A^(1) = ( A00^(1)  a01^(1)  A02^(1)
                         0        α11^(1)  a12^(1)T
                         0        0        A22^(2) ).
More general yet

Partition

A^(k) → ( A00^(k)  a01^(k)  A02^(k)            ( I_k  0         0
          0        α11^(k)  a12^(k)T    L̂^(k) → ( 0    1         0
          0        a21^(k)  A22^(k) ),           ( 0    -l21^(k)  I ),

where A00^(k) and I_k are k × k matrices. Then

L̂^(k) A^(k) = ( A00^(k)  a01^(k)                     A02^(k)
               0        α11^(k)                     a12^(k)T
               0        a21^(k) - l21^(k) α11^(k)   A22^(k) - l21^(k) a12^(k)T ).
Choose l21^(k) so that a21^(k) - l21^(k) α11^(k) = 0: l21^(k) = a21^(k)/α11^(k).

A22^(k) - l21^(k) a12^(k)T: this is a rank-1 update (ger). Then

A^(k+1) := L̂^(k) A^(k) = ( A00^(k)  a01^(k)  A02^(k)
                            0        α11^(k)  a12^(k)T
                            0        0        A22^(k+1) ).
Algorithm: A := GE_Take5(A)

Partition A → ( A_TL  A_TR
               A_BL  A_BR )
where A_TL is 0 × 0

while m(A_TL) < m(A) do

  Repartition
    ( A_TL  A_TR )     ( A00    a01  A02
    ( A_BL  A_BR )  →  ( a10^T  α11  a12^T
                       ( A20    a21  A22 )
  where α11 is 1 × 1

  a21 := a21/α11              (= l21)
  A22 := A22 - a21 a12^T      (= A22 - l21 a12^T)

  Continue with
    ( A_TL  A_TR )     ( A00    a01  A02
    ( A_BL  A_BR )  ←  ( a10^T  α11  a12^T
                       ( A20    a21  A22 )

endwhile
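The loop above maps directly to code. A sketch (ours) that overwrites A with the multipliers below the diagonal and U on and above it, assuming no zero pivots arise:

```python
def ge_take5(A):
    """Overwrite A: the strictly lower triangular part receives the
    multipliers l21, the upper triangular part receives U."""
    n = len(A)
    for j in range(n):
        for i in range(j + 1, n):
            A[i][j] = A[i][j] / A[j][j]        # a21 := a21 / alpha11
            for k in range(j + 1, n):
                A[i][k] -= A[i][j] * A[j][k]   # A22 := A22 - a21 a12^T
    return A

A = [[2.0, 4.0, -2.0],
     [4.0, -2.0, 6.0],
     [6.0, -4.0, 2.0]]
ge_take5(A)
# A now holds the multipliers 2, 3, 1.6 below the diagonal and U above
```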
Insights

Now, if A ∈ R^(n×n), then A^(n) = L̂^(n-1) ... L̂^(1) L̂^(0) A = U, an upper triangular matrix.

Also, to solve Ax = b, we note that

Ux = (L̂^(n-1) ... L̂^(1) L̂^(0) A) x = L̂^(n-1) ... L̂^(1) L̂^(0) b = b̂.

Computing the right-hand side we recognize as forward substitution applied to vector b. We will later see that solving Ux = b̂, where U is upper triangular, is equivalent to back substitution.
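Putting the pieces together, a sketch (our naming, not the notes' code): apply the same row operations to b while eliminating (forward substitution), then back-substitute with the resulting U.

```python
def solve(A, b):
    """Solve Ax = b: eliminate on A while applying the same row
    operations to b (forward substitution), then back-substitute."""
    n = len(A)
    A = [row[:] for row in A]       # work on copies
    b = b[:]
    for j in range(n):
        for i in range(j + 1, n):
            lam = A[i][j] / A[j][j]
            for k in range(j, n):
                A[i][k] -= lam * A[j][k]
            b[i] -= lam * b[j]      # b-hat: Gauss transforms applied to b
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution with U
        x[i] = (b[i] - sum(A[i][k] * x[k] for k in range(i + 1, n))) / A[i][i]
    return x

A = [[2.0, 4.0, -2.0], [4.0, -2.0, 6.0], [6.0, -4.0, 2.0]]
b = [-10.0, 20.0, 18.0]
print(solve(A, b))  # recovers the solution (1, -2, 2) of the example
```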
The reason why we got to this point as GE Take 5 is so that the reader, hopefully, now recognizes this as just Gaussian elimination. The insights in this section are summarized in the algorithm, in which the original matrix A is overwritten with the upper triangular matrix that results from Gaussian elimination, and the strictly lower triangular elements are overwritten by the multipliers.
Gaussian Elimination - Take 6
Inverse of a Matrix

Let A ∈ R^(n×n) and B ∈ R^(n×n) have the property that AB = BA = I. Then B is said to be the inverse of matrix A and is denoted by A^(-1). Later we will see that for square A and B it is always the case that if AB = I then BA = I, and that the inverse of a matrix is unique.
Example

Let L̂ be the Gauss transform that subtracts two times the first row from the second row, and let L be the matrix that instead adds two times the first row to the second row. Then LL̂A = A and L = L̂^(-1).

This should be intuitively true: L̂A subtracts two times the first row from the second row; LA adds two times the first row to the second row; hence LL̂A = L(L̂A) = A. Why? Two transformations that always undo each other are inverses of each other.
Exercise

Compute [product lost in transcription] and reason why the result should be intuitively true.
Similarly, L̂L = I. (Notice that when we use I without indicating its size, the size is determined by the context.)

Theorem

If

L̂ = ( I_k  0     0            L = ( I_k  0     0
      0    1     0 )              ( 0    1     0
      0   -l21   I ),             ( 0    l21   I ),

then L is the inverse of L̂: LL̂ = L̂L = I.

Proof:

L̂L = ( I_k  0     0 ) ( I_k  0     0 )   ( I_k  0              0 )
     ( 0    1     0 ) ( 0    1     0 ) = ( 0    1              0 ) = I,
     ( 0   -l21   I ) ( 0    l21   I )   ( 0   -l21 + I l21    I )

since -l21 + I l21 = 0. The proof that LL̂ = I is similar.
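The theorem can be checked numerically. A sketch (ours) for the k = 0 case with l21 = (2, 3), the multipliers from the running example:

```python
def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A Gauss transform with k = 0 and l21 = (2, 3), and its claimed inverse:
Lhat = [[1.0, 0.0, 0.0], [-2.0, 1.0, 0.0], [-3.0, 0.0, 1.0]]
L    = [[1.0, 0.0, 0.0], [ 2.0, 1.0, 0.0], [ 3.0, 0.0, 1.0]]
print(matmul(L, Lhat) == eye(3), matmul(Lhat, L) == eye(3))  # True True
```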
Exercise

[matrix displays lost in transcription]
Theorem

Let L̂^(0), ..., L̂^(n-1) be the sequence of Gauss transforms that transforms an n × n matrix A into an upper triangular matrix:

L̂^(n-1) ... L̂^(0) A = U.

Then

A = L^(0) ... L^(n-2) L^(n-1) U,

where L^(j) = (L̂^(j))^(-1), the inverse of L̂^(j).
Proof

If L̂^(n-1) L̂^(n-2) ... L̂^(0) A = U, then

A = L^(0) ... L^(n-2) L^(n-1) L̂^(n-1) L̂^(n-2) ... L̂^(0) A,

since the adjacent pairs L^(j) L̂^(j) = I collapse from the middle outward. Hence

A = L^(0) ... L^(n-2) L^(n-1) (L̂^(n-1) ... L̂^(0) A) = L^(0) ... L^(n-2) L^(n-1) U.
Lemma

Let L̂^(0), ..., L̂^(n-1) be the sequence of Gauss transforms that transforms a matrix A into an upper triangular matrix U:

L̂^(n-1) ... L̂^(0) A = U,

and let L^(j) = (L̂^(j))^(-1). Then

L̄^(k) = L^(0) ... L^(k-1) L^(k)

has the structure

L̄^(k) = ( L_TL^(k)  0
           L_BL^(k)  I ),

where L_TL^(k) is a (k + 1) × (k + 1) unit lower triangular matrix.
Proof

Proof by induction on k.

Base case: k = 0. Then

L̄^(0) = L^(0) = ( 1        0
                  l21^(0)  I )

meets the desired criteria, since 1 is a trivial unit lower triangular matrix.
Inductive step: Assume L̄^(k) meets the indicated criteria. We will show that then L̄^(k+1) does too. Refine the partition

L̄^(k) = ( L_TL^(k)   0  0 )            ( I_{k+1}  0          0 )
         ( l10^(k)T   1  0 )   L^(k+1) = ( 0        1          0 )
         ( L20^(k)    0  I ),            ( 0        l21^(k+1)  I ),

where L_TL^(k) is unit lower triangular of dimension (k + 1) × (k + 1). Then

L̄^(k+1) = L̄^(k) L^(k+1)
         = ( L_TL^(k)   0  0 ) ( I_{k+1}  0          0 )
           ( l10^(k)T   1  0 ) ( 0        1          0 )
           ( L20^(k)    0  I ) ( 0        l21^(k+1)  I )
         = ( L_TL^(k)   0          0 )
           ( l10^(k)T   1          0 )
           ( L20^(k)    l21^(k+1)  I )
         = ( L_TL^(k+1)  0 )
           ( L_BL^(k+1)  I ),

which meets the desired criteria since

L_TL^(k+1) = ( L_TL^(k)   0 )
             ( l10^(k)T   1 )

is unit lower triangular of dimension (k + 2) × (k + 2).
By the Principle of Mathematical Induction, the result holds for L̄^(j), 0 ≤ j < n.
Corollary

Under the conditions of the Lemma, L = L̄^(n-1) = L^(0) ... L^(n-1) is a unit lower triangular matrix, the strictly lower triangular part of which is the sum of all the strictly lower triangular parts of L^(0), ..., L^(n-1): the jth column of L below the diagonal equals l21^(j), so that

L = ( 1
      l21^(0)  1
               l21^(1)  ...
               ...      l21^(n-2)  1 ).

(Note that l21^(n-1) is a vector of length zero, so that the last step involving L^(n-1) is really a no-op.)
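The corollary can be checked on the running example (a check we add here): multiplying the inverse Gauss transforms with multipliers 2, 3, and 1.6 simply drops the multipliers into place below the diagonal.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Inverses of the Gauss transforms from the example:
L0 = [[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [3.0, 0.0, 1.0]]
L1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.6, 1.0]]
L = matmul(L0, L1)
print(L)  # [[1, 0, 0], [2, 1, 0], [3, 1.6, 1]]: multipliers drop into place
```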
Example

A consequence of this corollary is that the equality verified in a previous Exercise is not a coincidence: for these matrices, all you have to do to find the strictly lower triangular part of the right-hand side is to move the nonzeroes below the diagonal in the matrices on the left-hand side to the corresponding elements in the matrix on the right-hand side of the equality sign.
Exercise

The order in which the Gauss transforms appear is important. In particular, verify that [matrix displays lost in transcription].
Theorem

Let L̂^(0), ..., L̂^(n-1) be the sequence of Gauss transforms that transforms an n × n matrix A into an upper triangular matrix U:

L̂^(n-1) ... L̂^(0) A = U,

and let L^(j) = (L̂^(j))^(-1). Then A = LU, where L = L^(0) ... L^(n-1) is a unit lower triangular matrix that can be easily obtained from L̂^(0), ..., L̂^(n-1) by the observation summarized in the last Corollary.

Note

Notice that the Theorem does not say that for every square matrix Gaussian elimination is well-defined. It merely says that if Gaussian elimination as presented thus far completes, then there is a unit lower triangular matrix L and an upper triangular matrix U such that A = LU.
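Finally, for the running example we can confirm A = LU (a check we add here): L collects the multipliers 2, 3, and 1.6, and U is the upper triangular result of elimination.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

L = [[1.0, 0.0, 0.0],
     [2.0, 1.0, 0.0],
     [3.0, 1.6, 1.0]]
U = [[2.0, 4.0, -2.0],
     [0.0, -10.0, 10.0],
     [0.0, 0.0, -8.0]]
A = [[2.0, 4.0, -2.0],
     [4.0, -2.0, 6.0],
     [6.0, -4.0, 2.0]]
print(matmul(L, U))  # reproduces A
```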
Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We
More informationProperties of the Determinant Function
Properties of the Determinant Function MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Overview Today s discussion will illuminate some of the properties of the determinant:
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 12: Gaussian Elimination and LU Factorization Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 10 Gaussian Elimination
More informationMatrix Algebra. Matrix Algebra. Chapter 8 - S&B
Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number
More informationA Review of Matrix Analysis
Matrix Notation Part Matrix Operations Matrices are simply rectangular arrays of quantities Each quantity in the array is called an element of the matrix and an element can be either a numerical value
More informationChapter 7. Tridiagonal linear systems. Solving tridiagonal systems of equations. and subdiagonal. E.g. a 21 a 22 a A =
Chapter 7 Tridiagonal linear systems The solution of linear systems of equations is one of the most important areas of computational mathematics. A complete treatment is impossible here but we will discuss
More information10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections )
c Dr. Igor Zelenko, Fall 2017 1 10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections 7.2-7.4) 1. When each of the functions F 1, F 2,..., F n in right-hand side
More informationII. Determinant Functions
Supplemental Materials for EE203001 Students II Determinant Functions Chung-Chin Lu Department of Electrical Engineering National Tsing Hua University May 22, 2003 1 Three Axioms for a Determinant Function
More informationMODULE 7. where A is an m n real (or complex) matrix. 2) Let K(t, s) be a function of two variables which is continuous on the square [0, 1] [0, 1].
Topics: Linear operators MODULE 7 We are going to discuss functions = mappings = transformations = operators from one vector space V 1 into another vector space V 2. However, we shall restrict our sights
More informationLecture Notes in Mathematics. Arkansas Tech University Department of Mathematics. The Basics of Linear Algebra
Lecture Notes in Mathematics Arkansas Tech University Department of Mathematics The Basics of Linear Algebra Marcel B. Finan c All Rights Reserved Last Updated November 30, 2015 2 Preface Linear algebra
More informationLinear Algebra II. 2 Matrices. Notes 2 21st October Matrix algebra
MTH6140 Linear Algebra II Notes 2 21st October 2010 2 Matrices You have certainly seen matrices before; indeed, we met some in the first chapter of the notes Here we revise matrix algebra, consider row
More informationScientific Computing
Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting
More informationSimultaneous Linear Equations
Simultaneous Linear Equations PHYSICAL PROBLEMS Truss Problem Pressure vessel problem a a b c b Polynomial Regression We are to fit the data to the polynomial regression model Simultaneous Linear Equations
More informationExtra Problems: Chapter 1
MA131 (Section 750002): Prepared by Asst.Prof.Dr.Archara Pacheenburawana 1 Extra Problems: Chapter 1 1. In each of the following answer true if the statement is always true and false otherwise in the space
More informationSection 9.2: Matrices. Definition: A matrix A consists of a rectangular array of numbers, or elements, arranged in m rows and n columns.
Section 9.2: Matrices Definition: A matrix A consists of a rectangular array of numbers, or elements, arranged in m rows and n columns. That is, a 11 a 12 a 1n a 21 a 22 a 2n A =...... a m1 a m2 a mn A
More informationSection Vectors
Section 1.2 - Vectors Maggie Myers Robert A. van de Geijn The University of Texas at Austin Practical Linear Algebra Fall 2009 http://z.cs.utexas.edu/wiki/pla.wiki/ 1 Vectors We will call a one-dimensional
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,
More information. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in
Vectors and Matrices Continued Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations a x + a 2 x 2 + a n x n = b a 2
More informationDerivation of the Kalman Filter
Derivation of the Kalman Filter Kai Borre Danish GPS Center, Denmark Block Matrix Identities The key formulas give the inverse of a 2 by 2 block matrix, assuming T is invertible: T U 1 L M. (1) V W N P
More information(1) for all (2) for all and all
8. Linear mappings and matrices A mapping f from IR n to IR m is called linear if it fulfills the following two properties: (1) for all (2) for all and all Mappings of this sort appear frequently in the
More informationSection 1.1: Systems of Linear Equations
Section 1.1: Systems of Linear Equations Two Linear Equations in Two Unknowns Recall that the equation of a line in 2D can be written in standard form: a 1 x 1 + a 2 x 2 = b. Definition. A 2 2 system of
More information3. Replace any row by the sum of that row and a constant multiple of any other row.
Section. Solution of Linear Systems by Gauss-Jordan Method A matrix is an ordered rectangular array of numbers, letters, symbols or algebraic expressions. A matrix with m rows and n columns has size or
More informationLINEAR SYSTEMS, MATRICES, AND VECTORS
ELEMENTARY LINEAR ALGEBRA WORKBOOK CREATED BY SHANNON MARTIN MYERS LINEAR SYSTEMS, MATRICES, AND VECTORS Now that I ve been teaching Linear Algebra for a few years, I thought it would be great to integrate
More information7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix
EE507 - Computational Techniques for EE 7. LU factorization Jitkomut Songsiri factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization
More information8.3 Householder QR factorization
83 ouseholder QR factorization 23 83 ouseholder QR factorization A fundamental problem to avoid in numerical codes is the situation where one starts with large values and one ends up with small values
More informationMIDTERM 1 - SOLUTIONS
MIDTERM - SOLUTIONS MATH 254 - SUMMER 2002 - KUNIYUKI CHAPTERS, 2, GRADED OUT OF 75 POINTS 2 50 POINTS TOTAL ) Use either Gaussian elimination with back-substitution or Gauss-Jordan elimination to solve
More information1300 Linear Algebra and Vector Geometry Week 2: Jan , Gauss-Jordan, homogeneous matrices, intro matrix arithmetic
1300 Linear Algebra and Vector Geometry Week 2: Jan 14 18 1.2, 1.3... Gauss-Jordan, homogeneous matrices, intro matrix arithmetic R. Craigen Office: MH 523 Email: craigenr@umanitoba.ca Winter 2019 What
More information