Solution of Linear Systems


1 Solution of Linear Systems
Parallel and Distributed Computing
Department of Computer Science and Engineering (DEI), Instituto Superior Técnico
May 12, 2016

2 Outline
Solving Linear Systems
Direct Methods: solution is sought directly, at once
Gaussian Elimination
LU Factorization
Pivoting

3 Solving Linear Systems
Probably the single most used procedure in the world.

4 Solving Linear Systems
Probably the single most used procedure in the world.
Linear systems are the model for many modern-day problems: in mathematics, in physics, in economics, and in pretty much any field.

5 Solving Linear Systems
Probably the single most used procedure in the world.
Linear systems are the model for many modern-day problems: in mathematics, in physics, in economics, and in pretty much any field.
What about nonlinear systems? Is that not a more general model?
Yes, but how do we solve nonlinear systems? We linearize and iterate until we have a solution. At each iteration we solve a linear system.

6 Solving Linear Systems
Probably the single most used procedure in the world.
Linear systems are the model for many modern-day problems: in mathematics, in physics, in economics, and in pretty much any field.
What about nonlinear systems? Is that not a more general model?
Yes, but how do we solve nonlinear systems? We linearize and iterate until we have a solution. At each iteration we solve a linear system.
Also, how do we solve differential equations? We discretize in time and solve for each timepoint. At each timepoint it may be a nonlinear system, so we linearize it. In the end we still solve a linear system, actually many of them.

7 Direct Methods
Gaussian Elimination: reduce Ax = b to an upper triangular system Tx = c (Forward Elimination).
Use Back Substitution to solve Tx = c.

8 Direct Methods
Gaussian Elimination: reduce Ax = b to an upper triangular system Tx = c (Forward Elimination).
Use Back Substitution to solve Tx = c.
[ a_11 a_12 a_13 ... a_1n ] [ x_1 ]   [ b_1 ]
[ a_21 a_22 a_23 ... a_2n ] [ x_2 ]   [ b_2 ]
[ a_31 a_32 a_33 ... a_3n ] [ x_3 ] = [ b_3 ]
[  :    :    :        :   ] [  :  ]   [  :  ]
[ a_m1 a_m2 a_m3 ... a_mn ] [ x_n ]   [ b_n ]

9 Direct Methods
Gaussian Elimination: reduce Ax = b to an upper triangular system Tx = c (Forward Elimination).
Use Back Substitution to solve Tx = c.
[ t_11 t_12 t_13 ... t_1n ] [ x_1 ]   [ c_1 ]
[  0   t_22 t_23 ... t_2n ] [ x_2 ]   [ c_2 ]
[  0    0   t_33 ... t_3n ] [ x_3 ] = [ c_3 ]
[  :    :    :        :   ] [  :  ]   [  :  ]
[  0    0    0   ... t_mn ] [ x_n ]   [ c_n ]

10 Direct Methods
Gaussian Elimination: reduce Ax = b to an upper triangular system Tx = c (Forward Elimination).
Use Back Substitution to solve Tx = c.
[ t_11 t_12 t_13 ... t_1n ] [ x_1 ]   [ c_1 ]
[  0   t_22 t_23 ... t_2n ] [ x_2 ]   [ c_2 ]
[  0    0   t_33 ... t_3n ] [ x_3 ] = [ c_3 ]
[  :    :    :        :   ] [  :  ]   [  :  ]
[  0    0    0   ... t_mn ] [ x_n ]   [ c_n ]
Back Substitution
1. one element of x can be immediately computed
2. use this value to simplify the system, revealing another element that can be immediately computed
3. repeat

11 Forward Elimination, recall steps
 1x_1 + 1x_2 - 1x_3 + 4x_4 =   8
 1x_1 - 1x_2 - 4x_3 + 5x_4 =  13
-1x_1 + 1x_2 + 6x_3 - 8x_4 = -13
-1x_1 + 1x_2 + 2x_3        =  -9

12 Forward Elimination, recall steps
 1x_1 + 1x_2 - 1x_3 + 4x_4 =   8
 1x_1 - 1x_2 - 4x_3 + 5x_4 =  13
-1x_1 + 1x_2 + 6x_3 - 8x_4 = -13
-1x_1 + 1x_2 + 2x_3        =  -9
Pivot: p_21 = -a_21/a_11; multiply the 1st row by p_21 and add it to the 2nd row

13 Forward Elimination, recall steps
 1x_1 + 1x_2 - 1x_3 + 4x_4 =   8
 1x_1 - 1x_2 - 4x_3 + 5x_4 =  13
-1x_1 + 1x_2 + 6x_3 - 8x_4 = -13
-1x_1 + 1x_2 + 2x_3        =  -9
Pivot: p_21 = -a_21/a_11; multiply the 1st row by p_21 and add it to the 2nd row
 1x_1 + 1x_2 - 1x_3 + 4x_4 =   8
       -2x_2 - 3x_3 + 1x_4 =   5
-1x_1 + 1x_2 + 6x_3 - 8x_4 = -13
-1x_1 + 1x_2 + 2x_3        =  -9

14 Forward Elimination, recall steps
 1x_1 + 1x_2 - 1x_3 + 4x_4 =   8
 1x_1 - 1x_2 - 4x_3 + 5x_4 =  13
-1x_1 + 1x_2 + 6x_3 - 8x_4 = -13
-1x_1 + 1x_2 + 2x_3        =  -9
Also for p_31 = -a_31/a_11 and p_41 = -a_41/a_11:
 1x_1 + 1x_2 - 1x_3 + 4x_4 =  8
       -2x_2 - 3x_3 + 1x_4 =  5
        2x_2 + 5x_3 - 4x_4 = -5
        2x_2 + 1x_3 + 4x_4 = -1

15 Forward Elimination, recall steps
 1x_1 + 1x_2 - 1x_3 + 4x_4 =  8
       -2x_2 - 3x_3 + 1x_4 =  5
        2x_2 + 5x_3 - 4x_4 = -5
        2x_2 + 1x_3 + 4x_4 = -1
Pivot: p_32 = -a_32/a_22
 1x_1 + 1x_2 - 1x_3 + 4x_4 =  8
       -2x_2 - 3x_3 + 1x_4 =  5
               2x_3 - 3x_4 =  0
        2x_2 + 1x_3 + 4x_4 = -1
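The elimination steps above can also be written as a short program. Below is a minimal serial C sketch (not part of the original slides) that applies forward elimination, without pivoting, to the augmented system [A | b] of the example; running it yields the triangular system used on the next slides:

#include <stdio.h>

#define N 4

/* Forward elimination on the augmented system [A | b]: for each column k,
   add p_ik = -a[i][k]/a[k][k] times row k to row i, zeroing the entries
   below the diagonal. No pivoting, so a[k][k] is assumed to be nonzero. */
void forward_elimination(double a[N][N], double b[N])
{
    for (int k = 0; k < N - 1; k++) {
        for (int i = k + 1; i < N; i++) {
            double p = -a[i][k] / a[k][k];   /* multiplier for row i */
            for (int j = k; j < N; j++)
                a[i][j] += p * a[k][j];
            b[i] += p * b[k];
        }
    }
}

int main(void)
{
    /* the 4x4 system from the example */
    double a[N][N] = { {  1,  1, -1,  4 },
                       {  1, -1, -4,  5 },
                       { -1,  1,  6, -8 },
                       { -1,  1,  2,  0 } };
    double b[N] = { 8, 13, -13, -9 };

    forward_elimination(a, b);

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%6.2f ", a[i][j]);
        printf("| %6.2f\n", b[i]);
    }
    return 0;
}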

16 Back Substitution
 1x_1 + 1x_2 - 1x_3 + 4x_4 = 8
       -2x_2 - 3x_3 + 1x_4 = 5
               2x_3 - 3x_4 = 0
                      2x_4 = 4

17 Back Substitution
 1x_1 + 1x_2 - 1x_3 + 4x_4 = 8
       -2x_2 - 3x_3 + 1x_4 = 5
               2x_3 - 3x_4 = 0
                      2x_4 = 4
x_4 = 2:
 1x_1 + 1x_2 - 1x_3 = 0
       -2x_2 - 3x_3 = 3
               2x_3 = 6
               2x_4 = 4

18 Back Substitution
 1x_1 + 1x_2 - 1x_3 = 0
       -2x_2 - 3x_3 = 3
               2x_3 = 6
               2x_4 = 4

19 Back Substitution
 1x_1 + 1x_2 - 1x_3 = 0
       -2x_2 - 3x_3 = 3
               2x_3 = 6
               2x_4 = 4
x_4 = 2, x_3 = 3:
 1x_1 + 1x_2 = 3
       -2x_2 = 12
        2x_3 = 6
        2x_4 = 4

20 Back Substitution
 1x_1 + 1x_2 = 3
       -2x_2 = 12
        2x_3 = 6
        2x_4 = 4

21 Back Substitution
x_4 = 2, x_3 = 3:
 1x_1 + 1x_2 = 3
       -2x_2 = 12
        2x_3 = 6
        2x_4 = 4
x_2 = -6:
 1x_1 = 9
-2x_2 = 12
 2x_3 = 6
 2x_4 = 4

22 Back Substitution
x_4 = 2, x_3 = 3:
 1x_1 + 1x_2 = 3
       -2x_2 = 12
        2x_3 = 6
        2x_4 = 4
x_2 = -6:
 1x_1 = 9
-2x_2 = 12
 2x_3 = 6
 2x_4 = 4
x_4 = 2, x_3 = 3, x_2 = -6, x_1 = 9

23 Pseudo-code for Back Substitution
for(i = n-1; i >= 0; i--){
    x[i] = b[i] / a[i][i];
    for(j = 0; j < i; j++)
        b[j] = b[j] - x[i] * a[j][i];
}
Complexity: Θ(n^2)
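As an illustration (not from the slides), the same loop can be run directly on the triangular system obtained above; this small self-contained C program prints the solution x = (9, -6, 3, 2):

#include <stdio.h>

#define N 4

int main(void)
{
    /* upper triangular system T x = c produced by forward elimination */
    double a[N][N] = { { 1,  1, -1,  4 },
                       { 0, -2, -3,  1 },
                       { 0,  0,  2, -3 },
                       { 0,  0,  0,  2 } };
    double b[N] = { 8, 5, 0, 4 };
    double x[N];

    /* back substitution, exactly as in the pseudo-code */
    for (int i = N - 1; i >= 0; i--) {
        x[i] = b[i] / a[i][i];
        for (int j = 0; j < i; j++)
            b[j] = b[j] - x[i] * a[j][i];
    }

    for (int i = 0; i < N; i++)
        printf("x_%d = %g\n", i + 1, x[i]);
    return 0;
}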

24 Pseudo-code for Back Substitution
for(i = n-1; i >= 0; i--){
    x[i] = b[i] / a[i][i];
    for(j = 0; j < i; j++)
        b[j] = b[j] - x[i] * a[j][i];
}
Complexity: Θ(n^2)
Parallelization?

25 Pseudo-code for Back Substitution
for(i = n-1; i >= 0; i--){
    x[i] = b[i] / a[i][i];
    for(j = 0; j < i; j++)
        b[j] = b[j] - x[i] * a[j][i];
}
Complexity: Θ(n^2)
Parallelization:
cannot execute the outer loop in parallel
can execute the inner loop in parallel
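In shared memory, one way to express this (a sketch, not part of the slides) is an OpenMP directive on the inner loop only; the outer loop stays sequential because x[i] depends on the updates performed for all later rows. The distributed-memory row-oriented version is discussed next.

/* Back substitution with only the inner update loop parallelized.
   Compile with -fopenmp; without it the pragma is simply ignored. */
void back_substitution_omp(int n, double a[n][n], double b[n], double x[n])
{
    for (int i = n - 1; i >= 0; i--) {
        x[i] = b[i] / a[i][i];
        #pragma omp parallel for
        for (int j = 0; j < i; j++)       /* independent updates of b[j] */
            b[j] = b[j] - x[i] * a[j][i];
    }
}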

26 Row-oriented Algorithm
for(i = n-1; i >= 0; i--){
    x[i] = b[i] / a[i][i];
    for(j = 0; j < i; j++)
        b[j] = b[j] - x[i] * a[j][i];
}
associate a primitive task with each row of A and the corresponding elements of x and b
during iteration i, the task associated with row j computes the new value of b_j
task i must compute x_i and broadcast its value
agglomerate using a rowwise interleaved striped decomposition
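A hedged MPI sketch of this row-oriented scheme (illustrative only; the function name, local storage layout, and indexing details are my assumptions, not prescribed by the slides). With rowwise interleaved striping, row j is owned by process j mod nprocs; at iteration i the owner computes x[i] and broadcasts it, and every process updates the b entries of its own remaining rows:

#include <mpi.h>

/* a_loc and b_loc hold only this process's rows, stored in the order
   j = rank, rank + nprocs, rank + 2*nprocs, ... (rowwise interleaved).
   x has length n on every process and is filled in by the broadcasts. */
void back_substitution_mpi(int n, int rank, int nprocs,
                           double *a_loc, double *b_loc, double *x)
{
    for (int i = n - 1; i >= 0; i--) {
        int owner = i % nprocs;
        if (rank == owner) {
            int li = i / nprocs;                  /* local index of row i */
            x[i] = b_loc[li] / a_loc[li * n + i];
        }
        /* the task that owns row i broadcasts x[i] to all processes */
        MPI_Bcast(&x[i], 1, MPI_DOUBLE, owner, MPI_COMM_WORLD);

        /* each process updates b for its own rows j < i */
        for (int lj = 0; lj * nprocs + rank < i; lj++)
            b_loc[lj] -= x[i] * a_loc[lj * n + i];
    }
}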

27 Complexity Analysis
for(i = n-1; i >= 0; i--){
    x[i] = b[i] / a[i][i];
    for(j = 0; j < i; j++)
        b[j] = b[j] - x[i] * a[j][i];
}
Computation complexity: each process performs about n/(2p) iterations of loop j, on average; there are n-1 iterations of the outer loop in all. Overall computational complexity: Θ(n^2/p).
Communication complexity: one broadcast per iteration, taking Θ(log p) time; n-1 iterations in all. Overall communication complexity: Θ(n log p).

28 Isoefficiency Analysis
Isoefficiency analysis: T(n,1) >= C T_0(n,p)   (T(n,1): sequential time; T_0(n,p): parallel overhead)
Sequential time complexity: T(n,1) = O(n^2)
Parallel overhead is dominated by the broadcasts, O(n log p) per process, so T_0(n,p) = p O(n log p)
n^2 >= C p n log p  =>  n >= C p log p
Scalability function: M(f(p))/p, with M(n) = n^2:
M(C p log p)/p = C^2 p log^2 p
Poor scalability...

29 LU Factorization
Useful if solving for multiple right-hand sides (same matrix): Ax = b_1, Ax = b_2, ...
Compute the LU factorization A = LU, where L is unit lower triangular and U is upper triangular.
The solution is obtained in two steps:
Ly = b: lower triangular system, solved by forward substitution to obtain the vector y
Ux = y: upper triangular system, solved by back substitution to obtain the solution x of the original system
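A small C sketch of the two triangular solves (an illustration under the stated assumptions: L and U are stored in separate dense arrays and L has a unit diagonal; the slides do not prescribe this interface):

/* Solve A x = b given A = L U:
   1) forward substitution  L y = b   (L unit lower triangular)
   2) back substitution     U x = y   (U upper triangular)       */
void lu_solve(int n, const double L[n][n], const double U[n][n],
              const double b[n], double x[n])
{
    double y[n];                              /* C99 variable-length array */

    for (int i = 0; i < n; i++) {             /* forward substitution */
        y[i] = b[i];
        for (int j = 0; j < i; j++)
            y[i] -= L[i][j] * y[j];           /* diagonal of L is 1 */
    }
    for (int i = n - 1; i >= 0; i--) {        /* back substitution */
        x[i] = y[i];
        for (int j = i + 1; j < n; j++)
            x[i] -= U[i][j] * x[j];
        x[i] /= U[i][i];
    }
}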

30 Factorization by Gaussian Elimination
LU factorization can be computed by Gaussian elimination as follows, where U overwrites A:
for k = 0 to n-2                      {loop over columns}
    for i = k+1 to n-1                {compute multipliers}
        l_ik = a_ik / a_kk            {for current column}
    end
    for j = k+1 to n-1                {apply transformation to}
        for i = k+1 to n-1            {remaining submatrix}
            a_ij = a_ij - l_ik * a_kj
        end
    end
end
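Written as compilable C, with the common convention (my assumption, not stated on the slide) that the multipliers l_ik are stored in the zeroed lower triangle of a, so that a holds both L (strict lower part, unit diagonal implied) and U (upper part) on return:

/* In-place LU factorization by Gaussian elimination, no pivoting. */
void lu_factor(int n, double a[n][n])
{
    for (int k = 0; k < n - 1; k++) {             /* loop over columns */
        for (int i = k + 1; i < n; i++)
            a[i][k] = a[i][k] / a[k][k];          /* multipliers l_ik  */

        for (int j = k + 1; j < n; j++)           /* update remaining  */
            for (int i = k + 1; i < n; i++)       /* submatrix         */
                a[i][j] = a[i][j] - a[i][k] * a[k][j];
    }
}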

31 Factorization by Gaussian Elimination
In general, row interchanges (pivoting) may be required to ensure the existence of the LU factorization and the numerical stability of the Gaussian elimination algorithm, but for simplicity we temporarily ignore this issue.
Gaussian elimination requires about n^3/3 paired additions and multiplications, so we model the serial time as T_1 = t_c n^3/3, where t_c is the time required for a multiply-add operation.
About n^2/2 divisions are also required, but we ignore this lower-order term.

32 Parallel Algorithm
Partition: for i, j = 1, ..., n, fine-grain task (i, j) stores a_ij and computes and stores u_ij if i <= j, or l_ij if i > j, yielding a 2-D array of n^2 fine-grain tasks.
Communication:
Broadcast entries of A vertically to the tasks below.
Broadcast entries of L horizontally to the tasks to the right.

33 Fine-Grain Tasks and Communication (figure)

34 Fine-Grain Parallel Algorithm
for k = 1 to min(i, j) - 1
    recv broadcast of a_kj from task (k, j)                    {vert bcast}
    recv broadcast of l_ik from task (i, k)                    {horiz bcast}
    a_ij = a_ij - l_ik * a_kj                                  {update entry}
end
if i <= j then
    send broadcast of a_ij to tasks (k, j), k = i+1, ..., n    {vert bcast}
else
    recv broadcast of a_jj from task (j, j)                    {vert bcast}
    l_ij = a_ij / a_jj                                         {multiplier}
    send broadcast of l_ij to tasks (i, k), k = j+1, ..., n    {horiz bcast}
end

35 Agglomeration
With an n x n array of fine-grain tasks, natural strategies are:
2-D: combine a k x k subarray of fine-grain tasks to form each coarse-grain task, yielding (n/k)^2 coarse-grain tasks
1-D column: combine the n fine-grain tasks in each column into a coarse-grain task, yielding n coarse-grain tasks
1-D row: combine the n fine-grain tasks in each row into a coarse-grain task, yielding n coarse-grain tasks

36 Mapping
2-D: assign (n/k)^2/p coarse-grain tasks to each of p processes using any desired mapping in each dimension, treating the target network as a 2-D mesh
1-D: assign n/p coarse-grain tasks to each of p processes using any desired mapping, treating the target network as a 1-D mesh
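As a concrete illustration of the 2-D mapping (one of several valid choices, not prescribed by the slides), a small helper that returns the rank owning matrix entry (i, j) when the p processes are viewed as a q x q grid, q = √p, with cyclic assignment in each dimension; cyclic assignment keeps the load balanced as the active submatrix shrinks:

/* Owner of entry (i, j) under a 2-D cyclic mapping onto a q x q
   process grid (p = q * q processes, ranks numbered row-major). */
int owner_2d_cyclic(int i, int j, int q)
{
    int prow = i % q;            /* process row    */
    int pcol = j % q;            /* process column */
    return prow * q + pcol;      /* row-major rank */
}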

37 Scalability for 2-D Agglomeration
Updating by each process at step k requires about (n-k)^2/p operations.
Summing over the n-1 steps:
T_comp ≈ Σ_{k=1..n-1} t_c (n-k)^2/p ≈ t_c n^3/(3p)

38 Scalability for 2-D Agglomeration
Similarly, the amount of data broadcast at step k along each process row and column is about (n-k)/√p, so on a 2-D mesh
T_comm ≈ Σ_{k=1..n-1} 2 (t_s + t_w (n-k)/√p) ≈ 2 t_s n + t_w n^2/√p
where we have allowed for overlap of broadcasts for successive steps.
Total execution time is
T_p ≈ t_c n^3/(3p) + 2 t_s n + t_w n^2/√p
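To make the model concrete, a tiny helper that evaluates T_p for given machine parameters (the parameter values in the example call are made up purely for illustration):

#include <math.h>
#include <stdio.h>

/* Predicted execution time of the 2-D algorithm:
   T_p = t_c n^3/(3p) + 2 t_s n + t_w n^2/sqrt(p)                */
double predicted_time(double n, double p,
                      double tc, double ts, double tw)
{
    return tc * n * n * n / (3.0 * p)
         + 2.0 * ts * n
         + tw * n * n / sqrt(p);
}

int main(void)
{
    /* illustrative, made-up parameters: 1 ns per multiply-add,
       10 us message startup, 10 ns per word transferred         */
    printf("%g s\n", predicted_time(4096, 64, 1e-9, 1e-5, 1e-8));
    return 0;
}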

39 Isoefficiency Analysis
Isoefficiency analysis: T(n,1) >= C T_0(n,p)   (T(n,1): sequential time; T_0(n,p): parallel overhead)
Sequential time complexity: T(n,1) = O(n^3)
Parallel overhead is dominated by the broadcasts: O(2 t_s n + t_w n^2/√p) = O(n^2/√p), so T_0(n,p) = p O(n^2/√p)
n^3 >= C √p n^2  =>  n >= C √p
Scalability function: M(f(p))/p, with M(n) = n^2:
M(C √p)/p = C^2 p / p = C^2
Perfect scalability!

40 Pivoting
Pivoting is the action of exchanging matrix rows (or columns) so as to use a different pivot element.
The main reason is to choose a pivot that creates fewer fill-ins during elimination (a fill-in is a previously non-existent, i.e. zero, element that the elimination makes nonzero).
Other reasons are numerical.
Partial pivoting complicates the parallel implementation of Gaussian elimination and significantly affects potential performance:
With the 2-D algorithm, the pivot search is parallel but requires communication within a process column and inhibits overlapping of successive steps.
With the 1-D column algorithm, the pivot search requires no communication but is purely serial.
Once the pivot is found, the index of the pivot row must be communicated to the other processes, and rows must be explicitly or implicitly interchanged in each process.
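For reference, a serial sketch of how partial pivoting changes the factorization loop (illustrative C, not the parallel variants discussed above): before the multipliers for column k are computed, the row with the largest |a_ik|, i >= k, is swapped into the pivot position and the permutation is recorded:

#include <math.h>

/* LU factorization with partial pivoting (serial sketch).
   piv[k] records the row swapped into position k at step k.     */
void lu_factor_pivot(int n, double a[n][n], int piv[n])
{
    for (int k = 0; k < n - 1; k++) {
        int m = k;                               /* search pivot column */
        for (int i = k + 1; i < n; i++)
            if (fabs(a[i][k]) > fabs(a[m][k]))
                m = i;
        piv[k] = m;
        if (m != k)                              /* swap rows k and m */
            for (int j = 0; j < n; j++) {
                double t = a[k][j];
                a[k][j] = a[m][j];
                a[m][j] = t;
            }

        for (int i = k + 1; i < n; i++)          /* eliminate as before */
            a[i][k] /= a[k][k];
        for (int j = k + 1; j < n; j++)
            for (int i = k + 1; i < n; i++)
                a[i][j] -= a[i][k] * a[k][j];
    }
    piv[n - 1] = n - 1;
}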

41 Review
Solving Linear Systems
Direct Methods: solution is sought directly, at once
Gaussian Elimination
LU Factorization
Pivoting

42 Next Class
Efficient parallelization of numerical algorithms
Finite Difference discretization
Solution of Ordinary Differential Equations
