Solving Linear Systems of Equations


1 Solving Linear Systems of Equations

Many practical problems reduce to solving a linear system of equations of the form Ax = b. This chapter studies the computational issues of solving Ax = b both directly and iteratively.

Topics:
A linear system of equations
Direct solution of Ax = b
LU-decomposition by Gaussian elimination
Gaussian elimination with partial pivoting
Crout algorithm for A = LU
Cholesky algorithm for A = LL^t (A positive definite)
Vector and matrix norms
Matrix condition number, Cond(A) = ||A|| ||A^{-1}||
Sensitivity and error bounds
Iterative solutions: Jacobi method, Gauss-Seidel method, other methods
Applications

2 Overdetermined, Underdetermined, and Homogeneous Systems

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m,   i.e., Ax = b

Definition: A linear system is said to be overdetermined if there are more equations than unknowns (m > n), underdetermined if m < n, and homogeneous if b_i = 0 for 1 ≤ i ≤ m.

Examples (three equations, two unknowns):
(A) x + y = 1,  x − y = 3,  x + 2y = 2
(B) x + y = 3,  x − y = 1,  2x + y = 5
(C) x + y = 2,  2x + 2y = 4,  −x − y = −2
(A) has no solution, (B) has a unique solution, (C) has infinitely many solutions.

(D) x + 2y + z = 1,  2x + 4y + 2z = 3
(E) x + 2y + z = 5,  2x − y + z = 3
(D) has no solution, (E) has infinitely many solutions.

3 Some Special Matrices

A = [a_ij] ∈ R^{n×n} is
diagonal if a_ij = 0 for all i ≠ j
lower triangular if a_ij = 0 for j > i
unit lower triangular if A is lower triangular with a_ii = 1
lower Hessenberg if a_ij = 0 for j > i + 1
a band matrix with bandwidth 2k + 1 if a_ij = 0 for |i − j| > k

A band matrix with bandwidth 1 is diagonal, a band matrix with bandwidth 3 is tridiagonal, and a band matrix with bandwidth 5 is pentadiagonal. A matrix that is both lower and upper Hessenberg is tridiagonal.

4 A Direct Solution of Linear Systems

A linear system:
2x + y + z = 5
4x − 6y = −2
−2x + 7y + 2z = 9

Matrix representation Ax = b:
[ 2  1  1 ] [x]   [ 5]
[ 4 −6  0 ] [y] = [−2]
[−2  7  2 ] [z]   [ 9]

Solution using MATLAB:
A = [2, 1, 1; 4, -6, 0; -2, 7, 2];
b = [5; -2; 9];
x = A\b        % x = [1; 1; 2]

5 Elementary Row Operations

(1) Interchange two rows: A_r ↔ A_s
(2) Multiply a row by a nonzero real number: A_r ← α A_r
(3) Replace a row by its sum with a multiple of another row: A_s ← α A_r + A_s

Each operation corresponds to left-multiplication by an elementary matrix E: applying the operation to A yields EA.

Example: let
A = [2 1 1; 4 −6 0; −2 7 2],
L_1 = [1 0 0; −2 1 0; 0 0 1],  L_2 = [1 0 0; 0 1 0; 1 0 1],  L_3 = [1 0 0; 0 1 0; 0 1 1].
Then L_3 L_2 L_1 A = [2 1 1; 0 −8 −2; 0 0 1] = U (upper triangular), so
A = (L_1^{-1} L_2^{-1} L_3^{-1}) U = LU, where L is unit lower triangular.

6 Computing an Inverse Matrix by Elementary Row Operations

The same sequence of elementary row operations that reduces A to the identity, applied to I, produces A^{-1}:
E_1 A, E_2 E_1 A, E_3 E_2 E_1 A, E_4 E_3 E_2 E_1 A = I,
and correspondingly
E_1 I, E_2 E_1 I, E_3 E_2 E_1 I, E_4 E_3 E_2 E_1 I = A^{-1},
for suitable elementary matrices E_1, E_2, E_3, E_4.

7 LU-Decomposition

If a matrix A can be decomposed into the product of a unit lower-triangular matrix L and an upper-triangular matrix U, then the linear system Ax = b can be written as LUx = b, and the problem is reduced to solving two simpler triangular systems, Ly = b and Ux = y, by forward and back substitution.

For A = [2 1 1; 4 −6 0; −2 7 2] and L_1, L_2, L_3 as on the previous slide,
L_3 L_2 L_1 A = [2 1 1; 0 −8 −2; 0 0 1] = U,
so
A = L_1^{-1} L_2^{-1} L_3^{-1} U = LU = [1 0 0; 2 1 0; −1 −1 1] [2 1 1; 0 −8 −2; 0 0 1].

If b = [5, −2, 9]^t, then y = [5, −12, 2]^t and x = [1, 1, 2]^t.
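
The two triangular solves can be sketched in MATLAB as follows. This is not part of the original notes: the function name lusolve is hypothetical, and the code assumes L is unit lower triangular, U is nonsingular upper triangular, and no pivoting is needed.

% lusolve.m - solve Ax = b given A = LU by forward and back substitution
function x = lusolve(L, U, b)
n = length(b);
y = zeros(n,1);                          % forward substitution: Ly = b
for i = 1:n
    y(i) = b(i) - L(i,1:i-1)*y(1:i-1);
end
x = zeros(n,1);                          % back substitution: Ux = y
for i = n:-1:1
    x(i) = (y(i) - U(i,i+1:n)*x(i+1:n)) / U(i,i);
end

For the example above, lusolve([1 0 0; 2 1 0; -1 -1 1], [2 1 1; 0 -8 -2; 0 0 1], [5; -2; 9]) returns [1; 1; 2].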

8 Analysis of the Gaussian Elimination Algorithm

for i = 1, 2, ..., n−1
  for k = i+1, i+2, ..., n
    β ← a_ki / a_ii   (if a_ii ≠ 0; β is the multiplier m_ki)
    a_ki ← β
    for j = i+1, i+2, ..., n
      a_kj ← a_kj − β a_ij

The worst-case computational complexity is O(2n³/3):
1. number of divisions: (n−1) + (n−2) + ... + 1 = n(n−1)/2
2. number of multiplications: (n−1)² + (n−2)² + ... + 1 = n(n−1)(2n−1)/6
3. number of subtractions: (n−1)² + (n−2)² + ... + 1 = n(n−1)(2n−1)/6
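
The elimination loops above translate almost line by line into MATLAB. The following sketch (not from the original notes; the function name gauss_lu is hypothetical) stores the multipliers m_ki in the strictly lower part of A, so the factors of A = LU can be read off afterwards; it assumes no zero pivots are encountered.

% gauss_lu.m - Gaussian elimination without pivoting, multipliers stored in place
function [L, U] = gauss_lu(A)
n = size(A,1);
for i = 1:n-1
    for k = i+1:n
        beta = A(k,i) / A(i,i);              % multiplier m_ki (assumes A(i,i) ~= 0)
        A(k,i) = beta;
        A(k,i+1:n) = A(k,i+1:n) - beta*A(i,i+1:n);
    end
end
L = tril(A,-1) + eye(n);                     % unit lower-triangular factor
U = triu(A);                                 % upper-triangular factor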

9 Analysis of Gaussian Elimination and Back Substitution for Solving Ax = b (2n³/3 + 3n²/2 − 7n/6 flops)

R_1: a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
R_2: a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
R_i: a_i1 x_1 + a_i2 x_2 + ... + a_in x_n = b_i
...
R_n: a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n

Gaussian elimination needs C_1 = Σ_{k=1}^n [ (k+1)(k−1) + k(k−1) ] flops to reduce this system to the equivalent upper-triangular system

R_1: u_11 x_1 + u_12 x_2 + ... + u_1n x_n = c_1
...
R_i: u_ii x_i + ... + u_in x_n = c_i
...
R_n: u_nn x_n = c_n

and back substitution needs C_2 = Σ_{k=1}^n (2k−1) = n² flops to solve the upper-triangular system. Therefore the total number of flops for solving Ax = b is

C_1 + C_2 = [ Σ_{k=1}^n (k+1)(k−1) + Σ_{k=1}^n k(k−1) ] + Σ_{k=1}^n (2k−1)
          = 2 Σ_{k=1}^n k² + Σ_{k=1}^n k − 2n
          = 2n(n+1)(2n+1)/6 + n(n+1)/2 − 2n
          = (n/6)(4n² + 9n − 7)
          = (2/3)n³ + (3/2)n² − (7/6)n.

10 PA = LU

Let A ∈ R^{4×4} and L_3 P_3 L_2 P_2 L_1 P_1 A = U by Gaussian elimination with partial pivoting. Denote

L_1 = [1 0 0 0; α_1 1 0 0; α_2 0 1 0; α_3 0 0 1],
L_2 = [1 0 0 0; 0 1 0 0; 0 α_4 1 0; 0 α_5 0 1],
L_3 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 α_6 1].

Without loss of generality, suppose P_2 interchanges rows 2 and 4. Then P_2 L_1 = (P_2 L_1 P_2^{-1}) P_2 = L_1^{(2)} P_2, where L_1^{(2)} = [1 0 0 0; α_3 1 0 0; α_2 0 1 0; α_1 0 0 1], so the permutations can be moved to the right of the (permuted) multiplier matrices.

Theorem: For any A ∈ R^{n×n}, there exists a permutation matrix P such that PA = LU, where L is unit lower triangular and U is upper triangular.
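
The theorem can be checked numerically with MATLAB's built-in lu function, which returns exactly this permuted factorization; the matrix below is just the running example from the earlier slides.

A = [2, 1, 1; 4, -6, 0; -2, 7, 2];
[L, U, P] = lu(A);        % P is a permutation matrix, L unit lower, U upper triangular
norm(P*A - L*U)           % should be zero (up to roundoff)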

11 Gaussian Elimination with Partial Pivoting

Algorithm:
for i = 1, 2, ..., n
  p(i) ← i
for i = 1, 2, ..., n−1
  (a) select a pivotal element a_{p(j),i} such that |a_{p(j),i}| = max_{i≤k≤n} |a_{p(k),i}|
  (b) interchange p(i) ↔ p(j)
  (c) for k = i+1, i+2, ..., n
        m_{p(k),i} ← a_{p(k),i} / a_{p(i),i}
        for j = i+1, i+2, ..., n
          a_{p(k),j} ← a_{p(k),j} − m_{p(k),i} a_{p(i),j}

Example: for a 3×3 matrix A, partial pivoting produced P_23 P_13 A = LU.

12 Matlab Codes for Gaussian Elimination with Partial Pivoting

%%
%% gausspp.m - driver for Gaussian elimination with partial pivoting
%%
fin = fopen('gaussmat.dat', 'r');
n = fscanf(fin, '%d', 1);
A = fscanf(fin, '%f', [n n]);
A = A';
b = fscanf(fin, '%f', n);
X = gausspivot(A, b, n)

%%
%% gausspivot.m - Gaussian elimination with partial pivoting
%%
function X = gausspivot(A, b, n)
if (abs(det(A)) < eps)
    disp(sprintf('A is singular with det=%f\n', det(A)))
end
C = [A, b];
% Gaussian elimination with partial pivoting %
for i = 1:n-1
    [pivot, k] = max(abs(C(i:n,i)));
    if (k > 1)
        temp = C(i,:);
        C(i,:) = C(i+k-1,:);
        C(i+k-1,:) = temp;
    end
    m(i+1:n,i) = -C(i+1:n,i)/C(i,i);
    C(i+1:n,:) = C(i+1:n,:) + m(i+1:n,i)*C(i,:);
end
% Back substitution %
X = zeros(n,1);    % let X be a column vector of size n
X(n) = C(n,n+1)/C(n,n);
for i = n-1:-1:1
    X(i) = (C(i,n+1) - C(i,i+1:n)*X(i+1:n))/C(i,i);
end

13 Doolittle's LU Factorization

Algorithm: A ∈ R^{n×n}, A = LU, L unit lower triangular, U upper triangular (0-based indices):
for k = 0, 1, ..., n−1
  L_kk ← 1
  for j = k, k+1, ..., n−1
    U_kj ← A_kj − Σ_{s=0}^{k−1} L_ks U_sj
  for i = k+1, k+2, ..., n−1
    L_ik ← [ A_ik − Σ_{s=0}^{k−1} L_is U_sk ] / U_kk
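
In MATLAB (1-based indexing) the Doolittle loops can be written as the following sketch; the function name doolittle_lu is hypothetical, no pivoting is performed, and all pivots U(k,k) are assumed nonzero.

% doolittle_lu.m - Doolittle factorization A = LU (L unit lower, U upper triangular)
function [L, U] = doolittle_lu(A)
n = size(A,1);
L = eye(n); U = zeros(n);
for k = 1:n
    for j = k:n
        U(k,j) = A(k,j) - L(k,1:k-1)*U(1:k-1,j);
    end
    for i = k+1:n
        L(i,k) = (A(i,k) - L(i,1:k-1)*U(1:k-1,k)) / U(k,k);
    end
end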

14 Crout's LU Factorization

Algorithm: A ∈ R^{n×n}, A = LU, L lower triangular, U unit upper triangular (0-based indices):
for k = 0, 1, ..., n−1
  U_kk ← 1
  for i = k, k+1, ..., n−1
    L_ik ← A_ik − Σ_{s=0}^{k−1} L_is U_sk
  for j = k+1, k+2, ..., n−1
    U_kj ← [ A_kj − Σ_{s=0}^{k−1} L_ks U_sj ] / L_kk
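
A matching MATLAB sketch of the Crout loops (again 1-based, hypothetical in name, and assuming all pivots L(k,k) are nonzero) differs from Doolittle only in which factor carries the unit diagonal.

% crout_lu.m - Crout factorization A = LU (L lower triangular, U unit upper triangular)
function [L, U] = crout_lu(A)
n = size(A,1);
L = zeros(n); U = eye(n);
for k = 1:n
    for i = k:n
        L(i,k) = A(i,k) - L(i,1:k-1)*U(1:k-1,k);
    end
    for j = k+1:n
        U(k,j) = (A(k,j) - L(k,1:k-1)*U(1:k-1,j)) / L(k,k);
    end
end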

15 Cholesky Algorithm

Algorithm: A ∈ R^{n×n}, A = LL^t, A positive definite, L lower triangular (0-based indices):
for j = 0, 1, ..., n−1
  L_jj ← [ A_jj − Σ_{k=0}^{j−1} L_jk² ]^{1/2}
  for i = j+1, j+2, ..., n−1
    L_ij ← [ A_ij − Σ_{k=0}^{j−1} L_ik L_jk ] / L_jj
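
A MATLAB sketch of the Cholesky loops (the function name cholesky_ll is hypothetical; the code assumes A is symmetric positive definite, so the square roots are real):

% cholesky_ll.m - Cholesky factorization A = L*L' for symmetric positive definite A
function L = cholesky_ll(A)
n = size(A,1);
L = zeros(n);
for j = 1:n
    L(j,j) = sqrt(A(j,j) - L(j,1:j-1)*L(j,1:j-1)');
    for i = j+1:n
        L(i,j) = (A(i,j) - L(i,1:j-1)*L(j,1:j-1)') / L(j,j);
    end
end

MATLAB's built-in chol(A) computes the same factorization, but returns the upper-triangular factor R with A = R'*R.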

16 Vector Norms

Definition: A vector norm on R^n is a function τ : R^n → R_+ = {x ≥ 0 : x ∈ R} that satisfies
(1) τ(x) > 0 for all x ≠ 0, and τ(0) = 0
(2) τ(cx) = |c| τ(x) for all c ∈ R, x ∈ R^n
(3) τ(x + y) ≤ τ(x) + τ(y) for all x, y ∈ R^n

Hölder norm (p-norm): ||x||_p = ( Σ_{i=1}^n |x_i|^p )^{1/p} for p ≥ 1
(p = 1)  ||x||_1 = Σ_{i=1}^n |x_i|   (Manhattan or city-block distance)
(p = 2)  ||x||_2 = ( Σ_{i=1}^n |x_i|² )^{1/2}   (Euclidean distance)
(p = ∞)  ||x||_∞ = max_{1≤i≤n} |x_i|   (∞-norm)
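
These norms are available directly through MATLAB's norm function; a small usage sketch (the vector is only an illustration):

x = [3; -4; 12];
norm(x, 1)        % 19 = |3| + |-4| + |12|        (1-norm)
norm(x, 2)        % 13 = sqrt(9 + 16 + 144)       (Euclidean norm)
norm(x, inf)      % 12 = max |x_i|                (infinity-norm)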

17 Matrix Norms

Definition: A matrix norm on R^{m×n} is a function τ : R^{m×n} → R_+ = {x ≥ 0 : x ∈ R} that satisfies
(1) τ(A) > 0 for all A ≠ O, and τ(O) = 0
(2) τ(cA) = |c| τ(A) for all c ∈ R, A ∈ R^{m×n}
(3) τ(A + B) ≤ τ(A) + τ(B) for all A, B ∈ R^{m×n}

Consistency property: τ(AB) ≤ τ(A) τ(B) for all compatible A, B.

Examples:
(a) τ(A) = max { |a_ij| : 1 ≤ i ≤ m, 1 ≤ j ≤ n }
(b) ||A||_F = [ Σ_{i=1}^m Σ_{j=1}^n a_ij² ]^{1/2}   (Frobenius norm)

Subordinate matrix norm: ||A|| = max_{x ≠ 0} { ||Ax|| / ||x|| }
(1) If A ∈ R^{m×n}, then ||A||_1 = max_{1≤j≤n} Σ_{i=1}^m |a_ij|
(2) If A ∈ R^{m×n}, then ||A||_∞ = max_{1≤i≤m} Σ_{j=1}^n |a_ij|
(3) If A ∈ R^{n×n} is real symmetric, then ||A||_2 = max_{1≤i≤n} |λ_i|, where λ_i ∈ λ(A)
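
The corresponding matrix norms can also be evaluated with MATLAB's norm; a usage sketch (the matrix is only an illustration):

A = [1 2; -3 4];
norm(A, 1)        % 6 = maximum absolute column sum
norm(A, inf)      % 7 = maximum absolute row sum
norm(A, 'fro')    % sqrt(30), Frobenius norm
norm(A, 2)        % largest singular value of A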

18 Matrix Condition Number

Cond(A) = ||A|| ||A^{-1}||

Example: for a matrix A with ||A||_1 = 6, ||A^{-1}||_1 = 4.5, ||A||_∞ = 8, and ||A^{-1}||_∞ = 3.5,
Cond_1(A) = ||A||_1 ||A^{-1}||_1 = 6 · 4.5 = 27
Cond_∞(A) = ||A||_∞ ||A^{-1}||_∞ = 8 · 3.5 = 28
Cond_2(A) = ||A||_2 ||A^{-1}||_2
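
MATLAB's cond(A, p) computes exactly the product ||A||_p ||A^{-1}||_p; a usage sketch with an illustrative matrix:

A = [10 1; 2 10];
cond(A, 1)        % ||A||_1   * ||inv(A)||_1
cond(A, inf)      % ||A||_inf * ||inv(A)||_inf
cond(A)           % default: 2-norm condition number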

19 Sensitivity and Error Bounds

Let x and x̂ be the solutions to Ax = b and Ax̂ = b̂, respectively, where x̂ = x + Δx and b̂ = b + Δb. Then A Δx = Δb, and b = Ax gives ||b|| ≤ ||A|| ||x||, i.e., ||x|| ≥ ||b|| / ||A||. Thus

||Δx|| / ||x|| = ||A^{-1} Δb|| / ||x|| ≤ ||A^{-1}|| ||Δb|| / ||x|| ≤ ||A^{-1}|| ||A|| ||Δb|| / ||b|| = Cond(A) ||Δb|| / ||b||.

Similarly, suppose that (A + E) x̂ = b and Ax = b; then

||x̂ − x|| / ||x̂|| ≤ Cond(A) ||E|| / ||A||.

Further derivation combines the two perturbations:

||x̂ − x|| / ||x|| ≤ Cond(A) ( ||Δb|| / ||b|| + ||E|| / ||A|| ).

20 Iterative Methods for Solving Linear Systems

1. Iterative methods are most useful for solving large sparse systems.
2. One advantage is that iterative methods may not require any extra storage and hence are more practical.
3. One disadvantage is that after solving Ax = b_1, one must start over again from the beginning in order to solve Ax = b_2.

Jacobi method: Given Ax = b, write A = C − M, where C is nonsingular and easily invertible. Then
Ax = b ⇔ (C − M)x = b ⇔ Cx = Mx + b ⇔ x = C^{-1}Mx + C^{-1}b ⇔ x = Bx + c,
where B = C^{-1}M and c = C^{-1}b. Starting from an initial guess x(0), iterate x(1) = Bx(0) + c and, in general, x(k+1) = Bx(k) + c.

21 Jacobi Iterative Method for Solving Linear Systems

Suppose x is a solution to Ax = b. Then
x(1) − x = (Bx(0) + c) − (Bx + c) = B(x(0) − x), and in general x(k) − x = B^k (x(0) − x),
so ||x(k) − x|| ≤ ||B^k|| ||x(0) − x|| ≤ ||B||^k ||x(0) − x||.
Hence x(k) − x → 0 as k → ∞ if ||B|| = ||C^{-1}M|| < 1. For the simplest computations, the 1-norm or ∞-norm is used. The simplest choice of C, M with A = C − M is C = Diag(A), M = −(A − C).

Theorem: Let x(0) ∈ R^n be arbitrary and define x(k+1) = Bx(k) + c for k = 0, 1, .... If x is a solution to Ax = b, then x(k) → x for every x(0) if and only if the spectral radius of B is less than 1; in particular, ||B|| < 1 is sufficient.

Theorem: If A is diagonally dominant, i.e., |a_ii| > Σ_{j≠i} |a_ij| for 1 ≤ i ≤ n, and A = C − M with C = Diag(A), B = C^{-1}M, then
||B||_∞ = max_{1≤i≤n} Σ_{j=1}^n |b_ij| = max_{1≤i≤n} ( Σ_{j≠i} |a_ij| / |a_ii| ) < 1.

Example: Let A = [10 1; 2 10], b = [11; 12], x(0) = [0; 0]. Then
C = [10 0; 0 10], M = [0 −1; −2 0], B = C^{-1}M = [0 −0.1; −0.2 0], c = C^{-1}b = [1.1; 1.2],
and
x(1) = Bx(0) + c = [1.1; 1.2],  x(2) = [0.98; 0.98],  x(3) = [1.002; 1.004].
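
A MATLAB sketch of the Jacobi iteration (the function name, tolerance, and iteration limit are illustrative, not from the notes), assuming the diagonal of A is nonzero:

% jacobi.m - Jacobi iteration x(k+1) = C\(M*x(k) + b) with C = Diag(A), M = C - A
function x = jacobi(A, b, x0, tol, maxit)
C = diag(diag(A));
M = C - A;
x = x0;
for k = 1:maxit
    xnew = C \ (M*x + b);            % x(k+1) = B*x(k) + c
    if norm(xnew - x, inf) < tol
        x = xnew; return
    end
    x = xnew;
end

For the example above, jacobi([10 1; 2 10], [11; 12], [0; 0], 1e-10, 100) converges to [1; 1].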

22 Gauss-Seidel Method

Given Ax = b, write A = C − M, where C is nonsingular and easily invertible.
Jacobi: C = diag(a_11, a_22, ..., a_nn), M = −(A − C).
Gauss-Seidel: A = (D − L) − U = C − M, where C = D − L and M = U.

Let x(0) ∈ R^n be an initial guess. Then Cx(k+1) = Mx(k) + b implies
(D − L) x(k+1) = U x(k) + b
D x(k+1) = L x(k+1) + U x(k) + b
x(k+1)_1 = (1/a_11)( −Σ_{j=2}^n a_1j x(k)_j + b_1 )
x(k+1)_i = (1/a_ii)( −Σ_{j=1}^{i−1} a_ij x(k+1)_j − Σ_{j=i+1}^n a_ij x(k)_j + b_i ),  for i = 2, 3, ..., n.

The difference between the Jacobi and Gauss-Seidel iterations is that in the latter, the components of x(k+1) are used as soon as they are calculated rather than in the next iteration. The program for Gauss-Seidel is also simpler.
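
A MATLAB sketch of the Gauss-Seidel sweep (names and stopping rule illustrative); note that the updated components x(1), ..., x(i−1) are used immediately inside the i-th update, which is exactly the difference from Jacobi described above.

% gauss_seidel.m - Gauss-Seidel iteration, using new components as soon as available
function x = gauss_seidel(A, b, x0, tol, maxit)
n = length(b);
x = x0;
for k = 1:maxit
    xold = x;
    for i = 1:n
        x(i) = (b(i) - A(i,1:i-1)*x(1:i-1) - A(i,i+1:n)*x(i+1:n)) / A(i,i);
    end
    if norm(x - xold, inf) < tol
        return
    end
end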

23 Convergence of Gauss-Seidel Iterations

Theorem: If A is diagonally dominant, i.e., |a_jj| > Σ_{i≠j} |a_ji| for 1 ≤ j ≤ n, then the Gauss-Seidel iteration converges to a solution of Ax = b.

Proof: Write A = (D − L) − U and let
α_j = Σ_{i=1}^{j−1} |a_ji|,  β_j = Σ_{i=j+1}^n |a_ji|,  and  R_j = β_j / ( |a_jj| − α_j ).

Since A is diagonally dominant, |a_jj| > α_j + β_j, and thus
R_j = β_j / ( |a_jj| − α_j ) < ( |a_jj| − α_j ) / ( |a_jj| − α_j ) = 1  for 1 ≤ j ≤ n.
Therefore R = max_{1≤j≤n} R_j < 1.

The remaining task is to show that ||B|| = max_{||x||=1} ||Bx|| ≤ R < 1, where B = C^{-1}M = (D − L)^{-1} U and the ∞-norm is used.
Let ||x|| = 1 and y = Bx; then ||y|| = max_{1≤i≤n} |y_i| = |y_k| for some k. From
y = Bx = (D − L)^{-1} U x  ⇒  (D − L)y = Ux  ⇒  Dy = Ly + Ux  ⇒  y = D^{-1}(Ly + Ux),
we get
y_k = (1/a_kk)( −Σ_{i=1}^{k−1} a_ki y_i − Σ_{i=k+1}^n a_ki x_i ),
so
||y|| = |y_k| ≤ (1/|a_kk|)( α_k ||y|| + β_k ||x|| ),
which implies ||Bx|| = ||y|| ≤ β_k / ( |a_kk| − α_k ) = R_k ≤ R < 1 for all x with ||x|| = 1.
Thus ||B|| = max_{||x||=1} ||Bx|| ≤ R < 1.

24 Example of Gauss-Seidel Iterations

Example:
A = [10 1; 2 10] = D − L − U,  b = [11; 12],  x(0) = [0; 0],
where D = [10 0; 0 10], L = [0 0; −2 0], U = [0 −1; 0 0].

Then
x(1)_1 = (1/10)( −a_12 x(0)_2 + b_1 ) = 1.1
x(1)_2 = (1/10)( −a_21 x(1)_1 + b_2 ) = 0.98
Moreover,
x(2)_1 = (1/10)( −a_12 x(1)_2 + b_1 ) = 1.002
x(2)_2 = (1/10)( −a_21 x(2)_1 + b_2 ) = 0.9996
Thus
x(1) = [1.1, 0.98]^t,  x(2) = [1.002, 0.9996]^t,  x(3) = [1.00004, 0.999992]^t.
