Engineering Computation


1 Engineering Computation Systems of Linear Equations_1

2 Learning Objectives for Lecture 1. Motivate the study of systems of equations, and particularly systems of linear equations. 2. Review the steps of Gaussian elimination. 3. Examine how roundoff error can enter and be magnified in Gaussian elimination. 4. Introduce pivoting and scaling as defenses against roundoff. 5. Consider what an engineer can do to generate well-formulated problems.

3 Systems of Equations In the previous lecture we tried to determine the value x satisfying f(x) = 0, where f is a scalar function, or the solution of a system of nonlinear equations. Now we try to obtain the values x1, x2, ..., xn satisfying the system of linear equations A x = b, with the same number n of equations and unknowns:

4 Systems of Equations Linear Algebraic Equations:
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn
where all the aij's and bi's are constants. In matrix form:
[ a11 a12 a13 ... a1n ] [ x1 ]   [ b1 ]
[ a21 a22 a23 ... a2n ] [ x2 ] = [ b2 ]
[ ...                 ] [ .. ]   [ .. ]
[ an1 an2 an3 ... ann ] [ xn ]   [ bn ]
(n x n) (n x 1) = (n x 1), or simply [A]{x} = {b}
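
As a quick illustration of [A]{x} = {b} in Matlab (this example is not from the original slides; the 3x3 values are chosen arbitrarily), the backslash operator solves the system directly:

% Illustrative 3x3 system (values chosen only for this example)
A = [4 -2 1; -2 4 -2; 1 -2 4];
b = [11; -16; 17];
x = A\b          % mldivide picks a suitable direct factorization for A
r = b - A*x      % residual, should be close to machine precision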

5 Systems of Equations where A is the square coefficient matrix and b the vector of independent terms. Many fundamental engineering equations are based on conservation laws. In mathematical terms, these principles lead to balance or continuity equations relating the system behavior to the amount of the quantity being modelled and to the external stimuli acting on the system. [Figure: a simple truss loaded by forces P and F; at each node the balance is force equilibrium, Fx = 0 and Fy = 0, giving 6 equations in 6 unknowns (the horizontal and vertical force components H and V).]

6 Systems of Equations Some special types of matrices: symmetric matrix, identity matrix, diagonal matrix, upper triangular matrix.

7 Systems of Equations Lower triangular matrix. Banded matrix (with half band width): all elements are null except those in a band centered around the main diagonal; a banded matrix with band width 3 is called tridiagonal. Sparse matrix: many zeros (see Matlab sparse and spconvert).
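
A minimal Matlab sketch of building such a banded (tridiagonal) matrix in sparse storage (the size and values below are illustrative, not from the slides):

n = 6;                                 % illustrative size
e = ones(n,1);
T = spdiags([-e 2*e -e], -1:1, n, n);  % sparse tridiagonal matrix (band width 3)
spy(T)                                 % view the sparsity pattern
nnz(T)                                 % number of nonzeros actually stored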

8 Systems of Equations Graphic solution: the equations of the system are hyperplanes (straight lines, planes, etc.), and the solution of the system is the intersection of these hyperplanes. Compatible and determined system: the normal vectors are linearly independent, the determinant of A is non-null, and there is a unique solution.

9 Systems of Equations Incompatible system: linearly dependent normal vectors, null determinant of A, no solution. Compatible but undetermined system: linearly dependent normal vectors, null determinant of A, an infinite number of solutions. Compatible and determined but ill-conditioned system: linearly independent normal vectors, determinant of A non-null but close to zero; a solution exists, but it is difficult to find precisely, and the system leads to numerical errors.
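
A minimal Matlab sketch of such an ill-conditioned (nearly singular) case, using the classic Hilbert matrix as an illustrative example:

A = hilb(8);                   % Hilbert matrix: well-known ill-conditioned example
cond(A)                        % condition number around 1e10
xtrue = ones(8,1);
b = A*xtrue;
x = A\b;                       % a solution exists (determinant is non-null)...
norm(x - xtrue)/norm(xtrue)    % ...but it is hard to compute precisely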

10 Gauss elimination Naive Gauss elimination method: the Gauss method has two phases, forward elimination and back substitution. In the first phase, the system is reduced to an upper triangular system. First, the unknown x1 is eliminated: the first row (the pivot row, or pivot equation) is multiplied by -a21/a11 and added to the second row. The same is done with all the other successive rows (n-1 times in total) until only the first equation contains the first unknown x1.

11 Gauss elimination 1. Forward Elimination (Row Manipulation): a. Form the augmented matrix [A b]. b. By elementary row manipulations, reduce [A b] to [U b'], where U is an upper triangular matrix:
DO i = 1 to n-1
  DO k = i+1 to n
    Row(k) = Row(k) + (-a_ki / a_ii) * Row(i)
  ENDDO
ENDDO
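
A minimal Matlab sketch of this forward-elimination loop on an augmented matrix (the function name forward_elim is just a label for this example; no pivoting, so every diagonal element met along the way is assumed nonzero):

function Ab = forward_elim(Ab)
% Naive forward elimination: reduce the augmented matrix Ab = [A b] to [U b']
n = size(Ab,1);
for i = 1:n-1
    for k = i+1:n
        r = -Ab(k,i)/Ab(i,i);            % elimination factor (assumes Ab(i,i) ~= 0)
        Ab(k,:) = Ab(k,:) + r*Ab(i,:);   % Row(k) = Row(k) + r*Row(i)
    end
end
end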

12 Gauss elimination flops 1. Forward Elimination (Row Manipulation):
DO k = 1 to n-1
  DO i = k+1 to n
    r = -A(i,k)/A(k,k)
    DO j = k+1 to n
      A(i,j) = A(i,j) + r*A(k,j)
    ENDDO
    B(i) = B(i) + r*B(k)
  ENDDO
ENDDO
Complexity: operation counting, with one (+-) plus one (*/) counted as 1 operation.
Inner loop (over j): sum_{j=k+1}^{n} 1 = n - (k+1) + 1 = n - k
Second loop (over i): sum_{i=k+1}^{n} [(n-k) + 2] = (n-k)^2 + 2(n-k) = (n^2 + 2n) - 2(n+1)k + k^2

13 Gauss elimination flops Operation counting, forward elimination: sum the previous count over the outer loop, DO k = 1 to n-1, using the identity sum_{i=1}^{m} i^2 = m(m+1)(2m+1)/6 = m^3/3 + O(m^2):
sum_{k=1}^{n-1} [(n^2 + 2n) - 2(n+1)k + k^2]
  = (n^2 + 2n)(n-1) - 2(n+1)(n-1)n/2 + (n-1)n(2n-1)/6
  = n^3/3 + O(n^2)

14 Gauss elimination flops Back Substitution: solve the upper triangular system [U]{x} = {b'},
u11 x1 + u12 x2 + ... + u1n xn = b'1
         u22 x2 + ... + u2n xn = b'2
                   ...
                        unn xn = b'n
x_n = b'_n / u_nn
DO i = n-1 to 1 by (-1)
  x_i = ( b'_i - sum_{j=i+1}^{n} u_ij x_j ) / u_ii
END
Inner loop: sum_{j=i+1}^{n} 1 = n - (i+1) + 1 = n - i
Outer loop: sum_{i=1}^{n-1} (n - i + 1) = (1+n)(n-1) - (n-1)n/2 = n^2/2 + O(n)
Total effort, number of floating point operations (FLOPS), Gauss method: n^3/3 + O(n^2) {elimination} + n^2/2 + O(n) {back substitution} = n^3/3 + O(n^2) = O(n^3).
Ch&C 9.2.1 considers one (+-) and one (*/) as different flops: naive Gaussian elimination, no. of flops = n^3/3 + O(n^2) {mult/div} + n^3/3 + O(n) {add/subtr} = 2n^3/3 + O(n^2); for back substitution: n^2 flops; total effort: 2n^3/3 + O(n^2) = O(n^3).
Solution by Cramer's rule: n! flops. Never used with a computer.
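
A minimal Matlab sketch of this back-substitution loop (back_subst is a name chosen for this example; U is assumed upper triangular with nonzero diagonal):

function x = back_subst(U, b)
% Solve the upper triangular system U*x = b by back substitution
n = length(b);
x = zeros(n,1);
x(n) = b(n)/U(n,n);
for i = n-1:-1:1
    x(i) = (b(i) - U(i,i+1:n)*x(i+1:n)) / U(i,i);
end
end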

15 Pivoting strategy Floating point operations with a 4-digit mantissa and rounding. Consider the system (exact solution [1,1,1]):
5 x1 - 3 x2 + x3 = 3
-3 x1 + 1.799 x2 + 4 x3 = 2.799
3 x1 + 2 x2 + 2 x3 = 7
1st step, elimination factors: -(-3)/5 = 3/5 = 0.6000e0 and -3/5 = -0.6000e0. Augmented matrix before and after the 1st step:
.5000e1 -.3000e1 .1000e1 .3000e1        .5000e1 -.3000e1 .1000e1 .3000e1
-.3000e1 .1799e1 .4000e1 .2799e1   ->   0 -.1000e-2 .4600e1 .4599e1
.3000e1 .2000e1 .2000e1 .7000e1         0 .3800e1 .1400e1 .5200e1

16 Pivoting strategy 2nd step without pivoting: the pivot is -.1000e-2. Elimination factor: -.3800e1/(-.1000e-2) = 3800. The last row becomes 0 0 .1748e5 .1749e5 (note .1748e5 + .52e1 rounds to .1749e5). Back substitution:
x3 = 17490/17480 = .1001e1
x2 = (.4599e1 - .4600e1 * .1001e1)/(-.1000e-2) = (4.599 - 4.605)/(-0.001) = (-0.006)/(-0.001) = 6
x1 = (3 + 3 x2 - x3)/5 = (3 + 18 - 1.001)/5 = 20/5 = 4
(Exact solution: [1,1,1]). Strategy (pivoting): reduce the absolute value of the elimination factors. Coefficients in the last equation, after step 2, have values 10^3 or 10^4 times greater than those in the original system. Small rounding errors for them are big errors for the other coefficients. The relative SCALING of the coefficients can matter a lot!!

17 Pivoting strategy Pivoting effect: reduce as much as possible the absolute values of the elimination factors, protecting against growth of the values of the coefficients. We still operate with a 4-digit mantissa and rounding, now with partial pivoting. The 1st step is as before:
.5000e1 -.3000e1 .1000e1 .3000e1
0 -.1000e-2 .4600e1 .4599e1
0 .3800e1 .1400e1 .5200e1
Look for the pivot at step i: the coefficient of greatest absolute value in column i, from the diagonal element ii downwards. 2nd step: permutation (rows 2 and 3) + elimination, m32 = -(-0.001/3.8) = 2.6e-4:
.5000e1 -.3000e1 .1000e1 .3000e1
0 .3800e1 .1400e1 .5200e1
0 0 .4600e1 .4600e1
Back substitution: x3 = 1, x2 = (5.2 - 1.4)/3.8 = 1, x1 = (3 + 3 - 1)/5 = 1. (Exact solution: [1,1,1])!!! The exact solution isn't usually obtained, but pivoting is absolutely necessary to reduce rounding errors!!

18 Gauss elimination (example) Goals: 1. Best accuracy (i.e., minimize error). 2. Parsimony (i.e., minimize effort). Possible Problems: A. Zero on a diagonal term: division by zero. B. Many floating point operations (flops) cause numerical precision problems and propagation of errors. C. System may be ill-conditioned: det[A] close to 0. D. No solution or an infinite number of solutions: det[A] = 0. Possible Remedies: A. Carry more significant figures (double precision). B. Pivoting: choose elimination factors as small as possible in absolute value (needed in particular when a diagonal element is close to zero). C. Scaling: to reduce round-off error.

19 Gauss elimination (pivoting) PIVOTING A. Row pivoting (Partial Pivoting). Partial pivoting, no scaling: at each step i, find (along column i) max_k |a_ki| for k = i, i+1, i+2, ..., n. Partial pivoting with scaling: at each step i, find (along column i) max_k |a_ki| / (size of coefficient row k) for k = i, i+1, i+2, ..., n. -> Move the corresponding row to the pivot position. Avoids zero a_ii. Protects against growth of the coefficients and minimizes round-off. Maintains diagonal dominance. Row pivoting does not affect the order of the variables. Included in any good Gaussian elimination routine.
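
A minimal Matlab sketch of Gaussian elimination with unscaled partial pivoting followed by back substitution (gauss_pp is a name chosen for this example, not a routine from the slides):

function x = gauss_pp(A, b)
% Gaussian elimination with (unscaled) partial pivoting + back substitution
n = length(b);
Ab = [A, b(:)];                        % augmented matrix [A b]
for i = 1:n-1
    [~, p] = max(abs(Ab(i:n, i)));     % pivot row: largest |a_ki| for k = i..n
    p = p + i - 1;
    Ab([i p], :) = Ab([p i], :);       % move that row to the pivot position
    for k = i+1:n
        Ab(k,i:end) = Ab(k,i:end) - (Ab(k,i)/Ab(i,i))*Ab(i,i:end);
    end
end
x = zeros(n,1);                        % back substitution on [U | b']
x(n) = Ab(n,n+1)/Ab(n,n);
for i = n-1:-1:1
    x(i) = (Ab(i,n+1) - Ab(i,i+1:n)*x(i+1:n))/Ab(i,i);
end
end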

20 Gauss elimination (pivoting) B. Column pivoting: reorder the remaining variables x_j for j = i, ..., n so as to get the largest |a_ji|. Column pivoting changes the order of the unknowns x_i and thus leads to complexity in the algorithm; it is not usually done. C. Complete or full pivoting: performing both row pivoting and column pivoting. (If [A] is symmetric, this is needed to preserve symmetry.) SCALING a. Express all equations (and variables) in comparable units so all elements of [A] are about the same size. b. If that fails, and max_j |a_ij| varies widely across the rows, replace each row i by a_ij <- a_ij / max_j |a_ij|. This makes the largest coefficient of each equation equal to 1 and the largest element of [A] equal to 1 or -1. NOTE: Routines generally do not scale automatically; scaling can cause round-off error too! (Partial pivoting with scaling integrates pivoting and scaling to select the pivots.)

21 LU factorization The LU factorization is a method that uses Gaussian elimination techniques to transform the matrix A into a product of triangular matrices (L is lower triangular, U is upper triangular). This is especially useful to solve systems with different vectors b, because the same decomposition of the matrix A can be used to evaluate all the cases efficiently by forward and backward substitution. The situation of a system with the same coefficient matrix and several right-hand vectors is quite common in engineering: the coefficient matrix usually represents the geometry/connectivity and the physical properties in the balance equations of the system, while the independent right-hand vectors represent actions on the system, like mechanical loads corresponding to different loading hypotheses. A pure factorization A = L U is possible in some cases, but in general a factorization with pivoting is the method to apply: it is always possible if the matrix A is nonsingular, and it protects against rounding errors. In practice, pivoting is needed: P A = L U (P is a square matrix that results from row permutations of the identity matrix).

22 LU factorization With P A = L U, the initial system A X = B becomes P A X = P B, i.e. L U X = P B. Naming U X = Y: first L Y = P B, a lower triangular system, is solved for Y = (y1, ..., yn) by forward substitution; then U X = Y, an upper triangular system, is solved by back substitution for X = (xn, ..., x1): the solution.
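
A minimal Matlab sketch of this factor-once, solve-many-times pattern (matrix and right-hand sides are illustrative values):

A = [2 1 1; 4 3 3; 8 7 9];     % illustrative nonsingular matrix
B = [1 0; 2 1; 3 4];           % two right-hand-side vectors stored as columns
[L, U, P] = lu(A);             % one factorization: P*A = L*U
Y = L \ (P*B);                 % forward substitution (lower triangular)
X = U \ Y;                     % back substitution (upper triangular)
norm(A*X - B)                  % check: near machine precision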

23 LU factorization The LU decomposition is very much related to the Gauss method, because the upper triangular matrix U that appears in the LU decomposition is exactly the one produced by Gaussian elimination. Surprisingly, during the Gauss elimination procedure the matrix L is obtained as well, but one is not aware of this fact: the factors used to create the zeros below the main diagonal are (with changed sign) the elements of L. Consider first an example of LU factorization without pivoting on a 3x3 matrix A. Each elimination step is a multiplication by a Frobenius matrix M_k (unit lower triangular, with the elimination factors of step k stored below the diagonal in column k). In the first step the factors are m21 = -1/2 and m31 = -1/2, so M1 A(1) (= M1 A) = A(2), with det(M1) = 1. In the second step m32 = -((-1/2)/(1/2)) = 1, so M2 A(2) = A(3) = U, with det(M2) = 1. Hence M2 M1 A = U, det(A) = det(U) = u11 u22 u33 (no pivoting), and A = M1^-1 M2^-1 U = L U.

24 LU factorization (no pivoting) The inverse of a Frobenius matrix is obtained simply by changing the sign of its subdiagonal entries, so
L = M1^-1 M2^-1 = [ 1 0 0 ; 1/2 1 0 ; 1/2 -1 1 ]
Coefficients of L under the diagonal: the elimination factors with changed sign. In general P A = L U (using pivoting strategies), and det(A) = det(U) * (-1)^(number of row permutations in P). Computing P A = L U takes O(n^3/3) FLOPS, with 1 flop = 1 (*/) plus 1 (+-).

25 LU Gaussian factorization, nonscaled partial pivoting. Keep track of the row indices, starting from the initial ordering. Each step has two substeps: 1. pivot selection and possible row permutation; 2. elimination. 1st step: in the 1st column, 7 = max(|1|, |4|, |7|) is the pivot element, so the 1st and 3rd rows are permuted; elimination factors m21 = -4/7 and m31 = -1/7. 2nd step: in the 2nd column (from the diagonal downwards), 6/7 = max(|3/7|, |6/7|) is the pivot element, so the 2nd and 3rd rows are permuted (when permuting, the full rows are permuted); elimination factor m32 = -(3/7)/(6/7) = -1/2. The result is P A = L U, with
L = [ 1 0 0 ; 1/7 1 0 ; 4/7 1/2 1 ]
(the elimination factors with changed sign, reordered by the permutations), U the final upper triangular matrix (its diagonal starts 7, 6/7), and P the permutation matrix obtained by applying the same row exchanges to the identity. P is orthogonal: P^T = P^-1. Check that P A = L U!

26 LU Gaussian factorization, nonscaled partial pivoting: the same example written with elementary matrices, again tracking the row indices. 1st step: substep 1, permute the 1st and 3rd rows with the permutation matrix P1; substep 2, eliminate with the Frobenius matrix M1 (subdiagonal entries -4/7 and -1/7), giving A1 = M1 P1 A. 2nd step: substep 1, permute the 2nd and 3rd rows with P2; substep 2, eliminate with the Frobenius matrix M2 (subdiagonal entry -1/2), giving A2 = M2 P2 A1 = M2 P2 M1 P1 A = U. After some algebra this is rearranged as P A = L U, with det(L) = 1, det(U) = 7 * (6/7) * u33, det(P) = (-1)^(number of permutations), and det(A) = det(P) det(L) det(U).

27 LU factorization, nonscaled partial pivoting: a further example, again tracking the row indices. 1st step: the pivot 7 is selected in the 1st column; elimination factors m21 = -4/7 and m31 = -1/7. 2nd step: a permutation isn't needed, because the element already in the diagonal position has the largest absolute value in its column below the diagonal; the elimination factor m32 = -a32/a22 is computed from the step-1 result directly. The end result is again collected as P A = L U.

28 LU factorization, SCALED partial pivoting: same kind of example, now keeping both the row indices i and the row sizes d_i = max_j |a_ij| of the original rows. 1st step: the pivot is chosen by the scaled criterion max(|1|/d1, |4|/d2, |7|/d3) = 4/6, so the pivot is 4 in the 2nd row (even though 7 is the largest entry of the column in absolute value); elimination factors m21 = -1/4 and m31 = -7/4. 2nd step: the scaled ratios |a_k2|/d_k are compared again for the remaining rows, and the pivot is taken from the 3rd row; after this elimination the result is collected as P A = L U, with P recording the row exchanges made by the scaled pivoting.
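
A minimal Matlab sketch of the scaled pivot selection rule used above (scaled_pivot_row is a name invented for this example; d holds the row sizes d_k = max_j |a_kj| of the original rows):

function p = scaled_pivot_row(A, d, i)
% Return the row p >= i that maximizes |A(k,i)|/d(k) for k = i..n
n = size(A,1);
d = d(:);                                   % make sure d is a column vector
[~, rel] = max(abs(A(i:n, i)) ./ d(i:n));   % scaled comparison along column i
p = rel + i - 1;                            % convert to an absolute row index
end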

29 Matlab LU factorization (nonscaled partial pivoting)
>> A1 = [1 2 3; 4 5 6; 7 1 5]
>> [L, U, P] = lu(A1)
>> P*A1
>> L*U
The displayed factors satisfy P*A1 = L*U: the two products printed above coincide.
Pivoting 1st example revisited:
A = [5, -3, 1; -3, 1.799, 4; 3, 2, 2]
[L1, U1, P1] = lu(A)

30 LU decomposition (Complexity) Basic approach: consider A x = b. a) Gauss-type "decomposition" of A into L U: n^3/3 flops. A x = b becomes L U x = b; let U x = y. b) First solve L y = b for y (lower triangular): n^2/2 flops. c) Then solve U x = y for x by back substitution: n^2/2 flops. In practice: USE PIVOTING!! P A = L U. From A x = b: P A x = P b; L U x = P b; solve L y = P b for y, then U x = y for x, the solution.

31 LU decomposition LU Decomposition Variations. Doolittle: A = L1 U for a general matrix A, with L1 unit lower triangular (ones on the diagonal) and U a general upper triangular matrix. Crout: A = L U1 for a general matrix A, with L a general lower triangular matrix and U1 unit upper triangular. Cholesky: A = L L^T, with L lower triangular; Cholesky works only for positive definite symmetric matrices!! There are other important matrix factorizations. One of them is A = Q R, with Q orthogonal and R upper triangular; it is used in robust matrix eigenvalue calculations.
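
A minimal Matlab sketch contrasting these factorizations on a small symmetric positive definite matrix (values are illustrative):

A = [4 2 1; 2 5 3; 1 3 6];      % symmetric positive definite (illustrative)
[L, U, P] = lu(A);              % Doolittle-style: L with unit diagonal, P*A = L*U
R = chol(A);                    % Cholesky: returns upper triangular R with A = R'*R
[Q, Rq] = qr(A);                % QR: Q orthogonal, Rq upper triangular
norm(A - P'*L*U), norm(A - R'*R), norm(A - Q*Rq)   % all near machine precision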

32 Cholesky factorization PROPERTY: A = L L^T, with l_ii > 0, i = 1, ..., n (L: lower triangular matrix), is a necessary and sufficient condition for A to be a positive definite matrix (x^T A x > 0 for every vector x != 0; x^T A x = 0 only if x = 0).

function [L,indic] = Choles(A)
% Input:  A, symmetric and positive definite matrix
% Output: L, lower triangular matrix with L*L' = A
%         indic: = 0 if no problems, = 1 if problems
indic = 0;
n = size(A,1);
L = zeros(n);
for k = 1:n
    s = 0; for p = 1:k-1, s = s + L(k,p)^2; end
    aux1 = A(k,k) - s;
    if (aux1 < 0 || abs(aux1) <= 1e-12)   % pivot not strictly positive: A is not positive definite
        indic = 1; return
    end
    L(k,k) = sqrt(aux1);
    for i = k+1:n
        s = 0; for p = 1:k-1, s = s + L(i,p)*L(k,p); end
        L(i,k) = (A(i,k) - s)/L(k,k);
    end % for i
end % for k

In A = L L^T, identify element by element on both sides, by columns:
for k = 1:n
  l_kk = sqrt( a_kk - sum_{p=1..k-1} (l_kp)^2 )
  for i = k+1:n
    l_ik = ( a_ik - sum_{p=1..k-1} l_ip*l_kp ) / l_kk
  end
end
No. of flops = O(n^3/6)
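
A quick usage sketch of this routine, checked against Matlab's built-in chol (the test matrix is an illustrative choice):

A = [4 2 1; 2 5 3; 1 3 6];      % illustrative symmetric positive definite matrix
[L, indic] = Choles(A);
norm(L*L' - A)                  % near machine precision when indic == 0
norm(L - chol(A)')              % chol returns the upper factor R, with A = R'*R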

33 Matrix Inversion Definition of a matrix inverse: A A^-1 = I; A x = b => x = A^-1 b. First rule: don't do it (more effort and possible numerical instabilities). Consider A A^-1 = I as n systems with the columns i_k (k = 1, ..., n) of I as RHS vectors. 1) Create the augmented matrix [A I]. 2) Apply elimination simultaneously under and over the diagonal (Gauss-Jordan elimination): at the end one gets [I A^-1]. This type of elimination can also be applied to a system A x = b, but the number of flops is higher than performing Gaussian elimination plus back substitution, and it shouldn't be used. Nevertheless, the Gauss-Jordan method takes O(n^3) flops to calculate the inverse A^-1, and it is adequate, instead of solving the n systems with RHS i_k (k = 1, ..., n) by the L U method.
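
A minimal Matlab sketch of the [A I] -> [I A^-1] idea (here using the built-in row reduction rref for brevity), next to the recommended alternative of never forming the inverse (values are illustrative):

A = [2 1 1; 4 3 3; 8 7 9]; b = [1; 2; 3];    % illustrative data
GJ = rref([A eye(3)]);                        % Gauss-Jordan on the augmented matrix
Ainv = GJ(:, 4:6);                            % right block is A^-1
x1 = Ainv*b;                                  % solve via the inverse (discouraged)
x2 = A\b;                                     % solve via factorization (preferred)
norm(x1 - x2)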

34 Matrix Inversion. Gauss-Jordan method (worked 3x3 example). To compute A^-1 from A A^-1 = I, the 3 linear systems (one per column of I) are processed in parallel, since they share the same coefficient matrix: form [A I], eliminate below the diagonal and then also above it, and finally multiply each full row by the reciprocal of its diagonal element; the right-hand block is then A^-1. Partial pivoting, scaled or not, should be applied; at each step/column, search for the pivot under the diagonal of that column, so as not to break the structure of zeros already created.
