NUMERICAL MATHEMATICS & COMPUTING, 7th Edition

Ward Cheney / David Kincaid
© UT Austin
Cengage Learning: Thomson-Brooks/Cole
www.cengage.com
www.ma.utexas.edu/cna/nmc7
October 16, 2011
2.3 Tridiagonal and Banded Systems

In many applications, extremely large linear systems that have a banded structure are encountered. Banded matrices often occur in solving ordinary and partial differential equations. It is advantageous to develop computer codes specifically designed for such linear systems, since they reduce the amount of storage used. Of practical importance is the tridiagonal system.
\[
\begin{bmatrix}
d_1 & c_1 & & & & & & \\
a_1 & d_2 & c_2 & & & & & \\
 & a_2 & d_3 & c_3 & & & & \\
 & & \ddots & \ddots & \ddots & & & \\
 & & a_{i-1} & d_i & c_i & & & \\
 & & & \ddots & \ddots & \ddots & & \\
 & & & & a_{n-2} & d_{n-1} & c_{n-1} \\
 & & & & & a_{n-1} & d_n
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_i \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_i \\ \vdots \\ b_{n-1} \\ b_n \end{bmatrix}
\tag{1}
\]

Here, all the nonzero elements in the coefficient matrix must be on the main diagonal or on the two diagonals just above and below it (usually called the superdiagonal and subdiagonal, respectively). All elements not on the displayed diagonals are 0's.
A tridiagonal matrix is characterized by the condition $a_{ij} = 0$ if $|i - j| \ge 2$. In general, a matrix is said to have a banded structure if there is an integer $k$ (less than $n$) such that $a_{ij} = 0$ whenever $|i - j| \ge k$.

The storage requirements for a banded matrix are less than those for a general matrix of the same size. Thus, an $n \times n$ diagonal matrix requires only $n$ memory locations in the computer, and a tridiagonal matrix requires only $3n - 2$.
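As a concrete check on these counts, here is a small illustrative sketch (the function names are ours, not the text's) comparing the number of stored entries for the three storage schemes:

```python
def full_entries(n):
    """Entries stored for a general n x n matrix."""
    return n * n

def tridiagonal_entries(n):
    """Entries stored for a tridiagonal matrix: one diagonal of length n
    plus a subdiagonal and a superdiagonal of length n - 1 each."""
    return n + 2 * (n - 1)   # = 3n - 2

def diagonal_entries(n):
    """Entries stored for a diagonal matrix."""
    return n
```

For n = 10000, full storage needs 100,000,000 entries but tridiagonal storage needs only 29,998, which is why special routines matter for very large banded systems.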
This fact is important if banded matrices of very large order are being used. For banded matrices, the Gaussian elimination algorithm can be made very efficient if it is known beforehand that pivoting is unnecessary. This situation occurs often enough to justify special procedures. Here, we develop a code for the tridiagonal system and give a listing for the pentadiagonal system (in which $a_{ij} = 0$ if $|i - j| \ge 3$).
Tridiagonal Systems

The routine to be described now is called procedure Tri. It is designed to solve a system of n linear equations in n unknowns, as shown in System (1). Both the forward elimination phase and the back substitution phase are incorporated in the procedure, and no pivoting is used; thus, naive Gaussian elimination is used.
Step 1: Subtract $a_1/d_1$ times row 1 from row 2, thus creating a 0 in the $a_1$ position. Only the entries $d_2$ and $b_2$ are altered; observe that $c_2$ is not altered.

Step 2: The process is repeated, using the new row 2 as the pivot row.
Here is how the $d_i$'s and $b_i$'s are altered in each step:
\[
d_2 \leftarrow d_2 - (a_1/d_1)\,c_1, \qquad b_2 \leftarrow b_2 - (a_1/d_1)\,b_1
\]
In general, we obtain
\[
\begin{cases}
d_i \leftarrow d_i - (a_{i-1}/d_{i-1})\,c_{i-1} \\
b_i \leftarrow b_i - (a_{i-1}/d_{i-1})\,b_{i-1}
\end{cases}
\qquad (2 \le i \le n)
\]
Tridiagonal Systems

At the end of the forward elimination phase, the form of the system is as follows:
\[
\begin{bmatrix}
d_1 & c_1 & & & & & & \\
 & d_2 & c_2 & & & & & \\
 & & d_3 & c_3 & & & & \\
 & & & \ddots & \ddots & & & \\
 & & & & d_i & c_i & & \\
 & & & & & \ddots & \ddots & \\
 & & & & & & d_{n-1} & c_{n-1} \\
 & & & & & & & d_n
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_i \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_i \\ \vdots \\ b_{n-1} \\ b_n \end{bmatrix}
\]
Of course, the $b_i$'s and $d_i$'s are not as they were at the beginning of this process, but the $c_i$'s are.
The back substitution phase solves for $x_n, x_{n-1}, \ldots, x_1$ as follows:
\[
x_n \leftarrow b_n/d_n, \qquad x_{n-1} \leftarrow (b_{n-1} - c_{n-1}x_n)/d_{n-1}
\]
Finally, we obtain
\[
x_i \leftarrow (b_i - c_i x_{i+1})/d_i \qquad (i = n-1, n-2, \ldots, 1)
\]
Tridiagonal Systems

In procedure Tri for a tridiagonal system, we use single-dimensioned arrays $(a_i)$, $(d_i)$, and $(c_i)$ for the diagonals in the coefficient matrix and array $(b_i)$ for the right-hand side, and store the solution in array $(x_i)$. Notice that the original data in arrays $(d_i)$ and $(b_i)$ are changed.
Pseudocode Tri

procedure Tri(n, (a_i), (d_i), (c_i), (b_i), (x_i))
    integer i, n;  real xmult
    real array (a_i)_{1:n}, (d_i)_{1:n}, (c_i)_{1:n}, (b_i)_{1:n}, (x_i)_{1:n}
    for i = 2 to n
        xmult ← a_{i-1}/d_{i-1}
        d_i ← d_i − (xmult) c_{i-1}
        b_i ← b_i − (xmult) b_{i-1}
    end for
    x_n ← b_n/d_n
    for i = n−1 to 1 step −1
        x_i ← (b_i − c_i x_{i+1})/d_i
    end for
end procedure Tri
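The pseudocode translates directly into a working routine. The following is a sketch in Python with NumPy (the function name `tri` and the 0-based array layout are our choices, not the book's); it performs the same forward elimination and back substitution on local copies of the diagonal and right-hand side:

```python
import numpy as np

def tri(a, d, c, b):
    """Naive Gaussian elimination for the tridiagonal System (1).
    a: subdiagonal (a[0] = a_1), d: main diagonal, c: superdiagonal,
    b: right-hand side.  No pivoting; d and b are modified on local copies,
    mirroring how procedure Tri overwrites its arrays."""
    n = len(d)
    d = np.array(d, dtype=float)       # local copies, overwritten below
    b = np.array(b, dtype=float)
    a = np.asarray(a, dtype=float)
    c = np.asarray(c, dtype=float)
    # forward elimination phase
    for i in range(1, n):
        xmult = a[i - 1] / d[i - 1]
        d[i] -= xmult * c[i - 1]
        b[i] -= xmult * b[i - 1]
    # back substitution phase
    x = np.empty(n)
    x[n - 1] = b[n - 1] / d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x
```

For example, `tri([1, 1, 1], [2, 2, 2, 2], [1, 1, 1], [3, 4, 4, 3])` returns the solution `[1, 1, 1, 1]`.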
Tridiagonal Systems

A symmetric tridiagonal system arises in the cubic spline development. A general symmetric tridiagonal system has the form
\[
\begin{bmatrix}
d_1 & c_1 & & & & & & \\
c_1 & d_2 & c_2 & & & & & \\
 & c_2 & d_3 & c_3 & & & & \\
 & & \ddots & \ddots & \ddots & & & \\
 & & c_{i-1} & d_i & c_i & & & \\
 & & & \ddots & \ddots & \ddots & & \\
 & & & & c_{n-2} & d_{n-1} & c_{n-1} \\
 & & & & & c_{n-1} & d_n
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_i \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_i \\ \vdots \\ b_{n-1} \\ b_n \end{bmatrix}
\tag{2}
\]
One could overwrite the right-hand side vector b with the solution vector x as well. Thus, a symmetric tridiagonal system can be solved with a procedure call of the form

    call Tri(n, (c_i), (d_i), (c_i), (b_i), (b_i))

This reduces the number of linear arrays from five to three.
Strict Diagonal Dominance

Since procedure Tri does not involve pivoting, it is natural to ask whether it is likely to fail. Simple examples can be given to illustrate failure because of attempted division by zero, even though the coefficient matrix in System (1) is nonsingular. On the other hand, it is not easy to give the weakest possible conditions on this matrix that guarantee the success of the algorithm. We content ourselves with one property that is easily checked and commonly encountered: if the tridiagonal coefficient matrix is strictly diagonally dominant, then procedure Tri does not encounter zero divisors.
Strict Diagonal Dominance

Definition: A general matrix $A = (a_{ij})_{n \times n}$ is strictly diagonally dominant if
\[
|a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}| \qquad (1 \le i \le n)
\]
In the case of the tridiagonal System (1), strict diagonal dominance means simply that (with $a_0 = c_n = 0$)
\[
|d_i| > |a_{i-1}| + |c_i| \qquad (1 \le i \le n)
\]
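This row-by-row condition is easy to automate. A small sketch (the helper name is ours), using the convention $a_0 = c_n = 0$:

```python
import numpy as np

def strictly_diagonally_dominant(a, d, c):
    """Return True if the tridiagonal matrix with subdiagonal a (length n-1),
    diagonal d (length n), and superdiagonal c (length n-1) satisfies
    |d_i| > |a_{i-1}| + |c_i| for every row, taking a_0 = c_n = 0."""
    sub = np.concatenate(([0.0], np.abs(a)))   # |a_{i-1}|, zero for the first row
    sup = np.concatenate((np.abs(c), [0.0]))   # |c_i|, zero for the last row
    return bool(np.all(np.abs(d) > sub + sup))
```

For instance, a constant diagonal of 3 with unit off-diagonals is strictly diagonally dominant, while a constant diagonal of 1 with unit off-diagonals is not.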
Let us verify that the forward elimination phase in procedure Tri preserves strict diagonal dominance. The new coefficient matrix produced by Gaussian elimination has 0 elements where the $a_i$'s originally stood, and the new diagonal elements are determined recursively by
\[
\begin{cases}
\widehat d_1 = d_1 \\
\widehat d_i = d_i - \bigl(a_{i-1}/\widehat d_{i-1}\bigr)\, c_{i-1} & (2 \le i \le n)
\end{cases}
\]
where $\widehat d_i$ denotes a new diagonal element. The $c_i$ elements are unaltered. Now we assume that $|d_i| > |a_{i-1}| + |c_i|$, and we want to be sure that $|\widehat d_i| > |c_i|$.
Obviously, this is true for $i = 1$ because $\widehat d_1 = d_1$. If it is true for index $i - 1$ (that is, $|\widehat d_{i-1}| > |c_{i-1}|$), then it is true for index $i$ because
\[
|\widehat d_i| = \bigl|d_i - \bigl(a_{i-1}/\widehat d_{i-1}\bigr)c_{i-1}\bigr|
\ge |d_i| - |a_{i-1}|\,|c_{i-1}|/|\widehat d_{i-1}|
> |a_{i-1}| + |c_i| - |a_{i-1}| = |c_i|
\]
While the number of long operations in Gaussian elimination on full matrices is $O(n^3)$, it is only $O(n)$ for tridiagonal matrices. Also, the scaled pivoting strategy is not needed on strictly diagonally dominant tridiagonal systems.
Pentadiagonal Systems

The principles illustrated by procedure Tri can be applied to matrices that have wider bands of nonzero elements. Procedure Penta is given here to solve the five-diagonal system:
\[
\begin{bmatrix}
d_1 & c_1 & f_1 & & & & & & \\
a_1 & d_2 & c_2 & f_2 & & & & & \\
e_1 & a_2 & d_3 & c_3 & f_3 & & & & \\
 & e_2 & a_3 & d_4 & c_4 & f_4 & & & \\
 & & \ddots & \ddots & \ddots & \ddots & \ddots & & \\
 & & e_{i-2} & a_{i-1} & d_i & c_i & f_i & & \\
 & & & \ddots & \ddots & \ddots & \ddots & \ddots & \\
 & & & & e_{n-4} & a_{n-3} & d_{n-2} & c_{n-2} & f_{n-2} \\
 & & & & & e_{n-3} & a_{n-2} & d_{n-1} & c_{n-1} \\
 & & & & & & e_{n-2} & a_{n-1} & d_n
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ \vdots \\ x_i \\ \vdots \\ x_{n-2} \\ x_{n-1} \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \\ \vdots \\ b_i \\ \vdots \\ b_{n-2} \\ b_{n-1} \\ b_n \end{bmatrix}
\]
Pentadiagonal Systems (cont.)

In the pseudocode, the solution vector is placed in array $(x_i)$. Also, one should not use this routine if $n \le 4$. (Why?)

procedure Penta(n, (e_i), (a_i), (d_i), (c_i), (f_i), (b_i), (x_i))
    integer i, n;  real r, s, t, xmult
    real array (e_i)_{1:n}, (a_i)_{1:n}, (d_i)_{1:n}, (c_i)_{1:n}, (f_i)_{1:n}, (b_i)_{1:n}, (x_i)_{1:n}
    r ← a_1
    s ← a_2
    t ← e_1
Pentadiagonal Systems (cont.)

    for i = 2 to n−1
        xmult ← r/d_{i-1}
        d_i ← d_i − (xmult) c_{i-1}
        c_i ← c_i − (xmult) f_{i-1}
        b_i ← b_i − (xmult) b_{i-1}
        xmult ← t/d_{i-1}
        r ← s − (xmult) c_{i-1}
        d_{i+1} ← d_{i+1} − (xmult) f_{i-1}
        b_{i+1} ← b_{i+1} − (xmult) b_{i-1}
        s ← a_{i+1}
        t ← e_i
    end for
    xmult ← r/d_{n-1}
    d_n ← d_n − (xmult) c_{n-1}
    x_n ← (b_n − (xmult) b_{n-1})/d_n
    x_{n-1} ← (b_{n-1} − c_{n-1} x_n)/d_{n-1}
    for i = n−2 to 1 step −1
        x_i ← (b_i − f_i x_{i+2} − c_i x_{i+1})/d_i
    end for
end procedure Penta
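A direct transcription of procedure Penta into Python with NumPy (a sketch; the function name and the 0-based indexing are ours). As in the pseudocode, all six input arrays are assumed padded with zeros to length n, and no pivoting is performed:

```python
import numpy as np

def penta(e, a, d, c, f, b):
    """Naive Gaussian elimination for the pentadiagonal system.
    e: second subdiagonal (e[0] = e_1), a: subdiagonal, d: diagonal,
    c: superdiagonal, f: second superdiagonal, b: right-hand side.
    All six arrays must be padded with zeros to length n."""
    e, a, d, c, f, b = (np.array(v, dtype=float) for v in (e, a, d, c, f, b))
    n = len(d)
    r, s, t = a[0], a[1], e[0]
    # forward elimination: clear the two subdiagonal entries in each column
    for i in range(1, n - 1):
        xmult = r / d[i - 1]            # eliminate the subdiagonal entry in row i
        d[i] -= xmult * c[i - 1]
        c[i] -= xmult * f[i - 1]
        b[i] -= xmult * b[i - 1]
        xmult = t / d[i - 1]            # eliminate the second subdiagonal entry in row i+1
        r = s - xmult * c[i - 1]
        d[i + 1] -= xmult * f[i - 1]
        b[i + 1] -= xmult * b[i - 1]
        s = a[i + 1]                    # zero padding makes this safe at the last step
        t = e[i]
    xmult = r / d[n - 2]
    d[n - 1] -= xmult * c[n - 2]
    # back substitution
    x = np.empty(n)
    x[n - 1] = (b[n - 1] - xmult * b[n - 2]) / d[n - 1]
    x[n - 2] = (b[n - 2] - c[n - 2] * x[n - 1]) / d[n - 2]
    for i in range(n - 3, -1, -1):
        x[i] = (b[i] - f[i] * x[i + 2] - c[i] * x[i + 1]) / d[i]
    return x
```

A quick way to check such a routine is to assemble the same pentadiagonal matrix densely, multiply it by a known vector, and confirm the routine recovers that vector.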
Pentadiagonal Systems (cont.)

To be able to solve symmetric pentadiagonal systems with the same code and with a minimum of storage, we have used variables r, s, and t to store some information temporarily rather than overwriting it into arrays. This allows us to solve a symmetric pentadiagonal system with a procedure call of the form

    call Penta(n, (f_i), (c_i), (d_i), (c_i), (f_i), (b_i), (b_i))

This reduces the number of linear arrays from seven to four. Of course, the original data in some of these arrays are corrupted. The computed solution is stored in the $(b_i)$ array. Here, we assume that all linear arrays are padded with zeros to length n in order not to exceed the array dimensions in the pseudocode.
Block Pentadiagonal Systems

Many mathematical problems involve matrices with block structures. In many cases, there are advantages in exploiting the block structure in the numerical solution. This is particularly true in solving partial differential equations numerically. We can consider a pentadiagonal system as a block tridiagonal system.
\[
\begin{bmatrix}
D_1 & C_1 & & & & & & \\
A_1 & D_2 & C_2 & & & & & \\
 & A_2 & D_3 & C_3 & & & & \\
 & & \ddots & \ddots & \ddots & & & \\
 & & A_{i-1} & D_i & C_i & & & \\
 & & & \ddots & \ddots & \ddots & & \\
 & & & & A_{m-2} & D_{m-1} & C_{m-1} \\
 & & & & & A_{m-1} & D_m
\end{bmatrix}
\begin{bmatrix} X_1 \\ X_2 \\ X_3 \\ \vdots \\ X_i \\ \vdots \\ X_{m-1} \\ X_m \end{bmatrix}
=
\begin{bmatrix} B_1 \\ B_2 \\ B_3 \\ \vdots \\ B_i \\ \vdots \\ B_{m-1} \\ B_m \end{bmatrix}
\]
where
\[
D_i = \begin{bmatrix} d_{2i-1} & c_{2i-1} \\ a_{2i-1} & d_{2i} \end{bmatrix}, \qquad
A_i = \begin{bmatrix} e_{2i-1} & a_{2i} \\ 0 & e_{2i} \end{bmatrix}, \qquad
C_i = \begin{bmatrix} f_{2i-1} & 0 \\ c_{2i} & f_{2i} \end{bmatrix}
\]
Here, we assume that n is even, say $n = 2m$, so that there are m block rows. If n is not even, then the system can be padded with an extra equation $x_{n+1} = 1$ so that the number of rows is even.
Block Pentadiagonal Systems

The algorithm for this block tridiagonal system is similar to the one for tridiagonal systems. Hence, we have the forward elimination phase
\[
\begin{cases}
D_i \leftarrow D_i - A_{i-1} D_{i-1}^{-1} C_{i-1} \\
B_i \leftarrow B_i - A_{i-1} D_{i-1}^{-1} B_{i-1}
\end{cases}
\qquad (2 \le i \le m)
\]
The back substitution phase is
\[
\begin{cases}
X_m \leftarrow D_m^{-1} B_m \\
X_i \leftarrow D_i^{-1} \bigl(B_i - C_i X_{i+1}\bigr) & (m-1 \ge i \ge 1)
\end{cases}
\]
Here,
\[
D_i^{-1} = \frac{1}{\delta} \begin{bmatrix} d_{2i} & -c_{2i-1} \\ -a_{2i-1} & d_{2i-1} \end{bmatrix},
\qquad \text{where } \delta = d_{2i-1}\, d_{2i} - a_{2i-1}\, c_{2i-1}
\]
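The block forward elimination and back substitution above can be sketched in a few lines of Python (our own illustrative code, not the book's; it uses `numpy.linalg` for the small block inverses in place of the explicit 2 × 2 formula, and the blocks are given as NumPy arrays):

```python
import numpy as np

def block_tri_solve(A, D, C, B):
    """Solve a block tridiagonal system with diagonal blocks D[0..m-1],
    subdiagonal blocks A[0..m-2], superdiagonal blocks C[0..m-2], and
    right-hand-side blocks B[0..m-1] (each B[i] a vector)."""
    m = len(D)
    D = [np.array(Di, dtype=float) for Di in D]   # local copies, overwritten
    B = [np.array(Bi, dtype=float) for Bi in B]
    # forward elimination: D_i <- D_i - A_{i-1} D_{i-1}^{-1} C_{i-1}, etc.
    for i in range(1, m):
        M = A[i - 1] @ np.linalg.inv(D[i - 1])
        D[i] -= M @ C[i - 1]
        B[i] -= M @ B[i - 1]
    # back substitution: X_m = D_m^{-1} B_m, then sweep upward
    X = [None] * m
    X[m - 1] = np.linalg.solve(D[m - 1], B[m - 1])
    for i in range(m - 2, -1, -1):
        X[i] = np.linalg.solve(D[i], B[i] - C[i] @ X[i + 1])
    return np.concatenate(X)
```

The sketch works for blocks of any (uniform) size, not just the 2 × 2 blocks arising from a pentadiagonal matrix.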
Code for solving a pentadiagonal system using this block procedure is left as an exercise. The results from the block pentadiagonal code are the same as those from procedure Penta, except for roundoff error. Also, this procedure can be used for symmetric pentadiagonal systems (in which the subdiagonals are the same as the superdiagonals).
Later, we discuss two-dimensional elliptic partial differential equations. For example, the Laplace equation is defined on the unit square. A 3 × 3 mesh of points is placed over the unit square region, and the points are ordered in the natural ordering (left to right and up).

Figure: Mesh points in natural order
In the Laplace equation, second partial derivatives are approximated by second-order centered finite difference formulas. This results in a 9 × 9 system of linear equations having a sparse coefficient matrix with this nonzero pattern:

    A = × ×   ×
        × × ×   ×
          × ×     ×
        ×     × ×   ×
          ×   × × ×   ×
            ×   × ×     ×
              ×     × ×
                ×   × × ×
                  ×   × ×
Here, nonzero entries in the matrix are indicated by the symbol ×, and zero entries are blank. This matrix is block tridiagonal, and each nonzero block is either tridiagonal or diagonal. Other orderings of the mesh points result in sparse matrices with different patterns.
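This sparsity pattern can be reproduced by assembling the five-point difference matrix directly (a small sketch; the function name and the unscaled stencil values 4 and −1 are our choices):

```python
import numpy as np

def five_point_laplacian(k):
    """Matrix of the five-point difference operator on a k x k mesh of
    interior points, numbered in natural order (left to right, then up).
    Each row couples a mesh point to its four nearest neighbors."""
    n = k * k
    A = np.zeros((n, n))
    for row in range(k):          # mesh row
        for col in range(k):      # mesh column
            p = row * k + col     # natural-order index
            A[p, p] = 4.0
            if col > 0:
                A[p, p - 1] = -1.0    # west neighbor
            if col < k - 1:
                A[p, p + 1] = -1.0    # east neighbor
            if row > 0:
                A[p, p - k] = -1.0    # south neighbor
            if row < k - 1:
                A[p, p + k] = -1.0    # north neighbor
    return A
```

For k = 3 this produces the 9 × 9 block tridiagonal pattern: the 3 × 3 diagonal blocks are tridiagonal, and the off-diagonal blocks are diagonal.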
Summary 2.3

For banded systems, such as tridiagonal, pentadiagonal, and others, it is usual to develop special algorithms for implementing Gaussian elimination, since partial pivoting is not needed in many applications.

The forward elimination procedure for a tridiagonal linear system $A = \mathrm{tridiagonal}[(a_i), (d_i), (c_i)]$ is
\[
\begin{cases}
d_i \leftarrow d_i - (a_{i-1}/d_{i-1})\,c_{i-1} \\
b_i \leftarrow b_i - (a_{i-1}/d_{i-1})\,b_{i-1}
\end{cases}
\qquad (2 \le i \le n)
\]
The back substitution procedure is
\[
x_i \leftarrow (b_i - c_i x_{i+1})/d_i \qquad (i = n-1, n-2, \ldots, 1)
\]
A strictly diagonally dominant matrix $A = (a_{ij})_{n \times n}$ is one in which the magnitude of the diagonal entry is larger than the sum of the magnitudes of the off-diagonal entries in the same row, and this is true for all rows; namely,
\[
|a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}| \qquad (1 \le i \le n)
\]
For strictly diagonally dominant tridiagonal coefficient matrices, partial pivoting is not necessary because zero divisors are not encountered.

The forward elimination and back substitution procedures for a pentadiagonal linear system $A = \mathrm{pentadiagonal}[(e_i), (a_i), (d_i), (c_i), (f_i)]$ are similar to those for a tridiagonal system.
More informationarxiv: v1 [cs.sc] 17 Apr 2013
EFFICIENT CALCULATION OF DETERMINANTS OF SYMBOLIC MATRICES WITH MANY VARIABLES TANYA KHOVANOVA 1 AND ZIV SCULLY 2 arxiv:1304.4691v1 [cs.sc] 17 Apr 2013 Abstract. Efficient matrix determinant calculations
More informationMidterm 1 Review. Written by Victoria Kala SH 6432u Office Hours: R 12:30 1:30 pm Last updated 10/10/2015
Midterm 1 Review Written by Victoria Kala vtkala@math.ucsb.edu SH 6432u Office Hours: R 12:30 1:30 pm Last updated 10/10/2015 Summary This Midterm Review contains notes on sections 1.1 1.5 and 1.7 in your
More informationLectures on Linear Algebra for IT
Lectures on Linear Algebra for IT by Mgr. Tereza Kovářová, Ph.D. following content of lectures by Ing. Petr Beremlijski, Ph.D. Department of Applied Mathematics, VSB - TU Ostrava Czech Republic 2. Systems
More informationMath 502 Fall 2005 Solutions to Homework 3
Math 502 Fall 2005 Solutions to Homework 3 (1) As shown in class, the relative distance between adjacent binary floating points numbers is 2 1 t, where t is the number of digits in the mantissa. Since
More informationLinear Algebraic Equations
Linear Algebraic Equations 1 Fundamentals Consider the set of linear algebraic equations n a ij x i b i represented by Ax b j with [A b ] [A b] and (1a) r(a) rank of A (1b) Then Axb has a solution iff
More information1 - Systems of Linear Equations
1 - Systems of Linear Equations 1.1 Introduction to Systems of Linear Equations Almost every problem in linear algebra will involve solving a system of equations. ü LINEAR EQUATIONS IN n VARIABLES We are
More informationExample: Current in an Electrical Circuit. Solving Linear Systems:Direct Methods. Linear Systems of Equations. Solving Linear Systems: Direct Methods
Example: Current in an Electrical Circuit Solving Linear Systems:Direct Methods A number of engineering problems or models can be formulated in terms of systems of equations Examples: Electrical Circuit
More informationM 340L CS Homework Set 1
M 340L CS Homework Set 1 Solve each system in Problems 1 6 by using elementary row operations on the equations or on the augmented matri. Follow the systematic elimination procedure described in Lay, Section
More informationChapter 2. Solving Systems of Equations. 2.1 Gaussian elimination
Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix
More information6. Iterative Methods for Linear Systems. The stepwise approach to the solution...
6 Iterative Methods for Linear Systems The stepwise approach to the solution Miriam Mehl: 6 Iterative Methods for Linear Systems The stepwise approach to the solution, January 18, 2013 1 61 Large Sparse
More informationLU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b
AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization
More informationAlgebra & Trig. I. For example, the system. x y 2 z. may be represented by the augmented matrix
Algebra & Trig. I 8.1 Matrix Solutions to Linear Systems A matrix is a rectangular array of elements. o An array is a systematic arrangement of numbers or symbols in rows and columns. Matrices (the plural
More informationFinite Mathematics Chapter 2. where a, b, c, d, h, and k are real numbers and neither a and b nor c and d are both zero.
Finite Mathematics Chapter 2 Section 2.1 Systems of Linear Equations: An Introduction Systems of Equations Recall that a system of two linear equations in two variables may be written in the general form
More informationChapter 2 - Linear Equations
Chapter 2 - Linear Equations 2. Solving Linear Equations One of the most common problems in scientific computing is the solution of linear equations. It is a problem in its own right, but it also occurs
More informationRelationships Between Planes
Relationships Between Planes Definition: consistent (system of equations) A system of equations is consistent if there exists one (or more than one) solution that satisfies the system. System 1: {, System
More informationLecture 1 INF-MAT3350/ : Some Tridiagonal Matrix Problems
Lecture 1 INF-MAT3350/4350 2007: Some Tridiagonal Matrix Problems Tom Lyche University of Oslo Norway Lecture 1 INF-MAT3350/4350 2007: Some Tridiagonal Matrix Problems p.1/33 Plan for the day 1. Notation
More informationKrylov Subspaces. The order-n Krylov subspace of A generated by x is
Lab 1 Krylov Subspaces Lab Objective: matrices. Use Krylov subspaces to find eigenvalues of extremely large One of the biggest difficulties in computational linear algebra is the amount of memory needed
More informationSection 1.1: Systems of Linear Equations
Section 1.1: Systems of Linear Equations Two Linear Equations in Two Unknowns Recall that the equation of a line in 2D can be written in standard form: a 1 x 1 + a 2 x 2 = b. Definition. A 2 2 system of
More informationIntroduction to Determinants
Introduction to Determinants For any square matrix of order 2, we have found a necessary and sufficient condition for invertibility. Indeed, consider the matrix The matrix A is invertible if and only if.
More informationFitting a Natural Spline to Samples of the Form (t, f(t))
Fitting a Natural Spline to Samples of the Form (t, f(t)) David Eberly, Geometric Tools, Redmond WA 9852 https://wwwgeometrictoolscom/ This work is licensed under the Creative Commons Attribution 4 International
More informationMAC Module 2 Systems of Linear Equations and Matrices II. Learning Objectives. Upon completing this module, you should be able to :
MAC 0 Module Systems of Linear Equations and Matrices II Learning Objectives Upon completing this module, you should be able to :. Find the inverse of a square matrix.. Determine whether a matrix is invertible..
More information1 GSW Sets of Systems
1 Often, we have to solve a whole series of sets of simultaneous equations of the form y Ax, all of which have the same matrix A, but each of which has a different known vector y, and a different unknown
More informationLinear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.
POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems
More informationLecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 2. Systems of Linear Equations
Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 2 Systems of Linear Equations Copyright c 2001. Reproduction permitted only for noncommercial,
More informationChapter 3 - From Gaussian Elimination to LU Factorization
Chapter 3 - From Gaussian Elimination to LU Factorization Maggie Myers Robert A. van de Geijn The University of Texas at Austin Practical Linear Algebra Fall 29 http://z.cs.utexas.edu/wiki/pla.wiki/ 1
More informationApplied Numerical Linear Algebra. Lecture 8
Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ
More informationLinear Equations in Linear Algebra
1 Linear Equations in Linear Algebra 1.1 SYSTEMS OF LINEAR EQUATIONS LINEAR EQUATION x 1,, x n A linear equation in the variables equation that can be written in the form a 1 x 1 + a 2 x 2 + + a n x n
More informationMATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018
Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry
More information