Thomas Algorithm for Tridiagonal Matrix
Nathan Perkins
Special Matrices

Some matrices have a particular structure that can be exploited to develop efficient solution schemes. Two such systems are banded and symmetric matrices. A banded matrix is a square matrix whose elements are all zero, with the exception of a band centered on the main diagonal. Banded systems are frequently encountered in the solution of differential equations; other numerical methods, such as cubic splines, also involve the solution of banded systems. The dimensions of a banded system can be quantified by two parameters: the bandwidth BW and the half-bandwidth HBW. These two values are related by BW = 2 HBW + 1. In general, then, a banded system is one for which a(i,j) = 0 if |i - j| > HBW. For a tridiagonal matrix BW = 3 and HBW = 1; for a pentadiagonal matrix BW = 5 and HBW = 2. Efficient elimination methods for these systems are described next.

Heat Transfer Problem

As an example to illustrate the solution mechanism for banded matrices, specifically tridiagonal matrices, we use a simple problem from the heat transfer field. Assume that the ends of a cylindrical metal rod are maintained at different temperatures, 0 °C at the left end and 1 °C at the right end, and we wish to determine the steady-state temperature at the interior nodes. To do this, we divide the rod into 11 small imaginary cylinders; the temperature of each cylinder is represented at its geometric center, called an interior node. (Figure: a rod spanning x = 0 at 0 °C to x = 1 at 1 °C, discretized into 11 nodes.)
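To make the band condition concrete, here is a short Python sketch (an illustration added to these notes, not part of the original): it builds a small tridiagonal matrix and measures its half-bandwidth by scanning for the farthest nonzero off-diagonal entry. The helper name `half_bandwidth` is chosen for illustration.

```python
import numpy as np

def half_bandwidth(A):
    """Smallest HBW such that A[i, j] == 0 whenever |i - j| > HBW."""
    n = A.shape[0]
    hbw = 0
    for i in range(n):
        for j in range(n):
            if A[i, j] != 0:
                hbw = max(hbw, abs(i - j))
    return hbw

# A 5x5 tridiagonal matrix: 2 on the main diagonal, -1 on the off-diagonals
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

hbw = half_bandwidth(A)
bw = 2 * hbw + 1
print(hbw, bw)  # HBW = 1 and BW = 3, as stated above for a tridiagonal matrix
```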
At steady state, the heat transfer equation takes the form:

    d²u/dx² = 0,   0 ≤ x ≤ 1
    BCs:  u = 0 at x = 0;   u = 1 at x = 1

where u is the temperature and x is the one-dimensional spatial coordinate. Expand the second derivative with a central-difference representation of error order O(h²):

    [u(i+1) - 2u(i) + u(i-1)] / Δx² = 0,   i = 2, 3, 4, ..., ix-1

Rearrange and simplify:

    u(i-1) - 2u(i) + u(i+1) = 0,   i = 2, 3, 4, ..., ix-1

Choose the number of points to be solved, ix = 11 (i.e., the number of nodes), and expand the above equation so that the resulting indices for u are i = 1, 2, 3, ..., ix (within the range of your problem discretization):

    u(1) - 2u(2) + u(3) = 0
    u(2) - 2u(3) + u(4) = 0
    u(3) - 2u(4) + u(5) = 0
    u(4) - 2u(5) + u(6) = 0
    u(5) - 2u(6) + u(7) = 0
    u(6) - 2u(7) + u(8) = 0
    u(7) - 2u(8) + u(9) = 0
    u(8) - 2u(9) + u(10) = 0
    u(9) - 2u(10) + u(11) = 0

The range of this discretization is 1 ≤ i ≤ ix for later coding with MATLAB. Name the coefficients of these equations p, q, r and the right-hand-side vector s:

    p(1)u(1) + q(1)u(2) + r(1)u(3) = s(1)
    p(2)u(2) + q(2)u(3) + r(2)u(4) = s(2)
    p(3)u(3) + q(3)u(4) + r(3)u(5) = s(3)
    p(4)u(4) + q(4)u(5) + r(4)u(6) = s(4)
    p(5)u(5) + q(5)u(6) + r(5)u(7) = s(5)
    p(6)u(6) + q(6)u(7) + r(6)u(8) = s(6)
    p(7)u(7) + q(7)u(8) + r(7)u(9) = s(7)
    p(8)u(8) + q(8)u(9) + r(8)u(10) = s(8)
    p(9)u(9) + q(9)u(10) + r(9)u(11) = s(9)
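As a quick sanity check of the discretization (a Python sketch added here for illustration, not in the original notes), the linear profile u(i) = (i-1)/10 on the 11 equally spaced nodes satisfies every one of the nine discretized equations u(i-1) - 2u(i) + u(i+1) = 0:

```python
ix = 11                                          # number of nodes
u = [(i - 1) / 10 for i in range(1, ix + 1)]     # linear profile: u(1)=0, u(11)=1

# residual of each interior equation u(i-1) - 2u(i) + u(i+1) = 0, for i = 2..ix-1
# (Python lists are 0-based, so the 1-based node u(i) is u[i-1])
residuals = [u[i - 2] - 2 * u[i - 1] + u[i] for i in range(2, ix)]
print(residuals)                                 # nine residuals, all essentially zero
```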
The matrix form yields a quasi-tridiagonal matrix. The empty cells within the coefficient matrix are actually filled with zeros. Notice that the above matrix is not square; we need to make it square. The Thomas algorithm consists of two steps, a direct sweep and an inverse sweep. (With Thomas's permission, this method is just Gaussian elimination applied to a tridiagonal matrix, avoiding work on the zero entries.)
DIRECT SWEEP (forward pass of Gaussian elimination)

First step. Eliminate u(1) and u(11) (i.e., the first and last unknowns) from the matrix of coefficients in the first and last equations by passing them to the other side of the equal sign. This elimination is possible because u(1) and u(11) are known temperatures. For the first equation:

    p(1)u(1) + q(1)u(2) + r(1)u(3) = s(1)
    q(1)u(2) + r(1)u(3) = s(1) - p(1)u(1)

and for the last equation:

    p(9)u(9) + q(9)u(10) + r(9)u(11) = s(9)
    p(9)u(9) + q(9)u(10) = s(9) - r(9)u(11)

which squares and simplifies the original matrix to a tridiagonal matrix. A tridiagonal system, that is, one with a bandwidth of 3, can be expressed generally as:

    Row 1:  q(1)  r(1)                      | u(2)       |  s(1) - p(1)u(1)
    Row 2:  p(2)  q(2)  r(2)                | u(3)       |  s(2)
    Row 3:        p(3)  q(3)  r(3)          | u(4)       |  s(3)
    Rows 4-8:  (same tridiagonal pattern)   | u(5)..u(9) |  s(4)..s(8)
    Row 9:                      p(9)  q(9)  | u(10)      |  s(9) - r(9)u(11)

Rename:

    s(1) = s(1) - p(1)u(1)
    s(9) = s(9) - r(9)u(11)

Every time you solve a problem similar to this one, you must develop a square matrix as above. This is the MATRIX representation of your problem.
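The squared-up 9x9 system can also be assembled and solved directly with a general dense solver (an illustrative Python/numpy sketch, not in the original notes) to confirm what the Thomas algorithm will produce more cheaply:

```python
import numpy as np

ix = 11
n = ix - 2                      # 9 unknowns: u(2)..u(10)
p, q, r = 1.0, -2.0, 1.0        # coefficients of the heat-transfer equations
u1, u11 = 0.0, 1.0              # boundary temperatures u(1) and u(11)

# 9x9 tridiagonal coefficient matrix
A = q * np.eye(n) + p * np.eye(n, k=-1) + r * np.eye(n, k=1)

# right-hand side with the boundary values folded in
s = np.zeros(n)
s[0] -= p * u1                  # s(1) = s(1) - p(1)u(1)
s[-1] -= r * u11                # s(9) = s(9) - r(9)u(11)

interior = np.linalg.solve(A, s)
print(interior)                 # expected: 0.1, 0.2, ..., 0.9
```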
This is a clean matrix equation, suitable for solution, representing your problem:

    Row 1:  q(1)  r(1)                      | u(2)       |  s(1)
    Row 2:  p(2)  q(2)  r(2)                | u(3)       |  s(2)
    Row 3:        p(3)  q(3)  r(3)          | u(4)       |  s(3)
    Rows 4-8:  (same tridiagonal pattern)   | u(5)..u(9) |  s(4)..s(8)
    Row 9:                      p(9)  q(9)  | u(10)      |  s(9)

The following procedure is equivalent to the steps of Gauss elimination, using the banded-matrix notation, which avoids working with the zero elements. The first equation:

    q(1)u(2) + r(1)u(3) = s(1)                                  Eq-1

Normalize the first equation, dividing it by q1 (the pivot element). In the following equations the notation is simplified, e.g. q(1) becomes q1, etc.:

    u2 + (r1/q1) u3 = s1/q1

Rename:

    u2 + r̂1 u3 = ŝ1                                            Eq-2
    where r̂1 = r1/q1 and ŝ1 = s1/q1                            Eq-3

Multiply the first equation by p2 and subtract it from the second equation:

    p2 u2 + q2 u3 + r2 u4 = s2                                  Eq-4
    -(p2 u2 + p2 r̂1 u3 = p2 ŝ1)     [Eq-2 multiplied by p2]    Eq-5
    (q2 - p2 r̂1) u3 + r2 u4 = s2 - p2 ŝ1                       Eq-6

Notice that this procedure eliminates u2. Normalize Eq-6, dividing by (q2 - p2 r̂1) (i.e., the coefficient of u3):

    u3 + [r2/(q2 - p2 r̂1)] u4 = (s2 - p2 ŝ1)/(q2 - p2 r̂1)     Eq-7

Rename Eq-7:

    u3 + r̂2 u4 = ŝ2                                            Eq-8
    where r̂2 = r2/(q2 - p2 r̂1) and ŝ2 = (s2 - p2 ŝ1)/(q2 - p2 r̂1)   Eq-9

We repeat the procedure with the 3rd equation and eliminate u3 using the previous equation (Eq-8):
    p3 u3 + q3 u4 + r3 u5 = s3                                  Eq-10
    -(p3 u3 + p3 r̂2 u4 = p3 ŝ2)     [Eq-8 multiplied by p3]    Eq-11
    (q3 - p3 r̂2) u4 + r3 u5 = s3 - p3 ŝ2                       Eq-12

Normalize Eq-12, dividing by (q3 - p3 r̂2):

    u4 + [r3/(q3 - p3 r̂2)] u5 = (s3 - p3 ŝ2)/(q3 - p3 r̂2)     Eq-13

Rename:

    u4 + r̂3 u5 = ŝ3                                            Eq-14
    where r̂3 = r3/(q3 - p3 r̂2) and ŝ3 = (s3 - p3 ŝ2)/(q3 - p3 r̂2)   Eqs-15

By looking at Eqs-9 and Eqs-15 we derive the recurrence formulas:

    r̂i = ri / (qi - pi r̂(i-1))                                 Eq-16
    ŝi = (si - pi ŝ(i-1)) / (qi - pi r̂(i-1))                   Eq-17

For the first equation in the coefficient matrix, where p1 is not used on the left-hand side, it is enough to set p1 = 0 in both recurrence formulas (Eq-16 and Eq-17) to recover the expressions for r̂1 and ŝ1:

    r̂1 = r1/q1     Eq-3a        ŝ1 = s1/q1     Eq-3b

NOTE: This doesn't mean that p(1) = 0 for the right-hand-side vector.

INVERSE SWEEP

Here we compute from the last equation backwards to the first. Start with the last equation:

    p9 u9 + q9 u10 = s9 - r9 u11                                Eq-18

Since ix = 11, the above equation can also be expressed as:

    p(ix-2)u(ix-2) + q(ix-2)u(ix-1) = s(ix-2)                   Eq-19

Write the second-to-last equation by following the pattern of either Eq-2 or Eq-8:

    u(ix-2) + r̂(ix-3)u(ix-1) = ŝ(ix-3)                         Eq-20

Multiply Eq-20 by p(ix-2) and subtract it from Eq-19:
    Last equation:     p(ix-2)u(ix-2) + q(ix-2)u(ix-1) = s(ix-2)
    Second to last:  -(p(ix-2)u(ix-2) + p(ix-2)r̂(ix-3)u(ix-1) = p(ix-2)ŝ(ix-3))
    -----------------------------------------------------------------------
    [q(ix-2) - p(ix-2)r̂(ix-3)] u(ix-1) = s(ix-2) - p(ix-2)ŝ(ix-3)

Normalize:

    u(ix-1) = [s(ix-2) - p(ix-2)ŝ(ix-3)] / [q(ix-2) - p(ix-2)r̂(ix-3)]   Eq-21

Rename the right-hand side of Eq-21:

    ŝ(ix-2) = [s(ix-2) - p(ix-2)ŝ(ix-3)] / [q(ix-2) - p(ix-2)r̂(ix-3)]   Eq-22

Then Eq-21 becomes:

    u(ix-1) = ŝ(ix-2)                                           Eq-23

As expected, the definition of ŝ is the same as in the previous direct-sweep step. For the current example ix = 11, therefore u(ix-1) = u(10), and Eq-21 can be easily computed:

    u(10) = [s(9) - p(9)ŝ(8)] / [q(9) - p(9)r̂(8)]              Eq-24

Observe that all constants in the RHS of Eq-24 are known values. Now take the next equation from the bottom up (the second-to-last equation, Eq-20):

    u(ix-2) + r̂(ix-3)u(ix-1) = ŝ(ix-3)

Solve for u(ix-2):

    u(ix-2) = ŝ(ix-3) - r̂(ix-3)u(ix-1)                         Eq-25

Again, observe that every variable and parameter on the RHS is already known. Generalize:

    u(i) = ŝ(i-1) - r̂(i-1)u(i+1)                               Eq-26

Please be aware that the last computed u(i) will be u(2) = ŝ(1) - r̂(1)u(3). The formulas to apply are then, for the last equation:
    u(ix-1) = ŝ(ix-2)                                           Eq-23

For the next ones, from the bottom up:

    u(i) = ŝ(i-1) - r̂(i-1)u(i+1)                               Eq-26

and so on. For the last equation to be solved we have:

    u(2) = ŝ(1) - r̂(1)u(3)

Summarizing the equations:

DIRECT SWEEP

    r̂i = ri / (qi - pi r̂(i-1))                                 Eq-16
    ŝi = (si - pi ŝ(i-1)) / (qi - pi r̂(i-1))                   Eq-17

In Eq-16 the indices run i = 2, 3, 4, ..., ix-3 (note that r(9) multiplies u(11), which was moved to the other side of the equation, so r̂(9) does not exist). Eq-17 applies for i = 2, 3, ..., ix-2. For the first equation, where i = 1 and p(1) is on the other side of the equation, the next two formulas apply:

    r̂1 = r1/q1     Eq-3a        ŝ1 = s1/q1     Eq-3b

INVERSE SWEEP

The formulas to apply are then, for the last equation:

    u(ix-1) = ŝ(ix-2)                                           Eq-23

and for the next ones up:

    u(i) = ŝ(i-1) - r̂(i-1)u(i+1)                               Eq-26

Therefore, for this step you compute the unknowns in the order u(ix-1) = u(10), u(9), u(8), ..., u(2). The last unknown to be computed, i.e., i = 2:

    u(2) = ŝ(1) - r̂(1)u(3)
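The two sweeps can be coded directly from the summary above. The following Python sketch is an illustration added to these notes (the notes themselves use MATLAB and Excel); the function name `thomas` and the array names `rhat`/`shat` for r̂ and ŝ are chosen here, with the 1-based formulas mapped to 0-based Python indices:

```python
def thomas(p, q, r, s):
    """Thomas algorithm: direct sweep (Eq-16/17), then inverse sweep (Eq-23/26).
    p, q, r hold the sub-, main and superdiagonal coefficients; s is the RHS
    with the boundary terms already folded in. Returns the interior unknowns."""
    n = len(q)
    rhat = [0.0] * n
    shat = [0.0] * n
    # first equation: equivalent to setting p(1) = 0 in the recurrences (Eq-3a, Eq-3b)
    rhat[0] = r[0] / q[0]
    shat[0] = s[0] / q[0]
    # direct sweep, Eq-16 and Eq-17
    for i in range(1, n):
        denom = q[i] - p[i] * rhat[i - 1]
        rhat[i] = r[i] / denom          # rhat of the last row is computed but never used
        shat[i] = (s[i] - p[i] * shat[i - 1]) / denom
    # inverse sweep: u(ix-1) = shat(ix-2), then u(i) = shat(i-1) - rhat(i-1) u(i+1)
    u = [0.0] * n
    u[-1] = shat[-1]
    for i in range(n - 2, -1, -1):
        u[i] = shat[i] - rhat[i] * u[i + 1]
    return u

# heat-transfer example: 9 unknowns u(2)..u(10)
n = 9
p = [1.0] * n
q = [-2.0] * n
r = [1.0] * n
s = [0.0] * (n - 1) + [-1.0]    # s(9) = -1 after folding in r(9)u(11) = 1
u = thomas(p, q, r, s)
print(u)                        # expected: 0.1, 0.2, ..., 0.9
```

Note how the code never touches the zero entries of the matrix: each sweep is a single pass over three short vectors, which is the whole point of the algorithm.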
For the heat transfer problem-example, and for developing MATLAB code:

    u(i-1) - 2u(i) + u(i+1) = 0,   i = 2, 3, 4, ..., ix-1

Ranges of the variables and coefficients:

    u(1), u(2), u(3), ..., u(11)     i.e.  u(1), ..., u(ix)
    p(1), p(2), p(3), ..., p(9)      i.e.  p(1), ..., p(ix-2)
    q(1), q(2), q(3), ..., q(9)      i.e.  q(1), ..., q(ix-2)
    r(1), r(2), r(3), ..., r(9)      i.e.  r(1), ..., r(ix-2)
    s(1), s(2), s(3), ..., s(9)      i.e.  s(1), ..., s(ix-2)

Actual values:

    u(1) = 0.0             BC
    u(ix) = u(11) = 1      BC
    p(i) = 1.0             i = 1, 2, ..., ix-2
    q(i) = -2.0            i = 1, 2, ..., ix-2
    r(i) = 1.0             i = 1, 2, ..., ix-2

The values of s(1) and s(9) were renamed; here we go back to the original definition:

    s(1) = s(1) - p(1)u(1) = 0 - (1)(0) = 0     % note that p(1) is not zero for the right-hand-side vector
    s(9) = s(9) - r(9)u(11) = 0 - (1)(1) = -1   % note that r(9) is not zero for the right-hand-side vector

In other words, for the current problem:

    s(i) = 0 for i = 1, 2, ..., ix-3 (i.e., up to 8); all are zeros except for s(9)
    s(9) = s(ix-2) = -1

After identifying p(i), q(i), r(i), s(i) and u(i), we start computing r̂(i) and ŝ(i).
    r̂(1) = r(1)/q(1)                                    i = 1
    r̂(i) = r(i) / [q(i) - p(i)r̂(i-1)]                  i = 2, 3, ..., ix-3

    ŝ(1) = s(1)/q(1) = 0/(-2) = 0                       i = 1
    ŝ(i) = [s(i) - p(i)ŝ(i-1)] / [q(i) - p(i)r̂(i-1)]   i = 2, 3, ..., ix-2

Now continue with the INVERSE SWEEP to find the u(i)'s. At the end of your computations you should be able to summarize your results in a table with columns

    i   p(i)   q(i)   r(i)   s(i)   u(i)

in which not all the cells are filled with values. As an assignment, please fill in this table for the heat transfer problem; this practice is critical for your success in a test.

EXACT SOLUTION

We have chosen a problem that has an exact (and simple) solution, just to show the solution mechanism for the numerical method. The analytical (exact) solution follows from:

    d²u/dx² = 0,   0 ≤ x ≤ 1
    d/dx (du/dx) = 0   ==>   du/dx = K1   ==>   u = K1 x + K2

Apply the boundary conditions to find the values of K1 and K2:
    u = 0 at x = 0   ==>   K2 = 0
    u = 1 at x = 1   ==>   K1 = 1

Solution:  u = x

In an actual exam, the evaluation of the Thomas algorithm may involve:

(1) Write down the system of equations in standardized form
(2) Write down the system in matrix form (square matrix)
(3) Identify the p(i), q(i), r(i), and s(i)
(4) Compute the r̂(i) and ŝ(i) parameters
(5) Compute the u(i)'s
(6) Fill a table with columns i, p(i), q(i), r(i), s(i), u(i) for i = 1, ..., ix-2
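A small numerical cross-check (a Python sketch added for illustration, not in the original notes): the exact solution u = x satisfies both boundary conditions, and its central-difference second derivative vanishes (up to rounding) at every interior node, which is why the discrete and exact solutions coincide here.

```python
u = lambda x: x            # exact solution derived above (K1 = 1, K2 = 0)
h = 0.1                    # node spacing for 11 nodes on [0, 1]

# boundary conditions
print(u(0.0), u(1.0))      # 0.0 and 1.0

# central-difference second derivative at the nine interior nodes
second_derivs = [(u(x - h) - 2 * u(x) + u(x + h)) / h**2
                 for x in [i * h for i in range(1, 10)]]
print(max(abs(d) for d in second_derivs))   # essentially zero
```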
Excel computation (file: Tridiagonal Algorithm)

TRIDIAGONAL ALGORITHM — DIRECT SWEEP: you start computing at the top and go down, from r̂(1), ŝ(1) up to r̂(9), ŝ(9). (Spreadsheet columns: i, p, q, r, s, r̂, ŝ, u; the boundary conditions are known values.) The values of s(1) and s(9) could be tricky, as they were both renamed to find the recurrence formulas:

    s(1) = s(1) - p(1)u(1) = 0 - (1)(0) = 0
    s(9) = s(9) - r(9)u(11) = 0 - (1)(1) = -1

The original s(1) and s(9) in the RHS are both zero, but not the renamed s(1) and s(9).

INVERSE SWEEP: you start computing at the bottom and go up, from u(10) up to u(2).

MATLAB, to the RESCUE

Following are several ways of creating tridiagonal matrices. You need to create a tridiagonal matrix to use standard methods of solving a matrix system with MATLAB, such as x = A\b.

% Tridiagonal Maker
% file: tridiamaker.m
% TM [tridiagonal matrix]
clc, clear, close
% Option 1:
n = 5;
p = -1;
q = 2;
r = -1;
TM = full(gallery('tridiag', n, p, q, r))
fprintf('\n\n');

% Option 2
n = 5;
nones = ones(n, 1);
TM = diag(2 * nones, 0) - diag(nones(1:n-1), -1) - diag(nones(1:n-1), 1)
fprintf('\n\n');

% Option 3
n = 5;
TM = toeplitz([2 -1 zeros(1, n-2)], [2 -1 zeros(1, n-2)])

OUTPUT: each option prints the same 5x5 tridiagonal matrix TM, with 2 on the main diagonal and -1 on the sub- and superdiagonals.

Tridiagonal solver routine for MATLAB

If you want to be loyal to the Thomas algorithm, here you have a shared code:

function u = tridiag(a, b, c, r, N)
% Function tridiag:
% Inverts a tridiagonal system whose lower, main and upper diagonals
% are respectively given by the vectors a, b and c. r is the right-hand
% side, and N is the size of the system. The result is placed in u.
% (Adapted from Numerical Recipes, Press et al. 1992)
if (b(1) == 0)
    fprintf(1, 'Reorder the equations for the tridiagonal solver...')
    pause
end
beta = b(1);
u(1) = r(1)/beta;
% Start the decomposition and forward substitution
for j = 2:N
    gamma(j) = c(j-1)/beta;
    beta = b(j) - a(j)*gamma(j);
    if (beta == 0)
        fprintf(1, 'The tridiagonal solver failed...')
        pause
    end
    u(j) = (r(j) - a(j)*u(j-1))/beta;
end
% Perform the back substitution
for j = 1:(N-1)
    k = N - j;
    u(k) = u(k) - gamma(k+1)*u(k+1);
end
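For readers working outside MATLAB, the "tridiagonal maker" options above have straightforward counterparts; here is an illustrative Python/numpy sketch (not part of the original notes) mirroring Option 2 (sums of shifted diagonals) and Option 3 (a constant-along-diagonals, i.e. Toeplitz, construction):

```python
import numpy as np

n = 5

# Option 2 equivalent: combine shifted diagonal matrices
nones = np.ones(n)
TM2 = np.diag(2 * nones) - np.diag(nones[:n - 1], -1) - np.diag(nones[:n - 1], 1)

# Option 3 equivalent: Toeplitz structure, entry (i, j) depends only on |i - j|;
# first column/row is [2, -1, 0, ..., 0]
col = np.concatenate(([2.0, -1.0], np.zeros(n - 2)))
TM3 = np.array([[col[abs(i - j)] for j in range(n)] for i in range(n)])

print(np.array_equal(TM2, TM3))   # both constructions give the same matrix
```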
Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding
More informationCS 323: Numerical Analysis and Computing
CS 323: Numerical Analysis and Computing MIDTERM #1 Instructions: This is an open notes exam, i.e., you are allowed to consult any textbook, your class notes, homeworks, or any of the handouts from us.
More informationMAC1105-College Algebra. Chapter 5-Systems of Equations & Matrices
MAC05-College Algebra Chapter 5-Systems of Equations & Matrices 5. Systems of Equations in Two Variables Solving Systems of Two Linear Equations/ Two-Variable Linear Equations A system of equations is
More informationChapter 2 Notes, Linear Algebra 5e Lay
Contents.1 Operations with Matrices..................................1.1 Addition and Subtraction.............................1. Multiplication by a scalar............................ 3.1.3 Multiplication
More informationElementary maths for GMT
Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1
More informationGAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)
GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.
More informationThe following steps will help you to record your work and save and submit it successfully.
MATH 22AL Lab # 4 1 Objectives In this LAB you will explore the following topics using MATLAB. Properties of invertible matrices. Inverse of a Matrix Explore LU Factorization 2 Recording and submitting
More informationMath 320, spring 2011 before the first midterm
Math 320, spring 2011 before the first midterm Typical Exam Problems 1 Consider the linear system of equations 2x 1 + 3x 2 2x 3 + x 4 = y 1 x 1 + 3x 2 2x 3 + 2x 4 = y 2 x 1 + 2x 3 x 4 = y 3 where x 1,,
More informationEcon 172A, Fall 2012: Final Examination Solutions (II) 1. The entries in the table below describe the costs associated with an assignment
Econ 172A, Fall 2012: Final Examination Solutions (II) 1. The entries in the table below describe the costs associated with an assignment problem. There are four people (1, 2, 3, 4) and four jobs (A, B,
More informationSUMMARY OF MATH 1600
SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You
More information2 Systems of Linear Equations
2 Systems of Linear Equations A system of equations of the form or is called a system of linear equations. x + 2y = 7 2x y = 4 5p 6q + r = 4 2p + 3q 5r = 7 6p q + 4r = 2 Definition. An equation involving
More information0 0 0 # In other words it must saisfy the condition that the successive rst non-zero entries in each row are indented.
Mathematics 07 October 11, 199 Gauss elimination and row reduction I recall that a matrix is said to be in row-echelon form if it looks like this: # 0 # 0 0 0 # where the # are all non-zero and the are
More informationNumerical Linear Algebra
Numerical Linear Algebra Direct Methods Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) Linear Systems: Direct Solution Methods Fall 2017 1 / 14 Introduction The solution of linear systems is one
More informationConsider the following example of a linear system:
LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x + 2x 2 + 3x 3 = 5 x + x 3 = 3 3x + x 2 + 3x 3 = 3 x =, x 2 = 0, x 3 = 2 In general we want to solve n equations
More informationMath 123, Week 2: Matrix Operations, Inverses
Math 23, Week 2: Matrix Operations, Inverses Section : Matrices We have introduced ourselves to the grid-like coefficient matrix when performing Gaussian elimination We now formally define general matrices
More informationCS227-Scientific Computing. Lecture 4: A Crash Course in Linear Algebra
CS227-Scientific Computing Lecture 4: A Crash Course in Linear Algebra Linear Transformation of Variables A common phenomenon: Two sets of quantities linearly related: y = 3x + x 2 4x 3 y 2 = 2.7x 2 x
More informationMODEL ANSWERS TO THE THIRD HOMEWORK
MODEL ANSWERS TO THE THIRD HOMEWORK 1 (i) We apply Gaussian elimination to A First note that the second row is a multiple of the first row So we need to swap the second and third rows 1 3 2 1 2 6 5 7 3
More informationIntroduction to Matrices
POLS 704 Introduction to Matrices Introduction to Matrices. The Cast of Characters A matrix is a rectangular array (i.e., a table) of numbers. For example, 2 3 X 4 5 6 (4 3) 7 8 9 0 0 0 Thismatrix,with4rowsand3columns,isoforder
More informationReview Questions REVIEW QUESTIONS 71
REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of
More information7.5 Operations with Matrices. Copyright Cengage Learning. All rights reserved.
7.5 Operations with Matrices Copyright Cengage Learning. All rights reserved. What You Should Learn Decide whether two matrices are equal. Add and subtract matrices and multiply matrices by scalars. Multiply
More informationCHAPTER 7: Systems and Inequalities
(Exercises for Chapter 7: Systems and Inequalities) E.7.1 CHAPTER 7: Systems and Inequalities (A) means refer to Part A, (B) means refer to Part B, etc. (Calculator) means use a calculator. Otherwise,
More informationSimple sparse matrices we have seen so far include diagonal matrices and tridiagonal matrices, but these are not the only ones.
A matrix is sparse if most of its entries are zero. Simple sparse matrices we have seen so far include diagonal matrices and tridiagonal matrices, but these are not the only ones. In fact sparse matrices
More informationCHAPTER 8: Matrices and Determinants
(Exercises for Chapter 8: Matrices and Determinants) E.8.1 CHAPTER 8: Matrices and Determinants (A) means refer to Part A, (B) means refer to Part B, etc. Most of these exercises can be done without a
More informationMATH 2030: MATRICES. Example 0.2. Q:Define A 1 =, A. 3 4 A: We wish to find c 1, c 2, and c 3 such that. c 1 + c c
MATH 2030: MATRICES Matrix Algebra As with vectors, we may use the algebra of matrices to simplify calculations. However, matrices have operations that vectors do not possess, and so it will be of interest
More informationIntroduction to PDEs and Numerical Methods Tutorial 5. Finite difference methods equilibrium equation and iterative solvers
Platzhalter für Bild, Bild auf Titelfolie hinter das Logo einsetzen Introduction to PDEs and Numerical Methods Tutorial 5. Finite difference methods equilibrium equation and iterative solvers Dr. Noemi
More informationThe matrix will only be consistent if the last entry of row three is 0, meaning 2b 3 + b 2 b 1 = 0.
) Find all solutions of the linear system. Express the answer in vector form. x + 2x + x + x 5 = 2 2x 2 + 2x + 2x + x 5 = 8 x + 2x + x + 9x 5 = 2 2 Solution: Reduce the augmented matrix [ 2 2 2 8 ] to
More informationGauss Elimination. Hsiao-Lung Chan Dept Electrical Engineering Chang Gung University, Taiwan
Gauss Elimination Hsiao-Lung Chan Dept Electrical Engineering Chang Gung University, Taiwan chanhl@mail.cgu.edu.tw Solving small numbers of equations by graphical method The location of the intercept provides
More information