Computational Fluid Dynamics
Prof. Sreenivas Jayanti
Department of Chemical Engineering
Indian Institute of Technology, Madras

Lecture 46
Tri-diagonal Matrix Algorithm: Derivation

In the last class, we looked at two methods for the solution of sets of simultaneous linear algebraic equations of the type Ax = b. One of these was the Gaussian elimination method and the other was the LU decomposition method. In both cases, we reduce the set of equations described by Ax = b to a specific form: A is converted into an upper triangular matrix. In the process, we rearrange and reformulate the equations in such a way that solution by successive substitution is possible.

(Refer Slide Time: 01:06)

In the Gaussian elimination method, we convert Ax = b into a special form written as an upper triangular matrix U times x equal to a modified set of coefficients, Ux = b'. This upper triangular form is such that, while the last equation of the original system is a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + ... + a_(n,n-1) x_(n-1) + a_nn x_n = b_n, in the transformed system the last equation becomes a_nn' x_n = b_n'. The equation just above it is written as a_(n-1,n-1)' x_(n-1) + a_(n-1,n)' x_n = b_(n-1)'. So, in this transformed, rearranged set of equations, the last equation is simplified from one which involves all the variables x_1, x_2, x_3, up to x_n into an equation involving only x_n as a variable. From it we can directly get x_n = b_n' / a_nn'. Once we have x_n, we substitute this value into the equation above, which then contains only x_(n-1) as the unknown, and we get x_(n-1). Similarly, the third equation from the bottom takes the form x_(n-2) plus a doubly primed coefficient times x_(n-1) plus a triply primed coefficient times x_n equal to b_(n-2)'; the right-hand side and the coefficients have changed, but by this time we already know x_n and x_(n-1), so we substitute them and are left with a simple equation from which we get x_(n-2).

So, by successive back substitution, starting from the bottom-most equation, the last equation, and working all the way up, we can evaluate x_n, x_(n-1), x_(n-2), all the way to x_1, through substitution alone, which is very fast. It does not take a large number of multiplications and divisions: the last equation needs one division; the one above it needs one multiplication, one subtraction and one division; and so on. As we go further up, we need a few more mathematical operations each time, but only a few more. So, the back substitution process is an easy way of solving the equations successively; it is going from a general matrix to the upper triangular matrix that requires a lot of work.
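To make the back substitution step concrete, here is a minimal Python/NumPy sketch of solving Ux = b' for an upper triangular U; the function name and array layout are illustrative choices, not from the lecture:

```python
import numpy as np

def back_substitution(U, b):
    """Solve U x = b for an upper triangular matrix U by back substitution."""
    n = len(b)
    x = np.zeros(n)
    # The last equation involves only x[n-1]: U[n-1, n-1] * x[n-1] = b[n-1]
    x[n - 1] = b[n - 1] / U[n - 1, n - 1]
    # Work upwards; each equation has only one unknown left once the later x's are known
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```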

And we said that the elimination step takes about n^3/3 mathematical operations, whereas the back substitution takes about n^2 operations, so together the solution takes roughly n^3/3 + n^2 arithmetic operations, which tends towards n^3/3 multiplications and divisions for the solution using the Gaussian elimination method. This is much, much better than the (n+1)! operations required for Cramer's rule. The other method that we looked at, the LU decomposition method, is similar to this in principle. In the case of Gaussian elimination, as we transform Ax = b into Ux = b', the right-hand side also gets transformed. In the case of LU decomposition, we only convert A into the product of a lower and an upper triangular matrix, so that the equation Ax = b is written as LUx = b, and we have certain algorithms to find the elements L_ij and U_ij of these lower and upper triangular matrices. When we have an equation like this, we can solve it in two steps. We put Ux = y, so that the system is written as Ly = b. In this, L is known and b is known, so we can get y; since L is a lower triangular matrix, we get y by starting with the first equation, then the second equation, then the third equation, and so on, that is, by forward substitution. Once we have y, we solve Ux = y, in which y is known and U is known, and from this we get x by backward substitution.
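A minimal sketch of this two-step solve, assuming the factors L and U have already been computed by some LU decomposition routine; the function name and argument order are my own illustrative choices:

```python
import numpy as np

def lu_solve(L, U, b):
    """Solve A x = b given the factorization A = L U, using two triangular solves."""
    n = len(b)
    # Step 1: forward substitution for L y = b, working from the first row down
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    # Step 2: backward substitution for U x = y, working from the last row up
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```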

(Refer Slide Time: 08:27)

So, let us just write it down neatly here. In the first method, the Gaussian elimination method, we have Ax = b converted to Ux = b' and solved by back substitution. In the second method, Ax = b is written as LUx = b, with the same b, and this is solved in two steps: Ly = b gives y by forward substitution, and this is followed by the solution of Ux = y, which gives x by backward substitution. This is the essence of the LU decomposition method, and the former is what we are doing in the Gaussian elimination method. Both of them are good methods for the general case of Ax = b.

But today we are going to look at a specialized method which builds on the same principle: if it is possible to convert Ax = b into a special form, an upper triangular form or an LUx = b form like this, then we can get an easy solution through successive substitution. The same principle can be applied to the simpler case of a tridiagonal matrix, where we have Tx = s with T a tridiagonal matrix. Let us see, for example, a simple case: d/dx (K dtheta/dx) = Q. (Let me not confuse the temperature with the matrix T, so let me denote the temperature by theta.) This is the one-dimensional steady-state heat conduction equation, where K is the thermal conductivity, theta is the temperature and Q is some heat source or heat sink in the appropriate form, for example volumetric or per unit area.

So, we have this equation for a one-dimensional problem. We have x going along the domain, with a grid point i and its neighbours i-1, i+1, i+2, i-2, and so on. The boundary conditions at x = 0 and x = L are specified. We can discretize this by introducing the points i-1/2 and i+1/2, that is, midway between two adjacent grid points. Then we can write the equation as [K dtheta/dx at i+1/2 minus K dtheta/dx at i-1/2] divided by delta x equal to Q_i. Writing the derivative at i+1/2 as (theta_(i+1) - theta_i)/delta x and the one at i-1/2 as (theta_i - theta_(i-1))/delta x, and taking delta x over to the right-hand side, we have K_(i+1/2) (theta_(i+1) - theta_i)/delta x - K_(i-1/2) (theta_i - theta_(i-1))/delta x = Q_i delta x. So, here we see that the equation involves theta_i, theta_(i+1) and theta_(i-1). We can regroup it as (K_(i+1/2)/delta x) theta_(i+1) - (K_(i+1/2)/delta x + K_(i-1/2)/delta x) theta_i + (K_(i-1/2)/delta x) theta_(i-1) = Q_i delta x. So, we have theta_i in terms of theta_(i+1) and theta_(i-1). If we write this for every grid point and put the equations in lexicographic ordering, we get a coefficient matrix in which the coefficients of the theta_i values lie along the central diagonal, those of theta_(i-1) lie just below it and those of theta_(i+1) lie just above it, and all the other entries are zero. So, the matrix resulting from the discretization of the one-dimensional heat conduction equation is a typical example where the coefficient matrix T is tridiagonal; this is the tridiagonal form which arises from a one-dimensional Poisson or Laplace equation like this.

What we would like to do is to solve this system taking advantage of the same type of conversion, that is, conversion into an upper triangular matrix or an LU kind of factorization. It is possible to develop a very fast scheme by taking advantage of the fact that only three diagonals are non-zero and all the other entries are zero, and of the fact that in a typical diffusion problem the diagonal dominance of this tridiagonal matrix is satisfied.
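To make this tridiagonal structure concrete, here is a small sketch that assembles the coefficients and right-hand side for the interior nodes of the constant-K version of the heat conduction equation above, with the known boundary temperatures folded into the right-hand side; the function name, argument list and boundary treatment are my own illustrative choices, not from the lecture:

```python
import numpy as np

def assemble_1d_heat(n, dx, K, Q, theta_left, theta_right):
    """Build the tridiagonal system for n interior nodes of
    d/dx(K dtheta/dx) = Q with constant K and uniform spacing dx.
    Q is an array of length n with the source value at each interior node."""
    c = np.full(n, K / dx)           # coefficient of theta_(i-1), the sub-diagonal
    a = np.full(n, -2.0 * K / dx)    # coefficient of theta_i, the main diagonal
    b = np.full(n, K / dx)           # coefficient of theta_(i+1), the super-diagonal
    s = np.asarray(Q, dtype=float) * dx   # right-hand side Q_i * dx
    # Known boundary temperatures are moved to the right-hand side
    s[0] -= (K / dx) * theta_left
    s[-1] -= (K / dx) * theta_right
    c[0] = 0.0    # first equation has no theta_(i-1) neighbour
    b[-1] = 0.0   # last equation has no theta_(i+1) neighbour
    return a, b, c, s
```

Each interior node then contributes one equation linking theta_(i-1), theta_i and theta_(i+1), which is exactly the three-diagonal pattern described above.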

(Refer Slide Time: 16:11)

So, under these kinds of conditions, what we would like to do is to convert Tx = s into an upper triangular system of the form Ux = s', in such a way that this upper triangular matrix has only two non-zero diagonals. What we are looking at is a T such that T_(i,i) = a_i, T_(i,i+1) = b_i and T_(i,i-1) = c_i, where a, b, c are the coefficients. So, for this system of n equations (the first equation, the second equation, the third equation, all the way up to the nth), the ith equation can be written as c_i x_(i-1) + a_i x_i + b_i x_(i+1) = s_i. That is, the ith equation has a coefficient c_i for the (i-1)th variable, a coefficient a_i for the ith variable and a coefficient b_i for the (i+1)th variable, equal to s_i. This formulation is such that each of these coefficients can change from equation to equation. You can also have special cases where all the c's are constant and all the a's are constant; for example, in the simplest case where K, the thermal conductivity, is constant, we typically have c = 1, a = -2 and b = 1. So, each of these is a linear algebraic equation involving three adjacent neighbours, and when you write it for all values of i you get this tridiagonal form.

This is the system that we wish to solve for the n unknowns x_1, x_2, x_3, all the way up to x_n, and we would like to convert it into Ux = s', where U_(i,i) is taken to be 1, U_(i,i+1) is denoted d_i, and s'_i is denoted y_i. So, the form of T is this: the central diagonal has coefficients a_i, the (i, i+1) diagonal just above it has coefficients b_i, and the (i, i-1) diagonal just below it has coefficients c_i. These coefficient values may change from equation to equation, as we mentioned. This is being converted into a special form of upper triangular matrix in which all the diagonal entries are 1 and there is only one more non-zero diagonal, with entries d_i, while the right-hand side entries are y_i; all the other entries are zero. If we can make this conversion, then the system can be solved by the back substitution process. The last equation directly gives x_n, and the one above it involves x_n and x_(n-1); we know x_n, so we can get x_(n-1). We go to the next equation, which involves only x_(n-1) and x_(n-2), and solve for x_(n-2); the one above that involves x_(n-2) and x_(n-3), and since we know x_(n-2) we can solve for x_(n-3). We can just climb the ladder, as it were, solving one equation after another all the way up to the first one. So, this is the idea of the tridiagonal matrix algorithm, in which the original tridiagonal matrix T is converted into an upper triangular matrix containing only two non-zero diagonals.

So, how can we do this conversion? In the transformed system, the ith equation is written as x_i + d_i x_(i+1) = y_i; because U_(i,i) = 1, the ith equation contains only x_i and x_(i+1). This is the ith equation in Ux = s', while c_i x_(i-1) + a_i x_i + b_i x_(i+1) = s_i is the ith equation in Tx = s. Whereas the latter has three coefficients c_i, a_i, b_i, in the former the coefficient of x_i is 1 by choice, and d_i and y_i are yet to be determined. If this is the ith equation, the (i-1)th equation is given by x_(i-1) + d_(i-1) x_i = y_(i-1).

So, now what we would like to do is to convert the ith equation, in which we already know c_i, a_i, b_i and s_i, because that is what we have derived from the discretization (all the coefficients of T are known), into this transformed form and determine d_i and y_i for each of these equations; once we do this, the back substitution becomes possible. We make use of the (i-1)th transformed equation: from x_(i-1) + d_(i-1) x_i = y_(i-1), we can say that x_(i-1) = y_(i-1) - d_(i-1) x_i. We substitute this into the ith original equation and thereby get rid of x_(i-1), so that we have an equation involving only x_i and x_(i+1), which is similar in form to the transformed ith equation. We then compare the coefficients, and from that we will be able to form expressions for d_i and y_i.

So, substituting, we can write the ith equation as c_i (y_(i-1) - d_(i-1) x_i) + a_i x_i + b_i x_(i+1) = s_i. Collecting terms, we have (a_i - c_i d_(i-1)) x_i + b_i x_(i+1) = s_i - c_i y_(i-1). Dividing the whole equation by (a_i - c_i d_(i-1)), we can write this as x_i + [b_i / (a_i - c_i d_(i-1))] x_(i+1) = (s_i - c_i y_(i-1)) / (a_i - c_i d_(i-1)). Now, I would like you to compare this with the transformed ith equation x_i + d_i x_(i+1) = y_i. In both, the coefficient of x_i is 1; so the coefficient of x_(i+1) here must equal d_i, and the right-hand side must equal y_i. So, let us write down the formulas: d_i = b_i / (a_i - c_i d_(i-1)) and y_i = (s_i - c_i y_(i-1)) / (a_i - c_i d_(i-1)).

This means that we already know a_i, b_i, c_i and s_i everywhere. We do not yet know d_(i-1) and y_(i-1), but for the first equation, which has no x_(i-1) term, we can say that d_1 = b_1 / a_1 and y_1 = s_1 / a_1. Starting with these expressions, from the next point onwards, that is, from i = 2 onwards, we can compute d_2 = b_2 / (a_2 - c_2 d_1), where b_2, a_2 and c_2 are known and d_1 is already known, so we can evaluate d_2. Similarly, y_2 = (s_2 - c_2 y_1) / (a_2 - c_2 d_1), where s_2, a_2 and c_2 are known and y_1 and d_1 are known, so we can evaluate y_2. Having evaluated d_2 and y_2, we go back to the recursive formulas and evaluate d_3, y_3, then d_4, y_4, all the way up to d_n, y_n. So, using these recursive formulas, starting with the known values of d_1 and y_1, we will be able to determine all the coefficients d_i and y_i. Then it is just a question of back substitution.

So, we will write down the whole algorithm for the solution of Tx = s using this tridiagonal matrix algorithm, in which we write Tx = s in the form of an upper bidiagonal matrix involving only two diagonals, and in which we put U_(i,i) = 1, that is, all the diagonal elements are taken to be 1. Why do we want to put these to be 1? They could be any values, but something has to be fixed in order to get a unique set of values for d_i and y_i. So, in this algorithm we put all of them to be 1, and then we have exactly as many equations as are needed to determine the d_i and y_i, and these can be put in simple recursive forms. So, from the known values of a_i, b_i, c_i and s_i, we start with the initial values, find all the d_i and y_i, and through the back substitution process we get the final solution.

(Refer Slide Time: 30:51)

So, let me just rub this off and continue.

This tridiagonal matrix algorithm is known as TDMA; it is also known as the Thomas algorithm. It gives the solution of Tx = s, where T_(i,i) = a_i, T_(i,i+1) = b_i, T_(i,i-1) = c_i and the right-hand side is s_i. It starts with the conversion of the ith equation, c_i x_(i-1) + a_i x_i + b_i x_(i+1) = s_i, which is rewritten as x_i + d_i x_(i+1) = y_i, where d_1 = b_1 / a_1 and y_1 = s_1 / a_1, and where d_i = b_i / (a_i - c_i d_(i-1)) and y_i = (s_i - c_i y_(i-1)) / (a_i - c_i d_(i-1)). So, we start with d_1 and y_1 and then evaluate all of these from i = 2 to n. Once we have them, we can back substitute: the last transformed equation gives x_n = y_n directly, and then we can write x_(n-1) = y_(n-1) - d_(n-1) x_n, and so on. We go through this until we go from x_n all the way back to x_1.

So, this is the Thomas algorithm, in which we convert Tx = s into Ux = s', where U is an upper bidiagonal matrix, evaluate its elements, and then do back substitution to get the solution. The overall number of mathematical operations required for the solution, for large n, is about 12n, where n is the number of equations. This is very small compared with the n^3/3 required for the Gaussian elimination and LU methods and the (n+1)! required for Cramer's rule. The disadvantage, however, is that this can be applied only to a tridiagonal matrix, which typically arises only in one-dimensional problems. We will see some other methods in the next class, and we will come back to this method when we discuss advanced methods which try to make use of the efficiency of this Thomas algorithm in solving one particular form of Ax = b.

Thank you.
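Putting the forward sweep for d_i, y_i and the back substitution together, here is a minimal sketch of the Thomas algorithm as described in this lecture; the function name and argument order are my own choices, and no pivoting is done, so it relies on the diagonal dominance discussed earlier:

```python
import numpy as np

def tdma(a, b, c, s):
    """Thomas algorithm for c[i] x[i-1] + a[i] x[i] + b[i] x[i+1] = s[i].
    a: main diagonal, b: super-diagonal (its last entry does not affect the result),
    c: sub-diagonal (its first entry is unused), s: right-hand side."""
    n = len(a)
    d = np.zeros(n)
    y = np.zeros(n)
    # Forward sweep: rewrite each equation as x[i] + d[i] x[i+1] = y[i]
    d[0] = b[0] / a[0]
    y[0] = s[0] / a[0]
    for i in range(1, n):
        denom = a[i] - c[i] * d[i - 1]
        d[i] = b[i] / denom
        y[i] = (s[i] - c[i] * y[i - 1]) / denom
    # Back substitution: the last transformed equation gives x[n-1] directly
    x = np.zeros(n)
    x[n - 1] = y[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = y[i] - d[i] * x[i + 1]
    return x
```

For instance, the arrays assembled in the earlier heat-conduction sketch could be passed as x = tdma(a, b, c, s) to obtain the interior temperatures; since only one forward and one backward sweep over the n unknowns are made, the cost grows linearly with n, consistent with the order-n operation count quoted above.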