Chapter 7. Tridiagonal linear systems

The solution of linear systems of equations is one of the most important areas of computational mathematics. A complete treatment is impossible here, but we will discuss some of the most common problems.

Solving tridiagonal systems of equations

Recall that a system of linear equations can be written

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = f_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = f_2
    ...
    a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = f_n

where the a_ij and f_i are known and the x_i are unknowns. In matrix-vector form this is Ax = f, where A has entries a_ij and x and f are vectors with components x_i and f_i respectively.

A common special case is A tridiagonal, i.e. there are only three diagonals in A that contain nonzero elements: the main diagonal, the superdiagonal, and the subdiagonal. E.g.

        [ a_11  a_12    0    ...       0      ]
        [ a_21  a_22   a_23  ...       0      ]
    A = [   0   a_32   a_33  a_34             ]
        [              ...          a_n-1,n   ]
        [   0   ...      0  a_n,n-1   a_nn    ]

This makes the solution of the system, under certain assumptions, quite easy. At this stage we introduce some notation to simplify things: use l_i, d_i, u_i to denote the subdiagonal, diagonal, and superdiagonal elements,

    l_i = a_i,i-1,   2 <= i <= n
    d_i = a_ii,      1 <= i <= n
    u_i = a_i,i+1,   1 <= i <= n-1

where we adopt the convention that l_1 = 0 and u_n = 0. We can therefore store the entire problem using just four vectors l, d, u, f instead of an n x n matrix that is mostly zeroes anyway. The augmented matrix corresponding to the system is then

    [ d_1  u_1   0   ...        0   | f_1 ]
    [ l_2  d_2  u_2  ...        0   | f_2 ]
    [           ...                 | ... ]
    [  0   ...       l_n       d_n  | f_n ]
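The four-vector storage scheme can be sketched in a few lines of Python; the helper name pack_tridiagonal and the small 3 x 3 system are invented for illustration, not part of the notes:

```python
def pack_tridiagonal(A, f):
    """Extract the subdiagonal l, diagonal d, and superdiagonal u of a
    dense n x n tridiagonal matrix A (list of lists, 0-based), using
    the text's conventions l_1 = 0 and u_n = 0."""
    n = len(A)
    l = [0.0] + [A[i][i - 1] for i in range(1, n)]   # l_1 = 0 by convention
    d = [A[i][i] for i in range(n)]                  # main diagonal
    u = [A[i][i + 1] for i in range(n - 1)] + [0.0]  # u_n = 0 by convention
    return l, d, u, list(f)

A = [[6.0, 1.0, 0.0],
     [2.0, 6.0, 3.0],
     [0.0, 1.0, 4.0]]
f = [7.0, 11.0, 5.0]
l, d, u, f = pack_tridiagonal(A, f)
print(l)  # [0.0, 2.0, 1.0]
print(d)  # [6.0, 6.0, 4.0]
print(u)  # [1.0, 3.0, 0.0]
```

For large n this replaces roughly n^2 stored entries with about 4n, which is the whole point of the special-case treatment.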

Recall from linear algebra that the standard means to solve a linear system (Gaussian elimination) is to eliminate all the components of A below the main diagonal, i.e. to reduce A to triangular form. For a tridiagonal matrix this is an easy task, as we only need to eliminate a single element below the main diagonal in each column. Thus we multiply the first equation by l_2/d_1 and subtract it from the second equation, and then continue in the same way with each successive row. Writing delta_i and g_i for the modified diagonal and right-hand-side entries, we get

    delta_1 = d_1,  delta_2 = d_2 - u_1 (l_2/delta_1),  delta_3 = d_3 - u_2 (l_3/delta_2),

and in general

    delta_k = d_k - u_{k-1} (l_k/delta_{k-1}),  2 <= k <= n.

Similarly,

    g_1 = f_1,  g_2 = f_2 - g_1 (l_2/delta_1),  g_3 = f_3 - g_2 (l_3/delta_2),

so that the general form is

    g_k = f_k - g_{k-1} (l_k/delta_{k-1}),  2 <= k <= n.

Note this assumes d_1 != 0, and continuing requires d_2 - u_1 (l_2/d_1) != 0, etc. Under these circumstances we can reduce the system to the upper triangular form

    delta_1 x_1 + u_1 x_2             = g_1
            delta_2 x_2 + u_2 x_3     = g_2
                    ...
                        delta_n x_n   = g_n

with augmented matrix [T g]. The matrix [T g] is row equivalent to the original augmented matrix [A f], meaning that we can pass from one to the other using elementary row operations; thus the two augmented matrices represent systems with exactly the same solution sets. Moreover, the solution is now easy to obtain: we solve the last equation delta_n x_n = g_n to get x_n = g_n/delta_n, then use this value in the previous equation to get x_{n-1}, and so on, to get each solution component. Carrying out this stage of the computation requires the assumption that each delta_k != 0, 1 <= k <= n.

The first stage of the computation (reducing the tridiagonal matrix A to the triangular matrix T) is generally called the elimination step, and the second stage is generally called the backward solution (or back-solve) step. A pseudocode for this process is below. Note that we do not store the entire matrix, only the three vectors needed to define the elements in the nonzero diagonals. In addition, we do not use different variable names for the d_i and delta_i, etc., but overwrite the original values with the new ones.
This saves storage when working with large problems.

/* Elimination stage */
for i=2 to n
    d(i) = d(i) - u(i-1)*l(i)/d(i-1)
    f(i) = f(i) - f(i-1)*l(i)/d(i-1)
endfor
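The elimination loop translates almost line for line into Python; this is a sketch using 0-based indexing (so row i here is row i+1 of the pseudocode), with sample vectors invented for illustration:

```python
def eliminate(l, d, u, f):
    """Elimination stage: overwrite d(i) with delta_i and f(i) with g_i,
    exactly as in the pseudocode, using 0-based indices."""
    n = len(d)
    for i in range(1, n):
        m = l[i] / d[i - 1]   # multiplier l_i / delta_{i-1}
        d[i] -= u[i - 1] * m  # delta_i = d_i - u_{i-1} * l_i / delta_{i-1}
        f[i] -= f[i - 1] * m  # g_i = f_i - g_{i-1} * l_i / delta_{i-1}

# small invented example (l_1 = 0 and u_n = 0 by convention)
l, d, u, f = [0.0, 2.0, 1.0], [6.0, 6.0, 4.0], [1.0, 3.0, 0.0], [7.0, 11.0, 5.0]
eliminate(l, d, u, f)
print(d)  # modified diagonal: the delta_i
print(f)  # modified right-hand side: the g_i
```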

/* Backsolve stage (bottom row is a special case) */
x(n) = f(n)/d(n)
for i=n-1 downto 1
    x(i) = ( f(i) - u(i)*x(i+1) )/d(i)
endfor

The algorithm clearly breaks down if any delta_k is zero; we will discuss this problem in later lectures. For now we state a common condition that is sufficient to guarantee that the tridiagonal solution algorithm given here will work.
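Combining the elimination and backsolve stages gives the complete solver; this Python sketch is one plausible rendering of the two pseudocode fragments, assuming no delta_k vanishes, and the test system is invented for illustration:

```python
def tridiag_solve(l, d, u, f):
    """Solve a tridiagonal system stored as four vectors (0-based),
    assuming no delta_k vanishes. Works on copies, so the caller's
    vectors are not overwritten."""
    n = len(d)
    d, f = list(d), list(f)
    # elimination stage
    for i in range(1, n):
        m = l[i] / d[i - 1]
        d[i] -= u[i - 1] * m
        f[i] -= f[i - 1] * m
    # backsolve stage (bottom row is a special case)
    x = [0.0] * n
    x[n - 1] = f[n - 1] / d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (f[i] - u[i] * x[i + 1]) / d[i]
    return x

# 6x1 + x2 = 7, 2x1 + 6x2 + 3x3 = 11, x2 + 4x3 = 5; exact solution (1, 1, 1)
x = tridiag_solve([0.0, 2.0, 1.0], [6.0, 6.0, 4.0], [1.0, 3.0, 0.0], [7.0, 11.0, 5.0])
print(x)  # approximately [1.0, 1.0, 1.0]
```

This is the classical Thomas algorithm; it costs O(n) operations rather than the O(n^3) of Gaussian elimination on a full matrix.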

Diagonal Dominance

Definition: A tridiagonal matrix is diagonally dominant if

    |d_i| > |l_i| + |u_i| > 0,   1 <= i <= n.

For example, the matrix

        [ 6  1  0  0 ]
    A = [ 2  6  3  0 ]
        [ 0  6  9  0 ]
        [ 0  0  3  4 ]

is diagonally dominant.

Theorem: If the tridiagonal matrix A is diagonally dominant, then the algorithm will succeed, within the limitations of rounding error.

Proof: Diagonal dominance tells us that delta_1 = d_1 != 0, so all that remains is to show that each delta_k = d_k - u_{k-1} l_k / delta_{k-1} != 0 for 2 <= k <= n. Assume for the moment that l_2 != 0; then

    |delta_2| = |d_2 - u_1 l_2/d_1|
             >= |d_2| - |u_1| |l_2| / |d_1|
             >  |u_2| + |l_2| - |l_2| theta_1
             >= (|u_2| + |l_2|)(1 - theta_1)

where theta_1 = |u_1|/|d_1| < 1. Therefore |delta_2| > 0, since l_2 != 0 implies |u_2| + |l_2| > 0. If l_2 = 0, then we simply have delta_2 = d_2 != 0. Moreover, |delta_2| > |u_2| + |l_2|(1 - theta_1) > |u_2|, so that theta_2 = |u_2|/|delta_2| < 1 and we can repeat the argument for each successive index. This completes the proof.

Chapter 8. Solutions of Systems of Equations

A brief review

A vector x in R^n is an ordered n-tuple of real numbers, i.e. x = (x_1, x_2, ..., x_n)^T, where the T denotes that this should be considered a column vector. A matrix A in R^(m x n) is a rectangular array of m rows and n columns,

        [ a_11  a_12  ...  a_1n ]
    A = [   .     .          .  ]
        [ a_m1  a_m2  ...  a_mn ]

Given a square matrix A in R^(n x n), if there exists a second square matrix B in R^(n x n) such that AB = BA = I, then we say B is the inverse of A. Note that not all square matrices have an inverse. If A has an inverse it is nonsingular, and if it does not it is singular. The following theorem summarises the conditions under which a matrix is nonsingular and also connects them to the solvability of the linear systems problem.

Theorem: Given a matrix A in R^(n x n), the following statements are equivalent:

1. A is nonsingular.

2. The columns of A form an independent set of vectors.
3. The rows of A form an independent set of vectors.
4. The linear system Ax = b has a unique solution for all vectors b in R^n.
5. The homogeneous system Ax = 0 has only the trivial solution x = 0.
6. The determinant of A is nonzero.

Corollary: If A in R^(n x n) is singular, then there exist infinitely many vectors x in R^n, x != 0, such that Ax = 0.

There are a number of special classes of matrices, in particular tridiagonal matrices and (later) symmetric positive definite matrices. A square matrix is lower (upper) triangular if all the elements above (below) the main diagonal are zero. Thus

        [ 1  2  3 ]
    U = [ 0  4  5 ]
        [ 0  0  6 ]

is upper triangular, while

        [ 1  0  0 ]
    L = [ 2  3  0 ]
        [ 4  5  6 ]

is lower triangular. Note that at this stage you should review the concepts of spanning, basis, dimension, and orthogonality, as we will use these ideas soon in a discussion of eigenvalues and their computation.
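As a small illustration of why triangular matrices are convenient, here is a Python sketch of back substitution applied to an upper triangular matrix like U above; the right-hand side b is invented so the solution is easy to check. Since the determinant of a triangular matrix is the product of its diagonal entries, a nonzero diagonal makes U nonsingular (statement 6 of the theorem), so the solution is unique:

```python
def back_substitute(U, b):
    """Solve U x = b for an upper triangular U with nonzero diagonal
    entries. For triangular U, det(U) is the product of the diagonal,
    so a nonzero diagonal means U is nonsingular."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known components, then divide by the pivot
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

U = [[1.0, 2.0, 3.0],
     [0.0, 4.0, 5.0],
     [0.0, 0.0, 6.0]]
b = [6.0, 9.0, 6.0]   # chosen so the exact solution is (1, 1, 1)
print(back_substitute(U, b))  # [1.0, 1.0, 1.0]
```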