Solution of System of Linear Equations & Eigen Values and Eigen Vectors


Solution of System of Linear Equations & Eigen Values and Eigen Vectors
P. Sam Johnson (NITK)
March 30, 2015

Overview

Linear systems occur widely in applied mathematics:

Ax = b        (1)

They occur as direct formulations of real-world problems, but more often they occur as a part of numerical analysis. There are two general categories of numerical methods for solving (1). Direct methods are methods with a finite number of steps; they end with the exact solution provided that all arithmetic operations are exact. Iterative methods are used in solving all types of linear systems, but they are most commonly used for large systems.

Overview

For large systems, iterative methods are faster than the direct methods. Even the round-off errors in iterative methods are smaller. In fact, an iterative method is a self-correcting process: any error made at any stage of the computation gets automatically corrected in the subsequent steps.

We start with a simple example of a linear system, and we discuss the consistency of a system of linear equations using the concept of rank. Two iterative methods, the Gauss-Jacobi and Gauss-Seidel methods, are discussed to solve a given linear system numerically.

Overview

Eigen values are a special set of scalars associated with a linear system of equations. The determination of the eigen values and eigen vectors of a system is extremely important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few. Each eigen value is paired with a corresponding vector, called an eigen vector.

We discuss properties of eigen values and eigen vectors in the lecture. We also give a method, called Rayleigh's power method, to find the largest eigen value in magnitude (also called the dominant eigen value). The special advantage of the power method is that the eigen vector corresponding to the dominant eigen value is calculated at the same time.

An Example

A shopkeeper offers two standard packets because he is convinced that north Indians eat more wheat than rice and south Indians eat more rice than wheat:

Packet one, P_1 : 5 kg wheat and 2 kg rice;
Packet two, P_2 : 2 kg wheat and 5 kg rice.

Notation. (m, n) : m kg wheat and n kg rice.

Suppose I need 19 kg of wheat and 16 kg of rice. Then I need to buy x packets of P_1 and y packets of P_2 so that x(5, 2) + y(2, 5) = (19, 16). Hence I have to solve the following system of linear equations:

5x + 2y = 19
2x + 5y = 16.

Suppose I need 34 kg of wheat and 1 kg of rice. Then I must buy 8 packets of P_1 and −3 packets of P_2. What does this mean? I buy 8 packets of P_1, and from these I make three packets of P_2 and give them back to the shopkeeper.
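As a quick numerical check of the wheat-and-rice system, the sketch below solves it with a direct solver. The use of NumPy is my assumption for illustration; the lecture itself prescribes no software.

```python
import numpy as np

# Demand of 19 kg wheat and 16 kg rice from packets P1 = (5, 2) and P2 = (2, 5):
#   5x + 2y = 19
#   2x + 5y = 16
A = np.array([[5.0, 2.0],
              [2.0, 5.0]])
b = np.array([19.0, 16.0])
x, y = np.linalg.solve(A, b)   # x packets of P1, y packets of P2
```

Substituting back, 5x + 2y and 2x + 5y reproduce the demanded 19 kg and 16 kg.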

Consistency of System of Linear Equations: Graphically (1 Equation, 2 Variables)

The solution set of the equation ax + by = c is the set of all points satisfying the equation; it forms a straight line in the plane through the point (0, c/b) with slope −a/b (assuming b ≠ 0).

For two lines:
parallel — no solution;
intersecting — a unique solution;
the same line — infinitely many solutions.

Consistency of System of Linear Equations

Consider the system of m linear equations in n unknowns

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m.

Its matrix form is

[ a_11 a_12 ... a_1n ] [ x_1 ]   [ b_1 ]
[ a_21 a_22 ... a_2n ] [ x_2 ] = [ b_2 ]
[  ...          ...  ] [ ... ]   [ ... ]
[ a_m1 a_m2 ... a_mn ] [ x_n ]   [ b_m ]

That is, Ax = b, where A is called the coefficient matrix.

To determine whether these equations are consistent or not, we find the ranks of the coefficient matrix A and the augmented matrix

        [ a_11 a_12 ... a_1n  b_1 ]
K =     [ a_21 a_22 ... a_2n  b_2 ]  = [A b].
        [  ...                ... ]
        [ a_m1 a_m2 ... a_mn  b_m ]

We denote the rank of A by r(A).

1. If r(A) ≠ r(K), then the linear system Ax = b is inconsistent and has no solution.
2. If r(A) = r(K) = n, then the linear system Ax = b is consistent and has a unique solution.
3. If r(A) = r(K) < n, then the linear system Ax = b is consistent and has an infinite number of solutions.
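The rank test above is easy to automate. The sketch below is my own illustration using NumPy's `matrix_rank`; the helper name `classify_system` is hypothetical, not from the lecture.

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b by comparing r(A) with r(K), K = [A b]."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[1]                                  # number of unknowns
    rA = np.linalg.matrix_rank(A)
    rK = np.linalg.matrix_rank(np.hstack([A, b]))   # augmented matrix K
    if rA != rK:
        return "inconsistent"                       # no solution
    return "unique" if rA == n else "infinitely many"
```

For example, x + y = 1 together with x + y = 2 is reported inconsistent, while x + y = 1 together with 2x + 2y = 2 has infinitely many solutions.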

Solution of Linear Simultaneous Equations

Simultaneous linear equations occur quite often in engineering and science. The analysis of electronic circuits consisting of invariant elements, the analysis of a network under sinusoidal steady-state conditions, the determination of the output of a chemical plant, and finding the cost of chemical reactions are some of the problems which depend on the solution of simultaneous linear algebraic equations. The solution of such equations can be obtained by direct or iterative methods (successive approximation methods). Some direct methods are as follows:

1. Method of determinants (Cramer's rule)
2. Matrix inversion method
3. Gauss elimination method
4. Gauss-Jordan method
5. Factorization (triangularization) method.

Iterative Methods

Direct methods yield the solution after a certain amount of computation. On the other hand, an iterative method is one in which we start from an approximation to the true solution and obtain better and better approximations from a computation cycle repeated as often as may be necessary for achieving a desired accuracy. Thus in an iterative method, the amount of computation depends on the degree of accuracy required.

For large systems, iterative methods are faster than the direct methods. Even the round-off errors in iterative methods are smaller. In fact, iteration is a self-correcting process: any error made at any stage of the computation gets automatically corrected in the subsequent steps.

Diagonally Dominant

Definition. An n × n matrix A is said to be diagonally dominant if, in each row, the absolute value of the leading diagonal element is greater than or equal to the sum of the absolute values of the remaining elements in that row.

The matrix

A = [ 10  5  2
       4 10  3
       1  6 10 ]

is diagonally dominant, and the matrix

A = [ 2 3 1
      5 8 4
      1 1 1 ]

is not a diagonally dominant matrix (in the first row, |2| < |3| + |1|).
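The row-wise definition translates directly into a check. This is a minimal sketch of my own, assuming NumPy; the function name is hypothetical.

```python
import numpy as np

def is_diagonally_dominant(A):
    """True iff |a_ii| >= sum of |a_ij|, j != i, holds in every row."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)                 # |a_ii| for each row
    off = A.sum(axis=1) - diag        # sum of off-diagonal magnitudes per row
    return bool(np.all(diag >= off))
```

Applied to the two matrices above, it returns True for the first and False for the second.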

Diagonal System

Definition. In the system of simultaneous linear equations in n unknowns AX = B, if A is diagonally dominant, then the system is said to be a diagonal system.

Thus the system of linear equations

a_1 x + b_1 y + c_1 z = d_1
a_2 x + b_2 y + c_2 z = d_2
a_3 x + b_3 y + c_3 z = d_3

is a diagonal system if

|a_1| ≥ |b_1| + |c_1|
|b_2| ≥ |a_2| + |c_2|
|c_3| ≥ |a_3| + |b_3|.

The process of iteration in solving AX = B will converge quickly if the coefficient matrix A is diagonally dominant. If the coefficient matrix is not diagonally dominant, we must rearrange the equations in such a way that the resulting coefficient matrix becomes diagonally dominant, if possible, before we apply the iteration method.

Exercises

1. Is the system of equations

3x + 9y − 2z = 10
4x + 2y + 13z = 19
4x − 2y + z = 3

diagonally dominant? If not, make it diagonally dominant.

2. Check whether the system of equations

x + 6y − 2z = 5
4x + y + z = 6
3x + y + 7z = 5

is a diagonal system. If not, make it a diagonal system.

Jacobi Iteration Method (also known as Gauss-Jacobi)

Simple iterative methods can be designed for systems in which the coefficients of the leading diagonal are large as compared to the others. We now describe three such methods.

Gauss-Jacobi Iteration Method

Consider the system of linear equations

a_1 x + b_1 y + c_1 z = d_1
a_2 x + b_2 y + c_2 z = d_2
a_3 x + b_3 y + c_3 z = d_3.

If a_1, b_2, c_3 are large as compared to the other coefficients (if not, rearrange the given linear equations), solve for x, y, z respectively. That is, the coefficient matrix is diagonally dominant.

Then the system can be written as

x = (1/a_1)(d_1 − b_1 y − c_1 z)
y = (1/b_2)(d_2 − a_2 x − c_2 z)        (2)
z = (1/c_3)(d_3 − a_3 x − b_3 y).

Let us start with the initial approximations x_0, y_0, z_0 for the values of x, y, z respectively. In the absence of any better estimates for x_0, y_0, z_0, these may each be taken as zero. Substituting these on the right sides of (2), we get the first approximations. This process is repeated till the difference between two consecutive approximations is negligible.
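The update (2) generalizes to n unknowns: every new component is computed from the previous iterate only. A minimal sketch, assuming NumPy (the function name and stopping rule are my choices, not the lecture's):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Jacobi iteration: all components of the new iterate are
    computed from the previous iterate, so the updates are simultaneous."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)                    # leading-diagonal entries a_ii
    R = A - np.diagflat(D)            # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D       # solve each equation for its own unknown
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x
```

On a diagonally dominant system such as 10x + y + z = 12, x + 10y + z = 12, x + y + 10z = 12, the iterates converge to x = y = z = 1.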

Exercises

3. Solve, by Jacobi's iterative method, the equations

20x + y − 2z = 17
3x + 20y − z = −18
2x − 3y + 20z = 25.

4. Solve, by Jacobi's iterative method correct to 2 decimal places, the equations

10x + y − z = 11.19
x + 10y + z = 28.08
−x + y + 10z = 35.61.

5. Solve the equations, by the Gauss-Jacobi iterative method,

10x_1 − 2x_2 − x_3 − x_4 = 3
−2x_1 + 10x_2 − x_3 − x_4 = 15
−x_1 − x_2 + 10x_3 − 2x_4 = 27
−x_1 − x_2 − 2x_3 + 10x_4 = −9.

Gauss-Seidel Iteration Method

This is a modification of Jacobi's iterative method. Consider the system of linear equations

a_1 x + b_1 y + c_1 z = d_1
a_2 x + b_2 y + c_2 z = d_2
a_3 x + b_3 y + c_3 z = d_3.

If a_1, b_2, c_3 are large as compared to the other coefficients (if not, rearrange the given linear equations), solve for x, y, z respectively. Then the system can be written as

x = (1/a_1)(d_1 − b_1 y − c_1 z)
y = (1/b_2)(d_2 − a_2 x − c_2 z)        (3)
z = (1/c_3)(d_3 − a_3 x − b_3 y).

Here also we start with the initial approximations x_0, y_0, z_0 for x, y, z respectively (each may be taken as zero). Substituting y = y_0, z = z_0 in the first of the equations (3), we get

x_1 = (1/a_1)(d_1 − b_1 y_0 − c_1 z_0).

Then putting x = x_1, z = z_0 in the second of the equations (3), we get

y_1 = (1/b_2)(d_2 − a_2 x_1 − c_2 z_0).

Next, substituting x = x_1, y = y_1 in the third of the equations (3), we get

z_1 = (1/c_3)(d_3 − a_3 x_1 − b_3 y_1),

and so on.

That is, as soon as a new approximation for an unknown is found, it is immediately used in the next step. This process of iteration is repeated till the values of x, y, z are obtained to the desired degree of accuracy. Since the most recent approximations of the unknowns are used while proceeding to the next step, the convergence in the Gauss-Seidel method is roughly twice as fast as in the Gauss-Jacobi method.
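The "use each new value immediately" rule is the only change from Jacobi: within one sweep, components already updated feed into the remaining updates. A minimal sketch, assuming NumPy (function name and stopping rule are mine):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: each freshly computed component is used
    immediately in the remaining updates of the same sweep."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds this sweep's new values, x[i+1:] the old ones
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x
```

On the same diagonally dominant test system (10x + y + z = 12 and its two companions), it reaches x = y = z = 1 in fewer sweeps than Jacobi.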

Convergence Criteria

The condition for convergence of the Gauss-Jacobi and Gauss-Seidel methods is given by the following rule: the process of iteration will converge if, in each equation of the system, the absolute value of the leading diagonal coefficient is greater than the sum of the absolute values of all the remaining coefficients. That is, if the coefficient matrix is diagonally dominant, then the process of iteration will converge.

For a given system of linear equations, before finding approximations by the Gauss-Jacobi / Gauss-Seidel methods, the convergence criterion should be verified. If the coefficient matrix is not diagonally dominant, rearrange the equations so that the new coefficient matrix is diagonally dominant. Note that not every system of linear equations can be made diagonally dominant.

Exercises

6. Apply the Gauss-Seidel iterative method to solve the equations

20x + y − 2z = 17
3x + 20y − z = −18
2x − 3y + 20z = 25.

7. Solve the equations by the Gauss-Jacobi and Gauss-Seidel methods (and compare the values):

27x + 6y − z = 85
x + y + 54z = 110
6x + 15y + 2z = 72.

8. Apply the Gauss-Seidel iterative method to solve the equations

10x_1 − 2x_2 − x_3 − x_4 = 3
−2x_1 + 10x_2 − x_3 − x_4 = 15
−x_1 − x_2 + 10x_3 − 2x_4 = 27
−x_1 − x_2 − 2x_3 + 10x_4 = −9.

Relaxation Method

Consider the system of linear equations

a_1 x + b_1 y + c_1 z = d_1
a_2 x + b_2 y + c_2 z = d_2
a_3 x + b_3 y + c_3 z = d_3.

We define the residuals R_x, R_y, R_z by the relations

R_x = d_1 − a_1 x − b_1 y − c_1 z
R_y = d_2 − a_2 x − b_2 y − c_2 z        (4)
R_z = d_3 − a_3 x − b_3 y − c_3 z.

To start with, we assume that x = y = z = 0 and calculate the initial residuals. Then the residuals are reduced step by step, by giving increments to the variables. We note from the equations (4) that if x is increased by 1 (keeping y and z constant), then R_x, R_y and R_z decrease by a_1, a_2, a_3 respectively. This is shown in the following table, along with the effects on the residuals when y and z are given unit increments.

          δR_x    δR_y    δR_z
δx = 1:   −a_1    −a_2    −a_3
δy = 1:   −b_1    −b_2    −b_3
δz = 1:   −c_1    −c_2    −c_3

The table is the negative of the transpose of the coefficient matrix.

At each step, the numerically largest residual is reduced to almost zero. How can one reduce a particular residual? To reduce a particular residual, the value of the corresponding variable is changed. For example, to reduce R_x by p, x should be increased by p/a_1. When all the residuals have been reduced to almost zero, the increments in x, y, z are added separately to give the desired solution.

Verification process: if the residuals are not all negligible when the computed values of x, y, z are substituted in (4), then there is some mistake and the entire process should be rechecked.
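The step "pick the numerically largest residual and zero it by adjusting the matching variable" can be sketched as follows. This is my own illustration assuming NumPy; the function name is hypothetical.

```python
import numpy as np

def relaxation(A, b, tol=1e-8, max_steps=1000):
    """Relaxation method: repeatedly reduce the numerically largest
    residual R = b - A x to (almost) zero by incrementing the
    corresponding variable by R_i / a_ii."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(len(b))              # start from x = y = z = 0
    for _ in range(max_steps):
        R = b - A @ x                 # current residuals
        i = int(np.argmax(np.abs(R))) # numerically largest residual
        if abs(R[i]) < tol:
            break                     # all residuals negligible
        x[i] += R[i] / A[i, i]        # increment that zeroes R_i
    return x
```

On the diagonally dominant system 10x + y + z = 12, x + 10y + z = 12, x + y + 10z = 12, the accumulated increments converge to x = y = z = 1.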

Convergence of the Relaxation Method

The relaxation method can be applied successfully only if there is at least one row in which the diagonal element of the coefficient matrix dominates the other coefficients of that row. That is, at least one of

|a_1| ≥ |b_1| + |c_1|,   |b_2| ≥ |a_2| + |c_2|,   |c_3| ≥ |a_3| + |b_3|

should be valid.

Exercises

9. Solve, by the relaxation method, the equations

9x − 2y + z = 50
x + 5y − 3z = 18
2x + 2y + 7z = 19.

10. Solve, by the relaxation method, the equations

10x − 2y − 3z = 205
−2x + 10y − 2z = 154
−2x − y + 10z = 120.

Eigen Values and Eigen Vectors

For any n × n matrix A, consider the polynomial

χ_A(λ) := |λI − A| =
| λ − a_11    −a_12   ...   −a_1n  |
|  −a_21    λ − a_22  ...   −a_2n  |        (5)
|   ...                      ...   |
|  −a_n1     −a_n2    ... λ − a_nn |.

Clearly this is a monic polynomial of degree n. By the fundamental theorem of algebra, χ_A(λ) has exactly n (not necessarily distinct) roots.

χ_A(λ) — the characteristic polynomial of A
χ_A(λ) = 0 — the characteristic equation of A
the roots of χ_A(λ) — the characteristic roots of A
the distinct roots of χ_A(λ) — the spectrum of A

Few Observations on Characteristic Polynomials

The constant term and the coefficient of λ^(n−1) in χ_A(λ) are (−1)^n |A| and −tr(A), respectively. The sum of the characteristic roots of A is tr(A), and the product of the characteristic roots of A is |A|.

Since |λI − A^T| = |(λI − A)^T| = |λI − A|, the characteristic polynomials of A and A^T are the same.

Since |λI − P^(−1)AP| = |P^(−1)(λI − A)P| = |λI − A|, similar matrices have the same characteristic polynomial.
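These observations can be spot-checked numerically. The sketch below assumes NumPy; `np.poly` returns the coefficients of the monic characteristic polynomial of a matrix, and `np.roots` returns its roots.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

coeffs = np.poly(A)       # monic characteristic polynomial coefficients
roots = np.roots(coeffs)  # the characteristic roots (here 2 and 3)

# sum of roots = tr(A), product of roots = |A|
assert np.isclose(roots.sum(), np.trace(A))
assert np.isclose(roots.prod(), np.linalg.det(A))

# A and A^T have the same characteristic polynomial
assert np.allclose(np.poly(A), np.poly(A.T))
```

For this A the polynomial is λ² − 5λ + 6, whose coefficient of λ^(n−1) is −tr(A) = −5 and whose constant term is (−1)² |A| = 6, as observed above.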

Eigen Values and Eigen Vectors

Definition. A complex number λ is an eigen value of A if there exists x ≠ 0 in C^n such that Ax = λx. Any such (non-zero) x is an eigen vector of A corresponding to the eigen value λ.

When we say that x is an eigen vector of A, we mean that x is an eigen vector of A corresponding to some eigen value of A.

λ is an eigen value of A iff the system (λI − A)x = 0 has a non-trivial solution. λ is a characteristic root of A iff λI − A is singular.

Theorem. A number λ is an eigen value of A iff λ is a characteristic root of A.

The preceding theorem shows that eigen values are the same as characteristic roots. However, by the characteristic roots of A we mean the n roots of the characteristic polynomial of A, whereas the eigen values of A would mean the distinct characteristic roots of A.

Equivalent names:

Eigen values — proper values, latent roots, etc.
Eigen vectors — characteristic vectors, latent vectors, etc.

Properties of Eigen Values

1. The sum and the product of all eigen values of a matrix A are the sum of the principal diagonal elements and the determinant of A, respectively. Hence a matrix is not invertible if and only if it has a zero eigen value.
2. The eigen values of A and A^T (the transpose of A) are the same.
3. If λ_1, λ_2, ..., λ_n are eigen values of A, then
(a) 1/λ_1, 1/λ_2, ..., 1/λ_n are eigen values of A^(−1);
(b) kλ_1, kλ_2, ..., kλ_n are eigen values of kA;
(c) kλ_1 + m, kλ_2 + m, ..., kλ_n + m are eigen values of kA + mI;
(d) λ_1^p, λ_2^p, ..., λ_n^p are eigen values of A^p, for any positive integer p.
4. The transformation of A by a non-singular matrix P to P^(−1)AP is called a similarity transformation. Any similarity transformation applied to a matrix leaves its eigen values unchanged.
5. The eigen values of a real symmetric matrix are real.
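Property 3 can be verified numerically on a small example. The sketch below assumes NumPy and uses a symmetric matrix so that, per property 5, the eigen values are real (which keeps the comparisons simple).

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # real symmetric: eigen values are real
lam = np.sort(np.linalg.eigvalsh(A))

# (a) eigen values of A^{-1} are the reciprocals 1/λ_i
lam_inv = np.sort(np.linalg.eigvalsh(np.linalg.inv(A)))
assert np.allclose(lam_inv, np.sort(1.0 / lam))

# (b)/(c) eigen values of kA + mI are kλ_i + m (here k = 2, m = 5)
lam_shift = np.sort(np.linalg.eigvalsh(2.0 * A + 5.0 * np.eye(2)))
assert np.allclose(lam_shift, np.sort(2.0 * lam + 5.0))

# (d) eigen values of A^p are λ_i^p (here p = 3)
lam_pow = np.sort(np.linalg.eigvalsh(np.linalg.matrix_power(A, 3)))
assert np.allclose(lam_pow, np.sort(lam ** 3))
```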

Cayley-Hamilton Theorem

Theorem. For every matrix A, the characteristic polynomial of A annihilates A. That is, every matrix satisfies its own characteristic equation.

Main uses of the Cayley-Hamilton theorem:

1. To evaluate large powers of A.
2. To evaluate a polynomial in A of large degree, even if A is singular.
3. To express A^(−1) as a polynomial in A, when A is non-singular.
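Use 3 can be made concrete: if p(λ) = λ^n + c_1 λ^(n−1) + ... + c_n is the characteristic polynomial, then p(A) = 0 gives A(A^(n−1) + c_1 A^(n−2) + ... + c_(n−1) I) = −c_n I, so A^(−1) = −(A^(n−1) + c_1 A^(n−2) + ... + c_(n−1) I)/c_n. A sketch assuming NumPy (the function name is hypothetical):

```python
import numpy as np

def inverse_via_cayley_hamilton(A):
    """Express A^{-1} as a polynomial in A via the Cayley-Hamilton theorem."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    c = np.poly(A)                 # [1, c1, ..., cn], monic char. polynomial
    # Horner-style build-up of B = A^{n-1} + c1 A^{n-2} + ... + c_{n-1} I
    B = np.eye(n)
    for k in range(1, n):
        B = A @ B + c[k] * np.eye(n)
    return -B / c[n]               # valid when c_n = (-1)^n |A| != 0
```

For A = [[4, 1], [2, 3]] the characteristic polynomial is λ² − 7λ + 6... more precisely λ² − 7λ + 10, so A^(−1) = −(A − 7I)/10, matching the direct inverse.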

Dominant Eigen Value

We have seen that the eigen values of an n × n matrix A are obtained by solving its characteristic equation

λ^n + c_(n−1) λ^(n−1) + c_(n−2) λ^(n−2) + ... + c_0 = 0.

For large values of n, polynomial equations like this one are difficult and time-consuming to solve. Moreover, numerical techniques for approximating roots of polynomial equations of high degree are sensitive to rounding errors. We now look at an alternative method for approximating eigen values. As presented here, the method can be used only to find the eigen value of A that is largest in absolute value; we call this eigen value the dominant eigen value of A.

Dominant Eigen Vector

Dominant eigen values are of primary interest in many physical applications. Let λ_1, λ_2, ..., λ_n be the eigen values of an n × n matrix A. λ_1 is called the dominant eigen value of A if

|λ_1| > |λ_i|,   i = 2, 3, ..., n.

For example, the matrix

( 1 0
  0 1 )

has no dominant eigen value. The eigen vectors corresponding to λ_1 are called dominant eigen vectors of A.

Power Method

In many engineering problems, it is required to compute the numerically largest eigen value (the dominant eigen value) and the corresponding eigen vector (the dominant eigen vector). In such cases, the following iterative method is quite convenient, and it is also well-suited for machine computation. Let x_1, x_2, ..., x_n be the eigen vectors corresponding to the eigen values λ_1, λ_2, ..., λ_n. We assume that the eigen values of A are real and distinct with

|λ_1| > |λ_2| > ... > |λ_n|.

As the vectors x_1, x_2, ..., x_n are linearly independent, any arbitrary column vector can be written as

x = k_1 x_1 + k_2 x_2 + ... + k_n x_n.

We now operate A repeatedly on the vector x. Then

Ax = k_1 λ_1 x_1 + k_2 λ_2 x_2 + ... + k_n λ_n x_n.

Hence

Ax = λ_1 [ k_1 x_1 + k_2 (λ_2/λ_1) x_2 + ... + k_n (λ_n/λ_1) x_n ],

and through iteration we obtain

A^r x = λ_1^r [ k_1 x_1 + k_2 (λ_2/λ_1)^r x_2 + ... + k_n (λ_n/λ_1)^r x_n ]        (6)

provided λ_1 ≠ 0.

Since λ_1 is dominant, all terms inside the brackets have limit zero except the first term. That is, for large values of r, the vector

k_1 x_1 + k_2 (λ_2/λ_1)^r x_2 + ... + k_n (λ_n/λ_1)^r x_n

will converge to k_1 x_1, that is, an eigen vector for λ_1. The eigen value is obtained as

λ_1 = lim_(r→∞) (A^(r+1) x)_p / (A^r x)_p,   p = 1, 2, ..., n,

where the index p signifies the p-th component of the corresponding vector. That is, if we take the ratio of any corresponding components of A^(r+1) x and A^r x, this ratio should have the limit λ_1.

Linear Convergence

Neglecting the terms of λ_3, ..., λ_n in (6), we get

(A^(r+1) x)/λ_1^(r+1) − k_1 x_1 = (λ_2/λ_1) [ (A^r x)/λ_1^r − k_1 x_1 ],

which shows that the convergence is linear with convergence ratio |λ_2/λ_1|. Thus, the convergence is faster if this ratio is small.

Working Rule to Find the Dominant Eigen Value

Choose x_0, an arbitrary real vector. Generally, we choose x_0 as

[1, 1, 1]^T   or   [1, 0, 0]^T.

In other words, for the purpose of computation, we generally take x_0 with 1 as the first component. We start with a column vector x which is as near the solution as possible and evaluate Ax, which is written as λ^(1) x^(1) after normalization. This gives the first approximation λ^(1) to the eigen value, and x^(1) is the corresponding eigen vector.

Rayleigh's Power Method

Similarly, we evaluate A x^(1) = λ^(2) x^(2), which gives the second approximation. We repeat this process till [x^(r) − x^(r−1)] becomes negligible. Then λ^(r) will be the dominant (largest) eigen value and x^(r) the corresponding eigen vector (dominant eigen vector).

This iterative method of finding the dominant eigen value of a square matrix is called Rayleigh's power method. At each stage of the iteration, we take out the largest component of y^(r) to find x^(r), where

y^(r+1) = A x^(r),   r = 0, 1, 2, ....
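The working rule (apply A, take out the numerically largest component as the eigen value estimate, normalize, repeat) can be sketched as follows, assuming NumPy; the function name and stopping rule are my choices.

```python
import numpy as np

def power_method(A, x0=None, tol=1e-10, max_iter=1000):
    """Rayleigh power method: the normalizing factors converge to the
    dominant eigen value, the normalized vectors to its eigen vector."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    x = np.ones(n) if x0 is None else np.asarray(x0, dtype=float)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x                           # y^(r+1) = A x^(r)
        lam_new = y[np.argmax(np.abs(y))]   # numerically largest component
        x_new = y / lam_new                 # normalized eigen vector estimate
        if abs(lam_new - lam) < tol:
            return lam_new, x_new
        lam, x = lam_new, x_new
    return lam, x
```

For A = [[4, 1], [2, 3]] (eigen values 5 and 2), the iterates converge to the dominant eigen value 5 with eigen vector proportional to [1, 1].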

To Find the Smallest Eigen Value

To find the smallest eigen value of a given matrix A, apply the power method to the matrix A^(−1): find the dominant eigen value of A^(−1) and then take its reciprocal.

Exercise

11. Say true or false, with justification: if λ is the dominant eigen value of A and β is the dominant eigen value of A − λI, then the smallest eigen value of A is λ + β.
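The inverse trick above can be sketched directly: since the eigen values of A^(−1) are the reciprocals of those of A, its dominant eigen value is 1/λ_min. This self-contained sketch assumes NumPy and forms A^(−1) explicitly, which is fine for small matrices.

```python
import numpy as np

def smallest_eigenvalue(A, tol=1e-10, max_iter=1000):
    """Smallest-magnitude eigen value of A: run the power method on
    A^{-1} (dominant eigen value 1/λ_min), then take the reciprocal."""
    Ainv = np.linalg.inv(np.asarray(A, dtype=float))
    x = np.zeros(Ainv.shape[0])
    x[0] = 1.0                                # start with 1 as first component
    lam = 0.0
    for _ in range(max_iter):
        y = Ainv @ x
        lam_new = y[np.argmax(np.abs(y))]     # estimate of dominant e.v. of A^{-1}
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return 1.0 / lam_new
```

For A = [[4, 1], [2, 3]] (eigen values 5 and 2), the power method on A^(−1) converges to 1/2, whose reciprocal recovers the smallest eigen value 2.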

Exercises

12. Determine the largest eigen value and the corresponding eigen vector of the matrix

( 5 4
  1 2 ).

13. Find the largest eigen value and the corresponding eigen vector of the matrix

(  2 −1  0
  −1  2 −1
   0 −1  2 )

using the power method. Take [1, 0, 0]^T as the initial eigen vector.

14. Obtain, by the power method, the numerically dominant eigen value and eigen vector of the matrix

(  15 −4 −3
  −10 12 −6
  −20  4 −2 ).

References

1. Richard L. Burden and J. Douglas Faires, Numerical Analysis: Theory and Applications, Cengage Learning, Singapore.
2. Kendall E. Atkinson, An Introduction to Numerical Analysis, Wiley India.
3. David Kincaid and Ward Cheney, Numerical Analysis: Mathematics of Scientific Computing, American Mathematical Society, Providence, Rhode Island.
4. S.S. Sastry, Introductory Methods of Numerical Analysis, Fourth Edition, Prentice-Hall, India.