
PART 1  LINEAR ALGEBRA AND MATRICES

General Notations

Matrix (denoted by a capital boldface letter)

A is an m x n matrix:

        [ a_11  a_12  ...  a_1n ]
    A = [ a_21  a_22  ...  a_2n ] = [a_ij]
        [   :     :           : ]
        [ a_m1  a_m2  ...  a_mn ]

a_ij denotes the component at row i and column j of A.

If m = n, A is a square matrix, and its diagonal containing a_11, a_22, ..., a_nn is called the main diagonal or principal diagonal of A.

A submatrix of A is obtained by omitting some rows or columns (or both) from A.

Vector (denoted by a lowercase boldface letter)

A row vector is a matrix that has only one row:

    a = [ a_1  a_2  ...  a_n ]

A column vector is a matrix that has only one column:

        [ b_1 ]
    b = [ b_2 ]
        [  :  ]
        [ b_n ]

Transposition

The transpose A^T of an m x n matrix A = [a_ij] is the n x m matrix that has the first row of A as its first column, the second row of A as its second column, and so on. Thus the transpose of A is

          [ a_11  a_21  ...  a_m1 ]
    A^T = [ a_12  a_22  ...  a_m2 ] = [a_ji]
          [   :     :           : ]
          [ a_1n  a_2n  ...  a_mn ]

A symmetric matrix is a square matrix such that A^T = A. For example,

    [ 4  1  0 ]
    [ 1  3  2 ]
    [ 0  2  5 ]

A skew-symmetric matrix is a square matrix such that A^T = -A. For example,

    [  0  2  8 ]
    [ -2  0 -6 ]
    [ -8  6  0 ]

Matrix Addition, Scalar Multiplication

Definition: Equality of matrices
Matrices A and B are equal, written as A = B, if and only if they have the same size and all of the corresponding entries are equal.

Definition: Matrix addition
Addition is defined only for matrices A and B of the same size, and their sum, written as A + B, is then obtained by adding the corresponding entries.

Example
Given

    A = [ -4  6  3 ]    and    B = [ 5  -1  0 ]
        [  0  1  2 ]               [ 3   1  0 ]

Then

    A + B = [ 1  5  3 ]
            [ 3  2  2 ]

Definition: Scalar multiplication
The product of any m x n matrix A and any scalar c, written as cA, is the m x n matrix obtained by multiplying each entry in A by c.
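As a quick illustration (my own sketch, not part of the original notes), the entrywise definitions of matrix addition and scalar multiplication translate directly into Python, with matrices stored as nested lists:

```python
def mat_add(A, B):
    """Entrywise sum of two matrices of the same size."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "sizes must match"
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal_mul(c, A):
    """Multiply every entry of A by the scalar c."""
    return [[c * a for a in row] for row in A]

A = [[-4, 6, 3], [0, 1, 2]]
B = [[5, -1, 0], [3, 1, 0]]
print(mat_add(A, B))   # [[1, 5, 3], [3, 2, 2]]
print(scal_mul(2, A))  # [[-8, 12, 6], [0, 2, 4]]
```

The first assertion in `mat_add` enforces the "same size" requirement from the definition.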

Example
Given

        [ 2.7  -1.8 ]
    A = [ 0     0.9 ]
        [ 9.0  -4.5 ]

then

         [ -2.7   1.8 ]               [  3  -2 ]         [ 0  0 ]
    -A = [  0    -0.9 ],   (10/9)A =  [  0   1 ],   0A = [ 0  0 ]
         [ -9.0   4.5 ]               [ 10  -5 ]         [ 0  0 ]

According to the above definitions, these properties follow:

    A + B = B + A
    (U + V) + W = U + (V + W)
    A + 0 = A
    A + (-A) = 0

    c(A + B) = cA + cB
    (c + k)A = cA + kA
    c(kA) = (ck)A
    1A = A

    (A + B)^T = A^T + B^T
    (cA)^T = c A^T

Matrix Multiplication

The product C = AB of an m x n matrix A and an r x p matrix B is defined if and only if r = n; otherwise C is undefined. C = [c_ik] is an m x p matrix with entries

    c_ik = sum_{j=1}^{n} a_ij b_jk = a_i1 b_1k + a_i2 b_2k + ... + a_in b_nk

Matrix multiplication is not commutative: in general AB ≠ BA. For instance,

    [  9  3 ] [ 1  -4 ]   [ 15  -21 ]
    [ -2  0 ] [ 2   5 ] = [ -2    8 ]

whereas

    [ 1  -4 ] [  9  3 ]   [ 17  3 ]
    [ 2   5 ] [ -2  0 ] = [  8  6 ]

AB = 0 does not necessarily imply A = 0 or B = 0 or BA = 0:

    [ 1  1 ] [ -1   1 ]   [ 0  0 ]        [ -1   1 ] [ 1  1 ]   [  1   1 ]
    [ 2  2 ] [  1  -1 ] = [ 0  0 ]   but  [  1  -1 ] [ 2  2 ] = [ -1  -1 ]

Other properties are

    (kA)B = k(AB) = A(kB)
    A(BC) = (AB)C
    (A + B)C = AC + BC
    C(A + B) = CA + CB
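The product formula c_ik = sum_j a_ij b_jk can be sketched in a few lines of Python (my own illustration, not from the notes), and running it on a small 2 x 2 pair shows the non-commutativity directly:

```python
def mat_mul(A, B):
    """Product of an m x n matrix A and an n x p matrix B."""
    n = len(B)
    assert len(A[0]) == n, "inner dimensions must agree"
    p = len(B[0])
    # c_ik = sum over j of a_ij * b_jk
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(len(A))]

A = [[9, 3], [-2, 0]]
B = [[1, -4], [2, 5]]
print(mat_mul(A, B))  # [[15, -21], [-2, 8]]
print(mat_mul(B, A))  # [[17, 3], [8, 6]]  -- AB and BA differ
```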

Special Matrices

A square matrix whose entries above the main diagonal are all zero is called a lower triangular matrix:

    [ 1  0  0 ]
    [ 2  3  0 ]
    [ 5  2  0 ]

Similarly, an upper triangular matrix is a square matrix whose entries below the main diagonal are all zero:

    [ 1  6  -1 ]
    [ 0  2   3 ]
    [ 0  0   4 ]

A diagonal matrix is a square matrix whose entries above and below the main diagonal are all zero:

    [ 2  0  0 ]         [ 2  0  0 ]
    [ 0  0  0 ]   and   [ 0  2  0 ]
    [ 0  0  4 ]         [ 0  0  2 ]

A diagonal matrix whose entries on the main diagonal are all equal is called a scalar matrix:

        [ c  0  ...  0 ]
    S = [ 0  c  ...  0 ]
        [ :  :       : ]
        [ 0  0  ...  c ]

    AS = SA = cA

A unit matrix or identity matrix I is a scalar matrix whose entries on the main diagonal are all 1:

        [ 1  0  0 ]
    I = [ 0  1  0 ]
        [ 0  0  1 ]

Transpose of a Product

    (AB)^T = B^T A^T

Inner Product of Vectors

                              [ b_1 ]
    a^T b = [ a_1  ...  a_n ] [  :  ] = sum_{i=1}^{n} a_i b_i = a_1 b_1 + ... + a_n b_n
                              [ b_n ]

The inner product of vectors is always a scalar.

Product in Terms of Column Vectors

Let B = [ b_1  b_2  ...  b_n ], where b_1, b_2, ..., b_n are column vectors. Then

    AB = [ Ab_1  Ab_2  ...  Ab_n ]

LINEAR SYSTEMS OF EQUATIONS

A linear system of m equations in n unknowns x_1, x_2, ..., x_n is a set of equations of the form

    a_11 x_1 + ... + a_1n x_n = b_1
    a_21 x_1 + ... + a_2n x_n = b_2
        :
    a_m1 x_1 + ... + a_mn x_n = b_m

The a_jk are given and are called the coefficients of the system. The b_i are also given numbers.

If the b_i are all zero, the system is called a homogeneous system. If at least one b_i is not zero, then the system is called a nonhomogeneous system.

A solution of the system of equations is a set of numbers x_1, x_2, ..., x_n that satisfy all the m equations. If the system is homogeneous, it has at least the trivial solution x_1 = x_2 = ... = x_n = 0.

Consider the number of equations and the number of unknowns:
If there are more equations than unknowns (m > n), the system is overdetermined.
If m = n, the system is determined.
If there are more unknowns than equations (m < n), the system is underdetermined.

An underdetermined system, when it is consistent, has infinitely many solutions; in all three cases, solutions may or may not exist.

Geometric Interpretation

If m = n = 2, we have two equations in two unknowns x, y. If we interpret the two unknowns as a pair of coordinates in the 2-D plane, then each equation is an equation of a straight line. Each point on the line is a pair of values that satisfies the equation.

Case (a)
    x + y = 1
    x + y = 0
No solution if the two lines are parallel.

Case (b)
    x + y = 1
    x - y = 0
There is precisely one solution if the two lines intersect.

Case (c)
    x + y = 1
    2x + 2y = 2
There are infinitely many solutions if the two lines coincide.

Coefficient Matrix

From the definition of matrix multiplication, we see that the m equations

    a_11 x_1 + ... + a_1n x_n = b_1
    a_21 x_1 + ... + a_2n x_n = b_2
        :
    a_m1 x_1 + ... + a_mn x_n = b_m

may be written as a single vector equation

    Ax = b

where the coefficient matrix A = [a_jk] is the m x n matrix

        [ a_11  a_12  ...  a_1n ]
    A = [ a_21  a_22  ...  a_2n ]
        [   :     :           : ]
        [ a_m1  a_m2  ...  a_mn ]

and

        [ x_1 ]             [ b_1 ]
    x = [  :  ]   and   b = [  :  ]
        [ x_n ]             [ b_m ]

are column vectors. Note that x has n components (n unknowns) whereas b has m components (m equations).

The matrix

         [ a_11  a_12  ...  a_1n | b_1 ]
    A~ = [ a_21  a_22  ...  a_2n | b_2 ]
         [   :     :           : |  :  ]
         [ a_m1  a_m2  ...  a_mn | b_m ]

is called the augmented matrix because it is obtained by augmenting the coefficient matrix A with the column vector b. This augmented matrix completely describes the linear system of equations.

Elementary Row Operations

When we solve the original set of linear equations, we can
(1) Interchange the order of two equations
(2) Multiply an equation by a nonzero constant
(3) Add a constant multiple of one equation to another equation
without affecting the solution. Therefore, we can
(1) Interchange two rows of the augmented matrix
(2) Multiply a row by a nonzero constant
(3) Add a constant multiple of one row to another row
without changing the solution. These manipulations, which produce a matrix that is row-equivalent to the original one, i.e., yielding the same set of solutions, are called elementary row operations.

Gauss Elimination

Gauss elimination is a method to obtain the solution of a linear system by performing elementary row operations on the augmented matrix of the system.

By subtracting multiples of the first row from all other rows, it makes the first column of all other rows become zero. This process is called pivoting, whereby the first row is the pivot row. Then, by subtracting multiples of the second row from all rows below it, it makes the second column of all rows below become zero. This process uses the second row as the pivot row. Repeating a similar process for all the rows below results in a row-equivalent matrix that is an upper triangular matrix, which is called the echelon form.

If the i-th column of the i-th row happens to be equal to zero, it cannot be used as the pivot equation, so we need to interchange the i-th row with another row below it. This is called partial pivoting.

Example 1

    x_1 - x_2 + x_3 = 0
    -x_1 + x_2 - x_3 = 0
    10 x_2 + 25 x_3 = 90
    20 x_1 + 10 x_2 = 80

Augmented matrix:

    [  1  -1   1 |  0 ]
    [ -1   1  -1 |  0 ]
    [  0  10  25 | 90 ]
    [ 20  10   0 | 80 ]

Pivot by the first row:
Row 2 = Row 2 - (-1)*Row 1
Row 3 = Row 3
Row 4 = Row 4 - 20*Row 1

    [ 1  -1    1 |  0 ]
    [ 0   0    0 |  0 ]
    [ 0  10   25 | 90 ]
    [ 0  30  -20 | 80 ]

Move Row 2 to the last row and move Rows 3 and 4 up:

    [ 1  -1    1 |  0 ]
    [ 0  10   25 | 90 ]
    [ 0  30  -20 | 80 ]
    [ 0   0    0 |  0 ]

Pivot by the second row:
Row 3 = Row 3 - (30/10)*Row 2

    [ 1  -1    1 |    0 ]
    [ 0  10   25 |   90 ]
    [ 0   0  -95 | -190 ]
    [ 0   0    0 |    0 ]

Now we obtain an upper triangular matrix that is row-equivalent to the original augmented matrix. The corresponding system is called the reduced system:

    x_1 - x_2 + x_3 = 0
    10 x_2 + 25 x_3 = 90
    -95 x_3 = -190
    0 = 0

From this system, we can easily determine the solution by first calculating x_3, then calculating x_2 from the known x_3, and then calculating x_1 from the known x_3 and x_2. We obtain x_3 = 2, x_2 = 4, x_1 = 2. This last process is called back substitution.

In this example, the system is overdetermined because there are more equations than unknowns. However, there exists one unique solution.

Example: Gauss elimination for an underdetermined system

    3.0 x_1 + 2.0 x_2 + 2.0 x_3 - 5.0 x_4 = 8.0
    0.6 x_1 + 1.5 x_2 + 1.5 x_3 - 5.4 x_4 = 2.7
    1.2 x_1 - 0.3 x_2 - 0.3 x_3 + 2.4 x_4 = 2.1

The augmented matrix is

    [ 3.0   2.0   2.0  -5.0 | 8.0 ]
    [ 0.6   1.5   1.5  -5.4 | 2.7 ]
    [ 1.2  -0.3  -0.3   2.4 | 2.1 ]

Pivot by Row 1:
Row 2 = Row 2 - (0.6/3.0)*Row 1
Row 3 = Row 3 - (1.2/3.0)*Row 1

    [ 3.0   2.0   2.0  -5.0 |  8.0 ]
    [ 0     1.1   1.1  -4.4 |  1.1 ]
    [ 0    -1.1  -1.1   4.4 | -1.1 ]

Pivot by Row 2:
Row 3 = Row 3 - (-1)*Row 2

    [ 3.0   2.0   2.0  -5.0 | 8.0 ]
    [ 0     1.1   1.1  -4.4 | 1.1 ]
    [ 0     0     0     0   | 0   ]

    [ 3.0   2.0   2.0  -5.0 | 8.0 ]
    [ 0     1.1   1.1  -4.4 | 1.1 ]
    [ 0     0     0     0   | 0   ]

Back substitution:

    1.1 x_2 + 1.1 x_3 - 4.4 x_4 = 1.1                =>   x_2 = 1 - x_3 + 4 x_4
    3.0 x_1 + 2.0 x_2 + 2.0 x_3 - 5.0 x_4 = 8.0      =>   x_1 = 2 - x_4

Since x_3 and x_4 remain arbitrary, the system has infinitely many solutions. If we choose a value of x_3 and a value of x_4, then the corresponding values of x_1 and x_2 are uniquely determined.

Example: Gauss elimination when a unique solution exists

    [  1  2   1 |  2 ]
    [ -3  1  -1 |  5 ]
    [  1  9   8 | 23 ]

Pivot by Row 1:
Row 2 = Row 2 + 3*Row 1
Row 3 = Row 3 - Row 1

    [ 1  2  1 |  2 ]
    [ 0  7  2 | 11 ]
    [ 0  7  7 | 21 ]

Pivot by Row 2:
Row 3 = Row 3 - Row 2

    [ 1  2  1 |  2 ]
    [ 0  7  2 | 11 ]
    [ 0  0  5 | 10 ]

Back substitution: x_3 = 2, x_2 = 1, x_1 = -2. The system has one unique solution.

Example: Gauss elimination when no solution exists

    3 x_1 + 2 x_2 + x_3 = 3
    2 x_1 + x_2 + x_3 = 0
    6 x_1 + 2 x_2 + 4 x_3 = 6

The augmented matrix is

    [ 3  2  1 | 3 ]
    [ 2  1  1 | 0 ]
    [ 6  2  4 | 6 ]

Row 2 = Row 2 - (2/3)*Row 1
Row 3 = Row 3 - (6/3)*Row 1

    [ 3    2     1   |  3 ]
    [ 0  -1/3   1/3  | -2 ]
    [ 0   -2     2   |  0 ]

Row 3 = Row 3 - 6*Row 2

    [ 3    2     1   |  3 ]
    [ 0  -1/3   1/3  | -2 ]
    [ 0    0     0   | 12 ]

The last row indicates a contradiction: 0 = 12. Thus, there is no solution that will make the equations hold.

To determine whether the linear system has a unique solution, many solutions, or no solution, consider the echelon form (the upper triangular matrix at the end of Gauss elimination). It has the general form

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
               a'_22 x_2 + ... + a'_2n x_n = b'_2
                   :
               a'_rr x_r + ... + a'_rn x_n = b'_r
                                         0 = b'_{r+1}
                                         :
                                         0 = b'_m

If any of b'_{r+1}, ..., b'_m is nonzero, no solution exists.
Otherwise:
    If r = n, the system has one unique solution.
    If r < n, the system has infinitely many solutions.

Linear Independence

Given any set of m vectors a_1, a_2, ..., a_m, a linear combination of these vectors is of the form

    c_1 a_1 + c_2 a_2 + ... + c_m a_m

where c_1, c_2, ..., c_m are any constant scalars.

The set of vectors a_1, a_2, ..., a_m is linearly independent if c_1 = c_2 = ... = c_m = 0 is the only choice of c_1, c_2, ..., c_m that makes the linear combination equal to zero:

    c_1 a_1 + c_2 a_2 + ... + c_m a_m = 0

If there exists a choice of c_1, c_2, ..., c_m (not all zero) that makes the linear combination of a_1, a_2, ..., a_m equal to zero, then the vectors a_1, a_2, ..., a_m are said to be linearly dependent, because then we can express one of them as a linear combination of the others; for example, if c_1 is not zero,

    a_1 = -(1/c_1)(c_2 a_2 + ... + c_m a_m)

Example

    a_1 = [  3    0    2    2 ]
    a_2 = [ -6   42   24   54 ]
    a_3 = [ 21  -21    0  -15 ]

are linearly dependent because

    6 a_1 - (1/2) a_2 - a_3 = 0

Although this can be checked easily, it is not easy to discover.
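Dependence relations like the one above are easy to verify numerically. A small sketch of mine (not from the notes), checking 6 a_1 - (1/2) a_2 - a_3 = 0 component by component:

```python
a1 = [3, 0, 2, 2]
a2 = [-6, 42, 24, 54]
a3 = [21, -21, 0, -15]

# evaluate the claimed linear combination entry by entry
combo = [6 * x - 0.5 * y - z for x, y, z in zip(a1, a2, a3)]
print(combo)  # [0.0, 0.0, 0.0, 0.0] -- the vectors are linearly dependent
```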

[Figure: linearly dependent vectors v_1, v_2, v_3 in the plane, with v_1 + v_2 - 1.75 v_3 = 0; linearly independent vectors v_1, v_2, v_4.]

VECTOR SPACE

The span of a set of vectors a_1, a_2, ..., a_m is the set of all linear combinations of a_1, a_2, ..., a_m.

A vector space V is a set of vectors with the two algebraic operations of addition and scalar multiplication defined such that the following holds:

1) The sum a + b of any vectors a and b in V is also in V, and the product ka of any vector a in V and any scalar k is also in V.

2) For all vectors and scalars we have the familiar rules

    a + b = b + a
    (a + b) + c = a + (b + c)
    a + 0 = a
    a + (-a) = 0
    k(a + b) = ka + kb
    (k + l)a = ka + la
    k(la) = (kl)a
    1a = a

The dimension of V is the maximum number of linearly independent vectors in V and is denoted by dim V.

A linearly independent set consisting of the maximum possible number of vectors in V is called a basis. Thus, the number of vectors in a basis for V equals dim V.

The real n-dimensional vector space R^n is the space of all vectors with n real numbers as components and real numbers as scalars.

Rank of a Matrix

The rank of a matrix is the number of linearly independent row vectors of the matrix and is denoted by rank A.

Note that rank A = 0 if and only if A = 0.

Theorem 1: The rank of a matrix A equals the maximum number of linearly independent column vectors of A. Hence A and its transpose A^T have the same rank.

Proof: Let rank A = r. By definition, A has a linearly independent set of r row vectors, called v_1, v_2, ..., v_r. Each row of A, called a_1, ..., a_m, can be written as

    a_1 = c_11 v_1 + c_12 v_2 + ... + c_1r v_r
    a_2 = c_21 v_1 + c_22 v_2 + ... + c_2r v_r
        :
    a_m = c_m1 v_1 + c_m2 v_2 + ... + c_mr v_r

These are vector equations. Each of the above equations contains n components. If we consider the k-th component of each of the above equations, we get

    a_1k = c_11 v_1k + c_12 v_2k + ... + c_1r v_rk
    a_2k = c_21 v_1k + c_22 v_2k + ... + c_2r v_rk
        :
    a_mk = c_m1 v_1k + c_m2 v_2k + ... + c_mr v_rk

where k = 1, 2, ..., n. This can be written as

    [ a_1k ]         [ c_11 ]         [ c_12 ]                [ c_1r ]
    [ a_2k ] = v_1k  [ c_21 ] + v_2k  [ c_22 ] + ... + v_rk   [ c_2r ]
    [   :  ]         [   :  ]         [   :  ]                [   :  ]
    [ a_mk ]         [ c_m1 ]         [ c_m2 ]                [ c_mr ]

The left-hand side is the k-th column of A. This shows that each column of A can be written as a linear combination of a fixed set of r vectors, which means that the maximum number of linearly independent column vectors of A is no more than r (= the maximum number of linearly independent row vectors of A).

The same argument applies to A^T: the maximum number of linearly independent column vectors of A^T (rows of A) is no more than the maximum number of linearly independent row vectors of A^T (columns of A). Thus, the two numbers must be equal. Hence the maximum number of linearly independent column vectors of A equals the rank of the matrix A, and

    rank A^T = rank A

The span of the row vectors of a matrix A is called the row space of A, and the span of the column vectors the column space of A. From Theorem 1, we thus have

Theorem 2: The row space and the column space of a matrix A have the same dimension, equal to rank A.

Theorem 3: Row-equivalent matrices have the same rank.

By definition, the rank of a matrix is equal to the dimension of its row space, and the row spaces of row-equivalent matrices are the same, so row-equivalent matrices have the same rank. By this theorem, we can immediately determine the rank of a matrix from its echelon form.

Example: Determine the rank of

        [  3    0    2    2 ]
    A = [ -6   42   24   54 ]
        [ 21  -21    0  -15 ]

Row 2 = Row 2 + 2*Row 1
Row 3 = Row 3 - 7*Row 1

    [ 3    0    2    2 ]
    [ 0   42   28   58 ]
    [ 0  -21  -14  -29 ]

Row 3 = Row 3 + (1/2)*Row 2

    [ 3   0   2   2 ]
    [ 0  42  28  58 ]
    [ 0   0   0   0 ]

The echelon form has two nonzero rows. Therefore, rank A = 2.
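The same procedure, counting nonzero rows of the echelon form, can be sketched in Python (my own illustration, not from the notes); it uses partial pivoting by size for numerical robustness:

```python
def rank(A, tol=1e-9):
    """Rank of A = number of nonzero rows in its row echelon form."""
    A = [row[:] for row in A]          # work on a copy
    m, n = len(A), len(A[0])
    r = 0                              # index of the current pivot row
    for c in range(n):
        # pick the row (at or below r) with the largest entry in column c
        p = max(range(r, m), key=lambda i: abs(A[i][c]), default=None)
        if p is None or abs(A[p][c]) < tol:
            continue                   # no usable pivot in this column
        A[r], A[p] = A[p], A[r]
        for i in range(r + 1, m):      # eliminate below the pivot
            f = A[i][c] / A[r][c]
            A[i] = [aij - f * arj for aij, arj in zip(A[i], A[r])]
        r += 1
        if r == m:
            break
    return r

A = [[3, 0, 2, 2], [-6, 42, 24, 54], [21, -21, 0, -15]]
print(rank(A))  # 2
```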

Theorem 4: p row vectors a_1, a_2, ..., a_p (with n components each) are linearly independent if the matrix with row vectors a_1, a_2, ..., a_p has rank p; they are linearly dependent if that rank is less than p.

        [ a_11  a_12  ...  a_1n ]
    A = [ a_21  a_22  ...  a_2n ]
        [   :     :           : ]
        [ a_p1  a_p2  ...  a_pn ]

Theorem 5: p vectors with n components are always linearly dependent if n < p.

Proof: From Theorem 1, the maximum number of linearly independent row vectors of A equals the maximum number of linearly independent column vectors of A. Therefore, there can be at most n linearly independent row vectors; if p > n, the p vectors cannot all be independent.

From the definition of dimension, we thus have

Theorem 6: The vector space R^n consisting of all vectors with n components has dimension n.

General Properties of Solutions of Linear Systems

Theorem 7: Fundamental theorem for linear systems

a) A linear system of m equations in n unknowns x_1, ..., x_n

    a_11 x_1 + ... + a_1n x_n = b_1
    a_21 x_1 + ... + a_2n x_n = b_2
        :
    a_m1 x_1 + ... + a_mn x_n = b_m        (1)

has solutions if and only if the coefficient matrix A and the augmented matrix A~ have the same rank.

b) If this rank r equals n, the system (1) has precisely one solution.

c) If r < n, the system (1) has infinitely many solutions, all of which are obtained by determining r suitable unknowns (whose submatrix of coefficients must have rank r) in terms of the remaining n - r unknowns, to which arbitrary values can be assigned.

d) If solutions exist, they can all be obtained by Gauss elimination.

Proof:

a) Write A in terms of its column vectors c_1, ..., c_n:

    A = [ c_1  c_2  ...  c_n ]

From Ax = b: if the system has a solution x_1, ..., x_n, then b can be written as

    b = x_1 c_1 + x_2 c_2 + ... + x_n c_n

i.e., b is a linear combination of the column vectors of A. Adding the linearly dependent column vector b to the coefficient matrix A does not change the rank, so the augmented matrix A~ has the same rank as A. Conversely, if rank A = rank A~, then b must be a linear combination of the column vectors of A, and a solution must exist.

b) If rank A = rank A~ = r = n, the n column vectors of A are linearly independent. If x_1, x_2, ..., x_n is a solution, the representation

    b = x_1 c_1 + x_2 c_2 + ... + x_n c_n

which is a linear combination of linearly independent vectors, must be unique. For if there were y_1, ..., y_n such that

    b = y_1 c_1 + y_2 c_2 + ... + y_n c_n

then

    (x_1 - y_1) c_1 + (x_2 - y_2) c_2 + ... + (x_n - y_n) c_n = 0

From the definition of linear independence, each x_i - y_i must be 0, so each y_i must be the same as x_i, and the solution is unique.

c) If rank A = r < n, the column vectors of A consist of r linearly independent vectors and n - r dependent vectors. Suppose we reorder the unknowns and the columns of A such that the first r columns are linearly independent and the last n - r columns are linear combinations of the first r. When a solution exists, b can be written as a linear combination of the column vectors of A:

    b = x_1 c_1 + ... + x_r c_r + x_{r+1} c_{r+1} + ... + x_n c_n

All of the column vectors b, c_1, ..., c_n lie in the same vector space of dimension r. Then the vector

    b - (x_{r+1} c_{r+1} + ... + x_n c_n) = x_1 c_1 + ... + x_r c_r

can also be represented by a linear combination of only the first r column vectors. Even if the unknowns x_{r+1}, ..., x_n are changed arbitrarily, the values of x_1, ..., x_r on the right-hand side can be re-calculated to restore the equality.

Homogeneous Systems

    a_11 x_1 + ... + a_1n x_n = 0
    a_21 x_1 + ... + a_2n x_n = 0
        :
    a_m1 x_1 + ... + a_mn x_n = 0        (2)

Theorem 8: Homogeneous system

a) A homogeneous system (2) always has the trivial solution x_1 = x_2 = ... = x_n = 0.

b) Nontrivial solutions exist if and only if rank A < n.

c) If rank A = r < n, these solutions, together with x = 0, form a vector space of dimension n - r. In particular, if x_1 and x_2 are solution vectors of (2), then x = c_1 x_1 + c_2 x_2, where c_1 and c_2 are any scalars, is a solution vector of (2).

The vector space of all solutions of (2) is called the null space of the coefficient matrix A, because if we multiply any x in this null space by A we get 0. The dimension of the null space is called the nullity of A. In terms of these concepts, Theorem 8 states that

    rank A + nullity A = n

where n is the number of unknowns (the number of columns of A).

Theorem 9: System with fewer equations than unknowns
A homogeneous system of linear equations with fewer equations than unknowns always has nontrivial solutions.

Nonhomogeneous Systems

Theorem 10: Nonhomogeneous system
If a nonhomogeneous linear system of equations of the form (1) has solutions, then all these solutions are of the form

    x = x_0 + x_h

where x_0 is any fixed solution of (1) and x_h runs through all the solutions of the corresponding homogeneous system (2).

Proof: Let x be any given solution to the nonhomogeneous system (1) and x_0 be an arbitrarily chosen solution of (1). Then Ax = b and Ax_0 = b, so

    A(x - x_0) = 0

Hence (x - x_0) is a solution to the homogeneous system (2); call it x_h. Thus, x = x_0 + x_h.

INVERSE OF A MATRIX

In this section, we consider only square matrices. The inverse of an n x n matrix A is denoted by A^-1 and is an n x n matrix such that

    A A^-1 = A^-1 A = I

where I is the n x n unit matrix.

If A has an inverse, then A is called a nonsingular matrix. If A has no inverse, then A is called a singular matrix.

If A has an inverse, the inverse is unique: if both B and C are inverses of A, then AB = I and CA = I, so we obtain the uniqueness from

    B = IB = (CA)B = C(AB) = CI = C

Theorem 1: Existence of the inverse
The inverse A^-1 of an n x n matrix A exists if and only if rank A = n. Hence A is nonsingular if rank A = n, and is singular if rank A < n.

Determination of the Inverse

From the definition of the inverse of a matrix,

    A A^-1 = I

If we write A^-1 as a matrix consisting of column vectors x_1, x_2, ..., x_n, and write I as a matrix consisting of column vectors e_1, e_2, ..., e_n, where the i-th component of e_i equals 1 and the other components are zero, then

    A [ x_1  x_2  ...  x_n ] = [ e_1  e_2  ...  e_n ]

We can determine A^-1 if we solve for each of the x_i from A x_i = e_i. To do this, we augment the coefficient matrix A by all the column vectors e_i and perform row operations until the left part of the augmented matrix becomes the unit matrix (Gauss-Jordan elimination); then we obtain x_1, x_2, ..., x_n (= A^-1) in the right part of the reduced matrix.

Gauss-Jordan elimination is similar to Gauss elimination, but it also eliminates the entries above the diagonal and scales all of the diagonal entries to be equal to one.

Example: Find the inverse of

        [ -1   1  2 ]
    A = [  3  -1  1 ]
        [ -1   3  4 ]

    [ A | I ] = [ -1   1  2 | 1  0  0 ]
                [  3  -1  1 | 0  1  0 ]
                [ -1   3  4 | 0  0  1 ]

Row 2 = Row 2 + 3*Row 1
Row 3 = Row 3 - Row 1

    [ -1  1  2 |  1  0  0 ]
    [  0  2  7 |  3  1  0 ]
    [  0  2  2 | -1  0  1 ]

Row 3 = Row 3 - Row 2

    [ -1  1   2 |  1   0  0 ]
    [  0  2   7 |  3   1  0 ]
    [  0  0  -5 | -4  -1  1 ]

This is the end of the Gauss elimination. Next, perform Gauss-Jordan elimination to obtain a unit matrix in the left part. Scale each row by 1/(its diagonal entry) to make the diagonal entries equal to 1.

    [ 1  -1  -2   | -1    0    0   ]
    [ 0   1   3.5 |  1.5  0.5  0   ]
    [ 0   0   1   |  0.8  0.2 -0.2 ]

Row 1 = Row 1 + 2*Row 3
Row 2 = Row 2 - 3.5*Row 3

    [ 1  -1  0 |  0.6   0.4  -0.4 ]
    [ 0   1  0 | -1.3  -0.2   0.7 ]
    [ 0   0  1 |  0.8   0.2  -0.2 ]

Row 1 = Row 1 + Row 2

    [ 1  0  0 | -0.7   0.2   0.3 ]
    [ 0  1  0 | -1.3  -0.2   0.7 ]
    [ 0  0  1 |  0.8   0.2  -0.2 ]

The last three columns give

           [ -0.7   0.2   0.3 ]
    A^-1 = [ -1.3  -0.2   0.7 ]
           [  0.8   0.2  -0.2 ]

Let's check it:

    [ -1   1  2 ] [ -0.7   0.2   0.3 ]   [ 1  0  0 ]
    [  3  -1  1 ] [ -1.3  -0.2   0.7 ] = [ 0  1  0 ]
    [ -1   3  4 ] [  0.8   0.2  -0.2 ]   [ 0  0  1 ]
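The Gauss-Jordan procedure just described can be sketched in Python (my own illustration, not from the notes): augment A with the identity, reduce the left half to I, and read the inverse off the right half.

```python
def inverse(A):
    """Inverse of a square matrix via Gauss-Jordan elimination."""
    n = len(A)
    # build the augmented matrix [A | I]
    M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        # bring the largest available pivot into row c (partial pivoting)
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) == 0:
            raise ValueError("matrix is singular")
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]          # scale pivot row to 1
        for i in range(n):                      # clear the rest of column c
            if i != c:
                f = M[i][c]
                M[i] = [v - f * w for v, w in zip(M[i], M[c])]
    return [row[n:] for row in M]

Ainv = inverse([[-1, 1, 2], [3, -1, 1], [-1, 3, 4]])
print([[round(v, 6) for v in row] for row in Ainv])
# [[-0.7, 0.2, 0.3], [-1.3, -0.2, 0.7], [0.8, 0.2, -0.2]]
```

This reproduces the inverse computed by hand in the worked example.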

Useful Formulas

For a nonsingular 2 x 2 matrix,

        [ a_11  a_12 ]                  1     [  a_22  -a_12 ]
    A = [ a_21  a_22 ]    =>   A^-1 = ----- [ -a_21   a_11 ]
                                      det A

where det A = a_11 a_22 - a_12 a_21.

Example

    A = [ 3  1 ]        A^-1 = [  0.4  -0.1 ]
        [ 2  4 ]               [ -0.2   0.3 ]

For a nonsingular diagonal matrix,

        [ a_11  ...    0  ]                [ 1/a_11  ...     0    ]
    A = [   :          :  ]    =>   A^-1 = [    :            :    ]
        [   0   ...  a_nn ]                [    0    ...  1/a_nn  ]

Example

    A = [ 0.5  0  0 ]        A^-1 = [ 2  0     0 ]
        [ 0    4  0 ]               [ 0  0.25  0 ]
        [ 0    0  1 ]               [ 0  0     1 ]

The inverse of the inverse is the given matrix:

    (A^-1)^-1 = A

Inverse of a Product

    (AC)^-1 = C^-1 A^-1

To prove this, we start from the definition

    (AC)(AC)^-1 = I

Pre-multiplying by A^-1 gives

    C(AC)^-1 = A^-1

Again, pre-multiplying by C^-1 gives

    (AC)^-1 = C^-1 A^-1

We can similarly generalize this formula to products of more than two matrices:

    (AC...PQ)^-1 = Q^-1 P^-1 ... C^-1 A^-1

Vanishing of Products

Recall the strange fact that AB = 0 does not necessarily imply A = 0 or B = 0 or BA = 0:

    [ 1  1 ] [ -1   1 ]   [ 0  0 ]        [ -1   1 ] [ 1  1 ]   [  1   1 ]
    [ 2  2 ] [  1  -1 ] = [ 0  0 ]   but  [  1  -1 ] [ 2  2 ] = [ -1  -1 ]

Each of these two matrices has rank less than n = 2. This situation changes when n x n matrices have rank n.

Theorem 2: Cancellation law
Let A, B, C be n x n matrices. Then:
(a) If rank A = n and AB = AC, then B = C.
(b) If rank A = n, then AB = 0 implies B = 0. Hence if AB = 0 but A ≠ 0 as well as B ≠ 0, then rank A < n and rank B < n.
(c) If A is singular, so are AB and BA.

Proof:
(a) If rank A = n, A is nonsingular. We can pre-multiply both sides of AB = AC by A^-1 and obtain B = C.
(b) Similarly, we can pre-multiply both sides of AB = 0 by A^-1.
(c) A is singular => rank A < n => Ax = 0 has a nontrivial solution x. Pre-multiplication by B gives BAx = 0, so BA has a nontrivial null vector => rank BA < n => BA is singular.
From the theorem that the transpose of any matrix has the same rank as the original matrix: rank A^T < n => A^T is singular => B^T A^T is singular (by the result just proved) => rank B^T A^T < n => rank (AB)^T < n => AB is singular.

DETERMINANTS

Determinants were first defined for solving linear systems, although they are impractical in computations. They have important engineering applications in eigenvalue problems and differential equations.

An n-th order determinant is an expression associated with an n x n square matrix, beginning with n = 2. A determinant of second order is denoted and defined by

                  | a_11  a_12 |
    D = det A =   | a_21  a_22 | = a_11 a_22 - a_12 a_21

Example

    | 4  3 |
    | 2  5 | = 4*5 - 3*2 = 14

Cramer's Rule

For a linear system of two equations

    a_11 x_1 + a_12 x_2 = b_1
    a_21 x_1 + a_22 x_2 = b_2

the solutions can be obtained from

    x_1 = D_1 / D,    x_2 = D_2 / D

where

          | b_1  a_12 |
    D_1 = | b_2  a_22 | = b_1 a_22 - a_12 b_2

          | a_11  b_1 |
    D_2 = | a_21  b_2 | = a_11 b_2 - b_1 a_21

Example

    4 x_1 + 3 x_2 = 12
    2 x_1 + 5 x_2 = -8

    D = | 4  3 |          D_1 = | 12  3 |          D_2 = | 4  12 |
        | 2  5 | = 14,          | -8  5 | = 84,          | 2  -8 | = -56

    x_1 = 84/14 = 6    and    x_2 = -56/14 = -4
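The two-unknown formulas above are a direct transcription into code; a small sketch of mine (not from the notes):

```python
def cramer2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    D = a11 * a22 - a12 * a21
    if D == 0:
        raise ValueError("D = 0: no unique solution")
    D1 = b1 * a22 - a12 * b2        # replace column 1 by b
    D2 = a11 * b2 - b1 * a21        # replace column 2 by b
    return D1 / D, D2 / D

# 4 x_1 + 3 x_2 = 12,  2 x_1 + 5 x_2 = -8
print(cramer2(4, 3, 2, 5, 12, -8))  # (6.0, -4.0)
```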

Third-Order Determinants

A determinant of third order can be defined by

        | a_11  a_12  a_13 |
    D = | a_21  a_22  a_23 |
        | a_31  a_32  a_33 |

             | a_22  a_23 |          | a_12  a_13 |          | a_12  a_13 |
      = a_11 | a_32  a_33 | - a_21   | a_32  a_33 | + a_31   | a_22  a_23 |

Note that the signs on the right are + - +. Each term on the right is an entry of the first column multiplying its minor, which is the determinant of the submatrix obtained by deleting the row and column of that entry.

    D = a_11 a_22 a_33 - a_11 a_23 a_32 - a_21 a_12 a_33 + a_21 a_13 a_32 + a_31 a_12 a_23 - a_31 a_13 a_22

Cramer's rule
For a linear system of three equations

    a_11 x_1 + a_12 x_2 + a_13 x_3 = b_1
    a_21 x_1 + a_22 x_2 + a_23 x_3 = b_2
    a_31 x_1 + a_32 x_2 + a_33 x_3 = b_3

we have

    x_1 = D_1 / D,    x_2 = D_2 / D,    x_3 = D_3 / D

where D is the determinant of the system and

          | b_1  a_12  a_13 |          | a_11  b_1  a_13 |          | a_11  a_12  b_1 |
    D_1 = | b_2  a_22  a_23 |,   D_2 = | a_21  b_2  a_23 |,   D_3 = | a_21  a_22  b_2 |
          | b_3  a_32  a_33 |          | a_31  b_3  a_33 |          | a_31  a_32  b_3 |

Determinant of Any Order n

A determinant of order n is a scalar associated with an n x n matrix A = [a_jk]:

                | a_11  a_12  ...  a_1n |
    D = det A = | a_21  a_22  ...  a_2n |
                |   :     :           : |
                | a_n1  a_n2  ...  a_nn |

It is defined for n = 1 by

    D = a_11

and for n >= 2 by

    D = a_j1 C_j1 + a_j2 C_j2 + ... + a_jn C_jn        (j = 1, 2, ..., or n)

or

    D = a_1k C_1k + a_2k C_2k + ... + a_nk C_nk        (k = 1, 2, ..., or n)

where

    C_jk = (-1)^(j+k) M_jk

and M_jk is a determinant of order n - 1, namely the determinant of the submatrix of A obtained by deleting the row and column of the entry a_jk (the j-th row and the k-th column).

C_jk is called the cofactor of a_jk in D, and M_jk is called the minor of a_jk in D. The expression for D may be written as

    D = sum_{k=1}^{n} (-1)^(j+k) a_jk M_jk        (j = 1, 2, ..., or n)

    D = sum_{j=1}^{n} (-1)^(j+k) a_jk M_jk        (k = 1, 2, ..., or n)

Example: Minors and cofactors of a third-order determinant

For

    | a_11  a_12  a_13 |
    | a_21  a_22  a_23 |
    | a_31  a_32  a_33 |

the minors are

           | a_22  a_23 |           | a_21  a_23 |           | a_21  a_22 |
    M_11 = | a_32  a_33 |,   M_12 = | a_31  a_33 |,   M_13 = | a_31  a_32 |

           | a_12  a_13 |           | a_11  a_13 |           | a_11  a_12 |
    M_21 = | a_32  a_33 |,   M_22 = | a_31  a_33 |,   M_23 = | a_31  a_32 |

           | a_12  a_13 |           | a_11  a_13 |           | a_11  a_12 |
    M_31 = | a_22  a_23 |,   M_32 = | a_21  a_23 |,   M_33 = | a_21  a_22 |

and the cofactors are

    C_11 = +M_11,   C_12 = -M_12,   C_13 = +M_13
    C_21 = -M_21,   C_22 = +M_22,   C_23 = -M_23
    C_31 = +M_31,   C_32 = -M_32,   C_33 = +M_33

Example

        |  1  3  0 |
    D = |  2  6  4 |
        | -1  0  2 |

The expansion by the first row is

          | 6  4 |       |  2  4 |       |  2  6 |
    D = 1 | 0  2 | - 3   | -1  2 | + 0   | -1  0 | = 12 - 24 + 0 = -12

The expansion by the last column is

          |  2  6 |       |  1  3 |       | 1  3 |
    D = 0 | -1  0 | - 4   | -1  0 | + 2   | 2  6 | = 0 - 12 + 0 = -12

Determinant of a triangular matrix
The determinant of any triangular matrix equals the product of all the entries of the main diagonal. This can be shown by expanding by rows if the matrix is lower triangular and expanding by columns if the matrix is upper triangular.

Example

    |  3  0  0 |
    |  6  4  0 | = 3 * 4 * 5 = 60
    | -1  2  5 |
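The cofactor (Laplace) expansion can be applied recursively: expand along the first row and evaluate each minor the same way. A sketch of mine (not from the notes); note this is for illustration only, since the method costs on the order of n! operations:

```python
def det(A):
    """Determinant by recursive cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for k in range(n):
        # minor M_1k: delete the first row and column k
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        # cofactor sign (-1)^(1+k), written with 0-based k
        total += (-1) ** k * A[0][k] * det(minor)
    return total

print(det([[1, 3, 0], [2, 6, 4], [-1, 0, 2]]))  # -12
```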

GENERAL PROPERTIES OF DETERMINANTS

Theorem 1: Transposition
The value of a determinant is not altered if its rows are written as columns, in the same order: det A^T = det A.

Example

    | 1  2  -1 |   |  1  3  0 |
    | 3  6   0 | = |  2  6  4 |
    | 0  4   2 |   | -1  0  2 |

Theorem 2: Multiplication by a constant
If all the entries in one row (or one column) of a determinant are multiplied by the same factor k, the value of the new determinant is k times the value of the given determinant. Note that for an n x n matrix,

    det (kA) = k^n det A

Example: factoring 2 out of the second row,

    |  1  3  0 |       |  1  3  0 |
    |  2  6  4 | = 2   |  1  3  2 |
    | -1  0  2 |       | -1  0  2 |

Theorem 3: If all the entries in a row (or column) of a determinant are zero, the value of the determinant is zero.

Theorem 4: If each entry in a row (or column) of a determinant is expressed as a binomial, the determinant can be written as the sum of two determinants.

Example

    | a_1 + d_1  b_1  c_1 |   | a_1  b_1  c_1 |   | d_1  b_1  c_1 |
    | a_2 + d_2  b_2  c_2 | = | a_2  b_2  c_2 | + | d_2  b_2  c_2 |
    | a_3 + d_3  b_3  c_3 |   | a_3  b_3  c_3 |   | d_3  b_3  c_3 |

Theorem 5: Interchange of rows or columns
If any two rows (or two columns) of a determinant are interchanged, the value of the determinant is multiplied by -1.

Example

    |  2  6  4 |       |  1  3  0 |
    |  1  3  0 | = -   |  2  6  4 |
    | -1  0  2 |       | -1  0  2 |

Theorem 6: Proportional rows or columns
If corresponding entries in two rows (or two columns) of a determinant are proportional, the value of the determinant is zero.

Example

    | 3   6  4 |
    | 1   1  3 | = 0        (Row 3 = 2 * Row 1)
    | 6  12  8 |

Theorem 7: Addition of a row or column
The value of a determinant is left unchanged if the entries in a row (or column) are altered by adding to them any constant multiple of the corresponding entries in any other row (or column, respectively).

Example
The determinant of a matrix can be determined by using row operations (Gauss elimination) to obtain an upper triangular matrix. Keep in mind that interchanging any two rows will affect the value of the determinant by a factor of -1.

        |  2  0  -4   6 |
    D = |  4  5   1   0 |
        |  0  2   6  -1 |
        | -3  8   9   1 |

Row 2 = Row 2 - 2*Row 1;  Row 4 = Row 4 + (3/2)*Row 1:

      | 2  0  -4    6  |
    = | 0  5   9  -12  |
      | 0  2   6   -1  |
      | 0  8   3   10  |

Row 3 = Row 3 - (2/5)*Row 2;  Row 4 = Row 4 - (8/5)*Row 2:

      | 2  0   -4     6   |
    = | 0  5    9   -12   |
      | 0  0    2.4   3.8 |
      | 0  0  -11.4  29.2 |

Row 4 = Row 4 + (11.4/2.4)*Row 3:

      | 2  0   -4     6    |
    = | 0  5    9   -12    |
      | 0  0    2.4   3.8  |
      | 0  0    0    47.25 |

    D = 2 * 5 * 2.4 * 47.25 = 1134

In each cycle of elimination, the determinant can be expanded as the sum of products of the entries in the first column and their corresponding cofactors. Only one term remains; thus, we can write the computation in compact form as

            | 5  9  -12 |          |  2.4    3.8 |
    D = 2   | 2  6   -1 | = 2 * 5  | -11.4  29.2 | = 2 * 5 * (2.4*29.2 + 3.8*11.4) = 1134
            | 8  3   10 |

Theorem 8: Determinant of a product of matrices
For any n x n matrices A and B,

    det (AB) = det (BA) = det A det B

Example

    A = [ 1  2 ]    B = [ 2  0 ]    AB = [  4   6 ]
        [ 3  4 ]        [ 1  3 ]         [ 10  12 ]

    det A = -2,    det B = 6,    det (AB) = 4*12 - 6*10 = -12 = det A * det B
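Evaluating a determinant by Gauss elimination, as in the example above, is the practical method: row replacements leave D unchanged (Theorem 7), each row swap flips its sign (Theorem 5), and the triangular result is the product of the diagonal. A sketch of mine (not from the notes):

```python
def det_elim(A, tol=1e-12):
    """Determinant by Gauss elimination with partial pivoting."""
    A = [list(map(float, row)) for row in A]   # work on a copy
    n = len(A)
    sign = 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(A[i][c]))
        if abs(A[p][c]) < tol:
            return 0.0                         # a zero column => det = 0
        if p != c:
            A[c], A[p] = A[p], A[c]
            sign = -sign                       # each swap flips the sign
        for i in range(c + 1, n):              # row replacements keep det
            f = A[i][c] / A[c][c]
            A[i] = [aij - f * acj for aij, acj in zip(A[i], A[c])]
    d = sign
    for i in range(n):                         # product of the diagonal
        d *= A[i][i]
    return d

D = det_elim([[2, 0, -4, 6], [4, 5, 1, 0], [0, 2, 6, -1], [-3, 8, 9, 1]])
print(round(D))  # 1134
```

This costs on the order of n^3 operations, in contrast to the n!-cost cofactor expansion.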

If the entries of a square matrix are scalars (constants), its determinant is also a constant. If the entries of a square matrix are functions, its determinant is also a function, and we can determine its derivative from the following useful formula.

Theorem 9: Derivative of a determinant
The derivative D' of a determinant D of order n whose entries are differentiable functions can be written as

    D' = D_1 + D_2 + ... + D_n

where D_j is obtained from D by differentiating the entries in the j-th row.

Example: Derivative of a third-order determinant

    d  | f  g  h |   | f'  g'  h' |   | f   g   h  |   | f   g   h  |
    -- | p  q  r | = | p   q   r  | + | p'  q'  r' | + | p   q   r  |
    dx | u  v  w |   | u   v   w  |   | u   v   w  |   | u'  v'  w' |

Rank in Terms of Determinants

Theorem 1: Rank in terms of determinants
An m x n matrix A has rank r >= 1 if and only if A has an r x r submatrix with nonzero determinant, whereas the determinant of every square submatrix with r + 1 or more rows that A has is zero.

In particular, if A is a square matrix, A is nonsingular, so that the inverse A^-1 exists, if and only if det A ≠ 0.

Theorem 2: Cramer's Theorem
(a) If the determinant D = det A of a linear system of n equations

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
    ...
    an1 x1 + an2 x2 + ... + ann xn = bn

in the same number of unknowns is not zero, the system has precisely one solution. This solution is given by the formulas

    x1 = D1/D,   x2 = D2/D,   ...,   xn = Dn/D

where Dk is the determinant obtained from D by replacing the k-th column of D by the column with the entries b1, ..., bn.
(b) Hence if the system is homogeneous and D != 0, it has only the trivial solution x1 = 0, x2 = 0, ..., xn = 0. If D = 0, the homogeneous system also has nontrivial solutions.

Theorem 3: Inverse of a matrix
The inverse of a nonsingular n x n matrix A = [ajk] is given by

             1           1    | C11  C21  ...  Cn1 |
    A^-1 = ----- C^T = ------ | C12  C22  ...  Cn2 |
           det A        det A |  :    :          : |
                              | C1n  C2n  ...  Cnn |

where Cjk is the cofactor of ajk in det A. C^T is called the adjoint of A.

Example

        | -1   1   2 |
    A = |  3  -1   1 |       det A = (-1)(-7) - (1)(13) + (2)(8) = 10
        | -1   3   4 |

The cofactors are

    C11 = | -1  1 | = -7      C12 = -| 3  1 | = -13     C13 = | 3  -1 | = 8
          |  3  4 |                  |-1  4 |                 |-1   3 |

    C21 = 2,   C22 = -2,   C23 = 2
    C31 = 3,   C32 = 7,    C33 = -2

            1  |  -7   2   3 |   | -0.7   0.2   0.3 |
    A^-1 = --- | -13  -2   7 | = | -1.3  -0.2   0.7 |
           10  |   8   2  -2 |   |  0.8   0.2  -0.2 |
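Theorem 3 can be checked numerically; a small pure-Python sketch of the cofactor (adjoint) formula for a 3 x 3 matrix, applied to the example above (the helper names are my own):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [a b; c d]."""
    return a * d - b * c

def inverse3_by_cofactors(A):
    """Inverse of a 3x3 matrix via A^-1 = (1/det A) C^T."""
    C = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        for k in range(3):
            rows = [r for r in range(3) if r != j]
            cols = [c for c in range(3) if c != k]
            minor = det2(A[rows[0]][cols[0]], A[rows[0]][cols[1]],
                         A[rows[1]][cols[0]], A[rows[1]][cols[1]])
            C[j][k] = (-1) ** (j + k) * minor            # cofactor C_jk
    detA = sum(A[0][k] * C[0][k] for k in range(3))      # expand along row 1
    # transpose of C (the adjoint), divided by det A
    return [[C[k][j] / detA for k in range(3)] for j in range(3)]

A = [[-1, 1, 2], [3, -1, 1], [-1, 3, 4]]
Ainv = inverse3_by_cofactors(A)
# Ainv is [[-0.7, 0.2, 0.3], [-1.3, -0.2, 0.7], [0.8, 0.2, -0.2]]
```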

PROGRAMMING OF GAUSS ELIMINATION

ALGORITHM GAUSS (A~ = [A b])
Input:  Augmented n x (n+1) matrix A~ = [ajk], where a(j,n+1) = bj
Output: Solution xj, or a message that the system has no unique solution

For k = 1 to n-1                  (k refers to the unknown being eliminated)
    If akk = 0 then
        Find the smallest j > k such that ajk != 0
        If no such j exists, then output "No unique solution exists" and stop
        Else exchange the contents of rows j and k of A~
    End
    For j = k+1 to n              (j refers to the row being eliminated)
        mjk = ajk / akk
        For p = k+1 to n+1        (p refers to the column in row j)
            ajp = ajp - mjk * akp
        End
    End
End
If ann = 0 then output "No unique solution exists" and stop
Else start back substitution

    xn = a(n,n+1) / ann
    For i = n-1 downto 1          (i refers to the unknown being determined)
        xi = ( a(i,n+1) - sum_{j=i+1}^{n} aij * xj ) / aii
    End
    Output xj and stop
End GAUSS

mjk is called the multiplier, because the pivot equation is multiplied by this factor before it is subtracted from the j-th equation.

Operation count
The quality of a numerical method is judged in terms of
- the amount of storage,
- the amount of time (= number of operations),
- the effect of round-off error.
For Gauss elimination, the number of operations is as follows. In step k we eliminate xk from n-k equations. This needs n-k divisions in computing the mjk, and (n-k)(n-k+1) multiplications and as many subtractions. We do such elimination steps for k from 1 to n-1.
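The pseudocode above can be transcribed almost line by line into Python; this sketch follows the same structure (row exchange only when a pivot is zero, exactly as in ALGORITHM GAUSS):

```python
def gauss(A, b):
    """Solve Ax = b by Gauss elimination with back substitution.

    A is an n x n list of lists, b a list of length n. Mirrors the
    GAUSS algorithm above (rows are exchanged only when a pivot is 0).
    """
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]        # augmented matrix
    for k in range(n - 1):
        if M[k][k] == 0.0:                          # find a usable pivot row
            for j in range(k + 1, n):
                if M[j][k] != 0.0:
                    M[k], M[j] = M[j], M[k]
                    break
            else:
                raise ValueError("No unique solution exists")
        for j in range(k + 1, n):
            m = M[j][k] / M[k][k]                   # the multiplier m_jk
            for p in range(k, n + 1):
                M[j][p] -= m * M[k][p]
    if M[n - 1][n - 1] == 0.0:
        raise ValueError("No unique solution exists")
    x = [0.0] * n                                   # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

x = gauss([[-1, 1, 2], [3, -1, 1], [-1, 3, 4]], [2, 6, 4])
# x is approximately [1.0, -1.0, 2.0]
```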

The total number of operations for Gauss elimination is

    f(n) = sum_{k=1}^{n-1} (n-k) + 2 sum_{k=1}^{n-1} (n-k)(n-k+1)
         = (1/2) n(n-1) + (2/3) n(n^2 - 1)  ~  (2/3) n^3

where (2/3)n^3 is obtained by dropping the lower powers of n. We see that f(n) grows about proportionally to n^3. We say that f(n) is of order n^3 and write

    f(n) = O(n^3)

where O suggests "order". The general definition of O is: f(n) = O(h(n)) if f(n)/h(n) remains bounded (does not grow to infinity).

For back substitution, we make n-i multiplications and as many subtractions for xi, plus 1 division. The total number of operations in the back substitution is

    b(n) = n + 2 sum_{i=1}^{n-1} (n-i) = n + n(n-1) = n^2 = O(n^2)

We see that it grows more slowly than the number of operations in the forward elimination of the Gauss algorithm.

Example
    n        Elimination    Back substitution
    100      0.7 sec        0.005 sec
    1000     11 min         0.5 sec

Difficulty with small pivot entries
If a pivot entry akk is zero, we must interchange that row with another. In addition, if akk is very small we should still interchange the equations, because mjk will be very large and we will subtract a large multiple of one row from the others. This creates more round-off error and affects the accuracy of the result.

Example
    0.0004 x1 + 1.402 x2 = 1.406
    0.4003 x1 - 1.502 x2 = 2.501
We know that the exact solution is x1 = 10 and x2 = 1. Let's try to solve this system by Gauss elimination using 4-digit floating-point numbers.
(a)  m = 0.4003 / 0.0004 = 1001 (rounded)
Subtracting m times Equation 1 from Equation 2, we get
    -1405 x2 = -1404
Then x2 = 1404/1405 = 0.9993, and from the first equation

    x1 = (1.406 - 1.402 * 0.9993) / 0.0004 = 0.005 / 0.0004 = 12.5

This large error occurs because a11 is small compared with a12, so the small round-off error in x2 leads to a large error in x1.

(b) Let's use the second equation as the pivot equation for the first unknown:
    m = 0.0004 / 0.4003 = 0.0009993
Subtracting m times Equation 2 from Equation 1, we get
    1.404 x2 = 1.404
so x2 = 1, and from the second equation x1 = (2.501 + 1.502) / 0.4003 = 10.
This success occurs because a21 is not very small compared with a22, so a small round-off error in x2 would not lead to a large error in x1. Therefore, we prefer a pivot equation whose pivot is the largest coefficient of the unknowns (in absolute value) in its equation. In practice, one picks the pivot equation so that the ratio between the pivot candidate and the largest coefficient in its row is the largest among all available rows.

Example: Choice of pivot equation

    -x1 +   x2 + 2 x3 = 2        ratio 1/2 = 0.5
    3x1 -   x2 +   x3 = 6        ratio 3/3 = 1
    -x1 + 3 x2 + 4 x3 = 4        ratio 1/4 = 0.25

Therefore, we select the second equation as the pivot equation and eliminate x1. We now have

    3 x1 -      x2 +        x3 = 6
         (2/3) x2 +  (7/3) x3 = 4          ratio (2/3)/(7/3) = 2/7
         (8/3) x2 + (13/3) x3 = 6          ratio (8/3)/(13/3) = 8/13

Here we select the last equation as the pivot equation and get the triangular system

    3 x1 -      x2 +        x3 = 6
         (8/3) x2 + (13/3) x3 = 6
                     (5/4) x3 = 5/2

Back substitution gives x3 = 2, x2 = -1, and x1 = 1.
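The damage done by the small pivot in the 4-digit example above can be reproduced by rounding every intermediate result to 4 significant digits; a sketch (the rounding helper r4 is my own approximation of 4-digit floating-point arithmetic):

```python
def r4(x):
    """Round x to 4 significant digits (simulates 4-digit floating point)."""
    from math import floor, log10
    if x == 0.0:
        return 0.0
    return round(x, 3 - floor(log10(abs(x))))

# (a) pivot on the small entry 0.0004
m  = r4(0.4003 / 0.0004)                        # 1001
c  = r4(-1.502 - r4(m * 1.402))                 # -1405
d  = r4( 2.501 - r4(m * 1.406))                 # -1404
x2 = r4(d / c)                                  # 0.9993
x1 = r4((1.406 - r4(1.402 * x2)) / 0.0004)      # 12.5  (exact answer: 10)

# (b) pivot on 0.4003 instead
m  = r4(0.0004 / 0.4003)                        # 0.0009993
c  = r4(1.402 - r4(m * -1.502))                 # 1.404
d  = r4(1.406 - r4(m *  2.501))                 # 1.404
y2 = r4(d / c)                                  # 1.0
y1 = r4((2.501 + r4(1.502 * y2)) / 0.4003)      # 10.0
```

The same arithmetic, pivoted differently, gives x1 = 12.5 in case (a) and the correct x1 = 10 in case (b).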

ILL-CONDITIONING AND NORMS
Sometimes we observe that some systems Ax = b give an accurate solution while other systems do not. We call Ax = b well-conditioned if small errors in the coefficients or in the solution process have only a small effect on the solution, and ill-conditioned if the effect on the solution is large. For two equations in two unknowns, ill-conditioning occurs if and only if the equations represent two nearly parallel lines, so that their intersection moves very much when a line is shifted only slightly.

Example
    0.9999 x - 1.0001 y = 1
           x -        y = 1
The exact solution of this system is x = 0.5 and y = -0.5, whereas the system
    0.9999 x - 1.0001 y = 1
           x -        y = 1 + e
has the solution x = 0.5 + 5000.5 e and y = -0.5 + 4999.5 e.
This shows that if the equation is modified slightly by e, the solution changes significantly, by about 5000 e. Note that the lines of the two equations are nearly parallel.

RESIDUAL
The residual of an approximate solution x~ of Ax = b is defined as
    r = b - A x~
and because b = Ax,
    r = A (x - x~)
r is small when x~ is very accurate, but a small r does not imply accuracy of x~.

Example
    1.0001 x1 +        x2 = 2.0001
           x1 + 1.0001 x2 = 2.0001
The exact solution is x1 = 1 and x2 = 1. Suppose we have the bad solution x~1 = 2, x~2 = 0.0001. Its residual is

    r = | 2.0001 | - | 1.0001  1.0000 | | 2.0000 | = | -0.0002 |
        | 2.0001 |   | 1.0000  1.0001 | | 0.0001 |   |  0.0000 |

One might be fooled by inspecting only the residual and conclude that the solution is accurate because the residual is small. To measure the ill-conditioning of a linear system, we need the following definitions.
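The point of the example — a tiny residual produced by a badly wrong solution — is easy to reproduce:

```python
A = [[1.0001, 1.0], [1.0, 1.0001]]
b = [2.0001, 2.0001]
x_bad = [2.0, 0.0001]          # the exact solution is [1, 1]

# residual r = b - A x
r = [b[i] - sum(A[i][j] * x_bad[j] for j in range(2)) for i in range(2)]
# r is about [-0.0002, 0.0]: small, yet x_bad is far from [1, 1]

# the error of the approximation is of order 1, not of order 1e-4
err = max(abs(x_bad[k] - 1.0) for k in range(2))
```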

VECTOR NORM
A vector norm of x, denoted by ||x||, is a generalized length of the vector. It must have the following properties:
- ||x|| is a nonnegative real number
- ||x|| = 0 if and only if x = 0
- ||kx|| = |k| ||x|| for all scalars k
- ||x + y|| <= ||x|| + ||y||     (triangle inequality)

Definition of the p-norm

    ||x||_p = ( |x1|^p + |x2|^p + ... + |xn|^p )^(1/p)

where p is a fixed number and p >= 1.

Examples
    ||x||_1   = |x1| + ... + |xn|            is called the l1-norm
    ||x||_2   = sqrt(x1^2 + ... + xn^2)      is called the Euclidean or l2-norm
    ||x||_inf = max_j |xj|                   is called the l-infinity norm

For n = 3, the l2-norm is the usual length of a vector in 3-D space.

Example
If x = [2  -3  0  1  -4]^T, then ||x||_1 = 10, ||x||_2 = sqrt(30), and ||x||_inf = 4.

If x and x~ are the exact and approximate solutions, ||x - x~|| is a measure of the distance between them, that is, of the error of x~.
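The three norms are one-liners in Python; checking them on the example vector:

```python
def norm1(x):
    """l1-norm: sum of absolute values."""
    return sum(abs(v) for v in x)

def norm2(x):
    """l2 (Euclidean) norm."""
    return sum(v * v for v in x) ** 0.5

def norm_inf(x):
    """l-infinity norm: largest absolute component."""
    return max(abs(v) for v in x)

x = [2, -3, 0, 1, -4]
# norm1(x) = 10, norm2(x) = sqrt(30) = 5.477..., norm_inf(x) = 4
```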

MATRIX NORM
If A is an n x n matrix and x is any vector with n components, then Ax is a vector with n components. We can take the vector norms of Ax and x. One can prove that there is a number c such that
    ||Ax|| <= c ||x||
Over all nonzero x, the smallest such c is the matrix norm of A. Alternatively,

    ||A|| = max_{x != 0}  ||Ax|| / ||x||

Note that ||A|| depends on the corresponding vector norm. One can show that
- ||A|| equals the largest column absolute sum for the l1-norm,
- ||A|| equals the largest row absolute sum for the l-infinity norm.
The matrix norm has the following properties:
    ||Ax||  <= ||A|| ||x||      (by definition)
    ||AB||  <= ||A|| ||B||
    ||A^n|| <= ||A||^n

Example: An ill-conditioned system

    A = | 0.9999  -1.0001 |        A^-1 = | -5000.0  5000.5 |
        | 1.0000  -1.0000 |               | -5000.0  4999.5 |

For the l1-norm,         ||A|| = 2.0001  and  ||A^-1|| = 10000
For the l-infinity norm, ||A|| = 2       and  ||A^-1|| = 10000.5
Note that ||A^-1|| >> ||A||.

CONDITION NUMBER OF A MATRIX
The condition number of a square nonsingular matrix A is defined as

    kappa(A) = ||A|| ||A^-1||

Theorem 1: Condition number
A linear system of equations Ax = b whose condition number is small is well-conditioned. A large condition number indicates ill-conditioning.

It can be shown that

    ||x - x~|| / ||x||  <=  kappa(A) ||r|| / ||b||        where r = b - A x~

    ||dx|| / ||x||  <=  kappa(A) ||dA|| / ||A||    and    ||dx|| / ||x||  <=  kappa(A) ||db|| / ||b||

This means that the relative error in the solution is bounded by the condition number multiplied by the relative residual, the relative error in A, or the relative error in b.

Example
From the previous example, kappa(A) = 2.0001 * 10000 = 20001 (l1-norm). The condition number of I equals 1, which is the lowest possible value of the condition number. For a matrix that is close to a singular matrix (det A small compared with the maximum entry of A), the condition number will be large.
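For the 2 x 2 example, the l1 condition number can be computed directly with the 2 x 2 inverse formula:

```python
A = [[0.9999, -1.0001], [1.0, -1.0]]

detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]        # 0.0002
Ainv = [[ A[1][1] / detA, -A[0][1] / detA],
        [-A[1][0] / detA,  A[0][0] / detA]]

def mat_norm1(M):
    """Matrix l1-norm: largest column absolute sum."""
    return max(abs(M[0][j]) + abs(M[1][j]) for j in range(2))

kappa = mat_norm1(A) * mat_norm1(Ainv)
# kappa is about 2.0001 * 10000 = 20001: badly ill-conditioned
```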

LU-FACTORIZATION
A given square matrix A can be written as A = LU, where L is lower triangular and U is upper triangular. For example,

    A = | 2  3 | = LU = | 1  0 | | 2   3 |
        | 8  5 |        | 4  1 | | 0  -7 |

It can be proved that for any nonsingular matrix, the rows can be reordered so that the resulting matrix A has an LU-factorization, where the matrix L contains the multipliers mjk of the Gauss elimination and U is the upper triangular reduced matrix. The important idea is that L and U can be computed directly, without carrying out Gauss elimination. This computation needs about n^3/3 operations, which is only half as many as Gauss elimination, which needs about 2n^3/3. Once we have L and U, we can determine the solution in two substitution steps, each of which involves only about n^2 operations. Note from Ax = LUx = b that we can solve Ly = b for y and then Ux = y for x.

The method that computes L and U this way is called Doolittle's method. It gives a lower triangular L whose diagonal entries all equal 1. Another method is called Crout's method; it gives an upper triangular U whose diagonal entries all equal 1.

Doolittle's method
    u1k = a1k                                            k = 1, ..., n
    mj1 = aj1 / u11                                      j = 2, ..., n
    ujk = ajk - sum_{s=1}^{j-1} mjs usk                  k = j, ..., n;    j >= 2
    mjk = ( ajk - sum_{s=1}^{k-1} mjs usk ) / ukk        j = k+1, ..., n;  k >= 2

Crout's method
    mj1 = aj1                                            j = 1, ..., n
    u1k = a1k / m11                                      k = 2, ..., n
    mjk = ajk - sum_{s=1}^{k-1} mjs usk                  j = k, ..., n;    k >= 2
    ujk = ( ajk - sum_{s=1}^{j-1} mjs usk ) / mjj        k = j+1, ..., n;  j >= 2

Example

    A = | 2  3 |
        | 8  5 |

    u11 = a11 = 2,   u12 = a12 = 3
    m21 = a21 / u11 = 8/2 = 4
    u22 = a22 - m21 u12 = 5 - 4*3 = -7
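Doolittle's formulas above can be evaluated directly; a minimal pure-Python sketch (no pivoting, so it assumes the leading submatrices are nonsingular):

```python
def doolittle_lu(A):
    """Doolittle LU factorization: A = LU with unit diagonal in L."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for j in range(k, n):                   # row k of U
            U[k][j] = A[k][j] - sum(L[k][s] * U[s][j] for s in range(k))
        for j in range(k + 1, n):               # column k of L (the multipliers)
            L[j][k] = (A[j][k] - sum(L[j][s] * U[s][k] for s in range(k))) / U[k][k]
    return L, U

L, U = doolittle_lu([[2.0, 3.0], [8.0, 5.0]])
# L = [[1, 0], [4, 1]],  U = [[2, 3], [0, -7]]  (the example above)
```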

Cholesky's method
Another special method, for symmetric positive definite (s.p.d.) matrices (A = A^T and x^T A x > 0 for all x != 0), is called Cholesky's method. We can choose U = L^T, so A = L L^T.

Example

    |  4   2  14 |   | 2   0  0 | | 2  1   7 |
    |  2  17  -5 | = | 1   4  0 | | 0  4  -3 |
    | 14  -5  83 |   | 7  -3  5 | | 0  0   5 |

Cholesky's method
    m11 = sqrt(a11)
    mj1 = aj1 / m11                                          j = 2, ..., n
    mjj = sqrt( ajj - sum_{s=1}^{j-1} mjs^2 )                j = 2, ..., n
    mjk = ( ajk - sum_{s=1}^{k-1} mjs mks ) / mkk            j = k+1, ..., n;  k >= 2

If A is not positive definite, the method produces complex numbers.
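A direct transcription of the Cholesky formulas, checked on the 3 x 3 example above:

```python
def cholesky(A):
    """Cholesky factorization A = L L^T for a symmetric positive definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if s <= 0.0:
            # a real factorization does not exist
            raise ValueError("matrix is not positive definite")
        L[j][j] = s ** 0.5
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

L = cholesky([[4.0, 2.0, 14.0], [2.0, 17.0, -5.0], [14.0, -5.0, 83.0]])
# L = [[2, 0, 0], [1, 4, 0], [7, -3, 5]]  (the example above)
```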

METHOD OF LEAST SQUARES
When the number of equations is greater than the number of unknowns (m > n), the system is called overdetermined. No solution can satisfy the equations exactly. However, we can determine the best solution, the one that gives the least residual r. To measure r we could choose any norm, such as ||r||_1, ||r||_2, or ||r||_inf. Choosing ||r||_2, which corresponds to minimizing the sum of the squared residuals sum ri^2, gives the linear least squares problem.

Normal equations
||r||_2^2 is equal to (Ax - b)^T (Ax - b); therefore, the x that minimizes (Ax - b)^T (Ax - b) also minimizes ||r||_2. At the minimum of (Ax - b)^T (Ax - b), we have the condition

    d/dx [ (Ax - b)^T (Ax - b) ] = 0
    2 A^T (Ax - b) = 0
    A^T A x = A^T b

This is called the normal equation. The matrix A^T A is a square n x n matrix, so the problem becomes n linear equations in n unknowns, which can be solved by Gauss elimination.

Example: Linear regression
Determine the linear equation that best fits the experimental data given below.

    p    1   2   3   4   5   6   7   8
    q    1   3   4   5   4   5   6   8

The linear equation has the form q = x1 + x2 p, and the set of equations is

    x1 + 1 x2 = 1
    x1 + 2 x2 = 3
    x1 + 3 x2 = 4
    x1 + 4 x2 = 5
    x1 + 5 x2 = 4
    x1 + 6 x2 = 5
    x1 + 7 x2 = 6
    x1 + 8 x2 = 8

        | 1  1 |          | 1 |
        | 1  2 |          | 3 |
        | 1  3 |          | 4 |
    A = | 1  4 |      b = | 5 |
        | 1  5 |          | 4 |
        | 1  6 |          | 5 |
        | 1  7 |          | 6 |
        | 1  8 |          | 8 |

    A^T A = |  8   36 |       A^T b = |  36 |       x = | 0.9643 |
            | 36  204 |               | 195 |           | 0.7857 |

[Plot: the data points and the fitted line q = 0.9643 + 0.7857 p]

The condition number of A^T A using the 2-norm is kappa = 131.75.
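For this two-parameter fit, the normal equations reduce to a 2 x 2 system that can be assembled and solved by hand; a sketch reproducing the numbers above:

```python
data_p = [1, 2, 3, 4, 5, 6, 7, 8]
data_q = [1, 3, 4, 5, 4, 5, 6, 8]
m = len(data_p)

# normal equations A^T A x = A^T b for the model q = x1 + x2 p:
# [ m   sum p   ] [x1]   [ sum q  ]
# [sum p sum p^2] [x2] = [ sum pq ]
Sp  = sum(data_p)
Spp = sum(p * p for p in data_p)
Sq  = sum(data_q)
Spq = sum(p * q for p, q in zip(data_p, data_q))

det = m * Spp - Sp * Sp                 # 8*204 - 36^2 = 336
x1 = (Sq * Spp - Sp * Spq) / det        # 0.9643
x2 = (m * Spq - Sp * Sq) / det          # 0.7857
```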

Example
Determine the parameters of the cubic equation that best fits the data of the previous example,

    q(p) = x1 + x2 p + x3 p^2 + x4 p^3

The set of equations is x1 + x2 pi + x3 pi^2 + x4 pi^3 = qi, i = 1, ..., 8, that is,

        | 1  1   1    1  |          | 1 |
        | 1  2   4    8  |          | 3 |
        | 1  3   9   27  |          | 4 |
    A = | 1  4  16   64  |      b = | 5 |
        | 1  5  25  125  |          | 4 |
        | 1  6  36  216  |          | 5 |
        | 1  7  49  343  |          | 6 |
        | 1  8  64  512  |          | 8 |

            |    8     36    204    1296 |             |   36 |
    A^T A = |   36    204   1296    8772 |     A^T b = |  195 |
            |  204   1296   8772   61776 |             | 1215 |
            | 1296   8772  61776  446964 |             | 8187 |

        | -2.7857 |
    x = |  4.6872 |        q(p) = -2.7857 + 4.6872 p - 1.0227 p^2 + 0.0758 p^3
        | -1.0227 |
        |  0.0758 |

[Plot: the data points and the fitted cubic q(p)]

The condition number of A^T A using the 2-norm is kappa = 5.03e6. In practice, the matrix A^T A is often ill-conditioned. There are other techniques that avoid solving the normal equations directly.

QR-FACTORIZATION (DECOMPOSITION)
Let A be an m x n matrix with m >= n, and suppose that A has full column rank. Then there exist an m x n orthogonal matrix Q and an n x n upper triangular matrix R with positive diagonal entries rii > 0 such that A = QR.

Orthogonal (unitary) matrix
    Q^T = Q^-1,   or   Q^T Q = Q Q^T = I
The condition number of an orthogonal matrix equals 1.

The solution of the least squares problem can be determined by

    x = R^-1 Q^T b

Example

        | 1  1  1 |
    A = | 1  2  4 |
        | 1  3  9 |

The QR decomposition is

        | 0.5774  -0.7071   0.4082 |        | 1.7321  3.4641  8.0829 |
    Q = | 0.5774   0       -0.8165 |    R = | 0       1.4142  5.6569 |
        | 0.5774   0.7071   0.4082 |        | 0       0       0.8165 |

One can verify that QR = A and Q^T Q = I.
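One way to obtain a QR factorization is classical Gram-Schmidt orthogonalization of the columns of A; a pure-Python sketch (the function name is my own; production codes use Householder reflections instead, for better numerical stability):

```python
def qr_gram_schmidt(A):
    """QR factorization by classical Gram-Schmidt.

    The columns of Q are the orthonormalized columns of A; R collects
    the projection coefficients, so R is upper triangular with r_ii > 0.
    """
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]   # columns of A
    qcols, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(len(qcols)):
            R[i][j] = sum(qcols[i][k] * cols[j][k] for k in range(m))
            v = [v[k] - R[i][j] * qcols[i][k] for k in range(m)]
        R[j][j] = sum(t * t for t in v) ** 0.5
        qcols.append([t / R[j][j] for t in v])
    Q = [[qcols[j][i] for j in range(n)] for i in range(m)]  # column-major -> matrix
    return Q, R

Q, R = qr_gram_schmidt([[1, 1, 1], [1, 2, 4], [1, 3, 9]])
# diagonal of R: 1.7321, 1.4142, 0.8165, as in the example above
```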

SINGULAR VALUE DECOMPOSITION (SVD)
Let A be an m x n matrix with m >= n. Then we can write A = U S V^T, where U is m x n and U^T U = I, V is n x n and V^T V = I, and S is an n x n diagonal matrix, S = diag(s1, ..., sn), with s1 >= s2 >= ... >= sn >= 0. The columns u1, ..., un of U are called left singular vectors, the columns v1, ..., vn of V are called right singular vectors, and the si are called singular values.

The solution of the least squares problem can be determined by

    x = V S^-1 U^T b

Example
Determine the SVD of the matrix A of the previous example.

        | 1  1  1 |
    A = | 1  2  4 |
        | 1  3  9 |

        | 0.1324   0.8014   0.5833 |        | 10.6496  0       0      |
    U = | 0.4264   0.4852  -0.7634 |    S = |  0       1.2507  0      |
        | 0.8948  -0.3498   0.2775 |        |  0       0       0.1502 |

        | 0.1365   0.7490   0.6483 |
    V = | 0.3446   0.5777  -0.7400 |
        | 0.9288  -0.3244   0.1793 |

One can verify that U S V^T = A, U^T U = I, and V^T V = I.
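The largest singular value is the square root of the largest eigenvalue of A^T A, so it can be estimated with the power method discussed later in these notes; a sketch reproducing s1 = 10.6496:

```python
A = [[1, 1, 1], [1, 2, 4], [1, 3, 9]]

# B = A^T A is symmetric; its eigenvalues are the squared singular values of A
B = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]

x = [1.0, 1.0, 1.0]
for _ in range(100):                    # power iteration on B
    y = [sum(B[i][j] * x[j] for j in range(3)) for i in range(3)]
    nrm = sum(v * v for v in y) ** 0.5
    x = [v / nrm for v in y]

# Rayleigh quotient (x is normalized, so just x^T B x)
lam = sum(x[i] * sum(B[i][j] * x[j] for j in range(3)) for i in range(3))
sigma1 = lam ** 0.5
# sigma1 is about 10.6496, the largest singular value listed above
```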

PSEUDO-INVERSE
Suppose that A is an m x n matrix with m >= n and has full rank, with A = QR and A = U S V^T being its QR decomposition and SVD, respectively. Then

    A+ = (A^T A)^-1 A^T = R^-1 Q^T = V S^-1 U^T

is called the (Moore-Penrose) pseudoinverse of A. If m < n, then A+ = A^T (A A^T)^-1. The pseudoinverse lets us write the solution of the full-rank, overdetermined least squares problem simply as x = A+ b. If A is square and of full rank, this formula reduces to x = A^-1 b, as expected.

MATLAB COMMANDS
    det(A)            returns the determinant of A
    inv(A)            returns the inverse of A
    cond(A)           returns the condition number of A
    [q r] = qr(A)     returns the Q and R matrices of the QR decomposition
    [u s v] = svd(A)  returns the U, S, and V matrices of the SVD
    pinv(A)           returns the pseudoinverse of A
    x = A\b           solves the linear system of equations Ax = b
    help elmat        lists elementary math functions
    help matfun       lists more numerical linear algebra functions

EIGENVALUES AND EIGENVECTORS
Eigenvalue problems are among the most important problems in engineering in connection with matrices. For a given square n x n matrix A, consider the vector equation

    A x = lambda x

where lambda is a scalar. It is clear that x = 0 is a trivial solution for any value of lambda. A value of lambda for which a nontrivial solution exists is called an eigenvalue or characteristic value of the matrix A, and the corresponding vector x is called an eigenvector or characteristic vector of A corresponding to that eigenvalue. The span of all eigenvectors of A is called the eigenspace of A.

Example

    Given:   A = | -5   2 |
                 |  2  -2 |

    Eigenvalue problem:  A x = lambda x

    | -5   2 | | x1 |          | x1 |          | -5-lambda     2      | | x1 |   | 0 |
    |  2  -2 | | x2 | = lambda | x2 |   i.e.   |    2       -2-lambda | | x2 | = | 0 |

    (A - lambda I) x = 0

This is a homogeneous linear system. By Cramer's Theorem, it has a nontrivial solution x != 0 if and only if its coefficient determinant is zero:

    D(lambda) = det(A - lambda I) = | -5-lambda     2      | = (-5-lambda)(-2-lambda) - 4
                                    |    2       -2-lambda |
              = lambda^2 + 7 lambda + 6 = 0

We call D(lambda) the characteristic determinant and D(lambda) = 0 the characteristic equation of A. The solutions are lambda1 = -1 and lambda2 = -6. These are the eigenvalues of A.
The eigenvector corresponding to lambda1 can be obtained by solving the homogeneous system (A - lambda1 I) x = 0:

    -4 x1 + 2 x2 = 0
     2 x1 -   x2 = 0

Obviously, the coefficient matrix of this system does not have full rank, because its determinant is zero. Hence, there are infinitely many solutions. If we choose x1 arbitrarily equal to k, we get x2 = 2k. The eigenvectors corresponding to lambda1 = -1 are

    x = k | 1 |        where k != 0
          | 2 |

We can check that

    A x = | -5   2 | | 1 | = | -1 | = -1 | 1 |
          |  2  -2 | | 2 |   | -2 |      | 2 |

The eigenvector corresponding to lambda2 can be obtained by solving the homogeneous system (A - lambda2 I) x = 0:

      x1 + 2 x2 = 0
    2 x1 + 4 x2 = 0

If we choose x1 arbitrarily equal to 2k, we get x2 = -k. The eigenvectors corresponding to lambda2 = -6 are

    x = k |  2 |       where k != 0
          | -1 |

Theorem 1: Eigenvalues
The eigenvalues of a square matrix A are the roots of the corresponding characteristic equation. Hence an n x n matrix has at most n numerically different eigenvalues. The set of all eigenvalues of A is called the spectrum of A. The largest of the absolute values of the eigenvalues of A is called the spectral radius of A.

Theorem 2: Eigenvectors
If x is an eigenvector of A corresponding to an eigenvalue lambda, so is kx for any k != 0.
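For a 2 x 2 matrix, the characteristic equation is a quadratic and can be solved directly; a sketch verifying the eigenvalues and the first eigenvector of the example:

```python
# A = [[-5, 2], [2, -2]]; characteristic equation:
# lambda^2 - (trace) lambda + det = lambda^2 + 7 lambda + 6 = 0
a11, a12, a21, a22 = -5.0, 2.0, 2.0, -2.0

tr = a11 + a22                        # -7
det = a11 * a22 - a12 * a21           # 6
disc = (tr * tr - 4 * det) ** 0.5     # sqrt(49 - 24) = 5
lam1 = (tr + disc) / 2                # -1
lam2 = (tr - disc) / 2                # -6

# check A v = lambda1 v for the eigenvector v = [1, 2]
v = [1.0, 2.0]
Av = [a11 * v[0] + a12 * v[1], a21 * v[0] + a22 * v[1]]
# Av == [-1.0, -2.0] == lam1 * v
```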

APPLICATIONS OF EIGENVALUE PROBLEMS

Stress representation
The state of stress in 2-D can be written in matrix form,

    sigma = | s11  s12 |        with s21 = s12
            | s21  s22 |

The principal stresses are the eigenvalues of sigma. For the 3-D case this concept is also applicable, but the matrix sigma becomes a 3 x 3 matrix.

Example

    sigma = | 0  1 |
            | 1  0 |

    det(sigma - lambda I) = | -lambda     1    | = lambda^2 - 1 = 0      lambda = 1, -1
                            |    1     -lambda |

For lambda1 = 1:
    -x1 + x2 = 0
     x1 - x2 = 0         x = | 1 |
                             | 1 |

For lambda2 = -1:
     x1 + x2 = 0
     x1 + x2 = 0         x = |  1 |
                             | -1 |

The eigenvector points in the direction of the principal stress corresponding to its eigenvalue.

ROOTS OF THE CHARACTERISTIC EQUATION
The characteristic equation

    D(lambda) = det(A - lambda I) = | a11-lambda    a12     ...    a1n      |
                                    |   a21      a22-lambda ...    a2n      | = 0
                                    |    :           :              :       |
                                    |   an1         an2     ...  ann-lambda |

is an n-th degree polynomial, which in general has n roots (some of which may be complex numbers). The polynomial may be written as

    (lambda - lambda1)^M1 (lambda - lambda2)^M2 ... (lambda - lambdaj)^Mj = 0

Suppose that there are j different roots. Some roots may be repeated, e.g., M >= 2. M is called the algebraic multiplicity of lambda. The number of linearly independent eigenvectors corresponding to the eigenvalue lambda is called its geometric multiplicity m. In general, m <= M.

Example

        | -2   2  -3 |
    A = |  2   1  -6 |
        | -1  -2   0 |

The characteristic equation is

    det(A - lambda I) = | -2-lambda     2        -3     |
                        |    2       1-lambda    -6     | = 0
                        |   -1         -2     -lambda   |

Expanding the determinant gives

    lambda^3 + lambda^2 - 21 lambda - 45 = 0

which can be written as

    (lambda - 5)(lambda + 3)^2 = 0

The solutions are lambda1 = 5 and lambda2 = lambda3 = -3. We see that the algebraic multiplicity of lambda = -3 equals 2.
The eigenvector corresponding to lambda1 can be determined from (A - 5I) x = 0:

    -7 x1 + 2 x2 - 3 x3 = 0
     2 x1 - 4 x2 - 6 x3 = 0
    -  x1 - 2 x2 - 5 x3 = 0

Row reduction gives

    -x1 -  2 x2 -  5 x3 = 0
          16 x2 + 32 x3 = 0
         - 8 x2 - 16 x3 = 0

If we arbitrarily choose x2 = 2, we get x3 = -1 and x1 = 1:

         |  1 |
    x1 = |  2 |
         | -1 |

The eigenvector corresponding to lambda = -3 can be determined from (A + 3I) x = 0:

       x1 + 2 x2 - 3 x3 = 0
     2 x1 + 4 x2 - 6 x3 = 0
     - x1 - 2 x2 + 3 x3 = 0

This homogeneous system has rank 1, so we can choose the values of 3 - 1 = 2 unknowns arbitrarily. From the theorem that

    rank A + nullity A = n
    nullity A = n - rank A = 3 - 1 = 2

the dimension of the null space equals 2, i.e., there are two linearly independent vectors that satisfy the equation. If we choose x2 = 1 and x3 = 0, we get x1 = -2. If we choose x2 = 0 and x3 = 1, we get x1 = 3.

Therefore, the eigenvectors corresponding to lambda = -3 are

    | -2 |         | 3 |
    |  1 |   and   | 0 |
    |  0 |         | 1 |

The geometric multiplicity of lambda = -3 thus equals 2.

IMPORTANT NOTES
- The entries on the main diagonal of a triangular matrix are its eigenvalues.
- The eigenvalues of a symmetric matrix are real numbers.
- The eigenvalues of a skew-symmetric matrix are pure imaginary or zero.
- The eigenvalues of an orthogonal matrix are real or complex conjugates in pairs and have absolute value equal to 1.

SIMILARITY OF MATRICES
An n x n matrix A^ is called similar to an n x n matrix A if

    A^ = T^-1 A T

for some nonsingular n x n matrix T. This transformation, which gives A^ from A, is called a similarity transformation.

Theorem 1: Eigenvalues and eigenvectors of similar matrices
If A^ is similar to A, then A^ has the same eigenvalues as A. Furthermore, if x is an eigenvector of A, then y = T^-1 x is an eigenvector of A^ corresponding to the same eigenvalue.

Proof
Let lambda be an eigenvalue of A with x the corresponding eigenvector:

    A x = lambda x
    T^-1 A x = lambda T^-1 x
    T^-1 A T (T^-1 x) = lambda (T^-1 x)
    A^ (T^-1 x) = lambda (T^-1 x)

Hence, lambda is an eigenvalue of A^ with T^-1 x as the corresponding eigenvector.

PROPERTIES OF EIGENVECTORS

Theorem 2: Linear independence of eigenvectors
Let lambda1, lambda2, ..., lambdak be distinct eigenvalues of an n x n matrix. Then the corresponding eigenvectors x1, x2, ..., xk form a linearly independent set.

Theorem 3: Basis of eigenvectors
If an n x n matrix A has n distinct eigenvalues, then A has a basis of eigenvectors for R^n.

Theorem 4: Eigenvectors of a symmetric matrix
A symmetric matrix has an orthonormal basis of n eigenvectors for R^n.

Definition of an orthonormal set
If xi and xj are vectors in an orthonormal set,

    xi^T xj = 1 if i = j
              0 if i != j

Example

    Given:   A = | -5   2 |
                 |  2  -2 |

The eigenvectors corresponding to lambda1 = -1 and lambda2 = -6 are

    x1 = (1/sqrt(5)) | 1 |      and      x2 = (1/sqrt(5)) |  2 |
                     | 2 |                                | -1 |

We can check that x1^T x1 = x2^T x2 = 1 and x1^T x2 = 0.

DIAGONALIZATION
An n x n matrix A is diagonalizable if and only if there exists a matrix X such that X^-1 A X is a diagonal matrix.

Theorem 5: Diagonalization of a matrix
If an n x n matrix A has a basis of eigenvectors, then

    D = X^-1 A X

is diagonal, with the eigenvalues of A as the entries on the main diagonal. Here X is the matrix with these eigenvectors as column vectors. Also,

    D^m = X^-1 A^m X

Example

        | -2   2  -3 |
    A = |  2   1  -6 |
        | -1  -2   0 |

has eigenvalues lambda = 5, -3, and -3. The matrix X, whose column vectors are the eigenvectors of A, is

        |  1  -2  3 |               |  0.125  0.25  -0.375 |
    X = |  2   1  0 |      X^-1 =   | -0.25   0.5    0.75  |
        | -1   0  1 |               |  0.125  0.25   0.625 |

Then we can obtain

                     | 5   0   0 |
    D = X^-1 A X  =  | 0  -3   0 |
                     | 0   0  -3 |
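Theorem 5 can be verified by carrying out the two matrix products for the example above:

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A    = [[-2, 2, -3], [2, 1, -6], [-1, -2, 0]]
X    = [[1, -2, 3], [2, 1, 0], [-1, 0, 1]]
Xinv = [[0.125, 0.25, -0.375], [-0.25, 0.5, 0.75], [0.125, 0.25, 0.625]]

D = matmul(Xinv, matmul(A, X))
# D is diag(5, -3, -3): the eigenvalues appear on the main diagonal
```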

Example
The given matrix

        | 2  -4  3 |
    A = | 4  -6  3 |
        | 3  -3  1 |

has eigenvalues lambda = -2, -2, and 1. Let's try to find the eigenvector corresponding to lambda = -2 from (A + 2I) x = 0:

    4 x1 - 4 x2 + 3 x3 = 0
    4 x1 - 4 x2 + 3 x3 = 0
    3 x1 - 3 x2 + 3 x3 = 0

The rank of the coefficient matrix of this homogeneous system equals 2. The nullity (the dimension of the null space) is then 3 - 2 = 1. Hence, there is only one arbitrary unknown, and the eigenvector for this eigenvalue is

    x = | 1 |
        | 1 |
        | 0 |

We see that the geometric multiplicity m of the eigenvalue lambda = -2 equals 1, while its algebraic multiplicity M equals 2. There are not enough basis eigenvectors to diagonalize A, so the matrix A is not diagonalizable.

NUMERICAL METHODS FOR EIGENVALUE PROBLEMS

Power method
In this method, we start from any vector x0 (!= 0) and compute successively

    x1 = A x0,    x2 = A x1,    ...,    xs = A x(s-1)

For simplicity, we denote x(s-1) by x and xs by y, so that y = A x.

Rayleigh quotient

    q = (x^T y) / (x^T x)

is an approximation of the largest eigenvalue (in absolute value). We can see that xs becomes more and more parallel to the eigenvector corresponding to that eigenvalue. To prove this, suppose x0 can be written as a linear combination of the eigenvectors v1, v2, ..., vn of A:

    x0 = a1 v1 + a2 v2 + ... + an vn
    x1 = A x0 = a1 lambda1 v1 + a2 lambda2 v2 + ... + an lambdan vn
    ...
    xs = A^s x0 = a1 lambda1^s v1 + a2 lambda2^s v2 + ... + an lambdan^s vn

The term with the largest eigenvalue (in absolute value) eventually dominates the other terms as s gets larger and larger. In addition, the size of the vector xs differs from that of x(s-1) by approximately a factor of that eigenvalue.

Example

        | 8  2  2 |                   |  1 |
    A = | 2  6  4 |       Choose x0 = |  1 |
        | 2  4  6 |                   | -1 |

Then we obtain successively

    x1 = | 8 |    x2 = | 72 |    x3 = | 720 |    x4 = | 7776 |
         | 4 |         | 40 |         | 512 |         | 6496 |
         | 0 |         | 32 |         | 496 |         | 6464 |

Take x = x3 and y = x4:
    m0 = x^T x = 1026560
    m1 = x^T y = 12130816
    q  = m1 / m0 = 11.817

The correct eigenvalue is 12. To obtain a more accurate result, we need to continue multiplying by A. We can see that the size of the vector grows larger and larger; in practice, the vector is therefore scaled down by (x^T x)^(1/2) in each step.

    x^1 = x1 / (x1^T x1)^(1/2) = | 0.894 |
                                 | 0.447 |
                                 | 0     |

    x2 = A x^1 = | 8.050 |        x^2 = | 0.815 |
                 | 4.472 |               | 0.453 |
                 | 3.578 |               | 0.362 |

    x3 = A x^2 = | 8.148 |        x^3 = | 0.711 |
                 | 5.794 |               | 0.505 |
                 | 5.613 |               | 0.490 |

    x4 = A x^3 = | 7.675 |
                 | 6.411 |
                 | 6.380 |

    q = (x^3)^T x4 / (x^3)^T x^3 = 11.817
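The scaled iteration above fits in a few lines of Python; this sketch reproduces q = 11.817 after four steps and converges to the exact eigenvalue 12 with more steps:

```python
def power_method(A, x, steps):
    """Normalized power iteration returning the Rayleigh quotient estimate."""
    n = len(A)
    for _ in range(steps):
        nrm = sum(v * v for v in x) ** 0.5
        xh = [v / nrm for v in x]                 # scale down by (x^T x)^(1/2)
        x = [sum(A[i][j] * xh[j] for j in range(n)) for i in range(n)]
        q = sum(xh[i] * x[i] for i in range(n))   # xh^T (A xh), since |xh| = 1
    return q

A = [[8, 2, 2], [2, 6, 4], [2, 4, 6]]
q4 = power_method(A, [1.0, 1.0, -1.0], 4)     # about 11.817, as above
q50 = power_method(A, [1.0, 1.0, -1.0], 50)   # essentially 12
```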

Inverse vector iteration
With the power method we can determine only the largest eigenvalue; the inverse vector iteration method overcomes that disadvantage. We need the following three properties of the eigenvalues of a matrix:
1. If A has the eigenvalues lambda1, lambda2, ..., lambdan, then A^-1 has the eigenvalues 1/lambda1, 1/lambda2, ..., 1/lambdan.
2. If A has the eigenvalues lambda1, lambda2, ..., lambdan, then A - kI has the eigenvalues lambda1 - k, lambda2 - k, ..., lambdan - k.  (Spectral shift theorem)
3. The eigenvectors of A are also eigenvectors of A^-1.
The idea of this method is to use the power method to determine the largest eigenvalue of A^-1, which gives us the smallest eigenvalue of A. Then we apply a spectral shift (A - kI) and again use the power method to determine the largest eigenvalue of (A - kI)^-1, which gives us the smallest eigenvalue of the shifted matrix A - kI. Then

    eigenvalue of A = (eigenvalue of A - kI) + k

which is the eigenvalue of A nearest to the shift k. We can keep changing the shift and repeating the process until all eigenvalues and eigenvectors are obtained.

Example

    Given:   A = | -5   2 |
                 |  2  -2 |

First compute

    A^-1 = (1/6) | -2  -2 |
                 | -2  -5 |

and choose x0 = [1  1]^T. Then we obtain successively

    x1 = A^-1 x0 = (1/6) | -4 |       x2 = (1/6^2) | 22 |
                         | -7 |                    | 43 |

    x3 = (1/6^3) | -130 |             x4 = (1/6^4) |  778 |
                 | -259 |                          | 1555 |

Take x = x3 and y = x4:
    m0 = x^T x = 1.80
    m1 = x^T y = -1.80
    q  = m1 / m0 = -1     (an approximation of the largest eigenvalue of A^-1)

An eigenvalue of A is then 1/(-1) = -1, the inverse of the eigenvalue of A^-1. To find the other eigenvalues of A, we apply a spectral shift and iterate with (A - kI)^-1 (just trial: k = -4).

Compute

    (A - kI)^-1 = (A + 4I)^-1 = (1/6) | -2  2 |
                                      |  2  1 |

and choose x0 = [1  1]^T. Then we obtain successively

    x1 = (1/6)   | 0 |     x2 = (1/6^2) | 6 |     x3 = (1/6^3) | -6 |
                 | 3 |                  | 3 |                  | 15 |

    x4 = (1/6^4) | 42 |    x5 = (1/6^5) | -78 |   x6 = (1/6^6) | 330 |
                 |  3 |                 |  87 |                 | -69 |

    x7 = (1/6^7) | -798 |  x8 = (1/6^8) |  2778 |
                 |  591 |               | -1005 |

Then compute the Rayleigh quotient with x = x7 and y = x8:

    q = x^T y / (x^T x) = -0.475

which approximates the largest eigenvalue of (A - kI)^-1. The smallest eigenvalue of A - kI = A - (-4)I is then 1/(-0.475) = -2.105. Note that

    eigenvalue of (A - kI) = (eigenvalue of A) - k
    eigenvalue of A = (eigenvalue of A - kI) + k

Another eigenvalue of A is therefore -2.105 + (-4) = -6.105, which is the eigenvalue of A nearest to the trial shift k.
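The shifted inverse iteration can reuse the same power-iteration routine, applied to the explicit 2 x 2 inverse of the shifted matrix; run to convergence it recovers the eigenvalue of A nearest the shift:

```python
def rayleigh_power(M, x, steps):
    """Normalized power iteration on M; returns the Rayleigh quotient."""
    n = len(M)
    for _ in range(steps):
        nrm = sum(v * v for v in x) ** 0.5
        xh = [v / nrm for v in x]
        x = [sum(M[i][j] * xh[j] for j in range(n)) for i in range(n)]
        q = sum(xh[i] * x[i] for i in range(n))
    return q

# A = [[-5, 2], [2, -2]];  shifted matrix A + 4I = [[-1, 2], [2, 2]]
# 2x2 inverse formula with det(A + 4I) = -6:
Binv = [[2 / -6.0, -2 / -6.0],
        [-2 / -6.0, -1 / -6.0]]

q = rayleigh_power(Binv, [1.0, 1.0], 100)   # -> -0.5, dominant eigenvalue of (A+4I)^-1
ev = 1.0 / q + (-4.0)                       # eigenvalue of A nearest k = -4
# ev is -6.0, the second eigenvalue of A
```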