Production of this 2015 edition, containing corrections and minor revisions of the 2008 edition, was funded by the sigma Network.


About the HELM Project
HELM (Helping Engineers Learn Mathematics) materials were the outcome of a three-year curriculum development project undertaken by a consortium of five English universities led by Loughborough University, funded by the Higher Education Funding Council for England under the Fund for the Development of Teaching and Learning for the period October 2002 to September 2005, with additional transferability funding October 2005 to September 2006. HELM aims to enhance the mathematical education of engineering undergraduates through flexible learning resources, mainly these Workbooks. HELM learning resources were produced primarily by teams of writers at six universities: Hull, Loughborough, Manchester, Newcastle, Reading, Sunderland. HELM gratefully acknowledges the valuable support of colleagues at the following universities and colleges involved in the critical reading, trialling, enhancement and revision of the learning materials: Aston, Bournemouth & Poole College, Cambridge, City, Glamorgan, Glasgow, Glasgow Caledonian, Glenrothes Institute of Applied Technology, Harper Adams, Hertfordshire, Leicester, Liverpool, London Metropolitan, Moray College, Northumbria, Nottingham, Nottingham Trent, Oxford Brookes, Plymouth, Portsmouth, Queen's Belfast, Robert Gordon, Royal Forest of Dean College, Salford, Sligo Institute of Technology, Southampton, Southampton Institute, Surrey, Teesside, Ulster, University of Wales Institute Cardiff, West Kingsway College (London), West Notts College.

HELM Contacts:
Post: HELM, Mathematics Education Centre, Loughborough University, Loughborough, LE11 3TU.
Email:
helm@lboro.ac.uk
Web:

HELM Workbooks List
1 Basic Algebra
2 Basic Functions
3 Equations, Inequalities & Partial Fractions
4 Trigonometry
5 Functions and Modelling
6 Exponential and Logarithmic Functions
7 Matrices
8 Matrix Solution of Equations
9 Vectors
10 Complex Numbers
11 Differentiation
12 Applications of Differentiation
13 Integration
14 Applications of Integration 1
15 Applications of Integration 2
16 Sequences and Series
17 Conics and Polar Coordinates
18 Functions of Several Variables
19 Differential Equations
20 Laplace Transforms
21 z-Transforms
22 Eigenvalues and Eigenvectors
23 Fourier Series
24 Fourier Transforms
25 Partial Differential Equations
26 Functions of a Complex Variable
27 Multiple Integration
28 Differential Vector Calculus
29 Integral Vector Calculus
30 Introduction to Numerical Methods
31 Numerical Methods of Approximation
32 Numerical Initial Value Problems
33 Numerical Boundary Value Problems
34 Modelling Motion
35 Sets and Probability
36 Descriptive Statistics
37 Discrete Probability Distributions
38 Continuous Probability Distributions
39 The Normal Distribution
40 Sampling Distributions and Estimation
41 Hypothesis Testing
42 Goodness of Fit and Contingency Tables
43 Regression and Correlation
44 Analysis of Variance
45 Non-parametric Statistics
46 Reliability and Quality Control
47 Mathematics and Physics Miscellany
48 Engineering Case Study
49 Student's Guide
50 Tutor's Guide

Copyright Loughborough University, 2015

Contents 22: Eigenvalues and Eigenvectors
22.1 Basic Concepts 2
22.2 Applications of Eigenvalues and Eigenvectors 18
22.3 Repeated Eigenvalues and Symmetric Matrices 30
22.4 Numerical Determination of Eigenvalues and Eigenvectors 46

Learning outcomes
In this Workbook you will learn about the matrix eigenvalue problem AX = kX where A is a square matrix and k is a scalar (number). You will learn how to determine the eigenvalues (k) and corresponding eigenvectors (X) for a given matrix A. You will learn of some of the applications of eigenvalues and eigenvectors. Finally you will learn how eigenvalues and eigenvectors may be determined numerically.

Basic Concepts 22.1

Introduction
From an applications viewpoint, eigenvalue problems are probably the most important problems that arise in connection with matrix analysis. In this Section we discuss the basic concepts. We shall see that eigenvalues and eigenvectors are associated with square matrices of order n × n. If n is small (2 or 3), determining eigenvalues is a fairly straightforward process (requiring the solution of a low order polynomial equation). Obtaining eigenvectors is a little strange initially and it will help if you read this preliminary Section first.

Prerequisites
Before starting this Section you should...
* have a knowledge of determinants and matrices
* have a knowledge of linear first order differential equations

Learning Outcomes
On completion you should be able to...
* obtain eigenvalues and eigenvectors of 2 × 2 and 3 × 3 matrices
* state basic properties of eigenvalues and eigenvectors

2 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors

1. Basic concepts

Determinants
A square matrix possesses an associated determinant. Unlike a matrix, which is an array of numbers, a determinant has a single value.
A two by two matrix C = [c11 c12; c21 c22] has an associated determinant
det(C) = |c11 c12; c21 c22| = c11 c22 − c12 c21
(Note: square or round brackets denote a matrix, straight vertical lines denote a determinant.)
A three by three matrix has an associated determinant
det(C) = |c11 c12 c13; c21 c22 c23; c31 c32 c33|
Among other ways, this determinant can be evaluated by an expansion about the top row:
det(C) = c11 |c22 c23; c32 c33| − c12 |c21 c23; c31 c33| + c13 |c21 c22; c31 c32|
Note the minus sign in the second term.

Task
Evaluate the determinant det(B) = |4 8; 1 2|

Your solution

Answer
det(B) = 4 × 2 − 8 × 1 = 0
A matrix such as B in the previous Task which has zero determinant is called a singular matrix; a matrix whose determinant is non-zero is called non-singular. The key factor to be aware of is as follows:

HELM (2015): Section 22.1: Basic Concepts 3
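The cofactor formula above can be cross-checked numerically. The following sketch is an illustrative addition (it assumes Python with numpy, which the Workbook itself does not use); it compares the 2 × 2 formula with numpy's determinant routine on the singular matrix B of the Task:

```python
import numpy as np

def det2(c):
    # 2 x 2 cofactor formula: c11*c22 - c12*c21
    return c[0][0] * c[1][1] - c[0][1] * c[1][0]

B = np.array([[4.0, 8.0],
              [1.0, 2.0]])

print(det2(B))             # 0.0 -> B is singular
print(np.linalg.det(B))    # agrees with the hand formula (up to rounding)
```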

Key Point 1
Any non-singular n × n matrix C, for which det(C) ≠ 0, possesses an inverse C⁻¹, i.e. CC⁻¹ = C⁻¹C = I where I denotes the n × n identity matrix. A singular matrix does not possess an inverse.

Systems of linear equations
We first recall some basic results in linear (matrix) algebra. Consider a system of n equations in n unknowns x1, x2, ..., xn:
c11 x1 + c12 x2 + ... + c1n xn = k1
c21 x1 + c22 x2 + ... + c2n xn = k2
...
cn1 x1 + cn2 x2 + ... + cnn xn = kn
We can write such a system in matrix form:
[c11 c12 ... c1n; c21 c22 ... c2n; ...; cn1 cn2 ... cnn][x1; x2; ...; xn] = [k1; k2; ...; kn], or equivalently CX = K.
We see that C is an n × n matrix (called the coefficient matrix), X = {x1, x2, ..., xn}ᵀ is the n × 1 column vector of unknowns and K = {k1, k2, ..., kn}ᵀ is an n × 1 column vector of given constants. The zero matrix will be denoted by O. If K ≠ O the system is called inhomogeneous; if K = O the system is called homogeneous.

Basic results in linear algebra
Consider the system of equations CX = K. We are concerned with the nature of the solutions (if any) of this system. We shall see that this system only exhibits three solution types:
* The system is consistent and has a unique solution for X
* The system is consistent and has an infinite number of solutions for X
* The system is inconsistent and has no solution for X

4 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors

There are two basic cases to consider: det(C) ≠ 0 or det(C) = 0.

Case 1: det(C) ≠ 0
In this case C⁻¹ exists and the unique solution to CX = K is X = C⁻¹K.

Case 2: det(C) = 0
In this case C⁻¹ does not exist.
(a) If K ≠ O the system CX = K has no solutions.
(b) If K = O the system CX = O has an infinite number of solutions.
We note that a homogeneous system CX = O has a unique solution X = O if det(C) ≠ 0 (this is called the trivial solution) or an infinite number of solutions if det(C) = 0.

Example 1 (Case 1)
Solve the inhomogeneous system of equations
2x1 + x2 = 2
x1 + x2 = 1
which can be expressed as CX = K where
C = [2 1; 1 1], X = [x1; x2], K = [2; 1]

Solution
Here det(C) = 1 ≠ 0. The system of equations has the unique solution:
X = [x1; x2] = [1; 0].

HELM (2015): Section 22.1: Basic Concepts 5

Example 2 (Case 2(a))
Examine the following inhomogeneous system for solutions
x1 + 2x2 = 1
3x1 + 6x2 = 0

Solution
Here det(C) = |1 2; 3 6| = 0. In this case there are no solutions. To see this, note that the first equation of the system states x1 + 2x2 = 1 whereas the second equation (after dividing through by 3) states x1 + 2x2 = 0, a contradiction.

Example 3 (Case 2(b))
Solve the homogeneous system
x1 + x2 = 0
2x1 + 2x2 = 0

Solution
Here det(C) = |1 1; 2 2| = 0. The solutions are any pairs of numbers {x1, x2} such that x1 = −x2, i.e. X = [α; −α] where α is arbitrary. There are an infinite number of solutions.

A simple eigenvalue problem
We shall be interested in simultaneous equations of the form AX = λX, where A is an n × n matrix, X is an n × 1 column vector and λ is a scalar (a constant) and, in the first instance, we examine some simple examples to gain experience of solving problems of this type.

6 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors
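The three solution types can also be explored numerically. In this illustrative sketch (the matrices are example choices made here, not necessarily the Workbook's own), np.linalg.solve delivers the Case 1 unique solution, while a zero determinant signals Case 2:

```python
import numpy as np

# Case 1: det(C) != 0 -> unique solution X = C^-1 K
C = np.array([[2.0, 1.0],
              [1.0, 1.0]])
K = np.array([2.0, 1.0])
print(np.linalg.solve(C, K))   # unique solution [1, 0]

# Case 2: det(C) = 0 -> no solution (K != O) or
# infinitely many solutions (K = O); solve() would raise an error here
C2 = np.array([[1.0, 2.0],
               [3.0, 6.0]])
print(np.linalg.det(C2))       # 0 -> C2 is singular
```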

Example 4
Consider the following system with n = 2:
2x + 3y = λx
3x + 2y = λy
so that A = [2 3; 3 2] and X = [x; y]. It appears that there are three unknowns: x, y and λ. The obvious questions to ask are: can we find x, y? What is λ?

Solution
To solve this problem we firstly re-arrange the equations (take all unknowns onto one side):
(2 − λ)x + 3y = 0   (1)
3x + (2 − λ)y = 0   (2)
Therefore, from equation (2):
x = −(2 − λ)y/3.   (3)
Then when we substitute this into (1) we obtain
−(2 − λ)²y/3 + 3y = 0, which simplifies to [−(2 − λ)² + 9] y = 0.
We conclude that either y = 0 or 9 = (2 − λ)². There are thus two cases to consider:
Case 1
If y = 0 then x = 0 (from (3)) and we get the trivial solution. (We could have guessed this solution at the outset.)
Case 2
9 = (2 − λ)², which gives, on taking square roots:
±3 = 2 − λ, giving λ = 2 ± 3, so λ = 5 or λ = −1.
Now, from equation (3), if λ = 5 then x = +y and if λ = −1 then x = −y.
We have now completed the analysis. We have found values for λ but we also see that we cannot obtain unique values for x and y: all we can find is the ratio between these quantities. This behaviour is typical, as we shall now see, of an eigenvalue problem.

HELM (2015): Section 22.1: Basic Concepts 7
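Example 4's conclusions are easy to verify numerically. A short illustrative check (numpy assumed), using the symmetric matrix A = [2 3; 3 2] read from the system above:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [3.0, 2.0]])
evals, evecs = np.linalg.eig(A)

print(sorted(evals.real))     # the eigenvalues -1 and 5

# each column of evecs satisfies A X = lambda X; only the
# direction (the ratio x:y) is determined, not the scale
print(np.allclose(A @ evecs, evecs * evals))   # True
```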

2. General eigenvalue problems
Consider a given square matrix A. If X is a column vector and λ is a scalar (a number) then the relation
AX = λX   (4)
is called an eigenvalue problem. Our purpose is to carry out an analysis of this equation in a manner similar to the example above. However, we will attempt a more general approach which will apply to all problems of this kind.
Firstly, we can spot an obvious solution (for X) to these equations. The solution X = 0 is a possibility (for then both sides are zero). We will not be interested in these trivial solutions of the eigenvalue problem. Our main interest will be in the occurrence of non-trivial solutions for X. These may exist for special values of λ, called the eigenvalues of the matrix A.
We proceed as in the previous example: take all unknowns to one side:
(A − λI)X = 0   (5)
where I is a unit matrix with the same dimensions as A. (Note that AX − λX = 0 does not simplify to (A − λ)X = 0 as you cannot subtract a scalar λ from a matrix A.) Equation (5) is a homogeneous system of equations. In the notation of the earlier discussion, C ≡ A − λI and K ≡ 0. For such a system we know that non-trivial solutions will only exist if the determinant of the coefficient matrix is zero:
det(A − λI) = 0   (6)
Equation (6) is called the characteristic equation of the eigenvalue problem. We see that the characteristic equation only involves one unknown λ. The characteristic equation is generally a polynomial in λ, with degree being the same as the order of A (so if A is 2 × 2 the characteristic equation is a quadratic, if A is 3 × 3 it is a cubic equation, and so on). For each value of λ that is obtained, the corresponding vector X is obtained by solving the original equations (4). These X's are called eigenvectors.
N.B. We shall see that eigenvectors are only unique up to a multiplicative factor: i.e. if X satisfies AX = λX then so does kX for any constant k.

8 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors

Example 5
Find the eigenvalues and eigenvectors of the matrix A = [1 0; 1 2]

Solution
The eigenvalues and eigenvectors are found by solving the eigenvalue problem
AX = λX, X = [x; y], i.e. (A − λI)X = 0.
Non-trivial solutions will exist if det(A − λI) = 0, that is,
det( [1 0; 1 2] − λ[1 0; 0 1] ) = 0, i.e. |1−λ 0; 1 2−λ| = 0,
and expanding this determinant: (1 − λ)(2 − λ) = 0. Hence the solutions for λ are λ = 1 and λ = 2.
So we have found two values of λ for this matrix A. Since these are unequal they are said to be distinct eigenvalues. To each value of λ there corresponds an eigenvector. We now proceed to find the eigenvectors.
Case 1: λ = 1 (the smaller eigenvalue). Then our original eigenvalue problem becomes AX = X. In full this is
x = x
x + 2y = y
Simplifying:
x = x   (a)
x + y = 0   (b)
All we can deduce here is that x = −y, so
X = [x; −x] for any x ≠ 0
(We specify x ≠ 0 as, otherwise, we would have the trivial solution.)
So the eigenvectors corresponding to eigenvalue λ = 1 are all proportional to [1; −1], e.g. [2; −2], etc.
Sometimes we write the eigenvector in normalised form, that is, with modulus or magnitude 1. Here, the normalised form of X is (1/√2)[1; −1], which is unique.

HELM (2015): Section 22.1: Basic Concepts 9

Solution (contd.)
Case 2: Now we consider the larger eigenvalue λ = 2. Our original eigenvalue problem AX = λX becomes AX = 2X, which gives the following equations:
[1 0; 1 2][x; y] = 2[x; y]
i.e.
x = 2x
x + 2y = 2y
These equations imply that x = 0, whilst the variable y may take any value whatsoever (except zero, as this gives the trivial solution). Thus the eigenvector corresponding to eigenvalue λ = 2 has the form [0; y], e.g. [0; 1], [0; 2], etc. The normalised eigenvector here is [0; 1].
In conclusion: the matrix A = [1 0; 1 2] has two eigenvalues and two associated normalised eigenvectors:
λ1 = 1, X1 = (1/√2)[1; −1]
λ2 = 2, X2 = [0; 1]

Example 6
Find the eigenvalues and eigenvectors of the 3 × 3 matrix
A = [2 −1 0; −1 2 −1; 0 −1 2]

Solution
The eigenvalues and eigenvectors are found by solving the eigenvalue problem
AX = λX, X = [x; y; z]
Proceeding as in Example 5: (A − λI)X = 0 and non-trivial solutions for X will exist if
det(A − λI) = 0

10 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors

Solution (contd.)
that is,
det( [2 −1 0; −1 2 −1; 0 −1 2] − λ[1 0 0; 0 1 0; 0 0 1] ) = 0
i.e. |2−λ −1 0; −1 2−λ −1; 0 −1 2−λ| = 0
Expanding this determinant we find:
(2 − λ)|2−λ −1; −1 2−λ| + |−1 −1; 0 2−λ| = 0
that is, (2 − λ){(2 − λ)² − 1} − (2 − λ) = 0
Taking out the common factor (2 − λ):
(2 − λ){4 − 4λ + λ² − 2}
which gives: (2 − λ)[λ² − 4λ + 2] = 0.
This is easily solved to give: λ = 2 or λ = (4 ± √(16 − 8))/2 = 2 ± √2.
So (typically) we have found three possible values of λ for this 3 × 3 matrix A. To each value of λ there corresponds an eigenvector.
Case 1: λ = 2 − √2 (lowest eigenvalue)
Then AX = (2 − √2)X implies
2x − y = (2 − √2)x
−x + 2y − z = (2 − √2)y
−y + 2z = (2 − √2)z
Simplifying:
√2 x − y = 0   (a)
−x + √2 y − z = 0   (b)
−y + √2 z = 0   (c)
We conclude the following:
(c) gives y = √2 z; (a) gives y = √2 x; these two relations give x = z; then (b) gives −x + √2(√2 x) − x = −x + 2x − x = 0.
The last equation gives us no information; it simply states that 0 = 0.

HELM (2015): Section 22.1: Basic Concepts 11

Solution (contd.)
Thus X = [x; √2 x; x] for any x ≠ 0 (otherwise we would have the trivial solution). So the eigenvectors corresponding to eigenvalue λ = 2 − √2 are all proportional to [1; √2; 1]. In normalised form we have an eigenvector (1/2)[1; √2; 1].
Case 2: λ = 2
Here AX = 2X implies
[2 −1 0; −1 2 −1; 0 −1 2][x; y; z] = 2[x; y; z]
i.e.
2x − y = 2x
−x + 2y − z = 2y
−y + 2z = 2z
After simplifying, the equations become:
y = 0   (a)
x + z = 0   (b)
y = 0   (c)
(a), (c) imply y = 0; (b) implies x = −z, so the eigenvector has the form [x; 0; −x] for any x ≠ 0.
That is, eigenvectors corresponding to λ = 2 are all proportional to [1; 0; −1]. In normalised form we have an eigenvector (1/√2)[1; 0; −1].

12 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors

Solution (contd.)
Case 3: λ = 2 + √2 (largest eigenvalue)
Proceeding along similar lines to Cases 1 and 2 above, we find that the eigenvectors corresponding to λ = 2 + √2 are each proportional to [1; −√2; 1], with normalised eigenvector (1/2)[1; −√2; 1].
In conclusion, the matrix A has three distinct eigenvalues
λ1 = 2 − √2, λ2 = 2, λ3 = 2 + √2
and three corresponding normalised eigenvectors
X1 = (1/2)[1; √2; 1], X2 = (1/√2)[1; 0; −1], X3 = (1/2)[1; −√2; 1]

Exercise
Find the eigenvalues and eigenvectors of each of the following matrices A. [The four matrices (a)-(d), two 2 × 2 and two 3 × 3, and the answers with eigenvectors written in normalised form, are given in the printed Workbook.]

HELM (2015): Section 22.1: Basic Concepts 13
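The eigenvalues obtained in Examples 5 and 6 can be confirmed with a numerical eigenvalue routine; this check is an illustrative addition (numpy assumed), not part of the Workbook:

```python
import numpy as np

# Example 5 matrix
A2 = np.array([[1.0, 0.0],
               [1.0, 2.0]])
print(np.sort(np.linalg.eigvals(A2)))   # eigenvalues 1 and 2

# Example 6 matrix (tridiagonal)
A3 = np.array([[ 2.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  2.0]])
vals = np.sort(np.linalg.eigvals(A3))
print(vals)   # eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2)
```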

3. Properties of eigenvalues and eigenvectors
There are a number of general properties of eigenvalues and eigenvectors which you should be familiar with. You will be able to use them as a check on some of your calculations.

Property 1: Sum of eigenvalues
For any square matrix A:
sum of eigenvalues = sum of diagonal terms of A (called the trace of A)
Formally, for an n × n matrix A: λ1 + λ2 + ... + λn = trace(A)
(Repeated eigenvalues must be counted according to their multiplicity.) Thus if λ1 = 4, λ2 = 4, λ3 = 1 then the sum of the eigenvalues is 9.

Property 2: Product of eigenvalues
For any square matrix A:
product of eigenvalues = determinant of A
Formally: λ1 λ2 λ3 ... λn = Π λi = det(A)
The symbol Π simply denotes multiplication, as Σ denotes summation.

Example 7
Verify Properties 1 and 2 for the 3 × 3 matrix
A = [2 −1 0; −1 2 −1; 0 −1 2]
whose eigenvalues were found earlier.

Solution
The three eigenvalues of this matrix are λ1 = 2 − √2, λ2 = 2, λ3 = 2 + √2. Therefore
λ1 + λ2 + λ3 = (2 − √2) + 2 + (2 + √2) = 6 = trace(A)
whilst
λ1 λ2 λ3 = (2 − √2)(2)(2 + √2) = 2(4 − 2) = 4 = det(A)

14 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors
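Properties 1 and 2 make convenient numerical sanity checks. An illustrative sketch using the matrix of Example 7 (numpy assumed):

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
lam = np.linalg.eigvals(A)

print(lam.sum(), np.trace(A))         # both equal 6  (Property 1)
print(lam.prod(), np.linalg.det(A))   # both equal 4  (Property 2)
```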

Property 3: Linear independence of eigenvectors
Eigenvectors of a matrix A corresponding to distinct eigenvalues are linearly independent, i.e. one eigenvector cannot be written as a linear sum of the other eigenvectors. The proof of this result is omitted but we illustrate this property with two examples.
We saw earlier that the matrix A = [1 0; 1 2] has distinct eigenvalues λ1 = 1, λ2 = 2 with associated eigenvectors X(1) = (1/√2)[1; −1] and X(2) = [0; 1] respectively. Clearly X(1) is not a constant multiple of X(2) and these eigenvectors are linearly independent.
We also saw that the 3 × 3 matrix A = [2 −1 0; −1 2 −1; 0 −1 2] had the following distinct eigenvalues λ1 = 2 − √2, λ2 = 2, λ3 = 2 + √2 with corresponding eigenvectors of the form shown:
X(1) = [1; √2; 1], X(2) = [1; 0; −1], X(3) = [1; −√2; 1]
Clearly none of these eigenvectors is a constant multiple of any other. Nor is any one obtainable as a linear combination of the other two. The three eigenvectors are linearly independent.

Property 4: Eigenvalues of diagonal matrices
A 2 × 2 diagonal matrix D has the form D = [a 0; 0 d]. The characteristic equation |D − λI| = 0 is
|a−λ 0; 0 d−λ| = 0, i.e. (a − λ)(d − λ) = 0
So the eigenvalues are simply the diagonal elements a and d. Similarly a 3 × 3 diagonal matrix has the form D = [a 0 0; 0 b 0; 0 0 c], having characteristic equation

HELM (2015): Section 22.1: Basic Concepts 15

18 D λi = (a λ)(b λ)(c λ) = so again the diagonal elements are the eigenvalues. We can see that a diagonal matrix is a particularly simple matrix to work with. In addition to the eigenvalues being obtainable immediately by inspection it is exceptionally easy to multiply diagonal matrices. Task Obtain the products D D and D D of the diagonal matrices a e D = b D = f c g Your solution Answer D D = D D = ae bf cg which of course is also a diagonal matrix. Exercise If λ, λ,... λ n are the eigenvalues of a matrix A, prove the following: (a) A T has eigenvalues λ, λ,... λ n. (b) If A is upper triangular, then its eigenvalues are exactly the main diagonal entries. (c) The inverse matrix A has eigenvalues,,.... λ λ λ n (d) The matrix A ki has eigenvalues λ k, λ k,... λ n k. (e) (Harder) The matrix A has eigenvalues λ, λ,... λ n. (f) (Harder) The matrix A k (k a non-negative integer) has eigenvalues λ k, λ k,... λ k n. Verify the above results for any matrix and any 3 3 matrix found in the previous Exercises on page 3. N.B. Some of these results are useful in the numerical calculation of eigenvalues which we shall consider later. 6 HELM (5): Workbook : Eigenvalues and Eigenvectors

Answer
(a) Using the property that for any square matrix A, det(A) = det(Aᵀ), we see that if det(A − λI) = 0 then det(A − λI)ᵀ = 0. This immediately tells us that det(Aᵀ − λI) = 0, which shows that λ is also an eigenvalue of Aᵀ.
(b) Here simply write down a typical upper triangular matrix U which has terms on the leading diagonal u11, u22, ..., unn and above it. Then construct (U − λI). Finally imagine how you would then obtain det(U − λI) = 0. You should see that the determinant is obtained by multiplying together those terms on the leading diagonal. Here the characteristic equation is:
(u11 − λ)(u22 − λ) ... (unn − λ) = 0
This polynomial has the obvious roots λ1 = u11, λ2 = u22, ..., λn = unn.
(c) Here we begin with the usual eigenvalue problem AX = λX. If A has an inverse A⁻¹ we can multiply both sides by A⁻¹ on the left to give A⁻¹(AX) = A⁻¹λX, which gives X = λA⁻¹X or, dividing through by the scalar λ, A⁻¹X = (1/λ)X, which shows that if λ and X are respectively eigenvalue and eigenvector of A then 1/λ and X are respectively eigenvalue and eigenvector of A⁻¹.
As an example consider A = [3 2; 2 3]. This matrix has eigenvalues λ1 = 1, λ2 = 5 with corresponding eigenvectors X1 = [1; −1] and X2 = [1; 1]. The reader should verify (by direct multiplication) that A⁻¹ = (1/5)[3 −2; −2 3] has eigenvalues 1 and 1/5 with respective eigenvectors X1 = [1; −1] and X2 = [1; 1].
(d), (e) and (f) are proved in a similar way to the proof outlined in (c).

HELM (2015): Section 22.1: Basic Concepts 17
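Several of the exercise results, (a), (c), (d) and (e), can be spot-checked numerically for the 2 × 2 matrix used in part (c) above; this is an illustrative check only (numpy assumed, with k = 2 an arbitrary shift):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 3.0]])
lam = np.sort(np.linalg.eigvals(A))   # eigenvalues 1 and 5

# (a) transpose: same eigenvalues
assert np.allclose(np.sort(np.linalg.eigvals(A.T)), lam)
# (c) inverse: reciprocal eigenvalues
assert np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))), np.sort(1 / lam))
# (d) shift by k: eigenvalues shift by k
k = 2.0
assert np.allclose(np.sort(np.linalg.eigvals(A - k * np.eye(2))), lam - k)
# (e) square: eigenvalues squared
assert np.allclose(np.sort(np.linalg.eigvals(A @ A)), lam ** 2)
print("properties (a), (c), (d), (e) verified for this A")
```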

Applications of Eigenvalues and Eigenvectors 22.2

Introduction
Many applications of matrices in both engineering and science utilize eigenvalues and, sometimes, eigenvectors. Control theory, vibration analysis, electric circuits, advanced dynamics and quantum mechanics are just a few of the application areas. Many of the applications involve the use of eigenvalues and eigenvectors in the process of transforming a given matrix into a diagonal matrix and we discuss this process in this Section. We then go on to show how this process is invaluable in solving coupled differential equations of both first order and second order.

Prerequisites
Before starting this Section you should...
* have a knowledge of determinants and matrices
* have a knowledge of linear first order differential equations

Learning Outcomes
On completion you should be able to...
* diagonalize a matrix with distinct eigenvalues using the modal matrix
* solve systems of linear differential equations by the decoupling method

18 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors

1. Diagonalization of a matrix with distinct eigenvalues
Diagonalization means transforming a non-diagonal matrix into an equivalent matrix which is diagonal and hence is simpler to deal with.
A matrix A with distinct eigenvalues has, as we mentioned in Property 3 in Section 22.1, eigenvectors which are linearly independent. If we form a matrix P whose columns are these eigenvectors, it can be shown that det(P) ≠ 0 so that P⁻¹ exists. The product P⁻¹AP is then a diagonal matrix D whose diagonal elements are the eigenvalues of A. Thus if λ1, λ2, ..., λn are the distinct eigenvalues of A with associated eigenvectors X(1), X(2), ..., X(n) respectively, then
P = [X(1) : X(2) : ... : X(n)]
will produce a product
P⁻¹AP = D = [λ1 0 ... 0; 0 λ2 ... 0; ...; 0 0 ... λn]
We see that the order of the eigenvalues in D matches the order in which P is formed from the eigenvectors.
N.B.
(a) The matrix P is called the modal matrix of A.
(b) Since D is a diagonal matrix with eigenvalues λ1, λ2, ..., λn which are the same as those of A, the matrices D and A are said to be similar.
(c) The transformation of A into D using P⁻¹AP = D is said to be a similarity transformation.

HELM (2015): Section 22.2: Applications of Eigenvalues and Eigenvectors 19

Example 8
Let A = [2 3; 3 2]. Obtain the modal matrix P and calculate the product P⁻¹AP. (The eigenvalues and eigenvectors of this particular matrix A were obtained earlier in this Workbook at page 7.)

Solution
The matrix A has two distinct eigenvalues λ1 = −1, λ2 = 5 with corresponding eigenvectors X1 = [x; −x] and X2 = [x; x]. We can therefore form the modal matrix from the simplest eigenvectors of these forms:
P = [1 1; −1 1]
(Other eigenvectors would be acceptable, e.g. we could use P = [2 3; −2 3], but there is no reason to over-complicate the calculation.)
It is easy to obtain the inverse of this 2 × 2 matrix P and the reader should confirm that:
P⁻¹ = (1/det(P)) adj(P) = (1/2)[1 −1; 1 1]
We can now construct the product P⁻¹AP:
P⁻¹AP = (1/2)[1 −1; 1 1][2 3; 3 2][1 1; −1 1]
= (1/2)[1 −1; 1 1][−1 5; 1 5]
= (1/2)[−2 0; 0 10]
= [−1 0; 0 5]
which is a diagonal matrix with entries the eigenvalues, as expected.
Show (by repeating the method outlined above) that had we defined P = [1 1; 1 −1] (i.e. interchanged the order in which the eigenvectors were taken) we would find P⁻¹AP = [5 0; 0 −1] (i.e. the resulting diagonal elements would also be interchanged).

20 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors
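Example 8's similarity transformation can be reproduced numerically. An illustrative addition (numpy assumed), with the modal matrix built column-by-column from the eigenvectors:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [3.0, 2.0]])
P = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])   # columns are the eigenvectors [1, -1] and [1, 1]

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))        # diagonal matrix diag(-1, 5)
```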

Task
The matrix A = [1 4; 0 3] has eigenvalues 1 and 3, with respective eigenvectors [1; 0] and [2; 1].
If P1 = [1 2; 0 1], P2 = [2 2; 0 1], P3 = [2 1; 1 0], write down the products
P1⁻¹AP1, P2⁻¹AP2, P3⁻¹AP3
(You may not need to do detailed calculations.)

Your solution

Answer
P1⁻¹AP1 = [1 0; 0 3] = D1, P2⁻¹AP2 = [1 0; 0 3] = D2, P3⁻¹AP3 = [3 0; 0 1] = D3
Note that D1 = D2, demonstrating that any eigenvectors of A can be used to form P. Note also that since the columns of P have been interchanged in forming P3 then so have the eigenvalues in D3 as compared with D1.

Matrix powers
If P⁻¹AP = D then we can obtain A (i.e. make A the subject of this matrix equation) as follows:
Multiplying on the left by P and on the right by P⁻¹ we obtain
P P⁻¹AP P⁻¹ = P D P⁻¹
Now using the fact that P P⁻¹ = P⁻¹P = I we obtain IAI = P D P⁻¹ and so
A = P D P⁻¹
We can use this result to obtain the powers of a square matrix, a process which is sometimes useful in control theory. Note that A² = A·A, A³ = A·A·A, etc. Clearly, obtaining high powers of A directly would in general involve many multiplications. The process is quite straightforward, however, for a diagonal matrix D, as this next Task shows.

HELM (2015): Section 22.2: Applications of Eigenvalues and Eigenvectors 21

Task
Obtain D² and D³ if D = [2 0; 0 −1]. Write down D¹⁰.

Your solution

Answer
D² = [2² 0; 0 (−1)²] = [4 0; 0 1]
D³ = [2³ 0; 0 (−1)³] = [8 0; 0 −1]
Continuing in this way: D¹⁰ = [2¹⁰ 0; 0 (−1)¹⁰] = [1024 0; 0 1]

We now use the relation A = P D P⁻¹ to obtain a formula for powers of A in terms of the easily calculated powers of the diagonal matrix D:
A² = A·A = (P D P⁻¹)(P D P⁻¹) = P D (P⁻¹P) D P⁻¹ = P D I D P⁻¹ = P D² P⁻¹
Similarly: A³ = A²·A = (P D² P⁻¹)(P D P⁻¹) = P D² (P⁻¹P) D P⁻¹ = P D³ P⁻¹
The general result is given in the following Key Point:

Key Point 2
For a matrix A with distinct eigenvalues λ1, λ2, ..., λn and associated eigenvectors X(1), X(2), ..., X(n), if
P = [X(1) : X(2) : ... : X(n)]
then D = P⁻¹AP is the diagonal matrix with diagonal entries λ1, λ2, ..., λn, and
Aᵏ = P Dᵏ P⁻¹

22 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors
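Key Point 2 can be checked numerically: build A⁵ from P D⁵ P⁻¹ and compare with direct repeated multiplication. This is an illustrative sketch (numpy assumed); note that numpy's eig conveniently returns a modal matrix whose columns are normalised eigenvectors:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [3.0, 2.0]])
lam, P = np.linalg.eig(A)      # eigenvalues and a modal matrix

# A^5 via the diagonalization route of Key Point 2
A5 = P @ np.diag(lam ** 5) @ np.linalg.inv(P)

# compare with direct repeated multiplication
print(np.round(A5))
print(np.linalg.matrix_power(A, 5))   # same matrix
```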

Example 9
If A = [2 3; 3 2], find A³. (Use the results of Example 8.)

Solution
We know from Example 8 that if P = [1 1; −1 1] then
P⁻¹AP = [−1 0; 0 5] = D, where P⁻¹ = (1/2)[1 −1; 1 1]
So A = P D P⁻¹ and A³ = P D³ P⁻¹ (using the general result in Key Point 2), i.e.
A³ = [1 1; −1 1][(−1)³ 0; 0 5³](1/2)[1 −1; 1 1]
which is easily evaluated.

Exercise
Find a modal matrix P for each of the two matrices A given in the printed Workbook, one 2 × 2 and one 3 × 3. Verify, in each case, that P⁻¹AP is diagonal, with the eigenvalues of A as its diagonal elements.

HELM (2015): Section 22.2: Applications of Eigenvalues and Eigenvectors 23

2. Systems of first order differential equations
Systems of first order ordinary differential equations arise in many areas of mathematics and engineering, for example in control theory and in the analysis of electrical circuits. In each case the basic unknowns are each a function of the time variable t. A number of techniques have been developed to solve such systems of equations; for example the Laplace transform. Here we shall use eigenvalues and eigenvectors to obtain the solution. Our first step will be to recast the system of ordinary differential equations in the matrix form
Ẋ = AX
where A is an n × n coefficient matrix of constants, X is the n × 1 column vector of unknown functions and Ẋ is the n × 1 column vector containing the derivatives of the unknowns. The main step will be to use the modal matrix of A to diagonalise the system of differential equations. This process will transform Ẋ = AX into the form Ẏ = DY where D is a diagonal matrix. We shall find that this new diagonal system of differential equations can be easily solved. This special solution will allow us to obtain the solution of the original system.

Task
Obtain the solutions of the pair of first order differential equations
ẋ = 2x
ẏ = 5y   (1)
given the initial conditions
x(0) = 3, i.e. x = 3 at t = 0
y(0) = 1, i.e. y = 1 at t = 0
(The notation is that ẋ ≡ dx/dt, ẏ ≡ dy/dt.)
[Hint: Recall, from your study of differential equations, that the general solution of the differential equation dy/dt = Ky is y = y0 e^(Kt).]

Your solution

Answer
Using the hint: x = x0 e^(2t), y = y0 e^(5t) where x0 = x(0) and y0 = y(0). From the given initial conditions, finally
x = 3e^(2t), y = e^(5t).

24 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors

In the above Task although we had two differential equations to solve they were really quite separate. We needed no knowledge of matrix theory to solve them. However, we should note that the two differential equations can be written in matrix form. Thus if X = [x; y] and Ẋ = [ẋ; ẏ], the two equations (1) can be written as
[ẋ; ẏ] = [2 0; 0 5][x; y]
i.e. Ẋ = AX with A = [2 0; 0 5].

Task
Write in matrix form the pair of coupled differential equations
ẋ = 4x + 2y
ẏ = −x + y   (2)

Your solution

Answer
[ẋ; ẏ] = [4 2; −1 1][x; y]
Ẋ = AX

The essential difference between the two pairs of differential equations just considered is that the pair (1) were really separate equations whereas the pair (2) were coupled: the first equation of (1) involves only the unknown x, the second involves only y. In matrix terms this corresponded to a diagonal matrix A in the system Ẋ = AX. The pair (2) were coupled in that both equations involved both x and y. This corresponded to the non-diagonal matrix A in the system Ẋ = AX which you found in the last Task.
Clearly the second system here is more difficult to deal with than the first and this is where we can use our knowledge of diagonalization.
Consider a system of differential equations written in matrix form: Ẋ = AX where
X = [x(t); y(t)] and Ẋ = [ẋ(t); ẏ(t)]
We now introduce a new column vector of unknowns Y = [r(t); s(t)] through the relation

HELM (2015): Section 22.2: Applications of Eigenvalues and Eigenvectors 25

X = PY
where P is the modal matrix of A. Then, since P is a matrix of constants, Ẋ = PẎ, so Ẋ = AX becomes PẎ = A(PY). Then, multiplying by P⁻¹ on the left,
Ẏ = (P⁻¹AP)Y
But, because of the properties of the modal matrix, we know that P⁻¹AP is a diagonal matrix. Thus if λ1, λ2 are distinct eigenvalues of A then:
P⁻¹AP = [λ1 0; 0 λ2]
Hence Ẏ = (P⁻¹AP)Y becomes
[ṙ; ṡ] = [λ1 0; 0 λ2][r; s]
That is, when written out, we have
ṙ = λ1 r
ṡ = λ2 s
These equations are decoupled. The first equation only involves the unknown function r(t) and has solution r(t) = C e^(λ1 t). The second equation only involves the unknown function s(t) and has solution s(t) = K e^(λ2 t). [C, K are arbitrary constants.]
Once r, s are known the original unknowns x, y can be found from the relation X = PY.
Note that the theory outlined above is more widely applicable as specified in the next Key Point:

Key Point 3
For any system of differential equations of the form Ẋ = AX, where A is an n × n matrix with distinct eigenvalues λ1, λ2, ..., λn, and t is the independent variable, the solution is
X = PY,
where P is the modal matrix of A and Y = [C1 e^(λ1 t), C2 e^(λ2 t), ..., Cn e^(λn t)]ᵀ.

26 HELM (2015): Workbook 22: Eigenvalues and Eigenvectors

Example 10
Find the solution of the coupled differential equations
ẋ = 4x + 2y
ẏ = −x + y
with initial conditions x(0) = 0, y(0) = 1. Here ẋ ≡ dx/dt and ẏ ≡ dy/dt.

Solution
Here A = [4 2; −1 1]. It is easily checked that A has distinct eigenvalues λ1 = 3, λ2 = 2 and corresponding eigenvectors X1 = [2; −1], X2 = [1; −1].
Therefore, taking P = [2 1; −1 −1], then P⁻¹AP = [3 0; 0 2] and, using Key Point 3,
r(t) = C e^(3t), s(t) = K e^(2t).
So
X = PY: [x; y] = [2 1; −1 −1][C e^(3t); K e^(2t)] = [2C e^(3t) + K e^(2t); −C e^(3t) − K e^(2t)]
Therefore x = 2C e^(3t) + K e^(2t) and y = −C e^(3t) − K e^(2t).
We can now impose the initial conditions x(0) = 0 and y(0) = 1 to give
0 = 2C + K, 1 = −C − K.
Thus C = 1, K = −2 and the solution to the original system of differential equations is
x(t) = 2e^(3t) − 2e^(2t)
y(t) = −e^(3t) + 2e^(2t)

The approach we have demonstrated in Example 10 can be extended to
(a) systems of first order differential equations with n unknowns (Key Point 3)
(b) systems of second order differential equations (described in the next subsection).
The only restriction, as we have said, is that the matrix A in the system Ẋ = AX has distinct eigenvalues.

HELM (2015): Section 22.2: Applications of Eigenvalues and Eigenvectors 27
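The closed-form solution of the Example above can be verified by substituting it back into the system. This is an illustrative check (numpy assumed), taking the system ẋ = 4x + 2y, ẏ = −x + y with x(0) = 0, y(0) = 1 and the solution x(t) = 2e^(3t) − 2e^(2t), y(t) = −e^(3t) + 2e^(2t) as read from the worked Example:

```python
import numpy as np

A = np.array([[ 4.0, 2.0],
              [-1.0, 1.0]])

def x(t): return 2 * np.exp(3 * t) - 2 * np.exp(2 * t)
def y(t): return    -np.exp(3 * t) + 2 * np.exp(2 * t)

# initial conditions
print(x(0.0), y(0.0))   # 0.0 1.0

# check dX/dt = A X at a sample time, using the exact derivatives
t = 0.7
dx =  6 * np.exp(3 * t) - 4 * np.exp(2 * t)
dy = -3 * np.exp(3 * t) + 4 * np.exp(2 * t)
rhs = A @ np.array([x(t), y(t)])
print(np.allclose([dx, dy], rhs))   # True
```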

3. Systems of second order differential equations

The decoupling method discussed above can be readily extended to this situation, which could arise, for example, in a mechanical system consisting of coupled springs. A typical example of such a system with two unknowns has the form

    ẍ = ax + by,  ÿ = cx + dy

or, in matrix form, Ẍ = AX where

    X = [x; y],  A = [ a b ; c d ],  ẍ = d²x/dt²,  ÿ = d²y/dt².

Task

Make the substitution X = PY where Y = [r(t); s(t)] and P is the modal matrix of A, A being assumed here to have distinct eigenvalues λ₁ and λ₂. Solve the resulting pair of decoupled equations for the case, which arises in practice, where λ₁ and λ₂ are both negative.

Your solution

Answer

Exactly as with a first order system, putting X = PY into the second order system Ẍ = AX gives Ÿ = (P⁻¹AP)Y, that is, Ÿ = DY where D = [ λ₁ 0 ; 0 λ₂ ], so

    [ r̈ ; s̈ ] = [ λ₁ 0 ; 0 λ₂ ][ r ; s ]

That is, r̈ = λ₁r = −ω₁²r and s̈ = λ₂s = −ω₂²s (where λ₁ and λ₂ are both negative). The two decoupled equations are of the form of the differential equation governing simple harmonic motion. Hence the general solution is

    r = K cos ω₁t + L sin ω₁t,  s = M cos ω₂t + N sin ω₂t

The solutions for x and y are then obtained by use of X = PY. Note that in this second order case four initial conditions, two each for both x and y, are required because four constants K, L, M, N arise in the solution.
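The normal-mode construction in the Task above can be sketched in code. The matrix below is illustrative (chosen so that both eigenvalues are negative, as assumed); the check confirms that X = PY, with r and s simple harmonic, satisfies Ẍ = AX:

```python
import numpy as np

# Illustrative second order system d^2X/dt^2 = A X
A = np.array([[-5.0, 2.0],
              [2.0, -2.0]])          # both eigenvalues negative here
lam, P = np.linalg.eig(A)
omega = np.sqrt(-lam)                # normal-mode angular frequencies

K, L, M, N = 1.0, 0.5, -0.3, 0.2     # arbitrary constants of integration

def X(t):
    r = K*np.cos(omega[0]*t) + L*np.sin(omega[0]*t)
    s = M*np.cos(omega[1]*t) + N*np.sin(omega[1]*t)
    return P @ np.array([r, s])      # X = P Y

# Check d^2X/dt^2 = A X by a central second difference
t, h = 0.9, 1e-4
d2X = (X(t + h) - 2*X(t) + X(t - h)) / h**2
assert np.allclose(d2X, A @ X(t), atol=1e-4)
print("X(t) satisfies the second order system")
```

Note that four constants (K, L, M, N) appear, matching the four initial conditions the text mentions.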

Exercises

1. Solve by decoupling each of the following first order systems:

(a) dX/dt = AX where A = [ 3 4 ; ], X(0) = [ ]

(b) ẋ₁ = x, ẋ₂ = x + 3x₃, ẋ₃ = x, with x₁(0) = , x₂(0) = , x₃(0) =

(c) dX/dt = [ 3 ] X, with X(0) = [ ]

(d) ẋ₁ = x, ẋ₂ = x + x₃, ẋ₃ = 4x + x₃, with x₁(0) = x₂(0) = x₃(0) =

2. Matrix methods can be used to solve systems of second order differential equations, such as might arise with coupled electrical or mechanical systems. For example the motion of two masses m₁ and m₂ vibrating on coupled springs, neglecting damping and spring masses, is governed by

    m₁ÿ₁ = −k₁y₁ + k₂(y₂ − y₁)
    m₂ÿ₂ = −k₂(y₂ − y₁)

where dots denote derivatives with respect to time. Write this system as a matrix equation Ÿ = AY and use the decoupling method to find Y if

(a) m₁ = m₂ = 1, k₁ = 3, k₂ = 2 and the initial conditions are y₁(0) = 1, y₂(0) = 2, ẏ₁(0) = −2√6, ẏ₂(0) = √6

(b) m₁ = m₂ = 1, k₁ = 6, k₂ = 4 and the initial conditions are y₁(0) = y₂(0) = 0, ẏ₁(0) = √2, ẏ₂(0) = 2√2

Verify your solutions by substitution in each case.

Answers

1. (a) X = [ e^{5t} e^{5t} ; e^{5t} + e^{5t} ]  (b) X = [ 4 sinh t cosh t ; e^{5t} + 3e^{t} ; e^{t} ]  (c) X = [ 4 ]  (d) X = 5[ 5e^{t} + 3e^{3t} ; 8e^{t} 3e^{3t} ]

2. (a) Y = [ cos t − 2 sin √6 t ; 2 cos t + sin √6 t ]  (b) Y = [ sin √2 t ; 2 sin √2 t ]

Repeated Eigenvalues and Symmetric Matrices 22.3

Introduction

In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one or more of the eigenvalues is repeated. We shall see that this sometimes (but not always) causes problems in the diagonalization process that was discussed in the previous Section. We shall then consider the special properties possessed by symmetric matrices which make them particularly easy to work with.

Prerequisites — Before starting this Section you should ... have a knowledge of determinants and matrices; have a knowledge of linear first order differential equations.

Learning Outcomes — On completion you should be able to ... state the conditions under which a matrix with repeated eigenvalues may be diagonalized; state the main properties of real symmetric matrices.

1. Matrices with repeated eigenvalues

So far we have considered the diagonalization of matrices with distinct (i.e. non-repeated) eigenvalues. We have accomplished this by the use of a non-singular modal matrix P (i.e. one where det P ≠ 0 and hence the inverse P⁻¹ exists). We now want to discuss briefly the case of a matrix A with at least one pair of repeated eigenvalues. We shall see that for some such matrices diagonalization is possible but for others it is not. The crucial question is whether we can form a non-singular modal matrix P with the eigenvectors of A as its columns.

Example

Consider the matrix

    A = [ 1 0 ; −4 1 ]

which has characteristic equation

    det(A − λI) = (1 − λ)(1 − λ) = 0.

So the only eigenvalue is 1, which is repeated or, more formally, has multiplicity 2. To obtain eigenvectors of A corresponding to λ = 1 we proceed as usual and solve AX = 1X, i.e.

    [ 1 0 ; −4 1 ][x; y] = [x; y]   implying   x = x and −4x + y = y

from which x = 0 and y is arbitrary. Thus possible eigenvectors are

    [0; 1], [0; −1], [0; 2], [0; 3], ...

However, if we attempt to form a modal matrix P from any two of these eigenvectors, e.g. [0; 1] and [0; 2], then the resulting matrix P = [ 0 0 ; 1 2 ] has zero determinant. Thus P⁻¹ does not exist and the similarity transformation P⁻¹AP that we have used previously to diagonalize a matrix is not possible here.

The essential point, at a slightly deeper level, is that the columns of P in this case are not linearly independent, since [0; 2] = 2[0; 1], i.e. one is a multiple of the other. This situation is to be contrasted with that of a matrix with non-repeated eigenvalues.
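The failure can be seen numerically: ask NumPy for the eigenvectors of the matrix in the example above (entries as reconstructed here) and the candidate modal matrix comes out singular.

```python
import numpy as np

# Matrix with eigenvalue 1 of multiplicity 2 (entries as reconstructed above)
A = np.array([[1.0, 0.0],
              [-4.0, 1.0]])

lam, V = np.linalg.eig(A)            # columns of V: candidate eigenvectors
assert np.allclose(lam, [1.0, 1.0])  # the eigenvalue 1 is repeated

# The candidate modal matrix is singular: its columns are linearly
# dependent, so P^{-1} does not exist and A cannot be diagonalized.
assert abs(np.linalg.det(V)) < 1e-6
print("rank of candidate modal matrix:",
      np.linalg.matrix_rank(V, tol=1e-8))
```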

Earlier, for example, we showed that the matrix

    A = [ 2 3 ; 3 2 ]

has the non-repeated eigenvalues λ₁ = −1, λ₂ = 5 with associated eigenvectors

    X₁ = [1; −1],  X₂ = [1; 1].

These two eigenvectors are linearly independent, since [1; −1] ≠ k[1; 1] for any value of k. Here the modal matrix

    P = [ 1 1 ; −1 1 ]

has linearly independent columns, so that det P ≠ 0 and P⁻¹ exists.

The general result, illustrated by this example, is given in the following Key Point.

Key Point 4

Eigenvectors corresponding to distinct eigenvalues are always linearly independent.

It follows from this that we can always diagonalize an n × n matrix with n distinct eigenvalues since it will possess n linearly independent eigenvectors. We can then use these as the columns of P, secure in the knowledge that these columns will be linearly independent and hence P⁻¹ will exist. It follows, in considering the case of repeated eigenvalues, that the key problem is whether or not there are still n linearly independent eigenvectors for an n × n matrix. We shall now consider two 3 × 3 cases as illustrations.

Task

Let A = [ −1 0 1 ; 1 2 1 ; 0 0 −1 ].

(a) Obtain the eigenvalues and eigenvectors of A.

(b) Can three linearly independent eigenvectors for A be obtained?

(c) Can A be diagonalized?

Your solution

Answer

(a) The characteristic equation of A is

    det(A − λI) = | −1−λ 0 1 ; 1 2−λ 1 ; 0 0 −1−λ | = 0

i.e. (2 − λ)(−1 − λ)(−1 − λ) = 0, which gives λ = 2, λ = −1, λ = −1.

For λ = 2 the associated eigenvectors satisfy A[x; y; z] = 2[x; y; z], from which x = 0, z = 0 and y is arbitrary. Thus an eigenvector is

    X = α[0; 1; 0]

where α is arbitrary, α ≠ 0.

For the repeated eigenvalue λ = −1 we must solve AY = (−1)Y for the eigenvector Y, i.e. (A + I)[x; y; z] = 0, from which z = 0 and x + 3y = 0, so the eigenvectors are of the form

    Y = [−3β; β; 0] = β[−3; 1; 0]

where β is arbitrary.

(b) X and Y are certainly linearly independent (as we would expect since they correspond to distinct eigenvalues). However, there is only one independent eigenvector of the form Y corresponding to the repeated eigenvalue −1.

(c) The conclusion is that since A is 3 × 3 and we can only obtain two linearly independent eigenvectors, A cannot be diagonalized.

Task

The matrix

    A = [ 1 −1 1 ; −1 1 −1 ; 1 −1 1 ]

has eigenvalues 3, 0, 0. The eigenvector corresponding to the eigenvalue 3 is X = [1; −1; 1], or any multiple. Investigate carefully the eigenvectors associated with the repeated eigenvalue λ = 0 and deduce whether A can be diagonalized.

Your solution

Answer

We must solve AY = (0)Y for the required eigenvector, i.e.

    [ 1 −1 1 ; −1 1 −1 ; 1 −1 1 ][x; y; z] = [0; 0; 0]

Each equation here gives on simplification x − y + z = 0. So we have just one equation in three unknowns and can choose any two values arbitrarily. The choices x = 1, y = 0 (and hence z = −1) and x = 0, y = 1 (and hence z = 1), for example, give rise to linearly independent eigenvectors

    Y₁ = [1; 0; −1],  Y₂ = [0; 1; 1]

We can thus form a non-singular modal matrix P from Y₁ and Y₂ together with X (given):

    P = [ 1 0 1 ; 0 1 −1 ; −1 1 1 ]

We can then indeed diagonalize A through the transformation

    P⁻¹AP = D = [ 0 0 0 ; 0 0 0 ; 0 0 3 ]
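Even with a repeated eigenvalue, diagonalization works whenever enough independent eigenvectors exist. The sketch below runs the check on the Task's matrix (entries as reconstructed here):

```python
import numpy as np

# Matrix whose eigenvalue 0 is repeated yet has two independent eigenvectors
A = np.array([[1.0, -1.0, 1.0],
              [-1.0, 1.0, -1.0],
              [1.0, -1.0, 1.0]])

lam, V = np.linalg.eig(A)
# Three linearly independent eigenvectors -> the modal matrix is invertible
assert np.linalg.matrix_rank(V, tol=1e-8) == 3

P = V
D = np.linalg.inv(P) @ A @ P         # similarity transformation P^{-1} A P
assert np.allclose(D, np.diag(lam), atol=1e-8)   # result is diagonal
print("eigenvalues:", np.round(np.sort(lam.real), 6))
```

Contrast this with the previous Task, where the repeated eigenvalue supplied only one independent eigenvector and the same rank test would fail.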

Key Point 5

An n × n matrix with repeated eigenvalues can be diagonalized provided we can obtain n linearly independent eigenvectors for it. This will be the case if, for each repeated eigenvalue λᵢ of multiplicity mᵢ > 1, we can obtain mᵢ linearly independent eigenvectors.

2. Symmetric matrices

Symmetric matrices have a number of useful properties which we will investigate in this Section.

Task

Consider the following four matrices

    A₁ = [ 3 2 ; 2 3 ]   A₂ = [ 4 5 ; 1 2 ]   A₃ = [ 6 1 ; 8 2 ]   A₄ = [ 1 2 ; 2 0 ]

What property do the matrices A₁ and A₄ possess that A₂ and A₃ do not?

Your solution

Answer

Matrices A₁ and A₄ are symmetric across the principal diagonal. In other words transposing these matrices, i.e. interchanging their rows and columns, does not change them: A₁ᵀ = A₁, A₄ᵀ = A₄. This property does not hold for matrices A₂ and A₃, which are non-symmetric.

Calculating the eigenvalues of an n × n matrix with real elements involves, in principle at least, solving an nth order polynomial equation: a quadratic equation if n = 2, a cubic equation if n = 3, and so on. As is well known, such equations sometimes have only real solutions, but complex solutions (occurring as complex conjugate pairs) can also arise. This situation can therefore arise with the eigenvalues of matrices.

Task

Consider the non-symmetric matrix

    A = [ 2 1 ; −5 −2 ]

Obtain the eigenvalues of A and show that they form a complex conjugate pair.

Your solution

Answer

The characteristic equation of A is

    det(A − λI) = | 2−λ 1 ; −5 −2−λ | = 0

i.e. (2 − λ)(−2 − λ) + 5 = 0, leading to λ² + 1 = 0, giving eigenvalues ±i, which are of course complex conjugates.

In particular any matrix of the form

    A = [ a b ; −b a ]

has complex conjugate eigenvalues a ± ib.

A 3 × 3 example of a matrix with some complex eigenvalues is

    B = [ 0 1 0 ; −1 0 0 ; 0 0 1 ]

A straightforward calculation shows that the eigenvalues of B are λ = 1 (real) and λ = ±i (complex conjugates).

With symmetric matrices, on the other hand, complex eigenvalues are not possible.

Key Point 6

The eigenvalues of a symmetric matrix with real elements are always real.

The general proof of this result in Key Point 6 is beyond our scope but a simple proof for 2 × 2 symmetric matrices is straightforward.

Let A = [ a b ; b c ] be any 2 × 2 symmetric matrix, a, b, c being real numbers. The characteristic equation for A is

    (a − λ)(c − λ) − b² = 0

or, expanding:

    λ² − (a + c)λ + ac − b² = 0

from which

    λ = [ (a + c) ± √( (a + c)² − 4ac + 4b² ) ] / 2

The quantity under the square root sign can be treated as follows:

    (a + c)² − 4ac + 4b² = a² + c² + 2ac − 4ac + 4b² = (a − c)² + 4b²

which can never be negative, and hence λ cannot be complex.

Task

Obtain the eigenvalues and the eigenvectors of the symmetric matrix

    A = [ 4 −2 ; −2 1 ]

Your solution
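Key Point 6 is easy to probe numerically: symmetrize random real matrices and confirm every eigenvalue comes out real, and that the 2 × 2 discriminant really equals (a − c)² + 4b²:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random real symmetric matrices always have real eigenvalues (Key Point 6)
for _ in range(20):
    M = rng.normal(size=(4, 4))
    S = (M + M.T) / 2                  # symmetrize
    lam = np.linalg.eigvals(S)
    assert np.allclose(lam.imag, 0.0, atol=1e-8)

# 2x2 case: the discriminant rearranges to (a-c)^2 + 4b^2 >= 0
a, b, c = rng.normal(size=3)
disc = (a + c)**2 - 4*a*c + 4*b**2
assert np.isclose(disc, (a - c)**2 + 4*b**2) and disc >= 0
print("all eigenvalues real")
```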

Answer

The characteristic equation for A is

    (4 − λ)(1 − λ) − 4 = 0  or  λ² − 5λ = 0

giving λ = 0 and λ = 5, both of which are of course real and also unequal (i.e. distinct). For the larger eigenvalue λ = 5 the eigenvectors X = [x; y] satisfy

    [ 4 −2 ; −2 1 ][x; y] = [5x; 5y]   i.e.   −x − 2y = 0,  −2x − 4y = 0

Both equations tell us that x = −2y, so an eigenvector for λ = 5 is X = [−2; 1], or any multiple of this. For λ = 0 the associated eigenvectors satisfy

    4x − 2y = 0,  −2x + y = 0

i.e. y = 2x (from both equations), so an eigenvector is Y = [1; 2], or any multiple.

We now look more closely at the eigenvectors X and Y in the last task. In particular we consider the product XᵀY.

Task

Evaluate XᵀY from the previous task, i.e. where X = [−2; 1], Y = [1; 2].

Your solution

Answer

XᵀY = [−2 1][1; 2] = −2 + 2 = 0. XᵀY = 0 means X and Y are orthogonal.

Key Point 7

Two n × 1 column vectors X and Y are orthogonal if XᵀY = 0 or YᵀX = 0.

Task

We obtained earlier, in Section 22.1 Example 6, the eigenvalues of the matrix A which, as we now emphasize, is symmetric. We found that the eigenvalues are real and distinct. The corresponding eigenvectors were, respectively, X, Y and Z (or, as usual, any multiple of these). Show that these three eigenvectors X, Y, Z are mutually orthogonal.

Your solution

Answer

Evaluating the products gives

    XᵀY = 0,  YᵀZ = 0,  ZᵀX = 0

verifying the mutual orthogonality of these three eigenvectors.

General theory

The following proof that eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are orthogonal is straightforward and you are encouraged to follow it through.

Let A be a symmetric n × n matrix and let λ₁, λ₂ be two distinct eigenvalues of A, i.e. λ₁ ≠ λ₂, with associated eigenvectors X, Y respectively. We have seen that λ₁ and λ₂ must be real since A is symmetric. Then

    AX = λ₁X,  AY = λ₂Y    (1)

Transposing the first of these results gives

    XᵀAᵀ = λ₁Xᵀ    (2)

(Remember that for any two matrices the transpose of a product is the product of the transposes in reverse order.)

We now multiply both sides of (2) on the right by Y (as well as putting Aᵀ = A, since A is symmetric) to give:

    XᵀAY = λ₁XᵀY    (3)

But, using the second eigenvalue equation of (1), equation (3) becomes

    Xᵀλ₂Y = λ₁XᵀY

or, since λ₂ is just a number,

    λ₂XᵀY = λ₁XᵀY

Taking all terms to the same side and factorising gives

    (λ₂ − λ₁)XᵀY = 0

from which, since by assumption λ₁ ≠ λ₂, we obtain the result XᵀY = 0 and the orthogonality has been proved.

Key Point 8

The eigenvectors associated with distinct eigenvalues of a symmetric matrix are mutually orthogonal.

The reader familiar with the algebra of vectors will recall that for two vectors whose Cartesian forms are

    a = aₓi + a_y j + a_z k,  b = bₓi + b_y j + b_z k

the scalar (or dot) product is
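Key Point 8 is exactly what NumPy's symmetric eigensolver exploits: `np.linalg.eigh` returns an orthonormal set of eigenvectors. A quick check on a random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Eigenvectors of a symmetric matrix for distinct eigenvalues are
# orthogonal (Key Point 8); eigh returns them orthonormal.
M = rng.normal(size=(5, 5))
S = (M + M.T) / 2
lam, Q = np.linalg.eigh(S)           # columns of Q: orthonormal eigenvectors

assert np.allclose(Q.T @ Q, np.eye(5), atol=1e-10)   # mutual orthogonality
assert np.allclose(S @ Q, Q @ np.diag(lam))          # they are eigenvectors
print("Q^T Q = I verified")
```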

a · b = aₓbₓ + a_yb_y + a_zb_z.

Furthermore, if a and b are mutually perpendicular then a · b = 0. (The word orthogonal is sometimes used instead of perpendicular in this case.) Our result, that two column vectors are orthogonal if XᵀY = 0, may thus be considered as a generalisation of the 3-dimensional result a · b = 0.

Diagonalization of symmetric matrices

Recall from our earlier work that

1. We can always diagonalize a matrix with distinct eigenvalues (whether these are real or complex).

2. We can sometimes diagonalize a matrix with repeated eigenvalues. (The condition for this to be possible is that any eigenvalue of multiplicity m had to have associated with it m linearly independent eigenvectors.)

The situation with symmetric matrices is simpler: basically we can diagonalize any symmetric matrix. To take the discussion further we first need the concept of an orthogonal matrix. A square matrix A is said to be orthogonal if its inverse (if it exists) is equal to its transpose:

    A⁻¹ = Aᵀ  or, equivalently,  AAᵀ = AᵀA = I.

Example

An important example of an orthogonal matrix is

    A = [ cos φ  sin φ ; −sin φ  cos φ ]

which arises when we use matrices to describe rotations in a plane.

    AAᵀ = [ cos φ  sin φ ; −sin φ  cos φ ][ cos φ  −sin φ ; sin φ  cos φ ]
        = [ cos²φ + sin²φ  0 ; 0  sin²φ + cos²φ ] = I

It is clear that AᵀA = I also, so A is indeed orthogonal.

It can be shown, but we omit the details, that any 2 × 2 matrix which is orthogonal can be written in one of the two forms

    [ cos φ  sin φ ; −sin φ  cos φ ]  or  [ cos φ  −sin φ ; sin φ  cos φ ]

If we look closely at either of these matrices we can see that

1. The two columns are mutually orthogonal, e.g. for the first matrix we have

    (cos φ  −sin φ)[ sin φ ; cos φ ] = cos φ sin φ − sin φ cos φ = 0

2. Each column has magnitude 1 (because cos²φ + sin²φ = 1)

Although we shall not prove it, these results are necessary and sufficient for any order square matrix to be orthogonal.
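The rotation-matrix example can be verified for a range of angles: the product with the transpose is the identity for every φ, and so the transpose really is the inverse.

```python
import numpy as np

# The rotation matrix from the example: orthogonal for every angle phi
def rotation(phi):
    return np.array([[np.cos(phi), np.sin(phi)],
                     [-np.sin(phi), np.cos(phi)]])

for phi in np.linspace(0.0, 2*np.pi, 9):
    A = rotation(phi)
    assert np.allclose(A @ A.T, np.eye(2), atol=1e-12)   # A A^T = I
    assert np.allclose(A.T, np.linalg.inv(A))            # A^{-1} = A^T
print("rotation matrices are orthogonal")
```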

Key Point 9

A square matrix A is said to be orthogonal if its inverse (if it exists) is equal to its transpose:

    A⁻¹ = Aᵀ  or, equivalently,  AAᵀ = AᵀA = I.

A square matrix is orthogonal if and only if its columns are mutually orthogonal and each column has unit magnitude.

Task

For each of the following matrices verify that the two properties above are satisfied. Then check in both cases that AAᵀ = AᵀA = I, i.e. that Aᵀ = A⁻¹.

(a) A = [ 1/2  √3/2 ; −√3/2  1/2 ]   (b) A = (1/3)[ 2 −2 1 ; 1 2 2 ; 2 1 −2 ]

Your solution

Answer

(a) Since

    [ 1/2 ; −√3/2 ]ᵀ[ √3/2 ; 1/2 ] = √3/4 − √3/4 = 0

the columns are orthogonal. Since

    ∥[ 1/2 ; −√3/2 ]∥ = √(1/4 + 3/4) = 1  and  ∥[ √3/2 ; 1/2 ]∥ = √(3/4 + 1/4) = 1

each column has unit magnitude. Straightforward multiplication shows AAᵀ = AᵀA = I.

(b) Proceed as in (a).

The following is the key result of this Section.

Key Point 10

Any symmetric matrix A can be diagonalized using an orthogonal modal matrix P via the transformation

    PᵀAP = D = diag(λ₁, λ₂, ..., λₙ)

It follows that any n × n symmetric matrix must possess n mutually orthogonal eigenvectors even if some of the eigenvalues are repeated.

It should be clear to the reader that Key Point 10 is a very powerful result for any applications that involve diagonalization of a symmetric matrix. Further, if we do need to find the inverse of P, then this is a trivial process since P⁻¹ = Pᵀ (Key Point 9).

Task

The symmetric matrix

    A = [ 0 0 1 ; 0 1 0 ; 1 0 0 ]

has eigenvalues −1, 1, 1 (i.e. eigenvalue 1 is repeated with multiplicity 2). Associated with the non-repeated eigenvalue −1 is an eigenvector:

    X = [1; 0; −1]  (or any multiple)

(a) Normalize the eigenvector X:

Your solution

Answer

(a) Normalizing X, which has magnitude √(1² + 0² + (−1)²) = √2, gives

    X̂ = (1/√2)[1; 0; −1] = [ 1/√2 ; 0 ; −1/√2 ]

(b) Investigate the eigenvectors associated with the repeated eigenvalue 1:

Your solution

Answer

(b) The eigenvectors associated with λ = 1 satisfy AY = Y, which gives

    [ 0 0 1 ; 0 1 0 ; 1 0 0 ][x; y; z] = [x; y; z]

The first and third equations give −x + z = 0 and x − z = 0, i.e. x = z. The equations give us no information about y, so its value is arbitrary.

Thus Y has the form

    Y = [β; α; β]

where both α and β are arbitrary.

A certain amount of care is now required in the choice of α and β if we are to find an orthogonal modal matrix to diagonalize A. For any choice

    XᵀY = (1  0  −1)[β; α; β] = β − β = 0.

So X and Y are orthogonal. (The normalization of X does not affect this.)

However, we also need two orthogonal eigenvectors of the form [β; α; β]. Two such are

    Y⁽¹⁾ = [1; 1; 1]  (choosing β = 1, α = 1)  and  Y⁽²⁾ = [1; −2; 1]  (choosing α = −2, β = 1)

After normalization, these become

    Y⁽¹⁾ = [ 1/√3 ; 1/√3 ; 1/√3 ],  Y⁽²⁾ = [ 1/√6 ; −2/√6 ; 1/√6 ]

Hence the matrix

    P = [X̂ ⋮ Y⁽¹⁾ ⋮ Y⁽²⁾] = [ 1/√2  1/√3  1/√6 ; 0  1/√3  −2/√6 ; −1/√2  1/√3  1/√6 ]

is orthogonal and diagonalizes A:

    PᵀAP = [ −1 0 0 ; 0 1 0 ; 0 0 1 ]

Hermitian matrices

In some applications, of which quantum mechanics is one, matrices with complex elements arise. If A is such a matrix then the matrix A̅ᵀ is the conjugate transpose of A, i.e. the complex conjugate of each element of A is taken as well as A being transposed. Thus if

    A = [ 1+i  3i ; 5−i  2 ]   then   A̅ᵀ = [ 1−i  5+i ; −3i  2 ]

An Hermitian matrix is one satisfying

    A̅ᵀ = A

The matrix A above is clearly non-Hermitian. Indeed the most obvious feature of an Hermitian matrix is that its diagonal elements must be real. (Can you see why?) Thus

    A = [ 2  i ; −i  4 ]

is Hermitian. A 3 × 3 example of an Hermitian matrix is

    A = [ 1  i  5−i ; −i  0  2 ; 5+i  2  −1 ]

An Hermitian matrix is in fact a generalization of a symmetric matrix. The key property of an Hermitian matrix is the same as that of a real symmetric matrix, i.e. its eigenvalues are always real.
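The key property of Hermitian matrices — real eigenvalues — is directly checkable with NumPy's Hermitian eigensolver. The 2 × 2 example below uses entries as assumed in the text above:

```python
import numpy as np

# A complex Hermitian matrix (2x2 example above, entries as assumed here)
A = np.array([[2.0, 1j],
              [-1j, 4.0]])
assert np.allclose(A, A.conj().T)       # Hermitian: conjugate transpose = A

lam = np.linalg.eigvalsh(A)             # eigenvalues of a Hermitian matrix
assert np.allclose(np.imag(lam), 0.0)   # ... are always real
print("eigenvalues:", lam)
```

The eigenvalues here are 3 ± √2, real as Key Point 6 generalized to Hermitian matrices requires.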

Numerical Determination of Eigenvalues and Eigenvectors 22.4

Introduction

In Section 22.1 it was shown how to obtain eigenvalues and eigenvectors for low order matrices, 2 × 2 and 3 × 3. This involved firstly solving the characteristic equation det(A − λI) = 0 for a given n × n matrix A. This is an nth order polynomial equation and, even for n as low as 3, solving it is not always straightforward. For large n even obtaining the characteristic equation may be difficult, let alone solving it.

Consequently in this Section we give a brief introduction to alternative methods, essentially numerical in nature, of obtaining eigenvalues and perhaps eigenvectors. We would emphasize that in some applications such as Control Theory we might only require one eigenvalue of a matrix A, usually the one largest in magnitude, which is called the dominant eigenvalue. It is this eigenvalue which sometimes tells us how a control system will behave.

Prerequisites — Before starting this Section you should ... have a knowledge of determinants and matrices; have a knowledge of linear first order differential equations.

Learning Outcomes — On completion you should be able to ... use the power method to obtain the dominant eigenvalue (and associated eigenvector) of a matrix; state the main advantages and disadvantages of the power method.

1. Numerical determination of eigenvalues and eigenvectors

Preliminaries

Before discussing numerical methods of calculating eigenvalues and eigenvectors we remind you of three results for a matrix A with an eigenvalue λ and associated eigenvector X.

1. A⁻¹ (if it exists) has an eigenvalue 1/λ with associated eigenvector X.

2. The matrix (A − kI) has an eigenvalue (λ − k) and associated eigenvector X.

3. The matrix (A − kI)⁻¹, i.e. the inverse (if it exists) of the matrix (A − kI), has eigenvalue 1/(λ − k) and corresponding eigenvector X.

Here k is any real number.

Task

The 3 × 3 matrix A has eigenvalues λ = 5, 3, 1 with associated eigenvectors X₁, X₂, X₃ respectively, and the inverse A⁻¹ exists. Without further calculation write down the eigenvalues and eigenvectors of the following matrices:

(a) A⁻¹  (b) A + I  (c) (A − 2I)⁻¹

Your solution

Answer

(a) The eigenvalues of A⁻¹ are 1/5, 1/3, 1. (Notice that the dominant eigenvalue of A yields the smallest magnitude eigenvalue of A⁻¹.)

(b) The matrix here is A + I. Thus its eigenvalues are the same as those of A increased by 1, i.e. 6, 4, 2.

(c) The matrix here is (A − 2I)⁻¹. Thus its eigenvalues are the reciprocals of the eigenvalues of (A − 2I). The latter has eigenvalues 3, 1, −1, so (A − 2I)⁻¹ has eigenvalues 1/3, 1, −1.

In each of the above cases the eigenvectors are the same as those of the original matrix A.

The power method

This is a direct iteration method for obtaining the dominant eigenvalue (i.e. the largest in magnitude), say λ₁, for a given matrix A and also the corresponding eigenvector. We will not discuss the theory behind the method but will demonstrate it in action and, equally importantly, point out circumstances when it fails.

Task

Let A = [ 4 2 ; 5 7 ]. By solving det(A − λI) = 0 obtain the eigenvalues of A and also obtain the eigenvector associated with the dominant eigenvalue.

Your solution

51 Answer det(a λi) = 4 λ 5 7 λ = which gives so λ λ + 8 = (λ 9)(λ ) = λ = 9 ( the dominant eigenvalue) and λ =. [ x The eigenvector X = for λ y = 9 is obtained as usual by solving AX = 9X, so [ [ [ [ 4 x 9x = from which 5x = y so X = or any multiple. 5 7 y 9y 5 If we normalize here such that the largest component of X is [.4 X = We shall now demonstrate how the power method can be used to obtain λ = 9 and X = [ 4 where A =. 5 7 [.4 We choose an arbitrary column vector X () = [ We premultiply this by A to give a new column vector X () : X () = [ [ = [ 6 We normalize X () to obtain a column vector Y () with largest component : thus Y () = [ 6 = We continue the process X () = AY () = Y () = [ [ / [ [ / = [.453 = [ HELM (5): Section.4: Numerical Determination of Eigenvalues and Eigenvectors 49

Task

Continue this process for a further step and obtain X⁽³⁾ and Y⁽³⁾, quoting values to 6 d.p.

Your solution

Answer

    X⁽³⁾ = AY⁽²⁾ = [ 4 2 ; 5 7 ][ 0.421053 ; 1 ] = [ 3.684211 ; 9.105263 ]

    Y⁽³⁾ = (1/9.105263)[ 3.684211 ; 9.105263 ] = [ 0.404624 ; 1 ]

The first 8 steps of the above iterative process are summarized in the following table (the first three rows of which have been obtained above):

Table 1

    Step r   X⁽ʳ⁾                     αᵣ          Y⁽ʳ⁾
    1        [6, 12]                  12          [0.5, 1]
    2        [4, 9.5]                 9.5         [0.421053, 1]
    3        [3.684211, 9.105263]     9.105263    [0.404624, 1]
    4        [3.618497, 9.023121]     9.023121    [0.401025, 1]
    5        [3.604100, 9.005125]     9.005125    [0.400228, 1]
    6        [3.600911, 9.001138]     9.001138    [0.400051, 1]
    7        [3.600202, 9.000253]     9.000253    [0.400011, 1]
    8        [3.600045, 9.000056]     9.000056    [0.400002, 1]

In Table 1, αᵣ refers to the largest magnitude component of X⁽ʳ⁾, which is used to normalize X⁽ʳ⁾ to give Y⁽ʳ⁾. We can see that αᵣ is converging to 9, which we know is the dominant eigenvalue λ₁ of A. Also Y⁽ʳ⁾ is converging towards the associated eigenvector [0.4, 1]ᵀ.

Depending on the accuracy required, we could decide when to stop the iterative process by looking at the difference αᵣ − αᵣ₋₁ at each step.

Task

Using the power method obtain the dominant eigenvalue and associated eigenvector of the given 3 × 3 matrix A, using a starting column vector X⁽⁰⁾.

Calculate X⁽¹⁾, Y⁽¹⁾ and α₁:

Your solution

Answer

X⁽¹⁾ = AX⁽⁰⁾, so Y⁽¹⁾ = X⁽¹⁾/α₁, using α₁, the largest magnitude component of X⁽¹⁾.

Carry out the next two steps of this iteration to obtain X⁽²⁾, Y⁽²⁾, α₂ and X⁽³⁾, Y⁽³⁾, α₃:

Your solution

Answer

The iteration gives α₂ = 4 and α₃ = 6.5. After just 3 iterations there is little sign of convergence of the normalizing factor αᵣ. However, continuing the iteration, the differences αᵣ − αᵣ₋₁ eventually become small and the power method converges, albeit slowly, to the dominant eigenvalue of A (correct to 4 d.p.), together with the corresponding eigenvector.

It is clear that the power method requires, for its practical execution, a computer.

Problems with the power method

1. If the initial column vector X⁽⁰⁾ is an eigenvector of A other than that corresponding to the dominant eigenvalue, say λ₁, then the method will fail since the iteration will converge to the wrong eigenvalue, say λ₂, after only one iteration (because AX⁽⁰⁾ = λ₂X⁽⁰⁾ in this case).

2. It is possible to show that the speed of convergence of the power method depends on the ratio

    (magnitude of dominant eigenvalue λ₁) / (magnitude of next largest eigenvalue)

If this ratio is small (i.e. close to 1), the method is slow to converge. In particular, if the dominant eigenvalue λ₁ is complex the method will fail completely to converge because the complex conjugate λ̄₁ will also be an eigenvalue and |λ₁| = |λ̄₁|.

3. The power method only gives one eigenvalue, the dominant one λ₁ (although this is often the most important in applications).

Advantages of the power method

1. It is simple and easy to implement.

2. It gives the eigenvector corresponding to λ₁ as well as λ₁ itself. (Other numerical methods require a separate calculation to obtain the eigenvector.)

Finding eigenvalues other than the dominant

We discuss this topic only briefly.

1. Obtaining the smallest magnitude eigenvalue

If A has dominant eigenvalue λ₁ then its inverse A⁻¹ has an eigenvalue 1/λ₁ (as we discussed at the beginning of this Section). Clearly 1/λ₁ will be the smallest magnitude eigenvalue of A⁻¹. Conversely, if we obtain the largest magnitude eigenvalue, say λ₁′, of A⁻¹ by the power method, then the smallest magnitude eigenvalue of A is the reciprocal, 1/λ₁′. This technique is called the inverse power method.

Example 3

If A is the matrix of the previous Task then the inverse A⁻¹ exists. Using X⁽⁰⁾ in the power method applied to A⁻¹ gives the dominant eigenvalue of A⁻¹; hence the smallest magnitude eigenvalue of A is its reciprocal, 0.746, and the corresponding eigenvector is obtained at the same time.
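The inverse power method can be sketched without ever forming A⁻¹: each iteration solves AX⁽ʳ⁾ = Y⁽ʳ⁻¹⁾ instead (in practice A would be LU-factorized once and the triangular systems re-solved; here `np.linalg.solve` stands in for that). The matrix is the illustrative 2 × 2 one used earlier, with eigenvalues 9 and 2:

```python
import numpy as np

def inverse_power_method(A, x0, steps=50):
    """Inverse power method sketch: power method applied to A^{-1},
    implemented by solving A x = y at each step rather than inverting A."""
    y = np.asarray(x0, dtype=float)
    alpha = None
    for _ in range(steps):
        x = np.linalg.solve(A, y)         # x = A^{-1} y without forming A^{-1}
        alpha = x[np.argmax(np.abs(x))]   # largest-magnitude component
        y = x / alpha
    return 1.0 / alpha, y                 # reciprocal: smallest |eigenvalue| of A

A = np.array([[4.0, 2.0],
              [5.0, 7.0]])                # eigenvalues 9 and 2 (illustrative)
lam_min, v = inverse_power_method(A, [1.0, 1.0])
assert np.isclose(lam_min, 2.0)           # smallest magnitude eigenvalue of A
print(round(lam_min, 6))
```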

In practice, finding the inverse of a large order matrix A can be expensive in computational effort. Hence the inverse power method is implemented without actually obtaining A⁻¹, as follows. As we have seen, the power method applied to A utilizes the scheme:

    X⁽ʳ⁾ = AY⁽ʳ⁻¹⁾,  r = 1, 2, ...

where Y⁽ʳ⁻¹⁾ = X⁽ʳ⁻¹⁾/αᵣ₋₁, αᵣ₋₁ being the largest magnitude component of X⁽ʳ⁻¹⁾. For the inverse power method we have

    X⁽ʳ⁾ = A⁻¹Y⁽ʳ⁻¹⁾

which can be re-written as

    AX⁽ʳ⁾ = Y⁽ʳ⁻¹⁾

Thus X⁽ʳ⁾ can actually be obtained by solving this system of linear equations without needing to obtain A⁻¹. This is usually done by a technique called LU decomposition, i.e. writing A (once and for all) in the form

    A = LU

L being a lower triangular matrix and U upper triangular.

2. Obtaining the eigenvalue closest to a given number p

Suppose λₖ is the (unknown) eigenvalue of A closest to p. We know that if λ₁, λ₂, ..., λₙ are the eigenvalues of A then λ₁ − p, λ₂ − p, ..., λₙ − p are the eigenvalues of the matrix A − pI. Then λₖ − p will be the smallest magnitude eigenvalue of A − pI, but 1/(λₖ − p) will be the largest magnitude eigenvalue of (A − pI)⁻¹. Hence if we apply the power method to (A − pI)⁻¹ we can obtain λₖ. The method is called the shifted inverse power method.

3. Obtaining all the eigenvalues of a large order matrix

In this case neither solving the characteristic equation det(A − λI) = 0 nor the power method (and its variants) is efficient. The commonest method utilized is called the QR technique. This technique is based on similarity transformations, i.e. transformations of the form

    B = M⁻¹AM

where B has the same eigenvalues as A. (We have seen earlier in this Workbook that one type of similarity transformation is D = P⁻¹AP, where P is formed from the eigenvectors of A. However, we are now, of course, dealing with the situation where we are trying to find the eigenvalues and eigenvectors of A.) In the QR method A is reduced to upper (or lower) triangular form.
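The idea behind the QR technique can be sketched in its simplest (unshifted) form: factor Aₖ = QₖRₖ, form Aₖ₊₁ = RₖQₖ (a similarity transformation, since RₖQₖ = Qₖᵀ AₖQₖ), and repeat. For many matrices the iterates approach triangular form, with the eigenvalues appearing on the diagonal. Practical codes add shifts and a preliminary reduction to Hessenberg form, which this sketch omits.

```python
import numpy as np

def qr_eigenvalues(A, steps=200):
    """Basic (unshifted) QR iteration: each step replaces A_k by the
    similar matrix R_k Q_k; the diagonal tends to the eigenvalues."""
    Ak = np.array(A, dtype=float)
    for _ in range(steps):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                 # similarity transform: Q^T A_k Q
    return np.sort(np.diag(Ak))

A = np.array([[4.0, 2.0],
              [5.0, 7.0]])         # illustrative matrix; eigenvalues 2 and 9
lam = qr_eigenvalues(A)
assert np.allclose(lam, [2.0, 9.0], atol=1e-6)
print(lam)
```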
We have already seen that a triangular matrix has its eigenvalues on the diagonal. For details of the QR method, or more efficient techniques, one of which is based on what is called a Householder transformation, the reader should consult a text on numerical methods.



Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Contents Eigenvalues and Eigenvectors. Basic Concepts. Applications of Eigenvalues and Eigenvectors 8.3 Repeated Eigenvalues and Symmetric Matrices 3.4 Numerical Determination of Eigenvalues and Eigenvectors

Related documents

- 22.2 Applications of Eigenvalues and Eigenvectors
- 22.3 Repeated Eigenvalues and Symmetric Matrices

- Linear Algebra: Matrices Operations
- Just the Maths, Unit 9.8: Matrices 8 (Characteristic properties & Similarity transformations), A.J. Hobson
- Further Mathematical Methods (Linear Algebra) 2002: Solutions for Problem Sheet 4
- 22.4 Numerical Determination of Eigenvalues and Eigenvectors
- Linear Algebra: Matrix Eigenvalue Problems (Chapter 8)
- 7.4 The Inverse of a Matrix
- MTH Linear Algebra Study Guide (Dr. Tony Yee, The Hong Kong Institute of Education)

- Some Notes on Linear Algebra (Thomas L. Scofield, Calvin College, 1998)
- Linear Algebra Primer (Daniel S. Stutts)
- Chapter Two: Elements of Linear Algebra
- 10 Complex Numbers: Complex Arithmetic; Argand Diagrams and the Polar Form; The Exponential Form of a Complex Number; De Moivre's Theorem
- Mathematics for Part I Engineering, Module 8: Matrices III (Faculty of Mathematical Studies)
- Systems of Algebraic Equations and Systems of Differential Equations
- Introduction to Quantitative Techniques for MSc Programmes (School of Economics, Mathematics and Statistics, Malet Street, London WC1E 7HX)
- 1. General Vector Spaces
- Symmetric and antisymmetric matrices
- 21.3 z-Transforms and Difference Equations
- Chapter 3: Determinants and Eigenvalues
- 3 Matrix Algebra
- 7.3 Determinants
- DE Class Notes 3: Matrix Eigenvalue Problems
- Chapter 5: Linear Algebra
- 4. Linear transformations as a vector space

- Section 5.5: Matrices and Vectors
- Linear Algebra, part 2: Eigenvalues, eigenvectors and least squares solutions (Anna-Karin Tornberg)

- Matrices (matrix notation for linear systems)
- Eigenvalues and Eigenvectors: An Introduction
- Econ 205 Slides from Lecture 7 (Joel Sobel)
- OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises
- Matrix Algebra and Systems of Equations
- CS 246 Review of Linear Algebra
- Appendix A: Matrices
- Exam Review: orthogonal matrices, projections, least squares, Gram-Schmidt
- Linear Algebra Primer (D.S. Stutts, 1995)
- Understanding the Diagonalization Problem (Roy Skjelnes)
- Chapter 5: Eigenvalues and Eigenvectors
- Review of Linear Algebra
- Further Mathematical Methods (Linear Algebra): Solutions for the Examination

- Similar Matrices and Diagonalization. Theorem: if A and B are similar n x n matrices, they have the same characteristic equation and hence the same eigenvalues.
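The theorem quoted above can be checked numerically: a similarity transformation B = P^-1 A P preserves the trace and determinant, and for 2x2 matrices those two numbers determine the characteristic equation lambda^2 - trace*lambda + det = 0. A small self-contained sketch, with A and P chosen here purely for illustration:

```python
# Verify that a similar matrix B = P^-1 A P has the same trace and
# determinant as A, hence the same characteristic equation and eigenvalues.

def mat_mul(X, Y):
    """Product of two 2x2 matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inverse_2x2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

A = [[4.0, 1.0], [2.0, 3.0]]
P = [[1.0, 2.0], [1.0, 3.0]]                  # any invertible matrix works
B = mat_mul(inverse_2x2(P), mat_mul(A, P))    # B = P^-1 A P, similar to A

trace = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(trace(A), det(A))   # -> 7.0 10.0
print(trace(B), det(B))   # -> 7.0 10.0, so the same characteristic equation
```

Both matrices give lambda^2 - 7*lambda + 10 = 0, i.e. eigenvalues 5 and 2, even though their entries differ.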

More information

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2 MATH 7- Final Exam Sample Problems Spring 7 ANSWERS ) ) ). 5 points) Let A be a matrix such that A =. Compute A. ) A = A ) = ) = ). 5 points) State ) the definition of norm, ) the Cauchy-Schwartz inequality

More information

Matrices and Deformation

Matrices and Deformation ES 111 Mathematical Methods in the Earth Sciences Matrices and Deformation Lecture Outline 13 - Thurs 9th Nov 2017 Strain Ellipse and Eigenvectors One way of thinking about a matrix is that it operates

More information

2 Eigenvectors and Eigenvalues in abstract spaces.

2 Eigenvectors and Eigenvalues in abstract spaces. MA322 Sathaye Notes on Eigenvalues Spring 27 Introduction In these notes, we start with the definition of eigenvectors in abstract vector spaces and follow with the more common definition of eigenvectors

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Solution Set 7, Fall '12

Solution Set 7, Fall '12 Solution Set 7, 18.06 Fall '12 1. Do Problem 26 from 5.1. (It might take a while but when you see it, it's easy) Solution. Let n 3, and let A be an n n matrix whose i, j entry is i + j. To show that det

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

A = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3].

A = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3]. Appendix : A Very Brief Linear ALgebra Review Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics Very often in this course we study the shapes

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

A = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is,

A = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is, 65 Diagonalizable Matrices It is useful to introduce few more concepts, that are common in the literature Definition 65 The characteristic polynomial of an n n matrix A is the function p(λ) det(a λi) Example

More information

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural

More information

7 Planar systems of linear ODE

7 Planar systems of linear ODE 7 Planar systems of linear ODE Here I restrict my attention to a very special class of autonomous ODE: linear ODE with constant coefficients This is arguably the only class of ODE for which explicit solution

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to 1.1. Introduction Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

UNDETERMINED COEFFICIENTS SUPERPOSITION APPROACH *

UNDETERMINED COEFFICIENTS SUPERPOSITION APPROACH * 4.4 UNDETERMINED COEFFICIENTS SUPERPOSITION APPROACH 19 Discussion Problems 59. Two roots of a cubic auxiliary equation with real coeffi cients are m 1 1 and m i. What is the corresponding homogeneous

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra 1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

Linear Algebra Exercises

Linear Algebra Exercises 9. 8.03 Linear Algebra Exercises 9A. Matrix Multiplication, Rank, Echelon Form 9A-. Which of the following matrices is in row-echelon form? 2 0 0 5 0 (i) (ii) (iii) (iv) 0 0 0 (v) [ 0 ] 0 0 0 0 0 0 0 9A-2.

More information

Differential Equations

Differential Equations This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is

More information

16.4. Power Series. Introduction. Prerequisites. Learning Outcomes

16.4. Power Series. Introduction. Prerequisites. Learning Outcomes Power Series 6.4 Introduction In this Section we consider power series. These are examples of infinite series where each term contains a variable, x, raised to a positive integer power. We use the ratio

More information

MATRICES. knowledge on matrices Knowledge on matrix operations. Matrix as a tool of solving linear equations with two or three unknowns.

MATRICES. After studying this chapter you will acquire: knowledge of matrices; knowledge of matrix operations; use of matrices as a tool for solving linear equations with two or three unknowns. List of


Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity:

Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity: Diagonalization. We have seen that diagonal and triangular matrices are much easier to work with than are most matrices. For example, determinants and eigenvalues are easy to compute, and multiplication
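The similarity idea in this excerpt, A = P D P⁻¹ with D diagonal, can be verified numerically. A minimal sketch with an illustrative matrix (not from the notes):

```python
import numpy as np

# Illustrative 2x2 matrix with distinct real eigenvalues (2 and 5).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)

# Similarity: A = P D P^{-1}, so A is similar to the diagonal matrix D.
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# Why this helps computationally: powers reduce to powers of the diagonal,
# A^5 = P D^5 P^{-1}.
assert np.allclose(np.linalg.matrix_power(A, 5),
                   P @ np.diag(eigvals**5) @ np.linalg.inv(P))
print("A is similar to a diagonal matrix; eigenvalues:", np.sort(eigvals))
```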


LS.2 Homogeneous Linear Systems with Constant Coefficients

LS.2 Homogeneous Linear Systems with Constant Coefficients. Using matrices to solve linear systems. The naive way to solve a linear system of ODEs with constant coefficients is by eliminating variables,
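The matrix alternative to eliminating variables that this excerpt introduces is the eigenvalue method: expand x(0) in eigenvectors of A, then x(t) = Σ cₖ e^(λₖt) vₖ. A sketch with illustrative values (the matrix and initial condition are assumptions, not from the notes):

```python
import numpy as np

# Solve x'(t) = A x(t), x(0) = x0, by the eigenvalue method.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])

eigvals, V = np.linalg.eig(A)   # A V = V diag(eigvals)
c = np.linalg.solve(V, x0)      # expand x0 in the eigenvector basis

def x(t):
    # x(t) = sum_k c_k e^{lambda_k t} v_k  (columns of V scaled per eigenvalue)
    return (V * np.exp(eigvals * t)) @ c

# Sanity check: the formula satisfies the ODE (central finite difference).
h, t = 1e-6, 0.5
assert np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t), atol=1e-4)
print("eigenvalue-method solution satisfies x' = Ax")
```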


Introduction to Matrices

2.14 Analysis and Design of Feedback Control Systems: Introduction to Matrices. Derek Rowell, October 2002. Modern system dynamics is based upon a matrix representation of the dynamic equations governing the


Production of this 2015 edition, containing corrections and minor revisions of the 2008 edition, was funded by the sigma Network.

Production of this 2015 edition, containing corrections and minor revisions of the 2008 edition, was funded by the sigma Network. About the HELM Project: HELM (Helping Engineers Learn Mathematics) materials were the outcome of a three-year curriculum development project undertaken by a consortium of five English universities led by


G1110 & 852G1 Numerical Linear Algebra

The University of Sussex, Department of Mathematics. G1110 & 852G1 Numerical Linear Algebra, Lecture Notes, Autumn Term, Kerstin Hesse. Figure: Geometric explanation of the


Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala, vtkala@math.ucsb.edu. Last updated June 11, 2017. Systems of Linear Equations. A linear equation is an equation that can be written in the form a_1 x_1 + a_2 x_2 + ...
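A linear equation of the form a_1 x_1 + a_2 x_2 + ... = b, and a system of them, is exactly what numerical libraries solve as Ax = b. A small sketch with made-up coefficients (not taken from the notes):

```python
import numpy as np

# The system   x1 + 2*x2 = 5
#            3*x1 -   x2 = 1
# written in matrix form A x = b and solved directly.
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)   # the solution satisfies every equation
print(x)                       # → [1. 2.]
```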


MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)


Systems of Second Order Differential Equations Cayley-Hamilton-Ziebur

Systems of Second Order Differential Equations: Cayley-Hamilton-Ziebur. Characteristic Equation; Cayley-Hamilton; Cayley-Hamilton Theorem; An Example; Euler's Substitution for u'' = A u. The Cayley-Hamilton-Ziebur
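The Cayley-Hamilton theorem named in this excerpt states that a matrix satisfies its own characteristic equation; for a 2x2 matrix that is A² - tr(A)A + det(A)I = 0. A quick numerical check with an illustrative matrix (not from the notes):

```python
import numpy as np

# Illustrative 2x2 matrix; characteristic polynomial is
# p(t) = t^2 - tr(A) t + det(A).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

tr, det = np.trace(A), np.linalg.det(A)

# Cayley-Hamilton: p(A) = A^2 - tr(A) A + det(A) I is the zero matrix.
residual = A @ A - tr * A + det * np.eye(2)
assert np.allclose(residual, 0)
print("Cayley-Hamilton holds: A^2 - tr(A)A + det(A)I = 0")
```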


Linear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway

Linear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway Linear Algebra: Lecture Notes Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway November 6, 23 Contents Systems of Linear Equations 2 Introduction 2 2 Elementary Row


PY 351 Modern Physics Short assignment 4, Nov. 9, 2018, to be returned in class on Nov. 15.

PY 351 Modern Physics Short assignment 4, Nov. 9, 2018, to be returned in class on Nov. 15. PY 351 Modern Physics Short assignment 4, Nov. 9, 2018, to be returned in class on Nov. 15. You may write your answers on this sheet or on a separate paper. Remember to write your name on top. Please note:


9.1 Eigenanalysis I Eigenanalysis II Advanced Topics in Linear Algebra Kepler s laws

Chapter 9 Eigenanalysis. Contents: 9.1 Eigenanalysis I; 9.2 Eigenanalysis II; 9.3 Advanced Topics in Linear Algebra; 9.4 Kepler's laws.


x = (x_1, x_2, ..., x_n), where x_1, x_2, ..., x_n ∈ R

WEEK. In general terms, our aim in this first part of the course is to use vector space theory to study the geometry of Euclidean space. A good knowledge of the subject matter of the Matrix Applications
