im7.qxd 9//5 :9 PM Page 5

Part B  LINEAR ALGEBRA. VECTOR CALCULUS

Part B consists of
Chap. 7   Linear Algebra: Matrices, Vectors, Determinants. Linear Systems
Chap. 8   Linear Algebra: Matrix Eigenvalue Problems
Chap. 9   Vector Differential Calculus. Grad, Div, Curl
Chap. 10  Vector Integral Calculus. Integral Theorems

Hence we have retained the previous subdivision of Part B into four chapters. Chapter 9 is self-contained and completely independent of Chaps. 7 and 8. Thus, Part B consists of two large independent units, namely, Linear Algebra (Chaps. 7, 8) and Vector Calculus (Chaps. 9, 10). Chapter 10 depends on Chap. 9, mainly because of the occurrence of div and curl (defined in Chap. 9) in the Gauss and Stokes theorems in Chap. 10.

CHAPTER 7   Linear Algebra: Matrices, Vectors, Determinants. Linear Systems

Changes
The order of the material in this chapter and its subdivision into sections have been retained, but various local changes have been made to increase the usefulness of this chapter for applications, in particular:
1. The beginning, which was somewhat slow by modern standards, has been streamlined, so that the student will see applications to linear systems of equations much earlier.
2. A reference section (Sec. 7.6) on second- and third-order determinants has been included for easier access from other parts of the book.

SECTION 7.1. Matrices, Vectors: Addition and Scalar Multiplication, page 7

Purpose. Explanation of the basic concepts and of the two basic matrix operations. The latter derive their importance from their use in defining vector spaces, a fact that should perhaps not be mentioned at this early stage; its systematic discussion follows in Sec. 7.4, where it will fit nicely into the flow of ideas.

Main Content, Important Concepts
Matrix, square matrix, main diagonal
Double subscript notation
Row vector, column vector, transposition
Equality of matrices
Matrix addition
Scalar multiplication (multiplication of a matrix by a scalar)
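The number-like laws of the two operations of this section can be checked numerically; the following is a minimal sketch using NumPy, with illustrative matrices made up for the demonstration.

```python
import numpy as np

# Matrix addition and scalar multiplication, the two operations of Sec. 7.1.
A = np.array([[4.0, 6.0], [0.0, 1.0]])
B = np.array([[-1.0, 2.0], [5.0, 3.0]])
c, k = 2.0, 3.0

# The rules analogous to those for numbers:
assert np.allclose(A + B, B + A)                 # commutativity
assert np.allclose((A + B) + B, A + (B + B))     # associativity
assert np.allclose(c * (A + B), c * A + c * B)   # distributivity
assert np.allclose((c + k) * A, c * A + k * A)

# Transposition relates row and column vectors:
row = np.array([[1.0, 2.0, 3.0]])   # 1 x 3 row vector
col = row.T                         # 3 x 1 column vector
assert col.shape == (3, 1)
```

Because vectors are matrices with a single row or column, the same four rules apply to them unchanged.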
Instructor's Manual

Comments on Important Facts
One should emphasize that vectors are always included as special cases of matrices and that these two operations have properties [formulas (3), (4)] similar to those of the corresponding operations for numbers, which is a great practical advantage.

SOLUTIONS TO PROBLEM SET 7.1, page 77

6. [8 6], [ 8 ], [ ], [6 ] 6 6 8 8 6 5.5 .5 8 .5
. [ ], same, [ ], undefined 9 8 6 8 8 .5 7 8 6
. Undefined, undefined, [ ], same 56 96 8
. Undefined, [ 8], undefined, undefined
. _ 5 A, _ A. Similar (and more important) instances are the scaling of equations in linear systems, the formation of linear combinations, and the like, as will be shown later.
. , , and , , . The concept of a main diagonal is restricted to square matrices.
. No, no, no. Transposition, which relates row and column vectors, will be discussed in the next section.
6. (b) The incidence matrices are as follows, with nodes corresponding to rows and branches to columns, as in Fig. 5:
[ ], [ ], [ ]
From a figure we may more easily grasp the kind of network. However, in the present context, matrices have two advantages over figures, namely, the computer can handle
them without great difficulties, and a matrix determines a network uniquely, whereas a given network can be drawn as a figure in various ways, so that one does not immediately see that two different figures represent the same network. A nice example is given later in the book.
(c) The networks corresponding to the given matrices may be drawn as figures with numbered nodes and branches.
(e) The nodal incidence matrix is A = [ ].

SECTION 7.2. Matrix Multiplication, page 78

Purpose. Matrix multiplication, the third and last algebraic operation, is defined and discussed, with emphasis on its unusual properties; this also includes its representation by inner products of row and column vectors.

Main Content, Important Facts
Definition of matrix multiplication ("rows times columns")
Properties of matrix multiplication
Matrix products in terms of inner products of vectors
Linear transformations motivating the definition of matrix multiplication
AB ≠ BA in general, so the order of factors is important.
AB = 0 does not imply A = 0 or B = 0 or BA = 0.
(AB)^T = B^T A^T

Short Courses. Products in terms of row and column vectors and the discussion of linear transformations could be omitted.

Comment on Notation
For transposition, T seems preferable over a prime, which is often used in the literature but will be needed to indicate differentiation in Chap. 9.

Comments on Content
Most important for the next sections on systems of equations is the multiplication of a matrix times a vector. The examples emphasize that matrix multiplication is not commutative and make the student aware of the restrictions of matrix multiplication. Formula (10d) for the transposition of a product should be memorized.

In motivating matrix multiplication by linear transformations, one may also illustrate the geometric significance of noncommutativity by combining a rotation with a stretch in
x-direction in both orders and show that a circle transforms into an ellipse with main axes in the direction of the coordinate axes or rotated, respectively.

SOLUTIONS TO PROBLEM SET 7.2, page 86

. [ 5], same, [ 6 ], (the zero matrix) 5 6 6 6 6 8 6
. [ ], [ ], [ ], same 6 6 65 6 6 6 6 8 6 7 6 6 6
. Undefined, undefined, [ 6], [ 6 5] 5 5 9 8
. Undefined, 868, 868, [ 5 6 8] 5 76 9 77 8 6
. [ 9 ], same, [ 7 6] 9 6 6 59 9 8
. [ 9 7 ], [ ], undefined, [ ]
. 58, undefined, undefined, 69 6
. M = AB − BA must be the zero matrix. It has the form
M = [ ][ ] − [ ][ ] = [ ].
Each entry a_jk of A is determined from an entry m_jk of M (and also from m_kj). Answer:
A = [ ].
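The rotation-stretch remark above, and the rotation-matrix formula of the Team Project, can be illustrated numerically. A small sketch, with the stretch factor and angles chosen arbitrarily for the demonstration:

```python
import numpy as np

def rotation(theta):
    """2 x 2 matrix of the rotation through the angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta = 0.3
A = rotation(theta)
# Team Project (b): A^n is the rotation through n*theta.
assert np.allclose(np.linalg.matrix_power(A, 5), rotation(5 * theta))

# Noncommutativity: a rotation R combined with a stretch S in the
# x-direction gives different results in the two orders.
R = rotation(np.pi / 4)
S = np.array([[2.0, 0.0], [0.0, 1.0]])   # stretch by factor 2 (made up)
assert not np.allclose(R @ S, S @ R)
# Applied to the points of a circle, R @ S and S @ R produce ellipses
# whose main axes are rotated relative to each other.
```

This gives a concrete picture of why the order of factors matters in a matrix product.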
8. The calculation for the first column is
[6 9 5][ ] = [ 7], 5 7
and similarly for the other two columns; see the answer to Prob. , last matrix given.
. Idempotent are [ ], [ ], etc.; nilpotent are [ ], [ ], etc., and A² = I is true for
[ ], [ ], [ ], [ ], [ ],
where a, b, and c are arbitrary.
. The entry c_kj of (AB)^T is c_jk of AB, which is Row j of A times Column k of B. On the right, c_kj is Row k of B^T, hence Column k of B, times Column j of A^T, hence Row j of A.
. An annual increase of about 5% because the matrix of this Markov process is
A = [ .9  ;  .999 ]
and the initial state is [ 98]^T, so that multiplication by A gives the further states (rounded) [98 979]^T, [86 978]^T, [65 9775]^T.
6. The transition probabilities can be given in a matrix
             From N  From T
A = [ .9      .5 ]   To N
    [ .       .5 ]   To T
and multiplication of [ ]^T by A, A², A³ gives [.9 .]^T, [.86 .]^T, [.8 .56]^T. Answer: .86, .8.
8. Team Project. (b) Use induction on n. True if n = 1. Take the formula in the problem as the induction hypothesis, multiply by A, and simplify the entries in the product by the addition formulas for the cosine and sine to get A^(n+1).
(c) These formulas follow directly from the definition of matrix multiplication.
(d) A scalar matrix would correspond to a stretch or contraction by the same factor in all directions.
(e) Rotations about the x1-, x2-, x3-axes through the given angles, respectively.

SECTION 7.3. Linear Systems of Equations. Gauss Elimination, page 87

Purpose. This simple section centers around the Gauss elimination for solving linear systems of m equations in n unknowns x1, …, xn, explaining its practical use as well as its mathematical justification (leaving the more demanding general existence theory to the next sections).
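The elimination procedure of this section can be sketched in a few lines of code. This is a minimal teaching sketch of Gauss elimination with partial pivoting and back substitution, not a production routine (it assumes a nonsingular coefficient matrix); the sample system is made up.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gauss elimination with partial pivoting,
    followed by back substitution. Assumes A is nonsingular."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))        # pivot row (pivoting)
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]  # row interchange
        for i in range(k + 1, n):                  # eliminate below pivot
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0], [1.0, 3.0, 2.0], [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])
assert np.allclose(gauss_solve(A, b), np.linalg.solve(A, b))
```

The systematic character of the method (no unsystematic shortcuts) is exactly what makes it programmable; the numeric aspects are taken up again in the chapter on numeric linear algebra.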
Main Content, Important Concepts
Nonhomogeneous system, homogeneous system, coefficient matrix, augmented matrix
Gauss elimination in the case of the existence of
I. a unique solution (Example )
II. infinitely many solutions (Example )
III. no solution (Example )
Pivoting
Elementary row operations, echelon form

Background Material. All one needs here is the multiplication of a matrix and a vector.

Comments on Content
The student should become aware of the following facts:
1. Linear systems of equations provide a major application of matrix algebra and a justification of the definitions of its concepts.
2. The Gauss elimination (with pivoting) gives meaningful results in each of the Cases I-III.
3. This method is a systematic elimination that does not look for unsystematic shortcuts (which depend on the size of the numbers involved and are still advocated in some older precomputer-age books).
Algorithms for Gauss's method and related methods are discussed in Sec. 20.1, which is independent of the rest of Chap. 20 and can thus be taken up along with the present section in case of time and interest.

SOLUTIONS TO PROBLEM SET 7.3, page 95

. x , y 
. x 5, y arbitrary, z y. Unknowns that remain arbitrary are sometimes denoted by other letters, such as t1, t2, etc. In the present case we could thus write the solutions as x 5, y t, z t.
6. x t arbitrary, y x t, z x t
8. No solution.
. x t arbitrary, y x 5 t 5, z x t
. x z t, y z t, z t arbitrary
. w x t, x t arbitrary, y , z 
6. w , x t arbitrary, y x t, z x t
8. Currents at the lower node: I I I (minus because I flows out). Voltage in the left circuit:
R I  R I  E  E
and in the right circuit:
R I  R I  E
(minus because I flows against the arrow of E ). Answer:
I = [(R + R )E − R E ]/D
I = [R E − (R + R )E ]/D
I = [R E − R E ]/D
where D = R R + R R + R R .
. P 9, P 8, D S , D S 8
. Project. (a) B and C are different. For instance, it makes a difference whether we first multiply a row and then interchange rows, or carry out these operations in the reverse order.
B = [ ], C = [ ]
(b) Premultiplying A by E (that is, multiplying A by E from the left) makes E operate on the rows of A. The assertions then follow almost immediately from the definition of matrix multiplication.

SECTION 7.4. Linear Independence. Rank of a Matrix. Vector Space, page 96

Purpose. This section introduces some theory centered around linear independence and rank, in preparation for the discussion of the existence and uniqueness problem for linear systems of equations (Sec. 7.5).

Main Content, Important Concepts
Linear independence
Real vector space R^n, dimension, basis
Rank defined in terms of row vectors
Rank in terms of column vectors
Invariance of rank under elementary row operations

Short Courses. For the further discussion in the next sections, it suffices to define linear independence and rank.

Comments on Rank and Vector Spaces
Of the three possible equivalent definitions of rank,
(i) by row vectors (our definition),
(ii) by column vectors (our Theorem ),
(iii) by submatrices with nonzero determinant (Sec. 7.7),
the first seems to be the most practical in our context.
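The central facts of this section, that row rank equals column rank and that elementary row operations leave the rank invariant, can be checked numerically. A minimal sketch (the example matrix is made up; its third row is the sum of the first two, so the rank is 2):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [1.0, 3.0, 7.0]])   # row 3 = row 1 + row 2
assert np.linalg.matrix_rank(A) == 2

# Theorem: the row rank equals the column rank, so transposition
# does not change the rank.
assert np.linalg.matrix_rank(A.T) == np.linalg.matrix_rank(A)

# Elementary row operations leave the rank invariant:
B = A.copy()
B[0] *= 5.0             # multiply a row by a nonzero scalar
B[[1, 2]] = B[[2, 1]]   # interchange two rows
B[2] += 3.0 * B[0]      # add a multiple of one row to another
assert np.linalg.matrix_rank(B) == 2
```

NumPy computes the rank via a singular value decomposition rather than by row reduction, but the value agrees with the definition by row vectors used here.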
Introducing vector spaces here, rather than in Sec. 7.1, we have the advantage that the student immediately sees an application (row and column spaces). Vector spaces in full generality follow in Sec. 7.9.

SOLUTIONS TO PROBLEM SET 7.4, page 

. Row reduction of the matrix and of its transpose gives
[8 5 ; 9 ] and [8 6 ; ].
Hence the rank is , and bases are [8 5 ], [ 9 ] and [8 6 ]^T, [ ]^T.
. Row reduction as in Prob. gives the matrices
[a b c ; ] and [a b ; c(a b) ; c(a b) a b a a a].
Hence for general a, b, c the rank is . Bases are [a b c], [ a b c(a b)] and [a b ]^T, [ c(a b)]^T.
6. The matrix is symmetric. The reduction gives
[ a ; a a ; a a].
Hence a basis is [ a], [ a a], [ a a] and the transposed vectors (column vectors) for the column space. Hence the rank is (if a ), (if a ), and if a, .
8. The row reductions as in Prob. give
[ 7 ] and [ 7 ].
The rank is , and bases are [ ], [ ], [ ], [ ] and the same vectors written as column vectors.
im7.qxd 9//5 :9 PM Page 5 Instructor s Manual 5. The matrix is symmetric. Row reduction gives W X. Hence the rank is, and bases are [ ], [ ] and the same vectors transposed (as column vectors).. The matrix is symmetric. Row reduction gives 7 5 5/7 W X. 5/7 Hence the ranks is, and bases are [ ], [ ], [ ], [ ] and the same vectors written as column vectors.. Yes 6. No, by Theorem 8. No. Quite generally, if one of the vectors v (),, v (m) is, say, v (), then () holds with any c and c,, c m all zero.. No. It is remarkable that A [a jk ] with a jk j k has rank for any size n of the matrix.. AB and its transpose (AB) T B T A T have the same rank.. This follows directly from Theorem. 6. A proof is given in Ref. [B], Vol., p.. 8. Yes if and only if k. Then the dimension is, and a basis is [ ], [ ], [ ].. No. If v [v v ] satisfies the inequality, v does not. Draw a sketch to see the geometric meaning of the inequality characterizing the half-plane above the sloping straight line v v.. Yes, dimension, basis [ 5 ]. No, because of the positivity assumption 6. Yes, dimension. The two given equations form a homogeneous linear system with the augmented matrix [ ]. The solution is v _ v v, v arbitrary, v 9_ v 6v, v arbitrary. 7 /7
Taking v _ and v , we obtain the basis vector [ _ ]. Taking v and v , we get [ 6 ]. Instead of the second vector we could also use [ _ ], obtained by taking the second vector minus twice the first.

SECTION 7.5. Solutions of Linear Systems: Existence, Uniqueness, page 

Purpose. The student should see that the totality of solutions (including questions of existence and uniqueness) can be characterized in terms of the ranks of the coefficient matrix and the augmented matrix.

Main Content, Important Concepts
Augmented matrix
Necessary and sufficient conditions for the existence of solutions
Implications for homogeneous systems
rank A + nullity A = n

Background Material. Rank (Sec. 7.4)

Short Courses. Brief discussion of the first two theorems, illustrated by some simple examples.

Comments on Content
This section should make the student aware of the great importance of rank. It may be good to have students memorize the condition rank A = rank Ã (with Ã the augmented matrix) for the existence of solutions. Students familiar with ODEs may be reminded of the analog of Theorem (see Sec. 2.7). This section may also provide a good opportunity to point to the roles of existence and uniqueness problems throughout mathematics (and to the distinction between the two).

SECTION 7.7. Determinants. Cramer's Rule, page 8

For second- and third-order determinants, see the reference section, Sec. 7.6.

Main Content of This Section
nth-order determinants
General properties of determinants
Rank in terms of determinants (Theorem )
Cramer's rule for solving linear systems by determinants (Theorem )
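Cramer's rule, as listed above, is easy to demonstrate in code. A minimal sketch, with a made-up system; it is worth telling students that the rule is fine for hand-sized systems but that elimination is preferred numerically for larger n.

```python
import numpy as np

def cramer(A, b):
    """Cramer's rule: x_k = det(A_k) / det(A), where A_k is A with
    column k replaced by b. Assumes det(A) is nonzero."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b          # replace column k by the right side b
        x[k] = np.linalg.det(Ak) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # illustrative system (made up)
b = np.array([3.0, 5.0])
assert np.allclose(cramer(A, b), np.linalg.solve(A, b))
```

The comparison with `np.linalg.solve` confirms that both methods give the same solution vector.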
General Comments on Determinants
Our definition of a determinant seems more practical than that in terms of permutations (because it immediately gives those general properties), at the expense of the proof that the definition is unambiguous (see the proof in App. ). General properties are given for general order n, from which they can easily be read off for small n when needed.
The importance of determinants has decreased with time, but determinants will remain important in eigenvalue problems (characteristic determinants), ODEs (Wronskians!), integration and transformations (Jacobians!), and other areas of practical interest.

SOLUTIONS TO PROBLEM SET 7.7, page 6

. 8
. 78
. 7
. . Together with Prob. this illustrates the theorem that for odd n the determinant of an n × n skew-symmetric matrix has the value 0. This is not generally true for even n, as Prob. 6 shows.
. 58
6. 6
8. x , y 
. w , x 5, y , z 6
. 
. Team Project. (a) Use a row operation (subtraction of rows) on D to transform the last column of D into the form [ … ]^T, and then develop D by this column.
(b) For a plane the equation is ax + by + cz = d, so that we get the determinantal equation

| x   y   z   1 |
| x1  y1  z1  1 | = 0.
| x2  y2  z2  1 |
| x3  y3  z3  1 |

The plane is x y z 5.
(c) For a circle the equation is a(x² + y²) + bx + cy = d, so that we get

| x²+y²    x   y   1 |
| x1²+y1²  x1  y1  1 | = 0.
| x2²+y2²  x2  y2  1 |
| x3²+y3²  x3  y3  1 |

The circle is x y x y .
(d) For a sphere the equation is a(x² + y² + z²) + bx + cy + dz = e,
so that we obtain

| x²+y²+z²     x   y   z   1 |
| x1²+y1²+z1²  x1  y1  z1  1 |
| x2²+y2²+z2²  x2  y2  z2  1 | = 0.
| x3²+y3²+z3²  x3  y3  z3  1 |
| x4²+y4²+z4²  x4  y4  z4  1 |

The sphere through the given points is x² + y² + (z − )² = 6.
(e) For a general conic section the equation is ax² + bxy + cy² + dx + ey + ƒ = 0, so that we get

| x²   xy    y²   x   y   1 |
| x1²  x1y1  y1²  x1  y1  1 |
| x2²  x2y2  y2²  x2  y2  1 | = 0.
| x3²  x3y3  y3²  x3  y3  1 |
| x4²  x4y4  y4²  x4  y4  1 |
| x5²  x5y5  y5²  x5  y5  1 |

6. det A_n = ( )^n (n ). True for n = , a -simplex in R being a segment (an interval), because det A = ( )( ). Assume the formula true for n as just given. Consider A_{n+1}. To get a first row with all entries 0 except for the first entry, subtract from Row 1 the expression
(1/(n + 1))(Row 2 + … + Row (n + 1)).
The first component of the new row is n/(n + 1), whereas the other components are all 0. Develop det A_{n+1} by this new first row and notice that you can then apply the above induction hypothesis,
det A_{n+1} = (n/(n + 1)) ( )^n (n ),
as had to be shown.

SECTION 7.8. Inverse of a Matrix. Gauss-Jordan Elimination, page 5

Purpose. To familiarize the student with the concept of the inverse A⁻¹ of a square matrix A, the conditions for its existence, and its computation.

Main Content, Important Concepts
AA⁻¹ = A⁻¹A = I
Nonsingular and singular matrices
Existence of A⁻¹ and rank
Gauss-Jordan elimination
(AC)⁻¹ = C⁻¹A⁻¹
Cancellation laws (Theorem )
det (AB) = det (BA) = det A det B

Short Courses. Theorem without proof, Gauss-Jordan elimination, formulas (*) and (7).

Comments on Content
Although in this chapter we are not concerned with operations count (Chap. ), it would make no sense to first mislead the student by using Gauss-Jordan for solving Ax = b and then later in numerics correct the false impression by explaining why Gauss elimination is better, namely because back substitution needs fewer operations than the diagonalization of a triangular matrix. Thus Gauss-Jordan should be applied only when A⁻¹ is actually wanted.
The unusual properties of matrix multiplication, briefly mentioned in Sec. 7.2, can now be explored systematically by the use of rank and inverse.
Formula (*) is worth memorizing.

SOLUTIONS TO PROBLEM SET 7.8, page 

. [ .6 .8 ; .8 .6 ]. This is a symmetric orthogonal matrix. Orthogonal matrices will be discussed in Sec. 8.3, where they will fit much better into the material.
. The inverse equals the transpose. This is the defining property of orthogonal matrices, to be discussed in Sec. 8.3.
6. [5 5 ]
8. [ ]
. [ 8 5 9 5 ]
. [ _ 5 9 9 _ ]
. Rotation through the angle θ. The inverse represents the rotation through −θ. Replacement of θ by −θ in the matrix gives the inverse.
6. I = (A²)⁻¹A². Multiply this by A⁻¹ from the right on both sides of the equation. This gives A⁻¹ = (A²)⁻¹A. Do the same operation once more to get the formula to be proved, (A⁻¹)² = (A²)⁻¹.
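The point made above, that Gauss-Jordan is the right tool when A⁻¹ itself is wanted, can be backed by a short sketch: row-reduce the augmented matrix [A | I] until the left half becomes I, and the right half is then A⁻¹. The sample matrices are made up and assumed nonsingular.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination on [A | I].
    A teaching sketch; assumes A is nonsingular."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # partial pivoting
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                       # normalize the pivot row
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]        # clear column k elsewhere
    return M[:, n:]

A = np.array([[4.0, 7.0], [2.0, 6.0]])
Ainv = gauss_jordan_inverse(A)
assert np.allclose(A @ Ainv, np.eye(2))
# Property from this section: (AC)^-1 = C^-1 A^-1 (note the order).
C = np.array([[1.0, 2.0], [3.0, 5.0]])
assert np.allclose(np.linalg.inv(A @ C),
                   gauss_jordan_inverse(C) @ gauss_jordan_inverse(A))
```

For solving a single system Ax = b, plain Gauss elimination with back substitution remains cheaper, as the comments above stress.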
8. I = I^T = (A⁻¹A)^T = A^T(A⁻¹)^T. Now multiply the first and the last expression by (A^T)⁻¹ from the left, obtaining (A^T)⁻¹ = (A⁻¹)^T.
. Multiplication by A from the right interchanges Row and Row of A, and the inverse of this interchange is the interchange that gives the original matrix back. Hence the inverse of the given matrix should equal the matrix itself, as is the case.
. For such a matrix (see the solution to Prob. ) the determinant has either the value 1 or the value −1. In the present case it equals . The values of the cofactors (determinants of submatrices times +1 or −1) are obtained by straightforward calculation.

SECTION 7.9. Vector Spaces, Inner Product Spaces, Linear Transformations, Optional, page 

Purpose. In this optional section we extend our earlier discussion of the vector spaces R^n and C^n, define inner product spaces, and explain the role of matrices in linear transformations of R^n into R^m.

Main Content, Important Concepts
Real vector space, complex vector space
Linear independence, dimension, basis
Inner product space
Linear transformation of R^n into R^m

Background Material. Vector spaces R^n (Sec. 7.4)

Comments on Content
The student is supposed to see and comprehend how concrete models (R^n and C^n, the inner product for vectors) lead to abstract concepts defined by axioms resulting from basic properties of those models. Because of the level and general objective of this chapter, we have to restrict the discussion to the illustration and explanation of the abstract concepts in terms of some simple typical examples. Most essential from the viewpoint of matrices is the discussion of linear transformations, which in a more theoretically oriented course at a higher level would occupy a more prominent position.
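The two central abstractions of this section, linear transformations and inner product spaces, can both be illustrated with the concrete model R^n. A minimal sketch with made-up data:

```python
import numpy as np

# A matrix A defines a linear transformation x -> Ax of R^n into R^m;
# linearity means A(cx + ky) = c(Ax) + k(Ay).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])   # maps R^3 into R^2
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])
c, k = 2.0, -3.0
assert np.allclose(A @ (c * x + k * y), c * (A @ x) + k * (A @ y))

# The dot product makes R^n an inner product space. The axioms imply
# the Cauchy-Schwarz inequality and the triangle inequality:
assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y)
assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y)
```

Seeing the axioms confirmed on a concrete model helps motivate the abstract definitions, which is exactly the pedagogical route the section takes.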
Comment on Footnote
Hilbert's work was fundamental to various areas in mathematics; roughly speaking, he worked on number theory 1893-1898, foundations of geometry 1898-1902, integral equations 1902-1912, physics 1910-1922, and logic and foundations of mathematics 1922-1930. Closest to our interests here is the development in integral equations, as follows.
In 1870 Carl Neumann (Sec. 5.6) had the idea of solving the Dirichlet problem for the Laplace equation by converting it to an integral equation. This created general interest in integral equations. In 1896 Vito Volterra (1860-1940) developed a general theory of these equations, followed by Ivar Fredholm (1866-1927) in 1900-1903 (whose papers caused great excitement), and Hilbert since 1904. This gave the impetus to the development of inner product and Hilbert spaces and of operators defined on them. These spaces and operators and their spectral theory have found basic applications in quantum mechanics since 1927.
Hilbert's great interest in mathematical physics is documented by Ref. [GR], a classic full of ideas that are of importance to the mathematical work of the engineer. For more details, see G. Birkhoff and E. Kreyszig, The establishment of functional analysis, Historia Mathematica 11 (1984), pp. 258-321.
SOLUTIONS TO PROBLEM SET 7.9, page 9

. Yes, dimension . A basis vector is obtained by solving the linear system consisting of the two given conditions. The solution is [v v v ], v arbitrary. v gives the basis vector [ ]^T.
. Yes, dimension 6, basis
[ ], [ ], [ ], [ ], [ ], [ ].
6. No, because of the second condition
8. No, because det (A + B) ≠ det A + det B in general
. Yes, dimension 2, basis cos x, sin x
. Yes, dimension 4, basis [ ], [ ], [ ], [ ]
9. If another such representation with coefficients k_j would also hold, subtraction would give Σ(c_j − k_j)a(j) = 0; hence c_j − k_j = 0 because of the linear independence of the basis vectors. This proves the uniqueness.
6. x .5y .5y , x .5y .5y 
8. x y x x x y y y 5y .5y .5y x .5y .5y x .5y .5y
. _ 85
8. 6
8. v v , v v ; hence [v v v ]^T with arbitrary v and v . These vectors lie in a plane through the origin whose normal vector is the given vector.
. a = [ 6]^T, b = [6 ]^T, a + b = [ 6]^T. For the norms we thus obtain
‖a + b‖ = 6.55, ‖a‖ + ‖b‖ = √56 + √6 = 5.6.
SOLUTIONS TO CHAP. 7 REVIEW QUESTIONS AND PROBLEMS, page 

. x y, y arbitrary, z 
. No solution
6. x , y , z 5
8. x z , y _, z arbitrary 5 8 5 96 96
6. AB = [ ], BA = [ ], (AB)^T = [ ] 6 8 8 8 9 8 6 9 5
6. A + B = [ 8 6 ; 8 ; 5 5 ] 6 89 6 5 6 9
. AA^T = A^T A = [ 8 6 ; 6 89 ; 9 ]
6. Aa = [ ], a^T A = [9 9], a^T Aa = 5
8. [ 8. .6 6.8]
. , . Hence one unknown remains arbitrary.
. , . Hence the system has no solution, by Theorem in Sec. 7.5.
6. , . Hence one unknown remains arbitrary.
8. 9 6.8
8. [ 6 8 ] 9 7
8. Singular, rank 
. diag ( _, , _ ) 5
5. The equations obtained by Kirchhoff's laws are
I  I  I   (left node)
5I  8I  8  (upper loop)
I  8I     (lower loop).
This gives the solution I A, I 5 A, I 5 A.
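Circuit problems like the last one above reduce to a small linear system, one node equation and two loop equations, solvable by any method of this chapter. A sketch with hypothetical stand-in values (the resistances, EMFs, and sign conventions below are assumed for the demonstration, not the textbook's data):

```python
import numpy as np

# Hypothetical circuit data (assumed, not from the problem set):
R1, R2, R3 = 2.0, 8.0, 4.0   # resistances in ohms
E1, E2 = 16.0, 32.0          # EMFs in volts

# Node:   I1 - I2 + I3 = 0        (Kirchhoff's current law)
# Loop 1: R1*I1 + R2*I2 = E1      (Kirchhoff's voltage law)
# Loop 2: R2*I2 + R3*I3 = E2      (signs per the chosen arrows)
A = np.array([[1.0, -1.0, 1.0],
              [R1,   R2,  0.0],
              [0.0,  R2,  R3 ]])
b = np.array([0.0, E1, E2])

I = np.linalg.solve(A, b)
assert np.allclose(A @ I, b)   # the currents satisfy all three equations
```

Writing the laws directly as a coefficient matrix and right-side vector is a good closing illustration of the chapter: the modeling step produces Ax = b, and everything after that is the linear algebra developed above.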