Topics in Linear Algebra and Its Applications


James Emery
Edition 2/15/2018

Contents

1 Introduction
2 Linear Transformations and Linear Operators
3 Multilinear Functionals
4 Determinants
5 Properties of Determinants
6 Expansion by Minors, Existence of the Determinant
7 Permutations of the Integers
8 The Sign of a Permutation
9 A Formula for the Determinant Showing that its Defining Alternating Multilinear Functional is Unique
10 The Determinant of a Transpose
11 Determinant of a Product
12 The Determinant of an Inverse
13 Solving Linear Equations
14 Kernel and Range
15 Quotient Spaces
16 Direct Sums
17 The Inner Product
18 Inner Product Spaces
19 Normed Linear Spaces
20 Orthonormal Vectors
21 Gram-Schmidt Orthogonalization
22 The Pythagorean Theorem for Inner Product Spaces
23 The Cauchy-Schwarz Inequality
24 The Infinite Dimensional Space l2
25 The Completeness of l2
26 Bessel's Inequality
27 The Triangle Inequality: an Inner Product Space is a Normed Linear Space, and a Metric Space
28 The Parallelogram Law
29 Quadratic Forms
30 Canonical Forms
31 Upper Triangular Form
32 Isometries, Rotations, Orthogonal Matrices
33 Rotation Matrices
34 Exponentials of Matrices and Operators in a Banach Algebra
35 Eigenvalues
36 The Characteristic Polynomial and the Cayley-Hamilton Theorem
37 Unitary Transformations
38 Transpose, Trace, Self-Adjoint Operators
39 The Spectrum
40 The Spectral Theorem
41 Tensors
42 Application of Linear Algebra to Vibration Theory, Normal Coordinates
   A Simple Spring Example
   Decoupling Equations
   Example: Coupled Oscillators
   The General Problem of Linear Vibration
43 Polynomial Roots, The Frobenius Companion Matrix
44 Projection Operators
45 Functional Analysis
46 Hamel Basis
47 Numerical Linear Algebra
48 Quantum Mechanics
49 The Schrödinger Wave Equation
50 The Postulates of Quantum Mechanics
51 The Bra and Ket Notation of Dirac
52 Example: The Hydrogen Atom
53 The Relation Between Linear Algebra, Functional Analysis, and Abstract Algebra
54 Solutions to Problems
Appendix A: Rotation Matrices
   Rotation Matrix Defined by Axis and Angle
   Axis and Angle of a Proper Rotation Matrix
   Obtaining the Rotation As The Exponential of an Element of a Banach Algebra
   Properties of The Exponential of a Matrix
   A Test Program rotations.ftn with Subroutines orthgm and axisang
   Running Some Rotation Matrix Examples
Appendix B: Groups and Permutations
   Introduction
   Permutations and Permutation Groups
   The Factorial Expansion of a Number
   Generating Permutations
   Cosets and Normal Subgroups
   Symbols for Common Groups
   Abelian Groups
   Physics and Group Theory
   The Isometries of the Cube
   Symmetry Groups
   A Program to Compute the Isometries of the Cube by Brute Force
   Free Groups
   Actions and Orbits
   The relation between the Lorentz group and SL(2, C)
   Symmetry
   Conjugates
   Noether's Theorem
Bibliography
Index

1 Introduction

Linear algebra is the study of finite dimensional vector spaces and linear transformations. A vector space is a quadruple (V, F, +, ·), where V is a set of vectors, F is a field of scalars, + is the operation of vector addition, and · is the operation of scalar multiplication. We usually do not write the multiplication operator; that is, we write α · v as αv. Let α, β ∈ F and u, v, w ∈ V. The following axioms are satisfied:

1. u + v = v + u.
2. u + (v + w) = (u + v) + w.
3. There is a zero element 0 ∈ V such that u + 0 = u.
4. For each u ∈ V, there is an inverse element -u such that u + (-u) = 0.

5. α(u + v) = αu + αv.
6. (α + β)u = αu + βu.
7. (αβ)u = α(βu).
8. 1u = u.

Definition. A finite set of vectors v_1, v_2, ..., v_n is linearly independent if

α_1 v_1 + α_2 v_2 + ... + α_n v_n = 0

implies that each α_i is zero. Otherwise the set is called linearly dependent.

Definition. A subset S of V is a subspace of V if the sum of any two elements in S is in S and the product of any scalar in F with any element in S is in S. That is, S is closed under addition and scalar multiplication.

Definition. The span of a set of vectors in a vector space V is the intersection of all subspaces of V containing the set, and so is the smallest subspace containing the set. The subspace spanned by the vectors v_1, v_2, ..., v_n is

S = {α_1 v_1 + α_2 v_2 + ... + α_n v_n : α_i ∈ F}.

Theorem. The nonzero vectors v_1, v_2, ..., v_n are linearly dependent if and only if some one of them is a linear combination of the preceding ones.

Proof. Suppose v_k can be written as a linear combination of v_1, ..., v_{k-1}. Then we have a linear combination of v_1, ..., v_k set equal to zero with α_k = -1, so that these vectors are linearly dependent. Conversely, suppose v_1, ..., v_n are dependent. Then we can find a set of α_i so that

α_1 v_1 + α_2 v_2 + ... + α_n v_n = 0,

and at least one of the α_i is not zero. Let k be the largest index for which α_k is not zero; then, dividing by α_k, we find that v_k is a linear combination of the preceding vectors.

Corollary. Any finite set of vectors contains a linearly independent subset that spans the same space.

Theorem. Let the vectors v_1, v_2, ..., v_n span V. Suppose the vectors u_1, u_2, ..., u_k are linearly independent. Then n ≥ k.

Proof. The set u_1, v_1, v_2, ..., v_n is linearly dependent and spans V, so some v_j is dependent on its predecessors. Then the set u_1, v_1, v_2, ..., v_{j-1}, v_{j+1}, ..., v_n spans V. We may continue this, adding a u_i while removing a v_j, and still have a set that spans V and is dependent. This can be continued until the u_i are exhausted; otherwise the v_j would be exhausted first, and some subset of u_1, u_2, ..., u_k would then be dependent, which is not possible. Therefore there are at least as many v_j as u_i, which forces n ≥ k.

Definition. A basis of a vector space V is a set of linearly independent vectors that spans V. A vector space is finite dimensional if it has a finite basis.

Theorem. Suppose a vector space has a finite basis A = {v_1, v_2, ..., v_n}. Then any other basis also has n elements.

Proof. Let B = {u_1, u_2, u_3, ...}, which is possibly infinite, be a second basis of V. By the previous theorem, any k linearly independent vectors u_1, u_2, ..., u_k of B satisfy k ≤ n. It follows that B is a finite set B = {u_1, u_2, ..., u_m} for some m, and that m ≤ n. Reversing the roles of A and B, we apply the previous theorem again to get n ≤ m, which proves the theorem. We conclude that the dimension of a finite dimensional vector space can be well

defined as the number of elements in any basis.

Any vector v ∈ V can be represented as a linear combination of the basis elements:

v = α_1 v_1 + α_2 v_2 + α_3 v_3 + ... + α_n v_n.

This representation is unique, because if we subtract two different representations we would get a representation of the zero vector as a linear combination of the basis vectors with at least one nonzero coefficient, which contradicts the fact that the basis vectors are linearly independent. The scalar coefficients are called the coordinates of v, and form an element in the cartesian n-product of the scalar field F. These n-tuples of scalars themselves form an n dimensional vector space, which is isomorphic to the original vector space.

Let U and V be two vector spaces. A function T : U → V is called a linear transformation if

1. For u_1, u_2 ∈ U, T(u_1 + u_2) = T(u_1) + T(u_2).
2. For α ∈ F and u ∈ U, T(αu) = αT(u).

A linear transformation from U to itself is called a linear operator. Associated with every linear transformation is a matrix.

Definition. A linearly independent set that spans a vector space V is called a basis.

Definition. The number of vectors in a basis is called the dimension of V.

Let {u_1, u_2, u_3, ..., u_n} be a basis of U and let {v_1, v_2, ..., v_m} be a basis of V. For each u_j, T(u_j) is in V, so that it may be written as a linear combination of the basis vectors; we have

T(u_j) = Σ_{i=1}^{m} a_{ij} v_i.

Now let u ∈ U and let its coordinates be x_1, ..., x_n, so that u is represented by the coordinate vector (x_1, x_2, ..., x_n)^T. We have

T(u) = T(Σ_{j=1}^{n} x_j u_j) = Σ_{j=1}^{n} x_j T(u_j)
     = Σ_{j=1}^{n} x_j Σ_{i=1}^{m} a_{ij} v_i
     = Σ_{i=1}^{m} (Σ_{j=1}^{n} a_{ij} x_j) v_i
     = Σ_{i=1}^{m} y_i v_i,

where y_1, y_2, ..., y_m are the components of the vector T(u) in the vector space V. We have shown that the coordinate vector x of u is mapped to the coordinate vector y of T(u) by matrix multiplication:

| y_1 |   | a_{11} ... a_{1n} | | x_1 |
| y_2 | = | a_{21} ... a_{2n} | | x_2 |
| ... |   | ...               | | ... |
| y_m |   | a_{m1} ... a_{mn} | | x_n |

Now suppose we have two linear transformations

T : U → V,  S : V → W.

The composite transformation is

ST : U → W,

defined by ST(u) = S(T(u)).

Theorem. If A is the matrix of the linear transformation S, and B is the matrix of the linear transformation T, then the matrix multiplication product AB is the matrix of ST.
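As a quick numerical check of this theorem, here is a minimal sketch in Python (the matrices and the vector are arbitrary illustrative choices, not from the text):

```python
# Hypothetical example: B represents T : U -> V, A represents S : V -> W,
# both stored as lists of rows; x is a coordinate vector.
def mat_vec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

B = [[1, 2], [0, 1], [3, 1]]   # T : R^2 -> R^3
A = [[2, 0, 1], [1, 1, 0]]     # S : R^3 -> R^2
x = [1, 2]                     # coordinates of u

# Applying T and then S to the coordinates agrees with applying AB once.
assert mat_vec(A, mat_vec(B, x)) == mat_vec(mat_mul(A, B), x)
print(mat_vec(mat_mul(A, B), x))   # [15, 7]
```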

2 Linear Transformations and Linear Operators

Examples:

Finite dimensional linear transformation, y = Ax:

| y_1 |   | a_{11} a_{12} ... a_{1n} | | x_1 |
| y_2 | = | a_{21} a_{22} ... a_{2n} | | x_2 |
| ... |   | ...                      | | ... |
| y_m |   | a_{m1} a_{m2} ... a_{mn} | | x_n |

Integral linear transformation f → g, where

g(y) = ∫ k(x, y) f(x) dx.

Differential operator f → g, where

g(x) = (5 d²/dx² + 2 d/dx - 3) f(x).

3 Multilinear Functionals

Let V be a vector space. A multilinear functional is a function mapping tuples of vectors of V into a field of scalars, such as the real or complex numbers. Given a function f(v_1, v_2, ..., v_n), where each v_i ∈ V, the function is multilinear if it is a linear function in each argument. That is, it is linear in each v_i:

f(v_1, v_2, ..., v_i + v_i′, ..., v_n) = f(v_1, v_2, ..., v_i, ..., v_n) + f(v_1, v_2, ..., v_i′, ..., v_n),

and for a scalar a,

f(v_1, v_2, ..., a v_i, ..., v_n) = a f(v_1, v_2, ..., v_i, ..., v_n).

A multilinear functional is alternating if interchanging vectors v_i and v_j changes the sign of the functional.

Theorem. If a multilinear functional is alternating, then if two vector arguments are equal, the functional is zero.

Proof. If two arguments are interchanged, then the sign of the functional is reversed. But if these two interchanged arguments are the same, then the functional remains the same. So the functional must have zero value.

Theorem. If f(v_1, ..., v_i, ..., v_j, ..., v_n) is an alternating multilinear functional, and a multiple a v_i of argument v_i is added to argument v_j, where a is a scalar, then the functional is unchanged.

Proof.

f(v_1, ..., v_i, ..., v_j + a v_i, ..., v_n)
= f(v_1, ..., v_i, ..., v_j, ..., v_n) + a f(v_1, ..., v_i, ..., v_i, ..., v_n)
= f(v_1, ..., v_i, ..., v_j, ..., v_n) + 0
= f(v_1, ..., v_i, ..., v_j, ..., v_n).

Theorem. For a multilinear functional f, the following two properties are equivalent:

1. f is alternating.
2. f is zero whenever any two adjacent arguments are equal.

Proof. That (1) implies (2) was established in the previous theorem. It remains to prove that (2) implies (1). Suppose the multilinear functional is zero whenever any two adjacent vector arguments are equal. Then

0 = f(v_1, ..., v_j + v_{j+1}, v_j + v_{j+1}, ..., v_n)
= f(v_1, ..., v_j, v_j, ..., v_n) + f(v_1, ..., v_j, v_{j+1}, ..., v_n)
+ f(v_1, ..., v_{j+1}, v_j, ..., v_n) + f(v_1, ..., v_{j+1}, v_{j+1}, ..., v_n)
= 0 + f(v_1, ..., v_j, v_{j+1}, ..., v_n) + f(v_1, ..., v_{j+1}, v_j, ..., v_n) + 0.

So

f(v_1, ..., v_j, v_{j+1}, ..., v_n) = -f(v_1, ..., v_{j+1}, v_j, ..., v_n).

So an adjacent interchange changes the sign of f.

Now consider the case of 6 arguments, for example, where we want to switch v_2 and v_5. We start by switching v_2 with v_3, moving v_2 to the right, and continue moving v_2 in that direction:

f(v_1, v_2, v_3, v_4, v_5, v_6)
f(v_1, v_3, v_2, v_4, v_5, v_6)
f(v_1, v_3, v_4, v_2, v_5, v_6)
f(v_1, v_3, v_4, v_5, v_2, v_6)

Now we have v_2 in its proper new place after 3 interchanges. Next we work on moving v_5 to the left. Continuing, we have

f(v_1, v_3, v_5, v_4, v_2, v_6)
f(v_1, v_5, v_3, v_4, v_2, v_6)

and v_5 is in the desired place after 2 more interchanges. Only two interchanges were required here because the last switch of v_2 also moved v_5 one place in the proper leftward direction. In general, if the two arguments are d places apart, moving the first takes d adjacent interchanges and moving the second back takes d - 1, a total of 2d - 1; so the number of adjacent pair interchanges required is always odd. So switching any pair changes the sign of f, and we have proved that f is an alternating multilinear functional.

4 Determinants

A determinant is a multilinear functional defined on a square matrix. A linear functional is a mapping from a vector space to its field of scalars. A multilinear functional is a function defined on a cartesian product of the vector space. The column vectors of a matrix may be considered to be vectors of the vector space V, and the set of n column vectors constitutes a point in the cartesian product. The functional is linear in the sense that

f(v_1, v_2, ..., v_k + v_k′, ..., v_n) = f(v_1, v_2, ..., v_k, ..., v_n) + f(v_1, v_2, ..., v_k′, ..., v_n),

and

f(v_1, v_2, ..., α v_k, ..., v_n) = α f(v_1, v_2, ..., v_k, ..., v_n).

A multilinear functional is alternating if interchanging a pair of variables changes the sign of the function.

Definition. The determinant D(A) of an n-dimensional square matrix A is the unique alternating multilinear functional defined on the n column vectors of A which takes the value 1 on the identity matrix. Later we will prove that there is such a multilinear functional and that it is indeed unique. A reference for this is [15] Serge Lang, Linear Algebra, Addison-Wesley, 1968, page 232.

Column Operations.

1. If two column vectors of a matrix are identical, then the determinant is zero. This follows because interchanging the columns changes the sign of the determinant, but the new matrix has not changed, so the value of the determinant is the same. The determinant must be zero.

2. Adding a multiple of one column to a second does not change the value of the determinant. This is clear from

D(v_1, ..., v_i, ..., v_j + α v_i, ..., v_n)
= D(v_1, ..., v_i, ..., v_j, ..., v_n) + α D(v_1, ..., v_i, ..., v_i, ..., v_n)
= D(v_1, ..., v_i, ..., v_j, ..., v_n) + 0.

Row Operations. The corresponding row operations are also valid. This follows because, as we will show, the determinant of the transpose of a matrix is equal to the determinant of the matrix.

1. If two rows of a matrix are identical, then the determinant is zero.

2. Adding a multiple of one row to a second does not change the value of the determinant.

Example. For a certain 2 by 2 matrix A, subtracting the first column from the second, and then three times the second column from the first, yields a diagonal matrix whose diagonal elements have product 2, so that D(A) = 2D(I) = 2, where I is the identity matrix. Once we have a matrix in diagonal form, we see from its definition as a multilinear functional that the determinant is equal to the product of each multiplier of each column times the determinant of the identity. That is, the value equals the product of the diagonal elements.

Cramer's Rule for Solving a Linear Equation. Suppose we have a system of n equations in n unknowns x_1, x_2, ..., x_n, written in the form

x_1 v_1 + x_2 v_2 + ... + x_n v_n = v.

We have

D(v, v_2, v_3, ..., v_n) = D(x_1 v_1 + x_2 v_2 + ... + x_n v_n, v_2, ..., v_n) = x_1 D(v_1, v_2, v_3, ..., v_n),

so that the unknown x_1 is given by

x_1 = D(v, v_2, v_3, ..., v_n) / D(v_1, v_2, v_3, ..., v_n).

There is clearly a similar expression for each of x_2, ..., x_n.

To compute a determinant we can perform permutations on the columns and add scalar multiples of columns to other columns, to put the matrix into diagonal form. Once in diagonal form (or triangular form), the determinant equals the product of the diagonal elements.

There is an alternate definition of the determinant involving permutations. Let A be an n by n matrix. Consider the sum

Σ_σ ǫ(σ) a_{σ(1),1} a_{σ(2),2} ... a_{σ(n),n},

where σ runs over the permutations of the integers 1, 2, 3, ..., n, and ǫ(σ) is the sign of the permutation, which we shall define precisely below. Therefore it is the
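The column operation property and Cramer's rule above can be checked numerically. Here is a small sketch, with an arbitrarily chosen 2 by 2 system (my own example, not the one in the text):

```python
# Determinant of a 2 x 2 matrix stored as a list of rows.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[2, 2], [3, 4]]     # columns v_1 = (2, 3), v_2 = (2, 4)
v = [6, 11]              # right-hand side of A x = v

# Adding a multiple of column 1 to column 2 leaves the determinant unchanged.
B = [[A[i][0], A[i][1] + 5 * A[i][0]] for i in range(2)]
assert det2(B) == det2(A)

# Cramer's rule: x_k = D(... v in place of column k ...) / D(A).
d = det2(A)
x1 = det2([[v[0], A[0][1]], [v[1], A[1][1]]]) / d
x2 = det2([[A[0][0], v[0]], [A[1][0], v[1]]]) / d
print(x1, x2)   # 1.0 2.0
```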

unique such functional, and so is equal to the determinant. See the sections Permutations of the Integers, and A Formula for the Determinant Showing that its Defining Alternating Multilinear Functional is Unique. Also see the Appendix Groups and Permutations. The sign of the identity permutation is one; interchanging a pair of elements, a transposition, changes the sign of the permutation. Notice that this sum is a multilinear functional of the columns, is alternating, and further equals one on the identity matrix.

An alternating multilinear functional changes sign when a pair of arguments is interchanged. Thus, for example, if f is a function of three variables and is alternating, then

f(b, a, c) = -f(a, b, c)

and

f(c, b, a) = -f(a, b, c),

and so on. If f is defined on n arguments, then it is multilinear if for each i,

f(x_1, x_2, ..., α x_i, ..., x_n) = α f(x_1, ..., x_i, ..., x_n)

and

f(x_1, x_2, ..., x_i + x_i′, ..., x_n) = f(x_1, ..., x_i, ..., x_n) + f(x_1, ..., x_i′, ..., x_n).

5 Properties of Determinants

Proofs of the following properties will be presented below.

1. Determinant of a transpose: D(A^T) = D(A).

2. Determinant of a product: D(AB) = D(A)D(B).

3. Expansion by minors about row i. Let A_{ij} be the matrix obtained from A by deleting row i and column j. Then

D(A) = Σ_{j=1}^{n} (-1)^{i+j} a_{i,j} D(A_{ij}).

4. Alternate definition of the determinant involving permutations. Let A be an n by n matrix. Then

D(A) = Σ_σ ǫ(σ) a_{σ(1),1} a_{σ(2),2} ... a_{σ(n),n},

where σ is a permutation of the integers 1, 2, 3, ..., n and ǫ(σ) is the sign of the permutation; we sum over all permutations. Below we shall define the sign precisely and prove the formula.

6 Expansion by Minors, Existence of the Determinant

We shall establish a formula for the determinant called expansion by minors. The determinant was described as a function on a square matrix satisfying three properties: (1) multilinear, (2) alternating, and (3) equal to one on a unit matrix. But we have not yet established that such a thing exists. We do that in this section, using an induction argument, and developing the formula for expansion by minors. The alternating property is equivalent to the property that equal adjacent arguments, or matrix columns, force the function to be zero.

Given an n by n matrix

A = | a_{1,1} a_{1,2} ... a_{1,n} |
    | a_{2,1} a_{2,2} ... a_{2,n} |
    | ...                        |
    | a_{n,1} a_{n,2} ... a_{n,n} |,

the formula on row k is

D(A) = Σ_{j=1}^{n} (-1)^{k+j} a_{k,j} D(A_{k,j}),

where A_{k,j} is the minor at (k, j), the (n-1) by (n-1) matrix obtained by removing the row and the column that contain the element a_{k,j}.

To be a little more clear, let n = 4, so that

A = | a_{1,1} a_{1,2} a_{1,3} a_{1,4} |
    | a_{2,1} a_{2,2} a_{2,3} a_{2,4} |
    | a_{3,1} a_{3,2} a_{3,3} a_{3,4} |
    | a_{4,1} a_{4,2} a_{4,3} a_{4,4} |.

The minor A_{i,j} is the 3 by 3 matrix formed by removing the ith row and the jth column of A. If i = 3 and j = 1,

A_{3,1} = | a_{1,2} a_{1,3} a_{1,4} |
          | a_{2,2} a_{2,3} a_{2,4} |
          | a_{4,2} a_{4,3} a_{4,4} |.

And if i = 3 and j = 2, then

A_{3,2} = | a_{1,1} a_{1,3} a_{1,4} |
          | a_{2,1} a_{2,3} a_{2,4} |
          | a_{4,1} a_{4,3} a_{4,4} |.

The expansion by minors formula on row 3 in this case is

d(A) = (-1)^{3+1} a_{3,1} det(A_{3,1}) + (-1)^{3+2} a_{3,2} det(A_{3,2}) + (-1)^{3+3} a_{3,3} det(A_{3,3}) + (-1)^{3+4} a_{3,4} det(A_{3,4})
     = a_{3,1} det(A_{3,1}) - a_{3,2} det(A_{3,2}) + a_{3,3} det(A_{3,3}) - a_{3,4} det(A_{3,4}).

We shall prove this in general by showing that such a formula satisfies the multilinear property, the alternating property, and the property that a unit matrix has determinant 1. Let

d_1(A) = a_{3,1} det(A_{3,1}),
d_2(A) = -a_{3,2} det(A_{3,2}),
d_3(A) = a_{3,3} det(A_{3,3}),
d_4(A) = -a_{3,4} det(A_{3,4}).

Then

d(A) = d_1(A) + d_2(A) + d_3(A) + d_4(A).

Let A_j be the jth column vector of the matrix A. We show that d_1(A) = d_1(A_1, A_2, A_3, A_4) is linear in the first column vector. So let C be the column vector

C = (c_1, c_2, c_3, c_4)^T.

Then, since the minor A_{3,1} does not involve the first column and so is unchanged,

d_1(A_1 + C, A_2, A_3, A_4) = (a_{3,1} + c_3) det(A_{3,1})
= a_{3,1} det(A_{3,1}) + c_3 det(A_{3,1})
= d_1(A_1, A_2, A_3, A_4) + d_1(C, A_2, A_3, A_4).

Now let t be a scalar. Then

d_1(tA_1, A_2, A_3, A_4) = t a_{3,1} det(A_{3,1}) = t d_1(A_1, A_2, A_3, A_4).

So d_1 is linear in the first column variable. Now consider linearity in the second column variable. This will involve a change to the minor A_{3,1}, which by the induction assumption is a multilinear functional of its columns. We have

d_1(A_1, A_2 + C, A_3, A_4) = a_{3,1} det | a_{1,2}+c_1  a_{1,3}  a_{1,4} |
                                          | a_{2,2}+c_2  a_{2,3}  a_{2,4} |
                                          | a_{4,2}+c_4  a_{4,3}  a_{4,4} |

= a_{3,1} det | a_{1,2}  a_{1,3}  a_{1,4} |  +  a_{3,1} det | c_1  a_{1,3}  a_{1,4} |
              | a_{2,2}  a_{2,3}  a_{2,4} |                 | c_2  a_{2,3}  a_{2,4} |
              | a_{4,2}  a_{4,3}  a_{4,4} |                 | c_4  a_{4,3}  a_{4,4} |

= d_1(A_1, A_2, A_3, A_4) + d_1(A_1, C, A_3, A_4).

Next we have

d_1(A_1, tA_2, A_3, A_4) = a_{3,1} det | t a_{1,2}  a_{1,3}  a_{1,4} |
                                       | t a_{2,2}  a_{2,3}  a_{2,4} |
                                       | t a_{4,2}  a_{4,3}  a_{4,4} |

= t a_{3,1} det | a_{1,2}  a_{1,3}  a_{1,4} |
                | a_{2,2}  a_{2,3}  a_{2,4} |
                | a_{4,2}  a_{4,3}  a_{4,4} |

= t d_1(A_1, A_2, A_3, A_4).

So d_1 is linear in the second variable. Using similar arguments, d_1 is linear in the third and fourth variables. So d_1 is multilinear, and by a similar argument d_2, d_3, and d_4 are multilinear; thus d, being the sum, is multilinear. Now this proof of multilinearity does not depend on n = 4, so we have proved multilinearity in the general case of an n by n matrix.

Now we shall prove that if two adjacent columns of the matrix A are equal, then d(A) = 0. So let columns j and j+1 be equal.

Then any minor A_{i,k}, where k is neither j nor j+1, has zero determinant, because such a minor matrix retains both column j and column j+1 and thus has two identical columns. Thus the formula for expansion by minors along row i reduces to the two terms

(-1)^{i+j} a_{i,j} det(A_{i,j}) + (-1)^{i+j+1} a_{i,j+1} det(A_{i,j+1}).

Since columns j and j+1 are equal, a_{i,j} = a_{i,j+1} and A_{i,j} = A_{i,j+1}, so these two terms are equal in magnitude and differ in sign, and add to zero. So d(A) = 0 when A has two adjacent equal columns. In the section on multilinear functionals it was shown that the consequence of this is that this multilinear functional d is alternating.

Lastly, we show that this expansion by minors on a row of A equals 1 if A is a unit matrix. On any row i, the formula for expansion by minors has a nonzero term only for the term a_{i,j} det(A_{i,j}) with j = i, that is, from the element on the main diagonal of A. Then a_{i,i} = 1, the sign (-1)^{i+i} is 1, and by induction det(A_{i,i}) is 1. So we have proved:

Theorem. Given an n by n matrix A, the formula D(A) for the expansion by minors along any row i of A satisfies the above three requirements for being a determinant.

Below we shall show that the determinant is unique, by giving a formula involving permutations which depends only on the three required properties of being (1) multilinear, (2) alternating, and (3) having value one on a unit matrix. We can then say that the formula of expansion by minors about a row, namely D(A), is the unique determinant: Det(A) = D(A).
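The expansion by minors just established translates directly into a recursive computation. A minimal sketch (my own illustration, expanding along the first row):

```python
# Determinant by expansion by minors along the first row:
# D(A) = sum_j (-1)^(1+j) a_{1,j} D(A_{1,j}), for A stored as a list of rows.
def det_by_minors(a):
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # The minor A_{1,j}: remove row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_by_minors(minor)
    return total

print(det_by_minors([[2, 2], [3, 4]]))                    # 2
print(det_by_minors([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # 1
```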

7 Permutations of the Integers

Given the symbols abc, a rearrangement of these symbols is called a permutation. There are six permutations of these three symbols, including the original arrangement: abc, acb, bac, bca, cab, and cba. There are n! permutations of n symbols. See a good college algebra book for an elementary treatment, and for more see Appendix B and the document Group Theory by James Emery (computer files groups.tex and groups.pdf, located at stem2.org).

Definition. A permutation of the set of n integers Z_n = {1, 2, 3, 4, ..., n} is a 1 to 1, onto mapping σ from Z_n to Z_n, k → σ(k).

Definition. A transposition is a permutation where only a single pair of integers is interchanged, as in i → j and j → i, and all other integers map to themselves.

Consider the permutation of the integers {1, 2, 3} given by σ: 1 → 3, 2 → 1, 3 → 2. In writing such maps we can abbreviate by writing just the images: 3, 1, 2. Suppose we want to show that this can be written as a pair of transpositions. We add a transposition τ that reverses the last mapping of σ, σ(3) = 2. This transposition is τ: 1 → 1, 2 → 3, 3 → 2. Then the composition of τ with σ is

τσ: 1 → 3 → 2, 2 → 1 → 1, 3 → 2 → 3,

that is, τσ: 1 → 2, 2 → 1, 3 → 3. So τσ maps 3 to 3, and so is essentially a permutation of the integers {1, 2}, and is a transposition. Now τ^{-1} is also a transposition, so σ = τ^{-1}(τσ) is the product of two transpositions.

The method indicates how an induction argument can be used to show that any permutation of Z_n = {1, 2, 3, ..., n} can be written as a composition of transpositions. If we know that every permutation of Z_{n-1} can be written as a composition of transpositions, then we can construct a transposition τ acting on the last integer n of Z_n so that τσ, where σ is a permutation of Z_n, keeps n

fixed, and so is essentially a permutation of Z_{n-1}, and thus by induction can be replaced by a composition of transpositions. So σ = τ^{-1}τσ = τ^{-1}υ, where υ is a composition of transpositions.

Theorem. Any permutation σ can be written as a composition of transpositions.

Proof. The identity permutation σ of Z_2, given by σ(1) = 1 and σ(2) = 2, can be written using the transposition α, defined by α(1) = 2 and α(2) = 1, as the product σ = αα. The only other permutation of Z_2 is α itself. So all permutations of a set of 2 integers are products of transpositions.

If σ is a permutation of Z_j and k > j, then we can extend σ to a permutation of Z_k by letting all integers greater than j map to themselves. We call this extension of σ, σ^e.

Note. The identity permutation σ = 1 2 3 ... n of Z_n is given by σ = α^e α^e, where α^e is the extension, to the integer set Z_n, of the transposition α = 21 defined above.

Let σ be a permutation of Z_n. If σ(n) = n, then we call the restriction to Z_{n-1} σ_r. By the induction assumption, σ_r can be written as a composition of transpositions, σ_r = ρ_1 ρ_2 ... ρ_j, for some j. So, extending each ρ_m to ρ_m^e, which maps n to n, we have

σ = ρ_1^e ρ_2^e ... ρ_j^e

as a composition of transpositions. If σ(n) = k, where k is not equal to n, let τ be a transposition defined on Z_n such that τ(n) = k and τ(k) = n. Then τσ(n) = τ(k) = n.
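The inductive construction in this proof can be turned into a small procedure: working down from the largest integer, compose with a transposition that fixes it, and collect the transpositions used. A sketch (my own illustration, using 0-based lists of images):

```python
# Decompose a permutation into transpositions, following the proof:
# if sigma(n) = k != n, compose with the transposition tau = (k n), so that
# tau∘sigma fixes n; then sigma = tau_1 ∘ tau_2 ∘ ... (rightmost applied first).
def decompose(sigma):
    sigma = list(sigma)
    taus = []
    for n in range(len(sigma) - 1, 0, -1):
        k = sigma[n]
        if k != n:
            taus.append((k, n))
            # Replace sigma by tau∘sigma, which now fixes n.
            sigma = [k if s == n else n if s == k else s for s in sigma]
    return taus

def apply_transpositions(taus, n_elems):
    perm = list(range(n_elems))
    for a, b in reversed(taus):          # rightmost factor acts first
        perm = [b if p == a else a if p == b else p for p in perm]
    return perm

sigma = [2, 0, 1]        # the permutation 3, 1, 2 of the text, written 0-based
taus = decompose(sigma)
print(taus)                                  # [(1, 2), (0, 1)]
assert apply_transpositions(taus, 3) == sigma
```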

So let (τσ)_r be the restriction of τσ to Z_{n-1}. By the induction assumption there exist transpositions, say ρ_1, ρ_2, ..., ρ_j, so that

(τσ)_r = ρ_1 ρ_2 ... ρ_j.

Then by extension to Z_n,

τσ = ρ_1^e ρ_2^e ... ρ_j^e.

Then, because τ^{-1} is a transposition,

σ = τ^{-1}(τσ) = τ^{-1} ρ_1^e ρ_2^e ... ρ_j^e

is a composition of transpositions. This completes the proof.

8 The Sign of a Permutation

Theorem. There is a function ǫ, called the sign of the permutation, defined on the permutations σ of {1, 2, 3, ..., n}, such that ǫ(σ) is equal to either 1 or -1. And the function satisfies the following: if τ is a transposition, then ǫ(τ) = -1; and if σ and σ′ are permutations of the integers, then ǫ(σσ′) = ǫ(σ)ǫ(σ′).

Proof. Let us use a concrete example to guide us in the proof. Let σ be the permutation of the integers {1, 2, 3, 4, 5} given, listing images, by 32451. Consider the pairs of integers (i, j) where i < j. These are

P = {(1,2), (1,3), (1,4), (1,5),
     (2,3), (2,4), (2,5),
     (3,4), (3,5),
     (4,5)}.

Consider the product

M = Π_{(i,j)∈P} (x_{σ(j)} - x_{σ(i)}),

where x_k, for k = 1, 2, 3, ..., n, is a variable; in our concrete case n = 5. A pair is called positive if σ(i) < σ(j), that is, σ increases on the pair (i, j). And conversely, a pair is called negative if σ(i) > σ(j), that is, σ decreases on the pair (i, j). So for this σ the positive pairs are

P+ = {(1,3), (1,4), (2,3), (2,4), (3,4)}

and the negative pairs are

P- = {(1,2), (1,5), (2,5), (3,5), (4,5)}.

Construct a product of the differences of terms using the x variables: using positive pairs,

Π_{(i,j)∈P+} (x_{σ(j)} - x_{σ(i)}) = (x_4 - x_3)(x_5 - x_3)(x_4 - x_2)(x_5 - x_2)(x_5 - x_4),

and negative pairs,

Π_{(k,l)∈P-} (x_{σ(l)} - x_{σ(k)}) = (x_2 - x_3)(x_1 - x_3)(x_1 - x_2)(x_1 - x_4)(x_1 - x_5).

Now we multiply together these P+ and P- factors, getting

M = (x_4 - x_3)(x_5 - x_3)(x_4 - x_2)(x_5 - x_2)(x_5 - x_4) (x_2 - x_3)(x_1 - x_3)(x_1 - x_2)(x_1 - x_4)(x_1 - x_5).

Now we invert the m P- factors, getting

M = Π_{(i,j)∈P+} (x_{σ(j)} - x_{σ(i)}) (-1)^m Π_{(k,l)∈P-} (x_{σ(k)} - x_{σ(l)})
= (x_4 - x_3)(x_5 - x_3)(x_4 - x_2)(x_5 - x_2)(x_5 - x_4)

× (-1)^5 (x_3 - x_2)(x_3 - x_1)(x_2 - x_1)(x_4 - x_1)(x_5 - x_1).

We define ǫ(σ) to be (-1)^m, where m is the number of negative pairs (inversions), that is, the number of pairs in P-. Then we see that this last expression for M gives

M = ǫ(σ) Π_{(i,j)∈P} (x_j - x_i).

Equating the first expression for M,

M = Π_{(i,j)∈P} (x_{σ(j)} - x_{σ(i)}),

to the last, we have

Π_{(i,j)∈P} (x_{σ(j)} - x_{σ(i)}) = ǫ(σ) Π_{(i,j)∈P} (x_j - x_i).

Now consider a second permutation σ′, and let each of our n variables have the values x_τ = σ′(τ), for τ = 1, 2, 3, ..., n. Then

Π_{i<j} (x_{σ(j)} - x_{σ(i)}) = Π_{i<j} (σ′(σ(j)) - σ′(σ(i))) = Π_{i<j} (σ′σ(j) - σ′σ(i)),

and on the other hand

ǫ(σ) Π_{i<j} (x_j - x_i) = ǫ(σ) Π_{i<j} (σ′(j) - σ′(i)).

So

Π_{i<j} (σ′σ(j) - σ′σ(i)) = ǫ(σ) Π_{i<j} (σ′(j) - σ′(i)).

Now

Π_{i<j} (σ′σ(j) - σ′σ(i)) = ǫ(σ′σ) Π_{i<j} (j - i),

because the pairs in the product on the left are the same as the pairs in the product on the right, except that m′ of them are inverted, where m′ is the number of negative pairs of σ′σ. So if we reverse these inversions we get

Π_{i<j} (σ′σ(j) - σ′σ(i)) = (-1)^{m′} Π_{i<j} (j - i) =

ǫ(σ′σ) Π_{i<j} (j - i).

So we now have

ǫ(σ′σ) Π_{i<j} (j - i) = ǫ(σ) Π_{i<j} (σ′(j) - σ′(i)).

But in the same way as above,

ǫ(σ) Π_{i<j} (σ′(j) - σ′(i)) = ǫ(σ)ǫ(σ′) Π_{i<j} (j - i).

Therefore

ǫ(σ′σ) Π_{i<j} (j - i) = ǫ(σ)ǫ(σ′) Π_{i<j} (j - i).

So we conclude that ǫ(σ′σ) = ǫ(σ)ǫ(σ′), as was to be shown.

To complete the proof it remains to show that if τ is a transposition then ǫ(τ) = -1. A transposition is the interchange of two integers, so we call α the smaller and β the larger. So let σ be the transposition. Then the mapping is α → β and β → α, and all other maps are from an integer to itself. Let us present a concrete example on the integers {1, 2, 3, 4, 5, 6, 7}. Let α = 3 and β = 6. There are 21 pairs (i, j) with i < j, which are

(1,2),(1,3),(1,4),(1,5),(1,6),(1,7)
(2,3),(2,4),(2,5),(2,6),(2,7)
(3,4),(3,5),(3,6),(3,7)
(4,5),(4,6),(4,7)
(5,6),(5,7)
(6,7)

Starting from the bottom, the number of such pairs is the sum of an arithmetic progression from 1 to 6, (6)(7)/2 = 21. A pair (i, j) is an inversion if σ(i) > σ(j), so the inversions of this example are

(3,4),(3,5),(3,6)
(4,6)
(5,6)

The number of inversions is 5, so ǫ(σ) = (-1)^5 = -1.

We realize that the only possible inversions are those pairs either starting with α or ending with β. Let us consider the pairs starting with α that are inversions. Now the pair (α, β) is an inversion, because σ(α) = β > α = σ(β). If l is in the open interval (α, β), then the pair (α, l) is an inversion, because σ(l) = l < β = σ(α). Thus the pair (α, l) is an inversion if l ∈ (α, β], and a little thought, considering examples, shows that this condition gives all such inverted pairs of the form (α, l). So this leaves consideration of pairs of the form (k, β). Pairs (k, β) will be inversions if and only if k is in the open interval (α, β). This is true because such a k maps to itself and β maps to α, so σ(k) = k > α = σ(β), so (k, β) is an inversion. And clearly, if m is not in the open interval (α, β), then the pair (m, β) is not an inversion. The first set of inversions has one more element than the second, so the total number of inversions for a transposition σ is odd, so ǫ(σ) = -1, and a transposition is an odd permutation.

Corollary. If σ is equal to the composition of an odd number of transpositions, then the sign ǫ(σ) = -1, and the permutation is called odd. If σ is equal to the composition of an even number of transpositions, then the sign ǫ(σ) = 1, and the permutation is called even.

Corollary. If an odd number of interchanges (transpositions) brings a permutation σ to the identity permutation, then σ is odd. If an even number of interchanges brings a permutation σ to the identity permutation, then σ is even.

Corollary. ǫ(σ) = ǫ(σ^{-1}).
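The inversion-count definition of the sign is easy to compute. A sketch (my own illustration) checking the examples of this section and the multiplicative property:

```python
import itertools

# Sign of a permutation given as a list of images: (-1)^m, where m is the
# number of inversions, i.e. pairs i < j with sigma(i) > sigma(j).
def sign(sigma):
    m = sum(1 for i in range(len(sigma))
              for j in range(i + 1, len(sigma))
              if sigma[i] > sigma[j])
    return (-1) ** m

# The example permutation 32451 of the text has 5 inversions:
assert sign([3, 2, 4, 5, 1]) == -1

# The transposition of 3 and 6 on {1,...,7} also has 5 inversions:
assert sign([1, 2, 6, 4, 5, 3, 7]) == -1

# Multiplicativity eps(sigma'∘sigma) = eps(sigma')eps(sigma), checked on S_4.
def compose(p, q):   # (p∘q)(k) = p(q(k)), with 1-based image lists
    return [p[q[k] - 1] for k in range(len(q))]

for p in itertools.permutations([1, 2, 3, 4]):
    for q in itertools.permutations([1, 2, 3, 4]):
        assert sign(compose(list(p), list(q))) == sign(list(p)) * sign(list(q))
```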

Proof. ǫ(σ)ǫ(σ^{-1}) = ǫ(σσ^{-1}) = ǫ(id) = 1, so ǫ(σ) and ǫ(σ^{-1}) have the same sign, and so are equal.

9 A Formula for the Determinant Showing that its Defining Alternating Multilinear Functional is Unique

Let us compute a formula for the determinant of the 3 by 3 matrix

A = | a_{1,1} a_{1,2} a_{1,3} |
    | a_{2,1} a_{2,2} a_{2,3} |
    | a_{3,1} a_{3,2} a_{3,3} |

from its definition as an alternating multilinear functional on column vectors which has value 1 on the unit matrix. Let E_1, E_2, and E_3 be the unit vectors

E_1 = (1, 0, 0)^T,  E_2 = (0, 1, 0)^T,  E_3 = (0, 0, 1)^T.

The column vectors of A are

A_1 = (a_{1,1}, a_{2,1}, a_{3,1})^T,  A_2 = (a_{1,2}, a_{2,2}, a_{3,2})^T,  A_3 = (a_{1,3}, a_{2,3}, a_{3,3})^T.

Then the column vectors of A can be written in terms of the unit basis vectors. We have

A_1 = a_{1,1} E_1 + a_{2,1} E_2 + a_{3,1} E_3,
A_2 = a_{1,2} E_1 + a_{2,2} E_2 + a_{3,2} E_3,
A_3 = a_{1,3} E_1 + a_{2,3} E_2 + a_{3,3} E_3.

Letting D be the functional (that is, the determinant), its value is

D(A_1, A_2, A_3) = D(a_{1,1}E_1 + a_{2,1}E_2 + a_{3,1}E_3,
                     a_{1,2}E_1 + a_{2,2}E_2 + a_{3,2}E_3,
                     a_{1,3}E_1 + a_{2,3}E_2 + a_{3,3}E_3).

Now we shall expand this expression, using first that it is multilinear, then that it is alternating (swapping vector arguments changes the sign), and ultimately that its value is one on a unit matrix. We shall do this by selecting one term from each of the three argument sums, to form a product of matrix elements, and we do this in all possible ways, getting a sum. In what follows we can omit selecting the same unit vector from different arguments, because such terms are zero.

We get

D(A_1,A_2,A_3) = D(a_{1,1}E_1, a_{2,2}E_2, a_{3,3}E_3)
+ D(a_{1,1}E_1, a_{3,2}E_3, a_{2,3}E_2)
+ D(a_{2,1}E_2, a_{1,2}E_1, a_{3,3}E_3)
+ D(a_{2,1}E_2, a_{3,2}E_3, a_{1,3}E_1)
+ D(a_{3,1}E_3, a_{2,2}E_2, a_{1,3}E_1)
+ D(a_{3,1}E_3, a_{1,2}E_1, a_{2,3}E_2)

= a_{1,1}a_{2,2}a_{3,3} D(E_1,E_2,E_3)
+ a_{1,1}a_{3,2}a_{2,3} D(E_1,E_3,E_2)
+ a_{2,1}a_{1,2}a_{3,3} D(E_2,E_1,E_3)
+ a_{2,1}a_{3,2}a_{1,3} D(E_2,E_3,E_1)
+ a_{3,1}a_{2,2}a_{1,3} D(E_3,E_2,E_1)
+ a_{3,1}a_{1,2}a_{2,3} D(E_3,E_1,E_2)

= a_{1,1}a_{2,2}a_{3,3}(−1)⁰
+ a_{1,1}a_{3,2}a_{2,3}(−1)¹
+ a_{2,1}a_{1,2}a_{3,3}(−1)¹
+ a_{2,1}a_{3,2}a_{1,3}(−1)²
+ a_{3,1}a_{2,2}a_{1,3}(−1)¹
+ a_{3,1}a_{1,2}a_{2,3}(−1)²

= a_{1,1}a_{2,2}a_{3,3} − a_{1,1}a_{3,2}a_{2,3} − a_{2,1}a_{1,2}a_{3,3} + a_{2,1}a_{3,2}a_{1,3} − a_{3,1}a_{2,2}a_{1,3} + a_{3,1}a_{1,2}a_{2,3}.

Now in the first term D(E_1,E_2,E_3) = 1·(−1)⁰ = 1, because no interchanges of pairs of columns are needed to bring it into the form of the determinant of the unit matrix, D(E_1,E_2,E_3) = 1. However, in the fourth term, 2 interchanges of column vectors are needed to bring D(E_2,E_3,E_1) to the standard form, therefore D(E_2,E_3,E_1) = (−1)² = 1, and so on for the other terms. We can do this because, from the previous section on the sign of a permutation, we know that a permutation can be obtained as a composition

of transpositions, and every such transposition contributes a factor of minus one, thus giving the sign of the permutation. The sign of a permutation equals −1 raised to the power of the number of inversions in the permutation. Here we are actually using the inverse of the given permutation to reduce the ordering of the unit vectors appearing as columns of the matrix to the order 1, 2, 3. But the sign of a permutation equals the sign of its inverse.

There are 3! = 6 permutations of the integers 1, 2, 3, so there are 6 terms in this determinant. This form of the calculation is impractical for large matrices, but it is of theoretical importance. For example, for a 50 by 50 matrix there are 50! terms in this calculation, which is about 3 × 10⁶⁴. Fortunately, there are efficient ways of calculating a determinant; one is the Gaussian elimination method of bringing a nonsingular matrix to triangular form, because in such a form the determinant is simply the product of the diagonal elements.

Theorem. Any multilinear functional defined on the n column vectors of a square n × n matrix A with coefficients {a_{i,j} : i = 1,…,n, j = 1,…,n}, which is alternating, and which has value 1 on the unit matrix, is unique. Its value is given by summing over all of the permutations σ of the integers 1, 2, 3, …, n:

det(A) = Σ_σ ε(σ) a_{σ(1),1} ··· a_{σ(n),n},

where ε(σ) is the sign of the permutation.

Proof. (Later I shall give here the more general argument using our knowledge of the properties of the sign of a permutation, rather than counting transpositions as we did in the small 3 by 3 example above.) We carry out essentially the same calculation for the n by n case that we did for the 3 by 3 case above, using the unit basis vectors E_1, E_2, …, E_n. Notice that we obtain this formula using only the defining properties of the determinant. So as a function the determinant must be unique; that is, the alternating multilinear functional with value one on the unit matrix is unique.
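The permutation-sum formula of the theorem can be transcribed directly into code. This is a sketch for illustration only (as noted above, the n! cost makes it impractical beyond small n); the function name `det` is my choice:

```python
from itertools import permutations

def det(a):
    # det(A) = sum over sigma of eps(sigma) * a[sigma(1),1] * ... * a[sigma(n),n],
    # with eps(sigma) computed by counting inversions.
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        eps = -1 if inversions % 2 else 1
        product = 1
        for col in range(n):
            product *= a[sigma[col]][col]
        total += eps * product
    return total

# A 3-by-3 check against the six-term expansion worked out above.
a = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
assert det(a) == -3
assert det([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1   # value 1 on the unit matrix
```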

10 The Determinant of a Transpose

Theorem. Let A be a square n by n matrix, and A^T its transpose. Then Det(A^T) = Det(A).

Proof. Let B be the transpose of A, so that b_{i,j} = a_{j,i} by definition. We have

Det(A) = Σ_σ ε(σ) a_{σ(1),1} a_{σ(2),2} ··· a_{σ(n),n}.

We shall find that by expressing each product in this sum using the inverse σ⁻¹ of the permutation σ, we get, essentially, the formula for the determinant of the transpose. By definition of the inverse, if σ(j) = k, then σ⁻¹(k) = j.

To see how a product transforms, let us consider a specific simple permutation σ of the set of integers {1,2,3}. Let σ and its inverse σ⁻¹ be defined by

σ(1) = 3, σ⁻¹(3) = 1,
σ(2) = 1, σ⁻¹(1) = 2,
σ(3) = 2, σ⁻¹(2) = 3.

So the product

a_{σ(1),1} a_{σ(2),2} a_{σ(3),3} = a_{3,1} a_{1,2} a_{2,3}
= a_{3,σ⁻¹(3)} a_{1,σ⁻¹(1)} a_{2,σ⁻¹(2)}
= a_{1,σ⁻¹(1)} a_{2,σ⁻¹(2)} a_{3,σ⁻¹(3)}.

This final product comes about by ordering the previous product according to the first index of each factor. It now becomes clear that, in the case of permutations of the integers {1,2,…,n}, the product a_{σ(1),1} a_{σ(2),2} ··· a_{σ(n),n} equals a_{1,σ⁻¹(1)} a_{2,σ⁻¹(2)} ··· a_{n,σ⁻¹(n)}.

So the formula for Det(A) becomes, because ε(σ) = ε(σ⁻¹),

Det(A) = Σ_σ ε(σ⁻¹) a_{1,σ⁻¹(1)} a_{2,σ⁻¹(2)} ··· a_{n,σ⁻¹(n)}.

But since the sets of permutations {σ_1, σ_2, …, σ_{n!}} and {σ_1⁻¹, σ_2⁻¹, …, σ_{n!}⁻¹} are identical, we can write

Det(A) = Σ_σ ε(σ) a_{1,σ(1)} a_{2,σ(2)} ··· a_{n,σ(n)}.

Substituting a_{i,j} = b_{j,i}, we find

Det(A) = Σ_σ ε(σ) b_{σ(1),1} b_{σ(2),2} ··· b_{σ(n),n} = Det(B) = Det(A^T).

Corollary.

Det(A) = Σ_σ ε(σ) a_{σ(1),1} a_{σ(2),2} ··· a_{σ(n),n},

as well as

Det(A) = Σ_σ ε(σ) a_{1,σ(1)} a_{2,σ(2)} ··· a_{n,σ(n)}.

Proof. We proved the first formula from the multilinear functional nature of the determinant in the previous section, titled A Formula for the Determinant... . We proved the second formula from the first in the Theorem above, while proving that the determinant of a matrix transpose equals the determinant of the original matrix.

11 Determinant of a Product

Theorem. Let A and B be n by n matrices. Then Det(AB) = Det(A)Det(B).

Proof.
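The corollary's two expansions, one running down columns and one running along rows, can be checked against each other numerically. A sketch (all function names are mine):

```python
from itertools import permutations
from math import prod

def perm_sign(sigma):
    inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
              if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def det_by_columns(a):
    # sum of eps(sigma) * a[sigma(1),1] * ... * a[sigma(n),n]
    n = len(a)
    return sum(perm_sign(s) * prod(a[s[j]][j] for j in range(n))
               for s in permutations(range(n)))

def det_by_rows(a):
    # sum of eps(sigma) * a[1,sigma(1)] * ... * a[n,sigma(n)]
    n = len(a)
    return sum(perm_sign(s) * prod(a[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

a = [[2, 0, 1],
     [1, 3, 5],
     [4, 1, 2]]
transpose = [list(col) for col in zip(*a)]
# Both formulas agree, and both equal the determinant of the transpose.
assert det_by_columns(a) == det_by_rows(a) == det_by_columns(transpose)
```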

Let A = (a_{ij}), B = (b_{jk}), and AB = C. Then, if C_k is the kth column of C,

C_k = ( Σ_{j=1}^n a_{1j}b_{jk}, Σ_{j=1}^n a_{2j}b_{jk}, …, Σ_{j=1}^n a_{nj}b_{jk} )^T
= b_{1k}A_1 + b_{2k}A_2 + ··· + b_{nk}A_n.

Then

D(AB) = D(b_{11}A_1 + ··· + b_{n1}A_n, …, b_{1n}A_1 + ··· + b_{nn}A_n).

Expanding this multilinearly by choosing one term from each sum in all possible ways, and avoiding the zero terms in which two or more selected columns are equal, we obtain

D(AB) = Σ_σ D(b_{σ(1),1}A_{σ(1)}, …, b_{σ(n),n}A_{σ(n)})
= Σ_σ b_{σ(1),1} ··· b_{σ(n),n} D(A_{σ(1)}, …, A_{σ(n)}).

Applying the permutation σ⁻¹ to bring the A column vectors into standard order in each term of the sum, and using the fact that ε(σ⁻¹) = ε(σ), this becomes

Σ_σ ε(σ) b_{σ(1),1} ··· b_{σ(n),n} D(A_1, …, A_n)
= D(A_1, …, A_n) Σ_σ ε(σ) b_{σ(1),1} ··· b_{σ(n),n},

which equals D(A)D(B).

12 The Determinant of an Inverse

Corollary. Let A⁻¹ be the inverse of A. Then

D(A⁻¹) = 1 / D(A).

Proof. This is a corollary of the previous product theorem:

D(A⁻¹)D(A) = D(AA⁻¹) = D(I) = 1.
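The product theorem is easy to spot-check numerically. A sketch in plain Python (the helper names are mine), reusing the permutation-sum determinant:

```python
from itertools import permutations
from math import prod

def det(a):
    # permutation-sum determinant, as derived earlier
    n = len(a)
    def eps(s):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])
        return -1 if inv % 2 else 1
    return sum(eps(s) * prod(a[s[j]][j] for j in range(n))
               for s in permutations(range(n)))

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

a = [[1, 2, 0], [0, 1, 3], [2, 0, 1]]
b = [[3, 1, 0], [0, 2, 1], [1, 0, 2]]
assert det(matmul(a, b)) == det(a) * det(b)   # Det(AB) = Det(A) Det(B)
```

With exact integer entries the identity holds exactly; the inverse corollary D(A⁻¹) = 1/D(A) is the special case B = A⁻¹.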

13 Solving Linear Equations

Suppose we want to solve a linear equation of the form

AX = B,

where A is a square matrix of size n,

A =
[ a_{11} a_{12} … a_{1n} ]
[ …                    ]
[ a_{n1} a_{n2} … a_{nn} ],

X is a column vector to be solved for,

X = (x_1, x_2, …, x_n)^T,

and B is a right side vector,

B = (b_1, b_2, …, b_n)^T.

This can be written equivalently with the coefficient values x_1, x_2, …, x_n multiplying the column vectors of A. That is,

x_1 A_1 + x_2 A_2 + ··· + x_n A_n = B,

where A_j is the jth column vector of A,

A_j = (a_{1j}, a_{2j}, …, a_{nj})^T.

The solution can be accomplished by Gaussian elimination, by repeatedly adding multiples of rows so as to zero values in the matrix. This puts the set of equations in triangular form.

If the determinant of A is not zero, then the column vectors of A are linearly independent, and they form a basis of the n-dimensional space, so that each x_j is a coefficient, and values of the coefficients can be found so that a linear combination of the column vectors gives the right side vector B. Thus if the determinant is nonzero, there is a unique solution vector X = (x_1, x_2, …, x_n)^T. It is given by

X = A⁻¹B,

where A⁻¹ is the inverse of A. On the other hand, if det(A) is zero, the equation may not have a solution.

But consider the homogeneous equation, where B = 0. If the determinant of A is zero, the column vectors of A are not linearly independent; they are dependent. This means that there is a linear combination of the column vectors, with coefficients x_1, x_2, …, x_n not all zero, so that

x_1 A_1 + x_2 A_2 + ··· + x_n A_n = 0.

That is, there is always a nonzero solution to the homogeneous equation when the determinant of A is zero. On the other hand, if the determinant of A is not zero, then the inverse of A exists, so the unique solution to the homogeneous equation is

X = A⁻¹B = 0.
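The Gaussian elimination procedure described above can be sketched as follows. This is a minimal illustration (with partial pivoting added for numerical safety), not production code; the name `solve` is mine:

```python
def solve(a, b):
    # Solve A x = b by Gaussian elimination: repeatedly add multiples of
    # rows to zero the entries below the diagonal, then back-substitute.
    n = len(a)
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]      # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < 1e-12:
            raise ValueError("singular matrix: determinant is zero")
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                      # back substitution
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# 2x + y = 5, x + 3y = 10 has the unique solution x = 1, y = 3.
x = solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
assert abs(x[0] - 1.0) < 1e-9 and abs(x[1] - 3.0) < 1e-9
```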

14 Kernel and Range

Let U be an n-dimensional vector space, and let T be a linear transformation from U to a vector space V. Let K(T) be the kernel of T and R(T) the range of T. Then

dim(K(T)) + dim(R(T)) = n.

Let u_1, u_2, …, u_p be a basis of the kernel of T. It can be extended to a full basis of U. Suppose V is also n-dimensional, and let A be the matrix of T with respect to this basis. If the kernel is nonzero, then the first column of A is all zeroes, so that D(A) is zero. Let U have a second basis, let S be the linear transformation mapping the first basis to the second, and let B be the matrix of S. Then B has an inverse B⁻¹, and D(B)D(B⁻¹) = D(BB⁻¹) = D(I) = 1, so D(B) is not zero. The matrix of T with respect to the second basis is a product such as AB, and D(AB) = D(A)D(B). So in general the determinant of the matrix of T with respect to bases of U and V is zero if and only if the kernel of T is not zero, and this is true if and only if T does not have an inverse.

15 Quotient Spaces

16 Direct Sums

17 The Inner Product

The inner product of two vectors x and y in a complex or real vector space is written (x, y). The dot product of vector analysis is an inner product. The inner product has the following properties:

1. (x,y) = (y,x)‾, where the overbar denotes the complex conjugate,
2. (α_1 x_1 + α_2 x_2, y) = α_1(x_1,y) + α_2(x_2,y),
3. (x,x) ≥ 0, and (x,x) = 0 if and only if x = 0.

From these properties we get

(x, αy) = (αy, x)‾ = (α(y,x))‾ = ᾱ(y,x)‾ = ᾱ(x,y),

the bar denoting complex conjugation. The above definition is the one given in Halmos, Finite-Dimensional Vector Spaces.

Note. It is possible to replace (2) above by

(x, α_1 y_1 + α_2 y_2) = α_1(x,y_1) + α_2(x,y_2),

and then we have (cite an author that does this)

(αx, y) = ᾱ(x,y).

An example of an inner product, on a vector space of complex valued continuous functions of a real variable defined on an interval [a,b], is

(f,g) = ∫_a^b f(x) ḡ(x) dx.

See also the bra and ket notation of Dirac in physics. In a real vector space (x,y) = (y,x), and everything in sight is real, including the vector components.

18 Inner Product Spaces

Let V be a vector space with an inner product (u,v). Notice that we are putting a restriction on the vector space V, because the scalar field of the vector space must be either the real numbers R or the complex numbers C; see Halmos for a discussion of this and related matters. A basis can be used to construct an orthonormal basis (Gram-Schmidt orthogonalization). An inner product space is also a normed linear space with the norm

‖u‖ = (u,u)^{1/2}.
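The conjugation rules above can be checked directly on C^n with the inner product (x,y) = Σ x_i ȳ_i, which is linear in the first slot and conjugate-linear in the second, matching properties (1)-(3). A sketch (the helper name `inner` is mine):

```python
def inner(x, y):
    # (x, y) = sum of x_i * conjugate(y_i): linear in the first argument,
    # conjugate-linear in the second, as in properties (1)-(3) above.
    return sum(a * b.conjugate() for a, b in zip(x, y))

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]
alpha = 2 + 3j

assert inner(x, y) == inner(y, x).conjugate()                     # property (1)
assert inner([alpha * a for a in x], y) == alpha * inner(x, y)    # property (2)
assert inner(x, [alpha * b for b in y]) == alpha.conjugate() * inner(x, y)
```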

The Cauchy-Schwarz inequality is

|(u,v)| ≤ ‖u‖ ‖v‖.

The triangle inequality is

‖u+v‖ ≤ ‖u‖ + ‖v‖.

We look more at these below.

19 Normed Linear Spaces

The norm of a vector in Euclidean space is the square root of the sum of its components squared, which of course comes from the Pythagorean theorem. A norm for a vector x in a general vector space, written ‖x‖, satisfies the following properties:

(i) ‖x‖ > 0 if x is not zero.
(ii) ‖ax‖ = |a| ‖x‖ if a is a scalar.
(iii) ‖x+y‖ ≤ ‖x‖ + ‖y‖.

Property (iii) is called the triangle inequality.

Definition. A metric is a distance function ρ defined on points of a space M, called a metric space. If points a, b, c ∈ M, then the following hold:

1. ρ(a,b) ≥ 0,
2. ρ(a,b) = ρ(b,a),
3. ρ(a,a) = 0,
4. ρ(a,c) ≤ ρ(a,b) + ρ(b,c).

The Pythagorean distance in the real plane R²,

d((x_1,y_1),(x_2,y_2)) = √((x_2 − x_1)² + (y_2 − y_1)²),

is an example of a metric.
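As a quick concrete check that a norm induces a metric with these properties, here is the Euclidean case in code (a sketch; the names `norm` and `dist` are mine):

```python
from math import sqrt

def norm(x):
    # Euclidean norm: square root of the sum of squared components
    return sqrt(sum(c * c for c in x))

def dist(x, y):
    # the metric induced by the norm: d(x, y) = ||x - y||
    return norm([a - b for a, b in zip(x, y)])

a, b, c = [0.0, 0.0], [3.0, 4.0], [6.0, 0.0]
assert abs(norm(b) - 5.0) < 1e-12                       # 3-4-5 right triangle
assert dist(a, b) == dist(b, a)                         # symmetry
assert dist(a, a) == 0.0
assert dist(a, c) <= dist(a, b) + dist(b, c) + 1e-12    # triangle inequality
```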

Definition. A sequence {x_n}_{n=1}^∞ contained in a metric space (M,ρ) is a Cauchy sequence if, given any ε > 0, there is an integer N so that for all n, m > N the distance ρ(x_n, x_m) < ε.

Definition. A metric space (M,ρ) is complete if every Cauchy sequence in M converges to a point of M.

This is named after the Frenchman Augustin-Louis Cauchy (August 21, 1789 to May 23, 1857), because using these ideas in the early 19th century he was able to add rigor to mathematical analysis (the calculus), and to explain the convergence of rational numbers to a rigorously defined concept of irrational number, which completed the set of rational numbers as the real numbers by adding the irrationals. He told the world what the irrational √2 was, for example, and this finally allowed the ancient Greeks to sleep peacefully in their graves, no longer tossing and turning over the incommensurability of the hypotenuse, which had destroyed their mathematics based on numbers being the lengths of geometric line segments.

A vector space with a norm is called a normed linear space. A normed linear space has a distance function or metric defined by

d(x,y) = ‖x − y‖,

and thus is a metric space. An inner product space has a norm defined by

‖x‖ = √(x,x).

These things are proved in the subsection below called The Triangle Inequality. A normed linear space that is a complete metric space is called a Banach space, named after the Polish mathematician Stefan Banach (March 30, 1892 to August 31, 1945), who was one of the greatest 20th century mathematicians.

20 Orthonormal Vectors

A set of vectors S = {x_i} is an orthonormal set if each pair of distinct vectors is orthogonal,

(x_i, x_j) = 0 for i ≠ j,

and each vector has unit norm,

(x_i, x_i) = ‖x_i‖² = 1.

If x is an arbitrary vector and S = {x_i} is an orthonormal set, then the coefficients

c_i = (x, x_i)

are called the Fourier coefficients of x with respect to the set S. If x can be expressed as a linear combination of the orthonormal vectors of S, then there is a Fourier expansion of x,

x = Σ_{i=1}^n c_i x_i.

Examples of orthonormal sets include the trigonometric functions and the various orthogonal polynomials used widely in applied mathematics.

21 Gram-Schmidt Orthogonalization

Given a vector v and a vector u, the projection of v onto u has length ‖v‖ cos(θ), where θ is the angle between v and u. This can be written as

(v,u) / ‖u‖.

The projection of v in the direction of u, or the component of v in the direction of u, is

((v,u)/‖u‖)(u/‖u‖) = (v,u)u/‖u‖².

So suppose we have an orthogonal set W = {w_1, w_2, …, w_k} and a vector v not in the span of W. Let V be the space spanned by W and v. We can find a replacement w_{k+1} for v so that w_1, w_2, …, w_k, w_{k+1} is an orthogonal basis for V. We subtract from v its projections in the direction of each of w_1, w_2, …, w_k:

w_{k+1} = v − (v,w_1)/‖w_1‖² w_1 − (v,w_2)/‖w_2‖² w_2 − ··· − (v,w_k)/‖w_k‖² w_k.
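The subtraction of projections described above translates directly into code. A sketch of classical Gram-Schmidt (the function names are mine; as the text notes next, the modified variant is preferred for numerical stability):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vectors):
    # Replace each v by v minus its projections onto the orthogonal
    # vectors found so far: w = v - sum of (v, w_i)/||w_i||^2 * w_i.
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            coef = dot(v, u) / dot(u, u)
            w = [wi - coef * ui for wi, ui in zip(w, u)]
        basis.append(w)
    return basis

basis = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(dot(basis[i], basis[j])) < 1e-12   # pairwise orthogonal
```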

Using this process, given a basis of a space, we can compute from it an orthogonal basis. See Numerical Methods, Dahlquist and Björck, 1974, Prentice-Hall, for the modified Gram-Schmidt process, which is an equivalent process with better numerical stability.

22 The Pythagorean Theorem for Inner Product Spaces

Proposition. If two vectors u and v in an inner product space are orthogonal, then

‖u+v‖² = ‖u‖² + ‖v‖².

Proof.

‖u+v‖² = (u+v, u+v) = (u,u) + (v,u) + (u,v) + (v,v) = ‖u‖² + ‖v‖²,

since (u,v) = (v,u) = 0 by orthogonality.

23 The Cauchy-Schwarz Inequality

Cauchy-Schwarz Inequality. If x and y are two vectors in an inner product space, then

|(x,y)| ≤ ‖x‖ ‖y‖,

and we have equality if and only if x and y are dependent.

Proof. If y = 0 then equality holds; otherwise the Cauchy-Schwarz inequality is equivalent to

|(x,y)| / ‖y‖ ≤ ‖x‖.

We let

λ = (x,y) / ‖y‖².

We have

0 ≤ ‖x − λy‖² = (x − λy, x − λy)
= (x,x) + (x, −λy) + (−λy, x) + (−λy, −λy)
= ‖x‖² − λ̄(x,y) − λ(y,x) + |λ|² ‖y‖².

We have

λ̄ = (x,y)‾ / ‖y‖² = (y,x) / ‖y‖².

Thus each of the last three terms of

‖x‖² − λ̄(x,y) − λ(y,x) + |λ|² ‖y‖²

has magnitude

|(x,y)|² / ‖y‖².

So by cancellation, the inequality 0 ≤ ‖x − λy‖² becomes

0 ≤ ‖x‖² − |(x,y)|² / ‖y‖².

Therefore

|(x,y)|² ≤ ‖x‖² ‖y‖²,

and thus

|(x,y)| ≤ ‖x‖ ‖y‖.

If x is a multiple of y, say x = αy, then

|(x,y)| = |α| ‖y‖² = |α| ‖y‖ · ‖y‖ = ‖x‖ ‖y‖,

so we have equality. Conversely, if we have equality, then x is a multiple of y. For if x and y are not dependent, then for any λ, x − λy is not zero, and so in the proof above we would start with

0 < ‖x − λy‖²,

and carrying through the steps of the proof above we would find

|(x,y)| < ‖x‖ ‖y‖.

So if we have equality, then x must be a multiple of y.

Real Vector Space Proof. If we are in a real vector space, we can give an alternate proof. The inequality above,

0 ≤ ‖x‖² − λ̄(x,y) − λ(y,x) + |λ|² ‖y‖²,

becomes

0 ≤ ‖x‖² − 2λ(x,y) + λ² ‖y‖² = Aλ² + Bλ + C,

where A = ‖y‖², B = −2(x,y), and C = ‖x‖². So we obtain a quadratic function in λ of the form

f(λ) = Aλ² + Bλ + C ≥ 0.

Because this function is nonnegative for every λ, it does not have a pair of distinct real roots. So the discriminant is not positive. Thus

B² − 4AC ≤ 0,

so

B² ≤ 4AC,

that is,

4(x,y)² ≤ 4 ‖x‖² ‖y‖².

Thus

|(x,y)| ≤ ‖x‖ ‖y‖,

which is the Cauchy-Schwarz inequality.

A Third Proof. Given u and v, with v ≠ 0, let us perform a Gram-Schmidt orthogonalization to get a vector w that is orthogonal to v:

w = u − (u,v)v/‖v‖².

Then

u = (u,v)v/‖v‖² + w

is an orthogonal decomposition of u. By the Pythagorean theorem,

‖u‖² = ‖(u,v)v/‖v‖²‖² + ‖w‖² ≥ ‖(u,v)v/‖v‖²‖² = |(u,v)|²/‖v‖²,

with equality iff w = 0, that is, iff u and v are dependent, that is, iff u is a multiple of v.

A Fourth Proof. Given vectors x, y, there exists a λ and a vector z orthogonal to x so that

y = λx + z, so that z = y − λx.

Then the condition (x,z) = 0 gives

λ(x,x) = (x,y), and λ = (x,y)/(x,x).

The Pythagorean theorem gives

(y,y) = λ²(x,x) + (z,z),

and so

λ²(x,x) = (y,y) − (z,z).

Then squaring λ(x,x) = (x,y), we have

(x,y)² = λ²(x,x)² = λ²(x,x)(x,x) = [(y,y) − (z,z)](x,x) ≤ (x,x)(y,y).

The inequality reduces to equality iff (z,z) = 0, iff y = λx.
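The third proof suggests a direct numerical check, in the real case, of both the inequality and the exact slack behind it, ‖u‖²‖v‖² − (u,v)² = ‖w‖²‖v‖². A sketch with vectors of my choosing:

```python
from math import sqrt

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

u = [1.0, 2.0, 3.0]
v = [4.0, 0.0, -1.0]

# Orthogonal decomposition u = (u,v) v / ||v||^2 + w, with (w, v) = 0.
coef = dot(u, v) / dot(v, v)
w = [ui - coef * vi for ui, vi in zip(u, v)]
assert abs(dot(w, v)) < 1e-12

# Cauchy-Schwarz, and the Pythagorean identity behind it:
assert abs(dot(u, v)) <= sqrt(dot(u, u)) * sqrt(dot(v, v))
assert abs(dot(u, u) * dot(v, v) - dot(u, v) ** 2 - dot(w, w) * dot(v, v)) < 1e-9
```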

24 The Infinite Dimensional Space l²

The space l² consists of sequences s = {s_n}_{n=1}^∞ of real or complex numbers such that

Σ_{n=1}^∞ |s_n|² < ∞.

Such a sequence is a function from the positive integers I = {1,2,3,…}, with s(k) = s_k for k ∈ I, so that l² is actually a function space, and we may write an element of l² either as a sequence {s_n}_{n=1}^∞ or as a function s : k ↦ s_k.

We shall show below that l² has an inner product

(s,t) = Σ_{i=1}^∞ s(i) t̄(i),

a resulting norm

‖s‖ = (s,s)^{1/2} = ( Σ_{i=1}^∞ |s(i)|² )^{1/2},

and thus a metric

ρ(s,t) = ‖s − t‖.

We shall show that the inner product in l² given above converges and satisfies the Cauchy-Schwarz inequality. Then we shall prove the Minkowski inequality, which shows that l² is a vector space.

Theorem (Cauchy-Schwarz Inequality for l²). If s and t are in l², then

|(s,t)| = | Σ_{i=1}^∞ s_i t̄_i | ≤ ( Σ_{i=1}^∞ |s_i|² )^{1/2} ( Σ_{i=1}^∞ |t_i|² )^{1/2} = ‖s‖ ‖t‖.
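Truncating two concrete square-summable sequences gives a numerical illustration of the theorem. This is a sketch: the sequences 1/n and 1/n² and the cutoff N are my choices, and the limit π²/6 of Σ 1/n² is the standard Basel-problem value.

```python
from math import sqrt, pi

N = 10000
s = [1.0 / n for n in range(1, N + 1)]          # s_n = 1/n, square-summable
t = [1.0 / n ** 2 for n in range(1, N + 1)]     # t_n = 1/n^2, square-summable

inner = sum(a * b for a, b in zip(s, t))        # partial sum of (s, t)
norm_s = sqrt(sum(a * a for a in s))
norm_t = sqrt(sum(b * b for b in t))

assert abs(inner) <= norm_s * norm_t            # Cauchy-Schwarz for l2
assert sum(a * a for a in s) < pi ** 2 / 6      # partial sums stay below the limit
```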


More information

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

Vector Spaces, Affine Spaces, and Metric Spaces

Vector Spaces, Affine Spaces, and Metric Spaces Vector Spaces, Affine Spaces, and Metric Spaces 2 This chapter is only meant to give a short overview of the most important concepts in linear algebra, affine spaces, and metric spaces and is not intended

More information

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products Linear Algebra Paul Yiu Department of Mathematics Florida Atlantic University Fall 2011 6A: Inner products In this chapter, the field F = R or C. We regard F equipped with a conjugation χ : F F. If F =

More information

Linear Algebra: Lecture notes from Kolman and Hill 9th edition.

Linear Algebra: Lecture notes from Kolman and Hill 9th edition. Linear Algebra: Lecture notes from Kolman and Hill 9th edition Taylan Şengül March 20, 2019 Please let me know of any mistakes in these notes Contents Week 1 1 11 Systems of Linear Equations 1 12 Matrices

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

2. Every linear system with the same number of equations as unknowns has a unique solution.

2. Every linear system with the same number of equations as unknowns has a unique solution. 1. For matrices A, B, C, A + B = A + C if and only if A = B. 2. Every linear system with the same number of equations as unknowns has a unique solution. 3. Every linear system with the same number of equations

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices

Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices 1 Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices Note. In this section, we define the product

More information

4. Determinants.

4. Determinants. 4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.

More information

Chapter 4. Matrices and Matrix Rings

Chapter 4. Matrices and Matrix Rings Chapter 4 Matrices and Matrix Rings We first consider matrices in full generality, i.e., over an arbitrary ring R. However, after the first few pages, it will be assumed that R is commutative. The topics,

More information

Determinants: Uniqueness and more

Determinants: Uniqueness and more Math 5327 Spring 2018 Determinants: Uniqueness and more Uniqueness The main theorem we are after: Theorem 1 The determinant of and n n matrix A is the unique n-linear, alternating function from F n n to

More information

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

INNER PRODUCT SPACE. Definition 1

INNER PRODUCT SPACE. Definition 1 INNER PRODUCT SPACE Definition 1 Suppose u, v and w are all vectors in vector space V and c is any scalar. An inner product space on the vectors space V is a function that associates with each pair of

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Honors Algebra II MATH251 Course Notes by Dr. Eyal Goren McGill University Winter 2007

Honors Algebra II MATH251 Course Notes by Dr. Eyal Goren McGill University Winter 2007 Honors Algebra II MATH251 Course Notes by Dr Eyal Goren McGill University Winter 2007 Last updated: April 4, 2014 c All rights reserved to the author, Eyal Goren, Department of Mathematics and Statistics,

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

Lecture 7. Econ August 18

Lecture 7. Econ August 18 Lecture 7 Econ 2001 2015 August 18 Lecture 7 Outline First, the theorem of the maximum, an amazing result about continuity in optimization problems. Then, we start linear algebra, mostly looking at familiar

More information

HONORS LINEAR ALGEBRA (MATH V 2020) SPRING 2013

HONORS LINEAR ALGEBRA (MATH V 2020) SPRING 2013 HONORS LINEAR ALGEBRA (MATH V 2020) SPRING 2013 PROFESSOR HENRY C. PINKHAM 1. Prerequisites The only prerequisite is Calculus III (Math 1201) or the equivalent: the first semester of multivariable calculus.

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

Lecture 1: Review of linear algebra

Lecture 1: Review of linear algebra Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

Chapter 5. Basics of Euclidean Geometry

Chapter 5. Basics of Euclidean Geometry Chapter 5 Basics of Euclidean Geometry 5.1 Inner Products, Euclidean Spaces In Affine geometry, it is possible to deal with ratios of vectors and barycenters of points, but there is no way to express the

More information

II. Determinant Functions

II. Determinant Functions Supplemental Materials for EE203001 Students II Determinant Functions Chung-Chin Lu Department of Electrical Engineering National Tsing Hua University May 22, 2003 1 Three Axioms for a Determinant Function

More information

4.2. ORTHOGONALITY 161

4.2. ORTHOGONALITY 161 4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance

More information

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12 24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2

More information

Systems of Linear Equations and Matrices

Systems of Linear Equations and Matrices Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Chapter 6 Inner product spaces

Chapter 6 Inner product spaces Chapter 6 Inner product spaces 6.1 Inner products and norms Definition 1 Let V be a vector space over F. An inner product on V is a function, : V V F such that the following conditions hold. x+z,y = x,y

More information

Systems of Linear Equations and Matrices

Systems of Linear Equations and Matrices Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first

More information

Online Exercises for Linear Algebra XM511

Online Exercises for Linear Algebra XM511 This document lists the online exercises for XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Lecture 02 ( 1.1) Online Exercises for Linear Algebra XM511 1) The matrix [3 2

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

Lecture Notes in Linear Algebra

Lecture Notes in Linear Algebra Lecture Notes in Linear Algebra Dr. Abdullah Al-Azemi Mathematics Department Kuwait University February 4, 2017 Contents 1 Linear Equations and Matrices 1 1.2 Matrices............................................

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1 DETERMINANTS 1 Solving linear equations The simplest type of equations are linear The equation (1) ax = b is a linear equation, in the sense that the function f(x) = ax is linear 1 and it is equated to

More information

Linear Algebra Homework and Study Guide

Linear Algebra Homework and Study Guide Linear Algebra Homework and Study Guide Phil R. Smith, Ph.D. February 28, 20 Homework Problem Sets Organized by Learning Outcomes Test I: Systems of Linear Equations; Matrices Lesson. Give examples of

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Your first day at work MATH 806 (Fall 2015)

Your first day at work MATH 806 (Fall 2015) Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies

More information

Archive of past papers, solutions and homeworks for. MATH 224, Linear Algebra 2, Spring 2013, Laurence Barker

Archive of past papers, solutions and homeworks for. MATH 224, Linear Algebra 2, Spring 2013, Laurence Barker Archive of past papers, solutions and homeworks for MATH 224, Linear Algebra 2, Spring 213, Laurence Barker version: 4 June 213 Source file: archfall99.tex page 2: Homeworks. page 3: Quizzes. page 4: Midterm

More information

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45 address 12 adjoint matrix 118 alternating 112 alternating 203 angle 159 angle 33 angle 60 area 120 associative 180 augmented matrix 11 axes 5 Axiom of Choice 153 basis 178 basis 210 basis 74 basis test

More information

0.1 Rational Canonical Forms

0.1 Rational Canonical Forms We have already seen that it is useful and simpler to study linear systems using matrices. But matrices are themselves cumbersome, as they are stuffed with many entries, and it turns out that it s best

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

Introduction to Linear Algebra, Second Edition, Serge Lange

Introduction to Linear Algebra, Second Edition, Serge Lange Introduction to Linear Algebra, Second Edition, Serge Lange Chapter I: Vectors R n defined. Addition and scalar multiplication in R n. Two geometric interpretations for a vector: point and displacement.

More information

Lecture 23: Trace and determinants! (1) (Final lecture)

Lecture 23: Trace and determinants! (1) (Final lecture) Lecture 23: Trace and determinants! (1) (Final lecture) Travis Schedler Thurs, Dec 9, 2010 (version: Monday, Dec 13, 3:52 PM) Goals (2) Recall χ T (x) = (x λ 1 ) (x λ n ) = x n tr(t )x n 1 + +( 1) n det(t

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information