CAUCHY-BINET FOR PSEUDO-DETERMINANTS


OLIVER KNILL

Abstract. The pseudo-determinant Det(A) of a square matrix A is defined as the product of the nonzero eigenvalues of A. It is a basis-independent number which is, up to a sign, the first nonzero coefficient of the characteristic polynomial of A. We extend here the Cauchy-Binet formula to pseudo-determinants. More specifically, after proving some properties of pseudo-determinants, we show that for any two n × m matrices F, G, the formula Det(F^T G) = Σ_P det(F_P) det(G_P) holds, where det(F_P) runs over all k × k minors of F and k = min(rank(F^T G), rank(F G^T)). A consequence is the following Pythagoras theorem: for any selfadjoint matrix A of rank k one has Det^2(A) = Σ_P det^2(A_P), where det(A_P) runs over all k × k minors of A.

Date: June 17.
Mathematics Subject Classification. Primary: 15A15, 15A69, 15A09.
Key words and phrases. Linear algebra, Cauchy-Binet, Binet-Cauchy, pseudo-determinant, pseudo-inverse, matrix tree theorem, multilinear algebra.

1. Introduction

The Cauchy-Binet theorem for two n × m matrices A, B with n ≥ m tells that

(1)   det(A^T B) = Σ_P det(A_P) det(B_P),

where the sum is over all m × m square submatrices P and A_P is the matrix A masked by the pattern P. In other words, A_P is an m × m matrix obtained by deleting n − m rows in A, and det(A_P) is a minor of A. In the special case m = n, the formula is the product formula det(A^T B) = det(A^T) det(B) for determinants. For direct proofs see [33, 29, 34]. An elegant multilinear proof is in [21], which calls it "almost tautological". A graph-theoretical proof using the Lindström-Gessel-Viennot lemma sees matrix multiplication as concatenating directed graphs and determinants as a sum of weighted path integrals [3]. The classical Cauchy-Binet theorem implies the Pythagorean identity det(A^T A) = Σ_P det^2(A_P) (also called Lagrange identity [19]), where P runs over all m × m submatrices of A, a formula which is useful for example to count the number of basis choices in matroids [2].
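Identity (1) is easy to test numerically. Here is a minimal Mathematica sketch; the matrix sizes and the use of the built-in Minors are our own choice:

(* Check the classical Cauchy-Binet identity (1) on random integer matrices.
   Minors[M, k] returns the matrix of all k x k minors of M, indexed by row
   and column subsets in a fixed canonical order, so the right hand side of
   (1) is the total of the entrywise product of the m x m minors. *)
n = 5; m = 3;
A = RandomInteger[{-3, 3}, {n, m}];
B = RandomInteger[{-3, 3}, {n, m}];
Det[Transpose[A].B] == Total[Minors[A, m] Minors[B, m], 2]   (* True *)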

The Cauchy-Binet formula assures that the determinant is compatible with the matrix product. Historically, after Leibniz introduced determinants in 1693 and Vandermonde made them into a theory, Binet and Cauchy independently found the product formula for the determinant [12, 36, 7, 10], even before matrix multiplication had been formalized. [14, 23] noticed that Lagrange had mentioned a similar result even earlier, but only in the three-dimensional case. The term "matrix" was first used by Sylvester; it has also been noted that Binet's proof was not complete. It was Cayley who first looked at the matrix algebra [13, 27, 35]. It is clear today that the Cauchy-Binet formula played a pivotal role in the development of matrix algebra. One can see in one of the first textbooks [39] on determinants how the notation has changed until today.

In this paper, we extend the Cauchy-Binet formula (1) to matrices with determinant 0. The pseudo-determinant Det(A) is defined as the product of the nonzero eigenvalues of A, with the assumption Det(0) = 0 for the zero matrix 0. Looking at singular matrices with pseudo-determinants opens a new world, which formula (1) has buried under the trivial identity 0 = 0. The extension of Cauchy-Binet to pseudo-determinants is fascinating because these determinants are not much explored and because Cauchy-Binet for pseudo-determinants is not a trivial extension of the classical theorem. One reason is that the most commonly used form of Cauchy-Binet is false, even for diagonal matrices: while Det(AB) = Det(BA) is true, we have in general Det(AB) ≠ Det(A) Det(B). Also (1) does not hold for pseudo-determinants: take a nilpotent matrix A satisfying A^2 = 0 but A ≠ 0; then with B = A^T we have Det(A^T B) > 0 but Det(A^T) Det(B) = Det(A)^2 = 0.

What can be generalized? Because the eigenvalues of the square matrices C and C^T agree, it is true that Det(C) = Det(C^T) for square matrices. In particular, Det(A^T B) = Det(B^T A) if A, B are matrices of the same kind. It is also true - even though it is slightly less obvious - that Det(A^T B) = Det(A B^T). The latter follows from the fact that A^T B and A B^T are essentially isospectral, meaning that they have the same nonzero eigenvalues. If A, B are not square, then one of the products has zero eigenvalues, so that we need the pseudo-determinant for this identity to be interesting. Experiments showed us that summing over determinants of square matrices on the right hand side often works, but not always. The question was: over which size of square matrices do we have to sum? While it is evident that the ranks of A and B would play a role, only computer experiments revealed that it is not the ranks of A, B, A^T B or A B^T alone, but the minimum of the ranks of A^T B and A B^T which matters for the pseudo-determinant.

Here is the result:

Theorem 1. If A, B are matrices of the same size, then

(2)   Det(A^T B) = Σ_P det(A_P) det(B_P),

where the sum is over all k × k submatrix masks P and where k = min(rank(A^T B), rank(A B^T)).

Is this more general than the classical Cauchy-Binet theorem? If A^T B is not invertible but A B^T is, then one can use that Det(A^T B) = det(A B^T) and apply the classical Cauchy-Binet result. Theorem (1) however also applies if both A^T B and A B^T are singular, and this appears to be new. Yes, it is more general than the classical theorem. For a selfadjoint matrix A of rank k, it shows that Det(A^2) is a sum of squares of determinants det^2(B) of k × k submatrices B. This uses that Det^2(A) = Det(A^2), which is one of the identities for normal matrices to be discussed in the next section.

The pairing ⟨A, B⟩ = Det(A^T B) = ⟨B, A⟩ leads to a "norm" ||A|| = √⟨A, A⟩. While ||A|| = 0 if and only if A = 0, it satisfies no triangle inequality and no Cauchy-Schwarz inequality |⟨A, B⟩| ≤ ||A|| ||B||. The sum of determinants ⟨A, B⟩ depends in a nonlinear way on both A and B: ⟨λA, B⟩ = λ^k ⟨A, B⟩ = ⟨A, λB⟩, where k is the minimum of the ranks of the square matrices A^T B and A B^T. The sphere X = {||A|| = 1} in M(n, R) is a union of Cauchy-Binet varieties X_n = SL(n, R), X_{n−1}, ..., X_1. In the case M(2, R) for example, we have X = X_2 ∪ X_1 = SL(2, R) ∪ {ad − bc = 0, a + d = ±1}. Eliminating d, the set X_1 = {a(1 − a) = bc} is a two-dimensional quadric. In the case of diagonal 2 × 2 matrices, we have X = X_1 ∪ X_0 = {ad = 1} ∪ {ad = 0, a + d = ±1}, which is a union of a hyperbola and the four points {±1, 0}, {0, ±1}. In the case M(3, R) already, the unit sphere X is the 8-dimensional X_3 = SL(3, R) together with a 7-dimensional X_2 and a 6-dimensional variety X_1. The classical Cauchy-Binet theorem misses the latter two. We see that the case det(A) = 0 is an unexpectedly rich place.
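Theorem (1) can be probed directly in Mathematica. The helper pDet below is our own variant of the PDet routine given in the appendix, and the 4 × 2 matrices are an arbitrary choice for which both F^T G and F G^T are singular:

(* pDet: the pseudo-determinant is (-1)^j times the first nonzero
   coefficient of the characteristic polynomial, where j counts the
   leading zero coefficients. *)
pDet[A_] := Module[{c = CoefficientList[CharacteristicPolynomial[A, x], x], j},
  j = LengthWhile[c, # === 0 &]; (-1)^j c[[j + 1]]];
F = {{1, 0}, {0, 1}, {1, 1}, {0, 0}};
G = {{1, 2}, {0, 1}, {1, 0}, {2, 2}};
k = Min[MatrixRank[Transpose[F].G], MatrixRank[F.Transpose[G]]];  (* here k = 1 *)
pDet[Transpose[F].G] == Total[Minors[F, k] Minors[G, k], 2]       (* True: both sides are 3 *)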

One of the main motivations for pseudo-determinants is graph theory, where the Laplacian matrix L always has a kernel. While det(L) is zero and not interesting, Det(L) has combinatorial meaning and allows to count the spanning trees in the graph. The number Det(L) indeed is a measure for the complexity of the graph. This paper grew while developing a new spanning tree theorem for the Dirac operator D of a graph, a result which was only discovered experimentally, by studying the pseudo-determinant of Dirac operators of graphs. Since we saw that joining two graphs along a single vertex has the effect that the square of the pseudo-determinant of the Dirac operator is multiplicative, there had to be a combinatorial interpretation of Det^2(D). This generalizes the well-known fact that the classical Cauchy-Binet theorem gives a combinatorial interpretation of the minor of the Laplace-Beltrami operator L of a finite simple graph.

The classical Kirchhoff matrix tree theorem is an immediate consequence of Theorem (1), because if F = G is the incidence matrix of a graph, then A = F^T G is the scalar Laplacian and Det(A) = Det(F^T G) = Σ_P det(F_P)^2. It is important to note however that, unlike for the Hodge Laplacian, the Kirchhoff matrix tree theorem can rely on the classical Cauchy-Binet theorem for invertible matrices. The reason is that for a connected graph the kernel of the Laplacian is one-dimensional only, so that Det(A) = n det(M), where M is a minor of A, which is a classical determinant. The proof can then proceed with the classical Cauchy-Binet theorem for M. This becomes more complicated in the Dirac case, where the square of D gives the Laplace-Beltrami operator L = D^2 on discrete differential forms (see [25]). Now L has a large kernel in general, with dim(ker(L)) = Σ_i b_i = b, the sum of the Betti numbers of the graph, which by Hodge theory is the total dimension of all harmonic forms. Theorem (1) gives a combinatorial interpretation of what Det(L) means if L is the Laplace-Beltrami operator of a graph. The matrix L is large in general: if G = (V, E) has v cliques, then L is a v × v matrix. For the complete graph K_n for example, v = 2^n − 1.
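The Kirchhoff consequence just mentioned can be checked on a small example, here the cycle graph C_4 with a signed incidence matrix of our own choosing (pDet as above):

(* Kirchhoff for C_4: each row of the incidence matrix F carries +1/-1
   at the endpoints of one edge; L = F^T.F is the scalar Laplacian. *)
F = {{1, -1, 0, 0}, {0, 1, -1, 0}, {0, 0, 1, -1}, {-1, 0, 0, 1}};
L = Transpose[F].F;
pDet[L]       (* 16 = n^2 for the cycle C_n with n = 4 *)
pDet[L]/4     (* 4 spanning trees: drop any one of the 4 edges *)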

Update and Acknowledgements: This update contains more references and minor corrections of typos. We added Mathematica code. From the Providence ILAS meeting, I got a valuable pointer from Fuzhen Zhang on generalized matrix functions, a couple of useful references on generalized inverses by Nieves Castro-González like [4, 5], and learned from discussions with Woong Kook about more sophisticated matrix tree theorems which are sensitive to higher cohomologies. I should also thank Shlomo Sternberg for a historical discussion on Binet in 2007, which has prompted me to search since for sources about this exciting part of the history of linear algebra. Changes since June 10:

2. The pseudo-determinant

Definition 1. The pseudo-determinant of a square matrix A is defined as the product of the nonzero eigenvalues of A, with the convention that the pseudo-determinant of the 0 matrix is defined to be zero.

We start with basic facts about the pseudo-determinant Det(A) of an n × n matrix A. Most are obvious, but we did not find any references. Some multilinear treatment of the pseudo-determinant will appear in the proof part of the theorem. We denote by A* the adjoint of A and with A^+ the Moore-Penrose pseudo-inverse of A, defined by A^+ = V D^+ U* if A = U D V* is the singular value decomposition (SVD) of A, where D^+ is the diagonal matrix which has the same zero entries as D and where D^+_ii = 1/D_ii for the nonzero entries of D. Pseudo-inverses are discussed in textbooks like [42] in the context of the SVD. While the SVD is not unique, the Moore-Penrose pseudo-inverse is. Denote by p_A(x) = det(A − x) the characteristic polynomial of A. We use a sign choice used in textbooks like [42, 8] and in computer algebra systems like Mathematica. A matrix is selfadjoint if A* = A; it is normal if A A* = A* A. We also denote by Q the unit cube in R^n and by |Y|_k the k-volume of a k-dimensional parallelepiped Y. Let Λ^k A denote the k-th exterior power of A. It is a (n choose k) × (n choose k) matrix which is determined by Λ^k A (e_1 ∧ ... ∧ e_k) = (A e_1 ∧ ... ∧ A e_k) if a basis e_i in R^n is given; e_I = e_{i_1} ∧ ... ∧ e_{i_k} with I ⊂ {1, ..., n} of cardinality k is then a basis of Λ^k R^n.

Proposition 2.
1) If A, B are similar, then Det(A) = Det(B).
2) If A is invertible, then Det(A) = det(A).
3) Det(P) = 1 if P is a projection onto a positive-dimensional space.
4) If A is real and normal, then Det(A) = |A(Q)|_k with k = rank(A).
5) Det(A^T) = Det(A), and Det(A*) is the complex conjugate of Det(A).
6) Det is discontinuous on M(n, R) or M(n, C) for n ≥ 2.
7) The pseudo-inverse of a normal matrix satisfies Det(A^+) = 1/Det(A).
8) Det(A) = (−1)^k p_k, where p_A(x) = det(A − x) = p_k x^k + ... + (−1)^n x^n.

9) Det(A) = tr(Λ^k A), where k is the rank of A.
10) For normal matrices A, one has Det(A^n) = Det^n(A).
11) For A, B ∈ M(n, R), one has Det(A^T B) = Det(A B^T).
12) Selfadjoint matrices with Det(A) = 0 are the zero matrix.
13) If A* = A, then Det(A) is real.

Proof.
1) The eigenvalues of similar matrices are the same.
2) We use the definition and the fact that the classical determinant is the product of the eigenvalues.
3) If we diagonalize P, we get a matrix with only 1 or 0 in the diagonal.
4) Det(A) is basis independent and selfadjoint matrices can be diagonalized. For an orthogonal projection in particular, the pseudo-determinant is 1.
5) The eigenvalues are the same for A and A^T, and A* has the complex conjugate eigenvalues of A.
6) It is already discontinuous for n = 2, where Det(Diag(a, 1)) = a for a ≠ 0 and Det(Diag(a, 1)) = 1 for a = 0.
7) Use that the pseudo-inverse has the nonzero eigenvalues λ_j^{−1} if λ_j are the nonzero eigenvalues of A. For non-normal matrices this can be false.
8) Write p_A(x) = (−x)^k ∏_j (λ_j − x), where λ_j runs over the set of nonzero eigenvalues.
9) This follows from the previous step and the fact that tr(Λ^k A) = (−1)^k p_k, a fact which can be deduced from det(1 + A) = Σ_{j=0}^n tr(Λ^j A) (see e.g. [38] p. 322), an identity which allows to define the determinant det(1 + A) in some infinite-dimensional setups.
10) Normal matrices can be diagonalized. A nilpotent nonzero matrix satisfying A^2 = 0 shows that Det(A^2) ≠ Det(A)^2 in general.
11) A^T B and A B^T have the same nonzero eigenvalues because their characteristic polynomials differ by a factor λ^k only.
12) A nonzero selfadjoint matrix has a nonzero eigenvalue.
13) The eigenvalues are real.

Remarks.
1) We can compute pseudo-determinants almost as fast as determinants because we only need to know the characteristic polynomial. We can find Det(A) also by row reduction if we do safe row reduction steps. As explained below, we have to make sure that we do not count sign changes when swapping two parallel rows, that we do not count scalings of dependent rows, and that we do not subtract a row from a parallel row. When doing safe row reductions, we end up with a matrix which looks like a row reduced echelon matrix but in which parallel rows can appear. For such a reduced matrix, the eigenvalues can be computed fast.
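Several items of Proposition (2) are easy to confirm experimentally. Here is a minimal Mathematica sketch for items 7) and 9), with a singular symmetric test matrix of our own choosing (pDet as above; PseudoInverse and Minors are built in):

A = {{2, 2, 0}, {2, 2, 0}, {0, 0, 5}};   (* symmetric, rank 2, eigenvalues 0, 4, 5 *)
k = MatrixRank[A];
{pDet[A], Tr[Minors[A, k]]}              (* item 9): both 20 *)
{pDet[PseudoInverse[A]], 1/pDet[A]}      (* item 7): both 1/20 *)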

2) The least squares solution formula Ax = A (A^T A)^{−1} A^T y = P y features a projection matrix P with pseudo-determinant 1. We mention this because the least squares inverse is often also called a pseudo-inverse, even though it has nothing to do with the Moore-Penrose pseudo-inverse in general. The former deals with overdetermined systems Ax = y; the latter is defined here for square matrices A only.
3) If A is normal, we can define log^+ A by diagonalizing A and get log |Det(A)| = tr(log^+ |A|), where log^+ x = log x for x ≠ 0 and log^+ 0 = 0.
4) For finite simple graphs, the Laplace operator L always has a kernel. It is one-dimensional if the graph is connected. The pseudo-determinant is considered a measure for complexity because it allows to count the number of maximal spanning trees in the graph. The Laplace-Beltrami operator on forms has a large kernel in general; its dimension is the sum of the Betti numbers of the graph. Studying this matrix associated to a graph was the main motivation for us to look at Cauchy-Binet in the singular case.
5) The fact that F^T G and F G^T have the same nonzero eigenvalues is a consequence of p_A(x) = (−x)^k p_B(x) if A = F^T G and B = F G^T. For x = −1, this leads to "the most important identity in mathematics" (Deift) [43]:

det(1 + AB) = det(1 + BA)

for n × m and m × n matrices A, B. Indeed, [15] shows how rich such identities can be. Switching operators is useful to construct the spectrum of the quantum harmonic oscillator, for the numerical QR algorithm A = QR → RQ used to diagonalize a matrix, or to construct new solutions of nonlinear PDEs with so-called Bäcklund transformations.

Let's look at some pitfalls:
1) One could think that if B is invertible, then Det(AB) = Det(A) det(B). This is false. It is even false for a unitary B: take for B the elementary swapping matrix and for A a matrix with two identical columns, so that AB = A while det(B) = −1. This is the row swapping pitfall. We mention it because we actually tried to prove the statement first by repeating the textbook proof of det(AB) = det(A) det(B), in which one makes row reductions on the augmented matrix (A|B) until the matrix B is the identity matrix. As we see below, pseudo-determinants need safe row reduction operations.
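In Mathematica, with explicit matrices of our own choosing (pDet as above):

(* The swapping pitfall: A.B == A, yet det(B) = -1 changes nothing. *)
A = {{1, 1}, {1, 1}};        (* two identical columns, Det(A) = 2 *)
B = {{0, 1}, {1, 0}};        (* elementary swap, det(B) = -1 *)
{A.B == A, pDet[A.B], pDet[A] Det[B]}    (* True, 2, -2 *)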

2) Even basic multilinearity fails, also if we apply it to nonzero rows. An example like A = [1 1; 1 1] shows that if we scale a row by λ, then the pseudo-determinant gets scaled by (1 + λ)/2 and not by λ. This is just an example and not a general rule: it is difficult to say in general how scaling a linearly dependent row affects the pseudo-determinant.
3) Since for block diagonal matrices A = Diag(A_1, A_2) with square matrices A_1, A_2 one has det(A) = det(A_1) det(A_2), one could think this to be true also for Det. However, if A_1 = 0 and A_2 is invertible, then Det(Diag(A_1, A_2)) = det(A_2) ≠ 0 but Det(A_1) Det(A_2) = 0. If the A_i are both not the zero matrix, then the formula Det(A) = Det(A_1) Det(A_2) holds. It is this "not true but almost true" which makes the subject of pseudo-determinants a bit treacherous - and could be great to challenge daring students in linear algebra.

3. Examples

1) An extreme example of Theorem (1) is obtained when F, G are two column vectors in R^n, where the left hand side is the dot product F^T G. It agrees with

Det [ F_1 G_1  F_1 G_2  ...  F_1 G_n ]
    [ F_2 G_1  F_2 G_2  ...  F_2 G_n ]
    [   ...      ...    ...    ...   ]
    [ F_n G_1  F_n G_2  ...  F_n G_n ]

which can be seen as the pseudo-determinant analogue of a Gram determinant [17]; this matrix F G^T has the only nonzero eigenvalue F · G. While the characteristic polynomial of the 1 × 1 matrix a = F^T G is p_A(λ) = a − λ, the characteristic polynomial of the n × n matrix F G^T is (−λ)^{n−1} (a − λ).
2) Assume F is an n × m matrix for which every row v_j of F is a multiple a_j of a vector v, and that G is an n × m matrix for which every row w_j of G is a multiple b_j of a vector w. Then Det(F^T G) = (v · w)(a · b). This is the same as the right hand side of Cauchy-Binet. Since there is only one nonzero eigenvalue, it has to be the trace; the matrix F G^T has the entries (v · w) a_i b_j. The left hand side of Cauchy-Binet is (Σ_i v_i w_i)(Σ_j a_j b_j). The right hand side of Cauchy-Binet is Σ_{i,j} (a_j v_i)(b_j w_i). These two sums are the same. The same works also if F has rank 1 and G is arbitrary: since F^T G has rank 1, we have Det(F^T G) = tr(F^T G) = Σ_{i,j} F_ij G_ij.
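Example 1) in Mathematica, with vectors of our own choosing (pDet as above):

(* Column vectors F, G: the dot product F.G equals the pseudo-determinant
   of the rank-1 matrix with entries F_i G_j. *)
F = {1, 2, 3}; G = {2, -1, 1};
{F.G, pDet[Outer[Times, F, G]]}    (* both 3 *)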

3) Two 3 × 2 matrices F, G produce a 2 × 2 matrix F^T G and a 3 × 3 matrix F G^T; both can be singular and they always have the same pseudo-determinant, for instance Det(F^T G) = Det(F G^T) = 11 in a concrete integer example. If F, G are n × 2 matrices with column vectors a, b and c, d, then the classical Cauchy-Binet identity is

(a · c)(b · d) − (a · d)(b · c) = Σ_{1 ≤ i < j ≤ n} (a_i b_j − a_j b_i)(c_i d_j − c_j d_i),

which has the special case (a · c)^2 − |a|^2 |c|^2 = −|a × c|^2 in three dimensions for the cross product, expressing the Pythagoras theorem cos^2(α) − 1 = −sin^2(α). Assume now that one of the matrices has rank 1, for example when all entries of F are equal to a. The classical identity then reads 0 = 0 on both sides. The new Cauchy-Binet identity is not very deep either: if b, c denote the columns of G, then F^T G has the eigenvalues 0 and a (Σ_i b_i + Σ_i c_i), so that Det(F^T G) = a (Σ_i b_i + Σ_i c_i). The right hand side is Σ_{|P|=1} det(F_P) det(G_P) = Σ_i (a b_i + a c_i).
4) There are 3 × 2 matrices F, G for which rank(F) = rank(G) = rank(F^T G) = 2 while rank(F G^T) = 1, and for which F^T G and F G^T both have the pseudo-determinant 12.
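The equality of the two pseudo-determinants in such examples is Corollary (3) below; a quick random test in Mathematica (pDet as above):

(* Det(F^T G) == Det(F G^T) for random 3 x 2 integer matrices. *)
F = RandomInteger[{-2, 2}, {3, 2}];
G = RandomInteger[{-2, 2}, {3, 2}];
{pDet[Transpose[F].G], pDet[F.Transpose[G]]}    (* the two values agree *)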

Now look at the 2 × 2 square submatrices F_1, F_2, F_3 of F and G_1, G_2, G_3 of G. All the products F_1^T G_1, F_2^T G_2, F_3^T G_3 have determinant 0. The reason is that while rank(F) = rank(G) = rank(F^T G) = 2, we have rank(F G^T) = 1. We see that we have to take the sum over the products det(F_P) det(G_P) where P runs over all 1 × 1 matrices; and indeed, now the sum is 12 too. This example shows that even though F^T G and F G^T can have different rank, their pseudo-determinants are still the same. Of course, this follows already from the fact that all the nonzero eigenvalues are the same.
5) For a non-invertible 2 × 2 matrix, we have Det(A) = tr(A), because the only possibly nonzero eigenvalue is the trace. The same is true for any square matrix in which all columns are parallel.
6) Two matrices F, G of different rank can give a product F^T G whose pseudo-determinant is equal to the trace, as seen in the previous example. In a concrete case this trace is 41, and it agrees with the sum Σ_P det(F_P) det(G_P) = 6 + 35.
7) If A is a Vandermonde matrix defined by n numbers a_i, then the determinant of A is ∏_{i<j} (a_i − a_j). If the numbers are not all different, as for

A = [ 1    1    1   ]
    [ a    b    a   ]
    [ a^2  b^2  a^2 ]

we have Det(A) = (b − a)(1 − ab) if a ≠ b, and Det(A) = tr(A) = 1 + a + a^2 if a = b.

4. A Pythagorean identity

Theorem (1) implies a symmetry for the nonlinear pairing ⟨F, G⟩ = Det(F^T G):

Corollary 3 (Duality). If F, G are matrices of the same shape, then Det(F^T G) = Det(F G^T).

Proof. The characteristic polynomials of AB and BA satisfy p_AB(x) = ±x^l p_BA(x) for some integer l.

Remarks.
1) The matrices F^T G and F G^T have in general different shape. The result is also true in the square matrix case with the usual determinant, because both sides are then det(F) det(G).
2) We know that M(n, m) with the inner product tr(F* G) is a Hilbert space with the Hilbert-Schmidt product Σ_{i,j} F_ij G_ij, which is sometimes called the Frobenius product. While the pairing ⟨A, B⟩ = Det(A^T B) is not an inner product, it inspires to define ||A||^2 := Det(A^T A) ≥ 0. It is zero only if A = 0, and ⟨A, B⟩ = ⟨B, A⟩. But d(A, B) = ||A − B|| is not a metric because the triangle inequality does not hold.

Here is an obvious corollary of Theorem (1):

Corollary 4. If A is any matrix of rank k, then Det(A^T A) = Det(A A^T) = Σ_P det^2(A_P), where P runs over all k × k submatrix masks of A.

Proof. This is a special case of Theorem (1) with F = G = A, using the fact that the two matrices A^T A and A A^T have the same rank k if A has rank k.

Especially, we have a Pythagoras theorem for pseudo-determinants:

Corollary 5 (Pythagoras). For a selfadjoint matrix A of rank k,

Det^2(A) = Σ_P det^2(A_P),

where P runs over all k × k submatrix masks of A.

Proof. Use that Det^2(A) = Det(A A^T) if A is selfadjoint.

Remarks.
1) [11] uses a special case of this identity: if A is an (n − 1) × n matrix, then det(A A^T) = nS, where S is a perfect square.
2) As in the classical case, this result can be interpreted geometrically: the square of the k-volume of the parallelepiped spanned by the columns of A is related to the squares of the volumes of the projections of A Q_I onto planes spanned by Q_J, where for I = {i_1, ..., i_k} the set Q_I is the parallelepiped spanned by e_{i_1}, ..., e_{i_k} and the subsets I, J of {1, ..., n} encode a k × k submask P of A. (See [34].)
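Corollary (5) lends itself to a one-line experiment (pDet as above; the random symmetric singular matrix A = F^T F is our own test case):

(* Pythagoras for pseudo-determinants: Det(A)^2 equals the sum of the
   squares of all k x k minors of A, with k = rank(A). *)
F = RandomInteger[{-3, 3}, {2, 3}];
A = Transpose[F].F;                       (* symmetric 3 x 3 of rank at most 2 *)
k = MatrixRank[A];
pDet[A]^2 == Total[Minors[A, k]^2, 2]     (* True *)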

3) Corollary (5) is obvious for diagonal matrices. An alternative proof could be to show that the right hand side is basis independent. But this exactly needs to go through the multilinear approach which sees the right hand side as a Hilbert-Schmidt norm of a k-fermion matrix.

Examples.
1) If A = [a b; a b], then Det(A^T A) = 2a^2 + 2b^2, which agrees with the right hand side of Corollary (5).
2) If A is invertible, the Pythagoras formula is trivial and tells det(A^2) = det^2(A). If A has everywhere the entry a, then the left hand side is (na)^2 and the right hand side adds up the n^2 determinant squares a^2 of the 1 × 1 submatrices.
3) Corollary (5) can fail for non-symmetric matrices. The simplest case is A = [0 1; 0 0], for which Det(A) = 0 but Σ_P det^2(A_P) = 1. The right formula is Det(A^T A) = Σ_P det^2(A_P), as in the previous corollary. It illustrates also that Det(A^T A) ≠ Det^2(A) in general.
4) A symmetric 3 × 3 matrix A of rank 2 with characteristic polynomial 41x + 6x^2 − x^3 has the eigenvalues 0 and 3 ± 5√2. The pseudo-determinant is −41, so that Det^2(A) = 1681. Now let's look at the determinants of all nine 2 × 2 submatrices A_1, ..., A_9 of A.

Their determinant squares 256, 256, 144, 256, 256, 144, 144, 144, 81 add up to 1681.
5) A 6 × 6 matrix A can have Det(A) = 4: from the 15^2 = 225 submatrices of A of size 4 × 4 there are then 16 which have nonzero determinant, each of them of determinant 1 or −1, and the sum of the determinants squared is 16.
6) If F is a row vector, then A = F^T F is selfadjoint with rank k = 1, and the right hand side of Pythagoras is the sum of the squares of the matrix entries of A; in a concrete case Det^2(A) = 30^2 = 900 and this sum is also 900. For any row vector F, the identity reduces to the obvious identity (Σ_{i=1}^n F_i^2)^2 = Σ_{i,j=1}^n (F_i F_j)^2, which tells that the square of the Euclidean norm of F is the Hilbert-Schmidt norm of A = F^T F.

5. Proof

We have the Hilbert-Schmidt identity

p_1 = tr(F^T G) = Σ_{|P|=1} det(F_P) det(G_P) = Σ_{i,j} F_ij G_ij,

where F_P is the submatrix matched by the pattern P, where |P| = k means that P is a k × k matrix, and where

p(x) = p_m + p_{m−1} x + ... + (−1)^{m−1} p_1 x^{m−1} + (−1)^m x^m

is the characteristic polynomial of the m × m matrix F^T G. Theorem (1) for pseudo-determinants is the case when k is the minimal rank of F^T G and F G^T. It can be rephrased as

p_k = (−1)^k Σ_{|P|=k} det(F_P) det(G_P).

Having seen that the identity holds for k = 1, and experimentally when k is the minimal rank, the question is whether it can hold for general k. The answer is yes. Experiments showed this and prompted us to generalize the result. This actually simplifies the proof:

Theorem 6 (Generalized Cauchy-Binet). If F, G are arbitrary n × m matrices and k ≥ 1 is given, then

p_k = (−1)^k Σ_{|P|=k} det(F_P) det(G_P),

where the sum is over all k × k submasks P and where the p_k are the coefficients of the characteristic polynomial p(x) = p_m + p_{m−1} x + ... + (−1)^{m−1} p_1 x^{m−1} + (−1)^m x^m of F^T G.

While the matrix entries F_ij are defined for 1 ≤ i ≤ n, 1 ≤ j ≤ m, the indices of Λ^k F are given by subsets I of {1, ..., n} and subsets J of {1, ..., m}. Let us introduce more notation:

Definition 2. We write F_IJ for the matrix entry (Λ^k F)_IJ. It is a real number. Define also F_P(IJ) for the matrix with pattern P(IJ) defined by the sets I ⊂ {1, ..., n} and J ⊂ {1, ..., m}. Finally, write Tr(U) = Σ_K U_KK when summing over all subsets K; this includes the case when the U_KK are matrices.

Actually, F_IJ is a minor because of the following known lemma:

Lemma 7 (Minor). If Λ^k R^n and Λ^k R^m are equipped with the standard basis e_I = e_{i_1} ∧ ... ∧ e_{i_k} obtained from a basis e_i, then F_IJ = det(F_P(IJ)) for any sets I, J of the same cardinality |I| = |J| = k.

Proof. This is just rewriting F_IJ = ⟨e_I, F e_J⟩ using the basis e_I = e_{i_1} ∧ ... ∧ e_{i_k} and the definition of Λ^k F : Λ^k R^n → Λ^k R^m. When restricting Λ^k F to X_I → X_J, we get the determinant of F_P(IJ).

Lemma (7) implies the trace identity:

Corollary 8 (Trace). For any square matrix A, we have

tr(Λ^k A) = Σ_{|P|=k} det(A_P),

where P runs over all submatrices A_ij with i ∈ I, j ∈ I. Note that unlike in Cauchy-Binet, we sum over the symmetric minors A_P(II) only.
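Corollary (8) can be checked in Mathematica through the generating function det(1 + tA) = Σ_k tr(Λ^k A) t^k mentioned in the proof of Proposition (2); the random test matrix is our own choice:

(* Trace identity: tr(Λ^k A) is the sum of the principal k x k minors,
   i.e. the trace of Minors[A, k]; det(1 + t A) generates these traces. *)
A = RandomInteger[{-2, 2}, {4, 4}];
Table[Tr[Minors[A, k]], {k, 1, 4}] ==
  Rest[CoefficientList[Det[IdentityMatrix[4] + t A], t]]    (* True *)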

Lemma 9 (Multiplication). For an n × m matrix A and an m × n matrix B,

(AB)_IJ = Σ_K A_IK B_KJ,

where I, J ⊂ {1, ..., n} and K ⊂ {1, ..., m}.

Proof. It appears as Theorem A.2.1 in [20]. As pointed out in [21] Lemma 10.16, it rephrases the classical matrix multiplication of the two exterior products Λ^k A, Λ^k B as a composition of maps Λ^k R^n → Λ^k R^m → Λ^k R^n. For more on the exterior algebra, see [1, 18].

We can now prove Theorem (6):

Proof. We use (−1)^k p_k(A) = tr(Λ^k A) for any square matrix A, Lemma (7) and Lemma (9):

(−1)^k p_k(F^T G) = tr(Λ^k (F^T G)) = tr(Λ^k F^T Λ^k G)
                  = Σ_J Σ_I (F^T)_JI G_IJ
                  = Σ_J Σ_I F_IJ G_IJ
                  = Σ_J Σ_I det(F_P(IJ)) det(G_P(IJ))
                  = Σ_P det(F_P) det(G_P).

Theorem (6) implies:

Corollary 10 (Pythagoras). For any selfadjoint A and k ≥ 1 we have

(−1)^k p_k(A^2) = tr(Λ^k A^2) = Σ_{|P|=k} det^2(A_P),

where the sum is over all minors A_P(IJ) with |I| = k, |J| = k.

Remarks.
1) Despite the simplicity of the proof and similar looking results for minor expansions, formulas in multilinear algebra [28, 19], condensation formulas, trace ideals [41], formulas for the characteristic polynomial [31, 37, 9], pseudo-inverses, and noncommutative generalizations [40], we are not aware that even the special case of the Pythagoras formula (5) for the pseudo-determinant has appeared anywhere already. In the classical case, where A is invertible, the Pythagorean identity is also called the Lagrange identity [19].
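Lemma (9) is visible in Mathematica as a multiplicativity property of the built-in Minors (random matrices are our own choice):

(* Multiplication lemma: the k-th minor matrix of a product is the
   product of the k-th minor matrices. *)
F = RandomInteger[{-2, 2}, {4, 3}]; G = RandomInteger[{-2, 2}, {4, 3}]; k = 2;
Minors[Transpose[F].G, k] == Minors[Transpose[F], k].Minors[G, k]    (* True *)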

2) Pythagoras (10) should be compared with the trace identity given in Corollary (8). The former deals with a product of matrices and is therefore a quadratic identity. The trace identity deals with one matrix only and does not explain Pythagoras yet. For k = 1, the Cauchy-Binet formula and the trace identity reduce to the definition of matrix multiplication and the definition of the trace: tr(F^T G) = Σ_{i,j} F_ij G_ij and tr(A) = Σ_i A_ii. If k is the rank of A, then Cauchy-Binet is Theorem (1) and the trace identity is the known formula Det(A) = tr(Λ^k A), where k is the rank of A.

6. Row reduction

Theorem (1) could be approached also by simplifying both sides of the identity Det(F^T G) = Σ_P det(F_P) det(G_P), by applying row operations on F and G and using that both sides of the identity are basis independent. We will only illustrate this here. The strategy of row reduction is traditionally used in the proof of Cauchy-Binet: in the special case of the product identity det(AB) = det(A) det(B) already, one row reduces the n × 2n matrix (A|B). However, the row reduction strategy is not so easy to generalize, because each of the three row reduction steps is false as stated for pseudo-determinants!

Let's explain the difficulty. Classical row reduction of a matrix A consists of applying swap, scale or subtract operations to A to bring an n × m matrix into row reduced echelon form. Exercises in textbooks like [30, 8] ask to prove that the end result rref(A) is independent of the strategy with which these steps are applied. When applying row reduction to a nonsingular matrix until the identity matrix is obtained, then det(A) = (−1)^r / ∏_j λ_j, where r is the number of swap operations used and the λ_j are the scaling constants which were applied during the elimination process. While this is all fine for det, for Det it is simply false! Multiplying a zero row with λ or swapping two zero rows does not alter the matrix and does therefore not contribute to the (−1)^r or the scaling factors. For example, swapping the two rows of a 2 × 2 matrix whose second row is zero does not change the pseudo-determinant, nor does a multiplication of that zero row by λ. As far as the subtraction part of row reduction is concerned, there is more bad news: unlike for the determinant, subtracting a row from another row can change the pseudo-determinant. For example,

Det [ 1 1 ] = 2,   Det [ 1 1 ] = 1.
    [ 1 1 ]            [ 0 0 ]
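The last example in Mathematica (pDet as above):

(* Subtracting the first row from the second changes the pseudo-determinant. *)
{pDet[{{1, 1}, {1, 1}}], pDet[{{1, 1}, {0, 0}}]}    (* 2 and 1 *)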

These problems can be overcome, but it needs a "safe" Gauss elimination which honors a "Pauli exclusion principle": we can do row reduction as long as we do not deal with pairs of parallel vectors. This analogy leads to the use of alternating multilinear forms to get a geometric characterization of the pseudo-determinant. In this multilinear setup, a matrix A acts on k-forms f = f_1 ∧ f_2 ∧ ... ∧ f_k as A f = A f_1 ∧ A f_2 ∧ ... ∧ A f_k. The length of f = f_1 ∧ ... ∧ f_k is defined as the k-volume |f| of the parallelepiped spanned by the k vectors f_1, ..., f_k. We have |f| = √det(F), where F is the Gram matrix F_ij = f_i · f_j and f · g denotes the Euclidean dot product. Let us call a row in A "independent in A" if it is not parallel to any other nonzero row.

Lemma 11 (Safe row reduction).
a) If the matrix B is obtained by scaling an independent row of A by a factor λ ≠ 0, then Det(B) = λ Det(A).
b) If the matrix B is obtained by swapping two independent rows in A, then Det(B) = −Det(A).
c) If the matrix B is obtained by subtracting an independent row from another independent row, then Det(B) = Det(A).

Proof. a) The length |f| of the k-form f gets multiplied by λ. This can be seen by looking at the subspace X_f generated by the f_i. The matrix A_f obtained by restricting A to that space satisfies Det(A) = det(A_f), because for λ ≠ 0 the matrix A_f is invertible. We see that A f scales by the factor λ.
b) The orientation of the k-form f changes sign. Again we can look at the behavior of A_f restricted to the linear subspace X_f to see that A f changes sign.
c) This corresponds to a shear operation on the subspace X_f, for which we know that A f does not change.

This immediately goes over to the case when row reduction steps are done for F or G and where we look at the pseudo-determinant of A = F^T G:

Corollary 12 (Safe row reduction in factored form).
a) If an independent row in F^T is scaled by a factor λ ≠ 0, then Det(F^T G) is multiplied by λ.
b) If two independent rows in F^T are swapped, then Det(F^T G) changes sign.
c) If a row is subtracted from an independent row of F^T, then Det(F^T G) does not change.
The same holds for rows in G^T.
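A small Mathematica check of Lemma (11) b), with a rank-2 matrix of our own choosing whose nonzero rows are not parallel (pDet as above):

(* Swapping two independent rows flips the sign of the pseudo-determinant. *)
{pDet[{{1, 0, 0}, {0, 2, 0}, {0, 0, 0}}],
 pDet[{{0, 2, 0}, {1, 0, 0}, {0, 0, 0}}]}    (* 2 and -2 *)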

Proof. All these row operations on F^T become the same operations on A = F^T G. When writing the row reduction steps using elementary matrices E, the statement is a consequence of the associativity (E F^T) G = E (F^T G).

Let's see what happens if we append a multiple of a given row, starting with the assumption that all rows are already independent. While it is difficult to see what effect adding a multiple of a row to another row of A has, it is possible to see it if A = F^T G and such an operation is performed for F and for G.

i) Appending a parallel row. Given two n × m matrices F, G such that F^T G is nonsingular, assume A^T is the (m+1) × n matrix obtained from F^T by appending λ times the l-th row of F^T at the end, and assume that B^T is the (m+1) × n matrix obtained from G^T by appending μ times the l-th row of G^T at the end. Then both sides of the Cauchy-Binet formula are multiplied by 1 + λμ.

Proof. First show that det(A^T B) = Det(A^T B) = (1 + λμ) det(F^T G). Bring the m row vectors into row reduced echelon form, so that we end up both for F and G with matrices which are row reduced in the first m − 1 rows and for which the m-th and (m+1)-th rows are parallel. If we now reduce the last two rows, F^T G is block diagonal with the block

[ 1  μ  ]
[ λ  λμ ]

at the end, which has pseudo-determinant 1 + λμ. For every pattern P which does not involve the appended last row, we have det(F_P) = det(A_P) and det(G_P) = det(B_P). For every pattern P which involves the appended last row but not the l-th row, we have det(A_P) = λ det(F_P) and det(B_P) = μ det(G_P) for the corresponding patterns in F, G. For every pattern which involves the appended last row as well as the l-th row, we have det(A_P) = det(B_P) = 0.

ii) Given two n × m matrices F, G such that F^T G and F G^T have maximal rank, and given 1 ≤ l ≤ m, assume the row v = Σ_{j=1}^l λ_j v_j is appended to F^T and w = Σ_{j=1}^l μ_j w_j is appended to G^T, where the v_j are the rows of F^T and the w_j are the rows of G^T. Then both sides of the Cauchy-Binet formula are multiplied by 1 + Σ_{j=1}^l λ_j μ_j.

Proof. Use induction with respect to l, where l = 1 was case (i). When adding a new vector (l → l + 1), the determinant gets increased by λ_{l+1} μ_{l+1}. On the right hand side this is clear by looking at the patterns which involve the last, (m+1)-th row.
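Case i) in Mathematica, with λ = 3, μ = 2 and matrices of our own choosing (pDet as above):

(* Appending 3 times row 1 of F^T and 2 times row 1 of G^T multiplies
   Det(F^T G) by 1 + 3*2 = 7. *)
F = {{1, 0}, {0, 1}, {1, 1}};  G = {{1, 2}, {3, 0}, {1, 1}};
FT = Transpose[F]; GT = Transpose[G];
A = Transpose[Append[FT, 3 FT[[1]]]];
B = Transpose[Append[GT, 2 GT[[1]]]];
{pDet[Transpose[A].B], 7 pDet[Transpose[F].G]}    (* both -70 *)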

Remark. When appending more general rows, we do not have explicit formulas any more: if we append l rows v_i = Σ_j λ_ji v_ji to F^T and rows w_i = Σ_j μ_ji w_ji to G^T, where the v_ji are different rows of F^T and the w_ji are different rows of G^T, then no simple product formula analogous to 1 + Σ_j λ_j μ_j seems available.

7. Illustrations

1) This example hopes to illustrate how difficult it can be to predict what scaling of a row does to the pseudo-determinant of a matrix A. Take a rank-2 matrix A and scale successively the first, second or third row by a factor λ = 3, to get matrices B, C, D. In a concrete 3 × 3 example, while Det(A) = 2 we have Det(B) = 8, Det(C) = 2 and Det(D) = 4. A look at the eigenvalues σ(A), σ(B), σ(C) and σ(D), each containing 0, confirms how scaling of rows with the same factor can toss the spectrum around all over the place. In this 3 × 3 example we can visualize the situation geometrically: the pseudo-determinant can be interpreted as the area of a parallelogram in the image plane. Scaling rows deforms the triangle in different directions, and it depends on the location of the triangle how the area is scaled.
2) Take two 3 × 2 matrices F, G which both have rank 2.
a) If we append 3 times the first row of F^T to F^T and 2 times the first row of G^T to G^T, then the pseudo-determinant changes by the factor 1 + 3 · 2 = 7. Indeed, in a concrete example we have det(F^T G) = 9 and det(A^T B) = 63.
b) If we append 3 times the first row of F^T to F^T and 2 times the second row of G^T to G^T however, then the pseudo-determinant does not change, because 1 + Σ_i λ_i μ_i = 1.

Indeed, we compute det(A^T B) = 9 in that case.
3) This is an example of 4 × 2 matrices F, G, where the first matrix has rank 2 and the second matrix has rank 1.
a) The product F^T G has pseudo-determinant 40.
b) If we multiply the second row of F^T with a factor 2 to get F_1, the pseudo-determinant of F_1^T G is not simply scaled by 2.
c) If we swap two rows of F^T to get F_2, the product F_2^T G has the same pseudo-determinant 40; and if we subtract the second row of F^T from the first row of F^T to get F_3, the pseudo-determinant of F_3^T G can change.
d) Finally, let's append twice the first row of F^T to F^T and three times the first row of G^T to G^T to get A and B, so that A^T B has rank 1 and Det(A^T B) = 220.
For random 4 × 2 matrices F, G, scaling a row of F^T by λ scales the determinant of F^T G by λ, swapping two rows of F^T changes the sign of the determinant of F^T G, and subtracting a row of F^T from another row does not change the determinant. After appending λ = 2 times a row to F^T and μ = 3 times a row to G^T, we would get the pseudo-determinant scaled by the factor 1 + λμ = 7.

8. Remarks

A) The classical matrix tree theorem of Kirchhoff follows directly from Theorem (1). We have Det(L) = Det(C C^T), where C is the incidence matrix and L = D − A is the Laplacian, with A the adjacency matrix and D the diagonal degree matrix. By Cauchy-Binet, this is Σ_P det^2(C_P). The left hand side is the pseudo-determinant of the Laplacian, and the right hand side counts the number of trees with n − 1 vertices (one vertex is missing) and n − 1 edges. After completion, which means adding a vertex so that it becomes a complete tree, we get the number of spanning trees.

B) With an application of Theorem (1) we have found a matrix tree theorem which gives a combinatorial description of the pseudo-determinant of the Laplace-Beltrami operator acting on discrete differential forms of a finite simple graph. The formula essentially provides the proof if we interpret the right hand side in terms of trees. If D is the Dirac matrix of a finite simple graph, then L = D^2 is the Laplace-Beltrami operator acting on discrete differential forms. It is a v × v matrix if v is the total number of simplices in the graph. The formula implies there that Det(L) is the number of maximal complete trees in a double cover of G branched at a single vertex. Like the classical matrix tree theorem, which gives an interpretation of Det(L_0) where L_0 is the matrix Laplacian on functions, the Dirac matrix tree theorem gives an interpretation of Det(L), where L is the Laplace-Beltrami operator acting on discrete differential forms. Things are a bit more interesting in the Dirac case, because trees come with a positive or negative sign and det^2(A_P) does not correspond to a single tree only, which forces us to look at a branched double cover of the graph.

C) Pseudo-determinants are useful also in infinite dimensions, like for regularized Fredholm determinants. They make an important appearance for zeta-regularized Ray-Singer determinants of Laplacians on compact manifolds. The latter are always pseudo-determinants, because Laplacians on compact Riemannian manifolds always have a kernel. For the Dirac operator A = −i d/dx on the circle, which has the eigenvalues n with eigenfunctions e^{inx}, the Dirac zeta function is

ζ(s) = Σ_{n ≠ 0} n^{−s} = ζ_MP(s/2) + (−1)^s ζ_MP(s/2) = ζ_MP(s/2) (1 + e^{iπs}),

which is analytic in the entire complex plane, as for any odd-dimensional Riemannian manifold. The factor 1 + (−1)^s = 1 + e^{iπs} is a choice of branch which is necessary because negative eigenvalues λ appear; but this factor has naturally regularized the poles of the Minakshisundaram-Pleijel zeta function ζ_MP. The Dirac zeta function of the circle has the same roots as the classical Riemann zeta function. Since ζ'(0) = −1, the circle has the Dirac Ray-Singer determinant Det(D) = e. This must be seen as a regularized pseudo-determinant, because only the nonzero eigenvalues have made it into the zeta function.

D) For circle graphs C_n, the scalar Laplacian satisfies

Det(L) = ∏_{k=1}^{n−1} 4 sin^2(kπ/n) = n^2.

We have looked at Birkhoff sums Σ_{k=1}^{n−1} log(sin^2(πkα)) for Diophantine α in [26]. Summing over logarithms is a natural variant of summing over inverse powers. What can we say about the Dirac zeta function

(1 + e^{iπs}) Σ_{k=1}^{n−1} (2 sin(πk/n))^{−s}

of the circular graphs C_n in the limit n → ∞? Numerically, we see that the roots of this analytic function are located on a smooth curve near (but not equal to) the critical line Re(s) = 1/2.

E) Let A be the adjacency matrix of a finite simple weighted graph. This means that we assign values A_ij = A_ji to the edges of the graph. A submatrix P = P_{K,L} is obtained by restricting to a subpattern. If the square det^2(P) is called the benefit of the subpattern and Det^2(A) the benefit of the matrix, then the Pythagorean pseudo-determinant formula tells that the square Det^2(A) is the sum of the benefits of all subpatterns. This picture suggests that Det(L) is an interesting functional from a physical point of view.

F) Let's look at a probability space of symmetric n × n matrices which take values in a finite set. We can ask which matrices maximize or minimize the pseudo-determinant. Pseudo-determinants can be larger than expected: for all 2 × 2 matrices taking values in {0, 1}, the maximal determinant is 1, while the maximal pseudo-determinant is 2, obtained for the matrix where all entries are 1.
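The product formula for circle graphs in item D) can be confirmed symbolically in Mathematica:

(* Det(L) = n^2 for the cycle C_n: the product of the nonzero Laplacian
   eigenvalues 4 sin^2(k Pi/n). *)
Table[{n, FullSimplify[Product[4 Sin[k Pi/n]^2, {k, 1, n - 1}]]}, {n, 3, 6}]
(* {{3, 9}, {4, 16}, {5, 25}, {6, 36}} *)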

On a probability space (Ω, P) of matrices where the matrix entries have a continuous distribution, it does of course not matter whether we take the determinant functional or the pseudo-determinant functional, because non-invertible matrices have zero probability. But we can ask for which n × n matrices taking values in a finite set the pseudo-determinant Det(A) is maximal.

G) We can look at the statistics of the pseudo-determinant on the Erdős-Rényi probability space of all finite simple graphs G = (V, E) of order |V| = n, similarly as for the Euler characteristic or the dimension of a graph in [24]. Considering the pseudo-determinant functional on the subset of all connected graphs could be interesting. While the minimal pseudo-determinant is achieved for the complete graphs, where Det(D(K_n)) = n^{2^{n−1} − 1}, it grows linearly for linear graphs, Det(D(L_n)) = n (−1)^{n−1}, and quadratically for cycle graphs, Det(D(C_n)) = n^2 (−1)^{n−1}. The complete graph with one added spike seems to lead to the largest pseudo-determinant. We computed the pseudo-determinant for all connected graphs up to order n = 7. It suggests that a limiting distribution of the random variable X(G) = log(|Det(L(G))|) might exist on the probability space of all connected graphs G in the limit n → ∞.

H) We have stated the results over fields of characteristic zero. Since multilinear algebra can be done over any field F, Theorem (6) generalizes. Determinants over a commutative ring K can be characterized as an alternating n-linear function D on M(n, K) satisfying D(1) = 1 (see e.g. [22, 16]). As discussed, this does not apply to pseudo-determinants. Is there an elegant axiomatic description of pseudo-determinants? I asked this Fuzhen Zhang after his plenary talk in Providence, who informed me that it is not a generalized matrix function in the sense of Marcus and Minc [32], who introduced functions of the type d(A) = Σ_{σ ∈ H} χ(σ) ∏_i A_{i,σ(i)}, where H is a subgroup of the symmetric group and χ is a character on H. Indeed, the latter is continuous in A, while the pseudo-determinant is not continuous as a function on matrices.

Appendix

Here is Mathematica code to compute pseudo-determinants. Computing the eigenvalues and taking their product would not only be more costly, it would also throw us off from the ring of integers for larger matrices, because we would be forced to compute the eigenvalues numerically in general. The computation of the characteristic polynomial gives integer values for integer matrices even if there are no algebraic expressions for the eigenvalues.

f[s_, n_] := s[[1 + n]] (-1)^n;
(* the first nonzero coefficient sits right after the last zero entry;
   this assumes a singular matrix with no interior zero coefficients *)
TopNoZero[s_] := f[s, Last[Position[s, 0]][[1]]];
PDet[A_] := TopNoZero[CoefficientList[CharacteristicPolynomial[A, x], x]];

The procedure PTrace (which stands for "Pauli trace"), seen next, produces the pairing ⟨F, G⟩_k, which is the sum over all products of minors det(F_P) det(G_P), where P runs over the k × k submatrices. We then compute the list of Pauli traces and a generating function t(x), which will match the characteristic polynomial p(x) of F^T G. There is a perfect match p(x) = t(x) if we extend the definition to k = 0 and assume ⟨F, G⟩_0 = 1.

PTrace[A_, B_, k_] := Module[
  {U, V, m = Length[A], n = Length[A[[1]]], s = {}, t = {}, u, v},
  U = Partition[Flatten[Subsets[Range[m], {k, k}]], k]; u = Length[U];
  V = Partition[Flatten[Subsets[Range[n], {k, k}]], k]; v = Length[V];
  Do[s = Append[s, Table[A[[U[[i, a]], V[[j, b]]]], {a, k}, {b, k}]], {i, u}, {j, v}];
  Do[t = Append[t, Table[B[[U[[i, a]], V[[j, b]]]], {a, k}, {b, k}]], {i, u}, {j, v}];
  Sum[Det[s[[l]]] Det[t[[l]]], {l, Length[s]}]];
PTraces[A_, B_] := (-x)^m + Sum[(-x)^(m - k) PTrace[A, B, k], {k, Min[n, m]}];
n = 5; m = 7; r = 4;
RandomMatrix[n_, m_] := Table[RandomInteger[2 r] - r, {n}, {m}];
F = RandomMatrix[n, m]; G = RandomMatrix[n, m];
CharacteristicPolynomial[Transpose[F].G, x]
PTraces[F, G]

Here is the code for the experiments made to discover Theorem (6). We generate matrices F, G so that A = F^T G and B = F G^T both have zero determinant. This assures that we are in new territory which cannot be explained by the classical Cauchy-Binet theorem. Then we compute both polynomials:

R := RandomMatrix[n, m];
Shuffle := Module[{}, F = R; G = R; A = Transpose[F].G; B = F.Transpose[G]];
Shuffle; While[Abs[Det[A]] + Abs[Det[B]] > 0, Shuffle];
{CharacteristicPolynomial[Transpose[F].G, x], PTraces[F, G]}
{Det[A], Det[B], PDet[A], PDet[B]}

Here are experiments with Pythagoras. Again we shuffle things so that both matrices A = F^T F as well as B = F F^T are random selfadjoint zero-determinant matrices, so that the classical result cannot explain the identity:

n = 4; m = 7;
R := RandomMatrix[n, m];
Shuffle := Module[{}, F = R; A = Transpose[F].F; B = F.Transpose[F]];
Shuffle; While[Abs[Det[A]] + Abs[Det[B]] > 0, Shuffle];
{k, l} = Map[MatrixRank, {A, B}];
{n, m, k, l, PDet[A], PTrace[F, F, k]}

References

[1] R. Abraham, J.E. Marsden, and T. Ratiu. Manifolds, Tensor Analysis and Applications. Applied Mathematical Sciences 75. Springer Verlag, New York, second edition.
[2] M. Aigner. Combinatorial Theory. Springer Verlag.
[3] M. Aigner and G.M. Ziegler. Proofs from the Book. Springer Verlag, Berlin, second edition.
[4] R.B. Bapat. Outer inverses: Jacobi type identities and nullities of submatrices. Linear Algebra and Its Applications, 361.
[5] J. Miao and A. Ben-Israel. Minors of the Moore-Penrose inverse. Linear Algebra and Its Applications, 195.
[6] J.P.M. Binet. Mémoire sur un système de formules analytiques, et leur application à des considérations géométriques. J. de l'École Polytechnique IX, Cahier 16, page 287.
[7] N. Bourbaki. Éléments d'histoire des mathématiques. Springer.
[8] O. Bretscher. Linear algebra with applications. Prentice Hall.
[9] B. Brooks. The coefficients of the characteristic polynomial in terms of the eigenvalues and the elements of an n × n matrix. Appl. Math. Lett., 19(6).
[10] R.A. Brualdi and H. Schneider. Determinantal identities: Gauss, Schur, Cauchy, Sylvester, Kronecker, Jacobi, Binet, Laplace, Muir, and Cayley. Linear Algebra Appl., 52/53.
[11] K.A. Bush. Classroom Notes: An Interesting Result on Almost Square Matrices and the Cauchy-Binet Theorem. Amer. Math. Monthly, 69(7).
[12] A. Cauchy. Mémoire sur le nombre de valeurs qu'une fonction peut obtenir. J. de l'École Polytechnique X.
[13] A. Cayley. On the theory of determinants. Transactions of the Cambridge Philosophical Society, VIII:1-16.
[14] J.L. Coolidge. A history of geometrical methods. Oxford University Press, 1940, and Dover.
[15] P.A. Deift. Applications of a commutation formula. Duke Math. J., 45(2).
[16] J. Dieudonné. Grundzüge der modernen Analysis. Vieweg, third edition.
[17] I. Gohberg and S. Goldberg. Basic Operator Theory. Birkhäuser.
[18] W.H. Greub. Multilinear Algebra. Springer.
[19] B. Huppert and W. Willems. Lineare Algebra. Vieweg/Teubner, second edition.
[20] I. Gohberg, P. Lancaster, and L. Rodman. Invariant Subspaces of Matrices with Applications, volume 51 of Classics in Applied Mathematics. SIAM.
[21] I.R. Shafarevich and A.O. Remizov. Linear Algebra and Geometry. Springer.
[22] K. Hoffman and R. Kunze. Linear Algebra. Prentice Hall, second edition.
[23] M. Kline. Mathematical Thought from Ancient to Modern Times (Volume 2). Oxford University Press.
[24] O. Knill. The dimension and Euler characteristic of random graphs.


2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

MATH 2030: EIGENVALUES AND EIGENVECTORS

MATH 2030: EIGENVALUES AND EIGENVECTORS MATH 2030: EIGENVALUES AND EIGENVECTORS Determinants Although we are introducing determinants in the context of matrices, the theory of determinants predates matrices by at least two hundred years Their

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

A case study in Interaction cohomology Oliver Knill, 3/18/2016

A case study in Interaction cohomology Oliver Knill, 3/18/2016 A case study in Interaction cohomology Oliver Knill, 3/18/2016 Simplicial Cohomology Simplicial cohomology is defined by an exterior derivative df (x) = F (dx) on valuation forms F (x) on subgraphs x of

More information

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT Math Camp II Basic Linear Algebra Yiqing Xu MIT Aug 26, 2014 1 Solving Systems of Linear Equations 2 Vectors and Vector Spaces 3 Matrices 4 Least Squares Systems of Linear Equations Definition A linear

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

On some matrices related to a tree with attached graphs

On some matrices related to a tree with attached graphs On some matrices related to a tree with attached graphs R. B. Bapat Indian Statistical Institute New Delhi, 110016, India fax: 91-11-26856779, e-mail: rbb@isid.ac.in Abstract A tree with attached graphs

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Lecture 1 and 2: Random Spanning Trees

Lecture 1 and 2: Random Spanning Trees Recent Advances in Approximation Algorithms Spring 2015 Lecture 1 and 2: Random Spanning Trees Lecturer: Shayan Oveis Gharan March 31st Disclaimer: These notes have not been subjected to the usual scrutiny

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM

33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM 33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM (UPDATED MARCH 17, 2018) The final exam will be cumulative, with a bit more weight on more recent material. This outline covers the what we ve done since the

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

Extra Problems for Math 2050 Linear Algebra I

Extra Problems for Math 2050 Linear Algebra I Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

Chapter 2 Notes, Linear Algebra 5e Lay

Chapter 2 Notes, Linear Algebra 5e Lay Contents.1 Operations with Matrices..................................1.1 Addition and Subtraction.............................1. Multiplication by a scalar............................ 3.1.3 Multiplication

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45 address 12 adjoint matrix 118 alternating 112 alternating 203 angle 159 angle 33 angle 60 area 120 associative 180 augmented matrix 11 axes 5 Axiom of Choice 153 basis 178 basis 210 basis 74 basis test

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

The spectral zeta function

The spectral zeta function The spectral zeta function Bernd Ammann June 4, 215 Abstract In this talk we introduce spectral zeta functions. The spectral zeta function of the Laplace-Beltrami operator was already introduced by Minakshisundaram

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

3 Matrix Algebra. 3.1 Operations on matrices

3 Matrix Algebra. 3.1 Operations on matrices 3 Matrix Algebra A matrix is a rectangular array of numbers; it is of size m n if it has m rows and n columns. A 1 n matrix is a row vector; an m 1 matrix is a column vector. For example: 1 5 3 5 3 5 8

More information

Notes on Linear Algebra and Matrix Theory

Notes on Linear Algebra and Matrix Theory Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a

More information

ANSWERS. E k E 2 E 1 A = B

ANSWERS. E k E 2 E 1 A = B MATH 7- Final Exam Spring ANSWERS Essay Questions points Define an Elementary Matrix Display the fundamental matrix multiply equation which summarizes a sequence of swap, combination and multiply operations,

More information

Calculating determinants for larger matrices

Calculating determinants for larger matrices Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

More information

Definition 2.3. We define addition and multiplication of matrices as follows.

Definition 2.3. We define addition and multiplication of matrices as follows. 14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

More information

Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices

Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices 1 Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices Note. In this section, we define the product

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra 1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

The Matrix-Tree Theorem

The Matrix-Tree Theorem The Matrix-Tree Theorem Christopher Eur March 22, 2015 Abstract: We give a brief introduction to graph theory in light of linear algebra. Our results culminates in the proof of Matrix-Tree Theorem. 1 Preliminaries

More information

Digital Workbook for GRA 6035 Mathematics

Digital Workbook for GRA 6035 Mathematics Eivind Eriksen Digital Workbook for GRA 6035 Mathematics November 10, 2014 BI Norwegian Business School Contents Part I Lectures in GRA6035 Mathematics 1 Linear Systems and Gaussian Elimination........................

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to 1.1. Introduction Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

Determinants An Introduction

Determinants An Introduction Determinants An Introduction Professor Je rey Stuart Department of Mathematics Paci c Lutheran University Tacoma, WA 9844 USA je rey.stuart@plu.edu The determinant is a useful function that takes a square

More information

Online Exercises for Linear Algebra XM511

Online Exercises for Linear Algebra XM511 This document lists the online exercises for XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Lecture 02 ( 1.1) Online Exercises for Linear Algebra XM511 1) The matrix [3 2

More information

Math 320, spring 2011 before the first midterm

Math 320, spring 2011 before the first midterm Math 320, spring 2011 before the first midterm Typical Exam Problems 1 Consider the linear system of equations 2x 1 + 3x 2 2x 3 + x 4 = y 1 x 1 + 3x 2 2x 3 + 2x 4 = y 2 x 1 + 2x 3 x 4 = y 3 where x 1,,

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

MATHEMATICAL STRUCTURES IN GRAPH THEORY II. Oliver Knill Harvard University

MATHEMATICAL STRUCTURES IN GRAPH THEORY II. Oliver Knill Harvard University MATHEMATICAL STRUCTURES IN GRAPH THEORY II Oliver Knill Harvard University JAN 16 2014 These 67 slides were not delivered on the 16th and! are still in a rough state or partly recycled from a previous

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

Inner product spaces. Layers of structure:

Inner product spaces. Layers of structure: Inner product spaces Layers of structure: vector space normed linear space inner product space The abstract definition of an inner product, which we will see very shortly, is simple (and by itself is pretty

More information

MATH 2030: MATRICES. Example 0.2. Q:Define A 1 =, A. 3 4 A: We wish to find c 1, c 2, and c 3 such that. c 1 + c c

MATH 2030: MATRICES. Example 0.2. Q:Define A 1 =, A. 3 4 A: We wish to find c 1, c 2, and c 3 such that. c 1 + c c MATH 2030: MATRICES Matrix Algebra As with vectors, we may use the algebra of matrices to simplify calculations. However, matrices have operations that vectors do not possess, and so it will be of interest

More information

4. Determinants.

4. Determinants. 4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.

More information

MAT 610: Numerical Linear Algebra. James V. Lambers

MAT 610: Numerical Linear Algebra. James V. Lambers MAT 610: Numerical Linear Algebra James V Lambers January 16, 2017 2 Contents 1 Matrix Multiplication Problems 7 11 Introduction 7 111 Systems of Linear Equations 7 112 The Eigenvalue Problem 8 12 Basic

More information

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column

More information

Chapter 2: Matrices and Linear Systems

Chapter 2: Matrices and Linear Systems Chapter 2: Matrices and Linear Systems Paul Pearson Outline Matrices Linear systems Row operations Inverses Determinants Matrices Definition An m n matrix A = (a ij ) is a rectangular array of real numbers

More information

Math 313 (Linear Algebra) Exam 2 - Practice Exam

Math 313 (Linear Algebra) Exam 2 - Practice Exam Name: Student ID: Section: Instructor: Math 313 (Linear Algebra) Exam 2 - Practice Exam Instructions: For questions which require a written answer, show all your work. Full credit will be given only if

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax = . (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW SPENCER BECKER-KAHN Basic Definitions Domain and Codomain. Let f : X Y be any function. This notation means that X is the domain of f and Y is the codomain of f. This means that for

More information

Lecture 2: Linear operators

Lecture 2: Linear operators Lecture 2: Linear operators Rajat Mittal IIT Kanpur The mathematical formulation of Quantum computing requires vector spaces and linear operators So, we need to be comfortable with linear algebra to study

More information

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem.

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Dot Products K. Behrend April 3, 008 Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Contents The dot product 3. Length of a vector........................

More information

MATH 106 LINEAR ALGEBRA LECTURE NOTES

MATH 106 LINEAR ALGEBRA LECTURE NOTES MATH 6 LINEAR ALGEBRA LECTURE NOTES FALL - These Lecture Notes are not in a final form being still subject of improvement Contents Systems of linear equations and matrices 5 Introduction to systems of

More information

MATH 369 Linear Algebra

MATH 369 Linear Algebra Assignment # Problem # A father and his two sons are together 00 years old. The father is twice as old as his older son and 30 years older than his younger son. How old is each person? Problem # 2 Determine

More information

Clifford Algebras and Spin Groups

Clifford Algebras and Spin Groups Clifford Algebras and Spin Groups Math G4344, Spring 2012 We ll now turn from the general theory to examine a specific class class of groups: the orthogonal groups. Recall that O(n, R) is the group of

More information