Computing homology groups:


Computing simplicial homology groups for a finite $\Delta$-complex is, in the end, a matter of straightforward linear algebra. First, the punchline! Start with the chain complex
$$C_{n+1}(X) \xrightarrow{\ \partial_{n+1}\ } C_n(X) \xrightarrow{\ \partial_n\ } C_{n-1}(X)$$
for the simplicial homology of a finite $\Delta$-complex $X$, with each boundary map $\partial_n$ written as a matrix in the standard bases given by the $(n+1)$- and $n$-simplices. We can, by row and column operations over $\mathbb{Z}$ (that is, we add integer multiples of a row or column to another, permute rows or columns, and multiply a row or column by $-1$), put the matrix $\partial_n = (d_{ij})$ into Smith normal form $A_n = (a_{ij})$, which means that its entries are $0$ except along the diagonal, $a_{ii} = b_i$, and further $b_i \mid b_{i+1}$ for all $i$ (where everything is assumed to divide $0$, and $0$ divides only $0$). We sketch the proof below. Then if we let $M_n$ be the number of zero columns of $A_n$, let $m_n$ be the number of non-zero rows of $A_{n+1}$ [note the change of subscript!], and let $b_1, \dots, b_k$ be the non-zero diagonal entries of $A_{n+1}$ [note that $m_n = k$], then
$$H_n(X) \cong \mathbb{Z}^{M_n - m_n} \oplus \mathbb{Z}_{b_1} \oplus \cdots \oplus \mathbb{Z}_{b_k}.$$
The point is that the matrices $A_n$ are representatives of the boundary maps $\partial_n$, with respect to some very carefully chosen (and compatible) bases for the $C_n(X)$. Essentially, $Z_n(X) \cong \mathbb{Z}^{M_n}$ with basis the last $M_n$ elements of our basis for $C_n(X)$, and $B_n(X) = b_1\mathbb{Z} \oplus \cdots \oplus b_k\mathbb{Z}$, mapping into the last $k$ coordinates.
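As a sanity check on the punchline, here is a short Python sketch (my own illustration, not part of the notes; the names `smith_normal_form` and `homology` are invented for it). It puts integer boundary matrices into Smith normal form by row and column operations over $\mathbb{Z}$ and reads off $H_n$ from the formula above, applied to the usual $\Delta$-complex structure on the Klein bottle: one vertex, three edges $a, b, c$, and two triangles with boundaries $a + b - c$ and $a - b + c$.

```python
def smith_normal_form(mat):
    """Smith normal form of an integer matrix (list of rows) via row/column
    operations over Z; diagonal entries divide their successors."""
    A = [row[:] for row in mat]
    m = len(A)
    n = len(A[0]) if m else 0
    t = 0
    while t < min(m, n):
        # choose a non-zero entry of smallest absolute value as the pivot
        entries = [(abs(A[i][j]), i, j)
                   for i in range(t, m) for j in range(t, n) if A[i][j]]
        if not entries:
            break
        _, i, j = min(entries)
        A[t], A[i] = A[i], A[t]
        for row in A:
            row[t], row[j] = row[j], row[t]
        if A[t][t] < 0:                        # normalize the pivot's sign
            A[t] = [-a for a in A[t]]
        # clear column t and row t; a non-zero remainder means a strictly
        # smaller entry appeared, so we redo the step with a smaller pivot
        dirty = False
        for r in range(t + 1, m):
            q = A[r][t] // A[t][t]
            A[r] = [a - q * b for a, b in zip(A[r], A[t])]
            dirty = dirty or A[r][t] != 0
        for c in range(t + 1, n):
            q = A[t][c] // A[t][t]
            for r in range(t, m):
                A[r][c] -= q * A[r][t]
            dirty = dirty or A[t][c] != 0
        if dirty:
            continue
        # enforce divisibility: fold a bad row into row t and start over
        bad = next((r for r in range(t + 1, m) for c in range(t + 1, n)
                    if A[r][c] % A[t][t] != 0), None)
        if bad is not None:
            A[t] = [a + b for a, b in zip(A[t], A[bad])]
            continue
        t += 1
    return A

def homology(d_n, d_np1):
    """H_n as (betti_number, torsion_coefficients), from matrices for the
    boundary maps d_n and d_{n+1} (rows = target basis, columns = source)."""
    A_n = smith_normal_form(d_n)
    A_np1 = smith_normal_form(d_np1)
    M_n = sum(1 for j in range(len(d_n[0]))
              if all(row[j] == 0 for row in A_n))   # rank of Z_n(X)
    rows, cols = len(A_np1), len(A_np1[0]) if A_np1 else 0
    b = [A_np1[i][i] for i in range(min(rows, cols)) if A_np1[i][i] != 0]
    return M_n - len(b), [x for x in b if x > 1]

# Klein bottle: d1 = 0 (every edge runs from the vertex to itself),
# d2 has columns a+b-c and a-b+c, and C_3 = 0.
d0 = [[0]]                        # formal zero map out of C_0
d1 = [[0, 0, 0]]
d2 = [[1, 1], [1, -1], [-1, 1]]   # rows a, b, c; columns: the two triangles
d3 = [[], [], []]                 # C_3 = 0

print(homology(d0, d1))   # H_0 = Z        -> (1, [])
print(homology(d1, d2))   # H_1 = Z + Z/2  -> (1, [2])
print(homology(d2, d3))   # H_2 = 0        -> (0, [])
```

Here the Smith normal form of $\partial_2$ is $\operatorname{diag}(1, 2)$ padded with a zero row, so the formula gives $H_1 \cong \mathbb{Z}^{3-2} \oplus \mathbb{Z}_1 \oplus \mathbb{Z}_2 \cong \mathbb{Z} \oplus \mathbb{Z}_2$, as expected for the Klein bottle.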

Specifically, we show that we can decompose each $C_n(X) = U_n \oplus V_n \oplus W_n$, where $Z_n(X) = \{0\} \oplus V_n \oplus W_n$ (so $\partial_n$ is injective on $U_n$), and $\partial_n(U_n) \subseteq W_{n-1}$ (with finite index). Furthermore, we may choose bases so that the matrix of $\partial_{n+1}\colon U_{n+1} \to W_n$ is the diagonal matrix $\operatorname{diag}(b_1, \dots, b_k)$. Note that, choosing bases for the $V_n$ to fill out a basis for $C_n(X)$ (using the bases for $U_n$ and $W_n$ implied by the above statement), the matrix for $\partial_{n+1}$ has block form consisting of all $0$'s, except for the $U_{n+1}$-to-$W_n$ part, which is the diagonal matrix. (This is a slightly permuted version of Smith normal form.) Then
$$B_n(X) = \operatorname{im}(\partial_{n+1}) = b_1\mathbb{Z} \oplus \cdots \oplus b_k\mathbb{Z} \subseteq \{0\} \oplus W_n \subseteq V_n \oplus W_n,$$
so
$$H_n(X) = Z_n(X)/B_n(X) = (V_n \oplus W_n)/(\{0\} \oplus b_1\mathbb{Z} \oplus \cdots \oplus b_k\mathbb{Z}) \cong V_n \oplus \mathbb{Z}_{b_1} \oplus \cdots \oplus \mathbb{Z}_{b_k}.$$
The dimension of $V_n$ can be determined from the normal forms of $\partial_{n+1}$ and $\partial_n$ as
$$\dim(V_n) = \dim(V_n \oplus W_n) - \dim(W_n) = \dim(Z_n(X)) - \dim(U_{n+1}),$$
which is the number of non-pivot columns of $\partial_n$ minus the number of pivot rows of $\partial_{n+1}$, i.e., the number of zero columns of $A_n$ minus the number of non-zero rows of $A_{n+1}$, as desired.

It remains only to show that the matrices $\partial_n$ can be put into Smith normal form, and that this implies the existence of the decompositions of $C_n(X)$ as described. The basic idea is that row and column operations on a matrix can really be interpreted as choosing different (ordered) bases for the domain or codomain used to describe a linear transformation $L\colon V \to W$. Working with integer matrices and operations over $\mathbb{Z}$ means that we can work with bases of $\mathbb{Z}^n$. Since $Lu_i = \sum_j a_{ij} w_j$ for bases $\{u_i\}$ and $\{w_j\}$, interchanging rows or columns corresponds to interchanging basis elements in $V$ or $W$, respectively (to make the actual vectors described by the summation remain the same). Multiplication of a row or column by $-1$ multiplies the corresponding basis element by $-1$. Adding a multiple $m$ of one row, $k$, to another, $l$, amounts to replacing the basis vector $u_l$ with $u_l + m u_k$ (since the new row describes, in terms of the $w_j$, the image of that vector in $V$); and adding a multiple $m$ of one column, $k$, to another, $l$, is reflected in the bases as
$$a_{i1}w_1 + \cdots + a_{im}w_m = a_{i1}w_1 + \cdots + a_{ik}(w_k - m w_l) + \cdots + (a_{il} + m a_{ik})w_l + \cdots + a_{im}w_m,$$
so it can be interpreted as a change of basis in the codomain (replace $w_k$ with $w_k - m w_l$).

Using these operations to arrive at Smith normal form is a fairly straightforward matter of induction. Given an integer matrix $A = (a_{ij})$, let $N(A)$ be the minimum, among all non-zero entries of $A$, of their absolute values. If an entry $a_{ij}$ with $|a_{ij}| = N(A)$ does not divide some entry $a_{kl} = b$ of $A$, we can, by row and column operations, decrease $N(A)$. If $i = k$ or $j = l$, then adding the right multiple of the column or row containing $a_{ij}$ to the one containing $b$ will put the remainder of $b$ upon division by $a_{ij}$ in $b$'s place, lowering $N(A)$. Otherwise, we may assume that both $a_{il} = \alpha a_{ij}$ and $a_{kj} = \beta a_{ij}$ (else one of the previous cases applies), and then adding $(1 - \beta)$ times the $i$-th row to the $k$-th yields a row with $a_{ij}$ in the $j$-th spot and $a_{kl} + (1 - \beta)\alpha\, a_{ij}$ in the $l$-th, which $a_{ij}$ still doesn't divide. Then we apply the previous operation to reduce $N(A)$. Since this cannot continue indefinitely [$N(A) \in \mathbb{Z}^+$], eventually an entry $a_{ij}$ with $|a_{ij}| = N(A)$ does divide every entry of $A$. By swapping rows and columns, we may assume that $N(A) = |a_{11}|$, and then by row and column operations we can zero out every other entry in the first row and column. Striking out this row and column, we have a minor matrix $B$ satisfying $N(A) \mid N(B)$. Note that this will then remain true under any subsequent row or column operation on $B$. By induction, there are row and column operations for $B$ (which we interpret as operations on $A$ not using the first row or column)

putting $B$ into Smith normal form; essentially, we just continually use the process above on smaller and smaller matrices. Since $N(A) \mid N(B)$ at the start, this remains true throughout the row and column operations, so each diagonal entry divides the ones that come later. All other entries are eventually zeroed out.

After we have put our matrices representing the boundary maps $\partial_n$ into Smith normal form, this provides us with bases $\{u^n_i\}$, $\{w^{n-1}_j\}$ for $C_n(X)$ and $C_{n-1}(X)$ so that $\partial_n u^n_i = b_{i,n} w^{n-1}_i$ for $i \le k_n$ and $\partial_n u^n_i = 0$ for $i > k_n$. In order to make these bases compatible, as we desire, we need to use the fact that $\partial_n \partial_{n+1} = 0$. We may assume that $b_{i,n} \ne 0$ for all $i \le k_n$ (otherwise we just shift $k_n$). The point is that $\partial_{n+1} u^{n+1}_i = b_{i,n+1} w^n_i$ implies that $0 = \partial_n \partial_{n+1} u^{n+1}_i = b_{i,n+1}\, \partial_n w^n_i$, so $\partial_n w^n_i = 0$ for every $i \le k_{n+1}$ (since $b_{i,n+1} \ne 0$). Also, $\partial_n$ is injective on the span of $u^n_1, \dots, u^n_{k_n}$, since their images, $b_{i,n} w^{n-1}_i$, are linearly independent (as non-zero multiples of distinct basis elements). This also means that they span a subspace complementary to $Z_n(X)$. We therefore wish to choose $\{u^n_i : 1 \le i \le k_n\}$ as our basis for $U_n$ and $\{w^n_j : 1 \le j \le k_{n+1}\}$ as our basis for $W_n$, and show that this can be extended to a basis for $C_n(X)$ by choosing vectors lying in $Z_n(X)$; the added vectors will be a basis for $V_n$. Once we do this, we know that in these bases our boundary maps will have the form we desire, since the added vectors $v_k$ map to $0$ under $\partial_n$, as do the $w^n_j$ we've kept, and $\partial_n u^n_i = b_{i,n} w^{n-1}_i$. For notational convenience in what follows, we will denote the vectors $u_i$, $w_j$ that we have elected not to keep for our basis by $u'_i$, $w'_j$.
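The reduction argument above can be sketched directly in code (my own illustration; the helper name `make_min_divide_all` is made up). The loop locates a minimal non-zero entry, realizing $N(A)$, and applies exactly the cases from the proof; whenever a non-zero remainder appears, $N(A)$ strictly decreases, so the loop terminates with the minimal entry dividing every entry of the matrix:

```python
def make_min_divide_all(mat):
    """Row/column operations over Z until a minimal-magnitude entry divides
    every entry of the matrix; returns (new_matrix, that_entry)."""
    A = [row[:] for row in mat]
    m, n = len(A), len(A[0])
    while True:
        # locate N(A): a non-zero entry of smallest absolute value
        nonzero = [(abs(a), r, c) for r, row in enumerate(A)
                   for c, a in enumerate(row) if a]
        if not nonzero:
            return A, 0                      # zero matrix: nothing to divide
        _, i, j = min(nonzero)
        p = A[i][j]
        # case 1: a bad entry in the pivot's row -- one column operation
        c = next((c for c in range(n) if A[i][c] % p), None)
        if c is not None:
            q = A[i][c] // p
            for r in range(m):
                A[r][c] -= q * A[r][j]       # leaves 0 < |remainder| < |p|
            continue
        # case 2: a bad entry in the pivot's column -- one row operation
        r = next((r for r in range(m) if A[r][j] % p), None)
        if r is not None:
            q = A[r][j] // p
            A[r] = [a - q * b for a, b in zip(A[r], A[i])]
            continue
        # case 3: a bad entry elsewhere, at (k, l); by the checks above, the
        # pivot's whole row i and column j are already divisible by p
        bad = next(((k, l) for k in range(m) for l in range(n)
                    if A[k][l] % p), None)
        if bad is None:
            return A, p                      # p divides every entry: done
        k, l = bad
        q = 1 - A[k][j] // p                 # exact division by the check above
        A[k] = [a + q * b for a, b in zip(A[k], A[i])]   # now A[k][j] == p
        q = A[k][l] // A[k][j]
        for r in range(m):
            A[r][l] -= q * A[r][j]           # a remainder < |p| appears at (k, l)

# diag(2, 3): the minimum 2 does not divide 3, and case 3 reduces N(A) to 1
B, g = make_min_divide_all([[2, 0], [0, 3]])
print(abs(g))   # 1
```

Since these operations preserve the gcd of all the entries, the final minimal entry is, up to sign, that gcd; it becomes $b_1$ after being swapped into the $(1,1)$ position.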

To finish building our basis for $C_n(X)$ we essentially need to show that the span $W_n$ of the $w_j$'s forms a direct summand of $Z_n(X) = \ker(\partial_n)$; that is, that the $w_j$ form part of some basis for $Z_n(X)$. To show that $W_n$ is a summand, we show that $Q_n = Z_n(X)/W_n$ is a free abelian group; choosing a basis $\{v_k + W_n\}$ for $Q_n$, it then follows that $\{v_k\} \cup \{w_j\}$ is a basis for $Z_n(X)$: given $z \in Z_n(X)$, $z + W_n$ can be expressed as a combination of the $v_k + W_n$; $z$ minus that combination therefore lies in $W_n$, so is a combination of the $w_j$. That shows that they span. Writing $0$ as a linear combination and then projecting to $Q_n$ shows that the coefficients of the $v_k$ are $0$ (since the $v_k + W_n$ are linearly independent in $Q_n$), and so the coefficients of the $w_j$ are zero (since the $w_j$ are linearly independent in $W_n$), proving linear independence.

And to show that $Q_n = Z_n(X)/W_n$ is a free abelian group, since it is finitely generated, it suffices to show that it contains no torsion elements. That is, if $z \in Z_n(X)$ and $cz \in W_n$ for some $c \ne 0$, then $z \in W_n$. But this is immediate, really; expressing $z = \sum c_i w_i + \sum c'_i w'_i$ (which is unique, since the $w_i$, $w'_i$ together form a basis for $C_n(X)$), then $cz = \sum c c_i w_i + \sum c c'_i w'_i \in W_n$ implies that the $c c'_i = 0$ (since this expression is still unique and the $w_i$ form a basis for $W_n$), so the $c'_i = 0$ and $z = \sum c_i w_i \in W_n$, as desired.

The fact that the $u_i$ and $Z_n(X)$ (hence the $u_i$ and the basis $\{v_k, w_j\}$ for $Z_n(X)$) span $C_n(X)$ follows from the fact that the $u'_i$ all lie in $Z_n(X)$. This implies that $C_n(X) = \operatorname{span}\{u_i\} \oplus \operatorname{span}\{v_k\} \oplus \operatorname{span}\{w_j\}$, finishing our proof.
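Assembling the pieces, the bases just constructed give the promised matrix shape. With respect to the ordered bases $U_{n+1}, V_{n+1}, W_{n+1}$ of $C_{n+1}(X)$ and $U_n, V_n, W_n$ of $C_n(X)$, the boundary map has the block form noted earlier (a permuted Smith normal form):

```latex
\partial_{n+1}\colon U_{n+1}\oplus V_{n+1}\oplus W_{n+1}\longrightarrow U_n\oplus V_n\oplus W_n,
\qquad
[\partial_{n+1}] =
\begin{pmatrix}
0 & 0 & 0\\
0 & 0 & 0\\
D & 0 & 0
\end{pmatrix},
\quad
D = \operatorname{diag}(b_1,\dots,b_k),
```

with only the $U_{n+1}$-to-$W_n$ block non-zero, so that $\operatorname{im}(\partial_{n+1}) = b_1\mathbb{Z}\oplus\cdots\oplus b_k\mathbb{Z} \subseteq W_n$ and $\ker(\partial_{n+1}) = V_{n+1}\oplus W_{n+1}$, exactly as used in the computation of $H_n(X)$ at the start.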