Generalized Dedekind Bernoulli Sums


Generalized Dedekind Bernoulli Sums

A thesis presented to the faculty of San Francisco State University in partial fulfillment of the requirements for the degree Master of Arts in Mathematics

by Anastasia Chavez
San Francisco, California
June 2010

Copyright by Anastasia Chavez 2010

CERTIFICATION OF APPROVAL

I certify that I have read Generalized Dedekind Bernoulli Sums by Anastasia Chavez and that in my opinion this work meets the criteria for approving a thesis submitted in partial fulfillment of the requirements for the degree: Master of Arts in Mathematics at San Francisco State University.

Dr. Matthias Beck, Professor of Mathematics
Dr. Federico Ardila, Professor of Mathematics
Dr. David Ellis, Professor of Mathematics

Generalized Dedekind Bernoulli Sums
Anastasia Chavez
San Francisco State University
2010

The finite arithmetic sum called a Dedekind sum appears in many areas of mathematics, such as topology, geometric combinatorics, algorithmic complexity, and algebraic geometry. Dedekind sums exhibit many beautiful properties, the most famous being Dedekind's reciprocity law. Since Dedekind, Apostol, Rademacher, and others have generalized Dedekind sums to involve Bernoulli polynomials. In 1995, Hall, Wilson and Zagier introduced a 3-variable Dedekind-like sum and proved a reciprocity relation. In this paper we introduce an n-variable generalization of Hall, Wilson and Zagier's sum, called a multivariable Dedekind Bernoulli sum, and prove a reciprocity law that generalizes the generic case of Hall, Wilson and Zagier's reciprocity theorem. Our proof uses a novel, combinatorial approach that simplifies the proof of Hall, Wilson and Zagier's reciprocity theorem and aids in proving the general 4-variable extension of their reciprocity theorem.

I certify that the Abstract is a correct representation of the content of this thesis.

Chair, Thesis Committee    Date

ACKNOWLEDGMENTS

I would like to thank my thesis advisor, collaborator and mentor, Dr. Matthias Beck, for his support both academically and in life matters; professor and mentor Dr. David Ellis, thank you for urging me to continue this mathematical path; many thanks to the talented and dedicated professors and staff of the San Francisco State University Mathematics Department for their support through the years; to my parents, thank you for everything that was intentionally and unintentionally done, it was and is exactly what I need to be me; to my husband and best friend, Davi, your unconditional support and love helps me be the best version of spirit that I can express, my endless gratitude to you resides in every breath I take; and last, to the angels who bless my life in the most perfect ways, Ayla and Asha, thank you for your medicine and for being the profound teachers that you are.

TABLE OF CONTENTS

1 Introduction
2 More History, Definitions and Notations
2.1 Bernoulli Polynomials and More
2.2 Dedekind and Bernoulli meet
3 The Multivariate Dedekind Bernoulli sum
4 A New Proof of Hall, Wilson and Zagier's Reciprocity Theorem
5 The 4-variable reciprocity theorem
Bibliography

Chapter 1

Introduction

In the 1880s, Richard Dedekind developed the finite sum that today is called the Dedekind sum [12].

Definition 1.1 (Classical Dedekind Sum). For any positive integers $a$ and $b$, the classical Dedekind sum $s(a,b)$ is defined to be
\[
s(a,b) = \sum_{h \bmod b} \bar{B}_1\!\left(\frac{h}{b}\right) \bar{B}_1\!\left(\frac{ha}{b}\right), \tag{1.1}
\]
where
\[
\bar{B}_1(x) = \begin{cases} \{x\} - \frac{1}{2} & \text{if } x \notin \mathbb{Z}, \\ 0 & \text{if } x \in \mathbb{Z}, \end{cases} \tag{1.2}
\]

and we call $\{x\} = x - \lfloor x \rfloor$ the fractional part of $x$. The sum $\sum_{h \bmod b}$ is interpreted as the sum over a complete residue system modulo $b$, where $h \in \mathbb{Z}$.

Dedekind arrived at the Dedekind sums while studying the $\eta$-function. Although Dedekind sums are studied in number theory, they have appeared in many areas of mathematics, such as analytic number theory [6], combinatorial geometry [4], topology [9], and algorithmic complexity [8]. Since Dedekind sums have no closed form, evaluating them directly can become very time intensive. Thankfully, Dedekind discovered an invaluable tool, called reciprocity, that simplifies Dedekind sum calculations immensely. Although Dedekind proved the following reciprocity relation by a transcendental method, Rademacher recognized its elementary nature and developed multiple proofs of Dedekind's reciprocity theorem [12] addressing its arithmetic significance.

Theorem 1.1 (Dedekind Reciprocity [13]). For any positive integers $a$ and $b$ with $(a,b) = 1$,
\[
s(a,b) + s(b,a) = \frac{1}{12}\left(\frac{a}{b} + \frac{b}{a} + \frac{1}{ab}\right) - \frac{1}{4}. \tag{1.3}
\]

Since Dedekind, many mathematicians, such as Rademacher [11], Carlitz [5], and Apostol [1], have introduced Dedekind-like sums that generalize $\bar{B}_1(u)$ to periodized Bernoulli polynomials $\bar{B}_k(u)$ (see (2.4)). Following Dedekind, reciprocity relations have been proven for the respective Dedekind-like sums.
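Both the sum (1.1) and the reciprocity law (1.3) are directly computable. The following Python sketch (the helper names `sawtooth`, `dedekind_sum`, and `reciprocity_rhs` are ours, not from the thesis) evaluates $s(a,b)$ with exact rational arithmetic, so Theorem 1.1 can be confirmed on small inputs:

```python
from fractions import Fraction

def sawtooth(x):
    # Periodized first Bernoulli function (1.2): {x} - 1/2 for x not in Z, else 0
    x = Fraction(x)
    t = x - (x.numerator // x.denominator)   # fractional part {x}
    return Fraction(0) if t == 0 else t - Fraction(1, 2)

def dedekind_sum(a, b):
    # s(a, b) = sum over h mod b of B1bar(h/b) * B1bar(ha/b), definition (1.1)
    return sum(sawtooth(Fraction(h, b)) * sawtooth(Fraction(h * a, b))
               for h in range(b))

def reciprocity_rhs(a, b):
    # Right-hand side of Dedekind reciprocity (1.3)
    a, b = Fraction(a), Fraction(b)
    return (a / b + b / a + 1 / (a * b)) / 12 - Fraction(1, 4)
```

For instance, $s(1,3) = \tfrac{1}{18}$, and for coprime pairs such as $(5,7)$ the sum $s(5,7) + s(7,5)$ matches the right-hand side of (1.3) exactly.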

In 1995, Hall, Wilson and Zagier [7] introduced a generalization of Dedekind sums (see Definition 2.4) that involves three variables and introduces shifts. They proved a reciprocity theorem that is a wonderful example of how these relations express the symmetry inherent in Dedekind-like sums and explain why they can be evaluated so swiftly. More recently, Bayad and Raouj [3] introduced a generalization of Hall, Wilson and Zagier's sum and proved a reciprocity theorem that contains the non-generic case of Hall, Wilson and Zagier's reciprocity theorem as a special case. In this paper we introduce a different generalization of Hall, Wilson and Zagier's sum, called the multivariate Dedekind Bernoulli sum (see Definition 3.1), and prove our main result, a reciprocity theorem (see Theorem 3.1). Along with our main result, we show that the generic case of Hall, Wilson and Zagier's reciprocity theorem follows from the proof of our reciprocity theorem, and that one can say more about the 4-variable version of Hall, Wilson and Zagier's sum than is covered by the reciprocity theorem given by Bayad and Raouj, as well as our own.

Chapter 2

More History, Definitions and Notations

We now begin working towards rediscovering reciprocity theorems for Dedekind-like sums.

2.1 Bernoulli Polynomials and More

Bernoulli polynomials and numbers are a pervasive bunch, showing up in many areas of mathematics such as number theory and probability [2].

Definition 2.1 (Bernoulli functions). For $u \in \mathbb{R}$, the Bernoulli function $B_k(u)$ is defined through the generating function
\[
\frac{z e^{uz}}{e^z - 1} = \sum_{k \ge 0} B_k(u) \frac{z^k}{k!}. \tag{2.1}
\]

Definition 2.2 (Bernoulli numbers). The Bernoulli numbers are $B_k := B_k(0)$ and have the generating function
\[
\frac{z}{e^z - 1} = \sum_{k \ge 0} \frac{B_k}{k!} z^k. \tag{2.2}
\]

It has been proven that Bernoulli functions are in fact polynomials [2, Theorem 12.12].

Theorem 2.1 ($k$-th Bernoulli polynomial). The functions $B_k(u)$ are polynomials in $u$ given by
\[
B_k(u) = \sum_{n=0}^{k} \binom{k}{n} B_n u^{k-n}. \tag{2.3}
\]

An important property of the $k$-th Bernoulli polynomials is the Fourier expansion formula [2, Theorem 12.19]
\[
\sum_{k \in \mathbb{Z}\setminus\{0\}} \frac{e^{2\pi i k u}}{k^m} = -\frac{(2\pi i)^m}{m!}\, \bar{B}_m(u). \tag{2.4}
\]
In the non-absolutely convergent case $m = 1$, the sum is to be interpreted as a Cauchy principal value. Property (2.4) defines the unique periodic function $\bar{B}_m(u)$ with period 1 that coincides with $B_m(u)$ on $[0,1)$, except that we set $\bar{B}_1(u) = 0$ for $u \in \mathbb{Z}$ [7].
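Equations (2.2) and (2.3) give a practical recipe: compute the Bernoulli numbers from the recurrence implied by the generating function, then assemble $B_k(u)$. A minimal Python sketch (the helper names are ours):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    # B_0, ..., B_n via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1,
    # which is equivalent to the generating function (2.2); note B_1 = -1/2.
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

def bernoulli_poly(k, u):
    # B_k(u) = sum_{n=0}^{k} C(k, n) B_n u^{k-n}, Theorem 2.1 / equation (2.3)
    B = bernoulli_numbers(k)
    u = Fraction(u)
    return sum(comb(k, n) * B[n] * u ** (k - n) for n in range(k + 1))
```

For example, `bernoulli_poly(2, u)` reproduces $B_2(u) = u^2 - u + \tfrac{1}{6}$ value by value.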

Lemma 2.2 (Raabe's formula [7]). For $a \in \mathbb{N}$ and $x \in \mathbb{R}$,
\[
\sum_{f \bmod a} \bar{B}_m\!\left(x + \frac{f}{a}\right) = a^{1-m}\, \bar{B}_m(ax).
\]

Proof. Begin by embedding the right-hand side into a generating function:
\[
\sum_{m \ge 0} a^{1-m} B_m(ax)\, \frac{Y^{m-1}}{m!} = \sum_{m \ge 0} B_m(ax)\, \frac{(Y/a)^{m-1}}{m!} = \frac{e^{ax \cdot Y/a}}{e^{Y/a} - 1} = \frac{e^{xY}}{e^{Y/a} - 1}.
\]
Now, embed the left-hand side into a generating function:
\[
\sum_{m \ge 0} \sum_{f \bmod a} B_m\!\left(x + \frac{f}{a}\right) \frac{Y^{m-1}}{m!} = \sum_{f=0}^{a-1} \frac{e^{(x + f/a)Y}}{e^Y - 1} = \frac{e^{xY}\left(1 + e^{\frac{1}{a}Y} + e^{\frac{2}{a}Y} + \cdots + e^{\frac{a-1}{a}Y}\right)}{e^Y - 1}
\]

(applying the geometric series identity $\sum_{k=0}^{n-1} r^k = \frac{1 - r^n}{1 - r}$ with $r = e^{\frac{Y}{a}}$)
\[
= e^{xY}\, \frac{1 - e^Y}{1 - e^{Y/a}} \cdot \frac{1}{e^Y - 1} = e^{xY}\, \frac{e^Y - 1}{e^{Y/a} - 1} \cdot \frac{1}{e^Y - 1} = \frac{e^{xY}}{e^{Y/a} - 1}.
\]
We have deduced that the generating functions of the right- and left-hand sides of Raabe's formula are equal, so the coefficients of the generating functions are equal. That is,
\[
\sum_{f \bmod a} \bar{B}_m\!\left(x + \frac{f}{a}\right) = a^{1-m}\, \bar{B}_m(ax).
\]

Definition 2.3. Let
\[
\beta(\alpha, Y) = \sum_{m=0}^{\infty} \bar{B}_m(\alpha)\, \frac{Y^{m-1}}{m!}, \tag{2.5}
\]
where $\alpha \in \mathbb{R}$.
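For each fixed $m$ and rational $x$, Raabe's formula is a finite identity, so it can be verified with exact arithmetic. The sketch below does this for $m = 2$, using $\bar{B}_2(x) = \{x\}^2 - \{x\} + \tfrac{1}{6}$; the helper names are ours:

```python
from fractions import Fraction

def frac(x):
    # fractional part {x} of a rational number
    x = Fraction(x)
    return x - (x.numerator // x.denominator)

def B2bar(x):
    # periodized second Bernoulli polynomial: B2bar(x) = {x}^2 - {x} + 1/6
    t = frac(x)
    return t * t - t + Fraction(1, 6)

def raabe_lhs(a, x):
    # sum over f mod a of B2bar(x + f/a)
    return sum(B2bar(Fraction(x) + Fraction(f, a)) for f in range(a))

def raabe_rhs(a, x):
    # a^{1-m} B2bar(a x) with m = 2
    return B2bar(a * Fraction(x)) / a
```

For example, with $a = 3$ and $x = \tfrac{1}{7}$ both sides equal $-\tfrac{23}{882}$.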

Lemma 2.3.
\[
\beta(\alpha, Y) = \begin{cases} \dfrac{1}{2}\, \dfrac{e^Y + 1}{e^Y - 1} & \text{for } \alpha \in \mathbb{Z}, \\[6pt] \dfrac{e^{\{\alpha\} Y}}{e^Y - 1} & \text{for } \alpha \notin \mathbb{Z}. \end{cases} \tag{2.6}
\]

Proof. Recall that $\bar{B}_m(\alpha)$ coincides with $B_m(\alpha)$ on the interval $[0,1)$ and is periodic with period 1. Thus, the cases we are interested in are $\alpha \notin \mathbb{Z}$ and $\alpha = 0$. We will first evaluate $\beta(\alpha, Y)$ for $\alpha \notin \mathbb{Z}$:
\[
\beta(\alpha, Y) = \sum_{m=0}^{\infty} \bar{B}_m(\alpha)\, \frac{Y^{m-1}}{m!} = \sum_{m=0}^{\infty} B_m(\{\alpha\})\, \frac{Y^{m-1}}{m!} = \frac{e^{\{\alpha\} Y}}{e^Y - 1}.
\]
Now, we look at the case $\alpha = 0$. We first write out the first few terms of $\beta(\alpha, Y)$ evaluated at $0$, then manipulate the sum to have period 1. We use the well-known Bernoulli polynomial identities $B_0(\alpha) = 1$ and $B_m(0) = 0$ for odd $m \ge 3$, as well as

the first few Bernoulli numbers [2]:
\[
B_0(0)\, Y^{-1} + B_1(0) + B_2(0)\, \frac{Y}{2!} + B_3(0)\, \frac{Y^2}{3!} + B_4(0)\, \frac{Y^3}{4!} + \cdots = \frac{1}{e^Y - 1}.
\]
Recall that $\bar{B}_1(\alpha) = 0$ for all $\alpha \in \mathbb{Z}$, while $B_1(0) = -\frac{1}{2}$, and so we have the following identity:
\[
\bar{B}_0(0)\, Y^{-1} + \bar{B}_1(0) + \bar{B}_2(0)\, \frac{Y}{2!} + \bar{B}_3(0)\, \frac{Y^2}{3!} + \bar{B}_4(0)\, \frac{Y^3}{4!} + \cdots = \frac{1}{e^Y - 1} + \frac{1}{2} = \frac{2 + e^Y - 1}{2(e^Y - 1)} = \frac{1}{2}\, \frac{e^Y + 1}{e^Y - 1}.
\]
When $\alpha$ is an integer, $\{\alpha\} = 0$; thus the above identity holds for all $\alpha \in \mathbb{Z}$.
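Lemma 2.3 can be probed numerically: truncating the series (2.5) at a modest order reproduces the closed forms in (2.6) to high accuracy for small $Y$. A Python sketch, where the truncation order and helper names are our choices:

```python
from fractions import Fraction
from math import comb, exp, factorial

def bernoulli_numbers(n):
    # B_0..B_n via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 (m >= 1)
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

def beta_series(alpha, Y, M=25):
    # Truncation of beta(alpha, Y) = sum_m Bbar_m(alpha) Y^{m-1}/m!  (Definition 2.3),
    # using Bbar_m(alpha) = B_m({alpha}) except Bbar_1(alpha) = 0 for alpha in Z.
    B = bernoulli_numbers(M)
    a = Fraction(alpha)
    t = a - (a.numerator // a.denominator)          # fractional part {alpha}
    total = 0.0
    for m in range(M + 1):
        Bm = sum(comb(m, n) * B[n] * t ** (m - n) for n in range(m + 1))  # B_m({alpha})
        if m == 1 and t == 0:
            Bm = Fraction(0)                        # the Bbar_1 convention
        total += float(Bm) * Y ** (m - 1) / factorial(m)
    return total
```

With $Y = 0.3$, the truncated series agrees with $\tfrac{1}{2}\tfrac{e^Y+1}{e^Y-1}$ for integer $\alpha$ and with $\tfrac{e^{\{\alpha\}Y}}{e^Y-1}$ otherwise, far below floating-point noise.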

2.2 Dedekind and Bernoulli meet

Dedekind-like sums involving higher-order Bernoulli polynomials have been defined by Apostol [1], Carlitz [5], Mikolás [10] and Hall, Wilson and Zagier [7]. The latter introduced the generalized Dedekind Rademacher sum, the function we wish to generalize to $n$ variables.

Definition 2.4. The generalized Dedekind Rademacher sum is
\[
S_{m,n}\begin{pmatrix} a & b & c \\ x & y & z \end{pmatrix} = \sum_{h \bmod c} \bar{B}_m\!\left(a\, \frac{h+z}{c} - x\right) \bar{B}_n\!\left(b\, \frac{h+z}{c} - y\right), \tag{2.7}
\]
where $a, b, c \in \mathbb{Z}_{>0}$ and $x, y, z \in \mathbb{R}$.

The generalized Dedekind Rademacher sum satisfies certain reciprocity relations that mix various pairs of indices $(m,n)$ and so are most conveniently stated in terms of the generating function
\[
\Omega\begin{pmatrix} a & b & c \\ x & y & z \\ X & Y & Z \end{pmatrix} = \sum_{m,n \ge 0} \frac{1}{m!\, n!}\, S_{m,n}\begin{pmatrix} a & b & c \\ x & y & z \end{pmatrix} (X/a)^{m-1} (Y/b)^{n-1}, \tag{2.8}
\]
where $Z = -X - Y$. Hall, Wilson and Zagier proved the following reciprocity theorem for the generalized Dedekind Rademacher sum.

Theorem 2.4 ([7]). Let $a, b, c \in \mathbb{Z}_{>0}$ be pairwise relatively prime, $x, y, z \in \mathbb{R}$, and

$X, Y, Z$ three variables such that $X + Y + Z = 0$. Then
\[
\Omega\begin{pmatrix} a & b & c \\ x & y & z \\ X & Y & Z \end{pmatrix} + \Omega\begin{pmatrix} b & c & a \\ y & z & x \\ Y & Z & X \end{pmatrix} + \Omega\begin{pmatrix} c & a & b \\ z & x & y \\ Z & X & Y \end{pmatrix} = \begin{cases} -\frac{1}{4} & \text{if } (x,y,z) \in (a,b,c)\mathbb{R} + \mathbb{Z}^3, \\ 0 & \text{otherwise}. \end{cases} \tag{2.9}
\]

To clarify, we call the result
\[
\Omega\begin{pmatrix} a & b & c \\ x & y & z \\ X & Y & Z \end{pmatrix} + \Omega\begin{pmatrix} b & c & a \\ y & z & x \\ Y & Z & X \end{pmatrix} + \Omega\begin{pmatrix} c & a & b \\ z & x & y \\ Z & X & Y \end{pmatrix} = 0
\]
the generic case of the above reciprocity theorem; it occurs most of the time. The one exception is when $(x, y, z)$ is chosen in such a way that the periodized Bernoulli polynomials involved are evaluated at integer values. In the following chapter we introduce our main theorem, which generalizes Hall, Wilson and Zagier's reciprocity theorem using a simple combinatorial approach and makes clear how the exceptional integer values come to light.
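The sum in Definition 2.4 is again finite and computable. The sketch below implements the $m = n = 1$ case (the names `B1bar`, `S11`, and `classical_dedekind` are ours) and checks that $S_{1,1}$ with $b = 1$ and zero shifts recovers the classical Dedekind sum of Chapter 1:

```python
from fractions import Fraction

def B1bar(x):
    # B1bar(x) = {x} - 1/2 for x not in Z, and 0 for x in Z
    x = Fraction(x)
    t = x - (x.numerator // x.denominator)
    return Fraction(0) if t == 0 else t - Fraction(1, 2)

def S11(a, b, c, x, y, z):
    # S_{1,1}(a b c; x y z) = sum_{h mod c} B1bar(a(h+z)/c - x) B1bar(b(h+z)/c - y), (2.7)
    x, y, z = Fraction(x), Fraction(y), Fraction(z)
    return sum(B1bar(a * (h + z) / c - x) * B1bar(b * (h + z) / c - y)
               for h in range(c))

def classical_dedekind(a, b):
    # s(a, b) = sum_{h mod b} B1bar(h/b) B1bar(ha/b)
    return sum(B1bar(Fraction(h, b)) * B1bar(Fraction(h * a, b)) for h in range(b))
```

Setting $b = 1$ makes the second factor $\bar{B}_1(h/c)$, so $S_{1,1}(a,1,c;0,0,0) = s(a,c)$.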

Chapter 3

The Multivariate Dedekind Bernoulli sum

We now introduce our main object of study, the multivariate Dedekind Bernoulli sum.

Definition 3.1. For a fixed integer $n \ge 2$, we consider positive integers $p_1, p_2, \dots, p_n$ and $a_1, a_2, \dots, a_n$, and real numbers $x_1, x_2, \dots, x_n$. For $1 \le k \le n$, set $A_k = (a_1, a_2, \dots, \hat{a}_k, \dots, a_n)$, $X_k = (x_1, x_2, \dots, \hat{x}_k, \dots, x_n)$ and $P_k = (p_1, p_2, \dots, \hat{p}_k, \dots, p_n)$, where $\hat{a}_k$ means we omit the entry. Then the multivariate Dedekind Bernoulli

19 13 sum is S Pk A k X k x k = h mod n i=1 B pi (a i h + x k ) x i. (3.1) We see that if (,, a 3 ) = (a, b, c), (x 1, x 2, x 3 ) = (x, y, z), and P 3 = (m, n) then A 3 = (a, b), X 3 = (x, y) and thus we recover Hall, Wilson and Zagier s generalized Dedekind Rademacher sum, S m,n A 3 X 3 c = z h mod c ( B m a h + z ) x c ( B n b h + z ) y. c Moreover, by letting (x 1, x 2,..., x n ) = 0 and P k = (p 1, p 2,..., ˆp k,..., p n ), we recover another generalization of Hall, Wilson and Zagier s generalized Dedekind Rademacher sum introduced by Bayad and Raouj in 2009 [3]. Bayad and Raouj s multiple Dedekind Rademacher sum is thus defined S Pk A k P k p k = h mod n i=1 ( ) ai h B pi. (3.2) Similar to the generalized Dedekind Rademacher sum, a multivariate Dedekind Bernoulli sum reciprocity relation mixes various (n 1)-tuples of indices and is most conveniently stated in terms of generating functions. For nonzero variables

$(y_1, y_2, \dots, y_n)$, where $y_n = -y_1 - y_2 - \cdots - y_{n-1}$, let $Y_k = (y_1, y_2, \dots, \hat{y}_k, \dots, y_n)$. Then
\[
\Omega\begin{pmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{pmatrix} = \sum_{p_1, \dots, \hat{p}_k, \dots, p_n \ge 0} \frac{1}{p_1!\, p_2! \cdots p_{k-1}!\, p_{k+1}! \cdots p_n!}\, S_{P_k}\begin{pmatrix} A_k & a_k \\ X_k & x_k \end{pmatrix} \prod_{\substack{i=1 \\ i \ne k}}^{n} \left(\frac{y_i}{a_i}\right)^{p_i - 1}.
\]
The sum $\sum_{p_1, \dots, \hat{p}_k, \dots, p_n \ge 0}$ is understood as a summation over all non-negative integers $p_1, \dots, \hat{p}_k, \dots, p_n$.

Our main result is the following reciprocity law involving multivariate Dedekind Bernoulli sums.

Theorem 3.1. Let $(x_1, x_2, \dots, x_n) \in \mathbb{R}^n$ and $(p_1, p_2, \dots, p_n), (a_1, a_2, \dots, a_n) \in \mathbb{Z}_{>0}^n$, where $(a_u, a_v) = 1$ for all $1 \le u < v \le n$. For $1 \le k \le n$, let $A_k = (a_1, a_2, \dots, \hat{a}_k, \dots, a_n)$, $X_k = (x_1, x_2, \dots, \hat{x}_k, \dots, x_n)$ and $P_k = (p_1, p_2, \dots, \hat{p}_k, \dots, p_n)$. For nonzero variables $y_1, y_2, \dots, y_n$, let $Y_k = (y_1, y_2, \dots, \hat{y}_k, \dots, y_n)$ such that $y_1 +$

$y_2 + \cdots + y_n = 0$. Then
\[
\sum_{k=1}^{n} \Omega\begin{pmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{pmatrix} = 0 \quad \text{if} \quad \frac{x_u - h_u}{a_u} - \frac{x_v - h_v}{a_v} \notin \mathbb{Z} \text{ whenever } 1 \le u < v \le n \text{ and } h_u, h_v \in \mathbb{Z}.
\]

We now embark on a journey of transforming $\Omega\left(\begin{smallmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{smallmatrix}\right)$ into a simplified and accessible form that will make proving Theorem 3.1 as easy as a matching game. Before we continue, we first introduce two useful lemmas for the fractional-part function.

Lemma 3.2. Given $a, b, c \in \mathbb{R}$,
\[
\{a - b\} - \{a - c\} > 0 \implies \{a - b\} - \{a - c\} = \{c - b\}.
\]

Proof. Our goal is to show $\{a-b\} - \{a-c\} - \{c-b\} = 0$. By definition $\{x\} = x - \lfloor x \rfloor$, where $\lfloor x \rfloor$ is an integer and $x \in \mathbb{R}$. Thus, the above

sum can be rewritten as follows:
\[
\{a-b\} - \{a-c\} - \{c-b\} = \bigl(a - b - \lfloor a-b \rfloor\bigr) - \bigl(a - c - \lfloor a-c \rfloor\bigr) - \bigl(c - b - \lfloor c-b \rfloor\bigr) = \lfloor a-c \rfloor + \lfloor c-b \rfloor - \lfloor a-b \rfloor =: N,
\]
where $N$ is an integer. Given our assumption and the fact that $0 \le \{x\} < 1$, it follows that
\[
0 < \{a-b\} - \{a-c\} < 1 \implies -1 < \{a-b\} - \{a-c\} - \{c-b\} < 1 \implies -1 < N < 1 \implies N = 0.
\]

Lemma 3.3. Given $a, b, c \in \mathbb{R}$,
\[
\{a - b\} - \{a - c\} < 0 \implies \{a - b\} - \{a - c\} = -\{b - c\}.
\]

Proof. Our goal is to show $\{a-b\} - \{a-c\} + \{b-c\} = 0$. By definition $\{x\} = x - \lfloor x \rfloor$, where $\lfloor x \rfloor$ is an integer and $x \in \mathbb{R}$. Thus, the above sum can be rewritten as follows:
\[
\{a-b\} - \{a-c\} + \{b-c\} = \bigl(a - b - \lfloor a-b \rfloor\bigr) - \bigl(a - c - \lfloor a-c \rfloor\bigr) + \bigl(b - c - \lfloor b-c \rfloor\bigr) = \lfloor a-c \rfloor - \lfloor a-b \rfloor - \lfloor b-c \rfloor =: N,
\]
where $N$ is an integer. Given our assumption, $0 \le \{x\} < 1$, and $0 \le \{b-c\} < 1$, it follows that
\[
-1 < \{a-b\} - \{a-c\} < 0 \implies -1 < \{a-b\} - \{a-c\} + \{b-c\} < 1 \implies -1 < N < 1 \implies N = 0.
\]
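Both lemmas can be stress-tested on random rationals. The helper below (names ours) returns True exactly when the conclusion predicted by the sign of $\{a-b\} - \{a-c\}$ holds:

```python
from fractions import Fraction
from random import randint

def frac(x):
    # fractional part {x}
    x = Fraction(x)
    return x - (x.numerator // x.denominator)

def check_lemmas(a, b, c):
    # Lemma 3.2: {a-b} - {a-c} > 0  implies  {a-b} - {a-c} = {c-b}
    # Lemma 3.3: {a-b} - {a-c} < 0  implies  {a-b} - {a-c} = -{b-c}
    d = frac(a - b) - frac(a - c)
    if d > 0:
        return d == frac(c - b)
    if d < 0:
        return d == -frac(b - c)
    return True
```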

We will now begin manipulating $\Omega$ using the following identity for multivariate Dedekind Bernoulli sums:
\[
S_{P_k}\begin{pmatrix} A_k & a_k \\ X_k & x_k \end{pmatrix} = \sum_{h \bmod a_k} \prod_{\substack{i=1 \\ i \ne k}}^{n} \bar{B}_{p_i}\!\left(a_i\, \frac{h + x_k}{a_k} - x_i\right) = \sum_{H} \prod_{\substack{i=1 \\ i \ne k}}^{n} a_i^{p_i - 1}\, \bar{B}_{p_i}\!\left(\frac{x_k + h_k}{a_k} - \frac{x_i + h_i}{a_i}\right)
\]
(applying Lemma 2.2, Raabe's formula), where
\[
\sum_H = \sum_{h_1 \bmod a_1} \sum_{h_2 \bmod a_2} \cdots \sum_{h_n \bmod a_n}
\]
includes the original summand $h$,

which we now call $h_k$. Let $r_i = \frac{x_i + h_i}{a_i}$. Then
\[
\Omega\begin{pmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{pmatrix} = \sum_{p_1, \dots, \hat{p}_k, \dots, p_n \ge 0} \frac{1}{p_1! \cdots p_{k-1}!\, p_{k+1}! \cdots p_n!}\, S_{P_k}\begin{pmatrix} A_k & a_k \\ X_k & x_k \end{pmatrix} \prod_{\substack{i=1 \\ i \ne k}}^{n} \left(\frac{y_i}{a_i}\right)^{p_i - 1} = \sum_H \sum_{p_1, \dots, \hat{p}_k, \dots, p_n \ge 0} \prod_{\substack{i=1 \\ i \ne k}}^{n} \bar{B}_{p_i}(r_k - r_i)\, \frac{y_i^{p_i - 1}}{p_i!} = \sum_H \prod_{\substack{i=1 \\ i \ne k}}^{n} \beta(r_k - r_i, y_i), \tag{3.3}
\]
where Definition 2.3 is applied in the final equality. It is now clear that (3.3) depends on the differences $r_k - r_i$ for $1 \le k < i \le n$, and all $\beta(r_k - r_i, y_i)$ depend on whether or not these differences are integers (see (2.6)). From now on we assume the differences are not integers, which is analogous to the generic case of Theorem 2.4.
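The Raabe-expansion step above, which equates the single sum over $h \bmod a_k$ with the multiple sum over $H$, is a finite identity and can be verified exactly for a small instance. The sketch below checks it for $n = 3$, $k = 3$, $(a_1, a_2, a_3) = (2, 3, 5)$, $(p_1, p_2) = (1, 2)$, and rational shifts; all concrete values and names are our choices:

```python
from fractions import Fraction

def frac(x):
    x = Fraction(x)
    return x - (x.numerator // x.denominator)

def Bbar(p, x):
    # periodized Bernoulli polynomials of order 1 and 2
    t = frac(x)
    if p == 1:
        return Fraction(0) if t == 0 else t - Fraction(1, 2)
    if p == 2:
        return t * t - t + Fraction(1, 6)
    raise ValueError("only p = 1, 2 implemented here")

# data for n = 3, k = 3 (pairwise coprime moduli)
a1, a2, a3 = 2, 3, 5
p1, p2 = 1, 2
x1, x2, x3 = Fraction(1, 7), Fraction(2, 7), Fraction(3, 7)

# Definition 3.1: single sum over h mod a3
lhs = sum(Bbar(p1, a1 * (h + x3) / a3 - x1) * Bbar(p2, a2 * (h + x3) / a3 - x2)
          for h in range(a3))

# After Raabe's formula: multiple sum over H with weights a_i^{p_i - 1},
# r_i = (x_i + h_i)/a_i
rhs = sum(a1 ** (p1 - 1) * a2 ** (p2 - 1)
          * Bbar(p1, (x3 + h3) / a3 - (x1 + h1) / a1)
          * Bbar(p2, (x3 + h3) / a3 - (x2 + h2) / a2)
          for h1 in range(a1) for h2 in range(a2) for h3 in range(a3))
```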

Consider the overall sum of $\Omega\left(\begin{smallmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{smallmatrix}\right)$ as $k$ ranges from 1 to $n$:
\[
\sum_{k=1}^{n} \Omega\begin{pmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{pmatrix} = \sum_{k=1}^{n} \sum_H \prod_{\substack{i=1 \\ i \ne k}}^{n} \beta(r_k - r_i, y_i) = \sum_{k=1}^{n-1} \sum_H \prod_{\substack{i=1 \\ i \ne k}}^{n} \beta(r_k - r_i, y_i) + \sum_H \prod_{j=1}^{n-1} \beta(r_n - r_j, y_j).
\]
Since $\{r_k - r_i\} \ne 0$ for each $k \ne i$, we apply Lemma 2.3:
\[
= \sum_{k=1}^{n-1} \sum_H \prod_{\substack{i=1 \\ i \ne k}}^{n} \frac{e^{\{r_k - r_i\} y_i}}{e^{y_i} - 1} + \sum_H \prod_{j=1}^{n-1} \frac{e^{\{r_n - r_j\} y_j}}{e^{y_j} - 1} = \sum_{k=1}^{n-1} \sum_H \frac{e^{y_k} - 1}{e^{y_k} - 1} \prod_{\substack{i=1 \\ i \ne k}}^{n} \frac{e^{\{r_k - r_i\} y_i}}{e^{y_i} - 1} + \sum_H \frac{e^{y_n} - 1}{e^{y_n} - 1} \prod_{j=1}^{n-1} \frac{e^{\{r_n - r_j\} y_j}}{e^{y_j} - 1}
\]
\[
= \frac{\displaystyle \sum_{k=1}^{n-1} \sum_H \bigl(e^{y_k} - 1\bigr) \prod_{\substack{i=1 \\ i \ne k}}^{n} e^{\{r_k - r_i\} y_i} + \sum_H \bigl(e^{y_n} - 1\bigr) \prod_{j=1}^{n-1} e^{\{r_n - r_j\} y_j}}{\displaystyle \prod_{f=1}^{n} \bigl(e^{y_f} - 1\bigr)}.
\]

We will drop the denominator and continue to focus only on the numerator:
\[
\sum_{k=1}^{n-1} \sum_H e^{\{r_k - r_n\} y_n + y_k} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{\{r_k - r_i\} y_i} - \sum_{k=1}^{n-1} \sum_H e^{\{r_k - r_n\} y_n} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{\{r_k - r_i\} y_i} + \sum_H e^{y_n} \prod_{j=1}^{n-1} e^{\{r_n - r_j\} y_j} - \sum_H \prod_{j=1}^{n-1} e^{\{r_n - r_j\} y_j}.
\]
Substituting $y_n = -y_1 - \cdots - y_{n-1}$, this becomes
\[
\sum_{k=1}^{n-1} \sum_H e^{\{r_k - r_n\}(-y_1 - \cdots - y_{n-1}) + y_k} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{\{r_k - r_i\} y_i} - \sum_{k=1}^{n-1} \sum_H e^{\{r_k - r_n\}(-y_1 - \cdots - y_{n-1})} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{\{r_k - r_i\} y_i} + \sum_H e^{-y_1 - \cdots - y_{n-1}} \prod_{j=1}^{n-1} e^{\{r_n - r_j\} y_j} - \sum_H \prod_{j=1}^{n-1} e^{\{r_n - r_j\} y_j}.
\]
Recall that $1 - \{r_i - r_j\} = \{r_j - r_i\}$, since $r_i - r_j \notin \mathbb{Z}$. Thus the numerator equals
\[
\sum_{k=1}^{n-1} \sum_H e^{\{r_n - r_k\} y_k} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{(\{r_k - r_i\} - \{r_k - r_n\}) y_i} - \sum_{k=1}^{n-1} \sum_H e^{-\{r_k - r_n\} y_k} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{(\{r_k - r_i\} - \{r_k - r_n\}) y_i} + \sum_H \prod_{j=1}^{n-1} e^{-\{r_j - r_n\} y_j} - \sum_H \prod_{j=1}^{n-1} e^{\{r_n - r_j\} y_j}. \tag{3.4}
\]

In order to prove Theorem 3.1, we will show that the exponents of opposite-signed terms in the numerator can be paired, and so the sum vanishes. Examining (3.4), we can see that only three types of coefficients appear: $\{r_n - r_i\}$, $-\{r_i - r_n\}$, and $\{r_i - r_j\} - \{r_i - r_n\}$. A critical characteristic of the latter difference is that, by applying Lemma 3.2 or Lemma 3.3,
\[
\{r_j - r_i\} - \{r_j - r_n\} = \{r_n - r_i\} \quad \text{or} \quad -\{r_i - r_n\},
\]
respectively. This means the exponents can be condensed to just the first two forms, $\{r_n - r_i\}$ and $-\{r_i - r_n\}$. Moreover, we can represent these numbers by their sign, since the sign of $\{r_j - r_i\} - \{r_j - r_n\}$ determines whether it equals $\{r_n - r_i\}$ or $-\{r_i - r_n\}$. This leads us to the following representations: $\{r_n - r_i\} = +$, $-\{r_i - r_n\} = -$, and $C_{ij} = \operatorname{sign}(\{r_i - r_j\} - \{r_i - r_n\})$. Thus every term of $\sum_{k=1}^{n} \Omega\left(\begin{smallmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{smallmatrix}\right)$ can be represented by a sign vector, which we will state explicitly.

The exponent corresponding to $k = 1$ in
\[
\sum_{k=1}^{n-1} \sum_H e^{\{r_n - r_k\} y_k} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{(\{r_k - r_i\} - \{r_k - r_n\}) y_i}
\]
is
\[
\{r_n - r_1\} y_1 + (\{r_1 - r_2\} - \{r_1 - r_n\}) y_2 + \cdots + (\{r_1 - r_{n-1}\} - \{r_1 - r_n\}) y_{n-1}
\]

and is represented as the sign vector $(+, C_{12}, \dots, C_{1,n-1})$. Similarly, the $(k=1)$-exponent in
\[
\sum_{k=1}^{n-1} \sum_H e^{-\{r_k - r_n\} y_k} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{(\{r_k - r_i\} - \{r_k - r_n\}) y_i}
\]
is
\[
-\{r_1 - r_n\} y_1 + (\{r_1 - r_2\} - \{r_1 - r_n\}) y_2 + \cdots + (\{r_1 - r_{n-1}\} - \{r_1 - r_n\}) y_{n-1},
\]
and the sign vector representation is $(-, C_{12}, \dots, C_{1,n-1})$. Finally, the terms
\[
\sum_H \prod_{j=1}^{n-1} e^{-\{r_j - r_n\} y_j} \quad \text{and} \quad -\sum_H \prod_{j=1}^{n-1} e^{\{r_n - r_j\} y_j}
\]

are represented as the respective sign vectors $(-, -, \dots, -)$ and $(+, +, \dots, +)$. Note that if the signs of, say, $\{r_1 - r_n\}$ and $\{r_4 - r_n\}$ agree, it is not necessarily true that $\{r_1 - r_n\} = \{r_4 - r_n\}$. We will address this further in the following argument.

To prove that the terms of $\sum_{k=1}^{n} \Omega\left(\begin{smallmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{smallmatrix}\right)$ do in fact cancel, we proceed to construct two matrices $M_n^+$ and $M_n^-$ that consist of the sign-vector representations of the terms of the sum, and we will show that proving $\sum_{k=1}^{n} \Omega\left(\begin{smallmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{smallmatrix}\right) = 0$ equates to proving $M_n^+ = M_n^-$ after row swapping.

Let $M_n^+$ be the matrix of all sign vectors representing the exponents of the positive terms of the sum, and let $M_n^-$ be the matrix of all sign vectors representing the exponents of the negative terms. This means the sign vector that represents, for each $k$, the exponent of the positive terms
\[
\sum_{k=1}^{n-1} \sum_H e^{\{r_n - r_k\} y_k} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{(\{r_k - r_i\} - \{r_k - r_n\}) y_i}
\]
is placed in the $k$th row of the matrix $M_n^+$.

Similarly, the sign-vector representation of the exponent, for each $k$, of the negative terms
\[
-\sum_{k=1}^{n-1} \sum_H e^{-\{r_k - r_n\} y_k} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{(\{r_k - r_i\} - \{r_k - r_n\}) y_i}
\]
is placed in the $k$th row of the matrix $M_n^-$. Finally, place the sign vector representing $\sum_H \prod_{j=1}^{n-1} e^{-\{r_j - r_n\} y_j}$ in the last row of $M_n^+$ and the sign vector representing $-\sum_H \prod_{j=1}^{n-1} e^{\{r_n - r_j\} y_j}$ in the last row of $M_n^-$.

Notice that the placement of the entry $C_{ki}$ depends on the indices $k$ and $i$ of each term of the sum, and given the symmetry among the terms, $C_{ki}$ lives in the same row and column in both matrices. Note also that $\{r_n - r_i\} = \{r_n - r_j\}$ only when $i = j$, since $r_i - r_j \notin \mathbb{Z}$ for $i \ne j$. Thus we have constructed the following matrices:
\[
M_n^+ = \begin{pmatrix} + & C_{12} & C_{13} & \cdots & C_{1,n-1} \\ C_{21} & + & C_{23} & \cdots & C_{2,n-1} \\ C_{31} & C_{32} & + & \cdots & C_{3,n-1} \\ \vdots & & & \ddots & \vdots \\ C_{n-1,1} & C_{n-1,2} & C_{n-1,3} & \cdots & + \\ - & - & - & \cdots & - \end{pmatrix}
\]

and
\[
M_n^- = \begin{pmatrix} - & C_{12} & C_{13} & \cdots & C_{1,n-1} \\ C_{21} & - & C_{23} & \cdots & C_{2,n-1} \\ C_{31} & C_{32} & - & \cdots & C_{3,n-1} \\ \vdots & & & \ddots & \vdots \\ C_{n-1,1} & C_{n-1,2} & C_{n-1,3} & \cdots & - \\ + & + & + & \cdots & + \end{pmatrix}.
\]

Last, we will show that $M_n^+ = M_n^-$ after row swapping implies that $\sum_{k=1}^{n} \Omega\left(\begin{smallmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{smallmatrix}\right)$ vanishes. Assume $M_n^+ = M_n^-$ up to row swapping. Then for each sign row vector
\[
(C_{s1}, \dots, C_{s,s-1}, +, C_{s,s+1}, \dots, C_{s,n-1}) \in M_n^+
\]
there exists
\[
(C_{t1}, \dots, C_{t,t-1}, -, C_{t,t+1}, \dots, C_{t,n-1}) \in M_n^-
\]
such that
\[
(C_{s1}, \dots, C_{s,s-1}, +, C_{s,s+1}, \dots, C_{s,n-1}) = (C_{t1}, \dots, C_{t,t-1}, -, C_{t,t+1}, \dots, C_{t,n-1}).
\]

Also, we have for some row $f$ in $M_n^+$
\[
(C_{f1}, \dots, C_{f,f-1}, +, C_{f,f+1}, \dots, C_{f,n-1}) = (+, \dots, +)
\]
and for some row $g$ in $M_n^-$
\[
(-, \dots, -) = (C_{g1}, \dots, C_{g,g-1}, -, C_{g,g+1}, \dots, C_{g,n-1}).
\]
For each identity, we will show that the sign row vectors correspond to canceling terms of $\sum_{k=1}^{n} \Omega\left(\begin{smallmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{smallmatrix}\right)$. Beginning with $(C_{s1}, \dots, C_{s,s-1}, +, C_{s,s+1}, \dots, C_{s,n-1}) \in M_n^+$, this corresponds to the exponent
\[
(\{r_s - r_1\} - \{r_s - r_n\}) y_1 + \cdots + (\{r_s - r_{s-1}\} - \{r_s - r_n\}) y_{s-1} + \{r_n - r_s\} y_s + (\{r_s - r_{s+1}\} - \{r_s - r_n\}) y_{s+1} + \cdots + (\{r_s - r_{n-1}\} - \{r_s - r_n\}) y_{n-1},
\]
and more specifically, represents the positive term
\[
\sum_H \exp\Bigl( (\{r_s - r_1\} - \{r_s - r_n\}) y_1 + \cdots + \{r_n - r_s\} y_s + \cdots + (\{r_s - r_{n-1}\} - \{r_s - r_n\}) y_{n-1} \Bigr).
\]

Similarly, $(C_{t1}, C_{t2}, \dots, C_{t,t-1}, -, C_{t,t+1}, \dots, C_{t,n-1}) \in M_n^-$ represents the exponent
\[
(\{r_t - r_1\} - \{r_t - r_n\}) y_1 + \cdots + (\{r_t - r_{t-1}\} - \{r_t - r_n\}) y_{t-1} - \{r_t - r_n\} y_t + (\{r_t - r_{t+1}\} - \{r_t - r_n\}) y_{t+1} + \cdots + (\{r_t - r_{n-1}\} - \{r_t - r_n\}) y_{n-1},
\]
and so also the negative term
\[
-\sum_H \exp\Bigl( (\{r_t - r_1\} - \{r_t - r_n\}) y_1 + \cdots - \{r_t - r_n\} y_t + \cdots + (\{r_t - r_{n-1}\} - \{r_t - r_n\}) y_{n-1} \Bigr).
\]
Then
\[
(C_{s1}, \dots, C_{s,s-1}, +, C_{s,s+1}, \dots, C_{s,n-1}) = (C_{t1}, \dots, C_{t,t-1}, -, C_{t,t+1}, \dots, C_{t,n-1})
\]

implies that the corresponding coefficients agree sign by sign. If $\operatorname{sign}(\{r_s - r_i\} - \{r_s - r_n\}) = \operatorname{sign}(\{r_t - r_i\} - \{r_t - r_n\})$ then, by Lemmas 3.2 and 3.3, $\{r_s - r_i\} - \{r_s - r_n\} = \{r_t - r_i\} - \{r_t - r_n\}$, since both equal $\{r_n - r_i\}$ when the common sign is $+$ and $-\{r_i - r_n\}$ when it is $-$; in particular, the $s$th entries both give the coefficient $\{r_n - r_s\}$ and the $t$th entries both give $-\{r_t - r_n\}$. Thus,
\[
(\{r_s - r_1\} - \{r_s - r_n\}) y_1 + \cdots + \{r_n - r_s\} y_s + \cdots + (\{r_s - r_{n-1}\} - \{r_s - r_n\}) y_{n-1} = (\{r_t - r_1\} - \{r_t - r_n\}) y_1 + \cdots - \{r_t - r_n\} y_t + \cdots + (\{r_t - r_{n-1}\} - \{r_t - r_n\}) y_{n-1}
\]

and so
\[
\sum_H \exp\Bigl( (\{r_s - r_1\} - \{r_s - r_n\}) y_1 + \cdots + \{r_n - r_s\} y_s + \cdots + (\{r_s - r_{n-1}\} - \{r_s - r_n\}) y_{n-1} \Bigr) - \sum_H \exp\Bigl( (\{r_t - r_1\} - \{r_t - r_n\}) y_1 + \cdots - \{r_t - r_n\} y_t + \cdots + (\{r_t - r_{n-1}\} - \{r_t - r_n\}) y_{n-1} \Bigr) = 0.
\]
As for the identities
\[
(C_{f1}, C_{f2}, \dots, C_{f,f-1}, +, C_{f,f+1}, \dots, C_{f,n-1}) = (+, +, \dots, +)
\]
and
\[
(-, -, \dots, -) = (C_{g1}, C_{g2}, \dots, C_{g,g-1}, -, C_{g,g+1}, \dots, C_{g,n-1}),
\]
a similar argument follows. The vector $(C_{f1}, C_{f2}, \dots, C_{f,f-1}, +, C_{f,f+1}, \dots, C_{f,n-1})$ represents the exponent
\[
(\{r_f - r_1\} - \{r_f - r_n\}) y_1 + \cdots + (\{r_f - r_{f-1}\} - \{r_f - r_n\}) y_{f-1} + \{r_n - r_f\} y_f + (\{r_f - r_{f+1}\} - \{r_f - r_n\}) y_{f+1} + \cdots + (\{r_f - r_{n-1}\} - \{r_f - r_n\}) y_{n-1}
\]

and thus the term
\[
\sum_H \exp\Bigl( (\{r_f - r_1\} - \{r_f - r_n\}) y_1 + \cdots + \{r_n - r_f\} y_f + \cdots + (\{r_f - r_{n-1}\} - \{r_f - r_n\}) y_{n-1} \Bigr).
\]
The identity $(C_{f1}, C_{f2}, \dots, C_{f,f-1}, +, C_{f,f+1}, \dots, C_{f,n-1}) = (+, +, \dots, +)$ means $\{r_f - r_i\} - \{r_f - r_n\} = \{r_n - r_i\}$ for each $i \ne f$. Thus
\[
\sum_H \exp\Bigl( (\{r_f - r_1\} - \{r_f - r_n\}) y_1 + \cdots + \{r_n - r_f\} y_f + \cdots \Bigr) = \sum_H e^{\{r_n - r_1\} y_1 + \cdots + \{r_n - r_{f-1}\} y_{f-1} + \{r_n - r_f\} y_f + \{r_n - r_{f+1}\} y_{f+1} + \cdots + \{r_n - r_{n-1}\} y_{n-1}} = \sum_H \prod_{i=1}^{n-1} e^{\{r_n - r_i\} y_i}.
\]
The exponent of the negative term $-\sum_H \prod_{j=1}^{n-1} e^{\{r_n - r_j\} y_j}$ is represented by the sign vector $(+, +, \dots, +)$, which cancels with the above.

The vector $(C_{g1}, C_{g2}, \dots, C_{g,g-1}, -, C_{g,g+1}, \dots, C_{g,n-1})$ represents the exponent
\[
(\{r_g - r_1\} - \{r_g - r_n\}) y_1 + \cdots + (\{r_g - r_{g-1}\} - \{r_g - r_n\}) y_{g-1} - \{r_g - r_n\} y_g + (\{r_g - r_{g+1}\} - \{r_g - r_n\}) y_{g+1} + \cdots + (\{r_g - r_{n-1}\} - \{r_g - r_n\}) y_{n-1}
\]
and thus the negative term
\[
-\sum_H \exp\Bigl( (\{r_g - r_1\} - \{r_g - r_n\}) y_1 + \cdots - \{r_g - r_n\} y_g + \cdots + (\{r_g - r_{n-1}\} - \{r_g - r_n\}) y_{n-1} \Bigr).
\]
The identity $(-, -, \dots, -) = (C_{g1}, C_{g2}, \dots, C_{g,g-1}, -, C_{g,g+1}, \dots, C_{g,n-1})$ means $\{r_g - r_i\} - \{r_g - r_n\} = -\{r_i - r_n\}$ for each $i \ne g$. Thus
\[
-\sum_H \exp\Bigl( (\{r_g - r_1\} - \{r_g - r_n\}) y_1 + \cdots - \{r_g - r_n\} y_g + \cdots \Bigr) = -\sum_H e^{-\{r_1 - r_n\} y_1 - \cdots - \{r_{g-1} - r_n\} y_{g-1} - \{r_g - r_n\} y_g - \{r_{g+1} - r_n\} y_{g+1} - \cdots - \{r_{n-1} - r_n\} y_{n-1}} = -\sum_H \prod_{i=1}^{n-1} e^{-\{r_i - r_n\} y_i}.
\]

The exponent of the positive term $\sum_H \prod_{j=1}^{n-1} e^{-\{r_j - r_n\} y_j}$ is represented by the sign vector $(-, -, \dots, -)$, which cancels with the above.

Thus the matching of rows of the matrices $M_n^+$ and $M_n^-$ corresponds to the matching of like terms of opposite signs, and we have just revealed the path to our main result: $M_n^+ = M_n^-$, up to row swapping, implies
\[
\sum_{k=1}^{n} \Omega\begin{pmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{pmatrix} = 0.
\]
We must now show that, under certain conditions, we can indeed find $M_n^+ = M_n^-$ up to row swapping. By proving the following properties of the matrices $M_n^+$ and $M_n^-$, we are able to quickly show $M_n^+ = M_n^-$ up to row swapping, and thus our reciprocity theorem follows.

Lemma 3.4. $M_n^+$ and $M_n^-$ are of the form such that $M_n^+$ has all $+$ entries on the diagonal and a last row of all $-$ entries, while $M_n^-$ has all $-$ entries on the diagonal and a last row of all $+$ entries.

Proof. As was explained above, the diagonal entries of $M_n^+$ (excluding the last row

of $-$'s) represent the $y_k$-coefficients in the exponent of the sum
\[
\sum_{k=1}^{n-1} \sum_H e^{\{r_n - r_k\} y_k} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{(\{r_k - r_i\} - \{r_k - r_n\}) y_i};
\]
when $i = k$, the coefficient is $\{r_n - r_k\}$, a $+$. Likewise, the diagonal entries of $M_n^-$ (excluding the last row of $+$'s) represent the $y_k$-coefficients in the exponent of the sum
\[
-\sum_{k=1}^{n-1} \sum_H e^{-\{r_k - r_n\} y_k} \prod_{\substack{i=1 \\ i \ne k}}^{n-1} e^{(\{r_k - r_i\} - \{r_k - r_n\}) y_i};
\]
when $i = k$, this coefficient is $-\{r_k - r_n\}$, a $-$.

Lemma 3.5. As above, let $C_{ij} = \operatorname{sign}(\{r_i - r_j\} - \{r_i - r_n\})$, where $i, j, n \in \mathbb{Z}_{>0}$ and $1 \le i < j \le n-1$. Then $C_{ij} = +$ if and only if $C_{ji} = -$.

Proof. Assume $C_{ij} = +$. Then $\{r_i - r_j\} - \{r_i - r_n\} > 0$, and by Lemma 3.2, $\{r_i - r_j\} - \{r_i - r_n\} = \{r_n - r_j\}$. We want to show $C_{ji} = -$, which is to show $\{r_j - r_i\} - \{r_j - r_n\} < 0$:
\[
\{r_j - r_i\} - \{r_j - r_n\} = \bigl(1 - \{r_i - r_j\}\bigr) - \bigl(1 - \{r_n - r_j\}\bigr) = -\{r_i - r_j\} + \{r_n - r_j\} = -\{r_i - r_n\} < 0 \implies C_{ji} = -.
\]

Assume $C_{ij} = -$. Then $\{r_i - r_j\} - \{r_i - r_n\} < 0$, and by Lemma 3.3, $\{r_i - r_j\} - \{r_i - r_n\} = -\{r_j - r_n\}$. We want to show $C_{ji} = +$, which is to show $\{r_j - r_i\} - \{r_j - r_n\} > 0$:
\[
\{r_j - r_i\} - \{r_j - r_n\} = \{r_j - r_i\} + \{r_i - r_j\} - \{r_i - r_n\} = 1 - \{r_i - r_n\} = \{r_n - r_i\} > 0 \implies C_{ji} = +,
\]
recalling that $\{r_j - r_i\} + \{r_i - r_j\} = 1$.

By Lemma 3.5, the sign of one difference of fractional parts, $\{r_i - r_j\} - \{r_i - r_n\}$, is determined by the sign of another, $\{r_j - r_i\} - \{r_j - r_n\}$: the lemma implies $C_{ij}$ determines $C_{ji}$. Thus, we will rename $C_{ji} = -C_{ij}$, meaning that if $C_{ij} = +$ then $C_{ji} = -$, and if $C_{ij} = -$ then $C_{ji} = +$. Thus, we can update the sign-vector matrices utilizing this new information as

\[
M_n^+ = \begin{pmatrix} + & C_{12} & C_{13} & \cdots & C_{1,n-1} \\ -C_{12} & + & C_{23} & \cdots & C_{2,n-1} \\ -C_{13} & -C_{23} & + & \cdots & C_{3,n-1} \\ \vdots & & & \ddots & \vdots \\ -C_{1,n-1} & -C_{2,n-1} & -C_{3,n-1} & \cdots & + \\ - & - & - & \cdots & - \end{pmatrix}
\]
and
\[
M_n^- = \begin{pmatrix} - & C_{12} & C_{13} & \cdots & C_{1,n-1} \\ -C_{12} & - & C_{23} & \cdots & C_{2,n-1} \\ -C_{13} & -C_{23} & - & \cdots & C_{3,n-1} \\ \vdots & & & \ddots & \vdots \\ -C_{1,n-1} & -C_{2,n-1} & -C_{3,n-1} & \cdots & - \\ + & + & + & \cdots & + \end{pmatrix},
\]
and we can state the following property.

Lemma 3.6. $M_n^+$ and $M_n^-$ exhibit antisymmetry about the diagonal.

Lemma 3.7. As above, let $C_{ij} = \operatorname{sign}(\{r_i - r_j\} - \{r_i - r_n\})$, where $i, j, n \in \mathbb{Z}_{>0}$ and $1 \le i < j \le n-1$. If $C_{ij} = +$ and $C_{ik} = -$, then $C_{jk} = -$.

Proof. Assume $C_{ij} = +$ and $C_{ik} = -$. Then $\{r_i - r_j\} - \{r_i - r_n\} > 0$ and $\{r_i - r_k\} - \{r_i - r_n\} < 0$, and by Lemmas 3.2 and 3.3,
\[
\{r_i - r_j\} - \{r_i - r_n\} = \{r_n - r_j\} \tag{3.5}
\]
and
\[
\{r_i - r_k\} - \{r_i - r_n\} = -\{r_k - r_n\}. \tag{3.6}
\]
Then the difference (3.5) $-$ (3.6) is positive, and we get
\[
\{r_i - r_j\} - \{r_i - r_k\} = \{r_n - r_j\} + \{r_k - r_n\}.
\]
The right-hand side is positive, which means the left-hand side is positive. Then by Lemma 3.2,
\[
\{r_i - r_j\} - \{r_i - r_k\} = \{r_k - r_j\},
\]
and we have
\[
\{r_k - r_j\} = \{r_n - r_j\} + \{r_k - r_n\}.
\]

We want to show $C_{jk} = -$, which reduces to showing $\{r_j - r_k\} - \{r_j - r_n\} < 0$:
\[
\{r_j - r_k\} - \{r_j - r_n\} = \bigl(1 - \{r_k - r_j\}\bigr) - \bigl(1 - \{r_n - r_j\}\bigr) = -\{r_k - r_j\} + \{r_n - r_j\} = -\{r_k - r_n\} < 0 \implies C_{jk} = -.
\]

Lemma 3.8. There exists a unique row with exactly $k$ $+$'s, for each $0 \le k \le n-1$, in the matrix $M_n^+$.

Proof. We begin by showing that every row of the matrix $M_n^+$ is unique. Assume on the contrary that row $m$ and row $l$ of $M_n^+$ are equal, with $m < l$, say. Then we can view the rows as follows:
\[
\text{row } m: \quad (-C_{1m}, \dots, -C_{m-1,m}, +, \dots, C_{ml}, \dots, C_{m,n-1}),
\]
\[
\text{row } l: \quad (-C_{1l}, \dots, -C_{ml}, \dots, +, \dots, C_{l,n-1}).
\]
Comparing the $l$th entries gives $C_{ml} = +$, while comparing the $m$th entries gives $+ = -C_{ml}$, that is, $C_{ml} = -$, a contradiction. Therefore, the rows of the matrix $M_n^+$ are unique.

Next, we will show that no two rows contain the same number of $+$'s. Assume on the contrary that row $m$ and row $l$ of $M_n^+$ contain exactly $i$ $+$'s and are not equal

(and thus $1 \le i < n-1$), with rows as pictured above. We examine the entry $C_{ml}$ of row $m$ and the entry $-C_{ml}$ of row $l$. Let $C_{ml} = +$. Since row $m$ does not consist only of $+$'s, there exists a $-$ in some column, say $w$. Then by Lemma 3.7, the entry of row $l$ in column $w$ is also $-$. So, for every $-$ in row $m$, Lemma 3.7 can be applied to show there is a $-$ in the same column entry of row $l$. But row $l$ also contains a $-$ in column $m$, since its entry there is $-C_{ml} = -$. Thus, row $l$ contains at most $i - 1$ $+$'s, a contradiction. If instead $C_{ml} = -$, the same argument holds with the roles of rows $m$ and $l$ exchanged. Thus, no two rows can have an equal number of $+$'s.

We've shown that no two rows contain the same number of $+$'s and that every row is unique. Since $M_n^+$ has $n$ rows of length $n-1$, for each $0 \le k \le n-1$ there exists a unique row with exactly $k$ $+$'s.

Lemma 3.9. There exists a unique row with exactly $k$ $+$'s, for each $0 \le k \le n-1$, in the matrix $M_n^-$.

Proof. $M_n^+$ and $M_n^-$ share all entries $C_{ij}$ for $1 \le i < j \le n-1$. Each of the first $n-1$ rows of $M_n^-$ contains one fewer $+$ than the corresponding row of $M_n^+$, since the $+$ diagonal entry of $M_n^+$ becomes a $-$; together with the last rows (all $-$ in $M_n^+$, all $+$ in $M_n^-$), this shows $M_n^-$ exhibits the same uniqueness property as $M_n^+$.

We can now prove our main result, Theorem 3.1.

Proof. Our goal is to show $\sum_{k=1}^{n} \Omega\left(\begin{smallmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{smallmatrix}\right) = 0$. Assume that $\frac{x_u - h_u}{a_u} - \frac{x_v - h_v}{a_v} \notin \mathbb{Z}$ whenever

$1 \le u < v \le n$ and $h_u, h_v \in \mathbb{Z}$, which allows us to implement the foregoing matrix argument. We've seen that $\sum_{k=1}^{n} \Omega\left(\begin{smallmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{smallmatrix}\right)$ gives rise to the following matrices:
\[
M_n^+ = \begin{pmatrix} + & C_{12} & C_{13} & \cdots & C_{1,n-1} \\ -C_{12} & + & C_{23} & \cdots & C_{2,n-1} \\ -C_{13} & -C_{23} & + & \cdots & C_{3,n-1} \\ \vdots & & & \ddots & \vdots \\ -C_{1,n-1} & -C_{2,n-1} & -C_{3,n-1} & \cdots & + \\ - & - & - & \cdots & - \end{pmatrix}
\]
and
\[
M_n^- = \begin{pmatrix} - & C_{12} & C_{13} & \cdots & C_{1,n-1} \\ -C_{12} & - & C_{23} & \cdots & C_{2,n-1} \\ -C_{13} & -C_{23} & - & \cdots & C_{3,n-1} \\ \vdots & & & \ddots & \vdots \\ -C_{1,n-1} & -C_{2,n-1} & -C_{3,n-1} & \cdots & - \\ + & + & + & \cdots & + \end{pmatrix},
\]

where $M_n^+$ represents the exponents of all positive terms of $\sum_{k=1}^{n} \Omega\left(\begin{smallmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{smallmatrix}\right)$ and $M_n^-$ represents all negative terms. As was shown above, proving that the matrices $M_n^+$ and $M_n^-$ contain the same rows is equivalent to showing that every positive term of the sum has a canceling negative term. Thus, our problem reduces to showing $M_n^+ = M_n^-$ after row swapping.

We proved in Lemma 3.4 that $M_n^+$ has $+$'s on the diagonal and $-$'s in the last row, and that $M_n^-$ has $-$'s on the diagonal and $+$'s in the last row; Lemma 3.6 shows both matrices exhibit antisymmetry; and Lemma 3.7 tells us that two opposite-signed entries in a row determine the sign of another entry of the matrix. These lemmas lead to Lemmas 3.8 and 3.9, which state that in each of the matrices $M_n^+$ and $M_n^-$ there exists a unique row with exactly $k$ $+$'s for every $0 \le k \le n-1$. We can use row swapping to place the unique row with $k$ $+$'s in the same position in $M_n^+$ as it appears in $M_n^-$, so that $M_n^+ = M_n^-$, which, by our previous argument, implies
\[
\sum_{k=1}^{n} \Omega\begin{pmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{pmatrix} = 0.
\]
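The whole matrix argument can be simulated: pick points $r_1, \dots, r_n$ with pairwise non-integer differences, build $M_n^+$ and $M_n^-$ as described, and observe Lemmas 3.8 and 3.9 and the row matching. A Python sketch, encoding $+$ as $1$ and $-$ as $-1$ (helper names and the choice of random points are ours):

```python
from fractions import Fraction
from random import sample

def frac(x):
    # fractional part of a Fraction
    return x - (x.numerator // x.denominator)

def sign_matrices(r):
    # Row k of M_n^+ has +1 on the diagonal and C_{ki} = sign({r_k - r_i} - {r_k - r_n})
    # elsewhere; its last row is all -1. M_n^- flips the diagonal to -1 and ends
    # with an all +1 row. Input: a list of Fractions with non-integral differences.
    n = len(r)
    def C(i, j):
        return 1 if frac(r[i] - r[j]) - frac(r[i] - r[n - 1]) > 0 else -1
    plus, minus = [], []
    for k in range(n - 1):
        row = [C(k, i) for i in range(n - 1)]
        plus.append(tuple(row[:k] + [1] + row[k + 1:]))
        minus.append(tuple(row[:k] + [-1] + row[k + 1:]))
    plus.append((-1,) * (n - 1))
    minus.append((1,) * (n - 1))
    return plus, minus

def random_points(n, q=401):
    # points r_i = k_i/q with distinct numerators and prime q, so that all
    # pairwise differences r_i - r_j (i != j) are non-integral
    return [Fraction(k, q) for k in sample(range(1, q), n)]
```

Each run confirms that both matrices contain exactly one row with $k$ plusses for every $0 \le k \le n-1$, and that the two matrices agree up to row swapping.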

Chapter 4

A New Proof of Hall, Wilson and Zagier's Reciprocity Theorem

The original proof of Hall, Wilson and Zagier's reciprocity theorem for the generalized Dedekind Rademacher sum ultimately uses cotangent identities to show
\[
\Omega\begin{pmatrix} a & b & c \\ x & y & z \\ X & Y & Z \end{pmatrix} + \Omega\begin{pmatrix} b & c & a \\ y & z & x \\ Y & Z & X \end{pmatrix} + \Omega\begin{pmatrix} c & a & b \\ z & x & y \\ Z & X & Y \end{pmatrix} = \begin{cases} -\frac{1}{4} & \text{if } (x,y,z) \in (a,b,c)\mathbb{R} + \mathbb{Z}^3, \\ 0 & \text{otherwise}. \end{cases}
\]

Here we give an example of applying the approach used to prove Theorem 3.1, simplifying the proof of Theorem 2.4.

New proof of Theorem 2.4. Let $a_1, a_2, a_3 \in \mathbb{Z}_{>0}$ be pairwise relatively prime and $x_1, x_2, x_3 \in \mathbb{R}$. For nonzero variables $y_1, y_2, y_3$ such that $y_1 + y_2 + y_3 = 0$, we want to show that
\[
\Omega\begin{pmatrix} a_1 & a_2 & a_3 \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{pmatrix} + \Omega\begin{pmatrix} a_2 & a_3 & a_1 \\ x_2 & x_3 & x_1 \\ y_2 & y_3 & y_1 \end{pmatrix} + \Omega\begin{pmatrix} a_3 & a_1 & a_2 \\ x_3 & x_1 & x_2 \\ y_3 & y_1 & y_2 \end{pmatrix} = \begin{cases} -\frac{1}{4} & \text{if } (x_1, x_2, x_3) \in (a_1, a_2, a_3)\mathbb{R} + \mathbb{Z}^3, \\ 0 & \text{otherwise}. \end{cases}
\]
Recall identity (3.3),
\[
\sum_{k=1}^{n} \Omega\begin{pmatrix} A_k & a_k \\ X_k & x_k \\ Y_k & y_k \end{pmatrix} = \sum_{k=1}^{n} \sum_H \prod_{\substack{i=1 \\ i \ne k}}^{n} \beta(r_k - r_i, y_i),
\]
where
\[
\beta(\alpha, Y) = \begin{cases} \dfrac{1}{2}\, \dfrac{e^Y + 1}{e^Y - 1} & \text{for } \alpha \in \mathbb{Z}, \\[6pt] \dfrac{e^{\{\alpha\} Y}}{e^Y - 1} & \text{for } \alpha \notin \mathbb{Z}, \end{cases}
\]

and

$$\sum_{H} = \sum_{h_1 \bmod a_1} \sum_{h_2 \bmod a_2} \cdots \sum_{h_n \bmod a_n}.$$

Then we have

$$\Omega\begin{pmatrix} a_1 & a_2 & a_3 \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{pmatrix} + \Omega\begin{pmatrix} a_2 & a_3 & a_1 \\ x_2 & x_3 & x_1 \\ y_2 & y_3 & y_1 \end{pmatrix} + \Omega\begin{pmatrix} a_3 & a_1 & a_2 \\ x_3 & x_1 & x_2 \\ y_3 & y_1 & y_2 \end{pmatrix} = \sum_{k=1}^{3} \sum_{H} \prod_{i \neq k} \beta(r_k - r_i, y_i).$$

We must examine the following cases:

(i) $(x_1, x_2, x_3) \in (a_1, a_2, a_3)\mathbb{R} + \mathbb{Z}^3$;

(ii) $(x_i, x_j) \in (a_i, a_j)\mathbb{R} + \mathbb{Z}^2$ for some $1 \le i < j \le 3$, but not (i);

(iii) none of the above.

We begin with case (i). Let $(x_1, x_2, x_3) \in (a_1, a_2, a_3)\mathbb{R} + \mathbb{Z}^3$. Then $x_i = \lambda a_i + z_i$ for each $i$, where $\lambda \in \mathbb{R}$ and $z_i \in \mathbb{Z}$. Thus

$$\frac{h_i + x_i}{a_i} - \frac{h_j + x_j}{a_j} = \frac{h_i + \lambda a_i + z_i}{a_i} - \frac{h_j + \lambda a_j + z_j}{a_j} = \frac{h_i + z_i}{a_i} - \frac{h_j + z_j}{a_j}. \tag{4.1}$$

Since adding $z_i$ to $h_i$ only permutes the modular values of $h_i$, we can make a change in

indices (letting $h_i' = h_i + z_i$) so that (4.1) becomes $\frac{h_i'}{a_i} - \frac{h_j'}{a_j}$ and

$$\Omega\begin{pmatrix} a_1 & a_2 & a_3 \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{pmatrix} + \Omega\begin{pmatrix} a_2 & a_3 & a_1 \\ x_2 & x_3 & x_1 \\ y_2 & y_3 & y_1 \end{pmatrix} + \Omega\begin{pmatrix} a_3 & a_1 & a_2 \\ x_3 & x_1 & x_2 \\ y_3 & y_1 & y_2 \end{pmatrix} = \sum_{k=1}^{3} \sum_{H} \prod_{i \neq k} \beta(r_k - r_i, y_i) = \sum_{k=1}^{3} \sum_{H} \prod_{i \neq k} \beta\!\left(\frac{h_k}{a_k} - \frac{h_i}{a_i},\, y_i\right),$$

since we sum each $h_i$ over a complete residue system mod $a_i$ (and we drop the primes). Since $(a_i, a_j) = 1$, the difference $\frac{h_i}{a_i} - \frac{h_j}{a_j} \in \mathbb{Z}$ occurs only when $h_i = h_j = 0$. Thus, we can split up the sum $\sum_{k=1}^{3} \sum_{H} \prod_{i \neq k} \beta\!\left(\frac{h_k}{a_k} - \frac{h_i}{a_i}, y_i\right)$ into two parts, a term when all

$h_i = 0$ and a sum over the remaining $h_i$ where $(h_1, h_2, h_3) \neq (0,0,0)$:

$$\sum_{k=1}^{3} \sum_{H} \prod_{i \neq k} \beta\!\left(\frac{h_k}{a_k} - \frac{h_i}{a_i},\, y_i\right) = \beta(0, y_2)\beta(0, y_3) + \beta(0, y_1)\beta(0, y_3) + \beta(0, y_1)\beta(0, y_2)$$
$$\qquad + \sum_{\substack{H \\ (h_1,h_2,h_3) \neq (0,0,0)}} \left[ \beta\!\left(\tfrac{h_1}{a_1} - \tfrac{h_2}{a_2}, y_2\right)\beta\!\left(\tfrac{h_1}{a_1} - \tfrac{h_3}{a_3}, y_3\right) + \beta\!\left(\tfrac{h_2}{a_2} - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(\tfrac{h_2}{a_2} - \tfrac{h_3}{a_3}, y_3\right) + \beta\!\left(\tfrac{h_3}{a_3} - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(\tfrac{h_3}{a_3} - \tfrac{h_2}{a_2}, y_2\right) \right].$$

Let us first address the term when all $h_i = 0$. We use the following cotangent identities:

$$\cot(y) = i\,\frac{e^{2iy}+1}{e^{2iy}-1}, \tag{4.2}$$

$$\cot(\alpha) + \cot(\beta) = \frac{\cot(\alpha)\cot(\beta) - 1}{\cot(\alpha+\beta)}. \tag{4.3}$$

Combined with the definition of $\beta$, identity (4.2) gives

$$\beta(0, y) = \frac{1}{2i}\cot\left(\frac{y}{2i}\right). \tag{4.4}$$
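These identities are easy to sanity-check numerically. The sketch below (ours, not the thesis's) verifies (4.2) and (4.4), together with the consequence of (4.3) used in the computation that follows: whenever $a+b+c=0$, the pairwise products satisfy $\cot a\cot b + \cot b\cot c + \cot c\cot a = 1$.

```python
# Numeric sanity check (illustrative) of the cotangent identities (4.2)-(4.4)
# and of the pairwise-product identity used for the h_i = 0 terms.
import cmath

def cot(z):
    return cmath.cos(z) / cmath.sin(z)

def beta0(y):
    # beta(alpha, y) for integral alpha, per identity (3.3)
    return 0.5 * (cmath.exp(y) + 1) / (cmath.exp(y) - 1)

y = 0.7 + 0.3j
# (4.2): cot(y) = i(e^{2iy}+1)/(e^{2iy}-1)
assert abs(cot(y) - 1j * (cmath.exp(2j * y) + 1) / (cmath.exp(2j * y) - 1)) < 1e-9
# (4.4): beta(0, y) = cot(y/(2i)) / (2i)
assert abs(beta0(y) - cot(y / 2j) / 2j) < 1e-9

# For a + b + c = 0: cot(a)cot(b) + cot(b)cot(c) + cot(c)cot(a) = 1,
# which is why the h_i = 0 terms contribute a constant.
a, b = 0.4, 1.1
c = -(a + b)
assert abs(cot(a) * cot(b) + cot(b) * cot(c) + cot(c) * cot(a) - 1) < 1e-9
```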

Let $y_k^* = \frac{y_k}{2i}$. Then

$$\beta(0, y_2)\beta(0, y_3) + \beta(0, y_1)\beta(0, y_3) + \beta(0, y_1)\beta(0, y_2) = -\frac{1}{4}\left(\cot y_2^* \cot y_3^* + \cot y_1^* \cot y_3^* + \cot y_1^* \cot y_2^*\right)$$
$$= -\frac{1}{4}\left(\cot y_2^*\left(\cot y_3^* + \cot y_1^*\right) + \cot y_1^* \cot y_3^*\right) = -\frac{1}{4}\left(\cot y_2^*\,\frac{\cot y_1^* \cot y_3^* - 1}{\cot(y_1^* + y_3^*)} + \cot y_1^* \cot y_3^*\right).$$

By assumption, $y_1 + y_2 + y_3 = 0$, so it follows that $y_1^* + y_3^* = -y_2^*$ and $\cot(y_1^* + y_3^*) = \cot(-y_2^*) = -\cot y_2^*$. Thus,

$$-\frac{1}{4}\left(\cot y_2^*\,\frac{\cot y_1^* \cot y_3^* - 1}{\cot(y_1^* + y_3^*)} + \cot y_1^* \cot y_3^*\right) = -\frac{1}{4}\left(\cot y_2^*\,\frac{\cot y_1^* \cot y_3^* - 1}{-\cot y_2^*} + \cot y_1^* \cot y_3^*\right) = -\frac{1}{4}\left(\left(-\cot y_1^* \cot y_3^* + 1\right) + \cot y_1^* \cot y_3^*\right) = -\frac{1}{4}.$$

It is left to show that the summands over $H \setminus \{(0,0,0)\}$ vanish. But we have seen this

sum before. It is just

$$\sum_{k=1}^{3} \sum_{H \setminus \{(0,0,0)\}} \prod_{i \neq k} \beta(r_k - r_i, y_i)$$

with $(x_1, x_2, x_3) = (0,0,0)$. Since then $r_k - r_i \notin \mathbb{Z}$, we can apply the same matrix argument from the proof of Theorem 3.1, and the terms cancel.

Assume case (ii). Let $(x_i, x_j) \in (a_i, a_j)\mathbb{R} + \mathbb{Z}^2$ for some $1 \le i < j \le 3$, but not (i). Without loss of generality, we assume $(x_1, x_2) \in (a_1, a_2)\mathbb{R} + \mathbb{Z}^2$. Then, as before, $x_1 = \lambda a_1 + z_1$ and $x_2 = \lambda a_2 + z_2$ for some $\lambda \in \mathbb{R}$ and $z_1, z_2 \in \mathbb{Z}$, so that

$$\frac{h_1 + x_1}{a_1} - \frac{h_2 + x_2}{a_2} = \frac{h_1 + \lambda a_1 + z_1}{a_1} - \frac{h_2 + \lambda a_2 + z_2}{a_2} = \frac{h_1 + z_1}{a_1} - \frac{h_2 + z_2}{a_2}.$$

Since $z_1$ and $z_2$ merely permute the summands over $h_1$ and $h_2$, we introduce a change of variables and let $h_1' = h_1 + z_1$ and $h_2' = h_2 + z_2$. We can rewrite the differences involving $r_3$ as

$$\frac{h_i + x_i}{a_i} - \frac{h_3 + x_3}{a_3} = \frac{h_i'}{a_i} + \lambda - \frac{h_3 + x_3}{a_3} = \frac{h_i'}{a_i} - \frac{h_3 + x_3 - \lambda a_3}{a_3} = \frac{h_i'}{a_i} - r_3,$$

where we now abbreviate $r_3 = \frac{h_3 + x_3 - \lambda a_3}{a_3}$.

Then

$$\Omega\begin{pmatrix} a_1 & a_2 & a_3 \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{pmatrix} + \Omega\begin{pmatrix} a_2 & a_3 & a_1 \\ x_2 & x_3 & x_1 \\ y_2 & y_3 & y_1 \end{pmatrix} + \Omega\begin{pmatrix} a_3 & a_1 & a_2 \\ x_3 & x_1 & x_2 \\ y_3 & y_1 & y_2 \end{pmatrix} = \sum_{k=1}^{3} \sum_{H} \prod_{i \neq k} \beta(r_k - r_i, y_i)$$

$$= \sum_{h_1 \bmod a_1} \sum_{h_2 \bmod a_2} \sum_{h_3 \bmod a_3} \left[ \beta\!\left(\tfrac{h_1}{a_1} - \tfrac{h_2}{a_2}, y_2\right)\beta\!\left(\tfrac{h_1}{a_1} - r_3, y_3\right) + \beta\!\left(\tfrac{h_2}{a_2} - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(\tfrac{h_2}{a_2} - r_3, y_3\right) + \beta\!\left(r_3 - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(r_3 - \tfrac{h_2}{a_2}, y_2\right) \right],$$

where we again drop the primes on $h_1$ and $h_2$. We will again split up the final expression into two parts: one part includes all terms where $h_1 = h_2 = 0$, and the other part is all summands where $h_1$ and $h_2$ are not

both zero:

$$\sum_{h_3 \bmod a_3} \left[ \beta(0, y_2)\beta(-r_3, y_3) + \beta(0, y_1)\beta(-r_3, y_3) + \beta(r_3, y_1)\beta(r_3, y_2) \right]$$
$$\qquad + \sum_{\substack{H \\ (h_1,h_2) \neq (0,0)}} \left[ \beta\!\left(\tfrac{h_1}{a_1} - \tfrac{h_2}{a_2}, y_2\right)\beta\!\left(\tfrac{h_1}{a_1} - r_3, y_3\right) + \beta\!\left(\tfrac{h_2}{a_2} - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(\tfrac{h_2}{a_2} - r_3, y_3\right) + \beta\!\left(r_3 - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(r_3 - \tfrac{h_2}{a_2}, y_2\right) \right].$$

First, we show

$$\sum_{h_3 \bmod a_3} \left[ \beta(0, y_2)\beta(-r_3, y_3) + \beta(0, y_1)\beta(-r_3, y_3) + \beta(r_3, y_1)\beta(r_3, y_2) \right] = 0.$$

We compute

$$\sum_{h_3 \bmod a_3} \left[ \beta(0, y_2)\beta(-r_3, y_3) + \beta(0, y_1)\beta(-r_3, y_3) + \beta(r_3, y_1)\beta(r_3, y_2) \right]$$
$$= \sum_{h_3 \bmod a_3} \left[ \frac{1}{2}\,\frac{e^{y_2}+1}{e^{y_2}-1}\,\frac{e^{\{-r_3\}y_3}}{e^{y_3}-1} + \frac{1}{2}\,\frac{e^{y_1}+1}{e^{y_1}-1}\,\frac{e^{\{-r_3\}y_3}}{e^{y_3}-1} + \frac{e^{\{r_3\}y_1}}{e^{y_1}-1}\,\frac{e^{\{r_3\}y_2}}{e^{y_2}-1} \right].$$

After multiplying each term by the common denominator and combining terms, we get the numerator

$$\sum_{h_3 \bmod a_3} \Big[ e^{y_1+y_2+\{-r_3\}y_3} - e^{y_2+\{-r_3\}y_3} + e^{y_1+\{-r_3\}y_3} - e^{\{-r_3\}y_3} + e^{y_1+y_2+\{-r_3\}y_3} - e^{y_1+\{-r_3\}y_3} + e^{y_2+\{-r_3\}y_3} - e^{\{-r_3\}y_3} + 2e^{\{r_3\}y_1+\{r_3\}y_2+y_3} - 2e^{\{r_3\}y_1+\{r_3\}y_2} \Big]$$

(writing all exponents in terms of $y_1$ and $y_2$, using $y_3 = -y_1-y_2$)

$$= \sum_{h_3 \bmod a_3} \Big[ e^{\{r_3\}y_1+\{r_3\}y_2} - e^{-\{-r_3\}y_1+\{r_3\}y_2} + e^{\{r_3\}y_1-\{-r_3\}y_2} - e^{-\{-r_3\}y_1-\{-r_3\}y_2}$$
$$\qquad + e^{\{r_3\}y_1+\{r_3\}y_2} - e^{\{r_3\}y_1-\{-r_3\}y_2} + e^{-\{-r_3\}y_1+\{r_3\}y_2} - e^{-\{-r_3\}y_1-\{-r_3\}y_2} + 2e^{-\{-r_3\}y_1-\{-r_3\}y_2} - 2e^{\{r_3\}y_1+\{r_3\}y_2} \Big] = 0.$$
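This cancellation can be spot-checked numerically. The sketch below (ours, not the thesis's) verifies that the three $\beta$-products cancel for a non-integral $r_3$ — in fact for each fixed value of $r_3$, before summing over $h_3$.

```python
# Numeric check (illustrative) that
#   beta(0,y2)beta(-r,y3) + beta(0,y1)beta(-r,y3) + beta(r,y1)beta(r,y2) = 0
# whenever y1 + y2 + y3 = 0 and r is not an integer.
import cmath

def beta(alpha, y):
    # beta from identity (3.3): integral vs. non-integral first argument
    if abs(alpha - round(alpha)) < 1e-12:
        return 0.5 * (cmath.exp(y) + 1) / (cmath.exp(y) - 1)
    return cmath.exp((alpha % 1.0) * y) / (cmath.exp(y) - 1)  # {alpha} = alpha % 1

y1, y2 = 0.6 + 0.2j, -0.3 + 0.5j
y3 = -(y1 + y2)
r = 2.37  # any non-integer
total = (beta(0, y2) * beta(-r, y3)
         + beta(0, y1) * beta(-r, y3)
         + beta(r, y1) * beta(r, y2))
assert abs(total) < 1e-9
```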

We must now show

$$\sum_{\substack{H \\ (h_1,h_2) \neq (0,0)}} \left[ \beta\!\left(\tfrac{h_1}{a_1} - \tfrac{h_2}{a_2}, y_2\right)\beta\!\left(\tfrac{h_1}{a_1} - r_3, y_3\right) + \beta\!\left(\tfrac{h_2}{a_2} - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(\tfrac{h_2}{a_2} - r_3, y_3\right) + \beta\!\left(r_3 - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(r_3 - \tfrac{h_2}{a_2}, y_2\right) \right] = 0.$$

Since $\pm\left(\tfrac{h_1}{a_1} - \tfrac{h_2}{a_2}\right) \notin \mathbb{Z}$ and $\pm\left(\tfrac{h_i}{a_i} - r_3\right) \notin \mathbb{Z}$, we see that this is just another special case of our proof of Theorem 3.1, with $(x_1, x_2, x_3) = (0, 0, x_3 - \lambda a_3)$. Thus, these terms vanish.

Last, we apply Theorem 3.1 to the final case, in which no argument of $\beta$ is an integer, so all terms cancel.

Chapter 5

The 4-variable reciprocity theorem

We have shown that the generic case of Hall-Wilson-Zagier's reciprocity theorem can be generalized to the n-variable case using a simple combinatorial argument. The remaining conditions left to deal with in the n-variable case cannot be addressed so easily, as shown by Bayad and Raouj [3], who use involved number theory to prove a reciprocity theorem for the multiple Dedekind-Rademacher sum. For n = 4, most of the remaining conditions can be dealt with, and the following theorem is revealed.

Theorem 5.1. Let $a_1, a_2, a_3, a_4 \in \mathbb{Z}_{>0}$, let $x_1, x_2, x_3, x_4 \in \mathbb{R}$, and let $y_1, y_2, y_3, y_4$ be four nonzero variables such that $y_1 + y_2 + y_3 + y_4 = 0$. Then

$$\sum_{k=1}^{4} \Omega\begin{pmatrix} A_k \\ X_k \\ Y_k \end{pmatrix} = -\frac{1}{8i}\left( \cot\left(\frac{y_1}{2i}\right) + \cot\left(\frac{y_2}{2i}\right) + \cot\left(\frac{y_3}{2i}\right) + \cot\left(\frac{y_4}{2i}\right) \right)$$

if $(x_1, x_2, x_3, x_4) \in (a_1, a_2, a_3, a_4)\mathbb{R} + \mathbb{Z}^4$. The sum vanishes for all other cases, except possibly when $(x_i, x_j, x_k) \in (a_i, a_j, a_k)\mathbb{R} + \mathbb{Z}^3$ and

$$\left( \frac{h_i + x_i}{a_i} - \frac{h_l + x_l}{a_l},\ \frac{h_j + x_j}{a_j} - \frac{h_l + x_l}{a_l},\ \frac{h_k + x_k}{a_k} - \frac{h_l + x_l}{a_l} \right) \notin \mathbb{Z}^3$$

for all $h_i, h_j, h_k, h_l \in \mathbb{Z}$, where $1 \le i < j < k < l \le 4$. In the unknown case we believe the sum has the potential to simplify. After the proof of Theorem 5.1 we will show the challenge the unknown case poses in proving that the sum has a closed formula, and ultimately the difficulty in generalizing all cases to n variables.

Proof. First, recall identity (3.3),

$$\sum_{k=1}^{4} \Omega\begin{pmatrix} A_k \\ X_k \\ Y_k \end{pmatrix} = \sum_{k=1}^{4} \sum_{H} \prod_{i \neq k} \beta(r_k - r_i, y_i),$$

where

$$\beta(\alpha, Y) = \begin{cases} \dfrac{1}{2}\,\dfrac{e^{Y}+1}{e^{Y}-1} & \text{for } \alpha \in \mathbb{Z}, \\[4pt] \dfrac{e^{\{\alpha\}Y}}{e^{Y}-1} & \text{for } \alpha \notin \mathbb{Z}, \end{cases} \qquad \text{and} \qquad \sum_{H} = \sum_{h_1 \bmod a_1} \sum_{h_2 \bmod a_2} \cdots \sum_{h_n \bmod a_n}.$$

Our approach to proving Theorem 5.1 will mimic that of our new proof of Theorem 2.4 outlined previously. Thus, we must check the following cases:

(i) $(x_1, x_2, x_3, x_4) \in (a_1, a_2, a_3, a_4)\mathbb{R} + \mathbb{Z}^4$;

(ii) $(x_i, x_j, x_k) \in (a_i, a_j, a_k)\mathbb{R} + \mathbb{Z}^3$ for some $1 \le i < j < k \le 4$, but not (i);

(iii) $(x_i, x_j) \in (a_i, a_j)\mathbb{R} + \mathbb{Z}^2$ for some $1 \le i < j \le 4$, but not (i) and not (ii);

(iv) none of the above.

Cases (i), (iii), and (iv) are covered in the statement of Theorem 5.1 and will be proven here; case (ii) will be discussed after the proof. First, we see that case (iv) follows from Theorem 3.1.

Assume case (i). Let $(x_1, x_2, x_3, x_4) \in (a_1, a_2, a_3, a_4)\mathbb{R} + \mathbb{Z}^4$. As in our simplified proof of Theorem 2.4, our concern is whether $\frac{h_i}{a_i} - \frac{h_j}{a_j} \in \mathbb{Z}$. Since, after introducing a change of variables, this occurs only when $h_i = h_j = 0$, we can split the sum

$\sum_{k=1}^{4} \Omega\begin{pmatrix} A_k \\ X_k \\ Y_k \end{pmatrix}$ into the terms where $h_i = 0$ for all $i = 1,2,3,4$ and the remaining terms where not all $h_i = 0$:

$$\sum_{k=1}^{4} \Omega\begin{pmatrix} A_k \\ X_k \\ Y_k \end{pmatrix} = \sum_{k=1}^{4} \sum_{H} \prod_{i \neq k} \beta(r_k - r_i, y_i)$$
$$= \beta(0,y_2)\beta(0,y_3)\beta(0,y_4) + \beta(0,y_1)\beta(0,y_3)\beta(0,y_4) + \beta(0,y_1)\beta(0,y_2)\beta(0,y_4) + \beta(0,y_1)\beta(0,y_2)\beta(0,y_3)$$
$$\qquad + \sum_{k=1}^{4} \sum_{H \setminus \{(0,0,0,0)\}} \prod_{i \neq k} \beta\!\left( \frac{h_k}{a_k} - \frac{h_i}{a_i},\, y_i \right).$$

Since in the last sum $\frac{h_k}{a_k} - \frac{h_i}{a_i} \notin \mathbb{Z}$, we can apply the idea of the proof of Theorem 3.1, and

$$\sum_{k=1}^{4} \sum_{H \setminus \{(0,0,0,0)\}} \prod_{i \neq k} \beta\!\left( \frac{h_k}{a_k} - \frac{h_i}{a_i},\, y_i \right) = 0.$$

The interesting part consists of the remaining terms. We will now show that the remaining terms in fact simplify to a compact finite sum.
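The simplification below rests on a four-angle analogue of the cotangent identity: when $a+b+c+d=0$, the four products of three cotangents sum to $\cot a + \cot b + \cot c + \cot d$. A quick numeric spot-check (ours, not the thesis's):

```python
# Numeric check (illustrative) of the cotangent identity behind case (i):
# for a + b + c + d = 0,
#   cot(a)cot(b)cot(c) + cot(a)cot(b)cot(d) + cot(a)cot(c)cot(d)
#     + cot(b)cot(c)cot(d)  =  cot(a) + cot(b) + cot(c) + cot(d).
import math
from itertools import combinations

def cot(x):
    return math.cos(x) / math.sin(x)

angles = [0.3, 0.5, 0.7, -1.5]          # sums to zero
assert abs(sum(angles)) < 1e-12
lhs = sum(cot(p) * cot(q) * cot(r) for p, q, r in combinations(angles, 3))
rhs = sum(cot(x) for x in angles)
assert abs(lhs - rhs) < 1e-9
```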

Let $y_k^* = \frac{y_k}{2i}$. Then

$$\beta(0,y_2)\beta(0,y_3)\beta(0,y_4) + \beta(0,y_1)\beta(0,y_3)\beta(0,y_4) + \beta(0,y_1)\beta(0,y_2)\beta(0,y_4) + \beta(0,y_1)\beta(0,y_2)\beta(0,y_3)$$
$$= -\frac{1}{8i}\left( \cot y_2^* \cot y_3^* \cot y_4^* + \cot y_1^* \cot y_3^* \cot y_4^* + \cot y_1^* \cot y_2^* \cot y_4^* + \cot y_1^* \cot y_2^* \cot y_3^* \right)$$
$$= -\frac{1}{8i}\left( \cot y_3^* \cot y_4^* \left( \cot y_2^* + \cot y_1^* \right) + \cot y_1^* \cot y_2^* \left( \cot y_3^* + \cot y_4^* \right) \right)$$

(applying the cotangent identity (4.3) to the sums $\cot y_1^* + \cot y_2^*$ and $\cot y_3^* + \cot y_4^*$)

$$= -\frac{1}{8i}\left( \cot y_3^* \cot y_4^*\,\frac{\cot y_1^* \cot y_2^* - 1}{\cot(y_1^* + y_2^*)} + \cot y_1^* \cot y_2^*\,\frac{\cot y_3^* \cot y_4^* - 1}{\cot(y_3^* + y_4^*)} \right)$$

(since $y_1 + y_2 + y_3 + y_4 = 0$, we have $\cot(y_3^* + y_4^*) = \cot(-y_1^* - y_2^*) = -\cot(y_1^* + y_2^*)$)

$$= -\frac{1}{8i}\,\frac{\cot y_3^* \cot y_4^* \left( \cot y_1^* \cot y_2^* - 1 \right) - \cot y_1^* \cot y_2^* \left( \cot y_3^* \cot y_4^* - 1 \right)}{\cot(y_1^* + y_2^*)} = -\frac{1}{8i}\,\frac{\cot y_1^* \cot y_2^* - \cot y_3^* \cot y_4^*}{\cot(y_1^* + y_2^*)}$$

(solving the cotangent identity (4.3) for $\cot(\alpha)\cot(\beta)$ and applying it to both cotangent products in the numerator)

$$= -\frac{1}{8i}\,\frac{\cot(y_1^* + y_2^*)\left( \cot y_1^* + \cot y_2^* \right) + 1 - \left( \cot(y_3^* + y_4^*)\left( \cot y_3^* + \cot y_4^* \right) + 1 \right)}{\cot(y_1^* + y_2^*)}$$
$$= -\frac{1}{8i}\,\frac{\cot(y_1^* + y_2^*)\left( \cot y_1^* + \cot y_2^* \right) + \cot(y_1^* + y_2^*)\left( \cot y_3^* + \cot y_4^* \right)}{\cot(y_1^* + y_2^*)}$$
$$= -\frac{1}{8i}\left( \cot y_1^* + \cot y_2^* + \cot y_3^* + \cot y_4^* \right).$$

Next, we assume case (iii). Let $(x_i, x_j) \in (a_i, a_j)\mathbb{R} + \mathbb{Z}^2$ for some $1 \le i < j \le 4$, and not case (i) and not case (ii). Without loss of generality, assume $(x_1, x_2) \in$

$(a_1, a_2)\mathbb{R} + \mathbb{Z}^2$. Then, again, we can divide the sum $\sum_{k=1}^{4} \Omega\begin{pmatrix} A_k \\ X_k \\ Y_k \end{pmatrix}$ into terms where $h_1 = h_2 = 0$ and the remaining terms where $h_1$ and $h_2$ are not both zero:

$$\sum_{k=1}^{4} \Omega\begin{pmatrix} A_k \\ X_k \\ Y_k \end{pmatrix} = \sum_{h_3 \bmod a_3} \sum_{h_4 \bmod a_4} \Big[ \beta(0, y_2)\beta(-r_3, y_3)\beta(-r_4, y_4) + \beta(0, y_1)\beta(-r_3, y_3)\beta(-r_4, y_4)$$
$$\qquad + \beta(r_3, y_1)\beta(r_3, y_2)\beta(r_3 - r_4, y_4) + \beta(r_4, y_1)\beta(r_4, y_2)\beta(r_4 - r_3, y_3) \Big]$$
$$+ \sum_{\substack{H \\ (h_1,h_2) \neq (0,0)}} \Big[ \beta\!\left(\tfrac{h_1}{a_1} - \tfrac{h_2}{a_2}, y_2\right)\beta\!\left(\tfrac{h_1}{a_1} - r_3, y_3\right)\beta\!\left(\tfrac{h_1}{a_1} - r_4, y_4\right) + \beta\!\left(\tfrac{h_2}{a_2} - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(\tfrac{h_2}{a_2} - r_3, y_3\right)\beta\!\left(\tfrac{h_2}{a_2} - r_4, y_4\right)$$
$$\qquad + \beta\!\left(r_3 - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(r_3 - \tfrac{h_2}{a_2}, y_2\right)\beta(r_3 - r_4, y_4) + \beta\!\left(r_4 - \tfrac{h_1}{a_1}, y_1\right)\beta\!\left(r_4 - \tfrac{h_2}{a_2}, y_2\right)\beta(r_4 - r_3, y_3) \Big].$$

Again, we see that all $\beta$-functions in the latter sum are evaluated at non-integer values, and so that term vanishes by the proof method of Theorem 3.1. We now employ a matrix argument similar to the one used in the proof of Theorem 3.1 to prove that the first summand equals zero.

$$\sum_{h_3 \bmod a_3} \sum_{h_4 \bmod a_4} \Big[ \beta(0, y_2)\beta(-r_3, y_3)\beta(-r_4, y_4) + \beta(0, y_1)\beta(-r_3, y_3)\beta(-r_4, y_4) + \beta(r_3, y_1)\beta(r_3, y_2)\beta(r_3 - r_4, y_4) + \beta(r_4, y_1)\beta(r_4, y_2)\beta(r_4 - r_3, y_3) \Big]$$
$$= \sum_{h_3 \bmod a_3} \sum_{h_4 \bmod a_4} \left[ \frac{1}{2}\frac{e^{y_2}+1}{e^{y_2}-1}\frac{e^{\{-r_3\}y_3}}{e^{y_3}-1}\frac{e^{\{-r_4\}y_4}}{e^{y_4}-1} + \frac{1}{2}\frac{e^{y_1}+1}{e^{y_1}-1}\frac{e^{\{-r_3\}y_3}}{e^{y_3}-1}\frac{e^{\{-r_4\}y_4}}{e^{y_4}-1} + \frac{e^{\{r_3\}y_1}}{e^{y_1}-1}\frac{e^{\{r_3\}y_2}}{e^{y_2}-1}\frac{e^{\{r_3-r_4\}y_4}}{e^{y_4}-1} + \frac{e^{\{r_4\}y_1}}{e^{y_1}-1}\frac{e^{\{r_4\}y_2}}{e^{y_2}-1}\frac{e^{\{r_4-r_3\}y_3}}{e^{y_3}-1} \right].$$

Putting everything over the common denominator, this equals

$$\sum_{h_3 \bmod a_3} \sum_{h_4 \bmod a_4} \Big[ e^{y_1+y_2+\{-r_3\}y_3+\{-r_4\}y_4} - e^{y_2+\{-r_3\}y_3+\{-r_4\}y_4} + e^{y_1+\{-r_3\}y_3+\{-r_4\}y_4} - e^{\{-r_3\}y_3+\{-r_4\}y_4}$$
$$+ e^{y_1+y_2+\{-r_3\}y_3+\{-r_4\}y_4} - e^{y_1+\{-r_3\}y_3+\{-r_4\}y_4} + e^{y_2+\{-r_3\}y_3+\{-r_4\}y_4} - e^{\{-r_3\}y_3+\{-r_4\}y_4}$$
$$+ 2e^{\{r_3\}y_1+\{r_3\}y_2+y_3+\{r_3-r_4\}y_4} - 2e^{\{r_3\}y_1+\{r_3\}y_2+\{r_3-r_4\}y_4} + 2e^{\{r_4\}y_1+\{r_4\}y_2+\{r_4-r_3\}y_3+y_4} - 2e^{\{r_4\}y_1+\{r_4\}y_2+\{r_4-r_3\}y_3} \Big]$$

divided by $2\left(e^{y_1}-1\right)\left(e^{y_2}-1\right)\left(e^{y_3}-1\right)\left(e^{y_4}-1\right)$.

We write the final equality in terms of $y_1$, $y_2$ and $y_3$ (using $y_4 = -y_1-y_2-y_3$) and get the numerator

$$\sum_{h_3 \bmod a_3} \sum_{h_4 \bmod a_4} \Big[\, 2e^{\{r_4\}y_1+\{r_4\}y_2+(\{-r_3\}-\{-r_4\})y_3} - 2e^{-\{-r_4\}y_1-\{-r_4\}y_2+(\{-r_3\}-\{-r_4\})y_3}$$
$$+ e^{\{r_4\}y_1-\{-r_4\}y_2+(\{-r_3\}-\{-r_4\})y_3} - e^{-\{-r_4\}y_1+\{r_4\}y_2+(\{-r_3\}-\{-r_4\})y_3} + e^{-\{-r_4\}y_1+\{r_4\}y_2+(\{-r_3\}-\{-r_4\})y_3} - e^{\{r_4\}y_1-\{-r_4\}y_2+(\{-r_3\}-\{-r_4\})y_3}$$
$$+ 2e^{(\{r_3\}-\{r_3-r_4\})y_1+(\{r_3\}-\{r_3-r_4\})y_2+\{r_4-r_3\}y_3} - 2e^{(\{r_3\}-\{r_3-r_4\})y_1+(\{r_3\}-\{r_3-r_4\})y_2-\{r_3-r_4\}y_3}$$
$$+ 2e^{-\{-r_4\}y_1-\{-r_4\}y_2-\{r_3-r_4\}y_3} - 2e^{\{r_4\}y_1+\{r_4\}y_2+\{r_4-r_3\}y_3} \,\Big].$$

Let every term in an exponent be represented by its sign. Then coefficients of the form $\{\cdot\}$ are positive and recorded as $+$, coefficients of the form $-\{\cdot\}$ are recorded as $-$, and we set

$$\operatorname{sign}\big(\{-r_3\}-\{-r_4\}\big) = C_{03}, \qquad \operatorname{sign}\big(\{r_3\}-\{r_3-r_4\}\big) = C_{30},$$

and let $\operatorname{sign}\big(\{r_3-r_4\}\big) = \operatorname{sign}\big(\{r_4-r_3\}\big) = +$.
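Before organizing these signs into matrices, one can spot-check numerically that the four $\beta$-products do cancel; in fact they cancel for each fixed non-integral pair $r_3, r_4$. The sketch below is illustrative and not from the thesis.

```python
# Numeric check (illustrative): for y1 + y2 + y3 + y4 = 0 and non-integral
# r3, r4 with r3 - r4 non-integral,
#   beta(0,y2)beta(-r3,y3)beta(-r4,y4) + beta(0,y1)beta(-r3,y3)beta(-r4,y4)
#   + beta(r3,y1)beta(r3,y2)beta(r3-r4,y4)
#   + beta(r4,y1)beta(r4,y2)beta(r4-r3,y3)  =  0.
import cmath

def beta(alpha, y):
    if abs(alpha - round(alpha)) < 1e-12:
        return 0.5 * (cmath.exp(y) + 1) / (cmath.exp(y) - 1)
    return cmath.exp((alpha % 1.0) * y) / (cmath.exp(y) - 1)  # {alpha} = alpha % 1

y1, y2, y3 = 0.5 + 0.1j, -0.2 + 0.4j, 0.3 - 0.6j
y4 = -(y1 + y2 + y3)
r3, r4 = 0.41, 2.77
total = (beta(0, y2) * beta(-r3, y3) * beta(-r4, y4)
         + beta(0, y1) * beta(-r3, y3) * beta(-r4, y4)
         + beta(r3, y1) * beta(r3, y2) * beta(r3 - r4, y4)
         + beta(r4, y1) * beta(r4, y2) * beta(r4 - r3, y3))
assert abs(total) < 1e-9
```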

We can construct the following matrices:

$$M_4 = \begin{pmatrix} + & + & C_{03} \\ + & - & C_{03} \\ - & + & C_{03} \\ C_{30} & C_{30} & + \\ - & - & - \end{pmatrix} \qquad \text{and} \qquad \overline{M}_4 = \begin{pmatrix} - & - & C_{03} \\ - & + & C_{03} \\ + & - & C_{03} \\ C_{30} & C_{30} & - \\ + & + & + \end{pmatrix},$$

where $M_4$ consists of the vectors representing the exponents of the positive terms in the numerator (each distinct sign vector listed once) and $\overline{M}_4$ consists of the vectors representing the exponents of the negative terms in the numerator. We can apply Lemmas 3.2 and 3.3 to the differences $\{-r_3\}-\{-r_4\}$ and $\{r_3\}-\{r_3-r_4\}$, with $a = 0$ and $b = 0$, respectively. Then we can make a statement about the entries $C_{03}$ and $C_{30}$.

Lemma 5.2. $C_{03} = +$ if and only if $C_{30} = -$.

Proof. Assume $C_{03} = +$. This means the difference

$\{-r_3\}-\{-r_4\}$ is positive and, by Lemma 3.2, $\{-r_3\}-\{-r_4\} = \{r_4 - r_3\}$. Then

$$\{r_3\} - \{r_3 - r_4\} = \{r_3\} - \big(1 - \{r_4 - r_3\}\big) = \{r_3\} + \{r_4 - r_3\} - 1 = \{r_4\} - 1 = -\{-r_4\}.$$

Thus, $\{r_3\} - \{r_3 - r_4\}$ is negative, and so $C_{30} = -$.

Thus $C_{30} = -C_{03}$, and the matrices become

$$M_4 = \begin{pmatrix} + & + & C_{03} \\ + & - & C_{03} \\ - & + & C_{03} \\ -C_{03} & -C_{03} & + \\ - & - & - \end{pmatrix}, \qquad \overline{M}_4 = \begin{pmatrix} - & - & C_{03} \\ - & + & C_{03} \\ + & - & C_{03} \\ -C_{03} & -C_{03} & - \\ + & + & + \end{pmatrix}.$$
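With $C_{30} = -C_{03}$, the row comparison can be carried out mechanically. The sketch below (ours; it assumes one consistent reconstruction of the five distinct sign rows) confirms that the two matrices contain the same rows for either choice of $C_{03}$.

```python
# Check (illustrative, under an assumed reconstruction of the sign rows):
# with C30 = -C03, the matrices M4 and M4-bar have the same rows up to
# row swapping, for both C03 = +1 and C03 = -1.
for c03 in (+1, -1):
    c30 = -c03
    m4 = [(+1, +1, c03), (+1, -1, c03), (-1, +1, c03),
          (c30, c30, +1), (-1, -1, -1)]
    m4_bar = [(-1, -1, c03), (-1, +1, c03), (+1, -1, c03),
              (c30, c30, -1), (+1, +1, +1)]
    assert sorted(m4) == sorted(m4_bar)
```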

It is left to show that $M_4 = \overline{M}_4$ after row swapping. The matrices $M_4$ and $\overline{M}_4$ exhibit some of the same properties as the matrices constructed in Chapter 3. Just as in Chapter 3, an entry in the i-th column and j-th row represents the term in an exponent of the summand evaluated at indices i and j. This guarantees that row swapping is allowed, and thus comparable entries will always be found in the same row and column. Furthermore, the argument in Chapter 3 showing that matrix equality means oppositely signed terms cancel holds for these matrices, since the method of representation is the same. If $C_{03} = +$, one checks directly that the matrices are equal up to row swapping; the same is true for $C_{03} = -$. This finishes the proof of Theorem 5.1.

Finally, we shall discuss case (ii) and why we cannot find a closed formula in this case. Assume $(x_i, x_j, x_k) \in (a_i, a_j, a_k)\mathbb{R} + \mathbb{Z}^3$ for some $1 \le i < j < k \le 4$, but not case (i). Without loss of generality, let $(x_1, x_2, x_3) \in (a_1, a_2, a_3)\mathbb{R} + \mathbb{Z}^3$. Then, as before, we divide the sum into the terms where $h_1 = h_2 = h_3 = 0$ and the remaining

terms where not all of $h_1, h_2, h_3$ are zero:

$$\sum_{k=1}^{4} \Omega\begin{pmatrix} A_k \\ X_k \\ Y_k \end{pmatrix} = \sum_{h_4 \bmod a_4} \Big[ \beta(0,y_2)\beta(0,y_3)\beta(-r_4,y_4) + \beta(0,y_1)\beta(0,y_3)\beta(-r_4,y_4) + \beta(0,y_1)\beta(0,y_2)\beta(-r_4,y_4) + \beta(r_4,y_1)\beta(r_4,y_2)\beta(r_4,y_3) \Big]$$
$$+ \sum_{\substack{H \\ (h_1,h_2,h_3) \neq (0,0,0)}} \Big[ \beta\!\left(\tfrac{h_1}{a_1}-\tfrac{h_2}{a_2},y_2\right)\beta\!\left(\tfrac{h_1}{a_1}-\tfrac{h_3}{a_3},y_3\right)\beta\!\left(\tfrac{h_1}{a_1}-r_4,y_4\right) + \beta\!\left(\tfrac{h_2}{a_2}-\tfrac{h_1}{a_1},y_1\right)\beta\!\left(\tfrac{h_2}{a_2}-\tfrac{h_3}{a_3},y_3\right)\beta\!\left(\tfrac{h_2}{a_2}-r_4,y_4\right)$$
$$+ \beta\!\left(\tfrac{h_3}{a_3}-\tfrac{h_1}{a_1},y_1\right)\beta\!\left(\tfrac{h_3}{a_3}-\tfrac{h_2}{a_2},y_2\right)\beta\!\left(\tfrac{h_3}{a_3}-r_4,y_4\right) + \beta\!\left(r_4-\tfrac{h_1}{a_1},y_1\right)\beta\!\left(r_4-\tfrac{h_2}{a_2},y_2\right)\beta\!\left(r_4-\tfrac{h_3}{a_3},y_3\right) \Big].$$

The ideas of the proof of Theorem 3.1 can be applied to the latter summand, since all $\beta$-functions there are evaluated at non-integer values. Let us investigate what

goes awry in the first summand:

$$\sum_{h_4 \bmod a_4} \Big[ \beta(0,y_2)\beta(0,y_3)\beta(-r_4,y_4) + \beta(0,y_1)\beta(0,y_3)\beta(-r_4,y_4) + \beta(0,y_1)\beta(0,y_2)\beta(-r_4,y_4) + \beta(r_4,y_1)\beta(r_4,y_2)\beta(r_4,y_3) \Big]$$
$$= \sum_{h_4 \bmod a_4} \left[ \frac{1}{2}\frac{e^{y_2}+1}{e^{y_2}-1}\,\frac{1}{2}\frac{e^{y_3}+1}{e^{y_3}-1}\,\frac{e^{\{-r_4\}y_4}}{e^{y_4}-1} + \frac{1}{2}\frac{e^{y_1}+1}{e^{y_1}-1}\,\frac{1}{2}\frac{e^{y_3}+1}{e^{y_3}-1}\,\frac{e^{\{-r_4\}y_4}}{e^{y_4}-1} + \frac{1}{2}\frac{e^{y_1}+1}{e^{y_1}-1}\,\frac{1}{2}\frac{e^{y_2}+1}{e^{y_2}-1}\,\frac{e^{\{-r_4\}y_4}}{e^{y_4}-1} + \frac{e^{\{r_4\}y_1}}{e^{y_1}-1}\,\frac{e^{\{r_4\}y_2}}{e^{y_2}-1}\,\frac{e^{\{r_4\}y_3}}{e^{y_3}-1} \right],$$

which we put over the common denominator $4\left(e^{y_1}-1\right)\left(e^{y_2}-1\right)\left(e^{y_3}-1\right)\left(e^{y_4}-1\right)$.

Expanding terms, we get the numerator

$$\sum_{h_4 \bmod a_4} \Big[ e^{y_1+y_2+y_3+\{-r_4\}y_4} - e^{y_2+y_3+\{-r_4\}y_4} + e^{y_1+y_2+\{-r_4\}y_4} - e^{y_2+\{-r_4\}y_4} + e^{y_1+y_3+\{-r_4\}y_4} - e^{y_3+\{-r_4\}y_4} + e^{y_1+\{-r_4\}y_4} - e^{\{-r_4\}y_4}$$
$$+ e^{y_1+y_2+y_3+\{-r_4\}y_4} - e^{y_1+y_3+\{-r_4\}y_4} + e^{y_1+y_2+\{-r_4\}y_4} - e^{y_1+\{-r_4\}y_4} + e^{y_2+y_3+\{-r_4\}y_4} - e^{y_3+\{-r_4\}y_4} + e^{y_2+\{-r_4\}y_4} - e^{\{-r_4\}y_4}$$
$$+ e^{y_1+y_2+y_3+\{-r_4\}y_4} - e^{y_1+y_2+\{-r_4\}y_4} + e^{y_1+y_3+\{-r_4\}y_4} - e^{y_1+\{-r_4\}y_4} + e^{y_2+y_3+\{-r_4\}y_4} - e^{y_2+\{-r_4\}y_4} + e^{y_3+\{-r_4\}y_4} - e^{\{-r_4\}y_4}$$
$$+ 4e^{\{r_4\}y_1+\{r_4\}y_2+\{r_4\}y_3+y_4} - 4e^{\{r_4\}y_1+\{r_4\}y_2+\{r_4\}y_3} \Big].$$


More information

COMPLETION OF PARTIAL LATIN SQUARES

COMPLETION OF PARTIAL LATIN SQUARES COMPLETION OF PARTIAL LATIN SQUARES Benjamin Andrew Burton Honours Thesis Department of Mathematics The University of Queensland Supervisor: Dr Diane Donovan Submitted in 1996 Author s archive version

More information

Rational Points on Conics, and Local-Global Relations in Number Theory

Rational Points on Conics, and Local-Global Relations in Number Theory Rational Points on Conics, and Local-Global Relations in Number Theory Joseph Lipman Purdue University Department of Mathematics lipman@math.purdue.edu http://www.math.purdue.edu/ lipman November 26, 2007

More information

Chapter 2. Error Correcting Codes. 2.1 Basic Notions

Chapter 2. Error Correcting Codes. 2.1 Basic Notions Chapter 2 Error Correcting Codes The identification number schemes we discussed in the previous chapter give us the ability to determine if an error has been made in recording or transmitting information.

More information

ABSTRACT. Department of Mathematics. interesting results. A graph on n vertices is represented by a polynomial in n

ABSTRACT. Department of Mathematics. interesting results. A graph on n vertices is represented by a polynomial in n ABSTRACT Title of Thesis: GRÖBNER BASES WITH APPLICATIONS IN GRAPH THEORY Degree candidate: Angela M. Hennessy Degree and year: Master of Arts, 2006 Thesis directed by: Professor Lawrence C. Washington

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Pade Approximations and the Transcendence

Pade Approximations and the Transcendence Pade Approximations and the Transcendence of π Ernie Croot March 9, 27 1 Introduction Lindemann proved the following theorem, which implies that π is transcendental: Theorem 1 Suppose that α 1,..., α k

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

DIVISIBILITY AND DISTRIBUTION OF PARTITIONS INTO DISTINCT PARTS

DIVISIBILITY AND DISTRIBUTION OF PARTITIONS INTO DISTINCT PARTS DIVISIBILITY AND DISTRIBUTION OF PARTITIONS INTO DISTINCT PARTS JEREMY LOVEJOY Abstract. We study the generating function for (n), the number of partitions of a natural number n into distinct parts. Using

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA DAVID MEREDITH These notes are about the application and computation of linear algebra objects in F n, where F is a field. I think everything here works over any field including

More information

4 Elementary matrices, continued

4 Elementary matrices, continued 4 Elementary matrices, continued We have identified 3 types of row operations and their corresponding elementary matrices. To repeat the recipe: These matrices are constructed by performing the given row

More information

Zero estimates for polynomials inspired by Hilbert s seventh problem

Zero estimates for polynomials inspired by Hilbert s seventh problem Radboud University Faculty of Science Institute for Mathematics, Astrophysics and Particle Physics Zero estimates for polynomials inspired by Hilbert s seventh problem Author: Janet Flikkema s4457242 Supervisor:

More information

Doubly Indexed Infinite Series

Doubly Indexed Infinite Series The Islamic University of Gaza Deanery of Higher studies Faculty of Science Department of Mathematics Doubly Indexed Infinite Series Presented By Ahed Khaleel Abu ALees Supervisor Professor Eissa D. Habil

More information

STRONG FORMS OF ORTHOGONALITY FOR SETS OF HYPERCUBES

STRONG FORMS OF ORTHOGONALITY FOR SETS OF HYPERCUBES The Pennsylvania State University The Graduate School Department of Mathematics STRONG FORMS OF ORTHOGONALITY FOR SETS OF HYPERCUBES A Dissertation in Mathematics by John T. Ethier c 008 John T. Ethier

More information

Course 2316 Sample Paper 1

Course 2316 Sample Paper 1 Course 2316 Sample Paper 1 Timothy Murphy April 19, 2015 Attempt 5 questions. All carry the same mark. 1. State and prove the Fundamental Theorem of Arithmetic (for N). Prove that there are an infinity

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Corrected Version, 7th April 013 Comments to the author at keithmatt@gmail.com Chapter 1 LINEAR EQUATIONS 1.1

More information

SMT 2013 Power Round Solutions February 2, 2013

SMT 2013 Power Round Solutions February 2, 2013 Introduction This Power Round is an exploration of numerical semigroups, mathematical structures which appear very naturally out of answers to simple questions. For example, suppose McDonald s sells Chicken

More information

Notes on the Matrix-Tree theorem and Cayley s tree enumerator

Notes on the Matrix-Tree theorem and Cayley s tree enumerator Notes on the Matrix-Tree theorem and Cayley s tree enumerator 1 Cayley s tree enumerator Recall that the degree of a vertex in a tree (or in any graph) is the number of edges emanating from it We will

More information

Notes on Linear Algebra and Matrix Theory

Notes on Linear Algebra and Matrix Theory Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a

More information

1 The linear algebra of linear programs (March 15 and 22, 2015)

1 The linear algebra of linear programs (March 15 and 22, 2015) 1 The linear algebra of linear programs (March 15 and 22, 2015) Many optimization problems can be formulated as linear programs. The main features of a linear program are the following: Variables are real

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

Decomposition of Pascal s Kernels Mod p s

Decomposition of Pascal s Kernels Mod p s San Jose State University SJSU ScholarWorks Faculty Publications Mathematics January 2002 Decomposition of Pascal s Kernels Mod p s Richard P. Kubelka San Jose State University, richard.kubelka@ssu.edu

More information

1111: Linear Algebra I

1111: Linear Algebra I 1111: Linear Algebra I Dr. Vladimir Dotsenko (Vlad) Lecture 7 Dr. Vladimir Dotsenko (Vlad) 1111: Linear Algebra I Lecture 7 1 / 8 Properties of the matrix product Let us show that the matrix product we

More information

Least Squares Regression

Least Squares Regression Least Squares Regression Chemical Engineering 2450 - Numerical Methods Given N data points x i, y i, i 1 N, and a function that we wish to fit to these data points, fx, we define S as the sum of the squared

More information

= 1 and 2 1. T =, and so det A b d

= 1 and 2 1. T =, and so det A b d Chapter 8 Determinants The founder of the theory of determinants is usually taken to be Gottfried Wilhelm Leibniz (1646 1716, who also shares the credit for inventing calculus with Sir Isaac Newton (1643

More information

Reduced [tau]_n-factorizations in Z and [tau]_nfactorizations

Reduced [tau]_n-factorizations in Z and [tau]_nfactorizations University of Iowa Iowa Research Online Theses and Dissertations Summer 2013 Reduced [tau]_n-factorizations in Z and [tau]_nfactorizations in N Alina Anca Florescu University of Iowa Copyright 2013 Alina

More information

Topics in linear algebra

Topics in linear algebra Chapter 6 Topics in linear algebra 6.1 Change of basis I want to remind you of one of the basic ideas in linear algebra: change of basis. Let F be a field, V and W be finite dimensional vector spaces over

More information

MATRICES. a m,1 a m,n A =

MATRICES. a m,1 a m,n A = MATRICES Matrices are rectangular arrays of real or complex numbers With them, we define arithmetic operations that are generalizations of those for real and complex numbers The general form a matrix of

More information

Lecture 9: Elementary Matrices

Lecture 9: Elementary Matrices Lecture 9: Elementary Matrices Review of Row Reduced Echelon Form Consider the matrix A and the vector b defined as follows: 1 2 1 A b 3 8 5 A common technique to solve linear equations of the form Ax

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

School of Mathematics and Statistics. MT5824 Topics in Groups. Problem Sheet I: Revision and Re-Activation

School of Mathematics and Statistics. MT5824 Topics in Groups. Problem Sheet I: Revision and Re-Activation MRQ 2009 School of Mathematics and Statistics MT5824 Topics in Groups Problem Sheet I: Revision and Re-Activation 1. Let H and K be subgroups of a group G. Define HK = {hk h H, k K }. (a) Show that HK

More information

Some Remarks on the Discrete Uncertainty Principle

Some Remarks on the Discrete Uncertainty Principle Highly Composite: Papers in Number Theory, RMS-Lecture Notes Series No. 23, 2016, pp. 77 85. Some Remarks on the Discrete Uncertainty Principle M. Ram Murty Department of Mathematics, Queen s University,

More information

Notes on Determinants and Matrix Inverse

Notes on Determinants and Matrix Inverse Notes on Determinants and Matrix Inverse University of British Columbia, Vancouver Yue-Xian Li March 17, 2015 1 1 Definition of determinant Determinant is a scalar that measures the magnitude or size of

More information

Master of Arts In Mathematics

Master of Arts In Mathematics ESTIMATING THE FRACTAL DIMENSION OF SETS DETERMINED BY NONERGODIC PARAMETERS A thesis submitted to the faculty of San Francisco State University In partial fulllment of The Requirements for The Degree

More information

Quivers of Period 2. Mariya Sardarli Max Wimberley Heyi Zhu. November 26, 2014

Quivers of Period 2. Mariya Sardarli Max Wimberley Heyi Zhu. November 26, 2014 Quivers of Period 2 Mariya Sardarli Max Wimberley Heyi Zhu ovember 26, 2014 Abstract A quiver with vertices labeled from 1,..., n is said to have period 2 if the quiver obtained by mutating at 1 and then

More information

Chapter 2. Ma 322 Fall Ma 322. Sept 23-27

Chapter 2. Ma 322 Fall Ma 322. Sept 23-27 Chapter 2 Ma 322 Fall 2013 Ma 322 Sept 23-27 Summary ˆ Matrices and their Operations. ˆ Special matrices: Zero, Square, Identity. ˆ Elementary Matrices, Permutation Matrices. ˆ Voodoo Principle. What is

More information

arxiv: v1 [math.rt] 4 Jan 2016

arxiv: v1 [math.rt] 4 Jan 2016 IRREDUCIBLE REPRESENTATIONS OF THE CHINESE MONOID LUKASZ KUBAT AND JAN OKNIŃSKI Abstract. All irreducible representations of the Chinese monoid C n, of any rank n, over a nondenumerable algebraically closed

More information

Congruence Properties of Partition Function

Congruence Properties of Partition Function CHAPTER H Congruence Properties of Partition Function Congruence properties of p(n), the number of partitions of n, were first discovered by Ramanujan on examining the table of the first 200 values of

More information

6 Lecture 6: More constructions with Huber rings

6 Lecture 6: More constructions with Huber rings 6 Lecture 6: More constructions with Huber rings 6.1 Introduction Recall from Definition 5.2.4 that a Huber ring is a commutative topological ring A equipped with an open subring A 0, such that the subspace

More information

On Representations of integers by the quadratic form x2 Dy2

On Representations of integers by the quadratic form x2 Dy2 Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 5-4-2012 On Representations of integers by the quadratic form x2 Dy2 Christopher Thomas Follow this and additional

More information

S. Mrówka introduced a topological space ψ whose underlying set is the. natural numbers together with an infinite maximal almost disjoint family(madf)

S. Mrówka introduced a topological space ψ whose underlying set is the. natural numbers together with an infinite maximal almost disjoint family(madf) PAYNE, CATHERINE ANN, M.A. On ψ (κ, M) spaces with κ = ω 1. (2010) Directed by Dr. Jerry Vaughan. 30pp. S. Mrówka introduced a topological space ψ whose underlying set is the natural numbers together with

More information

Wilson s Theorem and Fermat s Little Theorem

Wilson s Theorem and Fermat s Little Theorem Wilson s Theorem and Fermat s Little Theorem Wilson stheorem THEOREM 1 (Wilson s Theorem): (p 1)! 1 (mod p) if and only if p is prime. EXAMPLE: We have (2 1)!+1 = 2 (3 1)!+1 = 3 (4 1)!+1 = 7 (5 1)!+1 =

More information

ON DIRICHLET S CONJECTURE ON RELATIVE CLASS NUMBER ONE

ON DIRICHLET S CONJECTURE ON RELATIVE CLASS NUMBER ONE ON DIRICHLET S CONJECTURE ON RELATIVE CLASS NUMBER ONE AMANDA FURNESS Abstract. We examine relative class numbers, associated to class numbers of quadratic fields Q( m) for m > 0 and square-free. The relative

More information

(for k n and p 1,p 2,...,p k 1) I.1. x p1. x p2. i 1. i k. i 2 x p k. m p [X n ] I.2

(for k n and p 1,p 2,...,p k 1) I.1. x p1. x p2. i 1. i k. i 2 x p k. m p [X n ] I.2 October 2, 2002 1 Qsym over Sym is free by A. M. Garsia and N. Wallach Astract We study here the ring QS n of Quasi-Symmetric Functions in the variables x 1,x 2,...,x n. F. Bergeron and C. Reutenauer [4]

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Chapter 5. Modular arithmetic. 5.1 The modular ring

Chapter 5. Modular arithmetic. 5.1 The modular ring Chapter 5 Modular arithmetic 5.1 The modular ring Definition 5.1. Suppose n N and x, y Z. Then we say that x, y are equivalent modulo n, and we write x y mod n if n x y. It is evident that equivalence

More information

SOME FORMULAE FOR THE FIBONACCI NUMBERS

SOME FORMULAE FOR THE FIBONACCI NUMBERS SOME FORMULAE FOR THE FIBONACCI NUMBERS Brian Curtin Department of Mathematics, University of South Florida, 4202 E Fowler Ave PHY4, Tampa, FL 33620 e-mail: bcurtin@mathusfedu Ena Salter Department of

More information

Gaussian Elimination for Linear Systems

Gaussian Elimination for Linear Systems Gaussian Elimination for Linear Systems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University October 3, 2011 1/56 Outline 1 Elementary matrices 2 LR-factorization 3 Gaussian elimination

More information

PolynomialExponential Equations in Two Variables

PolynomialExponential Equations in Two Variables journal of number theory 62, 428438 (1997) article no. NT972058 PolynomialExponential Equations in Two Variables Scott D. Ahlgren* Department of Mathematics, University of Colorado, Boulder, Campus Box

More information

PLANE PARTITIONS AMY BECKER, LILLY BETKE-BRUNSWICK, ANNA ZINK

PLANE PARTITIONS AMY BECKER, LILLY BETKE-BRUNSWICK, ANNA ZINK PLANE PARTITIONS AMY BECKER, LILLY BETKE-BRUNSWICK, ANNA ZINK Abstract. Throughout our study of the enumeration of plane partitions we make use of bijective proofs to find generating functions. In particular,

More information

Local Fields. Chapter Absolute Values and Discrete Valuations Definitions and Comments

Local Fields. Chapter Absolute Values and Discrete Valuations Definitions and Comments Chapter 9 Local Fields The definition of global field varies in the literature, but all definitions include our primary source of examples, number fields. The other fields that are of interest in algebraic

More information