DOUBLE SERIES AND PRODUCTS OF SERIES

KENT MERRYFIELD

1. Various ways to add up a doubly-indexed series:

Let $u_{jk}$ be a sequence of numbers depending on the two indices $j$ and $k$. I will assume that $0 \le j < \infty$ and $0 \le k < \infty$. An ordinary sequence can be expressed as a list; to list such a doubly-indexed sequence would require an array:
\[
\begin{array}{cccc}
u_{00} & u_{01} & u_{02} & \cdots \\
u_{10} & u_{11} & u_{12} & \cdots \\
u_{20} & u_{21} & u_{22} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{array}
\tag{1}
\]
We would like to consider series with such sequences as terms; that is, we would like to add up all of the numbers $u_{jk}$. Our problem with making this notion meaningful is not that there is no way to do this; rather, there are too many ways to do this. Out of a much larger collection of possible meanings, let us pick out four to compare. What can we mean by the sum of all of the numbers $u_{jk}$?

Possibility 1: the iterated sum
\[
\sum_{j=0}^{\infty}\left(\sum_{k=0}^{\infty} u_{jk}\right) \tag{2}
\]

Possibility 2: the other iterated sum
\[
\sum_{k=0}^{\infty}\left(\sum_{j=0}^{\infty} u_{jk}\right) \tag{3}
\]

Possibility 3: the limit of the square partial sums. Let $S_N := \sum_{j=0}^{N}\sum_{k=0}^{N} u_{jk}$; interpret the series as
\[
\lim_{N\to\infty} S_N \tag{4}
\]

Possibility 4: the limit of the triangular partial sums. Let $T_N := \sum_{j=0}^{N}\sum_{k=0}^{N-j} u_{jk}$; interpret the series as
\[
\lim_{N\to\infty} T_N \tag{5}
\]

You can get a sense of the meaning of the $N$-th triangular partial sum by doing the following: look at the array in (1), place a ruler at a $45^\circ$ angle across the upper left corner of this array, then add up all of the numbers that you can see above and to the left of the ruler. The further you pull the ruler down and to the right, the larger the $N$ that this partial sum represents. Pushing this image a little further, we get the following notion: for each one-step move of our ruler down and to the right, we add in one more lower-left to upper-right diagonal's worth of terms. We can look at $T_N$ as
\[
T_N = \sum_{n=0}^{N}\left(\sum_{k=0}^{n} u_{n-k,k}\right)
\]
\[
T_N = u_{00} + (u_{10} + u_{01}) + (u_{20} + u_{11} + u_{02}) + (u_{30} + u_{21} + u_{12} + u_{03}) + \cdots \tag{6}
\]

So we have several different ways to add up a double series. There are yet more ways, but let's not overburden the discussion. The big question is this: do these methods necessarily yield the same number? Experience with analysis, particularly with its counterexamples, should at the very least make you skeptical; to no great surprise, the answer is NO!

We need an example. The following should be convincing enough. Define $u_{jk}$ by the array:
\[
\begin{array}{ccccc}
1 & -1 & 0 & 0 & \cdots \\
0 & 1 & -1 & 0 & \cdots \\
0 & 0 & 1 & -1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\tag{7}
\]
Adding up the rows first as in (2), we get $0 + 0 + 0 + \cdots = 0$. Adding up the columns first as in (3), we get $1 + 0 + 0 + 0 + \cdots = 1$. The sequence of square partial sums $S_N$ goes as $1, 1, 1, 1, \ldots$, which converges to $1$. And finally, the sequence of triangular partial sums $T_N$ goes as $1, 0, 1, 0, 1, 0, \ldots$, which diverges.

With four different methods, we got three different answers, and the sense that the fact that even two of them were the same is at best a lucky coincidence. However, we also note that there is a great deal of cancellation going on here: positive and negative terms adding up to zero. It is clear that this double sum cannot possibly be absolutely convergent by any of these methods.

What do I mean by absolutely convergent? Simply that if we replaced each and every term by its absolute value, the resulting sum would converge to a number $< \infty$. A well-known theorem asserts that a single series is absolutely convergent if and only if it is unconditionally convergent; that is, if and only if any rearrangement of that series still converges, and to the same number. All of these methods of trying to add up a double series should be seen as various rearrangements of the sum; if we are looking for a condition that guarantees that they will all give us the correct number, we should expect that absolute convergence is just the condition we need.
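A short script makes the four summation schemes concrete and reproduces the three different answers for the counterexample. This is a sketch of ours, not part of the notes; the helper names `u`, `S`, and `T` are our own.

```python
# The counterexample array (7) in code: u(j,k) = 1 when k = j,
# -1 when k = j + 1, and 0 otherwise.

def u(j, k):
    if k == j:
        return 1
    if k == j + 1:
        return -1
    return 0

def S(N):
    """Square partial sum S_N over 0 <= j, k <= N, as in (4)."""
    return sum(u(j, k) for j in range(N + 1) for k in range(N + 1))

def T(N):
    """Triangular partial sum T_N over j + k <= N, as in (5)."""
    return sum(u(j, k) for j in range(N + 1) for k in range(N + 1 - j))

# Rows first, as in (2): every row sums to 1 - 1 = 0, so the total is 0.
row_sums = [sum(u(j, k) for k in range(50)) for j in range(10)]
# Columns first, as in (3): the first column sums to 1, the rest to 0.
col_sums = [sum(u(j, k) for j in range(50)) for k in range(10)]

print(row_sums[:4])               # [0, 0, 0, 0]
print(col_sums[:4])               # [1, 0, 0, 0]
print([S(N) for N in range(6)])   # [1, 1, 1, 1, 1, 1]
print([T(N) for N in range(6)])   # [1, 0, 1, 0, 1, 0]
```

The last line shows the oscillation of the triangular partial sums directly: each new diagonal alternately contributes $-1$ and $+1$.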
As a general plan, theorems that have absolute convergence as a hypothesis proceed in two stages: the first is the proof that everything works in the case of series with nonnegative terms; the second is the use of that first proof as a lemma in the proof of the general absolutely convergent case. What follows will be no exception.

Lemma 1: If $u_{jk} \ge 0$ for all $j$ and $k$, then the square partial sums and triangular partial sums always have the same limit. This limit may be $\infty$: if the square partial sums tend to $\infty$, then so also do the triangular partial sums.

The proof depends on the following inequality, which I present without written proof. A picture helps to explain it. If $u_{jk} \ge 0$ for all $j$ and $k$, then
\[
T_N \le S_N \le T_{2N} \quad \text{for all } N \tag{8}
\]
Both of these sequences, $S_N$ and $T_N$, are non-decreasing sequences and thus subject to the monotone sequence alternative: either they converge or they go to $\infty$. $T_{2N}$ is just a subsequence of $T_N$ and hence has the same limit. An appeal to the Squeeze Theorem completes the argument.
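The sandwich inequality (8) is easy to test numerically. The term $u(j,k) = 3^{-(j+k)}$ below is an arbitrary nonnegative example of ours, not a sequence from the notes; the point is only that the square $0 \le j, k \le N$ contains the triangle $j + k \le N$ and is contained in the triangle $j + k \le 2N$.

```python
# Check T_N <= S_N <= T_{2N} for a nonnegative double sequence.
# The choice u(j,k) = 3**-(j+k) is an arbitrary illustration.

def u(j, k):
    return 1.0 / 3 ** (j + k)

def S(N):
    """Square partial sum over 0 <= j, k <= N."""
    return sum(u(j, k) for j in range(N + 1) for k in range(N + 1))

def T(N):
    """Triangular partial sum over j + k <= N."""
    return sum(u(j, k) for j in range(N + 1) for k in range(N + 1 - j))

for N in range(1, 15):
    assert T(N) <= S(N) <= T(2 * N)
print("T_N <= S_N <= T_{2N} holds for N = 1..14")
```

For this particular example both sequences converge to $\left(\sum_j 3^{-j}\right)^2 = (3/2)^2 = 9/4$, consistent with Lemma 1.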
Lemma 2: If $u_{jk} \ge 0$ for all $j$ and $k$, then the limit of the square partial sums is the same as either one of the iterated sums (2) or (3).

This time, the proof will take more work. To have some notation to work with, let $I = \sum_{j=0}^{\infty}\left(\sum_{k=0}^{\infty} u_{jk}\right)$ and let $L = \lim_{N\to\infty} S_N$. Our goal is to prove that $I = L$. The proof of this equality will be a classical analyst's proof: we very seldom prove equality directly. The way to show that $I = L$ is to show that $I \le L$ and that $I \ge L$.

First note that for each $j$, since these are series with nonnegative terms, $\sum_{k=0}^{N} u_{jk} \le \sum_{k=0}^{\infty} u_{jk}$ for all $N$. Adding up these estimates for $0 \le j \le N$ yields
\[
S_N = \sum_{j=0}^{N}\sum_{k=0}^{N} u_{jk} \le \sum_{j=0}^{N}\left(\sum_{k=0}^{\infty} u_{jk}\right) \le I
\]
Since $S_N \le I$ for all $N$, we must have that $\lim S_N \le I$, so $L \le I$.

Naturally, we started with the easy half, but sooner or later we must face the other side. There will be an $\epsilon$ in this argument, and a need to have some convenient convergent series with positive terms whose sum is $1$. Given a choice of a convenient convergent series, we will usually take a geometric series; given a choice of ratio, we will usually pick $\frac12$. That is, we will use this fact:
\[
\sum_{j=0}^{\infty} \frac{1}{2^{j+1}} = 1 \tag{9}
\]
We are trying to prove that $L \ge I$. Start by assuming that $\epsilon$ is any positive number. Choose an $M$ so large that for all $m > M$, we have
\[
\sum_{j=0}^{m}\left(\sum_{k=0}^{\infty} u_{jk}\right) > I - \epsilon \tag{10}
\]
Next, for each $j$, $0 \le j \le m$, choose an $n_j$ large enough that
\[
\sum_{k=0}^{n_j} u_{jk} > \left(\sum_{k=0}^{\infty} u_{jk}\right) - \frac{\epsilon}{2^{j+1}} \tag{11}
\]
Now we add up the estimates in (11) for $0 \le j \le m$ and use (9) and (10):
\[
\sum_{j=0}^{m}\sum_{k=0}^{n_j} u_{jk} > \sum_{j=0}^{m}\left(\sum_{k=0}^{\infty} u_{jk}\right) - \sum_{j=0}^{m}\frac{\epsilon}{2^{j+1}} > \sum_{j=0}^{m}\left(\sum_{k=0}^{\infty} u_{jk}\right) - \epsilon
\]
\[
> (I - \epsilon) - \epsilon = I - 2\epsilon
\]
Finally, let $N$ be the maximum of the finite collection of numbers $\{m, n_0, n_1, \ldots, n_m\}$. Then
\[
S_N = \sum_{j=0}^{N}\sum_{k=0}^{N} u_{jk} \ge \sum_{j=0}^{m}\sum_{k=0}^{n_j} u_{jk} > I - 2\epsilon
\]
and since $S_N \le L$, we have $L > I - 2\epsilon$ for all $\epsilon > 0$. This forces $L \ge I$.

That finishes the proof, at least in the case where both of these limits are assumed to be finite. A minor variation of this proof will show that if either one is infinite, then both are infinite.

Theorem 3: If $u_{jk} \ge 0$ for all $j$ and $k$, then if any one of the four sums (the limit of the square partial sums, the limit of the triangular partial sums, or either of the iterated sums) converges, then all four converge to the same number. If any one of the four is infinite, then all four are infinite.

To prove this, just collect together Lemmas 1 and 2, and note that the proof of Lemma 2 would work just as well for the other iterated sum. The most interesting consequence of this theorem is the equality of the iterated sums:
\[
\sum_{j=0}^{\infty}\left(\sum_{k=0}^{\infty} u_{jk}\right) = \sum_{k=0}^{\infty}\left(\sum_{j=0}^{\infty} u_{jk}\right)
\]

The next stage is to claim that the same holds for absolutely convergent double series. We call the double series absolutely convergent if the double series with the terms $|u_{jk}|$ converges. Which of the four possibilities do we mean when we say it converges? By Theorem 3, it doesn't matter: if any one of the four converges, they all do, and to the same sum.

Theorem 4: If the double series with terms $u_{jk}$ converges absolutely, then both iterated sums, the limit of the square partial sums, and the limit of the triangular partial sums all equal the same number.

We re-use some of the methods and insights of the previous theorems. Let's start with the square partial sums and the triangular partial sums. Let $S_N$ and $T_N$ denote those sums for $u_{jk}$, and $\tilde{S}_N$ and $\tilde{T}_N$ denote the same sums for $|u_{jk}|$. Note that $\tilde{S}_N$ and $\tilde{T}_N$ are given to be Cauchy sequences. Since, for $N > M$, $|S_N - S_M| \le \tilde{S}_N - \tilde{S}_M$ and $|T_N - T_M| \le \tilde{T}_N - \tilde{T}_M$, it follows that $S_N$ and $T_N$ are also Cauchy sequences, hence both convergent. Furthermore,
\[
|S_N - T_N| \le \tilde{S}_N - \tilde{T}_N
\]
as in Lemma 1, a picture helps explain it.
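Before continuing the proof, here is a numerical illustration of what Theorem 4 asserts. The sign-changing term $u(j,k) = (-1/2)^{j+k}$ is our own example, not one from the notes; it is absolutely convergent, and its exact sum is $\left(\sum_j (-1/2)^j\right)^2 = (2/3)^2 = 4/9$.

```python
# An absolutely convergent, sign-changing double series: u(j,k) = (-1/2)**(j+k).
# Theorem 4 predicts that square sums, triangular sums, and iterated sums
# all converge to the same number, here (2/3)**2 = 4/9.

def u(j, k):
    return (-0.5) ** (j + k)

def S(N):
    """Square partial sum over 0 <= j, k <= N."""
    return sum(u(j, k) for j in range(N + 1) for k in range(N + 1))

def T(N):
    """Triangular partial sum over j + k <= N."""
    return sum(u(j, k) for j in range(N + 1) for k in range(N + 1 - j))

def iterated_rows(N, K=200):
    """Row-first iterated sum, truncated to N+1 rows of K terms each."""
    return sum(sum(u(j, k) for k in range(K)) for j in range(N + 1))

exact = 4.0 / 9.0
print(S(50), T(50), iterated_rows(50), exact)  # all values agree closely
```

Contrast this with the array (7), where the lack of absolute convergence let the different schemes disagree.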
Since $\lim_{N\to\infty}(\tilde{S}_N - \tilde{T}_N) = 0$, it follows that $\lim_{N\to\infty}(S_N - T_N) = 0$; hence $S_N$ and $T_N$ converge to the same limit.

Now consider the iterated sums. We know that for each $j$, $\sum_{k=0}^{\infty} u_{jk}$ converges; after all, an absolutely convergent series converges. Let
\[
a_j = \sum_{k=0}^{\infty} u_{jk}, \qquad b_j = \sum_{k=0}^{\infty} |u_{jk}|, \qquad L = \lim_{N\to\infty} S_N,
\]
and note that $|a_j| \le b_j$ for every $j$. If $M' > M$, then
\[
\sum_{j=M+1}^{M'} |a_j| \le \sum_{j=M+1}^{M'} b_j
\]
and by Lemma 2 applied to $|u_{jk}|$, the series $\sum_{j=0}^{\infty} b_j$ converges, to $\lim_{N\to\infty} \tilde{S}_N$.
Since we know that $\sum_{j=0}^{\infty} b_j$ converges, it follows that $\sum_{j=0}^{\infty} a_j$ is a Cauchy, hence convergent, series; let $I = \sum_{j=0}^{\infty} a_j$ denote this row-first iterated sum. But further, now that we know that every row is a convergent series and that $\sum\sum |u_{jk}|$ converges, we can write
\[
|I - S_N| = \left|\sum_{j=0}^{N}\sum_{k=N+1}^{\infty} u_{jk} + \sum_{j=N+1}^{\infty} a_j\right| \le \sum_{j=0}^{N}\sum_{k=N+1}^{\infty} |u_{jk}| + \sum_{j=N+1}^{\infty} b_j
\]
and since the right side can be made arbitrarily small, we have $\lim S_N = I$ and hence $I = L$. As before, we can repeat this argument for the column-first iterated sum. Since both the row-first and column-first iterated sums equal the limit of the square partial sums, they equal each other.

2. The Cauchy product of two series:

Suppose $\sum_{k=0}^{\infty} a_k$ and $\sum_{k=0}^{\infty} b_k$ are two series. We know that if they are both convergent, we can add them together simply by adding them together term by term:
\[
\sum_{k=0}^{\infty} a_k + \sum_{k=0}^{\infty} b_k = \sum_{k=0}^{\infty} (a_k + b_k)
\]
What would we get if we multiplied them together? Certainly not the sum of the products of the terms; that's not the way multiplication works. Let's consider this from the perspective of finite sums: if you multiplied together two sums of ten terms each, how many terms would the product have? In general, 100 terms. If we had to figure out a way to organize this sum, we'd write these 100 terms in a $10 \times 10$ array. It stands to reason that what you get when you multiply together two sums is a doubly indexed sum.

Suppose $\sum_{k=0}^{\infty} a_k = A$ and $\sum_{k=0}^{\infty} b_k = B$ both converge. Then
\[
AB = A\sum_{k=0}^{\infty} b_k = \sum_{k=0}^{\infty} A b_k = \sum_{k=0}^{\infty}\left(\sum_{j=0}^{\infty} a_j\right) b_k = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty} a_j b_k
\]
(since $j$ is as good as $k$ as an index). This is exactly the kind of double sum that we have been considering. If both of the original series converge absolutely, then Theorem 4 applies and we can write this sum in each of the other three
forms of that series and have it be equal. The most interesting form in what follows is the triangular partial sums. That is, we have this lemma:

Lemma 5: If $\sum a_k$ and $\sum b_k$ both converge absolutely, then
\[
\left(\sum_{k=0}^{\infty} a_k\right)\left(\sum_{k=0}^{\infty} b_k\right) = \lim_{N\to\infty}\sum_{n=0}^{N}\sum_{k=0}^{n} a_{n-k} b_k = \sum_{n=0}^{\infty}\left(\sum_{k=0}^{n} a_{n-k} b_k\right) \tag{18}
\]
The double sum on the right of (18) is called the Cauchy product of the two series.

The immediate application for Lemma 5 is to the product of two power series. In fact, we have the following:

Theorem 6 (The Cauchy Product of Power Series): Suppose $R > 0$ and the two power series $f(x) = \sum_{k=0}^{\infty} a_k x^k$ and $g(x) = \sum_{k=0}^{\infty} b_k x^k$ both have radius of convergence at least as large as $R$. Then the product $f(x)g(x)$ is given by a power series with radius of convergence at least $R$, and that power series is
\[
f(x)g(x) = \sum_{n=0}^{\infty}\left(\sum_{k=0}^{n} a_{n-k} b_k\right) x^n \tag{19}
\]
We can rephrase equation (19) as: if $h(x) = f(x)g(x)$, then $h(x) = \sum_{n=0}^{\infty} c_n x^n$, where the $c_n$'s are computed as $c_n = \sum_{k=0}^{n} a_{n-k} b_k$. The name that we will give to getting a new coefficient sequence $c_n$ from the two other coefficient sequences $a_n$ and $b_n$ is that it is the convolution of those two other sequences.

3. Two examples: the geometric series and the exponential:

Example 1: multiplying the geometric series by itself. We know (it is the geometric series, after all) that for $|x| < 1$, we have
\[
\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n
\]
We now multiply this function by itself and use equation (19):
\[
\frac{1}{(1-x)^2} = \left(\sum_{n=0}^{\infty} x^n\right)^2 = \sum_{n=0}^{\infty} (n+1) x^n
\]
The same identity could have been derived in this case by differentiating the power series.

Example 2: the basic property of the exponential function. Define the function $E(x)$ by the power series (which has an infinite radius of convergence)
\[
E(x) = \sum_{k=0}^{\infty} \frac{x^k}{k!}
\]
We know that $E$ ought to be the exponential function: $E(x) = e^x$. The most basic algebraic property of the exponential is the law of exponents: $e^x e^y = e^{x+y}$. That is, $E(x)E(y) = E(x+y)$. But what if we have never heard of the exponential? Or, more likely, suppose we are looking for a rigorous way to define the exponential function and to prove that it has the required properties.
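The convolution of coefficient sequences, and Example 1 in particular, can be spot-checked in code. The helper below is our own sketch, not part of the notes; squaring the all-ones coefficient sequence of the geometric series should reproduce the coefficients $n+1$ of $1/(1-x)^2$.

```python
# Convolution of coefficient sequences, c_n = sum_{k=0}^n a[n-k] * b[k],
# as in the rephrasing of equation (19).

def convolve(a, b):
    """Cauchy-product coefficients of two coefficient lists."""
    n_terms = min(len(a), len(b))
    return [sum(a[n - k] * b[k] for k in range(n + 1)) for n in range(n_terms)]

# Example 1: squaring the geometric series (all coefficients 1)
# should give c_n = n + 1, the coefficients of 1/(1-x)**2.
geom = [1] * 10
print(convolve(geom, geom))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Only the first `min(len(a), len(b))` coefficients are returned, since the later diagonal sums would need coefficients beyond the truncation.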
Power series provide a direct way to prove this property, as follows:
\[
E(x)E(y) = \left(\sum_{k=0}^{\infty} \frac{x^k}{k!}\right)\left(\sum_{k=0}^{\infty} \frac{y^k}{k!}\right)
= \sum_{n=0}^{\infty}\sum_{k=0}^{n} \frac{x^{n-k}}{(n-k)!}\cdot\frac{y^k}{k!} \quad \text{by Lemma 5}
\]
\[
= \sum_{n=0}^{\infty}\frac{1}{n!}\sum_{k=0}^{n} \frac{n!}{(n-k)!\,k!}\, x^{n-k} y^k
= \sum_{n=0}^{\infty}\frac{1}{n!}\sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k
= \sum_{n=0}^{\infty}\frac{(x+y)^n}{n!} = E(x+y)
\]
by the Binomial Theorem.

4. Exercises:

(1) Assume that $0 \le r < 1$. Compute the sum $\sum_{j=0}^{\infty}\sum_{k=0}^{\infty} r^{j+k}$ in two ways by computing both possible iterated sums. Does the sum converge?

(2) Let $f(x) = [\ln(1+x)]^2$. Use the series for the logarithm and Theorem 6 to compute that
\[
f(x) = [\ln(1+x)]^2 = \sum_{n=2}^{\infty} (-1)^n \left(\sum_{k=1}^{n-1} \frac{1}{k(n-k)}\right) x^n
\]
Use this to compute the 5th derivative of $f$ evaluated at $0$.

(3) For $s > 1$, define $\zeta(s) = \sum_{k=1}^{\infty} k^{-s}$. Compute $\sum_{n=2}^{\infty} \left(\zeta(n) - 1\right)$. Justify your steps.
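As a numerical sanity check of Example 2's law of exponents, one can compare truncated series. This sketch, with the arbitrary sample points $x = 0.3$, $y = 0.4$ and truncation at 30 terms, is our own illustration, not part of the notes.

```python
from math import factorial

def E(x, terms=30):
    """Truncated power series E(x) = sum_{k < terms} x**k / k!."""
    return sum(x ** k / factorial(k) for k in range(terms))

x, y = 0.3, 0.4
lhs = E(x) * E(y)   # product of the two truncated series
rhs = E(x + y)      # the series evaluated at the sum of the arguments
print(lhs, rhs)     # both approximate e**0.7 and agree to machine precision
```

For arguments this small, 30 terms already push the truncation error far below floating-point precision, so the two sides match essentially exactly.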