Convergence of sequences and series

A sequence $f$ is a map from $\mathbb{N}$, the positive integers, to a set. We often write the map outputs as $f_n$ rather than $f(n)$. Often we just list the outputs in order and leave the reader to infer the relevant formula for the sequence.

Examples

- The sequence of squares: $1, 4, 9, 16, 25, \dots$, $f_n = n^2$.
- The sequence $1, -1, 1, -1, 1, -1, \dots$, $f_n = (-1)^{n-1}$.
- The Fibonacci sequence: $1, 1, 2, 3, 5, 8, 13, 21, \dots$,
  $f_n = \frac{1}{\alpha - \beta}(\alpha^n - \beta^n)$, where $\alpha = \frac{1}{2}(1 + \sqrt{5})$, $\beta = \frac{1}{2}(1 - \sqrt{5})$.

Sometimes it is convenient to start the sequence at a value of $n$ different from 1: for example, the sequence $f_n = \frac{1}{n^2 - 1}$ could be started at $n = 2$.

We are concerned with the behavior of sequences as $n$ goes to infinity.

Examples

- $f_n = \frac{1}{n}$ goes to zero as $n \to \infty$.
- $f_n = \sin(n)$ never settles down, but remains bounded, since $|\sin(x)| \le 1$ for all real $x$.
- $f_n = n^2$ diverges to infinity.
- $f_n = n^2 \sin(n)$ gets arbitrarily large in size, but takes both positive and negative values.

We say that the sequence $f_n$ has a limit $L$ (necessarily unique) as $n \to \infty$ iff given any $\epsilon > 0$, there exists $N(\epsilon)$, such that if $n > N(\epsilon)$, then $|f_n - L| < \epsilon$. Intuitively, we must be able to make all the terms of the sequence as close as we like to $L$, except for a finite number of terms of the sequence, where we first specify how close we want to be (this is the choice of $\epsilon$).
Examples

- We have $L = 0$ for the sequence $f_n = \frac{1}{n}$, since we have:
  $\left|\frac{1}{n} - 0\right| = \frac{1}{n} < \epsilon$ for all $n > \frac{1}{\epsilon}$, so here $N(\epsilon) = \frac{1}{\epsilon}$ will do.
- We have $L = 0$ for the sequence $f_n = 2^{-n}$, since we have:
  $\left|2^{-n} - 0\right| = 2^{-n} < \epsilon$, if $n\ln(2) > -\ln(\epsilon)$, so if $n > -\frac{\ln(\epsilon)}{\ln(2)}$. So here $N(\epsilon) = -\frac{\ln(\epsilon)}{\ln(2)}$ will do.
- We have $L = 1$ for the sequence $f_n = \frac{n}{n+1}$, since we have:
  $|f_n - 1| = \left|\frac{n}{n+1} - 1\right| = \left|\frac{n - (n+1)}{n+1}\right| = \frac{1}{n+1} < \epsilon$, if $n + 1 > \frac{1}{\epsilon}$, so if $n > \frac{1}{\epsilon} - 1$. So here $N(\epsilon) = \frac{1}{\epsilon} - 1$ will do.

If a sequence is increasing and bounded above, it has a limit and the limit is the least upper bound of the sequence. For example, the sequence $f_n = \frac{n^2 - 1}{n^2 + 1}$ is increasing and bounded above by 1, which is also its least upper bound and its limit.

If a sequence is decreasing and bounded below, it has a limit and the limit is the greatest lower bound of the sequence. For example, the sequence $f_n = \frac{1}{n^2 + 1}$ is decreasing and bounded below by 0, which is also its greatest lower bound and its limit.

If a sequence is bounded, it has a convergent subsequence. For example, $f_n = (-1)^{n-1}$ has the convergent subsequences $f_{2n} = -1, -1, -1, \dots$, with limit $-1$, and $f_{2n-1} = 1, 1, 1, \dots$, with limit 1.
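As a sanity check, the three choices of $N(\epsilon)$ above can be probed numerically. The following Python sketch (an illustration, not part of the original notes) tests a range of $n$ beyond each $N(\epsilon)$ and confirms $|f_n - L| < \epsilon$:

```python
import math

# For each epsilon, use the N(epsilon) derived in the text and verify
# |f_n - L| < epsilon for a sample of n > N(epsilon).
def check(f, L, N_of_eps, eps_values, probes=1000):
    for eps in eps_values:
        N = N_of_eps(eps)
        for n in range(int(N) + 1, int(N) + probes + 1):
            assert abs(f(n) - L) < eps
    return True

# f_n = 1/n, L = 0, N(eps) = 1/eps
assert check(lambda n: 1 / n, 0.0, lambda e: 1 / e, [0.1, 0.01, 0.001])
# f_n = 2^(-n), L = 0, N(eps) = -ln(eps)/ln(2)
assert check(lambda n: 2.0 ** -n, 0.0, lambda e: -math.log(e) / math.log(2), [0.1, 0.01])
# f_n = n/(n+1), L = 1, N(eps) = 1/eps - 1
assert check(lambda n: n / (n + 1), 1.0, lambda e: 1 / e - 1, [0.1, 0.01, 0.001])
print("all epsilon-N checks passed")
```

Of course, a finite probe is only evidence, not a proof; the inequalities derived above are what establish the limits.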
Series

A series $s_n$ is a sequence constructed by summing the terms of another sequence: $s_n = \sum_{k=1}^n a_k$.

Examples

- $a_k = k$: $s_n = 1 + 2 + 3 + 4 + 5 + \dots + n = \frac{1}{2}n(n+1)$.
- $a_k = k^3$: $s_n = 1 + 8 + 27 + 64 + 125 + \dots + n^3 = \left(\frac{1}{2}n(n+1)\right)^2$.
- $a_k = 2^{k-1}$: $s_n = 1 + 2 + 4 + 8 + 16 + \dots + 2^{n-1} = 2^n - 1$.
- $a_k = 2^{-k}$: $s_n = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \dots + \frac{1}{2^n} = 1 - \frac{1}{2^n}$.
- $a_k = x^{k-1}$: $s_n = 1 + x + x^2 + x^3 + x^4 + \dots + x^{n-1} = \frac{1 - x^n}{1 - x}$, if $x \ne 1$, and $s_n = n$ if $x = 1$.
- $a_k = \frac{1}{k(k+1)}$: $s_n = \frac{1}{2} + \frac{1}{6} + \frac{1}{12} + \frac{1}{20} + \frac{1}{30} + \dots + \frac{1}{n(n+1)} = \frac{n}{n+1}$.

The terms $s_n$ are called the partial sums of the $a_n$ sequence. Then the series is said to have a limit $s$, called the sum of the series, provided that $\lim_{n \to \infty} s_n = s$, and we then write:
$s = \sum_{k=1}^\infty a_k = a_1 + a_2 + a_3 + a_4 + a_5 + \dots$.

Examples

- $a_k = 2^{-k}$, $s_n = 1 - 2^{-n}$, $s = 1$: $1 = \sum_{k=1}^\infty 2^{-k} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \dots$.
- $a_k = x^{k-1}$, $s_n = \frac{1 - x^n}{1 - x}$ if $x \ne 1$, and $s_n = n$ if $x = 1$. Here $s$ exists only if $-1 < x < 1$, and then we have $x^n \to 0$ as $n \to \infty$, giving:
  $s = \frac{1}{1-x} = \sum_{k=1}^\infty x^{k-1} = 1 + x + x^2 + x^3 + x^4 + \dots$.
  This series is called the geometric series with ratio $x$.
- $a_k = \frac{1}{k(k+1)}$, $s_n = \frac{n}{n+1}$, $s = 1$: $1 = \frac{1}{2} + \frac{1}{6} + \frac{1}{12} + \frac{1}{20} + \frac{1}{30} + \dots$.
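The closed-form partial sums listed above are easy to spot-check by direct summation. This Python sketch (an aside, not part of the notes) uses exact rationals to avoid any rounding questions:

```python
from fractions import Fraction

# Spot-check the closed-form partial sums against direct summation, n = 1..30.
for n in range(1, 31):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
    assert sum(k**3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
    assert sum(2 ** (k - 1) for k in range(1, n + 1)) == 2 ** n - 1
    assert sum(Fraction(1, 2**k) for k in range(1, n + 1)) == 1 - Fraction(1, 2**n)
    assert sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1)) == Fraction(n, n + 1)
    # geometric series, tested at the sample ratio x = 1/3
    x = Fraction(1, 3)
    assert sum(x ** (k - 1) for k in range(1, n + 1)) == (1 - x**n) / (1 - x)
print("all partial-sum identities hold for n = 1..30")
```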
Convergence tests for series

We study the convergence of $a_1 + a_2 + a_3 + a_4 + \dots$. We put $s_n = \sum_{k=1}^n a_k$ and we need to know if $\lim_{n \to \infty} s_n = s$ exists or not.

Note that whether the limit exists depends only on the behavior of $a_n$ for $n$ large: we can ignore the first 100 million terms of the series, for example, without affecting the issue of convergence. So comparison tests with hypotheses such as "if $a_k > 0$ and $b_k > 0$ for all $k$, then..." may be rephrased as "if $a_k > 0$ and $b_k > 0$ for all $k > K$, for fixed $K$, then...".

If $s$ exists, then $\lim_{n \to \infty} s_n = s$ and $\lim_{n \to \infty} s_{n-1} = s$, so we then get:
$\lim_{n \to \infty} a_n = \lim_{n \to \infty} (s_n - s_{n-1}) = s - s = 0$.
So $s$ can only exist if $\lim_{n \to \infty} a_n = 0$. This condition alone is not enough to guarantee that $s$ exists, but certainly if $\lim_{n \to \infty} a_n$ is non-zero or does not exist, then neither does $s$.

We first consider series of non-negative terms: $a_k \ge 0$, for all $k$. Then the partial sums $s_n$ are an increasing sequence, so their limit $s$ exists if the partial sums are bounded above, and the limit does not exist if the partial sums are not bounded above.

We begin with some comparison tests, comparing $\sum_{k=1}^\infty a_k$ with $\sum_{k=1}^\infty b_k$. We put $s_n = \sum_{k=1}^n a_k$ and $s = \sum_{k=1}^\infty a_k$ (if it exists). We put $t_n = \sum_{k=1}^n b_k$ and $t = \sum_{k=1}^\infty b_k$ (if it exists). The first test says that if one series is systematically smaller than another convergent series, then it too converges. The second test says that if one series is systematically larger than another divergent series, then it too diverges.

The comparison test, first part: if $0 \le a_n \le b_n$, for all $n$, and if $t$ exists, then so does $s$. This is because $s_n \le t_n \le t$, so $s_n \le t$, so the partial sums are bounded above and $s$ exists, with $s \le t$.

The comparison test, second part: if $0 \le b_n \le a_n$, for all $n$, and if $t$ does not exist, then neither does $s$. This is because $t_n \le s_n$, and if $s$ existed, then $s$ would be an upper bound for the $s_n$ sequence and therefore also for the $t_n$ sequence, so $t$ would exist also, a contradiction; so $s$ cannot exist.
The limit comparison test: if $a_k > 0$ and $b_k > 0$, for all $k$, and if $\lim_{k \to \infty} \frac{a_k}{b_k} = L$ exists and $L > 0$, then the sums $s = \sum_{k=1}^\infty a_k$ and $t = \sum_{k=1}^\infty b_k$ either both exist or both do not exist.

Since $\lim_{k \to \infty} \frac{a_k}{b_k} = L$, we may choose $N$ sufficiently large, for some positive integer $N$, so that for all $k > N$ we have $\frac{a_k}{b_k}$ within $\frac{L}{2}$ of $L$:
$\left|\frac{a_k}{b_k} - L\right| < \frac{L}{2}$, so $\frac{L}{2} < \frac{a_k}{b_k} < \frac{3L}{2}$, so $a_k < \frac{3L}{2} b_k$ and $b_k < \frac{2}{L} a_k$.

Then if $t$ exists, we have, for any $n > N$:
$s_n = s_N + \sum_{k=N+1}^n a_k < s_N + \frac{3L}{2} \sum_{k=N+1}^n b_k < s_N + \frac{3L}{2} t$.
So $s_n$ is bounded above for all $n$ and we are done: $s$ exists.

Conversely, if $s$ exists, we have, for any $n > N$:
$t_n = t_N + \sum_{k=N+1}^n b_k < t_N + \frac{2}{L} \sum_{k=N+1}^n a_k < t_N + \frac{2}{L} s$.
So $t_n$ is bounded above for all $n$ and we are done: $t$ exists.

The integral test: suppose that $a_k = f(k)$ for some decreasing function $f(x) > 0$, defined on the interval $[1, \infty)$ (in particular $a_k > 0$).

If $\int_1^\infty f(x)\,dx$ exists, then $s$ exists. We have $s_n = \sum_{k=1}^n f(k) = f(1) + F(n)$, say, where $F(n) = \sum_{k=2}^n f(k)$, and we notice that $F(n)$ is the lower Riemann sum for the integral $\int_1^n f(x)\,dx$. So $F(n) \le \int_1^n f(x)\,dx \le \int_1^\infty f(x)\,dx$. So $s_n \le f(1) + \int_1^\infty f(x)\,dx$, so $s_n$ is bounded above and we are done: $s$ exists, and we have $s \le f(1) + \int_1^\infty f(x)\,dx$.

If $\int_1^\infty f(x)\,dx$ does not exist, then neither does $s$. We have $s_n = \sum_{k=1}^n f(k) = G(n)$, say, and we notice that $G(n)$ is the upper Riemann sum for the integral $\int_1^{n+1} f(x)\,dx$. So $s_n \ge \int_1^{n+1} f(x)\,dx$, which goes to infinity as $n$ goes to infinity, so $s_n$ is not bounded above and we are done: $s$ does not exist.
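The two Riemann-sum inequalities behind the integral test can be watched numerically. This Python sketch (an illustration, not part of the notes) uses $f(x) = \frac{1}{x^2}$, whose antiderivative $-\frac{1}{x}$ gives the integrals exactly:

```python
# Illustrate the integral-test bounds for f(x) = 1/x^2:
#   s_n - f(1) = F(n) <= integral_1^n f      (lower Riemann sum)
#   s_n >= integral_1^{n+1} f                (upper Riemann sum)
def f(x):
    return 1.0 / x**2

def integral(a, b):  # exact: the antiderivative of 1/x^2 is -1/x
    return (1.0 / a) - (1.0 / b)

for n in range(2, 200):
    s_n = sum(f(k) for k in range(1, n + 1))
    assert s_n - f(1) <= integral(1, n) + 1e-12
    assert s_n >= integral(1, n + 1) - 1e-12
# Hence s_n <= f(1) + integral_1^inf f = 1 + 1 = 2 for all n.
assert sum(f(k) for k in range(1, 10**5)) < 2.0
print("integral-test bounds verified for 1/x^2")
```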
The ratio test: suppose that $\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = L$ exists.

If $L < 1$, then the sum $s$ exists. Put $M = \frac{1+L}{2}$, so $L < M < 1$. Then, for all $n \ge N$, for some integer $N > 0$, we have $a_{n+1} < M a_n$. It follows by induction that $a_k \le M^{k-N} a_N$ for all integers $k \ge N$, which gives, for $n > N$:
$s_n = s_N + \sum_{k=N+1}^n a_k \le s_N + a_N M^{-N} \sum_{k=N+1}^n M^k < s_N + a_N M^{-N} \sum_{k=1}^\infty M^{k-1} = s_N + a_N M^{-N} \frac{1}{1-M}$.
Here we used the sum of the geometric series with ratio $M$, as discussed earlier. So the partial sums $s_n$ are bounded above, so $s$ exists.

If $L > 1$, then the sum $s$ does not exist, because for all $n$ large enough, we have $a_{n+1} > a_n$, so the sequence $a_n$ is eventually increasing, so cannot go to zero.

If $L = 1$, this test gives us no information about the existence or non-existence of $s$.

Some results are available for series with negative terms:

Suppose that $\sum_{k=1}^\infty |a_k| = S$ exists. Then so does $s = \sum_{k=1}^\infty a_k$, and $-S \le s \le S$: put $b_k = a_k + |a_k|$. Then $0 \le b_k \le 2|a_k|$, so by the ordinary comparison test, $b = \sum_{k=1}^\infty b_k$ exists and $b \le 2S$. Then $s = \sum_{k=1}^\infty (b_k - |a_k|) = b - S \le 2S - S = S$. So $s$ exists and $s \le S$. Applying this argument to $\sum_{k=1}^\infty (-a_k)$ gives, since $|-a_k| = |a_k|$, the relation $-s \le S$, so $s \ge -S$, so $-S \le s \le S$, as required.

A series $s = \sum_{k=1}^\infty a_k$ is said to be absolutely convergent if $S = \sum_{k=1}^\infty |a_k|$ exists. The result just proved is that an absolutely convergent series is automatically convergent, and then we have $|s| \le S$. A series is said to be conditionally convergent if it converges, but not absolutely.
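A small numerical illustration of the ratio test (a Python aside, not part of the notes): for $a_n = \frac{n}{2^n}$ the ratios $\frac{a_{n+1}}{a_n} = \frac{n+1}{2n}$ tend to $L = \frac{1}{2} < 1$, so the series converges; its sum happens to be known in closed form, $\sum_{n=1}^\infty \frac{n}{2^n} = 2$, which the partial sums confirm:

```python
# Ratio test in action for a_n = n / 2^n: a_{n+1}/a_n = (n+1)/(2n) -> 1/2 < 1.
ratios = [((n + 1) / 2.0 ** (n + 1)) / (n / 2.0 ** n) for n in range(1, 60)]
assert abs(ratios[-1] - 0.5) < 0.01          # ratios approach L = 1/2

# The partial sums approach the known sum 2.
s = sum(n / 2.0 ** n for n in range(1, 200))
assert abs(s - 2.0) < 1e-12
print("ratio test example: L = 1/2, sum = 2")
```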
The alternating series test: let $a_k \ge 0$, with $a_k$ decreasing and $a_k \to 0$ as $k \to \infty$. Then $s = \sum_{k=1}^\infty (-1)^k a_k$ exists.

We have the following relation between $s_n$ and $s_{n-2}$:
$s_n - s_{n-2} = \sum_{k=1}^n (-1)^k a_k - \sum_{k=1}^{n-2} (-1)^k a_k = \sum_{k=n-1}^n (-1)^k a_k = (-1)^n a_n + (-1)^{n-1} a_{n-1} = (-1)^n (a_n - a_{n-1})$.

We have $a_n - a_{n-1} \le 0$, since $a_k$ is decreasing, so $s_n - s_{n-2} \ge 0$ if $n$ is odd (so $(-1)^n < 0$) and $s_n - s_{n-2} \le 0$ if $n$ is even (so $(-1)^n > 0$). So the even subsequence $s_{2n}$ is a decreasing sequence and the odd subsequence $s_{2n-1}$ is an increasing sequence.

Also $s_{2n} - s_{2n-1} = a_{2n} > 0$, so $s_2 \ge s_{2n} > s_{2n-1} \ge s_1$. So $s_{2n}$ is bounded below by $s_1$, so has a limit $s_+ \ge s_1$, say. Also $s_{2n-1}$ is bounded above by $s_2$, so has a limit $s_- \le s_2$, say. Then $s_+ - s_- = \lim_{n \to \infty}(s_{2n} - s_{2n-1}) = \lim_{n \to \infty} a_{2n} = 0$, so $s_+ = s_- = s$, say, and it then follows that we have $\lim_{n \to \infty} s_n = s$, so the limit exists, as required.

For this last step, using the fact that $s_+ = s_- = s$, we need to prove that $\lim_{n \to \infty} s_n = s$: we can choose $N_+(\epsilon)$ so that $|s_n - s| < \epsilon$, whenever $n > N_+(\epsilon)$ and $n$ is even. We can choose $N_-(\epsilon)$ so that $|s_n - s| < \epsilon$, whenever $n > N_-(\epsilon)$ and $n$ is odd. Then put $N(\epsilon) = \max(N_+(\epsilon), N_-(\epsilon))$. We then have $|s_n - s| < \epsilon$, for any $n > N(\epsilon)$, whether $n$ is even or odd, so we are done.

Note that we have $s_{2n} > s > s_{2n-1}$, so the error in the partial sum $s_n$ is always less in size than the next term in the series.
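The closing remark, that the error is always less in size than the next term, is easy to watch numerically. This Python sketch (not part of the notes) uses the alternating harmonic series, whose sum is $\ln(2)$ as noted later; since that series starts with a positive term, it is the odd partial sums that lie above the sum and the even ones below:

```python
import math

# For the alternating harmonic series (sum = ln 2), check that the partial sum
# s_n is always within a_{n+1} = 1/(n+1) of the sum, and that the partial sums
# bracket the sum from alternate sides.
s = math.log(2)
partial = 0.0
for n in range(1, 2001):
    partial += (-1) ** (n - 1) / n
    assert abs(s - partial) < 1.0 / (n + 1)   # error below the next term
    if n % 2 == 1:
        assert partial > s                     # odd partial sums lie above ln 2
    else:
        assert partial < s                     # even partial sums lie below ln 2
print("alternating-series error bound verified for 2000 partial sums")
```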
Basic series

The geometric series: $1 + x + x^2 + x^3 + \dots = \sum_{k=0}^\infty x^k = \frac{1}{1-x}$, converges if and only if $-1 < x < 1$.

The p-series $S_p = 1 + \frac{1}{2^p} + \frac{1}{3^p} + \frac{1}{4^p} + \dots = \sum_{n=1}^\infty \frac{1}{n^p}$ converges if and only if $p > 1$:

- If $p \le 0$, then the statement $a_n = n^{-p} \to 0$ as $n \to \infty$ is false, so the series diverges.
- If $p > 0$, then the function $\frac{1}{x^p}$ is decreasing, so we compare with the integral $\int_1^\infty \frac{dx}{x^p}$. We know this converges iff $p > 1$, and then the integral is $\frac{1}{p-1}$. So by the integral test, the p-series diverges for $0 < p \le 1$ and converges for $p > 1$. When the series converges, by the integral test we have the sum $S_p \le 1 + \frac{1}{p-1} = \frac{p}{p-1}$.

Special cases of p-series:

- $p = 1$: the harmonic series $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \dots$ diverges.
- $p = 2$: the series $1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \dots$ converges and the sum, as shown first by Euler, is $\frac{\pi^2}{6}$.

The alternating harmonic series: $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dots = \sum_{n=1}^\infty (-1)^{n-1} \frac{1}{n} = \ln(2)$. This series converges by the alternating series test.
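These facts can be probed numerically (a Python aside, not part of the notes): the $p = 2$ partial sums approach $\frac{\pi^2}{6}$, the bound $S_p \le \frac{p}{p-1}$ holds for several $p > 1$, and the harmonic partial sums grow like $\ln(n)$:

```python
import math

# p = 2: partial sums approach pi^2/6; the tail beyond N is roughly 1/N.
s = sum(1.0 / n**2 for n in range(1, 100001))
assert abs(s - math.pi**2 / 6) < 1e-4

# Integral-test bound S_p <= p/(p-1) for several p > 1 (partial sums suffice,
# since they increase toward S_p).
for p in (1.5, 2.0, 3.0):
    S_p = sum(1.0 / n**p for n in range(1, 200001))
    assert S_p < p / (p - 1)

# p = 1: the harmonic partial sums exceed ln(n), so they are unbounded.
h = sum(1.0 / n for n in range(1, 100001))
assert h > math.log(100000)
print("p-series checks passed")
```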
The series for various functions:

$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \dots = \sum_{n=0}^\infty \frac{x^n}{n!}$.
This converges for all $x$ by the ratio test:
$\left|\frac{a_{n+1}}{a_n}\right| = \frac{|x|^{n+1}}{(n+1)!} \cdot \frac{n!}{|x|^n} = \frac{|x|}{n+1} \to 0$.

$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots = \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!}$.
This converges for all $x$ by the ratio test:
$\left|\frac{a_{n+1}}{a_n}\right| = \frac{|x|^{2n+3}}{(2n+3)!} \cdot \frac{(2n+1)!}{|x|^{2n+1}} = \frac{x^2}{(2n+3)(2n+2)} \to 0$.

$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \dots = \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n)!}$.
This converges for all $x$ by the ratio test:
$\left|\frac{a_{n+1}}{a_n}\right| = \frac{|x|^{2n+2}}{(2n+2)!} \cdot \frac{(2n)!}{|x|^{2n}} = \frac{x^2}{(2n+2)(2n+1)} \to 0$.

$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \dots = \sum_{n=1}^\infty (-1)^{n-1} \frac{x^n}{n}$.
This converges for $|x| < 1$ by the ratio test:
$\left|\frac{a_{n+1}}{a_n}\right| = \frac{|x|^{n+1}}{n+1} \cdot \frac{n}{|x|^n} = \frac{n|x|}{n+1} \to |x|$.
The series diverges for $|x| > 1$ by the ratio test. The series diverges at $x = -1$, since it is then the negative of the harmonic series, by the p-test ($p = 1$). It converges when $x = 1$, since it is then the alternating harmonic series, convergent by the alternating series test. So the series converges when $-1 < x \le 1$. The interval $(-1, 1]$ is called the interval of convergence of the series.
$\arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \dots = \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{2n+1}$.
We first use the ratio test:
$\left|\frac{a_{n+1}}{a_n}\right| = \frac{|x|^{2n+3}}{2n+3} \cdot \frac{2n+1}{|x|^{2n+1}} = x^2 \frac{2n+1}{2n+3} \to x^2$.
The series converges for $|x| < 1$ by the ratio test. The series diverges for $|x| > 1$ by the ratio test. The series converges when $x = \pm 1$, since for each of these values of $x$, the series is an alternating series which obeys the conditions for the alternating series test, so converges by the alternating series test. So the series converges when $-1 \le x \le 1$. The interval of convergence is $[-1, 1]$.

The special case when $x = 1$ gives the nice formula:
$\pi = 4\left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \dots\right)$.

The Pitt comic series: $f(x) = \sum_{n=1}^\infty (-1)^n \frac{x^n}{3n}$.
We first use the ratio test:
$\left|\frac{a_{n+1}}{a_n}\right| = \frac{|x|^{n+1}}{3(n+1)} \cdot \frac{3n}{|x|^n} = |x| \frac{n}{n+1} \to |x|$.
If $|x| < 1$, the series converges by the ratio test. If $|x| > 1$, the series diverges by the ratio test. If $x = 1$, the series is $\sum_{n=1}^\infty \frac{(-1)^n}{3n}$, which converges by the alternating series test (conditionally, not absolutely, by the next result). If $x = -1$, the series is $\sum_{n=1}^\infty \frac{1}{3n}$, which diverges by the p-test ($p = 1$). So the interval of convergence is $(-1, 1]$.
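The formula $\pi = 4(1 - \frac{1}{3} + \frac{1}{5} - \dots)$ can be tried out directly; it converges slowly, with the error after $n$ terms below the next term $\frac{4}{2n+1}$ by the alternating series test. A Python sketch (not part of the notes):

```python
import math

# Approximate pi with the alternating series pi = 4(1 - 1/3 + 1/5 - 1/7 + ...).
def leibniz_pi(n_terms):
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

# Alternating-series error bound: after n terms the error is below 4/(2n+1).
for n in (10, 100, 1000, 10000):
    assert abs(leibniz_pi(n) - math.pi) < 4.0 / (2 * n + 1)
print("pi approximation obeys the alternating-series error bound")
```

Even 10000 terms only give about three correct decimal places, which illustrates why this elegant series is not a practical way to compute $\pi$.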
Taylor series

The Taylor series of $f(x)$, based at $x = a$, is the series:
$T(f, a)(x) = \sum_{n=0}^\infty \frac{1}{n!} f^{(n)}(a)(x - a)^n$.
Here, if $n > 0$, $f^{(n)}(a)$ denotes the $n$-th derivative of $f$ evaluated at $x = a$. Also $f^{(0)}(a)$ is defined to be $f(a)$. Note that $0!$ is defined to be 1, so the series written out is:
$T(f, a)(x) = f(a) + f^{(1)}(a)(x-a) + \frac{1}{2} f^{(2)}(a)(x-a)^2 + \frac{1}{6} f^{(3)}(a)(x-a)^3 + \frac{1}{24} f^{(4)}(a)(x-a)^4 + \dots$.
Note that the first two terms give the standard linear approximation to $f$, based at $x = a$.

Examples

- $f(x) = e^x$, $f^{(n)}(x) = e^x$, $T(f, a) = e^a \sum_{n=0}^\infty \frac{(x-a)^n}{n!}$.
  This Taylor series converges for all $x$ and represents the function $e^x$.
- $f(x) = \cos(x)$, $f^{(2n)}(x) = (-1)^n \cos(x)$, $f^{(2n+1)}(x) = (-1)^{n+1} \sin(x)$,
  $T(f, a) = \cos(a) \left(\sum_{n=0}^\infty (-1)^n \frac{(x-a)^{2n}}{(2n)!}\right) - \sin(a) \left(\sum_{n=0}^\infty (-1)^n \frac{(x-a)^{2n+1}}{(2n+1)!}\right)$.
  This Taylor series converges for all $x$ and represents the function $\cos(x)$.
- $f(x) = \sin(x)$, $f^{(2n)}(x) = (-1)^n \sin(x)$, $f^{(2n+1)}(x) = (-1)^n \cos(x)$,
  $T(f, a) = \sin(a) \left(\sum_{n=0}^\infty (-1)^n \frac{(x-a)^{2n}}{(2n)!}\right) + \cos(a) \left(\sum_{n=0}^\infty (-1)^n \frac{(x-a)^{2n+1}}{(2n+1)!}\right)$.
  This Taylor series converges for all $x$ and represents the function $\sin(x)$.
- $f(x) = \frac{1}{x}$, $f^{(n)}(x) = \frac{(-1)^n n!}{x^{n+1}}$, $T(f, a) = \sum_{n=0}^\infty \frac{(a-x)^n}{a^{n+1}}$.
  This is a geometric series with first term $\frac{1}{a}$ and with ratio $\frac{a-x}{a}$, and converges, with sum $\frac{1}{a} \cdot \frac{1}{1 - \frac{a-x}{a}} = \frac{1}{x}$, if and only if $|x - a| < |a|$.
- $f(x) = \ln(x)$, $f^{(n)}(x) = \frac{(-1)^{n-1}(n-1)!}{x^n}$ (for $n > 0$), $T(f, a) = \ln(a) - \sum_{n=1}^\infty \frac{(a-x)^n}{n a^n}$.
  Here we are assuming that $a > 0$. This series converges to $\ln(x)$ if $0 < x \le 2a$.
- $f(x) = x^r$, where $r$ is a real number and (to avoid unnecessary complications) we take $a > 0$, $f^{(n)}(x) = r(r-1)(r-2)\dots(r-n+1)\,x^{r-n}$, $T(f, a) = \sum_{n=0}^\infty \binom{r}{n} a^{r-n} (x-a)^n$.
  Here $\binom{r}{n} = \frac{r(r-1)(r-2)\dots(r-n+1)}{n!}$, when $n > 0$, and $\binom{r}{0} = 1$. This series converges for all $x$ if $r$ is a non-negative integer, when it agrees with the standard binomial expansion. If $r$ is non-integral, the series converges to $x^r$, for $0 < x < 2a$.
Taylor series based at the origin

Often it is convenient to expand around the origin, that is, to put $a = 0$. The Taylor series for this case are also called Maclaurin series. For the series for $\ln(x)$ and $x^r$, this entails shifting $x$ by a constant, so that the derivatives are well-defined. We then have the following Taylor series, based at $a = 0$.

- $f(x) = e^x$, $f^{(n)}(x) = e^x$, $f^{(n)}(0) = 1$,
  $T(f, 0) = \sum_{n=0}^\infty \frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{120} + \dots$.
  This Taylor series converges for all $x$ and represents the function $e^x$.
- $f(x) = \cos(x)$, $f^{(2n)}(x) = (-1)^n \cos(x)$, $f^{(2n+1)}(x) = (-1)^{n+1} \sin(x)$, $f^{(2n)}(0) = (-1)^n$, $f^{(2n+1)}(0) = 0$,
  $T(f, 0) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n}}{(2n)!} = 1 - \frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{720} + \frac{x^8}{40320} - \dots$.
  This Taylor series converges for all $x$ and represents the function $\cos(x)$.
- $f(x) = \sin(x)$, $f^{(2n)}(x) = (-1)^n \sin(x)$, $f^{(2n+1)}(x) = (-1)^n \cos(x)$, $f^{(2n)}(0) = 0$, $f^{(2n+1)}(0) = (-1)^n$,
  $T(f, 0) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{x^7}{5040} + \frac{x^9}{362880} - \dots$.
  This Taylor series converges for all $x$ and represents the function $\sin(x)$.
- $f(x) = \frac{1}{1+x}$, $f^{(n)}(x) = \frac{(-1)^n n!}{(1+x)^{n+1}}$, $f^{(n)}(0) = (-1)^n n!$,
  $T(f, 0) = \sum_{n=0}^\infty (-x)^n = 1 - x + x^2 - x^3 + x^4 - x^5 + \dots$.
  This is the standard geometric series with ratio $-x$ and converges, with sum $\frac{1}{1+x}$, if and only if $-1 < x < 1$.
- $f(x) = \ln(1+x)$, $f^{(n)}(x) = \frac{(-1)^{n-1}(n-1)!}{(1+x)^n}$ (for $n > 0$), $f(0) = 0$, $f^{(n)}(0) = (-1)^{n-1}(n-1)!$,
  $T(f, 0) = -\sum_{n=1}^\infty \frac{(-x)^n}{n} = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \dots$.
  This series converges to $\ln(1+x)$ if $-1 < x \le 1$.
- $f(x) = (1+x)^r$, where $r$ is a real number, $f^{(n)}(x) = r(r-1)(r-2)\dots(r-n+1)(1+x)^{r-n}$, $f^{(n)}(0) = r(r-1)(r-2)\dots(r-n+1)$,
  $T(f, 0) = \sum_{n=0}^\infty \binom{r}{n} x^n = 1 + rx + \frac{r(r-1)}{2} x^2 + \frac{r(r-1)(r-2)}{6} x^3 + \frac{r(r-1)(r-2)(r-3)}{24} x^4 + \dots$.
  This series converges for all $x$ if $r$ is a non-negative integer, when it agrees with the standard binomial expansion. If $r$ is non-integral, the series converges to $(1+x)^r$, for $-1 < x < 1$.
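The Maclaurin series above are easy to check against library implementations of the same functions. A Python sketch (an aside, not part of the notes), testing the binomial series with the non-integral exponent $r = \frac{1}{2}$, where $(1+x)^{1/2} = \sqrt{1+x}$:

```python
import math

# Truncated Maclaurin series for e^x, sin, cos, ln(1+x), and (1+x)^r,
# compared with the library functions.
def exp_series(x, n=25):
    return sum(x**k / math.factorial(k) for k in range(n))

def sin_series(x, n=25):
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1) for k in range(n))

def cos_series(x, n=25):
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(n))

def log1p_series(x, n_terms=2000):
    return sum((-1) ** (n - 1) * x**n / n for n in range(1, n_terms + 1))

def binom(r, n):  # generalized binomial coefficient C(r, n)
    out = 1.0
    for k in range(n):
        out *= (r - k) / (k + 1)
    return out

def binom_series(x, r, n_terms=200):
    return sum(binom(r, n) * x**n for n in range(n_terms))

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(exp_series(x) - math.exp(x)) < 1e-9
    assert abs(sin_series(x) - math.sin(x)) < 1e-9
    assert abs(cos_series(x) - math.cos(x)) < 1e-9
for x in (-0.5, 0.3, 0.9):                       # inside (-1, 1]
    assert abs(log1p_series(x) - math.log(1 + x)) < 1e-8
for x in (-0.5, -0.1, 0.0, 0.3, 0.5):            # inside (-1, 1)
    assert abs(binom_series(x, 0.5) - math.sqrt(1 + x)) < 1e-10
assert binom(3, 4) == 0.0   # integer r: the series terminates (binomial theorem)
print("Maclaurin series agree with math.exp/sin/cos/log/sqrt")
```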
Taylor approximations; the error term; convergence

The $n$-th Taylor approximation $T_n(f, a)(x)$, based at $a$, to a function $f(x)$ is the $(n+1)$-th partial sum of the Taylor series:
$T_n(f, a)(x) = \sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(x-a)^k = f(a) + f^{(1)}(a)(x-a) + \frac{f^{(2)}(a)}{2}(x-a)^2 + \frac{f^{(3)}(a)}{6}(x-a)^3 + \dots + \frac{f^{(n)}(a)}{n!}(x-a)^n$.
Note that $T_n(f, a)(x)$ is a sum of $n+1$ terms and is a polynomial of degree at most $n$ in $x$. Then $T_n(f, a)(x)$ has the characteristic property that its derivatives agree with those of the function $f(x)$, when both are evaluated at $x = a$, up to and including the $n$-th derivative.

Consider now the difference $E_n(f, a)(x) = f(x) - T_n(f, a)(x)$. Intuitively this should be small. There are various estimates of the size of this difference; one is the following:
$|E_n(f, a)(x)| \le \frac{K_{n+1}}{(n+1)!} |x-a|^{n+1}$.
This estimate is valid throughout the interval $a - r \le x \le a + r$, for a fixed positive $r$, where the quantity $K_{n+1}$ is the maximum of $|f^{(n+1)}(x)|$ on that interval.

So for example, for the function $f(x) = e^x$, we have $K_{n+1} = e^{a+r}$ and
$|E_n(f, a)(x)| \le \frac{e^{a+r}}{(n+1)!} |x-a|^{n+1}$.

For the functions $\sin(x)$ and $\cos(x)$, we know that $K_{n+1}$ is a value of one of the two functions $|\sin(x)|$ or $|\cos(x)|$, somewhere on the interval $[a-r, a+r]$, which can never be larger than 1, so we always have the following estimate:
$|E_n(f, a)(x)| \le \frac{1}{(n+1)!} |x-a|^{n+1}$.

For each of these functions, we notice that as $n \to \infty$, the error goes to zero, since the denominator $(n+1)!$ grows much faster than any power of the form $u^n$ for fixed $u$.
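The error estimate for $e^x$ can be verified numerically. This Python sketch (not part of the notes) takes $a = 0$ and $r = 2$, so $K_{n+1} = e^{0+2} = e^2$, and checks that the actual error never exceeds the bound on $[-2, 2]$:

```python
import math

# Check |E_n(f,0)(x)| <= K_{n+1}/(n+1)! * |x|^{n+1} for f(x) = e^x on [-2, 2],
# with K_{n+1} = e^2 (the maximum of the (n+1)-th derivative e^x on the interval).
def taylor_exp(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

r = 2.0
for n in range(1, 15):
    for x in (-2.0, -1.0, 0.5, 2.0):
        err = abs(math.exp(x) - taylor_exp(x, n))
        bound = math.exp(r) / math.factorial(n + 1) * abs(x) ** (n + 1)
        assert err <= bound + 1e-12   # small slack for floating-point rounding
print("Taylor error bound verified for e^x on [-2, 2]")
```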
When the error goes to zero as $n$ goes to infinity, we get two by-products: first, the Taylor series converges on $[a-r, a+r]$; second, the Taylor series actually represents the function on the interval $[a-r, a+r]$.

So we can conclude, as stated earlier, that the Taylor series for the functions $e^x$, $\sin(x)$ and $\cos(x)$ always represents the function, on any interval $[a-r, a+r]$, for any reals $a$ and $r$, with $r > 0$. Since this is true for any real $r > 0$, these Taylor series represent the functions on the entire real line.

As another example, consider the function $f(x) = \frac{1}{1+x}$ and its expansion based at 0. We have $f^{(n+1)}(x) = \frac{(-1)^{n+1}(n+1)!}{(1+x)^{n+2}}$, so, on the interval $[-r, r]$, where $0 < r < 1$, we get $K_{n+1} = \frac{(n+1)!}{(1-r)^{n+2}}$ and then we have:
$|E_n(f, 0)(x)| \le \frac{1}{1-r}\left(\frac{|x|}{1-r}\right)^{n+1}$.
This goes to zero as $n \to \infty$, provided $|x| < 1 - r$. Note that $r$ must be restricted to the range $0 < r < 1$, since the function and its derivatives blow up as $x \to -1^+$.

We conclude that the Taylor series represents the function $\frac{1}{1+x}$ on the interval $[-r, r]$, for any $0 < r < 1$, so therefore also on the interval $(-1, 1)$.

Finally, if a Taylor series converges on an open interval $(p, q)$, then it converges absolutely on that interval.
Tricks with Taylor series

Series obey the same rules as do ordinary limits. For example, if $a = \sum_{k=1}^\infty a_k$ and $b = \sum_{k=1}^\infty b_k$, then $a + b = \sum_{k=1}^\infty (a_k + b_k)$ and $a - b = \sum_{k=1}^\infty (a_k - b_k)$.

So suppose that we have two Taylor series, based at the same point, convergent on the same open interval $(p, q)$ (i.e. we ignore the endpoints, where these series may or may not converge):
$f(x) = \sum_{n=0}^\infty f_n (x-a)^n$, where $f_n = \frac{f^{(n)}(a)}{n!}$,
$g(x) = \sum_{n=0}^\infty g_n (x-a)^n$, where $g_n = \frac{g^{(n)}(a)}{n!}$.
Then on the same open interval $(p, q)$, we have:

- The Taylor series for $f(x) + g(x)$ is the sum of the Taylor series for $f(x)$ with that for $g(x)$:
  $f(x) + g(x) = \sum_{n=0}^\infty h_n (x-a)^n$, where $h_n = f_n + g_n$.
- The Taylor series for $f(x) - g(x)$ is the subtraction of the Taylor series for $g(x)$ from that for $f(x)$:
  $f(x) - g(x) = \sum_{n=0}^\infty h_n (x-a)^n$, where $h_n = f_n - g_n$.
- The Taylor series for $f(x)g(x)$ is the product of the Taylor series for $f(x)$ with that of $g(x)$:
  $f(x)g(x) = \sum_{n=0}^\infty h_n (x-a)^n$, where $h_n = \sum_{k=0}^n f_k g_{n-k}$.
- The Taylor series for $f'(x)$ is the derivative of the Taylor series for $f(x)$:
  $f'(x) = \sum_{n=1}^\infty h_n (x-a)^{n-1}$, where $h_n = n f_n$.
- The Taylor series for $\int_a^x f(t)\,dt$ is the integral of the Taylor series for $f(x)$:
  $\int_a^x f(t)\,dt = \sum_{n=0}^\infty h_n (x-a)^{n+1}$, where $h_n = \frac{f_n}{n+1}$.
- If $g(a) \ne 0$, the Taylor series for $\frac{f(x)}{g(x)}$ is the quotient of the Taylor series for $f(x)$ by that of $g(x)$. The quotient series may be written
  $\frac{f(x)}{g(x)} = \frac{1}{g_0} \sum_{n=0}^\infty h_n \left(\frac{x-a}{g_0}\right)^n$,
  where the first few $h_n$ are as follows:
  $h_0 = f_0$,
  $h_1 = -f_0 g_1 + f_1 g_0$,
  $h_2 = f_0(-g_0 g_2 + g_1^2) - f_1 g_0 g_1 + f_2 g_0^2$,
  $h_3 = f_0(-g_0^2 g_3 + 2 g_0 g_1 g_2 - g_1^3) + f_1(-g_0^2 g_2 + g_0 g_1^2) - f_2 g_0^2 g_1 + f_3 g_0^3$,
  $h_4 = f_0(-g_0^3 g_4 + 2 g_0^2 g_1 g_3 + g_0^2 g_2^2 - 3 g_0 g_1^2 g_2 + g_1^4) + f_1(-g_0^3 g_3 + 2 g_0^2 g_1 g_2 - g_0 g_1^3) + f_2(-g_0^3 g_2 + g_0^2 g_1^2) - f_3 g_0^3 g_1 + f_4 g_0^4$.

Substitution of variables can create new Taylor series out of old: usually one replaces the variable $x$ by a simple polynomial in $x$, say $bx + c$ or $kx^2$, for constants $b$, $c$, $k$. The convergence interval has to be adjusted accordingly. For example, if $f(x) = \sum_{n=0}^\infty f_n x^n$ converges to $f(x)$ for $|x| < R$, then $f(x^2) = \sum_{n=0}^\infty f_n x^{2n}$ converges for $|x| < \sqrt{R}$.
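The product and quotient rules above are coefficient operations: the product is a convolution, and the quotient coefficients can be found by the standard recursion $c_n = \frac{1}{g_0}\left(f_n - \sum_{k<n} c_k g_{n-k}\right)$, which is equivalent to the closed-form $h_n$ expressions in the text. A Python sketch with exact rationals (an aside, not part of the notes; the test functions $e^x$ and $\frac{1}{1-x}$ are chosen for their simple coefficients):

```python
from fractions import Fraction

def product(f, g, N):
    # Cauchy product: n-th coefficient is sum_{k=0}^n f_k g_{n-k}.
    return [sum(f[k] * g[n - k] for k in range(n + 1)) for n in range(N)]

def quotient(f, g, N):
    # Coefficients c with f = c * g (as series); requires g[0] != 0.
    c = []
    for n in range(N):
        c.append((f[n] - sum(c[k] * g[n - k] for k in range(n))) / g[0])
    return c

N = 8
fact = [Fraction(1)]
for n in range(1, N):
    fact.append(fact[-1] * n)
f = [1 / fact[n] for n in range(N)]               # coefficients of e^x
g = [Fraction(1)] * N                             # coefficients of 1/(1-x)

# (1/(1-x)) * (1-x) = 1: the product with [1, -1, 0, ...] is [1, 0, 0, ...].
one_minus_x = [Fraction(1), Fraction(-1)] + [Fraction(0)] * (N - 2)
assert product(g, one_minus_x, N) == [Fraction(1)] + [Fraction(0)] * (N - 1)
# e^x / e^x = 1.
assert quotient(f, f, N) == [Fraction(1)] + [Fraction(0)] * (N - 1)
# Cross-check against the explicit formula in the text: c_1 = h_1 / g_0^2
# with h_1 = -f_0 g_1 + f_1 g_0.
c = quotient(f, g, N)
assert c[1] == (-f[0] * g[1] + f[1] * g[0]) / g[0] ** 2
print("series product and quotient checks passed")
```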
Examples

We start with the geometric series:
$f_1(x) = \frac{1}{1-x} = \sum_{n=0}^\infty x^n = 1 + x + x^2 + x^3 + x^4 + \dots$, valid for $|x| < 1$.

Replacing $x$ by $-x$, we get the series:
$f_2(x) = \frac{1}{1+x} = \sum_{n=0}^\infty (-x)^n = 1 - x + x^2 - x^3 + x^4 - \dots$, valid for $|x| < 1$.

Replacing $x$ by $2x$, we get the series:
$f_3(x) = \frac{1}{1-2x} = \sum_{n=0}^\infty (2x)^n = 1 + 2x + 4x^2 + 8x^3 + 16x^4 + \dots$, valid for $|x| < \frac{1}{2}$.

Replacing $x$ by $-x^2$, we get the series:
$f_4(x) = \frac{1}{1+x^2} = \sum_{n=0}^\infty (-x^2)^n = 1 - x^2 + x^4 - x^6 + x^8 - \dots$, valid for $|x| < 1$.

Replacing $x$ by $1 - x$, we get the series:
$f_5(x) = \frac{1}{x} = \sum_{n=0}^\infty (1-x)^n = 1 + (1-x) + (1-x)^2 + (1-x)^3 + (1-x)^4 + \dots$, valid for $0 < x < 2$.

Differentiating once with respect to $x$, we get the series:
$f_6(x) = \frac{1}{(1-x)^2} = \sum_{n=0}^\infty (n+1)x^n = 1 + 2x + 3x^2 + 4x^3 + 5x^4 + \dots$, valid for $|x| < 1$.

Differentiating twice with respect to $x$, and dividing the result by 2, we get the series:
$f_7(x) = \frac{1}{(1-x)^3} = \sum_{n=0}^\infty \frac{(n+1)(n+2)}{2} x^n = 1 + 3x + 6x^2 + 10x^3 + 15x^4 + \dots$, valid for $|x| < 1$.
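The differentiated geometric series can be checked at a few sample points inside $(-1, 1)$. A Python sketch (not part of the notes):

```python
# Verify 1/(1-x)^2 = sum (n+1) x^n and 1/(1-x)^3 = sum (n+1)(n+2)/2 x^n
# numerically at sample points inside the interval of convergence.
def s6(x, N=200):
    return sum((n + 1) * x**n for n in range(N))

def s7(x, N=200):
    return sum((n + 1) * (n + 2) / 2 * x**n for n in range(N))

for x in (-0.8, -0.3, 0.0, 0.4, 0.8):
    assert abs(s6(x) - 1 / (1 - x) ** 2) < 1e-9
    assert abs(s7(x) - 1 / (1 - x) ** 3) < 1e-9
print("differentiated geometric series verified")
```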
Integrating once with respect to $x$, we get the series:
$f_8(x) = -\ln(1-x) = \sum_{n=0}^\infty \frac{x^{n+1}}{n+1} = x + \frac{x^2}{2} + \frac{x^3}{3} + \frac{x^4}{4} + \frac{x^5}{5} + \dots$, valid for $-1 \le x < 1$.
Note that the new series here has a larger range of convergence than the original series.

Integrating $f_4$ once with respect to $x$ gives the series:
$f_9(x) = \arctan(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{2n+1} = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \frac{x^9}{9} - \dots$, valid for $-1 \le x \le 1$.

Similarly, let us start with the series for $\sin(x)$, valid for all $x$:
$\sin(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{x^7}{5040} + \frac{x^9}{362880} - \dots$.
Then we get the series for $\cos(x)$ by differentiating with respect to $x$, and the series for $\sin(2x)$ by replacing $x$ by $2x$, each valid for all $x$:
$\sin(2x) = \sum_{n=0}^\infty \frac{(-1)^n 2^{2n+1} x^{2n+1}}{(2n+1)!} = 2x - \frac{4x^3}{3} + \frac{4x^5}{15} - \frac{8x^7}{315} + \frac{4x^9}{2835} - \dots$.

Finally, using the series $e^x = \sum_{n=0}^\infty \frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{120} + \dots$, valid for all $x$, and replacing $x$ by $-x^2$, and integrating, we get a series for $\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt$, an integral which cannot be computed in terms of standard functions.

First we substitute $-x^2$ for $x$, obtaining the series for $e^{-x^2}$, again valid for all $x$:
$e^{-x^2} = \sum_{n=0}^\infty \frac{(-x^2)^n}{n!} = 1 - x^2 + \frac{x^4}{2} - \frac{x^6}{6} + \frac{x^8}{24} - \frac{x^{10}}{120} + \dots$.
Then we integrate and multiply by $\frac{2}{\sqrt{\pi}}$, to get the series for $\operatorname{erf}(x)$, again valid for all $x$:
$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{n!(2n+1)} = \frac{2}{\sqrt{\pi}} \left(x - \frac{x^3}{3} + \frac{x^5}{10} - \frac{x^7}{42} + \frac{x^9}{216} - \frac{x^{11}}{1320} + \dots\right)$.
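Although erf cannot be expressed in terms of standard functions, Python's standard library does ship a numerical implementation, so the series can be tested against it directly (a sketch, not part of the notes):

```python
import math

# erf(x) = (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1))
def erf_series(x, n_terms=40):
    total = 0.0
    for n in range(n_terms):
        total += (-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * total

for x in (-2.0, -0.5, 0.0, 1.0, 2.0):
    assert abs(erf_series(x) - math.erf(x)) < 1e-10
print("series agrees with math.erf")
```

The factorial in the denominator makes the convergence rapid for moderate $|x|$, just as for the exponential series it came from.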
Euler's formulas relating trigonometric and exponential functions

Looking at the series for $e^x$, $\sin(x)$ and $\cos(x)$, each of which may actually be used as defining these functions, we see a strong familial resemblance. Euler realized that these are different aspects of the same function. Indeed we can define, for any complex number $z$, the function $e^z$ by the same formula as used in the real case:
$e^z = \sum_{n=0}^\infty \frac{z^n}{n!}$.
It is possible to show that this gives a nice function, well-defined for all $z$, which agrees with the standard exponential when $z$ is real.

Now consider the product formula for the series of $e^z$ and $e^w$:
$e^z = \sum_{n=0}^\infty a_n z^n$, $e^w = \sum_{n=0}^\infty b_n w^n$, $a_n = b_n = \frac{1}{n!}$,
$e^z e^w = \sum_{n=0}^\infty c_n$, where
$c_n = \sum_{k=0}^n a_k z^k b_{n-k} w^{n-k} = \sum_{k=0}^n \frac{z^k w^{n-k}}{k!(n-k)!} = \frac{1}{n!} \sum_{k=0}^n \binom{n}{k} z^k w^{n-k} = \frac{(z+w)^n}{n!}$.
So $e^z e^w = \sum_{n=0}^\infty \frac{(z+w)^n}{n!} = e^{z+w}$.

In particular, writing $z = x + iy$, with $x$ and $y$ real, we get the formula:
$e^z = e^{x+iy} = e^x e^{iy}$.
Now $e^x$ is the ordinary exponential function, so we will comprehend $e^z$ if we know what $e^{iy}$ is.
We write out its series, separating out the terms into even and odd powers of $y$:
$e^{iy} = \sum_{n=0}^\infty \frac{(iy)^n}{n!} = \sum_{n=0}^\infty \frac{(iy)^{2n}}{(2n)!} + \sum_{n=0}^\infty \frac{(iy)^{2n+1}}{(2n+1)!} = \sum_{n=0}^\infty \frac{(-1)^n y^{2n}}{(2n)!} + i \sum_{n=0}^\infty \frac{(-1)^n y^{2n+1}}{(2n+1)!} = \cos(y) + i\sin(y)$.
Here we used the fact that $i^2 = -1$, so $i^{2n} = (-1)^n$ and $i^{2n+1} = i(-1)^n$.

So we have the beautiful formulas of Euler:
$e^{iy} = \cos(y) + i\sin(y)$,
$e^{x+iy} = e^x(\cos(y) + i\sin(y))$.

In particular, putting $y = 2\pi$, we get: $e^{2\pi i} = 1$, so $e^{z + 2\pi i} = e^z$, so the function $e^z$ is periodic in the direction of $i$ with period $2\pi$.

Finally, putting $y = \pi$, we get the formula:
$e^{i\pi} = -1$.
This beautiful formula links four of the most important quantities in mathematics: $e$, $\pi$, $i$ and $-1$!
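All of these identities can be checked numerically with Python's complex exponential (a closing aside, not part of the notes):

```python
import cmath
import math

# e^{iy} = cos(y) + i sin(y) for sample real y:
for y in (-3.0, 0.0, 1.0, math.pi / 2, 5.0):
    assert abs(cmath.exp(1j * y) - complex(math.cos(y), math.sin(y))) < 1e-12

# e^{i pi} = -1 and the period 2*pi*i:
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
z = 0.3 + 0.7j
assert abs(cmath.exp(z + 2j * math.pi) - cmath.exp(z)) < 1e-12

# The product formula e^z e^w = e^{z+w} for complex z, w:
w = -1.1 + 0.4j
assert abs(cmath.exp(z) * cmath.exp(w) - cmath.exp(z + w)) < 1e-12
print("Euler's formulas verified numerically")
```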