Negentropy as a Function of Cumulants

Size: px
Start display at page:

Download "Negentropy as a Function of Cumulants"

Transcription

1 Negentropy as a Function of Cumulants Christopher S. Withers & Saralees Nadarajah First version: 19 December 2011 Research Report No. 15, 2011, Probability and Statistics Group School of Mathematics, The University of Manchester

2 Negentropy as a function of cumulants by Christopher S. Withers Applied Mathematics Group Industrial Research Limited Lower Hutt, NEW ZEALAND Saralees Nadarajah School of Mathematics University of Manchester Manchester M13 9PL, UK Abstract: Suppose f is a density that is close to that of a standard normal, φ. For the first time, exact expressions are given for the negentropy, the differential entropy, and related quantities such as f r /φ r 1 for r = 2, 3,... in terms of cumulants. Our expressions could have widespread use. Keywords: Cumulants; Differential entropy; Hermite polynomials; Negentropy. 1 Introduction and summary Negentropy was introduced in the classical book by Schrödinger (1944). According to Wikipedia, negentropy is the entropy that it exports to keep its own entropy low; it lies at the intersection of entropy and life. Application areas of negentropy have been numerous. We mention: projection pursuit, risk management, clustering validation, dry friction of polished surfaces, beamforming, blind equalization methods, robust independent component analysis, blind detection of direct sequence spread spectrum signals over fading channels, statistical dependence, molecular thermochemical properties of diverse functional acyclic compounds, general anaesthetic activity of aliphatic hydrocarbons, halocarbons and ethers, thermodynamical and informational aspects of the Darwinian revolution, large Poincare systems, reversed phase liquid-chromatography, microdrops of water, measure of its stability in the context of sustainable development, analysis of surface electromyogram signals, nonlinear diffusion, restoration of mammographic images, adaptive equalization for digital communication systems, modeling of carbonic anhydrase inhibitory activity of sulfonamides, adaptive linear multiuser detectors, evolution of chemical recruitment in ants, modeling environmental time series data, ecology, extraction of work from a single thermal bath, cell biology, temporal patterns in biochemical processes, grievance subsystems, population mean fitness in genetic systems, enzymic catalysis, basal metabolism in animals, telecommunications, writing, and reading. For excellent accounts about the theory and applications of negentropy, we refer the readers to Brillouin (1956, 1964), Sedov (1982), Britton (1983), Eriksson et al. (1987), and Hyvarinen et al. (2001). However, we are not aware of any exact and explicit expression for negentropy. Only some approximations have been known. Jones and Sibson (1987) claim to give an ap- 1

3 proximation for the differential entropy and negentropy of a density in terms of its third and fourth order cumulants. But, in fact, what they approximate is not the negentropy. Hyvarinen et al. (2001) suggest an approximation in the context of independent component analysis. Prasad et al. (2005) propose an approximation using generalized higher order statistics of different nonquadratic, nonlinear functions. Chang et al. (2008) utilize an approximation of negentropy based on the maximum entropy principle to measure non-gaussianity of data sequences. The aim of this paper is to give exact and explicit expressions for the differential entropy, negentropy and related quantities of a density f on R, in terms of its cumulants. Let φ be the density of a standard normal random variable. The Gram-Charlier series can be viewed as a Fourier expansion for f(x)/φ(x), a weighted sum of Hermite polynomials. Section 2 gives an expression for the general Fourier coefficient in terms of the cumulants of f when f is standardized to have zero mean and unit variance. These coefficients are given explicitly up to the fourteenth, and in general in terms of Bell polynomials. This is applied to the density of a standardized sample mean. Section 3 uses this series to express (f/φ) r φ for r = 2, 3,... in terms of the cumulants of f, or rather, in terms of these Fourier coefficients. Section 4 gives the differential entropy and negentropy of f in terms of these Fourier coefficients. Section 5 gives f r for r = 2, 3,... in terms of these Fourier coefficients. Section 6 gives expressions for (f/f 0 ) r f 0 for r = 2, 3,.... Section 7 gives f ln(f/f 0 ), the Kulback-Liebler divergence of f from an arbitrary density f 0 in terms of these Fourier coefficients and the Fourier coefficients of φ ln f 0. When dealing with estimates, the variance is often only known asymptotically. Let p n be the density of an asymptotically standardized estimate. Section 8 gives expressions for (p n /φ) r φ and hence for the entropy and negentropy of p n. Jones and Sibson (1987) gave a double approximation for negentropy of a density close to the φ. An asymptotically standardized estimate is the most realistic example of this. However, Section 8 shows that the role of the cumulants in this situation now looks very different from their role in Jones and Sibson s formulation. A novel feature of many of our higher order expansions is the use of the expected value of a product of Hermite polynomials with argument a standard normal random variable. These coefficients are discussed in Appendices A and B. 2 The Gram-Charlier series and its application to sample means Let X be a real absolutely continuous random variable with distribution F, density f, and finite moments and cumulants m r = E X r, κ r = κ r (X). Let N 0 be a standard normal random variable with density φ(x) = (2π) 1/2 exp( x 2 /2). Let H k = H k (x) be the kth Hermite polynomial: H k (x) = φ(x) 1 ( d/dx) k φ(x) = E (x + in 0 ) k (2.1) for i = 1, where the second equality follows by Withers (2000). Theorem 2.1 gives an expansion for f/φ. Theorem 2.1 Suppose that f 2 /φ <. Then f/φ lies in L 2 (φ) and has the Fourier 2

4 expansion f(x)/φ(x) = B k H k (x)/k! = 1 + ɛ (2.2) k=0 say, where B k = H k f = E H k (X) = E (X + in 0 ) k = 0 j k/2 ( ) k m k 2j ν 2j, (2.3) 2j ν 2j = E N 2j 0 = 1 3 (2j 1) = (2j)!/ ( 2 j j! ) (2.4) and N 0 is independent of X. Note (2.2) holds in the sense of convergence in L 2 (φ): [ 2 K f(x)/φ(x) B k H k (x)/k!] φ(x)dx 0 k=0 as K. The integrated form of (2.2) is P (X x) = Φ(x) φ(x) where Φ(x) is the distribution function of φ(x). B k H k 1 (x)/k!, Proof: The result follows from (2.1) and by noting that {H k / k!} form a complete orthonormal set of real functions on R with respect to φ(x): H j H k φ = k!δ jk. (2.5) Here, δ jk is the Kronecker delta function, 1 or 0 for j = k or not. Note (2.3) gives the Fourier coefficient B k in terms of the moments. For example, since H 0 = 1, H 1 = x, H 2 = x 2 1, H 3 = x 3 3x, H 4 = x 4 6x 2 + 3, H 5 = x 5 10x x,..., we have B 0 = 1, B 1 = m 1, B 2 = m 2 1, B 3 = m 3 3m 1, B 4 = m 4 6m 2 + 3, B 5 = m 5 10m 3 +15m 1, B 6 = m 6 15m 4 +45m 2 15 and B 7 = m 7 21m m 3 105m 1. Note that integrals like f 2 /φ and f ln φ are only meaningful if X is dimension-free. Suppose in fact that X is standardized so that E X = 0 and var(x) = 1, that is, m 1 = 0 and m 2 = 1. Then the expressions for the B k look simpler if we convert from moments to k=1 3

5 cumulants: B 0 = 1, B 1 = B 2 = 0, B 3 = κ 3, B 4 = κ 4, B 5 = κ 5, B 6 = κ κ 2 3, B 7 = κ κ 3 κ 4, B 8 = κ κ 3 κ κ 2 4, B 9 = κ κ 3 κ κ 4 κ κ 3 3, B 10 = κ κ 3 κ κ 4 κ κ κ 2 3κ 4, B 11 = κ κ 3 κ κ 4 κ κ 5 κ κ 2 3κ κ 3 κ 2 4, B 12 = κ κ 3 κ κ 4 κ κ 5 κ κ κ 2 3κ κ 3 κ 4 κ κ κ 4 3, B 13 = κ α at α = 22κ 3 κ κ 4 κ κ 5 κ κ 6 κ κ 2 3κ κ 3 κ 4 κ κ 3 κ κ 2 4κ κ 3 3κ 4, B 14 = κ β at β = 77κ 4 κ κ 5 κ κ 6 κ κ ( 2κ 3 κ κ 2 3κ κ 3 κ 4 κ κ 3 κ 5 κ κ 4 κ 2 ) κ 2 4κ 6. (2.6) (Edgeworth derived this form by a different route.) So, (2.2) can be written ɛ = f(x)/φ(x) 1 = B k H k (x)/k!. (2.7) Note (2.2) is known as the Gram-Charlier series. An alternative proof of Theorem 2.1 is as follows. An alternative proof of Theorem 2.1: Let D = d/dx. The operator exp{h( D) r /r!} acting on a density f increases its rth cumulant by h but does not change the other cumulants. (For r = 1 this gives Taylor s expansion. We assume that derivatives of all orders exist.) So, if m 1 = 0, m 2 = 1, k r = κ r δ r2 and S(t) = r=1 k rt r /r! then But by definition, k=3 f = exp {S( D)} φ. (2.8) S(t) k /k! = B rk (k)t r /r! r=k for k = 0, 1,..., where the partial exponential Bell polynomial B rk (k) is tabled in Comtet (1974, pages ) to k = 12. Recurrence formulas for them are given on page 136. The complete exponential Bell polynomial B r (k) is defined by for r 0. So, B r (k) = exp {S(t)} = r B rk (k) (2.9) k=0 t r B r (k)/r!. r=0 4

6 So, (2.8) and (2.1) give f(x) = φ(x) B r H r (x)/r!, r=0 where B r = B r (k), that is, (2.2) with B r = B r (k). Using Comtet s table, we used (2.9) to obtain B k to k = 12. This avoids the work of having to convert from moments to cumulants as needed using (2.3). However, since k 1 = k 2 = 0, about two thirds of the terms in (2.9) are zero. An alternative is to calculate B k as follows: B r = [r] 2k B r 2k,k (η) (2.10) 1 k r/3 for r 1, where η r 2 = κ r /r(r 1) and [r] 2k = r!/(r 2k)!. Note (2.10) follows by noting that S(t) = t 2 S 1 (t), where S 1 (t) = j=1 η jt j /j!. So, exp{s(t)} = k=0 t2k S 1 (t) k /k!. Now substitute S 1 (t) k /k! = r=k B rk(η)t r /r! and take the coefficient of t r /r! to obtain B rk (k)/r! = B r 2k,k (η)/(r 2k)!. For example, Comtet s table gives B rk for r 12. We used this with (2.10) to obtain B r for r = 13, 14. Cramer showed that the Gram-Charlier series converges absolutely and uniformly if f 2 /φ < and f(x) as x. Section 6.22 of Stuart and Ord (1987) gives this and Cramer s other theorem on its convergence. These theorems do not apply, for example, to the double exponential density. The expressions for B k in (2.6) above B 8 appear to be new. They were obtained using (2.9) and (2.10). Example 2.1 Suppose that X = X n is a standardized sample mean of sample of size n from a population with rth cumulant l r, say X n = (n/l 2 ) 1/2 (Y l 1 ). For r 2 X n has rth cumulant is κ r = α r n 1 r/2 = α r δ r 2, where δ = n 1/2 and α r = l r /l r/2 2. Note that κ 3 = α 3 δ, κ 4 = α 4 δ 2, κ 5 = α 5 δ 3, κ 6 = α 6 δ 4, κ 7 = α 7 δ 5, κ 8 = α 8 δ 6, κ 9 = α 9 δ 7, κ 10 = α 10 δ 8,.... Also for f 3, B r = r 2 k=k r { } b rk δ k : r k even, (2.11) where K 3j = j, K 3j+1 = j + 1 and K 3j+2 = j + 2, where by (2.6) b r,r 2 = α r, b 62 = 10α 2 3, b 73 = 35α 4 α 3, b 84 = 56α 5 α α 2 4, b 93 = 280α 3 3, b 95 = 84α 6 α α 5 α 4, b 10,4 = 2100α 4 α 2 3, b 10,6 = 120α 7 α α 6 α α 2 5, b 11,5 = 4620α 2 3α α 3 α 2 4, b 11,7 = 165α 3 α α 4 α α 5 α 6, b 12,4 = 15400α 4 3, b 12,6 = 9240α 2 3α α 3 α 4 α α 3 4, b 12,8 = 220α 3 α α 4 α α 5 α α

7 That is, B 3 = b 31 δ, B 4 = b 42 δ 2, B 5 = b 53 δ 3, B 6 = b 62 δ 2 + b 64 δ 4, B 7 = b 73 δ 3 + b 75 δ 5, B 8 = b 84 δ 4 + b 86 δ 6, B 9 = b 93 δ 3 + b 95 δ 5 + b 97 δ 7, B 10 = b 10,4 δ 4 + b 10,6 δ 6 + b 10,8 δ 8, B 11 = b 11,5 δ 5 + b 11,7 δ 7 + b 11,9 δ 9, B 12 = b 12,4 δ 4 + b 12,6 δ 6 + b 12,8 δ 8 + b 12,10 δ 10. (K r takes the value above since B r has magnitude λ k 3 λr 3k 1, where k is the integral part of r/3.) where Note that (3.1) can be rewritten as b 0 = 1, b 1 = b 2 31/3! = α 2 3/6, f 2 /φ = b 2 = b 2 42/4! + b 2 62/6! = α 2 4/24 + 5α 4 3/36, b r n r, r=0 b 3 = b 2 53/5! + 2b 62 b 64 /6! + b 2 73/7! + b 2 93/9!, b 3 = α 2 5/120 + α 2 3α 6 / α 2 3α 2 4/ α 6 3/162, b 4 = b 2 64/6! + 2b 73 b 75 /7! + b 2 84/8! + 2b 93 b 95 /9! + b 2 10,4/10! + b 2 12,4/12!, b 5 = b 2 57/7! + 2b 84 b 86 /8! + ( 2b 93 b b 94 b 96 + b 2 95) /9! + 2b10,4 b 10,6 /10! + b 2 11,5/11! +2b 12,4 b 12,6 /12! + b 2 13,5/13! + b 2 15,5/15!. So, to get b 5 we also need b 13,5 and b 15,5. We now show that b 3j,j = (α 3 /6) j (3j)!/j!, b 3j+1,j+1 = (α 3 /6) j 1 (α 4 /24) (3j + 1)!/(j 1)!, [ ] b 3j+2,j+2 = (α 3 /6) j 1 (α 5 /120) + (j 1) (α 3 /6) j 2 (α 4 /12) 2 /8 (3j + 2)!/(j 1)!, (2.12) so that b 13,5 = α 3 3 α 4 and b 15,5 = α 5 3. The last terms in b r come from the last term in (2.10), that is for k = K r. So, b 3j,j δ j = B j,j (η)(3j)!/j!, b 3j+1,j+1 δ j+1 = B j+1,j (η)(3j + 1)!/(j + 1)!, b 3j+2,j+2 δ j+2 = B j+2,j (η)(3j + 2)!/(j + 2)!, where B j,j (η) = η j 1, B j+1,j(η) = j(j + 1)η j 1 1 η 2 /2, B j+2,j (η) = j(j + 1)(j + 2)[η j 1 1 η 3 /6 + (j 1)η j 2 1 η2 2/8], η 1 = κ 3 /6 = α 3 δ/6 and η 2 = κ 4 /12 = α 4 δ 2 /12. So, we obtain (2.12). 6

8 Example 2.2 Suppose that X 0 is gamma with mean γ. Its rth cumulant is (r 1)!γ. Its standardized form X = (X 0 γ)/ γ has rth cumulant (r 1)!γ 1 r/2 I(r 2). Set s = t/ γ. Then k r = (r 1)!γ 1 r/2 I(r 3), S(t) = γ s r /r = γ [ ln(1 s) + s + s 2 /2 ], r=3 exp {S(t)} = exp { γ ( s + s 2 /2 )} (1 s) γ, B r = γ r/2 r! ( γ) a ( γ/2) b [γ] c /a!b!c!, a+2b+c=r where [γ] c = γ(γ + 1) (γ + c 1). However, a simpler method is to note that this is a particular case of Example 2.1 with n = γ and l r the rth cumulant of an exponential distribution with unit mean, that is l r = α r = (r 1)!. Substituting this we see that B r is given by (2.11) with b r,r 2 = (r 1)!, b 62 = 40, b 73 = 420, b 84 = 3948, b 93 = 2240, b 95 = 38304, b 10,4 = 50400, b 10,6 = , b 11,5 = , b 11,7 = , b 12,4 = 61600, b 12,6 = and b 12,8 = In this form the coefficients tend to infinity. When divided by r! they tend to zero. Note (2.11) explains the behavior of B k noted in Stuart and Ord (1987, Example 6.3, page 229). 3 An expression for (f/φ) r φ By (2.5), we have a form of Parseval s identity: This can be written d 2 = (f φ) 2 /φ = f 2 /φ = k=3 k=0 Bk 2 /k!. (3.1) Bk 2 /k! = κ2 3/3! + κ 2 4/4! + Bk 2 /k!. Jones and Sibson (1987) proposed the double approximation J(f) d 2 /2 (κ 2 3 /3! + κ 2 4 /4!)/2 as a criterion for projection pursuit. What they are really approximating is d 2/2 = ( f 2 /φ 1)/2. The differential entropy and negentropy of f are defined by H(f) = f ln f f(x) ln f(x)dx and J(f) = H(φ) H(f). See Hyvarinen et al. (2001, Section 5.4). We shall see below that the negentropy is the Kulback-Liebler divergence of f from φ, J(f) = f ln(f/φ). (3.2) So, J(f) > 0 unless f = φ almost everywhere. Jones and Sibson (1987) proposed negentropy as a criterion for projection pursuit. It has been adapted by others: see, for example, Section 5.6 of Hyvarinen et al. (2001). k=5 7

9 Theorems 3.1, 3.2 and 3.3 give methods for obtaining d r = (f/φ 1) r φ = ɛ r φ, D r = (f/φ) r φ = (ɛ + 1) r φ. (3.3) These are equivalent since r ( ) r d r = D k ( 1) r k = ( 1) r 1 (r 1) + k k=0 r ( ) r r D r = d k = 1 + k k=0 k=2 r k=2 ( ) r D k ( 1) r k, k ( ) r d k. (3.4) k For example, d 3 = D 3 3D and D 3 = 1 + 3d 2 + d 3. Theorem 3.1 follows by (2.7). Theorem 3.3 follows by Theorem 8.1 given below. Theorem 3.2 gives tools to calculate the expressions in Theorem 3.1. Its proof is given in Appendix B. Theorem 3.1 We have d r = B j1 B jr L j1,...,j r /j 1! j r!, (3.5) D r = j 1,...,j r=3 j 1,...,j r=0 B j1 B jr L j1,...,j r /j 1! j r!, where L j1,...,j r = H j1 H jr φ = E H j1 (N 0 ) H jr (N 0 ) (3.6) = E (N 0 + in 1 ) j1 E (N 0 + in r ) jr (3.7) by (2.1), where N 0, N 1,... are independent standard normal random variables. It follows from (3.7) that L j1,...,j r = 0 if the total order j j r is odd. By (2.5), L j1,j 2 = j 1!δ j1 j 2. Appendix A tables L j1,...,j r of total order 2J = j j r up to J = 8. It also provides a MAPLE program to compute any other value. Theorem 3.2 We have L j1,j 2,j 3 = 3 i=1 j i!/ (J j i )! at 2J = 3 i=1 j i, (3.8) L j1,...,j r = 0 if and only if j j r < 2 max j k, (3.9) k=1 L j1,...,j r 1,J = J! when j j r 1 = J, (3.10) L j1,...,j r 1,J 1 = (J 1)! j a j b when j j r 1 = J 1. (3.11) 1 a<b<r r In particular, [ 2 ] L j1,j 2,j 1 +j 2 2n = j k!/ (j k n)! k=1 (j 1 + j 2 2n)!/n!, L 2j,2j,2j = ((2j)!/j!) 3, 8

10 and d 3 = j 1,j 2,j 3 =3 {B j1 B j2 B j3 / (J j 1 )! (J j 2 )! (J j 3 )! : 2J = j 1 + j 2 + j 3 even}. Theorem 3.3 Suppose a Taylor series expansion is available for f/φ, say f(x)/φ(x) = f j x j /j! = Then, in terms of the normal moments, (2.4), we have D r = where j 1,...,j r=0 E rk = ν j1 + +j r c j1 c jr = j 1 + +j r=k and [r] j = r!/(r j)!. c j1 c jr = c j x j. ν 2k E r,2k = E r0 + E r2 + 3E r4 + 15E r6 +, k=0 k ( ) r c r j 0 B kj (c) = j k B kj (f)c r j 0 [r] j /k! Note B kj (c) is the partial ordinary Bell polynomial tabled in Comtet (1974, page 309) to k = 10 and as in Section 2, B kj (c) is the partial exponential Bell polynomial. The first few E rk are E r0 = c r 0 = f0 r, ( ) r E r2 = 1 c r 1 0 c 2 + ( ) r c r 2 0 c = [ rf0 r 1 f 2 + r(r 1)f0 r 2 f1 2 ] /2!, ( ) ( ) ( ) r r E r4 = c0 r 1 c 4 + c r 2 ( 0 2c1 c 3 + c 2 ) r = [ rf0 r 1 f 4 + r(r 1)f0 r 2 ( 4f1 f 3 + 3f2 2 )] /4! c r 3 0 3c 2 1c [ r(r 1)(r 2)f r 3 0 6f 2 1 f 2 + [r] 4 f r 4 0 f 4 1 ] /4!. ( ) r c r 4 0 c For r = 2, 3,..., set D r = f r /φ, σ = (r 1) 1/2, τ = (r 1) 1/2 and X 0 = X/σ. So, f 0, the density of X 0, satisfies f(y) = σ 1 f 0 (y/σ). Since φ(τy) r 1 = φ(0) r 2 φ(y), (3.12) D r = φ(0) r 2 f(y) r φ(τy) r 1 dy = φ(0) r 2 τ r 1 D r (f 0 ), (3.13) where D r (f) = D r of (3.3). But this was given by (3.4) in terms of d r (f) = d r of (3.5) and also by (3.4), when f is standardized to have zero mean and unit variance. If instead f is standardized to have zero mean and variance r 1, then f 0 has zero mean and unit variance, so that D r is given by (3.13) in terms of D r (f 0 ), which is given by (3.4) in terms of d r (f 0 ) = d r of (3.5) with f replaced by f 0, that is in terms of {B j0 }, where B j0 = B j (κ 0 ), B j (κ) = B j and κ r0 = κ r /σ r. So, B j0 = B j /σ j. In terms of f 0 of zero mean and unit variance, this gives an expression for f 0 (y) r /φ(σy)dy = σ r 1 f r /φ, but not for f r 0 /φ. 9

11 4 Expressions for differential entropy and negentropy Theorem 4.1 gives expressions for the differential entropy and negentropy of f. Theorem 4.1 Let g(ɛ) = (1 + ɛ) ln(1 + ɛ) = ɛ + ɛ r w r r=2 and w r = ( 1) r / ( r 2 r ). (4.1) Then, the differential entropy and negentropy of f are given by H(f) = f ln φ g(ɛ)φ = H(φ) J(f), (4.2) J(f) = g(ɛ)φ = d 1 + w r d r = d 2 /2 + w r d r. (4.3) Proof: Set c = ln φ(0) = ln(2π)/2. Since ln φ(x) = c 1/2 H 2 (x)/2, by (2.3), f ln φ = c 1/2 B 2 = c 1/2 = φ ln φ = H(φ) (4.4) does not depend on f. So, (3.2) holds as claimed, that is, the negentropy is the Kulback- Liebler divergence of f from φ. In terms of ɛ of (2.2), r=2 f ln f = f ln φ + g(ɛ)φ. Hence, (4.2) and (4.3) follow since d 1 = 0 by (2.5). By Theorem 4.1, our expression (3.5) for d r also gives us H(f) and J(f). Further, (3.6) and (4.3) give our exact expression for negentropy. It can be truncated to give a more accurate method of projection pursuit than that of Jones and Sibson (1987). We note in passing that H j1 H jr = k=0 L k,j 1,...,j r H k. r=3 5 Expressions for f r in terms of cumulants Let X be a random variable on R with mean µ, rth cumulant κ r, and density f. Let f 0 be the density of X 0 = (X µ)/σ, where σ = κ 1/2 2, that is f(x) = σ 1 f 0 ((x µ)/σ). Then for any constant γ, f γ = σ 1 γ f γ 0. So, we may without loss of generality assume that µ = 0 and σ = 1. Theorem 5.1 gives formulas for f r in terms of cumulants for r = 2, 3,.... Theorem 5.1 We have f 2 = 2 1 π 1/2 G 2 k /k!, k=0 10

12 where G k = 2 k/2 B k+2i /( 4) I I! I=0 and B r is given by (2.6), (2.9) or (2.10). More generally, for r = 2, 3, 4,..., we have f r = r 1/2 φ(0) r 1 D r (G), (5.1) where D r (G) = and L j1,...,j r is given by (3.7). The first few G k are j 1,...,j r=0 G j1 G jr L j1,...,j r /j 1! j r!, (5.2) G k = r k/2 B k+2i λ I /I!, λ = ( r 1 1 ) /2, (5.3) I=0 G 0 = 1 + κ 4 /32 B 6 /384 + B 8 /6144, G 1 = 2 1/2 ( κ 3 /4 + κ 5 /32 B 7 /384 + ), G 2 = 2 1 ( κ 4 /4 + B 6 /32 B 8 /384 + ), G 3 = 2 3/2 (κ 3 κ 5 /4 + B 7 /32 B 9 /384 + ). Also D 2 (G) = k=0 G2 k /k!. Compare the equations of Theorem 5.1 with (3.1) and (3.5). The proof of Theorem 5.1 is as follows. Proof of Theorem 5.1: For γ > 0 set τ = γ 1/2, G(y) = f(τy)/φ(τy) and D γ (G) = G γ φ. By (3.12) with r replaced by γ + 1, f γ = τ f(τy) γ dy = τφ(0) γ 1 D γ (G). Note G(y) has Fourier expansion where since G k = G(y) = G k H k (y)/k!, k=0 H k Gφ = H k (y)f(τy)φ (τ 0 y) dy/φ(0) φ(y)/φ(τy) = φ (τ 0 y) /φ(0), (5.4) where τ 2 0 = 1 γ 1, but we shall not use this. By (2.2), G k = B j G kj /j!, 11

13 where G kj = H k (y)h j (τy)φ(y)dy. We shall obtain a formula for G kj via a rather nice result: when a 2 b 2 = 1, H j (ay) = E [a (y + in 0 ) + bn 1 ] j (5.5) = b j E Hj (a (y + in 0 ) /b) = a j E H j (y + bn 0 /a) = j ( ) j a k H k (y) E (bn 0 ) j k, k (5.6) k=0 where a, b may be complex and Hj is the modified Hermite polynomial of (B.5). Note (5.5) holds since LHS (5.5) t j /j! = exp ( tay t 2 /2 ) = RHS (5.5) t j /j!. The three other expressions follow from (2.1), (B.5), and the binomial expansion of the right hand side of (5.5). By (5.6) with a = τ, b = iτ 0 and τ 0 of (5.4), G kj /j! = τ k E (τ 0 N 0 ) j k /(j k)!. This is zero unless j k = 2I, a non-negative even integer. So, (5.3) follows using (2.4). So, for r = 2, 3, 4,..., f r is given by (5.1), (5.2) and (5.3). Note that we have not used the standardization µ = 0 and σ = 1. This merely makes for simpler B j. 6 Expressions for (f/f 0 ) r f 0 for r = 2, 3,... Theorem 6.1 gives expressions for (f/f 0 ) r f 0 for r = 2, 3,..., where f 0 is a given density. Theorem 6.1 Write the Fourier expansion, (2.2), as f(x)/φ(x) = B jh j (x), where B j = B j/j!. Then, f r /f 0 = B j 1 B j r H j1 H jr φ r /f 0 j 1,...,j r=0 when this converges. Alternatively, construct a complete of functions h 0, h 1,... which are orthonormal with respect to f 0, so f/f 0 = b j h j, where b j = fh j in L 2 (f 0 ). Then, and (f/f 0 ) r f 0 = f 2 /f 0 = j 1,...,j r=0 b 2 j b j1 b jr h j1 h jr f 0. 12

14 7 Kulback-Liebler divergence In Section 4, we gave expressions for the negentropy, that is the Kulback-Liebler divergence of f from φ. Theorem 7.1 gives expressions for f ln(f/f 0 ), the Kulback-Liebler divergence of f from an arbitrary density f 0. Theorem 7.1 Let g = (ln f 0 )φ. Suppose that (f 2 +g 2 )/φ < and (ln f 0 ) 2 φ <. Then f ln(f/f0 ) = H(f) A, where A = fg/φ is given by the proof, and H(f) = f ln f is given by (4.2) and (4.3). Proof: The Fourier expansion, (2.2), for f/φ and g/φ imply A = fg/φ = B k (f)b k (g)/k!, k=0 where B k (f) = H k f = B k of (2.3). Hence, the result. 8 Entropy and negentropy for estimates Truncation of a Fourier series does not give an order of magnitude of the remainder. For example, how can one evaluate the accuracy of the approximation f 2 /φ 1+κ 2 3 /3!+κ2 4 /4! for a standardized random variable? Implicit in this approximation and the other approximation of Jones and Sibson (1987), namely J(φ) (f 2 /φ 1)/2, is the assumption that κ r decreases in magnitude as r increases. There is an important case when this assumptions does hold. By (2.11) for X a standardized sample mean, for r 3, κ r n 1 r/2 and B r n Kr/2, where K r as r (although not monotonically) so that they both κ r and B r tend to zero. The same magnitudes hold for X = ( θ E θ)/var( θ) 1/2 = Y n say, if κ r ( θ) n 1 r for r 1. (This is known as the Cornish-Fisher assumption.) Suppose then that θ is an estimate of say θ in R with cumulants satisfying the extended Cornish-Fisher assumption, that is, they have the standard asymptotic expansions ) κ r ( θ = i=r 1 a ri n i (8.1) for r 1, where a 10 = θ. This holds, for example, by Withers (1983) for θ = T (F n ) a smooth functional of the empirical distribution F n of a random sample of size n. We assume that the cumulant coefficients a ri are bounded as n increases and that a 21 is bounded away from zero. Let f = p n be the density of the asymptotically standardized form Y n = (n/a 21 ) 1/2 ( θ θ), Y n is only approximately standardized since except for the case of a sample mean, EY n = O(n 1/2 ) and var(y n ) = 1 + O(n 1 ). So, for r 1, X = Y n has rth cumulant κ r = i=r 1 A ri n i, where A 10 = 0 and otherwise A rj = a rj /a r/2 21. We could proceed as before using the Gram- Charlier series. However, there is a closely related alternative route. By Withers (1984), 13

15 the Edgeworth expansion of Cornish and Fisher (1937) and Fisher and Cornish (1960) for the distribution of Y n, reduces to P n (x) = P (Y n x) = Φ(x) φ(x) n r/2 h 0r (x), where Φ(x) is the standard normal distribution function. Note h 0r (x) is a polynomial of degree 3r 1 in x and a polynomial in the coefficients {A rj }. The kth derivative of the distribution of Y n is P (k) n (x) = ( 1) k 1 φ(x) r=1 n r/2 h kr (x) for k 0, where h k0 (x) = H k 1 (x), H 1 (x) = Φ(x)/φ(x) and r=0 h kr (x) = {P rj H k+j 1 (x) : 1 j 3r, r j even } is a polynomial in x of degree 3r + k 1 for r 1. Note P rj is a polynomial in the {A rj } given by Appendix A of Withers and Nadarajah (2010) up to r = 4. For example, h kr (x) is given for r = 1, 2 by P 11 = A 11, P 13 = A 32 /6, P 22 = ( A 12 + A 2 11) /2, P 24 = (A A 11 A 32 ) /24, P 26 = A 2 32/72. (8.2) So, the density of Y n can be expanded as { } p n (x) = φ(x) 1 + n k/2 h 1k (x), p n /φ = k=1 C j H j, where C j = k j/3 { } P kj n k/2 : k j even, P k0 = δ k An expression for (p n /φ) r φ For r = 2, 3,... and L j1,...,j r of (3.6), (p n /φ) r φ = C j1 C jr L j1,...,j r = C(rk)n k/2 = C(r, 2k)n k, (8.3) j 1,...,j r=0 k=0 k=0 where C(rk) = c k1,...,k r, (8.4) k 1 + +k r=k,k 1 0,...,k r 0 and c k1,...,k r = {P k1 j 1 P krjr L j1,...,j r : k i j i even, i = 1,..., r} 0 j i 3k i, i=1,...,r 14

16 since C(rk) = 0 for k odd, since k j 1 j r is even and L j1,...,j r = 0 when j j r is odd. For example, C(1k) = c k = δ k0, c k1 k 2 = 3 min(k 1,k 2 ) {j!p k1 jp k2 j : k 1 j even}. (8.5) Set c 1 2 = c 11, c = c 112, c = c 1122 and so on. So, the first four coefficients needed in (8.3) are C(r0) = c 0 r = P00 r = 1, ( ) r C(r2) = c 2 1 2, ( ) ( ) r r C(r4) = (2c 13 + c 2 2 2) + 3 c ( ) r c Theorem 8.1 states formally a result which will give us the general coefficient in (8.3) in two forms. The result appears to be new and has many other applications. Theorem 8.1 Set N + = {0, 1, 2,...}. Suppose that for r 1, c k1,...,k r = f k1,...,k r /k 1! k r! is any real symmetric function on N r +. Then for k = 0, 1,..., C(rk) defined by (8.4) is given by C(rk) = k ( ) r D(rkj) = (r!/k!) j k F (rkj), (8.6) where D(rkj) is the partial ordinary Bell polynomial Bkj (x) with x n 1 1 xn 2 2 replaced by c 0 r j 1 n 1 2 n 2, and F (rkj) is the partial exponential Bell polynomial Bkj (x) with x n 1 1 xn 2 2 replaced by f 0 r j 1 n 1 2 n 2 /(r j)!. The polynomials in Theorem 8.1 are defined by its proof. For example, D(rk0) = c 0 r jδ k0 since B k0 (x) = δ k0. The important thing is that these polynomials are tabled in Comtet (1974, pages ) to k = 10, 12. For example, B 63 (x) = 3x 2 1 x 4 + 6x 1 x 2 x 3 + x 3 2 gives D(r63) = (3c c 123 +c 2 3)c0 r 3 ; B 63 (x) = 15x 2 1 x 4 +60x 1 x 2 x 3 +15x 3 2 gives F (r63) = (15f f f 2 3)f0 r 3 /(r 3)!. Similarly, Bk,1 (x) = x k and B kk (x) = x k 1 give D(rk1) = c r 1 0 c k and D(rkk) = c0 r k c 1 k. For k > 10, 12 these polynomials can be obtained from recurrence formulas. Proof of Theorem 8.1 Note that C(rk) = π ( ) r λ(π)c π, (8.7) π where the summation is over all partitions π of k, π is the number of elements of π, and λ(π) is the number of distinct permutations of the elements of π: ( ) λ (1 n 1 n1 + n 2 n2 2 + ) = = (n 1 + n 2 + )!/n 1!n 2!. n 1, n 2,... 15

17 So, we can rewrite (8.7) as C(rk)/r! = n 1 0, n 2 0,..., 1n 1 +2n 2 + =k c 0 r n 1 n 2 1 n 1 2 n 2 / (r n 1 n 2 )!n 1!n 2!. However, it is simpler to write it as the first form in (8.6) with D(rkj) = j! {c 0 r j 1 n 1 2 n 2 /n 1!n 2! : n 1 + n 2 + = j, 1n 1 + 2n 2 + = 2k}. But this is just D(rkj) of the theorem. The second form in (8.6) now follows since c j f j /j! implies B kj (c)/j! = B kj (f)/k!. This concludes our proof. In our case c 0 = 1 and for r 2, c k1 k r 1 0 = c k1 k r 1 so D(kj) = D(rkj) does not depend on r. Also we exclude j = 1 in (8.6) since c k = 0 for k > 0. We have already given C(r, 2k) above for k = 0, 1, 2. By Theorem 8.1, since c 0 = 1, the next three C(r, 2k) are given by (8.6) and k = 3 : D(62) = 2c c 24 + c 3 2, D(63) = 3c c c 2 3, D(64) = 4c c , D(65) = 5c 1 4 2, D(66) = c 1 6. k = 4 : D(82) = 2c c c 35 + c 4 2, D(83)/3 = c c c c c 23 2, D(84) = 4c c c c c 2 4, D(85)/5 = c c c , D(86) = 6c c , D(87) = 7c 1 6 2, D(88) = c 1 8. k = 5 : D(10, 2) = 2c c c c 46 + c 5 2, D(10, 3)/3 = c c c c c c c c 3 2 4, D(10, 4) = 4c c c c c c c c c , D(10, 5) = 5c c c c c c c 2 5, D(10, 6) = 6c c c c c , D(10, 7)/7 = c c c , D(10, 8) = 8c c , D(10, 9) = 9c 1 8 2, D(10, 10) = c

18 Note C(r, 2k) depends on some L j1,...,j r up to J = 3k. For k = 3 the L j1,...,j r needed are given in Appendix A. So, one obtains the c functions needed for C(r, 2k) up to k = 3 as follows, excluding c k1 k 2 given by (8.5): and for k = 3 : for k = 2 : c = 2 [ P11P P 11 P 13 (P P 24 ) + 18P13 2 (P P P 26 ) ], c 1 4 = 3 ( P P11P P11P P 11 P P13 4 ) c = 2P 2 11P P 11 P 13 (P P 44 ) + 36P 2 13 (P P P 46 ), c 123 = 2P 11 P 22 (P P 33 ) + 24P 11 P 24 (P P P 39 ) +6!P 11 P 26 (P P 37 ) + 6P 13 P 22 (P P P 35 ) +24P 13 P 24 (P P P P 37 ) +6!P 13 P 26 (P P P P 39 ), c 2 3 = 8 ( P P 2 22P P 22 P P 22 P 24 P P P P 2 24P 26 ) +8 ( 48600P 24 P P 3 26), c = 3P 3 11 (P P 33 ) + 18P 2 11P 13 (P P P 35 ) +18P 11 P 2 13 (7P P P P 37 ) +27P 3 33 (6P P P !P !P 39 /27), c = 10P P P P 22 P P 22 P ( P P P P 22 P P 22 P P 24 P 26 ) +372P !P P !P 22 P !P 22 P !P 24 P 26, c = 12 ( P P 3 11P P 2 11P P 11 P P 4 13) P22 +4! ( P P 3 11P P 2 11P P 11 P P 4 13) P ! ( P11P P11P P 11 P P13) 4 P26, c 1 6 = 15 ( P P11P P11P P11P P11P ) +15 ( P 11 P P13) 6. The P rj needed for C(r, 2k) up to k = 5 are as follows: k = 1 : P 11, P 33, k = 2 : P 31, P 33, P 22, P 24, P 26, k = 3 : P 51, P 53, P 42, P 44, P 46, P 35, P 37, P 39, k = 4 : P 71, P 73, P 62, P 64, P 66, P 55, P 57, P 59, P 48, P 4,10, P 4,12, k = 5 : P 91, P 93, P 82, P 84, P 86, P 75, P 77, P 79, P 68, P 6,10, P 6,12, P 5,11, P 5,13, P 5,15. So, C(r, 2k) needs 3k 1 new P rj apart from those needed for C(r, 2k 2). For example, C(r2) and C(r4) are given in terms of the A rj by the P rj of (8.2) and P 31 = A 12, P 33 = (A A 11 A 22 + A 3 11 )/3!. Note that C(r6) is given in terms of the A rj by the P rj of Appendix A of Withers and Nadarajah (2010) and P 51 = A 13, P 53 = A 34 /6 + A 11 A 23 /2 + 17

19 A 22 A 12 /2 + A 2 11 A 12/2. Note that C(r8) is given in terms of the A rj by the P rj of Appendix A of Withers and Nadarajah (2010) and P 71 = A 14, P 73 = A 35 /6 + A 11 A 24 /2 + A 13 A 22 /2 + A 12 A 23 /3 + A 2 11A 15 /2 + A 11 A 2 12/2, P 62 = A 24 /2 + A 11 A 13 + A 2 12/2, P 64 = A 45 /24 + A 32 A 13 /6 + A 11 A 34 /6 + A 22 A 23 /4 + A 12 A 33 /6 + A 2 11A 23 /4 +A 11 A 22 A 12 /2 + A 3 11A 12 /6, P 66 = A 66 /6! + A 11 A 55 /5! + A 32 A 34 /36 + A 22 A 44 /48 + A 43 A 23 /48 + A 2 33/72 +A 12 A 54 /5! + A 2 11A 44 /4! + A 11 A 32 A 23 /24 + A 11 A 22 A 33 /12 + A 11 A 43 A 12 /4! +A 12 A 22 A 32 /12 + A 3 22/48 + A 2 11A 2 22/16 + A 4 11A 22 /48 + A 6 11/6!, P 55 = A 55 /5! + A 11 A 44 /4! + A 32 A 23 /12 + A 22 A 33 /12 + A 43 A 12 /4! + A 2 11A 33 /12 +A 11 A 32 A 12 /6 + A 11 A 2 22/8 + A 3 11A 22 /12 + A 5 11/5!, P 57 = A 76 /7! + A 11 A 65 /6! + A 32 A 44 /144 + A 22 A 54 /240 + A 43 A 33 /144 + A 2 11A 54 /5! +A 11 A 32 A 33 /18 + A 11 A 32 2/36 + A 11 A 22 A 43 /48 + A 2 22A 32 /48 + A 3 11A 43 /6.4! +A 2 12A 22 A 32 /24 + A 4 11A 32 /144, P 59 = A 32 A 65 /3!6! + A 43 A 54 /4!5! + A 32 A 54 /6! + A 2 32A 33 /432 + A 22 A 32 A 43 /288 +A 2 11A 32 A 43 /288 + A 2 11A 2 32/432. In the case r = 2, where p 2 n/φ = j!cj 2 = C(2, 2k)n k, k=0 C(2, 2k) = D(2, 2k) = and c k1 k 2 is given by (8.5). k 1 +k 2 =2k k 1 c k1 k 2 = 2 c i,2k i + c k 2 The simplest examples have already been covered by Examples 2.1 and 2.2. i=1 8.2 Entropy and negentropy for p n Theorem 8.2 gives expressions for entropy and negentropy of p n. Its proof uses results in Withers and Nadarajah (2010). Theorem 8.2 We have p n ln p n = n r I 2r, J (p n ) = n r I 2r, where I 2r are determined by the proof. r=0 r=1 18

20 Proof: By Withers and Nadarajah (2010), ln p n (x) = where n r/2 q r (x), (8.8) r=0 r+2 q r (x) = {q rj H j (x) : r j even}. So, q r (x) is a polynomial in x of degree only r + 2. q 1 (x) = h 11 (x). The coefficients needed for r 4 are For example, q 0 (x) = ln φ(x) and q 00 = ln φ(0) 1/2 = (1 + ln(2π)) /2, q 02 = 1/2, q 11 = P 11 = A 11, q 13 = P 13 = A 32 /6, q 20 = A 2 32/12 A 2 11/2, q 22 = A 2 32/4 + (A 22 A 32 A 11 ) /2, q 24 = A 43 /24 A 2 32/8, q 31 = A 43 A 32 /6 + A 3 32/2 + (A 32 A 11 A 22 ) A 32 /2 + ( A 32 A 2 ) 11/2 A 22 A 11 + A 12, q 33 = A 43 A 32 /4 + 7A 3 32/12 + (A 33 A 43 A 11 ) /6 + (A 32 A 11 A 22 ) A 32 /2, q 35 = A 54 /120 A 43 A 32 /12 + A 3 32/8, q 40 = A 2 43/48 + A 43 A 2 32/4 + 7A 4 32/16 A 33 A 32 /6 + A 2 32 (A 22 A 32 A 11 ) /2 A 2 22/4 +A 11 (A 43 A 32 /6 A 12 ) + A 32 A 11 (A 22 A 32 A 11 /2) /2 +A 2 11 (A 22 A 32 A 11 /3) /2, q 42 = A 54 A 32 / A 43 A 2 32/ A 4 32/8 + A 43 A 22 /4 A 33 A 32 /2 +A 2 32 (A 22 7A 32 A 11 ) /4 A 2 22/2 + ( A 43 A 2 ) 11/2 A 33 A 11 A 32 A 12 + A 23 /2 +3A 11 A 43 A 32 /4 + 3A 32 A 11 (A 22 A 32 A 11 /2) /2, q 44 = A 54 A 32 /12 + 6A 2 43/ A 43 A 2 32/ A 4 32/16 + A 44 /24 + A 43 A 22 /6 A 33 A 32 /4 + A 2 32 (A 22 5A 32 A 11 ) /8 + A 11 ( A 54 /24 + 5A 43 A 32 /12), q 46 = A 65 /720 A 54 A 32 /48 + A 2 43/72 + A 43 A 2 32/4 + 7A 4 32/48. 19

21 This form of q r substituting is obtained from the form given in Withers and Nadarajah (2010) by 3x 4 12x = 3H 4 + 6H 2 + 2, x 5 7x 3 + 8x = H 5 + 3H 3 + 2H 1, 3x 5 16x x = 3H H H 1, x 3 2x = H 3 + H 1, x 6 11x x 2 7 = H 6 + 4H 4 + 4H 2, 2x 6 21x x 2 12 = 2H 6 + 7H 4 3, 7x 6 59x x 2 25 = 7H H H , 7x 6 48x x 2 15 = 7H H H , 2x 4 9x = 2H 4 + 3H 2, 3x 4 12x = 3H 4 + 6H 2 + 2, 5x 4 16x = 5H H 2 + 4, 2x 2 1 = 2H 2 + 1, 5x 4 21x = 5H 4 + 9H 2 + 2, 3x 2 2 = 3H 2 + 1, then collecting terms. So, putting P 0k = δ 0k, p n ln p n = n r/2 I r, r=0 where r I r = φh 1j q r j = r k!p jk q r j,k = q r0 + k r j=1 min(3j,r j+2) k=0 k!p jk q r j,k. Also h 1j and q j have parity j, that is, they are odd or even functions for j odd or even. So, I r = 0 for r odd. Hence, the theorem follows by (4.4). The first few I 2r are I 0 = q 00 = (1 + ln(2π)) /2 = I 2 = q 20 + ( q !q 2 13) + 2!P22 q 02 = A 22 /2 + A 2 32/12, φ ln φ = H(φ), I 4 = q 40 + (P 11 q !P 13 q 33 ) + 2!P 22 q 22 + (P 31 q !P 33 q 13 ) + 2!P 42 q 02 = A 3 11A 32 /6 + A 2 ( 11 A22 /2 + A 22 A 32 /2 + A 2 32/4 ) ( +A 11 A 2 32 /2 A 32 A 43 /6 ) + A 2 22/4 A 23 /2 +49A 4 32/12 A 2 43/48 + A 32 A 33 /6 A 22 A 2 32/4. Since J(p n ) 0, we have the unusual inequality 0 12I 2 = A A 22, (8.9) 20

22 that is, for large n, [ ) ) ] [ ( θ)] 2 6 var ( θ /avar ( θ 1 skewness, where avar( θ) = a 21 /n is the asymptotic variance of θ. Example 8.1 Suppose that θ is the mean of a random sample in R with rth cumulant l r. Set α r = l r /l r/2 2. Then (8.1) holds with A rj = α r δ i,r 1. So, I 2 = α3 2/12 and I 4 = 49α3 4/12 α2 4 /48. Example 8.2 Suppose that θ is the empirical variance of a random sample in R with rth moment µ r. Set λ r = µ r /µ r/2 2. Then (8.1) holds with a 21 = µ 4 µ 2 2 and A ri given by Withers (1983, Example 2, page 582): A 11 = 0, A 32 = ( λ 6 3λ λ 3 3) / (λ4 1) 3/2, A 22 = 2 (λ 4 2) / (λ 4 1), A 43 = ( λ λ 4 3λ λ 5 λ λ ) / (λ 4 1) 2. For this example, the inequality (8.9) can be written ( λ6 3λ λ 3 ) (λ4 3/2) 2 3, that is, ( µ6 3µ 4 µ 2 + 2µ 3 2 6µ 3 ) 2 ( µ 2 2 µ4 3µ 2 2/2 ) 2 3µ 6 2. We have checked that this inequality holds for uniform, gamma and binomial populations. 8.3 Expressions for (ln p n ) r φ Theorem 8.3 gives an expression for (ln p n ) r φ. Its proof follows directly from (8.8). Theorem 8.3 We have where and Q k1 k r = In particular, (ln p n ) r φ = J rk = n r/2 J rk, r=0 k 1 + +k r=k Q k1 k r q k1 q kr φ = {q k1 j 1 q krj r L j1,...,j r : 0 j 1 k 1 + 2,..., 0 j r k r + 2}. Q k1 k 2 = min(k 1,k 2 )+2 j!q k1 jq k2 j. If k is odd then J rk = 0 since then k i j i is odd for some i. 21

23 Appendix A: Coefficients L j1,...,j r Here, we give the coefficients L j1,...,j r defined by (3.6) of total order 2J = j j r up to J = 8. For brevity, we table l j1,...,j r = L j1,...,j r /j r! rather than L j1,...,j r. We write l = l and so on, and enclose a subscript in round brackets if over nine. For example, l 12 has r = 2 but l (12) has r = 1. r = 2 : l k1 k 2 = δ k1 k 2. r = 3 : see (3.8) : J = 2 : l = 1, J = 3 : l = 0, l 123 = 1, l 2 3 = 4, J = 4 : l = l 125 = 0, l 134 = l = 1, l 23 2 = 6, J = 5 : l = l 127 = l 136 = l = 0, l 145 = l 235 = 1, l 24 2 = 8, l = 9, J = 6 : l 1 2 (10) = l 129 = l 138 = l 147 = l = l 237 = 0, l 156 = l 246 = l = 1, l 25 2 = 10, l 345 = 12, l 4 3 = 72, J = 7 : l 1 2 (12) = l 12(11) = l 13(10) = l 149 = l 158 = l 2 2 (10) = l 239 = l 248 = l = 0, l 167 = l 257 = l 347 = 1, l 26 2 = 12, l 356 = 15, l = 16, l 45 2 = 120, J = 8 : l 1 2 (14) = l 12(13) = l 13(12) = l 14(11) = l 15(10) = l 169 = l 178 = l 268 = l 358 = l = 1, l 169 = l 2 2 (12) = l 23(11) = l 24(10) = l 259 = l 3 2 (10) = l 349 = 0, l 27 2 = 14, l 367 = 18, l 457 = 20, l 46 2 = 180, l =

24 r = 4 : J = 2 : l 1 4 = 3, J = 3 : l = 1, l = 5, J = 4 : l = 0, l = 1, l = 7, l = 8, l 2 4 = 30, J = 5 : l = l = 0, l = l = 1, l = 9, l 1234 = 11, l 13 3 = 54, l = 62, l = 12, J = 6 : l = l = l = l = 0, l = l 1236 = l = 1, l = 11, l 1245 = 14, l = 15, l = 96, l = 16, l = 106, l = 120, l 3 4 = 558, J = 7 : l 1 3 (11) = l 1 2 2(10) = l = l = l = l 1238 = l = 0, l = l 1247 = l = l = 1, l = 13, l 1256 = 17, l 1346 = 19, l = 150, l = 168, l = 20, l = 162, l = 21, l 2345 = 198, l 24 3 = 1152, l = 216, l = 1278, J = 8 : l 1 3 (13) = l 1 2 2(12) = l 1 2 3(11) = l 1 2 4(10) = l = l 12 2 (11), = l 123(10) = l 1249 = l = l = l 2 3 (10) = 0, l = l 1249 = l 1258 = l 1267 = l 1348 = l = l = 1, l = 15, l 1267 = 20, l 1357 = 23, l = 216, l = 24, l 1456 = 260, l 15 3 = 1800, l = 24, l = 230, l 2347 = 26, l 2356 = 296, l = 320, l = 2280, l = 3 3, l = 342, l = 2466, l = 2736, l 4 4 =

25 r = 5 : J = 3 : l = 6, J = 4 : l = 1, l = 9, l = 34, J = 5 : l = 0, l = 1, l = 12, l = 68, l = 13, l = 78, l 2 5 = 272, J = 6 : l = l = 0, l = l = 1, l = 15, l = 17, l = 114, l = 129, l = 18, l = 142, l = 666, l = 156, l = 740, J = 7 : l 1 4 (10) = l = l = l = 0, l = l = l = 1, l = 18, l = 21, l = 172, l = 22, l = 210, l = 1224, l = 23, l = 226, l = 246, l = 1470, l = 1638, l = 24, l = 264, l = 1592, l = 1776, l 23 4 = 7992, J = 8 : l 1 4 (12) = l 1 3 2(11) = l 1 3 3(10) = l = l (10) = l = l = 0, l = l = l = l = l = 1, l = 21, l = 25, l = 242, l = 27, l = 311, l = 336, l = 2400, l = 28, l = 330, l = 29, l = 380, l = 2766, l = 3072, l = 405, l = 3330, l = 18792, l = 30, l = 402, l = 2948, l = 428, l = 3552, l = 20088, l = 3852, l = 21924, l =

26 r = 6 : J = 3 : l 1 6 = 15, J = 4 : l = 10, l = 39, J = 5 : l = 1, l = 14, l = 75, l = 86, l = 302, J = 6 : l = 0, l = 1, l = 18, l = 123, l = 19, l = 153, l = 720, l = 168, l = 802, l = 896, l 2 6 = 3020, J = 7 : l = l = 0, l = l = 1, l = 22, l = 183, l = 24, l = 29, l = 261, l = 1566, l = 25, l = 280, l = 1698, l = 1896, l = 8550, l = 300, l = 2060, l = 9324, l = 2240, l = 10188, J = 8 : l 1 5 (11) = l 1 4 2(10) = l = l = 0, l = l = l = 1, l = 26, l = 255, l = 29, l = 347, l = 30, l = 399, l = 2916, l = 3240, l = 31, l = 422, l = 3110, l = 449, l = 3750, l = 21240, l = 4068, l = 23202, l = 32, l = 474, l = 4004, l = 4344, l = 24864, l = 27252, l 13 5 = , l = 500, l = 4640, l = 26668, l = 29268, l = r = 7 : J = 4 : l = 45, J = 5 : l = 15, l = 95, l = 336, J = 6 : l = 1, l = 20, l = 165, l = 181, l = 870, l = 974, l = 3292, J = 7 : l = 0, l = 1, l = 25, l = 255, l = 26, l = 297, l = 1812, l = 2025, l = 318, l = 2202, l = 9990, l = 2396, l = 10928, l = 11980, l 2 7 = 39504, J = 8 : l 1 6 (10) = l = 0, l = l = 1, l = 30, l = 365, l = 32, l = 443, l = 3282, l = 471, l = 3960, l = 22464, l = 33, l = 497, l = 4230, l = 4590, l = 26334, l = 28890, l = 524, l = 4904, l = 28260, l = 31044, l = , l = 5240, l = 33388, l = , l = 35940, l =

27 r = 8 : J = 4 : l 1 8 = 105, J = 5 : l = 105, l = 375, J = 6 : l = 21, l = 195, l = 945, l = 1060, l = 3594, J = 7 l = 1, l = 27, l = 315, l = 1935, l = 337, l = 2355, l = 10710, l = 2564, l = 11730, l = 12876, l = 42524, J = 8 l = 0, l = 1, l = 33, l = 465, l = 3465, l = 34, l = 521, l = 4470, l = 4851, l = 27900, l = 549, l = 5184, l = 29958, l = 32940, l = , l = 5540, l = 35448, l = , l = 38180, l = , l = , l 2 8 = r = 9 : J = 5 : l = 420, l = 630, J = 6 : l = 210, l = 1155, l = 3930, l = 6930, l = 7860, J = 7 : l = 28, l = 357, l = 2520, l = 2745, l = 12600, l = 13850, l = 45816, J = 8 : l = 1, l = 35, l = 546, l = 4725, l = 575, l = 5481, l = 31770, l = 34965, l = 5858, l = 37650, l = , l = 40576, l = , l = , l = r = 10 : J = 5 : l 1 10 = 945, J = 6 : l = 1260, l = 4305, J = 7 : l = 378, l = 2940, l = 13545, l = 14910, l = 49410, J = 8 : l = 36, l = 602, l = 5796, l = 33705, l = 6195, l = 40005, l = , l = 43140, l = , l = , l = r = 11 : J = 6 : l = 4725, J = 7 : l = 3150, l = 16065, l = 53340, J = 8 : l = 630, l = 6552, l = 42525, l = 45885, l = , l = , l = r = 12 : J = 6 : l 1 12 = 10395, J = 7 : l = 17325, l = 57645, J = 8 : l = 6930, l = 48825, l = , l = , l =

28 r = 13 : J = 7 : l = 62370, J = 8 : l = 51975, l = , l = r = 14 : J = 7 : l 1 14 = , J = 8 : l = , l = r = 15 : J = 8 : l = r = 16 : J = 8 : l 1 16 = In addition C(r, 6) of Section 8 needs the following l j1,...,j r with J = 9: l 9 2 = 1, l 3 2,12 = 0, l 369 = 1, l 6 3 = 4242, l = 0, l = 0, l = 4242, l = 1, l = 7704 = and l 3 6 = = These values were obtained using MAPLE. The following programme was used for r 5 and was modified in an obvious way for higher values. with(orthopoly); He:=proc(r) simplify(h(r,x/sqrt(2))/sqrt(2)^r,radical) end; h:=proc(j1,j2,j3,j4,j5) expand(simplify(he(j1)*he(j2)*he(j3)*he(j4)*he(j5))) end; m:= proc(n) (2*n)!/(n!*2^n) end; L:= proc(j1,j2,j3,j4,j5) subs(x^2=m(1), x^4=m(2), x^6=m(3), x^8=m(4), x^10=m(5), x^12=m(6), x^14=m(7), x^16=m(8),h(j1,j2,j3,j4,j5)) end; n:=16; for j1 from 0 to n do for j2 from j1 to n do for j3 from j2 to n do for j4 from j3 to n do for j5 from j4 to n do if j1+j2+j3+j4+j5 < 17 and (j1+j2+j3+j4+j5) mod (2)=0 then print((j1+j2+j3+j4+j5)/2, j1,j2,j3,j4,j5, L(j1,j2,j3,j4,j5)/j5! ) end if end do end do end do end do end do; quit; 27

29 Appendix B: Other methods for obtaining L j1,...,j r One method for obtaining L j1,...,j r needed for (3.5) is to expand the right hand side of (3.6) or the right hand side of (3.7) and use E N0 2r+1 = 0 and E N0 2r = 1 3 (2r 1) = (2r)!/2 r r!. This gives L j1,...,j r as r summations, a total of about j 1 j r /2 r terms. Theorem B.1 gives another method based on the generating function l(t) = l (t 1,..., t r ) = j 1,...,j r=0 L j1,...,j r t j 1 1 t jr r /j 1! j r! = L j t j /j! (B.1) say. Comparing this with (4.3) and (3.6), we see that J(f) is just l(t) with t j 1 1 t jr r replaced by w rb j1 B jr, where w 0 = w 1 = 0 and w r = w r of (4.1) for r 2. Theorem B.2 gives various tools of calculating L j in (B.1). For more tools, see Withers and McGavin (2006) and Withers and Nadarajah (2007). Theorem B.1 We have l (t 1,..., t r ) = exp(g), (B.2) where g = 1 j<k r t jt k. Proof: Set By (3.6), T r = r t r k, Q = N 0T 1 + i k=1 where g = T 2 1 /2 T 2/2. This proves (B.2). r N k t k. k=1 l (t 1,..., t r ) = E exp(q) = exp(g), Theorem B.2 For k = (k 1,..., k r ), set k = r j=1 k j. Fix J = 1, 2,.... Then, for j = 2J, we have L j /j! = 1/ n ij! : n i = j i, i = 1,..., r, (B.3) 1 i<j r where n i = j i n ij and n ji = n ij. By substituting n ir = j i n i, where n i = j i,r n ij for i = 1,..., r 1, we have L j /j! = 1/ r 1 ( n ij! ji n ) i! : n = J j r, (B.4) 1 i<j<r i=1 where n = 1 i<j<r n ij. For r = 4, (B.4) gives L j /j! = 1/n 12!n 13!n 23! (j 1 n 12 n 13 )! (j 2 n 12 n 23 )! (j 3 n 13 n 23 )!. n 12 +n 13 +n 23 =J j 4 In terms of the normal moments, ν j = E N j 0, l(t) = mn, where m = exp ( T 2 1 /2 ) = j 0 M(j)t j /j!, M(j) = ν j 28

30 and n = exp ( T 2 /2) = j 0 N(j)t j /j!, N(j) = i j ν j1 ν jr, so we have L k = I+J=k ( ) k M(I)N(J) = I, J Note that m has partial derivative I+2J=k ( ) k ν I, 2J I ( 1) J ν 2J1 ν 2Jr. m (k) = exp ( T 2 1 /2 ) H k (T 1), (B.5) where H r (x) = E(x + N 0 ) r is the modified Hermite polynomial. Proof: (B.3) follows by the multinomial expansion. Note that in (B.3) j i = n i i<j n ij = J. (B.4) follows by reducing the sum in (B.3) to the ( ) r 1 distinct nij, {n ij, 1 i < j < r}. Proof of Theorem 3.2: (B.3) implies (3.9). For r = 3, (B.4) gives (3.8). Suppose that j r = J. Then n ij = 0 for i < j < r and n ir = j i for i < r. This proves (3.10). Suppose that j r = J 1. Then n ij = 0 for i < j < r except for 1, say n ab, which equals 1. Also n ir = j i δ ia δ ib for i < r. So, (3.11) follows. An alternative proof of (3.8) is to use Leibniz s formula ( ) k (d/dx) k f 1 (x) f p (x) = f (j 1) 1 (x) f p (jp) (x), j 1 j p where f (k) (x) = (d/dx) k f(x). Set j 1 + +j p=k j = / t j, g j = j g = k j t k = T 1 t j, 2 and l (k) = l (k 1k 1 ) = 1 k 1 2 k2 l (t 1,..., t r ), [k] j = k(k 1)... (k j + 1) = k!/(k j)!. Then l (k 1) = g k 1 l (k 1k 2 ) = 1 exp(g), (B.6) min(k 1,k 2 ) l (k 1k 2 k 3 ) = k 1!k 2!k 3! exp(g) [k 1 ] j [k 2 ] j g k 1 j 1 g k 2 j 2 /j! exp(g), K 1,K 2,K 3 0 g K 1 1 gk 2 2 gk 3 3 K 1!K 2!K 3! 3 ( ( k K k j + K j ) /2 )!, where r = 3 and the last sum is restricted to k K k j + K j = 0, 2, 4,... for j = 1, 2, 3. Putting t = 0 and using gj k = δ jk at 0 gives (3.8). Higher order L j1,...,j r are more complicated. In fact, we have not been able to put l (k 1k 2 k 3 k 4 ) in symmetric form. However, (B.6) with k 1 = 1 gives the useful reduction formula L j1 +1,j 2,...,j r = j 1 L j1 1,j 2,...,j r + + j r L j1,...,j r 1,j r j=1

Solutions to the recurrence relation u n+1 = v n+1 + u n v n in terms of Bell polynomials

Solutions to the recurrence relation u n+1 = v n+1 + u n v n in terms of Bell polynomials Volume 31, N. 2, pp. 245 258, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam Solutions to the recurrence relation u n+1 = v n+1 + u n v n in terms of Bell polynomials

More information

Bias Reduction when Data are Rounded

Bias Reduction when Data are Rounded Bias Reduction when Data are Rounded Christopher S. Withers & Saralees Nadarajah First version: 10 December 2013 Research Report No. 12, 2013, Probability and Statistics Group School of Mathematics, The

More information

EXPANSIONS FOR QUANTILES AND MULTIVARIATE MOMENTS OF EXTREMES FOR HEAVY TAILED DISTRIBUTIONS

EXPANSIONS FOR QUANTILES AND MULTIVARIATE MOMENTS OF EXTREMES FOR HEAVY TAILED DISTRIBUTIONS REVSTAT Statistical Journal Volume 15, Number 1, January 2017, 25 43 EXPANSIONS FOR QUANTILES AND MULTIVARIATE MOMENTS OF EXTREMES FOR HEAVY TAILED DISTRIBUTIONS Authors: Christopher Withers Industrial

More information

New Approximations of Differential Entropy for Independent Component Analysis and Projection Pursuit

New Approximations of Differential Entropy for Independent Component Analysis and Projection Pursuit New Approximations of Differential Entropy for Independent Component Analysis and Projection Pursuit Aapo Hyvarinen Helsinki University of Technology Laboratory of Computer and Information Science P.O.

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

d 1 µ 2 Θ = 0. (4.1) consider first the case of m = 0 where there is no azimuthal dependence on the angle φ.

d 1 µ 2 Θ = 0. (4.1) consider first the case of m = 0 where there is no azimuthal dependence on the angle φ. 4 Legendre Functions In order to investigate the solutions of Legendre s differential equation d ( µ ) dθ ] ] + l(l + ) m dµ dµ µ Θ = 0. (4.) consider first the case of m = 0 where there is no azimuthal

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

1 Vectors and Tensors

1 Vectors and Tensors PART 1: MATHEMATICAL PRELIMINARIES 1 Vectors and Tensors This chapter and the next are concerned with establishing some basic properties of vectors and tensors in real spaces. The first of these is specifically

More information

Series Solutions. 8.1 Taylor Polynomials

Series Solutions. 8.1 Taylor Polynomials 8 Series Solutions 8.1 Taylor Polynomials Polynomial functions, as we have seen, are well behaved. They are continuous everywhere, and have continuous derivatives of all orders everywhere. It also turns

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

EXPANSIONS FOR FUNCTIONS OF DETERMINANTS OF POWER SERIES

EXPANSIONS FOR FUNCTIONS OF DETERMINANTS OF POWER SERIES CANADIAN APPLIED MATHEMATICS QUARTERLY Volume 18, Number 1, Spring 010 EXPANSIONS FOR FUNCTIONS OF DETERMINANTS OF POWER SERIES CHRISTOPHER S. WITHERS AND SARALEES NADARAJAH ABSTRACT. Let Aɛ) be an analytic

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Math Bootcamp 2012 Miscellaneous

Math Bootcamp 2012 Miscellaneous Math Bootcamp 202 Miscellaneous Factorial, combination and permutation The factorial of a positive integer n denoted by n!, is the product of all positive integers less than or equal to n. Define 0! =.

More information

Introduction to the Numerical Solution of IVP for ODE

Introduction to the Numerical Solution of IVP for ODE Introduction to the Numerical Solution of IVP for ODE 45 Introduction to the Numerical Solution of IVP for ODE Consider the IVP: DE x = f(t, x), IC x(a) = x a. For simplicity, we will assume here that

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

An idea how to solve some of the problems. diverges the same must hold for the original series. T 1 p T 1 p + 1 p 1 = 1. dt = lim

An idea how to solve some of the problems. diverges the same must hold for the original series. T 1 p T 1 p + 1 p 1 = 1. dt = lim An idea how to solve some of the problems 5.2-2. (a) Does not converge: By multiplying across we get Hence 2k 2k 2 /2 k 2k2 k 2 /2 k 2 /2 2k 2k 2 /2 k. As the series diverges the same must hold for the

More information

1 Basic Combinatorics

1 Basic Combinatorics 1 Basic Combinatorics 1.1 Sets and sequences Sets. A set is an unordered collection of distinct objects. The objects are called elements of the set. We use braces to denote a set, for example, the set

More information

Lecture 3: Central Limit Theorem

Lecture 3: Central Limit Theorem Lecture 3: Central Limit Theorem Scribe: Jacy Bird (Division of Engineering and Applied Sciences, Harvard) February 8, 003 The goal of today s lecture is to investigate the asymptotic behavior of P N (

More information

Convergence in Distribution

Convergence in Distribution Convergence in Distribution Undergraduate version of central limit theorem: if X 1,..., X n are iid from a population with mean µ and standard deviation σ then n 1/2 ( X µ)/σ has approximately a normal

More information

ASYMPTOTIC DISTRIBUTION OF THE MAXIMUM CUMULATIVE SUM OF INDEPENDENT RANDOM VARIABLES

ASYMPTOTIC DISTRIBUTION OF THE MAXIMUM CUMULATIVE SUM OF INDEPENDENT RANDOM VARIABLES ASYMPTOTIC DISTRIBUTION OF THE MAXIMUM CUMULATIVE SUM OF INDEPENDENT RANDOM VARIABLES KAI LAI CHUNG The limiting distribution of the maximum cumulative sum 1 of a sequence of independent random variables

More information

OQ4867. Let ABC be a triangle and AA 1 BB 1 CC 1 = {M} where A 1 BC, B 1 CA, C 1 AB. Determine all points M for which ana 1...
