TOPIC: Characteristic functions, cont'd

This lecture develops an inversion formula for recovering the density of a smooth random variable X from its characteristic function, and uses that formula to establish the fact that, in general, the characteristic function of X uniquely characterizes the distribution of X. We begin by discussing the characteristic functions of sums and products of random variables.

Sums and products

C-valued random variables Z_1 = U_1 + iV_1, ..., Z_n = U_n + iV_n, all defined on a common probability space, are said to be independent if the pairs (U_k, V_k) for k = 1, ..., n are independent.

Theorem 1. If Z_1, ..., Z_n are independent C-valued integrable random variables, then ∏_{k=1}^n Z_k is integrable and

    E(∏_{k=1}^n Z_k) = ∏_{k=1}^n E(Z_k).    (1)

Proof. The case n = 2 follows from the identity (U_1 + iV_1)(U_2 + iV_2) = U_1U_2 − V_1V_2 + i(U_1V_2 + U_2V_1) and the analogue of (1) for integrable real-valued random variables. The general case follows by induction on n.

Suppose X and Y are independent real random variables with distributions µ and ν respectively. The distribution of the sum S := X + Y is called the convolution of µ and ν, denoted µ ∗ ν, and is given by the formula

    (µ ∗ ν)(B) = P[S ∈ B] = ∫ P[X + Y ∈ B | X = x] µ(dx) = ∫ P[x + Y ∈ B] µ(dx)
               = ∫ ν(B − x) µ(dx) = ∫ µ(B − y) ν(dy)    (2)

for (Borel) subsets B of R; here B − x := { b − x : b ∈ B }. When X and Y have densities f and g respectively, this calculation can be pushed further (see Exercise 1) to show that S has density f ∗ g (called the convolution of f and g) given by

    (f ∗ g)(s) = ∫ f(x) g(s − x) dx = ∫ f(s − y) g(y) dy.    (3)

Since the addition of random variables is associative, (X + Y) + Z = X + (Y + Z), so is the convolution of probability measures: (µ ∗ ν) ∗ ρ = µ ∗ (ν ∗ ρ). In general, the convolution µ_1 ∗ ··· ∗ µ_n of several measures is difficult to compute; however, the characteristic function of µ_1 ∗ ··· ∗ µ_n is easily obtained from the characteristic functions of the individual µ_k's:

Theorem 2. If X_1, ..., X_n are independent real random variables, then S = Σ_{k=1}^n X_k has characteristic function φ_S = ∏_{k=1}^n φ_{X_k}.

Proof. Since e^{itS} is the product of the independent complex-valued integrable random variables e^{itX_k} for k = 1, ..., n, Theorem 1 implies that E(e^{itS}) = ∏_{k=1}^n E(e^{itX_k}).

Example 1. (A) Suppose X and Y are independent and each is uniformly distributed over [−1/2, 1/2]. Using (3) it is easy to check that S = X + Y has the so-called triangular distribution, with density f_S(s) = (1 − |s|)^+ (see Exercise 1). Since

    φ_X(t) = ∫_{−1/2}^{1/2} cos(tx) dx + i ∫_{−1/2}^{1/2} sin(tx) dx = sin(t/2) / (t/2)

(this verifies line 10 of Table 2) and since φ_Y = φ_X, we have φ_S(t) = sin²(t/2)/(t²/4) = 2(1 − cos(t))/t²; this gives line 11 of the Table.
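Numerical aside (my addition, not part of the original notes; it assumes Python with NumPy). Example 1(A) and Theorem 2 are easy to sanity-check on a grid: convolving two Uniform[−1/2, 1/2] densities should reproduce the triangular density (1 − |s|)^+, and φ_S should equal φ_X².

```python
import numpy as np

dx = 1e-3
x = np.arange(-1.0, 1.0, dx)
f = np.where(np.abs(x) <= 0.5, 1.0, 0.0)        # density of Uniform[-1/2, 1/2]

conv = np.convolve(f, f, mode="same") * dx      # grid approximation to (f * f)(s)
triangular = np.maximum(1.0 - np.abs(x), 0.0)   # claimed density of S = X + Y
print(np.max(np.abs(conv - triangular)))        # small, grid-level error

t = np.array([0.5, 1.0, 2.0, 5.0])
phi_X = np.sin(t / 2) / (t / 2)                 # cf of Uniform[-1/2, 1/2]
phi_S = 2 * (1 - np.cos(t)) / t**2              # cf of the triangular law
print(np.max(np.abs(phi_S - phi_X**2)))         # ~0, confirming phi_S = phi_X^2
```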

(B) Suppose X and Z are independent real random variables, with Z ~ N(0, 1). Then X_σ := X + σZ has characteristic function φ_{X_σ}(t) = φ_X(t) φ_{σZ}(t) = φ_X(t) e^{−σ²t²/2}. We're interested in X_σ because its distribution is very smooth (see Lemma 1 and Exercise 3) for any σ, and is almost the same as the distribution of X when σ is small.

Theorem 3. Let X and Y be independent real random variables with characteristic functions φ and ψ respectively. The product XY has characteristic function

    E(e^{itXY}) = E(φ(tY)) = E(ψ(tX)).    (4)

Proof. The conditional expectation of e^{itXY} given that X = x equals the unconditional expectation of e^{itxY}, namely E(e^{itxY}) = ψ(tx). Letting µ denote the distribution of X we thus have E(e^{itXY}) = ∫ E(e^{itXY} | X = x) µ(dx) = ∫ ψ(tx) µ(dx) = E(ψ(tX)).

An inversion formula

This section shows how some probability densities on R can be recovered from their characteristic functions by means of a so-called inversion formula. We begin with a special case, which will be used later on in the proof of the general one.

Lemma 1. Let X be a real random variable with characteristic function φ. Let Z ~ N(0, 1) be independent of X. For each σ > 0, the random variable X_σ := X + σZ has density f_σ (with respect to Lebesgue measure) given by

    f_σ(x) = (1/2π) ∫ e^{−itx} φ(t) e^{−σ²t²/2} dt    (5)

for x ∈ R.

Proof. For ξ ∈ R, the random variable X_ξ := X − ξ has characteristic function φ_{X_ξ}(y) = φ(y) e^{−iξy}. Taking X = X_ξ and t = 1 in (4) gives

    E(φ(Y) e^{−iξY}) = E(ψ(X − ξ))    (6)

for any random variable Y with characteristic function ψ. Applying this to Y ~ N(0, 1/σ²), with density f_Y(y) = σ e^{−σ²y²/2} / √(2π) and characteristic function ψ(t) = e^{−t²/(2σ²)}, we get

    ∫ φ(y) e^{−iξy} (σ/√(2π)) e^{−σ²y²/2} dy = ∫ e^{−(x−ξ)²/(2σ²)} µ(dx),

where µ is the distribution of X. Rearranging terms gives

    (1/(√(2π) σ)) ∫ e^{−(ξ−x)²/(2σ²)} µ(dx) = (1/2π) ∫ e^{−iξy} φ(y) e^{−σ²y²/2} dy.    (7)

To interpret the left-hand side, note that conditional on X = x, one has X_σ = x + σZ ~ N(x, σ²), with density g_x(ξ) = e^{−(ξ−x)²/(2σ²)} / (√(2π) σ) at ξ. The left-hand side of (7), to wit ∫ g_x(ξ) µ(dx), is thus the unconditional density of X_σ, evaluated at ξ. This proves (5).
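Numerical aside (again my addition, not in the original notes). Lemma 1 can be checked by quadrature: take X ~ Exp(1), whose characteristic function is φ(t) = 1/(1 − it) (see Example 2 below), approximate (5) by a Riemann sum, and compare against a Monte Carlo histogram of X_σ = X + σZ. The residual reported at the end is Monte Carlo and binning error.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
t = np.linspace(-40.0, 40.0, 80001)
dt = t[1] - t[0]
phi = 1.0 / (1.0 - 1j * t)                      # cf of Exp(1)

def f_sigma(x):
    # Riemann sum for (1/2pi) * integral of e^{-itx} phi(t) e^{-sigma^2 t^2/2} dt
    vals = np.exp(-1j * t * x) * phi * np.exp(-sigma**2 * t**2 / 2)
    return vals.sum().real * dt / (2 * np.pi)

samples = rng.exponential(1.0, 200_000) + sigma * rng.standard_normal(200_000)
hist, edges = np.histogram(samples, bins=40, range=(-2.0, 4.0), density=True)
mids = (edges[:-1] + edges[1:]) / 2
print(max(abs(f_sigma(m) - h) for m, h in zip(mids, hist)))   # small (~0.01)
```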

Theorem 4 (The inversion formula for densities). Let X be a real random variable whose characteristic function φ is integrable over R, so ∫ |φ(t)| dt < ∞. Then X has a bounded continuous density f on R given by

    f(x) = (1/2π) ∫ e^{−itx} φ(t) dt.    (8)

Proof. We need to show that (A) the integral in (8) exists, (B) f is bounded, (C) f is continuous, (D) f(x) is real and nonnegative, and (E) P[a < X ≤ b] = ∫_a^b f(x) dx for all −∞ < a < b < ∞.

(A) f(x) exists since |e^{−itx} φ(t)| = |φ(t)| and ∫ |φ(t)| dt < ∞.

(B) f is bounded since |f(x)| ≤ (1/2π) ∫ |e^{−itx} φ(t)| dt ≤ (1/2π) ∫ |φ(t)| dt < ∞.

(C) We need to show that x_n → x ∈ R entails f(x_n) → f(x), i.e.,

    ∫ e^{−itx_n} φ(t) dt → ∫ e^{−itx} φ(t) dt.

This follows from the DCT, since g_n(t) := e^{−itx_n} φ(t) → e^{−itx} φ(t) for all t ∈ R, |g_n(t)| ≤ D(t) := |φ(t)| for all t ∈ R and all n ∈ N, and ∫ D(t) dt < ∞.

(D) Let Z be a standard normal random variable independent of X and let σ > 0. Lemma 1 asserts that X_σ := X + σZ has density

    f_σ(x) = (1/2π) ∫ e^{−itx} φ(t) e^{−σ²t²/2} dt.

Note that

    sup_{x∈R} |f(x) − f_σ(x)| = sup_{x∈R} |(1/2π) ∫ e^{−itx} φ(t) (1 − e^{−σ²t²/2}) dt|
                              ≤ (1/2π) ∫ |φ(t)| (1 − e^{−σ²t²/2}) dt.

The DCT implies that the right-hand side here tends to 0 as σ → 0, since g_σ(t) := |φ(t)|(1 − e^{−σ²t²/2}) → 0 for all t ∈ R, g_σ(t) ≤ D(t) := |φ(t)| for all t ∈ R and all σ > 0, and ∫ D(t) dt < ∞. This proves that f_σ(x) tends uniformly to f(x) as σ → 0; in particular, since f_σ(x) is real and nonnegative, so is f(x).

(E) Let −∞ < a < b < ∞ be given. We need to show that

    P[a < X ≤ b] = ∫_a^b f(x) dx.    (9)

To this end, let n ∈ N and let g_n: R → R be the piecewise linear function with

    g_n(−∞) = 0 = g_n(+∞),  g_n(a) = 0 = g_n(b + 1/n),  g_n(a + 1/n) = 1 = g_n(b).

[Figure: the graph of g_n, a trapezoid rising linearly from 0 at a to 1 at a + 1/n, equal to 1 on [a + 1/n, b], and falling linearly back to 0 at b + 1/n.]

As in the proof of (D), let X_σ = X + σZ with σ > 0 and Z ~ N(0, 1) independent of X. Since X_σ has density f_σ, we have

    E(g_n(X_σ)) := ∫_Ω g_n(X_σ(ω)) P(dω) = ∫ g_n(x) f_σ(x) dx.    (10)

Take limits in (10) as σ → 0. In the middle term g_n(X_σ(ω)) converges to g_n(X(ω)) for each sample point ω; since the convergence is dominated by 1 and E(1) = 1 < ∞, the DCT implies that E(g_n(X_σ)) → E(g_n(X)). On the right-hand side, g_n(x) f_σ(x) converges pointwise to g_n(x) f(x), with the convergence dominated by ((1/2π) ∫ |φ(t)| dt) I_{(a, b+1]}; since ∫ I_{(a, b+1]}(x) dx < ∞, another application of the DCT shows that the right-hand side converges to ∫ g_n(x) f(x) dx. The upshot is

    E(g_n(X)) = ∫ g_n(x) f(x) dx.    (11)

Now take limits in (11) as n → ∞. Since g_n → g := I_{(a,b]} pointwise and boundedly, two more applications of the DCT (give the details!) yield

    E(g(X)) = ∫ g(x) f(x) dx.

This is the same as (9).
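Numerical aside (added; not in the original notes). A concrete instance of (8): the N(0, 1) characteristic function e^{−t²/2} is integrable, and inverting it numerically recovers the standard normal density.

```python
import numpy as np

t = np.linspace(-40.0, 40.0, 400001)
dt = t[1] - t[0]
phi = np.exp(-t**2 / 2)                         # cf of N(0, 1)

for x in (0.0, 1.0, 2.5):
    f = (np.exp(-1j * t * x) * phi).sum().real * dt / (2 * np.pi)   # formula (8)
    print(x, f, np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))             # columns agree
```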

Example 2. (A) Suppose X has the standard exponential distribution, with density f(x) = e^{−x} I_{[0,∞)}(x) and characteristic function φ(t) = 1/(1 − it). φ is not integrable because |φ(t)| ~ 1/|t| as |t| → ∞. This is consistent with Theorem 4 because X doesn't admit a continuous density.

(B) Consider the two-sided exponential distribution, with density f(x) = e^{−|x|}/2 for −∞ < x < ∞. f has characteristic function

    φ(t) = ∫ e^{itx} f(x) dx = (1/2) [ ∫_{−∞}^0 e^{itx} e^{x} dx + ∫_0^∞ e^{itx} e^{−x} dx ]
         = (1/2) [ 1/(1 + it) + 1/(1 − it) ] = 1/(1 + t²).

This φ is integrable, so Theorem 4 implies that

    (1/2π) ∫ e^{−itx} (1/(1 + t²)) dt = (1/2) e^{−|x|}.

Multiplying by 2 and interchanging the roles of x and t (the integrand is symmetric), this says that ∫ e^{itx} (1/(π(1 + x²))) dx = e^{−|t|}. Thus the standard Cauchy distribution, with density 1/(π(1 + x²)) for −∞ < x < ∞, has characteristic function e^{−|t|}. This gives line 9 of Table 2.

The uniqueness theorem

Here we establish the important fact that a probability measure on R is uniquely determined by its characteristic function.

Theorem 5 (The uniqueness theorem for characteristic functions). Let X be a real random variable with distribution function F and characteristic function φ. Similarly, let Y have distribution function G and characteristic function ψ. If φ(t) = ψ(t) for all t ∈ R, then F(x) = G(x) for all x ∈ R.

Proof. Let Z ~ N(0, 1) independently of both X and Y. Set X_σ = X + σZ and Y_σ = Y + σZ for σ > 0. Since X_σ and Y_σ have the same integrable characteristic function, to wit φ(t) e^{−σ²t²/2} = ψ(t) e^{−σ²t²/2}, Theorem 4 implies that they have the same density, and hence that

    E(g(X_σ)) = E(g(Y_σ))    (12)

for any continuous bounded function g: R → R. Letting σ → 0 in (12) gives (how?)

    E(g(X)) = E(g(Y)).    (13)

Taking g in (13) to be the piecewise linear function that equals 1 on (−∞, x], falls linearly to 0 at x + 1/n, and equals 0 on [x + 1/n, ∞), and letting n → ∞, gives (how?)

    E(I_{(−∞,x]}(X)) = E(I_{(−∞,x]}(Y));    (14)

but this is the same as F(x) = G(x). (There is an inversion formula for cdfs implicit in this argument; see Exercise 6.)

Example 3. According to the uniqueness theorem, a random variable X is normally distributed with mean µ and variance σ² if and only if E(e^{itX}) = exp(iµt − σ²t²/2) for all real t. It follows easily from this that the sum of two independent normally distributed random variables is itself normally distributed. In other words, the family { N(µ, σ²) : µ ∈ R, σ² ≥ 0 } of normal distributions on R is closed under convolution.

Example 4. Suppose X_1, X_2, ..., X_n are iid standard Cauchy random variables. By Example 2, each X_i has characteristic function φ(t) = e^{−|t|}. The sum S_n = X_1 + ··· + X_n thus has characteristic function φ_{S_n}(t) = e^{−n|t|}, and the sample average X̄_n = S_n/n has characteristic function φ_{X̄_n}(t) = φ_{S_n}(t/n) = e^{−|t|}. Thus X̄_n is itself standard Cauchy. What does this say about using the sample mean to estimate the center of a distribution?
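Numerical aside (added). Example 4 is worth simulating: the empirical cdf of the sample means matches the standard Cauchy cdf F(x) = 1/2 + arctan(x)/π no matter how large n is, so averaging yields no concentration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 10_000
means = rng.standard_cauchy((reps, n)).mean(axis=1)   # replicates of X-bar_n

for x in (-2.0, 0.0, 2.0):
    empirical = (means <= x).mean()
    exact = 0.5 + np.arctan(x) / np.pi                # standard Cauchy cdf
    print(x, empirical, exact)                        # agree up to Monte Carlo error
```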

Theorem 6 (The uniqueness theorem for MGFs). Let X and Y be real random variables with respective real moment generating functions M and N and distribution functions F and G. If M(u) = N(u) < ∞ for all u in some nonempty open interval, then F(x) = G(x) for all x ∈ R.

Proof. Call the interval (a, b) and put D = { z ∈ C : a < ℜ(z) < b }. We will consider two cases: 0 ∈ (a, b), and 0 ∉ (a, b).

Case 1: 0 ∈ (a, b). In this case the imaginary axis I := { it : t ∈ R } is contained in D. The complex generating functions G_X and G_Y of X and Y exist and are differentiable on D, and agree on { u + i0 : u ∈ (a, b) } by assumption. By Theorem 3.4, G_X and G_Y agree on all of D, and in particular on I. In other words, E(e^{itX}) = G_X(it) = G_Y(it) = E(e^{itY}) for all t ∈ R. Thus X and Y have the same characteristic function, and hence the same distribution.

Case 2: 0 ∉ (a, b). We treat this by exponential tilting, as follows. For simplicity of exposition, suppose that X and Y have densities f and g respectively. Put θ = (a + b)/2 and set

    f_θ(x) = e^{θx} f(x)/M(θ) and g_θ(x) = e^{θx} g(x)/N(θ).    (15)

f_θ and g_θ are probability densities; the corresponding MGFs are

    M_θ(u) = M(u + θ)/M(θ) and N_θ(u) = N(u + θ)/N(θ).

M_θ and N_θ coincide and are finite on the interval (a − θ, b − θ). Since this interval contains 0, the preceding argument shows that the distributions with densities f_θ and g_θ coincide. It follows from (15) that the distributions with densities f and g coincide.

Theorem 7 (The uniqueness theorem for moments). Let X and Y be real random variables with respective distribution functions F and G. If

    (i) X and Y each have (finite) moments of all orders,
    (ii) E(X^k) = E(Y^k) =: α_k for all k ∈ N, and
    (iii) the radius R of convergence of the power series Σ_{k=1}^∞ α_k u^k / k! is nonzero,

then F(x) = G(x) for all x ∈ R.

Proof. Let M and N be the MGFs of X and Y, respectively. By Theorem 2.5, M(u) = Σ_{k=0}^∞ α_k u^k / k! = N(u) (with α_0 = 1) for all u in the nonempty open interval (−R, R). Theorem 6 thus implies F = G.

Example 5. (A) Suppose X is a random variable such that

    α_k := E(X^k) = (k − 1)(k − 3) ··· 5 · 3 · 1 if k is even, and 0 if k is odd.

Then X ~ N(0, 1), because a standard normal random variable has these moments and the series Σ_k α_k u^k / k! has an infinite radius of convergence.

(B) It is possible for two random variables to have the same moments, but yet to have different distributions. For example, suppose Z ~ N(0, 1) and set

    X = e^Z.    (16)

X is said to have a log-normal distribution, even though it would be more accurate to say that the log of X is normally distributed. By the change of variables formula, X has density

    f_X(x) = f_Z(z) |dz/dx| = e^{−(log(x))²/2} / (√(2π) x) = 1 / (√(2π) x^{1 + log(x)/2})    (17)

for x > 0, and 0 otherwise. The k-th moment of X is E(X^k) = E(e^{kZ}) = e^{k²/2}; this is finite, but increases so rapidly with k that the power series Σ_k E(X^k) u^k / k! converges only for u = 0 (check this!).
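Numerical aside (added). The "check this!" is quick to do in logarithms: the k-th term of Σ_k e^{k²/2} u^k / k! eventually grows without bound for any fixed u ≠ 0, because k²/2 dominates log k! ≈ k log k − k.

```python
import math

u = 1e-6                          # even a tiny u fails
for k in (10, 20, 40, 80, 160):
    log_term = k**2 / 2 + k * math.log(u) - math.lgamma(k + 1)
    print(k, log_term)            # decreases at first, then increases without bound
```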

For real numbers α put

    g_α(x) = f_X(x) [1 + sin(α log(x))]

for x > 0, and g_α(x) = 0 otherwise. I am going to show that one can pick an α ≠ 0 such that

    ∫ x^k g_α(x) dx = ∫ x^k f_X(x) dx    (18)

for k = 0, 1, .... For k = 0, (18) says that the nonnegative function g_α integrates to 1, and thus is a probability density. Let Y be a random variable with this density. Then by (18) we have E(Y^k) = E(X^k) for k = 1, 2, ..., even though X and Y have different densities, and therefore different distributions.

It remains to show that there is a nonzero α such that (18) holds, or, equivalently, such that

    ν_k := ∫_0^∞ x^k f_X(x) sin(α log(x)) dx = 0 for k = 0, 1, 2, ....

Letting ℑ(z) denote the imaginary part v of the complex number z = u + iv, we have

    ν_k = E[X^k sin(α log(X))] = E(e^{kZ} sin(αZ)) = ℑ[E(e^{kZ} e^{iαZ})]
        = ℑ[E(e^{(k+iα)Z})] = ℑ[e^{(k+iα)²/2}] = ℑ[e^{(k²−α²)/2} e^{ikα}] = e^{(k²−α²)/2} sin(kα).

Consequently we can make ν_k = 0 for all k by taking α = π, or 2π, or 3π, .... We have not only produced two distinct densities with the same moments, but in fact an infinite sequence g_0 = f_X, g_π, g_{2π}, g_{3π}, ... of such densities!

[Figure: graphs of g_α(x) = [1 + sin(α log(x))] / (√(2π) x^{1+log(x)/2}) versus x ∈ (0, 4), for α = 0 and α = 3π; g_0 is the log-normal density, and g_{3π} has the same moments as g_0.]
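Numerical aside (added). The identity ν_k = e^{(k²−α²)/2} sin(kα) can be confirmed by quadrature. Substituting x = e^z turns the k-th moment of g_α into E[e^{kZ}(1 + sin(αZ))] with Z ~ N(0, 1):

```python
import numpy as np

z = np.linspace(-30.0, 30.0, 600001)
dz = z[1] - z[0]
phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)    # standard normal density

for k in range(5):
    m0 = (np.exp(k * z) * phi).sum() * dz                              # moment of g_0
    mpi = (np.exp(k * z) * phi * (1 + np.sin(np.pi * z))).sum() * dz   # moment of g_pi
    print(k, m0, mpi, np.exp(k**2 / 2))         # all three agree for each k
```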

Exercises

Exercise 1. (a) Suppose that µ and ν are probability measures on R with densities f and g respectively. Show that the convolution µ ∗ ν has density f ∗ g given by (3). (b) Verify the claim made in Example 1(A), that the sum S of two independent random variables, each uniformly distributed over [−1/2, 1/2], has density f_S(s) = (1 − |s|)^+ with respect to Lebesgue measure. [Hint for (a): Continue (2) by writing ν(B − x) = ∫_B g(s − x) ds (why?) and then use Fubini's theorem to get P[S ∈ B] = ∫_B (f ∗ g)(s) ds.]

Exercise 2. Show that the function f_σ defined by (5) is infinitely differentiable. [Hint: replace the real argument x by iz with z ∈ C and show that the resulting integral is a linear combination of complex generating functions.]

Exercise 3. The function g_n in (10) may be replaced by g := I_{(a,b]}, to get P[a < X_σ ≤ b] = ∫_a^b f_σ(x) dx. Show how (9) may be deduced from this.

Exercise 4. Use line 11 of Table 2 and the inversion formula (8) for densities to get line 12 (for simplicity take α = 1). Show that f(x) := (1 − cos(x)) / (πx²) is in fact a probability density on R and deduce that ∫_0^∞ (1 − cos(x))/x² dx = π/2.

Exercise 5 (An inversion formula for point masses). Let X be a random variable with characteristic function φ. Show that for each x ∈ R,

    P[X = x] = lim_{T→∞} (1/2T) ∫_{−T}^{T} e^{−itx} φ(t) dt.    (19)

Deduce that if φ(t) → 0 as |t| → ∞, then P[X = x] = 0 for all x ∈ R. [Hint for (19): write φ(t) as E(e^{itX}), use Fubini's theorem, and the DCT (with justifications).]

Exercise 6 (An inversion formula for cdfs). Let X be a random variable with distribution function F, density f, and characteristic function φ.

(a) Suppose that φ is integrable. Use the inversion formula (8) for densities to show that

    F(b) − F(a) = (1/2π) ∫ ((e^{−ita} − e^{−itb}) / it) φ(t) dt    (20)

for −∞ < a < b < ∞. [Hint: (F(b) − F(a))/(b − a) is the density at b for the random variable X + U, where U is uniformly distributed over [0, b − a] independently of X.]

(b) Now drop the assumption that φ is integrable. Show that

    F(b) − F(a) = lim_{σ→0} (1/2π) ∫ ((e^{−ita} − e^{−itb}) / it) φ(t) e^{−σ²t²/2} dt,    (21)

provided that F is continuous at a and b.

Exercise 7 (The inversion formula for lattice distributions). Let φ be the characteristic function of a random variable X which takes almost all its values in the lattice L_{a,h} = { a + kh : k = 0, ±1, ±2, ... }. Show that

    P[X = x] = (h/2π) ∫_{−π/h}^{π/h} e^{−itx} φ(t) dt    (22)

for each x ∈ L_{a,h}. Give an analogous formula for S_n = Σ_{i=1}^n X_i, where X_1, ..., X_n are independent random variables, each distributed like X. [Hint: use Fubini's theorem.]

Exercise 8. Use the inversion formula (22) to recover the probability mass function of the Binomial(n, p) distribution from its characteristic function (p e^{it} + q)^n.
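Numerical aside (added). Exercise 8 can be previewed numerically: with a = 0 and h = 1 the lattice is the integers, and (22) recovers each Binomial(n, p) probability from the characteristic function.

```python
import numpy as np
from math import comb

n, p = 10, 0.3
q = 1 - p
t = np.linspace(-np.pi, np.pi, 200001)
dt = t[1] - t[0]
phi = (p * np.exp(1j * t) + q) ** n             # Binomial(n, p) cf

for k in range(n + 1):
    pk = (np.exp(-1j * t * k) * phi).sum().real * dt / (2 * np.pi)   # formula (22), h = 1
    print(k, pk, comb(n, k) * p**k * q**(n - k))                     # matches the pmf
```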

Exercise 9. Use characteristic functions to show that the following families of distributions in Table 1 are closed under convolution: Degenerate, Binomial (fixed p), Poisson, Negative binomial (fixed p), Gamma (fixed α), and Symmetric Cauchy.

A random variable X is said to have an infinitely divisible distribution if for each positive integer n, there exist iid random variables X_{n,1}, ..., X_{n,n} whose sum X_{n,1} + ··· + X_{n,n} has the same distribution as X.

Exercise 10. Show that the Normal distribution and all but one (which one?) of the distributions in the preceding exercise are infinitely divisible.

Exercise 11. (a) Suppose X and Y are independent random variables, with densities f and g respectively. Show that Z := Y − X has density

    h(z) := ∫ f(x) g(x + z) dx.    (23)

(b) Suppose the random variables X and Y in part (a) are each Gamma(r, 1), for a positive integer r. Use (23) to show that Z has density

    h_r(z) := e^{−|z|} Σ_{j=0}^{r−1} [ Γ(r + j) / (Γ(r) Γ(r − j) j! 2^{r+j}) ] |z|^{r−1−j}.    (24)

(c) Find the characteristic function of the unnormalized t-distribution with ν degrees of freedom, for an odd integer ν.

Exercise 12. Let IT_α be the so-called inverse triangular distribution with parameter α, having characteristic function φ_α(t) = (1 − |t|/α)^+ (see line 12 of Table 1). Show that the characteristic function φ(t) = ((1 − |t|)^+ + (1 − |t|/2)^+)/2 of the mixture µ = (IT_1 + IT_2)/2 agrees with the characteristic function ψ of ν = IT_{4/3} on a nonempty open interval, even though µ ≠ ν. Why doesn't this contradict Theorem 5?

Let f be a continuous integrable function from the real line R to the complex plane C. (Here integrable means ∫ |f(x)| dx < ∞.) The Fourier transform of f is the function f̂ from R to C defined by

    f̂(t) = ∫ e^{itx} f(x) dx    (25)

for −∞ < t < ∞. If f is a probability density, then f̂ is its characteristic function; the goal of the next two exercises is to extend some of the things we know about characteristic functions to Fourier transforms.

Exercise 13. Let f: R → C be continuous and integrable and let σ > 0. Show that

    ∫ f(x − z) (e^{−z²/(2σ²)} / (√(2π) σ)) dz = (1/2π) ∫ e^{−itx} f̂(t) e^{−σ²t²/2} dt    (26)

for all x ∈ R. [Hint: by the inversion formula for densities, (26) is true when f is a probability density (i.e., real, positive, and integrable); deduce the general case (f complex and integrable) from the special one.]

Exercise 14. Let f: R → C be continuous and integrable. Show that if f̂ is integrable, then

    f(x) = (1/2π) ∫ e^{−itx} f̂(t) dt    (27)

for all x ∈ R. [Hint: let σ tend to 0 in (26); but beware, f need not be bounded.]
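Numerical aside (added, assuming NumPy). The point of Exercise 12 in numbers: the two characteristic functions coincide on |t| ≤ 1, yet differ at, e.g., t = ±1.5, so uniqueness requires agreement on all of R, not just on an open interval.

```python
import numpy as np

def it_cf(t, alpha):                    # cf of the inverse triangular law IT_alpha
    return np.maximum(1 - np.abs(t) / alpha, 0.0)

t = np.linspace(-2.0, 2.0, 9)
mix = (it_cf(t, 1.0) + it_cf(t, 2.0)) / 2       # cf of the mixture (IT_1 + IT_2)/2
psi = it_cf(t, 4.0 / 3.0)                       # cf of IT_{4/3}
print(np.abs(mix - psi))                        # zero for |t| <= 1, nonzero at t = +/-1.5
```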