09:53 11/5/2000

TOPIC. Characteristic functions, cont'd.

This lecture develops an inversion formula for recovering the density of a smooth random variable $X$ from its characteristic function, and uses that formula to establish the fact that, in general, the characteristic function of $X$ uniquely characterizes the distribution of $X$. We begin by discussing the characteristic functions of sums and products of random variables.

Sums and products. $\mathbf{C}$-valued random variables $Z_1 = U_1 + iV_1, \ldots, Z_n = U_n + iV_n$, all defined on a common probability space, are said to be independent if the pairs $(U_k, V_k)$ for $k = 1, \ldots, n$ are independent.

Theorem 1. If $Z_1, \ldots, Z_n$ are independent $\mathbf{C}$-valued integrable random variables, then $\prod_{k=1}^n Z_k$ is integrable and

    $E\bigl(\prod_{k=1}^n Z_k\bigr) = \prod_{k=1}^n E(Z_k).$    (1)

Proof. The case $n = 2$ follows from the identity $(U_1 + iV_1)(U_2 + iV_2) = U_1 U_2 - V_1 V_2 + i(U_1 V_2 + U_2 V_1)$ and the analogue of (1) for integrable real-valued random variables. The general case follows by induction on $n$.

Suppose $X$ and $Y$ are independent real random variables with distributions $\mu$ and $\nu$ respectively. The distribution of the sum $S := X + Y$ is called the convolution of $\mu$ and $\nu$, denoted $\mu * \nu$, and is given by the formula

    $(\mu * \nu)(B) = P[S \in B] = \int P[X + Y \in B \mid X = x]\,\mu(dx) = \int P[x + Y \in B]\,\mu(dx) = \int \nu(B - x)\,\mu(dx) = \int \mu(B - y)\,\nu(dy)$    (2)

for (Borel) subsets $B$ of $\mathbf{R}$; here $B - x := \{\,b - x : b \in B\,\}$. When $X$ and $Y$ have densities $f$ and $g$ respectively, this calculation can be pushed further (see Exercise 1) to show that $S$ has density $f * g$ (called the convolution of $f$ and $g$) given by

    $(f * g)(s) = \int f(x)\,g(s - x)\,dx = \int f(s - y)\,g(y)\,dy.$    (3)

Since the addition of random variables is associative, $(X + Y) + Z = X + (Y + Z)$, so is the convolution of probability measures: $(\mu * \nu) * \rho = \mu * (\nu * \rho)$. In general, the convolution $\mu_1 * \cdots * \mu_n$ of several measures is difficult to compute; however, the characteristic function of $\mu_1 * \cdots * \mu_n$ is easily obtained from the characteristic functions of the individual $\mu_k$'s:

Theorem 2. If $X_1, \ldots, X_n$ are independent real random variables, then $S = \sum_{k=1}^n X_k$ has
characteristic function $\varphi_S = \prod_{k=1}^n \varphi_{X_k}$.

Proof. Since $e^{itS}$ is the product of the independent complex-valued integrable random variables $e^{itX_k}$ for $k = 1, \ldots, n$, Theorem 1 implies that $E(e^{itS}) = \prod_{k=1}^n E(e^{itX_k})$.

Example 1. (A) Suppose $X$ and $Y$ are independent and each is uniformly distributed over $[-1/2, 1/2]$. Using (3) it is easy to check that $S = X + Y$ has the so-called triangular distribution, with density $f_S(s) = (1 - |s|)^+$ (see Exercise 1). Since

    $\varphi_X(t) = \int_{-1/2}^{1/2} \cos(tx)\,dx + i \int_{-1/2}^{1/2} \sin(tx)\,dx = \frac{\sin(t/2)}{t/2}$

(this verifies line 10 of Table 2) and since $\varphi_Y = \varphi_X$, we have $\varphi_S(t) = \sin^2(t/2)/(t^2/4) = 2\bigl(1 - \cos(t)\bigr)/t^2$; this gives line 11 of the Table.

(B) Suppose $X$ and $Z$ are independent real random variables, with $Z \sim N(0, 1)$. Then $X_\sigma := X + \sigma Z$ has characteristic function $\varphi_{X_\sigma}(t) = \varphi_X(t)\,\varphi_{\sigma Z}(t) = \varphi_X(t)\,e^{-\sigma^2 t^2/2}$. We're interested in $X_\sigma$ because its distribution is very smooth (see Lemma 1 and Exercise 3) for any $\sigma > 0$, and is almost the same as the distribution of $X$ when $\sigma$ is small.
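As a quick numerical sanity check of Example 1(A) (my addition, not part of the notes), one can evaluate $\varphi_X(t) = \int_{-1/2}^{1/2} \cos(tx)\,dx$ by the midpoint rule and compare it, and its square, against the closed forms $\sin(t/2)/(t/2)$ and $2(1 - \cos t)/t^2$; the helper name `cf_uniform_half` is mine.

```python
import math

def cf_uniform_half(t, n=4000):
    """Midpoint-rule approximation of the characteristic function of
    Uniform[-1/2, 1/2]: the sine part vanishes by symmetry, leaving
    the integral of cos(tx) over [-1/2, 1/2]."""
    h = 1.0 / n
    return sum(math.cos(t * (-0.5 + (k + 0.5) * h)) * h for k in range(n))

t = 1.3
phi = cf_uniform_half(t)
closed = math.sin(t / 2) / (t / 2)        # line 10 of Table 2
phi_S = phi * phi                          # Theorem 2: cf of S = X + Y
closed_S = 2 * (1 - math.cos(t)) / t**2    # line 11 of Table 2
print(phi, closed, phi_S, closed_S)
```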
Theorem 3. Let $X$ and $Y$ be independent real random variables with characteristic functions $\varphi$ and $\psi$ respectively. The product $XY$ has characteristic function

    $E(e^{itXY}) = E\bigl(\varphi(tY)\bigr) = E\bigl(\psi(tX)\bigr).$    (4)

Proof. The conditional expectation of $e^{itXY}$ given that $X = x$ equals the unconditional expectation of $e^{itxY}$, namely $E(e^{itxY}) = \psi(tx)$. Letting $\mu$ denote the distribution of $X$ we thus have $E(e^{itXY}) = \int E(e^{itXY} \mid X = x)\,\mu(dx) = \int \psi(tx)\,\mu(dx) = E\bigl(\psi(tX)\bigr)$.

An inversion formula. This section shows how some probability densities on $\mathbf{R}$ can be recovered from their characteristic functions by means of a so-called inversion formula. We begin with a special case, which will be used later on in the proof of the general one.

Lemma 1. Let $X$ be a real random variable with characteristic function $\varphi$. Let $Z \sim N(0, 1)$ be independent of $X$. For each $\sigma > 0$, the random variable $X_\sigma := X + \sigma Z$ has density $f_\sigma$ (with respect to Lebesgue measure) given by

    $f_\sigma(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx}\,\varphi(t)\,e^{-\sigma^2 t^2/2}\,dt$    (5)

for $x \in \mathbf{R}$.

Proof. For $\xi \in \mathbf{R}$, the random variable $X_\xi := X - \xi$ has characteristic function $\varphi_{X_\xi}(y) = \varphi(y)\,e^{-i\xi y}$. Taking $X = X_\xi$ and $t = 1$ in (4) gives

    $E\bigl(\varphi(Y)\,e^{-i\xi Y}\bigr) = E\bigl(\psi(X - \xi)\bigr)$    (6)

for any random variable $Y$ with characteristic function $\psi$. Applying this to $Y \sim N(0, 1/\sigma^2)$, with density $f_Y(y) = \sigma e^{-\sigma^2 y^2/2}/\sqrt{2\pi}$ and characteristic function $\psi(t) = e^{-t^2/(2\sigma^2)}$, we get

    $\int \varphi(y)\,e^{-i\xi y}\,\frac{\sigma}{\sqrt{2\pi}}\,e^{-\sigma^2 y^2/2}\,dy = \int e^{-(x - \xi)^2/(2\sigma^2)}\,\mu(dx),$

where $\mu$ is the distribution of $X$. Rearranging terms gives

    $\int \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-(\xi - x)^2/(2\sigma^2)}\,\mu(dx) = \frac{1}{2\pi} \int e^{-i\xi y}\,\varphi(y)\,e^{-\sigma^2 y^2/2}\,dy.$    (7)

To interpret the left-hand side, note that conditional on $X = x$, one has $X_\sigma = x + \sigma Z \sim N(x, \sigma^2)$, with density

    $g_x(\xi) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-(\xi - x)^2/(2\sigma^2)}$

at $\xi$. The left-hand side of (7), to wit $\int g_x(\xi)\,\mu(dx)$, is thus the unconditional density of $X_\sigma$, evaluated at $\xi$. This proves (5).

Theorem 4 (The inversion formula for densities). Let $X$ be a real random variable whose characteristic function $\varphi$ is integrable over $\mathbf{R}$, so that $\int_{-\infty}^{\infty} |\varphi(t)|\,dt < \infty$. Then $X$ has a bounded
continuous density $f$ on $\mathbf{R}$ given by

    $f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx}\,\varphi(t)\,dt.$    (8)

Proof. We need to show that:

(A) the integral in (8) exists,
(B) $f$ is bounded,
(C) $f$ is continuous,
(D) $f(x)$ is real and nonnegative, and
(E) $P[a < X \le b] = \int_a^b f(x)\,dx$ for all $-\infty < a < b < \infty$.

(A) $f(x)$ exists since $|e^{-itx}\,\varphi(t)| = |\varphi(t)|$ and $\int |\varphi(t)|\,dt < \infty$.

(B) $f$ is bounded since $|f(x)| = \bigl|\frac{1}{2\pi} \int e^{-itx}\,\varphi(t)\,dt\bigr| \le \frac{1}{2\pi} \int |\varphi(t)|\,dt < \infty$.
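A small numerical illustration of the bound in step (B) (my addition, not part of the notes): for the two-sided exponential of Example 2(B), $\varphi(t) = 1/(1+t^2)$, so the bound is $(1/2\pi)\int |\varphi(t)|\,dt = 1/2$; it is attained because the density $e^{-|x|}/2$ has supremum $1/2$ at $x = 0$ (equality here is special to the case $\varphi \ge 0$).

```python
import math

def integral_abs_phi(T=4000.0, n=400_000):
    """Midpoint rule for ∫|φ(t)| dt with φ(t) = 1/(1+t²),
    the characteristic function of the two-sided exponential."""
    h = T / n
    return 2.0 * sum(h / (1.0 + ((k + 0.5) * h) ** 2) for k in range(n))

bound = integral_abs_phi() / (2 * math.pi)   # (1/2π)∫|φ| dt, should be ≈ 1/2
sup_f = 0.5                                  # sup of the density e^{-|x|}/2
print(bound, sup_f)
```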
(C) $f$ is continuous. We need to show that $x_n \to x \in \mathbf{R}$ entails $f(x_n) \to f(x)$, i.e., $\int e^{-itx_n}\,\varphi(t)\,dt \to \int e^{-itx}\,\varphi(t)\,dt$. This follows from the DCT, since $g_n(t) := e^{-itx_n}\,\varphi(t) \to e^{-itx}\,\varphi(t)$ for all $t \in \mathbf{R}$, $|g_n(t)| \le D(t) := |\varphi(t)|$ for all $t \in \mathbf{R}$ and all $n \in \mathbf{N}$, and $\int D(t)\,dt < \infty$.

(D) Let $Z$ be a standard normal random variable independent of $X$ and let $\sigma > 0$. Lemma 1 asserts that $X_\sigma := X + \sigma Z$ has density $f_\sigma(x) = \frac{1}{2\pi} \int e^{-itx}\,\varphi(t)\,e^{-\sigma^2 t^2/2}\,dt$. Note that

    $\sup_{x \in \mathbf{R}} |f(x) - f_\sigma(x)| = \sup_{x \in \mathbf{R}} \Bigl|\frac{1}{2\pi} \int e^{-itx}\,\varphi(t)\bigl(1 - e^{-\sigma^2 t^2/2}\bigr)\,dt\Bigr| \le \frac{1}{2\pi} \int |\varphi(t)|\,\bigl(1 - e^{-\sigma^2 t^2/2}\bigr)\,dt.$

The DCT implies that the right-hand side here tends to 0 as $\sigma \to 0$, since $g_\sigma(t) := |\varphi(t)|\bigl(1 - e^{-\sigma^2 t^2/2}\bigr) \to 0$ for all $t \in \mathbf{R}$, $g_\sigma(t) \le D(t) := |\varphi(t)|$ for all $t \in \mathbf{R}$ and all $\sigma > 0$, and $\int D(t)\,dt < \infty$. This proves that $f_\sigma(x)$ tends uniformly to $f(x)$ as $\sigma \to 0$; in particular, since $f_\sigma(x)$ is real and nonnegative, so is $f(x)$.

(E) Let $-\infty < a < b < \infty$ be given. We need to show that

    $P[a < X \le b] = \int_a^b f(x)\,dx.$    (9)

To this end, let $n \in \mathbf{N}$ and let $g_n : \mathbf{R} \to \mathbf{R}$ be the piecewise-linear function with $g_n(-\infty) = 0 = g_n(+\infty)$, $g_n(a) = 0 = g_n(b + 1/n)$, and $g_n(a + 1/n) = 1 = g_n(b)$; that is, $g_n$ rises linearly from 0 to 1 on $[a, a + 1/n]$, equals 1 on $[a + 1/n, b]$, falls linearly back to 0 on $[b, b + 1/n]$, and vanishes off $(a, b + 1/n)$. As in the proof of (D), let $X_\sigma = X + \sigma Z$ with $\sigma > 0$ and $Z \sim N(0, 1)$ independent of $X$. Since $X_\sigma$ has density $f_\sigma$, we have

    $E\bigl(g_n(X_\sigma)\bigr) := \int_\Omega g_n\bigl(X_\sigma(\omega)\bigr)\,P(d\omega) = \int g_n(x)\,f_\sigma(x)\,dx.$    (10)

Take limits in (10) as $\sigma \to 0$. In the middle term $g_n(X_\sigma(\omega))$ converges to $g_n(X(\omega))$ for each sample point $\omega$; since the convergence is dominated by 1 and $E(1) = 1 < \infty$, the DCT implies that $E(g_n(X_\sigma)) \to E(g_n(X))$. On the right-hand side, $g_n(x)\,f_\sigma(x)$ converges pointwise to $g_n(x)\,f(x)$, with the convergence dominated by $\bigl(\frac{1}{2\pi}\int |\varphi(t)|\,dt\bigr)\,I_{(a,\,b+1]}$; since $\int I_{(a,\,b+1]}(x)\,dx < \infty$, another application of the DCT shows that the right-hand side converges to $\int g_n(x)\,f(x)\,dx$. The upshot is

    $E\bigl(g_n(X)\bigr) = \int g_n(x)\,f(x)\,dx.$    (11)

Now take limits in (11) as $n \to \infty$.
Since $g_n \to g := I_{(a,b]}$ pointwise and boundedly, two more applications of the DCT (give the details!) yield

    $E\bigl(g(X)\bigr) = \int g(x)\,f(x)\,dx.$

This is the same as (9).
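The two formulas just proved lend themselves to a direct numerical experiment (my addition, not in the notes): implement the right side of (5)/(8) by the midpoint rule and compare with a known density. With $\varphi(t) = e^{-t^2/2}$ (standard normal), $\sigma = 0$ should recover the $N(0,1)$ density, and $\sigma > 0$ the $N(0, 1 + \sigma^2)$ density of $X_\sigma$.

```python
import math

def inv_density(phi, x, sigma=0.0, T=40.0, n=20_000):
    """Midpoint-rule value of (1/2π) ∫_{-T}^{T} e^{-itx} φ(t) e^{-σ²t²/2} dt.
    σ > 0 gives the smoothed density f_σ of Lemma 1 (eq. (5));
    σ = 0 gives the inversion formula (8). cos(tx) suffices here
    because for a real, even φ the imaginary part integrates to zero."""
    h = 2.0 * T / n
    s = 0.0
    for k in range(n):
        t = -T + (k + 0.5) * h
        s += math.cos(t * x) * phi(t) * math.exp(-0.5 * (sigma * t) ** 2) * h
    return s / (2.0 * math.pi)

phi_norm = lambda t: math.exp(-0.5 * t * t)     # cf of N(0,1)
x = 0.8
f0 = inv_density(phi_norm, x)                   # density of N(0,1) at x
fs = inv_density(phi_norm, x, sigma=0.5)        # density of N(0,1.25) at x
exact0 = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
exacts = math.exp(-0.5 * x * x / 1.25) / math.sqrt(2 * math.pi * 1.25)
print(f0, exact0, fs, exacts)
```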
Example 2. (A) Suppose $X$ has the standard exponential distribution, with density $f(x) = e^{-x} I_{[0,\infty)}(x)$ and characteristic function $\varphi(t) = 1/(1 - it)$. $\varphi$ is not integrable because $|\varphi(t)| \sim 1/|t|$ as $|t| \to \infty$. This is consistent with Theorem 4 because $X$ doesn't admit a continuous density.

(B) Consider the two-sided exponential distribution, with density $f(x) = e^{-|x|}/2$ for $-\infty < x < \infty$. $f$ has characteristic function

    $\varphi(t) = \int e^{itx} f(x)\,dx = \frac{1}{2}\Bigl[\int_{-\infty}^{0} e^{itx}\,e^{x}\,dx + \int_{0}^{\infty} e^{itx}\,e^{-x}\,dx\Bigr] = \frac{1}{2}\Bigl[\frac{1}{1 + it} + \frac{1}{1 - it}\Bigr] = \frac{1}{1 + t^2}.$

This $\varphi$ is integrable, so Theorem 4 implies that

    $\frac{1}{2\pi} \int \frac{e^{-itx}}{1 + t^2}\,dt = \frac{1}{2}\,e^{-|x|}.$

Thus the standard Cauchy distribution, with density $1/\bigl(\pi(1 + x^2)\bigr)$ for $-\infty < x < \infty$, has characteristic function $e^{-|t|}$. This gives line 9 of Table 2.

The uniqueness theorem. Here we establish the important fact that a probability measure on $\mathbf{R}$ is uniquely determined by its characteristic function.

Theorem 5 (The uniqueness theorem for characteristic functions). Let $X$ be a real random variable with distribution function $F$ and characteristic function $\varphi$. Similarly, let $Y$ have distribution function $G$ and characteristic function $\psi$. If $\varphi(t) = \psi(t)$ for all $t \in \mathbf{R}$, then $F(x) = G(x)$ for all $x \in \mathbf{R}$.

Proof. Let $Z \sim N(0, 1)$ independently of both $X$ and $Y$. Set $X_\sigma = X + \sigma Z$ and $Y_\sigma = Y + \sigma Z$ for $\sigma > 0$. Since $X_\sigma$ and $Y_\sigma$ have the same integrable characteristic function, to wit $\varphi(t)\,e^{-\sigma^2 t^2/2} = \psi(t)\,e^{-\sigma^2 t^2/2}$, Theorem 4 implies that they have the same density, and hence that

    $E\bigl(g(X_\sigma)\bigr) = E\bigl(g(Y_\sigma)\bigr)$    (12)

for any continuous bounded function $g : \mathbf{R} \to \mathbf{R}$. Letting $\sigma \to 0$ in (12) gives (how?)

    $E\bigl(g(X)\bigr) = E\bigl(g(Y)\bigr).$    (13)

Taking $g$ in (13) to be the continuous function that equals 1 on $(-\infty, x]$, decreases linearly from 1 to 0 on $[x, x + 1/n]$, and equals 0 on $[x + 1/n, \infty)$, and letting $n \to \infty$, gives (how?)
    $E\bigl(I_{(-\infty,\,x]}(X)\bigr) = E\bigl(I_{(-\infty,\,x]}(Y)\bigr);$    (14)

but this is the same as $F(x) = G(x)$. (There is an inversion formula for cdfs implicit in this argument; see Exercise 6.)

Example 3. According to the uniqueness theorem, a random variable $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$ if and only if $E(e^{itX}) = \exp(i\mu t - \sigma^2 t^2/2)$ for all real $t$. It follows easily from this that the sum of two independent normally distributed random variables is itself normally distributed. In other words, the family $\{\,N(\mu, \sigma^2) : \mu \in \mathbf{R},\ \sigma^2 \ge 0\,\}$ of normal distributions on $\mathbf{R}$ is closed under convolution.

Example 4. Suppose $X_1, X_2, \ldots, X_n$ are iid standard Cauchy random variables. By Example 2, each $X_i$ has characteristic function $\varphi(t) = e^{-|t|}$. The sum $S_n = X_1 + \cdots + X_n$ thus has characteristic function $\varphi_{S_n}(t) = e^{-n|t|}$, and the sample average $\bar{X}_n = S_n/n$ has characteristic function $\varphi_{\bar{X}_n}(t) = \varphi_{S_n}(t/n) = e^{-|t|}$. Thus $\bar{X}_n$ is itself standard Cauchy. What does this say about using the sample mean to estimate the center of a distribution?
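Example 4's moral shows up readily in simulation (my addition, not in the notes): averaging $n$ Cauchy observations does not concentrate, while averaging normals does. For the standard Cauchy, $P[|X| > 1] = 1 - (2/\pi)\arctan(1) = 1/2$, and by Example 4 the same holds for $\bar{X}_n$.

```python
import math
import random

random.seed(1)

def cauchy():
    """Standard Cauchy draw via the inverse-cdf method: tan(π(U - 1/2))."""
    return math.tan(math.pi * (random.random() - 0.5))

m, n = 20_000, 100
cauchy_means = [sum(cauchy() for _ in range(n)) / n for _ in range(m)]
frac = sum(1 for v in cauchy_means if abs(v) > 1) / m   # ≈ 1/2, same as one Cauchy draw

# Contrast: the mean of 100 N(0,1) draws is N(0, 0.01), so |mean| > 1 is a 10-sigma event.
normal_means = [sum(random.gauss(0.0, 1.0) for _ in range(n)) / n for _ in range(2000)]
frac_norm = sum(1 for v in normal_means if abs(v) > 1) / 2000
print(frac, frac_norm)
```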
Theorem 6 (The uniqueness theorem for MGFs). Let $X$ and $Y$ be real random variables with respective real moment generating functions $M$ and $N$ and distribution functions $F$ and $G$. If $M(u) = N(u) < \infty$ for all $u$ in some nonempty open interval, then $F(x) = G(x)$ for all $x \in \mathbf{R}$.

Proof. Call the interval $(a, b)$ and put $D = \{\,z \in \mathbf{C} : a < \Re(z) < b\,\}$. We will consider two cases: $0 \in (a, b)$, and $0 \notin (a, b)$.

Case 1: $0 \in (a, b)$. In this case the imaginary axis $I := \{\,it : t \in \mathbf{R}\,\}$ is contained in $D$. The complex generating functions $G_X$ and $G_Y$ of $X$ and $Y$ exist and are differentiable on $D$, and agree on $\{\,u + i0 : u \in (a, b)\,\}$ by assumption. By Theorem 34, $G_X$ and $G_Y$ agree on all of $D$, and in particular on $I$. In other words, $E(e^{itX}) = G_X(it) = G_Y(it) = E(e^{itY})$ for all $t \in \mathbf{R}$. Thus $X$ and $Y$ have the same characteristic function, and hence the same distribution.

Case 2: $0 \notin (a, b)$. We treat this by exponential tilting, as follows. For simplicity of exposition, suppose that $X$ and $Y$ have densities $f$ and $g$ respectively. Put $\theta = (a + b)/2$ and set

    $f_\theta(x) = e^{\theta x} f(x)/M(\theta) \quad\text{and}\quad g_\theta(x) = e^{\theta x} g(x)/N(\theta).$    (15)

$f_\theta$ and $g_\theta$ are probability densities; the corresponding MGFs are $M_\theta(u) = M(u + \theta)/M(\theta)$ and $N_\theta(u) = N(u + \theta)/N(\theta)$. $M_\theta$ and $N_\theta$ coincide and are finite on the interval $(a - \theta, b - \theta)$. Since this interval contains 0, the preceding argument shows that the distributions with densities $f_\theta$ and $g_\theta$ coincide. It follows from (15) that the distributions with densities $f$ and $g$ coincide.

Theorem 7 (The uniqueness theorem for moments). Let $X$ and $Y$ be real random variables with respective distribution functions $F$ and $G$. If (i) $X$ and $Y$ each have (finite) moments of all orders, (ii) $E(X^k) = E(Y^k) =: \alpha_k$ for all $k \in \mathbf{N}$, and (iii) the radius $R$ of convergence of the power series $\sum_{k=1}^\infty \alpha_k u^k/k!$ is nonzero, then $F(x) = G(x)$ for all $x \in \mathbf{R}$.

Proof. Let $M$ and $N$ be the MGFs of $X$ and $Y$, respectively. By Theorem 25, $M(u) = \sum_{k=0}^\infty \alpha_k u^k/k!$ (with $\alpha_0 := 1$)
$= N(u)$ for all $u$ in the nonempty open interval $(-R, R)$. Theorem 6 thus implies $F = G$.

Example 5. (A) Suppose $X$ is a random variable such that

    $\alpha_k := E(X^k) = \begin{cases} (k-1)(k-3)\cdots 5 \cdot 3 \cdot 1, & \text{if } k \text{ is even}, \\ 0, & \text{if } k \text{ is odd}. \end{cases}$

Then $X \sim N(0, 1)$, because a standard normal random variable has these moments and the series $\sum_{k=1}^\infty \alpha_k u^k/k!$ has an infinite radius of convergence.

(B) It is possible for two random variables to have the same moments, but yet to have different distributions. For example, suppose $Z \sim N(0, 1)$ and set

    $X = e^Z.$    (16)

$X$ is said to have a log-normal distribution, even though it would be more accurate to say that the log of $X$ is normally distributed. By the change of variables formula, $X$ has density

    $f_X(x) = f_Z(z)\,\Bigl|\frac{dz}{dx}\Bigr| = \frac{e^{-(\log(x))^2/2}}{\sqrt{2\pi}\,x} = \frac{1}{\sqrt{2\pi}\,x^{1 + \log(x)/2}}$    (17)

for $x > 0$, and 0 otherwise. The $k$th moment of $X$ is $E(X^k) = E(e^{kZ}) = e^{k^2/2}$; this is finite, but increases so rapidly with $k$ that the power series $\sum_{k=1}^\infty E(X^k)\,u^k/k!$ converges only for $u = 0$ (check this!). For real numbers $\alpha$ put

    $g_\alpha(x) = f_X(x)\bigl[1 + \sin\bigl(\alpha \log(x)\bigr)\bigr]$
for $x > 0$, and $g_\alpha(x) = 0$ otherwise. I am going to show that one can pick an $\alpha \ne 0$ such that

    $\int x^k g_\alpha(x)\,dx = \int x^k f_X(x)\,dx$    (18)

for $k = 0, 1, 2, \ldots$. For $k = 0$, (18) says that the nonnegative function $g_\alpha$ integrates to 1, and thus is a probability density. Let $Y$ be a random variable with this density. Then by (18) we have $E(Y^k) = E(X^k)$ for $k = 1, 2, \ldots$, even though $X$ and $Y$ have different densities, and therefore different distributions.

It remains to show that there is a nonzero $\alpha$ such that (18) holds, or, equivalently, such that

    $\nu_k := \int_0^\infty x^k f_X(x)\,\sin\bigl(\alpha \log(x)\bigr)\,dx = 0 \quad\text{for } k = 0, 1, 2, \ldots.$

Letting $\Im(z)$ denote the imaginary part $v$ of the complex number $z = u + iv$, we have

    $\nu_k = E\bigl[X^k \sin\bigl(\alpha \log(X)\bigr)\bigr] = E\bigl(e^{kZ} \sin(\alpha Z)\bigr) = \Im\bigl[E\bigl(e^{kZ} e^{i\alpha Z}\bigr)\bigr] = \Im\bigl[E\bigl(e^{(k + i\alpha)Z}\bigr)\bigr] = \Im\bigl[e^{(k + i\alpha)^2/2}\bigr] = \Im\bigl[e^{(k^2 - \alpha^2)/2}\,e^{ik\alpha}\bigr] = e^{(k^2 - \alpha^2)/2}\,\sin(k\alpha).$

Consequently we can make $\nu_k = 0$ for all $k$ by taking $\alpha = \pi$, or $2\pi$, or $3\pi, \ldots$. We have not only produced two distinct densities with the same moments, but in fact an infinite sequence $g_0 = f_X,\ g_\pi,\ g_{2\pi},\ g_{3\pi}, \ldots$ of such densities! The plot on the next page exhibits $g_0$ and $g_{3\pi}$.

[Figure: graphs of $g_\alpha(x) = \bigl[1 + \sin(\alpha \log(x))\bigr]/\bigl(\sqrt{2\pi}\,x^{1 + \log(x)/2}\bigr)$ versus $x \in (0, 4)$, for $\alpha = 0$ and $\alpha = 3\pi$; $g_0$ is the log-normal density, and $g_{3\pi}$ has the same moments as $g_0$.]

Exercise 1. (a) Suppose that $\mu$ and $\nu$ are probability measures on $\mathbf{R}$ with densities $f$ and $g$ respectively. Show that the convolution $\mu * \nu$ has density $f * g$ given by (3). (b) Verify the claim made in Example 1(A), that the sum $S$ of two independent random variables, each uniformly distributed over $[-1/2, 1/2]$, has density $f_S(s) = (1 - |s|)^+$ with respect to Lebesgue measure. [Hint for (a): Continue (2) by writing $\nu(B - x) = \int_B g(s - x)\,ds$ (why?)
and then use Fubini's theorem to get $P[S \in B] = \int_B (f * g)(s)\,ds$.]

Exercise 2. Show that the function $f_\sigma$ defined by (5) is infinitely differentiable. [Hint: replace the real argument $x$ by $iz$ with $z \in \mathbf{C}$ and show that the resulting integral is a linear combination of complex generating functions.]

Exercise 3. The function $g_n$ in (10) may be replaced by $g := I_{(a,b]}$, to get $P[a < X_\sigma \le b] = \int_a^b f_\sigma(x)\,dx$. Show how (9) may be deduced from this.
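Returning to Example 5(B): the identity $\nu_k = e^{(k^2 - \alpha^2)/2}\,\sin(k\alpha)$, which vanishes whenever $\alpha$ is a multiple of $\pi$, can be spot-checked by Monte Carlo (my addition, not in the notes; only $k = 1$ is used, since the variance of $e^{kZ}\sin(\alpha Z)$ grows rapidly with $k$).

```python
import math
import random

random.seed(2)
n = 1_000_000
zs = [random.gauss(0.0, 1.0) for _ in range(n)]

def nu(k, alpha):
    """Monte Carlo estimate of ν_k = E[e^{kZ} sin(αZ)] for Z ~ N(0,1)."""
    return sum(math.exp(k * z) * math.sin(alpha * z) for z in zs) / n

def nu_exact(k, alpha):
    """Closed form from Example 5(B): e^{(k²-α²)/2} sin(kα)."""
    return math.exp((k * k - alpha * alpha) / 2) * math.sin(k * alpha)

print(nu(1, math.pi), nu_exact(1, math.pi))   # both ≈ 0: the perturbation leaves moments unchanged
print(nu(1, 1.0), nu_exact(1, 1.0))           # ≈ sin(1): nonzero when α is not a multiple of π
```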
Exercise 4. Use line 11 of Table 2 and the inversion formula (8) for densities to get line 12 (for simplicity take $\alpha = 1$). Show that $f(x) := \bigl(1 - \cos(x)\bigr)/(\pi x^2)$ is in fact a probability density on $\mathbf{R}$ and deduce that $\int_0^\infty \bigl(1 - \cos(x)\bigr)/x^2\,dx = \pi/2$.

Exercise 5 (An inversion formula for point masses). Let $X$ be a random variable with characteristic function $\varphi$. Show that for each $x \in \mathbf{R}$,

    $P[X = x] = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} e^{-itx}\,\varphi(t)\,dt.$    (19)

Deduce that if $\varphi(t) \to 0$ as $|t| \to \infty$, then $P[X = x] = 0$ for all $x \in \mathbf{R}$. [Hint for (19): write $\varphi(t)$ as $E(e^{itX})$, use Fubini's theorem, and the DCT (with justifications).]

Exercise 6 (An inversion formula for cdfs). Let $X$ be a random variable with distribution function $F$, density $f$, and characteristic function $\varphi$. (a) Suppose that $\varphi$ is integrable. Use the inversion formula (8) for densities to show that

    $F(b) - F(a) = \frac{1}{2\pi} \int \frac{e^{-ita} - e^{-itb}}{it}\,\varphi(t)\,dt$    (20)

for $-\infty < a < b < \infty$. [Hint: $\bigl(F(b) - F(a)\bigr)/(b - a)$ is the density at $b$ for the random variable $X + U$, where $U$ is uniformly distributed over $[0, b - a]$ independently of $X$.] (b) Now drop the assumption that $\varphi$ is integrable. Show that

    $F(b) - F(a) = \lim_{\sigma \to 0} \frac{1}{2\pi} \int \frac{e^{-ita} - e^{-itb}}{it}\,\varphi(t)\,e^{-\sigma^2 t^2/2}\,dt,$    (21)

provided that $F$ is continuous at $a$ and $b$.

Exercise 7 (The inversion formula for lattice distributions). Let $\varphi$ be the characteristic function of a random variable $X$ which takes almost all its values in the lattice $L_{a,h} = \{\,a + kh : k = 0, \pm 1, \pm 2, \ldots\,\}$. Show that

    $P[X = x] = \frac{h}{2\pi} \int_{-\pi/h}^{\pi/h} e^{-itx}\,\varphi(t)\,dt$    (22)

for each $x \in L_{a,h}$. Give an analogous formula for $S_n = \sum_{i=1}^n X_i$, where $X_1, \ldots, X_n$ are independent random variables, each distributed like $X$. [Hint: use Fubini's theorem.]

Exercise 8. Use the inversion formula (22) to recover the probability mass function of the Binomial$(n, p)$ distribution from its characteristic function $(pe^{it} + q)^n$.

Exercise 9. Use characteristic functions to show that the following families of distributions in Table 1 are closed under convolution: Degenerate, Binomial (fixed $p$), Poisson, Negative binomial (fixed $p$), Gamma (fixed $\alpha$), and Symmetric Cauchy.

A random variable $X$ is said to
have an infinitely divisible distribution if for each positive integer $n$ there exist iid random variables $X_{n,1}, \ldots, X_{n,n}$ whose sum $X_{n,1} + \cdots + X_{n,n}$ has the same distribution as $X$.

Exercise 10. Show that the Normal distributions, and all but one (which one?) of the families of distributions in the preceding exercise, are infinitely divisible.

Exercise 11. (a) Suppose $X$ and $Y$ are independent random variables, with densities $f$ and $g$ respectively. Show that $Z := Y - X$ has density

    $h(z) := \int f(x)\,g(x + z)\,dx.$    (23)
(b) Suppose the random variables $X$ and $Y$ in part (a) are each Gamma$(r, 1)$, for a positive integer $r$. Use (23) to show that $Z$ has density

    $h_r(z) := e^{-|z|} \sum_{j=0}^{r-1} \frac{\Gamma(r + j)}{\Gamma(r)\,\Gamma(r - j)\,j!}\,\frac{|z|^{r-1-j}}{2^{r+j}}.$    (24)

(c) Find the characteristic function of the unnormalized $t$-distribution with $\nu$ degrees of freedom, for an odd integer $\nu$.

Exercise 12. Let IT$_\alpha$ be the so-called inverse triangular distribution with parameter $\alpha$, having characteristic function $\varphi_\alpha(t) = (1 - |t|/\alpha)^+$ (see line 12 of Table 2). Show that the characteristic function $\varphi(t) = \bigl((1 - |t|)^+ + (1 - |t|/2)^+\bigr)/2$ of the mixture $\mu = (\text{IT}_1 + \text{IT}_2)/2$ agrees with the characteristic function $\psi$ of $\nu = \text{IT}_{4/3}$ on a nonempty open interval, even though $\mu \ne \nu$. Why doesn't this contradict Theorem 5?

Let $f$ be a continuous integrable function from the real line $\mathbf{R}$ to the complex plane $\mathbf{C}$. (Here integrable means $\int |f(x)|\,dx < \infty$.) The Fourier transform of $f$ is the function $\hat{f}$ from $\mathbf{R}$ to $\mathbf{C}$ defined by

    $\hat{f}(t) = \int e^{itx} f(x)\,dx$    (25)

for $-\infty < t < \infty$. If $f$ is a probability density, then $\hat{f}$ is its characteristic function; the goal of the next two exercises is to extend some of the things we know about characteristic functions to Fourier transforms.

Exercise 13. Let $f : \mathbf{R} \to \mathbf{C}$ be continuous and integrable and let $\sigma > 0$. Show that

    $\int f(x - z)\,\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-z^2/(2\sigma^2)}\,dz = \frac{1}{2\pi} \int e^{-itx}\,\hat{f}(t)\,e^{-\sigma^2 t^2/2}\,dt$    (26)

for all $x \in \mathbf{R}$. [Hint: by the inversion formula for densities, (26) is true when $f$ is a probability density (i.e., real, positive, and integrable); deduce the general case ($f$ complex and integrable) from the special one.]

Exercise 14. Let $f : \mathbf{R} \to \mathbf{C}$ be continuous and integrable. Show that if $\hat{f}$ is integrable, then

    $f(x) = \frac{1}{2\pi} \int e^{-itx}\,\hat{f}(t)\,dt$    (27)

for all $x \in \mathbf{R}$. [Hint: let $\sigma$ tend to 0 in (26); but beware, $f$ need not be bounded.]
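As a numerical companion to Exercises 7 and 8 (my addition, not part of the notes): the lattice inversion formula (22), applied with $a = 0$ and $h = 1$, recovers the Binomial$(n, p)$ mass function from $\varphi(t) = (pe^{it} + q)^n$. The midpoint rule over one full period is exact up to roundoff for a trigonometric polynomial such as this $\varphi$.

```python
import cmath
import math

def lattice_pmf(phi, x, h=1.0, n=20_000):
    """Midpoint-rule value of (h/2π) ∫_{-π/h}^{π/h} e^{-itx} φ(t) dt — formula (22)."""
    dt = (2 * math.pi / h) / n
    s = 0.0
    for k in range(n):
        t = -math.pi / h + (k + 0.5) * dt
        s += (cmath.exp(-1j * t * x) * phi(t)).real * dt   # imaginary parts cancel
    return h * s / (2 * math.pi)

N, p = 5, 0.3
phi_binom = lambda t: (p * cmath.exp(1j * t) + (1 - p)) ** N
est = lattice_pmf(phi_binom, 2)                    # P[X = 2] recovered from the cf
exact = math.comb(N, 2) * p**2 * (1 - p) ** 3      # binomial formula: 10 · 0.09 · 0.343
print(est, exact)  # both ≈ 0.3087
```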
Math 172 Problem Set 5 Solutions 2.4 Let E = {(t, x : < x b, x t b}. To prove integrability of g, first observe that b b b f(t b b g(x dx = dt t dx f(t t dtdx. x Next note that f(t/t χ E is a measurable
More informationThe Central Limit Theorem
The Central Limit Theorem (A rounding-corners overiew of the proof for a.s. convergence assuming i.i.d.r.v. with 2 moments in L 1, provided swaps of lim-ops are legitimate) If {X k } n k=1 are i.i.d.,
More information2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?
MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due 9/5). Prove that every countable set A is measurable and µ(a) = 0. 2 (Bonus). Let A consist of points (x, y) such that either x or y is
More information(1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define
Homework, Real Analysis I, Fall, 2010. (1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define ρ(f, g) = 1 0 f(x) g(x) dx. Show that
More informationMath 341: Probability Seventeenth Lecture (11/10/09)
Math 341: Probability Seventeenth Lecture (11/10/09) Steven J Miller Williams College Steven.J.Miller@williams.edu http://www.williams.edu/go/math/sjmiller/ public html/341/ Bronfman Science Center Williams
More informationApplications of the Maximum Principle
Jim Lambers MAT 606 Spring Semester 2015-16 Lecture 26 Notes These notes correspond to Sections 7.4-7.6 in the text. Applications of the Maximum Principle The maximum principle for Laplace s equation is
More informationGreen s Functions and Distributions
CHAPTER 9 Green s Functions and Distributions 9.1. Boundary Value Problems We would like to study, and solve if possible, boundary value problems such as the following: (1.1) u = f in U u = g on U, where
More informationExercises Measure Theoretic Probability
Exercises Measure Theoretic Probability 2002-2003 Week 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 8 10/1/2008 CONTINUOUS RANDOM VARIABLES
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 8 10/1/2008 CONTINUOUS RANDOM VARIABLES Contents 1. Continuous random variables 2. Examples 3. Expected values 4. Joint distributions
More informationGrading: 1, 2, 3, 4, 5 (each 8 pts) e ita R. e itx dµ(x)dt = 1 2T. cos(t(x a))dt dµ(x) T (x a) 1 2T. cos(t(x a))dt =
Grading:, 2, 3, 4, 5 (each 8 pts Problem : Since e ita =, by Fubini s theorem, I T = e ita ϕ(tdt = e ita e itx dµ(xdt = e it(x a dtdµ(x. Since e it(x a = cos(t(x a + i sin(t(x a and sin(t(x a is an odd
More informationSpring 2012 Math 541B Exam 1
Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote
More informationHarmonic Functions and Brownian Motion in Several Dimensions
Harmonic Functions and Brownian Motion in Several Dimensions Steven P. Lalley October 11, 2016 1 d -Dimensional Brownian Motion Definition 1. A standard d dimensional Brownian motion is an R d valued continuous-time
More informationSummary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016
8. For any two events E and F, P (E) = P (E F ) + P (E F c ). Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016 Sample space. A sample space consists of a underlying
More informationAsymptotic Statistics-III. Changliang Zou
Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (
More informationExercises from other sources REAL NUMBERS 2,...,
Exercises from other sources REAL NUMBERS 1. Find the supremum and infimum of the following sets: a) {1, b) c) 12, 13, 14, }, { 1 3, 4 9, 13 27, 40 } 81,, { 2, 2 + 2, 2 + 2 + } 2,..., d) {n N : n 2 < 10},
More informationExercises in Extreme value theory
Exercises in Extreme value theory 2016 spring semester 1. Show that L(t) = logt is a slowly varying function but t ǫ is not if ǫ 0. 2. If the random variable X has distribution F with finite variance,
More informationChapter 6. Characteristic functions
Chapter 6 Characteristic functions We define the characteristic function of a random variable X by ' X (t) E e x for t R. Note that ' X (t) R e x P X (dx). So if X and Y have the same law, they have the
More informationWiener Measure and Brownian Motion
Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u
More informationMath 4263 Homework Set 1
Homework Set 1 1. Solve the following PDE/BVP 2. Solve the following PDE/BVP 2u t + 3u x = 0 u (x, 0) = sin (x) u x + e x u y = 0 u (0, y) = y 2 3. (a) Find the curves γ : t (x (t), y (t)) such that that
More informationMATH 117 LECTURE NOTES
MATH 117 LECTURE NOTES XIN ZHOU Abstract. This is the set of lecture notes for Math 117 during Fall quarter of 2017 at UC Santa Barbara. The lectures follow closely the textbook [1]. Contents 1. The set
More informationPartial Differential Equations
Part II Partial Differential Equations Year 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2015 Paper 4, Section II 29E Partial Differential Equations 72 (a) Show that the Cauchy problem for u(x,
More information6.1 Moment Generating and Characteristic Functions
Chapter 6 Limit Theorems The power statistics can mostly be seen when there is a large collection of data points and we are interested in understanding the macro state of the system, e.g., the average,
More information1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 =
Chapter 5 Sequences and series 5. Sequences Definition 5. (Sequence). A sequence is a function which is defined on the set N of natural numbers. Since such a function is uniquely determined by its values
More informationMetric Spaces and Topology
Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies
More informationCourse: ESO-209 Home Work: 1 Instructor: Debasis Kundu
Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear
More informationReal Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi
Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.
More informationLecture 17 Brownian motion as a Markov process
Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is
More informationLecture 19: Properties of Expectation
Lecture 19: Properties of Expectation Dan Sloughter Furman University Mathematics 37 February 11, 4 19.1 The unconscious statistician, revisited The following is a generalization of the law of the unconscious
More information7 Convergence in R d and in Metric Spaces
STA 711: Probability & Measure Theory Robert L. Wolpert 7 Convergence in R d and in Metric Spaces A sequence of elements a n of R d converges to a limit a if and only if, for each ǫ > 0, the sequence a
More informationLimiting Distributions
We introduce the mode of convergence for a sequence of random variables, and discuss the convergence in probability and in distribution. The concept of convergence leads us to the two fundamental results
More informationImmerse Metric Space Homework
Immerse Metric Space Homework (Exercises -2). In R n, define d(x, y) = x y +... + x n y n. Show that d is a metric that induces the usual topology. Sketch the basis elements when n = 2. Solution: Steps
More informationRandom Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R
In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample
More informationChapter 5. Chapter 5 sections
1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions
More informationLecture 6: Special probability distributions. Summarizing probability distributions. Let X be a random variable with probability distribution
Econ 514: Probability and Statistics Lecture 6: Special probability distributions Summarizing probability distributions Let X be a random variable with probability distribution P X. We consider two types
More informationWe denote the space of distributions on Ω by D ( Ω) 2.
Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study
More informationMAS223 Statistical Inference and Modelling Exercises
MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,
More informationLecture 22: Variance and Covariance
EE5110 : Probability Foundations for Electrical Engineers July-November 2015 Lecture 22: Variance and Covariance Lecturer: Dr. Krishna Jagannathan Scribes: R.Ravi Kiran In this lecture we will introduce
More information17. Convergence of Random Variables
7. Convergence of Random Variables In elementary mathematics courses (such as Calculus) one speaks of the convergence of functions: f n : R R, then lim f n = f if lim f n (x) = f(x) for all x in R. This
More informationFormulas for probability theory and linear models SF2941
Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms
More information3 Integration and Expectation
3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ
More informationProblem List MATH 5143 Fall, 2013
Problem List MATH 5143 Fall, 2013 On any problem you may use the result of any previous problem (even if you were not able to do it) and any information given in class up to the moment the problem was
More informationChapter 5. Measurable Functions
Chapter 5. Measurable Functions 1. Measurable Functions Let X be a nonempty set, and let S be a σ-algebra of subsets of X. Then (X, S) is a measurable space. A subset E of X is said to be measurable if
More informationLecture 4: Fourier Transforms.
1 Definition. Lecture 4: Fourier Transforms. We now come to Fourier transforms, which we give in the form of a definition. First we define the spaces L 1 () and L 2 (). Definition 1.1 The space L 1 ()
More informationFolland: Real Analysis, Chapter 8 Sébastien Picard
Folland: Real Analysis, Chapter 8 Sébastien Picard Problem 8.3 Let η(t) = e /t for t >, η(t) = for t. a. For k N and t >, η (k) (t) = P k (/t)e /t where P k is a polynomial of degree 2k. b. η (k) () exists
More informationEcon 508B: Lecture 5
Econ 508B: Lecture 5 Expectation, MGF and CGF Hongyi Liu Washington University in St. Louis July 31, 2017 Hongyi Liu (Washington University in St. Louis) Math Camp 2017 Stats July 31, 2017 1 / 23 Outline
More informationOn an uniqueness theorem for characteristic functions
ISSN 392-53 Nonlinear Analysis: Modelling and Control, 207, Vol. 22, No. 3, 42 420 https://doi.org/0.5388/na.207.3.9 On an uniqueness theorem for characteristic functions Saulius Norvidas Institute of
More information1 Presessional Probability
1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional
More informationLecture 32: Taylor Series and McLaurin series We saw last day that some functions are equal to a power series on part of their domain.
Lecture 32: Taylor Series and McLaurin series We saw last day that some functions are equal to a power series on part of their domain. For example f(x) = 1 1 x = 1 + x + x2 + x 3 + = ln(1 + x) = x x2 2
More information