Fourier Transforms of Measures
STA 711: Probability & Measure Theory (Robert L. Wolpert)
Sums of Independent Random Variables

We begin our study of sums of independent random variables, S_n = X_1 + ⋯ + X_n. If each X_i is square integrable, with mean μ_i = EX_i and variance σ_i² = E[(X_i − μ_i)²], then S_n is square integrable too, with mean ES_n = μ = Σ_{i≤n} μ_i and variance VS_n = σ² = Σ_{i≤n} σ_i². But what about the actual probability distribution? If the X_i have density functions f_i(x_i) then so does S_n; for example, with n = 2, S = X_1 + X_2 has CDF F(s) and pdf f(s) = F′(s) given by

  P[S ≤ s] = F(s) = ∫∫_{x_1+x_2 ≤ s} f_1(x_1) f_2(x_2) dx_1 dx_2

  f(s) = F′(s) = ∫ f_1(s − x_2) f_2(x_2) dx_2 = ∫ f_1(x_1) f_2(s − x_1) dx_1,

the convolution of f_1(x_1) and f_2(x_2). Even if the distributions aren't absolutely continuous, so no pdfs exist, S has a distribution measure μ given by μ(ds) = ∫ μ_1(dx_1) μ_2(ds − x_1). There is an analogous formula for n = 3, but it is quite messy; things get worse and worse as n increases, so this is not a promising approach for studying the distribution of sums S_n for large n.

If CDFs and pdfs of sums of independent RVs are not simple, is there some other feature of the distributions that is? The answer is yes. What is simple about independent random variables is calculating expectations of products of the X_i, or products of any functions of the X_i; the exponential function lets us turn the partial sums S_n into products, e^{S_n} = ∏ e^{X_i} or, more generally, e^{zS_n} = ∏ e^{zX_i} for any real or complex number z. Thus for independent RVs X_i and any number z we can use independence to compute the expectation Ee^{zS_n} = ∏_{i=1}^n Ee^{zX_i}; the function M_X(z) = Ee^{zX} is called the moment generating function of the random variable X. For real z the function e^{zX} becomes huge if X becomes very large (for positive z) or very negative (for z < 0), so even for integrable or square-integrable random variables X the expectation M(z) = Ee^{zX} may be infinite.
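A quick numerical sanity check of the convolution formula (my own sketch, not part of the notes): for two independent Exponential(1) variables, the discretized convolution f_1 ∗ f_2 should reproduce the known density of the sum, the Gamma(2,1) density s e^{−s}.

```python
import numpy as np

dx = 0.002
x = np.arange(0.0, 25.0, dx)
f1 = np.exp(-x)            # Exponential(1) density on [0, infinity)
f2 = np.exp(-x)

# discrete version of f(s) = integral of f1(s - t) f2(t) dt
f_conv = np.convolve(f1, f2)[: len(x)] * dx

f_exact = x * np.exp(-x)   # Gamma(2,1) density, the known law of X1 + X2
max_err = np.max(np.abs(f_conv - f_exact))
```

The grid spacing controls the Riemann-sum error, so `max_err` shrinks roughly linearly in `dx`.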
Here are a few examples of Ee^{zX} for some familiar distributions:

  Binomial Bi(n, p):    [1 + p(e^z − 1)]^n,           z ∈ ℂ
  Neg Bin NB(α, p):     [1 − (p/q)(e^z − 1)]^{−α},    ℜ(z) < log(1/p)  (q = 1 − p)
  Poisson Po(λ):        e^{λ(e^z − 1)},               z ∈ ℂ
  Normal No(μ, σ²):     e^{zμ + z²σ²/2},              z ∈ ℂ
  Gamma Ga(α, λ):       (1 − λz)^{−α},                ℜ(z) < 1/λ
  Cauchy (pdf a/[π(a² + (x − b)²)]):  e^{zb − a|z|},  ℜ(z) = 0
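One entry of the chart can be verified directly (an illustration of mine, using nothing beyond the definitions above): summing the Poisson mass function against e^{zk} should reproduce the closed form e^{λ(e^z − 1)}.

```python
import math

def poisson_mgf_series(lam, z, terms=150):
    # E[e^{zX}] = sum over k of e^{z k} P[X = k], for X ~ Po(lam)
    return sum(math.exp(z * k) * lam ** k / math.factorial(k)
               for k in range(terms)) * math.exp(-lam)

lam, z = 3.0, 0.7
series = poisson_mgf_series(lam, z)
closed = math.exp(lam * (math.exp(z) - 1.0))   # the table entry for Po(lam)
```

Truncating at 150 terms is more than enough here, since the summands decay factorially.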
Aside from the problem that M(z) = Ee^{zX} may fail to exist for some z ∈ ℂ, the approach is promising: we can identify the probability distribution from M(z), and we can even find important features of the distribution directly from M. If we can justify interchanging the limits implicit in differentiation and integration, then M′(z) = E[Xe^{zX}] and M″(z) = E[X²e^{zX}], so (upon taking z = 0) M′(0) = EX = μ and M″(0) = EX² = σ² + μ²; thus we can calculate the mean and variance (and other moments EX^k = M^{(k)}(0)) from derivatives of M(z) at zero. We have two problems to overcome: discovering how to infer the distribution of X from M_X(z) = Ee^{zX}, and what to do about distributions for which M(z) doesn't exist.

Characteristic Functions

For complex numbers z = x + iy the exponential e^z can be given in terms of familiar real-valued transcendental functions as e^{x+iy} = e^x cos(y) + i e^x sin(y). Since both sin(y) and cos(y) are bounded by one, for any real-valued random variable X and real number ω the real and imaginary parts of the complex-valued random variable e^{iωX} are bounded and hence integrable; thus it always makes sense to define the characteristic function

  φ_X(ω) = Ee^{iωX} = ∫ e^{iωx} μ_X(dx).

Of course this is just φ_X(ω) = M_X(iω) when M_X exists, but φ_X(ω) exists even when M_X does not; in the chart above you'll notice that only the real part of z posed problems, and ℜ(z) = 0 was always OK, even for the Cauchy.

  Binomial Bi(n, p):    [1 + p(e^{iω} − 1)]^n
  Neg Bin NB(α, p):     [1 − (p/q)(e^{iω} − 1)]^{−α}
  Poisson Po(λ):        e^{λ(e^{iω} − 1)}
  Normal No(μ, σ²):     e^{iωμ − ω²σ²/2}
  Gamma Ga(α, λ):       (1 − iλω)^{−α}
  Cauchy (pdf (a/π)/[a² + (x − b)²]):  e^{iωb − a|ω|}

Uniqueness

Suppose that two probability distributions μ_1(A) = P[X_1 ∈ A] and μ_2(A) = P[X_2 ∈ A] have the same Fourier transforms μ̂_j(ω) = E[e^{iωX_j}] = ∫ e^{iωx} μ_j(dx); does it follow that X_1 and X_2 have the same probability distributions, i.e., that μ_1 = μ_2? The answer is yes; in fact, one can recover the measure μ explicitly from the function μ̂(ω).
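A small Monte Carlo illustration (mine, not from the notes) of the point that φ_X always exists: the standard Cauchy has no moment generating function, yet Ee^{iωX} is an average of bounded quantities and should come out near e^{−|ω|}.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_cauchy(size=2_000_000)   # heavy-tailed; no mean, no MGF

omega = 1.0
phi_mc = np.mean(np.exp(1j * omega * x))  # |e^{i*omega*X}| = 1, so this converges
phi_exact = np.exp(-abs(omega))           # e^{-|omega|}, the Cauchy ch.f. (a=1, b=0)
```

Because the integrand is bounded, the law of large numbers applies even though X itself has no finite moments.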
Thus we regard uniqueness as a corollary of the much stronger result, the Fourier Inversion Theorem. Resnick has lots of interesting results about characteristic functions in his Chapter 9, Grimmett and Stirzaker discuss related results in their Chapter 5, and Billingsley proves several versions of this theorem in his Section 26; I'm going to take a different approach, and stress the two special cases in which μ is discrete or has a density function, trying to make some connections with other encounters you might have had with Fourier transforms.
Inversion: Integer-valued Discrete Case

Notice that the integer-valued discrete distributions always satisfy φ(ω + 2π) = φ(ω) (and in particular are not integrable over ℝ), while the continuous ones satisfy φ(ω) → 0 as ω → ±∞. For integer-valued random variables X we can recover p_k = Pr[X = k] by inverting the Fourier series:

  φ(ω) = E[e^{iωX}] = Σ_k p_k e^{ikω},  so  p_k = (1/2π) ∫_{−π}^{π} e^{−ikω} φ(ω) dω.

Inversion: Continuous Random Variables

Now let's turn to the case of a distribution with a density function; first, two preliminaries. For any real or complex numbers a, b, c it is easy to compute (by completing the square) that

  ∫ e^{a + bx − cx²} dx = √(π/c) · e^{a + b²/4c}                                (1)

if c has positive real part, and otherwise the integral is infinite; in particular, for any ε > 0 the function γ_ε(x) ≡ (1/√(2πε)) e^{−x²/2ε} satisfies ∫ γ_ε(x) dx = 1 (it's just the normal probability density with mean 0 and variance ε).

Let μ(dx) = f(x) dx be any probability distribution with density function f(x) and ch.f. φ(ω) = μ̂(ω) = ∫ e^{iωx} f(x) dx. Then |φ(ω)| ≤ 1, so for any ε > 0 the function e^{−iyω − εω²/2} φ(ω) is integrable and we can compute

  (1/2π) ∫ e^{−iyω − εω²/2} φ(ω) dω
    = (1/2π) ∫ e^{−iyω − εω²/2} [∫ e^{ixω} f(x) dx] dω
    = (1/2π) ∫∫ e^{i(x−y)ω − εω²/2} f(x) dx dω
    = (1/2π) ∫ [∫ e^{i(x−y)ω − εω²/2} dω] f(x) dx                               (2)
    = (1/2π) ∫ [√(2π/ε) e^{−(x−y)²/2ε}] f(x) dx                                 (3)
    = ∫ (1/√(2πε)) e^{−(x−y)²/2ε} f(x) dx = (γ_ε ∗ f)(y) = (γ_ε ∗ μ)(y)

(where the interchange of orders of integration in (2) is justified by Fubini's theorem and the calculation in (3) by equation (1)), the convolution of the normal kernel γ_ε(·) with f(y). This converges uniformly to f(y) as ε → 0 if f(·) is bounded and continuous (the most common case), pointwise to [f(y−) + f(y+)]/2 if f(x) has a jump discontinuity at x = y, and to infinity if μ({y}) > 0, i.e., if Pr[X = y] > 0. This is the Fourier Inversion Formula for f(x): we can recover the density f(x) from its Fourier transform φ(ω) = μ̂(ω) by

  f(x) = (1/2π) ∫ e^{−iωx} φ(ω) dω,

if that integral exists, or otherwise as the limit f(x) = lim_{ε→0} (1/2π) ∫ e^{−iωx − εω²/2} φ(ω) dω.
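The inversion formula can be tested numerically (my own sketch): starting from the standard normal ch.f. φ(ω) = e^{−ω²/2}, a Riemann-sum version of f(x) = (1/2π) ∫ e^{−iωx} φ(ω) dω should recover the N(0,1) density.

```python
import numpy as np

omega = np.linspace(-40.0, 40.0, 400_001)   # wide, fine grid: phi is tiny by |omega|=10
d = omega[1] - omega[0]
phi = np.exp(-omega ** 2 / 2.0)             # ch.f. of N(0,1)

def invert(x):
    # Riemann-sum approximation of (1/2pi) * integral of e^{-i*omega*x} phi(omega)
    return (np.sum(np.exp(-1j * omega * x) * phi) * d).real / (2.0 * np.pi)

f0 = invert(0.0)   # should be close to the N(0,1) density at 0, 1/sqrt(2*pi)
f1 = invert(1.0)   # should be close to e^{-1/2}/sqrt(2*pi)
```

The integrand is smooth and decays rapidly, so even a plain Riemann sum is extremely accurate here.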
There are several interesting connections between the density function f(x) and the characteristic function φ(ω). If φ(ω) wiggles with rate approximately ξ, i.e., if φ(ω) ≈ a cos(ωξ) + b sin(ωξ) + c, then f(x) will have a spike at x = ξ and X will have a high probability of being close to ξ; if φ(ω) is very smooth (i.e., has well-behaved continuous derivatives of high order), then it does not have high-frequency wiggles and f(x) falls off quickly for large |x|, so E|X|^p < ∞ for large p. If φ(ω) falls off quickly as ω → ±∞, then φ(ω) doesn't have large low-frequency components and f(x) must be rather tame, without any spikes. Thus φ and f both capture information about the distribution, but from different perspectives. This is often useful, for the vague descriptions of this paragraph can be made precise:

Theorem 1. If ∫ |μ̂(ω)| dω < ∞, then μ_ε ≡ μ ∗ γ_ε converges a.s. to an L¹ function f(x), μ̂_ε(ω) converges uniformly to f̂(ω), and μ(A) = ∫_A f(x) dx. Also f(x) = (1/2π) ∫ e^{−iωx} μ̂(ω) dω for almost every x.

Theorem 2. For any μ and any a < b,

  μ((a, b)) + (1/2) μ({a, b}) = lim_{T→∞} ∫_{−T}^{T} [(e^{−iωa} − e^{−iωb}) / (2πiω)] μ̂(ω) dω.

Theorem 3. If ∫ |x|^k μ(dx) < ∞ for an integer k > 0, then μ̂(ω) has continuous derivatives of order k, given by

  μ̂^{(k)}(ω) = ∫ (ix)^k e^{iωx} μ(dx).

Conversely, if μ̂(ω) has a derivative of finite even order k at ω = 0, then ∫ |x|^k μ(dx) < ∞ and EX^k = ∫ x^k μ(dx) = (−1)^{k/2} μ̂^{(k)}(0).
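Theorem 3 can be checked numerically (an illustration of mine, using the Poisson ch.f. from the chart above): a centered finite difference approximates φ″(0), and −φ″(0) should equal EX² = λ + λ².

```python
import cmath

lam = 2.0

def phi(w):
    # ch.f. of Po(lam): exp(lam * (e^{i w} - 1))
    return cmath.exp(lam * (cmath.exp(1j * w) - 1.0))

h = 1e-4
second = (phi(h) - 2.0 * phi(0.0) + phi(-h)) / h ** 2   # approximates phi''(0)
ex2 = -second.real                                      # EX^2 = (-1)^{2/2} phi''(0) = lam + lam^2
```

The truncation error of the centered difference is O(h²), far below the tolerance used here.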
By Theorem 3 the first few moments of the distribution, if they exist, can be determined from derivatives of the characteristic function or its logarithm at zero: φ(0) = 1, φ′(0) = iE[X], φ″(0) = −E[X²], so

  Mean:     [log φ]′(0) = φ′(0)/φ(0) = iE[X] = iμ
  Variance: [log φ]″(0) = [φ″(0)φ(0) − (φ′(0))²] / φ(0)² = −E[X²] + (E[X])² = −σ²
  Etc.:     [log φ]‴(0) = O(E|X|³),

so by Taylor's theorem we have

  log φ(ω) = 0 + iμω − σ²ω²/2 + O(ω³),  i.e.,  φ(ω) ≈ e^{iμω − σ²ω²/2 + O(ω³)}.

Limits of Partial Sums

We'll need to re-center and re-scale the distribution of S_n = Σ_{i=1}^n X_i before we can hope to make sense of S_n's distribution for large n, so we'll need some facts about characteristic functions of linear combinations of independent RVs: for independent X and Y, and real numbers α, β, γ,

  φ_{α+βX+γY}(ω) = Ee^{iω(α+βX+γY)} = Ee^{iωα} · Ee^{iωβX} · Ee^{iωγY} = e^{iωα} φ_X(ωβ) φ_Y(ωγ).

In particular, for i.i.d. L² random variables X_i with characteristic function φ(t), the normalized sum [S_n − nμ]/√(nσ²) has characteristic function

  φ_S(ω) = ∏_{j=1}^n [φ(ω/√(nσ²)) e^{−iωμ/√(nσ²)}] = [φ(s) e^{−isμ}]^n,  where s = ω/√(nσ²),
         = e^{n[log φ(s) − isμ]}
with logarithm

  log φ_S(ω) = n[log φ(s) − isμ] = n[0 + iμs − σ²s²/2 + O(s³)] − nisμ
             = −nσ²(ω²/nσ²)/2 + O(n^{−1/2}) = −ω²/2 + O(n^{−1/2}),

so φ_S(ω) → e^{−ω²/2} for all ω and hence [S_n − nμ]/√(nσ²) ⇒ No(0, 1), the Central Limit Theorem.

Note: We assumed the X_i were i.i.d. with finite third moment; the method of proof really only requires E[X²] < ∞, and can be extended to the non-identically-distributed case (and even independence can be weakened), but S_n cannot converge to a normally-distributed limit if E[X²] = ∞; ask for details (or read Gnedenko & Kolmogorov) if you're interested.

Generalized Poisson Distributions

Let X_j have independent Poisson distributions with means ν_j and let u_j ∈ ℝ; then the ch.f. for Y ≡ Σ u_j X_j is

  φ_Y(ω) = ∏_j exp[ν_j(e^{iωu_j} − 1)] = exp[Σ_j (e^{iωu_j} − 1)ν_j] = exp[∫ (e^{iωu} − 1) ν(du)]

for the discrete measure ν(du) that assigns mass ν_j to each point u_j. Evidently we could take a limit using a sequence of discrete measures that converges to a continuous measure ν(du), so long as the integral makes sense, i.e. ∫ |e^{iωu} − 1| ν(du) < ∞; this will follow from the requirement that ∫ (1 ∧ |u|) ν(du) < ∞. Such a distribution is called Generalized Poisson; we'll now see that it includes an astonishingly large set of distributions.

  Distribution          Log Characteristic Function
  Poisson Po(λ):        λ(e^{iω} − 1) = ∫ (e^{iωu} − 1) λδ_1(du)
  Gamma Ga(α, λ):       −α log(1 − iλω) = ∫_0^∞ (e^{iωu} − 1) α e^{−u/λ} u^{−1} du
  Normal No(0, σ²):     −ω²σ²/2 = lim_{ε→0} [∫ (e^{iωu} − 1) (σ²/ε²) δ_ε(du) − iωσ²/ε]
  Neg Bin NB(α, p):     −α log[1 − (p/q)(e^{iω} − 1)] = ∫ (e^{iωu} − 1) ν(du),  ν(du) = Σ_{k=1}^∞ (αp^k/k) δ_k(du)
  Cauchy Ca(a, 0):      −a|ω| = ∫ (e^{iωu} − 1) ν(du),  ν(du) = (a/π) |u|^{−2} du
  Stable St(α, β, γ):   −γ|ω|^α [1 − iβ tan(πα/2) sgn(ω)] = ∫ (e^{iωu} − 1) ν(du),  ν(du) = c_α γ |u|^{−1−α} du,
                        where c_α = Γ(1 + α) sin(πα/2)/π (must adjust for β ≠ 0).

Try to verify the measures ν(du) for the Negative Binomial and Cauchy distributions. All these distributions share the property called
infinite divisibility: each can be written as a sum of n independent identically distributed pieces for every number n. Independently, the French probabilist Paul Lévy and the Russian probabilist A. Ya. Khinchine discovered that every distribution with this property must have a ch.f. of a very slightly more general form than that given above,

  log φ(ω) = iaω − σ²ω²/2 + ∫ [e^{iωu} − 1 − iω h(u)] ν(du),

where h(u) is any bounded continuous function that acts like u for u close to zero (for example, h(u) = arctan(u) or h(u) = sin(u) or h(u) = u/(1 + u²)). The measure ν(du) need not quite be finite, but we must have u² integrable near zero and 1 integrable away from zero; one way to write this is to require that ∫ (1 ∧ u²) ν(du) < ∞, another is to require ∫ [u²/(1 + u²)] ν(du) < ∞. Some authors consider the finite measure κ(du) = [u²/(1 + u²)] ν(du) and write

  log φ(ω) = iωa + ∫ [e^{iωu} − 1 − iω h(u)] [(1 + u²)/u²] κ(du),

where now the Gaussian component −σ²ω²/2 arises from a point mass for κ(du) of size σ² at u = 0. If |u| is locally integrable, i.e. if ∫_{−ε}^{ε} |u| ν(du) < ∞ for some (and hence every) ε > 0, then the term −iω h(u) is unnecessary (it can be absorbed into iωa). This always happens if ν((−∞, 0]) = 0, i.e. if ν is concentrated on the positive half-line. Every increasing stationary independent-increment stochastic process X_t has increments which are infinitely divisible, with ν concentrated on the positive half-line and no Gaussian component (σ² = 0), so they have the representation

  log φ(ω) = iωa + ∫ [e^{iωu} − 1] ν(du)

for some a and some measure ν on ℝ₊ satisfying ∫ (1 ∧ u) ν(du) < ∞. In the generalized Poisson example, ν(du) = Σ ν_j δ_{u_j}(du) was the sum of point masses of size ν_j at the possible jump magnitudes u_j. This interpretation extends to help us understand all ID distributions: every ID random variable X may be viewed as the sum of a constant, a Gaussian random variable, and a generalized Poisson random variable, the sum of independent Poisson jumps of sizes u ∈ E with rates ν(E).
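The generalized Poisson construction can be seen in a small simulation (my own sketch, with arbitrarily chosen jump sizes and rates): building Y = Σ u_j X_j from independent Poisson counts and comparing its empirical ch.f. with exp[Σ (e^{iωu_j} − 1)ν_j].

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.array([-1.0, 0.5, 2.0])    # jump sizes u_j (chosen arbitrarily for illustration)
nu = np.array([0.3, 1.2, 0.4])    # Poisson means nu_j: mass of nu(du) at each u_j

n = 1_000_000
counts = rng.poisson(nu, size=(n, len(u)))  # each row: independent Po(nu_j) counts
Y = counts @ u                              # Y = sum over j of u_j X_j

w = 0.8
phi_emp = np.mean(np.exp(1j * w * Y))
phi_theory = np.exp(np.sum((np.exp(1j * w * u) - 1.0) * nu))
```

With a million replicates the Monte Carlo error is on the order of 10⁻³, comfortably inside the tolerance below.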
Stable Limit Laws

Let S_n = X_1 + ⋯ + X_n be the partial sum of i.i.d. random variables. If the random variables are all square integrable, then the Central Limit Theorem applies and necessarily [S_n − nμ]/√(nσ²) ⇒ No(0, 1). But what if the X_n are not square integrable? Denote by F(x) = P[X_n ≤ x] the common CDF of the {X_n}.

Theorem (Stable Limit Law). There exist constants A_n > 0 and B_n and a distribution μ for which [S_n − A_n]/B_n ⇒ μ if and only if there are constants 0 < α ≤ 2, M₋ ≥ 0, and M₊ ≥ 0, with M₋ + M₊ > 0, such that as x → ∞ the following limits hold for every ξ > 0:

  1. F(−x)/[1 − F(x)] → M₋/M₊;
  2. If M₊ > 0: [1 − F(xξ)]/[1 − F(x)] → ξ^{−α};  if M₋ > 0: F(−xξ)/F(−x) → ξ^{−α}.

In this case the limit is the Stable Distribution with index α, with characteristic function

  E[e^{iωY}] = e^{iδω − γ|ω|^α [1 − iβ tan(πα/2) sgn(ω)]},

where β = (M₊ − M₋)/(M₊ + M₋) and γ = (M₊ + M₋). Note that the Cauchy distribution is the special case (α, β, γ, δ) = (1, 0, 1, 0) and the Normal distribution is the special case (α, β, γ, δ) = (2, 0, σ²/2, μ). Although each Stable distribution is absolutely continuous with a continuous probability density function f(y), these two cases and the inverse Gaussian distribution with α = 1/2 and β = ±1 are the only ones where the pdf can be given in closed form. Moments are easy enough to compute: for α < 2 the Stable distribution has finite moments only of order p < α and, in particular, none of them has a finite variance. The Cauchy has finite moments of order p < 1 but does not have a well-defined mean. Condition 2 says that each tail must fall off like a power (sometimes called Pareto tails), and the powers must be identical; Condition 1 gives the tail ratio. In a common special case M₋ = 0; for example, random variables X_n with the Pareto distribution (often used to model income) given by P[X_n > t] = (k/t)^α for t ≥ k will have a stable limit for their partial sums if α < 2, and (by the CLT) a normal limit if α ≥ 2. You can find out more details by reading Chapter 9 of Breiman's book.
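A simulation sketch of mine illustrating why CLT scaling fails at α = 1: the average of n standard Cauchy draws is again standard Cauchy (its quartiles are ±1), so the interquartile range of the sample mean stays near 2 no matter how large n is; there is no CLT-style concentration.

```python
import numpy as np

rng = np.random.default_rng(2)
reps = 5_000

def iqr_of_means(n):
    # empirical distribution of the sample mean of n standard Cauchy draws
    means = rng.standard_cauchy(size=(reps, n)).mean(axis=1)
    q25, q75 = np.percentile(means, [25, 75])
    return q75 - q25

iqr_small = iqr_of_means(10)      # IQR of a standard Cauchy is 2
iqr_large = iqr_of_means(1_000)   # ... and averaging more draws does not shrink it
```

Contrast this with a square-integrable case, where the same experiment would show the IQR shrinking like 1/√n.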
Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction
More informationLimiting Distributions
Limiting Distributions We introduce the mode of convergence for a sequence of random variables, and discuss the convergence in probability and in distribution. The concept of convergence leads us to the
More informationA Brief Analysis of Central Limit Theorem. SIAM Chapter Florida State University
1 / 36 A Brief Analysis of Central Limit Theorem Omid Khanmohamadi (okhanmoh@math.fsu.edu) Diego Hernán Díaz Martínez (ddiazmar@math.fsu.edu) Tony Wills (twills@math.fsu.edu) Kouadio David Yao (kyao@math.fsu.edu)
More informationwhere r n = dn+1 x(t)
Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution
More informationIEOR 3106: Introduction to Operations Research: Stochastic Models. Fall 2011, Professor Whitt. Class Lecture Notes: Thursday, September 15.
IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2011, Professor Whitt Class Lecture Notes: Thursday, September 15. Random Variables, Conditional Expectation and Transforms 1. Random
More informationStatistical inference on Lévy processes
Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline
More informationIEOR 3106: Introduction to Operations Research: Stochastic Models. Professor Whitt. SOLUTIONS to Homework Assignment 2
IEOR 316: Introduction to Operations Research: Stochastic Models Professor Whitt SOLUTIONS to Homework Assignment 2 More Probability Review: In the Ross textbook, Introduction to Probability Models, read
More informationGenerating Lévy Random Fields
Robert L Wolpert http://www.stat.duke.edu/ rlw/ Department of Statistical Science and Nicholas School of the Environment Duke University Durham, NC USA Århus University 2009 May 21 22 Outline Motivation:
More informationExam 3, Math Fall 2016 October 19, 2016
Exam 3, Math 500- Fall 06 October 9, 06 This is a 50-minute exam. You may use your textbook, as well as a calculator, but your work must be completely yours. The exam is made of 5 questions in 5 pages,
More informationWeek 9 The Central Limit Theorem and Estimation Concepts
Week 9 and Estimation Concepts Week 9 and Estimation Concepts Week 9 Objectives 1 The Law of Large Numbers and the concept of consistency of averages are introduced. The condition of existence of the population
More informationLecture 4: September Reminder: convergence of sequences
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 4: September 6 In this lecture we discuss the convergence of random variables. At a high-level, our first few lectures focused
More informationµ X (A) = P ( X 1 (A) )
1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration
More informationSlides 8: Statistical Models in Simulation
Slides 8: Statistical Models in Simulation Purpose and Overview The world the model-builder sees is probabilistic rather than deterministic: Some statistical model might well describe the variations. An
More informationlim F n(x) = F(x) will not use either of these. In particular, I m keeping reserved for implies. ) Note:
APPM/MATH 4/5520, Fall 2013 Notes 9: Convergence in Distribution and the Central Limit Theorem Definition: Let {X n } be a sequence of random variables with cdfs F n (x) = P(X n x). Let X be a random variable
More informationChap 2.1 : Random Variables
Chap 2.1 : Random Variables Let Ω be sample space of a probability model, and X a function that maps every ξ Ω, toa unique point x R, the set of real numbers. Since the outcome ξ is not certain, so is
More information18.175: Lecture 8 Weak laws and moment-generating/characteristic functions
18.175: Lecture 8 Weak laws and moment-generating/characteristic functions Scott Sheffield MIT 18.175 Lecture 8 1 Outline Moment generating functions Weak law of large numbers: Markov/Chebyshev approach
More informationCommon ontinuous random variables
Common ontinuous random variables CE 311S Earlier, we saw a number of distribution families Binomial Negative binomial Hypergeometric Poisson These were useful because they represented common situations:
More informationLecture 1 Measure concentration
CSE 29: Learning Theory Fall 2006 Lecture Measure concentration Lecturer: Sanjoy Dasgupta Scribe: Nakul Verma, Aaron Arvey, and Paul Ruvolo. Concentration of measure: examples We start with some examples
More informationMath 341: Probability Seventeenth Lecture (11/10/09)
Math 341: Probability Seventeenth Lecture (11/10/09) Steven J Miller Williams College Steven.J.Miller@williams.edu http://www.williams.edu/go/math/sjmiller/ public html/341/ Bronfman Science Center Williams
More informationf(s) e -i n π s/l d s
Pointwise convergence of complex Fourier series Let f(x) be a periodic function with period l defined on the interval [,l]. The complex Fourier coefficients of f( x) are This leads to a Fourier series
More information7 Random samples and sampling distributions
7 Random samples and sampling distributions 7.1 Introduction - random samples We will use the term experiment in a very general way to refer to some process, procedure or natural phenomena that produces
More informationLecture 5: Moment generating functions
Lecture 5: Moment generating functions Definition 2.3.6. The moment generating function (mgf) of a random variable X is { x e tx f M X (t) = E(e tx X (x) if X has a pmf ) = etx f X (x)dx if X has a pdf
More informationChapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University
Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real
More informationLecture 4: Stochastic Processes (I)
Miranda Holmes-Cerfon Applied Stochastic Analysis, Spring 215 Lecture 4: Stochastic Processes (I) Readings Recommended: Pavliotis [214], sections 1.1, 1.2 Grimmett and Stirzaker [21], section 9.4 (spectral
More informationSpectral representations and ergodic theorems for stationary stochastic processes
AMS 263 Stochastic Processes (Fall 2005) Instructor: Athanasios Kottas Spectral representations and ergodic theorems for stationary stochastic processes Stationary stochastic processes Theory and methods
More informationExponential Distribution and Poisson Process
Exponential Distribution and Poisson Process Stochastic Processes - Lecture Notes Fatih Cavdur to accompany Introduction to Probability Models by Sheldon M. Ross Fall 215 Outline Introduction Exponential
More informationThe Moment Method; Convex Duality; and Large/Medium/Small Deviations
Stat 928: Statistical Learning Theory Lecture: 5 The Moment Method; Convex Duality; and Large/Medium/Small Deviations Instructor: Sham Kakade The Exponential Inequality and Convex Duality The exponential
More informationSection 27. The Central Limit Theorem. Po-Ning Chen, Professor. Institute of Communications Engineering. National Chiao Tung University
Section 27 The Central Limit Theorem Po-Ning Chen, Professor Institute of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 3000, R.O.C. Identically distributed summands 27- Central
More informationCS145: Probability & Computing
CS45: Probability & Computing Lecture 5: Concentration Inequalities, Law of Large Numbers, Central Limit Theorem Instructor: Eli Upfal Brown University Computer Science Figure credits: Bertsekas & Tsitsiklis,
More informationReview 1: STAT Mark Carpenter, Ph.D. Professor of Statistics Department of Mathematics and Statistics. August 25, 2015
Review : STAT 36 Mark Carpenter, Ph.D. Professor of Statistics Department of Mathematics and Statistics August 25, 25 Support of a Random Variable The support of a random variable, which is usually denoted
More information