I am going to talk about algebraic calculations with random variables: how to add, subtract, multiply, and divide them. A main disadvantage is that complex analysis is used often, but I will skip those parts and focus on the rest. A previous post called Generating Functions and Transforms is assumed. From now on, assume the random variables X_1, X_2, ... are absolutely continuous (have pdfs) and independent. For the discrete case we use brute force, and it's not interesting. This post relies heavily on [1], but I find it really useful and maybe I will come back to add more examples.

Recall that the Fourier transform (also known as the characteristic function) of a random variable always exists, given by

F_X(t) = E(e^{itX}) = ∫_{-∞}^{∞} e^{itx} f(x) dx,

and also F_{X+Y}(t) = F_X(t) F_Y(t). In most cases, the inversion formula is given by

f(x) = (1/2π) ∫_{-∞}^{∞} e^{-itx} F_X(t) dt,

or equivalently,

F(x) = F(0) + (1/2π) ∫_{-∞}^{∞} [(1 - e^{-itx})/(it)] F_X(t) dt.

We use the above facts to reach the following formulas.

Theorem. (+, -)
(1) Define U = ∑_{j=1}^n X_j; then the pdf of U is given by

g(u) = (1/2π) ∫_{-∞}^{∞} e^{-itu} ∏_{j=1}^n F_{X_j}(t) dt.

(2) Define V = X_1 - X_2; then the pdf of V is given by

g(v) = (1/2π) ∫_{-∞}^{∞} e^{-itv} F_{X_1}(t) F_{X_2}(-t) dt.

You may ask why we don't use the convolution method given by

f_{X_1+X_2}(x) = ∫_{-∞}^{∞} f_1(x - x_2) f_2(x_2) dx_2;

it is applicable, but more cumbersome when you have many random variables to add. We can also use the Laplace transform, but then we have to assume the random variables are all nonnegative. In that case, the pdf of U is given by

g(u) = (1/2πi) ∫_{c-i∞}^{c+i∞} e^{ru} ∏_{j=1}^n L_{X_j}(r) dr.
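Before moving on, here is a minimal numerical sanity check of part (1) of the theorem, assuming numpy is available; the grid resolution and the truncation of the inversion integral to [-200, 200] are arbitrary choices of mine. It inverts the squared characteristic function of Uniform([0,1]) and compares the result against the known triangular density of the sum of two such variables.

```python
import numpy as np

# Characteristic function of Uniform([0,1]): F_X(t) = (e^{it} - 1)/(it),
# with the removable singularity handled by F_X(0) = 1.
def phi_uniform(t):
    t = np.asarray(t, dtype=complex)
    out = np.ones_like(t)
    nz = t != 0
    out[nz] = (np.exp(1j * t[nz]) - 1) / (1j * t[nz])
    return out

# Theorem (+, -), part (1), with n = 2: invert the product of the
# characteristic functions on a truncated grid (simple Riemann sum).
t = np.linspace(-200, 200, 400001)
dt = t[1] - t[0]
phi2 = phi_uniform(t) ** 2
for u in [0.25, 0.5, 1.0, 1.5]:
    g = np.sum(np.exp(-1j * t * u) * phi2).real * dt / (2 * np.pi)
    exact = u if u <= 1 else 2 - u   # triangular density on [0, 2]
    print(f"u={u}: inversion={g:.4f}, exact={exact:.4f}")
```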

Example. (sum of two rvs) Let X_1 ~ Uniform([0,1]) and X_2 ~ Exponential(1), and set W := X_1 + X_2. We calculate the Laplace transforms:

L_{X_1}(r) = ∫_0^1 e^{-rx} dx = (1 - e^{-r})/r,

L_{X_2}(r) = ∫_0^∞ e^{-rx} e^{-x} dx = 1/(r + 1)

(note that the upper limit in the first integral is 1). Therefore,

g(w) = (1/2πi) ∫_{c-i∞}^{c+i∞} e^{rw} (1 - e^{-r}) / (r(r + 1)) dr.

I should have mentioned earlier that the integral is calculated through the residue theorem of complex analysis. The integral above is a line integral, where you integrate in the complex plane along a vertical line, but we don't calculate it directly; we close the line with another curve to get a contour integral. We can prove that the integral over the added curve is 0, using Jordan's lemma, which is proved with the estimation theorem for complex integrals. To evaluate the contour integral, we apply the residue theorem, which reduces the integral to calculating residues; a residue is essentially the coefficient of (z - a)^{-1} in the Laurent expansion of the integrand at a singular point a.

Back to the example: we have g(w) = g_1(w) - g_2(w), where

g_1(w) = (1/2πi) ∫_{c-i∞}^{c+i∞} e^{rw} / (r(r + 1)) dr,

g_2(w) = (1/2πi) ∫_{c-i∞}^{c+i∞} e^{r(w-1)} / (r(r + 1)) dr.

Jordan's lemma applies when w > 0 for g_1 and when w > 1 for g_2; elsewhere they are 0. By the residue theorem,

g_1(w) = [e^{rw}/(r + 1)]_{r=0} + [e^{rw}/r]_{r=-1} = 1 - e^{-w},

g_2(w) = [e^{r(w-1)}/(r + 1)]_{r=0} + [e^{r(w-1)}/r]_{r=-1} = 1 - e^{-(w-1)}.

Hence we have

g(w) = 0 if w ≤ 0;  1 - e^{-w} if 0 < w ≤ 1;  e^{-(w-1)} - e^{-w} if w > 1.   (1)
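Formula (1) is easy to check by simulation. Below is a quick Monte Carlo comparison, assuming numpy; the sample size, seed, and binning are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
w = rng.uniform(0, 1, N) + rng.exponential(1.0, N)   # W = X_1 + X_2

# Piecewise density from formula (1)
def g(w):
    w = np.asarray(w, dtype=float)
    return np.where(w <= 0, 0.0,
                    np.where(w <= 1, 1 - np.exp(-w),
                             np.exp(-(w - 1)) - np.exp(-w)))

# Compare an empirical histogram density of W against formula (1)
counts, edges = np.histogram(w, bins=np.linspace(0, 8, 81))
dens = counts / (N * np.diff(edges))
mids = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(dens - g(mids))))   # small for a million samples
```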

Now we look at products and quotients. First we assume X_1, X_2, ... to be nonnegative; the general case comes later. Recall that the Mellin transform is given by

M_X(s) = ∫_0^∞ x^{s-1} f(x) dx,

and its inversion by

f(x) = (1/2πi) lim_{T→∞} ∫_{c-iT}^{c+iT} x^{-s} M_X(s) ds;

also M_{XY}(s) = M_X(s) M_Y(s).

The first method is similar to the convolution you already know. By a few manipulations with a Jacobian and a change of variables, the distribution of U = X_1 X_2 is given by

h(u) = ∫_0^∞ (1/x_2) f_1(u/x_2) f_2(x_2) dx_2 = ∫_0^∞ (1/x_1) f_1(x_1) f_2(u/x_1) dx_1,

which is called the Mellin convolution. Similarly, if V = X_1/X_2, then its distribution is given by

h(v) = ∫_0^∞ x_2 f_1(v x_2) f_2(x_2) dx_2.

However, these may be too complicated for many random variables, hence we have the following:

Theorem. (×, ÷)
(1) Define U = ∏_{j=1}^n X_j; then the pdf of U is given by

g(u) = (1/2πi) ∫_{c-i∞}^{c+i∞} u^{-s} ∏_{j=1}^n M_{X_j}(s) ds.

(2) Define V = X_1/X_2; then the pdf of V is given by

g(v) = (1/2πi) ∫_{c-i∞}^{c+i∞} v^{-s} M_{X_1}(s) M_{X_2}(2 - s) ds.

Division is not as intuitive as subtraction. Consider Y = X^a for general a:

M_Y(s) = E(Y^{s-1}) = ∫_0^∞ x^{a(s-1)} f(x) dx = M_X(as - a + 1).

Letting a = -1 yields the desired result.

However, in practice the calculation is very involved, as you may see in [1]. I only give two relatively simple examples.
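But first, to make the Mellin convolution concrete, here is a small numerical sketch, assuming scipy is available. It integrates the Mellin convolution for two Exponential(1) densities directly and compares the result with the known closed form 2 K_0(2√u) for the product of two independent Exp(1) variables (a Bessel-function identity not derived in this post).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

# Mellin convolution h(u) = ∫_0^∞ (1/x) f_1(x) f_2(u/x) dx
# with f_1(x) = f_2(x) = e^{-x} (two Exponential(1) densities).
def h(u):
    val, _ = quad(lambda x: np.exp(-x - u / x) / x, 0, np.inf)
    return val

# Known closed form for the product of two Exp(1) variables: 2 K_0(2 sqrt(u))
for u in [0.1, 0.5, 1.0, 2.0]:
    print(u, h(u), 2 * k0(2 * np.sqrt(u)))
```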

Example. (×, ÷) Let X_1, ..., X_n ~ Uniform([0,1]) be independent.

(1) Let U := ∏_{j=1}^n X_j. The Mellin transform of each X_i is

M_{X_i}(s) = ∫_0^1 x^{s-1} dx = 1/s,

hence by part (1) of the theorem,

g(u) = (1/2πi) ∫_{c-i∞}^{c+i∞} u^{-s} s^{-n} ds.

We apply Jordan's lemma and pick up the residue at the pole of order n at s = 0:

g(u) = (1/(n-1)!) [d^{n-1}/ds^{n-1} (u^{-s})]_{s=0} = (ln(1/u))^{n-1} / (n-1)!,  for 0 < u ≤ 1.

(2) Let V := X_1/X_2. By part (2) of the theorem we have

g(v) = (1/2πi) ∫_{c-i∞}^{c+i∞} v^{-s} / (s(2 - s)) ds;

now we have to separate cases according to Jordan's lemma, and different contours are involved! I will directly give you the result. For v ≤ 1,

g(v) = [v^{-s}/(2 - s)]_{s=0} = 1/2;

for v > 1,

g(v) = [v^{-s}/s]_{s=2} = 1/(2v^2).

Now, what about random variables taking negative values? Our trick is to split the density into two parts: for a density function f(x) we define

f^+(x) = 0 if x < 0, and f(x) if x ≥ 0;   (2)

f^-(x) = 0 if x ≥ 0, and f(x) if x < 0.

Naturally we have f = f^+ + f^-. (If you have read the Lebesgue integration part, note that this is different from the so-called positive and negative parts of a function.) Now consider the product XY. If the product is positive, there are two cases: X > 0, Y > 0 and X < 0, Y < 0; similarly when the product is negative. Without loss of generality we discuss the case where XY is positive. Recall the Mellin convolution:

h(u) = ∫ (1/x_1) f_1(x_1) f_2(u/x_1) dx_1 = ∫ (1/x_1) (f_1^+(x_1) + f_1^-(x_1)) (f_2^+(u/x_1) + f_2^-(u/x_1)) dx_1.   (3)
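Both results of the example above are easy to check by simulation; a rough sketch assuming numpy follows, with sample size, seed, and bins chosen arbitrarily by me.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
N, n = 1_000_000, 3

# (1) Product of n = 3 uniforms: density (ln(1/u))^{n-1} / (n-1)!
u = rng.uniform(0, 1, (N, n)).prod(axis=1)
counts, edges = np.histogram(u, bins=np.linspace(0.1, 1, 91))
dens = counts / (N * np.diff(edges))          # empirical density per bin
mids = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(dens - np.log(1 / mids) ** (n - 1) / factorial(n - 1))))

# (2) Quotient of two uniforms: density 1/2 on (0,1], 1/(2v^2) for v > 1
v = rng.uniform(0, 1, N) / rng.uniform(0, 1, N)
print(np.mean(v <= 1))              # P(V <= 1) = 1/2
print(np.mean((v > 1) & (v <= 2)))  # P(1 < V <= 2) = ∫_1^2 dv/(2v^2) = 1/4
```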

Considering the two cases, they correspond to

h(u) = ∫_0^∞ (1/x_1) (f_1^+(x_1) f_2^+(u/x_1) + f_1^-(-x_1) f_2^-(-u/x_1)) dx_1,

and now we can use the Mellin transform.

The last task is when we have both addition and multiplication of random variables. Since we use different techniques for adding and for multiplying, we must transform back and forth between the different transforms.

Theorem. (transforming between transforms) Under certain circumstances,

M_X(s) = (1/Γ(1 - s)) ∫_0^∞ r^{-s} L_X(r) dr,

L_X(r) = (1/2πi) ∫_{c-i∞}^{c+i∞} M_X(s) Γ(1 - s) r^{s-1} ds,

F_X(t) = (1/2πi) ∫_{c-i∞}^{c+i∞} M_X(s) Γ(1 - s) (-it)^{s-1} ds.

Bonus

In [1], the author introduces the H-function, but I doubt whether we can use it in practice. The H-function gives a generalized form of distributions which makes it convenient to multiply random variables, because there is a formula for multiplying H-function random variables. However, the proof of the formula requires complex analysis, and I skip it.

Definition. (H-function) The H-function is given by

H^{m,n}_{p,q}[z | (a_1, α_1), ..., (a_p, α_p) ; (b_1, β_1), ..., (b_q, β_q)] =: H(z)
= (1/2πi) ∫_{c-i∞}^{c+i∞} [∏_{j=1}^m Γ(b_j - β_j s) ∏_{j=1}^n Γ(1 - a_j + α_j s)] / [∏_{j=m+1}^q Γ(1 - b_j + β_j s) ∏_{j=n+1}^p Γ(a_j - α_j s)] z^s ds,

where 0 ≤ m ≤ q; 0 ≤ n ≤ p; α_j > 0, β_j > 0; a_j, b_j ∈ C.

The H-function distribution is given by

f(x) = k H(cx) if x ≥ 0, and 0 otherwise.   (4)
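The contour integral in the definition can be evaluated numerically. Below is a sketch using mpmath, with the contour Re(s) = -1/2 chosen by me so that the poles of Γ(-s) at s = 0, 1, 2, ... lie to its right. For H^{1,0}_{0,1}[z | (0, 1)] the integrand reduces to Γ(-s) z^s, and summing the residues gives e^{-z}; this is exactly the identity behind the exponential example below.

```python
from mpmath import mp, gamma, quad, inf, pi, exp

mp.dps = 15

# H^{1,0}_{0,1}[z | (0,1)]: integrand is Gamma(-s) z^s. Integrate along the
# vertical line s = c + iy with c = -1/2, so the poles of Gamma(-s) at
# s = 0, 1, 2, ... all lie to the right of the contour.
def H_10_01(z, c=-0.5):
    f = lambda y: gamma(-(c + 1j * y)) * mp.power(z, c + 1j * y)
    # (1/2πi) ∫_{c-i∞}^{c+i∞} ... ds  =  (1/2π) ∫_{-∞}^{∞} ... dy
    return quad(f, [-inf, inf]).real / (2 * pi)

for z in [0.5, 1.0, 2.0]:
    print(z, H_10_01(z), exp(-z))   # matches e^{-z}
```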

The characteristic function of the H-function distribution is given by

ϕ(t) = (k/c) H^{n+1,m}_{q,p+1}[ -it/c | (1 - b_1 - β_1, β_1), ..., (1 - b_q - β_q, β_q) ; (0, 1), (1 - a_1 - α_1, α_1), ..., (1 - a_p - α_p, α_p) ],

and the moments can be derived from the following formula:

m_r = (k / c^{r+1}) [∏_{j=1}^m Γ(b_j + β_j + β_j r) ∏_{j=1}^n Γ(1 - a_j - α_j - α_j r)] / [∏_{j=m+1}^q Γ(1 - b_j - β_j - β_j r) ∏_{j=n+1}^p Γ(a_j + α_j + α_j r)].

Common continuous distributions with X > 0 can be represented as H-function distributions. For example, the exponential distribution with parameter λ is

λ H^{1,0}_{0,1}[λx | (0, 1)].

Theorem. (product of H-function rvs) The product of independent random variables with distributions

f_j(x_j) = k_j H^{m_j,n_j}_{p_j,q_j}[c_j x_j | (a_{j1}, α_{j1}), ..., (a_{jp_j}, α_{jp_j}) ; (b_{j1}, β_{j1}), ..., (b_{jq_j}, β_{jq_j})] if x_j ≥ 0, and 0 otherwise,   (5)

has distribution

h(y) = (∏_{j=1}^n k_j) H^{∑m_j, ∑n_j}_{∑p_j, ∑q_j}[ (∏_{j=1}^n c_j) y | (a_{11}, α_{11}), ..., (a_{np_n}, α_{np_n}) ; (b_{11}, β_{11}), ..., (b_{nq_n}, β_{nq_n}) ] if y ≥ 0, and 0 otherwise.   (6)

References:
[1] Springer, M. D. (1979). The Algebra of Random Variables. New York, NY: Wiley.