Chapter 2. General Random Variables


2.1 Law of a Random Variable

Thus far we have considered only random variables whose domain and range are discrete. We now consider a general random variable $X:\Omega\to\mathbb{R}$ defined on the probability space $(\Omega,\mathcal{F},\mathbb{P})$. Recall that $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$, and $\mathbb{P}$ is a probability measure on $\mathcal{F}$, i.e., $\mathbb{P}(A)$ is defined for every $A\in\mathcal{F}$.

A function $X:\Omega\to\mathbb{R}$ is a random variable if and only if for every $B\in\mathcal{B}(\mathbb{R})$ (the $\sigma$-algebra of Borel subsets of $\mathbb{R}$), the set

$$\{X\in B\}\;\triangleq\;X^{-1}(B)\;\triangleq\;\{\omega:\,X(\omega)\in B\}\in\mathcal{F},$$

i.e., $X:\Omega\to\mathbb{R}$ is a random variable if and only if $X^{-1}$ is a function from $\mathcal{B}(\mathbb{R})$ to $\mathcal{F}$. (See Fig. 2.1.)

Thus any random variable $X$ induces a measure $\mu_X$ on the measurable space $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ defined by

$$\mu_X(B)=\mathbb{P}\left(X^{-1}(B)\right)\qquad\forall B\in\mathcal{B}(\mathbb{R}),$$

where the probability on the right is defined since $X^{-1}(B)\in\mathcal{F}$. $\mu_X$ is often called the law of $X$; in Williams' book it is denoted by $\mathcal{L}_X$.

2.2 Density of a Random Variable

The density of $X$ (if it exists) is a function $f_X:\mathbb{R}\to[0,\infty)$ such that

$$\mu_X(B)=\int_B f_X(x)\,dx\qquad\forall B\in\mathcal{B}(\mathbb{R}).$$
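As a quick numerical illustration (not part of the original text): for $X$ standard normal and an assumed Borel set $B=[a,b]$, the law $\mu_X(B)=\int_B f_X(x)\,dx$ must agree with the probability computed from the distribution function. A minimal sketch using scipy:

```python
# Minimal sketch: law vs. density for X ~ N(0, 1) on B = [a, b] (assumed).
from scipy import integrate, stats

a, b = -1.0, 2.0                      # an example Borel set B = [a, b]
f_X = stats.norm.pdf                  # density of X ~ N(0, 1)

law_B, _ = integrate.quad(f_X, a, b)  # mu_X(B) = \int_B f_X(x) dx
prob_B = stats.norm.cdf(b) - stats.norm.cdf(a)  # IP{X in B}
print(law_B, prob_B)                  # both ~= 0.8186
```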

[Figure 2.1: Illustrating a real-valued random variable $X:\Omega\to\mathbb{R}$.]

We then write

$$d\mu_X(x)=f_X(x)\,dx,$$

where the integral is with respect to the Lebesgue measure on $\mathbb{R}$. $f_X$ is the Radon-Nikodym derivative of $\mu_X$ with respect to the Lebesgue measure. Thus $X$ has a density if and only if $\mu_X$ is absolutely continuous with respect to Lebesgue measure, which means that whenever $B\in\mathcal{B}(\mathbb{R})$ has Lebesgue measure zero, then $\mathbb{P}\{X\in B\}=0$.

2.3 Expectation

Theorem 3.3 (Expectation of a function of $X$) Let $h:\mathbb{R}\to\mathbb{R}$ be given. Then

$$\mathbb{E}\,h(X)\;\triangleq\;\int_\Omega h(X(\omega))\,d\mathbb{P}(\omega)\;=\;\int_{\mathbb{R}}h(x)\,d\mu_X(x)\;=\;\int_{\mathbb{R}}h(x)f_X(x)\,dx.$$

Proof: (Sketch). If $h(x)=\mathbb{1}_B(x)$ for some $B\subset\mathbb{R}$, then these equations are

$$\mathbb{E}\,\mathbb{1}_B(X)\;\triangleq\;\mathbb{P}\{X\in B\}\;=\;\mu_X(B)\;=\;\int_B f_X(x)\,dx,$$

which are true by definition. Now use the standard machine to get the equations for general $h$.
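The theorem lends itself to a simulation check. A hedged sketch, where the choices $X\sim N(0,1)$ and $h(x)=x^2$ are assumptions for illustration: averaging $h(X(\omega))$ over sampled $\omega$ should match $\int h(x)f_X(x)\,dx$.

```python
# Sketch: IE h(X) two ways, Monte Carlo vs. \int h(x) f_X(x) dx.
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)
h = lambda x: x**2                    # example h (an assumption)

mc = h(rng.standard_normal(1_000_000)).mean()   # \int h(X(w)) dIP(w), approximated
qd, _ = integrate.quad(lambda x: h(x) * stats.norm.pdf(x), -np.inf, np.inf)
print(mc, qd)                         # both ~= 1.0 (the variance of X)
```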

[Figure 2.2: Two real-valued random variables $X,Y:\Omega\to\mathbb{R}$.]

2.4 Two random variables

Let $X,Y$ be two random variables $\Omega\to\mathbb{R}$ defined on the space $(\Omega,\mathcal{F},\mathbb{P})$. Then $X,Y$ induce a measure on $\mathcal{B}(\mathbb{R}^2)$ (see Fig. 2.2) called the joint law of $(X,Y)$, defined by

$$\mu_{X,Y}(C)\;\triangleq\;\mathbb{P}\{(X,Y)\in C\}\qquad\forall C\in\mathcal{B}(\mathbb{R}^2).$$

The joint density of $(X,Y)$ is a function $f_{X,Y}:\mathbb{R}^2\to[0,\infty)$ that satisfies

$$\mu_{X,Y}(C)=\iint_C f_{X,Y}(x,y)\,dx\,dy\qquad\forall C\in\mathcal{B}(\mathbb{R}^2).$$

$f_{X,Y}$ is the Radon-Nikodym derivative of $\mu_{X,Y}$ with respect to the Lebesgue measure (area) on $\mathbb{R}^2$. We compute the expectation of a function of $X,Y$ in a manner analogous to the univariate case:

$$\mathbb{E}\,k(X,Y)\;\triangleq\;\int_\Omega k(X(\omega),Y(\omega))\,d\mathbb{P}(\omega)\;=\;\iint_{\mathbb{R}^2}k(x,y)\,d\mu_{X,Y}(x,y)\;=\;\iint_{\mathbb{R}^2}k(x,y)f_{X,Y}(x,y)\,dx\,dy.$$
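The same check works in two dimensions. The sketch below (independent standard normals and $k(x,y)=xy+y^2$ are assumed choices) compares a Monte Carlo estimate of $\mathbb{E}\,k(X,Y)$ with the double integral against the joint density:

```python
# Sketch: IE k(X, Y) via sampling vs. \iint k(x, y) f_{X,Y}(x, y) dx dy.
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(1)
k = lambda x, y: x * y + y**2
f = lambda x, y: stats.norm.pdf(x) * stats.norm.pdf(y)  # joint density (independent case)

x, y = rng.standard_normal((2, 1_000_000))
mc = k(x, y).mean()

# dblquad integrates func(y, x); the box [-8, 8]^2 carries essentially all the mass.
dbl, _ = integrate.dblquad(lambda y_, x_: k(x_, y_) * f(x_, y_),
                           -8, 8, lambda _: -8, lambda _: 8)
print(mc, dbl)                        # both ~= 1.0
```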

2.5 Marginal Density

Suppose $(X,Y)$ has joint density $f_{X,Y}$. Let $B\subset\mathbb{R}$ be given. Then

$$\mu_Y(B)=\mathbb{P}\{Y\in B\}=\mathbb{P}\{(X,Y)\in\mathbb{R}\times B\}=\mu_{X,Y}(\mathbb{R}\times B)=\int_B\int_{\mathbb{R}}f_{X,Y}(x,y)\,dx\,dy=\int_B f_Y(y)\,dy,$$

where

$$f_Y(y)\;\triangleq\;\int_{\mathbb{R}}f_{X,Y}(x,y)\,dx.$$

Therefore, $f_Y(y)$ is the (marginal) density for $Y$.

2.6 Conditional Expectation

Suppose $(X,Y)$ has joint density $f_{X,Y}$. Let $h:\mathbb{R}\to\mathbb{R}$ be given. Recall that $\mathbb{E}[h(X)|Y]\triangleq\mathbb{E}[h(X)|\sigma(Y)]$ depends on $\omega$ through $Y$, i.e., there is a function $g(y)$ ($g$ depending on $h$) such that

$$\mathbb{E}[h(X)|Y](\omega)=g(Y(\omega)).$$

How do we determine $g$? We can characterize $g$ using partial averaging. Recall that $A\in\sigma(Y)\iff A=\{Y\in B\}$ for some $B\in\mathcal{B}(\mathbb{R})$. Then the following are equivalent characterizations of $g$:

$$\int_A g(Y)\,d\mathbb{P}=\int_A h(X)\,d\mathbb{P}\qquad\forall A\in\sigma(Y),\tag{6.1}$$

$$\int_\Omega\mathbb{1}_B(Y)\,g(Y)\,d\mathbb{P}=\int_\Omega\mathbb{1}_B(Y)\,h(X)\,d\mathbb{P}\qquad\forall B\in\mathcal{B}(\mathbb{R}),\tag{6.2}$$

$$\int_{\mathbb{R}}\mathbb{1}_B(y)\,g(y)\,\mu_Y(dy)=\iint_{\mathbb{R}^2}\mathbb{1}_B(y)\,h(x)\,d\mu_{X,Y}(x,y)\qquad\forall B\in\mathcal{B}(\mathbb{R}),\tag{6.3}$$

$$\int_B g(y)f_Y(y)\,dy=\int_B\int_{\mathbb{R}}h(x)f_{X,Y}(x,y)\,dx\,dy\qquad\forall B\in\mathcal{B}(\mathbb{R}).\tag{6.4}$$
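A numerical sketch tying together the marginal formula and the partial-averaging characterization (6.4); all choices are assumptions for illustration: a unit-variance bivariate normal pair with $\rho=0.6$, $h(x)=x$, $B=[0,1]$, and the candidate $g(y)=\rho y$, which the example in Section 2.7 derives in closed form.

```python
# Sketch: marginal f_Y by integrating out x, then both sides of (6.4).
import numpy as np
from scipy import integrate, stats

rho = 0.6
joint = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])

# Marginal density (Section 2.5): f_Y(y) = \int f_{X,Y}(x, y) dx ~= N(0, 1) pdf.
y0 = 0.7
f_Y_num, _ = integrate.quad(lambda x: joint.pdf([x, y0]), -np.inf, np.inf)
print(f_Y_num, stats.norm.pdf(y0))            # both ~= 0.3123

# Partial averaging (6.4) with h(x) = x, B = [0, 1], candidate g(y) = rho * y:
g = lambda y: rho * y
lhs, _ = integrate.quad(lambda y: g(y) * stats.norm.pdf(y), 0, 1)
rhs, _ = integrate.dblquad(lambda x_, y_: x_ * joint.pdf([x_, y_]),
                           0, 1, lambda _: -10, lambda _: 10)
print(lhs, rhs)                               # both ~= 0.0942
```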

2.7 Conditional Density

A function $f_{X|Y}(x|y):\mathbb{R}^2\to[0,\infty)$ is called a conditional density for $X$ given $Y$ provided that for any function $h:\mathbb{R}\to\mathbb{R}$,

$$g(y)=\int_{\mathbb{R}}h(x)f_{X|Y}(x|y)\,dx.\tag{7.1}$$

(Here $g$ is the function satisfying $\mathbb{E}[h(X)|Y]=g(Y)$; $g$ depends on $h$, but $f_{X|Y}$ does not.)

Theorem 7.33 If $(X,Y)$ has a joint density $f_{X,Y}$, then

$$f_{X|Y}(x|y)=\frac{f_{X,Y}(x,y)}{f_Y(y)}.\tag{7.2}$$

Proof: Just verify that $g$ defined by (7.1) satisfies (6.4): for $B\in\mathcal{B}(\mathbb{R})$,

$$\int_B\underbrace{\int_{\mathbb{R}}h(x)f_{X|Y}(x|y)\,dx}_{g(y)}\,f_Y(y)\,dy=\int_B\int_{\mathbb{R}}h(x)f_{X,Y}(x,y)\,dx\,dy.$$

Notation. Let $g$ be the function satisfying $\mathbb{E}[h(X)|Y]=g(Y)$. The function $g$ is often written as $g(y)=\mathbb{E}[h(X)|Y=y]$, and (7.1) becomes

$$\mathbb{E}[h(X)|Y=y]=\int_{\mathbb{R}}h(x)f_{X|Y}(x|y)\,dx.$$

In conclusion, to determine $\mathbb{E}[h(X)|Y]$ (a function of $\omega$), first compute

$$g(y)=\int_{\mathbb{R}}h(x)f_{X|Y}(x|y)\,dx,$$

and then replace the dummy variable $y$ by the random variable $Y$:

$$\mathbb{E}[h(X)|Y](\omega)=g(Y(\omega)).$$
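A sketch of Theorem 7.33 in numbers, under the same assumed unit-variance bivariate normal with $\rho=0.6$: the ratio $f_{X,Y}/f_Y$ should integrate to 1 in $x$ for each fixed $y$, and with $h(x)=x$ it should reproduce $g(y)=\rho y$.

```python
# Sketch: f_{X|Y}(x|y) = f_{X,Y}(x, y) / f_Y(y), eq. (7.2), checked at one y.
import numpy as np
from scipy import integrate, stats

rho = 0.6
joint = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
f_Y = stats.norm.pdf                           # marginal of Y (unit variance)

def f_cond(x, y):                              # conditional density via (7.2)
    return joint.pdf([x, y]) / f_Y(y)

y0 = 0.7
total, _ = integrate.quad(lambda x: f_cond(x, y0), -10, 10)
g_y0, _ = integrate.quad(lambda x: x * f_cond(x, y0), -10, 10)
print(total)                                   # ~= 1.0: a genuine density in x
print(g_y0, rho * y0)                          # E[X | Y = y0] ~= rho * y0 = 0.42
```

This matches the closed form derived in the example that follows.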

Example (Jointly normal random variables). Given parameters $\sigma_1>0$, $\sigma_2>0$, $-1<\rho<1$, let $(X,Y)$ have the joint density

$$f_{X,Y}(x,y)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2(1-\rho^2)}\left(\frac{x^2}{\sigma_1^2}-2\rho\,\frac{xy}{\sigma_1\sigma_2}+\frac{y^2}{\sigma_2^2}\right)\right\}.$$

The exponent is

$$-\frac{1}{2(1-\rho^2)}\left(\frac{x^2}{\sigma_1^2}-2\rho\,\frac{xy}{\sigma_1\sigma_2}+\frac{y^2}{\sigma_2^2}\right)=-\frac{1}{2(1-\rho^2)}\left(\frac{x}{\sigma_1}-\rho\,\frac{y}{\sigma_2}\right)^2-\frac{y^2}{2\sigma_2^2}.$$

We can compute the marginal density of $Y$ as follows:

$$f_Y(y)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\,e^{-y^2/(2\sigma_2^2)}\int_{-\infty}^{\infty}\exp\left\{-\frac{1}{2(1-\rho^2)}\left(\frac{x}{\sigma_1}-\rho\,\frac{y}{\sigma_2}\right)^2\right\}dx=\frac{1}{\sigma_2\sqrt{2\pi}}\,e^{-y^2/(2\sigma_2^2)}\cdot\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-u^2/2}\,du=\frac{1}{\sigma_2\sqrt{2\pi}}\,e^{-y^2/(2\sigma_2^2)},$$

using the substitution $u=\frac{1}{\sqrt{1-\rho^2}}\left(\frac{x}{\sigma_1}-\rho\,\frac{y}{\sigma_2}\right)$, $du=\frac{dx}{\sigma_1\sqrt{1-\rho^2}}$. Thus $Y$ is normal with mean $0$ and variance $\sigma_2^2$.

Conditional density. From the expressions

$$f_{X,Y}(x,y)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2(1-\rho^2)}\left(\frac{x}{\sigma_1}-\rho\,\frac{y}{\sigma_2}\right)^2-\frac{y^2}{2\sigma_2^2}\right\},\qquad f_Y(y)=\frac{1}{\sigma_2\sqrt{2\pi}}\,e^{-y^2/(2\sigma_2^2)},$$

we have

$$f_{X|Y}(x|y)=\frac{f_{X,Y}(x,y)}{f_Y(y)}=\frac{1}{\sigma_1\sqrt{2\pi}\,\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2(1-\rho^2)}\left(\frac{x}{\sigma_1}-\rho\,\frac{y}{\sigma_2}\right)^2\right\}.$$

In the $x$-variable, $f_{X|Y}(x|y)$ is a normal density with mean $\frac{\rho\sigma_1}{\sigma_2}\,y$ and variance $\sigma_1^2(1-\rho^2)$. Therefore,

$$\mathbb{E}[X\,|\,Y=y]=\int_{-\infty}^{\infty}x\,f_{X|Y}(x|y)\,dx=\frac{\rho\sigma_1}{\sigma_2}\,y,$$

$$\mathbb{E}\left[\left(X-\frac{\rho\sigma_1}{\sigma_2}Y\right)^2\Big|\,Y=y\right]=\int_{-\infty}^{\infty}\left(x-\frac{\rho\sigma_1}{\sigma_2}\,y\right)^2 f_{X|Y}(x|y)\,dx=\sigma_1^2(1-\rho^2).$$

From the above two formulas we have the formulas

$$\mathbb{E}[X\,|\,Y]=\frac{\rho\sigma_1}{\sigma_2}\,Y,\tag{7.3}$$

$$\mathbb{E}\left[\left(X-\frac{\rho\sigma_1}{\sigma_2}Y\right)^2\Big|\,Y\right]=\sigma_1^2(1-\rho^2).\tag{7.4}$$

Taking expectations in (7.3) and (7.4) yields

$$\mathbb{E}X=\frac{\rho\sigma_1}{\sigma_2}\,\mathbb{E}Y=0,\tag{7.5}$$

$$\mathbb{E}\left[\left(X-\frac{\rho\sigma_1}{\sigma_2}Y\right)^2\right]=\sigma_1^2(1-\rho^2).\tag{7.6}$$

Based on $Y$, the best estimator of $X$ is $\frac{\rho\sigma_1}{\sigma_2}Y$. This estimator is unbiased (has expected error zero), and the expected square error is $\sigma_1^2(1-\rho^2)$. No other estimator based on $Y$ can have a smaller expected square error (homework problem).
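A simulation sketch of these formulas; the parameter values $\sigma_1=2$, $\sigma_2=1$, $\rho=0.5$ are assumptions. Sampling the pair and regressing $X$ on $Y$ recovers the slope $\rho\sigma_1/\sigma_2$ and the expected square error $\sigma_1^2(1-\rho^2)$.

```python
# Sketch: E[X|Y] = (rho*sigma1/sigma2) Y and the expected square error.
import numpy as np

rng = np.random.default_rng(2)
sigma1, sigma2, rho = 2.0, 1.0, 0.5   # assumed example parameters
n = 1_000_000

y = sigma2 * rng.standard_normal(n)
# Draw X from its conditional law given Y: N(rho*sigma1/sigma2 * y, sigma1^2 (1-rho^2)).
x = rho * sigma1 / sigma2 * y + sigma1 * np.sqrt(1 - rho**2) * rng.standard_normal(n)

slope = (x * y).mean() / (y * y).mean()          # least-squares slope through the origin
mse = ((x - rho * sigma1 / sigma2 * y)**2).mean()
print(slope, rho * sigma1 / sigma2)              # ~= 1.0 vs 1.0
print(mse, sigma1**2 * (1 - rho**2))             # ~= 3.0 vs 3.0
```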

2.8 Multivariate Normal Distribution

Please see Oksendal Appendix A.

Let $\mathbf{X}$ denote the column vector of random variables $(X_1,X_2,\ldots,X_n)^T$, and $\mathbf{x}$ the corresponding column vector of values $(x_1,x_2,\ldots,x_n)^T$. $\mathbf{X}$ has a multivariate normal distribution if and only if the random variables have the joint density

$$f_{\mathbf{X}}(\mathbf{x})=\frac{\sqrt{\det A}}{(2\pi)^{n/2}}\,\exp\left\{-\tfrac{1}{2}(\mathbf{x}-\mu)^T A\,(\mathbf{x}-\mu)\right\}.$$

Here,

$$\mu\;\triangleq\;(\mu_1,\ldots,\mu_n)^T=\mathbb{E}\,\mathbf{X}\;\triangleq\;(\mathbb{E}X_1,\ldots,\mathbb{E}X_n)^T,$$

and $A$ is an $n\times n$ nonsingular matrix. $A^{-1}$ is the covariance matrix

$$A^{-1}=\mathbb{E}\left[(\mathbf{X}-\mu)(\mathbf{X}-\mu)^T\right],$$

i.e., the $(i,j)$th element of $A^{-1}$ is $\mathbb{E}(X_i-\mu_i)(X_j-\mu_j)$. The random variables in $\mathbf{X}$ are independent if and only if $A^{-1}$ is diagonal, i.e.,

$$A^{-1}=\operatorname{diag}(\sigma_1^2,\sigma_2^2,\ldots,\sigma_n^2),$$

where $\sigma_j^2=\mathbb{E}(X_j-\mu_j)^2$ is the variance of $X_j$.
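The density formula and the role of $A^{-1}$ as the covariance matrix can be sanity-checked numerically. A sketch under assumed values of $\mu$ and $A^{-1}$ (any symmetric positive-definite matrix works):

```python
# Sketch: sample covariance ~= A^{-1}, and the density formula vs. scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu = np.array([1.0, -1.0, 0.5])                  # assumed mean vector
cov = np.array([[2.0, 0.6, 0.3],                 # assumed covariance matrix A^{-1}
                [0.6, 1.0, 0.2],
                [0.3, 0.2, 1.5]])
A = np.linalg.inv(cov)
n = len(mu)

samples = rng.multivariate_normal(mu, cov, size=500_000)
print(np.cov(samples.T))                         # ~= cov

x = np.array([0.5, 0.0, 1.0])                    # an arbitrary test point
dens = np.sqrt(np.linalg.det(A)) / (2 * np.pi)**(n / 2) \
       * np.exp(-0.5 * (x - mu) @ A @ (x - mu))
print(dens, stats.multivariate_normal(mu, cov).pdf(x))   # agree
```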

2.9 Bivariate normal distribution

Take $n=2$ in the above definitions, and let

$$\sigma_1^2\;\triangleq\;\mathbb{E}(X_1-\mu_1)^2,\qquad\sigma_2^2\;\triangleq\;\mathbb{E}(X_2-\mu_2)^2,\qquad\rho\;\triangleq\;\frac{1}{\sigma_1\sigma_2}\,\mathbb{E}(X_1-\mu_1)(X_2-\mu_2).$$

Thus,

$$A^{-1}=\begin{bmatrix}\sigma_1^2&\rho\sigma_1\sigma_2\\ \rho\sigma_1\sigma_2&\sigma_2^2\end{bmatrix},\qquad A=\frac{1}{1-\rho^2}\begin{bmatrix}\frac{1}{\sigma_1^2}&-\frac{\rho}{\sigma_1\sigma_2}\\ -\frac{\rho}{\sigma_1\sigma_2}&\frac{1}{\sigma_2^2}\end{bmatrix},\qquad\sqrt{\det A}=\frac{1}{\sigma_1\sigma_2\sqrt{1-\rho^2}},$$

and we have the formula from the example of Section 2.7, adjusted to account for the possibly non-zero expectations:

$$f_{X_1,X_2}(x_1,x_2)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x_1-\mu_1)^2}{\sigma_1^2}-2\rho\,\frac{(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2}+\frac{(x_2-\mu_2)^2}{\sigma_2^2}\right]\right\}.$$

2.10 MGF of jointly normal random variables

Let $\mathbf{u}=(u_1,u_2,\ldots,u_n)^T$ denote a column vector with components in $\mathbb{R}$, and let $\mathbf{X}$ have a multivariate normal distribution with covariance matrix $A^{-1}$ and mean vector $\mu$. Then the moment generating function is given by

$$\mathbb{E}\,e^{\mathbf{u}^T\mathbf{X}}=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}e^{\mathbf{u}^T\mathbf{x}}\,f_{X_1,\ldots,X_n}(x_1,\ldots,x_n)\,dx_1\cdots dx_n=\exp\left\{\tfrac{1}{2}\mathbf{u}^T A^{-1}\mathbf{u}+\mathbf{u}^T\mu\right\}.$$

If any $n$ random variables $X_1,X_2,\ldots,X_n$ have this moment generating function, then they are jointly normal, and we can read out the means and covariances. The random variables are jointly normal and independent if and only if, for any real column vector $\mathbf{u}=(u_1,\ldots,u_n)^T$,

$$\mathbb{E}\,e^{\mathbf{u}^T\mathbf{X}}\;\triangleq\;\mathbb{E}\exp\left\{\sum_{j=1}^n u_jX_j\right\}=\exp\left\{\sum_{j=1}^n\left[\tfrac{1}{2}\sigma_j^2u_j^2+u_j\mu_j\right]\right\}.$$
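Finally, the MGF formula lends itself to a direct Monte Carlo check. A sketch with assumed $\mu$, covariance $A^{-1}$, and a small $\mathbf{u}$ (kept small so the sample average of $e^{\mathbf{u}^T\mathbf{X}}$ is stable):

```python
# Sketch: IE exp(u^T X) vs. exp(1/2 u^T A^{-1} u + u^T mu).
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([0.2, -0.1])                       # assumed mean vector
cov = np.array([[1.0, 0.4],                      # assumed covariance matrix A^{-1}
                [0.4, 0.5]])
u = np.array([0.3, 0.6])

samples = rng.multivariate_normal(mu, cov, size=2_000_000)
empirical = np.exp(samples @ u).mean()
closed_form = np.exp(0.5 * u @ cov @ u + u @ mu)
print(empirical, closed_form)                    # both ~= 1.23
```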