Econ 508B: Lecture 5
Expectation, MGF and CGF

Hongyi Liu, Washington University in St. Louis
Math Camp 2017 (Stats), July 31, 2017
Outline
1. Expected Values
2. Moment Generating Functions
3. Cumulant Generating Functions
Motivation: Probability vs. Expectation

To start with, people arguably have a better intuition for an expected value than for a probability. Many problems, such as optimization and approximation problems, are phrased directly in terms of expectations. Probabilities can then be treated as special cases of expectations (the expectation of an indicator), which allows a uniform and economical treatment.
Definition 1.1
Let $X$ be a random variable on $(\Omega, \mathcal{F}, P)$. The expected value of $X$, $EX$, is defined as
$$EX = \int_\Omega X \, dP,$$
provided the integral is well-defined, i.e., at least one of the two quantities $\int_\Omega X^+ \, dP$ and $\int_\Omega X^- \, dP$ is finite.

Proposition 1.1 (Change of variables formula)
Let $X$ be a random variable on $(\Omega, \mathcal{F}, P)$, let $g : \mathbb{R} \to \mathbb{R}$ be Borel measurable, and set $Y = g(X)$, which is also a random variable on $(\Omega, \mathcal{F}, P)$. If $\int_\Omega |Y| \, dP < \infty$, then
$$\int_\Omega Y \, dP = \int_{\mathbb{R}} g(x) \, P_X(dx) = \int_{\mathbb{R}} y \, P_Y(dy),$$
where $P_X$ and $P_Y$ denote the distributions of $X$ and $Y$.
Moment

Definition 1.2
For any positive integer $n$, the $n$-th moment $\mu_n$ and the $n$-th central moment $\bar{\mu}_n$ of a random variable $X$ are defined by
$$\mu_n \equiv EX^n, \qquad \bar{\mu}_n \equiv E(X - EX)^n,$$
provided the expectations are well-defined. In particular, the variance of a random variable $X$ is the 2nd central moment, namely
$$\mathrm{Var}(X) = E(X - EX)^2,$$
provided $EX^2 < \infty$.
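As a quick illustration (a hypothetical example, not from the slides), the raw and central moments of a fair six-sided die can be computed directly from its pmf:

```python
# Raw and central moments of a fair six-sided die (hypothetical example).
values = [1, 2, 3, 4, 5, 6]
p = 1 / 6  # each face is equally likely

def raw_moment(n):
    """mu_n = E[X^n]."""
    return sum(p * x**n for x in values)

def central_moment(n):
    """bar{mu}_n = E[(X - EX)^n]."""
    m = raw_moment(1)
    return sum(p * (x - m)**n for x in values)

print(raw_moment(1))      # EX = 3.5
print(central_moment(2))  # Var(X) = 35/12 ≈ 2.9167
```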
MGF

Definition 2.1
The moment generating function (MGF) of a random variable $X$ is
$$M_X(t) \equiv E(e^{tX}), \quad \text{for all } t \in \mathbb{R}.$$
Since $e^{tX}$ is always non-negative, $E(e^{tX})$ is well-defined, but it could be infinite (why?).

The payoff of the MGF is the direct connection it gives between $M_X$ and the moments of a random variable $X$, as follows.
Non-negative case

Proposition 2.1
Let $X$ be a non-negative random variable and $t > 0$. Then
$$M_X(t) = E(e^{tX}) = \sum_{n=0}^\infty \frac{t^n \mu_n}{n!}.$$
Proof: By Taylor expansion, $e^{tX} = \sum_{n=0}^\infty \frac{t^n X^n}{n!}$. Since $X$ is non-negative, the terms are non-negative, and the result follows from the monotone convergence theorem (M.C.T.).
Bounded case

Proposition 2.2
Let $X$ be a random variable and let $M_X(t)$ be finite for all $|t| < \epsilon$, for some $\epsilon > 0$. Then
(1) $E|X|^n < \infty$ for all $n \ge 1$;
(2) $M_X(t) = \sum_{n=0}^\infty \frac{\mu_n t^n}{n!}$ for all $|t| < \epsilon$;
(3) $M_X(\cdot)$ is infinitely differentiable on $(-\epsilon, +\epsilon)$, and for $r \in \mathbb{N}$ the $r$-th derivative of $M_X(\cdot)$ is
$$M_X^{(r)}(t) = \sum_{n=0}^\infty \frac{\mu_{n+r} t^n}{n!} = E(e^{tX} X^r) \quad \text{for } |t| < \epsilon.$$
In particular, $M_X^{(r)}(0) = \mu_r = EX^r$.
Proof
(1): Since $M_X(t)$ is finite for $|t| < \epsilon$ and $\frac{|t|^n |X|^n}{n!} \le e^{|tX|} \le e^{tX} + e^{-tX}$ for all $n \in \mathbb{N}$, we have
$$E(e^{|tX|}) \le E(e^{tX}) + E(e^{-tX}) < \infty \quad \text{for } |t| < \epsilon.$$
Therefore, choosing any $t \in (-\epsilon, +\epsilon)$, $t \ne 0$, leads to the outcome of (1).

(2): Notice that $\left| \sum_{j=0}^n \frac{(tx)^j}{j!} \right| \le e^{|tx|}$ for all $x \in \mathbb{R}$ and $n \in \mathbb{N}$; the dominated convergence theorem (D.C.T.) then implies that (2) holds.

(3): The derivative of $M_X(\cdot)$ can be found by term-by-term differentiation of the power series. Hence,
$$M_X^{(r)}(t) = \frac{d^r}{dt^r} \sum_{n=0}^\infty \frac{t^n \mu_n}{n!} = \sum_{n=r}^\infty \frac{t^{n-r} \mu_n}{(n-r)!} = \sum_{n=0}^\infty \frac{t^n \mu_{n+r}}{n!}.$$
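Part (3) of Proposition 2.2 can be sanity-checked numerically. A minimal sketch (the MGF used is $M_X(t) = e^{t^2/2}$ for a standard normal, and the step size $h$ is an ad hoc choice): finite differences of $M_X$ at $t = 0$ should recover $\mu_1 = 0$ and $\mu_2 = 1$.

```python
import math

def M(t):
    """MGF of a standard normal: M_X(t) = exp(t^2 / 2)."""
    return math.exp(t * t / 2)

h = 1e-3
# First derivative at 0 via central difference: should equal mu_1 = 0.
mu1 = (M(h) - M(-h)) / (2 * h)
# Second derivative at 0 via central difference: should equal mu_2 = 1.
mu2 = (M(h) - 2 * M(0) + M(-h)) / h**2

print(mu1, mu2)  # approximately 0 and 1
```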
Example 2.1
Let $X \sim N(0, 1)$. Then for all $t \in \mathbb{R}$,
$$M_X(t) = \int_{-\infty}^{+\infty} e^{tx} \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \, dx = e^{t^2/2} = \sum_{k=0}^\infty \frac{(t^2)^k}{k! \, 2^k}.$$
Thus
$$\mu_n = \begin{cases} 0 & \text{if } n \text{ is odd} \\ \dfrac{(2k)!}{k! \, 2^k} & \text{if } n = 2k, \; k = 1, 2, \ldots \end{cases}$$

Remark
If $M_X(t)$ is finite in a neighborhood of the origin, then all the moments $\{\mu_n\}_{n \ge 1}$ of $X$ are determined, and so is its probability distribution. However, in general, probability distributions are not completely determined by their moments.
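The moment formula in Example 2.1 follows from reading off the series coefficients: the coefficient of $t^{2k}$ in $e^{t^2/2}$ is $1/(k! \, 2^k)$, so $\mu_{2k} = (2k)!/(k! \, 2^k)$. A small sketch tabulating the first few standard normal moments:

```python
import math

def normal_moment(n):
    """mu_n for X ~ N(0,1): 0 for odd n, (2k)!/(k! 2^k) for n = 2k."""
    if n % 2 == 1:
        return 0
    k = n // 2
    # (2k)! is always divisible by k! * 2^k, so integer division is exact.
    return math.factorial(2 * k) // (math.factorial(k) * 2**k)

print([normal_moment(n) for n in range(1, 9)])
# [0, 1, 0, 3, 0, 15, 0, 105]
```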
Intuitively speaking, if the sequence of moments does not grow too quickly, then the distribution is determined by its moments.

Example 2.2
A standard example of two distinct distributions with the same moments is based on the lognormal density (Billingsley, Probability and Measure, Chapter 30):
$$f(x) = \frac{1}{\sqrt{2\pi} \, x} \exp\!\left( -(\log x)^2 / 2 \right), \quad x > 0,$$
and its perturbed density, for $|a| \le 1$:
$$f_a(x) = f(x) \left( 1 + a \sin(2\pi \log x) \right).$$
They have the same moments, and the $n$-th moment of each of them is $\exp(n^2/2)$. Proof: Homework!
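Example 2.2 can be checked numerically (a sketch, not a proof: the substitution $x = e^y$ turns the $n$-th moment into the Gaussian integral $\int e^{ny} (1 + a \sin 2\pi y) \varphi(y) \, dy$, and the grid bounds and step below are ad hoc choices):

```python
import math

def moment(n, a=0.0, lo=-12.0, hi=12.0, steps=100_000):
    """n-th moment of the (possibly perturbed) lognormal density,
    computed as E[X^n] = ∫ e^{ny} (1 + a sin(2πy)) φ(y) dy
    after the substitution x = e^y, via trapezoidal integration."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        y = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        phi = math.exp(-y * y / 2) / math.sqrt(2 * math.pi)
        total += w * math.exp(n * y) * (1 + a * math.sin(2 * math.pi * y)) * phi
    return total * h

for n in (1, 2, 3):
    # Both densities give (numerically) the same n-th moment, exp(n^2/2).
    print(n, moment(n), moment(n, a=0.5), math.exp(n * n / 2))
```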
Joint moment generating function

Definition 2.2
The joint moment generating function of a random vector $X = (X_1, \ldots, X_k)$ is defined by
$$M_{X_1, \ldots, X_k}(t_1, \ldots, t_k) \equiv E\!\left( e^{t_1 X_1 + \cdots + t_k X_k} \right), \quad \text{for all } t_1, \ldots, t_k \in \mathbb{R}.$$
The convention applied here for $M_{X_1, \ldots, X_k}(\cdot)$ is similar to that for $M_X(t)$: the MGF of $X$ exists if $M_{X_1, \ldots, X_k}(\cdot)$ is finite in a neighborhood of the origin of $\mathbb{R}^k$, i.e., for $\|t\| < t_0$ with some $t_0 > 0$. In that case it admits the expansion
$$M_X(t) = 1 + \sum_{i=1}^k \kappa^i t_i + \frac{1}{2!} \sum_{i,j=1}^k \kappa^{ij} t_i t_j + \cdots,$$
where $\kappa^{i_1 \cdots i_r} = E(X_{i_1} \cdots X_{i_r})$ for $i_1, \ldots, i_r = 1, \ldots, k$ is referred to as the moment about the origin of order $r$ of $X$. The moments of order $r$ form an array that is symmetric with respect to permutations of the indices.
Moreover,
$$\kappa^{i_1 \cdots i_r} = \left. \frac{\partial^r M_X(t)}{\partial t_{i_1} \cdots \partial t_{i_r}} \right|_{t=0}.$$
The relationship
$$M_X(t) = M_{X_1}(t_1) \cdots M_{X_k}(t_k)$$
holds if and only if the components of $X$ are independent.
Example
Suppose $M_X(t) = \frac{1}{8} e^{-5t} + \frac{1}{4} e^{t} + \frac{5}{8} e^{7t}$. What is $E(X^n)$?

Answer:
$$M_X^{(n)}(t) = \frac{1}{8} (-5)^n e^{-5t} + \frac{1}{4} e^{t} + \frac{5}{8} \, 7^n e^{7t},$$
so
$$E[X^n] = M_X^{(n)}(0) = \frac{1}{8} (-5)^n + \frac{1}{4} + \frac{5}{8} \, 7^n.$$

Alternatively, by the definitions of expectation and MGF, the random variable $X$ takes the value $-5$ with probability $1/8$, the value $1$ with probability $1/4$, and the value $7$ with probability $5/8$. Its $n$-th moment then follows directly:
$$E[X^n] = \frac{1}{8} (-5)^n + \frac{1}{4} \cdot 1^n + \frac{5}{8} \cdot 7^n.$$
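The two routes in the example agree, as a direct check confirms (a minimal sketch):

```python
# X takes -5, 1, 7 with probabilities 1/8, 1/4, 5/8 (from the example).
pmf = {-5: 1/8, 1: 1/4, 7: 5/8}

def moment_from_pmf(n):
    """E[X^n] computed directly from the distribution."""
    return sum(p * x**n for x, p in pmf.items())

def moment_from_mgf(n):
    """E[X^n] = M^(n)(0) = (1/8)(-5)^n + 1/4 + (5/8) 7^n."""
    return (1/8) * (-5)**n + 1/4 + (5/8) * 7**n

for n in range(1, 5):
    print(n, moment_from_pmf(n), moment_from_mgf(n))  # both columns agree
```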
Cumulant Generating Function

Definition 3.1
Let $M_X(t)$ be finite for $|t| < t_0$. The cumulant generating function (CGF) of $X$ is defined as
$$K_X(t) = \log M_X(t).$$
The CGF also completely determines the distribution of $X$, and it can be expanded in a power series with radius of convergence $R \ge t_0$ as follows:
$$K_X(t) = \kappa_1 t + \kappa_2 \frac{t^2}{2!} + \kappa_3 \frac{t^3}{3!} + \cdots.$$
The coefficient $\kappa_r$ of $t^r/r!$ is referred to as the cumulant of order $r$ of $X$:
$$\kappa_r = \kappa_r(X) = \left. \frac{d^r}{dt^r} K_X(t) \right|_{t=0}.$$
Multivariate cumulant generating function

When $X = (X_1, \ldots, X_k)$ is a vector, the CGF is defined as
$$K_X(t) = \log M_X(t).$$
If $M_X(t)$ exists, then the CGF admits a multivariate Taylor series expansion in a neighborhood of the origin, with the coefficients corresponding to cumulants of $X$.

Definition 3.2
The joint cumulant of order $r$ is
$$\kappa^{i_1, i_2, \ldots, i_r} = \left. \frac{\partial^r K_X(t)}{\partial t_{i_1} \cdots \partial t_{i_r}} \right|_{t=0}.$$
Sums of i.i.d. random variables

Let $S_n = \sum_{i=1}^n X_i$, where the $X_i$ are i.i.d. and $M_{X_i}$ exists. Then
$$M_{S_n}(t) = (M_X(t))^n, \qquad K_{S_n}(t) = n K_X(t).$$
Also,
$$\kappa_r(S_n) = n \kappa_r(X) = n \kappa_r.$$
In a word, when working with sums of i.i.d. random variables, the cumulants of the sum are simply $n$ times the cumulants of a single summand.
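The identity $M_{S_n}(t) = (M_X(t))^n$ can be checked for $n = 2$ by convolving the pmf of the three-point distribution from the earlier MGF example with itself (a minimal sketch; the evaluation points for $t$ are arbitrary):

```python
import math
from itertools import product

# Three-point distribution from the earlier example.
pmf = {-5: 1/8, 1: 1/4, 7: 5/8}

def M(t, dist):
    """MGF of a discrete distribution: E[e^{tX}]."""
    return sum(p * math.exp(t * x) for x, p in dist.items())

# pmf of S_2 = X_1 + X_2 with X_1, X_2 i.i.d. ~ pmf (convolution).
pmf_S2 = {}
for (x1, p1), (x2, p2) in product(pmf.items(), pmf.items()):
    pmf_S2[x1 + x2] = pmf_S2.get(x1 + x2, 0.0) + p1 * p2

for t in (-0.3, 0.1, 0.5):
    print(M(t, pmf_S2), M(t, pmf)**2)  # the two columns agree
```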
Example 3.1
Let $X \sim N(\mu, \sigma^2)$. Then
$$M_X(t) = e^{\mu t + \sigma^2 t^2 / 2}, \qquad K_X(t) = \mu t + \sigma^2 \frac{t^2}{2}.$$
Therefore, $\kappa_1 = \mu$, $\kappa_2 = \sigma^2$, and $\kappa_r = 0$ for $r = 3, 4, \ldots$
Cumulants of order larger than 2 are all zero if and only if $X$ has a normal distribution.
Location Shifts

Shifting from $X$ to $X + a$ induces the corresponding transformations of $M_X(\cdot)$ and $K_X(\cdot)$, respectively:
$$M_{X+a}(t) = E(e^{t(X+a)}) = e^{at} M_X(t),$$
and
$$K_{X+a}(t) = at + K_X(t).$$
Only the first cumulant is affected, i.e., $\kappa_1(X + a) = a + \kappa_1$.
Scale Changes

Rescaling $X$ by $b > 0$ gives $X/b$. It follows that
$$M_{X/b}(t) = E(e^{tX/b}) = M_X(t/b), \qquad K_{X/b}(t) = K_X(t/b),$$
and
$$\kappa_r(X/b) = \kappa_r(X)/b^r = \kappa_r/b^r.$$
All cumulants are affected by a scale change unless $b = 1$.
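The scale rule $K_{X/b}(t) = K_X(t/b)$ can be verified with the normal CGF from Example 3.1, since $X/b \sim N(\mu/b, \sigma^2/b^2)$ (a sketch; the values of $\mu$, $\sigma$, $b$, and $t$ are arbitrary illustrative choices):

```python
mu, sigma, b = 1.5, 2.0, 4.0

def K(t, mu, sigma):
    """CGF of N(mu, sigma^2): K(t) = mu*t + sigma^2 * t^2 / 2."""
    return mu * t + sigma**2 * t**2 / 2

# X/b ~ N(mu/b, sigma^2/b^2), so its CGF is K(t, mu/b, sigma/b).
# Check K_{X/b}(t) == K_X(t/b) at a few points:
for t in (-1.0, 0.2, 0.7):
    print(K(t, mu / b, sigma / b), K(t / b, mu, sigma))  # equal

# Cumulants scale as claimed: kappa_1(X/b) = mu/b, kappa_2(X/b) = sigma^2/b^2.
```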
More information4. Distributions of Functions of Random Variables
4. Distributions of Functions of Random Variables Setup: Consider as given the joint distribution of X 1,..., X n (i.e. consider as given f X1,...,X n and F X1,...,X n ) Consider k functions g 1 : R n
More informationStochastic Processes and Monte-Carlo Methods. University of Massachusetts: Spring 2018 version. Luc Rey-Bellet
Stochastic Processes and Monte-Carlo Methods University of Massachusetts: Spring 2018 version Luc Rey-Bellet April 5, 2018 Contents 1 Simulation and the Monte-Carlo method 3 1.1 Review of probability............................
More informationChapter 5. Chapter 5 sections
1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions
More information1 Variance of a Random Variable
Indian Institute of Technology Bombay Department of Electrical Engineering Handout 14 EE 325 Probability and Random Processes Lecture Notes 9 August 28, 2014 1 Variance of a Random Variable The expectation
More informationChapter 4 : Expectation and Moments
ECE5: Analysis of Random Signals Fall 06 Chapter 4 : Expectation and Moments Dr. Salim El Rouayheb Scribe: Serge Kas Hanna, Lu Liu Expected Value of a Random Variable Definition. The expected or average
More informationRandom Variables and Their Distributions
Chapter 3 Random Variables and Their Distributions A random variable (r.v.) is a function that assigns one and only one numerical value to each simple event in an experiment. We will denote r.vs by capital
More informationStatistical signal processing
Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable
More information8 Laws of large numbers
8 Laws of large numbers 8.1 Introduction We first start with the idea of standardizing a random variable. Let X be a random variable with mean µ and variance σ 2. Then Z = (X µ)/σ will be a random variable
More informationMath 113 Winter 2005 Key
Name Student Number Section Number Instructor Math Winter 005 Key Departmental Final Exam Instructions: The time limit is hours. Problem consists of short answer questions. Problems through are multiple
More informationf (x) = k=0 f (0) = k=0 k=0 a k k(0) k 1 = a 1 a 1 = f (0). a k k(k 1)x k 2, k=2 a k k(k 1)(0) k 2 = 2a 2 a 2 = f (0) 2 a k k(k 1)(k 2)x k 3, k=3
1 M 13-Lecture Contents: 1) Taylor Polynomials 2) Taylor Series Centered at x a 3) Applications of Taylor Polynomials Taylor Series The previous section served as motivation and gave some useful expansion.
More informationConvergence in Distribution
Convergence in Distribution Undergraduate version of central limit theorem: if X 1,..., X n are iid from a population with mean µ and standard deviation σ then n 1/2 ( X µ)/σ has approximately a normal
More informationReview 1: STAT Mark Carpenter, Ph.D. Professor of Statistics Department of Mathematics and Statistics. August 25, 2015
Review : STAT 36 Mark Carpenter, Ph.D. Professor of Statistics Department of Mathematics and Statistics August 25, 25 Support of a Random Variable The support of a random variable, which is usually denoted
More informationSolution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have
362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications
More information5 Operations on Multiple Random Variables
EE360 Random Signal analysis Chapter 5: Operations on Multiple Random Variables 5 Operations on Multiple Random Variables Expected value of a function of r.v. s Two r.v. s: ḡ = E[g(X, Y )] = g(x, y)f X,Y
More informationSTAT/MATH 395 PROBABILITY II
STAT/MATH 395 PROBABILITY II Chapter 6 : Moment Functions Néhémy Lim 1 1 Department of Statistics, University of Washington, USA Winter Quarter 2016 of Common Distributions Outline 1 2 3 of Common Distributions
More informationMATH 118, LECTURES 27 & 28: TAYLOR SERIES
MATH 8, LECTURES 7 & 8: TAYLOR SERIES Taylor Series Suppose we know that the power series a n (x c) n converges on some interval c R < x < c + R to the function f(x). That is to say, we have f(x) = a 0
More information