
Chapter 6 Limit Theorems

The power of statistics is most evident when there is a large collection of data points and we are interested in the macro state of the system, e.g., the average or the sum of the random variables. To enable such abstraction, we need certain mathematical tools known as the limit theorems, in particular the law of large numbers and the central limit theorem.

6.1 Moment Generating and Characteristic Functions

Consider two independent random variables X and Y with PDFs f_X(x) and f_Y(y). If we are interested in finding the PDF of the sum Z = X + Y, we know from the previous chapter that this PDF is the convolution of f_X and f_Y. However, convolution can be challenging to compute, especially when we have a large number of random variables to convolve. In this case, we can resort to a frequency-domain approach, which transforms the PDFs to another domain and performs multiplication instead of convolution, making the calculation easier.

Moment Generating Function

Definition 1. For any random variable X, the moment generating function (MGF) M_X(s) is

    M_X(s) = E[e^{sX}].    (6.1)

In the discrete case, the MGF is equivalent to

    M_X(s) = Σ_x e^{sx} p_X(x),    (6.2)

whereas in the continuous case, the MGF is equivalent to

    M_X(s) = ∫_{-∞}^{∞} e^{sx} f_X(x) dx.    (6.3)
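As a quick illustration (not part of the original notes), the sketch below evaluates (6.2) directly for a Bernoulli(p) random variable and compares it against the closed form (1-p) + p e^s; the parameter values are arbitrary.

```python
import numpy as np

# Evaluate (6.2) for X ~ Bernoulli(p), whose MGF is (1 - p) + p*e^s.
p, s = 0.3, 0.7  # arbitrary illustrative values
support, pmf = np.array([0, 1]), np.array([1 - p, p])

print(np.sum(np.exp(s * support) * pmf))  # sum over x of e^{sx} p_X(x)
print((1 - p) + p * np.exp(s))            # closed form; should match
```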

From the continuous version, we can interpret the MGF as the Laplace transform of the PDF. Therefore, s is the variable in the transformed domain. If s = jω, then M_X(jω) becomes the Fourier transform of the PDF.

Example 1. Find the MGF of a Poisson random variable.

Solution. The MGF of a Poisson random variable can be found as

    M_X(s) = Σ_{x=0}^{∞} e^{sx} λ^x e^{-λ} / x! = e^{-λ} Σ_{x=0}^{∞} (λe^s)^x / x! = e^{λe^s} e^{-λ} = e^{λ(e^s - 1)}.

Example 2. Find the MGF of an exponential random variable.

Solution. The MGF of an exponential random variable can be found as

    M_X(s) = ∫_0^{∞} e^{sx} λe^{-λx} dx = ∫_0^{∞} λe^{(s-λ)x} dx = λ/(λ - s), if λ > s.

Properties of MGF

Theorem 1. Let X and Y be two independent random variables, and let Z = X + Y. The MGF of Z is

    M_Z(s) = M_X(s) M_Y(s).    (6.4)

Proof. By the definition of the MGF, we have

    M_Z(s) = E[e^{s(X+Y)}] = E[e^{sX}] E[e^{sY}] = M_X(s) M_Y(s),

where the second equality holds because X and Y are independent.

Implication. The implication of this theorem is that if we have a sequence of independent random variables X_1, ..., X_N, the MGF of Z = Σ_{n=1}^N X_n is

    M_Z(s) = Π_{n=1}^N M_{X_n}(s).

If these random variables are further assumed to be identically distributed, the MGF simplifies to

    M_Z(s) = (M_{X_1}(s))^N.

This significantly simplifies the analysis of Z.
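Both Example 2 and the product property in Theorem 1 are easy to check by Monte Carlo. The sketch below (mine, not the notes') estimates E[e^{sX}] by sampling and compares it with λ/(λ - s); the constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, s, n = 2.0, 0.5, 1_000_000  # need s < lam for the exponential MGF

x = rng.exponential(scale=1 / lam, size=n)
y = rng.exponential(scale=1 / lam, size=n)

# Monte Carlo estimate of M_X(s) = E[e^{sX}] vs. the closed form lam/(lam - s)
print(np.exp(s * x).mean(), lam / (lam - s))

# Theorem 1: M_{X+Y}(s) should match M_X(s) * M_Y(s)
print(np.exp(s * (x + y)).mean(), (lam / (lam - s)) ** 2)
```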

Theorem 2. The MGF has the properties that

    M_X(0) = 1 and d^k/ds^k M_X(s)|_{s=0} = E[X^k].

Proof. The first property can be proved by noting that

    M_X(0) = E[e^{0·X}] = E[1] = 1.

The second property holds because

    d^k/ds^k M_X(s) = d^k/ds^k ∫ e^{sx} f_X(x) dx = ∫ x^k e^{sx} f_X(x) dx.

Setting s = 0 yields

    d^k/ds^k M_X(s)|_{s=0} = ∫ x^k f_X(x) dx = E[X^k].

Example. Prove that a sum of i.i.d. Bernoulli random variables is a Binomial random variable.

Solution. Let us consider a sequence of i.i.d. random variables X_n ~ Bernoulli(p) for n = 1, ..., N. Let Z = X_1 + ... + X_N. The moment generating function of Z is

    M_Z(s) = E[e^{sZ}] = E[e^{s(X_1 + ... + X_N)}] = Π_{n=1}^N E[e^{sX_n}]
           = Π_{n=1}^N (pe^s + (1-p)e^{s·0}) = (pe^s + (1-p))^N.

Now, let us check the moment generating function of a Binomial random variable. If Z ~ Binomial(N, p), then

    M_Z(s) = E[e^{sZ}] = Σ_{k=0}^N e^{sk} \binom{N}{k} p^k (1-p)^{N-k}
           = Σ_{k=0}^N \binom{N}{k} (pe^s)^k (1-p)^{N-k} = (pe^s + (1-p))^N,

where the last equality holds because Σ_{k=0}^N \binom{N}{k} a^k b^{N-k} = (a+b)^N. Therefore, we observe that the two moment generating functions are identical.
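A minimal empirical check of this example (my sketch; N and p are arbitrary): the histogram of a sum of N Bernoulli draws should agree with the Binomial(N, p) PMF up to Monte Carlo error.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N, p, trials = 10, 0.3, 200_000  # arbitrary illustrative values

# Each row is N i.i.d. Bernoulli(p) draws; sum the rows to get Z
z = rng.binomial(1, p, size=(trials, N)).sum(axis=1)
empirical = np.bincount(z, minlength=N + 1) / trials

# Side by side with the Binomial(N, p) PMF
print(np.c_[empirical, stats.binom.pmf(np.arange(N + 1), N, p)])
```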

Characteristic Function

When we restrict s to the imaginary axis, i.e., s = jω, the moment generating function becomes the characteristic function.

Definition 2. The characteristic function of a random variable X is

    Φ_X(jω) = E[e^{jωX}].    (6.5)

However, since ω can take any value in (-∞, ∞), it does not matter if we instead consider E[e^{-jωX}]. This leads to our ECE 302 definition of the characteristic function:

Definition 3. The characteristic function of a random variable X (valid in ECE 302 only) is

    Φ_X(jω) = E[e^{-jωX}].    (6.6)

The reason for introducing our own characteristic function is that E[e^{-jωX}] is the Fourier transform of f_X(x), whereas E[e^{jωX}] is the inverse Fourier transform of f_X(x). The former is notationally more convenient for students who have taken ECE 301. Historically, however, the characteristic function is E[e^{jωX}].

Example. Let X and Y be independent, and let

    f_X(x) = λe^{-λx} for x ≥ 0 (and 0 for x < 0),    f_Y(y) = λe^{-λy} for y ≥ 0 (and 0 for y < 0).

Find the PDF of Z = X + Y.

Solution. The characteristic functions of X and Y can be found from the Fourier table:

    Φ_X(jω) = λ/(λ + jω),    Φ_Y(jω) = λ/(λ + jω).

Therefore, the characteristic function of Z is

    Φ_Z(jω) = Φ_X(jω) Φ_Y(jω) = λ²/(λ + jω)².

By the inverse Fourier transform, we have

    f_Z(z) = F^{-1}{ λ²/(λ + jω)² } = λ² z e^{-λz}, for z ≥ 0.
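This result is also easy to verify by simulation (a sketch of mine, with an arbitrary λ): a normalized histogram of X + Y should track λ² z e^{-λz}.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n = 1.5, 1_000_000  # arbitrary rate

z = rng.exponential(1 / lam, n) + rng.exponential(1 / lam, n)

# Empirical density of Z vs. f_Z(z) = lam^2 * z * exp(-lam*z)
hist, edges = np.histogram(z, bins=50, range=(0, 6), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - lam**2 * centers * np.exp(-lam * centers))))  # small
```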

Why Φ_X(jω) but not M_X(s)?

The answer is that the moment generating function is not always defined. Recall that an expectation exists only when the integrand is absolutely integrable. For the characteristic function this is always the case, because E[|e^{jωX}|] = E[1] = 1. However, for the moment generating function, E[|e^{sX}|] could be unbounded. To see a counterexample, we consider the Cauchy distribution.

Example. Consider the Cauchy distribution with PDF

    f_X(x) = 1/(π(x² + 1)).

Show that the MGF of X is undefined.

Solution. For any s > 0, the MGF satisfies

    M_X(s) = ∫_{-∞}^{∞} e^{sx}/(π(x² + 1)) dx ≥ ∫_0^{∞} e^{sx}/(π(x² + 1)) dx ≥ ∫_0^{∞} (sx)³/(6π(x² + 1)) dx,

because e^{sx} ≥ (sx)³/6 for x ≥ 0. Moreover, since x² + 1 ≤ 2x² for x ≥ 1,

    ∫_1^{∞} (sx)³/(6π(x² + 1)) dx ≥ ∫_1^{∞} s³x³/(6π · 2x²) dx = (s³/(12π)) ∫_1^{∞} x dx = ∞.

Therefore, the MGF is undefined. On the other hand, by the Fourier table we know that

    Φ_X(jω) = F{ 1/(π(x² + 1)) } = e^{-|ω|}.

6.2 Probability Inequalities

The second set of tools we need in studying limit theorems is probability inequalities. We will introduce a few basic ones in this section.

Theorem 3 (Union Bound). Let A_1, ..., A_n be a collection of sets. Then,

    P[∪_{i=1}^n A_i] ≤ Σ_{i=1}^n P[A_i].    (6.7)

Proof. Construct a collection of disjoint sets

    B_i = A_i \ ∪_{j=1}^{i-1} A_j.

Then B_i ⊆ A_i for all i = 1, ..., n, and thus P[B_i] ≤ P[A_i] for all i. Because of the construction, the B_i are disjoint and ∪_i B_i = ∪_i A_i. By Axiom III, we then have

    P[∪_{i=1}^n A_i] = P[∪_{i=1}^n B_i] = Σ_{i=1}^n P[B_i] ≤ Σ_{i=1}^n P[A_i].

Remark: The union bound is a very coarse upper bound but is easy to use. In a nutshell, the union bound can be considered a divide-and-conquer strategy. For a system of n variables, we can decompose the system into small events and use the union bound to upper-bound the overall probability. If we can further ensure that each small event has a very tiny probability of happening, then we have a good bound on the overall probability.

Theorem 4 (Cauchy-Schwarz Inequality). Let X and Y be two random variables. Then,

    E[XY]² ≤ E[X²] E[Y²].    (6.8)

Proof. Let f(s) = E[(sX + Y)²] for any real s. Then we can show that

    f(s) = E[X²] s² + 2E[XY] s + E[Y²].

This is a quadratic in s, and f(s) ≥ 0 for all s because E[(sX + Y)²] ≥ 0. Therefore, the discriminant must be nonpositive:

    (2E[XY])² - 4E[X²]E[Y²] ≤ 0.

This implies 4E[XY]² - 4E[X²]E[Y²] ≤ 0, which completes the proof.

Remark: The Cauchy-Schwarz inequality is useful for handling E[XY], i.e., the covariance if we assume X and Y are zero-mean. In fact, we used the Cauchy-Schwarz inequality to prove that the correlation coefficient ρ is bounded between -1 and 1.

Theorem 5 (Markov Inequality). Let X ≥ 0 be a non-negative random variable. Then, for any ε > 0 we have

    P[X ≥ ε] ≤ E[X]/ε.    (6.9)

Proof. Consider εP[X ≥ ε]. It holds that

    εP[X ≥ ε] = ε ∫_ε^{∞} f_X(x) dx ≤ ∫_ε^{∞} x f_X(x) dx,

where the inequality holds because x ≥ ε over the range of integration and the integrand is non-negative. It then follows that

    ∫_ε^{∞} x f_X(x) dx ≤ ∫_0^{∞} x f_X(x) dx = E[X].

Remark: The Markov inequality is useful in the sense that it relates the probability P[X ≥ ε] to an expectation. Typically, the probability P[X ≥ ε] could be difficult to evaluate if the PDF is complicated. The expectation, on the other hand, could be easier to evaluate.
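To see how loose or tight Markov's inequality is in practice, here is a small simulation of mine (exponential draws; the parameters are arbitrary) comparing the empirical tail probability with the bound E[X]/ε.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(2.0, size=1_000_000)  # nonnegative draws with E[X] = 2

for eps in [2.0, 4.0, 8.0]:
    # Empirical P[X >= eps] vs. the Markov bound E[X]/eps
    print(eps, (x >= eps).mean(), 2.0 / eps)
```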

Theorem 6 (Chebyshev Inequality). Let X be a random variable with mean µ. Then, for any ε > 0 we have

    P[|X - µ| ≥ ε] ≤ Var[X]/ε².    (6.10)

Proof. We apply the Markov inequality to show that

    P[|X - µ| ≥ ε] = P[(X - µ)² ≥ ε²] ≤ E[(X - µ)²]/ε² = Var[X]/ε².

Remark: The Chebyshev inequality is the basic tool used to prove the law of large numbers. It says that the probability of X deviating significantly from µ, i.e., by more than a large ε, is very small.

Theorem 7 (Chernoff Inequality (Optional)). Let X be a random variable. Then, for any ε ≥ 0, we have

    P[X ≥ ε] ≤ e^{-f(ε)},    (6.11)

where

    f(ε) = max_{s>0} { sε - log M_X(s) },    (6.12)

and M_X(s) is the moment generating function.

Proof. Since e^x is an increasing function, for any s > 0 we have

    P[X ≥ ε] = P[e^{sX} ≥ e^{sε}] ≤ E[e^{sX}]/e^{sε} = e^{-sε} M_X(s) = e^{-sε + log M_X(s)}.

The inequality holds due to the Markov inequality, and we used the definition of the MGF, E[e^{sX}] = M_X(s). Now, note that this result holds for all s > 0. That means it must also hold for the s that minimizes e^{-sε + log M_X(s)}. This implies that

    P[X ≥ ε] ≤ min_{s>0} e^{-sε + log M_X(s)}.

Again, since e^x is increasing, minimizing e^{-sε + log M_X(s)} over s is the same as maximizing the exponent's negation, which is exactly the function f(ε) defined above. Thus, we conclude that

    P[X ≥ ε] ≤ e^{-f(ε)}.

Example. Let X ~ N(0, σ²/N). Find the Chernoff bound on P[X ≥ ε].

Solution. The MGF of a zero-mean Gaussian is

    M_X(s) = e^{σ²s²/(2N)}.

Therefore, the function f can be shown to be

    f(ε) = max_s { sε - log M_X(s) } = max_s { sε - σ²s²/(2N) }.

To optimize the function, we take the derivative and set it to zero. This yields

    d/ds { sε - σ²s²/(2N) } = 0  ⟹  s = Nε/σ².

Substituting into f(ε), we can show that f(ε) = ε²N/(2σ²), and hence

    P[X ≥ ε] ≤ exp{ -ε²N/(2σ²) }.    (6.13)

Therefore, as N → ∞, the probability P[X ≥ ε] drops to zero at an exponential speed.
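A quick numerical sanity check of (6.13) (my sketch; σ, N, and ε are arbitrary): the empirical tail probability should sit below the Chernoff bound.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, N, eps, trials = 1.0, 50, 0.3, 1_000_000  # arbitrary values

x = rng.normal(0.0, sigma / np.sqrt(N), size=trials)  # X ~ N(0, sigma^2/N)

print((x >= eps).mean())                     # empirical P[X >= eps]
print(np.exp(-eps**2 * N / (2 * sigma**2)))  # Chernoff bound (6.13)
```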

6.3 Law of Large Numbers

We are now ready to discuss the statistical behavior of a sum of N random variables. Formally, let us define the sample mean as follows.

Definition 4. The sample mean of a sequence of random variables X_1, ..., X_N is

    M_N = (1/N) Σ_{n=1}^N X_n.

The sample mean is an estimate of the true population mean. For example, by surveying 10,000 Americans we can find out the empirical mean of the age. As the survey size grows, we have greater confidence that the empirical mean will approach the true mean.

Example. Consider an experiment of drawing random variables X_1, ..., X_N. We construct the sample mean M_N = (1/N) Σ_{n=1}^N X_n and compute two quantities: E[M_N] and Var[M_N]. As N → ∞, we would like to see whether M_N converges to µ and whether Var[M_N] converges to 0. Figure 6.1 below shows an empirical result.

[Figure 6.1: Illustration of the law of large numbers; the empirical probability concentrates as the number of trials grows.]

We now state and prove the weak law of large numbers.

Theorem 8 (Weak Law of Large Numbers). Let X_1, ..., X_N be a sequence of i.i.d. random variables with common mean µ and finite variance. Then, for any ε > 0,

    lim_{N→∞} P[|M_N - µ| > ε] = 0.    (6.14)

Proof. To prove the weak law of large numbers, we apply the Chebyshev inequality to show that

    P[|M_N - µ| > ε] ≤ Var[M_N]/ε² = Var[X_1]/(Nε²).

Therefore, letting N → ∞, we have

    lim_{N→∞} P[|M_N - µ| > ε] ≤ lim_{N→∞} Var[X_1]/(Nε²) = 0.    (6.15)
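The weak law is straightforward to visualize numerically. The sketch below (mine; Bernoulli draws with an arbitrary threshold ε) estimates P[|M_N - µ| > ε] for increasing N.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, eps = 0.5, 0.05  # Bernoulli(0.5) draws, arbitrary deviation threshold

for N in [10, 100, 1000, 10000]:
    # 5000 independent sample means; Binomial(N, mu)/N is exactly the
    # average of N i.i.d. Bernoulli(mu) draws
    m = rng.binomial(N, mu, size=5000) / N
    print(N, (np.abs(m - mu) > eps).mean())  # estimate of P[|M_N - mu| > eps]
```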

Remark: The decay rate of the weak law can be improved by using the Chernoff bound. For example, if X_n ~ N(0, σ²), then we know that M_N ~ N(0, σ²/N). Therefore, the Chernoff bound shows that

    P[|M_N - µ| > ε] ≤ 2 exp{ -ε²N/(2σ²) },

which decays exponentially faster than the Chebyshev bound.

Convergence in Probability (Optional)

The convergence type in the WLLN is known as convergence in probability, formally defined as follows.

Definition 5. A sequence of random variables M_1, ..., M_N converges in probability to µ if

    lim_{N→∞} P[|M_N - µ| > ε] = 0.    (6.16)

We write M_N →_p µ to denote convergence in probability.

So what is the difference between M_N →_p µ and M_N → µ? We should remember that M_N is a random variable. Therefore, it does not have a fixed specific value but a PDF which tells us the probability that a particular value appears. Since M_N has multiple states and each state appears randomly, the classical notion of convergence M_N → µ is undefined. The probabilistic convergence M_N →_p µ states that as N → ∞, the probability of M_N deviating from µ by more than ε is arbitrarily small. Thus, where there is randomness in M_N, the probability measure P(·) takes care of it.

We have to emphasize that the WLLN is weak because convergence in probability only specifies a low likelihood of big deviations. Having a low chance does not mean that it will not happen. It is possible that, from time to time, M_N will still deviate significantly from µ. The chance of this happening just gets smaller and smaller as N grows.

Almost Sure Convergence (Optional)

If one really wants a stronger type of convergence, we have to use the notion of almost sure convergence, defined as follows.

Definition 6. A sequence of random variables M_1, ..., M_N converges almost surely to µ if

    P[ lim_{N→∞} |M_N - µ| > ε ] = 0.    (6.17)

We write M_N →_{a.s.} µ to denote almost sure convergence.

Almost sure convergence provides a direct response to the fact that M_N → µ is undefined in the classical sense.

Since M_N is a random variable, one has to seek a limiting object to which M_N converges. This limiting object is usually another random variable. After finding the limiting object, we take the probability of the event it defines. Putting the limit inside versus outside the probability has a drastically different meaning. In almost sure convergence, the probability is taken of the limiting object; there is no intermediate evaluation of the probability as N grows. If that limiting object has zero probability of a big deviation, then there is no chance of entering a "bad-luck" experiment as in the convergence-in-probability case.

Interestingly, the sample mean of a sequence of i.i.d. random variables X_1, ..., X_N can be shown to converge almost surely to µ.

Theorem 9 (Strong Law of Large Numbers). Let X_1, ..., X_N be a sequence of i.i.d. random variables with common mean µ. Then, for any ε > 0,

    P[ lim_{N→∞} |M_N - µ| > ε ] = 0.    (6.18)

The proof of the strong law of large numbers is beyond the scope of this course. Generally, we need some assumptions on the higher-order moments of X_n to prove the strong law.

6.4 Central Limit Theorem

The law of large numbers provides a probabilistic way of characterizing the mean of the sample mean M_N. What about the PDF of M_N? Can we say anything about the PDF as N → ∞?

Convergence in Distribution (Optional)

Before we provide an answer to this question, let us first discuss the concept of convergence in distribution.

Definition 7. Let Z_1, ..., Z_N be a sequence of random variables with CDFs F_{Z_1}, ..., F_{Z_N}, respectively. We say that Z_1, ..., Z_N converges in distribution to a random variable Z with CDF F_Z if

    lim_{N→∞} F_{Z_N}(z) = F_Z(z),    (6.19)

for every continuous point z of F_Z. We write Z_N →_d Z to denote convergence in distribution.

Convergence in distribution concerns the CDFs of a sequence of random variables Z_1, ..., Z_N. If this sequence converges to a random variable Z in distribution, then the CDFs of Z_1, ..., Z_N evaluated at z should converge to the CDF of Z evaluated at z. There are a few points we should pay attention to:

Why the CDF and not the PDF? As we discussed in Chapter 4, the PDF is not defined for discrete random variables, because delta functions are indeed not functions. However, the CDF is always defined, because the CDF of a discrete random variable is a step function. In order to include both discrete and continuous random variables, convergence in distribution is defined using the CDF.

Why every continuous point of F_Z? A discontinuous point of F_Z only has its limit defined on one side. There are cases where we want to allow convergence without worrying about the one-sided limit. For example, let Z_N be the random variable with P[Z_N = 1/N] = 1. Then, as N → ∞, F_{Z_N} approaches a step function at 0. However, if we look at the CDF's value at z = 0, we can show that lim_{N→∞} F_{Z_N}(0) = 0 but F_Z(0) = 1. Therefore, if we insisted on the discontinuous point at z = 0, we would not be able to say that Z_N converges. For discrete random variables, the continuous points of F_Z are on the piecewise-constant regions of the CDF.

Is convergence in distribution stronger than convergence in probability? Convergence in distribution is actually weaker. Consider a continuous random variable X with a symmetric PDF such that f_X(x) = f_X(-x). It follows that -X has the same PDF as X. If we define the sequence Z_N = X if N is odd and Z_N = -X if N is even, and let Z = X, then F_{Z_N}(z) = F_Z(z) for every z, because the PDFs of X and -X are identical. Therefore, Z_N →_d Z. However, Z_N does not converge in probability to Z, because Z_N oscillates between the random variables X and -X. These two random variables are different (although they have the same CDF), because

    P[X = -X] = P[{ω : X(ω) = -X(ω)}] = P[{ω : X(ω) = 0}] = 0.

Central Limit Theorem

With the notion of convergence in distribution, we can now state the Central Limit Theorem.

Theorem 10 (Central Limit Theorem). Let X_1, ..., X_N be a sequence of i.i.d. random variables with mean E[X_n] = µ and variance Var[X_n] = σ². Also, assume E[|X_n|³] < ∞. Let

    Z_N := √N (M_N - µ)/σ.

Then,

    Z_N →_d Z,    (6.20)

where Z ~ N(0, 1). In other words,

    lim_{N→∞} P[Z_N ≤ z] = Φ(z) = ∫_{-∞}^{z} (1/√(2π)) e^{-y²/2} dy.    (6.21)

The power of the Central Limit Theorem is that the result holds for any distribution of X_1, ..., X_N. That is, regardless of the distribution of X_1, ..., X_N, the CDF of M_N is approaching a Gaussian. However, one should be very careful when saying that the CDF of M_N

approaches a Gaussian. It does not mean that the random variable M_N approaches a Gaussian random variable. It only means that if we compute a probabilistic event involving M_N, the probability can be approximated by using a Gaussian.

Remark: As a shorthand notation, we write √N (M_N - µ)/σ →_d N(0, 1) unless confusion arises. The statement in the Central Limit Theorem is also equivalent to M_N →_d N(µ, σ²/N).

Proof. Let Z_N = √N (M_N - µ)/σ. Then,

    E[e^{sZ_N}] = E[ e^{(s√N/σ)(M_N - µ)} ] = Π_{n=1}^N E[ e^{(s/(σ√N))(X_n - µ)} ]
                = Π_{n=1}^N E[ 1 + (s/(σ√N))(X_n - µ) + (s²/(2σ²N))(X_n - µ)² + O((X_n - µ)³/(σ³N^{3/2})) ]
                = Π_{n=1}^N ( 1 + (s/(σ√N)) E[X_n - µ] + (s²/(2σ²N)) E[(X_n - µ)²] + ... )
                = (1 + s²/(2N))^N,

where we used E[X_n - µ] = 0 and E[(X_n - µ)²] = σ², and dropped the higher-order terms. It remains to show that (1 + s²/(2N))^N → e^{s²/2}. If we can show that, then we have shown that the MGF of Z_N converges to the MGF of N(0, 1). To this end, we consider log(1 + x). By Taylor approximation,

    log(1 + x) = log(1) + (d/dx log x |_{x=1}) x + (d²/dx² log x |_{x=1}) x²/2 + O(x³) = x - x²/2 + O(x³).

Therefore, with x = s²/(2N), we have

    N log(1 + s²/(2N)) = N ( s²/(2N) - s⁴/(8N²) + O(N^{-3}) ) = s²/2 - s⁴/(8N) + O(N^{-2}).

As N → ∞, the limit becomes

    lim_{N→∞} N log(1 + s²/(2N)) = s²/2,

and taking the exponential on both sides yields

    lim_{N→∞} (1 + s²/(2N))^N = e^{s²/2}.
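Here is a small simulation of mine illustrating the theorem (exponential draws; the constants are arbitrary): the empirical CDF of the standardized sample mean Z_N should track the standard normal CDF.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam, N, trials = 1.0, 200, 50_000
mu, sigma = 1 / lam, 1 / lam  # Exponential(lam): mean and std are both 1/lam

m = rng.exponential(1 / lam, size=(trials, N)).mean(axis=1)
z = np.sqrt(N) * (m - mu) / sigma  # standardized sample mean Z_N

# Empirical CDF of Z_N vs. the standard normal CDF Phi(z)
for q in [-2, -1, 0, 1, 2]:
    print(q, (z <= q).mean(), stats.norm.cdf(q))
```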

Example. Suppose X_n ~ Poisson(λ). Use the Central Limit Theorem to approximate P[a ≤ M_N ≤ b].

Solution. We first show that

    E[M_N] = E[X_n] = λ,    Var[M_N] = (1/N) Var[X_n] = λ/N.

Therefore, the Central Limit Theorem implies that

    M_N →_d N(λ, λ/N).

The probability can be found as

    P[a ≤ M_N ≤ b] ≈ Φ( (b - λ)/√(λ/N) ) - Φ( (a - λ)/√(λ/N) ).

Theorem 11 (Delta Method (Optional)). Let M_N be the sample mean and suppose that the Central Limit Theorem gives √N (M_N - µ) →_d N(0, σ²), where N is the sample size. Then, for any continuously differentiable function f, it holds that

    √N (f(M_N) - f(µ)) →_d N(0, σ² f'(µ)²).    (6.22)

Proof. By Taylor approximation, we have

    f(M_N) = f(µ) + (M_N - µ) f'(µ) + O((M_N - µ)²).

This implies that

    √N (f(M_N) - f(µ)) ≈ √N (M_N - µ) f'(µ).

By the Central Limit Theorem, we know that √N (M_N - µ) f'(µ) →_d N(0, σ² f'(µ)²). This completes the proof.

Example. A Poisson random variable X_n ~ Poisson(λ) has variance λ. Let f(λ) = √λ. Show that √N (f(M_N) - f(λ)) →_d N(0, 1/4). That is, by applying the transformation f, we make the variance a constant independent of λ.

Solution. Since f(λ) = √λ, we have (f'(λ))² = ( (1/2) λ^{-1/2} )² = 1/(4λ). Thus, the Delta Method gives us

    √N (f(M_N) - f(λ)) →_d N(0, λ · 1/(4λ)) = N(0, 1/4).
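The variance-stabilizing effect is easy to confirm numerically (my sketch; N and the λ values are arbitrary): the variance of √N (√(M_N) - √λ) should be close to 1/4 for every λ.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 400, 20_000  # arbitrary sample size and repetitions

for lam in [1.0, 4.0, 9.0]:
    m = rng.poisson(lam, size=(trials, N)).mean(axis=1)
    # Var[sqrt(N) * (sqrt(M_N) - sqrt(lam))] should be ~0.25 regardless of lam
    print(lam, np.var(np.sqrt(N) * (np.sqrt(m) - np.sqrt(lam))))
```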

Limitation of the Central Limit Theorem (Optional)

If we recall the statement of the Central Limit Theorem, we notice that it requires

    lim_{N→∞} P[ √N (M_N - µ)/σ ≤ ε ] = Φ(ε).    (6.23)

Rearranging the terms, we can show that

    lim_{N→∞} P[ M_N ≤ µ + σε/√N ] = Φ(ε).

This implies that as N → ∞, the deviation which the CLT can handle, i.e., σε/√N, goes to 0. In other words, the CLT can only handle small deviations. If we want to conduct an analysis of large deviations, we need to resort to tools such as the Chernoff bound. Intuitively, we can understand the limitation of the CLT by realizing that the Gaussian (i.e., quadratic) approximation holds only for small deviations. The tail probabilities are typically not approximated well by the CLT.
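To make this concrete, here is a sketch of mine for a fixed (non-shrinking) deviation of a Bernoulli sample mean. It compares the exact binomial tail with the CLT approximation and with a Chernoff-type bound; the bound exp(-2Nt²) used below is the Hoeffding form for Bernoulli sums, which is my substitution rather than a result derived in these notes.

```python
import numpy as np
from scipy import stats

# Fixed large deviation t for a Bernoulli(p) sample mean: exact tail vs.
# the CLT (Gaussian) approximation vs. a Chernoff-type (Hoeffding) bound.
N, p, t = 1000, 0.5, 0.1  # t stays fixed, not shrinking like 1/sqrt(N)
sigma = np.sqrt(p * (1 - p))

exact = stats.binom.sf(np.ceil(N * (p + t)) - 1, N, p)  # P[M_N >= p + t]
clt = stats.norm.sf(t * np.sqrt(N) / sigma)             # Gaussian tail approx
chernoff = np.exp(-2 * N * t**2)                        # Hoeffding bound
print(exact, clt, chernoff)
```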
