1: PROBABILITY REVIEW
1: PROBABILITY REVIEW
Marek Rutkowski
School of Mathematics and Statistics, University of Sydney
Semester 2, 2016
M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56
Outline
We will review the following notions:
1. Discrete Random Variables
2. Expectation of a Random Variable
3. Equivalence of Probability Measures
4. Variance of a Random Variable
5. Continuous Random Variables
6. Examples of Probability Distributions
7. Correlation of Random Variables
8. Conditional Distributions and Expectations
PART 1: DISCRETE RANDOM VARIABLES
Sample Space
We collect the possible states of the world and denote the set by Ω. The states are called samples or elementary events. The sample space Ω is either countable or uncountable.
- A toss of a coin: Ω = {Head, Tail} = {H, T}.
- The number of successes in a sequence of n identical and independent trials: Ω = {0, 1, ..., n}.
- The moment of occurrence of the first success in an infinite sequence of identical and independent trials: Ω = {1, 2, ...}.
- The lifetime of a light bulb: Ω = {x ∈ R : x ≥ 0}.
The choice of a sample space is arbitrary and thus any set can be taken as a sample space. However, practical considerations justify choosing the most convenient sample space for the problem at hand. Discrete (finite or countably infinite) sample spaces are easier to handle than general sample spaces.
Discrete Random Variables
Examples of random variables:
- Prices of listed stocks.
- Fluctuations of currency exchange rates.
- Payoffs corresponding to portfolios of traded assets.
Definition (Discrete Random Variable): A real-valued map X : Ω → R on a discrete sample space Ω = (ω_k)_{k∈I}, where the index set I is countable, is called a discrete random variable.
Definition (Probability): A map P : Ω → [0, 1] is called a probability on a discrete sample space Ω if the following conditions are satisfied:
P.1. P(ω_k) ≥ 0 for all k ∈ I,
P.2. Σ_{k∈I} P(ω_k) = 1.
Probability Measure
Let F = 2^Ω stand for the class of all subsets of the sample space Ω. The set 2^Ω is called the power set of Ω. Note that the empty set belongs to any power set. Any set A ∈ F is called an event (or a random event).
Definition (Probability Measure): A map P : F → [0, 1] is called a probability measure on (Ω, F) if
M.1. For any sequence A_i ∈ F, i = 1, 2, ..., of events such that A_i ∩ A_j = ∅ for all i ≠ j, we have P(∪_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i).
M.2. P(Ω) = 1.
For any A ∈ F the equality P(Ω \ A) = 1 − P(A) holds.
Probability Measure on a Discrete Sample Space
Note that a probability P : Ω → [0, 1] on a discrete sample space Ω uniquely specifies the probabilities of all elementary events A_k = {ω_k}. It is common to write P({ω_k}) = P(ω_k) = p_k. The following result shows that any probability on a discrete sample space Ω generates a unique probability measure on (Ω, F).
Theorem: Let P : Ω → [0, 1] be a probability on a discrete sample space Ω. Then the unique probability measure on (Ω, F) generated by P satisfies, for all A ∈ F,
P(A) = Σ_{ω_k ∈ A} P(ω_k).
The proof of the theorem is easy since Ω is assumed to be discrete.
Example: Coin Flipping
Example (1.2): Let X be the number of heads appearing when a fair coin is tossed twice. We choose the sample space Ω to be Ω = {0, 1, 2}, where a number k ∈ Ω represents the number of times head has occurred. The probability measure P on Ω is defined as
P(k) = 0.25 if k = 0 or k = 2, and P(k) = 0.5 otherwise.
We recognise here the binomial distribution with n = 2 and p = 0.5. Note that a single flip of a coin is a Bernoulli trial.
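As a quick check, the distribution in Example 1.2 can be reproduced in a few lines of Python (a minimal sketch; the helper name `binom_pmf` is our own, not from the slides):

```python
from math import comb

def binom_pmf(k, n, p):
    # P(X = k) for a binomial distribution: C(n, k) p^k (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two tosses of a fair coin: n = 2, p = 0.5
probs = {k: binom_pmf(k, 2, 0.5) for k in range(3)}
print(probs)  # {0: 0.25, 1: 0.5, 2: 0.25}
```

The probabilities sum to one, as required by condition P.2.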
Example: Coin Flipping
Example (1.2 Continued): We now suppose that the coin is not a fair one. Let the probability of head be p for some p ≠ 1/2. Then the probability measure P is given by
P(k) = q² if k = 0, 2pq if k = 1, p² if k = 2,
where q := 1 − p is the probability of tail appearing. We deal here with the binomial distribution with n = 2 and 0 < p < 1.
PART 2: EXPECTATION OF A RANDOM VARIABLE
Expectation of a Random Variable
Definition (Expectation): Let X be a random variable on a discrete sample space Ω endowed with a probability measure P. The expectation (expected value or mean value) of X is given by
E_P(X) = µ := Σ_{k∈I} X(ω_k) P(ω_k) = Σ_{k∈I} x_k p_k.
E_P(·) is called the expectation operator under the probability P. The expectation of a random variable can be seen as a weighted average of its values. Since it is impossible to know in advance which event will occur, the expectation is useful for making decisions when the probabilities of future outcomes are known.
Expectation Operator
Any random variable defined on a finite set Ω admits the expectation. When the set Ω is countable (but infinite), we say that X is P-integrable whenever
E_P(|X|) = Σ_{ω∈Ω} |X(ω)| P(ω) < ∞.
In that case the expectation E_P(X) is well defined (and finite).
Theorem (1.1): Let X and Y be two P-integrable random variables and P be a probability measure on a discrete sample space Ω. Then for all α, β ∈ R
E_P(αX + βY) = α E_P(X) + β E_P(Y).
Hence E_P(·) : X → R is a linear operator on the space X of P-integrable random variables.
Expectation Operator
Proof of Theorem 1.1. We note that
E_P(|αX + βY|) ≤ |α| E_P(|X|) + |β| E_P(|Y|) < ∞,
so that the random variable αX + βY belongs to X. The linearity of expectation can be easily deduced from its definition:
E_P(αX + βY) = Σ_{ω∈Ω} (αX(ω) + βY(ω)) P(ω)
= Σ_{ω∈Ω} αX(ω) P(ω) + Σ_{ω∈Ω} βY(ω) P(ω)
= α Σ_{ω∈Ω} X(ω) P(ω) + β Σ_{ω∈Ω} Y(ω) P(ω)
= α E_P(X) + β E_P(Y).
Expectation: Coin Flipping
Example (1.3): A fair coin is tossed three times. The player receives one dollar each time head appears and loses one dollar when tail occurs. Let the random variable X represent the player's payoff. The sample space Ω is defined as Ω = {0, 1, 2, 3}, where k ∈ Ω represents the number of times head occurs. The probability measure is given by
P(k) = 1/8 if k = 0 or k = 3, and P(k) = 3/8 if k = 1 or k = 2.
This is the binomial distribution with n = 3 and p = 1/2.
Expectation: Coin Flipping
Example (1.3 Continued): The random variable X is defined as
X(k) = −3 if k = 0, −1 if k = 1, 1 if k = 2, 3 if k = 3.
Hence the player's expected payoff equals
E_P(X) = Σ_{k=0}^{3} X(k) P(k) = −3 · 1/8 − 1 · 3/8 + 1 · 3/8 + 3 · 1/8 = 0.
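The computation above is easy to verify numerically (a sketch; the payoff is written as X(k) = 2k − 3, i.e. k heads and 3 − k tails):

```python
from math import comb

# P(k) for k heads in three fair tosses, and payoff X(k) = 2k - 3
pmf = {k: comb(3, k) * 0.5**3 for k in range(4)}
payoff = {k: 2 * k - 3 for k in range(4)}

expected = sum(payoff[k] * pmf[k] for k in range(4))
print(expected)  # 0.0
```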
Expectation of a Function of a Random Variable
Function of a random variable: Let X be a random variable and P be a probability measure on a discrete sample space Ω. We define Y = f(X), where f : R → R is an arbitrary function. Then Y is also a random variable on the sample space Ω endowed with the same probability measure P. Moreover,
E_P(Y) = E_P(f(X)) = Σ_{ω∈Ω} f(X(ω)) P(ω).
If a random variable X is deterministic, then E_P(X) = X and E_P(f(X)) = f(X).
PART 3: EQUIVALENCE OF PROBABILITY MEASURES
Equivalence of Probability Measures
Let P and Q be two probability measures on a discrete sample space Ω.
Definition (Equivalence of Probability Measures): We say that the probability measures P and Q are equivalent, and we write P ∼ Q, whenever for all ω ∈ Ω
P(ω) > 0 ⇔ Q(ω) > 0.
The random variable L : Ω → R_+ given as L(ω) = Q(ω)/P(ω) is called the Radon-Nikodym density of Q with respect to P. Note that
E_Q(X) = Σ_{ω∈Ω} X(ω) Q(ω) = Σ_{ω∈Ω} X(ω) L(ω) P(ω) = E_P(LX).
Example: Equivalent Probability Measures
Example (Equivalent Probability Measures): The sample space Ω is defined as Ω = {1, 2, 3, 4}. Let the two probability measures P and Q be given by
P = (1/8, 3/8, 2/8, 2/8) and Q = (4/8, 1/8, 2/8, 1/8).
It is clear that P and Q are equivalent, that is, P ∼ Q. Moreover, the Radon-Nikodym density L of Q with respect to P can be represented as
L = (4, 1/3, 1, 1/2).
Check that for any random variable X: E_Q(X) = E_P(LX).
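The identity E_Q(X) = E_P(LX) from this example can be checked directly (a sketch; the test variable X below is an arbitrary choice of ours):

```python
P = [1/8, 3/8, 2/8, 2/8]
Q = [4/8, 1/8, 2/8, 1/8]
L = [q / p for p, q in zip(P, Q)]  # Radon-Nikodym density of Q w.r.t. P

X = [10.0, -2.0, 3.5, 7.0]  # an arbitrary random variable on Ω = {1, 2, 3, 4}
E_Q_X = sum(x * q for x, q in zip(X, Q))
E_P_LX = sum(l * x * p for l, x, p in zip(L, X, P))
```

Both expectations agree, and L comes out as (4, 1/3, 1, 1/2), as stated on the slide.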
PART 4: VARIANCE OF A RANDOM VARIABLE
Risky Investments
When deciding whether to invest in a given portfolio, an agent may be concerned with the risk of the investment.
Example (1.4): Consider an investor who is given an opportunity to choose between the following two options:
1. The investor either receives or loses 1,000 dollars with equal probabilities. This random payoff is denoted by X_1.
2. The investor either receives or loses 1,000,000 dollars with equal probabilities. We denote this random payoff by X_2.
Hence in both scenarios the expected value of the payoff equals zero:
E_P(X_1) = E_P(X_2) = 0.
The following question arises: which option is preferred?
Variance of a Random Variable
Definition (Variance): The variance of a random variable X on a discrete sample space Ω is defined as
Var(X) = σ² := E_P[(X − µ)²],
where P is a probability measure on Ω. Variance is a measure of the spread of a random variable about its mean and also a measure of uncertainty. In financial applications, it is common to identify the variance of the price of a security with its degree of risk. Note that the variance is non-negative: Var(X) = σ² ≥ 0. The variance is equal to 0 if and only if X is deterministic.
Variance of a Random Variable
Example (1.4 Continued): The variance of option 1 equals
Var(X_1) = (1000 − 0)²/2 + (−1000 − 0)²/2 = 10⁶.
The variance of option 2 equals
Var(X_2) = (10⁶ − 0)²/2 + (−10⁶ − 0)²/2 = 10¹².
Therefore, the option represented by X_2 is riskier than the option represented by X_1. A risk-averse agent would prefer the first option over the second. A risk-loving agent would prefer the second option over the first.
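The two variances above can be recomputed from the definition Var(X) = E_P[(X − µ)²] (a minimal sketch; `variance` is our own helper):

```python
def variance(values, probs):
    # Var(X) = E[(X - mu)^2] for a discrete distribution
    mu = sum(v * p for v, p in zip(values, probs))
    return sum((v - mu) ** 2 * p for v, p in zip(values, probs))

var1 = variance([1_000, -1_000], [0.5, 0.5])          # 10**6
var2 = variance([1_000_000, -1_000_000], [0.5, 0.5])  # 10**12
```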
Variance of a Random Variable
Theorem (1.2): Let X be a random variable and P be a probability measure on a discrete sample space Ω. Then the following equality holds:
Var(X) = E_P(X²) − µ².
Proof of Theorem 1.2.
Var(X) = E_P[(X − µ)²] = E_P(X² − 2µX + µ²)
= E_P(X²) − 2µ E_P(X) + µ² (by linearity)
= E_P(X²) − µ².
Independence of Random Variables
Definition (Independence): Two discrete random variables X and Y are independent if for all x, y ∈ R
P(X = x, Y = y) = P(X = x) P(Y = y),
where P(X = x) is the probability of the event {X = x}. A useful property of independent random variables X and Y is
E_P(XY) = E_P(X) E_P(Y).
Theorem (1.3): For independent discrete random variables X, Y and arbitrary α, β ∈ R
Var(αX + βY) = α² Var(X) + β² Var(Y).
Independence of Random Variables
Proof of Theorem 1.3. Let E_P(X) = µ_X and E_P(Y) = µ_Y. Theorem 1.1 yields E_P(αX + βY) = αµ_X + βµ_Y. Using Theorem 1.2, we obtain
Var(αX + βY) = E_P[(αX + βY)²] − (αµ_X + βµ_Y)²
= α² E_P(X²) + 2αβ E_P(XY) + β² E_P(Y²) − (αµ_X + βµ_Y)²
= α² (E_P(X²) − µ_X²) + β² (E_P(Y²) − µ_Y²) + 2αβ (E_P(X) E_P(Y) − µ_X µ_Y)
= α² Var(X) + β² Var(Y).
PART 5: CONTINUOUS RANDOM VARIABLES
Continuous Random Variables
Definition (Continuous Random Variable): A random variable X on the sample space Ω is said to have a continuous distribution if there exists a real-valued function f such that
f(x) ≥ 0, ∫_{−∞}^{∞} f(x) dx = 1,
and for all real numbers a < b
P(a ≤ X ≤ b) = ∫_a^b f(x) dx.
Then f : R → R_+ is called the probability density function (pdf) of the continuous random variable X.
Continuous Random Variables
Assume that X is a continuous random variable. Note that a probability density function need not satisfy the constraint f(x) ≤ 1. A function F(x) is called the cumulative distribution function (cdf) of a continuous random variable X if for all x ∈ R
F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(y) dy.
The relationship between the pdf f and the cdf F is
F(x) = ∫_{−∞}^{x} f(y) dy and f(x) = dF(x)/dx.
Continuous Random Variables
The expectation and variance of a continuous random variable X are defined as follows:
E_P(X) = µ := ∫_{−∞}^{∞} x f(x) dx,
Var(X) = σ² := ∫_{−∞}^{∞} (x − µ)² f(x) dx,
or, equivalently,
Var(X) = ∫_{−∞}^{∞} x² f(x) dx − µ² = E_P(X²) − (E_P(X))².
The properties of expectations of discrete random variables carry over to continuous random variables, with probability measures being replaced by pdfs and sums by integrals.
PART 6: EXAMPLES OF PROBABILITY DISTRIBUTIONS
Discrete Probability Distributions
Example (Binomial Distribution): Let Ω = {0, 1, 2, ..., n} be the sample space and let X be the number of successes in n independent trials, where p is the probability of success in a single Bernoulli trial. The probability measure P is called the binomial distribution if
P(k) = C(n, k) p^k (1 − p)^(n−k) for k = 0, ..., n,
where C(n, k) = n! / (k!(n − k)!). Then
E_P(X) = np and Var(X) = np(1 − p).
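The formulas E_P(X) = np and Var(X) = np(1 − p) can be confirmed by summing directly over the pmf (a sketch; the parameters n = 10, p = 0.3 are arbitrary choices for the check):

```python
from math import comb

n, p = 10, 0.3  # arbitrary parameters for the check
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

mean = sum(k * q for k, q in enumerate(pmf))
var = sum((k - mean) ** 2 * q for k, q in enumerate(pmf))
```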
Discrete Probability Distributions
Example (Poisson Distribution): Let the sample space be Ω = {0, 1, 2, ...}. We take an arbitrary number λ > 0. The probability measure P is called the Poisson distribution if
P(k) = λ^k e^(−λ) / k! for k = 0, 1, 2, ....
Then E_P(X) = λ = Var(X). It is known that the Poisson distribution can be obtained as the limit of binomial distributions when n tends to infinity and the sequence np_n tends to λ > 0.
Discrete Probability Distributions
Example (Geometric Distribution): Let Ω = {1, 2, 3, ...} be the sample space and X be the number of independent trials needed to achieve the first success. Let p stand for the probability of a success in a single trial. The probability measure P is called the geometric distribution if
P(k) = (1 − p)^(k−1) p for k = 1, 2, 3, ....
Then E_P(X) = 1/p and Var(X) = (1 − p)/p².
Continuous Probability Distributions
Example (Uniform Distribution): We say that X has the uniform distribution on an interval (a, b) if its pdf equals
f(x) = 1/(b − a) if x ∈ (a, b), and 0 otherwise.
It is clear that ∫_{−∞}^{∞} f(x) dx = ∫_a^b 1/(b − a) dx = 1. We have
E_P(X) = (a + b)/2 and Var(X) = (b − a)²/12.
Continuous Probability Distributions
Example (Exponential Distribution): We say that X has the exponential distribution on (0, ∞) with parameter λ > 0 if its pdf equals
f(x) = λe^(−λx) if x > 0, and 0 otherwise.
It is easy to check that ∫_{−∞}^{∞} f(x) dx = ∫_0^∞ λe^(−λx) dx = 1. We have
E_P(X) = 1/λ and Var(X) = 1/λ².
Continuous Probability Distributions
Example (Gaussian Distribution): We say that X has the Gaussian (normal) distribution with mean µ ∈ R and variance σ² > 0 if its pdf equals
f(x) = (1/√(2πσ²)) e^(−(x − µ)²/(2σ²)) for x ∈ R.
We write X ∼ N(µ, σ²). One can show that
∫_{−∞}^{∞} f(x) dx = ∫_{−∞}^{∞} (1/√(2πσ²)) e^(−(x − µ)²/(2σ²)) dx = 1.
We have E_P(X) = µ and Var(X) = σ².
Continuous Probability Distributions
Example (Standard Normal Distribution): If we set µ = 0 and σ² = 1 then we obtain the standard normal distribution N(0, 1) with the pdf
n(x) = (1/√(2π)) e^(−x²/2) for x ∈ R.
The cdf of the probability distribution N(0, 1) equals
N(x) = ∫_{−∞}^{x} n(u) du = ∫_{−∞}^{x} (1/√(2π)) e^(−u²/2) du for x ∈ R.
The values of N(x) can be found in the cumulative standard normal table (also known as the Z table). If X ∼ N(µ, σ²) then Z := (X − µ)/σ ∼ N(0, 1).
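In code, N(x) is usually evaluated via the error function rather than a Z table; a minimal sketch using only Python's standard library:

```python
from math import erf, sqrt

def std_normal_cdf(x):
    # N(x) expressed through the error function: N(x) = (1 + erf(x / sqrt(2))) / 2
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))
```

For instance, `std_normal_cdf(1.96)` is approximately 0.975, matching the familiar Z-table entry, and the symmetry N(−x) = 1 − N(x) holds.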
LLN and CLT
Theorem (Law of Large Numbers): Assume that X_1, X_2, ... are independent and identically distributed random variables with mean µ. Then, with probability one,
(X_1 + ... + X_n)/n → µ as n → ∞.
Theorem (Central Limit Theorem): Assume that X_1, X_2, ... are independent and identically distributed random variables with mean µ and variance σ² > 0. Then for all real x
lim_{n→∞} P{ (X_1 + ... + X_n − nµ)/(σ√n) ≤ x } = ∫_{−∞}^{x} (1/√(2π)) e^(−u²/2) du = N(x).
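A quick simulation illustrates the Law of Large Numbers (a sketch; we draw uniform(0, 1) samples, so µ = 1/2, and the seed and sample size are arbitrary choices to make the run reproducible):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
n = 100_000
# X_i uniform on (0, 1), so mu = 0.5
sample_mean = sum(random.random() for _ in range(n)) / n
```

With n = 100,000 the sample mean lands very close to 0.5, in line with the theorem.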
PART 7: CORRELATION OF RANDOM VARIABLES
Covariance of Random Variables
In the real world, agents can invest in several securities and typically would like to benefit from diversification. The price of one security may affect the prices of other assets. For example, a falling stock index may lead the price of gold to rise. To quantify this effect, it is convenient to introduce the notion of covariance.
Definition (Covariance): The covariance of two random variables X_1 and X_2 is defined as
Cov(X_1, X_2) = σ_12 := E_P[(X_1 − µ_1)(X_2 − µ_2)],
where µ_i = E_P(X_i) for i = 1, 2.
Correlated and Uncorrelated Random Variables
The covariance of two random variables is a measure of the degree of variation of one variable with respect to the other. Unlike the variance of a random variable, the covariance of two random variables may take negative values.
- σ_12 > 0: An increase in one variable tends to coincide with an increase in the other.
- σ_12 < 0: An increase in one variable tends to coincide with a decrease in the other.
- σ_12 = 0: The random variables X_1 and X_2 are said to be uncorrelated.
If σ_12 ≠ 0 then X_1 and X_2 are correlated.
Definition (Uncorrelated Random Variables): We say that the random variables X_1 and X_2 are uncorrelated whenever Cov(X_1, X_2) = σ_12 = 0.
Properties of the Covariance
Theorem (1.4): The following properties are valid:
- Cov(X, X) = Var(X) = σ².
- Cov(X_1, X_2) = E_P(X_1 X_2) − µ_1 µ_2.
- E_P(X_1 X_2) = E_P(X_1) E_P(X_2) if and only if X_1 and X_2 are uncorrelated, that is, Cov(X_1, X_2) = 0.
- Var(a_1 X_1 + a_2 X_2) = a_1² σ_1² + a_2² σ_2² + 2 a_1 a_2 σ_12.
- Var(X_1 + X_2) = Var(X_1) + Var(X_2) if and only if X_1 and X_2 are uncorrelated.
- If X_1 and X_2 are independent then they are uncorrelated. The converse is not true: it may happen that X_1 and X_2 are uncorrelated but not independent.
Correlation Coefficient
We can normalise the covariance measure. We obtain in this way the so-called correlation coefficient. Note that the correlation coefficient is only defined when σ_1 > 0 and σ_2 > 0, that is, when neither of the two random variables is deterministic.
Definition (Correlation Coefficient): Let X_1 and X_2 be two random variables with variances σ_1² and σ_2² and covariance σ_12. Then the correlation coefficient ρ = ρ(X_1, X_2) = corr(X_1, X_2) is defined by
ρ := σ_12 / (σ_1 σ_2) = Cov(X_1, X_2) / √(Var(X_1) Var(X_2)).
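A short sketch computing σ_12 and ρ from a small joint pmf (the joint probabilities below are hypothetical numbers chosen for illustration):

```python
from math import sqrt

# Joint pmf of (X1, X2) on {0,1} x {0,1} (hypothetical numbers)
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

mu1 = sum(x1 * p for (x1, _), p in joint.items())
mu2 = sum(x2 * p for (_, x2), p in joint.items())
cov = sum((x1 - mu1) * (x2 - mu2) * p for (x1, x2), p in joint.items())
var1 = sum((x1 - mu1) ** 2 * p for (x1, _), p in joint.items())
var2 = sum((x2 - mu2) ** 2 * p for (_, x2), p in joint.items())
rho = cov / sqrt(var1 * var2)
```

As Theorem 1.5 below asserts, ρ always lies in [−1, 1].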
Properties of the Correlation Coefficient
Theorem (1.5): The correlation coefficient satisfies −1 ≤ ρ ≤ 1. The random variables X_1 and X_2 are uncorrelated whenever ρ = 0.
Proof. The result follows from an application of the Cauchy-Schwarz inequality: (Σ_{k=1}^n a_k b_k)² ≤ (Σ_{k=1}^n a_k²)(Σ_{k=1}^n b_k²).
- ρ = 1: If one variable increases, the other will also increase.
- 0 < ρ < 1: If one increases, the other will tend to increase.
- ρ = 0: The two random variables are uncorrelated.
- −1 < ρ < 0: If one increases, the other will tend to decrease.
- ρ = −1: If one increases, the other will decrease.
Linear Regression in Basic Statistics
If we observe the time series of prices of two securities, the following question arises: how are they related to each other? Consider two random variables X and Y on the sample space Ω = {ω_1, ω_2, ..., ω_n}. Denote X(ω_i) = x_i and Y(ω_i) = y_i. A linear fit is a relation of the following form, for α, β ∈ R,
y_i = α + βx_i + ε_i.
The number ε_i is called the residual of the ith observation. In statistics, the best linear fit to observed data is obtained through the method of least squares: minimise Σ_{i=1}^n ε_i². The line y = α + βx is called the least-squares linear regression. We denote by ε the random variable such that ε(ω_i) = ε_i. This means that we set ε := Y − α − βX.
Linear Regression in Probability Theory
In probability theory, if α and β are chosen to minimise
E_P[(Y − α − βX)²] = E_P(ε²) = Σ_{i=1}^n ε²(ω_i) P(ω_i),
then the random variable l_{Y|X} := α + βX is called the linear regression of Y on X. The next result is valid for both discrete and continuous random variables.
Theorem (1.6): The minimisers α and β are
α = µ_Y − βµ_X and β = σ_XY / σ_X².
Hence the linear regression of Y on X is given by
l_{Y|X} = (σ_XY / σ_X²)(X − µ_X) + µ_Y.
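Theorem 1.6 can be verified on data: with β = σ_XY/σ_X² and α = µ_Y − βµ_X, the residuals satisfy the least-squares normal equations, i.e. they have mean zero and are uncorrelated with X (a sketch; the data points below are made up):

```python
# Equally weighted observations (x_i, y_i); the numbers are made up
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(xs)

mu_x = sum(xs) / n
mu_y = sum(ys) / n
cov_xy = sum((x - mu_x) * (y - mu_y) for x, y in zip(xs, ys)) / n
var_x = sum((x - mu_x) ** 2 for x in xs) / n

beta = cov_xy / var_x          # Theorem 1.6
alpha = mu_y - beta * mu_x
residuals = [y - alpha - beta * x for x, y in zip(xs, ys)]
```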
Covariance Matrix
Recall that the covariance of random variables X_i and X_j equals
Cov(X_i, X_j) = σ_ij := E_P[(X_i − µ_i)(X_j − µ_j)],
where µ_i = E_P(X_i) and µ_j = E_P(X_j).
Definition (Covariance Matrix): We consider random variables X_i for i = 1, 2, ..., n with variances σ_i² = σ_ii and covariances σ_ij. The matrix S given by
S := [ σ_1²  σ_12 ... σ_1n ; σ_21  σ_2² ... σ_2n ; ... ; σ_n1  σ_n2 ... σ_n² ]
is called the covariance matrix of (X_1, ..., X_n).
Correlation Matrix
Definition (Correlation Matrix): We consider random variables X_i for i = 1, 2, ..., n with variances σ_i² = σ_ii and covariances σ_ij. The matrix P given by
P := [ 1  ρ_12 ... ρ_1n ; ρ_21  1 ... ρ_2n ; ... ; ρ_n1  ρ_n2 ... 1 ]
is called the correlation matrix of (X_1, ..., X_n).
- S and P are symmetric and positive-definite matrices.
- S = DPD, where D is a diagonal matrix whose square D² has main diagonal coinciding with the main diagonal of S.
- If the X_i's are independent (or uncorrelated) random variables, then S is a diagonal matrix and P = I is the identity matrix.
Linear Combinations of Random Variables
Theorem (1.7): Assume that the random variables Y_1 and Y_2 are linear combinations of the X_i, that is, there exist vectors a^j ∈ R^n for j = 1, 2 such that
Y_j = Σ_{i=1}^n a_i^j X_i = ⟨a^j, X⟩,
where ⟨·,·⟩ denotes the inner product in R^n and X = (X_1, ..., X_n)^T. Then
Cov(Y_1, Y_2) = (a^1)^T S a^2 = (a^1)^T DPD a^2.
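Theorem 1.7 is easy to check numerically: the covariance of Y_1 = ⟨a^1, X⟩ and Y_2 = ⟨a^2, X⟩ computed directly must agree with (a^1)^T S a^2 (a sketch; the sample data and the vectors a^1, a^2 are hypothetical):

```python
# Hypothetical equally weighted sample of (X1, X2, X3)
data = [(1.0, 2.0, 0.0), (2.0, 1.0, 1.0), (0.0, 3.0, 2.0), (1.0, 2.0, 3.0)]
n = len(data)
mus = [sum(row[i] for row in data) / n for i in range(3)]

def cov(i, j):
    return sum((row[i] - mus[i]) * (row[j] - mus[j]) for row in data) / n

S = [[cov(i, j) for j in range(3)] for i in range(3)]
a1, a2 = [1.0, -1.0, 2.0], [0.5, 1.0, 0.0]

# Direct covariance of Y1 = <a1, X> and Y2 = <a2, X>
Y1 = [sum(a * x for a, x in zip(a1, row)) for row in data]
Y2 = [sum(a * x for a, x in zip(a2, row)) for row in data]
m1, m2 = sum(Y1) / n, sum(Y2) / n
cov_direct = sum((y1 - m1) * (y2 - m2) for y1, y2 in zip(Y1, Y2)) / n

# Matrix form (a1)^T S a2
cov_matrix = sum(a1[i] * S[i][j] * a2[j] for i in range(3) for j in range(3))
```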
PART 8: CONDITIONAL DISTRIBUTIONS AND EXPECTATIONS
Conditional Distributions and Expectations
Definition (Conditional Probability): For two random variables X_1 and X_2 and arbitrary sets A_1 and A_2 such that P(X_2 ∈ A_2) ≠ 0, we define the conditional probability
P(X_1 ∈ A_1 | X_2 ∈ A_2) := P(X_1 ∈ A_1, X_2 ∈ A_2) / P(X_2 ∈ A_2)
and the conditional expectation
E_P(X_1 | X_2 ∈ A) := E_P(X_1 1_{X_2 ∈ A}) / P(X_2 ∈ A),
where 1_{X_2 ∈ A} : Ω → {0, 1} is the indicator function of {X_2 ∈ A}, that is, 1_{X_2 ∈ A}(ω) = 1 if X_2(ω) ∈ A and 0 otherwise.
Discrete Case
Assume that X and Y are discrete random variables with
p_i = P(X = x_i) > 0 for i = 1, 2, ... and p_j = P(Y = y_j) > 0 for j = 1, 2, ...,
where Σ_{i=1}^∞ p_i = 1 and Σ_{j=1}^∞ p_j = 1.
Definition (Conditional Distribution and Expectation): The conditional distribution equals
p_{X|Y}(x_i | y_j) = P(X = x_i | Y = y_j) := P(X = x_i, Y = y_j) / P(Y = y_j) = p_{i,j} / p_j,
and the conditional expectation E_P(X | Y) is given by
E_P(X | Y = y_j) := Σ_{i=1}^∞ x_i P(X = x_i | Y = y_j) = Σ_{i=1}^∞ x_i p_{i,j} / p_j.
Discrete Case
It is easy to check that
p_i = P(X = x_i) = Σ_{j=1}^∞ P(X = x_i | Y = y_j) P(Y = y_j) = Σ_{j=1}^∞ p_{X|Y}(x_i | y_j) p_j.
The expected value E_P(X) satisfies
E_P(X) = Σ_{j=1}^∞ E_P(X | Y = y_j) P(Y = y_j).
Definition (Conditional cdf): The conditional cdf F_{X|Y}(· | y_j) of X given Y is defined, for all y_j such that P(Y = y_j) > 0, by
F_{X|Y}(x | y_j) := P(X ≤ x | Y = y_j) = Σ_{x_i ≤ x} p_{X|Y}(x_i | y_j).
Hence E_P(X | Y = y_j) is the mean of the conditional distribution.
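The tower property E_P(X) = Σ_j E_P(X | Y = y_j) P(Y = y_j) can be checked on a small joint distribution (the joint probabilities below are hypothetical numbers):

```python
# Joint pmf p[(i, j)] = P(X = x_i, Y = y_j) (hypothetical numbers)
xs = [0.0, 1.0]
p = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

p_y = {j: p[(0, j)] + p[(1, j)] for j in range(2)}  # marginal of Y
cond_exp = {j: sum(xs[i] * p[(i, j)] for i in range(2)) / p_y[j] for j in range(2)}

E_direct = sum(xs[i] * q for (i, _), q in p.items())
E_tower = sum(cond_exp[j] * p_y[j] for j in range(2))
```

Both routes give the same expectation, as the identity requires.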
Continuous Case
Assume that the continuous random variables X and Y have the joint pdf f_{X,Y}(x, y).
Definition (Conditional pdf and cdf): The conditional pdf of Y given X is defined for all x such that f_X(x) > 0 and equals
f_{Y|X}(y | x) := f_{X,Y}(x, y) / f_X(x) for y ∈ R.
The conditional cdf of Y given X equals
F_{Y|X}(y | x) := P(Y ≤ y | X = x) = ∫_{−∞}^{y} f_{X,Y}(x, u) / f_X(x) du.
Continuous Case
Definition (Conditional Expectation): The conditional expectation of Y given X is defined for all x such that f_X(x) > 0 by
E_P(Y | X = x) := ∫_{−∞}^{∞} y dF_{Y|X}(y | x) = ∫_{−∞}^{∞} y f_{X,Y}(x, y) / f_X(x) dy.
An important property of conditional expectation is that
E_P(Y) = E_P(E_P(Y | X)) = ∫_{−∞}^{∞} E_P(Y | X = x) f_X(x) dx.
Hence the expected value E_P(Y) can be determined by first conditioning on X (in order to compute E_P(Y | X)) and then integrating with respect to the pdf of X.
Random processes - Chapter 1. Sets and probability 1 Random processes Chapter 1. Sets and probability 1.3 Probability space 1.3 Probability space Random processes - Chapter 1. Sets and probability 2 Probability
More informationBasics on Probability. Jingrui He 09/11/2007
Basics on Probability Jingrui He 09/11/2007 Coin Flips You flip a coin Head with probability 0.5 You flip 100 coins How many heads would you expect Coin Flips cont. You flip a coin Head with probability
More informationRandom Variables. P(x) = P[X(e)] = P(e). (1)
Random Variables Random variable (discrete or continuous) is used to derive the output statistical properties of a system whose input is a random variable or random in nature. Definition Consider an experiment
More informationSample Spaces, Random Variables
Sample Spaces, Random Variables Moulinath Banerjee University of Michigan August 3, 22 Probabilities In talking about probabilities, the fundamental object is Ω, the sample space. (elements) in Ω are denoted
More informationWeek 1 Quantitative Analysis of Financial Markets Distributions A
Week 1 Quantitative Analysis of Financial Markets Distributions A Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 October
More informationELEG 3143 Probability & Stochastic Process Ch. 2 Discrete Random Variables
Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 2 Discrete Random Variables Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Random Variable Discrete Random
More informationProbability. Table of contents
Probability Table of contents 1. Important definitions 2. Distributions 3. Discrete distributions 4. Continuous distributions 5. The Normal distribution 6. Multivariate random variables 7. Other continuous
More informationNorthwestern University Department of Electrical Engineering and Computer Science
Northwestern University Department of Electrical Engineering and Computer Science EECS 454: Modeling and Analysis of Communication Networks Spring 2008 Probability Review As discussed in Lecture 1, probability
More informationRandom Variables. Saravanan Vijayakumaran Department of Electrical Engineering Indian Institute of Technology Bombay
1 / 13 Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay August 8, 2013 2 / 13 Random Variable Definition A real-valued
More informationLecture 6 Basic Probability
Lecture 6: Basic Probability 1 of 17 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 6 Basic Probability Probability spaces A mathematical setup behind a probabilistic
More information3. Review of Probability and Statistics
3. Review of Probability and Statistics ECE 830, Spring 2014 Probabilistic models will be used throughout the course to represent noise, errors, and uncertainty in signal processing problems. This lecture
More informationORF 245 Fundamentals of Statistics Chapter 4 Great Expectations
ORF 245 Fundamentals of Statistics Chapter 4 Great Expectations Robert Vanderbei Fall 2014 Slides last edited on October 20, 2014 http://www.princeton.edu/ rvdb Definition The expectation of a random variable
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 8 10/1/2008 CONTINUOUS RANDOM VARIABLES
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 8 10/1/2008 CONTINUOUS RANDOM VARIABLES Contents 1. Continuous random variables 2. Examples 3. Expected values 4. Joint distributions
More informationChapter 2. Random Variable. Define single random variables in terms of their PDF and CDF, and calculate moments such as the mean and variance.
Chapter 2 Random Variable CLO2 Define single random variables in terms of their PDF and CDF, and calculate moments such as the mean and variance. 1 1. Introduction In Chapter 1, we introduced the concept
More informationWhy study probability? Set theory. ECE 6010 Lecture 1 Introduction; Review of Random Variables
ECE 6010 Lecture 1 Introduction; Review of Random Variables Readings from G&S: Chapter 1. Section 2.1, Section 2.3, Section 2.4, Section 3.1, Section 3.2, Section 3.5, Section 4.1, Section 4.2, Section
More informationMATH Notebook 5 Fall 2018/2019
MATH442601 2 Notebook 5 Fall 2018/2019 prepared by Professor Jenny Baglivo c Copyright 2004-2019 by Jenny A. Baglivo. All Rights Reserved. 5 MATH442601 2 Notebook 5 3 5.1 Sequences of IID Random Variables.............................
More informationLecture 2: Review of Probability
Lecture 2: Review of Probability Zheng Tian Contents 1 Random Variables and Probability Distributions 2 1.1 Defining probabilities and random variables..................... 2 1.2 Probability distributions................................
More informationThree hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER.
Three hours To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER EXTREME VALUES AND FINANCIAL RISK Examiner: Answer QUESTION 1, QUESTION
More informationChapter 5. Chapter 5 sections
1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions
More informationLecture Notes 2 Random Variables. Discrete Random Variables: Probability mass function (pmf)
Lecture Notes 2 Random Variables Definition Discrete Random Variables: Probability mass function (pmf) Continuous Random Variables: Probability density function (pdf) Mean and Variance Cumulative Distribution
More informationBrief Review of Probability
Maura Department of Economics and Finance Università Tor Vergata Outline 1 Distribution Functions Quantiles and Modes of a Distribution 2 Example 3 Example 4 Distributions Outline Distribution Functions
More informationSystem Identification
System Identification Arun K. Tangirala Department of Chemical Engineering IIT Madras July 27, 2013 Module 3 Lecture 1 Arun K. Tangirala System Identification July 27, 2013 1 Objectives of this Module
More informationRandom Variables. Definition: A random variable (r.v.) X on the probability space (Ω, F, P) is a mapping
Random Variables Example: We roll a fair die 6 times. Suppose we are interested in the number of 5 s in the 6 rolls. Let X = number of 5 s. Then X could be 0, 1, 2, 3, 4, 5, 6. X = 0 corresponds to the
More informationProbability Review. Gonzalo Mateos
Probability Review Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ September 11, 2018 Introduction
More informationContinuous Random Variables
Continuous Random Variables Recall: For discrete random variables, only a finite or countably infinite number of possible values with positive probability. Often, there is interest in random variables
More informationContinuous Random Variables
1 / 24 Continuous Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay February 27, 2013 2 / 24 Continuous Random Variables
More informationp. 4-1 Random Variables
Random Variables A Motivating Example Experiment: Sample k students without replacement from the population of all n students (labeled as 1, 2,, n, respectively) in our class. = {all combinations} = {{i
More information1 Probability and Random Variables
1 Probability and Random Variables The models that you have seen thus far are deterministic models. For any time t, there is a unique solution X(t). On the other hand, stochastic models will result in
More informationExpectation of Random Variables
1 / 19 Expectation of Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay February 13, 2015 2 / 19 Expectation of Discrete
More information1 Probability theory. 2 Random variables and probability theory.
Probability theory Here we summarize some of the probability theory we need. If this is totally unfamiliar to you, you should look at one of the sources given in the readings. In essence, for the major
More informationLecture 2: Review of Basic Probability Theory
ECE 830 Fall 2010 Statistical Signal Processing instructor: R. Nowak, scribe: R. Nowak Lecture 2: Review of Basic Probability Theory Probabilistic models will be used throughout the course to represent
More informationMATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation)
MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation) Last modified: March 7, 2009 Reference: PRP, Sections 3.6 and 3.7. 1. Tail-Sum Theorem
More informationStatistics for scientists and engineers
Statistics for scientists and engineers February 0, 006 Contents Introduction. Motivation - why study statistics?................................... Examples..................................................3
More informationChapter 2. Some Basic Probability Concepts. 2.1 Experiments, Outcomes and Random Variables
Chapter 2 Some Basic Probability Concepts 2.1 Experiments, Outcomes and Random Variables A random variable is a variable whose value is unknown until it is observed. The value of a random variable results
More informationElementary Probability. Exam Number 38119
Elementary Probability Exam Number 38119 2 1. Introduction Consider any experiment whose result is unknown, for example throwing a coin, the daily number of customers in a supermarket or the duration of
More informationProbability theory basics
Probability theory basics Michael Franke Basics of probability theory: axiomatic definition, interpretation, joint distributions, marginalization, conditional probability & Bayes rule. Random variables:
More informationIEOR 3106: Introduction to Operations Research: Stochastic Models. Professor Whitt. SOLUTIONS to Homework Assignment 2
IEOR 316: Introduction to Operations Research: Stochastic Models Professor Whitt SOLUTIONS to Homework Assignment 2 More Probability Review: In the Ross textbook, Introduction to Probability Models, read
More informationTwo hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER.
Two hours MATH38181 To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER EXTREME VALUES AND FINANCIAL RISK Examiner: Answer any FOUR
More informationName: Firas Rassoul-Agha
Midterm 1 - Math 5010 - Spring 016 Name: Firas Rassoul-Agha Solve the following 4 problems. You have to clearly explain your solution. The answer carries no points. Only the work does. CALCULATORS ARE
More informationQuick Tour of Basic Probability Theory and Linear Algebra
Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra CS224w: Social and Information Network Analysis Fall 2011 Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra Outline Definitions
More informationStochastic Models in Computer Science A Tutorial
Stochastic Models in Computer Science A Tutorial Dr. Snehanshu Saha Department of Computer Science PESIT BSC, Bengaluru WCI 2015 - August 10 to August 13 1 Introduction 2 Random Variable 3 Introduction
More informationProbability and Distributions
Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated
More informationScientific Computing: Monte Carlo
Scientific Computing: Monte Carlo Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 April 5th and 12th, 2012 A. Donev (Courant Institute)
More information4th IIA-Penn State Astrostatistics School July, 2013 Vainu Bappu Observatory, Kavalur
4th IIA-Penn State Astrostatistics School July, 2013 Vainu Bappu Observatory, Kavalur Laws of Probability, Bayes theorem, and the Central Limit Theorem Rahul Roy Indian Statistical Institute, Delhi. Adapted
More informationRecitation 2: Probability
Recitation 2: Probability Colin White, Kenny Marino January 23, 2018 Outline Facts about sets Definitions and facts about probability Random Variables and Joint Distributions Characteristics of distributions
More informationL2: Review of probability and statistics
Probability L2: Review of probability and statistics Definition of probability Axioms and properties Conditional probability Bayes theorem Random variables Definition of a random variable Cumulative distribution
More informationM378K In-Class Assignment #1
The following problems are a review of M6K. M7K In-Class Assignment # Problem.. Complete the definition of mutual exclusivity of events below: Events A, B Ω are said to be mutually exclusive if A B =.
More informationDeep Learning for Computer Vision
Deep Learning for Computer Vision Lecture 3: Probability, Bayes Theorem, and Bayes Classification Peter Belhumeur Computer Science Columbia University Probability Should you play this game? Game: A fair
More informationLecture 1: Review on Probability and Statistics
STAT 516: Stochastic Modeling of Scientific Data Autumn 2018 Instructor: Yen-Chi Chen Lecture 1: Review on Probability and Statistics These notes are partially based on those of Mathias Drton. 1.1 Motivating
More information1 Random Variable: Topics
Note: Handouts DO NOT replace the book. In most cases, they only provide a guideline on topics and an intuitive feel. 1 Random Variable: Topics Chap 2, 2.1-2.4 and Chap 3, 3.1-3.3 What is a random variable?
More informationIf we want to analyze experimental or simulated data we might encounter the following tasks:
Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction
More informationFinal Exam # 3. Sta 230: Probability. December 16, 2012
Final Exam # 3 Sta 230: Probability December 16, 2012 This is a closed-book exam so do not refer to your notes, the text, or any other books (please put them on the floor). You may use the extra sheets
More informationProbability theory for Networks (Part 1) CS 249B: Science of Networks Week 02: Monday, 02/04/08 Daniel Bilar Wellesley College Spring 2008
Probability theory for Networks (Part 1) CS 249B: Science of Networks Week 02: Monday, 02/04/08 Daniel Bilar Wellesley College Spring 2008 1 Review We saw some basic metrics that helped us characterize
More informationFundamentals. CS 281A: Statistical Learning Theory. Yangqing Jia. August, Based on tutorial slides by Lester Mackey and Ariel Kleiner
Fundamentals CS 281A: Statistical Learning Theory Yangqing Jia Based on tutorial slides by Lester Mackey and Ariel Kleiner August, 2011 Outline 1 Probability 2 Statistics 3 Linear Algebra 4 Optimization
More informationProbability Review. Yutian Li. January 18, Stanford University. Yutian Li (Stanford University) Probability Review January 18, / 27
Probability Review Yutian Li Stanford University January 18, 2018 Yutian Li (Stanford University) Probability Review January 18, 2018 1 / 27 Outline 1 Elements of probability 2 Random variables 3 Multiple
More informationLectures on Elementary Probability. William G. Faris
Lectures on Elementary Probability William G. Faris February 22, 2002 2 Contents 1 Combinatorics 5 1.1 Factorials and binomial coefficients................. 5 1.2 Sampling with replacement.....................
More informationProbability and Statistics Concepts
University of Central Florida Computer Science Division COT 5611 - Operating Systems. Spring 014 - dcm Probability and Statistics Concepts Random Variable: a rule that assigns a numerical value to each
More informationIntroduction to Probability and Stocastic Processes - Part I
Introduction to Probability and Stocastic Processes - Part I Lecture 1 Henrik Vie Christensen vie@control.auc.dk Department of Control Engineering Institute of Electronic Systems Aalborg University Denmark
More informationLecture 1: August 28
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random
More informationS n = x + X 1 + X X n.
0 Lecture 0 0. Gambler Ruin Problem Let X be a payoff if a coin toss game such that P(X = ) = P(X = ) = /2. Suppose you start with x dollars and play the game n times. Let X,X 2,...,X n be payoffs in each
More informationRecap of Basic Probability Theory
02407 Stochastic Processes? Recap of Basic Probability Theory Uffe Høgsbro Thygesen Informatics and Mathematical Modelling Technical University of Denmark 2800 Kgs. Lyngby Denmark Email: uht@imm.dtu.dk
More information6.041/6.431 Fall 2010 Quiz 2 Solutions
6.04/6.43: Probabilistic Systems Analysis (Fall 200) 6.04/6.43 Fall 200 Quiz 2 Solutions Problem. (80 points) In this problem: (i) X is a (continuous) uniform random variable on [0, 4]. (ii) Y is an exponential
More informationLecture 1. ABC of Probability
Math 408 - Mathematical Statistics Lecture 1. ABC of Probability January 16, 2013 Konstantin Zuev (USC) Math 408, Lecture 1 January 16, 2013 1 / 9 Agenda Sample Spaces Realizations, Events Axioms of Probability
More informationStatistics, Probability Distributions & Error Propagation. James R. Graham
Statistics, Probability Distributions & Error Propagation James R. Graham Sample & Parent Populations Make measurements x x In general do not expect x = x But as you take more and more measurements a pattern
More informationLectures for APM 541: Stochastic Modeling in Biology. Jay Taylor
Lectures for APM 541: Stochastic Modeling in Biology Jay Taylor November 3, 2011 Contents 1 Distributions, Expectations, and Random Variables 4 1.1 Probability Spaces...................................
More informationLecture 22: Variance and Covariance
EE5110 : Probability Foundations for Electrical Engineers July-November 2015 Lecture 22: Variance and Covariance Lecturer: Dr. Krishna Jagannathan Scribes: R.Ravi Kiran In this lecture we will introduce
More information. Find E(V ) and var(v ).
Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number
More informationORF 245 Fundamentals of Statistics Great Expectations
ORF 245 Fundamentals of Statistics Great Expectations Robert Vanderbei Fall 2015 Slides last edited on November 16, 2015 http://www.princeton.edu/ rvdb Definition The expectation of a random variable is
More informationPerhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows.
Chapter 5 Two Random Variables In a practical engineering problem, there is almost always causal relationship between different events. Some relationships are determined by physical laws, e.g., voltage
More informationSTA205 Probability: Week 8 R. Wolpert
INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and
More informationLIST OF FORMULAS FOR STK1100 AND STK1110
LIST OF FORMULAS FOR STK1100 AND STK1110 (Version of 11. November 2015) 1. Probability Let A, B, A 1, A 2,..., B 1, B 2,... be events, that is, subsets of a sample space Ω. a) Axioms: A probability function
More informationStatistics and Econometrics I
Statistics and Econometrics I Random Variables Shiu-Sheng Chen Department of Economics National Taiwan University October 5, 2016 Shiu-Sheng Chen (NTU Econ) Statistics and Econometrics I October 5, 2016
More information