Chapter 1

Random Variables

1.1 Elementary Examples

We will start with elementary and intuitive examples of probability. The most well-known example is that of a fair coin: if flipped, the probabilities of getting a head or a tail both equal $1/2$. If we perform $n$ independent tosses, then the probability of obtaining $n$ heads is equal to $1/2^n$. In fact, let $S_n = X_1 + X_2 + \cdots + X_n$, where
\[
X_j = \begin{cases} 1 & \text{if the result of the $j$-th trial is a head} \\ 0 & \text{if the result of the $j$-th trial is a tail} \end{cases}
\]
Then the probability that we get $k$ heads out of $n$ tosses is equal to
\[
\mathrm{Prob}\{S_n = k\} = \frac{1}{2^n}\binom{n}{k}.
\]
In particular, using Stirling's formula, we can calculate the asymptotics of obtaining heads exactly half of the time:
\[
\mathrm{Prob}\{S_{2n} = n\} = \frac{1}{2^{2n}}\binom{2n}{n} = \frac{1}{2^{2n}}\,\frac{(2n)!}{(n!)^2} \sim \frac{1}{\sqrt{\pi n}} \to 0,
\]
as $n \to \infty$. On the other hand, since we have a fair coin, we do expect to obtain heads roughly half of the time, i.e. $S_{2n}/2n \approx 1/2$ for large $n$. Such a statement is indeed true and is embodied in the law of large numbers that we discuss in the next chapter. For the moment let us simply observe that while the probability that $S_{2n}$ equals $n$ goes to zero as $n \to \infty$, the probability that $S_{2n}/2n$ is close to $1/2$ goes to $1$ as $n \to \infty$. More precisely, for any $\varepsilon > 0$,
\[
\mathrm{Prob}\left\{\left|\frac{S_{2n}}{2n} - \frac{1}{2}\right| > \varepsilon\right\} \to 0
\]
as $n \to \infty$.
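The asymptotic $\mathrm{Prob}\{S_{2n} = n\} \sim 1/\sqrt{\pi n}$ is easy to check numerically. The sketch below (plain Python; the helper name `prob_half_heads` is ours, not from the text) compares the exact binomial probability with the Stirling approximation.

```python
import math

def prob_half_heads(n):
    """Exact probability of exactly n heads in 2n fair tosses: C(2n, n) / 4^n."""
    return math.comb(2 * n, n) / 4 ** n

for n in (10, 100, 1000):
    exact = prob_half_heads(n)
    stirling = 1.0 / math.sqrt(math.pi * n)
    print(n, exact, stirling, exact / stirling)
```

The ratio in the last column tends to $1$ as $n$ grows, while both probabilities tend to $0$.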

This is the weak law of large numbers for this particular example.

In the example of the fair coin, the number of outcomes in an experiment is finite. In contrast, the second class of elementary examples involves a continuous set of possible outcomes. Consider the orientation of a unit vector $\tau$. Denote by $S^2$ the unit sphere in $\mathbb{R}^3$. Define $f(n)$, $n \in S^2$, as the orientation distribution function, i.e. for $A \subset S^2$,
\[
\mathrm{Prob}\{\tau \in A\} = \int_A f(n)\,d\omega,
\]
where $d\omega$ is the surface area element on $S^2$. If $\tau$ does not have a preferred orientation, i.e. it has equal probability of pointing in any direction, then
\[
f(n) = \frac{1}{4\pi}.
\]
On the other hand, if $\tau$ does have a preferred orientation, say $n_0$, then we expect $f(n)$ to be peaked at $n_0$.

1.2 Probability Space

It is useful to put these intuitive notions of probability on a firm mathematical basis, as was done by Kolmogorov. For this purpose, we define the notion of a probability space, often written as a triplet $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega$ is a set called the sample space, and $\mathcal{F}$ is a collection of subsets of $\Omega$ satisfying the requirements that

(i) $\Omega \in \mathcal{F}$, $\emptyset \in \mathcal{F}$;
(ii) if $A \in \mathcal{F}$, then $\bar{A} \in \mathcal{F}$, where $\bar{A} = \Omega \setminus A$ is the complement of $A$ in $\Omega$;
(iii) if $A_1, A_2, \ldots \in \mathcal{F}$, then $\bigcup_{n \in \mathbb{N}} A_n \in \mathcal{F}$.

Such a collection of sets is called a $\sigma$-algebra, and the pair $(\Omega, \mathcal{F})$ is called a measurable space. We often use $\omega$ to denote elements of $\Omega$, which we call sample points, and each set in $\mathcal{F}$ is called an event. $\mathbb{P} : \mathcal{F} \to [0,1]$ is the probability measure or, in short, the probability, defined on $\mathcal{F}$; it satisfies

(a) $\mathbb{P}(\emptyset) = 0$, $\mathbb{P}(\Omega) = 1$;
(b) if $A_1, A_2, \ldots \in \mathcal{F}$ are pairwise disjoint, $A_i \cap A_j = \emptyset$ if $i \neq j$, then
\[
\mathbb{P}\Bigl(\bigcup_{n \in \mathbb{N}} A_n\Bigr) = \sum_{n \in \mathbb{N}} \mathbb{P}(A_n).
\]

Example (Fair coin). The probability space for the outcome of one trial can be defined as follows:
\[
\Omega = \{\text{head}, \text{tail}\}, \qquad \mathcal{F} = \text{all subsets of } \Omega = \{\emptyset, \{\text{head}\}, \{\text{tail}\}, \Omega\},
\]

and
\[
\mathbb{P}(\emptyset) = 0, \qquad \mathbb{P}(\{\text{head}\}) = \mathbb{P}(\{\text{tail}\}) = \frac{1}{2}, \qquad \mathbb{P}(\Omega) = 1.
\]
For $n$ tosses, we can take $\Omega = \{\text{head}, \text{tail}\}^n$, $\mathcal{F} = $ all subsets of $\Omega$, and
\[
\mathbb{P}(A) = \frac{1}{2^n}\,\mathrm{Card}(A),
\]
where $\mathrm{Card}(A)$ is the cardinality of the set $A$.

Within this framework, the standard rules of set theory are used to answer probability questions. For instance, if both $A, B \in \mathcal{F}$, the probability that $A$ and $B$ occur is given by $\mathbb{P}(A \cap B)$, the probability that $A$ or $B$ occurs by $\mathbb{P}(A \cup B)$, the probability that $A$ but not $B$ occurs by $\mathbb{P}(A \setminus B)$, etc. It is also elementary to show intuitive properties like
\[
A \subset B \implies \mathbb{P}(A) \leq \mathbb{P}(B),
\]
since $B = A \cup (B \setminus A)$, implying by (b) above that $\mathbb{P}(B) = \mathbb{P}(A) + \mathbb{P}(B \setminus A) \geq \mathbb{P}(A)$; or
\[
\mathbb{P}(A \cup B) \leq \mathbb{P}(A) + \mathbb{P}(B),
\]
since $A \cup B = A \cup (B \cap A^c)$, implying by (b) above that $\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B \cap A^c) \leq \mathbb{P}(A) + \mathbb{P}(B)$. This inequality is known as Boole's inequality.

Another useful notion is that of independence. Two events $A, B \in \mathcal{F}$ are independent if
\[
\mathbb{P}(A \cap B) = \mathbb{P}(A)\,\mathbb{P}(B).
\]
This generalizes to a sequence $\{A_n\}_{n \in \mathbb{N}}$ of independent events as
\[
\mathbb{P}\Bigl(\bigcap_{n \in \mathbb{N}} A_n\Bigr) = \prod_{n \in \mathbb{N}} \mathbb{P}(A_n).
\]

1.3 Conditional Probability

Let $A, B \in \mathcal{F}$ and assume that $\mathbb{P}(B) \neq 0$. Then the conditional probability of $A$ given $B$ is defined as
\[
\mathbb{P}(A \mid B) = \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)}.
\]
Clearly this corresponds to the proportion of events where both $A$ and $B$ occur, given that $B$ has occurred. For instance, the probability of obtaining two tails in two tosses of a fair coin is $1/4$, but the conditional probability of obtaining two tails is $1/2$ given that the first toss is a tail, and is zero given that the first toss is a head. One has $\mathbb{P}(\Omega \mid B) = 1$.
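The two-tails example can be verified by brute-force enumeration of the sample space $\Omega = \{\text{head}, \text{tail}\}^2$; a minimal sketch in Python (helper names are ours, not from the text), using exact rational arithmetic:

```python
from itertools import product
from fractions import Fraction

# Sample space for two tosses of a fair coin; every outcome has probability 1/4.
omega = list(product("HT", repeat=2))

def prob(event):
    """P(A) = Card(A) / 2^n for a uniform finite sample space."""
    return Fraction(len(event), len(omega))

def cond_prob(a, b):
    """P(A | B) = P(A ∩ B) / P(B)."""
    return prob([w for w in a if w in b]) / prob(b)

two_tails  = [w for w in omega if w == ("T", "T")]
first_tail = [w for w in omega if w[0] == "T"]
first_head = [w for w in omega if w[0] == "H"]

print(prob(two_tails))                    # 1/4
print(cond_prob(two_tails, first_tail))   # 1/2
print(cond_prob(two_tails, first_head))   # 0
```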

Therefore, if $A_1, A_2, \ldots$ are disjoint sets such that $\bigcup_n A_n = \Omega$, then $\sum_n \mathbb{P}(A_n \mid B) = 1$, and
\[
\mathbb{P}(A_j \mid B) = \frac{\mathbb{P}(A_j \cap B)}{\sum_n \mathbb{P}(A_n \cap B)}.
\]
This is known as Bayes's rule. Since $\mathbb{P}(A \cap B) = \mathbb{P}(A \mid B)\,\mathbb{P}(B)$ by definition, we also have by iteration
\[
\mathbb{P}(A \cap B \cap C) = \mathbb{P}(A \mid B \cap C)\,\mathbb{P}(B \mid C)\,\mathbb{P}(C),
\]
and so on.

1.4 Discrete Distributions

Making $n$ tosses of a fair coin is an example of an experiment in which the total number of outcomes is finite. More generally, if the number of elements in $\Omega = \{A_1, A_2, \ldots\}$ is finite or enumerable, and associated with each of these events there is a numerical value $X_j$, the function $X$ such that $X(A_j) = X_j$ is called a discrete random variable. It is usually convenient to simply take $X_j \in \{1, 2, \ldots, N\}$ in the finite case, and $X_j \in \mathbb{N}$ or $X_j \in \mathbb{Z}$ in the enumerable case, so that $X : \Omega \to \{1, 2, \ldots, N\}$, $X : \Omega \to \mathbb{N}$, or $X : \Omega \to \mathbb{Z}$, and we will restrict ourselves to these cases. The probability distribution of $X$ is the probability that $X$ takes the value $j$, i.e.
\[
p(j) = \mathrm{Prob}\{X = j\} = \mathbb{P}(A_j), \quad j = 1, \ldots,
\]
and it obviously satisfies
\[
p(j) \geq 0, \qquad \sum_j p(j) = 1.
\]
(The second condition also implies that $p(j) \leq 1$.) Given a function $f$ of $X$, its expectation is given by
\[
\mathbb{E}f(X) = \sum_i f(i)\,p(i),
\]
if the sum is well-defined. In particular, the $p$th moment of the distribution is defined as
\[
m_p = \sum_j j^p\, p(j),
\]
if the sum is well-defined. The first moment is also called the mean of the random variable and is denoted by $\mathrm{mean}(X)$. Another important quantity is its variance, defined as
\[
\mathrm{var}(X) = m_2 - m_1^2 = \sum_j (j - m_1)^2\, p(j).
\]
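As an illustration of these formulas (the fair six-sided die is our example, not the text's), the moments and variance of a discrete distribution can be computed exactly with rational arithmetic:

```python
from fractions import Fraction

def moment(p, k):
    """k-th moment m_k = sum_j j^k p(j) of a discrete distribution given as {j: p(j)}."""
    return sum(Fraction(j) ** k * pj for j, pj in p.items())

def variance(p):
    """var(X) = m_2 - m_1^2, equivalently sum_j (j - m_1)^2 p(j)."""
    return moment(p, 2) - moment(p, 1) ** 2

die = {j: Fraction(1, 6) for j in range(1, 7)}
print(moment(die, 1))   # mean: 7/2
print(variance(die))    # variance: 35/12
```

The two expressions for the variance agree, since $\sum_j (j - m_1)^2 p(j) = m_2 - m_1^2$ after expanding the square.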

Exercise. $S_n$, the number of heads in an outcome of $n$ tosses, is an example of a random variable whose distribution is given by
\[
p(j) = \mathrm{Prob}\{S_n = j\} = \frac{1}{2^n}\binom{n}{j}.
\]
Show that the mean and the variance of this random variable are
\[
\mathrm{mean}(S_n) = \frac{n}{2}, \qquad \mathrm{var}(S_n) = \frac{n}{4},
\]
implying that $\mathrm{var}(S_n)/(\mathrm{mean}(S_n))^2 \to 0$ as $n \to \infty$.

Poisson distribution. This is the simplest example of the distribution of randomly scattered points. Consider a set of randomly scattered points in the plane. Let $A$ be a set in $\mathbb{R}^2$, and let $X_A(\omega)$ be the number of points in $A$. $X_A$ has the Poisson distribution if
\[
\mathbb{P}\{X_A(\omega) = n\} = \frac{\lambda^n}{n!}\,e^{-\lambda},
\]
where $\lambda$ may depend on $A$ according to the density of the points. If the points are uniformly distributed in the plane, then $\lambda$ is equal to the area of $A$. Notice that in this case
\[
\lambda = \mathrm{mean}(X_A) = \mathrm{var}(X_A).
\]

1.5 Continuous Distributions

Consider now the general case when $\Omega$ is not necessarily enumerable. A function $X : \Omega \to \mathbb{R}^n$ is called a random variable on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ if the set $\{\omega : X(\omega) \in U\}$ is in $\mathcal{F}$ for every open set $U \subset \mathbb{R}^n$. The distribution of the random variable $X$ is a probability measure $\mu$ on $\mathbb{R}^n$, defined for any set $B \subset \mathbb{R}^n$ by
\[
\mu(B) = \mathrm{Prob}\{X \in B\} = \mathbb{P}\{\omega \in \Omega : X(\omega) \in B\}.
\]
If there exists an integrable function $\rho(x)$ such that
\[
\mu(B) = \int_B \rho(x)\,dx,
\]
then $\rho$ is called the probability density of $X$. Given its distribution, the expectation of $f(X)$ is defined as
\[
\mathbb{E}f(X) = \int_{\mathbb{R}^n} f(x)\,\mu(dx),
\]
if $f$ is continuous and the right-hand side is well-defined (i.e. $\int_{\mathbb{R}^n} |f(x)|\,\mu(dx) < \infty$).
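Both claims, the binomial mean $n/2$ and variance $n/4$ from the exercise, and the Poisson identity $\lambda = \mathrm{mean} = \mathrm{var}$, can be checked by direct summation; a sketch, assuming plain Python (the helper names are ours):

```python
import math

def binomial_mean_var(n):
    """Mean and variance of S_n, the number of heads in n fair tosses, by direct summation."""
    p = [math.comb(n, j) / 2 ** n for j in range(n + 1)]
    m1 = sum(j * pj for j, pj in enumerate(p))
    m2 = sum(j * j * pj for j, pj in enumerate(p))
    return m1, m2 - m1 ** 2

def poisson_mean_var(lam, nmax=200):
    """Mean and variance of Poisson(lam), truncating the series at nmax terms.
    p_n is built by the recurrence p_n = p_{n-1} * lam / n to avoid overflow."""
    pn = math.exp(-lam)          # P{X = 0}; contributes 0 to both moments
    m1 = m2 = 0.0
    for n in range(1, nmax):
        pn = pn * lam / n        # now pn = e^{-lam} lam^n / n!
        m1 += n * pn
        m2 += n * n * pn
    return m1, m2 - m1 ** 2

print(binomial_mean_var(20))   # close to (10.0, 5.0): n/2 and n/4
print(poisson_mean_var(3.0))   # close to (3.0, 3.0): mean = var = lambda
```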

Two important expectations are the mean of $X$,
\[
\mathrm{mean}(X) = \mathbb{E}X,
\]
and the variance of $X$,
\[
\mathrm{var}(X) = \mathbb{E}\bigl((X - \mathbb{E}X)(X - \mathbb{E}X)^T\bigr).
\]
The expectation of $f(X)$ can also be written as
\[
\mathbb{E}f(X) = \int_\Omega f(X(\omega))\,d\mathbb{P}(\omega),
\]
which allows us to make the connection with $L^p$-spaces. For $p \geq 1$, let
\[
L^p = L^p(\Omega, \mathcal{F}, \mathbb{P}) = \{X(\omega) : \mathbb{E}|X|^p < \infty\}.
\]
Then $L^p$ is a Banach space with norm $\|X\|_p = (\mathbb{E}|X|^p)^{1/p}$, and $L^2$ is a Hilbert space with scalar product $\langle X, Y \rangle = \mathbb{E}(X, Y)$, where $(X, Y)$ denotes the standard scalar product in $\mathbb{R}^d$. In $L^2$, we have Schwarz's inequality
\[
\mathbb{E}|(X, Y)| \leq \|X\|_2\, \|Y\|_2.
\]
More generally, we have Hölder's inequality
\[
\mathbb{E}|(X, Y)| \leq \|X\|_p\, \|Y\|_q, \qquad p > 1,\ 1/p + 1/q = 1,\ X \in L^p,\ Y \in L^q.
\]
The triangle inequality in $L^p$ is called Minkowski's inequality:
\[
\|X + Y\|_p \leq \|X\|_p + \|Y\|_p, \qquad p \geq 1,\ X, Y \in L^p.
\]

Exercise (Jensen's inequality). Let $X$ be a one-dimensional random variable such that $M(\lambda) = \mathbb{E}e^{\lambda X} < \infty$ for some $\lambda \in \mathbb{R}$. Show that
\[
M(\lambda) \geq e^{\lambda \mathbb{E}X}.
\]

Two random variables $X$ and $Y$ are called independent if
\[
\mathbb{E}f(X)g(Y) = \mathbb{E}f(X)\,\mathbb{E}g(Y)
\]
for all continuous functions $f$ and $g$. This means that the joint distribution of $X$ and $Y$ is simply the product measure of the distributions of $X$ and $Y$, i.e. $\mu(dx, dy) = \mu_x(dx)\mu_y(dy)$. A weaker notion is the following: $X$ and $Y$ are uncorrelated if $\mathrm{cov}(X, Y) = 0$, where $\mathrm{cov}(X, Y)$ is the covariance matrix of $X$ and $Y$,
\[
\mathrm{cov}(X, Y) = \mathbb{E}(X - \mathbb{E}X)(Y - \mathbb{E}Y)^T.
\]
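Schwarz's and Hölder's inequalities hold for any empirical (sample) measure as well, since they reduce to the corresponding finite-sum inequalities; a quick sanity check on deliberately dependent scalar samples (the sampling setup below is our choice, not from the text):

```python
import random

random.seed(0)

def lp_norm(xs, p):
    """Empirical L^p norm: (E|X|^p)^(1/p), with E the average over the sample."""
    return (sum(abs(x) ** p for x in xs) / len(xs)) ** (1.0 / p)

# Two dependent samples: Y is a nonlinear function of X plus noise.
xs = [random.gauss(0, 1) for _ in range(10_000)]
ys = [x ** 2 + random.gauss(0, 0.5) for x in xs]

lhs = sum(abs(x * y) for x, y in zip(xs, ys)) / len(xs)  # empirical E|XY|
print(lhs <= lp_norm(xs, 2) * lp_norm(ys, 2))            # Schwarz: True
print(lhs <= lp_norm(xs, 3) * lp_norm(ys, 1.5))          # Hölder, p = 3, q = 3/2: True
```

Both comparisons hold for every sample, independence or not, because they are exactly the finite-sum Cauchy-Schwarz and Hölder inequalities applied to the empirical measure.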

Exercise (Pairwise independence does not imply independence). Let $X, Y$ be two independent random variables such that
\[
\mathbb{P}(X = \pm 1) = \mathbb{P}(Y = \pm 1) = \frac{1}{2},
\]
and let $Z = XY$. Check that $X, Y, Z$ are pairwise independent but not independent, i.e.
\[
\mathbb{E}f(X)g(Y)h(Z) = \mathbb{E}f(X)\,\mathbb{E}g(Y)\,\mathbb{E}h(Z)
\]
does not hold for all continuous functions $f, g, h$.

1.6 Examples of Continuous Distributions

We list a few important continuous distributions.

1. Uniform distribution:
\[
\rho(x) = \begin{cases} \dfrac{1}{\mathrm{vol}(B)} & \text{if } x \in B \\ 0 & \text{otherwise} \end{cases}
\]
In one dimension, for $B = [0, 1]$ this reduces to
\[
\rho(x) = \begin{cases} 1 & \text{if } x \in [0, 1] \\ 0 & \text{otherwise} \end{cases}
\]
Pseudo-random numbers generated on the computer typically have this distribution.

2. Exponential distribution:
\[
\rho(x) = \begin{cases} 0 & \text{if } x < 0 \\ \lambda e^{-\lambda x} & \text{if } x \geq 0 \end{cases}
\]
Special cases include the distribution of the waiting time of a continuous-time Markov process ($\lambda$ is the rate of the process) and the Boltzmann distribution
\[
f(E) = \begin{cases} e^{-\beta E} & \text{if } E \geq 0 \\ 0 & \text{if } E < 0, \end{cases}
\]
where $\beta = 1/k_B T$, $k_B$ is the Boltzmann constant, and $T$ is the temperature.
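The pairwise-independence exercise above can be settled by exact enumeration of the four equally likely outcomes $(x, y, xy)$; a sketch in Python with rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# The four equally likely outcomes (x, y, z) with z = x*y; each has probability 1/4.
outcomes = [(x, y, x * y) for x, y in product((-1, 1), repeat=2)]

def joint(i, j, vi, vj):
    """P(coordinate i = vi, coordinate j = vj)."""
    hits = [w for w in outcomes if w[i] == vi and w[j] == vj]
    return Fraction(len(hits), len(outcomes))

def marginal(i, vi):
    return Fraction(len([w for w in outcomes if w[i] == vi]), len(outcomes))

# Pairwise independence: the joint law of each pair factorizes.
pairwise = all(
    joint(i, j, a, b) == marginal(i, a) * marginal(j, b)
    for i, j in [(0, 1), (0, 2), (1, 2)]
    for a, b in product((-1, 1), repeat=2)
)
# Joint independence fails: P(X=1, Y=1, Z=1) = 1/4, but the product of marginals is 1/8.
triple = Fraction(len([w for w in outcomes if w == (1, 1, 1)]), len(outcomes))
print(pairwise)                                                  # True
print(triple, marginal(0, 1) * marginal(1, 1) * marginal(2, 1))  # 1/4 vs 1/8
```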

3. Normal distribution:
\[
\rho(x) = \frac{e^{-\frac{1}{2}(x - \bar{x})^T A^{-1} (x - \bar{x})}}{(2\pi)^{n/2} (\det A)^{1/2}},
\]
where $A$ is a symmetric positive definite matrix and $\det A$ is the determinant of $A$; $\bar{x}$ is the mean, $A$ is the variance, and the random variable with the density above is denoted by $N(\bar{x}, A)$. Such random variables are also called Gaussian random variables. In dimension one, the density of a normal variable reduces to
\[
\rho(x) = \frac{e^{-(x - \bar{x})^2 / 2\sigma^2}}{\sqrt{2\pi\sigma^2}},
\]
where $\bar{x}$ is the mean and $\sigma^2$ is the variance.

Exercise. Let $X = (X_1, \ldots, X_n)$ be an $n$-dimensional Gaussian random variable and let $Y = c_1 X_1 + c_2 X_2 + \cdots + c_n X_n$, where $c_1, \ldots, c_n$ are constants. Show that $Y$ is also Gaussian.

1.7 Conditional Expectation

The concept of conditional probability extends straightforwardly to discrete random variables. Let $X$ and $Y$ be two discrete random variables, not necessarily independent, with joint probability
\[
p(i, j) = \mathbb{P}(X = i, Y = j).
\]
Since $\sum_i p(i, j) = \mathbb{P}(Y = j)$, the conditional probability that $X = i$ given that $Y = j$ is given by
\[
p(i \mid j) = \frac{p(i, j)}{\sum_i p(i, j)}
\]
if $\sum_i p(i, j) > 0$, and is conventionally taken to be zero if $\sum_i p(i, j) = 0$. From this, the natural definition of the conditional expectation of $f(X)$ given that $Y = j$ is
\[
\mathbb{E}(f(X) \mid Y = j) = \sum_i f(i)\,p(i \mid j).
\]
A difficulty arises when one tries to generalize this concept to continuous random variables. Indeed, given two continuous random variables $X$ and $Y$ with joint probability distribution $\mu(dx, dy)$, the probability that $Y = y$ is zero for most values of $y$. Therefore, the ratio involved in the definition of the conditional probability measure is not defined. An obvious way to try to fix this problem is to take limits, i.e. define the conditional probability of $X \in B$ given that $Y = y$ as
\[
\mu(B \mid y) = \lim_{\delta \to 0} \frac{\mu(B, B_\delta(y))}{\int_{\mathbb{R}^n} \mu(dx, B_\delta(y))},
\]
where $B_\delta(y)$ is the ball of radius $\delta$ centered around $y$.
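Before turning to the measure-theoretic construction, the discrete formulas $p(i \mid j)$ and $\mathbb{E}(f(X) \mid Y = j)$ above can be made concrete on a small joint table; a sketch in Python (the table values are illustrative, not from the text):

```python
from fractions import Fraction

# Hypothetical joint distribution p(i, j) = P(X = i, Y = j), with i in {0,1,2}, j in {0,1}.
F = Fraction
p = {(0, 0): F(1, 8), (0, 1): F(1, 8), (1, 0): F(1, 4),
     (1, 1): F(1, 8), (2, 0): F(1, 8), (2, 1): F(1, 4)}

def cond_p(i, j):
    """p(i | j) = p(i, j) / sum_i p(i, j), zero if the marginal vanishes."""
    pj = sum(pij for (ii, jj), pij in p.items() if jj == j)
    return p.get((i, j), F(0)) / pj if pj > 0 else F(0)

def cond_expect(f, j):
    """E(f(X) | Y = j) = sum_i f(i) p(i | j)."""
    return sum(f(i) * cond_p(i, j) for i in {i for i, _ in p})

print(cond_expect(lambda i: i, 0))  # E(X | Y = 0) = 1
print(cond_expect(lambda i: i, 1))  # E(X | Y = 1) = 5/4
```

Averaging these over the marginal of $Y$ recovers $\mathbb{E}X$, a discrete instance of the law of total expectation.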

However, requiring that this limit exists for every $y$ turns out to be very restrictive, and it is better to proceed differently. This is done as follows. One says that a measure $\nu$ is absolutely continuous with respect to a measure $\mu$, which is denoted as $\nu \ll \mu$, if for every open set $B$ such that $\mu(B) = 0$, one also has $\nu(B) = 0$. It can be shown that when $\nu \ll \mu$, there exists a function $\varphi(z)$ such that for any $B$,
\[
\nu(B) = \int_B \varphi(z)\,\mu(dz).
\]
The function $\varphi$ is unique up to equivalence, i.e. it can be modified at most on a set of zero measure with respect to $\mu(dz)$, and it is called the Radon-Nikodym derivative of $\nu$ with respect to $\mu$. This is denoted as
\[
\varphi = \frac{d\nu}{d\mu}.
\]
Note that within this terminology, the fact that $\mu$ has a density $\rho(z)$ means that $\mu$ is absolutely continuous with respect to the Lebesgue measure $dz$, with Radon-Nikodym derivative $\rho(z)$.

The conditional probability of $X \in B$ given that $Y = y$, which we denote as $\mu(B \mid y)$, can be defined as the Radon-Nikodym derivative of $\mu(dx, dy)$ with respect to the marginal probability of $Y$, $\mu_y(B) = \int_{\mathbb{R}^n} \mu(dx, B)$, i.e. it satisfies, for all open sets $A, B$,
\[
\mu(A, B) = \int_B \mu(A \mid y)\,\mu_y(dy).
\]
Thus $\mu(A \mid y)$ is only defined up to a set of measure zero with respect to $\mu_y(dy)$, which is fine in practice, and this is why this definition is less restrictive than the one above in terms of a limit. If the pair $(X, Y)$ has a density $\rho(x, y)$, then
\[
\mu(A \mid y) = \int_A \frac{\rho(x, y)}{\rho(y)}\,dx,
\]
where $\rho(y) = \int_{\mathbb{R}^n} \rho(x, y)\,dx$ is the marginal density of $Y$.

1.8 Borel-Cantelli Lemma

Many questions in probability, like the convergence of the normalized sum of independent variables to their mean (i.e. the law of large numbers treated below), involve tail events. A tail event is an event defined on a sequence of events, say $\{A_n\}_{n \in \mathbb{N}}$, whose probability does not depend on the first $A_1, \ldots, A_k$, no matter how large $k$ is.
For instance, if $S_n/n$ is the proportion of heads in $n$ tosses of a fair coin, the event that $S_n/n \to 1/2$ as $n \to \infty$ is a tail event, because the probability that
\[
\frac{S_n}{n} = \frac{1}{n}\sum_{j=1}^n X_j \to \frac{1}{2}
\]

is the same as the probability that
\[
\frac{1}{n}\sum_{j=k}^n X_j \to \frac{1}{2}
\]
for any $k \geq 1$. An important example of a tail event defined on $\{A_n\}_{n \in \mathbb{N}}$ is the event that the $A_n$ occur infinitely often, i.e.
\[
\{A_n \text{ i.o.}\} = \{\omega : \omega \in A_n \text{ i.o.}\} = \bigcap_{n \in \mathbb{N}} \bigcup_{k \geq n} A_k.
\]
The probability of such an event is specified by:

Lemma (Borel-Cantelli Lemma).

1. If $\sum_{n=1}^\infty \mathbb{P}(A_n) < \infty$, then $\mathbb{P}\{A_n \text{ i.o.}\} = 0$.
2. If the $A_n$'s are independent and $\sum_{n=1}^\infty \mathbb{P}(A_n) = \infty$, then $\mathbb{P}\{A_n \text{ i.o.}\} = 1$.

Proof. 1. For any $n$,
\[
\mathbb{P}\Bigl(\bigcap_{m=1}^\infty \bigcup_{k=m}^\infty A_k\Bigr) \leq \mathbb{P}\Bigl(\bigcup_{k=n}^\infty A_k\Bigr) \leq \sum_{k=n}^\infty \mathbb{P}(A_k),
\]
but the last term goes to $0$ as $n \to \infty$, since $\sum_{k=1}^\infty \mathbb{P}(A_k) < \infty$ by assumption.

2. Using independence, one has
\[
\mathbb{P}\Bigl(\bigcup_{k=n}^\infty A_k\Bigr) = 1 - \mathbb{P}\Bigl(\bigcap_{k=n}^\infty A_k^c\Bigr) = 1 - \prod_{k=n}^\infty \mathbb{P}(A_k^c) = 1 - \prod_{k=n}^\infty \bigl(1 - \mathbb{P}(A_k)\bigr).
\]
Using $1 - x \leq e^{-x}$, this gives
\[
\mathbb{P}\Bigl(\bigcup_{k=n}^\infty A_k\Bigr) \geq 1 - \prod_{k=n}^\infty e^{-\mathbb{P}(A_k)} = 1 - e^{-\sum_{k=n}^\infty \mathbb{P}(A_k)} = 1.
\]
Since the left-hand side is a probability, the bound must be sharp and we are done.

As an example of application of this result, we prove:

Lemma. Let $\{X_n\}_{n \in \mathbb{N}}$ be a sequence of identically distributed (not necessarily independent) random variables such that $\mathbb{E}|X_n| < \infty$. Then
\[
\lim_{n \to \infty} \frac{X_n}{n} = 0 \quad \text{a.s.}
\]
Here and below, a.s. stands for almost surely, i.e. for almost all $\omega \in \Omega$ except possibly a set of zero probability (see Definition 1.9.1).

Proof. For any $\varepsilon > 0$, define $A_n^\varepsilon = \{\omega \in \Omega : |X_n(\omega)|/n > \varepsilon\}$.

Then
\[
\begin{aligned}
\sum_{n=1}^\infty \mathbb{P}(A_n^\varepsilon) &= \sum_{n=1}^\infty \mathbb{P}\{|X_n| > n\varepsilon\} = \sum_{n=1}^\infty \mathbb{P}\{|X_1| > n\varepsilon\} \\
&= \sum_{n=1}^\infty \sum_{k=n}^\infty \mathbb{P}\{k\varepsilon < |X_1| \leq (k+1)\varepsilon\} \\
&= \sum_{k=1}^\infty k\,\mathbb{P}\{k\varepsilon < |X_1| \leq (k+1)\varepsilon\} \\
&= \sum_{k=1}^\infty k \int_{k\varepsilon < |x| \leq (k+1)\varepsilon} \mu(dx) \\
&\leq \frac{1}{\varepsilon} \sum_{k=1}^\infty \int_{k\varepsilon < |x| \leq (k+1)\varepsilon} |x|\,\mu(dx) \\
&\leq \frac{1}{\varepsilon} \int_{|x| > \varepsilon} |x|\,\mu(dx) \leq \frac{1}{\varepsilon}\,\mathbb{E}|X_1| < \infty.
\end{aligned}
\]
Therefore, if we define $B^\varepsilon = \{\omega \in \Omega : \omega \in A_n^\varepsilon \text{ i.o.}\}$, then $\mathbb{P}(B^\varepsilon) = 0$. Let $B = \bigcup_{n \in \mathbb{N}} B^{1/n}$. Then $\mathbb{P}(B) = 0$, and
\[
\lim_{n \to \infty} \frac{X_n(\omega)}{n} = 0, \quad \text{if } \omega \notin B.
\]

Note that this proof relies on a special case of a useful inequality, which we state as a lemma.

Lemma (Chebyshev's Inequality). Let $X$ be a random variable such that $\mathbb{E}|X|^k < \infty$ for some integer $k$. Then
\[
\mathbb{P}\{|X| > \lambda\} \leq \frac{\mathbb{E}|X|^k}{\lambda^k}
\]
for any positive constant $\lambda$.

Proof. For any $\lambda > 0$,
\[
\mathbb{E}|X|^k = \int_{\mathbb{R}^n} |x|^k\,\mu(dx) \geq \int_{|x| \geq \lambda} |x|^k\,\mu(dx) \geq \lambda^k \int_{|x| \geq \lambda} \mu(dx) = \lambda^k\,\mathbb{P}\{|X| \geq \lambda\}.
\]
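Chebyshev's inequality can be checked exactly for a simple discrete distribution (our example: $X$ uniform on $\{-3, \ldots, 3\}$, not from the text), using rational arithmetic so no floating-point issues arise:

```python
from fractions import Fraction

# X uniform on {-3, ..., 3}; each point has probability 1/7.
support = range(-3, 4)
p = Fraction(1, 7)

def tail(lam):
    """P(|X| > lam)."""
    return sum(p for x in support if abs(x) > lam)

def moment(k):
    """E|X|^k."""
    return sum(p * abs(x) ** k for x in support)

# Chebyshev: P(|X| > lam) <= E|X|^k / lam^k, for every lam > 0 and k.
for lam in (1, 2, 3):
    for k in (1, 2):
        assert tail(lam) <= moment(k) / Fraction(lam) ** k
print(tail(2), moment(2) / 4)  # P(|X| > 2) = 2/7, bound E|X|^2 / 4 = 1
```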

1.9 Notions of Convergence

Let $\{X_n\}_{n=1}^\infty$ be a sequence of random variables defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, and let $\mu_n$ be the distribution of $X_n$. Let $X$ be another random variable with distribution $\mu$. We will discuss four notions of convergence: almost sure convergence, convergence in probability, convergence in distribution, and convergence in $L^p$.

Definition 1.9.1 (Almost sure convergence). $X_n$ converges to $X$ almost surely as $n \to \infty$ if
\[
\mathbb{P}\{\omega : X_n(\omega) \to X(\omega)\} = 1.
\]
We express almost sure convergence as $X_n \to X$ a.s.

Definition (Convergence in probability). $X_n$ converges to $X$ in probability if for any $\varepsilon > 0$,
\[
\mathbb{P}\{\omega : |X_n(\omega) - X(\omega)| > \varepsilon\} \to 0, \quad \text{as } n \to \infty.
\]

Definition (Convergence in distribution). $X_n$ converges to $X$ in distribution if for any bounded continuous function $f$,
\[
\mathbb{E}f(X_n) \to \mathbb{E}f(X).
\]
This is denoted as $X_n \xrightarrow{d} X$, or $\mu_n \xrightarrow{d} \mu$, where $\mu_n$, $\mu$ are the distributions of $X_n$ and $X$ respectively.

Definition (Convergence in $L^p$). Let $\{X_n\}_{n \in \mathbb{N}}$ be a sequence of random variables such that $X_n \in L^p$. $X_n$ converges to $X \in L^p$ in $L^p$ (or in $p$th mean) if
\[
\mathbb{E}|X_n - X|^p \to 0.
\]
For $p = 1$ we speak of convergence in mean; for $p = 2$, of convergence in mean square.

From real analysis, we know that convergence in $L^p$ implies convergence in $L^q$ if $p \geq q$; almost sure convergence implies convergence in probability; convergence in probability implies that there is a subsequence that converges almost surely; and convergence in probability implies convergence in distribution. Convergence in $L^p$ also implies convergence in probability.

1.10 Characteristic Functions

The characteristic function of a probability measure $\mu$ is defined as
\[
f(\xi) = \mathbb{E}e^{i(\xi, X)} = \int_{\mathbb{R}^n} e^{i(\xi, x)}\,\mu(dx).
\]
It has the following obvious properties:

1. For all $\xi \in \mathbb{R}^n$, $|f(\xi)| \leq 1$, and $f(-\xi) = \overline{f(\xi)}$;
2. $f$ is uniformly continuous on $\mathbb{R}^n$.

We also have:

Theorem. Let $\{\mu_n\}_{n \in \mathbb{N}}$ be a sequence of probability measures, and let $\{f_n\}_{n \in \mathbb{N}}$ be their corresponding characteristic functions. Assume that

1. $f_n$ converges everywhere on $\mathbb{R}^n$ to a limiting function $f$;
2. $f$ is continuous at $\xi = 0$.

Then there exists a probability distribution $\mu$ such that $\mu_n \xrightarrow{d} \mu$. Moreover, $f$ is the characteristic function of $\mu$.

Conversely, if $\mu_n \xrightarrow{d} \mu$, where $\mu$ is some probability distribution, then $f_n$ converges to $f$ uniformly on every finite interval, where $f$ is the characteristic function of $\mu$.

For a proof, see [2].

As with Fourier transforms, one can also define the inverse transform
\[
\rho(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{-i(\xi, x)} f(\xi)\,d\xi.
\]
An interesting question arises as to when this gives the density of a probability measure. To answer this, we define:

Definition. A function $f$ is called positive semi-definite if for any finite set of values $\{\xi_1, \ldots, \xi_n\}$, $n \in \mathbb{N}$, the matrix $(f(\xi_i - \xi_j))$ is positive semi-definite, i.e.
\[
\sum_{i,j} f(\xi_i - \xi_j)\, v_i \bar{v}_j \geq 0
\]
for any $\{v_1, \ldots, v_n\}$.

Theorem (Khinchin). If $f$ is a positive semi-definite, uniformly continuous function and $f(0) = 1$, then it is the characteristic function of a probability measure.

Exercise (Stable laws). A one-dimensional distribution $\mu(dx)$ is stable if, given any two independent random variables $X$ and $Y$ following this distribution, there exist $a$ and $b$ such that $a(X + Y - b)$ has distribution $\mu(dx)$. Show that $f(\xi) = e^{-|\xi|^\alpha}$ is the characteristic function of a stable distribution for $0 < \alpha \leq 2$, and is not a characteristic function for other values of $\alpha$.

Exercise (Girko's circular law for random matrices). If $A$ has i.i.d. entries with mean zero and variance $\sigma^2$, then the eigenvalues of $A/\sqrt{n}\sigma$ are asymptotically uniformly distributed in the unit disk in the complex plane. Investigate numerically.
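Characteristic functions are also easy to explore numerically. For instance, $N(0,1)$ has characteristic function $f(\xi) = e^{-\xi^2/2}$ (the stable case $\alpha = 2$, up to a scaling of $\xi$), which can be compared with the empirical characteristic function of a sample; a sketch, with statistical error of order $1/\sqrt{N}$ (the helper name is ours):

```python
import cmath
import math
import random

random.seed(1)

def empirical_cf(sample, xi):
    """Empirical characteristic function: (1/N) sum_k exp(i xi x_k)."""
    return sum(cmath.exp(1j * xi * x) for x in sample) / len(sample)

# Compare a Monte Carlo estimate over N(0,1) samples with the exact exp(-xi^2/2).
sample = [random.gauss(0, 1) for _ in range(100_000)]
for xi in (0.0, 0.5, 1.0, 2.0):
    est = empirical_cf(sample, xi)
    exact = math.exp(-xi ** 2 / 2)
    print(xi, abs(est - exact))  # differences of order 1/sqrt(N) ~ 0.003
```

The same empirical estimator applied to samples of heavy-tailed stable laws (where moments fail to exist) still converges, which is one reason characteristic functions are a convenient tool.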


Lecture Notes in Applied Stochastic Analysis
Weinan E and Eric Vanden-Eijnden
This is a rough preliminary version of a textbook. Do not circulate!


More information

Chapter 2: Random Variables

Chapter 2: Random Variables ECE54: Stochastic Signals and Systems Fall 28 Lecture 2 - September 3, 28 Dr. Salim El Rouayheb Scribe: Peiwen Tian, Lu Liu, Ghadir Ayache Chapter 2: Random Variables Example. Tossing a fair coin twice:

More information

STA205 Probability: Week 8 R. Wolpert

STA205 Probability: Week 8 R. Wolpert INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and

More information

Recitation 2: Probability

Recitation 2: Probability Recitation 2: Probability Colin White, Kenny Marino January 23, 2018 Outline Facts about sets Definitions and facts about probability Random Variables and Joint Distributions Characteristics of distributions

More information

Lecture 1: August 28

Lecture 1: August 28 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random

More information

LECTURE 1. 1 Introduction. 1.1 Sample spaces and events

LECTURE 1. 1 Introduction. 1.1 Sample spaces and events LECTURE 1 1 Introduction The first part of our adventure is a highly selective review of probability theory, focusing especially on things that are most useful in statistics. 1.1 Sample spaces and events

More information

1 Random Variable: Topics

1 Random Variable: Topics Note: Handouts DO NOT replace the book. In most cases, they only provide a guideline on topics and an intuitive feel. 1 Random Variable: Topics Chap 2, 2.1-2.4 and Chap 3, 3.1-3.3 What is a random variable?

More information

PROBABILITY VITTORIA SILVESTRI

PROBABILITY VITTORIA SILVESTRI PROBABILITY VITTORIA SILVESTRI Contents Preface 2 1. Introduction 3 2. Combinatorial analysis 6 3. Stirling s formula 9 4. Properties of Probability measures 12 5. Independence 17 6. Conditional probability

More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Chapter 3: Random Variables 1

Chapter 3: Random Variables 1 Chapter 3: Random Variables 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

The main results about probability measures are the following two facts:

The main results about probability measures are the following two facts: Chapter 2 Probability measures The main results about probability measures are the following two facts: Theorem 2.1 (extension). If P is a (continuous) probability measure on a field F 0 then it has a

More information

Probability. J.R. Norris. December 13, 2017

Probability. J.R. Norris. December 13, 2017 Probability J.R. Norris December 13, 2017 1 Contents 1 Mathematical models for randomness 6 1.1 A general definition............................... 6 1.2 Equally likely outcomes.............................

More information

Formulas for probability theory and linear models SF2941

Formulas for probability theory and linear models SF2941 Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms

More information

Bayesian statistics, simulation and software

Bayesian statistics, simulation and software Module 1: Course intro and probability brush-up Department of Mathematical Sciences Aalborg University 1/22 Bayesian Statistics, Simulations and Software Course outline Course consists of 12 half-days

More information

conditional cdf, conditional pdf, total probability theorem?

conditional cdf, conditional pdf, total probability theorem? 6 Multiple Random Variables 6.0 INTRODUCTION scalar vs. random variable cdf, pdf transformation of a random variable conditional cdf, conditional pdf, total probability theorem expectation of a random

More information

Notes on Mathematics Groups

Notes on Mathematics Groups EPGY Singapore Quantum Mechanics: 2007 Notes on Mathematics Groups A group, G, is defined is a set of elements G and a binary operation on G; one of the elements of G has particularly special properties

More information

Chapter 3: Random Variables 1

Chapter 3: Random Variables 1 Chapter 3: Random Variables 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

Final Exam # 3. Sta 230: Probability. December 16, 2012

Final Exam # 3. Sta 230: Probability. December 16, 2012 Final Exam # 3 Sta 230: Probability December 16, 2012 This is a closed-book exam so do not refer to your notes, the text, or any other books (please put them on the floor). You may use the extra sheets

More information

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1 Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be

More information

MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation)

MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation) MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation) Last modified: March 7, 2009 Reference: PRP, Sections 3.6 and 3.7. 1. Tail-Sum Theorem

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

CME 106: Review Probability theory

CME 106: Review Probability theory : Probability theory Sven Schmit April 3, 2015 1 Overview In the first half of the course, we covered topics from probability theory. The difference between statistics and probability theory is the following:

More information

Recap of Basic Probability Theory

Recap of Basic Probability Theory 02407 Stochastic Processes Recap of Basic Probability Theory Uffe Høgsbro Thygesen Informatics and Mathematical Modelling Technical University of Denmark 2800 Kgs. Lyngby Denmark Email: uht@imm.dtu.dk

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

Exercises with solutions (Set D)

Exercises with solutions (Set D) Exercises with solutions Set D. A fair die is rolled at the same time as a fair coin is tossed. Let A be the number on the upper surface of the die and let B describe the outcome of the coin toss, where

More information

3. Probability and Statistics

3. Probability and Statistics FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important

More information

17. Convergence of Random Variables

17. Convergence of Random Variables 7. Convergence of Random Variables In elementary mathematics courses (such as Calculus) one speaks of the convergence of functions: f n : R R, then lim f n = f if lim f n (x) = f(x) for all x in R. This

More information

1. Probability Measure and Integration Theory in a Nutshell

1. Probability Measure and Integration Theory in a Nutshell 1. Probability Measure and Integration Theory in a Nutshell 1.1. Measurable Space and Measurable Functions Definition 1.1. A measurable space is a tuple (Ω, F) where Ω is a set and F a σ-algebra on Ω,

More information

4 Expectation & the Lebesgue Theorems

4 Expectation & the Lebesgue Theorems STA 205: Probability & Measure Theory Robert L. Wolpert 4 Expectation & the Lebesgue Theorems Let X and {X n : n N} be random variables on a probability space (Ω,F,P). If X n (ω) X(ω) for each ω Ω, does

More information

Recap of Basic Probability Theory

Recap of Basic Probability Theory 02407 Stochastic Processes? Recap of Basic Probability Theory Uffe Høgsbro Thygesen Informatics and Mathematical Modelling Technical University of Denmark 2800 Kgs. Lyngby Denmark Email: uht@imm.dtu.dk

More information

Elementary Probability. Exam Number 38119

Elementary Probability. Exam Number 38119 Elementary Probability Exam Number 38119 2 1. Introduction Consider any experiment whose result is unknown, for example throwing a coin, the daily number of customers in a supermarket or the duration of

More information

Basic Probability. Introduction

Basic Probability. Introduction Basic Probability Introduction The world is an uncertain place. Making predictions about something as seemingly mundane as tomorrow s weather, for example, is actually quite a difficult task. Even with

More information

Product measure and Fubini s theorem

Product measure and Fubini s theorem Chapter 7 Product measure and Fubini s theorem This is based on [Billingsley, Section 18]. 1. Product spaces Suppose (Ω 1, F 1 ) and (Ω 2, F 2 ) are two probability spaces. In a product space Ω = Ω 1 Ω

More information

Why study probability? Set theory. ECE 6010 Lecture 1 Introduction; Review of Random Variables

Why study probability? Set theory. ECE 6010 Lecture 1 Introduction; Review of Random Variables ECE 6010 Lecture 1 Introduction; Review of Random Variables Readings from G&S: Chapter 1. Section 2.1, Section 2.3, Section 2.4, Section 3.1, Section 3.2, Section 3.5, Section 4.1, Section 4.2, Section

More information

Measure-theoretic probability

Measure-theoretic probability Measure-theoretic probability Koltay L. VEGTMAM144B November 28, 2012 (VEGTMAM144B) Measure-theoretic probability November 28, 2012 1 / 27 The probability space De nition The (Ω, A, P) measure space is

More information

Week 12-13: Discrete Probability

Week 12-13: Discrete Probability Week 12-13: Discrete Probability November 21, 2018 1 Probability Space There are many problems about chances or possibilities, called probability in mathematics. When we roll two dice there are possible

More information

Probability Review. Chao Lan

Probability Review. Chao Lan Probability Review Chao Lan Let s start with a single random variable Random Experiment A random experiment has three elements 1. sample space Ω: set of all possible outcomes e.g.,ω={1,2,3,4,5,6} 2. event

More information

Probability and Measure

Probability and Measure Chapter 4 Probability and Measure 4.1 Introduction In this chapter we will examine probability theory from the measure theoretic perspective. The realisation that measure theory is the foundation of probability

More information

Some basic elements of Probability Theory

Some basic elements of Probability Theory Chapter I Some basic elements of Probability Theory 1 Terminology (and elementary observations Probability theory and the material covered in a basic Real Variables course have much in common. However

More information

Chapter 7. Basic Probability Theory

Chapter 7. Basic Probability Theory Chapter 7. Basic Probability Theory I-Liang Chern October 20, 2016 1 / 49 What s kind of matrices satisfying RIP Random matrices with iid Gaussian entries iid Bernoulli entries (+/ 1) iid subgaussian entries

More information

Tom Salisbury

Tom Salisbury MATH 2030 3.00MW Elementary Probability Course Notes Part V: Independence of Random Variables, Law of Large Numbers, Central Limit Theorem, Poisson distribution Geometric & Exponential distributions Tom

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

Probability and Statistics. Vittoria Silvestri

Probability and Statistics. Vittoria Silvestri Probability and Statistics Vittoria Silvestri Statslab, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, UK Contents Preface 5 Chapter 1. Discrete probability

More information

Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016

Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016 8. For any two events E and F, P (E) = P (E F ) + P (E F c ). Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016 Sample space. A sample space consists of a underlying

More information

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define HILBERT SPACES AND THE RADON-NIKODYM THEOREM STEVEN P. LALLEY 1. DEFINITIONS Definition 1. A real inner product space is a real vector space V together with a symmetric, bilinear, positive-definite mapping,

More information

Lecture 22: Variance and Covariance

Lecture 22: Variance and Covariance EE5110 : Probability Foundations for Electrical Engineers July-November 2015 Lecture 22: Variance and Covariance Lecturer: Dr. Krishna Jagannathan Scribes: R.Ravi Kiran In this lecture we will introduce

More information

A primer on basic probability and Markov chains

A primer on basic probability and Markov chains A primer on basic probability and Markov chains David Aristo January 26, 2018 Contents 1 Basic probability 2 1.1 Informal ideas and random variables.................... 2 1.2 Probability spaces...............................

More information

1 Probability theory. 2 Random variables and probability theory.

1 Probability theory. 2 Random variables and probability theory. Probability theory Here we summarize some of the probability theory we need. If this is totally unfamiliar to you, you should look at one of the sources given in the readings. In essence, for the major

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

2.1 Elementary probability; random sampling

2.1 Elementary probability; random sampling Chapter 2 Probability Theory Chapter 2 outlines the probability theory necessary to understand this text. It is meant as a refresher for students who need review and as a reference for concepts and theorems

More information

Some Basic Concepts of Probability and Information Theory: Pt. 2

Some Basic Concepts of Probability and Information Theory: Pt. 2 Some Basic Concepts of Probability and Information Theory: Pt. 2 PHYS 476Q - Southern Illinois University January 22, 2018 PHYS 476Q - Southern Illinois University Some Basic Concepts of Probability and

More information

STAT2201. Analysis of Engineering & Scientific Data. Unit 3

STAT2201. Analysis of Engineering & Scientific Data. Unit 3 STAT2201 Analysis of Engineering & Scientific Data Unit 3 Slava Vaisman The University of Queensland School of Mathematics and Physics What we learned in Unit 2 (1) We defined a sample space of a random

More information

Preliminaries. Probability space

Preliminaries. Probability space Preliminaries This section revises some parts of Core A Probability, which are essential for this course, and lists some other mathematical facts to be used (without proof) in the following. Probability

More information

Probability reminders

Probability reminders CS246 Winter 204 Mining Massive Data Sets Probability reminders Sammy El Ghazzal selghazz@stanfordedu Disclaimer These notes may contain typos, mistakes or confusing points Please contact the author so

More information

FE 5204 Stochastic Differential Equations

FE 5204 Stochastic Differential Equations Instructor: Jim Zhu e-mail:zhu@wmich.edu http://homepages.wmich.edu/ zhu/ January 20, 2009 Preliminaries for dealing with continuous random processes. Brownian motions. Our main reference for this lecture

More information

Dept. of Linguistics, Indiana University Fall 2015

Dept. of Linguistics, Indiana University Fall 2015 L645 Dept. of Linguistics, Indiana University Fall 2015 1 / 34 To start out the course, we need to know something about statistics and This is only an introduction; for a fuller understanding, you would

More information

Chp 4. Expectation and Variance

Chp 4. Expectation and Variance Chp 4. Expectation and Variance 1 Expectation In this chapter, we will introduce two objectives to directly reflect the properties of a random variable or vector, which are the Expectation and Variance.

More information