1 Probability Spaces and Random Variables


1.1 Probability spaces

Ω: sample space, consisting of elementary events (or sample points).
F: the set of events.
P: probability.

1.2 Kolmogorov's axioms

Definition (probability space) (Ω, F, P)

1.3 Basic properties

Theorem Let A_1, A_2, ... be a sequence of events.
(1) If A_1 ⊂ A_2 ⊂ A_3 ⊂ ···, then P(⋃_{n=1}^∞ A_n) = lim_{n→∞} P(A_n).
(2) If A_1 ⊃ A_2 ⊃ A_3 ⊃ ···, then P(⋂_{n=1}^∞ A_n) = lim_{n→∞} P(A_n).

1.4 Random variables and their probability distributions

Discrete random variables

A random variable X is called discrete if the number of values that X takes is finite or countably infinite. To be more precise, for a discrete random variable X there exist a (finite or infinite) sequence of real numbers a_1, a_2, ... and corresponding nonnegative numbers p_1, p_2, ... such that

    P(X = a_i) = p_i,    p_i ≥ 0,    Σ_i p_i = 1.

In this case

    µ_X(dx) = Σ_i p_i δ_{a_i}(dx) = Σ_i p_i δ(x − a_i) dx

is called the (probability) distribution of X. Obviously,

    P(a ≤ X ≤ b) = Σ_{i: a ≤ a_i ≤ b} p_i.

[Figure: point masses p_i at the points a_i on the real line.]
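The last formula translates directly into code. A minimal Python sketch, where the values a_i and weights p_i are purely illustrative and not from the notes:

```python
# Point masses p_i at points a_i (values chosen only for illustration).
values  = [1, 2, 3, 4]           # a_i
weights = [0.1, 0.2, 0.3, 0.4]   # p_i >= 0, summing to 1

def prob_interval(a, b):
    """P(a <= X <= b) = sum of p_i over all i with a <= a_i <= b."""
    return sum(p for x, p in zip(values, weights) if a <= x <= b)

assert abs(sum(weights) - 1.0) < 1e-12   # a distribution must sum to 1
print(prob_interval(2, 3))               # 0.2 + 0.3 = 0.5
```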

Example 1.4.1 (coin toss) We set

    X = 1 (heads),  X = 0 (tails).

Then P(X = 1) = p, P(X = 0) = q = 1 − p. For a fair coin we set p = 1/2.

Example 1.4.2 (waiting time) Flip a fair coin repeatedly until we get the heads. Let T be the number of coin tosses to get the first heads. (If the heads occurs at the first trial, we have T = 1; if the tails occurs at the first trial and the heads at the second trial, we have T = 2, and so on.) Then

    P(T = k) = p(1 − p)^{k−1},    k = 1, 2, ....

Continuous random variables

A random variable X is called continuous if P(X = a) = 0 for all a ∈ R. We understand intuitively that X varies continuously. If there exists a function f(x) such that

    P(a ≤ X ≤ b) = ∫_a^b f(x) dx,    a < b,

we say that X admits a probability density function. Note that

    ∫_{−∞}^{+∞} f(x) dx = 1,    f(x) ≥ 0.

In this case,

    µ_X(dx) = f(x) dx

is called the (probability) distribution of X.

[Figure: a density f(x), with the probability P(a ≤ X ≤ b) shown as the area between a and b.]

It is useful to consider the distribution function:

    F_X(x) = P(X ≤ x) = ∫_{−∞}^x f_X(t) dt,    x ∈ R.

Then we have

    f_X(x) = (d/dx) F_X(x).
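Returning to Example 1.4.2: the waiting time is easy to simulate. A minimal sketch using only the Python standard library, comparing the empirical frequencies with p(1 − p)^{k−1} for a fair coin:

```python
import random

def waiting_time(p=0.5):
    """Number of flips until the first heads (success probability p)."""
    k = 1
    while random.random() >= p:   # tails with probability 1 - p
        k += 1
    return k

N = 100_000
samples = [waiting_time() for _ in range(N)]
for k in range(1, 6):
    empirical = samples.count(k) / N
    exact = 0.5 * 0.5 ** (k - 1)   # p(1-p)^(k-1) with p = 1/2
    print(k, round(empirical, 4), exact)
```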

Remark (1) A continuous random variable does not necessarily admit a probability density function. But many continuous random variables in practical applications admit probability density functions.

(2) There is a random variable which is neither discrete nor continuous. But most random variables in practical applications are either discrete or continuous.

Example (random cut) Divide the interval [0, L] (L > 0) into two segments.

(1) Let X be the coordinate of the cutting point (the length of the segment containing 0). Then

    F_X(x) = 0 (x < 0),  x/L (0 ≤ x ≤ L),  1 (x > L);
    f_X(x) = 1/L (0 ≤ x ≤ L),  0 (otherwise).

(2) Let M be the length of the longer segment. Then

    F_M(x) = 0 (x < L/2),  (2x − L)/L (L/2 ≤ x ≤ L),  1 (x > L);
    f_M(x) = 2/L (L/2 ≤ x ≤ L),  0 (otherwise).

Example Let A be a randomly chosen point from the disc with radius R > 0. Let X be the distance between the center O and A. We have

    P(a ≤ X ≤ b) = π(b² − a²)/(πR²) = (1/R²) ∫_a^b 2x dx,    0 < a < b < R,

so the probability density function is given by

    f(x) = 2x/R² (0 ≤ x ≤ R),  0 (otherwise).

1.5 Mean values and variances

Definition The mean or expectation value of a random variable X is defined by

    m = E[X] = ∫ x µ_X(dx).

If X is discrete, we have

    E[X] = Σ_i a_i p_i.

If X admits a probability density function f(x), we have

    E[X] = ∫_{−∞}^{+∞} x f(x) dx.

Remark For a function φ(x) we have

    E[φ(X)] = ∫ φ(x) µ(dx).

For example,

    E[X^m] = ∫ x^m µ(dx)    (mth moment),
    E[e^{itX}] = ∫ e^{itx} µ(dx)    (characteristic function).
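For the disc example the moments follow by direct integration: E[X] = ∫_0^R x · (2x/R²) dx = 2R/3 and E[X²] = ∫_0^R x² · (2x/R²) dx = R²/2. A quick Monte Carlo sanity check, using rejection sampling to pick a uniform point in the disc (R = 1 here for concreteness):

```python
import random

def sample_distance(R=1.0):
    """Distance from the center of a uniformly chosen point in the disc."""
    while True:  # rejection sampling from the bounding square
        x, y = random.uniform(-R, R), random.uniform(-R, R)
        if x * x + y * y <= R * R:
            return (x * x + y * y) ** 0.5

N = 200_000
xs = [sample_distance() for _ in range(N)]
print(sum(xs) / N, 2 / 3)                 # E[X]   vs 2R/3
print(sum(x * x for x in xs) / N, 1 / 2)  # E[X^2] vs R^2/2
```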

Definition The variance of a random variable X is defined by

    σ² = V[X] = E[(X − E[X])²] = E[X²] − E[X]²,

or equivalently,

    σ² = V[X] = ∫ (x − E[X])² µ(dx) = ∫ x² µ(dx) − ( ∫ x µ(dx) )².

Example The mean value and variance of the waiting time T introduced in Example 1.4.2.

Example The mean value and variance of the random variables X and M introduced in the random cut example.

1.6 Stochastic processes

We will study the probability models for time evolution of random phenomena. Measuring a certain quantity of the random phenomenon at each time step n = 0, 1, 2, ..., we obtain a sequence of real values:

    x_0, x_1, x_2, ..., x_n, ....

Because of randomness, we consider x_n as a realized value of a random variable X_n. Here a random variable is a variable taking several different values with certain probabilities. Thus, the time evolution of a random phenomenon is modeled by a sequence of random variables

    {X_n ; n = 0, 1, 2, ...} = {X_0, X_1, X_2, ..., X_n, ...},

which is called a discrete-time stochastic process. If the measurement is performed along with continuous time, we need a continuous-time stochastic process:

    {X_t ; t ≥ 0}.

It is our purpose to construct stochastic processes modeling typical random phenomena and to demonstrate their properties within the framework of modern probability theory.

[Figure 1.1: Solar spots; nominal exchange rate (red) and real effective exchange rate (blue).]
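A discrete-time stochastic process is easy to generate on a computer. The sketch below produces one realization x_0, x_1, ... of the partial sums S_n = X_1 + ··· + X_n of coin tosses (anticipating the Bernoulli trials of Section 3.3); the choice of process is illustrative only:

```python
import random

def sample_path(n_steps=20, p=0.5):
    """One realization of S_n = X_1 + ... + X_n for Bernoulli X_k."""
    path, s = [0], 0
    for _ in range(n_steps):
        s += 1 if random.random() < p else 0   # one coin toss X_k
        path.append(s)
    return path

print(sample_path())   # e.g. [0, 1, 1, 2, 3, 3, ...]
```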

2 Probability Distributions

2.1 One-dimensional distributions

F_X(x) = P(X ≤ x): distribution function.

2.2 Discrete distributions

2.2.1 Bernoulli distribution

For 0 ≤ p ≤ 1 the distribution

    (1 − p)δ_0 + pδ_1

is called the Bernoulli distribution with success probability p. This is the distribution of coin toss. The mean value and variance are given by

    m = p,    σ² = p(1 − p).

2.2.2 Binomial distribution B(n, p)

For 0 ≤ p ≤ 1 and n ≥ 1 the distribution

    Σ_{k=0}^n C(n, k) p^k (1 − p)^{n−k} δ_k

is called the binomial distribution B(n, p). The quantity C(n, k) p^k (1 − p)^{n−k} is the probability that n coin tosses with probabilities p for heads and q = 1 − p for tails result in k heads and n − k tails. The mean value and variance of B(n, p) are given by

    m = np,    σ² = np(1 − p).

[Figure 2.1: B(20, 0.4) and the geometric distribution with parameter p.]
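The binomial weights and their first two moments can be checked directly; a minimal sketch using only the standard library (math.comb requires Python 3.8+), with the parameters of Figure 2.1:

```python
from math import comb

def binom_pmf(n, p):
    """Weights C(n,k) p^k (1-p)^(n-k), k = 0, ..., n, of B(n, p)."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

n, p = 20, 0.4
pmf = binom_pmf(n, p)
mean = sum(k * w for k, w in enumerate(pmf))
var = sum(k * k * w for k, w in enumerate(pmf)) - mean**2
print(mean, n * p)           # 8.0 vs np
print(var, n * p * (1 - p))  # 4.8 vs np(1-p)
```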

2.2.3 Geometric distribution

For 0 ≤ p ≤ 1 the distribution

    Σ_{k=1}^∞ p(1 − p)^{k−1} δ_k

is called the geometric distribution with success probability p. This is the distribution of the waiting time for the first heads (Example 1.4.2). The mean value and variance are given by

    m = 1/p,    σ² = (1 − p)/p².

Remark In some literatures, the geometric distribution with parameter p is defined by

    Σ_{k=0}^∞ p(1 − p)^k δ_k.

2.2.4 Poisson distribution

For λ > 0 the distribution

    Σ_{k=0}^∞ e^{−λ} (λ^k/k!) δ_k

is called the Poisson distribution with parameter λ. The mean and variance are given by

    m = λ,    σ² = λ.

[Figure 2.2: Poisson distributions with λ = 1/2, 1, 3.]

Problem 1 The probability generating function of the Poisson distribution is defined by

    G(z) = Σ_{k=0}^∞ p_k z^k,    p_k = e^{−λ} λ^k/k!.

(1) Find a concise expression of G(z).
(2) By using G′(1) and G″(1) find the mean value and variance of the Poisson distribution with parameter λ.
(3) Show that the probability of taking even values is greater than that of odd values, i.e.,

    Σ_{k: odd} p_k < Σ_{k: even} p_k.

(4) [additional] Show another example of a probability generating function.

2.3 Continuous distributions and density functions

2.3.1 Uniform distribution

For a finite interval [a, b],

    f(x) = 1/(b − a) (a ≤ x ≤ b),  0 (otherwise)

becomes a density function, which determines the uniform distribution on [a, b]. The mean value and the variance are given by

    m = (1/(b − a)) ∫_a^b x dx = (a + b)/2,
    σ² = (1/(b − a)) ∫_a^b x² dx − m² = (b − a)²/12.

[Figure 2.3: Uniform distribution on [a, b].]

2.3.2 Exponential distribution

The exponential distribution with parameter λ > 0 is defined by the density function

    f(x) = λe^{−λx} (x ≥ 0),  0 (otherwise).

This is a model for waiting time (continuous time). The mean value and variance are given by

    m = 1/λ,    σ² = 1/λ².

[Figure 2.4: Exponential distribution with parameter λ.]
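The exponential distribution can be sampled by the inverse-transform method: if U is uniform on (0, 1), then −log(U)/λ has density λe^{−λx}. A sketch verifying the mean 1/λ and variance 1/λ² (the value λ = 2 is illustrative):

```python
import math
import random

def sample_exponential(lam):
    """Inverse transform: -log(U)/lam has density lam * exp(-lam * x)."""
    u = 1.0 - random.random()   # uniform on (0, 1], avoids log(0)
    return -math.log(u) / lam

lam, N = 2.0, 200_000
xs = [sample_exponential(lam) for _ in range(N)]
mean = sum(xs) / N
var = sum(x * x for x in xs) / N - mean**2
print(mean, 1 / lam)    # ~0.5  vs 1/lambda
print(var, 1 / lam**2)  # ~0.25 vs 1/lambda^2
```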

2.3.3 Normal distribution

For m ∈ R and σ > 0 we may check that

    f(x) = (1/√(2πσ²)) exp{ −(x − m)²/(2σ²) }

becomes a density function. The distribution defined by the above density function is called the normal distribution or Gaussian distribution and denoted by N(m, σ²). In particular, N(0, 1) is called the standard normal distribution or the standard Gaussian distribution. The mean value and variance of N(m, σ²) are m and σ², respectively. This is verified by explicit calculation of the integrals:

    (1/√(2πσ²)) ∫_{−∞}^{+∞} x exp{ −(x − m)²/(2σ²) } dx = m,
    (1/√(2πσ²)) ∫_{−∞}^{+∞} (x − m)² exp{ −(x − m)²/(2σ²) } dx = σ².

Use the famous integral formula:

    ∫_0^∞ e^{−tx²} dx = (1/2)√(π/t),    t > 0,

which is a standard exercise of double integrals in the first year course of calculus.

Problem 2 Choose randomly a point A from the disc with radius one and let X be the radius of the inscribed circle with center A.

(1) For x ≥ 0 find the probability P(X ≤ x).
(2) Find the probability density function f_X(x) of X. (Note that x varies over all real numbers.)
(3) Calculate the mean and variance of X.
(4) Calculate the mean and variance of the area of the inscribed circle S = πX².
(5) [additional] Discuss similar questions for a ball.

[Figure: a point A in the unit disc together with the inscribed circle of radius X centered at A.]
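The normalization and the two moment integrals of N(m, σ²) displayed in 2.3.3 can also be checked numerically by a crude Riemann sum; a sketch with illustrative parameters and step size:

```python
import math

m, sigma, h = 1.0, 2.0, 0.001   # illustrative parameters and step size

def gauss(x):
    """Density of N(m, sigma^2)."""
    return math.exp(-(x - m)**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

# Riemann sum over [m - 10 sigma, m + 10 sigma]; the tails are negligible.
grid = [m - 10 * sigma + i * h for i in range(int(20 * sigma / h) + 1)]
print(h * sum(gauss(x) for x in grid))                 # ~ 1        (normalization)
print(h * sum(x * gauss(x) for x in grid))             # ~ m        (mean)
print(h * sum((x - m)**2 * gauss(x) for x in grid))    # ~ sigma^2  (variance)
```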

3 Independence and Dependence

3.1 Independent events and conditional probability

Definition (Pairwise independence) A (finite or infinite) sequence of events A_1, A_2, ... is called pairwise independent if any pair of events A_{i_1}, A_{i_2} (i_1 ≠ i_2) verifies

    P(A_{i_1} ∩ A_{i_2}) = P(A_{i_1}) P(A_{i_2}).

Definition (Independence) A (finite or infinite) sequence of events A_1, A_2, ... is called independent if any choice of finitely many events A_{i_1}, ..., A_{i_n} (i_1 < i_2 < ··· < i_n) satisfies

    P(A_{i_1} ∩ A_{i_2} ∩ ··· ∩ A_{i_n}) = P(A_{i_1}) P(A_{i_2}) ··· P(A_{i_n}).

Example Consider the trial to randomly draw a card from a deck of 52 cards. Let A be the event that the result is an ace and B the event that the result is spades. Then A, B are independent.

Remark It is allowed to consider whether the sequence of events {A, A} is independent or not. If they are independent, by definition we have P(A ∩ A) = P(A)P(A), from which P(A) = 0 or P(A) = 1 follows. Notice that P(A) = 0 does not imply A = ∅ (empty event). Similarly, P(A) = 1 does not imply A = Ω (whole event).

Definition (Conditional probability) For two events A, B the conditional probability of A relative to B (or on the hypothesis B, or for given B) is defined by

    P(A|B) = P(A ∩ B)/P(B),

whenever P(B) > 0.

Theorem Let A, B be events with P(A) > 0 and P(B) > 0. Then the following are equivalent:

    A, B are independent  ⟺  P(A|B) = P(A)  ⟺  P(B|A) = P(B).

3.2 Independent random variables

Definition A (finite or infinite) sequence of random variables X_1, X_2, ... is independent (resp. pairwise independent) if so is the sequence of events {X_1 ≤ a_1}, {X_2 ≤ a_2}, ... for any a_1, a_2, ... ∈ R. In other words, a (finite or infinite) sequence of random variables X_1, X_2, ... is independent if for any finite X_{i_1}, ..., X_{i_n} (i_1 < i_2 < ··· < i_n) and constant numbers a_1, ..., a_n,

    P(X_{i_1} ≤ a_1, X_{i_2} ≤ a_2, ..., X_{i_n} ≤ a_n) = P(X_{i_1} ≤ a_1) P(X_{i_2} ≤ a_2) ··· P(X_{i_n} ≤ a_n)    (3.1)

holds. A similar assertion holds for pairwise independence. If random variables X_1, X_2, ... are discrete, (3.1) may be replaced with

    P(X_{i_1} = a_1, X_{i_2} = a_2, ..., X_{i_n} = a_n) = P(X_{i_1} = a_1) P(X_{i_2} = a_2) ··· P(X_{i_n} = a_n).

Example Choose at random a point from the rectangle Ω = {(x, y) ; a ≤ x ≤ b, c ≤ y ≤ d}. Let X denote the x-coordinate of the chosen point and Y the y-coordinate. Then X, Y are independent.

Problem 3 (1) An urn contains four balls with numbers 112, 121, 211, 222. We draw a ball at random and let X_1 be the first digit, X_2 the second digit, and X_3 the last digit. For i = 1, 2, 3 we define an event A_i by A_i = {X_i = 1}. Show that {A_1, A_2, A_3} is pairwise independent but is not independent.
(2) Two dice are tossed. Let A be the event that the first die gives a 4, B the event that the sum is 6, and C the event that the sum is 7. Calculate P(B|A) and P(C|A), and study the independence among {A, B, C}.

3.3 Bernoulli trials

This is a model of coin-toss and is the most fundamental stochastic process. A sequence of random variables (or a discrete-time stochastic process) {X_1, X_2, ..., X_n, ...} is called the Bernoulli trials with success probability p (0 ≤ p ≤ 1) if they are independent and have the same distribution as

    P(X_n = 1) = p,    P(X_n = 0) = q = 1 − p.

By definition we have

    P(X_1 = ξ_1, X_2 = ξ_2, ..., X_n = ξ_n) = Π_{k=1}^n P(X_k = ξ_k)

for all ξ_1, ξ_2, ..., ξ_n ∈ {0, 1}. In general, the statistical quantity in the left-hand side is called a finite dimensional distribution of the stochastic process {X_n}. The total set of finite dimensional distributions characterizes a stochastic process.

3.4 Covariance and correlation coefficients

Recall that the mean of a random variable X is defined by

    m_X = E(X) = ∫ x µ_X(dx).

Theorem (Linearity) For two random variables X, Y and two constant numbers a, b it holds that

    E(aX + bY) = aE(X) + bE(Y).

Theorem (Multiplicativity) If random variables X_1, X_2, ..., X_n are independent, we have

    E[X_1 X_2 ··· X_n] = E[X_1] ··· E[X_n].    (3.2)

Proof We first prove the assertion for X_k = 1_{A_k} (indicator random variable). By definition X_1, ..., X_n are independent if and only if so are A_1, ..., A_n. Therefore,

    E[X_1 ··· X_n] = E[1_{A_1 ∩ ··· ∩ A_n}] = P(A_1 ∩ ··· ∩ A_n) = P(A_1) ··· P(A_n) = E[X_1] ··· E[X_n].

Thus (3.2) is verified. Then, by linearity the assertion is valid for X_k taking finitely many values (finite linear combinations of indicator random variables). Finally, for general X_k, coming back to the definition of Lebesgue integration, we can prove the assertion by an approximation argument.

Remark E[XY] = E[X]E[Y] is not a sufficient condition for the random variables X and Y being independent. It is merely a necessary condition!

The variance of X is defined by

    σ_X² = V(X) = E[(X − m_X)²] = E[X²] − E[X]².

By means of the distribution µ(dx) of X we may write

    V(X) = ∫ (x − m_X)² µ(dx) = ∫ x² µ(dx) − ( ∫ x µ(dx) )².
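The remark above can be illustrated numerically, anticipating the counterexample of the next subsection: with X taking the values −1, 0, 1 and Y = X², one finds E[XY] = E[X]E[Y] even though X and Y are far from independent. A minimal sketch:

```python
import random

def sample_X():
    """P(X = -1) = P(X = 1) = 1/4, P(X = 0) = 1/2."""
    u = random.random()
    return -1 if u < 0.25 else (1 if u < 0.5 else 0)

N = 200_000
pairs = [(x, x * x) for x in (sample_X() for _ in range(N))]
ex  = sum(x for x, _ in pairs) / N
ey  = sum(y for _, y in pairs) / N
exy = sum(x * y for x, y in pairs) / N
print(exy, ex * ey)   # both ~ 0: E[XY] = E[X]E[Y] holds, ...

# ... yet X and Y = X^2 are not independent:
p_x1 = sum(1 for x, _ in pairs if x == 1) / N
p_y0 = sum(1 for _, y in pairs if y == 0) / N
p_joint = sum(1 for x, y in pairs if x == 1 and y == 0) / N
print(p_joint, p_x1 * p_y0)   # 0 vs ~ 0.125
```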

Definition The covariance of two random variables X, Y is defined by

    Cov(X, Y) = σ_XY = E[(X − E(X))(Y − E(Y))] = E[XY] − E[X]E[Y].

In particular, σ_XX = σ_X² becomes the variance of X. The correlation coefficient of two random variables X, Y is defined by

    ρ_XY = σ_XY/(σ_X σ_Y),

whenever σ_X > 0 and σ_Y > 0.

Definition X, Y are called uncorrelated if σ_XY = 0. They are called positively (resp. negatively) correlated if σ_XY > 0 (resp. σ_XY < 0).

Theorem If two random variables X, Y are independent, they are uncorrelated.

Remark The converse of the above theorem is not true in general. Let X be a random variable satisfying

    P(X = −1) = P(X = 1) = 1/4,    P(X = 0) = 1/2,

and set Y = X². Then, X, Y are not independent, but σ_XY = 0. On the other hand, for random variables X, Y taking only two values, the converse is valid (see Problem 5).

Theorem (Additivity of variance) Let X_1, X_2, ..., X_n be random variables, any pair of which is uncorrelated. Then

    V[ Σ_{k=1}^n X_k ] = Σ_{k=1}^n V[X_k].

Theorem |ρ_XY| ≤ 1 for two random variables X, Y with σ_X > 0, σ_Y > 0.

Proof Note that E[{t(X − m_X) + (Y − m_Y)}²] ≥ 0 for all t ∈ R.

Problem 4 Throw two dice and let L be the larger spot and S the smaller. (If double spots, set L = S.)
(1) Show the joint probability of (L, S) by a table.
(2) Calculate the correlation coefficient ρ_LS and explain the meaning of the sign of ρ_LS.

Problem 5 Let X and Y be random variables such that

    P(X = a) = p_1,  P(X = b) = q_1 = 1 − p_1,  P(Y = c) = p_2,  P(Y = d) = q_2 = 1 − p_2,

where a, b, c, d are constant numbers and 0 < p_1 < 1, 0 < p_2 < 1. Show that X, Y are independent if and only if σ_XY = 0. Explain the significance of this case. [Hint: In general, uncorrelated random variables are not necessarily independent.]

3.5 Convolutions of probability distributions

Perhaps omitted.

4 Limit Theorems

4.1 Simulation of Coin Toss

Let {X_n} be the Bernoulli trials with success probability 1/2, namely, tossing a fair coin, and consider the binomial process defined by

    S_n = Σ_{k=1}^n X_k.

Since S_n counts the number of heads during the first n trials,

    S_n/n = (1/n) Σ_{k=1}^n X_k

gives the relative frequency of heads during the first n trials. The following is just one example showing that the relative frequency of heads S_n/n tends to 1/2. It is our question how to describe this phenomenon mathematically. A naive formula:

    lim_{n→∞} S_n/n = 1/2    (4.1)

is not acceptable. Why?

[Figure 4.1: Relative frequency of heads S_n/n.]

4.2 Law of Large Numbers (LLN)

Theorem (Weak law of large numbers) Let X_1, X_2, ... be identically distributed random variables with mean m and variance σ². (This means that X_i has a finite variance.) If X_1, X_2, ... are uncorrelated, for any ϵ > 0 we have

    lim_{n→∞} P( | (1/n) Σ_{k=1}^n X_k − m | ≥ ϵ ) = 0.

We say that (1/n) Σ_{k=1}^n X_k converges to m in probability.

Remark In many literatures the weak law of large numbers is stated under the assumption that X_1, X_2, ... are independent. It is noticeable that the same result holds under the weaker assumption of being uncorrelated.
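The simulation behind Figure 4.1 is a few lines of Python; a minimal sketch printing the relative frequency S_n/n at several values of n (the seed fixes one particular sample path):

```python
import random

random.seed(1)   # fix the seed so the run is reproducible
s = 0
for n in range(1, 100_001):
    s += 1 if random.random() < 0.5 else 0   # one fair-coin toss
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(n, s / n)   # S_n / n drifts toward 1/2
```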

Theorem (Chebyshev inequality) Let X be a random variable with mean m and variance σ². Then, for any ϵ > 0 we have

    P(|X − m| ≥ ϵ) ≤ σ²/ϵ².

Proof Set A = {|X − m| ≥ ϵ} and let 1_A be the indicator random variable. Then we have

    σ² = E[(X − m)²] = E[(X − m)² 1_A + (X − m)² 1_{A^c}] ≥ E[(X − m)² 1_A] ≥ E[ϵ² 1_A] = ϵ² P(A),

where we used the obvious relation E[1_A] = P(A).

Proof [Theorem (Weak law of large numbers)] For simplicity we set

    Y = Y_n = (1/n) Σ_{k=1}^n X_k.

The mean value is given by

    E[Y] = (1/n) Σ_{k=1}^n E[X_k] = m.

Since X_1, X_2, ... are pairwise uncorrelated, the variance is computed by using the additive property of variance. In fact, we have

    V[Y] = (1/n²) V[ Σ_{k=1}^n X_k ] = (1/n²) Σ_{k=1}^n V[X_k] = (1/n²) · nσ² = σ²/n.

On the other hand, applying the Chebyshev inequality, we have

    P(|Y − m| ≥ ϵ) ≤ V[Y]/ϵ² = σ²/(nϵ²).

Consequently,

    lim_{n→∞} P(|Y_n − m| ≥ ϵ) = 0,

as desired.

Example (Coin toss)

Theorem (Strong law of large numbers) Let X_1, X_2, ... be identically distributed random variables with mean m. (This means that X_i has a mean but is not assumed to have a finite variance.) If X_1, X_2, ... are pairwise independent, we have

    P( lim_{n→∞} (1/n) Σ_{k=1}^n X_k = m ) = 1.

In other words,

    lim_{n→∞} (1/n) Σ_{k=1}^n X_k = m    a.s.

Remark Kolmogorov proved the strong law of large numbers under the assumption that X_1, X_2, ... are independent. In many literatures, the strong law of large numbers is stated as Kolmogorov proved. Its proof being based on the so-called Kolmogorov almost sure convergence theorem, we cannot relax the assumption of independence. The above theorem is due to N. Etemadi (1981), where the assumption is relaxed to being pairwise independent and the proof is more elementary; see also the books by Sato, by Durrett, etc.

4.3 Central Limit Theorem (CLT)

Theorem (Central limit theorem) Let Z_1, Z_2, ... be independent identically distributed (iid) random variables with mean 0 and variance 1. Then, for any x ∈ R it holds that

    lim_{n→∞} P( (1/√n) Σ_{k=1}^n Z_k ≤ x ) = (1/√(2π)) ∫_{−∞}^x e^{−t²/2} dt.    (4.2)

In short, (1/√n) Σ_{k=1}^n Z_k → N(0, 1) weakly as n → ∞. The proof is by characteristic functions (Fourier transform); see the textbooks.
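The limit (4.2) can be watched numerically. A sketch: the Z_k below are standardized uniform variables (mean 0, variance 1, an illustrative choice), and the empirical frequency of {(1/√n) Σ Z_k ≤ x} is compared with the standard normal distribution function, computed via math.erf:

```python
import math
import random

def normalized_sum(n):
    """(1/sqrt(n)) * sum of n iid standardized uniform variables."""
    # U uniform on [0,1) has mean 1/2 and variance 1/12.
    return sum((random.random() - 0.5) * math.sqrt(12) for _ in range(n)) / math.sqrt(n)

n, N, x = 30, 50_000, 1.0
freq = sum(1 for _ in range(N) if normalized_sum(n) <= x) / N
phi = 0.5 * (1 + math.erf(x / math.sqrt(2)))   # standard normal distribution function
print(freq, phi)   # both ~ 0.841
```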

Example The de Moivre–Laplace theorem claims that

    B(n, p) ≈ N(np, np(1 − p)).    (4.3)

[Figure 4.2: The normal distribution whose mean and variance are the same as B(100, 0.4).]

This is a special case of the CLT. Let {X_1, X_2, ...} be the Bernoulli trials with success probability p. Set

    Z_k = (X_k − m)/σ,    m = E[X_k] = p,    σ² = V[X_k] = p(1 − p),

so that {Z_k} are iid random variables with mean 0 and variance 1. Applying the central limit theorem, we have (4.2). For the left-hand side we see that

    (1/√n) Σ_{k=1}^n Z_k = (1/√n) Σ_{k=1}^n (X_k − m)/σ = (1/(σ√n)) ( Σ_{k=1}^n X_k − nm ).

Then (4.2) becomes

    lim_{n→∞} P( Σ_{k=1}^n X_k ≤ nm + xσ√n ) = (1/√(2π)) ∫_{−∞}^x e^{−t²/2} dt.

Setting y = nm + xσ√n, we have

    P( Σ_{k=1}^n X_k ≤ y ) ≈ (1/√(2π)) ∫_{−∞}^{(y−nm)/(σ√n)} e^{−t²/2} dt = (1/√(2πnσ²)) ∫_{−∞}^y e^{−(t−nm)²/(2nσ²)} dt.

Thus, for a large n we have

    Σ_{k=1}^n X_k ≈ N(nm, nσ²) = N(np, np(1 − p)).

On the other hand, we know that Σ_{k=1}^n X_k obeys B(n, p), of which the mean value and variance are given by np and np(1 − p). Consequently, for a large n we have (4.3). The approximation (4.3) means that the distribution functions are almost the same.

Problem 6 (Monte Carlo simulation) Let f(x) be a continuous function on the interval [0, 1] and consider the integral

    ∫_0^1 f(x) dx.    (4.4)

(1) Let X be a random variable obeying the uniform distribution on [0, 1]. Give expressions for the mean value E[f(X)] and variance V[f(X)] of the random variable f(X).
(2) Let x_1, x_2, ... be a sequence of random numbers taken from [0, 1]. Explain that the arithmetic mean

    (1/n) Σ_{k=1}^n f(x_k)

is a good approximation of the integral (4.4) by means of the law of large numbers and the central limit theorem.
(3) By using a computer, verify the above fact for f(x) = √(1 − x²).
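A sketch of the computation asked for in Problem 6(3): since ∫_0^1 √(1 − x²) dx = π/4, the Monte Carlo average should approach π/4 ≈ 0.7854 as n grows.

```python
import math
import random

def monte_carlo(f, n):
    """Approximate the integral of f over [0, 1] by (1/n) * sum f(x_k)."""
    return sum(f(random.random()) for _ in range(n)) / n

f = lambda x: math.sqrt(1 - x * x)
for n in (100, 10_000, 1_000_000):
    print(n, monte_carlo(f, n), math.pi / 4)
```

By the CLT, the error of the average decreases at the rate √(V[f(X)]/n), which is why the digits stabilize slowly as n increases.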

5 Markov Chains

5.1 Conditional Probability

For two events A, B the conditional probability of A relative (subject) to B is defined by

    P(A|B) = P(A ∩ B)/P(B),    whenever P(B) > 0;    (5.1)

see Section 3.1. Formula (5.1) is often used in the following form:

    P(A ∩ B) = P(B)P(A|B).    (5.2)

This is the so-called theorem on compound probabilities, giving a ground to the usage of tree diagrams in computation of probability. For example, for two events A, B see Fig. 5.1.

[Figure 5.1: Tree diagram — the first branching splits into A and A^c with probabilities P(A) and P(A^c); each then branches into B and B^c with the conditional probabilities P(B|A), P(B^c|A), P(B|A^c), P(B^c|A^c).]

Theorem (Compound probabilities) For events A_1, A_2, ..., A_n we have

    P(A_1 ∩ A_2 ∩ ··· ∩ A_n) = P(A_1) P(A_2|A_1) P(A_3|A_1 ∩ A_2) ··· P(A_n|A_1 ∩ A_2 ∩ ··· ∩ A_{n−1}).    (5.3)

Proof Straightforward by induction on n.

5.2 Markov Chains

Let S be a finite or countable set. Consider a discrete-time stochastic process {X_n ; n = 0, 1, 2, ...} taking values in S. This S is called a state space and is not necessarily a subset of R in general. In the following we often meet the cases of S = {0, 1}, S = {1, 2, ..., N} and S = {0, 1, 2, ...}.

Definition Let {X_n ; n = 0, 1, 2, ...} be a discrete-time stochastic process over S. It is called a Markov process over S if

    P(X_m = j | X_{n_1} = i_1, X_{n_2} = i_2, ..., X_{n_k} = i_k, X_n = i) = P(X_m = j | X_n = i)

holds for any 0 ≤ n_1 < n_2 < ··· < n_k < n < m and i_1, i_2, ..., i_k, i, j ∈ S.

If {X_1, X_2, ...} are independent random variables with values in S, obviously they form a Markov chain. Hence the Markov property is weaker than independence.

Theorem (Multiplication rule) Let {X_n} be a Markov chain on S. Then, for any 0 ≤ n_1 < n_2 < ··· < n_k and i_1, i_2, ..., i_k ∈ S we have

    P(X_{n_1} = i_1, X_{n_2} = i_2, ..., X_{n_k} = i_k)
        = P(X_{n_1} = i_1) P(X_{n_2} = i_2 | X_{n_1} = i_1) P(X_{n_3} = i_3 | X_{n_2} = i_2) ··· P(X_{n_k} = i_k | X_{n_{k−1}} = i_{k−1}).

Definition For a Markov chain {X_n} over S, P(X_{n+1} = j | X_n = i) is called the transition probability at time n from a state i to j. If this is independent of n, the Markov chain is called time homogeneous. Hereafter a Markov chain is always assumed to be time homogeneous. In this case the transition probability is denoted by

    p_ij = p(i, j) = P(X_{n+1} = j | X_n = i)

and the transition matrix is defined by P = [p_ij].

Definition A matrix P = [p_ij] with index set S is called a stochastic matrix if

    p_ij ≥ 0,    Σ_{j∈S} p_ij = 1.

Theorem The transition matrix of a Markov chain is a stochastic matrix. Conversely, given a stochastic matrix we can construct a Markov chain of which the transition matrix coincides with the given stochastic matrix.

Example (2-state Markov chain) A Markov chain over the state space {0, 1} is determined by the transition probabilities:

    p(0, 1) = p,  p(0, 0) = 1 − p,  p(1, 0) = q,  p(1, 1) = 1 − q.

The transition matrix is defined by

    P = [ 1−p   p  ]
        [  q   1−q ].

[Transition diagram: states 0 and 1 with arrows 0 → 1 (p), 0 → 0 (1 − p), 1 → 0 (q), 1 → 1 (1 − q).]
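The 2-state chain above is easy to simulate from its transition matrix; a minimal Python sketch (the values of p and q are illustrative):

```python
import random

p, q = 0.25, 0.20                  # illustrative transition probabilities
P = [[1 - p, p],                   # row i: distribution of X_{n+1} given X_n = i
     [q, 1 - q]]

def step(i):
    """One transition from state i according to row P[i]."""
    return 0 if random.random() < P[i][0] else 1

state, counts = 0, [0, 0]
for _ in range(100_000):
    state = step(state)
    counts[state] += 1
# Long-run fractions of time in each state, close to [q, p] / (p + q).
print([c / 100_000 for c in counts])
```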

Example (3-state Markov chain) An animal is healthy, sick or dead, and changes its state every day. Consider a Markov chain on {H, S, D} described by the following transition diagram.

[Transition diagram: H → H (a), H → S (b), S → H (r), S → S (p), S → D (q), D → D (1).]

The transition matrix is defined by

    P = [ a  b  0 ]
        [ r  p  q ]
        [ 0  0  1 ],    a + b = 1,    p + q + r = 1.

Example (Random walk on Z¹) The random walk on Z¹ is illustrated as follows.

[Diagram: at each site of Z¹, a step to the right with probability p and to the left with probability q.]

The transition probabilities are given by

    p(i, j) = p if j = i + 1,  q = 1 − p if j = i − 1,  0 otherwise.

The transition matrix is a two-sided infinite matrix given by

        [ ⋱  ⋱  ⋱            ]
        [    q  0  p          ]
    P = [       q  0  p       ]
        [          q  0  p    ]
        [            ⋱  ⋱  ⋱ ]

Example (Random walk with absorbing barriers) Let A > 0 and B > 0. The state space of a random walk with absorbing barriers at −A and B is S = {−A, −A + 1, ..., B − 1, B}. Then the transition probabilities are given as follows. For −A < i < B,

    p(i, j) = p if j = i + 1,  q = 1 − p if j = i − 1,  0 otherwise.

For i = −A or i = B,

    p(−A, j) = 1 if j = −A, 0 otherwise;    p(B, j) = 1 if j = B, 0 otherwise.

In a matrix form we have

        [ 1  0                ]
        [ q  0  p             ]
    P = [    q  0  p          ]
        [       ⋱  ⋱  ⋱      ]
        [          q  0  p    ]
        [             0  0  1 ]

[Transition diagram: absorbing random walk on {−A, ..., B}; interior states move right with probability p and left with probability q; the barriers −A and B are absorbing.]

Example (Random walk with reflecting barriers) Let A > 0 and B > 0. The state space of a random walk with reflecting barriers at −A and B is S = {−A, −A + 1, ..., B − 1, B}. The transition probabilities are given as follows. For −A < i < B,

    p(i, j) = p if j = i + 1,  q = 1 − p if j = i − 1,  0 otherwise.

For i = −A or i = B,

    p(−A, j) = 1 if j = −A + 1, 0 otherwise;    p(B, j) = 1 if j = B − 1, 0 otherwise.

In a matrix form we have

        [ 0  1                ]
        [ q  0  p             ]
    P = [    q  0  p          ]
        [       ⋱  ⋱  ⋱      ]
        [          q  0  p    ]
        [             0  1  0 ]

[Transition diagram: reflecting random walk on {−A, ..., B}; interior states move right with probability p and left with probability q; the barriers bounce back to −A + 1 and B − 1.]

5.3 Distribution of a Markov Chain

Let S be a state space as before. In general, a row vector π = [π_i] indexed by S is called a distribution on S if

    π_i ≥ 0,    Σ_{i∈S} π_i = 1.    (5.4)

For a Markov chain {X_n} on S we set

    π(n) = [π_i(n)],    π_i(n) = P(X_n = i),

which becomes a distribution on S. We call π(n) the distribution of X_n. In particular, π(0), the distribution of X_0, is called the initial distribution. We often take

    π(0) = [0, ..., 0, 1, 0, ...],

where 1 occurs at the ith position. In this case the Markov chain {X_n} starts from the state i.

For a Markov chain {X_n} with a transition matrix P = [p_ij] the n-step transition probability is defined by

    p_n(i, j) = P(X_{m+n} = j | X_m = i),    i, j ∈ S.

The right-hand side is independent of m because our Markov chain is assumed to be time homogeneous.

Theorem (Chapman–Kolmogorov equation) For 0 ≤ r ≤ n we have

    p_n(i, j) = Σ_{k∈S} p_r(i, k) p_{n−r}(k, j).    (5.5)

Proof First we note the obvious identity:

    p_n(i, j) = P(X_{m+n} = j | X_m = i) = Σ_{k∈S} P(X_{m+n} = j, X_{m+r} = k | X_m = i).

Moreover,

    P(X_{m+n} = j, X_{m+r} = k | X_m = i)
        = [ P(X_{m+n} = j, X_{m+r} = k, X_m = i) / P(X_{m+r} = k, X_m = i) ] × [ P(X_{m+r} = k, X_m = i) / P(X_m = i) ]
        = P(X_{m+n} = j | X_{m+r} = k, X_m = i) P(X_{m+r} = k | X_m = i).

Using the Markov property, we have

    P(X_{m+n} = j | X_{m+r} = k, X_m = i) = P(X_{m+n} = j | X_{m+r} = k),

so that

    P(X_{m+n} = j, X_{m+r} = k | X_m = i) = P(X_{m+n} = j | X_{m+r} = k) P(X_{m+r} = k | X_m = i).

Finally, by the property of being time homogeneous, we come to

    P(X_{m+n} = j, X_{m+r} = k | X_m = i) = p_{n−r}(k, j) p_r(i, k).

Thus we have obtained (5.5).

Applying (5.5) repeatedly and noting that p_1(i, j) = p(i, j), we obtain

    p_n(i, j) = Σ_{k_1, ..., k_{n−1} ∈ S} p(i, k_1) p(k_1, k_2) ··· p(k_{n−1}, j).    (5.6)

The right-hand side is nothing else but the multiplication of matrices, i.e., the n-step transition probability p_n(i, j) is the (i, j)-entry of the nth power of the transition matrix P. Summing up, we obtain the following important result.

Theorem For m, n ≥ 0 and i, j ∈ S we have

    P(X_{m+n} = j | X_m = i) = p_n(i, j) = (P^n)_{ij}.

Proof Immediate from the above.

Remark As a result, the Chapman–Kolmogorov equation is nothing else but an entrywise expression of the obvious relation for the transition matrix:

    P^n = P^r P^{n−r}.

(As usual, P^0 = E, the identity matrix.)
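The identity p_n(i, j) = (P^n)_{ij} and the relation P^n = P^r P^{n−r} reduce to matrix arithmetic; a minimal sketch, assuming NumPy is available (the entries of P are illustrative):

```python
import numpy as np

P = np.array([[0.75, 0.25],
              [0.20, 0.80]])   # a stochastic matrix: rows sum to 1

n, r = 5, 2
Pn = np.linalg.matrix_power(P, n)
print(Pn)   # entry (i, j) is the n-step transition probability p_n(i, j)

# Chapman-Kolmogorov: P^n = P^r P^(n-r)
lhs = Pn
rhs = np.linalg.matrix_power(P, r) @ np.linalg.matrix_power(P, n - r)
print(np.allclose(lhs, rhs))   # True
```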

Theorem We have

    π(n) = π(n − 1)P,    n ≥ 1,

or equivalently,

    π_j(n) = Σ_i π_i(n − 1) p_ij.

Therefore,

    π(n) = π(0)P^n.

Proof We first note that

    π_j(n) = P(X_n = j) = Σ_{i∈S} P(X_n = j | X_{n−1} = i) P(X_{n−1} = i) = Σ_{i∈S} p_ij π_i(n − 1),

which proves π(n) = π(n − 1)P. By repeated application we have

    π(n) = π(n − 1)P = (π(n − 2)P)P = π(n − 2)P² = ··· = π(0)P^n,

as desired.

Example (2-state Markov chain) Let {X_n} be the Markov chain introduced in the 2-state example above. The eigenvalues of the transition matrix

    P = [ 1−p   p  ]
        [  q   1−q ]

are 1 and 1 − p − q. These are distinct if p + q > 0. Omitting the case of p + q = 0, i.e., p = q = 0, we assume that p + q > 0. By a standard argument we obtain

    P^n = (1/(p + q)) [ q + p r^n    p − p r^n ]
                      [ q − q r^n    p + q r^n ],    r = 1 − p − q.

Let π(0) = [π_0(0), π_1(0)] be the distribution of X_0. Then the distribution of X_n is given by

    π(n) = [P(X_n = 0), P(X_n = 1)] = [π_0(0), π_1(0)]P^n = π(0)P^n.

Problem 7 There are two parties, say, A and B, and their supporters of a constant ratio exchange at every election. Suppose that just before an election, 25% of the supporters of A change to support B and 20% of the supporters of B change to support A. At the beginning, 85% of the voters support A and 15% support B.
(1) When will the party B command a majority?
(2) Find the final ratio of supporters after many elections if the same situation continues.
(3) (optional) Discuss relevant topics at your own choice.

Problem 8 Study the n-step transition probability of the three-state Markov chain introduced in the 3-state example above. Explain that every animal dies within finite time.

Problem 9 Let {X_n} be a Markov chain on {0, 1} given by the transition matrix

    P = [ 1−p   p  ]
        [  q   1−q ]

with the initial distribution

    π(0) = [ q/(p + q),  p/(p + q) ].

Calculate the following statistical quantities:

    E[X_n],    V[X_n],    Cov(X_{m+n}, X_n) = E[X_{m+n} X_n] − E[X_{m+n}]E[X_n],
    ρ(X_{m+n}, X_n) = Cov(X_{m+n}, X_n) / √(V[X_{m+n}] V[X_n]).
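The evolution π(n) = π(0)P^n is one matrix-vector product per step. The sketch below uses the supporter-exchange numbers of Problem 7 as an illustration of the theorem above; it is a starting point for the problem, not a full solution, and assumes NumPy is available:

```python
import numpy as np

P = np.array([[0.75, 0.25],    # 25% of A's supporters switch to B
              [0.20, 0.80]])   # 20% of B's supporters switch to A
pi = np.array([0.85, 0.15])    # initial support for A and B

for n in range(1, 6):
    pi = pi @ P                # pi(n) = pi(n-1) P
    print(n, pi.round(4))      # drifts toward [q, p] / (p + q) = [4/9, 5/9]
```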

1 Random Variables and Probability Distributions

1 Random Variables and Probability Distributions 1 Random Variables and Probability Distributions 1.1 Random Variables 1.1.1 Discrete random variables A random variable X is called discrete if the number of values that X takes is finite or countably

More information

6 Stationary Distributions

6 Stationary Distributions 6 Stationary Distributions 6. Definition and Examles Definition 6.. Let {X n } be a Markov chain on S with transition robability matrix P. A distribution π on S is called stationary (or invariant) if π

More information

Chapter 7: Special Distributions

Chapter 7: Special Distributions This chater first resents some imortant distributions, and then develos the largesamle distribution theory which is crucial in estimation and statistical inference Discrete distributions The Bernoulli

More information

1 Presessional Probability

1 Presessional Probability 1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional

More information

B8.1 Martingales Through Measure Theory. Concept of independence

B8.1 Martingales Through Measure Theory. Concept of independence B8.1 Martingales Through Measure Theory Concet of indeendence Motivated by the notion of indeendent events in relims robability, we have generalized the concet of indeendence to families of σ-algebras.

More information

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Lecture 6. 2 Recurrence/transience, harmonic functions and martingales

Lecture 6. 2 Recurrence/transience, harmonic functions and martingales Lecture 6 Classification of states We have shown that all states of an irreducible countable state Markov chain must of the same tye. This gives rise to the following classification. Definition. [Classification

More information

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley Elements of Asymtotic Theory James L. Powell Deartment of Economics University of California, Berkeley Objectives of Asymtotic Theory While exact results are available for, say, the distribution of the

More information

Solution: (Course X071570: Stochastic Processes)

Solution: (Course X071570: Stochastic Processes) Solution I (Course X071570: Stochastic Processes) October 24, 2013 Exercise 1.1: Find all functions f from the integers to the real numbers satisfying f(n) = 1 2 f(n + 1) + 1 f(n 1) 1. 2 A secial solution

More information

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear

More information

Review of Probability Theory II

Review of Probability Theory II Review of Probability Theory II January 9-3, 008 Exectation If the samle sace Ω = {ω, ω,...} is countable and g is a real-valued function, then we define the exected value or the exectation of a function

More information

Elementary Analysis in Q p

Elementary Analysis in Q p Elementary Analysis in Q Hannah Hutter, May Szedlák, Phili Wirth November 17, 2011 This reort follows very closely the book of Svetlana Katok 1. 1 Sequences and Series In this section we will see some

More information

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) D. ARAPURA This is a summary of the essential material covered so far. The final will be cumulative. I ve also included some review problems

More information

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley Elements of Asymtotic Theory James L. Powell Deartment of Economics University of California, Berkeley Objectives of Asymtotic Theory While exact results are available for, say, the distribution of the

More information

Expectation of Random Variables

Expectation of Random Variables 1 / 19 Expectation of Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay February 13, 2015 2 / 19 Expectation of Discrete

More information

Joint Distribution of Two or More Random Variables

Joint Distribution of Two or More Random Variables Joint Distribution of Two or More Random Variables Sometimes more than one measurement in the form of random variable is taken on each member of the sample space. In cases like this there will be a few

More information

2 n k In particular, using Stirling formula, we can calculate the asymptotic of obtaining heads exactly half of the time:

2 n k In particular, using Stirling formula, we can calculate the asymptotic of obtaining heads exactly half of the time: Chapter 1 Random Variables 1.1 Elementary Examples We will start with elementary and intuitive examples of probability. The most well-known example is that of a fair coin: if flipped, the probability of

More information

Econometrics I. September, Part I. Department of Economics Stanford University

Econometrics I. September, Part I. Department of Economics Stanford University Econometrics I Deartment of Economics Stanfor University Setember, 2008 Part I Samling an Data Poulation an Samle. ineenent an ientical samling. (i.i..) Samling with relacement. aroximates samling without

More information

1: PROBABILITY REVIEW

1: PROBABILITY REVIEW 1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following

More information

STA205 Probability: Week 8 R. Wolpert

STA205 Probability: Week 8 R. Wolpert INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and

More information

8 STOCHASTIC PROCESSES

8 STOCHASTIC PROCESSES 8 STOCHASTIC PROCESSES The word stochastic is derived from the Greek στoχαστικoς, meaning to aim at a target. Stochastic rocesses involve state which changes in a random way. A Markov rocess is a articular

More information

Probability and Statistics

Probability and Statistics Probability and Statistics Jane Bae Stanford University hjbae@stanford.edu September 16, 2014 Jane Bae (Stanford) Probability and Statistics September 16, 2014 1 / 35 Overview 1 Probability Concepts Probability

More information

Random variables (discrete)

Random variables (discrete) Random variables (discrete) Saad Mneimneh 1 Introducing random variables A random variable is a mapping from the sample space to the real line. We usually denote the random variable by X, and a value that

More information

1 Review of Probability

1 Review of Probability 1 Review of Probability Random variables are denoted by X, Y, Z, etc. The cumulative distribution function (c.d.f.) of a random variable X is denoted by F (x) = P (X x), < x

More information

1.1 Review of Probability Theory

1.1 Review of Probability Theory 1.1 Review of Probability Theory Angela Peace Biomathemtics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

Sample Spaces, Random Variables

Sample Spaces, Random Variables Sample Spaces, Random Variables Moulinath Banerjee University of Michigan August 3, 22 Probabilities In talking about probabilities, the fundamental object is Ω, the sample space. (elements) in Ω are denoted

More information

CME 106: Review Probability theory

CME 106: Review Probability theory : Probability theory Sven Schmit April 3, 2015 1 Overview In the first half of the course, we covered topics from probability theory. The difference between statistics and probability theory is the following:

More information

Lecture 2: Review of Basic Probability Theory

Lecture 2: Review of Basic Probability Theory ECE 830 Fall 2010 Statistical Signal Processing instructor: R. Nowak, scribe: R. Nowak Lecture 2: Review of Basic Probability Theory Probabilistic models will be used throughout the course to represent

More information

2. Suppose (X, Y ) is a pair of random variables uniformly distributed over the triangle with vertices (0, 0), (2, 0), (2, 1).

2. Suppose (X, Y ) is a pair of random variables uniformly distributed over the triangle with vertices (0, 0), (2, 0), (2, 1). Name M362K Final Exam Instructions: Show all of your work. You do not have to simplify your answers. No calculators allowed. There is a table of formulae on the last page. 1. Suppose X 1,..., X 1 are independent

More information

Sums of independent random variables

Sums of independent random variables 3 Sums of indeendent random variables This lecture collects a number of estimates for sums of indeendent random variables with values in a Banach sace E. We concentrate on sums of the form N γ nx n, where

More information

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets

More information

1 Gambler s Ruin Problem

1 Gambler s Ruin Problem Coyright c 2017 by Karl Sigman 1 Gambler s Ruin Problem Let N 2 be an integer and let 1 i N 1. Consider a gambler who starts with an initial fortune of $i and then on each successive gamble either wins

More information

Analysis of some entrance probabilities for killed birth-death processes

Analysis of some entrance probabilities for killed birth-death processes Analysis of some entrance robabilities for killed birth-death rocesses Master s Thesis O.J.G. van der Velde Suervisor: Dr. F.M. Sieksma July 5, 207 Mathematical Institute, Leiden University Contents Introduction

More information

Recitation 2: Probability

Recitation 2: Probability Recitation 2: Probability Colin White, Kenny Marino January 23, 2018 Outline Facts about sets Definitions and facts about probability Random Variables and Joint Distributions Characteristics of distributions

More information

18.175: Lecture 8 Weak laws and moment-generating/characteristic functions

18.175: Lecture 8 Weak laws and moment-generating/characteristic functions 18.175: Lecture 8 Weak laws and moment-generating/characteristic functions Scott Sheffield MIT 18.175 Lecture 8 1 Outline Moment generating functions Weak law of large numbers: Markov/Chebyshev approach

More information

Chapter 3: Random Variables 1

Chapter 3: Random Variables 1 Chapter 3: Random Variables 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

2. Variance and Covariance: We will now derive some classic properties of variance and covariance. Assume real-valued random variables X and Y.

2. Variance and Covariance: We will now derive some classic properties of variance and covariance. Assume real-valued random variables X and Y. CS450 Final Review Problems Fall 08 Solutions or worked answers provided Problems -6 are based on the midterm review Identical problems are marked recap] Please consult previous recitations and textbook

More information

Discrete Probability Refresher

Discrete Probability Refresher ECE 1502 Information Theory Discrete Probability Refresher F. R. Kschischang Dept. of Electrical and Computer Engineering University of Toronto January 13, 1999 revised January 11, 2006 Probability theory

More information

1 Random Experiments from Random Experiments

1 Random Experiments from Random Experiments Random Exeriments from Random Exeriments. Bernoulli Trials The simlest tye of random exeriment is called a Bernoulli trial. A Bernoulli trial is a random exeriment that has only two ossible outcomes: success

More information

On Isoperimetric Functions of Probability Measures Having Log-Concave Densities with Respect to the Standard Normal Law

On Isoperimetric Functions of Probability Measures Having Log-Concave Densities with Respect to the Standard Normal Law On Isoerimetric Functions of Probability Measures Having Log-Concave Densities with Resect to the Standard Normal Law Sergey G. Bobkov Abstract Isoerimetric inequalities are discussed for one-dimensional

More information

MATH 2710: NOTES FOR ANALYSIS

MATH 2710: NOTES FOR ANALYSIS MATH 270: NOTES FOR ANALYSIS The main ideas we will learn from analysis center around the idea of a limit. Limits occurs in several settings. We will start with finite limits of sequences, then cover infinite

More information

Outline. Markov Chains and Markov Models. Outline. Markov Chains. Markov Chains Definitions Huizhen Yu

Outline. Markov Chains and Markov Models. Outline. Markov Chains. Markov Chains Definitions Huizhen Yu and Markov Models Huizhen Yu janey.yu@cs.helsinki.fi Det. Comuter Science, Univ. of Helsinki Some Proerties of Probabilistic Models, Sring, 200 Huizhen Yu (U.H.) and Markov Models Jan. 2 / 32 Huizhen Yu

More information

Math 701: Secant Method

Math 701: Secant Method Math 701: Secant Method The secant method aroximates solutions to f(x = 0 using an iterative scheme similar to Newton s method in which the derivative has been relace by This results in the two-term recurrence

More information

MATH 3510: PROBABILITY AND STATS June 15, 2011 MIDTERM EXAM

MATH 3510: PROBABILITY AND STATS June 15, 2011 MIDTERM EXAM MATH 3510: PROBABILITY AND STATS June 15, 2011 MIDTERM EXAM YOUR NAME: KEY: Answers in Blue Show all your work. Answers out of the blue and without any supporting work may receive no credit even if they

More information

Lecture 13 (Part 2): Deviation from mean: Markov s inequality, variance and its properties, Chebyshev s inequality

Lecture 13 (Part 2): Deviation from mean: Markov s inequality, variance and its properties, Chebyshev s inequality Lecture 13 (Part 2): Deviation from mean: Markov s inequality, variance and its properties, Chebyshev s inequality Discrete Structures II (Summer 2018) Rutgers University Instructor: Abhishek Bhrushundi

More information

Homework Solution 4 for APPM4/5560 Markov Processes

Homework Solution 4 for APPM4/5560 Markov Processes Homework Solution 4 for APPM4/556 Markov Processes 9.Reflecting random walk on the line. Consider the oints,,, 4 to be marked on a straight line. Let X n be a Markov chain that moves to the right with

More information

Probability- the good parts version. I. Random variables and their distributions; continuous random variables.

Probability- the good parts version. I. Random variables and their distributions; continuous random variables. Probability- the good arts version I. Random variables and their distributions; continuous random variables. A random variable (r.v) X is continuous if its distribution is given by a robability density

More information

The Longest Run of Heads

The Longest Run of Heads The Longest Run of Heads Review by Amarioarei Alexandru This aer is a review of older and recent results concerning the distribution of the longest head run in a coin tossing sequence, roblem that arise

More information

1 Random Variable: Topics

1 Random Variable: Topics Note: Handouts DO NOT replace the book. In most cases, they only provide a guideline on topics and an intuitive feel. 1 Random Variable: Topics Chap 2, 2.1-2.4 and Chap 3, 3.1-3.3 What is a random variable?

More information

Introduction to Banach Spaces

Introduction to Banach Spaces CHAPTER 8 Introduction to Banach Saces 1. Uniform and Absolute Convergence As a rearation we begin by reviewing some familiar roerties of Cauchy sequences and uniform limits in the setting of metric saces.

More information

Solution sheet ξi ξ < ξ i+1 0 otherwise ξ ξ i N i,p 1 (ξ) + where 0 0

Solution sheet ξi ξ < ξ i+1 0 otherwise ξ ξ i N i,p 1 (ξ) + where 0 0 Advanced Finite Elements MA5337 - WS7/8 Solution sheet This exercise sheets deals with B-slines and NURBS, which are the basis of isogeometric analysis as they will later relace the olynomial ansatz-functions

More information

STAT2201. Analysis of Engineering & Scientific Data. Unit 3

STAT2201. Analysis of Engineering & Scientific Data. Unit 3 STAT2201 Analysis of Engineering & Scientific Data Unit 3 Slava Vaisman The University of Queensland School of Mathematics and Physics What we learned in Unit 2 (1) We defined a sample space of a random

More information

Lecture 1: August 28

Lecture 1: August 28 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random

More information

Lecture 2: Repetition of probability theory and statistics

Lecture 2: Repetition of probability theory and statistics Algorithms for Uncertainty Quantification SS8, IN2345 Tobias Neckel Scientific Computing in Computer Science TUM Lecture 2: Repetition of probability theory and statistics Concept of Building Block: Prerequisites:

More information

STAT 430/510 Probability Lecture 7: Random Variable and Expectation

STAT 430/510 Probability Lecture 7: Random Variable and Expectation STAT 430/510 Probability Lecture 7: Random Variable and Expectation Pengyuan (Penelope) Wang June 2, 2011 Review Properties of Probability Conditional Probability The Law of Total Probability Bayes Formula

More information

3 Multiple Discrete Random Variables

3 Multiple Discrete Random Variables 3 Multiple Discrete Random Variables 3.1 Joint densities Suppose we have a probability space (Ω, F,P) and now we have two discrete random variables X and Y on it. They have probability mass functions f

More information

LECTURE 1. 1 Introduction. 1.1 Sample spaces and events

LECTURE 1. 1 Introduction. 1.1 Sample spaces and events LECTURE 1 1 Introduction The first part of our adventure is a highly selective review of probability theory, focusing especially on things that are most useful in statistics. 1.1 Sample spaces and events

More information

Product measure and Fubini s theorem

Product measure and Fubini s theorem Chapter 7 Product measure and Fubini s theorem This is based on [Billingsley, Section 18]. 1. Product spaces Suppose (Ω 1, F 1 ) and (Ω 2, F 2 ) are two probability spaces. In a product space Ω = Ω 1 Ω

More information

Random Variables. Saravanan Vijayakumaran Department of Electrical Engineering Indian Institute of Technology Bombay

Random Variables. Saravanan Vijayakumaran Department of Electrical Engineering Indian Institute of Technology Bombay 1 / 13 Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay August 8, 2013 2 / 13 Random Variable Definition A real-valued

More information

EE/Stats 376A: Information theory Winter Lecture 5 Jan 24. Lecturer: David Tse Scribe: Michael X, Nima H, Geng Z, Anton J, Vivek B.

EE/Stats 376A: Information theory Winter Lecture 5 Jan 24. Lecturer: David Tse Scribe: Michael X, Nima H, Geng Z, Anton J, Vivek B. EE/Stats 376A: Information theory Winter 207 Lecture 5 Jan 24 Lecturer: David Tse Scribe: Michael X, Nima H, Geng Z, Anton J, Vivek B. 5. Outline Markov chains and stationary distributions Prefix codes

More information

1 Extremum Estimators

1 Extremum Estimators FINC 9311-21 Financial Econometrics Handout Jialin Yu 1 Extremum Estimators Let θ 0 be a vector of k 1 unknown arameters. Extremum estimators: estimators obtained by maximizing or minimizing some objective

More information

Recap of Basic Probability Theory

Recap of Basic Probability Theory 02407 Stochastic Processes? Recap of Basic Probability Theory Uffe Høgsbro Thygesen Informatics and Mathematical Modelling Technical University of Denmark 2800 Kgs. Lyngby Denmark Email: uht@imm.dtu.dk

More information

Lectures on Elementary Probability. William G. Faris

Lectures on Elementary Probability. William G. Faris Lectures on Elementary Probability William G. Faris February 22, 2002 2 Contents 1 Combinatorics 5 1.1 Factorials and binomial coefficients................. 5 1.2 Sampling with replacement.....................

More information

01 Probability Theory and Statistics Review

01 Probability Theory and Statistics Review NAVARCH/EECS 568, ROB 530 - Winter 2018 01 Probability Theory and Statistics Review Maani Ghaffari January 08, 2018 Last Time: Bayes Filters Given: Stream of observations z 1:t and action data u 1:t Sensor/measurement

More information

Week 12-13: Discrete Probability

Week 12-13: Discrete Probability Week 12-13: Discrete Probability November 21, 2018 1 Probability Space There are many problems about chances or possibilities, called probability in mathematics. When we roll two dice there are possible

More information

Chapter 2: Random Variables

Chapter 2: Random Variables ECE54: Stochastic Signals and Systems Fall 28 Lecture 2 - September 3, 28 Dr. Salim El Rouayheb Scribe: Peiwen Tian, Lu Liu, Ghadir Ayache Chapter 2: Random Variables Example. Tossing a fair coin twice:

More information

5. Conditional Distributions

5. Conditional Distributions 1 of 12 7/16/2009 5:36 AM Virtual Laboratories > 3. Distributions > 1 2 3 4 5 6 7 8 5. Conditional Distributions Basic Theory As usual, we start with a random experiment with probability measure P on an

More information

X 1 ((, a]) = {ω Ω : X(ω) a} F, which leads us to the following definition:

X 1 ((, a]) = {ω Ω : X(ω) a} F, which leads us to the following definition: nna Janicka Probability Calculus 08/09 Lecture 4. Real-valued Random Variables We already know how to describe the results of a random experiment in terms of a formal mathematical construction, i.e. the

More information

Recap of Basic Probability Theory

Recap of Basic Probability Theory 02407 Stochastic Processes Recap of Basic Probability Theory Uffe Høgsbro Thygesen Informatics and Mathematical Modelling Technical University of Denmark 2800 Kgs. Lyngby Denmark Email: uht@imm.dtu.dk

More information

STAT 414: Introduction to Probability Theory

STAT 414: Introduction to Probability Theory STAT 414: Introduction to Probability Theory Spring 2016; Homework Assignments Latest updated on April 29, 2016 HW1 (Due on Jan. 21) Chapter 1 Problems 1, 8, 9, 10, 11, 18, 19, 26, 28, 30 Theoretical Exercises

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

HENSEL S LEMMA KEITH CONRAD

HENSEL S LEMMA KEITH CONRAD HENSEL S LEMMA KEITH CONRAD 1. Introduction In the -adic integers, congruences are aroximations: for a and b in Z, a b mod n is the same as a b 1/ n. Turning information modulo one ower of into similar

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x

More information

Department of Mathematics

Department of Mathematics Deartment of Mathematics Ma 3/03 KC Border Introduction to Probability and Statistics Winter 209 Sulement : Series fun, or some sums Comuting the mean and variance of discrete distributions often involves

More information

Elementary theory of L p spaces

Elementary theory of L p spaces CHAPTER 3 Elementary theory of L saces 3.1 Convexity. Jensen, Hölder, Minkowski inequality. We begin with two definitions. A set A R d is said to be convex if, for any x 0, x 1 2 A x = x 0 + (x 1 x 0 )

More information

University of Regina. Lecture Notes. Michael Kozdron

University of Regina. Lecture Notes. Michael Kozdron University of Regina Statistics 252 Mathematical Statistics Lecture Notes Winter 2005 Michael Kozdron kozdron@math.uregina.ca www.math.uregina.ca/ kozdron Contents 1 The Basic Idea of Statistics: Estimating

More information

PETER J. GRABNER AND ARNOLD KNOPFMACHER

PETER J. GRABNER AND ARNOLD KNOPFMACHER ARITHMETIC AND METRIC PROPERTIES OF -ADIC ENGEL SERIES EXPANSIONS PETER J. GRABNER AND ARNOLD KNOPFMACHER Abstract. We derive a characterization of rational numbers in terms of their unique -adic Engel

More information

Lecture 4: Law of Large Number and Central Limit Theorem

Lecture 4: Law of Large Number and Central Limit Theorem ECE 645: Estimation Theory Sring 2015 Instructor: Prof. Stanley H. Chan Lecture 4: Law of Large Number and Central Limit Theorem (LaTeX reared by Jing Li) March 31, 2015 This lecture note is based on ECE

More information

STAT 418: Probability and Stochastic Processes

STAT 418: Probability and Stochastic Processes STAT 418: Probability and Stochastic Processes Spring 2016; Homework Assignments Latest updated on April 29, 2016 HW1 (Due on Jan. 21) Chapter 1 Problems 1, 8, 9, 10, 11, 18, 19, 26, 28, 30 Theoretical

More information

Lecture 1: Review on Probability and Statistics

Lecture 1: Review on Probability and Statistics STAT 516: Stochastic Modeling of Scientific Data Autumn 2018 Instructor: Yen-Chi Chen Lecture 1: Review on Probability and Statistics These notes are partially based on those of Mathias Drton. 1.1 Motivating

More information

Algorithms for Uncertainty Quantification

Algorithms for Uncertainty Quantification Algorithms for Uncertainty Quantification Tobias Neckel, Ionuț-Gabriel Farcaș Lehrstuhl Informatik V Summer Semester 2017 Lecture 2: Repetition of probability theory and statistics Example: coin flip Example

More information

Chapter 3: Random Variables 1

Chapter 3: Random Variables 1 Chapter 3: Random Variables 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

1 Basic continuous random variable problems

1 Basic continuous random variable problems Name M362K Final Here are problems concerning material from Chapters 5 and 6. To review the other chapters, look over previous practice sheets for the two exams, previous quizzes, previous homeworks and

More information

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real

More information

1 Stat 605. Homework I. Due Feb. 1, 2011

1 Stat 605. Homework I. Due Feb. 1, 2011 The first part is homework which you need to turn in. The second part is exercises that will not be graded, but you need to turn it in together with the take-home final exam. 1 Stat 605. Homework I. Due

More information

Brief Review of Probability

Brief Review of Probability Brief Review of Probability Nuno Vasconcelos (Ken Kreutz-Delgado) ECE Department, UCSD Probability Probability theory is a mathematical language to deal with processes or experiments that are non-deterministic

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

MATHEMATICAL MODELLING OF THE WIRELESS COMMUNICATION NETWORK

MATHEMATICAL MODELLING OF THE WIRELESS COMMUNICATION NETWORK Comuter Modelling and ew Technologies, 5, Vol.9, o., 3-39 Transort and Telecommunication Institute, Lomonosov, LV-9, Riga, Latvia MATHEMATICAL MODELLIG OF THE WIRELESS COMMUICATIO ETWORK M. KOPEETSK Deartment

More information

6.1 Moment Generating and Characteristic Functions

6.1 Moment Generating and Characteristic Functions Chapter 6 Limit Theorems The power statistics can mostly be seen when there is a large collection of data points and we are interested in understanding the macro state of the system, e.g., the average,

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation)

MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation) MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation) Last modified: March 7, 2009 Reference: PRP, Sections 3.6 and 3.7. 1. Tail-Sum Theorem

More information

X = X X n, + X 2

X = X X n, + X 2 CS 70 Discrete Mathematics for CS Fall 2003 Wagner Lecture 22 Variance Question: At each time step, I flip a fair coin. If it comes up Heads, I walk one step to the right; if it comes up Tails, I walk

More information

Probability and Statistics. Vittoria Silvestri

Probability and Statistics. Vittoria Silvestri Probability and Statistics Vittoria Silvestri Statslab, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, UK Contents Preface 5 Chapter 1. Discrete probability

More information

General Linear Model Introduction, Classes of Linear models and Estimation

General Linear Model Introduction, Classes of Linear models and Estimation Stat 740 General Linear Model Introduction, Classes of Linear models and Estimation An aim of scientific enquiry: To describe or to discover relationshis among events (variables) in the controlled (laboratory)

More information

1 Exercises for lecture 1

1 Exercises for lecture 1 1 Exercises for lecture 1 Exercise 1 a) Show that if F is symmetric with respect to µ, and E( X )

More information

Notes on Mathematics Groups

Notes on Mathematics Groups EPGY Singapore Quantum Mechanics: 2007 Notes on Mathematics Groups A group, G, is defined is a set of elements G and a binary operation on G; one of the elements of G has particularly special properties

More information

Journal of Mathematical Analysis and Applications

Journal of Mathematical Analysis and Applications J. Math. Anal. Al. 44 (3) 3 38 Contents lists available at SciVerse ScienceDirect Journal of Mathematical Analysis and Alications journal homeage: www.elsevier.com/locate/jmaa Maximal surface area of a

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

2.1 Elementary probability; random sampling

2.1 Elementary probability; random sampling Chapter 2 Probability Theory Chapter 2 outlines the probability theory necessary to understand this text. It is meant as a refresher for students who need review and as a reference for concepts and theorems

More information

3. Review of Probability and Statistics

3. Review of Probability and Statistics 3. Review of Probability and Statistics ECE 830, Spring 2014 Probabilistic models will be used throughout the course to represent noise, errors, and uncertainty in signal processing problems. This lecture

More information