SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker CHAPTER 2. UNIVARIATE DISTRIBUTIONS

2.1 Random Variables and Distribution Functions.

This chapter deals with the notion of a random variable, the distribution function associated with it, and the expectation of a random variable (or of a distribution function). The notion of random variable is treated as a function, so first a few words should be said about the notion of a function. Briefly put, a function f is a correspondence or rule that assigns to each element s in some set S a number denoted by f(s). For example, if S is taken as the interval [1, 5], i.e., the set of all real numbers that are equal to or greater than 1 and equal to or less than 5, and if the function f is the function "the square of", then f makes correspond to every element (or real number) x in [1, 5] the number x², i.e., f(x) = x². If S is the set of all five-tuples of 0's and 1's, and if the function g is the function "the number of 0's in", then g makes correspond to the element or five-tuple (0, 0, 1, 0, 1) in S the real number 3, i.e., g((0, 0, 1, 0, 1)) = 3. In general, if s is a member or an object in a set of objects, S, we record this fact by writing s ∈ S. The set S is called the domain of the function.

Now let Ω be an arbitrary fundamental probability set, and let A be a sigma-algebra of events over Ω.

Definition. A random variable X relative to (Ω, A) is a function whose domain is Ω and is such that for every real number x the set of elementary events ω that satisfy X(ω) ≤ x is an event, i.e., an element of A. This last requirement may be written:

{ω : X(ω) ≤ x} ∈ A.

Let us consider an example. The simplest game is that of tossing a fair coin once. In this case, Ω = {H, T}. Let the random variable X in this case denote the number of heads. Then X is a function over Ω defined by X(H) = 1 and X(T) = 0. Thus

{ω : X(ω) ≤ x} = ∅ if x < 0, {T} if 0 ≤ x < 1, {H, T} if x ≥ 1.

Let us consider yet another example. We consider the game of tossing a pair of dice. A fundamental probability set for this game is listed as follows:

(1,1) (1,2) (1,3) (1,4) (1,5) (1,6)
      (2,2) (2,3) (2,4) (2,5) (2,6)
            (3,3) (3,4) (3,5) (3,6)
                  (4,4) (4,5) (4,6)
                        (5,5) (5,6)
                              (6,6)

In this game, A is the set of all subsets of Ω, i.e., every collection of elementary events is an event. Let us consider the random variable X as the sum of the upturned faces. Thus, for each elementary event (i, j), we have X((i, j)) = i + j. Now let us take some special values of x to obtain

{ω : X(ω) ≤ 2.1} = {(1,1)},
{ω : X(ω) ≤ 3.2} = {(1,1), (1,2)} since X((1,1)) = 2 and X((1,2)) = 3, and
{ω : X(ω) ≤ 4} = {(1,1), (1,2), (1,3), (2,2)}.

Notation: From now on we shall use the notation [X ≤ x] in place of the expression {ω : X(ω) ≤ x}. Also we shall denote {ω : X(ω) < x} by [X < x], {ω : X(ω) ≥ x} by [X ≥ x] and {ω : X(ω) > x} by [X > x]. In general, for any set S of real numbers, we shall denote {ω : X(ω) ∈ S} by [X ∈ S].
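To make the event notation concrete, here is a minimal Python sketch (an illustration of my own, not part of the text; the helper names are invented) that enumerates the fundamental probability set above and recovers the three events [X ≤ x] just computed.

```python
from itertools import combinations_with_replacement

# The fundamental probability set: the 21 unordered pairs of faces.
omega = list(combinations_with_replacement(range(1, 7), 2))

def X(w):
    """The sum of the upturned faces."""
    return w[0] + w[1]

def event_le(x):
    """The event [X <= x] as a list of elementary events."""
    return sorted(w for w in omega if X(w) <= x)

print(event_le(2.1))  # [(1, 1)]
print(event_le(3.2))  # [(1, 1), (1, 2)]
print(event_le(4))    # [(1, 1), (1, 2), (1, 3), (2, 2)]
```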

We now obtain conditions equivalent to that given in the definition of a random variable. First we need two lemmas.

Lemma 1. If X is any function whose domain is Ω, then

∪_{n=1}^∞ [X ≤ x − 1/n] = [X < x]

for every real number x.

Proof: Let ω ∈ ∪_{n=1}^∞ [X ≤ x − 1/n]. Then there is at least one integer n such that ω ∈ [X ≤ x − 1/n]. This means that X(ω) ≤ x − 1/n, which implies that X(ω) < x, which in turn implies ω ∈ [X < x]. On the other hand, if ω ∈ [X < x], then X(ω) < x. Select the positive integer m so large that 1/m ≤ x − X(ω). Then ω ∈ [X ≤ x − 1/m] ⊂ ∪_{n=1}^∞ [X ≤ x − 1/n], which completes the proof.

Lemma 2. If X is any function whose domain is Ω, then

∩_{n=1}^∞ [X < x + 1/n] = [X ≤ x]

for every real number x.

Proof: Let ω ∈ [X ≤ x]. Then X(ω) < x + 1/n for every positive integer n. Thus ω ∈ ∩_{n=1}^∞ [X < x + 1/n]. If, on the other hand, ω ∈ ∩_{n=1}^∞ [X < x + 1/n], then X(ω) < x + 1/n for every positive integer n. This makes it impossible for the inequality X(ω) > x to be true, for if it were, we could find an n such that 1/n < X(ω) − x, thus violating an inequality of the preceding sentence. Hence X(ω) ≤ x, which implies ω ∈ [X ≤ x], thus completing the proof.

Theorem 1. Let X be a function whose domain is Ω. Then X is a random variable if and only if [X < x] ∈ A for every real number x.

Proof: We first prove the "only if" part. If X is a random variable, then [X ≤ x − 1/n] ∈ A for every positive integer n. Since a denumerable union of events is an event, we have

∪_{n=1}^∞ [X ≤ x − 1/n] ∈ A.

Lemma 1 allows us to conclude that [X < x] ∈ A. Conversely, if [X < t] ∈ A for every real number t, then [X < x + 1/n] ∈ A for every positive integer n. This implies that ∩_{n=1}^∞ [X < x + 1/n] ∈ A, which, with Lemma 2, implies that [X ≤ x] ∈ A, which proves the theorem.

We next define algebraic operations on random variables and prove that the new functions formed are also random variables.

Definition. If X and Y are random variables, we define their sum, X + Y, to be the function that assigns to every ω ∈ Ω the number X(ω) + Y(ω).

Theorem 2. If X and Y are random variables, then X + Y is a random variable.

Proof: For a fixed real number z, let us consider the set

A = ∪_r ([X < r] ∩ [Y < z − r]),

where r runs through the set of all rational numbers. Since the set of all rational numbers is denumerable, A ∈ A.

Because of this and Theorem 1, we need only prove that A = [X + Y < z]. First, it is clear that A ⊂ [X + Y < z], so we need only prove inclusion in the reverse direction. So let ω ∈ [X + Y < z]. Then X(ω) + Y(ω) < z. So now define ε = z − X(ω) − Y(ω); thus ε > 0. Now let r and s be rational numbers that satisfy

X(ω) < r < X(ω) + ε/2 and Y(ω) < s < Y(ω) + ε/2.

Thus r + s < X(ω) + Y(ω) + ε = z, from which we obtain s < z − r. Hence Y(ω) < s < z − r, and we may conclude that ω ∈ [X < r] ∩ [Y < z − r] ⊂ A.

Definition: If X is a random variable, and if K is any real number, we define KX as the function over Ω that assigns to every ω the number KX(ω).

Theorem 3. If X is a random variable, and if K is any real number, then KX is a random variable.

Proof: We prove this in three cases.

Case (i): K = 0. In this case, KX(ω) = 0 for all ω, so [KX ≤ x] = ∅ ∈ A if x < 0, and [KX ≤ x] = Ω ∈ A if x ≥ 0.

Case (ii): K > 0. In this case, for every real x, [KX ≤ x] = [X ≤ x/K] ∈ A.

Case (iii): K < 0. In this case, for every real x, [KX ≤ x] = [X ≥ x/K] ∈ A.

These three cases prove the theorem.

Definition. If X and Y are random variables, then XY denotes the function defined over Ω that assigns to every ω the number X(ω)Y(ω). In particular, X² assigns to every ω the number X(ω)².

Theorem 4. If X is a random variable, then X² is a random variable.

Proof: This follows from the fact that for every x < 0, [X² ≤ x] = ∅ ∈ A, and for every x ≥ 0,

[X² ≤ x] = [−√x ≤ X ≤ √x] = [−√x ≤ X] ∩ [X ≤ √x] ∈ A.

Theorem 5. If X and Y are random variables, then XY is a random variable.

Proof: Repeated use of Theorems 2, 3 and 4 allows us to make the following sequence of statements: (i) X + Y and X − Y are random variables, (ii) (X + Y)² and (X − Y)² are random variables, and (iii) {(X + Y)² − (X − Y)²}/4 is a random variable. But this last random variable is equal to XY, so XY is a random variable.

Theorem 6. If X and Y are random variables, and if [Y = 0] = ∅, then X/Y is a random variable.

Proof: We may write

[X/Y ≤ x] = ([X/Y ≤ x] ∩ [Y < 0]) ∪ ([X/Y ≤ x] ∩ [Y > 0])
          = ([X ≥ xY] ∩ [Y < 0]) ∪ ([X ≤ xY] ∩ [Y > 0])
          = ([X − xY ≥ 0] ∩ [Y < 0]) ∪ ([X − xY ≤ 0] ∩ [Y > 0]).

Each of these four collections of elementary events is an event by previous theorems, which in turn implies [X/Y ≤ x] is an event for every real x.

Definition. If X and Y are random variables, then the maximum of X, Y, denoted max{X, Y}, is defined over Ω as the function that assigns to each ω the larger of the two numbers X(ω), Y(ω) if they are unequal and their common value if they are equal. Similarly, the minimum of X, Y, denoted min{X, Y}, is defined over Ω as the function that assigns to each ω the smaller of the two numbers X(ω), Y(ω) if they are unequal and their common value if they are equal.

Theorem 7. If X and Y are random variables, then max{X, Y} is a random variable.

Proof: The theorem follows from the fact that

[max{X, Y} ≤ z] = [X ≤ z] ∩ [Y ≤ z] ∈ A.

Definition. The indicator, I_A, of a set A is defined to be the function that assigns to every ω the number 1 if ω ∈ A and the number 0 if ω ∉ A. Note that I_A truly indicates the set A: if A is an event, and if the event A occurs, then an elementary event ω in A occurs, in which case I_A(ω) = 1. If A does not occur, then an elementary event in A^c occurs and I_A(ω) = 0.

Theorem 8. Let A be a set of elementary events in Ω. Then I_A is a random variable if and only if A ∈ A.

Proof: This follows from the easily verifiable fact that

[I_A ≤ x] = ∅ if x < 0, A^c if 0 ≤ x < 1, Ω if x ≥ 1.

The following theorems are so obvious and easy to prove that their proofs are left to the reader.

Theorem 9. I_Ω = 1, i.e., I_Ω(ω) = 1 for every ω ∈ Ω.

Theorem 10. I_A² = I_A for every A ∈ A.

Theorem 11. 1 − I_A = I_{A^c} for every A ∈ A.

Theorem 12. I_A I_B = I_{A∩B} for every A ∈ A and for every B ∈ A.

Theorem 13. I_{A∪B} = max{I_A, I_B} for every A ∈ A and for every B ∈ A.

Theorem 14. I_A + I_B = I_{A∪B} for every pair of disjoint events A, B.

Although the notion of random variable is the central notion of probability and statistics, it is not all that is of interest. With every random variable there is associated another function known as its distribution function. The study of probability and statistics thus becomes distinct from a usual course in analysis in that properties of random variables are considered in terms of their corresponding distribution functions.

Definition. If X is a random variable, then the distribution function of X, denoted by F_X(x), is the function defined by F_X(x) = P([X ≤ x]) for every real number x. (The reader should note that it is at this point that in our definition of the random variable we need the requirement [X ≤ x] ∈ A for all real numbers x.)

Let us consider an example of the game where a fair (i.e., unbiased) coin is tossed three times. In this case, Ω consists of the following eight elementary events:

(HHH) (HHT) (HTH) (HTT) (THH) (THT) (TTH) (TTT)

Let X denote the number of heads that appear in the three tosses of the coin. Thus X((HHH)) = 3, X((HHT)) = 2, X((HTH)) = 2, X((HTT)) = 1, ..., X((TTT)) = 0. Suppose x = −1.78. Then [X ≤ x] = ∅, and P([X ≤ x]) = 0. If 0 ≤ x < 1, say x = 0.58, then [X ≤ x] = {(TTT)}, so P([X ≤ x]) = 1/8. If 1 ≤ x < 2, then

[X ≤ x] = {(TTT), (HTT), (THT), (TTH)},

and P([X ≤ x]) = 1/2.

If 2 ≤ x < 3, then

[X ≤ x] = {(TTT), (HTT), (THT), (TTH), (HHT), (HTH), (THH)},

and P([X ≤ x]) = 7/8. Finally, if x ≥ 3, then [X ≤ x] = Ω and P([X ≤ x]) = 1. Thus the distribution function is

F_X(x) = 0    if −∞ < x < 0,
         1/8  if 0 ≤ x < 1,
         1/2  if 1 ≤ x < 2,
         7/8  if 2 ≤ x < 3,
         1    if 3 ≤ x.

[Figure: graph of the step function F_X(x).]

This is but one particular case of a distribution function.
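This distribution function is easy to reproduce by direct enumeration. The following Python sketch (my own illustration; the text obtains everything analytically) computes F_X(x) = P([X ≤ x]) from the eight equally likely elementary events.

```python
from itertools import product

# The eight elementary events of three tosses of a fair coin.
omega = list(product("HT", repeat=3))

def F(x):
    """F_X(x) = P([X <= x]), where X is the number of heads."""
    favorable = [w for w in omega if w.count("H") <= x]
    return len(favorable) / len(omega)

for x in (-1.78, 0.58, 1.5, 2.9, 3):
    print(x, F(x))   # 0.0, 0.125, 0.5, 0.875, 1.0
```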

We now prove some properties of distribution functions.

Theorem 15. Let X be a random variable and F_X its distribution function. If x_1 < x_2, then F_X(x_1) ≤ F_X(x_2).

Proof: Since by hypothesis x_1 < x_2, one may write [X ≤ x_2] = [X ≤ x_1] ∪ [x_1 < X ≤ x_2]. Upon taking probabilities of both sides, we obtain

P([X ≤ x_2]) = P([X ≤ x_1]) + P([x_1 < X ≤ x_2]) ≥ P([X ≤ x_1]),

which proves the theorem.

Theorem 16. If F_X is the distribution function of a random variable X, then lim_{x→∞} F_X(x) = 1.

Proof: First let us recall the definition of lim_{x→∞} g(x) = L. By this we mean: for every ε > 0, there exists a value of x, say x(ε), such that L − ε < g(x) < L + ε for every x > x(ε). Since F_X(x) is a probability, we need only show that for every ε > 0 there exists a number x(ε) such that for all values of x > x(ε) the inequality 1 − ε < F_X(x) is true. Since X is a random variable, and since X(ω) is a finite number for every ω, we see that it is possible to write

Ω = [X ≤ 0] ∪ ∪_{n=1}^∞ [n − 1 < X ≤ n].

This is a disjoint union, so taking probabilities of both sides, we obtain

1 = P([X ≤ 0]) + Σ_{n=1}^∞ P([n − 1 < X ≤ n]).

Thus the infinite series is convergent, and we see there exists an integer N such that for all n > N,

P([X ≤ 0]) + Σ_{k=1}^n P([k − 1 < X ≤ k]) > 1 − ε,

or P([X ≤ n]) > 1 − ε. Select x(ε) = N + 1. By Theorem 15 we obtain F_X(x) > 1 − ε for all x > x(ε), which proves the theorem.

Theorem 17. If F_X is the distribution function of a random variable X, then lim_{x→−∞} F_X(x) = 0.

Proof: Let ε > 0 be arbitrary. We must show that there exists a real number x(ε) such that F_X(x) < ε for all x < x(ε). By the same reasoning as before, we note that

Ω = [X > 0] ∪ ∪_{k=0}^∞ [−(k + 1) < X ≤ −k].

If we take the probability of both sides, we obtain

1 = P([X > 0]) + Σ_{k=0}^∞ P([−(k + 1) < X ≤ −k]).

Since this is a convergent series, there is an integer N(ε) such that

P([X > 0]) + Σ_{k=0}^n P([−(k + 1) < X ≤ −k]) > 1 − ε

for all n > N(ε). This is the same as writing P([X ≤ −(n + 1)]) < ε for all n > N(ε). Let x(ε) = −(N(ε) + 1). Then, by Theorem 15, F_X(x) < ε for all x < x(ε).

EXERCISES

1. Prove: if X is a function whose domain is Ω, then X is a random variable if and only if [X ≥ x] ∈ A for every real number x. (Hint: use Theorem 3 and the definition of A.)

2. Prove: if X is a function whose domain is Ω, then X is a random variable if and only if [X > x] ∈ A for every real number x.

3. Let X denote the number of heads that appear in two tosses of an unbiased coin. (i) List the elementary events in Ω. (ii) What values does X assign to the elementary events? (iii) What elementary events are in the event [X ≤ 0.78]? (iv) What elementary events are in the event [X ≤ 1.645]?

4. Prove the theorem: If X is a function whose domain is Ω, then X is a random variable if and only if [a < X ≤ b] ∈ A for every pair of real numbers a, b where a < b.

5.–10. Prove Theorems 9–14, respectively.

11. Prove: if X and Y are random variables, then [min{X, Y} ≤ z] = [X ≤ z] ∪ [Y ≤ z].

12. Prove: if X and Y are random variables, then min{X, Y} is a random variable.

13. Prove: if X and Y are random variables, then max{−X, −Y} = −min{X, Y}.

14. Let X be as in problem 3. Graph the distribution function F_X of X.

15. Let Y denote the sum of the upturned faces when two fair dice are tossed. (i) List the elementary events in Ω. (ii) Evaluate Y(ω) for every ω ∈ Ω. (iii) Plot the graph of the distribution function of Y.

16. In Polya's urn scheme described in Section 1.5, let r = 3, b = _ and c = _. Let Z denote the number of red balls selected in the first three trials. List the elementary events in Ω, and plot the graph of the distribution function of Z.

17. Let A_1, A_2, A_3, A_4 be events, and let X = I_{A_1} + I_{A_2} + I_{A_3} + I_{A_4}. Show that X is the number of these events that occur.

18. Prove: if X is a random variable, and if |X| is the function defined by |X|(ω) = |X(ω)|, then |X| is a random variable also.

2.2 Univariate Discrete Distributions.

A random variable X will be called discrete, and its distribution function F_X will be called discrete, if the range of X is finite or denumerable. This means that there is a finite or denumerable set of real numbers {x_1, x_2, ...} such that ∪_n [X = x_n] = Ω. For such a random variable the distribution function is obtained as follows. First, it is easy to verify that

[X ≤ x] = ∪ {[X = x_n] : x_n ≤ x} for all real x.

The above reads: the union of all those events [X = x_n] for which x_n ≤ x. Since these events are disjoint, the distribution function of X becomes

F_X(x) = Σ {P([X = x_n]) : x_n ≤ x}.

Associated with a discrete random variable X or a discrete distribution function F_X is a function f_X called the discrete density function. It is defined over the range of X by the formula

f_X(x_n) = P([X = x_n])

for all elements x_n in the range of X. Thus it follows that

Σ {f_X(x_n) : n ≥ 1} = 1 and F_X(x) = Σ {f_X(x_n) : x_n ≤ x}.

We shall need some standard notation. If x is any real number, then [x] will denote the largest integer that is equal to or less than x. For example, [3.46] = 3 but [−7.2] = −8. Of course, [46] = 46 and [−3] = −3. This notation will especially be used in cases of discrete random variables where the range is a subset of the nonnegative integers, and in particular when x_0 = 0, x_1 = 1, x_2 = 2, .... In such a case the distribution function of a discrete random variable X is written

F_X(x) = Σ_{k=0}^{[x]} P([X = k]) = Σ_{k=0}^{[x]} f_X(k).
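As a hedged sketch of the formulas above (the helper name make_cdf is hypothetical, not from the text), a discrete distribution function can be built directly from a discrete density:

```python
def make_cdf(density):
    """Turn a discrete density {x_n: P([X = x_n])} into the distribution
    function F_X(x) = sum of f_X(x_n) over all x_n <= x."""
    def F(x):
        return sum(p for x_n, p in density.items() if x_n <= x)
    return F

# The density of the number of heads in three tosses of a fair coin:
f = {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}
F = make_cdf(f)
print(F(-1.78), F(0.58), F(1.5), F(3))   # 0 0.125 0.5 1.0
```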

It will now be shown that any discrete random variable can be represented in terms of indicators.

Theorem 1. If X is a discrete random variable, and if {x_n} is a sequence (finite or infinite) of distinct numbers that satisfies ∪_n [X = x_n] = Ω, then X can be written as

X = Σ_n x_n I_{[X = x_n]}.

Proof: Let ω ∈ Ω be arbitrary. Then there exists exactly one value of n such that ω ∈ [X = x_n] and such that ω ∉ [X = x_j] for all j ≠ n. Thus

Σ_r x_r I_{[X = x_r]}(ω) = x_n I_{[X = x_n]}(ω) = x_n.

But ω ∈ [X = x_n] means that X(ω) = x_n. Thus the two sides of the equation are equal for every ω.

By a univariate distribution function we mean the distribution function of a single random variable. When we use the word distribution of a random variable we refer either to its distribution function or its discrete density or anything that determines these.

The more important discrete distributions in mathematical statistics arise from sampling, and they are here divided into two main groups: those distributions that arise from sampling with replacement, and those that arise from sampling without replacement. We shall develop here three of each. The discrete distributions that arise from sampling with replacement do so by beginning with the notion of a sequence of Bernoulli trials, which we now define.

Definition. By a sequence of n Bernoulli trials we mean a game or experiment consisting of n trials which satisfy the following three requirements: (i) one of two possible disjoint events, denoted by S and F, occurs as the outcome of each trial, (ii) the outcome of each trial is independent of the outcomes of the other trials, and (iii) the probability p of S at a trial does not change from trial to trial.

As an example of a sequence of n Bernoulli trials, the simplest is that of tossing a coin n times. Each toss of the coin is a trial. If heads occurs on a trial, we might call this outcome H, and if the coin comes up tails, we might denote that outcome T. It is easy to see that requirements (i), (ii) and (iii) in the above definition are satisfied in this case. Another example of a sequence of n Bernoulli trials is when one samples n times with replacement from a lot which contains both defective and nondefective items. If a defective item is obtained at a trial, this outcome might be called S, while the outcome of obtaining a nondefective item would be called F. In this definition, the event S is usually referred to as "success", and F is usually referred to as "failure". The probability of S occurring at a particular trial is usually denoted by p, and the probability of F is usually denoted by q = 1 − p. In a game or experiment consisting of n Bernoulli trials, the fundamental probability set Ω can be represented as the set of all possible n-tuples of the letters S and F only. By the same reasoning as that used in Chapter 1, one can see that there are 2^n elementary events in Ω.

The event "success occurs at the kth trial", S_k, consists of all those elementary events or n-tuples in which S occurs in the kth place. The set of all n-tuples that contain exactly k S's is an event referred to as the event that k successes occur in the n trials. Because of independence, each elementary event that consists of k S's and n − k F's has probability p^k (1 − p)^{n−k}.

Definition. If X is a random variable defined as the number of S's in n Bernoulli trials, then X is said to have the binomial distribution. If p denotes the probability of S in any trial, then we refer to the distribution of X as Bin(n, p).

Theorem 2. If X is a random variable with the Bin(n, p) distribution, then its discrete density is

f_X(k) = C(n, k) p^k (1 − p)^{n−k}, where 0 ≤ k ≤ n,

and where C(n, k) = n!/(k!(n − k)!) denotes the binomial coefficient.

Proof: We are considering a sequence of n Bernoulli trials where the probability of S (success) at each trial is p, where 0 < p < 1. Over the fundamental probability set Ω for this game our random variable X is the number of S's in the n trials. To every elementary event, X assigns an integer equal to the number of S's in that n-tuple of S's and F's. The event [X = k] denotes the collection of all such n-tuples in each of which there are exactly k S's and n − k F's. It is easily seen that Ω = ∪_{k=0}^n [X = k]. The number of elementary events in [X = k] is easy to compute: in n trials there are C(n, k) ways of selecting the k trial numbers at which S occurs. Hence there are C(n, k) elementary events in the event [X = k], each occurring with probability p^k (1 − p)^{n−k}. Thus the discrete density of X is

f_X(k) = C(n, k) p^k (1 − p)^{n−k}, where 0 ≤ k ≤ n.

One thing that one should always do when one obtains a new discrete density is to add up all the values of the density and see to it that this sum is 1. In the case of the binomial distribution, one simply uses the binomial theorem reviewed in Chapter 1 to obtain

Σ_{k=0}^n C(n, k) p^k (1 − p)^{n−k} = (p + (1 − p))^n = 1.
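The sanity check recommended here is easy to automate. A minimal sketch, with assumed example values n = 10 and p = 0.3:

```python
from math import comb

def binom_density(k, n, p):
    """f_X(k) = C(n, k) p^k (1 - p)^(n - k) for the Bin(n, p) distribution."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
total = sum(binom_density(k, n, p) for k in range(n + 1))
print(total)   # 1.0, up to floating-point rounding
```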

The distribution function of X can be written in the following form:

F_X(x) = P([X ≤ x]) = 0 if x < 0,
                      Σ_{k=0}^{[x]} C(n, k) p^k (1 − p)^{n−k} if 0 ≤ x < n,
                      1 if x ≥ n.

Here we must get ahead of our subject a little and give a definition of independence of discrete random variables; the full development of independence of random variables will be treated in Chapter 4.

Definition. Discrete random variables X_1, ..., X_n are said to be independent if for every n-tuple of real numbers (x_1, ..., x_n) that satisfies P([X_i = x_i]) > 0 for 1 ≤ i ≤ n, the following equality is satisfied:

P(∩_{i=1}^n [X_i = x_i]) = Π_{i=1}^n P([X_i = x_i]).

It will turn out that this definition of independence is equivalent to stating that for every such n-tuple (x_1, ..., x_n) that satisfies P([X_i = x_i]) > 0 for 1 ≤ i ≤ n, the events {[X_i = x_i] : 1 ≤ i ≤ n} are independent. However, this is the right place to present Theorems 3 and 6, hence the out-of-order presentation is deemed necessary.

Theorem 3. If X and Y are independent random variables, if the distribution of X is Bin(m, p), and if the distribution of Y is Bin(n, p), then the distribution of X + Y is Bin(m + n, p).

Proof: Consider the sequence of m + n Bernoulli trials in which the probability of success is p, i.e., P(S) = p, and let Z denote the number of times that S occurs. Then Z is Bin(m + n, p). Let X denote the number of times that S occurs in the first m trials, and let Y denote the number of times S occurs in the last n trials. Now X and Y are independent, the distribution of X is Bin(m, p), the distribution of Y is Bin(n, p), and Z = X + Y, which proves the theorem.
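Theorem 3 can also be checked numerically: convolving the Bin(m, p) and Bin(n, p) densities must reproduce the Bin(m + n, p) density. A sketch under assumed values m = 4, n = 6, p = 0.3:

```python
from math import comb

def binom_density(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

m, n, p = 4, 6, 0.3
for s in range(m + n + 1):
    # P([X + Y = s]) by independence: sum over the ways to split s.
    conv = sum(binom_density(j, m, p) * binom_density(s - j, n, p)
               for j in range(max(0, s - n), min(m, s) + 1))
    assert abs(conv - binom_density(s, m + n, p)) < 1e-12
print("convolution matches Bin(m + n, p)")
```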

The second discrete distribution that arises in sampling with replacement is the geometric distribution.

Definition. A random variable Y is said to have the geometric distribution if it is equal to the trial number at which the first success occurs in a sequence of Bernoulli trials. If the probability of S is p, then we say that the distribution of Y is geom(p).

Theorem 4. If the distribution of a random variable Y is geom(p), then its density is

f_Y(y) = (1 − p)^{y−1} p for y = 1, 2, ....

Proof: The fundamental probability set Ω can be taken in a number of ways. For our purposes here, let us take the set of all finite sequences of letters in which the last letter is S and all letters that come before it are F's. Thus we have an infinite number of elementary events: (S), (F, S), (F, F, S), .... It is easy to see that the event [Y = y] consists of only the elementary event composed of y − 1 F's followed and concluded by S. The probability of this elementary event is easily seen to be (1 − p)^{y−1} p.

We can easily check that all of these probabilities add up to 1. First one should recall from calculus that if |x| < 1, then Σ_{n=0}^∞ x^n = 1/(1 − x). From this it follows that

Σ_{y=1}^∞ (1 − p)^{y−1} p = p Σ_{k=0}^∞ (1 − p)^k = p · 1/(1 − (1 − p)) = 1.

The third of the three discrete distributions that arise from sampling with replacement is the Poisson distribution. We shall first recall from calculus this result: if a_n → a as n → ∞, then

lim_{n→∞} (1 + a_n/n)^n = e^a.

For every positive integer n, let X_n be a random variable whose distribution is Bin(n, p_n). This means that for every n we consider the number of successes in n trials where the probability of success in each trial is p_n.

Theorem 5. If, for every positive integer n, X_n is Bin(n, p_n), and if np_n → λ > 0 as n → ∞, then

lim_{n→∞} P([X_n = k]) = e^{−λ} λ^k/k! for k = 0, 1, 2, ....

Proof: We evaluate this limit as follows:

P([X_n = k]) = C(n, k) p_n^k (1 − p_n)^{n−k}
             = [n(n − 1)···(n − k + 1)/n^k] · [(np_n)^k/k!] · (1 − np_n/n)^n (1 − np_n/n)^{−k}
             = [Π_{j=0}^{k−1} (1 − j/n)] · [(np_n)^k/k!] · (1 − np_n/n)^n (1 − np_n/n)^{−k}.

Taking limits of both sides, and applying the result recalled from calculus above, we have

lim_{n→∞} P([X_n = k]) = e^{−λ} λ^k/k! for k = 0, 1, 2, ....
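The limit in Theorem 5 can be watched numerically. This sketch (with assumed values λ = 3 and k = 4) evaluates the Bin(n, λ/n) density at k for growing n:

```python
from math import comb, exp, factorial

def binom_density(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

lam, k = 3.0, 4
for n in (10, 100, 1000, 10000):
    print(n, binom_density(k, n, lam / n))
print("limit:", exp(-lam) * lam**k / factorial(k))   # e^(-3) 3^4 / 4! = 0.168...
```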

Definition. If X is a random variable whose distribution function is determined by the density

P([X = k]) = e^{−λ} λ^k/k! for k = 0, 1, 2, ...,

then X is said to have the Poisson distribution with parameter λ > 0, and we write: X is P(λ).

The Poisson distribution occupies a position of considerable importance in probability and statistics. In thinking over the meaning of the limit in the above theorem, one easily sees that the Poisson distribution can be used as an approximation to the binomial distribution Bin(n, p) when the probability of success in each trial, p, is very small, n is very large, and the product np is of moderate size. In such a case we may take λ = np. For example, if Y denotes the number of successes in 1,000 Bernoulli trials with probability p = 0.0015 of success at each trial, then the distribution of Y can be approximated by the Poisson distribution with λ = 1.5. Thus P([Y = 5]) can be approximated by e^{−1.5}(1.5)^5/5! rather than computed exactly from the expression C(1000, 5)(0.0015)^5(0.9985)^{995}.

This distribution arises quite frequently in practice when a large number of Bernoulli-like trials occur and an outcome of small probability at each trial is the success in which we are interested. In a sizable community of people, for example, each individual constitutes a Bernoulli trial on each day, and each such individual has a small probability of dying on any particular day. These individuals are all independent of each other, so the number of deaths on any day should follow a Poisson distribution. By similar reasoning, the number of claims made on an insurance company on any particular day should follow a Poisson distribution.

Theorem 6. If X and Y are independent random variables, if the distribution of X is P(λ), and if the distribution of Y is P(μ), then the distribution of X + Y is P(λ + μ).
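Taking the values of this example (n = 1,000 trials, p = 0.0015) as given, both sides of the approximation can be computed directly:

```python
from math import comb, exp, factorial

n, p, k = 1000, 0.0015, 5
exact = comb(n, k) * p**k * (1 - p)**(n - k)   # the binomial probability
lam = n * p                                    # lambda = np = 1.5
approx = exp(-lam) * lam**k / factorial(k)     # the Poisson approximation
print(exact, approx)   # both are approximately 0.014
```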

Proof: For every nonnegative integer n, we have, using the hypothesis of independence and the binomial theorem,

P([X + Y = n]) = P(∪_{j=0}^n ([X = j] ∩ [Y = n − j]))
               = Σ_{j=0}^n P([X = j] ∩ [Y = n − j])
               = Σ_{j=0}^n P([X = j]) P([Y = n − j])
               = Σ_{j=0}^n (e^{−λ} λ^j/j!)(e^{−μ} μ^{n−j}/(n − j)!)
               = (e^{−(λ+μ)}/n!) Σ_{j=0}^n (n!/(j!(n − j)!)) λ^j μ^{n−j}
               = e^{−(λ+μ)} (λ + μ)^n/n!,

which proves the theorem.

We now treat the three discrete distributions that arise in simple random sampling without replacement. The first and most general of these is what we shall refer to as the sample survey sum distribution. Its parameters are a positive integer N, an integer n (1 ≤ n ≤ N) and N real numbers x_1, x_2, ..., x_N. Let S denote the set of all n-tuples (i_1, i_2, ..., i_n) such that 1 ≤ i_1 < i_2 < ··· < i_n ≤ N. Then the discrete density of X is defined, for every real number x ∈ {Σ_{j=1}^n x_{i_j} : 1 ≤ i_1 < ··· < i_n ≤ N}, by

P([X = x]) = #{(i_1, ..., i_n) ∈ S : Σ_{j=1}^n x_{i_j} = x} / C(N, n).

This distribution arises in sample surveys where a simple random sample of size n is taken from a population of N individuals without replacement; to the ith individual there corresponds a number x_i, and X becomes the sum of the x_i's of the sample. An entire branch of mathematical statistics is devoted essentially to this distribution.

There are also two very special cases of this last distribution. One special case is a distribution that we shall refer to as the Wilcoxon distribution. In this case, x_i = i, 1 ≤ i ≤ N, and the random variable W that is of interest is the sum of a simple random sample of size n from the integers 1, 2, ..., N taken without replacement.

Suppose that N = 4 and n = 2. Then it is easy to see that the discrete density of W is

f_W(3) = 1/6, f_W(4) = 1/6, f_W(5) = 1/3, f_W(6) = 1/6, f_W(7) = 1/6.

This distribution is basic in nonparametric statistical inference. It is also applied in a certain type of trend analysis. We shall treat both of these topics later.

Another special case of the sample survey sum distribution is the hypergeometric distribution. Consider an urn that contains r red balls and b black balls. One selects n of these balls at random without replacement. Let X denote the number of red balls in the sample. Then X is said to have the hypergeometric distribution.

Theorem 7. The discrete density of X is given by

P([X = k]) = C(r, k) C(b, n − k)/C(r + b, n), where max{0, n − b} ≤ k ≤ min{n, r}.

Proof: We first find the set of values that X can take. Clearly the number of red balls in the sample is not less than 0. Also, if b < n, the smallest number of red balls in the sample must be n − b. Hence the smallest value of X is max{0, n − b}. But also the number of red balls in the sample cannot be larger than either n or r. Therefore the maximum value of X is min{n, r}. In order to find P([X = k]), where max{0, n − b} ≤ k ≤ min{n, r}, we reason as follows. There are C(r + b, n) ways of selecting n balls at random from the r + b balls in the urn. There are C(r, k) ways of obtaining k red balls out of the r red balls in the urn, and for each way in which k red balls are obtained there are C(b, n − k) ways of obtaining the n − k black balls that must be in the sample. Thus, out of the total number of C(r + b, n) equally likely outcomes, there are C(r, k) C(b, n − k) equally likely ways in which the outcome yields k red balls. This establishes the formula given above.

It should be emphasized that whenever one obtains the formula for the density of some discrete distribution, the domain of that density is very important and must be specified.
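These sampling-without-replacement densities can be obtained by brute-force enumeration of the C(N, n) equally likely samples. In the sketch below (the function name is my own), the same enumeration reproduces the Wilcoxon density just displayed and, by coding each red ball as 1 and each black ball as 0, a hypergeometric density as well (shown with assumed values r = 3, b = 2, n = 2):

```python
from itertools import combinations
from collections import Counter
from fractions import Fraction
from math import comb

def survey_sum_density(values, n):
    """Density of the sum of a simple random sample of size n drawn
    without replacement from `values` (the sample survey sum distribution)."""
    N = len(values)
    counts = Counter(sum(s) for s in combinations(values, n))
    return {x: Fraction(c, comb(N, n)) for x, c in sorted(counts.items())}

# Wilcoxon case from the text: x_i = i with N = 4, n = 2;
# gives 1/6, 1/6, 1/3, 1/6, 1/6 at the points 3, 4, 5, 6, 7.
print(survey_sum_density([1, 2, 3, 4], 2))

# Hypergeometric case: three red balls (1) and two black balls (0), n = 2;
# the sum is the number of red balls: 1/10, 3/5, 3/10 at 0, 1, 2.
print(survey_sum_density([1, 1, 1, 0, 0], 2))
```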

EXERCISES

1. Let X denote the number of heads minus the number of tails in three tosses of a fair coin.
(i) Write X as a sum of products of constants and indicators.
(ii) Find the discrete density of X.
(iii) Find the distribution function of X.

2. Let Y denote the smallest even number equal to or greater than the sum of the upturned faces in a toss of two fair dice. Answer questions (i), (ii) and (iii) in problem 1 for Y.

3. An urn contains four white balls and six black balls. A trial consists of selecting a ball at random, observing its color and replacing it. Let X denote the number of white balls selected in five trials. After verifying that this involves a sequence of five Bernoulli trials, compute P([X ≤ _.5]).

4. There are 3,000 registered Republicans and 7,000 registered Democrats in an election district. Ten registered voters are selected at random, the sampling assumed to be done with replacement. Let Z denote the number of registered Republicans in this sample of size ten. Compute P([Z = 3]).

5. Let X be a random variable with the geometric distribution geom(p). Prove that P([X = k + n] | [X > n]) = P([X = k]) for all n and k.

6. Prove the converse to the assertion in problem 5: if X is a positive integer-valued random variable such that 0 < P([X = 1]) < 1 and P([X = k + n] | [X > n]) = P([X = k]) for all n and k, then the distribution of X is geom(p), where p = P([X = 1]).

7. A shipment contains N items, among which are d defectives. One selects n of these items randomly and without replacement, where n ≤ N. Let X denote the number of defectives in the sample. Find the discrete density of X.

8. An urn contains three black balls and three red balls. A simple random sample of size four is taken without replacement. Let Y denote the number of red balls in the sample. Find the discrete density of Y.

9. Let A_1, A_2, ..., A_n denote n independent events, all with the same probability p, 0 < p < 1, and let Z = I_{A_1} + I_{A_2} + ··· + I_{A_n}. Find the discrete density of Z.

10. A lake contains n fish. Among them there are t tagged fish. A fisherman catches c fish and discovers that X of these fish are tagged. Find the discrete density of X.

11. Suppose that X is a random variable whose distribution is P(λ). Find the value of λ such that P([X = 5]) ≥ P([X = n]) for all n ≥ 0.

12. A population has six units, and to the ith unit there is attached a value x_i according to the following table:

Unit number:  1    2    3    4    5    6
Value:        3._  _._  _._  4._  3._  4.5

Three of these units are selected by simple random sampling without replacement. Let X denote the sum of the values of the three units selected. Find the density of X.

2.3 Absolutely Continuous Distributions.

In this section the notion of an absolutely continuous distribution function is introduced. A word of caution is in order. In many books, the expressions "continuous variate" and "continuous random variable" are used. These two expressions mean the same thing as "absolutely continuous distribution function" does in this treatment. The expression used here is consistent with usage in other branches of analysis. Besides, the two expressions that are used in other books do not mean what they say: a random variable, treated as a function over a fundamental probability space, in many cases cannot be considered continuous and yet it has an absolutely continuous distribution function. A second word of caution: since so much space is devoted to distribution functions that are absolutely continuous and to distribution functions that are discrete, one should not infer from this that every distribution function is in one of these two classes. The set of all probability distribution functions is much wider than the union of these two classes. For example, if X and Y are random variables, where X has a discrete distribution function and Y has an absolutely continuous distribution function, then for any p ∈ (0, 1), pF_X(x) + (1 − p)F_Y(x) is a distribution function that is neither discrete nor absolutely continuous.

Definition. A random variable X is said to be absolutely continuous if there exists a nonnegative function f_X(x) of x that satisfies

F_X(x) = ∫_{−∞}^x f_X(t) dt

for every real number x. In this definition the function f_X(x) is called a density function of the random variable X or of the distribution function F_X(x).

The above definition is now illustrated with examples. A random variable X is said to have the uniform distribution over the interval [0, 1] if the probability of X taking a value in any subinterval [a, b] ⊂ [0, 1] is equal to the length of the interval, i.e., if

P([X ∈ (a, b]]) = P([a < X ≤ b]) = b − a.

Thus

F_X(x) = 0 if x < 0, x if 0 ≤ x ≤ 1, 1 if x > 1.

Such a situation is realized when a point is selected at random from [0, 1], with no subinterval of fixed length being more or less probable than any other subinterval of [0, 1] of the same length. A fundamental probability set that could give rise to this is Ω = [0, 1] with the random variable X defined by X(ω) = ω for all ω ∈ Ω. The sigma-algebra of events, A, in this case contains all subintervals [a, b] of [0, 1]. However, in this case A does not contain all subsets of Ω. The reason for this is given in a graduate course in real analysis and requires background that would take us far afield from our present purpose. In any case, F_X(x) is absolutely continuous; if one defines f_X(x) by

f_X(x) = 1 if 0 ≤ x ≤ 1, 0 otherwise,

it is very easy to verify that F_X(x) = ∫_{−∞}^x f_X(t) dt for all real values of x.

Another absolutely continuous distribution is the exponential distribution. This distribution function is defined by

F_X(x) = 0 if x ≤ 0, 1 − e^{−λx} if x > 0,

where λ > 0 is a fixed constant. Its density is

f_X(x) = 0 if x ≤ 0, λe^{−λx} if x > 0.

Again, it is easy to verify that F_X(x) = ∫_{−∞}^x f_X(t) dt for all real values of x.
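The phrase "easy to verify" can also be handed to a computer: a midpoint Riemann sum of the exponential density recovers F_X(x) = 1 − e^{−λx}. A sketch with an assumed rate λ = 2:

```python
from math import exp

lam, x = 2.0, 1.3      # assumed rate and evaluation point
N = 100_000
dt = x / N
# Midpoint Riemann sum of the density over (0, x].
riemann = sum(lam * exp(-lam * (i + 0.5) * dt) * dt for i in range(N))
print(riemann, 1 - exp(-lam * x))   # both about 0.92573
```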

If, say, one wished to compute P([0.5 < X ≤ 1]), one need only write

P([0.5 < X ≤ 1]) = F_X(1) − F_X(0.5) = ∫_{0.5}^1 f_X(t) dt = e^{−0.5λ} − e^{−λ}.

Another absolutely continuous distribution is the Cauchy distribution. Its distribution function is defined by

F_X(x) = 1/2 + (1/π) Arctan(x) for all real values of x.

By differentiating it, one obtains its density:

f_X(x) = 1/(π(1 + x²)) for −∞ < x < +∞.

In general, given any nonnegative function f(x) defined over the entire real line for which ∫_{−∞}^{+∞} f(x) dx = 1, one may obtain an absolutely continuous distribution function F(x) by defining F(x) = ∫_{−∞}^x f(t) dt. Note that in all cases f(x) can be obtained from F(x) by differentiating F at all points x at which f is continuous. This is a consequence of the fundamental theorem of calculus.

As an example of a rather artificial absolutely continuous distribution function, consider the function

f(x) = (1/2) sin(x)  if 0 ≤ x ≤ π/2,
       x − 2         if 2 ≤ x ≤ 3,
       0             if x < 0 or π/2 < x < 2 or x > 3.

Notice that

∫_{−∞}^{+∞} f(x) dx = ∫_0^{π/2} (1/2) sin(x) dx + ∫_2^3 (x − 2) dx = 1.

Hence if a distribution function F is defined by

F(x) = 0                 if x < 0,
       (1 − cos(x))/2    if 0 ≤ x ≤ π/2,
       1/2               if π/2 < x < 2,
       (1 + (x − 2)²)/2  if 2 ≤ x ≤ 3,
       1                 if x > 3,

we see that F is absolutely continuous and has density f.

We close this section by introducing the normal distribution, sometimes called the Gaussian distribution.

Definition. We shall say that a random variable X has a normal distribution if for some μ ∈ R and for some σ > 0 the random variable has an absolutely continuous distribution function with density given by

f_X(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)} for −∞ < x < ∞.

We shall refer to this by stating: X is N(μ, σ²), or the distribution function of X is N(μ, σ²).

It is first necessary to verify that the integral of this positive function over the interval (−∞, ∞) is 1. So let us denote

I = ∫_{−∞}^∞ (1/(σ√(2π))) e^{−(t−μ)²/(2σ²)} dt.

We first make the change of variable x = (t − μ)/σ, where σ is the positive square root of σ². Doing this, we obtain

I = ∫_{−∞}^∞ (1/√(2π)) e^{−x²/2} dx.

In order to show that I = 1, it is sufficient to show that I² = 1. We now use the fact that a product of integrals can be represented as a double integral. Thus we obtain

I² = (∫_{−∞}^∞ (1/√(2π)) e^{−x²/2} dx)(∫_{−∞}^∞ (1/√(2π)) e^{−y²/2} dy)
   = (1/(2π)) ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−(x²+y²)/2} dy dx.

We now change from rectangular coordinates to polar coordinates via the transformation x = r cos θ and y = r sin θ and obtain

I² = (1/(2π)) ∫_0^{2π} ∫_0^∞ e^{−r²/2} r dr dθ = (1/(2π)) ∫_0^{2π} (−e^{−r²/2}) |_0^∞ dθ = (1/(2π)) ∫_0^{2π} dθ = 1,

which concludes the proof.

We shall be very concerned with the case of a random variable whose distribution is N(0, 1). The following theorem is of some use.

Theorem. If X has the N(0, 1) distribution, and if Y = μ + σX, then Y is N(μ, σ²).

Proof: We observe that

P([Y ≤ x]) = P([X ≤ (x − μ)/σ]) = (1/√(2π)) ∫_{−∞}^{(x−μ)/σ} e^{−t²/2} dt.

Now make the change of variable t = (u − μ)/σ; this last expression becomes

∫_{−∞}^x (1/(σ√(2π))) e^{−(u−μ)²/(2σ²)} du.

Differentiating both sides with respect to x, and making use of the fundamental theorem of calculus, we obtain

f_Y(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)},

which concludes the proof of the theorem.
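The verification that the normal density integrates to 1 can also be approximated numerically. This sketch (with assumed parameters μ = 1, σ = 2) sums the density over [μ − 10σ, μ + 10σ], outside of which the remaining mass is negligible:

```python
from math import exp, pi, sqrt

def normal_density(x, mu, sigma):
    """Density of the N(mu, sigma^2) distribution."""
    return exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

mu, sigma = 1.0, 2.0
a, b, N = mu - 10 * sigma, mu + 10 * sigma, 200_000
dt = (b - a) / N
total = sum(normal_density(a + (i + 0.5) * dt, mu, sigma) * dt for i in range(N))
print(total)   # essentially 1.0
```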

EXERCISES

1. If F_X(x) is the distribution function of a random variable X defined by

F_X(x) = 1 − e^{−λ(x−μ)} if x > μ, 0 if x ≤ μ,

where λ > 0 and μ are fixed constants, show that F_X(x) is absolutely continuous, and find P([μ < X ≤ μ + _]).

2. Let X be a random variable which is uniformly distributed over [−π/2, π/2], i.e., X has the density

f_X(x) = 1/π if |x| ≤ π/2, 0 if |x| > π/2.

Let the random variable Y be defined by Y = tan(X). Show that F_Y is absolutely continuous, and find its density.

3. Let g be a function defined by

g(x) = 3x² if 0 ≤ x ≤ 1, 0 if x < 0 or x > 1.

(i) Find an absolutely continuous distribution function G(x) whose density is g. (ii) If X is a random variable whose distribution function is G, compute P([0 ≤ X ≤ _]).

4. Let X be a random variable which has the uniform distribution over [0, 1]. Let Y be a random variable defined by Y = 5 + X. Find the distribution function and the density of Y.

5. Let X be as in problem 4, and define the random variable Z by Z = 4(X − _). Find the distribution function and density of Z.

6. Let F_X be a distribution function defined by

F_X(x) = (1/2)e^x if x < 0, (1/2)(1 + sin(x)) if 0 ≤ x ≤ π/2, 1 if x > π/2.

Show that F_X is absolutely continuous, and find its density.

7. Let X be as in problem 4, and let U be a random variable whose range is a subset of [0, π/2] and such that X = sin(U). Find the density and distribution function of U, and compute P([_ < U ≤ _]).

8. Let X be a random variable with density

f_X(x) = e^{−x} if x ≥ 0, 0 if x < 0.

Let Y = log(X). Find the density and distribution function of Y.

9. Suppose there are two games, G_1 and G_2. If game G_1 is played, the outcome is a random variable X with distribution function F_X, and if game G_2 is played, the outcome is a random variable Y with distribution function F_Y. Supergame G is played in this way: one tosses an unbiased coin; if it comes up heads, you play game G_1, and if it comes up tails, you play game G_2. Let Z denote the outcome of supergame G. Prove that

F_Z(t) = (F_X(t) + F_Y(t))/2

for all real values of t.


More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier. Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of

More information

CS 361: Probability & Statistics

CS 361: Probability & Statistics February 26, 2018 CS 361: Probability & Statistics Random variables The discrete uniform distribution If every value of a discrete random variable has the same probability, then its distribution is called

More information

Probability and distributions. Francesco Corona

Probability and distributions. Francesco Corona Probability Probability and distributions Francesco Corona Department of Computer Science Federal University of Ceará, Fortaleza Probability Many kinds of studies can be characterised as (repeated) experiments

More information

p. 4-1 Random Variables

p. 4-1 Random Variables Random Variables A Motivating Example Experiment: Sample k students without replacement from the population of all n students (labeled as 1, 2,, n, respectively) in our class. = {all combinations} = {{i

More information

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets

More information

Expectations. Definition Let X be a discrete rv with set of possible values D and pmf p(x). The expected value or mean value of X, denoted by E(X ) or

Expectations. Definition Let X be a discrete rv with set of possible values D and pmf p(x). The expected value or mean value of X, denoted by E(X ) or Expectations Expectations Definition Let X be a discrete rv with set of possible values D and pmf p(x). The expected value or mean value of X, denoted by E(X ) or µ X, is E(X ) = µ X = x D x p(x) Expectations

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 8 10/1/2008 CONTINUOUS RANDOM VARIABLES

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 8 10/1/2008 CONTINUOUS RANDOM VARIABLES MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 8 10/1/2008 CONTINUOUS RANDOM VARIABLES Contents 1. Continuous random variables 2. Examples 3. Expected values 4. Joint distributions

More information

Math 493 Final Exam December 01

Math 493 Final Exam December 01 Math 493 Final Exam December 01 NAME: ID NUMBER: Return your blue book to my office or the Math Department office by Noon on Tuesday 11 th. On all parts after the first show enough work in your exam booklet

More information

MODULE 2 RANDOM VARIABLE AND ITS DISTRIBUTION LECTURES DISTRIBUTION FUNCTION AND ITS PROPERTIES

MODULE 2 RANDOM VARIABLE AND ITS DISTRIBUTION LECTURES DISTRIBUTION FUNCTION AND ITS PROPERTIES MODULE 2 RANDOM VARIABLE AND ITS DISTRIBUTION LECTURES 7-11 Topics 2.1 RANDOM VARIABLE 2.2 INDUCED PROBABILITY MEASURE 2.3 DISTRIBUTION FUNCTION AND ITS PROPERTIES 2.4 TYPES OF RANDOM VARIABLES: DISCRETE,

More information

k P (X = k)

k P (X = k) Math 224 Spring 208 Homework Drew Armstrong. Suppose that a fair coin is flipped 6 times in sequence and let X be the number of heads that show up. Draw Pascal s triangle down to the sixth row (recall

More information

CS 246 Review of Proof Techniques and Probability 01/14/19

CS 246 Review of Proof Techniques and Probability 01/14/19 Note: This document has been adapted from a similar review session for CS224W (Autumn 2018). It was originally compiled by Jessica Su, with minor edits by Jayadev Bhaskaran. 1 Proof techniques Here we

More information

Set Theory Digression

Set Theory Digression 1 Introduction to Probability 1.1 Basic Rules of Probability Set Theory Digression A set is defined as any collection of objects, which are called points or elements. The biggest possible collection of

More information

1 INFO Sep 05

1 INFO Sep 05 Events A 1,...A n are said to be mutually independent if for all subsets S {1,..., n}, p( i S A i ) = p(a i ). (For example, flip a coin N times, then the events {A i = i th flip is heads} are mutually

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

Statistics 1 - Lecture Notes Chapter 1

Statistics 1 - Lecture Notes Chapter 1 Statistics 1 - Lecture Notes Chapter 1 Caio Ibsen Graduate School of Economics - Getulio Vargas Foundation April 28, 2009 We want to establish a formal mathematic theory to work with results of experiments

More information

Chapter 3: Random Variables 1

Chapter 3: Random Variables 1 Chapter 3: Random Variables 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

HW2 Solutions, for MATH441, STAT461, STAT561, due September 9th

HW2 Solutions, for MATH441, STAT461, STAT561, due September 9th HW2 Solutions, for MATH44, STAT46, STAT56, due September 9th. You flip a coin until you get tails. Describe the sample space. How many points are in the sample space? The sample space consists of sequences

More information

Math Bootcamp 2012 Miscellaneous

Math Bootcamp 2012 Miscellaneous Math Bootcamp 202 Miscellaneous Factorial, combination and permutation The factorial of a positive integer n denoted by n!, is the product of all positive integers less than or equal to n. Define 0! =.

More information

Random variables. DS GA 1002 Probability and Statistics for Data Science.

Random variables. DS GA 1002 Probability and Statistics for Data Science. Random variables DS GA 1002 Probability and Statistics for Data Science http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall17 Carlos Fernandez-Granda Motivation Random variables model numerical quantities

More information

Outline PMF, CDF and PDF Mean, Variance and Percentiles Some Common Distributions. Week 5 Random Variables and Their Distributions

Outline PMF, CDF and PDF Mean, Variance and Percentiles Some Common Distributions. Week 5 Random Variables and Their Distributions Week 5 Random Variables and Their Distributions Week 5 Objectives This week we give more general definitions of mean value, variance and percentiles, and introduce the first probability models for discrete

More information

Near convexity, metric convexity, and convexity

Near convexity, metric convexity, and convexity Near convexity, metric convexity, and convexity Fred Richman Florida Atlantic University Boca Raton, FL 33431 28 February 2005 Abstract It is shown that a subset of a uniformly convex normed space is nearly

More information

Probabilistic models

Probabilistic models Probabilistic models Kolmogorov (Andrei Nikolaevich, 1903 1987) put forward an axiomatic system for probability theory. Foundations of the Calculus of Probabilities, published in 1933, immediately became

More information

Probability Experiments, Trials, Outcomes, Sample Spaces Example 1 Example 2

Probability Experiments, Trials, Outcomes, Sample Spaces Example 1 Example 2 Probability Probability is the study of uncertain events or outcomes. Games of chance that involve rolling dice or dealing cards are one obvious area of application. However, probability models underlie

More information

STAT 418: Probability and Stochastic Processes

STAT 418: Probability and Stochastic Processes STAT 418: Probability and Stochastic Processes Spring 2016; Homework Assignments Latest updated on April 29, 2016 HW1 (Due on Jan. 21) Chapter 1 Problems 1, 8, 9, 10, 11, 18, 19, 26, 28, 30 Theoretical

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Topic -2. Probability. Larson & Farber, Elementary Statistics: Picturing the World, 3e 1

Topic -2. Probability. Larson & Farber, Elementary Statistics: Picturing the World, 3e 1 Topic -2 Probability Larson & Farber, Elementary Statistics: Picturing the World, 3e 1 Probability Experiments Experiment : An experiment is an act that can be repeated under given condition. Rolling a

More information

Random Models. Tusheng Zhang. February 14, 2013

Random Models. Tusheng Zhang. February 14, 2013 Random Models Tusheng Zhang February 14, 013 1 Introduction In this module, we will introduce some random models which have many real life applications. The course consists of four parts. 1. A brief review

More information

1 Review of di erential calculus

1 Review of di erential calculus Review of di erential calculus This chapter presents the main elements of di erential calculus needed in probability theory. Often, students taking a course on probability theory have problems with concepts

More information

Definition: A random variable X is a real valued function that maps a sample space S into the space of real numbers R. X : S R

Definition: A random variable X is a real valued function that maps a sample space S into the space of real numbers R. X : S R Random Variables Definition: A random variable X is a real valued function that maps a sample space S into the space of real numbers R. X : S R As such, a random variable summarizes the outcome of an experiment

More information

Introduction to Probability

Introduction to Probability Introduction to Probability Salvatore Pace September 2, 208 Introduction In a frequentist interpretation of probability, a probability measure P (A) says that if I do something N times, I should see event

More information

Introduction to Economic Analysis Fall 2009 Problems on Chapter 1: Rationality

Introduction to Economic Analysis Fall 2009 Problems on Chapter 1: Rationality Introduction to Economic Analysis Fall 2009 Problems on Chapter 1: Rationality Alberto Bisin October 29, 2009 1 Decision Theory After a brief review of decision theory, on preference relations, we list

More information

Example 1. The sample space of an experiment where we flip a pair of coins is denoted by:

Example 1. The sample space of an experiment where we flip a pair of coins is denoted by: Chapter 8 Probability 8. Preliminaries Definition (Sample Space). A Sample Space, Ω, is the set of all possible outcomes of an experiment. Such a sample space is considered discrete if Ω has finite cardinality.

More information

Discrete Mathematics and Probability Theory Fall 2014 Anant Sahai Note 15. Random Variables: Distributions, Independence, and Expectations

Discrete Mathematics and Probability Theory Fall 2014 Anant Sahai Note 15. Random Variables: Distributions, Independence, and Expectations EECS 70 Discrete Mathematics and Probability Theory Fall 204 Anant Sahai Note 5 Random Variables: Distributions, Independence, and Expectations In the last note, we saw how useful it is to have a way of

More information

The main purpose of this chapter is to prove the rst and second fundamental theorem of asset pricing in a so called nite market model.

The main purpose of this chapter is to prove the rst and second fundamental theorem of asset pricing in a so called nite market model. 1 2. Option pricing in a nite market model (February 14, 2012) 1 Introduction The main purpose of this chapter is to prove the rst and second fundamental theorem of asset pricing in a so called nite market

More information

Continuum Probability and Sets of Measure Zero

Continuum Probability and Sets of Measure Zero Chapter 3 Continuum Probability and Sets of Measure Zero In this chapter, we provide a motivation for using measure theory as a foundation for probability. It uses the example of random coin tossing to

More information

Introduction to Information Entropy Adapted from Papoulis (1991)

Introduction to Information Entropy Adapted from Papoulis (1991) Introduction to Information Entropy Adapted from Papoulis (1991) Federico Lombardo Papoulis, A., Probability, Random Variables and Stochastic Processes, 3rd edition, McGraw ill, 1991. 1 1. INTRODUCTION

More information