Conditional Expectation


Chapter 2. Conditional Expectation

Please see Hull's book (Section 9.6.1).

2.1 Binomial Model for Stock Price Dynamics

Stock prices are assumed to follow this simple binomial model: the initial stock price during the period under study is denoted S_0. At each time step, the stock price either goes up by a factor of u or down by a factor of d. It will be useful to visualize tossing a coin at each time step, and say that the stock price moves up by a factor of u if the coin comes out heads (H), and down by a factor of d if it comes out tails (T). Note that we are not specifying the probability of heads here.

Consider a sequence of 3 tosses of the coin (see Fig. 2.1). The collection of all possible outcomes (i.e. sequences of tosses of length 3) is

  Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}.

A typical sequence of Ω will be denoted ω, and ω_k will denote the k-th element in the sequence ω. We write S_k(ω) to denote the stock price at time k (i.e. after k tosses) under the outcome ω. Note that S_k(ω) depends only on ω_1, ω_2, ..., ω_k. Thus in the 3-coin-toss example we write, for instance,

  S_1(ω) = S_1(ω_1, ω_2, ω_3) = S_1(ω_1),
  S_2(ω) = S_2(ω_1, ω_2, ω_3) = S_2(ω_1, ω_2).

Each S_k is a random variable defined on the set Ω. More precisely, let F = P(Ω), the power set of Ω. Then F is a σ-algebra and (Ω, F) is a measurable space. Each S_k is an F-measurable function Ω → IR; that is, S_k^{-1} is a function B → F, where B is the Borel σ-algebra on IR. We will see later that S_k is in fact measurable under a sub-σ-algebra of F.
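The three-toss sample space and the random variables S_k are small enough to enumerate directly. The sketch below (the values u = 2, d = 1/2, S_0 = 4 are illustrative choices of mine, not fixed by the text) builds Ω and evaluates S_k(ω), confirming that S_k depends only on the first k tosses.

```python
from itertools import product

u, d, S0 = 2.0, 0.5, 4.0  # illustrative parameters satisfying u > d > 0

# Omega: all sequences of three tosses
Omega = ["".join(t) for t in product("HT", repeat=3)]

def S(k, omega):
    """Stock price after k tosses: multiply S0 by u per head, d per tail."""
    price = S0
    for toss in omega[:k]:
        price *= u if toss == "H" else d
    return price

# S_2 depends only on the first two tosses:
assert S(2, "HTH") == S(2, "HTT") == u * d * S0

# S_3 takes four distinct values: d^3 S0, u d^2 S0, u^2 d S0, u^3 S0
print(sorted({S(3, w) for w in Omega}))
```

Note that the map ω ↦ S_2(ω) is many-to-one; this is exactly why knowing S_2 reveals only partial information about ω, which is the theme of the rest of the chapter.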

Figure 2.1: A three-coin-period binomial model. (The tree of prices: S_1(H) = uS_0, S_1(T) = dS_0; S_2(HH) = u²S_0, S_2(HT) = S_2(TH) = udS_0, S_2(TT) = d²S_0; S_3(HHH) = u³S_0, S_3(HHT) = S_3(HTH) = S_3(THH) = u²dS_0, S_3(HTT) = S_3(THT) = S_3(TTH) = ud²S_0, S_3(TTT) = d³S_0.)

Recall that the Borel σ-algebra B is the σ-algebra generated by the open intervals of IR. In this course we will always deal with subsets of IR that belong to B.

For any random variable X defined on a sample space Ω and any y ∈ IR, we will use the notation

  {X ≤ y} = {ω ∈ Ω; X(ω) ≤ y}.

The sets {X < y}, {X ≥ y}, {X = y}, etc., are defined similarly. Similarly, for any subset B of IR, we define

  {X ∈ B} = {ω ∈ Ω; X(ω) ∈ B}.

Assumption 2.1 u > d > 0.

2.2 Information

Definition 2.1 (Sets determined by the first k tosses.) We say that a set A ⊂ Ω is determined by the first k coin tosses if, knowing only the outcome of the first k tosses, we can decide whether the outcome of all tosses is in A. In general we denote the collection of sets determined by the first k tosses by F_k. It is easy to check that F_k is a σ-algebra.

Note that the random variable S_k is F_k-measurable, for each k = 1, 2, ..., n.

Example 2.1 In the 3-coin-toss example, the collection F_1 of sets determined by the first toss consists of:

1. A_H = {HHH, HHT, HTH, HTT},
2. A_T = {THH, THT, TTH, TTT},
3. ∅,
4. Ω.

The collection F_2 of sets determined by the first two tosses consists of:

1. A_HH = {HHH, HHT},
2. A_HT = {HTH, HTT},
3. A_TH = {THH, THT},
4. A_TT = {TTH, TTT},
5. the complements of the above sets,
6. any union of the above sets (including the complements),
7. ∅ and Ω.

Definition 2.2 (Information carried by a random variable.) Let X be a random variable Ω → IR. We say that a set A ⊂ Ω is determined by the random variable X if, knowing only the value X(ω) of the random variable, we can decide whether or not ω ∈ A. Another way of saying this is that for every y ∈ IR, either X^{-1}(y) ⊂ A or X^{-1}(y) ∩ A = ∅. The collection of subsets of Ω determined by X is a σ-algebra, which we call the σ-algebra generated by X and denote by σ(X).

If the random variable X takes finitely many different values, then σ(X) is generated by the collection of sets

  {X^{-1}(X(ω)) | ω ∈ Ω};

these sets are called the atoms of the σ-algebra σ(X).

In general, if X is a random variable Ω → IR, then σ(X) is given by

  σ(X) = {X^{-1}(B); B ∈ B}.

Example 2.2 (Sets determined by S_2.) The σ-algebra generated by S_2 consists of the following sets:

1. A_HH = {HHH, HHT} = {ω ∈ Ω; S_2(ω) = u²S_0},
2. A_TT = {TTH, TTT} = {S_2 = d²S_0},
3. A_HT ∪ A_TH = {S_2 = udS_0},
4. complements of the above sets,
5. any union of the above sets,
6. ∅ = {S_2(ω) ∈ ∅},
7. Ω = {S_2(ω) ∈ IR}.
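The atoms of σ(S_2) listed in Example 2.2 can be recovered mechanically: group the outcomes ω by the value of S_2(ω). A sketch, with a hypothetical helper `atoms_of` and illustrative parameters u = 2, d = 1/2, S_0 = 4:

```python
from itertools import product

u, d, S0 = 2.0, 0.5, 4.0  # illustrative choice with u > d > 0
Omega = ["".join(t) for t in product("HT", repeat=3)]

def S2(omega):
    """Stock price after the first two tosses."""
    price = S0
    for toss in omega[:2]:
        price *= u if toss == "H" else d
    return price

def atoms_of(X, outcomes):
    """Atoms of sigma(X): the level sets X^{-1}(X(omega))."""
    atoms = {}
    for w in outcomes:
        atoms.setdefault(X(w), set()).add(w)
    return atoms

atoms = atoms_of(S2, Omega)
# Three atoms: {S2 = u^2 S0}, {S2 = ud S0}, {S2 = d^2 S0}
assert atoms[u * u * S0] == {"HHH", "HHT"}
assert atoms[u * d * S0] == {"HTH", "HTT", "THH", "THT"}
assert atoms[d * d * S0] == {"TTH", "TTT"}
```

Since σ(S_2) consists of all unions of these three atoms (including the empty union ∅), it contains 2³ = 8 sets, matching the list in Example 2.2.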

2.3 Conditional Expectation

In order to talk about conditional expectation, we need to introduce a probability measure on our coin-toss sample space Ω. Let us define:

- p ∈ (0, 1) is the probability of H,
- q = 1 − p is the probability of T,
- the coin tosses are independent, so that, e.g., IP(HHT) = p²q, etc.,
- IP(A) = Σ_{ω∈A} IP(ω), for all A ⊂ Ω.

Definition 2.3 (Expectation.)

  IE X = Σ_{ω∈Ω} X(ω) IP(ω).

If A ⊂ Ω, then

  I_A(ω) = 1 if ω ∈ A, and 0 if ω ∉ A,

and

  IE(I_A X) = ∫_A X dIP = Σ_{ω∈A} X(ω) IP(ω).

We can think of IE(I_A X) as a partial average of X over the set A.

2.3.1 An example

Let us estimate S_1, given S_2. Denote the estimate by IE(S_1|S_2). From elementary probability, IE(S_1|S_2) is a random variable Y whose value at ω is defined by

  Y(ω) = IE(S_1|S_2 = y),

where y = S_2(ω). Properties of IE(S_1|S_2):

- IE(S_1|S_2) should depend on ω, i.e., it is a random variable.
- If the value of S_2 is known, then the value of IE(S_1|S_2) should also be known. In particular:
  - If ω = HHH or ω = HHT, then S_2(ω) = u²S_0. If we know that S_2(ω) = u²S_0, then even without knowing ω, we know that S_1(ω) = uS_0. We define IE(S_1|S_2)(HHH) = IE(S_1|S_2)(HHT) = uS_0.
  - If ω = TTT or ω = TTH, then S_2(ω) = d²S_0. If we know that S_2(ω) = d²S_0, then even without knowing ω, we know that S_1(ω) = dS_0. We define IE(S_1|S_2)(TTT) = IE(S_1|S_2)(TTH) = dS_0.

  - If ω ∈ A = {HTH, HTT, THH, THT}, then S_2(ω) = udS_0. If we know S_2(ω) = udS_0, then we do not know whether S_1 = uS_0 or S_1 = dS_0. We then take a weighted average. First,

      IP(A) = p²q + pq² + p²q + pq² = 2pq.

    Furthermore,

      ∫_A S_1 dIP = p²q uS_0 + pq² uS_0 + p²q dS_0 + pq² dS_0 = pq(u + d)S_0.

    For ω ∈ A we define

      IE(S_1|S_2)(ω) = ∫_A S_1 dIP / IP(A) = ½(u + d)S_0.

    Then

      ∫_A IE(S_1|S_2) dIP = ∫_A S_1 dIP.

In conclusion, we can write

  IE(S_1|S_2)(ω) = g(S_2(ω)),

where

  g(x) = uS_0 if x = u²S_0;  ½(u + d)S_0 if x = udS_0;  dS_0 if x = d²S_0.

In other words, IE(S_1|S_2) is random only through its dependence on S_2. We also write IE(S_1|S_2 = x) = g(x), where g is the function defined above.

The random variable IE(S_1|S_2) has two fundamental properties:

- IE(S_1|S_2) is σ(S_2)-measurable.
- For every set A ∈ σ(S_2),

    ∫_A IE(S_1|S_2) dIP = ∫_A S_1 dIP.

2.3.2 Definition of Conditional Expectation

Please see Williams, p. 83. Let (Ω, F, IP) be a probability space, and let G be a sub-σ-algebra of F. Let X be a random variable on (Ω, F, IP). Then IE(X|G) is defined to be any random variable Y that satisfies:

(a) Y is G-measurable;

(b) For every set A ∈ G, we have the partial averaging property

  ∫_A Y dIP = ∫_A X dIP.

Existence. There is always a random variable Y satisfying the above properties (provided that IE|X| < ∞), i.e., conditional expectations always exist.

Uniqueness. There can be more than one random variable Y satisfying the above properties, but if Y' is another one, then Y = Y' almost surely, i.e.,

  IP{ω ∈ Ω; Y(ω) = Y'(ω)} = 1.

Notation 2.1 For random variables X, Y, it is standard notation to write

  IE(X|Y) ≜ IE(X|σ(Y)).

Here are some useful ways to think about IE(X|G):

- A random experiment is performed, i.e., an element ω of Ω is selected. The value of ω is partially but not fully revealed to us, and thus we cannot compute the exact value of X(ω). Based on what we know about ω, we compute an estimate of X(ω). Because this estimate depends on the partial information we have about ω, it depends on ω, i.e., IE[X|Y](ω) is a function of ω, although the dependence on ω is often not shown explicitly.
- If the σ-algebra G contains finitely many sets, there will be a smallest set A in G containing ω, which is the intersection of all sets in G containing ω. The way ω is partially revealed to us is that we are told it is in A, but not told which element of A it is. We then define IE[X|Y](ω) to be the average (with respect to IP) value of X over this set A. Thus, for all ω in this set A, IE[X|Y](ω) will be the same.

2.3.3 Further discussion of Partial Averaging

The partial averaging property is

  ∫_A IE(X|G) dIP = ∫_A X dIP, for all A ∈ G.   (2.1)

We can rewrite this as

  IE[I_A · IE(X|G)] = IE[I_A · X].   (2.2)

Note that I_A is a G-measurable random variable. In fact the following holds:

Lemma 2.10 If V is any G-measurable random variable, then provided IE|V · IE(X|G)| < ∞,

  IE[V · IE(X|G)] = IE[V · X].   (2.3)

Proof: To see this, first use (2.2) and linearity of expectations to prove (2.3) when V is a simple G-measurable random variable, i.e., V is of the form V = Σ_{k=1}^n c_k I_{A_k}, where each A_k is in G and each c_k is constant. Next consider the case that V is a nonnegative G-measurable random variable, but is not necessarily simple. Such a V can be written as the limit of an increasing sequence of simple random variables V_n; we write (2.3) for each V_n and then pass to the limit, using the Monotone Convergence Theorem (see Williams), to obtain (2.3) for V. Finally, the general G-measurable random variable V can be written as the difference of two nonnegative random variables V = V⁺ − V⁻, and since (2.3) holds for V⁺ and V⁻, it must hold for V as well. Williams calls this argument the standard machine (p. 56).

Based on this lemma, we can replace the second condition in the definition of a conditional expectation (Section 2.3.2) by:

(b') For every G-measurable random variable V, we have

  IE[V · IE(X|G)] = IE[V · X].   (2.4)

2.3.4 Properties of Conditional Expectation

Please see Williams, p. 88. Proof sketches of some of the properties are provided below.

(a) IE(IE(X|G)) = IE(X).
Proof: Just take the set A in the partial averaging property to be Ω. The conditional expectation of X is thus an unbiased estimator of the random variable X.

(b) If X is G-measurable, then IE(X|G) = X.
Proof: The partial averaging property holds trivially when Y is replaced by X. And since X is G-measurable, X satisfies requirement (a) of a conditional expectation as well. If the information content of G is sufficient to determine X, then the best estimate of X based on G is X itself.

(c) (Linearity) IE(a_1 X_1 + a_2 X_2 | G) = a_1 IE(X_1|G) + a_2 IE(X_2|G).

(d) (Positivity) If X ≥ 0 almost surely, then IE(X|G) ≥ 0.
Proof: Take A = {ω ∈ Ω; IE(X|G)(ω) < 0}. This set is in G since IE(X|G) is G-measurable. Partial averaging implies ∫_A IE(X|G) dIP = ∫_A X dIP. The right-hand side is greater than or equal to zero, but the left-hand side is strictly negative unless IP(A) = 0. Therefore, IP(A) = 0.
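The two defining properties established for IE(S_1|S_2) in Section 2.3.1 — σ(S_2)-measurability (constancy on atoms) and partial averaging — can be checked by brute force on the three-coin model. A sketch with illustrative values p = 1/2, u = 2, d = 1/2, S_0 = 4 (my choices, not fixed by the text):

```python
from itertools import product

p, u, d, S0 = 0.5, 2.0, 0.5, 4.0   # illustrative parameters
q = 1 - p
Omega = ["".join(t) for t in product("HT", repeat=3)]
P = {w: p ** w.count("H") * q ** w.count("T") for w in Omega}

def S(k, w):
    v = S0
    for t in w[:k]:
        v *= u if t == "H" else d
    return v

# Atoms of sigma(S2): outcomes grouped by the value of S2
atoms = {}
for w in Omega:
    atoms.setdefault(S(2, w), []).append(w)

# E[S1 | S2]: on each atom, the P-weighted average of S1
Y = {}
for atom in atoms.values():
    avg = sum(S(1, w) * P[w] for w in atom) / sum(P[w] for w in atom)
    for w in atom:
        Y[w] = avg          # constant on the atom: sigma(S2)-measurable

# Partial averaging: integral of Y over each atom equals integral of S1
for atom in atoms.values():
    lhs = sum(Y[w] * P[w] for w in atom)
    rhs = sum(S(1, w) * P[w] for w in atom)
    assert abs(lhs - rhs) < 1e-12

# On the mixed atom {S2 = ud S0}, Y is the weighted average (u + d) S0 / 2
assert abs(Y["HTH"] - (u + d) * S0 / 2) < 1e-12
```

The last assertion reproduces the value g(udS_0) = ½(u + d)S_0 computed by hand in Section 2.3.1.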

(h) (Jensen's Inequality) If φ : IR → IR is convex and IE|φ(X)| < ∞, then

  IE(φ(X)|G) ≥ φ(IE(X|G)).

Recall the usual Jensen's Inequality: IE φ(X) ≥ φ(IE X).

(i) (Tower Property) If H is a sub-σ-algebra of G, then

  IE(IE(X|G)|H) = IE(X|H).

That H is a sub-σ-algebra of G means that G contains more information than H. If we estimate X based on the information in G, and then estimate the estimator based on the smaller amount of information in H, then we get the same result as if we had estimated X directly based on the information in H.

(j) (Taking out what is known) If Z is G-measurable, then

  IE(ZX|G) = Z · IE(X|G).

When conditioning on G, the G-measurable random variable Z acts like a constant.
Proof: Let Z be a G-measurable random variable. A random variable Y is IE(ZX|G) if and only if (a) Y is G-measurable, and (b) ∫_A Y dIP = ∫_A ZX dIP for all A ∈ G. Take Y = Z · IE(X|G). Then Y satisfies (a), since a product of G-measurable random variables is G-measurable. Y also satisfies property (b), as we can check below:

  ∫_A Y dIP = IE(I_A · Y)
            = IE[I_A Z · IE(X|G)]
            = IE[I_A Z · X]   ((b') with V = I_A Z)
            = ∫_A ZX dIP.

(k) (Role of Independence) If H is independent of σ(σ(X), G), then

  IE(X|σ(G, H)) = IE(X|G).

In particular, if X is independent of H, then

  IE(X|H) = IE(X).

If H is independent of X and G, then nothing is gained by including the information content of H in the estimation of X.
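Properties (h) and (j) can also be verified numerically on the finite sample space. The sketch below conditions on F_1 = σ(first toss) and uses φ(x) = x² for Jensen; the parameters and the helper `cond_exp` are illustrative choices of mine, not from the text.

```python
from itertools import product

p, u, d, S0 = 0.4, 2.0, 0.5, 4.0   # illustrative parameters
q = 1 - p
Omega = ["".join(t) for t in product("HT", repeat=3)]
P = {w: p ** w.count("H") * q ** w.count("T") for w in Omega}

def S(k, w):
    v = S0
    for t in w[:k]:
        v *= u if t == "H" else d
    return v

def cond_exp(X, key):
    """E[X | sigma-algebra whose atoms are the level sets of key], as a dict."""
    atoms = {}
    for w in Omega:
        atoms.setdefault(key(w), []).append(w)
    out = {}
    for atom in atoms.values():
        avg = sum(X(w) * P[w] for w in atom) / sum(P[w] for w in atom)
        for w in atom:
            out[w] = avg
    return out

first = lambda w: w[0]              # atoms A_H, A_T of F_1

# (j) Taking out what is known: E[S1 S2 | F1] = S1 * E[S2 | F1],
# since S1 is F1-measurable (constant on A_H and on A_T).
lhs = cond_exp(lambda w: S(1, w) * S(2, w), first)
rhs = cond_exp(lambda w: S(2, w), first)
for w in Omega:
    assert abs(lhs[w] - S(1, w) * rhs[w]) < 1e-12

# (h) Jensen with phi(x) = x^2: E[S2^2 | F1] >= (E[S2 | F1])^2
sq = cond_exp(lambda w: S(2, w) ** 2, first)
for w in Omega:
    assert sq[w] >= rhs[w] ** 2
```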

2.3.5 Examples from the Binomial Model

Recall that F_1 = {∅, Ω, A_H, A_T}. Notice that IE(S_2|F_1) must be constant on A_H and on A_T. Now since IE(S_2|F_1) must satisfy the partial averaging property,

  ∫_{A_H} IE(S_2|F_1) dIP = ∫_{A_H} S_2 dIP,
  ∫_{A_T} IE(S_2|F_1) dIP = ∫_{A_T} S_2 dIP.

We compute

  ∫_{A_H} IE(S_2|F_1) dIP = IP(A_H) · IE(S_2|F_1)(ω) = p · IE(S_2|F_1)(ω), for all ω ∈ A_H.

On the other hand,

  ∫_{A_H} S_2 dIP = p²u²S_0 + pq·udS_0.

Therefore,

  IE(S_2|F_1)(ω) = pu²S_0 + q·udS_0, for all ω ∈ A_H.

We can also write

  IE(S_2|F_1)(ω) = pu²S_0 + q·udS_0 = (pu + qd)uS_0 = (pu + qd)S_1(ω), for all ω ∈ A_H.

Similarly,

  IE(S_2|F_1)(ω) = (pu + qd)S_1(ω), for all ω ∈ A_T.

Thus in both cases we have

  IE(S_2|F_1)(ω) = (pu + qd)S_1(ω), for all ω ∈ Ω.

A similar argument one time step later shows that

  IE(S_3|F_2)(ω) = (pu + qd)S_2(ω).

We leave the verification of this equality as an exercise.

We can verify the Tower Property; for instance, from the previous equations we have

  IE[IE(S_3|F_2)|F_1] = IE[(pu + qd)S_2|F_1]
                      = (pu + qd)IE(S_2|F_1)   (linearity)
                      = (pu + qd)²S_1.

This final expression is IE(S_3|F_1).
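The verification left as an exercise, IE(S_3|F_2) = (pu + qd)S_2, can at least be spot-checked numerically: the atoms of F_2 are the four sets A_HH, A_HT, A_TH, A_TT, and on each atom the conditional expectation is the weighted average of S_3. Parameters below are illustrative.

```python
from itertools import product

p, u, d, S0 = 0.3, 2.0, 0.5, 4.0   # illustrative parameters
q = 1 - p
Omega = ["".join(t) for t in product("HT", repeat=3)]
P = {w: p ** w.count("H") * q ** w.count("T") for w in Omega}

def S(k, w):
    v = S0
    for t in w[:k]:
        v *= u if t == "H" else d
    return v

# Atoms of F_2: outcomes grouped by the first two tosses
atoms = {}
for w in Omega:
    atoms.setdefault(w[:2], []).append(w)

for atom in atoms.values():
    pa = sum(P[w] for w in atom)
    ce = sum(S(3, w) * P[w] for w in atom) / pa   # E[S3 | F2] on this atom
    # Given the first two tosses, S3 is u*S2 with prob p, d*S2 with prob q:
    for w in atom:
        assert abs(ce - (p * u + q * d) * S(2, w)) < 1e-12
```

The in-loop comment is the whole proof: conditioned on an atom of F_2, the third toss is still a p/q coin, so the average of S_3 is (pu + qd) times the (constant) value of S_2 there.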

2.4 Martingales

The ingredients are:

- a probability space (Ω, F, IP);
- a sequence of σ-algebras F_0, F_1, ..., F_n, with the property that F_0 ⊂ F_1 ⊂ ... ⊂ F_n ⊂ F. Such a sequence of σ-algebras is called a filtration;
- a sequence of random variables M_0, M_1, ..., M_n. This is called a stochastic process.

Conditions for a martingale:

1. Each M_k is F_k-measurable. If you know the information in F_k, then you know the value of M_k. We say that the process {M_k} is adapted to the filtration {F_k}.
2. For each k, IE(M_{k+1}|F_k) = M_k. Martingales tend to go neither up nor down.

A supermartingale tends to go down, i.e., the second condition above is replaced by IE(M_{k+1}|F_k) ≤ M_k; a submartingale tends to go up, i.e., IE(M_{k+1}|F_k) ≥ M_k.

Example 2.3 (Example from the binomial model.) For k = 1, 2 we already showed that

  IE(S_{k+1}|F_k) = (pu + qd)S_k.

For k = 0, we set F_0 = {∅, Ω}, the trivial σ-algebra. This σ-algebra contains no information, and any F_0-measurable random variable must be constant (nonrandom). Therefore, by definition, IE(S_1|F_0) is that constant which satisfies the averaging property

  ∫_Ω IE(S_1|F_0) dIP = ∫_Ω S_1 dIP.

The right-hand side is IE S_1 = (pu + qd)S_0, and so we have

  IE(S_1|F_0) = (pu + qd)S_0.

In conclusion:

- If pu + qd = 1, then {S_k, F_k; k = 0, 1, 2, 3} is a martingale.
- If pu + qd ≤ 1, then {S_k, F_k; k = 0, 1, 2, 3} is a supermartingale.
- If pu + qd ≥ 1, then {S_k, F_k; k = 0, 1, 2, 3} is a submartingale.
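Since IE(S_{k+1}|F_k) = (pu + qd)S_k, classifying the stock-price process reduces to comparing the drift factor pu + qd with 1. A one-function sketch (the helper `classify` and all parameter values are my own illustrations):

```python
def classify(p, u, d):
    """Classify {S_k} under the coin-toss measure: the drift factor is pu + qd."""
    m = p * u + (1 - p) * d
    if abs(m - 1.0) < 1e-12:
        return "martingale"
    return "submartingale" if m > 1 else "supermartingale"

# With u = 2 and d = 1/2, pu + qd = 1 forces p = 1/3 (solving 2p + (1-p)/2 = 1).
assert classify(1/3, 2.0, 0.5) == "martingale"
assert classify(1/2, 2.0, 0.5) == "submartingale"    # drift 1.25 > 1
assert classify(0.1, 2.0, 0.5) == "supermartingale"  # drift 0.65 < 1
```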


More information

First Digit Tally Marks Final Count

First Digit Tally Marks Final Count Benford Test () Imagine that you are a forensic accountant, presented with the two data sets on this sheet of paper (front and back). Which of the two sets should be investigated further? Why? () () ()

More information

RVs and their probability distributions

RVs and their probability distributions RVs and their probability distributions RVs and their probability distributions In these notes, I will use the following notation: The probability distribution (function) on a sample space will be denoted

More information

2010 Euclid Contest. Solutions

2010 Euclid Contest. Solutions Canadian Mathematics Competition An activity of the Centre for Education in Mathematics and Computing, University of Waterloo, Waterloo, Ontario 00 Euclid Contest Wednesday, April 7, 00 Solutions 00 Centre

More information

1 Presessional Probability

1 Presessional Probability 1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional

More information

Probabilistic models

Probabilistic models Probabilistic models Kolmogorov (Andrei Nikolaevich, 1903 1987) put forward an axiomatic system for probability theory. Foundations of the Calculus of Probabilities, published in 1933, immediately became

More information

Compatible probability measures

Compatible probability measures Coin tossing space Think of a coin toss as a random choice from the two element set {0,1}. Thus the set {0,1} n represents the set of possible outcomes of n coin tosses, and Ω := {0,1} N, consisting of

More information

Information and Conditioning

Information and Conditioning 2 Information and Conditioning 2.1 Information and σ-algebras The no-arbitrage theory of derivative security pricing is based on contingency plans. In order to price a derivative security, we determine

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 9 10/2/2013. Conditional expectations, filtration and martingales

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 9 10/2/2013. Conditional expectations, filtration and martingales MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 9 10/2/2013 Conditional expectations, filtration and martingales Content. 1. Conditional expectations 2. Martingales, sub-martingales

More information

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures 36-752 Spring 2014 Advanced Probability Overview Lecture Notes Set 1: Course Overview, σ-fields, and Measures Instructor: Jing Lei Associated reading: Sec 1.1-1.4 of Ash and Doléans-Dade; Sec 1.1 and A.1

More information

A PECULIAR COIN-TOSSING MODEL

A PECULIAR COIN-TOSSING MODEL A PECULIAR COIN-TOSSING MODEL EDWARD J. GREEN 1. Coin tossing according to de Finetti A coin is drawn at random from a finite set of coins. Each coin generates an i.i.d. sequence of outcomes (heads or

More information

ELEG 3143 Probability & Stochastic Process Ch. 1 Experiments, Models, and Probabilities

ELEG 3143 Probability & Stochastic Process Ch. 1 Experiments, Models, and Probabilities Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 1 Experiments, Models, and Probabilities Dr. Jing Yang jingyang@uark.edu OUTLINE 2 Applications

More information

What is Probability? Probability. Sample Spaces and Events. Simple Event

What is Probability? Probability. Sample Spaces and Events. Simple Event What is Probability? Probability Peter Lo Probability is the numerical measure of likelihood that the event will occur. Simple Event Joint Event Compound Event Lies between 0 & 1 Sum of events is 1 1.5

More information

Probability (10A) Young Won Lim 6/12/17

Probability (10A) Young Won Lim 6/12/17 Probability (10A) Copyright (c) 2017 Young W. Lim. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later

More information

CONVERGENCE OF RANDOM SERIES AND MARTINGALES

CONVERGENCE OF RANDOM SERIES AND MARTINGALES CONVERGENCE OF RANDOM SERIES AND MARTINGALES WESLEY LEE Abstract. This paper is an introduction to probability from a measuretheoretic standpoint. After covering probability spaces, it delves into the

More information

Conditional Probability

Conditional Probability Conditional Probability Idea have performed a chance experiment but don t know the outcome (ω), but have some partial information (event A) about ω. Question: given this partial information what s the

More information

Measure and integration

Measure and integration Chapter 5 Measure and integration In calculus you have learned how to calculate the size of different kinds of sets: the length of a curve, the area of a region or a surface, the volume or mass of a solid.

More information

CHAPTER 1 SETS AND EVENTS

CHAPTER 1 SETS AND EVENTS CHPTER 1 SETS ND EVENTS 1.1 Universal Set and Subsets DEFINITION: set is a well-defined collection of distinct elements in the universal set. This is denoted by capital latin letters, B, C, If an element

More information

Sec$on Summary. Assigning Probabilities Probabilities of Complements and Unions of Events Conditional Probability

Sec$on Summary. Assigning Probabilities Probabilities of Complements and Unions of Events Conditional Probability Section 7.2 Sec$on Summary Assigning Probabilities Probabilities of Complements and Unions of Events Conditional Probability Independence Bernoulli Trials and the Binomial Distribution Random Variables

More information

Marquette University MATH 1700 Class 5 Copyright 2017 by D.B. Rowe

Marquette University MATH 1700 Class 5 Copyright 2017 by D.B. Rowe Class 5 Daniel B. Rowe, Ph.D. Department of Mathematics, Statistics, and Computer Science Copyright 2017 by D.B. Rowe 1 Agenda: Recap Chapter 3.2-3.3 Lecture Chapter 4.1-4.2 Review Chapter 1 3.1 (Exam

More information

STAT 712 MATHEMATICAL STATISTICS I

STAT 712 MATHEMATICAL STATISTICS I STAT 72 MATHEMATICAL STATISTICS I Fall 207 Lecture Notes Joshua M. Tebbs Department of Statistics University of South Carolina c by Joshua M. Tebbs TABLE OF CONTENTS Contents Probability Theory. Set Theory......................................2

More information

PERMUTATIONS, COMBINATIONS AND DISCRETE PROBABILITY

PERMUTATIONS, COMBINATIONS AND DISCRETE PROBABILITY Friends, we continue the discussion with fundamentals of discrete probability in the second session of third chapter of our course in Discrete Mathematics. The conditional probability and Baye s theorem

More information

Lecture 6 Random Variable. Compose of procedure & observation. From observation, we get outcomes

Lecture 6 Random Variable. Compose of procedure & observation. From observation, we get outcomes ENM 07 Lecture 6 Random Variable Random Variable Eperiment (hysical Model) Compose of procedure & observation From observation we get outcomes From all outcomes we get a (mathematical) probability model

More information

Introduction and Preliminaries

Introduction and Preliminaries Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis

More information

Probability. Lecture Notes. Adolfo J. Rumbos

Probability. Lecture Notes. Adolfo J. Rumbos Probability Lecture Notes Adolfo J. Rumbos October 20, 204 2 Contents Introduction 5. An example from statistical inference................ 5 2 Probability Spaces 9 2. Sample Spaces and σ fields.....................

More information

Sample Spaces, Random Variables

Sample Spaces, Random Variables Sample Spaces, Random Variables Moulinath Banerjee University of Michigan August 3, 22 Probabilities In talking about probabilities, the fundamental object is Ω, the sample space. (elements) in Ω are denoted

More information

2. Conditional Probability

2. Conditional Probability ENGG 2430 / ESTR 2004: Probability and Statistics Spring 2019 2. Conditional Probability Andrej Bogdanov Coins game Toss 3 coins. You win if at least two come out heads. S = { HHH, HHT, HTH, HTT, THH,

More information

Statistical methods in recognition. Why is classification a problem?

Statistical methods in recognition. Why is classification a problem? Statistical methods in recognition Basic steps in classifier design collect training images choose a classification model estimate parameters of classification model from training images evaluate model

More information

besides your solutions of these problems. 1 1 We note, however, that there will be many factors in the admission decision

besides your solutions of these problems. 1 1 We note, however, that there will be many factors in the admission decision The PRIMES 2015 Math Problem Set Dear PRIMES applicant! This is the PRIMES 2015 Math Problem Set. Please send us your solutions as part of your PRIMES application by December 1, 2015. For complete rules,

More information

SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker CHAPTER 2. UNIVARIATE DISTRIBUTIONS

SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker CHAPTER 2. UNIVARIATE DISTRIBUTIONS SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker CHAPTER. UNIVARIATE DISTRIBUTIONS. Random Variables and Distribution Functions. This chapter deals with the notion of random variable, the distribution

More information

Introduction to bivariate analysis

Introduction to bivariate analysis Introduction to bivariate analysis When one measurement is made on each observation, univariate analysis is applied. If more than one measurement is made on each observation, multivariate analysis is applied.

More information

Chapter 1 Probability Theory

Chapter 1 Probability Theory Review for the previous lecture Eample: how to calculate probabilities of events (especially for sampling with replacement) and the conditional probability Definition: conditional probability, statistically

More information

Chapter 2. Conditional Probability and Independence. 2.1 Conditional Probability

Chapter 2. Conditional Probability and Independence. 2.1 Conditional Probability Chapter 2 Conditional Probability and Independence 2.1 Conditional Probability Example: Two dice are tossed. What is the probability that the sum is 8? This is an easy exercise: we have a sample space

More information

18.175: Lecture 2 Extension theorems, random variables, distributions

18.175: Lecture 2 Extension theorems, random variables, distributions 18.175: Lecture 2 Extension theorems, random variables, distributions Scott Sheffield MIT Outline Extension theorems Characterizing measures on R d Random variables Outline Extension theorems Characterizing

More information

Introduction to bivariate analysis

Introduction to bivariate analysis Introduction to bivariate analysis When one measurement is made on each observation, univariate analysis is applied. If more than one measurement is made on each observation, multivariate analysis is applied.

More information

3.5. First step analysis

3.5. First step analysis 12 3.5. First step analysis A journey of a thousand miles begins with the first step. Lao Tse Example 3.6. (Coin Tossing) Repeatedly toss a fair coin a number of times. What s the expected number of tosses

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

Intro to Information Theory

Intro to Information Theory Intro to Information Theory Math Circle February 11, 2018 1. Random variables Let us review discrete random variables and some notation. A random variable X takes value a A with probability P (a) 0. Here

More information

Lecture 2. Constructing Probability Spaces

Lecture 2. Constructing Probability Spaces Lecture 2. Constructing Probability Spaces This lecture describes some procedures for constructing probability spaces. We will work exclusively with discrete spaces usually finite ones. Later, we will

More information

The Bernoulli distribution has only two outcomes, Y, success or failure, with values one or zero. The probability of success is p.

The Bernoulli distribution has only two outcomes, Y, success or failure, with values one or zero. The probability of success is p. The Bernoulli distribution has onl two outcomes, Y, success or failure, with values one or zero. The probabilit of success is p. The probabilit distribution f() is denoted as B (p), the formula is: f ()

More information

Lecture 4: Probability and Discrete Random Variables

Lecture 4: Probability and Discrete Random Variables Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 4: Probability and Discrete Random Variables Wednesday, January 21, 2009 Lecturer: Atri Rudra Scribe: Anonymous 1

More information

EE 178 Lecture Notes 0 Course Introduction. About EE178. About Probability. Course Goals. Course Topics. Lecture Notes EE 178

EE 178 Lecture Notes 0 Course Introduction. About EE178. About Probability. Course Goals. Course Topics. Lecture Notes EE 178 EE 178 Lecture Notes 0 Course Introduction About EE178 About Probability Course Goals Course Topics Lecture Notes EE 178: Course Introduction Page 0 1 EE 178 EE 178 provides an introduction to probabilistic

More information

10 th CBSE (SESSION : ) SUBJECT : MATHS SUMMATIVE ASSESSMENT-II SOLUTION _SET-1_CODE NO. 30/1

10 th CBSE (SESSION : ) SUBJECT : MATHS SUMMATIVE ASSESSMENT-II SOLUTION _SET-1_CODE NO. 30/1 Pre-foundation areer are Programmes (PP) Division 0 th BSE (SESSION : 05-6) SUBJET : MTHS SUMMTIVE SSESSMENT-II SOLUTION _SET-_ODE NO. 0/. Given : B is diameter B 0 To find P construction : Join O sol

More information

Conditional Probability, Independence and Bayes Theorem Class 3, Jeremy Orloff and Jonathan Bloom

Conditional Probability, Independence and Bayes Theorem Class 3, Jeremy Orloff and Jonathan Bloom Conditional Probability, Independence and Bayes Theorem Class 3, 18.05 Jeremy Orloff and Jonathan Bloom 1 Learning Goals 1. Know the definitions of conditional probability and independence of events. 2.

More information