Read: GKP 8.1, 8.2

Quicksort is the sorting technique of choice although its worst-case behavior is inferior to that of many others. Why? What does "average case" mean? Seeking a phone number (which exists) in a phone book examines "on average" half the entries. Why?

Def : A probability space (sample space) is a nonempty set Ω.
- Ω_dice = {(1,1), ..., (6,6)}, |Ω_dice| = 36. (Assume the dice are distinguishable, i.e., (2,3) ≠ (3,2).)
- Ω_coin = {sequences of coin flips until 2 straight heads} = {(h h), (t h h), (h t h h), (t t h h), ...}, |Ω_coin| = ∞.
- Ω_phone-book = {..., A. P. Zywien}, |Ω_phone-book| ≈ 134,400.
- Ω_UG3 = {unlabelled, undirected graphs on 3 vertices}; |Ω_UG3| = 4 (the graphs with 0, 1, 2, and 3 edges).

Def : Corresponding to an underlying experiment, we associate a probability with each ω ∈ Ω: Pr : Ω → [0,1] such that Σ_{ω∈Ω} Pr{ω} = 1.
- For fair distinguishable dice, Pr{ω} = 1/36 for each ω ∈ Ω.
- For Ω_phone-book, we can assume Pr{ω} = 1/134,400 for each ω ∈ Ω, or, as a closer approximation to usage, Pr{Centrum Box Office} >> Pr{Jas. Nkansah}.
- For Ω_UG3, we can assume Pr{ω} = 1/4 for each ω ∈ Ω; or, if we start with a labelled graph and add each edge with (independent) probability 1/2, then Pr = 1/8, 3/8, 3/8, 1/8 for the graphs with 0, 1, 2, 3 edges respectively.

Def : An event is a subset A ⊆ Ω.
- Ω_dice: A_doubles = {(1,1), ..., (6,6)}; A_7-11 = {(1,6), ..., (6,1), (5,6), (6,5)}.
- Ω_phone-book: A_movie-theater.
- Ω_UG3: A_connected = {o-o-o (the path), the triangle}.

LN2-1
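These definitions can be checked by brute-force enumeration of the small sample space Ω_dice. The sketch below is illustrative only (Python rather than the Lisp used later in these notes): it builds the 36-outcome space, assigns the uniform probability 1/36, and forms the two dice events as subsets.

```python
from fractions import Fraction

# Sample space for two distinguishable fair dice: 36 ordered pairs.
omega_dice = [(a, b) for a in range(1, 7) for b in range(1, 7)]

# Uniform probability: Pr{w} = 1/36 for each outcome w.
pr = {w: Fraction(1, 36) for w in omega_dice}
assert sum(pr.values()) == 1   # the probabilities sum to 1

# Events are subsets of the sample space.
a_doubles = {w for w in omega_dice if w[0] == w[1]}
a_7_11 = {w for w in omega_dice if sum(w) in (7, 11)}

print(len(omega_dice))  # 36
print(len(a_doubles))   # 6
print(len(a_7_11))      # 8
```

Counting the outcomes in each event (6 and 8 of the 36) anticipates the event probabilities computed on the next page.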
Def : Probability of an event: Pr{A} = Σ_{ω∈A} Pr{ω}.
Ω_dice: Pr{A_doubles} = 6/36 = 1/6; Pr{A_7-11} = 8/36 = 2/9.

Def : A random variable (r.v.) is a function from a sample space to the reals, X : Ω → R+. (For us, R+ because we measure resource consumption.)
- For ω = (x y) ∈ Ω_dice, X(ω) = x + y; X((1 3)) = 4.
- For ω ∈ Ω_coin, X(ω) = # heads; X((h t t h t h h)) = 4.

Corresponding to each value x of each r.v. X is an event A = {ω | X(ω) = x}, and we extend Pr so that Pr{X = x} = Σ_{ω∈A} Pr{ω}.

Mean

We usually let the r.v. be the resource (time, memory) consumed by an algorithm, the queue length of processes awaiting a processor, ... We usually want the "typical" behavior of an r.v. Given the values (e.g., 1, 1, 3, 4, 5):
- mean = (sum of values)/5 = 2.8
- median = 3 (middle value after sorting)
- mode = 1 (most frequent value)

In probability, we repeat an experiment many times and expect each value x of X to occur with frequency proportional to Pr{X = x}.
- mean of X = Σ_{x∈X(Ω)} x · Pr{X = x}
- median of X = {x | Pr{X ≤ x} ≥ 1/2 and Pr{X ≥ x} ≥ 1/2}
- mode of X = {x | Pr{X = x} ≥ Pr{X = y} for all y ∈ X(Ω)}

The mean is also called the expected value of X, E[X] (in GKP, EX).
For ω = (x y) ∈ Ω_dice, X(ω) = x + y, E[X] = 7.
Note that if Pr{X=1} = Pr{X=2} = 0.5, then E[X] = 1.5, although X will never be 1.5.

If X, Y are r.v.s on the same space Ω, then X + Y is also an r.v. on Ω.

Def : X, Y are independent r.v.s if Pr{X = x ∧ Y = y} = Pr{X = x} · Pr{Y = y}.
X, Y independent means they have no effect on each other. Because the faces of 2 dice are independent, Pr{(1,3)} = Pr{1} · Pr{3} = (1/6) · (1/6).

What is the expected value of the sum of the faces if we throw 2 dice? E[X + Y]? = E[X] + E[Y]?

Theorem : For r.v.s X_1, ..., X_n defined on Ω, if E[X_1], ..., E[X_n] exist, then E[Σ X_k] = Σ E[X_k]. (Note: no assumption of independence; in fact, some X_i can be repeated.)

Proof : (For the case n = 2) Let X = X_1, Y = X_2.
E[X + Y] = Σ_{ω∈Ω} (X(ω) + Y(ω)) · Pr{ω} = Σ_{ω∈Ω} X(ω) · Pr{ω} + Σ_{ω∈Ω} Y(ω) · Pr{ω} = E[X] + E[Y]

LN2-2
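The n = 2 case can also be verified mechanically. A small Python sketch (illustrative; not part of the original notes) computes these expectations exactly over Ω_dice, including the case of dependent — indeed identical — r.v.s, for which linearity still holds.

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))   # two distinguishable dice
pr = {w: Fraction(1, 36) for w in omega}       # uniform distribution

def expect(rv):
    """E[X] = sum over omega of X(w) * Pr{w}."""
    return sum(rv(w) * pr[w] for w in omega)

X = lambda w: w[0]   # face of the first die
Y = lambda w: w[1]   # face of the second die

# Linearity: E[X + Y] = E[X] + E[Y]; no independence needed.
assert expect(lambda w: X(w) + Y(w)) == expect(X) + expect(Y) == 7

# Linearity even for a repeated r.v.: E[X + X] = 2 E[X] = 7.
assert expect(lambda w: X(w) + X(w)) == 2 * expect(X) == 7
```

Using exact fractions avoids any floating-point noise, so the equalities hold exactly.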
(In general)
E[Σ_k X_k] = Σ_{ω∈Ω} (Σ_k X_k(ω)) · Pr{ω} = Σ_k Σ_{ω∈Ω} X_k(ω) · Pr{ω} = Σ_k E[X_k]

Ex : If 10^6 people throw their keys in a large hat, and each person retrieves a key "at random", what is the expected number of people who get their own key?

Define an r.v. for each person such that we can combine the r.v.s to get the answer. Let X_k = if person k gets own key then 1 else 0. (A less useful r.v. would be X_k = if person k gets own key then 42 else -6.) For any experiment, |{people who get their own key}| = Σ X_k, and the solution to the problem is E[Σ X_k]. (Note: Σ X_k is an r.v.) By the previous theorem, E[Σ X_k] = Σ E[X_k]. For each X_k, E[X_k] = 1/n. So E[Σ X_k] = Σ 1/n = 1.

Ex : Simulation
To estimate π, consider throwing darts at the following target (a circle inscribed in a square) such that Pr{dart in area} ∝ area. Area of circle = π; area of square = 4. Let r.v. X = if dart in circle then 1 else 0; E[X] = π/4. Throwing n darts,
E[Σ X_k] = Σ E[X_k] = n · E[X] = nπ/4,
so π = (4/n) · E[Σ X_k] = (4/n) · Σ E[X_k].

(defun test () ; test if dart in circle (first quadrant)
  (<= (+ (square (random 1.0)) (square (random 1.0))) 1.0))

(defun square (n) (* n n))

(defun count_successes (n) ; count darts in circle
LN2-3
  (cond ((= n 0) 0)
        ((test) (+ 1 (count_successes (- n 1))))
        (t (count_successes (- n 1)))))

(defun estimate-pi (n)
  (/ (* 4.0 (count_successes n)) n))

? (estimate-pi 10)
3.6
? (estimate-pi 100)
3.2
? (estimate-pi 1000)
3.188
? (estimate-pi 5000)

Ex : Monte-Carlo Integration
Suppose f : [a,b] → [c,d].

;; random x in [a,b], b > a
(defun random_in_interval (a b)
  (+ a (random (float (- b a)))))

(defun test (f) ; is a random point under the curve?
  (>= (funcall f (random_in_interval a b))
      (random_in_interval c d)))

(defun count_successes (f n)
  (cond ((= n 0) 0)
        ((test f) (+ 1 (count_successes f (- n 1))))
        (t (count_successes f (- n 1)))))

(defun monte-carlo (f n)
  (* (- b a) (- d c) (/ (count_successes f n) (float n))))

(defun func (x) (/ 1.0 x))

(setf a 1) (setf b 500) (setf c 0) (setf d 1)

? (log 500)
? (monte-carlo #'func 1000)
? (monte-carlo #'func 1000)
4.99
LN2-4
? (monte-carlo #'func 1000)
6.986
? (monte-carlo #'func 5000)

Note that the estimate of π was Monte-Carlo. How quickly do we converge? For π, Pr{all 10^6 darts fall outside the circle} > 0. We can only bound the preceding probability; it never equals 0. Given δ, ε ∈ (0,1), if I = the integral and h = the value returned by Monte-Carlo, then Pr{|h − I| < ε} ≥ 1 − δ whenever # iterations n ≥ O(1/(ε²δ)). That is, n ∝ 1/ε². That is, for a fixed tolerance δ (say 0.05), 1 more decimal digit of precision requires 100 times the number of trials. In 1987, some Japanese researchers computed π to 134·10^6 places. This is not the way to compute π.

Monte-Carlo is better than traditional numerical integration techniques for integrals of high dimension, since its convergence rate is independent of dimension.

From another angle, in estimating I = ∫_Ω f(x) dx with the estimator F_n = (1/n) Σ f(X_k), the estimator is unbiased, i.e., E[F_n] = ∫_Ω f(x) dx = I, with error ε = σ_f/√n; inverting this yields that the number of samples to yield a desired error ε is n = V[f]/ε².

More on Speed of Convergence : To estimate a parameter with n independent, identically distributed r.v.s X_i, drawn from a population with mean μ and standard deviation σ, the Central Limit Theorem states that Σ X_k is normally distributed with mean nμ and variance nσ². This can be transformed to a standard normally distributed r.v. (μ=0, σ=1):
Z = (Σ X_k − nμ)/(√n σ) = (X̄ − μ)/(σ/√n)   (dividing numerator & denominator by n)
where X̄ = (1/n) Σ X_k, the sample mean. From another angle, choose some constant α < 1, and let y_{α/2} be that value of Y such that Φ(Y) = 1 − α/2. Then
Pr{−y_{α/2} ≤ Z ≤ y_{α/2}} = Pr{X̄ − σ y_{α/2}/√n ≤ μ ≤ X̄ + σ y_{α/2}/√n} = 1 − α.
The interval X̄ ± σ y_{α/2}/√n is usually called a confidence interval and 1 − α the confidence level (usually given as a %). For a confidence level of 90%, y_{α/2} = 1.65, so we are 90% confident that μ is covered by X̄ ± σ y_{α/2}/√n.

Ex : Estimating the size of backtrack search trees. Consider the 4-queens problem, with the search tree for 1 queen/row.

level   # nodes
LN2-5
[search-tree figure elided: the 4-queens tree, one column choice per level, with the node counts per level and one random root-to-leaf probe marked; for the tree shown, shape = (X_0, X_1, ...) and size = 17]

How big is the tree for the 25-queens problem?
What is the critical mass of fissionable material where, for a given neutron, we know Pr{i offspring}, i ≥ 0?
What is the probability of extinction of the Selkow name, given:
- patrilineal naming,
- there are no Selkows outside my house.
For Americans in 1939, Pr{X_i = 0} ≈ 0.4823 and Pr{X_i = k} ≈ (0.2126)·(0.5893)^(k−1), k ≥ 1 (the first value is forced, since the probabilities must sum to 1).

Consider a sequence of r.v.s X_0, X_1, ..., where X_i estimates # nodes at level i. We want E[X_i] = # nodes at level i. size = Σ_i X_i, so size is an r.v.

Experiment :
  X_0 := 1
  i := 0
  current_node := root of tree
  neighbors := children(current_node)
  while |neighbors| > 0 do
    i := i + 1
    X_i := |neighbors| * X_{i-1}
    current_node := random_select(neighbors)
    neighbors := children(current_node)
  shape := (X_0, X_1, ...)
  size := Σ_i X_i

Do it for 8-queens.

Theorem : E[size] is correct.
Proof : (By induction on the depth of the tree)
Basis : X_0 = 1.
I.H. : True for all trees of depth ≤ k.
I.S. :
LN2-6
Let n = # children of the root. Then
E[size] = 1 + Σ_{i=1}^{n} Pr{choosing i-th child} · n · E[size(i-th subtree)]
        = 1 + Σ_{i=1}^{n} (1/n) · n · E[size(i-th subtree)]
        = 1 + Σ_{i=1}^{n} E[size(i-th subtree)]
By the induction hypothesis, each E[size(i-th subtree)] is the true size of that subtree, so E[size] is the size of the whole tree.

Variance

Suppose 100 lottery tickets are sold each week, and the prize is $100. Should we buy 2 tickets in the same week, or 1 ticket in each of 2 lotteries? Let X_1 & X_2 be r.v.s denoting the amount we win with each ticket. E[X_1] = 0·Pr{we lose} + 100·Pr{we win} = 100·(1/100) = 1, and likewise for X_2. E[X_1 + X_2] = E[X_1] + E[X_2] = 2 (for both strategies). To distinguish between the strategies, look at the probability distributions:

Ω (winnings)   $0       $100     $200
1 drawing      .98      .02      0
2 drawings     .9801    .0198    .0001

To measure the "spread", we use
Def : The variance of r.v. X is VX, alternatively V(X) or σ²(X), where VX = E[(X − E[X])²]. Note that X − E[X] and (X − E[X])² are r.v.s.

For our example, let X = X_1 + X_2, E[X] = 2, and X(Ω) = {0, 100, 200}.
VX = (0 − 2)²·Pr{0} + (100 − 2)²·Pr{100} + (200 − 2)²·Pr{200}
For 1 drawing: VX = (0−2)²·.98 + (100−2)²·.02 + (200−2)²·0 = 196
For 2 drawings: VX = (0−2)²·.9801 + (100−2)²·.0198 + (200−2)²·.0001 = 198

Computational simplification : Given a large Ω, computing E[(X − E[X])²] requires:
- computing E[X], then
- for each x ∈ X(Ω), computing (x − E[X])²·Pr{x}.
To compute this in 1 pass, note that
VX = E[(X − E[X])²] = E[X² − 2XE[X] + E[X]²] = E[X²] − 2E[X]E[X] + E[X]² = E[X²] − E[X]²
(variance = the mean of the square − the square of the mean)

LN2-7
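The one-pass identity can be checked mechanically on the lottery example. The following Python sketch (illustrative only) computes the variance both from the definition and via E[X²] − E[X]², using exact fractions so the two agree exactly.

```python
from fractions import Fraction as F

# The two winning distributions from the table above (value -> probability).
one_drawing = {0: F(98, 100), 100: F(2, 100), 200: F(0)}
two_drawings = {0: F(9801, 10000), 100: F(198, 10000), 200: F(1, 10000)}

def expect(dist):
    """E[X] for a distribution given as value -> probability."""
    return sum(x * p for x, p in dist.items())

def var_definition(dist):
    """VX = E[(X - E[X])^2], straight from the definition (two passes)."""
    m = expect(dist)
    return sum((x - m) ** 2 * p for x, p in dist.items())

def var_one_pass(dist):
    """VX = E[X^2] - E[X]^2, the computational simplification."""
    return sum(x * x * p for x, p in dist.items()) - expect(dist) ** 2

assert var_definition(one_drawing) == var_one_pass(one_drawing) == 196
assert var_definition(two_drawings) == var_one_pass(two_drawings) == 198
```

Both routes give 196 and 198, matching the hand computations; with floating point instead of fractions the two would differ by rounding noise.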
For 1 drawing, VX = (0²·.98 + 100²·.02 + 200²·0) − 4 = 196
For 2 drawings, VX = (0²·.9801 + 100²·.0198 + 200²·.0001) − 4 = 198

If we could buy 100 tickets, betting them all in 1 lottery yields E[X] = 100 and VX = 0, but betting them in 100 different lotteries yields E[X] = 100 with Pr{X = 0} = .99^100 ≈ .366, but a > 0 probability of winning $10,000.

Further Simplification : If r.v.s X_1, ..., X_n are independent, then V[Σ X_k] = Σ V(X_k).

Standard deviation = σ(X) = √VX.

Why do we care about variance? Chebyshev's inequality: Pr{(X − E[X])² ≥ α} ≤ VX/α for all α > 0, or, replacing α by c²·VX, Pr{|X − E[X]| ≥ c·σ(X)} ≤ 1/c². The probability that X will be more than c standard deviations from the mean is ≤ 1/c². Rolling a pair of fair dice n times, the total value is about 7n, and the variance of n rolls is 35n/6. Choosing c = 10, at least 99% of the time the total of n = 10^6 rolls is between 6.976·10^6 and 7.024·10^6.
LN2-8
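To see where the last numbers come from, a short Python sketch (illustrative, not from the notes) computes the per-roll mean and variance of the two-dice total exactly, then evaluates the 10-standard-deviation window for n = 10^6 rolls.

```python
import math
from fractions import Fraction
from itertools import product

# Exact distribution of the total shown by one roll of a pair of fair dice.
pr = {}
for a, b in product(range(1, 7), repeat=2):
    pr[a + b] = pr.get(a + b, 0) + Fraction(1, 36)

mean = sum(x * p for x, p in pr.items())                 # 7
var = sum(x * x * p for x, p in pr.items()) - mean ** 2  # 35/6 per roll

assert mean == 7
assert var == Fraction(35, 6)

# n rolls: the total has mean 7n and variance 35n/6.
# Chebyshev with c = 10: Pr{|total - 7n| >= 10*sigma} <= 1/100.
n = 10 ** 6
half_width = 10 * math.sqrt(35 * n / 6)   # about 24,152
print(7 * n - half_width, 7 * n + half_width)
```

The window 7·10^6 ± 24,152 is exactly the 6.976·10^6 to 7.024·10^6 range quoted in the text.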
CS 7 Discrete Mathematics and Probability Theory Fall 215 Note 2 A Brief Introduction to Continuous Probability Up to now we have focused exclusively on discrete probability spaces Ω, where the number
More informationRandomized Algorithms
Randomized Algorithms Prof. Tapio Elomaa tapio.elomaa@tut.fi Course Basics A new 4 credit unit course Part of Theoretical Computer Science courses at the Department of Mathematics There will be 4 hours
More informationSingle Maths B: Introduction to Probability
Single Maths B: Introduction to Probability Overview Lecturer Email Office Homework Webpage Dr Jonathan Cumming j.a.cumming@durham.ac.uk CM233 None! http://maths.dur.ac.uk/stats/people/jac/singleb/ 1 Introduction
More informationExam III #1 Solutions
Department of Mathematics University of Notre Dame Math 10120 Finite Math Fall 2017 Name: Instructors: Basit & Migliore Exam III #1 Solutions November 14, 2017 This exam is in two parts on 11 pages and
More informationMidterm 2 Review. CS70 Summer Lecture 6D. David Dinh 28 July UC Berkeley
Midterm 2 Review CS70 Summer 2016 - Lecture 6D David Dinh 28 July 2016 UC Berkeley Midterm 2: Format 8 questions, 190 points, 110 minutes (same as MT1). Two pages (one double-sided sheet) of handwritten
More informationDisjointness and Additivity
Midterm 2: Format Midterm 2 Review CS70 Summer 2016 - Lecture 6D David Dinh 28 July 2016 UC Berkeley 8 questions, 190 points, 110 minutes (same as MT1). Two pages (one double-sided sheet) of handwritten
More informationTo find the median, find the 40 th quartile and the 70 th quartile (which are easily found at y=1 and y=2, respectively). Then we interpolate:
Joel Anderson ST 37-002 Lecture Summary for 2/5/20 Homework 0 First, the definition of a probability mass function p(x) and a cumulative distribution function F(x) is reviewed: Graphically, the drawings
More informationStreaming Algorithms for Optimal Generation of Random Bits
Streaming Algorithms for Optimal Generation of Random Bits ongchao Zhou Electrical Engineering Department California Institute of echnology Pasadena, CA 925 Email: hzhou@caltech.edu Jehoshua Bruck Electrical
More informationKousha Etessami. U. of Edinburgh, UK. Kousha Etessami (U. of Edinburgh, UK) Discrete Mathematics (Chapter 7) 1 / 13
Discrete Mathematics & Mathematical Reasoning Chapter 7 (continued): Markov and Chebyshev s Inequalities; and Examples in probability: the birthday problem Kousha Etessami U. of Edinburgh, UK Kousha Etessami
More informationPart (A): Review of Probability [Statistics I revision]
Part (A): Review of Probability [Statistics I revision] 1 Definition of Probability 1.1 Experiment An experiment is any procedure whose outcome is uncertain ffl toss a coin ffl throw a die ffl buy a lottery
More informationEECS 70 Discrete Mathematics and Probability Theory Fall 2015 Walrand/Rao Final
EECS 70 Discrete Mathematics and Probability Theory Fall 2015 Walrand/Rao Final PRINT Your Name:, (last) SIGN Your Name: (first) PRINT Your Student ID: CIRCLE your exam room: 220 Hearst 230 Hearst 237
More informationUCSD CSE 21, Spring 2014 [Section B00] Mathematics for Algorithm and System Analysis
UCSD CSE 21, Spring 2014 [Section B00] Mathematics for Algorithm and System Analysis Lecture 10 Class URL: http://vlsicad.ucsd.edu/courses/cse21-s14/ Lecture 10 Notes Midterm Good job overall! = 81; =
More informationMath 447. Introduction to Probability and Statistics I. Fall 1998.
Math 447. Introduction to Probability and Statistics I. Fall 1998. Schedule: M. W. F.: 08:00-09:30 am. SW 323 Textbook: Introduction to Mathematical Statistics by R. V. Hogg and A. T. Craig, 1995, Fifth
More informationCSC Discrete Math I, Spring Discrete Probability
CSC 125 - Discrete Math I, Spring 2017 Discrete Probability Probability of an Event Pierre-Simon Laplace s classical theory of probability: Definition of terms: An experiment is a procedure that yields
More informationLesson 12: Systems of Linear Equations
Our final lesson involves the study of systems of linear equations. In this lesson, we examine the relationship between two distinct linear equations. Specifically, we are looking for the point where the
More informationFinal Examination. Adrian Georgi Josh Karen Lee Min Nikos Tina. There are 12 problems totaling 150 points. Total time is 170 minutes.
Massachusetts Institute of Technology 6.042J/18.062J, Fall 02: Mathematics for Computer Science Prof. Albert Meyer and Dr. Radhika Nagpal Final Examination Your name: Circle the name of your Tutorial Instructor:
More information