Theorem 1.7 [Bayes' Law]


Theorem 1.7 [Bayes' Law]: Assume that $E_1, E_2, \ldots, E_n$ are mutually disjoint events in the sample space $\Omega$ such that $\bigcup_{i=1}^{n} E_i = \Omega$. Then
$$\Pr(E_j \mid B) = \frac{\Pr(E_j \cap B)}{\Pr(B)} = \frac{\Pr(B \mid E_j)\,\Pr(E_j)}{\sum_{i=1}^{n} \Pr(B \mid E_i)\,\Pr(E_i)}.$$

We are given three coins and are told that two of the coins are fair and the third coin is biased, landing heads with probability 2/3. We permute the coins randomly and then flip each of the coins. The first and second coins come up heads, and the third comes up tails. What is the probability that the first coin is the biased one?

The coins are in a random order, so before we observe the outcomes of the coin flips, each of the three coins is equally likely to be the biased one. Let $E_i$ be the event that the $i$th coin flipped is the biased one, and let $B$ be the event that the three coin flips came up heads, heads, and tails.

Before we flip the coins, $\Pr(E_i) = 1/3$ for all $i$. The probability of the event $B$ conditioned on $E_i$:
$$\Pr(B \mid E_1) = \Pr(B \mid E_2) = \frac{2}{3}\cdot\frac{1}{2}\cdot\frac{1}{2} = \frac{1}{6} \quad\text{and}\quad \Pr(B \mid E_3) = \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{3} = \frac{1}{12}.$$
Applying Bayes' law, we have
$$\Pr(E_1 \mid B) = \frac{\Pr(B \mid E_1)\,\Pr(E_1)}{\sum_{i=1}^{3} \Pr(B \mid E_i)\,\Pr(E_i)} = \frac{1/18}{1/18 + 1/18 + 1/36} = \frac{2}{5}.$$
The three coin flips increase the likelihood that the first coin is the biased one from 1/3 to 2/5.
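As a sanity check of the computation above, the posterior can be evaluated in exact rational arithmetic; a minimal sketch (the variable names are ours, not the notes'):

```python
from fractions import Fraction

prior = [Fraction(1, 3)] * 3          # each coin equally likely to be the biased one
likelihood = [                        # Pr(B | E_i) for the outcome heads, heads, tails
    Fraction(2, 3) * Fraction(1, 2) * Fraction(1, 2),   # coin 1 biased: 1/6
    Fraction(1, 2) * Fraction(2, 3) * Fraction(1, 2),   # coin 2 biased: 1/6
    Fraction(1, 2) * Fraction(1, 2) * Fraction(1, 3),   # coin 3 biased, shows tails: 1/12
]
evidence = sum(p * l for p, l in zip(prior, likelihood))
print(prior[0] * likelihood[0] / evidence)              # Fraction(2, 5)
```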

In the randomized matrix multiplication test, we want to evaluate the increase in confidence in the matrix identity obtained through repeated tests. In the Bayesian approach one starts with a prior model, giving some initial value to the model parameters. This model is then modified, by incorporating new observations, to obtain a posterior model that captures the new information. If we have no information about the process that generated the identity, then a reasonable prior assumption is that the identity is correct with probability 1/2.

Let $E$ be the event that the identity is correct, and let $B$ be the event that the test returns that the identity is correct. We start with $\Pr(E) = \Pr(\bar{E}) = 1/2$, and since the test has a one-sided error bounded by 1/2, we have $\Pr(B \mid E) = 1$ and $\Pr(B \mid \bar{E}) \le 1/2$. Applying Bayes' law yields
$$\Pr(E \mid B) = \frac{\Pr(B \mid E)\,\Pr(E)}{\Pr(B \mid E)\,\Pr(E) + \Pr(B \mid \bar{E})\,\Pr(\bar{E})} \ge \frac{1/2}{1/2 + \frac{1}{2}\cdot\frac{1}{2}} = \frac{2}{3}.$$

Assume now that we run the randomized test again and it again returns that the identity is correct. After the first test, we may have revised our prior model, so that we believe $\Pr(E) \ge 2/3$ and $\Pr(\bar{E}) \le 1/3$. Now let $B$ be the event that the new test returns that the identity is correct; since the tests are independent, as before we have $\Pr(B \mid E) = 1$ and $\Pr(B \mid \bar{E}) \le 1/2$. Applying Bayes' law then yields
$$\Pr(E \mid B) \ge \frac{2/3}{2/3 + \frac{1}{3}\cdot\frac{1}{2}} = \frac{4}{5}.$$

In general: if our prior model (before running the test) is that $\Pr(E) \ge 2^{i}/(2^{i}+1)$, and if the test returns that the identity is correct (event $B$), then
$$\Pr(E \mid B) \ge \frac{\dfrac{2^{i}}{2^{i}+1}}{\dfrac{2^{i}}{2^{i}+1} + \dfrac{1}{2}\cdot\dfrac{1}{2^{i}+1}} = \frac{2^{i+1}}{2^{i+1}+1}.$$
Thus, if all 100 calls to the matrix identity test return that it is correct, our confidence in the correctness of this identity is at least $1 - 1/(2^{100}+1)$.
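The update just described is easy to iterate mechanically; here is a sketch that applies Bayes' law with the worst-case one-sided error $\Pr(B \mid \bar{E}) = 1/2$ at every step (the loop bounds and names are ours):

```python
from fractions import Fraction

p = Fraction(1, 2)                 # prior: identity correct with probability 1/2
for i in range(1, 101):
    # test returns "correct": Pr(B | E) = 1, Pr(B | not E) = 1/2 (worst case)
    p = p / (p + Fraction(1, 2) * (1 - p))
    if i in (1, 2, 100):
        print(i, p)                # 2/3, 4/5, ..., 2^100 / (2^100 + 1)
```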

A Randomized Min-Cut Algorithm

A cut-set in a graph is a set of edges whose removal breaks the graph into two or more connected components. Given a graph $G = (V, E)$ with $n$ vertices, the minimum cut, or min-cut, problem is to find a minimum-cardinality cut-set in $G$. Minimum cut problems arise in many contexts, including the study of network reliability.

Minimum cuts also arise in clustering problems. For example, if nodes represent Web pages (or any documents in a hypertext-based system) and two nodes have an edge between them if the corresponding documents have a hyperlink between them, then small cuts divide the graph into clusters of documents with few links between clusters. Documents in different clusters are likely to be unrelated.

The main operation of the algorithm is edge contraction. In contracting an edge $\{u, v\}$ we merge the vertices $u$ and $v$ into one, eliminate all edges connecting $u$ and $v$, and retain all other edges in the graph. The new graph may have parallel edges but no self-loops. The algorithm consists of $n - 2$ iterations. Each iteration picks an edge from the existing edges in the graph and contracts that edge. Our randomized algorithm chooses the edge uniformly at random from the remaining edges.

Each iteration reduces the number of vertices by one, so after $n - 2$ iterations there are two vertices left. The algorithm outputs the set of edges connecting the two remaining vertices. Any cut-set in an intermediate iteration of the algorithm is also a cut-set of the original graph. However, not every cut-set of the original graph is a cut-set in an intermediate iteration, since some of its edges may have been contracted in previous iterations. As a result, the output of the algorithm is always a cut-set of the original graph, but not necessarily the minimum-cardinality cut-set.
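The following is a minimal sketch of the contraction algorithm just described, not code from the notes; it tracks merged vertices with a small union-find structure and represents the multigraph as an edge list:

```python
import random

def contract_min_cut(edges, n):
    """One run of the contraction algorithm. edges is a list of (u, v) pairs
    over vertices 0..n-1; returns the size of the cut-set found (always a
    cut of the original graph, not necessarily a minimum one)."""
    parent = list(range(n))                  # union-find tracks merged vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    for _ in range(n - 2):                   # n - 2 contractions leave 2 vertices
        u, v = random.choice(edges)          # uniform over the remaining edges
        parent[find(u)] = find(v)            # contract {u, v}
        # drop self-loops created by the contraction; keep parallel edges
        edges = [e for e in edges if find(e[0]) != find(e[1])]
    return len(edges)                        # edges joining the two supervertices

# Two triangles joined by a bridge: min cut = 1. A single run can fail, so we
# repeat; each run succeeds with probability at least 2/(n(n-1)) (Theorem 1.8).
g = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
print(min(contract_min_cut(list(g), 6) for _ in range(100)))   # 1
```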

Theorem 1.8: The algorithm outputs a min-cut set with probability at least $2/(n(n-1))$.

Proof: Let $k$ be the size of the min-cut set of $G$. The graph may have several cut-sets of minimum size; we compute the probability of finding one specific such set $C$. Since $C$ is a cut-set in the graph, removal of the set $C$ partitions the set of vertices into two sets, $S$ and $V - S$, such that there are no edges connecting vertices in $S$ to vertices in $V - S$. Assume that, throughout an execution of the algorithm, we contract only edges that connect two vertices in $S$ or two vertices in $V - S$, but not edges in $C$. In that case, all the edges eliminated throughout the execution will be edges connecting vertices in $S$ or vertices in $V - S$, and after $n - 2$ iterations the algorithm returns a graph with two vertices connected by the edges in $C$. We may conclude that, if the algorithm never chooses an edge of $C$ in its $n - 2$ iterations, then the algorithm returns $C$ as the minimum cut-set.

If the size of the cut is small, then the probability that the algorithm chooses an edge of $C$ is small, at least when the number of edges remaining is large compared to $|C|$. Let $E_i$ be the event that the edge contracted in iteration $i$ is not in $C$, and let $F_i = \bigcap_{j=1}^{i} E_j$ be the event that no edge of $C$ was contracted in the first $i$ iterations. We need to compute $\Pr(F_{n-2})$.

Start by computing $\Pr(E_1) = \Pr(F_1)$. Since the minimum cut-set has $k$ edges, all vertices in the graph must have degree $k$ or larger. If each vertex is adjacent to at least $k$ edges, then the graph must have at least $nk/2$ edges. Since there are at least $nk/2$ edges in the graph and since $C$ has $k$ edges, the probability that we do not choose an edge of $C$ in the first iteration is
$$\Pr(E_1) = \Pr(F_1) \ge 1 - \frac{k}{nk/2} = 1 - \frac{2}{n}.$$
Suppose that the first contraction did not eliminate an edge of $C$; i.e., we condition on the event $F_1$. Then, after the first iteration, we are left with an $(n-1)$-node graph with minimum cut-set of size $k$. Again, the degree of each vertex in the graph must be at least $k$, and the graph must have at least $k(n-1)/2$ edges.

Hence
$$\Pr(E_2 \mid F_1) \ge 1 - \frac{k}{k(n-1)/2} = 1 - \frac{2}{n-1}.$$
Similarly,
$$\Pr(E_i \mid F_{i-1}) \ge 1 - \frac{k}{k(n-i+1)/2} = 1 - \frac{2}{n-i+1}.$$
To compute $\Pr(F_{n-2})$, we use
$$\Pr(F_{n-2}) = \Pr(E_{n-2} \cap F_{n-3}) = \Pr(E_{n-2} \mid F_{n-3}) \cdot \Pr(F_{n-3}) = \cdots \ge \prod_{i=1}^{n-2}\left(1 - \frac{2}{n-i+1}\right) = \prod_{i=1}^{n-2} \frac{n-i-1}{n-i+1} = \left(\frac{n-2}{n}\right)\left(\frac{n-3}{n-1}\right)\cdots\left(\frac{2}{4}\right)\left(\frac{1}{3}\right) = \frac{2}{n(n-1)}. \qquad \Box$$

2 Discrete Random Variables and Expectation
2.1 Random Variables and Expectation
2.2 The Bernoulli and Binomial Random Variables
2.3 Conditional Expectation
2.4 The Geometric Distribution
2.5 The Expected Run-Time of Quicksort

In tossing two dice we are often interested in the sum of the dice rather than their separate values. The sample space in tossing two dice consists of 36 events of equal probability, given by the ordered pairs of numbers $\{(1,1), (1,2), \ldots, (6,6)\}$. If the quantity we are interested in is the sum of the two dice, then we are interested in 11 events (of unequal probability), the sums 2 through 12. Any such function from the sample space to the real numbers is called a random variable.

Random Variables and Expectation

Definition 2.1: A random variable (RV) $X$ on a sample space $\Omega$ is a real-valued function on $\Omega$; that is, $X : \Omega \to \mathbb{R}$. A discrete random variable is a RV that takes on only a finite or countably infinite number of values.

For a discrete RV $X$ and a real value $a$, the event "$X = a$" includes all the basic events of the sample space in which $X$ assumes the value $a$; i.e., "$X = a$" represents the set $\{s \in \Omega : X(s) = a\}$.

We denote the probability of that event by
$$\Pr(X = a) = \sum_{s \in \Omega:\, X(s) = a} \Pr(s).$$
If $X$ is the RV representing the sum of the two dice, then the event $X = 4$ corresponds to the set of basic events $\{(1,3), (2,2), (3,1)\}$. Hence
$$\Pr(X = 4) = \frac{3}{36} = \frac{1}{12}.$$

Definition 2.2: Two RVs $X$ and $Y$ are independent if and only if $\Pr((X = x) \cap (Y = y)) = \Pr(X = x) \cdot \Pr(Y = y)$ for all values $x$ and $y$. Similarly, RVs $X_1, X_2, \ldots, X_k$ are mutually independent if and only if, for any subset $I \subseteq [1, k]$ and any values $x_i$, $i \in I$,
$$\Pr\Big(\bigcap_{i \in I} (X_i = x_i)\Big) = \prod_{i \in I} \Pr(X_i = x_i).$$

Definition 2.3: The expectation of a discrete RV $X$, denoted by $\mathrm{E}[X]$, is given by
$$\mathrm{E}[X] = \sum_{i} i \Pr(X = i),$$
where the summation is over all values $i$ in the range of $X$. The expectation is finite if $\sum_{i} |i| \Pr(X = i)$ converges; otherwise, it is unbounded.

E.g., the expectation of the RV $X$ representing the sum of two dice is
$$\mathrm{E}[X] = 2 \cdot \frac{1}{36} + 3 \cdot \frac{2}{36} + \cdots + 12 \cdot \frac{1}{36} = 7.$$

As an example of where the expectation of a discrete RV is unbounded, consider a RV $X$ that takes on the value $2^i$ with probability $1/2^i$ for $i = 1, 2, \ldots$. The expected value of $X$ is
$$\mathrm{E}[X] = \sum_{i=1}^{\infty} 2^i \cdot \frac{1}{2^i} = \sum_{i=1}^{\infty} 1,$$
which expresses that $\mathrm{E}[X]$ is unbounded.
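The two-dice values above can be checked by brute force over the 36 equally likely outcomes; a small sketch (names are ours):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))
pr = Fraction(1, len(outcomes))                       # 1/36 each
print(sum(pr for a, b in outcomes if a + b == 4))     # Pr(X = 4) = 1/12
print(sum((a + b) * pr for a, b in outcomes))         # E[X] = 7
```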

Linearity of Expectations

By this property, the expectation of the sum of RVs is equal to the sum of their expectations.

Theorem 2.1 [Linearity of Expectations]: For any finite collection of discrete RVs $X_1, X_2, \ldots, X_n$ with finite expectations,
$$\mathrm{E}\Big[\sum_{i=1}^{n} X_i\Big] = \sum_{i=1}^{n} \mathrm{E}[X_i].$$

Proof: We prove the statement for two random variables $X$ and $Y$; the general case follows by induction. The summations that follow are understood to be over the ranges of the corresponding RVs:
$$\begin{aligned}
\mathrm{E}[X + Y] &= \sum_i \sum_j (i + j) \Pr((X = i) \cap (Y = j)) \\
&= \sum_i \sum_j i \Pr((X = i) \cap (Y = j)) + \sum_i \sum_j j \Pr((X = i) \cap (Y = j)) \\
&= \sum_i i \sum_j \Pr((X = i) \cap (Y = j)) + \sum_j j \sum_i \Pr((X = i) \cap (Y = j)) \\
&= \sum_i i \Pr(X = i) + \sum_j j \Pr(Y = j) = \mathrm{E}[X] + \mathrm{E}[Y].
\end{aligned}$$
The first equality follows from the definition of expectation. The penultimate equality uses Theorem 1.6, the law of total probability.

Let us now compute the expected sum of two standard dice. Let $X = X_1 + X_2$, where $X_i$ represents the outcome of die $i$ for $i = 1, 2$. Then
$$\mathrm{E}[X_i] = \frac{1}{6}(1 + 2 + 3 + 4 + 5 + 6) = \frac{7}{2}.$$
Applying the linearity of expectations, we have $\mathrm{E}[X] = \mathrm{E}[X_1] + \mathrm{E}[X_2] = 7$.

Linearity of expectations holds for any collection of RVs, even if they are not independent. Consider, e.g., the previous example and let the random variable $Y = X_1 + X_1$. We have
$$\mathrm{E}[Y] = \mathrm{E}[X_1 + X_1] = \mathrm{E}[X_1] + \mathrm{E}[X_1],$$
even though $X_1$ and $X_1$ are clearly dependent. Verify the identity by considering the six possible outcomes for $X_1$, as in the sketch below.
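A sketch of the suggested check: with $Y = X_1 + X_1$, both sides come to 7 even though the two summands are (trivially) dependent.

```python
from fractions import Fraction

pr = Fraction(1, 6)
e_x1 = sum(i * pr for i in range(1, 7))        # E[X1] = 7/2
e_y = sum((i + i) * pr for i in range(1, 7))   # E[X1 + X1]
print(e_y, e_x1 + e_x1)                        # 7 7
```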

Lemma 2.2: For any constant $c$ and discrete RV $X$, $\mathrm{E}[cX] = c\,\mathrm{E}[X]$.

Proof: The lemma is obvious for $c = 0$. For $c \ne 0$,
$$\mathrm{E}[cX] = \sum_{j} j \Pr(cX = j) = c \sum_{j} \frac{j}{c} \Pr\Big(X = \frac{j}{c}\Big) = c \sum_{k} k \Pr(X = k) = c\,\mathrm{E}[X].$$

Jensen's Inequality

Let us choose the length $X$ of a side of a square uniformly at random from the range $[1, 99]$. What is the expected value of the area? We can write this as $\mathrm{E}[X^2]$. It is tempting to think of this as being equal to $\mathrm{E}[X]^2$, but a simple calculation shows that this is not correct. In fact,
$$\mathrm{E}[X]^2 = 50^2 = 2500, \quad\text{whereas}\quad \mathrm{E}[X^2] = \sum_{i=1}^{99} \frac{i^2}{99} = \frac{9950}{3} > 2500.$$
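A quick check of the square example in exact arithmetic (treating $X$ as uniform on the integers $1, \ldots, 99$, as above):

```python
from fractions import Fraction

pr = Fraction(1, 99)
e_x = sum(i * pr for i in range(1, 100))         # 50
e_x2 = sum(i * i * pr for i in range(1, 100))    # 9950/3
print(e_x ** 2, e_x2, e_x2 > e_x ** 2)           # 2500, 9950/3, True
```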

More generally, $\mathrm{E}[X^2] \ge \mathrm{E}[X]^2$. Consider $Y = (X - \mathrm{E}[X])^2$. The RV $Y$ is nonnegative, and hence its expectation must also be nonnegative:
$$\begin{aligned}
0 \le \mathrm{E}[Y] = \mathrm{E}[(X - \mathrm{E}[X])^2] &= \mathrm{E}[X^2 - 2X\,\mathrm{E}[X] + \mathrm{E}[X]^2] \\
&= \mathrm{E}[X^2] - 2\,\mathrm{E}[X]\,\mathrm{E}[X] + \mathrm{E}[X]^2 = \mathrm{E}[X^2] - \mathrm{E}[X]^2.
\end{aligned}$$
To obtain the penultimate line, use the linearity of expectations. To obtain the last line, use Lemma 2.2 to simplify $\mathrm{E}[X\,\mathrm{E}[X]] = \mathrm{E}[X]\,\mathrm{E}[X]$.

The fact that $\mathrm{E}[X^2] \ge \mathrm{E}[X]^2$ is an example of Jensen's inequality. Jensen's inequality shows that, for any convex function $f$, we have $\mathrm{E}[f(X)] \ge f(\mathrm{E}[X])$.

Definition 2.4: A function $f : \mathbb{R} \to \mathbb{R}$ is said to be convex if, for any $x_1, x_2$ and $0 \le \lambda \le 1$,
$$f(\lambda x_1 + (1 - \lambda) x_2) \le \lambda f(x_1) + (1 - \lambda) f(x_2).$$

Lemma 2.3: If $f$ is a twice differentiable function, then $f$ is convex if and only if $f''(x) \ge 0$.

Theorem 2.4 [Jensen's Inequality]: If $f$ is a convex function, then $\mathrm{E}[f(X)] \ge f(\mathrm{E}[X])$.

Proof: We prove the theorem assuming that $f$ has a Taylor expansion. Let $\mu = \mathrm{E}[X]$. By Taylor's theorem there is a value $c$ such that
$$f(x) = f(\mu) + f'(\mu)(x - \mu) + \frac{f''(c)(x - \mu)^2}{2} \ge f(\mu) + f'(\mu)(x - \mu),$$
since $f''(c) \ge 0$ by convexity. Taking expectations of both sides and applying linearity of expectations and Lemma 2.2 yields:
$$\mathrm{E}[f(X)] \ge \mathrm{E}[f(\mu) + f'(\mu)(X - \mu)] = f(\mu) + f'(\mu)(\mathrm{E}[X] - \mu) = f(\mu) = f(\mathrm{E}[X]).$$

2.2 The Bernoulli and Binomial Random Variables

We run an experiment that succeeds with probability $p$ and fails with probability $1 - p$. Let $Y$ be a RV such that
$$Y = \begin{cases} 1 & \text{if the experiment succeeds,} \\ 0 & \text{otherwise.} \end{cases}$$
The variable $Y$ is called a Bernoulli or an indicator random variable. Note that, for a Bernoulli RV,
$$\mathrm{E}[Y] = p \cdot 1 + (1 - p) \cdot 0 = p = \Pr(Y = 1).$$

If we, e.g., flip a fair coin and consider heads a success, then the expected value of the corresponding indicator RV is 1/2. Consider a sequence of $n$ independent coin flips. What is the distribution of the number of heads in the entire sequence? More generally, consider a sequence of $n$ independent experiments, each of which succeeds with probability $p$. If we let $X$ represent the number of successes in the $n$ experiments, then $X$ has a binomial distribution.

Definition 2.5: A binomial RV $X$ with parameters $n$ and $p$, denoted by $B(n, p)$, is defined by the following probability distribution on $j = 0, 1, 2, \ldots, n$:
$$\Pr(X = j) = \binom{n}{j} p^j (1 - p)^{n - j}.$$
I.e., the binomial RV (BRV) equals $j$ when there are exactly $j$ successes and $n - j$ failures in $n$ independent experiments, each of which is successful with probability $p$. Definition 2.5 gives a valid probability function (Definition 1.2), since
$$\sum_{j=0}^{n} \binom{n}{j} p^j (1 - p)^{n - j} = 1.$$

Suppose we want to gather data about the packets going through a router; e.g., we want to know the approximate fraction of packets from a certain source or of a certain type. We store a random subset, or sample, of the packets for later analysis. If each packet is stored with probability $p$ and $n$ packets go through the router each day, then the number of sampled packets each day is a BRV $X$ with parameters $n$ and $p$. To know how much memory is necessary for such a sample, we determine the expectation of $X$.

If $X$ is a BRV with parameters $n$ and $p$, then $X$ is the number of successes in $n$ trials, where each trial is successful with probability $p$. Define a set of $n$ indicator RVs $X_1, \ldots, X_n$, where $X_i = 1$ if the $i$th trial is successful and 0 otherwise. Clearly, $\mathrm{E}[X_i] = p$ and $X = \sum_{i=1}^{n} X_i$, and so, by the linearity of expectations,
$$\mathrm{E}[X] = \mathrm{E}\Big[\sum_{i=1}^{n} X_i\Big] = \sum_{i=1}^{n} \mathrm{E}[X_i] = np.$$
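A simulation sketch of this indicator-variable view (the parameter values are illustrative choices of ours): each binomial sample is built as a sum of $n$ indicators, and the sample mean should be close to $np$.

```python
import random

n, p, runs = 100, 0.3, 10000
total = sum(sum(1 for _ in range(n) if random.random() < p) for _ in range(runs))
print(total / runs, n * p)   # sample mean vs. np = 30
```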

Conditional Expectation

Definition 2.6:
$$\mathrm{E}[Y \mid Z = z] = \sum_{y} y \Pr(Y = y \mid Z = z),$$
where the summation is over all $y$ in the range of $Y$. The conditional expectation of a RV is, like the ordinary expectation, a weighted sum of the values it assumes; now each value is weighted by the conditional probability that the variable assumes that value.

Suppose that we independently roll two standard six-sided dice. Let $X_1$ be the number that shows on the first die, $X_2$ the number on the second die, and $X$ the sum of the numbers on the two dice. Then
$$\mathrm{E}[X \mid X_1 = 2] = \sum_{x} x \Pr(X = x \mid X_1 = 2) = \sum_{x=3}^{8} x \cdot \frac{1}{6} = \frac{11}{2}.$$
As another example, consider $\mathrm{E}[X_1 \mid X = 5]$:
$$\mathrm{E}[X_1 \mid X = 5] = \sum_{x=1}^{4} x \Pr(X_1 = x \mid X = 5) = \sum_{x=1}^{4} x \cdot \frac{\Pr((X_1 = x) \cap (X = 5))}{\Pr(X = 5)} = \sum_{x=1}^{4} x \cdot \frac{1/36}{4/36} = \frac{5}{2}.$$
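Both conditional expectations can be checked by direct enumeration: since all 36 outcomes are equally likely, a conditional expectation is just the average of the quantity over the outcomes in the conditioning event. A small sketch (the helper `cond_exp` is ours):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))

def cond_exp(value, event):
    hits = [o for o in outcomes if event(o)]
    return Fraction(sum(value(o) for o in hits), len(hits))

print(cond_exp(lambda o: o[0] + o[1], lambda o: o[0] == 2))    # E[X | X1 = 2] = 11/2
print(cond_exp(lambda o: o[0], lambda o: o[0] + o[1] == 5))    # E[X1 | X = 5] = 5/2
```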

Lemma 2.5: For any RVs $X$ and $Y$,
$$\mathrm{E}[X] = \sum_{y} \Pr(Y = y)\, \mathrm{E}[X \mid Y = y],$$
where the sum is over all values $y$ in the range of $Y$ and all of the expectations exist.

Proof:
$$\begin{aligned}
\sum_{y} \Pr(Y = y)\, \mathrm{E}[X \mid Y = y] &= \sum_{y} \Pr(Y = y) \sum_{x} x \Pr(X = x \mid Y = y) \\
&= \sum_{x} \sum_{y} x \Pr(X = x \mid Y = y) \Pr(Y = y) \\
&= \sum_{x} \sum_{y} x \Pr((X = x) \cap (Y = y)) \\
&= \sum_{x} x \Pr(X = x) = \mathrm{E}[X].
\end{aligned}$$

The linearity of expectations also extends to conditional expectations.

Lemma 2.6: For any finite collection of discrete RVs $X_1, X_2, \ldots, X_n$ with finite expectations and for any RV $Y$,
$$\mathrm{E}\Big[\sum_{i=1}^{n} X_i \,\Big|\, Y = y\Big] = \sum_{i=1}^{n} \mathrm{E}[X_i \mid Y = y].$$

Confusingly, the term conditional expectation is also used to refer to the following RV.

Definition 2.7: The expression $\mathrm{E}[Y \mid Z]$ is a RV $f(Z)$ that takes on the value $\mathrm{E}[Y \mid Z = z]$ when $Z = z$.

$\mathrm{E}[Y \mid Z]$ is not a real value; it is actually a function of the RV $Z$. Hence $\mathrm{E}[Y \mid Z]$ is itself a function from the sample space to the real numbers and can therefore be thought of as a RV.

In the previous example of rolling two dice,
$$\mathrm{E}[X \mid X_1] = \sum_{x = X_1 + 1}^{X_1 + 6} x \Pr(X = x \mid X_1) = \sum_{x = X_1 + 1}^{X_1 + 6} \frac{x}{6} = X_1 + \frac{7}{2}.$$
We see that $\mathrm{E}[X \mid X_1]$ is a RV whose value depends on $X_1$. If $\mathrm{E}[Y \mid Z]$ is a RV, then it makes sense to consider its expectation $\mathrm{E}[\mathrm{E}[Y \mid Z]]$. We found that $\mathrm{E}[X \mid X_1] = X_1 + 7/2$. Thus,
$$\mathrm{E}[\mathrm{E}[X \mid X_1]] = \mathrm{E}\Big[X_1 + \frac{7}{2}\Big] = \mathrm{E}[X_1] + \frac{7}{2} = 7 = \mathrm{E}[X].$$

More generally, we have the following.

Theorem 2.7: $\mathrm{E}[Y] = \mathrm{E}[\mathrm{E}[Y \mid Z]]$.

Proof: From Definition 2.7 we have $\mathrm{E}[Y \mid Z] = f(Z)$, where $f(Z)$ takes on the value $\mathrm{E}[Y \mid Z = z]$ when $Z = z$. Hence
$$\mathrm{E}[\mathrm{E}[Y \mid Z]] = \sum_{z} \mathrm{E}[Y \mid Z = z] \Pr(Z = z).$$
The right-hand side equals $\mathrm{E}[Y]$ by Lemma 2.5.

Consider a program that includes one call to a process $S$. Assume that each call to process $S$ recursively spawns new copies of the process $S$, where the number of new copies is a BRV with parameters $n$ and $p$. We assume that these random variables are independent for each call to $S$. What is the expected number of copies of the process $S$ generated by the program?

To analyze this recursive spawning process, we use the notion of generations. The initial process is in generation 0; otherwise, we say that a process is in generation $i$ if it was spawned by another process in generation $i - 1$. Let $Y_i$ denote the number of processes in generation $i$. Since we know that $Y_0 = 1$, the number of processes in generation 1 has a binomial distribution. Thus, $\mathrm{E}[Y_1] = np$.

Similarly, suppose we knew that the number of processes in generation $i - 1$ was $y_{i-1}$, so $Y_{i-1} = y_{i-1}$. Then
$$\mathrm{E}[Y_i \mid Y_{i-1} = y_{i-1}] = y_{i-1}\, np.$$
Applying Theorem 2.7, we can compute the expected size of the $i$th generation inductively. We have
$$\mathrm{E}[Y_i] = \mathrm{E}[\mathrm{E}[Y_i \mid Y_{i-1}]] = \mathrm{E}[Y_{i-1}\, np] = np\, \mathrm{E}[Y_{i-1}].$$
By induction on $i$, and using the fact that $Y_0 = 1$, we then obtain $\mathrm{E}[Y_i] = (np)^i$.
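A simulation sketch of the spawning process, comparing empirical generation sizes against $(np)^i$ (the parameter values are illustrative choices of ours, with $np < 1$ so generations shrink):

```python
import random

n, p, gens, runs = 3, 0.2, 4, 20000        # np = 0.6
totals = [0.0] * (gens + 1)
for _ in range(runs):
    size = 1                               # generation 0: the initial process
    totals[0] += size
    for i in range(1, gens + 1):
        # each of the `size` processes spawns a B(n, p) number of copies
        size = sum(1 for _ in range(size * n) if random.random() < p)
        totals[i] += size
for i, t in enumerate(totals):
    print(i, t / runs, (n * p) ** i)       # empirical mean vs. (np)^i
```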

The expected total number of copies of process $S$ generated by the program is given by
$$\mathrm{E}\Big[\sum_{i \ge 0} Y_i\Big] = \sum_{i \ge 0} \mathrm{E}[Y_i] = \sum_{i \ge 0} (np)^i.$$
If $np \ge 1$ then the expectation is unbounded; if $np < 1$, the expectation is $1/(1 - np)$. The expected number of processes generated by the program is bounded if and only if the expected number of processes spawned by each process is less than 1. This is a simple example of a branching process, a probabilistic paradigm extensively studied in probability theory.

The Geometric Distribution

Let us flip a coin until it lands on heads. What is the distribution of the number of flips? This is an example of a geometric distribution. It arises when we perform a sequence of independent trials until the first success, where each trial succeeds with probability $p$.

Definition 2.8: A geometric RV $X$ with parameter $p$ is given by the following probability distribution on $n = 1, 2, \ldots$:
$$\Pr(X = n) = (1 - p)^{n-1} p.$$

Geometric RVs are said to be memoryless because the probability of reaching the first success $n$ trials from now is independent of the number of failures experienced so far. Informally, one can ignore past failures: they do not change the distribution of the number of future trials until the first success. Formally, we have the following.

Lemma 2.8: For a geometric RV $X$ with parameter $p$ and for $n > 0$,
$$\Pr(X = n + k \mid X > k) = \Pr(X = n).$$

When a RV takes values in the set of natural numbers $\mathbb{N} = \{0, 1, 2, 3, \ldots\}$, there is an alternative formula for calculating its expectation.

Lemma 2.9: Let $X$ be a discrete RV that takes on only nonnegative integer values. Then
$$\mathrm{E}[X] = \sum_{i=1}^{\infty} \Pr(X \ge i).$$

Proof:
$$\sum_{i=1}^{\infty} \Pr(X \ge i) = \sum_{i=1}^{\infty} \sum_{j=i}^{\infty} \Pr(X = j) = \sum_{j=1}^{\infty} \sum_{i=1}^{j} \Pr(X = j) = \sum_{j=1}^{\infty} j \Pr(X = j) = \mathrm{E}[X],$$
where exchanging the order of summation is justified because all terms are nonnegative.
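Both lemmas are easy to check empirically; a sketch for a geometric RV, with $p$, $k$, and $n$ as illustrative choices of ours (the sum in Lemma 2.9 is truncated where the tail is negligible):

```python
import random

p, runs = 0.3, 200000

def geometric():
    flips = 1
    while random.random() >= p:            # failure, flip again
        flips += 1
    return flips

xs = [geometric() for _ in range(runs)]
k, n = 3, 2
tail = [x for x in xs if x > k]
print(sum(x == n + k for x in tail) / len(tail))     # Lemma 2.8: ~ Pr(X = n)
print(sum(x == n for x in xs) / runs)                # Pr(X = n) = (1-p)p = 0.21
print(sum(xs) / runs,                                # E[X] directly...
      sum(sum(x >= i for x in xs) / runs for i in range(1, 80)))  # ...vs Lemma 2.9
```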

For a geometric RV $X$ with parameter $p$,
$$\Pr(X \ge i) = \sum_{n=i}^{\infty} (1 - p)^{n-1} p = (1 - p)^{i-1}.$$
Hence
$$\mathrm{E}[X] = \sum_{i=1}^{\infty} \Pr(X \ge i) = \sum_{i=1}^{\infty} (1 - p)^{i-1} = \frac{1}{1 - (1 - p)} = \frac{1}{p}.$$
Thus, for a fair coin where $p = 1/2$, on average it takes two flips to see the first heads.

We can also find the expectation of a geometric RV $X$ with parameter $p$ using conditional expectations and the memoryless property of geometric RVs. Recall that $X$ corresponds to the number of flips until the first heads, given that each flip is heads with probability $p$. Let $Y = 0$ if the first flip is tails and $Y = 1$ if the first flip is heads. By the identity from Lemma 2.5,
$$\mathrm{E}[X] = \Pr(Y = 0)\, \mathrm{E}[X \mid Y = 0] + \Pr(Y = 1)\, \mathrm{E}[X \mid Y = 1] = (1 - p)\, \mathrm{E}[X \mid Y = 0] + p\, \mathrm{E}[X \mid Y = 1].$$

If $Y = 1$ then $X = 1$, so $\mathrm{E}[X \mid Y = 1] = 1$. If $Y = 0$, then $X > 1$. In this case, let the number of remaining flips (after the first flip) until the first heads be $Z$. Then, by the linearity of expectations,
$$\mathrm{E}[X] = (1 - p)\, \mathrm{E}[Z + 1] + p \cdot 1 = (1 - p)\, \mathrm{E}[Z] + 1.$$
By the memoryless property of geometric RVs, $Z$ is also a geometric RV with parameter $p$. Hence $\mathrm{E}[Z] = \mathrm{E}[X]$, since they both have the same distribution. We therefore have
$$\mathrm{E}[X] = (1 - p)\, \mathrm{E}[X] + 1,$$
which yields $\mathrm{E}[X] = 1/p$.

Example: Coupon Collector's Problem

Each box of cereal contains one of $n$ different coupons. Once you obtain one of every type of coupon, you can send in for a prize. The coupon in each box is chosen independently and uniformly at random from the $n$ possibilities, and you do not collaborate with others to collect coupons. How many boxes of cereal must you buy before you obtain at least one of every type of coupon?

Let $X$ be the number of boxes bought until at least one of every type of coupon is obtained. If $X_i$ is the number of boxes bought while you had exactly $i - 1$ different coupons, then clearly $X = \sum_{i=1}^{n} X_i$. The advantage of breaking $X$ into a sum of random variables $X_i$, $i = 1, \ldots, n$, is that each $X_i$ is a geometric RV: when exactly $i - 1$ coupons have been found, the probability of obtaining a new coupon is
$$p_i = 1 - \frac{i - 1}{n}.$$
Hence, $X_i$ is a geometric RV with parameter $p_i$, and
$$\mathrm{E}[X_i] = \frac{1}{p_i} = \frac{n}{n - i + 1}.$$
Using the linearity of expectations, we have that
$$\mathrm{E}[X] = \mathrm{E}\Big[\sum_{i=1}^{n} X_i\Big] = \sum_{i=1}^{n} \frac{n}{n - i + 1} = n \sum_{i=1}^{n} \frac{1}{i}.$$

The summation $H(n) = \sum_{i=1}^{n} 1/i$ is known as the harmonic number.

Lemma 2.10: The harmonic number $H(n) = \sum_{i=1}^{n} 1/i$ satisfies $H(n) = \ln n + \Theta(1)$.

Thus, for the coupon collector's problem, the expected number of random coupons required to obtain all $n$ coupons is $n \ln n + \Theta(n)$.
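A simulation sketch of the coupon collector's problem, comparing the empirical mean against $n H(n) \approx n \ln n$ (the parameter values are illustrative choices of ours):

```python
import math
import random

def boxes_until_complete(n):
    seen, boxes = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))   # each box holds a uniform random coupon
        boxes += 1
    return boxes

n, runs = 50, 2000
mean = sum(boxes_until_complete(n) for _ in range(runs)) / runs
h_n = sum(1 / i for i in range(1, n + 1))
print(mean, n * h_n, n * math.log(n))   # empirical mean, n*H(n) ~ 225, n ln n ~ 196
```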
