
1.1 Systems & Probability

The topics in this section concern the first course objective.

A system is one of the most fundamental concepts and one of the most useful and powerful tools in STEM (science, technology, engineering and mathematics, including applied mathematics and statistics). A system is something that has input and output. Of course, the input of a system is what goes into the system and the output of a system is what comes out of the system.

[Diagram: Input goes into the System, and Output comes out of the System]

There are a lot of systems in this universe. For instance, a human body is a system. Food can be its input. Then, can you guess what its output would be? A human brain is a system too. Its input (stimulus) comes from the senses and its output is an instruction to the body. A production process is also a system. It takes in (inputs) raw materials and produces (outputs) finished products.

The entire universe consists of systems. They form a cohesive unit (the universe) by interacting through their inputs and outputs. For instance, the output of one system is the input of another system. This is how the effect of what you do in (to) one part (one system) of this world shows up in the other parts (other systems) of this world.

A large system might contain smaller systems (subsystems). However, we focus on the smallest system that contains, or is relevant to, our problem of interest. The smaller and simpler the system is, the easier it is to handle the system and to solve the problem. All the systems dealt with in this course are relatively simple, and most of them have no subsystems.

A (mathematical) function is a system whose input consists of elements of the domain and whose output consists of elements of the range. This is the reason why we can use functions to model real-world systems.

Basically, there are two kinds of systems. One is the deterministic system, which is defined as follows.

A deterministic system is a system whose output can be completely determined by its input.

The system's output is completely determined by its input; thus the name deterministic system. A mathematical function, f(x), is an example of a deterministic system. The functional value, which is the output, is determined by the input value of x. So, I know exactly what the functional value (output) will be when I put a particular value of x (input) into f(x). That is, I can control the output by controlling the input. For example, if the function is f(x) = 2x + 4 (the system), then the input x = 3 determines the output (functional value) to be f(3) = 10. Again, the set of all possible input values is the domain of the function, and the set of all possible output values (which are the functional values and are often referred to as outcomes; for outcomes versus outputs, see the Appendix: Output & Outcome given below) is the range of the function.

[Diagram: the input x = 3 goes into the system f(x) = 2x + 4 and the output f(3) = 10 comes out]

Again, a function, f(x), is a system and, thus, we can use functions to model systems in the real world. One of the important characteristics of a deterministic system is that you always get the same output if you feed the system the same input; that is, a constant input results in a constant output.
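As a quick illustration (a minimal Python sketch of my own, not part of the original text), the deterministic system f(x) = 2x + 4 can be written as an ordinary function; the same input always produces the same output.

    # A deterministic system: the output is completely determined by the input.
    def f(x):
        return 2 * x + 4

    print(f(3))   # 10
    print(f(3))   # feeding the same input again gives the same output: 10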

Note that a constant output of a deterministic system does not imply that its input is constant. For instance, a constant function, f(x) = 5, has a constant output, but its input can be any real number and does not have to be constant at all. Another example is the quadratic function f(x) = x^2. The fact that its output is constant at 4 does not mean its input is constant: the input can be x = 2 or x = -2. However, if its input is constant (say, at x = -2), then its output is constant (at f(-2) = 4).

The other kind of system is the random system. It is defined as follows.

A random system is a system whose output cannot be completely determined by its input.

One immediate consequence of this input-output relation is that a constant input does not result in a constant output. Generally, a random system consists of two parts: the deterministic part and the random part. (If a system has no random part, it is a deterministic system.) The random part of the system causes variation in its output. For instance, a random system r(x) could be r(x) = (2x + 4) + e, where 2x + 4 is the deterministic part and e is the random part. Without the random part e, it would be the deterministic system f(x) = 2x + 4 discussed earlier. By the way, e here is not the irrational constant 2.7182... used as the natural exponential base in mathematics.

A couple of things should be explained now. Variation is defined as follows.

Variation is the difference among multiple objects.

For example, with the objects being numbers, the four numbers 10.5, 9.5, 9.5, 10.5 have variation, while the four numbers 10.0, 10.0, 10.0, 10.0 have no variation. The four numbers 11.5, 8.5, 11.5, 8.5 have more variation than 10.5, 9.5, 9.5, 10.5.
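As a small aside (a Python sketch of my own; the spread max - min used here is just one informal way to put a number on variation, not a definition from this course), the three lists above can be compared like this:

    # Informal check of variation: the spread (max - min) is zero exactly when
    # all the numbers are identical, and larger when the numbers differ more.
    def spread(numbers):
        return max(numbers) - min(numbers)

    print(spread([10.0, 10.0, 10.0, 10.0]))   # 0.0, no variation
    print(spread([10.5, 9.5, 9.5, 10.5]))     # 1.0, some variation
    print(spread([11.5, 8.5, 11.5, 8.5]))     # 3.0, more variation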

Performing a system means putting some input into the system and obtaining its output from the system. Usually, putting one piece of input into the system results in getting one piece of output (one outcome). So, for instance, 10, 10, 10, 10 are four outcomes that were obtained by performing the deterministic system f(x) = 2x + 4 four times with the input x = 3 each time. The four numbers 10.5, 9.5, 9.5, 10.5 are the four outcomes obtained by performing the random system r(x) = 2x + 4 + e four times with x = 3 as its input each time. The random part, e, produces a value of -0.5 or 0.5 randomly each time the system is performed. The first and fourth times the system was performed, the value of e happened to be 0.5, resulting in the outcome 10.5, while it happened to be -0.5, resulting in the outcome 9.5, the second and third times.

[Diagram: a constant input of x = 3 goes into the random system r(x) = 2x + 4 + e, so r(3) = 10 + e; the output shows variation, being 9.5 if e = -0.5 (with 60% chance) and 10.5 if e = 0.5 (with 40% chance)]

If we are about to perform this random system for the fifth time with the input x = 3, we do not know exactly what the output will be, although we know it will be either 9.5 or 10.5, at random (thus the name random system). That is, the input x = 3 does not exactly determine the output. For a random system like this one, a constant input (x = 3) does not result in a constant output (outcomes). That is, you cannot control the output by controlling the input.
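Here is a minimal Python sketch of my own of performing this random system, assuming (as in the diagram above and the next paragraph) that e is -0.5 with a 60% chance and 0.5 with a 40% chance:

    import random

    # The random system r(x) = 2x + 4 + e, where the random part e is
    # -0.5 with probability 0.60 and +0.5 with probability 0.40.
    def r(x):
        e = random.choices([-0.5, 0.5], weights=[0.60, 0.40])[0]
        return (2 * x + 4) + e

    # Perform the system four times with the constant input x = 3.
    # The output varies (9.5 or 10.5) even though the input never changes.
    print([r(3) for _ in range(4)])   # e.g. [10.5, 9.5, 9.5, 10.5]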

Suppose that the probability of the random part e producing 0.5 is, say, 40% and that of producing -0.5 is 60% each time the system is performed. This means that, after performing the system many times with the same input x = 3, about 40% of all the numbers (outcomes) will be 10.5 and 60% of them will be 9.5. You know that the next output (outcome) will be 9.5 with 60% probability (chance) and 10.5 with 40% probability (chance), with the input x = 3. However, you do not know exactly whether it will be 9.5 or 10.5. So, it is a random system.

Here are some important associated items. Probability is the numerical measure of the chance, or likelihood, for something to happen in the future. That is, the probability of an event is a numerical measure of the chance for the event to occur. Probability takes in (inputs) an event and indicates (outputs) the degree of chance for the event to occur with a number between 0 (0%) and 1 (100%), both inclusive. That is, a probability itself is a mathematical function and is a deterministic system. As we know, 0% means the event never occurs, while 100% means the event is certain to occur. For a number between 0 and 1, the greater the number is, the greater the chance for the event to occur. Of course, if the probability of an event is 50%, then the chances are 50-50 that the event occurs.

For example, probability takes in the event of 9.5 coming out of the random system when it is performed and produces the numerical value 60% for the chance of the event occurring. This 60% is the numerical measure of the chance for the event to happen; it indicates the degree of chance for the event to occur with a number.

Generally, when no probability is used to numerically describe the degree of chance for an event, the degree of chance is described with English words. For example: the chance of getting 9.5 is GOOD, instead of stating that the probability of getting 9.5 is 60%; the chance of getting 10.5 is NOT SO GOOD, instead of stating that the probability of getting 10.5 is 40%.

By the way, describing the chance of the event (whether 9.5 or 10.5) with a number is much better than describing it with words. This is a small yet very important example of how significant mathematics is for improving our lives.

A collection of probabilities for all possible outcomes is called a probability distribution or, simply, a distribution in Statistics. That is,

A probability distribution, or distribution, is a collection of probabilities for all possible outcomes.

That is, a probability distribution consists of all possible outcomes and their probabilities. Each probability is the probability of an event in which some outcome happens (when the random system is performed in the future). A probability distribution describes the random behavior (chances) of events (which are composed of outcomes). Note that a probability distribution indicates how probabilities are distributed over the outcomes; hence the name probability distribution. Also, note that the sum of all the probabilities must be 1 (100%).

For instance, the random system r(3) has the probability distribution {{ 0.60 for r(3) = 9.5 and 0.40 for r(3) = 10.5 }}, where {{ }} is used for a collection or multiset while { } is used for a set. A multiset and a set are different: a set can have only distinct elements, while a multiset can have multiple identical elements. A multiset is denoted with [ ] or {{ }}. Unfortunately, { } is often used for both a set and a multiset. However, we use {{ }} for a multiset or a collection and { } for a set in this course. A capital letter is conventionally used to denote a set or a collection.

We use the notation P(A) for the probability of the event A (occurring). P(A) = p means the probability for the event A to occur (in the future) is p. So, this distribution is expressed using the {{ }} notation as

{{ P( {r(3) = 9.5} ) = 0.60, P( {r(3) = 10.5} ) = 0.40 }}.

The events {r(3) = 9.5} and {r(3) = 10.5} cover all the possible outcomes with the input x = 3, and P( {r(3) = 9.5} ) + P( {r(3) = 10.5} ) = 0.60 + 0.40 = 1 (100%). Note that some events, such as {r(3) = 9.5} and {r(3) = 10.5}, consist of only one outcome. In general, however, an event can consist of multiple outcomes. If that is the case, the probability of the event can be found (computed) by adding the probabilities of all the outcomes that constitute the event. You will see this in the Rolling-Die Example given below. The probabilities of the outcomes can be found from the probability distribution. This is the reason why probability distributions are simple but very powerful. A probability distribution has all the information that enables you to find any event (and its probability) that consists of any of the probability distribution's outcomes.

A probability distribution can be expressed in functional notation, and it is then called a probability distribution function. A probability distribution function does the same thing as the probability distribution but is expressed in functional notation. For example,

p = 0.60  for r(3) = 9.5
p = 0.40  for r(3) = 10.5
p = 0     otherwise

for the output (outcomes) of the random system r(3). It is conventional to use p instead of f(x). Also, notice that p = 0 (not undefined, as in mathematics) if r(3) is not 9.5 or 10.5, which is different from a regular function in mathematics. This "p = 0 otherwise" is often omitted and implied with any probability distribution function in Statistics.
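This piecewise rule can be written directly as a function (a Python sketch of my own, including the implied "p = 0 otherwise"):

    # Probability distribution function for the output of the random system r(3):
    # p = 0.60 at 9.5, p = 0.40 at 10.5, and p = 0 for any other value.
    def p(outcome):
        if outcome == 9.5:
            return 0.60
        if outcome == 10.5:
            return 0.40
        return 0.0   # "p = 0 otherwise": zero, not undefined

    print(p(9.5), p(10.5), p(7.0))   # 0.6 0.4 0.0
    print(p(9.5) + p(10.5))          # 1.0, the probabilities sum to 1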

This probability distribution function can be given graphically.

[Figure: a vertical line of height 0.60 above the outcome r(3) = 9.5 and a vertical line of height 0.40 above the outcome r(3) = 10.5]

Note that the outcomes are given on the horizontal real number line at the bottoms of the vertical lines, and the heights (lengths) of the vertical lines represent (indicate) the probabilities of the outcomes.

The set of all possible pieces of output (outcomes) of a random system is called the sample space. That is,

The sample space is the set of all possible outcomes of a random system.

That is, the elements of a sample space are all the possible and distinct outcomes of a random system. The notation for a sample space is S. For instance, for the random system r(3), the sample space is S = {9.5, 10.5}. Elements of a sample space are called sample points. That is,

Sample points are the elements of the sample space.

In other words, sample points are outcomes of a random system. Sample points in S constitute sets (in fact, subsets of S), and these subsets of S are events concerning the outcomes of the random system. Probability takes in a subset of S and produces a numerical measure for the event (represented by that subset) concerning the outcomes of the random system. That is, the domain of a probability is the set of all subsets of S, including the empty set and S itself, and its range consists of numbers in [0, 1].

Note that the domain of a probability distribution function is the set of all possible outcomes of a random system (each outcome can be an event) and the range of a probability distribution function is the set of probabilities of those outcomes.

Generally, an event is mathematically represented by a set. An event concerning the outcomes of a random system is represented by a set, and more specifically by a subset of the sample space S. Of course, P(S) = 1 and P({ }) = 0, where { } is the empty set, a set with no element (no outcome). The empty set represents the null event, the event that never happens. The notation for the empty set is Φ (capital phi); that is, Φ = { }. An event occurs when one of its elements (outcomes) happens. That is, an element of the set is a way (one way) the event can occur. The null event has no element, so it cannot occur. This is the reason why P({ }) = P(Φ) = 0. On the other hand, when a random system is performed, an outcome comes out of the system (if not, then the system was not performed), which is the reason why P(S) = 1. Recall that S is the set of all possible outcomes of the system.

Note that probability does not take in a sample point as a point but as a set consisting of the point itself; it takes in a (sub)set, that is, an event. This is the reason why we write P({9.5}) = 0.60, not P(9.5) = 0.60, where {9.5} = {r(3) = 9.5} = { the outcome of the random system r(3) is 9.5 }. The subset {9.5} of S is the event in which the outcome of the random system r(3) is 9.5.
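A small Python sketch of my own of probability as a deterministic system that takes in a subset of S and puts out a number in [0, 1], using the distribution of r(3):

    # Sample space and distribution for the random system r(3).
    S = {9.5, 10.5}
    dist = {9.5: 0.60, 10.5: 0.40}

    # Probability of an event (a subset of S): add the probabilities of the
    # sample points that constitute the event.
    def P(event):
        return sum(dist[point] for point in event)

    print(P({9.5}))    # 0.6
    print(P(S))        # 1.0, that is, P(S) = 1
    print(P(set()))    # 0, that is, P(empty set) = 0, the null event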

Rolling-Die Example (reference example): A balanced die is rolled, and the number (of dots) landing up is observed. The input is the balanced die to be rolled, the system is rolling the inputted die, the rolled die is the output of the system, and the outcome is the number of dots showing up after the roll. The sample space of this random system is S = {1, 2, 3, 4, 5, 6}, and each of the outcomes has the equal probability of 1/6 since the die is balanced. Also, for instance, the event in which a 3 or a 5 is the number of dots on the rolled die is represented by the set {3, 5}, and its probability is 1/3; that is, P({3, 5}) = 1/3. Do not write P(3, 5) = 1/3. See the diagram below. The probability of {3, 5} is 1/3 because this event consists of two sample points, 3 and 5, whose probability is 1/6 each (since the die is balanced). The probability of an event is computed by adding the probabilities of all the elements (sample points) that constitute the event. In this case, 1/6 + 1/6 = 1/3.

[Figure: six vertical lines of height 1/6 each, above the outcomes 1, 2, 3, 4, 5, 6 on the number line]

Again, the number of sample points (elements) in the set indicates the number of different ways that the event can occur. For instance, the event of the number on the rolled die being 3 or 5 can occur in two different ways: it is a 3 or it is a 5. Check this with the number of elements in {3, 5}; it is 2, two different ways. The number of sample points in a set is the number of ways the event can occur. This explains why the null event cannot occur and why its probability is zero. The null event is represented by the empty set, which has no sample points, so it has no way of happening. Also, adding all the probabilities, which are zero since there are no sample points, results in a probability of zero for the null event.
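The same kind of check works for the Rolling-Die example (again a Python sketch of my own, using exact fractions):

    from fractions import Fraction

    # Balanced die: sample space {1, ..., 6}, each outcome with probability 1/6.
    S = {1, 2, 3, 4, 5, 6}
    dist = {outcome: Fraction(1, 6) for outcome in S}

    # Add the probabilities of all sample points that constitute the event.
    def P(event):
        return sum(dist[point] for point in event)

    print(P({3, 5}))   # 1/3, since 1/6 + 1/6 = 1/3
    print(P(S))        # 1
    print(P(set()))    # 0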

The probability of an outcome (event) represents the importance or significance of the outcome (event). If the probability is almost 1, then the outcome (event) is almost certain to occur, and it is very important. On the other hand, if the probability is very small (say, less than 0.00001), then the outcome (event) virtually does not occur. Thus, you can practically ignore it (it is not significant at all).

However, when an event (outcome) has an extremely small probability but it occurs, it becomes significant. A statistically significant event is an event which has an extremely small probability of happening but actually has happened. It is a strange, rare event, since it rarely happens (because of its extremely small probability of occurring) yet it has actually happened. I would recommend investigating the cause(s) of a statistically significant event, because it might have been caused by some reason other than pure random chance. For instance, if I were a poker room manager and the bad-beat jackpot had been won, I would study the surveillance tape over that poker table very carefully. If I lost a big pot to a royal flush three hands in a row to the same player, I would call the floor (or even security). This is not because I am a Phil Hellmuth, but because the chance of that happening is extremely small (virtually none). I would strongly suspect cheating, and I would really want to see the surveillance tape.

If you are interested in probability and related topics such as sets and set operations, please read the optional appendix, Appendix: Conditional Probabilities (Optional), given at the end of this chapter.

Now, let us get back to sample spaces. You will see some sample spaces consisting of non-numerical sample points. However, if the sample space of interest has numerical sample points, then its random system is called a random variable. That is,

A random variable is a random system whose sample points are numerical.

For instance, the sample space in the Rolling-Die example has numerical sample points: 1, 2, 3, 4, 5, and 6. So, this random system is a random variable. If it is a random variable, it is a random system. However, being a random system does not imply that it is a random variable. For instance, suppose you are interested in which face of a coin lands up after it is tossed. This is a random system with the sample space S = {H, T}, whose sample points are not numerical. This random system is not a random variable.

Note that the same random system can be, or not be, a random variable, depending on how we describe its outcomes. If we assign 0 to H and 1 to T, then the sample space becomes S = {0, 1}, with numerical sample points. This makes the same random system, which is not a random variable with S = {H, T}, a random variable with S = {0, 1}. That is, it is the choice of outcomes, and how we designate them, that makes a random system a random variable or not.

A numerical measure for the chance of an event A can also be given by odds (a to b), as in the odds statement "the odds for the event A to occur are a to b," where a and b are two positive integers without common factors. That is, they must be mutually prime natural numbers. Often this is equivalently stated, for short, as "the odds for the event A are a to b." Sometimes "a to b" is written as a : b, with the colon standing for "to" in an odds statement. However, do not use colons for odds in this course. Generally, if a > b, then P(A) > 50%, and if a < b, then P(A) < 50%. If the odds for the event A (to happen) are 1 to 1, then P(A) = 50%. So, do not write odds using colons or fractions in this course; that is, do not write a : b or a/b for odds of a to b.

The odds statement given above is the same, in terms of the probability, as "the odds against the event A occurring are b to a." For instance, "The odds for A are 3 to 2" is the same as "The odds against A are 2 to 3." That is, these two odds statements give the same P(A), which is 60%. As long as P(A) is the same, you can go back and forth between "for" and "against" odds statements by switching the first number, a, and the second number, b, and replacing "for" with "against."

So, odds statements for events to occur are treated first in this book. When an odds statement against some event is needed, it is obtained from the odds statement for the same event to occur, as shown in the last paragraph.

Odds are often used for betting (gambling). "The odds for A are a to b" means that, if you bet (pay) a dollars on the event A occurring, then you get a + b dollars back altogether if the event A happens. That is, a is your bet on the event A happening and b is your profit if the event A occurs. Of course, if the event A does not occur, then you lose your a dollars. "The odds against A are b to a" means that, if you bet (pay) b dollars on the event A not occurring, then you get b + a dollars back altogether if the event A does not happen. That is, b is your bet on the event A not happening and a is your profit if the event A does not happen. Of course, if the event A occurs, then you lose your b dollars. That is, the first number in the odds is always your bet and the second number is always your profit (the profit if you are correct), whether the odds are given in terms of "for" or "against."

As stated before, the odds for an event and the odds against the same event are equivalent (and should be equivalent) to each other in terms of the probability of the event. However, they are completely opposite to each other in terms of betting. As you can see, betting on an event happening is not equivalent to betting on the event not happening.

For example, a friend of yours gives you 2 to 1 odds for Davey Allison to win Goody's 500 at Bristol this weekend. If you bet (pay) $300 on Davey winning the race and he wins, you get $450 back from your friend, of which $150 is your profit. An example of "against" odds, with a newer driver and a newer race track, is that a friend of yours gives you 1 to 2 against Jimmie Johnson winning this year's Brickyard 400. If you take the bet (that Jimmie does not win the race at the Brickyard) for $300, then your profit is $600 if Jimmie fails to win the race; that is, if you pay $300 up front for the bet, you should get a total of $900 from your friend if Jimmie fails to win.
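A short Python sketch of my own of these two bets, following the book's convention that the first number in the odds is your bet and the second number is your profit:

    # Total amount returned under odds "first number to second number": a bet of
    # `bet` dollars at odds a to b returns the bet plus a profit of bet * b / a
    # if you are correct.
    def total_return(bet, a, b):
        return bet + bet * b / a

    # 2 to 1 for Davey Allison: a $300 bet returns $450 ($150 of profit).
    print(total_return(300, 2, 1))   # 450.0

    # 1 to 2 against Jimmie Johnson: a $300 bet that he does not win
    # returns $900 ($600 of profit) if he fails to win.
    print(total_return(300, 1, 2))   # 900.0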

The mathematical definition of the odds for an event A is the following.

The odds for an event A are two relative weights: one for the chance of A happening (the first natural number) and one for the chance of A not happening (the second natural number).

Thus, if the odds for the event A are a to b, then a corresponds to P(A) and b corresponds to 1 - P(A), where a and b are mutually prime natural numbers and P(A) is the probability of A occurring. There are only two possibilities concerning A: either it occurs or it does not; there is no other possibility. So, P(A) can be computed from a and b as P(A) = a/(a + b), because a is the weight for A occurring and b is the weight for A not occurring. For example, if the odds for A are 3 to 16, then P(A) = 3/(3 + 16) = 3/19.

In turn, the odds for A can be obtained from the ratio of P(A) to 1 - P(A): multiply both P(A) and 1 - P(A) by the denominator of P(A) if P(A) is given as a fraction, or by 10^m, where m is the number of digits after the decimal point, if P(A) is given as a decimal number. Make sure the two resulting numbers do not have a common factor. For example, if P(A) = 3/13, then 1 - P(A) = 10/13, so the odds for A are 3/13 to 10/13; multiplying 3/13 and 10/13 by 13, the odds for A are 3 to 10. As another example, if P(A) = 0.372, then 1 - P(A) = 0.628, so the odds for A are 0.372 to 0.628; P(A) = 0.372 has three digits after the decimal point, so multiplying 0.372 and 0.628 by 10^3 = 1000 gives odds of 372 to 628, which have a common factor of 4. Dividing these two numbers by 4, the odds for A are 93 to 157.

At any rate, the odds for an event A must be computed from P(A). Otherwise, they are not odds, even if they look like odds and are given as odds.
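The reduction to mutually prime numbers can be left to Python's Fraction type (a sketch of my own that reproduces the examples above):

    from fractions import Fraction

    # Odds for A: a corresponds to P(A), b corresponds to 1 - P(A),
    # reduced so that a and b have no common factor.
    def odds_for(p):
        p = Fraction(p).limit_denominator()
        ratio = p / (1 - p)
        return ratio.numerator, ratio.denominator

    print(odds_for(Fraction(3, 13)))   # (3, 10), since 1 - 3/13 = 10/13
    print(odds_for(0.372))             # (93, 157), after removing the common factor 4

    # Going the other way: P(A) = a / (a + b), so odds of 3 to 16 give P(A) = 3/19.
    print(Fraction(3, 3 + 16))         # 3/19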

You can go back and forth between the odds and the probability for the event A by using the odds-probability conversion formula, obtained from the definition given above:

a/b = P(A) / (1 - P(A)).

If the odds for the event A are given, then you can find P(A) by solving this equation for P(A).

Example: Find the probability of the event A if the odds for A are 1 to 4.

1/4 = P(A)/(1 - P(A))
(0.25)(1 - P(A)) = P(A)
0.25 - (0.25)P(A) = P(A)
0.25 = (1.25)P(A)
P(A) = 0.25/1.25
P(A) = 0.20

The probability of A is 20%.

You can find the odds for the event A using the same equation if its probability is given.

Example: State the odds statement for an event A whose probability is 0.40.

a/b = 0.40/(1 - 0.40) = 0.40/0.60 = 4/6 = 2/3.

That is, a = 2 and b = 3. Therefore, the odds for A are 2 to 3.

A game played with events at their odds (computed from the true probabilities of the events) is called a fair game; it is not necessarily a game with an event of 50-50 chance. If you play a fair game with the same bet many times, your long-run winnings or losses are zero. So, for example, with 1 to 1 odds for an event with a 50% probability, if you win only $19 on a bet of $20 when the event happens (and lose your $20 when it does not happen), it is not a fair game. For instance, if you play a game based on an event which occurs with a 40% probability by betting 10 dollars each time, then for this game to be a fair game you should win 15 dollars when the event happens (getting $25 back altogether) and lose the 10 dollars when the event does not happen. If you played this game one million times, you would come out close to even, because it is a fair game.
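A quick Python check of my own of the worked example and of the fair-game claim for the 40% event:

    from fractions import Fraction

    # The odds-probability conversion formula a/b = P(A) / (1 - P(A))
    # solves to P(A) = a / (a + b).
    def prob_from_odds(a, b):
        return Fraction(a, a + b)

    print(prob_from_odds(1, 4))   # 1/5, i.e. P(A) = 0.20 as in the first example
    print(prob_from_odds(2, 3))   # 2/5, i.e. 40%, matching the second example

    # Fair-game check for the 40% event: bet $10, win $15 if it happens,
    # lose the $10 if it does not; the long-run average gain is zero.
    p = Fraction(2, 5)
    print(p * 15 + (1 - p) * (-10))   # 0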

In this course, all games are fair games (that is, all odds concerning events are based on the true probabilities of those events) unless otherwise stated. Many games in casinos are not fair games; they are advantageous for the casinos. Steve Wynn is a very nice guy, but that is not the reason he was able to build Wynn Las Vegas ($3.8 billion, with a "b"). By the way, the odds given in casinos are not odds. You can find more about gambling games in Appendix: Gambling, below.

By the way, instead of memorizing the odds-probability formula, understand it. It is an equation, since it is a formula; in fact, it is also a proportion. The left-hand side is "a to b," as stated in an odds statement. (Note: "a to b" is a/b as a mathematical expression, but do not use a/b for odds in this course.) On the right-hand side, a corresponds to the probability of A happening, so P(A) is in the numerator, and b corresponds to the probability of A not happening, so 1 - P(A) is in the denominator. By the way, can you complete the following definition of the odds against A? The odds against A are ______.

Why are we talking about systems and all this? It is because understanding systems and the items connected with them will help you understand many of the items (concepts, methods, topics and so on) that you will learn in this course. Many items will be explained in terms of the items connected with random systems and random variables in this section. For instance, data come from systems. We perform a system and observe and record its outcome. We typically repeat this several times, which results in data. So, if you understand systems and their related items, then you understand data (which will be discussed in Chapter 2).

Appendix: Output & Outcome

The output and the outcome of a system are related to each other but are generally different. What comes out of the system is the output. An outcome is a possibility for a property of the output. That is, outcomes concern a certain property of the output.

For instance, if the random system is randomly selecting an orange from a crate of oranges (that is, the input is the crate of oranges), then the output is the randomly selected/sampled orange. This output, a randomly selected orange, has many properties, such as its weight, diameter, density, sugar content, vitamin C content and so on. Suppose we are interested in the property of weight (to make some decision); then the outcome is a weight measurement of the selected orange (the output). If the density is the property of interest (to make some other decision), then the outcome is a density measurement of the selected orange. For the system f(x) = 2x + 4, the output of the system and what we are interested in are the same, namely a real number; as a result, the output and the outcome happen to be the same. However, a real number (the output) has several properties, such as its sign, absolute value, divisibility by 2, and so on. If we were interested in the sign of the real number, then the outcome would be + or -, which is different from the real number itself, the system's output.

Appendix: Gambling

Many games offered in casinos for gambling against the casino are not fair games, and you eventually lose your money if you play them many times. For instance, a casino offers odds of 4 to 5 for an event with a 40% probability instead of, as we saw earlier, odds of 2 to 3 for it. If you bet $10 and the event happens, you win only $12.50 instead of the $15 you should win for the game to be a fair game.

By the way, in casinos you see odds given with the first number for your win and the second number for your bet. This happens only in the context of "it pays c to d"; that is, they pay you c dollars for every d dollars you bet if the event happens. For instance, "Blackjack pays 3 to 2" means you win three dollars for every two dollars you bet when a Blackjack happens. The odds given in casinos are actually not odds; they are merely pay-out schedules for gambling. In this course, all odds have the first number for your bet and the second number for your winning unless otherwise stated, and the odds for an event are based on (computed from) the probability of the event.
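A quick Python check of my own of the house edge in the example above (a 40% event paid at 4 to 5 instead of the fair 2 to 3):

    from fractions import Fraction

    p = Fraction(2, 5)    # the event really happens 40% of the time
    bet = 10

    # Fair odds of 2 to 3 pay a $15 profit on a $10 bet; the casino's 4 to 5 pays $12.50.
    fair_profit = bet * Fraction(3, 2)     # 15
    casino_profit = bet * Fraction(5, 4)   # 25/2, i.e. 12.50

    print(p * fair_profit - (1 - p) * bet)     # 0, a fair game
    print(p * casino_profit - (1 - p) * bet)   # -1, you lose $1 per $10 bet on average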

Also, you may want to know how to find out how much you win by betting q dollars when the odds for an event (not casino odds, but odds computed from the probability, as in this course) are a to b and the event happens. Then:

1. Let x be the amount you win by betting q dollars on the event.
2. Set up an equation with x in it: q/x = a/b.
3. Solve the equation for x: x = bq/a.

If you want to know how much you should bet to win r dollars at odds of a to b, can you find the amount? Please find it yourself, using three steps similar to those above. Hint: the correct answer is given in terms of a, b, and r. It is nothing more than an application of the word problems from the prerequisites.

If you want to find out how much you win, or how much you should bet to win a certain amount, given a probability (not the odds), then find the odds by the odds-probability conversion formula and proceed as we did above. Here is an exercise question for you. How much do you have to bet on an event to win $900 if the probability of the event is 35%? The answer is $484.62. Did you get it?

When odds are discussed in this book and in this course, they are computed from the true probabilities. Again, if you see odds in casinos, they are not odds, because they are not computed from the true probabilities of the events. They look like odds, are given as odds and are treated as odds; however, they are not odds but pay-out schedules for gamblers. Unfortunately, many mathematicians and educators mistakenly believe they are odds. How do you know the numbers given in casinos are not odds? Please answer these questions. How do the sports books know the true probability of a certain driver winning the next race at Rockingham? Do they have direct lines to Him? How do casinos make money if they offer fair games? By overcharging at their buffets? I do not think so.

Here is one last note on games in casinos. There are some games which are very close to being fair, or virtually fair. However, casinos usually charge commissions or "juice" on your winnings in such games. For instance, they charge a 5% commission on your winnings in Pai Gow Poker.
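Finally, here is a Python sketch of my own that works through the exercise above (so it does give away the answer): convert the 35% probability to odds, then solve q/r = a/b for the bet q with r = 900.

    from fractions import Fraction

    # Step 1: odds for the event from its probability, P(A) = 35% = 7/20.
    p = Fraction(35, 100)
    ratio = p / (1 - p)          # 7/13, so the odds for the event are 7 to 13
    a, b = ratio.numerator, ratio.denominator

    # Step 2: to win r dollars at odds of a to b, solve q / r = a / b for the bet q.
    r = 900
    q = Fraction(a * r, b)

    print(a, b)        # 7 13
    print(float(q))    # 484.615..., i.e. about $484.62

Copyright by Michael Greenwich, 01/2017.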