The Island Problem Revisited

Halvor Mehlum
Department of Economics, University of Oslo, Norway
halvormehlum@econ.uio.no

March 10

While carrying out this research, the author has been associated with the ESOP centre at the Department of Economics, University of Oslo. ESOP is supported by The Research Council of Norway. The author is grateful to the referees and the editors for stimulating comments and suggestions. The author also wishes to thank David Balding, Philip Dawid, Jo Thori Lind, Haakon Riekeles, Tore Schweder, and participants at the Joseph Bell Workshop in Edinburgh.

Abstract

I revisit the so-called Island Problem of forensic statistics. The problem is how to properly update the probability of guilt when a suspect is found who has the same characteristics as the culprit. In particular, how should the search protocol be accounted for? I present the established results of the literature and extend them by considering the selection effect resulting from a protocol where only cases in which there is a suspect reach the court. I find that the updated probability of guilt is shifted when the selection effect is properly accounted for. Which way the shift goes depends on the exact distribution of all potential characteristics in the population. The shift is only marginal in numerical examples that have any resemblance to real-world forensic cases. The Island Problem illustrates the general point that the exact protocol by which data are generated is an essential part of the information set that should be used when analyzing non-experimental data.

KEY WORDS: Bayesian method; Forensic statistics; Collins case.

1 INTRODUCTION

In 1968, a couple in California with certain characteristics committed a robbery. The Collins couple, who had the same characteristics as the robbers, were arrested and put on trial. In court, the prosecutor claimed that these particular characteristics would appear in a randomly chosen couple with probability 1/12,000,000 and that the couple thus had to be guilty. The suspected couple was convicted, but the California Supreme Court later overturned the conviction (The Supreme Court of California 1969). The Collins case, or a stylized version of it called the Island Problem (Eggleston 1983), has a central place in the forensic statistics literature. The problem can be formulated as a balls-in-an-urn problem: An urn contains a known number of balls of different colors. A ball is drawn, and its color, which happens to be red, is observed. The ball then goes back into the urn. Now a search for a red ball is carried out, and a red ball is indeed found. What is the probability that the first and the second red balls are in fact the same ball? That is, what is the probability that the suspected ball really is the guilty ball?

In the literature, one central theme is how to properly extract information from the circumstances of the case and the search protocol used when finding the suspect. Central contributions start with Fairley and Mosteller (1974), Yellin (1979), and Eggleston (1983) and continue with Lindley (1987), Dawid (1994), Balding and Donnelly (1995), and Dawid and Mortera (1996). These authors show how changes in the assumptions regarding the search procedure change the results. In addition to being a central starting point for thinking about forensic statistics, the Island Problem is a good example of how immediate intuition about probabilities may be wrong. The variations of the Island Problem provide several illustrations of how to appropriately take account of the often subtle information in non-experimental situations. For all students of statistics, the Island Problem can serve as a stimulating and challenging puzzle. In particular, it can be used in lectures relating to statistics and society, where students are invited to contemplate the statistician's role as an expert scientist trying to extract information from non-experimental situations.

In this note I introduce a possible issue relating to selection in the Island Problem. My question is as follows: given that the search procedure does not always produce a suspect, and assuming that for each case with a successful search there are several cases with unsuccessful searches, what is then the appropriate analysis of the problem? In contrast to the other authors, I include in the analysis the fact that only a selected subset of criminal cases reaches the court. As in the other contributions, I consider a highly stylized and abstract version of the case. I will explain the argument using the helpful balls-in-an-urn analogy. I will first present two central solutions from the literature. I will then show how the analysis

should be modified when taking into account the fact that we are faced with a selected case.

2 URN MODELS

Consider an urn containing N balls. The N balls in the urn may be a sample from an underlying population with M different colors. Now, a robbery takes place: one ball is drawn at random from the urn. All balls are equally likely to be drawn. The color of the ball is observed as being red, and the ball is put back in. Let this part of the evidence be denoted $F_1$ = "the first ball drawn at random is red". The question now is: How should $F_1$ affect our belief regarding the number of red balls in the urn?

When the ball we draw at random is red, we will adjust our belief about the likely number of red balls in the urn. The only knowledge we have before picking a red ball is that the number of red balls, X, is distributed bin(N, p) with N and p known. Given the evidence $F_1$, the distribution of X may be updated using Bayes's formula:

$$P(X=n \mid F_1) = \frac{P(X=n)\, P(F_1 \mid X=n)}{P(F_1)} = \frac{\binom{N}{n} p^n (1-p)^{N-n}\,(n/N)}{p} \qquad (1)$$

By rearranging,

$$P(X=n \mid F_1) = \binom{N-1}{n-1} p^{n-1} (1-p)^{(N-1)-(n-1)} \qquad (2)$$

which is the unconditional probability of there being n − 1 red balls among the N − 1 balls not observed.
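As a quick numerical check (my own sketch, not part of the original derivation), the snippet below compares the Bayes update (1) with the closed form (2); the values of N and p are illustrative, borrowed from the Large island of Section 2.4.

```python
from math import comb, isclose

def posterior_via_bayes(n, N, p):
    """P(X = n | F1) from (1): binomial prior times likelihood n/N, divided by P(F1) = p."""
    prior = comb(N, n) * p**n * (1 - p)**(N - n)
    return prior * (n / N) / p

def posterior_closed_form(n, N, p):
    """P(X = n | F1) from (2): probability of n - 1 successes in N - 1 trials."""
    return comb(N - 1, n - 1) * p**(n - 1) * (1 - p)**((N - 1) - (n - 1))

N, p = 100, 0.004  # illustrative values
for n in range(1, N + 1):
    assert isclose(posterior_via_bayes(n, N, p), posterior_closed_form(n, N, p))
```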

Now, a search for a red ball is conducted. Building on Balding and Donnelly (1995) and Dawid and Mortera (1996), I will first discuss two possible search procedures: search until success and random search.

2.1 Search until Success

Yellin (1979) was the first to consider a procedure consisting of a search through the urn until a red ball, the suspect, is found. If no record is kept of the balls screened, no additional evidence is gained in the process of finding the second red ball. This search protocol always produces a suspect, and the question of guilt G is the question of whether the second ball is identical to the first ball. The larger the number of red balls, the lower is the probability of guilt. For a given number of red balls, X = n, the probability of guilt is 1/n. As X is unknown, the probability of guilt is $P(G \mid F_1) = E(X^{-1} \mid F_1)$. Using formula (2), the probability of guilt is

$$P(G \mid F_1) = \sum_{n=1}^{N} \frac{1}{n} \binom{N-1}{n-1} p^{n-1} (1-p)^{N-n} = \frac{1-(1-p)^N}{Np} = \frac{1}{E(X \mid X \geq 1)}$$

Hence, the probability of guilt $P(G \mid F_1)$ has a simple solution, which happens to be identical to the reciprocal of $E(X \mid X \geq 1)$. This solution is different from the solution following from the California Supreme Court's erroneous argument. The California Supreme Court's interpretation of the evidence can be formulated as $F_0$ = "there is at least one red ball". In that case the probability of guilt is $P(G \mid F_0) = E(X^{-1} \mid X \geq 1)$. Given this particular relationship between the expressions for $P(G \mid F_1)$ and $P(G \mid F_0)$, it follows from Jensen's inequality that $P(G \mid F_1) < P(G \mid F_0)$. Hence, compared to Yellin (1979), the California Supreme Court overstated the probability of guilt.
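The following sketch (again my illustration, not code from the article) checks the closed form against direct summation and evaluates the court's version for comparison.

```python
from math import comb

def pmf_binomial(k, N, p):
    return comb(N, k) * p**k * (1 - p)**(N - k)

def guilt_search_until_success(N, p):
    """P(G | F1) = E(1/X | F1), summed against the posterior (2)."""
    return sum((1 / n) * pmf_binomial(n - 1, N - 1, p) for n in range(1, N + 1))

def guilt_court(N, p):
    """P(G | F0) = E(1/X | X >= 1) under the unadjusted binomial prior."""
    p_at_least_one = 1 - (1 - p)**N
    return sum((1 / n) * pmf_binomial(n, N, p) for n in range(1, N + 1)) / p_at_least_one

N, p = 100, 0.004
print(guilt_search_until_success(N, p))   # equals the closed form below
print((1 - (1 - p)**N) / (N * p))         # (1 - (1-p)^N) / (Np)
print(guilt_court(N, p))                  # larger: the court overstates guilt
```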

2.2 Random Search

By design, the above search always produces a suspect and provides no additional information about the distribution of X. Another alternative, explored by Dawid (1994), is the random search, where only one ball is picked at random. If this ball is not red, the question of guilt is straight away answered negatively. If, however, the randomly selected ball is indeed red, the additional evidence is $F_2$ = "the second ball drawn at random is red", and the distribution of X should be updated accordingly.

Using Bayes's formula we get:

$$P(X=n \mid F_1 \cap F_2) = \frac{P(X=n \mid F_1)\, P(F_2 \mid X=n \cap F_1)}{P(F_2 \mid F_1)} \qquad (3)$$

When conditioning on the number of red balls, X = n, the events $F_1$ = "the first ball drawn at random is red" and $F_2$ = "the second ball drawn at random is red" are independent (the draws are done with replacement); it therefore follows that the numerator can be written as

$$P(F_2 \mid X=n \cap F_1) = P(F_2 \mid X=n) = \frac{n}{N}$$

The denominator in (3) can be written as

$$P(F_2 \mid F_1) = \sum_{n=1}^{N} \frac{n}{N}\, P(X=n \mid F_1) = \frac{1}{N}\sum_{n=1}^{N} n\, P(X=n \mid F_1) = \frac{E(X \mid F_1)}{N}$$

where it follows from (2) that $E(X \mid F_1) = 1 + (N-1)p$. Therefore (3) can be written as

$$P(X=n \mid F_1 \cap F_2) = \frac{n}{E(X \mid F_1)}\, P(X=n \mid F_1) \qquad (4)$$

The probability of guilt is now

$$P(G \mid F_1 \cap F_2) = \sum_{n=1}^{N} \frac{1}{n}\,\frac{n}{E(X \mid F_1)}\, P(X=n \mid F_1) = \frac{1}{E(X \mid F_1)} \qquad (5)$$
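A numerical check of (4)-(5), again my own sketch: the posterior after a random-search hit is size-biased, and the implied probability of guilt collapses to $1/E(X \mid F_1)$.

```python
from math import comb

def guilt_random_search(N, p):
    """P(G | F1 ∩ F2): explicit summation over the size-biased posterior (4)."""
    post_f1 = [comb(N - 1, n - 1) * p**(n - 1) * (1 - p)**(N - n) for n in range(1, N + 1)]
    mean_f1 = sum(n * q for n, q in zip(range(1, N + 1), post_f1))           # E(X | F1)
    post_f1f2 = [n * q / mean_f1 for n, q in zip(range(1, N + 1), post_f1)]  # eq. (4)
    return sum(q / n for n, q in zip(range(1, N + 1), post_f1f2))            # E(1/X)

N, p = 100, 0.004
print(guilt_random_search(N, p))   # 0.716...
print(1 / (1 + (N - 1) * p))       # identical, by (5)
```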

As $X^{-1}$ is a convex transformation for X ≥ 1, it follows from Jensen's inequality that $1/E(X \mid F_1) < E(X^{-1} \mid F_1)$. From above we know that $P(G \mid F_1) = E(X^{-1} \mid F_1)$; hence $P(G \mid F_1 \cap F_2) < P(G \mid F_1)$. The probability of guilt of a suspect is thus lower in the random search than in search until success. The intuition is simple: when two balls drawn at random happen to be red, it increases the likelihood of a large number of red balls more than if only one ball drawn at random is red.

This updating assumes that we are faced with one experiment, where two balls drawn with replacement happen to be red. Thus only a fraction of first draws (robberies) will lead to a case. With the random search protocol there will therefore be a subtle selection effect across cases. It is the analysis of this selection effect that is my own contribution.

2.3 Random Search with Selection Effect

I will in the following show how the analysis changes when taking account of the selection effect. As before, consider an urn containing N balls drawn from an underlying population of M colors. The urn is characterized by the joint frequency

of balls by color, $X_i$ (i = 1, …, M). Now, one of a never-ending series of draws takes place: one ball is drawn and put back in. Then a second ball is drawn. If the balls are not of the same color, the case is closed and the next potential case occurs. Assume that after an unknown number of draws (i.e., potential cases), drawn from the same urn, there is a case where the first and the second ball are of the same color, red. When this case arrives the question is, as before: Are the balls identical? Let the evidence be denoted $F_3$ = "the first time that both balls are of the same color, they are red". Let $X_i$, i ∈ [1, M], denote the number of balls of color i, where i = 1 is the color red and where $\sum_{i=1}^{M} X_i = N$. Then Bayes's formula gives

$$P(X_1=n \mid F_3) = \frac{P(X_1=n)\, P(F_3 \mid X_1=n)}{P(F_3)} \qquad (6)$$

The denominator $P(F_3)$ is the unconditional probability that the first instance of two consecutive draws of balls of the same color involves two red balls. In an urn where $X_1, \ldots, X_M$ denotes the number of balls of each of the M colors (and where $X_1$ is the number of red balls), the probability of drawing two balls of color i is $(X_i/N)^2$. Given $X_1, \ldots, X_M$, the probability that the first instance of two consecutive draws of balls of the same color involves two red balls is thus

$$\frac{(X_1/N)^2}{\sum_{i=1}^{M} (X_i/N)^2} = \frac{X_1^2}{\sum_{i=1}^{M} X_i^2} \qquad (7)$$
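Equation (7) is easy to check by simulation. The sketch below (my illustration, with a hypothetical urn composition) repeats the draw-pairs-until-a-match experiment and compares the share of red matches with $X_1^2 / \sum_i X_i^2$.

```python
import random

def first_match_is_red(balls, rng):
    """Draw pairs with replacement until both share a color; True if that color is red (0)."""
    while True:
        a, b = rng.choice(balls), rng.choice(balls)
        if a == b:
            return a == 0

counts = [2, 3, 1]  # hypothetical urn: 2 red, 3 green, 1 blue (N = 6)
balls = [color for color, k in enumerate(counts) for _ in range(k)]
rng = random.Random(0)
trials = 100_000
freq = sum(first_match_is_red(balls, rng) for _ in range(trials)) / trials
print(freq, counts[0]**2 / sum(k * k for k in counts))  # simulated vs. (7): 4/14
```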

The expectation of (7) over all combinations of $X_1, \ldots, X_M$ determines the denominator in (6), $P(F_3)$. Hence

$$P(F_3) = E\!\left( X_1^2 \Big/ \sum_{i=1}^{M} X_i^2 \right)$$

The numerator in (6), $P(F_3 \mid X_1=n)$, follows when taking the expectation of (7) conditional on the number of red balls, $X_1 = n$:

$$P(F_3 \mid X_1=n) = E\!\left( \frac{X_1^2}{\sum_{i=1}^{M} X_i^2} \,\middle|\, X_1=n \right) = E\!\left( \frac{n^2}{\sum_{i=1}^{M} X_i^2} \,\middle|\, X_1=n \right)$$

It therefore follows that (6) can be written as

$$P(X_1=n \mid F_3) = P(X_1=n)\, \frac{E\!\left( n^2 / \sum_{i=1}^{M} X_i^2 \,\middle|\, X_1=n \right)}{E\!\left( X_1^2 / \sum_{i=1}^{M} X_i^2 \right)} \qquad (8)$$

This expression can generally not be simplified further. In order to get an idea of how updating based on $F_3$ compares to updating based on $F_1 \cap F_2$, I will first look at some numerical illustrations and then at some approximations for large N.

2.4 Numerical Illustrations

Assume that the underlying distribution amounts to a multinomial distribution where $p_i$ is the probability of color i. I will consider two problems:

1. The Tiny island, where N = 5 and P(red ball) = $p_1$ = 1/6.

2. The Large island, where N = 100 and P(red ball) = $p_1$ = 0.004.

The parameters of the Large island correspond exactly to Eggleston's (1983) Island Problem. In each of these problems the marginal distribution for red balls is fixed by bin(N, p = $p_1$). For each of these problems I look at three different sets of assumptions regarding the probabilities of colors other than red:

i) Red and green, where M = 2 and $p_2 = 1 - p_1$.

ii) Red and palette, where M is enormous and $p_2 = \cdots = p_M = (1 - p_1)/(M - 1) \approx 0$.

iii) All equal, where $p_1 = p_2 = \cdots = p_M = 1/M$ (M = 6 in the Tiny island and M = 250 in the Large island).

The calculations of the results for i) and ii) are straightforward. The calculation for iii) is more demanding. By construction, the combination of the Tiny island and iii) is identical to a throw of five dice. In gambling this is known under the name poker dice, a variant of yahtzee. Poker dice is analyzed in the book on gambling by Epstein (1967, p. 154), and the essential probabilities can be taken from there. For the combination of the Large island and iii) some CPU time is needed, and the complete calculations involve integrating over all 190 million partitions of the number 100. I approximate by integrating over the 600,000 partitions with the highest probability; these account for most of the probability mass.
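For the Tiny island the exact calculation is small enough to write out directly. The sketch below (my own illustration, not code from the article) enumerates the possible urn compositions under each configuration and evaluates $P(G \mid F_3) = E(X_1^{-1} \mid F_3)$ by weighting each composition with the selection probability (7); it reproduces the Tiny island column of Table 1 below.

```python
from itertools import product
from math import comb, factorial

def multinomial_pmf(counts, probs):
    coef = factorial(sum(counts))
    for k in counts:
        coef //= factorial(k)
    pmf = float(coef)
    for k, q in zip(counts, probs):
        pmf *= q**k
    return pmf

def guilt_f3(compositions):
    """P(G | F3) = E(1/X1 | F3): weight each urn by prior times (7), then average 1/X1."""
    num = den = 0.0
    for counts, prob in compositions:
        x1 = counts[0]
        if x1 == 0:
            continue  # urns without red balls never produce a red case
        w = prob * x1**2 / sum(k * k for k in counts)
        num += w / x1
        den += w
    return num / den

N, p1, M = 5, 1 / 6, 6
binom = [comb(N, n) * p1**n * (1 - p1)**(N - n) for n in range(N + 1)]

red_green = [((n, N - n), binom[n]) for n in range(N + 1)]            # i)
palette = [((n,) + (1,) * (N - n), binom[n]) for n in range(N + 1)]   # ii) limit: others unique
all_equal = [(c, multinomial_pmf(c, [1 / M] * M))                     # iii) "poker dice"
             for c in product(range(N + 1), repeat=M) if sum(c) == N]

for name, comps in (("i", red_green), ("ii", palette), ("iii", all_equal)):
    print(name, round(guilt_f3(comps), 3))   # 0.574, 0.674, 0.643
```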

The results for the probability of guilt are summarized in Table 1. The row at the bottom includes, as a reference, the result of the random search, $P(G \mid F_1 \cap F_2)$, from (5).

Table 1: Probability of guilt, $P(G \mid F_3)$, in the iterated case and in the reference

                                             Tiny island   Large island
  i)   Red and green                            0.574         0.712
  ii)  Red and palette                          0.674         0.722
  iii) All equal                                0.643         0.719
  Reference: random search, P(G | F1 ∩ F2)      0.600         0.716

The results show that the probability of guilt $P(G \mid F_3) = E(X_1^{-1} \mid F_3)$ indeed varies with the distributional assumption regarding the colors other than red in the underlying population. The probability of guilt may be both above and below the reference case of random search, $P(G \mid F_1 \cap F_2)$. In order to understand the logic behind the result for a particular parameter configuration, one needs to consider all the possible combinations of colors that the configuration can generate. The essential insight is gained by looking at the updated distribution of $X_1$. For the Tiny island the updated distribution $P(X_1=n \mid F_3)$ is given in Figure 1.

[Figure 1: Tiny island, updating the distribution. Shown: the prior $P(X_1=n)$ and the updated densities for configurations i), ii), and iii), together with the reference.]

The lower density is the prior $P(X_1=n)$, which is common for all the parameter configurations. Consider the updated density in configuration i) (Red and green). This density is adjusted most down for $X_1 = 1$, while it is adjusted most up for $X_1 = 3$. These adjustments follow from the Bayesian updating given by (6),

$$P(X_1=n \mid F_3) = \frac{P(X_1=n)\, P(F_3 \mid X_1=n)}{P(F_3)} \qquad (6)$$

The likelihood of $F_3$ = "the first time that both balls are of the same color, they are red" in the numerator is low when there is one red ball ($X_1 = 1$) and four green balls, while the likelihood is much higher when there are three red balls ($X_1 = 3$) and two green. The updated density for ii) (Red and palette) deviates less from the prior. The reason is that the likelihood of $F_3$ is quite high even when $X_1 = 1$, when all other

colors are unique. The updated distribution in case i) is the one that moves most to the right, giving more probability mass to large values of $X_1$; the probability of guilt $P(G \mid F_3) = E(X_1^{-1} \mid F_3)$ is therefore the lowest in case i). The updated distribution in case ii) is the one that moves least to the right, giving the least probability mass to large values of $X_1$. Therefore the probability of guilt in case i) is the lowest, while the probability of guilt in case ii) is the highest.

2.5 Known Number of Iterations

In the above analysis it was assumed that the number of iterations until a case was found, with a second ball of the same color as the first ball, was unknown. In principle, the information regarding the number of iterations could be available. Let R denote the number of iterations until a case was found. As before, let the first case involve two red balls. Given a combination of colors $X^A$ (where the number of balls of color i is $X^A_i$, i ∈ [1, M], and $\sum_i X^A_i = N$), the likelihood of experiencing $F_4$ = [R − 1 potential cases and then a red case] is

$$P(F_4 \mid X^A) = \left( 1 - \sum_{i=1}^{M} \left( X^A_i/N \right)^2 \right)^{R-1} \left( X^A_1/N \right)^2 \qquad (9)$$

Using Bayes's formula, this likelihood can be used to update the distribution over all possible $X^A$'s in a given context. The updated probability distribution over $X^A$'s can in turn be marginalized to obtain an updated probability distribution over the number of red balls, $X_1$, which in turn can be used to calculate the probability of guilt. I have done this exercise for all of the three Tiny islands.
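A sketch of this exercise for the Red and green configuration (again my own illustration): the posterior over compositions is tilted by the likelihood (9), and the probability of guilt is read off as the posterior mean of $1/X_1$.

```python
from math import comb

def guilt_given_R(compositions, N, R):
    """P(G | F4) = E(1/X1) under the posterior after R - 1 failed cases and a red case, eq. (9)."""
    num = den = 0.0
    for counts, prob in compositions:
        x1 = counts[0]
        if x1 == 0:
            continue
        p_same = sum(k * k for k in counts) / N**2       # chance a draw-pair matches in color
        w = prob * (1 - p_same)**(R - 1) * (x1 / N)**2   # prior times likelihood (9)
        num += w / x1
        den += w
    return num / den

N, p1 = 5, 1 / 6
red_green = [((n, N - n), comb(N, n) * p1**n * (1 - p1)**(N - n)) for n in range(N + 1)]
for R in (1, 2, 5, 20, 100):
    print(R, round(guilt_given_R(red_green, N, R), 3))
# R = 1 gives 0.600; as R grows the value settles near 0.45, between 1/3 and 1/2
```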

The updated probability of guilt is plotted in Figure 2.

[Figure 2: Probability of guilt as a function of the number of iterations R, for configurations i) Red and green, ii) Red and palette, and iii) All equal.]

Three distinct features are apparent. First, when the number of iterations is exactly equal to one, the probability of guilt is in all cases equal to 0.600. This is no coincidence. Looking at (9), it is clear that the distributions of colors other than red do not matter when R = 1. Hence, N and $p_1$ are the only facts that matter, and in all three cases N = 5 and $p_1$ = 1/6. As seen above, 0.600 is also the probability of guilt in the reference case of random search. The reference is exactly the experiment described earlier: we are faced with one experiment, where two balls drawn with replacement by chance happen to be red. Another

way of putting it is that the number of iterations before two red balls are drawn happens to be R = 1. This illustrates again the difference between the reference experiment and the experiment where it is assumed that the case is a selected case after a number of iterations.

Second, both for Red and palette and for All equal, the probability of guilt increases and approaches unity as R increases. The reason for this asymptotic behavior is that in both configurations the Bayesian updating, when seeing a long range of iterations, gives large probability to $X^A$'s such that there is one ball of each color. Such $X^A$'s are possible both in Red and palette and in All equal. One ball of each color implies that there is only one red ball, and hence sure guilt.

Third, the probability of guilt with Red and green declines and settles at a level of around 0.45. As there are only two colors in this configuration, having just one ball of each color is not possible. As R increases, only two alternatives stand out as likely: a) $X_1 = 2$ and $X_2 = 3$, or b) $X_1 = 3$ and $X_2 = 2$. Hence, in this configuration the probability of guilt settles at a level between 1/3 and 1/2.

3 DISCUSSION

The analysis of the Island Problem illustrates the general point that the interpretation of statistical data must take into account exactly how the data were collected.

What is the protocol, and where does it end? In the solution to the Island Problem, Yellin (1979) only considered the part of the evidence relating to the fact that the guilty party happened to have certain characteristics. Later, Dawid (1994) brought into the picture the fact that the search had produced a suspect with the same characteristics. Both Yellin and Dawid start from the premise that the case is given. My argument is that the case may itself be selected in a stochastic process: only cases where there is a suspect with characteristics identical to the culprit's are brought before the court.

The discussion started from the Collins case, a classic case in forensic statistics. In addition to illustrating core issues of forensic statistics, like the prosecutor's fallacy, the Collins case also illustrates some important principles in Bayesian learning and Bayesian reasoning. The present discussion is not meant as a substantial contribution to forensic statistics, but rather as an elaboration on a fascinating stylized case of Bayesian reasoning. In fact, the salience of the selection effect would only be marginal in stylized forensic problems of realistic size. In Eggleston's (1983) Island Problem, for example, as captured by the Large island in Table 1 above, N is only 100. Already at that modest population size the difference between the different parameter configurations is very small. If N increases further, the selection effect would not matter at all. To put it loosely, when the number of balls is large and when there is a large number of colors, each with small probability, then $\sum_i X_i^2$ can be treated as

19 large number of colors, each with small probabilities, then X i 2 can be treated as a constant independent of X 1 for small X 1 Then, it follows that (8), for small n, can be simplified as follows P (X 1 = n F 3 ) P (X 1 = n) n 2 E (X 2 1) = P (X 1 = n F 1 F 2 ) (10) The last equality follows as P (X 1 = n F 1 F 2 ) = P (X 1 = n) P ((F 1 F 2 ) X 1 = n) P (F 1 F 2 ) = P (X 1 = n) n 2 /N 2 E (X 2 1/N 2 ) The approximation (10) is accurate when red is quite rare, and when there is a large number of other features in the population That the feature red is rare is an implicit condition for the problem to be relevant in the context of evidence in a court case References Balding; D J, and Donnelly, P (1995), Inference in Forensic Identification, Journal of the Royal Statistical Society Series A (Statistics in Society), 158 (1), Dawid, A P (1994), The Island Problem: Coherent Use of Identification Evi- 17

References

Balding, D. J., and Donnelly, P. (1995), "Inference in Forensic Identification," Journal of the Royal Statistical Society, Series A (Statistics in Society), 158(1).

Dawid, A. P. (1994), "The Island Problem: Coherent Use of Identification Evidence," in Aspects of Uncertainty: A Tribute to D. V. Lindley, New York: Wiley.

Dawid, A. P., and Mortera, J. (1996), "Coherent Analysis of Forensic Identification Evidence," Journal of the Royal Statistical Society, Series B (Methodological), 58(2).

Eggleston, R. (1983), Evidence, Proof and Probability (2nd ed.), London: Weidenfeld and Nicolson.

Epstein, R. A. (1967), The Theory of Gambling and Statistical Logic, New York: Academic Press.

Fairley, W. B., and Mosteller, F. (1974), "A Conversation About Collins," The University of Chicago Law Review, 41(2).

Lindley, D. V. (1987), "The Probability Approach to the Treatment of Uncertainty in Artificial Intelligence and Expert Systems," Statistical Science, 2.

The Supreme Court of California (1969), People v. Collins, reprinted in Fairley, W. B., and Mosteller, F. (eds.) (1977), Statistics and Public Policy, Reading, Mass.

Yellin, J. (1979), "Review of Evidence, Proof and Probability (by R. Eggleston)," Journal of Economic Literature, 17.
