STA 250: Statistics. Notes 7. Bayesian Approach to Statistics. Book chapters: 7.2
1 From calibrating a procedure to quantifying uncertainty

We saw that the central idea of classical testing is to provide a rigorous calibration of how a testing rule "reject H0 if T(x) > c" will perform under different scenarios. Once a rule with good performance (small size, large power at alternatives) has been chosen, it is applied to the data in question to make an accept/reject decision on H0. No quantification is provided as to how certain we are about H0 being correct.

The p-value may seem to provide one such calibration, but it is wrong to interpret the p-value as the probability of H0 being true. For observed data x, the p-value may be interpreted as the maximum probability of {T(X) > T(x)} under the null. If the p-value = 0.02, then we are right to say "either H0 is false or something rare (only 2% chance) has happened." Don't be fooled into thinking that this statement means H0 has a 2% chance of being true. In fact, the statement says nothing as to what odds we are willing to assign to its "either" and "or" parts. A similarly structured statement can be made in the following situation. Suppose you flip a coin 3000 times and count 1498 heads. Then it is correct to state "either the coin is biased or something rare (only 1.5% chance) has happened" (because, when X ~ Binomial(3000, 0.5), P(X = 1498) = 0.015). But in this case it is quite insane to suspect that the coin may be biased (certainly not with 985-to-15 odds).

At the other end of the spectrum of statistical practice is the Bayesian approach, where one cares more about quantifying uncertainties about statements made regarding the data and the problem at hand. Below is a simple example you might be familiar with.

Example (Clinical diagnosis). A clinical test for a relatively rare disease (only 1% of the population affected) is tested to have a 99% accuracy rate on patients who have the disease, and a 2% failure rate on patients who do not have it. A patient takes the test and gets a positive.
What are the chances that he has the disease? Let D denote the event that this patient has the disease. Then P(D) = 0.01. Let T+ denote the event that the test results in a positive. Then P(T+ | D) = 0.99 and P(T+ | Dc) = 0.02, where Dc is the complement of D, i.e., the event that the patient does not have this disease. We want to evaluate P(D | T+). This is an instance of inverse probability calculation that we learn as the Bayes theorem in our probability course:

  P(D | T+) = P(D) P(T+ | D) / [ P(D) P(T+ | D) + P(Dc) P(T+ | Dc) ] = 0.0099 / 0.0297 = 1/3.
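This inverse probability computation can be checked with a few lines of code (a Python sketch; the numbers are exactly those of the example above):

```python
# Bayes theorem for the clinical diagnosis example.
p_D = 0.01          # prior: P(D), disease prevalence
p_pos_D = 0.99      # P(T+ | D), accuracy on diseased patients
p_pos_Dc = 0.02     # P(T+ | Dc), failure rate on healthy patients

numerator = p_D * p_pos_D                          # P(D) P(T+ | D)
evidence = numerator + (1 - p_D) * p_pos_Dc        # total probability of T+
posterior = numerator / evidence                   # P(D | T+)
print(round(posterior, 4))  # -> 0.3333, about a one in three chance
```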
So the patient has a one in three chance of having the disease. In other words, his odds of not having the disease are two to one.

2 Inference via inverse probability reasoning

The formal components of the above analysis are:

1. Plausibility scores attached to the two possible states of the unknown disease status (P(D) = 0.01, and consequently, P(Dc) = 0.99).

2. Plausibility scores attached to test outcomes for each state of disease status (P(T+ | D) = 0.99 = 1 - P(T- | D), where T- is the event that the test gives a negative; similarly, P(T+ | Dc) = 0.02 = 1 - P(T- | Dc)).

3. Combining the above two via the Bayes theorem to update the plausibility scores of D and Dc once an outcome of the test has been observed.

Component 2 above is like a probability model X ~ f(x | θ), with θ ∈ Θ an unobservable quantity of interest (disease status of the patient, with Θ = {D, Dc}), while X ∈ S is data to be observed (outcome of the clinical test, S = {T+, T-}). One learns f(x | θ) through experience and laboratory experimentation. Component 1 is the novel feature of the Bayesian approach, where one needs to attach plausibility scores to the possible states of the unobservable quantity of interest before any observation is made. These plausibility scores represent one's prior belief: the belief that precedes the observation process. Component 3 is pure mathematics, and results straight out of the Bayes theorem once components 1 and 2 have been specified and an observation has been made for the observable quantity.

Prior belief is not a singular quantity and cannot be learned. Prior belief combines current understanding of the unknown quantity of interest with what one is willing to assume about it. It may vary from one person to another. It may require more than a single set of plausibility scores to represent one's prior belief.

Example (Clinical diagnosis, contd.). For our clinical diagnosis example, the fact that the patient has been recommended to take the test may persuade us to put P(D) between 1% and 3%.
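The effect of varying the prior over this range can be traced with a short script (a sketch; the function below is the same Bayes-theorem computation as in the example, with the prior left as a parameter):

```python
# Recompute P(D | T+) as the prior P(D) ranges over 1% to 3%.
def posterior_prob(prior, sens=0.99, false_pos=0.02):
    """Posterior probability of disease given a positive test."""
    return prior * sens / (prior * sens + (1 - prior) * false_pos)

for prior in (0.01, 0.02, 0.03):
    # the posterior rises from about 1/3 at P(D) = 1% to about 0.6 at P(D) = 3%
    print(prior, round(posterior_prob(prior), 3))
```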
In this case, P(D | T+) ranges between 33% and 61%.

3 Bayesian analysis: from prior to posterior

In the general setting, a Bayesian analysis of data X combines a statistical model X ~ f(x | θ), x ∈ S, θ ∈ Θ, with a prior pdf/pmf ξ(θ) on Θ. This pdf/pmf represents pre-observation (or a priori) plausibility scores of the parameter values. The function h(x, θ) = f(x | θ) ξ(θ) is simply a pdf/pmf on S × Θ, giving a pre-observation joint description of X and θ. So, once X = x is observed, the post-observation description of θ, conditional on the observed data on X, is simply the conditional pdf/pmf, which can be written in the form of the Bayes rule as:

  ξ(θ | x) = f(x | θ) ξ(θ) / ∫_A f(x | θ') ξ(θ') dθ',   θ ∈ A,  if f(x | θ) ξ(θ) > 0 on some interval A;

  ξ(θ | x) = f(x | θ) ξ(θ) / Σ_{θ' ∈ B} f(x | θ') ξ(θ'),   θ ∈ B,  if f(x | θ) ξ(θ) > 0 on some discrete set B.

We shall call this pdf/pmf ξ(θ | x) the posterior pdf/pmf of θ (on Θ) based on the model X ~ f(x | θ), the prior ξ(θ) and the observation X = x.

4 Likelihood function and posterior pdf/pmf

Note that, post the observation X = x, the relative plausibility of θ = θ1 against θ = θ2 is given by

  ξ(θ1 | x) / ξ(θ2 | x) = [f(x | θ1) ξ(θ1)] / [f(x | θ2) ξ(θ2)] = [L_x(θ1) / L_x(θ2)] × [ξ(θ1) / ξ(θ2)].

Therefore the scores given by the posterior combine the scores given by the prior (pre-observation beliefs) with the scores given by the likelihood function (evidence/support from the observation). Indeed,

  ξ(θ | x) = L_x(θ) ξ(θ) / ∫_Θ L_x(θ') ξ(θ') dθ' = a(θ) ξ(θ) / ∫_Θ a(θ') ξ(θ') dθ'

for any a(θ) proportional to L_x(θ), when ξ(θ) is a pdf, and the same holds (with sums replacing integrals) when ξ(θ) is a pmf.

5 An example: female birth rate in 18th century Paris

The great scholar Pierre-Simon, marquis de Laplace (1749-1827), was interested in learning the rate p ∈ [0, 1] of female birth in Paris in the 18th century. He had access to a large body of birth records in Paris between 1745 and 1770 with n entries. From these he could extract the total number of entries X which recorded a female birth. A reasonable model for X is X ~ Binomial(n, p), p ∈ [0, 1]. For a prior pdf on p, Laplace decided that he had no reason to believe that, for any two p1, p2 ∈ [0, 1], the case p = p1 was more plausible than the case p = p2. In other words, Laplace believed all possible values of p ∈ [0, 1] to be equally plausible. A pdf that ensures this is the Uniform(0, 1) pdf with ξ(p) = 1, p ∈ [0, 1]. For an observation X = x, where x ∈ {0, 1, ..., n}, the likelihood function is L_x(p) = const. p^x (1 - p)^(n-x). Therefore, the posterior pdf ξ(p | x) takes the form:

  ξ(p | x) = p^x (1 - p)^(n-x) / ∫_0^1 q^x (1 - q)^(n-x) dq = p^x (1 - p)^(n-x) / B(x + 1, n - x + 1),   p ∈ [0, 1],

where B(a, b) denotes the beta function, defined for any a > 0, b > 0 as

  B(a, b) = ∫_0^1 q^(a-1) (1 - q)^(b-1) dq = Γ(a) Γ(b) / Γ(a + b),

where Γ(a) = ∫_0^∞ x^(a-1) exp(-x) dx is the gamma function, defined for every a > 0.
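The identity B(a, b) = Γ(a) Γ(b) / Γ(a + b) can be checked numerically (a sketch using only the Python standard library; `math.lgamma` computes log Γ, which keeps the computation stable even for the large shape parameters appearing later):

```python
import math

def beta_fn(a, b):
    """Beta function via B(a, b) = Γ(a)Γ(b)/Γ(a+b), computed on the log scale."""
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def beta_integral(a, b, m=200000):
    """Direct midpoint-rule evaluation of ∫_0^1 q^(a-1) (1-q)^(b-1) dq.
    The midpoint rule avoids the endpoints, where the integrand may blow up."""
    h = 1.0 / m
    return sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
               for i in range(m)) * h

print(beta_fn(3, 5))        # ≈ 0.0095238 (= 2! 4! / 7! = 1/105)
print(beta_integral(3, 5))  # the integral gives the same value
```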
[Figure 1: Posterior pdfs ξ(p | x) for the model X ~ Binomial(n, p) with p ~ Uniform(0, 1). Here n = 20 and the posterior ξ(p | x) is shown for each of x = 0, 1, ..., 20.]

The pdf g(y) = y^(a-1) (1 - y)^(b-1) / B(a, b), y ∈ [0, 1], is called the beta pdf with parameters a, b (both must be positive), and is denoted Beta(a, b). Therefore, ξ(p | x) equals Beta(x + 1, n - x + 1). In R, you can use dbeta(), pbeta(), qbeta() and rbeta() to get, respectively, the density function, the cumulative distribution function, the quantile function and random observations from a beta distribution. Figure 1 shows ξ(p | x) against p for a toy setting with n = 20. Each curve in the figure corresponds to one x in the range 0, 1, 2, ..., n. As x increases from 0 to n, the peak of the ξ(p | x) curve shifts from left to right.

The data Laplace had contained n = 493472 records with x = 241945 female births. This leads to a ξ(p | x) = Beta(241946, 251528) posterior distribution for p. Figure 2 shows this posterior pdf. Below are some of several possible summaries of the posterior.

Laplace was concerned whether the female birth rate was smaller than the commonly held figure of 50%. The plausibility of this event, based on Laplace's model and observed data, is simply

  P(p ≤ 0.5 | X = x) = ∫_0^0.5 ξ(p | x) dp,

which in R can be computed as

  > pbeta(0.5, 241946, 251528) = 1.

Keep in mind that this number is close to 1, but not exactly 1. In fact, it is more useful to look at the converse:

  P(p > 0.5 | X = x) = 1 - P(p ≤ 0.5 | x) = ∫_0.5^1 ξ(p | x) dp,

which equals

  > pbeta(0.5, 241946, 251528, lower.tail = FALSE) = 1.146e-42.

Laplace concluded that he was morally certain that p is smaller than 0.5.

If we are interested in a single number summary of p, we could try to extract a single number summary of the pdf ξ(p | x). An attractive choice is the mean (expectation)
[Figure 2: Posterior pdf ξ(p | x) for the female birth analysis by Laplace. The 50% rate (p = 0.5) is highlighted with a dotted vertical line in the middle. The posterior concentrates at a value lower than this mark. The right panel shows the same, but zooms into the range [0.48, 0.5].]

under this pdf:

  p̂(x) = E[p | X = x] = ∫_0^1 p ξ(p | x) dp.

The mean of a Beta(a, b) pdf equals a / (a + b); therefore the posterior mean of the female birth rate is

  > 241946 / (241946 + 251528) = 0.49

If we are interested in reporting a range of values of p, we can look for an interval such that the pdf ξ(p | x) assigns a small probability outside this interval. This is best represented by the quantiles q_u(x) of ξ(p | x), defined for any u ∈ (0, 1) as the point a such that

  P(p ≤ a | X = x) = ∫_0^a ξ(p | x) dp = u.

In particular, for any α ∈ (0, 1), the interval A(x) = [q_{α/2}(x), q_{1-α/2}(x)] satisfies:

  P(p ∉ A(x) | X = x) = P(p < q_{α/2}(x) | X = x) + P(p > q_{1-α/2}(x) | X = x) = α/2 + α/2 = α.

For α = 5%, the end-points of the interval [q_{α/2}(x), q_{1-α/2}(x)] equal

  > lower.end <- qbeta(0.05 / 2, 241946, 251528) = 0.489
  > upper.end <- qbeta(1 - 0.05 / 2, 241946, 251528) = 0.491

and, indeed, we can say the (posterior) probability of {0.489 ≤ p ≤ 0.491} equals 95%.
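The notes use R's pbeta() and qbeta() for these posterior summaries. The same quantities can be sketched in plain Python for the toy n = 20 setting, using the classical identity that, for integer parameters, the Beta(a, b) CDF equals an upper binomial tail probability (the value x = 7 below is purely illustrative, not from the notes):

```python
from math import comb

def beta_cdf(t, a, b):
    """CDF of Beta(a, b) at t, for integers a, b >= 1, via the identity
    I_t(a, b) = P(Binomial(a+b-1, t) >= a)."""
    m = a + b - 1
    return sum(comb(m, j) * t**j * (1 - t)**(m - j) for j in range(a, m + 1))

def beta_quantile(u, a, b, tol=1e-10):
    """Quantile function by bisection (the CDF is monotone in t)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if beta_cdf(mid, a, b) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Toy setting from the notes: n = 20 with a uniform prior on p, so the
# posterior is Beta(x + 1, n - x + 1).
n, x = 20, 7
a, b = x + 1, n - x + 1
post_mean = a / (a + b)               # posterior mean (x + 1)/(n + 2)
lo95 = beta_quantile(0.025, a, b)     # end-points of the 95% interval
hi95 = beta_quantile(0.975, a, b)
print(round(post_mean, 4))            # -> 0.3636
print(round(lo95, 3), round(hi95, 3))
```

The bisection stands in for qbeta(); for the huge parameters of Laplace's data one would of course use a library routine rather than the exact binomial sum.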
More information1 Gambler s Ruin Problem
Coyright c 2017 by Karl Sigman 1 Gambler s Ruin Problem Let N 2 be an integer and let 1 i N 1. Consider a gambler who starts with an initial fortune of $i and then on each successive gamble either wins
More informationTheory of Maximum Likelihood Estimation. Konstantin Kashin
Gov 2001 Section 5: Theory of Maximum Likelihood Estimation Konstantin Kashin February 28, 2013 Outline Introduction Likelihood Examples of MLE Variance of MLE Asymptotic Properties What is Statistical
More informationCSC321 Lecture 18: Learning Probabilistic Models
CSC321 Lecture 18: Learning Probabilistic Models Roger Grosse Roger Grosse CSC321 Lecture 18: Learning Probabilistic Models 1 / 25 Overview So far in this course: mainly supervised learning Language modeling
More informationAsymptotic Properties of the Markov Chain Model method of finding Markov chains Generators of..
IOSR Journal of Mathematics (IOSR-JM) e-issn: 78-578, -ISSN: 319-765X. Volume 1, Issue 4 Ver. III (Jul. - Aug.016), PP 53-60 www.iosrournals.org Asymtotic Proerties of the Markov Chain Model method of
More informationRANDOM WALKS AND PERCOLATION: AN ANALYSIS OF CURRENT RESEARCH ON MODELING NATURAL PROCESSES
RANDOM WALKS AND PERCOLATION: AN ANALYSIS OF CURRENT RESEARCH ON MODELING NATURAL PROCESSES AARON ZWIEBACH Abstract. In this aer we will analyze research that has been recently done in the field of discrete
More informationInformation collection on a graph
Information collection on a grah Ilya O. Ryzhov Warren Powell October 25, 2009 Abstract We derive a knowledge gradient olicy for an otimal learning roblem on a grah, in which we use sequential measurements
More informationChapter 13 Variable Selection and Model Building
Chater 3 Variable Selection and Model Building The comlete regsion analysis deends on the exlanatory variables ent in the model. It is understood in the regsion analysis that only correct and imortant
More informationSystem Reliability Estimation and Confidence Regions from Subsystem and Full System Tests
009 American Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June 0-, 009 FrB4. System Reliability Estimation and Confidence Regions from Subsystem and Full System Tests James C. Sall Abstract
More informationLinear diophantine equations for discrete tomography
Journal of X-Ray Science and Technology 10 001 59 66 59 IOS Press Linear diohantine euations for discrete tomograhy Yangbo Ye a,gewang b and Jiehua Zhu a a Deartment of Mathematics, The University of Iowa,
More informationUse of Transformations and the Repeated Statement in PROC GLM in SAS Ed Stanek
Use of Transformations and the Reeated Statement in PROC GLM in SAS Ed Stanek Introduction We describe how the Reeated Statement in PROC GLM in SAS transforms the data to rovide tests of hyotheses of interest.
More information(1) Introduction to Bayesian statistics
Spring, 2018 A motivating example Student 1 will write down a number and then flip a coin If the flip is heads, they will honestly tell student 2 if the number is even or odd If the flip is tails, they
More informationInformation collection on a graph
Information collection on a grah Ilya O. Ryzhov Warren Powell February 10, 2010 Abstract We derive a knowledge gradient olicy for an otimal learning roblem on a grah, in which we use sequential measurements
More informationSimplifications to Conservation Equations
Chater 5 Simlifications to Conservation Equations 5.1 Steady Flow If fluid roerties at a oint in a field do not change with time, then they are a function of sace only. They are reresented by: ϕ = ϕq 1,
More informationSlash Distributions and Applications
CHAPTER 2 Slash Distributions and Alications 2.1 Introduction The concet of slash distributions was introduced by Kafadar (1988) as a heavy tailed alternative to the normal distribution. Further literature
More information