Mean and variance
1 Mean and variance

Compute the mean and variance of the distribution with density

> f <- function(x) x^3 * exp(-x)/6

using integrate. Then compute the mean and variance for the distribution with distribution function

> F <- function(x) 1 - x^(-0.3) * exp(-0.4 * (x - 1))

p.1/19
2 Solutions

> f <- function(x) x^3 * exp(-x)/6
> f.mod <- function(x) x * f(x)
> mu <- integrate(f.mod, 0, Inf)$value
> mu
[1] 4
> f.mod <- function(x) (x - mu)^2 * f(x)
> integrate(f.mod, 0, Inf)$value
[1] 4

p.2/19
3 Solutions - continued

Have to compute the density by differentiation:

> F <- function(x) 1 - x^(-0.3) * exp(-0.4 * (x - 1))
> f <- function(x) 0.3 * x^(-1.3) * exp(-0.4 * (x - 1)) +
+     0.4 * x^(-0.3) * exp(-0.4 * (x - 1))
> f.mod <- function(x) x * f(x)
> mu <- integrate(f.mod, 1, Inf)$value
> mu
[1]
> f.mod <- function(x) (x - mu)^2 * f(x)
> integrate(f.mod, 1, Inf)$value
[1]

p.3/19
4 Conditional probabilities

P is a probability measure on the sample space E.

Definition: If A ⊆ E is an event with P(A) > 0 we define the conditional probability of B given A as

P(B | A) = P(B ∩ A) / P(A).

Fixing A, the map B ↦ P(B | A) is a probability measure, the conditional probability measure given A. (One needs to check that the conditions for a probability measure are satisfied.)

p.4/19
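On a finite sample space the definition can be applied directly. A minimal sketch in R, using a made-up fair-die example (not from the slides):

```r
# Conditional probability P(B | A) on E = {1,...,6} for a fair die.
# A = "outcome is even", B = "outcome is at least 4".
E <- 1:6
p <- rep(1/6, 6)                          # uniform point probabilities
A <- E %% 2 == 0
B <- E >= 4
P.B.given.A <- sum(p[A & B]) / sum(p[A])  # P(B intersect A) / P(A)
P.B.given.A                               # 0.6666667, i.e. 2/3
```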
5 EPO example

An EPO test reveals in 99% of the cases a cyclist that has taken EPO. Is this good?

tp = true positive
fp = false positive
tn = true negative
fn = false negative

With A1 = {tp, fn} and B = {tp, fp} we know that P(B | A1) = 0.99. We want to know P(A1 | B).

p.5/19
6 Bayes theorem

Theorem: If A1, ..., An are disjoint events in E with E = A1 ∪ ... ∪ An, and if B ⊆ E is any event with P(B) > 0, then

P(Ai | B) = P(B | Ai) P(Ai) / (P(B | A1) P(A1) + ... + P(B | An) P(An))

for all i = 1, ..., n.

p.6/19
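The theorem translates directly into a small R helper. A sketch (the function name and the illustrative likelihood 0.05 are made up here; 0.07 and 0.99 appear later in the EPO example):

```r
# Bayes theorem for a finite partition A_1, ..., A_n:
# posterior_i = P(B | A_i) P(A_i) / sum_j P(B | A_j) P(A_j).
bayes <- function(prior, lik) {
  prior * lik / sum(prior * lik)
}

# Example: two hypotheses with priors 0.07 and 0.93 and
# likelihoods P(B | A1) = 0.99 and P(B | A2) = 0.05 (assumed).
bayes(prior = c(0.07, 0.93), lik = c(0.99, 0.05))
```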
7 EPO example

Let A2 = {fp, tn}; then

P(A1 | B) = P(B | A1) P(A1) / (P(B | A1) P(A1) + P(B | A2) P(A2)).

Assume a value for the false positive rate P(B | A2). If P(A1) = 0.07 (so P(A2) = 0.93), the formula gives one value of P(A1 | B); if instead P(A1) = 0.3, the posterior P(A1 | B) is considerably larger. The point is that P(A1 | B) depends strongly on the prior probability P(A1).

p.7/19
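The numeric value of P(B | A2) did not survive in the text above, so the sketch below assumes a false positive rate of 0.05 purely for illustration; only P(B | A1) = 0.99 and the priors 0.07 and 0.3 come from the slides:

```r
# Posterior P(A1 | B) that a cyclist who tests positive took EPO.
# sens = P(B | A1) is from the slides; fpr = P(B | A2) is assumed here.
posterior <- function(pA1, sens = 0.99, fpr = 0.05) {
  sens * pA1 / (sens * pA1 + fpr * (1 - pA1))
}
posterior(0.07)  # prior 7%:  roughly 0.60
posterior(0.3)   # prior 30%: roughly 0.89
```

Even a 99% sensitive test gives only a modest posterior when the prior probability of doping is small.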
8 Independence

It is natural to say that B is independent of A if P(B | A) = P(B). This implies that A is independent of B and that P(A ∩ B) = P(A)P(B).

Definition: We say that two events A and B are independent if P(A ∩ B) = P(A)P(B).

p.8/19
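A quick sanity check of the definition in R, with a made-up two-dice example (not from the slides):

```r
# Two fair dice: 36 equally likely outcomes (i, j).
E <- expand.grid(i = 1:6, j = 1:6)
P <- function(event) sum(event) / nrow(E)  # uniform probability measure
A <- E$i %% 2 == 0   # first die is even
B <- E$j >= 5        # second die is 5 or 6
c(P(A & B), P(A) * P(B))  # both equal 1/6, so A and B are independent
```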
9 Random variables

A random variable is a notational convention representing the unrealized outcome of an experiment prior to observing the outcome. Usually denoted X, Y, Z.

The probability measure governing the experiment is called the distribution of the corresponding random variable.

A random variable X representing the outcome of a binary experiment with sample space E = {0, 1} is called a Bernoulli variable. The probability p = P(X = 1) is often called the success probability.

p.9/19
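A Bernoulli variable with success probability p can be simulated in R as a binomial with size 1; a small sketch:

```r
# Simulate 1000 realizations of a Bernoulli variable with p = 0.3.
set.seed(1)                            # for reproducibility
p <- 0.3
x <- rbinom(1000, size = 1, prob = p)
mean(x)  # relative frequency of 1s, close to the success probability p
```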
10 Transformations

If E and E′ are two sample spaces, a transformation is a map h : E → E′ that assigns the transformed outcome h(x) in E′ to the outcome x in E.

Definition: If P is a probability measure on E, the transformed probability measure h(P) on E′ is given by

h(P)(A) = P(h⁻¹(A)) = P({x ∈ E | h(x) ∈ A})

for any event A ⊆ E′.

If X is a random variable with distribution P, the distribution of h(X) is h(P).

p.10/19
11 Point probabilities

If E is discrete, P a probability measure on E with point probabilities p(x) for x ∈ E, and h : E → E′, the probability measure h(P) has point probabilities

q(y) = Σ_{x : h(x) = y} p(x),  y ∈ h(E).

If E′ ⊆ R then the mean (provided it is defined) under h(P) is

µ = Σ_{x ∈ E} h(x) p(x)

and the variance (provided it is defined) under h(P) is

σ² = Σ_{x ∈ E} (h(x) − µ)² p(x) = Σ_{x ∈ E} h(x)² p(x) − µ².

p.11/19
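These formulas are easy to evaluate in R; a sketch using a made-up transformation of the fair die (not from the slides):

```r
# Point probabilities, mean and variance under h(P) for a fair die
# E = {1,...,6} and the transformation h(x) = (x - 3)^2.
E <- 1:6
p <- rep(1/6, 6)
h <- function(x) (x - 3)^2
q <- tapply(p, h(E), sum)  # q(y) = sum of p(x) over {x : h(x) = y}
q
mu <- sum(h(E) * p)                # mean under h(P)
sigma2 <- sum(h(E)^2 * p) - mu^2   # variance under h(P)
```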
12 Exponential families

Let E be discrete and H : E → R a function. Define for θ ∈ R the function

φ(θ) = Σ_{x ∈ E} exp(θ H(x)) ∈ [0, ∞].

With Θ = {θ ∈ R | φ(θ) < ∞} we define for θ ∈ Θ the probability measure P_θ on E with point probabilities

p_θ(x) = (1/φ(θ)) exp(θ H(x)).

The family of measures P_θ on E parameterized by θ ∈ Θ is called an exponential family.

p.12/19
13 Exponential families

Define the following function in R

> H <- function(n) -(n-10)^2

and let H be defined on E = N. What is Θ? Compute and plot the function φ. Compute and plot the point probabilities for the probability measure corresponding to θ = 1 and θ = 4.

p.13/19
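One way to approach this exercise (a sketch, assuming E = {1, 2, 3, ...}): the terms exp(θH(n)) = exp(−θ(n − 10)²) are summable exactly when θ > 0, so Θ = (0, ∞), and φ can be computed by truncating the sum, since the tail terms decay extremely fast:

```r
H <- function(n) -(n - 10)^2

# phi(theta) by truncating the infinite sum; for theta > 0 the terms
# vanish very quickly, so nmax = 200 is more than enough.
phi <- function(theta, nmax = 200) sum(exp(theta * H(1:nmax)))

# Point probabilities p_theta(n) = exp(theta * H(n)) / phi(theta).
p.theta <- function(n, theta) exp(theta * H(n)) / phi(theta)

theta.grid <- seq(0.1, 3, by = 0.1)
plot(theta.grid, sapply(theta.grid, phi), type = "l",
     xlab = "theta", ylab = "phi(theta)")

n <- 1:20
plot(n, p.theta(n, 1), type = "h")  # theta = 1
points(n, p.theta(n, 4), pch = 16)  # theta = 4: more concentrated at n = 10
```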
14 Exponential families

Provided that all sums are sensible, the mean under H(P_θ) is

µ(θ) = Σ_{x ∈ E} (1/φ(θ)) H(x) exp(θ H(x)) = φ′(θ)/φ(θ)

and the variance under H(P_θ) is

σ²(θ) = Σ_{x ∈ E} (H(x)²/φ(θ)) exp(θ H(x)) − φ′(θ)²/φ(θ)² = (φ″(θ)φ(θ) − φ′(θ)²)/φ(θ)² = µ′(θ).

p.14/19
15 Special exponential families

If E ⊆ R and c : E → R, let

Θ = {θ ∈ R | Σ_{x ∈ E} exp(θx + c(x)) < ∞}

and define for θ ∈ Θ the probability measure P_θ on E with point probabilities

p_θ(x) = exp(θx − b(θ) + c(x))

where b(θ) = log Σ_{x ∈ E} exp(θx + c(x)).

In the general notation we have

b(θ) = log φ(θ),  φ(θ) = Σ_{x ∈ E} exp(θx) w(x)  with w(x) = exp(c(x)).

p.15/19
16 Special exponential families

For the mean and variance under P_θ we find that

µ(θ) = φ′(θ)/φ(θ) = b′(θ)

and

σ²(θ) = b″(θ).

The Bernoulli distribution and the Poisson distribution are two examples of distributions that fit within the framework considered here. The importance is that we have established a general relation between the parameter θ and the mean value.

p.16/19
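As a sketch of how the Poisson distribution fits in: with θ = log λ and c(x) = −log(x!), the point probabilities exp(θx − b(θ) + c(x)) with b(θ) = e^θ are exactly the Poisson(λ) probabilities, so the mean is b′(θ) = e^θ = λ and the variance is b″(θ) = λ. A numerical check:

```r
# Check that b(theta) = exp(theta) reproduces Poisson(lambda)
# with mean b'(theta) = lambda and variance b''(theta) = lambda.
theta <- log(3)  # lambda = 3
x <- 0:100
p <- exp(theta * x - exp(theta) - lgamma(x + 1))  # c(x) = -log(x!)
mu <- sum(x * p)               # close to lambda = 3
sigma2 <- sum((x - mu)^2 * p)  # also close to lambda = 3
```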
17 Location and scale

A transformation h : R → R given by h(x) = σx + µ for constants µ ∈ R and σ > 0 is called a scale-location transformation.

If X is a real valued random variable with distribution function F, then Y = h(X) = σX + µ has distribution function

G(x) = F((x − µ)/σ).

p.17/19
18 Location and scale

If F is differentiable with f = F′, the distribution of Y has density

g(x) = G′(x) = (1/σ) f((x − µ)/σ).

If, moreover, X has mean 0 and variance 1, then Y has mean µ and variance σ².

p.18/19
19 The normal distribution

The density for the normal distribution with location parameter µ and scale parameter σ² is

(1/(σ√(2π))) exp(−(x − µ)²/(2σ²)) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)).

We write X ∼ N(µ, σ²) to denote that the distribution of X is the normal distribution with location parameter µ and scale parameter σ².

Since the mean and variance of X ∼ N(0, 1) are 0 and 1, respectively, it follows that the mean and variance of X ∼ N(µ, σ²) are µ and σ², respectively.

p.19/19
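The claimed mean and variance can be checked numerically with integrate, mirroring the opening exercise (the values µ = 3, σ = 2 are arbitrary choices for the check):

```r
# Check mean and variance of N(mu, sigma^2) by numerical integration.
mu <- 3; sigma <- 2
f <- function(x) exp(-(x - mu)^2 / (2 * sigma^2)) / sqrt(2 * pi * sigma^2)
m <- integrate(function(x) x * f(x), -Inf, Inf)$value          # ~ 3
v <- integrate(function(x) (x - m)^2 * f(x), -Inf, Inf)$value  # ~ 4
```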
Simulations

Computer simulations of realizations of random variables have become indispensable as a supplement to theoretical investigations and practical applications.

p.1/25
More informationWhen is MLE appropriate
When is MLE appropriate As a rule of thumb the following to assumptions need to be fulfilled to make MLE the appropriate method for estimation: The model is adequate. That is, we trust that one of the
More informationBANA 7046 Data Mining I Lecture 4. Logistic Regression and Classications 1
BANA 7046 Data Mining I Lecture 4. Logistic Regression and Classications 1 Shaobo Li University of Cincinnati 1 Partially based on Hastie, et al. (2009) ESL, and James, et al. (2013) ISLR Data Mining I
More informationProbability Theory for Machine Learning. Chris Cremer September 2015
Probability Theory for Machine Learning Chris Cremer September 2015 Outline Motivation Probability Definitions and Rules Probability Distributions MLE for Gaussian Parameter Estimation MLE and Least Squares
More informationIntroduction to Probability Theory
Introduction to Probability Theory Ping Yu Department of Economics University of Hong Kong Ping Yu (HKU) Probability 1 / 39 Foundations 1 Foundations 2 Random Variables 3 Expectation 4 Multivariate Random
More informationMath 1131Q Section 10
Math 1131Q Section 10 Review Oct 5, 2010 Exam 1 DATE: Tuesday, October 5 TIME: 6-8 PM Exam Rooms Sections 11D, 14D, 15D CLAS 110 Sections12D, 13D, 16D PB 38 (Physics Building) Material covered on the exam:
More informationChapter 8.8.1: A factorization theorem
LECTURE 14 Chapter 8.8.1: A factorization theorem The characterization of a sufficient statistic in terms of the conditional distribution of the data given the statistic can be difficult to work with.
More informationTwo hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45
Two hours MATH20802 To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER STATISTICAL METHODS 21 June 2010 9:45 11:45 Answer any FOUR of the questions. University-approved
More informationSTAT J535: Introduction
David B. Hitchcock E-Mail: hitchcock@stat.sc.edu Spring 2012 Chapter 1: Introduction to Bayesian Data Analysis Bayesian statistical inference uses Bayes Law (Bayes Theorem) to combine prior information
More informationClassical Probability
Chapter 1 Classical Probability Probability is the very guide of life. Marcus Thullius Cicero The original development of probability theory took place during the seventeenth through nineteenth centuries.
More informationChapter 7. Basic Probability Theory
Chapter 7. Basic Probability Theory I-Liang Chern October 20, 2016 1 / 49 What s kind of matrices satisfying RIP Random matrices with iid Gaussian entries iid Bernoulli entries (+/ 1) iid subgaussian entries
More informationLecture 3. Biostatistics in Veterinary Science. Feb 2, Jung-Jin Lee Drexel University. Biostatistics in Veterinary Science Lecture 3
Lecture 3 Biostatistics in Veterinary Science Jung-Jin Lee Drexel University Feb 2, 2015 Review Let S be the sample space and A, B be events. Then 1 P (S) = 1, P ( ) = 0. 2 If A B, then P (A) P (B). In
More informationProbability on a Riemannian Manifold
Probability on a Riemannian Manifold Jennifer Pajda-De La O December 2, 2015 1 Introduction We discuss how we can construct probability theory on a Riemannian manifold. We make comparisons to this and
More informationLaws of Probability, Bayes theorem, and the Central Limit Theorem
Laws of, Bayes theorem, and the Central Limit Theorem 7th Penn State Astrostatistics School David Hunter Department of Statistics Penn State University Adapted from notes prepared by Rahul Roy and RL Karandikar,
More informationComments. x > w = w > x. Clarification: this course is about getting you to be able to think as a machine learning expert
Logistic regression Comments Mini-review and feedback These are equivalent: x > w = w > x Clarification: this course is about getting you to be able to think as a machine learning expert There has to be
More informationGeneralized Linear Models 1
Generalized Linear Models 1 STA 2101/442: Fall 2012 1 See last slide for copyright information. 1 / 24 Suggested Reading: Davison s Statistical models Exponential families of distributions Sec. 5.2 Chapter
More information