STRATIFIED SAMPLING

Background: suppose $\theta = E[X]$. Simple MC simulation would use $\theta \approx \bar X$. If the simulation of $X$ depends on some discrete $Y$ with pmf $P\{Y = y_i\} = p_i$, $i = 1,\dots,k$, and $X$ can be simulated given $Y = y_i$, then

$$E[X] = \sum_{i=1}^k p_i E[X \mid Y = y_i] \approx \sum_{i=1}^k p_i \bar X_i = \hat\Theta.$$

$\hat\Theta$ is the stratified sampling estimator of $\theta$. Note: the $y_i$'s subdivide the sampling space into $k$ strata.

Analysis: if $n p_i$ samples are used to determine $\bar X_i$, then $Var(\bar X_i) = Var(X \mid Y = y_i)/(n p_i)$, and

$$Var(\hat\Theta) = \sum_{i=1}^k p_i^2 Var(\bar X_i) = \frac{1}{n} \sum_{i=1}^k p_i Var(X \mid Y = y_i) = \frac{1}{n} E[Var(X \mid Y)].$$

Now $Var(X) = E[Var(X \mid Y)] + Var(E[X \mid Y])$, and $Var(\bar X) = \frac{1}{n} Var(X)$, so

$$Var(\hat\Theta) = Var(\bar X) - \frac{1}{n} Var(E[X \mid Y]).$$

Stratified sampling can therefore reduce variance significantly. Note: if $S_i^2/(n p_i)$ is the sample-variance estimate of $Var(\bar X_i)$ based on $n p_i$ samples, then

$$Var(\hat\Theta) = \sum_{i=1}^k p_i^2 Var(\bar X_i) \approx \frac{1}{n} \sum_{i=1}^k p_i S_i^2.$$
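The estimator above is easy to check numerically. A minimal Python sketch (the distributions and the helper name `stratified_mean` are illustrative, not from the notes): each stratum $i$ gets roughly $n p_i$ draws of $X \mid Y = y_i$, and the stratum means are combined as $\sum_i p_i \bar X_i$.

```python
import random

def stratified_mean(mus, ps, sigma, n, rng):
    """Stratified estimator of E[X]: n*p_i draws of X | Y = y_i
    per stratum, combined as sum_i p_i * Xbar_i."""
    est = 0.0
    for mu, p in zip(mus, ps):
        ni = max(1, round(n * p))
        xbar = sum(rng.gauss(mu, sigma) for _ in range(ni)) / ni
        est += p * xbar
    return est

rng = random.Random(0)
mus, ps = [0.0, 1.0, 2.0, 5.0], [0.4, 0.3, 0.2, 0.1]
theta = sum(p * m for p, m in zip(ps, mus))   # exact E[X] = 1.2
est = stratified_mean(mus, ps, 0.5, 2000, rng)
```

Since $E[X \mid Y]$ varies a lot across strata here, the stratified variance $\frac{1}{n} E[Var(X\mid Y)]$ is much smaller than the plain-MC variance $\frac{1}{n} Var(X)$.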
Application: one-dimensional integration, where $\theta = E[h(U)] = \int_0^1 h(x)\,dx$. The strata are the subintervals $[\frac{i-1}{k}, \frac{i}{k}]$, with $p_i = \frac{1}{k}$. For subinterval $i$, generate $U_{ij} = \frac{i-1}{k} + \frac{1}{k} U$, $j = 1,\dots,n_i$, $i = 1,\dots,k$.

Example: $\frac{\pi}{4} = E[\sqrt{1 - U^2}] = \int_0^1 \sqrt{1 - x^2}\,dx$.

Simple MC ("raw simulation"):

  N = 500;    % sample size (digits garbled in the source; any N divisible by k works)
  U = rand(1,N); X = sqrt(1-U.^2);
  disp([mean(X) 2*std(X)/sqrt(N)])

With stratified sampling there are different choices for $k$.

a) $k = n$, $n_i = 1$, so use $U_{ij} = U_i = \frac{i-1}{n} + \frac{1}{n} U$, $i = 1,\dots,n$.

  US = ([0:N-1] + rand(1,N))/N; XS = sqrt(1-US.^2);
  disp([mean(XS) 2*std(XS)/sqrt(N)])

b) $k = 5$, $n_i = n/5$, $p_i = \frac{1}{5}$, so use $U_{ij} = \frac{i-1}{5} + \frac{1}{5} U$, $j = 1,\dots,n/5$, $i = 1,\dots,5$, with

$$Var(\hat\Theta) \approx \sum_{i=1}^5 \Big(\frac{1}{5}\Big)^2 \frac{S_i^2}{n/5} = \frac{1}{5n} \sum_{i=1}^5 S_i^2.$$

  K = 5; Ni = N/K;
  for i = 1:K
    XS = sqrt(1-((i-1+rand(1,Ni))/K).^2);
    XSB(i) = mean(XS); SS(i) = var(XS);
  end
  SST = mean(SS/N);
  disp([mean(XSB) 2*sqrt(SST)])
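The same comparison can be sketched in Python (helper names are illustrative; the notes use MATLAB). Choice a), one sample per subinterval, already shrinks the error dramatically for this smooth integrand:

```python
import math
import random

def mc_quarter_circle(n, rng):
    """Plain Monte Carlo for pi/4 = E[sqrt(1 - U^2)]."""
    return sum(math.sqrt(1 - rng.random() ** 2) for _ in range(n)) / n

def stratified_quarter_circle(n, rng):
    """One sample per stratum: U_i = (i - 1 + U)/n, i = 1..n."""
    return sum(math.sqrt(1 - ((i + rng.random()) / n) ** 2)
               for i in range(n)) / n

rng = random.Random(1)
raw = mc_quarter_circle(5000, rng)
strat = stratified_quarter_circle(5000, rng)
```

With $n = 5000$ the raw estimate is accurate to roughly $\pm 0.01$, while the stratified estimate is several orders of magnitude closer to $\pi/4$.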
Optimal Distribution of Samples: to reduce the variance further, assume $n_i$ samples for stratum $i$, with $\sum_{i=1}^k n_i = n$. If $\bar X_i \approx E[X \mid Y = y_i]$ using $n_i$ samples, let $\hat\Theta = \sum_i p_i \bar X_i$, with $Var(\hat\Theta) = \sum_i p_i^2 Var(\bar X_i)$. Given an estimate $s_i^2$ for each $Var(X \mid Y = y_i)$, so that $Var(\bar X_i) \approx s_i^2/n_i$, choose the $n_i$'s to minimize $\sum_{i=1}^k p_i^2 s_i^2/n_i$ subject to $\sum_{i=1}^k n_i = n$; the Lagrange multiplier solution is

$$n_i = n\, \frac{p_i s_i}{\sum_{j=1}^k p_j s_j}.$$

Algorithm: use a small $n$ to estimate the $s_i$'s, then a larger $n$ for $\hat\Theta$; $Var(\hat\Theta) \approx \sum_i p_i^2 S_i^2 / n_i$ is estimated using the sample variances $S_i^2$ from the strata.
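The allocation rule can be written as a one-line helper. A Python sketch (the rounding and the floor of one sample per stratum are practical choices, not part of the derivation):

```python
def optimal_allocation(ps, ss, n):
    """n_i proportional to p_i * s_i (the Lagrange-multiplier solution),
    rounded to integers with at least 1 sample per stratum."""
    total = sum(p * s for p, s in zip(ps, ss))
    return [max(1, round(n * p * s / total)) for p, s in zip(ps, ss)]

# Equal p_i: samples go where the estimated standard deviations are largest.
alloc = optimal_allocation([0.25] * 4, [0.1, 0.2, 0.3, 0.4], 100)
```

With equal $p_i$ the allocation is simply proportional to the $s_i$, so the strata with the most variability get the most samples.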
Continued Example: $\frac{\pi}{4} = E[\sqrt{1-U^2}] = \int_0^1 \sqrt{1-x^2}\,dx$.

c) $k = 5$, $p_i = \frac{1}{5}$, optimal $n_i = n\, s_i / \sum_{j=1}^5 s_j$. Use $U_{ij} = \frac{i-1}{5} + \frac{1}{5} U$, $j = 1,\dots,n_i$, $i = 1,\dots,5$, with

$$Var(\hat\Theta) \approx \frac{1}{5} \sum_{i=1}^5 \frac{S_i^2}{5 n_i}.$$

  ON = max(3, round(N*sqrt(SS)/sum(sqrt(SS))));   % n_i from the pilot variances
  for i = 1:K
    XS = sqrt(1-((i-1+rand(1,ON(i)))/K).^2);
    XSB(i) = mean(XS); SS(i) = var(XS);
  end
  SST = mean(SS./(5*ON));
  disp([mean(XSB) 2*sqrt(SST)])

More Stratified Sampling Examples

Estimate $P = \int_{-\infty}^{\infty} \cos(t^2)\, \frac{e^{-t^2/2}}{\sqrt{2\pi}}\, dt$. For simulation, convert to $x \in [0,1]$ using $x = \Phi(t)$: then $dx = \frac{e^{-t^2/2}}{\sqrt{2\pi}}\, dt$, so

$$P = \int_0^1 \cos\big(\Phi^{-1}(x)^2\big)\, dx.$$

a) Simple MC, with $f(x) = \cos(\Phi^{-1}(x)^2)$:

  f = @(x) cos(norminv(x).^2);
  N = 500; U = rand(1,N); X = f(U);
  disp([mean(X) 2*std(X)/sqrt(N)])
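The transformed integrand can be reproduced with the standard library's `statistics.NormalDist` for $\Phi^{-1}$ (a Python sketch, not the notes' MATLAB; as a check, $E[\cos(Z^2)] = \mathrm{Re}\,(1-2i)^{-1/2} \approx 0.569$ for $Z \sim N(0,1)$):

```python
import math
import random
from statistics import NormalDist

def f(x):
    """Integrand after the substitution x = Phi(t): cos(Phi^{-1}(x)^2)."""
    return math.cos(NormalDist().inv_cdf(x) ** 2)

rng = random.Random(2)
n = 4000
samples = [f(rng.random()) for _ in range(n)]
est = sum(samples) / n
# standard error of the plain-MC estimate
var = sum((s - est) ** 2 for s in samples) / (n - 1)
se = math.sqrt(var / n)
```

This integrand varies much more over $[0,1]$ than $\sqrt{1-x^2}$ does, which is what makes the stratified versions on the next page worthwhile.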
b) $k = 5$, $n_i = n/5$, $p_i = \frac{1}{5}$, so use $U_{ij} = \frac{i-1}{5} + \frac{1}{5} U$, $j = 1,\dots,n/5$, $i = 1,\dots,5$, with

$$Var(\hat\Theta) \approx \sum_{i=1}^5 \Big(\frac{1}{5}\Big)^2 \frac{S_i^2}{n/5} = \frac{1}{5n} \sum_{i=1}^5 S_i^2.$$

  K = 5; Ni = N/K;
  for i = 1:K
    XS = f((i-1+rand(1,Ni))/K);
    XSB(i) = mean(XS); SS(i) = var(XS);
  end
  SST = mean(SS/N);
  disp([mean(XSB) 2*sqrt(SST)])

c) $k = 5$, $p_i = \frac{1}{5}$, optimal $n_i = n\, s_i / \sum_{j=1}^5 s_j$. Use $U_{ij} = \frac{i-1}{5} + \frac{1}{5} U$, $j = 1,\dots,n_i$, $i = 1,\dots,5$, with

$$Var(\hat\Theta) \approx \frac{1}{5} \sum_{i=1}^5 \frac{S_i^2}{5 n_i}.$$

  ON = max(3, round(N*sqrt(SS)/sum(sqrt(SS))));
  for i = 1:K
    XS = f((i-1+rand(1,ON(i)))/K);
    XSB(i) = mean(XS); SS(i) = var(XS);
  end
  SST = mean(SS./(5*ON));
  disp([mean(XSB) 2*sqrt(SST)])
Video Poker Example: 5 cards are dealt, and 1-5 discards are replaced. Let $C = \binom{52}{5}^{-1}$, and assume the payoff table is (payoffs partially illegible in the source are left blank; the probabilities are the standard 5-card counts):

  Hand                    Payoff   Probability
  RF: Royal Flush            8        4C
  SF: Straight Flush         5        36C
  4K: Four-of-a-kind                  624C
  FH: Full House                      3744C
  FL: Flush                  5        5108C
  ST: Straight               4        10200C
  3K: Three-of-a-kind                 54912C
  2P: Two Pair                        123552C
  HP: High Pair (> 10)                337920C
  LP: Low Pair (<= 10)                760320C
  OT: Other                           1 - sum of the above p_i

Textbook Strategy: no replacement cards are dealt if the hand is better than 3K; otherwise keep pairs and triplets but replace the other cards.

Raw simulation: for each run,
a) use a random permutation of [1:52] (p. 51) to shuffle the cards;
b) pick the top 5 cards, discard, and replace using the text strategy;
c) compute $X$ from the table.
Compute $\bar X$ using many runs.
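Step (a) and the table's combinatorics are easy to sanity-check. A Python sketch of the deal plus a one-pair classifier (full hand ranking is omitted; `deal5` and `is_one_pair` are illustrative names): the observed pair frequency should match $(337920 + 760320)\,C$.

```python
import random
from collections import Counter

def deal5(rng):
    """Shuffle a 52-card deck (random permutation) and take the top 5."""
    deck = list(range(52))
    rng.shuffle(deck)
    return deck[:5]

def is_one_pair(hand):
    """Exactly one pair, no trips/quads/two pair (rank = card % 13)."""
    counts = sorted(Counter(c % 13 for c in hand).values())
    return counts == [1, 1, 1, 2]

rng = random.Random(3)
runs = 20000
phat = sum(is_one_pair(deal5(rng)) for _ in range(runs)) / runs
```

A pair hand can be neither a straight nor a flush, so the rank-count test alone is exact here; the combined HP + LP probability is $1098240/2598960 \approx 0.4226$.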
Video Poker Example with stratification: if the payoff is the RV $X$, then conditioning on the dealt hand,

$$E[X] = E[X \mid RF]P\{RF\} + E[X \mid SF]P\{SF\} + E[X \mid 4K]P\{4K\} + E[X \mid FH]P\{FH\} + E[X \mid FL]P\{FL\} + E[X \mid ST]P\{ST\} + E[X \mid 3K]P\{3K\} + E[X \mid 2P]P\{2P\} + E[X \mid HP]P\{HP\} + E[X \mid LP]P\{LP\} + E[X \mid OT]P\{OT\}.$$

The terms through 3K are known constants, and $E[X \mid 2P]$, $E[X \mid HP]$, $E[X \mid LP]$ can also be computed analytically, so only the OT stratum needs simulation.

Stratified sampling: for each run,
a) use a random permutation of [1:52] (p. 51) to shuffle the cards;
b) pick the top 5 cards; if the hand is LP or better, start a new run;
c) pick the next 5 cards and compute $X_{OT}$, then set

$$X = (\text{known terms}) + E[X \mid 2P]P\{2P\} + E[X \mid HP]P\{HP\} + E[X \mid LP]P\{LP\} + P\{OT\}\, X_{OT}.$$

Compute $\bar X$ using many runs.
Multidimensional Integrals: consider the 2-dimensional

$$\theta = \int_0^1 \int_0^1 g(x,y)\,dx\,dy.$$

Simple stratified sampling: if the subdivision of $[0,1] \times [0,1]$ is the same for $x$ and $y$, the strata are the squares $s_{ij} = [\frac{i-1}{k}, \frac{i}{k}] \times [\frac{j-1}{k}, \frac{j}{k}]$, for $i = 1:k$, $j = 1:k$, so

$$\theta = \sum_{i=1}^k \sum_{j=1}^k p_{ij}\, E[X \mid (x,y) \in s_{ij}], \qquad p_{ij} = \frac{1}{k^2},$$

$$\hat\Theta = \sum_{i=1}^k \sum_{j=1}^k p_{ij} \bar X_{ij}, \qquad \bar X_{ij} \approx E[X \mid (x,y) \in s_{ij}],$$

$$\bar X_{ij} = \frac{1}{n_{ij}} \sum_{l=1}^{n_{ij}} g\Big(\frac{i-1+U_l}{k}, \frac{j-1+V_l}{k}\Big), \qquad Var(\hat\Theta) \approx \sum_{i=1}^k \sum_{j=1}^k \frac{p_{ij}^2}{n_{ij}} S_{ij}^2.$$

Note: if $n_{ij} = n/k^2$, then

$$Var(\hat\Theta) \approx \frac{1}{k^2} \sum_{i=1}^k \sum_{j=1}^k \frac{S_{ij}^2}{n}.$$
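The double sum above translates directly into two nested loops. A Python sketch (the helper name `stratified_2d` and the test integrand $g(x,y) = xy$, whose integral is $1/4$, are illustrative):

```python
import random

def stratified_2d(g, k, n_ij, rng):
    """Theta-hat = sum_ij p_ij * Xbar_ij with p_ij = 1/k^2 and
    n_ij points sampled uniformly in each square stratum s_ij."""
    total = 0.0
    for i in range(k):
        for j in range(k):
            xbar = sum(g((i + rng.random()) / k, (j + rng.random()) / k)
                       for _ in range(n_ij)) / n_ij
            total += xbar / k ** 2
    return total

rng = random.Random(4)
est = stratified_2d(lambda x, y: x * y, 4, 25, rng)   # exact value: 1/4
```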
Two-dimensional example: $\theta = \int_0^1 \int_0^1 e^{-(x+y)^2}\,dx\,dy$.

  g = @(u) exp(-(u(1,:)+u(2,:)).^2);
  N = 1000;                        % choose N divisible by K^2
  X = g(rand(2,N));                % Simple MC
  disp([mean(X) 2*std(X)/sqrt(N)])
  K = 2; Nij = N/K^2;              % Stratified
  for i = 1:K
    for j = 1:K
      XS = g([i-1+rand(1,Nij); j-1+rand(1,Nij)]/K);
      XSB(i,j) = mean(XS); SS(i,j) = var(XS);
    end
  end
  SST = mean(mean(SS/N));
  disp([mean(mean(XSB)) 2*sqrt(SST)])

Note: for $n$-dimensional integrals, the growth in the number of terms $k^n$ in the sum

$$\hat\Theta = \sum_{i_1=1}^k \cdots \sum_{i_n=1}^k \frac{1}{k^n}\, \bar X_{i_1,\dots,i_n}$$

makes the method infeasible for large $n$; however, the terms in the sum can themselves be sampled:

$$\hat\Theta \approx \frac{1}{m} \sum_{l=1}^m \bar X_{I}, \qquad I = (I_1, I_2, \dots, I_n), \quad I_j \sim \text{Uniform}(1:k).$$

Other strategies, e.g. nonuniform $n_{ij}$'s or a different $k$ for each variable, have also been considered.
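Sampling the strata indices instead of enumerating all $k^n$ of them can be sketched as follows (Python, with illustrative names; one point per sampled stratum, applied to the same integrand in dimension 2):

```python
import math
import random

def sampled_strata(g, dim, k, m, rng):
    """Instead of summing over all k**dim strata, average over m strata
    chosen uniformly at random, with one uniform point per chosen stratum."""
    total = 0.0
    for _ in range(m):
        I = [rng.randrange(k) for _ in range(dim)]       # stratum index
        u = [(i + rng.random()) / k for i in I]           # point inside it
        total += g(u)
    return total / m

rng = random.Random(5)
est = sampled_strata(lambda u: math.exp(-sum(u) ** 2), 2, 10, 5000, rng)
```

Choosing the stratum uniformly and then a uniform point within it makes each draw uniform on $[0,1]^n$, so the estimator stays unbiased; the exact value here is $\int_0^1\!\int_0^1 e^{-(x+y)^2}\,dx\,dy \approx 0.412$.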
Transformation Method for Monotone Functions:

$$\theta = \int_0^1 \cdots \int_0^1 g(x_1, x_2, \dots, x_n)\,dx_1 \cdots dx_n.$$

If $x_1 = e^{-y_n y_1}$, $x_2 = e^{-y_n(y_2 - y_1)}$, ..., $x_n = e^{-y_n(1 - y_{n-1})}$, then

$$\theta = \idotsint\limits_{0 < y_1 < \cdots < y_{n-1} < 1,\; y_n > 0} g(x_1(y), \dots, x_n(y))\, y_n^{n-1} e^{-y_n}\, dy_1 \cdots dy_n.$$

The factor $y_n^{n-1} e^{-y_n}$ is, up to the constant $(n-1)!$, a Gamma$(n,1)$ pdf. For simulation, first generate $Y_n \sim$ Gamma$(n,1)$, then $U_1, \dots, U_{n-1} \sim$ Uniform$(0,1)$, with $Y_i = U_{(i)}$ (sorted). If $Y_n = \text{Gamma}^{-1}(n, 1, U_n)$ (inversion), stratification can be used on $U_n$.
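The sampling recipe can be sketched in Python using the standard library's `random.gammavariate` (the helper name `transform_sample` is illustrative): draw $Y_n$, sort $n-1$ uniforms to get $Y_1 < \cdots < Y_{n-1}$, and map the successive gaps to the $x_i$'s.

```python
import math
import random

def transform_sample(n, rng):
    """One draw of (x_1, ..., x_n): Y_n ~ Gamma(n, 1), Y_1 < ... < Y_{n-1}
    are sorted Uniform(0,1), and x_i = exp(-Y_n * (Y_i - Y_{i-1}))
    with Y_0 = 0 and the final gap ending at 1."""
    yn = rng.gammavariate(n, 1.0)
    ys = sorted(rng.random() for _ in range(n - 1))
    knots = [0.0] + ys + [1.0]
    return [math.exp(-yn * (knots[i + 1] - knots[i])) for i in range(n)]

rng = random.Random(6)
x = transform_sample(5, rng)
```

Each gap is nonnegative, so every $x_i$ lies in $(0,1]$, and the product of the $x_i$'s is $e^{-Y_n}$.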
Example, 2-d continued: let $x_1 = e^{-y_2 y_1}$, $x_2 = e^{-y_2(1-y_1)}$; then

$$\theta = \int_0^1 \int_0^1 e^{-(x_1+x_2)^2}\,dx_1\,dx_2 = \int_0^1 \int_0^\infty e^{-(x_1(y_1,y_2)+x_2(y_1,y_2))^2}\, y_2 e^{-y_2}\, dy_2\, dy_1.$$

  N = 1000; U = rand(2,N);         % Simple MC
  Y(1,:) = U(1,:); Y(2,:) = gaminv(U(2,:),2,1);
  X(1,:) = exp(-Y(1,:).*Y(2,:));
  X(2,:) = exp(-Y(2,:).*(1-Y(1,:)));
  Z = g(X);
  disp([mean(Z) 2*std(Z)/sqrt(N)])
  K = 10; Ni = N/K;                % Stratified on U_2
  for i = 1:K
    U = rand(2,Ni);
    YS = gaminv((i-1+U(2,:))/K,2,1);
    XS = g([exp(-U(1,:).*YS); exp(-YS.*(1-U(1,:)))]);
    XSB(i) = mean(XS); SS(i) = var(XS);
  end
  SST = mean(SS/N);
  disp([mean(XSB) 2*sqrt(SST)])
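A Python sketch of the stratified version, with a small bisection routine standing in for MATLAB's `gaminv(u, 2, 1)` (the Gamma$(2,1)$ cdf is $F(y) = 1 - e^{-y}(1+y)$; the function names are illustrative):

```python
import math
import random

def gamma2_inv(u, tol=1e-10):
    """Invert the Gamma(2,1) cdf F(y) = 1 - exp(-y)*(1 + y) by bisection;
    a stand-in for MATLAB's gaminv(u, 2, 1)."""
    F = lambda y: 1 - math.exp(-y) * (1 + y)
    lo, hi = 0.0, 1.0
    while F(hi) < u:          # grow the bracket until it contains u
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(mid) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def strat_transformed(k, ni, rng):
    """Stratify the Gamma draw Y_2 into k strata via inversion of U_2;
    Y_1 stays Uniform(0,1). The Gamma(2,1) density y*exp(-y) matches the
    Jacobian factor exactly, so the integrand is just exp(-(x1+x2)^2)."""
    strata_means = []
    for i in range(k):
        tot = 0.0
        for _ in range(ni):
            y1 = rng.random()
            y2 = gamma2_inv((i + rng.random()) / k)
            x1 = math.exp(-y2 * y1)
            x2 = math.exp(-y2 * (1 - y1))
            tot += math.exp(-(x1 + x2) ** 2)
        strata_means.append(tot / ni)
    return sum(strata_means) / k

rng = random.Random(7)
est = strat_transformed(10, 100, rng)
```

The exact value of the original integral is $\approx 0.412$, so the stratified transformed estimator should land very close to it.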
Compound RVs: assume iid RVs $X_i$ and an RV $N$; let $S_n(X) = \sum_{i=1}^n \alpha_i X_i$, and estimate $P\{S_N(X) > c\}$. E.g., $X_i$ is insurance claim $i$ and $N$ is the number of claims by time $T$. In many cases the distribution of $N$ is known, e.g. Poisson$(\lambda)$, with $p_i = e^{-\lambda} \lambda^i / i!$.

Raw simulation: for $K$ runs, generate $N$ and then $N$ $X_i$'s; count the proportion of runs with $S_N(X) > c$.

Stratified simulation: given the pmf for $N$, let $g_n(x) = I(S_n(x) > c)$. Then

$$\theta = \sum_{n=0}^m E[g_n(X)]\, p_n + E[g_N(X) \mid N > m] \Big(1 - \sum_{n=0}^m p_n\Big).$$

Simulation run:
a) generate $M = (N \mid N > m)$ from the conditional pmf;
b) generate $X_1, \dots, X_M$ and set

$$\hat\Theta = \sum_{n=0}^m g_n(X)\, p_n + g_M(X) \Big(1 - \sum_{n=0}^m p_n\Big).$$

Then $\hat\Theta \approx P\{S_N(X) > c\}$. E.g., Poisson$(\lambda)$ has conditional pmf

$$p_{m+i} = \frac{e^{-\lambda} \lambda^{m+i}}{\alpha\,(m+i)!}, \quad i \ge 1, \qquad \text{with } \alpha = 1 - \sum_{n=0}^m p_n.$$
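The run described in a)-b) can be sketched in Python. Here $\alpha_i = 1$ and $X_i \sim \text{Exp}(1)$ are illustrative choices, and $N \mid N > m$ is drawn by simple rejection rather than by inverting the conditional pmf:

```python
import math
import random

def poisson_pmf(lam, n):
    return math.exp(-lam) * lam ** n / math.factorial(n)

def poisson_sample(lam, rng):
    """Poisson draw via exponential interarrival times."""
    n, t = 0, rng.expovariate(1.0)
    while t < lam:
        n += 1
        t += rng.expovariate(1.0)
    return n

def strat_tail_prob(lam, c, m, runs, rng):
    """Estimate theta = P{S_N > c}, S_n = X_1 + ... + X_n (alpha_i = 1,
    X_i ~ Exp(1)): terms n <= m are weighted exactly by p_n, and the
    tail stratum uses a draw of N | N > m."""
    ps = [poisson_pmf(lam, n) for n in range(m + 1)]
    alpha = 1 - sum(ps)                      # P{N > m}
    total = 0.0
    for _ in range(runs):
        M = poisson_sample(lam, rng)
        while M <= m:                        # rejection: N | N > m
            M = poisson_sample(lam, rng)
        xs = [rng.expovariate(1.0) for _ in range(M)]
        partial = [0.0]
        for x in xs:
            partial.append(partial[-1] + x)  # partial[n] = S_n
        est = sum(ps[n] * (partial[n] > c) for n in range(m + 1))
        est += alpha * (partial[M] > c)
        total += est
    return total / runs

rng = random.Random(8)
# With c = 0 every S_n with n >= 1 exceeds c, so theta = 1 - e^{-lam} exactly,
# and the stratified estimator returns that value with zero variance.
check = strat_tail_prob(5.0, 0.0, 4, 50, rng)
```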
More information