Complementary Random Numbers & Dagger Sampling. Lecture 4, autumn 2015, Mikael Amelin

1 Complementary Random Numbers & Dagger Sampling. Lecture 4, autumn 2015, Mikael Amelin

2 Introduction All observations in simple sampling are independent of each other. Sometimes it is possible to increase the probability of a good selection of samples if the samples are not independent. [Figure: the simulation chain: random number generator → U → inverse transform method → Y → model g(Y) → X → sampling → M_X]
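The chain in the figure can be sketched in a few lines of Python. This is only an illustration, not code from the lecture; the exponential input distribution and the model g are assumptions made purely for demonstration.

```python
import random
import math

def inverse_transform_exponential(u, rate=1.0):
    """Inverse transform method: map a U(0,1) number to an exponential input Y."""
    return -math.log(1.0 - u) / rate

def g(y):
    """Assumed example model: some function of the scenario input."""
    return y * y

def simple_sampling(n, seed=1):
    """Estimate E[X] = E[g(Y)] with n independent samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()                        # random number generator -> U
        y = inverse_transform_exponential(u)    # inverse transform method -> Y
        total += g(y)                           # model -> X
    return total / n                            # sampling -> M_X

print(simple_sampling(10_000))  # E[Y^2] = 2 for a rate-1 exponential input
```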

3 Mean of Two Estimates Consider two estimates M_X1 and M_X2 of the same expectation value E[X], i.e., E[M_X1] = E[M_X2] = E[X]. Study the mean of these estimates, i.e., M_X = (M_X1 + M_X2)/2.

4 Mean of Two Estimates Expectation value: E[M_X] = E[(M_X1 + M_X2)/2] = (1/2)(E[M_X1] + E[M_X2]) = (1/2)(E[X] + E[X]) = E[X], i.e., the combined estimate M_X is also an estimate of E[X]. Variance: Var[M_X] = Var[(M_X1 + M_X2)/2] = (1/4)Var[M_X1] + (1/4)Var[M_X2] + (2/4)Cov[M_X1, M_X2].

5 Mean of Two Estimates If M_X1 and M_X2 are independent estimates obtained with simple sampling and the number of samples is the same in both simulations, then we get Var[M_X1] = Var[M_X2] and Cov[M_X1, M_X2] = 0. Hence, Var[M_X] = Var[(M_X1 + M_X2)/2] = (1/4)(Var[M_X1] + Var[M_X2]) = (1/2)Var[M_X1]. The variance is equivalent to one run of simple sampling using 2n samples; according to theorem 4.2 we should get half the variance when the number of samples is doubled.

6 Mean of Two Estimates If M_X1 and M_X2 are negatively correlated, then the covariance term will make Var[M_X] smaller than the corresponding variance for simple sampling using the same number of samples. A negative correlation between estimates can be created using complementary random numbers.

7 Complementary Random Numbers The complementary random number of a U(0, 1)-distributed random variable U is U* = 1 − U. - Correlation coefficient: ρ(U, U*) = −1. If a random input value is calculated using the inverse transform method, i.e., Y = F_Y^{-1}(U), then the complementary input is given by Y* = F_Y^{-1}(1 − U). - For symmetrical distributions we get that if Y = μ_Y + δ then Y* = μ_Y − δ, where μ_Y is the mean of Y. - Correlation coefficient: ρ(Y, Y*) ≥ ρ(U, U*), i.e., the negative correlation may be weaker.

8 Complementary Random Numbers If an output is calculated by analysing a scenario, i.e., X = g(Y), then the complementary output is given by X* = g(Y*). - Correlation coefficient: ρ(X, X*) ≥ ρ(Y, Y*). If an estimate is based on a set of scenarios, i.e., m_X = (1/n) Σ_i x_i, then the complementary estimate is given by m_X* = (1/n) Σ_i x_i*. - Correlation coefficient: ρ(m_X, m_X*) = ρ(X, X*). In practice there is no need to differentiate between original and complementary values: m_X = (m_X + m_X*)/2 = (1/(2n)) Σ_{i=1}^{n} (x_i + x_i*).
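The whole chain, including the combined estimate m_X, can be sketched in Python. This is only an illustrative sketch, not the lecture's code; the normal input distribution and the model g are assumptions, and the inverse CDF is taken from the standard library's statistics.NormalDist.

```python
import random
from statistics import NormalDist

dist = NormalDist(mu=100, sigma=15)    # assumed input distribution F_Y

def g(y):
    """Assumed example model."""
    return max(y - 90, 0.0)

def complementary_estimate(n, seed=1):
    """Estimate E[X] from n original and n complementary scenarios."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        y = dist.inv_cdf(u)            # Y  = F_Y^-1(U)
        y_star = dist.inv_cdf(1 - u)   # Y* = F_Y^-1(1 - U)
        total += g(y) + g(y_star)      # x_i + x_i*
    return total / (2 * n)             # m_X = (1/(2n)) * sum(x_i + x_i*)

print(complementary_estimate(5_000))
```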

9 Multiple Inputs Generally, the input vector Y has more than one element. The complementary random numbers of a vector can be created by combining the original and complementary random numbers of each element in the vector. - It can be noted that each scenario y_i will have more than one complementary scenario. It is possible to apply complementary random numbers only to a selection of the elements in Y.

10 Example: Complementary Scenarios Problem The input vector Y has the three elements A, B, C. The outcomes a_i, b_i, c_i have been randomised for each element, and the corresponding complementary random numbers, a_i*, b_i*, c_i*, have been calculated. Enumerate the original and complementary scenarios if complementary random numbers are applied to the following elements: a) A, B, C. b) A, B.

11 Example: Complementary Scenarios Solution a) Randomise three values, a, b, c, and calculate their complementary random numbers. The results are combined as follows: [a, b, c], [a, b, c*], [a, b*, c], [a, b*, c*], [a*, b, c], [a*, b, c*], [a*, b*, c], [a*, b*, c*]. b) Randomise six values, a, b, c_1, c_2, c_3, c_4, and calculate the complementary random numbers a* and b*. The results are combined as follows: [a, b, c_1], [a, b*, c_2], [a*, b, c_3], [a*, b*, c_4].
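The enumeration in the solution can also be done programmatically. A small sketch is given below (the helper name complementary_scenarios is hypothetical); note that in case b) the sketch keeps a single value c, whereas the lecture pairs each combination with a freshly randomised c_1, ..., c_4.

```python
from itertools import product

def complementary_scenarios(values, complements, apply_to):
    """Enumerate all combinations of original/complementary values for the
    elements listed in apply_to; other elements keep their original value."""
    choices = []
    for name in values:
        if name in apply_to:
            choices.append([values[name], complements[name]])
        else:
            choices.append([values[name]])
    return list(product(*choices))

values      = {"A": "a",  "B": "b",  "C": "c"}
complements = {"A": "a*", "B": "b*", "C": "c*"}

# Case a): complementary random numbers applied to A, B and C -> 8 scenarios
print(complementary_scenarios(values, complements, {"A", "B", "C"}))
# Case b): applied to A and B only -> 4 combinations of (a, a*) x (b, b*);
# the lecture combines each of them with a freshly randomised c_1..c_4.
print(complementary_scenarios(values, complements, {"A", "B"}))
```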

12 Effectiveness The variance reduction effect depends on the correlation between x_i = g(y_i) and x_i* = g(y_i*). This correlation depends on the correlation between Y and X. Not all inputs have a significant correlation to the outputs; there is no use in applying complementary random numbers to these inputs. Sometimes a stronger correlation can be obtained by modifying the input probability distribution.

13 Auxiliary Inputs It might be possible to find a subset of the inputs, Y_i, i = 1, …, J, such that there is a strong correlation between Y_E = h(Y_1, …, Y_J) (where h is an arbitrary function) and X. Then it can be worthwhile to introduce Y_E as an auxiliary input variable to the system, and to apply complementary random numbers to Y_E instead of Y_i, i = 1, …, J.

14 Example: Auxiliary Input Problem Gismo Inc. is selling their products to K retailers, and the demand of each retailer, D_k, is N(μ_k, σ_k)-distributed. How should complementary random numbers be applied to a Monte Carlo simulation of Gismo Inc.?

15 Example: Auxiliary Input Solution The costs of Gismo Inc. have a stronger correlation to the total demand than to the demand of an individual retailer. Introduce the auxiliary input variable D_tot = D_1 + … + D_K. The probability distribution of D_tot is calculated using the addition theorems for normally distributed random variables.

16 Example: Auxiliary Input Solution To generate the original scenario, randomise D_tot as well as temporary values of the demand of the individual retailers, D̃_1, …, D̃_K. The temporary demand of the individual retailers will generally not match the total demand. Therefore, scale the demands of the individual retailers to match D_tot, i.e., let D_k = D_tot · D̃_k / Σ_{j=1}^{K} D̃_j.

17 Example: Auxiliary Input Solution To generate the complementary scenario, calculate the complementary random value D_tot* and randomise new temporary values, D̃_1, …, D̃_K. Then scale the individual retailer demands to match D_tot*.
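A sketch of the whole auxiliary-input procedure for this example is given below. The retailer means and standard deviations are made-up numbers, and the retailer demands are assumed independent so that the addition theorem gives D_tot ~ N(μ_1 + … + μ_K, sqrt(σ_1² + … + σ_K²)).

```python
import math
import random
from statistics import NormalDist

mu    = [40.0, 25.0, 35.0]   # assumed retailer means (mu_k)
sigma = [8.0, 5.0, 6.0]      # assumed retailer standard deviations (sigma_k)

# Addition theorem for independent normally distributed variables:
d_tot_dist = NormalDist(sum(mu), math.sqrt(sum(s * s for s in sigma)))

def scenario_pair(rng):
    """One original and one complementary scenario of retailer demands."""
    u = rng.random()
    scenarios = []
    for u_tot in (u, 1 - u):                  # original, then complementary
        d_tot = d_tot_dist.inv_cdf(u_tot)     # auxiliary input D_tot (or D_tot*)
        temp = [NormalDist(m, s).inv_cdf(rng.random())   # temporary demands
                for m, s in zip(mu, sigma)]
        scale = d_tot / sum(temp)
        scenarios.append([scale * d for d in temp])  # D_k = D_tot * ~D_k / sum(~D_j)
    return scenarios

rng = random.Random(1)
original, complementary = scenario_pair(rng)
print(original, sum(original))            # scaled demands sum to D_tot
print(complementary, sum(complementary))  # scaled demands sum to D_tot*
```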

18 Simulation procedure Complementary random numbers
[Figure: random number generator → U, U* → inverse transform method → Y, Y* → model g(Y) → X, X* → sampling → M_X]
Step 1. Generate a random number u_{k,i} and calculate y_{k,i} = F_{Yk}^{-1}(u_{k,i}). If applicable to this scenario parameter, also calculate y_{k,i}* = F_{Yk}^{-1}(1 − u_{k,i}).
Step 2. Repeat step 1 for all scenario parameters. Create all complementary scenarios and calculate x_i = g(y_i) for each scenario.

19 Simulation procedure Complementary random numbers
Step 3. Repeat steps 1 and 2 until enough scenarios have been analysed for this batch.
Step 4. Update the sums Σ_{i=1}^{n} x_i and Σ_{i=1}^{n} x_i², or store all samples x_i (depending on the objective of the simulation).
Step 5. Test the stopping rule. If it is not fulfilled, repeat steps 1–4 for the next batch.
Step 6. Calculate estimates and present results.
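A compact skeleton of steps 1–6 for a single scalar input might look as follows in Python; the input distribution, the model and the fixed number of batches that stands in for the stopping rule are all assumptions for illustration.

```python
import random
from statistics import NormalDist

dist = NormalDist(50, 10)          # assumed input distribution F_Y

def g(y):
    """Assumed example model."""
    return max(y - 45.0, 0.0)

def simulate(batch_size=500, max_batches=20, seed=1):
    rng = random.Random(seed)
    sum_x, sum_x2, n = 0.0, 0.0, 0
    for _ in range(max_batches):
        for _ in range(batch_size):                        # steps 1-3: scenarios in one batch
            u = rng.random()
            for y in (dist.inv_cdf(u), dist.inv_cdf(1 - u)):   # original + complementary
                x = g(y)
                sum_x += x                                 # step 4: update the sums
                sum_x2 += x * x
                n += 1
        # step 5: a stopping rule would be tested here (placeholder: run all batches)
    m_x = sum_x / n                                        # step 6: estimates
    var_x = sum_x2 / n - m_x * m_x
    return m_x, var_x

print(simulate())
```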

20 Example: Complementary Random Numbers in Akabuga District Simulation Apply complementary random numbers to the total generation capacity and the total load. - Joint probability distribution for the generation capacity of the diesel generator sets: [Table: states k of the total generation capacity G_tot with the corresponding probabilities f_Gtot(k); numerical values not reproduced in this transcription.]

21 Example: Complementary Random Numbers in Akabuga District Simulation [Table: comparison of simulation methods (enumeration; simple sampling; complementary random numbers with scenarios (G, D), (G*, D*); complementary random numbers with scenarios (G, D), (G, D*), (G*, D), (G*, D*)) showing average simulation time in ms and min, mean, max, variance and efficiency of the estimates M_TOC and M_LOLO; numerical values not reproduced in this transcription.]

22 Dagger Sampling We have seen from complementary random numbers that a negative correlation between the outputs from different scenarios can have a variance-reducing effect. When the input has a two-state probability distribution, we can create a negative correlation by using dagger sampling instead of complementary random numbers.

23 Dagger Sampling Consider a two-state input variable Y with the frequency function f_Y(x) = 1 − p for x = a, p for x = b, and 0 for all other x, where p < 0.5. In dagger sampling, the inverse transform method is replaced by a dagger transform function.

24 Dagger Transform Theorem 5.1: If U is a U(0, 1)-distributed random number, then Y_j is a two-state random variable with two possible states, a and b, such that P(b) = p, if Y_j is calculated according to Y_j = b if (j − 1)p < U ≤ jp, and Y_j = a otherwise, for j = 1, …, S, where S is the largest integer such that S ≤ 1/p. The value S in theorem 5.1 is referred to as the dagger cycle length. One pseudorandom number is used to generate S values of Y.
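Theorem 5.1 translates directly into a small function. A minimal sketch (the name dagger_transform is assumed, not from the lecture):

```python
import math

def dagger_transform(u, a, b, p):
    """Map one U(0,1) number u to a full dagger cycle of two-state values:
    Y_j = b if (j - 1) * p < u <= j * p, otherwise Y_j = a (theorem 5.1)."""
    s = math.floor(1.0 / p)   # dagger cycle length: largest integer S <= 1/p
    return [b if (j - 1) * p < u <= j * p else a for j in range(1, s + 1)]
```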

25 Dagger Transform The dagger transform can be illustrated by dividing the interval between 0 and 1 into S sections, where each section has the width p. There will also be a remainder interval if S·p < 1. [Figure: the unit interval divided into S sections of width p, followed by a remainder interval.] y_k = b if U is found in the k:th section, and for i = 1, …, S, i ≠ k, we get y_i = a.

26 Example: Dagger Transform Problem The random variable Y is either equal to 2 (70% probability) or 5 (30% probability). a) Randomise three values of Y using dagger sampling and the random value U = 0.34 from a U(0, 1)-distribution. b) Randomise three values of Y using dagger sampling and the random value U = 0.94 from a U(0, 1)-distribution.

27 Example: Dagger Transform Solution Here p = 0.3, so S = 3. Moreover, we have a = 2 and b = 5. a) U = 0.34 falls in the second section (0.3 < 0.34 ≤ 0.6), which gives y_1 = 2, y_2 = 5, y_3 = 2. b) U = 0.94 falls in the remainder interval (0.9 < 0.94 ≤ 1), which gives y_1 = 2, y_2 = 2, y_3 = 2.
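The two cases can be checked against the dagger_transform sketch shown after theorem 5.1:

```python
print(dagger_transform(0.34, a=2, b=5, p=0.3))  # [2, 5, 2]: U falls in the second section
print(dagger_transform(0.94, a=2, b=5, p=0.3))  # [2, 2, 2]: U falls in the remainder interval
```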

28 Correlation of Dagger Samples The outputs X_1, …, X_S from the scenarios generated in a dagger cycle will be negatively correlated if the inputs Y_1, …, Y_S are negatively correlated (but the correlation might be weaker). Are the input values Y_i = F_{Y_i}(U), i = 1, …, S, negatively correlated, i.e., what is the value of Cov[Y_i, Y_j] = E[Y_i·Y_j] − E[Y_i]E[Y_j]?

29 Correlation of Dagger Samples There are two possible values for the product Y_i·Y_j: ab or aa. There are two sections where either Y_i or Y_j will be transformed to b; hence, the probability for this event is 2p. Thus, we get E[Y_i·Y_j] = 2p·ab + (1 − 2p)·a².

30 Correlation of Dagger Samples Moreover, we have E[Y_i] = E[Y_j] = (1 − p)a + pb. The covariance can now be calculated as Cov[Y_i, Y_j] = E[Y_i·Y_j] − E[Y_i]E[Y_j] = 2p·ab + (1 − 2p)a² − ((1 − p)a + pb)² = 2p·ab + (1 − 2p)a² − (1 − p)²a² − 2p(1 − p)ab − p²b² = 2p²ab − p²a² − p²b² = −p²(a − b)² < 0. The input values are negatively correlated and we can therefore expect a negative correlation in the outputs too.
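The result Cov[Y_i, Y_j] = −p²(a − b)² can also be checked numerically. A small sketch with the assumed values a = 2, b = 5, p = 0.3, so that the theoretical covariance is −0.09 · 9 = −0.81:

```python
import math
import random

a, b, p = 2.0, 5.0, 0.3
s = math.floor(1 / p)
rng = random.Random(1)

pairs = []
for _ in range(200_000):
    u = rng.random()
    # One dagger cycle per random number (theorem 5.1)
    y = [b if (j - 1) * p < u <= j * p else a for j in range(1, s + 1)]
    pairs.append((y[0], y[1]))          # Y_1 and Y_2 from the same dagger cycle

mean_1 = sum(y1 for y1, _ in pairs) / len(pairs)
mean_2 = sum(y2 for _, y2 in pairs) / len(pairs)
cov = sum((y1 - mean_1) * (y2 - mean_2) for y1, y2 in pairs) / len(pairs)

print(cov)                    # sample covariance, close to -0.81
print(-p**2 * (a - b)**2)     # theoretical value -p^2 (a - b)^2 = -0.81
```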

31 Multiple Inputs Assume that a computer simulation has several two-state input variables. Which dagger cycle length should be chosen? Alternative 1: Independent dagger cycles. [Figure: inputs and the scenarios covered by their independent dagger cycles.]

32 Multiple Inputs Alternative 2: Reset all dagger cycles at the end of the longest cycle. [Figure: inputs and scenarios when all cycles are reset at the end of the longest cycle.]

33 Multiple Inputs Alternative 3: Reset all dagger cycles at the end of the shortest cycle. [Figure: inputs and scenarios when all cycles are reset at the end of the shortest cycle.]

34 Simulation procedure Dagger sampling
[Figure: random number generator → U → dagger transform → Y_1, …, Y_S → model g(Y) → X_1, …, X_S → sampling → M_X]
Step 1. Generate a random number u_{k,i} and calculate y_{k,i} = F_{Yk}(u_{k,i}) for one dagger cycle.
Step 2. Repeat step 1 for all scenario parameters. Combine the dagger cycles and calculate x_i = g(y_i).
Step 3. Repeat steps 1 and 2 until enough scenarios have been analysed for this batch.

35 Simulation procedure Dagger sampling
Step 4. Update the sums Σ_{i=1}^{n} x_i and Σ_{i=1}^{n} x_i², or store all samples x_i (depending on the objective of the simulation).
Step 5. Test the stopping rule. If it is not fulfilled, repeat steps 1–4 for the next batch.
Step 6. Calculate estimates and present results.
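For a single two-state input, the whole dagger sampling procedure could be sketched as below; the model g and the parameter values are assumptions for illustration, and a fixed number of cycles replaces the stopping rule.

```python
import math
import random

def dagger_transform(u, a, b, p):
    """One U(0,1) number gives a whole dagger cycle of two-state values."""
    s = math.floor(1.0 / p)
    return [b if (j - 1) * p < u <= j * p else a for j in range(1, s + 1)]

def g(y):
    """Assumed model: a cost of 10 is incurred in the unfavourable state."""
    return 10.0 if y == 5 else 0.0

def dagger_sampling(n_cycles=10_000, a=2, b=5, p=0.3, seed=1):
    rng = random.Random(seed)
    sum_x, sum_x2, n = 0.0, 0.0, 0
    for _ in range(n_cycles):
        u = rng.random()                          # step 1: one random number ...
        for y in dagger_transform(u, a, b, p):    # ... gives S scenario values
            x = g(y)                              # step 2: analyse each scenario
            sum_x += x                            # step 4: update the sums
            sum_x2 += x * x
            n += 1
    return sum_x / n                              # step 6: the estimate M_X

print(dagger_sampling())   # close to p * 10 = 3.0
```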

36 Example: Dagger Sampling of Akabuga District Apply dagger sampling to the available generation in the diesel generator sets and to the available transmission capacity. Use independent dagger cycles.

37 Example: Dagger Sampling of Akabuga District [Table: comparison of enumeration, simple sampling, complementary random numbers and dagger sampling, showing average simulation time in ms and min, mean, max, variance and efficiency of the estimates M_TOC and M_LOLO; numerical values not reproduced in this transcription.]

38 Variance Reduction To achieve a variance reduction, we need to have some information about the simulated system. For complementary random numbers, we need to identify the inputs where a negative correlation between scenarios will result in negative correlations for one or more outputs. For dagger sampling, we need to identify two-state inputs where a negative correlation between the scenarios will result in negative correlations for one or more outputs.
