ME 597: AUTONOMOUS MOBILE ROBOTICS SECTION 2 PROBABILITY. Prof. Steven Waslander


p(A): probability that A is true
$0 \le p(A) \le 1$
$p(\mathrm{True}) = 1, \quad p(\mathrm{False}) = 0$
$p(A \cup B) = p(A) + p(B) - p(A \cap B)$

Discrete Random Variable
X denotes a random variable. X can take on a countable number of values, $X \in \{x_1, \ldots, x_n\}$. The probability that X takes on a specific value is written $p(X = x_i)$, or $p(x_i)$.
Example: a 6-sided die's discrete probability distribution is uniform, $p(x_i) = 1/6$ for $x_i \in \{1, 2, 3, 4, 5, 6\}$.
Probabilities sum to one: $\sum_{i=1}^n p(x_i) = 1$.

Continuous Random Variable
X takes on a value in a continuum, described by a probability density function $p(X = x)$, or $p(x)$.
The pdf is evaluated over finite intervals of the continuum: $p(x \in (a, b)) = \int_a^b p(x)\, dx$.

Measures of Distributions
Mean: the expected value of a random variable,
$E[X] = \sum_{i=1}^n x_i\, p(x_i)$ (discrete case), $\quad E[X] = \int x\, p(x)\, dx$ (continuous case).
Variance: a measure of the variability of a random variable, $\mathrm{Var}(X) = E[(X - \mu)^2]$:
$\mathrm{Var}(X) = \sum_{i=1}^n (x_i - \mu)^2\, p(x_i)$ (discrete case), $\quad \mathrm{Var}(X) = \int (x - \mu)^2\, p(x)\, dx$ (continuous case).
The standard deviation σ is the square root of the variance: $\sigma^2 = \mathrm{Var}(X)$.
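As a quick numeric illustration of these formulas, a minimal Matlab sketch using the fair-die distribution from the previous slide:

    % Mean and variance of a fair 6-sided die (discrete case)
    x = 1:6;                        % possible values
    p = ones(1,6) / 6;              % uniform PMF; sums to 1
    mu = sum(x .* p);               % E[X] = 3.5
    v  = sum((x - mu).^2 .* p);     % Var(X) = 2.9167
    sigma = sqrt(v);                % standard deviation = 1.7078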

Multi-variable distributions
Vector of random variables: $X = [X_1 \ \cdots \ X_n]^T$
Mean: $\mu = [E[X_1] \ \cdots \ E[X_n]]^T$

Multi-variable distributions
Covariance: a measure of how much two random variables change together,
$\mathrm{Cov}(X_i, X_j) = E[(X_i - \mu_i)(X_j - \mu_j)] = E[X_i X_j] - \mu_i \mu_j$
If Cov(X, Y) > 0: when X is above its expected value, Y tends to be above its expected value.
If Cov(X, Y) < 0: when X is above its expected value, Y tends to be below its expected value.
If X, Y are independent, Cov(X, Y) = 0.

Multi-variable distributions
Covariance matrix Σ: defines the variational relationship between each pair of random variables, $\Sigma_{ij} = \mathrm{Cov}(X_i, X_j)$.
A generalization of variance: the diagonal elements are the variances of the individual random variables, $\mathrm{Cov}(X_i, X_i) = \mathrm{Var}(X_i)$.
The covariance matrix is symmetric and positive semi-definite.

Multiplication by a constant matrix A yields
$\mathrm{cov}(Ax) = E[(Ax - A\mu)(Ax - A\mu)^T] = E[A(x - \mu)(x - \mu)^T A^T] = A\, E[(x - \mu)(x - \mu)^T]\, A^T = A\, \mathrm{cov}(x)\, A^T$
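A minimal Monte Carlo check of this identity in Matlab; the specific A and Σ below are arbitrary choices, not from the slides:

    % Verify cov(A*x) = A*cov(x)*A' by sampling (arbitrary example values)
    Sigma = [4 4; 4 8];              % an example covariance matrix
    A = [1 2; 0 1];                  % an arbitrary constant matrix
    H = chol(Sigma, 'lower');        % factor Sigma = H*H'
    X = H * randn(2, 1e6);           % zero-mean samples with covariance Sigma
    cov_empirical = cov((A*X)')      % sample covariance of A*x
    cov_predicted = A * Sigma * A'   % matches to Monte Carlo accuracy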

Addition/Subtraction of random variables
$\mathrm{cov}(X \pm Y) = E[((X - \mu_x) \pm (Y - \mu_y))((X - \mu_x) \pm (Y - \mu_y))^T]$
$= E[(X - \mu_x)(X - \mu_x)^T \pm (X - \mu_x)(Y - \mu_y)^T \pm (Y - \mu_y)(X - \mu_x)^T + (Y - \mu_y)(Y - \mu_y)^T]$
$= \mathrm{cov}(X) + \mathrm{cov}(Y) \pm 2\,\mathrm{cov}(X, Y)$
If X, Y are independent, $\mathrm{cov}(X \pm Y) = \mathrm{cov}(X) + \mathrm{cov}(Y)$.
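A quick sampling check of the independent case, sketched with arbitrary scalar variances:

    % For independent X, Y: var(X + Y) = var(X) + var(Y)
    N = 1e6;
    X = 2 * randn(N, 1);             % Var(X) = 4
    Y = 3 * randn(N, 1);             % Var(Y) = 9, independent of X
    var(X + Y)                       % approximately 13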

Joint Probability
Probability of x and y: $p(X = x \text{ and } Y = y) = p(x, y)$, e.g. the probability of clouds and rain today.
Independence: if X, Y are independent, then $p(x, y) = p(x)\, p(y)$, e.g. the probability of two heads in a row in two coin flips is ¼.

Conditional Probability
Probability of x given y: $p(X = x \mid Y = y) = p(x \mid y)$, e.g. the probability of KD for dinner, given that a Waterloo engineer is cooking.
Relation to joint probability: $p(x \mid y) = \dfrac{p(x, y)}{p(y)}$
If X and Y are independent, $p(x \mid y) = p(x)$; this follows from the above.

Law of Total Probability
Discrete: $\sum_x p(x) = 1$, $\quad p(x) = \sum_y p(x, y)$, $\quad p(x) = \sum_y p(x \mid y)\, p(y)$
Continuous: $\int p(x)\, dx = 1$, $\quad p(x) = \int p(x, y)\, dy$, $\quad p(x) = \int p(x \mid y)\, p(y)\, dy$

Probability distribution
It is possible to define a discrete probability distribution as a column vector:
$p(X) = [p(X = x_1) \ \cdots \ p(X = x_n)]^T$
The conditional probability can then be a matrix:
$p(x \mid y) = \begin{bmatrix} p(x_1 \mid y_1) & \cdots & p(x_1 \mid y_m) \\ \vdots & & \vdots \\ p(x_n \mid y_1) & \cdots & p(x_n \mid y_m) \end{bmatrix}$

Discrete Random Variable
And the Law of Total Probability becomes a matrix-vector product:
$p(x) = \sum_y p(x \mid y)\, p(y) = p(x \mid y)\, p(y)$
Note: each column of $p(x \mid y)$ must sum to 1, $\sum_x p(x \mid y) = 1$.
Relation of joint and conditional probabilities: $p(x, y) = p(x \mid y)\, p(y)$
Total probability: $\sum_x p(y, x) = p(y)$
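A small numeric sketch of this matrix form in Matlab; the probabilities below are made up for illustration:

    % Law of total probability as a matrix-vector product
    P_x_given_y = [0.9 0.2; ...      % p(x1|y1), p(x1|y2)
                   0.1 0.8];         % p(x2|y1), p(x2|y2); each column sums to 1
    p_y = [0.3; 0.7];                % p(y), sums to 1
    p_x = P_x_given_y * p_y          % p(x) = [0.41; 0.59], sums to 1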

Bayes Theorem
From the definition of conditional probability:
$p(x \mid y) = \dfrac{p(x, y)}{p(y)}, \quad p(y \mid x) = \dfrac{p(x, y)}{p(x)}$
$p(x \mid y)\, p(y) = p(x, y) = p(y \mid x)\, p(x)$
Bayes Theorem defines how to update one's beliefs about X given a known (new) value of y:
$p(x \mid y) = \dfrac{p(y \mid x)\, p(x)}{p(y)} = \dfrac{\text{likelihood} \times \text{prior}}{\text{evidence}}$

Bayes Theorem
If Y is a measurement and X is the current vehicle state, Bayes Theorem can be used to update the state estimate given a new measurement:
$p(x \mid y) = \dfrac{p(y \mid x)\, p(x)}{p(y)} = \dfrac{\text{likelihood} \times \text{prior}}{\text{evidence}}$
Prior: the probabilities that the vehicle is in any of the possible states.
Likelihood: the probability of getting the measurement that occurred, given each possible state is the true state.
Evidence: the probability of getting the specific measurement recorded.

Bayes Theorem Example: Drug testing
A drug test is 99% sensitive (it will correctly identify a drug user 99% of the time).
The drug test is also 99% specific (it will correctly identify a non-drug user 99% of the time).
A company tests its employees, 0.5% of whom are drug users.
What is the probability that a positive test result indicates an actual drug user? 33%, 66%, 97%, 99%?

Bayes Theorem Example: Drug Testing
Employees are either users or non-users: $X \in \{u, n\}$.
The test is either positive or negative: $Y \in \{+, -\}$.
We want to find the probability that an employee is a user given that the test is positive. Applying Bayes Theorem:
$p(u \mid +) = \dfrac{p(+ \mid u)\, p(u)}{p(+)} = \dfrac{\text{likelihood} \times \text{prior}}{\text{evidence}}$

Bayes Theorem Example: Drug Testing
Prior: the probability that an individual is a drug user, $p(u) = 0.005$.
Likelihood: the probability that a test is positive given an individual is a drug user, $p(+ \mid u) = 0.99$.
Evidence: the total probability of a positive test result,
$p(+) = p(+ \mid u)\, p(u) + p(+ \mid n)\, p(n) = (0.99)(0.005) + (0.01)(0.995) = 0.0149$

Bayes Theorem Example: Drug Testing
Finally, the probability that an individual is a drug user given a positive test:
$p(u \mid +) = \dfrac{p(+ \mid u)\, p(u)}{p(+)} = \dfrac{(0.99)(0.005)}{0.0149} = 0.3322$
There is only a 33% chance that a positive test result has caught a drug user. That's not a great test!
The difficulty lies in the large number of non-drug users that are tested: it is hard to find a needle in the haystack with a low-resolution camera.
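The whole example fits in a few lines of Matlab; a sketch of the computation above:

    % Bayes update for the drug-testing example
    p_u     = 0.005;                          % prior: p(user)
    p_pos_u = 0.99;                           % likelihood: p(+|u), sensitivity
    p_pos_n = 0.01;                           % p(+|n), from 99% specificity
    p_pos   = p_pos_u*p_u + p_pos_n*(1-p_u);  % evidence: 0.0149
    p_u_pos = p_pos_u * p_u / p_pos           % posterior p(u|+) = 0.3322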

Gaussian Distribution (Normal)
$p(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}, \quad p(x) \sim N(\mu, \sigma^2)$

Multivariate Gaussian Distribution (Normal)
$p(\mathbf{x}) = \dfrac{1}{\sqrt{\det(2\pi\Sigma)}}\, e^{-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})}, \quad p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma)$
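A direct Matlab transcription of this pdf; the parameters and evaluation point below are arbitrary examples:

    % Evaluate the multivariate Gaussian pdf at a point x
    mu = [1; 2]; Sigma = [4 4; 4 8];          % example parameters
    x = [0; 0];                               % example evaluation point
    d = x - mu;
    p = exp(-0.5 * d' * (Sigma \ d)) / sqrt(det(2*pi*Sigma))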

Properties of Gaussians
Linear combinations: if $x \sim N(\mu, \Sigma)$ and $y = Ax + B$, then
$y \sim N(A\mu + B,\ A\Sigma A^T)$
The result remains Gaussian! (Note the exclamation point: this is somewhat surprising, and does not hold for multiplication or division. Let's take a look.)

Demonstration of combination of Gaussians
A tale of two univariate Gaussians:
Define two Gaussians (zero mean).
Generate many samples from each distribution (5,000,000).
Combine these samples linearly, one sample from each distribution at a time.
Multiply these samples.
Divide these samples.
Create histograms of the resulting samples.
Take the mean and variance of the resulting samples.
Generate a Gaussian fit and compare.

Demonstration of combination of Gaussians
A tale of two univariate Gaussians:
$f(x) \sim N(0, 0.3^2), \quad g(x) \sim N(0, 0.8^2)$

Demonstration of combination of Gaussians
Linear combination:
$2 f(x) + 0.4\, g(x) + 1 \sim N(1, 0.62^2)$

Demonstration of combination of Gaussians
Product:
$f(x) \cdot g(x) \sim N(0, 0.48^2)$ (the Gaussian fit to the resulting histogram; the product itself is not Gaussian)

Demonstration of combination of Gaussians
Quotient:
$f(x) / g(x) \sim N(0, 170^2)$ (the Gaussian fit to the resulting histogram; the quotient is heavy-tailed and far from Gaussian)
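A Matlab sketch of the full demonstration; the linear-combination coefficients follow the reconstruction above, and the fitted values on the slides came from histograms like these:

    % A tale of two univariate Gaussians, by sampling
    N = 5e6;
    f = 0.3 * randn(N, 1);            % f ~ N(0, 0.3^2)
    g = 0.8 * randn(N, 1);            % g ~ N(0, 0.8^2)
    lin = 2*f + 0.4*g + 1;            % linear combination: stays Gaussian
    prd = f .* g;                     % product: not Gaussian
    quo = f ./ g;                     % quotient: heavy-tailed, not Gaussian
    % Histogram each result and compare to a Gaussian with the sample moments
    histogram(quo, -20:0.1:20, 'Normalization', 'pdf');
    [mean(lin) std(lin); mean(prd) std(prd); mean(quo) std(quo)]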

Generating multivariate random noise samples
Define two distributions, the one of interest and the standard normal distribution:
$x \sim N(\mu, \Sigma), \quad z \sim N(0, I)$
Because the covariance matrix is symmetric and positive semi-definite, it can be diagonalized (assume it is also full rank):
$\Sigma = E \Lambda E^T = E \Lambda^{1/2}\, I\, (\Lambda^{1/2})^T E^T = H I H^T$, where $H = E \Lambda^{1/2}$
The two distributions can now be related through a linear identity:
$x = \mu + H z \sim N(\mu, H I H^T) = N(\mu, \Sigma)$

To implement this in Matlab for simulation purposes:
Define $\mu$, $\Sigma$, e.g. $\Sigma = \begin{bmatrix} 4 & 4 \\ 4 & 8 \end{bmatrix}$.
Find the eigenvalues, $\lambda$, and eigenvectors, $E$, of $\Sigma$.
The noise can then be created with $\mu + E \Lambda^{1/2}\,$randn(n,1).
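Putting the slide's recipe together, a minimal Matlab sketch; μ here is taken from the next slide's example:

    % Draw samples from N(mu, Sigma) via the eigendecomposition of Sigma
    mu = [1; 2];
    Sigma = [4 4; 4 8];
    [E, Lambda] = eig(Sigma);            % Sigma = E*Lambda*E'
    H = E * sqrt(Lambda);                % H*H' = Sigma (Lambda is diagonal)
    sample = mu + H * randn(2, 1)        % one sample; loop or vectorize for more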

Confidence ellipses
Lines of constant probability, found by setting the pdf exponent equal to a constant.
The principal axes are the eigenvectors of the covariance; their magnitudes depend on the eigenvalues of the covariance.
Example: $\mu = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad \Sigma = \begin{bmatrix} 4 & 4 \\ 4 & 8 \end{bmatrix}$, with 50% and 99% error ellipses.
Not easily computed; code provided.
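One way the provided code might draw these ellipses (a sketch, not the course's actual code): scale the unit circle by the chi-square quantile for the chosen confidence level, stretch by the square roots of the eigenvalues, rotate by the eigenvectors, and shift by the mean.

    % 50% and 99% confidence ellipses for N(mu, Sigma)
    mu = [1; 2]; Sigma = [4 4; 4 8];
    [E, Lambda] = eig(Sigma);
    theta = linspace(0, 2*pi, 100);
    circ = [cos(theta); sin(theta)];               % unit circle
    for s = [1.386, 9.210]                         % chi2inv(0.50, 2), chi2inv(0.99, 2)
        ell = E * sqrt(s * Lambda) * circ + mu;    % needs implicit expansion (R2016b+)
        plot(ell(1,:), ell(2,:)); hold on;
    end
    axis equal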