Short course: A vademecum of statistical pattern recognition techniques with applications to image and video analysis
Lecture: Recalls of probability theory
Massimo Piccardi, University of Technology, Sydney, Australia

Agenda
- Basic probability concepts
- Joint, conditional, marginal probabilities
- Bayes theorem
- Independence
- Cdf, pdf
- Mean, variance, moments
- Expectations
- Covariance matrix, correlation coefficients
- Sample mean, sample covariance
- Gaussian distribution
- Main properties of Gaussian distributions
Basic probability concepts
- Probability: numbers assigned to events, reflecting how likely they are to occur
- Sample space, S: the set of all possible events
- Probability law: a mapping event → p(event)
(courtesy of Prof. Ricardo Gutierrez-Osuna, Texas A&M University)
- In the example, we have a random variable that is the outcome of a hypothetical experiment
- It has 4 possible outcomes, A_i, i = 1..4 (a discrete r.v.)
- Axioms of probability:
  - Axiom I: 0 ≤ p[A_i] ≤ 1
  - Axiom II: p[S] = 1
  - Axiom III: if A_i ∩ A_j = ∅, then p[A_i ∪ A_j] = p[A_i] + p[A_j]
- Many properties follow from these axioms
Joint probability
- Let us now consider two discrete random variables: Weather (W) and Temperature (T)
- Both are assumed binary, i.e. with only two possible values each:
  - W: rainy (r), sunny (s)
  - T: low (l), high (h)
- Take 100 samples of (W, T) and map the joint frequencies into a 2 × 2 table; assuming we have enough samples, we call them joint probabilities
- Joint probability of W and T, value by value:
  - p(W = r, T = l) = 25/100 = 0.25
  - p(W = r, T = h) = 10/100 = 0.10
  - p(W = s, T = l) = 5/100 = 0.05
  - p(W = s, T = h) = 60/100 = 0.60
- The notation with the variables, p(W, T), means the whole set of joint probabilities
- Each of the values can be noted as p(r, l) for short, instead of p(W = r, T = l), provided there is no ambiguity
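As a quick sanity check, the joint table can be reproduced numerically; a minimal sketch in Python/NumPy (rather than the slides' Matlab), using counts consistent with the lecture's example:

```python
import numpy as np

# Counts over 100 (Weather, Temperature) samples, consistent with the example:
# rows: W = rainy, sunny; columns: T = low, high
counts = np.array([[25, 10],
                   [ 5, 60]])

# Dividing the counts by the total turns joint frequencies into joint probabilities
joint = counts / counts.sum()
print(joint[0, 0])    # 0.25 = p(W = r, T = l)
print(joint.sum())    # 1.0 (up to float rounding): the joint covers all possible cases
```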
Joint probability (cont.)
- The joint probabilities add up to 1, as they cover all possible cases (Axiom II)
- Thus, in the example, only 3 of them can be chosen arbitrarily, as the fourth results from 1 minus the sum of the other 3: there are 3 independent numbers (degrees of freedom, dof)
- For two variables with N values each, the joint probabilities have N² − 1 dof

Conditional probability
- The concept of conditional probability is simple: calculate the desired frequencies not on all the samples, but on specific sub-sets where certain conditions are true
- Example: p(W = r | T = l) reads as "the probability of Weather being rainy, given that the Temperature is low"
  - instead of considering all 100 samples, one just takes those where the temperature is low (30 samples in total)
  - out of the above, compute the frequency of rainy days: 25 out of 30 ≈ 0.83
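Computing a conditional probability is just a renormalisation of one column of the joint table; a sketch in Python/NumPy with the example's numbers:

```python
import numpy as np

joint = np.array([[0.25, 0.10],   # p(r,l), p(r,h)
                  [0.05, 0.60]])  # p(s,l), p(s,h)

# Condition on T = l: keep the T = l column and renormalise it by p(T = l)
p_T_low = joint[:, 0].sum()            # 0.30, i.e. the 30 low-temperature samples
p_W_given_low = joint[:, 0] / p_T_low  # [p(r | l), p(s | l)]

print(np.round(p_W_given_low, 2))      # [0.83 0.17]
```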
Conditional probability (cont.)
- Given two r.v., the conditional probability fixes one of the two and uses the other as the only variable
- NB: the joint probability is 2-D; each conditional probability is 1-D
- Let us fix T = l in the example; then the only variable is W, with two possible values:
  - p(W = r | T = l) = 25/30 ≈ 0.83
  - p(W = s | T = l) = 5/30 ≈ 0.17
- These cover all the possible cases and as such their sum is 1; we have only one dof
- For a variable with N values, each conditional probability has N − 1 dof
- In the example:
  - p(W = r | T = l) = 25/30 ≈ 0.83, p(W = s | T = l) = 5/30 ≈ 0.17 (1 dof)
  - p(W = r | T = h) = 10/70 ≈ 0.14, p(W = s | T = h) = 60/70 ≈ 0.86 (1 dof)
- There are 2 degrees of freedom overall for p(W | T), and N(N − 1) for two N-valued r.v.
- NB: p(r, l) < p(r | l) by definition (the latter has a smaller denominator!)
Marginal probability
- W and T are jointly called a random vector or, equivalently, a multivariate random variable
- One can obtain the marginal probability of either variable by adding up the joint probabilities over all possible values of the other (marginalisation):
  p(W) = Σ_T p(W, T)
- The above is called the sum rule
- In the example:
  - p(W = r) = 35/100, p(W = s) = 65/100 (1 dof)
  - p(T = l) = 30/100, p(T = h) = 70/100 (1 dof)

Bayes theorem
- p(W, T) = p(W | T) p(T)
  (joint probability = conditional probability of W given T × marginal probability of T)
- It always holds! It is called the product rule
- It is a powerful tool to break down the complexity of the joint probabilities into the product of simpler probabilities
- Sum rule + product rule: the foundations of statistical PR
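Both rules can be verified numerically on the example's table; a small Python/NumPy sketch:

```python
import numpy as np

joint = np.array([[0.25, 0.10],   # p(r,l), p(r,h)
                  [0.05, 0.60]])  # p(s,l), p(s,h)

# Sum rule: marginals are the joint summed over the other variable
p_W = joint.sum(axis=1)   # [0.35, 0.65]
p_T = joint.sum(axis=0)   # [0.30, 0.70]

# Product rule: p(W, T) = p(W | T) p(T)
p_W_given_T = joint / p_T                         # each column divided by its p(T) value
print(np.allclose(p_W_given_T * p_T, joint))      # True: the product rule recovers the joint
print(np.allclose(p_W_given_T.sum(axis=0), 1.0))  # True: each conditional sums to 1
```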
Independence
- p(W, T) = p(W) p(T)
  (joint probability = marginal probability of W × marginal probability of T)
- If the above holds, the two r.v. are called independent
- Often a very desirable case
- Equivalent to p(W | T) = p(W) and p(T | W) = p(T)
- It does not hold here! For instance: p(r, l) = 0.25, while p(r) = 0.35, p(l) = 0.30 and p(r) p(l) = 0.105

A special example
- Given three binary r.v., A1, A2 and S, let us assume that
  p(A1, A2 | S) = p(A1 | S) p(A2 | S)
  instead of the always-true factorisation (from the product rule):
  p(A1, A2 | S) = p(A1 | A2, S) p(A2 | S), or
  p(A1, A2 | S) = p(A2 | A1, S) p(A1 | S)
- The above reads as "A1 and A2 are independent given S"
- It is not equivalent to "A1 and A2 are independent"!
- It is a relevant case, with S often called a latent variable or a state, and the A_i being measurements
(the slide illustrates the point with tables of sample counts for (A1, A2, S))
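The (failed) independence check is a one-liner once the joint table is in an array; a sketch:

```python
import numpy as np

joint = np.array([[0.25, 0.10],
                  [0.05, 0.60]])
p_W = joint.sum(axis=1)   # [0.35, 0.65]
p_T = joint.sum(axis=0)   # [0.30, 0.70]

# Under independence the joint would equal the outer product of the marginals
outer = np.outer(p_W, p_T)
print(round(outer[0, 0], 3))        # 0.105 = p(r) p(l), whereas p(r, l) = 0.25
print(np.allclose(joint, outer))    # False: W and T are not independent
```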
Cumulative distribution function (cdf)
- The cumulative distribution function of a random variable x is defined as the probability of the event {x ≤ t}: F(t) = p[x ≤ t]
- Some properties:
  - F(+∞) = 1, F(−∞) = 0
  - F(a) ≤ F(b) if a ≤ b
- Matlab, Statistics Toolbox: plot(x, cdf('norm', x, 0, 0.5)) over a range of x values

Probability density function (pdf)
- The pdf of a continuous r.v., if it exists, is defined as the derivative of the cdf: p(x) = dF(x)/dx
- Some properties:
  - p(x) ≥ 0
  - p(x) can be > 1! It is not a probability; rather, a density of probability
  - ∫ p(x) dx = 1
  - Bayes still applies!
- Matlab, Statistics Toolbox: plot(x, pdf('norm', x, 0, 0.5))
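The Gaussian cdf and pdf can be evaluated with the Python standard library alone (the cdf via math.erf), illustrating the properties above, including that a pdf value can exceed 1:

```python
import math

def norm_pdf(x, mu=0.0, sigma=1.0):
    # Gaussian density: p(x) = exp(-(x - mu)^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def norm_cdf(x, mu=0.0, sigma=1.0):
    # Gaussian cdf via the error function: F(x) = (1 + erf((x - mu) / (sigma sqrt(2)))) / 2
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

print(norm_cdf(0.0, 0.0, 0.5))    # 0.5: half of the probability mass lies below the mean
print(norm_cdf(10.0, 0.0, 0.5))   # ~1.0: F(+inf) = 1
print(norm_cdf(-10.0, 0.0, 0.5))  # ~0.0: F(-inf) = 0
print(norm_pdf(0.0, 0.0, 0.1))    # ~3.99: a density value can exceed 1
```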
Mean, variance and moments
- The pdf (or cdf) describes the probability distribution fully; yet, sometimes we prefer to describe it in a more synthetic way
- Mean, or expected value: μ = E[x] = ∫ x p(x) dx
- Variance: VAR(x) = σ² = E[(x − μ)²] = ∫ (x − μ)² p(x) dx
- The standard deviation, σ, is its square root
- VAR(x) is also = E[x²] − 2μ E[x] + μ² = E[x²] − μ²
- Nth moment: E[x^N] = ∫ x^N p(x) dx

Expectations
- An expectation is an averaging operation weighted by p(x); it can be extended to any function of x, f(x):
  E[f(x)] = ∫ f(x) p(x) dx
- E[f(x)] is a scalar value
- The famous Jensen's inequality: E[f(x)] ≥ f(E[x]) for convex f
- Consistently, the expectation of f(x, y) over x:
  E_x[f(x, y)] = ∫ f(x, y) p(x) dx
  averages x out and returns a function of the sole y
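The expectation integrals can be approximated on a discretised grid; a sketch in Python/NumPy that recovers the mean and variance of a Gaussian (the chosen μ = 1, σ = 0.5 are arbitrary example values) and checks Jensen's inequality for the convex f(x) = x²:

```python
import numpy as np

# Grid-based approximation of E[f(x)] = integral of f(x) p(x) dx
mu_true, sigma_true = 1.0, 0.5           # arbitrary example parameters
x = np.linspace(-4.0, 6.0, 20001)        # grid wide enough to cover almost all the mass
dx = x[1] - x[0]
p = np.exp(-(x - mu_true) ** 2 / (2 * sigma_true ** 2)) / np.sqrt(2 * np.pi * sigma_true ** 2)

mean = np.sum(x * p) * dx                # E[x], close to 1.0
var = np.sum((x - mean) ** 2 * p) * dx   # E[(x - mu)^2], close to 0.25
second_moment = np.sum(x ** 2 * p) * dx  # E[x^2]; note VAR = E[x^2] - mu^2

# Jensen's inequality for the convex f(x) = x^2: E[f(x)] >= f(E[x])
print(second_moment >= mean ** 2)        # True; the gap is exactly the variance
```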
Expectations (cont.)
- A marginalisation can be seen as a particular expectation:
  p(y) = ∫ p(y, x) dx = ∫ p(y | x) p(x) dx = E_x[p(y | x)]
- An expectation can also be computed over a conditional probability:
  E[f(x) | y] = ∫ f(x) p(x | y) dx

Mean, variance and moments (multivariate)
- The same definitions extend to multivariate r.v., X = [x_1, ..., x_D]^T
- The mean becomes a D × 1 vector:
  μ = E[X] = [μ_1, ..., μ_D]^T = [E[x_1], ..., E[x_D]]^T
- The variance becomes a D × D covariance matrix:
  COV(X) = Σ = E[(X − μ)(X − μ)^T], whose (i, j) entry is E[(x_i − μ_i)(x_j − μ_j)]
Covariance matrix
- The covariance matrix is obviously a symmetric matrix: only D(D + 1)/2 dof
- Its diagonal terms are the variances σ_i²; the off-diagonal terms cov(x_i, x_j) measure how much x_i and x_j co-vary
- A valid covariance matrix is also positive definite: X^T Σ X > 0 for any X ≠ 0

Correlation coefficients
- Terms cov(x_i, x_j) are often expressed as correlation coefficients, ρ_ij:
  ρ_ij = cov(x_i, x_j) / (σ_i σ_j)
- NB: −1 ≤ ρ_ij ≤ +1 (a corollary of the Cauchy-Schwarz inequality)
(scatter plots with various correlation coefficients, courtesy of Wikipedia)
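NumPy exposes both quantities directly; a sketch on synthetic correlated data (the 0.8 coupling is an arbitrary choice) checking symmetry, positive definiteness, and the bound on ρ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(0.0, 1.0, n)
x2 = 0.8 * x1 + rng.normal(0.0, 0.5, n)   # x2 co-varies with x1 by construction
X = np.vstack([x1, x2])                    # 2 x n data matrix

Sigma = np.cov(X)        # 2 x 2 covariance matrix
rho = np.corrcoef(X)     # correlation coefficients rho_ij = cov_ij / (sigma_i sigma_j)

print(np.allclose(Sigma, Sigma.T))            # True: Sigma is symmetric
print(np.all(np.linalg.eigvalsh(Sigma) > 0))  # True: Sigma is positive definite
print(-1.0 <= rho[0, 1] <= 1.0)               # True: Cauchy-Schwarz bound on rho
```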
(Un)correlation vs independence
- Two r.v., x_i and x_j, are uncorrelated iff ρ_ij = 0
- Two uncorrelated variables are not necessarily independent; they are only free of linear mutual dependencies
- For two uncorrelated variables x_i, x_j, it can be easily shown that E[x_i x_j] = E[x_i] E[x_j]
- Independence, p(x_i, x_j) = p(x_i) p(x_j), is a much stronger property and guarantees that E[x_i^N x_j^M] = E[x_i^N] E[x_j^M] for any N, M

Sample mean and sample covariance
- At times, either p(x) is not available or the expectation integrals are not easy to compute
- Assuming a set of samples x_i, i = 1..N, is available, it is possible to approximate the mean and the covariance as:
  μ = E[x] ≈ (1/N) Σ_{i=1..N} x_i   (sample mean)
  Σ = E[(x − μ)(x − μ)^T] ≈ (1/N) Σ_{i=1..N} (x_i − μ)(x_i − μ)^T   (sample covariance)
- Expectations can be approximated in the same way (Monte Carlo methods)
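The 1/N estimators can be written directly; note that NumPy's np.cov defaults to the unbiased 1/(N − 1) normalisation, so bias=True is needed to match the slide's definition. A sketch with arbitrary example parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
true_mu = np.array([1.0, -2.0])            # arbitrary example parameters
true_Sigma = np.array([[1.5, 0.3],
                       [0.3, 1.0]])
X = rng.multivariate_normal(true_mu, true_Sigma, size=N)   # N x 2 sample matrix

mu_hat = X.mean(axis=0)               # sample mean: (1/N) sum of x_i
centred = X - mu_hat
Sigma_hat = centred.T @ centred / N   # sample covariance with the slide's 1/N factor

# np.cov uses 1/(N - 1) by default; bias=True switches it to 1/N
print(np.allclose(Sigma_hat, np.cov(X.T, bias=True)))  # True
```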
Gaussian distribution
- The Gaussian, or normal, distribution enjoys nice properties that make it very popular for pdf modelling
- Gaussian pdf in 1 dimension:
  p(x) = (1 / √(2πσ²)) exp(−(x − μ)² / (2σ²))
  (the slide plots it for specific values of μ and σ)

Multivariate Gaussian distribution
- Gaussian pdf in D dimensions (X = [x_1, ..., x_D]^T):
  p(X) = (1 / ((2π)^(D/2) |Σ|^(1/2))) exp(−(1/2) (X − μ)^T Σ^(−1) (X − μ))
  (the slide plots a 2-D example with D = 2, μ_1 = μ_2 = 0 and a non-diagonal Σ)
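Both formulas can be implemented and cross-checked against each other, since for D = 1 the multivariate expression must reduce to the scalar one; a Python/NumPy sketch:

```python
import numpy as np

def gauss_pdf_1d(x, mu, sigma):
    # p(x) = exp(-(x - mu)^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

def gauss_pdf_nd(X, mu, Sigma):
    # p(X) = exp(-0.5 (X - mu)^T Sigma^-1 (X - mu)) / ((2 pi)^(D/2) |Sigma|^(1/2))
    D = len(mu)
    diff = X - mu
    quad = diff @ np.linalg.solve(Sigma, diff)   # (X - mu)^T Sigma^-1 (X - mu)
    norm_const = np.sqrt((2 * np.pi) ** D * np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

# For D = 1 (Sigma = [[sigma^2]]) the two expressions must coincide
x, mu, sigma = 0.7, 0.0, 0.5
print(np.isclose(gauss_pdf_1d(x, mu, sigma),
                 gauss_pdf_nd(np.array([x]), np.array([mu]), np.array([[sigma ** 2]]))))  # True
```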
Properties of Gaussian distributions
- Mean and variance describe the whole pdf
- Uncorrelation implies independence: the covariance matrix of the joint probability becomes diagonal
- Given x_1 and x_2 jointly Gaussian, their marginal and conditional pdfs are also Gaussian
- Linear transformations of Gaussians are Gaussian:
  given X ~ N(μ, Σ) and Y = A X + K, then Y ~ N(Aμ + K, A Σ A^T)

Properties of Gaussian distributions: example
- Given two scalar Gaussian r.v., x_1 ~ N(μ_1, σ_1²) and x_2 ~ N(μ_2, σ_2²) as marginal probabilities, consider y = x_1 + x_2
- This is equivalent to X = [x_1, x_2]^T, A = [1 1] and y = A X
- μ_y = μ_1 + μ_2; σ_y² = σ_1² + 2 cov(x_1, x_2) + σ_2²
- If x_1, x_2 have a common variance σ²: σ_y² = 2σ² + 2 cov(x_1, x_2)
- If they are also uncorrelated/independent: σ_y² = 2σ²
- If, instead, they have maximal positive correlation: σ_y² = 4σ²
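The example's three variance cases follow mechanically from σ_y² = A Σ A^T with A = [1 1]; a sketch with common variance σ² = 1:

```python
import numpy as np

A = np.array([[1.0, 1.0]])   # y = x1 + x2 written as y = A X

def var_of_sum(cov12, sigma2=1.0):
    # Covariance of X = [x1, x2]^T with common variance sigma2 and covariance cov12
    Sigma = np.array([[sigma2, cov12],
                      [cov12, sigma2]])
    # Variance of y from the linear-transformation property: sigma_y^2 = A Sigma A^T
    return (A @ Sigma @ A.T)[0, 0]

print(var_of_sum(0.0))   # 2.0: uncorrelated, sigma_y^2 = 2 sigma^2
print(var_of_sum(1.0))   # 4.0: maximal positive correlation, sigma_y^2 = 4 sigma^2
print(var_of_sum(0.5))   # 3.0: general case, sigma^2 + 2 cov + sigma^2
```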
Practitioner Course: Portfolio Optimization September 10, 2008 Before we define dependence, it is useful to define Random variables X and Y are independent iff For all x, y. In particular, F (X,Y ) (x,
More informationGaussian random variables inr n
Gaussian vectors Lecture 5 Gaussian random variables inr n One-dimensional case One-dimensional Gaussian density with mean and standard deviation (called N, ): fx x exp. Proposition If X N,, then ax b
More informationLecture 5: Moment Generating Functions
Lecture 5: Moment Generating Functions IB Paper 7: Probability and Statistics Carl Edward Rasmussen Department of Engineering, University of Cambridge February 28th, 2018 Rasmussen (CUED) Lecture 5: Moment
More informationELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process
Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Definition of stochastic process (random
More informationBayesian decision theory Introduction to Pattern Recognition. Lectures 4 and 5: Bayesian decision theory
Bayesian decision theory 8001652 Introduction to Pattern Recognition. Lectures 4 and 5: Bayesian decision theory Jussi Tohka jussi.tohka@tut.fi Institute of Signal Processing Tampere University of Technology
More information5.1 Consistency of least squares estimates. We begin with a few consistency results that stand on their own and do not depend on normality.
88 Chapter 5 Distribution Theory In this chapter, we summarize the distributions related to the normal distribution that occur in linear models. Before turning to this general problem that assumes normal
More informationProbability theory. References:
Reasoning Under Uncertainty References: Probability theory Mathematical methods in artificial intelligence, Bender, Chapter 7. Expert systems: Principles and programming, g, Giarratano and Riley, pag.
More informationwhere r n = dn+1 x(t)
Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution
More informationBrief Review of Probability
Brief Review of Probability Nuno Vasconcelos (Ken Kreutz-Delgado) ECE Department, UCSD Probability Probability theory is a mathematical language to deal with processes or experiments that are non-deterministic
More informationLecture 2: Repetition of probability theory and statistics
Algorithms for Uncertainty Quantification SS8, IN2345 Tobias Neckel Scientific Computing in Computer Science TUM Lecture 2: Repetition of probability theory and statistics Concept of Building Block: Prerequisites:
More informationECE 4400:693 - Information Theory
ECE 4400:693 - Information Theory Dr. Nghi Tran Lecture 8: Differential Entropy Dr. Nghi Tran (ECE-University of Akron) ECE 4400:693 Lecture 1 / 43 Outline 1 Review: Entropy of discrete RVs 2 Differential
More informationA Few Special Distributions and Their Properties
A Few Special Distributions and Their Properties Econ 690 Purdue University Justin L. Tobias (Purdue) Distributional Catalog 1 / 20 Special Distributions and Their Associated Properties 1 Uniform Distribution
More informationBasics on Probability. Jingrui He 09/11/2007
Basics on Probability Jingrui He 09/11/2007 Coin Flips You flip a coin Head with probability 0.5 You flip 100 coins How many heads would you expect Coin Flips cont. You flip a coin Head with probability
More information