6. Vector Random Variables


In the previous chapter we presented methods for dealing with two random variables. In this chapter we extend these methods to the case of n random variables in the following ways: By representing the n random variables as a vector, we obtain a compact notation for the joint pmf, cdf, and pdf, as well as for marginal and conditional distributions. We present a general method for finding the pdf of transformations of vector random variables. Summary information about the distribution of a vector random variable is provided by an expected value vector and a covariance matrix. We use linear transformations and characteristic functions to find alternative representations of random vectors and their probabilities. We develop optimum estimators for estimating the value of a random variable based on observations of other random variables. We show how jointly Gaussian random vectors have a compact and easy-to-work-with pdf and characteristic function.

Notes and figures are based on or taken from materials in the textbook: Alberto Leon-Garcia, Probability, Statistics, and Random Processes for Electrical Engineering, 3rd ed., Pearson Prentice Hall, 2008.

6.1 Vector Random Variables

The notion of a random variable is easily generalized to the case where several quantities are of interest. A vector random variable X is a function that assigns a vector of real numbers to each outcome ζ in S, the sample space of the random experiment. We use uppercase boldface notation for vector random variables. By convention X is a column vector (n rows by 1 column), so the vector random variable with components X_1, X_2, ..., X_n corresponds to

X = [X_1, X_2, ..., X_n]^T.

We will sometimes write X = (X_1, X_2, ..., X_n) to save space and omit the transpose unless dealing with matrices. Possible values of the vector random variable are denoted by x = (x_1, x_2, ..., x_n), where x_i corresponds to the value of X_i.

6.1.1 Events and Probabilities

6.1.2 Joint Distribution Functions


6.1.3 Independence

6.2 Functions of Several Random Variables

Functions of vector random variables arise naturally in random experiments. For example, X = (X_1, X_2, ..., X_n) may correspond to observations from n repetitions of an experiment that generates a given random variable. We are almost always interested in the sample mean and the sample variance of the observations. In another example, X = (X_1, X_2, ..., X_n) may correspond to samples of a speech waveform, and we may be interested in extracting features that are defined as functions of X for use in a speech recognition system.

6.2.1 One Function of Several Random Variables
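The sample mean and sample variance mentioned above are themselves functions of the n observations; a minimal Python sketch with hypothetical data values:

```python
# Sample mean and sample variance as functions of the observation vector
# X = (X1, ..., Xn); the data values below are hypothetical.
x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(x)

sample_mean = sum(x) / n
# The unbiased sample variance uses the 1/(n-1) normalization.
sample_var = sum((xi - sample_mean) ** 2 for xi in x) / (n - 1)

print(sample_mean)  # 5.0
print(sample_var)
```

Both statistics are scalar functions of the vector random variable, so they are themselves random variables whose distributions are studied in later chapters.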

6.2.2 Transformations of Random Vectors

6.2.3 *pdf of General Transformations


6.3 Expected Values of Vector Random Variables

6.3.1 Mean Vector and Covariance Matrix


6.3.2 Linear Transformations of Random Vectors

6.3.3 *Joint Characteristic Function

6.3.4 *Diagonalization of Covariance Matrix


6.4 Jointly Gaussian Random Vectors

6.4.1 *Linear Transformation of Gaussian Random Variables

6.4.2 *Joint Characteristic Function of a Gaussian Random Variable

6.5 Estimation of Random Variables

In this book we will encounter two basic types of estimation problems. In the first type, we are interested in estimating the parameters of one or more random variables, e.g., probabilities, means, variances, or covariances. In Chapter 1, we stated that relative frequencies can be used to estimate the probabilities of events, and that sample averages can be used to estimate the mean and other moments of a random variable. In Chapters 7 and 8 we will consider this type of estimation further. In this section, we are concerned with the second type of estimation problem, where we are interested in estimating the value of an inaccessible random variable X in terms of the observation of an accessible random variable Y. For example, X could be the input to a communication channel and Y could be the observed output. In a prediction application, X could be a future value of some quantity and Y its present value.

Estimators:
- Maximum a posteriori (MAP) estimator
- Maximum likelihood (ML) estimator
- Minimum MSE estimator

6.5.1 MAP and ML Estimators

We have considered estimation problems informally earlier in the book. For example, in estimating the output of a discrete communications channel we are interested in finding the most probable input given the observation Y = y, that is, the value of the input x that maximizes P[X = x | Y = y]:

max_x P[X = x | Y = y].

In general we refer to the above estimator for X in terms of Y as the maximum a posteriori (MAP) estimator. The a posteriori probability is given by:

P[X = x | Y = y] = P[Y = y | X = x] P[X = x] / P[Y = y],

and so the MAP estimator requires that we know the a priori probabilities P[X = x].
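The MAP rule above can be sketched for a discrete channel. The priors and transition probabilities below are hypothetical, and the denominator P[Y = y] is omitted since it is common to all candidates and does not affect the maximization:

```python
# MAP estimate for a discrete channel: maximize P[X=x | Y=y], which is
# proportional to P[Y=y | X=x] * P[X=x]. All probabilities are hypothetical.
p_x = {0: 0.7, 1: 0.3}                 # a priori probabilities P[X=x]
p_y_given_x = {0: {0: 0.9, 1: 0.1},    # channel transition probabilities
               1: {0: 0.2, 1: 0.8}}    # P[Y=y | X=x]

def map_estimate(y):
    # argmax over x of P[Y=y | X=x] * P[X=x]
    return max(p_x, key=lambda x: p_y_given_x[x][y] * p_x[x])

# Observing y=0: compare 0.9*0.7 = 0.63 vs 0.2*0.3 = 0.06 -> x_hat = 0.
# Observing y=1: compare 0.1*0.7 = 0.07 vs 0.8*0.3 = 0.24 -> x_hat = 1.
print(map_estimate(0), map_estimate(1))  # 0 1
```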

In some situations we know P[Y = y | X = x] but we do not know the a priori probabilities, so we select the estimator value x as the value that maximizes the likelihood of the observed value Y = y:

max_x P[Y = y | X = x].

We refer to this estimator of X in terms of Y as the maximum likelihood (ML) estimator. We can define MAP and ML estimators when X and Y are continuous random variables by replacing events of the form {Y = y} by {y < Y ≤ y + dy}. If X and Y are continuous, the MAP estimator for X given the observation Y = y is given by:

max_x f_X(x | y),

and the ML estimator for X given the observation Y = y is given by:

max_x f_Y(y | x).
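For continuous observations the ML rule maximizes f_Y(y | x) over x. A small sketch, assuming the hypothetical additive-noise model Y = X + N with N ~ N(0, 1), for which the likelihood is maximized at x = y; the grid spacing is an arbitrary choice:

```python
import math

# ML estimate for X from Y = X + N, N ~ N(0, 1): the likelihood
# f(y|x) = exp(-(y-x)^2 / 2) / sqrt(2*pi) is maximized at x = y.
def likelihood(y, x):
    return math.exp(-0.5 * (y - x) ** 2) / math.sqrt(2 * math.pi)

y_obs = 1.3
grid = [i / 100 for i in range(-500, 501)]   # candidate x values
x_ml = max(grid, key=lambda x: likelihood(y_obs, x))
print(x_ml)  # 1.3
```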

Example 6.25 Comparison of ML and MAP Estimators

Let X and Y be the random pair in Example 5.16. Find the MAP and ML estimators for X in terms of Y = y.

From Examples 5.16 and 5.32, the joint pdf of X and Y is:

f_{X,Y}(x, y) = 2 e^{-x} e^{-y} for 0 ≤ y ≤ x < ∞, and 0 elsewhere,

with marginals f_X(x) = 2 e^{-x}(1 - e^{-x}) for x ≥ 0 and f_Y(y) = 2 e^{-2y} for y ≥ 0, so the conditional pdfs are:

f_X(x | y) = f_{X,Y}(x, y) / f_Y(y) = e^{-(x - y)} for x ≥ y ≥ 0,

f_Y(y | x) = f_{X,Y}(x, y) / f_X(x) = e^{-y} / (1 - e^{-x}) for 0 ≤ y ≤ x.

The MAP estimator maximizes f_X(x | y) = e^{-(x - y)} for x ≥ y. As the function is a smooth curve in x, using the derivative to determine the max and min does not help, and the maximum must be at one of the bounds, either x = y or x = ∞. As the function decreases as x increases beyond y, the maximum must occur at x = y. Therefore the MAP estimator is x̂_MAP = y.

The ML estimator maximizes f_Y(y | x) = e^{-y} / (1 - e^{-x}) for x ≥ y. The derivative is again not useful; however, as x increases beyond y, the denominator 1 - e^{-x} becomes larger, so the conditional pdf decreases. Therefore the ML estimator is x̂_ML = y.

In this example the ML and MAP estimators agree.

Example 6.26 Jointly Gaussian Random Variables

Find the MAP and ML estimators of X in terms of Y = y when X and Y are jointly Gaussian random variables.

From Section 5.9, the jointly Gaussian pdf is:

f_{X,Y}(x, y) = [1 / (2π σ_X σ_Y sqrt(1 - ρ²))] exp{ -[ (x - m_X)²/σ_X² - 2ρ(x - m_X)(y - m_Y)/(σ_X σ_Y) + (y - m_Y)²/σ_Y² ] / (2(1 - ρ²)) }.

The conditional pdf of X given Y = y is:

f_X(x | y) = [1 / (sqrt(2π(1 - ρ²)) σ_X)] exp{ -[x - m_X - ρ(σ_X/σ_Y)(y - m_Y)]² / (2(1 - ρ²)σ_X²) },

which is maximized by the value of x for which the exponent is zero. Therefore:

x̂_MAP = m_X + ρ (σ_X/σ_Y)(y - m_Y).

The conditional pdf of Y given X = x is:

f_Y(y | x) = [1 / (sqrt(2π(1 - ρ²)) σ_Y)] exp{ -[y - m_Y - ρ(σ_Y/σ_X)(x - m_X)]² / (2(1 - ρ²)σ_Y²) },

which is again maximized by the value of x for which the exponent is zero:

y - m_Y - ρ(σ_Y/σ_X)(x̂_ML - m_X) = 0, so x̂_ML = m_X + (σ_X/(ρ σ_Y))(y - m_Y).

Therefore we conclude that the two estimators are not equal. In other words, knowledge of the a priori probabilities of X will affect the estimator.
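The closed-form MAP estimator above can be checked numerically by maximizing the conditional pdf on a grid; the parameter values below are hypothetical:

```python
import math

# Numeric check of the Gaussian MAP formula (hypothetical parameters):
# x_MAP = m_x + rho*(sx/sy)*(y - m_y) maximizes the conditional pdf f(x|y).
m_x, m_y, sx, sy, rho = 1.0, 2.0, 2.0, 1.0, 0.5

def cond_pdf(x, y):
    # f(x|y): Gaussian with mean m_x + rho*(sx/sy)*(y - m_y),
    # variance sx^2 * (1 - rho^2).
    mean = m_x + rho * (sx / sy) * (y - m_y)
    var = sx * sx * (1 - rho * rho)
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

y_obs = 3.0
closed_form = m_x + rho * (sx / sy) * (y_obs - m_y)   # = 2.0
grid = [i / 1000 for i in range(-5000, 5001)]
x_map = max(grid, key=lambda x: cond_pdf(x, y_obs))
print(closed_form, x_map)  # both 2.0
```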

6.5.2 Minimum MSE Linear Estimator

Another estimate for X is given by a function of the observation, X̂ = g(Y). In general, the estimation error, X̃ = X - X̂ = X - g(Y), is nonzero, and there is a cost associated with the error, c(X̃) = c(X - g(Y)). We are usually interested in finding the function g(Y) that minimizes the expected value of the cost, E[c(X̃)] = E[c(X - g(Y))]. For example, if X and Y are the discrete input and output of a communication channel, and the cost is zero when g(Y) = X and one otherwise, then the expected value of the cost corresponds to the probability of error, that is, P[X ≠ g(Y)]. When X and Y are continuous random variables, we frequently use the mean square error (MSE) as the cost:

e = E[X̃²] = E[(X - g(Y))²].

In the remainder of this section we focus on this particular cost function. We first consider the case where g(Y) is constrained to be a linear function of Y, and then consider the case where g(Y) can be any function, whether linear or nonlinear.

First, consider the problem of estimating a random variable X by a constant a so that the mean square error is minimized:

min_a E[(X - a)²] = E[X²] - 2aE[X] + a².    (6.51)

The best a is found by taking the derivative with respect to a, setting the result to zero, and solving for a. The result is:

E[X] - a* = 0, so a* = E[X],

which makes sense, since the expected value of X is the center of mass of the pdf. Note that the conjugate of a is shown in case a is complex. The mean square error for this estimator is equal to E[(X - a*)²] = VAR(X).

Now consider estimating X by a linear function g(Y) = aY + b:

min_{a,b} E[(X - aY - b)²].    (6.53a)

Equation (6.53a) can be viewed as the approximation of X - aY by the constant b. This is the minimization posed in Eq. (6.51), and so the best b is:

b* = E[X - aY] = E[X] - aE[Y].

Substitution into Eq. (6.53a) implies that the best a is found by:

min_a E[{(X - E[X]) - a(Y - E[Y])}²].
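The first result above, that the best constant estimator is the mean, can be checked directly; the data values below are hypothetical:

```python
# Best constant predictor in the MSE sense: for any constant a,
# E[(X-a)^2] = VAR(X) + (E[X]-a)^2 >= VAR(X), with equality at a = E[X].
x = [1.0, 2.0, 6.0, 7.0]   # hypothetical observations of X
mean = sum(x) / len(x)     # 4.0

def mse(a):
    # empirical mean square error of the constant estimate a
    return sum((xi - a) ** 2 for xi in x) / len(x)

# MSE at the mean equals the (population) variance; other constants do worse.
print(mse(mean), mse(3.0), mse(5.0))  # 6.5 7.5 7.5
```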

We once again differentiate with respect to a, set the result to zero, and solve for a:

E[{(X - E[X]) - a(Y - E[Y])}(Y - E[Y])] = COV(X, Y) - a VAR(Y) = 0.    (6.54)

The best coefficient a is found to be:

a* = COV(X, Y) / VAR(Y) = ρ_{X,Y} σ_X / σ_Y.

Therefore, the minimum mean square error (MMSE) linear estimator for X in terms of Y is:

X̂ = a*Y + b* = a*(Y - E[Y]) + E[X] = ρ_{X,Y} σ_X [(Y - E[Y]) / σ_Y] + E[X].

The term (Y - E[Y])/σ_Y is simply a zero-mean, unit-variance version of Y. Thus σ_X (Y - E[Y])/σ_Y is a rescaled version of Y that has the variance of the random variable that is being estimated, namely σ_X². The term E[X] simply ensures that the estimator has the correct mean. The key term in the above estimator is the correlation coefficient ρ_{X,Y}, which specifies the sign and extent of the estimate's deviation relative to E[X]. If X and Y are uncorrelated then the best estimate for X is its mean, E[X]. On the other hand, if ρ_{X,Y} = ±1 then the best estimate is equal to:

X̂ = ±σ_X (Y - E[Y]) / σ_Y + E[X].

We draw our attention to the second equality in Eq. (6.54):

E[{(X - E[X]) - a*(Y - E[Y])}(Y - E[Y])] = 0.

This equation is called the orthogonality condition because it states that the error of the best linear estimator, the quantity inside the braces, is orthogonal to the observation Y - E[Y]. The orthogonality condition is a fundamental result in mean square estimation.
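The coefficients a* and b* and the orthogonality condition can be checked with sample moments; the model below (X = 2Y + noise) is hypothetical:

```python
import random

# Sample-based check of the MMSE linear estimator and orthogonality.
# Hypothetical model: X = 2*Y + noise, with Y ~ N(0,1), noise ~ N(0,1).
random.seed(0)
n = 200_000
ys = [random.gauss(0, 1) for _ in range(n)]
xs = [2 * y + random.gauss(0, 1) for y in ys]

mx = sum(xs) / n
my = sum(ys) / n
cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
var_y = sum((y - my) ** 2 for y in ys) / n

a = cov_xy / var_y     # a* = COV(X,Y)/VAR(Y), should be near 2
b = mx - a * my        # b* = E[X] - a*E[Y]

# Orthogonality: the estimation error is orthogonal to (Y - E[Y]).
err_dot_y = sum((x - a * y - b) * (y - my) for x, y in zip(xs, ys)) / n
print(round(a, 2), err_dot_y)
```

By construction of a*, the empirical error-observation correlation is zero up to floating-point rounding, which is exactly what the orthogonality condition asserts.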

Computing the cost function for this estimator:

e*_L = E[{(X - E[X]) - a*(Y - E[Y])}²] = E[{(X - E[X]) - a*(Y - E[Y])}(X - E[X])] = VAR(X) - a* COV(X, Y) = VAR(X)(1 - ρ²_{X,Y}),

where the second equality follows from the orthogonality condition. Note that when ρ_{X,Y} = ±1 the mean square error is zero. This implies that:

P[X = a*Y + b*] = 1 when ρ_{X,Y} = ±1,

so that X is then essentially a linear function of Y.

6.5.3 Minimum MSE Estimator (generally nonlinear)

In general, the estimator for X that minimizes the mean square error is a nonlinear function of Y. The estimator g(Y) that best approximates X in the sense of minimizing the mean square error must satisfy:

min_g E[(X - g(Y))²].

The problem can be solved by using conditional expectation:

E[(X - g(Y))²] = E[ E[(X - g(Y))² | Y] ] = ∫ E[(X - g(y))² | Y = y] f_Y(y) dy.

The integrand above is positive for all y; therefore, the integral is minimized by minimizing E[(X - g(y))² | Y = y] for each y. But g(y) is a constant as far as the conditional expectation is concerned, so the problem is equivalent to Eq. (6.51), and the "constant" that minimizes E[(X - g(y))² | Y = y] is:

g*(y) = E[X | Y = y].    (6.58)

The function g*(y) = E[X | Y = y] is called the regression curve, which simply traces the conditional expected value of X given the observation Y = y. The mean square error of the best estimator is:

e* = E[(X - g*(Y))²] = ∫ E[(X - E[X | y])² | Y = y] f_Y(y) dy = ∫ VAR[X | Y = y] f_Y(y) dy.

Linear estimators are in general suboptimal and have larger mean square errors.
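A sketch contrasting the regression curve with a linear estimate, under the hypothetical model X = Y² + noise, for which E[X | Y = y] = y² while COV(X, Y) = 0, so that the best linear estimator reduces (up to sampling error) to the constant E[X]:

```python
import random

# When E[X|Y] is nonlinear, the regression curve beats any linear estimator.
# Hypothetical model: Y ~ Uniform(-1, 1), X = Y^2 + noise, noise ~ N(0, 0.1).
# Here E[X|Y=y] = y^2, and COV(X, Y) = E[Y^3] = 0 by symmetry, so the best
# linear estimate is (approximately) just the constant E[X].
random.seed(1)
n = 100_000
ys = [random.uniform(-1, 1) for _ in range(n)]
xs = [y * y + random.gauss(0, 0.1) for y in ys]

mx = sum(xs) / n
mse_linear = sum((x - mx) ** 2 for x in xs) / n                       # ~VAR(X)
mse_regression = sum((x - y * y) ** 2 for x, y in zip(xs, ys)) / n    # ~0.01

print(mse_linear > mse_regression)  # True
```

The regression estimator achieves an MSE near the noise variance (0.01), while the linear estimator is stuck with the full variance of X.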

6.5.4 Estimation Using a Vector of Observations

The MAP, ML, and mean square estimators can be extended to the case where a vector of observations Y is available. Here we focus on mean square estimation. We wish to estimate X by a function g(Y) of a random vector of observations so that the mean square error is minimized:

min_g E[(X - g(Y))²].

To simplify the discussion we will assume that X and the Y_i have zero means. The same derivation that led to Eq. (6.58) leads to the optimum minimum mean square estimator:

g*(y) = E[X | Y = y].

The minimum mean square error is then:

e* = ∫ E[(X - E[X | y])² | Y = y] f_Y(y) dy = ∫ VAR[X | Y = y] f_Y(y) dy.

For the best linear estimator X̂ = aᵀY, the orthogonality conditions E[(X - aᵀY) Y_j] = 0, j = 1, ..., n, give the normal equations E[Y Yᵀ] a = E[X Y]. Note that the solution is based on an autocorrelation matrix and a cross-correlation vector.
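The normal-equation solution can be sketched for two observations; the model below is hypothetical:

```python
import random

# Linear MMSE estimation from a vector of two observations (zero means):
# orthogonality gives the normal equations R_Y a = r, with R_Y = E[Y Y^T]
# (autocorrelation matrix) and r = E[X Y] (cross-correlation vector).
# Hypothetical model: X = Y1 + 0.5*Y2 + noise, Y1, Y2 independent N(0, 1).
random.seed(2)
n = 200_000
y1 = [random.gauss(0, 1) for _ in range(n)]
y2 = [random.gauss(0, 1) for _ in range(n)]
x = [u + 0.5 * v + random.gauss(0, 0.3) for u, v in zip(y1, y2)]

r11 = sum(v * v for v in y1) / n          # entries of the sample R_Y
r22 = sum(v * v for v in y2) / n
r12 = sum(u * v for u, v in zip(y1, y2)) / n
c1 = sum(u * v for u, v in zip(x, y1)) / n   # sample cross-correlation vector
c2 = sum(u * v for u, v in zip(x, y2)) / n

# Solve the 2x2 system [[r11, r12], [r12, r22]] a = [c1, c2] by Cramer's rule.
det = r11 * r22 - r12 * r12
a1 = (c1 * r22 - c2 * r12) / det
a2 = (r11 * c2 - r12 * c1) / det
print(round(a1, 2), round(a2, 2))  # close to 1.0 and 0.5
```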


6.6 *Generating Correlated Vector Random Variables

Many applications involve vectors or sequences of correlated random variables. Computer simulation models of such applications therefore require methods for generating such random variables. In this section we present methods for generating vectors of random variables with specified covariance matrices. We also discuss the generation of jointly Gaussian vector random variables.

6.6.1 Generating Random Vectors with Specified Covariance Matrix
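One such method uses a Cholesky-type factorization: if X has unit-variance, uncorrelated components and K = A Aᵀ, then Y = A X has covariance matrix K. A Python sketch, with K = [[4, 2], [2, 4]] chosen as an assumed example:

```python
import math
import random

# Generating a random vector with a specified covariance matrix K:
# if X has unit-variance, uncorrelated components, then Y = A X has
# covariance A A^T, so choosing A with A A^T = K gives covariance K.
# K below is an assumed example.
K = [[4.0, 2.0], [2.0, 4.0]]
# Cholesky factor of the 2x2 matrix K, computed by hand:
a = math.sqrt(K[0][0])            # 2
b = K[1][0] / a                   # 1
c = math.sqrt(K[1][1] - b * b)    # sqrt(3)

random.seed(3)
n = 200_000
ys = [(a * x1, b * x1 + c * x2)
      for x1, x2 in ((random.gauss(0, 1), random.gauss(0, 1))
                     for _ in range(n))]

# Sample covariance entries should be close to K.
k11 = sum(u * u for u, _ in ys) / n
k12 = sum(u * v for u, v in ys) / n
k22 = sum(v * v for _, v in ys) / n
print(round(k11, 1), round(k12, 1), round(k22, 1))  # close to 4, 2, 4
```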

6.6.2 Generating Vectors of Jointly Gaussian Random Variables

%%
% Example 6.34
% The necessary steps for generating the Gaussian random variables
% with the covariance matrix from Example 6.32.
clear
%close all
U1 = rand(10000, 1);       % Create a 10000-element vector U1.
U2 = rand(10000, 1);       % Create a 10000-element vector U2.
R2 = -2*log(U1);           % Find R^2.
TH = 2*pi*U2;              % Find theta.
X1 = sqrt(R2).*sin(TH);    % Generate X1.
X2 = sqrt(R2).*cos(TH);    % Generate X2.
m1 = mean(X1)
m2 = mean(X2)
v1 = sqrt(cov(X1))
v2 = sqrt(cov(X2))
Y1 = X1 + sqrt(3)*X2;      % Generate Y1.
Y2 = -X1 + sqrt(3)*X2;     % Generate Y2.
figure
plot(X1, X2, '+')          % Plot scattergram of (X1, X2).
figure
plot(Y1, Y2, '+')          % Plot scattergram of (Y1, Y2).

%%
% Example 6.34
% The necessary steps for generating the Gaussian random variables
% with the covariance matrix from Example 6.32, using randn directly.
clear
close all
X1 = randn(10000, 1);      % Create a 10000-element Gaussian vector X1.
X2 = randn(10000, 1);      % Create a 10000-element Gaussian vector X2.
m1 = mean(X1)
m2 = mean(X2)
v1 = sqrt(cov(X1))
v2 = sqrt(cov(X2))
Y1 = X1 + sqrt(3)*X2;      % Generate Y1.
Y2 = -X1 + sqrt(3)*X2;     % Generate Y2.
figure
plot(X1, X2, '+')          % Plot scattergram of (X1, X2).
figure
plot(Y1, Y2, '+')          % Plot scattergram of (Y1, Y2).


Summary

- The joint statistical behavior of a vector of random variables X is specified by the joint cumulative distribution function, the joint probability mass function, or the joint probability density function. The probability of any event involving the joint behavior of these random variables can be computed from these functions.
- The statistical behavior of subsets of random variables from a vector X is specified by the marginal cdf, marginal pdf, or marginal pmf, which can be obtained from the joint cdf, joint pdf, or joint pmf of X.
- A set of random variables is independent if the probability of a product-form event is equal to the product of the probabilities of the component events. Equivalent conditions for the independence of a set of random variables are that the joint cdf, joint pdf, or joint pmf factors into the product of the corresponding marginal functions.
- The statistical behavior of a subset of random variables from a vector X, given the exact values of the other random variables in the vector, is specified by the conditional cdf, conditional pmf, or conditional pdf. Many problems naturally lend themselves to a solution that involves conditioning on the values of some of the random variables. In these problems, the expected value of random variables can be obtained through the use of conditional expectation.
- The mean vector and the covariance matrix provide summary information about a vector random variable. The joint characteristic function contains all of the information provided by the joint pdf.
- Transformations of vector random variables generate other vector random variables. Standard methods are available for finding the joint distributions of the new random vectors.
- The orthogonality condition provides a set of linear equations for finding the minimum mean square linear estimate. The best mean square estimator is given by the conditional expected value.
- The joint pdf of a vector X of jointly Gaussian random variables is determined by the vector of the means and by the covariance matrix. All marginal pdfs and conditional pdfs of subsets of X are Gaussian pdfs. Any linear function or linear transformation of jointly Gaussian random variables will result in a set of jointly Gaussian random variables.
- A vector of random variables with an arbitrary covariance matrix can be generated by taking a linear transformation of a vector of unit-variance, uncorrelated random variables. A vector of Gaussian random variables with an arbitrary covariance matrix can be generated by taking a linear transformation of a vector of independent, unit-variance jointly Gaussian random variables.

CHECKLIST OF IMPORTANT TERMS

Conditional cdf, conditional expectation, conditional pdf, conditional pmf, correlation matrix, covariance matrix, independent random variables, Jacobian of a transformation, joint cdf, joint characteristic function, joint pdf, joint pmf, jointly continuous random variables, jointly Gaussian random variables, Karhunen-Loève expansion, MAP estimator, marginal cdf, marginal pdf, marginal pmf, maximum likelihood estimator, mean square error, mean vector, MMSE linear estimator, orthogonality condition, product-form event, regression curve, vector random variables.


More information

Estimators as Random Variables

Estimators as Random Variables Estimation Theory Overview Properties Bias, Variance, and Mean Square Error Cramér-Rao lower bound Maimum likelihood Consistency Confidence intervals Properties of the mean estimator Introduction Up until

More information

L2: Review of probability and statistics

L2: Review of probability and statistics Probability L2: Review of probability and statistics Definition of probability Axioms and properties Conditional probability Bayes theorem Random variables Definition of a random variable Cumulative distribution

More information

EE 302 Division 1. Homework 6 Solutions.

EE 302 Division 1. Homework 6 Solutions. EE 3 Division. Homework 6 Solutions. Problem. A random variable X has probability density { C f X () e λ,,, otherwise, where λ is a positive real number. Find (a) The constant C. Solution. Because of the

More information

Review (probability, linear algebra) CE-717 : Machine Learning Sharif University of Technology

Review (probability, linear algebra) CE-717 : Machine Learning Sharif University of Technology Review (probability, linear algebra) CE-717 : Machine Learning Sharif University of Technology M. Soleymani Fall 2012 Some slides have been adopted from Prof. H.R. Rabiee s and also Prof. R. Gutierrez-Osuna

More information

MACHINE LEARNING ADVANCED MACHINE LEARNING

MACHINE LEARNING ADVANCED MACHINE LEARNING MACHINE LEARNING ADVANCED MACHINE LEARNING Recap of Important Notions on Estimation of Probability Density Functions 2 2 MACHINE LEARNING Overview Definition pdf Definition joint, condition, marginal,

More information

Mobile Robot Localization

Mobile Robot Localization Mobile Robot Localization 1 The Problem of Robot Localization Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)? Two sources of uncertainty: - observations

More information

Factor Analysis. Qian-Li Xue

Factor Analysis. Qian-Li Xue Factor Analysis Qian-Li Xue Biostatistics Program Harvard Catalyst The Harvard Clinical & Translational Science Center Short course, October 7, 06 Well-used latent variable models Latent variable scale

More information

Let X denote a random variable, and z = h(x) a function of x. Consider the

Let X denote a random variable, and z = h(x) a function of x. Consider the EE385 Class Notes 11/13/01 John Stensb Chapter 5 Moments and Conditional Statistics Let denote a random variable, and z = h(x) a function of x. Consider the transformation Z = h(). We saw that we could

More information

3.0 PROBABILITY, RANDOM VARIABLES AND RANDOM PROCESSES

3.0 PROBABILITY, RANDOM VARIABLES AND RANDOM PROCESSES 3.0 PROBABILITY, RANDOM VARIABLES AND RANDOM PROCESSES 3.1 Introduction In this chapter we will review the concepts of probabilit, rom variables rom processes. We begin b reviewing some of the definitions

More information

LINEAR MMSE ESTIMATION

LINEAR MMSE ESTIMATION LINEAR MMSE ESTIMATION TERM PAPER FOR EE 602 STATISTICAL SIGNAL PROCESSING By, DHEERAJ KUMAR VARUN KHAITAN 1 Introduction Linear MMSE estimators are chosen in practice because they are simpler than the

More information

CHAPTER 5. Jointly Probability Mass Function for Two Discrete Distributed Random Variables:

CHAPTER 5. Jointly Probability Mass Function for Two Discrete Distributed Random Variables: CHAPTER 5 Jointl Distributed Random Variable There are some situations that experiment contains more than one variable and researcher interested in to stud joint behavior of several variables at the same

More information

Expression arrays, normalization, and error models

Expression arrays, normalization, and error models 1 Epression arrays, normalization, and error models There are a number of different array technologies available for measuring mrna transcript levels in cell populations, from spotted cdna arrays to in

More information

An example to illustrate frequentist and Bayesian approches

An example to illustrate frequentist and Bayesian approches Frequentist_Bayesian_Eample An eample to illustrate frequentist and Bayesian approches This is a trivial eample that illustrates the fundamentally different points of view of the frequentist and Bayesian

More information

Parameterized Joint Densities with Gaussian and Gaussian Mixture Marginals

Parameterized Joint Densities with Gaussian and Gaussian Mixture Marginals Parameterized Joint Densities with Gaussian and Gaussian Miture Marginals Feli Sawo, Dietrich Brunn, and Uwe D. Hanebeck Intelligent Sensor-Actuator-Sstems Laborator Institute of Computer Science and Engineering

More information

Chapter 5 Class Notes

Chapter 5 Class Notes Chapter 5 Class Notes Sections 5.1 and 5.2 It is quite common to measure several variables (some of which may be correlated) and to examine the corresponding joint probability distribution One example

More information

2.7 The Gaussian Probability Density Function Forms of the Gaussian pdf for Real Variates

2.7 The Gaussian Probability Density Function Forms of the Gaussian pdf for Real Variates .7 The Gaussian Probability Density Function Samples taken from a Gaussian process have a jointly Gaussian pdf (the definition of Gaussian process). Correlator outputs are Gaussian random variables if

More information

The data can be downloaded as an Excel file under Econ2130 at

The data can be downloaded as an Excel file under Econ2130 at 1 HG Revised Sept. 018 Supplement to lecture 9 (Tuesda 18 Sept) On the bivariate normal model Eample: daughter s height (Y) vs. mother s height (). Data collected on Econ 130 lectures 010-01. The data

More information

Basic concepts in estimation

Basic concepts in estimation Basic concepts in estimation Random and nonrandom parameters Definitions of estimates ML Maimum Lielihood MAP Maimum A Posteriori LS Least Squares MMS Minimum Mean square rror Measures of quality of estimates

More information

Stochastic processes Lecture 1: Multiple Random Variables Ch. 5

Stochastic processes Lecture 1: Multiple Random Variables Ch. 5 Stochastic processes Lecture : Multiple Random Variables Ch. 5 Dr. Ir. Richard C. Hendriks 26/04/8 Delft University of Technology Challenge the future Organization Plenary Lectures Book: R.D. Yates and

More information

Mini-Lecture 8.1 Solving Quadratic Equations by Completing the Square

Mini-Lecture 8.1 Solving Quadratic Equations by Completing the Square Mini-Lecture 8.1 Solving Quadratic Equations b Completing the Square Learning Objectives: 1. Use the square root propert to solve quadratic equations.. Solve quadratic equations b completing the square.

More information

0.24 adults 2. (c) Prove that, regardless of the possible values of and, the covariance between X and Y is equal to zero. Show all work.

0.24 adults 2. (c) Prove that, regardless of the possible values of and, the covariance between X and Y is equal to zero. Show all work. 1 A socioeconomic stud analzes two discrete random variables in a certain population of households = number of adult residents and = number of child residents It is found that their joint probabilit mass

More information

Chapter 3. Expectations

Chapter 3. Expectations pectations - Chapter. pectations.. Introduction In this chapter we define five mathematical epectations the mean variance standard deviation covariance and correlation coefficient. We appl these general

More information

Language and Statistics II

Language and Statistics II Language and Statistics II Lecture 19: EM for Models of Structure Noah Smith Epectation-Maimization E step: i,, q i # p r $ t = p r i % ' $ t i, p r $ t i,' soft assignment or voting M step: r t +1 # argma

More information

Signals and Spectra - Review

Signals and Spectra - Review Signals and Spectra - Review SIGNALS DETERMINISTIC No uncertainty w.r.t. the value of a signal at any time Modeled by mathematical epressions RANDOM some degree of uncertainty before the signal occurs

More information

ECE 5615/4615 Computer Project

ECE 5615/4615 Computer Project Set #1p Due Friday March 17, 017 ECE 5615/4615 Computer Project The details of this first computer project are described below. This being a form of take-home exam means that each person is to do his/her

More information

QUALITATIVE ANALYSIS OF DIFFERENTIAL EQUATIONS

QUALITATIVE ANALYSIS OF DIFFERENTIAL EQUATIONS arxiv:1803.0591v1 [math.gm] QUALITATIVE ANALYSIS OF DIFFERENTIAL EQUATIONS Aleander Panfilov stable spiral det A 6 3 5 4 non stable spiral D=0 stable node center non stable node saddle 1 tr A QUALITATIVE

More information

Computation of Total Capacity for Discrete Memoryless Multiple-Access Channels

Computation of Total Capacity for Discrete Memoryless Multiple-Access Channels IEEE TRANSACTIONS ON INFORATION THEORY, VOL. 50, NO. 11, NOVEBER 2004 2779 Computation of Total Capacit for Discrete emorless ultiple-access Channels ohammad Rezaeian, ember, IEEE, and Ale Grant, Senior

More information

Covariance and Correlation Class 7, Jeremy Orloff and Jonathan Bloom

Covariance and Correlation Class 7, Jeremy Orloff and Jonathan Bloom 1 Learning Goals Covariance and Correlation Class 7, 18.05 Jerem Orloff and Jonathan Bloom 1. Understand the meaning of covariance and correlation. 2. Be able to compute the covariance and correlation

More information

Determinants. We said in Section 3.3 that a 2 2 matrix a b. Determinant of an n n Matrix

Determinants. We said in Section 3.3 that a 2 2 matrix a b. Determinant of an n n Matrix 3.6 Determinants We said in Section 3.3 that a 2 2 matri a b is invertible if and onl if its c d erminant, ad bc, is nonzero, and we saw the erminant used in the formula for the inverse of a 2 2 matri.

More information

Machine Learning 4771

Machine Learning 4771 Machine Learning 4771 Instructor: Tony Jebara Topic 7 Unsupervised Learning Statistical Perspective Probability Models Discrete & Continuous: Gaussian, Bernoulli, Multinomial Maimum Likelihood Logistic

More information

2. A Basic Statistical Toolbox

2. A Basic Statistical Toolbox . A Basic Statistical Toolbo Statistics is a mathematical science pertaining to the collection, analysis, interpretation, and presentation of data. Wikipedia definition Mathematical statistics: concerned

More information

Chapter 5,6 Multiple RandomVariables

Chapter 5,6 Multiple RandomVariables Chapter 5,6 Multiple RandomVariables ENCS66 - Probabilityand Stochastic Processes Concordia University Vector RandomVariables A vector r.v. is a function where is the sample space of a random experiment.

More information

Math 180A. Lecture 16 Friday May 7 th. Expectation. Recall the three main probability density functions so far (1) Uniform (2) Exponential.

Math 180A. Lecture 16 Friday May 7 th. Expectation. Recall the three main probability density functions so far (1) Uniform (2) Exponential. Math 8A Lecture 6 Friday May 7 th Epectation Recall the three main probability density functions so far () Uniform () Eponential (3) Power Law e, ( ), Math 8A Lecture 6 Friday May 7 th Epectation Eample

More information

Computation fundamentals of discrete GMRF representations of continuous domain spatial models

Computation fundamentals of discrete GMRF representations of continuous domain spatial models Computation fundamentals of discrete GMRF representations of continuous domain spatial models Finn Lindgren September 23 2015 v0.2.2 Abstract The fundamental formulas and algorithms for Bayesian spatial

More information

MACHINE LEARNING 2012 MACHINE LEARNING

MACHINE LEARNING 2012 MACHINE LEARNING 1 MACHINE LEARNING kernel CCA, kernel Kmeans Spectral Clustering 2 Change in timetable: We have practical session net week! 3 Structure of toda s and net week s class 1) Briefl go through some etension

More information

Affine transformations. Brian Curless CSE 557 Fall 2014

Affine transformations. Brian Curless CSE 557 Fall 2014 Affine transformations Brian Curless CSE 557 Fall 2014 1 Reading Required: Shirle, Sec. 2.4, 2.7 Shirle, Ch. 5.1-5.3 Shirle, Ch. 6 Further reading: Fole, et al, Chapter 5.1-5.5. David F. Rogers and J.

More information

Bayesian Inference of Noise Levels in Regression

Bayesian Inference of Noise Levels in Regression Bayesian Inference of Noise Levels in Regression Christopher M. Bishop Microsoft Research, 7 J. J. Thomson Avenue, Cambridge, CB FB, U.K. cmbishop@microsoft.com http://research.microsoft.com/ cmbishop

More information

Problem Set 3 Due Oct, 5

Problem Set 3 Due Oct, 5 EE6: Random Processes in Systems Lecturer: Jean C. Walrand Problem Set 3 Due Oct, 5 Fall 6 GSI: Assane Gueye This problem set essentially reviews detection theory. Not all eercises are to be turned in.

More information

Perturbation Theory for Variational Inference

Perturbation Theory for Variational Inference Perturbation heor for Variational Inference Manfred Opper U Berlin Marco Fraccaro echnical Universit of Denmark Ulrich Paquet Apple Ale Susemihl U Berlin Ole Winther echnical Universit of Denmark Abstract

More information

Affine transformations

Affine transformations Reading Optional reading: Affine transformations Brian Curless CSE 557 Autumn 207 Angel and Shreiner: 3., 3.7-3. Marschner and Shirle: 2.3, 2.4.-2.4.4, 6..-6..4, 6.2., 6.3 Further reading: Angel, the rest

More information

PROBLEMS In each of Problems 1 through 12:

PROBLEMS In each of Problems 1 through 12: 6.5 Impulse Functions 33 which is the formal solution of the given problem. It is also possible to write y in the form 0, t < 5, y = 5 e (t 5/ sin 5 (t 5, t 5. ( The graph of Eq. ( is shown in Figure 6.5.3.

More information

Lecture Notes 2 Random Variables. Discrete Random Variables: Probability mass function (pmf)

Lecture Notes 2 Random Variables. Discrete Random Variables: Probability mass function (pmf) Lecture Notes 2 Random Variables Definition Discrete Random Variables: Probability mass function (pmf) Continuous Random Variables: Probability density function (pdf) Mean and Variance Cumulative Distribution

More information

Mathematical Statistics. Gregg Waterman Oregon Institute of Technology

Mathematical Statistics. Gregg Waterman Oregon Institute of Technology Mathematical Statistics Gregg Waterman Oregon Institute of Technolog c Gregg Waterman This work is licensed under the Creative Commons Attribution. International license. The essence of the license is

More information

14.1 Systems of Linear Equations in Two Variables

14.1 Systems of Linear Equations in Two Variables 86 Chapter 1 Sstems of Equations and Matrices 1.1 Sstems of Linear Equations in Two Variables Use the method of substitution to solve sstems of equations in two variables. Use the method of elimination

More information

LINEAR REGRESSION ANALYSIS

LINEAR REGRESSION ANALYSIS LINEAR REGRESSION ANALYSIS MODULE V Lecture - 2 Correcting Model Inadequacies Through Transformation and Weighting Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technolog Kanpur

More information

Correlation analysis 2: Measures of correlation

Correlation analysis 2: Measures of correlation Correlation analsis 2: Measures of correlation Ran Tibshirani Data Mining: 36-462/36-662 Februar 19 2013 1 Review: correlation Pearson s correlation is a measure of linear association In the population:

More information

Random Processes. By: Nick Kingsbury

Random Processes. By: Nick Kingsbury Random Processes By: Nick Kingsbury Random Processes By: Nick Kingsbury Online: < http://cnx.org/content/col10204/1.3/ > C O N N E X I O N S Rice University, Houston, Texas This selection and arrangement

More information

Variations. ECE 6540, Lecture 10 Maximum Likelihood Estimation

Variations. ECE 6540, Lecture 10 Maximum Likelihood Estimation Variations ECE 6540, Lecture 10 Last Time BLUE (Best Linear Unbiased Estimator) Formulation Advantages Disadvantages 2 The BLUE A simplification Assume the estimator is a linear system For a single parameter

More information

Review: mostly probability and some statistics

Review: mostly probability and some statistics Review: mostly probability and some statistics C2 1 Content robability (should know already) Axioms and properties Conditional probability and independence Law of Total probability and Bayes theorem Random

More information

An Information Theory For Preferences

An Information Theory For Preferences An Information Theor For Preferences Ali E. Abbas Department of Management Science and Engineering, Stanford Universit, Stanford, Ca, 94305 Abstract. Recent literature in the last Maimum Entrop workshop

More information

Probability and Stochastic Processes

Probability and Stochastic Processes Probability and Stochastic Processes A Friendly Introduction Electrical and Computer Engineers Third Edition Roy D. Yates Rutgers, The State University of New Jersey David J. Goodman New York University

More information

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Definition of stochastic process (random

More information

Mathematics of Cryptography Part I

Mathematics of Cryptography Part I CHAPTER 2 Mathematics of Crptograph Part I (Solution to Practice Set) Review Questions 1. The set of integers is Z. It contains all integral numbers from negative infinit to positive infinit. The set of

More information

Gaussian Process Vine Copulas for Multivariate Dependence

Gaussian Process Vine Copulas for Multivariate Dependence Gaussian Process Vine Copulas for Multivariate Dependence José Miguel Hernández-Lobato 1,2 joint work with David López-Paz 2,3 and Zoubin Ghahramani 1 1 Department of Engineering, Cambridge University,

More information

Where now? Machine Learning and Bayesian Inference

Where now? Machine Learning and Bayesian Inference Machine Learning and Bayesian Inference Dr Sean Holden Computer Laboratory, Room FC6 Telephone etension 67 Email: sbh@clcamacuk wwwclcamacuk/ sbh/ Where now? There are some simple take-home messages from

More information

Introduction to Probability Theory for Graduate Economics Fall 2008

Introduction to Probability Theory for Graduate Economics Fall 2008 Introduction to Probability Theory for Graduate Economics Fall 008 Yiğit Sağlam October 10, 008 CHAPTER - RANDOM VARIABLES AND EXPECTATION 1 1 Random Variables A random variable (RV) is a real-valued function

More information

Lecture 3: Latent Variables Models and Learning with the EM Algorithm. Sam Roweis. Tuesday July25, 2006 Machine Learning Summer School, Taiwan

Lecture 3: Latent Variables Models and Learning with the EM Algorithm. Sam Roweis. Tuesday July25, 2006 Machine Learning Summer School, Taiwan Lecture 3: Latent Variables Models and Learning with the EM Algorithm Sam Roweis Tuesday July25, 2006 Machine Learning Summer School, Taiwan Latent Variable Models What to do when a variable z is always

More information

Demonstrate solution methods for systems of linear equations. Show that a system of equations can be represented in matrix-vector form.

Demonstrate solution methods for systems of linear equations. Show that a system of equations can be represented in matrix-vector form. Chapter Linear lgebra Objective Demonstrate solution methods for sstems of linear equations. Show that a sstem of equations can be represented in matri-vector form. 4 Flowrates in kmol/hr Figure.: Two

More information

Principal Component Analysis

Principal Component Analysis Principal Component Analysis Introduction Consider a zero mean random vector R n with autocorrelation matri R = E( T ). R has eigenvectors q(1),,q(n) and associated eigenvalues λ(1) λ(n). Let Q = [ q(1)

More information

Chapter 8: MULTIPLE CONTINUOUS RANDOM VARIABLES

Chapter 8: MULTIPLE CONTINUOUS RANDOM VARIABLES Charles Boncelet Probabilit Statistics and Random Signals" Oord Uniersit Press 06. ISBN: 978-0-9-0005-0 Chapter 8: MULTIPLE CONTINUOUS RANDOM VARIABLES Sections 8. Joint Densities and Distribution unctions

More information

Machine Learning Lecture 2

Machine Learning Lecture 2 Machine Perceptual Learning and Sensory Summer Augmented 6 Computing Announcements Machine Learning Lecture 2 Course webpage http://www.vision.rwth-aachen.de/teaching/ Slides will be made available on

More information

Random Fields in Bayesian Inference: Effects of the Random Field Discretization

Random Fields in Bayesian Inference: Effects of the Random Field Discretization Random Fields in Bayesian Inference: Effects of the Random Field Discretization Felipe Uribe a, Iason Papaioannou a, Wolfgang Betz a, Elisabeth Ullmann b, Daniel Straub a a Engineering Risk Analysis Group,

More information

8.1 Exponents and Roots

8.1 Exponents and Roots Section 8. Eponents and Roots 75 8. Eponents and Roots Before defining the net famil of functions, the eponential functions, we will need to discuss eponent notation in detail. As we shall see, eponents

More information

ECE 541 Stochastic Signals and Systems Problem Set 9 Solutions

ECE 541 Stochastic Signals and Systems Problem Set 9 Solutions ECE 541 Stochastic Signals and Systems Problem Set 9 Solutions Problem Solutions : Yates and Goodman, 9.5.3 9.1.4 9.2.2 9.2.6 9.3.2 9.4.2 9.4.6 9.4.7 and Problem 9.1.4 Solution The joint PDF of X and Y

More information

A Practitioner s Guide to Generalized Linear Models

A Practitioner s Guide to Generalized Linear Models A Practitioners Guide to Generalized Linear Models Background The classical linear models and most of the minimum bias procedures are special cases of generalized linear models (GLMs). GLMs are more technically

More information

Random variable X is a mapping that maps each outcome s in the sample space to a unique real number x, x. X s. Real Line

Random variable X is a mapping that maps each outcome s in the sample space to a unique real number x, x. X s. Real Line Random Variable Random variable is a mapping that maps each outcome s in the sample space to a unique real number,. s s : outcome Sample Space Real Line Eamples Toss a coin. Define the random variable

More information

Affine transformations

Affine transformations Reading Required: Affine transformations Brian Curless CSE 557 Fall 2009 Shirle, Sec. 2.4, 2.7 Shirle, Ch. 5.-5.3 Shirle, Ch. 6 Further reading: Fole, et al, Chapter 5.-5.5. David F. Rogers and J. Alan

More information

9.07 Introduction to Probability and Statistics for Brain and Cognitive Sciences Emery N. Brown

9.07 Introduction to Probability and Statistics for Brain and Cognitive Sciences Emery N. Brown 9.07 Introduction to Probabilit and Statistics for Brain and Cognitive Sciences Emer N. Brown I. Objectives Lecture 4: Transformations of Random Variables, Joint Distributions of Random Variables A. Understand

More information

Engineering Mathematics 2018 : MA6151

Engineering Mathematics 2018 : MA6151 Engineering Mathematics 08 NAME OF THE SUBJECT : Mathematics I SUBJECT CODE : MA65 NAME OF THE METERIAL : Part A questions REGULATION : R 03 WEBSITE : wwwhariganeshcom UPDATED ON : November 07 TEXT BOOK

More information