Problem Set 1. MAS 622J/1.126J: Pattern Recognition and Analysis. Due: 5:00 p.m. on September 20


Problem Set 1
MAS 622J/1.126J: Pattern Recognition and Analysis
Due: 5:00 p.m. on September 20

[Note: All instructions to plot data or write a program should be carried out using Matlab. In order to maintain a reasonable level of consistency and simplicity we ask that you do not use other software tools.]

If you collaborated with other members of the class, please write their names at the end of the assignment. Moreover, you will need to write and sign the following statement: "In preparing my solutions, I did not look at any old homeworks, copy anybody's answers or let them copy mine."

Problem 1: Why? [5 points]

Limit your answer to this problem to a page.

a. Describe an application of pattern recognition related to your research. What are the features? What is the decision to be made? Speculate on how one might solve the problem.

b. In the same way, describe an application of pattern recognition you would be interested in pursuing for fun in your life outside of work.

Solution: Refer to examples discussed in lecture.

Problem 2: Probability Warm-Up [20 points]

Let $x$ and $y$ be discrete random variables, and let $a$ and $b$ be constant values. Let $\mu_x$ denote the expected value of $x$ and $\sigma_x^2$ denote the variance of $x$. Use excruciating detail to answer the following:

a. Show $E[ax + by] = aE[x] + bE[y]$.

b. Show $\sigma_x^2 = E[x^2] - \mu_x^2$.

c. Show that independent implies uncorrelated.

d. Show that uncorrelated does not imply independent.

e. Let $z = ax + by$. Show that if $x$ and $y$ are uncorrelated, then $\sigma_z^2 = a^2\sigma_x^2 + b^2\sigma_y^2$.

f. Let $x_i$, $i = 1, \ldots, n$, be random variables independently drawn from the same probability distribution with mean $\mu_x$ and variance $\sigma_x^2$. For the sample mean $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$, show the following:

i. $E[\bar{x}] = \mu_x$.

ii. $\mathrm{Var}[\bar{x}] = \sigma_x^2/n$. Note that this variance of the sample mean is different from the sample variance $s^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2$.

g. Let $x_1$ and $x_2$ be independent and identically distributed (i.i.d.) continuous random variables. Can $\Pr[x_1 < x_2]$ be calculated? If so, find its value. If not, explain. Hint 1: Remember that for a continuous variable $\Pr[x = k] = 0$ for any value of $k$. Hint 2: Remember the definition of i.i.d. variables.

h. Let $x_1$ and $x_2$ be independent and identically distributed discrete random variables. Can $\Pr[x_1 < x_2]$ be calculated? If so, find its value. If not, explain.

Solution:

a. The following is for discrete random variables; a similar argument holds for continuous random variables, with the sums replaced by integrals.

$$E[ax + by] = \sum_{x \in X,\, y \in Y} (ax + by)\, p(x,y) = a \sum_{x \in X,\, y \in Y} x\, p(x,y) + b \sum_{x \in X,\, y \in Y} y\, p(x,y) = a \sum_{x \in X} x\, p(x) + b \sum_{y \in Y} y\, p(y) = aE[x] + bE[y]$$

b. Making use of the definition of variance and the previous part, we have:

$$\sigma_x^2 = E[(x - \mu_x)^2] = E[x^2 - 2\mu_x x + \mu_x^2] = E[x^2] - 2\mu_x E[x] + \mu_x^2 = E[x^2] - 2\mu_x^2 + \mu_x^2 = E[x^2] - \mu_x^2$$

c. To show that two discrete random variables $x$ and $y$ are uncorrelated, we have to prove $\sigma_{x,y} = 0$ (the same holds for continuous random variables). From the previous question:

$$\sigma_{x,y} = E[xy] - \mu_x \mu_y$$

If the two variables are independent:

$$E[xy] = \sum_{x \in X,\, y \in Y} xy\, p(x,y) = \sum_{x \in X,\, y \in Y} xy\, p(x)\, p(y) = \sum_{x \in X} x\, p(x) \sum_{y \in Y} y\, p(y) = E[x]\, E[y]$$

Finally,

$$\sigma_{x,y} = E[xy] - \mu_x \mu_y = E[x]\,E[y] - \mu_x \mu_y = 0$$

d. To prove this, we need to find one case where $p(x,y) \neq p(x)\,p(y)$ (the variables are dependent) and yet $\sigma_{x,y} = 0$ (they are uncorrelated). One possible solution is as follows. Suppose the discrete random variable $x$ is uniform on $\{-1, 0, 1\}$ and $y = x^2$, so the possible $(x,y)$ pairs are $(-1,1)$, $(0,0)$, and $(1,1)$, each with probability $1/3$. If we look at the case where $x = 1$ and $y = 1$, dependence is satisfied:

$$p(x{=}1, y{=}1) = \tfrac{1}{3} \neq \tfrac{1}{3} \cdot \tfrac{2}{3} = p(x{=}1)\, p(y{=}1)$$

Now it is easy to verify that the variables are nevertheless uncorrelated, since $\mu_x = 0$ and $E[xy] = E[x^3] = 0$:

$$\sigma_{x,y} = E[xy] - \mu_x \mu_y = 0 - 0 \cdot \mu_y = 0$$
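A quick Monte Carlo sketch of this counterexample (illustrative code of ours, not part of the original solution; all variable names are our own): the sample covariance of $x$ and $y = x^2$ comes out near zero even though the joint probabilities visibly differ from the product of the marginals.

% Monte Carlo check: y = x^2 is uncorrelated with x, yet clearly dependent.
n = 1e6;
x = randi(3, n, 1) - 2;                  % uniform on {-1, 0, 1}
y = x.^2;                                % deterministic in x => dependent
cov_xy  = mean(x.*y) - mean(x)*mean(y)   % ~ 0 (uncorrelated)
p_joint = mean(x == 1 & y == 1)          % ~ 1/3
p_prod  = mean(x == 1) * mean(y == 1)    % ~ 2/9, so p(x,y) ~= p(x)p(y)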

e. Given that $z = ax + by$ and that $x$ and $y$ are uncorrelated, we have

$$\sigma_z^2 = E[(z - \mu_z)^2] = E[z^2] - \mu_z^2 = E[(ax + by)^2] - (a\mu_x + b\mu_y)^2$$
$$= E[a^2x^2 + 2abxy + b^2y^2] - (a^2\mu_x^2 + 2ab\mu_x\mu_y + b^2\mu_y^2)$$
$$= a^2E[x^2] + 2abE[xy] + b^2E[y^2] - a^2\mu_x^2 - 2ab\mu_x\mu_y - b^2\mu_y^2$$
$$= a^2(E[x^2] - \mu_x^2) + 2ab(E[xy] - \mu_x\mu_y) + b^2(E[y^2] - \mu_y^2)$$
$$= a^2\sigma_x^2 + 2ab\sigma_{x,y} + b^2\sigma_y^2 = a^2\sigma_x^2 + b^2\sigma_y^2,$$

where only the last equality depends on $x$ and $y$ being uncorrelated.

f. Using the result of (a) and the fact that $E[x_i] = \mu_x$,

$$E[\bar{x}] = E\left[\frac{1}{n}\sum_{i=1}^{n} x_i\right] = \frac{1}{n}\sum_{i=1}^{n} E[x_i] = \frac{1}{n}\, n\mu_x = \mu_x$$

Also, using the result of (e), together with the facts that the $x_i$ are independent (hence uncorrelated) and $\mathrm{Var}[x_i] = \sigma_x^2$,

$$\mathrm{Var}[\bar{x}] = \mathrm{Var}\left[\frac{1}{n}\sum_{i=1}^{n} x_i\right] = \frac{1}{n^2}\sum_{i=1}^{n} \mathrm{Var}[x_i] = \frac{1}{n^2}\, n\sigma_x^2 = \sigma_x^2/n$$

g. Given that $x_1$ and $x_2$ are continuous random variables, we know that $\Pr[x_1 = k] = 0$ and $\Pr[x_2 = k] = 0$ for any value of $k$. Thus, $\Pr[x_1 \le x_2] = \Pr[x_1 < x_2]$. Given that $x_1$ and $x_2$ are i.i.d., we know that swapping $x_1$ and $x_2$ has no effect on the world; in particular, $\Pr[x_1 < x_2] = \Pr[x_2 < x_1]$. However, since probabilities over the space of possible values must sum to one, we have $\Pr[x_1 < x_2] + \Pr[x_2 < x_1] = 1$. Thus, $\Pr[x_1 < x_2] = \frac{1}{2}$.

h. For discrete random variables, unlike the continuous case above, we need to know the distributions of $x_1$ and $x_2$ in order to find $\Pr[x_1 = k]$ and $\Pr[x_2 = k]$; ties now have nonzero probability, so the symmetry argument used above fails. In general, it is not possible to find $\Pr[x_1 < x_2]$ without knowledge of the distributions of both $x_1$ and $x_2$.
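The two cases in (g) and (h) can also be seen empirically. The following sketch (ours, not part of the original solution) estimates the probabilities for a continuous pair and a discrete pair; the continuous estimate hovers around 1/2 regardless of the distribution, while the discrete one depends on the pmf through the tie probability.

n = 1e6;
% Continuous i.i.d. pair: ties have probability zero.
x1 = randn(n, 1); x2 = randn(n, 1);
p_cont = mean(x1 < x2)            % ~ 1/2 for any continuous distribution
% Discrete i.i.d. pair (fair dice): ties occur with probability 1/6.
d1 = randi(6, n, 1); d2 = randi(6, n, 1);
p_disc = mean(d1 < d2)            % ~ (1 - 1/6)/2 = 5/12, distribution-dependent
p_tie  = mean(d1 == d2)           % ~ 1/6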

Problem 3: Teatime with Gauss and Bayes [20 points]

Let

$$p(x,y) = \frac{1}{2\pi\alpha\beta}\, e^{-\frac{1}{2}\left[\frac{(y-\mu)^2}{\alpha^2} + \frac{(x-y)^2}{\beta^2}\right]}$$

a. Find $p(x)$, $p(y)$, $p(x|y)$, and $p(y|x)$. In addition, give a brief description of each of these distributions.

b. Let $\mu = 0$, $\alpha = 5$, and $\beta = 3$. Plot $p(y)$ and $p(y \mid x = 9)$ for a reasonable range of $y$. What is the difference between these two distributions?

Solution:

a. To find $p(y)$, simply factor $p(x,y)$ and then integrate over $x$:

$$p(y) = \int_{-\infty}^{\infty} p(x,y)\, dx = \frac{1}{\sqrt{2\pi}\,\alpha}\, e^{-\frac{(y-\mu)^2}{2\alpha^2}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\beta}\, e^{-\frac{(x-y)^2}{2\beta^2}}\, dx = \frac{1}{\sqrt{2\pi}\,\alpha}\, e^{-\frac{(y-\mu)^2}{2\alpha^2}} = N(\mu, \alpha^2)$$

The integral goes to 1 because it has the form of a probability density integrated over its entire domain.

To find $p(x|y)$, divide $p(x,y)$ by $p(y)$:

$$p(x|y) = \frac{p(x,y)}{p(y)} = \frac{1}{\sqrt{2\pi}\,\beta}\, e^{-\frac{(x-y)^2}{2\beta^2}} = N(y, \beta^2)$$

So the marginal of $y$ is a Gaussian centered at $\mu$ with variance $\alpha^2$, and given $y$, $x$ is simply $y$ plus Gaussian noise of variance $\beta^2$. Finding $p(x)$ and $p(y|x)$ follows essentially the same procedure, but the algebra is more involved and requires completing the square in the exponent.

$$p(x) = \int_{-\infty}^{\infty} p(x,y)\, dy = \frac{1}{2\pi\alpha\beta} \int_{-\infty}^{\infty} e^{-\frac{1}{2}\left[\frac{(y-\mu)^2}{\alpha^2} + \frac{(x-y)^2}{\beta^2}\right]}\, dy = \frac{1}{2\pi\alpha\beta} \int_{-\infty}^{\infty} e^{-\frac{\beta^2(y-\mu)^2 + \alpha^2(x-y)^2}{2\alpha^2\beta^2}}\, dy$$

Expanding the numerator of the exponent and collecting powers of $y$,

$$\beta^2(y-\mu)^2 + \alpha^2(x-y)^2 = (\alpha^2+\beta^2)\,y^2 - 2(\alpha^2 x + \beta^2\mu)\,y + \alpha^2 x^2 + \beta^2\mu^2$$

Completing the square in $y$,

$$= (\alpha^2+\beta^2)\left(y - \frac{\alpha^2 x + \beta^2\mu}{\alpha^2+\beta^2}\right)^2 + \alpha^2 x^2 + \beta^2\mu^2 - \frac{(\alpha^2 x + \beta^2\mu)^2}{\alpha^2+\beta^2}$$

where the leftover constant term simplifies:

$$\alpha^2 x^2 + \beta^2\mu^2 - \frac{(\alpha^2 x + \beta^2\mu)^2}{\alpha^2+\beta^2} = \frac{(\alpha^2+\beta^2)(\alpha^2 x^2 + \beta^2\mu^2) - (\alpha^2 x + \beta^2\mu)^2}{\alpha^2+\beta^2} = \frac{\alpha^2\beta^2\,(x-\mu)^2}{\alpha^2+\beta^2}$$

Therefore

$$p(x) = \frac{1}{2\pi\alpha\beta}\, e^{-\frac{(x-\mu)^2}{2(\alpha^2+\beta^2)}} \int_{-\infty}^{\infty} e^{-\frac{\left(y - \frac{\alpha^2 x + \beta^2\mu}{\alpha^2+\beta^2}\right)^2}{2\,\frac{\alpha^2\beta^2}{\alpha^2+\beta^2}}}\, dy = \frac{1}{2\pi\alpha\beta}\, e^{-\frac{(x-\mu)^2}{2(\alpha^2+\beta^2)}} \sqrt{\frac{2\pi\,\alpha^2\beta^2}{\alpha^2+\beta^2}} = \frac{1}{\sqrt{2\pi(\alpha^2+\beta^2)}}\, e^{-\frac{(x-\mu)^2}{2(\alpha^2+\beta^2)}} = N(\mu,\ \alpha^2+\beta^2)$$
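Because the completing-the-square algebra is easy to get wrong, here is an optional sanity check (our own sketch, assuming Matlab's Symbolic Math Toolbox is available): integrating the joint density over $y$ symbolically should recover the $N(\mu, \alpha^2+\beta^2)$ form derived above.

syms x y mu real
syms a b positive                       % a = alpha, b = beta
pxy = 1/(2*pi*a*b) * exp(-((y - mu)^2/(2*a^2) + (x - y)^2/(2*b^2)));
px  = simplify(int(pxy, y, -inf, inf))  % marginal of x
% Compare against the claimed N(mu, a^2 + b^2) density:
target = 1/sqrt(2*pi*(a^2 + b^2)) * exp(-(x - mu)^2/(2*(a^2 + b^2)));
simplify(px - target)                   % should print 0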

To find $p(y|x)$ we simply divide $p(x,y)$ by $p(x)$. In deriving $p(x)$, we already exposed the form of $p(y|x)$: it is exactly the Gaussian in $y$ that was integrated out above,

$$p(y|x) = \frac{p(x,y)}{p(x)} = \sqrt{\frac{\alpha^2+\beta^2}{2\pi\,\alpha^2\beta^2}}\; e^{-\frac{\left(y - \frac{\alpha^2 x + \beta^2\mu}{\alpha^2+\beta^2}\right)^2}{2\,\frac{\alpha^2\beta^2}{\alpha^2+\beta^2}}} = N\!\left(\frac{\alpha^2 x + \beta^2\mu}{\alpha^2+\beta^2},\ \frac{\alpha^2\beta^2}{\alpha^2+\beta^2}\right)$$

Note that all the above distributions are Gaussian.

b. The following Matlab code produced Figure 1:

close all; clear all;

% Initialize variables
m = 0.0;   % mu
a = 5.0;   % alpha
b = 3;     % beta
x = 9;

% Compute parameters
y = -100:1:100;
mu_yx  = (a^2*x + b^2*m) / (a^2 + b^2);
var_yx = (a^2*b^2) / (a^2 + b^2);
p_y_given_x = 1.0 ./ sqrt(2*pi*var_yx) .* exp(-(y - mu_yx).^2 ./ (2*var_yx));
var_y = a^2;
p_y = 1.0 ./ sqrt(2*pi*var_y) .* exp(-(y - m).^2 ./ (2*var_y));

% Show information
figure;
plot(y, p_y_given_x, 'b'); hold on;
plot(y, p_y, 'r');
legend('p(y|x)', 'p(y)');
sy = size(y);
axis([y(1), y(sy(2)), 0, 0.2]);
xlabel('y');
text(-70, 0.14, '\mu = 0');
text(-70, 0.12, '\alpha = 5');
text(-70, 0.10, '\beta = 3');

Figure 1: The marginal p.d.f. of $y$ and the p.d.f. of $y$ given $x$, for a specific value of $x$. Notice how knowing $x$ makes your knowledge of $y$ more certain.
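The conditional can also be checked by sampling (an illustrative sketch of ours, not required by the problem): draw $(x, y)$ pairs from the joint by first drawing $y \sim N(\mu, \alpha^2)$ and then $x \mid y \sim N(y, \beta^2)$, keep only the samples whose $x$ lands near 9, and compare the retained $y$ values with the predicted mean $\alpha^2 x/(\alpha^2+\beta^2) \approx 6.62$ and variance $\alpha^2\beta^2/(\alpha^2+\beta^2) \approx 6.62$ (for these parameter values the two happen to coincide).

m = 0; a = 5; b = 3; x0 = 9; n = 1e6;
y = m + a*randn(n, 1);           % y ~ N(mu, alpha^2)
x = y + b*randn(n, 1);           % x | y ~ N(y, beta^2)
keep = abs(x - x0) < 0.1;        % crude conditioning on x near 9
emp_mean = mean(y(keep))         % predicted: a^2*x0/(a^2+b^2) = 6.62
emp_var  = var(y(keep))          % predicted: a^2*b^2/(a^2+b^2) = 6.62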

Problem 4: Covariance Matrix [5 points]

Let

$$\Sigma = \begin{bmatrix} 5 & 4 \\ 4 & 5 \end{bmatrix}$$

a. Find the eigenvalues and eigenvectors of $\Sigma$ by hand (include all calculations). Verify your computations with the MATLAB function eig.

b. Verify that $\Sigma$ is a valid covariance matrix.

c. We provide 500 data points sampled from the distribution $N([0\ 0]^T, \Sigma)$. Download the dataset from the course website and project the data onto the eigenvectors of the covariance matrix. What is the effect of this projection? Include MATLAB code and plots before and after the projection.

Solution:

a. We can find the eigenvectors and eigenvalues of $\Sigma$ by starting with the definition of an eigenvector. Namely, a vector $e$ is an eigenvector of $\Sigma$ if it satisfies $\Sigma e = \lambda e$ for some constant scalar $\lambda$, which is called the eigenvalue corresponding to $e$. This can be rewritten as $(\Sigma - \lambda I)e = 0$, which is equivalent to requiring $\det(\Sigma - \lambda I) = 0$. Thus, we require that

$$(5 - \lambda)^2 - 4^2 = 0$$

By inspection, this is true when $\lambda_1 = 9$ and $\lambda_2 = 1$. To find the eigenvectors, we plug the eigenvalues back into the equation above. For $\lambda_1 = 9$,

$$(\Sigma - 9I)e_1 = \begin{bmatrix} -4 & 4 \\ 4 & -4 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

which gives $a = b$. Normalized, this results in the eigenvector

$$e_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

Similarly, $\lambda_2 = 1$ gives

$$(\Sigma - I)e_2 = \begin{bmatrix} 4 & 4 \\ 4 & 4 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

which gives $a = -b$. Normalized, this results in the eigenvector

$$e_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

b. The matrix $\Sigma$ is a valid covariance matrix if it is symmetric and positive semi-definite. Clearly, it is symmetric, since $\Sigma^T = \Sigma$. One way to prove it is positive semi-definite is to show that all its eigenvalues are non-negative. This is indeed the case, as shown in the previous part.
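Both hand calculations are easy to confirm in Matlab, as part (a) asks (a short sketch of ours; note that eig returns the eigenvalues in ascending order, so here 1 before 9):

S = [5 4; 4 5];
[V, D] = eig(S)       % columns of V are unit eigenvectors; diag(D) = [1 9]
all(eig(S) >= 0)      % 1: eigenvalues non-negative => positive semi-definite
isequal(S, S')        % 1: symmetric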

c. Figure 2 shows how projecting the data onto the eigenvectors of the data's covariance matrix reduces the correlation between $x$ and $y$. The MATLAB code is as follows:

clear all; close all;

% Initialize covariance matrix
S = [5 4; 4 5];

% Load data
load points;

% Show correlation of original points
corrcoef(X)

% Compute eigenvectors and eigenvalues
[V, D] = eig(S);
D
V

% Project data
px = X * V;

% Show correlation of projected points
corrcoef(px)

% Show the data
figure;
subplot(1, 2, 1);
scatter(X(:,1), X(:,2), 'filled');
axis equal;
xlabel('x'); ylabel('y');
title('Original Data');
subplot(1, 2, 2);
scatter(px(:,1), px(:,2), 'filled');
axis equal;
xlabel('x'); ylabel('y');
title('Projected Data');

Figure 2: The original data and the data transformed into the coordinate system defined by the eigenvectors of their covariance matrix.
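If the course dataset is not at hand, the experiment can be reproduced on synthetic data (a sketch of ours; the points.mat file loaded above is the course-provided one). A Cholesky factor of $\Sigma$ turns i.i.d. standard normal samples into samples from $N([0\ 0]^T, \Sigma)$ without requiring any toolbox:

% Generate 500 synthetic samples from N(0, S).
S = [5 4; 4 5];
L = chol(S, 'lower');            % S = L*L'
X = (L * randn(2, 500))';        % each row is one sample
corrcoef(X)                      % strong off-diagonal correlation
[V, D] = eig(S);
px = X * V;                      % project onto the eigenvectors
corrcoef(px)                     % off-diagonal entries near zero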

Problem 5: Probabilistic Modeling [20 points]

Let $x \in \{0, 1\}$ denote a person's affective state ($x = 0$ for a positive-feeling state, and $x = 1$ for a negative-feeling state). The person feels positive with probability $1 - \theta_1$. Suppose that an affect-tagging system (or a robot) recognizes her feeling state and reports the observed state, $y \in \{0, 1\}$, to you. But this system is unreliable and obtains the correct result with probability $\theta_2$.

a. Represent the joint probability distribution $P(x, y \mid \theta)$ for all $(x, y)$ (a $2 \times 2$ matrix) as a function of the parameters $\theta = (\theta_1, \theta_2)$.

b. The Maximum Likelihood estimation criterion for the parameter $\theta$ is defined as:

$$\theta_{ML} = \arg\max_\theta L(t_1, \ldots, t_n; \theta) = \arg\max_\theta \prod_{i=1}^{n} p(t_i \mid \theta)$$

where we have assumed that each data point $t_i$ is drawn independently from the same distribution, so that the likelihood of the data is $L(t_1, \ldots, t_n; \theta) = \prod_{i=1}^{n} p(t_i \mid \theta)$.

Likelihood is viewed as a function of the parameters, which depends on the data. Since the above expression can be technically challenging to maximize directly, we maximize the log-likelihood $\log L(t_1, \ldots, t_n; \theta)$ instead of the likelihood. Note that any monotonically increasing function (e.g., the log function) of the likelihood has the same maxima. Thus,

$$\theta_{ML} = \arg\max_\theta \log L(t_1, \ldots, t_n; \theta) = \arg\max_\theta \sum_{i=1}^{n} \log p(t_i \mid \theta)$$

Suppose we get the following eight joint observations $t = (x, y)$:

x  y
…  …

What are the maximum-likelihood (ML) values of $\theta_1$ and $\theta_2$?

Hint: Since $P(x, y \mid \theta) = P(y \mid x, \theta_2)\, P(x \mid \theta_1)$, the estimation of the two parameters can be done separately in the log-likelihood criterion.

Solution:

a. The probability mass function (pmf) of $x \in \{0, 1\}$ is

$$P(x) = \begin{cases} 1 - \theta_1, & x = 0 \\ \theta_1, & x = 1 \end{cases}$$

The conditional pmf of $y \in \{0, 1\}$ given that $x = 0$ is

$$P(y \mid x = 0) = \begin{cases} \theta_2, & y = 0 \\ 1 - \theta_2, & y = 1 \end{cases}$$

The conditional pmf of $y$ given that $x = 1$ is

$$P(y \mid x = 1) = \begin{cases} 1 - \theta_2, & y = 0 \\ \theta_2, & y = 1 \end{cases}$$

Use $P(x, y) = P(y \mid x)\, P(x)$ to tabulate the joint pmf of $(x, y)$:

$$P(x, y) = \begin{bmatrix} P(0,0) & P(0,1) \\ P(1,0) & P(1,1) \end{bmatrix} = \begin{bmatrix} (1-\theta_1)\,\theta_2 & (1-\theta_1)(1-\theta_2) \\ \theta_1(1-\theta_2) & \theta_1\theta_2 \end{bmatrix}$$

b. We select $\theta_1, \theta_2$ to maximize the log-likelihood of the samples $\{(x_i, y_i),\ i = 1, \ldots, n\}$, which may be expressed as

$$J(\theta_1, \theta_2) = \sum_i \log P(x_i, y_i) = \sum_i \left[\log P(y_i \mid x_i) + \log P(x_i)\right] = \sum_i \log P(y_i \mid x_i) + \sum_i \log P(x_i) = J_2(\theta_2) + J_1(\theta_1)$$

Hence, we choose $\theta_1$ to maximize

$$J_1(\theta_1) = \sum_i \log P(x_i) = N(x)\,\log\theta_1 + (n - N(x))\,\log(1 - \theta_1)$$

where $N(x) = \sum_i x_i$. Differentiating with respect to $\theta_1$ gives

$$\frac{\partial J_1}{\partial \theta_1} = \frac{N(x)}{\theta_1} - \frac{n - N(x)}{1 - \theta_1}$$

We set this derivative to zero and solve for $\theta_1$ to obtain $\theta_1 = N(x)/n$.

Similarly, we choose $\theta_2$ to maximize

$$J_2(\theta_2) = \sum_i \log P(y_i \mid x_i) = N(x \equiv y)\,\log\theta_2 + (n - N(x \equiv y))\,\log(1 - \theta_2)$$

where $N(x \equiv y) = \sum_i \left[x_i y_i + (1 - x_i)(1 - y_i)\right]$ counts the observations with $y_i = x_i$. Differentiating $J_2$ with respect to $\theta_2$, setting to zero, and solving for $\theta_2$ gives $\theta_2 = N(x \equiv y)/n$.

For the example data, $\theta_1 = 4/8 = 1/2$ and $\theta_2 = 6/8 = 3/4$. Thus,

$$P(x, y) = \begin{bmatrix} (1-\theta_1)\,\theta_2 & (1-\theta_1)(1-\theta_2) \\ \theta_1(1-\theta_2) & \theta_1\theta_2 \end{bmatrix} = \begin{bmatrix} 3/8 & 1/8 \\ 1/8 & 3/8 \end{bmatrix}$$

The maximum likelihood of the data under this model is $\prod_i P(x_i, y_i) = (3/8)^6 (1/8)^2 = 3^6/8^8 \approx 4.3 \times 10^{-5}$.
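The closed-form estimates are easy to reproduce numerically. In the sketch below (ours; it assumes the eight observations are stored in 0/1 column vectors xs and ys, which are not defined in the original text), the counts match the solution's $N(x) = 4$ and $N(x \equiv y) = 6$:

% ML estimates from observed pairs (xs(i), ys(i)), each value in {0, 1}.
n  = numel(xs);
Nx = sum(xs);                    % number of observations with x = 1
Na = sum(xs == ys);              % number of observations with y = x
theta1 = Nx / n                  % = 4/8 for the problem-set data
theta2 = Na / n                  % = 6/8 for the problem-set data
loglik = Nx*log(theta1) + (n - Nx)*log(1 - theta1) ...
       + Na*log(theta2) + (n - Na)*log(1 - theta2)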

Problem 6: Ring Problem [20 points]

To get credit for this problem, you must not only write your own correct solution, but also write a computer simulation of the process of playing this game:

Suppose I hide the ring of power in one of three identical boxes while you weren't looking. The other two boxes remain empty. After hiding the ring of power, I ask you to guess which box it's in. I know which box it's in and, after you've made your guess, I deliberately open the lid of an empty box, which is one of the two boxes you did not choose. Thus, the ring of power is either in the box you chose or in the remaining closed box you did not choose. Once you have made your initial choice and I've revealed to you an empty box, I then give you the opportunity to change your mind: you can either stick with your original choice, or choose the unopened box. You get to keep the contents of whichever box you finally decide upon. What choice should you make in order to maximize your chances of receiving the ring of power? Justify your answer using Bayes rule.

Write a simulation. There are two choices for the contestant in this game: the choice of box, and the choice of whether or not to switch. In your simulation, first let the host choose a random box in which to place the ring of power. Show a trace of your program's output for a single game play, as well as a cumulative probability of winning over 1000 rounds for each of the two policies (choose a random box and then switch, and choose a random box and do not switch).

Solution: Always switch your answer to the box you didn't choose the first time. The reason is as follows. You have a 1/3 chance of initially picking the correct box; that is, there is a 2/3 chance the correct answer is one of the other two boxes. Learning which of the two other boxes is empty does not change these probabilities: your initial choice still has a 1/3 chance of being correct, so there is a 2/3 chance the remaining box is the correct answer. Therefore you should change your choice.

Using Bayes rule, let $R$ = box with the ring, $O$ = box opened, and $S$ = box selected. Then

$$P(R \mid O, S) = \frac{P(O \mid R, S)\, P(R \mid S)}{P(O \mid S)}, \qquad P(O \mid S) = \sum_{R=1}^{3} P(O, R \mid S) = \sum_{R=1}^{3} P(O \mid R, S)\, P(R \mid S)$$

For concreteness, suppose you selected box $S = 1$ and I opened box $O = 3$. Then $P(O = 3 \mid R = 1, S = 1) = \frac{1}{2}$ (I pick one of your two unchosen boxes at random), $P(O = 3 \mid R = 2, S = 1) = 1$, and $P(O = 3 \mid R = 3, S = 1) = 0$, while $P(R \mid S) = \frac{1}{3}$ for every box, so $P(O = 3 \mid S = 1) = \frac{1}{2}$.

Hence, for the unopened box you did not choose,

$$P(R = 2 \mid O = 3, S = 1) = \frac{1 \cdot \frac{1}{3}}{\frac{1}{2}} = \frac{2}{3}$$

so switching wins with probability 2/3. Another way to understand the problem is to extend it to 100 boxes, only one of which has the ring of power. After you make your initial choice, I then open 98 of the 99 remaining boxes and show you that they are empty. Clearly, with very high probability the ring of power resides in the one remaining box you did not initially choose.
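Before running the random simulation below, the 2/3 can also be confirmed by brute-force enumeration (a small sketch of ours): average over the nine equally likely (ring, first guess) pairs, noting that switching wins exactly when the first guess was wrong.

% Enumerate the 9 equally likely (ring, guess) pairs. The host then opens
% an empty unchosen box, so switching wins iff ring ~= guess.
wins_stay = 0; wins_switch = 0;
for ring = 1:3
    for guess = 1:3
        wins_stay   = wins_stay   + (ring == guess);
        wins_switch = wins_switch + (ring ~= guess);
    end
end
p_stay   = wins_stay / 9         % 1/3
p_switch = wins_switch / 9       % 2/3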

Here is a sample simulation output for the Ring problem (the first three games of each policy, then the cumulative results):

actual: 1
guess: 2
reveal: 3
swap: 0
guess: 2

actual: 3
guess: 3
reveal: 1
swap: 0
guess: 3

actual: 1
guess: 3
reveal: 2
swap: 0
guess: 3

swap: 0
win: 292
lose: 708
win / (win+lose): 0.292

actual: 3
guess: 1
reveal: 2
swap: 1
guess: 3

actual: 1
guess: 1
reveal: 2
swap: 1
guess: 3

actual: 3
guess: 2
reveal: 1
swap: 1
guess: 3

swap: 1
win: 686
lose: 314
win / (win+lose): 0.686

Here is a Matlab program that simulates the Ring game and produced the output above:

for swap = 0:1
    win = 0; lose = 0;
    for i = 1:1000
        actual = floor(rand*3) + 1;
        guess = floor(rand*3) + 1;
        firstguess = guess;
        if guess == actual
            % Host opens one of the two other boxes at random.
            reveal = floor(rand*2) + 1;
            if reveal >= actual
                reveal = reveal + 1;
            end
        elseif guess == 1 && actual == 2
            reveal = 3;
        elseif guess == 1 && actual == 3
            reveal = 2;
        elseif guess == 2 && actual == 1
            reveal = 3;
        elseif guess == 2 && actual == 3
            reveal = 1;
        elseif guess == 3 && actual == 1
            reveal = 2;
        elseif guess == 3 && actual == 2
            reveal = 1;
        end
        if swap == 1
            % Switch to the one box that is neither guessed nor revealed.
            if guess == 1 && reveal == 2
                guess = 3;
            elseif guess == 1 && reveal == 3
                guess = 2;
            elseif guess == 2 && reveal == 1
                guess = 3;
            elseif guess == 2 && reveal == 3
                guess = 1;
            elseif guess == 3 && reveal == 1
                guess = 2;
            elseif guess == 3 && reveal == 2
                guess = 1;
            end
        end
        if guess == actual
            win = win + 1;
        else
            lose = lose + 1;
        end
        % Only print a trace for the first 3 games.
        if i <= 3
            fprintf('actual: %d\nguess: %d\nreveal: %d\nswap: %d\nguess: %d\n\n', ...
                actual, firstguess, reveal, swap, guess);
        end
    end
    % Print results for each game-play policy.
    fprintf('swap: %d\nwin: %d\nlose: %d\nwin / (win+lose): %.3f\n\n', ...
        swap, win, lose, win/(win+lose));
end
