Machine Learning.


1 Machine Learning

2 Organizational info
All up-to-date info is on the course web page (follow links from my page).
Instructors: Eric Xing, Ziv Bar-Joseph
TAs: see the website for contact info, office hours, recitations, etc.
Piazza will be used for questions and comments. Make sure you are subscribed.

3 Zhiting Hu
Research: large scale machine learning and its applications in NLP/CV.
Homepage:
Contact: zhitinghu@gmail.com

4

5 Yutian Deng
Research: large scale machine learning.
Contact:

6

7 Hao Zhang
Find me: GHC 8116
Office hours: Friday 3:30-4:30 pm
Interests: distributed machine learning, deep learning, applications in computer vision

8

9
9/3 Intro to probability, MLE
9/8 No class
9/10 Classification, KNN
9/15 No class (Jewish new year)
9/17 Decision trees (PS1 out)
9/22 Naïve Bayes
9/24 Linear regression
9/26 Logistic regression
10/1 Perceptron, neural networks (PS1 due / PS2 out)
10/6 Deep learning, SVM 1
10/10 SVM 2
10/13 Evaluating classifiers, bias-variance decomposition
10/15 Ensemble learning: boosting, RF (PS2 due / PS3 out)
10/20 Unsupervised learning: clustering
10/22 Unsupervised learning: clustering (project proposal due)
10/27 Semi-supervised learning
10/29 Learning theory 1 (PS3 due / PS4 out)
11/3 PAC learning
11/5 Graphical models, BN
11/10 BN
11/12 Undirected graphical models (PS4 due)
11/17 (Monday) Midterm
11/19 HMM (PS5 out)
11/24 HMM inference
12/1 MDPs / Reinforcement learning (PS5 due)
12/3 Topic models
12/4 Project poster session
12/8 Computational biology
12/10 No class
Topics: intro and classification (a.k.a. supervised learning), clustering (unsupervised learning), probabilistic representation and modeling (reasoning under uncertainty), applications of ML.

10 Grading
5 Problem sets - 40%
Project - 35%
Midterm - 25%

11 Class assignments
5 Problem sets - most containing both theoretical and programming assignments.
Projects - done in groups; open ended. You will have to submit a proposal based on your interests. We will also provide suggestions on the website.
Recitations - twice a week (same content in both); expand on material learned in class, go over problems from previous classes, etc.

12 What is Machine Learning?
Easy part: Machine. Hard part: Learning.
Short answer: methods that can help generalize information from the observed data so that it can be used to make better decisions in the future.

13 What is Machine Learning?
Longer answer: the term Machine Learning is used to characterize a number of different approaches for generalizing from observed data:
Supervised learning - given a set of features and labels, learn a model that will predict a label for a new feature set.
Unsupervised learning - discover patterns in data.
Reasoning under uncertainty - determine a model of the world, either from samples or as you go along.
Active learning - select not only the model but also which examples to use.

14 Paradigms of ML
Supervised learning - given D = {Xi, Yi}, learn a model (or function) F: Xk -> Yk.
Unsupervised learning - given D = {Xi}, group the data into Y classes using a model (or function) F: Xi -> Yj.
Reinforcement learning (reasoning under uncertainty) - given D = {environment, actions, rewards}, learn a policy and utility functions: policy F1: {e, r} -> a; utility F2: {a, e} -> R.
Active learning - given D = {Xi, Yi}, {Xj}, learn a function F1: {Xj} -> xk to maximize the success of the supervised learning function F2: {Xi, xk} -> Y.

15 Recommender systems
Primarily supervised learning

16 Semi-supervised learning

17 Driverless cars
Supervised and reinforcement learning

18 Helicopter control
Reinforcement learning

19 Biology
[Slide shows a long DNA sequence: ... A C G C T G A G C A A T T C G A T A ...] Which part is the gene?
Supervised and unsupervised learning (can also use active learning)

20 Common Themes
Mathematical framework - well defined concepts based on explicit assumptions.
Representation - how do we encode text? Images?
Model selection - which model should we use? How complex should it be?
Use of prior knowledge - how do we encode our beliefs? How much can we assume?

21 (brief) intro to probability

22 Basic notations
Random variable - refers to an element / event whose status is unknown: A = "it will rain tomorrow".
Domain (usually denoted Ω) - the set of values a random variable can take:
- A = "the stock market will go up this year": binary
- A = "number of Steelers wins in 2012": discrete
- A = "% change in Google stock in 2012": continuous

23 Axioms of probability (Kolmogorov's axioms)
A variety of useful facts can be derived from just three axioms:
1. 0 ≤ P(A) ≤ 1
2. P(true) = 1, P(false) = 0
3. P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
There have been several other attempts to provide a foundation for probability theory. Kolmogorov's axioms are the most widely used.

24 Priors
Degree of belief in an event in the absence of any other information.
P(rain tomorrow) = 0.2
P(no rain tomorrow) = 0.8

25 Conditional probability
P(A = 1 | B = 1): the fraction of cases where A is true given that B is true.
Example: P(A) = 0.2, P(A | B) = 0.5

26 Conditional probability
In some cases, given knowledge of one or more random variables, we can improve upon our prior belief of another random variable. For example:
p(slept in movie) = 0.5
p(slept in movie | liked movie) = 1/4
p(didn't sleep in movie | liked movie) = 3/4
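
The movie example can be reproduced with the counting definition of conditional probability. A minimal Python sketch; the viewer records below are hypothetical, chosen only so that the counts match the slide's numbers:

```python
# Hypothetical data: one (slept_in_movie, liked_movie) pair per viewer.
viewers = [(1, 1), (0, 1), (0, 1), (0, 1), (1, 0), (1, 0), (1, 0), (0, 0)]

def prob(pred, data):
    """Counting estimate: fraction of records satisfying pred."""
    return sum(1 for r in data if pred(r)) / len(data)

p_slept = prob(lambda r: r[0] == 1, viewers)

# Conditioning on B just means restricting the data to records where B holds.
liked = [r for r in viewers if r[1] == 1]
p_slept_given_liked = prob(lambda r: r[0] == 1, liked)

print(p_slept)              # P(slept)
print(p_slept_given_liked)  # P(slept | liked)
```

With these records, `p_slept` is 0.5 and `p_slept_given_liked` is 1/4, matching the slide.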

27 Joint distributions
The probability that a set of random variables will take a specific value is their joint distribution. Notation: P(A ∧ B) or P(A, B).
Example: P(liked movie, slept)
If we assume independence then P(A, B) = P(A)P(B). However, in many cases such an assumption may be too strong (more later in the class).

28 Joint distribution (cont.)
P(class size > 20) = 0.6
P(summer) = 0.4
P(class size > 20, summer) = ?
Evaluation of classes:
Size  Time  Eval
30    R     2
70    R     1
12    S     2
8     S     3
56    R     1
24    S     2
10    S     3
23    R     3
9     R     2
45    R     1

29 Joint distribution (cont.)
P(class size > 20) = 0.6
P(summer) = 0.4
P(class size > 20, summer) = 0.1
Evaluation of classes:
Size  Time  Eval
30    R     2
70    R     1
12    S     2
8     S     3
56    R     1
24    S     2
10    S     3
23    R     3
9     R     2
45    R     1

30 Joint distribution (cont.)
P(class size > 20) = 0.6
P(eval = 1) = 0.3
P(class size > 20, eval = 1) = 0.3
Size  Time  Eval
30    R     2
70    R     1
12    S     2
8     S     3
56    R     1
24    S     2
10    S     3
23    R     3
9     R     2
45    R     1

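
The joint probabilities above come straight from counting rows of the table. A short sketch using the table's data (assuming "S" marks summer classes and "R" the regular semester):

```python
# The class-evaluation table from the slides: (size, time, eval).
classes = [(30, "R", 2), (70, "R", 1), (12, "S", 2), (8, "S", 3), (56, "R", 1),
           (24, "S", 2), (10, "S", 3), (23, "R", 3), (9, "R", 2), (45, "R", 1)]

n = len(classes)
p_large = sum(1 for s, t, e in classes if s > 20) / n            # P(size > 20)
p_summer = sum(1 for s, t, e in classes if t == "S") / n         # P(summer)
p_large_and_summer = sum(1 for s, t, e in classes
                         if s > 20 and t == "S") / n             # joint probability

print(p_large, p_summer, p_large_and_summer)
print(p_large * p_summer)  # what independence would predict
```

Counting gives 0.6, 0.4 and 0.1; under independence the joint would be 0.6 * 0.4 = 0.24, so class size and season are clearly not independent here.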

32 Chain rule
The joint distribution can be specified in terms of conditional probability: P(A, B) = P(A | B) * P(B).
Together with Bayes rule (which is actually derived from it), this is one of the most powerful rules in probabilistic reasoning.

33 Bayes rule
One of the most important rules for this class. Derived from the chain rule:
P(A, B) = P(A | B)P(B) = P(B | A)P(A)
Thus, P(A | B) = P(B | A)P(A) / P(B)
Thomas Bayes was an English clergyman who set out his theory of probability in 1764.

34 Bayes rule (cont.)
Often it is useful to expand the rule a bit further:
P(A | B) = P(B | A)P(A) / P(B) = P(B | A)P(A) / (P(B | A)P(A) + P(B | ¬A)P(¬A))
This results from: P(B) = Σ_A P(B, A) = P(B, A = 1) + P(B, A = 0)
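
The expanded form is easy to compute directly. A sketch reusing the earlier rain prior; the likelihoods (how often it is cloudy before rainy vs. dry days) are made-up numbers for illustration:

```python
# Bayes rule with the denominator expanded via total probability:
# P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|~A)P(~A))
p_a = 0.2            # prior: P(rain tomorrow), as on the priors slide
p_b_given_a = 0.9    # P(cloudy tonight | rain tomorrow)  -- hypothetical
p_b_given_not_a = 0.3  # P(cloudy tonight | no rain tomorrow) -- hypothetical

p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)  # total probability
p_a_given_b = p_b_given_a * p_a / p_b

print(round(p_a_given_b, 4))  # posterior P(rain | cloudy)
```

Observing the clouds raises the belief in rain from the prior 0.2 to 0.18 / 0.42 ≈ 0.43.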

35 Density estimation

36 Density Estimation
A density estimator learns a mapping from a set of attributes to a probability:
input data for a variable or a set of variables -> Density Estimator -> probability

37 Density estimation
Estimate the distribution (or conditional distribution) of a random variable.
Types of variables:
- Binary: coin flip, alarm
- Discrete: dice, car model year
- Continuous: height, weight, temperature, ...

38 When do we need to estimate densities?
Density estimators can do many good things:
- Sort the records by probability, and thus spot weird records (anomaly detection)
- Do inference: P(E1 | E2), e.g. medical diagnosis, robot sensors
- Serve as an ingredient for Bayes networks and other types of ML methods

39 Density estimation
Binary and discrete variables: easy - just count!
Continuous variables: harder (but just a bit) - fit a model.

40 Learning a density estimator for discrete variables
P̂(x = u) = (# records in which x = u) / (total number of records)
A trivial learning algorithm! But why is this true?
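
The counting estimator fits in a few lines. A minimal sketch; the car-model-year data is hypothetical:

```python
from collections import Counter

def count_estimator(records):
    """P_hat(x = u) = (# records with x = u) / (total # of records)."""
    counts = Counter(records)
    n = len(records)
    return {u: c / n for u, c in counts.items()}

# Hypothetical discrete data: car model years.
data = [2010, 2012, 2010, 2015, 2012, 2010]
p_hat = count_estimator(data)
print(p_hat)  # e.g. P_hat(2010) = 3/6 = 0.5
```

The estimates are nonnegative and sum to 1 by construction, so the result is a valid probability distribution over the observed values.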

41 Maximum Likelihood Principle
We can define the likelihood of the data given the model M as follows:
P̂(dataset | M) = P̂(x1 ∧ x2 ∧ ... ∧ xn | M) = Π_{k=1..n} P̂(xk | M)
For example, M is:
- the probability of heads for a coin flip
- the probabilities of observing 1, 2, 3, 4 and 5 for a die
- etc.
M is our model (usually a collection of parameters).

42 Maximum Likelihood Principle
Our goal is to determine the values for the parameters in M. We can do this by maximizing the probability of generating the observed samples. For example, let θ be the probability of heads for a coin flip. Then
L(x1, ..., xn) = p(x1) ... p(xn)
(the observations - different flips - are assumed to be independent).
For such a coin flip with P(H) = q, the best assignment is
argmax q = #H / #samples
Why?

43 Maximum Likelihood Principle: Binary variables
For a binary random variable A with P(A = 1) = q:
argmax q = #1 / #samples
Why?
Data likelihood: P(D | M) = q^{n1} (1 − q)^{n2}, where n1 is the number of 1s and n2 the number of 0s observed.
We would like to find: argmax_q q^{n1} (1 − q)^{n2}
(omitting terms that do not depend on q)

44 Maximum Likelihood Principle
Data likelihood: P(D | M) = q^{n1} (1 − q)^{n2}
We would like to find: argmax_q q^{n1} (1 − q)^{n2}
Setting the derivative to 0:
d/dq [q^{n1} (1 − q)^{n2}] = n1 q^{n1−1} (1 − q)^{n2} − n2 q^{n1} (1 − q)^{n2−1} = 0
Dividing by q^{n1−1} (1 − q)^{n2−1}:
n1 (1 − q) − n2 q = 0  =>  q = n1 / (n1 + n2)
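
The closed form q = n1 / (n1 + n2) can be sanity-checked numerically by scanning the likelihood over a grid; the counts below are an arbitrary example:

```python
# Check that q**n1 * (1-q)**n2 is maximized at q = n1 / (n1 + n2).
n1, n2 = 7, 3  # example counts of observed 1s and 0s

def likelihood(q):
    return q ** n1 * (1 - q) ** n2

grid = [i / 1000 for i in range(1001)]   # q = 0.000, 0.001, ..., 1.000
q_best = max(grid, key=likelihood)
print(q_best)  # 0.7, i.e. n1 / (n1 + n2)
```

The grid maximum lands exactly on the analytic answer, which is also why "just count" is the MLE for discrete variables.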

45 Log Probabilities
When working with products, probabilities of entire datasets often get too small. A possible solution is to use the log of probabilities, often termed log likelihood:
log P̂(dataset | M) = log Π_{k=1..n} P̂(xk | M) = Σ_{k=1..n} log P̂(xk | M)
Maximizing this likelihood function is the same as maximizing P(dataset | M), since log is monotonic.
Probabilities lie between 0 and 1, so their logs are at most 0. In some cases, moving to log space also makes computation easier (for example, removing exponents).
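
The underflow problem is easy to demonstrate; this sketch multiplies 2000 probabilities of 0.1 directly and then sums their logs instead:

```python
import math

# 2000 observations, each with probability 0.1: the raw product underflows,
# but the log-likelihood stays perfectly representable.
probs = [0.1] * 2000

product = 1.0
for p in probs:
    product *= p
print(product)  # 0.0 -- the true value 1e-2000 is below the float range

log_lik = sum(math.log(p) for p in probs)
print(log_lik)  # 2000 * log(0.1), about -4605.2
```

The product silently becomes 0.0 (so comparing models by raw likelihood would be meaningless), while the log-likelihood is an ordinary, comparable number.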

46 Density estimation
Binary and discrete variables: easy - just count!
Continuous variables: harder (but just a bit) - fit a model.
But what if we only have very few samples?

47 How much do grad students sleep?
Let's try to estimate the distribution of the time students spend sleeping (outside class).

48 Possible statistics
X = sleep time
Mean of X: E{X} = 7.03
Variance of X: Var{X} = E{(X − E{X})^2}
[Histogram: frequency vs. sleep hours]

49 Covariance: Sleep vs. GPA
Covariance of X1, X2: Cov{X1, X2} = E{(X1 − E{X1})(X2 − E{X2})}
[Scatter plot: GPA vs. sleep hours]

50 Statistical Models
Statistical models attempt to characterize properties of the population of interest. For example, we might believe that repeated measurements follow a normal (Gaussian) distribution with some mean μ and variance σ²:
x ~ N(μ, σ²), where p(x) = (1 / sqrt(2πσ²)) e^{−(x − μ)² / (2σ²)}
and θ = (μ, σ²) defines the parameters (mean and variance) of the model.

51 The Parameters of Our Model
A statistical model is a collection of distributions; the parameters specify individual distributions:
x ~ N(μ, σ²)
We need to adjust the parameters so that the resulting distribution fits the data well.


53 Computing the parameters of our model
Let's assume a Gaussian distribution for our sleep data. How do we compute the parameters of the model?
[Histogram: frequency vs. sleep hours]

54 Maximum Likelihood Principle
We can fit statistical models by maximizing the probability of generating the observed samples:
L(x1, ..., xn) = p(x1) ... p(xn)
(the samples are assumed to be independent). In the Gaussian case, we simply set the mean and the variance to the sample mean and the sample variance:
μ̂ = (1/n) Σ_{i=1..n} xi
σ̂² = (1/n) Σ_{i=1..n} (xi − μ̂)²
Why?
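
The two formulas translate directly into code. A minimal sketch with hypothetical sleep-hours data (not the class survey from the slides); note the MLE uses 1/n, not the 1/(n−1) unbiased variance estimator:

```python
# MLE for a Gaussian: mean = sample mean, variance = (1/n) sample variance.
x = [6.5, 7.0, 8.0, 5.5, 7.5, 7.0, 6.0, 8.5]  # hypothetical sleep hours
n = len(x)

mu_hat = sum(x) / n
var_hat = sum((xi - mu_hat) ** 2 for xi in x) / n

print(mu_hat, var_hat)  # 7.0 and 0.875 for this data
```

These are the parameter values that make the observed sample most probable under a Gaussian model; the "why" is answered by differentiating the log-likelihood, exactly as in the coin-flip case.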

55 Important points
Random variables
Chain rule
Bayes rule
Joint distribution, independence, conditional independence
MLE

56 Probability Density Function
Discrete distributions
Continuous: Cumulative Distribution Function (CDF): F(a) = P(X ≤ a) = ∫_{−∞}^{a} f(x) dx

57 Total probability
Cumulative Distribution Functions
Probability Density Function (PDF)
Properties: F(x) is non-decreasing, with F(−∞) = 0 and F(∞) = 1

58 Expectations
Mean/Expected Value: E{X}
Variance: Var{X} = E{(X − E{X})^2}
In general: E{g(X)}

59 Multivariate
Joint for (x, y)
Marginal: p(x) = Σ_y p(x, y)
Conditionals: p(y | x) = p(x, y) / p(x)
Chain rule: p(x, y) = p(y | x) p(x)

60 Bayes Rule
Standard form: P(A | B) = P(B | A)P(A) / P(B)
Replacing the bottom: P(A | B) = P(B | A)P(A) / Σ_A' P(B | A')P(A')

61 Binomial
Distribution: P(X = k) = C(n, k) p^k (1 − p)^{n−k}
Mean/Var: E{X} = np, Var{X} = np(1 − p)

62 Uniform
Anything is equally likely in the region [a, b].
Distribution: f(x) = 1 / (b − a) for x in [a, b], 0 otherwise
Mean/Var: E{X} = (a + b) / 2, Var{X} = (b − a)² / 12

63 Gaussian (Normal)
If we look at the heights of women in country XX, they will look approximately Gaussian.
Small random noise errors look Gaussian/Normal.
Distribution: p(x) = (1 / sqrt(2πσ²)) e^{−(x − μ)² / (2σ²)}
Mean/Var: E{X} = μ, Var{X} = σ²

64 Why Do People Use Gaussians?
Central Limit Theorem (loosely): the sum of a large number of IID random variables is approximately Gaussian.
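
The CLT is easy to see by simulation. A sketch that sums uniform draws, where the summed variable is nowhere near Gaussian but the sums are:

```python
import random
random.seed(0)

# Sum of 30 IID Uniform(0,1) draws; by the CLT the sums are roughly
# Gaussian with mean 30 * 0.5 = 15 and variance 30 * (1/12) = 2.5.
k, trials = 30, 20000
sums = [sum(random.random() for _ in range(k)) for _ in range(trials)]

mean = sum(sums) / trials
var = sum((s - mean) ** 2 for s in sums) / trials
print(round(mean, 2), round(var, 2))  # close to 15 and 2.5
```

Plotting a histogram of `sums` would show the familiar bell shape; the empirical mean and variance already match the CLT prediction closely.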

65 Multivariate Gaussians
Distribution for a vector x.
PDF: p(x) = (1 / ((2π)^{d/2} |Σ|^{1/2})) e^{−(1/2)(x − μ)^T Σ^{−1} (x − μ)}

66 Multivariate Gaussians
cov(x1, x2) = (1/n) Σ_{i=1..n} (x_{1,i} − μ1)(x_{2,i} − μ2)

67 Covariance examples
Anti-correlated: covariance = −9.2
Correlated: covariance =
Independent (almost): covariance = 0.6

68 Sum of Gaussians
The sum of two independent Gaussians is a Gaussian:
X ~ N(μ1, σ1²), Y ~ N(μ2, σ2²)  =>  X + Y ~ N(μ1 + μ2, σ1² + σ2²)
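
The means-and-variances-add rule can be checked by simulation; the parameter values below are arbitrary:

```python
import random
random.seed(1)

# X ~ N(2, 1) and Y ~ N(-1, 4) independent, so X + Y ~ N(1, 5).
n = 50000
x = [random.gauss(2.0, 1.0) for _ in range(n)]   # gauss takes (mu, sigma)
y = [random.gauss(-1.0, 2.0) for _ in range(n)]  # sigma = 2, variance = 4
z = [a + b for a, b in zip(x, y)]

mean_z = sum(z) / n
var_z = sum((v - mean_z) ** 2 for v in z) / n
print(round(mean_z, 2), round(var_z, 2))  # close to 1.0 and 5.0
```

Independence matters here: for dependent Gaussians the variance of the sum also picks up a 2·cov(X, Y) term, and the sum need not even be Gaussian unless X and Y are jointly Gaussian.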


More information

STAT Homework 1 - Solutions

STAT Homework 1 - Solutions STAT-36700 Homework 1 - Solutios Fall 018 September 11, 018 This cotais solutios for Homework 1. Please ote that we have icluded several additioal commets ad approaches to the problems to give you better

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Chapter 9 Maximum Likelihood Estimatio 9.1 The Likelihood Fuctio The maximum likelihood estimator is the most widely used estimatio method. This chapter discusses the most importat cocepts behid maximum

More information

CSE 527, Additional notes on MLE & EM

CSE 527, Additional notes on MLE & EM CSE 57 Lecture Notes: MLE & EM CSE 57, Additioal otes o MLE & EM Based o earlier otes by C. Grat & M. Narasimha Itroductio Last lecture we bega a examiatio of model based clusterig. This lecture will be

More information

Parameter, Statistic and Random Samples

Parameter, Statistic and Random Samples Parameter, Statistic ad Radom Samples A parameter is a umber that describes the populatio. It is a fixed umber, but i practice we do ot kow its value. A statistic is a fuctio of the sample data, i.e.,

More information

Topic 8: Expected Values

Topic 8: Expected Values Topic 8: Jue 6, 20 The simplest summary of quatitative data is the sample mea. Give a radom variable, the correspodig cocept is called the distributioal mea, the epectatio or the epected value. We begi

More information

Naïve Bayes. Naïve Bayes

Naïve Bayes. Naïve Bayes Statistical Data Miig ad Machie Learig Hilary Term 206 Dio Sejdiovic Departmet of Statistics Oxford Slides ad other materials available at: http://www.stats.ox.ac.uk/~sejdiov/sdmml : aother plug-i classifier

More information

FACULTY OF MATHEMATICAL STUDIES MATHEMATICS FOR PART I ENGINEERING. Lectures

FACULTY OF MATHEMATICAL STUDIES MATHEMATICS FOR PART I ENGINEERING. Lectures FACULTY OF MATHEMATICAL STUDIES MATHEMATICS FOR PART I ENGINEERING Lectures MODULE 5 STATISTICS II. Mea ad stadard error of sample data. Biomial distributio. Normal distributio 4. Samplig 5. Cofidece itervals

More information

Clustering: Mixture Models

Clustering: Mixture Models Clusterig: Mixture Models Machie Learig 10-601B Seyoug Kim May of these slides are derived from Tom Mitchell, Ziv- Bar Joseph, ad Eric Xig. Thaks! Problem with K- meas Hard Assigmet of Samples ito Three

More information

TAMS24: Notations and Formulas

TAMS24: Notations and Formulas TAMS4: Notatios ad Formulas Basic otatios ad defiitios X: radom variable stokastiska variabel Mea Vätevärde: µ = X = by Xiagfeg Yag kpx k, if X is discrete, xf Xxdx, if X is cotiuous Variace Varias: =

More information

Agnostic Learning and Concentration Inequalities

Agnostic Learning and Concentration Inequalities ECE901 Sprig 2004 Statistical Regularizatio ad Learig Theory Lecture: 7 Agostic Learig ad Cocetratio Iequalities Lecturer: Rob Nowak Scribe: Aravid Kailas 1 Itroductio 1.1 Motivatio I the last lecture

More information

Topics Machine learning: lecture 2. Review: the learning problem. Hypotheses and estimation. Estimation criterion cont d. Estimation criterion

Topics Machine learning: lecture 2. Review: the learning problem. Hypotheses and estimation. Estimation criterion cont d. Estimation criterion .87 Machie learig: lecture Tommi S. Jaakkola MIT CSAIL tommi@csail.mit.edu Topics The learig problem hypothesis class, estimatio algorithm loss ad estimatio criterio samplig, empirical ad epected losses

More information

Discrete Mathematics for CS Spring 2008 David Wagner Note 22

Discrete Mathematics for CS Spring 2008 David Wagner Note 22 CS 70 Discrete Mathematics for CS Sprig 2008 David Wager Note 22 I.I.D. Radom Variables Estimatig the bias of a coi Questio: We wat to estimate the proportio p of Democrats i the US populatio, by takig

More information

Let us give one more example of MLE. Example 3. The uniform distribution U[0, θ] on the interval [0, θ] has p.d.f.

Let us give one more example of MLE. Example 3. The uniform distribution U[0, θ] on the interval [0, θ] has p.d.f. Lecture 5 Let us give oe more example of MLE. Example 3. The uiform distributio U[0, ] o the iterval [0, ] has p.d.f. { 1 f(x =, 0 x, 0, otherwise The likelihood fuctio ϕ( = f(x i = 1 I(X 1,..., X [0,

More information

Outline. Linear regression. Regularization functions. Polynomial curve fitting. Stochastic gradient descent for regression. MLE for regression

Outline. Linear regression. Regularization functions. Polynomial curve fitting. Stochastic gradient descent for regression. MLE for regression REGRESSION 1 Outlie Liear regressio Regularizatio fuctios Polyomial curve fittig Stochastic gradiet descet for regressio MLE for regressio Step-wise forward regressio Regressio methods Statistical techiques

More information

Estimation for Complete Data

Estimation for Complete Data Estimatio for Complete Data complete data: there is o loss of iformatio durig study. complete idividual complete data= grouped data A complete idividual data is the oe i which the complete iformatio of

More information

Economics 241B Relation to Method of Moments and Maximum Likelihood OLSE as a Maximum Likelihood Estimator

Economics 241B Relation to Method of Moments and Maximum Likelihood OLSE as a Maximum Likelihood Estimator Ecoomics 24B Relatio to Method of Momets ad Maximum Likelihood OLSE as a Maximum Likelihood Estimator Uder Assumptio 5 we have speci ed the distributio of the error, so we ca estimate the model parameters

More information

STATISTICAL PROPERTIES OF LEAST SQUARES ESTIMATORS. Comments:

STATISTICAL PROPERTIES OF LEAST SQUARES ESTIMATORS. Comments: Recall: STATISTICAL PROPERTIES OF LEAST SQUARES ESTIMATORS Commets:. So far we have estimates of the parameters! 0 ad!, but have o idea how good these estimates are. Assumptio: E(Y x)! 0 +! x (liear coditioal

More information

Stat 400: Georgios Fellouris Homework 5 Due: Friday 24 th, 2017

Stat 400: Georgios Fellouris Homework 5 Due: Friday 24 th, 2017 Stat 400: Georgios Fellouris Homework 5 Due: Friday 4 th, 017 1. A exam has multiple choice questios ad each of them has 4 possible aswers, oly oe of which is correct. A studet will aswer all questios

More information

Lecture 1 Probability and Statistics

Lecture 1 Probability and Statistics Wikipedia: Lecture 1 Probability ad Statistics Bejami Disraeli, British statesma ad literary figure (1804 1881): There are three kids of lies: lies, damed lies, ad statistics. popularized i US by Mark

More information

Problem Set 4 Due Oct, 12

Problem Set 4 Due Oct, 12 EE226: Radom Processes i Systems Lecturer: Jea C. Walrad Problem Set 4 Due Oct, 12 Fall 06 GSI: Assae Gueye This problem set essetially reviews detectio theory ad hypothesis testig ad some basic otios

More information

Chapter 8: STATISTICAL INTERVALS FOR A SINGLE SAMPLE. Part 3: Summary of CI for µ Confidence Interval for a Population Proportion p

Chapter 8: STATISTICAL INTERVALS FOR A SINGLE SAMPLE. Part 3: Summary of CI for µ Confidence Interval for a Population Proportion p Chapter 8: STATISTICAL INTERVALS FOR A SINGLE SAMPLE Part 3: Summary of CI for µ Cofidece Iterval for a Populatio Proportio p Sectio 8-4 Summary for creatig a 100(1-α)% CI for µ: Whe σ 2 is kow ad paret

More information

7-1. Chapter 4. Part I. Sampling Distributions and Confidence Intervals

7-1. Chapter 4. Part I. Sampling Distributions and Confidence Intervals 7-1 Chapter 4 Part I. Samplig Distributios ad Cofidece Itervals 1 7- Sectio 1. Samplig Distributio 7-3 Usig Statistics Statistical Iferece: Predict ad forecast values of populatio parameters... Test hypotheses

More information

Understanding Samples

Understanding Samples 1 Will Moroe CS 109 Samplig ad Bootstrappig Lecture Notes #17 August 2, 2017 Based o a hadout by Chris Piech I this chapter we are goig to talk about statistics calculated o samples from a populatio. We

More information

Linear regression. Daniel Hsu (COMS 4771) (y i x T i β)2 2πσ. 2 2σ 2. 1 n. (x T i β y i ) 2. 1 ˆβ arg min. β R n d

Linear regression. Daniel Hsu (COMS 4771) (y i x T i β)2 2πσ. 2 2σ 2. 1 n. (x T i β y i ) 2. 1 ˆβ arg min. β R n d Liear regressio Daiel Hsu (COMS 477) Maximum likelihood estimatio Oe of the simplest liear regressio models is the followig: (X, Y ),..., (X, Y ), (X, Y ) are iid radom pairs takig values i R d R, ad Y

More information

PRACTICE PROBLEMS FOR THE FINAL

PRACTICE PROBLEMS FOR THE FINAL PRACTICE PROBLEMS FOR THE FINAL Math 36Q Sprig 25 Professor Hoh Below is a list of practice questios for the Fial Exam. I would suggest also goig over the practice problems ad exams for Exam ad Exam 2

More information

NANYANG TECHNOLOGICAL UNIVERSITY SYLLABUS FOR ENTRANCE EXAMINATION FOR INTERNATIONAL STUDENTS AO-LEVEL MATHEMATICS

NANYANG TECHNOLOGICAL UNIVERSITY SYLLABUS FOR ENTRANCE EXAMINATION FOR INTERNATIONAL STUDENTS AO-LEVEL MATHEMATICS NANYANG TECHNOLOGICAL UNIVERSITY SYLLABUS FOR ENTRANCE EXAMINATION FOR INTERNATIONAL STUDENTS AO-LEVEL MATHEMATICS STRUCTURE OF EXAMINATION PAPER. There will be oe 2-hour paper cosistig of 4 questios.

More information

The Maximum-Likelihood Decoding Performance of Error-Correcting Codes

The Maximum-Likelihood Decoding Performance of Error-Correcting Codes The Maximum-Lielihood Decodig Performace of Error-Correctig Codes Hery D. Pfister ECE Departmet Texas A&M Uiversity August 27th, 2007 (rev. 0) November 2st, 203 (rev. ) Performace of Codes. Notatio X,

More information

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n.

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n. Jauary 1, 2019 Resamplig Methods Motivatio We have so may estimators with the property θ θ d N 0, σ 2 We ca also write θ a N θ, σ 2 /, where a meas approximately distributed as Oce we have a cosistet estimator

More information

HOMEWORK I: PREREQUISITES FROM MATH 727

HOMEWORK I: PREREQUISITES FROM MATH 727 HOMEWORK I: PREREQUISITES FROM MATH 727 Questio. Let X, X 2,... be idepedet expoetial radom variables with mea µ. (a) Show that for Z +, we have EX µ!. (b) Show that almost surely, X + + X (c) Fid the

More information

What is Probability?

What is Probability? Quatificatio of ucertaity. What is Probability? Mathematical model for thigs that occur radomly. Radom ot haphazard, do t kow what will happe o ay oe experimet, but has a log ru order. The cocept of probability

More information

EE 6885 Statistical Pattern Recognition

EE 6885 Statistical Pattern Recognition EE 6885 Statistical Patter Recogitio Fall 5 Prof. Shih-Fu Chag http://www.ee.columbia.edu/~sfchag Lecture 6 (9/8/5 EE6887-Chag 6- Readig EM for Missig Features Textboo, DHS 3.9 Bayesia Parameter Estimatio

More information

Introductory statistics

Introductory statistics CM9S: Machie Learig for Bioiformatics Lecture - 03/3/06 Itroductory statistics Lecturer: Sriram Sakararama Scribe: Sriram Sakararama We will provide a overview of statistical iferece focussig o the key

More information

Statisticians use the word population to refer the total number of (potential) observations under consideration

Statisticians use the word population to refer the total number of (potential) observations under consideration 6 Samplig Distributios Statisticias use the word populatio to refer the total umber of (potetial) observatios uder cosideratio The populatio is just the set of all possible outcomes i our sample space

More information

Lecture 7: Properties of Random Samples

Lecture 7: Properties of Random Samples Lecture 7: Properties of Radom Samples 1 Cotiued From Last Class Theorem 1.1. Let X 1, X,...X be a radom sample from a populatio with mea µ ad variace σ

More information

Overview of Estimation

Overview of Estimation Topic Iferece is the problem of turig data ito kowledge, where kowledge ofte is expressed i terms of etities that are ot preset i the data per se but are preset i models that oe uses to iterpret the data.

More information

Statistical Inference (Chapter 10) Statistical inference = learn about a population based on the information provided by a sample.

Statistical Inference (Chapter 10) Statistical inference = learn about a population based on the information provided by a sample. Statistical Iferece (Chapter 10) Statistical iferece = lear about a populatio based o the iformatio provided by a sample. Populatio: The set of all values of a radom variable X of iterest. Characterized

More information

1 Review of Probability & Statistics

1 Review of Probability & Statistics 1 Review of Probability & Statistics a. I a group of 000 people, it has bee reported that there are: 61 smokers 670 over 5 960 people who imbibe (drik alcohol) 86 smokers who imbibe 90 imbibers over 5

More information

Mixtures of Gaussians and the EM Algorithm

Mixtures of Gaussians and the EM Algorithm Mixtures of Gaussias ad the EM Algorithm CSE 6363 Machie Learig Vassilis Athitsos Computer Sciece ad Egieerig Departmet Uiversity of Texas at Arligto 1 Gaussias A popular way to estimate probability desity

More information

Topic 9: Sampling Distributions of Estimators

Topic 9: Sampling Distributions of Estimators Topic 9: Samplig Distributios of Estimators Course 003, 2018 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be

More information

CS284A: Representations and Algorithms in Molecular Biology

CS284A: Representations and Algorithms in Molecular Biology CS284A: Represetatios ad Algorithms i Molecular Biology Scribe Notes o Lectures 3 & 4: Motif Discovery via Eumeratio & Motif Represetatio Usig Positio Weight Matrix Joshua Gervi Based o presetatios by

More information

A PROBABILITY PRIMER

A PROBABILITY PRIMER CARLETON COLLEGE A ROBABILITY RIMER SCOTT BIERMAN (Do ot quote without permissio) A robability rimer INTRODUCTION The field of probability ad statistics provides a orgaizig framework for systematically

More information

Hypothesis Testing. Evaluation of Performance of Learned h. Issues. Trade-off Between Bias and Variance

Hypothesis Testing. Evaluation of Performance of Learned h. Issues. Trade-off Between Bias and Variance Hypothesis Testig Empirically evaluatig accuracy of hypotheses: importat activity i ML. Three questios: Give observed accuracy over a sample set, how well does this estimate apply over additioal samples?

More information

January 25, 2017 INTRODUCTION TO MATHEMATICAL STATISTICS

January 25, 2017 INTRODUCTION TO MATHEMATICAL STATISTICS Jauary 25, 207 INTRODUCTION TO MATHEMATICAL STATISTICS Abstract. A basic itroductio to statistics assumig kowledge of probability theory.. Probability I a typical udergraduate problem i probability, we

More information

ECE 901 Lecture 12: Complexity Regularization and the Squared Loss

ECE 901 Lecture 12: Complexity Regularization and the Squared Loss ECE 90 Lecture : Complexity Regularizatio ad the Squared Loss R. Nowak 5/7/009 I the previous lectures we made use of the Cheroff/Hoeffdig bouds for our aalysis of classifier errors. Hoeffdig s iequality

More information

Stat410 Probability and Statistics II (F16)

Stat410 Probability and Statistics II (F16) Some Basic Cocepts of Statistical Iferece (Sec 5.) Suppose we have a rv X that has a pdf/pmf deoted by f(x; θ) or p(x; θ), where θ is called the parameter. I previous lectures, we focus o probability problems

More information

Asymptotics. Hypothesis Testing UMP. Asymptotic Tests and p-values

Asymptotics. Hypothesis Testing UMP. Asymptotic Tests and p-values of the secod half Biostatistics 6 - Statistical Iferece Lecture 6 Fial Exam & Practice Problems for the Fial Hyu Mi Kag Apil 3rd, 3 Hyu Mi Kag Biostatistics 6 - Lecture 6 Apil 3rd, 3 / 3 Rao-Blackwell

More information

Exam II Review. CEE 3710 November 15, /16/2017. EXAM II Friday, November 17, in class. Open book and open notes.

Exam II Review. CEE 3710 November 15, /16/2017. EXAM II Friday, November 17, in class. Open book and open notes. Exam II Review CEE 3710 November 15, 017 EXAM II Friday, November 17, i class. Ope book ad ope otes. Focus o material covered i Homeworks #5 #8, Note Packets #10 19 1 Exam II Topics **Will emphasize material

More information

Probability and statistics: basic terms

Probability and statistics: basic terms Probability ad statistics: basic terms M. Veeraraghava August 203 A radom variable is a rule that assigs a umerical value to each possible outcome of a experimet. Outcomes of a experimet form the sample

More information

Lecture 1 Probability and Statistics

Lecture 1 Probability and Statistics Wikipedia: Lecture 1 Probability ad Statistics Bejami Disraeli, British statesma ad literary figure (1804 1881): There are three kids of lies: lies, damed lies, ad statistics. popularized i US by Mark

More information