Non-parametric Methods


1 Non-parametric Methods Machine Learning Alireza Ghane Non-Parametric Methods Alireza Ghane / Torsten Möller 1

2 Outline Machine Learning: What, Why, and How? Curve Fitting: (e.g.) Regression and Model Selection Decision Theory: ML, Loss Function, MAP Probability Theory: (e.g.) Probabilities and Parameter Estimation Kernel Density Estimation Nearest-neighbour Conclusion Non-Parametric Methods Alireza Ghane / Torsten Möller 2


5 Hand-written Digit Recognition [Figure from Belongie et al., PAMI 2002: all of the misclassified MNIST test digits using their method (63 out of 10,000); the text above each digit indicates the example number followed by the true label and the assigned label.] Difficult to hand-craft rules about digits Non-Parametric Methods Alireza Ghane / Torsten Möller 5

6 Hand-written Digit Recognition x_i = [digit image], t_i = (0, 0, 0, 1, 0, 0, 0, 0, 0, 0) Represent the input image as a vector x_i ∈ R^784. Suppose we have a target vector t_i. This is supervised learning. Discrete, finite label set: perhaps t_i ∈ {0, 1}^10, a classification problem. Given a training set {(x_1, t_1), ..., (x_N, t_N)}, the learning problem is to construct a good function y(x) from these, y : R^784 → R^10. Non-Parametric Methods Alireza Ghane / Torsten Möller 6

7 Face Detection Classification problem. Schneiderman and Kanade, IJCV 2002. t_i ∈ {0, 1, 2}: non-face, frontal face, profile face. Non-Parametric Methods Alireza Ghane / Torsten Möller 7

8 Spam Detection Classification problem. t_i ∈ {0, 1}: non-spam, spam. x_i counts of words, e.g. Viagra, stock, outperform, multi-bagger. Non-Parametric Methods Alireza Ghane / Torsten Möller 8

9 Stock Price Prediction Problems in which t_i is continuous are called regression. E.g. t_i is stock price, x_i contains company profit, debt, cash flow, gross sales, number of spam emails sent, ... Non-Parametric Methods Alireza Ghane / Torsten Möller 9

10 Clustering Images Wang et al., CVPR 2006 Only x_i is defined: unsupervised learning. E.g. x_i describes an image; find groups of similar images. Non-Parametric Methods Alireza Ghane / Torsten Möller 10

11 Types of Learning Problems Supervised Learning Classification Regression Unsupervised Learning Density estimation Clustering: k-means, mixture models, hierarchical clustering Hidden Markov models Reinforcement Learning Non-Parametric Methods Alireza Ghane / Torsten Möller 11

12 Outline Machine Learning: What, Why, and How? Curve Fitting: (e.g.) Regression and Model Selection Decision Theory: ML, Loss Function, MAP Probability Theory: (e.g.) Probabilities and Parameter Estimation Kernel Density Estimation Nearest-neighbour Conclusion Non-Parametric Methods Alireza Ghane / Torsten Möller 12


14 An Example - Polynomial Curve Fitting Suppose we are given a training set of N observations (x_1, ..., x_N) and (t_1, ..., t_N), x_i, t_i ∈ R. Regression problem: estimate y(x) from these data. Non-Parametric Methods Alireza Ghane / Torsten Möller 14

15 Polynomial Curve Fitting What form is y(x)? Let's try polynomials of degree M: y(x, w) = w_0 + w_1 x + w_2 x^2 + ... + w_M x^M. This is the hypothesis space. How do we measure success? Sum of squared errors: E(w) = (1/2) Σ_{n=1}^N {y(x_n, w) - t_n}^2 [Figure: green bars mark the errors t_n - y(x_n, w) at each x_n.] Among functions in the class, choose the one that minimizes this error. Non-Parametric Methods Alireza Ghane / Torsten Möller 15
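As a concrete illustration of the fitting procedure on this slide (an added sketch, not part of the original deck), the following NumPy snippet fits a degree-M polynomial by minimizing E(w) and also reports the RMS error E_RMS = sqrt(2E(w*)/N) used a couple of slides later; the sin(2πx) data and the variable names are illustrative assumptions.

```python
import numpy as np

def fit_polynomial(x, t, M):
    """Least-squares fit of y(x, w) = w_0 + w_1 x + ... + w_M x^M."""
    Phi = np.vander(x, M + 1, increasing=True)    # design matrix: columns 1, x, ..., x^M
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)   # minimizes ||Phi w - t||^2
    return w

def sum_squared_error(w, x, t):
    """E(w) = 1/2 * sum_n (y(x_n, w) - t_n)^2."""
    y = np.polyval(w[::-1], x)                    # polyval expects highest degree first
    return 0.5 * np.sum((y - t) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.shape)   # noisy targets

for M in (0, 1, 3, 9):
    w = fit_polynomial(x, t, M)
    E = sum_squared_error(w, x, t)
    print(f"M={M}  E(w*)={E:.4f}  E_RMS={np.sqrt(2 * E / len(x)):.4f}")
```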

16 Which Degree of Polynomial? A model selection problem. M = 9, E(w*) = 0: this is over-fitting. Non-Parametric Methods Alireza Ghane / Torsten Möller 16

17 Generalization [Figure: training and test error versus M.] Generalization is the holy grail of ML. Want good performance for new data. Measure generalization using a separate test set. Use root-mean-squared (RMS) error: E_RMS = sqrt(2E(w*)/N). Non-Parametric Methods Alireza Ghane / Torsten Möller 17

18 Controlling Over-fitting: Regularization As the order of the polynomial M increases, so do the coefficient magnitudes. Penalize large coefficients in the error function: Ẽ(w) = (1/2) Σ_{n=1}^N {y(x_n, w) - t_n}^2 + (λ/2) ||w||^2 Non-Parametric Methods Alireza Ghane / Torsten Möller 18
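The regularized error Ẽ(w) above has a closed-form minimizer; the sketch below (an illustrative addition, not from the slides, with assumed data and λ values) solves the ridge-regression normal equations (Φ^T Φ + λI) w = Φ^T t.

```python
import numpy as np

def fit_polynomial_regularized(x, t, M, lam):
    """Minimize 0.5*||Phi w - t||^2 + 0.5*lam*||w||^2 via the normal equations."""
    Phi = np.vander(x, M + 1, increasing=True)
    A = Phi.T @ Phi + lam * np.eye(M + 1)
    return np.linalg.solve(A, Phi.T @ t)

# Larger lambda shrinks the coefficients of a degree-9 fit.
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x)
for lam in (0.0, np.exp(-18), 1.0):
    w = fit_polynomial_regularized(x, t, 9, lam)
    print(f"lambda={lam:.2e}  max|w|={np.abs(w).max():.2f}")
```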


20 Controlling Over-fitting: Regularization Non-Parametric Methods Alireza Ghane / Torsten Möller 20

21 Controlling Over-fitting: Regularization [Figure: training and test E_RMS versus ln λ.] Note the E_RMS for the training set: a perfect match of the model to the training set is a result of over-fitting. Training and test error show a similar trend. Non-Parametric Methods Alireza Ghane / Torsten Möller 21

22 Over-fitting: Dataset size With more data, a more complex model (M = 9) can be fit. Rule of thumb: 10 datapoints for each parameter. Non-Parametric Methods Alireza Ghane / Torsten Möller 22

23 Validation Set Split training data into training set and validation set Train different models (e.g. diff. order polynomials) on training set Choose model (e.g. order of polynomial) with minimum error on validation set Non-Parametric Methods Alireza Ghane / Torsten Möller 23

24 Cross-validation [Figure: the data split into S = 4 folds; each run holds out a different fold for validation.] Data are often limited. Cross-validation creates S groups of data; use S - 1 to train, the other to validate. Extreme case, leave-one-out cross-validation (LOO-CV): S is the number of training data points. Cross-validation is an effective method for model selection, but can be slow. Models with multiple complexity parameters: exponential number of runs. Non-Parametric Methods Alireza Ghane / Torsten Möller 24
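A minimal sketch of S-fold cross-validation for choosing the polynomial degree M (an added example, not from the slides; the synthetic data, fold count, and use of NumPy's polyfit are assumptions).

```python
import numpy as np

def cv_error(x, t, M, S=4):
    """Average held-out sum-of-squares error of a degree-M polynomial over S folds."""
    folds = np.array_split(np.random.default_rng(0).permutation(len(x)), S)
    errs = []
    for val in folds:
        train = np.setdiff1d(np.arange(len(x)), val)
        w = np.polyfit(x[train], t[train], M)                 # train on S-1 folds
        errs.append(0.5 * np.sum((np.polyval(w, x[val]) - t[val]) ** 2))
    return np.mean(errs)

x = np.linspace(0, 1, 20)
t = np.sin(2 * np.pi * x) + np.random.default_rng(1).normal(scale=0.2, size=x.shape)
best_M = min(range(8), key=lambda M: cv_error(x, t, M))       # lowest CV error
w_final = np.polyfit(x, t, best_M)                            # retrain on all training data
print(best_M)
```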

25 Summary Want models that generalize to new data Train model on training set Measure performance on held-out test set Performance on test set is good estimate of performance on new data Non-Parametric Methods Alireza Ghane / Torsten Möller 25

26 Summary - Model Selection Which model to use? E.g. which degree polynomial? Training set error is lower with a more complex model. Can't just choose the model with the lowest training error. Peeking at test error is unfair, e.g. picking the polynomial with the lowest test error: performance on the test set is then no longer a good estimate of performance on new data. Non-Parametric Methods Alireza Ghane / Torsten Möller 26


28 Summary - Solutions I Use a validation set Train models on training set. E.g. different degree polynomials Measure performance on held-out validation set Measure performance of that model on held-out test set Can use cross-validation on training set instead of a separate validation set if little data and lots of time Choose model with lowest error over all cross-validation folds (e.g. polynomial degree) Retrain that model using all training data (e.g. polynomial coefficients) Non-Parametric Methods Alireza Ghane / Torsten Möller 28


30 Summary - Solutions II Use regularization. Train a complex model (e.g. high-order polynomial) but penalize it for being too complex (e.g. large weight magnitudes). Need to balance error vs. regularization (λ): choose λ using cross-validation. Get more data. Non-Parametric Methods Alireza Ghane / Torsten Möller 30


32 Outline Machine Learning: What, Why, and How? Curve Fitting: (e.g.) Regression and Model Selection Decision Theory: ML, Loss Function, MAP Probability Theory: (e.g.) Probabilities and Parameter Estimation Kernel Density Estimation Nearest-neighbour Conclusion Non-Parametric Methods Alireza Ghane / Torsten Möller 32


34 Decision Theory For a sample x, decide which class (C_k) it is from. Ideas: Maximum Likelihood Minimum Loss/Cost (e.g. misclassification rate) Maximum A Posteriori (MAP) Intro. to Machine Learning Alireza Ghane 34

35 Decision: Maximum Likelihood Inference step: determine statistics from training data, p(x, t) OR p(x | C_k). Decision step: determine the optimal t for test input x: t* = arg max_k p(x | C_k), i.e. maximize the likelihood. Intro. to Machine Learning Alireza Ghane 35


38 Decision: Minimum Misclassification Rate p(mistake) = p(x ∈ R_1, C_2) + p(x ∈ R_2, C_1) = ∫_{R_1} p(x, C_2) dx + ∫_{R_2} p(x, C_1) dx In general: p(mistake) = Σ_k Σ_{j ≠ k} ∫_{R_j} p(x, C_k) dx [Figure: joint densities p(x, C_1) and p(x, C_2) over regions R_1, R_2.] x̂: decision boundary; x*: optimal decision boundary, x* = arg min p(mistake). Intro. to Machine Learning Alireza Ghane 38


40 Decision: Minimum Loss/Cost Misclassification rate: R* = arg min_{R_i, i ∈ {1,...,K}} Σ_k Σ_j L(R_j, C_k) Weighted loss/cost function: R* = arg min_{R_i, i ∈ {1,...,K}} Σ_k Σ_j W_{j,k} L(R_j, C_k) Useful when: the populations of the classes are different, or the failure cost is non-symmetric. Intro. to Machine Learning Alireza Ghane 40

41 Decision: Maximum A Posteriori (MAP) Bayes' Theorem: P(A | B) = P(B | A) P(A) / P(B), so p(C_k | x) ∝ p(x | C_k) p(C_k), i.e. posterior ∝ likelihood × prior. Provides an a posteriori belief for the estimation, rather than a single point estimate. Can utilize a priori information in the decision. Intro. to Machine Learning Alireza Ghane 41
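A tiny sketch contrasting the ML and MAP decision rules on a two-class problem (added for illustration, not from the slides; the Gaussian class-conditional densities and the skewed priors are assumptions).

```python
import numpy as np

def gauss_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# Hypothetical 1-D class-conditional densities p(x | C_k) and unequal priors p(C_k).
means, stds = [0.0, 2.0], [1.0, 1.0]
priors = [0.8, 0.2]

def decide_ml(x):
    """Maximum likelihood: arg max_k p(x | C_k); the prior is ignored."""
    return int(np.argmax([gauss_pdf(x, m, s) for m, s in zip(means, stds)]))

def decide_map(x):
    """MAP: arg max_k p(x | C_k) p(C_k), since the posterior is proportional to likelihood times prior."""
    return int(np.argmax([gauss_pdf(x, m, s) * p for m, s, p in zip(means, stds, priors)]))

x = 1.2
print(decide_ml(x), decide_map(x))   # with skewed priors, MAP can disagree with ML
```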

42 Outline Machine Learning: What, Why, and How? Curve Fitting: (e.g.) Regression and Model Selection Decision Theory: ML, Loss Function, MAP Probability Theory: (e.g.) Probabilities and Parameter Estimation Kernel Density Estimation Nearest-neighbour Conclusion Non-Parametric Methods Alireza Ghane / Torsten Möller 42


44 Coin Tossing Let's say you're given a coin, and you want to find out P(heads), the probability that if you flip it, it lands as heads. Flip it a few times: H H T. P(heads) = 2/3. Hmm... is this rigorous? Does this make sense? Non-Parametric Methods Alireza Ghane / Torsten Möller 44


47 Coin Tossing - Model Bernoulli distribution: P(heads) = µ, P(tails) = 1 - µ. Assume coin flips are independent and identically distributed (i.i.d.), i.e. all are separate samples from the Bernoulli distribution. Given data D = {x_1, ..., x_N}, heads: x_i = 1, tails: x_i = 0, the likelihood of the data is: p(D | µ) = Π_{n=1}^N p(x_n | µ) = Π_{n=1}^N µ^{x_n} (1 - µ)^{1 - x_n} Non-Parametric Methods Alireza Ghane / Torsten Möller 47

48 Maximum Likelihood Estimation Given D with h heads and t tails, what should µ be? Maximum Likelihood Estimation (MLE): choose the µ which maximizes the likelihood of the data: µ_ML = arg max_µ p(D | µ) Since ln(·) is monotone increasing: µ_ML = arg max_µ ln p(D | µ) Non-Parametric Methods Alireza Ghane / Torsten Möller 48

49 Maximum Likelihood Estimation Likelihood: p(D | µ) = Π_{n=1}^N µ^{x_n} (1 - µ)^{1 - x_n} Log-likelihood: ln p(D | µ) = Σ_{n=1}^N [x_n ln µ + (1 - x_n) ln(1 - µ)] Take the derivative, set to 0: d/dµ ln p(D | µ) = Σ_{n=1}^N [x_n (1/µ) - (1 - x_n) (1/(1 - µ))] = (1/µ) h - (1/(1 - µ)) t = 0 ⇒ µ_ML = h / (t + h) Non-Parametric Methods Alireza Ghane / Torsten Möller 49
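A quick numeric check of the result above (an added sketch, not in the original slides; the flip sequence is made up): the maximizer of the Bernoulli log-likelihood over a grid coincides with the closed form h/(h + t).

```python
import numpy as np

flips = np.array([1, 1, 0, 1, 0, 1, 1])          # 1 = heads, 0 = tails
h, t = flips.sum(), len(flips) - flips.sum()

def log_likelihood(mu, data):
    """ln p(D | mu) = sum_n [x_n ln mu + (1 - x_n) ln(1 - mu)]."""
    return np.sum(data * np.log(mu) + (1 - data) * np.log(1 - mu))

mus = np.linspace(0.01, 0.99, 999)
mu_numeric = mus[np.argmax([log_likelihood(m, flips) for m in mus])]
print(mu_numeric, h / (h + t))                    # both are (approximately) h / (h + t)
```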


54 Bayesian Learning Wait, does this make sense? What if I flip 1 time and get heads? Do I believe µ = 1? Learn µ the Bayesian way: P(µ | D) = P(D | µ) P(µ) / P(D) ∝ P(D | µ) P(µ), i.e. posterior ∝ likelihood × prior. Prior encodes the knowledge that most coins are 50-50. A conjugate prior makes the math simpler and gives an easy interpretation. For Bernoulli, the beta distribution is its conjugate. Non-Parametric Methods Alireza Ghane / Torsten Möller 54


57 Beta Distribution We will use the Beta distribution to express our prior knowledge about coins: Beta(µ | a, b) = [Γ(a + b) / (Γ(a) Γ(b))] µ^{a-1} (1 - µ)^{b-1}, where the Gamma-function ratio is the normalization constant. Parameters a and b control the shape of this distribution. Non-Parametric Methods Alireza Ghane / Torsten Möller 57

58 Posterior P(µ | D) ∝ P(D | µ) P(µ) ∝ [Π_{n=1}^N µ^{x_n} (1 - µ)^{1 - x_n}] (likelihood) × [µ^{a-1} (1 - µ)^{b-1}] (prior) = µ^h (1 - µ)^t µ^{a-1} (1 - µ)^{b-1} = µ^{h+a-1} (1 - µ)^{t+b-1} The simple form of the posterior is due to the use of a conjugate prior. Parameters a and b act as extra observations. Note that as N = h + t → ∞, the prior is ignored. Non-Parametric Methods Alireza Ghane / Torsten Möller 58
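Because the posterior stays in the Beta family, the Bayesian update is just parameter arithmetic. The sketch below (an illustrative addition with made-up counts and prior) updates Beta(a, b) to Beta(a + h, b + t) and compares the MLE, MAP, and posterior-mean estimates of µ after a single heads flip.

```python
h, t = 1, 0              # one flip, it came up heads
a, b = 2.0, 2.0          # Beta prior encoding a mild belief that coins are near 50-50

a_post, b_post = a + h, b + t                          # posterior is Beta(a + h, b + t)

mu_mle = h / (h + t)                                   # 1.0: over-confident
mu_map = (a_post - 1) / (a_post + b_post - 2)          # posterior mode: 2/3
mu_mean = a_post / (a_post + b_post)                   # posterior mean: 0.6
print(mu_mle, mu_map, mu_mean)
```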


61 Maximum A Posteriori Given the posterior P(µ | D) we could compute a single value, known as the Maximum a Posteriori (MAP) estimate for µ: µ_MAP = arg max_µ P(µ | D) Known as point estimation. However, the correct Bayesian thing to do is to use the full distribution over µ, i.e. compute E_µ[f] = ∫ p(µ | D) f(µ) dµ This integral is usually hard to compute. Non-Parametric Methods Alireza Ghane / Torsten Möller 61


64 Polynomial Curve Fitting: What We Did What form is y(x)? Let's try polynomials of degree M: y(x, w) = w_0 + w_1 x + w_2 x^2 + ... + w_M x^M. This is the hypothesis space. How do we measure success? Sum of squared errors: E(w) = (1/2) Σ_{n=1}^N {y(x_n, w) - t_n}^2 Among functions in the class, choose the one that minimizes this error. Intro. to Machine Learning Alireza Ghane 64

65 Curve Fitting: Probabilistic Approach [Figure: y(x, w) with the Gaussian predictive distribution p(t | x, w, β) of width 2σ around it.] p(t | x, w, β) = Π_{n=1}^N N(t_n | y(x_n, w), β^{-1}) Intro. to Machine Learning Alireza Ghane 65

66 Curve Fitting: Probabilistic Approach p(t | x, w, β) = Π_{n=1}^N N(t_n | y(x_n, w), β^{-1}) ln p(t | x, w, β) = -(β/2) Σ_{n=1}^N {y(x_n, w) - t_n}^2 + (N/2) ln β - (N/2) ln(2π), i.e. -βE(w) plus terms constant in w. Intro. to Machine Learning Alireza Ghane 66

67 Curve Fitting: Probabilistic Approach p(t | x, w, β) = Π_{n=1}^N N(t_n | y(x_n, w), β^{-1}) ln p(t | x, w, β) = -(β/2) Σ_{n=1}^N {y(x_n, w) - t_n}^2 + (N/2) ln β - (N/2) ln(2π) Maximize the log-likelihood ⇔ minimize E(w). Can optimize for β as well. Intro. to Machine Learning Alireza Ghane 67

68 Curve Fitting: Bayesian Approach p(t | x, w, β) = Π_{n=1}^N N(t_n | y(x_n, w), β^{-1}) Intro. to Machine Learning Alireza Ghane 68

69 Curve Fitting: Bayesian Approach p(t | x, w, β) = Π_{n=1}^N N(t_n | y(x_n, w), β^{-1}) Posterior dist.: p(w | x, t, α, β) ∝ p(t | x, w, β) p(w | α) Intro. to Machine Learning Alireza Ghane 69

70 Curve Fitting: Bayesian Approach p(t | x, w, β) = Π_{n=1}^N N(t_n | y(x_n, w), β^{-1}) Posterior dist.: p(w | x, t, α, β) ∝ p(t | x, w, β) p(w | α) Minimize: (β/2) Σ_{n=1}^N {y(x_n, w) - t_n}^2 + (α/2) w^T w, i.e. βE(w) plus a regularization term. Intro. to Machine Learning Alireza Ghane 70

71 Curve Fitting: Bayesian p(t | x, w, β, α) = N(t | P_w(x), Q_{w,β,α}(x)) [Figure: predictive mean with a 2σ band around y(x, w).] Intro. to Machine Learning Alireza Ghane 71

72 Curve Fitting: Bayesian p(t | x, w, β, α) = N(t | P_w(x), Q_{w,β,α}(x)) p(t | x, x, t) = N(t | m(x), s^2(x)) Intro. to Machine Learning Alireza Ghane 72

73 Curve Fitting: Bayesian p(t | x, w, β, α) = N(t | P_w(x), Q_{w,β,α}(x)) p(t | x, x, t) = N(t | m(x), s^2(x)) m(x) = φ(x)^T S Σ_{n=1}^N φ(x_n) t_n, s^2(x) = β^{-1} (1 + φ(x)^T S φ(x)), S^{-1} = (α/β) I + Σ_{n=1}^N φ(x_n) φ(x_n)^T Intro. to Machine Learning Alireza Ghane 73
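A short sketch of the predictive distribution on slide 73 for a polynomial basis (an added example, not from the deck; the α and β values and the sin(2πx) data are assumptions). It computes S, m(x), and s^2(x) exactly as written above.

```python
import numpy as np

def bayes_predict(x_train, t_train, x_new, M=9, alpha=5e-3, beta=11.1):
    """Predictive mean m(x) and variance s^2(x) for Bayesian polynomial regression."""
    phi = lambda x: np.vander(np.atleast_1d(x), M + 1, increasing=True).T  # (M+1, n)
    Phi = phi(x_train)
    S = np.linalg.inv((alpha / beta) * np.eye(M + 1) + Phi @ Phi.T)  # S^-1 = (a/b) I + sum phi phi^T
    phi_new = phi(x_new)
    m = phi_new.T @ S @ (Phi @ t_train)                              # m(x) = phi(x)^T S sum_n phi(x_n) t_n
    s2 = (1.0 / beta) * (1.0 + np.sum(phi_new * (S @ phi_new), axis=0))
    return m, s2

x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x)
m, s2 = bayes_predict(x, t, np.array([0.25, 0.5, 0.75]))
print(m, np.sqrt(s2))    # predictive mean and standard deviation at three new inputs
```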

74 Outline Machine Learning: What, Why, and How? Curve Fitting: (e.g.) Regression and Model Selection Decision Theory: ML, Loss Function, MAP Probability Theory: (e.g.) Probabilities and Parameter Estimation Kernel Density Estimation Nearest-neighbour Conclusion Intro. to Machine Learning Alireza Ghane 74

75 Histograms Consider the problem of modelling the distribution of brightness values in pictures taken on sunny days versus cloudy days We could build histograms of pixel values for each class Intro. to Machine Learning Alireza Ghane 75

76 Histograms E.g. for sunny days. Count n_i, the number of datapoints (pixels) with brightness value falling into each bin: p_i = n_i / (N Δ_i) Sensitive to bin width Δ_i. Discontinuous due to bin edges. In a D-dim space with M bins per dimension: M^D bins. Intro. to Machine Learning Alireza Ghane 76
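A small sketch of the histogram density estimate p_i = n_i / (N Δ_i) (an added example on synthetic data, not part of the deck).

```python
import numpy as np

data = np.random.default_rng(1).normal(size=1000)    # stand-in for brightness values
edges = np.linspace(-4, 4, 33)                       # 32 equal-width bins
n_i, edges = np.histogram(data, bins=edges)
delta_i = np.diff(edges)
p_i = n_i / (len(data) * delta_i)                    # p_i = n_i / (N * Delta_i)
print(np.sum(p_i * delta_i))                         # approximately 1 over the binned range
```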


80 Local Density Estimation In a histogram we use nearby points to estimate density. For a small region around x, estimate the density as: p(x) = K / (N V) K is the number of points in the region, V is the volume of the region, N is the total number of datapoints. Intro. to Machine Learning Alireza Ghane 80

81 Kernel Density Estimation Try to keep the idea of using nearby points to estimate density, but obtain a smoother estimate. Estimate the density by placing a small bump at each datapoint. The kernel function k(·) determines the shape of these bumps. Density estimate is p(x) ≈ (1/N) Σ_{n=1}^N k((x - x_n)/h) Intro. to Machine Learning Alireza Ghane 81

82 Kernel Density Estimation Example using a Gaussian kernel: p(x) = (1/N) Σ_{n=1}^N (1/(2πh^2)^{1/2}) exp{-||x - x_n||^2 / (2h^2)} Intro. to Machine Learning Alireza Ghane 82
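A direct NumPy transcription of the Gaussian-kernel estimate above (added for illustration; the 1-D two-mode sample data and the bandwidth values are assumptions).

```python
import numpy as np

def gaussian_kde(x_query, data, h):
    """p(x) = 1/N * sum_n exp(-(x - x_n)^2 / (2 h^2)) / sqrt(2 pi h^2)."""
    diffs = x_query[:, None] - data[None, :]                      # (Q, N) pairwise differences
    bumps = np.exp(-diffs**2 / (2 * h**2)) / np.sqrt(2 * np.pi * h**2)
    return bumps.mean(axis=1)

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(1, 1.0, 300)])
xs = np.linspace(-4, 4, 9)
for h in (0.05, 0.3, 2.0):                                        # too narrow, reasonable, too smooth
    print(h, np.round(gaussian_kde(xs, data, h), 3))
```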

83 Kernel Density Estimation Other kernels: Rectangle, Triangle, Epanechnikov Intro. to Machine Learning Alireza Ghane 83

85 Kernel Density Estimation Other kernels: Rectangle, Triangle, Epanechnikov Fast at training time, slow at test time: keep all datapoints Intro. to Machine Learning Alireza Ghane 85

86 Kernel Density Estimation Other kernels: Rectangle, Triangle, Epanechnikov Fast at training time, slow at test time: keep all datapoints Sensitive to kernel bandwidth h Intro. to Machine Learning Alireza Ghane 86

87 Outline Machine Learning: What, Why, and How? Curve Fitting: (e.g.) Regression and Model Selection Decision Theory: ML, Loss Function, MAP Probability Theory: (e.g.) Probabilities and Parameter Estimation Kernel Density Estimation Nearest-neighbour Conclusion Intro. to Machine Learning Alireza Ghane 87

88 Nearest-neighbour Instead of relying on the kernel bandwidth to get a proper density estimate, fix the number of nearby points K: p(x) = K / (N V) Note: diverges, so not a proper density estimate. Intro. to Machine Learning Alireza Ghane 88
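A sketch of the K-nearest-neighbour density estimate p(x) = K/(N V) in one dimension, where V is the length of the smallest interval around x that contains the K nearest points (an added example; the data and choice of K are assumptions).

```python
import numpy as np

def knn_density(x_query, data, K=5):
    """p(x) = K / (N * V), with V the width of the interval holding the K nearest neighbours."""
    dists = np.sort(np.abs(data[None, :] - x_query[:, None]), axis=1)   # (Q, N)
    radius = dists[:, K - 1]                   # distance to the K-th nearest neighbour
    V = 2 * radius                             # 1-D "volume" of that neighbourhood
    return K / (len(data) * V)

data = np.random.default_rng(4).normal(size=500)
print(knn_density(np.array([0.0, 2.0, 5.0]), data, K=10))   # density shrinks far from the data
```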

89 Nearest-neighbour for Classification K nearest neighbour is often used for classification. Classification: predict labels t_i from x_i. Intro. to Machine Learning Alireza Ghane 89

90 Nearest-neighbour for Classification [Figure (a): axes x_1, x_2.] K nearest neighbour is often used for classification. Classification: predict labels t_i from x_i. E.g. x_i ∈ R^2 and t_i ∈ {0, 1}, 3-nearest neighbour. Intro. to Machine Learning Alireza Ghane 90

91 Nearest-neighbour for Classification [Figures (a), (b): axes x_1, x_2.] K nearest neighbour is often used for classification. Classification: predict labels t_i from x_i. E.g. x_i ∈ R^2 and t_i ∈ {0, 1}, 3-nearest neighbour. K = 1 is referred to as nearest-neighbour. Intro. to Machine Learning Alireza Ghane 91

92 Nearest-neighbour for Classification Good baseline method. Slow, but can use fancy data structures for efficiency (KD-trees, Locality Sensitive Hashing). Nice theoretical properties: as we obtain more training data points, the space becomes more filled with labelled data; as N → ∞, the error is no more than twice the Bayes error. Intro. to Machine Learning Alireza Ghane 92
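A brute-force K-nearest-neighbour classifier matching the description above (an added sketch; the toy 2-D data are assumptions, and no KD-tree or LSH speed-up is used).

```python
import numpy as np

def knn_classify(x_query, X_train, t_train, K=3):
    """Predict the majority label among the K nearest training points (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:K]
    labels, counts = np.unique(t_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

rng = np.random.default_rng(5)
X = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([3, 3], 1, (50, 2))])
t = np.array([0] * 50 + [1] * 50)
print(knn_classify(np.array([2.5, 2.0]), X, t, K=3))   # K = 1 gives plain nearest-neighbour
```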

93 Bayes Error [Figure: joint densities p(x, C_1) and p(x, C_2); decision boundary x̂ and optimal boundary x* delimit regions R_1, R_2.] Best classification possible given the features. Two classes, PDFs shown. Decision rule: C_1 if x ≤ x̂; makes errors on red, green, and blue regions. Optimal decision rule: C_1 if x ≤ x*; Bayes error is the area of the green and blue regions. Intro. to Machine Learning Alireza Ghane 93


96 Outline Machine Learning: What, Why, and How? Curve Fitting: (e.g.) Regression and Model Selection Decision Theory: ML, Loss Function, MAP Probability Theory: (e.g.) Probabilities and Parameter Estimation Kernel Density Estimation Nearest-neighbour Conclusion Intro. to Machine Learning Alireza Ghane 96

97 Conclusion Readings: Chapter 1.1, 1.3, 1.5, 2.1 Types of learning problems Supervised: regression, classification Unsupervised Learning as optimization Squared error loss function Maximum likelihood (ML) Maximum a posteriori (MAP) Want generalization, avoid over-fitting Cross-validation Regularization Bayesian prior on model parameters Intro. to Machine Learning Alireza Ghane 97

98 Conclusion Readings: Ch. 2.5 Kernel density estimation: model the density p(x) using kernels around the training datapoints. Nearest neighbour: model the density or perform classification using the nearest training datapoints. Multivariate Gaussian: needed for next week's lectures; if you need a refresher, read pp. Intro. to Machine Learning Alireza Ghane 98


Statistical Models. David M. Blei Columbia University. October 14, 2014

Statistical Models. David M. Blei Columbia University. October 14, 2014 Statistical Models David M. Blei Columbia University October 14, 2014 We have discussed graphical models. Graphical models are a formalism for representing families of probability distributions. They are

More information

Machine Learning CSE546 Sham Kakade University of Washington. Oct 4, What about continuous variables?

Machine Learning CSE546 Sham Kakade University of Washington. Oct 4, What about continuous variables? Linear Regression Machine Learning CSE546 Sham Kakade University of Washington Oct 4, 2016 1 What about continuous variables? Billionaire says: If I am measuring a continuous variable, what can you do

More information

Classification & Information Theory Lecture #8

Classification & Information Theory Lecture #8 Classification & Information Theory Lecture #8 Introduction to Natural Language Processing CMPSCI 585, Fall 2007 University of Massachusetts Amherst Andrew McCallum Today s Main Points Automatically categorizing

More information

Slides modified from: PATTERN RECOGNITION AND MACHINE LEARNING CHRISTOPHER M. BISHOP

Slides modified from: PATTERN RECOGNITION AND MACHINE LEARNING CHRISTOPHER M. BISHOP Slides modified from: PATTERN RECOGNITION AND MACHINE LEARNING CHRISTOPHER M. BISHOP Predic?ve Distribu?on (1) Predict t for new values of x by integra?ng over w: where The Evidence Approxima?on (1) The

More information

Introduction to Probabilistic Machine Learning

Introduction to Probabilistic Machine Learning Introduction to Probabilistic Machine Learning Piyush Rai Dept. of CSE, IIT Kanpur (Mini-course 1) Nov 03, 2015 Piyush Rai (IIT Kanpur) Introduction to Probabilistic Machine Learning 1 Machine Learning

More information

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) = Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,

More information

Machine Learning Basics Lecture 2: Linear Classification. Princeton University COS 495 Instructor: Yingyu Liang

Machine Learning Basics Lecture 2: Linear Classification. Princeton University COS 495 Instructor: Yingyu Liang Machine Learning Basics Lecture 2: Linear Classification Princeton University COS 495 Instructor: Yingyu Liang Review: machine learning basics Math formulation Given training data x i, y i : 1 i n i.i.d.

More information

Machine Learning Lecture 2

Machine Learning Lecture 2 Machine Perceptual Learning and Sensory Summer Augmented 15 Computing Many slides adapted from B. Schiele Machine Learning Lecture 2 Probability Density Estimation 16.04.2015 Bastian Leibe RWTH Aachen

More information

Advanced statistical methods for data analysis Lecture 2

Advanced statistical methods for data analysis Lecture 2 Advanced statistical methods for data analysis Lecture 2 RHUL Physics www.pp.rhul.ac.uk/~cowan Universität Mainz Klausurtagung des GK Eichtheorien exp. Tests... Bullay/Mosel 15 17 September, 2008 1 Outline

More information

Machine Learning Lecture 7

Machine Learning Lecture 7 Course Outline Machine Learning Lecture 7 Fundamentals (2 weeks) Bayes Decision Theory Probability Density Estimation Statistical Learning Theory 23.05.2016 Discriminative Approaches (5 weeks) Linear Discriminant

More information