INF 4300 Introduction to classification. Anne Solberg. Based on Chapter 2 (2.1-2.6) in Duda and Hart: Pattern Classification
1 INF 4300 Introduction to classification. Anne Solberg. Based on Chapter 2 (2.1-2.6) in Duda and Hart: Pattern Classification.
2 Introduction to classification. One of the most challenging topics in image analysis is recognizing a specific object in an image. To do that, several steps are normally used: some kind of segmentation delimits the spatial extent of the foreground objects, then a set of features describing the object characteristics is computed, before the recognition step that decides which object type this is. The focus of the next three lectures is the recognition step. The starting point is that we have a set of K features computed for an object. These features can describe either the shape or the gray level/color/texture properties of the object. Based on these features we need to decide what kind of object this is. Statistical classification is the process of estimating the probability that an object belongs to one of S object classes based on the observed values of the K features. Classification can be done either unsupervised or supervised. In unsupervised classification the categories or classes are not known, and the classification process is based on grouping similar objects together. In supervised classification the categories or classes are known, and classification consists of estimating the probability that the object belongs to each of the S object classes; the object is assigned to the class with the highest probability. For supervised classification, training data is needed. Training data consists of a set of objects with known class type, and they are used to estimate the parameters of the classifier. Classification can be pixel-based or region-based. The performance of a classifier is normally computed as the accuracy it achieves when classifying a different set of objects with known class labels, called the test set.
3 Assume that each object i in the scene is represented by a feature vector x_i. If we have computed K features, x_i will be a vector with K components: x_i = [x_{i1}, ..., x_{iK}]^T. A classifier that is based on K features is called a multivariate classifier. The K features and the objects in the training data set define a K-dimensional feature space. The training data set is a set of objects with known class labels. As we saw last week, we can use a scatter plot to visualize class separation on data with known class labels if we have one or two features, but visualizing a multidimensional space is difficult.
4 In the scatter plot, each object in the training data set is plotted in K-dimensional space at the position given by the values of its K features. Each object is represented by a dot, and the color of the dot represents the class. In this example we have 4 classes visualized using 2 features. A scatter plot is a tool for visualizing features; we will later look at other class separability measures.
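As a small illustration (not from the slides), here is a sketch of such a scatter plot with synthetic data; the class means, spreads and colors are made-up values chosen only to produce a readable figure:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic training data: 4 classes, 2 features, one Gaussian blob per class.
# The means and covariance are hypothetical, purely for illustration.
means = [(1, 1), (4, 1), (1, 4), (4, 4)]
colors = ["tab:blue", "tab:orange", "tab:green", "tab:red"]

for mean, color in zip(means, colors):
    samples = rng.multivariate_normal(mean, 0.3 * np.eye(2), size=50)
    plt.scatter(samples[:, 0], samples[:, 1], c=color, s=10)

plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.title("Training data: 4 classes, 2 features")
plt.show()
```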
5 Concepts in classification. In the following three lectures we will cover these topics related to classification:
- Training set
- Test set
- Classifier accuracy / confusion matrices
- Computing the probability that an object belongs to a class
- Letting each class be represented by a probability density function (in general many probability densities can be used, but we use the commonly used multivariate normal distribution)
- Bayes rule
- Discriminant functions / decision boundaries
- The normal distribution, mean vectors and covariance matrices
- kNN classification
- Unsupervised classification / clustering
6 Introduction to classification. Supervised classification is related to thresholding: divide the image into two classes, foreground and background. Thresholding is a two-class classification problem based on a 1D feature vector; the feature vector consists of only the gray level f(x,y). How can we classify a feature vector of K shape features into the correct character type? We will now study multivariate classification theory, where we use N features to determine if an object belongs to a set of K object classes. Recommended additional reading: Pattern Classification, R. Duda, P. Hart and D. Stork; Chapter 1 Introduction and Chapter 2 Bayesian Decision Theory, 2.1-2.6. See ~inf3300/www_docs/bilder/dudahart_chap2.pdf and dudahart-appendix.pdf.
7 Plan for this lecture: Explain the relation between thresholding and classification with 2 classes. Background in probability theory. Bayes rule. Classification with a Gaussian density and a single feature. Briefly: training and testing a classifier (we go deeper into this in two weeks).
8 From INF 2310: Thresholding. Basic thresholding assigns all pixels in the image to one of 2 classes, foreground or background:

g(x,y) = 0 if f(x,y) <= T
g(x,y) = 1 if f(x,y) > T

This can be seen as a 2-class classification problem based on a single feature, the gray level. The classes are background and foreground, and the threshold T defines the border between them.
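As a minimal sketch (assuming the image is stored as a 2-D numpy array of gray levels), thresholding as a two-class classifier is one comparison per pixel:

```python
import numpy as np

def threshold(f: np.ndarray, T: float) -> np.ndarray:
    """Assign each pixel to background (0) or foreground (1).

    g(x, y) = 0 if f(x, y) <= T, and 1 if f(x, y) > T.
    """
    return (f > T).astype(np.uint8)

# Example on a tiny 2x3 "image".
f = np.array([[10, 200, 90], [130, 40, 250]])
print(threshold(f, T=128))  # [[0 1 0], [1 0 1]]
```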
9 Classification error for thresholding. [Figure: histograms of the background and foreground gray levels, with the threshold t between them. On one side of t, foreground pixels are misclassified as background; on the other side, background pixels are misclassified as foreground.]
10 Classification error for thresholding. We assume that b(z) is the normalized histogram for the background and f(z) is the normalized histogram for the foreground. The histograms are estimates of the probability distributions of the gray levels in the image. Let F and B be the prior probabilities for foreground and background (B + F = 1). The normalized histogram for the image is then given by

p(z) = B b(z) + F f(z)

The probabilities of misclassification given a threshold t are:

E_B(t) = \int_t^\infty b(z) \, dz, \quad E_F(t) = \int_{-\infty}^t f(z) \, dz
11 Find the T that minimizes the error:

E(t) = F \int_{-\infty}^t f(z) \, dz + B \int_t^\infty b(z) \, dz

Setting dE(t)/dt = 0 gives F f(T) = B b(T). The minimum error is achieved by setting T equal to the point where the (prior-weighted) probabilities for foreground and background are equal. The locations in feature space where the probabilities are equal are called the decision boundary in classification. The goal of classification is to find these boundaries.
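A sketch of this minimization in code, assuming we have the normalized histograms b and f as arrays indexed by gray level and the priors as scalars (function and variable names are hypothetical). It evaluates the discrete version of E(t) for every candidate threshold and returns the minimizer; at that point P_F f(T) and P_B b(T) are approximately equal, matching the derivative condition above:

```python
import numpy as np

def optimal_threshold(b: np.ndarray, f: np.ndarray,
                      P_B: float, P_F: float) -> int:
    """Return the threshold t minimizing
    E(t) = P_F * sum_{z<=t} f(z) + P_B * sum_{z>t} b(z).
    Assumes foreground is brighter than background."""
    # Cumulative sums play the role of the integrals over the histograms.
    F_below = np.cumsum(f)                   # fraction of foreground with z <= t
    B_above = np.cumsum(b[::-1])[::-1] - b   # fraction of background with z > t
    E = P_F * F_below + P_B * B_above
    return int(np.argmin(E))
```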
12 Partitioning the feature space using thresholding: 1 feature and 1 threshold. Thresholding one feature gives a linear boundary separating the feature space.
13 Partitioning the feature space using thresholding: 2 features and 2 thresholds. Thresholding two features gives rectangular boxes in feature space.
14 Can we find a line with better separation? We cannot isolate all classes using straight lines; this problem is not linearly separable.
15 The goal: partitioning the space with smooth non-linear boundaries.
16 Distributions, standard deviation and variance. A Gaussian (normal) distribution is specified by the mean value \mu and the variance \sigma^2:

p(z) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(z-\mu)^2}{2\sigma^2}}

\sigma^2 is the variance and \sigma the standard deviation.
17 Two Gaussian distributions for a single feature. Assume that b(z) and f(z) are Gaussian distributions; then

p(z) = \frac{B}{\sqrt{2\pi}\sigma_B} e^{-\frac{(z-\mu_B)^2}{2\sigma_B^2}} + \frac{F}{\sqrt{2\pi}\sigma_F} e^{-\frac{(z-\mu_F)^2}{2\sigma_F^2}}

\mu_B and \mu_F are the mean values for background and foreground, and \sigma_B^2 and \sigma_F^2 are the variances for background and foreground.
18 The 2-class classification problem summarized. Given two Gaussian distributions b(z) and f(z), where the classes have prior probabilities F and B, every pixel should be assigned to the class that minimizes the classification error. The classification error is minimized at the point where F f(z) = B b(z). What we will do now is generalize to K classes and D features.
19 How do we find the best border between K classes with 2 features? We will find the theoretical answer and a geometrical interpretation of class means, variances, and the equivalent of a threshold.
20 The goal of classification. We estimate the decision boundaries based on training data. Classification performance is always estimated on a separate test data set. We try to measure the generalization performance: the classifier should perform well when classifying new samples and have the lowest possible classification error. We often face a tradeoff between the classification error on the training set and the generalization ability when determining the complexity of the decision boundary.
21 Probability theory (Appendix A.4). Let x be a discrete random variable that can assume any of a finite number M of different values. The probability that x belongs to class i is p_i = Pr(x = i), i = 1, ..., M. A probability distribution must sum to 1 and probabilities must be non-negative, so p_i \geq 0 and \sum_{i=1}^M p_i = 1.
22 Expected values - definition. The expected value or mean of a random variable x is:

E[x] = \mu = \sum_{i=1}^M x_i p_i

The variance, or second-order moment, is:

Var(x) = \sigma^2 = E[(x - \mu)^2] = \sum_{i=1}^M (x_i - \mu)^2 p_i

These will be estimated from training data where we know the true class labels: for the N_k samples x_i with class label k,

\mu_k = \frac{1}{N_k} \sum_{i \in \text{class } k} x_i, \quad \sigma_k^2 = \frac{1}{N_k} \sum_{i \in \text{class } k} (x_i - \mu_k)^2
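A sketch of these per-class estimates with numpy, assuming x holds the feature values and labels holds the known class of each sample:

```python
import numpy as np

def class_mean_and_variance(x: np.ndarray, labels: np.ndarray, k: int):
    """Estimate mu_k and sigma_k^2 from the samples with label k."""
    x_k = x[labels == k]
    mu_k = x_k.mean()                    # (1/N_k) * sum of x_i
    var_k = ((x_k - mu_k) ** 2).mean()   # (1/N_k) * sum of (x_i - mu_k)^2
    return mu_k, var_k

x = np.array([1.0, 2.0, 3.0, 10.0, 12.0])
labels = np.array([1, 1, 1, 2, 2])
print(class_mean_and_variance(x, labels, k=1))  # (2.0, 0.666...)
```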
23 Pairs of random variables. Let x and y be random variables. The joint probability of observing a pair of values (x = i, y = j) is p_{ij}. Alternatively we can define a joint probability distribution function P(x, y), for which P(x, y) \geq 0 and \sum_x \sum_y P(x, y) = 1. The marginal distributions for x and y, if we want to eliminate one of them, are:

P_x(x) = \sum_y P(x, y), \quad P_y(y) = \sum_x P(x, y)
24 Statistical independence and expected values of two variables. Variables x and y are statistically independent if and only if P(x, y) = P_x(x) P_y(y). Two variables are uncorrelated if \sigma_{xy} = E[(x - \mu_x)(y - \mu_y)] = 0. Expected values of two variables:

E[f(x, y)] = \sum_x \sum_y f(x, y) P(x, y)

\mu_x = E[x], \quad \mu_y = E[y], \quad \sigma_{xy} = E[(x - \mu_x)(y - \mu_y)]

Where in this course have you seen similar formulas?
25 [Figure: a scatter plot of training samples with axes Feature 1 and Feature 2. Can you sketch \mu_x, \mu_y, \sigma_x and \sigma_y?]
26 Conditional probability. If two variables are statistically dependent, knowing the value of one of them lets us get a better estimate of the value of the other. The conditional probability of x given y is:

Pr(x = i | y = j) = \frac{Pr(x = i, y = j)}{Pr(y = j)}

and for distributions:

P(x | y) = \frac{P(x, y)}{P(y)}

Example: thresholding a page with dark text on a white background. x is the gray level of a pixel, and y is its class (F or B). If we consider which gray levels x can have, we expect small values if y is text (y = F), and large values if y is background (y = B).
27 Expected values of M variables. Using vector notation:

\mu = E[x] = \sum_x x P(x), \quad \Sigma = E[(x - \mu)(x - \mu)^T]
28 Bayesian decision theory. A fundamental statistical approach to pattern classification, named after Thomas Bayes (1702-1761), an English priest and mathematician. It combines prior knowledge about the problem with a probability distribution function. The most central concept is Bayes decision rule.
29 Bayes rule in general. The equation:

P(\omega | x) = \frac{P(x | \omega) P(\omega)}{P(x)}

In words: posterior = likelihood × prior / evidence. x is the observation/feature vector, \omega is the unknown class label. We want to find the most probable class given the observed feature vector. To be explained for the classification problem later :-)
30 Mean vectors and covariance matrices in n dimensions. If f is an n-dimensional feature vector, we can formulate its mean vector and covariance matrix as:

\mu = E[f] = [\mu_1, ..., \mu_n]^T

\Sigma = E[(f - \mu)(f - \mu)^T] = \begin{bmatrix} \sigma_{11} & \cdots & \sigma_{1n} \\ \vdots & \ddots & \vdots \\ \sigma_{n1} & \cdots & \sigma_{nn} \end{bmatrix}

With n features, the mean vector will be of size 1×n and \Sigma of size n×n. The matrix will be symmetric, as \sigma_{kl} = \sigma_{lk}.
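A sketch with numpy, assuming the training samples for one class are stacked as the rows of an N×n matrix (the data values are hypothetical):

```python
import numpy as np

# Hypothetical training samples: N = 5 objects, n = 3 features each.
X = np.array([[1.0, 2.0, 0.5],
              [1.2, 1.8, 0.7],
              [0.9, 2.2, 0.4],
              [1.1, 2.1, 0.6],
              [1.0, 1.9, 0.5]])

mu = X.mean(axis=0)                 # mean vector: one entry per feature
D = X - mu                          # deviations from the mean
Sigma = (D.T @ D) / X.shape[0]      # covariance matrix E[(f - mu)(f - mu)^T]

print(mu.shape, Sigma.shape)        # (3,) (3, 3)
print(np.allclose(Sigma, Sigma.T))  # True: sigma_kl == sigma_lk
```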
31 Bayes rule for a classification problem. Suppose we have J classes \omega_j, j = 1, ..., J, where \omega_j is the class label for a pixel and x is the observed gray level (or feature vector). We can use Bayes rule to find an expression for the class with the highest probability:

P(\omega_j | x) = \frac{p(x | \omega_j) P(\omega_j)}{p(x)}

where P(\omega_j) is the prior probability, P(\omega_j | x) the posterior probability, p(x | \omega_j) the likelihood, and p(x) a normalizing factor. For thresholding, P(\omega_j) is the prior probability for background or foreground. If we don't have special knowledge that one of the classes occurs more frequently than the others, we set them equal for all classes: P(\omega_j) = 1/J, j = 1, ..., J. Small p means a probability density; capital P means a probability (a scalar value between 0 and 1).
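A sketch of Bayes rule for a single observation, assuming we already have the class-conditional likelihoods p(x | ω_j) and the priors P(ω_j) as arrays (the numeric values are hypothetical):

```python
import numpy as np

def posteriors(likelihoods: np.ndarray, priors: np.ndarray) -> np.ndarray:
    """P(w_j | x) = p(x | w_j) P(w_j) / p(x), with p(x) as the normalizer."""
    joint = likelihoods * priors   # p(x | w_j) * P(w_j)
    return joint / joint.sum()     # dividing by p(x) makes them sum to 1

# Hypothetical values for J = 3 classes and one observed x:
p_x_given_w = np.array([0.10, 0.40, 0.05])
P_w = np.full(3, 1 / 3)            # equal priors, P(w_j) = 1/J
post = posteriors(p_x_given_w, P_w)
print(post, post.argmax())         # class index 1 has the highest posterior
```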
32 Bayes rule explained. p(x | \omega_j) is the probability density function that models the likelihood of observing gray level x if the pixel belongs to class \omega_j. Typically we assume a type of distribution, e.g. Gaussian, and the mean and covariance of that distribution are fitted to some data that we know belongs to that class. This fitting is called classifier training. P(\omega_j | x) is the posterior probability that the pixel actually belongs to class \omega_j. We will soon see that the classifier that achieves the minimum error is one that assigns each pixel to the class \omega_j with the highest posterior probability. p(x) is just a scaling factor that assures that the probabilities sum to 1.
33 Probability of error. If we have 2 classes, we make an error either if we decide \omega_1 when the true class is \omega_2, or if we decide \omega_2 when the true class is \omega_1. If P(\omega_1 | x) > P(\omega_2 | x) we have more belief that x belongs to \omega_1, and we decide \omega_1. The probability of error is then:

P(error | x) = P(\omega_1 | x) if we decide \omega_2, and P(\omega_2 | x) if we decide \omega_1
34 Back to the classification error for thresholding.

P(error) = \int P(error, x) \, dx = \int P(error | x) p(x) \, dx

[Figure: the background and foreground distributions again; in one region foreground pixels are misclassified as background, and in the other background pixels are misclassified as foreground.]
35 Minimizing the error.

P(error) = \int P(error, x) \, dx = \int P(error | x) p(x) \, dx

When we derived the optimal threshold, we showed that the minimum error was achieved by placing the threshold (or decision border, as we will call it now) at the point where P(\omega_1 | x) = P(\omega_2 | x). This is still valid.
36 Bayes decision rule. In the 2-class case, our goal of minimizing the error implies a decision rule: decide \omega_1 if P(\omega_1 | x) > P(\omega_2 | x); otherwise \omega_2. For J classes, the rule extends analogously: choose the class with the maximum a posteriori probability. The decision boundary is the border between classes i and j, simply where P(\omega_i | x) = P(\omega_j | x): exactly where the threshold was set in minimum-error thresholding!
37 Bayes classification with J classes and D features. How do we generalize: to more than one feature at a time; to J classes; to loss functions where some errors are more costly than others?
38 Bayes rule with c classes and d features. If we measure d features, x will be a d-dimensional feature vector. Let {\omega_1, ..., \omega_c} be the set of c classes. The posterior probability for class \omega_j is now computed as:

P(\omega_j | x) = \frac{p(x | \omega_j) P(\omega_j)}{p(x)}, \quad p(x) = \sum_{j=1}^c p(x | \omega_j) P(\omega_j)

Still, we assign a pixel with feature vector x to the class that has the highest posterior probability:

Decide \omega_i if P(\omega_i | x) > P(\omega_j | x) for all j \neq i
39 Discriminant functions. The decision rule "decide \omega_i if P(\omega_i | x) > P(\omega_j | x) for all j \neq i" can be written as: assign x to \omega_i if g_i(x) > g_j(x). The classifier computes J discriminant functions g_i(x) and selects the class corresponding to the largest value of the discriminant function. Since classification consists of choosing the class that has the largest value, a scaling of the discriminant function g_i(x) by f(g_i(x)) will not affect the decision if f is a monotonically increasing function. This can lead to simplifications, as we will soon see.
40 Equivalent discriminant functions. The following choices of discriminant functions give equivalent decisions:

g_i(x) = P(\omega_i | x) = \frac{p(x | \omega_i) P(\omega_i)}{p(x)}

g_i(x) = p(x | \omega_i) P(\omega_i)

g_i(x) = \ln p(x | \omega_i) + \ln P(\omega_i)

The effect of the decision rules is to divide the feature space into c decision regions R_1, ..., R_c. If g_i(x) > g_j(x) for all j \neq i, then x is in region R_i. The regions are separated by decision boundaries: surfaces in feature space where the discriminant functions for two classes are equal.
41 Decision functions - two classes. If we have only two classes, assigning x to \omega_1 if g_1 > g_2 is equivalent to using a single discriminant function g(x) = g_1(x) - g_2(x) and deciding \omega_1 if g(x) > 0. The following functions are equivalent:

g(x) = P(\omega_1 | x) - P(\omega_2 | x)

g(x) = \ln \frac{p(x | \omega_1)}{p(x | \omega_2)} + \ln \frac{P(\omega_1)}{P(\omega_2)}
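A sketch of the log-ratio form for two univariate Gaussian classes (the parameter values are hypothetical): decide ω_1 wherever g(x) > 0.

```python
import numpy as np

def g(x, mu1, var1, mu2, var2, P1, P2):
    """g(x) = ln[p(x|w1)/p(x|w2)] + ln[P(w1)/P(w2)] for Gaussian classes."""
    log_p1 = -0.5 * np.log(2 * np.pi * var1) - (x - mu1) ** 2 / (2 * var1)
    log_p2 = -0.5 * np.log(2 * np.pi * var2) - (x - mu2) ** 2 / (2 * var2)
    return (log_p1 - log_p2) + np.log(P1 / P2)

# Decide w1 if g(x) > 0, otherwise w2.
print(g(1.0, mu1=0.0, var1=1.0, mu2=5.0, var2=1.0, P1=0.5, P2=0.5) > 0)  # True
```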
42 The Gaussian density - univariate case (a single feature). To use a classifier we need to select a probability density function p(x | \omega_i). The most commonly used probability density is the normal (Gaussian) distribution:

p(x) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)

with expected value (mean) \mu = E[x] = \int x p(x) \, dx and variance \sigma^2 = E[(x-\mu)^2] = \int (x-\mu)^2 p(x) \, dx.
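As a minimal sketch, the univariate Gaussian density translated directly into code:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """p(x) = 1/(sqrt(2*pi)*sigma) * exp(-(x - mu)^2 / (2*sigma^2))."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

print(gaussian_pdf(0.0, mu=0.0, sigma=1.0))  # 0.3989..., the peak of N(0, 1)
```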
43 Training a univariate Gaussian classifier. To be able to compute the value of the discriminant function, we need an estimate of \mu_j and \sigma_j for each class j. Assume that we know the true class labels for some pixels and that this is given in a mask image. Training the classifier then consists of computing \mu_j and \sigma_j for all pixels with class label j in the mask file. They are computed from training data as follows: for all N_k pixels x_i with label k in the training mask, compute

\mu_k = \frac{1}{N_k} \sum_i x_i, \quad \sigma_k^2 = \frac{1}{N_k} \sum_i (x_i - \mu_k)^2
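A sketch of this training step, assuming img is the gray-level image and mask is the training mask with label 0 for pixels that are not training data (function name is hypothetical):

```python
import numpy as np

def train_gaussian(img: np.ndarray, mask: np.ndarray) -> dict:
    """Estimate (mu_k, sigma_k^2) for every class label k > 0 in the mask."""
    params = {}
    for k in np.unique(mask):
        if k == 0:                   # label 0 means "not training data"
            continue
        pixels = img[mask == k].astype(float)
        mu_k = pixels.mean()
        var_k = ((pixels - mu_k) ** 2).mean()
        params[int(k)] = (mu_k, var_k)
    return params
```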
44 Classification with a univariate Gaussian. Decide on values for the prior probabilities P(\omega_j); if we have no prior information, assume that all classes are equally probable and P(\omega_j) = 1/c. Estimate \mu_j and \sigma_j from training data using the formulae on the previous slide. For each class j = 1, ..., J, compute the discriminant function

p(x | \omega_j) P(\omega_j) = \frac{1}{\sqrt{2\pi}\sigma_j} \exp\left(-\frac{(x-\mu_j)^2}{2\sigma_j^2}\right) P(\omega_j)

Assign the pixel to the class with the highest value of P(\omega_j | x). The result after classification is an image with class labels corresponding to the most probable class for each pixel. We compute the classification error rate from an independent test mask.
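A sketch of the full pixelwise classification, reusing the parameters from the training sketch above and assuming equal priors (the helper names are hypothetical):

```python
import numpy as np

def classify(img: np.ndarray, params: dict) -> np.ndarray:
    """Assign each pixel to the class with the highest P(w_j) p(x | w_j)."""
    classes = sorted(params)
    prior = 1.0 / len(classes)       # P(w_j) = 1/c: no prior knowledge
    x = img.astype(float)
    scores = np.stack([
        prior * np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        for mu, var in (params[k] for k in classes)
    ])
    # argmax over the class axis picks the winning class for every pixel.
    return np.array(classes)[np.argmax(scores, axis=0)]
```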
45 Example: image and training masks. The masks contain labels for the training data. If a pixel is not part of the training data, it will have label 0; a pixel belonging to class k will have value k in the mask image. We should have a similar mask for the test data.
46 Estimating classification error. A simple measure of classification accuracy is to count the percentage of correctly classified pixels: overall, averaged over all classes, or per class. If a pixel has true class label k, it is correctly classified if the assigned label \omega_j = k. Normally we use different pixels to train and test a classifier, so we have a disjoint training mask and test mask. Estimate the classification error by classifying all pixels in the test set and counting the percentage of wrongly classified pixels.
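A sketch of the accuracy computation on a disjoint test mask (label 0 again meaning "not in the test set"), together with a confusion matrix for the per-class picture; the function names are hypothetical:

```python
import numpy as np

def test_accuracy(predicted: np.ndarray, test_mask: np.ndarray) -> float:
    """Percentage of correctly classified pixels over the test set."""
    in_test = test_mask > 0
    correct = predicted[in_test] == test_mask[in_test]
    return 100.0 * correct.mean()

def confusion_matrix(predicted, test_mask, num_classes):
    """Entry (i, j) counts pixels of true class i+1 assigned to class j+1."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    in_test = test_mask > 0
    for t, p in zip(test_mask[in_test], predicted[in_test]):
        cm[t - 1, p - 1] += 1        # labels are 1..num_classes
    return cm
```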