Advanced Machine Learning & Perception
1 Advanced Machine Learning & Perception Instructor: Tony Jebara
2 Topic 1
Introduction, researchy course, latest papers
Going beyond simple machine learning
Perception, strange spaces, images, time, behavior
Info, policies, texts, web page
Syllabus, Overview, Review
Gaussian: simplest distribution / signal handling
Representation & Appearance Based Methods
Least Squares, Correlation, Gaussians, Bases
Principal Components Analysis and the shortcomings it entails
3 About me
Tony Jebara, Associate Professor in Computer Science
Started at Columbia in 2002
PhD from MIT in Machine Learning; Thesis: Discriminative, Generative and Imitative Learning (2001)
Research Areas (Columbia Machine Learning Lab, CEPSR 6LE5): Machine Learning, Some Computer Vision
4 Course Web Page
Some material & announcements will be online, but many things will be handed out in class, such as photocopies of papers for readings, etc.
Check the NEWS link to see deadlines, homework, etc.
Available online; see TA info, etc.
Please follow the policies, homework, deadlines, and readings closely!
5 Syllabus
Week 1: Introduction, Review of Basic Concepts, Representation Issues, Vector and Appearance-Based Models, Correlation and Least Squared Error Methods, Bases, Eigenspace Recognition, Principal Components Analysis
Week 2: Nonlinear Dimensionality Reduction, Manifolds, Kernel PCA, Locally Linear Embedding, Maximum Variance Unfolding, Minimum Volume Embedding
Week 3: Support Vector Machines and related machines, VC Dimension, Large Margin, Large Relative Margin
Week 4: Kernel Methods, Reproducing Kernel Hilbert Space, Probabilistic Kernel Approaches, Kernel Principal Components Analysis, Bag of Vectors/Pixel Kernels
Week 5: Maximum Entropy, Iterative Scaling, Maximum Entropy Discrimination, Large Margin Probability Models
Week 6: SVM Extensions, Multi-Class Classification, Error-Correcting Codes, Feature Selection, Kernel Selection, Meta-Learning, Transduction
6 Syllabus
Week 7: Bayesian Networks, Belief Propagation, Hidden Markov Models, Gesture Recognition
Week 8: Kalman Filtering, Structure from Motion, Parameter Estimation, Coupled and Linked Hidden Markov Models, Variational and Mean-Field Methods
Week 9: Factorial Hidden Markov Models, Switched Kalman Filters, Dynamical Bayesian Networks, Structured Mean-Field
Week 10: Graph Learning, b-matching, Loopy Belief Propagation, Perfect Graphs
Week 11: Spectral Clustering, Random Walks, Normalized Cuts Methods, Stability, Image Segmentation
Week 12: Boosting, Mixtures of Experts, AdaBoost, Online Learning
Week 13: TBD
Week 14: Project Presentations
7 Beyond Canned Learning
Latest research methods: high dimensions, nonlinearities, dynamics, manifolds, invariance, unlabeled data, feature selection
Modeling Images / People / Activity / Time Series / MoCAP
8 Representation & Vectorization
How to represent our data? Images, time series, genes.
Vectorization: the poor man's representation. Almost anything can be written as one long vector.
E.g. an image is read lexicographically and the RGB of each pixel is appended to a vector.
Or, a gene sequence can be written as a binary vector, e.g. GATACAC = [ ]
For images, this is called the Appearance Based representation.
But we lose many important properties this way; we will fix this in later lectures. For now, it is an easy way to proceed.
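A minimal NumPy sketch of this vectorization step (the image shape, the nucleotide ordering A/C/G/T, and the helper names are my assumptions, not from the slides):

```python
import numpy as np

def image_to_vector(img):
    """Read an H x W x 3 RGB image lexicographically into one long vector."""
    return np.asarray(img, dtype=float).reshape(-1)

def gene_to_vector(seq, alphabet="ACGT"):
    """One-hot encode a nucleotide string as a single binary vector."""
    index = {c: i for i, c in enumerate(alphabet)}
    vec = np.zeros(len(seq) * len(alphabet))
    for pos, c in enumerate(seq):
        vec[pos * len(alphabet) + index[c]] = 1.0
    return vec

img = np.random.rand(48, 48, 3)        # stand-in for a small RGB image
x_img = image_to_vector(img)           # length 48*48*3 = 6912
x_gene = gene_to_vector("GATACAC")     # length 7*4 = 28 binary entries
print(x_img.shape, x_gene.shape)
```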
9 Least Squares Detection
How to find a face in an image given a template? Template Matching and Sum-of-Squared-Differences (SSD).
Naïve: try all positions/scales, find the least squares distance:
$i^* = \arg\min_{i \in 1..T} \tfrac{1}{2}\|\mu - x_i\|^2 = \arg\min_{i \in 1..T} \tfrac{1}{2}(\mu - x_i)^T(\mu - x_i)$
Correlation and Normalized Correlation: could normalize the length of all vectors (fixes lighting), $\hat\mu = \mu / \|\mu\|$, $\hat x_i = x_i / \|x_i\|$, so that
$i^* = \arg\min_{i \in 1..T} \tfrac{1}{2}\|\hat\mu - \hat x_i\|^2 = \arg\min_{i \in 1..T} \tfrac{1}{2}\left(\hat\mu^T\hat\mu - 2\hat\mu^T\hat x_i + \hat x_i^T\hat x_i\right) = \arg\max_{i \in 1..T} \hat\mu^T \hat x_i$
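A rough sketch of this naive search: slide the template over every position, score by SSD, and compare with normalized correlation. The grayscale assumption, single scale, and function names are mine:

```python
import numpy as np

def ssd_match(image, template):
    """Return (row, col) minimizing 0.5 * sum-of-squared differences."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = np.inf, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            patch = image[r:r + h, c:c + w]
            score = 0.5 * np.sum((template - patch) ** 2)
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

def ncc_match(image, template):
    """Return (row, col) maximizing correlation between unit-length vectors."""
    H, W = image.shape
    h, w = template.shape
    t = template.reshape(-1)
    t = t / np.linalg.norm(t)
    best, best_pos = -np.inf, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            p = image[r:r + h, c:c + w].reshape(-1)
            p = p / np.linalg.norm(p)
            if t @ p > best:
                best, best_pos = t @ p, (r, c)
    return best_pos
```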
10 Least Squares & Gaussians
Minimum squared error is equivalent to maximum likelihood under a Gaussian:
$i^* = \arg\min_{i \in 1..T} \tfrac{1}{2}\|\mu - x_i\|^2 = \arg\max_{i \in 1..T} \log\left[\tfrac{1}{(2\pi)^{D/2}} \exp\left(-\tfrac{1}{2}\|\mu - x_i\|^2\right)\right]$
Can now treat it as a probabilistic problem: find the most likely position (or sub-image) in the search image given the Gaussian model of the template.
Define the log of the likelihood as $\log p(x \mid \theta)$. For a Gaussian, the probability or likelihood is:
$p(x_i \mid \theta) = \tfrac{1}{(2\pi)^{D/2}} \exp\left(-\tfrac{1}{2}\|\mu - x_i\|^2\right)$
11 Gaussian Distribution
Recall the 1-dimensional Gaussian. The mean parameter $\mu$ translates the Gaussian left & right:
$p(x \mid \mu) = \tfrac{1}{\sqrt{2\pi}} \exp\left(-\tfrac{1}{2}(x - \mu)^2\right)$
Can also have a variance parameter $\sigma^2$ that widens or narrows the Gaussian:
$p(x \mid \mu, \sigma) = \tfrac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\tfrac{(x - \mu)^2}{2\sigma^2}\right)$
Note: $\int_{x=-\infty}^{\infty} p(x)\,dx = 1$
12 Multivariate Gaussian
The Gaussian extends to D dimensions. The mean parameter $\mu$ is a vector (translates); the covariance matrix $\Sigma$ stretches and rotates the bump:
$p(x \mid \mu, \Sigma) = \tfrac{1}{(2\pi)^{D/2}|\Sigma|^{1/2}} \exp\left(-\tfrac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right), \quad x, \mu \in \mathbb{R}^D,\ \Sigma \in \mathbb{R}^{D \times D}$
Max and expectation $= \mu$. The mean is any real vector; the variance is now the matrix $\Sigma$.
The covariance matrix is positive semi-definite and symmetric.
Need the matrix inverse (inv), the matrix determinant (det) and the matrix trace operator (trace).
13 Multivariate Gaussian
Spherical covariance: $\Sigma = \sigma^2 I = \mathrm{diag}(\sigma^2, \ldots, \sigma^2)$
Diagonal covariance: the dimensions of $x$ are independent, so the density is a product of 1d Gaussians:
$p(x \mid \mu, \sigma) = \prod_{d=1}^{D} \tfrac{1}{\sqrt{2\pi}\,\sigma_d} \exp\left(-\tfrac{(x_d - \mu_d)^2}{2\sigma_d^2}\right), \quad \Sigma = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_D^2)$
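A small numerical check of the multivariate Gaussian density above, including the spherical and diagonal special cases; written straight from the formulas on the slide, with variable names of my own:

```python
import numpy as np

def gaussian_logpdf(x, mu, Sigma):
    """log N(x | mu, Sigma) for a full D x D covariance."""
    D = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    maha = diff @ np.linalg.solve(Sigma, diff)
    return -0.5 * (D * np.log(2 * np.pi) + logdet + maha)

D = 3
mu = np.zeros(D)
x = np.array([0.5, -1.0, 2.0])

# Spherical covariance: Sigma = sigma^2 I
print(gaussian_logpdf(x, mu, 0.7 ** 2 * np.eye(D)))

# Diagonal covariance: same value as a product of independent 1-d Gaussians
sig = np.array([0.5, 1.0, 2.0])
full = gaussian_logpdf(x, mu, np.diag(sig ** 2))
prod = np.sum(-0.5 * np.log(2 * np.pi * sig ** 2) - (x - mu) ** 2 / (2 * sig ** 2))
print(np.allclose(full, prod))   # the two agree
```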
14 Max Likelihood Gaussian
How to make the face detector work on all faces? Why use a single template? How can we use many templates?
Have $N$ IID samples of template vectors: $x_1, x_2, x_3, \ldots, x_N$
Represent the IID samples with parameters as a network; more efficiently drawn using a replicator plate, $i = 1 \ldots N$.
Let us get a good Gaussian from these many templates. Standard approach: Maximum Likelihood
$\log \prod_i p(x_i \mid \mu, \Sigma) = \sum_i \log\left[\tfrac{1}{(2\pi)^{D/2}|\Sigma|^{1/2}} \exp\left(-\tfrac{1}{2}(x_i - \mu)^T \Sigma^{-1} (x_i - \mu)\right)\right]$
15 Max Likelihood Gaussian
Max over $\mu$: set the derivative of the log-likelihood to zero (see Jordan Ch. 12), using $\tfrac{\partial}{\partial x} x^T x = 2x^T$:
$\tfrac{\partial}{\partial \mu} \sum_i \left[-\tfrac{D}{2}\log 2\pi - \tfrac{1}{2}\log|\Sigma| - \tfrac{1}{2}(x_i - \mu)^T \Sigma^{-1} (x_i - \mu)\right] = 0$
$\Rightarrow \sum_i (x_i - \mu)^T \Sigma^{-1} = 0 \Rightarrow$ the sample mean $\mu = \tfrac{1}{N}\sum_i x_i$
For $\Sigma$ we need the trace operator $\mathrm{tr}(A) = \sum_{d=1}^{D} A_{dd}$ and several of its properties:
$\mathrm{tr}(A) = \mathrm{tr}(A^T), \quad \mathrm{tr}(AB) = \mathrm{tr}(BA), \quad \mathrm{tr}(BAB^{-1}) = \mathrm{tr}(A), \quad \mathrm{tr}(x x^T A) = \mathrm{tr}(x^T A x) = x^T A x$
16 Max Likelihood Gaussian
The likelihood rewritten in trace notation:
$l = -\tfrac{ND}{2}\log 2\pi - \tfrac{N}{2}\log|\Sigma| - \tfrac{1}{2}\sum_i (x_i - \mu)^T \Sigma^{-1} (x_i - \mu)$
$\;\; = -\tfrac{ND}{2}\log 2\pi + \tfrac{N}{2}\log|\Sigma^{-1}| - \tfrac{1}{2}\sum_i \mathrm{tr}\left((x_i - \mu)(x_i - \mu)^T \Sigma^{-1}\right)$
Max over $A = \Sigma^{-1}$, using the properties $\tfrac{\partial}{\partial A}\log|A| = (A^{-1})^T$ and $\tfrac{\partial}{\partial A}\mathrm{tr}(BA) = B^T$:
$\tfrac{\partial l}{\partial A} = \tfrac{N}{2} A^{-1} - \tfrac{1}{2}\sum_i (x_i - \mu)(x_i - \mu)^T = 0$
Get the sample covariance: $\Sigma = \tfrac{1}{N}\sum_i (x_i - \mu)(x_i - \mu)^T$
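The closed-form maximum-likelihood estimates derived above, checked numerically; the sizes and synthetic data are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 500, 4
X = rng.normal(size=(N, D)) @ rng.normal(size=(D, D)) + 2.0   # N samples, rows of X

mu_hat = X.mean(axis=0)                  # mu = (1/N) sum_i x_i
diff = X - mu_hat
Sigma_hat = (diff.T @ diff) / N          # Sigma = (1/N) sum_i (x_i - mu)(x_i - mu)^T

# Same answer as NumPy's covariance with the 1/N (maximum likelihood) normalization
print(np.allclose(Sigma_hat, np.cov(X, rowvar=False, bias=True)))
```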
17 Principal Components Analysis
Gaussians: for Classification, Regression & Compression!
Data can be constant in some directions and change in others.
Use the Gaussian to find directions of high/low variance.
18 Principal Components Analysis
Idea: instead of writing data in all its dimensions, only write it as the mean + steps along one direction: $x_i \approx \mu + c_i v$
More generally, keep a subset of $C$ dimensions from $D$ (i.e. 2 of 3): $x_i \approx \mu + \sum_{j=1}^{C} c_{ij} v_j$
Compression method: $x_i \to c_i$
Optimal directions: along the eigenvectors of the covariance.
Which directions to keep: the highest eigenvalues (variances).
19 Principal Components Analysis
If we have the eigenvectors, mean and coefficients: $x_i \approx \mu + \sum_{j=1}^{C} c_{ij} v_j$
Getting the eigenvectors (i.e. approximating the covariance): $\Sigma = V \Lambda V^T$, e.g.
$\begin{pmatrix} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{12} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{13} & \Sigma_{23} & \Sigma_{33} \end{pmatrix} = \begin{pmatrix} v_1 & v_2 & v_3 \end{pmatrix} \begin{pmatrix} \lambda_1 & & \\ & \lambda_2 & \\ & & \lambda_3 \end{pmatrix} \begin{pmatrix} v_1 & v_2 & v_3 \end{pmatrix}^T$
Eigenvectors are orthonormal: $v_i^T v_j = \delta_{ij}$
In the coordinates of $V$, the Gaussian is diagonal with covariance $\Lambda$.
All eigenvalues are non-negative, $\lambda_i \geq 0$. Higher eigenvalues mean higher variance; use those first.
To compute the coefficients: $c_{ij} = (x_i - \mu)^T v_j$
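A compact sketch of PCA as described on this slide: eigenvectors of the sample covariance, coefficients $c_{ij} = (x_i - \mu)^T v_j$, and reconstruction from the top-C components. The dataset and sizes are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                 # N=200 points in D=10 dims
mu = X.mean(axis=0)
Sigma = np.cov(X, rowvar=False, bias=True)

lam, V = np.linalg.eigh(Sigma)                 # symmetric eigendecomposition (ascending)
order = np.argsort(lam)[::-1]                  # sort: highest variance first
lam, V = lam[order], V[:, order]

C = 3                                          # keep the top-C directions
coeffs = (X - mu) @ V[:, :C]                   # c_ij = (x_i - mu)^T v_j
X_rec = mu + coeffs @ V[:, :C].T               # x_i ~ mu + sum_j c_ij v_j
print(np.mean((X - X_rec) ** 2))               # reconstruction error
```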
20 Snapshot Method
Assume N = 2000 images, each containing D = 10,000 pixels.
How big is the covariance matrix? It is D×D (10,000 × 10,000). Finding the eigenvectors or inverting a D×D matrix requires $O(D^3)$.
First compute the mean of all data (easy) and subtract it from each point.
Instead of $\Sigma = \tfrac{1}{N}\sum_i x_i x_i^T$, compute the N×N matrix $\Phi$ where $\Phi_{ij} = x_i^T x_j$.
Then find the eigenvectors/eigenvalues of $\Phi = \tilde V \tilde\Lambda \tilde V^T$.
Eigenvectors of $\Sigma$ are then: $v_i \propto \sum_j x_j \tilde v_{ji}$
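A sketch of the snapshot trick: diagonalize the N×N Gram matrix instead of the D×D covariance when N is much smaller than D, then map its eigenvectors back to pixel space. Sizes here are scaled down and all names are mine:

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 50, 2000                                 # few images, many pixels
X = rng.normal(size=(N, D))
Xc = X - X.mean(axis=0)                         # subtract the mean first

Phi = Xc @ Xc.T                                 # N x N, Phi_ij = x_i^T x_j
lam, U = np.linalg.eigh(Phi)
order = np.argsort(lam)[::-1]
lam, U = lam[order], U[:, order]

M = 10                                          # keep only the top-M directions
V = Xc.T @ U[:, :M]                             # eigenvectors of Sigma: v_k ~ sum_j x_j u_jk
V /= np.linalg.norm(V, axis=0)                  # normalize to unit length

# Check against the (expensive) D x D covariance: Sigma v_0 = (lam_0 / N) v_0
Sigma = (Xc.T @ Xc) / N
print(np.allclose(Sigma @ V[:, 0], (lam[0] / N) * V[:, 0]))
```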
21 Eigenfaces
A dataset of face images is summarized by its mean and eigenvectors: $\{x_1, \ldots, x_N\} \approx \{\mu + \sum_j c_{1j} v_j, \ldots\}$
(The slide shows example face images and their eigenface decomposition.)
22 Eigenface Detection
Use a Gaussian model, but approximate the huge covariance matrix with its top eigenvectors plus a constant spherical component in the remaining directions:
$p(x \mid \mu, \Sigma) = \tfrac{1}{(2\pi)^{D/2}|\Sigma|^{1/2}} \exp\left(-\tfrac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right)$
$\Sigma \approx \sum_{k=1}^{M} \lambda_k v_k v_k^T + \rho \sum_{k=M+1}^{D} \delta_k \delta_k^T$ (variable eigenvalues for the kept eigenvectors, spherical $\rho$ elsewhere)
$\Sigma^{-1} \approx \sum_{k=1}^{M} \tfrac{1}{\lambda_k} v_k v_k^T + \tfrac{1}{\rho} \sum_{k=M+1}^{D} \delta_k \delta_k^T$
23 Gaussian Face Finding
Instead of minimizing squared error, use the Gaussian model:
$p(x \mid \mu, \Sigma) = \tfrac{1}{(2\pi)^{D/2}|\Sigma|^{1/2}} \exp\left(-\tfrac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right)$
Euclidean distance: $\tfrac{1}{2}(x - \mu)^T(x - \mu)$ (assumed $\Sigma = I$)
Mahalanobis distance: $\tfrac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)$ (shrinks distances in directions with high variance)
State-of-the-art face finder as evaluated by NIST/DARPA (Moghaddam et al.). Top performer after 2000 is Viola-Jones (boosted cascade).
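A sketch of the eigenface-style score from the last two slides: approximate $\Sigma^{-1}$ by the top-M eigendirections plus a constant $\rho$ in the leftover subspace, and use the resulting Mahalanobis distance instead of plain squared error. M, $\rho$, and all array names are assumptions for illustration:

```python
import numpy as np

def lowrank_mahalanobis(x, mu, V, lam, M, rho):
    """0.5 (x-mu)^T Sigma^{-1} (x-mu), with Sigma ~ top-M eigenpairs + rho elsewhere."""
    diff = x - mu
    c = V[:, :M].T @ diff                 # coefficients in the top-M eigenspace
    in_span = np.sum(c ** 2 / lam[:M])    # exact part, weighted by 1/lambda_k
    residual = diff @ diff - c @ c        # energy outside the kept eigenspace
    return 0.5 * (in_span + residual / rho)

# Toy usage: mu, V, lam would come from the eigenface fit of the previous slides
D, M = 100, 10
rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.normal(size=(D, D)))    # some orthonormal basis
lam = np.sort(rng.uniform(0.5, 5.0, size=D))[::-1]
mu = np.zeros(D)
x = rng.normal(size=D)
print(lowrank_mahalanobis(x, mu, V, lam, M, rho=0.1))
```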
24 Gaussian Face Recognition
Instead of modeling face images with a Gaussian, model the difference of two face images with a Gaussian.
Each difference between a pair of images in our data is represented as a D-dimensional vector $x \in \mathbb{R}^D$.
Also have a binary label $y \in \{0, 1\}$: $y = 1$ means same person, $y = 0$ means not.
E.g. $y_i = 1$ for an intrapersonal difference $x_i$, $y_j = 0$ for an extrapersonal difference $x_j$.
One Gaussian for same-face deltas, another for different-people deltas.
25 Gaussian Classification
Have two classes, each with its own Gaussian: $p(x \mid \mu, \Sigma, y) = N(x \mid \mu_y, \Sigma_y)$
Generation: 1) flip a coin, get $y$; 2) pick Gaussian $y$, sample $x$ from it.
Maximum Likelihood: data $\{(x_1, y_1), \ldots, (x_N, y_N)\}$, $x \in \mathbb{R}^D$, $y \in \{0, 1\}$, with $p(y \mid \alpha) = \alpha^y (1 - \alpha)^{1 - y}$:
$l = \sum_i \log p(x_i, y_i \mid \alpha, \mu, \Sigma) = \sum_i \log p(y_i \mid \alpha) + \sum_i \log p(x_i \mid y_i, \mu, \Sigma)$
$\;\; = \sum_i \log p(y_i \mid \alpha) + \sum_{y_i = 0} \log p(x_i \mid \mu_0, \Sigma_0) + \sum_{y_i = 1} \log p(x_i \mid \mu_1, \Sigma_1)$
26 Gaussian Classification
Maximum Likelihood can be done separately for the 3 terms:
$l = \sum_i \log p(y_i \mid \alpha) + \sum_{y_i = 0} \log p(x_i \mid \mu_0, \Sigma_0) + \sum_{y_i = 1} \log p(x_i \mid \mu_1, \Sigma_1)$
Count the # of positive & negative examples (class prior): $\alpha = N_1 / N$
Get the mean & covariance of the negatives and of the positives:
$\mu_0 = \tfrac{1}{N_0}\sum_{y_i = 0} x_i, \quad \Sigma_0 = \tfrac{1}{N_0}\sum_{y_i = 0} (x_i - \mu_0)(x_i - \mu_0)^T$
$\mu_1 = \tfrac{1}{N_1}\sum_{y_i = 1} x_i, \quad \Sigma_1 = \tfrac{1}{N_1}\sum_{y_i = 1} (x_i - \mu_1)(x_i - \mu_1)^T$
Given an $(x, y)$ pair, we can now compute the likelihood $p(x, y)$.
To make a classification, a bit of Decision Theory: without $x$, we can compute a prior guess for $y$, $p(y)$; given $x$ and wanting $y$, we need the posterior $p(y \mid x)$.
Bayes Optimal Decision: $\hat y = \arg\max_{y \in \{0,1\}} p(y \mid x)$, optimal iff we have the true probability.
27 Gaussian Classification
Example cases, plotting the decision boundary where $p(y = 1 \mid x) = 0.5$:
$p(y = 1 \mid x) = \frac{p(x, y = 1)}{p(x, y = 0) + p(x, y = 1)} = \frac{\alpha\, N(x \mid \mu_1, \Sigma_1)}{(1 - \alpha)\, N(x \mid \mu_0, \Sigma_0) + \alpha\, N(x \mid \mu_1, \Sigma_1)}$
If the covariances are equal: linear decision boundary.
If the covariances are different: quadratic decision boundary.
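A sketch of this two-Gaussian classifier: fit $\alpha$, $(\mu_0, \Sigma_0)$, $(\mu_1, \Sigma_1)$ by maximum likelihood, then score with the posterior $p(y = 1 \mid x)$. The synthetic data and names are placeholders:

```python
import numpy as np

def fit_gaussian(X):
    mu = X.mean(axis=0)
    d = X - mu
    return mu, (d.T @ d) / len(X)

def logpdf(x, mu, Sigma):
    D = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (D * np.log(2 * np.pi) + logdet + diff @ np.linalg.solve(Sigma, diff))

rng = np.random.default_rng(4)
X0 = rng.normal(loc=-1.0, size=(300, 2))       # class y = 0
X1 = rng.normal(loc=+1.0, size=(200, 2))       # class y = 1

alpha = len(X1) / (len(X0) + len(X1))          # class prior alpha = N_1 / N
mu0, S0 = fit_gaussian(X0)
mu1, S1 = fit_gaussian(X1)

def posterior_y1(x):
    a = alpha * np.exp(logpdf(x, mu1, S1))
    b = (1 - alpha) * np.exp(logpdf(x, mu0, S0))
    return a / (a + b)

x_test = np.array([0.3, 0.1])
print(posterior_y1(x_test), int(posterior_y1(x_test) > 0.5))   # Bayes decision
```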
28 Intra-Extra Personal Gaussians
Intrapersonal Gaussian model $N(x \mid \mu_1, \Sigma_1)$: its covariance is approximated by its eigenvectors.
Extrapersonal Gaussian model $N(x \mid \mu_0, \Sigma_0)$: its covariance is approximated by its eigenvectors.
Question: what are the Gaussian means?
Probability a pair is the same person:
$p(y = 1 \mid x) = \frac{\alpha\, N(x \mid \mu_1, \Sigma_1)}{(1 - \alpha)\, N(x \mid \mu_0, \Sigma_0) + \alpha\, N(x \mid \mu_1, \Sigma_1)}$
29 Other Standard Bases
There are other choices for the eigenvectors, not just PCA. Could pick eigenvectors without looking at the data, just for their interesting properties: $x_i \approx \mu + \sum_{j=1}^{C} c_{ij} v_j$
Fourier basis: denoises, only keeps the smooth parts of the image.
Wavelet basis: localized or windowed Fourier.
PCA: optimal least squares linear dataset reconstruction.
30 Basis for Natural Images
What happens if we do PCA on all natural images instead of just faces? We get difference-of-Gaussian bases, like a Gabor or wavelet basis.
Not specific like faces; multi-scale & orientation. Also called steerable filters. Similar to the visual cortex.
31 Problems with Linear Bases
The coefficient representation $c_{ij} = (x_i - \mu)^T v_j$ changes wildly if the image rotates, and so does $p(x)$.
The eigenspace is sensitive to rotations, translations and transformations of the image.
Simple linear/Gaussian/PCA models are not enough: what worked for aligned faces breaks for general image datasets.
Most of the PCA eigenvectors and spectrum energy is wasted due to NONLINEAR EFFECTS.