Midterm Exam
CS283, Computer Vision
Harvard University
Nov. 20, 2009

You have two hours to complete this exam. Show all of your work to get full credit, and write your work in the blue books provided. Work written on this document will not be evaluated.

(Possibly) Useful Information

Given vectors u = [u_1, u_2, u_3], v = [v_1, v_2, v_3] we can write the following:

    u · v = u_1 v_1 + u_2 v_2 + u_3 v_3

    u × v = −(v × u) = [u_2 v_3 − u_3 v_2,  u_3 v_1 − u_1 v_3,  u_1 v_2 − u_2 v_1]

A one-dimensional signal x[n], n = 0, ..., N−1, and its discrete Fourier transform X[u] are related by

    X[u] = \sum_{n=0}^{N−1} x[n] exp(−j 2π n u / N),      x[n] = (1/N) \sum_{u=0}^{N−1} X[u] exp(j 2π n u / N)

Also, as you proved in Assignment Four, if y[n] = (−1)^n x[n], their Fourier transforms are related by Y[u] = X[N/2 + u], where N is the length of signal x[n].

Below are expressions for the general multi-variate Gaussian distribution for random variable x ∈ R^d having mean µ and covariance matrix Σ, as well as the special case in which the covariance matrix is a scaled identity matrix Σ = σ² I:

    p(x) = (2π)^{−d/2} |Σ|^{−1/2} exp( −(1/2) (x − µ)^T Σ^{−1} (x − µ) ),      p(x) = (2π)^{−d/2} σ^{−d} exp( −‖x − µ‖² / (2σ²) )

>> help ones
 ONES   Ones array.
    ONES(N) is an N-by-N matrix of ones.
    ONES(M,N) or ONES([M,N]) is an M-by-N matrix of ones.

>> help zeros
 ZEROS  Zeros array.
    ZEROS(N) is an N-by-N matrix of zeros.
    ZEROS(M,N) or ZEROS([M,N]) is an M-by-N matrix of zeros.

>> help repmat
 REPMAT Replicate and tile an array.
    B = repmat(A,M,N) creates a large matrix B consisting of an M-by-N
    tiling of copies of A. The size of B is [size(A,1)*M, size(A,2)*N].
    The statement repmat(A,N) creates an N-by-N tiling.
    B = REPMAT(A,[M N]) accomplishes the same result as repmat(A,M,N).
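As a bridge between the Gaussian expression and the Matlab help excerpts above, here is a minimal sketch (not part of the original exam; the variable names X, mu, and sigma are assumptions) that uses repmat to evaluate the scaled-identity Gaussian at many points at once:

    % Evaluate the isotropic Gaussian p(x) at each row of X (numpts x d).
    d = 3; mu = [1 2 3]; sigma = 0.5;      % assumed example parameters
    X = randn(100,d);                      % query points, one per row
    Xc = X - repmat(mu,[size(X,1),1]);     % subtract the mean from every row
    p = exp(-sum(Xc.^2,2)/(2*sigma^2)) / ((2*pi)^(d/2) * sigma^d);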

Question 1 (2 points)

The equation for a conic in the plane using inhomogeneous coordinates (x, y) is

    a x² + b x y + c y² + d x + e y + f = 0.      (1)

a. Suppose you are given a set of inhomogeneous points x_i = (x_i, y_i), i = 1, ..., N. Derive an expression for the least squares estimate of the conic c = (a, b, c, d, e, f) passing through those points. (Your expression may take the form of a null vector or eigenvector of a matrix.)

b. What is the minimum value of N that allows a unique solution for c?

c. Homogenize Eq. (1) by making the substitutions x → x_1/x_3, y → x_2/x_3, and show that in terms of homogeneous coordinates x = (x_1, x_2, x_3) the conic can be expressed in matrix form,

    x^T C x = 0,

with a symmetric matrix C.

d. Suppose we apply a projective transformation to our points: x_i′ = H x_i. The transformed points x_i′ will lie on a transformed conic represented by a new symmetric matrix C′. What is the relation between C and C′?

Question 2 (2 points)

Consider a camera with intrinsic parameter matrix

    K = [ 300    0  300 ]
        [   0  300  200 ]
        [   0    0    1 ]

and complete camera matrix P. Suppose we add a new camera P′ with the same orientation as that of camera P. The camera centre of this second camera is located at [3 2] (an inhomogeneous point in R³), and it has a focal length that is one-third that of P.

a. What is the camera centre for the first camera (P) in inhomogeneous coordinates?

b. Compute the camera matrix P′.

c. Compute the epipole in each camera, expressed in inhomogeneous coordinates.

d. Are the epipolar lines in the first camera parallel to one another? Justify your answer.
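For intuition about the null-vector formulation in Question 1(a), here is a minimal Matlab sketch of one standard approach (illustrative only, with made-up points; the exam asks for the derivation, not code):

    % Least-squares conic through points (x(i), y(i)): each point gives one
    % row of the design matrix, and the conic is the singular vector of A
    % with the smallest singular value.
    x = [0; 1; 2; 3; 1];  y = [1; 0; 2; 1; 3];        % assumed example points
    A = [x.^2, x.*y, y.^2, x, y, ones(numel(x),1)];   % rows: [x^2 xy y^2 x y 1]
    [U,S,V] = svd(A);
    c = V(:,end);                                     % conic (a,b,c,d,e,f), up to scale

For Question 2(a), the analogous one-liner is C = null(P): the camera centre is the right null vector of the camera matrix, with inhomogeneous coordinates C(1:3)/C(4).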

Question 3 (2 points)

Consider a surface patch with BRDF

    f_r(ŝ, v̂, n̂) = 1 / (n̂·ŝ − n̂·v̂),

where n̂, v̂, and ŝ are the surface normal, view direction, and light source direction, respectively. Here, the BRDF is expressed in global coordinates, instead of writing the input and output directions in a local coordinate system relative to the surface normal. (The two representations are equivalent. That is, at a small surface patch with normal vector n̂, given surface irradiance due to radiance from direction ŝ, this tells us the value of the radiance that is emitted in direction v̂.)

Suppose we view such a surface patch from a known direction v̂, and suppose we capture two radiance measurements E_1 and E_2 under unit-strength distant lighting from known directions ŝ_1 and ŝ_2.

a. Write expressions for the measurements E_1 and E_2 in terms of the view, normal and source directions.

b. Show that you can recover the surface normal from these two measurements. (Hint: derive an expression for n up to scale, and then argue that the sign ambiguity can be resolved by requiring visibility from direction v̂.)

Question 4 (2 points)

a. Let x_z[n], n ∈ {0, ..., 2N−1} be a one-dimensional image of length 2N with zeros at every alternate pixel. That is, x_z[n] = 0 for every odd n. Now suppose we down-sample x_z[n] by a factor of two to obtain x_dz[n] = x_z[2n], which is of length N. Give an expression for X_dz[u], u ∈ {0, ..., N−1} in terms of X_z[u], where X_dz and X_z are the one-dimensional discrete Fourier transforms of x_dz[n] and x_z[n] respectively.

b. Next, consider a general one-dimensional image x[n] of length 2N (where all elements can now be non-zero). Suppose we downsample x[n] to get x_d[n] = x[2n], essentially throwing away the odd pixels in x[n]. For this case, what is the expression for X_d[u] in terms of X[u]?

Question 5 (8 points)

According to the principle of trichromacy, given three primaries (i.e., light sources with fixed spectral distributions) P_1(λ), P_2(λ), P_3(λ), a typical person can adjust the weights (the brightness) of these light sources so that the resulting mixture looks the same as any given test light T(λ). We write this using algebraic notation as:

    T(λ) ≈ w_1 P_1(λ) + w_2 P_2(λ) + w_3 P_3(λ),

where ≈ means "looks the same as". An important caveat is that subtractive matching must be allowed, meaning that the person needs to have the ability to add some of the primaries to the test light. This can be viewed as adjusting a primary to a negative brightness, if we are willing to apply the algebraic manipulation

    T(λ) + w_1 P_1(λ) ≈ w_2 P_2(λ) + w_3 P_3(λ)    ⟺    T(λ) ≈ −w_1 P_1(λ) + w_2 P_2(λ) + w_3 P_3(λ).

Explain why the need for subtractive matching implies that if the primaries P_i(λ) are all positive functions of λ (which they are if we are using real lights), the corresponding color matching functions must be negative at some wavelengths.
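The aliasing identity behind Question 4(b) can be sanity-checked numerically. A minimal sketch (an illustration, not the exam's worked solution) verifying that the DFT of the downsampled signal is the average of the two halves of the original DFT, X_d[u] = (X[u] + X[u+N])/2:

    N = 8;
    x = rand(1,2*N);                 % a general signal of length 2N
    X = fft(x);
    xd = x(1:2:end);                 % keep even-indexed samples x[0], x[2], ...
    Xd = fft(xd);
    err = max(abs(Xd - (X(1:N) + X(N+1:2*N))/2))   % should be ~1e-16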

Question 6 (2 points)

Suppose we are constructing a binary (two-category) classifier to discriminate between classes ω_1 and ω_2 based on measurements x ∈ R^d. We are interested in zero-one loss, so our decision rule is

    Rule 1: Decide ω_1 if p(ω_1|x) > p(ω_2|x).

Another way of characterizing a classifier is through discriminant functions, and when there are two categories, there are two equivalent ways to do this. We can define two discriminant functions g_1(x) and g_2(x) and use the rule "Decide ω_1 if g_1(x) > g_2(x)", or we can define a single discriminant function g(x) ≡ g_1(x) − g_2(x) and use the rule "Decide ω_1 if g(x) > 0".

a. Assuming that the class conditional densities p(x|ω_i) and prior distributions p(ω_i) are known, show that the discriminant function for Rule 1 can be written

    g(x) = log( p(x|ω_1) / p(x|ω_2) ) + log( p(ω_1) / p(ω_2) ).

b. Suppose the two classes are equally probable, so p(ω_1) = p(ω_2), and suppose the class conditional densities are Gaussian distributions with means µ_1 and µ_2 and covariance matrices that are diagonal and equal: Σ_1 = Σ_2 = σ² I. Show that the discriminant function for Rule 1 can now be written

    g(x) = w^T x + b,

where w ∈ R^d and b ∈ R depend on the means µ_i and the variance parameter σ. (Hint: expand the quadratic forms ‖x − µ_i‖² = x^T x − 2 µ_i^T x + µ_i^T µ_i and think about which terms can be ignored.)

c. The decision rule Rule 1 induces a decision surface in the measurement space, with measurements being assigned to one class or the other depending on which side of the surface they lie. Based on part (b), provide a geometric interpretation of the decision surface.

Question 7 (2 points)

The next page contains Matlab code that clusters three-dimensional points using the Expectation-Maximization algorithm and a mixture-of-Gaussians model for the data.

a. Which two lines of this code must be modified so that the function performs the k-means algorithm instead?

b. Substitute new code for these lines to implement this change. (While this can be done by inserting only two new lines of code, you are free to insert multiple lines of code in place of each of the two removed lines.)
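To see Question 6(b) in action, here is a minimal sketch (with assumed example parameters) that carries out the reduction the hint suggests, using w = (µ_1 − µ_2)/σ² and b = (µ_2^T µ_2 − µ_1^T µ_1)/(2σ²):

    mu1 = [1; 0]; mu2 = [-1; 1]; sigma = 2;   % assumed example parameters
    w = (mu1 - mu2)/sigma^2;                  % linear weight vector
    b = (mu2'*mu2 - mu1'*mu1)/(2*sigma^2);    % scalar offset
    X = randn(5,2);                           % five test measurements (rows)
    class = 2 - (X*w + b > 0);                % 1 = decide omega_1, 2 = decide omega_2

Since g(x) = 0 defines a hyperplane whose normal w is parallel to µ_1 − µ_2, part (c) amounts to recognizing where that hyperplane sits relative to the two means.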

 1  function Zo=EM(X,Mo,Co)
 2  %EM Expectation-Maximization for Gaussian mixtures in 3D.
 3  %   Input:  X = (numpts) x 3 array of points
 4  %           Mo= (numclusters) x 3 array of initial cluster means
 5  %           Co= 3 x 3 x (numclusters) array of initial cluster
 6  %               covariance matrices
 7  %   Output: Zo= (numpts) x 1 vector with cluster number
 8  %               (i.e., one of 1,2,...(numclusters)) for each point
 9  numpts=size(X,1);
10  numclusters=size(Mo,1);
11
12
13  % support maps for each point
14  Z=zeros(numclusters,numpts);
15
16  % mixture weights are initially assumed uniform
17  weights=ones(numclusters,1)/numclusters;
18
19  % Allocate space for mean and covariance at each iteration.
20  % Initialize to Mo and Co.
21  M=Mo; C=Co;
22
23  % repeat for ten iterations
24  for n=1:10
25
26    % E-step
27    for c=1:numclusters
28      Z(c,:)=weights(c)*gaussian(X,M(c,:),C(:,:,c))';
29    end
30    Z=Z./repmat(sum(Z),[numclusters,1]);
31
32    % M-step
33    weights = mean(Z,2);
34    for c=1:numclusters
35      Xm=X-repmat(M(c,:),[numpts,1]);
36      C(:,:,c)=(repmat(Z(c,:),[3,1]).*Xm')*Xm/sum(Z(c,:));
37      M(c,:)=sum(repmat(Z(c,:),[3,1]).*X',2)/sum(Z(c,:));
38    end
39  end
40
41  % Final label for each point is cluster with maximum support
42  [y,Zo]=max(Z);
43  Zo=Zo';
44
45  %%% SUB-ROUTINES
46
47  function G=gaussian(X,M,C)
48  % Evaluate multi-variate Gaussian with mean M and covariance C
49  % at points X.
50  ndims=length(M);
51
52  numpts=size(X,1);
53  X=X-repmat(M(:)',[numpts,1]);
54  G=exp(-(sum(X'.*(inv(C)*X')))'/2)/sqrt(((2*pi)^ndims)*det(C));
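For reference when attempting Question 7, one plausible two-line substitution (a sketch assuming hard nearest-mean assignments are intended, not an official answer key) replaces the soft E-step supports on lines 28 and 30 of the listing above:

    % In place of line 28: support is negative squared distance to mean c,
    % so a larger value means a closer point.
    Z(c,:) = -sum((X - repmat(M(c,:),[numpts,1])).^2, 2)';

    % In place of line 30: hard-assign each point to its nearest mean
    % (a 0/1 indicator per column; exact ties are immaterial here).
    Z = double(Z == repmat(max(Z,[],1),[numclusters,1]));

With these assignments no longer depending on C or weights, the mean update on line 37 reduces to the k-means centroid update, and the remaining M-step lines become harmless bookkeeping.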

Question 8 (2 points)

Consider the six textures below, which are numbered (i)-(vi). Below the textures are six sets of graphs, labeled (a)-(f). Each set of graphs corresponds to one of the six textures, and each set contains a normalized histogram p_z(z) for the gray levels in the texture, as well as two functions computed from the Fourier spectrum:

    S(θ) = ∫_{r=0}^{r_max} F(r, θ) dr,      S(r) = ∫_{θ=0}^{π} F(r, θ) dθ,

where F(r, θ) is the centered Fourier spectrum written in polar coordinates as shown above-right. Match each texture to its corresponding set of graphs by writing a label ((a)-(f)) for each texture ((i)-(vi)).

[Figure: six texture images (i)-(vi), followed by six sets of graphs (a)-(f); each set shows the gray-level histogram p_z(z) and the functions S(r) and S(θ).]
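The descriptors in Question 8 are straightforward to compute. A minimal sketch (assuming a grayscale image im with gray levels 0-255; the 1-degree angular binning is an arbitrary choice) of p_z(z), S(r), and S(θ):

    F = abs(fftshift(fft2(double(im))));            % centered Fourier spectrum
    [h,w] = size(F);
    [xg,yg] = meshgrid((1:w)-floor(w/2)-1, (1:h)-floor(h/2)-1);
    r  = round(sqrt(xg.^2 + yg.^2)) + 1;            % radial bin for each frequency
    th = floor(mod(atan2(yg,xg), pi)/pi*180) + 1;   % angular bin, 1..180
    Sr = accumarray(r(:),  F(:));                   % S(r): sum over theta at each radius
    St = accumarray(th(:), F(:));                   % S(theta): sum over r at each angle
    pz = hist(double(im(:)), 0:255)/numel(im);      % normalized gray-level histogram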
