CITS 4402 Computer Vision
A/Prof Ajmal Mian, Adj/A/Prof Mehdi Ravanbakhsh
Lecture 06: Object Recognition

Objectives
- To understand the concept of image-based object recognition
- To learn how to match images beyond simple template matching
- To study the object recognition pipeline
- To learn about classification algorithms
- A brief introduction to machine learning

Specific Recognition Tasks
Recognition is a core computer vision problem. Examples:
- Scene classification (outdoor / indoor)
- Image tagging (street, people, tourism, mountain, cloudy, ...)
- Finding pedestrians
[Figure: example images annotated with labels such as tree, sky, building, mountain; banner, people, street lamp, market]

What is Recognition? Identification

What is Recognition? Categorization

Detection versus Recognition?
1. Face detection: Where are the faces in an image?
2. Specific person detection: Where is person X in an image?
3. Face recognition: Given a face image (from step 1), find the identity of the person.
4. Can you define pedestrian detection and stop sign detection now?

Scale of Detection
Faces may need to be detected at different scales. We can also have nested detections: first detect the face, then detect features such as eye corners, nose tip, etc.

Visual Recognition
Design algorithms that are capable of:
- Classifying images or videos
- Detecting and localizing objects
- Estimating semantic and geometrical attributes
- Classifying human activities and events
Why is this challenging?

How many Object Categories are there?

Challenges: Shape and Appearance Variations

Challenges: Viewpoint Variations

Challenges: Illumination

Challenges: Background Clutter

Challenges: Scale

Challenges: Occlusion

Challenges Do Not Appear in Isolation!
Task: detect phones in this image. At once we face appearance variations, viewpoint variations, illumination variations, background clutter, scale changes, and occlusion.

What Works Today
- Reading license plates, zip codes, checks
- Fingerprint recognition
- Face detection
- Recognition of flat textured objects (CD covers, book covers, etc.)

What Works Today
Who has the largest database of tagged/labelled faces? DeepFace was developed by Facebook: a 9-layer deep neural network with over 120 million parameters, using several locally connected layers. "...thus we [Facebook] trained it on the largest facial dataset to date, an identity-labelled dataset of 4 million facial images belonging to more than 4,000 identities, where each identity has an average of over a thousand samples." At 97% accuracy, it closely approaches human-level performance.

A Simple Object Recognition Pipeline
Training: Training Images + Training Labels -> Image Features -> Training -> Learned Classifier
Testing: Test Image -> Image Features -> Learned Classifier -> Prediction

Image Features
Two primary characteristics are used for object recognition: shape and appearance.
- Shape can be modelled with Principal Component Analysis (PCA); PCA can also model appearance.
- Appearance can be modelled with colour histograms.

Example of Shape Modeling using PCA
What is the shape of an aircraft?

Images as Data Points
An N × N pixel image, represented as a vector, occupies a single point in N²-dimensional image space. Images of a particular object, being similar in overall configuration, are not randomly distributed in this huge image space but form clusters. They can therefore be compactly represented and modelled in a low-dimensional subspace.
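As a minimal NumPy sketch of this representation (the image size and count are made-up values for illustration):

```python
import numpy as np

# A hypothetical batch of n grayscale images, each N x N pixels.
N, n = 100, 50
images = np.random.rand(n, N, N)   # stand-in for real face images

# Each image becomes a single point in N^2-dimensional image space;
# the columns of X are the vectorized images.
X = images.reshape(n, N * N).T     # shape (N^2, n) = (10000, 50)
```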

Principal Component Analysis (PCA)
- Calculate the vectors that account for the maximum variance of the data. These vectors are called eigenvectors.
- Eigenvectors give the directions of the axes of a fitted ellipsoid.
- Eigenvalues give the significance of the corresponding axis: a large value means more variance.
- For high-dimensional data, only a few of the eigenvalues are significant.

Principal Component Analysis (PCA)
1. Find the eigenvalues and eigenvectors.
2. Choose the highest P eigenvalues.
3. Form a new coordinate system defined by the significant eigenvectors.
4. Project the data into the new space (rotate the basis) to obtain the compressed data.
[Figure: data in the (x1, x2) plane with the 1st and 2nd principal components overlaid]

Principal Component Analysis (Maths)
- Let X be a matrix of the training images (each column is a vectorized image).
- Find its column mean μ = (1/n) Σ_i X_i (the average face).
- Subtract the mean from all data: X̄ = X − μ.
- Compute U S V^T = X̄ X̄^T (singular value decomposition). The columns of U are the eigenvectors and the diagonal of S holds the eigenvalues, sorted in decreasing order.
- Choose P eigenvectors, i.e. the first P columns of U.
- If m is the number of pixels and m > n, use the following trick: compute U_1 S V^T = X̄^T X̄ and then U = X̄ U_1.
- Any data sample x can be projected into the PCA space as U_P^T (x − μ), where U_P contains the top (first) P eigenvectors of U.
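A minimal NumPy sketch of the steps above, including the m > n trick; the function names and the column re-normalization are illustrative choices, not the lecture's reference code:

```python
import numpy as np

def pca_train(X, P):
    """X: m x n matrix whose columns are vectorized training images.
    Returns the mean face mu and the top-P eigenvectors U_P."""
    m, n = X.shape
    mu = X.mean(axis=1, keepdims=True)          # average face
    Xc = X - mu                                 # subtract mean from all data
    if m > n:
        # Trick: decompose the small n x n matrix Xc^T Xc instead of the
        # huge m x m matrix Xc Xc^T, then map back with U = Xc U1.
        U1, S, _ = np.linalg.svd(Xc.T @ Xc)
        U = Xc @ U1
        U /= np.linalg.norm(U, axis=0) + 1e-12  # re-normalize the columns
    else:
        U, S, _ = np.linalg.svd(Xc @ Xc.T)
    return mu, U[:, :P]

def pca_project(x, mu, U_P):
    """Project a sample x into the P-dimensional PCA space: U_P^T (x - mu)."""
    return U_P.T @ (x - mu.ravel())
```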

PCA for Recognition
- U (the eigenvector matrix) is calculated from the training data.
- The training data X are projected into the PCA space using U.
- Test data are projected into the same PCA space (same U).
- The nearest neighbour is used for classification.
- If the original images had m = 50 × 50 = 2500 dimensions and we choose P = 20, the projected images are only 20-dimensional.
- If we have n = 100 training samples, the number of eigenvectors with nonzero eigenvalues is always < 100 (at most 99).
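Continuing the sketch above (the names mu, U_P, and the stored training projections are assumptions carried over from it), recognition reduces to a nearest-neighbour search in the P-dimensional space:

```python
import numpy as np

def recognize(y, mu, U_P, train_proj, train_labels):
    """train_proj: n x P array whose rows are the projections a_i of the
    training images; train_labels: the n corresponding identities."""
    w = U_P.T @ (y - mu.ravel())                    # project the query
    dists = np.linalg.norm(train_proj - w, axis=1)  # distance to every a_i
    return train_labels[int(np.argmin(dists))]      # nearest neighbour wins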

Case Study: Face Recognition
Milestone methods in face detection / recognition:
1. PCA and Eigenfaces (Turk & Pentland, 1991)
2. LDA and Fisherfaces (Belhumeur et al., 1997)
3. AdaBoost (Viola & Jones, 2001)
4. Local Binary Patterns (Ahonen et al., 2004)
5. DeepFace (Facebook, 2014)

PCA and Eigenfaces: Training
1. Align the training images x_1, x_2, ..., x_N.
2. Compute the average face μ = (1/N) Σ x_i.
3. Compute the PCA of the covariance matrix of the mean-subtracted (difference) images.
4. Compute the training projections a_1, a_2, ..., a_N.

PCA and Eigenfaces: Testing
[Figure: visualization of the first 4 eigenfaces (eigenvectors) from a training set of 400 images]
1. Take a query image y.
2. Project y into the eigenface space: ω = U_P^T (y − μ).
3. Compare the projection ω with all training projections a_i.
4. The identity of the query image is chosen as that of the nearest training image (i.e. the one with the lowest ‖ω − a_i‖).

Reconstruction using PCA
Selecting only the top P eigenfaces reduces the dimensionality. Fewer eigenfaces mean more information loss, and hence less discrimination between faces.
[Figure: reconstructions with P = 4, P = 200, and P = 400]
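The reconstruction itself is one back-projection; a sketch under the same assumed names as the earlier PCA code:

```python
import numpy as np

def reconstruct(x, mu, U_P):
    """Approximate x from its top-P coefficients: x_hat = mu + U_P w.
    Larger P keeps more of the variance, so the reconstruction error shrinks."""
    w = U_P.T @ (x - mu.ravel())    # project onto the top-P eigenfaces
    return mu.ravel() + U_P @ w     # back-project into image space
```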

PCA: Final Notes
- PCA finds the directions of maximum variance of the data. These may not separate the classes at all.
- Basic PCA is also sensitive to noise and outliers (read about other variants, e.g. Robust PCA).
- Linear Discriminant Analysis (LDA) finds the direction along which the between-class distance is maximal. Sometimes PCA is followed by LDA to combine the advantages of both.
- Eigen-eyes, eigen-nose, eigen-X: your imagination is the only limitation.

Importance of Colour in Object Detection / Recognition

Colour Histogram
- Colour stays constant under geometric transformations.
- Colour is a local feature: it is defined for each pixel and is robust to partial occlusion.
- Idea: we can use object colours directly for recognition, or better, use statistics of object colours.
- The colour histogram is a type of appearance feature.

Colour Sensing

Colour Spaces: RGB
The primaries are monochromatic lights. For cameras, they correspond to the Bayer filter pattern; for monitors, they correspond to the three types of phosphors.

Colour Spaces: CIE XYZ
Links physical pure colours (i.e. wavelengths) in the visible electromagnetic spectrum with the physiologically perceived colours of human colour vision. The primaries X, Y, and Z are imaginary, but the matching functions are everywhere positive.
2D visualization: plot the values x = X/(X + Y + Z) and y = Y/(X + Y + Z); the value z = 1 − x − y is implied.

Colour Spaces: HSV
HSV = Hue, Saturation, Value (brightness). It is nonlinear and reflects the topology of colours by coding hue as an angle. Matlab functions: hsv2rgb, rgb2hsv.

Colour Histograms
Colour histograms are colour statistics. Here we use RGB as an example. Given the tristimulus values R, G, B for each pixel, compute a 3D histogram:
h(R, G, B) = #(pixels with colour (R, G, B))
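A sketch of this computation with NumPy; the choice of 8 bins per channel is an arbitrary illustration, not prescribed by the slides:

```python
import numpy as np

def colour_histogram(img, bins=8):
    """3D RGB histogram of an H x W x 3 image with 8-bit channels:
    h[r, g, b] counts the pixels falling into each colour bin."""
    pixels = img.reshape(-1, 3)
    h, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                          range=((0, 256), (0, 256), (0, 256)))
    return h / h.sum()   # normalize so histograms of different images compare
```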

Colour Normalization
One component of the 3D colour space is intensity. If a colour vector is multiplied by a scalar, the intensity changes but the colour itself does not. This means colours can be normalized by the intensity, which is given by I = (R + G + B)/3. Chromatic representation:
r = R/(R + G + B), g = G/(R + G + B), b = B/(R + G + B)
Since r + g + b = 1, only 2 parameters are needed to represent colour (knowing r and g, we can deduce b = 1 − r − g). We can compute the colour histogram using r, g, and b instead.
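A small sketch of the normalization; the guard against black pixels (R + G + B = 0) is my addition, not part of the slide:

```python
import numpy as np

def chromaticity(img):
    """Return the r and g chromaticity channels of an H x W x 3 image.
    b = 1 - r - g is redundant and is therefore dropped."""
    img = img.astype(np.float64)
    s = img.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0            # assumption: treat black pixels as neutral
    return img[..., :2] / s    # H x W x 2 array holding (r, g)
```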

Object Recognition based on Colour Histograms
Proposed by Swain and Ballard (1991): objects are identified by matching a colour histogram from an image region with a colour histogram from a sample of the object. The technique has been shown to be remarkably robust to:
- changes in the object's orientation,
- changes in the scale of the object,
- partial occlusion, and
- changes of viewing position and direction.

Object Recognition based on Colour Histograms
Colour histograms are discrete approximations of the colour distribution of an image. They contain no spatial information and are invariant to translation, scale, and rotation.

Histogram Comparison

Histogram Comparison with Multiple Training Views
Need a good comparison measure for colour histograms!

What is a Good Comparison Measure? How to define the matching cost?

Comparison Measures
Euclidean distance (L2 norm):
d(q, v) = Σ_i (q_i − v_i)²
Motivation of the Euclidean distance: it focuses on the differences between the histograms. Interpretation: distance in the feature space. Range: [0, ∞). All cells are weighted equally. Not very robust to outliers!

Comparison Measures (Cont.)
Chi-square distance:
d(q, v) = Σ_i (q_i − v_i)² / (q_i + v_i)
Motivation of the χ² distance: statistical background; it tests whether two distributions are different, and a significance score can be computed. Range: [0, ∞). Cells are not weighted equally! More robust to outliers than the Euclidean distance, provided the histograms contain enough observations.
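Both measures are one line each in NumPy; the small eps in the χ² denominator (to avoid dividing by zero on empty cells) is my addition:

```python
import numpy as np

def euclidean_dist(q, v):
    """Sum of squared cell differences; all cells weighted equally."""
    return float(np.sum((q - v) ** 2))

def chi_square_dist(q, v, eps=1e-10):
    """Chi-square distance; cells with small counts are weighted up."""
    return float(np.sum((q - v) ** 2 / (q + v + eps)))
```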

Comparison for Image Retrieval
The image retrieval problem concerns retrieving those images in a database that best match a query image. Candidate measures: L2 distance, Jeffrey divergence, χ² distance, Earth Mover's Distance.

Histogram Comparison
Which measure is the best? It depends on the application.
- The Euclidean distance is often not robust enough.
- The χ² distance generally gives good performance for histograms.
- The KL/Jeffrey divergence works well sometimes, but is expensive.
- The EMD is the most powerful, but also very expensive.

Object Recognition Using Histograms: Summary
A simple algorithm (see the sketch below):
1. Build a set of histograms H = {h_i} for each known object; more exactly, for each view of each object.
2. Build a histogram h_t for the test image.
3. Compare h_t with each h_i ∈ H using a suitable histogram comparison measure.
4. Select the object with the best matching score, or reject the test image if no object is similar enough. This is known as the nearest-neighbour strategy.
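A sketch of the full strategy, reusing chi_square_dist from the previous sketch; the rejection threshold is a free parameter left to the application:

```python
import numpy as np

def match_object(h_t, histograms, labels, dist=None, reject_thresh=None):
    """Nearest-neighbour matching of a test histogram h_t against the
    stored histograms h_i (one per view of each known object)."""
    dist = dist or chi_square_dist           # from the previous sketch
    dists = [dist(h_t, h) for h in histograms]
    best = int(np.argmin(dists))
    if reject_thresh is not None and dists[best] > reject_thresh:
        return None                          # no object is similar enough
    return labels[best]
```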

A Simple Object Recognition Pipeline (with Machine Learning)
Training: Training Images + Training Labels -> Image Features -> Training (machine learning) -> Learned Classifier
Testing: Test Image -> Image Features -> Learned Classifier -> Prediction

Goal of Machine Learning
Consider a 28 × 28 pixel image, represented by a 784-dimensional vector x. Goal: build a machine that takes the vector x as input and produces the identity of the digit (0, ..., 9) as output.

The Machine Learning Framework
Training data consist of data samples and their target vectors.
- Learning / training: the machine takes the training data and automatically learns the mapping y = f(x) from data samples (image features) to target vectors (output labels).
- Testing: the target vectors are concealed from the machine, which predicts them using the previously learned model. Accuracy can be evaluated by comparing the predicted vectors to the actual vectors.

Classification
Assign an input vector to one of two or more classes. Any decision rule divides the input space into decision regions separated by decision boundaries.

Nearest Neighbour Classifier
Assign the label of the nearest training data point to each test data point.

Nearest Neighbour Classifier
Partitioning of the feature space for two-category 2D data using the 1-nearest-neighbour rule.

K-nearest-neighbour
Distance measure: Euclidean, D(X, Y) = √(Σ_{i=1}^{D} (x_i − y_i)²)
[Figure: a 2D scatter of two classes (o and *) with a query point +, classified by its 1, 3, and 5 nearest neighbours]
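A compact k-NN sketch under the Euclidean distance above; breaking ties via Counter's ordering is an implementation choice of mine:

```python
import numpy as np
from collections import Counter

def knn_classify(x, X_train, y_train, k=3):
    """Majority vote among the k nearest training points,
    with D(X, Y) = sqrt(sum_i (x_i - y_i)^2)."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]
```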

K-NN: Practical Matters
- Choosing the value of k: if too small, sensitive to noise points; if too large, the neighbourhood may include points from other classes. Solution: cross-validation.
- It can produce counter-intuitive results because each feature may have a different scale. Solution: normalize each feature to zero mean and unit variance.
- Curse of dimensionality. Solution: no good solution exists so far.
This classifier works well provided there are lots of training data and the distance function is good.

Discriminative Classifiers

Nearest Neighbours Classifier

K-Nearest Neighbours Classifier

Linear Classifiers
Support Vector Machines find the hyperplanes (if the features are linearly separable) that separate the classes in the model space.

Linear Classifiers
Suppose the points are in 2D. Points in class 1 have label y_i = +1; points in class 2 have label y_i = −1. Given (x_i, y_i), where y_i ∈ {−1, +1}, for i = 1, ..., N, and x_i ∈ R². Find w and b such that
w^T x_i + b ≥ +1 if y_i = +1
w^T x_i + b ≤ −1 if y_i = −1

Linear Classifiers
Once we have learned w and b, we can classify any given test point x. This is known as the testing stage:
if w^T x + b ≥ +1, classify x into class 1; else classify x into class 2.
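The testing stage is a one-liner once w and b are known; a sketch following the slide's ≥ +1 rule (in practice the sign of w^T x + b is the more common decision rule):

```python
import numpy as np

def svm_classify(x, w, b):
    """Class 1 if w^T x + b >= +1, else class 2, per the rule above."""
    return 1 if float(w @ x + b) >= 1 else 2
```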

Nonlinear SVMs
The linear SVM works out great when the data are linearly separable, e.g. in the 1D case. But what if the data are more complicated? We can map the data to a higher-dimensional space.

Nonlinear SVMs
We use a lifting transformation Φ to map the feature vectors to a higher-dimensional space, as in the sketch below.
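A minimal example of such a lifting for 1D data, assuming Φ(x) = (x, x²); the sample points are made up:

```python
import numpy as np

def lift(x):
    """Phi(x) = (x, x^2): 1D points that are not separable on the line
    can become linearly separable in the lifted 2D space."""
    x = np.asarray(x, dtype=float)
    return np.stack([x, x ** 2], axis=-1)

# Classes {-2, +2} vs {0} cannot be split by one threshold in 1D,
# but after lifting, the horizontal line x2 = 2 separates them.
print(lift([-2.0, 0.0, 2.0]))   # [[-2. 4.] [0. 0.] [2. 4.]]
```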

Summary
- Challenges in object recognition
- A simple object recognition pipeline
- Principal Component Analysis
- Colour histograms
- Discriminative classifiers (k-NN and SVM)
Acknowledgements: These slides are based on previous lectures by A/Prof Du Huynh and Prof Peter Kovesi. Other material has been taken from Wikipedia, the computer vision textbook by Forsyth & Ponce, and Stanford's Computer Vision course by Fei-Fei Li.