Computer Assisted Image Analysis
Transcription
1 Computer Assisted Image Analysis, Lecture 10: Object Descriptors II. Amin Allalou, amin@cb.uu.se, Centre for Image Analysis, Uppsala University.
2 Reading Instructions. Chapters for this lecture: Sections 11.3–11.4 in Gonzalez & Woods.
3 Today. Regional descriptors: simple, topological, texture (statistical and spectral), moments. Other descriptors: principal components.
4 Simple Descriptors. Area: the number of pixels in the region. Perimeter: the boundary length. Compactness: P2A = P²/A (circularity). Graylevel measures: mean, median, max, etc.
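These simple descriptors take only a few lines of code. A minimal sketch assuming scikit-image is available (the slides name no library, and the toy mask is my own example):

```python
import numpy as np
from skimage import measure

# Toy binary image with a single square region (assumed example).
mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 1

region = measure.regionprops(measure.label(mask))[0]
area = region.area               # number of pixels in the region
perimeter = region.perimeter     # boundary length (an approximation)
p2a = perimeter ** 2 / area      # compactness P2A = P^2 / A
print(area, perimeter, p2a)      # a circle minimizes P2A at 4*pi
```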
5 Topological Descriptors. Topology: the study of properties of a figure that are unaffected by any deformation. Topological descriptors: the number of holes in the region, H; the number of connected components, C; the Euler number, E = C − H.
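A small sketch of the Euler number computed from connected-component counts using scipy.ndimage (my choice of library; counting background components to find holes is one simple approach):

```python
import numpy as np
from scipy import ndimage

img = np.zeros((40, 40), dtype=np.uint8)
img[5:35, 5:35] = 1
img[15:25, 15:25] = 0          # punch one hole, so H = 1

C = ndimage.label(img)[1]      # connected components of the foreground
# Holes: background components minus the single outer background.
# (Strictly, foreground and background should use complementary
# connectivities; the default 4-connectivity is fine for this toy case.)
H = ndimage.label(1 - img)[1] - 1
E = C - H                      # Euler number
print(C, H, E)                 # 1 1 0
```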
6 Topological Descriptors: Using Connected Components in Segmentation. Figure: IR image, thresholded image, largest connected component, skeleton.
7 Texture. Figure: smooth, coarse, and regular textures. Statistical and spectral approaches.
8 Statistical Texture. Statistical texture descriptors: histogram based (statistical moments, uniformity, average entropy) and co-occurrence matrix based (maximum probability, etc.). A drawback of using the histogram for texture analysis is that no positional information is used: it can tell whether a texture is smooth or coarse, but not whether it is patterned.
9 Histogram Based Descriptors: Statistical Moments. Properties of the graylevel histogram of an image or region are used when calculating statistical moments. z: a discrete random variable representing the discrete graylevels in the range [0, L−1]. p(z_i): the normalized histogram component corresponding to the i-th value of z. The n-th statistical moment (about the mean): μ_n(z) = Σ_{i=0}^{L−1} (z_i − m)^n p(z_i), where m = Σ_{i=0}^{L−1} z_i p(z_i). μ_2(z) is the variance of z (i.e., σ²(z)); μ_3(z) measures histogram skewness; μ_4(z) measures the relative flatness of the histogram.
10 Histogram Based Descriptors: Uniformity and Average Entropy. These also use the histogram properties z and p(z_i). Uniformity: U = Σ_{i=0}^{L−1} p²(z_i). U is maximum for an image in which all graylevels are equal (maximally uniform). Average entropy: e = −Σ_{i=0}^{L−1} p(z_i) log₂ p(z_i). A measure of variability; it is 0 for constant images.
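A compact numpy sketch of these histogram-based descriptors (the function name and the handling of 0·log₂0 terms are my own choices):

```python
import numpy as np

def histogram_texture(img, L=256):
    """Histogram-based texture descriptors: moments, uniformity, entropy."""
    counts, _ = np.histogram(img, bins=L, range=(0, L))
    p = counts / counts.sum()            # normalized histogram p(z_i)
    z = np.arange(L)
    m = (z * p).sum()                    # mean graylevel

    def mu(n):                           # n-th central moment of z
        return ((z - m) ** n * p).sum()

    variance = mu(2)
    skewness = mu(3)
    uniformity = (p ** 2).sum()          # U = sum p^2(z_i)
    nz = p[p > 0]                        # skip zeros to avoid log2(0)
    entropy = -(nz * np.log2(nz)).sum()  # e = -sum p log2 p
    return variance, skewness, uniformity, entropy

img = np.random.randint(0, 256, (64, 64))
print(histogram_texture(img))
```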
11 Co-occurrence Matrix. For an image with N graylevels and a positional operator P, generate an N×N matrix A, where a_{i,j} is the number of times a pixel with graylevel value z_i is in relative position P to a pixel with graylevel value z_j. Divide all elements in A by the sum of all elements in A (the number of times P was satisfied in the image). This gives a new matrix C, where c_{i,j} is the probability that a pair of pixels fulfilling P has graylevel values z_i and z_j; C is called the co-occurrence matrix.
12 Co-occurrence Matrix: Examples. Ex. 1) P = "one pixel down": A = [matrix shown on slide], C = [matrix shown on slide]. Ex. 2) A_2, C_2 = [matrices shown on slide]. Reading the matrices: the 1st row gives pixel value 0 in position P relative to the other values, the 2nd row pixel value 1, and the 3rd row pixel value 2.
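The example matrices did not survive extraction, but the construction is easy to reproduce. A sketch for P = "one pixel down" (the 3×3 test image is my own toy example):

```python
import numpy as np

def cooccurrence(img, N):
    """Co-occurrence matrix for P = 'one pixel down' (a sketch)."""
    A = np.zeros((N, N))
    upper, lower = img[:-1, :], img[1:, :]   # each pixel and the pixel below
    for zi, zj in zip(upper.ravel(), lower.ravel()):
        A[zi, zj] += 1                       # count pairs satisfying P
    return A / A.sum()                       # normalize A to get C

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 2]])
print(cooccurrence(img, 3))
```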
13 Co-occurrence Matrix Descriptors. Maximum probability (the strongest response to P): max_{i,j}(c_{ij}). Uniformity: Σ_i Σ_j c²_{ij}. Entropy (randomness): −Σ_i Σ_j c_{ij} log₂ c_{ij}. Characterize the content of C using these descriptors.
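These descriptors are one-liners on top of the matrix built above; a sketch that can also be used to check the examples on the next two slides (skipping zero entries in the entropy sum is my assumption about the intended convention):

```python
import numpy as np

def cooc_descriptors(C):
    """Maximum probability, uniformity, entropy of a co-occurrence matrix."""
    max_prob = C.max()                      # strongest response to P
    uniformity = (C ** 2).sum()
    nz = C[C > 0]                           # avoid log2(0)
    entropy = -(nz * np.log2(nz)).sum()
    return max_prob, uniformity, entropy
```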
14 Co-occurrence Matrix Descriptors: Example 1. C_1 = [matrix shown on slide]. Maximum probability: max_{i,j}(c_{ij}) = 1/3. Uniformity: Σ_i Σ_j c²_{ij} (value shown on slide). Entropy: −Σ_i Σ_j c_{ij} log₂ c_{ij} = undef. (C_1 contains zero entries, for which log₂ 0 is undefined).
15 Co-occurrence Matrix Descriptors: Example 2. C_2 = [matrix shown on slide]. Maximum probability: max_{i,j}(c_{ij}) (value shown on slide). Uniformity: Σ_i Σ_j c²_{ij} ≈ 0.67. Entropy: −Σ_i Σ_j c_{ij} log₂ c_{ij} ≈ 2.98.
16 Co-occurrence Matrix Descriptors: Examples. Figure: match each sample image with the correct co-occurrence matrix.
17 Spectral Texture. Peaks in the Fourier spectrum give information about the direction and spatial period of a pattern. Express the spectrum in polar coordinates, S(r, θ). For each direction θ, S(r, θ) is a 1D function S_θ(r); similarly, for each frequency r, S_r(θ) is a 1D function. A global description can be obtained by summing S_θ(r) and S_r(θ): S(r) = Σ_{θ=0}^{π} S_θ(r), S(θ) = Σ_{r=1}^{R} S_r(θ). This gives a spectral-energy description of the entire region.
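A sketch of S(r) and S(θ) computed from the 2D FFT with numpy; binning radii and angles via weighted histograms is my own implementation choice:

```python
import numpy as np

def spectral_texture(img, n_r=32, n_theta=36):
    """S(r) and S(theta) from the Fourier power spectrum (a sketch)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    S = np.abs(F) ** 2                         # spectral energy
    y, x = np.indices(S.shape)
    y = y - S.shape[0] // 2                    # center the coordinates
    x = x - S.shape[1] // 2
    r = np.hypot(x, y)
    theta = np.mod(np.arctan2(y, x), np.pi)    # fold directions into [0, pi)
    S_r, _ = np.histogram(r, bins=n_r, weights=S)              # sum over theta
    S_theta, _ = np.histogram(theta, bins=n_theta, weights=S)  # sum over r
    return S_r, S_theta
```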
18 Spectral Texture: Example. Figure: an image, its spectrum, S(r), and S(θ).
19 Moments. For a 2D continuous function f(x, y), the moment of order (p + q) is defined as m_pq = ∫∫ x^p y^q f(x, y) dx dy, for p, q = 0, 1, 2, .... The moment sequence (m_pq) is uniquely determined by f(x, y), and conversely, (m_pq) uniquely determines f(x, y). The central moments are defined as μ_pq = ∫∫ (x − x̄)^p (y − ȳ)^q f(x, y) dx dy, where x̄ = m_10/m_00 and ȳ = m_01/m_00.
20 Moments. If f(x, y) is a digital image, the central moments become μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y). The normalized central moments, denoted η_pq, are defined as η_pq = μ_pq / μ_00^γ, where γ = (p + q)/2 + 1 for p + q = 2, 3, ....
21 Moments. A set of seven invariant moments can be derived from the second and third moments: φ_1 = η_20 + η_02; φ_2 = (η_20 − η_02)² + 4η_11²; φ_3 = (η_30 − 3η_12)² + (3η_21 − η_03)²; φ_4 = (η_30 + η_12)² + (η_21 + η_03)²; ... (see the textbook). This set of moments is invariant to translation, rotation, and scale change.
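A plain-numpy sketch of the first four invariants, straight from the formulas above (the function name is mine; OpenCV's cv2.HuMoments computes all seven):

```python
import numpy as np

def invariant_moments(f):
    """First four invariant moments phi_1..phi_4 of an image f (a sketch)."""
    y, x = np.indices(f.shape).astype(float)
    m00 = f.sum()
    xbar, ybar = (x * f).sum() / m00, (y * f).sum() / m00

    def eta(p, q):                       # normalized central moment eta_pq
        mu = ((x - xbar) ** p * (y - ybar) ** q * f).sum()
        return mu / m00 ** ((p + q) / 2 + 1)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    phi3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    phi4 = (eta(3, 0) + eta(1, 2)) ** 2 + (eta(2, 1) + eta(0, 3)) ** 2
    return phi1, phi2, phi3, phi4
```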
22 Moments: Invariant Example. [Figure shown on slide.]
23 Principal Components. Applicable to an image, region, or boundary. Each pixel is represented as a 3D vector of its R, G, B values. Pixel vector and mean vector: x_{i,j} = [R(i, j), G(i, j), B(i, j)]^T, m_x = E{x} = [mean(R), mean(G), mean(B)]^T = (1/K) Σ_{k=1}^{K} x_k. Covariance matrix: C_x = E{(x − m_x)(x − m_x)^T} = (1/K) Σ_{k=1}^{K} x_k x_k^T − m_x m_x^T.
24 Principal Components. C_x is real and symmetric; e_i and λ_i are its eigenvectors and corresponding eigenvalues (C_x e_i = λ_i e_i). A is the matrix with the eigenvectors as rows, ordered by decreasing eigenvalue. Use A to transform x to y: y = A(x − m_x). This is called the Hotelling transform or principal component transform. The covariance matrix C_y = A C_x A^T is a diagonal matrix with the eigenvalues on the diagonal. Any vector x can be recovered from y by x = A^T y + m_x, and approximated by using only some (say k) of the eigenvectors via a matrix A_k constructed from those k eigenvectors.
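A minimal numpy sketch of the transform (the data layout, one sample vector per row, is my assumption; note np.cov normalizes by K−1 rather than the slide's 1/K):

```python
import numpy as np

def hotelling(X, k=None):
    """Hotelling / principal component transform of the row vectors in X."""
    m = X.mean(axis=0)                      # m_x
    C = np.cov(X, rowvar=False)             # C_x, real and symmetric
    vals, vecs = np.linalg.eigh(C)          # eigendecomposition
    A = vecs[:, np.argsort(vals)[::-1]].T   # eigenvectors as rows, by decreasing eigenvalue
    if k is not None:
        A = A[:k]                           # keep the first k components (A_k)
    Y = (X - m) @ A.T                       # y = A (x - m_x)
    X_approx = Y @ A + m                    # x ~ A_k^T y + m_x
    return Y, X_approx

# Example: K RGB pixel vectors as a K x 3 data matrix.
pixels = np.random.rand(1000, 3)
Y, X_approx = hotelling(pixels, k=2)
```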
25 Principal Components. This makes it possible to store the data more efficiently. For a 2D image, the Hotelling (principal component) transform is used to align an object or boundary along its eigenaxes. This removes rotational effects, and the eigenvalues can be used for size normalization. Translational effects are also removed, since the object is centered around its mean.
26 Principal Components: Example. [Figure shown on slide.]
More information