Active Appearances. Statistical Appearance Models
Active Appearances

The material following is based on:

- T.F. Cootes, G.J. Edwards, and C.J. Taylor, "Active Appearance Models," Proc. Fifth European Conf. Computer Vision, H. Burkhardt and B. Neumann, eds., vol. 2, pp. 484-498, 1998.
- T.F. Cootes, G.J. Edwards, and C.J. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681-685, June 2001.

The authors' focus was the development of a method for matching statistical models of appearance to [2D] images. The method was applied to faces and to 2D medical images, and the basic idea has since been extended to many applications in 2D & 3D medical imaging.

Statistical Appearance Models

A statistical appearance model combines:

- Shape: in this case, 2D locations of key feature points.
- Texture: i.e., patterns of intensities or colors across image patches.

Method to build: identify key points; do a deformable warp of the points to a common coordinate system; normalize intensities; read the intensities into an intensity vector $G = (G_1, \ldots, G_k, \ldots)^T$.
Statistical Appearance Models

How might we do this?

Deformable warping from point cloud matches

One answer might make use of what we learned in programming assignments. E.g., determine some nominal location for each landmark point (e.g., pick some reference image, average multiple samples, or do something else):

$$x_k^{(\mathrm{nom})} = \frac{1}{N} \sum_j x_k^{(j)}$$

Then fit Bernstein polynomials to determine the distortion:

$$x_k^{(\mathrm{nom})} = \sum_{s,t} c_{s,t}\, B_s(u_k)\, B_t(v_k)$$

Note: in this case, the coefficients will also parameterize the shape.
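As an illustration, here is a minimal numpy sketch of this recipe: average the landmark locations across samples to get nominal positions, then fit tensor-product Bernstein coefficients by least squares. The array sizes, degree, and random toy data are assumptions for illustration, not values from the slides.

```python
import numpy as np
from math import comb

def bernstein(s, n, u):
    """Bernstein basis polynomial B_{s,n}(u) on [0, 1]."""
    return comb(n, s) * u**s * (1.0 - u)**(n - s)

# Assumed toy data: N sample shapes, each with K 2D landmarks; (u_k, v_k)
# are the normalized coordinates used to evaluate the basis at landmark k.
N, K, deg = 10, 68, 3
rng = np.random.default_rng(0)
x = rng.random((N, K, 2))            # x[j, k] = landmark k in sample j
u, v = rng.random(K), rng.random(K)

# Nominal landmark locations: average over the samples.
x_nom = x.mean(axis=0)               # shape (K, 2)

# Design matrix A[k, (s,t)] = B_s(u_k) * B_t(v_k); solve A @ c ~= x_nom.
A = np.array([[bernstein(s, deg, u[k]) * bernstein(t, deg, v[k])
               for s in range(deg + 1) for t in range(deg + 1)]
              for k in range(K)])
c, *_ = np.linalg.lstsq(A, x_nom, rcond=None)  # c also parameterizes the shape
```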
Deformable warping from point cloud matches

Another answer might use something like thin plate splines (e.g., Bookstein):

$$\mathrm{TPS}(\vec{v}; \vec{a}, B, C, P) = \vec{a} + B\vec{v} + \sum_i \vec{c}_i\, U(\lVert \vec{v} - \vec{p}_i \rVert), \quad \text{where } U(r) = r^2 \log(r)$$

Thin plate splines are multidimensional analogues of 1-dimensional spline curves.

NOTE: One might also use other radial basis functions. For compact support, one example* could be

$$\Psi(r, \sigma) = \begin{cases} \left(1 - \dfrac{r}{\sigma}\right)^{\lfloor d/2 \rfloor + k + 1} & \text{if } 0 \le r \le \sigma \\ 0 & \text{otherwise} \end{cases}$$

* See: M. Fornefett, K. Rohr, and H.S. Stiehl, "Radial basis functions with compact support for elastic registration of medical images," Image and Vision Computing, vol. 19, no. 1-2, pp. 87-96, 2001.

Thin Plate Splines Digression

Some citations (from G. Donato and S. Belongie, Approximation Methods for Thin Plate Spline Mappings and Principal Warps, 2002).
M-dimensional Thin Plate Spline Summary

Given

$$\mathrm{TPS}(\vec{v}; \vec{a}, B, C, P) = \vec{a} + B\vec{v} + \sum_i \vec{c}_i\, U(\lVert \vec{v} - \vec{p}_i \rVert)$$

where $U(r) = r^2 \log r$ (or $U(r) = r^2 \log r^2$, which differs only by a factor of 2 absorbed into the $\vec{c}_i$), and

$$\vec{v} = (v_1, \ldots, v_M)^T, \quad \vec{p}_i = (p_{i,1}, \ldots, p_{i,M})^T \quad (M = 2 \text{ for 2D}, \ M = 3 \text{ for 3D})$$
$$P = [\vec{p}_1, \ldots, \vec{p}_N], \quad C = [\vec{c}_1, \ldots, \vec{c}_N], \quad B = [\vec{b}_1, \ldots, \vec{b}_M]$$

M-dimensional Thin Plate Spline Fitting

Given $V = [\vec{v}_1, \ldots, \vec{v}_N]$ and $F = [\vec{f}_1, \ldots, \vec{f}_N]$, find $\vec{a}, B, C$ such that $\vec{f}_i = \mathrm{TPS}(\vec{v}_i; \vec{a}, B, C, V)$.

To do this, solve the linear system

$$\begin{bmatrix} K_{[N \times N]} & 1_{[N \times 1]} & V^T \\ 1_{[1 \times N]} & 0 & 0 \\ V & 0 & 0_{[M \times M]} \end{bmatrix} \begin{bmatrix} C^T \\ \vec{a}^T \\ B^T \end{bmatrix} = \begin{bmatrix} F^T \\ 0 \\ 0_{[M \times M]} \end{bmatrix}$$

where

$$K_{i,j} = K_{j,i} = U(\lVert \vec{v}_i - \vec{v}_j \rVert) = \big((\vec{v}_i - \vec{v}_j) \cdot (\vec{v}_i - \vec{v}_j)\big) \log\big((\vec{v}_i - \vec{v}_j) \cdot (\vec{v}_i - \vec{v}_j)\big)$$

with $U(r) = r^2 \log r$ or $U(r) = r^2 \log r^2$.
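A minimal numpy sketch of this solve, using the $U(r) = r^2 \log r^2$ convention; the function name and interface are mine, not from the slides.

```python
import numpy as np

def fit_tps(V, F):
    """Solve the TPS interpolation system for M-dimensional points.

    V : (N, M) source points v_i (rows);  F : (N, M) targets f_i.
    Returns (a, B, C) such that f_i = a + B @ v_i + sum_j C[j] * U(||v_i - v_j||).
    A sketch of the linear system on the slide, not the authors' code.
    """
    N, M = V.shape
    r2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)     # squared distances
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(r2 > 0, r2 * np.log(r2), 0.0)          # U(r) = r^2 log r^2
    ones = np.ones((N, 1))
    L = np.block([[K,      ones,             V],
                  [ones.T, np.zeros((1, 1)), np.zeros((1, M))],
                  [V.T,    np.zeros((M, 1)), np.zeros((M, M))]])
    rhs = np.vstack([F, np.zeros((M + 1, M))])
    sol = np.linalg.solve(L, rhs)
    C, a, B = sol[:N], sol[N], sol[N + 1:].T
    return a, B, C
```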
TPS 2D case

Given a set of points $\vec{p}_i = (x_i, y_i)$ and corresponding points $\vec{p}_i^{\,*} = (x_i^*, y_i^*)$, we want to find TPS parameters such that $\vec{p}_i^{\,*} = \mathrm{TPS}(\vec{p}_i; a, \vec{b}, C, P)$. To do this, we solve the least squares problem

$$\begin{bmatrix}
0 & \cdots & U_{1,k} & \cdots & U_{1,N} & 1 & x_1 & y_1 \\
\vdots & \ddots & & & \vdots & \vdots & \vdots & \vdots \\
U_{k,1} & \cdots & 0 & \cdots & U_{k,N} & 1 & x_k & y_k \\
\vdots & & & \ddots & \vdots & \vdots & \vdots & \vdots \\
U_{N,1} & \cdots & U_{N,k} & \cdots & 0 & 1 & x_N & y_N \\
1 & \cdots & 1 & \cdots & 1 & 0 & 0 & 0 \\
x_1 & \cdots & x_k & \cdots & x_N & 0 & 0 & 0 \\
y_1 & \cdots & y_k & \cdots & y_N & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} \vec{c}_1 \\ \vdots \\ \vec{c}_N \\ a \\ b_x \\ b_y \end{bmatrix}
=
\begin{bmatrix} \vec{p}_1^{\,*} \\ \vdots \\ \vec{p}_N^{\,*} \\ 0 \\ 0 \\ 0 \end{bmatrix}$$

where $U_{i,j} = U_{j,i} = U(\lVert \vec{p}_i - \vec{p}_j \rVert)$.

M-dimensional Thin Plate Spline Fitting

Define

$$L_{[M+N+1 \times M+N+1]} = \begin{bmatrix} K_{[N \times N]} & 1_{[N \times 1]} & V^T \\ 1_{[1 \times N]} & 0 & 0 \\ V & 0 & 0_{[M \times M]} \end{bmatrix}$$

If there are many points, this matrix may be expensive to invert or even pseudo-invert. There are various methods to deal with this problem (see Donato and Belongie, 2002). These include:

- Use a random sample of the $\vec{v}_i$ to approximate the solution.
- Use a random sample of the basis functions and all of the data to solve the problem in a least squares sense (sketched below).
- Use matrix approximation methods.
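Here is a rough numpy sketch of the second option: keep a random subset of the basis-function centers but fit all N correspondences in a least-squares sense. The interface and names are assumptions, not code from Donato and Belongie.

```python
import numpy as np

def fit_tps_subset(V, F, n_c, seed=0):
    """Approximate TPS fit with a random subset of n_c centers.

    V : (N, M) source points; F : (N, M) targets. Returns the chosen center
    indices and the stacked least-squares coefficients (C, a, B).
    """
    N, M = V.shape
    idx = np.random.default_rng(seed).choice(N, size=n_c, replace=False)
    r2 = ((V[:, None, :] - V[None, idx, :]) ** 2).sum(-1)   # (N, n_c)
    with np.errstate(divide="ignore", invalid="ignore"):
        U = np.where(r2 > 0, r2 * np.log(r2), 0.0)          # U(r) = r^2 log r^2
    A = np.hstack([U, np.ones((N, 1)), V])                  # (N, n_c + 1 + M)
    coeffs, *_ = np.linalg.lstsq(A, F, rcond=None)          # all data, fewer bases
    return idx, coeffs
```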
Further Digression: Radial Basis Functions

Note that the function U(r) in the previous discussion is an example of a more general class of "radial basis functions". These functions can be used in deformable registration in much the same way as the TPS function used above. Other commonly used radial basis functions include

$$U(r) = (r^2 + c^2)^{\mu} \quad \text{for } \mu \in \mathbb{R}^+$$
$$U(r) = (r^2 + c^2)^{-\mu} \quad \text{for } \mu \in \mathbb{R}^+$$
$$U(r) = e^{-r^2 / 2\sigma^2}$$

The last one (the Gaussian) is probably the most popular.

Appearance models, cont'd

An appearance model is defined by an instance parameter vector $\vec{\lambda}$, a mean shape and texture $X^{(\mathrm{avg})}$ and $G^{(\mathrm{avg})}$, and variation mode matrices $M_X$ and $M_G$. Thus, an instance $(j)$ would be

$$X^{(j)} = X^{(\mathrm{avg})} + M_X \vec{\lambda}^{(j)} = X^{(\mathrm{avg})} + \sum_{k=1}^{N_X} M_X^{(k)} \lambda_{X_k}^{(j)}$$
$$G^{(j)} = G^{(\mathrm{avg})} + M_G \vec{\lambda}^{(j)} = G^{(\mathrm{avg})} + \sum_{k=1}^{N_G} M_G^{(k)} \lambda_{G_k}^{(j)}$$

In fact, they created a multi-resolution hierarchy with models similar to the above at different resolutions, and used PCA to determine the statistical parameters.
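As a toy illustration of these instance equations (the array sizes and random modes are assumptions for illustration, not the paper's trained model):

```python
import numpy as np

# Assumed dimensions: N_X shape modes, N_G texture modes, 68 2D landmarks,
# and 10000 sampled pixels; random matrices stand in for trained modes.
N_X, N_G, n_pts, n_pix = 20, 30, 68, 10000
rng = np.random.default_rng(1)
X_avg, G_avg = rng.random(2 * n_pts), rng.random(n_pix)
M_X = rng.random((2 * n_pts, N_X))    # shape variation modes (columns)
M_G = rng.random((n_pix, N_G))        # texture variation modes (columns)

lam_X, lam_G = rng.standard_normal(N_X), rng.standard_normal(N_G)
X_j = X_avg + M_X @ lam_X             # instance shape X^(j)
G_j = G_avg + M_G @ lam_G             # instance texture G^(j)
```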
Digression: PCA

Suppose that you have a set of N vectors $\vec{a}_i$ in an M-dimensional space. Is there a natural "coordinate system" for these vectors?

We proceed as follows:

$$\vec{a}^{(\mathrm{avg})} = \frac{1}{N} \sum_i \vec{a}_i; \quad \vec{b}_i = \vec{a}_i - \vec{a}^{(\mathrm{avg})}; \quad B = [\vec{b}_1, \ldots, \vec{b}_N]$$

Then form the singular value decomposition

$$B = U \Sigma V^T = U \begin{bmatrix} \Sigma^{(N)} \\ 0 \end{bmatrix} V^T \quad \text{where } \Sigma^{(N)} = \mathrm{diag}(\sigma_1, \ldots, \sigma_N)$$

Then we note that the scatter matrix satisfies $B B^T = U \Sigma \Sigma^T U^T$. Of course U is huge, but we have the following useful fact:

$$B = [\vec{u}_1, \ldots, \vec{u}_N, \vec{u}_{N+1}, \ldots, \vec{u}_M] \begin{bmatrix} \Sigma^{(N)} \\ 0 \end{bmatrix} V^T = [\vec{u}_1, \ldots, \vec{u}_N]\, \Sigma^{(N)} V^T = U^{(N)} \Sigma^{(N)} V^T$$
Digression: PCA

Since $B = U^{(N)} \Sigma^{(N)} V^T$, any column $\vec{b}_k$ of B may be expressed as a linear combination of the first N columns of U:

$$\vec{b}_k = \lambda_1^{(k)} \vec{u}_1 + \cdots + \lambda_N^{(k)} \vec{u}_N = U^{(N)} \Lambda^{(k)}, \quad \text{where } \Lambda^{(k)} = \mathrm{transpose}(U^{(N)})\, \vec{b}_k$$

So

$$\vec{a}_k = \vec{a}^{(\mathrm{avg})} + \vec{b}_k = \vec{a}^{(\mathrm{avg})} + \lambda_1^{(k)} \vec{u}_1 + \cdots + \lambda_N^{(k)} \vec{u}_N$$

But often the last few values of the $\lambda^{(k)}$ are small. If we ignore all but the first D values, we have

$$\vec{a}_k \approx \vec{a}^{(\mathrm{avg})} + \lambda_1^{(k)} \vec{u}_1 + \cdots + \lambda_D^{(k)} \vec{u}_D$$

Digression: PCA

Suppose now that we have an arbitrary $\vec{a}^{(\mathrm{arb})}$. We can approximate it as follows:

$$\vec{b}^{(\mathrm{arb})} = \vec{a}^{(\mathrm{arb})} - \vec{a}^{(\mathrm{avg})}; \quad \Lambda^{(\mathrm{arb})} = \mathrm{transpose}(U^{(D)})\, \vec{b}^{(\mathrm{arb})}$$
$$\vec{a}^{(\mathrm{arb})} \approx \vec{a}^{(\mathrm{avg})} + \lambda_1^{(\mathrm{arb})} \vec{u}_1 + \cdots + \lambda_D^{(\mathrm{arb})} \vec{u}_D \equiv \vec{a}^{(\mathrm{approx})}$$
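A compact numpy sketch of this PCA recipe; the function names and interfaces are mine, not from the slides.

```python
import numpy as np

def pca_fit(A, D):
    """PCA via SVD, following the slides: A is (M, N) with one data vector
    per column. Returns the mean, the first D modes U_D, and the
    per-sample coefficients Lambda."""
    a_avg = A.mean(axis=1, keepdims=True)
    B = A - a_avg                                      # centered data
    U, S, Vt = np.linalg.svd(B, full_matrices=False)   # B = U_(N) Sigma_(N) V^T
    U_D = U[:, :D]                                     # keep D dominant modes
    Lam = U_D.T @ B                                    # coefficients per sample
    return a_avg, U_D, Lam

def pca_project(a_arb, a_avg, U_D):
    """Approximate an arbitrary vector in the D-dimensional mode space."""
    lam = U_D.T @ (a_arb - a_avg.ravel())
    return a_avg.ravel() + U_D @ lam, lam              # a^(approx), Lambda^(arb)
```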
Training Set for 2001 paper

400 faces, 68 points, intensity values.

Complication

How do you do PCA if shape and intensity may covary? Answer: form a combined vector of shape and intensity variation

$$Y = \begin{bmatrix} W_X \left( X - X^{(\mathrm{avg})} \right) \\ G - G^{(\mathrm{avg})} \end{bmatrix}$$

where $W_X$ is a diagonal matrix of weights. Then do PCA on Y.
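A small numpy sketch of forming Y; the weight vector w (standing in for the diagonal of $W_X$) is an assumed input, and the pca_fit sketch above could then be run on Y.

```python
import numpy as np

def combined_vectors(X, G, w):
    """X: (n_shape, N) shape samples; G: (n_tex, N) texture samples;
    w: (n_shape,) diagonal of W_X. Returns Y = [W_X (X - X_avg); G - G_avg]."""
    Yx = w[:, None] * (X - X.mean(axis=1, keepdims=True))  # weighted shape part
    Yg = G - G.mean(axis=1, keepdims=True)                 # texture part
    return np.vstack([Yx, Yg])                             # stacked combined vector
```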
Further complication

How do you find the right weights to use? Answer (from Cootes et al. 1998): do PCA first on shape only and determine an appropriate $V_X$. Then find an optimal $\vec{\lambda}^{(j)}$ for each training sample $(j)$. Then vary the values

$$\vec{\lambda}^{(j,k)} = \vec{\lambda}^{(j)} + \alpha \vec{e}_k$$

to create new shape models $X^{(j,k)}$ and determine the corresponding texture vectors $G^{(j,k)}$. Then the weight is

$$w_k = \frac{1}{N} \sum_j \left\lVert G^{(j,k)} - G^{(j)} \right\rVert_2 / \alpha$$

Face modes

[Figure: shape and intensity modes of the face model. Source: Cootes et al.]
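A schematic numpy version of this weight estimate; `sample_texture` is a hypothetical stand-in for the warp-and-resample step that maps shape coefficients to a texture vector, which the slides do not spell out.

```python
import numpy as np

def shape_weight(k, lam, alpha, sample_texture):
    """lam: list of per-sample shape coefficient vectors lambda^(j)."""
    N = len(lam)
    acc = 0.0
    for j in range(N):
        lam_jk = np.asarray(lam[j], dtype=float).copy()
        lam_jk[k] += alpha                    # lambda^(j,k) = lambda^(j) + alpha e_k
        G_jk = sample_texture(lam_jk)         # texture after perturbing mode k
        G_j = sample_texture(lam[j])          # texture at the optimum
        acc += np.linalg.norm(G_jk - G_j) / alpha
    return acc / N                            # w_k
```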
Face modes

[Figure: combined modes of the face model. Source: Cootes et al.]

Basic Algorithm

- Make an initial guess at the model weights.
- Create a model from the weights.
- Evaluate the error.
- Iteratively improve (see the sketch below).
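A schematic of this loop; `model_image`, `sample_image`, and `update_step` are hypothetical stand-ins for rendering the model, sampling the image under the current parameters, and computing a parameter update (Cootes et al. use a learned linear update, which is not reproduced here).

```python
import numpy as np

def aam_search(c0, model_image, sample_image, update_step, n_iters=30):
    """Iteratively refine model weights c to match the model to the image."""
    c = np.asarray(c0, dtype=float).copy()    # initial guess at model weights
    for _ in range(n_iters):
        residual = sample_image(c) - model_image(c)   # texture difference
        err = np.sum(residual ** 2)           # error of the current model
        dc = update_step(residual)            # proposed parameter update
        new_c = c + dc
        new_err = np.sum((sample_image(new_c) - model_image(new_c)) ** 2)
        if new_err < err:
            c = new_c                         # accept the improving step
        else:
            break                             # no improvement: stop (or damp step)
    return c
```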
Basic Iteration of the Method

[Figures: one iteration of the model-matching search. Source: Cootes et al.]

Note: the error here is a simple sum of differences. What are some alternatives?
Results

[Figures: face matching results. Source: Cootes et al.]
Results: Knee Example

Trained on 30 knee MRI images with 42 landmark points.

[Figures: knee model matching results. Source: Cootes et al.]