Functional Maps (12.06.2014) Dr. Emanuele Rodolà rodola@in.tum.de Room 02.09.058, Informatik IX

Seminar «LP relaxation for elastic shape matching» Fabian Stark Wednesday, June 18th 14:00 Room 02.09.023

Seminar «Sparse modeling of intrinsic correspondences» Tobias Gurdan Wednesday, June 18th 14:45 Room 02.09.023

Wrap-up We introduced the Laplace-Beltrami operator for regular surfaces and noted that it is an isometry-invariant quantity of the shape. In particular, we saw how its eigenfunctions provide a basis for square-integrable functions defined on the surface.

Wrap-up Based on the intrinsic properties of this operator, we considered an isometry-invariant Euclidean embedding of the surface, namely the spectral embedding $x \mapsto \left( \phi_1(x)/\sqrt{\lambda_1},\ \phi_2(x)/\sqrt{\lambda_2},\ \dots \right)$, which is moreover scale-invariant!

Wrap-up We studied heat diffusion on surfaces, modeled according to the heat equation $\frac{\partial u(x,t)}{\partial t} = -\Delta_M u(x,t)$. A solution to the heat equation is given by $u(x,t) = \int_M k_t(x,y)\, u_0(y)\, dy$, where $k_t(x,y)$ is the heat kernel.

Wrap-up Interesting properties of the heat kernel include: Informative property: a surjective map $T: M \to N$ is an isometry iff $k_t(x,y) = k_t(T(x), T(y))$ for all $x, y \in M$ and all $t > 0$. Restricting our attention to the diagonal $k_t(x,x)$ still gives us a similar property under mild conditions (non-repeating eigenvalues). We then defined the heat kernel signature as the time-dependent function $\mathrm{HKS}(x,t) = k_t(x,x)$.

Wrap-up The heat kernel signature can be expressed in terms of the eigenfunctions of the Laplace-Beltrami operator as: $\mathrm{HKS}(x,t) = k_t(x,x) = \sum_{i \geq 0} e^{-\lambda_i t}\, \phi_i(x)^2$.
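To make the formulas above concrete, here is a minimal numerical sketch (not part of the original slides) of the truncated heat kernel and HKS sums; `evals` and `evecs` are hypothetical names for precomputed Laplace-Beltrami eigenvalues and eigenfunctions (one eigenfunction per column):

```python
import numpy as np

def heat_kernel(evals, evecs, t, i, j):
    """Truncated heat kernel k_t(x_i, x_j) = sum_k exp(-lambda_k * t) * phi_k(x_i) * phi_k(x_j)."""
    return np.sum(np.exp(-evals * t) * evecs[i, :] * evecs[j, :])

def hks(evals, evecs, ts):
    """Heat kernel signature HKS(x, t) = k_t(x, x), for all vertices and all times in ts."""
    # (n, m) squared eigenfunctions times (m, T) exponential decay -> (n, T)
    return (evecs ** 2) @ np.exp(-np.outer(evals, ts))
```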

Representing correspondences We have already seen that a correspondence between shapes X and Y can be represented by a matrix $R \in \{0,1\}^{n \times n}$, with $R_{ij} = 1$ iff $x_i \in X$ corresponds to $y_j \in Y$. For example, for $n = 5$:
$$R = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}$$
This does not scale well with the size of the shapes. Asking for a bijection corresponds to requiring R to be a permutation matrix. In other words, we are optimizing over all permutations of $\{1, \dots, n\}$, e.g. $(1\ 2\ 3\ 4\ 5) \mapsto (3\ 5\ 2\ 1\ 4)$.
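As a quick illustration (a sketch, not part of the original slides), the permutation-matrix representation and its memory cost take only a few lines of NumPy; the correspondence array `pi` is a made-up example:

```python
import numpy as np

# Hypothetical correspondence on 5 points: x_i on X is matched to y_{pi[i]} on Y
# (0-based version of the permutation (3 5 2 1 4)).
pi = np.array([2, 4, 1, 0, 3])
n = len(pi)

# Permutation-matrix representation: R[i, j] = 1 iff x_i corresponds to y_j.
R = np.zeros((n, n), dtype=np.uint8)
R[np.arange(n), pi] = 1

print(R)
# For n vertices this takes O(n^2) memory -- already ~10^8 entries for n = 10^4.
```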

Representing correspondences We defined the matching problem via the distance
$$d_P(X, Y) = \tfrac{1}{2} \min_{P \in \Pi_n} \max_{1 \le i, j \le n} \left| d_X(x_i, x_j) - d_Y(y_{P(i)}, y_{P(j)}) \right|$$
and the cost matrix $C \in \mathbb{R}^{n^2 \times n^2}$ such that
$$C_{(i\ell)(jm)} = \left| d_X(x_i, x_j) - d_Y(y_\ell, y_m) \right|,$$
with rows and columns indexed by pairs $(x_1, y_1), (x_1, y_2), (x_1, y_3), \dots$, e.g.
$$\begin{pmatrix} 0 & 13.5 & 23.4 & 104.6 & 7.64 \\ 13.5 & 0 & 13.52 & 11.2 & 71.1 \\ 23.4 & 13.52 & 0 & 0.22 & 23.44 \\ 104.6 & 11.2 & 0.22 & 0 & 16.5 \\ 7.64 & 71.1 & 23.44 & 16.5 & 0 \end{pmatrix}$$
Quite big! Can we do better than this?
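A small sketch (an illustration, not from the slides) of how this $n^2 \times n^2$ cost matrix would be assembled, assuming precomputed pairwise distance matrices `dX` and `dY` (hypothetical inputs), makes the quadratic blow-up evident:

```python
import numpy as np

def gh_cost_matrix(dX, dY):
    """C[(i*n + l), (j*n + m)] = |dX[i, j] - dY[l, m]|, an n^2 x n^2 matrix.

    dX, dY : (n, n) pairwise (geodesic) distance matrices, assumed precomputed.
    """
    n = dX.shape[0]
    # Broadcast to shape (n, n, n, n) indexed by (i, l, j, m), then flatten the pairs.
    C = np.abs(dX[:, None, :, None] - dY[None, :, None, :])
    return C.reshape(n * n, n * n)

# Even for tiny shapes the matrix explodes: n = 100 already gives 10^8 entries.
```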

A map between functions Let $T: M \to N$ be a bijection between two regular surfaces M and N. Given a scalar function $f: M \to \mathbb{R}$ on shape M, we can induce a function $g: N \to \mathbb{R}$ on the other shape by composition: $g = f \circ T^{-1}$. We can denote this transformation by a functional $T_F$, such that $T_F(f) = f \circ T^{-1}$. We call $T_F$ the functional representation of T.
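On discretized shapes, where a bijection is just a vertex-level permutation, composing with $T^{-1}$ amounts to re-indexing. A minimal sketch (the array `T` below is a made-up example, not data from the slides):

```python
import numpy as np

# Hypothetical vertex-level bijection: vertex i of M corresponds to vertex T[i] of N.
T = np.array([3, 0, 4, 1, 2])
T_inv = np.argsort(T)                       # inverse permutation, T_inv[T[i]] == i

f = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # a scalar function sampled at M's vertices
g = f[T_inv]                                # induced function on N: g(y) = f(T^{-1}(y))

assert np.allclose(g[T], f)                 # g(T(x)) = f(x) for every vertex x of M
```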

Functional maps are linear Note that $T_F$ is a linear map between function spaces: $T_F(\alpha_1 f_1 + \alpha_2 f_2) = (\alpha_1 f_1 + \alpha_2 f_2) \circ T^{-1} = \alpha_1 f_1 \circ T^{-1} + \alpha_2 f_2 \circ T^{-1} = \alpha_1 T_F(f_1) + \alpha_2 T_F(f_2)$, by linearity of the composition operator. This means that we can give a matrix representation for it, after we choose a basis for the two function spaces on M and N. The key observation here is that, while T can in general be a very complex transformation between the two shapes, $T_F$ always acts linearly. It remains to see whether knowledge of T is equivalent to knowledge of $T_F$, and vice versa.

Recovering T from $T_F$ If we know T, we can obviously construct $T_F$ by its definition. Let's see if we can also do the contrary. That is, assume we know how to map functions to functions, and we want to be able to map points to points. We first construct an indicator function $f$ on the first shape, such that $f(a) = 1$ and $f(x) = 0$ for $x \neq a$. Then if we call $g = T_F(f)$, it must be $g(y) = 1$ whenever $y = T(a)$, and $g(y) = 0$ otherwise. Since T is a bijection, this happens only once, and T(a) is the unique point such that $g(T(a)) = 1$. In other words, we have reconstructed T from $T_F$. At this point it is still a bit unclear why we introduced functional mappings in the first place. Let's see!
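A tiny self-contained sketch of this argument, pushing indicator functions through $T_F$ to recover the map (the toy bijection and all names are made up for illustration):

```python
import numpy as np

# Toy bijection and its functional representation (same conventions as the sketch above).
T_true = np.array([3, 0, 4, 1, 2])
TF = lambda f: f[np.argsort(T_true)]        # T_F(f) = f o T^{-1}

n = len(T_true)
T_rec = np.empty(n, dtype=int)
for a in range(n):
    f = np.zeros(n)
    f[a] = 1.0                              # indicator function of vertex a on M
    g = TF(f)                               # mapped function on N
    T_rec[a] = np.argmax(g)                 # the unique vertex where g equals 1, i.e. T(a)

assert np.array_equal(T_rec, T_true)
```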

Matrix notation (1/2) Let $\{\phi_i\}$ be a basis for functions f on M, so that $f = \sum_i a_i \phi_i$. Then we can write: $T_F(f) = T_F\!\left( \sum_i a_i \phi_i \right) = \sum_i a_i\, T_F(\phi_i)$. Similarly, if $\{\psi_j\}$ is a basis for functions on N, we can write: $T_F(\phi_i) = \sum_j c_{ji} \psi_j$. Putting the two equations together, we get: $T_F(f) = \sum_i a_i \sum_j c_{ji} \psi_j = \sum_j \left( \sum_i c_{ji} a_i \right) \psi_j$.

Matrix notation (2/2) So we can represent each function f on M by its coefficients $\mathbf{a} = (a_1, a_2, \dots)$, and similarly a function g on N by its coefficients $\mathbf{b} = (b_1, b_2, \dots)$. If the basis for functions f on M is orthonormal with respect to some inner product $\langle \cdot, \cdot \rangle_M$, then we can simply write $a_i = \langle f, \phi_i \rangle_M$. Similarly on N, we can write $b_j = \langle g, \psi_j \rangle_N$. Rewriting in matrix notation the equations above, we have: $\mathbf{b} = C \mathbf{a}$, where $C = (c_{ji})$ is the matrix representation of the functional map $T_F$.

Choice of a basis Up until now we have been assuming the presence of a basis for functions defined on the two shapes. The first possibility is to consider the standard basis on each shape, that is the set of indicator functions defined at each vertex: $\phi_i(x) = 1$ if x is the i-th vertex, and 0 otherwise. With this choice, $c_{ji} = \langle T_F(\phi_i), \psi_j \rangle$; the two terms in the inner product are indicator functions, so C is exactly a permutation matrix.

Choice of a basis We already know another possibility! The eigenfunctions of the Laplace-Beltrami operator form an orthogonal basis (w.r.t. the S-weighted inner product $\langle f, g \rangle_S = f^\top S g$, where S is the mass matrix) for functions on each shape. In particular, we have seen that we can approximate: $f \approx \sum_{i=1}^{m} \langle f, \phi_i \rangle_S\, \phi_i$.
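In practice, the truncated eigenbasis can be obtained from the discrete operators by solving the generalized eigenproblem $W\phi = \lambda S \phi$. A sketch using SciPy, assuming a precomputed cotangent stiffness matrix `W` and mass matrix `S` (both hypothetical inputs of this sketch):

```python
import scipy.sparse.linalg as sla

def lb_eigenbasis(W, S, m=100):
    """First m Laplace-Beltrami eigenpairs from the generalized problem W phi = lambda S phi.

    W : (n, n) sparse stiffness (cotangent) matrix, assumed precomputed
    S : (n, n) sparse mass matrix, assumed precomputed
    Returns eigenvalues (m,) and eigenfunctions (n, m); the latter satisfy Phi^T S Phi = I,
    i.e. they are orthonormal w.r.t. the S-weighted inner product above.
    """
    # Shift-invert around zero picks out the smallest eigenvalues.
    evals, evecs = sla.eigsh(W, k=m, M=S, sigma=-1e-8, which='LM')
    return evals, evecs
```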

Choice of a basis This means that we can also approximate: $T_F(f) \approx \sum_{j=1}^{m} \left( \sum_{i=1}^{m} c_{ji} a_i \right) \psi_j$. And then, going back to matrix notation, we find out that we are reducing the size of matrix C quite a lot. Matrix C, which represents our correspondence, is now an $m \times m$ matrix. Its size does not depend on the size of the shapes! Moreover, matrices associated to correct correspondences tend to be sparse. Typical values for m are 50 or 100.

Examples The matrix C fully encodes the original map T. Note that this is a linear mapping! Note also that not every linear map corresponds to a point-to-point correspondence!

From P to C Given a correspondence (bijection) $T: M \to N$ (in matrix notation, it can be written as a permutation matrix P), we can construct the associated functional map as follows: take the indicator function of each vertex $x_i$, expressed in the basis of M; its image under $T_F$ is the indicator function of $T(x_i)$ on the other shape, expressed in the basis of N. We know it must be $C \mathbf{a}_i = \mathbf{b}_i$, where $\mathbf{a}_i$ are the coefficients of the indicator function for vertex $x_i$ and $\mathbf{b}_i$ those of the indicator function for vertex $T(x_i)$.

Projecting onto the eigenbasis Given a correspondence, we now know how to construct a functional map out of it. More than that, we have a way to approximate the correspondence and thus reduce the space taken by the representation. This is simply done by choosing the number m of eigenfunctions to use when performing the projection, that is $f \approx \sum_{i=1}^{m} \langle f, \phi_i \rangle_S\, \phi_i$. Note that in general we cannot say $a_i = \phi_i^\top f$, because our eigenbasis is not orthogonal with respect to the standard inner product. Indeed, our basis is orthogonal w.r.t. the mass-weighted inner product $\langle f, g \rangle_S = f^\top S g$. Since $\Phi^\top S \Phi = I$, we simply have $a_i = \phi_i^\top S f$, i.e. $\mathbf{a} = \Phi^\top S f$.
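Putting the last two slides together, a possible sketch of building C from a ground-truth vertex-level map; the map direction and variable names are assumptions of this sketch, not prescribed by the slides:

```python
import numpy as np

def functional_map_from_p2p(T, Phi_M, Phi_N, S_N):
    """Functional map C (m x m) from a vertex-level bijection T, with T[i] the vertex of N
    matching vertex i of M.  Convention (an assumption): C maps spectral coefficients on M
    to spectral coefficients on N.

    Phi_M : (nM, m) truncated eigenbasis on M
    Phi_N : (nN, m) truncated eigenbasis on N
    S_N   : (nN, nN) mass matrix on N
    """
    # Mapped basis functions: (T_F phi_i)(y) = phi_i(T^{-1}(y)), sampled on N's vertices.
    mapped = np.zeros((Phi_N.shape[0], Phi_M.shape[1]))
    mapped[T, :] = Phi_M
    # Project onto N's eigenbasis using the mass-weighted inner product.
    return Phi_N.T @ (S_N @ mapped)
```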

Function transfer Functional maps provide us with a compact way to transfer functions between surfaces. A simple example is segmentation transfer:
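A sketch of the transfer itself, assuming the bases, mass matrix and functional map from the previous sketches: project on M, multiply by C, reconstruct on N.

```python
import numpy as np

def transfer_function(f, C, Phi_M, Phi_N, S_M):
    """Transfer a scalar function f, sampled on M's vertices, to shape N via the map C."""
    a = Phi_M.T @ (S_M @ f)     # analysis: spectral coefficients of f on M
    b = C @ a                   # map the coefficients
    return Phi_N @ b            # synthesis: band-limited reconstruction on N
```

For segmentation transfer, one would typically transfer the indicator function of each segment in this way and assign every vertex of N to the segment with the largest transferred value.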

Cat algebra Since we are now dealing with matrices (i.e. linear transformations), we can use all the tools from linear algebra to deal with our functional maps (e.g. sum or composition). As a simple example, here we map the colors from a source shape (left) to a target shape (down), using a linear interpolation $\alpha\, C_{\text{direct}} + (1 - \alpha)\, C_{\text{sym}}$ of the direct and symmetric ground-truth maps.

Imposing linear constraints Interestingly, many common constraints that are used in shape matching problems also become linear in the functional map formulation. Descriptor preservation: if $f$ is a descriptor function on M and $g$ is the corresponding descriptor function on N, preservation means $T_F(f) = g$, i.e. $C\mathbf{a} = \mathbf{b}$ in spectral coefficients. If we are given a k-dimensional descriptor, we can just phrase k such equations, one for each dimension. For instance, consider curvature or other descriptors. Landmark matches: assume we know that $T(p) = q$. We can define two distance maps, $f(x) = d_M(x, p)$ on M and $g(y) = d_N(y, q)$ on N, and then preserve the two functions as in the previous case.
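A sketch of the landmark constraint, using graph (Dijkstra) distances along mesh edges as a cheap stand-in for geodesic distances; `adj_M` and `adj_N` are hypothetical edge-length-weighted adjacency matrices assumed to be precomputed:

```python
from scipy.sparse.csgraph import dijkstra

def landmark_distance_maps(adj_M, adj_N, p, q):
    """Distance maps f(x) = d_M(x, p) and g(y) = d_N(y, q) for a known landmark match T(p) = q.

    adj_M, adj_N : sparse, edge-length-weighted vertex adjacency matrices of the two meshes.
    Graph shortest-path distance stands in for the geodesic distance in this sketch.
    """
    f = dijkstra(adj_M, directed=False, indices=p)
    g = dijkstra(adj_N, directed=False, indices=q)
    return f, g
```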

Matching with functional maps The functional maps representation can be employed to determine a correspondence between two shapes. Using the ideas from the previous slide, we can just set up a linear system $C A = B$, where the columns of A and B hold the spectral coefficients of corresponding constraint functions on M and N. Depending on the number of constraints, the system may be under-determined, full rank, or over-determined. In the common case in which the number of constraint functions is larger than m, we can solve the resulting linear system in the least-squares sense: $C^* = \arg\min_C \| C A - B \|_F^2$.
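A sketch of the least-squares solve, given k corresponding constraint functions stacked as columns of F (on M) and G (on N); all names are assumptions of this sketch:

```python
import numpy as np

def solve_functional_map(F, G, Phi_M, Phi_N, S_M, S_N):
    """Least-squares functional map from k pairs of corresponding functions.

    F : (nM, k) constraint functions on M (descriptors, landmark distance maps, ...)
    G : (nN, k) their counterparts on N
    Solves min_C ||C A - B||_F, where A and B hold the spectral coefficients of F and G.
    """
    A = Phi_M.T @ (S_M @ F)                      # (m, k) coefficients on M
    B = Phi_N.T @ (S_N @ G)                      # (m, k) coefficients on N
    # C A = B  <=>  A^T C^T = B^T, solved column by column in the least-squares sense.
    C_T, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return C_T.T
```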

Convert C back to T Once we have found an optimal functional map C*, we may want to convert it back to a point-to-point correspondence. Simplest idea: Use C* to map indicator functions at each point. (this looks like the flag of Turkey) This is quite inefficient...

Convert C back to T Observe that the indicator (delta) function centered at the k-th vertex of M, when represented in the eigenbasis, has as coefficients the k-th column of the matrix $\Phi^\top S$, where k is the index of the point and S is the mass matrix of M. Thus, the images of all the delta functions can be computed simply as $C\, \Phi^\top S$. Let $g_1$ and $g_2$ be two functions on N, with spectral coefficients $\mathbf{b}_1$ and $\mathbf{b}_2$. By the Plancherel / Parseval theorem, $\| g_1 - g_2 \|_{L^2(N)} = \| \mathbf{b}_1 - \mathbf{b}_2 \|_2$. This means that for every point (i.e. column) of $C \Phi^\top S$ we can look for its nearest neighbor among the spectral coefficients of the delta functions on N. These are m-dimensional points. This process can be accelerated using efficient search structures (kd-trees)!
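A sketch of the nearest-neighbour conversion with a kd-tree, following the same conventions as the earlier sketches (mass-weighted delta coefficients; variable names are assumptions):

```python
from scipy.spatial import cKDTree

def pointwise_map_from_C(C, Phi_M, Phi_N, S_M, S_N):
    """Recover a vertex-level map M -> N from C by nearest-neighbour search in the
    spectral domain (Plancherel: function distances equal coefficient distances)."""
    mapped_deltas = (S_M @ Phi_M) @ C.T     # (nM, m): image of each delta function of M
    target_deltas = S_N @ Phi_N             # (nN, m): delta functions of N in N's eigenbasis
    tree = cKDTree(target_deltas)           # kd-tree accelerates the nearest-neighbour search
    _, T = tree.query(mapped_deltas)        # T[i] = vertex of N matched to vertex i of M
    return T
```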

Main issues The functional maps representation provides a very convenient framework for shape matching, but most of its power derives from the availability of an appropriate basis. Laplace-Beltrami eigenbasis: robust to nearly-isometric deformations only! Recent approaches get state-of-the-art results by explicitly requiring C to have a diagonal structure. Other approaches try to define an optimal basis given two shapes undergoing general deformations. Tobias will present one of these approaches on Wednesday, don't miss it!

Suggested reading Functional maps: A flexible representation of maps between surfaces. Ovsjanikov et al. SIGGRAPH 2012.