FFTs in Graphics and Vision. The Laplace Operator


Outline (Math Stuff): Symmetric/Hermitian Matrices; Lagrange Multipliers; Diagonalizing Symmetric Matrices; The Laplacian Operator

Linear Operators. Definition: Given a real inner product space (V, ⟨·,·⟩) and a linear operator L: V → V, the adjoint of L is the linear operator L* with the property that: ⟨v, Lw⟩ = ⟨L*v, w⟩ for all v, w ∈ V.

Linear Operators. Note: If V is the space of n-dimensional, real-valued arrays with the standard inner product: ⟨v, w⟩ = Σᵢ₌₁ⁿ vᵢwᵢ = vᵗw, then the adjoint of a matrix M is its transpose: ⟨v, Mw⟩ = vᵗMw = (Mᵗv)ᵗw = ⟨Mᵗv, w⟩.
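
This identity is easy to sanity-check numerically. A minimal NumPy sketch (not part of the original slides; the matrix and vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))     # an arbitrary (non-symmetric) matrix
v = rng.standard_normal(4)
w = rng.standard_normal(4)

lhs = v @ (M @ w)                   # <v, Mw>
rhs = (M.T @ v) @ w                 # <M^t v, w>
assert np.isclose(lhs, rhs)
```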

Linear Operators. Definition: A real linear operator L is self-adjoint if it is its own adjoint, i.e.: ⟨v, Lw⟩ = ⟨Lv, w⟩ for all v, w ∈ V.

Linear Operators. Note: If V is the space of n-dimensional, real-valued arrays with the standard inner product ⟨v, w⟩ = Σᵢ₌₁ⁿ vᵢwᵢ = vᵗw, then a matrix M is self-adjoint if and only if it is symmetric: M = Mᵗ.

Linear Operators. Definition: Given a complex inner product space (V, ⟨·,·⟩) and a linear operator L: V → V, the adjoint of L is the linear operator L* with the property that: ⟨v, Lw⟩ = ⟨L*v, w⟩ for all v, w ∈ V.

Linear Operators. Note: If V is the space of n-dimensional, complex-valued arrays with the standard inner product: ⟨v, w⟩ = Σᵢ₌₁ⁿ vᵢw̄ᵢ = vᵗw̄, then the adjoint of a matrix M is the complex conjugate of its transpose: ⟨v, Mw⟩ = vᵗM̄w̄ = (M̄ᵗv)ᵗw̄ = ⟨M̄ᵗv, w⟩.

Linear Operators. Definition: A complex linear operator L is self-adjoint if it is its own adjoint, i.e.: ⟨v, Lw⟩ = ⟨Lv, w⟩ for all v, w ∈ V.

Linear Operators. Note: If V is the space of n-dimensional, complex-valued arrays with the standard inner product ⟨v, w⟩ = Σᵢ₌₁ⁿ vᵢw̄ᵢ = vᵗw̄, then a matrix M is self-adjoint if and only if it is Hermitian: M = M̄ᵗ.
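
The same check works in the complex case, with the inner product ⟨v, w⟩ = Σ vᵢw̄ᵢ and the conjugate transpose playing the role of the adjoint. A minimal NumPy sketch (an illustration, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

inner = lambda a, b: a @ np.conj(b)   # <a, b> = sum_i a_i * conj(b_i)
M_adj = np.conj(M).T                  # conjugate transpose of M

assert np.isclose(inner(v, M @ w), inner(M_adj @ v, w))
```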


Implicit Surfaces. Definition: Given a function g(x, y, z), the implicit surface (or iso-surface) defined by g is the set of points in 3D satisfying the condition: g(x, y, z) = 0. Examples: the unit sphere, g(x, y, z) = x² + y² + z² − 1, and the torus, g(x, y, z) = (x² + y² + z² − (R² + r²))² − 4R²(r² − z²).

Lagrange Multipliers. Goal: Given an implicit surface defined by a function g(x, y, z) and given a function f(x, y, z), we want to find the extrema of f on the surface. This can be done by finding the points on the surface where the gradient of f is parallel to the surface normal.

Lagrange Multipliers. Since the implicit surface is the set of points with: g(x, y, z) = 0, the normal at a point on the surface is parallel to the gradient of g. Finding the extrema amounts to finding the points (x, y, z) such that: g(x, y, z) = 0 (the point is on the surface) and ∇f = λ∇g (the point is a local extremum).
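
As a concrete illustration (a hypothetical example, not from the slides), take f(x, y) = 2x² + y² on the unit circle g(x, y) = x² + y² − 1 = 0: the extrema sit exactly where ∇f and ∇g are parallel. A NumPy sketch that samples the circle densely:

```python
import numpy as np

# Extrema of f(x, y) = 2x^2 + y^2 on the circle g(x, y) = x^2 + y^2 - 1 = 0.
theta = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
x, y = np.cos(theta), np.sin(theta)
f = 2.0 * x**2 + y**2
i_max, i_min = int(np.argmax(f)), int(np.argmin(f))

def grads_parallel(px, py):
    """grad f = (4x, 2y), grad g = (2x, 2y): parallel iff the 2D cross product vanishes."""
    return abs(4.0 * px * 2.0 * py - 2.0 * py * 2.0 * px) < 1e-6

assert np.isclose(f[i_max], 2.0)   # maximum attained at (+-1, 0)
assert np.isclose(f[i_min], 1.0)   # minimum attained at (0, +-1)
assert grads_parallel(x[i_max], y[i_max]) and grads_parallel(x[i_min], y[i_min])
```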


Diagonalizing Symmetric Matrices. Claim: Given the space of n-dimensional, real-valued arrays and given a symmetric matrix M, the eigenvectors of M form an orthogonal basis.
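
The claim can be checked numerically: `np.linalg.eigh` returns an orthonormal eigenbasis for any symmetric matrix. A small sketch with a random symmetric matrix (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
M = A + A.T                        # a random symmetric matrix

lam, Q = np.linalg.eigh(M)         # columns of Q are eigenvectors of M
assert np.allclose(Q.T @ Q, np.eye(5), atol=1e-10)   # the eigenvectors are orthonormal
assert np.allclose(M @ Q, Q * lam, atol=1e-10)       # each column satisfies M q = lam q
```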

Diagonalizing Symmetric Matrices. The Eigenvectors Form an Orthogonal Basis: To show this we will show two things: 1. If v is an eigenvector, then the space of vectors orthogonal to v is mapped into itself by M. 2. At least one eigenvector exists.

Diagonalizing Symmetric Matrices. 1. If v is an eigenvector, then the space of vectors orthogonal to v is mapped into itself by M. Suppose that v is an eigenvector and w is some other vector that is orthogonal to v: ⟨v, w⟩ = 0. Since v is an eigenvector, this implies that: ⟨Mv, w⟩ = λ⟨v, w⟩ = 0. Since M is symmetric, we have: ⟨v, Mw⟩ = ⟨Mv, w⟩ = 0.

Diagonalizing Symmetric Matrices. 1. If v is an eigenvector, then the space of vectors orthogonal to v is mapped into itself by M. If W is the subspace of vectors orthogonal to v: W = {w ∈ V | ⟨w, v⟩ = 0}, then we have: ⟨v, Mw⟩ = 0 for all w ∈ W, so Mw ∈ W for all w ∈ W.

Diagonalizing Symmetric Matrices. 1. If v is an eigenvector, then the space of vectors perpendicular to v is mapped into itself by M. Implications: If we know that we can find one eigenvector v, we can consider the restriction of M to W. We know that: M(W) ⊆ W and ⟨Mu, w⟩ = ⟨u, Mw⟩ for all u, w ∈ W. So we can repeat the argument to find the next eigenvector.
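
A quick numerical check of step 1, using an arbitrary symmetric 2x2 matrix (an illustration, not from the slides):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # symmetric
lam, Q = np.linalg.eigh(M)
v = Q[:, 1]                             # an eigenvector of M
w = np.array([-v[1], v[0]])             # spans the orthogonal complement of v

assert np.isclose(v @ w, 0.0)           # w is orthogonal to v
assert np.isclose(v @ (M @ w), 0.0)     # Mw stays orthogonal to v
```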

Diagonalizing Symmetric Matrices. 2. At least one eigenvector must exist. We will show this using Lagrange multipliers. The implicit surface will be the unit sphere in Rⁿ: g(x₁, …, xₙ) = x₁² + ⋯ + xₙ² − 1, i.e. g(v) = ‖v‖² − 1. The function we optimize will be: f(v) = ⟨v, Mv⟩. Because the sphere is compact, an extremum must exist. [Figure: the unit circle g(x, y) = x² + y² − 1]

Diagonalizing Symmetric Matrices. 2. At least one eigenvector must exist. The normal at a point on the sphere is parallel to the gradient of g: ∇g(x₁, …, xₙ) = 2(x₁, …, xₙ), i.e. ∇g(v) = 2v.

Diagonalizing Symmetric Matrices. 2. At least one eigenvector must exist. Claim: The gradient of f is: ∇f(v) = 2Mv.

Diagonalizing Symmetric Matrices. 2. At least one eigenvector must exist. Proof: Let eᵢ be the vector with zeros everywhere but in the i-th entry: eᵢ = (0, …, 0, 1, 0, …, 0), with the 1 in the i-th entry.

Diagonalizing Symmetric Matrices. 2. At least one eigenvector must exist. Proof: Let eᵢ be the vector with zeros everywhere but in the i-th entry. Then the i-th coefficient of the gradient is: ∂f/∂xᵢ(v) = d/ds|ₛ₌₀ f(v + s eᵢ) = d/ds|ₛ₌₀ (v + s eᵢ)ᵗ M (v + s eᵢ) = d/ds|ₛ₌₀ [vᵗMv + s eᵢᵗMv + s vᵗMeᵢ + s² eᵢᵗMeᵢ]

Diagonalizing Symmetric Matrices. 2. At least one eigenvector must exist. Proof: Continuing, and using the symmetry M = Mᵗ: ∂f/∂xᵢ(v) = eᵢᵗMv + vᵗMeᵢ = ⟨eᵢ, Mv⟩ + ⟨Mᵗv, eᵢ⟩ = 2⟨eᵢ, Mv⟩ = 2(Mv)ᵢ

Diagonalizing Symmetric Matrices. 2. At least one eigenvector must exist. Proof: Since the i-th coefficient of the gradient of f at v is twice the i-th coefficient of Mv, we have: ∇f(v) = 2Mv.
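
The formula ∇f(v) = 2Mv can be verified against centered finite differences. A NumPy sketch (the matrix and test point are arbitrary, not from the slides):

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])          # symmetric
f = lambda v: v @ M @ v                  # f(v) = <v, Mv>
v = np.array([0.3, -1.2, 0.7])

eps = 1e-6
num_grad = np.array([(f(v + eps * e) - f(v - eps * e)) / (2.0 * eps)
                     for e in np.eye(3)])

assert np.allclose(num_grad, 2.0 * M @ v, atol=1e-5)
```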

Diagonalizing Symmetric Matrices. 2. At least one eigenvector must exist. We know that the normal of the point v on the unit sphere is parallel to the gradient, which is: ∇g(v) = 2v. And we know that the gradient of f is: ∇f(v) = 2Mv.

Diagonalizing Symmetric Matrices. 2. At least one eigenvector must exist. Since the function f must have a maximum on the sphere, we know that there exists v such that: λ∇g(v) = ∇f(v), i.e. λv = Mv. So at the maximum, we have our eigenvector (with eigenvalue λ).
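
Power iteration is a constructive counterpart to this argument: repeatedly applying M and renormalizing back onto the unit sphere converges to the eigenvector maximizing ⟨v, Mv⟩, assuming the dominant eigenvalue is simple (this method is a supplement, not mentioned in the slides):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # eigenvalues (5 +- sqrt(5)) / 2
v = np.array([1.0, 0.0])                # arbitrary starting guess
for _ in range(200):
    v = M @ v
    v /= np.linalg.norm(v)              # project back onto the unit sphere

lam = v @ M @ v                         # Rayleigh quotient f(v) = <v, Mv> at the maximizer
assert np.allclose(M @ v, lam * v, atol=1e-8)   # v is an eigenvector of M
```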


The Laplacian Operator. Recall: The Laplacian of a function f at a point measures how much the value of f at the point differs from the average value of its neighbors.

The Laplacian Operator. Recall: Formally, for a function in 2D, the Laplacian is the sum of the unmixed second partial derivatives: Δf(x, y) = ∂²f/∂x² + ∂²f/∂y².
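
This definition can be checked with the standard five-point finite-difference stencil; on a quadratic the stencil is exact up to rounding (a sketch, not from the slides):

```python
import numpy as np

f = lambda x, y: x**2 + 3.0 * y**2       # Laplacian is 2 + 6 = 8 everywhere

x0, y0, h = 0.4, -0.2, 1e-4
lap = (f(x0 + h, y0) + f(x0 - h, y0) +
       f(x0, y0 + h) + f(x0, y0 - h) - 4.0 * f(x0, y0)) / h**2

assert np.isclose(lap, 8.0, atol=1e-4)
```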

The Laplacian Operator. Observation 1: The Laplacian is a self-adjoint operator. To show this, we need to show that for any two functions f and g, we have: ⟨f, Δg⟩ = ⟨Δf, g⟩.

The Laplacian Operator. Observation 1: First, we show this in the 1D case, for functions f(θ) and g(θ) on the circle: ⟨f″, g⟩ = ⟨f, g″⟩. Writing the dot-product as an integral gives: ⟨f, g⟩ = ∫₀²π f(θ)g(θ) dθ.

The Laplacian Operator. Observation 1: Using the product rule for derivatives: (f′g)′ = f″g + f′g′, so: ∫₀²π (f′g)′(θ) dθ = ∫₀²π f″(θ)g(θ) dθ + ∫₀²π f′(θ)g′(θ) dθ. Since f and g are functions on a circle, their values at 0 and 2π are the same: ∫₀²π (f′g)′(θ) dθ = (f′g)(2π) − (f′g)(0) = 0.

The Laplacian Operator. Observation 1: Thus, we have: ∫₀²π f″(θ)g(θ) dθ = −∫₀²π f′(θ)g′(θ) dθ. Moving the derivative across a second time gives: ∫₀²π f″(θ)g(θ) dθ = −∫₀²π f′(θ)g′(θ) dθ = ∫₀²π f(θ)g″(θ) dθ, so: ⟨f″, g⟩ = ⟨f, g″⟩.
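
The self-adjointness survives discretization: the periodic central second difference is a symmetric operator, so the two discrete inner products agree. A NumPy sketch with arbitrary sample functions (not from the slides):

```python
import numpy as np

n = 4096
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
d = 2.0 * np.pi / n

f = np.cos(3.0 * theta) + 0.5 * np.sin(theta)
g = np.sin(2.0 * theta)

def second_deriv(u):
    """Periodic central second difference, approximating u''(theta)."""
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / d**2

lhs = np.sum(second_deriv(f) * g) * d    # discrete <f'', g>
rhs = np.sum(f * second_deriv(g)) * d    # discrete <f, g''>
assert np.isclose(lhs, rhs, atol=1e-6)
```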

The Laplacian Operator. Observation 1: To generalize this to higher dimensions, we write out the dot-product as: ⟨Δf, g⟩ = ∫₀²π ∫₀²π (∂²f/∂θ²) g dθ dφ + ∫₀²π ∫₀²π (∂²f/∂φ²) g dφ dθ = ∫₀²π ∫₀²π f (∂²g/∂θ²) dθ dφ + ∫₀²π ∫₀²π f (∂²g/∂φ²) dφ dθ = ⟨f, Δg⟩.

The Laplacian Operator. Observation 2: The Laplacian operator commutes with rotation, i.e. computing the Laplacian and then rotating gives the same function as first rotating and then computing the Laplacian: Δ(ρ_R f) = ρ_R(Δf).

The Laplacian Operator. Implications of Observation 1: Since the Laplacian operator is self-adjoint, it must be diagonalizable: there is an orthogonal basis of eigenvectors. If we group the eigenvectors with the same eigenvalue together, we get a set of vector spaces F_λ such that for any function f ∈ F_λ: Δf = λf.

The Laplacian Operator. Implications of Observation 2: Since the Laplacian operator commutes with rotation, rotations map vectors in F_λ back into F_λ: Δ(ρ_R f) = ρ_R(Δf) = ρ_R(λf) = λ(ρ_R f). The space F_λ is fixed under the action of rotation; that is, F_λ is a sub-representation of the group of rotations.

The Laplacian Operator. Going back to the problem of finding the irreducible representations, this means we can begin by looking for the eigenspaces of the Laplacian operator.

Computing the Laplacian. We know how to compute the Laplacian of a circular function represented by its parameterization: Δf(θ) = f″(θ). How do we compute the Laplacian for a function represented by restriction?

Computing the Laplacian. If we define a function on a circle as the restriction of a 2D function, the 2D Laplacian is not the same as the circular Laplacian! Example: Consider the function f(x, y) = x. In the plane, the Laplacian is: Δf(x, y) = 0. On the circle this is the function f(θ) = cos θ, with: Δf(θ) = −cos(θ).
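
The circular value Δf(θ) = −cos θ can be confirmed numerically by differentiating cos θ twice with a periodic finite-difference stencil (a sketch, not from the slides):

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
d = theta[1] - theta[0]
f = np.cos(theta)                        # f(x, y) = x restricted to the circle

# periodic central second difference, approximating f''(theta)
lap = (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / d**2
assert np.allclose(lap, -np.cos(theta), atol=1e-5)
```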

Computing the Laplacian. If we define a function on a circle as the restriction of a 2D function, the 2D Laplacian is not the same as the circular Laplacian! Intuitively: The Laplacian measures the difference between the value at a point and the average value of the neighbors. Who the neighbors are changes depending on whether we are considering the plane or the circle.

Computing the Laplacian. Recall: For a vector field: F(x, y) = (F₁(x, y), F₂(x, y)), the divergence is defined: div F = ∇·F = ∂F₁/∂x + ∂F₂/∂y. We can also express the Laplacian as the divergence of the gradient: Δf = ∇·(∇f).

Computing the Gradient. In general, the gradient of the function f(x, y) need not lie along the unit circle.

Computing the Gradient. In general, the gradient of the function f(x, y) need not lie along the unit circle. We can fix this by projecting the gradient onto the unit circle: π(∇f) = ∇f − ⟨∇f, (x, y)⟩(x, y).

Computing the Laplacian. The divergence of a vector field F can be expressed as the sum of partials: div F = ∇·F = ∂F₁/∂x + ∂F₂/∂y.

Computing the Laplacian. Given any orthonormal basis {v, w}, the divergence is the derivative of the v-component of the vector field in the v-direction, plus the derivative of the w-component of the vector field in the w-direction: div F = ∇·F = ∂⟨F, v⟩/∂v + ∂⟨F, w⟩/∂w.

Computing the Laplacian. Thus, to compute the divergence of the vector field along the circle, we can compute the 2D divergence and subtract off the contribution from the normal direction: div_circle F = div_2D F − ∂⟨F, n⟩/∂n. Since the component of F in the normal direction is a scalar function, its derivative in the normal direction can be expressed using the gradient: ∂⟨F, n⟩/∂n = ⟨∇⟨F, n⟩, n⟩.

Computing the Laplacian (f(x, y) = x). Example: Consider the function: f(x, y) = x.

Computing the Laplacian (f(x, y) = x). Example: Its gradient is: ∇f = (1, 0).

Computing the Laplacian (f(x, y) = x). Example: ∇f = (1, 0). Projecting the gradient onto the unit circle we get: π(∇f) = ∇f − ⟨∇f, n⟩n = ∇f − ⟨∇f, (x, y)⟩(x, y) = (1, 0) − x(x, y).

Computing the Laplacian (f(x, y) = x). Example: π(∇f) = (1, 0) − x(x, y) = (1 − x², −xy). The 2D divergence of the vector field π(∇f) is: div_2D π(∇f) = −2x − x = −3x.

Computing the Laplacian (f(x, y) = x). Example: π(∇f) = (1, 0) − x(x, y). Projecting π(∇f) onto the normal direction gives: ⟨π(∇f), n⟩ = ⟨(1, 0) − x(x, y), (x, y)⟩ = x − x(x² + y²) = x − x³ − xy².

Computing the Laplacian (f(x, y) = x). Example: ⟨π(∇f), n⟩ = x − x³ − xy². The gradient of the projection is: ∇⟨π(∇f), n⟩ = (1, 0) − (3x² + y², 2xy).

Computing the Laplacian (f(x, y) = x). Example: ∇⟨π(∇f), n⟩ = (1, 0) − (3x² + y², 2xy). So the divergence in the normal direction is: div_n π(∇f) = ⟨(1, 0) − (3x² + y², 2xy), (x, y)⟩ = x − 3x³ − xy² − 2xy² = x − 3x³ − 3xy².

Computing the Laplacian (f(x, y) = x). Example: div_2D π(∇f) = −3x and div_n π(∇f) = x − 3x³ − 3xy². Thus the circular Laplacian can be expressed as the difference between the 2D divergence and the divergence in the normal direction: Δ_circle f(x, y) = div_2D π(∇f) − div_n π(∇f) = −3x − (x − 3x³ − 3xy²) = −4x + 3x(x² + y²). Since points on the circle satisfy x² + y² = 1, this implies that for (x, y) on the circle: Δ_circle f(x, y) = −x = −f(x, y).
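
The whole computation can be replayed numerically: define the projected field π(∇f) = (1 − x², −xy) explicitly, form both divergences by centered differences, and check Δ_circle f = −f at points on the circle (a sketch, not from the slides):

```python
import numpy as np

# Projected gradient field pi(grad f) = (1 - x^2, -x y) for f(x, y) = x
P = lambda x, y: np.array([1.0 - x**2, -x * y])
h = 1e-5

def div2d(x, y):
    """2D divergence of P by centered differences."""
    return ((P(x + h, y)[0] - P(x - h, y)[0]) +
            (P(x, y + h)[1] - P(x, y - h)[1])) / (2.0 * h)

def divn(x, y):
    """Derivative of <P, n> in the normal direction n = (x, y)."""
    dot_n = lambda a, b: P(a, b) @ np.array([a, b])
    grad = np.array([(dot_n(x + h, y) - dot_n(x - h, y)) / (2.0 * h),
                     (dot_n(x, y + h) - dot_n(x, y - h)) / (2.0 * h)])
    return grad @ np.array([x, y])

for t in np.linspace(0.0, 2.0 * np.pi, 17):
    cx, cy = np.cos(t), np.sin(t)               # a point on the unit circle
    lap = div2d(cx, cy) - divn(cx, cy)
    assert np.isclose(lap, -cx, atol=1e-6)      # Delta_circle f = -f
```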

Computing the Laplacian (f(x, y) = x). Example: Thus, just as in the parameterized case, the function f is an eigenvector of the circular Laplacian operator, with eigenvalue −1.