FFTs in Graphics and Vision: Groups and Representations


Outline: Groups, Representations, Schur's Lemma, Correlation

Groups: A group is a set of elements $G$ with a binary operation (often denoted $\cdot$) such that for all $f, g, h \in G$ the following properties are satisfied:
Closure: $g \cdot h \in G$
Associativity: $(f \cdot g) \cdot h = f \cdot (g \cdot h)$
Identity: $\exists\, 1 \in G$ s.t. $1 \cdot g = g \cdot 1 = g$
Inverse: $\forall\, g \in G\ \exists\, g^{-1} \in G$ s.t. $g \cdot g^{-1} = g^{-1} \cdot g = 1$
If it is also true that $f \cdot g = g \cdot f$ for all $f, g \in G$, the group is called commutative, or abelian.
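As a sanity check, the axioms can be verified exhaustively for a small finite group. A minimal Python sketch (the function name and the choice of $\mathbb{Z}_{12}$ are my own, purely illustrative):

```python
# Minimal sketch: exhaustively verify the four group axioms for
# Z_n = {0, ..., n-1} under addition mod n. All names are illustrative.
from itertools import product

def is_group(elements, op, identity):
    elements = set(elements)
    closure = all(op(g, h) in elements
                  for g, h in product(elements, repeat=2))
    assoc = all(op(op(f, g), h) == op(f, op(g, h))
                for f, g, h in product(elements, repeat=3))
    ident = all(op(identity, g) == g == op(g, identity) for g in elements)
    inverses = all(any(op(g, h) == identity == op(h, g) for h in elements)
                   for g in elements)
    return closure and assoc and ident and inverses

n = 12
print(is_group(range(n), lambda a, b: (a + b) % n, 0))  # True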

Groups Examples: Under what binary operations are the following sets groups, what is the identity element, and what is the inverse? Integers? Positive real numbers? Points in $\mathbb{R}^2$ modulo $(2\pi, 2\pi)$? Vectors in a fixed vector space? Invertible linear transformations of a vector space?

Groups Examples: Are these groups commutative? Integers under addition? Positive real numbers under multiplication? Points in $\mathbb{R}^2$ modulo $(2\pi, 2\pi)$ under addition? Vectors under addition? Linear transformations under composition?

Representations: Often, we think of a group as a set of elements that act on some space. E.g.: invertible linear transformations act on vector spaces, 2D rotations act on 2D arrays, and 3D rotations act on 3D arrays. A representation is a way of formalizing this.

Representations: A representation of a group $G$ on a vector space $V$, denoted $(\rho, V)$, is a map $\rho$ that sends every element of $G$ to an invertible linear transformation on $V$, satisfying: $\rho(g \cdot h) = \rho(g)\,\rho(h)$ for all $g, h \in G$. Note: $\rho(1) = \mathrm{Id}$, since $\rho(g) = \rho(g \cdot 1) = \rho(g)\,\rho(1)$ and $\rho(g)$ is invertible; and $\rho(g^{-1}) = \rho(g)^{-1}$, since $\mathrm{Id} = \rho(1) = \rho(g \cdot g^{-1}) = \rho(g)\,\rho(g^{-1})$.

Representations: Analogy: linear maps are functions between vector spaces that preserve the vector-space structure, $L(a v_1 + b v_2) = a L(v_1) + b L(v_2)$; likewise, a representation is a map into the invertible linear transformations that preserves the group structure. For simplicity, we will write $\rho_g \equiv \rho(g)$.
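For a concrete instance of the definition, here is a sketch (my own example, not from the slides) representing the cyclic group $\mathbb{Z}_n$ on $\mathbb{R}^2$ by rotation matrices, with a numerical check of the homomorphism property:

```python
# Sketch (my own example, not from the slides): represent the cyclic
# group Z_n on R^2 by sending k to rotation by 2*pi*k/n, then check the
# homomorphism property rho(g + h) = rho(g) rho(h) numerically.
import numpy as np

n = 8

def rho(k):
    t = 2.0 * np.pi * k / n
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

ok = all(np.allclose(rho((g + h) % n), rho(g) @ rho(h))
         for g in range(n) for h in range(n))
print(ok)  # True; note rho(0) = Id and rho(n - k) = rho(k)^{-1},
           # matching the two consequences noted in the definition above.
```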

Unitary Representations: If the vector space $V$ has a Hermitian inner product, and the representation preserves the inner product, $\langle v, w \rangle = \langle \rho_g(v), \rho_g(w) \rangle$ for all $g \in G$ and $v, w \in V$, then the representation is called unitary. Note: For nice groups (e.g. finite or compact groups) we can always define a Hermitian inner product such that the representation is unitary.

Unitary Representations Examples: $G$ is the group of invertible $n \times n$ matrices; $V$ is the space of $n$-dimensional arrays with the standard inner product; $\rho$ is the map $\rho_M(v) = Mv$. Representation? Unitary?

Unitary Representations Examples: $G$ is the group of invertible $n \times n$ matrices; $V$ is the space of $n$-dimensional arrays with the standard inner product; $\rho$ is the map $\rho_M(v) = v$. Representation? Unitary?

Unitary Representations Examples: $G$ is the group of unitary transformations on $V$; $V$ is a complex Hermitian inner product space; $\rho$ is the map $\rho_U(v) = Uv$. Representation? Unitary?

Unitary Representations Examples: $G$ is the group of 2D/3D rotations; $V$ is the space of functions on a circle/sphere with the standard inner product; $\rho$ is the map $\rho_R(f)(p) = f(Rp)$ for $R \in G$. Representation? Unitary?

Unitary Representations Examples: $G$ is the group of 2D/3D rotations; $V$ is the space of functions on a circle/sphere with the standard inner product; $\rho$ is the map $\rho_R(f)(p) = f(R^{-1}p)$ for $R \in G$. Representation? Unitary?

Unitary Representations Examples: $G$ is the group $\mathbb{R}^2$ modulo $(2\pi, 2\pi)$; $V$ is the space of continuous, periodic functions in the plane, with the standard Hermitian inner product; $\rho$ is the map $\rho_{(a,b)}(f)(x, y) = f(x - a, y - b)$. Representation? Unitary?
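The last example can be discretized: periodic functions become length-$N$ arrays and translation becomes a circular shift (shown in 1D for brevity; all names below are mine). The sketch checks that the shift preserves the standard Hermitian inner product, i.e. that the representation is unitary:

```python
# Sketch: 1D analogue of the example above, with periodic functions
# discretized as length-N complex arrays; translation by a samples is a
# circular shift. We check that the shift preserves the Hermitian inner
# product <v, w> = sum_i v_i * conj(w_i), i.e. the representation is unitary.
import numpy as np

N = 64
rng = np.random.default_rng(0)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def rho(a, v):
    """(rho_a v)[x] = v[x - a]: translation as a circular shift."""
    return np.roll(v, a)

def inner(v, w):
    """Hermitian inner product, conjugate-linear in the second argument."""
    return np.sum(v * np.conj(w))

a = 5
print(np.isclose(inner(f, g), inner(rho(a, f), rho(a, g))))  # True
```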

Big Picture: Our goal is to better understand how a group acts on a vector space: how translational shifts act on periodic functions, how rotations act on functions on a sphere/circle, etc. To do this, we would like to break the action of the group into bite-size chunks. Unless otherwise stated, we will always assume that our representations are unitary.

Sub-Representation: Given a representation $(\rho, V)$ of a group $G$, if there exists a subspace $W \subseteq V$ that the representation maps into itself, $\rho_g(w) \in W$ for all $g \in G$ and $w \in W$, then we say that $W$ is a sub-representation of $V$.

Sub-Representation, Maschke's Theorem: If $W$ is a sub-representation of $V$, then the perpendicular space $W^\perp$ will also be a sub-representation of $V$. $W^\perp$ is defined by the property that every vector in $W^\perp$ is perpendicular to every vector in $W$: $\langle w, w^\perp \rangle = 0$ for all $w \in W$ and $w^\perp \in W^\perp$.

Sub-Representation, Claim: $W^\perp$ will also be a sub-representation of $V$. Proof (by contradiction): We would like to show that the representation sends $W^\perp$ back into itself. Assume not: then there exist $w \in W$, $w^\perp \in W^\perp$, and $g \in G$ such that $\langle w, \rho_g(w^\perp) \rangle \neq 0$. Since $\rho$ is unitary, this implies that $\langle \rho_{g^{-1}}(w), \rho_{g^{-1}}(\rho_g(w^\perp)) \rangle \neq 0$, i.e. $\langle \rho_{g^{-1}}(w), w^\perp \rangle \neq 0$. But $\rho_{g^{-1}}(w) \in W$ because $W$ is a sub-representation, so this inner product must vanish, a contradiction.

Sub-Representation Example: 1. Consider the group of 2D rotations, acting on vectors in 3D by rotating around the y-axis. What are two sub-representations? a) The y-axis: the group acts on this subspace trivially, mapping every vector to itself. b) The xz-plane: the group acts as a 2D rotation on this 2D subspace.
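Numerically, this decomposition is visible in the block structure of the rotation matrix. A minimal sketch (the angle and test vectors are arbitrary choices of mine):

```python
# Sketch: rotation about the y-axis, written in (x, y, z) coordinates.
# The matrix is block diagonal: it fixes the y-axis (a trivial 1D
# sub-representation) and rotates the xz-plane (a 2D sub-representation).
import numpy as np

t = 0.7  # arbitrary rotation angle
R_y = np.array([[ np.cos(t), 0.0, np.sin(t)],
                [ 0.0,       1.0, 0.0      ],
                [-np.sin(t), 0.0, np.cos(t)]])

y_axis = np.array([0.0, 1.0, 0.0])
v_xz = np.array([1.0, 0.0, 2.0])  # an arbitrary vector in the xz-plane

print(np.allclose(R_y @ y_axis, y_axis))  # True: the y-axis is fixed
print(np.isclose((R_y @ v_xz)[1], 0.0))   # True: the xz-plane maps to itself
```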

Sub-Representation Example: 2. Consider the group of 2D rotations, acting on functions on the unit disk. What are two sub-representations? a) The space of functions that are zero at all points $p$ with $\|p\| < 1/2$. b) The space of functions that are zero at all points $p$ with $\|p\| \geq 1/2$. Any function on the disk decomposes as the sum of one function from each space.

Irreducible Representations: Given a representation $(\rho, V)$ of a group $G$, the representation is said to be irreducible if the only subspaces of $V$ that are sub-representations are $W = V$ and $W = \{0\}$.

Structure Preservation: We had talked about linear transformations as maps between vector spaces that preserve the underlying vector-space structure: $L(a v_1 + b v_2) = a L(v_1) + b L(v_2)$. We had talked about a representation as a map from a group into the group of invertible linear transformations that preserves the group structure: $\rho(g \cdot h) = \rho(g)\,\rho(h)$. It doesn't matter if we perform the vector-space/group operations before or after we apply the map.

Structure Preservation: Given a representation $(\rho, V)$ of a group $G$, what does it mean for a map $\Phi: V \to V$ to preserve the representation structure? Since $\Phi$ is a map between vector spaces, it should preserve the vector-space structure: $\Phi$ is a linear transformation. $\Phi$ should also preserve the group-action structure: $\Phi(\rho_g(v)) = \rho_g(\Phi(v))$. Such a map is called G-linear.

Structure Preservation, Claim: If $\Phi: V \to V$ is G-linear, then both the kernel and the image of $\Phi$ are sub-representations.

Structure Preservation, Proof: If $v \in \mathrm{Kernel}(\Phi)$ then, for $g \in G$, we have $0 = \rho_g(\Phi(v)) = \Phi(\rho_g(v))$, so $\rho_g(v) \in \mathrm{Kernel}(\Phi)$. Similarly, if $w = \Phi(v) \in \mathrm{Image}(\Phi)$ then, for $g \in G$, we have $\rho_g(w) = \rho_g(\Phi(v)) = \Phi(\rho_g(v)) \in \mathrm{Image}(\Phi)$.

Schur's Lemma: Given an irreducible representation $(\rho, V)$ of a group $G$, if $\Phi$ is G-linear then $\Phi$ is scalar multiplication: $\Phi = \lambda \cdot \mathrm{Id}$.

Schur's Lemma, Proof:
1. Since $\Phi$ is a linear transformation, it has a (complex) eigenvalue $\lambda$.
2. Since $\Phi$ is G-linear, so is $(\Phi - \lambda \cdot \mathrm{Id})$: $(\Phi - \lambda \cdot \mathrm{Id})(\rho_g(v)) = \Phi(\rho_g(v)) - \lambda \cdot \rho_g(v) = \rho_g(\Phi(v)) - \rho_g(\lambda v) = \rho_g((\Phi - \lambda \cdot \mathrm{Id})(v))$.
3. Since $\lambda$ is an eigenvalue of $\Phi$, $(\Phi - \lambda \cdot \mathrm{Id})$ must have a non-trivial kernel $W \subseteq V$.
4. This implies that the kernel of $(\Phi - \lambda \cdot \mathrm{Id})$ must be a sub-representation of $V$.
5. Since $(\rho, V)$ is irreducible and the kernel of $(\Phi - \lambda \cdot \mathrm{Id})$ is non-trivial, $W = V$.
6. Since the kernel is the entire vector space: $\Phi - \lambda \cdot \mathrm{Id} = 0 \Rightarrow \Phi = \lambda \cdot \mathrm{Id}$.

Schur's Lemma (Corollary): All irreducible representations of commutative groups must be one-dimensional.

Schur's Lemma (Corollary), Proof:
1. Fix some element $h \in G$.
2. Since $G$ is commutative, $\rho_h$ must be G-linear: $\rho_g(\rho_h(v)) = \rho_{g \cdot h}(v) = \rho_{h \cdot g}(v) = \rho_h(\rho_g(v))$.
3. Since $(\rho, V)$ is irreducible, $\rho_h = \lambda \cdot \mathrm{Id}$.
4. Since this is true for every $h \in G$, every group element acts by scalar multiplication, so any subspace $W \subseteq V$ is a sub-representation.
5. Since $(\rho, V)$ is irreducible, it has no proper non-trivial sub-representations, so $V$ must be one-dimensional.
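For the cyclic group acting on $\mathbb{C}^N$ by circular shifts, this corollary can be checked numerically: any matrix commuting with all shifts (a circulant matrix, i.e. a G-linear map) is diagonalized by the Fourier basis. A sketch, with all names my own:

```python
# Sketch for the cyclic group Z_N acting on C^N by circular shifts: a
# circulant matrix commutes with every shift (it is G-linear), and the
# DFT basis diagonalizes it, so each Fourier mode spans a 1D invariant
# subspace. All names here are mine.
import numpy as np

N = 8
rng = np.random.default_rng(1)
c = rng.standard_normal(N)
# Circulant matrix: column j is the kernel c circularly shifted by j.
C = np.stack([np.roll(c, j) for j in range(N)], axis=1)

S = np.roll(np.eye(N), 1, axis=0)  # one-step circular shift matrix
print(np.allclose(C @ S, S @ C))   # True: C commutes with the shift

F = np.fft.fft(np.eye(N))          # DFT matrix; its columns are the Fourier modes
D = np.linalg.inv(F) @ C @ F
print(np.allclose(D, np.diag(np.diag(D))))  # True: C is scalar on each mode
```

In the language of the corollary, the columns of the DFT matrix span the one-dimensional irreducible sub-representations of the shift group.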

Example: Since 2D rotations commute, we can decompose the space of functions on a circle into 1D subspaces that rotate into themselves.


Example: 3D rotations don't commute, so the space of spherical functions need not be decomposable into 1D irreducible sub-representations.


Why do we care? In signal/image/voxel processing, we are often interested in applying a filter to some initial data. E.g. smoothing: [figure: filter convolved with the input gives the smoothed output]


Smoothing: What we are really doing is computing a moving inner product of the signal $g(\theta)$ against shifted copies of the filter $h(\theta)$, evaluating $\langle g(\theta), h(\theta - \alpha) \rangle$ at successive shifts $\alpha_1, \alpha_2, \alpha_3, \ldots$ [figure: $g(\theta)$ and the shifted filter plotted over $[-\pi, \pi)$, with the inner product recorded at each shift $\alpha_i$]

Smoothing: We can write out the operation of smoothing a signal $g$ by a filter $h$ as: $(g \star h)(\alpha) = \langle g, \rho_\alpha(h) \rangle$, where $\rho_\alpha$ is the linear transformation that translates a periodic function by $\alpha$.

Smoothing: We can think of this as a representation: $V$ is the space of periodic functions on the line; $G$ is the group of real numbers in $[0, 2\pi)$ under addition modulo $2\pi$; $\rho_\alpha$ is the representation translating a function by $\alpha$. This is a representation of a commutative group.

Smoothing: There exist orthogonal one-dimensional (complex) subspaces $V_1, \ldots, V_n \subseteq V$ that are the irreducible representations of $V$. Setting $\varphi_i \in V_i$ to be a unit vector, we know that the group acts on $\varphi_i$ by scalar multiplication: $\rho_\alpha(\varphi_i) = \lambda_i(\alpha)\,\varphi_i$. Note: Since the $V_i$ are orthogonal, the basis $\{\varphi_1, \ldots, \varphi_n\}$ is orthonormal.
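Discretizing the circle to $N$ samples, the $\varphi_i$ become the familiar Fourier modes, and translation acts on each by a unit-modulus scalar. A quick check (a sketch; the grid size, frequency, and shift are arbitrary choices of mine):

```python
# Sketch: on an N-sample grid of the circle, the Fourier mode
# phi_k(theta) = exp(i k theta) / sqrt(N) is a unit vector, and translating
# it by a = 2*pi*m/N just multiplies it by the scalar exp(-i k a), i.e.
# lambda_k(a) = exp(i kappa a) with kappa = -k an integer (cf. the
# derivation of kappa_j at the end of this section).
import numpy as np

N, k, m = 64, 3, 5
theta = 2 * np.pi * np.arange(N) / N
phi_k = np.exp(1j * k * theta) / np.sqrt(N)

a = 2 * np.pi * m / N
shifted = np.roll(phi_k, m)  # (rho_a phi_k)(theta) = phi_k(theta - a)
print(np.allclose(shifted, np.exp(-1j * k * a) * phi_k))  # True
```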

Smoothing: We can write out the functions $g, h \in V$ in this basis as: $g(\theta) = g_1 \varphi_1(\theta) + \cdots + g_n \varphi_n(\theta)$ and $h(\theta) = h_1 \varphi_1(\theta) + \cdots + h_n \varphi_n(\theta)$, with $g_i, h_i \in \mathbb{C}$.

Smoothing: Then the moving dot-product can be written as: $(g \star h)(\alpha) = \langle g, \rho_\alpha(h) \rangle$.

Smoothing: Expanding in the basis $\{\varphi_1, \ldots, \varphi_n\}$: $(g \star h)(\alpha) = \left\langle \sum_{j=1}^{n} g_j \varphi_j,\ \rho_\alpha\!\left( \sum_{k=1}^{n} h_k \varphi_k \right) \right\rangle$.

Smoothing: By linearity of $\rho_\alpha$: $(g \star h)(\alpha) = \left\langle \sum_{j=1}^{n} g_j \varphi_j,\ \sum_{k=1}^{n} h_k\, \rho_\alpha(\varphi_k) \right\rangle$.

Smoothing: By linearity of the inner product in the first term: $(g \star h)(\alpha) = \sum_{j=1}^{n} g_j \left\langle \varphi_j,\ \sum_{k=1}^{n} h_k\, \rho_\alpha(\varphi_k) \right\rangle$.

Smoothing: By conjugate-linearity in the second term: $(g \star h)(\alpha) = \sum_{j,k=1}^{n} g_j \overline{h_k} \left\langle \varphi_j,\ \rho_\alpha(\varphi_k) \right\rangle$.

Smoothing: Because $\rho_\alpha$ is scalar multiplication on $V_k$: $(g \star h)(\alpha) = \sum_{j,k=1}^{n} g_j \overline{h_k} \left\langle \varphi_j,\ \lambda_k(\alpha)\, \varphi_k \right\rangle$.

Smoothing: Again, by conjugate-linearity in the second term: $(g \star h)(\alpha) = \sum_{j,k=1}^{n} g_j \overline{h_k}\, \overline{\lambda_k(\alpha)} \left\langle \varphi_j, \varphi_k \right\rangle$.

Smoothing: And finally, by the orthonormality of $\{\varphi_1, \ldots, \varphi_n\}$: $(g \star h)(\alpha) = \sum_{j=1}^{n} g_j \overline{h_j}\, \overline{\lambda_j(\alpha)}$.

Smoothing: This implies that we can compute the moving dot-product by multiplying the coefficients of $g$ against the (conjugated) coefficients of $h$. Convolution/correlation in the spatial domain is multiplication in the frequency domain!
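This is the punchline, and it is easy to test numerically. The sketch below (names mine; it discretizes the circle to $N$ samples) compares the direct moving inner product against the FFT-based computation:

```python
# Sketch (names mine): discretize the circle to N samples and compare the
# direct O(N^2) moving inner product (g * h)(alpha) = <g, rho_alpha h>
# against the FFT route: multiply the coefficients of g by the conjugated
# coefficients of h, then transform back.
import numpy as np

N = 128
rng = np.random.default_rng(2)
g = rng.standard_normal(N)  # the signal
h = rng.standard_normal(N)  # the filter

# Direct: one inner product per shift m.
direct = np.array([np.sum(g * np.conj(np.roll(h, m))) for m in range(N)])

# Frequency domain: per-coefficient multiplication, then an inverse FFT.
via_fft = np.fft.ifft(np.fft.fft(g) * np.conj(np.fft.fft(h)))

print(np.allclose(direct, via_fft))  # True
```

The FFT route costs $O(N \log N)$ versus $O(N^2)$ for the direct sum, which is the practical payoff of the theory above.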


Smoothing: What is $\lambda_j(\alpha)$? Since the representation is unitary, $|\lambda_j(\alpha)| = 1$, so there exists $\tilde{\lambda}_j : [0, 2\pi) \to \mathbb{R}$ such that $\lambda_j(\alpha) = e^{i \tilde{\lambda}_j(\alpha)}$.

Smoothing: $\lambda_j(\alpha) = e^{i \tilde{\lambda}_j(\alpha)}$ for some $\tilde{\lambda}_j : [0, 2\pi) \to \mathbb{R}$. Since it's a representation: $\lambda_j(\alpha + \beta) = \lambda_j(\alpha)\, \lambda_j(\beta)$ for all $\alpha, \beta \in [0, 2\pi)$, so $\tilde{\lambda}_j(\alpha + \beta) = \tilde{\lambda}_j(\alpha) + \tilde{\lambda}_j(\beta)$, and additivity (together with continuity) forces linearity: there exists $\kappa_j \in \mathbb{R}$ such that $\tilde{\lambda}_j(\alpha) = \kappa_j\, \alpha$.

Smoothing: $\lambda_j(\alpha) = e^{i \kappa_j \alpha}$ for some $\kappa_j \in \mathbb{R}$. Since it's a representation: $1 = \lambda_j(0) = \lambda_j(2\pi) = e^{i \kappa_j 2\pi}$, so $\kappa_j \in \mathbb{Z}$.

Smoothing: Thus, the correlation of the signals $g, h : S^1 \to \mathbb{C}$ can be expressed as: $(g \star h)(\alpha) = \sum_{j=1}^{n} g_j \overline{h_j}\, e^{-i \kappa_j \alpha}$, where $\kappa_j \in \mathbb{Z}$.