Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems

Vector and Function Spaces

3 Vectors: vectors are arrows in space; classically: 2- or 3-dimensional Euclidean space.

4 Vector Operations: adding vectors v + w by concatenation.

5 Vector Operations: scalar multiplication, i.e. scaling vectors (incl. mirroring), e.g. 2.0 v, 1.5 v, -1.0 v.

6 You can combine it... Linear combinations (e.g. 2w + v): this is basically all you can do: r = Σ_{i=1}^n λ_i v_i.

Vector Spaces. Vector space: a set of vectors V, based on a field F (we use only F = ℝ), with two operations: adding vectors, u = v + w (u, v, w ∈ V), and scaling vectors, w = λv (v ∈ V, λ ∈ F). Vector space axioms. 7

8 Additional Tools. More concepts: subspaces, linear spans, bases; scalar product; angle, length, orthogonality; Gram-Schmidt orthogonalization; cross product (ℝ³); linear maps; matrices; eigenvalues & eigenvectors; quadratic forms. (Check your old math books.)

Example Spaces. Function spaces: the space of all functions f: ℝ → ℝ; the space of all smooth C^k functions f: ℝ → ℝ; the space of all functions f: [0..1] → ℝ; etc. 9

Function Spaces. Intuition: start with a finite-dimensional vector and increase the sampling density towards infinity; for functions on the real numbers, this gives an uncountable number of dimensions: [f_1, f_2, ..., f_9]^T (d = 9), [f_1, f_2, ..., f_18]^T (d = 18), ..., f(x) (d = ∞). 10

Dot Product on Function Spaces. Scalar products: for square-integrable functions f, g: ℝ^n → ℝ, the standard scalar product is defined as ⟨f, g⟩ = ∫ f(x) g(x) dx. It measures an abstract norm and angle between functions (not in a geometric sense). Orthogonal functions, f ⊥ g ⇔ ⟨f, g⟩ = 0, do not influence each other in linear combinations: adding one to the other does not change the value in the other one's direction. 11
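As a rough illustration (not from the slides; the interval and sample count are arbitrary choices), the scalar product can be approximated numerically by a Riemann sum, e.g. to check that sin and cos are orthogonal on [0, 2π]:

```python
import numpy as np

# Approximate the L2 scalar product <f, g> = integral of f(x) g(x) dx on [0, 2*pi]
# by a simple Riemann sum (illustrative sketch, not from the slides).
def scalar_product(f, g, a=0.0, b=2 * np.pi, n=10000):
    x = np.linspace(a, b, n, endpoint=False)
    dx = (b - a) / n
    return np.sum(f(x) * g(x)) * dx

print(scalar_product(np.sin, np.cos))   # ~0: sin and cos are orthogonal
print(scalar_product(np.sin, np.sin))   # ~pi: squared norm of sin on [0, 2*pi]
```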

Approximation of Function Spaces. Finite-dimensional subspaces: function spaces with infinite dimension are hard to represent on a computer. For numerical purposes, finite-dimensional subspaces are used to approximate the larger space. Two basic approaches. 12

Approximation of Function Spaces. Task: given an infinite-dimensional function space V, find f ∈ V with a certain property. Recipe "Finite Differences": sample the function f on a discrete grid; approximate the property discretely (derivatives => finite differences, integrals => finite sums); optimization: find the best discrete function. 13

14 Approximation of Function Spaces. Recipe "Finite Elements": choose basis functions b_1, ..., b_d ∈ V; find f = Σ_{i=1}^d λ_i b_i that matches the property best; f is described by (λ_1, ..., λ_d). (Figure: actual solution, function space basis, approximate solution.)

15 Examples of bases (plots): monomial basis, Fourier basis, B-spline basis, Gaussian basis.

16 Best Match. Which linear combination matches best? Solution 1: least-squares minimization, ∫_ℝ (f − Σ_{i=1}^n λ_i b_i)² dx → min. Solution 2: Galerkin method, ∀ i = 1..n: ⟨f − Σ_{j=1}^n λ_j b_j, b_i⟩ = 0. Both are equivalent.
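A minimal numpy sketch of this best-match computation (the target function exp(x) and the monomial basis on [0, 1] are assumed example choices): the Galerkin conditions become the normal equations G λ = r with the Gram matrix G_ij = ⟨b_i, b_j⟩ and right-hand side r_i = ⟨f, b_i⟩:

```python
import numpy as np

# Sketch: project f onto span{b_1, ..., b_n} on [0, 1] by solving the
# Galerkin / normal equations  G @ lam = r, where
# G[i, j] = <b_i, b_j> and r[i] = <f, b_i>.
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
inner = lambda u, v: np.sum(u * v) * dx        # discretized L2 scalar product

f = np.exp(x)                                  # function to approximate (assumed example)
basis = [np.ones_like(x), x, x**2, x**3]       # monomial basis (assumed example)

G = np.array([[inner(bi, bj) for bj in basis] for bi in basis])  # Gram matrix
r = np.array([inner(f, bi) for bi in basis])
lam = np.linalg.solve(G, r)                    # best-match coefficients

approx = sum(l * b for l, b in zip(lam, basis))
print("max error:", np.max(np.abs(f - approx)))  # small: cubic LS fit of exp on [0, 1]
```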

Optimality Criterion. Given: a subspace W ⊆ V and an element v ∈ V. Then: w ∈ W minimizes the quadratic error ‖w − v‖² (i.e. the Euclidean distance) if and only if the residual w − v is orthogonal to W. Least squares = minimal Euclidean distance. 17

Formal Derivation. Least squares:
E(f) = ∫_ℝ (f − Σ_{i=1}^n λ_i b_i)² dx = ∫_ℝ f² dx − 2 Σ_{i=1}^n λ_i ⟨f, b_i⟩ + Σ_{i=1}^n Σ_{j=1}^n λ_i λ_j ⟨b_i, b_j⟩.
Setting the derivatives with respect to λ to zero: ∇_λ E(f) = −2 (⟨f, b_1⟩, …, ⟨f, b_n⟩) + 2 (λ_1, …, λ_n) (⟨b_i, b_j⟩)_{i,j} = 0.
Result: ∀ j = 1..n: ⟨f − Σ_{i=1}^n λ_i b_i, b_j⟩ = 0. 18

Linear Maps

Linear Maps. A function f: V → W between vector spaces V, W is linear if and only if: ∀ v_1, v_2 ∈ V: f(v_1 + v_2) = f(v_1) + f(v_2), and ∀ v ∈ V, λ ∈ F: f(λv) = λ f(v). Constructing linear mappings: a linear map is uniquely determined if we specify a mapping value for each basis vector of V. 20

21 Matrix Representation. Finite-dimensional spaces: linear maps can be represented as matrices. For each basis vector v_i of V, we specify the mapped vector w_i. Then the map f is given by: f(x) = f(x_1 v_1 + ⋯ + x_n v_n) = x_1 w_1 + ⋯ + x_n w_n.

22 Matrix Representation. This can be written as a matrix-vector product: f(x) = [w_1 | ⋯ | w_n] (x_1, …, x_n)^T. The columns are the images of the basis vectors (for which the coordinates of v are given).
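A small sketch (the example images w_1, w_2 are assumed) showing that stacking the images of the basis vectors as columns gives the matrix of the map:

```python
import numpy as np

# Sketch: a linear map is determined by the images w_i of the basis vectors.
# Stacking those images as columns gives the matrix representation.
w1 = np.array([1.0, 0.0, 2.0])   # image of the first basis vector (assumed example)
w2 = np.array([0.0, 3.0, 1.0])   # image of the second basis vector (assumed example)
M = np.column_stack([w1, w2])    # 3x2 matrix, columns = images of basis vectors

x = np.array([2.0, -1.0])        # coordinates of v in that basis
print(M @ x)                     # equals x_1 * w_1 + x_2 * w_2
print(2.0 * w1 - 1.0 * w2)       # same vector
```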

Linear Systems of Equations. Problem: invert an affine map. Given: Mx = b; we know M and b and are looking for x. Set of solutions: always an affine subspace of ℝ^n, or the empty set (point, line, plane, hyperplane, ...). Numerous algorithms exist for solving linear systems. 23

Solvers for Linear Systems. Algorithms for solving linear systems of equations: Gaussian elimination, O(n³) operations for n × n matrices. We can do better, in particular for special cases: band matrices (constant bandwidth); sparse matrices (constant number of non-zero entries per row; store only the non-zero entries, e.g. instead of (3.5, 0, 0, 0, 7, 0, 0) store [(1: 3.5), (5: 7)]). 24

Solvers for Linear Systems. Algorithms for solving linear systems of n equations: Band matrices with O(1) bandwidth: modified O(n) elimination algorithm. Iterative Gauss-Seidel solver: converges for diagonally dominant matrices; typically O(n) iterations, each costing O(n) for a sparse matrix. Conjugate gradient solver: only for symmetric, positive definite matrices; guaranteed O(n) iterations, typically a good solution after O(√n) iterations. More details on iterative solvers: J. R. Shewchuk: An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, 1994. 25
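For illustration, a minimal Gauss-Seidel sweep on a diagonally dominant tridiagonal system (the example matrix is an assumption, and dense storage is used for brevity; a real sparse implementation would only store and visit the non-zero entries, making each sweep O(n)):

```python
import numpy as np

# Minimal Gauss-Seidel sketch for a diagonally dominant system A x = b.
def gauss_seidel(A, b, iterations=200):
    x = np.zeros_like(b)
    for _ in range(iterations):
        for i in range(len(b)):
            sigma = A[i, :] @ x - A[i, i] * x[i]   # sum over the off-diagonal entries
            x[i] = (b[i] - sigma) / A[i, i]        # uses already-updated entries (Gauss-Seidel)
    return x

n = 50
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))         # diagonally dominant band matrix
b = np.ones(n)
x = gauss_seidel(A, b)
print("residual:", np.linalg.norm(A @ x - b))      # converges: A is diagonally dominant
```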

Eigenvectors & Eigenvalues. Definition: linear map M, non-zero vector x with Mx = λx; λ is an eigenvalue of M, x is the corresponding eigenvector. 26

Example. Intuition: in the direction of an eigenvector, the linear map acts like a scaling. Example: two eigenvalues (0.5 and 2), two eigenvectors; the standard basis contains no eigenvectors. 27

Eigenvectors & Eigenvalues. Diagonalization: in case an n × n matrix M has n linearly independent eigenvectors, we can diagonalize M by transforming to this coordinate system: M = TDT⁻¹. 28

Spectral Theorem. Spectral Theorem: if M is a symmetric n × n matrix of real numbers (i.e. M = M^T), there exists an orthogonal set of n eigenvectors. This means every (real) symmetric matrix can be diagonalized: M = TDT^T with an orthogonal matrix T. 29
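A quick numerical check of the spectral theorem (random symmetric test matrix assumed), using numpy's symmetric eigensolver:

```python
import numpy as np

# Sketch: numerically verify M = T D T^T for a random symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = 0.5 * (A + A.T)                       # make it symmetric

eigvals, T = np.linalg.eigh(M)            # columns of T = orthonormal eigenvectors
D = np.diag(eigvals)

print(np.allclose(M, T @ D @ T.T))        # True: M = T D T^T
print(np.allclose(T.T @ T, np.eye(5)))    # True: T is orthogonal
```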

Computation. Simple algorithm: power iteration, for symmetric matrices; computes the largest eigenvalue (by magnitude) even for large matrices. Algorithm: start with a random vector (maybe multiple tries); repeatedly multiply with the matrix; normalize the vector after each step; repeat until the ratio before/after normalization converges (this is the eigenvalue). Intuition: largest eigenvalue = dominant component/direction. 30
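A direct sketch of the power iteration just described (the 2 × 2 test matrix is an assumed example):

```python
import numpy as np

# Power iteration sketch: repeatedly multiply by M and normalize; the ratio
# before/after normalization converges to the eigenvalue of largest magnitude.
def power_iteration(M, iterations=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(M.shape[0])
    eigenvalue = 0.0
    for _ in range(iterations):
        y = M @ x
        eigenvalue = np.linalg.norm(y) / np.linalg.norm(x)  # growth ratio
        x = y / np.linalg.norm(y)                           # normalize after each step
    return eigenvalue, x

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # symmetric, eigenvalues 1 and 3
val, vec = power_iteration(M)
print(val)                                 # ~3.0, the dominant eigenvalue
print(vec)                                 # ~[0.707, 0.707], the dominant direction
```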

31 Powers of Matrices. What happens: a symmetric matrix can be written as M = TDT^T with D = diag(λ_1, …, λ_n). Taking it to the k-th power yields: M^k = (TDT^T)(TDT^T)⋯(TDT^T) = T D^k T^T, with D^k = diag(λ_1^k, …, λ_n^k). Bottom line: eigenvalue analysis is key to understanding powers of matrices.

Improvements. Improvements to the power method: Find the smallest eigenvalue? Use the inverse matrix. Find all eigenvalues (for a symmetric matrix)? Run repeatedly and orthogonalize the current estimate against the already known eigenvectors in each iteration (Gram-Schmidt). How long does it take? Convergence depends on the ratio to the next smaller eigenvalue; the gap between the components grows exponentially with the number of iterations. There are more sophisticated algorithms based on this idea. 32

Generalization: SVD. Singular value decomposition: let M be an arbitrary real matrix (it may be rectangular). Then M can be written as M = U D V^T, where the matrices U, V are orthogonal and D is a diagonal matrix (which might contain zeros); the diagonal entries are called singular values. U and V are different in general. For symmetric positive semi-definite matrices they coincide, and the singular values are the eigenvalues. 33

34 Singular Value Decomposition: M = U D V^T, with U orthogonal, D a rectangular diagonal matrix carrying the singular values σ_1, σ_2, σ_3, σ_4, … (zeros elsewhere), and V^T orthogonal.

Singular Value Decomposition. The SVD can be used to solve linear systems of equations. For full-rank, square M: M = U D V^T, so M⁻¹ = (U D V^T)⁻¹ = (V^T)⁻¹ D⁻¹ U⁻¹ = V D⁻¹ U^T. Good numerical properties (numerically stable), but more expensive than iterative solvers. The OpenCV library provides a very good implementation of the SVD. 35
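A sketch of solving a full-rank square system via the SVD, x = V D⁻¹ U^T b, here with plain numpy rather than OpenCV (random test data assumed):

```python
import numpy as np

# Sketch: solve a full-rank square system M x = b via the SVD,
# x = V @ D^-1 @ U^T @ b, and compare with a direct solver.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

U, s, Vt = np.linalg.svd(M)               # M = U @ diag(s) @ Vt
x_svd = Vt.T @ ((U.T @ b) / s)            # V D^-1 U^T b
x_direct = np.linalg.solve(M, b)

print(np.allclose(x_svd, x_direct))       # True
print("condition number:", s[0] / s[-1])  # sigma_max / sigma_min
```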

Linear Inverse Problems

Inverse Problems. Setting: a (physical) process f takes place; it transforms the original input x into an output b. Task: recover x from b. Examples: 3D structure from photographs; tomography: values from line integrals; 3D geometry from a noisy 3D scan. 37

Linear Inverse Problems. Assumption: f is linear and finite-dimensional: f(x) = b, i.e. M_f x = b. Inversion of f is said to be an ill-posed problem if one of the following three conditions holds: there is no solution; there is more than one solution; there is exactly one solution, but the SVD contains very small singular values. 38

39 Ill-Posed Problems. Rationale: small singular values amplify errors. Assume inexact input (measurement noise, numerical noise). Reminder: M⁻¹ = V D⁻¹ U^T; the orthogonal factors V and U do not hurt, the factor D⁻¹ is the decisive one. Orthogonal transforms preserve the norm of x, so V and U do not cause problems.

40 Ill-Posed Problems. Rationale: small singular values amplify errors. Reminder: x = M⁻¹b = (V D⁻¹ U^T)b. Say D looks like this: D = diag(2.5, 1.1, 0.9, 0.000000001). Any input noise in b along the fourth singular direction (the fourth column of U) will be amplified by 10⁹. If our measurement precision is less than that, the result will be unusable. This does not depend on how we invert the matrix. Condition number: σ_max / σ_min.

41 Ill-Posed Problems. Two problems: (1) the mapping destroys information: what goes below the noise level cannot be recovered by any means; (2) the inverse mapping amplifies noise: with D = diag(2.5, 1.1, 0.9, 0.000000001), the inversion yields a garbage solution; even the remaining information is not recovered, and extremely large, random solutions are obtained. We can do something about problem #2.
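A tiny experiment with the diagonal example from the slides, D = diag(2.5, 1.1, 0.9, 10⁻⁹) (the noise magnitude 10⁻⁶ is an assumed value): the perturbation in the fourth direction is amplified by about 10³ here, and by up to 10⁹ in general:

```python
import numpy as np

# Sketch using the diagonal example from the slides: D = diag(2.5, 1.1, 0.9, 1e-9).
# A tiny perturbation of b along the fourth direction is strongly amplified.
D = np.diag([2.5, 1.1, 0.9, 1e-9])        # stands in for M = U D V^T with U = V = I
b = np.array([1.0, 1.0, 1.0, 0.0])
noise = np.array([0.0, 0.0, 0.0, 1e-6])   # measurement noise in the 4th component (assumed)

x_clean = np.linalg.solve(D, b)
x_noisy = np.linalg.solve(D, b + noise)

print(x_noisy - x_clean)                   # last component ~1e3 = 1e-6 / 1e-9
print("condition number:", 2.5 / 1e-9)
```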

Regularization: avoiding destructive noise caused by inversion. Various techniques; goal: ignore the misleading information. Subspace inversion: ignore the subspace with small singular values (needs an SVD, risk of ringing). Additional assumptions: smoothness (or something similar) makes the compound problem (f⁻¹ + assumptions) well-posed. We will look at this in detail later. 42
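A minimal sketch of subspace inversion via a truncated SVD (threshold and test data assumed): singular values below the threshold are not inverted but ignored, trading a bias for noise suppression:

```python
import numpy as np

# Sketch of subspace inversion (truncated SVD): ignore directions whose
# singular values fall below a threshold instead of dividing by them.
def truncated_svd_solve(M, b, threshold=1e-6):
    U, s, Vt = np.linalg.svd(M)
    s_inv = np.where(s > threshold, 1.0 / s, 0.0)   # drop tiny singular values
    return Vt.T @ (s_inv * (U.T @ b))

M = np.diag([2.5, 1.1, 0.9, 1e-9])
b = np.array([1.0, 1.0, 1.0, 0.0]) + np.array([0.0, 0.0, 0.0, 1e-6])  # noisy data

print(np.linalg.solve(M, b))         # naive inverse: last entry blows up to ~1000
print(truncated_svd_solve(M, b))     # regularized: the noisy direction is ignored
```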

43 Illustration of the Problem. Forward problem: the original function f is mapped to the smoothed function g.

44 Illustration of the Problem. Inverse problem: from the smoothed function g, reconstruct the function f.

45 Illustration of the Problem. Inverse problem with regularization: from the smoothed function g, a regularized reconstruction of f.

46 Illustration of the Problem. Frequency-domain view of the original function f and of g: the forward operator G, its inverse G⁻¹, and the regularization, shown in the frequency domain.

47 Illustration of the Problem. Forward problem, discrete version: the original function (1.2, 1.5, 0.3, 0.4, 1.6, 0.2, 0.3), multiplied by 1/3 times the circulant matrix with rows (2 1 0 0 0 0 1), (1 2 1 0 0 0 0), (0 1 2 1 0 0 0), (0 0 1 2 1 0 0), (0 0 0 1 2 1 0), (0 0 0 0 1 2 1), (1 0 0 0 0 1 2), gives the smoothed function (1.4, 1.5, 0.8, 0.9, 1.3, 0.8, 0.7).

Illustration of the Problem. Inverse problem, discrete version: multiplying the smoothed function rounded to 2 digits, (1.4, 1.5, 0.8, 0.9, 1.3, 0.8, 0.7), by the inverse of the same matrix yields a solution that deviates noticeably from the correct original (1.2, 1.5, 0.3, 0.4, 1.6, 0.2, 0.3): the rounding error is amplified. 48
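A sketch reproducing this discrete example, assuming the smoothing operator is the 1/3-scaled circulant [1, 2, 1] matrix read off from the previous slide:

```python
import numpy as np

# Sketch of the discrete example: smooth with the circulant [1, 2, 1]/3 matrix,
# round the result to 2 digits, then apply the exact inverse.
n = 7
M = np.zeros((n, n))
for i in range(n):
    M[i, i] = 2.0
    M[i, (i - 1) % n] = 1.0
    M[i, (i + 1) % n] = 1.0
M /= 3.0

x_true = np.array([1.2, 1.5, 0.3, 0.4, 1.6, 0.2, 0.3])   # original function
g = M @ x_true                                            # smoothed function
g_rounded = np.round(g, 1)                                # keep ~2 significant digits

x_rec = np.linalg.solve(M, g_rounded)                     # exact inverse of M
print("smoothed (rounded):", g_rounded)                   # [1.4 1.5 0.8 0.9 1.3 0.8 0.7]
print("reconstruction    :", np.round(x_rec, 2))
print("correct           :", x_true)
print("max error         :", np.max(np.abs(x_rec - x_true)))  # rounding error amplified
```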

Analysis. The convolution matrix with rows (2 1 0 0 0 0 1), (1 2 1 0 0 0 0), …, (1 0 0 0 0 1 2); its spectrum (eigenvalues roughly 4, 3.2, 3.2, 1.6, 1.6, 0.2, 0.2); its dominant eigenvectors and its smallest eigenvectors (the smallest eigenvalues belong to the high-frequency eigenvectors). 49

50 Digging Deeper... Let's look at this example again. Convolution operation (continuous): (f ∗ g)(x) = ∫_ℝ f(t) g(x − t) dt, a shift-invariant linear operator. Discrete counterpart: the circulant matrix with rows (2 1 0 0 0 0 1), (1 2 1 0 0 0 0), …, (1 0 0 0 0 1 2) above.

Convolution Theorem. Fourier transform: function space f: [0..2π] → ℝ (sufficiently smooth); Fourier basis: 1, cos kx, sin kx (k = 1, 2, …). Properties: the Fourier basis is an orthogonal basis (standard scalar product), and the Fourier basis diagonalizes all shift-invariant linear operators. 51

Convolution Theorem. This means: let F: ℕ → ℝ be the Fourier transform (FT) of f: [0..2π] → ℝ. Then: f ∗ g = FT⁻¹(F · G). Consequences: the Fourier spectrum of a convolution operator ("filter") determines the well-posedness of the inverse operation (deconvolution); certain frequencies might be lost; in our example: high frequencies. 52
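A sketch of the convolution theorem in the discrete, periodic setting (reusing the 7-sample example): the DFT turns circular convolution into point-wise multiplication of spectra, so deconvolution becomes point-wise division, which is only stable where the filter spectrum G is not close to zero:

```python
import numpy as np

# Sketch: circular convolution becomes point-wise multiplication in the Fourier
# domain, so deconvolution is point-wise division by the filter spectrum G.
f = np.array([1.2, 1.5, 0.3, 0.4, 1.6, 0.2, 0.3])   # original function from the slides
h = np.array([2/3, 1/3, 0, 0, 0, 0, 1/3])           # the [1, 2, 1]/3 smoothing filter

G = np.fft.fft(h)                                   # filter spectrum
g = np.real(np.fft.ifft(np.fft.fft(f) * G))         # forward problem: smoothing
f_rec = np.real(np.fft.ifft(np.fft.fft(g) / G))     # deconvolution (no noise added)

print(np.allclose(f_rec, f))                        # True: exact data can be recovered
print("smallest |G|:", np.min(np.abs(G)))           # ~0.07: the nearly-lost high frequency
```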

Quadratic Forms

Multivariate Polynomials. A multivariate polynomial of total degree d: a function f: ℝ^n → ℝ, x ↦ f(x), where f is a polynomial in the components of x, and along any 1D direction, f(s + tr) is a polynomial of maximum degree d in t. Examples: f(x, y) := x + y + xy is of total degree 2; in the diagonal direction we obtain f(t·[1/√2, 1/√2]^T) = √2·t + t²/2, a polynomial of degree 2 in t. f(x, y) := c₂₀x² + c₀₂y² + c₁₁xy + c₁₀x + c₀₁y + c₀₀ is a general quadratic polynomial in two variables. 54

Quadratic Polynomials. In general, any quadratic polynomial in n variables can be written as x^T A x + b^T x + c, where A is an n × n matrix, b is an n-dimensional vector, and c is a number. The matrix A can always be chosen to be symmetric: if it isn't, we can substitute it by 0.5·(A + A^T) without changing the polynomial. 55

56 Example. Example: with A having rows (1, 2) and (3, 4), f(x, y) = (x, y) A (x, y)^T = x² + (2 + 3)xy + 4y² = x² + 5xy + 4y²; the symmetrized matrix 0.5·(A + A^T), with rows (1, 2.5) and (2.5, 4), yields the same polynomial.

57 Quadratic Polynomials. Specifying quadratic polynomials x^T A x + b^T x + c: the linear term b shifts the function in space (if A has full rank), since for symmetric A, (x + t)^T A (x + t) + c = x^T A x + 2 t^T A x + (t^T A t + c), i.e. b = 2At; c is an additive constant.

Some Properties. Important properties: multivariate polynomials form a vector space; we can add them component-wise: (2x² + 3y² + 4xy + 2x + 2y + 4) + (3x² + 2y² + 1xy + 5x + 5y + 5) = 5x² + 5y² + 5xy + 7x + 7y + 9. In vector notation: (x^T A₁ x + b₁^T x + c₁) + (x^T A₂ x + b₂^T x + c₂) = x^T (A₁ + A₂) x + (b₁ + b₂)^T x + (c₁ + c₂). 58

Quadratic Polynomials Quadrics Zero level set of a quadratic polynomial: quadric Shape depends on eigenvalues of A b shifts the object in space c sets the level 59

60 Shapes of Quadrics. Shape analysis: A is symmetric, so A can be diagonalized with orthogonal eigenvectors, A = Q diag(λ₁, …, λₙ) Q^T, hence x^T A x = (Q^T x)^T diag(λ₁, …, λₙ) (Q^T x). Q contains the principal axes of the quadric; the eigenvalues determine the quadratic growth (up, down, speed of growth).

61 Shapes of Quadratic Polynomials: λ₁ = 1, λ₂ = 1; λ₁ = 1, λ₂ = −1; λ₁ = 1, λ₂ = 0.

62 The Iso-Lines: Quadrics. Elliptic: λ₁ > 0, λ₂ > 0; hyperbolic: λ₁ < 0, λ₂ > 0; degenerate case: λ₁ = 0, λ₂ ≠ 0.

Quadratic Optimization. Minimize the quadratic objective function x^T A x + b^T x + c. Required: A > 0 (only positive eigenvalues); then it is a paraboloid with a unique minimum. The vertex (critical point) can be determined by simply solving a linear system; the necessary and sufficient condition is 2Ax + b = 0, i.e. 2Ax = −b. 63
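A minimal sketch of this quadratic minimization (A, b, c are assumed example values): solve 2Ax = −b and check that the gradient vanishes there:

```python
import numpy as np

# Sketch: minimize q(x) = x^T A x + b^T x + c for positive definite A
# by solving the linear system 2 A x = -b (the vertex / critical point).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])                 # symmetric, positive definite (assumed example)
b = np.array([-2.0, 4.0])
c = 1.0

q = lambda x: x @ A @ x + b @ x + c
x_star = np.linalg.solve(2.0 * A, -b)      # necessary and sufficient condition

print("minimizer:", x_star)
print("gradient :", 2.0 * A @ x_star + b)  # ~0 at the minimum
print(q(x_star) < q(x_star + np.array([0.1, -0.1])))   # True: perturbations increase q
```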

Condition Number. How stable is the solution? It depends on the matrix A (illustrated: a well-conditioned "good" case and an ill-conditioned "bad" case). 64

Regularization. Sums of positive semi-definite matrices are positive semi-definite, so we can add a regularizing quadric. Example: it fills in the valleys, at the price of a bias in the solution. Original: x^T A x + b^T x + c. Regularized: x^T (A + λI) x + b^T x + c. 65

66 Rayleigh Quotient. Relation to eigenvalues: the min/max eigenvalues of a symmetric A can be expressed as constrained quadratic optimization: λ_min = min_{x ≠ 0} (x^T A x)/(x^T x) = min_{x^T x = 1} x^T A x, and λ_max = max_{x ≠ 0} (x^T A x)/(x^T x) = max_{x^T x = 1} x^T A x. The other way round: eigenvalues solve a certain type of constrained (non-convex) optimization problem.
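A small numerical check (random symmetric test matrix assumed) that the Rayleigh quotient of any vector lies between the smallest and largest eigenvalue:

```python
import numpy as np

# Sketch: the Rayleigh quotient x^T A x / x^T x of a symmetric matrix lies
# between the smallest and largest eigenvalue.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = 0.5 * (B + B.T)                                   # random symmetric matrix

eigvals = np.linalg.eigvalsh(A)                       # ascending eigenvalues
xs = rng.standard_normal((10000, 4))                  # many random directions
rayleigh = np.einsum('ij,jk,ik->i', xs, A, xs) / np.einsum('ij,ij->i', xs, xs)

print(eigvals[0], "<=", rayleigh.min(), "and", rayleigh.max(), "<=", eigvals[-1])
print(np.all(rayleigh >= eigvals[0] - 1e-12), np.all(rayleigh <= eigvals[-1] + 1e-12))
```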

67 Coordinate Transformations. One more interesting property: given a symmetric positive definite ("SPD") matrix M (all eigenvalues positive), such a matrix can always be written as the square of another matrix: M = TDT^T = (T D^{1/2} T^T)(T D^{1/2} T^T), with D^{1/2} = diag(√λ₁, …, √λₙ).
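A sketch computing this square root from the eigendecomposition (the SPD test matrix is constructed for the example):

```python
import numpy as np

# Sketch: an SPD matrix M = T D T^T has the square root S = T D^(1/2) T^T,
# so that S @ S recovers M (up to rounding).
rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
M = B @ B.T + 4.0 * np.eye(4)              # SPD by construction

eigvals, T = np.linalg.eigh(M)
S = T @ np.diag(np.sqrt(eigvals)) @ T.T    # symmetric square root

print(np.allclose(S @ S, M))               # True
print(np.all(eigvals > 0))                 # True: all eigenvalues positive
```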

SPD Quadrics. Interpretation of M = TDT^T: start with the unit quadric x^T I x (identity), scale the main axes (diagonal of D), and rotate to a different coordinate system (columns of T). Recovering the main axes from M: compute the eigensystem ("principal component analysis"). 68

69 Why should I care? What are quadrics good for? The log-probability of Gaussian models: estimation in Gaussian probabilistic models is quadratic optimization, which is solving linear systems of equations. Quadratic optimization is easy to use & solve, hence feasible :-), and it can approximate more complex models locally. Gaussian normal distribution: p_{μ,σ}(x) = 1/(√(2π)·σ) · exp(−(x − μ)²/(2σ²)).

Groups and Transformations

Groups. Definition: a set G with an operation ∘ is called a group (G, ∘) iff: Closure: ∘: G × G → G (always maps back to G). Associativity: (f ∘ g) ∘ h = f ∘ (g ∘ h). Neutral element: there exists id ∈ G such that for any g ∈ G: g ∘ id = g. Inverse element: for each g ∈ G there exists g⁻¹ ∈ G such that g ∘ g⁻¹ = g⁻¹ ∘ g = id. Abelian groups: the group is commutative iff always f ∘ g = g ∘ f. 71

Examples of Groups. Examples: G = invertible matrices, ∘ = composition (matrix multiplication). G = invertible affine transformations of ℝ^d, ∘ = composition (matrix form: homogeneous coordinates). G = bijections of a set S to itself, ∘ = composition. G = smooth C^k bijections of a set S to itself, ∘ = composition. G = global symmetry transforms of a shape, ∘ = composition. G = permutations of a discrete set, ∘ = composition. 72

Examples of Groups. Examples: G = invertible matrices, ∘ = composition (matrix multiplication); G = invertible affine transformations of ℝ^d. Subgroups: G = similarity transforms (translation, rotation, mirroring, scaling ≠ 0). E(d): G = rigid motions (translation, rotation, mirroring). SE(d): G = rigid motions (translation, rotation). O(d): G = orthogonal matrices (rotation, mirroring; columns/rows orthonormal). SO(d): G = orthogonal matrices (rotation; columns/rows orthonormal, determinant 1). G = translations (the only commutative group out of these). 73

Examples of Groups. Examples: G = global symmetry transforms of a 2D shape. 74

Examples of Groups. Examples: G = global symmetry transforms of a 3D shape (extended to infinity). 75

Outlook. More details on this later: symmetry groups, structural regularity, crystallographic groups and regular lattices. 76