Geometric Modeling Summer Semester 2010 Mathematical Tools (1)


Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra

Today...
Topics: Mathematical Background
- Linear algebra
- Analysis & differential geometry
- Numerical techniques

Mathematical Tools: Linear Algebra

Overview: Linear algebra
- Vector spaces
- Linear maps
- Quadrics

Vectors & Vector Spaces
- Vectors, coordinates & points
- Formal definition of a vector space
- Vector algebra
- Generalizations: infinite-dimensional vector spaces, function spaces, approximation with finite-dimensional spaces
- More tools: dot product and norms, the cross product

Vectors
Vectors are arrows in space; classically: 2- or 3-dimensional Euclidean space.

Vector Operations
Adding vectors: v + w is the concatenation of the arrows v and w.

Vector Operations
Scalar multiplication: scaling vectors, e.g. 1.0v, 1.5v, 2.0v (incl. mirroring with negative scalars).

You can combine it...
Linear combinations: this is basically all you can do.
r = Σ_{i=1}^n λ_i v_i

Vector Spaces
Many classes of objects share the same structure:
- Geometric objects: 1, 2, 3, 4, ... dimensional Euclidean vectors
- But also a lot of other mathematical objects: vectors with complex numbers or over finite fields, certain sets of functions, polynomials, ...
Approach the problem from a more abstract level:
- More general: saves time, reduces the number of proofs
- We can still resort to geometric vectors to get an intuition about what's going on

Vector Spaces
Definition: Vector space V over a field F
- Consists of a set of vectors V
- F is a field (usually the real numbers, F = R)
- Provides two operations:
  - Adding vectors: u = v + w (u, v, w ∈ V)
  - Scaling vectors: w = λv (w ∈ V, λ ∈ F)
- The two operations are closed: operations on any elements of the vector space yield elements of the vector space itself.
- ...and finally, a number of properties that have to hold:

Vector Spaces
Definition: Vector space V over a field F (cont.)
(a1) ∀u, v, w ∈ V: (u + v) + w = u + (v + w)
(a2) ∀u, v ∈ V: u + v = v + u
(a3) ∃0_V ∈ V: ∀v ∈ V: v + 0_V = v
(a4) ∀v ∈ V: ∃w ∈ V: v + w = 0_V
(s1) ∀v ∈ V, λ, μ ∈ F: λ(μv) = (λμ)v
(s2) for 1_F ∈ F: ∀v ∈ V: 1_F · v = v
(s3) ∀λ ∈ F, v, w ∈ V: λ(v + w) = λv + λw
(s4) ∀λ, μ ∈ F, v ∈ V: (λ + μ)v = λv + μv

Vector Spaces
From these formal assumptions, a long list of derived properties (theorems) can be deduced; they will hold for any vector space. In particular, we will see that the assumptions are sufficient to obtain the coordinate columns we started with (in the finite-dimensional case).

Properties
Some properties you can easily prove:
- The zero vector 0_V is unique.
- Multiplication with the scalar 0_F yields the zero vector: 0_F · v = 0_V.
- The additive inverse −v is unique given v.
- Multiplication by −1 yields the inverse vector.
- And so on...

Span and Basis
- span{v} is the line through the origin in direction v.
- {u, v} form a basis of R².
- span{u, v, w} = R², but three vectors u, v, w in R² are linearly dependent, so {u, v, w} is not a basis.

Example Spaces
Examples of finite-dimensional vector spaces:
- Of course: R, R², R³, R⁴, ...
- Standard basis of R³: e₁ = (1, 0, 0)^T, e₂ = (0, 1, 0)^T, e₃ = (0, 0, 1)^T
- Coordinates: (x, y, z)^T = x·e₁ + y·e₂ + z·e₃ = x·i + y·j + z·k

Example Spaces
Examples of finite-dimensional vector spaces: polynomials of fixed degree.
For example, all polynomials of 2nd order:
- General form: ax² + bx + c
- Addition: (a₁x² + b₁x + c₁) + (a₂x² + b₂x + c₂) = (a₁ + a₂)x² + (b₁ + b₂)x + (c₁ + c₂)
- Scalar multiplication: λ(ax² + bx + c) = (λa)x² + (λb)x + (λc)
Might be confusing: the polynomial itself is a non-linear function of x; this does not relate to the vector-space structure.
Coordinates: [a, b, c]^T, with basis {x², x, 1} for these coordinates.
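
The vector-space view above can be sketched in a few lines of code: 2nd-order polynomials become coordinate vectors [a, b, c] with respect to the basis {x², x, 1}. The function names are illustrative, not from the lecture.

```python
# Polynomials a*x^2 + b*x + c represented as coordinate vectors [a, b, c].

def poly_add(p, q):
    """Vector addition: add polynomials coefficient-wise."""
    return [pi + qi for pi, qi in zip(p, q)]

def poly_scale(lam, p):
    """Scalar multiplication: scale every coefficient."""
    return [lam * pi for pi in p]

def poly_eval(p, x):
    """Evaluation at x (not part of the vector-space structure)."""
    a, b, c = p
    return a * x * x + b * x + c

p = [1.0, 2.0, 3.0]    # x^2 + 2x + 3
q = [0.0, 1.0, -1.0]   # x - 1
s = poly_add(p, q)     # x^2 + 3x + 2

# The vector-space operations are compatible with pointwise evaluation:
assert poly_eval(s, 2.0) == poly_eval(p, 2.0) + poly_eval(q, 2.0)
assert poly_eval(poly_scale(3.0, p), 2.0) == 3.0 * poly_eval(p, 2.0)
```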

Example Spaces
Infinite-dimensional vector spaces: polynomials (of any degree).
- Need to represent coefficients of arbitrary degree; coordinate vectors can become arbitrarily long.
- General form: poly(x) = Σ_{i≥0} a_i x^i (only a finite subset of the a_i nonzero)
- Basis: {x^i | i = 0, 1, 2, ...}
- Coordinate vectors: (a₀, a₁, a₂, a₃, ...)

Spaces of Sequences
First generalization: make vectors infinitely long, (x, y, z, ...) → (a₁, a₂, a₃, ...): spaces of sequences.
Dimension = ∞, countable.

Example Spaces
More infinite-dimensional vector spaces: function spaces.
- Space of all functions f: R → R
- Space of all smooth C^k functions f: R → R
- Space of all functions f: [0..1] → R
- Not a vector space: f: [0..1] → [0..1] (the sum of two such functions can leave the range [0..1])

Function Spaces
Vector operations for f, g: Ω → R:
- (f + g)(x) := f(x) + g(x) (∀x ∈ Ω)
- (λf)(x) := λ·f(x) (∀x ∈ Ω)
- The zero vector is the zero function: 0_V = (f: f(x) ≡ 0)

Function Spaces
Intuition: start with a finite-dimensional vector [f₁, f₂, ..., f₉]^T and increase the sampling density towards infinity: [f₁, f₂, ..., f₁₈]^T, ..., until the vector becomes a function f(x) (d = 9 → d = 18 → d = ∞).
For functions on the real numbers: uncountably many dimensions.

Approximation of Function Spaces
Finite-dimensional subspaces:
- Function spaces of infinite dimension are hard to represent on a computer.
- For numerical purposes, finite-dimensional subspaces are used to approximate the larger space.
- Two basic approaches:

Approximation of Function Spaces
Here is the recipe:
- We are given an infinite-dimensional function space V, and we are looking for f ∈ V with a certain property.
- From the function space V we choose linearly independent functions f₁, ..., f_d ∈ V to form the d-dimensional subspace span{f₁, ..., f_d}.
- Instead of looking for f ∈ V, we look only among the functions f̃ := Σ_{i=1}^d λ_i f_i for one that best matches the desired property (it might be just an approximation, though).
- The good thing: f̃ is described by (λ₁, ..., λ_d). Good for the computer...
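
The recipe above can be sketched numerically: approximate a target function in the subspace span{1, x, x²} by least squares. The target f(x) = |x − 0.5| and the basis choice are illustrative assumptions, not from the lecture.

```python
import numpy as np

# Approximate f(x) = |x - 0.5| on [0, 1] in the 3-dimensional subspace
# span{1, x, x^2}, sampled at N points and fitted by least squares.

N = 200
x = np.linspace(0.0, 1.0, N)
f = np.abs(x - 0.5)

# Columns = the chosen basis functions f_1, ..., f_d evaluated at the samples.
B = np.column_stack([np.ones_like(x), x, x**2])

# Coordinates (lambda_1, ..., lambda_d) of the best-matching f~ in the subspace.
lam, *_ = np.linalg.lstsq(B, f, rcond=None)
f_approx = B @ lam

# f~ is fully described by d = 3 numbers -- good for the computer.
assert lam.shape == (3,)
assert np.max(np.abs(f_approx - f)) < 0.2   # only an approximation, though
```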

Approximation of Function Spaces
Two approaches:
- Construct a basis that already provides a subspace containing the functions you want. Typically, the coefficients then have an intuitive meaning. Bezier splines, B-splines, and NURBS are all about that.
- Choose a basis that can approximate the functions you might want, then pick the closest. This is the standard approach in numerical solutions of partial differential equations and integral equations. Basic idea: define a measure of correctness C(f), then try to maximize C(f̃).

Finite-Dimensional Function Spaces
Typical basis sets for the space of functions f: [a, b] → R; some d-dimensional subspaces:
- span of d box (indicator) functions (piecewise constant basis)
- span{1, x, x², ..., x^{d−1}} (monomial basis of degree d − 1)
- span{1, sin x, cos x, sin 2x, cos 2x, ..., sin((d−1)/2)x, cos((d−1)/2)x} (Fourier basis of order (d−1)/2, usually a = 0, b = 2π)
It all depends on the application, of course...

Examples
(Figure: plots of the monomial basis and the Fourier basis on [0, 2π].)

More Tools for Vectors
More operations:
- Dot product / scalar product / inner product (measures distances and angles)
- Cross product (only in R³)

The Standard Scalar Product
The standard dot product for vectors v, w ∈ R^d is defined as:
⟨v, w⟩ := v^T w = Σ_{i=1}^d v_i w_i
For v, w ∈ R³: v·w = v_x w_x + v_y w_y + v_z w_z

Properties
Geometric properties:
- length(v) := |v| = √(v·v) (Pythagoras)
- v·w = |v| |w| cos∠(v, w) (projection property)
In particular: v is orthogonal to w ⇔ v·w = 0

Properties
Gram–Schmidt orthogonalization: replace w by w′ = w − (v·w / v·v)·v, which is orthogonal to v (subtract the projection of w onto v).
Repeat for multiple vectors to create an orthogonal set of vectors {v₁′, ..., vₙ′} from the set {v₁, ..., vₙ}.
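
A minimal Gram–Schmidt sketch of the step above: each vector loses its projections onto the previously produced ones. The function name is illustrative.

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn linearly independent vectors into an orthogonal set."""
    ortho = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in ortho:
            w -= (u @ w) / (u @ u) * u   # subtract projection onto u
        ortho.append(w)
    return ortho

u, v = gram_schmidt([np.array([1.0, 1.0]), np.array([0.0, 1.0])])
assert abs(u @ v) < 1e-12   # the resulting vectors are orthogonal
```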

Properties
Scalar product properties (abstract definition):
- Symmetric: v·w = w·v
- Bilinear: u·(λv + w) = λ(u·v) + u·w
- Positive definite: v·v ≥ 0, and v·v = 0 ⇔ v = 0

Dot Product on Function Spaces
We need dot products on function spaces, too. For square-integrable functions f, g: Ω ⊆ R^n → R, the standard scalar product is defined as:
⟨f, g⟩ := ∫_Ω f(x) g(x) dx
It measures an abstract norm and angle between functions (not in a geometric sense).
Orthogonal functions: they don't influence each other in linear combinations — adding one to the other does not change the value in the other one's direction.
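
The integral above can be approximated with a simple quadrature to make orthogonality concrete; the midpoint-rule helper and the choice Ω = [0, 2π] are illustrative assumptions.

```python
import math

def inner(f, g, a=0.0, b=2.0 * math.pi, n=10000):
    """Approximate <f, g> = integral of f(x)*g(x) over [a, b] (midpoint rule)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

# sin and cos are orthogonal on [0, 2*pi]:
assert abs(inner(math.sin, math.cos)) < 1e-9
# the squared "length" of sin on [0, 2*pi] is pi:
assert abs(inner(math.sin, math.sin) - math.pi) < 1e-6
```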

Linear Maps
- Linear maps and matrices
- Inverting linear maps and linear systems of equations
- Eigenvectors and eigenvalues
- Ill-posed problems

Linear Maps
A function f: V → W between vector spaces V, W over a field F is a linear map if and only if:
- ∀v₁, v₂ ∈ V: f(v₁ + v₂) = f(v₁) + f(v₂)
- ∀v ∈ V, λ ∈ F: f(λv) = λf(v)
Theorem: a linear map is uniquely determined if we specify a mapping value for each basis vector of V.

Matrix Representation
Any linear map f between finite-dimensional spaces can be represented as a matrix:
- We fix a basis (usually the standard basis).
- For each basis vector b_i of V, we specify the mapped vector w_i = f(b_i).
- Then the map f is given by: f(v) = f(v₁b₁ + ... + vₙbₙ) = v₁w₁ + ... + vₙwₙ

Matrix Representation
This can be written as a matrix–vector product: f(v) = (w₁ | ... | wₙ) · (v₁, ..., vₙ)^T
The columns of the matrix are the images of the basis vectors (with respect to which the coordinates of v are given).

Matrix Multiplication
Composition of linear maps corresponds to matrix products: f(g(x)) = (f ∘ g)(x) = M_f M_g x
Matrix product calculation: the (x, y)-th entry of M_f M_g is the dot product of row x of M_f and column y of M_g.

Example
- Rotation matrix: M_rot = [cos α, −sin α; sin α, cos α]
- Identity matrix: I := [1, 0, 0; 0, 1, 0; 0, 0, 1]

Orthogonal Matrices
A matrix M is called orthogonal if all of its columns (and rows) c_i are orthonormal, i.e. c_i·c_i = 1 and c_i·c_j = 0 for i ≠ j.
The inverse of an orthogonal matrix is its transpose: M⁻¹ = M^T, i.e. M M^T = M^T M = I.
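
The inverse-equals-transpose property can be checked directly on the rotation matrix from the earlier example; plain lists keep the sketch dependency-free.

```python
import math

def rot(a):
    """2D rotation matrix, which is orthogonal."""
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

M = rot(0.7)
P = matmul(M, transpose(M))   # M * M^T should be the identity
assert all(abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```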

Affine Maps
- Translations are not linear (except for the zero translation).
- A combination of a linear map and a translation can be described by f(x) = Mx + t. This is called an affine map.
- Compositions of affine maps are affine: f(g(x)) = M_f(M_g x + t_g) + t_f = (M_f M_g)x + (M_f t_g + t_f)
- For a vector space V, a subspace S ⊆ V and a point p ∈ V, the set {x = p + v, v ∈ S} is called an affine subspace of V. If p ≠ 0, this is not a vector space.

Linear Systems of Equations
Problem: invert an affine map.
- Given: Mx = b. We know M and b, and are looking for x.
- Solution: the set of solutions is always an affine subspace of R^n (i.e., a point, a line, a plane, ...), or the empty set.
- There are numerous algorithms for solving linear systems; here is a brief summary...

Solvers for Linear Systems
Algorithms for solving linear systems of equations:
- Gaussian elimination: O(n³) operations for n × n matrices.
- We can do better, in particular for special cases:
  - Band matrices: constant bandwidth
  - Sparse matrices: constant number of non-zero entries per row; store only the non-zero entries. Instead of (3.5, 0, 0, 0, 7, 0, 0), store [(1: 3.5), (5: 7)].

Solvers for Linear Systems
Algorithms for solving linear systems of equations:
- Band matrices with constant bandwidth: modified elimination algorithm with O(n) operations.
- Iterative Gauss–Seidel solver: converges for diagonally dominant matrices. Typically O(n) iterations, each costing O(n) for a sparse matrix.
- Conjugate gradient solver: works for symmetric, positive definite matrices in O(n) iterations, but typically we get a good solution already after O(√n) iterations.
More details on iterative solvers: J. R. Shewchuk: An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, 1994.
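
A bare-bones Gauss–Seidel sketch of the iterative solver mentioned above, for a small diagonally dominant system; the fixed iteration count is an illustrative choice, not a tuned stopping criterion.

```python
def gauss_seidel(A, b, iterations=100):
    """Iteratively solve Ax = b; converges for diagonally dominant A."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # update in place, using newest values
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # diagonally dominant
b = [6.0, 9.0]
x = gauss_seidel(A, b)

# Exact solution of 4x + y = 6, 2x + 5y = 9 is (7/6, 4/3):
assert abs(x[0] - 7.0 / 6.0) < 1e-9 and abs(x[1] - 4.0 / 3.0) < 1e-9
```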

Determinants
- Assign a scalar det(M) to a square matrix M.
- The scalar measures the volume of the parallelepiped formed by the column vectors v₁, v₂, v₃ of M.

Properties
A few properties:
- det(A)·det(B) = det(AB)
- det(λA) = λⁿ·det(A) (for an n × n matrix A)
- det(A⁻¹) = det(A)⁻¹
- det(A^T) = det(A)
Determinants can be computed efficiently using Gaussian elimination.

Eigenvectors & Eigenvalues
Definition: if for a linear map M and a non-zero vector x we have Mx = λx, we call λ an eigenvalue of M and x the corresponding eigenvector.

Example
Intuition: in the direction of an eigenvector, the linear map acts like a scaling.
Example: two eigenvalues (0.5 and 2), two eigenvectors; the standard basis contains no eigenvectors.

Eigenvectors & Eigenvalues
Diagonalization: in case an n × n matrix M has n linearly independent eigenvectors, we can diagonalize M by transforming to this coordinate system: M = TDT⁻¹.

Spectral Theorem
If M is a symmetric n × n matrix of real numbers (i.e. M = M^T), there exists an orthogonal set of n eigenvectors. This means every (real) symmetric matrix can be diagonalized as M = TDT^T with an orthogonal matrix T.

Computation
Simple algorithm: power iteration for symmetric matrices. Computes the largest eigenvalue (in magnitude), even for large matrices.
Algorithm:
- Start with a random vector (maybe multiple tries)
- Repeatedly multiply with the matrix
- Normalize the vector after each step
- Repeat until the ratio before / after normalization converges (this is the eigenvalue)
Important intuition: the largest eigenvalue is the dominant component of the linear map.
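
The algorithm above fits in a few lines; the example matrix, seed, and iteration count are illustrative choices.

```python
import numpy as np

# Power iteration: repeated multiplication and normalization converges to
# the dominant eigenvector; the normalization ratio gives the eigenvalue.

rng = np.random.default_rng(0)
M = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric example matrix

x = rng.standard_normal(2)               # random start vector
for _ in range(100):
    y = M @ x
    lam = np.linalg.norm(y)              # ratio before / after normalization
    x = y / lam                          # normalize after each step

# Compare against numpy's symmetric eigenvalue solver.
assert abs(lam - np.max(np.abs(np.linalg.eigvalsh(M)))) < 1e-10
```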

Powers of Matrices
What happens: a symmetric matrix can be written as M = TDT^T = T·diag(λ₁, ..., λₙ)·T^T.
Taking it to the k-th power yields:
M^k = TDT^T · TDT^T · ... · TDT^T = T D^k T^T = T·diag(λ₁^k, ..., λₙ^k)·T^T
Bottom line: eigenvalue analysis is the key to understanding powers of matrices.

Improvements
Improvements to the power method:
- Find the smallest eigenvalue? Use the inverse matrix.
- Find all eigenvalues (for a symmetric matrix)? Run repeatedly, orthogonalizing the current estimate against the already-known eigenvectors in each iteration (Gram–Schmidt).
- How long does it take? Convergence depends on the ratio to the next-smaller eigenvalue; the gap between the components increases exponentially with the number of iterations.
There are more sophisticated algorithms based on this idea.

Generalization: SVD
Singular value decomposition:
- Let M be an arbitrary real matrix (it may be rectangular).
- Then M can be written as M = UDV^T.
- The matrices U, V are orthogonal; D is a diagonal matrix (it might contain zeros). The diagonal entries are called singular values.
- U and V are different in general. For symmetric positive semi-definite matrices they coincide, and the singular values are the eigenvalues.

Singular Value Decomposition
M = U · D · V^T, with U and V orthogonal and D = diag(σ₁, σ₂, σ₃, σ₄, ...) a (possibly rectangular) diagonal matrix of singular values, padded with zeros.

Singular Value Decomposition
The SVD can be used to solve linear systems of equations. For full-rank, square M:
M = UDV^T ⇒ M⁻¹ = (UDV^T)⁻¹ = (V^T)⁻¹ D⁻¹ U⁻¹ = V D⁻¹ U^T
Good numerical properties (numerically stable), but expensive. The OpenCV library provides a very good implementation of the SVD.
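
The inversion formula above can be sketched with numpy's SVD (the matrix and right-hand side are illustrative):

```python
import numpy as np

# Solve a full-rank square system Mx = b via x = V D^-1 U^T b,
# and compare with a direct solve.

M = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

U, s, Vt = np.linalg.svd(M)          # M = U diag(s) Vt
x = Vt.T @ ((U.T @ b) / s)           # V D^-1 U^T b

assert np.allclose(M @ x, b)
assert np.allclose(x, np.linalg.solve(M, b))
```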

Inverse Problems
Setting:
- A (physical) process f takes place, transforming the original input x into an output b.
- Task: recover x from b.
Examples: 3D structure from photographs; tomography: values from line integrals; 3D geometry from a noisy 3D scan.

Linear Inverse Problems
Assumption: f is linear and finite-dimensional: f(x) = b, i.e. M_f x = b.
Inversion of f is said to be an ill-posed problem if one of the following three conditions holds:
- There is no solution
- There is more than one solution
- There is exactly one solution, but the SVD contains very small singular values

Ill-posed Problems
Rationale: small singular values amplify errors. Assume our input is inexact (e.g. measurement noise).
Reminder: M⁻¹ = V D⁻¹ U^T. The orthogonal factors V and U^T preserve the norm of x, so they do not cause problems; the factor D⁻¹ is decisive.

Ill-posed Problems
Rationale: small singular values amplify errors. Reminder: x = M⁻¹b = (V D⁻¹ U^T)b.
Say D looks like this: D := diag(2.5, 1.1, 0.9, 0.000000001).
Any input noise in b in the direction of the fourth right singular vector will be amplified by 10⁹. If our measurement precision is less than that, the result will be unusable. This does not depend on how we invert the matrix.
Condition number: σ_max / σ_min.
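
The amplification effect can be demonstrated with the diagonal matrix above (the right-hand side and noise vector are illustrative):

```python
import numpy as np

# A tiny singular value turns harmless measurement noise into a huge
# error in the reconstructed x.

D = np.diag([2.5, 1.1, 0.9, 1e-9])
b = np.ones(4)
noise = np.array([0.0, 0.0, 0.0, 1e-6])   # noise along the 4th direction

x_clean = np.linalg.solve(D, b)
x_noisy = np.linalg.solve(D, b + noise)

# The 1e-6 perturbation is amplified by roughly 1/1e-9 = 1e9:
err = np.max(np.abs(x_noisy - x_clean))   # approximately 1000
assert 999.0 < err < 1001.0

cond = np.max(D.diagonal()) / np.min(D.diagonal())   # roughly 2.5e9
assert 2.4e9 < cond < 2.6e9
```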

Regularization
Regularization aims at avoiding the inversion problems. There are various techniques; in general the goal is to ignore the misleading information:
- Subspace inversion: do not use directions with small singular values (needs an SVD).
- Additional assumptions: assume smoothness (or something similar) in case of unclear or missing information, so that the compound problem (f + assumptions) is well-posed.

Quadrics
- Multivariate polynomials
- Quadratic optimization
- Quadrics & eigenvalue problems

Multivariate Polynomials
A multivariate polynomial of total degree d: a function f: R^n → R where f is a polynomial in the components of x; in any direction, f(s + tr) is a one-dimensional polynomial of maximum degree d in t.
Examples:
- f([x, y]^T) := x + y + xy is of total degree 2. In the diagonal direction, we obtain f(t·[1/√2, 1/√2]^T) = √2·t + t²/2.
- f([x, y]^T) := c₂₀x² + c₀₂y² + c₁₁xy + c₁₀x + c₀₁y + c₀₀ is a general quadratic polynomial in two variables.

Quadratic Polynomials
In general, any quadratic polynomial in n variables can be written as:
x^T A x + b^T x + c
where A is an n × n matrix, b is an n-dimensional vector, and c is a number.
The matrix A can always be chosen to be symmetric: if it isn't, we can substitute A by 0.5·(A + A^T) without changing the polynomial.

Example
f([x, y]^T) = [x y] · [1, 2; 3, 4] · [x, y]^T = x² + (2 + 3)xy + 4y² = x² + (2.5 + 2.5)xy + 4y² = [x y] · [1, 2.5; 2.5, 4] · [x, y]^T
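
The symmetrization step can be verified numerically: replacing A by 0.5·(A + A^T) leaves x^T A x unchanged (the test point is an arbitrary illustrative choice).

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
A_sym = 0.5 * (A + A.T)                  # [[1, 2.5], [2.5, 4]]

x = np.array([1.7, -0.3])
assert np.isclose(x @ A @ x, x @ A_sym @ x)   # same quadratic polynomial
assert np.allclose(A_sym, [[1.0, 2.5], [2.5, 4.0]])
```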

Quadratic Polynomials
Specifying quadratic polynomials x^T A x + b^T x + c:
- b shifts the function in space (if A has full rank): for symmetric A, (x + μ)^T A (x + μ) + c = x^T A x + 2μ^T A x + μ^T A μ + c, i.e. b^T = 2μ^T A.
- c is an additive constant.

Some Properties
Important properties: multivariate polynomials form a vector space. We can add them component-wise:
(2x² + 3y² + 4xy + 2x + 2y + 4) + (3x² + 2y² + 1xy + 5x + 5y + 5) = 5x² + 5y² + 5xy + 7x + 7y + 9
In vector notation:
x^T A₁ x + b₁^T x + c₁ + λ(x^T A₂ x + b₂^T x + c₂) = x^T (A₁ + λA₂) x + (b₁ + λb₂)^T x + (c₁ + λc₂)

Quadratic Polynomials
Quadrics: the zero-level set of such a quadratic polynomial is called a quadric.
- The shape depends on the eigenvalues of A.
- b shifts the object in space.
- c sets the level.

Shapes of Quadrics
Shape analysis:
- A is symmetric, so A can be diagonalized with orthogonal eigenvectors: A = Q Λ Q^T = Q·diag(λ₁, ..., λₙ)·Q^T
- Q contains the principal axes of the quadric.
- The eigenvalues determine the quadratic growth (up, down, speed of growth).

Shapes of Quadratic Polynomials
(Figure: surfaces for λ₁ = 1, λ₂ = 1; λ₁ = 1, λ₂ = −1; λ₁ = 1, λ₂ = 0.)

The Iso-Lines: Quadrics
- Elliptic: λ₁ > 0, λ₂ > 0
- Hyperbolic: λ₁ < 0, λ₂ > 0
- Degenerate case: λ₁ = 0, λ₂ ≠ 0

Quadratic Optimization
Assume we want to minimize a quadratic objective function x^T A x + b^T x + c where A has only positive eigenvalues. This means it is a paraboloid with a unique minimum. The vertex (critical point) of the paraboloid can be determined by simply solving a linear system. More on this later (we need some more analysis first).

Rayleigh Quotient
Relation to eigenvalues: the minimum and maximum eigenvalues of a symmetric matrix A can be expressed as constrained quadratic optimization problems:
λ_min(A) = min_{x ≠ 0} (x^T A x) / (x^T x) = min_{|x| = 1} x^T A x
λ_max(A) = max_{x ≠ 0} (x^T A x) / (x^T x) = max_{|x| = 1} x^T A x
The other way round: eigenvalues solve a certain type of constrained (non-convex) optimization problem.
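
The bounds above can be checked numerically: the Rayleigh quotient of any nonzero x lies between the extreme eigenvalues and attains them at the eigenvectors (the example matrix and seed are illustrative).

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
evals, evecs = np.linalg.eigh(A)         # eigenvalues in ascending order

def rayleigh(x):
    return (x @ A @ x) / (x @ x)

rng = np.random.default_rng(1)
for _ in range(100):
    q = rayleigh(rng.standard_normal(2))
    assert evals[0] - 1e-12 <= q <= evals[-1] + 1e-12   # always in between

assert np.isclose(rayleigh(evecs[:, 0]), evals[0])    # minimum attained
assert np.isclose(rayleigh(evecs[:, -1]), evals[-1])  # maximum attained
```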

Coordinate Transformations
One more interesting property: given a symmetric positive definite ("SPD") matrix M (all eigenvalues positive), such a matrix can always be written as the square of another matrix:
M = TDT^T = (T D^{1/2})(D^{1/2} T^T) = (T D^{1/2})(T D^{1/2})^T, with D^{1/2} = diag(√λ₁, ..., √λₙ)

SPD Quadrics
Interpretation of M = TDT^T:
- Start with a unit positive quadric x^T x (identity I).
- Scale the main axes (diagonal of D).
- Rotate to a different coordinate system (columns of T).
Recovering the main axes from M: compute the eigensystem ("principal component analysis").
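
The factorization above can be sketched via the eigendecomposition (the SPD example matrix is an illustrative choice):

```python
import numpy as np

# An SPD matrix M = T D T^T can be written as S S^T with S = T D^{1/2}.

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
evals, T = np.linalg.eigh(M)             # M = T diag(evals) T^T
assert np.all(evals > 0)                 # SPD: all eigenvalues positive

S = T @ np.diag(np.sqrt(evals))          # S = T D^{1/2}
assert np.allclose(S @ S.T, M)           # M is the "square" of S
```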

Software

GeoX
GeoX comes with several linear algebra libraries:
- 2D, 3D, 4D vectors and matrices: LinearAlgebra.h
- Large (dense) vectors and matrices: DynamicLinearAlgebra.h
- Gaussian elimination: invertMatrix()
- Sparse matrices: SparseLinearAlgebra.h
- Iterative solvers (Gauss–Seidel, conjugate gradients, power iteration): IterativeSolvers.h