Spring 2012, CIS 515: Fundamentals of Linear Algebra and Optimization. Jean Gallier

Spring, 2012 CIS 515 Fundamentals of Linear Algebra and Optimization Jean Gallier

Homework 5 & 6 + Project 3 & 4

Note: Problems B2 and B6 are for extra credit.

April 7, 2012; due May 7, 2012

Problem B1 (20 pts). Let $A$ be any real or complex $n \times n$ matrix and let $\|\cdot\|$ be any operator norm. Prove that for every $m \ge 1$,

$$\Big\| I + \sum_{k=1}^{m} \frac{A^k}{k!} \Big\| \le e^{\|A\|}.$$

Deduce from the above that the sequence $(E_m)$ of matrices

$$E_m = I + \sum_{k=1}^{m} \frac{A^k}{k!}$$

converges to a limit denoted $e^A$ and called the exponential of $A$.
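(The following is a small numerical sketch, not part of the assignment, illustrating the convergence claimed in Problem B1 with NumPy/SciPy; the helper name exp_partial_sum is mine, and scipy.linalg.expm is used only as a reference value.)

    import numpy as np
    from scipy.linalg import expm

    def exp_partial_sum(A, m):
        # E_m = I + sum_{k=1}^{m} A^k / k!, accumulating A^k / k! incrementally
        n = A.shape[0]
        E = np.eye(n)
        term = np.eye(n)
        for k in range(1, m + 1):
            term = term @ A / k          # term is now A^k / k!
            E = E + term
        return E

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    for m in (5, 10, 20):
        err = np.linalg.norm(exp_partial_sum(A, m) - expm(A), 2)
        print(m, err)                    # the operator-norm error shrinks rapidly with m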

Problem B2 (Extra Credit, 60 pts). Recall that the affine maps $\rho\colon \mathbb{R}^2 \to \mathbb{R}^2$ defined such that

$$\rho\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} w_1 \\ w_2 \end{pmatrix},$$

where $\theta, w_1, w_2 \in \mathbb{R}$, are rigid motions (or direct affine isometries), and that they form the group $\mathrm{SE}(2)$. Given any map $\rho$ as above, if we let

$$R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad X = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad W = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix},$$

then $\rho$ can be represented by the $3 \times 3$ matrix

$$A = \begin{pmatrix} R & W \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta & w_1 \\ \sin\theta & \cos\theta & w_2 \\ 0 & 0 & 1 \end{pmatrix}$$

in the sense that

$$\begin{pmatrix} \rho(X) \\ 1 \end{pmatrix} = \begin{pmatrix} R & W \\ 0 & 1 \end{pmatrix}\begin{pmatrix} X \\ 1 \end{pmatrix}$$

iff $\rho(X) = RX + W$.

(a) Consider the set of matrices of the form

$$\begin{pmatrix} 0 & -\theta & u \\ \theta & 0 & v \\ 0 & 0 & 0 \end{pmatrix},$$

where $\theta, u, v \in \mathbb{R}$. Verify that this set of matrices is a vector space isomorphic to $(\mathbb{R}^3, +)$. This vector space is denoted by $\mathfrak{se}(2)$. Show that in general $BC \ne CB$ if $B, C \in \mathfrak{se}(2)$.

(b) Given a matrix

$$B = \begin{pmatrix} 0 & -\theta & u \\ \theta & 0 & v \\ 0 & 0 & 0 \end{pmatrix},$$

prove that if $\theta = 0$, then

$$e^B = \begin{pmatrix} 1 & 0 & u \\ 0 & 1 & v \\ 0 & 0 & 1 \end{pmatrix},$$

and that if $\theta \ne 0$, then

$$e^B = \begin{pmatrix} \cos\theta & -\sin\theta & \big(u\sin\theta + v(\cos\theta - 1)\big)/\theta \\ \sin\theta & \cos\theta & \big(u(1 - \cos\theta) + v\sin\theta\big)/\theta \\ 0 & 0 & 1 \end{pmatrix}.$$

Hint. Prove that $B^3 = -\theta^2 B$ and that

$$e^B = I_3 + \frac{\sin\theta}{\theta}\, B + \frac{1 - \cos\theta}{\theta^2}\, B^2.$$

(c) Check that $e^B$ is a direct affine isometry in $\mathrm{SE}(2)$. Prove that the exponential map $\exp\colon \mathfrak{se}(2) \to \mathrm{SE}(2)$ is surjective. If $\theta \ne 2k\pi$ ($k \in \mathbb{Z}$), how do you need to restrict $\theta$ to get an injective map?
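(A sketch checking the closed form of part (b) against scipy.linalg.expm; the function name exp_se2 and the parameter values are mine.)

    import numpy as np
    from scipy.linalg import expm

    def exp_se2(theta, u, v):
        B = np.array([[0.0, -theta, u],
                      [theta, 0.0, v],
                      [0.0, 0.0, 0.0]])
        if theta == 0.0:
            return np.eye(3) + B            # e^B = I + B, since B^2 = 0 when theta = 0
        c, s = np.cos(theta), np.sin(theta)
        return np.array([
            [c, -s, (u * s + v * (c - 1.0)) / theta],
            [s,  c, (u * (1.0 - c) + v * s) / theta],
            [0.0, 0.0, 1.0]])

    theta, u, v = 0.7, 1.0, -2.0
    B = np.array([[0.0, -theta, u], [theta, 0.0, v], [0.0, 0.0, 0.0]])
    print(np.max(np.abs(exp_se2(theta, u, v) - expm(B))))   # ~1e-16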

Problem B3 (100 + 100 pts). Consider the affine maps $\rho\colon \mathbb{R}^2 \to \mathbb{R}^2$ defined such that

$$\rho\begin{pmatrix} x \\ y \end{pmatrix} = \alpha\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} w_1 \\ w_2 \end{pmatrix},$$

where $\theta, w_1, w_2, \alpha \in \mathbb{R}$, with $\alpha > 0$. These maps are called (direct) affine similitudes (for short, similitudes). The number $\alpha > 0$ is the scale factor of the similitude. These affine maps are the composition of a rotation of angle $\theta$, a rescaling by $\alpha > 0$, and a translation.

(a) Prove that these maps form a group, which we denote by $\mathrm{SIM}(2)$. Given any map $\rho$ as above, if we let

$$R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad X = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad W = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix},$$

then $\rho$ can be represented by the $3 \times 3$ matrix

$$A = \begin{pmatrix} \alpha R & W \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \alpha\cos\theta & -\alpha\sin\theta & w_1 \\ \alpha\sin\theta & \alpha\cos\theta & w_2 \\ 0 & 0 & 1 \end{pmatrix}$$

in the sense that

$$\begin{pmatrix} \rho(X) \\ 1 \end{pmatrix} = \begin{pmatrix} \alpha R & W \\ 0 & 1 \end{pmatrix}\begin{pmatrix} X \\ 1 \end{pmatrix}$$

iff $\rho(X) = \alpha R X + W$.

(b) Consider the set of matrices of the form

$$\begin{pmatrix} \lambda & -\theta & u \\ \theta & \lambda & v \\ 0 & 0 & 0 \end{pmatrix},$$

where $\theta, \lambda, u, v \in \mathbb{R}$. Verify that this set of matrices is a vector space isomorphic to $(\mathbb{R}^4, +)$. This vector space is denoted by $\mathfrak{sim}(2)$.

(c) Given a matrix

$$\Omega = \begin{pmatrix} \lambda & -\theta \\ \theta & \lambda \end{pmatrix},$$

prove that

$$e^\Omega = e^\lambda \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$

Hint. Write $\Omega = \lambda I + \theta J$, with

$$J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.$$

Observe that $J^2 = -I$ and prove by induction on $k$ that

$$\Omega^k = \frac{(\lambda + i\theta)^k + (\lambda - i\theta)^k}{2}\, I + \frac{(\lambda + i\theta)^k - (\lambda - i\theta)^k}{2i}\, J.$$
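(A quick numerical check of part (c), not required by the problem; the variable names and parameter values are mine, with scipy.linalg.expm as reference.)

    import numpy as np
    from scipy.linalg import expm

    lmbda, theta = 0.5, 1.3
    Om = np.array([[lmbda, -theta], [theta, lmbda]])
    closed = np.exp(lmbda) * np.array([[np.cos(theta), -np.sin(theta)],
                                       [np.sin(theta),  np.cos(theta)]])
    print(np.max(np.abs(expm(Om) - closed)))     # ~1e-16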

(d) As in (c), write

$$\Omega = \begin{pmatrix} \lambda & -\theta \\ \theta & \lambda \end{pmatrix}, \qquad U = \begin{pmatrix} u \\ v \end{pmatrix},$$

and let

$$B = \begin{pmatrix} \Omega & U \\ 0 & 0 \end{pmatrix}.$$

Prove that

$$B^n = \begin{pmatrix} \Omega^n & \Omega^{n-1} U \\ 0 & 0 \end{pmatrix},$$

where $\Omega^0 = I$. Prove that

$$e^B = \begin{pmatrix} e^\Omega & V U \\ 0 & 1 \end{pmatrix},$$

where

$$V = I + \sum_{k \ge 1} \frac{\Omega^k}{(k+1)!}.$$

(e) Prove that

$$V = I + \sum_{k \ge 1} \frac{\Omega^k}{(k+1)!} = \int_0^1 e^{\Omega t}\, dt.$$

Use this formula to prove that if $\lambda = \theta = 0$, then $V = I$, else

$$V = \frac{1}{\lambda^2 + \theta^2} \begin{pmatrix} \lambda(e^\lambda \cos\theta - 1) + e^\lambda \theta \sin\theta & -\theta(1 - e^\lambda \cos\theta) - e^\lambda \lambda \sin\theta \\ \theta(1 - e^\lambda \cos\theta) + e^\lambda \lambda \sin\theta & \lambda(e^\lambda \cos\theta - 1) + e^\lambda \theta \sin\theta \end{pmatrix}.$$

Observe that for $\lambda = 0$, the above gives back the expression in B2(b) for $\theta \ne 0$. Conclude that if $\lambda = \theta = 0$, then

$$e^B = \begin{pmatrix} I & U \\ 0 & 1 \end{pmatrix},$$

else

$$e^B = \begin{pmatrix} e^\Omega & V U \\ 0 & 1 \end{pmatrix},$$

with $e^\Omega$ and $V$ as above, and that $e^B \in \mathrm{SIM}(2)$, with scale factor $e^\lambda$.
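(A sketch checking the closed form of V in part (e) against its defining series; all names and the parameter values are mine.)

    import numpy as np

    def V_closed(lmbda, theta):
        # the matrix V of part (e), for (lmbda, theta) != (0, 0)
        el, c, s = np.exp(lmbda), np.cos(theta), np.sin(theta)
        d = lmbda**2 + theta**2
        p = lmbda * (el * c - 1.0) + el * theta * s
        q = theta * (1.0 - el * c) + el * lmbda * s
        return np.array([[p, -q], [q, p]]) / d

    def V_series(lmbda, theta, terms=60):
        # V = I + sum_{k >= 1} Omega^k / (k+1)!
        Om = np.array([[lmbda, -theta], [theta, lmbda]])
        V, P, fact = np.eye(2), np.eye(2), 1.0
        for k in range(1, terms):
            P = P @ Om
            fact *= k + 1                 # fact = (k+1)!
            V = V + P / fact
        return V

    print(np.max(np.abs(V_closed(0.3, 1.2) - V_series(0.3, 1.2))))   # ~1e-16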

(f) Prove that the exponential map $\exp\colon \mathfrak{sim}(2) \to \mathrm{SIM}(2)$ is surjective.

(g) Similitudes can be used to describe certain deformations (or flows) of a deformable body $B_t$ in the plane. Given some initial shape $B$ in the plane (for example, a circle), a deformation of $B$ is given by a piecewise differentiable curve $D\colon [0, T] \to \mathrm{SIM}(2)$, where each $D(t)$ is a similitude (for some $T > 0$). The deformed body $B_t$ at time $t$ is given by $B_t = D(t)(B)$. The surjectivity of the exponential map $\exp\colon \mathfrak{sim}(2) \to \mathrm{SIM}(2)$ implies that there is a map $\log\colon \mathrm{SIM}(2) \to \mathfrak{sim}(2)$, although it is multivalued. The exponential map and the log function allow us to work in the simpler (noncurved) Euclidean space $\mathfrak{sim}(2)$. For instance, given two similitudes $A_1, A_2 \in \mathrm{SIM}(2)$ specifying the shape of $B$ at two different times, we can compute $\log(A_1)$ and $\log(A_2)$, which are just elements of the Euclidean space $\mathfrak{sim}(2)$, form the linear interpolant $(1 - t)\log(A_1) + t\log(A_2)$, and then apply the exponential map to get an interpolating deformation

$$t \mapsto e^{(1-t)\log(A_1) + t\log(A_2)}, \qquad t \in [0, 1].$$

(A code sketch of this linear interpolation appears after part (2) below.) Also, given a sequence of snapshots of the deformable body $B$, say $A_0, A_1, \ldots, A_m$, where each $A_i$ is a similitude, we can try to find an interpolating deformation (a curve in $\mathrm{SIM}(2)$) by finding a simpler curve $t \mapsto C(t)$ in $\mathfrak{sim}(2)$ (say, a B-spline) interpolating $\log A_0, \log A_1, \ldots, \log A_m$. Then the curve $t \mapsto e^{C(t)}$ yields a deformation in $\mathrm{SIM}(2)$ interpolating $A_0, A_1, \ldots, A_m$.

(1) (75 pts) Write a program interpolating between two deformations.

(2) (25 pts) Write a program using your cubic spline interpolation program from the first project to interpolate a sequence of deformations given by similitudes $A_0, A_1, \ldots, A_m$.
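(A sketch of the linear interpolation described in part (g), using the generic matrix log/exp from SciPy rather than the closed forms derived in parts (c)-(f); the helper sim2 and the two example similitudes are mine.)

    import numpy as np
    from scipy.linalg import expm, logm

    def sim2(alpha, theta, w):
        # 3x3 matrix of the similitude with scale alpha, angle theta, translation w
        A = np.eye(3)
        A[:2, :2] = alpha * np.array([[np.cos(theta), -np.sin(theta)],
                                      [np.sin(theta),  np.cos(theta)]])
        A[:2, 2] = w
        return A

    A1 = sim2(1.0, 0.0, np.array([0.0, 0.0]))
    A2 = sim2(2.0, np.pi / 2, np.array([3.0, 1.0]))
    L1, L2 = logm(A1), logm(A2)          # elements of sim(2)

    for t in np.linspace(0.0, 1.0, 5):
        D = expm((1.0 - t) * L1 + t * L2).real   # interpolating deformation D(t)
        print(np.round(D, 3))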

Problem B4 (40 pts). Recall that the coordinates of the cross product $u \times v$ of two vectors $u = (u_1, u_2, u_3)$ and $v = (v_1, v_2, v_3)$ in $\mathbb{R}^3$ are

$$(u_2 v_3 - u_3 v_2,\; u_3 v_1 - u_1 v_3,\; u_1 v_2 - u_2 v_1).$$

(a) If we let $U$ be the matrix

$$U = \begin{pmatrix} 0 & -u_3 & u_2 \\ u_3 & 0 & -u_1 \\ -u_2 & u_1 & 0 \end{pmatrix},$$

check that the coordinates of the cross product $u \times v$ are given by

$$\begin{pmatrix} 0 & -u_3 & u_2 \\ u_3 & 0 & -u_1 \\ -u_2 & u_1 & 0 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} u_2 v_3 - u_3 v_2 \\ u_3 v_1 - u_1 v_3 \\ u_1 v_2 - u_2 v_1 \end{pmatrix}.$$

(b) Show that the set of matrices of the form

$$U = \begin{pmatrix} 0 & -u_3 & u_2 \\ u_3 & 0 & -u_1 \\ -u_2 & u_1 & 0 \end{pmatrix}$$

is a vector space isomorphic to $(\mathbb{R}^3, +)$. This vector space is denoted by $\mathfrak{so}(3)$. Show that such matrices are never invertible. Find the kernel of the linear map associated with a matrix $U$. Describe geometrically the action of the linear map defined by a matrix $U$. Show that, when restricted to the plane orthogonal to $u = (u_1, u_2, u_3)$, if $u$ is a unit vector, then $U$ behaves like a rotation by $\pi/2$.
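(A numerical check of part (a) and of the kernel claim in part (b); the helper name skew is my own.)

    import numpy as np

    def skew(u):
        u1, u2, u3 = u
        return np.array([[0.0, -u3,  u2],
                         [ u3, 0.0, -u1],
                         [-u2,  u1, 0.0]])

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([-1.0, 0.5, 2.0])
    print(np.cross(u, v), skew(u) @ v)   # the two results agree
    print(np.linalg.det(skew(u)))        # 0: such a matrix is never invertible
    print(skew(u) @ u)                   # 0: u spans the kernel of U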

Problem B5 (100 pts). (a) For any matrix

$$A = \begin{pmatrix} 0 & -c & b \\ c & 0 & -a \\ -b & a & 0 \end{pmatrix},$$

if we let $\theta = \sqrt{a^2 + b^2 + c^2}$ and

$$B = \begin{pmatrix} a^2 & ab & ac \\ ab & b^2 & bc \\ ac & bc & c^2 \end{pmatrix},$$

prove that

$$A^2 = -\theta^2 I + B, \qquad AB = BA = 0.$$

From the above, deduce that $A^3 = -\theta^2 A$.

(b) Prove that the exponential map $\exp\colon \mathfrak{so}(3) \to \mathrm{SO}(3)$ is given by

$$\exp A = e^A = \cos\theta\, I_3 + \frac{\sin\theta}{\theta}\, A + \frac{1 - \cos\theta}{\theta^2}\, B,$$

or equivalently by

$$e^A = I_3 + \frac{\sin\theta}{\theta}\, A + \frac{1 - \cos\theta}{\theta^2}\, A^2$$

if $\theta \ne 0$, with $\exp(0_3) = I_3$.

(c) Prove that $e^A$ is an orthogonal matrix of determinant $+1$, i.e., a rotation matrix.
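(A sketch checking the formula of part (b) against scipy.linalg.expm, together with the claims of part (c); the function name rodrigues and the parameter values are mine.)

    import numpy as np
    from scipy.linalg import expm

    def rodrigues(a, b, c):
        A = np.array([[0.0, -c, b], [c, 0.0, -a], [-b, a, 0.0]])
        theta = np.sqrt(a * a + b * b + c * c)
        if theta == 0.0:
            return np.eye(3)
        return (np.eye(3) + (np.sin(theta) / theta) * A
                + ((1.0 - np.cos(theta)) / theta**2) * (A @ A))

    a, b, c = 0.3, -1.1, 0.6
    A = np.array([[0.0, -c, b], [c, 0.0, -a], [-b, a, 0.0]])
    R = rodrigues(a, b, c)
    print(np.max(np.abs(R - expm(A))))                            # ~1e-16
    print(np.max(np.abs(R @ R.T - np.eye(3))), np.linalg.det(R))  # orthogonal, det +1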

(d) Prove that the exponential map $\exp\colon \mathfrak{so}(3) \to \mathrm{SO}(3)$ is surjective. For this, proceed as follows: pick any rotation matrix $R \in \mathrm{SO}(3)$.

(1) The case $R = I$ is trivial.

(2) If $R \ne I$ and $\operatorname{tr}(R) \ne -1$, then

$$\exp^{-1}(R) = \left\{ \frac{\theta}{2\sin\theta}\,(R - R^T) \;\Big|\; 1 + 2\cos\theta = \operatorname{tr}(R) \right\}.$$

(Recall that $\operatorname{tr}(R) = r_{11} + r_{22} + r_{33}$, the trace of the matrix $R$.) Note that both $\theta$ and $2\pi - \theta$ yield a matrix in $\exp^{-1}(R)$.

(3) If $R \ne I$ and $\operatorname{tr}(R) = -1$, then prove that the eigenvalues of $R$ are $1, -1, -1$, that $R^T = R$, and that $R^2 = I$. Prove that the matrix

$$S = \frac{1}{2}(R - I)$$

is a symmetric matrix whose eigenvalues are $-1, -1, 0$. Thus $S$ can be diagonalized with respect to an orthogonal matrix $Q$ as

$$S = Q \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} Q^T.$$

Prove that there exists a skew symmetric matrix

$$U = \begin{pmatrix} 0 & -d & c \\ d & 0 & -b \\ -c & b & 0 \end{pmatrix}$$

so that

$$U^2 = S = \frac{1}{2}(R - I).$$

Observe that

$$U^2 = \begin{pmatrix} -(c^2 + d^2) & bc & bd \\ bc & -(b^2 + d^2) & cd \\ bd & cd & -(b^2 + c^2) \end{pmatrix},$$

and use this to conclude that if $U^2 = S$, then $b^2 + c^2 + d^2 = 1$. Then show that

$$\exp^{-1}(R) = \left\{ (2k+1)\pi \begin{pmatrix} 0 & -d & c \\ d & 0 & -b \\ -c & b & 0 \end{pmatrix} \;\Big|\; k \in \mathbb{Z} \right\},$$

where $(b, c, d)$ is any unit vector such that, for the corresponding skew symmetric matrix $U$, we have $U^2 = S$.

(e) To find a skew symmetric matrix $U$ so that $U^2 = S = \frac{1}{2}(R - I)$ as in (d), we can solve the system

$$\begin{pmatrix} b^2 & bc & bd \\ bc & c^2 & cd \\ bd & cd & d^2 \end{pmatrix} = I + S.$$

We immediately get $b^2, c^2, d^2$, and then, since one of $b, c, d$ is nonzero, say $b$, if we choose the positive square root of $b^2$, we can determine $c$ and $d$ from $bc$ and $bd$. Implement a computer program to solve the above system.
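(A sketch of the computation in part (e), assuming the system reads as reconstructed above, i.e., that the matrix of products equals $I + S$; the recovered axis is determined only up to sign, and the helper name is mine.)

    import numpy as np

    def axis_from_R(R):
        # for R in SO(3) with tr(R) = -1, find a unit vector (b, c, d) with U^2 = S
        S = 0.5 * (R - np.eye(3))
        M = np.eye(3) + S                 # M = [[b^2, bc, bd], [bc, c^2, cd], [bd, cd, d^2]]
        i = int(np.argmax(np.diag(M)))    # pick an index whose square is nonzero
        x = np.sqrt(M[i, i])              # positive square root
        return M[i, :] / x                # row i of M is x * (b, c, d)

    R = np.diag([-1.0, -1.0, 1.0])        # rotation by pi about the z-axis; tr(R) = -1
    print(axis_from_R(R))                 # (0, 0, 1), up to sign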

Problem B6 (Extra Credit, 100 pts). (a) Consider the set of affine maps $\rho$ of $\mathbb{R}^3$ defined such that

$$\rho(X) = RX + W,$$

where $R$ is a rotation matrix (an orthogonal matrix of determinant $+1$) and $W$ is some vector in $\mathbb{R}^3$. Every such map can be represented by the $4 \times 4$ matrix

$$\begin{pmatrix} R & W \\ 0 & 1 \end{pmatrix}$$

in the sense that

$$\begin{pmatrix} \rho(X) \\ 1 \end{pmatrix} = \begin{pmatrix} R & W \\ 0 & 1 \end{pmatrix}\begin{pmatrix} X \\ 1 \end{pmatrix}$$

iff $\rho(X) = RX + W$. Prove that these maps are affine bijections and that they form a group, denoted by $\mathrm{SE}(3)$ (the direct affine isometries, or rigid motions, of $\mathbb{R}^3$).

(b) Let us now consider the set of $4 \times 4$ matrices of the form

$$B = \begin{pmatrix} \Omega & W \\ 0 & 0 \end{pmatrix},$$

where $\Omega$ is a skew-symmetric matrix

$$\Omega = \begin{pmatrix} 0 & -c & b \\ c & 0 & -a \\ -b & a & 0 \end{pmatrix}$$

and $W$ is a vector in $\mathbb{R}^3$. Verify that this set of matrices is a vector space isomorphic to $(\mathbb{R}^6, +)$. This vector space is denoted by $\mathfrak{se}(3)$. Show that in general $BC \ne CB$.

(c) Given a matrix

$$B = \begin{pmatrix} \Omega & W \\ 0 & 0 \end{pmatrix}$$

as in (b), prove that

$$B^n = \begin{pmatrix} \Omega^n & \Omega^{n-1} W \\ 0 & 0 \end{pmatrix},$$

where $\Omega^0 = I_3$. Given

$$\Omega = \begin{pmatrix} 0 & -c & b \\ c & 0 & -a \\ -b & a & 0 \end{pmatrix},$$

let $\theta = \sqrt{a^2 + b^2 + c^2}$. Prove that if $\theta = 0$, then

$$e^B = \begin{pmatrix} I_3 & W \\ 0 & 1 \end{pmatrix},$$

and that if $\theta \ne 0$, then

$$e^B = \begin{pmatrix} e^\Omega & V W \\ 0 & 1 \end{pmatrix},$$

where

$$V = I_3 + \sum_{k \ge 1} \frac{\Omega^k}{(k+1)!}.$$

(d) Prove that

$$e^\Omega = I_3 + \frac{\sin\theta}{\theta}\, \Omega + \frac{1 - \cos\theta}{\theta^2}\, \Omega^2$$

and

$$V = I_3 + \frac{1 - \cos\theta}{\theta^2}\, \Omega + \frac{\theta - \sin\theta}{\theta^3}\, \Omega^2.$$

Hint. Use the fact that $\Omega^3 = -\theta^2 \Omega$.

(e) Prove that $e^B$ is a direct affine isometry in $\mathrm{SE}(3)$. Prove that $V$ is invertible, and that

$$V^{-1} = Z = I_3 - \frac{1}{2}\,\Omega + \frac{1}{\theta^2}\left(1 - \frac{\theta\sin\theta}{2(1 - \cos\theta)}\right)\Omega^2$$

for $\theta \ne 0$. Hint. Assume that the inverse of $V$ is of the form $Z = I_3 + a\Omega + b\Omega^2$, and show that $a, b$ are given by a system of linear equations that always has a unique solution. Prove that the exponential map $\exp\colon \mathfrak{se}(3) \to \mathrm{SE}(3)$ is surjective.

Remark: As in the case of the plane, rigid motions in $\mathrm{SE}(3)$ can be used to describe certain deformations of bodies in $\mathbb{R}^3$.
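(A sketch of the closed-form exponential from parts (c)-(d), checked against scipy.linalg.expm; the function name exp_se3 and the parameter values are mine.)

    import numpy as np
    from scipy.linalg import expm

    def exp_se3(a, b, c, W):
        Om = np.array([[0.0, -c, b], [c, 0.0, -a], [-b, a, 0.0]])
        th = np.sqrt(a * a + b * b + c * c)
        E = np.eye(4)
        if th == 0.0:
            E[:3, 3] = W
            return E
        Om2 = Om @ Om
        eO = np.eye(3) + (np.sin(th) / th) * Om + ((1 - np.cos(th)) / th**2) * Om2
        V = np.eye(3) + ((1 - np.cos(th)) / th**2) * Om + ((th - np.sin(th)) / th**3) * Om2
        E[:3, :3] = eO
        E[:3, 3] = V @ W
        return E

    a, b, c, W = 0.4, -0.2, 0.9, np.array([1.0, 2.0, -1.0])
    B = np.zeros((4, 4))
    B[:3, :3] = np.array([[0.0, -c, b], [c, 0.0, -a], [-b, a, 0.0]])
    B[:3, 3] = W
    print(np.max(np.abs(exp_se3(a, b, c, W) - expm(B))))   # ~1e-16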

Problem B7 (80 pts). Let $A$ be a real $2 \times 2$ matrix

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.$$

(1) Prove that the squares of the singular values $\sigma_1 \ge \sigma_2$ of $A$ are the roots of the quadratic equation

$$X^2 - \operatorname{tr}(A^T A)\, X + (\det(A))^2 = 0.$$

(2) If we let

$$\mu(A) = \frac{a_{11}^2 + a_{12}^2 + a_{21}^2 + a_{22}^2}{2\,|a_{11}a_{22} - a_{12}a_{21}|},$$

prove that

$$\operatorname{cond}_2(A) = \frac{\sigma_1}{\sigma_2} = \mu(A) + \big(\mu(A)^2 - 1\big)^{1/2}.$$

(3) Consider the subset $S$ of invertible matrices whose entries $a_{ij}$ are integers such that $0 \le a_{ij} \le 100$. Prove that the functions $\operatorname{cond}_2(A)$ and $\mu(A)$ reach a maximum on the set $S$ for the same values of $A$. Check that for the matrix

$$A_m = \begin{pmatrix} 100 & 99 \\ 99 & 98 \end{pmatrix}$$

we have

$$\mu(A_m) = 19{,}603, \qquad \det(A_m) = -1,$$

and

$$\operatorname{cond}_2(A_m) \approx 39{,}206.$$
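(A quick check of the formula in part (2) on the matrix $A_m$ of part (3), using NumPy.)

    import numpy as np

    A = np.array([[100.0, 99.0], [99.0, 98.0]])
    mu = (A**2).sum() / (2.0 * abs(np.linalg.det(A)))
    print(mu)                             # 19603
    print(mu + np.sqrt(mu**2 - 1.0))      # ~39206
    print(np.linalg.cond(A, 2))           # agrees with the line above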

(4) Prove that for all $A \in S$, if $|\det(A)| \ge 2$, then $\mu(A) \le 10{,}000$. Conclude that the maximum of $\mu(A)$ on $S$ is achieved for matrices such that $\det(A) = \pm 1$. Prove that finding matrices that maximize $\mu$ on $S$ is equivalent to finding some integers $n_1, n_2, n_3, n_4$ such that

$$0 \le n_4 \le n_3 \le n_2 \le n_1 \le 100,$$
$$n_1^2 + n_2^2 + n_3^2 + n_4^2 \ge 100^2 + 99^2 + 99^2 + 98^2 = 39{,}206,$$
$$n_1 n_4 - n_2 n_3 = \pm 1.$$

You may use without proof the fact that the only solution to the above constraints is the multiset $\{100, 99, 99, 98\}$.

(5) Deduce from part (4) that the matrices in $S$ for which $\mu$ has a maximum value are

$$A_m = \begin{pmatrix} 100 & 99 \\ 99 & 98 \end{pmatrix}, \quad \begin{pmatrix} 98 & 99 \\ 99 & 100 \end{pmatrix}, \quad \begin{pmatrix} 99 & 100 \\ 98 & 99 \end{pmatrix}, \quad \begin{pmatrix} 99 & 98 \\ 100 & 99 \end{pmatrix},$$

and check that $\mu$ has the same value for these matrices. Conclude that

$$\max_{A \in S} \operatorname{cond}_2(A) = \operatorname{cond}_2(A_m).$$

(6) Solve the system

$$\begin{pmatrix} 100 & 99 \\ 99 & 98 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 199 \\ 197 \end{pmatrix}.$$

Perturb the right-hand side $b$ by

$$\delta b = \begin{pmatrix} -0.0097 \\ 0.0106 \end{pmatrix}$$

and solve the new system

$$A_m y = b + \delta b,$$

where $y = (y_1, y_2)$. Check that

$$\delta x = y - x = \begin{pmatrix} 2 \\ -2.0203 \end{pmatrix}.$$

Compute $\|x\|$, $\|\delta x\|$, $\|b\|$, $\|\delta b\|$, and estimate

$$c = \frac{\|\delta x\|}{\|x\|} \left( \frac{\|\delta b\|}{\|b\|} \right)^{-1}.$$

Check that $c \le \operatorname{cond}_2(A_m) \approx 39{,}206$.
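(The computation asked for in part (6), done numerically; the variable names are mine.)

    import numpy as np

    A = np.array([[100.0, 99.0], [99.0, 98.0]])
    b = np.array([199.0, 197.0])
    db = np.array([-0.0097, 0.0106])

    x = np.linalg.solve(A, b)             # x = (1, 1)
    y = np.linalg.solve(A, b + db)
    dx = y - x                            # approximately (2, -2.0203)

    c = (np.linalg.norm(dx) / np.linalg.norm(x)) / (np.linalg.norm(db) / np.linalg.norm(b))
    print(x, dx)
    print(c, np.linalg.cond(A, 2))        # c comes out close to cond_2(A_m)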

Problem B8 (20 pts). Let $E$ be a real vector space of finite dimension $n$. Say that two bases $(u_1, \ldots, u_n)$ and $(v_1, \ldots, v_n)$ of $E$ have the same orientation iff $\det(P) > 0$, where $P$ is the change of basis matrix from $(u_1, \ldots, u_n)$ to $(v_1, \ldots, v_n)$, namely the matrix whose $j$th column consists of the coordinates of $v_j$ over the basis $(u_1, \ldots, u_n)$.

(a) Prove that having the same orientation is an equivalence relation with two equivalence classes. An orientation of a vector space $E$ is the choice of any fixed basis, say $(e_1, \ldots, e_n)$, of $E$. Any other basis $(v_1, \ldots, v_n)$ has the same orientation as $(e_1, \ldots, e_n)$ (and is said to be positive or direct) iff $\det(P) > 0$, else it is said to have the opposite orientation of $(e_1, \ldots, e_n)$ (or to be negative or indirect), where $P$ is the change of basis matrix from $(e_1, \ldots, e_n)$ to $(v_1, \ldots, v_n)$. An oriented vector space is a vector space with some chosen orientation (a positive basis).

(b) Let $B_1 = (u_1, \ldots, u_n)$ and $B_2 = (v_1, \ldots, v_n)$ be two orthonormal bases. For any sequence of vectors $(w_1, \ldots, w_n)$ in $E$, let $\det_{B_1}(w_1, \ldots, w_n)$ be the determinant of the matrix whose columns are the coordinates of the $w_j$'s over the basis $B_1$, and similarly for $\det_{B_2}(w_1, \ldots, w_n)$. Prove that if $B_1$ and $B_2$ have the same orientation, then

$$\det{}_{B_1}(w_1, \ldots, w_n) = \det{}_{B_2}(w_1, \ldots, w_n).$$

Given any oriented vector space $E$, for any sequence of vectors $(w_1, \ldots, w_n)$ in $E$, the common value $\det_B(w_1, \ldots, w_n)$ for all positive orthonormal bases $B$ of $E$ is denoted

$$\lambda_E(w_1, \ldots, w_n)$$

and called a volume form of $(w_1, \ldots, w_n)$.

(c) Given any Euclidean oriented vector space $E$ of dimension $n$, for any $n - 1$ vectors $w_1, \ldots, w_{n-1}$ in $E$, check that the map

$$x \mapsto \lambda_E(w_1, \ldots, w_{n-1}, x)$$

is a linear form. Then prove that there is a unique vector, denoted $w_1 \times \cdots \times w_{n-1}$, such that

$$\lambda_E(w_1, \ldots, w_{n-1}, x) = (w_1 \times \cdots \times w_{n-1}) \cdot x$$

for all $x \in E$. The vector $w_1 \times \cdots \times w_{n-1}$ is called the cross-product of $(w_1, \ldots, w_{n-1})$. It is a generalization of the cross-product in $\mathbb{R}^3$ (when $n = 3$).

Problem B9 (40 pts). Given $p$ vectors $(u_1, \ldots, u_p)$ in a Euclidean space $E$ of dimension $n \ge p$, the Gram determinant (or Gramian) of the vectors $(u_1, \ldots, u_p)$ is the determinant

$$\operatorname{Gram}(u_1, \ldots, u_p) = \begin{vmatrix} \langle u_1, u_1 \rangle & \langle u_1, u_2 \rangle & \cdots & \langle u_1, u_p \rangle \\ \langle u_2, u_1 \rangle & \langle u_2, u_2 \rangle & \cdots & \langle u_2, u_p \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle u_p, u_1 \rangle & \langle u_p, u_2 \rangle & \cdots & \langle u_p, u_p \rangle \end{vmatrix}.$$

(1) Prove that

$$\operatorname{Gram}(u_1, \ldots, u_n) = \lambda_E(u_1, \ldots, u_n)^2.$$

Hint. If $(e_1, \ldots, e_n)$ is an orthonormal basis and $A$ is the matrix of the vectors $(u_1, \ldots, u_n)$ over this basis,

$$\det(A)^2 = \det(A^T A) = \det(A_i \cdot A_j),$$

where $A_i$ denotes the $i$th column of the matrix $A$, and $(A_i \cdot A_j)$ denotes the $n \times n$ matrix with entries $A_i \cdot A_j$.

(2) Prove that

$$\|u_1 \times \cdots \times u_{n-1}\| = \sqrt{\operatorname{Gram}(u_1, \ldots, u_{n-1})}.$$

Hint. Letting $w = u_1 \times \cdots \times u_{n-1}$, observe that

$$\lambda_E(u_1, \ldots, u_{n-1}, w) = \langle w, w \rangle = \|w\|^2,$$

and show that

$$\|w\|^4 = \lambda_E(u_1, \ldots, u_{n-1}, w)^2 = \operatorname{Gram}(u_1, \ldots, u_{n-1}, w) = \operatorname{Gram}(u_1, \ldots, u_{n-1})\, \|w\|^2.$$
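(A numerical check of part (2) in $\mathbb{R}^4$, i.e., $n = 4$ with three vectors, computing the generalized cross product by cofactor expansion of $\lambda_E(u_1, u_2, u_3, x)$ along the column of $x$; all names are mine.)

    import numpy as np

    rng = np.random.default_rng(1)
    U = rng.standard_normal((3, 4))       # rows u1, u2, u3 in R^4

    # w = u1 x u2 x u3: component i is the signed cofactor obtained by
    # deleting coordinate i from the three vectors
    w = np.array([(-1) ** (i + 1) * np.linalg.det(np.delete(U, i, axis=1))
                  for i in range(4)])

    gram = np.linalg.det(U @ U.T)         # Gram(u1, u2, u3)
    print(np.linalg.norm(w) ** 2, gram)   # the two values agree up to rounding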

Problem B10 (60 pts). (1) Implement the Gram-Schmidt orthonormalization procedure and the modified Gram-Schmidt procedure. You may use the pseudo-code shown in (2).

(2) Implement the method to compute the QR decomposition of an invertible matrix. You may use the following pseudo-code:

    function qr(A: matrix): [Q, R] pair of matrices
    begin
      n = dim(A);
      R = 0; (the zero matrix)
      Q(:, 1) = A(:, 1);
      R(1, 1) = sqrt(Q(:, 1) . Q(:, 1));
      Q(:, 1) = Q(:, 1)/R(1, 1);
      for k := 1 to n - 1 do
        w = A(:, k + 1);
        for i := 1 to k do
          R(i, k + 1) = A(:, k + 1) . Q(:, i);
          w = w - R(i, k + 1) Q(:, i);
        endfor;
        Q(:, k + 1) = w;
        R(k + 1, k + 1) = sqrt(Q(:, k + 1) . Q(:, k + 1));
        Q(:, k + 1) = Q(:, k + 1)/R(k + 1, k + 1);
      endfor;
    end

Test it on various matrices, including those involved in the earlier projects.

(3) Given any invertible matrix $A$, define the sequences $A_k$, $Q_k$, $R_k$ as follows:

$$A_1 = A, \qquad Q_k R_k = A_k, \qquad A_{k+1} = R_k Q_k$$

for all $k \ge 1$, where in the second equation, $Q_k R_k$ is the QR decomposition of $A_k$ given by part (2). Run the above procedure for various values of $k$ (50, 100, ...) and various real matrices $A$, in particular some symmetric matrices; also run the Matlab command eig on $A_k$ and compare the diagonal entries of $A_k$ with the eigenvalues given by eig($A_k$). What do you observe? How do you explain this behavior? (A NumPy sketch of parts (2) and (3) appears after the point total below.)

TOTAL: 400 + 160 points + 160 extra.
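(For reference, a NumPy sketch of Problem B10, parts (2) and (3), following the pseudo-code above; the function names are mine, and np.linalg.eigvalsh stands in for Matlab's eig on the symmetric test matrix.)

    import numpy as np

    def qr_gs(A):
        # column-by-column QR via classical Gram-Schmidt, as in the pseudo-code
        n = A.shape[0]
        Q = np.zeros((n, n))
        R = np.zeros((n, n))
        for k in range(n):
            w = A[:, k].astype(float)
            for i in range(k):
                R[i, k] = A[:, k] @ Q[:, i]
                w = w - R[i, k] * Q[:, i]
            R[k, k] = np.sqrt(w @ w)
            Q[:, k] = w / R[k, k]
        return Q, R

    def qr_iteration(A, iters=100):
        Ak = A.astype(float)
        for _ in range(iters):
            Q, R = qr_gs(Ak)
            Ak = R @ Q                    # A_{k+1} = R_k Q_k, similar to A_k
        return Ak

    A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]])
    print(np.round(np.diag(qr_iteration(A)), 6))      # diagonal entries of A_k
    print(np.round(np.linalg.eigvalsh(A), 6))         # eigenvalues (ordering may differ)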