A Robust Eigensolver for 3×3 Symmetric Matrices
David Eberly, Geometric Tools, Redmond WA

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Created: December 6, 2014
Last Modified: September 20, 2016

Contents

1 Introduction
2 An Iterative Algorithm
3 A Variation of the Iterative Algorithm
4 Implementation of the Iterative Algorithm
5 A Noniterative Algorithm
  5.1 Computing the Eigenvalues
  5.2 Computing the Eigenvectors
6 Implementation of the Noniterative Algorithm
1 Introduction

Let A be a 3×3 symmetric matrix of real numbers. From linear algebra, we know that A has all real-valued eigenvalues and a full basis of eigenvectors. Let D = Diagonal(λ0, λ1, λ2) be the diagonal matrix whose diagonal entries are the eigenvalues; the eigenvalues are not necessarily distinct. Let R = [U0 U1 U2] be an orthogonal matrix whose columns are linearly independent eigenvectors, ordered consistently with the diagonal entries of D; that is, A Ui = λi Ui. The eigendecomposition is A = R D R^T.

The typical presentation in a linear algebra class shows that the eigenvalues are the roots of the cubic polynomial det(A − λI) = 0, where I is the 3×3 identity matrix; the left-hand side of the equation is the determinant of the matrix A − λI. Closed-form equations exist for the roots of a cubic polynomial, so in theory one could compute the roots and, for each root λ, solve the equation (A − λI)U = 0 for nonzero vectors U. Although correct theoretically, computing the roots of the cubic polynomial using the closed-form equations is known to be a generally non-robust algorithm in floating-point arithmetic.

2 An Iterative Algorithm

A matrix M is specified by M = [m_ij] for 0 ≤ i ≤ 2 and 0 ≤ j ≤ 2. The classical numerical approach is to use a Householder reflection matrix H to compute B = H^T A H so that b02 = 0; that is, B is a tridiagonal matrix. The matrix H is a reflection, so H^T = H. A sequence of Givens rotations G_k is then used to drive the superdiagonal entries to zero. This is an iterative process for which a termination condition is required. If n rotations are applied, we obtain

    G_{n−1}^T · · · G_0^T H^T A H G_0 · · · G_{n−1} = D̃ = D + E

where D̃ is a tridiagonal matrix, D is diagonal, and E has entries that are sufficiently small that the diagonal entries of D are reasonable approximations to the eigenvalues of A. The orthogonal matrix R = H G_0 · · · G_{n−1} has columns that are reasonable approximations to the eigenvectors of A.
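To make the rotation idea concrete, here is a minimal, self-contained 2×2 sketch (mine, not the GTEngine source): one rotation G^T M G zeroes the off-diagonal entry of a symmetric 2×2 matrix; the iterative algorithm above repeats this kind of step on 3×3 tridiagonal matrices.

```cpp
#include <cmath>

// Illustrative 2x2 analogue (not from the GTEngine source): the rotation
// G = {{c,-s},{s,c}} applied as G^T*M*G zeroes the off-diagonal entry b of
// the symmetric matrix M = {{a,b},{b,d}} when tan(2*theta) = 2*b/(a - d).
void JacobiRotate(double a, double b, double d,
    double& newA, double& newD, double& c, double& s)
{
    double theta = 0.5 * std::atan2(2.0 * b, a - d);
    c = std::cos(theta);
    s = std::sin(theta);
    // Diagonal entries of G^T*M*G; the off-diagonal entry
    // (c*c - s*s)*b - s*c*(a - d) is zero by the choice of theta.
    newA = c * c * a + 2.0 * s * c * b + s * s * d;
    newD = s * s * a - 2.0 * s * c * b + c * c * d;
}
```

For M = {{2,1},{1,2}}, a single rotation produces the diagonal entries 3 and 1, the exact eigenvalues. For a 3×3 tridiagonal matrix, each rotation reintroduces a small off-diagonal entry elsewhere, which is why a sequence of rotations and a termination condition are needed.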
The source code that implements this algorithm is in class SymmetricEigensolver, found in the files GeometricTools/GTEngine/Include/GteSymmetricEigensolver.h, inl. It is an implementation of the Symmetric QR Algorithm described in Matrix Computations, 2nd edition, by G. H. Golub and C. F. Van Loan, The Johns Hopkins University Press, Baltimore MD, Fourth Printing. The Householder Tridiagonalization algorithm is used to reduce the matrix A to tridiagonal form, and the Implicit Symmetric QR Step with Wilkinson Shift is used for the iterative reduction from tridiagonal to diagonal. Numerically, we have errors E = R^T A R − D. The book mentions that one expects ‖E‖ to be approximately µ‖A‖, where ‖M‖ denotes the Frobenius norm of M and where µ is the unit roundoff for the floating-point arithmetic: 2^−23 for float, which is FLT_EPSILON = 1.192092896e-7f, and 2^−52 for double, which is DBL_EPSILON = 2.2204460492503131e-16.

The book uses the condition |a(i,i+1)| ≤ ε(|a(i,i)| + |a(i+1,i+1)|) to determine when the reduction decouples into smaller problems; that is, when a superdiagonal term is effectively zero, the iterations may be applied separately to two tridiagonal submatrices. Our source code is implemented instead to deal with floating-point numbers,
    sum = a(i,i) + a(i+1,i+1);
    if (sum + a(i,i+1) == sum)
    {
        // The superdiagonal term a(i,i+1) is effectively zero.
    }

That is, the superdiagonal term is small relative to its diagonal neighbors, and so it is effectively zero. The unit tests have shown that this interpretation of decoupling is effective.

3 A Variation of the Iterative Algorithm

The variation uses the Householder transformation to compute B = H^T A H where b02 = 0. Let c = cos θ and s = sin θ for some angle θ. The product is

    H^T A H = [ c  s  0 ] [ a00 a01 a02 ] [ c  s  0 ]
              [ s −c  0 ] [ a01 a11 a12 ] [ s −c  0 ]
              [ 0  0  1 ] [ a02 a12 a22 ] [ 0  0  1 ]

            = [ c²a00 + 2sc a01 + s²a11        sc(a00 − a11) + (s² − c²)a01   c a02 + s a12 ]
              [ sc(a00 − a11) + (s² − c²)a01   c²a11 − 2sc a01 + s²a00        s a02 − c a12 ]
              [ c a02 + s a12                  s a02 − c a12                  a22           ]

            = [ b00 b01 b02 ]
              [ b01 b11 b12 ]
              [ b02 b12 b22 ]

We require 0 = b02 = c a02 + s a12 = (c, s) · (a02, a12), which occurs when (c, s) = (a12, −a02) / √(a02² + a12²).

Rather than using Givens rotations for the iterations, we may instead use reflection matrices. Suppose that |b12| ≤ |b01|. We will choose a sequence of reflection matrices to drive b12 to zero. Choose a reflection matrix

    G1 = [ c1  0  −s1 ]
         [ s1  0   c1 ]
         [ 0   1   0  ]

where c1 = cos θ1 and s1 = sin θ1 for some angle θ1. Consider the product

    P1 = G1^T B G1 =
      [ c1²b00 + 2s1c1 b01 + s1²b11            s1 b12    s1c1(b11 − b00) + (c1² − s1²)b01 ]
      [ s1 b12                                 b22       c1 b12                           ]
      [ s1c1(b11 − b00) + (c1² − s1²)b01       c1 b12    c1²b11 − 2s1c1 b01 + s1²b00      ]
We need P1 to be tridiagonal, which requires

    0 = s1c1(b11 − b00) + (c1² − s1²)b01 = sin(2θ1)(b11 − b00)/2 + cos(2θ1)b01

leading us to two possible choices,

    (cos(2θ1), sin(2θ1)) = ±(b00 − b11, 2b01) / √((b00 − b11)² + 4b01²)

We must extract c1 = cos θ1 and s1 = sin θ1 from whichever solution we choose. In fact, we will choose cos(2θ1) ≤ 0 so that |c1| ≤ |s1|. Let σ = Sign(b00 − b11); then

    (cos(2θ1), sin(2θ1)) = (−|b00 − b11|, 2σb01) / √((b00 − b11)² + 4b01²)

The half-angle relations give

    sin θ1 = √((1 − cos(2θ1))/2),  cos θ1 = sin(2θ1) / (2 sin θ1)

and cos θ1 = √((1 + cos(2θ1))/2) ≤ √((1 − cos(2θ1))/2) = sin θ1 because we chose cos(2θ1) ≤ 0. The previous construction guarantees that |cos θ1| ≤ 1/√2 < 1. Let P1 = [p(1)_ij]; we now know p(1)_12 = c1 b12, so |p(1)_12| ≤ |b12|/√2. Our precondition for computing P1 from B was that |b12| ≤ |b01|. A postcondition is that |p(1)_12| = |c1 b12| ≤ |s1 b12| = |p(1)_01|, which is the precondition if we construct and multiply by another reflection G2 of the same form as G1.

Define P0 = B and let P_{i+1} = G_{i+1}^T P_i G_{i+1} be the iterative process. The conclusion is that the (1,2)-entry of the output matrix is smaller in magnitude than the (1,2)-entry of the input matrix, so indeed repeated iterations will drive the (1,2)-entry to zero. If c_i = cos θ_i and s_i = sin θ_i, we have

    |p(i+1)_12| = |c_{i+1}| |p(i)_12| ≤ 2^{−1/2} |p(i)_12|

which implies |p(i)_12| ≤ 2^{−i/2} |b12| for i ≥ 1. In the limit as i → ∞, the (1,2)-entry is forced to zero. In fact, the bounds here allow you to select the final i so that 2^{−i/2} |b12| is a number whose nearest floating-point representation is zero. The number of iterations of the algorithm is limited by this i.

If instead we find that |b01| ≤ |b12|, we can choose the reflections to be of the form

    G1 = [ 0   1   0  ]
         [ c1  0   s1 ]
         [ −s1 0   c1 ]

Consider the product

    P1 = G1^T B G1 =
      [ c1²b11 − 2s1c1 b12 + s1²b22            c1 b01    s1c1(b11 − b22) + (c1² − s1²)b12 ]
      [ c1 b01                                 b00       s1 b01                           ]
      [ s1c1(b11 − b22) + (c1² − s1²)b12       s1 b01    s1²b11 + 2s1c1 b12 + c1²b22      ]
Define σ = Sign(b22 − b11) and choose

    (cos(2θ1), sin(2θ1)) = (−|b22 − b11|, 2σb12) / √((b22 − b11)² + 4b12²)

    sin θ1 = √((1 − cos(2θ1))/2),  cos θ1 = sin(2θ1) / (2 sin θ1)

so that cos θ1 = √((1 + cos(2θ1))/2) ≤ √((1 − cos(2θ1))/2) = sin θ1. We can apply the reflections G_i to drive the (0,1)-entry to zero.

A less aggressive approach is to use the condition mentioned previously that compares the superdiagonal entry to the diagonal entries using floating-point arithmetic.

4 Implementation of the Iterative Algorithm

The source code that implements the iterative algorithm for symmetric 3×3 matrices is in class SymmetricEigensolver3x3, found in the file GeometricTools/GTEngine/Include/GteSymmetricEigensolver3x3.h. The code was written not to use any GTEngine object. The class interface is

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <limits>

    template <typename Real>
    class SymmetricEigensolver3x3
    {
    public:
        // The input matrix must be symmetric, so only the unique elements
        // must be specified: a00, a01, a02, a11, a12, and a22.
        //
        // If 'aggressive' is true, the iterations occur until a superdiagonal
        // entry is exactly zero.  If 'aggressive' is false, the iterations
        // occur until a superdiagonal entry is effectively zero compared to
        // the sum of magnitudes of its diagonal neighbors.  Generally, the
        // nonaggressive convergence is acceptable.
        //
        // The order of the eigenvalues is specified by sortType:
        // -1 (decreasing), 0 (no sorting), or +1 (increasing).
        // When sorted, the eigenvectors are ordered accordingly, and
        // {evec[0], evec[1], evec[2]} is guaranteed to be a right-handed
        // orthonormal set.  The return value is the number of iterations
        // used by the algorithm.
        int operator()(Real a00, Real a01, Real a02, Real a11, Real a12,
            Real a22, bool aggressive, int sortType,
            std::array<Real, 3>& eval,
            std::array<std::array<Real, 3>, 3>& evec) const;

    private:
        // Update Q = Q*G in-place using G = {{c,0,-s},{s,0,c},{0,1,0}}.
        void Update0(Real Q[3][3], Real c, Real s) const;

        // Update Q = Q*G in-place using G = {{0,1,0},{c,0,s},{-s,0,c}}.
        void Update1(Real Q[3][3], Real c, Real s) const;

        // Update Q = Q*H in-place using H = {{c,s,0},{s,-c,0},{0,0,1}}.
        void Update2(Real Q[3][3], Real c, Real s) const;

        // Update Q = Q*H in-place using H = {{1,0,0},{0,c,s},{0,s,-c}}.
        void Update3(Real Q[3][3], Real c, Real s) const;
        // Normalize (u,v) robustly, avoiding floating-point overflow in the
        // sqrt call.  The normalized pair is (cs,sn) with cs <= 0.  If
        // (u,v) = (0,0), the function returns (cs,sn) = (-1,0).  When used
        // to generate a Householder reflection, it does not matter whether
        // (cs,sn) or (-cs,-sn) is used.  When generating a Givens
        // reflection, cs = cos(2*theta) and sn = sin(2*theta).  Having a
        // negative cosine for the double-angle term ensures that the
        // single-angle terms c = cos(theta) and s = sin(theta) satisfy
        // |c| <= |s|.
        void GetCosSin(Real u, Real v, Real& cs, Real& sn) const;

        // The convergence test.  When 'aggressive' is true, the
        // superdiagonal test is "bSuper == 0".  When 'aggressive' is false,
        // the superdiagonal test is
        // "|bDiag0| + |bDiag1| + |bSuper| == |bDiag0| + |bDiag1|", which
        // means bSuper is effectively zero compared to the sizes of the
        // diagonal entries.
        bool Converged(bool aggressive, Real bDiag0, Real bDiag1,
            Real bSuper) const;

        // Support for sorting the eigenvalues and eigenvectors.  The output
        // (i0,i1,i2) is a permutation of (0,1,2) so that
        // d[i0] <= d[i1] <= d[i2].  The 'bool' return indicates whether the
        // permutation is odd.  If it is not, the handedness of the Q matrix
        // must be adjusted.
        bool Sort(std::array<Real, 3> const& d, int& i0, int& i1,
            int& i2) const;
    };

and the implementation is

    template <typename Real>
    int SymmetricEigensolver3x3<Real>::operator()(Real a00, Real a01,
        Real a02, Real a11, Real a12, Real a22, bool aggressive,
        int sortType, std::array<Real, 3>& eval,
        std::array<std::array<Real, 3>, 3>& evec) const
    {
        // Compute the Householder reflection H and B = H'*A*H, where
        // b02 = 0.
        Real const zero = (Real)0, one = (Real)1, half = (Real)0.5;
        bool isRotation = false;
        Real c, s;
        GetCosSin(a12, -a02, c, s);
        Real Q[3][3] = { { c, s, zero }, { s, -c, zero }, { zero, zero, one } };
        Real term0 = c * a00 + s * a01;
        Real term1 = c * a01 + s * a11;
        Real b00 = c * term0 + s * term1;
        Real b01 = s * term0 - c * term1;
        term0 = s * a00 - c * a01;
        term1 = s * a01 - c * a11;
        Real b11 = s * term0 - c * term1;
        Real b12 = s * a02 - c * a12;
        Real b22 = a22;

        // Givens reflections, B' = G^T*B*G, preserve tridiagonal matrices.
        int const maxIteration = 2 * (1 + std::numeric_limits<Real>::digits -
            std::numeric_limits<Real>::min_exponent);
        int iteration;
        Real c2, s2;

        if (std::abs(b12) <= std::abs(b01))
        {
            Real saveB00, saveB01, saveB11;
            for (iteration = 0; iteration < maxIteration; ++iteration)
            {
                // Compute the Givens reflection.
                GetCosSin(half * (b00 - b11), b01, c2, s2);
                s = sqrt(half * (one - c2));  // >= 1/sqrt(2)
                c = half * s2 / s;

                // Update Q by the Givens reflection.
                Update0(Q, c, s);
                isRotation = !isRotation;

                // Update B <- Q^T*B*Q, ensuring that b02 is zero and |b12|
                // has strictly decreased.
                saveB00 = b00;
                saveB01 = b01;
                saveB11 = b11;
                term0 = c * saveB00 + s * saveB01;
                term1 = c * saveB01 + s * saveB11;
                b00 = c * term0 + s * term1;
                b11 = b22;
                term0 = c * saveB01 - s * saveB00;
                term1 = c * saveB11 - s * saveB01;
                b22 = c * term1 - s * term0;
                b01 = s * b12;
                b12 = c * b12;

                if (Converged(aggressive, b00, b11, b01))
                {
                    // Compute the Householder reflection.
                    GetCosSin(half * (b00 - b11), b01, c2, s2);
                    s = sqrt(half * (one - c2));  // >= 1/sqrt(2)
                    c = half * s2 / s;

                    // Update Q by the Householder reflection.
                    Update2(Q, c, s);
                    isRotation = !isRotation;

                    // Update D = Q^T*B*Q.
                    saveB00 = b00;
                    saveB01 = b01;
                    saveB11 = b11;
                    term0 = c * saveB00 + s * saveB01;
                    term1 = c * saveB01 + s * saveB11;
                    b00 = c * term0 + s * term1;
                    term0 = s * saveB00 - c * saveB01;
                    term1 = s * saveB01 - c * saveB11;
                    b11 = s * term0 - c * term1;
                    break;
                }
            }
        }
        else
        {
            Real saveB11, saveB12, saveB22;
            for (iteration = 0; iteration < maxIteration; ++iteration)
            {
                // Compute the Givens reflection.
                GetCosSin(half * (b22 - b11), b12, c2, s2);
                s = sqrt(half * (one - c2));  // >= 1/sqrt(2)
                c = half * s2 / s;

                // Update Q by the Givens reflection.
                Update1(Q, c, s);
                isRotation = !isRotation;

                // Update B <- Q^T*B*Q, ensuring that b02 is zero and |b01|
                // has strictly decreased.
                saveB11 = b11;
                saveB12 = b12;
                saveB22 = b22;
                term0 = c * saveB22 + s * saveB12;
                term1 = c * saveB12 + s * saveB11;
                b22 = c * term0 + s * term1;
                b11 = b00;
                term0 = c * saveB12 - s * saveB22;
                term1 = c * saveB11 - s * saveB12;
                b00 = c * term1 - s * term0;
                b12 = s * b01;
                b01 = c * b01;

                if (Converged(aggressive, b11, b22, b12))
                {
                    // Compute the Householder reflection.
                    GetCosSin(half * (b11 - b22), b12, c2, s2);
                    s = sqrt(half * (one - c2));  // >= 1/sqrt(2)
                    c = half * s2 / s;

                    // Update Q by the Householder reflection.
                    Update3(Q, c, s);
                    isRotation = !isRotation;

                    // Update D = Q^T*B*Q.
                    saveB11 = b11;
                    saveB12 = b12;
                    saveB22 = b22;
                    term0 = c * saveB11 + s * saveB12;
                    term1 = c * saveB12 + s * saveB22;
                    b11 = c * term0 + s * term1;
                    term0 = s * saveB11 - c * saveB12;
                    term1 = s * saveB12 - c * saveB22;
                    b22 = s * term0 - c * term1;
                    break;
                }
            }
        }

        std::array<Real, 3> diagonal = { b00, b11, b22 };
        int i0, i1, i2;
        if (sortType >= 1)
        {
            // diagonal[i0] <= diagonal[i1] <= diagonal[i2]
            bool isOdd = Sort(diagonal, i0, i1, i2);
            if (!isOdd)
            {
                isRotation = !isRotation;
            }
        }
        else if (sortType <= -1)
        {
            // diagonal[i0] >= diagonal[i1] >= diagonal[i2]
            bool isOdd = Sort(diagonal, i0, i1, i2);
            std::swap(i0, i2);  // (i0,i1,i2) -> (i2,i1,i0) is odd
            if (isOdd)
            {
                isRotation = !isRotation;
            }
        }
        else
        {
            i0 = 0;
            i1 = 1;
            i2 = 2;
        }

        eval[0] = diagonal[i0];
        eval[1] = diagonal[i1];
        eval[2] = diagonal[i2];
        evec[0][0] = Q[0][i0];
        evec[0][1] = Q[1][i0];
        evec[0][2] = Q[2][i0];
        evec[1][0] = Q[0][i1];
        evec[1][1] = Q[1][i1];
        evec[1][2] = Q[2][i1];
        evec[2][0] = Q[0][i2];
        evec[2][1] = Q[1][i2];
        evec[2][2] = Q[2][i2];

        // Ensure the columns of Q form a right-handed set.
        if (!isRotation)
        {
            for (int j = 0; j < 3; ++j)
            {
                evec[2][j] = -evec[2][j];
            }
        }

        return iteration;
    }
    template <typename Real>
    void SymmetricEigensolver3x3<Real>::Update0(Real Q[3][3], Real c,
        Real s) const
    {
        for (int r = 0; r < 3; ++r)
        {
            Real tmp0 = c * Q[r][0] + s * Q[r][1];
            Real tmp1 = Q[r][2];
            Real tmp2 = c * Q[r][1] - s * Q[r][0];
            Q[r][0] = tmp0;
            Q[r][1] = tmp1;
            Q[r][2] = tmp2;
        }
    }

    template <typename Real>
    void SymmetricEigensolver3x3<Real>::Update1(Real Q[3][3], Real c,
        Real s) const
    {
        for (int r = 0; r < 3; ++r)
        {
            Real tmp0 = c * Q[r][1] - s * Q[r][2];
            Real tmp1 = Q[r][0];
            Real tmp2 = c * Q[r][2] + s * Q[r][1];
            Q[r][0] = tmp0;
            Q[r][1] = tmp1;
            Q[r][2] = tmp2;
        }
    }

    template <typename Real>
    void SymmetricEigensolver3x3<Real>::Update2(Real Q[3][3], Real c,
        Real s) const
    {
        for (int r = 0; r < 3; ++r)
        {
            Real tmp0 = c * Q[r][0] + s * Q[r][1];
            Real tmp1 = s * Q[r][0] - c * Q[r][1];
            Q[r][0] = tmp0;
            Q[r][1] = tmp1;
        }
    }

    template <typename Real>
    void SymmetricEigensolver3x3<Real>::Update3(Real Q[3][3], Real c,
        Real s) const
    {
        for (int r = 0; r < 3; ++r)
        {
            Real tmp0 = c * Q[r][1] + s * Q[r][2];
            Real tmp1 = s * Q[r][1] - c * Q[r][2];
            Q[r][1] = tmp0;
            Q[r][2] = tmp1;
        }
    }

    template <typename Real>
    void SymmetricEigensolver3x3<Real>::GetCosSin(Real u, Real v, Real& cs,
        Real& sn) const
    {
        Real maxAbsComp = std::max(std::abs(u), std::abs(v));
        if (maxAbsComp > (Real)0)
        {
            u /= maxAbsComp;  // in [-1,1]
            v /= maxAbsComp;  // in [-1,1]
            Real length = sqrt(u * u + v * v);
            cs = u / length;
            sn = v / length;
            if (cs > (Real)0)
            {
                cs = -cs;
                sn = -sn;
            }
        }
        else
        {
            cs = (Real)-1;
            sn = (Real)0;
        }
    }

    template <typename Real>
    bool SymmetricEigensolver3x3<Real>::Converged(bool aggressive,
        Real bDiag0, Real bDiag1, Real bSuper) const
    {
        if (aggressive)
        {
            return bSuper == (Real)0;
        }
        else
        {
            Real sum = std::abs(bDiag0) + std::abs(bDiag1);
            return sum + std::abs(bSuper) == sum;
        }
    }

    template <typename Real>
    bool SymmetricEigensolver3x3<Real>::Sort(std::array<Real, 3> const& d,
        int& i0, int& i1, int& i2) const
    {
        bool odd;
        if (d[0] < d[1])
        {
            if (d[2] < d[0])
            {
                i0 = 2; i1 = 0; i2 = 1; odd = true;
            }
            else if (d[2] < d[1])
            {
                i0 = 0; i1 = 2; i2 = 1; odd = false;
            }
            else
            {
                i0 = 0; i1 = 1; i2 = 2; odd = true;
            }
        }
        else
        {
            if (d[2] < d[1])
            {
                i0 = 2; i1 = 1; i2 = 0; odd = false;
            }
            else if (d[2] < d[0])
            {
                i0 = 1; i1 = 2; i2 = 0; odd = true;
            }
            else
            {
                i0 = 1; i1 = 0; i2 = 2; odd = false;
            }
        }
        return odd;
    }

5 A Noniterative Algorithm

An algorithm that is reasonably robust in practice is provided at the Wikipedia page Eigenvalue Algorithm in the section entitled "3×3 matrices". The algorithm involves converting a cubic polynomial to a form that allows the application of a trigonometric identity in order to obtain closed-form expressions for the
roots. This topic is quite old; the Wikipedia page Cubic Function summarizes the history.¹ Although the topic is old, the current-day emphasis is usually the robustness of the algorithm when using floating-point arithmetic. Knowing that real-valued symmetric matrices have only real-valued eigenvalues, we know that the characteristic cubic polynomial has roots that are all real valued. The Wikipedia discussion includes pseudocode and a reference to a 1961 paper in the Communications of the ACM entitled Eigenvalues of a symmetric 3×3 matrix.

In my opinion, important details are omitted from the Wikipedia page. In particular, it is helpful to visualize the graph of the cubic polynomial det(βI − B) = β³ − 3β − det(B) in order to understand the location of the polynomial roots. This information is essential to compute eigenvectors when a real-valued root is repeated. The Wikipedia page mentions briefly the mathematical details for generating the eigenvectors, but the discussion does not make it clear how to do so robustly in a computer program. I will use the notation that occurs at the Wikipedia page, except that the pseudocode will use 0-based indexing of vectors and matrices rather than 1-based indexing.

Let A be a real-valued symmetric 3×3 matrix,

    A = [ a00 a01 a02 ]
        [ a01 a11 a12 ]    (1)
        [ a02 a12 a22 ]

The trace of a matrix M, denoted tr(M), is the sum of the diagonal entries of M. The determinant of M is denoted det(M). The characteristic polynomial for A is

    f(α) = det(αI − A) = α³ − c2 α² + c1 α − c0    (2)

where I is the 3×3 identity matrix and where

    c2 = a00 + a11 + a22 = tr(A)
    c1 = (a00 a11 − a01²) + (a00 a22 − a02²) + (a11 a22 − a12²) = (tr²(A) − tr(A²))/2    (3)
    c0 = a00 a11 a22 + 2 a01 a02 a12 − a00 a12² − a11 a02² − a22 a01² = det(A)

Given scalars p > 0 and q, define the matrix B by A = pB + qI. It is the case that A and B have the same eigenvectors.
If v is an eigenvector of A, then by definition Av = αv, where α is an eigenvalue of A; moreover,

    αv = Av = pBv + qv    (4)

which implies Bv = ((α − q)/p)v. Thus, v is an eigenvector of B with corresponding eigenvalue β = (α − q)/p, or equivalently α = pβ + q. If we compute the eigenvalues and eigenvectors of B, we can then obtain the eigenvalues and eigenvectors of A.

¹ My first exposure to the trigonometric approach was in high school when reading parts of my first-purchased mathematics book, CRC Standard Mathematical Tables, the 19th edition published in 1971 by The Chemical Rubber Company. You might recognize the CRC acronym as part of CRC Press, a current-day publisher of books. The book title hides the fact that there is quite a bit of (old) mathematics in the book. However, the book does have many tables of numbers, because at the time the common tools for computing were (1) pencil and paper, (2) table lookups, and (3) a slide rule. Yes, I had my very own slide rule.
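As a quick sanity check of this substitution, here is a small, self-contained sketch (mine, not the paper's source) that builds B = (A − qI)/p using the choices q = tr(A)/3 and p = √(tr((A − qI)²)/6) made in the next subsection; the identities tr(B) = 0 and tr(B²) = 6 are what reduce the characteristic polynomial of B to β³ − 3β − det(B).

```cpp
#include <cmath>

// Build B = (A - q*I)/p for a symmetric 3x3 matrix A, where q = tr(A)/3 and
// p = sqrt(tr((A - q*I)^2)/6).  For symmetric M, tr(M^2) is the sum of the
// squares of the entries.  Assumes A is not a scalar multiple of the
// identity, so that p > 0.
void BuildB(double const A[3][3], double B[3][3], double& p, double& q)
{
    q = (A[0][0] + A[1][1] + A[2][2]) / 3.0;
    double sumSqr = 0.0;
    for (int r = 0; r < 3; ++r)
    {
        for (int c = 0; c < 3; ++c)
        {
            double m = A[r][c] - (r == c ? q : 0.0);
            sumSqr += m * m;
        }
    }
    p = std::sqrt(sumSqr / 6.0);
    for (int r = 0; r < 3; ++r)
    {
        for (int c = 0; c < 3; ++c)
        {
            B[r][c] = (A[r][c] - (r == c ? q : 0.0)) / p;
        }
    }
}
```

For any symmetric A with p > 0, the resulting B has tr(B) = 0 and tr(B²) = 6 up to rounding error.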
5.1 Computing the Eigenvalues

Define q = tr(A)/3 and p = √(tr((A − qI)²)/6), so that B = (A − qI)/p. The Wikipedia discussion does not point out that this is defined only when p ≠ 0. If A is a scalar multiple of the identity, then p = 0. On the other hand, the pseudocode at the Wikipedia page has special handling for when A is a diagonal matrix, which happens to include the case p = 0. The remainder of the construction here assumes p ≠ 0. Some algebraic manipulation will show that tr(B) = 0 and the characteristic polynomial is

    g(β) = det(βI − B) = β³ − 3β − det(B)    (5)

Choosing β = 2 cos(θ), the characteristic polynomial becomes

    g(θ) = 2(4 cos³(θ) − 3 cos(θ)) − det(B)    (6)

Using the trigonometric identity cos(3θ) = 4 cos³(θ) − 3 cos(θ), we obtain

    g(θ) = 2 cos(3θ) − det(B)    (7)

The roots of g are obtained by solving for θ in the equation cos(3θ) = det(B)/2. Knowing that B has only real-valued eigenvalues, it must be that θ is real-valued, which implies |cos(3θ)| ≤ 1.² This additionally implies that |det(B)| ≤ 2. The real-valued roots of g are

    β = 2 cos(θ + 2πk/3),  k = 0, 1, 2    (8)

where θ = arccos(det(B)/2)/3, and all roots are in the interval [−2, 2].

Let us now examine in greater detail the function g(β). The first derivative is g′(β) = 3β² − 3, which is zero when β = ±1. The second derivative is g″(β) = 6β, which is not zero when g′(β) = 0. Therefore, β = −1 produces a local maximum of g and β = +1 produces a local minimum of g. The graph of g has an inflection point at β = 0 because g″(0) = 0. A typical graph is shown in Figure 1.

² When working with complex numbers, the magnitude of the cosine function can exceed 1.
Figure 1. The graph of g(β) = β³ − 3β − 1, where det(B) = 1. The graph was drawn using Mathematica (Wolfram Research, Inc., Mathematica, Version 11.0, Champaign, IL (2016)) but with manual modifications to emphasize some key graph features. The roots of g(β) are indeed in the interval [−2, 2].

Generally, g(−2) = g(1) = −2 − det(B), g(−1) = g(2) = 2 − det(B), and g(0) = −det(B). Table 1 shows bounds for the roots. The roots are named so that β0 ≤ β1 ≤ β2.

Table 1. Bounds on the roots of g(β). The roots are ordered by β0 ≤ β1 ≤ β2.

    det(B) > 0:
      θ + 2π/3 ∈ (2π/3, 5π/6)   cos(θ + 2π/3) ∈ (−√3/2, −1/2)   β0 ∈ (−√3, −1)
      θ + 4π/3 ∈ (4π/3, 3π/2)   cos(θ + 4π/3) ∈ (−1/2, 0)       β1 ∈ (−1, 0)
      θ ∈ (0, π/6)              cos(θ) ∈ (√3/2, 1)              β2 ∈ (√3, 2)

    det(B) < 0:
      θ + 2π/3 ∈ (5π/6, π)      cos(θ + 2π/3) ∈ (−1, −√3/2)     β0 ∈ (−2, −√3)
      θ + 4π/3 ∈ (3π/2, 5π/3)   cos(θ + 4π/3) ∈ (0, 1/2)        β1 ∈ (0, 1)
      θ ∈ (π/6, π/3)            cos(θ) ∈ (1/2, √3/2)            β2 ∈ (1, √3)

    det(B) = 0:
      θ + 2π/3 = 5π/6           cos(θ + 2π/3) = −√3/2           β0 = −√3
      θ + 4π/3 = 3π/2           cos(θ + 4π/3) = 0               β1 = 0
      θ = π/6                   cos(θ) = √3/2                   β2 = √3

Because tr(B) = 0, we also know β0 + β1 + β2 = 0.
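The eigenvalue construction of this subsection can be sketched end-to-end. This is an illustrative implementation (the name ComputeEigenvalues is mine, not the GTEngine source), assuming p ≠ 0:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Sketch of Section 5.1: q = tr(A)/3, p = sqrt(tr((A - q*I)^2)/6),
// B = (A - q*I)/p, beta_k = 2*cos(theta + 2*pi*k/3) with
// theta = acos(det(B)/2)/3, and alpha_k = p*beta_k + q.
// Assumes A is not a scalar multiple of the identity (p != 0).
std::array<double, 3> ComputeEigenvalues(double a00, double a01, double a02,
    double a11, double a12, double a22)
{
    double const pi = 3.14159265358979323846;
    double q = (a00 + a11 + a22) / 3.0;
    double b00 = a00 - q, b11 = a11 - q, b22 = a22 - q;
    double p = std::sqrt((b00 * b00 + b11 * b11 + b22 * b22 +
        2.0 * (a01 * a01 + a02 * a02 + a12 * a12)) / 6.0);
    double detB = (b00 * (b11 * b22 - a12 * a12) -
        a01 * (a01 * b22 - a12 * a02) +
        a02 * (a01 * a12 - b11 * a02)) / (p * p * p);
    // Clamp to [-1,1] to guard against rounding pushing |det(B)/2| above 1.
    double halfDet = std::min(std::max(detB / 2.0, -1.0), 1.0);
    double theta = std::acos(halfDet) / 3.0;
    std::array<double, 3> alpha;
    for (int k = 0; k < 3; ++k)
    {
        alpha[k] = p * 2.0 * std::cos(theta + 2.0 * pi * k / 3.0) + q;
    }
    return alpha;
}
```

For A with rows {2,0,0}, {0,3,4}, {0,4,9}, whose exact eigenvalues are 1, 2, and 11, the k = 0 root is the largest and the k = 1 root is the smallest, consistent with the angle intervals in Table 1.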
Computing the eigenvectors for distinct and well-separated eigenvalues is generally robust. However, if a root is repeated or two roots are nearly the same numerically, issues can occur when computing the 2-dimensional eigenspace. Fortunately, the special form of g(β) leads to a robust algorithm.

5.2 Computing the Eigenvectors

The discussion here is based on constructing an eigenvector for β0 when β0 < 0 < β1 ≤ β2. The implementation must also handle the construction of an eigenvector for β2 when β0 ≤ β1 < 0 < β2. The corresponding eigenvalues of A are α0, α1, and α2, ordered as α0 ≤ α1 ≤ α2.

The eigenvectors for α0 are solutions to (A − α0 I)v = 0. The matrix A − α0 I is singular (by definition of eigenvalue) and in this case has rank 2; that is, two rows are linearly independent. Write the system of equations as shown next,

    [ 0 ]                  [ r0^T ]     [ r0 · v ]
    [ 0 ] = (A − α0 I)v =  [ r1^T ] v = [ r1 · v ]    (9)
    [ 0 ]                  [ r2^T ]     [ r2 · v ]

where the r_i are the 3×1 vectors whose transposes are the rows of the matrix A − α0 I. Assuming the first two rows are linearly independent, the conditions r0 · v = 0 and r1 · v = 0 imply v is perpendicular to both rows. Consequently, v is parallel to the cross product r0 × r1. We can normalize the cross product to obtain a unit-length eigenvector.

It is unknown which two rows of the matrix are linearly independent, so we must figure out which two rows to choose. If two rows are nearly parallel, the cross product will have length nearly zero. For numerical robustness, this suggests that we look at the three possible cross products of rows and choose the pair of rows for which the cross product has largest length. Alternatively, we could apply elementary row and column operations to reduce the matrix so that the last row is (numerically) the zero vector; this effectively is the algorithm of using Gaussian elimination with full pivoting. In the pseudocode shown in Listing 1, the cross product approach is used.

Listing 1.
Pseudocode for computing the eigenvector corresponding to the eigenvalue α0 of A that was generated from the root β0 of the cubic polynomial for B that has multiplicity 1.

    void ComputeEigenvector0(Matrix3x3 A, Real eigenvalue0,
        Vector3& eigenvector0)
    {
        Vector3 row0(A(0,0) - eigenvalue0, A(0,1), A(0,2));
        Vector3 row1(A(0,1), A(1,1) - eigenvalue0, A(1,2));
        Vector3 row2(A(0,2), A(1,2), A(2,2) - eigenvalue0);
        Vector3 r0xr1 = Cross(row0, row1);
        Vector3 r0xr2 = Cross(row0, row2);
        Vector3 r1xr2 = Cross(row1, row2);
        Real d0 = Dot(r0xr1, r0xr1);
        Real d1 = Dot(r0xr2, r0xr2);
        Real d2 = Dot(r1xr2, r1xr2);
        Real dmax = d0;
        int imax = 0;
        if (d1 > dmax) { dmax = d1; imax = 1; }
        if (d2 > dmax) { imax = 2; }
        if (imax == 0)
        {
            eigenvector0 = r0xr1 / sqrt(d0);
        }
        else if (imax == 1)
        {
            eigenvector0 = r0xr2 / sqrt(d1);
        }
        else
        {
            eigenvector0 = r1xr2 / sqrt(d2);
        }
    }

If the other two eigenvalues are well separated, the algorithm of Listing 1 may be used for each eigenvalue. However, if the two eigenvalues are nearly equal, Listing 1 can have numerical issues. The problem is that theoretically, when the eigenvalue is repeated (α1 = α2), the rank of A − α1 I is 1; the cross product of any pair of rows is the zero vector. Determining the rank of a matrix numerically is problematic.

A different approach is used to compute an eigenvector of A − α1 I. We know that the eigenvectors corresponding to α1 and α2 are perpendicular to the eigenvector W of α0. Compute unit-length vectors U and V such that {U, V, W} is a right-handed orthonormal set; that is, the vectors are all unit length, mutually perpendicular, and W = U × V. The computations can be done robustly when using floating-point arithmetic, as shown in Listing 2.

Listing 2. Robust computation of U and V for a specified W.

    void ComputeOrthogonalComplement(Vector3 W, Vector3& U, Vector3& V)
    {
        Real invLength;
        if (fabs(W[0]) > fabs(W[1]))
        {
            // The component of maximum absolute value is either W[0]
            // or W[2].
            invLength = 1 / sqrt(W[0] * W[0] + W[2] * W[2]);
            U = Vector3(-W[2] * invLength, 0, +W[0] * invLength);
        }
        else
        {
            // The component of maximum absolute value is either W[1]
            // or W[2].
            invLength = 1 / sqrt(W[1] * W[1] + W[2] * W[2]);
            U = Vector3(0, +W[2] * invLength, -W[1] * invLength);
        }
        V = Cross(W, U);
    }

A unit-length eigenvector E of A − α1 I must be a circular combination of U and V; that is, E = x0 U + x1 V for some choice of x0 and x1 with x0² + x1² = 1. There are exactly two such unit-length eigenvectors when α1 ≠ α2 but infinitely many when α1 = α2. Define the 3×2 matrix J = [U V] whose columns are the specified vectors.
Define the 2×2 symmetric matrix M = J^T (A − α1 I) J, and define the 2×1 vector X = J^T E whose rows are x0 and x1. The 3×3 linear system (A − α1 I)E = 0 reduces to the 2×2 linear system MX = 0. M has rank 1 when α1 ≠ α2, in which case M is not the zero matrix, or rank 0 when α1 = α2, in which case M is the zero matrix. Numerically, we need to trap the cases properly.
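The cross-product construction of Listing 1 is easy to test in isolation. The following self-contained sketch (my helper names, not the paper's source) computes a unit-length eigenvector for an isolated eigenvalue and lets us verify A v = α v:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Hypothetical helpers illustrating the cross-product approach of Listing 1.
Vec3 Cross(Vec3 const& u, Vec3 const& v)
{
    return { u[1] * v[2] - u[2] * v[1], u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0] };
}

double Dot(Vec3 const& u, Vec3 const& v)
{
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
}

// Compute a unit-length eigenvector of the symmetric matrix A for the
// eigenvalue alpha (assumed to have multiplicity 1) as the largest-length
// cross product of rows of A - alpha*I.
Vec3 EigenvectorFromCrossProducts(double const A[3][3], double alpha)
{
    Vec3 row0 = { A[0][0] - alpha, A[0][1], A[0][2] };
    Vec3 row1 = { A[0][1], A[1][1] - alpha, A[1][2] };
    Vec3 row2 = { A[0][2], A[1][2], A[2][2] - alpha };
    Vec3 candidates[3] =
        { Cross(row0, row1), Cross(row0, row2), Cross(row1, row2) };
    int imax = 0;
    double dmax = Dot(candidates[0], candidates[0]);
    for (int i = 1; i < 3; ++i)
    {
        double d = Dot(candidates[i], candidates[i]);
        if (d > dmax) { dmax = d; imax = i; }
    }
    double invLength = 1.0 / std::sqrt(dmax);
    return { candidates[imax][0] * invLength,
        candidates[imax][1] * invLength, candidates[imax][2] * invLength };
}
```

For A with rows {2,0,0}, {0,3,4}, {0,4,9} and the isolated eigenvalue 11, the result is (0, 1, 2)/√5 up to sign, and A v − 11 v is zero to within rounding error.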
In the event M is not the zero matrix, we can select the row of M that has largest length and normalize it. A solution X is perpendicular to the normalized row, and we can choose X so that it is unit length. The normalization first factors out the largest component of the row and discards it to avoid floating-point underflow or overflow when computing the length of the row. If M is the zero matrix, then any choice of unit-length X suffices. Listing 3 contains pseudocode for computing a unit-length eigenvector corresponding to α1.

Listing 3. Pseudocode for computing the eigenvector corresponding to the eigenvalue α1 of A that was generated from the root β1 of the cubic polynomial for B that potentially has multiplicity 2.

    void ComputeEigenvector1(Matrix3x3 A, Vector3 eigenvector0,
        Real eigenvalue1, Vector3& eigenvector1)
    {
        Vector3 U, V;
        ComputeOrthogonalComplement(eigenvector0, U, V);
        Vector3 AU = A * U, AV = A * V;
        Real m00 = Dot(U, AU) - eigenvalue1;
        Real m01 = Dot(U, AV);
        Real m11 = Dot(V, AV) - eigenvalue1;
        Real absM00 = fabs(m00), absM01 = fabs(m01), absM11 = fabs(m11);
        if (absM00 >= absM11)
        {
            Real maxAbsComp = max(absM00, absM01);
            if (maxAbsComp > 0)
            {
                if (absM00 >= absM01)
                {
                    m01 /= m00;
                    m00 = 1 / sqrt(1 + m01 * m01);
                    m01 *= m00;
                }
                else
                {
                    m00 /= m01;
                    m01 = 1 / sqrt(1 + m00 * m00);
                    m00 *= m01;
                }
                eigenvector1 = m01 * U - m00 * V;
            }
            else
            {
                eigenvector1 = U;
            }
        }
        else
        {
            Real maxAbsComp = max(absM11, absM01);
            if (maxAbsComp > 0)
            {
                if (absM11 >= absM01)
                {
                    m01 /= m11;
                    m11 = 1 / sqrt(1 + m01 * m01);
                    m01 *= m11;
                }
                else
                {
                    m11 /= m01;
                    m01 = 1 / sqrt(1 + m11 * m11);
                    m11 *= m01;
                }
                eigenvector1 = m11 * U - m01 * V;
            }
            else
            {
                eigenvector1 = U;
            }
        }
    }

The remaining eigenvector must be perpendicular to the first two computed eigenvectors, as shown in Listing 4.
17 Listing 4. The remaining eigenvector is simply a cross product of the first two computed eigenvectors, and it is guaranteed to be unit length (within numerical tolerance). ComputeEigenvector0 (A, e i g e n v a l u e 0, e i g e n v e c t o r 0 ) ; ComputeEigenvector1 (A, e i g e n v e c t o r 0, e i g e n v a l u e 1, e i g e n v e c t o r 1 ) ; e i g e n v e c t o r 2 = C r o s s ( e i g e n v e c t o r 0, e i g e n v e c t o r 1 ) ; 6 Implementation of the Noniterative Algorithm The source code that implements the iterative algorithm for symmetric 3 3 matrices is in class NISymmetricEigensolver3x3 found in the file GeometricTools/GTEngine/Include/GteSymmetricEigensolver3x3.h The code was written not to use any GTEngine object. The class interface is #i n c l u d e <a l g o r i t h m > #i n c l u d e <a r r a y > #i n c l u d e <cmath> #i n c l u d e <l i m i t s > c l a s s N I S y m m e t r i c E i g e n s o l v e r 3 x 3 p u b l i c : // The i n p u t matrix must be symmetric, so o n l y the unique elements must // be s p e c i f i e d : a00, a01, a02, a11, a12, and a22. The e i g e n v a l u e s a r e // s o r t e d i n a s c e n d i n g o r d e r : e v a l 0 <= e v a l 1 <= e v a l 2. 
    void operator()(Real a00, Real a01, Real a02, Real a11, Real a12,
        Real a22, std::array<Real, 3>& eval,
        std::array<std::array<Real, 3>, 3>& evec) const;

private:
    static std::array<Real, 3> Multiply(Real s, std::array<Real, 3> const& U);
    static std::array<Real, 3> Subtract(std::array<Real, 3> const& U,
        std::array<Real, 3> const& V);
    static std::array<Real, 3> Divide(std::array<Real, 3> const& U, Real s);
    static Real Dot(std::array<Real, 3> const& U,
        std::array<Real, 3> const& V);
    static std::array<Real, 3> Cross(std::array<Real, 3> const& U,
        std::array<Real, 3> const& V);

    void ComputeOrthogonalComplement(std::array<Real, 3> const& W,
        std::array<Real, 3>& U, std::array<Real, 3>& V) const;

    void ComputeEigenvector0(Real a00, Real a01, Real a02, Real a11,
        Real a12, Real a22, Real& eval0, std::array<Real, 3>& evec0) const;

    void ComputeEigenvector1(Real a00, Real a01, Real a02, Real a11,
        Real a12, Real a22, std::array<Real, 3> const& evec0,
        Real& eval1, std::array<Real, 3>& evec1) const;
};

and the implementation is

template <typename Real>
void NISymmetricEigensolver3x3<Real>::operator()(Real a00, Real a01,
    Real a02, Real a11, Real a12, Real a22, std::array<Real, 3>& eval,
    std::array<std::array<Real, 3>, 3>& evec) const
{
    // Precondition the matrix by factoring out the maximum absolute value
    // of the components.  This guards against floating-point overflow when
    // computing the eigenvalues.
    Real max0 = std::max(fabs(a00), fabs(a01));
    Real max1 = std::max(fabs(a02), fabs(a11));
    Real max2 = std::max(fabs(a12), fabs(a22));
    Real maxAbsElement = std::max(std::max(max0, max1), max2);
    if (maxAbsElement == (Real)0)
    {
        // A is the zero matrix.
        eval[0] = (Real)0;
        eval[1] = (Real)0;
        eval[2] = (Real)0;
        evec[0] = { (Real)1, (Real)0, (Real)0 };
        evec[1] = { (Real)0, (Real)1, (Real)0 };
        evec[2] = { (Real)0, (Real)0, (Real)1 };
        return;
    }

    Real invMaxAbsElement = (Real)1 / maxAbsElement;
    a00 *= invMaxAbsElement;
    a01 *= invMaxAbsElement;
    a02 *= invMaxAbsElement;
    a11 *= invMaxAbsElement;
    a12 *= invMaxAbsElement;
    a22 *= invMaxAbsElement;

    Real norm = a01 * a01 + a02 * a02 + a12 * a12;
    if (norm > (Real)0)
    {
        // Compute the eigenvalues of A.  The acos(z) function requires
        // |z| <= 1, but will fail silently and return NaN if the input is
        // larger than 1 in magnitude.  To avoid this condition due to
        // rounding errors, the halfDet value is clamped to [-1,1].
        Real traceDiv3 = (a00 + a11 + a22) / (Real)3;
        Real b00 = a00 - traceDiv3;
        Real b11 = a11 - traceDiv3;
        Real b22 = a22 - traceDiv3;
        Real denom = sqrt((b00 * b00 + b11 * b11 + b22 * b22 +
            norm * (Real)2) / (Real)6);
        Real c00 = b11 * b22 - a12 * a12;
        Real c01 = a01 * b22 - a12 * a02;
        Real c02 = a01 * a12 - b11 * a02;
        Real det = (b00 * c00 - a01 * c01 + a02 * c02) /
            (denom * denom * denom);
        Real halfDet = det * (Real)0.5;
        halfDet = std::min(std::max(halfDet, (Real)-1), (Real)1);

        // The eigenvalues of B are ordered as beta0 <= beta1 <= beta2.  The
        // number of digits in twoThirdsPi is chosen so that, whether float
        // or double, the floating-point number is the closest to the
        // theoretical value of 2*pi/3.
        Real angle = acos(halfDet) / (Real)3;
        Real const twoThirdsPi = (Real)2.09439510239319549;
        Real beta2 = cos(angle) * (Real)2;
        Real beta0 = cos(angle + twoThirdsPi) * (Real)2;
        Real beta1 = -(beta0 + beta2);

        // The eigenvalues of A are ordered as alpha0 <= alpha1 <= alpha2.
        eval[0] = traceDiv3 + denom * beta0;
        eval[1] = traceDiv3 + denom * beta1;
        eval[2] = traceDiv3 + denom * beta2;

        // The index i0 corresponds to the root guaranteed to have
        // multiplicity 1 and goes with either the maximum root or the
        // minimum root.  The index i2 goes with the root of the opposite
        // extreme.  Root beta1 is always between beta0 and beta2.
        int i0, i2, i1 = 1;
        if (halfDet >= (Real)0)
        {
            i0 = 2;
            i2 = 0;
        }
        else
        {
            i0 = 0;
            i2 = 2;
        }

        // Compute the eigenvectors.  The set { evec[0], evec[1], evec[2] }
        // is right-handed and orthonormal.
        ComputeEigenvector0(a00, a01, a02, a11, a12, a22, eval[i0], evec[i0]);
        ComputeEigenvector1(a00, a01, a02, a11, a12, a22, evec[i0],
            eval[i1], evec[i1]);
        evec[i2] = Cross(evec[i0], evec[i1]);
    }
    else
    {
        // The matrix is diagonal.
        eval[0] = a00;
        eval[1] = a11;
        eval[2] = a22;
        evec[0] = { (Real)1, (Real)0, (Real)0 };
        evec[1] = { (Real)0, (Real)1, (Real)0 };
        evec[2] = { (Real)0, (Real)0, (Real)1 };
    }

    // The preconditioning scaled the matrix A, which scales the
    // eigenvalues.  Revert the scaling.
    eval[0] *= maxAbsElement;
    eval[1] *= maxAbsElement;
    eval[2] *= maxAbsElement;
}

template <typename Real>
std::array<Real, 3> NISymmetricEigensolver3x3<Real>::Multiply(Real s,
    std::array<Real, 3> const& U)
{
    std::array<Real, 3> product = { s * U[0], s * U[1], s * U[2] };
    return product;
}

template <typename Real>
std::array<Real, 3> NISymmetricEigensolver3x3<Real>::Subtract(
    std::array<Real, 3> const& U, std::array<Real, 3> const& V)
{
    std::array<Real, 3> difference = { U[0] - V[0], U[1] - V[1], U[2] - V[2] };
    return difference;
}

template <typename Real>
std::array<Real, 3> NISymmetricEigensolver3x3<Real>::Divide(
    std::array<Real, 3> const& U, Real s)
{
    Real invS = (Real)1 / s;
    std::array<Real, 3> division = { U[0] * invS, U[1] * invS, U[2] * invS };
    return division;
}

template <typename Real>
Real NISymmetricEigensolver3x3<Real>::Dot(std::array<Real, 3> const& U,
    std::array<Real, 3> const& V)
{
    Real dot = U[0] * V[0] + U[1] * V[1] + U[2] * V[2];
    return dot;
}

template <typename Real>
std::array<Real, 3> NISymmetricEigensolver3x3<Real>::Cross(
    std::array<Real, 3> const& U, std::array<Real, 3> const& V)
{
    std::array<Real, 3> cross =
    {
        U[1] * V[2] - U[2] * V[1],
        U[2] * V[0] - U[0] * V[2],
        U[0] * V[1] - U[1] * V[0]
    };
    return cross;
}
template <typename Real>
void NISymmetricEigensolver3x3<Real>::ComputeOrthogonalComplement(
    std::array<Real, 3> const& W, std::array<Real, 3>& U,
    std::array<Real, 3>& V) const
{
    // Robustly compute a right-handed orthonormal set { U, V, W }.  The
    // vector W is guaranteed to be unit length, in which case there is no
    // need to worry about a division by zero when computing invLength.
    Real invLength;
    if (fabs(W[0]) > fabs(W[1]))
    {
        // The component of maximum absolute value is either W[0] or W[2].
        invLength = (Real)1 / sqrt(W[0] * W[0] + W[2] * W[2]);
        U = { -W[2] * invLength, (Real)0, +W[0] * invLength };
    }
    else
    {
        // The component of maximum absolute value is either W[1] or W[2].
        invLength = (Real)1 / sqrt(W[1] * W[1] + W[2] * W[2]);
        U = { (Real)0, +W[2] * invLength, -W[1] * invLength };
    }
    V = Cross(W, U);
}

template <typename Real>
void NISymmetricEigensolver3x3<Real>::ComputeEigenvector0(Real a00, Real a01,
    Real a02, Real a11, Real a12, Real a22, Real& eval0,
    std::array<Real, 3>& evec0) const
{
    // Compute a unit-length eigenvector for eigenvalue[i0].  The matrix is
    // rank 2, so two of the rows are linearly independent.  For a robust
    // computation of the eigenvector, select the two rows whose cross
    // product has largest length of all pairs of rows.
    std::array<Real, 3> row0 = { a00 - eval0, a01, a02 };
    std::array<Real, 3> row1 = { a01, a11 - eval0, a12 };
    std::array<Real, 3> row2 = { a02, a12, a22 - eval0 };
    std::array<Real, 3> r0xr1 = Cross(row0, row1);
    std::array<Real, 3> r0xr2 = Cross(row0, row2);
    std::array<Real, 3> r1xr2 = Cross(row1, row2);
    Real d0 = Dot(r0xr1, r0xr1);
    Real d1 = Dot(r0xr2, r0xr2);
    Real d2 = Dot(r1xr2, r1xr2);

    Real dmax = d0;
    int imax = 0;
    if (d1 > dmax)
    {
        dmax = d1;
        imax = 1;
    }
    if (d2 > dmax)
    {
        imax = 2;
    }

    if (imax == 0)
    {
        evec0 = Divide(r0xr1, sqrt(d0));
    }
    else if (imax == 1)
    {
        evec0 = Divide(r0xr2, sqrt(d1));
    }
    else
    {
        evec0 = Divide(r1xr2, sqrt(d2));
    }
}

template <typename Real>
void NISymmetricEigensolver3x3<Real>::ComputeEigenvector1(Real a00, Real a01,
    Real a02, Real a11, Real a12, Real a22, std::array<Real, 3> const& evec0,
    Real& eval1, std::array<Real, 3>& evec1) const
{
    // Robustly compute a right-handed orthonormal set { U, V, evec0 }.
    std::array<Real, 3> U, V;
    ComputeOrthogonalComplement(evec0, U, V);

    // Let e be eval1 and let E be a corresponding eigenvector which is a
    // solution to the linear system (A - e*I)*E = 0.  The matrix (A - e*I)
    // is 3x3, not invertible (so infinitely many solutions), and has rank 2
    // when eval1 and eval2 are different.  It has rank 1 when eval1 and
    // eval2 are equal.  Numerically, it is difficult to compute robustly
    // the rank of a matrix.  Instead, the 3x3 linear system is reduced to
    // a 2x2 system as follows.  Define the 3x2 matrix J = [U V] whose
    // columns are the U and V computed previously.  Define the 2x1 vector
    // X = J^T*E.  The 2x2 system is 0 = M*X = (J^T*(A - e*I)*J)*X, where
    // J^T is the transpose of J and M = J^T*(A - e*I)*J is a 2x2 matrix.
    // The system may be written as
    //
    //   +-                         -+ +-  -+   +-  -+
    //   | U^T*A*U - e   U^T*A*V     | | x0 | = |  0 |
    //   | V^T*A*U       V^T*A*V - e | | x1 |   |  0 |
    //   +-                         -+ +-  -+   +-  -+
    //
    // where X has row entries x0 and x1.
    std::array<Real, 3> AU =
    {
        a00 * U[0] + a01 * U[1] + a02 * U[2],
        a01 * U[0] + a11 * U[1] + a12 * U[2],
        a02 * U[0] + a12 * U[1] + a22 * U[2]
    };

    std::array<Real, 3> AV =
    {
        a00 * V[0] + a01 * V[1] + a02 * V[2],
        a01 * V[0] + a11 * V[1] + a12 * V[2],
        a02 * V[0] + a12 * V[1] + a22 * V[2]
    };

    Real m00 = U[0] * AU[0] + U[1] * AU[1] + U[2] * AU[2] - eval1;
    Real m01 = U[0] * AV[0] + U[1] * AV[1] + U[2] * AV[2];
    Real m11 = V[0] * AV[0] + V[1] * AV[1] + V[2] * AV[2] - eval1;

    // For robustness, choose the largest-length row of M to compute the
    // eigenvector.  The 2-tuple of coefficients of U and V in the
    // assignments to eigenvector[1] lies on a circle, and U and V are
    // unit length and perpendicular, so eigenvector[1] is unit length
    // (within numerical tolerance).
    Real absM00 = fabs(m00);
    Real absM01 = fabs(m01);
    Real absM11 = fabs(m11);
    Real maxAbsComp;
    if (absM00 >= absM11)
    {
        maxAbsComp = std::max(absM00, absM01);
        if (maxAbsComp > (Real)0)
        {
            if (absM00 >= absM01)
            {
                m01 /= m00;
                m00 = (Real)1 / sqrt((Real)1 + m01 * m01);
                m01 *= m00;
            }
            else
            {
                m00 /= m01;
                m01 = (Real)1 / sqrt((Real)1 + m00 * m00);
                m00 *= m01;
            }
            evec1 = Subtract(Multiply(m01, U), Multiply(m00, V));
        }
        else
        {
            evec1 = U;
        }
    }
    else
    {
        maxAbsComp = std::max(absM11, absM01);
        if (maxAbsComp > (Real)0)
        {
            if (absM11 >= absM01)
            {
                m01 /= m11;
                m11 = (Real)1 / sqrt((Real)1 + m01 * m01);
                m01 *= m11;
            }
            else
            {
                m11 /= m01;
                m01 = (Real)1 / sqrt((Real)1 + m11 * m11);
                m11 *= m01;
            }
            evec1 = Subtract(Multiply(m11, U), Multiply(m01, V));
        }
        else
        {
            evec1 = U;
        }
    }
}
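The row-cross-product construction of ComputeEigenvector0 can also be verified with a small self-contained sketch. The function name and sample matrix below are illustrative, not part of the library: for A with rows (2, 1, 0), (1, 2, 0), (0, 0, 3) and its multiplicity-1 eigenvalue 1, the largest cross product of pairs of rows of A − I yields a unit-length eigenvector.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

static Vec3 Cross(Vec3 const& u, Vec3 const& v)
{
    return { u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0] };
}

static double Dot(Vec3 const& u, Vec3 const& v)
{
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
}

// Sketch: unit-length eigenvector of symmetric A for a multiplicity-1
// eigenvalue eval, via the largest cross product of rows of A - eval*I.
Vec3 EigenvectorFromRows(double a00, double a01, double a02,
    double a11, double a12, double a22, double eval)
{
    Vec3 row0 = { a00 - eval, a01, a02 };
    Vec3 row1 = { a01, a11 - eval, a12 };
    Vec3 row2 = { a02, a12, a22 - eval };
    Vec3 c01 = Cross(row0, row1), c02 = Cross(row0, row2),
         c12 = Cross(row1, row2);
    double d01 = Dot(c01, c01), d02 = Dot(c02, c02), d12 = Dot(c12, c12);
    Vec3 best = c01;
    double dmax = d01;
    if (d02 > dmax) { best = c02; dmax = d02; }
    if (d12 > dmax) { best = c12; dmax = d12; }
    double invLen = 1.0 / std::sqrt(dmax);
    return { best[0] * invLen, best[1] * invLen, best[2] * invLen };
}
```

The returned vector v satisfies A v = eval · v up to rounding; for the sample matrix v is ±(1, −1, 0)/√2.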
More informationMAC Module 12 Eigenvalues and Eigenvectors
MAC 23 Module 2 Eigenvalues and Eigenvectors Learning Objectives Upon completing this module, you should be able to:. Solve the eigenvalue problem by finding the eigenvalues and the corresponding eigenvectors
More information13-2 Text: 28-30; AB: 1.3.3, 3.2.3, 3.4.2, 3.5, 3.6.2; GvL Eigen2
The QR algorithm The most common method for solving small (dense) eigenvalue problems. The basic algorithm: QR without shifts 1. Until Convergence Do: 2. Compute the QR factorization A = QR 3. Set A :=
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course
More information2. Every linear system with the same number of equations as unknowns has a unique solution.
1. For matrices A, B, C, A + B = A + C if and only if A = B. 2. Every linear system with the same number of equations as unknowns has a unique solution. 3. Every linear system with the same number of equations
More informationLow-Degree Polynomial Roots
Low-Degree Polynomial Roots David Eberly, Geometric Tools, Redmond WA 98052 https://www.geometrictools.com/ This work is licensed under the Creative Commons Attribution 4.0 International License. To view
More informationKnowledge Discovery and Data Mining 1 (VO) ( )
Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory
More informationLinear Algebra Methods for Data Mining
Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 2. Basic Linear Algebra continued Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki
More informationMathematical Foundations
Chapter 1 Mathematical Foundations 1.1 Big-O Notations In the description of algorithmic complexity, we often have to use the order notations, often in terms of big O and small o. Loosely speaking, for
More information235 Final exam review questions
5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an
More informationMath Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p
Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the
More informationIntroduction to Linear Algebra, Second Edition, Serge Lange
Introduction to Linear Algebra, Second Edition, Serge Lange Chapter I: Vectors R n defined. Addition and scalar multiplication in R n. Two geometric interpretations for a vector: point and displacement.
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationMatrix Algebra: Summary
May, 27 Appendix E Matrix Algebra: Summary ontents E. Vectors and Matrtices.......................... 2 E.. Notation.................................. 2 E..2 Special Types of Vectors.........................
More information(b) If a multiple of one row of A is added to another row to produce B then det(b) =det(a).
.(5pts) Let B = 5 5. Compute det(b). (a) (b) (c) 6 (d) (e) 6.(5pts) Determine which statement is not always true for n n matrices A and B. (a) If two rows of A are interchanged to produce B, then det(b)
More information1 Matrices and Systems of Linear Equations. a 1n a 2n
March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real
More informationQuiz ) Locate your 1 st order neighbors. 1) Simplify. Name Hometown. Name Hometown. Name Hometown.
Quiz 1) Simplify 9999 999 9999 998 9999 998 2) Locate your 1 st order neighbors Name Hometown Me Name Hometown Name Hometown Name Hometown Solving Linear Algebraic Equa3ons Basic Concepts Here only real
More informationThere are six more problems on the next two pages
Math 435 bg & bu: Topics in linear algebra Summer 25 Final exam Wed., 8/3/5. Justify all your work to receive full credit. Name:. Let A 3 2 5 Find a permutation matrix P, a lower triangular matrix L with
More informationChapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015
Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 205. If A is a 3 3 triangular matrix, explain why det(a) is equal to the product of entries on the diagonal. If A is a lower triangular or diagonal
More informationNumerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??
Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement
More information1 Matrices and Systems of Linear Equations
March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.
More informationIntersection of Ellipsoids
Intersection of Ellipsoids David Eberly, Geometric Tools, Redmond WA 9805 https://www.geometrictools.com/ This work is licensed under the Creative Commons Attribution 4.0 International License. To view
More information[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]
Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the
More informationTopics. Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij
Topics Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij or a ij lives in row i and column j Definition of a matrix
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationApplied Numerical Linear Algebra. Lecture 8
Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ
More information1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued)
1 A linear system of equations of the form Sections 75, 78 & 81 a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2 a m1 x 1 + a m2 x 2 + + a mn x n = b m can be written in matrix
More information