Constructing an orthonormal set of eigenvectors for DFT matrix using Gramians and determinants

Vadim Zaliva, lord@crocodile.org

July 17, 2012

Abstract

The problem of constructing an orthogonal set of eigenvectors for a DFT matrix is well studied. An elegant solution is mentioned by Matveev in [1]. In this paper, we present a distilled form of his solution, including some steps left unexplained in his paper, with typos and errors corrected and more consistent notation. We then compare the computational complexity of his method with that of the more traditional approach of directly applying the Gram-Schmidt process. Finally, we present our implementation of Matveev's method as a Mathematica module.

1 Definitions

The normalized matrix of the discrete Fourier transform (DFT) of size n is defined as:

    Φ_jk(n) = (1/√n) w^jk,   j,k = 0,…,n−1,   w = e^{i2π/n}    (1)

Some literature uses an alternative definition with w = e^{−i2π/n}; the algorithm described here should be adaptable to that convention with minimal changes. Throughout this paper, unless explicitly specified otherwise, we use 0-based indices for matrices and vectors.

The scaling factor 1/√n ensures that Φ is unitary. A property of unitary matrices important for us is that eigenvectors corresponding to different eigenvalues are orthogonal [2]. (There is a typo in [2] stating that they are orthonormal, where it should read "orthogonal" or "can be chosen orthonormal"; this has been reported to the author and acknowledged by him.)

If e_k is an eigenvector of Φ with associated eigenvalue λ_k, then by the definition of an eigenvector Φe_k = λ_k e_k. It is also a property of eigenvectors that Φ^q e_k = λ_k^q e_k. In [3] it has been shown that Φ⁴ = I. This gives us Ie_k = λ_k⁴ e_k, from which it follows that λ_k⁴ = 1, so the eigenvalues of the DFT matrix are fourth roots of unity:

    λ = (1, i, −1, −i)    (2)
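As a quick sanity check of these definitions, the following short Mathematica snippet (ours, for illustration; it is not part of Matveev's method) builds Φ(n) per equation (1) and confirms numerically that Φ is unitary and satisfies Φ⁴ = I:

n = 6;
Φ = N[Table[Exp[I 2 Pi j k/n]/Sqrt[n], {j, 0, n - 1}, {k, 0, n - 1}]];
Norm[ConjugateTranspose[Φ].Φ - IdentityMatrix[n]] < 10^-10  (* True: Φ is unitary *)
Norm[MatrixPower[Φ, 4] - IdentityMatrix[n]] < 10^-10        (* True: Φ⁴ = I *)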

The well-known [1, 3] spectral decomposition of Φ into four orthogonal projections can be defined as:

    p_k = (1/4) Σ_{j=0}^{3} (−i)^{jk} Φ^j,   k = 0,…,3    (3)

Each projection matrix corresponds to one of the four possible eigenvalues from equation (2), and the columns of each projection matrix are eigenvectors of Φ sharing the same eigenvalue. As shown in [3], the multiplicity of the eigenvalue λ_k equals the trace of p_k. However, we can use the simpler formulae from [1] to calculate the multiplicities m_k:

    m_0 = ⌊n/4⌋ + 1,      associated with λ_0 = 1
    m_1 = ⌊(n+1)/4⌋,      associated with λ_1 = i
    m_2 = ⌊(n+2)/4⌋,      associated with λ_2 = −1
    m_3 = ⌊(n+3)/4⌋ − 1,  associated with λ_3 = −i    (4)

where ⌊·⌋ denotes the floor function. Note that, for convenience, λ_k is defined so that λ_k = i^k.

Finally, following [1], we define v(m,k), with m = 0,…,n−1 and k = 0,…,3, as the n-dimensional vector equal to the m-th row (or, due to matrix symmetry, the m-th column) of p_k:

    v(m,k) = ([p_k]_{0,m}, [p_k]_{1,m}, …, [p_k]_{n−1,m})    (5)

In the formula above, [p_k]_{j,m} denotes the element at row j and column m of the projection matrix p_k.

The p_k of equation (3) can be expanded as:

    p_k = (1/4) ( I + (−i)^k Φ + (−1)^k Φ² + (−i)^{3k} Φ³ )

This allows us to write a formula computing the element of p_k at position (j,m):

    [p_k]_{j,m} = (1/4) ( δ_{j,m} + (−i)^k w^{jm}/√n + (−1)^k δ_{(j+m mod n),0} + (−i)^{3k} w^{−jm}/√n )    (6)

Using this, the j-th element of the vector v(m,k) from equation (5) is v_j(m,k) = [p_k]_{j,m}.

It should be noted that equation (6) differs slightly from the corresponding equation (22) in [1], the difference being a correction to the arguments of the second Kronecker delta, which represents Φ². This matrix has the following form:

    Φ² = ( 1 0 0 ⋯ 0 0 )
         ( 0 0 0 ⋯ 0 1 )
         ( 0 0 0 ⋯ 1 0 )
         ( ⋮         ⋮ )
         ( 0 1 0 ⋯ 0 0 )

Matveev's formula incorrectly uses δ_{n−j,m}, which gives:

    ( 0 0 0 ⋯ 0 0 )
    ( 0 0 0 ⋯ 0 1 )
    ( 0 0 0 ⋯ 1 0 )
    ( ⋮         ⋮ )
    ( 0 1 0 ⋯ 0 0 )

This is correct for all elements except the one at (0,0), which should be 1 instead of 0 (for j = 0 the index n − j = n never equals m ∈ {0,…,n−1}). The expression δ_{(j+m mod n),0}, which we use instead, gives the correct representation of Φ². A similar expression is also used in [3].
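Continuing the snippet above (again ours, for illustration), equation (3) can be verified directly: each p_k is idempotent, its columns lie in the λ_k = i^k eigenspace of Φ, and its trace recovers the multiplicity m_k of equation (4):

p[k_] := (1/4) Sum[(-I)^(j k) MatrixPower[Φ, j], {j, 0, 3}];
Table[Norm[p[k].p[k] - p[k]], {k, 0, 3}]   (* all ≈ 0: the p_k are projections *)
Table[Norm[Φ.p[k] - I^k p[k]], {k, 0, 3}]  (* all ≈ 0: columns are eigenvectors for λ_k = i^k *)
Chop[Table[Tr[p[k]], {k, 0, 3}]]           (* {2., 1., 2., 1.}: the m_k for n = 6 *)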

2 Finding a complete system of eigenvectors of Φ(n)

A complete set of eigenvectors of Φ spanning Cⁿ can be constructed from the columns of the orthogonalized projection matrices by taking the first m_k non-zero columns of each p_k. The first columns of p_1 and p_3 consist entirely of zeros and have to be skipped. Thus, our set of n eigenvectors is:

    v(m,0), m = 0, 1, …, m_0 − 1,  associated with λ_0 = 1
    v(m,1), m = 1, 2, …, m_1,      associated with λ_1 = i
    v(m,2), m = 0, 1, …, m_2 − 1,  associated with λ_2 = −1
    v(m,3), m = 1, 2, …, m_3,      associated with λ_3 = −i    (7)

with m_0 + m_1 + m_2 + m_3 = n. A proof of this, using Chebyshev sets, can be found in [4].
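As a concrete instance (our check): for n = 6, equation (4) gives (m_0, m_1, m_2, m_3) = (2, 1, 2, 1), so the set (7) is {v(0,0), v(1,0), v(1,1), v(0,2), v(1,2), v(1,3)}, six vectors in total, matching the example in section 6. The skipped zero columns are easy to confirm with p[k] from the snippet above:

Chop[p[1][[All, 1]]]  (* zero vector: the first column of p_1 *)
Chop[p[3][[All, 1]]]  (* zero vector: the first column of p_3 *)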

3 Orthonormalization

Eigenvectors corresponding to different eigenvalues are orthogonal. However, eigenvectors within the same projection matrix are not guaranteed to be orthogonal, so the set of eigenvectors (7) does not possess the orthogonality property either. A straightforward way to get orthonormal eigenvectors, suggested by Candan [3], is to apply the Gram-Schmidt process to all columns of each projection matrix. Each projection matrix p_k has rank m_k, so after normalization the resulting orthonormalized set contains exactly m_k non-zero vectors. Matveev [1] presents another approach to constructing an orthonormal basis, based on the same principles as the Gram-Schmidt process but employing Gramian matrices and determinants.

The calculation of the orthogonal basis of p_k involves the m_k columns of p_k taken according to equation (7). There are two cases: one for the odd values k = 1, 3 and one for the even values k = 0, 2. Let us consider the even case first. We can find a sequence of orthogonal vectors (e_0(k), e_1(k), …, e_{m_k−1}(k)) spanning the eigenspace of p_k using Gramian matrices and determinants [5]:

    e_0(k) = v(0,k)

    e_j(k) = | ⟨v(0,k), v(0,k)⟩    ⋯  ⟨v(0,k), v(j−1,k)⟩    v(0,k)   |
             | ⋮                       ⋮                    ⋮        |
             | ⟨v(j−1,k), v(0,k)⟩  ⋯  ⟨v(j−1,k), v(j−1,k)⟩  v(j−1,k) |
             | ⟨v(j,k), v(0,k)⟩    ⋯  ⟨v(j,k), v(j−1,k)⟩    v(j,k)   |    (8)

In the equation above, ⟨v, u⟩ denotes the inner product of the vectors v and u, and the determinant notation assumes the generalized determinant, defined for matrices containing mixed scalar and vector entries; it can be calculated using Laplace (cofactor) expansion.

It has been observed in [1] that p_k is in fact the Gramian matrix of the set of vectors v(m,k), m = 0,…,n−1, that is,

    [p_k]_{j,m} = ⟨v(j,k), v(m,k)⟩

Using this fact, we can replace the (j+1) × j scalar entries in the first j columns of the matrix in equation (8) with the corresponding entries of p_k:

    e_0(k) = v(0,k)

    e_j(k) = | [p_k]_{0,0}    ⋯  [p_k]_{0,j−1}    v(0,k)   |
             | ⋮                  ⋮                ⋮       |
             | [p_k]_{j−1,0}  ⋯  [p_k]_{j−1,j−1}  v(j−1,k) |
             | [p_k]_{j,0}    ⋯  [p_k]_{j,j−1}    v(j,k)   |    (9)

The resulting system of vectors e_j(k) is orthogonal but not yet orthonormal; each vector must be divided by its norm. As shown in [5], the norm can be calculated as:

    ‖e_j‖ = √(G_j G_{j+1})    (10)

where G_j and G_{j+1} are the leading principal minors of p_k of the respective orders (with G_0 = 1). They are Gram determinants.
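To make the generalized determinant concrete, here is the j = 1 case of equation (9), expanded by cofactors along the last column (our expansion; it is implicit in [1]):

    e_1(k) = | [p_k]_{0,0}  v(0,k) |
             | [p_k]_{1,0}  v(1,k) |
           = [p_k]_{0,0} v(1,k) − [p_k]_{1,0} v(0,k)
           = ⟨v(0,k), v(0,k)⟩ v(1,k) − ⟨v(1,k), v(0,k)⟩ v(0,k)

This is exactly the classical Gram-Schmidt step v(1,k) − (⟨v(1,k), v(0,k)⟩ / ⟨v(0,k), v(0,k)⟩) v(0,k) scaled by the Gram determinant G_1 = ⟨v(0,k), v(0,k)⟩, and equation (10) gives its norm as ‖e_1‖ = √(G_1 G_2).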

For k = 1, 3 we need to take into account the fact that the first row and the first column of p_k consist entirely of zeros. Therefore, for these values of k, equation (9) becomes:

    e_0(k) = v(1,k)

    e_j(k) = | [p_k]_{1,1}    ⋯  [p_k]_{1,j}    v(1,k)   |
             | ⋮                  ⋮              ⋮       |
             | [p_k]_{j,1}    ⋯  [p_k]_{j,j}    v(j,k)   |
             | [p_k]_{j+1,1}  ⋯  [p_k]_{j+1,j}  v(j+1,k) |    (11)

and equation (10) becomes:

    ‖e_j‖ = √(D_j D_{j+1}),   where D_r = | [p_k]_{1,1}  ⋯  [p_k]_{1,r} |
                                          | ⋮                ⋮          |
                                          | [p_k]_{r,1}  ⋯  [p_k]_{r,r} |    (12)

4 Computational complexity

Because the computational complexity of Matveev's algorithm is very high, it is not practical for large n. The cost is mostly attributable to the repeated cofactor expansions, which have complexity O(n!). For comparison, obtaining a set of non-orthogonal eigenvectors and orthonormalizing it with the modified Gram-Schmidt process takes about 2n³ floating-point operations (flops) [6], that is, O(n³).

By equation (4), for any n the multiplicities of the different projections differ from one another by at most 2, so for reasonably large n the dimensionality of each of the four eigenspaces of Φ is approximately n/4. Using this observation, performance can be improved by roughly a factor of four by applying the Gram-Schmidt process to the m_k ≈ n/4 vectors of each projection separately, for a total cost of about n³/2 flops to orthogonalize the complete set of eigenvectors. Although an improvement, this is still O(n³).
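For comparison, the conventional O(n³) route of the preceding paragraph can be sketched in a few lines of Mathematica by applying the built-in Orthogonalize to the columns of each projection separately (our sketch of the approach suggested in [3]; the name dftEigenGS and the 10^-8 rank cutoff are our choices):

dftEigenGS[n_] := Module[{Φ, p},
  Φ = N[Table[Exp[I 2 Pi j k/n]/Sqrt[n], {j, 0, n - 1}, {k, 0, n - 1}]];
  p[k_] := (1/4) Sum[(-I)^(j k) MatrixPower[Φ, j], {j, 0, 3}];
  Flatten[Table[
    (* Gram-Schmidt returns zero vectors for dependent columns; dropping
       them leaves exactly m_k orthonormal eigenvectors per projection *)
    Select[Orthogonalize[Transpose[p[k]], Method -> "GramSchmidt"],
      Norm[#] > 10^-8 &],
    {k, 0, 3}], 1]]

For n = 6, dftEigenGS[6] should return six orthonormal vectors with counts per k matching equation (4); unlike Listing 1 below, it works in machine precision rather than symbolically.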

5 Mathematica implementation

Listing 1 presents the full source code of the Mathematica module constructing a complete orthonormal set of eigenvectors of the DFT matrix Φ(n). It performs all computations and returns the results in symbolic form. The intention of the code is to illustrate and validate the algorithm, so clarity and expressiveness were chosen over performance. It has been developed and tested with Mathematica version 8. Since Mathematica's Det function does not work with matrices containing both scalars and vectors, we have implemented our own function lDet, which finds the determinant of any matrix using cofactor expansion along the first column. For the same reason, we use our own function rowMinor instead of Mathematica's Minors.

Listing 1: Mathematica module source code

(* ::Package:: *)

BeginPackage["dfteigh`"];

dftEigen::usage = "Orthonormal basis of DFT matrix of rank n";
lDet::usage = "Determinant using Laplace expansion";
rowMinor::usage = "Row minor";

Begin["`Private`"]

Clear[rowMinor]
(* Delete row r and the first column *)
rowMinor[m_, r_] := Map[Rest, Delete[m, r]]

Clear[lDet]
(* Laplace (cofactor) expansion along the first column; works for
   matrices whose last column contains vectors *)
lDet[m_] := Module[{rows},
  rows = Length[m];
  If[rows == 1, m[[1, 1]],
   Sum[(-1)^(1 + r) m[[r, 1]] lDet[rowMinor[m, r]],
    {r, 1, rows}]]
  ]

Clear[dftEigen]
dftEigen[n_] := Module[{multiplicities, vj, e},

  (* Multiplicities m_k per equation (4) *)
  multiplicities = {Floor[n/4] + 1, Floor[(n + 1)/4],
    Floor[(n + 2)/4], Floor[(n + 3)/4] - 1};

  (* vj[k, j, m] = [p_k]_{j,m} per equation (6) *)
  vj[k_, j_, m_] := (1/4) (
     KroneckerDelta[j, m] +
      (-1)^k KroneckerDelta[Mod[j + m, n], 0] +
      (-I)^k Exp[(2 Pi I j m)/n]/Sqrt[n] +
      (-I)^(3 k) Exp[(2 Pi I (-j) m)/n]/Sqrt[n]
     );

  (* e[j, k]: j-th orthonormal eigenvector from p_k, equations (9)-(12);
     z = 1 skips the zero first row/column for odd k *)
  e[j_, k_] := Module[{g, gv, z, d0, d1},
    z = If[OddQ[k], 1, 0];
    If[j == 0,
     Table[vj[k, y, z], {y, 0, n - 1}]/Sqrt[vj[k, z, z]],
     g = Table[vj[k, y, x], {x, z, j + z}, {y, z, j + z - 1}];
     gv = Table[{Table[vj[k, y, x], {y, 0, n - 1}]}, {x, z, j + z}];
     d0 = Det[Table[vj[k, y, x], {x, z, j + z}, {y, z, j + z}]];
     d1 = Det[Table[vj[k, y, x], {x, z, j + z - 1}, {y, z, j + z - 1}]];
     lDet[Join[g, gv, 2]]/Sqrt[d0 d1]
     ]];

  Map[e[#[[1]], #[[2]]] &,
   Flatten[Table[Transpose[{Range[0, multiplicities[[m + 1]] - 1],
       ConstantArray[m, multiplicities[[m + 1]]]}], {m, 0, 3}], 1]]
  ]

End[];
EndPackage[];
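A small illustration of why lDet is needed (our example; it is not part of the module): Mathematica's Det rejects matrices with vector entries, while cofactor expansion is indifferent to what the last column holds. Applied to the shape of the j = 1 case of equation (9):

lDet[{{a, u}, {b, v}}]                   (* a v - b u, for symbols u, v *)
lDet[{{a, {1, 0, 0}}, {b, {0, 1, 0}}}]   (* {-b, a, 0} *)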

6 Examples

Using the Mathematica module above, we can calculate a set of eigenvectors for the DFT matrix of order 6. Combining them as the columns of a matrix O_6 gives the following matrix, approximated numerically:

    O_6 = ( 0.8391   0.       0.       0.5439   0.       0.     )
          ( 0.2433   0.5412   0.6533  −0.3753   0.0843   0.2706 )
          ( 0.2433  −0.2979   0.2706  −0.3753  −0.4596  −0.6533 )
          ( 0.2433  −0.4865   0.      −0.3753   0.7505   0.     )
          ( 0.2433  −0.2979  −0.2706  −0.3753  −0.4596   0.6533 )
          ( 0.2433   0.5412  −0.6533  −0.3753   0.0843  −0.2706 )

We can verify that it diagonalizes Φ(6) by calculating:

    O_6⁻¹ Φ(6) O_6 = diag(1, 1, i, −1, −1, −i)

The diagonal contains the eigenvalues, each repeated consistently with the associated multiplicities m = (2, 1, 2, 1), that is, with the dimensions of the four eigenspaces. Computing all pairwise inner products of the columns of O_6 (their Gram matrix), we can confirm that the set is indeed orthonormal: the result is the 6 × 6 identity matrix.
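The entries of O_6 can also be cross-checked by hand (our check, not in the original derivation). Because ⟨v(0,0), v(0,0)⟩ = [p_0]_{0,0}, the first eigenvector is self-normalizing, and equation (6) gives:

    [e_0(0)]_0 = √([p_0]_{0,0}) = √((1 + 1/√6)/2) ≈ 0.8391
    [e_0(0)]_j = (1/(2√6)) / √((1 + 1/√6)/2) ≈ 0.2433,   j = 1,…,5

in agreement with the first column of O_6.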

The Mathematica notebook used to make the calculations in the example above was:

Needs["dfteigh`", ToFileName[NotebookDirectory[], "dfteigh.m"]];
n = 6;
Φ = Table[1/Sqrt[n] Exp[(2 Pi I k m)/n], {k, 0, n - 1}, {m, 0, n - 1}];
eall = dftEigen[n];
o = N[Transpose[eall]];
MatrixForm[Round[o, 0.0001]]
MatrixForm[Round[Inverse[o].N[Φ].o, 10^-10]]
MatrixForm[N[FullSimplify[Outer[Dot, eall, eall, 1]]]]

7 Acknowledgements

I would like to thank Lester F. Ludwig of the New Renaissance Institute, who suggested this problem to me and also provided guidance, inspiration, and suggestions.

References

[1] Vladimir B. Matveev. Intertwining relations between the Fourier transform and discrete Fourier transform, the related functional identities and beyond. Inverse Problems, 17:633, 2001.

[2] G. Strang. Linear Algebra and Its Applications. Thomson, Brooks/Cole, fourth edition, 2006.

[3] Ç. Candan. On the eigenstructure of DFT matrices [DSP education]. IEEE Signal Processing Magazine, 28(2):105-108, 2011.

[4] J. McClellan and T. Parks. Eigenvalue and eigenvector decomposition of the discrete Fourier transform. IEEE Transactions on Audio and Electroacoustics, 20(1):66-74, 1972.

[5] F. R. Gantmakher. The Theory of Matrices. Chelsea Publishing Co., 1959.

[6] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, third edition, 1996.