Efficient and Accurate Rectangular Window Subspace Tracking

Timothy M. Toolan and Donald W. Tufts
Dept. of Electrical Engineering, University of Rhode Island, Kingston, RI 02881 USA
toolan@ele.uri.edu, tufts@ele.uri.edu

Abstract - In this paper, we describe a rectangular window subspace tracking algorithm, which tracks the r largest singular values and corresponding left singular vectors of a sequence of n x c matrices in O(nr²) time. This algorithm is designed to track rapidly changing subspaces. It uses a rectangular window to include a finite number of approximately stationary data columns. This algorithm is based on the Improved Fast Adaptive Subspace Tracking (IFAST) algorithm of Toolan and Tufts, but performs the (r+2)th order eigendecomposition with an alternative method that takes advantage of matrix structure. This matrix is a special rank-six modification of a diagonal matrix, so its eigendecomposition can be determined with only a single O(r³) matrix product to rotate its eigenvectors, and all other computation is O(r²). Methods for implementing this algorithm in a numerically stable way are also discussed.

I. INTRODUCTION

Let us assume that we have a data source which produces a new length-n column vector at regular intervals. This situation applies when we have an n element sensor array and take snapshots at regularly spaced time intervals; it applies when we analyze a digital image by panning across either the columns or rows; and it applies to a single sensor which takes regularly spaced samples in time, then creates a vector from the last n samples.

The goal of subspace tracking is to apply a sliding window to our data to create a sequence of overlapping matrices, then track the r principal singular values and left singular vectors of that windowed matrix, or an orthogonal set of columns which span that r-dimensional principal subspace. The large overlap between successive windowed matrices allows one to reduce computation by determining the new subspace as an update of the subspace determined from the previous matrix. Because the singular value decomposition (SVD) is a full matrix operation, these update methods are generally approximations. One reason we are often only interested in the left singular vectors is that they are also the eigenvectors of the sample correlation matrix.

Most of the earlier methods of subspace tracking are based on applying an exponential window to the data [1]-[4], but recently there have been some methods which apply a rectangular window [5]-[8]. The rectangular window methods allow inclusion of a finite amount of data, and can give better performance when the data is highly non-stationary, or there are abrupt changes in the data.

II. SLIDING RECTANGULARLY WINDOWED DATA

Letting x_i be the ith column vector that we received, a length-c rectangularly windowed matrix is formed by creating a matrix from the last c column vectors. At time t, we can write our previous matrix, M, and current matrix, M̃, as

    M = [ x_{t-c}  x_{t-c+1}  ...  x_{t-2}  x_{t-1} ],   (1)
    M̃ = [ x_{t-c+1}  x_{t-c+2}  ...  x_{t-1}  x_t ].   (2)

Note that M and M̃ share all but one column. We can write the current matrix in terms of the previous matrix as

    M̃ = ( M - x_{t-c} e_1^* ) P + x_t e_c^*,   (3)

where e_i is the ith length-c canonical vector (the ith column of a c x c identity matrix) and P = [ e_2, e_3, ..., e_c, e_1 ]. Since the left singular vectors of a matrix are the eigenvectors of that matrix times its conjugate transpose, then from (3), we can write

    M̃ M̃^* = M M^* - x_{t-c} x_{t-c}^* + x_t x_t^*,   (4)

which makes it clear that M̃ M̃^* is a rank-two perturbation of M M^*.
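As a concrete check of the relations above, the following NumPy sketch builds M and M̃ from a stream of random complex vectors and verifies (3) and (4) numerically. The sizes, the random data, and the variable names are illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, t = 8, 5, 20                       # illustrative sizes only

# columns x_1 ... x_t of a complex data stream
X = rng.standard_normal((n, t)) + 1j * rng.standard_normal((n, t))

M_prev = X[:, t - c - 1:t - 1]           # M  = [x_{t-c} ... x_{t-1}]
M_curr = X[:, t - c:t]                   # M~ = [x_{t-c+1} ... x_t]
x_old, x_new = X[:, t - c - 1], X[:, t - 1]

# relation (3): column shift written with the permutation P = [e_2, ..., e_c, e_1]
e1, ec = np.eye(c)[:, 0], np.eye(c)[:, -1]
P = np.roll(np.eye(c), -1, axis=1)
print(np.allclose((M_prev - np.outer(x_old, e1)) @ P + np.outer(x_new, ec), M_curr))

# relation (4): rank-two perturbation of the correlation matrix
lhs = M_curr @ M_curr.conj().T
rhs = (M_prev @ M_prev.conj().T
       - np.outer(x_old, x_old.conj()) + np.outer(x_new, x_new.conj()))
print(np.allclose(lhs, rhs))
```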
The problem of updating the eigendecomposition of a rank-one perturbation of a symmetric matrix has been well studied, both theoretically and numerically (see [9] and references therein). A rank-two perturbation of this form has been much less studied, even though it is equivalent to two sequential rank-one perturbations.

III. DEFINING THE PROBLEM

We start with two sequential n x c rectangularly windowed complex matrices M and M̃, along with approximations to the r largest singular values and corresponding left singular vectors of M, which we will call Σ ∈ R^{r x r} and U ∈ C^{n x r} respectively. Our goal is to determine a good approximation of the r (or possibly r+2) largest singular values and corresponding left singular vectors of M̃ by updating U and Σ.

From (1) and (2), we can see that M and M̃ share all but one column, and from (4) we can see that the ordering of the columns has no effect on the left singular vectors, therefore the column space spanned by U should be close to the column space spanned by the r dominant left singular vectors of the n x (c-1) matrix of shared columns, [ x_{t-c+1}  x_{t-c+2}  ...  x_{t-2}  x_{t-1} ]. The subspace that is not included in U which may have a strong influence on the dominant left singular vectors of M̃ is the part of both x_{t-c} and x_t that is orthogonal to U.
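Both observations, that column ordering does not affect the left singular vectors and that U nearly captures the dominant subspace of the shared columns, can be illustrated numerically. The sketch below is only an illustration; the sizes, the low-rank-plus-noise data model, and the projection-residual test are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c, r = 40, 30, 4                      # illustrative sizes only

# low-rank-plus-noise columns so that an r-dimensional dominant subspace exists
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, c + 1)) \
    + 0.05 * rng.standard_normal((n, c + 1))

M = X[:, :c]                             # previous window
shared = X[:, 1:c]                       # n x (c-1) matrix of shared columns

U = np.linalg.svd(M, full_matrices=False)[0][:, :r]

# column order does not change the left singular vectors (they come from M M*)
U_perm = np.linalg.svd(M[:, rng.permutation(c)], full_matrices=False)[0][:, :r]
print(np.linalg.norm(U_perm - U @ (U.conj().T @ U_perm)))      # ~ 0, same subspace

# U is close to the dominant subspace of the shared-column matrix
U_shared = np.linalg.svd(shared, full_matrices=False)[0][:, :r]
print(np.linalg.norm(U_shared - U @ (U.conj().T @ U_shared)))  # small residual
```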

If we use a Gram-Schmidt like method [10] to augment U by this subspace, and call this augmentation Q ∈ C^{n x 2}, then the subspace spanned by [ U  Q ] should be close to the subspace spanned by the r dominant left singular vectors of M̃. In [8], a detailed analytical analysis is presented, which shows why the subspace spanned by [ U  Q ] is a good approximation to the subspace spanned by the r dominant left singular vectors of M̃, and that it is not trivial to find an r+2 dimensional subspace that will give a better approximation.

The idea behind the improved fast adaptive subspace tracking (IFAST) algorithm [8], [11] is to approximate the matrix M̃ by the rank r+2 matrix M̂ = [ U  Q ][ U  Q ]^* M̃, then determine the singular values and left singular vectors of M̂ in an efficient way. Approximating M̃ by M̂ is the only approximation involved in the algorithm. Determining the nonzero singular values and corresponding left singular vectors of M̂ is equivalent to applying the Rayleigh-Ritz procedure [12] to M̃ M̃^* using the subspace [ U  Q ]. This means that the SVD of M̂ is the optimal approximation to the SVD of M̃, when limited to the subspace spanned by the columns of [ U  Q ] [12].

Table I is intended to clearly show how IFAST works, and will produce the exact singular values and left singular vectors of M̂. Table I will also produce the same result as the algorithm presented in this paper, as well as that in [8], [11]. The only difference between Table I and the IFAST algorithm in [8], [11] is that the O(ncr) computation in step 2 is performed as an equivalent O(nc) computation.

The contribution of this paper is to replace the eigendecomposition in step 3 with a single matrix product of two (r+2) x (r+2) matrices, with all other computation being O(r²). This is approximately four times faster than a conventional EVD [10]. Because the efficient computation of the EVD is done using secular equations [13], we will call the version of the algorithm presented in this paper the Improved Secular Fast Adaptive Subspace Tracking (ISFAST) algorithm. The only computation in this algorithm that is not the equivalent of a matrix times a vector is the product of two (r+2) x (r+2) matrices, and the product of an n x (r+2) matrix with an (r+2) x (r+2) matrix.

TABLE I
THEORETICAL IFAST ALGORITHM

  Step   Calculation
  1-a)   q_1 = (I - U U^*) x_{t-c} / ‖(I - U U^*) x_{t-c}‖
  1-b)   q_2 = (I - U U^* - q_1 q_1^*) x_t / ‖(I - U U^* - q_1 q_1^*) x_t‖
  2)     F = [ U  Q ]^* M̃ M̃^* [ U  Q ]
  3)     U_f Σ_f² U_f^* = F
  4)     Ũ = [ U  Q ] U_f,   Σ̃ = Σ_f
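A direct, unoptimized rendering of Table I may help make its structure concrete. In the sketch below the function name is our own, the small EVD is performed with a dense eigh call (exactly the step that Sections V through VII replace), and F is formed naively rather than with the more efficient update used by IFAST.

```python
import numpy as np

def ifast_table1(M_new, U, x_old, x_new):
    """Sketch of Table I: exact SVD of M-hat = [U Q][U Q]* M_new.

    U holds the previous r left singular vectors, x_old is the column leaving
    the window and x_new is the column entering it."""
    # step 1: Gram-Schmidt augmentation of U by x_old and x_new
    q1 = x_old - U @ (U.conj().T @ x_old)
    q1 /= np.linalg.norm(q1)
    q2 = x_new - U @ (U.conj().T @ x_new) - q1 * (q1.conj() @ x_new)
    q2 /= np.linalg.norm(q2)
    UQ = np.column_stack([U, q1, q2])                # n x (r+2)

    # step 2: F = [U Q]* M_new M_new* [U Q], formed naively here
    B = UQ.conj().T @ M_new
    F = B @ B.conj().T

    # step 3: EVD of the small (r+2) x (r+2) matrix F
    evals, Uf = np.linalg.eigh(F)
    order = np.argsort(evals)[::-1]
    evals, Uf = evals[order], Uf[:, order]

    # step 4: rotate back to n dimensions; singular values are sqrt of eigenvalues
    return np.sqrt(np.maximum(evals, 0.0)), UQ @ Uf
```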
IV. THE DETAILS OF Σ, U, AND Q

Although Σ and U are approximations, there are a few properties that they must have: Σ must be diagonal and nonnegative, the columns of U must form an orthonormal set (i.e. U^* U = I), and U^* M M^* U must equal Σ². These conditions will always be satisfied if U and Σ were generated by a previous iteration of this algorithm [8], or from the SVD of M. All these constraints really say is that U should be rotated so that U^* M M^* U is diagonal. For any matrix U_x ∈ C^{n x r} whose columns form an orthonormal set, we can generate a U and Σ that satisfy these constraints by taking the EVD of U_x^* M M^* U_x = U_y Σ_y² U_y^*, then setting Σ = Σ_y and U = U_x U_y.

In the last section, we mentioned that we will create the matrix Q using a Gram-Schmidt like method [10] to augment U with x_{t-c} (the column we are discarding from M) and x_t (the column we are adding to M̃). We can augment U with x_{t-c} followed by x_t, or x_t followed by x_{t-c}, as they will both produce a Q which spans the same column space. In fact, we can rotate Q by any unitary matrix, and it will not change the column space spanned by Q. We will take advantage of this property to make later computation easier.

We start by creating the matrix Q̇ = [ q̇_1  q̇_2 ] using Gram-Schmidt augmentation of U,

    q̇_1 = (I - U U^*) x_{t-c} / ‖(I - U U^*) x_{t-c}‖,
    q̇_2 = (I - U U^* - q̇_1 q̇_1^*) x_t / ‖(I - U U^* - q̇_1 q̇_1^*) x_t‖.

The two vectors q̇_1 and q̇_2 are an orthonormal set that spans the desired column space, but in general the Hermitian matrix T̂ = Q̇^* M M^* Q̇ will not be diagonal, therefore we will rotate Q̇ to form Q, such that Σ̂² = Q^* M M^* Q is diagonal. In order to determine the necessary rotation, we can take the EVD of T̂, which we will write as

    T̂ = [ q̇_1^* M M^* q̇_1 , q̇_1^* M M^* q̇_2 ; q̇_2^* M M^* q̇_1 , q̇_2^* M M^* q̇_2 ] = Û Σ̂² Û^*.

Because the formation of T̂ is required later in the algorithm, there is no additional computational load introduced by forming it now.

The eigenvalues of a matrix are the roots of its characteristic equation, and since T̂ is a 2 x 2 matrix, we can use the quadratic equation to find the eigenvalues. Defining b = (t̂_{1,1} + t̂_{2,2})/2 and c = t̂_{1,1} t̂_{2,2} - t̂_{1,2} t̂_{2,1}, where t̂_{i,j} is the (i,j)th element of T̂, allows us to write the eigenvalues of T̂ as

    Σ̂² = [ b + √(b² - c) , 0 ; 0 , b - √(b² - c) ].

Now that we have the eigenvalues of T̂, we can determine its eigenvectors. We can use the definition of an eigenvector [10], and write

    T̂ [ 1 ; β ] = σ̂_1² [ 1 ; β ],

where [ 1, β ]^T is an unnormalized eigenvector of T̂ corresponding to eigenvalue σ̂_1². Solving for β, normalizing the eigenvector, then repeating for the second eigenvector, we get

    Û = [ α_1 , α_2 ; φ α_2 , -φ α_1 ],

where α_1 = 1 / √(1 + (σ̂_1² - t̂_{1,1})² / |t̂_{1,2}|²), α_2 = √(1 - α_1²), and φ = t̂_{2,1} / |t̂_{2,1}|. This allows us to define Q = Q̇ Û, which will have the property Q^* M M^* Q = Σ̂².
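In code, this rotation amounts to a few lines. The sketch below (the function name is our own) forms T̂ from the Gram-Schmidt pair and the previous correlation matrix, applies the quadratic formula, and builds Û and Q as described above; it assumes the off-diagonal element of T̂ is nonzero, and a dense 2 x 2 eigendecomposition would give the same column space.

```python
import numpy as np

def rotate_augmentation(Q_dot, MMh):
    """Section IV sketch: rotate the Gram-Schmidt pair so Q* M M* Q is diagonal.

    Q_dot is the n x 2 matrix [qdot_1, qdot_2] and MMh is the previous
    correlation matrix M M*.  Returns Q = Q_dot @ U_hat and the two eigenvalues
    of T-hat.  Assumes the off-diagonal element of T-hat is nonzero."""
    T = Q_dot.conj().T @ MMh @ Q_dot                 # 2 x 2 Hermitian T-hat
    b = 0.5 * (T[0, 0].real + T[1, 1].real)
    c = (T[0, 0] * T[1, 1] - T[0, 1] * T[1, 0]).real
    root = np.sqrt(max(b * b - c, 0.0))
    sig1, sig2 = b + root, b - root                  # eigenvalues of T-hat

    alpha1 = 1.0 / np.sqrt(1.0 + (sig1 - T[0, 0].real) ** 2 / abs(T[0, 1]) ** 2)
    alpha2 = np.sqrt(max(1.0 - alpha1 ** 2, 0.0))
    phi = T[1, 0] / abs(T[1, 0])
    U_hat = np.array([[alpha1, alpha2],
                      [phi * alpha2, -phi * alpha1]])
    return Q_dot @ U_hat, np.array([sig1, sig2])
```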

V. EFFICIENTLY CALCULATING THE SVD OF M̂

Determining the singular values and left singular vectors of M̂ is equivalent to determining the eigenvalues and eigenvectors of M̂ M̂^* [10], therefore we will start by analyzing the eigendecomposition of

    M̂ M̂^* = [ U  Q ][ U  Q ]^* M̃ M̃^* [ U  Q ][ U  Q ]^*.

If we define

    F = [ U  Q ]^* M̃ M̃^* [ U  Q ] = U_f Σ_f² U_f^*,   (5)

then we can write M̂ M̂^* = ([ U  Q ] U_f) Σ_f² ([ U  Q ] U_f)^*, and it becomes obvious that the non-zero eigenvalues of M̂ M̂^* will be the eigenvalues of F, and the corresponding eigenvectors of M̂ M̂^* will be [ U  Q ] U_f.

From (4), we know M̃ M̃^* = M M^* - x_{t-c} x_{t-c}^* + x_t x_t^*, which we can substitute into (5), to get

    F = [ U  Q ]^* M M^* [ U  Q ] - a a^* + b b^*,   (6)

where the vectors a and b are defined as a = [ U  Q ]^* x_{t-c} and b = [ U  Q ]^* x_t. Multiplying out the matrix blocks in the first term of (6) gives us

    F = [ U^* M M^* U , U^* M M^* Q ; Q^* M M^* U , Q^* M M^* Q ] - a a^* + b b^*.   (7)

In Section IV we made it clear that U^* M M^* U = Σ² and Q^* M M^* Q = Σ̂², both of which we already have. If we define Z = U^* M M^* Q (remember M^* Q = (M^* Q̇) Û, and we already computed M^* Q̇ when constructing T̂), we can write (7) as

    F = [ Σ² , Z ; Z^* , Σ̂² ] - a a^* + b b^*.   (8)

The matrix F, as it is written in (8), is a rank-six modification of a diagonal matrix. The matrices a a^* and b b^* are each rank-one, and the matrix [ 0 , Z ; Z^* , 0 ] is rank-four, which can be seen from (16) in the appendix. Because the eigendecomposition of certain low rank modifications of diagonal matrices can be computed in O(n²) time, we will take advantage of this to compute the EVD of F more efficiently.

We will compute the EVD of F in two steps. The first step is to compute the EVD of

    Ġ = [ Σ² , Z ; Z^* , Σ̂² ] = U̇ Σ̇² U̇^*,   (9)

which is a rank-four modification of a diagonal matrix. Since F = Ġ - a a^* + b b^*, if we rotate F by the eigenvectors of Ġ, we can define the matrix Ḧ = U̇^* F U̇. Therefore, defining ȧ = U̇^* a and ḃ = U̇^* b, the second step is to compute the EVD of

    Ḧ = Σ̇² - ȧ ȧ^* + ḃ ḃ^* = Ü Σ̈² Ü^*,

which is a rank-two modification of a diagonal matrix. Finally, we can determine the EVD of F as Σ_f² = Σ̈² and U_f = U̇ Ü, allowing us to write the singular values and left singular vectors of M̂ as Σ̃ = Σ̈ and Ũ = [ U  Q ] U̇ Ü.

VI. COMPUTING THE EVD OF Ġ

The eigenvalues of Ġ from (9) are the roots of its characteristic polynomial, Ċ(λ) = det[ Ġ - λI ], which can also be written as

    Ċ(λ) = det [ Σ² - λI , Z ; Z^* , Σ̂² - λI ].   (10)

Using [8], the determinant of a bordered diagonal matrix in the form of (10) can be written as

    Ċ(λ) = ( ∏_{i=1}^{r} (σ_i² - λ) ) ( ẇ_1(λ) ẇ_2(λ) - |ẇ_x(λ)|² ),

where

    ẇ_i(λ) = σ̂_i² - λ - ∑_{j=1}^{r} |z_{j,i}|² / (σ_j² - λ),   ẇ_x(λ) = ∑_{j=1}^{r} z_{j,1}^* z_{j,2} / (σ_j² - λ),   (11)

σ_i is the ith diagonal element of Σ, z_{j,i} is the (j,i)th element of Z, and σ̂_i is the ith diagonal element of Σ̂. Because the product term in front of Ċ(λ) has no effect on its roots, we can cancel it to get

    ẇ(λ) = ẇ_1(λ) ẇ_2(λ) - |ẇ_x(λ)|²,

which is a rank-four version of the rank-two formula given in [14], and behaves similarly to the rank-two secular function from [8]. We can now get the r+2 eigenvalues of Ġ by determining the roots of ẇ(λ), which we will call σ̇_1² through σ̇_{r+2}².

If u is the unnormalized ith eigenvector of Ġ, then from the definition of an eigenvector [10], we get Ġ u = u σ̇_i². Solving for u (the details of which are given in the appendix), we get

    u = [ (Σ² - σ̇_i² I)^{-1} ( z_1 + (ẇ_1(σ̇_i²)/ẇ_x(σ̇_i²)) z_2 ) ; -1 ; -ẇ_1(σ̇_i²)/ẇ_x(σ̇_i²) ],   (12)

where z_1 and z_2 are the first and second columns of Z, respectively. This allows us to write the normalized ith eigenvector of Ġ as u̇_i = u / ‖u‖.

To stably calculate the eigenvalues of Ġ, they should be calculated as two sequential updates using [14], combined with the techniques described at the end of the next section.
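The following sketch checks this machinery on a synthetic Ġ: every eigenvalue returned by a dense eigh call is a root of ẇ(λ), and formula (12) reproduces the corresponding eigenvector up to scale. The random test matrix, its size, and the use of eigh in place of the sequential bordered-diagonal updates of [14] are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
r = 5                                                # illustrative size only

# synthetic G-dot: diagonal blocks Sigma^2 and Sigma-hat^2, bordered by Z (r x 2)
sig2 = np.sort(rng.uniform(1.0, 10.0, r))[::-1]
sighat2 = rng.uniform(1.0, 10.0, 2)
Z = rng.standard_normal((r, 2)) + 1j * rng.standard_normal((r, 2))
G = np.block([[np.diag(sig2), Z], [Z.conj().T, np.diag(sighat2)]])

def w1(lam): return sighat2[0] - lam - np.sum(np.abs(Z[:, 0])**2 / (sig2 - lam))
def w2(lam): return sighat2[1] - lam - np.sum(np.abs(Z[:, 1])**2 / (sig2 - lam))
def wx(lam): return np.sum(Z[:, 0].conj() * Z[:, 1] / (sig2 - lam))

evals, evecs = np.linalg.eigh(G)
for lam, v in zip(evals, evecs.T):
    # every eigenvalue of G-dot is a root of the secular function w(lambda)
    print(abs(w1(lam) * w2(lam) - abs(wx(lam))**2))

    # eigenvector formula (12), up to scale
    gamma = w1(lam) / wx(lam)
    u = np.concatenate([(Z[:, 0] + gamma * Z[:, 1]) / (sig2 - lam),
                        np.array([-1.0, -gamma])])
    u /= np.linalg.norm(u)
    print(abs(abs(np.vdot(u, v)) - 1.0))             # ~ 0, same direction as eigh
```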

VII. COMPUTING THE EVD OF Ḧ

After determining the EVD of Ġ, we can determine the EVD of Ḧ. The eigenvalues of Ḧ = Σ̇² - ȧ ȧ^* + ḃ ḃ^* are the roots of the rank-two secular equation

    ẅ(λ) = ẅ_a(λ) ẅ_b(λ) + |ẅ_ab(λ)|²,   (13)

which is a rank-two version of the secular function given in [13], and whose derivation is in [8]. The parts of (13) are defined as

    ẅ_a(λ) = 1 - ∑_{j=1}^{r+2} |ȧ_j|² / (σ̇_j² - λ),
    ẅ_b(λ) = 1 + ∑_{j=1}^{r+2} |ḃ_j|² / (σ̇_j² - λ),
    ẅ_ab(λ) = ∑_{j=1}^{r+2} ȧ_j^* ḃ_j / (σ̇_j² - λ),

where ȧ_j and ḃ_j are the jth elements of ȧ and ḃ, respectively. The unnormalized ith eigenvector of Ḧ is a linear combination of ȧ and ḃ, and from [8] can be written as

    ü = (Σ̇² - σ̈_i² I)^{-1} ( ȧ + (ẅ_a(σ̈_i²)/ẅ_ab(σ̈_i²)) ḃ ).   (14)

This allows us to write the normalized ith eigenvector of Ḧ as ü_i = ü / ‖ü‖.

To stably calculate the eigenvalues of Ḧ, they should be calculated as two sequential updates using [15], with the stopping criterion from [16], and the first set of eigenvectors should be calculated using the method in [16], [17]. To avoid an O(r³) rotation, the eigenvectors of Ḧ are calculated using the method in this paper. For closely spaced eigenvalues, there will be some loss of orthogonality, which should be able to be addressed using methods like [18], [19] and [20].

VIII. CONCLUSION

In this paper, we present a method for efficiently computing the small EVD of the IFAST algorithm. We call this modified version of IFAST the Improved Secular Fast Adaptive Subspace Tracking (ISFAST) algorithm. The steps of the ISFAST algorithm are presented in Table II, and produce the same result as Table I. The dominant step computationally is step 4-a, as it has one O(r³) matrix product and one O(nr²) matrix product. All other steps in the algorithm only contain terms which are matrices times vectors, or their computational equivalent.

TABLE II
THE ISFAST ALGORITHM

  Step   Calculation
  1-a)   q̇_1 = (I - U U^*) x_{t-c} / ‖(I - U U^*) x_{t-c}‖
  1-b)   q̇_2 = (I - U U^* - q̇_1 q̇_1^*) x_t / ‖(I - U U^* - q̇_1 q̇_1^*) x_t‖
  1-c)   T̂ = Q̇^* M M^* Q̇
  1-d)   Û Σ̂² Û^* = T̂
  1-e)   Q = Q̇ Û
  2-a)   Z = U^* M M^* Q
  2-b)   Ġ = [ Σ² , Z ; Z^* , Σ̂² ]
  2-c)   U̇ Σ̇² U̇^* = Ġ
  3-a)   ȧ = U̇^* [ U  Q ]^* x_{t-c}
  3-b)   ḃ = U̇^* [ U  Q ]^* x_t
  3-c)   Ḧ = Σ̇² - ȧ ȧ^* + ḃ ḃ^*
  3-d)   Ü Σ̈² Ü^* = Ḧ
  4-a)   Ũ = [ U  Q ] U̇ Ü
  4-b)   Σ̃ = Σ̈

The computational improvement introduced by performing the calculations as presented in this paper is shown in Fig. 1, which plots approximate FLOPS versus subspace dimension r for a square complex data matrix. The full decomposition represents determining the full set of singular values and left singular vectors of M̃ using a conventional SVD [10]. The ISFAST algorithm and the IFAST algorithm both produce the singular values and left singular vectors of M̂. The FAST algorithm [5] is given for comparison, and is almost computationally equivalent to the steps in Table I. The secular update method uses the method from Section VII directly on M̃ M̃^*, and will give the same results as the full decomposition. Plots of approximate FLOPS vs. subspace dimension for other square matrix dimensions look very similar, even for very large n.

[Fig. 1. Computational requirements of the full decomposition, secular update, FAST, IFAST, and ISFAST algorithms vs. subspace dimension r, for a square complex data matrix.]

We have not mentioned how we determine the subspace dimension r, but the method in [21], which is based on the energy in the noise subspace, works well with this algorithm for determining r at each iteration.
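As a final numerical check of the two-step decomposition behind Table II, the sketch below builds a synthetic F = Ġ - a a^* + b b^*, computes the EVD of Ġ, rotates, computes the EVD of Ḧ, and confirms that U̇Ü together with the eigenvalues of Ḧ reproduce F. Dense eigh calls stand in for the secular-equation solvers of Sections VI and VII, and all sizes and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
r = 6                                                # illustrative size only

# synthetic F = G-dot - a a* + b b*, the rank-six modification of a diagonal
sig2 = np.sort(rng.uniform(1.0, 10.0, r))[::-1]
sighat2 = rng.uniform(1.0, 10.0, 2)
Z = rng.standard_normal((r, 2)) + 1j * rng.standard_normal((r, 2))
a = rng.standard_normal(r + 2) + 1j * rng.standard_normal(r + 2)
b = rng.standard_normal(r + 2) + 1j * rng.standard_normal(r + 2)

G = np.block([[np.diag(sig2), Z], [Z.conj().T, np.diag(sighat2)]])
F = G - np.outer(a, a.conj()) + np.outer(b, b.conj())

# step 2-c: EVD of the rank-four modification G-dot
dvals, U_dot = np.linalg.eigh(G)

# steps 3-a through 3-d: rotate, then EVD of the rank-two modification H-ddot
a_dot, b_dot = U_dot.conj().T @ a, U_dot.conj().T @ b
H = np.diag(dvals) - np.outer(a_dot, a_dot.conj()) + np.outer(b_dot, b_dot.conj())
hvals, U_ddot = np.linalg.eigh(H)

# step 4-a equivalent: the eigenvectors of F are U_dot @ U_ddot
Uf = U_dot @ U_ddot
print(np.allclose(Uf @ np.diag(hvals) @ Uf.conj().T, F))            # True
print(np.allclose(np.sort(hvals), np.sort(np.linalg.eigvalsh(F))))  # True
```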

APPENDIX
THE EIGENVECTORS OF Ġ

In this appendix, we derive the formula for the unnormalized ith eigenvector of Ġ. If the vector u is the ith eigenvector of Ġ multiplied by some scale factor, then from the definition of an eigenvector [10], we get Ġ u = u σ̇_i². Using (9), we can write

    ( [ Σ² , 0 ; 0 , Σ̂² ] + Z_zz ) u = u σ̇_i²,

where Z_zz = [ 0 , Z ; Z^* , 0 ]. After rearranging some terms, we get D u = -Z_zz u, where D = [ Σ² , 0 ; 0 , Σ̂² ] - σ̇_i² I. If we deflated the problem by removing unchanged eigenvalues as in [8], [15], then D is invertible, and we can write

    u = -D^{-1} Z_zz u.   (15)

The matrix Z_zz is a rank-four Hermitian matrix, and can be written as the sum of four Hermitian rank-one matrices,

    Z_zz = (1/2) v_1 v_1^* - (1/2) v_2 v_2^* + (1/2) v_3 v_3^* - (1/2) v_4 v_4^*,   (16)

where v_1 = [ z_1 ; 1 ; 0 ], v_2 = [ z_1 ; -1 ; 0 ], v_3 = [ z_2 ; 0 ; 1 ], and v_4 = [ z_2 ; 0 ; -1 ]. If we partition the length r+2 vector u as [ ū^T , u_{r+1} , u_{r+2} ]^T, and substitute this partitioned u along with (16) into (15), we get

    u = -D^{-1} [ u_{r+1} z_1 + u_{r+2} z_2 ; z_1^* ū ; z_2^* ū ].   (17)

If we partition the (r+2) x (r+2) diagonal matrix D as

    D = [ D̄ , 0 , 0 ; 0 , d_{r+1} , 0 ; 0 , 0 , d_{r+2} ],

then from (17), we get the two equalities u_{r+1} = -z_1^* ū / d_{r+1} and u_{r+2} = -z_2^* ū / d_{r+2}. Substituting for u_{r+1} and u_{r+2} in (17), the first r elements of u become

    ū = D̄^{-1} ( (z_1^* ū / d_{r+1}) z_1 + (z_2^* ū / d_{r+2}) z_2 ),   (18)

where z_1^* ū and z_2^* ū are scalars. If we multiply both sides of (18) by z_1^* from the left, we get

    z_1^* ū = (z_1^* ū / d_{r+1}) z_1^* D̄^{-1} z_1 + (z_2^* ū / d_{r+2}) z_1^* D̄^{-1} z_2.   (19)

Solving for z_2^* ū in terms of z_1^* ū gives us

    z_2^* ū = z_1^* ū (d_{r+2}/d_{r+1}) (d_{r+1} - z_1^* D̄^{-1} z_1) / (z_1^* D̄^{-1} z_2).   (20)

From (11), we can see that ẇ_1(σ̇_i²) = d_{r+1} - z_1^* D̄^{-1} z_1 and ẇ_x(σ̇_i²) = z_1^* D̄^{-1} z_2. Using these identities, and plugging (20) into (18), gives us

    u = (z_1^* ū / d_{r+1}) D^{-1} ( [ z_1 ; -d_{r+1} ; 0 ] + (ẇ_1(σ̇_i²)/ẇ_x(σ̇_i²)) [ z_2 ; 0 ; -d_{r+2} ] ).   (21)

Because u is unnormalized, we can discard the scalar z_1^* ū / d_{r+1}, and because the (r+1)th and (r+2)th diagonal elements of D^{-1} are 1/d_{r+1} and 1/d_{r+2} respectively, they will cancel the d_{r+1} and d_{r+2} in the two vectors, leaving us with (12).

REFERENCES

[1] I. Karasalo, "Estimating the covariance matrix by signal subspace averaging," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, no. 1, pp. 8-12, Feb. 1986.
[2] B. Yang, "Projection approximation subspace tracking," IEEE Trans. Signal Processing, vol. 43, no. 1, pp. 95-107, 1995.
[3] D. J. Rabideau, "Fast, rank adaptive subspace tracking and applications," IEEE Trans. Signal Processing, vol. 44, no. 9, pp. 2229-2244, 1996.
[4] P. Strobach, "Bi-iteration SVD subspace tracking algorithms," IEEE Trans. Signal Processing, vol. 45, no. 5, pp. 1222-1240, 1997.
[5] E. C. Real, D. W. Tufts, and J. W. Cooley, "Two algorithms for fast approximate subspace tracking," IEEE Trans. Signal Processing, vol. 47, no. 7, pp. 1936-1945, July 1999.
[6] D. J. Rabideau, "Subspace tracking with quasi-rectangular data windows," in Proc. SPIE - The International Society for Optical Engineering, vol. 5096, Apr. 2003, pp. 646-658.
[7] R. Badeau, G. Richard, and B. David, "Sliding window adaptive SVD algorithms," IEEE Trans. Signal Processing, vol. 52, no. 1, pp. 1-10, 2004.
[8] T. M. Toolan, "Advances in sliding window subspace tracking," Ph.D. dissertation, University of Rhode Island, 2005.
[9] J. W. Demmel, Applied Numerical Linear Algebra. Philadelphia, PA: Society for Industrial and Applied Mathematics, 1997.
[10] G. H. Golub and C. F. van Loan, Matrix Computations, 3rd ed. Baltimore, MD: Johns Hopkins Univ. Press, 1996.
[11] T. M. Toolan and D. W. Tufts, "Improved fast adaptive subspace tracking," in Proc. Thirteenth Annual Adaptive Sensor Array Processing Workshop (ASAP), June 2005.
[12] B. N. Parlett, The Symmetric Eigenvalue Problem. Englewood Cliffs, NJ: Prentice-Hall, 1980.
[13] G. H. Golub, "Some modified matrix eigenvalue problems," SIAM Rev., vol. 15, no. 2, pp. 318-334, Apr. 1973.
[14] D. R. Fuhrmann, "An algorithm for subspace computation with applications in signal processing," SIAM J. Matrix Anal. Appl., vol. 9, no. 2, pp. 213-220, 1988.
[15] J. R. Bunch, C. P. Nielsen, and D. C. Sorensen, "Rank-one modification of the symmetric eigenproblem," Numer. Math., vol. 31, pp. 31-48, 1978.
[16] M. Gu and S. C. Eisenstat, "A stable and efficient algorithm for the rank-one modification of the symmetric eigenproblem," SIAM J. Matrix Anal. Appl., vol. 15, no. 4, pp. 1266-1276, Oct. 1994.
[17] M. Gu and S. C. Eisenstat, "Downdating the singular value decomposition," SIAM J. Matrix Anal. Appl., vol. 16, no. 3, pp. 793-810, July 1995.
[18] D. C. Sorensen and P. T. P. Tang, "On the orthogonality of eigenvectors computed by divide-and-conquer techniques," SIAM J. Numer. Anal., vol. 28, no. 6, pp. 1752-1775, Dec. 1991.
[19] R. D. DeGroat and R. A. Roberts, "Efficient, numerically stabilized rank-one eigenstructure updating," IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, no. 2, pp. 301-316, Feb. 1990.
[20] M. Moonen, P. van Dooren, and J. Vandewalle, "A note on 'Efficient numerically stabilized rank-one eigenstructure updating'," IEEE Trans. Signal Processing, vol. 39, no. 8, pp. 1911-1914, Aug. 1991.
[21] T. M. Toolan and D. W. Tufts, "Detection and estimation in non-stationary environments," in Proc. IEEE Asilomar Conference on Signals, Systems & Computers, Nov. 2003, pp. 797-801.