
Instrumental Variable Subspace Tracking Using Projection Approximation

Tony Gustafsson†

October 17, 1996

Abstract

Subspace estimation plays an important role in, for example, sensor array signal processing. Recursive methods for subspace tracking, with obvious applications to non-stationary environments, have also drawn considerable interest. In this paper we present an Instrumental Variable (IV) extension of the recently developed Projection Approximation Subspace Tracking (PAST) algorithm. The IV approach is motivated by the fact that PAST gives biased estimates when the noise vectors are not spatially white. IV methods are well known in the context of system identification. The basic idea of IV methods is that the IV vectors decorrelate the colored noise from the signal of interest, but leave the informative signal part undestroyed. The proposed algorithm is based on a (possibly over-determined) projection-like unconstrained system of equations. The resulting basic algorithm has a computational complexity of 3mn + O(n²), where m is the dimension of the measurement vector and n is the subspace dimension. The basic IV algorithm is also extended to a second-order IV version, which is demonstrated to have better tracking properties than the basic IV algorithm. The performance of the algorithms is demonstrated with a simulation study of time-varying sinusoids in additive colored noise.

Keywords: Subspace Tracking, Sensor Array Signal Processing, Instrumental Variables, Frequency Estimation, Subspace Methods

Permission to publish this abstract separately is granted. This work was supported in part by the Swedish Research Council for Engineering Sciences (TFR).
† Chalmers University of Technology, Department of Applied Electronics, S Gothenburg, Sweden, tonyg@ae.chalmers.se

1 Introduction

Sensor array signal processing is a signal processing area that has received much attention in recent literature. Especially high-resolution subspace-based methods have been studied by many researchers, see for example [9, 12]. These methods are model-based in the sense that they rely on a mathematical model of the received signal. Typical for the model-based approaches is that a serious degradation of the performance may occur if the signal model is incompatible with the 'true' signal. For example, the well-known MUSIC [13] algorithm assumes spatially white noise, i.e. the noise covariance matrix is proportional to the identity matrix. Of course, if the noise covariance is known this effect can be eliminated by pre-whitening. However, since the noise covariance generally is unknown, pre-whitening is in most cases not a realistic option. An alternative approach that does not require a known noise covariance matrix is the method of Instrumental Variables (IV); see [10] for a treatment of IV methods in the context of identifying linear time-invariant dynamical systems. A combination of IV and subspace fitting methods has recently been proposed as an alternative approach to array processing in spatially colored noise fields [15, 16]. Another aspect of the array processing field that has drawn attention is the application of high-resolution frequency and direction of arrival (DOA) estimation techniques to non-stationary environments. A drawback of the traditional subspace methods in this scenario is that the singular value decomposition, SVD (or the eigenvalue decomposition, EVD), is time-consuming to update. An excellent review of algorithms that try to overcome this drawback can be found in [5]. Most of the proposed recursive methods require spatially white noise. One exception is the SWEDE algorithm [6], which allows a slightly more general structure of the noise covariance matrix.
A specific example of a successful subspace tracking method is the Projection Approximation Subspace Tracking (PAST) algorithm [17]. The basic idea of PAST is that a projection-like unconstrained equation is simplified using a clever projection approximation, which leads to a Recursive Least Squares (RLS)-like algorithm for tracking the signal subspace. The DOA (or frequency) estimates are then, for example, taken as the angles of the eigenvalues of a matrix obtained using the shift-invariant structure of the subspace. However, the PAST algorithm also requires spatially white noise, and tends to give biased estimates when this requirement is not fulfilled. This fact is the motivation of the algorithms proposed herein. The aim of this paper is to derive an IV extension of PAST. Like all other IV methods, we require that an IV vector that is uncorrelated with the noise vector can be found. As long as this requirement is fulfilled, the noise vectors can be allowed to have arbitrary spatial and temporal color. A certain rank condition must also be fulfilled, since the informative part of the signal must remain undestroyed.

The IV approach leads to an unconstrained (possibly over-determined) system of equations. The solutions to this system are matrices with orthonormal columns that span the dominating (signal) subspace. Using the projection approximation idea of [17], an IV recursive subspace tracking algorithm is obtained. The resulting basic algorithm has the same computational complexity as PAST. The basic IV algorithm does not perform any rank reduction of the sample cross covariance matrix. This limitation is overcome in all extensions of the basic algorithm. The basic IV algorithm is for example extended to a second-order IV version, which is demonstrated to have good tracking properties.

The outline of the paper is as follows: Section 2 introduces the signal model and other assumptions. Section 3 contains the derivations of the proposed IV algorithms. These sections, more or less explicitly, focus on the DOA and frequency estimation problems, but it is important to notice that the studied data model is not constrained to this problem. Section 4 describes a few simulation examples and Section 5 concludes the paper.

The following notation is used in the paper: Matrices and vectors are represented with uppercase boldface and lowercase boldface fonts, respectively. The superscripts (·)^H and (·)^T denote complex conjugate transpose and transpose, respectively. The superscript (·)* denotes complex conjugate. Further, I_r denotes the r × r identity matrix, E[·] denotes mathematical expectation, and the subspace spanned by the columns of A is denoted R(A). The Kronecker delta function is denoted δ and the rank of a matrix A is denoted ρ(A). A diagonal matrix with diagonal elements λ_1, …, λ_M is denoted diag(λ_1, …, λ_M), Tr{·} denotes the trace operator and ‖·‖_F denotes the Frobenius matrix norm. The computational complexity is in this paper defined as the number of complex multiplications, which agrees with the definition used in [5].
2 Problem Formulation

Let z(t) ∈ C^{m×1} be the observed data vector. In the array case z(t) consists of the samples of a Uniform Linear Array (ULA) with m sensors. In time series (sum of complex sinusoids) problems the vector

z(t) = [z(t), …, z(t + m − 1)]^T    (2.1)

consists of m consecutive samples of an observed scalar signal. See Appendix A for a further treatment of the time series problem. In the following we assume that z(t) consists of n narrow-band plane waves impinging on an antenna array, or n complex sinusoids, corrupted by additive noise. Here the subspace dimension n, n < m, is assumed to be known. Hence, the following data model will be studied (see for example [17])

z(t) = Γ x(t) + e(t).    (2.2)

The model (2.2) is quite general, and the algorithms to be presented apply to this model whatever structure Γ may have. But in this paper we will focus on the DOA and frequency estimation problem. Thus, the following assumptions are introduced. The m × n matrix Γ is deterministic, but possibly time-varying, and is constructed as

Γ = [a(ω_1) … a(ω_n)]    (2.3)

where

a(ω_k) = [1  e^{jω_k}  …  e^{jω_k(m−1)}]^T    (2.4)

is a so-called steering vector. The un-measurable signal vector x(t) is a random n-vector with covariance matrix

C_x = E[x(t) x^H(t)].    (2.5)

The noise vector e(t) is assumed to be independent of x(t), and has zero mean and an unknown covariance matrix defined as

Q_e(τ) = E[e(t) e^H(t + τ)].    (2.6)

In the time series case, ω_k is the angular frequency of the kth complex sinusoid. In the array case

ω_k = 2π (d/λ) sin(θ_k)    (2.7)

where d is the spacing between adjacent sensors, λ is the wavelength and θ_k is the DOA relative to the array normal. The above relations hold when plane waves impinge on a uniform linear array (ULA). Typical for IV approaches are the following assumptions. Assume that there exists an IV vector ζ(t) ∈ C^{l×1}, l ≥ n, such that

A1: E[e(t) ζ^H(t)] = 0
A2: ρ(E[x(t) ζ^H(t)]) = ρ(C_{xζ}) = n.

The assumption A2 is made in order to ensure that ρ(Γ C_{xζ}) = n, which implies that R(Γ C_{xζ}) = R(Γ). This rank condition is, for the time series problem, treated in Appendix A. A discussion about how to find an IV vector that, in the array case, fulfills these conditions can be found in [15]; see also the references therein. One possible approach is to consider an array that is divided into sub-arrays. Then the outputs of one of the sub-arrays can be taken as instruments. If the sub-arrays are sufficiently far apart, the noise in the main sub-array is uncorrelated with the IV vector. These IV vectors are called spatial IV vectors. When a subarray is not available but the signals are temporally

correlated, an IV vector can be obtained by delaying the sensor outputs. This approach relies on the signal temporal correlation length being longer than that corresponding to the noise. These IV vectors are called temporal IV vectors. Finally, in the time series case the only choice is to use (possibly pre-filtered) delayed outputs

ζ(t) = [z(t − M), …, z(t − M − l + 1)]^T    (2.8)

for some user-specified integer M. Hence, for the IV approach to be applicable in the time series case we require that e(t) is finitely correlated, i.e. Q_e(τ) = 0 for all τ > s, for some s. It is also possible to consider a 'pre-filtering' of the instruments. This could for example be achieved by

ζ_F(t) = F ζ(t)    (2.9)

where F ∈ C^{l×l} is a 'pre-filtering' matrix. A natural choice, following the general theory of IV methods, would be to choose F = C_ζ^{−1/2}, where C_ζ = E[ζ(t) ζ^H(t)]. However, this possibility will not be further studied.

Implicit in the definitions above is that the subspace R(Γ) might be slowly time-varying, i.e. R(Γ) = R(Γ(t)), and of course ω_k = ω_k(t). For notational convenience, we often suppress the notation indicating the time variation, i.e. we use C_{zζ} instead of C_{zζ}(t), where C_{zζ}(t) = E[z(t) ζ^H(t)]. By slowly time-varying we mean that z(t) is approximately stationary, and ergodic, in a window of size 1/(1 − λ), where 0 < λ ≤ 1 is the so-called forgetting factor. The local cross covariance matrix may thus be recursively estimated with

Ĉ_{zζ}(t) = λ Ĉ_{zζ}(t − 1) + z(t) ζ^H(t).    (2.10)

With samples z(t), t = 1, 2, …, we are interested in estimating the signal subspace of z(t), i.e. R(Γ). With the present application, we are also interested in estimating ω_k. That is, our aim is to derive an efficient algorithm which estimates R(Γ) at time instant t, given the subspace estimate at time instant t − 1 and the sample z(t). We will also study an algorithm that additionally takes the subspace estimate at time instant t − 2 into account.
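The exponentially weighted update (2.10) is straightforward to implement; a minimal NumPy sketch follows (the function name and default forgetting factor are ours, not from the paper):

```python
import numpy as np

def update_cross_cov(C_prev, z, zeta, lam=0.97):
    """Exponentially weighted cross-covariance update, eq. (2.10):
    C(t) = lam * C(t-1) + z(t) zeta(t)^H."""
    # np.outer(z, zeta.conj()) forms the rank-one term z(t) zeta^H(t)
    return lam * C_prev + np.outer(z, zeta.conj())
```

Each update costs one m × l rank-one accumulation, which is what makes the recursive estimate cheap compared to re-forming the sample covariance from scratch.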
3 Derivation of the IV algorithm

3.1 Summary of the PAST algorithm

In order to motivate our IV algorithms, a short summary of PAST [17] is given. Consider the following unconstrained criterion:

V(W) = E‖z(t) − W W^H z(t)‖² = Tr{C_z − 2 W^H C_z W + W^H C_z W W^H W}    (3.1)

with a matrix argument W ∈ C^{m×n}, m > n, that without loss of generality is assumed to have full rank (= n). Due to the independence of x(t) and e(t) we have that

C_z = E[z(t) z^H(t)] = Γ C_x Γ^H + Q_e(0).    (3.2)

The above relation forms the basis for most subspace-based methods. Let the EVD of C_z be given as

C_z = U Λ U^H    (3.3)

with U = [u_1, …, u_m], Λ = diag(λ_1, …, λ_m). The eigenvalues are ordered as λ_1 ≥ λ_2 ≥ … ≥ λ_m. If Q_e(0) = σ² I_m, then λ_n > λ_{n+1} = … = λ_m = σ². The dominant eigenvectors u_1, …, u_n are termed the signal eigenvectors whereas u_{n+1}, …, u_m are the noise eigenvectors. It is easy to verify that R(U_s) = R([u_1, …, u_n]) = R(Γ), i.e. the signal eigenvectors span the subspace spanned by the steering vectors. This observation motivates most subspace-based algorithms. Now, if Q_e(0) = σ² I_m, the following theorem applies:

Theorem 1  W is a stationary point of V(W) if and only if W = Ū T, where Ū ∈ C^{m×n} has orthonormal columns containing any n distinct eigenvectors of C_z, and T ∈ C^{n×n} is an arbitrary unitary matrix. All stationary points of V(W) are saddle points, except when Ū contains the n dominant eigenvectors, Ū = U_s. In this case V(W) attains the global minimum.

Proof: See [17]. □

In order to derive a practical algorithm, replace (3.1) with

V(W(t)) = Σ_{k=1}^{t} λ^{t−k} ‖z(k) − W(t) W^H(t) z(k)‖²    (3.4)

where λ, 0 < λ ≤ 1, is the so-called forgetting factor. Clearly, Theorem 1 is applicable also to (3.4) if C_z is replaced with

Ĉ_z(t) = Σ_{k=1}^{t} λ^{t−k} z(k) z^H(k).    (3.5)

The key idea of PAST is to replace W^H(t) z(k) in (3.4) with h(k) = W^H(k − 1) z(k). This projection approximation results in the criterion

V(W(t)) = Σ_{k=1}^{t} λ^{t−k} ‖z(k) − W(t) h(k)‖²    (3.6)

which is quadratic in W(t), and is minimized by

W(t) = Ĉ_{zh}(t) Ĉ_h^{−1}(t).    (3.7)

The covariance matrices are estimated as:

Ĉ_{zh}(t) = λ Ĉ_{zh}(t − 1) + z(t) h^H(t) = Σ_{k=1}^{t} λ^{t−k} z(k) h^H(k)    (3.8a)
Ĉ_h(t) = λ Ĉ_h(t − 1) + h(t) h^H(t) = Σ_{k=1}^{t} λ^{t−k} h(k) h^H(k).    (3.8b)

If the matrix inversion lemma is applied to (3.8b), an efficient RLS-like algorithm is easily derived.

3.2 IV-PAST Derivation

In this section the proposed basic IV extension of PAST is derived. The criterion (3.4) can be interpreted as (neglecting the forgetting factor): given z(k), h(k), k = 1, …, t, find the least squares solution to

Z = [z(1), …, z(t)] ≈ W(t)[h(1), …, h(t)] = W(t) H    (3.9)

which gives

W(t) = Z H^H (H H^H)^{−1}.    (3.10)

The natural IV solution to (3.9) would be

W(t) = Z Ξ^H (H Ξ^H)^{−1}    (3.11)

where Ξ = [ζ(1), …, ζ(t)], and for simplicity we take the number of instruments as l = n (to make H Ξ^H a square matrix). With a forgetting factor we find the following IV algorithm

W(t) = Ĉ_{zζ}(t) Ĉ_{hζ}^{−1}(t)    (3.12)

where the covariance matrices are updated according to:

Ĉ_{zζ}(t) = λ Ĉ_{zζ}(t − 1) + z(t) ζ^H(t)    (3.13a)
Ĉ_{hζ}(t) = λ Ĉ_{hζ}(t − 1) + h(t) ζ^H(t).    (3.13b)

A theoretical justification of the above is as follows. Consider the solutions to

V̄(W) = E[z(t) ζ^H(t)] − W W^H E[z(t) ζ^H(t)] = 0  ⇔  V̄(W) = Γ C_{xζ} − W W^H Γ C_{xζ} = 0.    (3.14)

The above holds under assumption A1. Provided A2 is fulfilled, by definition of the orthogonal projector, all solutions to (3.14) will be of the form W = U_s T, where R(U_s) = R(Γ), U_s ∈ C^{m×n} has orthonormal columns, and T ∈ C^{n×n} is an arbitrary unitary matrix. Thus, for all solutions to (3.14) we have that

W W^H = Π = U_s U_s^H    (3.15)

is the orthogonal projector onto the space spanned by the columns of Γ. To derive a practical algorithm, consider the solutions to (compare with (3.14))

V̄(W(t)) = Σ_{k=1}^{t} λ^{t−k} ( z(k) ζ^H(k) − W(t) W^H(t) z(k) ζ^H(k) ) = 0.    (3.16)

The aforementioned projection approximation, h(k) = W^H(k − 1) z(k), together with (3.16) then leads to the proposed algorithm (3.12). Using the matrix inversion lemma, straightforward calculations give the following algorithm:

h(t) = W^H(t − 1) z(t)    (3.17a)
W(t) = W(t − 1) + (z(t) − W(t − 1) h(t)) K(t)    (3.17b)
P(t) = (1/λ) ( P(t − 1) − P(t − 1) h(t) K(t) )    (3.17c)
K(t) = ζ^H(t) P(t − 1) / (λ + ζ^H(t) P(t − 1) h(t))    (3.17d)

where P(t) = Ĉ_{hζ}^{−1}(t). In the above we have assumed that initial values W(0), P(0) are given. These initial values only affect the transient behavior and are not important for the steady-state performance of the algorithm. They can for example be taken as any full-rank matrices. For faster convergence, W(0) can be taken as a subspace estimate from a conventional batch method (using the first samples of the signal). The computational complexity of the IV-PAST algorithm is identical to that of the original PAST algorithm, i.e. at every time instant 3mn + O(n²) complex multiplications are required [17]. The original PAST algorithm is retained if, in the above, ζ(t) is replaced with h(t).

Remark 1  Due to the introduced approximations, the columns of W(t) will not be exactly orthonormal. However, simulations show that they are 'nearly' orthonormal, see Section 4. Some applications may require orthonormal columns, which may call for a re-orthogonalization scheme such as Gram-Schmidt.
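The recursion (3.17a)-(3.17d) can be sketched in a few lines of NumPy; a hedged implementation for the case l = n (function name and variable layout are ours):

```python
import numpy as np

def iv_past_step(W, P, z, zeta, lam=0.97):
    """One IV-PAST update, eqs. (3.17a)-(3.17d).
    W: (m, n) previous subspace estimate W(t-1)
    P: (n, n) inverse of the cross covariance, P(t-1) = C_h_zeta^{-1}(t-1)
    z: (m,) data sample, zeta: (n,) instrument sample (l = n)."""
    h = W.conj().T @ z                       # (3.17a) h(t) = W^H(t-1) z(t)
    zP = zeta.conj() @ P                     # row vector zeta^H(t) P(t-1)
    K = zP / (lam + zP @ h)                  # (3.17d) gain (1 x n row vector)
    W_new = W + np.outer(z - W @ h, K)       # (3.17b)
    P_new = (P - np.outer(P @ h, K)) / lam   # (3.17c) via matrix inversion lemma
    return W_new, P_new
```

One step costs 3mn + O(n²) complex multiplications, matching the complexity stated above; replacing `zeta` by `h` recovers ordinary PAST.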
However, in the simulations in Section 4 no orthogonalization is performed. Under stationary signal conditions W(t) will converge to a matrix with orthonormal columns if λ = 1.

Remark 2  The basic IV algorithm does not perform any rank reduction of the sample cross covariance matrix. This means that at every time instant R(Ĉ_{zζ}(t)) = R(W(t)), see (3.12). So, why not take W(t) = Ĉ_{zζ}(t)? The main motivation is that the matrix Ĉ_{hζ}^{−1}(t) post-multiplying in (3.12) forces the columns of W(t) to be 'nearly' orthonormal, resulting in good conditioning. In scenarios with closely spaced frequencies this can be an advantage. Another possibility would be to orthogonalize the columns of Ĉ_{zζ}(t), which would yield a complexity of O(mn²), see [5]. Thus, IV-PAST can be thought of as a simple way to approximately orthogonalize the columns of Ĉ_{zζ}(t). The basic IV algorithm will also serve as a preview of the more general rank-reducing IV approaches described in the following sections.

3.3 Extensions of the basic algorithm

Rank Reduction Theorem

In Section 3.2 we constrained the dimension of the IV vector ζ(t) to l = n. A straightforward extension, l > n, leads to the following criterion

V(W(t)) = ‖Ĉ_{zζ}(t) − W(t) W^H(t) Ĉ_{zζ}(t)‖²_F    (3.18)

where W(t) ∈ C^{m×n}, Ĉ_{zζ}(t) ∈ C^{m×l}. This approach corresponds to what in [14, Section 8] is called the Extended IV estimate. In [14, Complement 8.5] it is shown that in the context of system identification, the accuracy of the extended estimate, as expected, increases with an increasing number of instruments l (provided an optimal choice of instruments is made). Without loss of generality we assume that ρ(W(t)) = n. With probability 1 (w.p.1) we have that ρ(Ĉ_{zζ}(t)) = min(m, l) = ñ, but ρ(C_{zζ}) = n < l. Consequently, a low-rank approximation of Ĉ_{zζ}(t) is desired. Thus, we need the following theorem.

Theorem 2  Let Ĉ_{zζ}(t) have the singular value decomposition (SVD)

Ĉ_{zζ}(t) = Û Σ̂ V̂^H = [Û_s  Û_n] [Σ̂_s  0; 0  Σ̂_n] [V̂_s  V̂_n]^H    (3.19)

where Û_s ∈ C^{m×n}. The remaining partitions are of appropriate dimensions.
W(t) is a stationary point of (3.18) if and only if W(t) = Ū T, where Ū contains any n left singular vectors of Ĉ_{zζ}(t) and T ∈ C^{n×n} denotes an arbitrary unitary matrix. At each stationary point, V(W(t)) equals the sum of the squares of the singular values whose corresponding singular vectors are not involved in Ū.

All stationary points of V(W(t)) are saddle points, except when Ū = Û_s. In this case V(W(t)) attains the global minimum and

V(W(t))|_{W(t) = Û_s T} = Σ_{k=n+1}^{ñ} σ̂_k²    (3.20)

where σ̂_k are the singular values of Ĉ_{zζ}(t). Note that for this choice, W(t) W^H(t) Ĉ_{zζ}(t) = Û_s Σ̂_s V̂_s^H, which in the sense of the Frobenius norm is the best possible rank-n approximation of Ĉ_{zζ}(t).

Proof: For notational convenience, let C = Ĉ_{zζ}(t), C̃ = C C^H and W = W(t). Evaluate (3.18) as

V(W) = Tr{(C − W W^H C)(C − W W^H C)^H} = Tr{C̃ + W^H C̃ W W^H W − 2 W^H C̃ W}.    (3.21)

The theorem is then proved by the proof of Theorem 1, using that

C = Û Σ̂ V̂^H,  C̃ = Û Σ̂² Û^H.  □    (3.22)

There are several possible ways to derive practical algorithms based on Theorem 2. However, we constrain ourselves to two different approaches, namely the previously studied projection approximation and a gradient-based method.

3.4 EIV-PAST

Once again the following approximation is applied:

W^H(t) Ĉ_{zζ}(t) = W^H(t) Σ_{k=1}^{t} λ^{t−k} z(k) ζ^H(k) ≈ Σ_{k=1}^{t} λ^{t−k} W^H(k − 1) z(k) ζ^H(k) = Ĉ_{hζ}(t)    (3.23)

with h(k) = W^H(k − 1) z(k), which gives the criterion

V(W(t)) = ‖Ĉ_{zζ}(t) − W(t) Ĉ_{hζ}(t)‖²_F.    (3.24)

The minimizing argument of (3.24) is given by

W(t) = Ĉ_{zζ}(t) Ĉ_{hζ}^†(t)    (3.25)

where (·)^† denotes the Moore-Penrose pseudo-inverse. This approach will in most cases improve the accuracy of the estimates. For example, in Section

4 we will see that the tracking capabilities are much more 'well-behaved' in this case. However, the main disadvantage is that this scheme leads to an increased computational complexity. In Appendix B we give an efficient recursive updating formula for (3.25). We give the algorithm without derivation, since the calculations to a great extent parallel the calculations in [14, Complement C9.1]. See also [7] for further reading on Extended IV techniques. An operation count of the algorithm given in Appendix B gives a complexity of 3ml + O(mn). Note that the matrix inversion that arises in (B.1b) is of size 2 × 2, so it is a simple matter to invert it.

Remark 3  One point worth mentioning is that the recursive update of the inverse of the 'covariance' matrix in this case involves positive definite matrices, see Appendix B. The update formulas for P(t) do not guarantee its positive (semi-)definiteness. In ill-conditioned situations this may lead to numerical instability. The simplest approach to handle this problem is to evaluate only the upper right triangular part of P(t), see (B.1j). The remaining elements of P(t) are obtained using that P(t) = P^H(t), i.e. P(t) is Hermitian. This approach guarantees that P(t) is at least positive semi-definite. See also [11, Section 6.5] for an overview of regularization techniques, i.e. forcing P(t) to be non-singular. In [7] it is suggested that the so-called SRIF algorithm [2, Section V.2] is used to update the 'system identification' extended IV algorithm. However, to our knowledge, the SRIF algorithm is not easily extended to the present case.

Remark 4  It could be interesting to compare the performance of an algorithm based on the above projection approximation with a 'substitution'-based approximation, i.e. W(t) W^H(t) ≈ W(t) W^H(t − 1). The latter approximation would give the following subspace estimate

W(t) = Ĉ_{zζ}(t) ( W^H(t − 1) Ĉ_{zζ}(t) )^†.    (3.26)

Our practical experience is that the 'substitution' approximation offers better tracking performance than the EIV-PAST algorithm, since the substitution approach remembers only the previous estimate, whereas EIV-PAST in principle remembers all previous estimates. Under stationary signal conditions the difference is, however, insignificant. The drawback of the substitution approach is that there is no easy update formula for (3.26). Hence, a considerably increased complexity is obtained.

3.5 Gradient method, IVg

A gradient-based approach to minimize (3.18) can be devised as

W_{k+1}(t) = W_k(t) − μ_k V'(W_k(t)),  k = 0, 1, …    (3.27)

where μ_k > 0 are step-sizes. Here the subscript (·)_k denotes the kth iteration of the minimization procedure and

V'(W_k(t)) = dV(W(t))/dW(t) |_{W(t) = W_k(t)}.    (3.28)

The gradient can be evaluated as (see [3] for a definition of the derivative of a scalar function with respect to a matrix):

V'(W(t)) = [−2 C̃(t) + C̃(t) W(t) W^H(t) + W(t) W^H(t) C̃(t)] W(t)    (3.29)

where C̃(t) = Ĉ_{zζ}(t) Ĉ_{zζ}^H(t). Using the approximation W^H(t) W(t) ≈ I_n, the following gradient-based algorithm is found:

W_0(t) = W(t − 1)    (3.30a)
R_k(t) = W_k^H(t) Ĉ_{zζ}(t)    (3.30b)
W_{k+1}(t) = W_k(t) − μ_k ( W_k(t) R_k(t) − Ĉ_{zζ}(t) ) R_k^H(t)    (3.30c)

where the covariance matrix is for example updated as in (3.13a). At every iteration, 3mnl + O(nl) complex multiplications are required. A number of comments are now in place. Our experience is that, at every time instant, one iteration of (3.30c) is enough. The complexity is however still too large for many real-time applications. Since V(W(t)) has no local minima, global convergence is guaranteed if one finds the signal subspace by minimizing V(W(t)) with iterative methods. At convergence, W(t) does not contain the left singular vectors of the estimated covariance matrix. But what really matters is that W(t) contains an orthonormal basis for the signal subspace. Gradient-based optimization methods are known to be rather ineffective close to the minimum of the criterion. A Newton-based method would probably be more effective, but there are several drawbacks with a Newton scheme. The major drawback is that the complexity would increase. Another drawback is that the Hessian of the criterion (3.18) will be ill-conditioned, since there is no unique minimizing argument of (3.18). This problem can be overcome with regularization. By monitoring the quantity

‖(I_m − W(t) W^H(t)) Ĉ_{zζ}(t)‖²_F ≈ Σ_{k=n+1}^{ñ} σ̂_k²,    (3.31)

the user has the possibility to detect changes in the subspace dimension. The above relation holds approximately also for the EIV-PAST estimates.

A second order IV algorithm, 2IV-PAST

In [4] a second order recursive algorithm for adaptive signal processing (and system identification) is proposed. An extension of ordinary PAST to a second order recursive PAST can be found in [1]. This algorithm has the same computational complexity as PAST, but additional computer memory is required. In [1] it is demonstrated that in some cases the second order algorithm offers better tracking performance than PAST. Inspired by this, we will derive a second order IV version of PAST. Introduce a hypothetical subspace W_hyp and define an 'equation error'

w(t) = z(t) − W_hyp h(t)    (3.32)

together with the additional signal

v(t) = z(t) − W h(t)    (3.33)

where h(t) = W^H z(t) and W = U_s T. Define

C_{wζ} = E[w(t) ζ^H(t)] = E[(W h(t) − W_hyp h(t) + v(t)) ζ^H(t)] = (W − W_hyp) C_{hζ}.    (3.34)

The last equality in (3.34) follows since

E[v(t) ζ^H(t)] = E[(z(t) − W W^H z(t)) ζ^H(t)] = (I_m − W W^H) E[e(t) ζ^H(t)] = 0    (3.35)

by assumption A1 (here (I_m − W W^H)Γ = 0 was used). So far both W and W_hyp are assumed to be known deterministic matrices. Additionally, W is assumed to be an orthogonal matrix spanning the true signal subspace. However, we will use the above relationship (3.34) to define an estimator of W, where the hypothetical subspace W_hyp is taken as the previous estimate W_hyp = W(t − 1):

W(t) = W(t − 1) + μ_m Ĉ_{wζ}(t) Ĉ_{hζ}^†(t)    (3.36)

where the estimates of the covariance matrices are defined by

Ĉ_{hζ}(t) = (1 − μ) Ĉ_{hζ}(t − 1) + μ h(t) ζ^H(t)    (3.37a)
Ĉ_{wζ}(t) = (1 − μ) Ĉ_{wζ}(t − 1) + μ w(t) ζ^H(t)    (3.37b)

and μ, μ_m are step-sizes. The extra scaling factor μ in (3.37a,b) is introduced in order to conform with the framework of [1]. Strictly speaking, this choice of W_hyp invalidates the calculations in (3.34). So, once again we emphasize that (3.34) is merely used to define our estimator.

The step-size μ_m in (3.36) leads to an additional smoothing of the subspace estimate. It offers the possibility of increasing the updating speed in (3.37a,b) without getting too noisy estimates in (3.36). To gain some insight into the estimation principle, consider a fixed hypothetical subspace matrix. It is then easy to show that the resulting estimates can be written as, neglecting the scaling factor μ in (3.37a,b):

W(t) = (1 − μ_m) W_hyp + μ_m W_EP(t)    (3.38)

where W_EP(t) denotes the subspace estimate of the EIV-PAST algorithm. Consequently, for a fixed W_hyp and if μ_m = 1, the EIV-PAST algorithm is retained. The second order algorithm consequently encompasses the EIV-PAST algorithm. The algorithm is now specified by (3.36), (3.37a) and (3.37b). A 4ml + O(mn) algorithm that is better suited for practical implementation can be found in Appendix C. In order to gain some insight, the algorithm in Appendix C can, for the special case l = n (i.e. no rank reduction), be simplified to the following expressions:

h(t) = W^H(t − 1) z(t)    (3.39a)
W(t) = W(t − 1) + (W(t − 1) − W(t − 2)) ( I_n − h(t) ζ^H(t) P(t) ) + μ_m (z(t) − W(t − 1) h(t)) ζ^H(t) P(t)    (3.39b)
P(t) = (1/(1 − μ)) ( P(t − 1) − P(t − 1) h(t) ζ^H(t) P(t − 1) / ( (1 − μ)/μ + ζ^H(t) P(t − 1) h(t) ) )    (3.39c)

where P(t) = Ĉ_{hζ}^{−1}(t). It is assumed that initial values are given. From the algorithm above we see that the current estimate is a combination of both W(t − 1) and W(t − 2), which motivates the name 'second-order'. This relationship can also be seen in Appendix C, see (C.7). Finally, following the discussion in [1], a reasonable choice of the step-sizes is as follows:

μ_m = μ/4.    (3.40)

Hence, the selection of the step-sizes is reduced to a one-dimensional problem. The somewhat ad hoc motivation for this choice is that the associated Ordinary Differential Equation (ODE), see for example [11, Section 4] for a treatment of the theory of associated ODEs, then at convergence has a real-valued double pole.
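The simplified l = n recursion (3.39a)-(3.39c) can be sketched as follows; a minimal NumPy version, with function name, variable layout and default step-sizes of our own choosing:

```python
import numpy as np

def iv2_past_step(W, W_prev, P, z, zeta, mu=0.2, mu_m=0.05):
    """One 2IV-PAST update for the special case l = n, eqs. (3.39a)-(3.39c).
    W = W(t-1), W_prev = W(t-2), P = C_h_zeta^{-1}(t-1),
    z: (m,) data sample, zeta: (n,) instrument sample."""
    h = W.conj().T @ z                                        # (3.39a)
    zP = zeta.conj() @ P                                      # zeta^H(t) P(t-1)
    denom = (1.0 - mu) / mu + zP @ h
    P_new = (P - np.outer(P @ h, zP) / denom) / (1.0 - mu)    # (3.39c)
    zPnew = zeta.conj() @ P_new                               # zeta^H(t) P(t)
    n = len(h)
    W_new = (W
             + (W - W_prev) @ (np.eye(n) - np.outer(h, zPnew))
             + mu_m * np.outer(z - W @ h, zPnew))             # (3.39b)
    return W_new, P_new
```

The (W − W_prev) term is what carries the second-order memory; setting it to zero and μ_m = 1 falls back to a first-order EIV-PAST-style update.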
Strictly speaking, the ODE discussion is valid only for the linear regression problem, i.e. ARX systems [10]. The motivation of the second-order algorithm is rather ad hoc. One way to understand the algorithm is that a time-varying filtering of the estimates delivered by the Extended IV algorithm is performed. But we select the step-sizes of the second-order algorithm so that the underlying Extended

IV algorithm is run with a larger step-size than normal use of the Extended IV algorithm would admit. The estimates of the second order algorithm are, to our knowledge, not the minimizing argument of any 'typical' criterion such as (3.4). However, from practical experience we are convinced that the second order approach offers many advantages. For example, it will be demonstrated with simulations that a faster response to sudden changes can be achieved without degrading the stationary performance. Finally, Remark 3 applies also to the 2IV-PAST algorithm.

3.6 Summary

In the present section several IV-based subspace tracking algorithms have been proposed. We conclude the section with a summary of the main features of the algorithms, see Table 1, and with a note on sliding windows. All of the IV-based tracking algorithms are based on exponentially weighted criteria. These criteria correspond to covariance estimates that are also exponentially weighted, i.e.

Ĉ_{zζ}(t) = λ Ĉ_{zζ}(t − 1) + z(t) ζ^H(t).    (3.41)

Another possibility for updating the covariance estimates is to use a so-called sliding window. Then the covariance estimates are taken as

Ĉ_{zζ}(t) = Ĉ_{zζ}(t − 1) + z(t) ζ^H(t) − z(t − L) ζ^H(t − L)    (3.42)

where the user-defined integer L is the window length. This approach will not be pursued any further. However, in [17] it is demonstrated via simulations that the tracking performance in some cases is improved with the sliding window approach.

4 Examples

In this section the performance of PAST, IV-PAST, EIV-PAST, IVg and 2IV-PAST will be demonstrated. It is not the aim of the present paper to give an exhaustive comparison of different subspace-tracking algorithms. We restrict ourselves to comparing the IV algorithms only with the original PAST algorithm. We will demonstrate that the IV-based methods reduce bias caused by colored noise. The IV-based algorithms will also be compared with the truncated SVD of Ĉ_{zζ}.
In a sense, given Ĉ_{zζ}, the truncated SVD is the best possible way to find the subspace estimate. It is therefore interesting to compare the SVD-based estimates with those of our IV algorithms. All approaches find the frequency estimates using the ESPRIT approach, i.e. the angles of the eigenvalues of W_{2:m}^†(t) W_{1:m−1}(t), where W_{i:j} denotes rows i to j of W. Other approaches for finding the frequencies may be considered. One advantage of the ESPRIT approach is that the columns

of W(t) are not required to be orthonormal. For all algorithms, the following initial values were used:

P(0) = I_n,  W(0) = [I_n  0_{(m−n)×n}]^T.    (4.1)

However, the transient will typically not be shown. Both real-valued and complex-valued data will be considered. Earlier we have stated that the columns of the subspace estimates are 'nearly' orthonormal. To quantify 'nearly', the following measure of deviation from orthonormality will be studied:

‖W^H(t) W(t) − I_n‖_F.    (4.2)

The simulations are focused on the frequency estimation problem. Further, unless otherwise stated, only one iteration of (3.27) is performed.

Example 1: Real-valued data

Consider the scalar signal

z(t) = Σ_{j=1}^{2} a_j cos(2π f_j(t) + φ_j) + e(t)    (4.3)

where a_1 = a_2 = √2. The random phases φ_j are independent and uniformly distributed in (−π, π). Thus, with this setting n = 4, and we chose m = 8, which gives

z(t) = [z(t), …, z(t + 7)]^T.    (4.4)

The IV vector is chosen as

ζ(t) = [z(t − M), …, z(t − M − l + 1)]^T    (4.5)

with M = 11. The number of instruments l is for IV-PAST l = 4. The other IV algorithms use l = m = 8. The SVD-based subspace estimate is obtained from the n principal left singular vectors of

Ĉ_{zζ}(t) = λ Ĉ_{zζ}(t − 1) + z(t) ζ^H(t)    (4.6)

with l = 8. The noise is given by

e(t) = ε(t) / (1 − 0.8 q^{−1}),    (4.7)

where q^{−1} is the delay operator and ε(t) is white Gaussian noise. Note that for this noise process condition A1 is violated:

E[e(t) ζ^H(t)] ≠ 0.    (4.8)
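A test signal in the spirit of (4.3) and (4.7) can be generated as below. This is a sketch under stated assumptions: the function name, sample count and the concrete frequency values are ours (the paper's frequency trajectories vary between experiments); only the amplitudes a_j = √2 and the AR(1) noise pole 0.8 come from the text.

```python
import numpy as np

def example1_signal(N, f=(0.1, 0.3), seed=0):
    """Two real sinusoids with a_1 = a_2 = sqrt(2) and random phases,
    eq. (4.3), in AR(1) noise e(t) = eps(t) / (1 - 0.8 q^{-1}), eq. (4.7).
    The frequencies f are hypothetical placeholder values."""
    rng = np.random.default_rng(seed)
    t = np.arange(N)
    phases = rng.uniform(-np.pi, np.pi, size=2)
    s = sum(np.sqrt(2) * np.cos(2 * np.pi * fj * t + ph)
            for fj, ph in zip(f, phases))
    eps = rng.standard_normal(N)          # white Gaussian driving noise
    e = np.empty(N)
    prev = 0.0
    for k in range(N):                    # e(t) = 0.8 e(t-1) + eps(t)
        prev = 0.8 * prev + eps[k]
        e[k] = prev
    return s + e
```

Stacking m = 8 consecutive samples of this scalar signal as in (4.4), and delayed samples as in (4.5), produces the data and instrument vectors used by the trackers.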

We will show that an improvement over the original PAST algorithm can still be achieved. The forgetting factors were chosen as $\lambda = 0.97$ for all first order algorithms. It is not straightforward to translate this value into a step-size for 2IV-PAST so that a fair comparison is obtained. One possibility is to choose the step-size so that the mean square error (MSE) of the estimated frequencies under stationary signal conditions is equal for 2IV-PAST and EIV-PAST. This approach was also used to determine the step-size of IVg. It turned out that $\mu = 0.21$ (!) and $\gamma = 0.00003$ approximately gave the same MSE as EIV-PAST. Finally, define the signal to noise ratio (SNR) as 'signal power/noise power'.

Our first example serves as an illustration of the bias reduction offered by the IV-approach, see Figure 1. For simplicity, only PAST and EIV-PAST are compared. From Figure 1 we conclude that the IV-approach clearly reduces the bias. This feature is shared by all of our proposed algorithms; further comparisons with PAST are therefore omitted in the following.

Next we focus on the tracking performance. Consider a sudden step change in one frequency. Figures 2, 3 and 4 show the outcome of one realization of the frequency tracks, the deviations from orthonormality, and the subspace angles between the SVD and the EIV-PAST estimates. We define the subspace angle as the largest principal angle [8, Section 12]. A number of comments are in order. In Figure 2 we see the advantage of a rank-reducing algorithm: a much more 'well-behaved' response to the change is achieved. Another advantage of the rank-reducing approaches is that the estimates of the constant frequency are less affected by the change. Further, a smoother behavior during the stationary part is achieved. Note also the fast, and smooth, response of 2IV-PAST. From Figure 3 we conclude that all algorithms give nearly orthonormal columns. The deviation is larger during the transient phase.
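For reference, the largest principal angle between two subspaces can be computed from the singular values of the product of orthonormal bases (cf. [8, Section 12]). The following sketch of the measure used in Figure 4 is our own illustration; the function name is ours.

```python
import numpy as np

def largest_principal_angle_deg(A, B):
    """Largest principal angle between range(A) and range(B), in degrees.

    Orthonormal bases are obtained by QR; the principal angles are the
    arccosines of the singular values of the product of the bases.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.conj().T @ Qb, compute_uv=False)
    # smallest singular value <-> largest principal angle
    return np.degrees(np.arccos(np.clip(s.min(), -1.0, 1.0)))

I = np.eye(6)
assert largest_principal_angle_deg(I[:, :2], I[:, :2]) < 1e-6
assert abs(largest_principal_angle_deg(I[:, :2], I[:, 2:4]) - 90.0) < 1e-6
# subspaces sharing one direction still have a 90-degree largest angle
assert abs(largest_principal_angle_deg(I[:, :2], I[:, [0, 2]]) - 90.0) < 1e-6
```

Note that the measure is invariant to the choice of (possibly non-orthonormal) basis matrices, which makes it suitable for comparing tracking algorithms whose columns are only nearly orthonormal.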
Next we study the subspace angle between the SVD and the EIV-PAST subspace estimates, see Figure 4. Note that the estimates of EIV-PAST are in this sense very close to the left singular vectors of $\hat{C}_z$; only during the transient phase is the difference noticeable. Since EIV-PAST remembers previous subspace estimates, its response is slightly delayed compared to the SVD-estimates.

Next we consider an example with a frequency modulated frequency. Let

$f_1(t) = 0.3t \quad \text{and} \quad f_2(t) = 0.1t + 0.\ldots \sin(\ldots t).$  (4.9)

In the following we omit the results of the basic IV-algorithm, since its tracking capabilities are rather poor. The outcome of one realization of the frequency tracks, together with the true instantaneous frequency $\frac{d}{dt}f_2(t)$, is shown in Figure 5. For simplicity, only $\hat{f}_2(t)$ is shown. Neither the SVD nor the EIV-PAST estimates behave smoothly when the frequency changes rapidly. This is because $\hat{C}_z$ is updated too slowly to cope with the changes in the signal. Note that it is almost impossible to distinguish the SVD and EIV-PAST frequency estimates! Note also the smooth behavior of 2IV-PAST. IVg is based on the same $\hat{C}_z$ as EIV-PAST, but its step-size makes the frequency tracks smoother, and more delayed, than those of EIV-PAST. This is, however, accomplished at a higher computational cost than for both EIV-PAST and 2IV-PAST.

Example 2: Complex-valued data

In this example complex-valued data are considered. Consider the scalar signal

$z(t) = \sum_{j=1}^{2} e^{j(2\pi f_j(t) + \varphi_j)} + e(t)$  (4.10)

where the real and imaginary parts of $e(t)$ are Gaussian and independent of each other, and each component of $e(t)$ is generated as in (4.7). Consider the example in Figure 6, where SNR = 15 dB, $\lambda = 0.97$, $\mu = 0.21$ and $\gamma = 0.0003$. The number of instruments was chosen as $l = 5$, and also $m = 5$. It turned out that the step-size of the IVg-algorithm had to be chosen smaller than in the previous examples for stability reasons, so the comparison in Figure 6 is perhaps not 'fair': the step-size is chosen so that the stationary performance is better for IVg, hence the smoothness of its frequency tracks. Once again we see how close the SVD-estimates are to the EIV-PAST estimates. Both approaches fail to work in the region where the frequencies cross; this behavior is due to the ill-conditioning of $\hat{C}_z(t)$. Since 2IV-PAST is run with a larger updating speed, the problem is locally magnified for the second-order algorithm. These problems suggest that in scenarios with closely spaced frequencies, regularization may be necessary, see Remark 3. Alternatively, the subspace dimension should be adjusted when the frequencies are too close.

Our last example concerns the convergence properties. For simplicity we study only EIV-PAST. With (4.10) and $f_1(t) = -0.1t$, $f_2(t) = -0.3t$, the deviations from orthonormality and the subspace angle to the true subspace will be compared for different $\lambda$.
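The ill-conditioning argument can be made concrete. For the cross-covariance $\hat{C}_z$ of two noiseless unit-amplitude complex exponentials (a placeholder setup with our own frequencies, sample count and lags, not the exact simulation above), the ratio of the second to the first singular value collapses as the two frequencies approach each other, which is precisely what degrades the rank-$n$ subspace estimate near a frequency crossing.

```python
import numpy as np

def singval_ratio(f1, f2, T=400, m=5, l=5, M=11):
    """sigma_2/sigma_1 of the sample cross-covariance C_z = <z(t) zeta(t)^H>
    for two noiseless unit-amplitude complex exponentials (placeholder setup)."""
    t = np.arange(-(M + l), T + m)
    z = np.exp(2j * np.pi * f1 * t) + np.exp(2j * np.pi * f2 * t)
    pos = lambda k: k + M + l                 # array position of time index k
    C = np.zeros((m, l), dtype=complex)
    for k in range(T):
        zv = z[pos(k):pos(k) + m]                      # z(t) = [z(t),...,z(t+m-1)]
        zeta = z[pos(k - M):pos(k - M) - l:-1]         # [z(t-M),...,z(t-M-l+1)]
        C += np.outer(zv, zeta.conj())
    s = np.linalg.svd(C / T, compute_uv=False)
    return s[1] / s[0]

# well-separated frequencies: both rank-one signal components stay visible;
# nearly coinciding frequencies: C_z collapses towards rank one
r_far = singval_ratio(0.10, 0.30)
r_close = singval_ratio(0.10, 0.102)
```

A regularized pseudo-inverse, or a temporarily reduced subspace dimension, are the natural remedies suggested by this collapse.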
The noise realizations are identical in all cases, and the transient effects due to (4.1) are in this case shown; see Figure 7 and Figure 8. The signal to noise ratio was chosen as SNR = 15 dB. From Figure 7 and Figure 8 we conclude that for $\lambda = 1$, the subspace estimate tends to a matrix with orthonormal columns whose range is close to the true subspace.

5 Conclusions

In this paper several Instrumental Variable generalizations of the subspace tracking algorithm PAST have been proposed. The presented algorithms are able to track time-varying subspaces in spatially and/or temporally colored noise fields. One requirement is that we must be able to find an IV-vector that is uncorrelated with the noise vector. Additionally, a certain rank condition must be fulfilled. The original algorithm (PAST) is a special case of the basic IV algorithm proposed herein (put $\zeta(t) = h(t)$). The basic IV algorithm has the same computational complexity as PAST. In addition, a least squares (Extended IV) algorithm, a gradient based algorithm, and a second-order version of the basic IV algorithm were proposed. The performance of the algorithms was illustrated in a simulation study. The conclusion is that, in our examples, an IV approach improves the results when the noise is not spatially white. The main features of the basic IV-algorithm are its low complexity and its ability to track rapid changes. However, the basic IV-algorithm does not perform any rank-reduction of the sample cross-covariance matrix. The Extended IV algorithm requires more computations, but, thanks to the rank-reduction, a much more 'well-behaved' response to sudden changes was achieved. The second order algorithm offers a better trade-off between tracking and stationary performance. This is because we can increase the updating speed of certain covariance estimates without getting too noisy estimates. A drawback of our algorithms is that they have no 'built-in' rank-detection; however, a quantity that can be used to detect changes in the subspace dimension was given. Even though all simulation results indicate global convergence of the IV-algorithms, no theoretical analysis has been performed. Hence, remaining issues for future studies include rank estimation and convergence analysis.

A Appendix

In this appendix a further treatment of the time series problem under stationary signal conditions is given. Consider the complex valued scalar signal

$z(t) = \sum_{k=1}^{n} a_k e^{j(\omega_k t + \varphi_k)} + e(t)$  (A.1)

with real valued amplitudes $a_k$. The additive noise term $e(t)$ represents a possibly non-white disturbance.
The random phases $\varphi_k$ are assumed to be mutually independent and uniformly distributed in $(-\pi, \pi)$. Introduce the vector of stacked samples

$z_m(t) = [z(t), \ldots, z(t+m-1)]^T.$  (A.2)

Then the following relation is easily verified:

$z_m(t) = \Gamma_m x(t) + e_m(t)$  (A.3)

with an obvious definition of $e_m(t)$. The deterministic matrix $\Gamma_m$ is defined by

$\Gamma_m = [a(\omega_1), \ldots, a(\omega_n)]$  (A.4)

where the so-called steering vectors are given by $a(\omega_k) = [1, e^{j\omega_k}, \ldots, e^{j\omega_k(m-1)}]^T$. In $\Gamma_m$, the subscript $m$ denotes the length of the steering vectors. The unmeasurable signal vector $x(t)$, assumed to be independent of the noise, is given by

$x(t) = [a_1 e^{j(\omega_1 t + \varphi_1)}, \ldots, a_n e^{j(\omega_n t + \varphi_n)}]^T$  (A.5)

with covariance matrix

$C_x = E[x(t)x^H(t)] = \mathrm{diag}(a_1^2, \ldots, a_n^2).$  (A.6)

Choose the IV-vector as

$\zeta(t) = [z(t-M), \ldots, z(t-M-l+1)]^T$  (A.7)

where the user-defined integer $M$ is chosen so that $E[e_m(t)\zeta^H(t)] = 0$. The number of instruments is chosen so that $l \geq n$. Now it is easy to verify that

$\zeta(t) = \Gamma_l x(t-M) + e_l(t-M)$  (A.8)

and

$x(t-M) = \mathrm{diag}(e^{-j\omega_1 M}, \ldots, e^{-j\omega_n M})\, x(t) = \Phi_M x(t)$  (A.9)

so that

$E[x(t)\zeta^H(t)] = E[x(t)x^H(t)]\,\Phi_M^H \Gamma_l^H = \mathrm{diag}(a_1^2 e^{j\omega_1 M}, \ldots, a_n^2 e^{j\omega_n M})\, \Gamma_l^H = D\, \Gamma_l^H.$  (A.10)

The above relation follows since $x(t)$ is independent of the noise by assumption. Thus, if $a_k^2 \neq 0$ and the frequencies $\omega_k$ are distinct (i.e. $\omega_k \neq \omega_l$ unless $k = l$), the rank condition A2 is satisfied for all $M$ if $l \geq n$. This is because a diagonal matrix has full rank if its diagonal entries are nonzero, and $\Gamma_l$, which is a so-called Vandermonde matrix, has full rank if the frequencies are distinct. However, $M$ must still be chosen so that A1 is satisfied.

B Appendix

In this appendix we give, without derivation, the recursive update formulas for the Extended Instrumental Variable subspace tracking algorithm. We omit the calculations since they to a large extent parallel those in [14, Complement C9.1].

$W(t) = W(t-1) + (v(t) - W(t-1)\Psi(t))\,K(t)$  (B.1a)
$K(t) = \left(\Lambda^2(t) + \Psi^H(t)P(t-1)\Psi(t)\right)^{-1} \Psi^H(t)P(t-1)$  (B.1b)
$\Psi(t) = [\bar{w}(t) \;\; h(t)]$  (B.1c)
$\bar{w}(t) = \hat{C}_h(t-1)\zeta(t)$  (B.1d)
$\Lambda^2(t) = \begin{bmatrix} -\zeta^H(t)\zeta(t) & \lambda \\ \lambda & 0 \end{bmatrix}$  (B.1e)
$v(t) = [\hat{C}_z(t-1)\zeta(t) \;\; z(t)]$  (B.1f)
$\hat{C}_h(t) = \lambda\hat{C}_h(t-1) + h(t)\zeta^H(t)$  (B.1g)
$\hat{C}_z(t) = \lambda\hat{C}_z(t-1) + z(t)\zeta^H(t)$  (B.1h)
$h(t) = W^H(t-1)z(t)$  (B.1i)
$P(t) = \frac{1}{\lambda^2}\left(P(t-1) - P(t-1)\Psi(t)K(t)\right)$  (B.1j)

where $P(t) = \left(\hat{C}_h(t)\hat{C}_h^H(t)\right)^{-1}$.

C Appendix

In this appendix an algorithm that is computationally more efficient than (3.36) is given. The defining relationships are repeated below for easy reference:

$h(t) = W^H(t-1)z(t)$  (C.1a)
$w(t) = z(t) - W(t-1)h(t)$  (C.1b)
$\hat{C}_h(t) = (1-\mu)\hat{C}_h(t-1) + \mu\, h(t)\zeta^H(t)$  (C.1c)
$\hat{C}_w(t) = (1-\mu)\hat{C}_w(t-1) + \mu\, w(t)\zeta^H(t)$  (C.1d)
$W(t) = W(t-1) + \frac{\mu}{m}\hat{C}_w(t)\hat{C}_h^{\dagger}(t).$  (C.1e)

For notational convenience, let $R(t) = \hat{C}_h(t)$ and $r(t) = \hat{C}_w(t)$. Assume that $R(t)$ has full rank. Then

$R^{\dagger}(t) = R^H(t)\underbrace{\left(R(t)R^H(t)\right)^{-1}}_{P(t)}.$  (C.2)
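A direct (uneconomical) implementation of the defining relationships (C.1a)-(C.1e) takes only a few lines of numpy. The sketch below is our own illustration of the structure of the update: the placement of the step-size $\mu$ inside the covariance updates and the $\mu/m$ scaling of the correction term are our reading of this damaged copy and should be checked against the published algorithm.

```python
import numpy as np

def second_order_iv_step(W_prev, z, zeta, Ch, Cw, mu):
    """One update of the 2IV-PAST defining relations (C.1a)-(C.1e), direct form.

    The mu factors in the covariance updates and the mu/m scaling in the
    W update are assumptions; only the structure is illustrated.
    """
    m = W_prev.shape[0]
    h = W_prev.conj().T @ z                              # (C.1a)
    w = z - W_prev @ h                                   # (C.1b)
    Ch = (1 - mu) * Ch + mu * np.outer(h, zeta.conj())   # (C.1c)
    Cw = (1 - mu) * Cw + mu * np.outer(w, zeta.conj())   # (C.1d)
    W = W_prev + (mu / m) * (Cw @ np.linalg.pinv(Ch))    # (C.1e)
    return W, Ch, Cw

# sanity check: if the columns of W are orthonormal and z lies in their span,
# then w(t) = 0 and (with Cw = 0) the estimate is left unchanged
rng = np.random.default_rng(3)
W0, _ = np.linalg.qr(rng.standard_normal((8, 3)))
W1, Ch1, Cw1 = second_order_iv_step(W0, W0 @ rng.standard_normal(3),
                                    rng.standard_normal(4),
                                    rng.standard_normal((3, 4)),
                                    np.zeros((8, 4)), mu=0.21)
```

The recursions of this appendix replace the explicit pseudo-inverse above by the rank-two updates (C.3), which is where the stated $4ml + O(mn)$ complexity comes from.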

From Appendix B we have the following updating formula for $P(t)$:

$P(t) = \frac{1}{(1-\mu)^2}\left(P(t-1) - P(t-1)\Psi(t)\left(\Lambda^2(t) + \Psi^H(t)P(t-1)\Psi(t)\right)^{-1}\Psi^H(t)P(t-1)\right)$  (C.3a)
$\Lambda^2(t) = \begin{bmatrix} -\zeta^H(t)\zeta(t) & (1-\mu)^2/\mu \\ (1-\mu)^2/\mu & 0 \end{bmatrix}$  (C.3b)
$\Psi(t) = [R(t-1)\zeta(t) \;\; (1-\mu)h(t)].$  (C.3c)

Note that the matrix inverse appearing in (C.3a) is of size $2 \times 2$; hence it is a simple matter to invert it. Now,

$r(t)R^H(t) = (1-\mu)^2 r(t-1)R^H(t-1) + \mu^2\|\zeta(t)\|^2 w(t)h^H(t) + \mu(1-\mu)w(t)\zeta^H(t)R^H(t-1) + \mu(1-\mu)r(t-1)\zeta(t)h^H(t).$  (C.4)

The first term on the right hand side of the above equation can be rewritten as

$(1-\mu)^2 r(t-1)R^H(t-1) = (1-\mu)^2 r(t-1)R^H(t-1)P(t-1)P^{-1}(t-1) = \frac{(1-\mu)^2 m}{\mu}\left(W(t-1) - W(t-2)\right)P^{-1}(t-1)$  (C.5)

which gives

$\frac{\mu}{m}(1-\mu)^2 r(t-1)R^H(t-1)P(t) = \left(W(t-1) - W(t-2)\right)\left(I_n - \Psi(t)\left(\Lambda^2(t) + \Psi^H(t)P(t-1)\Psi(t)\right)^{-1}\Psi^H(t)P(t-1)\right).$  (C.6)

We have not been able to find any further major simplifications of (C.1e). Hence, the $4ml + O(mn)$ second order IV-algorithm reads

$W(t) = W(t-1) + \left(W(t-1) - W(t-2)\right)\left(I_n - \Psi(t)\left(\Lambda^2(t) + \Psi^H(t)P(t-1)\Psi(t)\right)^{-1}\Psi^H(t)P(t-1)\right) + \frac{\mu^2}{m}(1-\mu)\left(w(t)\zeta^H(t)R^H(t-1) + r(t-1)\zeta(t)h^H(t)\right)P(t) + \frac{\mu^3}{m}\|\zeta(t)\|^2 w(t)h^H(t)P(t).$  (C.7)

References

[1] A. Andersson and H. Broman. "A second order recursive algorithm with applications to adaptive filtering and subspace tracking". Technical Report CTH-TE-30, Chalmers University of Technology, Gothenburg, Sweden, May. Submitted to IEEE Trans. SP.

[2] G.J. Bierman. Factorization Methods for Discrete Sequential Estimation. Academic Press, New York, 1977.
[3] J. Brewer. "Kronecker products and matrix calculus in system theory". IEEE Transactions on Circuits and Systems, CAS-25:772-781, Sep. 1978.
[4] H. Broman and A. Andersson. "Parameter Estimation: A Second Order Recursive Least Squares Algorithm". In Proc. SYSID'94, 10th IFAC Symposium on System Identification, pages 447-452, Copenhagen, Jul. 1994.
[5] P. Comon and G.H. Golub. "Tracking a few Extreme Singular Values and Vectors in Signal Processing". Proceedings of the IEEE, 78:1327-1343, Aug. 1990.
[6] A. Eriksson. "Algorithms and performance analysis for spatial and spectral parameter estimation". PhD thesis, Uppsala University, Uppsala, Sweden.
[7] B. Friedlander. "The overdetermined recursive instrumental variable method". IEEE Trans. Automatic Control, AC-29:353-356, Feb. 1984.
[8] G.H. Golub and C.F. Van Loan. Matrix Computations, 2nd edition. Johns Hopkins University Press, Baltimore, MD, 1989.
[9] H. Krim and M. Viberg. "Sensor array processing: Two decades later". Technical Report CTH-TE-28, Department of Applied Electronics, Chalmers University of Technology, Jan.
[10] L. Ljung. System Identification: Theory for the User. Prentice-Hall, Englewood Cliffs, NJ, 1987.
[11] L. Ljung and T. Söderström. Theory and Practice of Recursive Identification. The MIT Press, Cambridge, Massachusetts, 1983.
[12] R.H. Roy. "ESPRIT, Estimation of Signal Parameters via Rotational Invariance Techniques". PhD thesis, Stanford Univ., Stanford, CA, Aug. 1987.
[13] R.O. Schmidt. "Multiple Emitter Location and Signal Parameter Estimation". In Proc. RADC Spectrum Estimation Workshop, pages 243-258, Rome, NY, 1979.
[14] T. Söderström and P. Stoica. System Identification. Prentice-Hall, London, U.K., 1989.

[15] P. Stoica, M. Viberg, M. Wong, and Q. Wu. "A unified Instrumental Variable Approach to Direction finding in colored noise fields". Technical Report CTH-TE-32, Chalmers University of Technology, Gothenburg, Sweden, Jul. To appear in Digital Signal Processing Handbook, CRC Press, 1996.
[16] M. Viberg, P. Stoica, and B. Ottersten. "Array Processing in Correlated Noise Fields Based on Instrumental Variables and Subspace Fitting". IEEE Trans. SP, May 1995.
[17] B. Yang. "Projection Approximation Subspace Tracking". IEEE Trans. SP, 43(1):95-107, Jan. 1995.

Figure and table captions

Table 1: Main features of the IV-algorithms.
Figure 1: One realization of the frequency estimates. SNR = 5 dB, $e(t) = \frac{1}{1-0.8q^{-1}}\varepsilon(t)$, $\lambda = 0.97$.
Figure 2: One realization of the frequency estimates. SNR = 8 dB, $e(t) = \frac{1}{1-0.8q^{-1}}\varepsilon(t)$, $\lambda = 0.97$, $\mu = 0.21$, $\gamma = 0.00003$.
Figure 3: Deviation from orthonormality for a step change. SNR = 8 dB, $e(t) = \frac{1}{1-0.8q^{-1}}\varepsilon(t)$, $\lambda = 0.97$, $\mu = 0.21$, $\gamma = 0.00003$.
Figure 4: Subspace angle between the SVD and EIV-PAST subspace estimates. SNR = 8 dB, $e(t) = \frac{1}{1-0.8q^{-1}}\varepsilon(t)$, $\lambda = 0.97$, $\mu = 0.21$, $\gamma = 0.00003$.
Figure 5: Frequency estimates for a frequency modulated frequency. SNR = 10 dB, $e(t) = \frac{1}{1-0.8q^{-1}}\varepsilon(t)$, $\lambda = 0.97$, $\mu = 0.21$, $\gamma = 0.00003$.
Figure 6: Frequency estimates for complex linear FM. SNR = 15 dB, $\lambda = 0.97$, $\mu = 0.21$, $\gamma = 0.00003$.
Figure 7: Deviation from orthonormality for EIV-PAST. SNR = 15 dB. Solid: $\lambda = 1$; dashed: $\lambda = 0.97$; dotted: $\lambda = 0.95$.
Figure 8: Subspace angle between the EIV-PAST estimate and the true subspace. SNR = 15 dB. Solid: $\lambda = 1$; dashed: $\lambda = 0.97$; dotted: $\lambda = 0.95$.

Table 1:

Algorithm  | Complexity    | Rank Reduction | Projection Approximation
IV-PAST    | 3mn + O(n^2)  | No             | Yes
EIV-PAST   | 3ml + O(mn)   | Yes            | Yes
IVg        | 3mnl + O(nl)  | Yes            | No
2IV-PAST   | 4ml + O(mn)   | Yes            | Yes

[Figure 1: normalized frequency vs. sample number; curves: PAST, EIV-PAST.]
[Figure 2: normalized frequency vs. sample number; curves: EIV-PAST, IV-PAST, 2IV-PAST, IVg, SVD.]
[Figure 3: deviation from orthonormality vs. sample number; curves: EIV-PAST, IV-PAST, 2IV-PAST, IVg.]
[Figure 4: subspace angle (degrees) vs. sample number.]
[Figure 5: normalized frequency vs. sample number; curves: EIV-PAST, 2IV-PAST, IVg, SVD.]
[Figure 6: normalized frequency vs. sample number; curves: EIV-PAST, 2IV-PAST, IVg, SVD.]
[Figure 7: deviation from orthonormality vs. sample number.]
[Figure 8: subspace angle (degrees) vs. sample number.]


More information

ROYAL INSTITUTE OF TECHNOLOGY KUNGL TEKNISKA HÖGSKOLAN. Department of Signals, Sensors & Systems Signal Processing S STOCKHOLM

ROYAL INSTITUTE OF TECHNOLOGY KUNGL TEKNISKA HÖGSKOLAN. Department of Signals, Sensors & Systems Signal Processing S STOCKHOLM Optimal Array Signal Processing in the Presence of oherent Wavefronts P. Stoica B. Ottersten M. Viberg December 1995 To appear in Proceedings ASSP{96 R-S3-SB-9529 ROYAL NSTTUTE OF TEHNOLOGY Department

More information

Relative Irradiance. Wavelength (nm)

Relative Irradiance. Wavelength (nm) Characterization of Scanner Sensitivity Gaurav Sharma H. J. Trussell Electrical & Computer Engineering Dept. North Carolina State University, Raleigh, NC 7695-79 Abstract Color scanners are becoming quite

More information

Spatial Smoothing and Broadband Beamforming. Bhaskar D Rao University of California, San Diego

Spatial Smoothing and Broadband Beamforming. Bhaskar D Rao University of California, San Diego Spatial Smoothing and Broadband Beamforming Bhaskar D Rao University of California, San Diego Email: brao@ucsd.edu Reference Books and Papers 1. Optimum Array Processing, H. L. Van Trees 2. Stoica, P.,

More information

Improved MUSIC Algorithm for Estimation of Time Delays in Asynchronous DS-CDMA Systems

Improved MUSIC Algorithm for Estimation of Time Delays in Asynchronous DS-CDMA Systems Improved MUSIC Algorithm for Estimation of Time Delays in Asynchronous DS-CDMA Systems Thomas Ostman, Stefan Parkvall and Bjorn Ottersten Department of Signals, Sensors and Systems, Royal Institute of

More information

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a WHEN IS A MAP POISSON N.G.Bean, D.A.Green and P.G.Taylor Department of Applied Mathematics University of Adelaide Adelaide 55 Abstract In a recent paper, Olivier and Walrand (994) claimed that the departure

More information

Experimental evidence showing that stochastic subspace identication methods may fail 1

Experimental evidence showing that stochastic subspace identication methods may fail 1 Systems & Control Letters 34 (1998) 303 312 Experimental evidence showing that stochastic subspace identication methods may fail 1 Anders Dahlen, Anders Lindquist, Jorge Mari Division of Optimization and

More information

Detection and Localization of Tones and Pulses using an Uncalibrated Array

Detection and Localization of Tones and Pulses using an Uncalibrated Array Detection and Localization of Tones and Pulses using an Uncalibrated Array Steven W. Ellingson January 24, 2002 Contents 1 Introduction 2 2 Traditional Method (BF) 2 3 Proposed Method Version 1 (FXE) 3

More information

On the Equivariance of the Orientation and the Tensor Field Representation Klas Nordberg Hans Knutsson Gosta Granlund Computer Vision Laboratory, Depa

On the Equivariance of the Orientation and the Tensor Field Representation Klas Nordberg Hans Knutsson Gosta Granlund Computer Vision Laboratory, Depa On the Invariance of the Orientation and the Tensor Field Representation Klas Nordberg Hans Knutsson Gosta Granlund LiTH-ISY-R-530 993-09-08 On the Equivariance of the Orientation and the Tensor Field

More information

Chapter 2 Canonical Correlation Analysis

Chapter 2 Canonical Correlation Analysis Chapter 2 Canonical Correlation Analysis Canonical correlation analysis CCA, which is a multivariate analysis method, tries to quantify the amount of linear relationships etween two sets of random variales,

More information

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr The discrete algebraic Riccati equation and linear matrix inequality nton. Stoorvogel y Department of Mathematics and Computing Science Eindhoven Univ. of Technology P.O. ox 53, 56 M Eindhoven The Netherlands

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

Elec4621 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis

Elec4621 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis Elec461 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis Dr. D. S. Taubman May 3, 011 In this last chapter of your notes, we are interested in the problem of nding the instantaneous

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

A Fast Algorithm for. Nonstationary Delay Estimation. H. C. So. Department of Electronic Engineering, City University of Hong Kong

A Fast Algorithm for. Nonstationary Delay Estimation. H. C. So. Department of Electronic Engineering, City University of Hong Kong A Fast Algorithm for Nonstationary Delay Estimation H. C. So Department of Electronic Engineering, City University of Hong Kong Tat Chee Avenue, Kowloon, Hong Kong Email : hcso@ee.cityu.edu.hk June 19,

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

An Adaptive Sensor Array Using an Affine Combination of Two Filters

An Adaptive Sensor Array Using an Affine Combination of Two Filters An Adaptive Sensor Array Using an Affine Combination of Two Filters Tõnu Trump Tallinn University of Technology Department of Radio and Telecommunication Engineering Ehitajate tee 5, 19086 Tallinn Estonia

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Direction of Arrival Estimation: Subspace Methods. Bhaskar D Rao University of California, San Diego

Direction of Arrival Estimation: Subspace Methods. Bhaskar D Rao University of California, San Diego Direction of Arrival Estimation: Subspace Methods Bhaskar D Rao University of California, San Diego Email: brao@ucsdedu Reference Books and Papers 1 Optimum Array Processing, H L Van Trees 2 Stoica, P,

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

The constrained stochastic matched filter subspace tracking

The constrained stochastic matched filter subspace tracking The constrained stochastic matched filter subspace tracking Maissa Chagmani, Bernard Xerri, Bruno Borloz, Claude Jauffret To cite this version: Maissa Chagmani, Bernard Xerri, Bruno Borloz, Claude Jauffret.

More information

926 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 3, MARCH Monica Nicoli, Member, IEEE, and Umberto Spagnolini, Senior Member, IEEE (1)

926 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 3, MARCH Monica Nicoli, Member, IEEE, and Umberto Spagnolini, Senior Member, IEEE (1) 926 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 3, MARCH 2005 Reduced-Rank Channel Estimation for Time-Slotted Mobile Communication Systems Monica Nicoli, Member, IEEE, and Umberto Spagnolini,

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

Linear Algebra for Machine Learning. Sargur N. Srihari

Linear Algebra for Machine Learning. Sargur N. Srihari Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it

More information

Statistical and Adaptive Signal Processing

Statistical and Adaptive Signal Processing r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory

More information

Cover page. : On-line damage identication using model based orthonormal. functions. Author : Raymond A. de Callafon

Cover page. : On-line damage identication using model based orthonormal. functions. Author : Raymond A. de Callafon Cover page Title : On-line damage identication using model based orthonormal functions Author : Raymond A. de Callafon ABSTRACT In this paper, a new on-line damage identication method is proposed for monitoring

More information

Improved Unitary Root-MUSIC for DOA Estimation Based on Pseudo-Noise Resampling

Improved Unitary Root-MUSIC for DOA Estimation Based on Pseudo-Noise Resampling 140 IEEE SIGNAL PROCESSING LETTERS, VOL. 21, NO. 2, FEBRUARY 2014 Improved Unitary Root-MUSIC for DOA Estimation Based on Pseudo-Noise Resampling Cheng Qian, Lei Huang, and H. C. So Abstract A novel pseudo-noise

More information

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220)

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220) Notes for 2017-01-30 Most of mathematics is best learned by doing. Linear algebra is no exception. You have had a previous class in which you learned the basics of linear algebra, and you will have plenty

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

BLIND SEPARATION OF INSTANTANEOUS MIXTURES OF NON STATIONARY SOURCES

BLIND SEPARATION OF INSTANTANEOUS MIXTURES OF NON STATIONARY SOURCES BLIND SEPARATION OF INSTANTANEOUS MIXTURES OF NON STATIONARY SOURCES Dinh-Tuan Pham Laboratoire de Modélisation et Calcul URA 397, CNRS/UJF/INPG BP 53X, 38041 Grenoble cédex, France Dinh-Tuan.Pham@imag.fr

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

In: Proc. BENELEARN-98, 8th Belgian-Dutch Conference on Machine Learning, pp 9-46, 998 Linear Quadratic Regulation using Reinforcement Learning Stephan ten Hagen? and Ben Krose Department of Mathematics,

More information

Computing tomographic resolution matrices using Arnoldi s iterative inversion algorithm

Computing tomographic resolution matrices using Arnoldi s iterative inversion algorithm Stanford Exploration Project, Report 82, May 11, 2001, pages 1 176 Computing tomographic resolution matrices using Arnoldi s iterative inversion algorithm James G. Berryman 1 ABSTRACT Resolution matrices

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Co-prime Arrays with Reduced Sensors (CARS) for Direction-of-Arrival Estimation

Co-prime Arrays with Reduced Sensors (CARS) for Direction-of-Arrival Estimation Co-prime Arrays with Reduced Sensors (CARS) for Direction-of-Arrival Estimation Mingyang Chen 1,LuGan and Wenwu Wang 1 1 Department of Electrical and Electronic Engineering, University of Surrey, U.K.

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab: Implementations

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

EFFECTS OF ILL-CONDITIONED DATA ON LEAST SQUARES ADAPTIVE FILTERS. Gary A. Ybarra and S.T. Alexander

EFFECTS OF ILL-CONDITIONED DATA ON LEAST SQUARES ADAPTIVE FILTERS. Gary A. Ybarra and S.T. Alexander EFFECTS OF ILL-CONDITIONED DATA ON LEAST SQUARES ADAPTIVE FILTERS Gary A. Ybarra and S.T. Alexander Center for Communications and Signal Processing Electrical and Computer Engineering Department North

More information

5 and A,1 = B = is obtained by interchanging the rst two rows of A. Write down the inverse of B.

5 and A,1 = B = is obtained by interchanging the rst two rows of A. Write down the inverse of B. EE { QUESTION LIST EE KUMAR Spring (we will use the abbreviation QL to refer to problems on this list the list includes questions from prior midterm and nal exams) VECTORS AND MATRICES. Pages - of the

More information

Singular Value Decomposition

Singular Value Decomposition Singular Value Decomposition (Com S 477/577 Notes Yan-Bin Jia Sep, 7 Introduction Now comes a highlight of linear algebra. Any real m n matrix can be factored as A = UΣV T where U is an m m orthogonal

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Deep Learning Book Notes Chapter 2: Linear Algebra

Deep Learning Book Notes Chapter 2: Linear Algebra Deep Learning Book Notes Chapter 2: Linear Algebra Compiled By: Abhinaba Bala, Dakshit Agrawal, Mohit Jain Section 2.1: Scalars, Vectors, Matrices and Tensors Scalar Single Number Lowercase names in italic

More information

THE estimation of covariance matrices is a crucial component

THE estimation of covariance matrices is a crucial component 1 A Subspace Method for Array Covariance Matrix Estimation Mostafa Rahmani and George K. Atia, Member, IEEE, arxiv:1411.0622v1 [cs.na] 20 Oct 2014 Abstract This paper introduces a subspace method for the

More information

NOISE ROBUST RELATIVE TRANSFER FUNCTION ESTIMATION. M. Schwab, P. Noll, and T. Sikora. Technical University Berlin, Germany Communication System Group

NOISE ROBUST RELATIVE TRANSFER FUNCTION ESTIMATION. M. Schwab, P. Noll, and T. Sikora. Technical University Berlin, Germany Communication System Group NOISE ROBUST RELATIVE TRANSFER FUNCTION ESTIMATION M. Schwab, P. Noll, and T. Sikora Technical University Berlin, Germany Communication System Group Einsteinufer 17, 1557 Berlin (Germany) {schwab noll

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

THE PROBLEMS OF ROBUST LPC PARAMETRIZATION FOR. Petr Pollak & Pavel Sovka. Czech Technical University of Prague

THE PROBLEMS OF ROBUST LPC PARAMETRIZATION FOR. Petr Pollak & Pavel Sovka. Czech Technical University of Prague THE PROBLEMS OF ROBUST LPC PARAMETRIZATION FOR SPEECH CODING Petr Polla & Pavel Sova Czech Technical University of Prague CVUT FEL K, 66 7 Praha 6, Czech Republic E-mail: polla@noel.feld.cvut.cz Abstract

More information

UMIACS-TR July CS-TR 2494 Revised January An Updating Algorithm for. Subspace Tracking. G. W. Stewart. abstract

UMIACS-TR July CS-TR 2494 Revised January An Updating Algorithm for. Subspace Tracking. G. W. Stewart. abstract UMIACS-TR-9-86 July 199 CS-TR 2494 Revised January 1991 An Updating Algorithm for Subspace Tracking G. W. Stewart abstract In certain signal processing applications it is required to compute the null space

More information

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Antoni Ras Departament de Matemàtica Aplicada 4 Universitat Politècnica de Catalunya Lecture goals To review the basic

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information