Interpretation of the Multi-Stage Nested Wiener Filter in the Krylov Subspace Framework


M. Joham and M. D. Zoltowski
School of Electrical Engineering, Purdue University, West Lafayette, IN

Abstract

In this paper, we show that the Multi-Stage Nested Wiener Filter (MSNWF) can be identified as the solution of the Wiener-Hopf equation in the Krylov subspace of the covariance matrix of the observation and the crosscorrelation vector of the observation and the desired signal. This understanding leads to the conclusion that the Arnoldi algorithm which arises from the MSNWF development can be replaced by the Lanczos algorithm. Thus, the computation of the underlying basis of the Krylov subspace can be simplified. Moreover, the settlement of the MSNWF in the Krylov subspace framework helps to derive an alternative formulation of the already presented MSNWF decomposition.

1 Introduction

The Wiener filter (WF) is a well known approach to estimate the unknown signal d_0[n] from an observation x_0[n] and is optimal in the Minimum Mean Square Error (MMSE) sense. The WF is also optimal in the Bayesian sense if the signals d_0[n] and x_0[n] are jointly Gaussian random variables (e.g. [Sch91, MW95]). Therefore, the WF is employed in many applications because it is easily implemented and only relies on second order statistics which can be estimated with partial knowledge of the transmitted signal. Since the resulting filter depends upon the inverse of the covariance matrix of the observation x_0[n], and the needed filter length can be very large if the observation x_0[n] is of high dimensionality, an alternative approach which operates in a reduced space is of great interest to reduce computational complexity and the number of observations needed to estimate the statistics. The first approach to reduce the dimension of the estimation problem is the well established Principal Component (PC) method [Hot33, EY36]. The observation signal is transformed by a matrix constituted by the eigenvectors

belonging to the principal eigenvalues; thus, a truncated Karhunen-Loève transform is applied. The Wiener filter solution with respect to the new observation can be easily obtained since the covariance matrix of the new observation is a diagonal matrix with the principal eigenvalues as its entries. However, the PC method only takes into account the statistics of the observation signal and does not consider the relation to the desired signal. Therefore, Goldstein et al. [GR97b] introduced the Cross-Spectral (CS) metric which evolved from the Generalized Sidelobe Canceller (GSC) [AC76, GJ82] and incorporates the similarity of the crosscorrelation vector of the observation and the desired signal with the respective eigenvector. The CS method does not choose the principal eigenvectors, but the eigenvectors which belong to the largest CS metric, as was also suggested by Byerly et al. in [BR89]. Recently, Goldstein et al. presented the Multi-Stage Nested Wiener Filter (MSNWF) approach [GR97a, GRS98] which can be seen as a chain of GSCs. This fundamental contribution showed that the reduction of the dimension of the observation signal based on the eigenvectors as proposed by the PC and CS methods is suboptimum. The MSNWF does not need the eigenvectors of the covariance matrix of the observation signal and is, thus, computationally advantageous (see [GGR99, GGDR98] for an illustrative example). This property led to the application of the MSNWF in detection problems [GRZ99] and CDMA interference cancellation [HG00]. Honig et al. showed that the number of necessary MSNWF stages even for a heavily loaded CDMA system is small compared to the eigenspace based methods [HX99]. Pados et al. presented a similar result in [PB99] for the Auxiliary Vector (AV) method which was also developed from the GSC. In fact, the earlier AV publications [KBP98, PLB99] are an implementation of the GSC method.
Their formulation in [PB99] is similar to [HG00] except that the combination of the transformed observation signal to form the estimate of the desired signal is suboptimal. Our contribution is to show that the Multi-Stage Nested Wiener Filter (MSNWF) can be seen as the solution of the Wiener-Hopf equation in the Krylov subspace of the covariance matrix of the observation signal and the crosscorrelation vector of the observation and the desired signal. This conclusion follows from the results in [PB99] and especially in [HG00], but we present the consequences of this connection. First, the Arnoldi algorithm which is used to find the orthonormal basis for the Krylov subspace can be replaced by the Lanczos algorithm [Saa96] since the covariance matrix is Hermitian. Second, we develop a new formulation of the MSNWF algorithm which gives an expression for the resulting Mean Square Error (MSE), although it only works in the reduced dimensional space. Schneider et al. [SW99] presented a similar algorithm which also included the computation of the error variance in an image processing application. However, their formulation assumes only additive noise and a few dominant eigenvalues of the covariance matrix. In the next section, we briefly review the theory of the Wiener filter to make the reader familiar with our notation. Before we discuss the reduced rank MSNWF in Section 4, we concentrate on the original MSNWF approach in

Section 3 to motivate our reasoning in Section 5, where we show the close relationship between the MSNWF and Krylov subspace based methods. In Section 6 we present a new formulation of the MSNWF algorithm.

2 Wiener Filter

Figure 1 shows the principle of a Wiener filter (WF). The desired signal d_0[n] ∈ C, which is assumed to be a zero-mean white Gaussian process, is estimated by applying the linear filter w ∈ C^N to the observation signal x_0[n] ∈ C^N which is a multivariate zero-mean Gaussian process. The error of the estimation can be written as

ε_0[n] = d_0[n] − \hat d_0[n] = d_0[n] − w^H x_0[n],  (1)

where (·)^H denotes conjugate transpose. The variance of the estimation error is the mean square error

MSE_0 = E{|ε_0[n]|²} = σ_{d_0}² − w^H r_{x_0,d_0} − r_{x_0,d_0}^H w + w^H R_{x_0} w,  (2)

with the covariance matrix of the observation x_0[n]

R_{x_0} = E{x_0[n] x_0^H[n]} ∈ C^{N×N},  (3)

the variance of the desired signal d_0[n]

σ_{d_0}² = E{|d_0[n]|²},  (4)

and the crosscorrelation between the desired signal d_0[n] and the observation signal x_0[n]

r_{x_0,d_0} = E{x_0[n] d_0^*[n]} ∈ C^N,  (5)

where (·)^* denotes complex conjugate.

Figure 1: Wiener Filter

The Wiener filter w_0 is the filter w which minimizes the mean square error, thus,

w_0 = arg min_w MSE_0.  (6)

This criterion leads to the Wiener-Hopf equation

R_{x_0} w_0 = r_{x_0,d_0}  (7)

and the Wiener filter

w_0 = R_{x_0}^{-1} r_{x_0,d_0} ∈ C^N.  (8)
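To make these quantities concrete, here is a minimal NumPy sketch (an illustration, not from the paper; the rank-one signal model, dimensions, and variances are arbitrary assumptions) that builds R_{x_0} and r_{x_0,d_0} for a toy model x_0[n] = a d_0[n] + noise and solves the Wiener-Hopf equation (7):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Hypothetical model: x0[n] = a * d0[n] + noise, so the second order
# statistics of Equations (3)-(5) are known in closed form.
a = rng.standard_normal(N)
sigma2_d0 = 1.0            # variance of the desired signal, Eq. (4)
sigma2_n = 0.1             # noise variance per component
R_x0 = sigma2_d0 * np.outer(a, a) + sigma2_n * np.eye(N)   # Eq. (3)
r_x0d0 = sigma2_d0 * a                                     # Eq. (5)

# Wiener-Hopf equation (7): R_x0 w0 = r_x0d0. Solving the linear system
# is numerically preferable to forming the inverse of Eq. (8) explicitly.
w0 = np.linalg.solve(R_x0, r_x0d0)

mmse = sigma2_d0 - r_x0d0 @ w0     # minimum mean square error
```

The resulting error variance is strictly positive but smaller than the prior variance of d_0[n], as expected for a non-degenerate observation.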

Consequently, the minimum mean square error (MMSE) can be written as

MMSE_0 = σ_{d_0}² − r_{x_0,d_0}^H R_{x_0}^{-1} r_{x_0,d_0}.  (9)

Before we focus on the Multi-Stage Nested Wiener Filter (MSNWF) in the next section, we have to recite the following theorem (e.g. [GRS98]) which is proved in Appendix A.

Theorem 1. If the observation x_0[n] to estimate d_0[n] is pre-filtered by a full-rank matrix T ∈ C^{N×N}, i.e., z[n] = T x_0[n], the Wiener filter w_z to estimate d_0[n] from z[n] leads to the same estimate \hat d_0[n]; thus, the MMSE is unchanged.

This theorem can be generalized to a bigger set of pre-filtering matrices by the following corollary.

Corollary 1. If the pre-filtering matrix T ∈ C^{M×N}, M ≥ N, has full column rank, i.e., T has a left inverse, the resulting estimate \hat d_0[n] and the MMSE are unchanged.

The proof of Corollary 1 is straightforward: the inverse of T just has to be replaced by the left inverse of T in Appendix A. However, if T is tall, i.e., M > N, then the Wiener filter solution is ambiguous, because a vector which is orthogonal to the column space of T can be added to w_z in Equation (58) without changing the estimate \hat d_0[n] and the MMSE.

3 Multi-Stage Nested Wiener Filter

The Multi-Stage Nested Wiener Filter (MSNWF) was developed by Goldstein et al. [GR97a, GRS98] to find an approximate solution of the Wiener-Hopf equation (cf. Equation 7) which does not need the inverse or the eigenvalue decomposition of the covariance matrix. The approximation for the Wiener filter is found by stopping the recursive algorithm after D steps; hence, the approximation lies in a D-dimensional subspace of C^N. The first step of the MSNWF algorithm is to apply a full rank pre-filtering matrix of the form

T_1 = [ h_1^H ; B_1 ] ∈ C^{N×N}  (10)

to get the new observation signal

z_1[n] = T_1 x_0[n] = [ h_1^H x_0[n] ; B_1 x_0[n] ] = [ d_1[n] ; x_1[n] ] ∈ C^N  (11)

which does not change the estimate \hat d_0[n] as postulated in Theorem 1. The rows of B_1 are chosen to be orthogonal to h_1^H; therefore,

B_1 h_1 = 0 or B_1 = null(h_1^H)^H.  (12)
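Theorem 1 lends itself to a quick numerical check. The following sketch (illustrative NumPy code with an arbitrary random model, not the authors' code; all signals are real-valued, so the Hermitian transpose reduces to the transpose) verifies that a full-rank pre-filter leaves the Wiener estimate unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5

# Arbitrary valid second order statistics: R_x0 symmetric positive definite.
A = rng.standard_normal((N, N))
R_x0 = A @ A.T + N * np.eye(N)
r_x0d0 = rng.standard_normal(N)

w0 = np.linalg.solve(R_x0, r_x0d0)        # Wiener filter, Eq. (8)

# Full-rank pre-filter T, z[n] = T x0[n] as in Theorem 1.
T = rng.standard_normal((N, N)) + N * np.eye(N)   # almost surely invertible
R_z = T @ R_x0 @ T.T                      # covariance of z[n]
r_zd0 = T @ r_x0d0                        # crosscorrelation of z[n] and d0[n]
w_z = np.linalg.solve(R_z, r_zd0)

# Identical estimates: w_z^H z[n] = (T^H w_z)^H x0[n] must equal w0^H x0[n],
# i.e., T^H w_z = w0.
```

Since the estimate is a deterministic function of the statistics, checking T^H w_z = w0 suffices; no sample signals are needed.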

The intuitive choice for the first row h_1^H is the vector which, when applied to x_0[n], gives a scalar signal d_1[n] that has maximum correlation with the desired signal d_0[n]. Without loss of generality we assume that ||h_1||_2 = 1 and force d_1[n] to be in-phase with d_0[n], i.e., the correlation between d_0[n] and d_1[n] is real, which is motivated by the trivial case when d_1[n] = d_0[n]. Note that this criterion is different from the one proposed in [GGDR98] since we do not optimize the absolute value of the correlation, which would lead to a loss of information about the phase; thus, Goldstein et al. had to use the Schwarz inequality to derive the same result. The formulation in [PB99, PLB99] is also based on the absolute value of the correlation, but in the proof of the result the correlation is assumed to be real. With our consideration we end up with the following optimization problem

h_1 = arg max_h E{Re(d_1[n] d_0^*[n])} or h_1 = arg max_h (1/2)(h^H r_{x_0,d_0} + r_{x_0,d_0}^H h) s.t.: h^H h = 1  (13)

which results in the normalized matched filter

h_1 = r_{x_0,d_0} / ||r_{x_0,d_0}||_2 ∈ C^N.  (14)

Note that d_1[n] contains all information about d_0[n] which can be found in x_0[n], since d_0[n] is a scalar; thus, the information about d_0[n] lies in a one-dimensional subspace of C^N, and with the matched filter we pick out only those portions of x_0[n] which are included in this subspace. Moreover, the second part of z_1[n], namely x_1[n], does not contain any information about d_0[n] because of the orthogonality condition in Equation (12).

Figure 2: Wiener Filter with Pre-Filtering

Again, we have to solve the Wiener-Hopf equation of the new system depicted in Figure 2 to get

w_z = R_{z_1}^{-1} r_{z_1,d_0} ∈ C^N,  (15)

where the covariance matrix of z_1[n] can be expressed by the statistics of d_1[n] and x_1[n], i.e.,

R_{z_1} = [ σ_{d_1}²  r_{x_1,d_1}^H ; r_{x_1,d_1}  R_{x_1} ] ∈ C^{N×N},  (16)

with the variance of d_1[n]

σ_{d_1}² = E{|d_1[n]|²} = h_1^H R_{x_0} h_1,  (17)

the crosscorrelation vector between x_1[n] and d_1[n]

r_{x_1,d_1} = E{x_1[n] d_1^*[n]} = B_1 R_{x_0} h_1 ∈ C^{N−1}  (18)

and the covariance matrix of x_1[n]

R_{x_1} = E{x_1[n] x_1^H[n]} = B_1 R_{x_0} B_1^H ∈ C^{(N−1)×(N−1)}.  (19)

The crosscorrelation vector of the pre-filtered observation signal z_1[n] and the desired signal d_0[n] reveals the usefulness of the choice of pre-filtering:

r_{z_1,d_0} = T_1 r_{x_0,d_0} = ||r_{x_0,d_0}||_2 e_1 ∈ R^N,  (20)

where e_i denotes a unit norm vector with a one at the i-th position. Therefore, the Wiener filter w_z of the pre-filtered signal z_1[n] is just a weighted version of the first column of the inverse of the covariance matrix R_{z_1} in Equation (16). After applying the inversion lemma for partitioned matrices (e.g. [Sch91, MW95]) we end up with [GR97a, GRS98]

w_z = α_1 [ 1 ; −R_{x_1}^{-1} r_{x_1,d_1} ] ∈ C^N,  (21)

where

α_1 = ||r_{x_0,d_0}||_2 (σ_{d_1}² − r_{x_1,d_1}^H R_{x_1}^{-1} r_{x_1,d_1})^{-1}.  (22)

Equation (21) is the key equation to understand most interpretations of the MSNWF. The first and most important observation in Equation (21) is that the vector in brackets, when applied to z_1[n], gives the error signal ε_1[n] of the Wiener filter which estimates d_1[n] from x_1[n]:

ε_1[n] = d_1[n] − \hat d_1[n] = d_1[n] − w_1^H x_1[n] = [1, −w_1^H] z_1[n]  (23)

with the Wiener filter

w_1 = R_{x_1}^{-1} r_{x_1,d_1} ∈ C^{N−1}.  (24)

This observation immediately leads to the next step in the MSNWF development. In the second step, the output of the Wiener filter w_1 with dimension N−1 can be replaced by the weighted error signal ε_2[n] of a Wiener filter which estimates the output signal d_2[n] of the matched filter h_2 from the blocking-matrix output x_2[n] = B_2 x_1[n]. Moreover, this observation shows the close relationship of the MSNWF to the Generalized Sidelobe Canceller (GSC, [AC76, GJ82, GRZ99]). The GSC can be interpreted as a MSNWF after the first step. Second, the factor α_1 is a scalar Wiener filter to estimate d_0[n] from the scalar ε_1[n].
The covariance matrix, or variance, of ε_1[n] is the MMSE of the Wiener filter w_1, and the crosscorrelation between the scalar observation signal ε_1[n] and the desired signal d_0[n] is the norm of the matched filter ||r_{x_0,d_0}||_2; thus,

α_1 = σ_{ε_1}^{-2} r_{ε_1,d_0} = (σ_{d_1}² − r_{x_1,d_1}^H R_{x_1}^{-1} r_{x_1,d_1})^{-1} ||r_{x_0,d_0}||_2.  (25)
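The decomposition of Equations (21) through (25) can be verified numerically. This sketch (illustrative, real-valued NumPy code, not the authors' implementation; the blocking matrix B_1 is built from a QR factorization, one of many valid choices satisfying Equation (12)) reproduces the Wiener filter w_z of Equation (21) from h_1, B_1, w_1, and α_1:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 6
A = rng.standard_normal((N, N))
R = A @ A.T + 0.5 * np.eye(N)        # stands in for R_x0
r = rng.standard_normal(N)           # stands in for r_x0,d0

h1 = r / np.linalg.norm(r)           # normalized matched filter, Eq. (14)
# Blocking matrix with orthonormal rows spanning the complement of h1,
# so that B1 @ h1 = 0 as required by Eq. (12).
Q, _ = np.linalg.qr(np.column_stack([h1, rng.standard_normal((N, N - 1))]))
B1 = Q[:, 1:].T
T1 = np.vstack([h1, B1])             # pre-filter of Eq. (10)

# Statistics of z1[n] = T1 x0[n], Eqs. (17)-(19)
sigma2_d1 = h1 @ R @ h1
r_x1d1 = B1 @ R @ h1
R_x1 = B1 @ R @ B1.T

# GSC form of the Wiener filter, Eqs. (21)-(24)
w1 = np.linalg.solve(R_x1, r_x1d1)
alpha1 = np.linalg.norm(r) / (sigma2_d1 - r_x1d1 @ w1)
w_z_gsc = alpha1 * np.concatenate([[1.0], -w1])

# Direct solution of the Wiener-Hopf equation for z1[n], Eq. (15)
w_z = np.linalg.solve(T1 @ R @ T1.T, T1 @ r)
```

Both routes give the same filter, confirming the partitioned-inverse derivation.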

Figure 3: MSNWF after the First Step

These two interpretations lead to the well known structure in Figure 3. The Wiener filter w_0 is replaced by the scalar Wiener filter α_1 which estimates d_0[n] from the error signal ε_1[n] of the vector Wiener filter w_1. Third, the MSNWF can be created without knowledge of the covariance matrix R_{x_0} and, therefore, the implementation of a MSNWF is simplified, because at each step the Wiener filter is replaced by the normalized matched filter and the next stage. Since the matched filter is simply the crosscorrelation between the new observation x_i[n] and the new desired signal d_i[n], at each step only an estimation of this crosscorrelation is needed. Note, however, that estimating all the crosscorrelations and estimating the covariance matrix lead to the same resulting estimate \hat d_0[n] at the same expense. Fourth, it is straightforward to see that each new desired signal d_i[n], i = 1, ..., N, is the output of a length N filter

t_i = (∏_{k=1}^{i−1} B_k^H) h_i ∈ C^N  (26)

as depicted in Figure 4. This interpretation of the MSNWF will be used in the following sections. Note that all following stages are orthogonal to the first stage, i.e., t_1^H t_i = δ_{1,i}, i = 2, ..., N, where δ_{k,i} denotes the Kronecker delta function which is 1 for k = i and 0 for k ≠ i. However, the filters t_i are not an orthogonal basis of C^N in general (cf. Section 5). The Auxiliary Vector method [KBP98] also evolved from GSC considerations, but ended the iteration after the second step and is, thus, a rank two MSNWF approximation of the ideal Wiener filter. The extension of the AV method presented in [PB99] is basically the structure shown in Figure 4, but the proposed choice of the scalar filters α_i is suboptimal. The very important property of the MSNWF is that the pre-filtered observation vector

d[n] = [d_1[n], ..., d_N[n]]^T,  (27)

where (·)^T denotes transpose, has a tri-diagonal covariance matrix [GRS98]. This can be understood with the help of Figures 3 and 4.
The matched filter h_i is designed to retrieve all information about d_{i−1}[n] that can be found in x_{i−1}[n]. Therefore, the output of h_i, d_i[n], is correlated with d_{i−1}[n] and also with d_{i+1}[n], because h_{i+1} is the matched filter to find d_i[n]. But d_{i+1}[n] includes

Figure 4: MSNWF as a Filter Bank

no information about d_{i−1}[n], since the input of h_{i+1} was pre-filtered by the blocking matrix B_i. Consequently, d_i[n] is only correlated to its neighbours d_{i−1}[n] and d_{i+1}[n], leading to a tri-diagonal covariance matrix. The remaining part of the MSNWF (summers and scalar Wiener filters α_i) is a Wiener filter which estimates d_0[n] from d[n]. Because only d_1[n] is correlated with d_0[n], the crosscorrelation vector is simply a weighted version of e_1; thus, again only the first column of the inverse of the covariance matrix of d[n] is of interest. The structure with summers and scalar Wiener filters follows from the tri-diagonal property of the covariance matrix of d[n].

4 Reduced Rank MSNWF

The reduced rank MSNWF of rank D is easily obtained by stopping the development of the MSNWF after D steps and replacing the last Wiener filter w_D by the respective matched filter. Thus, the MSNWF of rank 1 is simply the matched filter r_{x_0,d_0}. We restrict ourselves to the MSNWF interpretation shown in Figure 4; hence, for D < N we get the length D observation with a tri-diagonal covariance matrix

d^(D)[n] = T^(D),H x_0[n] ∈ C^D,  (28)

where the superscript (·)^(D) indicates that we use a rank D approximation and the transformation matrix

T^(D) = [t_1, ..., t_D] ∈ C^{N×D}  (29)

comprises the first D filters which were already defined in Equation (26). Therefore, we have to find the Wiener filter w_d^(D) which estimates d_0[n] from d^(D)[n]. Obviously, this Wiener filter reads as

w_d^(D) = R_d^(D),−1 r_{d,d_0}^(D) = (T^(D),H R_{x_0} T^(D))^{-1} T^(D),H r_{x_0,d_0}  (30)

and the MSNWF rank D approximation of the Wiener filter w_0 can be expressed as

w_0^(D) = T^(D) w_d^(D) = T^(D) (T^(D),H R_{x_0} T^(D))^{-1} T^(D),H r_{x_0,d_0}  (31)
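A small numerical sketch of the rank-D approximation (illustrative NumPy code, not from the paper; it uses the fact, shown in Section 5, that for orthonormal filters t_i the matrix T^(D) spans the Krylov subspace of (R_{x_0}, r_{x_0,d_0}), and builds that basis by a QR factorization instead of the stage-by-stage recursion):

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 8, 3

A = rng.standard_normal((N, N))
R = A @ A.T + 0.1 * np.eye(N)           # stands in for R_x0
r = rng.standard_normal(N)              # stands in for r_x0,d0
sigma2 = float(r @ np.linalg.solve(R, r)) + 1.0   # sigma_d0^2 chosen so MMSE = 1

# Orthonormal basis T^(D) of span{r, R r, ..., R^(D-1) r}, cf. Eq. (29)
K = np.column_stack([np.linalg.matrix_power(R, k) @ r for k in range(D)])
T_D, _ = np.linalg.qr(K)

# Rank-D MSNWF approximation of the Wiener filter, Eq. (31)
w_D = T_D @ np.linalg.solve(T_D.T @ R @ T_D, T_D.T @ r)
mse_D = sigma2 - r @ w_D                # mean square error of the rank-D filter

w0 = np.linalg.solve(R, r)              # full-rank Wiener filter, Eq. (8)
mmse = sigma2 - r @ w0                  # equals 1.0 by construction
```

The reduced-rank MSE is lower bounded by the full-rank MMSE and decreases toward it as D grows.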

and has the following mean square error:

MSE^(D) = σ_{d_0}² − r_{x_0,d_0}^H T^(D) (T^(D),H R_{x_0} T^(D))^{-1} T^(D),H r_{x_0,d_0}.  (32)

5 MSNWF and Krylov Subspace

In Section 3 we stated that the filters t_i in Figure 4 are not orthogonal in general. It can be easily shown that they are orthogonal for the special choice of the blocking matrices B_i having the property that all singular values unequal to 0 are the same, i.e., σ_1 = ... = σ_{N−i} = σ and σ_k = 0, k > N − i. In the following we restrict ourselves to this class of blocking matrices; therefore, the filters t_i are orthogonal. Moreover, without loss of generality we assume that ||t_i||_2 = 1. Now, recall the MSNWF development (cf. Figure 3). At step i, the signal x_{i−1}[n] at the blocking matrix output of the previous stage was the input for the matched filter h_i and the blocking matrix B_i. Again, the matched filter was chosen because its output d_i[n] has the maximum correlation with the output of the previous stage d_{i−1}[n]. Because we can substitute the chain of blocking matrices and the matched filter h_i ∈ C^{N−i+1} by a filter t_i ∈ C^N (cf. Equation 26), and since we restrict ourselves to orthonormal filters t_i, we can compute the filters t_i directly. At the i-th step we get the additional output signal d_i[n] = t_i^H x_0[n] which has to be maximally correlated with the output signal of the previous stage d_{i−1}[n] = t_{i−1}^H x_0[n]. Together with the orthogonality conditions this leads to the following optimization:

t_i = arg max_t E{Re(d_i[n] d_{i−1}^*[n])} = arg max_t (1/2)(t^H R_{x_0} t_{i−1} + t_{i−1}^H R_{x_0} t)  (33)
s.t.: t^H t = 1 and t^H t_k = 0, k = 1, ..., i−1.

The result, which is easily obtained (e.g. with Lagrange multipliers, cf. Appendix B), reads as

t_i = (∏_{k=1}^{i−1} P_k) R_{x_0} t_{i−1} / ||(∏_{k=1}^{i−1} P_k) R_{x_0} t_{i−1}||_2,  (34)

where P_k denotes the projector onto the space orthogonal to t_k, i.e.,

P_k = 1_N − t_k t_k^H  (35)

and 1_N denotes the N×N identity matrix.
The expression in Equation (34) implies that the best choice for the blocking matrix B_i is the projector P_i onto the orthogonal space of the matched filter t_i, if the filters t_i are orthogonal to each other. This result suggests that the

reduction of the dimension of the solution space at each step of the MSNWF iteration does not necessarily lead to a reduction of the length of the filter as proposed by Goldstein et al. in [GRS98]. This result is consistent since Corollary 1 allows the use of a tall pre-filtering matrix T_i at each step. Honig et al. [HX99, HG00] already developed the same result, but they gave no motivation for this special choice of B_i. Also Pados et al. [PB99, LUN99] ended up with a similar expression for the Auxiliary Vector (AV) method, but did not consider the orthogonality of the filters t_i, which they incorporated into the optimization, leading to a more complicated expression. However, both contributions did not make the observation that the recursive algorithm described in Equation (34) is the well known Gram-Schmidt Arnoldi algorithm [Arn51, Saa96]. The Arnoldi recursion is the basic algorithm to compute the orthonormal basis of the Krylov subspace K^(D) of the square matrix A ∈ C^{M×M} and the column vector b ∈ C^M. The Krylov subspace of dimension D is defined as follows [Saa96, vdV00]:

K^(D) = span([b, Ab, ..., A^{D−1} b]).  (36)

Again, also Honig et al. [HX99] made this observation and proved that for the choice B_i = P_i the filters t_i are an orthonormal basis of the Krylov subspace. However, they did not excavate the fundamental implications of this result, which can be found in the following theorem [Arn51, Saa96].

Theorem 2. If the columns of the matrix T^(D) = [t_1, ..., t_D] were computed using the recursion

t_i = (∏_{k=1}^{i−1} P_k) A t_{i−1} / ||(∏_{k=1}^{i−1} P_k) A t_{i−1}||_2,  t_1 = b / ||b||_2,  (37)

where A ∈ C^{N×N} is an arbitrary square matrix and b ∈ C^N is an arbitrary column vector, then the following equality holds:

A T^(D) = T^(D) H^(D) + h_{D+1,D} t_{D+1} e_D^T,  (38)

where H^(D) is a D×D Hessenberg matrix and t_{D+1} denotes the next vector of the recursion. Obviously, since the vectors t_i are orthonormal, Equation (38) can be rewritten to get

H^(D) = T^(D),H A T^(D).  (39)
The proof of Theorem 2, which shows the actual values of the entries of H^(D), can be easily obtained from the recursion in Equation (37) and is outlined in Appendix C. To see the value of Theorem 2 we need to specialize A to be Hermitian.

Corollary 2. Given a Hermitian square matrix A ∈ C^{N×N}, i.e., A = A^H, the recursion in Equation (37) leads to a transformation matrix T^(D) which tri-diagonalizes A, i.e., H^(D) = T^(D),H A T^(D) is a Hermitian tri-diagonal matrix.

Proof. Theorem 2 proposes that H^(D) is a Hessenberg matrix. Since A is Hermitian,

H^(D),H = T^(D),H A^H T^(D) = T^(D),H A T^(D) = H^(D),  (40)

therefore, H^(D) is a Hermitian Hessenberg matrix, i.e., Hermitian tri-diagonal.

Equation (34), Theorem 2, and Corollary 2 disclose the connection between the MSNWF approach and Krylov subspace methods to solve linear equation systems. Our conclusion is that the MSNWF approach, although it is only motivated by statistical reasoning, is simply the solution of the Wiener-Hopf equation by employing the Krylov subspace of the matrix vector pair (R_{x_0}, r_{x_0,d_0}), if the filters t_i are orthogonal. Furthermore, if a reduced rank MSNWF with rank D is computed, this is fully equivalent to solving the Wiener-Hopf equation in the D-dimensional Krylov subspace K^(D). More descriptively, the inverse of R_{x_0} is approximated by a matrix polynomial q(R_{x_0}) of order D−1. We showed that the MSNWF procedure delivers filters t_i which form an orthonormal basis of the Krylov subspace. Henceforth, we can base our reasoning on the Krylov subspace. With Corollary 2 it is straightforward to understand that the covariance matrix R_d of the pre-filtered observation d[n] (cf. Equation 27) is tri-diagonal, because the covariance matrix R_{x_0} of the original observation x_0[n] is Hermitian. Moreover, the Hermitian property of R_{x_0} can be exploited to compute the orthogonal basis t_i of the Krylov subspace K^(D) of (R_{x_0}, r_{x_0,d_0}). In the case of Hermitian matrices the Lanczos algorithm [Lan50, Lan52, Saa96] can be applied to find the orthogonal basis:

t_i = P_{i−1} P_{i−2} R_{x_0} t_{i−1} / ||P_{i−1} P_{i−2} R_{x_0} t_{i−1}||_2
    = (R_{x_0} t_{i−1} − t_{i−2} t_{i−2}^H R_{x_0} t_{i−1} − t_{i−1} t_{i−1}^H R_{x_0} t_{i−1}) / ||R_{x_0} t_{i−1} − t_{i−2} t_{i−2}^H R_{x_0} t_{i−1} − t_{i−1} t_{i−1}^H R_{x_0} t_{i−1}||_2.  (41)
Thus, we reduce the complexity of the MSNWF iteration by exploiting the fact that the MSNWF is equivalent to finding the solution of the Wiener-Hopf equation in the Krylov subspace of (R_{x_0}, r_{x_0,d_0}). However, the Lanczos algorithm is sensitive to rounding errors; hence, the filters t_i are not orthogonal anymore for large i. In the sequel, we assume that the necessary rank D to find a good approximation of the Wiener filter is small enough to be able to apply the Lanczos algorithm.
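The three-term Lanczos recursion of Equation (41) is compact enough to state directly in code. This sketch (illustrative NumPy code for the real symmetric case with an arbitrary random model, not the authors' implementation) computes the basis and checks Corollary 2, i.e., that T^(D),H R_{x_0} T^(D) comes out tri-diagonal:

```python
import numpy as np

def lanczos_basis(R, r, D):
    """Orthonormal Krylov basis of (R, r) via the Lanczos recursion, Eq. (41)."""
    N = R.shape[0]
    T = np.zeros((N, D))
    t_prev = np.zeros(N)
    t = r / np.linalg.norm(r)
    for i in range(D):
        T[:, i] = t
        v = R @ t
        v = v - (t_prev @ v) * t_prev   # remove the t_{i-1} component
        v = v - (t @ v) * t             # remove the t_i component
        t_prev, t = t, v / np.linalg.norm(v)
    return T

rng = np.random.default_rng(3)
N, D = 10, 4
A = rng.standard_normal((N, N))
R = A @ A.T + 0.5 * np.eye(N)           # Hermitian (here: symmetric) R_x0
r = rng.standard_normal(N)              # r_x0,d0

T = lanczos_basis(R, r, D)
H = T.T @ R @ T                         # tri-diagonal by Corollary 2
```

Only the two most recent basis vectors enter each step, which is exactly the complexity saving over the full Arnoldi orthogonalization.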

6 A New MSNWF Iteration

In this section, we develop a new algorithm which computes the rank D MSNWF, but works only within the Krylov subspace of dimension D. Therefore, we assume that the orthonormal basis T^(D) ∈ C^{N×D} of the Krylov subspace K^(D) was found by the Arnoldi algorithm (cf. Equation 34) or by the Lanczos algorithm (cf. Equation 41). The resulting rank D MSNWF is given in Equation (31) by means of the Wiener filter w_d^(D) which is applied to the pre-filtered observation

d^(D)[n] = T^(D),H x_0[n] ∈ C^D,  (42)

with the tri-diagonal covariance matrix

R_d^(D) = T^(D),H R_{x_0} T^(D) = [ T^(D−1),H R_{x_0} T^(D−1)  r_{D−1,D} e_{D−1} ; r_{D−1,D}^* e_{D−1}^T  r_{D,D} ] ∈ C^{D×D}  (43)

and the crosscorrelation vector with respect to the desired signal d_0[n]

r_{d,d_0}^(D) = T^(D),H r_{x_0,d_0} = ||r_{x_0,d_0}||_2 e_1 ∈ R^D.  (44)

If we use the covariance matrix R_d^(D−1), the new entries of R_d^(D) are simply

r_{D,D} = t_D^H R_{x_0} t_D and r_{D−1,D} = t_{D−1}^H R_{x_0} t_D.  (45)

Because r_{d,d_0}^(D) has the property that only the first element is not equal to 0, only the first column of the inverse of R_d^(D) is needed to compute

w_d^(D) = R_d^(D),−1 r_{d,d_0}^(D) ∈ C^D.  (46)

Consequently, we are only interested in the first column c_1^(D) ∈ C^D of

C^(D) = R_d^(D),−1 = [c_1^(D), ..., c_D^(D)] ∈ C^{D×D}  (47)

and the inversion lemma for partitioned matrices (e.g. [Sch91, MW95]) leads to

C^(D) = [ C^(D−1)  0 ; 0^T  0 ] + β_D [ b^(D) ; 1 ] [ b^(D),H  1 ],  (48)

where the additional terms read as

b^(D) = −C^(D−1) r_{D−1,D} e_{D−1} = −r_{D−1,D} c_{D−1}^(D−1) ∈ C^{D−1}  (49)

and

β_D = (r_{D,D} − [0^T, r_{D−1,D}^*] C^(D−1) r_{D−1,D} e_{D−1})^{−1} = (r_{D,D} − |r_{D−1,D}|² c_{D−1,D−1}^(D−1))^{−1}  (50)

with c_{D−1,D−1}^(D−1) being the last element of the last column c_{D−1}^(D−1) of C^(D−1) at the previous step. Therefore, the new first column c_1^(D) can be written as

c_1^(D) = [ c_1^(D−1) ; 0 ] + β_D c_{1,D−1}^(D−1),* [ |r_{D−1,D}|² c_{D−1}^(D−1) ; −r_{D−1,D}^* ] ∈ C^D,  (51)

where c_{1,D−1}^(D−1) denotes the first element of c_{D−1}^(D−1). Obviously, the first column of C^(D) and, thus, the Wiener filter w_d^(D) at step D depend upon the first column c_1^(D−1) at step D−1 and the new entries r_{D,D} and r_{D−1,D} of the covariance matrix. However, we also observe a dependency on the previous last column c_{D−1}^(D−1). Hence, we have to find an expression for the last column c_D^(D) of C^(D), and with Equation (48) we get

c_D^(D) = β_D [ −r_{D−1,D} c_{D−1}^(D−1) ; 1 ]  (52)

which only depends on the previous last column and the new entries of R_d^(D). So, we found an iteration that only updates two vectors, c_1^(D) and c_D^(D), at each step and, moreover, the mean square error at step D can be expressed with the first entry c_{1,1}^(D) of c_1^(D) (cf. Equation 32):

MSE^(D) = σ_{d_0}² − ||r_{x_0,d_0}||_2² c_{1,1}^(D).  (53)

The iterations in Equations (51) and (52) only operate with scalars and vectors. However, the scalars r_{D,D} and r_{D−1,D} (cf. Equation 45) are needed, which are quadratic forms with the N×N covariance matrix R_{x_0}. But the matrix vector multiplication R_{x_0} t_{i−1} with complexity O(N²), which can be found in the expressions for r_{i,i} and r_{i−1,i}, has already been used for the Lanczos algorithm in Equation (41) to find the orthonormal basis T^(i). Thus, it is worthwhile to include the Lanczos recursion, and the resulting algorithm is shown in Table 1, where we substitute c_1^(i) and c_i^(i) by c_first^(i) and c_last^(i), respectively. The resulting computational complexity for a rank D MSNWF is O(N²D), since a matrix vector multiplication with O(N²) has to be performed at each step. Note that the algorithm in Table 1 is just a version of the Conjugate Gradient algorithm [HS52, Saa96]. In fact, it is a direct version of the Lanczos algorithm [Saa96] for linear systems.
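The equivalence with Conjugate Gradients can be demonstrated end to end. The following sketch (illustrative NumPy code for the real-valued case, so all conjugates drop; the early-exit MSE test of Table 1 is omitted, and this is not the authors' code) implements the c_first/c_last recursion of Equations (51) and (52) together with the Lanczos steps, and compares the result with D steps of plain Conjugate Gradients and with the direct Krylov subspace solution of Equation (31):

```python
import numpy as np

def lanczos_msnwf(R, r, D):
    """Rank-D MSNWF via the recursion of Table 1 (real-valued sketch)."""
    norm_r = np.linalg.norm(r)
    t_prev = np.zeros_like(r)
    t = r / norm_r
    basis = [t]
    u = R @ t
    r_off = 0.0                                 # r_{i-1,i}
    r_diag = t @ u                              # r_{i,i}
    c_first = np.array([1.0 / r_diag])
    c_last = c_first.copy()
    for _ in range(D - 1):
        v = u - r_diag * t - r_off * t_prev     # Lanczos step, Eq. (41)
        r_off = np.linalg.norm(v)               # new r_{i-1,i}
        if r_off == 0.0:                        # Krylov subspace exhausted
            break
        t_prev, t = t, v / r_off
        basis.append(t)
        u = R @ t
        r_diag = t @ u                          # new r_{i,i}
        beta = 1.0 / (r_diag - r_off**2 * c_last[-1])            # Eq. (50)
        c_first = np.append(c_first, 0.0) + beta * c_last[0] * \
            np.append(r_off**2 * c_last, -r_off)                 # Eq. (51)
        c_last = beta * np.append(-r_off * c_last, 1.0)          # Eq. (52)
    return norm_r * np.column_stack(basis) @ c_first             # Table 1

def cg(R, r, steps):
    """Plain conjugate gradient for R w = r, started at w = 0."""
    w = np.zeros_like(r)
    res = r.copy()
    p = res.copy()
    for _ in range(steps):
        Rp = R @ p
        alpha = (res @ res) / (p @ Rp)
        w = w + alpha * p
        res_new = res - alpha * Rp
        p = res_new + ((res_new @ res_new) / (res @ res)) * p
        res = res_new
    return w

rng = np.random.default_rng(4)
N, D = 9, 4
A = rng.standard_normal((N, N))
R = A @ A.T + 0.5 * np.eye(N)       # stands in for R_x0
r = rng.standard_normal(N)          # stands in for r_x0,d0

w_table = lanczos_msnwf(R, r, D)
w_cg = cg(R, r, D)
```

Both iterations produce the Galerkin solution in the D-dimensional Krylov subspace, so they agree up to rounding.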
A Wiener Filter with Pre-Filtering

The new observation signal after pre-filtering with a full-rank matrix T ∈ C^{N×N} reads as

z[n] = T x_0[n]  (54)

leading to a new mean square error

MSE_z = σ_{d_0}² − w_z^H r_{z,d_0} − r_{z,d_0}^H w_z + w_z^H R_z w_z,  (55)

choose desired MSE: σ_ε²
choose maximum dimension: D; d = D
t_0 = 0,  t_1 = r_{x_0,d_0} / ||r_{x_0,d_0}||_2
u = R_{x_0} t_1
r_{0,1} = 0,  r_{1,1} = t_1^H u
c_first^(1) = r_{1,1}^{−1},  c_last^(1) = r_{1,1}^{−1}
MSE^(1) = σ_{d_0}² − ||r_{x_0,d_0}||_2² c_first^(1)
for i = 2, ..., D
    if MSE^(i−1) < σ_ε² then d = i − 1; break
    v = u − r_{i−1,i−1} t_{i−1} − r_{i−2,i−1} t_{i−2}
    r_{i−1,i} = ||v||_2
    if r_{i−1,i} = 0 then d = i − 1; break
    t_i = v / r_{i−1,i}
    u = R_{x_0} t_i
    r_{i,i} = t_i^H u
    β_i = (r_{i,i} − |r_{i−1,i}|² c_last,i−1^(i−1))^{−1}
    c_first^(i) = [c_first^(i−1); 0] + β_i c_last,1^(i−1),* [|r_{i−1,i}|² c_last^(i−1); −r_{i−1,i}]
    c_last^(i) = β_i [−r_{i−1,i} c_last^(i−1); 1]
    MSE^(i) = σ_{d_0}² − ||r_{x_0,d_0}||_2² c_first,1^(i)
T^(d) = [t_1, ..., t_d]
w_0^(d) = ||r_{x_0,d_0}||_2 T^(d) c_first^(d)

Table 1: Lanczos MSNWF

where the covariance matrix of the new observation signal z[n] can be expressed as

R_z = E{z[n] z^H[n]} = T R_{x_0} T^H  (56)

and the new crosscorrelation vector is

r_{z,d_0} = E{z[n] d_0^*[n]} = T r_{x_0,d_0}.  (57)

If we apply the resulting Wiener filter (cf. Equation 8)

w_z = R_z^{−1} r_{z,d_0} = T^{−H} R_{x_0}^{−1} T^{−1} T r_{x_0,d_0}  (58)

to the new observation z[n] to get the new estimate

\hat d_{0,z}[n] = w_z^H z[n] = r_{x_0,d_0}^H R_{x_0}^{−1} T^{−1} T x_0[n] = w_0^H x_0[n],  (59)

we observe that \hat d_{0,z}[n] = \hat d_0[n] and, thus, the estimate is unchanged. Consequently, the new minimum mean square error

MMSE_z = σ_{d_0}² − w_z^H R_z w_z = σ_{d_0}² − r_{x_0,d_0}^H T^H T^{−H} R_{x_0}^{−1} T^{−1} T r_{x_0,d_0}  (60)

is the same as before (cf. Equation 9), which completes the proof of Theorem 1.

B Basis of MSNWF is Krylov Subspace

The Lagrange function for the optimization in Equation (33) reads as

L(t, λ_1, ..., λ_i) = (1/2)(t^H R_{x_0} t_{i−1} + t_{i−1}^H R_{x_0} t) − Σ_{k=1}^{i−1} λ_k t^H t_k − λ_i (t^H t − 1).  (61)

The derivative with respect to the complex conjugate of t must be zero; thus,

∂L(t, λ_1, ..., λ_i)/∂t^* = (1/2) R_{x_0} t_{i−1} − Σ_{k=1}^{i−1} λ_k t_k − λ_i t = 0.  (62)

Because t is orthogonal to t_k, k = 1, ..., i−1, and the filters t_k are orthonormal, i.e., t_k^H t_l = δ_{k,l}, the Lagrange multipliers can be expressed as

λ_k = (1/2) t_k^H R_{x_0} t_{i−1}, k = 1, ..., i−1,  (63)

and

λ_i t = (1/2) R_{x_0} t_{i−1} − (1/2) Σ_{k=1}^{i−1} t_k^H R_{x_0} t_{i−1} t_k = (1/2) (1_N − Σ_{k=1}^{i−1} t_k t_k^H) R_{x_0} t_{i−1}.  (64)

If we rewrite this expression by substituting the sum with a product of projection matrices (cf. Equation 35), we end up with

t = (1/(2λ_i)) (∏_{k=1}^{i−1} P_k) R_{x_0} t_{i−1}  (65)

and since λ_i has to be chosen to give a unit norm vector, we proved the result in Equation (34).

C Tri-Diagonalization with Arnoldi Algorithm

The recursion formula of Theorem 2 in Equation (37) can be rewritten to get

h_{i+1,i} t_{i+1} = (∏_{k=1}^{i} P_k) A t_i,  (66)

where we introduced the abbreviation

h_{i+1,i} = ||(∏_{k=1}^{i} P_k) A t_i||_2.  (67)

If the product of the projectors P_k is converted to a sum and the orthogonality of the t_i is considered, Equation (66) reads as

h_{i+1,i} t_{i+1} = A t_i − Σ_{k=1}^{i} t_k^H A t_i t_k  (68)

and by substituting

h_{k,i} = t_k^H A t_i  (69)

we end up with

A t_i = h_{i+1,i} t_{i+1} + Σ_{k=1}^{i} h_{k,i} t_k,  (70)

which is the i-th column of the matrix equality in Equation (38).

References

[AC76] S. P. Applebaum and D. J. Chapman. Adaptive Arrays with Main Beam Constraints. IEEE Transactions on Antennas and Propagation, 24(5), September 1976.

[Arn51] W. E. Arnoldi. The Principle of Minimized Iterations in the Solution of the Matrix Eigenvalue Problem. Quarterly of Applied Mathematics, 9(1):17–29, January 1951.

[BR89] K. A. Byerly and R. A. Roberts. Output Power Based Partial Adaptive Array Design. In Proc. 23rd Asilomar Conference on Signals, Systems & Computers, volume 2, October 1989.

[EY36] C. Eckart and G. Young. The Approximation of One Matrix by Another of Lower Rank. Psychometrika, 1(3):211–218, September 1936.

[GGDR98] J. S. Goldstein, J. R. Guerci, D. E. Dudgeon, and I. S. Reed. Theory of Signal Representation. Submitted to IEEE Transactions on Signal Processing, 1998.

[GGR99] J. S. Goldstein, J. R. Guerci, and I. S. Reed. An Optimal Generalized Theory of Signal Representation. In Proc. ICASSP 99, volume 3, March 1999.

[GJ82] L. J. Griffiths and C. W. Jim. An Alternative Approach to Linearly Constrained Adaptive Beamforming. IEEE Transactions on Antennas and Propagation, 30(1), January 1982.

[GR97a] J. S. Goldstein and I. S. Reed. A New Method of Wiener Filtering and its Application to Interference Mitigation for Communications. In Proc. MILCOM 1997, volume 3, November 1997.

[GR97b] J. S. Goldstein and I. S. Reed. Subspace Selection for Partially Adaptive Sensor Array Processing. IEEE Transactions on Aerospace and Electronic Systems, 33(2), April 1997.

[GRS98] J. S. Goldstein, I. S. Reed, and L. L. Scharf. A Multistage Representation of the Wiener Filter Based on Orthogonal Projections. IEEE Transactions on Information Theory, 44(7), November 1998.

[GRZ99] J. S. Goldstein, I. S. Reed, and P. A. Zulch. Multistage Partially Adaptive STAP CFAR Detection Algorithm. IEEE Transactions on Aerospace and Electronic Systems, 35(2):645–661, April 1999.

[HG00] M. L. Honig and J. S. Goldstein. Adaptive Reduced-Rank Interference Suppression Based on the Multistage Wiener Filter. Submitted to IEEE Transactions on Communications, March 2000.

[Hot33] H. Hotelling. Analysis of a Complex of Statistical Variables into Principal Components. Journal of Educational Psychology, 24(6/7):417–441, 498–520, September/October 1933.

[HS52] M. R. Hestenes and E. Stiefel. Methods of Conjugate Gradients for Solving Linear Systems. Journal of Research of the National Bureau of Standards, 49(6):409–436, December 1952.

[HX99] M. L. Honig and W. Xiao. Performance of Reduced-Rank Linear Interference Suppression for DS-CDMA. Submitted to IEEE Transactions on Information Theory, December 1999.

[KBP98] A. Kansal, S. N. Batalama, and D. A. Pados. Adaptive Maximum SINR RAKE Filtering for DS-CDMA Multipath Fading Channels. IEEE Journal on Selected Areas in Communications, 16(9), December 1998.

[Lan50] C. Lanczos. An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators. Journal of Research of the National Bureau of Standards, 45(4):255–282, October 1950.

[Lan52] C. Lanczos. Solution of Systems of Linear Equations by Minimized Iterations. Journal of Research of the National Bureau of Standards, 49(1):33–53, July 1952.

[LUN99] P. Li, W. Utschick, and J. A. Nossek. A New DS-CDMA Interference Cancellation Scheme for Space-Time Rake Processing. In Proc. European Wireless 99 & ITG Mobile Communications, October 1999.

[MW95] R. N. McDonough and A. D. Whalen. Detection of Signals in Noise. Academic Press, 1995.

[PB99] D. A. Pados and S. N. Batalama. Joint Space-Time Auxiliary-Vector Filtering for DS/CDMA Systems with Antenna Arrays. IEEE Transactions on Communications, 47(9), September 1999.

[PLB99] D. A. Pados, F. J. Lombardo, and S. N. Batalama. Auxiliary-Vector Filters and Adaptive Steering for DS/CDMA Single-User Detection. IEEE Transactions on Vehicular Technology, 48(6), November 1999.

[Saa96] Y. Saad. Iterative Methods for Sparse Linear Systems. PWS Publishing, 1996. Out of print; available online at saad/books.html.

[Sch91] L. L. Scharf. Statistical Signal Processing. Addison-Wesley, 1991.

[SW99] M. K. Schneider and A. S. Willsky. Krylov Subspace Estimation. Submitted to SIAM Journal on Scientific Computing, June 1999.

[vV00] H. A. van der Vorst. Krylov Subspace Iteration. Computing in Science & Engineering, 2(1):32-37, January/February 2000.
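The three-term recursion derived in the appendix is exactly the Arnoldi/Lanczos iteration, and it is straightforward to check numerically. The sketch below is illustrative only (it is not part of the paper, and all variable names are made up): it builds the basis vectors t_i for a random Hermitian matrix A, a stand-in for the covariance matrix of the observation, via full Arnoldi orthogonalization, then verifies Equation (70) and confirms that, because A is Hermitian, the Hessenberg matrix of the h_{k,i} is tridiagonal, which is precisely the simplification that lets the Lanczos algorithm replace the Arnoldi algorithm.

```python
import numpy as np

# Numerical check of the appendix recursion, Equation (70):
#   A t_i = h_{i+1,i} t_{i+1} + sum_{k=1}^{i} h_{k,i} t_k
# Indices below are 0-based, so t[0] plays the role of t_1.
rng = np.random.default_rng(0)

n, m = 8, 5                         # ambient dimension, number of steps
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B @ B.conj().T                  # Hermitian stand-in for the covariance matrix

t = [rng.standard_normal(n) + 1j * rng.standard_normal(n)]
t[0] /= np.linalg.norm(t[0])        # stand-in for the normalized cross-correlation vector

H = np.zeros((m + 1, m), dtype=complex)   # Hessenberg matrix of the h_{k,i}
for i in range(m):
    w = A @ t[i]
    for k in range(i + 1):          # orthogonalize against all previous t_k (Arnoldi)
        H[k, i] = t[k].conj() @ w   # h_{k,i} = t_k^H A t_i
        w -= H[k, i] * t[k]
    H[i + 1, i] = np.linalg.norm(w)
    t.append(w / H[i + 1, i])

# Equation (70), i.e. one column of the matrix equality in Equation (38):
i = m - 1
lhs = A @ t[i]
rhs = H[i + 1, i] * t[i + 1] + sum(H[k, i] * t[k] for k in range(i + 1))
print(np.allclose(lhs, rhs))              # True

# Because A is Hermitian, H is tridiagonal: h_{k,i} vanishes for k < i - 1,
# so the loop above can be replaced by the cheaper three-term Lanczos recursion.
print(np.allclose(H[: m - 2, m - 1], 0))  # True
```

The same check works with the sample covariance matrix of an observed signal in place of the random A; only the Hermitian property is used.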


More information

The Press-Schechter mass function

The Press-Schechter mass function The Press-Schechter mass function To state the obvious: It is important to relate our theories to what we can observe. We have looke at linear perturbation theory, an we have consiere a simple moel for

More information

The canonical controllers and regular interconnection

The canonical controllers and regular interconnection Systems & Control Letters ( www.elsevier.com/locate/sysconle The canonical controllers an regular interconnection A.A. Julius a,, J.C. Willems b, M.N. Belur c, H.L. Trentelman a Department of Applie Mathematics,

More information

The derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x)

The derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x) Y. D. Chong (2016) MH2801: Complex Methos for the Sciences 1. Derivatives The erivative of a function f(x) is another function, efine in terms of a limiting expression: f (x) f (x) lim x δx 0 f(x + δx)

More information

PCA. Principle Components Analysis. Ron Parr CPS 271. Idea:

PCA. Principle Components Analysis. Ron Parr CPS 271. Idea: PCA Ron Parr CPS 71 With thanks to Tom Mitchell Principle Components Analysis Iea: Given ata points in - imensional space, project into lower imensional space while preserving as much informakon as possible

More information

Some vector algebra and the generalized chain rule Ross Bannister Data Assimilation Research Centre, University of Reading, UK Last updated 10/06/10

Some vector algebra and the generalized chain rule Ross Bannister Data Assimilation Research Centre, University of Reading, UK Last updated 10/06/10 Some vector algebra an the generalize chain rule Ross Bannister Data Assimilation Research Centre University of Reaing UK Last upate 10/06/10 1. Introuction an notation As we shall see in these notes the

More information