Performance of Reduced-Rank Linear Interference Suppression

Michael L. Honig and Weimin Xiao
Dept. of Electrical & Computer Engineering, Northwestern University, Evanston, IL 60208

January 2001

Abstract: The performance of reduced-rank linear filtering is studied for the suppression of multiple-access interference. A reduced-rank filter resides in a lower-dimensional space, relative to the full-rank filter, which enables faster convergence and tracking. We evaluate the large system output Signal-to-Interference plus Noise Ratio (SINR) as a function of filter rank D for the Multi-Stage Wiener Filter (MSWF) presented by Goldstein and Reed. The large system limit is defined by letting the number of users, K, and the number of dimensions, N, tend to infinity with K/N fixed. For the case where all users are received with the same power, the reduced-rank SINR converges to the full-rank SINR as a continued fraction. An important conclusion from this analysis is that the rank D needed to achieve a desired output SINR does not scale with system size. Numerical results show that D = 8 is sufficient to achieve near full-rank performance even under heavy loads (K/N = 1). We also evaluate the large system output SINR for other reduced-rank methods, namely Principal Components and Cross-Spectral, which are based on an eigen-decomposition of the input covariance matrix, and Partial Despreading. For those methods, the large system limit lets D → ∞ with D/N fixed. Our results show that for large systems the MSWF allows a dramatic reduction in rank relative to the other techniques considered.

Keywords: Interference suppression, large system analysis, multiuser detection, reduced-rank filters

This work was supported by the U.S. Army Research Office under grant DAAH

1 Introduction

Reduced-rank filtering and estimation have been proposed for numerous signal processing applications such as array processing, radar, model order reduction, and quantization (e.g., see [1, 2, 3, 4] and references therein). A reduced-rank estimator may require relatively little observed data to produce an accurate approximation of the optimal filter. In this paper we study the performance of reduced-rank linear filters for the suppression of multiple-access interference.

Reduced-rank linear filtering has recently been applied to interference suppression in Direct-Sequence (DS) Code-Division Multiple Access (CDMA) systems [5, 6, 7, 8, 9, 10]. Although conventional adaptive filtering algorithms can be used to estimate the linear Minimum Mean Squared Error (MMSE) detector, assuming short, or repeated, spreading codes [11], the performance may be inadequate when a large number of filter coefficients must be estimated. For example, a conventional implementation of a time-domain adaptive filter which spans three symbols for proposed Third-Generation wideband DS-CDMA cellular systems can have over 300 coefficients. Introducing multiple antennas for additional space-time interference suppression capability exacerbates this problem. Adapting such a large number of filter coefficients is hampered by very slow response to changing interference and channel conditions.

In a reduced-rank filter, the received signal is projected onto a lower-dimensional subspace, and the filter optimization then occurs within this subspace. This has the advantage of reducing the number of filter coefficients to be estimated. However, by adding this subspace constraint, the overall MMSE may be higher than that achieved by a full-rank filter. Much of the previous work on reduced-rank interference suppression has been based on "Principal Components", in which the received vector is projected onto an estimate of the lower-dimensional signal subspace with largest energy (e.g., [8, 12]).
This technique can improve convergence and tracking performance when the number of degrees of freedom (e.g., the CDMA processing gain) is much larger than the dimension of the signal subspace. This assumption, however, does not hold in a heavily loaded commercial cellular system.

Our main contribution is to characterize the performance of the reduced-rank Multi-Stage Wiener Filter (MSWF) presented by Goldstein and Reed [13, 14]. This technique has the important property that the filter rank (i.e., the dimension of the projected subspace) can be much less than the dimension of the signal subspace without compromising performance. Furthermore, adaptive estimation of the optimal filter coefficients does not require an eigen-decomposition of the input (sample) covariance matrix. Adaptive interference suppression algorithms based on the MSWF are presented in [9].

Our performance evaluation is motivated by the large-system analysis for DS-CDMA with random spreading sequences introduced in [15, 16, 17]. Specifically, let K be the number of users, N the number of available dimensions (e.g., chips per coded symbol in CDMA, or the number of receiver antennas in a narrowband system), and D the subspace dimension. We evaluate the Signal-to-Interference Plus Noise

Ratio (SINR) at the output of the MSWF as K, N → ∞ with K/N fixed. For the case where all users are received with the same power, we obtain a closed-form expression for the output SINR as a function of D, K/N, and the background noise variance. As D increases, this expression, a continued fraction, rapidly converges to the full-rank large-system SINR derived in [17]. The MSWF therefore has the surprising property that the dimension D needed to obtain a target SINR (e.g., within a small ε of the full-rank SINR) does not scale with the system size (i.e., K and N). Our results show that for moderate to heavy loads, a rank D = 8 filter essentially achieves full-rank performance, and the SINR for a rank D = 4 filter is within one dB of the full-rank SINR.

We also evaluate the large-system performance of the reduced-rank MSWF given an arbitrary power distribution. A byproduct of this analysis is a method for computing the full-rank large-system SINR which does not explicitly make use of the asymptotic eigenvalue distribution for the class of random matrices derived in [18], and used in [15, 16, 17].

Finally, we compare the large-system performance of the MSWF with the following reduced-rank techniques: (1) Principal Components, (2) Cross-Spectral [19, 20], and (3) Partial Despreading [21]. (See also [26].) The Cross-Spectral method is based on an eigen-decomposition of the input covariance matrix but, unlike Principal Components, selects the basis vectors which minimize the MSE. Partial Despreading refers to a relatively simple reduced-rank technique in which the subspace is spanned by nonoverlapping segments of the matched filter. In contrast with the MSWF, the large-system analysis of the latter techniques lets D, K, N → ∞ with both K/N and D/K fixed. That is, to achieve a target SINR near the full-rank large-system limit, D → ∞ as K, N → ∞.
For the case where all users are received with the same power, we obtain closed-form expressions which accurately predict the output SINR as a function of K/N, D/K, and the noise variance.

In the next two sections we present the system model and the reduced-rank techniques considered. In Section 4 we briefly review large system analysis. Our main results are presented in Section 5, and numerical examples are presented in Section 6. Proofs and derivations are given in Section 7.

2 System Model

Let r(i) be the (N × 1) received vector corresponding to the ith transmitted symbol. For example, the elements of r(i) may be samples at the output of a chip-matched filter (for CDMA) or across an antenna array. We assume that

    r(i) = S A b(i) + n(i)                                                    (1)

where

    S = [s_1, ..., s_K]                                                       (2)

is the N × K matrix of signature sequences, where N is the number of dimensions (e.g., processing gain or number of antennas), K is the number of users, and s_k is the signature sequence for user k. The amplitude matrix A = diag(√P_1, ..., √P_K), where P_k is the power for user k, b(i) is the (K × 1) vector of symbols across users at time i, and n(i) is the noise vector, which has covariance matrix σ² I. We assume that the symbol variance is one for all users, and that all vectors are complex-valued.

In what follows we assume that user 1 is the desired user. The MMSE receiver consists of the filter represented by the vector c, which is chosen to minimize the MSE

    M = E{ |b_1(i) − c^† r(i)|² }                                             (3)

where E{·} denotes expectation and ^† denotes Hermitian transpose. The MMSE solution is [11]

    c = R^{-1} s_1                                                            (4)

where the N × N covariance matrix

    R = E[r(i) r^†(i)] = S P S^† + σ² I                                       (5)

with P = A A^†, and the (full-rank) MMSE is

    M = 1 − s_1^† R^{-1} s_1                                                  (6)

Let the N × (K−1) matrix of spreading codes for the interferers be

    S_I = [s_2, ..., s_K]                                                     (7)

and

    r_I(i) = S_I A_I b_{2:K}(i) + n(i)                                        (8)

where A_I is the diagonal matrix of interference amplitudes, and x_{l:m} denotes components l through m of the vector x. The interference-plus-noise covariance matrix is

    R_I = E[r_I(i) r_I^†(i)] = S_I A_I A_I^† S_I^† + σ² I                     (9)

The output Signal-to-Interference-Plus-Noise Ratio (SINR) of the MMSE filter is

    β = P_1 s_1^† R_I^{-1} s_1                                                (10)

where P_1 is the received power for user 1.

3 Reduced-Rank Linear Filtering

A reduced-rank filter reduces the number of coefficients to be estimated by projecting the received vector onto a lower-dimensional subspace [1, Sec. 8.4], [12]. Specifically, let M_D be the N × D matrix with column vectors forming a basis for a D-dimensional subspace, where D < N. The projected received vector corresponding to this subspace is given by

    r~(i) = (M_D^† M_D)^{-1} M_D^† r(i)                                       (11)
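The model (1)-(11) is easy to exercise numerically. The following minimal sketch (not from the paper; the parameter values and the use of real binary signatures are illustrative assumptions) forms the full-rank MMSE filter (4) and evaluates the output SINR (10):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 32, 16                    # dimensions (processing gain) and users
sigma2 = 0.1                     # noise variance sigma^2
P = np.ones(K)                   # received powers P_k (equal power here)

# Random binary signatures, normalized so ||s_k|| = 1 (columns of S).
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)

# Covariance matrices (5) and (9); R_I excludes the desired user.
R = S @ np.diag(P) @ S.T + sigma2 * np.eye(N)
S_I = S[:, 1:]
R_I = S_I @ np.diag(P[1:]) @ S_I.T + sigma2 * np.eye(N)

s1 = S[:, 0]
c = np.linalg.solve(R, s1)       # MMSE filter (4): c = R^{-1} s_1

# Output SINR (10): beta = P_1 s_1^H R_I^{-1} s_1
beta_mmse = P[0] * s1 @ np.linalg.solve(R_I, s1)
# Matched-filter SINR for comparison (filter c = s_1)
beta_mf = P[0] * (s1 @ s1) ** 2 / (s1 @ R_I @ s1)
print(beta_mmse, beta_mf)
```

Since R^{-1} s_1 is a scaled version of R_I^{-1} s_1, the SINR attained by c coincides with (10), and it can never fall below the matched-filter SINR.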

In what follows, a "tilde" denotes a (reduced-rank) D-dimensional vector or a D × D covariance matrix. The sequence of vectors {r~(i)} is the input to a tapped-delay-line filter, represented by the (D × 1) vector c~. The filter output corresponding to the ith transmitted symbol is z(i) = c~^† r~(i), and the objective is to select c~ to minimize the reduced-rank MSE

    M_D = E{ |b_1(i) − c~^† r~(i)|² }                                         (12)

The solution is

    c~ = R~^{-1} s~_1                                                         (13)

where

    R~ = (M_D^† M_D)^{-1} (M_D^† R M_D) (M_D^† M_D)^{-1}                      (14)

and

    s~_1 = (M_D^† M_D)^{-1} M_D^† s_1                                         (15)

Defining R~_I in the obvious way, the output SINR is given by

    β_D = P_1 s~_1^† R~_I^{-1} s~_1 = P_1 s_1^† M_D (M_D^† R_I M_D)^{-1} M_D^† s_1    (16)

In what follows we present the reduced-rank filters of interest. We remark that other reduced-rank methods have been proposed in [5, 0, 3, 4, 4, 5]. (The auxiliary vector method proposed in [10] generates the same D-dimensional subspace as the MSWF.) A simulation study of adaptive versions of the eigen-decomposition and partial despreading interference suppression methods described here is presented in [26].

3.1 Multi-Stage Wiener Filter (MSWF)

A block diagram showing four stages of the MSWF is shown in Figure 1. The stages are associated with the sequence of nested filters c_1, ..., c_D, where D is the order of the filter. Let B_m denote a blocking matrix, i.e.,

    B_m^† c_m = 0                                                             (17)

Referring to Figure 1, let d_m(i) denote the output of the filter c_m, and r_m(i) the output of the blocking matrix B_m, both at time i. Then the filter for the (m+1)st stage is determined by

    c_{m+1} = E[d_m^* r_m]                                                    (18)

where ^* denotes complex conjugate. For m = 0, we have d_0(i) = b_1(i) (the desired input symbol), r_0(i) = r(i), and c_1 is the matched filter s_1. Here we assume that each blocking matrix B_m is N × N, so that each vector c_m is N × 1. As in [14], it will be convenient to normalize the filters c_1, ..., c_D so that ||c_m|| = 1.

The filter output is obtained by linearly combining the outputs of the filters c_1, ..., c_D via the weights w_1, ..., w_D. This is accomplished stage-by-stage. Referring to Figure 1, let

    ε_m(i) = d_m(i) − w_{m+1} ε_{m+1}(i)                                      (19)

for 1 ≤ m ≤ D−1, and ε_D(i) = d_D(i). Then w_{m+1} is selected to minimize E[|ε_m|²].

[Figure 1: Multi-Stage Wiener Filter. Block diagram of four stages, with filters c_1, ..., c_4, blocking matrices B_1, B_2, B_3, outputs d_0(i) = b_1(i), d_1(i), ..., d_4(i), and error signals ε_m(i) combined via the weights w_1, ..., w_4.]

The rank D MSWF is given by the following set of recursions.

Initialization:

    d_0(i) = b_1(i),   r_0(i) = r(i)                                          (20)

For n = 1, ..., D (Forward Recursion):

    c_n = E[d_{n−1}^* r_{n−1}(i)] / ||E[d_{n−1}^* r_{n−1}]||                  (21)
    d_n(i) = c_n^† r_{n−1}(i)                                                 (22)
    B_n = I − c_n c_n^†                                                       (23)
    r_n(i) = B_n^† r_{n−1}(i)                                                 (24)

Decrement n = D, ..., 1 (Backward Recursion):

    w_n = E[d_{n−1}^*(i) ε_n(i)] / E[|ε_n(i)|²]                               (25)
    ε_{n−1}(i) = d_{n−1}(i) − w_n ε_n(i)                                      (26)

where ε_D(i) = d_D(i). The MSWF has the following properties:

1. At stage n the filter generates the desired (time) sequence {d_n(i)} and the "observation" sequence {r_n(i)}. The MSWF for estimating the former from the latter has the same structure at each stage. The full-rank MMSE filter can therefore be represented as an MSWF with n stages, where c_n is replaced by the MMSE filter for estimating d_{n−1}(i) from r_{n−1}(i).

2. It is shown in [14] that

    c_{i+1} = (I − c_i c_i^†) R_{i−1} c_i / ||(I − c_i c_i^†) R_{i−1} c_i||   (27)

where

    R_{i+1} = (I − c_{i+1} c_{i+1}^†) R_i (I − c_{i+1} c_{i+1}^†)             (28)

for i = 1, 2, ..., D−1, where c_1 = s_1 and R_0 = R. The following induction argument establishes that c_i is orthogonal to c_j for all j ≠ i. First, it is easily

verified from (27) that c_1 is orthogonal to c_2. Assume, then, that c_l is orthogonal to c_m for l, m ≤ i, l ≠ m. We can rewrite (28) and (27) as

    R_i = (I − Σ_{l=1}^{i} c_l c_l^†) R (I − Σ_{l=1}^{i} c_l c_l^†)           (29)

and

    c_{i+1} = κ_{i+1} (I − Σ_{l=1}^{i} c_l c_l^†) R (I − Σ_{l=1}^{i−1} c_l c_l^†) c_i
            = κ_{i+1} (I − Σ_{l=1}^{i} c_l c_l^†) R c_i
            = κ_{i+1} (I − Σ_{l=1}^{i} c_l c_l^†) R_I c_i                     (30)

where κ_{i+1} is a normalization constant, and the last equality holds since R = R_I + P_1 s_1 s_1^† and c_1 = s_1 is orthogonal to c_i. It can then be verified from (30) that c_{i+1} is orthogonal to c_1, ..., c_i, or equivalently, that c_l is orthogonal to c_m for l, m ≤ i+1, l ≠ m, which establishes the induction step. The relations (29) and (30) will be useful in what follows.

From Figure 1 it is easily seen that the matrix of basis vectors for the MSWF is given by

    M_D = [c_1  B_1 c_2  B_1 B_2 c_3  ...  (Π_{n=1}^{D−1} B_n) c_D] = [c_1 c_2 ... c_D]    (31)

where the last equality is due to the fact that the c_i's are orthogonal. (This implies that the blocking matrices in Figure 1 can be replaced by the identity matrix without affecting the variables d_n(i) and ε_n(i), n = 0, ..., D.) An alternate set of nonorthogonal basis vectors is given in the next section.

3. It is easily shown that each c_m is contained in the signal subspace, hence K stages are needed to form the full-rank filter.

4. It is shown in [14] that

    E[d_n^*(i) d_{n+m}(i)] = c_n^† R c_{n+m} = 0                              (32)

for |m| > 1 and 0 ≤ n, n+m ≤ D−1. It follows that M_D^† R M_D is tri-diagonal.

5. The blocking matrix B_m is not unique. (In [14], B_m is assumed to be an [N − (m−1)] × (N − m) matrix, so that c_n is [N − (n−1)] × 1.) Although any rank N − m matrix that satisfies (17) achieves the same performance (MMSE), this choice can affect the performance for a specific data record. In particular, a poor choice of blocking matrix can lead to numerical instability.

6. Computation of the MMSE filter coefficients does not require an estimate of the signal subspace, as do the eigen-decomposition techniques to be described. Successive filters are determined by "residual correlations" of signals in the preceding stage. Adaptive algorithms based on this technique are presented in [9].

3.2 Eigen-Decomposition Methods

The reduced-rank technique which has probably received the most attention is "Principal Components (PC)", which is based on the following eigen-decomposition of the covariance matrix:

    R = V Λ V^†                                                               (33)

where V is the orthonormal matrix of eigenvectors of R and Λ is the diagonal matrix of eigenvalues. Suppose that the eigenvalues are ordered as λ_1 ≥ λ_2 ≥ ... ≥ λ_N. For given subspace dimension D, the projection matrix for PC is M_D = V_{1:D}, the first D columns of V. For K < N, the eigenvalues λ_1, ..., λ_K are associated with the signal subspace, and the remaining eigenvalues are associated with the noise subspace, i.e., λ_m = σ² for K < m ≤ N. Consequently, by selecting D ≥ K, PC retains full-rank MMSE performance (e.g., see [8, 6]). However, the performance can degrade quite rapidly as D decreases below K, since there is no guarantee that the associated subspace will retain most of the desired signal energy. This is especially troublesome in a near-far scenario, since for small D the subspace which contains most of the energy will likely correspond to the interference, and not the desired signal. We remark that in a heavily loaded cellular system, the dimension of the signal subspace may be near, or may even exceed, the number of dimensions available, in which case PC does not offer much of an advantage relative to conventional full-rank adaptive techniques.

An alternative to PC is to choose the set of D eigenvectors for the projection matrix which minimizes the MSE. Specifically, we can rewrite the MSE (6) in terms of reduced-rank variables as

    M = 1 − ||Λ^{-1/2} s~_1||²                                                (34)

The subspace that minimizes the MSE has basis vectors which are the eigenvectors of R associated with the D largest values of |s~_{1,k}|²/λ_k, where s~_{1,k}, the kth component of s~_1, is given by v_k^† s_1, and v_k is the kth column of V. (Note the inverse weighting by λ_k, in contrast with PC.) This technique, called "Cross-Spectral (CS)" reduced-rank filtering, was proposed in [19] and [20]. It can perform well for D < K since it takes into account the energy in the subspace contributed by the desired user. Unlike PC, the projection subspace for CS requires knowledge of the desired user's spreading code s_1. A disadvantage of eigen-decomposition techniques in general is the complexity associated with the estimation of the signal subspace.
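The PC and CS selections described above can be compared directly for finite N. In the following minimal sketch (dimensions, powers, and the near-far profile are illustrative assumptions, not taken from the paper), both methods pick D eigenvectors of R, and the output SINR is evaluated via (16):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, sigma2, D = 32, 24, 0.1, 8
powers = np.ones(K)
powers[1:6] = 25.0                     # a few strong interferers: near-far profile
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
s1 = S[:, 0]
R = S @ np.diag(powers) @ S.T + sigma2 * np.eye(N)
R_I = R - powers[0] * np.outer(s1, s1)

lam, V = np.linalg.eigh(R)             # eigenvalues in ascending order
lam, V = lam[::-1], V[:, ::-1]         # reorder so lambda_1 >= ... >= lambda_N

def reduced_rank_sinr(M_D):
    # Output SINR (16): P_1 s_1^H M_D (M_D^H R_I M_D)^{-1} M_D^H s_1
    z = M_D.T @ s1
    return powers[0] * z @ np.linalg.solve(M_D.T @ R_I @ M_D, z)

pc = reduced_rank_sinr(V[:, :D])       # PC: D largest eigenvalues
metric = (V.T @ s1) ** 2 / lam         # CS metric: |v_k^H s_1|^2 / lambda_k
cs = reduced_rank_sinr(V[:, np.argsort(metric)[::-1][:D]])
print(pc, cs)
```

Because CS maximizes the sum of |v_k^† s_1|²/λ_k over the chosen eigenvectors, its SINR is never below that of PC for the same D, and under a near-far profile the gap can be large.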

3.3 Partial Despreading

In this method, proposed for DS-CDMA in [21], the received signal is partially despread over consecutive segments of m chips, where m is a parameter. The partially despread vector has dimension D = ⌈N/m⌉, and is the input to the D-tap filter. Consequently, m = 1 corresponds to the full-rank MMSE filter, and m = N corresponds to the matched filter. The columns of M_D in this case are nonoverlapping segments of s_1, the signature for user 1, where each segment is of length m. Specifically, if N/m = D, the jth column of M_D is

    [M_D]_j' = [0 ... 0  s_{1,[(j−1)m+1 : jm]}  0 ... 0]'                     (35)

where 1 ≤ j ≤ D, prime (') denotes transpose, and there are (j−1)m zeros on the left and (D−j)m zeros on the right. This is a simple reduced-rank technique that allows a selection of MSE performance between that of the matched and full-rank MMSE filters by adjusting the number of adaptive filter coefficients.

4 Large System Analysis

Our main results, presented in the next section, are motivated by the large system results for synchronous CDMA with random signature sequences presented in [15, 16, 17]. Specifically, we evaluate the large system limit of the output SINR for the reduced-rank filters described in the preceding section when the signatures are chosen randomly. This limit is defined by letting the number of dimensions, N, and the number of users, K, tend to infinity with K/N = α held constant.

The large system results presented in [15, 16, 17], as well as some of the results presented here, make use of the limiting eigenvalue distribution of a class of random matrices. Let C_ij be an infinite matrix of i.i.d. complex-valued random variables with unit variance, and let P_i be a sequence of real-valued random variables (corresponding to user powers). Let S be an N × K matrix whose (i,j)th entry is C_ij/√N. Let P be a K × K diagonal matrix with diagonal entries P_1, ..., P_K. As K → ∞, we assume that the empirical distribution function of these entries converges almost surely in distribution to a deterministic limit F(·).
Let G_N(λ) denote the empirical distribution function of the eigenvalues of the Hermitian matrix S P S^†. It is shown in [18] that as N, K → ∞ with K/N → α > 0, G_N converges almost surely to a deterministic limit G. Let m_G(z) denote the Stieltjes transform of the limit distribution G,

    m_G(z) = ∫ dG(λ) / (λ − z)                                                (36)

for z ∈ C⁺ ≡ {z ∈ C : Im(z) > 0}. It is shown in [18] that

    m_G(z) = 1 / ( −z + α ∫ P dF(P) / (1 + P m_G(z)) )                        (37)

for all z ∈ C⁺. In what follows, we will denote the range of λ for which G(λ) is nonzero as [a(α), b(α)]. For an arbitrary distribution F, closed-form expressions for G, a(α), and b(α) do not exist. However, a closed-form expression for G is given in [17] for the case where F(x) = 1 for x > P and F(x) = 0 for x < P (i.e., all users are received with the same power). This will be used in Section 7 to derive some of our results.

The preceding result was used in [17] to derive the large system limit of the output SINR for the linear MMSE filter in a synchronous CDMA system. Specifically, let P_k denote the received power of user k, and P the received power of a random user, which has the limit distribution F(P). Let user 1 be the user of interest. It is shown in [17] that as K = αN → ∞, the (random) output SINR of the linear MMSE receiver for user 1 converges in probability to the deterministic limit

    β = P_1 / ( σ² + α ∫_0^∞ I(P, P_1, β) dF(P) )                             (38)

where

    I(P, P_1, β) = P P_1 / (P_1 + P β)                                        (39)

is the "effective interference" associated with an interferer received with power P. For the case where all users are received with the same power, (38) becomes

    β = P / ( σ² + α P / (1 + β) )                                            (40)

which yields a closed-form solution for β [17]. It will be convenient to denote this solution as

    β = B_U(α, P/σ²)                                                          (41)

Similarly, we will denote the solution to (38) for an arbitrary power distribution as

    β = B_F[α, P_1, σ², F(·)]                                                 (42)

Finally, we remark that the analogous large system limit for the matched filter is

    β^MF = P_1 / ( σ² + α ∫_0^∞ P dF(P) )                                     (43)

5 Main Results

In this section we present the large system limits of the output SINR for the reduced-rank filters presented in Section 3. Proofs and derivations are given in Section 7. For finite K and N the output SINR is a random variable due to the assignment of random signature sequences. For the MSWF and partial despreading, we are able to show, in analogy with the full-rank MMSE receiver, that the output SINR converges to a deterministic limit as K = αN → ∞. We conjecture that this is also true for the PC and CS methods.
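The full-rank limits (38)-(40) can be computed by simple fixed-point iteration starting from β = 0. A minimal sketch (the function and variable names, and the parameter values, are illustrative):

```python
import numpy as np

def full_rank_sinr(alpha, P1, sigma2, powers, weights, iters=200):
    """Solve (38): beta = P1 / (sigma2 + alpha * E[I(P, P1, beta)]),
    with effective interference (39) I(P, P1, beta) = P*P1 / (P1 + P*beta),
    by fixed-point iteration from beta = 0.  `powers`/`weights` give a
    discrete approximation of the power distribution F."""
    beta = 0.0
    for _ in range(iters):
        eff = np.sum(weights * powers * P1 / (P1 + powers * beta))
        beta = P1 / (sigma2 + alpha * eff)
    return beta

# Equal-power case: (38) reduces to (40), beta = P/(sigma2 + alpha*P/(1+beta)).
alpha, P, sigma2 = 0.5, 10.0, 1.0
beta = full_rank_sinr(alpha, P, sigma2, np.array([P]), np.array([1.0]))
# Residual of (40): should be ~0 at the fixed point.
resid = beta * (sigma2 + alpha * P / (1 + beta)) - P
print(beta, resid)
```

For the equal-power case the iteration can be checked against the closed-form positive root of the quadratic obtained by rearranging (40): σ²β² + (σ² + αP − P)β − P = 0.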

5.1 Multi-Stage Wiener Filter

We first state the large system SINR for the MSWF assuming that all users are received with the same power.

Theorem 1: As K = αN → ∞, the output SINR of the rank D MSWF converges in probability to the limit γ_D^MS, which satisfies

    γ_{D+1}^MS = P / ( σ² + α P / (1 + γ_D^MS) )   for D ≥ 0                  (44)

where P is the received power for each user, γ_0^MS = 0, and γ_1^MS = P/(σ² + αP) is the large system limit of the output SINR for the matched filter.

The proof is given in Section 7.1. According to this theorem, for finite D the output SINR of the MSWF can be expressed as a continued fraction. For example,

    γ_2^MS = P / ( σ² + α P / ( 1 + P/(σ² + αP) ) )

As D increases, this continued fraction converges to the full-rank MMSE solution given by (40). Two important consequences of this result are:

1. The dimension D needed to achieve a target SINR within some small ε of the full-rank SINR does not scale with the system size (K and N). This is in contrast with the other techniques considered, for which the large system output SINR is determined by the ratio D/N.

2. As D increases, γ_D^MS converges rapidly to the full-rank MMSE solution. Specifically, consider the case without background noise, σ² = 0. It can be shown that

    γ_D^MS = Σ_{n=1}^{D} α^{-n}                                               (45)

           = (α^{-D} − 1)/(1 − α)   for α < 1
           = D                      for α = 1                                 (46)
           = (1 − α^{-D})/(α − 1)   for α > 1

In particular, γ_D^MS increases exponentially with D for α < 1 and linearly with D for α = 1. If α > 1, then the gap between γ_D^MS and the full-rank performance 1/(α − 1) decreases exponentially. Numerical results to be presented in the next section indicate that for Signal-to-Noise Ratios (P/σ²) and loads (K/N) of interest, the full-rank MMSE performance is essentially achieved with D = 8.

We now consider the MSWF with an arbitrary power distribution. In this case we do not have a closed-form result for the large system SINR, although we can compute it numerically. In analogy with the uniform power case we also have the approximation

    γ_{D+1}^MS ≈ P_1 / ( σ² + α ∫_0^∞ [ P P_1 / (P_1 + P γ_D^MS) ] dF(P) )    (47)

where γ_0^MS = 0, γ_1^MS is the asymptotic SINR of the matched filter given by (43), and F(P) is an arbitrary power distribution. This approximation is accurate for many cases we have considered; however, in Appendix A we show that it is not exact.

To compute the output SINR for the MSWF with an arbitrary power distribution, we first give an alternate representation of the subspace spanned by the basis vectors, or columns of M_D. Let S_D denote the D-dimensional subspace associated with the rank D MSWF, which is spanned by the set of basis vectors given by (31), and let R_I denote the interference-plus-noise covariance matrix given by (9).

Theorem 2: The subspace S_D is spanned by the vectors y_0, ..., y_{D−1}, where y_n = R_I^n s_1.

The proof is given in Section 7.2. The matrix of basis vectors for the MSWF can therefore be written as

    M_D = [s_1  R_I s_1  R_I² s_1  ...  R_I^{D−1} s_1]                        (48)

It is straightforward to show that R_I can be replaced by R = E[r(i) r^†(i)] in Theorem 2. Approximations of the full-rank MMSE filter in terms of powers of the covariance matrix have also been considered in [8] and [9]. Let

    μ_m = s_1^† y_m = s_1^† R_I^m s_1                                         (49)

    μ_{l:m} = [μ_l, μ_{l+1}, ..., μ_m]'                                       (50)

    Γ_{l:l+m} = [μ_{l:l+m}  μ_{l+1:l+m+1}  ...  μ_{l+m:l+2m}]                 (51)

Note that Γ_{l:l+m} is an (m+1) × (m+1) matrix. From (13)-(15), the reduced-rank MSWF is

    c~_D = (M_D^† M_D) (M_D^† R M_D)^{-1} M_D^† s_1                           (52)

where

    M_D^† M_D = Γ_{0:D−1},   M_D^† R M_D = Γ_{1:D} + P_1 μ_{0:D−1} μ_{0:D−1}^†    (53)

Substituting into (12) and applying the matrix inversion lemma gives the reduced-rank MMSE

    M_D = 1 − P_1 μ_{0:D−1}^† (Γ_{1:D} + P_1 μ_{0:D−1} μ_{0:D−1}^†)^{-1} μ_{0:D−1}    (54)
        = 1 / (1 + β_D)                                                       (55)

where

    β_D = P_1 μ_{0:D−1}^† Γ_{1:D}^{-1} μ_{0:D−1}                              (56)

is the output SINR from (16). To compute the large system limit, we therefore need to compute the large system limit of μ_n, which is given by the following Lemma, where G is the asymptotic eigenvalue distribution of S_I P S_I^†.

Lemma 1: As K = αN → ∞, μ_n converges in probability to the limit

    μ_n^∞(α, σ²) = ∫ (λ + σ²)^n dG(λ)                                         (57)

provided that this moment is finite.

Proof: From (49) we have

    μ_n = Σ_{k=1}^{N} (λ_k + σ²)^n |s_1^† v_{k,I}|²                           (58)

where v_{k,I} is the kth eigenvector of R_I, and the sum can be restricted to the v_{k,I} in the signal subspace, since otherwise s_1^† v_{k,I} = 0. It is shown in [7] that |s_1^† v_{k,I}|² is O(1/N), and the Lemma follows from the same argument used to prove Lemma 4.3 in [7].

When all users have the same power P and σ² = 0, the limit can be evaluated explicitly as [7]

    μ_n^∞(α, 0) = P^n Σ_{k=0}^{n−1} [1/(k+1)] (n choose k) (n−1 choose k) α^{k+1},   n = 1, 2, ...    (59)

To compute μ_n^∞(α, σ²) for σ² > 0, we observe that

    d μ_n^∞ / d(σ²) = ∫ n (λ + σ²)^{n−1} dG(λ) = n μ_{n−1}^∞                  (60)

which leads to a recursive method for computing the sequence {μ_n^∞}. For a nonuniform power distribution, the large system limit μ_n^∞ can be computed directly from (57). In Appendix A we give two other methods for computing μ_n^∞, one of which does not make explicit use of the asymptotic eigenvalue distribution G.

We can now state the large system SINR for the rank D MSWF where the limiting power distribution is F(·). The superscript ∞ indicates the large system limit of the associated variable(s).

Theorem 3: As K = αN → ∞, the output SINR of the rank D MSWF converges in probability to

    β_D^MS = P_1 (μ_{0:D−1}^∞)^† (Γ_{1:D}^∞)^{-1} μ_{0:D−1}^∞                 (61)

Proof: This follows directly from Lemma 1 and the fact that β_D, given by (56), is a continuous and bounded function of μ_0, μ_1, ..., μ_{2D−1}.

As for the uniform power case, the dimension D needed to achieve a target SINR within some small constant ε of the full-rank SINR does not scale with K and N. Because this representation for the output SINR is not as transparent as that for the uniform power case, it is difficult to see how fast the SINR given by (61) converges to the full-rank value as D increases. Numerical examples are presented in Section 6, and indicate that, as for the uniform power case, full-rank performance is achieved for D < 10.

For large D, Theorem 3 gives an alternative to (38) as a method for computing the full-rank MMSE performance. This method does not require knowledge of the asymptotic eigenvalue distribution if the second method for computing the moments {μ_n^∞} presented in Appendix B is used. Finally, we remark that for a uniform power distribution, the SINR shown in Theorem 3 must be the same as the continued fraction representation in Theorem 1. Finding a direct proof of this equivalence appears to be an open problem.

5.2 Eigen-Decomposition Methods

We now state our results for the PC and CS reduced-rank methods presented in Section 3.2. In what follows, the large system limit is defined as D, K, N → ∞ with K/N = α and D/N = ρ. In particular, D now increases proportionally with K and N. As stated earlier, we conjecture that the SINR converges to a deterministic large system limit. The technical difficulty in proving this is the characterization of the large system limit of |v_n^† s_1|², where v_n is the nth eigenvector of the covariance matrix R corresponding to the particular ordering given in Section 3.2. Note, in particular, that v_n and s_1 are correlated. In order to proceed, we assume that the conjecture is true, and evaluate the corresponding large system limit. The numerical results in Section 6 show that the large system results are nearly identical to the corresponding simulation results. For PC, we are only able to evaluate the large system limit for the uniform (equal) power case. In what follows, G is the limit distribution for the eigenvalues of S S^† (see Section 7.3).

For both the PC and CS methods the output SINR for a rank D filter can be written as

    β_D = P_1 v_D / (1 − P_1 v_D)                                             (62)

where

    v_D = Σ_{n=1}^{D} |v_n^† s_1|² / (λ_n + σ²)                               (63)

and where λ_n + σ² and v_n are the nth eigenvalue and eigenvector, respectively, of the covariance matrix R. We first consider PC, for which λ_1 ≥ λ_2 ≥ ... ≥ λ_N. We show in Section 7.3 that

    lim_{K/N→α, D/N→ρ} E(v_D) = v^PC(α, ρ) = ∫_c^{b(α)} dG(λ) / (λ + σ²)      (64)

which admits a closed-form evaluation in terms of square roots and arcsine functions of a(α), b(α), c, and σ², where

    a(α) = (1 − √α)² P,   b(α) = (1 + √α)² P                                  (65)

and c is defined by G(c) = 1 − ρ, where, for the equal-power case, G has a closed-form expression (66) in terms of a(α), b(α), and arcsine functions, with an additional point mass (1 − α) 1{α < 1} at λ = 0. If v_D in (63) converges to a deterministic large system limit, then this limit must be v^PC(α, ρ). The large system limit for the output SINR is then

    β^PC = P v^PC(α, ρ) / (1 − P v^PC(α, ρ))                                  (67)

Numerical examples are presented in the next section, and show that, as expected, when ρ = α, the PC algorithm achieves full-rank performance. As ρ decreases below α, the performance degrades substantially. Although we do not have an analogous result for an arbitrary power distribution, we observe that for ρ ≥ α the PC algorithm again achieves full-rank performance, since the eigenvectors chosen for the projection include the signal subspace. As ρ decreases below α, in a near-far situation the performance can be substantially worse than that with equal received powers. For example, assume that there are two groups of users, where users in each group have the same power, but users in the first group transmit with much more power than users in the second group. (The groups may correspond to different services, such as voice and data.) In this situation, the eigenvalues corresponding to the signal subspace can also be roughly divided into two sets corresponding to the two user groups. If the desired user belongs to the second group, then its energy is mostly contained in the subspace spanned by eigenvectors associated with the small eigenvalues. Consequently, PC will choose a subspace which contains little energy from the desired user, resulting in poor performance.

The CS method, described in Section 3.2, performs better than the PC method for ρ < α, since it accounts for the projection of the desired user's spreading sequence onto the selected subspace. The output SINR is again given by (62) and (63), where the ordering of eigenvalues and eigenvectors corresponds to decreasing values of |v_n^† s_1|² / (λ_n + σ²).
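For the equal-power case, the support endpoints (65) can be checked against the eigenvalues of a finite random instance. A minimal numerical sketch (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 400, 1.0
alpha = 0.5
K = int(alpha * N)

# Equal-power signal-part eigenvalues: eigenvalues of P * S S^H,
# with S_ij = +-1/sqrt(N) random binary signatures.
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
lam = np.linalg.eigvalsh(P * (S @ S.T))
signal_lam = np.sort(lam)[-K:]           # K nonzero (signal-subspace) eigenvalues

# Support endpoints (65): a(alpha) = (1 - sqrt(alpha))^2 P,
#                         b(alpha) = (1 + sqrt(alpha))^2 P
a = (1 - np.sqrt(alpha)) ** 2 * P
b = (1 + np.sqrt(alpha)) ** 2 * P
print(signal_lam.min(), a, signal_lam.max(), b)
```

Already for moderate N, the extreme signal-subspace eigenvalues cluster near a(α) and b(α), while the remaining N − K eigenvalues sit at zero (the point mass of weight 1 − α for α < 1).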
Let

    ψ_n = √K v_n^† s_1 / √(λ_n + σ²)                                          (68)

so that

    v_D = (1/K) Σ_{n=1}^{D} |ψ_n|²                                            (69)

As K = αN → ∞, numerical results indicate that the sequence {ψ_n} converges to a deterministic distribution H(ψ). Assuming this is true, it follows that as K = αN → ∞

and D/N → ρ,

    v_D → v^CS(α, ρ) = ∫_{|ψ| > c} |ψ|² dH(ψ)                                 (70)

where c is chosen so that a fraction D/K = ρ/α of the ψ_n satisfy |ψ_n| > c.

In what follows we assume that H(ψ) is zero-mean Gaussian. Justification for this assumption stems from the analysis in [30], where it is shown that if v_{n,I} is a randomly chosen eigenvector of the interference-plus-noise covariance matrix, then v_{n,I}^† s_1 is zero-mean Gaussian. (In that case, v_{n,I} and s_1 are independent.) It is shown in Section 7.3 that √K v_n^† s_1 has variance E[λ_n]. (Note that λ_n converges to a deterministic large system limit for fixed n/K.) Consequently, at high SNRs (σ² << λ_n for n ≤ D), E[|ψ_n|²] ≈ 1, independent of n. Further justification for this assumption is the close agreement between the large system analytical and simulation results shown in the next section.

With the Gaussian assumption for H(ψ),

    v^CS(α, ρ) = 2 ∫_c^∞ x² [1/√(2πξ²)] e^{−x²/(2ξ²)} dx                      (71)

where c satisfies Q(c/ξ) ≡ ∫_{c/ξ}^∞ (1/√(2π)) e^{−x²/2} dx = ρ/(2α), and

    ξ² = ∫ x² dH(x)                                                           (72)

which is the large system limit of E[|ψ_n|²], where n is chosen randomly according to a uniform distribution between one and K. In Section 7.4 it is shown that

    ξ² = β / [P_1 (1 + β)]                                                    (73)

where β is the full-rank SINR. When all users have the same power P, this yields the closed-form expression (74) for ξ² in terms of a(α), b(α), and σ². If v_D converges to a deterministic large system limit, then it must converge to v^CS(α, ρ), in which case β_D converges in probability to the corresponding limit

    β^CS = P v^CS(α, ρ) / (1 − P v^CS(α, ρ))                                  (75)

The numerical results shown in the next section are generated according to these assumptions.
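Under the Gaussian assumption, (70)-(71) reduce to a two-sided truncated second moment of a zero-mean Gaussian, which can be checked numerically. A minimal sketch (ξ = 1 corresponds to the high-SNR normalization E[|ψ_n|²] ≈ 1; all parameter values are illustrative):

```python
import numpy as np
from math import erfc, exp, pi, sqrt

def Q(t):                      # Gaussian tail: Q(t) = P(X > t), X ~ N(0,1)
    return 0.5 * erfc(t / sqrt(2.0))

alpha, rho = 0.5, 0.125        # load K/N and normalized rank D/N
xi = 1.0                       # standard deviation of H

# Solve Q(c/xi) = rho/(2*alpha) for the selection threshold c by bisection.
target = rho / (2 * alpha)
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Q(mid) > target else (lo, mid)
c = xi * 0.5 * (lo + hi)

# (71): v_CS = 2 * int_c^inf x^2 N(0, xi^2) dx.  The truncated second
# moment has the closed form 2*xi^2*(Q(t) + t*phi(t)), with t = c/xi.
t = c / xi
phi = exp(-t * t / 2) / sqrt(2 * pi)
v_cs = 2 * xi ** 2 * (Q(t) + t * phi)

# Monte Carlo check: average of |psi|^2 over the selected fraction rho/alpha.
samp = np.random.default_rng(4).normal(0.0, xi, 400_000)
v_mc = np.sort(samp ** 2)[-int(len(samp) * rho / alpha):].sum() / len(samp)
print(c, v_cs, v_mc)
```

The quadrature value and the Monte Carlo estimate of the selected-fraction energy agree closely, which is the internal consistency the Gaussian model requires.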

5.3 Partial Despreading (PD)

The output SINR for PD can be expressed in terms of the full-rank MMSE expressions B_U and B_F given by (41) and (42). The large system limit is obtained by despreading over M chips, where M is held constant, so that N/D = M = 1/ρ.

Theorem 4 Assume that the elements of S are i.i.d., zero-mean, and are selected from either a binary or Gaussian distribution. As K = αN → ∞ and D = N/M → ∞, the output SINR of the Partial Despreading (PD) MMSE filter converges in probability to the limit

β_PD(α, M) = B_F(αM, MP_1, Mσ²; F)   (76)

where M = 1/ρ. The proof is given in Section 7.5. If ρ = 0 (M → ∞), then β_PD is the large system limit for the matched filter output SINR given by (43). The large system limit of the output SINR for the MMSE PD filter with a uniform power distribution is

β_PD(α, M) = B_U(αM, MP, Mσ²)   (77)

6 Numerical Results

In this section, we present numerical results which illustrate the performance of the reduced-rank techniques considered. Simulation results for a CDMA system with finite N and random binary signature sequences are included for comparison with the large system limit. The latter results are averaged over random binary signature sequences and the received power distribution. Figure 2 shows plots of output SINR vs. rank D for the MSWF with different loads, assuming the background SNR is P/σ² = 10 dB and that all users are received with equal power. Also included are simulation results corresponding to N = 32. Figure 3 shows the analogous results for different SNRs and fixed load α = 1/2. These results show that the large system limit accurately predicts the simulated values. In all cases shown, the MSWF achieves essentially full-rank performance for D < 10. Furthermore, the SINR for D = 4 is within 1 dB of the full-rank SINR, and the SINR for D = 2 is approximately midway between the SINRs for the matched filter and full-rank MMSE receivers. Figure 4 shows simulated output SINR for the MSWF as a function of normalized rank D/N for N = 32, 64, and 128.
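The block-despreading construction of Section 5.3 can be sketched numerically (a sketch with invented parameters, not the paper's simulation setup). Each basis vector despreads one M-chip block of the desired user's sequence, so D = N/M; M = N recovers the matched filter and M = 1 recovers the full-rank MMSE filter:

```python
import numpy as np

# Sketch of the PD subspace (illustrative parameters): the m-th basis vector
# despreads the m-th block of M chips of s_1, so D = N/M.
rng = np.random.default_rng(2)
N, K, P, sigma2 = 32, 16, 1.0, 0.1
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
s1 = S[:, 0]
R = P * S @ S.T + sigma2 * np.eye(N)

def pd_sinr(M):
    D = N // M
    T = np.zeros((N, D))
    for m in range(D):                           # block despreading basis
        T[m * M:(m + 1) * M, m] = s1[m * M:(m + 1) * M]
    st = T.T @ s1
    v = P * st @ np.linalg.solve(T.T @ R @ T, st)
    return v / (1.0 - v)                         # output SINR in the PD subspace

beta_mf, beta_mid, beta_full = pd_sinr(N), pd_sinr(8), pd_sinr(1)
```

Since the PD subspaces are nested as M decreases (each M-chip block is a sum of finer blocks), the output SINR is nondecreasing from the matched filter toward the full-rank MMSE value.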
This illustrates the convergence to the large system limit, which is the full-rank performance for all values of D/N (shown as the solid line in the figure). Figure 5 shows output SINR vs. normalized rank D/K for the reduced-rank filters considered, assuming uniform (equal) power, SNR = 10 dB, and α = 1/2. For all four methods considered, the large system analysis accurately predicts the simulation results, which are shown for N = 32. As discussed previously, the large

Figure 2: Output SINR vs. rank D for the MSWF with different loads.

Figure 3: Output SINR vs. rank D for the MSWF with different SNRs.

system SINR for the MSWF as D → ∞ is the full-rank SINR for any D/N > 0. (Large system results for the MSWF corresponding to finite D are not shown.) Consequently, there is a large gap between the curve for the MSWF and the curves for the other methods for small D/N. The CS and PC reduced-rank filters can achieve the full-rank performance only when D ≥ K. For D < K, these results show that the CS filter performs much better than the PC filter. The PD filter can achieve the full-rank performance only when D = N, since for any D < N, the selected subspace S_D does not generally contain the MMSE solution. For small D/K the PD filter performs close to the matched filter, which is significantly better than the eigen-decomposition methods. This is because for the latter methods, the desired signal energy is spread over many eigenvectors, so that for small D, relatively little desired signal energy is retained in the selected subspace.

Figure 4: Output SINR vs. normalized rank D/N for the MSWF with different spreading gains.

Performance results for non-uniform power distributions are shown in Figures 6-8. Two distributions are considered: log-normal, and discrete with two powers. In the former case, the desired user has power P_1 = 1, and the log-variance of the log-normal distribution is 6 dB. In the latter case, P(1)/P(2) = 10 dB, where P(j) is the power associated with users in group j = 1, 2, and the fraction of high power users is 0. The desired user is assumed to be in group one with an SNR of 10 dB. The first case applies to the reverse link of an isolated cell where power control is used to compensate for shadowing. The second case corresponds to two service categories, such as data and voice, with perfect power control. Figure 6 compares the large system output SINR of the MSWF computed via the approximation (47) with the exact SINR computed from (6). Figures 7 and 8 show output SINR vs. normalized rank for the different reduced-rank filters considered. Simulation results are shown for N = 32 (and N = 5 in Figure 6). In these figures α = 0.5. Figures 7 and 8 show that all methods perform approximately the same as for the uniform power case, except for PC, which performs significantly worse for D < K.

7 Proofs and Derivations

7.1 Theorem 1: MSWF with Uniform Power

The proof is based on an induction argument in which the full-rank MSWF is partitioned into two component filters. The first filter consists of the first i − 1 stages, and the second filter consists of stages i through K (i.e., the full-rank filter which estimates d_{i−1} from r_{i−1}). We first consider the case i = 2 and prove that: (i) the Theorem is valid for D = i = 2, (ii) the large system SINR associated with the

Figure 5: Output SINR vs. normalized rank D/K for reduced-rank filters with equal received powers.

second component filter is the full-rank large system SINR, and (iii) the large system SINR associated with the filter c_1 (with appropriately defined desired signal and interference components) is equal to the large system SINR for the matched filter. For the induction step, we make the analogous assumptions (i)-(iii) for some i, where 2 ≤ i < K, and prove that (i)-(iii) hold for i + 1.

The rank-one MSWF is the matched filter c_1 = s_1, which has output

d_1 = c_1^†(b_1 s_1 + S_I b_I + n) = S_1 + I_1 + N_1   (78)

where S_1, I_1, and N_1 denote the corresponding desired signal, interference, and noise terms. The SINR at the output of c_1 is

β_1 = E[|S_1|²] / (E[|N_1|²] + E[|I_1|²]) → P/(σ² + αP)   (79)

where the arrow denotes the large system limit (K/N = α), the expectation is with respect to the transmitted symbols and noise, and the limit follows from the fact that E[|I_1|²] → αP. Let c_1^⊥ be a vector which is orthogonal to c_1. The output of c_1^⊥ is

d_1^⊥ = (c_1^⊥)^† r = (c_1^⊥)^† S_I b_I + (c_1^⊥)^† n   (80)

and it is easily shown that

E[(d_1^⊥)^* S_1] = 0   (81)
E[(d_1^⊥)^* N_1] = 0   (82)
E[(d_1^⊥)^* I_1] = P c_1^†(S_I S_I^†) c_1^⊥ = c_1^† R c_1^⊥   (83)

We now express d_1^⊥ as the sum of a desired signal component, interference, and noise,

d_1^⊥ = S_1^⊥ + I_1^⊥ + N_1^⊥   (84)

Figure 6: Output SINR vs. D for the MSWF with two groups of high- and low-power users. The large system approximation (47) is compared with the exact large system SINR computed from (6).

We define the desired signal as

S_1^⊥ = a_1 I_1   (85)

where a_1 minimizes E[|d_1^⊥ − a I_1|²]. That is,

S_1^⊥ = (E[(d_1^⊥)^* I_1] / E[|I_1|²]) I_1 = (c_1^† R c_1^⊥ / E[|I_1|²]) I_1   (86)

is the MMSE estimate of d_1^⊥ given I_1, so that by the orthogonality principle,

E[(S_1^⊥)^* (I_1^⊥ + N_1^⊥)] = 0   (87)

Given these definitions of the signal, interference, and noise, we associate an (output) SINR with the filter c_1^⊥, which is given by

β_1^⊥ = E[|S_1^⊥|²] / E[|I_1^⊥ + N_1^⊥|²] = E[|S_1^⊥|²] / (E[|d_1^⊥|²] − E[|S_1^⊥|²])   (88)
= (|c_1^† R c_1^⊥|² / E[|I_1|²]) / ((c_1^⊥)^† R c_1^⊥ − |c_1^† R c_1^⊥|² / E[|I_1|²])   (89)

where the expectation is again with respect to the transmitted symbols and noise. From (88) and (89) we have that

β_1^⊥ / (1 + β_1^⊥) = E[|S_1^⊥|²] / E[|d_1^⊥|²] = |c_1^† R c_1^⊥|² / (E[|I_1|²] (c_1^⊥)^† R c_1^⊥)   (90)

Figure 7: Output SINR vs. normalized rank D/K for reduced-rank filters with a log-normal received power distribution.

Now consider the filter c = c_1 + w_2 c_1^⊥, which has output d = d_1 + w_2 d_1^⊥. Since the output contains the desired signal S_1, the SINR associated with c is

β_c = P / (E[|d|²] − P)   (91)

Choosing w_2 to maximize the SINR, or equivalently, minimize the output energy E[|d|²], gives

w_2 = −E[(d_1^⊥)^* d_1] / E[|d_1^⊥|²] = −E[(d_1^⊥)^* I_1] / ((c_1^⊥)^† R c_1^⊥)   (92)

and

E[|d|²] = E[|d_1|²] − |E[(d_1^⊥)^* d_1]|² / E[|d_1^⊥|²] = c_1^† R c_1 − |c_1^† R c_1^⊥|² / ((c_1^⊥)^† R c_1^⊥)   (93)

Combining (90)-(93) gives

E[|d|²] = P + σ² + E[|I_1|²] / (1 + β_1^⊥) → P + σ² + αP / (1 + β_1^⊥)   (94)

β_c = P / (σ² + E[|I_1|²] / (1 + β_1^⊥))   (95)

Figure 8: Output SINR vs. normalized rank D/K for reduced-rank filters with two groups of high- and low-power users.

and letting K = αN → ∞ gives

β_c = P / (σ² + αP / (1 + β_1^⊥))   (96)

Now suppose that we choose c_1^⊥ = c_2, so that c is the rank-two MSWF. To prove the Theorem for D = 2, we must show that the large system SINR associated with c_1^⊥ = c_2 is

β_1^⊥ → P / (σ² + αP)   (97)

which is the large system SINR for the matched filter. We therefore let

c_1^⊥ = c_2 = (1/δ) P^⊥_{c_1}(R c_1)   (98)
= (P/δ) P^⊥_{c_1}[(S_I S_I^†) c_1]   (99)

where P^⊥_x(y) = y − ‖x‖^{−2}(x^† y)x is the projection of y onto the subspace orthogonal to x, and δ is the normalization constant. Now from (99),

‖P^⊥_{c_1}(R c_1)‖² = (R c_1)^† [P^⊥_{c_1}(R c_1)]   (100)
= P² [(S_I S_I^†) c_1]^† P^⊥_{c_1} [(S_I S_I^†) c_1]   (101)
= P² [c_1^†(S_I S_I^†)² c_1 − (c_1^† S_I S_I^† c_1)²] → (α + α²)P² − (αP)² = αP²   (102)

(See Appendix B, which discusses the computation of the large system limit of the moments c_1^†(S_I S_I^†)^k c_1.) From (86) and (102) we have

E[|S_1^⊥|²] = |c_1^† R c_2|² / E[|I_1|²] = ‖P^⊥_{c_1}(R c_1)‖² / E[|I_1|²] → αP² / (αP) = P   (103)
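The large-system moments invoked in this step, c_1^†(S_I S_I^†)c_1 → α and c_1^†(S_I S_I^†)²c_1 → α + α², can be spot-checked by Monte Carlo (a sketch; the sizes are illustrative, not from the paper):

```python
import numpy as np

# For a random unit-norm s independent of S_I:
#   s^†(S_I S_I^†)s   -> alpha           (first moment)
#   s^†(S_I S_I^†)^2 s -> alpha + alpha^2 (second moment)
rng = np.random.default_rng(3)
N, K = 2000, 1000
alpha = K / N
S_I = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
s = rng.choice([-1.0, 1.0], size=N) / np.sqrt(N)

w = S_I @ (S_I.T @ s)    # (S_I S_I^†) s without forming the N x N matrix
m1 = s @ w               # first moment, near alpha
m2 = w @ w               # second moment, near alpha + alpha^2
```

The fluctuations around the limits are O(1/√N), so at these sizes the empirical moments sit within a few percent of α = 0.5 and α + α² = 0.75.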

and it is shown in Appendix D that

E[|d_1^⊥|²] = c_2^† R c_2 → P + αP + σ²   (104)

Combining (88) with (103) and (104) gives (97), and substituting into (96) gives

β_{MS,2} = P / (σ² + αP / (1 + β_1))   (105)

where β_1 is the matched-filter SINR in (79). Before proceeding to the induction step, we need to make an additional observation. Suppose that we choose c_1^⊥ = c_{2:K}, which consists of stages two through K of the MSWF. Referring to Figure 1, this filter has input r_1(i) and output w_2(i). Let c_{m:K} denote stages m through K of the MSWF (input r_{m−1}(i) and output w_m(i)). From Figure 1 and (6) we can write

c_{m:K} = w_m (c_m − c_{m+1:K})   (106)

where we have used the fact that ∏_{m=1}^{n−1} B_m c_n = c_n. We can therefore express c_{2:K} as a linear combination of the MSWF filters c_2, …, c_K, i.e.,

c_{2:K} = Σ_{i=2}^K w_{i,(2:K)} c_i   (107)

where the w_{i,(2:K)}'s, i = 2, …, K, are the corresponding combining coefficients, and depend on the filter indices 2:K. Since the c_i's are orthogonal, c_{2:K} is orthogonal to c_1, as required. From the discussion in Section 3, c_{2:K} is the MMSE filter for estimating d_1 from r_1, the output of the blocking filter B_1 in Figure 1. Since K stages are needed to obtain the full-rank MMSE filter, the output SINR of c_{2:K} is the full-rank SINR, which from (40) must satisfy

β̄ = P / (σ² + αP / (1 + β̄))   (108)

Comparing (108) with (96) with c_1^⊥ = c_{2:K} shows that the output SINR of c_{2:K} must satisfy β_{2:K} → β̄ as K = αN → ∞. This completes the first step of the proof.

For the induction step we partition the MSWF into the first i − 1 stages, consisting of c_1, …, c_{i−1}, and the rest of the filter, consisting of c_{i:K}. By assumption, the large system output SINR of c_{i:K} is β_{i:K} = β̄. We also need to define the filter c_{i:L}, which consists of stages i through L of a rank-L MSWF. Clearly, c_{i:L} is a linear combination of c_i, …, c_L,

c_{i:L} = Σ_{l=i}^L w_{l,(i:L)} c_l   (109)

where L ≤ K, and w_{l,(i:L)}, l = i, …, L, are the combining coefficients. We decompose c_{i:L} as

c_{i:L} = c_i + w_{i+1} c_i^⊥   (110)

where c_i^⊥ is orthogonal to c_i, which appears in the MSWF. In what follows, we will choose c_i^⊥ = c_{i+1:L}. That is, for L = i + 1, c_i^⊥ = c_{i+1}, and for L = K, c_i^⊥ = c_{i+1:K} is the "bottom part" of the full-rank MSWF below stage i. Note that c_{i+1:K} is the MMSE filter for estimating d_i from r_i. Let

d_i = c_i^† r = S_i + I_i + N_i   (111)
d_i^⊥ = (c_i^⊥)^† r = S_i^⊥ + I_i^⊥ + N_i^⊥   (112)
d_{i:L} = c_{i:L}^† r = d_i + w_{i+1} d_i^⊥   (113)

where N_i = c_i^† n, N_i^⊥ = (c_i^⊥)^† n, the desired signal S_i is the projection of d_i onto I_{i−1}, and S_i^⊥ is the projection of d_i^⊥ onto I_i,

S_i^⊥ = a_i I_i,   a_i = E[(d_i^⊥)^* I_i] / E[|I_i|²]   (114)

The variables needed for the induction step are illustrated in Figure 9.

Figure 9: Illustration of variables used in the proof of Theorem 1.

Lemma 2:

E[(d_i^⊥)^* N_i] = 0   (115)
E[(d_i^⊥)^* S_i] = 0   (116)
E[(d_i^⊥)^* I_i] = c_i^† R c_i^⊥   (117)

Proof: First,

E[(d_i^⊥)^* N_i] = E[c_i^† n n^† c_i^⊥] = σ² c_i^† c_i^⊥ = 0

To show (116), we write

E[(d_i^⊥)^* S_i] = a_{i−1} E[(d_i^⊥)^* I_{i−1}] = a_{i−1} E[(d_i^⊥)^* (d_{i−1} − S_{i−1} − N_{i−1})]

Now

E[(d_i^⊥)^* N_{i−1}] = σ² c_{i−1}^† c_i^⊥ = 0   (118)
E[(d_i^⊥)^* d_{i−1}] = c_{i−1}^† R c_i^⊥ = 0   (119)

where (119) follows from (30) and the fact that c_i^⊥ is a linear combination of c_{i+1}, …, c_K. Consequently,

E[(d_i^⊥)^* S_i] = −a_{i−1} E[(d_i^⊥)^* S_{i−1}] = −a_{i−1} a_{i−2} E[(d_i^⊥)^* I_{i−2}] = −a_{i−1} a_{i−2} E[(d_i^⊥)^* (d_{i−2} − S_{i−2} − N_{i−2})] = ⋯ = constant × E[(d_i^⊥)^* S_1] = constant × E[c_1^† R c_i^⊥] = 0

Finally,

E[(d_i^⊥)^* I_i] = E[(d_i^⊥)^* (d_i − S_i − N_i)] = E[(d_i^⊥)^* d_i] = c_i^† R c_i^⊥   (120)

which completes the proof of Lemma 2.

We now compute the large system limit of β_{i:L} in terms of the output SINR for c_i^⊥, which is

β_{i:L} = E[|S_i|²] / (E[|d_{i:L}|²] − E[|S_i|²])   (121)
β_i^⊥ = E[|S_i^⊥|²] / (E[|d_i^⊥|²] − E[|S_i^⊥|²]) = (|c_i^† R c_i^⊥|² / E[|I_i|²]) / ((c_i^⊥)^† R c_i^⊥ − |c_i^† R c_i^⊥|² / E[|I_i|²])   (122)

which satisfies

β_i^⊥ / (1 + β_i^⊥) = E[|S_i^⊥|²] / E[|d_i^⊥|²] = |c_i^† R c_i^⊥|² / (E[|I_i|²] (c_i^⊥)^† R c_i^⊥)   (123)

Selecting w_{i+1} to minimize E[|d_{i:L}|²] in (113) gives

w_{i+1} = −E[(d_i^⊥)^* d_i] / E[|d_i^⊥|²] = −c_i^† R c_i^⊥ / ((c_i^⊥)^† R c_i^⊥)   (124)
= −E[(d_i^⊥)^* I_i] / ((c_i^⊥)^† R c_i^⊥)   (125)

and

E[|d_{i:L}|²] = E[|d_i|²] − |E[(d_i^⊥)^* d_i]|² / E[|d_i^⊥|²] = E[|d_i|²] − |c_i^† R c_i^⊥|² / ((c_i^⊥)^† R c_i^⊥) = E[|S_i|²] + E[|I_i|²] + σ² − E[|I_i|²] β_i^⊥/(1 + β_i^⊥) = E[|S_i|²] + σ² + E[|I_i|²] / (1 + β_i^⊥)   (126)

where (123) has been used. Combining (121)-(126) gives

β_{i:L} = E[|S_i|²] / (σ² + E[|I_i|²] / (1 + β_i^⊥))   (127)

For β_i^⊥ = β_{i+1:K}, corresponding to c_i^⊥ = c_{i+1:K}, we have β_{i:K} = β̄ (by assumption), so that (127) and (40) imply that

β_{i:K} = E[|S_i|²] / (σ² + E[|I_i|²] / (1 + β_{i+1:K}))   (128)

We can rewrite (128) as

E^∞[|S_i|²] / β̄ − σ² = E^∞[|I_i|²] / (1 + β^∞_{i+1:K})   (129)

where the superscript "∞" denotes the large-system limit of the associated variable.

Lemma 3: c_i, E[|S_i|²] and E[|I_i|²] are independent of σ², ∀i.

The proof is given in Appendix C. As σ² → ∞, β_{i+1:K} → 0 and β̄ → 0, so that (129) can only be true if E^∞[|S_i|²] = P. Consequently, from (128), as K = αN → ∞,

E[|S_i|²] → P,   E[|I_i|²] → αP,   β_{i+1:K} → β̄   (130)

and the SINR associated with the output of c_i is

β_{i:i} = E^∞[|S_i|²] / (σ² + E^∞[|I_i|²]) = P / (σ² + αP)   (131)

for i = 1, …, K, where convergence in probability follows from the fact that the variables are continuous and bounded functions of the moments s_1^† R_I^k s_1, k = 1, …, 2D − 1. From (127)-(130), we can write

β_{i−1:i} = P / (σ² + αP / (1 + β_{i:i}))   (132)

where β_{i:i} is given by (131). Similarly, (127) can be used again to express β_{i−2:i} in terms of β_{i−1:i}, which is given by (131) and (132). Iterating in this manner gives the Theorem.
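The resulting continued fraction for the equal-power rank-D SINR can be iterated numerically (a sketch with illustrative parameters): the rank-one term is the matched-filter SINR, and each additional stage wraps one more level of β_D = P/(σ² + αP/(1 + β_{D−1})), converging geometrically to the full-rank fixed point.

```python
import numpy as np

# Continued-fraction iteration of the Theorem for equal powers
# (illustrative parameters, alpha = 1 is a fully loaded system).
P, alpha, sigma2 = 1.0, 1.0, 0.1

betas = [P / (sigma2 + alpha * P)]               # rank 1: matched filter
for _ in range(15):
    betas.append(P / (sigma2 + alpha * P / (1 + betas[-1])))

# Full-rank limit: positive root of sigma^2 b^2 + (sigma^2 + alpha*P - P) b - P = 0.
c1 = sigma2 + alpha * P - P
beta_full = (-c1 + np.sqrt(c1 ** 2 + 4 * sigma2 * P)) / (2 * sigma2)
```

Even at full load the gap to the full-rank SINR shrinks geometrically with rank, so a rank around 8 already lies within a small fraction of a dB of the full-rank value, matching the behavior reported in Section 6.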

7.2 Theorem 2: Basis for the MSWF

In what follows it will be convenient to replace the normalized MSWF filters c_1, …, c_D, given by (30), by the unnormalized filters c̃_i, which satisfy

c̃_{i+1} = (I − Σ_{l=1}^i c̃_l c̃_l^†) R_I c̃_i,   i = 1, …, D − 1   (133)

Clearly, c̃_1, …, c̃_D span the same space S_D. The Theorem is obviously true for D = 1 since c̃_1 = c_1 = s_1. Since

c̃_2 = (I − c̃_1 c̃_1^†) R_I c̃_1 = R_I s_1 − (s_1^† R_I s_1) s_1   (134)

we have [c̃_1 c̃_2] = [c̃_1 R_I c̃_1] A_2, where

A_2 = [ 1   −(s_1^† R_I s_1) ; 0   1 ]

is non-singular, so the Theorem is true for D = 2. Assume that the Theorem is true for D = i, and that

[c̃_1 c̃_2 ⋯ c̃_i] = [s_1 R_I s_1 ⋯ R_I^{i−1} s_1] A_i   (135)

where A_i is a non-singular upper triangular i × i matrix with diagonal elements equal to one. From (133),

c̃_i = (I − Σ_{l=1}^{i−1} c̃_l c̃_l^†) R_I c̃_{i−1} = R_I c̃_{i−1} − [c̃_1 c̃_2 ⋯ c̃_{i−1}] u,   i = 2, …, D

where u = [c̃_1^† R_I c̃_{i−1}  c̃_2^† R_I c̃_{i−1}  ⋯  c̃_{i−1}^† R_I c̃_{i−1}]^T. Denoting the jth column of A_i as A_i^{(j)}, we have

c̃_{i+1} = R_I [s_1 R_I s_1 ⋯ R_I^{i−1} s_1] A_i^{(i)} − [s_1 R_I s_1 ⋯ R_I^{i−1} s_1] A_i u = [s_1 R_I s_1 ⋯ R_I^i s_1] A_{i+1}^{(i+1)}

where

A_{i+1}^{(i+1)} = [ −A_i u ; 0 ] + [ 0 ; A_i^{(i)} ]

(the first vector padded with a final zero, the second shifted down one position). Hence, A_{i+1} is also a non-singular upper triangular matrix with diagonal elements equal to one, which establishes the Theorem.
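Theorem 2's conclusion, that the MSWF stages span the Krylov subspace span{s_1, R_I s_1, …, R_I^{D−1} s_1}, can be checked numerically (a sketch; the sizes are invented, and the normalized form of the recursion is used since it spans the same space):

```python
import numpy as np

# Build D orthonormal MSWF directions by projecting R_I c_{i-1} against the
# earlier stages, then compare the spanned subspace with the Krylov subspace.
rng = np.random.default_rng(4)
N, K, D, sigma2 = 16, 8, 4, 0.1
S_I = rng.choice([-1.0, 1.0], size=(N, K - 1)) / np.sqrt(N)
s1 = rng.choice([-1.0, 1.0], size=N) / np.sqrt(N)
R_I = S_I @ S_I.T + sigma2 * np.eye(N)         # interference-plus-noise covariance

C = np.zeros((N, D))
C[:, 0] = s1
for i in range(1, D):
    w = R_I @ C[:, i - 1]                      # next direction before blocking
    w -= C[:, :i] @ (C[:, :i].T @ w)           # project out earlier stages
    C[:, i] = w / np.linalg.norm(w)

Kry = np.column_stack([np.linalg.matrix_power(R_I, j) @ s1 for j in range(D)])
Q, _ = np.linalg.qr(Kry)
P_mswf, P_kry = C @ C.T, Q @ Q.T               # projectors onto the two subspaces
```

The two orthogonal projectors agree to machine precision, which is exactly the statement that S_D is the order-D Krylov subspace generated by R_I and s_1.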

7.3 Principal Components

We assume all users are received with power P. From (63) we have

E(v_D) = P Σ_{n=1}^D E( |v_n^† s_1|² / (λ_n + σ²) )   (136)

where the expectation is with respect to the random signature matrix S. Since the elements of S are i.i.d., we can replace s_1 by s_k, so that

E( |v_n^† s_1|² / (λ_n + σ²) ) = (1/K) E( Σ_{k=1}^K |v_n^† s_k|² / (λ_n + σ²) ) = (1/K) E( v_n^† S S^† v_n / (λ_n + σ²) ) = (1/(PK)) E( λ_n / (λ_n + σ²) )   (137)

Combining (136) and (137) gives

E(v_D) = (1/K) Σ_{n=1}^D E( λ_n / (λ_n + σ²) )   (138)

As K = αN → ∞, the distribution of {λ_n} converges to a deterministic limit G(λ) with associated density [7]

g(x) = √([x − a(α)][b(α) − x]) / (2πPx)  for a(α) < x < b(α), and 0 otherwise   (139)

where a(α) and b(α) are given by (65), and for 0 < α < 1 there is an additional mass point at x = 0,

Pr{λ_n = 0} = 1 − α   (140)

Combining (138) with the asymptotic eigenvalue distribution gives

lim_{K=αN→∞, D=ρN→∞} E[v_D] = (1/α) ∫_c^{b(α)} [λ/(λ + σ²)] dG(λ) = (1/α) ∫_{c+σ²}^{b(α)+σ²} √([x − a(α) − σ²][b(α) + σ² − x]) / (2πPx) dx   (141)

where c satisfies

G(c) = (1 − α) 1{α < 1} + ∫_{a(α)}^c g(x) dx = 1 − ρ   (142)

Performance of Reduced-Rank Linear Interference Suppression, by Michael L. Honig, Fellow, IEEE, and Weimin Xiao, Member, IEEE. IEEE Transactions on Information Theory, vol. 47, no. 5, July 2001, p. 1928.

More information

Adaptive interference suppression algorithms for DS-UWB systems

Adaptive interference suppression algorithms for DS-UWB systems Adaptive interference suppression algorithms for DS-UWB systems This thesis is submitted in partial fulfilment of the requirements for Doctor of Philosophy (Ph.D.) Sheng Li Communications Research Group

More information

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice 3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is

More information

Direct-Sequence Spread-Spectrum

Direct-Sequence Spread-Spectrum Chapter 3 Direct-Sequence Spread-Spectrum In this chapter we consider direct-sequence spread-spectrum systems. Unlike frequency-hopping, a direct-sequence signal occupies the entire bandwidth continuously.

More information

Lecture 8: MIMO Architectures (II) Theoretical Foundations of Wireless Communications 1. Overview. Ragnar Thobaben CommTh/EES/KTH

Lecture 8: MIMO Architectures (II) Theoretical Foundations of Wireless Communications 1. Overview. Ragnar Thobaben CommTh/EES/KTH MIMO : MIMO Theoretical Foundations of Wireless Communications 1 Wednesday, May 25, 2016 09:15-12:00, SIP 1 Textbook: D. Tse and P. Viswanath, Fundamentals of Wireless Communication 1 / 20 Overview MIMO

More information

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University PCA with random noise Van Ha Vu Department of Mathematics Yale University An important problem that appears in various areas of applied mathematics (in particular statistics, computer science and numerical

More information

Approximate Minimum Bit-Error Rate Multiuser Detection

Approximate Minimum Bit-Error Rate Multiuser Detection Approximate Minimum Bit-Error Rate Multiuser Detection Chen-Chu Yeh, Renato R. opes, and John R. Barry School of Electrical and Computer Engineering Georgia Institute of Technology, Atlanta, Georgia 30332-0250

More information

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 27, v ) Contents 4 Eigenvalues, Diagonalization, and the Jordan Canonical Form 4 Eigenvalues, Eigenvectors, and

More information

[4] T. I. Seidman, \\First Come First Serve" is Unstable!," tech. rep., University of Maryland Baltimore County, 1993.

[4] T. I. Seidman, \\First Come First Serve is Unstable!, tech. rep., University of Maryland Baltimore County, 1993. [2] C. J. Chase and P. J. Ramadge, \On real-time scheduling policies for exible manufacturing systems," IEEE Trans. Automat. Control, vol. AC-37, pp. 491{496, April 1992. [3] S. H. Lu and P. R. Kumar,

More information

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence)

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector

More information

Improved MUSIC Algorithm for Estimation of Time Delays in Asynchronous DS-CDMA Systems

Improved MUSIC Algorithm for Estimation of Time Delays in Asynchronous DS-CDMA Systems Improved MUSIC Algorithm for Estimation of Time Delays in Asynchronous DS-CDMA Systems Thomas Ostman, Stefan Parkvall and Bjorn Ottersten Department of Signals, Sensors and Systems, Royal Institute of

More information

MULTICARRIER code-division multiple access (MC-

MULTICARRIER code-division multiple access (MC- IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 2, FEBRUARY 2005 479 Spectral Efficiency of Multicarrier CDMA Antonia M. Tulino, Member, IEEE, Linbo Li, and Sergio Verdú, Fellow, IEEE Abstract We

More information

Lecture 5: Antenna Diversity and MIMO Capacity Theoretical Foundations of Wireless Communications 1. Overview. CommTh/EES/KTH

Lecture 5: Antenna Diversity and MIMO Capacity Theoretical Foundations of Wireless Communications 1. Overview. CommTh/EES/KTH : Antenna Diversity and Theoretical Foundations of Wireless Communications Wednesday, May 4, 206 9:00-2:00, Conference Room SIP Textbook: D. Tse and P. Viswanath, Fundamentals of Wireless Communication

More information

Channel. Feedback. Channel

Channel. Feedback. Channel Space-time Transmit Precoding with Imperfect Feedback Eugene Visotsky Upamanyu Madhow y Abstract The use of channel feedback from receiver to transmitter is standard in wireline communications. While knowledge

More information

Linear Algebra and Matrices

Linear Algebra and Matrices Linear Algebra and Matrices 4 Overview In this chapter we studying true matrix operations, not element operations as was done in earlier chapters. Working with MAT- LAB functions should now be fairly routine.

More information

[3] (b) Find a reduced row-echelon matrix row-equivalent to ,1 2 2

[3] (b) Find a reduced row-echelon matrix row-equivalent to ,1 2 2 MATH Key for sample nal exam, August 998 []. (a) Dene the term \reduced row-echelon matrix". A matrix is reduced row-echelon if the following conditions are satised. every zero row lies below every nonzero

More information

as specic as the Schur-Horn theorem. Indeed, the most general result in this regard is as follows due to Mirsky [4]. Seeking any additional bearing wo

as specic as the Schur-Horn theorem. Indeed, the most general result in this regard is as follows due to Mirsky [4]. Seeking any additional bearing wo ON CONSTRUCTING MATRICES WITH PRESCRIBED SINGULAR VALUES AND DIAGONAL ELEMENTS MOODY T. CHU Abstract. Similar to the well known Schur-Horn theorem that characterizes the relationship between the diagonal

More information

VII Selected Topics. 28 Matrix Operations

VII Selected Topics. 28 Matrix Operations VII Selected Topics Matrix Operations Linear Programming Number Theoretic Algorithms Polynomials and the FFT Approximation Algorithms 28 Matrix Operations We focus on how to multiply matrices and solve

More information

Singular Value Decomposition and Principal Component Analysis (PCA) I

Singular Value Decomposition and Principal Component Analysis (PCA) I Singular Value Decomposition and Principal Component Analysis (PCA) I Prof Ned Wingreen MOL 40/50 Microarray review Data per array: 0000 genes, I (green) i,i (red) i 000 000+ data points! The expression

More information

926 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 3, MARCH Monica Nicoli, Member, IEEE, and Umberto Spagnolini, Senior Member, IEEE (1)

926 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 3, MARCH Monica Nicoli, Member, IEEE, and Umberto Spagnolini, Senior Member, IEEE (1) 926 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 3, MARCH 2005 Reduced-Rank Channel Estimation for Time-Slotted Mobile Communication Systems Monica Nicoli, Member, IEEE, and Umberto Spagnolini,

More information

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports Spurious Chaotic Solutions of Dierential Equations Sigitas Keras DAMTP 994/NA6 September 994 Department of Applied Mathematics and Theoretical Physics

More information

Minimum Feedback Rates for Multi-Carrier Transmission With Correlated Frequency Selective Fading

Minimum Feedback Rates for Multi-Carrier Transmission With Correlated Frequency Selective Fading Minimum Feedback Rates for Multi-Carrier Transmission With Correlated Frequency Selective Fading Yakun Sun and Michael L. Honig Department of ECE orthwestern University Evanston, IL 60208 Abstract We consider

More information

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models Bare minimum on matrix algebra Psychology 588: Covariance structure and factor models Matrix multiplication 2 Consider three notations for linear combinations y11 y1 m x11 x 1p b11 b 1m y y x x b b n1

More information

MMSE Equalizer Design

MMSE Equalizer Design MMSE Equalizer Design Phil Schniter March 6, 2008 [k] a[m] P a [k] g[k] m[k] h[k] + ṽ[k] q[k] y [k] P y[m] For a trivial channel (i.e., h[k] = δ[k]), e kno that the use of square-root raisedcosine (SRRC)

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

Performance Analysis of Spread Spectrum CDMA systems

Performance Analysis of Spread Spectrum CDMA systems 1 Performance Analysis of Spread Spectrum CDMA systems 16:33:546 Wireless Communication Technologies Spring 5 Instructor: Dr. Narayan Mandayam Summary by Liang Xiao lxiao@winlab.rutgers.edu WINLAB, Department

More information

Linear Optimum Filtering: Statement

Linear Optimum Filtering: Statement Ch2: Wiener Filters Optimal filters for stationary stochastic models are reviewed and derived in this presentation. Contents: Linear optimal filtering Principle of orthogonality Minimum mean squared error

More information

Nordhaus-Gaddum Theorems for k-decompositions

Nordhaus-Gaddum Theorems for k-decompositions Nordhaus-Gaddum Theorems for k-decompositions Western Michigan University October 12, 2011 A Motivating Problem Consider the following problem. An international round-robin sports tournament is held between

More information

VECTOR QUANTIZATION TECHNIQUES FOR MULTIPLE-ANTENNA CHANNEL INFORMATION FEEDBACK

VECTOR QUANTIZATION TECHNIQUES FOR MULTIPLE-ANTENNA CHANNEL INFORMATION FEEDBACK VECTOR QUANTIZATION TECHNIQUES FOR MULTIPLE-ANTENNA CHANNEL INFORMATION FEEDBACK June Chul Roh and Bhaskar D. Rao Department of Electrical and Computer Engineering University of California, San Diego La

More information

5 Eigenvalues and Diagonalization

5 Eigenvalues and Diagonalization Linear Algebra (part 5): Eigenvalues and Diagonalization (by Evan Dummit, 27, v 5) Contents 5 Eigenvalues and Diagonalization 5 Eigenvalues, Eigenvectors, and The Characteristic Polynomial 5 Eigenvalues

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

Transmit Directions and Optimality of Beamforming in MIMO-MAC with Partial CSI at the Transmitters 1

Transmit Directions and Optimality of Beamforming in MIMO-MAC with Partial CSI at the Transmitters 1 2005 Conference on Information Sciences and Systems, The Johns Hopkins University, March 6 8, 2005 Transmit Directions and Optimality of Beamforming in MIMO-MAC with Partial CSI at the Transmitters Alkan

More information

Random Matrices and Wireless Communications

Random Matrices and Wireless Communications Random Matrices and Wireless Communications Jamie Evans Centre for Ultra-Broadband Information Networks (CUBIN) Department of Electrical and Electronic Engineering University of Melbourne 3.5 1 3 0.8 2.5

More information

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x =

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x = Linear Algebra Review Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1 x x = 2. x n Vectors of up to three dimensions are easy to diagram.

More information

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE BRANKO CURGUS and BRANKO NAJMAN Denitizable operators in Krein spaces have spectral properties similar to those of selfadjoint operators in Hilbert spaces.

More information

5 and A,1 = B = is obtained by interchanging the rst two rows of A. Write down the inverse of B.

5 and A,1 = B = is obtained by interchanging the rst two rows of A. Write down the inverse of B. EE { QUESTION LIST EE KUMAR Spring (we will use the abbreviation QL to refer to problems on this list the list includes questions from prior midterm and nal exams) VECTORS AND MATRICES. Pages - of the

More information

Notes on basis changes and matrix diagonalization

Notes on basis changes and matrix diagonalization Notes on basis changes and matrix diagonalization Howard E Haber Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064 April 17, 2017 1 Coordinates of vectors and matrix

More information

Optimal Linear Estimation Fusion Part VI: Sensor Data Compression

Optimal Linear Estimation Fusion Part VI: Sensor Data Compression Optimal Linear Estimation Fusion Part VI: Sensor Data Compression Keshu Zhang X. Rong Li Peng Zhang Department of Electrical Engineering, University of New Orleans, New Orleans, L 70148 Phone: 504-280-7416,

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

5 Linear Algebra and Inverse Problem

5 Linear Algebra and Inverse Problem 5 Linear Algebra and Inverse Problem 5.1 Introduction Direct problem ( Forward problem) is to find field quantities satisfying Governing equations, Boundary conditions, Initial conditions. The direct problem

More information

Performance Comparison of Two Implementations of the Leaky. LMS Adaptive Filter. Scott C. Douglas. University of Utah. Salt Lake City, Utah 84112

Performance Comparison of Two Implementations of the Leaky. LMS Adaptive Filter. Scott C. Douglas. University of Utah. Salt Lake City, Utah 84112 Performance Comparison of Two Implementations of the Leaky LMS Adaptive Filter Scott C. Douglas Department of Electrical Engineering University of Utah Salt Lake City, Utah 8411 Abstract{ The leaky LMS

More information

Z subarray. (d,0) (Nd-d,0) (Nd,0) X subarray Y subarray

Z subarray. (d,0) (Nd-d,0) (Nd,0) X subarray Y subarray A Fast Algorithm for 2-D Direction-of-Arrival Estimation Yuntao Wu 1,Guisheng Liao 1 and H. C. So 2 1 Laboratory for Radar Signal Processing, Xidian University, Xian, China 2 Department of Computer Engineering

More information

Wideband Fading Channel Capacity with Training and Partial Feedback

Wideband Fading Channel Capacity with Training and Partial Feedback Wideband Fading Channel Capacity with Training and Partial Feedback Manish Agarwal, Michael L. Honig ECE Department, Northwestern University 145 Sheridan Road, Evanston, IL 6008 USA {m-agarwal,mh}@northwestern.edu

More information

14 Singular Value Decomposition

14 Singular Value Decomposition 14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

arxiv:cs/ v1 [cs.it] 11 Sep 2006

arxiv:cs/ v1 [cs.it] 11 Sep 2006 0 High Date-Rate Single-Symbol ML Decodable Distributed STBCs for Cooperative Networks arxiv:cs/0609054v1 [cs.it] 11 Sep 2006 Zhihang Yi and Il-Min Kim Department of Electrical and Computer Engineering

More information

1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin

1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin Sensitivity Analysis in LP and SDP Using Interior-Point Methods E. Alper Yldrm School of Operations Research and Industrial Engineering Cornell University Ithaca, NY joint with Michael J. Todd INFORMS

More information

Asymptotic Statistics-VI. Changliang Zou

Asymptotic Statistics-VI. Changliang Zou Asymptotic Statistics-VI Changliang Zou Kolmogorov-Smirnov distance Example (Kolmogorov-Smirnov confidence intervals) We know given α (0, 1), there is a well-defined d = d α,n such that, for any continuous

More information

THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION. Sunday, March 14, 2004 Time: hours No aids or calculators permitted.

THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION. Sunday, March 14, 2004 Time: hours No aids or calculators permitted. THE UNIVERSITY OF TORONTO UNDERGRADUATE MATHEMATICS COMPETITION Sunday, March 1, Time: 3 1 hours No aids or calculators permitted. 1. Prove that, for any complex numbers z and w, ( z + w z z + w w z +

More information

Linear Algebra for Machine Learning. Sargur N. Srihari

Linear Algebra for Machine Learning. Sargur N. Srihari Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it

More information

A Game-Theoretic Framework for Interference Avoidance

A Game-Theoretic Framework for Interference Avoidance IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 57, NO. 4, APRIL 2009 1087 A Game-Theoretic Framework for Interference Avoidance R. Menon, A. B. MacKenzie, Member, IEEE, J. Hicks, Member, IEEE, R. M. Buehrer,

More information

On the Optimum Asymptotic Multiuser Efficiency of Randomly Spread CDMA

On the Optimum Asymptotic Multiuser Efficiency of Randomly Spread CDMA On the Optimum Asymptotic Multiuser Efficiency of Randomly Spread CDMA Ralf R. Müller Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) Lehrstuhl für Digitale Übertragung 13 December 2014 1. Introduction

More information

THE REAL POSITIVE DEFINITE COMPLETION PROBLEM. WAYNE BARRETT**, CHARLES R. JOHNSONy and PABLO TARAZAGAz

THE REAL POSITIVE DEFINITE COMPLETION PROBLEM. WAYNE BARRETT**, CHARLES R. JOHNSONy and PABLO TARAZAGAz THE REAL POSITIVE DEFINITE COMPLETION PROBLEM FOR A SIMPLE CYCLE* WAYNE BARRETT**, CHARLES R JOHNSONy and PABLO TARAZAGAz Abstract We consider the question of whether a real partial positive denite matrix

More information

The Best Circulant Preconditioners for Hermitian Toeplitz Systems II: The Multiple-Zero Case Raymond H. Chan Michael K. Ng y Andy M. Yip z Abstract In

The Best Circulant Preconditioners for Hermitian Toeplitz Systems II: The Multiple-Zero Case Raymond H. Chan Michael K. Ng y Andy M. Yip z Abstract In The Best Circulant Preconditioners for Hermitian Toeplitz Systems II: The Multiple-ero Case Raymond H. Chan Michael K. Ng y Andy M. Yip z Abstract In [0, 4], circulant-type preconditioners have been proposed

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Chapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s

Chapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology 1 1 c Chapter

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information