TO APPEAR IN IEEE TRANSACTIONS ON INFORMATION THEORY


Success probability of the Babai estimators for box-constrained integer linear models

Jinming Wen and Xiao-Wen Chang

Abstract: In many applications, including communications, one may encounter a linear model in which the parameter vector x̂ is an integer vector in a box. To estimate x̂, a typical method is to solve a box-constrained integer least squares (BILS) problem. However, due to its high complexity, the box-constrained Babai integer point x^BB is commonly used as a suboptimal solution. In this paper, we first derive formulas for the success probability P^BB of x^BB and the success probability P^OB of the ordinary Babai integer point x^OB when x̂ is uniformly distributed over the constraint box. Some properties of P^BB and P^OB and the relationship between them are studied. Then, we investigate the effects of some column permutation strategies on P^BB. In addition to V-BLAST and SQRD, we also consider the permutation strategy involved in the LLL lattice reduction, to be referred to as LLL-P. On the one hand, we show that when the noise is relatively small, LLL-P always increases P^BB, and we argue why both V-BLAST and SQRD often increase P^BB; on the other hand, we show that when the noise is relatively large, LLL-P always decreases P^BB, and we argue why both V-BLAST and SQRD often decrease P^BB. We also derive a column permutation invariant bound on P^BB, which is an upper bound and a lower bound under these two opposite conditions, respectively. Numerical results demonstrate our findings. Finally, we consider a conjecture concerning x^OB proposed by Ma et al. We first construct an example to show that the conjecture does not hold in general, and then show that it does hold under some conditions.

Index Terms: Box-constrained integer least squares estimation, Babai integer point, success probability, column permutations, LLL-P, SQRD, V-BLAST.

I.
INTRODUCTION

SUPPOSE that we have the following box-constrained linear model:

y = Ax̂ + v, v ~ N(0, σ²I), (1a)
x̂ ∈ B ≡ {x ∈ Z^n : l ≤ x ≤ u, l, u ∈ Z^n}, (1b)

where y ∈ R^m is an observation vector, A ∈ R^{m×n} is a deterministic model matrix with full column rank, x̂ is an unknown integer parameter vector in the box B, and v ∈ R^m is a noise vector following the Gaussian distribution N(0, σ²I) with σ known. This model arises in various applications, including wireless communications, see, e.g., [], [2]. In this paper, we assume that x̂ is random and uniformly distributed over the box B. This assumption is often made for MIMO applications, see, e.g., [3].

This work was supported by an NSERC of Canada grant and by ANR through the HPAC project under Grant ANR B. J. Wen was with the Department of Mathematics and Statistics, McGill University, Montreal, QC H3A 0B9, Canada, and CNRS, Laboratoire de l'Informatique du Parallélisme (U. Lyon, CNRS, ENS, INRIA, UCBL), Lyon 69007, France. He is with the Department of Electrical and Computer Engineering, University of Alberta, Edmonton T6G 2V4, Canada (e-mail: jinming@ualberta.ca). X.-W. Chang is with the School of Computer Science, McGill University, Montreal, QC H3A 2A7, Canada (e-mail: chang@cs.mcgill.ca). Manuscript received; revised.

A common method to estimate/detect x̂ in (1) is to solve the following box-constrained integer least squares (BILS) problem:

min_{x∈B} ||y − Ax||_2², (2)

whose solution is the maximum likelihood estimator/detector of x̂. Here we would like to make a comment on terminology. In communications, it is proper to use "detect" and "detector" for the constrained case. However, later in this paper we will use "estimate" and "estimator" as an extension of the terminology commonly used in the unconstrained case.

A typical approach to solving (2) is discrete search, which usually consists of two stages: reduction and search. In the first stage, orthogonal transformations are used to transform A to an upper triangular matrix R.
To make the search process more efficient, a column permutation strategy is often used in the reduction. Two well-known strategies are V-BLAST [4], [] and SQRD [5], [6]. The commonly used search methods are the so-called sphere decoding methods [], [7] and [6], which are extensions of the Schnorr-Euchner search method [8], a variation of the Fincke-Pohst search method [9], from the ordinary integer least squares problems mentioned below to the box-constrained case. There are also some variants of the Schnorr-Euchner search method, see, e.g., [0].

If the true parameter vector x̂ ∈ Z^n in the linear model (1a) is not subject to any constraint, then we say (1a) is an ordinary linear model. In this case, to estimate x̂, one solves an ordinary integer least squares (OILS) problem (also referred to as the closest vector problem):

min_{x∈Z^n} ||y − Ax||_2², (3)

whose solution is referred to as the OILS estimator of x̂. Algorithms and theory for OILS problems are surveyed in [] and [2]. The most widely used reduction strategy in solving (3) is the LLL reduction [3], which consists of two types of operations: size reductions and column permutations.
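As a concrete illustration, one draw from the model (1) can be simulated as follows. This is a minimal sketch: the Gaussian model matrix, the dimensions, the box bounds and the seed are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate_box_model(m, n, l, u, sigma, rng):
    """Draw one instance of the box-constrained linear model (1):
    y = A xhat + v, with xhat uniform over the box {l,...,u}^n and
    v ~ N(0, sigma^2 I).  A is deterministic in the model; here it is
    drawn at random purely for illustration."""
    A = rng.standard_normal((m, n))          # illustrative model matrix
    xhat = rng.integers(l, u + 1, size=n)    # uniform over the integer box
    v = sigma * rng.standard_normal(m)       # Gaussian noise, known sigma
    y = A @ xhat + v
    return A, xhat, y

rng = np.random.default_rng(0)
A, xhat, y = simulate_box_model(m=8, n=4, l=0, u=3, sigma=0.1, rng=rng)
```

The BILS problem (2) then asks for the box point x minimizing ||y − Ax||_2².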

more generally, it is difficult to use the LLL reduction to solve a BILS problem, because after size reductions the box constraint becomes too complicated to handle in the search process. However, one can use its permutation strategy, to be referred to as LLL-P (we referred to it as LLL-permute in [4]). The LLL-P, SQRD and V-BLAST strategies use only the information of A to do the column permutations. Some column permutation strategies which use not only the information of A, but also the information of y and the box constraint, have also been proposed [5], [6] and [6].

For a fixed constraint box B in (1b), where all the entries of l are equal and all the entries of u are equal, it was shown in [3] that when the signal-to-noise ratio (SNR) is fixed, the expected complexity of solving (2) by the Fincke-Pohst search method behaves as an exponential function of the dimension n when n is large enough, although it is dominated by polynomial terms for high SNR and small n [7] [3]. So for some real-time applications, an approximate solution, which can be produced quickly, is computed instead. For the OILS problem, the Babai integer point x^OB, to be referred to as the ordinary Babai estimator, which can be obtained by the Babai nearest plane algorithm [8], is an often used approximate solution. Taking the box constraint into account, one can easily modify the Babai nearest plane algorithm to get an approximate solution x^BB to the BILS problem (2), to be referred to as the box-constrained Babai estimator. This estimator is the first point found by the search methods proposed in [7], [] and [6], and it has been used as a suboptimal solution, see, e.g., [9]. In communications, algorithms for finding the Babai estimators are often referred to as successive interference cancellation detectors. There are also algorithms which find other suboptimal solutions to BILS problems in communications, see, e.g., [20]-[29].
In this paper we will focus only on the Babai estimators. In order to see how good an estimator is, one needs to find the probability that the estimator is equal to the true integer parameter vector, which is referred to as its success probability [30]. The probability of wrong estimation is referred to as the error probability, see, e.g., [26]. For the estimation of x̂ in the ordinary linear model (1a), where x̂ is supposed to be deterministic, a formula for the success probability P^OB of the ordinary Babai estimator x^OB was first given in [3], which considers a variant form of the ILS problem (3). A simple derivation of an equivalent formula for P^OB was given in [4]. It was shown in [4] that P^OB increases after applying the LLL reduction algorithm or only the LLL-P column permutation strategy, but may strictly decrease after applying the SQRD and V-BLAST permutation strategies.

The main goal of this paper is to extend the main results we obtained in [4] for the ordinary case to the box-constrained case. We will present a formula for the success probability P^BB of the box-constrained Babai estimator x^BB and a formula for the success probability P^OB of the ordinary Babai estimator x^OB when x̂ in (1) follows a uniform distribution over the box B. Some properties of P^BB and P^OB and the relationship between them will also be given. Then we will investigate the effect of the LLL-P column permutation strategy on P^BB. We will show that P^BB increases under a certain condition. Surprisingly, we will also show that P^BB decreases after LLL-P is applied under an opposite condition. Roughly speaking, these two opposite conditions are that the noise standard deviation σ in (1a) is relatively small and relatively large, respectively. This is different from the ordinary case, where P^OB always increases after the LLL-P strategy is applied. Although our theoretical results for LLL-P cannot be extended to SQRD and V-BLAST, our numerical tests indicate that under the two conditions, P^BB often (not always) increases and decreases, respectively, after applying SQRD or V-BLAST.
Explanations will be given for these phenomena. These results suggest that before we apply LLL-P, SQRD or V-BLAST, we should check the conditions. Moreover, we will give a bound on P^BB which is column permutation invariant. It is interesting that the bound is an upper bound under the small-noise condition we just mentioned and becomes a lower bound under the opposite condition.

In [32], the authors made a conjecture, based on which a stopping criterion for the search process was proposed to reduce the computational cost of solving the BILS problem. The conjecture is related to the success probability of the ordinary Babai estimator x^OB. We will first show that the conjecture does not always hold and then show that it holds under a certain condition.

The rest of the paper is organized as follows. In Section II, we introduce the QR reduction and the LLL-P, SQRD and V-BLAST column reordering strategies. In Section III, we present the formulas for P^BB and P^OB, and study the properties of P^BB and P^OB and the relationship between them. In Section IV, we investigate the effects of the LLL-P, SQRD and V-BLAST column permutation strategies on P^BB and derive a bound on P^BB. In Section V, we investigate the conjecture made in [32] and obtain some negative and positive results. Finally, we summarize this paper in Section VI.

Notation. For matrices, we use bold upper-case letters, and for vectors we use bold lower-case letters. For x ∈ R^n, we use ⌊x⌉ to denote its nearest integer vector, i.e., each entry of x is rounded to its nearest integer (if there is a tie, the one with smaller magnitude is chosen). For a vector x, x_{i:j} denotes the subvector of x formed by entries i, i+1, ..., j. For a matrix A, A_{i:j,i:j} denotes the submatrix of A formed by rows and columns i, i+1, ..., j.
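The tie-breaking rule in the rounding notation ⌊x⌉ can be made concrete as follows; a minimal illustrative sketch (the function name is ours, not from the paper):

```python
import numpy as np

def round_ties_to_smaller(x):
    """Entrywise nearest integer <x> as in the Notation paragraph:
    on a tie (fractional part exactly 1/2), the integer with the
    smaller magnitude is chosen."""
    x = np.asarray(x, dtype=float)
    lo, hi = np.floor(x), np.ceil(x)
    r = np.where(x - lo < hi - x, lo, hi)          # ordinary nearest integer
    tie = (x - lo) == 0.5                          # exact half-integers
    r = np.where(tie, np.where(np.abs(lo) <= np.abs(hi), lo, hi), r)
    return r.astype(int)
```

For example, 2.5 rounds to 2 and -2.5 rounds to -2 under this convention.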

II. QR FACTORIZATION AND COLUMN REORDERING

Assume that the model matrix A in the linear model (1a) has the QR factorization

A = [Q_1, Q_2] [R; 0], (4)

where [Q_1, Q_2] ∈ R^{m×m} is orthogonal with Q_1 ∈ R^{m×n} and Q_2 ∈ R^{m×(m−n)}, and R ∈ R^{n×n} is upper triangular. Without loss of generality, we assume that the diagonal entries of R are positive throughout the paper. Define ỹ = Q_1^T y and ṽ = Q_1^T v. Then the linear model (1) is reduced to

ỹ = Rx̂ + ṽ, ṽ ~ N(0, σ²I), (5a)
x̂ ∈ B = {x ∈ Z^n : l ≤ x ≤ u, l, u ∈ Z^n}, (5b)

and the BILS problem (2) is reduced to

min_{x∈B} ||ỹ − Rx||_2². (6)

To solve the reduced problem (6), sphere decoding search algorithms are usually used to find the optimal solution. For search efficiency, one typically adopts a column permutation strategy, such as V-BLAST, SQRD or LLL-P, in the reduction process to obtain a better R. For simplicity, we assume that the column permutations are performed on R in (4) no matter which strategy is used, i.e.,

Q^T R P = R̄, (7)

where Q ∈ R^{n×n} is orthogonal, P ∈ Z^{n×n} is a permutation matrix, and R̄ ∈ R^{n×n} is an upper triangular matrix satisfying the properties of the corresponding column permutation strategy. Notice that combining (4) and (7) results in the following QR factorization of the column-reordered A:

AP = Q̄ [R̄; 0], Q̄ = [Q_1, Q_2] diag(Q, I_{m−n}) = [Q_1 Q, Q_2].

The V-BLAST strategy determines the columns of R̄ from the last to the first. Suppose columns n, n−1, ..., k+1 of R̄ have been determined; this strategy chooses one of the k remaining columns of R as the k-th column of R̄ such that r̄_kk is maximum over all of the k choices. For more details, including efficient algorithms, see [], [4], [33]-[35] etc. One may refer to [36] for a performance analysis of V-BLAST. In contrast to V-BLAST, the SQRD strategy determines the columns of R̄ from the first to the last by using the modified Gram-Schmidt algorithm or the Householder QR algorithm. Suppose columns 1, 2, ..., k−1 of R̄ have been determined.
In the k-th step of the algorithm, the k-th column of R̄ is chosen from the remaining n − k + 1 columns of R such that r̄_kk is smallest. For more details, see [5] and [6] etc.

The LLL-P strategy [4] performs the column permutations of the LLL reduction algorithm and produces an R̄ satisfying the Lovász condition:

δ r̄²_{k−1,k−1} ≤ r̄²_{k−1,k} + r̄²_{kk}, k = 2, 3, ..., n, (8)

where δ is a parameter satisfying 1/4 < δ ≤ 1. Suppose that δ r²_{k−1,k−1} > r²_{k−1,k} + r²_{kk} for some k. Then we interchange columns k−1 and k of R. After the permutation, the upper triangular structure of R is no longer maintained, but we can bring R back to an upper triangular matrix by using the Gram-Schmidt orthogonalization technique (see [3]) or by a Givens rotation:

R̄ = G^T_{k−1,k} R P_{k−1,k}, (9)

where G_{k−1,k} is an orthogonal matrix and P_{k−1,k} is a permutation matrix, and R̄ satisfies

r̄²_{k−1,k−1} = r²_{k−1,k} + r²_{kk}, r̄²_{k−1,k} + r̄²_{kk} = r²_{k−1,k−1}, r̄_{k−1,k−1} r̄_{kk} = r_{k−1,k−1} r_{kk}. (10)

Note that the above operation guarantees that the inequality in (8) holds for the two updated columns. For simplicity, later when we refer to a column permutation, we mean the whole process of a column permutation and triangularization. For the reader's convenience, we describe the LLL-P strategy in Algorithm 1, which can also be called the LLL-P reduction.

Algorithm 1 LLL-P
1: set P = I_n, k = 2;
2: while k ≤ n do
3:   if δ r²_{k−1,k−1} > r²_{k−1,k} + r²_{kk} then
4:     perform a column permutation: R = G^T_{k−1,k} R P_{k−1,k};
5:     update P: P = P P_{k−1,k};
6:     k = k − 1, when k > 2;
7:   else
8:     k = k + 1;
9:   end if
10: end while

Here we give a remark about the LLL-P algorithm. Note that the LLL-P algorithm is the same as the original LLL algorithm, except that any operations related to size reductions are not performed. When the Lovász condition (8) for two consecutive columns k−1 and k of R is not satisfied, the algorithm interchanges the two columns and performs triangularization. We have just shown that the two updated columns satisfy the Lovász condition. The algorithm terminates when the Lovász condition is satisfied for every pair of consecutive columns.
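Algorithm 1, together with the Givens-rotation re-triangularization (9), can be sketched as follows. This is an illustrative implementation under the paper's standing assumption that R is upper triangular with positive diagonal; the function and variable names are ours.

```python
import numpy as np

def lll_p(R, delta=1.0):
    """LLL-P reduction: column permutations plus Givens re-triangularization
    only (no size reductions), enforcing the Lovász condition (8):
        delta * r_{k-1,k-1}^2 <= r_{k-1,k}^2 + r_{kk}^2.
    Returns (R, P) with R upper triangular (positive diagonal) and
    original_R @ P = Q @ R for some orthogonal Q."""
    R = R.copy()
    n = R.shape[1]
    P = np.eye(n)
    k = 1                                       # 0-based; the paper's k = 2
    while k < n:
        if delta * R[k-1, k-1]**2 > R[k-1, k]**2 + R[k, k]**2:
            R[:, [k-1, k]] = R[:, [k, k-1]]     # swap columns k-1 and k
            P[:, [k-1, k]] = P[:, [k, k-1]]
            # Givens rotation on rows k-1, k restores the triangular form
            a, b = R[k-1, k-1], R[k, k-1]
            r = np.hypot(a, b)
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[[k-1, k], k-1:] = G @ R[[k-1, k], k-1:]
            R[k, k-1] = 0.0                     # clear round-off below the diagonal
            if R[k, k] < 0:                     # keep the diagonal positive
                R[k, :] *= -1.0
            k = max(k - 1, 1)
        else:
            k += 1
    return R, P
```

The sign flip of a row is a left multiplication by an orthogonal diagonal matrix, so it is absorbed into Q; the relations (10) are easy to check on the two updated columns.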
The proof of the convergence of the original LLL algorithm, which does not use the size reduction condition, can be applied here to show the convergence of the LLL-P algorithm. We would like to point out that, as the size reduction condition (|r_ij| ≤ r_ii/2) of the LLL reduction is not satisfied any

more, some properties of the LLL reduction are lost in the LLL-P reduction.

With the QR factorization (7), we define

ȳ = Q^T ỹ, ẑ = P^T x̂, v̄ = Q^T ṽ, z = P^T x, l̄ = P^T l, ū = P^T u. (11)

Then the linear model (5) is transformed to

ȳ = R̄ẑ + v̄, v̄ ~ N(0, σ²I), (12a)
ẑ ∈ B̄ = {z ∈ Z^n : l̄ ≤ z ≤ ū, l̄, ū ∈ Z^n}, (12b)

and the BILS problem (6) is transformed to

min_{z∈B̄} ||ȳ − R̄z||_2², (13)

whose solution is the BILS estimator of ẑ.

III. SUCCESS PROBABILITIES OF THE BABAI ESTIMATORS

We consider the reduced box-constrained linear model (5). The same analysis can be applied to the transformed reduced linear model (12). The box-constrained Babai estimator x^BB of x̂ in (5), a suboptimal solution to (6), can be computed as follows: for i = n, n−1, ..., 1,

c^BB_i = (ỹ_i − Σ_{j=i+1}^n r_ij x^BB_j)/r_ii,
x^BB_i = l_i if c^BB_i ≤ l_i; ⌊c^BB_i⌉ if l_i < c^BB_i < u_i; u_i if c^BB_i ≥ u_i, (14)

where the sum Σ_{j=n+1}^n is 0. If we do not take the box constraint into account, we get the ordinary Babai estimator x^OB: for i = n, n−1, ..., 1,

c^OB_i = (ỹ_i − Σ_{j=i+1}^n r_ij x^OB_j)/r_ii, x^OB_i = ⌊c^OB_i⌉. (15)

In the following, we give formulas for the success probabilities of x^BB and x^OB.

Theorem 1: Suppose that in (1) x̂ is uniformly distributed over the constraint box B, and x̂ and v are independent. Suppose that (1) is transformed to (5) through the QR factorization (4). Then the success probabilities of the box-constrained Babai estimator x^BB and the ordinary Babai estimator x^OB, which are respectively defined in (14) and (15), are

P^BB ≡ Pr(x^BB = x̂) = Π_{i=1}^n [ 1/(u_i − l_i + 1) + (u_i − l_i)/(u_i − l_i + 1) · erf( r_ii/(2√2 σ) ) ], (16)

P^OB ≡ Pr(x^OB = x̂) = Π_{i=1}^n erf( r_ii/(2√2 σ) ), (17)

where the error function is erf(ζ) = (2/√π) ∫_0^ζ exp(−t²) dt.

Proof. To simplify notation, we denote

φ_σ(ζ) = erf( ζ/(2√2 σ) ), (18)

which will be used in this proof and other places. Since the random vectors x̂ and v in (1) are independent, x̂ and ṽ in (5) are also independent. From (5a),

ỹ_i = r_ii x̂_i + Σ_{j=i+1}^n r_ij x̂_j + ṽ_i, i = n, n−1, ..., 1.
Then from (14), we obtain

c^BB_i = x̂_i + Σ_{j=i+1}^n (r_ij/r_ii)(x̂_j − x^BB_j) + ṽ_i/r_ii, i = n, n−1, ..., 1. (19)

Therefore, if x^BB_{i+1} = x̂_{i+1}, ..., x^BB_n = x̂_n and x̂_i is fixed, we have c^BB_i ~ N(x̂_i, σ²/r²_ii). Thus,

(c^BB_i − x̂_i) r_ii/(√2 σ) ~ N(0, 1/2). (20)

To simplify notation, we denote the events

E_i = (x^BB_i = x̂_i, ..., x^BB_n = x̂_n), i = 1, ..., n.

Then, applying the chain rule of conditional probabilities yields

P^BB = Pr(E_1) = Π_{i=1}^n Pr(x^BB_i = x̂_i | E_{i+1}), (21)

where E_{n+1} is the sample space Ω, leading to Pr(x^BB_n = x̂_n | E_{n+1}) = Pr(x^BB_n = x̂_n). Since the events x̂_i = l_i, l_i < x̂_i < u_i and x̂_i = u_i are mutually exclusive, by (14), we have

Pr(x^BB_i = x̂_i | E_{i+1}) = Pr((x̂_i = l_i, c^BB_i ≤ l_i + 1/2) | E_{i+1}) + Pr((l_i < x̂_i < u_i, x̂_i − 1/2 ≤ c^BB_i < x̂_i + 1/2) | E_{i+1}) + Pr((x̂_i = u_i, c^BB_i ≥ u_i − 1/2) | E_{i+1}). (22)

In the following we will use this simple result: if Ē_1, Ē_2 and Ē_3 are three events, and Ē_1 and Ē_3 are independent, then

Pr((Ē_1, Ē_2) | Ē_3) = Pr(Ē_1) Pr(Ē_2 | (Ē_1, Ē_3)). (23)

This can easily be proved. In fact,

Pr((Ē_1, Ē_2) | Ē_3) = Pr(Ē_1, Ē_2, Ē_3)/Pr(Ē_3) = Pr(Ē_1) Pr(Ē_1, Ē_2, Ē_3)/Pr(Ē_1, Ē_3) = Pr(Ē_1) Pr(Ē_2 | (Ē_1, Ē_3)),

where the second equality follows from the fact that Ē_1 and Ē_3 are independent.

Thus, by (22) and (23), we obtain

Pr(x^BB_i = x̂_i | E_{i+1}) = Pr(x̂_i = l_i) Pr(c^BB_i ≤ l_i + 1/2 | (x̂_i = l_i, E_{i+1})) + Pr(l_i < x̂_i < u_i) Pr(x̂_i − 1/2 ≤ c^BB_i < x̂_i + 1/2 | (l_i < x̂_i < u_i, E_{i+1})) + Pr(x̂_i = u_i) Pr(c^BB_i ≥ u_i − 1/2 | (x̂_i = u_i, E_{i+1})). (24)

Since x̂ is uniformly distributed over the box B, for the first factors of the three terms on the right-hand side of (24), we have

Pr(x̂_i = l_i) = 1/(u_i − l_i + 1), Pr(l_i < x̂_i < u_i) = (u_i − l_i − 1)/(u_i − l_i + 1), Pr(x̂_i = u_i) = 1/(u_i − l_i + 1).

By (18) and (20), for the second factors of these three terms, we have

Pr(c^BB_i ≤ l_i + 1/2 | (x̂_i = l_i, E_{i+1})) = Pr( (c^BB_i − x̂_i) r_ii/(√2 σ) ≤ r_ii/(2√2 σ) | (x̂_i = l_i, E_{i+1}) ) = (1/√π) ∫_{−∞}^{r_ii/(2√2σ)} exp(−t²) dt = (1/2)[1 + φ_σ(r_ii)],

Pr(x̂_i − 1/2 ≤ c^BB_i < x̂_i + 1/2 | (l_i < x̂_i < u_i, E_{i+1})) = Pr( −r_ii/(2√2 σ) ≤ (c^BB_i − x̂_i) r_ii/(√2 σ) < r_ii/(2√2 σ) | (l_i < x̂_i < u_i, E_{i+1}) ) = (1/√π) ∫_{−r_ii/(2√2σ)}^{r_ii/(2√2σ)} exp(−t²) dt = φ_σ(r_ii),

Pr(c^BB_i ≥ u_i − 1/2 | (x̂_i = u_i, E_{i+1})) = Pr( (c^BB_i − x̂_i) r_ii/(√2 σ) ≥ −r_ii/(2√2 σ) | (x̂_i = u_i, E_{i+1}) ) = (1/√π) ∫_{−r_ii/(2√2σ)}^{∞} exp(−t²) dt = (1/2)[1 + φ_σ(r_ii)].

Combining the equalities above with (24) yields

Pr(x^BB_i = x̂_i | E_{i+1}) = [1 + φ_σ(r_ii)]/(2(u_i − l_i + 1)) + (u_i − l_i − 1)/(u_i − l_i + 1) φ_σ(r_ii) + [1 + φ_σ(r_ii)]/(2(u_i − l_i + 1)) = 1/(u_i − l_i + 1) + (u_i − l_i)/(u_i − l_i + 1) φ_σ(r_ii),

which, with (18) and (21), yields (16).

Now we consider the success probability of the ordinary Babai estimator x^OB. Everything in the first three paragraphs of this proof still holds if we replace each superscript BB by OB. But we need to make more significant changes to the last two paragraphs. We change (22) and (24) as follows:

Pr(x^OB_i = x̂_i | E_{i+1}) = Pr((l_i ≤ x̂_i ≤ u_i, x̂_i − 1/2 ≤ c^OB_i < x̂_i + 1/2) | E_{i+1}) = Pr(l_i ≤ x̂_i ≤ u_i) Pr(x̂_i − 1/2 ≤ c^OB_i < x̂_i + 1/2 | (l_i ≤ x̂_i ≤ u_i, E_{i+1})).

Here

Pr(l_i ≤ x̂_i ≤ u_i) = 1, Pr(x̂_i − 1/2 ≤ c^OB_i < x̂_i + 1/2 | (l_i ≤ x̂_i ≤ u_i, E_{i+1})) = φ_σ(r_ii).

Thus Pr(x^OB_i = x̂_i | E_{i+1}) = φ_σ(r_ii).
Then (17) follows from (18) and (21) with each superscript BB replaced by OB.

From the proof of (17), we observe that the formula holds no matter what the distribution of x̂ over the box B is. Furthermore, the formula is identical to the one for the success probability of the ordinary Babai estimator x^OB when x̂ in (1) is deterministic and is not subject to any box constraint; for more details, see [4]. The following result shows the relationship between P^BB and P^OB.

Corollary 1: Under the same assumption as in Theorem 1,

P^OB < P^BB, (25)

lim_{u_i − l_i → ∞, all 1≤i≤n} P^BB = P^OB. (26)

Proof. Note that for each i,

φ_σ(r_ii) = erf(r_ii/(2√2σ)) < 1.

Thus

φ_σ(r_ii) = 1/(u_i − l_i + 1) φ_σ(r_ii) + (u_i − l_i)/(u_i − l_i + 1) φ_σ(r_ii) < 1/(u_i − l_i + 1) + (u_i − l_i)/(u_i − l_i + 1) φ_σ(r_ii).

Then, by Theorem 1, we can conclude that (25) holds, and we can also see that (26) holds.

Corollary 2: Under the same assumption as in Theorem 1, P^BB and P^OB increase when σ decreases, and

lim_{σ→0} P^BB = lim_{σ→0} P^OB = 1.

Proof. For a given ζ > 0, when σ decreases, erf(ζ/(2√2σ)) increases, and lim_{σ→0} erf(ζ/(2√2σ)) = 1. Then from Theorem 1, we immediately see that the corollary holds.
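The Babai recursions (14)-(15) and the success-probability formulas (16)-(17) can be sketched as follows. This is an illustrative implementation with names of our choosing; the rounding below uses the ordinary half-up rule rather than the tie-breaking convention of the Notation paragraph, a probability-zero difference under continuous noise.

```python
import numpy as np
from math import erf, sqrt

def babai(R, ytilde, l=None, u=None):
    """Box-constrained Babai point (14); with l = u = None it reduces to
    the ordinary Babai point (15).  R is upper triangular with positive
    diagonal, as assumed throughout the paper."""
    n = R.shape[1]
    x = np.zeros(n, dtype=int)
    for i in range(n - 1, -1, -1):
        c = (ytilde[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
        xi = int(np.floor(c + 0.5))           # nearest integer (half-up)
        if l is not None:
            xi = min(max(xi, l[i]), u[i])     # clamp to the box as in (14)
        x[i] = xi
    return x

def p_bb(R, l, u, sigma):
    """Success probability (16) of the box-constrained Babai estimator."""
    p = 1.0
    for i in range(R.shape[1]):
        phi = erf(R[i, i] / (2 * sqrt(2) * sigma))
        d = u[i] - l[i]
        p *= 1.0 / (d + 1) + d / (d + 1) * phi
    return p

def p_ob(R, sigma):
    """Success probability (17) of the ordinary Babai estimator."""
    return float(np.prod([erf(R[i, i] / (2 * sqrt(2) * sigma))
                          for i in range(R.shape[1])]))
```

A quick Monte Carlo run over the model (5a) reproduces (16) to sampling accuracy, and Corollary 1's inequality P^OB < P^BB can be checked directly.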

IV. EFFECTS OF LLL-P, SQRD AND V-BLAST ON P^BB

Suppose that we perform the QR factorization (7) by using a column permutation strategy, such as LLL-P, SQRD or V-BLAST; then we have the reduced box-constrained linear model (12). For (12) we can define its corresponding Babai point z^BB and use it as an estimator of ẑ, which is equal to P^T x̂; equivalently, we use P z^BB as an estimator of x̂. In this section, we will investigate how the LLL-P, SQRD and V-BLAST column permutation strategies affect the success probability of the box-constrained Babai estimator.

A. Effect of LLL-P on P^BB

The LLL-P strategy involves a sequence of permutations of two consecutive columns of R. To investigate how LLL-P affects P^BB, we first look at one column permutation. Suppose that δ r²_{k−1,k−1} > r²_{k−1,k} + r²_{kk} for some k for the R in (5). After the permutation of columns k−1 and k, R becomes R̄ = G^T_{k−1,k} R P_{k−1,k} (see (9)). Then, with the transformations given in (11), where Q = G_{k−1,k} and P = P_{k−1,k}, (5) is transformed to (12). We will compare Pr(x^BB = x̂) and Pr(z^BB = ẑ).

To prove our main results, we need the following two lemmas.

Lemma 1: Given α > 0, define

f(ζ, α) = (1 − 2ζ²)(1 + α erf(ζ)) − (2α/√π) ζ exp(−ζ²) (27)

for ζ ≥ 0. Then f(ζ, α) is a strictly decreasing function of ζ and has a unique zero r(α), i.e.,

f(r(α), α) = 0. (28)

When ζ > r(α), f(ζ, α) < 0, and when ζ < r(α), f(ζ, α) > 0. Furthermore, 0 < r(α) < 1/√2, r(α) is a strictly decreasing function of α, and lim_{α→∞} r(α) = 0.

Proof. By some simple calculations, we obtain

∂f(ζ, α)/∂ζ = −4ζ(1 + α erf(ζ)).

Thus, for any ζ ≥ 0 and α > 0, ∂f(ζ, α)/∂ζ ≤ 0, where the equality holds if and only if ζ = 0. Therefore, f(ζ, α) is a strictly decreasing function of ζ. Note that f(0, α) = 1 > 0 and f(1/√2, α) < 0 for α > 0. By the implicit function theorem, there exists a unique r(α), which is continuously differentiable with respect to α, such that (28) holds and 0 < r(α) < 1/√2.
Since f(ζ, α) is strictly decreasing with respect to ζ, when ζ > r(α), f(ζ, α) < 0, and when ζ < r(α), f(ζ, α) > 0.

In the following, we show that r(α) is a strictly decreasing function of α. From (28), we have

(1 − 2r²(α))(1 + α erf(r(α))) = (2α/√π) r(α) exp(−r²(α)). (29)

Taking the derivative of both sides of (29) with respect to α yields

−4 r(α) r′(α)(1 + α erf(r(α))) + (1 − 2r²(α))( erf(r(α)) + (2α/√π) r′(α) exp(−r²(α)) ) = (2/√π) r(α) exp(−r²(α)) + (2α/√π) r′(α) exp(−r²(α))(1 − 2r²(α)).

Therefore,

−4 r(α) r′(α)(1 + α erf(r(α))) = (2/√π) r(α) exp(−r²(α)) − (1 − 2r²(α)) erf(r(α)) = (1/α)(1 − 2r²(α)),

where the latter equality follows from (29). Hence

r′(α) = −(1 − 2r²(α)) / (4 α r(α)(1 + α erf(r(α)))) < 0.

Finally, we show that lim_{α→∞} r(α) = 0. Since r(α) is continuously differentiable with respect to α and r(α) > 0 for α > 0, lim_{α→∞} r(α) exists. Let η = lim_{α→∞} r(α); by the fact that r(α) is strictly decreasing in α, we obtain 0 ≤ η < 1/√2. From (29), we have

(1 − 2r²(α)) erf(r(α)) − (2/√π) r(α) exp(−r²(α)) = −(1 − 2r²(α))/α.

Then we take limits on both sides of the above equation as α → ∞, resulting in

(1 − 2η²) erf(η) − (2/√π) η exp(−η²) = 0.

Since 0 ≤ η < 1/√2, one can conclude from the above equation that lim_{α→∞} r(α) = η = 0.

Remark 1: Given α, we can easily solve (28) by a numerical method, e.g., Newton's method, to find r(α).

Lemma 2: Given α, β > 0, define

g(ζ, α, β) = (1 + α erf(ζ))(1 + α erf(β/ζ)), ζ > 0. (30)

Then, when

min{√β, β/r(α)} ≤ ζ < max{√β, β/r(α)}, (31)

where r(α) is defined in Lemma 1, g(ζ, α, β) is a strictly decreasing function of ζ.

Proof. By the definition of g, we can easily obtain

∂g(ζ, α, β)/∂ζ = (2α/(√π ζ))(1 + α erf(ζ))(1 + α erf(β/ζ)) [h(ζ, α) − h(β/ζ, α)],

where

h(ζ, α) = ζ exp(−ζ²)/(1 + α erf(ζ)). (32)

It is easy to see that, in order to show the result, we need only to show that h(ζ, α) < h(β/ζ, α) under the condition (31) with ζ ≠ β/ζ. By some simple calculations and (27), we have

∂h(ζ, α)/∂ζ = ( exp(−ζ²)/(1 + α erf(ζ))² ) f(ζ, α). (33)

Now we assume that ζ satisfies (31) with ζ ≠ β/ζ. If √β < β/r(α), by (31), we have ζ > β/ζ > r(α), and then from Lemma 1, in this case, f(ζ, α) < 0; thus ∂h(ζ, α)/∂ζ < 0, i.e., h(ζ, α) is a strictly decreasing function of ζ, and thus h(ζ, α) < h(β/ζ, α). If √β > β/r(α), by (31), we obtain ζ < β/ζ < r(α), and then from Lemma 1, f(ζ, α) > 0; thus ∂h(ζ, α)/∂ζ > 0, i.e., h(ζ, α) is a strictly increasing function of ζ, and thus again h(ζ, α) < h(β/ζ, α).

With the above lemmas, we can show how the success probability of the box-constrained Babai estimator changes after two consecutive columns are swapped when the LLL-P strategy is applied. Specifically, we have the following theorem.

Theorem 2: Suppose that in (1) the box B is a cube with edge length d, x̂ is uniformly distributed over B, and x̂ and v are independent. Suppose that (1) is transformed to (5) through the QR factorization (4) and δ r²_{k−1,k−1} > r²_{k−1,k} + r²_{kk}. After the permutation of columns k−1 and k of R and triangularization (see (9)), (5) is transformed to (12).

1) If r_kk ≥ 2√2 σ r(d), where r(·) is defined in Lemma 1, then after the permutation the success probability of the box-constrained Babai estimator increases, i.e.,

Pr(x^BB = x̂) ≤ Pr(z^BB = ẑ). (34)

2) If r_{k−1,k−1} ≤ 2√2 σ r(d), then after the permutation the success probability of the box-constrained Babai estimator decreases, i.e.,

Pr(x^BB = x̂) ≥ Pr(z^BB = ẑ). (35)

Furthermore, the equality in each of (34) and (35) holds if and only if r_{k−1,k} = 0.

Proof. When r_{k−1,k} = 0, by Theorem 1, we see that the equalities in (34) and (35) hold. In the following we assume r_{k−1,k} ≠ 0 and show that the strict inequalities in (34) and (35) hold.
Define

β ≡ r_{k−1,k−1} r_kk/(8σ²) = r̄_{k−1,k−1} r̄_kk/(8σ²), (36)

where for the second equality, see (10). Using δ r²_{k−1,k−1} > r²_{k−1,k} + r²_{kk} and the equalities in (10), we can easily verify that

√β ≤ max{ r̄_{k−1,k−1}/(2√2σ), r̄_kk/(2√2σ) } < max{ r_{k−1,k−1}/(2√2σ), r_kk/(2√2σ) } = r_{k−1,k−1}/(2√2σ) = β/(r_kk/(2√2σ)), (37)

β/(r_{k−1,k−1}/(2√2σ)) = r_kk/(2√2σ) = min{ r_{k−1,k−1}/(2√2σ), r_kk/(2√2σ) } < min{ r̄_{k−1,k−1}/(2√2σ), r̄_kk/(2√2σ) } ≤ √β. (38)

Now we prove part 1. Note that after the permutation, r_{k−1,k−1} and r_kk change, but the other diagonal entries of R do not change. Then by Theorem 1, we can easily observe that (34) is equivalent to

[ 1/(d+1) + d/(d+1) erf(r_{k−1,k−1}/(2√2σ)) ] [ 1/(d+1) + d/(d+1) erf(r_kk/(2√2σ)) ] ≤ [ 1/(d+1) + d/(d+1) erf(r̄_{k−1,k−1}/(2√2σ)) ] [ 1/(d+1) + d/(d+1) erf(r̄_kk/(2√2σ)) ]. (39)

By (30), we can see that (39) is equivalent to

g( max{r_{k−1,k−1}, r_kk}/(2√2σ), d, β ) ≤ g( max{r̄_{k−1,k−1}, r̄_kk}/(2√2σ), d, β ). (40)

If r_kk ≥ 2√2 σ r(d), then the right-hand side of the last equality in (37) satisfies

β/(r_kk/(2√2σ)) ≤ β/r(d). (41)

Then, by combining (37) and (41) and applying Lemma 2, we can conclude that the strict inequality in (40) holds.

The proof of part 2 is similar. The inequality (35) is equivalent to

g( min{r_{k−1,k−1}, r_kk}/(2√2σ), d, β ) ≥ g( min{r̄_{k−1,k−1}, r̄_kk}/(2√2σ), d, β ). (42)

If r_{k−1,k−1} ≤ 2√2 σ r(d), then the left-hand side of the first equality in (38) satisfies

β/r(d) ≤ β/(r_{k−1,k−1}/(2√2σ)). (43)

Then, by combining (38) and (43) and applying Lemma 2, we can conclude that the strict inequality in (42) holds.

We make a few remarks about Theorem 2.

Remark 2: In the theorem, B is assumed to be a cube, not a more general box. This restriction simplifies the theoretical analysis. Furthermore, in practical applications such as communications, B is indeed often a cube.

Remark 3: After the permutation, the larger of r_{k−1,k−1} and r_kk becomes smaller (see (37)) and the smaller one becomes larger (see (38)), so the gap between r_{k−1,k−1} and r_kk becomes smaller. This makes P^BB increase under the condition r_kk ≥ 2√2 σ r(d) or decrease under the condition r_{k−1,k−1} ≤ 2√2 σ r(d). It is natural to ask: for fixed r_{k−1,k−1} and r_kk, when will P^BB increase most or decrease most after the permutation under the corresponding conditions? From the proof we observe that P^BB becomes maximal when the first inequality in (37) becomes an equality, or minimal when the last inequality in (38) becomes an equality, under the corresponding conditions. Either of the two equalities holds if and only if r̄_{k−1,k−1} = r̄_kk, which is equivalent to r²_{k−1,k} + r²_{kk} = r_{k−1,k−1} r_kk by (10).

Remark 4: The case where r_kk < 2√2 σ r(d) < r_{k−1,k−1} is not covered by the theorem. In this case, P^BB may increase or decrease after the permutation; for more details, see the simulations in Sec. IV-D.

Based on Theorem 2, we can establish the following general result for the LLL-P strategy.

Theorem 3: Suppose that in (1) the box B is a cube with edge length d, x̂ is uniformly distributed over B, and x̂ and v are independent. Suppose that (1) is first transformed to (5) through the QR factorization (4) and then to (12) through the QR factorization (7), where the LLL-P strategy is used for column permutations.

1) If the diagonal entries of R in (5) satisfy

min_{1≤i≤n} r_ii ≥ 2√2 σ r(d), (44)

where r(·) is defined in Lemma 1, then

Pr(x^BB = x̂) ≤ Pr(z^BB = ẑ).
(45)

2) If the diagonal entries of R in (5) satisfy

max_{1≤i≤n} r_ii ≤ 2√2 σ r(d), (46)

then

Pr(x^BB = x̂) ≥ Pr(z^BB = ẑ). (47)

The equalities in (45) and (47) hold if and only if no column permutation occurs in the process or, whenever two consecutive columns, say k−1 and k, are permuted, r_{k−1,k} = 0.

Proof. It is easy to show that after each column permutation, the smaller of the two diagonal entries of R involved in the permutation either stays unchanged (the involved super-diagonal entry is 0 in this case) or strictly increases, while the larger one either stays unchanged or strictly decreases (see (37) and (38)). Thus, after each column permutation, the minimum of the diagonal entries of R either stays unchanged or strictly increases, and the maximum either stays unchanged or strictly decreases, so the diagonal entries of any upper triangular R produced after a column permutation satisfy

min_{1≤i≤n} r_ii ≤ r̄_kk ≤ max_{1≤i≤n} r_ii for all k = 1, ..., n.

Then the conclusion follows from Theorem 2.

We make some remarks about Theorem 3.

Remark 5: The quantity r(d) is involved in the conditions. To get some idea of how large it is, we computed it for a few different d = 2^k, k = 1, 2, 3, 4, 5; the corresponding values of r(d) decrease with k, as proved in Lemma 1. As d → ∞, r(d) → 0. Thus, when d is large enough, the condition (44) will be satisfied. By Corollary 1, taking the limit as d → ∞ on both sides of (45), we obtain the following result proved in [4]: Pr(x^OB = x̂) ≤ Pr(z^OB = ẑ), i.e., LLL-P always increases the success probability of the ordinary Babai estimator.

Remark 6: The two conditions (44) and (46) also involve the noise standard deviation σ. When σ is small, (44) is likely to hold, so applying LLL-P is likely to increase P^BB; when σ is large, (46) is likely to hold, so applying LLL-P is likely to decrease P^BB. It is quite surprising that when σ is large enough, applying LLL-P will decrease P^BB. Thus, before applying LLL-P, one needs to check the conditions (44) and (46).
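Following Remark 1, r(α) is easy to compute numerically. The sketch below uses bisection instead of Newton's method, exploiting the strict monotonicity of f(·, α) from Lemma 1, and then checks the conditions (44) and (46) of Theorem 3; the function names are ours.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def f(z, alpha):
    """f(zeta, alpha) from (27); strictly decreasing in zeta for zeta >= 0."""
    return (1 - 2*z**2) * (1 + alpha * erf(z)) - (2*alpha/sqrt(pi)) * z * exp(-z**2)

def r_of_alpha(alpha, tol=1e-12):
    """Unique zero r(alpha) of f(., alpha) in (0, 1/sqrt(2)) (Lemma 1),
    found by bisection; Remark 1 suggests, e.g., Newton's method."""
    lo, hi = 0.0, 1/sqrt(2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid, alpha) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lllp_advice(R, d, sigma):
    """Check the conditions of Theorem 3 for a cube of edge length d:
    'increase' if (44) holds (min r_ii >= 2*sqrt(2)*sigma*r(d)),
    'decrease' if (46) holds (max r_ii <= 2*sqrt(2)*sigma*r(d)),
    'unclear' otherwise."""
    thr = 2 * sqrt(2) * sigma * r_of_alpha(d)
    diag = np.diag(R)
    if diag.min() >= thr:
        return 'increase'
    if diag.max() <= thr:
        return 'decrease'
    return 'unclear'
```

For d = 1 this reproduces the constant 2√2 r(1) ≈ 1.6798 quoted in Example 1 of Section IV-B.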
If (44) holds, one can confidently apply LLL-P. If (46) holds, one should not apply it. If neither holds, i.e.,

min_{1≤i≤n} r_ii < 2√2 σ r(d) < max_{1≤i≤n} r_ii,

applying LLL-P may increase or decrease P^BB.

B. Effects of SQRD and V-BLAST on P^BB

SQRD and V-BLAST have been used to find better ordinary and box-constrained Babai estimators in the literature. It has been demonstrated in [4] that, unlike LLL-P, both SQRD and V-BLAST may decrease the success probability of the ordinary Babai estimator when the parameter vector x̂ is deterministic and not subject to any constraint. We would like to know how SQRD and V-BLAST affect P^BB. Unlike LLL-P, both SQRD and V-BLAST usually involve permutations of two non-consecutive columns, resulting in changes to all the diagonal entries between and including the two columns. This makes it very difficult to analyze under what condition P^BB increases or decreases. We will use numerical test results to show the effects of SQRD and V-BLAST on P^BB, with explanations.

In Theorem 3 we showed that if the condition (44) holds, then applying LLL-P will increase P^BB, and if

(46) holds, then applying LLL-P will decrease P_BB. The following example shows that both SQRD and V-BLAST may decrease P_BB even if (44) holds, and may increase P_BB even if (46) holds.

Example 1: Let d = 1, consider two matrices R^(1) and R^(2), and apply SQRD, V-BLAST and LLL-P to each of them. If σ = 0.2, then it is easy to verify that for both R^(1) and R^(2), (44) holds (note that 2√2 r(d) = 2√2 r(1) = 1.6798). Simple calculations using (6) show that SQRD decreases P_BB, while V-BLAST and LLL-P increase P_BB, for R^(1); and V-BLAST decreases P_BB, while SQRD and LLL-P leave P_BB unchanged, for R^(2). If σ = 2.2, then it is easy to verify that for both R^(1) and R^(2), (46) holds, and simple calculations using (6) show that SQRD increases P_BB, while V-BLAST and LLL-P decrease P_BB, for R^(1); and V-BLAST increases P_BB, while SQRD and LLL-P leave P_BB unchanged, for R^(2).

Although Example 1 indicates that under the condition (44), unlike LLL-P, both SQRD and V-BLAST may decrease P_BB, often they increase it. This is the reason why SQRD and V-BLAST (especially the latter) have often been used to increase the accuracy of the Babai estimator in practice. Example 1 also indicates that under the condition (46), unlike LLL-P, both SQRD and V-BLAST may increase P_BB, but often they decrease it. This is the opposite of what we commonly believe. Later we will give numerical test results to show both phenomena. In the following we give some explanations. It is easy to show that, like LLL-P, V-BLAST increases min_{1≤i≤n} r_ii (not strictly) after each permutation, and that, like LLL-P, SQRD decreases max_{1≤i≤n} r_ii (not strictly) after each permutation. The relation between V-BLAST and SQRD can be found in [34] and [28].
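The reversal phenomenon of Example 1 can be reproduced in a small sketch. It assumes formula (6) has the product form P_BB = Π_i [1 + d·erf(r_ii/(2√2σ))]/(d+1), reconstructed from the surrounding text; the two diagonals below are hypothetical, chosen only to have the same determinant with a different spread.

```python
import math

def p_bb(diag, sigma, d):
    """Success probability of the box-constrained Babai estimator from the
    diagonal of R, assuming the product form of formula (6)."""
    c = 2.0 * math.sqrt(2.0) * sigma
    p = 1.0
    for r in diag:
        p *= (1.0 + d * math.erf(r / c)) / (d + 1.0)
    return p

# hypothetical diagonals with the same product (same |det R|), different spread
spread = [2.5, 0.4]
equal = [1.0, 1.0]
small_noise = p_bb(equal, 0.2, 1) > p_bb(spread, 0.2, 1)  # equalizing helps
large_noise = p_bb(equal, 2.2, 1) < p_bb(spread, 2.2, 1)  # equalizing hurts
```

With σ = 0.2 the spread-out diagonal gives a strictly smaller success probability, and with σ = 2.2 the ordering flips, mirroring the two regimes of Example 1.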
Thus if the condition (44) holds before applying V-BLAST, it will also hold after applying it; and if the condition (46) holds before applying SQRD, it will also hold after applying it. Often applying V-BLAST decreases max_{1≤i≤n} r_ii and applying SQRD increases min_{1≤i≤n} r_ii (both may fail sometimes, see Example 1). Thus the gaps between the large diagonal entries and the small ones of R often decrease after applying SQRD or V-BLAST. From the proof of Theorem 2 we see that reducing the gaps will likely increase P_BB under (44) and decrease P_BB under (46). Thus it is likely that both SQRD and V-BLAST will increase P_BB under (44) and decrease it under (46). We will give further explanations in the next subsection.

C. A bound on P_BB

In this subsection we give a bound on P_BB, which is an upper bound under (44) and becomes a lower bound under (46). This bound can help us understand what a column permutation strategy should try to achieve.

Theorem 4: Suppose that the assumptions in Theorem 1 hold. Let the box B in (1b) be a cube with edge length d and denote γ = (det(R))^{1/n}.

1) If the condition (44) holds, then

Pr(x_BB = x̂) ≤ [ 1/(d+1) + (d/(d+1)) erf( γ/(2√2 σ) ) ]^n. (48)

2) If the condition (46) holds, then

Pr(x_BB = x̂) ≥ [ 1/(d+1) + (d/(d+1)) erf( γ/(2√2 σ) ) ]^n. (49)

The equality in either (48) or (49) holds if and only if r_ii = γ for i = 1, …, n.

Proof. We prove only part 1; part 2 can be proved similarly. Note that γ^n = Π_{i=1}^n r_ii. Obviously, if r_ii = γ for i = 1, …, n, then by (6) the equality in (48) holds. In the following we assume that there exist j and k such that r_jj ≠ r_kk; we then only need to show that the strict inequality in (48) holds. Denote

F(ζ) = ln( 1 + d erf( exp(ζ)/(2√2 σ) ) ), η_i = ln(r_ii) for i = 1, 2, …, n, η̄ = (1/n) Σ_{i=1}^n η_i.

Then by (6), (48) is equivalent to

(1/n) Σ_{i=1}^n F(η_i) < F(η̄).

Since min_{1≤i≤n} r_ii ≥ 2√2 σ r(d) and r_jj ≠ r_kk, it suffices to show that F(ζ) is strictly concave on (ln(2√2 σ r(d)), +∞). Therefore, we only need to show that F″(ζ) < 0 when ζ > ln(2√2 σ r(d)). To simplify the notation, denote ξ = exp(ζ)/(2√2 σ). By some simple calculations, we obtain

F′(ζ) = (2/√π) d ξ exp(−ξ²) / (1 + d erf(ξ)) = d h(ξ, d),

where h(·, ·) is defined in (32). Then

F″(ζ) = d ξ (∂h(ξ, d)/∂ξ).

By the proof of Lemma 2, ∂h(ξ, d)/∂ξ < 0 when ξ > r(d). Thus, we can conclude that F″(ζ) < 0 when ζ > ln(2√2 σ r(d)), completing the proof.

Now we make some remarks about Theorem 4.

Remark 7: The quantity γ is invariant under column permutations, i.e., for R and R̄ in (7), we have the same γ no matter what the permutation matrix P is. Thus the bounds in (48) and (49), which are actually the same quantity, are invariant under column permutations. Although the condition (44) is not invariant under column permutations, if it holds before applying LLL-P or V-BLAST, it will hold afterwards, since the minimum of the diagonal entries of R̄ will not be smaller than that of R after applying LLL-P or V-BLAST. Similarly, the condition (46) is not invariant under column permutations either, but if it holds before applying LLL-P or SQRD, it will hold afterwards, since the maximum of the diagonal entries of R̄ will not be larger than that of R after applying LLL-P or SQRD.

Remark 8: The equalities in (48) and (49) are reached if all the diagonal entries of R are identical. This suggests that if the gaps between the large entries and the small entries become smaller after permutations, it is likely that P_BB increases under the condition (44) or decreases under the condition (46). As we know, the gap between the largest diagonal entry and the smallest one decreases after applying LLL-P. Numerical tests indicate that usually this is also true for both V-BLAST and SQRD.
Thus both V-BLAST and SQRD will likely bring P_BB closer to the bound under the two opposite conditions, respectively.

Remark 9: When d → ∞, by Lemma 1, r(d) → 0, so the condition in part 1 of Theorem 4 becomes min_{1≤i≤n} r_ii ≥ 0, which always holds. Taking the limit as d → ∞ on both sides of (48) and using Corollary 1, we obtain

Pr(x_OB = x̂) ≤ ( erf( γ/(2√2 σ) ) )^n. (50)

The above result was obtained in [37] and a simple proof was provided in [4].

D. Numerical tests

We have shown that if (44) holds, then LLL-P increases P_BB and (48) is an upper bound on P_BB; and if (46) holds, then LLL-P decreases P_BB and (49) is a lower bound on P_BB. Example 1 in Sec. IV-B indicates that this conclusion does not always hold for SQRD and V-BLAST. To further understand the effects of LLL-P, SQRD and V-BLAST on P_BB, and to see how close they bring their corresponding P_BB to the bounds given by (48) and (49), we performed some numerical tests in MATLAB. For comparison, we also performed tests for P_OB. First we performed tests for the following two cases:

Case 1. A is an n × n matrix whose entries are chosen independently and randomly according to a zero-mean Gaussian distribution with variance 1/2.

Case 2. A = UDV^T, where U and V are random orthogonal matrices obtained by the QR factorization of matrices whose entries are chosen independently and randomly according to the standard Gaussian distribution, and D is an n × n diagonal matrix with d_ii = 10^{3(n/2−i)/(n−1)}. The condition number of A is 1000.

In the tests for each case, we first chose n = 4 and B = [0, 1]^4 and took different noise standard deviations σ to test different situations according to the conditions (44) and (46) imposed in Theorems 3 and 4. The edge length d of B is 1, so in (44) and (46), 2√2 r(d) = 2√2 r(1) = 1.6798. Details about choosing σ will be given later.
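The two-sided behavior of the bound in Theorem 4 can be sketched as follows. The sketch assumes the reconstructed product form of (6) and the bound (48)/(49); the function names `success_prob` and `mu_bb` and the sample diagonal are illustrative.

```python
import math

def success_prob(diag, sigma, d):
    """P_BB from the diagonal of R, assuming the product form of (6)."""
    c = 2.0 * math.sqrt(2.0) * sigma
    return math.prod((1.0 + d * math.erf(r / c)) / (d + 1.0) for r in diag)

def mu_bb(diag, sigma, d):
    """Permutation-invariant bound (48)/(49): the value of P_BB one would
    get if every diagonal entry equaled gamma = det(R)^(1/n)."""
    n = len(diag)
    gamma = math.prod(diag) ** (1.0 / n)
    c = 2.0 * math.sqrt(2.0) * sigma
    return ((1.0 + d * math.erf(gamma / c)) / (d + 1.0)) ** n

diag = [3.0, 1.5, 2.0, 1.0]    # hypothetical diagonal of R
upper = success_prob(diag, 0.2, 1) <= mu_bb(diag, 0.2, 1)  # small noise, (48)
lower = success_prob(diag, 5.0, 1) >= mu_bb(diag, 5.0, 1)  # large noise, (49)
```

Since γ depends only on det(R), `mu_bb` returns the same value for every column ordering, which is exactly why the bound can serve as a permutation-independent target.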
We use P_BB, P_BB^LLL, P_BB^SQRD and P_BB^BLAST to denote, respectively, the success probability of the box-constrained Babai estimator corresponding to the plain QR factorization (i.e., no permutations are involved), LLL-P, SQRD and V-BLAST, and use µ_BB to denote the right-hand side of (48) or (49), which is an upper bound if (44) holds and a lower bound if (46) holds. Similarly, P_OB, P_OB^LLL, P_OB^SQRD and P_OB^BLAST denote, respectively, the success probability of the ordinary Babai estimator corresponding to the QR factorization, LLL-P, SQRD and V-BLAST. We use µ_OB to denote the right-hand side of (50), which is an upper bound on P_OB, P_OB^LLL, P_OB^SQRD and P_OB^BLAST. For each case, we performed 10 runs (notice that for each run we have different A, x̂ and v due to randomness), and the results are displayed in Tables I–VI.

In Tables I and II, σ = σ_1 ≡ min_{1≤i≤n} r_ii/1.8. It is easy to verify that the condition (44) holds. This means that P_BB ≤ P_BB^LLL by Theorem 3, and that P_BB, P_BB^LLL ≤ µ_BB by Theorem 4 and Remark 7. The numerical results given in Tables I and II are consistent with these theoretical results. The numerical results also indicate that SQRD and V-BLAST usually increase P_BB (not strictly), although there is one exceptional case for SQRD in Table II. We observe that the permutation strategies increase P_BB more significantly for Case 2 than for Case 1. The reason is that A is more ill-conditioned in Case 2, resulting in larger gaps between the diagonal entries of R, which can usually be reduced more effectively by the permutation strategies. We also observe that P_BB^SQRD ≤ µ_BB in both tables. Although in theory this inequality may not hold, as we cannot guarantee that the condition (44) holds after applying SQRD, usually SQRD

can make min_{1≤i≤n} r_ii larger. Thus if (44) holds before applying SQRD, it is likely that the condition still holds after applying it, and hence it is likely that P_BB^SQRD ≤ µ_BB holds.

Tables III and IV are opposite to Tables I and II. In both tables, σ = σ_2 ≡ max_{1≤i≤n} r_ii/1.6, so the condition (46) holds. This means that P_BB ≥ P_BB^LLL by Theorem 3, and that P_BB, P_BB^LLL ≥ µ_BB by Theorem 4 and Remark 7. The numerical results given in the two tables are consistent with these theoretical results. The results in the two tables also indicate that both SQRD and V-BLAST decrease P_BB (not strictly), although Example 1 shows that neither is always true under the condition (46). We also observe that P_BB^BLAST ≥ µ_BB in both tables. Although in theory this inequality may not hold, as we cannot guarantee that the condition (46) holds after applying V-BLAST, usually V-BLAST can make max_{1≤i≤n} r_ii smaller. Thus if (46) holds before applying V-BLAST, it is likely the condition still holds after applying it, and hence it is likely that P_BB^BLAST ≥ µ_BB holds.

In Tables V and VI, σ = σ_3 ≡ (0.3 max_{1≤i≤n} r_ii + 0.7 min_{1≤i≤n} r_ii)/1.68. In this case,

min_{1≤i≤n} r_ii < 2√2 σ_3 r(d) < max_{1≤i≤n} r_ii,

indicating that (46) does not hold and it is very likely that (44) does not hold either. In theory we do not have results that cover this situation. The numerical results in the two tables indicate that each of the three permutation strategies can either strictly increase or strictly decrease P_BB, and that µ_BB can be larger or smaller than P_BB^LLL, P_BB^SQRD and P_BB^BLAST. The reason we chose the weights 0.3 and 0.7, rather than the more natural choice of 0.5 and 0.5, in defining σ_3 is that otherwise we might not be able to observe both the increasing and the decreasing phenomena within a limited number of runs.

Now we make some comments on the success probability of ordinary Babai points. From Tables I–VI, we observe that LLL-P always increases P_OB (not strictly), and SQRD and V-BLAST almost always increase P_OB (there is one exceptional case for SQRD in Table II and two exceptional cases for V-BLAST in Table VI). Thus the ordinary case is different from the box-constrained case.
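The difference between the ordinary and the box-constrained Babai points can be made concrete with a minimal sketch, assuming both are computed by the usual back-substitution-and-round recursion, with each entry clamped to the box in the constrained case; the function `babai`, the matrix and the noise values are illustrative.

```python
import math

def babai(R, y, box=None):
    """Babai point by back substitution on an upper triangular R (list of rows).
    With box=(l, u) each entry is clamped to the constraint (box-constrained
    point x_BB); with box=None this is the ordinary point x_OB."""
    n = len(R)
    x = [0] * n
    for i in range(n - 1, -1, -1):
        s = sum(R[i][j] * x[j] for j in range(i + 1, n))
        c = (y[i] - s) / R[i][i]
        x[i] = math.floor(c + 0.5)                 # round to nearest integer
        if box is not None:
            x[i] = min(max(x[i], box[0]), box[1])  # clamp to the box
    return x

R = [[2.0, 0.7], [0.0, 1.5]]     # hypothetical upper triangular factor
y = [2.1, -3.0]                  # observation for x_true = [1, 0], noisy last entry
x_ob = babai(R, y)               # ordinary Babai point, here [2, -2]
x_bb = babai(R, y, box=(0, 1))   # box-constrained Babai point, here [1, 0]
```

Here a large noise component pushes the ordinary point outside the box [0, 1]^2, while the clamped recursion recovers the true vector, which is one reason the two success probabilities can differ substantially.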
We also observe that P_OB ≤ P_BB for the same permutation strategy. Sometimes the difference between the two is large (see Tables IV and VI). Each of Tables I–VI displays the results for only 10 runs due to space limitations. To make up for this shortcoming, we give Tables VII and VIII, which display some statistics for 1000 runs on data generated in exactly the same way as the data for the 10 runs. Specifically, these two tables display the number of runs in which P_BB (P_OB) strictly increases, stays unchanged, or strictly decreases after each of the three permutation strategies is applied, for Case 1 and Case 2, respectively. In the two tables, σ_1, σ_2 and σ_3 are defined in the same way as those used in Tables I–VI. From Tables VII and VIII, we can see that these permutation strategies often increase or decrease P_BB and P_OB for the same data. The numerical results given in all the tables suggest that if the condition (44) holds, we can use any of these permutation strategies with confidence; and if the condition (46) holds, we should not use any of them.

Tables VII and VIII do not show which permutation strategy increases P_BB most for small σ, and the information on this given in Tables I–VI is limited. In the following we give more test results to investigate this. As the main application of this research is in digital communications, we used the MIMO model in the new tests. For a fixed dimension, a fixed type of QAM and a fixed E_b/N_0, we randomly generated 200 complex channel matrices whose entries independently and identically follow the standard complex normal distribution, and for each generated channel matrix, we randomly generated 500 pairs of complex signal vector (whose entries are uniformly distributed over the QAM constellation) and complex noise vector (whose entries are independently and identically normally distributed), resulting in 100,000 instances of a complex linear model. Each complex instance was then transformed to an instance of the real linear model (1).
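The transformation from the complex model to the real model (1) is assumed here to be the standard real embedding, which stacks real and imaginary parts; a sketch (the toy 1 × 1 instance is illustrative):

```python
def complex_to_real(A, y):
    """Real embedding of a complex linear model y = A x + v: an m x n complex
    system becomes a 2m x 2n real system acting on [Re(x); Im(x)]."""
    m, n = len(A), len(A[0])
    A_real = [[A[i][j].real for j in range(n)] + [-A[i][j].imag for j in range(n)]
              for i in range(m)]
    A_real += [[A[i][j].imag for j in range(n)] + [A[i][j].real for j in range(n)]
               for i in range(m)]
    y_real = [c.real for c in y] + [c.imag for c in y]
    return A_real, y_real

# toy 1x1 instance: y = (2+1j) * (1-1j) = 3 - 1j
A_real, y_real = complex_to_real([[2 + 1j]], [3 - 1j])
# A_real = [[2.0, -1.0], [1.0, 2.0]], y_real = [3.0, -1.0]
```

One can check that the real system applied to [Re(x), Im(x)] = [1, −1] reproduces y_real, so a QAM constellation in the complex model maps to a box constraint in the real model.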
Unlike the previous tests, we compare the experimental error probabilities of the box-constrained Babai estimators (i.e., the ratio of the number of runs in which the Babai point is not equal to the true parameter vector x̂ to 100,000) corresponding to QR, LLL-P, SQRD and V-BLAST, and the theoretical bound on the error probability of a Babai estimator (i.e., the difference between 1 and the bound on its success probability, see (48)). Figures 1 and 2 respectively display the experimental error probabilities corresponding to the QR factorization and the three permutation strategies, and the average theoretical bound over the 100,000 runs, versus E_b/N_0 = 5:5:30 for the 4 × 4 MIMO system with 16-QAM and 64-QAM. Similarly, Figures 3 and 4 respectively show the corresponding results for the 8 × 8 MIMO system with 16-QAM and 64-QAM, and Figures 5 and 6 show the corresponding results for the 16 × 16 MIMO system with 16-QAM and 64-QAM, respectively.

From Figures 1–6, we can see that on average all three column permutation strategies decrease the error probability of the Babai point, and that the error bound is a lower bound (this is because (44) usually holds, which ensures (48)). These figures also show that the effect of V-BLAST is much more significant than that of LLL-P and SQRD, which have more or less the same performance. This phenomenon is similar to that for P_OB, as shown in [4].

V. ON THE CONJECTURE PROPOSED IN [32]

In [32], a conjecture was made on the ordinary Babai estimator, based on which a stopping criterion was then

proposed for the sphere decoding search process for solving the BILS problem (2). In this section, we first introduce this conjecture, then give an example to show that it may not hold in general, and finally show that it does hold under some conditions. The problem considered in [32] is to estimate the integer parameter vector x̂ in the box-constrained linear model (1). The method proposed in [32] first ignores the box constraint (1b). Instead of using the column permutations

TABLE I: Success probabilities and bounds for Case 1, σ = min_{1≤i≤n} r_ii/1.8.

TABLE II: Success probabilities and bounds for Case 2, σ = min_{1≤i≤n} r_ii/1.8.

TABLE III: Success probabilities and bounds for Case 1, σ = max_{1≤i≤n} r_ii/1.6.

TABLE IV: Success probabilities and bounds for Case 2, σ = max_{1≤i≤n} r_ii/1.6.


More information

The Behavior of Algorithms in Practice 2/21/2002. Lecture 4. ɛ 1 x 1 y ɛ 1 x 1 1 = x y 1 1 = y 1 = 1 y 2 = 1 1 = 0 1 1

The Behavior of Algorithms in Practice 2/21/2002. Lecture 4. ɛ 1 x 1 y ɛ 1 x 1 1 = x y 1 1 = y 1 = 1 y 2 = 1 1 = 0 1 1 8.409 The Behavior of Algorithms in Practice //00 Lecture 4 Lecturer: Dan Spielman Scribe: Matthew Lepinski A Gaussian Elimination Example To solve: [ ] [ ] [ ] x x First factor the matrix to get: [ ]

More information

The Diagonal Reduction Algorithm Using Fast Givens

The Diagonal Reduction Algorithm Using Fast Givens The Diagonal Reduction Algorithm Using Fast Givens Wen Zhang, Sanzheng Qiao, and Yimin Wei Abstract Recently, a new lattice basis reduction notion, called diagonal reduction, was proposed for lattice-reduction-aided

More information

Discrete Applied Mathematics

Discrete Applied Mathematics Discrete Applied Mathematics 194 (015) 37 59 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: wwwelseviercom/locate/dam Loopy, Hankel, and combinatorially skew-hankel

More information

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University PCA with random noise Van Ha Vu Department of Mathematics Yale University An important problem that appears in various areas of applied mathematics (in particular statistics, computer science and numerical

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information

The Cayley-Hamilton Theorem and the Jordan Decomposition

The Cayley-Hamilton Theorem and the Jordan Decomposition LECTURE 19 The Cayley-Hamilton Theorem and the Jordan Decomposition Let me begin by summarizing the main results of the last lecture Suppose T is a endomorphism of a vector space V Then T has a minimal

More information

TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology

TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology DONALD M. DAVIS Abstract. We use ku-cohomology to determine lower bounds for the topological complexity of mod-2 e lens spaces. In the

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

arxiv: v1 [cs.sc] 17 Apr 2013

arxiv: v1 [cs.sc] 17 Apr 2013 EFFICIENT CALCULATION OF DETERMINANTS OF SYMBOLIC MATRICES WITH MANY VARIABLES TANYA KHOVANOVA 1 AND ZIV SCULLY 2 arxiv:1304.4691v1 [cs.sc] 17 Apr 2013 Abstract. Efficient matrix determinant calculations

More information

A Hybrid Method for Lattice Basis Reduction and. Applications

A Hybrid Method for Lattice Basis Reduction and. Applications A Hybrid Method for Lattice Basis Reduction and Applications A HYBRID METHOD FOR LATTICE BASIS REDUCTION AND APPLICATIONS BY ZHAOFEI TIAN, M.Sc. A THESIS SUBMITTED TO THE DEPARTMENT OF COMPUTING AND SOFTWARE

More information

(x 1 +x 2 )(x 1 x 2 )+(x 2 +x 3 )(x 2 x 3 )+(x 3 +x 1 )(x 3 x 1 ).

(x 1 +x 2 )(x 1 x 2 )+(x 2 +x 3 )(x 2 x 3 )+(x 3 +x 1 )(x 3 x 1 ). CMPSCI611: Verifying Polynomial Identities Lecture 13 Here is a problem that has a polynomial-time randomized solution, but so far no poly-time deterministic solution. Let F be any field and let Q(x 1,...,

More information

Codes for Partially Stuck-at Memory Cells

Codes for Partially Stuck-at Memory Cells 1 Codes for Partially Stuck-at Memory Cells Antonia Wachter-Zeh and Eitan Yaakobi Department of Computer Science Technion Israel Institute of Technology, Haifa, Israel Email: {antonia, yaakobi@cs.technion.ac.il

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Rank Determination for Low-Rank Data Completion

Rank Determination for Low-Rank Data Completion Journal of Machine Learning Research 18 017) 1-9 Submitted 7/17; Revised 8/17; Published 9/17 Rank Determination for Low-Rank Data Completion Morteza Ashraphijuo Columbia University New York, NY 1007,

More information

CHAPTER 6. Direct Methods for Solving Linear Systems

CHAPTER 6. Direct Methods for Solving Linear Systems CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to

More information

The complexity of factoring univariate polynomials over the rationals

The complexity of factoring univariate polynomials over the rationals The complexity of factoring univariate polynomials over the rationals Mark van Hoeij Florida State University ISSAC 2013 June 26, 2013 Papers [Zassenhaus 1969]. Usually fast, but can be exp-time. [LLL

More information

ACI-matrices all of whose completions have the same rank

ACI-matrices all of whose completions have the same rank ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2)

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2) IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 361 Uplink Downlink Duality Via Minimax Duality Wei Yu, Member, IEEE Abstract The sum capacity of a Gaussian vector broadcast channel

More information

HKZ and Minkowski Reduction Algorithms for Lattice-Reduction-Aided MIMO Detection

HKZ and Minkowski Reduction Algorithms for Lattice-Reduction-Aided MIMO Detection IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 11, NOVEMBER 2012 5963 HKZ and Minkowski Reduction Algorithms for Lattice-Reduction-Aided MIMO Detection Wen Zhang, Sanzheng Qiao, and Yimin Wei Abstract

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

Part IB Numerical Analysis

Part IB Numerical Analysis Part IB Numerical Analysis Definitions Based on lectures by G. Moore Notes taken by Dexter Chua Lent 206 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after

More information

Lecture 3: QR-Factorization

Lecture 3: QR-Factorization Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization

More information

3 QR factorization revisited

3 QR factorization revisited LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 2, 2000 30 3 QR factorization revisited Now we can explain why A = QR factorization is much better when using it to solve Ax = b than the A = LU factorization

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

arxiv: v5 [math.na] 16 Nov 2017

arxiv: v5 [math.na] 16 Nov 2017 RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem

More information

A Fast-Decodable, Quasi-Orthogonal Space Time Block Code for 4 2 MIMO

A Fast-Decodable, Quasi-Orthogonal Space Time Block Code for 4 2 MIMO Forty-Fifth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 26-28, 2007 ThC6.4 A Fast-Decodable, Quasi-Orthogonal Space Time Block Code for 4 2 MIMO Ezio Biglieri Universitat Pompeu

More information

Matrices, Moments and Quadrature, cont d

Matrices, Moments and Quadrature, cont d Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that

More information

Lecture 9: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1

Lecture 9: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1 : Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1 Rayleigh Friday, May 25, 2018 09:00-11:30, Kansliet 1 Textbook: D. Tse and P. Viswanath, Fundamentals of Wireless

More information

Modulation & Coding for the Gaussian Channel

Modulation & Coding for the Gaussian Channel Modulation & Coding for the Gaussian Channel Trivandrum School on Communication, Coding & Networking January 27 30, 2017 Lakshmi Prasad Natarajan Dept. of Electrical Engineering Indian Institute of Technology

More information

Lattices and Hermite normal form

Lattices and Hermite normal form Integer Points in Polyhedra Lattices and Hermite normal form Gennady Shmonin February 17, 2009 1 Lattices Let B = { } b 1,b 2,...,b k be a set of linearly independent vectors in n-dimensional Euclidean

More information

Inner product spaces. Layers of structure:

Inner product spaces. Layers of structure: Inner product spaces Layers of structure: vector space normed linear space inner product space The abstract definition of an inner product, which we will see very shortly, is simple (and by itself is pretty

More information

Efficient Inverse Cholesky Factorization for Alamouti Matrices in G-STBC and Alamouti-like Matrices in OMP

Efficient Inverse Cholesky Factorization for Alamouti Matrices in G-STBC and Alamouti-like Matrices in OMP Efficient Inverse Cholesky Factorization for Alamouti Matrices in G-STBC and Alamouti-like Matrices in OMP Hufei Zhu, Ganghua Yang Communications Technology Laboratory Huawei Technologies Co Ltd, P R China

More information

ON THE QR ITERATIONS OF REAL MATRICES

ON THE QR ITERATIONS OF REAL MATRICES Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Roundoff Error. Monday, August 29, 11

Roundoff Error. Monday, August 29, 11 Roundoff Error A round-off error (rounding error), is the difference between the calculated approximation of a number and its exact mathematical value. Numerical analysis specifically tries to estimate

More information

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 17 1 / 26 Overview

More information

Homework 4. Convex Optimization /36-725

Homework 4. Convex Optimization /36-725 Homework 4 Convex Optimization 10-725/36-725 Due Friday November 4 at 5:30pm submitted to Christoph Dann in Gates 8013 (Remember to a submit separate writeup for each problem, with your name at the top)

More information

On Some Polytopes Contained in the 0,1 Hypercube that Have a Small Chvátal Rank

On Some Polytopes Contained in the 0,1 Hypercube that Have a Small Chvátal Rank On ome Polytopes Contained in the 0,1 Hypercube that Have a mall Chvátal Rank Gérard Cornuéjols Dabeen Lee April 2016, revised July 2017 Abstract In this paper, we consider polytopes P that are contained

More information

22.4. Numerical Determination of Eigenvalues and Eigenvectors. Introduction. Prerequisites. Learning Outcomes

22.4. Numerical Determination of Eigenvalues and Eigenvectors. Introduction. Prerequisites. Learning Outcomes Numerical Determination of Eigenvalues and Eigenvectors 22.4 Introduction In Section 22. it was shown how to obtain eigenvalues and eigenvectors for low order matrices, 2 2 and. This involved firstly solving

More information

Sherman-Morrison-Woodbury

Sherman-Morrison-Woodbury Week 5: Wednesday, Sep 23 Sherman-Mrison-Woodbury The Sherman-Mrison fmula describes the solution of A+uv T when there is already a factization f A. An easy way to derive the fmula is through block Gaussian

More information

Math 407: Linear Optimization

Math 407: Linear Optimization Math 407: Linear Optimization Lecture 16: The Linear Least Squares Problem II Math Dept, University of Washington February 28, 2018 Lecture 16: The Linear Least Squares Problem II (Math Dept, University

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information