
SUBMITTED TO JOURNAL OF IEEE TRANSACTIONS ON INFORMATION THEORY 1

Success probability of the Babai estimators for box-constrained integer linear models

Jinming Wen and Xiao-Wen Chang

arXiv [cs.IT] 9 Oct 2014

Abstract—To estimate the box-constrained integer parameter vector x̂ in a linear model, a typical method is to solve a box-constrained integer least squares (BILS) problem. However, due to its high complexity, the box-constrained Babai integer point x^BB is commonly used as a suboptimal solution. First we derive formulas for the success probability P^BB of x^BB and the success probability P^OB of the ordinary Babai integer point x^OB when x̂ is uniformly distributed over the constraint box. Some properties of P^BB and P^OB and the relationship between them are studied. Then we investigate the effects of some column permutation strategies on P^BB. The V-BLAST and SQRD column permutation strategies are often used to get better Babai integer points. The permutation strategy involved in the well-known LLL lattice reduction, to be referred to as LLL-P, can also be used for this purpose. On the one hand, we show that under a condition LLL-P always increases P^BB and argue why both V-BLAST and SQRD often increase P^BB under the same condition; on the other hand, we show that under an opposite condition LLL-P always decreases P^BB and argue why both V-BLAST and SQRD often decrease P^BB under the same condition. We also derive a column permutation invariant bound on P^BB, which is an upper bound and a lower bound under the two opposite conditions, respectively. Numerical results will be given to demonstrate our findings. Finally we consider a conjecture concerning the ordinary Babai integer point proposed by Ma et al. We first construct an example to show that the conjecture does not hold in general, and then show that the conjecture does hold under some conditions.

Index Terms—Box-constrained integer least squares estimation, Babai integer point, success probability, column permutations, LLL-P, SQRD, V-BLAST.

I.
INTRODUCTION

SUPPOSE that we have the following box-constrained linear model:

y = A x̂ + v,  v ~ N(0, σ²I)    (1a)
x̂ ∈ B ≡ {x ∈ Z^n : l ≤ x ≤ u, l, u ∈ Z^n}    (1b)

where y ∈ R^m is an observation vector, A ∈ R^{m×n} is a deterministic model matrix with full column rank, x̂ is an unknown integer parameter vector in the box B, and v ∈ R^m is a noise vector following the Gaussian distribution N(0, σ²I) with σ known. This model arises in a number of applications, including wireless communications, see, e.g., [10], [2]. In this paper we assume that x̂ is random and uniformly distributed over the box B. This assumption is often made for MIMO applications, see, e.g., [2].

A common method to estimate x̂ in (1) is to solve the following box-constrained integer least squares (BILS) problem:

min_{x ∈ B} ‖y − Ax‖²₂    (2)

whose solution is the maximum likelihood estimator of x̂. A typical approach to solving (2) is discrete search, which usually has two stages: reduction and search. In the first stage, orthogonal transformations are used to transform A to an upper triangular matrix R. To make the search process more efficient, a column permutation strategy is often used in the reduction. Two well-known strategies are V-BLAST [3], [10] and SQRD [36], [7]. The commonly used search methods are the so-called sphere decoding methods, see, e.g., [10], [5] and [7], which extend the Schnorr-Euchner search method [29], a variation of the Fincke-Pohst search method [1], from the ordinary integer least squares problems mentioned below.

Jinming Wen is with the Department of Mathematics and Statistics, McGill University, Montreal, QC H3A 0B9, Canada (jinming.wen@mail.mcgill.ca). X.-W. Chang is with the School of Computer Science, McGill University, Montreal, QC H3A 2A7, Canada (chang@cs.mcgill.ca). This research was supported by an NSERC of Canada Grant (RGPIN). Manuscript received; revised.
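As a concrete illustration of the model (1), the Python sketch below (all names are ours, not from the paper) draws x̂ uniformly from the box B and forms one observation y = A x̂ + v; A is deterministic in the model and is generated randomly here only for illustration.

```python
import numpy as np

def simulate_box_model(m, n, l, u, sigma, rng):
    """Draw one realization of the box-constrained linear model (1):
    y = A @ xhat + v with v ~ N(0, sigma^2 I) and xhat uniform over
    B = {x in Z^n : l <= x <= u}."""
    A = rng.standard_normal((m, n))        # stand-in model matrix (illustration only)
    xhat = rng.integers(l, u + 1, size=n)  # uniform over the integer box
    v = sigma * rng.standard_normal(m)
    y = A @ xhat + v
    return A, xhat, y

A, xhat, y = simulate_box_model(8, 4, 0, 3, 0.1, np.random.default_rng(1))
```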

If the true parameter vector x̂ ∈ Z^n in the linear model (1a) is not subject to any constraint, then we say (1a) is an ordinary linear model. In this case, to estimate x̂, one solves an ordinary integer least squares (OILS) problem

min_{x ∈ Z^n} ‖y − Ax‖²₂.    (3)

The OILS problem (3) is also referred to as the closest vector problem, as it is equivalent to finding the point in the lattice {Ax : x ∈ Z^n} closest to y. For algorithms and theory for OILS problems, see the survey papers [1] and [6]. The most widely used reduction strategy in solving (3) is the LLL reduction [23], which consists of size reductions and column permutations. It is difficult to use the LLL reduction to solve a BILS problem, because after size reductions the box constraint becomes too complicated to handle in the search process. However, one can use its permutation strategy alone, to be referred to as LLL-P (in [9] we referred to it as LLL-permute). The LLL-P, SQRD and V-BLAST strategies use only the information of A to do the column permutations. Some column permutation strategies which use not only the information of A but also the information of y and the box constraint have also been proposed, see, e.g., [3], [7] and [6].

Although it was shown in [9] that in communication applications, for a wide range of signal-to-noise ratios and dimensions, the expected complexity of solving (2) by the Fincke-Pohst search method is polynomial, the expected number of nodes visited in the search tree tends to infinity as an exponential function of the dimension for a large class of detection problems [2]. So for some real-time applications an approximate solution, which can be produced quickly, is computed instead. For the OILS problem, the Babai integer point x^OB, to be referred to as the ordinary Babai estimator, which can be obtained by the Babai nearest plane algorithm [3], is an often-used approximate solution.

Taking the box constraint into account, one can easily modify the Babai nearest plane algorithm to get an approximate solution x^BB of the BILS problem (2), to be referred to as the box-constrained Babai estimator. This estimator is the first point found by the search methods proposed in [5], [10] and [7], and it has been used as a suboptimal solution, see, e.g., [34]. In communications, algorithms for finding the Babai estimators are often referred to as successive interference cancellation detectors. There are algorithms which find other suboptimal solutions to BILS problems in communications, see, e.g., [26], [2], [5], [4], [14], [30], [20], [35], [22] etc. But in this paper we will focus on the Babai estimators.

In order to verify whether an estimator is good enough for practical use, one needs to find the probability of the estimator being equal to the true integer parameter vector, which is referred to as the success probability [7]. The probability of wrong estimation is referred to as the error probability, see, e.g., [20]. For the estimation of x̂ in the ordinary linear model (1a), where x̂ is supposed to be deterministic, the formula for the success probability P^OB of the ordinary Babai estimator x^OB was first given in [32], which considers a variant form of the OILS problem (3). A simple derivation of an equivalent formula for P^OB was given in [9]. It was shown in [9] that P^OB increases after applying the LLL reduction algorithm or only the LLL-P column permutation strategy, but it may decrease after applying the SQRD and V-BLAST permutation strategies.

The main goal of this paper is to extend the main results we obtained in [9] for the ordinary case to the box-constrained case. We will present a formula for the success probability P^BB of the box-constrained Babai estimator x^BB and a formula for the success probability P^OB of the ordinary Babai estimator x^OB when x̂ in (1) follows a uniform distribution over the box B.

Some properties of P^BB and P^OB and the relationship between them will also be given. Then we will investigate the effect of the LLL-P column permutation strategy on P^BB. We will show that P^BB increases after LLL-P is applied under one condition. Surprisingly, we will also show that P^BB decreases after LLL-P is applied under an opposite condition. Roughly speaking, the two opposite conditions are that the noise standard deviation σ in (1a) is relatively small and relatively large, respectively. This is different from the ordinary case, where P^OB always increases after the LLL-P strategy is applied. Although our theoretical results for LLL-P cannot be extended to SQRD and V-BLAST, our numerical tests indicate that under the two conditions, P^BB often (not always) increases and decreases, respectively, after applying SQRD or V-BLAST. Explanations will be given for these phenomena. These results suggest that before we apply LLL-P, SQRD or V-BLAST we should check the conditions. Moreover, we will give a bound on P^BB which is column permutation invariant. It is interesting that the bound is an upper bound under the small-noise condition just mentioned and becomes a lower bound under the opposite condition.

In [27], the authors made a conjecture, based on which a stopping criterion for the search process was proposed to reduce the computational cost of solving the BILS problem. The conjecture is related to the success probability of the ordinary Babai estimator x^OB. We will first show that the conjecture does not always hold and then show that it holds under a condition.

The rest of the paper is organized as follows. In Section II, we introduce the QR reduction and the LLL-P, SQRD and V-BLAST column reordering strategies. In Section III, we present the formulas for P^BB and P^OB, and study the properties of P^BB and P^OB and the relationship between them. In Section IV, we investigate the effects of the LLL-P, SQRD and

V-BLAST column permutation strategies on P^BB and derive a bound on P^BB. In Section V, we investigate the conjecture made in [27] and obtain some negative and positive results. Finally, we summarize this paper in Section VI.

Notation. For matrices we use bold capital letters and for vectors we use bold small letters. For x ∈ R^n, we use ⌊x⌉ to denote its nearest integer vector, i.e., each entry of x is rounded to its nearest integer (if there is a tie, the one with smaller magnitude is chosen). For a vector x, x_{i:j} denotes the subvector of x formed by entries i, i+1, ..., j. For a matrix A, A_{i:j,i:j} denotes the submatrix of A formed by rows and columns i, i+1, ..., j.

II. QR REDUCTION AND COLUMN REORDERING

Assume that the model matrix A in the linear model (1a) has the QR factorization

A = [Q₁, Q₂] [R; 0]    (4)

where [Q₁, Q₂] ∈ R^{m×m} is orthogonal and R ∈ R^{n×n} is upper triangular. Without loss of generality, we assume throughout the paper that the diagonal entries of R are positive. Define ỹ = Q₁ᵀ y and ṽ = Q₁ᵀ v. Then the linear model (1) is reduced to

ỹ = R x̂ + ṽ,  ṽ ~ N(0, σ²I),    (5a)
x̂ ∈ B ≡ {x ∈ Z^n : l ≤ x ≤ u, l, u ∈ Z^n}    (5b)

and the BILS problem (2) is reduced to

min_{x ∈ B} ‖ỹ − Rx‖²₂.    (6)

To solve the reduced problem (6), sphere decoding search algorithms are usually used to find the optimal solution. For search efficiency, one typically adopts a column permutation strategy, such as V-BLAST, SQRD or LLL-P, in the reduction process to obtain a better R. For simplicity of description, we assume that the column permutations are performed on R in (4), no matter which strategy is used, i.e.,

Q̄ᵀ R P = R̄    (7)

where Q̄ ∈ R^{n×n} is orthogonal, P ∈ Z^{n×n} is a permutation matrix, and R̄ ∈ R^{n×n} is an upper triangular matrix satisfying the properties of the corresponding column permutation strategy. Notice that combining (4) and (7) results in the following QR factorization of the column-reordered A:

A P = Q [R̄; 0],  Q ≡ [Q₁, Q₂] [Q̄, 0; 0, I_{m−n}].

The V-BLAST strategy determines the columns of R̄ from the last to the first. Suppose columns n, n−1, ..., k+1 of R̄ have been determined; this strategy chooses a column from the k remaining columns of R as the k-th column such that r̄_kk is maximal over all of the k choices. For more details, including efficient algorithms, see [3], [10], [8], [18], [24], [37] etc. For a performance analysis of V-BLAST, one may refer to [25]. In contrast to V-BLAST, the SQRD strategy determines the columns of R̄ from the first to the last by using the modified Gram-Schmidt algorithm or the Householder QR algorithm. Suppose columns 1, 2, ..., k−1 of R̄ have been determined. In the k-th step of the algorithm, the k-th column of R̄ is chosen from the remaining n−k+1 columns of R such that r̄_kk is smallest. For more details, see [36] and [7] etc.

The LLL-P strategy [9] performs the column permutations of the LLL reduction algorithm and produces an R̄ satisfying the Lovász condition:

δ r̄²_{k−1,k−1} ≤ r̄²_{k−1,k} + r̄²_{kk},  k = 2, 3, ..., n    (8)

where δ is a parameter satisfying 1/4 < δ ≤ 1. Suppose that δ r²_{k−1,k−1} > r²_{k−1,k} + r²_{kk} for some k. Then we interchange columns k−1 and k of R. After the permutation the upper triangular structure of R is no longer maintained, but we can bring R back to an upper triangular matrix by using the Gram-Schmidt orthogonalization technique (see [23]) or by a Givens rotation:

R̄ = Gᵀ_{k−1,k} R P_{k−1,k}    (9)

where G_{k−1,k} is an orthogonal matrix and P_{k−1,k} is a permutation matrix, and

r̄²_{k−1,k−1} = r²_{k−1,k} + r²_{kk},  r̄²_{k−1,k} + r̄²_{kk} = r²_{k−1,k−1},  r̄_{k−1,k−1} r̄_kk = r_{k−1,k−1} r_kk.    (10)
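The swap-and-retriangularize step (9), iterated over k as in the LLL-P strategy, can be sketched in Python as follows (a simplified illustration under the paper's setup; `lll_p` and `delta` are our names, and a row sign flip is added when needed to keep the diagonal of R positive, as assumed in the paper):

```python
import numpy as np

def lll_p(R, delta=0.75):
    """Sketch of the LLL-P strategy: permute consecutive columns of the
    upper triangular R and retriangularize by a Givens rotation, as in (9),
    until the Lovasz condition (8) holds for k = 2, ..., n. Returns the
    updated R and the permutation matrix P."""
    R = R.astype(float).copy()
    n = R.shape[1]
    P = np.eye(n)
    k = 1                                      # 0-based; corresponds to k = 2 in the paper
    while k < n:
        if delta * R[k-1, k-1]**2 > R[k-1, k]**2 + R[k, k]**2:
            R[:, [k-1, k]] = R[:, [k, k-1]]    # swap columns k-1 and k
            P[:, [k-1, k]] = P[:, [k, k-1]]
            a, b = R[k-1, k-1], R[k, k-1]
            r = np.hypot(a, b)                 # Givens rotation zeroing R[k, k-1]
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[k-1:k+1, k-1:] = G @ R[k-1:k+1, k-1:]
            R[k, k-1] = 0.0
            if R[k, k] < 0:                    # keep the diagonal positive (paper's WLOG assumption)
                R[k, k:] = -R[k, k:]
            if k > 1:
                k -= 1
        else:
            k += 1
    return R, P
```

After a swap, the relations (10) can be checked directly on the output, e.g. the new r²_{k−1,k−1} equals the old r²_{k−1,k} + r²_{kk}, and the product of the two affected diagonal entries is preserved.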

For the reader's convenience, we describe the LLL-P strategy in Algorithm 1.

Algorithm 1 LLL-P
1: set P = I_n, k = 2;
2: while k ≤ n do
3:    if δ r²_{k−1,k−1} > r²_{k−1,k} + r²_{kk} then
4:        perform a column permutation: R = Gᵀ_{k−1,k} R P_{k−1,k};
5:        update P: P = P P_{k−1,k};
6:        k = k − 1, when k > 2;
7:    else
8:        k = k + 1;
9:    end if
10: end while

Note that the above operation guarantees that the inequality in (8) holds. For simplicity, later when we refer to a column permutation, we mean the whole process of a column permutation and triangularization.

With the QR factorization (7), we define

ȳ = Q̄ᵀ ỹ,  ẑ = Pᵀ x̂,  v̄ = Q̄ᵀ ṽ,  z = Pᵀ x,  l̄ = Pᵀ l,  ū = Pᵀ u.    (11)

Then the linear model (5) is transformed to

ȳ = R̄ ẑ + v̄,  v̄ ~ N(0, σ²I),    (12a)
ẑ ∈ B̄ = {z ∈ Z^n : l̄ ≤ z ≤ ū, l̄, ū ∈ Z^n}    (12b)

and the BILS problem (6) is transformed to

min_{z ∈ B̄} ‖ȳ − R̄z‖²₂    (13)

whose solution is the BILS estimator of ẑ.

III. SUCCESS PROBABILITIES OF THE BABAI ESTIMATORS

We consider the reduced box-constrained linear model (5). The same analysis can be applied to the transformed reduced linear model (12). The box-constrained Babai estimator x^BB of x̂ in (5), a suboptimal solution to (6), can be computed as follows: for i = n, n−1, ..., 1,

c^BB_i = (ỹ_i − Σ_{j=i+1}^n r_ij x^BB_j) / r_ii,
x^BB_i = l_i if c^BB_i ≤ l_i;  ⌊c^BB_i⌉ if l_i < c^BB_i < u_i;  u_i if c^BB_i ≥ u_i    (14)

where Σ_{j=i+1}^n · = 0 if i = n. If we do not take the box constraint into account, we get the ordinary Babai estimator x^OB: for i = n, n−1, ..., 1,

c^OB_i = (ỹ_i − Σ_{j=i+1}^n r_ij x^OB_j) / r_ii,  x^OB_i = ⌊c^OB_i⌉.    (15)

In the following, we give formulas for the success probability P^BB of x^BB and the success probability P^OB of x^OB.

Theorem 1: Suppose that in the linear model (1), x̂ is uniformly distributed over the constraint box B, and x̂ and v are independent. Suppose that the linear model (1) is transformed to the linear model (5) through the QR factorization (4). Then

P^BB ≡ Pr(x^BB = x̂) = Π_{i=1}^n [ 1/(u_i − l_i + 1) + ((u_i − l_i)/(u_i − l_i + 1)) φ_σ(r_ii) ],    (16)

P^OB ≡ Pr(x^OB = x̂) = Π_{i=1}^n φ_σ(r_ii)    (17)

where

φ_σ(ζ) = (2/√(2π)) ∫₀^{ζ/(2σ)} exp(−t²/2) dt.    (18)

Proof. Since the random vectors x̂ and v in (1) are independent, x̂ and ṽ in (5) are also independent. From (5a),

ỹ_i = r_ii x̂_i + Σ_{j=i+1}^n r_ij x̂_j + ṽ_i.

Then from (14), we obtain

c^BB_i = x̂_i + Σ_{j=i+1}^n (r_ij/r_ii)(x̂_j − x^BB_j) + ṽ_i/r_ii,  i = n, n−1, ..., 1.    (19)

Therefore, if x^BB_{i+1} = x̂_{i+1}, ..., x^BB_n = x̂_n and x̂_i is fixed,

c^BB_i ~ N(x̂_i, σ²/r²_ii).    (20)

To simplify notation, denote the event E_i = (x^BB_i = x̂_i, ..., x^BB_n = x̂_n), i = 1, ..., n. Then by the chain rule of conditional probabilities,

P^BB = Pr(E_1) = Π_{i=1}^n Pr(x^BB_i = x̂_i | E_{i+1})    (21)

where E_{n+1} is the sample space Ω, so Pr(x^BB_n = x̂_n | E_{n+1}) = Pr(x^BB_n = x̂_n). In the following we will use this fact: if A, B and C are three events and A and C are independent, then

Pr(A, B | C) = Pr(A) Pr(B | A, C).    (22)

This can easily be verified. Using (22), we obtain

Pr(x^BB_i = x̂_i | E_{i+1}) = Pr(x̂_i = l_i, c^BB_i ≤ l_i + 1/2 | E_{i+1})
  + Pr(l_i < x̂_i < u_i, x̂_i − 1/2 ≤ c^BB_i < x̂_i + 1/2 | E_{i+1})
  + Pr(x̂_i = u_i, c^BB_i ≥ u_i − 1/2 | E_{i+1})    (23)
= Pr(x̂_i = l_i) Pr(c^BB_i ≤ l_i + 1/2 | x̂_i = l_i, E_{i+1})
  + Pr(l_i < x̂_i < u_i) Pr(x̂_i − 1/2 ≤ c^BB_i < x̂_i + 1/2 | l_i < x̂_i < u_i, E_{i+1})
  + Pr(x̂_i = u_i) Pr(c^BB_i ≥ u_i − 1/2 | x̂_i = u_i, E_{i+1})    (24)

where in deriving the second equality we used the independence of the relevant events, which can be observed from (19) and (14). Since x̂ is uniformly distributed over the box B, for the first factors of the three terms on the right-hand side of (24), we have

Pr(x̂_i = l_i) = 1/(u_i − l_i + 1),  Pr(l_i < x̂_i < u_i) = (u_i − l_i − 1)/(u_i − l_i + 1),  Pr(x̂_i = u_i) = 1/(u_i − l_i + 1).

By (20), for the second factors of these three terms, we have

Pr(c^BB_i ≤ l_i + 1/2 | x̂_i = l_i, E_{i+1}) = (r_ii/(√(2π) σ)) ∫_{−∞}^{l_i+1/2} exp(−(t − l_i)² r²_ii/(2σ²)) dt = (1/2)[1 + φ_σ(r_ii)],

Pr(x̂_i − 1/2 ≤ c^BB_i < x̂_i + 1/2 | l_i < x̂_i < u_i, E_{i+1}) = (r_ii/(√(2π) σ)) ∫_{x̂_i−1/2}^{x̂_i+1/2} exp(−(t − x̂_i)² r²_ii/(2σ²)) dt = φ_σ(r_ii),

Pr(c^BB_i ≥ u_i − 1/2 | x̂_i = u_i, E_{i+1}) = (r_ii/(√(2π) σ)) ∫_{u_i−1/2}^{∞} exp(−(t − u_i)² r²_ii/(2σ²)) dt = (1/2)[1 + φ_σ(r_ii)].

Combining the equalities above, from (24) we obtain

Pr(x^BB_i = x̂_i | E_{i+1}) = (2/(2(u_i − l_i + 1)))[1 + φ_σ(r_ii)] + ((u_i − l_i − 1)/(u_i − l_i + 1)) φ_σ(r_ii)
= 1/(u_i − l_i + 1) + ((u_i − l_i)/(u_i − l_i + 1)) φ_σ(r_ii)

which, with (21), gives (16).

Now we consider the success probability of the ordinary Babai estimator x^OB. Everything in the first three paragraphs of this proof still holds if we replace each superscript BB by OB. But we need to make more significant changes to the last two paragraphs. We change (23) and (24) as follows:

Pr(x^OB_i = x̂_i | E_{i+1}) = Pr(l_i ≤ x̂_i ≤ u_i, x̂_i − 1/2 ≤ c^OB_i < x̂_i + 1/2 | E_{i+1})
= Pr(l_i ≤ x̂_i ≤ u_i) Pr(x̂_i − 1/2 ≤ c^OB_i < x̂_i + 1/2 | l_i ≤ x̂_i ≤ u_i, E_{i+1}).

Here

Pr(l_i ≤ x̂_i ≤ u_i) = 1,  Pr(x̂_i − 1/2 ≤ c^OB_i < x̂_i + 1/2 | l_i ≤ x̂_i ≤ u_i, E_{i+1}) = φ_σ(r_ii).

Thus

Pr(x^OB_i = x̂_i | E_{i+1}) = φ_σ(r_ii).

Then (17) follows from (21) with each superscript BB replaced by OB. ∎

From the proof of (17), we observe that the formula holds no matter what the distribution of x̂ over the box B is. Furthermore, the formula is identical to the one for the success probability of the ordinary Babai estimator x^OB when x̂ in the linear model (1) is deterministic and is not subject to any box constraint, see [9].

The following result shows the relation between P^BB and P^OB.

Corollary 1: Under the same assumptions as in Theorem 1,

P^BB ≥ P^OB,    (25)
lim_{u_i − l_i → ∞, i = 1, ..., n} P^BB = P^OB.    (26)

Proof. Note that φ_σ(r_ii) ≤ 1. Thus

φ_σ(r_ii) = (1/(u_i − l_i + 1)) φ_σ(r_ii) + ((u_i − l_i)/(u_i − l_i + 1)) φ_σ(r_ii) ≤ 1/(u_i − l_i + 1) + ((u_i − l_i)/(u_i − l_i + 1)) φ_σ(r_ii).

Then (25) follows from Theorem 1. Obviously the equality (26) holds. ∎

Corollary 2: Under the same assumptions as in Theorem 1, P^BB and P^OB increase when σ decreases, and

lim_{σ → 0} P^BB = lim_{σ → 0} P^OB = 1.

Proof. For a given ζ, when σ decreases φ_σ(ζ) increases, and lim_{σ→0} φ_σ(ζ) = 1. Then from (16) and (17), we immediately see that the corollary holds. ∎
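Since ∫₀^{ζ/(2σ)} exp(−t²/2) dt is expressible via the error function, φ_σ(ζ) = erf(ζ/(2√2 σ)), and the formulas (14) and (16) can be checked against each other by simulation. The Python sketch below (all names ours; `np.round`'s tie-breaking differs immaterially from ⌊·⌉'s smaller-magnitude rule, since ties occur with probability zero) estimates Pr(x^BB = x̂) empirically on a fixed R and compares it with (16).

```python
import math
import numpy as np

def phi(sigma, zeta):
    """phi_sigma(zeta) of (18), rewritten via the error function."""
    return math.erf(zeta / (2.0 * math.sqrt(2.0) * sigma))

def p_bb_formula(rdiag, sigma, l, u):
    """P^BB = Pr(x^BB = xhat) from (16)."""
    p = 1.0
    for i, r in enumerate(rdiag):
        w = u[i] - l[i]
        p *= (1.0 + w * phi(sigma, r)) / (w + 1.0)
    return p

def babai_box(R, ytilde, l, u):
    """Box-constrained Babai point of (14): back-substitution with rounding
    to the nearest integer, clamped to the box [l, u]. Dropping the clamp
    gives the ordinary Babai point of (15)."""
    n = R.shape[1]
    x = np.zeros(n, dtype=int)
    for i in range(n - 1, -1, -1):
        c = (ytilde[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
        x[i] = min(max(int(np.round(c)), l[i]), u[i])
    return x

rng = np.random.default_rng(7)
sigma = 0.3
l, u = np.zeros(3, dtype=int), np.full(3, 3)
R = np.array([[1.5, 0.4, -0.3],   # a fixed upper triangular R with positive diagonal
              [0.0, 1.0,  0.6],
              [0.0, 0.0,  0.8]])

hits, trials = 0, 20000
for _ in range(trials):
    xhat = rng.integers(l, u + 1)
    ytilde = R @ xhat + sigma * rng.standard_normal(3)
    hits += np.array_equal(babai_box(R, ytilde, l, u), xhat)

empirical = hits / trials
predicted = p_bb_formula(np.diag(R), sigma, l, u)
```

With 20000 trials the Monte Carlo estimate agrees with (16) to within a few standard errors; replacing the clamp by plain rounding and the product factors by φ_σ(r_ii) checks (17) in the same way.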
IV. EFFECTS OF PERMUTATIONS ON P^BB

Suppose we perform the QR factorization (7) by using a column permutation strategy, such as LLL-P, SQRD or V-BLAST. Then we have the reduced box-constrained linear model (12). For (12) we can define its corresponding Babai point z^BB and use it as an estimator of ẑ, which is equal to Pᵀ x̂; equivalently, we use P z^BB as an estimator of x̂. In this section, we will investigate how applying the LLL-P, SQRD and V-BLAST column permutation strategies affects the success probability P^BB of the box-constrained Babai estimator.
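By Theorem 1, P^BB depends on the chosen permutation only through the diagonal of the resulting triangular factor, so the effect of a reordering can be evaluated directly from (16). A small Python sketch (illustrative names; the column reversal is an arbitrary permutation, and absolute values are taken because `numpy`'s QR does not enforce a positive diagonal):

```python
import math
import numpy as np

def p_bb_from_R(R, sigma, d):
    """Evaluate (16) for the cube [0, d]^n; P^BB depends on R only
    through the magnitudes of its diagonal entries."""
    p = 1.0
    for r in np.abs(np.diag(R)):
        phi = math.erf(r / (2.0 * math.sqrt(2.0) * sigma))
        p *= (1.0 + d * phi) / (d + 1.0)
    return p

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
_, R = np.linalg.qr(A)

P = np.eye(6)[:, ::-1]               # an arbitrary permutation: reverse the columns
_, Rbar = np.linalg.qr(A @ P)

p_before = p_bb_from_R(R, 0.5, 3)
p_after = p_bb_from_R(Rbar, 0.5, 3)  # generally differs from p_before
```

Note that the product of the diagonal magnitudes (|det A|) is the same for both factorizations; only its distribution among the r_ii changes, which is exactly what the permutation strategies below exploit.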

A. Effect of column permutations by LLL-P on P^BB

The LLL-P strategy involves a sequence of permutations of two consecutive columns of R. To investigate how LLL-P affects P^BB, we look at one column permutation first. Suppose that δ r²_{k−1,k−1} > r²_{k−1,k} + r²_{kk} for some k for the matrix R in the linear model (5). After the permutation of columns k−1 and k, R becomes R̄ = Gᵀ_{k−1,k} R P_{k−1,k} (see (9)). Then with the transformations given in (11), where Q̄ = G_{k−1,k} and P = P_{k−1,k}, the linear model (5) is transformed to the linear model (12). We will compare Pr(x^BB = x̂) and Pr(z^BB = ẑ). To prove our results, we need the following lemmas.

Lemma 1: Given α ≥ 0, define

f(ζ, α) = (1 − ζ²)(α + ∫₀^ζ exp(−t²/2) dt) − ζ exp(−ζ²/2),  ζ ≥ 0.    (27)

Then f(ζ, α) is a strictly decreasing function of ζ and has a unique zero r(α), i.e.,

f(r(α), α) = 0.    (28)

When ζ > r(α), f(ζ, α) < 0, and when ζ < r(α), f(ζ, α) > 0. Furthermore, 0 ≤ r(α) < 1, where the first inequality becomes an equality if and only if α = 0, and r(α) is a strictly increasing function of α.

Proof. By a simple calculation, we obtain

∂f(ζ, α)/∂ζ = −2ζ (α + ∫₀^ζ exp(−t²/2) dt).

Thus, for any ζ ≥ 0 and α ≥ 0, ∂f(ζ, α)/∂ζ ≤ 0, where the equality holds if and only if ζ = 0. Therefore, f(ζ, α) is a strictly decreasing function of ζ. Since f(0, α) = α ≥ 0 and f(1, α) < 0, there exists a unique r(α) such that (28) holds, and 0 ≤ r(α) < 1. Obviously r(0) = 0. Since f(ζ, α) is strictly decreasing with respect to ζ, when ζ > r(α), f(ζ, α) < 0, and when ζ < r(α), f(ζ, α) > 0. From (28), we obtain that for α > 0,

r′(α) = (1 − (r(α))²)² exp((r(α))²/2) / (2 (r(α))²) > 0.

Thus, r(α) is a strictly increasing function of α. ∎

Given α, we can easily solve (28) by a numerical method, e.g., Newton's method, to find r(α).

Lemma 2: Given α ≥ 0 and β > 0, define

g(ζ, α, β) = (α + ∫₀^ζ exp(−t²/2) dt)(α + ∫₀^{β/ζ} exp(−t²/2) dt),  ζ > 0.    (29)

Then, when

min{√β, β/r(α)} ≤ ζ < max{√β, β/r(α)}    (30)

where r(α) is defined in Lemma 1 and β/r(α) = ∞ if α = 0, g(ζ, α, β) is a strictly decreasing function of ζ.

Proof. From the definition of g, we obtain

∂g(ζ, α, β)/∂ζ = exp(−ζ²/2)(α + ∫₀^{β/ζ} exp(−t²/2) dt) − (β/ζ²) exp(−β²/(2ζ²))(α + ∫₀^ζ exp(−t²/2) dt)
= (1/ζ)(α + ∫₀^ζ exp(−t²/2) dt)(α + ∫₀^{β/ζ} exp(−t²/2) dt)[h(ζ, α) − h(β/ζ, α)]

where

h(ζ, α) = ζ exp(−ζ²/2) / (α + ∫₀^ζ exp(−t²/2) dt).

It is easy to see that, in order to show the result, we need only to show h(ζ, α) − h(β/ζ, α) < 0 under the condition (30) with ζ ≠ β/ζ.

By some simple calculations and (27), we have

∂h(ζ, α)/∂ζ = (exp(−ζ²/2) / (α + ∫₀^ζ exp(−t²/2) dt)²) f(ζ, α).    (31)

Now we assume that ζ satisfies (30) with ζ ≠ β/ζ. If √β < β/r(α), then ζ > β/ζ > r(α) and by Lemma 1, ∂h(ζ, α)/∂ζ < 0, i.e., h(ζ, α) is a strictly decreasing function of ζ; thus h(ζ, α) − h(β/ζ, α) < 0. If √β > β/r(α), then ζ < β/ζ < r(α) and by Lemma 1, ∂h(ζ, α)/∂ζ > 0, i.e., h(ζ, α) is a strictly increasing function of ζ; thus again h(ζ, α) − h(β/ζ, α) < 0. ∎

With the above lemmas, we can show how the success probability of the box-constrained Babai estimator changes after two consecutive columns are swapped when the LLL-P strategy is applied.

Theorem 2: Suppose that in the linear model (1) the box B is a cube with edge length d, x̂ is uniformly distributed over B, and x̂ and v are independent. Suppose that the linear model (1) is transformed to the linear model (5) through the QR factorization (4) and that δ r²_{k−1,k−1} > r²_{k−1,k} + r²_{kk}. After the permutation of columns k−1 and k of R (see (9)), the linear model (5) is transformed to the linear model (12).

1) If r_kk/(2σ) ≥ r(√(2π)/(2d)), where r(·) is defined in Lemma 1, then after the permutation, the success probability of the box-constrained Babai estimator increases, i.e.,

Pr(x^BB = x̂) ≤ Pr(z^BB = ẑ).    (32)

2) If r_{k−1,k−1}/(2σ) ≤ r(√(2π)/(2d)), then after the permutation, the success probability of the box-constrained Babai estimator decreases, i.e.,

Pr(x^BB = x̂) ≥ Pr(z^BB = ẑ).    (33)

Furthermore, the equality in each of (32) and (33) holds if and only if r_{k−1,k} = 0.

Proof. When r_{k−1,k} = 0, by Theorem 1, the equalities in (32) and (33) hold. In the following we assume r_{k−1,k} ≠ 0 and show that the strict inequalities in (32) and (33) hold. Define

β ≡ (r_{k−1,k−1}/(2σ))(r_kk/(2σ)) = (r̄_{k−1,k−1}/(2σ))(r̄_kk/(2σ))    (34)

where for the second equality, see (10).
Using δ r²_{k−1,k−1} > r²_{k−1,k} + r²_{kk} and the equalities in (10), we can easily verify that

√β ≤ max{r̄_{k−1,k−1}/(2σ), r̄_kk/(2σ)} < max{r_{k−1,k−1}/(2σ), r_kk/(2σ)} = r_{k−1,k−1}/(2σ) = β/(r_kk/(2σ)),    (35)

√β ≥ min{r̄_{k−1,k−1}/(2σ), r̄_kk/(2σ)} > min{r_{k−1,k−1}/(2σ), r_kk/(2σ)} = r_kk/(2σ) = β/(r_{k−1,k−1}/(2σ)).    (36)

Now we prove part 1). Note that after the permutation, r_{k−1,k−1} and r_kk change, but the other diagonal entries of R do not change. Then by Theorem 1, we can easily observe that (32) is equivalent to

[1/(d+1) + (d/(d+1)) φ_σ(r_{k−1,k−1})][1/(d+1) + (d/(d+1)) φ_σ(r_kk)] ≤ [1/(d+1) + (d/(d+1)) φ_σ(r̄_{k−1,k−1})][1/(d+1) + (d/(d+1)) φ_σ(r̄_kk)].    (37)

By the definition of φ_σ in (18) and the definition of g in (29), we can easily verify that (37) is equivalent to

g(max{r_{k−1,k−1}/(2σ), r_kk/(2σ)}, √(2π)/(2d), β) ≤ g(max{r̄_{k−1,k−1}/(2σ), r̄_kk/(2σ)}, √(2π)/(2d), β).    (38)

If r_kk/(2σ) ≥ r(√(2π)/(2d)), then the right-hand side of the last equality in (35) satisfies

β/(r_kk/(2σ)) ≤ β/r(√(2π)/(2d)).    (39)

Then by combining (35) and (39) and applying Lemma 2, we can conclude that the strict inequality in (38) holds.

The proof for part 2) is similar. The inequality (33) is equivalent to

g(min{r_{k−1,k−1}/(2σ), r_kk/(2σ)}, √(2π)/(2d), β) ≥ g(min{r̄_{k−1,k−1}/(2σ), r̄_kk/(2σ)}, √(2π)/(2d), β).    (40)

If r_{k−1,k−1}/(2σ) ≤ r(√(2π)/(2d)), then the left-hand side of the first equality in (36) satisfies

β/r(√(2π)/(2d)) ≤ β/(r_{k−1,k−1}/(2σ)).    (41)

Then by combining (36) and (41) and applying Lemma 2, we can conclude that the strict inequality in (40) holds. ∎

We make a few remarks about Theorem 2.

Remark 1: In the theorem, B is assumed to be a cube, not a more general box. This restriction simplifies the theoretical analysis. In practical applications, such as communications, B is indeed often a cube.

Remark 2: After the permutation, the larger of r_{k−1,k−1} and r_kk becomes smaller (see (35)) and the smaller one becomes larger (see (36)), so the gap between r_{k−1,k−1} and r_kk becomes smaller. This makes P^BB increase under the condition r_kk/(2σ) ≥ r(√(2π)/(2d)) or decrease under the condition r_{k−1,k−1}/(2σ) ≤ r(√(2π)/(2d)). It is natural to ask, for fixed r_{k−1,k−1} and r_kk, when P^BB will increase most or decrease most after the permutation under the corresponding conditions. From the proof we observe that P^BB becomes maximal when the first inequality in (35) becomes an equality, or minimal when the last inequality in (36) becomes an equality, under the corresponding conditions. Either of the two equalities holds if and only if r̄_{k−1,k−1} = r̄_kk, which is equivalent to r²_{k−1,k} + r²_{kk} = r_{k−1,k−1} r_kk by (10).

Remark 3: The case where r_kk/(2σ) < r(√(2π)/(2d)) < r_{k−1,k−1}/(2σ) is not covered by the theorem. In this case, P^BB may increase or decrease after the permutation; for more details, see the simulations in Sec. IV-D.

Based on Theorem 2, we can establish the following general result for the LLL-P strategy.

Theorem 3: Suppose that in the linear model (1) the box B is a cube with edge length d, x̂ is uniformly distributed over B, and x̂ and v are independent. Suppose that the linear model (1) is first transformed to the linear model (5) through the QR factorization (4), and then to the new linear model (12) through the QR factorization (7), where the LLL-P strategy is used for column permutations.
1) If the diagonal entries of R in (5) satisfy

min_i r_ii/(2σ) ≥ r(√(2π)/(2d)),    (42)

where r(·) is defined in Lemma 1, then

Pr(x^BB = x̂) ≤ Pr(z^BB = ẑ).    (43)

2) If the diagonal entries of R in (5) satisfy

max_i r_ii/(2σ) ≤ r(√(2π)/(2d)),    (44)

then

Pr(x^BB = x̂) ≥ Pr(z^BB = ẑ).    (45)

The equalities in (43) and (45) hold if and only if no column permutation occurs in the process or, whenever two consecutive columns, say k−1 and k, are permuted, r_{k−1,k} = 0.

Proof. It is easy to show that after each column permutation, the smaller of the two diagonal entries of R involved in the permutation either stays unchanged (the involved super-diagonal entry is 0 in this case) or strictly increases, while the larger one either stays unchanged or strictly decreases (see (35) and (36)). Thus, after each column permutation, the minimum of the diagonal entries of R either stays unchanged or increases and the maximum either stays unchanged or decreases, so the diagonal entries of any upper triangular R produced after a column permutation satisfy min_i r_ii ≤ r̄_kk ≤ max_i r_ii for all k = 1, ..., n. Then the conclusions follow from Theorem 2. ∎

We make some remarks about Theorem 3.

Remark 4: The quantity r(√(2π)/(2d)) is involved in the conditions. To get some idea of how large it is, one can compute it for a few different d = 2^k; the values decrease with k, as proved in Lemma 1. As d → ∞, r(√(2π)/(2d)) → r(0) = 0. Thus, when d is large enough, the condition (42) will be satisfied. By Corollary 1, taking the limit as d → ∞ on both sides of (43), we obtain the following result proved in [9]: Pr(x^OB = x̂) ≤ Pr(z^OB = ẑ), i.e., LLL-P always increases the success probability of the ordinary Babai estimator.

Remark 5: The two conditions (42) and (44) also involve the noise standard deviation σ. When σ is small, (42) is likely to hold, so applying LLL-P is likely to increase P^BB, and when σ is large, (44) is likely to hold, so applying

LLL-P is likely to decrease P^BB. It is quite surprising that when σ is large enough, applying LLL-P will decrease P^BB. Thus, before applying LLL-P, one needs to check the conditions (42) and (44). If (42) holds, one has confidence to apply LLL-P. If (44) holds, one should not apply it. If neither holds, i.e., min_i r_ii/(2σ) < r(√(2π)/(2d)) < max_i r_ii/(2σ), applying LLL-P may increase or decrease P^BB.

B. Effects of SQRD and V-BLAST on P^BB

SQRD and V-BLAST have been used to find better ordinary and box-constrained Babai estimators in the literature. It has been demonstrated in [9] that, unlike LLL-P, both SQRD and V-BLAST may decrease the success probability of the ordinary Babai estimator when the parameter vector x̂ is deterministic and not subject to any constraint. We would like to know how SQRD and V-BLAST affect P^BB. Unlike LLL-P, both SQRD and V-BLAST usually involve permutations of two non-consecutive columns, resulting in changes of all the diagonal entries between and including the two columns. This makes it very difficult to analyze under which conditions P^BB increases or decreases. We will use numerical test results to show the effects of SQRD and V-BLAST on P^BB, with explanations.

In Theorem 2 we showed that if the condition (42) holds, then applying LLL-P will increase P^BB, and if (44) holds, then applying LLL-P will decrease P^BB. The following example shows that these statements are not true for SQRD and V-BLAST.

Example 1: Let d = 1 and consider two matrices R^(1) and R^(2). Applying SQRD, V-BLAST and LLL-P to R^(1) and R^(2), we obtain the corresponding permuted upper triangular matrices. If σ = 0.2, then it is easy to verify that for both R^(1) and R^(2) the condition (42) holds. Simple calculations using (16) show that, for R^(1), SQRD decreases P^BB while V-BLAST and LLL-P increase it, and, for R^(2), V-BLAST decreases P^BB while SQRD and LLL-P leave it unchanged. If σ = 2.2, then it is easy to verify that for both R^(1) and R^(2) the condition (44) holds.
Simple calculations using (16) show that, for R^(1), SQRD increases P^BB while V-BLAST and LLL-P decrease it, and, for R^(2), V-BLAST increases P^BB while SQRD and LLL-P leave it unchanged.

Although Example 1 indicates that under the condition (42) both SQRD and V-BLAST may decrease P^BB, unlike LLL-P, they often increase it. This is why SQRD and V-BLAST (especially the latter) have often been used to increase the accuracy of the Babai estimator in practice. Example 1 also indicates that under the condition (44) both SQRD and V-BLAST may increase P^BB, unlike LLL-P, but they often decrease it, which is the opposite of what is commonly believed. Later we give numerical test results showing both phenomena. In the following we give some explanations.
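Checking the conditions (42) and (44) requires r(√(2π)/(2d)). Since f(·, α) in (27) is strictly decreasing with its zero in [0, 1) (Lemma 1), r(α) is easy to compute numerically; the paper suggests, e.g., Newton's method, while the Python sketch below (function names are ours) uses bisection for simplicity:

```python
import math

def f(zeta, alpha):
    """f(zeta, alpha) of (27); the integral of exp(-t^2/2) from 0 to zeta
    equals sqrt(pi/2) * erf(zeta / sqrt(2))."""
    integral = math.sqrt(math.pi / 2.0) * math.erf(zeta / math.sqrt(2.0))
    return (1.0 - zeta**2) * (alpha + integral) - zeta * math.exp(-zeta**2 / 2.0)

def r_of_alpha(alpha, tol=1e-12):
    """The unique zero r(alpha) in [0, 1) of f(., alpha) (Lemma 1),
    found by bisection."""
    lo, hi = 0.0, 1.0                # f(0, alpha) = alpha >= 0 and f(1, alpha) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid, alpha) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def check_conditions(rdiag, sigma, d):
    """Check the conditions (42) and (44) of Theorem 3 for a cube of edge length d."""
    r_alpha = r_of_alpha(math.sqrt(2.0 * math.pi) / (2.0 * d))
    if min(rdiag) / (2.0 * sigma) >= r_alpha:
        return "(42) holds"
    if max(rdiag) / (2.0 * sigma) <= r_alpha:
        return "(44) holds"
    return "neither condition holds"
```

Per Remark 5, such a check can be run on the diagonal of R before deciding whether to apply LLL-P.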

It is easy to show that, like LLL-P, V-BLAST increases min_i r_ii after each permutation and that, like LLL-P, SQRD decreases max_i r_ii after each permutation, see [3], [24]. For the relation between V-BLAST and SQRD, see, e.g., [8] and [24]. Thus, if the condition (42) holds before applying V-BLAST, it will also hold after applying it; and if the condition (44) holds before applying SQRD, it will also hold after applying it. Often applying V-BLAST decreases max_i r_ii and applying SQRD increases min_i r_ii (both may fail sometimes, see Example 1). Thus often the gaps between the large diagonal entries and the small ones of R decrease after applying SQRD or V-BLAST. From the proof of Theorem 2 we see that reducing the gaps will likely increase P^BB under the condition (42) and decrease it under the condition (44). Thus it is likely that both SQRD and V-BLAST will increase P^BB under (42) and decrease it under (44). We will give further explanations in the next subsection.

C. A bound on P^BB

In this subsection we give a bound on P^BB, which is an upper bound under one condition and becomes a lower bound under an opposite condition. This bound can help us understand what a column permutation strategy should try to achieve.

Theorem 4: Suppose the assumptions in Theorem 1 hold. Let the box B in (1b) be a cube with edge length d and denote γ = (det(R))^{1/n}.

1) If the condition (42) holds, then

Pr(x^BB = x̂) ≤ [1/(d+1) + (d/(d+1)) φ_σ(γ)]^n.    (46)

2) If the condition (44) holds, then

Pr(x^BB = x̂) ≥ [1/(d+1) + (d/(d+1)) φ_σ(γ)]^n.    (47)

The equality in either (46) or (47) holds if and only if r_ii = γ for i = 1, ..., n.

Proof. We prove only part 1); part 2) can be proved similarly. Note that γ^n = Π_{i=1}^n r_ii. Obviously, if r_ii = γ for i = 1, ..., n, then by (16) the equality in (46) holds. In the following we assume there exist j and k such that r_jj ≠ r_kk, and we only need to show that the strict inequality in (46) holds. Denote

F(ζ) = ln(1 + d φ_σ(exp(ζ))),  η_i = ln(r_ii) for i = 1, 2, ..., n,  and  η̄ = (1/n) Σ_{i=1}^n η_i.
It is easy to see that (46) is equivalent to

    (1/n) Σ_{i=1}^n F(η_i) < F(η̄).

Since min_i r_ii ≥ 2σ r(√(2π)/(2d)) and r_jj ≠ r_kk, it suffices to show that F(ζ) is a strictly concave function on (ln(2σ r(√(2π)/(2d))), +∞). Therefore, we only need to show that F''(ζ) < 0 when ζ > ln(2σ r(√(2π)/(2d))). To simplify notation, denote ξ = exp(ζ)/(2σ). Simple calculations give

    F'(ζ) = ξ exp(-ξ²/2) / ( √(2π)/(2d) + ∫_0^ξ exp(-t²/2) dt ).

Using (27), we obtain

    F''(ζ) = f(ξ, √(2π)/(2d)) / ( √(2π)/(2d) + ∫_0^ξ exp(-t²/2) dt )².

When ζ > ln(2σ r(√(2π)/(2d))), we have ξ > r(√(2π)/(2d)). Thus, by Lemma 1, f(ξ, √(2π)/(2d)) < 0. Then we can conclude that F''(ζ) < 0 when ζ > ln(2σ r(√(2π)/(2d))), completing the proof.

Now we make some remarks about Theorem 4.

Remark 6: The quantity γ is invariant under column permutations, i.e., for R and R̄ in (7), we have the same γ no matter what the permutation matrix P is. Thus the bounds in (46) and (47), which are actually the same quantity, are invariant under column permutations. Although the condition (42) is not invariant under column permutations, if it holds before applying LLL-P or V-BLAST, it will hold afterwards, since the minimum of the diagonal entries of R̄ will not be smaller than that of R after applying LLL-P or V-BLAST. Similarly, the condition (44) is not invariant under column permutations either; but if it holds before applying LLL-P or SQRD, it will hold afterwards, since the maximum of the diagonal entries of R̄ will not be larger than that of R after applying LLL-P or SQRD.
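The Jensen-type argument above can be sanity-checked numerically. The sketch below assumes that, for a cube with edge length d, the success probability (6) specializes to the product ∏_i [1 + d·φ_σ(r_ii)]/(d+1) with φ_σ(x) = 2Φ(x/(2σ)) − 1 (Φ the standard normal CDF); this product form is an assumption inferred from the definition of F(ζ) in the proof, not a quotation of (6). Under it, concavity of F for large diagonal entries gives the upper bound (46), and convexity for small ones gives the lower bound (47):

```python
import math

def phi_sigma(x, sigma):
    # phi_sigma(x) = 2*Phi(x/(2*sigma)) - 1, with Phi the standard normal CDF
    return math.erf(x / (2.0 * sigma * math.sqrt(2.0)))

def p_bb_cube(r_diag, sigma, d):
    # Assumed product form of (6) for a cube with edge length d:
    # each factor is (1 + d*phi_sigma(r_ii)) / (d + 1).
    p = 1.0
    for r in r_diag:
        p *= (1.0 + d * phi_sigma(r, sigma)) / (d + 1.0)
    return p

def bound_46(r_diag, sigma, d):
    # Right-hand side of (46)/(47): the same factor evaluated at the
    # geometric mean gamma = (prod_i r_ii)^(1/n), raised to the n-th power.
    n = len(r_diag)
    gamma = math.exp(sum(math.log(r) for r in r_diag) / n)
    return ((1.0 + d * phi_sigma(gamma, sigma)) / (d + 1.0)) ** n

sigma, d = 0.5, 4
large = [1.2, 1.5, 2.4, 3.1]      # all r_ii/(2*sigma) > 1: concave regime of F, cf. (42)
small = [0.02, 0.03, 0.05, 0.08]  # tiny r_ii: convex regime of F, cf. (44)
assert p_bb_cube(large, sigma, d) <= bound_46(large, sigma, d)  # upper bound, as in (46)
assert p_bb_cube(small, sigma, d) >= bound_46(small, sigma, d)  # lower bound, as in (47)
```

The two asserts mirror the two parts of Theorem 4 for one concrete diagonal in each regime; the threshold values of r_ii separating the regimes depend on d through r(√(2π)/(2d)).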

Remark 7: The bounds (46) and (47) are attained if all the diagonal entries of R are identical. This suggests that if the gaps between the large diagonal entries and the small ones become smaller after permutations, it is likely that P_BB increases under the condition (42) or decreases under the condition (44). As we know, the gap between the largest diagonal entry and the smallest one decreases after applying LLL-P. Numerical tests indicate that usually this is also true for both V-BLAST and SQRD. Thus both V-BLAST and SQRD will likely bring P_BB closer to the bound under the two opposite conditions, respectively.

Remark 8: When d → ∞, by Lemma 1, r(√(2π)/(2d)) → 0; thus the condition in part 1 of Theorem 4 becomes min_i r_ii ≥ 0, which certainly always holds. Taking the limit as d → ∞ on both sides of (46) and using Corollary 1, we obtain

    Pr(x_OB = x̂) ≤ (φ_σ(γ))^n.    (48)

The above result was obtained in [33], and a simple proof was provided in [9].

D. Numerical tests

We have shown in Theorem 3 that if (42) holds, then LLL-P increases P_BB and (46) is an upper bound on P_BB; and that if (44) holds, then LLL-P decreases P_BB and (47) is a lower bound on P_BB. Example 1 shows that this conclusion does not always hold for SQRD and V-BLAST. To further understand the effects of LLL-P, SQRD and V-BLAST on P_BB, and to see how close they bring their corresponding P_BB to the bounds given by (46) and (47), we performed some numerical tests. For comparison, we also performed tests for P_OB. We performed MATLAB tests for the following two cases.

Case 1. A = (√2/2) randn(n, n), where randn(n, n) is a MATLAB built-in function that generates a random n × n matrix whose entries follow the i.i.d. normal distribution N(0, 1). So the elements of A follow the i.i.d. normal distribution N(0, 1/2).

Case 2. A = U D V^T, where U and V are random orthogonal matrices obtained from the QR factorizations of random matrices generated by randn(n, n), and D is an n × n diagonal matrix with d_ii = 10^(3(n/2−i)/(n−1)). The condition number of A is 1000.
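The two test cases are described in MATLAB terms; the following NumPy transcription is a sketch of that setup (the seed and the use of `default_rng` are ours, and the exponent of the singular values is taken so that cond(A) is exactly 10^3):

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen here only for reproducibility
n = 4

# Case 1: entries of A are i.i.d. N(0, 1/2).
A1 = (np.sqrt(2.0) / 2.0) * rng.standard_normal((n, n))

# Case 2: A = U * D * V^T with random orthogonal U, V obtained from QR
# factorizations, and d_ii = 10^(3(n/2 - i)/(n-1)), giving
# cond(A) = d_11 / d_nn = 10^3 = 1000 by construction.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
d_sv = 10.0 ** (3.0 * (n / 2.0 - np.arange(1, n + 1)) / (n - 1.0))
A2 = U @ np.diag(d_sv) @ V.T
```

Since U and V are orthogonal, the d_ii are exactly the singular values of A2, so its 2-norm condition number is the ratio of the largest to the smallest diagonal entry of D.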
In the tests for each case, we first chose n = 4 and B = [0, 1]^4 and took different noise standard deviations σ to test different situations according to the conditions (42) and (44) imposed in Theorems 3 and 4. The edge length d of B is equal to 1, so in (42) and (44), √(2π)/(2d) = √(2π)/2, i.e., the threshold is 2σ r(√(2π)/2). Details about choosing σ will be given later. We use P_BB^QR, P_BB^LLL, P_BB^SQRD and P_BB^VB to denote, respectively, the success probability of the box-constrained Babai estimator corresponding to the QR factorization (i.e., no permutations are involved), LLL-P, SQRD and V-BLAST. Let µ_BB denote the right-hand side of (46) or (47); it is an upper bound if (42) holds and a lower bound if (44) holds. Similarly, P_OB^QR, P_OB^LLL, P_OB^SQRD and P_OB^VB denote, respectively, the success probability of the ordinary Babai estimator corresponding to the QR factorization, LLL-P, SQRD and V-BLAST. We use µ_OB to denote the right-hand side of (48), which is an upper bound on all four. For each case, we performed 10 runs (notice that for each run we have different A, x̂ and v due to randomness), and the results are displayed in Tables I-VI.

In Tables I and II, σ = σ₁ = min_i(r_ii)/1.8. It is easy to verify that the condition (42) holds. This means that in theory P_BB^QR ≤ P_BB^LLL by Theorem 3, and P_BB^QR, P_BB^LLL, P_BB^SQRD, P_BB^VB ≤ µ_BB by Theorem 4 and Remark 6. The numerical results given in Tables I and II are consistent with these theoretical results. The numerical results also indicate that SQRD and V-BLAST (nonstrictly) increase P_BB, although there is one exceptional case for SQRD in Table II. We observe that the permutation strategies increase P_BB more significantly for Case 2 than for Case 1. The reason is that A is more ill-conditioned in Case 2, resulting in bigger gaps between the diagonal entries of R, which can usually be reduced more effectively by the permutation strategies. We also observe that P_BB^SQRD ≤ µ_BB in both tables. Although in theory this inequality may not hold, as we cannot guarantee that the condition (42) holds after applying SQRD, usually SQRD makes min_i r_ii larger. Thus if (42) holds before applying SQRD, it is likely that the condition still holds after applying it.
Thus it is likely that P_BB^SQRD ≤ µ_BB holds.

Tables III and IV are opposite to Tables I and II. In both tables, σ = σ₂ = max_i(r_ii)/1.6, so the condition (44) holds. This means that in theory P_BB^LLL ≤ P_BB^QR by Theorem 3, and µ_BB ≤ P_BB^QR, P_BB^LLL, P_BB^SQRD, P_BB^VB by Theorem 4 and Remark 6. The numerical results given in the two tables are consistent with these theoretical results. The results in the two tables also indicate that both SQRD and V-BLAST (nonstrictly) decrease P_BB, although Example 1 shows that neither is always true under the condition (44). We also observe that µ_BB ≤ P_BB^VB in both tables. Although in theory this inequality may not hold, as we cannot guarantee that the condition (44) holds after applying V-BLAST, usually V-BLAST makes max_i r_ii smaller. Thus if (44) holds before applying V-BLAST, it is likely that the condition still holds after applying it, and hence it is likely that µ_BB ≤ P_BB^VB holds.
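For reference, the box-constrained Babai point whose success probability is being measured can be computed by back substitution with rounding and clamping. This is a generic sketch of the nearest-plane estimator x_BB for a box [lo, hi]^n, not code from the paper:

```python
import numpy as np

def babai_box(R, y, lo, hi):
    """Box-constrained Babai point: back-substitute, round each component
    to the nearest integer, and clamp it to the box [lo, hi]."""
    n = R.shape[0]
    x = np.zeros(n, dtype=int)
    for i in range(n - 1, -1, -1):
        # residual after fixing the already-determined components
        c = (y[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
        x[i] = min(max(int(round(c)), lo), hi)
    return x

R = np.array([[2.0, 0.3],
              [0.0, 1.5]])
x_true = np.array([1, 0])
y = R @ x_true + np.array([0.1, -0.2])   # small noise
print(babai_box(R, y, 0, 1))             # prints [1 0]
```

With larger noise the unconstrained rounding can leave the box, in which case the clamp to [lo, hi] is what distinguishes x_BB from the ordinary Babai point x_OB.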

In Tables V and VI, σ = σ₃ = (0.3 max_i(r_ii) + 0.7 min_i(r_ii))/1.68. It is easy to verify that neither (42) (almost) nor (44) holds. We do not have theoretical results covering this situation. The numerical results in the two tables indicate that each of the three permutation strategies can either increase or decrease P_BB, and that µ_BB can be larger or smaller than P_BB^QR, P_BB^LLL, P_BB^SQRD and P_BB^VB. The reason we chose the weights 0.3 and 0.7, rather than the more natural choice of 0.5 and 0.5, in defining σ₃ is that otherwise we might not observe both the increasing and the decreasing phenomena within the limited number of runs.

Now we make some comments on the success probability of ordinary Babai points. From Tables I-VI, we observe that LLL-P always (nonstrictly) increases P_OB, and that SQRD and V-BLAST almost always increase it (there is one exceptional case for SQRD in Table II and two exceptional cases for V-BLAST in Table IV). Thus the ordinary case is different from the box-constrained case. We also observe that P_OB ≤ P_BB for the same permutation strategy. Sometimes the difference between the two is large (see Tables IV and VI).

Each of Tables I-VI displays the results for only 10 runs due to space limitations. To make up for this shortcoming, we give Tables VII and VIII, which display some statistics for 1000 runs on data generated in exactly the same way as the data for the 10 runs. Specifically, these two tables display the number of runs in which P_BB (P_OB) increases, keeps unchanged, and decreases after each of the three permutation strategies is applied, for Case 1 and Case 2, respectively. In the two tables, σ₁, σ₂ and σ₃ are defined in the same way as in Tables I-VI. From Tables VII and VIII, we can see that these permutation strategies often increase or decrease P_BB (P_OB) for the same data. The numerical results given in all the tables suggest that if the condition (42) holds, we can confidently use any of these permutation strategies; and if the condition (44) holds, we should not use any of them.
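The increase/no-change/decrease bookkeeping used for the 1000-run statistics can be sketched as follows. The snippet assumes the product form P_OB = ∏_i φ_σ(|r_ii|) (consistent with the bound (48)), and uses a random column permutation as a stand-in for LLL-P, SQRD and V-BLAST, which are not implemented here; only the tallying is illustrated:

```python
import numpy as np
from math import erf, sqrt

def p_ob(R, sigma):
    # Assumed product form of the ordinary Babai success probability,
    # consistent with the bound (48): prod_i phi_sigma(|r_ii|).
    return float(np.prod([erf(abs(r) / (2 * sigma * sqrt(2))) for r in np.diag(R)]))

rng = np.random.default_rng(1)
n, sigma, runs = 4, 0.4, 1000
tally = {"increase": 0, "no change": 0, "decrease": 0}
for _ in range(runs):
    A = (np.sqrt(2) / 2) * rng.standard_normal((n, n))   # Case 1 data
    _, R0 = np.linalg.qr(A)
    # A random column permutation stands in here for LLL-P/SQRD/V-BLAST;
    # only the increase/no-change/decrease bookkeeping is illustrated.
    _, R1 = np.linalg.qr(A[:, rng.permutation(n)])
    diff = p_ob(R1, sigma) - p_ob(R0, sigma)
    key = "increase" if diff > 1e-12 else ("decrease" if diff < -1e-12 else "no change")
    tally[key] += 1
print(tally)
```

Replacing the random permutation by an actual strategy and repeating for each σ_k reproduces one column of such a statistics table.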
TABLE I: SUCCESS PROBABILITIES AND BOUNDS FOR CASE 1, σ = σ₁ = min_i(r_ii)/1.8
TABLE II: SUCCESS PROBABILITIES AND BOUNDS FOR CASE 2, σ = σ₁ = min_i(r_ii)/1.8
TABLE III: SUCCESS PROBABILITIES AND BOUNDS FOR CASE 1, σ = σ₂ = max_i(r_ii)/1.6

TABLE IV: SUCCESS PROBABILITIES AND BOUNDS FOR CASE 2, σ = σ₂ = max_i(r_ii)/1.6
TABLE V: SUCCESS PROBABILITIES AND BOUNDS FOR CASE 1, σ = σ₃ = (0.3 max_i(r_ii) + 0.7 min_i(r_ii))/1.68
TABLE VI: SUCCESS PROBABILITIES AND BOUNDS FOR CASE 2, σ = σ₃ = (0.3 max_i(r_ii) + 0.7 min_i(r_ii))/1.68
TABLE VII: NUMBER OF RUNS OUT OF 1000 IN WHICH P_BB AND P_OB CHANGE FOR CASE 1 (rows: increase / no change / decrease, for each of σ₁, σ₂, σ₃; columns: LLL-P, SQRD, V-BLAST, for each of P_BB and P_OB)

TABLE VIII: NUMBER OF RUNS OUT OF 1000 IN WHICH P_BB AND P_OB CHANGE FOR CASE 2 (same layout as Table VII)

Tables VII and VIII do not show which permutation strategy increases P_BB the most, and the information on this given in Tables I-VI is limited. In the following we give more test results to investigate this. We still consider Cases 1 and 2, but we take B = [0, 5]^n and choose n and σ differently from before. In Figures 1 and 2, for n = 20, we take σ = 0.1 : 0.1 : 0.8 and σ = 0.01 : 0.01 : 0.08 for Cases 1 and 2, respectively. For each σ, we performed 200 runs, generating 200 different matrices A. These two figures display the average P_BB corresponding to QR, LLL-P, SQRD and V-BLAST over the 200 runs for Cases 1 and 2, respectively. Figures 3 and 4 display the average P_BB corresponding to the various permutation strategies over 200 runs versus n = 5 : 5 : 40, with σ = 0.4 and σ = 0.04 for Cases 1 and 2, respectively. The reason we chose different σ for the two cases is to ensure that P_BB is neither close to 0 nor close to 1; otherwise, it is not very interesting to investigate the effects of the column permutations on P_BB. From Figures 1-4, we can see that on average all three column permutation strategies improve P_BB. The effect of V-BLAST is much more significant than that of LLL-P and SQRD, which have more or less the same performance. This phenomenon is similar to that for P_OB, as shown in [9].

Fig. 1. Average success probabilities over 200 runs versus σ for Case 1, n = 20.

V. ON THE CONJECTURE PROPOSED IN [27]

In [27], a conjecture was made on the ordinary Babai estimator, based on which a stopping criterion was then proposed for the sphere decoding search process for solving the BILS problem (2).
In this section, we first introduce this conjecture, then give an example to show that the conjecture may not hold in general, and finally show that the conjecture does hold under some conditions. The problem considered in [27] is to estimate the integer parameter vector x̂ in the box-constrained linear model (1). The method proposed in [27] first ignores the box constraint (1b). Instead of using the column permutations in (7), it performs the LLL reduction:

    Q̄^T R Z = R̄    (49)

where Q̄ is orthogonal, Z is unimodular (i.e., Z ∈ Z^{n×n} and det(Z) = ±1), and the upper triangular R̄ is LLL-reduced, i.e., it satisfies the Lovász condition (8) and the size-reduction condition:

    |r̄_ik| ≤ (1/2) r̄_ii,  for k = i+1, i+2, ..., n.
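Whether a given upper triangular matrix is LLL-reduced is mechanical to check. The sketch below verifies the size-reduction condition above together with a Lovász condition in its common form δ·r̄²_{k−1,k−1} ≤ r̄²_{k,k} + r̄²_{k−1,k} with δ = 3/4; this is an assumed form, since (8) is not restated in this section:

```python
def is_lll_reduced(R, delta=0.75):
    """Check the size-reduction condition |r_ik| <= r_ii / 2 for k > i, and
    a Lovász condition in the common form
    delta * r_{k-1,k-1}^2 <= r_{k,k}^2 + r_{k-1,k}^2 (assumed form of (8))."""
    n = len(R)
    for i in range(n):
        for k in range(i + 1, n):
            if abs(R[i][k]) > abs(R[i][i]) / 2:
                return False          # size reduction violated
    for k in range(1, n):
        if delta * R[k - 1][k - 1] ** 2 > R[k][k] ** 2 + R[k - 1][k] ** 2:
            return False              # Lovász condition violated
    return True

R_good = [[2.0, 0.9],
          [0.0, 1.9]]
R_bad = [[2.0, 1.5],                  # 1.5 > 2.0/2: size reduction fails
         [0.0, 0.4]]
print(is_lll_reduced(R_good), is_lll_reduced(R_bad))   # prints True False
```

An LLL reduction algorithm alternates size reductions and column swaps (with the Q̄ and Z factors accumulated) until such a test passes.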

Fig. 2. Average success probabilities over 200 runs versus σ for Case 2, n = 20.
Fig. 3. Average success probabilities over 200 runs versus n for Case 1, σ = 0.4.
Fig. 4. Average success probabilities over 200 runs versus n for Case 2, σ = 0.04.


Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Codes for Partially Stuck-at Memory Cells

Codes for Partially Stuck-at Memory Cells 1 Codes for Partially Stuck-at Memory Cells Antonia Wachter-Zeh and Eitan Yaakobi Department of Computer Science Technion Israel Institute of Technology, Haifa, Israel Email: {antonia, yaakobi@cs.technion.ac.il

More information

CHAPTER 6. Direct Methods for Solving Linear Systems

CHAPTER 6. Direct Methods for Solving Linear Systems CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to

More information

Linear algebra for MATH2601: Theory

Linear algebra for MATH2601: Theory Linear algebra for MATH2601: Theory László Erdős August 12, 2000 Contents 1 Introduction 4 1.1 List of crucial problems............................... 5 1.2 Importance of linear algebra............................

More information

Lecture 3: QR-Factorization

Lecture 3: QR-Factorization Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization

More information

MODEL ANSWERS TO THE FIRST QUIZ. 1. (18pts) (i) Give the definition of a m n matrix. A m n matrix with entries in a field F is a function

MODEL ANSWERS TO THE FIRST QUIZ. 1. (18pts) (i) Give the definition of a m n matrix. A m n matrix with entries in a field F is a function MODEL ANSWERS TO THE FIRST QUIZ 1. (18pts) (i) Give the definition of a m n matrix. A m n matrix with entries in a field F is a function A: I J F, where I is the set of integers between 1 and m and J is

More information

HKZ and Minkowski Reduction Algorithms for Lattice-Reduction-Aided MIMO Detection

HKZ and Minkowski Reduction Algorithms for Lattice-Reduction-Aided MIMO Detection IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 11, NOVEMBER 2012 5963 HKZ and Minkowski Reduction Algorithms for Lattice-Reduction-Aided MIMO Detection Wen Zhang, Sanzheng Qiao, and Yimin Wei Abstract

More information

Lecture 23 Branch-and-Bound Algorithm. November 3, 2009

Lecture 23 Branch-and-Bound Algorithm. November 3, 2009 Branch-and-Bound Algorithm November 3, 2009 Outline Lecture 23 Modeling aspect: Either-Or requirement Special ILPs: Totally unimodular matrices Branch-and-Bound Algorithm Underlying idea Terminology Formal

More information

arxiv: v5 [math.na] 16 Nov 2017

arxiv: v5 [math.na] 16 Nov 2017 RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem

More information

The Behavior of Algorithms in Practice 2/21/2002. Lecture 4. ɛ 1 x 1 y ɛ 1 x 1 1 = x y 1 1 = y 1 = 1 y 2 = 1 1 = 0 1 1

The Behavior of Algorithms in Practice 2/21/2002. Lecture 4. ɛ 1 x 1 y ɛ 1 x 1 1 = x y 1 1 = y 1 = 1 y 2 = 1 1 = 0 1 1 8.409 The Behavior of Algorithms in Practice //00 Lecture 4 Lecturer: Dan Spielman Scribe: Matthew Lepinski A Gaussian Elimination Example To solve: [ ] [ ] [ ] x x First factor the matrix to get: [ ]

More information

22.4. Numerical Determination of Eigenvalues and Eigenvectors. Introduction. Prerequisites. Learning Outcomes

22.4. Numerical Determination of Eigenvalues and Eigenvectors. Introduction. Prerequisites. Learning Outcomes Numerical Determination of Eigenvalues and Eigenvectors 22.4 Introduction In Section 22. it was shown how to obtain eigenvalues and eigenvectors for low order matrices, 2 2 and. This involved firstly solving

More information

Matrices, Moments and Quadrature, cont d

Matrices, Moments and Quadrature, cont d Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that

More information

The Hodge Star Operator

The Hodge Star Operator The Hodge Star Operator Rich Schwartz April 22, 2015 1 Basic Definitions We ll start out by defining the Hodge star operator as a map from k (R n ) to n k (R n ). Here k (R n ) denotes the vector space

More information

Rank Determination for Low-Rank Data Completion

Rank Determination for Low-Rank Data Completion Journal of Machine Learning Research 18 017) 1-9 Submitted 7/17; Revised 8/17; Published 9/17 Rank Determination for Low-Rank Data Completion Morteza Ashraphijuo Columbia University New York, NY 1007,

More information

Layered Orthogonal Lattice Detector for Two Transmit Antenna Communications

Layered Orthogonal Lattice Detector for Two Transmit Antenna Communications Layered Orthogonal Lattice Detector for Two Transmit Antenna Communications arxiv:cs/0508064v1 [cs.it] 12 Aug 2005 Massimiliano Siti Advanced System Technologies STMicroelectronics 20041 Agrate Brianza

More information

Efficient Inverse Cholesky Factorization for Alamouti Matrices in G-STBC and Alamouti-like Matrices in OMP

Efficient Inverse Cholesky Factorization for Alamouti Matrices in G-STBC and Alamouti-like Matrices in OMP Efficient Inverse Cholesky Factorization for Alamouti Matrices in G-STBC and Alamouti-like Matrices in OMP Hufei Zhu, Ganghua Yang Communications Technology Laboratory Huawei Technologies Co Ltd, P R China

More information

Math 407: Linear Optimization

Math 407: Linear Optimization Math 407: Linear Optimization Lecture 16: The Linear Least Squares Problem II Math Dept, University of Washington February 28, 2018 Lecture 16: The Linear Least Squares Problem II (Math Dept, University

More information

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,

More information

K User Interference Channel with Backhaul

K User Interference Channel with Backhaul 1 K User Interference Channel with Backhaul Cooperation: DoF vs. Backhaul Load Trade Off Borna Kananian,, Mohammad A. Maddah-Ali,, Babak H. Khalaj, Department of Electrical Engineering, Sharif University

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Dense LU factorization and its error analysis

Dense LU factorization and its error analysis Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

CLASS NOTES Computational Methods for Engineering Applications I Spring 2015

CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 Petros Koumoutsakos Gerardo Tauriello (Last update: July 27, 2015) IMPORTANT DISCLAIMERS 1. REFERENCES: Much of the material

More information

ON THE QR ITERATIONS OF REAL MATRICES

ON THE QR ITERATIONS OF REAL MATRICES Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix

More information

A Hybrid Method for Lattice Basis Reduction and. Applications

A Hybrid Method for Lattice Basis Reduction and. Applications A Hybrid Method for Lattice Basis Reduction and Applications A HYBRID METHOD FOR LATTICE BASIS REDUCTION AND APPLICATIONS BY ZHAOFEI TIAN, M.Sc. A THESIS SUBMITTED TO THE DEPARTMENT OF COMPUTING AND SOFTWARE

More information

AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION

AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION Unspecified Journal Volume 00, Number 0, Pages 000 000 S????-????(XX)0000-0 AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION HUAJUN HUANG AND TIN-YAU TAM Abstract. The m-th root of the diagonal of the upper

More information

Solving Linear Systems

Solving Linear Systems Solving Linear Systems Iterative Solutions Methods Philippe B. Laval KSU Fall 207 Philippe B. Laval (KSU) Linear Systems Fall 207 / 2 Introduction We continue looking how to solve linear systems of the

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given

More information

18.06 Professor Johnson Quiz 1 October 3, 2007

18.06 Professor Johnson Quiz 1 October 3, 2007 18.6 Professor Johnson Quiz 1 October 3, 7 SOLUTIONS 1 3 pts.) A given circuit network directed graph) which has an m n incidence matrix A rows = edges, columns = nodes) and a conductance matrix C [diagonal

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2)

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2) IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 361 Uplink Downlink Duality Via Minimax Duality Wei Yu, Member, IEEE Abstract The sum capacity of a Gaussian vector broadcast channel

More information

k-protected VERTICES IN BINARY SEARCH TREES

k-protected VERTICES IN BINARY SEARCH TREES k-protected VERTICES IN BINARY SEARCH TREES MIKLÓS BÓNA Abstract. We show that for every k, the probability that a randomly selected vertex of a random binary search tree on n nodes is at distance k from

More information

The Shortest Vector Problem (Lattice Reduction Algorithms)

The Shortest Vector Problem (Lattice Reduction Algorithms) The Shortest Vector Problem (Lattice Reduction Algorithms) Approximation Algorithms by V. Vazirani, Chapter 27 - Problem statement, general discussion - Lattices: brief introduction - The Gauss algorithm

More information

Computational Methods. Systems of Linear Equations

Computational Methods. Systems of Linear Equations Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations

More information

Krylov Subspace Methods that Are Based on the Minimization of the Residual

Krylov Subspace Methods that Are Based on the Minimization of the Residual Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean

More information

FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland

FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland 4 May 2012 Because the presentation of this material

More information