MODIFIED SPHERE DECODING ALGORITHMS AND THEIR APPLICATIONS TO SOME SPARSE APPROXIMATION PROBLEMS. Przemysław Dymarski and Rafał Romaniuk

MODIFIED SPHERE DECODING ALGORITHMS AND THEIR APPLICATIONS TO SOME SPARSE APPROXIMATION PROBLEMS

Przemysław Dymarski and Rafał Romaniuk
Institute of Telecommunications, Warsaw University of Technology
ul. Nowowiejska 15/19, 00-665 Warsaw, Poland
email: dymarski@tele.pw.edu.pl, r.romaniuk@tele.pw.edu.pl

ABSTRACT

This work presents modified sphere decoding (MSD) algorithms for the optimal solution of some sparse signal modeling problems. These problems include excitation signal calculation for multi-pulse excitation (MPE), algebraic code excited linear predictive (ACELP) and k-pulse maximum likelihood quantization (MP-MLQ) speech coders. With the proposed MSD algorithms, the optimal solution of these problems can be obtained at a substantially lower computational cost than with a full search algorithm. The MSD algorithms are compared with a series of suboptimal approaches on sparse approximation of correlated Gaussian signals and on low delay speech coding tasks.

Index Terms: sphere decoder, lattice, sparse approximation, speech coding, CELP, MP-MLQ, ACELP.

1. INTRODUCTION

Sparse approximation techniques are widely used in speech and audio coding [1-4], MIMO communications [5, 6], array signal processing [7], radar, etc. In this paper the following sparse approximation problem is considered:

  min_x ||y - F x||^2   (1)

where F is an N x L real matrix (a codebook, dictionary, or lattice-generating matrix), y is the target vector, and x belongs to D_M^L, a finite L-dimensional lattice with M elements per dimension; i.e., each component of x may take one of the M values (the alphabet). An example of such an alphabet is the set of data transmission symbols in MIMO communication, or the quantized gain values in a multistage shape-gain vector quantizer. If the alphabet contains a zero value, then a sparse solution may be obtained. In some applications (MIMO communication) both the vectors and the matrix are complex, but in this work real numbers are used. Three cases are considered in this paper.

Case 1. Closest point in the lattice. Vector x may be dense or K-sparse.
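Problem (1) over a finite alphabet can be stated compactly in code. The following sketch is an illustrative brute-force reference (not the sphere decoder), with a toy dictionary and target of our own choosing:

```python
import itertools
import numpy as np

def full_search(F, y, alphabet=(-1, 0, 1)):
    """Exhaustively solve min_x ||y - F x||^2 over the finite lattice D_M^L."""
    L = F.shape[1]
    best_x, best_err = None, np.inf
    for cand in itertools.product(alphabet, repeat=L):
        x = np.array(cand, dtype=float)
        err = float(np.sum((y - F @ x) ** 2))
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err

# toy instance (illustrative values only)
F = np.array([[1.0, 0.5],
              [0.0, 1.0]])
y = np.array([1.0, -1.0])
x_opt, err = full_search(F, y)
```

The cost of this reference grows as M^L, which is exactly what the tree-search algorithms of this paper avoid.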
If N >= L, the optimal solution of (1) may be found using a sphere decoding (SD) algorithm [5, 8]. For N < L, generalized SD algorithms may be used [9], at the cost of increased computational complexity. This model is mainly used in the simulation of MIMO systems [5, 6].

Case 2. Uniformly scaled lattice (isotropic scaling). Problem (1) is generalized: now vector x is multiplied by a scalar gain g. Thus g and x are searched, yielding the minimum of ||y - g F x||^2. The most important applications are in ACELP (algebraic CELP) and MP-MLQ (multi-pulse maximum likelihood quantization) speech coders [1]. In these coders vector x is K-sparse and its components are x_i in {-1, 0, 1}. The sparse approximation of the target vector is the sum of the signed columns of F multiplied by g:

  g F x = g Σ_{i=1..K} ξ(i) f_{j(i)}   (2)

where ξ(i) in {-1, 1} and f_{j(i)} is a column of the square matrix F. Column f_j contains the response of the predictive synthesis filter to a single pulse of unit amplitude positioned at j. In order to find the indices j(i), the signs ξ(i) and the gain g, suboptimal codebook search algorithms are used [1, 2, 4].

Case 3. Non-uniformly scaled lattice (anisotropic scaling). Vector g x is now replaced with G x, where G = diag{g_1, g_2, ..., g_L} contains real gains. This is a typical speech model used in MPE (multi-pulse excitation) speech coders [10]. In these coders vector x is K-sparse and its components are x_i in {0, 1}. The sparse approximation of the target vector is a linear combination:

  F G x = Σ_{i=1..K} g_{j(i)} f_{j(i)}   (3)

In MPE coders the matrix F, containing filtered unit pulses, is square (which is assumed in this paper), but in CELP coders L may be greater than N. The indices j(i) and gains g_{j(i)} are searched using suboptimal algorithms [3, 10-12]. Signal model (3) is also used to describe radar signals, sparse communication channels [13], etc.

The sparse approximation problems (1) to (3) are NP-hard. A full search algorithm, testing all possible solutions, is generally not feasible. However, optimal tree searching algorithms may reduce the computational load significantly.
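As a sanity check of model (3), the product F G x with a K-sparse selector x reproduces the gain-weighted sum of the selected columns. A minimal numpy sketch (toy dictionary, arbitrary positions and gains of our own choosing):

```python
import numpy as np

N = 6
# toy "filtered pulse" dictionary: unit pulses smeared by a simple filter
F = np.eye(N) + 0.3 * np.eye(N, k=-1)
positions = [1, 4]          # j(1), j(2): illustrative pulse positions
gains = [2.0, -0.5]         # g_{j(1)}, g_{j(2)}: illustrative gains

x = np.zeros(N)
x[positions] = 1.0          # K-sparse selector, x_i in {0, 1}
g = np.zeros(N)
g[positions] = gains
G = np.diag(g)              # G = diag{g_1, ..., g_L}

model = F @ G @ x                                   # left-hand side of (3)
manual = gains[0] * F[:, 1] + gains[1] * F[:, 4]    # right-hand side of (3)
match = bool(np.allclose(model, manual))
```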
In Case 1 several variants of the sphere decoding (SD) algorithm are used [5]. In Cases 2 and 3, SD and related algorithms are used for the calculation of the integer vector x if the gains (g or G) are known; thus, the gain calculation is decoupled from the search for the closest point in the lattice, yielding suboptimal algorithms [6, 7, 13, 14]. In this work an optimal tree search, based on a modified SD approach, is proposed for the joint calculation of the lattice point x and the gain(s). The proposed algorithms are compared with suboptimal ones on two selected problems: sparse approximation of correlated Gaussian signals and low delay speech coding.

2. MODIFIED AND SPARSE SD ALGORITHMS

2.1. The standard sphere decoding algorithm

Generally, the sphere decoding (SD) algorithm is used for solving (1), and the alphabet D_M is usually a set of integer values. The matrix F is assumed to be square, but the extension to N > L is straightforward. Using the QR decomposition of the matrix F, (1) may be transformed into the following problem:

  min_x ||z - R x||^2   (4)

where z = Q^T y and R is an upper triangular matrix. The SD algorithm consists of N levels. At each level a new column of R is appended (from the right to the left side) and the lattice points, generated by the corresponding component of x, are tested. At the k-th level these are the column r_j and the component x_j, where j = N - k + 1. The partial solution of problem (4), reduced to k-1 dimensions, is not changed at the k-th level, and thus the partial difference vector

  e^(k-1) = z^(k-1) - R^(k-1) x^(k-1)   (5)

remains constant (see Fig. 1); here z^(k-1) and x^(k-1) contain the last k-1 components of z and x, and R^(k-1) is the corresponding lower-right submatrix of R.

Fig. 1. Evolution of the accumulated distance in the SD algorithm

At the k-th level the partial difference equals

  e^(k) = z^(k) - R^(k) x^(k) = [ z_j - r_j^T x^(k) ;  e^(k-1) ]   (6)

where r_j^T is the j-th row of R restricted to its last k components. The squared norm of the partial difference vector (i.e., the squared partial distance) increases:

  d_k^2 = ||e^(k)||^2 = ||e^(k-1)||^2 + ( z_j - r_j^T x^(k) )^2   (7)

Thus, when looking for a lattice point within a sphere of radius R centered at z, it makes no sense to pass to the (k+1)-th level if d_k^2 > R^2. The search must be continued at lower levels. That is why the SD algorithm requires far fewer arithmetic operations than the full search algorithm.
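A compact depth-first sketch of this pruning rule, with the shrinking-radius variant and a small ternary alphabet, might look as follows (an illustrative re-implementation, cross-checked against exhaustive search on a toy instance of our own choosing):

```python
import itertools
import numpy as np

def sphere_decode(F, y, alphabet=(-1, 0, 1)):
    """Depth-first SD for min ||y - F x||^2 with x in alphabet^N.
    After QR, components x_N, ..., x_1 are fixed one per level; a branch is
    pruned as soon as the squared partial distance (7) reaches the current
    best value (shrinking radius)."""
    Q, R = np.linalg.qr(F)
    z = Q.T @ y
    N = F.shape[1]
    best = {"err": np.inf, "x": None}
    x = np.zeros(N)

    def descend(j, dist2):              # j runs from N-1 down to -1 (0-based)
        if j < 0:
            if dist2 < best["err"]:
                best["err"], best["x"] = dist2, x.copy()
            return
        for s in alphabet:
            x[j] = s
            d2 = dist2 + float(z[j] - R[j, j:] @ x[j:]) ** 2   # eq. (7)
            if d2 < best["err"]:        # otherwise prune this branch
                descend(j - 1, d2)
        x[j] = 0

    descend(N - 1, 0.0)
    return best["x"], best["err"]

# cross-check on a small instance against exhaustive search
F = np.array([[1.0, 0.4, 0.2],
              [0.3, 1.0, 0.5],
              [0.1, 0.2, 1.0]])
y = np.array([0.9, -1.1, 0.4])
x_sd, err_sd = sphere_decode(F, y)
err_full = min(float(np.sum((y - F @ np.array(c)) ** 2))
               for c in itertools.product((-1, 0, 1), repeat=3))
```

Because Q is orthogonal, ||y - F x||^2 = ||z - R x||^2, so the decoder and the exhaustive search minimize the same criterion.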
2.2. The modified sphere decoding algorithm

Now we introduce the modified SD (MSD) algorithm for searching for the optimal point in a uniformly scaled lattice (Case 2). At level k-1, the squared norm of the partial difference vector may be expressed as a function of the gain g:

  f_{k-1}(g) = ||z^(k-1) - g R^(k-1) x^(k-1)||^2   (8)

In the MSD algorithm the optimal gain is calculated:

  g_opt^(k-1) = ( x^(k-1)T R^(k-1)T z^(k-1) ) / ( x^(k-1)T R^(k-1)T R^(k-1) x^(k-1) )   (9)

At the k-th level, the partial difference equals

  z^(k) - g R^(k) x^(k) = [ z_j - g r_j^T x^(k) ;  z^(k-1) - g R^(k-1) x^(k-1) ]   (10)

yielding

  f_k(g) = ( z_j - g r_j^T x^(k) )^2 + f_{k-1}(g)   (11)

For g = g_opt^(k-1) the second term attains its minimum, but generally the gain used at the k-th level, i.e., g_opt^(k), differs from g_opt^(k-1). Because (11) holds for any gain, the following inequality is true: f_k(g_opt^(k)) >= f_{k-1}(g_opt^(k)) >= f_{k-1}(g_opt^(k-1)). Thus, the squared norm of the partial difference (the squared partial distance) increases, and the MSD algorithm is applicable to searching for the optimal scaling factor and the optimal lattice point in the uniformly scaled lattice.

2.3. The sparse sphere decoding algorithm

In the algorithm described above, sparsity of x is not demanded, but it may be attained if the alphabet contains a symbol equal to zero. Thus the optimal number of nonzero elements in x is obtained by using the SD and MSD algorithms. In order to obtain a K-sparse solution, some constraints are added to the search procedure. The solution is updated and the radius R is reduced at the last (N-th) level if d_N < R and if the sparsity (the number of nonzero components of x) equals K. Moreover, the k-th level is entered only if d_{k-1} < R and if the sparsity of x^(k-1) does not exceed K. The second condition yields an additional reduction of the computational complexity, which has been noted in [13]. A further reduction may be obtained by removing the null symbol from the alphabet and testing combinations of columns of R in such a way that the best K-combination is not omitted. The complexity reduction of the proposed sparse SD algorithm is owed to this reduction of the alphabet.
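The inequality chain above can be checked numerically: at each level the gain-optimized partial distance never decreases. A small sketch with illustrative random data (not from the paper):

```python
import numpy as np

def optimal_gain(z, v):
    """Gain minimizing ||z - g v||^2, cf. eq. (9), with v = R^(k) x^(k)."""
    return float(v @ z) / float(v @ v)

rng = np.random.default_rng(0)
N = 5
R = np.triu(rng.normal(size=(N, N)))        # upper triangular, as after QR
z = rng.normal(size=N)
x = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # one fixed path through the tree

# f_k(g_opt^(k)) for k = 1..N: squared partial distance at the optimal gain
dists = []
for k in range(1, N + 1):
    zk, Rk, xk = z[N - k:], R[N - k:, N - k:], x[N - k:]
    v = Rk @ xk
    g = optimal_gain(zk, v)
    dists.append(float(np.sum((zk - g * v) ** 2)))

# f_k(g_opt^(k)) >= f_{k-1}(g_opt^(k-1)), so pruning on the partial distance
# with re-optimized gain cannot discard the optimum
monotone = all(dists[i] <= dists[i + 1] + 1e-12 for i in range(N - 1))
```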

At the k-th level (k = 1, ..., K) of the sparse SD algorithm the columns r_{j(1)}, r_{j(2)}, ..., r_{j(k-1)} are fixed, and the column r_{j(k)} is searched together with the corresponding symbol ξ(k) and (in Case 2) the common gain g. If the partial distance d_k < R, then the algorithm passes immediately to the next level (depth-first approach). If d_k >= R for every possible position j(k), symbol ξ(k) and gain g, then the algorithm continues searching at the previous level. The combinations (j(1), ..., j(K)) are generated in K nested loops. In the outer loop, r_{j(1)} is placed from j(1) = N down to j(1) = K. The possible positions of r_{j(2)} range from j(1) - 1 down to j(2) = K - 1, etc. Testing proceeds from the right to the left, as in standard SD algorithms.

An SD algorithm may be used if the partial distance at level k is never smaller than the partial distance at level k-1. First this will be shown for Case 1. At level k-1, the partial solution of problem (4), reduced to k-1 dimensions, yields the difference vector (5), now formed with the selected columns. At the next level this vector equals (Fig. 2):

  e^(k) = z^(k) - R^(k) ξ^(k) = [ z~ - R~ ξ^(k) ;  e^(k-1) ]   (12)

where z~ contains the components of z with indices j(k), ..., j(k-1) - 1 and R~ contains the corresponding rows of the selected columns (the lower-left block of R^(k) below the previously selected columns is a null matrix, since R is upper triangular).

Fig. 2. Evolution of the accumulated distance in the sparse SD algorithm

Similarly to (7), the squared partial distance equals

  d_k^2 = ||e^(k)||^2 = d_{k-1}^2 + ||z~ - R~ ξ^(k)||^2   (13)

Since d_k >= d_{k-1}, the k-th level may be abandoned if d_{k-1} >= R. The same holds for Case 2 (the proof is almost identical to that presented in Section 2.2), and the sparse SD algorithm yielding the optimal solution of problem (2) will be called the sparse modified sphere decoding algorithm (SMSD).

The proposed SMSD algorithm may be programmed in the form of a self-calling procedure, similarly to the standard SD algorithm [15]: SMSD(k), where k is the level. At the first call k = 1, and R is an initial radius obtained with some suboptimal search procedure. For Case 2 and k = 1, 2, ..., K-1, two nested loops are needed:

Outer loop: j(k) = j(k-1) - 1, ..., K - k + 1 (with j(0) - 1 = N)
Inner loop: for all nonzero symbols ξ(k), e.g. ±1:
  v = R^(k) ξ^(k)  (ξ^(k) is a k-sparse vector)
  g_opt = (z^(k)T v) / (vT v),  d_k^2 = ||z^(k) - g_opt v||^2
  If d_k < R, call SMSD(k+1)
  If, for x_{j(k)} = 0, d_k >= R, break the outer loop

The last condition stems from the fact that the partial distance increases with increasing vector dimension.
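An end-to-end sketch of this recursive search for Case 2 can be written in a few dozen lines. The following is a simplified illustrative re-implementation (positions scanned right to left, optimal gain recomputed for pruning), cross-checked against exhaustive search; the instance and the variable names are ours, not the paper's:

```python
import itertools
import numpy as np

def smsd(F, y, K):
    """Simplified sketch of the sparse modified sphere decoder (Case 2):
    K-sparse x with symbols in {-1, +1} and a common gain g minimizing
    ||y - g F x||^2.  A branch is abandoned when its gain-optimized partial
    distance already reaches the best full distance found so far."""
    Q, R = np.linalg.qr(F)
    z = Q.T @ y
    N = F.shape[1]
    best = {"err": np.inf, "x": None, "g": None}

    def make_x(pos, sgn):
        x = np.zeros(N)
        x[list(pos)] = sgn
        return x

    def gain_err(rows_from, x):
        # rows >= rows_from are final: later (smaller) positions cannot
        # change them, because column j of R has nonzeros only in rows <= j
        v = R[rows_from:, :] @ x
        zz = z[rows_from:]
        g = float(v @ zz) / float(v @ v)
        return float(np.sum((zz - g * v) ** 2)), g

    def descend(pos, sgn):
        if len(pos) == K:
            err, g = gain_err(0, make_x(pos, sgn))      # full distance
            if err < best["err"]:
                best.update(err=err, x=make_x(pos, sgn), g=g)
            return
        start = pos[-1] - 1 if pos else N - 1
        for j in range(start, K - len(pos) - 2, -1):
            for s in (1.0, -1.0):
                err, _ = gain_err(j, make_x(pos + [j], sgn + [s]))
                if err < best["err"]:                   # prune otherwise
                    descend(pos + [j], sgn + [s])

    descend([], [])
    return best

# cross-check against exhaustive search on a small random instance
rng = np.random.default_rng(1)
F = rng.normal(size=(8, 8))
y = rng.normal(size=8)
res = smsd(F, y, K=2)

err_full = np.inf
for pos in itertools.combinations(range(8), 2):
    for sgn in itertools.product((1.0, -1.0), repeat=2):
        x = np.zeros(8)
        x[list(pos)] = sgn
        v = F @ x
        g = float(v @ y) / float(v @ v)
        err_full = min(err_full, float(np.sum((y - g * v) ** 2)))
```

The pruning is valid because the restricted, gain-optimized residual is a lower bound on the full distance of every completion of the current branch.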
At the last level (k = K) the N-dimensional problem is considered:

Outer loop: j(K) = j(K-1) - 1, ..., 1
Inner loop: for all nonzero symbols ξ(K), e.g. ±1:
  v = R x  (x is an N-dimensional K-sparse vector)
  g_opt = (zT v) / (vT v)
  If ||z - g_opt v|| < R:  x_out = x,  R = ||z - g_opt v||,  g_out = g_opt
  If, for x_{j(K)} = 0, d_K >= R, break the outer loop

Note that, in general, ξ^(k) is not an N-dimensional vector, so x_out has to be formed from the positions j(i) and the symbols ξ(i). The SMSD algorithm returns the optimal vector x_out and the optimal gain g_out in Case 2. The extension to Case 1 is straightforward (gain g = 1), and Case 3 is analyzed below.

2.4. The sparse sphere decoding algorithm and the non-uniformly scaled lattice

The sparse sphere decoding algorithm may also be applied to solve the classic sparse approximation problem (3). After a QR decomposition of F, problem (3) is transformed into

  min ||z - R G x||^2   (14)

where x is a K-sparse vector. First it must be proved that the partial distance increases at subsequent levels. At level k-1, the problem is reduced to N - j(k-1) + 1 dimensions (Fig. 2): the vector z^(k-1) is approximated as a linear combination of k-1 columns of the matrix R^(k-1). By extraction of these columns the matrix R~^(k-1) is obtained, yielding e^(k-1) = z^(k-1) - R~^(k-1) g^(k-1), where g^(k-1) is a (k-1)-dimensional, dense vector of gains. The error of this approximation is a function of g^(k-1):

  f_{k-1}(g^(k-1)) = ||z^(k-1) - R~^(k-1) g^(k-1)||^2   (15)

The optimal vector of gains equals:

  g_opt^(k-1) = [ R~^(k-1)T R~^(k-1) ]^{-1} R~^(k-1)T z^(k-1)   (16)

At the next level, the partial difference vector equals:

  e^(k) = z^(k) - R~^(k) g^(k) = [ z~ - R~ g^(k) ;  z^(k-1) - [0  R~^(k-1)] g^(k) ]   (17)

where 0 is a column of N - j(k-1) + 1 zeros (the new column r_{j(k)} does not reach the rows of the previous level) and R~ is the block of appended rows, of dimensions (j(k-1) - j(k)) x k. Therefore, the norm f_k(g^(k)) equals

  f_k(g^(k)) = ||z~ - R~ g^(k)||^2 + f_{k-1}(ĝ^(k-1))   (18)

where ĝ^(k-1) denotes the last k-1 components of g^(k). For the optimal gains, f_k(g_opt^(k)) >= f_{k-1}(g_opt^(k-1)), because the first term of (18) is nonnegative and because the last k-1 components of g_opt^(k) are not necessarily equal to g_opt^(k-1). This justifies the use of the SD algorithm. The corresponding self-calling procedure SMSD(k) does not differ much from that described in Section 2.3. The main difference is the suppression of the inner loop testing the set of symbols ξ(k). At each level, the optimal gain vector is calculated, similarly to (16), and the squared partial distance is evaluated. If d_k < R, the level is increased; if not, the outer loop is broken. This does not concern the last level, at which the N-dimensional system is tested. Here, if d_K < R, the radius R is updated, and the indices j(1), j(2), ..., j(K) and the gains g_opt are stored.

3. COMPARATIVE EVALUATION

In this section, the proposed SMSD algorithms are compared with the full codebook search and with some suboptimal algorithms. Two sparse signal models are used; they are described in equations (2) and (3). Depending on the codebook F and the target vector y, two examples are analyzed.

Synthetic signals: y is a vector of dimension N = 20, obtained by passing Gaussian noise through an AR filter H(z) = 1 / (1 - 2ρ cos(φ) z^{-1} + ρ^2 z^{-2}), where ρ = 0.98 and φ = π/16. The columns of F are obtained by passing single pulses through the same filter, i.e., they contain the shifted impulse response of H(z). This yields a coherent codebook, making search procedures difficult [16].

Speech signals, processed in a nonstandard low-delay CELP coder.
In this case y is a perceptual signal vector (filtered speech) of dimension N = 16 (a delay of 2 ms at the sampling frequency of 8 kHz), and F is obtained as before, but H(z) is an adaptive predictive synthesis filter. The gain coefficient is not quantized, in order to compare only the sparse approximation algorithms. At the end, however, the fully quantized coder is simulated, transmitting speech at 13.5 kbit/s.

Suboptimal algorithms for solving the uniformly scaled lattice search problem (2) are compared in [4]. These algorithms may deliver a starting point (the initial radius R) for the SMSD algorithms. The following algorithms are considered.

Sparsity-forcing: calculation of a dense solution x = F^{-1} y and the choice of the K components of greatest absolute value.

Minimum angle: a simple greedy algorithm, selecting K codebook vectors in K steps by minimizing the angle between the target vector y and its model g F x [17].

Global replacement: the initial solution is found as above; then all vectors, one by one, are replaced by others, if such an exchange yields a reduction of the error [2].

M-best implementation of the minimum angle algorithm: the M-best algorithm calculates, in a parallel way, M sequences of codebook vectors (here, M = 10). At the end the best sequence is retained [4].

M-best + replacement: the M-best algorithm is executed, and then global replacement is performed.

Suboptimal algorithms for solving the non-uniformly scaled lattice search problem (3) are compared in [3, 12]. Here the following algorithms are considered.

Sparsity-forcing: the indices j(1), j(2), ..., j(K) are chosen as above, and then the gains are calculated as in (16).

OOMP (optimized orthogonal matching pursuit) [10-12]: here a fast implementation of this algorithm is used, namely the RMGS (recursive modified Gram-Schmidt) [12].

M-best implementation of the OOMP algorithm.

Results for problem (2) and synthetic signals show a considerable reduction of the computational effort (Tab. 1): SMSD visits less than 0.6% of the nodes tested with the full search approach, yielding the same optimal result.
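The sparsity-forcing baseline is easy to state in code. A minimal sketch (our own illustrative re-implementation of the idea, with a least-squares gain refit on the chosen support as in (16); the data and names are ours):

```python
import numpy as np

def sparsity_forcing(F, y, K):
    """Keep the K largest-magnitude components of the dense solution of
    F x = y, then refit the gains on the selected columns by least squares."""
    x_dense = np.linalg.solve(F, y)
    pos = np.argsort(-np.abs(x_dense))[:K]
    g, *_ = np.linalg.lstsq(F[:, pos], y, rcond=None)
    x = np.zeros_like(y)
    x[pos] = g
    return x, pos

rng = np.random.default_rng(3)
F = rng.normal(size=(10, 10))
y = rng.normal(size=10)
x, pos = sparsity_forcing(F, y, K=3)

# refitting the gains can only reduce the error on the chosen support
x_trunc = np.zeros(10)
x_trunc[pos] = np.linalg.solve(F, y)[pos]
err_refit = float(np.sum((y - F @ x) ** 2))
err_trunc = float(np.sum((y - F @ x_trunc) ** 2))
```

As the experiments below indicate, such one-shot support selection is cheap but far from optimal on coherent codebooks.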
A further reduction may be obtained if the gain is forced to be positive (SMSD g>0), but then in some cases the optimal solution is skipped. The other suboptimal search algorithms yield much worse results. The SMSD algorithm is also much more efficient than the full search approach in solving problem (3) (Tab. 2). Similar conclusions stem from simulations of the LD-CELP coder without gain quantization (Fig. 3 and Fig. 4 give mean SNR values for four phrases and more than 10000 segments). Using signal model (2) with K = 10 and a 4-bit predictive gain quantizer, we obtain a bit rate of 13.5 kbit/s. Comparing mean opinion score values for 10 speech phrases, we observe a systematic advantage of the SMSD algorithm over the other ones.

Algorithm               SNR [dB]   nodes tested   % of full search
Full search             25.187     32.5·10^6      100
MSD                     25.187     588·10^3       1.8
SMSD/sparsity-forcing   25.187     191·10^3       0.59
SMSD/min. angle         25.187     189·10^3       0.59
SMSD/M-best             25.187     181·10^3       0.56
SMSD g>0                25.11      51·10^3        0.16
M-best + replacement    20.31      3.3·10^3       0.01
Global replacement      19.67      810            0.002
M-best (M=10)           18.68      2.8·10^3       0.008
Minimum angle           15.4       288            0.001
Sparsity-forcing        10.01      12             0.00003

Tab. 1. Comparison of optimal and suboptimal algorithms in solving (2) for K = 8, N = 20 (mean values for 1000 runs). SMSD/min. angle means that the initial radius is obtained using the minimum angle algorithm, etc. (Confidence interval for the SNR values: 0.15 dB.)

Algorithm          SNR [dB]   nodes tested   % of full search
Full search        32.574     125970         100
SMSD/OOMP          32.574     9532           7.6
M-best             31.85      1240           1
OOMP               29.94      124            0.1
Sparsity-forcing   27.03      1              0.0008

Tab. 2. Comparison of several sparse approximation algorithms in solving (3) for K = 8, N = 20 (mean values for 1000 runs; confidence interval for the SNR values: 0.15 dB).

Fig. 3. SNR for the LD-CELP coder with signal model (2) (confidence interval for all SNR values: 0.05 dB)

Fig. 4. SNR for the LD-CELP coder with signal model (3) (confidence interval for all SNR values: 0.05 dB)

4. CONCLUSIONS

It has been shown that the sphere decoding algorithm may be extended to uniformly or non-uniformly scaled lattices. The resulting modified SD and sparse modified SD algorithms yield the optimal solution of some sparse approximation problems at a substantially reduced computational cost, compared with the full search approach. The proposed algorithms have been compared with suboptimal solutions.

5. REFERENCES

[1] F. K. Chen, J. F. Yang, Maximum take precedence ACELP: a low complexity search method, Proc. ICASSP 2001, pp. 693-696.
[2] F. K. Chen, G. M. Chen, B. K. Su, Y. R. Tsai, Unified pulse-replacement search algorithms for algebraic codebooks of speech coders, IET Signal Processing, Vol. 4, Iss. 6, 2010, pp. 658-665.
[3] P. Dymarski, R. Romaniuk, Sparse signal approximation algorithms in a CELP coder, Proc. EUSIPCO 2011, pp. 201-205.
[4] P. Dymarski, R. Romaniuk, Sparse signal modeling in a scalable CELP coder, Proc. EUSIPCO 2013, Marrakech, pp. 1-5.
[5] A. D. Murugan, H. El Gamal, M. O. Damen, G. Caire, A unified framework for tree search decoding: rediscovering the sequential decoder, IEEE Trans. Information Theory, Vol. 52, pp. 933-953, March 2006.
[6] E. Viterbo, J. Boutros, A universal lattice code decoder for fading channels, IEEE Trans. Information Theory, Vol. 45, Iss. 5, July 1999, pp. 1639-1642.
[7] T. Yardibi, J. Li, P. Stoica, L. N. Cattafesta III, Sparse representations and sphere decoding for array signal processing, Digital Signal Processing, 2011, doi:10.1016/j.dsp.2011.10.006.
[8] E. Agrell, T. Eriksson, A. Vardy, K. Zeger, Closest point search in lattices, IEEE Trans. Information Theory, Vol. 48, Iss. 8, 2002, pp. 2201-2214.
[9] P. Wang, T. Le-Ngoc, Selection of design parameters for generalized sphere decoding algorithms, Int. J. Communications, Network and System Sciences, Vol. 3, No. 2, 2010, pp. 126-132.
[10] P. Kroon, E. Deprettere, Experimental evaluation of different approaches to the multi-pulse coder, Proc. ICASSP 84, pp. 396-399.
[11] L. Rebollo-Neira, D. Lowe, Optimized orthogonal matching pursuit approach, IEEE Signal Processing Letters, Vol. 9, 2002, pp. 137-140.
[12] P. Dymarski, N. Moreau, G. Richard, Greedy sparse decompositions: a comparative study, EURASIP Journal on Advances in Signal Processing, 2011:34, doi:10.1186/1687-6180-2011-34.
[13] S. Barik, H. Vikalo, Sparsity-aware sphere decoding: algorithms and complexity analysis, IEEE Trans. Signal Processing, Vol. 62, May 2014, pp. 2212-2225.
[14] S. Sparrer, R. F. H. Fischer, Discrete sparse signals: compressed sensing by combining OMP and the sphere decoder, CoRR, 2013, arXiv:1310.2456.
[15] X. Guo, Sphere decoder for MIMO systems, MATLAB Central File Exchange, Feb. 2009.
[16] Z. Ben-Haim, Y. C. Eldar, M. Elad, Coherence-based performance guarantees for estimating a sparse vector under random noise, IEEE Trans. Signal Processing, Vol. 58, Iss. 10, Oct. 2010, pp. 5030-5043.
[17] P. Dymarski, N. Moreau, Algorithms for the CELP coder with ternary excitation, Proc. Eurospeech 1993, pp. 241-244.