On iterative equalization, estimation, and decoding

On iterative equalization, estimation, and decoding

R. Otnes and M. Tüchler
Norwegian Defence Research Establishment, PO box 25, N-2027 Kjeller, Norway
Institute for Communications Engineering, Munich University of Technology, Arcisstr. 21, D-80290 München, Germany

Abstract: We consider the problem of coded data transmission over an inter-symbol interference (ISI) channel with unknown and possibly time-varying parameters. We propose a low-complexity algorithm for joint equalization, estimation, and decoding using an estimator that is separate from the equalizer. Based on existing techniques for analyzing the convergence of iterative decoding algorithms, we show how to find powerful system configurations, including the use of recursive precoders in the transmitter. We derive novel a-posteriori probability equalization algorithms for imprecise knowledge of the channel parameters. We show that the performance loss implied by not knowing the parameters of the ISI channel is entirely a loss in signal-to-noise ratio for which a suitably designed iterative receiver algorithm still converges.

I. INTRODUCTION

Many practical communication systems encounter the problem of data transmission over a channel with unknown and possibly time-varying parameters, such as the signal-to-noise ratio (SNR), the delays and phases in a multi-path channel, or the fading amplitude and phase in a wireless channel. We assume a baseband symbol-spaced receiver front-end, where the transmit filter, the channel, and the receive filter are approximated by a discrete-time linear filter with a length-M channel impulse response (CIR). The data is assumed to be protected by an error-correction code (ECC). A standard approach to the arising detection problem in the receiver splits the global problem into the three tasks estimation, equalization, and decoding.
Estimation is performed blindly or non-blindly using algorithms such as least-squares (LS), least-mean-square (LMS), recursive-least-squares (RLS), or Kalman (KEST) estimation [1]. The effects of ISI are addressed by equalization, which can be a linear equalizer (LE), a decision-feedback equalizer (DFE) [2], or a method minimizing the sequence- or bit-error rate (BER), e.g., the BER-optimal BCJR algorithm [3] maximizing the a-posteriori probabilities (APPs) of the data. A wide range of decoding algorithms for ECCs exists, among which we focus on APP decoding for convolutional codes. An optimal detection approach would be joint decoding and equalization, which treats the ECC encoder and the ISI channel as a concatenated code. However, the computational burden is most often prohibitive, especially when encoder and ISI channel are separated by an interleaver. A successful approach to approximately perform joint decoding and equalization is iterative (Turbo) equalization and decoding [4], which has been studied quite extensively [5-7] for the case that SNR and CIR are precisely known to the receiver. In case they are not known and/or possibly time-varying, some methods attempt to perform estimation and equalization simultaneously (jointly), e.g., by extending the equalizer trellis [8-10]. Others exploit parametric dependencies specifying how the channel varies, e.g., in the context of joint estimation and equalization alone [11, 12] or in the context of Turbo equalization [13]. However, these approaches rely on trellis-based equalization algorithms, i.e., their computational complexity is intractable for large M or for higher-order signal constellations. On the other hand, since (nearly) joint estimation and equalization is performed, they perform very well, in particular for fast-varying channels. We propose in this paper a much simpler algorithm for (suboptimal) joint equalization, estimation, and decoding.
The estimator is separated out from the equalizer, such that we can freely choose equalization, estimation, and decoding algorithms of varying complexity. The estimator is allowed to incorporate reliability information communicated between equalizer and decoder, e.g., as in adaptive equalization, where hard decisions from the equalizer are used to track a time-varying channel, or as in [14, 15], where hard decisions fed back from the decoder are used for re-estimation. We avoid these hard decisions and perform soft channel estimation, which has already been applied in the context of LMS and RLS estimation [16-18] and KEST [19]. It is shown there that soft channel estimation outperforms approaches using hard decisions, since the latter suffer from error propagation. We extend these results and show that the equalizer, which processes imperfect channel parameters from the estimator, must account for their noisiness. We outline a novel APP equalization algorithm for imprecise knowledge of the CIR, which can be extended to other equalization algorithms. Based on that, powerful concatenated codes are designed using the EXIT chart tool [20]. We investigate the application of recursive precoding [21] in the transmitter when the CIR must be estimated and show why it significantly improves the performance of iterative equalization and decoding without increase in complexity.

II. SYSTEM DEFINITION

Consider the communication system in Fig. 1. A block of K data bits is encoded with the outer encoder of rate R = K/N to N code bits c = (c_1, ..., c_N), c_n in F_2. The interleaver permutes the bits in c to x = (x_1, ..., x_N). Inserting T training bits t_n into the stream of N code bits x_n according to a predefined schedule yields a block x~ = (x~_1, ..., x~_Ñ) of bits x~_n, where Ñ = N + T. The following rate-1 mapper maps q-tuples x~_k = (x~_{qk-q+1}, ..., x~_{qk}), k = 1, ..., Ñ/q, of bits x~_n to symbols y_k in C from the 2^q-ary signal constellation S (with average power P) using the bijective function S(.).
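As an illustration of the mapper, here is a minimal sketch for the 8PSK example used later in the paper (q = 3). The Gray labeling chosen below is an assumption for illustration only; the paper's exact bijection S(.) is given in its Fig. 1, which is not reproduced here.

```python
import numpy as np

# Hypothetical Gray labeling for the 2^q-ary (here 8PSK) constellation S;
# unit-energy points give average power P = 1.
GRAY = [0b000, 0b001, 0b011, 0b010, 0b110, 0b111, 0b101, 0b100]
S = {label: np.exp(2j * np.pi * pos / 8) for pos, label in enumerate(GRAY)}

def map_bits(x_tilde):
    """Rate-1 mapping of q-tuples (here q = 3) of bits to symbols y_k = S(x_k)."""
    x_tilde = np.asarray(x_tilde).reshape(-1, 3)
    labels = x_tilde @ np.array([4, 2, 1])   # bit triple -> integer label
    return np.array([S[l] for l in labels])

y = map_bits([0, 0, 0, 1, 1, 1])   # two 8PSK symbols
```

Adjacent constellation points in a Gray labeling differ in exactly one bit, which is what makes the mapping attractive for iterative detection.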
The mapper includes a precoder defined by the state-space equations

s_{k+1} = s_k A + x~_k B,  y'_k = s_k C^T + x~_k D,  y_k = S(y'_k),

where y'_k = (y'_{qk-q+1}, ..., y'_{qk}) is a q-tuple of bits and all arithmetic is over F_2. The length-m vector s_k is the precoder state at time step k. The dimensions of A, B, C, D are m x m, q x m, q x m, q x q, respectively. The precoder state is initially zero, i.e., s_1 = (0 ... 0).
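The state-space recursion above can be sketched directly. The matrices below are a hypothetical toy choice (q = m = 2) satisfying the constraint A = C^T, B = D used later in the paper, not the precoder from the results section:

```python
import numpy as np

def precode(x_tilde, A, B, C, D):
    """Rate-1 recursive precoder over F_2:
    s_{k+1} = s_k A + x_k B,  y'_k = s_k C^T + x_k D  (all mod 2),
    with initial state s_1 = (0 ... 0)."""
    s = np.zeros(A.shape[0], dtype=int)
    out = []
    for xk in x_tilde:
        out.append((s @ C.T + xk @ D) % 2)   # output q-tuple y'_k
        s = (s @ A + xk @ B) % 2             # state update
    return np.array(out)

# Hypothetical toy choice with q = m = 2 and A = C^T, B = D, so that the
# state equals the previous output and y'_k = y'_{k-1} A + x_k B holds.
A = np.array([[1, 1], [0, 1]])
B = np.eye(2, dtype=int)
C, D = A.T, B
bits = np.array([[1, 0], [0, 1], [1, 1]])
y_prime = precode(bits, A, B, C, D)
```

With A = C^T and B = D the output equals the next state, so the recursion y'_k = y'_{k-1} A + x~_k B can be checked directly on the output sequence.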

Fig. 1. Coded data transmission system applying iterative equalization, estimation, and decoding in the receiver (transmitter: encoder, interleaver, training insertion, mapper with precoder, 8PSK with Gray mapping; receiver: ISI equalizer, channel estimator, deinterleaver/interleaver, decoder).

We use the example alphabet S depicted in Fig. 1, an 8PSK constellation, to illustrate our results. We apply precoding to account for the fact that the inner encoder in a serially concatenated system should have rate 1 [22, 23] (no extra redundancy) and be recursive [22-24] to approach the capacity of the channel as closely as possible. Here, the inner encoder is the cascade of the mapper and the ISI channel. Transmitted over the ISI channel are Ñ/q symbols y_k, yielding the total rate

R_tot = R * N/Ñ = K/(N + T)   (1)

of the communication system. The T training bits t_n are chosen such that T/q fixed training symbols are among the Ñ/q symbols y_k, i.e., the t_n depend on the data bits x_n when precoding is applied. Independent and identically distributed (i.i.d.) complex noise samples w_k with probability density function (PDF) f(w) = 1/(pi sigma^2) exp(-|w|^2/sigma^2), w in C, are added at the receiver front end. Received are the symbols

z_k = w_k + sum_{l=1}^{M} h_{k,l} y_{k-l+1},  h_k = (h_{k,1}, ..., h_{k,M}),

where h_k is the CIR at time step k. The equalizer and the decoder communicate log-likelihood ratios (LLRs) as reliability information [25]. The equalizer outputs the LLRs L_e(x_n), which are used after deinterleaving as a-priori LLRs L(c_n) = ln(Pr{c_n = 0}/Pr{c_n = 1}) on the c_n by the decoder. An APP-based equalizer outputs L_e(x_n) = L(x_n | z, L(x)) - L(x_n), where L(x_n | z, L(x)) is the a-posteriori LLR defined by

L(x_n | z, L(x)) = ln [ Pr{x_n = 0 | z, L(x)} / Pr{x_n = 1 | z, L(x)} ]
                 = ln [ sum_{x in X: x_n = 0} f(z|x) Pr{x} / sum_{x in X: x_n = 1} f(z|x) Pr{x} ],   (2)

where z = (z_1, ..., z_{Ñ/q}), L(x) = (L(x_1), ..., L(x_N)), and X is the set of valid code words c interleaved to x.
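The received-symbol model z_k = w_k + sum_l h_{k,l} y_{k-l+1} can be sketched as follows for a time-invariant CIR; the example CIR and noise variance below are placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def isi_channel(y, h, sigma2):
    """z_k = w_k + sum_{l=1}^{M} h_l y_{k-l+1}, with i.i.d. circularly
    symmetric complex Gaussian noise w_k of variance sigma2 (PDF f(w))."""
    z = np.convolve(y, h)[: len(y)]              # causal ISI with M taps
    w = np.sqrt(sigma2 / 2.0) * (rng.standard_normal(len(y))
                                 + 1j * rng.standard_normal(len(y)))
    return z + w

# Placeholder inputs: unit-power 8PSK-like symbols and a length-3 CIR
y = np.exp(2j * np.pi * rng.integers(0, 8, size=50) / 8)
z = isi_channel(y, h=np.array([0.407, 0.815, 0.407]), sigma2=0.1)
```

Setting sigma2 = 0 reduces the model to the pure convolution of the symbol sequence with the CIR, which is a convenient sanity check.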
In turn, the decoder outputs the LLRs L_d(c_n), which are used after interleaving as a-priori LLRs L(x_n) on the x_n by the equalizer. An APP-based decoder would compute L_d(c_n) = L(c_n | L(c)) - L(c_n). An overview of how to compute L_e(x_n) for other types of equalizers is presented for example in [7]. The minimal trellis describing a general length-M ISI channel has 2^{q(M-1)} states, where the state at time step k is given by (y_{k-1}, ..., y_{k-M+1}) [3]. An APP equalizer may decode the precode using the same trellis without complexity overhead when s_k is part of the channel state. This is achieved for example with the parameter choice m = q, A = C^T, B = D, such that y'_k = y'_{k-1} A + x~_k B holds. We restrict ourselves to such precoders to keep decoding for the precode as simple as possible. Using a linear equalizer together with precoding requires an extra predecoding operation. This scenario is not addressed here due to space limitations, and we focus entirely on APP equalization.

III. ESTIMATION

The equalizer needs estimates ĥ_k of h_k for each k. Let H = (h_1 ... h_{Ñ/q}) and Ĥ = (ĥ_1 ... ĥ_{Ñ/q}) be length-MÑ/q vectors containing the CIRs and their estimates for all time steps. For simplicity of the derivations, we assume that the channel noise variance sigma^2 is known to the receiver with sufficient accuracy. We propose to use a separate estimator calculating the ĥ_k from z_k, the known bits t_n, and possibly L(y_n | z, L(x)) from the equalizer or L(c_n | L(c)) from the decoder produced in the previous iteration. The estimator does not need to use the less reliable extrinsic LLRs L_e(x_n) or L_d(c_n), which are mandatory for iterating between equalizer and decoder. Suitable soft estimation algorithms using LLRs have been derived in [16, 17] (LMS, RLS estimation) and [19] (KEST). They all transfer the bit-oriented LLRs into soft symbols (means) of the symbols y_k rather than into hard decisions from S.
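The soft mapping from bit LLRs to symbol means can be sketched as below. The 8PSK alphabet here is an assumed placeholder labeling (any bijection serves for the sketch), and the LLR sign convention L = ln(Pr{bit = 0}/Pr{bit = 1}) is assumed:

```python
import numpy as np
from itertools import product

# Hypothetical 2^q-ary alphabet S(.) for q = 3 (8PSK); any bijective
# labeling serves for this sketch.
S8 = {b: np.exp(2j * np.pi * i / 8)
      for i, b in enumerate(product((0, 1), repeat=3))}

def soft_symbol(llrs):
    """E{y_k | cond} = sum_i S(i) * prod_j Pr{bit_j = i_j | cond},
    with the LLR convention L = ln(Pr{bit = 0}/Pr{bit = 1})."""
    p1 = 1.0 / (1.0 + np.exp(np.asarray(llrs, dtype=float)))   # Pr{bit = 1}
    mean = 0j
    for bits, sym in S8.items():
        p = np.prod([p1[j] if b else 1.0 - p1[j] for j, b in enumerate(bits)])
        mean += sym * p
    return mean

zero_info = soft_symbol([0.0, 0.0, 0.0])     # uniform bits: mean near 0
confident = soft_symbol([20.0, 20.0, 20.0])  # near-certain (0,0,0): near S8[(0,0,0)]
```

Uninformative LLRs average over the whole constellation (mean 0 for PSK), while highly reliable LLRs drive the soft symbol toward a constellation point, which is exactly the behavior a soft estimator exploits.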
For example, without precoder, i.e., when y'_k = x~_k, the soft symbols are given by

E{y_k | cond} = sum_{i = (i_1 ... i_q) in F_2^q} S(i) Pr{y'_k = i | cond},

where cond stands for L(y_n | z, L(x)) or L(c_n | L(c)) and Pr{y'_k = i | cond} = prod_{j=1}^{q} Pr{x~_{kq-q+j} = i_j | cond}. We note that computing E{y_k | L(y_n | z, L(x))} in general and computing E{y_k | L(c_n | L(c))} without precoder is simple, but computing E{y_k | L(c_n | L(c))} given a memory-m precoder is cumbersome. In the results section we avoid this parameter constellation and leave a satisfactory solution to this problem open. In the first iteration, the estimator has only the training bits t_n available, and it may calculate estimates Ĥ with little reliability. The equalizer can support the estimator in this case by producing intermediate hard decisions or LLRs on the y_k within some delay. However, this delay could be as large as the entire block, e.g., for APP equalizers. The precomputation of approximate hard decisions or LLRs may be a solution, but it should be avoided, since wrong hard decisions or incorrect (so-called inconsistent [26]) LLRs cause estimation errors followed by errors in equalization, then in decoding, and finally in the entire iterative receiver algorithm. Therefore, we propose to wait until the iterative decoding algorithm offers consistent LLRs, i.e., only after the first iteration. However, there is another threat of inconsistency. The equalizer has to compute L_e(x_n) from Ĥ. Assuming that the estimates are correct, i.e., that Ĥ = H, causes erroneous LLRs L_e(x_n), since Ĥ is a distorted version of H. One way to overcome this deficiency is to merge estimator and equalizer as in [8-13] to perform estimation and equalization jointly. However, this usually causes a complexity problem, since these joint estimator-equalizers are trellis-based, requiring more than the 2^{q(M-1)} states of the conventional APP

equalizer. Even for reduced-state approximations, it is the additional need for estimation that causes the joint estimator-equalizer trellis to have more states than that of an APP equalizer knowing H precisely. Also, simpler equalization strategies, e.g., linear equalization, cannot be applied. This is why we separate out the estimator and instead specify the statistics

mu_{k,l} = E_{Ĥ|H}{d_{k,l}}  and  nu_{k,k',l} = E_{Ĥ|H}{d_{k,l} d*_{k',l}} - mu_{k,l} mu*_{k',l}

of the mismatch d_{k,l} = ĥ_{k,l} - h_{k,l} between estimate and true value over the PDF of Ĥ given H. This knowledge is used by the equalizer computing output LLRs L_e(x_n) for imprecise (noisy) knowledge of the CIR. Thus, we need to derive a new instance of the APP equalization algorithm. Both steps outlined above, i.e., waiting for valid LLRs and performing equalization such that the LLRs L_e(x_n) are consistent, might degrade the receiver performance in early iterations. However, as shown later, properly designed decoding algorithms do converge after a sufficient number of iterations and outperform systems with a better initial performance. For example, in a receiver using one-time equalization and decoding, a DFE outperforms an LE due to its non-linear (hard-decision-based) processing of past equalized symbols, but the output LLRs are inconsistent. The LE outputs poor but consistent LLRs in early iterations, and it assures convergence of the iterative receiver algorithm, whereas a DFE-based system performs initially better but fails after convergence [7].

IV. EQUALIZATION

We derive here an APP equalization algorithm for imprecise channel knowledge. Other algorithms are treated in [17]. For a known CIR, the expression f(z|x) in (2) is proportional to

f(z|x) ~ prod_{k=1}^{Ñ/q} exp(-|z_k - h_k [y_k ... y_{k-M+1}]^T|^2 / sigma^2),

where the y_k are computed given the hypothesis x. Obviously, f(z|x) factors into terms depending on at most M symbols y_k each, which yields that the APP rule (2) can be performed on a 2^{q(M-1)}-state trellis [27].
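Each factor above becomes a branch metric of the trellis. A minimal log-domain sketch, using BPSK instead of 8PSK purely to keep the enumeration small (the CIR and observation below are placeholders):

```python
import numpy as np
from itertools import product

def branch_metrics(z_k, h, sigma2, alphabet):
    """Log-domain branch metrics for one trellis section of an APP (BCJR)
    equalizer: for every window (y_k, ..., y_{k-M+1}) the factor of f(z|x)
    is exp(-|z_k - h . window|^2 / sigma^2); we return its logarithm.

    Illustration only: a full BCJR pass would combine these metrics with
    the a-priori LLRs in its forward/backward recursions."""
    M = len(h)
    return {w: -abs(z_k - np.dot(h, w)) ** 2 / sigma2
            for w in product(alphabet, repeat=M)}

# BPSK toy example (a small alphabet keeps the trellis tiny)
h = np.array([0.407, 0.815, 0.407])
gammas = branch_metrics(z_k=0.8, h=h, sigma2=0.316, alphabet=(-1.0, 1.0))
best = max(gammas, key=gammas.get)   # window best matching the observation
```

The window maximizing the metric is the one whose noise-free channel output h . window lies closest to the observation, mirroring how the BCJR's branch transition probabilities weight the trellis paths.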
Note that any factor of f(z|x) not depending on x can be neglected in the APP rule. A general algorithm for APP equalization given the imprecisely known CIRs ĥ_k is still unknown. That is, we need a rule to compute the quantity f(z|x, Ĥ). The imperfectness of the estimates is specified via their statistics mu_{k,l} and nu_{k,k',l}. The equalizer incorporating this extra knowledge is expected to produce less reliable output LLRs L_e(x_n), which is a wanted effect, since it assures that the LLRs L_e(x_n) are consistent. Computing f(z|x, Ĥ) requires a detailed derivation, which is beyond the scope of this paper. We present here only partial results from [17] for the case that the ĥ_{k,l} at different taps l are mutually independent. Let r_l = (h_{1,l}, ..., h_{Ñ/q,l}) and r̂_l = (ĥ_{1,l}, ..., ĥ_{Ñ/q,l}) be the l-th tap coefficients of the CIR and its estimates for all time steps k. Invoking the independence assumption yields f(r_l, r_{l'}) = f(r_l) f(r_{l'}) for any l != l'. For simplicity of the derivation, we assume that the estimator is unbiased, i.e., mu_{k,l} = 0 for all k and l, and that the correlations nu_{k,k',l} are identical for each tap, i.e., nu_{k,k',1} = nu_{k,k',2} = ... = nu_{k,k',M} for each k and k'. We assume that the noise distorting Ĥ is complex Gaussian, which yields the following PDF f(r̂_l | r_l):

f(r̂_l | r_l) = exp(-(r̂_l - r_l) Sigma^{-1} (r̂_l - r_l)^H) / (pi^{Ñ/q} det(Sigma)),

where nu_{k,k',l} is the entry in the k-th column and k'-th row of Sigma. We rewrite f(z|x, Ĥ) as f(z, Ĥ|x)/f(Ĥ), where f(z, Ĥ|x) = integral of f(z, Ĥ, H|x) dH. Factoring f(z, Ĥ, H|x) finally yields

f(z|x, Ĥ) ~ integral of f(z|x, H) f(Ĥ|H) f(H) dH,   (3)

where f(Ĥ|H) f(H) = prod_{l=1}^{M} f(r̂_l | r_l) f(r_l). The PDF f(r_l) contains possibly available information about the distribution of the channel taps, e.g., a uniform, Rayleigh, or Rice distribution. This PDF also governs the correlations of the taps over time. We consider here only one example, the time-invariant channel with h_1 = h_2 = ... = h_{Ñ/q}.
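The Gaussian estimation-error PDF above is straightforward to evaluate numerically; a minimal sketch, assuming a Hermitian positive-definite covariance Sigma:

```python
import numpy as np

def log_f_rhat_given_r(r_hat, r, Sigma):
    """Log of f(r_hat_l | r_l) = exp(-(r_hat - r) Sigma^{-1} (r_hat - r)^H)
    / (pi^N det(Sigma)) for the complex Gaussian estimation-error model,
    N being the number of time steps and Sigma the error covariance."""
    d = np.asarray(r_hat, dtype=complex) - np.asarray(r, dtype=complex)
    quad = np.real(d.conj() @ np.linalg.solve(Sigma, d))   # (.)Sigma^{-1}(.)^H
    _, logdet = np.linalg.slogdet(Sigma)
    return -quad - d.size * np.log(np.pi) - logdet

# Sanity check with uncorrelated unit-variance errors (Sigma = I): the
# quadratic term reduces to the squared error norm.
val = log_f_rhat_given_r([1 + 0j, 0j], [0j, 0j], np.eye(2))
```

Working in the log domain and using a solve instead of an explicit matrix inverse keeps the evaluation numerically stable for ill-conditioned Sigma.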
Assuming a uniform distribution on the channel taps, i.e., f(h_{k,l}) = 1/(pi c^2) for |h_{k,l}| <= c and 0 elsewhere, where c is large, we can solve (3) and find

f(z|x, Ĥ) ~ exp(-(z - ĥ_e Y) A^{-1} (z - ĥ_e Y)^H) / det(I_M + Y Y^H/(w̄ sigma^2)),

where A = sigma^2 I_{Ñ/q} + Y^H Y/w̄, I_i is the i x i identity matrix, and

Y = [ y_1  y_2  ...  y_M      ...  y_{Ñ/q}
      0    y_1  ...  y_{M-1}  ...  y_{Ñ/q-1}
      ...
      0    ...  0    y_1      ...  y_{Ñ/q-M+1} ].

The weight w_k is the sum of the Ñ/q entries in the k-th column of Sigma^{-1}, and w̄ = sum_{k=1}^{Ñ/q} w_k. The effective estimate ĥ_e = (ĥ_{e,1}, ..., ĥ_{e,M}) is the weighted sum of all Ñ/q estimates ĥ_k, i.e., ĥ_e = (1/w̄) sum_{k=1}^{Ñ/q} w_k ĥ_k. Thus, it is suboptimal to perform APP equalization on a trellis where the metrics in the k-th trellis section (at time step k) are computed using the estimates ĥ_k. Instead, ĥ_e should be used for all sections, which is intuitively correct as the channel is time-invariant. The above expression for f(z|x, Ĥ) can be approximated by replacing Y Y^H with its average P (Ñ/q) I_M and Y^H Y with its average P M I_{Ñ/q} over all possible sequences x:

f(z|x, Ĥ) ~ exp(-||z - ĥ_e Y||^2 / (sigma^2 + PM/w̄)),

which yields an APP rule implementable on a 2^{q(M-1)}-state trellis using ĥ_e and the increased effective channel noise variance sigma^2 + PM/w̄. Thus, the APP equalizer accounts for the imperfectly known channel parameters by decreasing the reliability of the output LLRs L_e(x_n), and it exploits a diversity effect by averaging over the Ñ/q available estimates ĥ_{k,l} using suitable weights w_k. With this framework it is also possible to solve (3) for other channel tap distributions [17]. Consider the following example. Suppose we transmit Ñ/q = 158 symbols y_k (T/q = 15 training symbols plus N/q = 143 data symbols) over a time-invariant length-M = 3 ISI channel. An RLS-based estimator with forgetting factor lambda = 0.99 attempts to estimate the CIR using the known symbols y_k, k = 1, ..., 15, and the means E{y_k | cond}, k = 16, ..., 158. An analytical expression of the tap error variance nu_{k,k,l} per

tap ĥ_{k,l} for this estimator was derived in [16]. The variance nu_{k,k,l} is mainly affected by the SNR P/sigma^2 and the average energy Eȳ = (1/143) sum_{k=16}^{158} |E{y_k | cond}|^2 of the soft symbols. Fig. 2 shows the profiles of the normalized weights w_k/w̄ and of the scaled tap error variance nu_{k,k,l} over the time index k for different Eȳ at 5 dB P/sigma^2. The diversity effect obtained by averaging over the ĥ_k is small, since the estimates are strongly correlated. This is because lambda approaches 1, for which RLS estimation turns into a non-time-sequential LS estimation algorithm. The effective estimate ĥ_e is merely a combination of ĥ_15 and ĥ_158, obtained using only the training symbols or all 158 symbols, respectively. The weights w_15 and w_158 depend on the reliability of ĥ_15 and ĥ_158, i.e., on nu_{15,15,l} and nu_{158,158,l}. Computing the sum w̄, e.g., w̄ = 54 (Eȳ = 0.1), w̄ = 72 (Eȳ = 0.6), w̄ = 190 (Eȳ = 1.0), reveals that the effective noise variance sigma^2 + PM/w̄ = 0.32 + 3/w̄, i.e., 0.37 (Eȳ = 0.1), 0.36 (Eȳ = 0.6), 0.33 (Eȳ = 1.0), has not risen significantly. This analysis can be redone for other estimation algorithms, other estimator parameters, and finally for a time-varying channel, too. Because of space limitations we omit such results here.

Fig. 2. Weight distribution for RLS estimation (normalized tap weights and scaled tap error variance versus time index k, over the training and data symbol regions, for Eȳ = 0.1, 0.6, 1.0).

V. SYSTEM DESIGN

The LLRs communicated between equalizer and decoder can be modelled as outcomes of the random variables (r.v.'s) Λ_e, modelling the LLRs L_e(x_n), and Λ_d, modelling the LLRs L_d(c_n). The outcomes of Λ_e and Λ_d are distributed with the PDFs f_e(l|x) conditioned on x_n = x and f_d(l|c) conditioned on c_n = c, respectively, which both vary from iteration to iteration. Unfortunately, analyzing these PDFs is extremely difficult. As a simplification, one could observe only a single parameter of the PDFs after each iteration, e.g., the mutual information I_d = I(Λ_d; C) in [0, 1] between Λ_d and the r.v.
C, whose outcomes are the bits c_n [20]. Similarly, I_e = I(Λ_e; X) is defined on f_e(l|x). The evolution of I_d and I_e over the iterations is the trajectory of the decoding algorithm. Both I_d and I_e can be calculated either via histograms of the output LLRs L_e(x_n) or L_d(c_n) as in [20] or, when f_d(l|c) and f_e(l|x) satisfy certain constraints, via a time average of a function of the output LLRs [23], e.g.,

I_d = 1 - (1/N) sum_{n=1}^{N} log2(1 + exp(-mu(c_n) L_d(c_n))),

where mu(0) = +1 and mu(1) = -1. To predict the behavior of the decoding algorithm without actually running it, equalizer and decoder are analyzed separately via their transfer functions I_e = T_e(I_in) and I_d = T_d(I_in), which map any mutual information I_in in [0, 1], specifying a particular input LLR distribution, to I_e or I_d. Since the actual input PDF, which is f_d(l|c) or f_e(l|x) due to the feedback, is not accessible, a Gaussian distribution yielding the same I_in is used [20]. This analysis is accurate, i.e., the decoders output the same I_d or I_e when fed with LLRs distributed either with f_d(l|c) and f_e(l|x) or with the Gaussian PDF at the same I_in, only for large N or a finite number of iterations, respectively [20]. We use properties of this method, called the extrinsic information transfer (EXIT) chart, to select system parameters such as the outer code and the amount of training T. Let A_d = integral from 0 to 1 of T_d(i) di be the area under T_d(i) and A_e be the area under T_e(i). Moreover, the law integral of T(i) di = 1 - integral of T^{-1}(i) di holds for any transfer function in the EXIT chart. Given an APP-based decoder for the rate-R outer code, we have approximately A_d = 1 - R [23], a property which was proven under somewhat simplified conditions in [22]. We say that the iterative decoding algorithm converges whenever T_e(i) > T_d^{-1}(i) for all i in [0, 1-epsilon), where epsilon is small. This implies that A_e > (1 - A_d), or A_e > R. In fact, A_e can be related to the capacity of the underlying communication channel [22, 23].
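The time-average mutual-information estimate above can be sketched directly; the consistent-Gaussian LLR model used below to exercise it is the standard assumption of the EXIT analysis, not data from the paper:

```python
import numpy as np

def mutual_info_time_average(llrs, bits):
    """Time-average estimate I = 1 - (1/N) sum_n log2(1 + exp(-mu(b_n) L_n)),
    with mu(0) = +1 and mu(1) = -1, valid when the LLR density satisfies
    the consistency condition; LLR convention L = ln(Pr{b=0}/Pr{b=1})."""
    mu = 1.0 - 2.0 * np.asarray(bits, dtype=float)   # bit 0 -> +1, bit 1 -> -1
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-mu * np.asarray(llrs, dtype=float))))

# Exercise with consistent Gaussian LLRs of variance sigma_L^2 and
# mean +/- sigma_L^2 / 2 (the Gaussian model of the EXIT analysis)
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=100_000)
sigma_L = 2.0
llrs = (1 - 2 * bits) * sigma_L**2 / 2 + sigma_L * rng.standard_normal(bits.size)
I_hat = mutual_info_time_average(llrs, bits)   # between 0 (no info) and 1
```

All-zero LLRs give I = 0 and very reliable, correct LLRs drive I toward 1, so the estimate behaves as a mutual information should at both extremes.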
We use the latter law to select system parameters by precomputing some T_e(i) for a number of parameter constellations and picking those for which A_e is maximal.

VI. RESULTS

We encoded 510 data bits to N = 1024 code bits c_n using a rate-1/2 convolutional code with generator [1+D^2, 1+D+D^2]. The c_n are S-random interleaved (S = 7) and partitioned into 8 blocks of 128 bits. Transmitted are 8 frames of 43 + T/3 symbols y_k mapped from T training and 128 data bits using either no precoder, i.e., y'_k = x~_k, or a recursive precoder of the form given in Section II with m = q, A = C^T, and B = D. The length-M = 3 ISI channel is assumed to be time-invariant and given by the impulse response [0.407 0.815 0.407]. The receiver estimates the channel for each frame of 43 + T/3 received symbols using RLS estimation. We chose this example for simplicity. In fact, the derived algorithms exhibit their power especially in scenarios where the CIR is time-varying. Because of space limitations we have to omit such results here. Fig. 3 depicts the equalizer transfer function T_e(i) for both precoder types and two receiver strategies (one-time estimation, iterative estimation) at 8 dB E_b/N_0, defined as P/(q sigma^2 R_tot), for four different T/3 in {7, 15, 31, 63}. We pick T/3 = 15, since A_e is maximal for both receiver strategies, i.e., this T/3 is the best trade-off between estimate reliability and rate loss. Iterative estimation outperforms one-time estimation in particular for short T, since it utilizes the 128 data bits for estimation as well. Note that A_e is the largest achievable rate R of the outer code such that decoding convergence is achieved.

Fig. 3. Achievable performance of the system configurations at 8 dB E_b/N_0: equalizer transfer functions T_e(i) with and without precoding for T/3 in {7, 15, 31, 63}, together with the decoder transfer function T_d(i). Curves without marker: eta and h[k] perfectly known; further curves: eta and h[k] estimated once using the T/3 training symbols, and eta and h[k] estimated iteratively.

For example, at 8 dB E_b/N_0 we find that decoding convergence is possible up to R < 0.63 using iterative estimation with T/q = 15. To achieve this rate, T_d(i) should be matched to T_e(i), e.g., using irregular codes [23]. Observe that A_e is unaffected by the chosen precoder, but the error shoulder after decoding convergence disappears with increasing N when a precoder is used, since T_e(1) = 1. We note that specifying T_e(i) is cumbersome when the CIR is unknown and possibly time-varying. To design a concatenated system, e.g., to select a suitable T_d(i), a representative T_e(i) averaged over all possible CIRs based on f(H) could be obtained. Also, the transmitter could adjust its parameters using feedback from the receiver. Fig. 4 shows the BER performance of the systems described above for both precoders and both estimation strategies. For iterative estimation, the estimator incorporates the LLRs produced by the equalizer in the previous iteration, i.e., L(y_n | z, L(x)). We see that, first, iterative estimation closely approaches the performance achieved when the CIR is known to the receiver, and, second, no significant error shoulder occurs with precoding. Again, we note that the achievable gains compared to one-time estimation are larger for time-varying channels, but we have to omit such results here.

Fig. 4. BER performance of iterative equalization, estimation, and decoding with (bottom plots) and without (upper plots) a precoder in the transmitter, versus E_b/N_0, for known, once-estimated, and iteratively estimated channels.

REFERENCES

[1] S.
Haykin, Adaptive Filter Theory, 3rd ed. Upper Saddle River, New Jersey: Prentice Hall, 1996.
[2] J. Proakis, Digital Communications, 3rd ed. McGraw-Hill, 1995.
[3] L. R. Bahl et al., "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. on Information Theory, vol. 20, pp. 284-287, Mar 1974.
[4] C. Douillard et al., "Iterative correction of intersymbol interference: turbo equalization," European Trans. on Telecomm., vol. 6, pp. 507-511, Sep/Oct 1995.
[5] A. Anastasopoulos and K. Chugg, "Iterative equalization/decoding for TCM for frequency-selective fading channels," in Proc. 31st Asilomar Conf. on Signals, Systems & Comp., vol. 1, pp. 177-181, Nov 1997.
[6] A. Glavieux, C. Laot, and J. Labat, "Turbo equalization over a frequency selective channel," in Proc. Intern. Symp. on Turbo Codes, Brest, France, pp. 96-102, Sep 1997.
[7] M. Tüchler, R. Koetter, and A. Singer, "Turbo equalization: principles and new results," IEEE Trans. on Comm., vol. 50, pp. 754-767, May 2002.
[8] L. Davis, I. Collings, and P. Hoeher, "Joint MAP equalization and channel estimation for frequency-selective and frequency-flat fast-fading channels," IEEE Trans. on Comm., vol. 49, pp. 2106-2114, Dec 2001.
[9] X. Cheng and P. Hoeher, "Blind turbo equalization for wireless DPSK systems," in Proc. Intern. ITG Conf. on Source and Channel Coding, pp. 371-378, Jan 2002.
[10] M. Peacock, I. Collings, and I. Land, "Design rules for adaptive turbo equalization in fast-fading channels," in Proc. Intern. Conf. on Communication Systems & Networks, Valencia, Spain, Sep 2002.
[11] K. Chugg and A. Polydoros, "MLSE for an unknown channel - part I: optimality considerations," IEEE Trans. on Comm., vol. 44, pp. 836-846, July 1996.
[12] E. Baccarelli and R. Cusani, "Combined channel estimation and data detection using soft statistics for frequency-selective fast-fading digital links," IEEE Trans. on Comm., vol. 46, pp. 424-427, April 1998.
[13] A. Anastasopoulos and K.
Chugg, "Adaptive soft-in soft-out algorithms for iterative detection with parametric uncertainty," IEEE Trans. on Comm., vol. 48, pp. 1638-1640, Oct 2000.
[14] N. Nefedov and M. Pukkila, "Turbo equalization and iterative (turbo) estimation techniques for packet data transmission," in Proc. 2nd Intern. Symp. on Turbo Codes, Brest, France, pp. 423-426, 2000.
[15] P. Strauch, C. Luschi, and A. Kuzminskiy, "Iterative channel estimation for EGPRS," in Proc. IEEE Vehicular Techn. Conf. Fall 2000, Boston, Sep 2000.
[16] M. Tüchler, R. Otnes, and A. Schmidbauer, "Performance evaluation of soft iterative channel estimation in turbo equalization," in Proc. IEEE Intern. Conf. on Comm., pp. 1053-1061, May 2002.
[17] C. Kuhn, "Iterative Kanalschätzung, Entzerrung und Dekodierung für den EDGE Standard," Master's thesis, Munich University of Technology, 2002; email to: michael.tuechler@ei.tum.de.
[18] R. Otnes and M. Tüchler, "Soft iterative channel estimation for turbo equalization: comparison of channel estimation algorithms," in Proc. IEEE Intern. Conf. on Communication Systems, Singapore, Nov 2002.
[19] S. Song, A. Singer, and K. Sung, "Turbo equalization with an unknown channel," in Proc. IEEE Intern. Conf. on Acoustics, Speech, and Signal Proc., May 2002.
[20] S. ten Brink, "Convergence behaviour of iteratively decoded parallel concatenated codes," IEEE Trans. on Comm., vol. 49, pp. 1727-1737, Oct 2001.
[21] I. Lee, "The effect of a precoder on serially concatenated coding systems with an ISI channel," IEEE Trans. on Comm., vol. 49, pp. 1168-1175, July 2001.
[22] A. Ashikhmin, G. Kramer, and S. ten Brink, "Extrinsic information transfer functions: a model and two properties," in Proc. CISS, Princeton, March 2002.
[23] M. Tüchler and J. Hagenauer, "EXIT charts of irregular codes," in Proc. CISS, Princeton, March 2002.
[24] S. Benedetto et al., "Serial concatenation of interleaved codes: performance analysis, design, and iterative decoding," IEEE Trans. on Information Theory, vol. 44, pp. 909-926, May 1998.
[25] J. Hagenauer, E. Offer, and L.
Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. on Information Theory, vol. 42, pp. 429-445, March 1996.
[26] M. Tüchler, "Design of serially concatenated systems depending on the block length," in Proc. ICC 2003, Anchorage, U.S.A., May 2003.
[27] S. Aji and R. McEliece, "The generalized distributive law," IEEE Trans. on Information Theory, vol. 46, pp. 325-343, March 2000.