Turbo Codes. Coding and Communication Laboratory. Dept. of Electrical Engineering, National Chung Hsing University


1 Turbo Codes Coding and Communication Laboratory Dept. of Electrical Engineering, National Chung Hsing University

2 Turbo codes 1 Chapter 12: Turbo Codes 1. Introduction 2. Turbo code encoder 3. Design of interleaver 4. Iterative decoding of turbo codes

3 Turbo codes 2 Reference 1. Lin, Error Control Coding 2. Moon, Error Correction Coding 3. C. Heegard and S. B. Wicker, Turbo Coding 4. B. Vucetic and J. Yuan, Turbo Codes

4 Turbo codes 3 Introduction

5 Turbo codes 4 What are turbo codes? Turbo codes, which belong to a class of Shannon-capacity-approaching error-correcting codes, were introduced by Berrou and Glavieux in ICC'93. Turbo codes come closer to approaching Shannon's limit than any other class of error-correcting codes. Turbo codes achieve their remarkable performance with relatively low-complexity encoding and decoding algorithms.

6 Turbo codes 5 We can talk about block turbo codes or (convolutional) turbo codes. A firm understanding of convolutional codes is an important prerequisite to the understanding of turbo codes. Two fundamental ideas of turbo codes: Encoder: it produces a codeword with random-like properties. Decoder: it makes use of soft-output values and iterative decoding. These are achieved by introducing an interleaver in the transmitter and iterative decoding in the receiver. Apply the MAP, Log-MAP, or Max-Log-MAP (SOVA) algorithm for decoding of the component codes.

7 Turbo codes 6 Power efficiency of existing standards

8 Turbo codes 7 (37, 21, 65536) turbo codes with G(D) = [1, (1 + D^4)/(1 + D + D^2 + D^3 + D^4)]

9 Turbo codes 8 Turbo code encoder

10 Turbo codes 9 Turbo code encoder The fundamental turbo code encoder: two identical recursive systematic convolutional (RSC) codes with parallel concatenation. An RSC encoder (a) is termed a component encoder. The two component encoders are separated by an interleaver. Figure A: Fundamental turbo code encoder (r = 1/3). (a) In general, an RSC encoder is a (2, 1) convolutional code.

11 Turbo codes 10 To achieve performance close to the Shannon limit, the information block length (interleaver size) is chosen to be very large, usually at least several thousand bits. RSC codes, generated by systematic feedback encoders, give much better performance than nonrecursive systematic convolutional codes, that is, feedforward encoders. Because only the ordering of the bits is changed by the interleaver, the sequence that enters the second RSC encoder has the same weight as the sequence x that enters the first encoder.

12 Turbo codes 11 Turbo codes suffer from the following disadvantages: 1. A large decoding delay, owing to the large block lengths and the many iterations of decoding required for near-capacity performance. 2. Good performance at low signal-to-noise ratio (the waterfall region) but only modest performance at high signal-to-noise ratio (the error-floor region). 3. Significantly weakened performance at BERs below 10^-5, owing to the fact that the codes have a relatively poor minimum distance, which manifests itself at very low BERs.

13 Turbo codes 12 Figure A-1: Performance comparison of convolutional codes and turbo codes.

14 Turbo codes 13 Recursive systematic convolutional (RSC) encoder The RSC encoder is obtained from the conventional convolutional encoder by feeding back one of its encoded outputs to its input. Example. Consider the conventional convolutional encoder with generator sequences g_1 = [111] and g_2 = [101], or, in compact form, G = [g_1, g_2].

15 Turbo codes 14 Figure B: Conventional convolutional encoder with r = 1/2 and K = 3.

16 Turbo codes 15 The RSC encoder of the conventional convolutional encoder: G = [1, g_2/g_1], where the first output is fed back to the input. In this representation, 1 denotes the systematic output, g_2/g_1 the feedforward output, and g_1 the feedback to the input of the RSC encoder. Figure C shows the resulting RSC encoder.

17 Turbo codes 16 Figure C: The RSC encoder obtained from Figure B with r = 1/2 and m = 2.
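The feedback structure of this RSC encoder can be sketched in code. A minimal sketch, assuming the g_1 = [111], g_2 = [101] encoder of the example above; the function name and the (systematic, parity) output format are ours, not from the slides:

```python
def rsc_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    # Rate-1/2 RSC encoder G = [1, g2/g1] built from g1 = [111], g2 = [101].
    # The feedback taps come from g1 (past register stages); the parity
    # output is formed from g2 acting on the feedback bit and the register.
    s = [0, 0]                               # shift register, m = K - 1 = 2
    out = []
    for u in bits:
        # feedback bit: input XOR the g1 taps on the register (1 + D + D^2)
        a = u ^ (g1[1] & s[0]) ^ (g1[2] & s[1])
        # parity from g2 = 1 + D^2 acting on the feedback path
        p = (g2[0] & a) ^ (g2[1] & s[0]) ^ (g2[2] & s[1])
        out.append((u, p))                   # systematic bit, parity bit
        s = [a, s[0]]                        # shift the register
    return out

print(rsc_encode([1, 0, 0]))
```

Because the code is systematic, the first element of each output pair is always the input bit itself; only the parity stream depends on the feedback.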

18 Turbo codes 17 Trellis termination For the conventional encoder, the trellis is terminated by inserting m = K - 1 additional zero bits after the input sequence. These additional bits drive the conventional convolutional encoder to the all-zero state (trellis termination). However, this strategy is not possible for the RSC encoder due to the feedback. Convolutional encoders are time-invariant, and it is this property that accounts for the relatively large number of low-weight codewords in terminated convolutional codes. Figure D shows a simple strategy, developed in (a), that overcomes this problem. (a) Divsalar, D., and Pollara, F., "Turbo Codes for Deep-Space Communications," JPL TDA Progress Report, Feb. 15, 1995.

19 Turbo codes 18 For encoding the input sequence, the switch is turned to position A, and for terminating the trellis, the switch is turned to position B. Figure D: Trellis termination strategy for the RSC encoder.

20 Turbo codes 19 Recursive and nonrecursive convolutional encoders Example. Figure E shows a simple nonrecursive convolutional encoder with generator sequences g_1 = [11] and g_2 = [10]. Figure E: Nonrecursive r = 1/2 and K = 2 convolutional encoder with input and output sequences.

21 Turbo codes 20 Example. Figure F shows the equivalent recursive convolutional encoder of Figure E with G = [1, g_2/g_1]. Figure F: Recursive r = 1/2 and K = 2 convolutional encoder of Figure E with input and output sequences.

22 Turbo codes 21 Compare Figure E with Figure F: the nonrecursive encoder outputs a codeword of weight 3, while the recursive encoder outputs a codeword of weight 5. State diagram: Figure G-1: State diagram of the nonrecursive encoder in Figure E.

23 Turbo codes 22 Figure G-2: State diagram of the recursive encoder in Figure F. A recursive convolutional encoder tends to produce codewords with increased weight relative to a nonrecursive encoder. This results in fewer low-weight codewords, which leads to better error performance. For turbo codes, the main purpose of implementing RSC encoders as component encoders is to utilize the recursive nature of the encoders, not the fact that the encoders are systematic.

24 Turbo codes 23 Clearly, the state diagrams of the encoders are very similar. The transfer function of Figure G-1 and Figure G-2 is T(D) = D^3/(1 - D), where N and J are neglected.

25 Turbo codes 24 The two codes have the same minimum free distance and can be described by the same trellis structure. These two codes nevertheless have different bit error rates, because the BER depends on the input-output correspondence of the encoders. (b) It has been shown that the BER for a recursive convolutional code is lower than that of the corresponding nonrecursive convolutional code at low signal-to-noise ratios E_b/N_0. (c) (b) Benedetto, S., and Montorsi, G., "Unveiling Turbo Codes: Some Results on Parallel Concatenated Coding Schemes," IEEE Transactions on Information Theory, Vol. 42, No. 2, March 1996. (c) Berrou, C., and Glavieux, A., "Near Optimum Error Correcting Coding and Decoding: Turbo-Codes," IEEE Transactions on Communications, Vol. 44, No. 10, Oct. 1996.

26 Turbo codes 25 Concatenation of codes A concatenated code is composed of two separate codes that are combined to form a larger code. There are two types of concatenation: serial concatenation and parallel concatenation.

27 Turbo codes 26 The total code rate for serial concatenation is r_tot = (k_1 k_2)/(n_1 n_2), which is equal to the product of the two code rates. Figure H: Serial concatenated code.

28 Turbo codes 27 The total code rate for parallel concatenation is r_tot = k/(n_1 + n_2). Figure I: Parallel concatenated code.
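The two rate formulas can be checked numerically. A small sketch; the function names are ours, and the component-code parameters below are illustrative:

```python
def serial_rate(k1, n1, k2, n2):
    # Serial concatenation: r_tot = (k1*k2)/(n1*n2),
    # i.e. the product of the two component code rates.
    return (k1 * k2) / (n1 * n2)

def parallel_rate(k, n1, n2):
    # Parallel concatenation: r_tot = k/(n1 + n2),
    # since the same k input bits feed both encoders.
    return k / (n1 + n2)

# e.g. two rate-1/2 component codes
print(serial_rate(1, 2, 1, 2))   # 0.25
print(parallel_rate(1, 2, 2))    # 0.25
```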

29 Turbo codes 28 For serial and parallel concatenation schemes, an interleaver is often placed between the encoders to improve burst-error correction capability or to increase the randomness of the code. Turbo codes use the parallel concatenated encoding scheme; however, the turbo code decoder is based on the serial concatenated decoding scheme. Serial concatenated decoders are used because they perform better than a parallel concatenated decoding scheme: the serial scheme can share information between the concatenated decoders, whereas the decoders in a parallel scheme would primarily decode independently.

30 Turbo codes 29 Interleaver design The interleaver is used to provide randomness to the input sequences. It is also used to increase the weights of the codewords, as shown in Figure J. Figure J: The interleaver increases the code weight for RSC encoder 2 as compared with RSC encoder 1.

31 Turbo codes 30 From Figure K, the input sequence x_i produces the output sequences c_1i and c_2i, respectively. The input sequences x_1 and x_2 are different permuted versions of x_0. Figure K: An illustrative example of an interleaver's capability.

32 Turbo codes 31 Table: Input and output sequences for the encoder in Figure K, listing for each i the input sequence x_i, the output sequences c_1i and c_2i, and the resulting codeword weight. The interleaver affects the performance of turbo codes because it directly affects the distance properties of the code.

33 Turbo codes 32 Block interleaver The block interleaver is the most commonly used interleaver in communication systems. It writes in column-wise, from top to bottom and left to right, and reads out row-wise, from left to right and top to bottom. Figure L: Block interleaver.
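This write-column-wise/read-row-wise rule can be sketched directly. A minimal sketch; the function name and the 2 x 3 example are ours:

```python
def block_interleave(bits, rows, cols):
    # Write column-wise (top to bottom, left to right),
    # then read row-wise (left to right, top to bottom).
    assert len(bits) == rows * cols
    grid = [[None] * cols for _ in range(rows)]
    i = 0
    for c in range(cols):
        for r in range(rows):
            grid[r][c] = bits[i]
            i += 1
    return [grid[r][c] for r in range(rows) for c in range(cols)]

print(block_interleave([1, 2, 3, 4, 5, 6], rows=2, cols=3))  # [1, 3, 5, 2, 4, 6]
```

Note that interleaving again with rows and columns swapped undoes the permutation, which is how the receiver de-interleaves.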

34 Turbo codes 33 Random (pseudo-random) interleaver The random interleaver uses a fixed random permutation and maps the input sequence according to the permutation order. The length of the input sequence is assumed to be L. The best interleavers reorder the bits in a pseudo-random manner. Conventional block (row-column) interleavers do not perform well in turbo codes, except at relatively short block lengths.

35 Turbo codes 34 Figure M: A random (pseudo-random) interleaver with L = 8.

36 Turbo codes 35 Circular-shifting interleaver The permutation p of the circular-shifting interleaver is defined by p(i) = (ai + s) mod L, satisfying a < L, a relatively prime to L, and s < L, where i is the index, a is the step size, and s is the offset.
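The permutation p(i) = (ai + s) mod L is easy to generate; the sketch below reproduces the L = 8, a = 3, s = 0 case of Figure N (the function name is ours):

```python
from math import gcd

def circular_shift_perm(L, a, s):
    # p(i) = (a*i + s) mod L; a must be relatively prime to L
    # so that the mapping is a permutation of {0, ..., L-1}.
    assert a < L and s < L and gcd(a, L) == 1
    return [(a * i + s) % L for i in range(L)]

print(circular_shift_perm(L=8, a=3, s=0))  # [0, 3, 6, 1, 4, 7, 2, 5]
```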

37 Turbo codes 36 Figure N: A circular-shifting interleaver with L = 8, a = 3, s = 0.

38 Turbo codes 37 Iterative decoding of turbo codes

39 Turbo codes 38 The notation of the turbo code encoder The information sequence (information bits plus the v termination bits) is considered to be a block of length K and is represented by the vector u = (u_0, u_1, ..., u_{K-1}).

40 Turbo codes 39 Because encoding is systematic, the information sequence u is the first transmitted sequence; that is, u = v^(0) = (v_0^(0), v_1^(0), ..., v_{K-1}^(0)). The first encoder generates the parity sequence v^(1) = (v_0^(1), v_1^(1), ..., v_{K-1}^(1)). The parity sequence generated by the second encoder is represented as v^(2) = (v_0^(2), v_1^(2), ..., v_{K-1}^(2)). The final transmitted sequence (codeword) is given by the vector v = (v_0^(0) v_0^(1) v_0^(2), v_1^(0) v_1^(1) v_1^(2), ..., v_{K-1}^(0) v_{K-1}^(1) v_{K-1}^(2)).

41 Turbo codes 40 The basic structure of an iterative turbo decoder The basic structure of an iterative turbo decoder is shown in Figure O. (We assume here a rate R = 1/3 parallel concatenated code without puncturing.) Figure O: Basic structure of an iterative turbo decoder.

42 Turbo codes 41 Figure 1: Another view of Turbo iterative decoder

43 Turbo codes 42 At each time unit l, three output values are received from the channel: one for the information bit u_l = v_l^(0), denoted by r_l^(0), and two for the parity bits v_l^(1) and v_l^(2), denoted by r_l^(1) and r_l^(2). The 3K-dimensional received vector is denoted by r = (r_0^(0) r_0^(1) r_0^(2), r_1^(0) r_1^(1) r_1^(2), ..., r_{K-1}^(0) r_{K-1}^(1) r_{K-1}^(2)). Let each transmitted bit be represented using the mapping 0 -> -1 and 1 -> +1.

44 Turbo codes 43 The general operation of turbo iterative decoding proceeds as shown in the two figures above. The received data (r^(0), r^(1)) and the prior probabilities (assumed equal for the first iteration) are fed to decoder I, and decoder I produces the extrinsic probabilities. After the interleaver, these are used as the prior probabilities of decoder II, along with the received data (r^(0), r^(2)). The extrinsic output probabilities of decoder II are then deinterleaved and passed back to become the prior probabilities of decoder I. Compare this structure with sum-product decoding on a factor graph.
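The message flow just described can be sketched as a loop. Everything here is a hypothetical stand-in (the `siso_decode` callback would be a real SISO/BCJR core in practice); the sketch only makes the exchange of extrinsic values concrete:

```python
def turbo_decode(r0, r1, r2, siso_decode, interleave, deinterleave, n_iters=10):
    # Iterative exchange of extrinsic values between two SISO decoders.
    # siso_decode(sys, par, prior) returns one extrinsic value per bit;
    # interleave/deinterleave apply the turbo interleaver and its inverse.
    prior = [0.0] * len(r0)          # equal priors for the first iteration
    for _ in range(n_iters):
        ext1 = siso_decode(r0, r1, prior)                       # decoder I
        ext2 = siso_decode(interleave(r0), r2, interleave(ext1))  # decoder II
        prior = deinterleave(ext2)   # becomes decoder I's prior next pass
    return prior
```

With dummy callbacks the loop can be exercised end to end, which is useful when wiring up a real decoder.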

45 Turbo codes 44 The process of passing probabilities back and forth continues until the decoder determines that the process has converged, or until some maximum number of iterations is reached. Construct the trellis for each component code: regular for convolutional codes and irregular for block codes. The decoding algorithms commonly used for each component code are the MAP, Log-MAP, and Max-Log-MAP (SOVA).

46 Turbo codes 45 For an AWGN channel with unquantized (soft) outputs, we define the log-likelihood ratio (L-value) L(v_l^(0) | r_l^(0)) = L(u_l | r_l^(0)) (before decoding) of a transmitted information bit u_l given the received value r_l^(0) as
L(u_l | r_l^(0)) = ln [P(u_l = +1 | r_l^(0)) / P(u_l = -1 | r_l^(0))]
= ln [P(r_l^(0) | u_l = +1) P(u_l = +1) / (P(r_l^(0) | u_l = -1) P(u_l = -1))]
= ln [P(r_l^(0) | u_l = +1) / P(r_l^(0) | u_l = -1)] + ln [P(u_l = +1) / P(u_l = -1)]
= ln [e^{-(E_s/N_0)(r_l^(0) - 1)^2} / e^{-(E_s/N_0)(r_l^(0) + 1)^2}] + ln [P(u_l = +1) / P(u_l = -1)],
where E_s/N_0 is the channel SNR, and u_l and r_l^(0) have both been normalized by a factor of sqrt(E_s).

47 Turbo codes 46 This equation simplifies to
L(u_l | r_l^(0)) = -(E_s/N_0) [(r_l^(0) - 1)^2 - (r_l^(0) + 1)^2] + ln [P(u_l = +1) / P(u_l = -1)]
= 4 (E_s/N_0) r_l^(0) + ln [P(u_l = +1) / P(u_l = -1)]
= L_c r_l^(0) + L_a(u_l),
where L_c = 4(E_s/N_0) is the channel reliability factor, and L_a(u_l) is the a priori L-value of the bit u_l. In the case of a transmitted parity bit v_l^(j), given the received value r_l^(j), j = 1, 2, the L-value (before decoding) is given by
L(v_l^(j) | r_l^(j)) = L_c r_l^(j) + L_a(v_l^(j)) = L_c r_l^(j), j = 1, 2.
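The relation L(u | r) = L_c r + L_a(u) with L_c = 4 E_s/N_0 is a one-line computation; a small sketch (the function names are ours, and the numeric values are illustrative):

```python
import math

def channel_L_value(r, es_n0, la=0.0):
    # L(u | r) = Lc*r + La(u), with channel reliability factor Lc = 4*Es/N0
    Lc = 4.0 * es_n0
    return Lc * r + la

def prob_plus_one(L):
    # invert an L-value back to a probability: P(u = +1) = e^L / (1 + e^L)
    return math.exp(L) / (1.0 + math.exp(L))

# with Es/N0 = 1/4 (as in the worked example later), Lc = 1
print(channel_L_value(r=0.8, es_n0=0.25))   # 0.8
print(prob_plus_one(0.0))                   # 0.5
```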

48 Turbo codes 47 In a linear block code with equally likely information bits, the parity bits are also equally likely to be +1 or -1, and thus the a priori L-values of the parity bits are 0; that is, L_a(v_l^(j)) = ln [P(v_l^(j) = +1) / P(v_l^(j) = -1)] = 0, j = 1, 2. Remark: the a priori L-values of the information bits L_a(u_l) are also equal to 0 for the first iteration of decoder I, but thereafter the a priori L-values are replaced by the extrinsic L-values from the other decoder.

49 Turbo codes 48 Iterative decoding of decoder I The output of decoder 1 contains two terms:
1. L^(1)(u_l) = ln [P(u_l = +1 | r_1, L_a^(1)) / P(u_l = -1 | r_1, L_a^(1))], the a posteriori L-value (after decoding) of each information bit produced by decoder 1, given the (partial) received vector r_1 = [r_0^(0) r_0^(1), r_1^(0) r_1^(1), ..., r_{K-1}^(0) r_{K-1}^(1)] and the a priori input vector L_a^(1) = [L_a^(1)(u_0), L_a^(1)(u_1), ..., L_a^(1)(u_{K-1})] for decoder 1.
2. L_e^(1)(u_l) = L^(1)(u_l) - [L_c r_l^(0) + L_e^(2)(u_l)], the extrinsic a posteriori L-value (after decoding) associated with each information bit produced by decoder 1, which, after interleaving, is passed to the input of decoder 2 as the a priori value L_a^(2)(u_l).

50 Turbo codes 49 The received soft channel L-values L_c r_l^(0) for u_l and L_c r_l^(1) for v_l^(1) enter decoder 1 along with the a priori L-values of the information bits L_a^(1)(u_l) = L_e^(2)(u_l). Subtracting the term in brackets, namely L_c r_l^(0) + L_e^(2)(u_l), removes the effect of the current information bit u_l from L^(1)(u_l), leaving only the effect of the parity constraint, thus providing an independent estimate of the information bit u_l to decoder 2 in addition to the received soft channel L-values at time l.

51 Turbo codes 50 Iterative decoding of decoder II The output of decoder 2 contains two terms:
1. L^(2)(u_l) = ln [P(u_l = +1 | r_2, L_a^(2)) / P(u_l = -1 | r_2, L_a^(2))], the a posteriori L-value (after decoding) of each information bit produced by decoder 2, given the (partial) received vector r_2 = [r_0^(0) r_0^(2), r_1^(0) r_1^(2), ..., r_{K-1}^(0) r_{K-1}^(2)] and the a priori input vector L_a^(2) = [L_a^(2)(u_0), L_a^(2)(u_1), ..., L_a^(2)(u_{K-1})] for decoder 2.
2. L_e^(2)(u_l) = L^(2)(u_l) - [L_c r_l^(0) + L_e^(1)(u_l)], the extrinsic a posteriori L-values produced by decoder 2, which, after deinterleaving, are passed back to the input of decoder 1 as the a priori values L_a^(1)(u_l).

52 Turbo codes 51 The (properly interleaved) received soft channel L-values L_c r_l^(0) for u_l and the soft channel L-values L_c r_l^(2) for v_l^(2) enter decoder 2, along with the (properly interleaved) a priori L-values of the information bits L_a^(2)(u_l) = L_e^(1)(u_l). Subtracting the term in brackets, namely L_c r_l^(0) + L_e^(1)(u_l), removes the effect of the current information bit u_l from L^(2)(u_l), leaving only the effect of the parity constraint, thus providing an independent estimate of the information bit u_l to decoder 1 in addition to the received soft channel L-values at time l.

53 Turbo codes 52 Figure 2: The factor graph of turbo codes

54 Turbo codes 53 In summary, the input to each decoder contains three terms: 1. the soft channel L-values L_c r_l^(0); 2. the soft channel L-values L_c r_l^(1) or L_c r_l^(2); 3. the extrinsic a posteriori L-values used as new prior L-values, L_a^(1)(u_l) = L_e^(2)(u_l) or L_a^(2)(u_l) = L_e^(1)(u_l). The term turbo in turbo coding relates to decoding, not encoding: the feedback of extrinsic information from the SISO decoders in iterative decoding mimics the feedback of exhaust gases in a turbo engine.

55 Turbo codes 54 Please review the SISO decoding presented for convolutional codes or the trellis of block codes, such as BCJR, Log-BCJR, and Max-Log-BCJR (SOVA). We present two examples: one using the Log-BCJR and the other using the Max-Log-BCJR.

56 Turbo codes 55 Iterative decoding using the og-map agorithm Exampe. Consider the parae concatenated convoutiona code (PCCC) formed by using the 2 state (2, 1, 1) systematic recursive convoutiona code (SRCC) with generator matrix G(D) = [ D ] as the constituent code. A bock diagram of the encoder is shown in Figure P(a). Aso consider an input sequence of ength K = 4, incuding one termination bit, aong with a 2 2 bock (row coumn) intereaver, resuting in a (12, 3) PCCC with overa rate R = 1/4.

57 Turbo codes 56 Figure P: (a) A 2-state turbo encoder and (b) the decoding trellis for the (2, 1, 1) constituent code with K = 4.

58 Turbo codes 57 The length K = 4 decoding trellis for the component code is shown in Figure P(b), where the branches are labeled using the mapping 0 -> -1 and 1 -> +1. The input block is given by the vector u = [u_0, u_1, u_2, u_3]; the interleaved input block is u' = [u'_0, u'_1, u'_2, u'_3] = [u_0, u_2, u_1, u_3]; the parity vector for the first component code is p^(1) = [p_0^(1), p_1^(1), p_2^(1), p_3^(1)]; and the parity vector for the second component code is p^(2) = [p_0^(2), p_1^(2), p_2^(2), p_3^(2)]. We can represent the 12 transmitted bits in a rectangular array, as shown in Figure R(a), where the input vector u determines the parity vector p^(1) in the first two rows, and the interleaved input vector u' determines the parity vector p^(2) in the first two columns.

59 Turbo codes 58 Figure R: Iterative decoding example for a (12, 3) PCCC.

60 Turbo codes 59 For purposes of illustration, we assume the particular bit values shown in Figure R(b). We also assume a channel SNR of E_s/N_0 = 1/4 (-6.02 dB), so that the received channel L-values corresponding to the received vector r = [r_0^(0) r_0^(1) r_0^(2), r_1^(0) r_1^(1) r_1^(2), r_2^(0) r_2^(1) r_2^(2), r_3^(0) r_3^(1) r_3^(2)] are given by L_c r_l^(j) = 4 (E_s/N_0) r_l^(j) = r_l^(j), l = 0, 1, 2, 3, j = 0, 1, 2. Again for purposes of illustration, a set of particular received channel L-values is given in Figure R(c).

61 Turbo codes 60 In the first iteration of decoder 1 (the row decoder), the Log-MAP algorithm is applied to the trellis of the 2-state (2, 1, 1) code shown in Figure P(b) to compute the a posteriori L-values L^(1)(u_l) for each of the four input bits and the corresponding extrinsic a posteriori L-values L_e^(1)(u_l) to pass to decoder 2 (the column decoder). Similarly, in the first iteration of decoder 2, the Log-MAP algorithm uses the extrinsic a posteriori L-values L_e^(1)(u_l) received from decoder 1 as the a priori L-values L_a^(2)(u_l) to compute the a posteriori L-values L^(2)(u_l) for each of the four input bits and the corresponding extrinsic a posteriori L-values L_e^(2)(u_l) to pass back to decoder 1. Further decoding proceeds iteratively in this fashion.

62 Turbo codes 61 To simplify notation, we denote the transmitted vector as v = (v_0, v_1, v_2, v_3), where v_l = (u_l, p_l), l = 0, 1, 2, 3, u_l is an input bit, and p_l is a parity bit. Similarly, the received vector is denoted as r = (r_0, r_1, r_2, r_3), where r_l = (r_ul, r_pl), l = 0, 1, 2, 3, r_ul is the received symbol corresponding to the transmitted input bit u_l, and r_pl is the received symbol corresponding to the transmitted parity bit p_l.

63 Turbo codes 62 An input bit a posteriori L-value is given by
L(u_l) = ln [P(u_l = +1 | r) / P(u_l = -1 | r)] = ln [ Σ_{(s',s): u_l = +1} p(s', s, r) / Σ_{(s',s): u_l = -1} p(s', s, r) ],
where s' represents a state at time l (denoted by s' ∈ σ_l), s represents a state at time l + 1 (denoted by s ∈ σ_{l+1}), and the sums are over all state pairs (s', s) for which u_l = +1 or -1, respectively.

64 Turbo codes 63 We can write the joint probabilities p(s', s, r) as
p(s', s, r) = e^{α*_l(s') + γ*_l(s', s) + β*_{l+1}(s)},
where α*_l(s'), γ*_l(s', s), and β*_{l+1}(s) are the familiar log-domain α's, γ's, and β's of the MAP algorithm.

65 Turbo codes 64 For a continuous-output AWGN channel with an SNR of E_s/N_0, we can write the MAP decoding equations as
Branch metric: γ*_l(s', s) = u_l L_a(u_l)/2 + (L_c/2) r_l · v_l, l = 0, 1, 2, 3
Forward metric: α*_{l+1}(s) = max*_{s' ∈ σ_l} [γ*_l(s', s) + α*_l(s')], l = 0, 1, 2, 3
Backward metric: β*_l(s') = max*_{s ∈ σ_{l+1}} [γ*_l(s', s) + β*_{l+1}(s)], l = 0, 1, 2, 3
where the max* function is defined by max*(x, y) ≡ ln(e^x + e^y) = max(x, y) + ln(1 + e^{-|x-y|}), and the initial conditions are α*_0(S_0) = β*_4(S_0) = 0 and α*_0(S_1) = β*_4(S_1) = -∞.
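The max* function (the Jacobian logarithm) used in these recursions can be sketched directly, alongside the plain max that the Max-Log-MAP approximation substitutes for it (the function name is ours):

```python
import math

def max_star(x, y):
    # max*(x, y) = ln(e^x + e^y) = max(x, y) + ln(1 + e^{-|x - y|})
    # The correction term is computed with log1p for numerical stability.
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

# exact Jacobian logarithm vs. the Max-Log-MAP approximation max(x, y):
print(max_star(1.0, 1.0))   # ln(2e) = 1.6931...
print(max(1.0, 1.0))        # 1.0
```

The correction term ln(1 + e^{-|x-y|}) is largest (ln 2) when the two metrics are equal and vanishes as they separate, which is why the Max-Log-MAP approximation loses accuracy mainly when competing paths are close.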

66 Turbo codes 65 Further simplifying the branch metric, we obtain
γ*_l(s', s) = u_l L_a(u_l)/2 + (L_c/2)(u_l r_ul + p_l r_pl) = (u_l/2)[L_a(u_l) + L_c r_ul] + (p_l/2) L_c r_pl, l = 0, 1, 2, 3.

67 Turbo codes 66 Figure 3: Another view of Turbo iterative decoder

68 Turbo codes 67 Computation of L(u_0) We can express the a posteriori L-value of u_0 as
L(u_0) = ln p(s' = S_0, s = S_1, r) - ln p(s' = S_0, s = S_0, r)
= [α*_0(S_0) + γ*_0(s' = S_0, s = S_1) + β*_1(S_1)] - [α*_0(S_0) + γ*_0(s' = S_0, s = S_0) + β*_1(S_0)]
= {+(1/2)[L_a(u_0) + L_c r_u0] + (1/2) L_c r_p0 + β*_1(S_1)} - {-(1/2)[L_a(u_0) + L_c r_u0] - (1/2) L_c r_p0 + β*_1(S_0)}
= L_c r_u0 + L_a(u_0) + L_e(u_0),
where L_e(u_0) ≡ L_c r_p0 + β*_1(S_1) - β*_1(S_0) represents the extrinsic a posteriori (output) L-value of u_0.

69 Turbo codes 68 The final form of the above equation clearly illustrates the three components of the a posteriori L-value of u_0 computed at the output of a Log-MAP decoder: L_c r_u0: the received channel L-value corresponding to bit u_0, which was part of the decoder input. L_a(u_0): the a priori L-value of u_0, which was also part of the decoder input. Except for the first iteration of decoder 1, this term equals the extrinsic a posteriori L-value of u_0 received from the output of the other decoder. L_e(u_0): the extrinsic part of the a posteriori L-value of u_0, which does not depend on L_c r_u0 or L_a(u_0). This term is then sent to the other decoder as its a priori input.

70 Turbo codes 69 Computation of L(u_1) We now proceed in a similar manner to compute the a posteriori L-value of bit u_1. We see from Figure P(b) that in this case there are two terms in each of the sums in L(u_l) = ln [Σ_{(s',s): u_l = +1} p(s', s, r) / Σ_{(s',s): u_l = -1} p(s', s, r)], because at this time there are two +1 and two -1 transitions in the trellis diagram.

71 Turbo codes 70
L(u_1) = ln [p(s' = S_0, s = S_1, r) + p(s' = S_1, s = S_0, r)] - ln [p(s' = S_0, s = S_0, r) + p(s' = S_1, s = S_1, r)]
= max*{[α*_1(S_0) + γ*_1(s' = S_0, s = S_1) + β*_2(S_1)], [α*_1(S_1) + γ*_1(s' = S_1, s = S_0) + β*_2(S_0)]} - max*{[α*_1(S_0) + γ*_1(s' = S_0, s = S_0) + β*_2(S_0)], [α*_1(S_1) + γ*_1(s' = S_1, s = S_1) + β*_2(S_1)]}
= max*{(+(1/2)[L_a(u_1) + L_c r_u1] + (1/2) L_c r_p1 + α*_1(S_0) + β*_2(S_1)), (+(1/2)[L_a(u_1) + L_c r_u1] - (1/2) L_c r_p1 + α*_1(S_1) + β*_2(S_0))} - max*{(-(1/2)[L_a(u_1) + L_c r_u1] - (1/2) L_c r_p1 + α*_1(S_0) + β*_2(S_0)), (-(1/2)[L_a(u_1) + L_c r_u1] + (1/2) L_c r_p1 + α*_1(S_1) + β*_2(S_1))}
= L_c r_u1 + L_a(u_1) + L_e(u_1),
where the last step uses the identity max*(w + x, w + y) = w + max*(x, y).

72 Turbo codes 71 Computation of L(u_2) and L(u_3) Continuing, we can use the same procedure to compute the a posteriori L-value of bit u_2 as
L(u_2) = L_c r_u2 + L_a(u_2) + L_e(u_2),
where
L_e(u_2) = max*{[+(1/2) L_c r_p2 + α*_2(S_0) + β*_3(S_1)], [-(1/2) L_c r_p2 + α*_2(S_1) + β*_3(S_0)]} - max*{[-(1/2) L_c r_p2 + α*_2(S_0) + β*_3(S_0)], [+(1/2) L_c r_p2 + α*_2(S_1) + β*_3(S_1)]}

73 Turbo codes 72 and
L(u_3) = L_c r_u3 + L_a(u_3) + L_e(u_3),
where
L_e(u_3) = [-(1/2) L_c r_p3 + α*_3(S_1) + β*_4(S_0)] - [-(1/2) L_c r_p3 + α*_3(S_0) + β*_4(S_0)] = α*_3(S_1) - α*_3(S_0)

74 Turbo codes 73 We now need expressions for the terms α*_1(S_0), α*_1(S_1), α*_2(S_0), α*_2(S_1), α*_3(S_0), α*_3(S_1), β*_1(S_0), β*_1(S_1), β*_2(S_0), β*_2(S_1), β*_3(S_0), and β*_3(S_1) that are used to calculate the extrinsic a posteriori L-values L_e(u_l), l = 0, 1, 2, 3. We use the shorthand notation L_ul ≡ L_c r_ul + L_a(u_l) and L_pl ≡ L_c r_pl, l = 0, 1, 2, 3, for the intrinsic information-bit L-values and parity-bit L-values, respectively.

75 Turbo codes 74 We can obtain the following:
α*_1(S_0) = -(1/2)(L_u0 + L_p0)
α*_1(S_1) = +(1/2)(L_u0 + L_p0)
α*_2(S_0) = max*{[-(1/2)(L_u1 + L_p1) + α*_1(S_0)], [+(1/2)(L_u1 - L_p1) + α*_1(S_1)]}
α*_2(S_1) = max*{[+(1/2)(L_u1 + L_p1) + α*_1(S_0)], [-(1/2)(L_u1 - L_p1) + α*_1(S_1)]}
α*_3(S_0) = max*{[-(1/2)(L_u2 + L_p2) + α*_2(S_0)], [+(1/2)(L_u2 - L_p2) + α*_2(S_1)]}
α*_3(S_1) = max*{[+(1/2)(L_u2 + L_p2) + α*_2(S_0)], [-(1/2)(L_u2 - L_p2) + α*_2(S_1)]}

76 Turbo codes 75
β*_3(S_0) = -(1/2)(L_u3 + L_p3)
β*_3(S_1) = +(1/2)(L_u3 - L_p3)
β*_2(S_0) = max*{[-(1/2)(L_u2 + L_p2) + β*_3(S_0)], [+(1/2)(L_u2 + L_p2) + β*_3(S_1)]}
β*_2(S_1) = max*{[+(1/2)(L_u2 - L_p2) + β*_3(S_0)], [-(1/2)(L_u2 - L_p2) + β*_3(S_1)]}
β*_1(S_0) = max*{[-(1/2)(L_u1 + L_p1) + β*_2(S_0)], [+(1/2)(L_u1 + L_p1) + β*_2(S_1)]}
β*_1(S_1) = max*{[+(1/2)(L_u1 - L_p1) + β*_2(S_0)], [-(1/2)(L_u1 - L_p1) + β*_2(S_1)]}
We note here that the a priori L-value of a parity bit L_a(p_l) = 0 for all l, since in a linear code with equally likely information bits the parity bits are also equally likely to be +1 or -1.

77 Turbo codes 76 We can write the extrinsic a posteriori L-values in terms of L_ul and L_pl as
L_e(u_0) = L_p0 + β*_1(S_1) - β*_1(S_0),
L_e(u_1) = max*{[+(1/2) L_p1 + α*_1(S_0) + β*_2(S_1)], [-(1/2) L_p1 + α*_1(S_1) + β*_2(S_0)]} - max*{[-(1/2) L_p1 + α*_1(S_0) + β*_2(S_0)], [+(1/2) L_p1 + α*_1(S_1) + β*_2(S_1)]},
L_e(u_2) = max*{[+(1/2) L_p2 + α*_2(S_0) + β*_3(S_1)], [-(1/2) L_p2 + α*_2(S_1) + β*_3(S_0)]} - max*{[-(1/2) L_p2 + α*_2(S_0) + β*_3(S_0)], [+(1/2) L_p2 + α*_2(S_1) + β*_3(S_1)]},
and L_e(u_3) = α*_3(S_1) - α*_3(S_0). The extrinsic L-value of bit u_l does not depend directly on either the received or the a priori L-value of u_l.

78 Turbo codes 77 Please see the detailed computation in the corresponding example in Lin's book.


85 Turbo codes 84 Iterative decoding using the max-og-map agorithm Exampe. When the approximation max(x, y) max(x, y) is appied to the forward and backward recursions, we obtain for the first iteration of decoder 1 α 2 (S 0) max{ 0.70, 1.20} = 1.20 α 2 (S 1) max{ 0.20, 0.30} = 0.20 {[ α 3 (S 0) max 12 ] ( ) = max {1.55, 1.65} = 1.55 {[ α 3 (S 1) max + 12 ] ( ) , [+ 12 ]} ( ) 0.20, [ 12 ]} ( ) 0.20 = max {0.85, 1.25} = 1.25

86 Turbo codes 85
β*_2(S_0) ≈ max{0.35, 1.25} = 1.25
β*_2(S_1) ≈ max{-1.45, 3.05} = 3.05
β*_1(S_0) ≈ max{1.00, 3.30} = 3.30
β*_1(S_1) ≈ max{2.00, 2.30} = 2.30
L_e^(1)(u_0) ≈ 0.90
L_e^(1)(u_1) ≈ max{2.35, 1.95} - max{1.05, 3.25} = 2.35 - 3.25 = -0.90
and, using similar calculations, we have L_e^(1)(u_2) ≈ +1.4 and L_e^(1)(u_3) ≈ -0.3.

87 Turbo codes 86 Using these approximate extrinsic a posteriori L-values as a priori L-values for decoder 2, and recalling that the roles of u_1 and u_2 are reversed for decoder 2, we obtain
α*_1(S_0) = -0.65
α*_1(S_1) = +0.65
α*_2(S_0) ≈ max{0.25, -1.45} = 0.25
α*_2(S_1) ≈ max{1.05, 0.15} = 1.05
α*_3(S_0) ≈ max{0.10, 1.00} = 1.00
α*_3(S_1) ≈ max{0.40, 1.10} = 1.10

88 Turbo codes 87
β*_3(S_0) = -0.10
β*_3(S_1) = +1.20
β*_2(S_0) ≈ max{-0.25, 1.35} = 1.35
β*_2(S_1) ≈ max{-0.15, 1.25} = 1.25
β*_1(S_0) ≈ max{0.95, 1.65} = 1.65
β*_1(S_1) ≈ max{0.55, 2.05} = 2.05
L_e^(2)(u_0) ≈ -0.80
L_e^(2)(u_2) ≈ max{2.5, 0.1} - max{1.4, 1.2} = 2.5 - 1.4 = 1.10
and, using similar calculations, we have L_e^(2)(u_1) ≈ -0.8 and L_e^(2)(u_3) ≈ +0.1.

89 Turbo codes 88 We calculate the approximate a posteriori L-value of information bit u_0 after the first complete iteration of decoding as L^(2)(u_0) = L_c r_u0 + L_a^(2)(u_0) + L_e^(2)(u_0) ≈ -0.9, and we similarly obtain the remaining approximate a posteriori L-values as L^(2)(u_2) ≈ +0.7, L^(2)(u_1) ≈ -0.7, and L^(2)(u_3) ≈ +1.4.

90 Turbo codes 89 Please see the detailed computation in the corresponding example in Lin's book.

91 Turbo codes 90 Fundamental principles of turbo decoding We now summarize our discussion of iterative decoding using the Log-MAP and Max-Log-MAP algorithms: The extrinsic a posteriori L-values are no longer strictly independent of the other terms after the first iteration of decoding, which causes the performance improvement from successive iterations to diminish over time. The concept of iterative decoding is similar to negative feedback in control theory, in the sense that the extrinsic information from the output that is fed back to the input has the effect of amplifying the SNR at the input, leading to a stable system output.

92 Turbo codes 91 Decoding speed can be improved by a factor of 2 by allowing the two decoders to work in parallel. In this case, the a priori L-values for the first iteration of decoder 2 will be the same as for decoder 1 (normally equal to 0), and the extrinsic a posteriori L-values will then be exchanged at the same time prior to each succeeding iteration. After a sufficient number of iterations, the final decoding decision can be taken from the a posteriori L-values of either decoder, or from the average of these values, without noticeably affecting performance.

93 Turbo codes 92 As noted earlier, the L-values of the parity bits remain constant throughout decoding. In serially concatenated iterative decoding systems, however, parity bits from the outer decoder enter the inner decoder, and thus the L-values of these parity bits must be updated during the iterations. The foregoing approach to iterative decoding is ineffective for nonsystematic constituent codes, since channel L-values for the information bits are not available as inputs to decoder 2; however, the iterative decoder of Figure O can be modified to decode PCCCs with nonsystematic component codes.

94 Turbo codes 93 As noted previously, better performance is normally achieved with pseudorandom interleavers, particularly for large block lengths, and the iterative decoding procedure remains the same. It is possible, however, particularly on very noisy channels, for the decoder to converge to the correct decision and then diverge again, or even to oscillate between correct and incorrect decisions.

95 Turbo codes 94 Iterations can be stopped after some fixed number, or stopping rules based on reliability statistics can be used to halt decoding. The Max-Log-MAP algorithm is simpler to implement than the Log-MAP algorithm; however, it typically suffers a performance degradation of about 0.5 dB. It can be shown that the Max-Log-MAP algorithm is equivalent to the SOVA algorithm.

Turbo codes 95

Stopping rules for iterative decoding

1. One method is based on the cross-entropy (CE) of the APP distributions at the outputs of the two decoders.

- The cross-entropy $D(P\|Q)$ of two joint probability distributions $P(\mathbf{u})$ and $Q(\mathbf{u})$, assuming statistical independence of the bits in the vector $\mathbf{u} = [u_0, u_1, \ldots, u_{K-1}]$, is defined as

$$D(P\|Q) = E_P\left\{\log\frac{P(\mathbf{u})}{Q(\mathbf{u})}\right\} = \sum_{l=0}^{K-1} E_P\left\{\log\frac{P(u_l)}{Q(u_l)}\right\},$$

where $E_P\{\cdot\}$ denotes expectation with respect to the probability distribution $P(u_l)$.
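Under the independence assumption, this definition reduces to a sum of K bitwise cross-entropies. A minimal Python sketch (illustrative, not from the text):

```python
import math

def bit_ce(p1, q1):
    # E_P{log(P(u_l)/Q(u_l))} for a single bit, with p1 = P(u_l = +1)
    # and q1 = Q(u_l = +1).
    total = 0.0
    for p, q in ((p1, q1), (1.0 - p1, 1.0 - q1)):
        if p > 0.0:
            total += p * math.log(p / q)
    return total

def cross_entropy(P, Q):
    # D(P||Q) = sum over the K independent bit positions
    return sum(bit_ce(p1, q1) for p1, q1 in zip(P, Q))

P = [0.9, 0.2, 0.99]
Q = [0.8, 0.3, 0.97]
print(cross_entropy(P, Q))  # small positive value
print(cross_entropy(P, P))  # 0.0, since D(P||Q) = 0 iff P = Q
```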

Turbo codes 96

- $D(P\|Q)$ is a measure of the closeness of two distributions, and $D(P\|Q) = 0$ iff $P(u_l) = Q(u_l)$, $u_l = \pm 1$, $l = 0, 1, \ldots, K-1$.

- The CE stopping rule is based on the difference between the a posteriori L-values after successive iterations at the outputs of the two decoders. For example, let

$$L^{(1)}_{(i)}(u_l) = L_c r_l + L^{(1)}_{a(i)}(u_l) + L^{(1)}_{e(i)}(u_l)$$

represent the a posteriori L-value at the output of decoder 1 after iteration $i$, and let

$$L^{(2)}_{(i)}(u_l) = L_c r_l + L^{(2)}_{a(i)}(u_l) + L^{(2)}_{e(i)}(u_l)$$

represent the a posteriori L-value at the output of decoder 2 after iteration $i$.

Turbo codes 97

- Now, using the facts that $L^{(1)}_{a(i)}(u_l) = L^{(2)}_{e(i-1)}(u_l)$ and $L^{(2)}_{a(i)}(u_l) = L^{(1)}_{e(i)}(u_l)$, and letting $Q(u_l)$ and $P(u_l)$ represent the a posteriori probability distributions at the outputs of decoders 1 and 2, respectively, we can write

$$L^{(Q)}_{(i)}(u_l) = L_c r_l + L^{(P)}_{e(i-1)}(u_l) + L^{(Q)}_{e(i)}(u_l)$$

and

$$L^{(P)}_{(i)}(u_l) = L_c r_l + L^{(Q)}_{e(i)}(u_l) + L^{(P)}_{e(i)}(u_l).$$

- We can write the difference in the two soft outputs as

$$L^{(P)}_{(i)}(u_l) - L^{(Q)}_{(i)}(u_l) = L^{(P)}_{e(i)}(u_l) - L^{(P)}_{e(i-1)}(u_l) \triangleq \Delta L^{(P)}_{e(i)}(u_l);$$

that is, $\Delta L^{(P)}_{e(i)}(u_l)$ represents the difference in the extrinsic a posteriori L-values of decoder 2 in two successive iterations.

Turbo codes 98

- We now compute the CE of the a posteriori probability distributions $P(u_l)$ and $Q(u_l)$ as follows:

$$E_P\left\{\log\frac{P(u_l)}{Q(u_l)}\right\} = P(u_l = +1)\log\frac{P(u_l = +1)}{Q(u_l = +1)} + P(u_l = -1)\log\frac{P(u_l = -1)}{Q(u_l = -1)}$$

$$= \frac{e^{L^{(P)}_{(i)}(u_l)}}{1+e^{L^{(P)}_{(i)}(u_l)}}\log\left[\frac{e^{L^{(P)}_{(i)}(u_l)}}{1+e^{L^{(P)}_{(i)}(u_l)}}\cdot\frac{1+e^{L^{(Q)}_{(i)}(u_l)}}{e^{L^{(Q)}_{(i)}(u_l)}}\right] + \frac{e^{-L^{(P)}_{(i)}(u_l)}}{1+e^{-L^{(P)}_{(i)}(u_l)}}\log\left[\frac{e^{-L^{(P)}_{(i)}(u_l)}}{1+e^{-L^{(P)}_{(i)}(u_l)}}\cdot\frac{1+e^{-L^{(Q)}_{(i)}(u_l)}}{e^{-L^{(Q)}_{(i)}(u_l)}}\right],$$

where we have used expressions for the a posteriori distributions $P(u_l = \pm 1)$ and $Q(u_l = \pm 1)$ analogous to $P(u_l = \pm 1) = e^{\pm L_a(u_l)}/\{1+e^{\pm L_a(u_l)}\}$.
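The per-bit CE above can be evaluated directly from the two soft outputs. A Python sketch (illustrative names), with $P$ and $Q$ parameterized by their L-values via $P(u_l = \pm 1) = e^{\pm L}/(1+e^{\pm L})$:

```python
import math

def prob_plus(L):
    # P(u_l = +1) = e^L / (1 + e^L), written in a numerically stable form
    return 1.0 / (1.0 + math.exp(-L))

def bit_ce_from_L(LP, LQ):
    # E_P{log(P(u_l)/Q(u_l))} with P and Q parameterized by the
    # a posteriori L-values LP and LQ of decoders 2 and 1.
    p1, q1 = prob_plus(LP), prob_plus(LQ)
    return (p1 * math.log(p1 / q1)
            + (1.0 - p1) * math.log((1.0 - p1) / (1.0 - q1)))

print(bit_ce_from_L(2.0, 2.0))  # 0.0 when the two decoders agree
print(bit_ce_from_L(3.0, 2.0))  # grows with the soft-output mismatch
```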

Turbo codes 99

- The above equation can be simplified to

$$E_P\left\{\log\frac{P(u_l)}{Q(u_l)}\right\} = \frac{\Delta L^{(P)}_{e(i)}(u_l)}{1+e^{-L^{(P)}_{(i)}(u_l)}} + \log\frac{1+e^{L^{(Q)}_{(i)}(u_l)}}{1+e^{L^{(P)}_{(i)}(u_l)}}.$$

- The hard decisions after iteration $i$, $\hat{u}^{(i)}_l$, satisfy

$$\hat{u}^{(i)}_l = \mathrm{sgn}\left[L^{(P)}_{(i)}(u_l)\right] = \mathrm{sgn}\left[L^{(Q)}_{(i)}(u_l)\right].$$

Turbo codes 100

- Using the above equation and noting that

$$\left|L^{(P)}_{(i)}(u_l)\right| = \mathrm{sgn}\left[L^{(P)}_{(i)}(u_l)\right] L^{(P)}_{(i)}(u_l) = \hat{u}^{(i)}_l L^{(P)}_{(i)}(u_l)$$

and

$$\left|L^{(Q)}_{(i)}(u_l)\right| = \mathrm{sgn}\left[L^{(Q)}_{(i)}(u_l)\right] L^{(Q)}_{(i)}(u_l) = \hat{u}^{(i)}_l L^{(Q)}_{(i)}(u_l),$$

we can show that $E_P\left\{\log\frac{P(u_l)}{Q(u_l)}\right\}$ simplifies to

$$E_P\left\{\log\frac{P(u_l)}{Q(u_l)}\right\} = -\frac{\hat{u}^{(i)}_l \Delta L^{(P)}_{e(i)}(u_l)}{1+e^{\left|L^{(P)}_{(i)}(u_l)\right|}} + \log\frac{1+e^{-\left|L^{(Q)}_{(i)}(u_l)\right|}}{1+e^{-\left|L^{(P)}_{(i)}(u_l)\right|}}.$$

Turbo codes 101

We now use the facts that once decoding has converged, the magnitudes of the a posteriori L-values are large, that is, $|L^{(P)}_{(i)}(u_l)| \gg 0$ and $|L^{(Q)}_{(i)}(u_l)| \gg 0$, and that when $x$ is large, $e^{-x}$ is small, so $1+e^{-x} \approx 1$ and $\log(1+e^{-x}) \approx e^{-x}$.

- Applying these approximations to $E_P\left\{\log\frac{P(u_l)}{Q(u_l)}\right\}$, we can show that

$$E_P\left\{\log\frac{P(u_l)}{Q(u_l)}\right\} \approx e^{-\left|L^{(Q)}_{(i)}(u_l)\right|}\left[1 - e^{-\hat{u}^{(i)}_l \Delta L^{(P)}_{e(i)}(u_l)}\left(1 + \hat{u}^{(i)}_l \Delta L^{(P)}_{e(i)}(u_l)\right)\right].$$

Turbo codes 102

Noting that the magnitude of $\Delta L^{(P)}_{e(i)}(u_l)$ will be smaller than 1 when decoding converges, we can approximate the term $e^{-\hat{u}^{(i)}_l \Delta L^{(P)}_{e(i)}(u_l)}$ using the first two terms of its series expansion,

$$e^{-\hat{u}^{(i)}_l \Delta L^{(P)}_{e(i)}(u_l)} \approx 1 - \hat{u}^{(i)}_l \Delta L^{(P)}_{e(i)}(u_l),$$

which leads to the simplified expression

$$E_P\left\{\log\frac{P(u_l)}{Q(u_l)}\right\} \approx e^{-\left|L^{(Q)}_{(i)}(u_l)\right|}\left[1 - \left(1 - \hat{u}^{(i)}_l \Delta L^{(P)}_{e(i)}(u_l)\right)\left(1 + \hat{u}^{(i)}_l \Delta L^{(P)}_{e(i)}(u_l)\right)\right] = e^{-\left|L^{(Q)}_{(i)}(u_l)\right|}\left[\hat{u}^{(i)}_l \Delta L^{(P)}_{e(i)}(u_l)\right]^2 = \frac{\left|\Delta L^{(P)}_{e(i)}(u_l)\right|^2}{e^{\left|L^{(Q)}_{(i)}(u_l)\right|}}.$$

Turbo codes 103

We can write the CE of the probability distributions $P(\mathbf{u})$ and $Q(\mathbf{u})$ at iteration $i$ as

$$D^{(i)}(P\|Q) = E_P\left\{\log\frac{P(\mathbf{u})}{Q(\mathbf{u})}\right\} \approx \sum_{l=0}^{K-1} \frac{\left|\Delta L^{(P)}_{e(i)}(u_l)\right|^2}{e^{\left|L^{(Q)}_{(i)}(u_l)\right|}},$$

where we note that the statistical independence assumption does not hold exactly as the iterations proceed.

We next define

$$T(i) = \sum_{l=0}^{K-1} \frac{\left|\Delta L^{(P)}_{e(i)}(u_l)\right|^2}{e^{\left|L^{(Q)}_{(i)}(u_l)\right|}}$$

as the approximate value of the CE at iteration $i$. $T(i)$ can be computed after each iteration.
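$T(i)$ needs only quantities the decoder already produces: the change in decoder 2's extrinsic L-values and the magnitudes of its a posteriori L-values. A small Python sketch (array names are illustrative):

```python
import math

def T(delta_Le, L_Q):
    # T(i) = sum_l |dLe^(P)(u_l)|^2 / e^{|L^(Q)(u_l)|}
    return sum(d * d / math.exp(abs(L)) for d, L in zip(delta_Le, L_Q))

# After convergence the extrinsic updates d shrink and the a posteriori
# magnitudes |L| grow, so both factors drive T(i) toward zero.
print(T([0.5, -0.3, 0.1], [4.0, -5.0, 6.0]))
```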

Turbo codes 104

Experience with computer simulations has shown that once convergence is achieved, $T(i)$ drops by a factor of $10^{2}$ to $10^{4}$ compared with its initial value, and thus it is reasonable to use

$$T(i) < 10^{-3}\,T(1)$$

as a stopping rule for iterative decoding.
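Wrapped around the iteration loop, the rule looks as follows; `run_iteration` here is a purely hypothetical stand-in for one full pass of the two-decoder schedule that returns the current $T(i)$:

```python
def decode_with_ce_stop(run_iteration, max_iters=20, factor=1e-3):
    # Stop once T(i) < factor * T(1), or after max_iters iterations.
    T1 = run_iteration()  # iteration 1 fixes the reference value T(1)
    for i in range(2, max_iters + 1):
        Ti = run_iteration()
        if Ti < factor * T1:
            return i  # converged: CE has dropped by the required factor
    return max_iters

# Toy stand-in: T(i) shrinking roughly geometrically after convergence.
values = iter([1.0, 0.1, 0.01, 0.0005, 1e-6])
print(decode_with_ce_stop(lambda: next(values)))  # prints 4
```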

Turbo codes 105

2. Another approach to stopping the iterations in turbo decoding is to concatenate a high-rate outer cyclic code with an inner turbo code.

Figure: A concatenation of an outer cyclic code with an inner turbo code.

Turbo codes 106

After each iteration, the hard-decision output of the turbo decoder is used to check the syndrome of the cyclic code. If no errors are detected, decoding is assumed correct and the iterations are stopped.

It is important to choose an outer code with a low undetected error probability, so that iterative decoding is not stopped prematurely. For this reason it is usually advisable not to check the syndrome of the outer code during the first few iterations, when the probability of undetected error may be larger than the probability that the turbo decoder is error free.

Turbo codes 107

This method of stopping the iterations is particularly effective for large block lengths, since in this case the rate of the outer code can be made very high, thus resulting in a negligible overall rate loss.

For large block lengths, the foregoing idea can be extended to include outer codes, such as BCH codes, that can correct a small number of errors and still maintain a low undetected error probability. In this case, the iterations are stopped once the number of hard-decision errors at the output of the turbo decoder is within the error-correcting capability of the outer code.
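The syndrome-check stopping rule can be sketched as follows; the CRC used here (`binascii.crc32`) is only an illustrative stand-in for the outer cyclic code, and all function and variable names are hypothetical:

```python
import binascii

def syndrome_ok(hard_bits, check):
    # Recompute the outer-code check on the turbo decoder's hard
    # decisions; a match plays the role of an all-zero syndrome.
    return binascii.crc32(bytes(hard_bits)) == check

def decode_with_outer_check(outputs_per_iter, check, first_check=3, max_iters=20):
    # Skip the check during the first few iterations, when an undetected
    # error is more likely than an error-free turbo output.
    bits, i = None, 0
    for i, bits in enumerate(outputs_per_iter, start=1):
        if i >= first_check and syndrome_ok(bits, check):
            break
        if i == max_iters:
            break
    return bits, i

# Toy example: the decoder's hard decisions become correct at iteration 3.
tx = [1, 0, 1, 1, 0, 0, 1, 0]
check = binascii.crc32(bytes(tx))
outs = [[1, 1, 1, 1, 0, 0, 1, 0], [1, 0, 1, 1, 0, 1, 1, 0], tx, tx]
bits, iters = decode_with_outer_check(outs, check)
print(iters)  # prints 3
```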

Turbo codes 108

This method also provides a low word-error probability for the complete system; that is, the probability that the entire information block contains one or more decoding errors can be made very small.

Turbo codes 109

Q&A


More information

arxiv: v1 [math.co] 17 Dec 2018

arxiv: v1 [math.co] 17 Dec 2018 On the Extrema Maximum Agreement Subtree Probem arxiv:1812.06951v1 [math.o] 17 Dec 2018 Aexey Markin Department of omputer Science, Iowa State University, USA amarkin@iastate.edu Abstract Given two phyogenetic

More information

C. Fourier Sine Series Overview

C. Fourier Sine Series Overview 12 PHILIP D. LOEWEN C. Fourier Sine Series Overview Let some constant > be given. The symboic form of the FSS Eigenvaue probem combines an ordinary differentia equation (ODE) on the interva (, ) with a

More information

Paragraph Topic Classification

Paragraph Topic Classification Paragraph Topic Cassification Eugene Nho Graduate Schoo of Business Stanford University Stanford, CA 94305 enho@stanford.edu Edward Ng Department of Eectrica Engineering Stanford University Stanford, CA

More information

1. Measurements and error calculus

1. Measurements and error calculus EV 1 Measurements and error cacuus 11 Introduction The goa of this aboratory course is to introduce the notions of carrying out an experiment, acquiring and writing up the data, and finay anayzing the

More information

$, (2.1) n="# #. (2.2)

$, (2.1) n=# #. (2.2) Chapter. Eectrostatic II Notes: Most of the materia presented in this chapter is taken from Jackson, Chap.,, and 4, and Di Bartoo, Chap... Mathematica Considerations.. The Fourier series and the Fourier

More information

The distribution of the number of nodes in the relative interior of the typical I-segment in homogeneous planar anisotropic STIT Tessellations

The distribution of the number of nodes in the relative interior of the typical I-segment in homogeneous planar anisotropic STIT Tessellations Comment.Math.Univ.Caroin. 51,3(21) 53 512 53 The distribution of the number of nodes in the reative interior of the typica I-segment in homogeneous panar anisotropic STIT Tesseations Christoph Thäe Abstract.

More information

Space-time coding techniques with bit-interleaved coded. modulations for MIMO block-fading channels

Space-time coding techniques with bit-interleaved coded. modulations for MIMO block-fading channels Space-time coding techniques with bit-intereaved coded moduations for MIMO boc-fading channes Submitted to IEEE Trans. on Information Theory - January 2006 - Draft version 30/05/2006 Nicoas Gresset, Loïc

More information

A GENERAL METHOD FOR EVALUATING OUTAGE PROBABILITIES USING PADÉ APPROXIMATIONS

A GENERAL METHOD FOR EVALUATING OUTAGE PROBABILITIES USING PADÉ APPROXIMATIONS A GENERAL METHOD FOR EVALUATING OUTAGE PROBABILITIES USING PADÉ APPROXIMATIONS Jack W. Stokes, Microsoft Corporation One Microsoft Way, Redmond, WA 9852, jstokes@microsoft.com James A. Ritcey, University

More information

Rate-Distortion Theory of Finite Point Processes

Rate-Distortion Theory of Finite Point Processes Rate-Distortion Theory of Finite Point Processes Günther Koiander, Dominic Schuhmacher, and Franz Hawatsch, Feow, IEEE Abstract We study the compression of data in the case where the usefu information

More information

Combining reaction kinetics to the multi-phase Gibbs energy calculation

Combining reaction kinetics to the multi-phase Gibbs energy calculation 7 th European Symposium on Computer Aided Process Engineering ESCAPE7 V. Pesu and P.S. Agachi (Editors) 2007 Esevier B.V. A rights reserved. Combining reaction inetics to the muti-phase Gibbs energy cacuation

More information

2146 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 61, NO. 6, JUNE 2013

2146 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 61, NO. 6, JUNE 2013 246 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 6, NO. 6, JUNE 203 On Achieving an Asymptoticay Error-Free Fixed-Point of Iterative Decoding for Perfect APrioriInformation Jörg Kiewer, Senior Member, IEEE,

More information

School of Electrical Engineering, University of Bath, Claverton Down, Bath BA2 7AY

School of Electrical Engineering, University of Bath, Claverton Down, Bath BA2 7AY The ogic of Booean matrices C. R. Edwards Schoo of Eectrica Engineering, Universit of Bath, Caverton Down, Bath BA2 7AY A Booean matrix agebra is described which enabes man ogica functions to be manipuated

More information

A proposed nonparametric mixture density estimation using B-spline functions

A proposed nonparametric mixture density estimation using B-spline functions A proposed nonparametric mixture density estimation using B-spine functions Atizez Hadrich a,b, Mourad Zribi a, Afif Masmoudi b a Laboratoire d Informatique Signa et Image de a Côte d Opae (LISIC-EA 4491),

More information

The influence of temperature of photovoltaic modules on performance of solar power plant

The influence of temperature of photovoltaic modules on performance of solar power plant IOSR Journa of Engineering (IOSRJEN) ISSN (e): 2250-3021, ISSN (p): 2278-8719 Vo. 05, Issue 04 (Apri. 2015), V1 PP 09-15 www.iosrjen.org The infuence of temperature of photovotaic modues on performance

More information

A simple reliability block diagram method for safety integrity verification

A simple reliability block diagram method for safety integrity verification Reiabiity Engineering and System Safety 92 (2007) 1267 1273 www.esevier.com/ocate/ress A simpe reiabiity bock diagram method for safety integrity verification Haitao Guo, Xianhui Yang epartment of Automation,

More information

Turbo Codes. Manjunatha. P. Professor Dept. of ECE. June 29, J.N.N. College of Engineering, Shimoga.

Turbo Codes. Manjunatha. P. Professor Dept. of ECE. June 29, J.N.N. College of Engineering, Shimoga. Turbo Codes Manjunatha. P manjup.jnnce@gmail.com Professor Dept. of ECE J.N.N. College of Engineering, Shimoga June 29, 2013 [1, 2, 3, 4, 5, 6] Note: Slides are prepared to use in class room purpose, may

More information

Stochastic Complement Analysis of Multi-Server Threshold Queues. with Hysteresis. Abstract

Stochastic Complement Analysis of Multi-Server Threshold Queues. with Hysteresis. Abstract Stochastic Compement Anaysis of Muti-Server Threshod Queues with Hysteresis John C.S. Lui The Dept. of Computer Science & Engineering The Chinese University of Hong Kong Leana Goubchik Dept. of Computer

More information

State-of-the-Art Channel Coding

State-of-the-Art Channel Coding Institut für State-of-the-Art Channel Coding Prof. Dr.-Ing. Volker Kühn Institute of Communications Engineering University of Rostock, Germany Email: volker.kuehn@uni-rostock.de http://www.int.uni-rostock.de/

More information

FOURIER SERIES ON ANY INTERVAL

FOURIER SERIES ON ANY INTERVAL FOURIER SERIES ON ANY INTERVAL Overview We have spent considerabe time earning how to compute Fourier series for functions that have a period of 2p on the interva (-p,p). We have aso seen how Fourier series

More information

arxiv:math/ v2 [math.pr] 6 Mar 2005

arxiv:math/ v2 [math.pr] 6 Mar 2005 ASYMPTOTIC BEHAVIOR OF RANDOM HEAPS arxiv:math/0407286v2 [math.pr] 6 Mar 2005 J. BEN HOUGH Abstract. We consider a random wa W n on the ocay free group or equivaenty a signed random heap) with m generators

More information

Schedulability Analysis of Deferrable Scheduling Algorithms for Maintaining Real-Time Data Freshness

Schedulability Analysis of Deferrable Scheduling Algorithms for Maintaining Real-Time Data Freshness 1 Scheduabiity Anaysis of Deferrabe Scheduing Agorithms for Maintaining Rea- Data Freshness Song Han, Deji Chen, Ming Xiong, Kam-yiu Lam, Aoysius K. Mok, Krithi Ramamritham UT Austin, Emerson Process Management,

More information

Forty-Seventh Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 30 - October 2, 2009

Forty-Seventh Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 30 - October 2, 2009 Forty-Seventh Annua Aerton Conference Aerton House, UIUC, Iinois, USA September 30 - October 2, 2009 On systematic design of universay capacity approaching rate-compatibe sequences of LDPC code ensembes

More information

Random maps and attractors in random Boolean networks

Random maps and attractors in random Boolean networks LU TP 04-43 Rom maps attractors in rom Booean networks Björn Samuesson Car Troein Compex Systems Division, Department of Theoretica Physics Lund University, Sövegatan 4A, S-3 6 Lund, Sweden Dated: 005-05-07)

More information