INF 244
a) Textbook: Lin and Costello
b) Lectures (Tu+Th 12.15-14) covering roughly Chapters 1, 9-12, and 14-18
c) Weekly exercises: for your convenience
d) Mandatory problem: programming project (counts towards final mark)
e) Oral exam some time in late November or early December
Coding theory: Applications
Areas:
- Data transmission: modems, mobile phones, satellite connections, deep-space communication, microwave links, power-line communication, optical fibers, and so on
- Data storage: CDs, DVDs, hard disks, RAM, and so on
- Others: e.g. Norwegian personal ID numbers (personnummer), ISBN, credit cards, and bank account numbers
Advantages:
- Increased reliability
- Reduced transmit power (SNR) for a given reliability level
- Increased data rate / storage density
System model
A sender transmits information across a noisy channel to a receiver:
Sender -> Source encoder -> ECC encoder -> Channel (corrupted by noise!) -> ECC decoder -> Source decoder -> Receiver
INF 244 focuses on the ECC encoder and ECC decoder.
Code types
Block codes:
- Input block u of k bits; codeword v of n bits, with v = f(u)
- Received word r = v + n (n = channel noise)
- u -> ECC encoder -> Channel -> ECC decoder
Convolutional codes:
- Input blocks u_i of k bits; output blocks v_i of n bits, with v_i = f(u_i, u_{i-1}, ..., u_{i-m})
- Received blocks r_i = v_i + n_i
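The two encoder types above can be sketched in code. This is an illustrative toy only, not course material: a [3,1] repetition block code (each output block depends only on the current input), and a rate-1/2 convolutional encoder with memory m = 2 and arbitrarily chosen generators 1+D+D^2 and 1+D^2.

```python
def block_encode(u):
    """[3,1] repetition block code: each info bit maps to a 3-bit codeword,
    depending only on the current input block (m = 0)."""
    return [b for bit in u for b in (bit, bit, bit)]

def conv_encode(u):
    """Rate-1/2 convolutional code with memory m = 2: each output block
    v_i depends on u_i, u_{i-1}, u_{i-2}."""
    state = [0, 0]                             # u_{i-1}, u_{i-2}
    out = []
    for bit in u:
        out.append(bit ^ state[0] ^ state[1])  # generator 1 + D + D^2
        out.append(bit ^ state[1])             # generator 1 + D^2
        state = [bit, state[0]]
    return out

print(block_encode([1, 0, 1]))   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
print(conv_encode([1, 0, 1]))    # [1, 1, 1, 0, 0, 0]
```

Note how the convolutional output for a given input bit changes with the encoder state, while the block encoder is memoryless.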
Block codes vs. convolutional codes
Any code is limited by:
- Code rate
- Error probability in a given channel
- Decoding complexity
Block codes (classical approach): emphasis on algebraic structure
Convolutional codes: emphasis on decoding complexity
Which is best? A matter of discussion, but:
- A block code is a convolutional code with m = 0
- A convolutional code with finite input length is a block code
Error detection and correction
a) Error-detecting codes: automatic repeat request (ARQ). If a frame error is detected: ask for retransmission
b) Error-correcting codes. If a frame error is detected: instead of asking for retransmission, try to estimate what was originally sent
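The ARQ idea in a) can be sketched as a toy loop. This is purely illustrative (the channel model, the flip probability, and the even-parity frame check are assumptions, not from the slides): the receiver keeps requesting retransmission until a frame passes its check.

```python
import random

def send_with_arq(frame, p=0.3, rng=random.Random(0)):
    """Retransmit until the received frame passes an even-parity check.
    Each bit is flipped independently with probability p (toy BSC)."""
    attempts = 0
    while True:
        attempts += 1
        received = [b ^ (rng.random() < p) for b in frame]
        if sum(received) % 2 == 0:      # parity check passed: accept frame
            return received, attempts

frame = [1, 0, 1, 0]                    # already has even parity
rx, n = send_with_arq(frame)
print(n)                                # number of transmissions needed
```

Note the scheme's limitation, also visible here: an even number of flips passes the check undetected, which is why stronger detecting codes are of interest.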
Coding theory: Error detection
The sender wants to transmit 1101. Append one parity bit so that the codeword contains an even number of 1s: 1101 -> 11011.
[Tables: the 16 four-bit messages and the corresponding 16 five-bit even-parity codewords. Any single bit error makes the number of 1s odd, so the receiver detects it.]
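The single-parity-bit scheme above can be sketched in a few lines (an illustrative sketch, not course code): append one bit to make the number of 1s even, and detect any single flipped bit.

```python
def add_parity(bits):
    """Append one bit so the codeword has an even number of 1s."""
    return bits + [sum(bits) % 2]

def check_parity(word):
    """True if no error is detected (even number of 1s)."""
    return sum(word) % 2 == 0

sent = add_parity([1, 1, 0, 1])      # -> [1, 1, 0, 1, 1]
received = sent.copy()
received[2] ^= 1                     # the channel flips one bit
print(check_parity(sent), check_parity(received))   # True False
```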
Coding theory: Error detection
Advantages: simple!
Extensions/generalizations: schemes that detect more than one error?
Disadvantages and conditions:
- Requires a return channel
- Extra delay
- Return channel may not exist
Error correction: Hamming codes
The sender encodes the message bits a b c d = 1 1 0 1 and appends parity bits e = 1, f = 0, g = 0, chosen so that each circle of the Venn diagram contains an even number of 1s.
[Figure: three overlapping circles holding the bits a-g; the receiver restores even parity in each circle to locate a single error.]
Claim: Can correct one error or detect up to 2 errors
Error correction: Hamming codes
Parity-check matrix H (columns ordered a b c d e f g):
( 1 1 0 1 1 0 0 )
( 1 0 1 1 0 1 0 )
( 0 1 1 1 0 0 1 )
The 16 codewords:
0000000  1000110  0100101  1100011
0010011  1010101  0110110  1110000
0001111  1001001  0101010  1101100
0011100  1011010  0111001  1111111
Generator matrix G, satisfying G H^T = 0:
( 1 0 0 0 1 1 0 )
( 0 1 0 0 1 0 1 )
( 0 0 1 0 0 1 1 )
( 0 0 0 1 1 1 1 )
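Encoding with G and single-error correction with H can be sketched as follows (an illustrative sketch using the matrices on this slide; syndrome decoding works because all seven columns of H are distinct and nonzero, so the syndrome of a single error equals the column at the error position).

```python
H = [[1,1,0,1,1,0,0],
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]

def encode(u):
    """v = u G over GF(2)."""
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def decode(r):
    """Compute the syndrome s = H r^T; if nonzero, flip the position
    whose column of H equals s."""
    s = [sum(H[i][j] * r[j] for j in range(7)) % 2 for i in range(3)]
    if any(s):
        cols = [[H[i][j] for i in range(3)] for j in range(7)]
        r = r.copy()
        r[cols.index(s)] ^= 1
    return r

v = encode([1, 1, 0, 1])     # -> [1, 1, 0, 1, 1, 0, 0]
r = v.copy()
r[5] ^= 1                    # single channel error
print(decode(r) == v)        # True
```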
Error correction
Coding: k information bits in, n code bits out. Decoding: estimate the transmitted codeword from the received word.
Rate R = k/n and minimum distance d_min between codewords:
- Can correct t = floor((d_min - 1)/2) errors
Problems:
- How to find good codes (high rate, large distance)? Construction; bounds for what can be achieved
- How to find efficient decoding algorithms? Exploit the mathematical structure of codes
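For a linear code, d_min equals the minimum Hamming weight over the nonzero codewords, so it can be found by enumeration for small codes. A sketch (illustrative only), checking the [7,4] Hamming code of the previous slides:

```python
from itertools import product

G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]

# Enumerate all 2^4 messages and encode each one over GF(2).
codewords = [tuple(sum(u[i] * G[i][j] for i in range(4)) % 2
                   for j in range(7))
             for u in product([0, 1], repeat=4)]

# For a linear code: d_min = minimum weight of a nonzero codeword.
d_min = min(sum(c) for c in codewords if any(c))
t = (d_min - 1) // 2
print(d_min, t)   # 3 1  -> corrects floor((3-1)/2) = 1 error
```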
Channel model, more detail
Sometimes it is convenient to consider modulation as part of the coding process:
ECC encoder -> Modulator -> Analog channel -> Demodulator -> ECC decoder
The modulator output, the channel, and the demodulator input are continuous-time processes.
Modulation and coding
Binary phase-shift keying (BPSK) modulation:
- 1 corresponds to s_1(t) = K cos(2π f_0 t), 0 ≤ t ≤ T
- 0 corresponds to s_0(t) = K cos(2π f_0 t + π) = -s_1(t), 0 ≤ t ≤ T
where T = duration of one signal, f_0 = 1/T, K = sqrt(2 E_s / T), and E_s is the energy dedicated to sending one signal
General M-ary PSK (QPSK, 8-PSK, etc.): M input symbols; symbol j corresponds to s_j(t) = K cos(2π f_0 t + φ_j), 0 ≤ t ≤ T
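In discrete (matched-filter) form, the BPSK mapping above reduces to antipodal amplitudes: with K = sqrt(2 E_s / T), signal s_1 carries energy E_s, so bit 1 maps to +sqrt(E_s) and bit 0 to -sqrt(E_s). A minimal sketch of this mapping (illustrative, not course code):

```python
import math

def bpsk(bits, Es=1.0):
    """Discrete baseband BPSK: 1 -> +sqrt(Es), 0 -> -sqrt(Es).
    The phase shift of pi in s_0(t) becomes a sign flip."""
    a = math.sqrt(Es)
    return [a if b == 1 else -a for b in bits]

print(bpsk([1, 0, 1, 1]))   # [1.0, -1.0, 1.0, 1.0]
```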
Sampling and noise
Communication is a continuous-time process: r(t) = s(t) + n(t) (considering additive noise only)
The noise n(t) is often a white Gaussian random process: AWGN
r(t), s(t), n(t) are continuous-time processes, where pieces of duration T correspond to individually transmitted signals
A matched filter and a sampler produce discrete-time outputs
y_i = ∫ from iT to (i+1)T of r(t) K cos(2π f_0 t) dt (assuming BPSK modulation)
The received sequence that can be fed to the decoder is a sequence of real numbers ..., y_{i-1}, y_i, y_{i+1}, ...
In practical implementations, the real numbers are often quantized to a discrete number of levels
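The discrete-time channel that the matched filter produces can be simulated directly: with BPSK, y_i = s_i + n_i, where s_i = ±sqrt(E_s) and n_i is Gaussian. This is an illustrative model consistent with the slide (the variance N_0/2 is the standard AWGN convention), not course code:

```python
import math
import random

def awgn_channel(bits, Es=1.0, N0=0.5, rng=random.Random(1)):
    """Discrete-time AWGN channel after matched filtering:
    y_i = +/- sqrt(Es) + n_i, with n_i ~ N(0, N0/2)."""
    a = math.sqrt(Es)
    sigma = math.sqrt(N0 / 2)
    return [(a if b else -a) + rng.gauss(0, sigma) for b in bits]

y = awgn_channel([1, 0, 1, 1, 0])
hard = [1 if yi > 0 else 0 for yi in y]   # Q = 2 hard decisions
print(y)
print(hard)
```

The real-valued sequence y is what soft-decision decoders consume; thresholding it to `hard` is the Q = 2 quantization discussed next.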
More on quantization
a) Assume M-ary input. Quantizing to a Q-ary output, where the output signal is independent of previously transmitted signals and the noise affecting them, gives a discrete memoryless channel (DMC)
b) Q = 2: With symmetric quantization this gives the binary symmetric channel (BSC) with bit error transition probability p = Q(sqrt(2 E_s / N_0)) ≤ (1/2) e^(-E_s / N_0), where Q(·) is the Gaussian tail function
Hard-decision decoding: simple processing, suitable for fast, algebraic decoding algorithms (INF 243)
Q > 2: Soft-decision decoding
Q = 3: With symmetric quantization this gives the binary symmetric erasure channel (BSEC)
Larger Q: softer decisions
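The BSC crossover probability above can be evaluated with the complementary error function, since Q(x) = (1/2) erfc(x / sqrt(2)) and hence Q(sqrt(2 E_s / N_0)) = (1/2) erfc(sqrt(E_s / N_0)). A small sketch comparing the exact value with the exponential bound (illustrative only):

```python
import math

def bsc_p(Es_over_N0):
    """Exact BSC crossover probability p = Q(sqrt(2*Es/N0))
    via the identity Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(math.sqrt(Es_over_N0))

for x in [1.0, 2.0, 4.0]:
    # exact p vs. the bound (1/2) * exp(-Es/N0) from the slide
    print(x, bsc_p(x), 0.5 * math.exp(-x))
```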
Channel models
- Random-error or memoryless channels: examples include the deep-space channel, many satellite channels, and most line-of-sight channels
- Burst-error channels or channels with memory: examples include magnetic recording channels and mobile telephony channels. A popular channel model for a multipath environment is the (frequency-selective) fading channel
- Compound channels
When designing codes, the channel model should be clear!
Transmission rate
Symbol transmission rate = 1/T
k information bits are fed into the encoder, producing n output bits; code rate R = k/n
Bandwidth W [Hertz]: 2W ≥ 1/T (Nyquist condition for zero inter-symbol interference)
Information transmission rate = R/T ≤ 2RW
Coded system: R < 1
- Requires bandwidth expansion to maintain the data rate
- If extra bandwidth is not available and the data rate must be maintained, non-binary coding must be used: combined coding and modulation
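The rate relations above can be checked with a small worked example (the numbers are illustrative choices, not from the slides):

```python
T = 1e-6            # 1 microsecond per symbol -> 10^6 symbols/s
R = 0.5             # rate-1/2 code
W = 1 / (2 * T)     # minimum (Nyquist) bandwidth: 2W = 1/T
info_rate = R / T   # information bits per second

print(W, info_rate)
# With R = 1/2, only half of each transmitted symbol carries information.
# To keep the uncoded data rate of 1/T bits/s, the symbol rate, and hence
# the bandwidth, must double: this is the bandwidth expansion above.
```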