7 K-D MAP. Tuesday, August 13, 2013. ECS452 7 Page 1


7 K-D MAP (Tuesday, August 13, 2013). [ECS452 7, Pages 1-7: handwritten notes; the sketches and annotations did not survive transcription.]

7 Probability Calculation involving Gaussian Noise (Tuesday, August 20, 2013). [ECS452 7, Pages 8-10: handwritten worked examples; not recoverable.]

Quiz Solution (Tuesday, August 20, 2013). [ECS452 7, Page 11: handwritten solution; not recoverable.]

7 Correlator Receiver (Thursday, August 22, 2013). [ECS452 7, Pages 12-13: handwritten notes; not recoverable.]

7 MAP Detector: The Proof (for interested reader) (Saturday, August 24, 2013). [ECS452 7, Pages 14-15: handwritten notes; not recoverable.]

8 Random Processes and White Noise

A random process is an infinite collection of random variables. These random variables are usually indexed by time, so the obvious notation for a random process is X(t). As in the signals-and-systems class, time can be discrete or continuous. When time is discrete, it may be more appropriate to use X_1, X_2, ... or X[1], X[2], X[3], ... to denote a random process.

Example 8.1. A sequence of results (0 or 1) from a sequence of Bernoulli trials is a discrete-time random process.

8.2. Two perspectives: (a) We can view a random process as a collection of many random variables indexed by t. (b) We can also view a random process as the outcome of a random experiment, where the outcome of each trial is a deterministic waveform (or sequence) that is a function of t. The collection of these functions is known as an ensemble, and each member is called a sample function.

Example 8.3. Gaussian random processes: a random process X(t) is Gaussian if for all positive integers n and for all t_1, t_2, ..., t_n, the random variables X(t_1), X(t_2), ..., X(t_n) are jointly Gaussian random variables.

8.4. A formal definition of a random process requires going back to the probability space (Ω, A, P). Recall that a random variable X is in fact a deterministic function of the outcome ω from Ω, so we should have been writing it as X(ω); however, as we get more familiar with the concept of a random variable, we usually drop the (ω) part and simply refer to it as X. For a random process, we have X(t, ω). This two-argument expression corresponds to the two perspectives just discussed: (a) when you fix the time t, you get a random variable from a random process; (b) when you fix ω, you get a deterministic function of time from a random process.

As we get more familiar with the concept of random processes, we again drop the ω argument.

Definition 8.5. A sample function x(t, ω) is the time function associated with the outcome ω of an experiment.

Example 8.6 (Randomly Scaled Sinusoid). Consider the random process defined by X(t) = A cos(1000t), where A is a random variable. For example, A could be a Bernoulli random variable with parameter p. This is a good model for a one-shot digital transmission via amplitude modulation.

(a) Consider the time t = 2 ms. X(t) is a random variable taking the value 1 × cos(2) = −0.4161 with probability p and the value 0 × cos(2) = 0 with probability 1 − p. If you consider t = 4 ms, X(t) is a random variable taking the value 1 × cos(4) = −0.6536 with probability p and the value 0 × cos(4) = 0 with probability 1 − p.

(b) From another perspective, we can look at the process X(t) as two possible waveforms, cos(1000t) and 0. The first one happens with probability p; the second one happens with probability 1 − p. In this view, notice that each of the waveforms is not random; they are deterministic. Randomness in this situation is associated not with the waveform but with the uncertainty as to which waveform will occur in a given trial.

Definition 8.7. At any particular time t, because we have a random variable, we can also find its expected value. The function m_X(t) captures these expected values as a deterministic function of time: m_X(t) = E[X(t)].
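Example 8.6 is easy to play with numerically. The sketch below is ours, not from the notes; the helper names and the choice p = 0.3 are illustrative. It draws sample values of X(t) = A cos(1000t) with A ~ Bernoulli(p) and checks the sample mean against m_X(t) = p cos(1000t):

```python
import math
import random

def sample_X(t, p=0.5, rng=random):
    """One draw of X(t) = A*cos(1000*t), with A ~ Bernoulli(p)."""
    A = 1 if rng.random() < p else 0
    return A * math.cos(1000 * t)

def mean_X(t, p=0.5):
    """m_X(t) = E[X(t)] = p*cos(1000*t)."""
    return p * math.cos(1000 * t)
```

At t = 2 ms the two possible values are cos(2) ≈ −0.4161 (with probability p) and 0 (with probability 1 − p), matching the example.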

Figure 9: Typical ensemble members for four random processes commonly encountered in communications: (a) thermal noise, (b) uniform phase, (c) Rayleigh fading process, and (d) binary random data process (encountered in data communication systems where it is not feasible to establish timing at the receiver; bits 0 and 1 are mapped to +V and −V volts). [3, Fig. 3.8]

The binary random data process can be written as

x(t) = V Σ_{k=−∞}^{∞} (2a_k − 1)[u(t − kT_b + ɛ) − u(t − (k + 1)T_b + ɛ)],   (3.46)

where a_k = 1 with probability p and 0 with probability (1 − p) (usually p = 1/2), T_b is the bit duration, and ɛ is uniform over [0, T_b).

Observe that two of these ensembles have member functions that look very deterministic, and one is quasi-deterministic, but for the last one even individual time functions look random. The point, however, is not whether any one member function looks deterministic or not; the issue when dealing with random processes is that we do not know for sure which member function we shall have to deal with.

8.1 Autocorrelation Function and WSS

One of the most important characteristics of a random process is its autocorrelation function, which leads to the spectral information of the random process. The frequency content of the process depends on the rapidity of the amplitude change with time. This can be measured by correlating the values of the process at two time instants, t_1 and t_2.

Definition 8.8. Autocorrelation function: the autocorrelation function R_X(t_1, t_2) for a random process X(t) is defined by

R_X(t_1, t_2) = E[X(t_1) X(t_2)].

Example 8.9. The random process x(t) is a slowly varying process compared to the process y(t) in Figure 10. For x(t), the values at t_1 and t_2 are similar, that is, they have stronger correlation. On the other hand, for y(t), the values at t_1 and t_2 have little resemblance, that is, they have weaker correlation.

Figure 10: Autocorrelation functions for a slowly varying and a rapidly varying random process [1, Fig. 11.4]

Example 8.10 (Randomly Phased Sinusoid). Consider a random process X(t) = 5 cos(7t + Θ), where Θ is a uniform random variable on the interval (0, 2π). Then

m_X(t) = E[X(t)] = ∫_0^{2π} 5 cos(7t + θ) f_Θ(θ) dθ = ∫_0^{2π} 5 cos(7t + θ) (1/(2π)) dθ = 0

and

R_X(t_1, t_2) = E[X(t_1) X(t_2)] = E[5 cos(7t_1 + Θ) × 5 cos(7t_2 + Θ)] = (25/2) cos(7(t_2 − t_1)).

Definition 8.11. A random process whose statistical characteristics do not change with time is classified as a stationary random process. For a stationary process, we can say that a shift of the time origin will be impossible to detect; the process will appear to be the same.

Example 8.12. The random process representing the temperature of a city is an example of a nonstationary process, because the temperature statistics (mean value, for example) depend on the time of day. On the other hand, the noise process is stationary, because its statistics (the mean and the mean square values, for example) do not change with time.

8.13. In general, it is not easy to determine whether a process is stationary. In practice, we can ascertain stationarity if there is no change in the signal-generating mechanism. Such is the case for the noise process. A process may not be stationary in the strict sense; a more relaxed condition for stationarity can also be considered.

Definition 8.14. A random process X(t) is wide-sense stationary (WSS) if (a) m_X(t) is a constant and (b) R_X(t_1, t_2) depends only on the time difference t_2 − t_1 and does not depend on the specific values of t_1 and t_2.
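A Monte Carlo check of the randomly phased sinusoid (our sketch, not from the notes; the sample size and time points are arbitrary): the sample mean of X(t) = 5 cos(7t + Θ) should be near 0, and the sample correlation near (25/2) cos(7(t_2 − t_1)).

```python
import math
import random

rng = random.Random(1)

def X(t, theta):
    # One sample function of the randomly phased sinusoid, evaluated at t
    return 5 * math.cos(7 * t + theta)

n = 200_000
thetas = [rng.uniform(0, 2 * math.pi) for _ in range(n)]

# Sample mean of X(t) at a fixed t: should be near m_X(t) = 0
t = 1.0
m_hat = sum(X(t, th) for th in thetas) / n

# Sample autocorrelation at (t1, t2): should be near (25/2) cos(7 (t2 - t1))
t1, t2 = 1.0, 1.3
R_hat = sum(X(t1, th) * X(t2, th) for th in thetas) / n
R_theory = 12.5 * math.cos(7 * (t2 - t1))
```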

In this case, we can write the autocorrelation function as R_X(τ), where τ = t_2 − t_1. One important consequence is that E[X²(t)] will be a constant as well.

Example 8.15. The random process defined in Example 8.10 is WSS with R_X(τ) = (25/2) cos(7τ).

8.16. Most information signals and noise sources encountered in communication systems are well modeled as WSS random processes.

Example 8.17. The white noise process is a WSS process N(t) for which (a) E[N(t)] = 0 for all t and (b) R_N(τ) = (N_0/2) δ(τ). See also Definition 8.24. Since R_N(τ) = 0 for τ ≠ 0, any two different samples of white noise, no matter how close in time they are taken, are uncorrelated.

8.18. Suppose N(t) is a white noise process. Define random variables N_i by N_i = ⟨N(t), g_i(t)⟩, where the g_i(t)'s are some deterministic functions. Then (a) E[N_i] = 0 and (b) E[N_i N_j] = (N_0/2) ⟨g_i(t), g_j(t)⟩.

Example 8.19 (Thermal noise). A statistical analysis of the random motion (by thermal agitation) of electrons shows that the autocorrelation of thermal noise N(t) is well modeled as

R_N(τ) = (kTG/t_0) e^{−|τ|/t_0} watts,

where k is Boltzmann's constant (k = 1.38 × 10⁻²³ joule/degree Kelvin), G is the conductance of the resistor (mhos), T is the (ambient) temperature in degrees Kelvin, and t_0 is the statistical average of the time intervals between collisions of free electrons in the resistor, which is on the order of 10⁻¹² seconds. [3, p. 152]

8.2 Power Spectral Density (PSD)

An electrical engineer instinctively thinks of signals and linear systems in terms of their frequency-domain descriptions. Linear systems are characterized by their frequency response (the transfer function), and signals are expressed in terms of the relative amplitudes and phases of their frequency components (the Fourier transform). From the knowledge of the input spectrum and transfer function, the response of a linear system to a given signal can be obtained in terms of the frequency content of that signal. This is an important procedure for deterministic signals. We may wonder if similar methods can be found for random processes. In the study of stochastic processes, the power spectral density function S_X(f) provides a frequency-domain representation of the time structure of X(t). Intuitively, S_X(f) is the expected value of the squared magnitude of the Fourier transform of a sample function of X(t).

You may recall that not all functions of time have Fourier transforms; for many functions that extend over infinite time, the Fourier transform does not exist. Sample functions x(t) of a stationary stochastic process X(t) are usually of this nature. To work with these functions in the frequency domain, we begin with X_T(t), a truncated version of X(t): it is identical to X(t) for −T ≤ t ≤ T and 0 elsewhere. We use F{X_T}(f) to represent the Fourier transform of X_T(t) evaluated at the frequency f.

Definition 8.20. Consider a WSS process X(t). The power spectral

density (PSD) is defined as

S_X(f) = lim_{T→∞} (1/(2T)) E[ |F{X_T}(f)|² ] = lim_{T→∞} (1/(2T)) E[ |∫_{−T}^{T} X(t) e^{−j2πft} dt|² ].

We refer to S_X(f) as a density function because it can be interpreted as the amount of power in X(t) in the small band of frequencies from f to f + df.

8.21 (Wiener-Khinchine theorem). The PSD of a WSS random process is the Fourier transform of its autocorrelation function:

S_X(f) = ∫_{−∞}^{+∞} R_X(τ) e^{−j2πfτ} dτ  and  R_X(τ) = ∫_{−∞}^{+∞} S_X(f) e^{j2πfτ} df.

One important consequence is

R_X(0) = E[X²(t)] = ∫_{−∞}^{+∞} S_X(f) df.

Example 8.22. For the thermal noise in Example 8.19, the corresponding PSD is

S_N(f) = 2kTG / (1 + (2πf t_0)²) watts/hertz.

8.23. Observe that the thermal noise's PSD in Example 8.22 is approximately flat over a frequency range of roughly 10 gigahertz. As far as a typical communication system is concerned, we might as well let the spectrum be flat from 0 to ∞, i.e.,

S_N(f) = N_0/2 watts/hertz,

where N_0 is a constant; in this case, N_0 = 4kTG.

Definition 8.24. Noise that has a uniform spectrum over the entire frequency range is referred to as white noise. In particular, for white noise, S_N(f) = N_0/2 watts/hertz.
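The Wiener-Khinchine pair can be checked numerically. The sketch below is ours, with the thermal-noise constants normalized so that kTG/t_0 = 1 and t_0 = 1; it integrates R_N(τ) = e^{−|τ|} against the Fourier kernel and compares with 2/(1 + (2πf t_0)²).

```python
import numpy as np

def psd_from_autocorr(R, f, tau):
    """S_X(f) = integral of R_X(tau) e^{-j 2 pi f tau} d tau, by the trapezoidal rule.
    R is even, so only the cosine (real) part contributes."""
    y = R(tau) * np.cos(2 * np.pi * f * tau)
    dt = tau[1] - tau[0]
    return dt * (y.sum() - 0.5 * y[0] - 0.5 * y[-1])

tau = np.linspace(-20, 20, 40001)          # fine grid; e^{-20} truncation is negligible
R = lambda t: np.exp(-np.abs(t))           # normalized thermal-noise autocorrelation
S0 = psd_from_autocorr(R, 0.0, tau)        # theory: 2/(1 + 0) = 2
S5 = psd_from_autocorr(R, 0.5, tau)        # theory: 2/(1 + pi^2)
```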


Figure 11: Fourier transforms of member functions of a random process; each time-domain sample function x_i(t, ω_i) of the ensemble is paired with its frequency-domain counterpart X_i(f, ω_i). For simplicity, only the magnitude spectra are shown. [3, Fig. 3.9]

What we have managed to accomplish thus far is to create the random variable P, which in some sense represents the power in the process. Now we find the average value of P, i.e.,

The factor 2 in the denominator is included to indicate that S_N(f) is a two-sided spectrum. The adjective "white" comes from white light, which contains equal amounts of all frequencies within the visible band of electromagnetic radiation. The average power of white noise is obviously infinite. (a) White noise is therefore an abstraction, since no physical noise process can truly be white. (b) Nonetheless, it is a useful abstraction. The noise encountered in many real systems can be assumed to be approximately white, because we can only observe such noise after it has passed through a real system, which has a finite bandwidth. Thus, as long as the bandwidth of the noise is significantly larger than that of the system, the noise can be considered to have an infinite bandwidth. As a rule of thumb, noise is well modeled as white when its PSD is flat over a frequency band that is 3-5 times that of the communication system under consideration. [3, p. 152]

Theorem 8.25. When we input X(t) to an LTI system whose frequency response is H(f), the PSD of the output Y(t) is given by

S_Y(f) = S_X(f) |H(f)|².
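The input-output PSD relation can be illustrated by simulation (our sketch; the two-tap averaging filter and all parameters are illustrative, not from the notes). Unit-variance white noise is passed through h = [1/2, 1/2], whose response satisfies |H(f)|² = cos²(πf) with f in cycles per sample, and the averaged periodogram of the output is compared with S_X(f)|H(f)|²:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 256, 1000
acc = np.zeros(N)
for _ in range(trials):
    x = rng.standard_normal(N + 1)          # white input, S_X(f) = 1
    y = 0.5 * (x[1:] + x[:-1])              # H(f) = 0.5 (1 + e^{-j 2 pi f})
    acc += np.abs(np.fft.fft(y)) ** 2 / N   # periodogram of one realization
S_est = acc / trials                        # averaged periodogram
k = np.arange(N)
S_theory = np.cos(np.pi * k / N) ** 2       # S_Y(f_k) = |H(f_k)|^2
```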

Figure 12: (a) The PSD S_N(f), and (b) the autocorrelation R_N(τ), of thermal noise, compared with white noise. [3, Fig. 3.11]

Finally, since the noise samples of white noise are uncorrelated, if the noise is both white and Gaussian (for example, thermal noise) then the noise samples are also independent.

Example 3.7 of [3]: Consider the lowpass LR filter (input x(t), inductor L, resistor R, output y(t)) given in [3, Fig. 3.12]. Suppose that a (WSS)

Digital Communication Systems (ECS 452)
Asst. Prof. Dr. Prapun Suksompong
Optimal Detection for Waveform Channels
Office Hours: Rangsit Library, Tuesday 16:00-17:00; BKD3601-7, Thursday 16:00-17:00

Modulator and Waveform Channel. Goal: we want to transmit the message (index) W ∈ {1, 2, 3, ..., M}. Prior probabilities: p_i = P[W = i] = P[S(t) = s_i(t)]. M possible messages require M possibilities for the transmitted waveform S(t): s_1(t), s_2(t), ..., s_M(t). The digital modulator maps W = i to S(t) = s_i(t); transmission of the message is done by inputting the corresponding waveform into the channel. Waveform channel: R(t) = S(t) + N(t), where the received waveform R(t) is the transmitted waveform plus additive white noise N(t) (independent of S(t)); the digital demodulator outputs Ŵ. Energy: E_i = ⟨s_i(t), s_i(t)⟩, E_s = Σ_{i=1}^{M} p_i E_i, E_b = E_s / log₂ M. M-ary scheme: M = 2: binary; M = 3: ternary; M = 4: quaternary.

Conversion to Vector Channels. Waveform channel: R(t) = S(t) + N(t). Vector channel: R = S + N. Use the GSOP to find K orthonormal basis functions φ_1(t), ..., φ_K(t) for the space spanned by s_1(t), ..., s_M(t). This gives vector representations s_1, s_2, ..., s_M of the waveforms, which can be visualized in the form of a signal constellation. The j-th component of the signal vector comes from the inner product S_j = ⟨S(t), φ_j(t)⟩. The received vector is computed in the same way: the j-th component is R_j = ⟨R(t), φ_j(t)⟩; likewise, the noise vector has components N_j = ⟨N(t), φ_j(t)⟩. Prior probabilities: p_i = P[W = i] = P[S(t) = s_i(t)] = P[S = s_i]. For an additive white Gaussian noise (AWGN) process N(t), the noise vector N = (N_1, ..., N_K) is zero-mean Gaussian with covariance (N_0/2) I_K, i.e., f_N(n) = (πN_0)^{−K/2} e^{−‖n‖²/N_0}.

Assume AWGN. Optimal Receiver (Detector): It can be shown that we do not lose optimality by considering the equivalent vector channel. The optimal detector is the MAP (maximum a posteriori) detector:

ŵ_MAP(r) = arg max_{i=1,...,M} p_i f_{R|S}(r|s_i)
 = arg min_{i=1,...,M} [ ‖r − s_i‖² − N_0 ln p_i ]
 = arg max_{i=1,...,M} [ ⟨r, s_i⟩ + (N_0/2) ln p_i − E_i/2 ]   (bias term: (N_0/2) ln p_i − E_i/2)
 = arg max_{i=1,...,M} [ ⟨r(t), s_i(t)⟩ + (N_0/2) ln p_i − E_i/2 ].

If we assume equiprobable messages, or use the ML detector instead, this reduces to minimum-distance detection: ŵ(r) = arg min_{i=1,...,M} d(r, s_i).
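The MAP rule on the vector channel is a one-liner. A minimal sketch (our function names, not from the notes) implementing ŵ(r) = arg min_i [ ‖r − s_i‖² − N_0 ln p_i ]:

```python
import numpy as np

def map_detect(r, S, priors, N0):
    """MAP detection over the AWGN vector channel:
    choose argmin_i ||r - s_i||^2 - N0 ln p_i (equivalent to the argmax form)."""
    r = np.asarray(r, dtype=float)
    metrics = [np.sum((r - np.asarray(s)) ** 2) - N0 * np.log(p)
               for s, p in zip(S, priors)]
    return int(np.argmin(metrics))
```

With equal priors the N_0 ln p_i terms cancel and this is exactly minimum-distance (ML) detection; with unequal priors, ties are broken toward the more likely signal.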

Four Implementations [Proakis and Salehi, Figures 4.2-6, 4.2-7, 4.2-8]

Binary PAM (1-D). Two signal points s_1 and s_2 a distance d apart, with priors p_1 and p_2 and noise variance σ² = N_0/2. The MAP detector is a threshold test on r: the optimal threshold r* is shifted from the midpoint of s_1 and s_2 by (σ²/d) ln(p_1/p_2) toward the less likely signal, and the error probability is the shaded Gaussian tail area under each conditional density f_{R|S}(r|s_i) beyond the threshold:

P(E) = p_1 Q(d/(2σ) + (σ/d) ln(p_1/p_2)) + p_2 Q(d/(2σ) − (σ/d) ln(p_1/p_2)).

Binary Signaling Schemes: use the symmetry in the Gaussian density; in particular, the noise is rotation-invariant. For two signal vectors at distance d = ‖s_1 − s_2‖ with priors p_1 and p_2 and σ² = N_0/2,

P(E) = p_1 Q(d/(2σ) + (σ/d) ln(p_1/p_2)) + p_2 Q(d/(2σ) − (σ/d) ln(p_1/p_2)).

Equiprobable Binary Signaling Schemes: assume p_1 = p_2 = 1/2, or assume that the ML detector is used instead of the MAP detector. Then

P(E) = Q(d/(2σ)) = Q(d/√(2N_0)).

Equiprobable Antipodal Signaling Scheme (including BPSK): s_2 = −s_1, E_s = E_b, d = 2√(E_b), so

P(E) = Q(√(2E_b/N_0)).

Equiprobable Binary Orthogonal Signaling Scheme: ⟨s_1, s_2⟩ = 0, E_s = E_b, d = √(2E_b), so

P(E) = Q(√(E_b/N_0)).
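The two closed-form error probabilities above are easy to tabulate (our sketch; Q is computed from the complementary error function):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P[N(0,1) > x]."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_antipodal(EbN0):
    """BPSK / binary antipodal: P(E) = Q(sqrt(2 Eb/N0)), EbN0 in linear units."""
    return Q(math.sqrt(2 * EbN0))

def pe_orthogonal(EbN0):
    """Binary orthogonal: P(E) = Q(sqrt(Eb/N0))."""
    return Q(math.sqrt(EbN0))
```

Note that pe_orthogonal(2x) equals pe_antipodal(x): orthogonal signaling needs twice the E_b/N_0 for the same error probability, the familiar 3 dB penalty.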

Standard Quaternary PAM (4-PAM): constellation {−3d/2, −d/2, d/2, 3d/2} with decision thresholds at −d, 0, d. With q = Q(d/(2σ)), the two inner points err with probability 2q and the two outer points with probability q, so

P(E) = (1/4) Σ_{i=1}^{4} P(E|W = i) = (3/2) q = (3/2) Q(√(4E_b/(5N_0))),

since E_s = 5d²/4 and E_b = E_s/log₂4 = 5d²/8. Note: the constellation could be shifted horizontally; however, the one centered at the origin uses minimum E_s and E_b.

Standard Rectangular Quaternary QAM (same for QPSK): points (±d/2, ±d/2); with q = Q(d/(2σ)) per dimension,

P(C|W = i) = (1 − q)², so P(E) = 1 − (1 − q)² = 2q − q², where q = Q(√(2E_b/N_0)).
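A Monte Carlo sanity check of the 4-PAM result (our sketch; the operating point E_b/N_0 = 4 and sample size are arbitrary choices): simulate nearest-level detection and compare with (3/2) Q(√(4E_b/(5N_0))).

```python
import numpy as np
from math import erfc, sqrt

def q_func(x):
    return 0.5 * erfc(x / sqrt(2))

rng = np.random.default_rng(7)
EbN0 = 4.0                             # Eb/N0 (linear), about 6 dB
Eb = 1.0
d = np.sqrt(1.6 * Eb)                  # levels ±d/2, ±3d/2 give Eb = 5 d^2 / 8
levels = d * np.array([-1.5, -0.5, 0.5, 1.5])
N0 = Eb / EbN0
sigma = np.sqrt(N0 / 2)

n = 400_000
tx = rng.integers(0, 4, n)
r = levels[tx] + sigma * rng.standard_normal(n)
det = np.argmin(np.abs(r[:, None] - levels[None, :]), axis=1)  # nearest level
ser_sim = np.mean(det != tx)
ser_theory = 1.5 * q_func(np.sqrt(0.8 * EbN0))
```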

Standard Rectangular 9-QAM (3 × 3 constellation, points {−d, 0, d}²): with q = Q(d/(2σ)), there are three cases: the four corner points have P(E|W = i) = 2q − q², the four edge points have P(E|W = i) = 3q − 2q², and the center point has P(E|W = i) = 4q − 4q². Averaging,

P(E) = (8/3) q − (16/9) q², where q = Q(√(3 log₂9 × E_b/(8N_0))),

since E_s = 4d²/3 and E_b = E_s/log₂9 = 4d²/(3 log₂9).

E_b/N_0 versus P(E) [plot]: curves for BPSK / binary antipodal / standard 2-PAM; binary orthogonal; standard 4-PAM; standard square 4-QAM; standard 9-, 16-, 25-, 36-, 49-, and 64-PAM; and standard square 9-, 16-, 25-, 36-, 49-, and 64-QAM.

M-ary ASK (PAM): Increasing M deteriorates the performance. For large M, the penalty for increasing the rate is 6 dB/bit: the distance between the curves corresponding to M and 2M is roughly 6 dB, i.e., an additional 6 dB/bit is required to maintain the same error probability. [Proakis and Salehi, 2007, Figure 4.3-2]

M-ary QAM (square constellations): For large M, the penalty for increasing the rate is only 3 dB/bit. [Proakis and Salehi, 2007, Figure 4.3-8]

M-ary PSK: No easy expression for the error probability when M > 2. [Curves of P(E) versus E_b/N_0 in dB for M = 2, 4, 8, 16, 32; Proakis and Salehi, 2007, Figure 4.3-5.] For large M, the penalty for increasing the rate is 6 dB/bit. [Proakis and Salehi, 2007]

Equal-energy orthogonal signaling: No easy expression when M > 2 (see [Proakis and Salehi, 2007] for an exact expression). In direct contrast with the performance characteristics of ASK, PSK, and QAM signaling, here, by increasing M, one can reduce the SNR per bit required to maintain the same probability of error. [Curves of P(E) versus E_b/N_0 in dB for M = 2, 4, 8, 16, 32, 64.]

9 Optimal Detector for Waveform Channel (Thursday, August 29, 2013). [ECS452 9, Pages 1-4: handwritten notes; the content did not survive transcription.]

10 Information Theoretic Quantities (Tuesday, September 10, 2013). [ECS452 10, Pages 1-6: handwritten notes; not recoverable.]

Quiz 4 Solution (Tuesday, September 17, 2013). [ECS452 10, Pages 7-8: handwritten solution; not recoverable.]

10 Information Channel Capacity (Thursday, September 19, 2013). [ECS452 10, Pages 9-11: handwritten notes; not recoverable.]

Quiz 5 Solution (Tuesday, September 24, 2013). [ECS452 10, Page 12: handwritten solution; not recoverable.]

Digital Communication Systems (ECS 452)
Asst. Prof. Dr. Prapun Suksompong
Information-Theoretic Quantities

Reference for this chapter: Elements of Information Theory by Thomas M. Cover and Joy A. Thomas, 2nd Edition (Wiley), Chapters 2, 7, and 8. The 1st Edition is available at the SIIT library: Q36 C.

Channel Model: The model considered here is a simplified version of what we have seen earlier in the course. In the next chapter, we will present how this model can be derived from the digital modulator-demodulator over the continuous-time AWGN channel. The channel input is denoted by a random variable X. The pmf p_X(x) is usually denoted simply by p(x) and usually expressed in the form of a row vector p; the support is often denoted by a script X. The channel output is denoted by a random variable Y. The pmf p_Y(y) is usually denoted simply by q(y) and usually expressed in the form of a row vector q; the support is often denoted by a script Y. The channel corrupts X in such a way that when the input is x, the output is randomly selected from the conditional pmf p_{Y|X}(y|x). This conditional pmf is usually denoted by Q(y|x) and usually expressed in the form of a probability transition matrix Q.

Information Channel Capacity: Consider a (discrete memoryless) channel whose transition matrix is Q(y|x). The information channel capacity of this channel is defined as

C = max_{p_X(x)} I(X; Y) = max_p I(p, Q),

where the maximum is taken over all possible input pmfs p_X(x). Remarks: In the next chapter, we shall give an operational definition of channel capacity as the highest rate in bits per channel use at which information can be sent with arbitrarily low probability of error. Shannon's theorem establishes that the information channel capacity is equal to the operational channel capacity; thus, we may drop the word "information" in most discussions of channel capacity.

Binary Symmetric Channel (BSC): X → Y with crossover probability 0.4, i.e., Q = [0.6, 0.4; 0.4, 0.6]. For an input pmf p = [p_0, 1 − p_0], the output pmf is q = pQ and

I(X; Y) = H(q) − H(0.4).

[Plot of I(X;Y) versus p_0.] The capacity of 0.029 bits is achieved by p = [0.5, 0.5].

Binary Asymmetric Channel: transition diagram with Q = [1 − p_a, p_a; p_b, 1 − p_b]; I(X;Y) is in general no longer maximized by a uniform input. [Plot of I(X;Y) versus the input probability.] In the example shown, capacity is achieved by the non-uniform input pmf p = [0.538, 0.462].
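For the BSC the maximization can be done in closed form: C = 1 − H(ε), achieved by the uniform input pmf. A small sketch (ours, not from the slides):

```python
from math import log2

def H2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(eps):
    """Capacity of a BSC with crossover eps: C = 1 - H2(eps),
    achieved by the uniform input pmf."""
    return 1.0 - H2(eps)
```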

Iterative Calculation of C: In general, there is no closed-form solution for the capacity. The maximum can be found by standard nonlinear optimization techniques. A famous iterative algorithm, called the Blahut-Arimoto algorithm, was developed by Arimoto and Blahut. Start with a guess input pmf p_0(x); for r ≥ 0, construct p_{r+1}(x) from p_r(x) according to the Blahut-Arimoto iterative prescription.

Berger plaque [photo slide].
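A sketch of the Blahut-Arimoto iteration (our implementation, under the standard form of the algorithm: multiply p_r(x) by 2^{D(x)}, where D(x) is the divergence between Q(·|x) and the current output pmf, then renormalize):

```python
import numpy as np

def blahut_arimoto(Q, iters=5000):
    """Capacity C = max_p I(p, Q) of a DMC with transition matrix Q[x, y] = Q(y|x)
    (rows sum to 1). Returns (C in bits, capacity-achieving input pmf)."""
    m = Q.shape[0]
    p = np.full(m, 1.0 / m)
    for _ in range(iters):
        q = p @ Q                                   # current output pmf q(y)
        qs = np.where(q > 0, q, 1.0)                # guard against log(0)
        # D[x] = sum_y Q(y|x) log2(Q(y|x)/q(y)), with 0 log 0 := 0
        D = np.where(Q > 0, Q * np.log2(np.where(Q > 0, Q, 1.0) / qs), 0.0).sum(axis=1)
        p = p * np.power(2.0, D)
        p /= p.sum()
    q = p @ Q
    qs = np.where(q > 0, q, 1.0)
    D = np.where(Q > 0, Q * np.log2(np.where(Q > 0, Q, 1.0) / qs), 0.0).sum(axis=1)
    return float(p @ D), p
```

The returned value p @ D equals I(X;Y) for the final input pmf, a lower bound on C that approaches C as the iteration count grows.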

Richard Blahut: former chair of the Electrical and Computer Engineering Department at the University of Illinois at Urbana-Champaign; best known for the Blahut-Arimoto algorithm (iterative calculation of C).

Raymond Yeung: BS, MEng, and PhD degrees in electrical engineering from Cornell University in 1984, 1985, and 1988, respectively.

Digital Communication Systems (ECS 452)
Asst. Prof. Dr. Prapun Suksompong
Channel Capacity

Reliable Communication: X → Q(y|x) → Y. Reliable communication means that an arbitrarily small error probability can be achieved. This seems to be an impossible goal: if the channel introduces errors, how can one correct them all? Any correction process is also subject to error, ad infinitum. Operational channel capacity: C = the maximum rate at which reliable communication over a channel is possible.

Coding or Encoding or Channel Encoding: introduce redundancy so that even if some of the information is lost or corrupted, it will still be possible to recover the message at the receiver.

Repetition Code (k = 1): The most obvious coding scheme is to repeat information. For example, to send a 1, we send 11111, and to send a 0, we send 00000. This scheme uses five symbols to send 1 bit, and therefore has a rate of 1/5 bit per symbol. If this code is used on a binary symmetric channel, the ML decoding rule (which is optimal when the 0s and 1s are equiprobable) is equivalent to taking the majority vote of each block of five received bits: if three or more bits are 1, we decode the block as a 1; otherwise, we decode it as 0. By using longer repetition codes, we can achieve an arbitrarily low probability of error. But the rate of the code also goes to zero with (larger) block length, so even though the code is simple, it is really not a very useful code.
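The majority-vote error probability of the length-n repetition code over a BSC is a binomial tail (our sketch):

```python
from math import comb

def rep_code_error(n, eps):
    """Majority-vote decoding error for an n-repetition code over BSC(eps), n odd:
    the decoder fails when more than n//2 of the n bits are flipped."""
    t = n // 2
    return sum(comb(n, k) * eps**k * (1 - eps)**(n - k) for k in range(t + 1, n + 1))
```

At ε = 0.1 the 5-repetition code brings the error probability from 0.1 down to about 0.0086, but at rate 1/5.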

Repetition Code over BSC: [plot of P(E) versus the crossover probability p for repetition codes of increasing length].

Parity Bit or Check Bit: In mathematics, parity refers to the evenness or oddness of an integer. Here, parity refers to the evenness or oddness of the number of 1s within a given set of bits. It can be calculated via an XOR sum of the bits, yielding 0 for even parity and 1 for odd parity. Ex. even parity: 11, 1111; odd parity: 111. A parity bit, or check bit, is a bit added to the end of the k information bits. There are two variants of parity bits: the even parity bit and the odd parity bit. Even parity bit: choose the n-th bit (n = k + 1) so that the number of 1s in the block is even, i.e.,

x_n = B_1 ⊕ B_2 ⊕ ⋯ ⊕ B_k.
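The even parity bit is just an XOR sum (our sketch):

```python
def even_parity_bit(bits):
    """Check bit x_{k+1} = b_1 XOR ... XOR b_k; makes the total number of 1s even."""
    p = 0
    for b in bits:
        p ^= b
    return p

def append_parity(bits):
    """Return the k information bits followed by the even parity bit."""
    return list(bits) + [even_parity_bit(bits)]
```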

Parity Bit or Check Bit: used as the simplest form of error-detecting code. It does not detect an even number of errors, and it does not give any information about how to correct the errors that occur. Generalization, parity-check codes: we can extend the idea of the parity check bit to allow for multiple parity check bits and to allow the parity checks to depend on various subsets of the information bits. The Hamming code is an example of a parity-check code.

NOISY CHANNEL CODING THEOREM [SHANNON, 1948]
1. Reliable communication over a (discrete memoryless) channel is possible if the communication rate R satisfies R < C, where C is the channel capacity. In particular, for any R < C, there exist codes (encoders and decoders) with sufficiently large n such that

P(E) = P(Ŵ ≠ W) ≤ e^{−n E(R)},

where E(R) is a positive function of R for R < C and is completely determined by the channel characteristics.
2. At rates higher than capacity, reliable communication is impossible.

NOISY CHANNEL CODING THEOREM: Expresses the limit to reliable communication. Provides a yardstick to measure the performance of communication systems: a system performing near capacity is a near-optimal system and does not have much room for improvement; on the other hand, a system operating far from this fundamental bound can be improved (mainly through coding techniques).

Shannon's nonconstructive proof: Shannon introduced a method of proof called random coding. Instead of looking for the best possible coding scheme and analyzing its performance, which is a difficult task, all possible coding schemes are considered, by generating the code randomly with an appropriate distribution, and the performance of the system is averaged over them. It is then proved that if R < C, the average error probability tends to zero. This proves that, as long as R < C, for any arbitrarily small (but still positive) probability of error, there exists at least one code (with sufficiently long block length n) that performs better than the specified probability of error.

Shannon's nonconstructive proof (continued): If we use the suggested scheme and generate a code at random, the code constructed is likely to be good for long block lengths, but it has no structure and is very difficult to decode. In addition to achieving low probabilities of error, useful codes should be simple, so that they can be encoded and decoded efficiently. Hence the theorem does not provide a practical coding scheme. Since Shannon's paper, a variety of techniques have been used to construct good error-correcting codes; the entire field of coding theory has been developed during this search. Turbo codes have come close to achieving capacity for Gaussian channels.

Deriving the Q Matrix

Probability Calculation: 1-D Noise. For N ~ N(0, σ²) and a < b,

P(a < N < b) = Q(a/σ) − Q(b/σ).

Let [a_i, b_i] be the decision region for s_i. Then

P(C|W = i) = P(a_i < R < b_i | S = s_i) = P(a_i − s_i < N < b_i − s_i) = Q((a_i − s_i)/σ) − Q((b_i − s_i)/σ),
P(E|W = i) = 1 − P(C|W = i),
P(Ŵ = j|W = i) = P(a_j < R < b_j | S = s_i) = Q((a_j − s_i)/σ) − Q((b_j − s_i)/σ).

Ex. Standard 3-PAM: constellation {−d, 0, d} on the basis function φ(t), with decision thresholds at ±d/2. Each transition probability P(Ŵ = j|W = i) is then a difference of two Q-function values of the form Q(d/(2σ)) or Q(3d/(2σ)).

Ex. Standard 3-PAM transition probabilities P(Ŵ = j|W = i): with q_1 = Q(d/(2σ)) and q_3 = Q(3d/(2σ)),

Q = [ 1 − q_1,  q_1 − q_3,  q_3
      q_1,      1 − 2q_1,   q_1
      q_3,      q_1 − q_3,  1 − q_1 ].

Probability Calculation: 2-D Noise. With i.i.d. N_1, N_2 ~ N(0, σ²) and a rectangular decision region [a, b] × [c, d] for s_j,

P(Ŵ = j|W = i) = P(a < R_1 < b, c < R_2 < d | S = s_i)
 = [Q((a − s_{i,1})/σ) − Q((b − s_{i,1})/σ)] × [Q((c − s_{i,2})/σ) − Q((d − s_{i,2})/σ)].
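The 3-PAM transition matrix can be assembled directly from q_1 = Q(d/(2σ)) and q_3 = Q(3d/(2σ)) (our sketch; the values of d and σ below are arbitrary):

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * erfc(x / sqrt(2))

def pam3_transition_matrix(d, sigma):
    """P(What = j | W = i) for the constellation {-d, 0, d} with midpoint thresholds."""
    q1 = Q(d / (2 * sigma))
    q3 = Q(3 * d / (2 * sigma))
    return np.array([[1 - q1, q1 - q3, q3],
                     [q1, 1 - 2 * q1, q1],
                     [q3, q1 - q3, 1 - q1]])
```

Each row is a conditional pmf, so the rows sum to 1.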

Ex. Standard QPSK (four points, neighbors a distance d apart): with q = Q(d/(2σ)),

P(Ŵ = 1|W = 1) = (1 − q)² = 1 − 2q + q²,
P(Ŵ = j|W = 1) = q(1 − q) for each of the two adjacent points, and
P(Ŵ = j|W = 1) = q² for the diagonally opposite point.

By symmetry, every row of the transition matrix is a permutation of [(1 − q)², q(1 − q), q(1 − q), q²].

MATLAB Calculation: Q_Matrix = Q_ML(SS, EbNdB, n); Capacity_BPSK_Example.

Ex. Capacity for BPSK: [plot of C versus E_b/N_0 in dB; simulated and theoretical curves].

I(X;Y) for continuous X and Y.

Discrete X and Y:
H(X) = −Σ_x p_X(x) log p_X(x)
H(Y|X) = −Σ_x p_X(x) Σ_y p_{Y|X}(y|x) log p_{Y|X}(y|x)
I(X;Y) = H(Y) − H(Y|X) = Σ_x Σ_y p_X(x) p_{Y|X}(y|x) log ( p_{Y|X}(y|x) / p_Y(y) )

Continuous X and Y:
h(X) = −∫ f_X(x) log f_X(x) dx
h(Y|X) = −∫ f_X(x) ∫ f_{Y|X}(y|x) log f_{Y|X}(y|x) dy dx
I(X;Y) = h(Y) − h(Y|X) = ∫∫ f_X(x) f_{Y|X}(y|x) log ( f_{Y|X}(y|x) / f_Y(y) ) dy dx

Capacity for the additive Gaussian noise channel: suppose Y = X + N, where the additive noise N is a zero-mean Gaussian RV with variance σ². Input power constraint: in addition, it is usually assumed that the channel input satisfies a power constraint of the form E[X²] ≤ P. Capacity:

C = (1/2) log₂(1 + P/σ²) [bits per transmission / bits per channel use].

The input pdf that achieves this capacity is a zero-mean Gaussian pdf with variance P.

Capacity for the AWGN Waveform Channel. Assume a band-limited channel: the channel has a given bandwidth W, and we can use only the frequencies in the range |f| ≤ W. Input power constraint: E[X²(t)] ≤ P. AWGN with PSD N_0/2. Capacity:

C = W log₂(1 + P/(N_0 W)) [bps].

This is the celebrated equation for the capacity of a band-limited AWGN channel with an input power constraint, derived by Shannon in 1948.
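Both capacity formulas fit in two lines (our sketch; units as noted in the comments):

```python
from math import log2

def c_gaussian(P, sigma2):
    """Discrete-time AWGN channel: C = 0.5 log2(1 + P/sigma^2), bits per channel use."""
    return 0.5 * log2(1 + P / sigma2)

def c_bandlimited(W, P, N0):
    """Band-limited AWGN waveform channel: C = W log2(1 + P/(N0 W)), bits per second."""
    return W * log2(1 + P / (N0 * W))
```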

Digital Communication Systems (ECS 452)
Asst. Prof. Dr. Prapun Suksompong
Channel Encoding

Block Codes: Work on blocks of k information bits; there are 2^k possible information blocks. Convert each k-bit block into an n-bit codeword: an (n, k) code, b → (encoder) → x. So, to transmit k information bits, the channel is used n times. Rate: R = k/n.

GF(2): The construction of the codes can be expressed in matrix form using the following definitions of addition and multiplication of bits: modulo-2 addition and modulo-2 multiplication, respectively. The operations are the same as the exclusive-or (XOR) operation and the AND operation, but we will call them addition and multiplication so that we can use a matrix formalism to define the code. The two-element set {0, 1} together with this definition of addition and multiplication is a number system called a finite field, or Galois field, and is denoted by the label GF(2).

Channel: b → (encoder) → x → (channel) → y. Again, to transmit k information bits, the channel is used n times:

y = x ⊕ e,

where e is the error pattern.

Error Detection: the determination of whether errors are present in a received word. An error pattern is undetectable if and only if it causes the received word to be a valid codeword other than the one that was transmitted. Given a transmitted codeword, there are M − 1 codewords other than it that may arrive at the receiver, and thus M − 1 undetectable error patterns.

Error Correction: In an FEC (forward error correction) system, when the decoder detects an error, the arithmetic or algebraic structure of the code is used to determine which of the valid codewords is most likely to have been sent, given the erroneous received word. It is possible for a detectable error pattern to cause the decoder to select a codeword other than the one that was actually transmitted; the decoder is then said to have committed a decoder error.

Weight and Distance: The weight of a codeword or error pattern is the number of nonzero coordinates in the codeword or error pattern, commonly written w(x). The Hamming distance between two n-bit blocks is the number of coordinates in which the two blocks differ. The minimum distance d_min of a block code is the minimum Hamming distance between all distinct pairs of codewords. A code with minimum distance d_min can detect all error patterns of weight less than or equal to d_min − 1, and correct all error patterns of weight less than or equal to ⌊(d_min − 1)/2⌋.

Linear Block Codes: Each codeword is generated from the message via a generator matrix G:

x = bG = Σ_{j=1}^{k} b_j g_j,

where g_1, ..., g_k are the rows of the k × n matrix G. Repetition code (n, 1): G = [1 1 ⋯ 1]. Single-parity-check code (n, n − 1): G = [I_k | 1^T], where 1^T is a column of k ones, i.e., x = (b_1, ..., b_k, b_1 ⊕ ⋯ ⊕ b_k).
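Weight, Hamming distance, and a brute-force d_min (our sketch; fine for small codes, since it checks all pairs of codewords):

```python
def weight(x):
    """Number of nonzero coordinates."""
    return sum(1 for b in x if b)

def hamming_dist(x, y):
    """Number of coordinates in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def d_min(codewords):
    """Minimum Hamming distance over all distinct pairs of codewords."""
    cw = list(codewords)
    return min(hamming_dist(a, b)
               for i, a in enumerate(cw) for b in cw[i + 1:])
```

For the (3,1) repetition code this gives d_min = 3, so it detects up to 2 errors and corrects 1; the (3,2) single-parity-check code has d_min = 2, so it detects 1 error and corrects none.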

Systematic Encoding

Codes constructed with distinct information bits and check bits in each codeword are called systematic codes. The message bits are visible in the codeword.

The generator matrix is of the form G = [P | I_k], where P is a k × (n − k) matrix of parity coefficients. Then

x = bG = (x_1, x_2, ..., x_{n−k}, b_1, b_2, ..., b_k),

so the first n − k coordinates are check bits and the last k coordinates are the message bits themselves.

Construct a parity-check matrix H = [I_{n−k} | P^T]. Key property:

GH^T = P + P = 0 (the k × (n − k) zero matrix),

so every codeword x satisfies xH^T = 0.

Syndrome Table Decoding

Syndrome vector: s = yH^T. Writing H with rows h_1, ..., h_{n−k} and columns d_1, ..., d_n, and using y = x + e with xH^T = 0, the syndrome depends only on the error pattern:

s = eH^T = Σ_{j=1}^{n} e_j d_j^T.

Decoding is performed by computing the syndrome of a received vector, looking up the corresponding error pattern, and subtracting the error pattern from the received word.
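The key property GH^T = 0 and the syndrome computation can be verified numerically. A sketch using a hypothetical (5,2) systematic code (the matrix P below is made up for illustration; helper names are not from the notes):

```python
def eye(m):
    return [[int(i == j) for j in range(m)] for i in range(m)]

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul_gf2(A, B):
    """Matrix product over GF(2) (sum reduced modulo 2)."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

# Hypothetical (5,2) systematic code: P is k x (n-k) = 2 x 3.
P = [[1, 0, 1],
     [1, 1, 0]]
k, r = 2, 3
G = [P[i] + eye(k)[i] for i in range(k)]             # G = [P | I_k]
H = [eye(r)[i] + transpose(P)[i] for i in range(r)]  # H = [I_{n-k} | P^T]

# Key property: G H^T = 0, so every codeword has zero syndrome.
print(matmul_gf2(G, transpose(H)))  # [[0, 0, 0], [0, 0, 0]]

# The syndrome of an error pattern is the sum of the columns of H where e has a 1.
print(matmul_gf2([[0, 0, 0, 1, 0]], transpose(H)))  # [[1, 0, 1]] = 4th column of H
```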

Hamming Codes

Named after Richard W. Hamming. The IEEE Richard W. Hamming Medal, named after him, is an award given annually by the Institute of Electrical and Electronics Engineers (IEEE) for "exceptional contributions to information sciences, systems and technology." Sponsored by Qualcomm, Inc.

Some recipients: Richard W. Hamming, Thomas M. Cover, David A. Huffman, Toby Berger.

Hamming codes are the simplest of a class of (algebraic) error-correcting codes that can correct one error in a block of bits.

Hamming codes: Parameters
Codeword length: n = 2^m − 1, where m is the number of parity bits.
Number of information bits: k = 2^m − 1 − m.
d_min = 3. Error-correcting capability: t = 1 error per block.

Construction of Hamming Codes

Here, we want a Hamming code whose codeword length is n = 2^m − 1.

Parity-check matrix H: construct an m × (2^m − 1) matrix whose columns consist of all nonzero binary m-tuples. The ordering of the columns is arbitrary. However, the next step is easy when the columns are arranged so that H = [I_m | P^T].

Generator matrix G: when H = [I_{n−k} | P^T], we have G = [P | I_k].

Example: (7,4) Hamming code

H = [ 1 0 0 0 1 1 1
      0 1 0 1 0 1 1
      0 0 1 1 1 0 1 ]

G = [ 0 1 1 1 0 0 0
      1 0 1 0 1 0 0
      1 1 0 0 0 1 0
      1 1 1 0 0 0 1 ]

Syndrome decoding table (s = eH^T):

Error pattern e       Syndrome s
(0,0,0,0,0,0,0)       (0,0,0)
(0,0,0,0,0,0,1)       (1,1,1)
(0,0,0,0,0,1,0)       (1,1,0)
(0,0,0,0,1,0,0)       (1,0,1)
(0,0,0,1,0,0,0)       (0,1,1)
(0,0,1,0,0,0,0)       (0,0,1)
(0,1,0,0,0,0,0)       (0,1,0)
(1,0,0,0,0,0,0)       (1,0,0)

Hamming Codes: Syndrome decoding table

Note that for an error pattern with a single one in the j-th coordinate position, the syndrome s = eH^T is the same as the j-th column of H (transposed). This is what makes the syndrome decoding table so easy to construct.

Hamming Codes: Decoding Algorithm
1. Compute the syndrome s = yH^T for the received word y. If s = 0, then go to step 4.
2. Determine the position j of the column of H that is the transpose of the syndrome.
3. Complement the j-th bit in the received word.
4. Output the resulting codeword and STOP.
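The four-step algorithm can be sketched in Python for the (7,4) code, using the same column ordering as in the example (helper names are illustrative):

```python
# Columns of H for the (7,4) Hamming code in the example.
H_cols = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 1),
          (1, 0, 1), (1, 1, 0), (1, 1, 1)]

def syndrome(y):
    """s = y H^T: XOR of the columns of H where y has a 1."""
    s = [0, 0, 0]
    for yj, col in zip(y, H_cols):
        if yj:
            s = [si ^ ci for si, ci in zip(s, col)]
    return tuple(s)

def decode(y):
    """Steps 1-4 of the decoding algorithm (corrects a single error)."""
    s = syndrome(y)                 # step 1
    y = list(y)
    if s != (0, 0, 0):
        j = H_cols.index(s)         # step 2: matching column of H
        y[j] ^= 1                   # step 3: complement bit j
    return y                        # step 4

x = [0, 1, 1, 1, 0, 0, 0]   # a codeword (first row of G)
y = x[:]; y[4] ^= 1          # single error in position 5
print(decode(y) == x)        # True
```

Any single-bit error produces a nonzero syndrome equal to one column of H, so the lookup in step 2 always succeeds.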

Digital Communication Systems ECS 452
Asst. Prof. Dr. Prapun Suksompong

Fading Channels

Office Hours:
Rangsit Library: Tuesday 16:00-17:00
BKD361-7: Thursday 16:00-17:00

Problems of Wireless Comm.

Impairment: multipath-induced fading. Fading = random fluctuation in signal level (to fade = to fluctuate randomly): the arrival of the transmitted signal at an intended receiver through differing angles and/or differing time delays and/or differing frequency (i.e., Doppler) shifts, due to the scattering of electromagnetic waves in the environment. Transmitted signals are received through multiple paths which usually add destructively. Consequently, the received signal power fluctuates in space (due to angle spread) and/or frequency (due to delay spread) and/or time (due to Doppler spread) through the random superposition of the impinging multipath components.

Resource constraints/scarcity: limited power (highly constrained transmit powers) and scarce frequency bandwidth (radio spectrum). Unlike wireline communications, in which capacity can be increased by adding infrastructure such as new optical fiber, wireless capacity increases have traditionally required increases in either the radio bandwidth or power, both of which are severely limited in most wireless systems.

Interference: information is transmitted not by a single source but by several (uncoordinated, bursty, and geographically separated) sources/users/applications.

Bad solution to improve BW efficiency

How to transmit more using the same amount of BW? A simple/naive approach that naturally comes to mind: use higher-order modulation schemes. Drawback: poor reliability. For the same level of transmit power, higher-order modulation schemes yield performance that is inferior to that of lower-order modulation schemes. In fact, even for small signal constellations, i.e., low-order modulation schemes (e.g., binary), the reliability of uncoded communications over wireless links is very poor in general. Multiantenna systems offer such a possibility.

Better Solutions

The single most effective technique to accomplish reliable communication over a wireless channel is diversity, which attempts to provide the receiver with independently faded copies of the transmitted signal with the hope that at least one of these replicas will be received correctly. Diversity may be realized in different ways, including frequency diversity, time (temporal) diversity, (transmit and/or receive) antenna diversity (spatial diversity), modulation diversity, etc. Channel coding may also be used to provide (a form of time) diversity for immunization against the impairments of the wireless channel. In the context of wireless communications, channel coding schemes are usually combined with interleaving to achieve time diversity in an efficient manner.

New View

While channel fading has traditionally been regarded as a source of unreliability that has to be mitigated, information theory and channel capacity analysis have suggested an opposite view: channel fading can instead be exploited.

Wireless Digital Comm. System

Transmitter (Tx): Encoder produces x. Receiver (Rx): Decoder observes y. For the wireless channel (SISO):

y = hx + n,

where h is the channel gain (channel (fading) coefficient) and n is the noise.

Probability Facts

Consider a complex-valued RV Z = X + jY, where X, Y are i.i.d. N(0, σ²).

Let R and Θ be the magnitude and phase of the RV above. Then
1. R and Θ are independent.
2. Θ is uniformly distributed on [0, 2π).
3. R has a Rayleigh pdf (read: "ray-lee"):
   f_R(r) = (r/σ²) e^{−r²/(2σ²)} for r ≥ 0, and 0 otherwise;
   F_R(r) = 1 − e^{−r²/(2σ²)} for r ≥ 0, and 0 otherwise.
4. E[R] = σ√(π/2) and Var[R] = (2 − π/2)σ².

John William Strutt, 3rd Baron Rayleigh. English physicist. Discovered argon (Nobel Prize). Discovered Rayleigh scattering, explaining why the sky is blue.

Complex-Valued Random Variables

A complex-valued random variable Z = X + jY has real-valued random variables X and Y as its real and imaginary parts. We define EZ = EX + jEY and Var Z = E[|Z − EZ|²] = Var X + Var Y.

Suppose Z = X + jY where X, Y are i.i.d. N(0, σ²). Writing z = x + jy,

f_Z(z) = f_{X,Y}(x, y) = (1/(2πσ²)) e^{−(x² + y²)/(2σ²)} = (1/(2πσ²)) e^{−|z|²/(2σ²)}.
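These facts can be spot-checked with a short Monte Carlo simulation; a sketch assuming σ = 1 and a fixed seed (stdlib only):

```python
import math
import random

random.seed(0)
sigma = 1.0
N = 100_000

# R = |X + jY| with X, Y i.i.d. N(0, sigma^2).
R = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
     for _ in range(N)]

mean_R = sum(R) / N  # should be close to sigma * sqrt(pi/2) ~ 1.2533

# Empirical CDF at r = 1 vs. F_R(1) = 1 - exp(-1/(2 sigma^2)) ~ 0.3935
emp_cdf_1 = sum(r <= 1.0 for r in R) / N
theory_cdf_1 = 1 - math.exp(-1.0 / (2 * sigma**2))
print(mean_R, emp_cdf_1, theory_cdf_1)
```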

Rayleigh Fading Channel

h: Re{h}, Im{h} are i.i.d. N(0, σ²). Usually normalized so that E[|h|²] = 1 (i.e., σ² = 1/2).
n: Re{n}, Im{n} are i.i.d. N(0, N₀/2).

Rayleigh fading is most applicable when there is no dominant propagation along a line of sight between the transmitter and receiver: there are many objects in the environment that scatter the radio signal before it arrives at the receiver. Ex. densely-built Manhattan. If there is a dominant line of sight, Rician fading may be more applicable.

Digital Communication Systems ECS 452
Asst. Prof. Dr. Prapun Suksompong
prapun@siit.tu.ac.th

Introduction to Multiple-Antenna Systems

Office Hours:
Rangsit Library: Tuesday 16:00-17:00
BKD361-7: Thursday 16:00-17:00
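A minimal sketch (illustrative parameter values, fixed seed) that draws Rayleigh channel gains and checks the normalization E[|h|²] = 1:

```python
import random

random.seed(2)

# Rayleigh fading gain h: Re h, Im h i.i.d. N(0, 1/2), so E|h|^2 = 1.
def rayleigh_gain():
    s = 0.5 ** 0.5
    return complex(random.gauss(0, s), random.gauss(0, s))

N = 200_000
avg_power = sum(abs(rayleigh_gain()) ** 2 for _ in range(N)) / N
print(avg_power)  # close to the normalization E|h|^2 = 1

# One use of the channel: y = h*x + n with noise n ~ CN(0, N0).
N0 = 0.1  # illustrative noise level
h = rayleigh_gain()
n = complex(random.gauss(0, (N0 / 2) ** 0.5), random.gauss(0, (N0 / 2) ** 0.5))
y = h * 1.0 + n   # transmit x = 1
```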

Multiantenna Systems

Since the 1990s, there has been enormous interest in multiantenna systems. Two types [Molisch, 2011, p. 445]:

Smart antenna systems: multiantenna elements at one link end only. Ex. Rx smart antennas. Signals from different elements are combined by an adaptive (intelligent) algorithm. The intelligence (smartness) is not in the antenna, but rather in the signal processing.

Multiple Input Multiple Output (MIMO) systems (pronounced "mee-moh" or "my-moh"): multiantenna elements at both link ends.

MIMO Channel Model (Multiple Input Multiple Output)

Encoder produces the vector x; the decoder observes the vector y. For the wireless channel (MIMO):

y = Hx + n,

where H is the channel matrix and n is the noise.

MIMO Channel Model

H is now a matrix. Its entries form an i.i.d. Gaussian collection with zero mean and independent real and imaginary parts, each with variance 1/2. Equivalently, each entry of H has uniform phase and Rayleigh magnitude.

y = Hx + n

h_{i,j} = complex channel gain from the j-th transmit antenna to the i-th receive antenna, so that

H = [ h_{1,1}    h_{1,2}    ...  h_{1,N_T}
      h_{2,1}    h_{2,2}    ...  h_{2,N_T}
      ...
      h_{N_R,1}  h_{N_R,2}  ...  h_{N_R,N_T} ]

is an N_R × N_T matrix.

From Impairment to Opportunity

Multipath scattering is commonly seen as an impairment to wireless communication. However, it can now also be seen as providing an opportunity to significantly improve the capacity and reliability of such systems. By using multiple antennas at the transmitter and receiver in a wireless system, the rich scattering channel can be exploited to create a multiplicity of parallel links over the same radio band, and thereby either to increase the rate of data transmission through (spatial) multiplexing (transmission of several data streams in parallel) or to improve system reliability through the increased antenna diversity. Moreover, we need not choose between multiplexing and diversity; rather, we can have both, subject to a fundamental tradeoff between the two.
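A sketch (hypothetical dimensions N_R = 2, N_T = 3, made-up noise level) that draws an i.i.d. Rayleigh channel matrix H and forms y = Hx + n:

```python
import random

random.seed(1)

def cn(var):
    """Sample CN(0, var): real/imag parts i.i.d. N(0, var/2)."""
    s = (var / 2) ** 0.5
    return complex(random.gauss(0, s), random.gauss(0, s))

N_R, N_T = 2, 3
H = [[cn(1.0) for _ in range(N_T)] for _ in range(N_R)]  # i.i.d. Rayleigh entries
x = [1 + 0j, -1 + 0j, 1 + 0j]                            # transmitted vector
n = [cn(0.1) for _ in range(N_R)]                        # noise vector

# y = Hx + n, computed component by component
y = [sum(H[i][j] * x[j] for j in range(N_T)) + n[i] for i in range(N_R)]
print(len(y))  # N_R = 2 received samples
```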

MIMO Benefits: Spatial Diversity

Mitigates fading. Realized by providing the receiver with multiple (ideally independent) copies of the transmitted signal in space, frequency, or time. With an increasing number of independent copies (the number of copies is often referred to as the diversity order), the probability that at least one of the copies is not experiencing a deep fade increases, thereby improving the quality and reliability of reception. A MIMO channel with N_T transmit antennas and N_R receive antennas potentially offers N_T × N_R independently fading links, and hence a spatial diversity order of N_T N_R. Improves reliability.

MIMO Benefits: Spatial Multiplexing

MIMO systems offer a linear increase in data rate through spatial multiplexing, i.e., transmitting multiple, independent data streams (not multiple copies as in obtaining spatial diversity) within the bandwidth of operation. Under suitable channel conditions, such as rich scattering in the environment, the receiver can separate the data streams. Furthermore, each data stream experiences at least the same channel quality that would be experienced by a SISO system, effectively enhancing the capacity by a multiplicative factor equal to the number of streams. In general, the number of data streams that can be reliably supported by a MIMO channel equals min{N_T, N_R}.

MIMO Benefits: Spatial Multiplexing

Transmit multiple independent data streams (spatial streams) on different antennas (SU-MIMO).

Problem: interference among transmitting antennas.
Solution: pre-process (pre-code) the transmitted signals.

MIMO Coding Schemes

To achieve the best spatial diversity: space-time trellis codes and space-time block codes.
To maximize the transmission rate: Bell Labs layered space-time (BLAST) coding schemes.

These two families of space-time codes represent two extremes in the sense that one achieves the best reliability and the other achieves the maximum transmission rate. Other space-time coding schemes that provide a trade-off between diversity and rate also exist.

Ex. Spatial Multiplexing

Precode the stream vector s into the transmitted vector x = As, so that y = HAs + n. If H can be decomposed as H = Q H̃ P^H (the superscript H denotes the conjugate transpose), with Q^H Q = P^H P = I, and we set A = P, then at the receiver we have

y = Q H̃ P^H A s + n = Q H̃ P^H P s + n = Q H̃ s + n.

Finally, we can form

r = Q^H y = Q^H Q H̃ s + Q^H n = H̃ s + ñ, where ñ = Q^H n.

The whole MIMO system can be reduced to r = H̃ s + ñ.

Q: Why is this better than our original y = Hx + n?
A: A clever decomposition can reduce the interference among the data streams.
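The interference-removal argument can be demonstrated numerically. A sketch using real 2×2 rotation matrices as Q and P (so the conjugate transpose reduces to the transpose); all numerical values are made up, and noise is omitted for clarity:

```python
import math

def rotation(theta):
    """2x2 rotation matrix: orthogonal, so its transpose is its inverse."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def T(A):
    return [list(row) for row in zip(*A)]

Q, P = rotation(0.7), rotation(-1.1)   # hypothetical orthogonal factors
Ht = [[2.0, 0.0], [0.0, 0.5]]          # diagonal "middle" matrix H-tilde
H = mul(mul(Q, Ht), T(P))              # H = Q Ht P^T

s = [[1.0], [-1.0]]                    # data streams
x = mul(P, s)                          # precoding: A = P
y = mul(H, x)                          # channel (noiseless for clarity)
r = mul(T(Q), y)                       # receiver processing

# r equals Ht s: the streams no longer interfere.
print([round(v[0], 6) for v in r])     # [2.0, -0.5]
```

With complex channel matrices, the transposes above would become conjugate transposes, but the cancellation Q^H Q = P^H P = I works identically.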

Ex. Spatial Multiplexing

The conventional scheme uses the SVD (singular value decomposition). Alternatively, we can use the GTD (generalized triangular decomposition) [Jiang et al., 2004, 2007].

SVD: H = U D V^H, where D is diagonal:

D = [ D_11  0     0
      0     D_22  0
      0     0     D_33 ]

GTD: H = Q R P^H, where R is upper triangular:

R = [ R_11  R_12  R_13
      0     R_22  R_23
      0     0     R_33 ]

With r = H̃ s + ñ:

SVD (H̃ = D):
r_1 = D_11 s_1 + ñ_1
r_2 = D_22 s_2 + ñ_2
r_3 = D_33 s_3 + ñ_3
The streams are completely separated.

GTD (H̃ = R):
r_1 = R_11 s_1 + R_12 s_2 + R_13 s_3 + ñ_1
r_2 = R_22 s_2 + R_23 s_3 + ñ_2
r_3 = R_33 s_3 + ñ_3
Can use successive cancellation: decode s_3 from r_3 first, subtract its contribution from r_2, decode s_2, and so on.
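The successive-cancellation step for the GTD case amounts to back-substitution; a sketch with a hypothetical upper-triangular R and noise omitted for clarity:

```python
# Upper-triangular "channel" from the GTD: r = R s (noise omitted for clarity).
R = [[1.0, 0.5, 0.2],
     [0.0, 2.0, 0.3],
     [0.0, 0.0, 0.4]]
s = [1.0, -1.0, 1.0]
r = [sum(R[i][j] * s[j] for j in range(3)) for i in range(3)]

# Successive cancellation: decode bottom-up, subtracting already-decoded streams.
s_hat = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    interference = sum(R[i][j] * s_hat[j] for j in range(i + 1, 3))
    s_hat[i] = (r[i] - interference) / R[i][i]

print([round(v, 6) for v in s_hat])  # [1.0, -1.0, 1.0]
```

With noise present, each decoded symbol would additionally be sliced to the nearest constellation point before being subtracted, and decoding errors could then propagate upward.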


More information

Communication Theory II

Communication Theory II Communication Theory II Lecture 15: Information Theory (cont d) Ahmed Elnakib, PhD Assistant Professor, Mansoura University, Egypt March 29 th, 2015 1 Example: Channel Capacity of BSC o Let then: o For

More information

Chapter 3 Linear Block Codes

Chapter 3 Linear Block Codes Wireless Information Transmission System Lab. Chapter 3 Linear Block Codes Institute of Communications Engineering National Sun Yat-sen University Outlines Introduction to linear block codes Syndrome and

More information

s o (t) = S(f)H(f; t)e j2πft df,

s o (t) = S(f)H(f; t)e j2πft df, Sample Problems for Midterm. The sample problems for the fourth and fifth quizzes as well as Example on Slide 8-37 and Example on Slides 8-39 4) will also be a key part of the second midterm.. For a causal)

More information

BASICS OF DETECTION AND ESTIMATION THEORY

BASICS OF DETECTION AND ESTIMATION THEORY BASICS OF DETECTION AND ESTIMATION THEORY 83050E/158 In this chapter we discuss how the transmitted symbols are detected optimally from a noisy received signal (observation). Based on these results, optimal

More information

Direct-Sequence Spread-Spectrum

Direct-Sequence Spread-Spectrum Chapter 3 Direct-Sequence Spread-Spectrum In this chapter we consider direct-sequence spread-spectrum systems. Unlike frequency-hopping, a direct-sequence signal occupies the entire bandwidth continuously.

More information

Chapter 7: Channel coding:convolutional codes

Chapter 7: Channel coding:convolutional codes Chapter 7: : Convolutional codes University of Limoges meghdadi@ensil.unilim.fr Reference : Digital communications by John Proakis; Wireless communication by Andreas Goldsmith Encoder representation Communication

More information

Entropies & Information Theory

Entropies & Information Theory Entropies & Information Theory LECTURE I Nilanjana Datta University of Cambridge,U.K. See lecture notes on: http://www.qi.damtp.cam.ac.uk/node/223 Quantum Information Theory Born out of Classical Information

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

Dr. Cathy Liu Dr. Michael Steinberger. A Brief Tour of FEC for Serial Link Systems

Dr. Cathy Liu Dr. Michael Steinberger. A Brief Tour of FEC for Serial Link Systems Prof. Shu Lin Dr. Cathy Liu Dr. Michael Steinberger U.C.Davis Avago SiSoft A Brief Tour of FEC for Serial Link Systems Outline Introduction Finite Fields and Vector Spaces Linear Block Codes Cyclic Codes

More information

Kevin Buckley a i. communication. information source. modulator. encoder. channel. encoder. information. demodulator decoder. C k.

Kevin Buckley a i. communication. information source. modulator. encoder. channel. encoder. information. demodulator decoder. C k. Kevin Buckley - -4 ECE877 Information Theory & Coding for Digital Communications Villanova University ECE Department Prof. Kevin M. Buckley Lecture Set Review of Digital Communications, Introduction to

More information

Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf

Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Reading: Ch. 5 in Kay-II. (Part of) Ch. III.B in Poor. EE 527, Detection and Estimation Theory, # 5c Detecting Parametric Signals in Noise

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

Power Spectral Density of Digital Modulation Schemes

Power Spectral Density of Digital Modulation Schemes Digital Communication, Continuation Course Power Spectral Density of Digital Modulation Schemes Mikael Olofsson Emil Björnson Department of Electrical Engineering ISY) Linköping University, SE-581 83 Linköping,

More information

Optimum Soft Decision Decoding of Linear Block Codes

Optimum Soft Decision Decoding of Linear Block Codes Optimum Soft Decision Decoding of Linear Block Codes {m i } Channel encoder C=(C n-1,,c 0 ) BPSK S(t) (n,k,d) linear modulator block code Optimal receiver AWGN Assume that [n,k,d] linear block code C is

More information

Lecture 4. Capacity of Fading Channels

Lecture 4. Capacity of Fading Channels 1 Lecture 4. Capacity of Fading Channels Capacity of AWGN Channels Capacity of Fading Channels Ergodic Capacity Outage Capacity Shannon and Information Theory Claude Elwood Shannon (April 3, 1916 February

More information

EE303: Communication Systems

EE303: Communication Systems EE303: Communication Systems Professor A. Manikas Chair of Communications and Array Processing Imperial College London Introductory Concepts Prof. A. Manikas (Imperial College) EE303: Introductory Concepts

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

This examination consists of 10 pages. Please check that you have a complete copy. Time: 2.5 hrs INSTRUCTIONS

This examination consists of 10 pages. Please check that you have a complete copy. Time: 2.5 hrs INSTRUCTIONS THE UNIVERSITY OF BRITISH COLUMBIA Department of Electrical and Computer Engineering EECE 564 Detection and Estimation of Signals in Noise Final Examination 08 December 2009 This examination consists of

More information

Single-User MIMO systems: Introduction, capacity results, and MIMO beamforming

Single-User MIMO systems: Introduction, capacity results, and MIMO beamforming Single-User MIMO systems: Introduction, capacity results, and MIMO beamforming Master Universitario en Ingeniería de Telecomunicación I. Santamaría Universidad de Cantabria Contents Introduction Multiplexing,

More information

Multi-Input Multi-Output Systems (MIMO) Channel Model for MIMO MIMO Decoding MIMO Gains Multi-User MIMO Systems

Multi-Input Multi-Output Systems (MIMO) Channel Model for MIMO MIMO Decoding MIMO Gains Multi-User MIMO Systems Multi-Input Multi-Output Systems (MIMO) Channel Model for MIMO MIMO Decoding MIMO Gains Multi-User MIMO Systems Multi-Input Multi-Output Systems (MIMO) Channel Model for MIMO MIMO Decoding MIMO Gains Multi-User

More information

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Jilei Hou, Paul H. Siegel and Laurence B. Milstein Department of Electrical and Computer Engineering

More information

TSKS01 Digital Communication Lecture 1

TSKS01 Digital Communication Lecture 1 TSKS01 Digital Communication Lecture 1 Introduction, Repetition, and Noise Modeling Emil Björnson Department of Electrical Engineering (ISY) Division of Communication Systems Emil Björnson Course Director

More information

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 Please submit the solutions on Gradescope. EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 1. Optimal codeword lengths. Although the codeword lengths of an optimal variable length code

More information

These outputs can be written in a more convenient form: with y(i) = Hc m (i) n(i) y(i) = (y(i); ; y K (i)) T ; c m (i) = (c m (i); ; c m K(i)) T and n

These outputs can be written in a more convenient form: with y(i) = Hc m (i) n(i) y(i) = (y(i); ; y K (i)) T ; c m (i) = (c m (i); ; c m K(i)) T and n Binary Codes for synchronous DS-CDMA Stefan Bruck, Ulrich Sorger Institute for Network- and Signal Theory Darmstadt University of Technology Merckstr. 25, 6428 Darmstadt, Germany Tel.: 49 65 629, Fax:

More information

Review of Doppler Spread The response to exp[2πift] is ĥ(f, t) exp[2πift]. ĥ(f, t) = β j exp[ 2πifτ j (t)] = exp[2πid j t 2πifτ o j ]

Review of Doppler Spread The response to exp[2πift] is ĥ(f, t) exp[2πift]. ĥ(f, t) = β j exp[ 2πifτ j (t)] = exp[2πid j t 2πifτ o j ] Review of Doppler Spread The response to exp[2πift] is ĥ(f, t) exp[2πift]. ĥ(f, t) = β exp[ 2πifτ (t)] = exp[2πid t 2πifτ o ] Define D = max D min D ; The fading at f is ĥ(f, t) = 1 T coh = 2D exp[2πi(d

More information

Lecture Notes 7 Stationary Random Processes. Strict-Sense and Wide-Sense Stationarity. Autocorrelation Function of a Stationary Process

Lecture Notes 7 Stationary Random Processes. Strict-Sense and Wide-Sense Stationarity. Autocorrelation Function of a Stationary Process Lecture Notes 7 Stationary Random Processes Strict-Sense and Wide-Sense Stationarity Autocorrelation Function of a Stationary Process Power Spectral Density Continuity and Integration of Random Processes

More information

UTA EE5362 PhD Diagnosis Exam (Spring 2011)

UTA EE5362 PhD Diagnosis Exam (Spring 2011) EE5362 Spring 2 PhD Diagnosis Exam ID: UTA EE5362 PhD Diagnosis Exam (Spring 2) Instructions: Verify that your exam contains pages (including the cover shee. Some space is provided for you to show your

More information

Constellation Shaping for Communication Channels with Quantized Outputs

Constellation Shaping for Communication Channels with Quantized Outputs Constellation Shaping for Communication Channels with Quantized Outputs Chandana Nannapaneni, Matthew C. Valenti, and Xingyu Xiang Lane Department of Computer Science and Electrical Engineering West Virginia

More information

Copyright license. Exchanging Information with the Stars. The goal. Some challenges

Copyright license. Exchanging Information with the Stars. The goal. Some challenges Copyright license Exchanging Information with the Stars David G Messerschmitt Department of Electrical Engineering and Computer Sciences University of California at Berkeley messer@eecs.berkeley.edu Talk

More information

Trellis Coded Modulation

Trellis Coded Modulation Trellis Coded Modulation Trellis coded modulation (TCM) is a marriage between codes that live on trellises and signal designs We have already seen that trellises are the preferred way to view convolutional

More information

LECTURE 16 AND 17. Digital signaling on frequency selective fading channels. Notes Prepared by: Abhishek Sood

LECTURE 16 AND 17. Digital signaling on frequency selective fading channels. Notes Prepared by: Abhishek Sood ECE559:WIRELESS COMMUNICATION TECHNOLOGIES LECTURE 16 AND 17 Digital signaling on frequency selective fading channels 1 OUTLINE Notes Prepared by: Abhishek Sood In section 2 we discuss the receiver design

More information

19. Channel coding: energy-per-bit, continuous-time channels

19. Channel coding: energy-per-bit, continuous-time channels 9. Channel coding: energy-per-bit, continuous-time channels 9. Energy per bit Consider the additive Gaussian noise channel: Y i = X i + Z i, Z i N ( 0, ). (9.) In the last lecture, we analyzed the maximum

More information

On the Low-SNR Capacity of Phase-Shift Keying with Hard-Decision Detection

On the Low-SNR Capacity of Phase-Shift Keying with Hard-Decision Detection On the Low-SNR Capacity of Phase-Shift Keying with Hard-Decision Detection ustafa Cenk Gursoy Department of Electrical Engineering University of Nebraska-Lincoln, Lincoln, NE 68588 Email: gursoy@engr.unl.edu

More information

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

18.2 Continuous Alphabet (discrete-time, memoryless) Channel 0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not

More information

a) Find the compact (i.e. smallest) basis set required to ensure sufficient statistics.

a) Find the compact (i.e. smallest) basis set required to ensure sufficient statistics. Digital Modulation and Coding Tutorial-1 1. Consider the signal set shown below in Fig.1 a) Find the compact (i.e. smallest) basis set required to ensure sufficient statistics. b) What is the minimum Euclidean

More information

Introduction to Probability and Stochastic Processes I

Introduction to Probability and Stochastic Processes I Introduction to Probability and Stochastic Processes I Lecture 3 Henrik Vie Christensen vie@control.auc.dk Department of Control Engineering Institute of Electronic Systems Aalborg University Denmark Slides

More information