Modulation & Coding for the Gaussian Channel
1 Modulation & Coding for the Gaussian Channel. Trivandrum School on Communication, Coding & Networking, January 27-30, 2017. Lakshmi Prasad Natarajan, Dept. of Electrical Engineering, Indian Institute of Technology Hyderabad. 1 / 54
2 Digital Communication
Convey a message from transmitter to receiver in a finite amount of time, where the message can assume only finitely many values.
- "Time" can be replaced with any resource: space available in a compact disc, number of cells in flash memory.
Picture courtesy brigetteheffernan.wordpress.com 2 / 54
3 The Additive Noise Channel
- Message m takes finitely many, say M, distinct values. Usually, but not always, M = 2^k for some integer k. Assume m is uniformly distributed over {1, ..., M}.
- Time duration T: the transmit signal s(t) is restricted to 0 <= t <= T.
- Number of message bits k = log2 M (not always an integer).
3 / 54
4 Modulation Scheme
- The transmitter & receiver agree upon a set of waveforms {s_1(t), ..., s_M(t)} of duration T.
- The transmitter uses the waveform s_i(t) for the message m = i.
- The receiver must guess the value of m given r(t).
- We say that a decoding error occurs if the guess m̂ ≠ m.
Definition: An M-ary modulation scheme is simply a set of M waveforms {s_1(t), ..., s_M(t)}, each of duration T.
Terminology
- Binary: M = 2, modulation scheme {s_1(t), s_2(t)}
- Antipodal: M = 2 and s_2(t) = -s_1(t)
- Ternary: M = 3, Quaternary: M = 4
4 / 54
5 Parameters of Interest
- Bit rate R = (log2 M)/T bits/sec
- Energy of the i-th waveform E_i = ||s_i(t)||^2 = ∫_0^T s_i^2(t) dt
- Average energy E = Σ_{i=1}^M P(m = i) E_i = (1/M) Σ_{i=1}^M ||s_i(t)||^2
- Energy per message bit E_b = E / log2 M
- Probability of error P_e = P(m ≠ m̂)
Note: P_e depends on the modulation scheme, the noise statistics, and the demodulator.
5 / 54
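These quantities are easy to compute numerically. The following sketch (amplitude, duration and sample grid are assumed, not from the slides) estimates E_i, E, E_b and R for the on-off keying example that follows:

```python
import numpy as np

# Assumed parameters for an on-off keying example: amplitude A over 0 <= t <= T
T, A, M = 1.0, 2.0, 2
t = np.linspace(0.0, T, 10_001)

s1 = A * np.ones_like(t)   # "on" waveform s_1(t) = A
s2 = np.zeros_like(t)      # "off" waveform s_2(t) = 0

def energy(x, t):
    """E = integral of x(t)^2 dt, via the trapezoidal rule."""
    dt = np.diff(t)
    return float(np.sum((x[:-1]**2 + x[1:]**2) / 2 * dt))

E1, E2 = energy(s1, t), energy(s2, t)
E_avg = (E1 + E2) / M           # average energy, equally likely messages
E_b = E_avg / np.log2(M)        # energy per message bit
R = np.log2(M) / T              # bit rate, bits/sec
```

With these values, E_1 = A^2 T = 4, E_2 = 0, so E = 2 and E_b = 2.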
6 Example: On-Off Keying, M = 2 6 / 54
7 Objectives
1 Characterize and analyze a modulation scheme in terms of energy, rate and error probability. What is the best/optimal performance that one can expect?
2 Design a good modulation scheme that performs close to the theoretical optimum.
Key tool: Signal Space Representation
- Represent waveforms as vectors: geometry of the problem
- Simplifies performance analysis and modulation design
- Leads to efficient modulation/demodulation implementations
7 / 54
8 1 Signal Space Representation 2 Vector Gaussian Channel 3 Vector Gaussian Channel (contd.) 4 Optimum Detection 5 Probability of Error 8 / 54
9 References
- I. M. Jacobs and J. M. Wozencraft, Principles of Communication Engineering, Wiley, 1965.
- G. D. Forney and G. Ungerboeck, "Modulation and coding for linear Gaussian channels," IEEE Transactions on Information Theory, vol. 44, no. 6, pp. 2384-2415, Oct. 1998.
- D. Slepian and H. O. Pollak, "Prolate spheroidal wave functions, Fourier analysis and uncertainty - I," The Bell System Technical Journal, vol. 40, no. 1, pp. 43-63, Jan. 1961.
- H. J. Landau and H. O. Pollak, "Prolate spheroidal wave functions, Fourier analysis and uncertainty - III: The dimension of the space of essentially time- and band-limited signals," The Bell System Technical Journal, vol. 41, no. 4, pp. 1295-1336, July 1962.
9 / 54
10 1 Signal Space Representation 2 Vector Gaussian Channel 3 Vector Gaussian Channel (contd.) 4 Optimum Detection 5 Probability of Error 9 / 54
11 Goal
Map the waveforms s_1(t), ..., s_M(t) to M vectors in a Euclidean space R^N, so that the map preserves the mathematical structure of the waveforms.
9 / 54
12 Quick Review of R^N: N-Dimensional Euclidean Space
R^N = { (x_1, x_2, ..., x_N) : x_1, ..., x_N ∈ R }
Notation: x = (x_1, x_2, ..., x_N) and 0 = (0, 0, ..., 0)
Addition Properties:
- x + y = (x_1, ..., x_N) + (y_1, ..., y_N) = (x_1 + y_1, ..., x_N + y_N)
- x - y = (x_1, ..., x_N) - (y_1, ..., y_N) = (x_1 - y_1, ..., x_N - y_N)
- x + 0 = x for every x ∈ R^N
Multiplication Properties:
- a x = a (x_1, ..., x_N) = (a x_1, ..., a x_N), where a ∈ R
- a(x + y) = ax + ay
- (a + b)x = ax + bx
- ax = 0 if and only if a = 0 or x = 0
10 / 54
13 Quick Review of R^N: Inner Product and Norm
Inner Product
- <x, y> = <y, x> = x_1 y_1 + x_2 y_2 + ... + x_N y_N
- <x, y + z> = <x, y> + <x, z> (distributive law)
- <ax, y> = a <x, y>
- If <x, y> = 0 we say that x and y are orthogonal
Norm
- ||x|| = sqrt(x_1^2 + ... + x_N^2) = sqrt(<x, x>) denotes the length of x
- ||x||^2 = <x, x> denotes the energy of the vector x
- ||x||^2 = 0 if and only if x = 0
- If ||x|| = 1 we say that x is of unit norm
- ||x - y|| is the distance between two vectors
Cauchy-Schwarz Inequality
- |<x, y>| <= ||x|| ||y||
- Or equivalently, -1 <= <x, y> / (||x|| ||y||) <= 1
11 / 54
14 Waveforms as Vectors
The set of all finite-energy waveforms of duration T and the Euclidean space R^N share many structural properties.
Addition Properties
- We can add and subtract two waveforms: x(t) + y(t), x(t) - y(t)
- The all-zero waveform 0(t) = 0 for 0 <= t <= T is the additive identity: x(t) + 0(t) = x(t) for any waveform x(t)
Multiplication Properties
- We can scale x(t) by a real number a and obtain a x(t)
- a(x(t) + y(t)) = a x(t) + a y(t)
- (a + b)x(t) = a x(t) + b x(t)
- a x(t) = 0(t) if and only if a = 0 or x(t) = 0(t)
12 / 54
15 Inner Product and Norm of Waveforms
Inner Product
- <x(t), y(t)> = <y(t), x(t)> = ∫_0^T x(t) y(t) dt
- <x(t), y(t) + z(t)> = <x(t), y(t)> + <x(t), z(t)> (distributive law)
- <a x(t), y(t)> = a <x(t), y(t)>
- If <x(t), y(t)> = 0 we say that x(t) and y(t) are orthogonal
Norm
- ||x(t)|| = sqrt(<x(t), x(t)>) = sqrt(∫_0^T x^2(t) dt) is the norm of x(t)
- ||x(t)||^2 = ∫_0^T x^2(t) dt denotes the energy of x(t)
- If ||x(t)|| = 1 we say that x(t) is of unit norm
- ||x(t) - y(t)|| is the distance between two waveforms
Cauchy-Schwarz Inequality
- |<x(t), y(t)>| <= ||x(t)|| ||y(t)|| for any two waveforms x(t), y(t)
We want to map s_1(t), ..., s_M(t) to vectors s_1, ..., s_M ∈ R^N so that the addition, multiplication, inner product and norm properties are preserved.
13 / 54
16 Orthonormal Waveforms
Definition: A set of N waveforms {φ_1(t), ..., φ_N(t)} is said to be orthonormal if
1 ||φ_1(t)|| = ||φ_2(t)|| = ... = ||φ_N(t)|| = 1 (unit norm)
2 <φ_i(t), φ_j(t)> = 0 for all i ≠ j (orthogonality)
The role of orthonormal waveforms is similar to that of the standard basis e_1 = (1, 0, 0, ..., 0), e_2 = (0, 1, 0, ..., 0), ..., e_N = (0, 0, ..., 0, 1).
Remark: Say x(t) = x_1 φ_1(t) + ... + x_N φ_N(t) and y(t) = y_1 φ_1(t) + ... + y_N φ_N(t). Then
<x(t), y(t)> = Σ_{i=1}^N Σ_{j=1}^N x_i y_j <φ_i(t), φ_j(t)> = Σ_i x_i y_i = <x, y>
14 / 54
17 Example 15 / 54
18 Orthonormal Basis
Definition: An orthonormal basis for {s_1(t), ..., s_M(t)} is an orthonormal set {φ_1(t), ..., φ_N(t)} such that
s_i(t) = s_{i,1} φ_1(t) + s_{i,2} φ_2(t) + ... + s_{i,N} φ_N(t)
for some choice of s_{i,1}, s_{i,2}, ..., s_{i,N} ∈ R.
- We associate s_i(t) with s_i = (s_{i,1}, s_{i,2}, ..., s_{i,N}).
- A given modulation scheme can have many orthonormal bases.
- The map s_1(t) → s_1, s_2(t) → s_2, ..., s_M(t) → s_M depends on the choice of orthonormal basis.
16 / 54
19 Modulation Scheme Example: M-ary Phase Shift Keying
- s_i(t) = A cos(2π f_c t + 2πi/M), i = 1, ..., M
- Expanding s_i(t) using cos(C + D) = cos C cos D - sin C sin D:
s_i(t) = A cos(2πi/M) cos(2π f_c t) - A sin(2πi/M) sin(2π f_c t)
Orthonormal Basis
- Use φ_1(t) = sqrt(2/T) cos(2π f_c t) and φ_2(t) = sqrt(2/T) sin(2π f_c t), so that
s_i(t) = A sqrt(T/2) cos(2πi/M) φ_1(t) - A sqrt(T/2) sin(2πi/M) φ_2(t)
- Dimension N = 2
Waveform to Vector
s_i(t) → ( A sqrt(T/2) cos(2πi/M), -A sqrt(T/2) sin(2πi/M) )
17 / 54
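The PSK map is easy to check numerically. A small sketch (amplitude and duration are assumed values, and the minus sign follows the cos/sin expansion above):

```python
import numpy as np

def mpsk_vectors(M, A=1.0, T=1.0):
    """Signal-space vectors for M-PSK:
    s_i -> ( A*sqrt(T/2)*cos(2*pi*i/M), -A*sqrt(T/2)*sin(2*pi*i/M) )."""
    i = np.arange(1, M + 1)
    phase = 2 * np.pi * i / M
    return np.stack([A * np.sqrt(T / 2) * np.cos(phase),
                     -A * np.sqrt(T / 2) * np.sin(phase)], axis=1)

S = mpsk_vectors(8)   # 8-PSK: 8 points on a circle of radius A*sqrt(T/2)
```

Every point has the same norm A sqrt(T/2), i.e. the same energy A^2 T / 2, so PSK is a constant-energy scheme.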
20 8-ary Phase Shift Keying 18 / 54
21 How to find an orthonormal basis
Gram-Schmidt Procedure: Given a modulation scheme {s_1(t), ..., s_M(t)}, constructs an orthonormal basis φ_1(t), ..., φ_N(t) for the scheme.
Similar to the QR factorization of matrices: A = [a_1 a_2 ... a_M] = [q_1 q_2 ... q_N] R = QR, where Q has orthonormal columns and R = [r_{i,j}] is upper triangular. Analogously,
[s_1(t) ... s_M(t)] = [φ_1(t) ... φ_N(t)] S, where the (j, i) entry of S is the coefficient s_{i,j}.
19 / 54
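A numerical sketch of the Gram-Schmidt procedure on sampled waveforms (the sampling grid and the dependence tolerance are assumptions, not part of the slides):

```python
import numpy as np

def gram_schmidt(waveforms, dt):
    """Orthonormalize a list of sampled waveforms with respect to the
    inner product <x, y> ~ sum_k x[k]*y[k]*dt; linearly dependent
    waveforms are skipped, so len(result) = N <= M."""
    basis = []
    for s in waveforms:
        r = np.asarray(s, dtype=float).copy()
        for phi in basis:
            r -= (np.sum(r * phi) * dt) * phi   # remove projection onto phi
        norm = np.sqrt(np.sum(r**2) * dt)
        if norm > 1e-9:                         # skip dependent waveforms
            basis.append(r / norm)
    return np.array(basis)

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
dt = t[1] - t[0]
s1, s2 = np.ones_like(t), t.copy()
s3 = 2 * s1 + 3 * s2                  # linearly dependent on s1, s2
phi = gram_schmidt([s1, s2, s3], dt)  # only N = 2 basis waveforms survive
```

Three waveforms, but dimension N = 2: the dependent s3 contributes no new basis waveform.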
22 Waveforms to Vectors, and Back
Say {φ_1(t), ..., φ_N(t)} is an orthonormal basis for {s_1(t), ..., s_M(t)}. Then s_i(t) = Σ_{j=1}^N s_{i,j} φ_j(t) for some choice of {s_{i,j}}.
Waveform to Vector
<s_i(t), φ_j(t)> = < Σ_k s_{i,k} φ_k(t), φ_j(t) > = Σ_k s_{i,k} <φ_k(t), φ_j(t)> = s_{i,j}
s_i(t) → (s_{i,1}, s_{i,2}, ..., s_{i,N}) = s_i, where s_{i,j} = <s_i(t), φ_j(t)> for j = 1, ..., N.
Vector to Waveform
s_i = (s_{i,1}, ..., s_{i,N}) → s_{i,1} φ_1(t) + s_{i,2} φ_2(t) + ... + s_{i,N} φ_N(t)
- Every point in R^N corresponds to a unique waveform.
- Going back and forth between vectors and waveforms is easy.
20 / 54
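The two maps can be sketched directly as projections (waveform to vector) and weighted sums (vector to waveform); the basis below is an assumed example on [0, 1):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
dt = t[1] - t[0]

# Assumed orthonormal basis on [0, 1): sqrt(2)cos(2 pi t), sqrt(2)sin(2 pi t)
phi = np.stack([np.sqrt(2) * np.cos(2 * np.pi * t),
                np.sqrt(2) * np.sin(2 * np.pi * t)])

def to_vector(x, phi, dt):
    """Waveform -> vector: components <x(t), phi_j(t)>."""
    return phi @ x * dt

def to_waveform(v, phi):
    """Vector -> waveform: sum_j v_j phi_j(t)."""
    return v @ phi

s = 3.0 * phi[0] - 2.0 * phi[1]   # a waveform in the span of the basis
v = to_vector(s, phi, dt)          # close to (3, -2)
s_hat = to_waveform(v, phi)        # round trip recovers s(t)
```

The round trip is exact here precisely because s(t) lies in the span of the basis, which is the caveat on the next slide.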
23 Waveforms to Vectors, and Back: Caveat
v(t) → (waveform to vector) → v → (vector to waveform) → v̂(t)
Here v̂(t) = v(t) if and only if v(t) is some linear combination of φ_1(t), ..., φ_N(t), or equivalently, v(t) is some linear combination of s_1(t), ..., s_M(t).
21 / 54
24 Equivalence Between Waveform and Vector Representations
Say v(t) = v_1 φ_1(t) + ... + v_N φ_N(t) and u(t) = u_1 φ_1(t) + ... + u_N φ_N(t).
- Addition: v(t) + u(t) ↔ v + u
- Scalar multiplication: a v(t) ↔ a v
- Energy: ||v(t)||^2 ↔ ||v||^2
- Inner product: <v(t), u(t)> ↔ <v, u>
- Distance: ||v(t) - u(t)|| ↔ ||v - u||
- Basis: φ_i(t) ↔ e_i (standard basis)
22 / 54
25 1 Signal Space Representation 2 Vector Gaussian Channel 3 Vector Gaussian Channel (contd.) 4 Optimum Detection 5 Probability of Error 23 / 54
26 Vector Gaussian Channel
Definition: An M-ary modulation scheme of dimension N is a set of M vectors {s_1, ..., s_M} in R^N.
- Average energy E = (1/M) ( ||s_1||^2 + ... + ||s_M||^2 )
23 / 54
27 Vector Gaussian Channel
Relation between the received vector r and the transmit vector s_i: the j-th component of the received vector r = (r_1, ..., r_N) is
r_j = <r(t), φ_j(t)> = <s_i(t) + n(t), φ_j(t)> = <s_i(t), φ_j(t)> + <n(t), φ_j(t)> = s_{i,j} + n_j
Denoting n = (n_1, ..., n_N), we obtain r = s_i + n.
If n(t) is a Gaussian random process, the noise vector n follows a Gaussian distribution.
Note: The effective noise at the receiver is n̂(t) = n_1 φ_1(t) + ... + n_N φ_N(t). In general, n(t) is not a linear combination of the basis, and n̂(t) ≠ n(t).
24 / 54
28 Designing a Modulation Scheme
1 Choose an orthonormal basis φ_1(t), ..., φ_N(t). Determines the bandwidth of the transmit signals and the signalling duration T.
2 Construct a (vector) modulation scheme s_1, ..., s_M ∈ R^N. Determines the signal energy and the probability of error.
An N-dimensional modulation scheme exploits N uses of a scalar Gaussian channel: r_j = s_{i,j} + n_j, where j = 1, ..., N.
With limits on bandwidth and signal duration, how large can N be?
25 / 54
29 Dimension of Time/Band-limited Signals
Say the transmit signals s(t) must be time/band limited:
1 s(t) = 0 if t < 0 or t > T (time-limited), and
2 S(f) = 0 if f < f_c - W/2 or f > f_c + W/2 (band-limited)
Uncertainty Principle: No non-zero signal is both time- and band-limited. No signal transmission is possible!
We relax the constraint to approximately band-limited:
1 s(t) = 0 if t < 0 or t > T (time-limited), and
2 ∫_{f_c - W/2}^{f_c + W/2} |S(f)|^2 df >= (1 - δ) ∫_0^∞ |S(f)|^2 df (approx. band-limited)
Here δ > 0 is the allowed fraction of out-of-band signal energy.
What is the largest dimension N of time-limited/approximately band-limited signals?
26 / 54
30 Dimension of Time/Band-limited Signals
Let T > 0 and W > 0 be given, and consider any δ, ε > 0.
Theorem (Landau, Pollak & Slepian): If TW is sufficiently large, there exist N = 2TW(1 - ε) orthonormal waveforms φ_1(t), ..., φ_N(t) such that
1 φ_i(t) = 0 if t < 0 or t > T (time-limited), and
2 ∫_{f_c - W/2}^{f_c + W/2} |Φ_i(f)|^2 df >= (1 - δ) ∫_0^∞ |Φ_i(f)|^2 df (approx. band-limited)
In summary
- We can pack N ≈ 2TW dimensions if the time-bandwidth product TW is large enough.
- Number of dimensions/channel uses normalized to 1 sec of transmit duration and 1 Hz of bandwidth: N/(TW) ≈ 2 dim/sec/Hz
27 / 54
31 Relation between Waveform & Vector Channels
Assume N = 2TW.
- Signal energy: E_i = ||s_i(t)||^2 = ||s_i||^2
- Avg. energy: E = (1/M) Σ_i ||s_i(t)||^2 = (1/M) Σ_i ||s_i||^2
- Transmit power: S = E/T = 2W E/N
- Rate: R = (log2 M)/T = 2W (log2 M)/N
Parameters for the Vector Gaussian Channel
- Spectral efficiency η = 2 log2 M / N (unit: bits/sec/Hz). Allows comparison between schemes with different bandwidths. Related to rate as η = R/W.
- Power P = E/N (unit: Watt/Hz). Related to the actual transmit power as S = 2WP.
28 / 54
32 1 Signal Space Representation 2 Vector Gaussian Channel 3 Vector Gaussian Channel (contd.) 4 Optimum Detection 5 Probability of Error 29 / 54
33 Detection in the Gaussian Channel
Definition: Detection/decoding/demodulation is the process of estimating the message m given the received waveform r(t) and the modulation scheme {s_1(t), ..., s_M(t)}.
Objective: Design the decoder to minimize P_e = P(m̂ ≠ m).
29 / 54
34 The Gaussian Random Variable
Let X ~ N(0, 1) be a standard Gaussian random variable.
- P(X < -a) = P(X > a) = Q(a)
- Q(·) is a decreasing function
- Y = σX is Gaussian with mean 0 and variance σ^2, i.e., N(0, σ^2)
- P(Y > b) = P(σX > b) = P(X > b/σ) = Q(b/σ)
30 / 54
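Q(·) has no closed form, but it is a one-liner via the complementary error function. A minimal sketch:

```python
import math

def Q(a):
    """Gaussian tail probability Q(a) = P(X > a) for X ~ N(0, 1),
    using Q(a) = erfc(a / sqrt(2)) / 2."""
    return 0.5 * math.erfc(a / math.sqrt(2))
```

For example, P(Y > b) for Y ~ N(0, σ^2) is `Q(b / sigma)`.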
35 White Gaussian Noise Process n(t)
The noise waveform n(t) is modelled as a white Gaussian random process, i.e., as a collection of random variables {n(τ) : -∞ < τ < +∞} such that
- Stationary random process: the statistics of the processes n(t) and n(t - constant) are identical.
- Gaussian random process: any linear combination of finitely many samples of n(t) is Gaussian distributed: a_1 n(t_1) + a_2 n(t_2) + ... + a_l n(t_l) is Gaussian.
- White random process: the power spectrum N(f) of the noise process is flat: N(f) = N_0/2 W/Hz, for -∞ < f < +∞.
31 / 54
37 Noise Process Through the Waveform-to-Vector Converter
Properties of the noise vector n = (n_1, ..., n_N):
- n_1, n_2, ..., n_N are independent N(0, N_0/2) random variables:
f(n_i) = (1/sqrt(π N_0)) exp(-n_i^2 / N_0)
- The noise vector n describes only a part of n(t):
n̂(t) = n_1 φ_1(t) + ... + n_N φ_N(t) ≠ n(t)
- The noise component not captured by the waveform-to-vector converter: ñ(t) = n(t) - n̂(t) ≠ 0
33 / 54
38 White Gaussian Noise Vector n
- Probability density of n = (n_1, ..., n_N) in R^N:
f_noise(n) = f(n_1, ..., n_N) = Π_{i=1}^N f(n_i) = (1/(sqrt(π N_0))^N) exp(-||n||^2 / N_0)
The probability density depends only on ||n||^2:
- Spherically symmetric: isotropic distribution
- Density highest near 0 and decreasing in ||n||^2: a noise vector of larger norm is less likely than a vector with smaller norm
- For any a ∈ R^N, <n, a> ~ N(0, ||a||^2 N_0/2)
- If a_1, ..., a_K are orthonormal, then <n, a_1>, ..., <n, a_K> are independent N(0, N_0/2)
34 / 54
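These distributional facts are easy to verify by simulation. A sketch with assumed values N = 4 and N_0 = 2 (so the per-dimension variance N_0/2 = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
N0, N, trials = 2.0, 4, 200_000

# n has i.i.d. N(0, N0/2) components
n = rng.normal(0.0, np.sqrt(N0 / 2), size=(trials, N))

a = np.array([0.5, 0.5, 0.5, 0.5])   # unit-norm direction, ||a|| = 1
proj = n @ a                          # <n, a> should be N(0, ||a||^2 N0/2)

var_comp = n.var(axis=0)   # each component: variance close to N0/2 = 1
var_proj = proj.var()      # projection: variance also close to N0/2 = 1
```

The projection onto any unit-norm direction has the same variance as a single component, which is the isotropy claim above.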
39 ñ(t) Carries Irrelevant Information
- r = s_i + n does not carry all the information in r(t):
r̂(t) = r_1 φ_1(t) + ... + r_N φ_N(t) ≠ r(t)
- The information about r(t) not contained in r:
r(t) - Σ_j r_j φ_j(t) = s_i(t) + n(t) - Σ_j s_{i,j} φ_j(t) - Σ_j n_j φ_j(t) = ñ(t)
Theorem: The vector r contains all the information in r(t) that is relevant to the transmitted message.
- ñ(t) is irrelevant for the optimum detection of the transmit message.
35 / 54
40 The (Effective) Vector Gaussian Channel
- A modulation scheme/code is a set {s_1, ..., s_M} of M vectors in R^N
- Power P = (1/(MN)) ( ||s_1||^2 + ... + ||s_M||^2 )
- Noise variance σ^2 = N_0/2 (per dimension)
- Signal-to-noise ratio SNR = P/σ^2 = 2P/N_0
- Spectral efficiency η = 2 log2 M / N bits/sec/Hz (assuming N = 2TW)
36 / 54
41 1 Signal Space Representation 2 Vector Gaussian Channel 3 Vector Gaussian Channel (contd.) 4 Optimum Detection 5 Probability of Error 37 / 54
42 Optimum Detection Rule
Objective: Given {s_1, ..., s_M} and r, provide an estimate m̂ of the transmit message m, so that P_e = P(m̂ ≠ m) is as small as possible.
Optimal Detection: Maximum a posteriori (MAP) detector
Given the received vector r, choose the vector s_k that has the highest probability of being transmitted:
m̂ = arg max_{k ∈ {1, ..., M}} P(s_k transmitted | r received)
In other words, choose m̂ = k if
P(s_k transmitted | r received) > P(s_j transmitted | r received) for every j ≠ k
- In case of a tie, we can choose one of the tied indices arbitrarily. This does not increase P_e.
37 / 54
43 Optimum Detection Rule
Use Bayes' rule P(A | B) = P(A) P(B | A) / P(B):
m̂ = arg max_k P(s_k | r) = arg max_k P(s_k) f(r | s_k) / f(r)
- P(s_k) = probability of transmitting s_k = 1/M (equally likely messages)
- f(r | s_k) = probability density of r when s_k is transmitted
- f(r) = probability density of r averaged over all possible transmissions
m̂ = arg max_k (1/M) f(r | s_k) / f(r) = arg max_k f(r | s_k)
f(r | s_k) is the likelihood function, so m̂ = arg max_k f(r | s_k) is the maximum-likelihood rule.
If all the M messages are equally likely: MAP detection = maximum-likelihood (ML) detection.
38 / 54
44 Maximum Likelihood Detection in the Vector Gaussian Channel
Use the model r = s_i + n and the assumption that n is independent of s_i:
m̂ = arg max_k f(r | s_k) = arg max_k f_noise(r - s_k | s_k) = arg max_k f_noise(r - s_k)
= arg max_k (1/(sqrt(π N_0))^N) exp(-||r - s_k||^2 / N_0) = arg min_k ||r - s_k||^2
ML Detection Rule for the Vector Gaussian Channel
Choose m̂ = k if ||r - s_k|| < ||r - s_j|| for every j ≠ k.
- Also called minimum-distance/nearest-neighbor decoding
- In case of a tie, choose one of the contenders arbitrarily.
39 / 54
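The minimum-distance rule is a few lines of code. A sketch, with an assumed QPSK-like constellation:

```python
import numpy as np

def ml_detect(r, S):
    """Nearest-neighbor (ML) detection: return the index k that
    minimizes ||r - s_k||; S holds one constellation point per row."""
    d2 = np.sum((np.asarray(S) - np.asarray(r))**2, axis=1)
    return int(np.argmin(d2))

# Assumed example scheme: 4 points on the coordinate axes
S = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
```

Comparing squared distances avoids the square root and gives the same arg min.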
45 Example: M = 6 vectors in R^2
The k-th decision region D_k is the set of all points closer to s_k than to any other s_j:
D_k = { r ∈ R^N : ||r - s_k|| < ||r - s_j|| for all j ≠ k }
The ML detector outputs m̂ = k if r ∈ D_k.
40 / 54
46 Examples in R^2 41 / 54
47 1 Signal Space Representation 2 Vector Gaussian Channel 3 Vector Gaussian Channel (contd.) 4 Optimum Detection 5 Probability of Error 42 / 54
48 Error Probability when M = 2
Scenario: Let {s_1, s_2} ⊂ R^N be a binary modulation scheme with
- P(s_1) = P(s_2) = 1/2, and
- detected using the nearest-neighbor decoder.
- An error E occurs if (s_1 transmitted, m̂ = 2) or (s_2 transmitted, m̂ = 1).
- Conditional error probability: P(E | s_1) = P(m̂ = 2 | s_1) = P( ||r - s_2|| < ||r - s_1|| | s_1 )
- Note that P(E) = P(s_1) P(E | s_1) + P(s_2) P(E | s_2) = ( P(E | s_1) + P(E | s_2) ) / 2
- P(E | s_i) can be easy to analyse
42 / 54
49 Conditional Error Probability when M = 2
(E | s_1): s_1 is transmitted, r = s_1 + n, and ||r - s_1||^2 > ||r - s_2||^2
(E | s_1): ||s_1 + n - s_1||^2 > ||s_1 + n - s_2||^2
⟺ ||n||^2 > ||(s_1 - s_2) + n||^2
⟺ ||n||^2 > ||s_1 - s_2||^2 + 2 <n, s_1 - s_2> + ||n||^2
⟺ <n, s_1 - s_2> < -||s_1 - s_2||^2 / 2
⟺ sqrt(2/N_0) <n, (s_1 - s_2)/||s_1 - s_2||> < -||s_1 - s_2|| / sqrt(2 N_0)
43 / 54
50 Error Probability when M = 2
- Z = sqrt(2/N_0) <n, (s_1 - s_2)/||s_1 - s_2||> is Gaussian with zero mean and variance 1.
- P(E | s_1) = P( Z < -||s_1 - s_2|| / sqrt(2 N_0) ) = Q( ||s_1 - s_2|| / sqrt(2 N_0) )
- P(E | s_2) = Q( ||s_1 - s_2|| / sqrt(2 N_0) )
P(E) = ( P(E | s_1) + P(E | s_2) ) / 2 = Q( ||s_1 - s_2|| / sqrt(2 N_0) )
- The error probability is a decreasing function of the distance ||s_1 - s_2||.
44 / 54
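The closed form P(E) = Q(||s_1 - s_2|| / sqrt(2 N_0)) can be checked against a Monte Carlo simulation of the binary scheme (the antipodal points and N_0 below are assumed values):

```python
import math
import numpy as np

def Q(a):
    return 0.5 * math.erfc(a / math.sqrt(2))

rng = np.random.default_rng(1)
N0 = 1.0
s1, s2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])  # antipodal pair
d = np.linalg.norm(s1 - s2)                            # d = 2

p_theory = Q(d / math.sqrt(2 * N0))   # = Q(sqrt(2))

# Transmit s1 many times; count nearest-neighbor decoding errors
trials = 200_000
r = s1 + rng.normal(0.0, math.sqrt(N0 / 2), size=(trials, 2))
err = np.sum(np.sum((r - s2)**2, axis=1) < np.sum((r - s1)**2, axis=1))
p_sim = err / trials
```

By symmetry it suffices to condition on s_1; the simulated error rate should match Q(d / sqrt(2 N_0)) up to Monte Carlo noise.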
51 Bound on Error Probability when M > 2
Scenario: Let C = {s_1, ..., s_M} ⊂ R^N be a modulation/coding scheme with
- P(s_1) = ... = P(s_M) = 1/M, and
- detected using the nearest-neighbor decoder.
- Minimum distance d_min = smallest Euclidean distance between any pair of vectors in C:
d_min = min_{i ≠ j} ||s_i - s_j||
- Observe that ||s_i - s_j|| >= d_min for any i ≠ j.
- Since Q(·) is a decreasing function,
Q( ||s_i - s_j|| / sqrt(2 N_0) ) <= Q( d_min / sqrt(2 N_0) ) for any i ≠ j
- A bound based only on d_min: simple calculations, not tight, intuitive.
45 / 54
53 Union Bound on Conditional Error Probability
Assume that s_1 is transmitted, i.e., r = s_1 + n. We know that
P( ||r - s_j|| < ||r - s_1|| | s_1 ) = Q( ||s_1 - s_j|| / sqrt(2 N_0) )
A decoding error occurs if r is closer to some s_j than to s_1, j = 2, 3, ..., M:
P(E | s_1) = P(r ∉ D_1 | s_1) = P( ∪_{j=2}^M { ||r - s_j|| < ||r - s_1|| } | s_1 )
From the union bound P(A_2 ∪ ... ∪ A_M) <= P(A_2) + ... + P(A_M):
P(E | s_1) <= Σ_{j=2}^M P( ||r - s_j|| < ||r - s_1|| | s_1 ) = Σ_{j=2}^M Q( ||s_1 - s_j|| / sqrt(2 N_0) )
47 / 54
54 Union Bound on Error Probability
Since Q(·) is a decreasing function and ||s_1 - s_j|| >= d_min:
P(E | s_1) <= Σ_{j=2}^M Q( ||s_1 - s_j|| / sqrt(2 N_0) ) <= Σ_{j=2}^M Q( d_min / sqrt(2 N_0) )
P(E | s_1) <= (M - 1) Q( d_min / sqrt(2 N_0) )
Upper bound on the average error probability P(E) = Σ_{i=1}^M P(s_i) P(E | s_i):
P(E) <= (M - 1) Q( d_min / sqrt(2 N_0) )
Note
- Exact P_e (or good approximations better than the union bound) can be derived for several constellations, for example PAM, QAM and PSK.
- The Chernoff bound can be useful: Q(a) <= (1/2) exp(-a^2/2) for a >= 0
- The union bound, in general, is loose.
48 / 54
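The d_min bound is straightforward to compute for any constellation. A sketch (the constellation and N_0 are assumed values):

```python
import math
from itertools import combinations
import numpy as np

def Q(a):
    return 0.5 * math.erfc(a / math.sqrt(2))

def union_bound(S, N0):
    """Return ((M-1) * Q(d_min / sqrt(2 N0)), d_min): the d_min-based
    union bound on P(E) for nearest-neighbor decoding."""
    S = np.asarray(S, dtype=float)
    d_min = min(np.linalg.norm(a - b) for a, b in combinations(S, 2))
    M = len(S)
    return (M - 1) * Q(d_min / math.sqrt(2 * N0)), d_min

# Assumed example: QPSK-like points on the axes, N0 = 1
S = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
bound, d_min = union_bound(S, 1.0)   # d_min = sqrt(2), bound = 3*Q(1)
```

The bound exceeds the true P_e because it over-counts overlapping error events and replaces every pairwise distance with d_min.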
55
- The abscissa is [SNR]_dB = 10 log10(SNR)
- The union bound is a reasonable approximation for large values of SNR
49 / 54
56 Performance of QAM and FSK
η = 2 log2 M / N
50 / 54
57 Performance of QAM and FSK
Probability of error P_e = 10^-5
Table (Modulation/Code, spectral efficiency η in bits/sec/Hz, SNR in dB): entries for 16-QAM, QAM, and several FSK schemes with η = 3/4 and η = 1/2; most numerical values were lost in transcription.
How good are these modulation schemes? What is the best trade-off between SNR and η?
51 / 54
58 Capacity of the (Vector) Gaussian Channel
Let the maximum allowable power be P and the noise variance be N_0/2, so SNR = P/(N_0/2) = 2P/N_0.
What is the highest η achievable while ensuring that P_e is small?
Theorem: Given an ε > 0 and any constant η such that η < log2(1 + SNR), there exists a coding scheme with P_e <= ε and spectral efficiency at least η. Conversely, for any coding scheme with η > log2(1 + SNR) and M sufficiently large, P_e is close to 1.
C(SNR) = log2(1 + SNR) is the capacity of the Gaussian channel.
52 / 54
59 How Good/Bad are QAM and FSK?
The least SNR required to communicate reliably with spectral efficiency η is SNR*(η) = 2^η - 1.
Probability of error P_e = 10^-5
Table (Modulation/Code, η, SNR in dB, SNR*(η)): entries for 16-QAM, QAM, and several FSK schemes; the numerical values were lost in transcription.
53 / 54
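The capacity formula and its inverse are one-liners; a sketch for reproducing the SNR*(η) threshold:

```python
import math

def capacity(snr):
    """Gaussian channel capacity C = log2(1 + SNR), in bits/sec/Hz."""
    return math.log2(1.0 + snr)

def min_snr(eta):
    """Least SNR supporting spectral efficiency eta: SNR*(eta) = 2^eta - 1."""
    return 2.0**eta - 1.0

def min_snr_db(eta):
    """The same threshold expressed in dB."""
    return 10.0 * math.log10(min_snr(eta))
```

The gap between a scheme's required SNR at a given η and `min_snr_db(eta)` measures how far that scheme is from capacity.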
60 How to Perform Close to Capacity?
- We need P_e to be small at a fixed finite SNR: d_min must be large to ensure that P_e is small.
- It is necessary to use coding schemes in high dimensions N >> 1. One can then ensure that d_min / sqrt(N) stays above a constant.
- If N is large it is possible to pack vectors {s_i} in R^N such that: the average power is at most P, d_min is large, η is close to log2(1 + SNR), and P_e is small.
- A large N implies that M = 2^(ηN/2) is also large. We must ensure that such a large code can be encoded/decoded with practical complexity.
Several known coding techniques
- η > 1: trellis coded modulation, multilevel codes, lattice codes, bit-interleaved coded modulation, etc.
- η < 1: low-density parity-check codes, turbo codes, polar codes, etc.
Thank You!
54 / 54
Kevin Buckley - -4 ECE877 Information Theory & Coding for Digital Communications Villanova University ECE Department Prof. Kevin M. Buckley Lecture Set Review of Digital Communications, Introduction to
More informationComputing Probability of Symbol Error
Computing Probability of Symbol Error I When decision boundaries intersect at right angles, then it is possible to compute the error probability exactly in closed form. I The result will be in terms of
More informationData Detection for Controlled ISI. h(nt) = 1 for n=0,1 and zero otherwise.
Data Detection for Controlled ISI *Symbol by symbol suboptimum detection For the duobinary signal pulse h(nt) = 1 for n=0,1 and zero otherwise. The samples at the output of the receiving filter(demodulator)
More informationVector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis.
Vector spaces DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Vector space Consists of: A set V A scalar
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More informationProblem 7.7 : We assume that P (x i )=1/3, i =1, 2, 3. Then P (y 1 )= 1 ((1 p)+p) = P (y j )=1/3, j=2, 3. Hence : and similarly.
(b) We note that the above capacity is the same to the capacity of the binary symmetric channel. Indeed, if we considerthe grouping of the output symbols into a = {y 1,y 2 } and b = {y 3,y 4 } we get a
More informationAppendix B Information theory from first principles
Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes
More informationOn Design Criteria and Construction of Non-coherent Space-Time Constellations
On Design Criteria and Construction of Non-coherent Space-Time Constellations Mohammad Jaber Borran, Ashutosh Sabharwal, and Behnaam Aazhang ECE Department, MS-366, Rice University, Houston, TX 77005-89
More informationOn the Low-SNR Capacity of Phase-Shift Keying with Hard-Decision Detection
On the Low-SNR Capacity of Phase-Shift Keying with Hard-Decision Detection ustafa Cenk Gursoy Department of Electrical Engineering University of Nebraska-Lincoln, Lincoln, NE 68588 Email: gursoy@engr.unl.edu
More informationRapport technique #INRS-EMT Exact Expression for the BER of Rectangular QAM with Arbitrary Constellation Mapping
Rapport technique #INRS-EMT-010-0604 Exact Expression for the BER of Rectangular QAM with Arbitrary Constellation Mapping Leszek Szczeciński, Cristian González, Sonia Aïssa Institut National de la Recherche
More informationPerformance Analysis of Spread Spectrum CDMA systems
1 Performance Analysis of Spread Spectrum CDMA systems 16:33:546 Wireless Communication Technologies Spring 5 Instructor: Dr. Narayan Mandayam Summary by Liang Xiao lxiao@winlab.rutgers.edu WINLAB, Department
More informationA Systematic Description of Source Significance Information
A Systematic Description of Source Significance Information Norbert Goertz Institute for Digital Communications School of Engineering and Electronics The University of Edinburgh Mayfield Rd., Edinburgh
More information19. Channel coding: energy-per-bit, continuous-time channels
9. Channel coding: energy-per-bit, continuous-time channels 9. Energy per bit Consider the additive Gaussian noise channel: Y i = X i + Z i, Z i N ( 0, ). (9.) In the last lecture, we analyzed the maximum
More informationError Exponent Region for Gaussian Broadcast Channels
Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI
More informationELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process
Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Definition of stochastic process (random
More informationDetecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf
Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Reading: Ch. 5 in Kay-II. (Part of) Ch. III.B in Poor. EE 527, Detection and Estimation Theory, # 5c Detecting Parametric Signals in Noise
More informationOptimum Soft Decision Decoding of Linear Block Codes
Optimum Soft Decision Decoding of Linear Block Codes {m i } Channel encoder C=(C n-1,,c 0 ) BPSK S(t) (n,k,d) linear modulator block code Optimal receiver AWGN Assume that [n,k,d] linear block code C is
More informationOutline Digital Communications. Lecture 02 Signal-Space Representation. Energy Signals. MMSE Approximation. Pierluigi SALVO ROSSI
Outline igital Communications Lecture 2 Signal-Space Representation Pierluigi SALVO ROSSI epartment of Industrial and Information Engineering Second University of Naples Via Roma 29, 8131 Aversa (CE, Italy
More informationLIKELIHOOD RECEIVER FOR FH-MFSK MOBILE RADIO*
LIKELIHOOD RECEIVER FOR FH-MFSK MOBILE RADIO* Item Type text; Proceedings Authors Viswanathan, R.; S.C. Gupta Publisher International Foundation for Telemetering Journal International Telemetering Conference
More informationDr. Cathy Liu Dr. Michael Steinberger. A Brief Tour of FEC for Serial Link Systems
Prof. Shu Lin Dr. Cathy Liu Dr. Michael Steinberger U.C.Davis Avago SiSoft A Brief Tour of FEC for Serial Link Systems Outline Introduction Finite Fields and Vector Spaces Linear Block Codes Cyclic Codes
More informationX b s t w t t dt b E ( ) t dt
Consider the following correlator receiver architecture: T dt X si () t S xt () * () t Wt () T dt X Suppose s (t) is sent, then * () t t T T T X s t w t t dt E t t dt w t dt E W t t T T T X s t w t t dt
More informationEE6604 Personal & Mobile Communications. Week 15. OFDM on AWGN and ISI Channels
EE6604 Personal & Mobile Communications Week 15 OFDM on AWGN and ISI Channels 1 { x k } x 0 x 1 x x x N- 2 N- 1 IDFT X X X X 0 1 N- 2 N- 1 { X n } insert guard { g X n } g X I n { } D/A ~ si ( t) X g X
More informationConsider a 2-D constellation, suppose that basis signals =cosine and sine. Each constellation symbol corresponds to a vector with two real components
TUTORIAL ON DIGITAL MODULATIONS Part 3: 4-PSK [2--26] Roberto Garello, Politecnico di Torino Free download (for personal use only) at: www.tlc.polito.it/garello Quadrature modulation Consider a 2-D constellation,
More informationTrellis Coded Modulation
Trellis Coded Modulation Trellis coded modulation (TCM) is a marriage between codes that live on trellises and signal designs We have already seen that trellises are the preferred way to view convolutional
More informationMapper & De-Mapper System Document
Mapper & De-Mapper System Document Mapper / De-Mapper Table of Contents. High Level System and Function Block. Mapper description 2. Demodulator Function block 2. Decoder block 2.. De-Mapper 2..2 Implementation
More informationEE4512 Analog and Digital Communications Chapter 4. Chapter 4 Receiver Design
Chapter 4 Receiver Design Chapter 4 Receiver Design Probability of Bit Error Pages 124-149 149 Probability of Bit Error The low pass filtered and sampled PAM signal results in an expression for the probability
More informationMITOCW ocw-6-451_ feb k_512kb-mp4
MITOCW ocw-6-451_4-261-09feb2005-220k_512kb-mp4 And what we saw was the performance was quite different in the two regimes. In power-limited regime, typically our SNR is much smaller than one, whereas
More informationInformation Theory - Entropy. Figure 3
Concept of Information Information Theory - Entropy Figure 3 A typical binary coded digital communication system is shown in Figure 3. What is involved in the transmission of information? - The system
More information8 PAM BER/SER Monte Carlo Simulation
xercise.1 8 PAM BR/SR Monte Carlo Simulation - Simulate a 8 level PAM communication system and calculate bit and symbol error ratios (BR/SR). - Plot the calculated and simulated SR and BR curves. - Plot
More informationLecture 9: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1
: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1 Rayleigh Friday, May 25, 2018 09:00-11:30, Kansliet 1 Textbook: D. Tse and P. Viswanath, Fundamentals of Wireless
More informationExample: Bipolar NRZ (non-return-to-zero) signaling
Baseand Data Transmission Data are sent without using a carrier signal Example: Bipolar NRZ (non-return-to-zero signaling is represented y is represented y T A -A T : it duration is represented y BT. Passand
More informationLecture # 3 Orthogonal Matrices and Matrix Norms. We repeat the definition an orthogonal set and orthornormal set.
Lecture # 3 Orthogonal Matrices and Matrix Norms We repeat the definition an orthogonal set and orthornormal set. Definition A set of k vectors {u, u 2,..., u k }, where each u i R n, is said to be an
More informationChapter 9 Fundamental Limits in Information Theory
Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For
More informationEE Introduction to Digital Communications Homework 8 Solutions
EE 2 - Introduction to Digital Communications Homework 8 Solutions May 7, 2008. (a) he error probability is P e = Q( SNR). 0 0 0 2 0 4 0 6 P e 0 8 0 0 0 2 0 4 0 6 0 5 0 5 20 25 30 35 40 SNR (db) (b) SNR
More informationEE 661: Modulation Theory Solutions to Homework 6
EE 66: Modulation Theory Solutions to Homework 6. Solution to problem. a) Binary PAM: Since 2W = 4 KHz and β = 0.5, the minimum T is the solution to (+β)/(2t ) = W = 2 0 3 Hz. Thus, we have the maximum
More informationThe E8 Lattice and Error Correction in Multi-Level Flash Memory
The E8 Lattice and Error Correction in Multi-Level Flash Memory Brian M. Kurkoski kurkoski@ice.uec.ac.jp University of Electro-Communications Tokyo, Japan ICC 2011 IEEE International Conference on Communications
More informationLecture 9: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1. Overview. Ragnar Thobaben CommTh/EES/KTH
: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1 Rayleigh Wednesday, June 1, 2016 09:15-12:00, SIP 1 Textbook: D. Tse and P. Viswanath, Fundamentals of Wireless Communication
More informationUpper Bounds for the Average Error Probability of a Time-Hopping Wideband System
Upper Bounds for the Average Error Probability of a Time-Hopping Wideband System Aravind Kailas UMTS Systems Performance Team QUALCOMM Inc San Diego, CA 911 Email: akailas@qualcommcom John A Gubner Department
More informationProblem Set 2. MAS 622J/1.126J: Pattern Recognition and Analysis. Due: 5:00 p.m. on September 30
Problem Set 2 MAS 622J/1.126J: Pattern Recognition and Analysis Due: 5:00 p.m. on September 30 [Note: All instructions to plot data or write a program should be carried out using Matlab. In order to maintain
More informationECE Information theory Final (Fall 2008)
ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1
More informationDigital Communications
Digital Communications Chapter 5 Carrier and Symbol Synchronization Po-Ning Chen, Professor Institute of Communications Engineering National Chiao-Tung University, Taiwan Digital Communications Ver 218.7.26
More information4184 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER Pranav Dayal, Member, IEEE, and Mahesh K. Varanasi, Senior Member, IEEE
4184 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER 2005 An Algebraic Family of Complex Lattices for Fading Channels With Application to Space Time Codes Pranav Dayal, Member, IEEE,
More informationAdvanced Digital Communications
Advanced Digital Communications Suhas Diggavi École Polytechnique Fédérale de Lausanne (EPFL) School of Computer and Communication Sciences Laboratory of Information and Communication Systems (LICOS) 3rd
More informationMemo of J. G. Proakis and M Salehi, Digital Communications, 5th ed. New York: McGraw-Hill, Chenggao HAN
Memo of J. G. Proakis and M Salehi, Digital Communications, 5th ed. New York: McGraw-Hill, 007 Chenggao HAN Contents 1 Introduction 1 1.1 Elements of a Digital Communication System.....................
More informationECE6604 PERSONAL & MOBILE COMMUNICATIONS. Week 3. Flat Fading Channels Envelope Distribution Autocorrelation of a Random Process
1 ECE6604 PERSONAL & MOBILE COMMUNICATIONS Week 3 Flat Fading Channels Envelope Distribution Autocorrelation of a Random Process 2 Multipath-Fading Mechanism local scatterers mobile subscriber base station
More informationPrinciples of Communications Lecture 8: Baseband Communication Systems. Chih-Wei Liu 劉志尉 National Chiao Tung University
Principles of Communications Lecture 8: Baseband Communication Systems Chih-Wei Liu 劉志尉 National Chiao Tung University cwliu@twins.ee.nctu.edu.tw Outlines Introduction Line codes Effects of filtering Pulse
More information2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES
2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES 2.0 THEOREM OF WIENER- KHINTCHINE An important technique in the study of deterministic signals consists in using harmonic functions to gain the spectral
More informationCommunication Theory Summary of Important Definitions and Results
Signal and system theory Convolution of signals x(t) h(t) = y(t): Fourier Transform: Communication Theory Summary of Important Definitions and Results X(ω) = X(ω) = y(t) = X(ω) = j x(t) e jωt dt, 0 Properties
More informationA Design of High-Rate Space-Frequency Codes for MIMO-OFDM Systems
A Design of High-Rate Space-Frequency Codes for MIMO-OFDM Systems Wei Zhang, Xiang-Gen Xia and P. C. Ching xxia@ee.udel.edu EE Dept., The Chinese University of Hong Kong ECE Dept., University of Delaware
More informationSignals and Spectra - Review
Signals and Spectra - Review SIGNALS DETERMINISTIC No uncertainty w.r.t. the value of a signal at any time Modeled by mathematical epressions RANDOM some degree of uncertainty before the signal occurs
More informationCommunication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi
Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 41 Pulse Code Modulation (PCM) So, if you remember we have been talking
More information2016 Spring: The Final Exam of Digital Communications
2016 Spring: The Final Exam of Digital Communications The total number of points is 131. 1. Image of Transmitter Transmitter L 1 θ v 1 As shown in the figure above, a car is receiving a signal from a remote
More informationIEEE C80216m-09/0079r1
Project IEEE 802.16 Broadband Wireless Access Working Group Title Efficient Demodulators for the DSTTD Scheme Date 2009-01-05 Submitted M. A. Khojastepour Ron Porat Source(s) NEC
More information4 An Introduction to Channel Coding and Decoding over BSC
4 An Introduction to Channel Coding and Decoding over BSC 4.1. Recall that channel coding introduces, in a controlled manner, some redundancy in the (binary information sequence that can be used at the
More informationDecision-Point Signal to Noise Ratio (SNR)
Decision-Point Signal to Noise Ratio (SNR) Receiver Decision ^ SNR E E e y z Matched Filter Bound error signal at input to decision device Performance upper-bound on ISI channels Achieved on memoryless
More informationShannon meets Wiener II: On MMSE estimation in successive decoding schemes
Shannon meets Wiener II: On MMSE estimation in successive decoding schemes G. David Forney, Jr. MIT Cambridge, MA 0239 USA forneyd@comcast.net Abstract We continue to discuss why MMSE estimation arises
More information