Image Data Compression. Dirty-paper codes. Alexey Pak, Lehrstuhl für Interaktive Echtzeitsysteme, Fakultät für Informatik, KIT


1 Image Data Compression: Dirty-paper codes

2 Reminder: watermarking with side information

[Block diagram: input message m → watermark encoder (using auxiliary information) → added mark w_a; cover work c_o + w_a = c_w → channel (adds noise n) → c_wn → watermark detector / decoder (using auxiliary information) → output message m_n. A red arrow optionally feeds the cover work into the encoder.]

Blind embedding (no red arrow):
- To embed a multi-bit message, one needs an error-correcting code (trellis code / Viterbi algorithm).
- To find whether the WM is present, re-compute the embedded mark w_a and find its correlation with c_wn.

Watermarking with side information (with red arrow):
- Limited channel rate R, given constraints on the fidelity P_e and the channel distortion power P_a.
- If the cover work is known a priori (or trivial), the scheme is equivalent to an AWGN channel. In that case, the classical result by Shannon applies:

    R \le \tfrac{1}{2} \log\left(1 + \frac{P_e}{P_a}\right)

- More generally, this is formulated as R \le I(X;Y), where I(X;Y) is the mutual information between the channel input variable X and the received signal Y. For jointly Gaussian X and Y it is:

    I(X;Y) = I(Y;X) = \tfrac{1}{2} \log\frac{E[X^2]\,E[Y^2]}{E[X^2]\,E[Y^2] - E[XY]^2}

Let us now consider some non-trivial cover work.

3 Dirty-paper codes (DPC)

[Costa 83]: "imagine a sheet of paper covered with independent dirt spots of normally distributed intensity. In some sense, the problem of writing a message on this sheet is analogous to that of sending information through the channel. The writer knows the location and intensity of the dirt spots, but the reader cannot distinguish them from the ink marks applied by the writer."

- Usual codes: one message corresponds to one codeword.
- Dirty-paper codes: one message m → a whole sub-codebook C_m (i.e. a DPC is a collection of codes).
- Given a message, the embedder selects the codeword from C_m that best fits the cover work.
- The detector finds the most likely codeword in the union of all sub-codebooks and outputs its syndrome, i.e. the corresponding message.

Example: LSB embedding into a single numeric value (see the sketch below)
- Cover work: some real number n
- Embedded signal: a binary digit, m ∈ {0, 1}
- Method: output n rounded to the nearest odd (m = 0) or even (m = 1) value
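
A minimal sketch of this single-value example (the parity convention follows the slide; the function names are illustrative):

    import math

    def embed_bit(n: float, m: int) -> int:
        """Round n to the nearest integer of the desired parity: odd for m = 0, even for m = 1."""
        target = 1 - m                                # remainder mod 2 of the desired parity
        k = math.floor(n)
        candidates = [c for c in (k - 1, k, k + 1, k + 2) if c % 2 == target]
        return min(candidates, key=lambda c: abs(c - n))

    def detect_bit(y: float) -> int:
        """Syndrome of the nearest codeword: even -> m = 1, odd -> m = 0."""
        return 1 - round(y) % 2

    assert detect_bit(embed_bit(3.4, 0)) == 0         # 3.4 -> 3 (nearest odd)
    assert detect_bit(embed_bit(3.4, 1)) == 1         # 3.4 -> 4 (nearest even)

Here C_0 is the set of all odd integers and C_1 the set of all even integers: codewords of one sub-codebook lie 2 apart, while the union codebook has spacing 1, exactly the coset structure described above.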

4 Least significant bit (LSB) watermarking

Modify every value of the work so that its least significant bit encodes a binary message:
- Original work: each pixel value v (8 bits)
- Message: each value m = 0 or m = 1
- Watermarked value: v_m = (v & ~1) | m

- Changes each value by at most 1 unit (out of 255); the difference is usually imperceptible.
- On average, only 50% of the bit values are actually changed.
- LSB embedding is fragile against channel noise or a global re-scaling.
- Scalar quantization: each pixel value is watermarked independently (i.e. the channel rate may be significantly lower than with the n-dimensional ICS quantization).
- The LSB effect may be visible in uniform areas (e.g., sky).
- Widely used for content authentication and steganography!
- Why use only one LSB? Why assume independent pixels?
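
A sketch of whole-image LSB embedding with the bit operation from the slide (assuming an 8-bit grayscale array and a message bit array of the same shape):

    import numpy as np

    def lsb_embed(img: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """v_m = (v & ~1) | m for every pixel; 0xFE is the 8-bit mask ~1."""
        return (img & np.uint8(0xFE)) | bits.astype(np.uint8)

    def lsb_extract(img: np.ndarray) -> np.ndarray:
        return img & np.uint8(1)

    img = np.array([[200, 37], [64, 255]], dtype=np.uint8)
    bits = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    marked = lsb_embed(img, bits)                     # [[201, 36], [65, 255]]
    assert np.array_equal(lsb_extract(marked), bits)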

5 Watermarking Gaussian signals: a naïve approach

- Let Z be a Gaussian random variable, jointly normal with and uncorrelated with S (the cover work).
- Let E[Z^2] = P_e and let X = S + Z be the watermarked signal.
- Let n be sufficiently large. For each message m, choose a random codebook C_m over X with sufficiently many elements, so that the sub-codebook rate satisfies

    R(C_m) \equiv \tfrac{1}{n} \log_2 |C_m| \ge I(S;X) = \tfrac{1}{2} \log\frac{E[X^2]}{E[Z^2]}

- Following the proof of Shannon's theorem, for every signal s over S there is then a codeword x in C_m such that d(s, x) < E[Z^2] = P_e. In other words, the sub-codebooks are sufficiently dense. Moreover, for large n and R(C_m) = I(S;X), the expected distortion is E[d(s, x)] = E[Z^2] = P_e.
- Finally, let C be the union of all C_m. To transmit a message m for a given cover work s, the sender embeds the codeword x of C_m that is closest to s (it can always be found within the distance P_e from s).
- The channel noise is Gaussian with power E[N^2] = P_a; the received signal is Y = X + N. For unique decoding, the density of the union codebook must be sufficiently low: R(C) \le I(X;Y).
- Each message is thus encoded by multiple codewords, and the transmission rate boundary is

    R_1 \le I(X;Y) - I(S;X) = \tfrac{1}{2} \log\frac{P_e (P_s + P_e + P_a)}{P_a (P_s + P_e)}    (!)

- Random codebook constructed over the variable X = S + Z
- The result depends on the statistics of the works
- The rate is always below Shannon's limit (with equality for P_s = 0)
- The rate approaches 0 as P_s increases to ∞
- Can we do better than that?
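
A quick numeric check of the reconstructed bound (illustrative values; with P_e = P_a the naïve rate decays toward zero as the cover power grows, while the Shannon limit stays put):

    import math

    def naive_rate(Ps, Pe, Pa):
        return 0.5 * math.log2(Pe * (Ps + Pe + Pa) / (Pa * (Ps + Pe)))

    def shannon_limit(Pe, Pa):
        return 0.5 * math.log2(1 + Pe / Pa)

    Pe = Pa = 1.0
    for Ps in (0.0, 1.0, 10.0, 100.0):
        print(f"Ps={Ps:6.1f}  R1={naive_rate(Ps, Pe, Pa):.4f}  Shannon={shannon_limit(Pe, Pa):.4f}")
    # R1 = 0.5 bit at Ps = 0 (the Shannon limit), then 0.29, 0.063, 0.007, ...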

6 Costa's theorem

- Before, the codebook was constructed over X = S + Z. Re-write: X = (αS + Z) + (1 − α)S.
- Let us now construct the codebook over the new variable U_α = αS + Z, 0 < α < 1.
- As in the previous scheme, the received signal is Y = S + Z + N.
- By exactly the same reasoning as before (I(U_α;Y) bounds the number of codewords in the union codebook, I(S;U_α) counts the codewords in a sub-codebook):

    R \le R_\alpha = I(U_\alpha; Y) - I(S; U_\alpha)

Now compute the pieces (the variables S, Z, N are independent and uncorrelated):

    I(S; U_\alpha) = I(S; \alpha S + Z) = \tfrac{1}{2} \log\frac{E[S^2]\,E[(\alpha S + Z)^2]}{E[S^2]\,E[(\alpha S + Z)^2] - E[S(\alpha S + Z)]^2} = \tfrac{1}{2} \log\frac{P_s (\alpha^2 P_s + P_e)}{P_s (\alpha^2 P_s + P_e) - \alpha^2 P_s^2} = \tfrac{1}{2} \log\frac{\alpha^2 P_s + P_e}{P_e}

    I(U_\alpha; Y) = I(\alpha S + Z; S + Z + N) = \tfrac{1}{2} \log\frac{(P_a + P_s + P_e)(\alpha^2 P_s + P_e)}{P_a (P_e + \alpha^2 P_s) + P_s P_e (1 - \alpha)^2}

    R_\alpha = \tfrac{1}{2} \log\frac{P_e (P_s + P_e + P_a)}{P_a (P_e + \alpha^2 P_s) + P_s P_e (1 - \alpha)^2}

Find the optimum parameter α that maximizes the rate:

    \alpha^* = \arg\max_\alpha R_\alpha = \frac{P_e}{P_e + P_a}, \qquad R_{\alpha^*} = \tfrac{1}{2} \log\left(1 + \frac{P_e}{P_a}\right)

- The scheme is known as Distortion-Compensated Quantization (DCQ), a generalization of regular quantization.
- Ideal Costa scheme (ICS): the maximal rate is exactly as without (or with a known) cover work! (i.e. the cover work has no influence on the transmission rate)
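
The optimum can be checked numerically against the closed form (a sketch with illustrative parameters):

    import numpy as np

    def costa_rate(alpha, Ps, Pe, Pa):
        num = Pe * (Ps + Pe + Pa)
        den = Pa * (Pe + alpha**2 * Ps) + Ps * Pe * (1 - alpha)**2
        return 0.5 * np.log2(num / den)

    Ps, Pe, Pa = 100.0, 1.0, 1.0
    alphas = np.linspace(0.01, 0.99, 9801)
    best = alphas[np.argmax(costa_rate(alphas, Ps, Pe, Pa))]
    print(best, Pe / (Pe + Pa))                                      # both ~0.5
    print(costa_rate(best, Ps, Pe, Pa), 0.5 * np.log2(1 + Pe / Pa))  # both ~0.5 bit
    # the same Ps that crippled the naive scheme does not reduce the rate here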

7 LSB: quantization index modulation (= dither modulation)

Scalar quantization according to uniform sub-codebooks, m = 0, ..., M − 1:

    C_m = \{ (m + kM)\Delta : k \in \mathbb{Z} \}, \qquad C = \bigcup_m C_m = \{ k\Delta : k \in \mathbb{Z} \}

[Figure: the codewords of C_0, C_1, C_2, ... interleave on the real axis with spacing Δ; each sub-codebook repeats with period MΔ.]

- Embed message m into sample value s: find the nearest value x_m in the sub-codebook C_m.
- Equivalent to quantization: s_m = Q_{M\Delta;\, m/M}(s), with the quantizer Q_{A;d}(s) = (\lfloor s/A - d \rceil + d) \cdot A, where \lfloor \cdot \rceil denotes rounding to the nearest integer.
- Power of the quantization error: P_e = E[(s - Q_{A;d}(s))^2] = A^2/12; in the case of QIM, P_e = M^2 \Delta^2 / 12.
- Received signal y, syndrome detection: \hat m = \lfloor y / \Delta \rceil \bmod M.
- If the channel noise amplitude stays below Δ/2 (noise power below Δ²/12 for uniform noise), the WM system is error-free.

Distortion-compensated QIM, DC-QIM (= scalar Costa scheme, SCS):
- Fix a value α and quantize only the fraction α of the sample s:

    s_m = Q_{M\Delta;\, m/M}(\alpha s) + (1 - \alpha) s = \alpha \, Q_{M\Delta/\alpha;\, m/M}(s) + (1 - \alpha) s

- It is hard to find the optimal α for SCS analytically; [Eggers et al, 2003] report the experimental optimum (maximal rate) \alpha^*_{SCS} = \sqrt{P_e / (P_e + 2.71\, P_a)}.
- Expect: ICS performs better (has a higher rate) than SCS, and SCS is better than QIM.
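
A sketch of the scalar embedder/detector described above (α = 1 gives plain QIM / dither modulation; for α < 1 the residual (1 − α)(s − q) consumes part of the Δ/2 noise margin):

    import numpy as np

    def qim_embed(s, m, M, delta, alpha=1.0):
        """DC-QIM / SCS: s_m = alpha * Q_{M*delta/alpha; m/M}(s) + (1 - alpha) * s."""
        step = M * delta / alpha
        d = m / M
        q = (np.round(s / step - d) + d) * step       # the quantizer Q_{A;d}
        return alpha * q + (1 - alpha) * s

    def qim_detect(y, M, delta):
        """Syndrome of the nearest codeword in the union codebook."""
        return int(np.round(y / delta)) % M

    y = qim_embed(12.37, m=3, M=4, delta=0.5)         # -> 11.5, a codeword of C_3
    assert qim_detect(y, M=4, delta=0.5) == 3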

8 Performance of scalar watermarking schemes

[Figure from Eggers et al, 2003: achievable rate versus the watermark-to-noise ratio WNR = P_e/P_a, at a fixed document-to-watermark ratio DWR = P_s/P_e, comparing the ideal Costa scheme (ICS), the scalar Costa scheme (SCS), dither modulation (binary DM = LSB) and spread spectrum (SS: blind embedding, the work acts as Gaussian noise).]

NB: a systematic gap remains due to 1D scalar quantization versus true n-dimensional vector quantization.

9 Lattice codes

- Costa's theorem assumes that the dimensionality (message length) n → ∞.
- Practical embedders break the work into blocks and embed a short message (n ~ 1) into each block.
- We want the embedding/detection speed of a scalar scheme, but a better rate.

Fundamental tradeoff between robustness and encoding cost:
- Code separation: need wide spacing between codewords.
- Coset formation: need the codewords of different messages to be closely spaced.
- A dirty-paper code must allow distortion compensation to balance between the two.
- Direct binning (sample-based quantization): rate-efficient, but computationally too expensive.

Lattice (= linear) dirty-paper codes:
- Dirty-paper code: (C, {C_0, C_1, ..., C_{M−1}})
- Each C_i (and C) is linear w.r.t. Z: c_1, c_2 ∈ C ⇒ a c_1 + b c_2 ∈ C for all a, b ∈ Z
- C is generated by a finite number of elements
- Each C_i is a subset of C

Example: quincunx code (see the sketch below)
- Codebooks: (C, {C_0, C_1}); rate = 0.5 (encodes 2 messages)
- Non-separable, linear (a 2D vector quantizer)
- Two cosets: the red and green nodes of the quincunx pattern
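
A minimal sketch of a quincunx coset quantizer, taking C = Z² and the two cosets as the even/odd checkerboard, C_m = {(i, j) ∈ Z² : (i + j) mod 2 = m} (one concrete realization of the quincunx pattern; the coset labeling is an assumption):

    import numpy as np

    def quincunx_embed(s, m):
        """Quantize s in R^2 to the nearest point of coset C_m."""
        x = np.round(s)
        if int(x.sum()) % 2 != m:
            err = s - x
            k = int(np.argmax(np.abs(err)))            # cheapest coordinate to adjust
            x[k] += np.sign(err[k]) if err[k] != 0 else 1.0
        return x

    def quincunx_detect(y):
        return int(np.round(y).sum()) % 2

    y = quincunx_embed(np.array([2.3, 0.8]), m=0)      # -> [3., 1.], coordinate sum even
    assert quincunx_detect(y) == 0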

10 Lattice types

- Orthogonal lattices: a set of n orthogonal vectors; equivalent to a rotated and shifted scalar quantization.
- Non-orthogonal lattices: any set of n non-coplanar vectors (the rank of the base matrix is n).

Important properties of lattices:
- Quantization error: P_e = E[\| \vec x - Q_\Lambda(\vec x) \|^2]
- Voronoi cell: all points that are closer to the given node than to any other node.
- Sphere packing density: the ratio of the volume of one sphere to the volume of the Voronoi cell.
- Kissing number: how many spheres touch a given sphere.
- Number of deep holes: points with maximal distance from the nodes inside one primary cell. By shifting the lattice to a deep hole, one obtains another coset.

[Figure: the orthogonal Z² lattice vs the hexagonal A₂ lattice, with Voronoi cells and deep holes marked.]

The best lattices by dimension (E_8 is the one used in practice):

    Dimension:               1  2    3    4    5    6    7    8
    Best P_e:                Z  A_2  A_3  D_4  D_5  E_6  E_7  E_8
    Densest packing:         Z  A_2  A_3  D_4  D_5  E_6  E_7  E_8
    Largest kissing number:  Z  A_2  D_3  D_4  D_5  E_6  E_7  E_8
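
A Monte Carlo sketch of the quantization-error comparison behind this table, for Z² versus the hexagonal A₂ lattice at equal point density (a brute-force nearest-point search over neighbouring basis coefficients; the expected per-dimension errors 1/12 ≈ 0.0833 and 5/(36√3) ≈ 0.0802 are standard values):

    import numpy as np

    def nearest_lattice_point(x, B):
        """Nearest point of {B @ k : k in Z^2}: round the coefficients, refine over a 3x3 neighbourhood."""
        k0 = np.round(np.linalg.solve(B, x))
        cands = [B @ (k0 + [di, dj]) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
        return min(cands, key=lambda p: np.sum((x - p) ** 2))

    def per_dim_error(B, n=20000, rng=np.random.default_rng(0)):
        B = B / np.sqrt(abs(np.linalg.det(B)))         # unit cell volume for a fair comparison
        pts = rng.uniform(-50, 50, size=(n, 2))
        return np.mean([np.sum((x - nearest_lattice_point(x, B)) ** 2) for x in pts]) / 2

    Z2 = np.eye(2)
    A2 = np.array([[1.0, 0.5], [0.0, np.sqrt(3) / 2]])
    print(per_dim_error(Z2))   # ~0.0833 = 1/12
    print(per_dim_error(A2))   # ~0.0802: the hexagonal lattice quantizes slightly better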

11 Practical encoding with the E_8 lattice

E_8 lattice: E_8 = D_8 \cup (D_8 + \tfrac{1}{2}(1,1,1,1,1,1,1,1)), where D_8 = \{ x \in \mathbb{Z}^8 : (x_1 + ... + x_8) \bmod 2 = 0 \}.
15 deep holes ⇒ one can build a dirty-paper code with 16 messages (4 bits).

WM embedding steps:
1. Trellis-encode the incoming message x (error correction) to obtain the embedded message x*.
2. Divide x* into N blocks of 4 bits; divide the work into at least N blocks.
3. Correlate each block with 2 reference patterns w_1 and w_2 ⇒ a vector v of length 2N.
4. Using the E_8 lattice and some distortion compensation parameter α, embed each 4 bits of x* into the corresponding 8 vector coordinates.
5. Modify each block of the work to embed the WM.

WM detection steps:
1. Project the image w_n onto a 2N-dimensional vector v_r, as in the embedder.
2. For every 8 dimensions of v_r, attempt to quantize with all 16 possible 4-bit messages.
3. The quantizer that results in the smallest distortion indicates the trellis-coded message.
4. Apply the Viterbi decoder to extract the embedded message x_d.

[Figure from Cox et al, 2009: message errors (in %) versus the amount of noise (in standard deviations), comparing an orthogonal lattice with the E_8 lattice.]
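
The quantization step relies on finding the nearest E_8 point; below is a sketch of the standard nearest-point algorithm for D_n and E_8 (Conway-Sloane style: quantize to both cosets, keep the closer candidate):

    import numpy as np

    def nearest_Dn(x):
        """Nearest point of D_n = {k in Z^n : sum(k) even}."""
        f = np.round(x)
        if int(f.sum()) % 2 != 0:
            k = int(np.argmax(np.abs(x - f)))      # coordinate with the largest rounding error
            f[k] += 1.0 if x[k] > f[k] else -1.0   # move it to the second-nearest integer
        return f

    def nearest_E8(x):
        """Nearest point of E8 = D8 u (D8 + 1/2): the better of the two coset candidates."""
        h = np.full(8, 0.5)
        c1 = nearest_Dn(x)
        c2 = nearest_Dn(x - h) + h
        return c1 if np.sum((x - c1) ** 2) <= np.sum((x - c2) ** 2) else c2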

12 Lattice codes and valumetric scaling

- Lattice codes are simple, robust (against AWGN), and may approach the full ICS rate of the dirty-paper channel.
- However, they do not survive a global re-scaling of the work's values!

Possible solutions:
- Scaling-invariant marking space (ideally: a real perceptual space). One option: embed the WM into the phase coefficients of the Fourier-transformed work; this works well for images but does not work for audio signals.
- Rational dither modulation: adjust the quantization step based on some features of the work (see the sketch below). Example: use some DCT coefficients (not used for watermarking) to determine the quantization steps for the following coefficients (where the WM is embedded); this can be used together with perceptual adaptation (cf. next lecture).
- Invert the scaling based on a separately embedded pilot signal, or determine the scaling parameters from the message-carrying WM itself (given enough data).
- Dirty-paper trellis codes: use the angles between codewords rather than correlations.
  - Spherical codes: the lattice lies on the surface of the unit sphere, with the corresponding distance function.
  - Modified trellis: additional arcs, multiple paths representing the same message.
  - Encoder: find the optimal path within a single coset (corresponding to the message being sent).
  - Decoder: Viterbi algorithm, find the best path in the union of all cosets.
  - Problem: it is not so easy to generate good spherical codewords (random vectors do not work well for moderate dimensions).
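
A gain-invariant QIM sketch in the spirit of rational dither modulation: the step for sample k is derived from previously watermarked samples, so a global gain g scales the signal and the step together and the syndromes are preserved (an illustrative scheme, not the exact published RDM; the start-up history and constants are assumptions):

    import numpy as np

    def rdm_embed(s, bits, M=2, beta=0.2, L=10):
        y = np.empty_like(s)
        hist = np.ones(L)                              # arbitrary start-up history (assumption)
        for k, (sk, m) in enumerate(zip(s, bits)):
            step = M * beta * np.mean(np.abs(hist))    # step follows the watermarked signal
            y[k] = (np.round(sk / step - m / M) + m / M) * step
            hist = np.roll(hist, -1); hist[-1] = y[k]
        return y

    def rdm_detect(y, M=2, beta=0.2, L=10):
        out, hist = [], np.ones(L)
        for yk in y:
            delta = beta * np.mean(np.abs(hist))
            out.append(int(np.round(yk / delta)) % M)
            hist = np.roll(hist, -1); hist[-1] = yk
        return out
    # after y -> g * y, both yk and delta scale by g, so (apart from the first few
    # samples affected by the fixed start-up history) the syndromes survive the scaling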

13 Characterization of errors in watermarking systems

Message errors (incorrect decoding):
- Measure: the bit error rate (BER)
- Protection: error-correcting codes

False positive (FP) errors: FP error rate / probability
- Depend on the detector
- Random-WM FP: one of many WMs is detected in a work
- Random-work FP: a single WM is detected in many works

False negative (FN) errors: FN rate or probability
- Depend on the embedder, the channel and the detector
- Related to effectiveness (no channel attack), robustness (normal processing distortions) and security (hostile attacks)

[Figure: histograms of the detection value for unmarked and watermarked works; sliding the detection threshold (points 1, 2, 3) trades the FP rate against the FN rate and traces out the receiver operating characteristic (ROC) curve, FN rate versus FP rate.]

- The ROC curve shows the tradeoff between FP and FN errors.
- When too little data exists to estimate one or both axes, an interpolation model is needed (behavior near zero).
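
A small sketch of how such an ROC curve is traced from detection values (an assumed Gaussian model for the two score distributions, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    score_no_wm = rng.normal(0.0, 1.0, 100_000)   # detection values on unmarked works
    score_wm    = rng.normal(3.0, 1.0, 100_000)   # detection values on watermarked works

    thresholds = np.linspace(-2.0, 6.0, 200)
    fp = np.array([(score_no_wm > t).mean() for t in thresholds])   # false positive rate
    fn = np.array([(score_wm   <= t).mean() for t in thresholds])   # false negative rate
    # each (fp[i], fn[i]) pair is one point of the ROC curve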

14 Practical watermarking: the whitening filter

- The analysis of dirty-paper codes assumed that works are normally distributed (AWGN).
- We know that image pixels do have strong correlations (statistics of the 2nd order).
- One may use a known correlation matrix to perform PCA whitening before embedding/detection.
- Otherwise, a poor man's solution: replace the linear correlation z_LC(c, w) by z_LC(c * f, w * f), where f is a whitening filter and * denotes convolution.
- E.g., the delta-filter f = (−1, 1) removes horizontal correlations between adjacent pixels.
- If the correlation matrix R of the distribution is known, then f is the middle row of R^{-1}.
- If the filter is computed for a wrong distribution, it may instead introduce new correlations; the unwanted effects are most prominent in the tails (low-probability regions).
- Whitening has the opposite effect on random-WM and random-work errors: works and WM codewords are usually drawn from different distributions with different correlations.
- Often used in practical watermarking schemes!
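
A sketch of the poor man's whitening on a strongly correlated 1D "work" (a random walk stands in for a correlated pixel row; the values are illustrative):

    import numpy as np

    def lin_corr(a, b):
        return float(np.dot(a, b)) / len(a)

    rng = np.random.default_rng(2)
    work = np.cumsum(rng.standard_normal(10_000)) * 0.1   # strongly correlated cover work
    wm = rng.standard_normal(10_000)                      # white watermark pattern
    marked = work + wm

    f = np.array([-1.0, 1.0])                             # delta whitening filter
    print(lin_corr(marked, wm))                           # host term adds a large noisy offset
    print(lin_corr(np.convolve(marked, f, mode="valid"),
                   np.convolve(wm, f, mode="valid")))     # filtered host is nearly white and weak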
