Basic information theory

Communication system performance is limited by
- available signal power,
- background noise,
- bandwidth limits.

Can we postulate an ideal system based on physical principles, against which we can assess actual systems?

The role of a communication system is to convey, from transmitter to receiver, a sequence of messages selected from a finite number of possible messages. Within a specified time interval one of these messages is transmitted, during the next another (maybe the same one), and so on.

- The messages are predetermined and known by the receiver.
- The message selected for transmission during a particular interval is not known by the receiver.
- The receiver knows the (different) probabilities for the selection of each message by the transmitter.

The job of the receiver is therefore not to answer the question "What was the message?", but rather "Which one?".

1 Amount of information

Suppose the allowable messages (or symbols) are $m_1, m_2, \ldots$, and that they have probabilities of occurrence $p_1, p_2, \ldots$.

The transmitter selects message $m_k$ with probability $p_k$. (The complete set of symbols $\{m_1, m_2, \ldots\}$ is called the alphabet.) If the receiver correctly identifies the message, then an amount of information $I_k$ given by

$I_k = \log_2 \frac{1}{p_k}$

has been conveyed. $I_k$ is dimensionless, but is measured in bits.

Given two equiprobable symbols $m_1$ and $m_2$: then $p_1 = 1/2$, so correctly identifying $m_1$ carries $I_1 = \log_2 2 = 1$ bit of information. Similarly, the correct transmission of $m_2$ carries 1 bit of information.

Sometimes "bit" is used as an abbreviation for the term binary digit. If in the above example $m_1$ is represented by a 0 and $m_2$ by a 1, then we see that one binary digit carries 1 bit of information. This is not always the case, so we use the term binit for binary digit when confusion can occur. Suppose the binits 0 and 1 occur with probabilities 1/4 and 3/4 respectively. Then binit 0 carries $\log_2 4 = 2$ bits of information, and binit 1 carries $\log_2(4/3) = 0.42$ bits.

The definition of information satisfies a number of useful criteria:
- It is intuitive: the occurrence of a highly probable event carries little information ($I_k = 0$ for $p_k = 1$).
- It is positive: information may not decrease upon receiving a message ($I_k \geq 0$ for $0 \leq p_k \leq 1$).
- We gain more information when a less probable message is received ($I_k > I_l$ for $p_k < p_l$).
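The values above are easy to verify numerically. The following is a minimal sketch (the function name and the printed sample values are illustrative, not part of the notes):

```python
import math

def information_bits(p):
    """Self-information I = log2(1/p), in bits, of a message with probability p."""
    return math.log2(1 / p)

# Two equiprobable symbols: each carries exactly 1 bit.
print(information_bits(1 / 2))            # 1.0

# Binits 0 and 1 occurring with probabilities 1/4 and 3/4.
print(information_bits(1 / 4))            # 2.0
print(round(information_bits(3 / 4), 2))  # 0.42
```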

Information is additive if the messages are independent:

$I_{k,l} = \log_2 \frac{1}{p_k p_l} = \log_2 \frac{1}{p_k} + \log_2 \frac{1}{p_l} = I_k + I_l.$

2 Average information: entropy

Suppose we have M different independent messages (as before), and that a long sequence of L messages is generated. In the L-message sequence we expect $p_1 L$ occurrences of $m_1$, $p_2 L$ of $m_2$, etc. The total information in the sequence is

$I_{\text{total}} = p_1 L \log_2 \frac{1}{p_1} + p_2 L \log_2 \frac{1}{p_2} + \cdots$

so the average information per message interval will be

$H = \frac{I_{\text{total}}}{L} = p_1 \log_2 \frac{1}{p_1} + p_2 \log_2 \frac{1}{p_2} + \cdots = \sum_{k=1}^{M} p_k \log_2 \frac{1}{p_k}.$

This average information is referred to as the entropy.

Consider the case of just two messages with probabilities $p$ and $(1-p)$. This is called a binary memoryless source, since the successive symbols emitted are statistically independent. The average information per message is

$H = p \log_2 \frac{1}{p} + (1-p) \log_2 \frac{1}{1-p}.$
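A short sketch of the entropy sum and of the binary memoryless source (helper names and the sample probabilities are illustrative):

```python
import math

def entropy(probs):
    """Average information H = sum of p_k * log2(1/p_k), in bits per message."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

def binary_entropy(p):
    """Binary memoryless source with symbol probabilities p and 1 - p."""
    return entropy([p, 1 - p])

print(binary_entropy(0.5))            # 1.0 bit/message (the maximum)
print(round(binary_entropy(0.1), 3))  # 0.469 bits/message
```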

(Figure: plot of the binary entropy H against p.)

The average information is maximised for $p = 1/2$, with a corresponding entropy of 1 bit/message. In general, it can be proved that for M messages the entropy H becomes maximum when all messages are equally likely. Then each message has probability $1/M$, and the entropy is

$H_{\max} = \sum_{k=1}^{M} \frac{1}{M} \log_2 M = \log_2 M.$

3 Information rate

If the source of the messages generates messages at the rate r per second, then the information rate is defined to be

$R = r H$ = average number of bits of information per second.
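The equally-likely case and the rate definition can be checked with a small sketch (M and the message rate r below are arbitrary illustrative values):

```python
import math

# M equally likely messages: the entropy reaches its maximum log2(M).
M = 4
H_uniform = sum((1 / M) * math.log2(M) for _ in range(M))
print(H_uniform, math.log2(M))   # 2.0  2.0

# Information rate R = r * H for a source emitting r messages per second.
r = 1000                         # illustrative rate, messages/s
print(r * H_uniform)             # 2000.0 bits of information per second
```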

An analogue signal is band limited to B Hz, sampled at the Nyquist rate, and the samples quantised to 4 levels. The quantisation levels $Q_1$, $Q_2$, $Q_3$ and $Q_4$ (messages) are assumed independent, and occur with probabilities $p_1 = p_4 = 1/8$ and $p_2 = p_3 = 3/8$.

The average information per symbol at the source is

$H = \sum_{k=1}^{4} p_k \log_2 \frac{1}{p_k} = \frac{1}{8} \log_2 8 + \frac{3}{8} \log_2 \frac{8}{3} + \frac{3}{8} \log_2 \frac{8}{3} + \frac{1}{8} \log_2 8 = 1.8$ bits/message.

The information rate R is

$R = r H = 2B(1.8) = 3.6B$ bits/s.

Note that we could identify each message by a 2-digit binary code:

Message   Probability   Binary code
$Q_1$     1/8           0 0
$Q_2$     3/8           0 1
$Q_3$     3/8           1 0
$Q_4$     1/8           1 1

If we transmit 2B messages per second, we will be transmitting 4B binits per second (since each message requires 2 binits). Since each binit should be able to carry 1 bit of information, we should be able to transmit 4B bits of information using 4B binits. From the example we see that we are only transmitting 3.6B bits. We are therefore not taking full advantage of the ability of binary PCM to convey information. One remedy is to select different quantisation levels, such that each level is equally likely. Other methods involve the use of more complicated code assignments.
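The numbers in this example can be reproduced directly (a sketch; B is kept symbolic by printing only its coefficient, and the efficiency figure H/2 is an added check, not from the notes):

```python
import math

# Quantiser levels Q1..Q4 with the probabilities from the example.
probs = [1/8, 3/8, 3/8, 1/8]
H = sum(p * math.log2(1 / p) for p in probs)
print(round(H, 2))                     # 1.81 bits/message (quoted as 1.8 above)

# Nyquist-rate sampling gives r = 2B messages/s, so R = 2B * H.
print(round(2 * H, 2), "* B bits/s")   # about 3.62 * B, i.e. roughly 3.6B bits/s

# A fixed 2-binit code uses 2 binits per message, so its efficiency is H / 2.
print(round(H / 2, 2))                 # 0.91
```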